id
stringlengths 9
10
| submitter
stringlengths 2
52
⌀ | authors
stringlengths 4
6.51k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
345
⌀ | doi
stringlengths 11
120
⌀ | report-no
stringlengths 2
243
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | abstract
stringlengths 33
3.33k
| versions
list | update_date
timestamp[s] | authors_parsed
list | prediction
stringclasses 1
value | probability
float64 0.95
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.09442
|
Jiahuan Zhang
|
Yizhen Luo, Jiahuan Zhang, Siqi Fan, Kai Yang, Yushuai Wu, Mu Qiao,
Zaiqing Nie
|
BioMedGPT: Open Multimodal Generative Pre-trained Transformer for
BioMedicine
|
12 pages, 4 figures
| null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Foundation models (FMs) have exhibited remarkable performance across a wide
range of downstream tasks in many domains. Nevertheless, general-purpose FMs
often face challenges when confronted with domain-specific problems, due to
their limited access to the proprietary training data in a particular domain.
In biomedicine, there are various biological modalities, such as molecules,
proteins, and cells, which are encoded by the language of life and exhibit
significant modality gaps with human natural language. In this paper, we
introduce BioMedGPT, an open multimodal generative pre-trained transformer
(GPT) for biomedicine, to bridge the gap between the language of life and human
natural language. BioMedGPT allows users to easily ``communicate'' with diverse
biological modalities through free text, which is the first of its kind.
BioMedGPT aligns different biological modalities with natural language via a
large generative language model, namely, BioMedGPT-LM. We publish
BioMedGPT-10B, which unifies the feature spaces of molecules, proteins, and
natural language via encoding and alignment. Through fine-tuning, BioMedGPT-10B
outperforms or is on par with human and significantly larger general-purpose
foundation models on the biomedical QA task. It also demonstrates promising
performance in the molecule QA and protein QA tasks, which could greatly
accelerate the discovery of new drugs and therapeutic targets. In addition,
BioMedGPT-LM-7B is the first large generative language model based on Llama2 in
the biomedical domain, therefore is commercial friendly. Both BioMedGPT-10B and
BioMedGPT-LM-7B are open-sourced to the research community. In addition, we
publish the datasets that are meticulously curated for the alignment of
multi-modalities, i.e., PubChemQA and UniProtQA. All the models, codes, and
datasets are available at \url{https://github.com/PharMolix/OpenBioMed}.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 10:14:35 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 07:49:37 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Luo",
"Yizhen",
""
],
[
"Zhang",
"Jiahuan",
""
],
[
"Fan",
"Siqi",
""
],
[
"Yang",
"Kai",
""
],
[
"Wu",
"Yushuai",
""
],
[
"Qiao",
"Mu",
""
],
[
"Nie",
"Zaiqing",
""
]
] |
new_dataset
| 0.999353 |
2308.09719
|
Shusaku Egami
|
Shusaku Egami, Yasunori Yamamoto, Ikki Ohmukai, Takashi Okumura
|
CIRO: COVID-19 infection risk ontology
|
18 pages, 8 figures, and this paper has been accepted by PLOS ONE
|
PLoS One, 18(3), e0282291, 2023
|
10.1371/journal.pone.0282291
| null |
cs.AI cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Public health authorities perform contact tracing for highly contagious
agents to identify close contacts with the infected cases. However, during the
pandemic caused by coronavirus disease 2019 (COVID-19), this operation was not
employed in countries with high patient volumes. Meanwhile, the Japanese
government conducted this operation, thereby contributing to the control of
infections, at the cost of arduous manual labor by public health officials. To
ease the burden of the officials, this study attempted to automate the
assessment of each person's infection risk through an ontology, called COVID-19
Infection Risk Ontology (CIRO). This ontology expresses infection risks of
COVID-19 formulated by the Japanese government, toward automated assessment of
infection risks of individuals, using Resource Description Framework (RDF) and
SPARQL (SPARQL Protocol and RDF Query Language) queries. For evaluation, we
demonstrated that the knowledge graph built could infer the risks, formulated
by the government. Moreover, we conducted reasoning experiments to analyze the
computational efficiency. The experiments demonstrated usefulness of the
knowledge processing, and identified issues left for deployment.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 11:12:09 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Egami",
"Shusaku",
""
],
[
"Yamamoto",
"Yasunori",
""
],
[
"Ohmukai",
"Ikki",
""
],
[
"Okumura",
"Takashi",
""
]
] |
new_dataset
| 0.999237 |
2308.09722
|
Mst Akter
|
Mst Shapna Akter, Hossain Shahriar, Alfredo Cuzzocrea
|
A Trustable LSTM-Autoencoder Network for Cyberbullying Detection on
Social Media Using Synthetic Data
|
arXiv admin note: text overlap with arXiv:2303.07484
| null | null | null |
cs.LG cs.CL cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Social media cyberbullying has a detrimental effect on human life. As online
social networking grows daily, the amount of hate speech also increases. Such
terrible content can cause depression and actions related to suicide. This
paper proposes a trustable LSTM-Autoencoder Network for cyberbullying detection
on social media using synthetic data. We have demonstrated a cutting-edge
method to address data availability difficulties by producing
machine-translated data. However, several languages such as Hindi and Bangla
still lack adequate investigations due to a lack of datasets. We carried out
experimental identification of aggressive comments on Hindi, Bangla, and
English datasets using the proposed model and traditional models, including
Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM),
LSTM-Autoencoder, Word2vec, Bidirectional Encoder Representations from
Transformers (BERT), and Generative Pre-trained Transformer 2 (GPT-2) models.
We employed evaluation metrics such as f1-score, accuracy, precision, and
recall to assess the models performance. Our proposed model outperformed all
the models on all datasets, achieving the highest accuracy of 95%. Our model
achieves state-of-the-art results among all the previous works on the dataset
we used in this paper.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 17:20:05 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Akter",
"Mst Shapna",
""
],
[
"Shahriar",
"Hossain",
""
],
[
"Cuzzocrea",
"Alfredo",
""
]
] |
new_dataset
| 0.999088 |
2308.09837
|
Viktor T. Toth
|
Viktor T. Toth
|
Field theory with the Maxima computer algebra system
|
6 pages
| null | null | null |
cs.SC gr-qc physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Maxima computer algebra system, the open-source successor to MACSYMA, the
first general-purpose computer algebra system that was initially developed at
the Massachusetts Institute of Technology in the late 1960s and later
distributed by the United States Department of Energy, has some remarkable
capabilities, some of which are implemented in the form of add-on, "share"
packages that are distributed along with the core Maxima system. One such share
package is itensor, for indicial tensor manipulation. One of the more
remarkable features of itensor is functional differentiation. Through this, it
is possible to use itensor to develop a Lagrangian field theory and derive the
corresponding field equations. In the present note, we demonstrate this
capability by deriving Maxwell's equations from the Maxwell Lagrangian, and
exploring the properties of the system, including current conservation.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 22:12:18 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Toth",
"Viktor T.",
""
]
] |
new_dataset
| 0.997475 |
2308.09840
|
Daniel Drew
|
C. Luke Nelson, Daniel S. Drew
|
High Aspect Ratio Multi-stage Ducted Electroaerodynamic Thrusters for
Micro Air Vehicle Propulsion
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electroaerodynamic propulsion, where force is produced through collisions
between electrostatically accelerated ions and neutral air molecules, is an
attractive alternative to propeller- and flapping wing-based methods for micro
air vehicle (MAV) flight due to its silent and solid-state nature. One major
barrier to adoption is its limited thrust efficiency at useful disk loading
levels. Ducted actuators comprising multiple serially-integrated acceleration
stages are a potential solution, allowing individual stages to operate at
higher efficiency while maintaining a useful total thrust, and potentially
improving efficiency through various aerodynamic and fluid dynamic mechanisms.
In this work, we investigate the effects of duct and emitter electrode
geometries on actuator performance, then show how a combination of increasing
cross-sectional aspect ratio and serial integration of multiple stages can be
used to produce overall thrust densities comparable to commercial propulsors.
An optimized five-stage device attains a thrust density of about 18 N/m$^2$ at
a thrust efficiency of about 2 mN/W, among the highest values ever measured at
this scale. We further show how this type of thruster can be integrated under
the wings of a MAV-scale fixed wing platform, pointing towards future use as a
distributed propulsion system.
|
[
{
"version": "v1",
"created": "Fri, 18 Aug 2023 22:22:05 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Nelson",
"C. Luke",
""
],
[
"Drew",
"Daniel S.",
""
]
] |
new_dataset
| 0.965587 |
2308.09866
|
Junyan Su
|
Junyan Su, Qiulin Lin, Minghua Chen, Haibo Zeng
|
Minimizing Carbon Footprint for Timely E-Truck Transportation: Hardness
and Approximation Algorithm
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Carbon footprint optimization (CFO) is important for sustainable heavy-duty
e-truck transportation. We consider the CFO problem for timely transportation
of e-trucks, where the truck travels from an origin to a destination across a
national highway network subject to a deadline. The goal is to minimize the
carbon footprint by orchestrating path planning, speed planning, and
intermediary charging planning. We first show that it is NP-hard even just to
find a feasible CFO solution. We then develop a $(1+\epsilon_F,
1+\epsilon_\beta)$ bi-criteria approximation algorithm that achieves a carbon
footprint within a ratio of $(1+\epsilon_F)$ to the minimum with no deadline
violation and at most a ratio of $(1+\epsilon_\beta)$ battery capacity
violation (for any positive $\epsilon_F$ and $\epsilon_\beta$). Its time
complexity is polynomial in the size of the highway network, $1/\epsilon_F$,
and $1/\epsilon_\beta$. Such algorithmic results are among the best possible
unless P=NP. Simulation results based on real-world traces show that our scheme
reduces up to 11\% carbon footprint as compared to baseline alternatives
considering only energy consumption but not carbon footprint.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 00:59:17 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Su",
"Junyan",
""
],
[
"Lin",
"Qiulin",
""
],
[
"Chen",
"Minghua",
""
],
[
"Zeng",
"Haibo",
""
]
] |
new_dataset
| 0.998802 |
2308.09926
|
Yong Niu
|
Yunhan Ma, Yong Niu, Shiwen Mao, Zhu Han, Ruisi He, Zhangdui Zhong,
Ning Wang, Bo Ai
|
Robust Train-to-Train Transmission Scheduling in mmWave Band for High
Speed Train Communication Systems
|
14 pages
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Demands for data traffic in high-speed railway (HSR) has increased
drastically. The increasing entertainment needs of passengers, safety control
information exchanges of trains, etc., make train-to-train (T2T) communications
face the challenge of achieving high-capacity and high-quality data
transmissions. In order to greatly increase the communication capacity, it is
urgent to introduce millimeter wave (mmWave) technology. Faced with the problem
that mmWave link is easy to be blocked, this paper leverages the existing
equipment to assist relay, and proposes an effective transmission scheduling
scheme to improve the robustness of T2T communication systems. First of all, we
formulate a mixed integer nonlinear programming (MINLP) optimization problem
the transmission scheduling in T2T communication systems where mobile relays
(MRs) are all working in the full-duplex (FD) mode. Then we propose a low
complexity heuristic algorithm to solve the optimization problem, which
consists of three components: relay selection, transmission mode selection, and
transmission scheduling. The simulation results show that the proposed
algorithm can greatly improve the number of completed flows and system
throughput. Finally, we analyze the influence of different design parameters on
the system performance. The results show that the proposed algorithm can
achieve more data flows and system throughput within a reasonable communication
distance threshold in T2T communication with obstacles in different orbits. It
can balance the computational complexity and system performance to achieve an
efficient and robust data transmission.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 06:52:33 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ma",
"Yunhan",
""
],
[
"Niu",
"Yong",
""
],
[
"Mao",
"Shiwen",
""
],
[
"Han",
"Zhu",
""
],
[
"He",
"Ruisi",
""
],
[
"Zhong",
"Zhangdui",
""
],
[
"Wang",
"Ning",
""
],
[
"Ai",
"Bo",
""
]
] |
new_dataset
| 0.989982 |
2308.09954
|
Suhang Wu
|
Suhang Wu, Minlong Peng, Yue Chen, Jinsong Su, Mingming Sun
|
Eva-KELLM: A New Benchmark for Evaluating Knowledge Editing of LLMs
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) possess a wealth of knowledge encoded in their
parameters. However, this knowledge may become outdated or unsuitable over
time. As a result, there has been a growing interest in knowledge editing for
LLMs and evaluating its effectiveness. Existing studies primarily focus on
knowledge editing using factual triplets, which not only incur high costs for
collection but also struggle to express complex facts. Furthermore, these
studies are often limited in their evaluation perspectives. In this paper, we
propose Eva-KELLM, a new benchmark for evaluating knowledge editing of LLMs.
This benchmark includes an evaluation framework and a corresponding dataset.
Under our framework, we first ask the LLM to perform knowledge editing using
raw documents, which provides a more convenient and universal approach compared
to using factual triplets. We then evaluate the updated LLM from multiple
perspectives. In addition to assessing the effectiveness of knowledge editing
and the retention of unrelated knowledge from conventional studies, we further
test the LLM's ability in two aspects: 1) Reasoning with the altered knowledge,
aiming for the LLM to genuinely learn the altered knowledge instead of simply
memorizing it. 2) Cross-lingual knowledge transfer, where the LLM updated with
raw documents in one language should be capable of handling queries from
another language. To facilitate further research, we construct and release the
corresponding dataset. Using this benchmark, we investigate the effectiveness
of several commonly-used knowledge editing methods. Experimental results
indicate that the current methods for knowledge editing using raw documents are
not effective in yielding satisfactory results, particularly when it comes to
reasoning with altered knowledge and cross-lingual knowledge transfer.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 09:17:19 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wu",
"Suhang",
""
],
[
"Peng",
"Minlong",
""
],
[
"Chen",
"Yue",
""
],
[
"Su",
"Jinsong",
""
],
[
"Sun",
"Mingming",
""
]
] |
new_dataset
| 0.997655 |
2308.09963
|
Marcel Grimmer
|
Marcel Grimmer, Christian Rathgeb, Raymond Veldhuis, Christoph Busch
|
NeutrEx: A 3D Quality Component Measure on Facial Expression Neutrality
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Accurate face recognition systems are increasingly important in sensitive
applications like border control or migration management. Therefore, it becomes
crucial to quantify the quality of facial images to ensure that low-quality
images are not affecting recognition accuracy. In this context, the current
draft of ISO/IEC 29794-5 introduces the concept of component quality to
estimate how single factors of variation affect recognition outcomes. In this
study, we propose a quality measure (NeutrEx) based on the accumulated
distances of a 3D face reconstruction to a neutral expression anchor. Our
evaluations demonstrate the superiority of our proposed method compared to
baseline approaches obtained by training Support Vector Machines on face
embeddings extracted from a pre-trained Convolutional Neural Network for facial
expression classification. Furthermore, we highlight the explainable nature of
our NeutrEx measures by computing per-vertex distances to unveil the most
impactful face regions and allow operators to give actionable feedback to
subjects.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 09:38:39 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Grimmer",
"Marcel",
""
],
[
"Rathgeb",
"Christian",
""
],
[
"Veldhuis",
"Raymond",
""
],
[
"Busch",
"Christoph",
""
]
] |
new_dataset
| 0.999106 |
2308.09972
|
Li Niu
|
Qingyang Liu, Jianting Wang, Li Niu
|
DESOBAv2: Towards Large-scale Real-world Dataset for Shadow Generation
|
arXiv admin note: text overlap with arXiv:2306.17358
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image composition refers to inserting a foreground object into a background
image to obtain a composite image. In this work, we focus on generating
plausible shadow for the inserted foreground object to make the composite image
more realistic. To supplement the existing small-scale dataset DESOBA, we
create a large-scale dataset called DESOBAv2 by using object-shadow detection
and inpainting techniques. Specifically, we collect a large number of outdoor
scene images with object-shadow pairs. Then, we use pretrained inpainting model
to inpaint the shadow region, resulting in the deshadowed images. Based on real
images and deshadowed images, we can construct pairs of synthetic composite
images and ground-truth target images. Dataset is available at
https://github.com/bcmi/Object-Shadow-Generation-Dataset-DESOBAv2.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 10:21:23 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Liu",
"Qingyang",
""
],
[
"Wang",
"Jianting",
""
],
[
"Niu",
"Li",
""
]
] |
new_dataset
| 0.999818 |
2308.09975
|
Liwen Zhang
|
Liwen Zhang, Weige Cai, Zhaowei Liu, Zhi Yang, Wei Dai, Yujie Liao,
Qianru Qin, Yifei Li, Xingyu Liu, Zhiqiang Liu, Zhoufan Zhu, Anbo Wu, Xin Guo
and Yun Chen
|
FinEval: A Chinese Financial Domain Knowledge Evaluation Benchmark for
Large Language Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have demonstrated exceptional performance in
various natural language processing tasks, yet their efficacy in more
challenging and domain-specific tasks remains largely unexplored. This paper
presents FinEval, a benchmark specifically designed for the financial domain
knowledge in the LLMs. FinEval is a collection of high-quality multiple-choice
questions covering Finance, Economy, Accounting, and Certificate. It includes
4,661 questions spanning 34 different academic subjects. To ensure a
comprehensive model performance evaluation, FinEval employs a range of prompt
types, including zero-shot and few-shot prompts, as well as answer-only and
chain-of-thought prompts. Evaluating state-of-the-art Chinese and English LLMs
on FinEval, the results show that only GPT-4 achieved an accuracy close to 70%
in different prompt settings, indicating significant growth potential for LLMs
in the financial domain knowledge. Our work offers a more comprehensive
financial knowledge evaluation benchmark, utilizing data of mock exams and
covering a wide range of evaluated LLMs.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 10:38:00 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zhang",
"Liwen",
""
],
[
"Cai",
"Weige",
""
],
[
"Liu",
"Zhaowei",
""
],
[
"Yang",
"Zhi",
""
],
[
"Dai",
"Wei",
""
],
[
"Liao",
"Yujie",
""
],
[
"Qin",
"Qianru",
""
],
[
"Li",
"Yifei",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Liu",
"Zhiqiang",
""
],
[
"Zhu",
"Zhoufan",
""
],
[
"Wu",
"Anbo",
""
],
[
"Guo",
"Xin",
""
],
[
"Chen",
"Yun",
""
]
] |
new_dataset
| 0.999839 |
2308.09980
|
Hongyu Hu
|
Yunwen Huang, Hongyu Hu, Ying Zhu, Yi Xu
|
Breast Lesion Diagnosis Using Static Images and Dynamic Video
|
Accepted by ISBI2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning based Computer Aided Diagnosis (CAD) systems have been
developed to treat breast ultrasound. Most of them focus on a single ultrasound
imaging modality, either using representative static images or the dynamic
video of a real-time scan. In fact, these two image modalities are
complementary for lesion diagnosis. Dynamic videos provide detailed
three-dimensional information about the lesion, while static images capture the
typical sections of the lesion. In this work, we propose a multi-modality
breast tumor diagnosis model to imitate the diagnosing process of radiologists,
which learns the features of both static images and dynamic video and explores
the potential relationship between the two modalities. Considering that static
images are carefully selected by professional radiologists, we propose to
aggregate dynamic video features under the guidance of domain knowledge from
static images before fusing multi-modality features. Our work is validated on a
breast ultrasound dataset composed of 897 sets of ultrasound images and videos.
Experimental results show that our model boosts the performance of
Benign/Malignant classification, achieving 90.0% in AUC and 81.7% in accuracy.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 11:09:58 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Huang",
"Yunwen",
""
],
[
"Hu",
"Hongyu",
""
],
[
"Zhu",
"Ying",
""
],
[
"Xu",
"Yi",
""
]
] |
new_dataset
| 0.987237 |
2308.09985
|
Hanzhuo Tan
|
Hanzhuo Tan, Chunpu Xu, Jing Li, Yuqun Zhang, Zeyang Fang, Zeyu Chen,
Baohua Lai
|
HICL: Hashtag-Driven In-Context Learning for Social Media Natural
Language Understanding
|
https://github.com/albertan017/HICL
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Natural language understanding (NLU) is integral to various social media
applications. However, existing NLU models rely heavily on context for semantic
learning, resulting in compromised performance when faced with short and noisy
social media content. To address this issue, we leverage in-context learning
(ICL), wherein language models learn to make inferences by conditioning on a
handful of demonstrations to enrich the context and propose a novel
hashtag-driven in-context learning (HICL) framework. Concretely, we pre-train a
model #Encoder, which employs #hashtags (user-annotated topic labels) to drive
BERT-based pre-training through contrastive learning. Our objective here is to
enable #Encoder to gain the ability to incorporate topic-related semantic
information, which allows it to retrieve topic-related posts to enrich contexts
and enhance social media NLU with noisy contexts. To further integrate the
retrieved context with the source text, we employ a gradient-based method to
identify trigger terms useful in fusing information from both sources. For
empirical studies, we collected 45M tweets to set up an in-context NLU
benchmark, and the experimental results on seven downstream tasks show that
HICL substantially advances the previous state-of-the-art results. Furthermore,
we conducted extensive analyzes and found that: (1) combining source input with
a top-retrieved post from #Encoder is more effective than using semantically
similar posts; (2) trigger words can largely benefit in merging context from
the source and retrieved posts.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 11:31:45 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Tan",
"Hanzhuo",
""
],
[
"Xu",
"Chunpu",
""
],
[
"Li",
"Jing",
""
],
[
"Zhang",
"Yuqun",
""
],
[
"Fang",
"Zeyang",
""
],
[
"Chen",
"Zeyu",
""
],
[
"Lai",
"Baohua",
""
]
] |
new_dataset
| 0.999526 |
2308.09987
|
Lin Shao
|
Bingyang Zhou, Haoyu Zhou, Tianhai Liang, Qiaojun Yu, Siheng Zhao,
Yuwei Zeng, Jun Lv, Siyuan Luo, Qiancai Wang, Xinyuan Yu, Haonan Chen, Cewu
Lu, and Lin Shao
|
ClothesNet: An Information-Rich 3D Garment Model Repository with
Simulated Clothes Environment
|
IEEE/CVF International Conference on Computer Vision (ICCV) 2023
| null | null | null |
cs.RO cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present ClothesNet: a large-scale dataset of 3D clothes objects with
information-rich annotations. Our dataset consists of around 4400 models
covering 11 categories annotated with clothes features, boundary lines, and
keypoints. ClothesNet can be used to facilitate a variety of computer vision
and robot interaction tasks. Using our dataset, we establish benchmark tasks
for clothes perception, including classification, boundary line segmentation,
and keypoint detection, and develop simulated clothes environments for robotic
interaction tasks, including rearranging, folding, hanging, and dressing. We
also demonstrate the efficacy of our ClothesNet in real-world experiments.
Supplemental materials and dataset are available on our project webpage.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 11:34:40 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zhou",
"Bingyang",
""
],
[
"Zhou",
"Haoyu",
""
],
[
"Liang",
"Tianhai",
""
],
[
"Yu",
"Qiaojun",
""
],
[
"Zhao",
"Siheng",
""
],
[
"Zeng",
"Yuwei",
""
],
[
"Lv",
"Jun",
""
],
[
"Luo",
"Siyuan",
""
],
[
"Wang",
"Qiancai",
""
],
[
"Yu",
"Xinyuan",
""
],
[
"Chen",
"Haonan",
""
],
[
"Lu",
"Cewu",
""
],
[
"Shao",
"Lin",
""
]
] |
new_dataset
| 0.999856 |
2308.09993
|
Hongwei Ren
|
Hongwei Ren, Yue Zhou, Haotian Fu, Yulong Huang, Renjing Xu, Bojun
Cheng
|
TTPOINT: A Tensorized Point Cloud Network for Lightweight Action
Recognition with Event Cameras
| null | null |
10.1145/3581783.3612258
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event cameras have gained popularity in computer vision due to their data
sparsity, high dynamic range, and low latency. As a bio-inspired sensor, event
cameras generate sparse and asynchronous data, which is inherently incompatible
with the traditional frame-based method. Alternatively, the point-based method
can avoid additional modality transformation and naturally adapt to the
sparsity of events. Still, it typically cannot reach a comparable accuracy as
the frame-based method. We propose a lightweight and generalized point cloud
network called TTPOINT which achieves competitive results even compared to the
state-of-the-art (SOTA) frame-based method in action recognition tasks while
only using 1.5 % of the computational resources. The model is adept at
abstracting local and global geometry by hierarchy structure. By leveraging
tensor-train compressed feature extractors, TTPOINT can be designed with
minimal parameters and computational complexity. Additionally, we developed a
straightforward downsampling algorithm to maintain the spatio-temporal feature.
In the experiment, TTPOINT emerged as the SOTA method on three datasets while
also attaining SOTA among point cloud methods on all five datasets. Moreover,
by using the tensor-train decomposition method, the accuracy of the proposed
TTPOINT is almost unaffected while compressing the parameter size by 55 % in
all five datasets.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 11:58:31 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ren",
"Hongwei",
""
],
[
"Zhou",
"Yue",
""
],
[
"Fu",
"Haotian",
""
],
[
"Huang",
"Yulong",
""
],
[
"Xu",
"Renjing",
""
],
[
"Cheng",
"Bojun",
""
]
] |
new_dataset
| 0.999238 |
2308.10024
|
Zicheng Ye
|
Zicheng Ye, Yuan Li, Huazi Zhang, Jun Wang, Guiying Yan and Zhiming Ma
|
On the Weight Distribution of Weights Less than $2w_{\min}$ in Polar
Codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number of low-weight codewords is critical to the performance of
error-correcting codes. In 1970, Kasami and Tokura characterized the codewords
of Reed-Muller (RM) codes whose weights are less than $2w_{\min}$, where
$w_{\min}$ represents the minimum weight. In this paper, we extend their
results to decreasing polar codes. We present the closed-form expressions for
the number of codewords in decreasing polar codes with weights less than
$2w_{\min}$. Moreover, the proposed enumeration algorithm runs in polynomial
time with respect to the code length.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 14:17:14 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ye",
"Zicheng",
""
],
[
"Li",
"Yuan",
""
],
[
"Zhang",
"Huazi",
""
],
[
"Wang",
"Jun",
""
],
[
"Yan",
"Guiying",
""
],
[
"Ma",
"Zhiming",
""
]
] |
new_dataset
| 0.999223 |
2308.10049
|
Lin Pengfei
|
Pengfei Lin, Ehsan Javanmardi, Manabu Tsukada
|
Clothoid Curve-based Emergency-Stopping Path Planning with Adaptive
Potential Field for Autonomous Vehicles
|
14 pages, 20 figures, journal paper in submission
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Potential Field (PF)-based path planning method is widely adopted for
autonomous vehicles (AVs) due to its real-time efficiency and simplicity. PF
often creates a rigid road boundary, and while this ensures that the ego
vehicle consistently operates within the confines of the road, it also brings a
lurking peril in emergency scenarios. If nearby vehicles suddenly switch lanes,
the AV has to veer off and brake to evade a collision, leading to the "blind
alley" effect. In such a situation, the vehicle can become trapped or confused
by the conflicting forces from the obstacle vehicle PF and road boundary PF,
often resulting in indecision or erratic behavior, even crashes. To address the
above-mentioned challenges, this research introduces an Emergency-Stopping Path
Planning (ESPP) that incorporates an adaptive PF (APF) and a clothoid curve for
urgent evasion. First, we design an emergency triggering estimation to detect
the "blind alley" problem by analyzing the PF distribution. Second, we
regionalize the driving scene to search the optimal breach point on the road PF
and the final stopping point for the vehicle by considering the possible motion
range of the obstacle. Finally, we use the optimized clothoid curve to fit
these calculated points under vehicle dynamics constraints to generate a smooth
emergency avoidance path. The proposed ESPP-based APF method was evaluated by
conducting the co-simulation between MATLAB/Simulink and CarSim Simulator in a
freeway scene. The simulation results reveal that the proposed method shows
increased performance in emergency collision avoidance and renders the vehicle
safer, in which the duration of wheel slip is 61.9% shorter, and the maximum
steering angle amplitude is 76.9% lower than other potential field-based
methods.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 15:14:41 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Lin",
"Pengfei",
""
],
[
"Javanmardi",
"Ehsan",
""
],
[
"Tsukada",
"Manabu",
""
]
] |
new_dataset
| 0.998422 |
2308.10111
|
Yuantian Huang
|
Yuantian Huang, Satoshi Iizuka, Edgar Simo-Serra, and Kazuhiro Fukui
|
Controllable Multi-domain Semantic Artwork Synthesis
|
15 pages, accepted by CVMJ, to appear
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel framework for multi-domain synthesis of artwork from
semantic layouts. One of the main limitations of this challenging task is the
lack of publicly available segmentation datasets for art synthesis. To address
this problem, we propose a dataset, which we call ArtSem, that contains 40,000
images of artwork from 4 different domains with their corresponding semantic
label maps. We generate the dataset by first extracting semantic maps from
landscape photography and then propose a conditional Generative Adversarial
Network (GAN)-based approach to generate high-quality artwork from the semantic
maps without necessitating paired training data. Furthermore, we propose an
artwork synthesis model that uses domain-dependent variational encoders for
high-quality multi-domain synthesis. The model is improved and complemented
with a simple but effective normalization method, based on normalizing both the
semantic and style jointly, which we call Spatially STyle-Adaptive
Normalization (SSTAN). In contrast to previous methods that only take semantic
layout as input, our model is able to learn a joint representation of both
style and semantic information, which leads to better generation quality for
synthesizing artistic images. Results indicate that our model learns to
separate the domains in the latent space, and thus, by identifying the
hyperplanes that separate the different domains, we can also perform
fine-grained control of the synthesized artwork. By combining our proposed
dataset and approach, we are able to generate user-controllable artwork that is
of higher quality than existing
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 21:16:28 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Huang",
"Yuantian",
""
],
[
"Iizuka",
"Satoshi",
""
],
[
"Simo-Serra",
"Edgar",
""
],
[
"Fukui",
"Kazuhiro",
""
]
] |
new_dataset
| 0.999764 |
2308.10121
|
Shahram Ghandeharizadeh
|
Hamed Alimohammadzadeh, Rohit Bernard, Yang Chen, Trung Phan, Prashant
Singh, Shuqin Zhu, Heather Culbertson, Shahram Ghandeharizadeh
|
Dronevision: An Experimental 3D Testbed for Flying Light Specks
| null | null | null | null |
cs.MM cs.GR cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Today's robotic laboratories for drones are housed in a large room. At times,
they are the size of a warehouse. These spaces are typically equipped with
permanent devices to localize the drones, e.g., Vicon Infrared cameras.
Significant time is invested to fine-tune the localization apparatus to compute
and control the position of the drones. One may use these laboratories to
develop a 3D multimedia system with miniature sized drones configured with
light sources. As an alternative, this brave new idea paper envisions shrinking
these room-sized laboratories to the size of a cube or cuboid that sits on a
desk and costs less than 10K dollars. The resulting Dronevision (DV) will be
the size of a 1990s Television. In addition to light sources, its Flying Light
Specks (FLSs) will be network-enabled drones with storage and processing
capability to implement decentralized algorithms. The DV will include a
localization technique to expedite development of 3D displays. It will act as a
haptic interface for a user to interact with and manipulate the 3D virtual
illuminations. It will empower an experimenter to design, implement, test,
debug, and maintain software and hardware that realize novel algorithms in the
comfort of their office without having to reserve a laboratory. In addition to
enhancing productivity, it will improve safety of the experimenter by
minimizing the likelihood of accidents. This paper introduces the concept of a
DV, the research agenda one may pursue using this device, and our plans to
realize one.
|
[
{
"version": "v1",
"created": "Sat, 19 Aug 2023 22:24:00 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Alimohammadzadeh",
"Hamed",
""
],
[
"Bernard",
"Rohit",
""
],
[
"Chen",
"Yang",
""
],
[
"Phan",
"Trung",
""
],
[
"Singh",
"Prashant",
""
],
[
"Zhu",
"Shuqin",
""
],
[
"Culbertson",
"Heather",
""
],
[
"Ghandeharizadeh",
"Shahram",
""
]
] |
new_dataset
| 0.997908 |
2308.10144
|
Andrew Zhao
|
Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, Gao
Huang
|
ExpeL: LLM Agents Are Experiential Learners
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The recent surge in research interest in applying large language models
(LLMs) to decision-making tasks has flourished by leveraging the extensive
world knowledge embedded in LLMs. While there is a growing demand to tailor
LLMs for custom decision-making tasks, finetuning them for specific tasks is
resource-intensive and may diminish the model's generalization capabilities.
Moreover, state-of-the-art language models like GPT-4 and Claude are primarily
accessible through API calls, with their parametric weights remaining
proprietary and unavailable to the public. This scenario emphasizes the growing
need for new methodologies that allow learning from agent experiences without
requiring parametric updates. To address these problems, we introduce the
Experiential Learning (ExpeL) agent. Our agent autonomously gathers experiences
and extracts knowledge using natural language from a collection of training
tasks. At inference, the agent recalls its extracted insights and past
experiences to make informed decisions. Our empirical results highlight the
robust learning efficacy of the ExpeL agent, indicating a consistent
enhancement in its performance as it accumulates experiences. We further
explore the emerging capabilities and transfer learning potential of the ExpeL
agent through qualitative observations and additional experiments.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 03:03:34 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zhao",
"Andrew",
""
],
[
"Huang",
"Daniel",
""
],
[
"Xu",
"Quentin",
""
],
[
"Lin",
"Matthieu",
""
],
[
"Liu",
"Yong-Jin",
""
],
[
"Huang",
"Gao",
""
]
] |
new_dataset
| 0.976272 |
2308.10146
|
Shujie Zhang
|
Shujie Zhang, Tianyue Zheng, Zhe Chen, Jingzhi Hu, Abdelwahed Khamis,
Jiajun Liu and Jun Luo
|
OCHID-Fi: Occlusion-Robust Hand Pose Estimation in 3D via RF-Vision
|
Accepted to ICCV 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Hand Pose Estimation (HPE) is crucial to many applications, but conventional
cameras-based CM-HPE methods are completely subject to Line-of-Sight (LoS), as
cameras cannot capture occluded objects. In this paper, we propose to exploit
Radio-Frequency-Vision (RF-vision) capable of bypassing obstacles for achieving
occluded HPE, and we introduce OCHID-Fi as the first RF-HPE method with 3D pose
estimation capability. OCHID-Fi employs wideband RF sensors widely available on
smart devices (e.g., iPhones) to probe 3D human hand pose and extract their
skeletons behind obstacles. To overcome the challenge in labeling RF imaging
given its human incomprehensible nature, OCHID-Fi employs a cross-modality and
cross-domain training process. It uses a pre-trained CM-HPE network and a
synchronized CM/RF dataset, to guide the training of its complex-valued RF-HPE
network under LoS conditions. It further transfers knowledge learned from
labeled LoS domain to unlabeled occluded domain via adversarial learning,
enabling OCHID-Fi to generalize to unseen occluded scenarios. Experimental
results demonstrate the superiority of OCHID-Fi: it achieves comparable
accuracy to CM-HPE under normal conditions while maintaining such accuracy even
in occluded scenarios, with empirical evidence for its generalizability to new
domains.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 03:13:17 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Zhang",
"Shujie",
""
],
[
"Zheng",
"Tianyue",
""
],
[
"Chen",
"Zhe",
""
],
[
"Hu",
"Jingzhi",
""
],
[
"Khamis",
"Abdelwahed",
""
],
[
"Liu",
"Jiajun",
""
],
[
"Luo",
"Jun",
""
]
] |
new_dataset
| 0.978977 |
2308.10180
|
Khaled Alanezi
|
Khaled Alanezi and Shivakant Mishra
|
An IoT Architecture Leveraging Digital Twins: Compromised Node Detection
Scenario
|
This work has been submitted to the IEEE for possible publication
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern IoT (Internet of Things) environments with thousands of low-end and
diverse IoT nodes with complex interactions among them and often deployed in
remote and/or wild locations present some unique challenges that make
traditional node compromise detection services less effective. This paper
presents the design, implementation and evaluation of a fog-based architecture
that utilizes the concept of a digital-twin to detect compromised IoT nodes
exhibiting malicious behaviors by either producing erroneous data and/or being
used to launch network intrusion attacks to hijack other nodes eventually
causing service disruption. By defining a digital twin of an IoT infrastructure
at a fog server, the architecture is focused on monitoring relevant information
to save energy and storage space. The paper presents a prototype implementation
for the architecture utilizing malicious behavior datasets to perform
misbehaving node classification. An extensive accuracy and system performance
evaluation was conducted based on this prototype. Results show good accuracy
and negligible overhead especially when employing deep learning techniques such
as MLP (multilayer perceptron).
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 07:03:32 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Alanezi",
"Khaled",
""
],
[
"Mishra",
"Shivakant",
""
]
] |
new_dataset
| 0.978607 |
2308.10204
|
Haoyuan Wu
|
Zhuolun He, Haoyuan Wu, Xinyun Zhang, Xufeng Yao, Su Zheng, Haisheng
Zheng, Bei Yu
|
ChatEDA: A Large Language Model Powered Autonomous Agent for EDA
| null | null | null | null |
cs.AR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The integration of a complex set of Electronic Design Automation (EDA) tools
to enhance interoperability is a critical concern for circuit designers. Recent
advancements in large language models (LLMs) have showcased their exceptional
capabilities in natural language processing and comprehension, offering a novel
approach to interfacing with EDA tools. This research paper introduces ChatEDA,
an autonomous agent for EDA empowered by a large language model, AutoMage,
complemented by EDA tools serving as executors. ChatEDA streamlines the design
flow from the Register-Transfer Level (RTL) to the Graphic Data System Version
II (GDSII) by effectively managing task planning, script generation, and task
execution. Through comprehensive experimental evaluations, ChatEDA has
demonstrated its proficiency in handling diverse requirements, and our
fine-tuned AutoMage model has exhibited superior performance compared to GPT-4
and other similar LLMs.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 08:32:13 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"He",
"Zhuolun",
""
],
[
"Wu",
"Haoyuan",
""
],
[
"Zhang",
"Xinyun",
""
],
[
"Yao",
"Xufeng",
""
],
[
"Zheng",
"Su",
""
],
[
"Zheng",
"Haisheng",
""
],
[
"Yu",
"Bei",
""
]
] |
new_dataset
| 0.997441 |
2308.10227
|
Mingyuan Huang
|
Jiachi Chen, Mingyuan Huang, Zewei Lin, Peilin Zheng and Zibin Zheng
|
To Healthier Ethereum: A Comprehensive and Iterative Smart Contract
Weakness Enumeration
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing popularity of cryptocurrencies and blockchain technology,
smart contracts have become a prominent feature in developing decentralized
applications. However, these smart contracts are susceptible to vulnerabilities
that hackers can exploit, resulting in significant financial losses. In
response to this growing concern, various initiatives have emerged. Notably,
the SWC vulnerability list played an important role in raising awareness and
understanding of smart contract weaknesses. However, the SWC list lacks
maintenance and has not been updated with new vulnerabilities since 2020. To
address this gap, this paper introduces the Smart Contract Weakness Enumeration
(SWE), a comprehensive and practical vulnerability list up until 2023. We
collect 273 vulnerability descriptions from 86 top conference papers and
journal papers, employing open card sorting techniques to deduplicate and
categorize these descriptions. This process results in the identification of 40
common contract weaknesses, which are further classified into 20 sub-research
fields through thorough discussion and analysis. SWE provides a systematic and
comprehensive list of smart contract vulnerabilities, covering existing and
emerging vulnerabilities in the last few years. Moreover, SWE is a scalable,
continuously iterative program. We propose two update mechanisms for the
maintenance of SWE. Regular updates involve the inclusion of new
vulnerabilities from future top papers, while irregular updates enable
individuals to report new weaknesses for review and potential addition to SWE.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 10:46:39 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Chen",
"Jiachi",
""
],
[
"Huang",
"Mingyuan",
""
],
[
"Lin",
"Zewei",
""
],
[
"Zheng",
"Peilin",
""
],
[
"Zheng",
"Zibin",
""
]
] |
new_dataset
| 0.995761 |
2308.10234
|
Tianyue Zheng
|
Jingzhi Hu, Tianyue Zheng, Zhe Chen, Hongbo Wang, Jun Luo
|
MUSE-Fi: Contactless MUti-person SEnsing Exploiting Near-field Wi-Fi
Channel Variation
|
15 pages. Accepted by ACM MobiCom 2023
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Having been studied for more than a decade, Wi-Fi human sensing still faces a
major challenge in the presence of multiple persons, simply because the limited
bandwidth of Wi-Fi fails to provide a sufficient range resolution to physically
separate multiple subjects. Existing solutions mostly avoid this challenge by
switching to radars with GHz bandwidth, at the cost of cumbersome deployments.
Therefore, could Wi-Fi human sensing handle multiple subjects remains an open
question. This paper presents MUSE-Fi, the first Wi-Fi multi-person sensing
system with physical separability. The principle behind MUSE-Fi is that, given
a Wi-Fi device (e.g., smartphone) very close to a subject, the near-field
channel variation caused by the subject significantly overwhelms variations
caused by other distant subjects. Consequently, focusing on the channel state
information (CSI) carried by the traffic in and out of this device naturally
allows for physically separating multiple subjects. Based on this principle, we
propose three sensing strategies for MUSE-Fi: i) uplink CSI, ii) downlink CSI,
and iii) downlink beamforming feedback, where we specifically tackle signal
recovery from sparse (per-user) traffic under realistic multi-user
communication scenarios. Our extensive evaluations clearly demonstrate that
MUSE-Fi is able to successfully handle multi-person sensing with respect to
three typical applications: respiration monitoring, gesture detection, and
activity recognition.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 11:39:57 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Hu",
"Jingzhi",
""
],
[
"Zheng",
"Tianyue",
""
],
[
"Chen",
"Zhe",
""
],
[
"Wang",
"Hongbo",
""
],
[
"Luo",
"Jun",
""
]
] |
new_dataset
| 0.996944 |
2308.10305
|
Yingxuan You
|
Yingxuan You, Hong Liu, Ti Wang, Wenhao Li, Runwei Ding, Xia Li
|
Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video
|
Accepted by ICCV 2023. Project page: https://kasvii.github.io/PMCE
| null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite significant progress in single image-based 3D human mesh recovery,
accurately and smoothly recovering 3D human motion from a video remains
challenging. Existing video-based methods generally recover human mesh by
estimating the complex pose and shape parameters from coupled image features,
whose high complexity and low representation ability often result in
inconsistent pose motion and limited shape patterns. To alleviate this issue,
we introduce 3D pose as the intermediary and propose a Pose and Mesh
Co-Evolution network (PMCE) that decouples this task into two parts: 1)
video-based 3D human pose estimation and 2) mesh vertices regression from the
estimated 3D pose and temporal image feature. Specifically, we propose a
two-stream encoder that estimates mid-frame 3D pose and extracts a temporal
image feature from the input image sequence. In addition, we design a
co-evolution decoder that performs pose and mesh interactions with the
image-guided Adaptive Layer Normalization (AdaLN) to make pose and mesh fit the
human body shape. Extensive experiments demonstrate that the proposed PMCE
outperforms previous state-of-the-art methods in terms of both per-frame
accuracy and temporal consistency on three benchmark datasets: 3DPW, Human3.6M,
and MPI-INF-3DHP. Our code is available at https://github.com/kasvii/PMCE.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 16:03:21 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"You",
"Yingxuan",
""
],
[
"Liu",
"Hong",
""
],
[
"Wang",
"Ti",
""
],
[
"Li",
"Wenhao",
""
],
[
"Ding",
"Runwei",
""
],
[
"Li",
"Xia",
""
]
] |
new_dataset
| 0.995072 |
2308.10382
|
Xing Yao
|
Xing Yao, Han Liu, Dewei Hu, Daiwei Lu, Ange Lou, Hao Li, Ruining
Deng, Gabriel Arenas, Baris Oguz, Nadav Schwartz, Brett C Byram, Ipek Oguz
|
False Negative/Positive Control for SAM on Noisy Medical Images
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Segment Anything Model (SAM) is a recently developed all-range foundation
model for image segmentation. It can use sparse manual prompts such as bounding
boxes to generate pixel-level segmentation in natural images but struggles in
medical images such as low-contrast, noisy ultrasound images. We propose a
refined test-phase prompt augmentation technique designed to improve SAM's
performance in medical image segmentation. The method couples multi-box prompt
augmentation and an aleatoric uncertainty-based false-negative (FN) and
false-positive (FP) correction (FNPC) strategy. We evaluate the method on two
ultrasound datasets and show improvement in SAM's performance and robustness to
inaccurate prompts, without the necessity for further training or tuning.
Moreover, we present the Single-Slice-to-Volume (SS2V) method, enabling 3D
pixel-level segmentation using only the bounding box annotation from a single
2D slice. Our results allow efficient use of SAM in even noisy, low-contrast
medical images. The source code will be released soon.
|
[
{
"version": "v1",
"created": "Sun, 20 Aug 2023 23:01:46 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Yao",
"Xing",
""
],
[
"Liu",
"Han",
""
],
[
"Hu",
"Dewei",
""
],
[
"Lu",
"Daiwei",
""
],
[
"Lou",
"Ange",
""
],
[
"Li",
"Hao",
""
],
[
"Deng",
"Ruining",
""
],
[
"Arenas",
"Gabriel",
""
],
[
"Oguz",
"Baris",
""
],
[
"Schwartz",
"Nadav",
""
],
[
"Byram",
"Brett C",
""
],
[
"Oguz",
"Ipek",
""
]
] |
new_dataset
| 0.998318 |
2308.10411
|
Hao Chen
|
Hao Chen, Weiwei Wan, Masaki Matsushita, Takeyuki Kotaka, Kensuke
Harada
|
In-Rack Test Tube Pose Estimation Using RGB-D Data
|
Submit to IEEE ROBIO 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate robotic manipulation of test tubes in biology and medical industries
is becoming increasingly important to address workforce shortages and improve
worker safety. The detection and localization of test tubes are essential for
the robots to successfully manipulate test tubes. In this paper, we present a
framework to detect and estimate poses for the in-rack test tubes using color
and depth data. The methodology involves the utilization of a YOLO object
detector to effectively classify and localize both the test tubes and the tube
racks within the provided image data. Subsequently, the pose of the tube rack
is estimated through point cloud registration techniques. During the process of
estimating the poses of the test tubes, we capitalize on constraints derived
from the arrangement of rack slots. By employing an optimization-based
algorithm, we effectively evaluate and refine the pose of the test tubes. This
strategic approach ensures the robustness of pose estimation, even when
confronted with noisy and incomplete point cloud data.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 01:35:06 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Chen",
"Hao",
""
],
[
"Wan",
"Weiwei",
""
],
[
"Matsushita",
"Masaki",
""
],
[
"Kotaka",
"Takeyuki",
""
],
[
"Harada",
"Kensuke",
""
]
] |
new_dataset
| 0.996518 |
2308.10441
|
Bo Dai
|
Bo Dai, Linge Wang, Baoxiong Jia, Zeyu Zhang, Song-Chun Zhu, Chi
Zhang, Yixin Zhu
|
X-VoE: Measuring eXplanatory Violation of Expectation in Physical Events
|
19 pages, 16 figures, selected for an Oral presentation at ICCV 2023.
Project link: https://pku.ai/publication/intuitive2023iccv/
| null | null | null |
cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intuitive physics is pivotal for human understanding of the physical world,
enabling prediction and interpretation of events even in infancy. Nonetheless,
replicating this level of intuitive physics in artificial intelligence (AI)
remains a formidable challenge. This study introduces X-VoE, a comprehensive
benchmark dataset, to assess AI agents' grasp of intuitive physics. Built on
the developmental psychology-rooted Violation of Expectation (VoE) paradigm,
X-VoE establishes a higher bar for the explanatory capacities of intuitive
physics models. Each VoE scenario within X-VoE encompasses three distinct
settings, probing models' comprehension of events and their underlying
explanations. Beyond model evaluation, we present an explanation-based learning
system that captures physics dynamics and infers occluded object states solely
from visual sequences, without explicit occlusion labels. Experimental outcomes
highlight our model's alignment with human commonsense when tested against
X-VoE. A remarkable feature is our model's ability to visually expound VoE
events by reconstructing concealed scenes. Concluding, we discuss the findings'
implications and outline future research directions. Through X-VoE, we catalyze
the advancement of AI endowed with human-like intuitive physics capabilities.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 03:28:23 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Dai",
"Bo",
""
],
[
"Wang",
"Linge",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhu",
"Yixin",
""
]
] |
new_dataset
| 0.973537 |
2308.10446
|
Liangrui Pan
|
Liangrui Pan, Yutao Dou, Zhichao Feng, Liwen Xu, Shaoliang Peng
|
LDCSF: Local depth convolution-based Swim framework for classifying
multi-label histopathology images
|
Submitted to BIBM2023
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Histopathological images are the gold standard for diagnosing liver cancer.
However, the accuracy of fully digital diagnosis in computational pathology
needs to be improved. In this paper, in order to solve the problem of
multi-label and low classification accuracy of histopathology images, we
propose a locally deep convolutional Swim framework (LDCSF) to classify
multi-label histopathology images. In order to be able to provide local field
of view diagnostic results, we propose the LDCSF model, which consists of a
Swin transformer module, a local depth convolution (LDC) module, a feature
reconstruction (FR) module, and a ResNet module. The Swin transformer module
reduces the amount of computation generated by the attention mechanism by
limiting the attention to each window. The LDC then reconstructs the attention
map and performs convolution operations in multiple channels, passing the
resulting feature map to the next layer. The FR module uses the corresponding
weight coefficient vectors obtained from the channels to dot product with the
original feature map vector matrix to generate representative feature maps.
Finally, the residual network undertakes the final classification task. As a
result, the classification accuracy of LDCSF for interstitial area, necrosis,
non-tumor and tumor reached 0.9460, 0.9960, 0.9808, 0.9847, respectively.
Finally, we use the results of multi-label pathological image classification to
calculate the tumor-to-stromal ratio, which lays the foundation for the
analysis of the microenvironment of liver cancer histopathological images.
Second, we released a multilabel histopathology image of liver cancer, our code
and data are available at https://github.com/panliangrui/LSF.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 03:44:54 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Pan",
"Liangrui",
""
],
[
"Dou",
"Yutao",
""
],
[
"Feng",
"Zhichao",
""
],
[
"Xu",
"Liwen",
""
],
[
"Peng",
"Shaoliang",
""
]
] |
new_dataset
| 0.962917 |
2308.10449
|
Liangrui Pan
|
Liangrui Pan, Lian Wang, Zhichao Feng, Liwen Xu, Shaoliang Peng
|
CVFC: Attention-Based Cross-View Feature Consistency for Weakly
Supervised Semantic Segmentation of Pathology Images
|
Submitted to BIBM2023
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Histopathology image segmentation is the gold standard for diagnosing cancer,
and can indicate cancer prognosis. However, histopathology image segmentation
requires high-quality masks, so many studies now use imagelevel labels to
achieve pixel-level segmentation to reduce the need for fine-grained
annotation. To solve this problem, we propose an attention-based cross-view
feature consistency end-to-end pseudo-mask generation framework named CVFC
based on the attention mechanism. Specifically, CVFC is a three-branch joint
framework composed of two Resnet38 and one Resnet50, and the independent branch
multi-scale integrated feature map to generate a class activation map (CAM); in
each branch, through down-sampling and The expansion method adjusts the size of
the CAM; the middle branch projects the feature matrix to the query and key
feature spaces, and generates a feature space perception matrix through the
connection layer and inner product to adjust and refine the CAM of each branch;
finally, through the feature consistency loss and feature cross loss to
optimize the parameters of CVFC in co-training mode. After a large number of
experiments, An IoU of 0.7122 and a fwIoU of 0.7018 are obtained on the
WSSS4LUAD dataset, which outperforms HistoSegNet, SEAM, C-CAM, WSSS-Tissue, and
OEEM, respectively.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 03:50:09 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Pan",
"Liangrui",
""
],
[
"Wang",
"Lian",
""
],
[
"Feng",
"Zhichao",
""
],
[
"Xu",
"Liwen",
""
],
[
"Peng",
"Shaoliang",
""
]
] |
new_dataset
| 0.981394 |
2308.10491
|
Francesco Barbato
|
Giulia Rizzoli, Francesco Barbato, Matteo Caligiuri, Pietro Zanuttigh
|
SynDrone -- Multi-modal UAV Dataset for Urban Scenarios
|
Accepted at ICCV Workshops, downloadable dataset with CC-BY license,
8 pages, 4 figures, 8 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of computer vision algorithms for Unmanned Aerial Vehicles
(UAVs) imagery heavily relies on the availability of annotated high-resolution
aerial data. However, the scarcity of large-scale real datasets with
pixel-level annotations poses a significant challenge to researchers as the
limited number of images in existing datasets hinders the effectiveness of deep
learning models that require a large amount of training data. In this paper, we
propose a multimodal synthetic dataset containing both images and 3D data taken
at multiple flying heights to address these limitations. In addition to
object-level annotations, the provided data also include pixel-level labeling
in 28 classes, enabling exploration of the potential advantages in tasks like
semantic segmentation. In total, our dataset contains 72k labeled samples that
allow for effective training of deep architectures showing promising results in
synthetic-to-real adaptation. The dataset will be made publicly available to
support the development of novel computer vision methods targeting UAV
applications.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 06:22:10 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Rizzoli",
"Giulia",
""
],
[
"Barbato",
"Francesco",
""
],
[
"Caligiuri",
"Matteo",
""
],
[
"Zanuttigh",
"Pietro",
""
]
] |
new_dataset
| 0.99883 |
2308.10521
|
Deguo Ma
|
Deguo Ma, Chen Li, Lin Qiao, Tianming Du, Dechao Tang, Zhiyu Ma,
Marcin Grzegorzek Hongzan, Hongzan Sun
|
PHE-SICH-CT-IDS: A Benchmark CT Image Dataset for Evaluation Semantic
Segmentation, Object Detection and Radiomic Feature Extraction of
Perihematomal Edema in Spontaneous Intracerebral Hemorrhage
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intracerebral hemorrhage is one of the diseases with the highest mortality
and poorest prognosis worldwide. Spontaneous intracerebral hemorrhage (SICH)
typically presents acutely, prompt and expedited radiological examination is
crucial for diagnosis, localization, and quantification of the hemorrhage.
Early detection and accurate segmentation of perihematomal edema (PHE) play a
critical role in guiding appropriate clinical intervention and enhancing
patient prognosis. However, the progress and assessment of computer-aided
diagnostic methods for PHE segmentation and detection face challenges due to
the scarcity of publicly accessible brain CT image datasets. This study
establishes a publicly available CT dataset named PHE-SICH-CT-IDS for
perihematomal edema in spontaneous intracerebral hemorrhage. The dataset
comprises 120 brain CT scans and 7,022 CT images, along with corresponding
medical information of the patients. To demonstrate its effectiveness,
classical algorithms for semantic segmentation, object detection, and radiomic
feature extraction are evaluated. The experimental results confirm the
suitability of PHE-SICH-CT-IDS for assessing the performance of segmentation,
detection and radiomic feature extraction methods. To the best of our
knowledge, this is the first publicly available dataset for PHE in SICH,
comprising various data formats suitable for applications across diverse
medical scenarios. We believe that PHE-SICH-CT-IDS will allure researchers to
explore novel algorithms, providing valuable support for clinicians and
patients in the clinical setting. PHE-SICH-CT-IDS is freely published for
non-commercial purpose at:
https://figshare.com/articles/dataset/PHE-SICH-CT-IDS/23957937.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 07:18:51 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ma",
"Deguo",
""
],
[
"Li",
"Chen",
""
],
[
"Qiao",
"Lin",
""
],
[
"Du",
"Tianming",
""
],
[
"Tang",
"Dechao",
""
],
[
"Ma",
"Zhiyu",
""
],
[
"Hongzan",
"Marcin Grzegorzek",
""
],
[
"Sun",
"Hongzan",
""
]
] |
new_dataset
| 0.999779 |
2308.10526
|
Chongyang Wang
|
Chongyang Wang, Yuan Feng, Lingxiao Zhong, Siyi Zhu, Chi Zhang, Siqi
Zheng, Chen Liang, Yuntao Wang, Chengqi He, Chun Yu, and Yuanchun Shi
|
UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with
Action Understanding and Feedback in Natural Language
|
27 pages, 14 figures, 5 tables
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce UbiPhysio, a milestone framework that delivers fine-grained
action description and feedback in natural language to support people's daily
functioning, fitness, and rehabilitation activities. This expert-like
capability assists users in properly executing actions and maintaining
engagement in remote fitness and rehabilitation programs. Specifically, the
proposed UbiPhysio framework comprises a fine-grained action descriptor and a
knowledge retrieval-enhanced feedback module. The action descriptor translates
action data, represented by a set of biomechanical movement features we
designed based on clinical priors, into textual descriptions of action types
and potential movement patterns. Building on physiotherapeutic domain
knowledge, the feedback module provides clear and engaging expert feedback. We
evaluated UbiPhysio's performance through extensive experiments with data from
104 diverse participants, collected in a home-like setting during 25 types of
everyday activities and exercises. We assessed the quality of the language
output under different tuning strategies using standard benchmarks. We
conducted a user study to gather insights from clinical experts and potential
users on our framework. Our initial tests show promise for deploying UbiPhysio
in real-life settings without specialized devices.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 07:26:05 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Wang",
"Chongyang",
""
],
[
"Feng",
"Yuan",
""
],
[
"Zhong",
"Lingxiao",
""
],
[
"Zhu",
"Siyi",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zheng",
"Siqi",
""
],
[
"Liang",
"Chen",
""
],
[
"Wang",
"Yuntao",
""
],
[
"He",
"Chengqi",
""
],
[
"Yu",
"Chun",
""
],
[
"Shi",
"Yuanchun",
""
]
] |
new_dataset
| 0.9998 |
2308.10529
|
Tianyu Yu
|
Tianyu Yu, Chengyue Jiang, Chao Lou, Shen Huang, Xiaobin Wang, Wei
Liu, Jiong Cai, Yangning Li, Yinghui Li, Kewei Tu, Hai-Tao Zheng, Ningyu
Zhang, Pengjun Xie, Fei Huang, Yong Jiang
|
SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence
Understanding
|
Initial version of SeqGPT
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) have shown impressive ability for open-domain
NLP tasks. However, LLMs are sometimes too footloose for natural language
understanding (NLU) tasks which always have restricted output and input format.
Their performances on NLU tasks are highly related to prompts or demonstrations
and are shown to be poor at performing several representative NLU tasks, such
as event extraction and entity typing. To this end, we present SeqGPT, a
bilingual (i.e., English and Chinese) open-source autoregressive model
specially enhanced for open-domain natural language understanding. We express
all NLU tasks with two atomic tasks, which define fixed instructions to
restrict the input and output format but still ``open'' for arbitrarily varied
label sets. The model is first instruction-tuned with extremely fine-grained
labeled data synthesized by ChatGPT and then further fine-tuned by 233
different atomic tasks from 152 datasets across various domains. The
experimental results show that SeqGPT has decent classification and extraction
ability, and is capable of performing language understanding tasks on unseen
domains. We also conduct empirical studies on the scaling of data and model
size as well as on the transfer across tasks. Our model is accessible at
https://github.com/Alibaba-NLP/SeqGPT.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 07:31:19 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Yu",
"Tianyu",
""
],
[
"Jiang",
"Chengyue",
""
],
[
"Lou",
"Chao",
""
],
[
"Huang",
"Shen",
""
],
[
"Wang",
"Xiaobin",
""
],
[
"Liu",
"Wei",
""
],
[
"Cai",
"Jiong",
""
],
[
"Li",
"Yangning",
""
],
[
"Li",
"Yinghui",
""
],
[
"Tu",
"Kewei",
""
],
[
"Zheng",
"Hai-Tao",
""
],
[
"Zhang",
"Ningyu",
""
],
[
"Xie",
"Pengjun",
""
],
[
"Huang",
"Fei",
""
],
[
"Jiang",
"Yong",
""
]
] |
new_dataset
| 0.998953 |
2308.10560
|
Andrea Pizzo
|
Andrea Pizzo, Angel Lozano, Sundeep Rangan, Thomas Marzetta
|
Wide-Aperture MIMO via Reflection off a Smooth Surface
|
arXiv admin note: text overlap with arXiv:2205.01213
|
in IEEE Transactions on Wireless Communications, vol. 22, no. 8,
pp. 5229-5239, Aug. 2023
|
10.1109/TWC.2022.3232742.
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper provides a deterministic channel model for a scenario where
wireless connectivity is established through a reflection off a smooth planar
surface of an infinite extent. The developed model is rigorously built upon the
physics of wave propagation and is as precise as tight are the unboundedness
and smoothness assumptions on the surface. This model allows establishing how
line-of-sight multiantenna communication is altered by a reflection off an
electrically large surface, a situation of high interest for mmWave and
terahertz frequencies.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 08:31:36 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Pizzo",
"Andrea",
""
],
[
"Lozano",
"Angel",
""
],
[
"Rangan",
"Sundeep",
""
],
[
"Marzetta",
"Thomas",
""
]
] |
new_dataset
| 0.997055 |
2308.10569
|
Cheng Feng
|
Cheng Feng, Zhen Chen, Congxuan Zhang, Weiming Hu, Bing Li, Feng Lu
|
RT-MonoDepth: Real-time Monocular Depth Estimation on Embedded Systems
|
8 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth sensing is a crucial function of unmanned aerial vehicles and
autonomous vehicles. Due to the small size and simple structure of monocular
cameras, there has been a growing interest in depth estimation from a single
RGB image. However, state-of-the-art monocular CNN-based depth estimation
methods using fairly complex deep neural networks are too slow for real-time
inference on embedded platforms. This paper addresses the problem of real-time
depth estimation on embedded systems. We propose two efficient and lightweight
encoder-decoder network architectures, RT-MonoDepth and RT-MonoDepth-S, to
reduce computational complexity and latency. Our methodologies demonstrate that
it is possible to achieve similar accuracy as prior state-of-the-art works on
depth estimation at a faster inference speed. Our proposed networks,
RT-MonoDepth and RT-MonoDepth-S, runs at 18.4\&30.5 FPS on NVIDIA Jetson Nano
and 253.0\&364.1 FPS on NVIDIA Jetson AGX Orin on a single RGB image of
resolution 640$\times$192, and achieve relative state-of-the-art accuracy on
the KITTI dataset. To the best of the authors' knowledge, this paper achieves
the best accuracy and fastest inference speed compared with existing fast
monocular depth estimation methods.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 08:59:59 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Feng",
"Cheng",
""
],
[
"Chen",
"Zhen",
""
],
[
"Zhang",
"Congxuan",
""
],
[
"Hu",
"Weiming",
""
],
[
"Li",
"Bing",
""
],
[
"Lu",
"Feng",
""
]
] |
new_dataset
| 0.986524 |
2308.10574
|
Lixin Yang
|
Kailin Li, Lixin Yang, Haoyu Zhen, Zenan Lin, Xinyu Zhan, Licheng
Zhong, Jian Xu, Kejian Wu, Cewu Lu
|
CHORD: Category-level Hand-held Object Reconstruction via Shape
Deformation
|
To be presented at ICCV 2023, Paris
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In daily life, humans utilize hands to manipulate objects. Modeling the shape
of objects that are manipulated by the hand is essential for AI to comprehend
daily tasks and to learn manipulation skills. However, previous approaches have
encountered difficulties in reconstructing the precise shapes of hand-held
objects, primarily owing to a deficiency in prior shape knowledge and
inadequate data for training. As illustrated, given a particular type of tool,
such as a mug, despite its infinite variations in shape and appearance, humans
have a limited number of 'effective' modes and poses for its manipulation. This
can be attributed to the fact that humans have mastered the shape prior of the
'mug' category, and can quickly establish the corresponding relations between
different mug instances and the prior, such as where the rim and handle are
located. In light of this, we propose a new method, CHORD, for Category-level
Hand-held Object Reconstruction via shape Deformation. CHORD deforms a
categorical shape prior for reconstructing the intra-class objects. To ensure
accurate reconstruction, we empower CHORD with three types of awareness:
appearance, shape, and interacting pose. In addition, we have constructed a new
dataset, COMIC, of category-level hand-object interaction. COMIC contains a
rich array of object instances, materials, hand interactions, and viewing
directions. Extensive evaluation shows that CHORD outperforms state-of-the-art
approaches in both quantitative and qualitative measures. Code, model, and
datasets are available at https://kailinli.github.io/CHORD.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 09:14:18 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Li",
"Kailin",
""
],
[
"Yang",
"Lixin",
""
],
[
"Zhen",
"Haoyu",
""
],
[
"Lin",
"Zenan",
""
],
[
"Zhan",
"Xinyu",
""
],
[
"Zhong",
"Licheng",
""
],
[
"Xu",
"Jian",
""
],
[
"Wu",
"Kejian",
""
],
[
"Lu",
"Cewu",
""
]
] |
new_dataset
| 0.997906 |
2308.10597
|
Daniele De Martini
|
Fraser Rennie, David Williams, Paul Newman and Daniele De Martini
|
Doppler-aware Odometry from FMCW Scanning Radar
|
Accepted to ITSC 2023
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work explores Doppler information from a millimetre-Wave (mm-W)
Frequency-Modulated Continuous-Wave (FMCW) scanning radar to make odometry
estimation more robust and accurate. Firstly, doppler information is added to
the scan masking process to enhance correlative scan matching. Secondly, we
train a Neural Network (NN) for regressing forward velocity directly from a
single radar scan; we fuse this estimate with the correlative scan matching
estimate and show improved robustness to bad estimates caused by challenging
environment geometries, e.g. narrow tunnels. We test our method with a novel
custom dataset which is released with this work at
https://ori.ox.ac.uk/publications/datasets.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 09:56:23 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Rennie",
"Fraser",
""
],
[
"Williams",
"David",
""
],
[
"Newman",
"Paul",
""
],
[
"De Martini",
"Daniele",
""
]
] |
new_dataset
| 0.99619 |
2308.10609
|
Hojoon Lee
|
Hojoon Lee, Hawon Jeong, Byungkun Lee, Kyungyup Lee, Jaegul Choo
|
ST-RAP: A Spatio-Temporal Framework for Real Estate Appraisal
|
Accepted to CIKM'23
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce ST-RAP, a novel Spatio-Temporal framework for
Real estate APpraisal. ST-RAP employs a hierarchical architecture with a
heterogeneous graph neural network to encapsulate temporal dynamics and spatial
relationships simultaneously. Through comprehensive experiments on a
large-scale real estate dataset, ST-RAP outperforms previous methods,
demonstrating the significant benefits of integrating spatial and temporal
aspects in real estate appraisal. Our code and dataset are available at
https://github.com/dojeon-ai/STRAP.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 10:18:26 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Lee",
"Hojoon",
""
],
[
"Jeong",
"Hawon",
""
],
[
"Lee",
"Byungkun",
""
],
[
"Lee",
"Kyungyup",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.983741 |
2308.10610
|
Yubiao Yue
|
Yubiao Yue, Xinyu Zeng, Xiaoqiang Shi, Meiping Zhang, Haihua Liang,
Fan Zhang, Yanmei Chen, Zefeng Xie, Wenrui Wu, Zhenzhang Li
|
Ultrafast and Ultralight Network-Based Intelligent System for Real-time
Diagnosis of Ear diseases in Any Devices
|
This manuscript has been submitted to Neural Networks
| null | null | null |
cs.CV cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional ear disease diagnosis heavily depends on experienced specialists
and specialized equipment, frequently resulting in misdiagnoses, treatment
delays, and financial burdens for some patients. Utilizing deep learning models
for efficient ear disease diagnosis has proven effective and affordable.
However, existing research overlooked model inference speed and parameter size
required for deployment. To tackle these challenges, we constructed a
large-scale dataset comprising eight ear disease categories and normal ear
canal samples from two hospitals. Inspired by ShuffleNetV2, we developed
Best-EarNet, an ultrafast and ultralight network enabling real-time ear disease
diagnosis. Best-EarNet incorporates the novel Local-Global Spatial Feature
Fusion Module which can capture global and local spatial information
simultaneously and guide the network to focus on crucial regions within feature
maps at various levels, mitigating low accuracy issues. Moreover, our network
uses multiple auxiliary classification heads for efficient parameter
optimization. With 0.77M parameters, Best-EarNet achieves an average frames per
second of 80 on CPU. Employing transfer learning and five-fold cross-validation
with 22,581 images from Hospital-1, the model achieves an impressive 95.23%
accuracy. External testing on 1,652 images from Hospital-2 validates its
performance, yielding 92.14% accuracy. Compared to state-of-the-art networks,
Best-EarNet establishes a new state-of-the-art (SOTA) in practical
applications. Most importantly, we developed an intelligent diagnosis system
called Ear Keeper, which can be deployed on common electronic devices. By
manipulating a compact electronic otoscope, users can perform comprehensive
scanning and diagnosis of the ear canal using real-time video. This study
provides a novel paradigm for ear endoscopy and other medical endoscopic image
recognition applications.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 10:20:46 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Yue",
"Yubiao",
""
],
[
"Zeng",
"Xinyu",
""
],
[
"Shi",
"Xiaoqiang",
""
],
[
"Zhang",
"Meiping",
""
],
[
"Liang",
"Haihua",
""
],
[
"Zhang",
"Fan",
""
],
[
"Chen",
"Yanmei",
""
],
[
"Xie",
"Zefeng",
""
],
[
"Wu",
"Wenrui",
""
],
[
"Li",
"Zhenzhang",
""
]
] |
new_dataset
| 0.997538 |
2308.10621
|
Patrick Ruhkamp
|
HyunJun Jung, Patrick Ruhkamp, Nassir Navab, Benjamin Busam
|
Multi-Modal Dataset Acquisition for Photometrically Challenging Object
|
Accepted at ICCV 2023 TRICKY Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper addresses the limitations of current datasets for 3D vision tasks
in terms of accuracy, size, realism, and suitable imaging modalities for
photometrically challenging objects. We propose a novel annotation and
acquisition pipeline that enhances existing 3D perception and 6D object pose
datasets. Our approach integrates robotic forward-kinematics, external infrared
trackers, and improved calibration and annotation procedures. We present a
multi-modal sensor rig, mounted on a robotic end-effector, and demonstrate how
it is integrated into the creation of highly accurate datasets. Additionally,
we introduce a freehand procedure for wider viewpoint coverage. Both approaches
yield high-quality 3D data with accurate object and camera pose annotations.
Our methods overcome the limitations of existing datasets and provide valuable
resources for 3D vision research.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 10:38:32 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Jung",
"HyunJun",
""
],
[
"Ruhkamp",
"Patrick",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
]
] |
new_dataset
| 0.997504 |
2308.10627
|
Patrick Ruhkamp
|
Patrick Ruhkamp, Daoyi Gao, HyunJun Jung, Nassir Navab, Benjamin Busam
|
Polarimetric Information for Multi-Modal 6D Pose Estimation of
Photometrically Challenging Objects with Limited Data
|
Accepted at ICCV 2023 TRICKY Workshop
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
6D pose estimation pipelines that rely on RGB-only or RGB-D data show
limitations for photometrically challenging objects with e.g. textureless
surfaces, reflections or transparency. A supervised learning-based method
utilising complementary polarisation information as input modality is proposed
to overcome such limitations. This supervised approach is then extended to a
self-supervised paradigm by leveraging physical characteristics of polarised
light, thus eliminating the need for annotated real data. The methods achieve
significant advancements in pose estimation by leveraging geometric information
from polarised light and incorporating shape priors and invertible physical
constraints.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 10:56:00 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ruhkamp",
"Patrick",
""
],
[
"Gao",
"Daoyi",
""
],
[
"Jung",
"HyunJun",
""
],
[
"Navab",
"Nassir",
""
],
[
"Busam",
"Benjamin",
""
]
] |
new_dataset
| 0.961186 |
2308.10631
|
Ioan-Adrian Cosma Mr.
|
Adrian Cosma, Emilian Radoi
|
PsyMo: A Dataset for Estimating Self-Reported Psychological Traits from
Gait
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Psychological trait estimation from external factors such as movement and
appearance is a challenging and long-standing problem in psychology, and is
principally based on the psychological theory of embodiment. To date, attempts
to tackle this problem have utilized private small-scale datasets with
intrusive body-attached sensors. Potential applications of an automated system
for psychological trait estimation include estimation of occupational fatigue
and psychology, and marketing and advertisement. In this work, we propose PsyMo
(Psychological traits from Motion), a novel, multi-purpose and multi-modal
dataset for exploring psychological cues manifested in walking patterns. We
gathered walking sequences from 312 subjects in 7 different walking variations
and 6 camera angles. In conjunction with walking sequences, participants filled
in 6 psychological questionnaires, totalling 17 psychometric attributes related
to personality, self-esteem, fatigue, aggressiveness and mental health. We
propose two evaluation protocols for psychological trait estimation. Alongside
the estimation of self-reported psychological traits from gait, the dataset can
be used as a drop-in replacement to benchmark methods for gait recognition. We
anonymize all cues related to the identity of the subjects and publicly release
only silhouettes, 2D / 3D human skeletons and 3D SMPL human meshes.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 11:06:43 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Cosma",
"Adrian",
""
],
[
"Radoi",
"Emilian",
""
]
] |
new_dataset
| 0.99973 |
2308.10638
|
Soubhik Sanyal
|
Soubhik Sanyal, Partha Ghosh, Jinlong Yang, Michael J. Black, Justus
Thies, Timo Bolkart
|
SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed
and Textured Human Meshes
| null | null | null | null |
cs.CV cs.AI cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present SCULPT, a novel 3D generative model for clothed and textured 3D
meshes of humans. Specifically, we devise a deep neural network that learns to
represent the geometry and appearance distribution of clothed human bodies.
Training such a model is challenging, as datasets of textured 3D meshes for
humans are limited in size and accessibility. Our key observation is that there
exist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image
datasets of clothed humans and multiple appearances can be mapped to a single
geometry. To effectively learn from the two data modalities, we propose an
unpaired learning procedure for pose-dependent clothed and textured human
meshes. Specifically, we learn a pose-dependent geometry space from 3D scan
data. We represent this as per vertex displacements w.r.t. the SMPL model.
Next, we train a geometry conditioned texture generator in an unsupervised way
using the 2D image data. We use intermediate activations of the learned
geometry model to condition our texture generator. To alleviate entanglement
between pose and clothing type, and pose and clothing appearance, we condition
both the texture and geometry generators with attribute labels such as clothing
types for the geometry, and clothing colors for the texture generator. We
automatically generated these conditioning labels for the 2D images based on
the visual question answering model BLIP and CLIP. We validate our method on
the SCULPT dataset, and compare to state-of-the-art 3D generative models for
clothed human bodies. We will release the codebase for research purposes.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 11:23:25 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Sanyal",
"Soubhik",
""
],
[
"Ghosh",
"Partha",
""
],
[
"Yang",
"Jinlong",
""
],
[
"Black",
"Michael J.",
""
],
[
"Thies",
"Justus",
""
],
[
"Bolkart",
"Timo",
""
]
] |
new_dataset
| 0.999736 |
2308.10680
|
Esam Ghaleb
|
Esam Ghaleb, Ilya Burenko, Marlou Rasenberg, Wim Pouw, Peter Uhrig,
Judith Holler, Ivan Toni, Asl{\i} \"Ozy\"urek and Raquel Fern\'andez
|
Co-Speech Gesture Detection through Multi-phase Sequence Labeling
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Gestures are integral components of face-to-face communication. They unfold
over time, often following predictable movement phases of preparation, stroke,
and retraction. Yet, the prevalent approach to automatic gesture detection
treats the problem as binary classification, classifying a segment as either
containing a gesture or not, thus failing to capture its inherently sequential
and contextual nature. To address this, we introduce a novel framework that
reframes the task as a multi-phase sequence labeling problem rather than binary
classification. Our model processes sequences of skeletal movements over time
windows, uses Transformer encoders to learn contextual embeddings, and
leverages Conditional Random Fields to perform sequence labeling. We evaluate
our proposal on a large dataset of diverse co-speech gestures in task-oriented
face-to-face dialogues. The results consistently demonstrate that our method
significantly outperforms strong baseline models in detecting gesture strokes.
Furthermore, applying Transformer encoders to learn contextual embeddings from
movement sequences substantially improves gesture unit detection. These results
highlight our framework's capacity to capture the fine-grained dynamics of
co-speech gesture phases, paving the way for more nuanced and accurate gesture
detection and analysis.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 12:27:18 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Ghaleb",
"Esam",
""
],
[
"Burenko",
"Ilya",
""
],
[
"Rasenberg",
"Marlou",
""
],
[
"Pouw",
"Wim",
""
],
[
"Uhrig",
"Peter",
""
],
[
"Holler",
"Judith",
""
],
[
"Toni",
"Ivan",
""
],
[
"Özyürek",
"Aslı",
""
],
[
"Fernández",
"Raquel",
""
]
] |
new_dataset
| 0.996942 |
2308.10682
|
Joerg Schmalenstroeer
|
Joerg Schmalenstroeer, Tobias Gburrek, Reinhold Haeb-Umbach
|
LibriWASN: A Data Set for Meeting Separation, Diarization, and
Recognition with Asynchronous Recording Devices
|
Accepted for presentation at the ITG conference on Speech
Communication 2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present LibriWASN, a data set whose design follows closely the LibriCSS
meeting recognition data set, with the marked difference that the data is
recorded with devices that are randomly positioned on a meeting table and whose
sampling clocks are not synchronized. Nine different devices, five smartphones
with a single recording channel and four microphone arrays, are used to record
a total of 29 channels. Other than that, the data set follows closely the
LibriCSS design: the same LibriSpeech sentences are played back from eight
loudspeakers arranged around a meeting table and the data is organized in
subsets with different percentages of speech overlap. LibriWASN is meant as a
test set for clock synchronization algorithms, meeting separation, diarization
and transcription systems on ad-hoc wireless acoustic sensor networks. Due to
its similarity to LibriCSS, meeting transcription systems developed for the
former can readily be tested on LibriWASN. The data set is recorded in two
different rooms and is complemented with ground-truth diarization information
of who speaks when.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 12:33:35 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Schmalenstroeer",
"Joerg",
""
],
[
"Gburrek",
"Tobias",
""
],
[
"Haeb-Umbach",
"Reinhold",
""
]
] |
new_dataset
| 0.990903 |
2308.10696
|
Mohammed Gharib Dr.
|
Mohammed Gharib, Fatemeh Afghah
|
SCC5G: A PQC-based Architecture for Highly Secure Critical Communication
over Cellular Network in Zero-Trust Environment
| null | null | null | null |
cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
5G made a significant jump in cellular network security by offering enhanced
subscriber identity protection and a user-network mutual authentication
implementation. However, it still does not fully follow the zero-trust (ZT)
requirements, as users need to trust the network, 5G network is not necessarily
authenticated in each communication instance, and there is no mutual
authentication between end users. When critical communications need to use
commercial networks, but the environment is ZT, specific security architecture
is needed to provide security services that do not rely on any 5G network
trusted authority. In this paper, we propose SCC5G Secure Critical-mission
Communication over a 5G network in ZT setting. SCC5G is a post-quantum
cryptography (PQC) security solution that loads an embedded hardware root of
authentication (HRA), such as physically unclonable functions (PUF), into the
users' devices, to achieve tamper-resistant and unclonability features for
authentication and key agreement. We evaluate the performance of the proposed
architecture through an exhaustive simulation of a 5G network in an ns-3
network simulator. Results verify the scalability and efficiency of SCC5G by
showing that it poses only a few kilobytes of traffic overhead and adds only an
order of $O(0.1)$ second of latency under the normal traffic load.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 13:04:45 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Gharib",
"Mohammed",
""
],
[
"Afghah",
"Fatemeh",
""
]
] |
new_dataset
| 0.995771 |
2308.10714
|
Yehonatan Fridman
|
Yehonatan Fridman, Suprasad Mutalik Desai, Navneet Singh, Thomas
Willhalm, Gal Oren
|
CXL Memory as Persistent Memory for Disaggregated HPC: A Practical
Approach
|
12 pages, 9 figures
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In the landscape of High-Performance Computing (HPC), the quest for efficient
and scalable memory solutions remains paramount. The advent of Compute Express
Link (CXL) introduces a promising avenue with its potential to function as a
Persistent Memory (PMem) solution in the context of disaggregated HPC systems.
This paper presents a comprehensive exploration of CXL memory's viability as a
candidate for PMem, supported by physical experiments conducted on cutting-edge
multi-NUMA nodes equipped with CXL-attached memory prototypes. Our study not
only benchmarks the performance of CXL memory but also illustrates the seamless
transition from traditional PMem programming models to CXL, reinforcing its
practicality.
To substantiate our claims, we establish a tangible CXL prototype using an
FPGA card embodying CXL 1.1/2.0 compliant endpoint designs (Intel FPGA CXL IP).
Performance evaluations, executed through the STREAM and STREAM-PMem
benchmarks, showcase CXL memory's ability to mirror PMem characteristics in
App-Direct and Memory Mode while achieving impressive bandwidth metrics with
Intel 4th generation Xeon (Sapphire Rapids) processors.
The results elucidate the feasibility of CXL memory as a persistent memory
solution, outperforming previously established benchmarks. In contrast to
published DCPMM results, our CXL-DDR4 memory module offers comparable bandwidth
to local DDR4 memory configurations, albeit with a moderate decrease in
performance. The modified STREAM-PMem application underscores the ease of
transitioning programming models from PMem to CXL, thus underscoring the
practicality of adopting CXL memory.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 13:27:27 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Fridman",
"Yehonatan",
""
],
[
"Desai",
"Suprasad Mutalik",
""
],
[
"Singh",
"Navneet",
""
],
[
"Willhalm",
"Thomas",
""
],
[
"Oren",
"Gal",
""
]
] |
new_dataset
| 0.979247 |
2308.10729
|
Changzhen Li
|
Changzhen Li, Jie Zhang, Yang Wei, Zhilong Ji, Jinfeng Bai, Shiguang
Shan
|
Patch Is Not All You Need
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision Transformers have achieved great success in computer visions,
delivering exceptional performance across various tasks. However, their
inherent reliance on sequential input enforces the manual partitioning of
images into patch sequences, which disrupts the image's inherent structural and
semantic continuity. To handle this, we propose a novel Pattern Transformer
(Patternformer) to adaptively convert images to pattern sequences for
Transformer input. Specifically, we employ the Convolutional Neural Network to
extract various patterns from the input image, with each channel representing a
unique pattern that is fed into the succeeding Transformer as a visual token.
By enabling the network to optimize these patterns, each pattern concentrates
on its local region of interest, thereby preserving its intrinsic structural
and semantic information. Only employing the vanilla ResNet and Transformer, we
have accomplished state-of-the-art performance on CIFAR-10 and CIFAR-100, and
have achieved competitive results on ImageNet.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 13:54:00 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Li",
"Changzhen",
""
],
[
"Zhang",
"Jie",
""
],
[
"Wei",
"Yang",
""
],
[
"Ji",
"Zhilong",
""
],
[
"Bai",
"Jinfeng",
""
],
[
"Shan",
"Shiguang",
""
]
] |
new_dataset
| 0.999574 |
2308.10735
|
Alexandra Weinberger
|
Oswin Aichholzer and Birgit Vogtenhuber and Alexandra Weinberger
|
Different Types of Isomorphisms of Drawings of Complete Multipartite
Graphs
|
Appears in the Proceedings of the 31st International Symposium on
Graph Drawing and Network Visualization (GD 2023)
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Simple drawings are drawings of graphs in which any two edges intersect at
most once (either at a common endpoint or a proper crossing), and no edge
intersects itself. We analyze several characteristics of simple drawings of
complete multipartite graphs: which pairs of edges cross, in which order they
cross, and the cyclic order around vertices and crossings, respectively. We
consider all possible combinations of how two drawings can share some
characteristics and determine which other characteristics they imply and which
they do not imply. Our main results are that for simple drawings of complete
multipartite graphs, the orders in which edges cross determine all other
considered characteristics. Further, if all partition classes have at least
three vertices, then the pairs of edges that cross determine the rotation
system and the rotation around the crossings determine the extended rotation
system. We also show that most other implications -- including the ones that
hold for complete graphs -- do not hold for complete multipartite graphs. Using
this analysis, we establish which types of isomorphisms are meaningful for
simple drawings of complete multipartite graphs.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 14:01:07 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Aichholzer",
"Oswin",
""
],
[
"Vogtenhuber",
"Birgit",
""
],
[
"Weinberger",
"Alexandra",
""
]
] |
new_dataset
| 0.990038 |
2308.10828
|
Zhihan Jiang
|
Zhihan Jiang, Jinyang Liu, Junjie Huang, Yichen Li, Yintong Huo,
Jiazhen Gu, Zhuangbin Chen, Jieming Zhu and Michael R. Lyu
|
A Large-scale Benchmark for Log Parsing
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Log data is pivotal in activities like anomaly detection and failure
diagnosis in the automated maintenance of software systems. Due to their
unstructured format, log parsing is often required to transform them into a
structured format for automated analysis. A variety of log parsers exist,
making it vital to benchmark these tools to comprehend their features and
performance. However, existing datasets for log parsing are limited in terms of
scale and representativeness, posing challenges for studies that aim to
evaluate or develop log parsers. This problem becomes more pronounced when
these parsers are evaluated for production use. To address these issues, we
introduce a new collection of large-scale annotated log datasets, named LogPub,
which more accurately mirrors log data observed in real-world software systems.
LogPub comprises 14 datasets, each averaging 3.6 million log lines. Utilizing
LogPub, we re-evaluate 15 log parsers in a more rigorous and practical setting.
We also propose a new evaluation metric to lessen the sensitivity of current
metrics to imbalanced data distribution. Furthermore, we are the first to
scrutinize the detailed performance of log parsers on logs that represent rare
system events and offer comprehensive information for system troubleshooting.
Parsing such logs accurately is vital yet challenging. We believe that our work
could shed light on the design and evaluation of log parsers in more realistic
settings, thereby facilitating their implementation in production systems.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 16:24:15 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Jiang",
"Zhihan",
""
],
[
"Liu",
"Jinyang",
""
],
[
"Huang",
"Junjie",
""
],
[
"Li",
"Yichen",
""
],
[
"Huo",
"Yintong",
""
],
[
"Gu",
"Jiazhen",
""
],
[
"Chen",
"Zhuangbin",
""
],
[
"Zhu",
"Jieming",
""
],
[
"Lyu",
"Michael R.",
""
]
] |
new_dataset
| 0.957306 |
2308.10834
|
Muhammad Shahbaz Khan
|
Muhammad Shahbaz Khan, Jawad Ahmad, Hisham Ali, Nikolaos Pitropakis,
Ahmed Al-Dubai, Baraq Ghaleb, William J. Buchanan
|
SRSS: A New Chaos-Based Single-Round Single S-Box Image Encryption
Scheme for Highly Auto-Correlated Data
|
6 Pages
| null | null | null |
cs.CR cs.IT math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
With the advent of digital communication, securing digital images during
transmission and storage has become a critical concern. The traditional s-box
substitution methods often fail to effectively conceal the information within
highly auto-correlated regions of an image. This paper addresses the security
issues presented by three prevalent S-box substitution methods, i.e., single
S-box, multiple S-boxes, and multiple rounds with multiple S-boxes, especially
when handling images with highly auto-correlated pixels. To resolve the
addressed security issues, this paper proposes a new scheme SRSS-the Single
Round Single S-Box encryption scheme. SRSS uses a single S-box for substitution
in just one round to break the pixel correlations and encrypt the plaintext
image effectively. Additionally, this paper introduces a new Chaos-based Random
Operation Selection System-CROSS, which nullifies the requirement for multiple
S-boxes, thus reducing the encryption scheme's complexity. By randomly
selecting the operation to be performed on each pixel, driven by a chaotic
sequence, the proposed scheme effectively scrambles even high auto-correlation
areas. When compared to the substitution methods mentioned above, the proposed
encryption scheme exhibited exceptionally well in just a single round with a
single S-box. The close-to-ideal statistical security analysis results, i.e.,
an entropy of 7.89 and a correlation coefficient of 0.007, validate the
effectiveness of the proposed scheme. This research offers an innovative path
forward for securing images in applications requiring low computational
complexity and fast encryption and decryption speeds.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 16:32:11 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Khan",
"Muhammad Shahbaz",
""
],
[
"Ahmad",
"Jawad",
""
],
[
"Ali",
"Hisham",
""
],
[
"Pitropakis",
"Nikolaos",
""
],
[
"Al-Dubai",
"Ahmed",
""
],
[
"Ghaleb",
"Baraq",
""
],
[
"Buchanan",
"William J.",
""
]
] |
new_dataset
| 0.999723 |
2308.10846
|
Pranay Pasula
|
Pranay Pasula
|
Real World Time Series Benchmark Datasets with Distribution Shifts:
Global Crude Oil Price and Volatility
|
7 pages, 5 figures. Awarded Best Paper Runner Up / Honorable Mention
and presented as Contributed Talk at IJCAI 2023, the 32nd International Joint
Conference on Artificial Intelligence (AI4TS)
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
The scarcity of task-labeled time-series benchmarks in the financial domain
hinders progress in continual learning. Addressing this deficit would foster
innovation in this area. Therefore, we present COB, Crude Oil Benchmark
datasets. COB includes 30 years of asset prices that exhibit significant
distribution shifts and optimally generates corresponding task (i.e., regime)
labels based on these distribution shifts for the three most important crude
oils in the world. Our contributions include creating real-world benchmark
datasets by transforming asset price data into volatility proxies, fitting
models using expectation-maximization (EM), generating contextual task labels
that align with real-world events, and providing these labels as well as the
general algorithm to the public. We show that the inclusion of these task
labels universally improves performance on four continual learning algorithms,
some state-of-the-art, over multiple forecasting horizons. We hope these
benchmarks accelerate research in handling distribution shifts in real-world
data, especially due to the global importance of the assets considered. We've
made the (1) raw price data, (2) task labels generated by our approach, (3) and
code for our algorithm available at https://oilpricebenchmarks.github.io.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 16:44:56 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Pasula",
"Pranay",
""
]
] |
new_dataset
| 0.999303 |
2308.10882
|
Samuel Dooley
|
Arka Pal, Deep Karkhanis, Manley Roberts, Samuel Dooley, Arvind
Sundararajan, Siddartha Naidu
|
Giraffe: Adventures in Expanding Context Lengths in LLMs
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern large language models (LLMs) that rely on attention mechanisms are
typically trained with fixed context lengths which enforce upper limits on the
length of input sequences that they can handle at evaluation time. To use these
models on sequences longer than the train-time context length, one might employ
techniques from the growing family of context length extrapolation methods --
most of which focus on modifying the system of positional encodings used in the
attention mechanism to indicate where tokens or activations are located in the
input sequence. We conduct a wide survey of existing methods of context length
extrapolation on a base LLaMA or LLaMA 2 model, and introduce some of our own
design as well -- in particular, a new truncation strategy for modifying the
basis for the position encoding.
We test these methods using three new evaluation tasks (FreeFormQA,
AlteredNumericQA, and LongChat-Lines) as well as perplexity, which we find to
be less fine-grained as a measure of long context performance of LLMs. We
release the three tasks publicly as datasets on HuggingFace. We discover that
linear scaling is the best method for extending context length, and show that
further gains can be achieved by using longer scales at evaluation time. We
also discover promising extrapolation capabilities in the truncated basis. To
support further research in this area, we release three new 13B parameter
long-context models which we call Giraffe: 4k and 16k context models trained
from base LLaMA-13B, and a 32k context model trained from base LLaMA2-13B. We
also release the code to replicate our results.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 17:30:16 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Pal",
"Arka",
""
],
[
"Karkhanis",
"Deep",
""
],
[
"Roberts",
"Manley",
""
],
[
"Dooley",
"Samuel",
""
],
[
"Sundararajan",
"Arvind",
""
],
[
"Naidu",
"Siddartha",
""
]
] |
new_dataset
| 0.981973 |
2308.10899
|
Tignting Liao
|
Tingting Liao, Hongwei Yi, Yuliang Xiu, Jiaxaing Tang, Yangyi Huang,
Justus Thies, Michael J. Black
|
TADA! Text to Animatable Digital Avatars
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce TADA, a simple-yet-effective approach that takes textual
descriptions and produces expressive 3D avatars with high-quality geometry and
lifelike textures, that can be animated and rendered with traditional graphics
pipelines. Existing text-based character generation methods are limited in
terms of geometry and texture quality, and cannot be realistically animated due
to inconsistent alignment between the geometry and the texture, particularly in
the face region. To overcome these limitations, TADA leverages the synergy of a
2D diffusion model and an animatable parametric body model. Specifically, we
derive an optimizable high-resolution body model from SMPL-X with 3D
displacements and a texture map, and use hierarchical rendering with score
distillation sampling (SDS) to create high-quality, detailed, holistic 3D
avatars from text. To ensure alignment between the geometry and texture, we
render normals and RGB images of the generated character and exploit their
latent embeddings in the SDS training process. We further introduce various
expression parameters to deform the generated character during training,
ensuring that the semantics of our generated character remain consistent with
the original SMPL-X model, resulting in an animatable character. Comprehensive
evaluations demonstrate that TADA significantly surpasses existing approaches
on both qualitative and quantitative measures. TADA enables creation of
large-scale digital character assets that are ready for animation and
rendering, while also being easily editable through natural language. The code
will be public for research purposes.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 17:59:10 GMT"
}
] | 2023-08-22T00:00:00 |
[
[
"Liao",
"Tingting",
""
],
[
"Yi",
"Hongwei",
""
],
[
"Xiu",
"Yuliang",
""
],
[
"Tang",
"Jiaxaing",
""
],
[
"Huang",
"Yangyi",
""
],
[
"Thies",
"Justus",
""
],
[
"Black",
"Michael J.",
""
]
] |
new_dataset
| 0.988818 |
2109.09248
|
Sanyukta Deshpande
|
Sanyukta Deshpande and Milind A. Sohoni
|
Wages and Utilities in a Closed Economy
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
The broad objective of this paper is to propose a mathematical model for the
study of causes of wage inequality and relate it to choices of consumption, the
technologies of production, and the composition of labor in an economy. The
paper constructs a Simple Closed Model, or an SCM, for short, for closed
economies, in which the consumption and the production parts are clearly
separated and yet coupled. The model is established as a specialization of the
Arrow-Debreu model and its equilibria correspond directly with those of the
general Arrow-Debreu model. The formulation allows us to identify the
combinatorial data which link parameters of the economic system with its
equilibria, in particular, the impact of consumer preferences on wages. The SCM
model also allows the formulation and explicit construction of the consumer
choice game, where expressed utilities of various labor classes serve as
strategies with total or relative wages as the pay-offs. We illustrate, through
examples, the mathematical details of the consumer choice game. We show that
consumer preferences, expressed through modified utility functions, do indeed
percolate through the economy, and influence not only prices but also
production and wages. Thus, consumer choice may serve as an effective tool for
wage redistribution.
|
[
{
"version": "v1",
"created": "Sun, 19 Sep 2021 23:08:19 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Mar 2023 15:08:03 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 23:34:56 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Deshpande",
"Sanyukta",
""
],
[
"Sohoni",
"Milind A.",
""
]
] |
new_dataset
| 0.958475 |
2202.11234
|
Simone Linz
|
Michael J. Dinneen, Pankaj S. Ghodla, Simone Linz
|
A QUBO formulation for the Tree Containment problem
|
final version accepted for publication in Theoretical Computer
Science
| null |
10.1016/j.tcs.2022.09.012
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phylogenetic (evolutionary) trees and networks are leaf-labeled graphs that
are widely used to represent the evolutionary relationships between entities
such as species, languages, cancer cells, and viruses. To reconstruct and
analyze phylogenetic networks, the problem of deciding whether or not a given
rooted phylogenetic network embeds a given rooted phylogenetic tree is of
recurring interest. This problem, formally know as Tree Containment, is
NP-complete in general and polynomial-time solvable for certain classes of
phylogenetic networks. In this paper, we connect ideas from quantum computing
and phylogenetics to present an efficient Quadratic Unconstrained Binary
Optimization formulation for Tree Containment in the general setting. For an
instance (N,T) of Tree Containment, where N is a phylogenetic network with n_N
vertices and T is a phylogenetic tree with n_T vertices, the number of logical
qubits that are required for our formulation is O(n_N n_T).
|
[
{
"version": "v1",
"created": "Tue, 22 Feb 2022 23:44:17 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2022 16:26:22 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Dinneen",
"Michael J.",
""
],
[
"Ghodla",
"Pankaj S.",
""
],
[
"Linz",
"Simone",
""
]
] |
new_dataset
| 0.992005 |
2203.05072
|
Sifei Luan
|
Frank Sifei Luan, Stephanie Wang, Samyukta Yagati, Sean Kim, Kenneth
Lien, Isaac Ong, Tony Hong, SangBin Cho, Eric Liang, Ion Stoica
|
Exoshuffle: An Extensible Shuffle Architecture
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Shuffle is one of the most expensive communication primitives in distributed
data processing and is difficult to scale. Prior work addresses the scalability
challenges of shuffle by building monolithic shuffle systems. These systems are
costly to develop, and they are tightly integrated with batch processing
frameworks that offer only high-level APIs such as SQL. New applications, such
as ML training, require more flexibility and finer-grained interoperability
with shuffle. They are often unable to leverage existing shuffle optimizations.
We propose an extensible shuffle architecture. We present Exoshuffle, a
library for distributed shuffle that offers competitive performance and
scalability as well as greater flexibility than monolithic shuffle systems. We
design an architecture that decouples the shuffle control plane from the data
plane without sacrificing performance. We build Exoshuffle on Ray, a
distributed futures system for data and ML applications, and demonstrate that
we can: (1) rewrite previous shuffle optimizations as application-level
libraries with an order of magnitude less code, (2) achieve shuffle performance
and scalability competitive with monolithic shuffle systems, and break the
CloudSort record as the world's most cost-efficient sorting system, and (3)
enable new applications such as ML training to easily leverage scalable
shuffle.
|
[
{
"version": "v1",
"created": "Wed, 9 Mar 2022 22:28:49 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Mar 2022 23:21:22 GMT"
},
{
"version": "v3",
"created": "Fri, 13 May 2022 18:56:35 GMT"
},
{
"version": "v4",
"created": "Fri, 20 Jan 2023 00:45:19 GMT"
},
{
"version": "v5",
"created": "Fri, 18 Aug 2023 03:45:53 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Luan",
"Frank Sifei",
""
],
[
"Wang",
"Stephanie",
""
],
[
"Yagati",
"Samyukta",
""
],
[
"Kim",
"Sean",
""
],
[
"Lien",
"Kenneth",
""
],
[
"Ong",
"Isaac",
""
],
[
"Hong",
"Tony",
""
],
[
"Cho",
"SangBin",
""
],
[
"Liang",
"Eric",
""
],
[
"Stoica",
"Ion",
""
]
] |
new_dataset
| 0.959846 |
2204.01175
|
Seth Kulick
|
Seth Kulick, Neville Ryant, Beatrice Santorini, Joel Wallenberg, Assaf
Urieli
|
A Part-of-Speech Tagger for Yiddish
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe the construction and evaluation of a part-of-speech tagger for
Yiddish. This is the first step in a larger project of automatically assigning
part-of-speech tags and syntactic structure to Yiddish text for purposes of
linguistic research. We combine two resources for the current work - an
80K-word subset of the Penn Parsed Corpus of Historical Yiddish (PPCHY) and 650
million words of OCR'd Yiddish text from the Yiddish Book Center (YBC). Yiddish
orthography in the YBC corpus has many spelling inconsistencies, and we present
some evidence that even simple non-contextualized embeddings trained on YBC are
able to capture the relationships among spelling variants without the need to
first "standardize" the corpus. We also use YBC for continued pretraining of
contexualized embeddings, which are then integrated into a tagger model trained
and evaluated on the PPCHY. We evaluate the tagger performance on a 10-fold
cross-validation split, showing that the use of the YBC text for the
contextualized embeddings improves tagger performance. We conclude by
discussing some next steps, including the need for additional annotated
training and test data.
|
[
{
"version": "v1",
"created": "Sun, 3 Apr 2022 22:53:36 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 16:56:31 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Kulick",
"Seth",
""
],
[
"Ryant",
"Neville",
""
],
[
"Santorini",
"Beatrice",
""
],
[
"Wallenberg",
"Joel",
""
],
[
"Urieli",
"Assaf",
""
]
] |
new_dataset
| 0.999482 |
2205.08016
|
Georgios Tzimpragos
|
Jennifer Volk, Alex Wynn, Timothy Sherwood, Georgios Tzimpragos
|
Addressable Superconductor Integrated Circuit Memory from Delay Lines
|
13 pages, 8 figures, 1 table, under review
| null | null | null |
cs.ET cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in logic schemes and fabrication processes have renewed
interest in using superconductor electronics for energy-efficient computing and
quantum control processors. However, scalable superconducting memory still
poses a challenge. To address this issue, we present an alternative to
approaches that solely emphasize storage cell miniaturization by exploiting the
minimal attenuation and dispersion properties of superconducting passive
transmission lines to develop a delay-line memory system. This fully
superconducting design operates at speeds between 20 GHz and 100 GHz, with
$\pm$24\% and $\pm$13\% bias margins, respectively, and demonstrates data
densities in the 10s of Mbit/cm$^2$ with the MIT Lincoln Laboratory SC2
fabrication process. Additionally, the circulating nature of this design allows
for minimal control circuitry, eliminates the need for data splitting and
merging, and enables inexpensive implementations of sequential access and
content-addressable memories. Further advances in fabrication processes suggest
data densities of 100s of Mbit/cm$^2$ and beyond
|
[
{
"version": "v1",
"created": "Mon, 16 May 2022 23:10:10 GMT"
},
{
"version": "v2",
"created": "Sat, 20 May 2023 00:25:08 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Aug 2023 01:28:40 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Volk",
"Jennifer",
""
],
[
"Wynn",
"Alex",
""
],
[
"Sherwood",
"Timothy",
""
],
[
"Tzimpragos",
"Georgios",
""
]
] |
new_dataset
| 0.99958 |
2208.11036
|
Lishengsa Yue
|
Ou Zheng, Mohamed Abdel-Aty, Lishengsa Yue, Amr Abdelraouf, Zijin
Wang, Nada Mahmoud
|
CitySim: A Drone-Based Vehicle Trajectory Dataset for Safety Oriented
Research and Digital Twins
|
Transportation Research Record (2023)
| null |
10.1177/03611981231185768
| null |
cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of safety-oriented research and applications requires
fine-grain vehicle trajectories that not only have high accuracy, but also
capture substantial safety-critical events. However, it would be challenging to
satisfy both these requirements using the available vehicle trajectory datasets
do not have the capacity to satisfy both.This paper introduces the CitySim
dataset that has the core objective of facilitating safety-oriented research
and applications. CitySim has vehicle trajectories extracted from 1140 minutes
of drone videos recorded at 12 locations. It covers a variety of road
geometries including freeway basic segments, signalized intersections,
stop-controlled intersections, and control-free intersections. CitySim was
generated through a five-step procedure that ensured trajectory accuracy. The
five-step procedure included video stabilization, object filtering, multi-video
stitching, object detection and tracking, and enhanced error filtering.
Furthermore, CitySim provides the rotated bounding box information of a
vehicle, which was demonstrated to improve safety evaluations. Compared with
other video-based critical events, including cut-in, merge, and diverge events,
which were validated by distributions of both minimum time-to-collision and
minimum post-encroachment time. In addition, CitySim had the capability to
facilitate digital-twin-related research by providing relevant assets, such as
the recording locations' three-dimensional base maps and signal timings.
|
[
{
"version": "v1",
"created": "Tue, 23 Aug 2022 15:24:53 GMT"
},
{
"version": "v2",
"created": "Mon, 31 Jul 2023 05:04:11 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zheng",
"Ou",
""
],
[
"Abdel-Aty",
"Mohamed",
""
],
[
"Yue",
"Lishengsa",
""
],
[
"Abdelraouf",
"Amr",
""
],
[
"Wang",
"Zijin",
""
],
[
"Mahmoud",
"Nada",
""
]
] |
new_dataset
| 0.999838 |
2209.01859
|
Kazumasa Shinagawa
|
Kazumasa Shinagawa, Reo Eriguchi, Shohei Satake, Koji Nuida
|
Private Simultaneous Messages Based on Quadratic Residues
| null |
Designs, Codes and Cryptography (2023)
|
10.1007/s10623-023-01279-5
| null |
cs.CR math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
Private Simultaneous Messages (PSM) model is a minimal model for secure
multiparty computation. Feige, Kilian, and Naor (STOC 1994) and Ishai
(Cryptology and Information Security Series 2013) constructed PSM protocols
based on quadratic residues. In this paper, we define QR-PSM protocols as a
generalization of these protocols. A QR-PSM protocol is a PSM protocol whose
decoding function outputs the quadratic residuosity of what is computed from
messages. We design a QR-PSM protocol for any symmetric function $f: \{0,1\}^n
\rightarrow \{0,1\}$ of communication complexity $O(n^2)$. As far as we know,
it is the most efficient PSM protocol since the previously known best PSM
protocol was of $O(n^2\log n)$ (Beimel et al., CRYPTO 2014). We also study the
sizes of the underlying finite fields $\mathbb{F}_p$ in the protocols since the
communication complexity of a QR-PSM protocol is proportional to the bit length
of the prime $p$. In particular, we show that the $N$-th Peralta prime $P_N$,
which is used for general QR-PSM protocols, can be taken as at most
$(1+o(1))N^2 2^{2N-2}$, which improves the Peralta's known result (Mathematics
of Computation 1992) by a constant factor $(1+\sqrt{2})^2$.
|
[
{
"version": "v1",
"created": "Mon, 5 Sep 2022 09:29:42 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Sep 2022 09:16:57 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Shinagawa",
"Kazumasa",
""
],
[
"Eriguchi",
"Reo",
""
],
[
"Satake",
"Shohei",
""
],
[
"Nuida",
"Koji",
""
]
] |
new_dataset
| 0.962463 |
2209.04490
|
Srivathsan Gnanasekaran Morkonda
|
Srivathsan G. Morkonda, Sonia Chiasson, Paul C. van Oorschot
|
"Sign in with ... Privacy'': Timely Disclosure of Privacy Differences
among Web SSO Login Options
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number of login options on web sites has increased since the introduction
of web single sign-on (SSO) protocols. Web SSO services allow users to grant
web sites or relying parties (RPs) access to their personal profile information
from identity provider (IdP) accounts. Many RP sites do not provide sufficient
privacy information that could help users make informed login decisions.
Moreover, privacy differences in permission requests across login options are
largely hidden from users and are time-consuming to manually extract and
compare. In this paper, we present an empirical analysis of popular RP
implementations supporting three major IdP login options (Facebook, Google, and
Apple) and categorize RPs in the top 500 sites into four client-side code
patterns. Informed by these RP patterns, we design and implement SSOPrivateEye
(SPEye), a browser extension prototype that extracts and displays to users
permission request information from SSO login options in RPs covering the three
IdPs.
|
[
{
"version": "v1",
"created": "Fri, 9 Sep 2022 18:41:56 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 04:32:06 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Morkonda",
"Srivathsan G.",
""
],
[
"Chiasson",
"Sonia",
""
],
[
"van Oorschot",
"Paul C.",
""
]
] |
new_dataset
| 0.980446 |
2209.10225
|
Wuqu Wang
|
Wuqu Wang, Nan Liu and Wei Kang
|
Three-user D2D Coded Caching with Two Random Requesters and One Sender
|
To be submitted for possible journal publication
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In device-to-device (D2D) coded caching problems, it is possible that not all
users will make file requests in the delivery phase. Hence, we propose a new
D2D centralized coded caching problem, named the 3-user D2D coded caching with
two random requesters and one sender (2RR1S), where in the delivery phase, any
two of the three users will make file requests, and the user that does not make
any file request is the designated sender. We find the optimal caching and
delivery scheme, denoted as the 2RRIS scheme, for any number of files N by
proving matching converse and achievability results. It is shown that coded
cache placement is needed to achieve the optimal performance. Furthermore, the
optimal rate-memory tradeoff has a uniform expression for N>=4 and different
expressions for N=2 and 3.
To examine the usefulness of the proposed model and scheme, we adapt the
2RR1S scheme to three scenarios. The first one is the 3-user D2D coded caching
model proposed by Ji et al. By characterizing the optimal rate-memory tradeoff
for the 3-user D2D coded caching when N=2, which was previously unknown, we
show that the adapted 2RR1S scheme is in fact optimal for the 3-user D2D coded
caching problem when N=2 and the cache size is medium. The benefit comes from
coded cache placement which is missing from existing D2D coded caching schemes.
The second scenario is where in the delivery phase, each user makes a file
request randomly and independently with the same probability p. We call this
model the request-random D2D coded caching problem. Adapting the 2RR1S scheme
to this scenario, we show the superiority of our adapted scheme over other
existing D2D coded caching schemes for medium to large cache size. The third
scenario is the K-user D2D coded caching with K-s random requesters and s
senders problem, for which an achievability result is obtained by generalizing
the 2RR1S scheme.
|
[
{
"version": "v1",
"created": "Wed, 21 Sep 2022 09:41:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Nov 2022 04:52:55 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2023 17:02:22 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Apr 2023 02:34:14 GMT"
},
{
"version": "v5",
"created": "Thu, 17 Aug 2023 02:54:54 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Wang",
"Wuqu",
""
],
[
"Liu",
"Nan",
""
],
[
"Kang",
"Wei",
""
]
] |
new_dataset
| 0.998913 |
2211.10181
|
Lingyi Hong
|
Lingyi Hong, Wenchao Chen, Zhongying Liu, Wei Zhang, Pinxue Guo,
Zhaoyu Chen, Wenqiang Zhang
|
LVOS: A Benchmark for Long-term Video Object Segmentation
|
Accepted by ICCV 2023. Project page:
https://lingyihongfd.github.io/lvos.github.io/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing video object segmentation (VOS) benchmarks focus on short-term
videos which just last about 3-5 seconds and where objects are visible most of
the time. These videos are poorly representative of practical applications, and
the absence of long-term datasets restricts further investigation of VOS on the
application in realistic scenarios. So, in this paper, we present a new
benchmark dataset named \textbf{LVOS}, which consists of 220 videos with a
total duration of 421 minutes. To the best of our knowledge, LVOS is the first
densely annotated long-term VOS dataset. The videos in our LVOS last 1.59
minutes on average, which is 20 times longer than videos in existing VOS
datasets. Each video includes various attributes, especially challenges
deriving from the wild, such as long-term reappearing and cross-temporal
similar objeccts.Based on LVOS, we assess existing video object segmentation
algorithms and propose a Diverse Dynamic Memory network (DDMemory) that
consists of three complementary memory banks to exploit temporal information
adequately. The experimental results demonstrate the strength and weaknesses of
prior methods, pointing promising directions for further study. Data and code
are available at https://lingyihongfd.github.io/lvos.github.io/.
|
[
{
"version": "v1",
"created": "Fri, 18 Nov 2022 11:59:37 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 12:35:59 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Hong",
"Lingyi",
""
],
[
"Chen",
"Wenchao",
""
],
[
"Liu",
"Zhongying",
""
],
[
"Zhang",
"Wei",
""
],
[
"Guo",
"Pinxue",
""
],
[
"Chen",
"Zhaoyu",
""
],
[
"Zhang",
"Wenqiang",
""
]
] |
new_dataset
| 0.999823 |
2212.04675
|
Qi Jiang
|
Qi Jiang, Hao Sun, Xi Zhang
|
SemanticBEVFusion: Rethink LiDAR-Camera Fusion in Unified Bird's-Eye
View Representation for 3D Object Detection
|
The first two authors contributed equally to this work
|
The 2023 IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS 2023)
| null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR and camera are two essential sensors for 3D object detection in
autonomous driving. LiDAR provides accurate and reliable 3D geometry
information while the camera provides rich texture with color. Despite the
increasing popularity of fusing these two complementary sensors, the challenge
remains in how to effectively fuse 3D LiDAR point cloud with 2D camera images.
Recent methods focus on point-level fusion which paints the LiDAR point cloud
with camera features in the perspective view or bird's-eye view (BEV)-level
fusion which unifies multi-modality features in the BEV representation. In this
paper, we rethink these previous fusion strategies and analyze their
information loss and influences on geometric and semantic features. We present
SemanticBEVFusion to deeply fuse camera features with LiDAR features in a
unified BEV representation while maintaining per-modality strengths for 3D
object detection. Our method achieves state-of-the-art performance on the
large-scale nuScenes dataset, especially for challenging distant objects. The
code will be made publicly available.
|
[
{
"version": "v1",
"created": "Fri, 9 Dec 2022 05:48:58 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Jiang",
"Qi",
""
],
[
"Sun",
"Hao",
""
],
[
"Zhang",
"Xi",
""
]
] |
new_dataset
| 0.998542 |
2212.05566
|
Li Lin
|
Li Lin, Linkai Peng, Huaqing He, Pujin Cheng, Jiewei Wu, Kenneth K. Y.
Wong, Xiaoying Tang
|
YoloCurvSeg: You Only Label One Noisy Skeleton for Vessel-style
Curvilinear Structure Segmentation
|
20 pages, 15 figures, MEDIA accepted
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Weakly-supervised learning (WSL) has been proposed to alleviate the conflict
between data annotation cost and model performance through employing
sparsely-grained (i.e., point-, box-, scribble-wise) supervision and has shown
promising performance, particularly in the image segmentation field. However,
it is still a very challenging task due to the limited supervision, especially
when only a small number of labeled samples are available. Additionally, almost
all existing WSL segmentation methods are designed for star-convex structures
which are very different from curvilinear structures such as vessels and
nerves. In this paper, we propose a novel sparsely annotated segmentation
framework for curvilinear structures, named YoloCurvSeg. A very essential
component of YoloCurvSeg is image synthesis. Specifically, a background
generator delivers image backgrounds that closely match the real distributions
through inpainting dilated skeletons. The extracted backgrounds are then
combined with randomly emulated curves generated by a Space Colonization
Algorithm-based foreground generator and through a multilayer patch-wise
contrastive learning synthesizer. In this way, a synthetic dataset with both
images and curve segmentation labels is obtained, at the cost of only one or a
few noisy skeleton annotations. Finally, a segmenter is trained with the
generated dataset and possibly an unlabeled dataset. The proposed YoloCurvSeg
is evaluated on four publicly available datasets (OCTA500, CORN, DRIVE and
CHASEDB1) and the results show that YoloCurvSeg outperforms state-of-the-art
WSL segmentation methods by large margins. With only one noisy skeleton
annotation (respectively 0.14\%, 0.03\%, 1.40\%, and 0.65\% of the full
annotation), YoloCurvSeg achieves more than 97\% of the fully-supervised
performance on each dataset. Code and datasets will be released at
https://github.com/llmir/YoloCurvSeg.
|
[
{
"version": "v1",
"created": "Sun, 11 Dec 2022 18:15:40 GMT"
},
{
"version": "v2",
"created": "Tue, 27 Dec 2022 16:13:17 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Jan 2023 17:09:00 GMT"
},
{
"version": "v4",
"created": "Sun, 7 May 2023 07:44:04 GMT"
},
{
"version": "v5",
"created": "Fri, 18 Aug 2023 15:43:37 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Lin",
"Li",
""
],
[
"Peng",
"Linkai",
""
],
[
"He",
"Huaqing",
""
],
[
"Cheng",
"Pujin",
""
],
[
"Wu",
"Jiewei",
""
],
[
"Wong",
"Kenneth K. Y.",
""
],
[
"Tang",
"Xiaoying",
""
]
] |
new_dataset
| 0.998654 |
2212.05680
|
Chawin Sitawarin
|
Nabeel Hingun, Chawin Sitawarin, Jerry Li, David Wagner
|
REAP: A Large-Scale Realistic Adversarial Patch Benchmark
|
ICCV 2023. Code and benchmark can be found at
https://github.com/wagner-group/reap-benchmark
| null | null | null |
cs.CV cs.AI cs.CR cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Machine learning models are known to be susceptible to adversarial
perturbation. One famous attack is the adversarial patch, a sticker with a
particularly crafted pattern that makes the model incorrectly predict the
object it is placed on. This attack presents a critical threat to
cyber-physical systems that rely on cameras such as autonomous cars. Despite
the significance of the problem, conducting research in this setting has been
difficult; evaluating attacks and defenses in the real world is exceptionally
costly while synthetic data are unrealistic. In this work, we propose the REAP
(REalistic Adversarial Patch) benchmark, a digital benchmark that allows the
user to evaluate patch attacks on real images, and under real-world conditions.
Built on top of the Mapillary Vistas dataset, our benchmark contains over
14,000 traffic signs. Each sign is augmented with a pair of geometric and
lighting transformations, which can be used to apply a digitally generated
patch realistically onto the sign. Using our benchmark, we perform the first
large-scale assessments of adversarial patch attacks under realistic
conditions. Our experiments suggest that adversarial patch attacks may present
a smaller threat than previously believed and that the success rate of an
attack on simpler digital simulations is not predictive of its actual
effectiveness in practice. We release our benchmark publicly at
https://github.com/wagner-group/reap-benchmark.
|
[
{
"version": "v1",
"created": "Mon, 12 Dec 2022 03:35:05 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 10:46:35 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Hingun",
"Nabeel",
""
],
[
"Sitawarin",
"Chawin",
""
],
[
"Li",
"Jerry",
""
],
[
"Wagner",
"David",
""
]
] |
new_dataset
| 0.999483 |
2302.00431
|
Monisha Singh
|
Monisha Singh, Ximi Hoque, Donghuo Zeng, Yanan Wang, Kazushi Ikeda,
Abhinav Dhall
|
Do I Have Your Attention: A Large Scale Engagement Prediction Dataset
and Baselines
| null | null | null | null |
cs.CV cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The degree of concentration, enthusiasm, optimism, and passion displayed by
individual(s) while interacting with a machine is referred to as `user
engagement'. Engagement comprises of behavioral, cognitive, and affect related
cues. To create engagement prediction systems that can work in real-world
conditions, it is quintessential to learn from rich, diverse datasets. To this
end, a large scale multi-faceted engagement in the wild dataset EngageNet is
proposed. 31 hours duration data of 127 participants representing different
illumination conditions are recorded. Thorough experiments are performed
exploring the applicability of different features, action units, eye gaze, head
pose, and MARLIN. Data from user interactions (question-answer) are analyzed to
understand the relationship between effective learning and user engagement. To
further validate the rich nature of the dataset, evaluation is also performed
on the EngageWild dataset. The experiments show the usefulness of the proposed
dataset. The code, models, and dataset link are publicly available at
https://github.com/engagenet/engagenet_baselines.
|
[
{
"version": "v1",
"created": "Wed, 1 Feb 2023 13:25:54 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 09:50:29 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Singh",
"Monisha",
""
],
[
"Hoque",
"Ximi",
""
],
[
"Zeng",
"Donghuo",
""
],
[
"Wang",
"Yanan",
""
],
[
"Ikeda",
"Kazushi",
""
],
[
"Dhall",
"Abhinav",
""
]
] |
new_dataset
| 0.99983 |
2303.13538
|
Jithin Jagannath
|
Anu Jagannath, Zackary Kane, Jithin Jagannath
|
Bluetooth and WiFi Dataset for Real World RF Fingerprinting of
Commercial Devices
|
Revision Under Review
| null | null | null |
cs.NI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
RF fingerprinting is emerging as a physical layer security scheme to identify
illegitimate and/or unauthorized emitters sharing the RF spectrum. However, due
to the lack of publicly accessible real-world datasets, most research focuses
on generating synthetic waveforms with software-defined radios (SDRs) which are
not suited for practical deployment settings. On other hand, the limited
datasets that are available focus only on chipsets that generate only one kind
of waveform. Commercial off-the-shelf (COTS) combo chipsets that support two
wireless standards (for example WiFi and Bluetooth) over a shared dual-band
antenna such as those found in laptops, adapters, wireless chargers, Raspberry
Pis, among others are becoming ubiquitous in the IoT realm. Hence, to keep up
with the modern IoT environment, there is a pressing need for real-world open
datasets capturing emissions from these combo chipsets transmitting
heterogeneous communication protocols. To this end, we capture the first known
emissions from the COTS IoT chipsets transmitting WiFi and Bluetooth under two
different time frames. The different time frames are essential to rigorously
evaluate the generalization capability of the models. To ensure widespread use,
each capture within the comprehensive 72 GB dataset is long enough (40
MSamples) to support diverse input tensor lengths and formats. Finally, the
dataset also comprises emissions at varying signal powers to account for the
feeble to high signal strength emissions as encountered in a real-world
setting.
|
[
{
"version": "v1",
"created": "Wed, 15 Mar 2023 13:32:11 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 13:59:00 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 13:25:47 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Jagannath",
"Anu",
""
],
[
"Kane",
"Zackary",
""
],
[
"Jagannath",
"Jithin",
""
]
] |
new_dataset
| 0.999854 |
2303.16633
|
Ren\'e Heinrich
|
Ren\'e Heinrich, Christoph Scholz, Stephan Vogt, Malte Lehna
|
Targeted Adversarial Attacks on Wind Power Forecasts
|
21 pages, including appendix, 12 figures
| null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, researchers proposed a variety of deep learning models for
wind power forecasting. These models predict the wind power generation of wind
farms or entire regions more accurately than traditional machine learning
algorithms or physical models. However, latest research has shown that deep
learning models can often be manipulated by adversarial attacks. Since wind
power forecasts are essential for the stability of modern power systems, it is
important to protect them from this threat. In this work, we investigate the
vulnerability of two different forecasting models to targeted, semi-targeted,
and untargeted adversarial attacks. We consider a Long Short-Term Memory (LSTM)
network for predicting the power generation of individual wind farms and a
Convolutional Neural Network (CNN) for forecasting the wind power generation
throughout Germany. Moreover, we propose the Total Adversarial Robustness Score
(TARS), an evaluation metric for quantifying the robustness of regression
models to targeted and semi-targeted adversarial attacks. It assesses the
impact of attacks on the model's performance, as well as the extent to which
the attacker's goal was achieved, by assigning a score between 0 (very
vulnerable) and 1 (very robust). In our experiments, the LSTM forecasting model
was fairly robust and achieved a TARS value of over 0.78 for all adversarial
attacks investigated. The CNN forecasting model only achieved TARS values below
0.10 when trained ordinarily, and was thus very vulnerable. Yet, its robustness
could be significantly improved by adversarial training, which always resulted
in a TARS above 0.46.
|
[
{
"version": "v1",
"created": "Wed, 29 Mar 2023 12:43:36 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 18:44:16 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Heinrich",
"René",
""
],
[
"Scholz",
"Christoph",
""
],
[
"Vogt",
"Stephan",
""
],
[
"Lehna",
"Malte",
""
]
] |
new_dataset
| 0.99676 |
2304.00670
|
Youngseok Kim
|
Youngseok Kim, Sanmin Kim, Juyeb Shin, Jun Won Choi, Dongsuk Kum
|
CRN: Camera Radar Net for Accurate, Robust, Efficient 3D Perception
|
IEEE/CVF International Conference on Computer Vision (ICCV'23)
| null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomous driving requires an accurate and fast 3D perception system that
includes 3D object detection, tracking, and segmentation. Although recent
low-cost camera-based approaches have shown promising results, they are
susceptible to poor illumination or bad weather conditions and have a large
localization error. Hence, fusing camera with low-cost radar, which provides
precise long-range measurement and operates reliably in all environments, is
promising but has not yet been thoroughly investigated. In this paper, we
propose Camera Radar Net (CRN), a novel camera-radar fusion framework that
generates a semantically rich and spatially accurate bird's-eye-view (BEV)
feature map for various tasks. To overcome the lack of spatial information in
an image, we transform perspective view image features to BEV with the help of
sparse but accurate radar points. We further aggregate image and radar feature
maps in BEV using multi-modal deformable attention designed to tackle the
spatial misalignment between inputs. CRN with real-time setting operates at 20
FPS while achieving comparable performance to LiDAR detectors on nuScenes, and
even outperforms at a far distance on 100m setting. Moreover, CRN with offline
setting yields 62.4% NDS, 57.5% mAP on nuScenes test set and ranks first among
all camera and camera-radar 3D object detectors.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 00:47:37 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 02:27:43 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Kim",
"Youngseok",
""
],
[
"Kim",
"Sanmin",
""
],
[
"Shin",
"Juyeb",
""
],
[
"Choi",
"Jun Won",
""
],
[
"Kum",
"Dongsuk",
""
]
] |
new_dataset
| 0.998521 |
2304.01168
|
Tianqi Wang
|
Tianqi Wang, Sukmin Kim, Wenxuan Ji, Enze Xie, Chongjian Ge, Junsong
Chen, Zhenguo Li, Ping Luo
|
DeepAccident: A Motion and Accident Prediction Benchmark for V2X
Autonomous Driving
| null | null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Safety is the primary priority of autonomous driving. Nevertheless, no
published dataset currently supports the direct and explainable safety
evaluation for autonomous driving. In this work, we propose DeepAccident, a
large-scale dataset generated via a realistic simulator containing diverse
accident scenarios that frequently occur in real-world driving. The proposed
DeepAccident dataset includes 57K annotated frames and 285K annotated samples,
approximately 7 times more than the large-scale nuScenes dataset with 40k
annotated samples. In addition, we propose a new task, end-to-end motion and
accident prediction, which can be used to directly evaluate the accident
prediction ability for different autonomous driving algorithms. Furthermore,
for each scenario, we set four vehicles along with one infrastructure to record
data, thus providing diverse viewpoints for accident scenarios and enabling V2X
(vehicle-to-everything) research on perception and prediction tasks. Finally,
we present a baseline V2X model named V2XFormer that demonstrates superior
performance for motion and accident prediction and 3D object detection compared
to the single-vehicle model.
|
[
{
"version": "v1",
"created": "Mon, 3 Apr 2023 17:37:00 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Apr 2023 07:30:02 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Aug 2023 02:38:06 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Wang",
"Tianqi",
""
],
[
"Kim",
"Sukmin",
""
],
[
"Ji",
"Wenxuan",
""
],
[
"Xie",
"Enze",
""
],
[
"Ge",
"Chongjian",
""
],
[
"Chen",
"Junsong",
""
],
[
"Li",
"Zhenguo",
""
],
[
"Luo",
"Ping",
""
]
] |
new_dataset
| 0.999771 |
2304.03858
|
Wiebke (Toussaint) Hutiri
|
Casandra Rusti, Anna Leschanowsky, Carolyn Quinlan, Michaela Pnacek,
Lauriane Gorce, Wiebke Hutiri
|
Benchmark Dataset Dynamics, Bias and Privacy Challenges in Voice
Biometrics Research
|
8 pages (10 with References)
| null | null | null |
cs.CY cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Speaker recognition is a widely used voice-based biometric technology with
applications in various industries, including banking, education, recruitment,
immigration, law enforcement, healthcare, and well-being. However, while
dataset evaluations and audits have improved data practices in face recognition
and other computer vision tasks, the data practices in speaker recognition have
gone largely unquestioned. Our research aims to address this gap by exploring
how dataset usage has evolved over time and what implications this has on bias,
fairness and privacy in speaker recognition systems. Previous studies have
demonstrated the presence of historical, representation, and measurement biases
in popular speaker recognition benchmarks. In this paper, we present a
longitudinal study of speaker recognition datasets used for training and
evaluation from 2012 to 2021. We survey close to 700 papers to investigate
community adoption of datasets and changes in usage over a crucial time period
where speaker recognition approaches transitioned to the widespread adoption of
deep neural networks. Our study identifies the most commonly used datasets in
the field, examines their usage patterns, and assesses their attributes that
affect bias, fairness, and other ethical concerns. Our findings suggest areas
for further research on the ethics and fairness of speaker recognition
technology.
|
[
{
"version": "v1",
"created": "Fri, 7 Apr 2023 23:05:37 GMT"
},
{
"version": "v2",
"created": "Wed, 3 May 2023 19:32:29 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Aug 2023 15:10:17 GMT"
},
{
"version": "v4",
"created": "Fri, 18 Aug 2023 08:05:24 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Rusti",
"Casandra",
""
],
[
"Leschanowsky",
"Anna",
""
],
[
"Quinlan",
"Carolyn",
""
],
[
"Pnacek",
"Michaela",
""
],
[
"Gorce",
"Lauriane",
""
],
[
"Hutiri",
"Wiebke",
""
]
] |
new_dataset
| 0.991137 |
2304.09445
|
Ray Li
|
Omar Alrabiah, Venkatesan Guruswami, Ray Li
|
Randomly punctured Reed--Solomon codes achieve list-decoding capacity
over linear-sized fields
| null | null | null | null |
cs.IT cs.DS math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reed--Solomon codes are a classic family of error-correcting codes consisting
of evaluations of low-degree polynomials over a finite field on some sequence
of distinct field elements. They are widely known for their optimal
unique-decoding capabilities, but their list-decoding capabilities are not
fully understood. Given the prevalence of Reed-Solomon codes, a fundamental
question in coding theory is determining if Reed--Solomon codes can optimally
achieve list-decoding capacity.
A recent breakthrough by Brakensiek, Gopi, and Makam, established that
Reed--Solomon codes are combinatorially list-decodable all the way to capacity.
However, their results hold for randomly-punctured Reed--Solomon codes over an
exponentially large field size $2^{O(n)}$, where $n$ is the block length of the
code. A natural question is whether Reed--Solomon codes can still achieve
capacity over smaller fields. Recently, Guo and Zhang showed that Reed--Solomon
codes are list-decodable to capacity with field size $O(n^2)$. We show that
Reed--Solomon codes are list-decodable to capacity with linear field size
$O(n)$, which is optimal up to the constant factor. We also give evidence that
the ratio between the alphabet size $q$ and code length $n$ cannot be bounded
by an absolute constant. Our techniques also show that random linear codes are
list-decodable up to (the alphabet-independent) capacity with optimal list-size
$O(1/\varepsilon)$ and near-optimal alphabet size $2^{O(1/\varepsilon^2)}$,
where $\varepsilon$ is the gap to capacity. As far as we are aware,
list-decoding up to capacity with optimal list-size $O(1/\varepsilon)$ was
previously not known to be achievable with any linear code over a constant
alphabet size (even non-constructively). Our proofs are based on the ideas of
Guo and Zhang, and we additionally exploit symmetries of reduced intersection
matrices.
|
[
{
"version": "v1",
"created": "Wed, 19 Apr 2023 06:28:54 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jul 2023 21:41:40 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jul 2023 17:35:17 GMT"
},
{
"version": "v4",
"created": "Fri, 18 Aug 2023 17:39:42 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Alrabiah",
"Omar",
""
],
[
"Guruswami",
"Venkatesan",
""
],
[
"Li",
"Ray",
""
]
] |
new_dataset
| 0.998232 |
2304.12372
|
Christophe Bolduc
|
Christophe Bolduc, Justine Giroux, Marc H\'ebert, Claude Demers, and
Jean-Fran\c{c}ois Lalonde
|
Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance
and Color Prediction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Light plays an important role in human well-being. However, most computer
vision tasks treat pixels without considering their relationship to physical
luminance. To address this shortcoming, we introduce the Laval Photometric
Indoor HDR Dataset, the first large-scale photometrically calibrated dataset of
high dynamic range 360{\deg} panoramas. Our key contribution is the calibration
of an existing, uncalibrated HDR Dataset. We do so by accurately capturing RAW
bracketed exposures simultaneously with a professional photometric measurement
device (chroma meter) for multiple scenes across a variety of lighting
conditions. Using the resulting measurements, we establish the calibration
coefficients to be applied to the HDR images. The resulting dataset is a rich
representation of indoor scenes which displays a wide range of illuminance and
color, and varied types of light sources. We exploit the dataset to introduce
three novel tasks, where: per-pixel luminance, per-pixel color and planar
illuminance can be predicted from a single input image. Finally, we also
capture another smaller photometric dataset with a commercial 360{\deg} camera,
to experiment on generalization across cameras. We are optimistic that the
release of our datasets and associated code will spark interest in physically
accurate light estimation within the community. Dataset and code are available
at https://lvsn.github.io/beyondthepixel/.
|
[
{
"version": "v1",
"created": "Mon, 24 Apr 2023 18:10:25 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 06:32:02 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Bolduc",
"Christophe",
""
],
[
"Giroux",
"Justine",
""
],
[
"Hébert",
"Marc",
""
],
[
"Demers",
"Claude",
""
],
[
"Lalonde",
"Jean-François",
""
]
] |
new_dataset
| 0.99685 |
2304.13445
|
Cheng Sun
|
Cheng Sun, Guangyan Cai, Zhengqin Li, Kai Yan, Cheng Zhang, Carl
Marshall, Jia-Bin Huang, Shuang Zhao, Zhao Dong
|
Neural-PBIR Reconstruction of Shape, Material, and Illumination
|
ICCV 2023. Project page at https://neural-pbir.github.io/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Reconstructing the shape and spatially varying surface appearances of a
physical-world object as well as its surrounding illumination based on 2D
images (e.g., photographs) of the object has been a long-standing problem in
computer vision and graphics. In this paper, we introduce an accurate and
highly efficient object reconstruction pipeline combining neural based object
reconstruction and physics-based inverse rendering (PBIR). Our pipeline firstly
leverages a neural SDF based shape reconstruction to produce high-quality but
potentially imperfect object shape. Then, we introduce a neural material and
lighting distillation stage to achieve high-quality predictions for material
and illumination. In the last stage, initialized by the neural predictions, we
perform PBIR to refine the initial results and obtain the final high-quality
reconstruction of object shape, material, and illumination. Experimental
results demonstrate our pipeline significantly outperforms existing methods
quality-wise and performance-wise.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 11:02:04 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Jul 2023 07:49:26 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 04:16:21 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Sun",
"Cheng",
""
],
[
"Cai",
"Guangyan",
""
],
[
"Li",
"Zhengqin",
""
],
[
"Yan",
"Kai",
""
],
[
"Zhang",
"Cheng",
""
],
[
"Marshall",
"Carl",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Zhao",
"Shuang",
""
],
[
"Dong",
"Zhao",
""
]
] |
new_dataset
| 0.987805 |
2305.01406
|
Hisayoshi Muramatsu
|
Hisayoshi Muramatsu, Keigo Kitagawa, Jun Watanabe, Ryohei Hisashiki
|
A Mobile Quad-Arm Robot ARMS: Wheel-Legged Tripedal Locomotion and
Quad-Arm Manipulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article proposes a mobile quad-arm robot: ARMS that unifies wheel-legged
tripedal locomotion, wheeled locomotion, and quad-arm manipulation. The four
arms have different mechanics and are designed to be general-purpose arms to
enable the hybrid wheel-legged locomotion and manipulation. The
three-degree-of-freedom (DOF) front arm has an active wheel, which is used for
wheel-legged tripedal walking and wheel driving with passive wheels attached to
the torso. The three-DOF rear arms are series elastic arms, which are used for
wheel-legged tripedal walking, object grasping, and manipulation. The two-DOF
upper arm is used for manipulation only; its position and orientation are
determined by coordinating all arms. Each motor is controlled by an angle
controller and trajectory modification with angle, angular velocity, angular
acceleration, and torque constraints. ARMS was experimentally validated on the
basis of the following six tasks: joint control, wheel-legged walking, wheel
driving, wheel driving with grasping, wheel-legged walking on an uneven
terrain, and carrying a bag.
|
[
{
"version": "v1",
"created": "Tue, 2 May 2023 13:27:42 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 00:41:46 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Muramatsu",
"Hisayoshi",
""
],
[
"Kitagawa",
"Keigo",
""
],
[
"Watanabe",
"Jun",
""
],
[
"Hisashiki",
"Ryohei",
""
]
] |
new_dataset
| 0.999764 |
2305.09566
|
Tomas Berriel Martins
|
T. Berriel Martins and Javier Civera
|
Ray-Patch: An Efficient Querying for Light Field Transformers
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper we propose the Ray-Patch querying, a novel model to efficiently
query transformers to decode implicit representations into target views. Our
Ray-Patch decoding reduces the computational footprint and increases inference
speed up to one order of magnitude compared to previous models, without losing
global attention, and hence maintaining specific task metrics. The key idea of
our novel querying is to split the target image into a set of patches, then
querying the transformer for each patch to extract a set of feature vectors,
which are finally decoded into the target image using convolutional layers. Our
experimental results, implementing Ray-Patch in 3 different architectures and
evaluating it in 2 different tasks and datasets, demonstrate and quantify the
effectiveness of our method, specifically a notable boost in rendering speed
for the same task metrics.
|
[
{
"version": "v1",
"created": "Tue, 16 May 2023 16:03:27 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 09:39:05 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Martins",
"T. Berriel",
""
],
[
"Civera",
"Javier",
""
]
] |
new_dataset
| 0.988941 |
2305.12031
|
Augustin Toma
|
Augustin Toma, Patrick R. Lawler, Jimmy Ba, Rahul G. Krishnan, Barry
B. Rubin, Bo Wang
|
Clinical Camel: An Open Expert-Level Medical Language Model with
Dialogue-Based Knowledge Encoding
|
for model weights, see https://huggingface.co/wanglab/
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We present Clinical Camel, an open large language model (LLM) explicitly
tailored for clinical research. Fine-tuned from LLaMA-2 using QLoRA, Clinical
Camel achieves state-of-the-art performance across medical benchmarks among
openly available medical LLMs. Leveraging efficient single-GPU training,
Clinical Camel surpasses GPT-3.5 in five-shot evaluations on all assessed
benchmarks, including 64.3% on the USMLE Sample Exam (compared to 58.5% for
GPT-3.5), 77.9% on PubMedQA (compared to 60.2%), 60.7% on MedQA (compared to
53.6%), and 54.2% on MedMCQA (compared to 51.0%). In addition to these
benchmarks, Clinical Camel demonstrates its broader capabilities, such as
synthesizing plausible clinical notes. This work introduces dialogue-based
knowledge encoding, a novel method to synthesize conversational data from dense
medical texts. While benchmark results are encouraging, extensive and rigorous
human evaluation across diverse clinical scenarios is imperative to ascertain
safety before implementation. By openly sharing Clinical Camel, we hope to
foster transparent and collaborative research, working towards the safe
integration of LLMs within the healthcare domain. Significant challenges
concerning reliability, bias, and the potential for outdated knowledge persist.
Nonetheless, the transparency provided by an open approach reinforces the
scientific rigor essential for future clinical applications.
|
[
{
"version": "v1",
"created": "Fri, 19 May 2023 23:07:09 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 17:19:02 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Toma",
"Augustin",
""
],
[
"Lawler",
"Patrick R.",
""
],
[
"Ba",
"Jimmy",
""
],
[
"Krishnan",
"Rahul G.",
""
],
[
"Rubin",
"Barry B.",
""
],
[
"Wang",
"Bo",
""
]
] |
new_dataset
| 0.995674 |
2306.05888
|
Xuesong Chen
|
Xuesong Chen, Shaoshuai Shi, Chao Zhang, Benjin Zhu, Qiang Wang, Ka
Chun Cheung, Simon See, Hongsheng Li
|
TrajectoryFormer: 3D Object Tracking Transformer with Predictive
Trajectory Hypotheses
|
Accepted by ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D multi-object tracking (MOT) is vital for many applications including
autonomous driving vehicles and service robots. With the commonly used
tracking-by-detection paradigm, 3D MOT has made important progress in recent
years. However, these methods only use the detection boxes of the current frame
to obtain trajectory-box association results, which makes it impossible for the
tracker to recover objects missed by the detector. In this paper, we present
TrajectoryFormer, a novel point-cloud-based 3D MOT framework. To recover the
missed object by detector, we generates multiple trajectory hypotheses with
hybrid candidate boxes, including temporally predicted boxes and current-frame
detection boxes, for trajectory-box association. The predicted boxes can
propagate object's history trajectory information to the current frame and thus
the network can tolerate short-term miss detection of the tracked objects. We
combine long-term object motion feature and short-term object appearance
feature to create per-hypothesis feature embedding, which reduces the
computational overhead for spatial-temporal encoding. Additionally, we
introduce a Global-Local Interaction Module to conduct information interaction
among all hypotheses and models their spatial relations, leading to accurate
estimation of hypotheses. Our TrajectoryFormer achieves state-of-the-art
performance on the Waymo 3D MOT benchmarks. Code is available at
https://github.com/poodarchu/EFG .
|
[
{
"version": "v1",
"created": "Fri, 9 Jun 2023 13:31:50 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 08:31:15 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Chen",
"Xuesong",
""
],
[
"Shi",
"Shaoshuai",
""
],
[
"Zhang",
"Chao",
""
],
[
"Zhu",
"Benjin",
""
],
[
"Wang",
"Qiang",
""
],
[
"Cheung",
"Ka Chun",
""
],
[
"See",
"Simon",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.999334 |
2306.09756
|
Tobias Pfandzelter
|
Tobias Pfandzelter and David Bermbach
|
Can Orbital Servers Provide Mars-Wide Edge Computing?
|
1st ACM MobiCom Workshop on Satellite Networking and Computing
(SatCom '23)
| null |
10.1145/3570361.3614239
| null |
cs.DC astro-ph.IM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human landing, exploration and settlement on Mars will require local compute
resources at the Mars edge. Landing such resources on Mars is an expensive
endeavor. Instead, in this paper we lay out how concepts from low-Earth orbit
edge computing may be applied to Mars edge computing. This could lower
launching costs of compute resources for Mars while also providing Mars-wide
networking and compute coverage. We propose a possible Mars compute
constellation, discuss applications, analyze feasibility, and raise research
questions for future work.
|
[
{
"version": "v1",
"created": "Fri, 16 Jun 2023 10:41:53 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Aug 2023 09:35:07 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Aug 2023 10:21:12 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Pfandzelter",
"Tobias",
""
],
[
"Bermbach",
"David",
""
]
] |
new_dataset
| 0.997304 |
2306.12624
|
Tianle Li
|
Tianle Li, Max Ku, Cong Wei, Wenhu Chen
|
DreamEdit: Subject-driven Image Editing
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Subject-driven image generation aims at generating images containing
customized subjects, which has recently drawn enormous attention from the
research community. However, the previous works cannot precisely control the
background and position of the target subject. In this work, we aspire to fill
the void and propose two novel subject-driven sub-tasks, i.e., Subject
Replacement and Subject Addition. The new tasks are challenging in multiple
aspects: replacing a subject with a customized one can change its shape,
texture, and color, while adding a target subject to a designated position in a
provided scene necessitates a context-aware posture. To conquer these two novel
tasks, we first manually curate a new dataset DreamEditBench containing 22
different types of subjects, and 440 source images with different difficulty
levels. We plan to host DreamEditBench as a platform and hire trained
evaluators for standard human evaluation. We also devise an innovative method
DreamEditor to resolve these tasks by performing iterative generation, which
enables a smooth adaptation to the customized subject. In this project, we
conduct automatic and human evaluations to understand the performance of
DreamEditor and baselines on DreamEditBench. For Subject Replacement, we found
that the existing models are sensitive to the shape and color of the original
subject. The model failure rate will dramatically increase when the source and
target subjects are highly different. For Subject Addition, we found that the
existing models cannot easily blend the customized subjects into the background
smoothly, leading to noticeable artifacts in the generated image. We hope
DreamEditBench can become a standard platform to enable future investigations
toward building more controllable subject-driven image editing. Our project
homepage is https://dreameditbenchteam.github.io/.
|
[
{
"version": "v1",
"created": "Thu, 22 Jun 2023 01:29:06 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 18:30:35 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Li",
"Tianle",
""
],
[
"Ku",
"Max",
""
],
[
"Wei",
"Cong",
""
],
[
"Chen",
"Wenhu",
""
]
] |
new_dataset
| 0.999425 |
2306.16080
|
Guoqiang Yang
|
Guoqiang Yang, Xiaowen Chang, Zitong Wang and Min Yang
|
A serial dual-channel library occupancy detection system based on Faster
RCNN
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The phenomenon of seat occupancy in university libraries is a prevalent
issue. However, existing solutions, such as software-based seat reservations
and sensors-based occupancy detection, have proven to be inadequate in
effectively addressing this problem. In this study, we propose a novel
approach: a serial dual-channel object detection model based on Faster RCNN.
This model is designed to discern all instances of occupied seats within the
library and continuously update real-time information regarding seat occupancy
status. To train the neural network, a distinctive dataset is utilized, which
blends virtual images generated using Unreal Engine 5 (UE5) with real-world
images. Notably, our test results underscore the remarkable performance uplift
attained through the application of self-generated virtual datasets in training
Convolutional Neural Networks (CNNs), particularly within specialized
scenarios. Furthermore, this study introduces a pioneering detection model that
seamlessly amalgamates the Faster R-CNN-based object detection framework with a
transfer learning-based object classification algorithm. This amalgamation not
only significantly curtails the computational resources and time investments
needed for neural network training but also considerably heightens the
efficiency of single-frame detection rates. Additionally, a user-friendly web
interface and a mobile application have been meticulously developed,
constituting a computer vision-driven platform for detecting seat occupancy
within library premises. Noteworthy is the substantial enhancement in seat
occupancy recognition accuracy, coupled with a reduction in computational
resources required for neural network training, collectively contributing to a
considerable amplification in the overall efficiency of library seat
management.
|
[
{
"version": "v1",
"created": "Wed, 28 Jun 2023 10:27:17 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 13:11:02 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Yang",
"Guoqiang",
""
],
[
"Chang",
"Xiaowen",
""
],
[
"Wang",
"Zitong",
""
],
[
"Yang",
"Min",
""
]
] |
new_dataset
| 0.989699 |
2307.04820
|
G\'abor Sz\'arnyas
|
David P\"uroja and Jack Waudby and Peter Boncz and G\'abor Sz\'arnyas
|
The LDBC Social Network Benchmark Interactive workload v2: A
transactional graph query benchmark with deep delete operations
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The LDBC Social Network Benchmark's Interactive workload captures an OLTP
scenario operating on a correlated social network graph. It consists of complex
graph queries executed concurrently with a stream of updates operation. Since
its initial release in 2015, the Interactive workload has become the de facto
industry standard for benchmarking transactional graph data management systems.
As graph systems have matured and the community's understanding of graph
processing features has evolved, we initiated the renewal of this benchmark.
This paper describes the draft Interactive v2 workload with several new
features: delete operations, a cheapest path-finding query, support for larger
data sets, and a novel temporal parameter curation algorithm that ensures
stable runtimes for path queries.
|
[
{
"version": "v1",
"created": "Mon, 10 Jul 2023 18:04:54 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 08:28:22 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Püroja",
"David",
""
],
[
"Waudby",
"Jack",
""
],
[
"Boncz",
"Peter",
""
],
[
"Szárnyas",
"Gábor",
""
]
] |
new_dataset
| 0.993458 |
2307.06181
|
Armin Goudarzi
|
Armin Goudarzi
|
B-CLEAN-SC: CLEAN-SC for broadband sources
|
revision 1
| null | null | null |
cs.SD eess.AS physics.flu-dyn
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents B-CLEAN-SC, a variation of CLEAN-SC for broadband
sources. Opposed to CLEAN-SC, which ``deconvolves'' the beamforming map for
each frequency individually, B-CLEAN-SC processes frequency intervals. Instead
of performing a deconvolution iteration at the location of the maximum level,
B-CLEAN-SC performs it at the location of the over-frequency-averaged maximum
to improve the location estimation. The method is validated and compared to
standard CLEAN-SC on synthetic cases, and real-world experiments, for broad-
and narrowband sources. It improves the source reconstruction at low and high
frequencies and suppresses noise, while it only increases the need for memory
but not computational effort.
|
[
{
"version": "v1",
"created": "Wed, 12 Jul 2023 14:12:19 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 10:34:48 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Goudarzi",
"Armin",
""
]
] |
new_dataset
| 0.967204 |
2307.06853
|
Zillur Rahman
|
Zillur Rahman and Brendan Tran Morris
|
LVLane: Deep Learning for Lane Detection and Classification in
Challenging Conditions
|
7 pages
|
2023 IEEE International Conference on Intelligent Transportation
Systems (ITSC)
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Lane detection plays a pivotal role in the field of autonomous vehicles and
advanced driving assistant systems (ADAS). Despite advances from image
processing to deep learning based models, algorithm performance is highly
dependent on training data matching the local challenges such as extreme
lighting conditions, partially visible lane markings, and sparse lane markings
like Botts' dots. To address this, we present an end-to-end lane detection and
classification system based on deep learning methodologies. In our study, we
introduce a unique dataset meticulously curated to encompass scenarios that
pose significant challenges for state-of-the-art (SOTA) lane localization
models. Moreover, we propose a CNN-based classification branch, seamlessly
integrated with the detector, facilitating the identification of distinct lane
types. This architecture enables informed lane-changing decisions and empowers
more resilient ADAS capabilities. We also investigate the effect of using mixed
precision training and testing on different models and batch sizes.
Experimental evaluations conducted on the widely-used TuSimple dataset, Caltech
Lane dataset, and our LVLane dataset demonstrate the effectiveness of our model
in accurately detecting and classifying lanes amidst challenging scenarios. Our
method achieves state-of-the-art classification results on the TuSimple
dataset. The code of the work can be found on www.github.com/zillur-av/LVLane.
|
[
{
"version": "v1",
"created": "Thu, 13 Jul 2023 16:09:53 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 15:02:05 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Rahman",
"Zillur",
""
],
[
"Morris",
"Brendan Tran",
""
]
] |
new_dataset
| 0.988896 |
2307.09066
|
Miaoge Li
|
Miaoge Li, Dongsheng Wang, Xinyang Liu, Zequn Zeng, Ruiying Lu, Bo
Chen, Mingyuan Zhou
|
PatchCT: Aligning Patch Set and Label Set with Conditional Transport for
Multi-Label Image Classification
|
accepted by ICCV23
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-label image classification is a prediction task that aims to identify
more than one label from a given image. This paper considers the semantic
consistency of the latent space between the visual patch and linguistic label
domains and introduces the conditional transport (CT) theory to bridge the
acknowledged gap. While recent cross-modal attention-based studies have
attempted to align such two representations and achieved impressive
performance, they required carefully-designed alignment modules and extra
complex operations in the attention computation. We find that by formulating
the multi-label classification as a CT problem, we can exploit the interactions
between the image and label efficiently by minimizing the bidirectional CT
cost. Specifically, after feeding the images and textual labels into the
modality-specific encoders, we view each image as a mixture of patch embeddings
and a mixture of label embeddings, which capture the local region features and
the class prototypes, respectively. CT is then employed to learn and align
those two semantic sets by defining the forward and backward navigators.
Importantly, the defined navigators in CT distance model the similarities
between patches and labels, which provides an interpretable tool to visualize
the learned prototypes. Extensive experiments on three public image benchmarks
show that the proposed model consistently outperforms the previous methods.
|
[
{
"version": "v1",
"created": "Tue, 18 Jul 2023 08:37:37 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 11:53:27 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Li",
"Miaoge",
""
],
[
"Wang",
"Dongsheng",
""
],
[
"Liu",
"Xinyang",
""
],
[
"Zeng",
"Zequn",
""
],
[
"Lu",
"Ruiying",
""
],
[
"Chen",
"Bo",
""
],
[
"Zhou",
"Mingyuan",
""
]
] |
new_dataset
| 0.999427 |
2307.11418
|
Sungwon Hwang
|
Sungwon Hwang, Junha Hyung, Daejin Kim, Min-Jung Kim, Jaegul Choo
|
FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural
Radiance Fields
|
ICCV 2023 project page at https://faceclipnerf.github.io
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
As recent advances in Neural Radiance Fields (NeRF) have enabled
high-fidelity 3D face reconstruction and novel view synthesis, its manipulation
also became an essential task in 3D vision. However, existing manipulation
methods require extensive human labor, such as a user-provided semantic mask
and manual attribute search unsuitable for non-expert users. Instead, our
approach is designed to require a single text to manipulate a face
reconstructed with NeRF. To do so, we first train a scene manipulator, a latent
code-conditional deformable NeRF, over a dynamic scene to control a face
deformation using the latent code. However, representing a scene deformation
with a single latent code is unfavorable for compositing local deformations
observed in different instances. As so, our proposed Position-conditional
Anchor Compositor (PAC) learns to represent a manipulated scene with spatially
varying latent codes. Their renderings with the scene manipulator are then
optimized to yield high cosine similarity to a target text in CLIP embedding
space for text-driven manipulation. To the best of our knowledge, our approach
is the first to address the text-driven manipulation of a face reconstructed
with NeRF. Extensive results, comparisons, and ablation studies demonstrate the
effectiveness of our approach.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 08:22:14 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2023 03:18:31 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Aug 2023 05:06:09 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Hwang",
"Sungwon",
""
],
[
"Hyung",
"Junha",
""
],
[
"Kim",
"Daejin",
""
],
[
"Kim",
"Min-Jung",
""
],
[
"Choo",
"Jaegul",
""
]
] |
new_dataset
| 0.997945 |
2307.11466
|
Yuwen Heng
|
Yuwen Heng, Yihong Wu, Jiawen Chen, Srinandan Dasmahapatra, Hansung
Kim
|
MatSpectNet: Material Segmentation Network with Domain-Aware and
Physically-Constrained Hyperspectral Reconstruction
|
7 pages main paper
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Achieving accurate material segmentation for 3-channel RGB images is
challenging due to the considerable variation in a material's appearance.
Hyperspectral images, which are sets of spectral measurements sampled at
multiple wavelengths, theoretically offer distinct information for material
identification, as variations in intensity of electromagnetic radiation
reflected by a surface depend on the material composition of a scene. However,
existing hyperspectral datasets are impoverished regarding the number of images
and material categories for the dense material segmentation task, and
collecting and annotating hyperspectral images with a spectral camera is
prohibitively expensive. To address this, we propose a new model, the
MatSpectNet to segment materials with recovered hyperspectral images from RGB
images. The network leverages the principles of colour perception in modern
cameras to constrain the reconstructed hyperspectral images and employs the
domain adaptation method to generalise the hyperspectral reconstruction
capability from a spectral recovery dataset to material segmentation datasets.
The reconstructed hyperspectral images are further filtered using learned
response curves and enhanced with human perception. The performance of
MatSpectNet is evaluated on the LMD dataset as well as the OpenSurfaces
dataset. Our experiments demonstrate that MatSpectNet attains a 1.60% increase
in average pixel accuracy and a 3.42% improvement in mean class accuracy
compared with the most recent publication. The project code is attached to the
supplementary material and will be published on GitHub.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 10:02:02 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Jul 2023 03:35:03 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Aug 2023 20:19:32 GMT"
},
{
"version": "v4",
"created": "Thu, 17 Aug 2023 09:19:57 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Heng",
"Yuwen",
""
],
[
"Wu",
"Yihong",
""
],
[
"Chen",
"Jiawen",
""
],
[
"Dasmahapatra",
"Srinandan",
""
],
[
"Kim",
"Hansung",
""
]
] |
new_dataset
| 0.999742 |
2307.16377
|
Jiahao Li
|
Jiahao Li, Zongxin Yang, Xiaohan Wang, Jianxin Ma, Chang Zhou, Yi Yang
|
JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human
Mesh Recovery
|
Camera Ready Version for ICCV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we focus on the problem of 3D human mesh recovery from a
single image under obscured conditions. Most state-of-the-art methods aim to
improve 2D alignment technologies, such as spatial averaging and 2D joint
sampling. However, they tend to neglect the crucial aspect of 3D alignment by
improving 3D representations. Furthermore, recent methods struggle to separate
the target human from occlusion or background in crowded scenes as they
optimize the 3D space of target human with 3D joint coordinates as local
supervision. To address these issues, a desirable method would involve a
framework for fusing 2D and 3D features and a strategy for optimizing the 3D
space globally. Therefore, this paper presents 3D JOint contrastive learning
with TRansformers (JOTR) framework for handling occluded 3D human mesh
recovery. Our method includes an encoder-decoder transformer architecture to
fuse 2D and 3D representations for achieving 2D$\&$3D aligned results in a
coarse-to-fine manner and a novel 3D joint contrastive learning approach for
adding explicitly global supervision for the 3D feature space. The contrastive
learning approach includes two contrastive losses: joint-to-joint contrast for
enhancing the similarity of semantically similar voxels (i.e., human joints),
and joint-to-non-joint contrast for ensuring discrimination from others (e.g.,
occlusions and background). Qualitative and quantitative analyses demonstrate
that our method outperforms state-of-the-art competitors on both
occlusion-specific and standard benchmarks, significantly improving the
reconstruction of occluded humans.
|
[
{
"version": "v1",
"created": "Mon, 31 Jul 2023 02:58:58 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 14:43:05 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Li",
"Jiahao",
""
],
[
"Yang",
"Zongxin",
""
],
[
"Wang",
"Xiaohan",
""
],
[
"Ma",
"Jianxin",
""
],
[
"Zhou",
"Chang",
""
],
[
"Yang",
"Yi",
""
]
] |
new_dataset
| 0.991957 |
2308.00214
|
Chaochao Zhou
|
Chaochao Zhou, Syed Hasib Akhter Faruqui, Abhinav Patel, Ramez N.
Abdalla, Michael C. Hurley, Ali Shaibani, Matthew B. Potts, Babak S. Jahromi,
Leon Cho, Sameer A. Ansari, Donald R. Cantrell
|
Robust Single-view Cone-beam X-ray Pose Estimation with Neural Tuned
Tomography (NeTT) and Masked Neural Radiance Fields (mNeRF)
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many tasks performed in image-guided, mini-invasive, medical procedures can
be cast as pose estimation problems, where an X-ray projection is utilized to
reach a target in 3D space. Expanding on recent advances in the differentiable
rendering of optically reflective materials, we introduce new methods for pose
estimation of radiolucent objects using X-ray projections, and we demonstrate
the critical role of optimal view synthesis in performing this task. We first
develop an algorithm (DiffDRR) that efficiently computes Digitally
Reconstructed Radiographs (DRRs) and leverages automatic differentiation within
TensorFlow. Pose estimation is performed by iterative gradient descent using a
loss function that quantifies the similarity of the DRR synthesized from a
randomly initialized pose and the true fluoroscopic image at the target pose.
We propose two novel methods for high-fidelity view synthesis, Neural Tuned
Tomography (NeTT) and masked Neural Radiance Fields (mNeRF). Both methods rely
on classic Cone-Beam Computerized Tomography (CBCT); NeTT directly optimizes
the CBCT densities, while the non-zero values of mNeRF are constrained by a 3D
mask of the anatomic region segmented from CBCT. We demonstrate that both NeTT
and mNeRF distinctly improve pose estimation within our framework. By defining
a successful pose estimate to be a 3D angle error of less than 3 deg, we find
that NeTT and mNeRF can achieve similar results, both with overall success
rates more than 93%. However, the computational cost of NeTT is significantly
lower than mNeRF in both training and pose estimation. Furthermore, we show
that a NeTT trained for a single subject can generalize to synthesize
high-fidelity DRRs and ensure robust pose estimations for all other subjects.
Therefore, we suggest that NeTT is an attractive option for robust pose
estimation using fluoroscopic projections.
|
[
{
"version": "v1",
"created": "Tue, 1 Aug 2023 01:12:29 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 04:00:55 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Zhou",
"Chaochao",
""
],
[
"Faruqui",
"Syed Hasib Akhter",
""
],
[
"Patel",
"Abhinav",
""
],
[
"Abdalla",
"Ramez N.",
""
],
[
"Hurley",
"Michael C.",
""
],
[
"Shaibani",
"Ali",
""
],
[
"Potts",
"Matthew B.",
""
],
[
"Jahromi",
"Babak S.",
""
],
[
"Cho",
"Leon",
""
],
[
"Ansari",
"Sameer A.",
""
],
[
"Cantrell",
"Donald R.",
""
]
] |
new_dataset
| 0.978587 |
2308.01284
|
Amrita Bhattacharjee
|
Amrita Bhattacharjee, Huan Liu
|
Fighting Fire with Fire: Can ChatGPT Detect AI-generated Text?
|
to appear in SIGKDD Explorations (December 2023)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs) such as ChatGPT are increasingly being used for
various use cases, including text content generation at scale. Although
detection methods for such AI-generated text exist already, we investigate
ChatGPT's performance as a detector on such AI-generated text, inspired by
works that use ChatGPT as a data labeler or annotator. We evaluate the
zero-shot performance of ChatGPT in the task of human-written vs. AI-generated
text detection, and perform experiments on publicly available datasets. We
empirically investigate if ChatGPT is symmetrically effective in detecting
AI-generated or human-written text. Our findings provide insight on how ChatGPT
and similar LLMs may be leveraged in automated detection pipelines by simply
focusing on solving a specific aspect of the problem and deriving the rest from
that solution. All code and data is available at
https://github.com/AmritaBh/ChatGPT-as-Detector.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 17:11:37 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 22:34:38 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Bhattacharjee",
"Amrita",
""
],
[
"Liu",
"Huan",
""
]
] |
new_dataset
| 0.9676 |
2308.02052
|
Zachary D'Aquino
|
Zachary D'Aquino, Sylwester Arabas, Jeffrey Curtis, Akshunna Vaishnav,
Nicole Riemer, and Matthew West
|
PyPartMC: A Pythonic interface to a particle-resolved, Monte Carlo
aerosol simulation framework
| null | null | null | null |
cs.MS physics.ao-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PyPartMC is a Pythonic interface to PartMC, a stochastic, particle-resolved
aerosol model implemented in Fortran. Both PyPartMC and PartMC are free, libre,
and open-source. PyPartMC reduces the number of steps and mitigates the effort
necessary to install and utilize the resources of PartMC. Without PyPartMC,
setting up PartMC requires: working with UNIX shell, providing Fortran and C
libraries, and performing standard Fortran and C source code configuration,
compilation and linking. This can be challenging for those less experienced
with computational research or those intending to use PartMC in environments
where provision of UNIX tools is less straightforward (e.g., on Windows).
PyPartMC offers a single-step installation/upgrade process of PartMC and all
dependencies through the pip Python package manager on Linux, macOS, and
Windows. This allows streamlined access to the unmodified and versioned Fortran
internals of the PartMC codebase from both Python and other interoperable
environments (e.g., Julia through PyCall). Consequently, users of PyPartMC can
setup, run, process and visualize output of PartMC simulations using a single
general-purpose programming language.
|
[
{
"version": "v1",
"created": "Thu, 3 Aug 2023 21:10:44 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Aug 2023 21:01:33 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"D'Aquino",
"Zachary",
""
],
[
"Arabas",
"Sylwester",
""
],
[
"Curtis",
"Jeffrey",
""
],
[
"Vaishnav",
"Akshunna",
""
],
[
"Riemer",
"Nicole",
""
],
[
"West",
"Matthew",
""
]
] |
new_dataset
| 0.988426 |
2308.03582
|
Hsuvas Borkakoty
|
Hsuvas Borkakoty and Luis Espinosa-Anke
|
WIKITIDE: A Wikipedia-Based Timestamped Definition Pairs Dataset
|
Accepted by RANLP 2023 main conference
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
A fundamental challenge in the current NLP context, dominated by language
models, comes from the inflexibility of current architectures to 'learn' new
information. While model-centric solutions like continual learning or
parameter-efficient fine tuning are available, the question still remains of
how to reliably identify changes in language or in the world. In this paper, we
propose WikiTiDe, a dataset derived from pairs of timestamped definitions
extracted from Wikipedia. We argue that such resource can be helpful for
accelerating diachronic NLP, specifically, for training models able to scan
knowledge resources for core updates concerning a concept, an event, or a named
entity. Our proposed end-to-end method is fully automatic, and leverages a
bootstrapping algorithm for gradually creating a high-quality dataset. Our
results suggest that bootstrapping the seed version of WikiTiDe leads to better
fine-tuned models. We also leverage fine-tuned models in a number of downstream
tasks, showing promising results with respect to competitive baselines.
|
[
{
"version": "v1",
"created": "Mon, 7 Aug 2023 13:38:54 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 12:31:52 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Borkakoty",
"Hsuvas",
""
],
[
"Espinosa-Anke",
"Luis",
""
]
] |
new_dataset
| 0.999582 |
2308.04123
|
Davide Villa
|
Davide Villa, Daniel Uvaydov, Leonardo Bonati, Pedram Johari, Josep
Miquel Jornet, Tommaso Melodia
|
Twinning Commercial Radio Waveforms in the Colosseum Wireless Network
Emulator
|
8 pages, 13 figures, 2 tables
| null |
10.1145/3615453.3616519
| null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because of the ever-growing amount of wireless consumers, spectrum-sharing
techniques have been increasingly common in the wireless ecosystem, with the
main goal of avoiding harmful interference to coexisting communication systems.
This is even more important when considering systems, such as nautical and
aerial fleet radars, in which incumbent radios operate mission-critical
communication links. To study, develop, and validate these solutions, adequate
platforms, such as the Colosseum wireless network emulator, are key as they
enable experimentation with spectrum-sharing heterogeneous radio technologies
in controlled environments. In this work, we demonstrate how Colosseum can be
used to twin commercial radio waveforms to evaluate the coexistence of such
technologies in complex wireless propagation environments. To this aim, we
create a high-fidelity spectrum-sharing scenario on Colosseum to evaluate the
impact of twinned commercial radar waveforms on a cellular network operating in
the CBRS band. Then, we leverage IQ samples collected on the testbed to train a
machine learning agent that runs at the base station to detect the presence of
incumbent radar transmissions and vacate the bandwidth to avoid causing them
harmful interference. Our results show an average detection accuracy of 88%,
with accuracy above 90% in SNR regimes above 0 dB and SINR regimes above -20
dB, and with an average detection time of 137 ms.
|
[
{
"version": "v1",
"created": "Tue, 8 Aug 2023 08:26:03 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Aug 2023 15:38:14 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Aug 2023 11:09:16 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Villa",
"Davide",
""
],
[
"Uvaydov",
"Daniel",
""
],
[
"Bonati",
"Leonardo",
""
],
[
"Johari",
"Pedram",
""
],
[
"Jornet",
"Josep Miquel",
""
],
[
"Melodia",
"Tommaso",
""
]
] |
new_dataset
| 0.967874 |
2308.04964
|
Biagio Montaruli
|
Biagio Montaruli, Luca Demetrio, Andrea Valenza, Luca Compagna, Davide
Ariu, Luca Piras, Davide Balzarotti, Battista Biggio
|
Adversarial ModSecurity: Countering Adversarial SQL Injections with
Robust Machine Learning
| null | null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
ModSecurity is widely recognized as the standard open-source Web Application
Firewall (WAF), maintained by the OWASP Foundation. It detects malicious
requests by matching them against the Core Rule Set, identifying well-known
attack patterns. Each rule in the CRS is manually assigned a weight, based on
the severity of the corresponding attack, and a request is detected as
malicious if the sum of the weights of the firing rules exceeds a given
threshold. In this work, we show that this simple strategy is largely
ineffective for detecting SQL injection (SQLi) attacks, as it tends to block
many legitimate requests, while also being vulnerable to adversarial SQLi
attacks, i.e., attacks intentionally manipulated to evade detection. To
overcome these issues, we design a robust machine learning model, named
AdvModSec, which uses the CRS rules as input features, and it is trained to
detect adversarial SQLi attacks. Our experiments show that AdvModSec, being
trained on the traffic directed towards the protected web services, achieves a
better trade-off between detection and false positive rates, improving the
detection rate of the vanilla version of ModSecurity with CRS by 21%. Moreover,
our approach is able to improve its adversarial robustness against adversarial
SQLi attacks by 42%, thereby taking a step forward towards building more robust
and trustworthy WAFs.
|
[
{
"version": "v1",
"created": "Wed, 9 Aug 2023 13:58:03 GMT"
},
{
"version": "v2",
"created": "Thu, 17 Aug 2023 09:08:49 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Montaruli",
"Biagio",
""
],
[
"Demetrio",
"Luca",
""
],
[
"Valenza",
"Andrea",
""
],
[
"Compagna",
"Luca",
""
],
[
"Ariu",
"Davide",
""
],
[
"Piras",
"Luca",
""
],
[
"Balzarotti",
"Davide",
""
],
[
"Biggio",
"Battista",
""
]
] |
new_dataset
| 0.963717 |
2308.05828
|
Kevin Pu
|
Kevin Pu, Jim Yang, Angel Yuan, Minyi Ma, Rui Dong, Xinyu Wang, Yan
Chen, Tovi Grossman
|
DiLogics: Creating Web Automation Programs With Diverse Logics
| null | null |
10.1145/3586183.3606822
| null |
cs.HC cs.AI cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Knowledge workers frequently encounter repetitive web data entry tasks, like
updating records or placing orders. Web automation increases productivity, but
translating tasks to web actions accurately and extending to new specifications
is challenging. Existing tools can automate tasks that perform the same logical
trace of UI actions (e.g., input text in each field in order), but do not
support tasks requiring different executions based on varied input conditions.
We present DiLogics, a programming-by-demonstration system that utilizes NLP to
assist users in creating web automation programs that handle diverse
specifications. DiLogics first semantically segments input data to structured
task steps. By recording user demonstrations for each step, DiLogics
generalizes the web macros to novel but semantically similar task requirements.
Our evaluation showed that non-experts can effectively use DiLogics to create
automation programs that fulfill diverse input instructions. DiLogics provides
an efficient, intuitive, and expressive method for developing web automation
programs satisfying diverse specifications.
|
[
{
"version": "v1",
"created": "Thu, 10 Aug 2023 19:01:30 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 15:33:39 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Pu",
"Kevin",
""
],
[
"Yang",
"Jim",
""
],
[
"Yuan",
"Angel",
""
],
[
"Ma",
"Minyi",
""
],
[
"Dong",
"Rui",
""
],
[
"Wang",
"Xinyu",
""
],
[
"Chen",
"Yan",
""
],
[
"Grossman",
"Tovi",
""
]
] |
new_dataset
| 0.992201 |
2308.06668
|
Jiajia Li
|
Jiajia Li, Mingle Xu, Lirong Xiang, Dong Chen, Weichao Zhuang, Xunyuan
Yin and Zhaojian Li
|
Foundation Models in Smart Agriculture: Basics, Opportunities, and
Challenges
|
16 pages, 3 figures
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The past decade has witnessed the rapid development of ML and DL
methodologies in agricultural systems, showcased by great successes in variety
of agricultural applications. However, these conventional ML/DL models have
certain limitations: They heavily rely on large, costly-to-acquire labeled
datasets for training, require specialized expertise for development and
maintenance, and are mostly tailored for specific tasks, thus lacking
generalizability. Recently, foundation models have demonstrated remarkable
successes in language and vision tasks across various domains. These models are
trained on a vast amount of data from multiple domains and modalities. Once
trained, they can accomplish versatile tasks with just minor fine-tuning and
minimal task-specific labeled data. Despite their proven effectiveness and huge
potential, there has been little exploration of applying FMs to agriculture
fields. Therefore, this study aims to explore the potential of FMs in the field
of smart agriculture. In particular, we present conceptual tools and technical
background to facilitate the understanding of the problem space and uncover new
research directions in this field. To this end, we first review recent FMs in
the general computer science domain and categorize them into four categories:
language FMs, vision FMs, multimodal FMs, and reinforcement learning FMs.
Subsequently, we outline the process of developing agriculture FMs and discuss
their potential applications in smart agriculture. We also discuss the unique
challenges associated with developing AFMs, including model training,
validation, and deployment. Through this study, we contribute to the
advancement of AI in agriculture by introducing AFMs as a promising paradigm
that can significantly mitigate the reliance on extensive labeled datasets and
enhance the efficiency, effectiveness, and generalization of agricultural AI
systems.
|
[
{
"version": "v1",
"created": "Sun, 13 Aug 2023 02:59:36 GMT"
},
{
"version": "v2",
"created": "Fri, 18 Aug 2023 14:16:37 GMT"
}
] | 2023-08-21T00:00:00 |
[
[
"Li",
"Jiajia",
""
],
[
"Xu",
"Mingle",
""
],
[
"Xiang",
"Lirong",
""
],
[
"Chen",
"Dong",
""
],
[
"Zhuang",
"Weichao",
""
],
[
"Yin",
"Xunyuan",
""
],
[
"Li",
"Zhaojian",
""
]
] |
new_dataset
| 0.973227 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.