id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed | prediction | probability |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
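The rows below follow the standard arXiv metadata schema plus two model-output columns (prediction, probability). As an illustrative, editor-added sketch (not part of the dump), the snippet below shows how records with this schema might be loaded from a hypothetical JSON Lines export and filtered to high-confidence "new_dataset" predictions; the file name is an assumption.

```python
# Hedged sketch: load records matching the schema above from a hypothetical
# JSON Lines export and keep high-confidence "new_dataset" predictions.
import pandas as pd

# File name is an assumption; any line-delimited JSON export of these columns works.
df = pd.read_json("arxiv_new_dataset_predictions.jsonl", lines=True)

keep = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.95)]
print(keep[["id", "title", "categories", "update_date"]].head())
```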
2308.13340
|
Xueling Feng
|
Yan Sun, Xueling Feng, Liyan Ma, Long Hu, Mark Nixon
|
TriGait: Aligning and Fusing Skeleton and Silhouette Gait Data via a
Tri-Branch Network
|
Accepted by IJCB 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gait recognition is a promising biometric technology for identification due
to its non-invasive and long-distance nature. However, external variations such as
clothing changes and viewpoint differences pose significant challenges to gait
recognition. Silhouette-based methods preserve body shape but neglect internal
structure information, while skeleton-based methods preserve structure
information but omit appearance. To fully exploit the complementary nature of
the two modalities, a novel triple branch gait recognition framework, TriGait,
is proposed in this paper. It effectively integrates features from the skeleton
and silhouette data in a hybrid fusion manner, including a two-stream network
to extract static and motion features from appearance, a simple yet effective
module named JSA-TC to capture dependencies between all joints, and a third
branch for cross-modal learning by aligning and fusing low-level features of
two modalities. Experimental results demonstrate the superiority and
effectiveness of TriGait for gait recognition. The proposed method achieves a
mean rank-1 accuracy of 96.0% over all conditions on CASIA-B dataset and 94.3%
accuracy for CL, significantly outperforming all the state-of-the-art methods.
The source code will be available at https://github.com/feng-xueling/TriGait/.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 12:19:51 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Sun",
"Yan",
""
],
[
"Feng",
"Xueling",
""
],
[
"Ma",
"Liyan",
""
],
[
"Hu",
"Long",
""
],
[
"Nixon",
"Mark",
""
]
] |
new_dataset
| 0.97189 |
2308.13414
|
Arunima Mandal
|
Arunima Mandal, Yuanhang Shao, Xiuwen Liu
|
Automatic Historical Stock Price Dataset Generation Using Python
| null | null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
With the dynamic political and economic environments, the ever-changing stock
markets generate large amounts of data daily. Acquiring up-to-date data is
crucial to enhancing predictive precision in stock price behavior studies.
However, preparing the dataset manually can be challenging and time-consuming.
Stock market analysis usually revolves around specific indices such as
S&P500, Nasdaq, Dow Jones, the New York Stock Exchange (NYSE), etc. It is
necessary to analyze all the companies of any particular index. While raw data
are accessible from diverse financial websites, these resources are tailored
for individual company data retrieval and there is a big gap between what is
available and what is needed to generate large datasets. Python emerges as a
valuable tool for comprehensively collecting all constituent stocks within a
given index. While certain online sources offer code snippets for limited
dataset generation, a comprehensive and unified script has yet to be developed
and made publicly available. Therefore, we present a comprehensive and consolidated
code resource that facilitates the extraction of updated datasets for any
particular time period and for any specific stock market index and closes the
gap. The code is available at
https://github.com/amp1590/automatic_stock_data_collection.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 14:44:56 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Mandal",
"Arunima",
""
],
[
"Shao",
"Yuanhang",
""
],
[
"Liu",
"Xiuwen",
""
]
] |
new_dataset
| 0.978118 |
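As a hedged illustration of the workflow described in the record above (collecting prices for all constituents of an index), the sketch below uses the third-party yfinance and pandas packages; it is not the authors' script, and the Wikipedia page layout and "Symbol" column name are assumptions.

```python
# Hedged sketch, not the paper's code: fetch the S&P 500 constituent list and
# download daily prices for a small sample of tickers with yfinance.
import pandas as pd
import yfinance as yf

# Assumes the Wikipedia constituents table exposes a "Symbol" column.
tables = pd.read_html("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")
tickers = tables[0]["Symbol"].tolist()

prices = yf.download(
    tickers=tickers[:5],        # small sample to keep the example quick
    start="2023-01-01",
    end="2023-08-25",
    auto_adjust=True,
)
prices.to_csv("sp500_sample_prices.csv")
```

Looping over the full ticker list (and over other indices) would yield the kind of consolidated, up-to-date dataset the abstract describes.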
2308.13416
|
Ensheng Shi
|
Ensheng Shi, Fengji Zhang, Yanlin Wang, Bei Chen, Lun Du, Hongyu
Zhang, Shi Han, Dongmei Zhang, Hongbin Sun
|
SoTaNa: The Open-Source Software Development Assistant
| null | null | null | null |
cs.SE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software development plays a crucial role in driving innovation and
efficiency across modern societies. To meet the demands of this dynamic field,
there is a growing need for an effective software development assistant.
However, existing large language models represented by ChatGPT suffer from
limited accessibility, including training data and model weights. Although
other large open-source models like LLaMA have shown promise, they still
struggle with understanding human intent. In this paper, we present SoTaNa, an
open-source software development assistant. SoTaNa utilizes ChatGPT to generate
high-quality instruction-based data for the domain of software engineering and
employs a parameter-efficient fine-tuning approach to enhance the open-source
foundation model, LLaMA. We evaluate the effectiveness of SoTaNa in answering
Stack Overflow questions and demonstrate its capabilities. Additionally, we
discuss its capabilities in code summarization and generation, as well as the
impact of varying the volume of generated data on model performance. Notably,
SoTaNa can run on a single GPU, making it accessible to a broader range of
researchers. Our code, model weights, and data are public at
\url{https://github.com/DeepSoftwareAnalytics/SoTaNa}.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 14:56:21 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Shi",
"Ensheng",
""
],
[
"Zhang",
"Fengji",
""
],
[
"Wang",
"Yanlin",
""
],
[
"Chen",
"Bei",
""
],
[
"Du",
"Lun",
""
],
[
"Zhang",
"Hongyu",
""
],
[
"Han",
"Shi",
""
],
[
"Zhang",
"Dongmei",
""
],
[
"Sun",
"Hongbin",
""
]
] |
new_dataset
| 0.999328 |
2308.13418
|
Lukas Blecher
|
Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic
|
Nougat: Neural Optical Understanding for Academic Documents
|
17 pages, 10 figures
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Scientific knowledge is predominantly stored in books and scientific
journals, often in the form of PDFs. However, the PDF format leads to a loss of
semantic information, particularly for mathematical expressions. We propose
Nougat (Neural Optical Understanding for Academic Documents), a Visual
Transformer model that performs an Optical Character Recognition (OCR) task for
processing scientific documents into a markup language, and demonstrate the
effectiveness of our model on a new dataset of scientific documents. The
proposed approach offers a promising solution to enhance the accessibility of
scientific knowledge in the digital age, by bridging the gap between
human-readable documents and machine-readable text. We release the models and
code to accelerate future work on scientific text recognition.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 15:03:36 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Blecher",
"Lukas",
""
],
[
"Cucurull",
"Guillem",
""
],
[
"Scialom",
"Thomas",
""
],
[
"Stojnic",
"Robert",
""
]
] |
new_dataset
| 0.999564 |
2308.13449
|
Sungbae Chun
|
Aibek Bekbayev, Sungbae Chun, Yerzat Dulat, James Yamazaki
|
The Poison of Alignment
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
From the perspective of content safety issues, alignment has shown to limit
large language models' (LLMs) harmful content generation. This intentional
method of reinforcing models not to respond to certain user inputs seems to be
present in many modern open-source instruction tuning datasets such as
OpenAssistant or Guanaco. We introduce a novel insight into how an
instruction-tuned model's performance is affected by the presence of alignment
in the supervised fine-tuning dataset. To be specific, we noticed that alignment
acts as if it is poisoning the instruction dataset. Experimentally, we
demonstrate that aligned answers significantly worsen the performance of the
resulting fine-tuned model on various reasoning benchmarks such as Big Bench (BBH), Massive
Multitask Language Understanding (MMLU), Human Eval, and Discrete Reasoning
Over Paragraphs (DROP), performing worse than the counterpart tuned without
alignment by 4-33%.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 15:51:15 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Bekbayev",
"Aibek",
""
],
[
"Chun",
"Sungbae",
""
],
[
"Dulat",
"Yerzat",
""
],
[
"Yamazaki",
"James",
""
]
] |
new_dataset
| 0.998418 |
2308.13490
|
Sami Abu-El-Haija
|
Phitchaya Mangpo Phothilimthana, Sami Abu-El-Haija, Kaidi Cao, Bahare
Fatemi, Charith Mendis, Bryan Perozzi
|
TpuGraphs: A Performance Prediction Dataset on Large Tensor
Computational Graphs
| null | null | null | null |
cs.LG cs.AR cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Precise hardware performance models play a crucial role in code
optimizations. They can assist compilers in making heuristic decisions or aid
autotuners in identifying the optimal configuration for a given program. For
example, the autotuner for XLA, a machine learning compiler, discovered 10-20%
speedup on state-of-the-art models serving substantial production traffic at
Google. Although there exist a few datasets for program performance prediction,
they target small sub-programs such as basic blocks or kernels. This paper
introduces TpuGraphs, a performance prediction dataset on full tensor programs,
represented as computational graphs, running on Tensor Processing Units (TPUs).
Each graph in the dataset represents the main computation of a machine learning
workload, e.g., a training epoch or an inference step. Each data sample
contains a computational graph, a compilation configuration, and the execution
time of the graph when compiled with the configuration. The graphs in the
dataset are collected from open-source machine learning programs, featuring
popular model architectures, e.g., ResNet, EfficientNet, Mask R-CNN, and
Transformer. TpuGraphs provides 25x more graphs than the largest graph property
prediction dataset (with comparable graph sizes), and 770x larger graphs on
average compared to existing performance prediction datasets on machine
learning programs. This graph-level prediction task on large graphs introduces
new challenges in learning, ranging from scalability and training efficiency to
model quality.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 17:04:35 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Phothilimthana",
"Phitchaya Mangpo",
""
],
[
"Abu-El-Haija",
"Sami",
""
],
[
"Cao",
"Kaidi",
""
],
[
"Fatemi",
"Bahare",
""
],
[
"Mendis",
"Charith",
""
],
[
"Perozzi",
"Bryan",
""
]
] |
new_dataset
| 0.999843 |
2308.13497
|
Sakayo Toadoum Sari He
|
Sakayo Toadoum Sari and Angela Fan and Lema Logamou Seknewna
|
Ngambay-French Neural Machine Translation (sba-Fr)
|
Accepted at RANLP 2023 - International Workshop NLP tools and
resources for translation and interpreting applications
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In Africa, and the world at large, there is an increasing focus on developing
Neural Machine Translation (NMT) systems to overcome language barriers. NMT for
low-resource languages is particularly compelling as it involves learning with
limited labelled data. However, obtaining a well-aligned parallel corpus for
low-resource languages can be challenging. The disparity between the
technological advancement of a few global languages and the lack of research on
NMT for local languages in Chad is striking. End-to-end NMT trials on
low-resource Chad languages have not been attempted. Additionally, there is a
dearth of online and well-structured data gathering for research in Natural
Language Processing, unlike some African languages. However, a guided approach
for data gathering can produce bitext data for many Chadian language
translation pairs with well-known languages that have ample data. In this
project, we created the first sba-Fr Dataset, which is a corpus of
Ngambay-to-French translations, and fine-tuned three pre-trained models using
this dataset. Our experiments show that the M2M100 model outperforms other
models with high BLEU scores on both original and original+synthetic data. The
publicly available bitext dataset can be used for research purposes.
|
[
{
"version": "v1",
"created": "Fri, 25 Aug 2023 17:13:20 GMT"
}
] | 2023-08-28T00:00:00 |
[
[
"Sari",
"Sakayo Toadoum",
""
],
[
"Fan",
"Angela",
""
],
[
"Seknewna",
"Lema Logamou",
""
]
] |
new_dataset
| 0.997888 |
1910.09642
|
Rafael Henrique Vareto Mr.
|
Rafael Henrique Vareto, Araceli Marcia Sandanha, William Robson
Schwartz
|
The SWAX Benchmark: Attacking Biometric Systems with Wax Figures
| null | null | null | null |
cs.CV cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A face spoofing attack occurs when an intruder attempts to impersonate
someone who carries a gainful authentication clearance. It is a trending topic
due to the increasing demand for biometric authentication on mobile devices,
high-security areas, among others. This work introduces a new database named
Sense Wax Attack dataset (SWAX), comprised of real human and wax figure images
and videos that endorse the problem of face spoofing detection. The dataset
consists of more than 1800 face images and 110 videos of 55 people/waxworks,
arranged in training, validation and test sets with a large range in
expression, illumination and pose variations. Experiments performed with
baseline methods show that despite the progress in recent years, advanced
spoofing methods are still vulnerable to high-quality violation attempts.
|
[
{
"version": "v1",
"created": "Mon, 21 Oct 2019 20:40:54 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Vareto",
"Rafael Henrique",
""
],
[
"Sandanha",
"Araceli Marcia",
""
],
[
"Schwartz",
"William Robson",
""
]
] |
new_dataset
| 0.999884 |
2112.13592
|
Fangneng Zhan
|
Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu,
Lingjie Liu, Adam Kortylewski, Christian Theobalt, Eric Xing
|
Multimodal Image Synthesis and Editing: The Generative AI Era
|
TPAMI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As information exists in various modalities in the real world, effective
interaction and fusion among multimodal information plays a key role for the
creation and perception of multimodal data in computer vision and deep learning
research. With superb power in modeling the interaction among multimodal
information, multimodal image synthesis and editing has become a hot research
topic in recent years. Instead of providing explicit guidance for network
training, multimodal guidance offers intuitive and flexible means for image
synthesis and editing. On the other hand, this field is also facing several
challenges in alignment of multimodal features, synthesis of high-resolution
images, faithful evaluation metrics, etc. In this survey, we comprehensively
contextualize the advances in recent multimodal image synthesis and editing
and formulate taxonomies according to data modalities and model types. We start
with an introduction to different guidance modalities in image synthesis and
editing, and then describe multimodal image synthesis and editing approaches
extensively according to their model types. After that, we describe benchmark
datasets and evaluation metrics as well as corresponding experimental results.
Finally, we provide insights about the current research challenges and possible
directions for future research. A project associated with this survey is
available at https://github.com/fnzhan/Generative-AI.
|
[
{
"version": "v1",
"created": "Mon, 27 Dec 2021 10:00:16 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jul 2022 15:54:48 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Jul 2022 18:00:04 GMT"
},
{
"version": "v4",
"created": "Mon, 24 Apr 2023 12:43:35 GMT"
},
{
"version": "v5",
"created": "Sat, 5 Aug 2023 00:10:53 GMT"
},
{
"version": "v6",
"created": "Thu, 24 Aug 2023 16:17:21 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Zhan",
"Fangneng",
""
],
[
"Yu",
"Yingchen",
""
],
[
"Wu",
"Rongliang",
""
],
[
"Zhang",
"Jiahui",
""
],
[
"Lu",
"Shijian",
""
],
[
"Liu",
"Lingjie",
""
],
[
"Kortylewski",
"Adam",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Xing",
"Eric",
""
]
] |
new_dataset
| 0.962586 |
2203.13310
|
Renrui Zhang
|
Renrui Zhang, Han Qiu, Tai Wang, Ziyu Guo, Xuanzhuo Xu, Ziteng Cui, Yu
Qiao, Peng Gao, Hongsheng Li
|
MonoDETR: Depth-guided Transformer for Monocular 3D Object Detection
|
Accepted by ICCV 2023. Code is available at
https://github.com/ZrrSkywalker/MonoDETR
| null | null | null |
cs.CV cs.AI eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monocular 3D object detection has long been a challenging task in autonomous
driving. Most existing methods follow conventional 2D detectors to first
localize object centers, and then predict 3D attributes by neighboring
features. However, only using local visual features is insufficient to
understand the scene-level 3D spatial structures and ignores the long-range
inter-object depth relations. In this paper, we introduce the first DETR
framework for Monocular DEtection with a depth-guided TRansformer, named
MonoDETR. We modify the vanilla transformer to be depth-aware and guide the
whole detection process by contextual depth cues. Specifically, concurrent to
the visual encoder that captures object appearances, we propose to predict a
foreground depth map, and specialize a depth encoder to extract non-local depth
embeddings. Then, we formulate 3D object candidates as learnable queries and
propose a depth-guided decoder to conduct object-scene depth interactions. In
this way, each object query estimates its 3D attributes adaptively from the
depth-guided regions on the image and is no longer constrained to local visual
features. On KITTI benchmark with monocular images as input, MonoDETR achieves
state-of-the-art performance and requires no extra dense depth annotations.
Besides, our depth-guided modules can also be plug-and-play to enhance
multi-view 3D object detectors on nuScenes dataset, demonstrating our superior
generalization capacity. Code is available at
https://github.com/ZrrSkywalker/MonoDETR.
|
[
{
"version": "v1",
"created": "Thu, 24 Mar 2022 19:28:54 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Mar 2022 07:00:29 GMT"
},
{
"version": "v3",
"created": "Sat, 28 May 2022 10:21:04 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Aug 2023 04:18:17 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Zhang",
"Renrui",
""
],
[
"Qiu",
"Han",
""
],
[
"Wang",
"Tai",
""
],
[
"Guo",
"Ziyu",
""
],
[
"Xu",
"Xuanzhuo",
""
],
[
"Cui",
"Ziteng",
""
],
[
"Qiao",
"Yu",
""
],
[
"Gao",
"Peng",
""
],
[
"Li",
"Hongsheng",
""
]
] |
new_dataset
| 0.987008 |
2203.13737
|
Donald Pinckney
|
Donald Pinckney, Federico Cassano, Arjun Guha, Jon Bell, Massimiliano
Culpo, Todd Gamblin
|
Flexible and Optimal Dependency Management via Max-SMT
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Package managers such as NPM have become essential for software development.
The NPM repository hosts over 2 million packages and serves over 43 billion
downloads every week. Unfortunately, the NPM dependency solver has several
shortcomings. 1) NPM is greedy and often fails to install the newest versions
of dependencies; 2) NPM's algorithm leads to duplicated dependencies and
bloated code, which is particularly bad for web applications that need to
minimize code size; 3) NPM's vulnerability fixing algorithm is also greedy, and
can even introduce new vulnerabilities; and 4) NPM's ability to duplicate
dependencies can break stateful frameworks and requires a lot of care to
work around. Although existing tools try to address these problems, they are
either brittle, rely on post hoc changes to the dependency tree, do not
guarantee optimality, or are not composable.
We present PacSolve, a unifying framework and implementation for dependency
solving which allows for customizable constraints and optimization goals. We
use PacSolve to build MaxNPM, a complete, drop-in replacement for NPM, which
empowers developers to combine multiple objectives when installing
dependencies. We evaluate MaxNPM with a large sample of packages from the NPM
ecosystem and show that it can: 1) reduce more vulnerabilities in dependencies
than NPM's auditing tool in 33% of cases; 2) choose newer dependencies than
NPM in 14% of cases; and 3) choose fewer dependencies than NPM in 21% of
cases. All our code and data are open and available.
|
[
{
"version": "v1",
"created": "Fri, 25 Mar 2022 16:11:51 GMT"
},
{
"version": "v2",
"created": "Tue, 12 Jul 2022 17:10:10 GMT"
},
{
"version": "v3",
"created": "Thu, 15 Dec 2022 00:09:36 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Aug 2023 04:20:31 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Pinckney",
"Donald",
""
],
[
"Cassano",
"Federico",
""
],
[
"Guha",
"Arjun",
""
],
[
"Bell",
"Jon",
""
],
[
"Culpo",
"Massimiliano",
""
],
[
"Gamblin",
"Todd",
""
]
] |
new_dataset
| 0.955344 |
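The record above frames dependency resolution as a Max-SMT problem with hard constraints and customizable optimization goals. The toy sketch below (not PacSolve/MaxNPM; the package names and constraints are invented) shows the general encoding with Z3's Optimize API: hard clauses for valid dependency choices and soft clauses that prefer newer versions.

```python
# Toy sketch of dependency solving as Max-SMT with Z3 (not PacSolve/MaxNPM).
from z3 import AtMost, Bool, Implies, Optimize, Or, is_true, sat

opt = Optimize()
app, lib_v1, lib_v2 = Bool("app"), Bool("lib_v1"), Bool("lib_v2")

opt.add(app)                               # hard: the root package must be installed
opt.add(Implies(app, Or(lib_v1, lib_v2)))  # hard: app requires some version of lib
opt.add(AtMost(lib_v1, lib_v2, 1))         # hard: install at most one version of lib

# Soft constraints encode the objective: prefer the newest version that fits.
opt.add_soft(lib_v2, weight=2)
opt.add_soft(lib_v1, weight=1)

if opt.check() == sat:
    model = opt.model()
    chosen = [name for name, v in [("lib_v1", lib_v1), ("lib_v2", lib_v2)]
              if is_true(model[v])]
    print("install:", chosen)              # expected: install: ['lib_v2']
```

Swapping the soft-constraint weights (e.g., to penalize duplicated or vulnerable versions) illustrates how different installation objectives can be combined in one solve.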
2212.00143
|
Felix Dumais Mr.
|
F\'elix Dumais, Jon Haitz Legarreta, Carl Lemaire, Philippe Poulin,
Fran\c{c}ois Rheault, Laurent Petit, Muhamed Barakovic, Stefano Magon, Maxime
Descoteaux, Pierre-Marc Jodoin (for the Alzheimer's Disease Neuroimaging
Initiative)
|
FIESTA: Autoencoders for accurate fiber segmentation in tractography
|
36 pages, 13 figures, accepted in NeuroImage
|
NeuroImage 279, 120288 (2023)
|
10.1016/j.neuroimage.2023.120288
| null |
cs.CV cs.LG eess.IV q-bio.NC q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
White matter bundle segmentation is a cornerstone of modern tractography to
study the brain's structural connectivity in domains such as neurological
disorders, neurosurgery, and aging. In this study, we present FIESTA (FIbEr
Segmentation in Tractography using Autoencoders), a reliable and robust, fully
automated, and easily semi-automatically calibrated pipeline based on deep
autoencoders that can dissect and fully populate white matter bundles. This
pipeline is built upon previous works that demonstrated how autoencoders can be
used successfully for streamline filtering, bundle segmentation, and streamline
generation in tractography. Our proposed method improves bundle segmentation
coverage by recovering hard-to-track bundles with generative sampling through
the latent space seeding of the subject bundle and the atlas bundle. A latent
space of streamlines is learned using autoencoder-based modeling combined with
contrastive learning. Using an atlas of bundles in standard space (MNI), our
proposed method segments new tractograms using the autoencoder latent distance
between each tractogram streamline and its closest neighbor bundle in the atlas
of bundles. Intra-subject bundle reliability is improved by recovering
hard-to-track streamlines, using the autoencoder to generate new streamlines
that increase the spatial coverage of each bundle while remaining anatomically
correct. Results show that our method is more reliable than state-of-the-art
automated virtual dissection methods such as RecoBundles, RecoBundlesX,
TractSeg, White Matter Analysis and XTRACT. Our framework allows for the
transition from one anatomical bundle definition to another with marginal
calibration efforts. Overall, these results show that our framework improves
the practicality and usability of current state-of-the-art bundle segmentation
frameworks.
|
[
{
"version": "v1",
"created": "Wed, 30 Nov 2022 22:28:24 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2022 21:58:01 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Aug 2023 17:29:24 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Dumais",
"Félix",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Legarreta",
"Jon Haitz",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Lemaire",
"Carl",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Poulin",
"Philippe",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Rheault",
"François",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Petit",
"Laurent",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Barakovic",
"Muhamed",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Magon",
"Stefano",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Descoteaux",
"Maxime",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
],
[
"Jodoin",
"Pierre-Marc",
"",
"for the Alzheimer's Disease Neuroimaging\n Initiative"
]
] |
new_dataset
| 0.995986 |
2212.02011
|
Jie Hong
|
Jie Hong, Shi Qiu, Weihao Li, Saeed Anwar, Mehrtash Harandi, Nick
Barnes and Lars Petersson
|
PointCaM: Cut-and-Mix for Open-Set Point Cloud Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud learning is receiving increasing attention, however, most
existing point cloud models lack the practical ability to deal with the
unavoidable presence of unknown objects. This paper mainly discusses point
cloud learning under open-set settings, where we train the model without data
from unknown classes and identify them in the inference stage. Basically, we
propose to solve open-set point cloud learning using a novel Point Cut-and-Mix
mechanism consisting of Unknown-Point Simulator and Unknown-Point Estimator
modules. Specifically, we use the Unknown-Point Simulator to simulate
out-of-distribution data in the training stage by manipulating the geometric
context of partial known data. Based on this, the Unknown-Point Estimator
module learns to exploit the point cloud's feature context for discriminating
the known and unknown data. Extensive experiments show the plausibility of
open-set point cloud learning and the effectiveness of our proposed solutions.
Our code is available at \url{https://github.com/ShiQiu0419/pointcam}.
|
[
{
"version": "v1",
"created": "Mon, 5 Dec 2022 03:53:51 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 04:21:17 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Hong",
"Jie",
""
],
[
"Qiu",
"Shi",
""
],
[
"Li",
"Weihao",
""
],
[
"Anwar",
"Saeed",
""
],
[
"Harandi",
"Mehrtash",
""
],
[
"Barnes",
"Nick",
""
],
[
"Petersson",
"Lars",
""
]
] |
new_dataset
| 0.980427 |
2303.13796
|
Lei Yang
|
Wenjia Wang, Yongtao Ge, Haiyi Mei, Zhongang Cai, Qingping Sun, Yanjun
Wang, Chunhua Shen, Lei Yang, Taku Komura
|
Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh
Reconstruction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As it is hard to calibrate single-view RGB images in the wild, existing 3D
human mesh reconstruction (3DHMR) methods either use a constant large focal
length or estimate one based on the background environment context, which can
not tackle the problem of the torso, limb, hand or face distortion caused by
perspective camera projection when the camera is close to the human body. The
naive focal length assumptions can harm this task with the incorrectly
formulated projection matrices. To solve this, we propose Zolly, the first
3DHMR method focusing on perspective-distorted images. Our approach begins with
analysing the reason for perspective distortion, which we find is mainly caused
by the relative location of the human body to the camera center. We propose a
new camera model and a novel 2D representation, termed distortion image, which
describes the 2D dense distortion scale of the human body. We then estimate the
distance from distortion scale features rather than environment context
features. Afterwards, we integrate the distortion feature with image features
to reconstruct the body mesh. To formulate the correct projection matrix and
locate the human body position, we simultaneously use perspective and
weak-perspective projection loss. Since existing datasets could not handle this
task, we propose the first synthetic dataset PDHuman and extend two real-world
datasets tailored for this task, all containing perspective-distorted human
images. Extensive experiments show that Zolly outperforms existing
state-of-the-art methods on both perspective-distorted datasets and the
standard benchmark (3DPW).
|
[
{
"version": "v1",
"created": "Fri, 24 Mar 2023 04:22:41 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2023 16:32:11 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Aug 2023 16:18:35 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Wang",
"Wenjia",
""
],
[
"Ge",
"Yongtao",
""
],
[
"Mei",
"Haiyi",
""
],
[
"Cai",
"Zhongang",
""
],
[
"Sun",
"Qingping",
""
],
[
"Wang",
"Yanjun",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Yang",
"Lei",
""
],
[
"Komura",
"Taku",
""
]
] |
new_dataset
| 0.997485 |
2303.15871
|
Manan Tayal
|
Manan Tayal, Shishir Kolathaya
|
Control Barrier Functions in Dynamic UAVs for Kinematic Obstacle
Avoidance: A Collision Cone Approach
|
Submitted to 2023 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS). 8 pages, 9 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned aerial vehicles (UAVs), specifically quadrotors, have revolutionized
various industries with their maneuverability and versatility, but their safe
operation in dynamic environments heavily relies on effective collision
avoidance techniques. This paper introduces a novel technique for safely
navigating a quadrotor along a desired route while avoiding kinematic
obstacles. The proposed approach employs control barrier functions and utilizes
collision cones to ensure that the quadrotor's velocity and the obstacle's
velocity always point away from each other. In particular, we propose a new
constraint formulation that ensures that the relative velocity between the
quadrotor and the obstacle always avoids a cone of vectors that may lead to a
collision. By showing that the proposed constraint is a valid control barrier
function (CBFs) for quadrotors, we are able to leverage on its real-time
implementation via Quadratic Programs (QPs), called the CBF-QPs. We validate
the effectiveness of the proposed CBF-QPs by demonstrating collision avoidance
with moving obstacles under multiple scenarios, as shown in the PyBullet
simulator. Furthermore, we compare the proposed approach with CBF-QPs from the
literature, especially the well-known higher-order CBF-QPs (HO-CBF-QPs), and
show that they are more conservative than the proposed approach. This
comparison is also shown in simulation in detail.
|
[
{
"version": "v1",
"created": "Tue, 28 Mar 2023 10:26:30 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Tayal",
"Manan",
""
],
[
"Kolathaya",
"Shishir",
""
]
] |
new_dataset
| 0.957362 |
2306.01458
|
Feng Zheng
|
Feng Zheng, Hongkang Yu, Chenchen Wang, Luyang Sun, Qingqing Wu and
Yijian Chen
|
Extremely Large-scale Array Systems: Near-Field Codebook Design and
Performance Analysis
| null | null | null | null |
cs.IT cs.SY eess.SP eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Extremely Large-scale Array (ELAA) promises to deliver ultra-high data rates
with increased antenna elements. However, increasing antenna elements leads to
a wider realm of near-field, which challenges the traditional design of
codebooks. In this paper, we propose novel near-field codebook schemes based on
the fitting formula of codewords' quantization performance. First, we analyze
the quantization performance properties of uniform linear array (ULA) and
uniform planar array (UPA) codewords. Our findings reveal an intriguing
property: the correlation formula for ULA codewords can be represented by the
elliptic formula, while the correlation formula for UPA codewords can be
approximated using the ellipsoid formula. Building on this insight, we propose
a ULA uniform codebook that maximizes the minimum correlation based on the
derived formula. Moreover, we introduce a ULA dislocation codebook to further
reduce quantization overhead. Continuing our exploration, we propose UPA
uniform and dislocation codebook schemes. Our investigation demonstrates that
oversampling in the angular domain offers distinct advantages, achieving
heightened accuracy while minimizing overhead in quantifying near-field
channels. Numerical results demonstrate the appealing advantages of the
proposed codebook over existing methods in decreasing quantization overhead and
increasing quantization accuracy.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 11:36:02 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 11:29:48 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Zheng",
"Feng",
""
],
[
"Yu",
"Hongkang",
""
],
[
"Wang",
"Chenchen",
""
],
[
"Sun",
"Luyang",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Chen",
"Yijian",
""
]
] |
new_dataset
| 0.998371 |
2306.01891
|
Abanob Soliman
|
Abanob Soliman, Fabien Bonardi, D\'esir\'e Sidib\'e, Samia Bouchafa
|
DH-PTAM: A Deep Hybrid Stereo Events-Frames Parallel Tracking And
Mapping System
|
9 pages, 9 figures and 4 tables
| null | null | null |
cs.CV cs.RO eess.IV eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a robust approach for a visual parallel tracking and
mapping (PTAM) system that excels in challenging environments. Our proposed
method combines the strengths of heterogeneous multi-modal visual sensors,
including stereo event-based and frame-based sensors, in a unified reference
frame through a novel spatio-temporal synchronization of stereo visual frames
and stereo event streams. We employ deep learning-based feature extraction and
description for estimation to enhance robustness further. We also introduce an
end-to-end parallel tracking and mapping optimization layer complemented by a
simple loop-closure algorithm for efficient SLAM behavior. Through
comprehensive experiments on both small-scale and large-scale real-world
sequences of VECtor and TUM-VIE benchmarks, our proposed method (DH-PTAM)
demonstrates superior performance in terms of robustness and accuracy in
adverse conditions, especially in large-scale HDR scenarios. Our
implementation's research-based Python API is publicly available on GitHub for
further research and development: https://github.com/AbanobSoliman/DH-PTAM.
|
[
{
"version": "v1",
"created": "Fri, 2 Jun 2023 19:52:13 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 21:29:03 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Soliman",
"Abanob",
""
],
[
"Bonardi",
"Fabien",
""
],
[
"Sidibé",
"Désiré",
""
],
[
"Bouchafa",
"Samia",
""
]
] |
new_dataset
| 0.983682 |
2306.08713
|
Chiara Plizzari
|
Chiara Plizzari, Toby Perrett, Barbara Caputo, Dima Damen
|
What can a cook in Italy teach a mechanic in India? Action Recognition
Generalisation Over Scenarios and Locations
|
Accepted at ICCV 2023. Project page:
https://chiaraplizz.github.io/what-can-a-cook/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose and address a new generalisation problem: can a model trained for
action recognition successfully classify actions when they are performed within
a previously unseen scenario and in a previously unseen location? To answer
this question, we introduce the Action Recognition Generalisation Over
scenarios and locations dataset (ARGO1M), which contains 1.1M video clips from
the large-scale Ego4D dataset, across 10 scenarios and 13 locations. We
demonstrate recognition models struggle to generalise over 10 proposed test
splits, each of an unseen scenario in an unseen location. We thus propose CIR,
a method to represent each video as a Cross-Instance Reconstruction of videos
from other domains. Reconstructions are paired with text narrations to guide
the learning of a domain generalisable representation. We provide extensive
analysis and ablations on ARGO1M that show CIR outperforms prior domain
generalisation works on all test splits. Code and data:
https://chiaraplizz.github.io/what-can-a-cook/.
|
[
{
"version": "v1",
"created": "Wed, 14 Jun 2023 19:31:50 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 10:06:59 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Plizzari",
"Chiara",
""
],
[
"Perrett",
"Toby",
""
],
[
"Caputo",
"Barbara",
""
],
[
"Damen",
"Dima",
""
]
] |
new_dataset
| 0.999686 |
2306.15401
|
Zheng Lian
|
Zheng Lian, Licai Sun, Mingyu Xu, Haiyang Sun, Ke Xu, Zhuofan Wen,
Shun Chen, Bin Liu, Jianhua Tao
|
Explainable Multimodal Emotion Reasoning
| null | null | null | null |
cs.MM cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal emotion recognition is an active research topic in artificial
intelligence. Its primary objective is to integrate multi-modalities (such as
acoustic, visual, and lexical clues) to identify human emotional states.
Current works generally assume accurate emotion labels for benchmark datasets
and focus on developing more effective architectures. But due to the inherent
subjectivity of emotions, existing datasets often lack high annotation
consistency, resulting in potentially inaccurate labels. Consequently, models
built on these datasets may struggle to meet the demands of practical
applications. To address this issue, it is crucial to enhance the reliability
of emotion annotations. In this paper, we propose a novel task called
``\textbf{Explainable Multimodal Emotion Reasoning (EMER)}''. In contrast to
previous works that primarily focus on predicting emotions, EMER takes a step
further by providing explanations for these predictions. The prediction is
considered correct as long as the reasoning process behind the predicted
emotion is plausible. This paper presents our initial efforts on EMER, where we
introduce a benchmark dataset, establish baseline models, and define evaluation
metrics. Meanwhile, we observe the necessity of integrating multi-faceted
capabilities to deal with EMER. Therefore, we propose the first multimodal
large language model (LLM) in affective computing, called \textbf{AffectGPT}.
We aim to tackle the long-standing challenge of label ambiguity and chart a
path toward more reliable techniques. Furthermore, EMER offers an opportunity
to evaluate the audio-video-text understanding capabilities of recent
multimodal LLMs. To facilitate further research, we make the code and data
available at: https://github.com/zeroQiaoba/AffectGPT.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 11:54:57 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Jun 2023 10:26:05 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Aug 2023 00:27:48 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Lian",
"Zheng",
""
],
[
"Sun",
"Licai",
""
],
[
"Xu",
"Mingyu",
""
],
[
"Sun",
"Haiyang",
""
],
[
"Xu",
"Ke",
""
],
[
"Wen",
"Zhuofan",
""
],
[
"Chen",
"Shun",
""
],
[
"Liu",
"Bin",
""
],
[
"Tao",
"Jianhua",
""
]
] |
new_dataset
| 0.996095 |
2306.17810
|
Melissa Dell
|
Emily Silcock, Melissa Dell
|
A Massive Scale Semantic Similarity Dataset of Historical English
| null | null | null | null |
cs.CL econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by/4.0/
|
A diversity of tasks use language models trained on semantic similarity data.
While there are a variety of datasets that capture semantic similarity, they
are either constructed from modern web data or are relatively small datasets
created in the past decade by human annotators. This study utilizes a novel
source, newly digitized articles from off-copyright, local U.S. newspapers, to
assemble a massive-scale semantic similarity dataset spanning 70 years from
1920 to 1989 and containing nearly 400M positive semantic similarity pairs.
Historically, around half of articles in U.S. local newspapers came from
newswires like the Associated Press. While local papers reproduced articles
from the newswire, they wrote their own headlines, which form abstractive
summaries of the associated articles. We associate articles and their headlines
by exploiting document layouts and language understanding. We then use deep
neural methods to detect which articles are from the same underlying source, in
the presence of substantial noise and abridgement. The headlines of reproduced
articles form positive semantic similarity pairs. The resulting publicly
available HEADLINES dataset is significantly larger than most existing semantic
similarity datasets and covers a much longer span of time. It will facilitate
the application of contrastively trained semantic similarity models to a
variety of tasks, including the study of semantic change across space and time.
|
[
{
"version": "v1",
"created": "Fri, 30 Jun 2023 17:16:04 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 01:22:36 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Silcock",
"Emily",
""
],
[
"Dell",
"Melissa",
""
]
] |
new_dataset
| 0.993102 |
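The record above notes that the HEADLINES pairs are intended for contrastively training semantic similarity models. The sketch below is a generic illustration using the sentence-transformers library with in-batch negatives; the example pairs are invented and the model choice is an assumption, not the authors' setup.

```python
# Hedged sketch: contrastive fine-tuning on positive headline pairs with
# sentence-transformers (invented pairs; not the authors' training recipe).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

pairs = [
    ("Storm Damages Midwest Crops", "Heavy storms hit corn harvest across Midwest"),
    ("City Council Approves New Bridge", "Council votes to fund river crossing project"),
]
examples = [InputExample(texts=[a, b]) for a, b in pairs]
loader = DataLoader(examples, shuffle=True, batch_size=2)

model = SentenceTransformer("all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model)  # other in-batch pairs act as negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("headline-similarity-model")
```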
2307.11482
|
Qiao Yan
|
Yihan Wang, Qiao Yan and Yi Wang
|
R2Det: Redemption from Range-view for Accurate 3D Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR-based 3D object detection is of paramount importance for autonomous
driving. Recent trends show a remarkable improvement for bird's-eye-view (BEV)
based and point-based methods as they demonstrate superior performance compared
to range-view counterparts. This paper presents an insight that leverages
range-view representation to enhance 3D points for accurate 3D object
detection. Specifically, we introduce a Redemption from Range-view Module
(R2M), a plug-and-play approach for 3D surface texture enhancement from the 2D
range view to the 3D point view. R2M comprises BasicBlock for 2D feature
extraction, Hierarchical-dilated (HD) Meta Kernel for expanding the 3D
receptive field, and Feature Points Redemption (FPR) for recovering 3D surface
texture information. R2M can be seamlessly integrated into state-of-the-art
LiDAR-based 3D object detectors as preprocessing and achieve appealing
improvement, e.g., 1.39%, 1.67%, and 1.97% mAP improvement on easy, moderate,
and hard difficulty levels of the KITTI val set, respectively. Based on R2M, we
further propose R2Detector (R2Det) with the Synchronous-Grid RoI Pooling for
accurate box refinement. R2Det outperforms existing range-view-based methods by
a significant margin on both the KITTI benchmark and the Waymo Open Dataset.
Codes will be made publicly available.
|
[
{
"version": "v1",
"created": "Fri, 21 Jul 2023 10:36:05 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 05:14:34 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Wang",
"Yihan",
""
],
[
"Yan",
"Qiao",
""
],
[
"Wang",
"Yi",
""
]
] |
new_dataset
| 0.999077 |
2307.12907
|
Zihan Wang
|
Zihan Wang and Xiangyang Li and Jiahao Yang and Yeqi Liu and Shuqiang
Jiang
|
GridMM: Grid Memory Map for Vision-and-Language Navigation
|
Accepted by ICCV 2023. The code is available at
https://github.com/MrZihan/GridMM
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-and-language navigation (VLN) enables the agent to navigate to a
remote location following the natural language instruction in 3D environments.
To represent the previously visited environment, most approaches for VLN
implement memory using recurrent states, topological maps, or top-down semantic
maps. In contrast to these approaches, we build the top-down egocentric and
dynamically growing Grid Memory Map (i.e., GridMM) to structure the visited
environment. From a global perspective, historical observations are projected
into a unified grid map in a top-down view, which can better represent the
spatial relations of the environment. From a local perspective, we further
propose an instruction relevance aggregation method to capture fine-grained
visual clues in each grid region. Extensive experiments are conducted on the
REVERIE, R2R, and SOON datasets in discrete environments, and on the R2R-CE
dataset in continuous environments, showing the superiority of our proposed
method.
|
[
{
"version": "v1",
"created": "Mon, 24 Jul 2023 16:02:42 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Jul 2023 04:05:58 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Aug 2023 10:37:21 GMT"
},
{
"version": "v4",
"created": "Thu, 24 Aug 2023 04:42:35 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Wang",
"Zihan",
""
],
[
"Li",
"Xiangyang",
""
],
[
"Yang",
"Jiahao",
""
],
[
"Liu",
"Yeqi",
""
],
[
"Jiang",
"Shuqiang",
""
]
] |
new_dataset
| 0.998058 |
2308.08043
|
Lang Cao
|
Lang Cao
|
DiagGPT: An LLM-based Chatbot with Automatic Topic Management for
Task-Oriented Dialogue
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Large Language Models (LLMs), such as ChatGPT, are becoming increasingly
sophisticated, demonstrating capabilities that closely resemble those of
humans. These AI models are playing an essential role in assisting humans with
a wide array of tasks in daily life. A significant application of AI is its use
as a chat agent, responding to human inquiries across various domains. Current
LLMs have shown proficiency in answering general questions. However, basic
question-answering dialogue often falls short in complex diagnostic scenarios,
such as legal or medical consultations. These scenarios typically necessitate
Task-Oriented Dialogue (TOD), wherein an AI chat agent needs to proactively
pose questions and guide users towards specific task completion. Previous
fine-tuning models have underperformed in TOD, and current LLMs do not
inherently possess this capability. In this paper, we introduce DiagGPT
(Dialogue in Diagnosis GPT), an innovative method that extends LLMs to TOD
scenarios. Our experiments reveal that DiagGPT exhibits outstanding performance
in conducting TOD with users, demonstrating its potential for practical
applications.
|
[
{
"version": "v1",
"created": "Tue, 15 Aug 2023 21:14:09 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 05:22:44 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Cao",
"Lang",
""
]
] |
new_dataset
| 0.996472 |
2308.10559
|
Leila Ismail Prof.
|
Leila Ismail and Rajkumar Buyya
|
Metaverse: A Vision, Architectural Elements, and Future Directions for
Scalable and Realtime Virtual Worlds
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the emergence of Cloud computing, Internet of Things-enabled
Human-Computer Interfaces, Generative Artificial Intelligence, and
high-accurate Machine and Deep-learning recognition and predictive models,
along with the Post Covid-19 proliferation of social networking, and remote
communications, the Metaverse has gained a lot of popularity. The Metaverse has
the potential to extend the physical world using virtual and augmented reality so
the users can interact seamlessly with the real and virtual worlds using
avatars and holograms. It has the potential to impact people in the way they
interact on social media, collaborate in their work, perform marketing and
business, teach, learn, and even access personalized healthcare. Several works
in the literature examine Metaverse in terms of hardware wearable devices, and
virtual reality gaming applications. However, the requirements for realizing the
Metaverse in real time and at a large scale have yet to be examined for the
technology to be usable. To address this limitation, this paper presents the
temporal evolution of Metaverse definitions and captures its evolving
requirements. Consequently, we provide insights into Metaverse requirements. In
addition to enabling technologies, we lay out architectural elements for
scalable, reliable, and efficient Metaverse systems, and a classification of
existing Metaverse applications along with proposing required future research
directions.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 08:23:10 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 12:54:50 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Ismail",
"Leila",
""
],
[
"Buyya",
"Rajkumar",
""
]
] |
new_dataset
| 0.996668 |
2308.11381
|
Zichen Yu
|
Zichen Yu, Quanli Liu, Wei Wang, Liyong Zhang, Xiaoguang Zhao
|
DALNet: A Rail Detection Network Based on Dynamic Anchor Line
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rail detection is one of the key factors for intelligent trains. In this paper,
motivated by anchor line-based lane detection methods, we propose a rail
detection network called DALNet based on dynamic anchor line. Aiming to solve
the problem that the predefined anchor line is image agnostic, we design a
novel dynamic anchor line mechanism. It utilizes a dynamic anchor line
generator to dynamically generate an appropriate anchor line for each rail
instance based on the position and shape of the rails in the input image. These
dynamically generated anchor lines can be considered as better position
references to accurately localize the rails than the predefined anchor lines.
In addition, we present a challenging urban rail detection dataset DL-Rail with
high-quality annotations and scenario diversity. DL-Rail contains 7000 pairs of
images and annotations along with scene tags, and it is expected to encourage
the development of rail detection. We extensively compare DALNet with many
competitive lane methods. The results show that our DALNet achieves
state-of-the-art performance on our DL-Rail rail detection dataset and the
popular Tusimple and LLAMAS lane detection benchmarks. The code will be
released at https://github.com/Yzichen/mmLaneDet.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 12:12:59 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 00:34:54 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Yu",
"Zichen",
""
],
[
"Liu",
"Quanli",
""
],
[
"Wang",
"Wei",
""
],
[
"Zhang",
"Liyong",
""
],
[
"Zhao",
"Xiaoguang",
""
]
] |
new_dataset
| 0.997065 |
2308.11897
|
Jos\'e Antonio Riaza Valverde
|
Jos\'e Antonio Riaza Valverde
|
Tau Prolog: A Prolog interpreter for the Web
|
21 pages, 3 figures, under consideration in Theory and Practice of
Logic Programming (TPLP)
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Tau Prolog is a client-side Prolog interpreter fully implemented in
JavaScript, which aims at implementing the ISO Prolog Standard. Tau Prolog has
been developed to be used with either Node.js or a browser seamlessly, and
therefore, it has been developed following a non-blocking, callback-based
approach to avoid blocking web browsers. Taking the best from JavaScript and
Prolog, Tau Prolog allows the programmer to handle browser events and
manipulate the Document Object Model (DOM) of a web using Prolog predicates. In
this paper we describe the architecture of Tau Prolog and its main packages for
interacting with the Web, and we present its programming environment. Under
consideration in Theory and Practice of Logic Programming (TPLP).
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 03:45:42 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Valverde",
"José Antonio Riaza",
""
]
] |
new_dataset
| 0.994449 |
2308.12213
|
Hualiang Wang
|
Hualiang Wang, Yi Li, Huifeng Yao, Xiaomeng Li
|
CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No
|
ICCV 2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Out-of-distribution (OOD) detection refers to training the model on an
in-distribution (ID) dataset to classify whether the input images come from
unknown classes. Considerable effort has been invested in designing various OOD
detection methods based on either convolutional neural networks or
transformers. However, zero-shot OOD detection methods driven by CLIP, which
only require class names for ID, have received less attention. This paper
presents a novel method, namely CLIP saying no (CLIPN), which empowers the
logic of saying no within CLIP. Our key motivation is to equip CLIP with the
capability of distinguishing OOD and ID samples using positive-semantic prompts
and negation-semantic prompts. Specifically, we design a novel learnable no
prompt and a no text encoder to capture negation semantics within images.
Subsequently, we introduce two loss functions: the image-text binary-opposite
loss and the text semantic-opposite loss, which we use to teach CLIPN to
associate images with no prompts, thereby enabling it to identify unknown
samples. Furthermore, we propose two threshold-free inference algorithms to
perform OOD detection by utilizing negation semantics from no prompts and the
text encoder. Experimental results on 9 benchmark datasets (3 ID datasets and 6
OOD datasets) for the OOD detection task demonstrate that CLIPN, based on
ViT-B-16, outperforms 7 widely used algorithms by at least 2.34% and 11.64% in
terms of AUROC and FPR95 for zero-shot OOD detection on ImageNet-1K. Our CLIPN
can serve as a solid foundation for effectively leveraging CLIP in downstream
OOD tasks. The code is available on https://github.com/xmed-lab/CLIPN.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 15:51:36 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Aug 2023 00:48:47 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Wang",
"Hualiang",
""
],
[
"Li",
"Yi",
""
],
[
"Yao",
"Huifeng",
""
],
[
"Li",
"Xiaomeng",
""
]
] |
new_dataset
| 0.999256 |
2308.12318
|
Wenkai Zhang
|
Wenkai Zhang, Bo Wu, Junwei Cheng, Hailong Zhou, Jianji Dong, Dongmei
Huang, P. K. A. Wai and Xinliang Zhang
|
Eight-input optical programmable logic array enabled by parallel
spectrum modulation
| null | null | null | null |
cs.ET physics.optics
|
http://creativecommons.org/licenses/by/4.0/
|
Despite over 40 years of development in optical logic computing, studies have
still struggled to support more than four operands, since the high parallelism
of light has not been fully leveraged, blocked by the optical nonlinearity and
redundant input modulation of existing methods. Here, we
propose a scalable multi-input optical programmable logic array (PLA) with
minimal logical input, enabled by parallel spectrum modulation. By making full
use of the wavelength resource, an eight-input PLA is experimentally
demonstrated, and there are 2^256 possible combinations of generated logic
gates. Various complex logic functions, such as an 8-256 decoder, 4-bit comparator,
adder, and multiplier, are experimentally demonstrated by leveraging the PLA.
The scale of PLA can be further extended by fully using the dimensions of
wavelength and space. As an example, a nine-input PLA is implemented to realize
the two-dimensional optical cellular automaton for the first time and perform
Conway's Game of Life to simulate the evolutionary process of cells. Our work
significantly alleviates the challenge of extensibility of optical logic
devices, opening up new avenues for future large-scale, high-speed and
energy-efficient optical digital computing.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 11:21:16 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Zhang",
"Wenkai",
""
],
[
"Wu",
"Bo",
""
],
[
"Cheng",
"Junwei",
""
],
[
"Zhou",
"Hailong",
""
],
[
"Dong",
"Jianji",
""
],
[
"Huang",
"Dongmei",
""
],
[
"Wai",
"P. K. A.",
""
],
[
"Zhang",
"Xinliang",
""
]
] |
new_dataset
| 0.992926 |
2308.12329
|
Anders Miltner
|
Anders Miltner and Devon Loehr and Arnold Mong and Kathleen Fisher and
David Walker
|
Saggitarius: A DSL for Specifying Grammatical Domains
|
OOPSLA 2023
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Common data types like dates, addresses, phone numbers and tables can have
multiple textual representations, and many heavily-used languages, such as SQL,
come in several dialects. These variations can cause data to be misinterpreted,
leading to silent data corruption, failure of data processing systems, or even
security vulnerabilities. Saggitarius is a new language and system designed to
help programmers reason about the format of data, by describing grammatical
domains -- that is, sets of context-free grammars that describe the many
possible representations of a datatype. We describe the design of Saggitarius
via example and provide a relational semantics. We show how Saggitarius may be
used to analyze a data set: given example data, it uses an algorithm based on
semi-ring parsing and MaxSAT to infer which grammar in a given domain best
matches that data. We evaluate the effectiveness of the algorithm on a
benchmark suite of 110 example problems, and we demonstrate that our system
typically returns a satisfying grammar within a few seconds with only a small
number of examples. We also delve deeper into a more extensive case study on
using Saggitarius for CSV dialect detection. Despite being general-purpose, we
find that Saggitarius offers comparable results to hand-tuned, specialized
tools; in the case of CSV, it infers grammars for 84% of benchmarks within 60
seconds, and has comparable accuracy to custom-built dialect detection tools.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 17:57:30 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Miltner",
"Anders",
""
],
[
"Loehr",
"Devon",
""
],
[
"Mong",
"Arnold",
""
],
[
"Fisher",
"Kathleen",
""
],
[
"Walker",
"David",
""
]
] |
new_dataset
| 0.999017 |
2308.12370
|
Subhrajyoti Dasgupta
|
Sanjoy Chowdhury, Sreyan Ghosh, Subhrajyoti Dasgupta, Anton
Ratnarajah, Utkarsh Tyagi and Dinesh Manocha
|
AdVerb: Visually Guided Audio Dereverberation
|
Accepted at ICCV 2023. For project page, see
https://gamma.umd.edu/researchdirections/speech/adverb
| null | null | null |
cs.CV cs.MM cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
We present AdVerb, a novel audio-visual dereverberation framework that uses
visual cues in addition to the reverberant sound to estimate clean audio.
Although audio-only dereverberation is a well-studied problem, our approach
incorporates the complementary visual modality to perform audio
dereverberation. Given an image of the environment where the reverberated sound
signal has been recorded, AdVerb employs a novel geometry-aware cross-modal
transformer architecture that captures scene geometry and audio-visual
cross-modal relationship to generate a complex ideal ratio mask, which, when
applied to the reverberant audio, predicts the clean sound. The effectiveness of
our method is demonstrated through extensive quantitative and qualitative
evaluations. Our approach significantly outperforms traditional audio-only and
audio-visual baselines on three downstream tasks: speech enhancement, speech
recognition, and speaker verification, with relative improvements in the range
of 18% - 82% on the LibriSpeech test-clean set. We also achieve highly
satisfactory RT60 error scores on the AVSpeech dataset.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 18:20:59 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Chowdhury",
"Sanjoy",
""
],
[
"Ghosh",
"Sreyan",
""
],
[
"Dasgupta",
"Subhrajyoti",
""
],
[
"Ratnarajah",
"Anton",
""
],
[
"Tyagi",
"Utkarsh",
""
],
[
"Manocha",
"Dinesh",
""
]
] |
new_dataset
| 0.957976 |
2308.12380
|
Yufeng Yin
|
Yufeng Yin, Di Chang, Guoxian Song, Shen Sang, Tiancheng Zhi, Jing
Liu, Linjie Luo, Mohammad Soleymani
|
FG-Net: Facial Action Unit Detection with Generalizable Pyramidal
Features
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic detection of facial Action Units (AUs) allows for objective facial
expression analysis. Due to the high cost of AU labeling and the limited size
of existing benchmarks, previous AU detection methods tend to overfit the
dataset, resulting in a significant performance loss when evaluated across
corpora. To address this problem, we propose FG-Net for generalizable facial
action unit detection. Specifically, FG-Net extracts feature maps from a
StyleGAN2 model pre-trained on a large and diverse face image dataset. Then,
these features are used to detect AUs with a Pyramid CNN Interpreter, making
the training efficient and capturing essential local features. The proposed
FG-Net achieves a strong generalization ability for heatmap-based AU detection
thanks to the generalizable and semantic-rich features extracted from the
pre-trained generative model. Extensive experiments are conducted to evaluate
within- and cross-corpus AU detection with the widely-used DISFA and BP4D
datasets. Compared with the state-of-the-art, the proposed method achieves
superior cross-domain performance while maintaining competitive within-domain
performance. In addition, FG-Net is data-efficient and achieves competitive
performance even when trained on 1000 samples. Our code will be released at
\url{https://github.com/ihp-lab/FG-Net}
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 18:51:11 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Yin",
"Yufeng",
""
],
[
"Chang",
"Di",
""
],
[
"Song",
"Guoxian",
""
],
[
"Sang",
"Shen",
""
],
[
"Zhi",
"Tiancheng",
""
],
[
"Liu",
"Jing",
""
],
[
"Luo",
"Linjie",
""
],
[
"Soleymani",
"Mohammad",
""
]
] |
new_dataset
| 0.998245 |
2308.12420
|
Walter Hernandez
|
Walter Hernandez, Kamil Tylinski, Alastair Moore, Niall Roche, Nikhil
Vadgama, Horst Treiblmaier, Jiangbo Shangguan, Paolo Tasca, and Jiahua Xu
|
Evolution of ESG-focused DLT Research: An NLP Analysis of the Literature
| null | null | null | null |
cs.IR cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed Ledger Technologies (DLTs) have rapidly evolved, necessitating
comprehensive insights into their diverse components. However, a systematic
literature review that emphasizes the Environmental, Sustainability, and
Governance (ESG) components of DLT remains lacking. To bridge this gap, we
selected 107 seed papers to build a citation network of 63,083 references and
refined it to a corpus of 24,539 publications for analysis. Then, we labeled
the named entities in 46 papers according to twelve top-level categories
derived from an established technology taxonomy and enhanced the taxonomy by
pinpointing DLT's ESG elements. Leveraging transformer-based language models,
we fine-tuned a pre-trained language model for a Named Entity Recognition (NER)
task using our labeled dataset. We used our fine-tuned language model to
distill the corpus to 505 key papers, facilitating a literature review via
named entities and temporal graph analysis on DLT evolution in the context of
ESG. Our contributions are a methodology to conduct a machine learning-driven
systematic literature review in the DLT field, placing a special emphasis on
ESG aspects. Furthermore, we present a first-of-its-kind NER dataset, composed
of 54,808 named entities, designed for DLT and ESG-related explorations.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 20:42:32 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Hernandez",
"Walter",
""
],
[
"Tylinski",
"Kamil",
""
],
[
"Moore",
"Alastair",
""
],
[
"Roche",
"Niall",
""
],
[
"Vadgama",
"Nikhil",
""
],
[
"Treiblmaier",
"Horst",
""
],
[
"Shangguan",
"Jiangbo",
""
],
[
"Tasca",
"Paolo",
""
],
[
"Xu",
"Jiahua",
""
]
] |
new_dataset
| 0.996533 |
2308.12466
|
Akshat Gupta
|
Akshat Gupta
|
Are ChatGPT and GPT-4 Good Poker Players? -- A Pre-Flop Analysis
| null | null | null | null |
cs.CL cs.AI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since the introduction of ChatGPT and GPT-4, these models have been tested
across a large number of tasks. Their adeptness across domains is evident, but
their aptitude in playing games and specifically their aptitude in the realm of
poker has remained unexplored. Poker is a game that requires decision making
under uncertainty and incomplete information. In this paper, we put ChatGPT and
GPT-4 through the poker test and evaluate their poker skills. Our findings
reveal that while both models display an advanced understanding of poker,
encompassing concepts like the valuation of starting hands, playing positions
and other intricacies of game theory optimal (GTO) poker, both ChatGPT and
GPT-4 are NOT game theory optimal poker players.
Through a series of experiments, we first discover the characteristics of
optimal prompts and model parameters for playing poker with these models. Our
observations then unveil the distinct playing personas of the two models. We
first conclude that GPT-4 is a more advanced poker player than ChatGPT. This
exploration then sheds light on the divergent poker tactics of the two models:
ChatGPT's conservativeness juxtaposed against GPT-4's aggression. In poker
vernacular, when tasked to play GTO poker, ChatGPT plays like a Nit, which
means that it has a propensity to only engage with premium hands and folds a
majority of hands. When subjected to the same directive, GPT-4 plays like a
maniac, showcasing a loose and aggressive style of play. Both strategies,
although relatively advanced, are not game theory optimal.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 23:16:35 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Gupta",
"Akshat",
""
]
] |
new_dataset
| 0.957771 |
2308.12477
|
Melissa Dell
|
Melissa Dell, Jacob Carlson, Tom Bryan, Emily Silcock, Abhishek Arora,
Zejiang Shen, Luca D'Amico-Wong, Quan Le, Pablo Querubin, Leander Heldring
|
American Stories: A Large-Scale Structured Text Dataset of Historical
U.S. Newspapers
| null | null | null | null |
cs.CL cs.CV econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by/4.0/
|
Existing full text datasets of U.S. public domain newspapers do not recognize
the often complex layouts of newspaper scans, and as a result the digitized
content scrambles texts from articles, headlines, captions, advertisements, and
other layout regions. OCR quality can also be low. This study develops a novel,
deep learning pipeline for extracting full article texts from newspaper images
and applies it to the nearly 20 million scans in Library of Congress's public
domain Chronicling America collection. The pipeline includes layout detection,
legibility classification, custom OCR, and association of article texts
spanning multiple bounding boxes. To achieve high scalability, it is built with
efficient architectures designed for mobile phones. The resulting American
Stories dataset provides high quality data that could be used for pre-training
a large language model to achieve better understanding of historical English
and historical world knowledge. The dataset could also be added to the external
database of a retrieval-augmented language model to make historical information
- ranging from interpretations of political events to minutiae about the lives
of people's ancestors - more widely accessible. Furthermore, structured article
texts facilitate using transformer-based methods for popular social science
applications like topic classification, detection of reproduced content, and
news story clustering. Finally, American Stories provides a massive silver
quality dataset for innovating multimodal layout analysis models and other
multimodal applications.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 00:24:42 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Dell",
"Melissa",
""
],
[
"Carlson",
"Jacob",
""
],
[
"Bryan",
"Tom",
""
],
[
"Silcock",
"Emily",
""
],
[
"Arora",
"Abhishek",
""
],
[
"Shen",
"Zejiang",
""
],
[
"D'Amico-Wong",
"Luca",
""
],
[
"Le",
"Quan",
""
],
[
"Querubin",
"Pablo",
""
],
[
"Heldring",
"Leander",
""
]
] |
new_dataset
| 0.998977 |
2308.12490
|
Yu-Wen Chen
|
Yu-Wen Chen, Zhou Yu, Julia Hirschberg
|
MultiPA: a multi-task speech pronunciation assessment system for a
closed and open response scenario
| null | null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The design of automatic speech pronunciation assessment can be categorized
into closed and open response scenarios, each with strengths and limitations. A
system with the ability to function in both scenarios can cater to diverse
learning needs and provide a more precise and holistic assessment of
pronunciation skills. In this study, we propose a Multi-task Pronunciation
Assessment model called MultiPA. MultiPA provides an alternative to Kaldi-based
systems in that it has simpler format requirements and better compatibility
with other neural network models. Compared with previous open response systems,
MultiPA provides a wider range of evaluations, encompassing assessments at both
the sentence and word-level. Our experimental results show that MultiPA
achieves comparable performance when working in closed response scenarios and
maintains more robust performance when directly used for open responses.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 01:24:09 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Chen",
"Yu-Wen",
""
],
[
"Yu",
"Zhou",
""
],
[
"Hirschberg",
"Julia",
""
]
] |
new_dataset
| 0.995765 |
2308.12537
|
Weikun Zhang
|
Zichao Dong, Weikun Zhang, Xufeng Huang, Hang Ji, Xin Zhan, Junbo Chen
|
HuBo-VLM: Unified Vision-Language Model designed for HUman roBOt
interaction tasks
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Human-robot interaction is an exciting task that aims to guide robots to
follow instructions from humans. Since a huge gap lies between human natural
language and machine code, end-to-end human-robot interaction models are
fairly challenging. Further, the visual information received from a robot's
sensors is also a hard language for the robot to perceive. In this work,
HuBo-VLM is proposed
to tackle perception tasks associated with human robot interaction including
object detection and visual grounding by a unified transformer based vision
language model. Extensive experiments on the Talk2Car benchmark demonstrate the
effectiveness of our approach. Code would be publicly available in
https://github.com/dzcgaara/HuBo-VLM.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 03:47:27 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Dong",
"Zichao",
""
],
[
"Zhang",
"Weikun",
""
],
[
"Huang",
"Xufeng",
""
],
[
"Ji",
"Hang",
""
],
[
"Zhan",
"Xin",
""
],
[
"Chen",
"Junbo",
""
]
] |
new_dataset
| 0.987992 |
2308.12539
|
Vipul Gupta
|
Vipul Gupta, Pranav Narayanan Venkit, Hugo Lauren\c{c}on, Shomir
Wilson, Rebecca J. Passonneau
|
CALM : A Multi-task Benchmark for Comprehensive Assessment of Language
Model Bias
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
As language models (LMs) become increasingly powerful, it is important to
quantify and compare them for sociodemographic bias with potential for harm.
Prior bias measurement datasets are sensitive to perturbations in their
manually designed templates and are therefore unreliable. To achieve reliability, we
introduce the Comprehensive Assessment of Language Model bias (CALM), a
benchmark dataset to quantify bias in LMs across three tasks. We integrate 16
existing datasets across different domains, such as Wikipedia and news
articles, to filter 224 templates from which we construct a dataset of 78,400
examples. We compare the diversity of CALM with prior datasets on metrics such
as average semantic similarity and variation in template length, and test the
sensitivity to small perturbations. We show that our dataset is more diverse
and reliable than previous datasets, thus better capturing the breadth of
linguistic variation required to reliably evaluate model bias. We evaluate 20
large language models including six prominent families of LMs such as Llama-2.
In two LM series, OPT and Bloom, we found that larger parameter models are more
biased than lower parameter models. We found the T0 series of models to be the
least biased. Furthermore, we noticed a tradeoff between gender and racial bias
with increasing model size in some model series. The code is available at
https://github.com/vipulgupta1011/CALM.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 03:53:55 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Gupta",
"Vipul",
""
],
[
"Venkit",
"Pranav Narayanan",
""
],
[
"Laurençon",
"Hugo",
""
],
[
"Wilson",
"Shomir",
""
],
[
"Passonneau",
"Rebecca J.",
""
]
] |
new_dataset
| 0.999076 |
2308.12545
|
Donald Pinckney
|
Donald Pinckney, Federico Cassano, Arjun Guha, Jonathan Bell
|
npm-follower: A Complete Dataset Tracking the NPM Ecosystem
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software developers typically rely upon a large network of dependencies to
build their applications. For instance, the NPM package repository contains
over 3 million packages and serves tens of billions of downloads weekly.
Understanding the structure and nature of packages, dependencies, and published
code requires datasets that provide researchers with easy access to metadata
and code of packages. However, prior work on NPM dataset construction typically
has two limitations: 1) only metadata is scraped, and 2) packages or versions
that are deleted from NPM cannot be scraped. Over 330,000 versions of packages
were deleted from NPM between July 2022 and May 2023. This data is critical for
researchers as it often pertains to important questions of security and
malware. We present npm-follower, a dataset and crawling architecture which
archives metadata and code of all packages and versions as they are published,
and is thus able to retain data which is later deleted. The dataset currently
includes over 35 million versions of packages, and grows at a rate of about 1
million versions per month. The dataset is designed to be easily used by
researchers answering questions involving either metadata or program analysis.
Both the code and dataset are available at https://dependencies.science.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 04:05:49 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Pinckney",
"Donald",
""
],
[
"Cassano",
"Federico",
""
],
[
"Guha",
"Arjun",
""
],
[
"Bell",
"Jonathan",
""
]
] |
new_dataset
| 0.999832 |
2308.12600
|
Falak Shah
|
Rishit Javia, Falak Shah, and Shivam Dave
|
PoseSync: Robust pose based video synchronization
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
  Pose-based video synchronization can have applications in multiple domains
such as gameplay performance evaluation, choreography or guiding athletes. The
subject's actions could be compared and evaluated side by side against those
performed by professionals. In this paper, we propose an end-to-end pipeline
for synchronizing videos based on pose. The first step crops the region where
the person is present in the image, followed by pose detection on the cropped
image. This is followed by application of Dynamic Time Warping (DTW) on angle/
distance measures between the pose keypoints, leading to a scale- and
shift-invariant pose matching pipeline.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 07:02:15 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Javia",
"Rishit",
""
],
[
"Shah",
"Falak",
""
],
[
"Dave",
"Shivam",
""
]
] |
new_dataset
| 0.971921 |
2308.12614
|
Indrajit Paul
|
Ashok Kumar Das, Indrajit Paul
|
Obstruction characterization of co-TT graphs
|
arXiv admin note: substantial text overlap with arXiv:2206.05917
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Threshold tolerance graphs and their complement graphs (known as co-TT
graphs) were introduced by Monma, Reed and Trotter [24]. Introducing the
concept of negative intervals, Hell et al. [19] defined signed-interval
bigraphs/digraphs and have shown that they are equivalent to several seemingly
different classes of bigraphs/digraphs. They have also shown that co-TT graphs
are equivalent to symmetric signed-interval digraphs. In this paper we
characterize signed-interval bigraphs and signed-interval graphs, respectively,
in terms of their biadjacency matrices and adjacency matrices. Finally, based
on the geometric representation of signed-interval graphs, we have settled the
open problem of the forbidden induced subgraph characterization of co-TT graphs
posed by Monma, Reed and Trotter in the same paper.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 07:26:05 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Das",
"Ashok Kumar",
""
],
[
"Paul",
"Indrajit",
""
]
] |
new_dataset
| 0.99912 |
2308.12646
|
Taras Kucherenko
|
Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor
Nikolov, Mihail Tsakov, Gustav Eje Henter
|
The GENEA Challenge 2023: A large scale evaluation of gesture generation
models in monadic and dyadic settings
|
The first three authors made equal contributions. Accepted for
publication at the ACM International Conference on Multimodal Interaction
(ICMI)
| null | null | null |
cs.HC cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper reports on the GENEA Challenge 2023, in which participating teams
built speech-driven gesture-generation systems using the same speech and motion
dataset, followed by a joint evaluation. This year's challenge provided data on
both sides of a dyadic interaction, allowing teams to generate full-body motion
for an agent given its speech (text and audio) and the speech and motion of the
interlocutor. We evaluated 12 submissions and 2 baselines together with
held-out motion-capture data in several large-scale user studies. The studies
focused on three aspects: 1) the human-likeness of the motion, 2) the
appropriateness of the motion for the agent's own speech whilst controlling for
the human-likeness of the motion, and 3) the appropriateness of the motion for
the behaviour of the interlocutor in the interaction, using a setup that
controls for both the human-likeness of the motion and the agent's own speech.
We found a large span in human-likeness between challenge submissions, with a
few systems rated close to human mocap. Appropriateness seems far from being
solved, with most submissions performing in a narrow range slightly above
chance, far behind natural motion. The effect of the interlocutor is even more
subtle, with submitted systems at best performing barely above chance.
Interestingly, a dyadic system being highly appropriate for agent speech does
not necessarily imply high appropriateness for the interlocutor. Additional
material is available via the project website at
https://svito-zar.github.io/GENEAchallenge2023/ .
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 08:42:06 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Kucherenko",
"Taras",
""
],
[
"Nagy",
"Rajmund",
""
],
[
"Yoon",
"Youngwoo",
""
],
[
"Woo",
"Jieyeon",
""
],
[
"Nikolov",
"Teodor",
""
],
[
"Tsakov",
"Mihail",
""
],
[
"Henter",
"Gustav Eje",
""
]
] |
new_dataset
| 0.998099 |
2308.12712
|
Qingchun Yang
|
Shizhou Zhang, Qingchun Yang, De Cheng, Yinghui Xing, Guoqiang Liang,
Peng Wang, Yanning Zhang
|
Ground-to-Aerial Person Search: Benchmark Dataset and Approach
|
Accepted by ACM MM 2023
| null |
10.1145/3581783.3612105
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we construct a large-scale dataset for Ground-to-Aerial Person
Search, named G2APS, which contains 31,770 images of 260,559 annotated bounding
boxes for 2,644 identities appearing in both of the UAVs and ground
surveillance cameras. To our knowledge, this is the first dataset for
cross-platform intelligent surveillance applications, where the UAVs could work
as a powerful complement for the ground surveillance cameras. To more
realistically simulate the actual cross-platform Ground-to-Aerial surveillance
scenarios, the surveillance cameras are fixed about 2 meters above the ground,
while the UAVs capture videos of persons at different locations, with a variety
of view-angles, flight attitudes and flight modes. Therefore, the dataset has
the following unique characteristics: 1) drastic view-angle changes between
query and gallery person images from cross-platform cameras; 2) diverse
resolutions, poses and views of the person images under 9 rich real-world
scenarios. On the basis of the G2APS benchmark dataset, we present a detailed
analysis of current two-step and end-to-end person search methods, and
further propose a simple yet effective knowledge distillation scheme on the
head of the ReID network, which achieves state-of-the-art performances on both
of the G2APS and the previous two public person search datasets, i.e., PRW and
CUHK-SYSU. The dataset and source code are available at
\url{https://github.com/yqc123456/HKD_for_person_search}.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 11:11:26 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Zhang",
"Shizhou",
""
],
[
"Yang",
"Qingchun",
""
],
[
"Cheng",
"De",
""
],
[
"Xing",
"Yinghui",
""
],
[
"Liang",
"Guoqiang",
""
],
[
"Wang",
"Peng",
""
],
[
"Zhang",
"Yanning",
""
]
] |
new_dataset
| 0.999837 |
2308.12794
|
Zaharah A. Bukhsh
|
Robbert Reijnen, Kjell van Straaten, Zaharah Bukhsh, Yingqian Zhang
|
Job Shop Scheduling Benchmark: Environments and Instances for Learning
and Non-learning Methods
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We introduce an open-source GitHub repository containing comprehensive
benchmarks for a wide range of machine scheduling problems, including Job Shop
Scheduling (JSP), Flow Shop Scheduling (FSP), Flexible Job Shop Scheduling
(FJSP), FJSP with Assembly constraints (FAJSP), FJSP with Sequence-Dependent
Setup Times (FJSP-SDST), and the online FJSP (with online job arrivals). Our
primary goal is to provide a centralized hub for researchers, practitioners,
and enthusiasts interested in tackling machine scheduling challenges.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 13:49:48 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Reijnen",
"Robbert",
""
],
[
"van Straaten",
"Kjell",
""
],
[
"Bukhsh",
"Zaharah",
""
],
[
"Zhang",
"Yingqian",
""
]
] |
new_dataset
| 0.99939 |
2308.12816
|
Michael Unterkalmsteiner
|
Michael Unterkalmsteiner, Pekka Abrahamsson, Xiaofeng Wang, Anh
Nguyen-Duc, Syed M. Ali Shah, Sohaib Shahid Bajwa, Guido H. Baltes, Kieran
Conboy, Eoin Cullina, Denis Dennehy, Henry Edison, Carlos
Fern\'andez-S\'anchez, Juan Garbajosa, Tony Gorschek, Eriks Klotins, Laura
Hokkanen, Fabio Kon, Ilaria Lunesu, Michele Marchesi, Lorraine Morgan, Markku
Oivo, Christoph Selig, Pertti Sepp\"anen, Roger Sweetman, Pasi Tyrv\"ainen,
Christina Ungerer, Agust\'in Yag\"ue
|
Software Startups -- A Research Agenda
| null |
e-Informatica Softw. Eng. J. 10(1): 89-124 (2016)
|
10.5277/e-Inf160105
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Software startup companies develop innovative, software-intensive products
within limited time frames and with few resources, searching for sustainable
and scalable business models. Software startups are quite distinct from
traditional mature software companies, but also from micro-, small-, and
medium-sized enterprises, introducing new challenges relevant for software
engineering research. This paper's research agenda focuses on software
engineering in startups, identifying, in particular, 70+ research questions in
the areas of supporting startup engineering activities, startup evolution
models and patterns, ecosystems and innovation hubs, human aspects in software
startups, applying startup concepts in non-startup environments, and
methodologies and theories for startup research. We connect and motivate this
research agenda with past studies in software startup research, while pointing
out possible future directions. While all authors of this research agenda have
their main background in Software Engineering or Computer Science, their
interest in software startups broadens the perspective to the challenges, but
also to the opportunities that emerge from multi-disciplinary research. Our
audience is therefore primarily software engineering researchers, even though
we aim at stimulating collaborations and research that crosses disciplinary
boundaries. We believe that with this research agenda we cover a wide spectrum
of the software startup industry's current needs.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 14:20:21 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Unterkalmsteiner",
"Michael",
""
],
[
"Abrahamsson",
"Pekka",
""
],
[
"Wang",
"Xiaofeng",
""
],
[
"Nguyen-Duc",
"Anh",
""
],
[
"Shah",
"Syed M. Ali",
""
],
[
"Bajwa",
"Sohaib Shahid",
""
],
[
"Baltes",
"Guido H.",
""
],
[
"Conboy",
"Kieran",
""
],
[
"Cullina",
"Eoin",
""
],
[
"Dennehy",
"Denis",
""
],
[
"Edison",
"Henry",
""
],
[
"Fernández-Sánchez",
"Carlos",
""
],
[
"Garbajosa",
"Juan",
""
],
[
"Gorschek",
"Tony",
""
],
[
"Klotins",
"Eriks",
""
],
[
"Hokkanen",
"Laura",
""
],
[
"Kon",
"Fabio",
""
],
[
"Lunesu",
"Ilaria",
""
],
[
"Marchesi",
"Michele",
""
],
[
"Morgan",
"Lorraine",
""
],
[
"Oivo",
"Markku",
""
],
[
"Selig",
"Christoph",
""
],
[
"Seppänen",
"Pertti",
""
],
[
"Sweetman",
"Roger",
""
],
[
"Tyrväinen",
"Pasi",
""
],
[
"Ungerer",
"Christina",
""
],
[
"Yagüe",
"Agustín",
""
]
] |
new_dataset
| 0.973381 |
2308.12828
|
Dima Kagan
|
Nadav Shalit, Michael Fire, Dima Kagan, Eran Ben-Elia
|
Short Run Transit Route Planning Decision Support System Using a Deep
Learning-Based Weighted Graph
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public transport routing plays a crucial role in transit network design,
ensuring a satisfactory level of service for passengers. However, current
routing solutions rely on traditional operational research heuristics, which
can be time-consuming to implement and lack the ability to provide quick
solutions. Here, we propose a novel deep learning-based methodology for a
decision support system that enables public transport (PT) planners to identify
short-term route improvements rapidly. By seamlessly adjusting specific
sections of routes between two stops during specific times of the day, our
method effectively reduces times and enhances PT services. Leveraging diverse
data sources such as GTFS and smart card data, we extract features and model
the transportation network as a directed graph. Using self-supervision, we
train a deep learning model for predicting lateness values for road segments.
These lateness values are then utilized as edge weights in the transportation
graph, enabling efficient path searching. Through evaluating the method on Tel
Aviv, we are able to reduce times on more than 9\% of the routes. The improved
routes included both intraurban and suburban routes, highlighting the model's
versatility. The findings emphasize the potential of
our data-driven decision support system to enhance public transport and city
logistics, promoting greater efficiency and reliability in PT services.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 14:37:55 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Shalit",
"Nadav",
""
],
[
"Fire",
"Michael",
""
],
[
"Kagan",
"Dima",
""
],
[
"Ben-Elia",
"Eran",
""
]
] |
new_dataset
| 0.994875 |
2308.12866
|
Yuan Gong
|
Yuan Gong, Yong Zhang, Xiaodong Cun, Fei Yin, Yanbo Fan, Xuan Wang,
Baoyuan Wu, Yujiu Yang
|
ToonTalker: Cross-Domain Face Reenactment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We target cross-domain face reenactment in this paper, i.e., driving a
cartoon image with the video of a real person and vice versa. Recently, many
works have focused on one-shot talking face generation to drive a portrait with
a real video, i.e., within-domain reenactment. Straightforwardly applying those
methods to cross-domain animation will cause inaccurate expression transfer,
blur effects, and even apparent artifacts due to the domain shift between
cartoon and real faces. Only a few works attempt to settle cross-domain face
reenactment. The most related work AnimeCeleb requires constructing a dataset
with pose vector and cartoon image pairs by animating 3D characters, which
makes it inapplicable if no paired data is available. In this paper, we
propose a novel method for cross-domain reenactment without paired data.
Specifically, we propose a transformer-based framework to align the motions
from different domains into a common latent space where motion transfer is
conducted via latent code addition. Two domain-specific motion encoders and two
learnable motion base memories are used to capture domain properties. A source
query transformer and a driving one are exploited to project domain-specific
motion to the canonical space. The edited motion is projected back to the
domain of the source with a transformer. Moreover, since no paired data is
provided, we propose a novel cross-domain training scheme using data from two
domains with the designed analogy constraint. Besides, we contribute a cartoon
dataset in Disney style. Extensive evaluations demonstrate the superiority of
our method over competing methods.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 15:43:14 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Gong",
"Yuan",
""
],
[
"Zhang",
"Yong",
""
],
[
"Cun",
"Xiaodong",
""
],
[
"Yin",
"Fei",
""
],
[
"Fan",
"Yanbo",
""
],
[
"Wang",
"Xuan",
""
],
[
"Wu",
"Baoyuan",
""
],
[
"Yang",
"Yujiu",
""
]
] |
new_dataset
| 0.996612 |
2308.12870
|
Gengxuan Tian
|
Gengxuan Tian, Junqiao Zhao, Yingfeng Cai, Fenglin Zhang, Wenjie Mu,
Chen Ye
|
VNI-Net: Vector Neurons-based Rotation-Invariant Descriptor for LiDAR
Place Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
LiDAR-based place recognition plays a crucial role in Simultaneous
Localization and Mapping (SLAM) and LiDAR localization.
Despite the emergence of various deep learning-based and hand-crafting-based
methods, rotation-induced place recognition failure remains a critical
challenge.
Existing studies address this limitation through specific training strategies
or network structures.
However, the former does not produce satisfactory results, while the latter
focuses mainly on the reduced problem of SO(2) rotation invariance. Methods
targeting SO(3) rotation invariance suffer from limitations in discrimination
capability.
In this paper, we propose a new method that employs Vector Neurons Network
(VNN) to achieve SO(3) rotation invariance.
We first extract rotation-equivariant features from neighboring points and
map low-dimensional features to a high-dimensional space through VNN.
Afterwards, we calculate the Euclidean and Cosine distance in the
rotation-equivariant feature space as rotation-invariant feature descriptors.
Finally, we aggregate the features using GeM pooling to obtain global
descriptors.
To address the significant information loss when formulating
rotation-invariant descriptors, we propose computing distances between features
at different layers within the Euclidean space neighborhood.
This greatly improves the discriminability of the point cloud descriptors
while ensuring computational efficiency.
Experimental results on public datasets show that our approach significantly
outperforms other baseline methods implementing rotation invariance, while
achieving comparable results with current state-of-the-art place recognition
methods that do not consider rotation issues.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 15:47:21 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Tian",
"Gengxuan",
""
],
[
"Zhao",
"Junqiao",
""
],
[
"Cai",
"Yingfeng",
""
],
[
"Zhang",
"Fenglin",
""
],
[
"Mu",
"Wenjie",
""
],
[
"Ye",
"Chen",
""
]
] |
new_dataset
| 0.986713 |
2308.12882
|
Sayanton V. Dibbo
|
Sayanton V. Dibbo, Juston S. Moore, Garrett T. Kenyon, Michael A. Teti
|
LCANets++: Robust Audio Classification using Multi-layer Neural Networks
with Lateral Competition
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Audio classification aims at recognizing audio signals, including speech
commands or sound events. However, current audio classifiers are susceptible to
perturbations and adversarial attacks. In addition, real-world audio
classification tasks often suffer from limited labeled data. To help bridge
these gaps, previous work developed neuro-inspired convolutional neural
networks (CNNs) with sparse coding via the Locally Competitive Algorithm (LCA)
in the first layer (i.e., LCANets) for computer vision. LCANets learn in a
combination of supervised and unsupervised learning, reducing dependency on
labeled samples. Motivated by the fact that auditory cortex is also sparse, we
extend LCANets to audio recognition tasks and introduce LCANets++, which are
CNNs that perform sparse coding in multiple layers via LCA. We demonstrate that
LCANets++ are more robust than standard CNNs and LCANets against perturbations,
e.g., background noise, as well as black-box and white-box attacks, e.g.,
evasion and fast gradient sign (FGSM) attacks.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 17:42:00 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Dibbo",
"Sayanton V.",
""
],
[
"Moore",
"Juston S.",
""
],
[
"Kenyon",
"Garrett T.",
""
],
[
"Teti",
"Michael A.",
""
]
] |
new_dataset
| 0.993999 |
2308.12883
|
Yoshiaki Itoh
|
Sumie Ueda, Takashi Tsuchiya, Yoshiaki Itoh
|
Computational Dating for the Nuzi Cuneiform Archive: The Least Squares
Constrained by Family Trees and Synchronisms
| null | null | null | null |
cs.DL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We introduce a computational method of dating for an archive in ancient
Mesopotamia. We use the name index Nuzi Personal Names (NPN) published in 1943.
We made an electronic version of NPN and added the kinships of the two powerful
families to NPN to reflect the Nuzi studies after 1943. Nuzi is a town from the
15th - 14th century B.C.E. for a period of some five generations in Arrapha. The
cuneiform tablets listed in NPN are for contracts on land transactions,
marriage, loans, slavery, etc. In NPN, the kinships and cuneiform tablets
(contracts, documents, texts) involved are listed for each person. We
reconstruct family trees from the added NPN to formulate the least squares
problem with the constraints: a person's father is at least 22.5 years older
than the person, contractors were living at the time of the contract, etc. Our
results agree with the Assyriological results of M. P. Maidman on the seniority
among siblings of a powerful family. Our method could be applied to the other
clay tablet archives once we have the name index in the format of NPN.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 07:59:25 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Ueda",
"Sumie",
""
],
[
"Tsuchiya",
"Takashi",
""
],
[
"Itoh",
"Yoshiaki",
""
]
] |
new_dataset
| 0.99852 |
2308.12910
|
Ziyan Yang
|
Ziyan Yang, Kushal Kafle, Zhe Lin, Scott Cohen, Zhihong Ding, Vicente
Ordonez
|
SCoRD: Subject-Conditional Relation Detection with Text-Augmented Data
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Subject-Conditional Relation Detection SCoRD, where conditioned on
an input subject, the goal is to predict all its relations to other objects in
a scene along with their locations. Based on the Open Images dataset, we
propose a challenging OIv6-SCoRD benchmark such that the training and testing
splits have a distribution shift in terms of the occurrence statistics of
$\langle$subject, relation, object$\rangle$ triplets. To solve this problem, we
propose an auto-regressive model that, given a subject, predicts its
relations, objects, and object locations by casting this output as a sequence
of tokens. First, we show that previous scene-graph prediction methods fail to
produce as exhaustive an enumeration of relation-object pairs when conditioned
on a subject on this benchmark. Particularly, we obtain a recall@3 of 83.8% for
our relation-object predictions compared to the 49.75% obtained by a recent
scene graph detector. Then, we show improved generalization on both
relation-object and object-box predictions by leveraging during training
relation-object pairs obtained automatically from textual captions and for
which no object-box annotations are available. Particularly, for
$\langle$subject, relation, object$\rangle$ triplets for which no object
locations are available during training, we are able to obtain a recall@3 of
42.59% for relation-object pairs and 32.27% for their box locations.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 16:35:35 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Yang",
"Ziyan",
""
],
[
"Kafle",
"Kushal",
""
],
[
"Lin",
"Zhe",
""
],
[
"Cohen",
"Scott",
""
],
[
"Ding",
"Zhihong",
""
],
[
"Ordonez",
"Vicente",
""
]
] |
new_dataset
| 0.999458 |
2308.12956
|
Ming Li
|
Huafeng Kuang, Jie Wu, Xiawu Zheng, Ming Li, Xuefeng Xiao, Rui Wang,
Min Zheng, Rongrong Ji
|
DLIP: Distilling Language-Image Pre-training
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-Language Pre-training (VLP) shows remarkable progress with the
assistance of extremely heavy parameters, which challenges deployment in real
applications. Knowledge distillation is well recognized as the essential
procedure in model compression. However, existing knowledge distillation
techniques lack an in-depth investigation and analysis of VLP, and practical
guidelines for VLP-oriented distillation are still not yet explored. In this
paper, we present DLIP, a simple yet efficient Distilling Language-Image
Pre-training framework, through which we investigate how to distill a light VLP
model. Specifically, we dissect the model distillation from multiple
dimensions, such as the architecture characteristics of different modules and
the information transfer of different modalities. We conduct comprehensive
experiments and provide insights on distilling a light but performant VLP
model. Experimental results reveal that DLIP can achieve a state-of-the-art
accuracy/efficiency trade-off across diverse cross-modal tasks, e.g.,
image-text retrieval, image captioning and visual question answering. For
example, DLIP compresses BLIP by 1.9x, from 213M to 108M parameters, while
achieving comparable or better performance. Furthermore, DLIP succeeds in
retaining more than 95% of the performance with 22.4% parameters and 24.8%
FLOPs compared to the teacher model and accelerates inference speed by 2.7x.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:50:21 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Kuang",
"Huafeng",
""
],
[
"Wu",
"Jie",
""
],
[
"Zheng",
"Xiawu",
""
],
[
"Li",
"Ming",
""
],
[
"Xiao",
"Xuefeng",
""
],
[
"Wang",
"Rui",
""
],
[
"Zheng",
"Min",
""
],
[
"Ji",
"Rongrong",
""
]
] |
new_dataset
| 0.991703 |
2308.12963
|
Xiyue Zhu Mr.
|
Xiyue Zhu, Vlas Zyrianov, Zhijian Liu, Shenlong Wang
|
MapPrior: Bird's-Eye View Map Layout Estimation with Generative Models
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Despite tremendous advancements in bird's-eye view (BEV) perception, existing
models fall short in generating realistic and coherent semantic map layouts,
and they fail to account for uncertainties arising from partial sensor
information (such as occlusion or limited coverage). In this work, we introduce
MapPrior, a novel BEV perception framework that combines a traditional
discriminative BEV perception model with a learned generative model for
semantic map layouts. Our MapPrior delivers predictions with better accuracy,
realism, and uncertainty awareness. We evaluate our model on the large-scale
nuScenes benchmark. At the time of submission, MapPrior outperforms the
strongest competing method, with significantly improved MMD and ECE scores in
camera- and LiDAR-based BEV perception.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:58:30 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Zhu",
"Xiyue",
""
],
[
"Zyrianov",
"Vlas",
""
],
[
"Liu",
"Zhijian",
""
],
[
"Wang",
"Shenlong",
""
]
] |
new_dataset
| 0.964629 |
2308.12965
|
Sai Kumar Dwivedi
|
Sai Kumar Dwivedi, Cordelia Schmid, Hongwei Yi, Michael J. Black,
Dimitrios Tzionas
|
POCO: 3D Pose and Shape Estimation with Confidence
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The regression of 3D Human Pose and Shape (HPS) from an image is becoming
increasingly accurate. This makes the results useful for downstream tasks like
human action recognition or 3D graphics. Yet, no regressor is perfect, and
accuracy can be affected by ambiguous image evidence or by poses and appearance
that are unseen during training. Most current HPS regressors, however, do not
report the confidence of their outputs, meaning that downstream tasks cannot
differentiate accurate estimates from inaccurate ones. To address this, we
develop POCO, a novel framework for training HPS regressors to estimate not
only a 3D human body, but also their confidence, in a single feed-forward pass.
Specifically, POCO estimates both the 3D body pose and a per-sample variance.
The key idea is to introduce a Dual Conditioning Strategy (DCS) for regressing
uncertainty that is highly correlated to pose reconstruction quality. The POCO
framework can be applied to any HPS regressor and here we evaluate it by
modifying HMR, PARE, and CLIFF. In all cases, training the network to reason
about uncertainty helps it learn to more accurately estimate 3D pose. While
this was not our goal, the improvement is modest but consistent. Our main
motivation is to provide uncertainty estimates for downstream tasks; we
demonstrate this in two ways: (1) We use the confidence estimates to bootstrap
HPS training. Given unlabelled image data, we take the confident estimates of a
POCO-trained regressor as pseudo ground truth. Retraining with this
automatically-curated data improves accuracy. (2) We exploit uncertainty in
video pose estimation by automatically identifying uncertain frames (e.g. due
to occlusion) and inpainting these from confident frames. Code and models will
be available for research at https://poco.is.tue.mpg.de.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:59:04 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Dwivedi",
"Sai Kumar",
""
],
[
"Schmid",
"Cordelia",
""
],
[
"Yi",
"Hongwei",
""
],
[
"Black",
"Michael J.",
""
],
[
"Tzionas",
"Dimitrios",
""
]
] |
new_dataset
| 0.990288 |
2308.12970
|
Vladislav Golyanik
|
Navami Kairanda and Marc Habermann and Christian Theobalt and
Vladislav Golyanik
|
NeuralClothSim: Neural Deformation Fields Meet the Kirchhoff-Love Thin
Shell Theory
|
27 pages, 22 figures and 3 tables; project page:
https://4dqv.mpi-inf.mpg.de/NeuralClothSim/
| null | null | null |
cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cloth simulation is an extensively studied problem, with a plethora of
solutions available in computer graphics literature. Existing cloth simulators
produce realistic cloth deformations that obey different types of boundary
conditions. Nevertheless, their operational principle remains limited in
several ways: They operate on explicit surface representations with a fixed
spatial resolution, perform a series of discretised updates (which bounds their
temporal resolution), and require comparably large amounts of storage.
Moreover, back-propagating gradients through the existing solvers is often not
straightforward, which poses additional challenges when integrating them into
modern neural architectures. In response to the limitations mentioned above,
this paper takes a fundamentally different perspective on physically-plausible
cloth simulation and re-thinks this long-standing problem: We propose
NeuralClothSim, i.e., a new cloth simulation approach using thin shells, in
which surface evolution is encoded in neural network weights. Our
memory-efficient and differentiable solver operates on a new continuous
coordinate-based representation of dynamic surfaces, i.e., neural deformation
fields (NDFs); it supervises NDF evolution with the rules of the non-linear
Kirchhoff-Love shell theory. NDFs are adaptive in the sense that they 1)
allocate their capacity to the deformation details as the latter arise during
the cloth evolution and 2) allow surface state queries at arbitrary spatial and
temporal resolutions without retraining. We show how to train our
NeuralClothSim solver while imposing hard boundary conditions and demonstrate
multiple applications, such as material interpolation and simulation editing.
The experimental results highlight the effectiveness of our formulation and its
potential impact.
|
[
{
"version": "v1",
"created": "Thu, 24 Aug 2023 17:59:54 GMT"
}
] | 2023-08-25T00:00:00 |
[
[
"Kairanda",
"Navami",
""
],
[
"Habermann",
"Marc",
""
],
[
"Theobalt",
"Christian",
""
],
[
"Golyanik",
"Vladislav",
""
]
] |
new_dataset
| 0.999176 |
2207.02552
|
Rajen Kumar
|
Rajen Kumar, Sushant Kumar Jha, Prashant Kumar Srivastava and Sudhan
Majhi
|
A Construction of Type-II ZCCS for the MC-CDMA System with Low PMEPR
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this letter, we propose a novel construction of type-II $Z$-complementary
code set (ZCCS) having arbitrary sequence length using the Kronecker product
between a complete complementary code (CCC) and mutually orthogonal uni-modular
sequences. In this construction, Barker sequences are used to reduce the row
sequence peak-to-mean envelope power ratio (PMEPR) for some specific sequence
lengths and the column sequence PMEPR for some specific code sizes. The column
sequence PMEPR of the proposed type-II ZCCS is upper bounded by a number
smaller than $2$. The proposed construction also contributes new lengths of
type-II $Z$-complementary pair (ZCP) and type-II $Z$-complementary set (ZCS).
Furthermore, the PMEPR of these new type-II ZCPs is also lower than existing
type-II ZCPs.
|
[
{
"version": "v1",
"created": "Wed, 6 Jul 2022 10:05:55 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Aug 2022 09:10:59 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Aug 2023 20:05:40 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Kumar",
"Rajen",
""
],
[
"Jha",
"Sushant Kumar",
""
],
[
"Srivastava",
"Prashant Kumar",
""
],
[
"Majhi",
"Sudhan",
""
]
] |
new_dataset
| 0.997223 |
2210.01055
|
Tianyu Huang
|
Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W.H.
Lau, Wanli Ouyang, Wangmeng Zuo
|
CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth
Pre-training
|
Accepted by ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Pre-training across 3D vision and language remains under development because
of limited training data. Recent works attempt to transfer vision-language
pre-training models to 3D vision. PointCLIP converts point cloud data to
multi-view depth maps, adopting CLIP for shape classification. However, its
performance is restricted by the domain gap between rendered depth maps and
images, as well as the diversity of depth distributions. To address this issue,
we propose CLIP2Point, an image-depth pre-training method by contrastive
learning to transfer CLIP to the 3D domain, and adapt it to point cloud
classification. We introduce a new depth rendering setting that forms a better
visual effect, and then render 52,460 pairs of images and depth maps from
ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines
cross-modality learning to enforce the depth features for capturing expressive
visual and textual features and intra-modality learning to enhance the
invariance of depth aggregation. Additionally, we propose a novel Dual-Path
Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for
few-shot learning. The dual-path structure allows the joint use of CLIP and
CLIP2Point, and the simplified adapter can well fit few-shot tasks without
post-search. Experimental results show that CLIP2Point is effective in
transferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP
and other self-supervised 3D networks, achieving state-of-the-art results on
zero-shot and few-shot classification.
|
[
{
"version": "v1",
"created": "Mon, 3 Oct 2022 16:13:14 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2022 12:08:19 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Aug 2023 03:24:13 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Huang",
"Tianyu",
""
],
[
"Dong",
"Bowen",
""
],
[
"Yang",
"Yunhan",
""
],
[
"Huang",
"Xiaoshui",
""
],
[
"Lau",
"Rynson W. H.",
""
],
[
"Ouyang",
"Wanli",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
new_dataset
| 0.989174 |
2211.08772
|
Shuwei Li
|
Shuwei Li, Jikai Wang, Michael S. Brown, Robby T. Tan
|
MIMT: Multi-Illuminant Color Constancy via Multi-Task Local Surface and
Light Color Learning
|
8 pages, 6 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The assumption of a uniform light color distribution is no longer applicable
in scenes that have multiple light colors. Most color constancy methods are
designed to deal with a single light color, and thus are erroneous when applied
to multiple light colors. The spatial variability in multiple light colors
causes the color constancy problem to be more challenging and requires the
extraction of local surface/light information. Motivated by this, we introduce
a multi-task learning method to discount multiple light colors in a single
input image. To have better cues of the local surface/light colors under
multiple light color conditions, we design a novel multi-task learning
framework. Our framework includes auxiliary tasks of achromatic-pixel detection
and surface-color similarity prediction, providing better cues for local light
and surface colors, respectively. Moreover, to ensure that our model maintains
the constancy of surface colors regardless of the variations of light colors, a
novel local surface color feature preservation scheme is developed. We
demonstrate that our model achieves 47.1% improvement (from 4.69 mean angular
error to 2.48) compared to a state-of-the-art multi-illuminant color constancy
method on a multi-illuminant dataset (LSMI).
|
[
{
"version": "v1",
"created": "Wed, 16 Nov 2022 09:00:20 GMT"
},
{
"version": "v2",
"created": "Sat, 25 Mar 2023 09:37:18 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Aug 2023 19:45:17 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Li",
"Shuwei",
""
],
[
"Wang",
"Jikai",
""
],
[
"Brown",
"Michael S.",
""
],
[
"Tan",
"Robby T.",
""
]
] |
new_dataset
| 0.987107 |
2303.11225
|
Zenghao Chai
|
Zenghao Chai, Tianke Zhang, Tianyu He, Xu Tan, Tadas Baltru\v{s}aitis,
HsiangTao Wu, Runnan Li, Sheng Zhao, Chun Yuan, Jiang Bian
|
HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and
Dynamic Details
|
Accepted to ICCV 2023, camera-ready version; Project page:
https://project-hiface.github.io/
| null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
3D Morphable Models (3DMMs) demonstrate great potential for reconstructing
faithful and animatable 3D facial surfaces from a single image. The facial
surface is influenced by the coarse shape, as well as the static detail (e.g.,
person-specific appearance) and dynamic detail (e.g., expression-driven
wrinkles). Previous work struggles to decouple the static and dynamic details
through image-level supervision, leading to reconstructions that are not
realistic. In this paper, we aim at high-fidelity 3D face reconstruction and
propose HiFace to explicitly model the static and dynamic details.
Specifically, the static detail is modeled as the linear combination of a
displacement basis, while the dynamic detail is modeled as the linear
interpolation of two displacement maps with polarized expressions. We exploit
several loss functions to jointly learn the coarse shape and fine details with
both synthetic and real-world datasets, which enable HiFace to reconstruct
high-fidelity 3D shapes with animatable details. Extensive quantitative and
qualitative experiments demonstrate that HiFace presents state-of-the-art
reconstruction quality and faithfully recovers both the static and dynamic
details. Our project page can be found at https://project-hiface.github.io.
|
[
{
"version": "v1",
"created": "Mon, 20 Mar 2023 16:07:02 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 11:46:57 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Chai",
"Zenghao",
""
],
[
"Zhang",
"Tianke",
""
],
[
"He",
"Tianyu",
""
],
[
"Tan",
"Xu",
""
],
[
"Baltrušaitis",
"Tadas",
""
],
[
"Wu",
"HsiangTao",
""
],
[
"Li",
"Runnan",
""
],
[
"Zhao",
"Sheng",
""
],
[
"Yuan",
"Chun",
""
],
[
"Bian",
"Jiang",
""
]
] |
new_dataset
| 0.9555 |
2304.02051
|
Marcella Cornia
|
Alberto Baldrati, Davide Morelli, Giuseppe Cartella, Marcella Cornia,
Marco Bertini, Rita Cucchiara
|
Multimodal Garment Designer: Human-Centric Latent Diffusion Models for
Fashion Image Editing
|
ICCV 2023
| null | null | null |
cs.CV cs.AI cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Fashion illustration is used by designers to communicate their vision and to
bring the design idea from conceptualization to realization, showing how
clothes interact with the human body. In this context, computer vision can thus
be used to improve the fashion design process. Differently from previous works
that mainly focused on the virtual try-on of garments, we propose the task of
multimodal-conditioned fashion image editing, guiding the generation of
human-centric fashion images by following multimodal prompts, such as text,
human body poses, and garment sketches. We tackle this problem by proposing a
new architecture based on latent diffusion models, an approach that has not
been used before in the fashion domain. Given the lack of existing datasets
suitable for the task, we also extend two existing fashion datasets, namely
Dress Code and VITON-HD, with multimodal annotations collected in a
semi-automatic manner. Experimental results on these new datasets demonstrate
the effectiveness of our proposal, both in terms of realism and coherence with
the given multimodal inputs. Source code and collected multimodal annotations
are publicly available at:
https://github.com/aimagelab/multimodal-garment-designer.
|
[
{
"version": "v1",
"created": "Tue, 4 Apr 2023 18:03:04 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 12:45:27 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Baldrati",
"Alberto",
""
],
[
"Morelli",
"Davide",
""
],
[
"Cartella",
"Giuseppe",
""
],
[
"Cornia",
"Marcella",
""
],
[
"Bertini",
"Marco",
""
],
[
"Cucchiara",
"Rita",
""
]
] |
new_dataset
| 0.998083 |
2306.15782
|
Abdur Rahman
|
Abdur Rahman, Arjun Ghosh, and Chetan Arora
|
UTRNet: High-Resolution Urdu Text Recognition In Printed Documents
|
Accepted at The 17th International Conference on Document Analysis
and Recognition (ICDAR 2023)
|
Document Analysis and Recognition - ICDAR 2023 (2023) 305-324
|
10.1007/978-3-031-41734-4_19
| null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we propose a novel approach to address the challenges of
printed Urdu text recognition using high-resolution, multi-scale semantic
feature extraction. Our proposed UTRNet architecture, a hybrid CNN-RNN model,
demonstrates state-of-the-art performance on benchmark datasets. To address the
limitations of previous works, which struggle to generalize to the intricacies
of the Urdu script and lack sufficient annotated real-world data, we introduce
UTRSet-Real, a large-scale annotated real-world dataset comprising over 11,000
lines, and UTRSet-Synth, a synthetic dataset with 20,000 lines that closely
resembles real-world data; we have also corrected the ground truth of the
existing IIITH dataset, making it a more reliable resource for future
research. We also provide UrduDoc, a benchmark dataset for Urdu text line
detection in scanned documents. Additionally, we have developed an online tool
for end-to-end Urdu OCR from printed documents by integrating UTRNet with a
text detection model. Our work not only addresses the current limitations of
Urdu OCR but also paves the way for future research in this area and
facilitates the continued advancement of Urdu OCR technology. The project page
with source code, datasets, annotations, trained models, and online tool is
available at abdur75648.github.io/UTRNet.
|
[
{
"version": "v1",
"created": "Tue, 27 Jun 2023 20:09:56 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jul 2023 14:50:27 GMT"
},
{
"version": "v3",
"created": "Wed, 23 Aug 2023 10:02:15 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Rahman",
"Abdur",
""
],
[
"Ghosh",
"Arjun",
""
],
[
"Arora",
"Chetan",
""
]
] |
new_dataset
| 0.999691 |
2307.01982
|
Yuntao Wang
|
Yuntao Wang, Zhou Su
|
An Envy-Free Online UAV Charging Scheme with Vehicle-Mounted Mobile
Wireless Chargers
|
Accepted by China Communications in June 2023
| null |
10.23919/JCC.fa.2023-0056
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In commercial unmanned aerial vehicle (UAV) applications, one of the main
restrictions is UAVs' limited battery endurance when executing persistent
tasks. With the maturation of wireless power transfer (WPT) technologies, and by
leveraging ground vehicles mounted with WPT facilities on their roofs, we
propose a mobile and collaborative recharging scheme for UAVs in an on-demand
manner. Specifically, we first present a novel air-ground cooperative UAV
recharging framework, where ground vehicles cooperatively share their idle
wireless chargers to UAVs and a swarm of UAVs in the task area compete to get
recharging services. Considering the mobility dynamics and energy competitions,
we formulate an energy scheduling problem for UAVs and vehicles under practical
constraints. A fair online auction-based solution with low complexity is also
devised to allocate and price idle wireless chargers on vehicular roofs in
real time. We rigorously prove that the proposed scheme is strategy-proof,
envy-free, and produces stable allocation outcomes. The first property enforces
that truthful bidding is the dominant strategy for participants, the second
ensures that no user is better off by exchanging his allocation with another
user when the auction ends, while the third guarantees the matching stability
between UAVs and UGVs. Extensive simulations validate that the proposed scheme
outperforms benchmarks in terms of energy allocation efficiency and UAV's
utility.
|
[
{
"version": "v1",
"created": "Wed, 5 Jul 2023 01:55:50 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Wang",
"Yuntao",
""
],
[
"Su",
"Zhou",
""
]
] |
new_dataset
| 0.999189 |
2308.01095
|
Jinpeng Lin
|
Jinpeng Lin, Min Zhou, Ye Ma, Yifan Gao, Chenxi Fei, Yangjian Chen,
Zhang Yu, Tiezheng Ge
|
AutoPoster: A Highly Automatic and Content-aware Design System for
Advertising Poster Generation
|
Accepted for ACM MM 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advertising posters, a form of information presentation, combine visual and
linguistic modalities. Creating a poster involves multiple steps and
necessitates design experience and creativity. This paper introduces
AutoPoster, a highly automatic and content-aware system for generating
advertising posters. With only product images and titles as inputs, AutoPoster
can automatically produce posters of varying sizes through four key stages:
image cleaning and retargeting, layout generation, tagline generation, and
style attribute prediction. To ensure visual harmony of posters, two
content-aware models are incorporated for layout and tagline generation.
Moreover, we propose a novel multi-task Style Attribute Predictor (SAP) to
jointly predict visual style attributes. Meanwhile, to our knowledge, we
propose the first poster generation dataset that includes visual attribute
annotations for over 76k posters. Qualitative and quantitative outcomes from
user studies and experiments substantiate the efficacy of our system and the
aesthetic superiority of the generated posters compared to other poster
generation methods.
|
[
{
"version": "v1",
"created": "Wed, 2 Aug 2023 11:58:43 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 06:26:56 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Lin",
"Jinpeng",
""
],
[
"Zhou",
"Min",
""
],
[
"Ma",
"Ye",
""
],
[
"Gao",
"Yifan",
""
],
[
"Fei",
"Chenxi",
""
],
[
"Chen",
"Yangjian",
""
],
[
"Yu",
"Zhang",
""
],
[
"Ge",
"Tiezheng",
""
]
] |
new_dataset
| 0.999651 |
2308.10592
|
Emilia Wiśnios
|
Inez Okulska, Kinga Głąbińska, Anna Kołos, Agnieszka
Karlińska, Emilia Wiśnios, Adam Nowakowski, Paweł Ellerik, Andrzej
Prałat
|
BAN-PL: a Novel Polish Dataset of Banned Harmful and Offensive Content
from Wykop.pl web service
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in automated detection of offensive language online, including hate
speech and cyberbullying, require improved access to publicly available
datasets comprising social media content. In this paper, we introduce BAN-PL,
the first open dataset in the Polish language that encompasses texts flagged as
harmful and subsequently removed by professional moderators. The dataset
encompasses a total of 691,662 pieces of content from a popular social
networking service, Wykop, often referred to as the "Polish Reddit", including
both posts and comments, and is evenly distributed into two distinct classes:
"harmful" and "neutral". We provide a comprehensive description of the data
collection and preprocessing procedures, as well as highlight the linguistic
specificity of the data. The BAN-PL dataset, along with advanced preprocessing
scripts for, i.a., unmasking profanities, will be publicly available.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 09:47:31 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 11:01:21 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Okulska",
"Inez",
""
],
[
"Głąbińska",
"Kinga",
""
],
[
"Kołos",
"Anna",
""
],
[
"Karlińska",
"Agnieszka",
""
],
[
"Wiśnios",
"Emilia",
""
],
[
"Nowakowski",
"Adam",
""
],
[
"Ellerik",
"Paweł",
""
],
[
"Prałat",
"Andrzej",
""
]
] |
new_dataset
| 0.999886 |
2308.11236
|
Bilel Benjdira Dr.
|
Bilel Benjdira, Anis Koubaa, Anas M. Ali
|
ROSGPT_Vision: Commanding Robots Using Only Language Models' Prompts
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we argue that the next generation of robots can be commanded
using only Language Models' prompts. Every prompt interrogates separately a
specific Robotic Modality via its Modality Language Model (MLM). A central Task
Modality mediates the whole communication to execute the robotic mission via a
Large Language Model (LLM). This paper gives this new robotic design pattern
the name of: Prompting Robotic Modalities (PRM). Moreover, this paper applies
this PRM design pattern in building a new robotic framework named
ROSGPT_Vision. ROSGPT_Vision allows the execution of a robotic task using only
two prompts: a Visual and an LLM prompt. The Visual Prompt extracts, in natural
language, the visual semantic features related to the task under consideration
(Visual Robotic Modality). Meanwhile, the LLM Prompt regulates the robotic
reaction to the visual description (Task Modality). The framework automates all
the mechanisms behind these two prompts. The framework enables the robot to
address complex real-world scenarios by processing visual data, making informed
decisions, and carrying out actions automatically. The framework comprises one
generic vision module and two independent ROS nodes. As a test application, we
used ROSGPT_Vision to develop CarMate, which monitors the driver's distraction
on the roads and makes real-time vocal notifications to the driver. We showed
how ROSGPT_Vision significantly reduced the development cost compared to
traditional methods. We demonstrated how to improve the quality of the
application by optimizing the prompting strategies, without delving into
technical details. ROSGPT_Vision is shared with the community (link:
https://github.com/bilel-bj/ROSGPT_Vision) to advance robotic research in this
direction and to build more robotic frameworks that implement the PRM design
pattern and enable controlling robots using only prompts.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 07:21:24 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 08:31:16 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Benjdira",
"Bilel",
""
],
[
"Koubaa",
"Anis",
""
],
[
"Ali",
"Anas M.",
""
]
] |
new_dataset
| 0.998612 |
2308.11289
|
Xinrui Li
|
Xinrui Li, Zhenjun Dong, Yong Zeng, Shi Jin, Rui Zhang
|
Multi-User Modular XL-MIMO Communications: Near-Field Beam Focusing
Pattern and User Grouping
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate multi-user modular extremely large-scale
multiple-input multiple-output (XL-MIMO) communication systems, where modular
extremely large-scale uniform linear array (XL-ULA) is deployed at the base
station (BS) to serve multiple single-antenna users. By exploiting the unique
modular array architecture and considering the potential near-field
propagation, we develop sub-array based uniform spherical wave (USW) models for
distinct versus common angles of arrival/departure (AoAs/AoDs) with respect to
different sub-arrays/modules, respectively. Under such USW models, we analyze
the beam focusing patterns at the near-field observation location by using
near-field beamforming. The analysis reveals that compared to the conventional
XL-MIMO with collocated antenna elements, modular XL-MIMO can provide better
spatial resolution by benefiting from its larger array aperture. However, it
also incurs undesired grating lobes due to the large inter-module separation.
Moreover, it is found that for multi-user modular XL-MIMO communications, the
achievable signal-to-interference-plus-noise ratio (SINR) for users may be
degraded by the grating lobes of the beam focusing pattern. To address this
issue, an efficient user grouping method is proposed for multi-user
transmission scheduling, so that users located within the grating lobes of each
other are not allocated to the same time-frequency resource block (RB) for
their communications. Numerical results are presented to verify the
effectiveness of the proposed user grouping method, as well as the superior
performance of modular XL-MIMO over its collocated counterpart with densely
distributed users.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 09:07:38 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 01:37:16 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Li",
"Xinrui",
""
],
[
"Dong",
"Zhenjun",
""
],
[
"Zeng",
"Yong",
""
],
[
"Jin",
"Shi",
""
],
[
"Zhang",
"Rui",
""
]
] |
new_dataset
| 0.978756 |
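Note: the XL-MIMO record above schedules users so that those falling within each other's grating lobes are never assigned to the same time-frequency resource block. The paper's own grouping method is not reproduced here; purely as a hypothetical illustration of conflict-aware grouping in general, a greedy sketch in Python (with made-up conflict pairs) could look like this:

```python
# Greedy grouping sketch: users that conflict (e.g., lie in each other's
# grating lobes) are never placed in the same group / resource block.
# The conflict pairs below are hypothetical, not from the paper.

def greedy_grouping(num_users, conflicts):
    """conflicts: set of frozenset({i, j}) pairs that must be separated."""
    groups = []  # each group is a set of mutually compatible users
    for u in range(num_users):
        for g in groups:
            if all(frozenset({u, v}) not in conflicts for v in g):
                g.add(u)
                break
        else:
            groups.append({u})  # no compatible group found; open a new one
    return groups

conflicts = {frozenset({0, 1}), frozenset({2, 3})}
print(greedy_grouping(5, conflicts))  # e.g., [{0, 2, 4}, {1, 3}]
```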
2308.11298
|
Zeyu Zhang
|
Biao Wu, Yutong Xie, Zeyu Zhang, Jinchao Ge, Kaspar Yaxley, Suzan
Bahadir, Qi Wu, Yifan Liu, Minh-Son To
|
BHSD: A 3D Multi-Class Brain Hemorrhage Segmentation Dataset
|
Accepted by MLMI 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Intracranial hemorrhage (ICH) is a pathological condition characterized by
bleeding inside the skull or brain, which can be attributed to various factors.
Identifying, localizing and quantifying ICH has important clinical
implications, in a bleed-dependent manner. While deep learning techniques are
widely used in medical image segmentation and have been applied to the ICH
segmentation task, existing public ICH datasets do not support the multi-class
segmentation problem. To address this, we develop the Brain Hemorrhage
Segmentation Dataset (BHSD), which provides a 3D multi-class ICH dataset
containing 192 volumes with pixel-level annotations and 2200 volumes with
slice-level annotations across five categories of ICH. To demonstrate the
utility of the dataset, we formulate a series of supervised and semi-supervised
ICH segmentation tasks. We provide experimental results with state-of-the-art
models as reference benchmarks for further model developments and evaluations
on this dataset.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 09:20:55 GMT"
},
{
"version": "v2",
"created": "Wed, 23 Aug 2023 05:44:57 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Wu",
"Biao",
""
],
[
"Xie",
"Yutong",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Ge",
"Jinchao",
""
],
[
"Yaxley",
"Kaspar",
""
],
[
"Bahadir",
"Suzan",
""
],
[
"Wu",
"Qi",
""
],
[
"Liu",
"Yifan",
""
],
[
"To",
"Minh-Son",
""
]
] |
new_dataset
| 0.999599 |
2308.11620
|
Wa Nkongolo Mike Nkongolo
|
Tshimankinda Jerome Ngoy and Mike Nkongolo
|
Software-based signal compression algorithm for ROM-stored electrical
cables
|
Submitted to the International Journal of Reconfigurable and Embedded
Systems (IJRES). Section: Reconfigurable System. Title: A Signal Compression
Algorithm Transmitted by the Software for Electrical Cables Stored in ROM.
Article ID: 21019. Editor: Selvakumar Manickam. Review Initiated: 2023-07-07
| null | null | null |
cs.IT cs.AR eess.SP math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This project introduces a groundbreaking approach to address the challenge of
periodic signal compression. By proposing a novel adaptive coding method,
coupled with hardware-assisted data compression, we have developed a new
architecture model tailored for efficient data compression. The selected
compression scheme has demonstrated remarkable results, showcasing reduced
memory communication volume and power consumption in the cache memory path of
benchmark systems. With a reduction range of 4.2% to 35.2%, this innovation
paves the way for affordable smart sensing, monitoring, diagnostics, and
protection in emerging low-cost device types. Consequently, this cutting-edge
technology enhances electrical signal compression and contributes to grid
improvement. Additionally, we explore the novel application of harnessing
wasted thermal energy in the Read-Only Memory (ROM) using thermoelectricity
(TE). This approach captures the excess thermal energy, converting it into
electrical energy through optimized supercapacitor charging, resulting in
efficient energy utilization. This innovation intersects the fields of embedded
systems, data compression, energy efficiency, and smart grid technology.
|
[
{
"version": "v1",
"created": "Sun, 9 Jul 2023 10:34:13 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Ngoy",
"Tshimankinda Jerome",
""
],
[
"Nkongolo",
"Mike",
""
]
] |
new_dataset
| 0.974461 |
2308.11737
|
Jiacong Xu
|
Jiacong Xu, Yi Zhang, Jiawei Peng, Wufei Ma, Artur Jesslen, Pengliang
Ji, Qixin Hu, Jiehua Zhang, Qihao Liu, Jiahao Wang, Wei Ji, Chen Wang,
Xiaoding Yuan, Prakhar Kaushik, Guofeng Zhang, Jie Liu, Yushan Xie, Yawen
Cui, Alan Yuille, Adam Kortylewski
|
Animal3D: A Comprehensive Dataset of 3D Animal Pose and Shape
|
11 pages, 5 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately estimating the 3D pose and shape is an essential step towards
understanding animal behavior, and can potentially benefit many downstream
applications, such as wildlife conservation. However, research in this area is
held back by the lack of a comprehensive and diverse dataset with high-quality
3D pose and shape annotations. In this paper, we propose Animal3D, the first
comprehensive dataset for mammalian 3D pose and shape estimation. Animal3D
consists of 3379 images collected from 40 mammal species, high-quality
annotations of 26 keypoints, and importantly the pose and shape parameters of
the SMAL model. All annotations were labeled and checked manually in a
multi-stage process to ensure the highest-quality results. Based on the Animal3D
dataset, we benchmark representative shape and pose estimation models at: (1)
supervised learning from only the Animal3D data, (2) synthetic to real transfer
from synthetically generated images, and (3) fine-tuning human pose and shape
estimation models. Our experimental results demonstrate that predicting the 3D
shape and pose of animals across species remains a very challenging task,
despite significant advances in human pose estimation. Our results further
demonstrate that synthetic pre-training is a viable strategy to boost the model
performance. Overall, Animal3D opens new directions for facilitating future
research in animal 3D pose and shape estimation, and is publicly available.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 18:57:07 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Xu",
"Jiacong",
""
],
[
"Zhang",
"Yi",
""
],
[
"Peng",
"Jiawei",
""
],
[
"Ma",
"Wufei",
""
],
[
"Jesslen",
"Artur",
""
],
[
"Ji",
"Pengliang",
""
],
[
"Hu",
"Qixin",
""
],
[
"Zhang",
"Jiehua",
""
],
[
"Liu",
"Qihao",
""
],
[
"Wang",
"Jiahao",
""
],
[
"Ji",
"Wei",
""
],
[
"Wang",
"Chen",
""
],
[
"Yuan",
"Xiaoding",
""
],
[
"Kaushik",
"Prakhar",
""
],
[
"Zhang",
"Guofeng",
""
],
[
"Liu",
"Jie",
""
],
[
"Xie",
"Yushan",
""
],
[
"Cui",
"Yawen",
""
],
[
"Yuille",
"Alan",
""
],
[
"Kortylewski",
"Adam",
""
]
] |
new_dataset
| 0.99983 |
2308.11754
|
Mahmoud Nazzal
|
Mahmoud Nazzal, Issa Khalil, Abdallah Khreishah, NhatHai Phan, and Yao
Ma
|
Multi-Instance Adversarial Attack on GNN-Based Malicious Domain
Detection
|
To Appear in the 45th IEEE Symposium on Security and Privacy (IEEE
S\&P 2024), May 20-23, 2024
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Malicious domain detection (MDD) is an open security challenge that aims to
detect if an Internet domain is associated with cyber-attacks. Among many
approaches to this problem, graph neural networks (GNNs) are deemed highly
effective. GNN-based MDD uses DNS logs to represent Internet domains as nodes
in a maliciousness graph (DMG) and trains a GNN to infer their maliciousness by
leveraging identified malicious domains. Since this method relies on accessible
DNS logs to construct DMGs, it exposes a vulnerability for adversaries to
manipulate their domain nodes' features and connections within DMGs. Existing
research mainly concentrates on threat models that manipulate individual
attacker nodes. However, adversaries commonly generate multiple domains to
achieve their goals economically and avoid detection. Their objective is to
evade discovery across as many domains as feasible. In this work, we call the
attack that manipulates several nodes in the DMG concurrently a multi-instance
evasion attack. We present theoretical and empirical evidence that the existing
single-instance evasion techniques are inadequate for launching multi-instance
evasion attacks against GNN-based MDDs. Therefore, we introduce MintA, an
inference-time multi-instance adversarial attack on GNN-based MDDs. MintA
enhances node and neighborhood evasiveness through optimized perturbations and
operates successfully with only black-box access to the target model,
eliminating the need for knowledge about the model's specifics or non-adversary
nodes. We formulate an optimization challenge for MintA, achieving an
approximate solution. Evaluating MintA on a leading GNN-based MDD technique
with real-world data showcases an attack success rate exceeding 80%. These
findings act as a warning for security experts, underscoring GNN-based MDDs'
susceptibility to practical attacks that can undermine their effectiveness and
benefits.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 19:51:16 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Nazzal",
"Mahmoud",
""
],
[
"Khalil",
"Issa",
""
],
[
"Khreishah",
"Abdallah",
""
],
[
"Phan",
"NhatHai",
""
],
[
"Ma",
"Yao",
""
]
] |
new_dataset
| 0.994763 |
2308.11755
|
Raj Korpan
|
Raj Korpan
|
VBMO: Voting-Based Multi-Objective Path Planning
|
First International Workshop on Search and Planning with Complex
Objectives (WoSePCO) at IJCAI'2023
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This paper presents VBMO, the Voting-Based Multi-Objective path planning
algorithm, that generates optimal single-objective plans, evaluates each of
them with respect to the other objectives, and selects one with a voting
mechanism. VBMO does not use hand-tuned weights, consider the multiple
objectives at every step of search, or use an evolutionary algorithm. Instead,
it considers how a plan that is optimal in one objective may perform well with
respect to others. VBMO incorporates three voting mechanisms: range, Borda, and
combined approval. Extensive evaluation in diverse and complex environments
demonstrates the algorithm's ability to efficiently produce plans that satisfy
multiple objectives.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 19:51:48 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Korpan",
"Raj",
""
]
] |
new_dataset
| 0.997846 |
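Note: the VBMO record above selects among single-objective-optimal plans using range, Borda, and combined-approval voting. The following Python sketch is only a generic illustration of Borda-style selection over hypothetical plan cost vectors, not the paper's implementation:

```python
# Borda-count selection among candidate plans, each scored on several
# objectives where lower cost is better (hypothetical data).

def borda_select(costs):
    """costs[i][j] = cost of plan i on objective j.
    Returns the index of the plan with the highest total Borda score."""
    n_plans = len(costs)
    n_objs = len(costs[0])
    scores = [0] * n_plans
    for j in range(n_objs):
        # Rank plans on objective j: best (lowest cost) gets n_plans - 1 points.
        order = sorted(range(n_plans), key=lambda i: costs[i][j])
        for rank, i in enumerate(order):
            scores[i] += (n_plans - 1) - rank
    return max(range(n_plans), key=lambda i: scores[i])

# Example: three plans, each optimal in one of three objectives.
plans = [
    [10.0, 7.0, 9.0],   # optimal in objective 0
    [12.0, 5.0, 8.0],   # optimal in objective 1
    [11.0, 6.0, 6.0],   # optimal in objective 2
]
print(borda_select(plans))  # -> 2, the best-compromise plan for this data
```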
2308.11776
|
Ange Lou
|
Ange Lou and Jack Noble
|
WS-SfMLearner: Self-supervised Monocular Depth and Ego-motion Estimation
on Surgical Videos with Unknown Camera Parameters
| null | null | null | null |
cs.CV cs.AI eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Depth estimation in surgical video plays a crucial role in many image-guided
surgery procedures. However, it is difficult and time-consuming to create depth
map ground truth datasets in surgical videos due in part to inconsistent
brightness and noise in the surgical scene. Therefore, building an accurate and
robust self-supervised depth and camera ego-motion estimation system is gaining
more attention from the computer vision community. Although several
self-supervision methods alleviate the need for ground truth depth maps and
poses, they still need known camera intrinsic parameters, which are often
missing or not recorded. Moreover, the camera intrinsic prediction methods in
existing works depend heavily on the quality of datasets. In this work, we
aimed to build a self-supervised depth and ego-motion estimation system which
can predict not only accurate depth maps and camera pose, but also camera
intrinsic parameters. We proposed a cost-volume-based supervision manner to
give the system auxiliary supervision for camera parameters prediction. The
experimental results showed that the proposed method improved the accuracy of
estimated camera parameters, ego-motion, and depth estimation.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 20:35:24 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Lou",
"Ange",
""
],
[
"Noble",
"Jack",
""
]
] |
new_dataset
| 0.999219 |
2308.11804
|
Eugene Bagdasaryan
|
Eugene Bagdasaryan, Vitaly Shmatikov
|
Ceci n'est pas une pomme: Adversarial Illusions in Multi-Modal
Embeddings
| null | null | null | null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-modal encoders map images, sounds, texts, videos, etc. into a single
embedding space, aligning representations across modalities (e.g., associate an
image of a dog with a barking sound). We show that multi-modal embeddings can
be vulnerable to an attack we call "adversarial illusions." Given an input in
any modality, an adversary can perturb it so as to make its embedding close to
that of an arbitrary, adversary-chosen input in another modality. Illusions
thus enable the adversary to align any image with any text, any text with any
sound, etc.
Adversarial illusions exploit proximity in the embedding space and are thus
agnostic to downstream tasks. Using ImageBind embeddings, we demonstrate how
adversarially aligned inputs, generated without knowledge of specific
downstream tasks, mislead image generation, text generation, and zero-shot
classification.
|
[
{
"version": "v1",
"created": "Tue, 22 Aug 2023 21:57:22 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Bagdasaryan",
"Eugene",
""
],
[
"Shmatikov",
"Vitaly",
""
]
] |
new_dataset
| 0.998599 |
2308.11918
|
Jingchun Zhou
|
Jingchun Zhou, Zongxin He, Kin-Man Lam, Yudong Wang, Weishi Zhang,
ChunLe Guo, Chongyi Li
|
AMSP-UOD: When Vortex Convolution and Stochastic Perturbation Meet
Underwater Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel Amplitude-Modulated Stochastic Perturbation
and Vortex Convolutional Network, AMSP-UOD, designed for underwater object
detection. AMSP-UOD specifically addresses the impact of non-ideal imaging
factors on detection accuracy in complex underwater environments. To mitigate
the influence of noise on object detection performance, we propose AMSP Vortex
Convolution (AMSP-VConv) to disrupt the noise distribution, enhance feature
extraction capabilities, effectively reduce parameters, and improve network
robustness. We design the Feature Association Decoupling Cross Stage Partial
(FAD-CSP) module, which strengthens the association of long and short-range
features, improving the network performance in complex underwater environments.
Additionally, our sophisticated post-processing method, based on non-maximum
suppression with aspect-ratio similarity thresholds, optimizes detection in
dense scenes, such as waterweed and schools of fish, improving object detection
accuracy. Extensive experiments on the URPC and RUOD datasets demonstrate that
our method outperforms existing state-of-the-art methods in terms of accuracy
and noise immunity. AMSP-UOD proposes an innovative solution with the potential
for real-world applications. Code will be made publicly available.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 05:03:45 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Zhou",
"Jingchun",
""
],
[
"He",
"Zongxin",
""
],
[
"Lam",
"Kin-Man",
""
],
[
"Wang",
"Yudong",
""
],
[
"Zhang",
"Weishi",
""
],
[
"Guo",
"ChunLe",
""
],
[
"Li",
"Chongyi",
""
]
] |
new_dataset
| 0.99819 |
2308.11985
|
Lin Sun
|
Lin Sun, Todd Rosenkrantz, Prathyusha Enganti, Huiyang Li, Zhijun
Wang, Hao Che, Hong Jiang, Xukai Zou
|
DSSP: A Distributed, SLO-aware, Sensing-domain-privacy-Preserving
Architecture for Sensing-as-a-Service
|
14 pages
| null | null | null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we propose DSSP, a Distributed, SLO-aware,
Sensing-domain-privacy-Preserving architecture for Sensing-as-a-Service (SaS).
DSSP addresses four major limitations of the current SaS architecture. First,
to improve sensing quality and enhance geographic coverage, DSSP allows
Independent sensing Administrative Domains (IADs) to participate in sensing
services, while preserving the autonomy of control and privacy for individual
domains. Second, DSSP enables a marketplace in which a sensing data seller
(i.e., an IAD) can sell its sensing data to more than one buyer (i.e., cloud
service provider (CSP)), rather than being locked in with just one CSP. Third,
DSSP enables per-query tail-latency service-level-objective (SLO) guaranteed
SaS. Fourth, DSSP enables distributed, rather than centralized, query
scheduling, making SaS highly scalable. At the core of DSSP is the design of a
budget decomposition technique that translates: (a) a query tail-latency SLO
into exact task response time budgets for sensing tasks of the query dispatched
to individual IADs; and (b) the task budget for a task arrived at an IAD into
exact subtask queuing deadlines for subtasks of the task dispatched to
individual edge nodes in each IAD. This enables IADs to allocate their internal
resources independently and accurately to meet the task budgets and hence,
query tail-latency SLO, based on a simple subtask-budget-aware
earliest-deadline-first queuing (EDFQ) policy for all the subtasks. The
performance and scalability of DSSP are evaluated and verified by both
on-campus testbed experiment at small scale and simulation at large scale.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 08:18:36 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Sun",
"Lin",
""
],
[
"Rosenkrantz",
"Todd",
""
],
[
"Enganti",
"Prathyusha",
""
],
[
"Li",
"Huiyang",
""
],
[
"Wang",
"Zhijun",
""
],
[
"Che",
"Hao",
""
],
[
"Jiang",
"Hong",
""
],
[
"Zou",
"Xukai",
""
]
] |
new_dataset
| 0.999582 |
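Note: the DSSP record above serves subtasks with a subtask-budget-aware earliest-deadline-first queuing (EDFQ) policy. As a minimal sketch of the generic EDF idea only — the budget-decomposition step is omitted and the deadlines below are assumed, not taken from the paper — a deadline-keyed priority queue in Python:

```python
# Minimal earliest-deadline-first (EDF) queue: subtasks are served in
# increasing order of their (hypothetical) queuing deadlines.
import heapq
import itertools

class EDFQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal deadlines

    def push(self, deadline, subtask):
        heapq.heappush(self._heap, (deadline, next(self._counter), subtask))

    def pop(self):
        deadline, _, subtask = heapq.heappop(self._heap)
        return deadline, subtask

q = EDFQueue()
q.push(120.0, "subtask-A")  # deadlines in ms (assumed unit)
q.push(80.0, "subtask-B")
q.push(95.0, "subtask-C")
for _ in range(3):
    print(q.pop())  # served as B, C, A
```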
2308.12008
|
Frederick Riemenschneider
|
Frederick Riemenschneider and Anette Frank
|
Graecia capta ferum victorem cepit. Detecting Latin Allusions to Ancient
Greek Literature
|
Paper accepted for publication at the First Workshop on Ancient
Language Processing (ALP) 2023; 9 pages, 5 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intertextual allusions hold a pivotal role in Classical Philology, with Latin
authors frequently referencing Ancient Greek texts. Until now, the automatic
identification of these intertextual references has been constrained to
monolingual approaches, seeking parallels solely within Latin or Greek texts.
In this study, we introduce SPhilBERTa, a trilingual Sentence-RoBERTa model
tailored for Classical Philology, which excels at cross-lingual semantic
comprehension and identification of identical sentences across Ancient Greek,
Latin, and English. We generate new training data by automatically translating
English texts into Ancient Greek. Further, we present a case study,
demonstrating SPhilBERTa's capability to facilitate automated detection of
intertextual parallels. Our models and resources are available at
https://github.com/Heidelberg-NLP/ancient-language-models.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 08:54:05 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Riemenschneider",
"Frederick",
""
],
[
"Frank",
"Anette",
""
]
] |
new_dataset
| 0.998894 |
2308.12009
|
Christopher Hahne
|
Christopher Hahne, Michel Hayoz, Raphael Sznitman
|
StofNet: Super-resolution Time of Flight Network
|
pre-print
| null | null | null |
cs.CV eess.IV physics.geo-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Time of Flight (ToF) is a prevalent depth sensing technology in the fields of
robotics, medical imaging, and non-destructive testing. Yet, ToF sensing faces
challenges from complex ambient conditions that make inverse modelling from the
sparse temporal information intractable. This paper highlights the potential of
modern super-resolution techniques to learn varying surroundings for a reliable
and accurate ToF detection. Unlike existing models, we tailor an architecture
for sub-sample precise semi-global signal localization by combining
super-resolution with an efficient residual contraction block to balance
between fine signal details and large scale contextual information. We
consolidate research on ToF by conducting a benchmark comparison against six
state-of-the-art methods for which we employ two publicly available datasets.
This includes the release of our SToF-Chirp dataset captured by an airborne
ultrasound transducer. Results showcase the superior performance of our
proposed StofNet in terms of precision, reliability and model complexity. Our
code is available at https://github.com/hahnec/stofnet.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 09:02:01 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Hahne",
"Christopher",
""
],
[
"Hayoz",
"Michel",
""
],
[
"Sznitman",
"Raphael",
""
]
] |
new_dataset
| 0.999603 |
2308.12028
|
Hao Chen
|
Chen hao, Xie Runfeng, Cui Xiangyang, Yan Zhou, Wang Xin, Xuan
Zhanwei, Zhang Kai
|
LKPNR: LLM and KG for Personalized News Recommendation Framework
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Accurately recommending candidate news articles to users is a basic challenge
faced by personalized news recommendation systems. Traditional methods usually
struggle to grasp the complex semantic information in news texts, resulting in
unsatisfactory recommendation results. Besides, these traditional methods are
more friendly to active users with rich historical behaviors. However, they
cannot effectively solve the "long tail problem" of inactive
users. To address these issues, this research presents a novel general
framework that combines Large Language Models (LLM) and Knowledge Graphs (KG)
into semantic representations of traditional methods. In order to improve
semantic understanding in complex news texts, we use LLMs' powerful text
understanding ability to generate news representations containing rich semantic
information. In addition, our method combines the information about news
entities and mines high-order structural information through multiple hops in
KG, thus alleviating the challenge of long tail distribution. Experimental
results demonstrate that compared with various traditional models, the
framework significantly improves the recommendation effect. The successful
integration of LLM and KG in our framework has established a feasible path for
achieving more accurate personalized recommendations in the news field. Our
code is available at https://github.com/Xuan-ZW/LKPNR.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 09:39:18 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"hao",
"Chen",
""
],
[
"Runfeng",
"Xie",
""
],
[
"Xiangyang",
"Cui",
""
],
[
"Zhou",
"Yan",
""
],
[
"Xin",
"Wang",
""
],
[
"Zhanwei",
"Xuan",
""
],
[
"Kai",
"Zhang",
""
]
] |
new_dataset
| 0.998535 |
2308.12035
|
Shuhei Kurita
|
Shuhei Kurita, Naoki Katsura, Eri Onami
|
RefEgo: Referring Expression Comprehension Dataset from First-Person
Perception of Ego4D
|
15 pages, 11 figures. ICCV2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Grounding textual expressions on scene objects from first-person views is a
truly demanding capability in developing agents that are aware of their
surroundings and behave following intuitive text instructions. Such capability
is of necessity for glass-devices or autonomous robots to localize referred
objects in the real-world. In the conventional referring expression
comprehension tasks of images, however, datasets are mostly constructed based
on the web-crawled data and don't reflect diverse real-world structures on the
task of grounding textual expressions in diverse objects in the real world.
Recently, a massive-scale egocentric video dataset of Ego4D was proposed. Ego4D
covers diverse real-world scenes around the world, including numerous indoor and
outdoor situations such as shopping, cooking, walking, talking, manufacturing,
etc. Based on egocentric videos of Ego4D, we constructed a broad coverage of
the video-based referring expression comprehension dataset: RefEgo. Our dataset
includes more than 12k video clips and 41 hours for video-based referring
expression comprehension annotation. In experiments, we combine the
state-of-the-art 2D referring expression comprehension models with the object
tracking algorithm, achieving the video-wise referred object tracking even in
difficult conditions: the referred object becomes out-of-frame in the middle of
the video or multiple similar objects are presented in the video.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 09:49:20 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Kurita",
"Shuhei",
""
],
[
"Katsura",
"Naoki",
""
],
[
"Onami",
"Eri",
""
]
] |
new_dataset
| 0.999812 |
2308.12061
|
Amna Elmustafa
|
Jonathan Xu, Amna Elmustafa, Liya Weldegebriel, Emnet Negash, Richard
Lee, Chenlin Meng, Stefano Ermon, David Lobell
|
HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using
Harvest Piles and Remote Sensing
|
18 pages, 22 figures
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Small farms contribute to a large share of the productive land in developing
countries. In regions such as sub-Saharan Africa, where 80% of farms are small
(under 2 ha in size), the task of mapping smallholder cropland is an important
part of tracking sustainability measures such as crop productivity. However,
the visually diverse and nuanced appearance of small farms has limited the
effectiveness of traditional approaches to cropland mapping. Here we introduce
a new approach based on the detection of harvest piles characteristic of many
smallholder systems throughout the world. We present HarvestNet, a dataset for
mapping the presence of farms in the Ethiopian regions of Tigray and Amhara
during 2020-2023, collected using expert knowledge and satellite images,
totaling 7k hand-labeled images and 2k ground-collected labels. We also
benchmark a set of baselines including SOTA models in remote sensing with our
best models achieving around 80% classification performance on hand-labeled data
and 90% and 98% accuracy on ground-truth data for Tigray and Amhara, respectively. We
also perform a visual comparison with a widely used pre-existing coverage map
and show that our model detects an extra 56,621 hectares of cropland in Tigray.
We conclude that remote sensing of harvest piles can contribute to more timely
and accurate cropland assessments in food-insecure regions.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 11:03:28 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Xu",
"Jonathan",
""
],
[
"Elmustafa",
"Amna",
""
],
[
"Weldegebriel",
"Liya",
""
],
[
"Negash",
"Emnet",
""
],
[
"Lee",
"Richard",
""
],
[
"Meng",
"Chenlin",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Lobell",
"David",
""
]
] |
new_dataset
| 0.99986 |
2308.12067
|
Lai Wei
|
Lai Wei, Zihao Jiang, Weiran Huang, Lichao Sun
|
InstructionGPT-4: A 200-Instruction Paradigm for Fine-Tuning MiniGPT-4
| null | null | null | null |
cs.LG cs.AI cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multimodal large language models acquire their instruction-following
capabilities through a two-stage training process: pre-training on image-text
pairs and fine-tuning on supervised vision-language instruction data. Recent
studies have shown that large language models can achieve satisfactory results
even with a limited amount of high-quality instruction-following data. In this
paper, we introduce InstructionGPT-4, which is fine-tuned on a small dataset
comprising only 200 examples, amounting to approximately 6% of the
instruction-following data used in the alignment dataset for MiniGPT-4. We
first propose several metrics to assess the quality of multimodal instruction
data. Based on these metrics, we present a simple and effective data selector
to automatically identify and filter low-quality vision-language data. By
employing this method, InstructionGPT-4 outperforms the original MiniGPT-4 on
various evaluations (e.g., visual question answering, GPT-4 preference).
Overall, our findings demonstrate that a small amount of high-quality
instruction-tuning data can efficiently enable multimodal large language models
to generate better output.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 11:27:30 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Wei",
"Lai",
""
],
[
"Jiang",
"Zihao",
""
],
[
"Huang",
"Weiran",
""
],
[
"Sun",
"Lichao",
""
]
] |
new_dataset
| 0.998548 |
2308.12079
|
Brittany Reid
|
Brittany Reid, Christoph Treude, Markus Wagner
|
Using the TypeScript compiler to fix erroneous Node.js snippets
|
Accepted in the 23rd IEEE International Working Conference on Source
Code Analysis and Manipulation (SCAM) 2023
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most online code snippets do not run. This means that developers looking to
reuse code from online sources must manually find and fix errors. We present an
approach for automatically evaluating and correcting errors in Node.js code
snippets: Node Code Correction (NCC). NCC leverages the ability of the
TypeScript compiler to generate errors and inform code corrections through the
combination of TypeScript's built-in codefixes, our own targeted fixes, and
deletion of erroneous lines. Compared to existing approaches using linters, our
findings suggest that NCC is capable of detecting a larger number of errors per
snippet and more error types, and it is more efficient at fixing snippets. We
find that 73.7% of the code snippets in NPM documentation have errors; with the
use of NCC's corrections, this number was reduced to 25.1%. Our evaluation
confirms that the use of the TypeScript compiler to inform code corrections is
a promising strategy to aid in the reuse of code snippets from online sources.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 11:58:01 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Reid",
"Brittany",
""
],
[
"Treude",
"Christoph",
""
],
[
"Wagner",
"Markus",
""
]
] |
new_dataset
| 0.986315 |
2308.12088
|
Junyi Shen
|
Junyi Shen, Tetsuro Miyazaki, Shingo Ohno, Maina Sogabe, and Kenji
Kawashima
|
Trajectory Tracking Control of Dual-PAM Soft Actuator with Hysteresis
Compensator
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft robotics is an emergent and swiftly evolving field. Pneumatic actuators
are suitable for driving soft robots because of their superior performance.
However, their control is not easy due to their hysteresis characteristics. In
response to these challenges, we propose an adaptive control method to
compensate for the hysteresis of a soft actuator. Employing a novel dual pneumatic
artificial muscle (PAM) bending actuator, the innovative control strategy
abates hysteresis effects by dynamically modulating gains within a traditional
PID controller corresponding with the predicted motion of the reference
trajectory. Through comparative experimental evaluation, we found that the new
control method outperforms its conventional counterparts regarding tracking
accuracy and response speed. Our work reveals a new direction for advancing
control in soft actuators.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 12:20:06 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Shen",
"Junyi",
""
],
[
"Miyazaki",
"Tetsuro",
""
],
[
"Ohno",
"Shingo",
""
],
[
"Sogabe",
"Maina",
""
],
[
"Kawashima",
"Kenji",
""
]
] |
new_dataset
| 0.997831 |
2308.12116
|
Christoph Reich
|
Christoph Reich, Tim Prangemeier, Heinz Koeppl
|
The TYC Dataset for Understanding Instance-Level Semantics and Motions
of Cells in Microstructures
|
Accepted at ICCV 2023 Workshop on BioImage Computing. Project page
(with links to the dataset and code):
https://christophreich1996.github.io/tyc_dataset/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Segmenting cells and tracking their motion over time is a common task in
biomedical applications. However, predicting accurate instance-wise
segmentation and cell motions from microscopy imagery remains a challenging
task. Using microstructured environments for analyzing single cells in a
constant flow of media adds additional complexity. While large-scale labeled
microscopy datasets are available, we are not aware of any large-scale dataset
that includes both cells and microstructures. In this paper, we introduce the
trapped yeast cell (TYC) dataset, a novel dataset for understanding
instance-level semantics and motions of cells in microstructures. We release
$105$ dense annotated high-resolution brightfield microscopy images, including
about $19$k instance masks. We also release $261$ curated video clips composed
of $1293$ high-resolution microscopy images to facilitate unsupervised
understanding of cell motions and morphology. TYC offers ten times more
instance annotations than the previous largest dataset that includes cells and
microstructures. Our effort also exceeds previous attempts in terms of
microstructure variability, resolution, complexity, and capturing device
(microscopy) variability. We facilitate a unified comparison on our novel
dataset by introducing a standardized evaluation strategy. TYC and evaluation
code are publicly available under CC BY 4.0 license.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 13:10:33 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Reich",
"Christoph",
""
],
[
"Prangemeier",
"Tim",
""
],
[
"Koeppl",
"Heinz",
""
]
] |
new_dataset
| 0.999792 |
2308.12134
|
Pieter Hartel
|
Pieter Hartel, Eljo Haspels, Mark van Staalduinen, Octavio Texeira
|
DarkDiff: Explainable web page similarity of TOR onion sites
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
In large-scale data analysis, near-duplicates are often a problem. For
example, with two near-duplicate phishing emails, a difference in the
salutation (Mr versus Ms) is not essential, but whether it is bank A or B is
important. The state-of-the-art in near-duplicate detection is a black box
approach (MinHash), so one only knows that emails are near-duplicates, but not
why. We present DarkDiff, which can efficiently detect near-duplicates while
providing the reason why there is a near-duplicate. We have developed DarkDiff
to detect near-duplicates of homepages on the Darkweb. DarkDiff works well on
those pages because they resemble the clear web of the past.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 13:44:14 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Hartel",
"Pieter",
""
],
[
"Haspels",
"Eljo",
""
],
[
"van Staalduinen",
"Mark",
""
],
[
"Texeira",
"Octavio",
""
]
] |
new_dataset
| 0.996896 |
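Note: the DarkDiff record above contrasts its explainable approach with MinHash, the black-box baseline for near-duplicate detection. For background only, here is a minimal MinHash sketch in Python; the shingle size, number of hash functions, and example strings are arbitrary assumptions:

```python
# Minimal MinHash sketch: estimate Jaccard similarity between two texts
# from fixed-length signatures over their character shingles.
import hashlib

def shingles(text, k=5):
    text = " ".join(text.split()).lower()
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash_signature(items, num_hashes=64):
    sig = []
    for seed in range(num_hashes):
        # Seeded hash of every shingle; keep only the minimum per seed.
        sig.append(min(
            int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in items
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature(shingles("Dear Mr Smith, your account at Bank A was locked."))
b = minhash_signature(shingles("Dear Ms Smith, your account at Bank A was locked."))
print(estimated_jaccard(a, b))  # high value -> likely near-duplicates
```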
2308.12141
|
Jie Zhang
|
Zhe Lei, Jie Zhang, Jingtao Li, Weiming Zhang, and Nenghai Yu
|
Aparecium: Revealing Secrets from Physical Photographs
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Watermarking is a crucial tool for safeguarding copyrights and can serve as a
more aesthetically pleasing alternative to QR codes. In recent years,
watermarking methods based on deep learning have shown superior robustness
against complex physical distortions compared to traditional watermarking methods.
However, they have certain limitations that render them less effective in
practice. For instance, current solutions necessitate physical photographs to
be rectangular for accurate localization, cannot handle physical bending or
folding, and require the hidden area to be completely captured at a close
distance and small angle. To overcome these challenges, we propose a novel deep
watermarking framework dubbed \textit{Aparecium}. Specifically, we preprocess
secrets (i.e., watermarks) into a pattern and then embed it into the cover
image, which is symmetrical to the final decoding-then-extracting process. To
capture the watermarked region from complex physical scenarios, a locator is
also introduced. Besides, we adopt a three-stage training strategy for training
convergence. Extensive experiments demonstrate that \textit{Aparecium} is not
only robust against different digital distortions, but also can resist various
physical distortions, such as screen-shooting and printing-shooting, even in
severe cases including different shapes, curvature, folding, incompleteness,
long distances, and big angles while maintaining high visual quality.
Furthermore, some ablation studies are also conducted to verify our design.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 13:56:38 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Lei",
"Zhe",
""
],
[
"Zhang",
"Jie",
""
],
[
"Li",
"Jingtao",
""
],
[
"Zhang",
"Weiming",
""
],
[
"Yu",
"Nenghai",
""
]
] |
new_dataset
| 0.997044 |
2308.12152
|
Emilio Vital Brazil
|
Ronan Amorim, Emilio Vital Brazil, Faramarz Samavati, Mario Costa
Sousa
|
Geo-Sketcher: Rapid 3D Geological Modeling using Geological and
Topographic Map Sketches
|
21 pages, 30 Figures
| null | null | null |
cs.GR cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The construction of 3D geological models is an essential task in oil/gas
exploration, development and production. However, it is a cumbersome,
time-consuming and error-prone task mainly because of the model's geometric and
topological complexity. The models construction is usually separated into
interpretation and 3D modeling, performed by different highly specialized
individuals, which leads to inconsistencies and intensifies the challenges. In
addition, the creation of models following geological rules is paramount for
properly depicting static and dynamic properties of oil/gas reservoirs. In this
work, we propose a sketch-based approach to expedite the creation of valid 3D
geological models by mimicking how domain experts interpret geological
structures, allowing creating models directly from interpretation sketches. Our
sketch-based modeler (Geo-Sketcher) is based on sketches of standard 2D
topographic and geological maps, comprised of lines, symbols and annotations.
We developed a graph-based representation to enable (1) the automatic
computation of the relative ages of rock series and layers, and (2) the
embedding of specific geological rules directly in the sketching. We introduce
the use of Hermite-Birkhoff Radial Basis Functions to interpolate the
geological map constraints, and demonstrate the capabilities of our approach
with a variety of results with different levels of complexity.
|
[
{
"version": "v1",
"created": "Mon, 21 Aug 2023 17:01:36 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Amorim",
"Ronan",
""
],
[
"Brazil",
"Emilio Vital",
""
],
[
"Samavati",
"Faramarz",
""
],
[
"Sousa",
"Mario Costa",
""
]
] |
new_dataset
| 0.989619 |
2308.12163
|
Sucheng Ren
|
Ziyu Yang, Sucheng Ren, Zongwei Wu, Nanxuan Zhao, Junle Wang, Jing
Qin, Shengfeng He
|
NPF-200: A Multi-Modal Eye Fixation Dataset and Method for
Non-Photorealistic Videos
|
Accepted by ACM MM 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-photorealistic videos are in demand with the wave of the metaverse, but
lack sufficient research. This work aims to take a step forward to
understand how humans perceive non-photorealistic videos with eye fixation
(\ie, saliency detection), which is critical for enhancing media production,
artistic design, and game user experience. To fill in the gap of missing a
suitable dataset for this research line, we present NPF-200, the first
large-scale multi-modal dataset of purely non-photorealistic videos with eye
fixations. Our dataset has three characteristics: 1) it contains soundtracks
that are essential according to vision and psychological studies; 2) it
includes diverse semantic content and its videos are of high quality; 3) it has
rich motions across and within videos. We conduct a series of analyses to gain
deeper insights into this task and compare several state-of-the-art methods to
explore the gap between natural images and non-photorealistic data.
Additionally, as the human attention system tends to extract visual and audio
features with different frequencies, we propose a universal frequency-aware
multi-modal non-photorealistic saliency detection model called NPSNet,
demonstrating the state-of-the-art performance of our task. The results uncover
strengths and weaknesses of multi-modal network design and multi-domain
training, opening up promising directions for future works. {Our dataset and
code can be found at \url{https://github.com/Yangziyu/NPF200}}.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 14:25:22 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Yang",
"Ziyu",
""
],
[
"Ren",
"Sucheng",
""
],
[
"Wu",
"Zongwei",
""
],
[
"Zhao",
"Nanxuan",
""
],
[
"Wang",
"Junle",
""
],
[
"Qin",
"Jing",
""
],
[
"He",
"Shengfeng",
""
]
] |
new_dataset
| 0.999723 |
2308.12228
|
Changyan He
|
Adam Schonewille, Changyan He, Cameron Forbrigger, Nancy Wu, James
Drake, Thomas Looi, Eric Diller
|
Electromagnets Under the Table: an Unobtrusive Magnetic Navigation
System for Microsurgery
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Miniature magnetic tools have the potential to enable minimally invasive
surgical techniques to be applied to space-restricted surgical procedures in
areas such as neurosurgery. However, typical magnetic navigation systems, which
create the magnetic fields to drive such tools, either cannot generate large
enough fields, or surround the patient in a way that obstructs surgeon access
to the patient. This paper introduces the design of a magnetic navigation
system with eight electromagnets arranged completely under the operating table,
to endow the system with maximal workspace accessibility, which allows the
patient to lie down on the top surface of the system without any constraints.
The resulting optimal geometric layout of the electromagnets maximizes the field
strength and uniformity over a reasonable neurosurgical operating volume. The
system can generate non-uniform magnetic fields up to 38 mT along the x and y
axes and 47 mT along the z axis at a working distance of 120 mm away from the
actuation system workbench, deep enough to deploy magnetic microsurgical tools
in the brain. The forces which can be exerted on millimeter-scale magnets used
in prototype neurosurgical tools are validated experimentally. Due to its large
workspace, this system could be used to control milli-robots in a variety of
surgical applications.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 16:09:28 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Schonewille",
"Adam",
""
],
[
"He",
"Changyan",
""
],
[
"Forbrigger",
"Cameron",
""
],
[
"Wu",
"Nancy",
""
],
[
"Drake",
"James",
""
],
[
"Looi",
"Thomas",
""
],
[
"Diller",
"Eric",
""
]
] |
new_dataset
| 0.997721 |
2308.12234
|
Lucas Morin
|
Lucas Morin, Martin Danelljan, Maria Isabel Agea, Ahmed Nassar, Valery
Weber, Ingmar Meijer, Peter Staar, Fisher Yu
|
MolGrapher: Graph-based Visual Recognition of Chemical Structures
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The automatic analysis of chemical literature has immense potential to
accelerate the discovery of new materials and drugs. Much of the critical
information in patent documents and scientific articles is contained in
figures, depicting the molecule structures. However, automatically parsing the
exact chemical structure is a formidable challenge, due to the amount of
detailed information, the diversity of drawing styles, and the need for
training data. In this work, we introduce MolGrapher to recognize chemical
structures visually. First, a deep keypoint detector detects the atoms. Second,
we treat all candidate atoms and bonds as nodes and put them in a graph. This
construct allows a natural graph representation of the molecule. Last, we
classify atom and bond nodes in the graph with a Graph Neural Network. To
address the lack of real training data, we propose a synthetic data generation
pipeline producing diverse and realistic results. In addition, we introduce a
large-scale benchmark of annotated real molecule images, USPTO-30K, to spur
research on this critical topic. Extensive experiments on five datasets show
that our approach significantly outperforms classical and learning-based
methods in most settings. Code, models, and datasets are available.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 16:16:11 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Morin",
"Lucas",
""
],
[
"Danelljan",
"Martin",
""
],
[
"Agea",
"Maria Isabel",
""
],
[
"Nassar",
"Ahmed",
""
],
[
"Weber",
"Valery",
""
],
[
"Meijer",
"Ingmar",
""
],
[
"Staar",
"Peter",
""
],
[
"Yu",
"Fisher",
""
]
] |
new_dataset
| 0.999545 |
2308.12261
|
Vijay Viswanathan
|
Vijay Viswanathan, Chenyang Zhao, Amanda Bertsch, Tongshuang Wu,
Graham Neubig
|
Prompt2Model: Generating Deployable Models from Natural Language
Instructions
|
8 pages
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) enable system builders today to create competent
NLP systems through prompting, where they only need to describe the task in
natural language and provide a few examples. However, in other ways, LLMs are a
step backward from traditional special-purpose NLP models; they require
extensive computational resources for deployment and can be gated behind APIs.
In this paper, we propose Prompt2Model, a general-purpose method that takes a
natural language task description like the prompts provided to LLMs, and uses
it to train a special-purpose model that is conducive to deployment. This is
done through a multi-step process of retrieval of existing datasets and
pretrained models, dataset generation using LLMs, and supervised fine-tuning on
these retrieved and generated datasets. Over three tasks, we demonstrate that
given the same few-shot prompt as input, Prompt2Model trains models that
outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20%
while being up to 700 times smaller. We also show that this data can be used to
obtain reliable estimates of model performance, enabling model
developers to assess model reliability before deployment. Prompt2Model is
available open-source at https://github.com/neulab/prompt2model.
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 17:28:21 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Viswanathan",
"Vijay",
""
],
[
"Zhao",
"Chenyang",
""
],
[
"Bertsch",
"Amanda",
""
],
[
"Wu",
"Tongshuang",
""
],
[
"Neubig",
"Graham",
""
]
] |
new_dataset
| 0.999352 |
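The Prompt2Model abstract above describes a retrieve-generate-finetune pipeline. The sketch below only mimics that control flow with toy stub functions; none of these names correspond to the real prompt2model package at the linked repository.

def retrieve_datasets(task_prompt):
    # Stand-in for retrieval of existing datasets matching the task description.
    return [("retrieved-dataset", [("input placeholder", "output placeholder")])]

def generate_examples(task_prompt, few_shot):
    # Stand-in for LLM-based dataset generation seeded with the few-shot examples.
    return ("generated-dataset", list(few_shot))

def finetune(base_model_name, datasets):
    # Stand-in for supervised fine-tuning; just reports what would be trained.
    n_examples = sum(len(rows) for _, rows in datasets)
    return {"base_model": base_model_name, "train_examples": n_examples}

def prompt_to_model(task_prompt, few_shot_examples):
    datasets = retrieve_datasets(task_prompt)
    synthetic = generate_examples(task_prompt, few_shot_examples)
    return finetune("retrieved-pretrained-model", datasets + [synthetic])

print(prompt_to_model("Answer trivia questions.", [("Q: capital of France?", "Paris")]))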
2308.12267
|
Parvez Mahbub
|
Parvez Mahbub, Mohammad Masudur Rahman, Ohiduzzaman Shuvo, Avinash
Gopal
|
Bugsplainer: Leveraging Code Structures to Explain Software Bugs with
Neural Machine Translation
|
arXiv admin note: substantial text overlap with arXiv:2212.04584
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Software bugs cost the global economy billions of dollars each year and take
up ~50% of the development time. Once a bug is reported, the assigned developer
attempts to identify and understand the source code responsible for the bug and
then corrects the code. Over the last five decades, there has been significant
research on automatically finding or correcting software bugs. However, there
has been little research on automatically explaining the bugs to the
developers, which is essential but a highly challenging task. In this paper, we
propose Bugsplainer, a novel web-based debugging solution that generates
natural language explanations for software bugs by learning from a large corpus
of bug-fix commits. Bugsplainer leverages code structures to reason about a bug
and employs the fine-tuned version of a text generation model, CodeT5, to
generate the explanations.
Tool video: https://youtu.be/xga-ScvULpk
|
[
{
"version": "v1",
"created": "Wed, 23 Aug 2023 17:35:16 GMT"
}
] | 2023-08-24T00:00:00 |
[
[
"Mahbub",
"Parvez",
""
],
[
"Rahman",
"Mohammad Masudur",
""
],
[
"Shuvo",
"Ohiduzzaman",
""
],
[
"Gopal",
"Avinash",
""
]
] |
new_dataset
| 0.988806 |
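As a hedged sketch of the kind of seq2seq generation the Bugsplainer abstract describes: the snippet below runs a generic generate() call through Hugging Face transformers with the public CodeT5 base checkpoint as a stand-in. The real tool uses fine-tuned weights and code-structure features that are not reproduced here.

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Public base checkpoint as a stand-in; not the fine-tuned Bugsplainer weights.
checkpoint = "Salesforce/codet5-base"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

buggy_code = "def div(a, b):\n    return a / b  # crashes when b == 0"
inputs = tokenizer(buggy_code, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))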
2012.06874
|
Franz J. Brandenburg
|
Franz J. Brandenburg
|
Book Embeddings of k-Map Graphs
| null | null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
A map is a partition of the sphere into regions that are labeled as countries
or holes. The vertices of a map graph are the countries of a map. There is an
edge if and only if the countries are adjacent and meet in at least one point.
For a k-map graph, at most k countries meet in a point. A graph is k-planar if
it can be drawn in the plane with at most k crossings per edge. A p-page book
embedding of a graph is a linear ordering of the vertices and an assignment of
the edges to p pages, so that there is no conflict for edges assigned to the
same page. The minimum number of pages is the book thickness of a graph, also
known as stack number or page number. We show that every k-map graph has a book
embedding in $6\lfloor k/2 \rfloor+5$ pages, which, for n-vertex graphs, can be
computed in O(kn) time from its map. Our result improves the best known upper
bound. Towards a lower bound, it is shown that some k-map graphs need $\lfloor
3k/4 \rfloor$ pages. In passing, we obtain an improved upper bound of eleven
pages for 1-planar graphs, which are subgraphs of 4-map graphs, and of 17 pages
for optimal 2-planar graphs.
|
[
{
"version": "v1",
"created": "Sat, 12 Dec 2020 17:49:12 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 13:08:38 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Brandenburg",
"Franz J.",
""
]
] |
new_dataset
| 0.964977 |
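As a hedged, self-contained illustration of the object the abstract above studies (not code from the paper): a p-page book embedding is valid when no two edges on the same page cross, i.e. their endpoints do not interleave along the spine ordering. The checker below encodes exactly that condition for a toy K4 example.

from itertools import combinations

def edges_cross(e, f, position):
    a, b = sorted((position[e[0]], position[e[1]]))
    c, d = sorted((position[f[0]], position[f[1]]))
    # Two chords cross exactly when their endpoints interleave on the spine.
    return a < c < b < d or c < a < d < b

def is_valid_book_embedding(spine_order, page_assignment):
    """spine_order: list of vertices; page_assignment: dict edge -> page index."""
    position = {v: i for i, v in enumerate(spine_order)}
    for e, f in combinations(page_assignment, 2):
        if page_assignment[e] == page_assignment[f] and edges_cross(e, f, position):
            return False
    return True

# K4 on two pages: the two crossing diagonals go on different pages.
spine = [0, 1, 2, 3]
pages = {(0, 1): 0, (1, 2): 0, (2, 3): 0, (0, 3): 0, (0, 2): 0, (1, 3): 1}
print(is_valid_book_embedding(spine, pages))  # True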
2208.07174
|
Han Wu
|
Han Wu, Sareh Rowlands and Johan Wahlstrom
|
A Man-in-the-Middle Attack against Object Detection Systems
|
6 pages, 7 figures
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Object detection systems using deep learning models have become increasingly
popular in robotics thanks to the rising power of CPUs and GPUs in embedded
systems. However, these models are susceptible to adversarial attacks. While
some attacks are limited by strict assumptions on access to the detection
system, we propose a novel hardware attack inspired by Man-in-the-Middle
attacks in cryptography. This attack generates a Universal Adversarial
Perturbation (UAP) and then injects the perturbation between the USB camera and
the detection system via a hardware attack. Moreover, prior research is misled
by an evaluation metric that measures the model accuracy rather than the attack
performance. In combination with our proposed evaluation metrics, we
significantly increase the strength of adversarial perturbations. These
findings raise serious concerns for applications of deep learning models in
safety-critical systems, such as autonomous driving.
|
[
{
"version": "v1",
"created": "Mon, 15 Aug 2022 13:21:41 GMT"
},
{
"version": "v2",
"created": "Fri, 16 Sep 2022 01:46:44 GMT"
},
{
"version": "v3",
"created": "Mon, 21 Aug 2023 22:42:48 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Wu",
"Han",
""
],
[
"Rowlands",
"Sareh",
""
],
[
"Wahlstrom",
"Johan",
""
]
] |
new_dataset
| 0.978301 |
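As a hedged illustration of the attack surface the abstract above describes (frames altered between camera and detector): the sketch below adds one fixed perturbation to every frame and clips to the valid pixel range. The perturbation is random here rather than optimized against a model, and all sizes and names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
height, width = 480, 640
# Fixed "universal" perturbation with a +/- 8 intensity budget (random placeholder).
universal_perturbation = rng.integers(-8, 9, size=(height, width, 3))

def perturb_frame(frame_uint8):
    # Apply the same perturbation to any incoming frame, staying in [0, 255].
    perturbed = frame_uint8.astype(np.int16) + universal_perturbation
    return np.clip(perturbed, 0, 255).astype(np.uint8)

frame = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)
adversarial_frame = perturb_frame(frame)
print(np.abs(adversarial_frame.astype(int) - frame.astype(int)).max())  # <= 8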
2210.11549
|
Ziyue Xiang
|
Ziyue Xiang, Paolo Bestagini, Stefano Tubaro, Edward J. Delp
|
H4VDM: H.264 Video Device Matching
| null | null |
10.1007/978-3-031-37742-6_24
| null |
cs.CV cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
Methods that can determine if two given video sequences are captured by the
same device (e.g., mobile telephone or digital camera) can be used in many
forensics tasks. In this paper we refer to this as "video device matching". In
open-set video forensics scenarios it is easier to determine if two video
sequences were captured with the same device than identifying the specific
device. In this paper, we propose a technique for open-set video device
matching. Given two H.264 compressed video sequences, our method can determine
if they are captured by the same device, even if our method has never
encountered the device in training. We denote our proposed technique as H.264
Video Device Matching (H4VDM). H4VDM uses H.264 compression information
extracted from video sequences to make decisions. It is more robust against
artifacts that alter camera sensor fingerprints, and it can be used to analyze
relatively small fragments of the H.264 sequence. We trained and tested our
method on a publicly available video forensics dataset consisting of 35
devices, where our proposed method demonstrated good performance.
|
[
{
"version": "v1",
"created": "Thu, 20 Oct 2022 19:31:23 GMT"
},
{
"version": "v2",
"created": "Sat, 19 Aug 2023 15:17:00 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Aug 2023 16:15:26 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Xiang",
"Ziyue",
""
],
[
"Bestagini",
"Paolo",
""
],
[
"Tubaro",
"Stefano",
""
],
[
"Delp",
"Edward J.",
""
]
] |
new_dataset
| 0.998502 |
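As a hedged sketch of the open-set decision rule the H4VDM abstract describes: compare feature vectors extracted from two video fragments and declare "same device" above a similarity threshold. The random features and threshold below are placeholders, not real H.264 codec features.

import numpy as np

def same_device(feat_a, feat_b, threshold=0.9):
    # Cosine similarity between two fragment-level feature vectors.
    sim = feat_a @ feat_b / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b))
    return sim >= threshold

rng = np.random.default_rng(1)
f1 = rng.normal(size=128)
f2 = f1 + rng.normal(scale=0.05, size=128)   # near-duplicate -> same device
print(same_device(f1, f2), same_device(f1, rng.normal(size=128)))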
2301.09767
|
Mariyam Amir
|
Mariyam Amir, Murchana Baruah, Mahsa Eslamialishah, Sina Ehsani,
Alireza Bahramali, Sadra Naddaf-Sh, Saman Zarandioon
|
Truveta Mapper: A Zero-shot Ontology Alignment Framework
| null | null | null | null |
cs.LG cs.AI cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, a new perspective is suggested for unsupervised Ontology
Matching (OM) or Ontology Alignment (OA) by treating it as a translation task.
Ontologies are represented as graphs, and the translation is performed from a
node in the source ontology graph to a path in the target ontology graph. The
proposed framework, Truveta Mapper (TM), leverages a multi-task
sequence-to-sequence transformer model to perform alignment across multiple
ontologies in a zero-shot, unified and end-to-end manner. Multi-tasking enables
the model to implicitly learn the relationship between different ontologies via
transfer-learning without requiring any explicit cross-ontology manually
labeled data. This also enables the formulated framework to outperform existing
solutions for both runtime latency and alignment quality. The model is
pre-trained and fine-tuned only on publicly available text corpus and
inner-ontologies data. The proposed solution outperforms state-of-the-art
approaches, Edit-Similarity, LogMap, AML, BERTMap, and the recently presented
new OM frameworks in Ontology Alignment Evaluation Initiative (OAEI22), offers
log-linear complexity, and overall makes the OM task efficient and more
straightforward without much post-processing involving mapping extension or
mapping repair. We are open sourcing our solution.
|
[
{
"version": "v1",
"created": "Tue, 24 Jan 2023 00:32:56 GMT"
},
{
"version": "v2",
"created": "Fri, 31 Mar 2023 22:05:53 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Aug 2023 00:22:42 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Amir",
"Mariyam",
""
],
[
"Baruah",
"Murchana",
""
],
[
"Eslamialishah",
"Mahsa",
""
],
[
"Ehsani",
"Sina",
""
],
[
"Bahramali",
"Alireza",
""
],
[
"Naddaf-Sh",
"Sadra",
""
],
[
"Zarandioon",
"Saman",
""
]
] |
new_dataset
| 0.967605 |
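As a hedged illustration of the node-to-path framing in the Truveta Mapper abstract (not the actual model): the helper below turns a concept in a toy ontology tree into the root-to-node path string that a sequence-to-sequence model could be trained to emit. All labels are made up.

# Toy ontology expressed as child -> parent links; labels are invented.
toy_ontology = {
    "myocardial infarction": "heart disease",
    "heart disease": "cardiovascular disorder",
    "cardiovascular disorder": "disorder",
}

def node_to_path(node, parents, separator=" > "):
    # Walk up to the root and emit the path as one target string,
    # the way a seq2seq ontology matcher could be supervised.
    path = [node]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return separator.join(reversed(path))

print(node_to_path("myocardial infarction", toy_ontology))
# disorder > cardiovascular disorder > heart disease > myocardial infarction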
2302.10753
|
Lingrui Yu
|
Lingrui Yu
|
DTAAD: Dual Tcn-Attention Networks for Anomaly Detection in Multivariate
Time Series Data
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Anomaly detection techniques enable effective anomaly detection and diagnosis
in multivariate time series data, which are of major significance for today's
industrial applications. However, building an anomaly detection system that can
rapidly and accurately locate anomalies is challenging due to the lack of
outlier labels, the high-dimensional complexity of the data, memory bottlenecks
in real hardware, and the need for fast inference. In this paper, we propose
DTAAD, an anomaly detection and diagnosis model based on a Transformer and a
Dual Temporal Convolutional Network (TCN). The overall model is an integrated
design in which an autoregressive (AR) model is combined with autoencoder (AE)
structures, and scaling methods and feedback mechanisms are introduced to
improve prediction accuracy and expand correlation differences. The Dual
TCN-Attention Network (DTA) we construct uses only a single Transformer encoder
layer in our baseline experiment, making it an ultra-lightweight model.
Extensive experiments on six public datasets validate that DTAAD exceeds the
current most advanced baseline methods in both detection and diagnostic
performance. Specifically, DTAAD improves F1 scores by $8.38\%$ and reduces
training time by $99\%$ compared to the baseline. The code and training scripts
are publicly available on GitHub at https://github.com/Yu-Lingrui/DTAAD.
|
[
{
"version": "v1",
"created": "Fri, 17 Feb 2023 06:59:45 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 04:41:17 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Yu",
"Lingrui",
""
]
] |
new_dataset
| 0.99339 |
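As a hedged illustration of the temporal-convolution building block a TCN-based detector like the one in the abstract relies on (a generic block, not the DTAAD code; all sizes are arbitrary):

import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad so the convolution never looks at future time steps.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                         # x: (batch, channels, time)
        out = nn.functional.pad(x, (self.pad, 0)) # causal (left-only) padding
        return self.act(self.conv(out)) + x       # residual connection

block = CausalConvBlock(channels=8, dilation=2)
series = torch.randn(4, 8, 100)                  # 4 windows, 8 variables, 100 steps
print(block(series).shape)                       # torch.Size([4, 8, 100])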
2303.02242
|
Qian Lou
|
Qian Lou, Yepeng Liu, Bo Feng
|
TrojText: Test-time Invisible Textual Trojan Insertion
|
In The Eleventh International Conference on Learning Representations.
2023 (ICLR 2023)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In Natural Language Processing (NLP), intelligent neuron models can be
susceptible to textual Trojan attacks. Such attacks occur when Trojan models
behave normally for standard inputs but generate malicious output for inputs
that contain a specific trigger. Syntactic-structure triggers, which are
invisible, are becoming more popular for Trojan attacks because they are
difficult to detect and defend against. However, these types of attacks require
a large corpus of training data to generate poisoned samples with the necessary
syntactic structures for Trojan insertion. Obtaining such data can be difficult
for attackers, and the process of generating syntactic poisoned triggers and
inserting Trojans can be time-consuming. This paper proposes a solution called
TrojText, which aims to determine whether invisible textual Trojan attacks can
be performed more efficiently and cost-effectively without training data. The
proposed approach, called the Representation-Logit Trojan Insertion (RLI)
algorithm, uses smaller sampled test data instead of large training data to
achieve the desired attack. The paper also introduces two additional
techniques, namely the accumulated gradient ranking (AGR) and Trojan Weights
Pruning (TWP), to reduce the number of tuned parameters and the attack
overhead. The TrojText approach was evaluated on three datasets (AG's News,
SST-2, and OLID) using three NLP models (BERT, XLNet, and DeBERTa). The
experiments demonstrated that the TrojText approach achieved a 98.35\%
classification accuracy for test sentences in the target class on the BERT
model for the AG's News dataset. The source code for TrojText is available at
https://github.com/UCF-ML-Research/TrojText.
|
[
{
"version": "v1",
"created": "Fri, 3 Mar 2023 22:19:22 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 02:34:19 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Lou",
"Qian",
""
],
[
"Liu",
"Yepeng",
""
],
[
"Feng",
"Bo",
""
]
] |
new_dataset
| 0.999269 |
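As a hedged note on the evaluation quantity behind numbers like those reported in the abstract above: a targeted Trojan attack is typically scored by its success rate, the fraction of triggered inputs classified as the attacker's target label. A minimal sketch with made-up predictions:

def attack_success_rate(predictions, target_label):
    # Fraction of triggered inputs that the model assigns to the target label.
    return sum(pred == target_label for pred in predictions) / len(predictions)

# Made-up predictions for three triggered sentences from a news classifier.
print(attack_success_rate(["sports", "sports", "world"], target_label="sports"))  # ~0.667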
2303.05555
|
Dongha Chung
|
Dongha Chung, Jonghwi Kim, Changyu Lee, and Jinwhan Kim
|
Pohang Canal Dataset: A Multimodal Maritime Dataset for Autonomous
Navigation in Restricted Waters
|
Submitted to IJRR as a data paper for review
|
The International Journal of Robotics Research. 2023
|
10.1177/02783649231191145
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This paper presents a multimodal maritime dataset and the data collection
procedure used to gather it, which aims to facilitate autonomous navigation in
restricted water environments. The dataset comprises measurements obtained
using various perception and navigation sensors, including a stereo camera, an
infrared camera, an omnidirectional camera, three LiDARs, a marine radar, a
global positioning system, and an attitude heading reference system. The data
were collected along a 7.5-km-long route that includes a narrow canal, inner
and outer ports, and near-coastal areas in Pohang, South Korea. The collection
was conducted under diverse weather and visual conditions. The dataset and its
detailed description are available for free download at
https://sites.google.com/view/pohang-canal-dataset.
|
[
{
"version": "v1",
"created": "Thu, 9 Mar 2023 19:30:21 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Chung",
"Dongha",
""
],
[
"Kim",
"Jonghwi",
""
],
[
"Lee",
"Changyu",
""
],
[
"Kim",
"Jinwhan",
""
]
] |
new_dataset
| 0.999844 |
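As a hedged utility related to the abstract above (unrelated to the dataset's actual file layout): summing haversine distances between consecutive GPS fixes is one way to sanity-check a route length such as the reported 7.5 km. The waypoints below are made up.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two latitude/longitude fixes.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

waypoints = [(36.019, 129.343), (36.025, 129.355), (36.032, 129.365)]  # invented fixes
total = sum(haversine_m(*waypoints[i], *waypoints[i + 1]) for i in range(len(waypoints) - 1))
print(round(total))  # route length in metres for these made-up fixes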
2304.07597
|
Christoph Reich
|
Christoph Reich, Tim Prangemeier, Andr\'e O. Fran\c{c}ani, Heinz
Koeppl
|
An Instance Segmentation Dataset of Yeast Cells in Microstructures
|
IEEE EMBC 2023 (in press), Christoph Reich and Tim Prangemeier - both
authors contributed equally
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extracting single-cell information from microscopy data requires accurate
instance-wise segmentations. Obtaining pixel-wise segmentations from microscopy
imagery remains a challenging task, especially with the added complexity of
microstructured environments. This paper presents a novel dataset for
segmenting yeast cells in microstructures. We offer pixel-wise instance
segmentation labels for both cells and trap microstructures. In total, we
release 493 densely annotated microscopy images. To facilitate a unified
comparison between novel segmentation algorithms, we propose a standardized
evaluation strategy for our dataset. The aim of the dataset and evaluation
strategy is to facilitate the development of new cell segmentation approaches.
The dataset is publicly available at
https://christophreich1996.github.io/yeast_in_microstructures_dataset/ .
|
[
{
"version": "v1",
"created": "Sat, 15 Apr 2023 17:05:24 GMT"
},
{
"version": "v2",
"created": "Sun, 23 Apr 2023 11:30:29 GMT"
},
{
"version": "v3",
"created": "Tue, 22 Aug 2023 16:14:53 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Reich",
"Christoph",
""
],
[
"Prangemeier",
"Tim",
""
],
[
"Françani",
"André O.",
""
],
[
"Koeppl",
"Heinz",
""
]
] |
new_dataset
| 0.999534 |
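As a hedged illustration of the basic quantity behind instance-segmentation evaluation on a dataset like the one above (not the authors' evaluation code): mask IoU between a predicted and a ground-truth instance.

import numpy as np

def mask_iou(pred_mask, gt_mask):
    # Intersection-over-union of two binary instance masks.
    pred_mask = pred_mask.astype(bool)
    gt_mask = gt_mask.astype(bool)
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred_mask, gt_mask).sum() / union

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(mask_iou(a, b))  # 9 / 23 ~= 0.391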
2304.13207
|
Mohammad Reza Karimi Dastjerdi
|
Mohammad Reza Karimi Dastjerdi, Jonathan Eisenmann, Yannick
Hold-Geoffroy, Jean-Fran\c{c}ois Lalonde
|
EverLight: Indoor-Outdoor Editable HDR Lighting Estimation
|
ICCV 2023, https://lvsn.github.io/everlight/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because of the diversity in lighting environments, existing illumination
estimation techniques have been designed explicitly on indoor or outdoor
environments. Methods have focused specifically on capturing accurate energy
(e.g., through parametric lighting models), which emphasizes shading and strong
cast shadows; or producing plausible texture (e.g., with GANs), which
prioritizes plausible reflections. Approaches which provide editable lighting
capabilities have been proposed, but these tend to be with simplified lighting
models, offering limited realism. In this work, we propose to bridge the gap
between these recent trends in the literature, and propose a method which
combines a parametric light model with 360{\deg} panoramas, ready to use as
HDRI in rendering engines. We leverage recent advances in GAN-based LDR
panorama extrapolation from a regular image, which we extend to HDR using
parametric spherical gaussians. To achieve this, we introduce a novel lighting
co-modulation method that injects lighting-related features throughout the
generator, tightly coupling the original or edited scene illumination within
the panorama generation process. In our representation, users can easily edit
light direction, intensity, number, etc. to impact shading while providing
rich, complex reflections while seamlessly blending with the edits.
Furthermore, our method encompasses indoor and outdoor environments,
demonstrating state-of-the-art results even when compared to domain-specific
methods.
|
[
{
"version": "v1",
"created": "Wed, 26 Apr 2023 00:20:59 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Aug 2023 18:53:15 GMT"
}
] | 2023-08-23T00:00:00 |
[
[
"Dastjerdi",
"Mohammad Reza Karimi",
""
],
[
"Eisenmann",
"Jonathan",
""
],
[
"Hold-Geoffroy",
"Yannick",
""
],
[
"Lalonde",
"Jean-François",
""
]
] |
new_dataset
| 0.970777 |
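As a hedged illustration of the parametric light representation the abstract mentions: a spherical gaussian lobe is commonly written as amplitude * exp(sharpness * (v . axis - 1)). The sketch below evaluates that form; EverLight's exact parameterization may differ.

import numpy as np

def spherical_gaussian(directions, axis, sharpness, amplitude):
    """directions: (N, 3) unit vectors; axis: (3,) unit vector defining the lobe."""
    cosines = directions @ axis
    return amplitude * np.exp(sharpness * (cosines - 1.0))

# Radiance of one lobe sampled along a few viewing directions (made-up values).
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]])
print(spherical_gaussian(dirs, np.array([0.0, 0.0, 1.0]), sharpness=20.0, amplitude=3.0))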