title (string) | abstract (string) | url (string) | category (string) | prediction (string) | probability (float64) | arxiv_id (string)
---|---|---|---|---|---|---
Meta-Learning for Automated Selection of Anomaly Detectors for Semi-Supervised Datasets
|
In anomaly detection, a prominent task is to induce a model to identify
anomalies learned solely based on normal data. Generally, one is interested in
finding an anomaly detector that correctly identifies anomalies, i.e., data
points that do not belong to the normal class, without raising too many false
alarms. Which anomaly detector is best suited depends on the dataset at hand
and thus needs to be tailored. The quality of an anomaly detector may be
assessed via confusion-based metrics such as the Matthews correlation
coefficient (MCC). However, since during training only normal data is available
in a semi-supervised setting, such metrics are not accessible. To facilitate
automated machine learning for anomaly detectors, we propose to employ
meta-learning to predict MCC scores based on metrics that can be computed with
normal data only. First promising results can be obtained considering the
hypervolume and the false positive rate as meta-features.
|
http://arxiv.org/abs/2211.13681v1
|
cs.LG
|
not_new_dataset
| 0.992006 |
2211.13681
|
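The MCC named in the abstract above has a standard closed form over confusion-matrix counts. A minimal pure-Python sketch of that formula (not the authors' code; the example counts are illustrative):

```python
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # common convention when any marginal count is zero
    return (tp * tn - fp * fn) / denom

# A perfect detector reaches +1; chance-level performance sits near 0.
print(mcc(tp=50, tn=40, fp=0, fn=0))    # 1.0
print(mcc(tp=25, tn=20, fp=20, fn=25))  # 0.0
```

Because MCC needs true labels for both classes, it cannot be computed from normal-only training data, which is exactly the gap the paper's meta-learning approach addresses.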
Can lies be faked? Comparing low-stakes and high-stakes deception video datasets from a Machine Learning perspective
|
Despite the great impact of lies in human societies and a meager 54% human
accuracy for Deception Detection (DD), Machine Learning systems that perform
automated DD are still not viable for proper application in real-life settings
due to data scarcity. Few publicly available DD datasets exist and the creation
of new datasets is hindered by the conceptual distinction between low-stakes
and high-stakes lies. Theoretically, the two kinds of lies are so distinct that
a dataset of one kind could not be used for applications for the other kind.
Even though it is easier to acquire data on low-stakes deception, since it can
be simulated (faked) in controlled settings, these lies do not hold the same
significance or depth as genuine high-stakes lies, which are much harder to
obtain and are the ones of practical interest for automated DD systems. To
investigate whether this distinction holds in practice, we design several
experiments comparing a high-stakes DD dataset and a low-stakes DD dataset,
evaluating their results on a Deep Learning classifier working exclusively
from video data. In our experiments, a network trained on low-stakes lies
classified high-stakes deception more accurately than low-stakes deception,
although using low-stakes lies as an augmentation strategy for the
high-stakes dataset decreased its accuracy.
|
http://arxiv.org/abs/2211.13035v2
|
cs.CV
|
not_new_dataset
| 0.992103 |
2211.13035
|
Video compression dataset and benchmark of learning-based video-quality metrics
|
Video-quality measurement is a critical task in video processing. Nowadays,
many implementations of new encoding standards - such as AV1, VVC, and LCEVC -
use deep-learning-based decoding algorithms with perceptual metrics that serve
as optimization objectives. But investigations of the performance of modern
video- and image-quality metrics commonly employ videos compressed using older
standards, such as AVC. In this paper, we present a new benchmark for
video-quality metrics that evaluates video compression. It is based on a new
dataset consisting of about 2,500 streams encoded using different standards,
including AVC, HEVC, AV1, VP9, and VVC. Subjective scores were collected using
crowdsourced pairwise comparisons. The list of evaluated metrics includes
recent ones based on machine learning and neural networks. The results
demonstrate that new no-reference metrics exhibit a high correlation with
subjective quality and approach the capability of top full-reference metrics.
|
http://arxiv.org/abs/2211.12109v2
|
cs.CV
|
new_dataset
| 0.994528 |
2211.12109
|
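The abstract above mentions subjective scores collected via crowdsourced pairwise comparisons. One standard way to turn pairwise win counts into per-item quality scores is the Bradley-Terry model; a sketch using the classic MM update (an assumption about methodology, not the authors' pipeline, with made-up counts):

```python
def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry scores from a pairwise win-count matrix.

    wins[i][j] = number of times item i was preferred over item j.
    Uses the classic MM (minorization-maximization) update.
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalize after each sweep
    return p

# Item 0 wins most comparisons, so it should get the highest score.
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
scores = bradley_terry(wins)
assert scores[0] > scores[1] > scores[2]
```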
DSLOB: A Synthetic Limit Order Book Dataset for Benchmarking Forecasting Algorithms under Distributional Shift
|
In electronic trading markets, limit order books (LOBs) provide information
about pending buy/sell orders at various price levels for a given security.
Recently, there has been a growing interest in using LOB data for resolving
downstream machine learning tasks (e.g., forecasting). However, dealing with
out-of-distribution (OOD) LOB data is challenging since distributional shifts
are unlabeled in current publicly available LOB datasets. Therefore, it is
critical to build a synthetic LOB dataset with labeled OOD samples serving as a
testbed for developing models that generalize well to unseen scenarios. In this
work, we utilize a multi-agent market simulator to build a synthetic LOB
dataset, named DSLOB, with and without market stress scenarios, which allows
for the design of controlled distributional shift benchmarking. Using the
proposed synthetic dataset, we provide a holistic analysis on the forecasting
performance of three different state-of-the-art forecasting methods. Our
results reflect the need for increased researcher efforts to develop algorithms
with robustness to distributional shifts in high-frequency time series data.
|
http://arxiv.org/abs/2211.11513v1
|
q-fin.ST
|
new_dataset
| 0.994512 |
2211.11513
|
AutoTherm: A Dataset and Ablation Study for Thermal Comfort Prediction in Vehicles
|
State recognition in well-known and customizable environments such as
vehicles enables novel insights into users and potentially their intentions.
Besides safety-relevant insights into, for example, fatigue, user
experience-related assessments become increasingly relevant. As thermal comfort
is vital for overall comfort, we introduce a dataset for its prediction in
vehicles incorporating 31 input signals and self-labeled user ratings based on
a 7-point Likert scale (-3 to +3) by 21 subjects. An importance ranking of such
signals indicates higher impact on prediction for signals like ambient
temperature, ambient humidity, radiation temperature, and skin temperature.
Leveraging modern machine learning architectures enables us to not only
automatically recognize the human thermal comfort state but also predict future
states. We provide details on how we train a recurrent network-based classifier
and, thus, perform an initial performance benchmark of our proposed thermal
comfort dataset. Ultimately, we compare our collected dataset to publicly
available datasets.
|
http://arxiv.org/abs/2211.08257v2
|
cs.HC
|
new_dataset
| 0.994484 |
2211.08257
|
Machine Learning Performance Analysis to Predict Stroke Based on Imbalanced Medical Dataset
|
Cerebral stroke, the second most substantial cause of death universally, has
been a primary public health concern over the last few years. With the help of
machine learning techniques, early detection of various stroke warning signs is
accessible, which can effectively prevent or mitigate stroke. Medical
datasets, however, are frequently imbalanced in their class labels, which
leads models to predict minority classes poorly. In this paper, the potential risk
factors for stroke are investigated. Moreover, four distinctive approaches are
applied to improve the classification of the minority class in the imbalanced
stroke dataset: an ensemble weighted voting classifier, the Synthetic
Minority Over-sampling Technique (SMOTE), Principal Component Analysis with
K-Means Clustering (PCA-Kmeans), and Focal Loss with a Deep Neural Network
(DNN); their performance is then compared. The analysis shows that SMOTE and
PCA-Kmeans with DNN-Focal Loss work best on this severely imbalanced dataset
of limited size, outperforming prior Kaggle work by a factor of 2-4.
|
http://arxiv.org/abs/2211.07652v1
|
cs.LG
|
not_new_dataset
| 0.992072 |
2211.07652
|
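The core of the SMOTE technique named in the abstract above is to synthesize a new minority-class sample by interpolating between a minority point and one of its minority-class neighbors. A simplified sketch of that single step (the study likely used a library implementation such as imbalanced-learn; neighbor search is omitted):

```python
import random

def smote_sample(x, neighbor):
    """Create one synthetic sample on the segment between a minority
    point and one of its minority-class neighbors (core SMOTE step)."""
    lam = random.random()  # interpolation factor in [0, 1)
    return [a + lam * (b - a) for a, b in zip(x, neighbor)]

random.seed(0)
x, nb = [1.0, 2.0], [3.0, 6.0]
s = smote_sample(x, nb)
# The synthetic point lies between the two originals, coordinate-wise.
assert all(min(a, b) <= v <= max(a, b) for v, a, b in zip(s, x, nb))
```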
LSA-T: The first continuous Argentinian Sign Language dataset for Sign Language Translation
|
Sign language translation (SLT) is an active field of study that encompasses
human-computer interaction, computer vision, natural language processing and
machine learning. Progress in this field could lead to higher levels of
integration of deaf people. This paper presents, to the best of our knowledge,
the first continuous Argentinian Sign Language (LSA) dataset. It contains
14,880 sentence level videos of LSA extracted from the CN Sordos YouTube
channel with labels and keypoints annotations for each signer. We also present
a method for inferring the active signer, a detailed analysis of the
characteristics of the dataset, a visualization tool to explore the dataset and
a neural SLT model to serve as a baseline for future experiments.
|
http://arxiv.org/abs/2211.15481v1
|
cs.CV
|
new_dataset
| 0.994403 |
2211.15481
|
Model Evaluation in Medical Datasets Over Time
|
Machine learning models deployed in healthcare systems face data drawn from
continually evolving environments. However, researchers proposing such models
typically evaluate them in a time-agnostic manner, with train and test splits
sampling patients throughout the entire study period. We introduce the
Evaluation on Medical Datasets Over Time (EMDOT) framework and Python package,
which evaluates the performance of a model class over time. Across five medical
datasets and a variety of models, we compare two training strategies: (1) using
all historical data, and (2) using a window of the most recent data. We note
changes in performance over time, and identify possible explanations for these
shocks.
|
http://arxiv.org/abs/2211.07165v1
|
cs.LG
|
not_new_dataset
| 0.992093 |
2211.07165
|
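The two training strategies compared in the abstract above (all historical data vs. a window of recent data) can be sketched for time-stamped records. Names and the year-based split are illustrative assumptions, not the EMDOT package's API:

```python
def temporal_splits(records, test_year, window=None):
    """Split time-stamped (year, features) records for evaluation at
    `test_year`: train on all history, or only on a recent window."""
    history = [r for r in records if r[0] < test_year]
    if window is not None:
        history = [r for r in history if r[0] >= test_year - window]
    test = [r for r in records if r[0] == test_year]
    return history, test

records = [(y, f"x{y}") for y in range(2010, 2021)]
full, test = temporal_splits(records, test_year=2020)
recent, _ = temporal_splits(records, test_year=2020, window=3)
assert len(full) == 10 and len(recent) == 3 and len(test) == 1
```

This contrasts with the time-agnostic evaluation the paper critiques, where train and test patients are sampled from the entire study period.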
Collecting Interactive Multi-modal Datasets for Grounded Language Understanding
|
Human intelligence can adapt remarkably quickly to new tasks and
environments. Starting from a very young age, humans acquire new skills and
learn how to solve new tasks either by imitating the behavior of others or by
following provided natural language instructions. To facilitate research which
can enable similar capabilities in machines, we made the following
contributions: (1) we formalized the task of a collaborative embodied agent
using natural language; (2) we developed a tool for extensive and scalable
data collection; and (3) we collected the first dataset for interactive
grounded language understanding.
|
http://arxiv.org/abs/2211.06552v3
|
cs.CL
|
new_dataset
| 0.99389 |
2211.06552
|
Dark patterns in e-commerce: a dataset and its baseline evaluations
|
Dark patterns, which are user interface designs in online services, induce
users to take unintended actions. Recently, dark patterns have been raised as
an issue of privacy and fairness. Thus, a wide range of research on detecting
dark patterns is eagerly awaited. In this work, we constructed a dataset for
dark pattern detection and prepared its baseline detection performance with
state-of-the-art machine learning methods. The original dataset was obtained
from Mathur et al.'s study in 2019, which consists of 1,818 dark pattern texts
from shopping sites. Then, we added negative samples, i.e., non-dark pattern
texts, by retrieving texts from the same websites as Mathur et al.'s dataset.
We also applied state-of-the-art machine learning methods to show the automatic
detection accuracy as baselines, including BERT, RoBERTa, ALBERT, and XLNet. As
a result of 5-fold cross-validation, we achieved the highest accuracy of 0.975
with RoBERTa. The dataset and baseline source codes are available at
https://github.com/yamanalab/ec-darkpattern.
|
http://arxiv.org/abs/2211.06543v1
|
cs.LG
|
new_dataset
| 0.994416 |
2211.06543
|
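The 0.975 accuracy reported in the abstract above comes from 5-fold cross-validation. The fold mechanics can be sketched in pure Python (a generic illustration; model training with RoBERTa is omitted and the splitting details are assumptions):

```python
def k_fold_indices(n, k=5):
    """Partition indices 0..n-1 into k contiguous, near-equal folds,
    yielding (train, test) index lists as in k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(k_fold_indices(10, k=5))
assert len(folds) == 5
# Every sample appears in exactly one test fold.
assert sorted(i for _, test in folds for i in test) == list(range(10))
```

The reported accuracy would then be the mean of the per-fold test accuracies.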
A Benchmarking Dataset with 2440 Organic Molecules for Volume Distribution at Steady State
|
Background: The volume of distribution at steady state (VDss) is a
fundamental pharmacokinetics (PK) property of drugs, which measures how
effectively a drug molecule is distributed throughout the body. Along with the
clearance (CL), it determines the half-life and, therefore, the drug dosing
interval. However, the molecular data size limits the generalizability of the
reported machine learning models. Objective: This study aims to provide a clean
and comprehensive dataset for human VDss as the benchmarking data source,
fostering and benefiting future predictive studies. Moreover, several
predictive models were also built with machine learning regression algorithms.
Methods: The dataset was curated from 13 publicly accessible data sources and
the DrugBank database entirely from intravenous drug administration and then
underwent extensive data cleaning. The molecular descriptors were calculated
with Mordred, and feature selection was conducted for constructing predictive
models. Five machine learning methods were used to build regression models,
grid search was used to optimize hyperparameters, and ten-fold cross-validation
was used to evaluate the model. Results: An enriched dataset of VDss
(https://github.com/da-wen-er/VDss) was constructed with 2440 molecules. Among
the prediction models, the LightGBM model was the most stable and had the best
internal prediction ability, with Q2 = 0.837 and R2 = 0.814; for the other four
models, Q2 was higher than 0.79. Conclusions: To the best of our knowledge,
this is the largest dataset for VDss, which can be used as the benchmark for
computational studies of VDss. Moreover, the regression models reported within
this study can be of use for pharmacokinetics-related studies.
|
http://arxiv.org/abs/2211.05661v1
|
q-bio.QM
|
new_dataset
| 0.994534 |
2211.05661
|
Sentiment Analysis of Persian Language: Review of Algorithms, Approaches and Datasets
|
Sentiment analysis aims to extract people's emotions and opinions from their
comments on the web. It is widely used in business to detect sentiment in
social data, gauge brand reputation, and understand customers. Most articles
in this area have concentrated on the English language, whereas resources for
the Persian language are limited. In this review paper, recently published
articles (between 2018 and 2022) on sentiment analysis in the Persian
language are collected, and their methods, approaches, and datasets are
explained and analyzed. Almost all of the methods used for sentiment analysis
are machine learning and deep learning. The purpose of this paper is to
examine 40 different approaches to sentiment analysis in the Persian
language, to analyze the datasets along with the accuracy of the algorithms
applied to them, and to review the strengths and weaknesses of each. Among
all the methods, transformers such as BERT and recurrent neural networks such
as LSTM and Bi-LSTM have achieved higher accuracy in sentiment analysis. In
addition to the methods and approaches, the datasets reviewed between 2018
and 2022 are listed, and information about each dataset and its details is
provided.
|
http://arxiv.org/abs/2212.06041v1
|
cs.CL
|
not_new_dataset
| 0.992033 |
2212.06041
|
Efficacy of MRI data harmonization in the age of machine learning. A multicenter study across 36 datasets
|
Pooling publicly available MRI data from multiple sites makes it possible to
assemble extensive groups of subjects, increase statistical power, and promote data
reuse with machine learning techniques. The harmonization of multicenter data
is necessary to reduce the confounding effect associated with non-biological
sources of variability in the data. However, when applied to the entire dataset
before machine learning, the harmonization leads to data leakage, because
information outside the training set may affect model building, and potentially
falsely overestimate performance. We propose 1) a measure of the efficacy
of data harmonization; and 2) a harmonizer transformer, i.e., an
implementation of ComBat harmonization that can be encapsulated among the
preprocessing steps of a machine learning pipeline, avoiding data leakage. We tested these
tools using brain T1-weighted MRI data from 1740 healthy subjects acquired at
36 sites. After harmonization, the site effect was removed or reduced, and we
showed the data leakage effect in predicting individual age from MRI data,
highlighting that introducing the harmonizer transformer into a machine
learning pipeline allows for avoiding data leakage.
|
http://arxiv.org/abs/2211.04125v3
|
cs.LG
|
not_new_dataset
| 0.992073 |
2211.04125
|
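The leakage problem described in the abstract above can be illustrated with a much simpler stand-in for ComBat: per-site mean-centering whose site offsets are estimated on the training split only and then applied to held-out data. This is a toy sketch of the fit/transform discipline, not the authors' harmonizer transformer:

```python
class SiteCenterer:
    """Toy harmonizer: removes per-site mean offsets for one feature.
    Fitting only on training data avoids the leakage described above."""

    def fit(self, values, sites):
        self.offsets = {}
        for s in set(sites):
            xs = [v for v, si in zip(values, sites) if si == s]
            self.offsets[s] = sum(xs) / len(xs)
        return self

    def transform(self, values, sites):
        # Sites unseen during fitting pass through unchanged.
        return [v - self.offsets.get(s, 0.0) for v, s in zip(values, sites)]

h = SiteCenterer().fit([1.0, 3.0, 10.0, 12.0], ["A", "A", "B", "B"])
assert h.transform([2.0, 11.0], ["A", "B"]) == [0.0, 0.0]
```

Fitting the harmonizer on the full dataset instead would let test-set statistics influence the offsets, which is the leakage the paper quantifies.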
Human-Machine Collaboration Approaches to Build a Dialogue Dataset for Hate Speech Countering
|
Fighting online hate speech is a challenge that is usually addressed using
Natural Language Processing via automatic detection and removal of hate
content. Besides this approach, counter narratives have emerged as an effective
tool employed by NGOs to respond to online hate on social media platforms. For
this reason, Natural Language Generation is currently being studied as a way to
automatize counter narrative writing. However, the existing resources necessary
to train NLG models are limited to 2-turn interactions (a hate speech and a
counter narrative as response), while in real life, interactions can consist of
multiple turns. In this paper, we present a hybrid approach for dialogical data
collection, which combines the intervention of human expert annotators over
machine generated dialogues obtained using 19 different configurations. The
result of this work is DIALOCONAN, the first dataset comprising over 3000
fictitious multi-turn dialogues between a hater and an NGO operator, covering 6
targets of hate.
|
http://arxiv.org/abs/2211.03433v1
|
cs.CL
|
new_dataset
| 0.994492 |
2211.03433
|
Fitting a Collider in a Quantum Computer: Tackling the Challenges of Quantum Machine Learning for Big Datasets
|
Current quantum systems have significant limitations affecting the processing
of large datasets with high dimensionality, typical of high energy physics. In
the present paper, feature and data prototype selection techniques were studied
to tackle this challenge. A grid search was performed and quantum machine
learning models were trained and benchmarked against classical shallow machine
learning methods, trained both in the reduced and the complete datasets. The
performance of the quantum algorithms was found to be comparable to the
classical ones, even when using large datasets. Sequential Backward Selection
and Principal Component Analysis techniques were used for feature selection;
while the former can produce better quantum machine learning models in
specific cases, it is more unstable. Additionally, we show that such
variability in the results is caused by the use of discrete variables,
highlighting the suitability of Principal Component analysis transformed data
for quantum machine learning applications in the high energy physics context.
|
http://arxiv.org/abs/2211.03233v3
|
hep-ph
|
not_new_dataset
| 0.992255 |
2211.03233
|
A Synthetic Dataset for 5G UAV Attacks Based on Observable Network Parameters
|
Synthetic datasets are beneficial for machine learning researchers due to the
possibility of experimenting with new strategies and algorithms in the training
and testing phases. These datasets can easily include more scenarios that might
be costly to research with real data or can complement and, in some cases,
replace real data measurements, depending on the quality of the synthetic data.
They can also solve the unbalanced data problem, avoid overfitting, and can be
used in training while testing can be done with real data. In this paper, we
present, to the best of our knowledge, the first synthetic dataset for Unmanned
Aerial Vehicle (UAV) attacks in 5G and beyond networks based on the following
key observable network parameters that indicate power levels: the Received
Signal Strength Indicator (RSSI) and the Signal to Interference-plus-Noise
Ratio (SINR). The main objective of this data is to enable deep network
development for UAV communication security, especially for algorithm
development or the analysis of time-series data applied to UAV attack
recognition. Our proposed dataset provides insights into network functionality
when static or moving UAV attackers target authenticated UAVs in an urban
environment. The dataset also considers the presence and absence of
authenticated terrestrial users in the network, which may decrease the deep
network's ability to identify attacks. Furthermore, the data provides a deeper
understanding of the metrics available in the 5G physical and MAC layers for
machine learning and statistics research. The dataset will be available at
archive-beta.ics.uci.edu.
|
http://arxiv.org/abs/2211.09706v1
|
cs.NI
|
new_dataset
| 0.994533 |
2211.09706
|
Data Models for Dataset Drift Controls in Machine Learning With Optical Images
|
Camera images are ubiquitous in machine learning research. They also play a
central role in the delivery of important services spanning medicine and
environmental surveying. However, the application of machine learning models in
these domains has been limited because of robustness concerns. A primary
failure mode are performance drops due to differences between the training and
deployment data. While there are methods to prospectively validate the
robustness of machine learning models to such dataset drifts, existing
approaches do not account for explicit models of the primary object of
interest: the data. This limits our ability to study and understand the
relationship between data generation and downstream machine learning model
performance in a physically accurate manner. In this study, we demonstrate how
to overcome this limitation by pairing traditional machine learning with
physical optics to obtain explicit and differentiable data models. We
demonstrate how such data models can be constructed for image data and used to
control downstream machine learning model performance related to dataset drift.
The findings are distilled into three applications. First, drift synthesis
enables the controlled generation of physically faithful drift test cases to
power model selection and targeted generalization. Second, the gradient
connection between machine learning task model and data model allows advanced,
precise tolerancing of task model sensitivity to changes in the data
generation. These drift forensics can be used to precisely specify the
acceptable data environments in which a task model may be run. Third, drift
optimization opens up the possibility to create drifts that can help the task
model learn better faster, effectively optimizing the data generating process
itself. A guide to access the open code and datasets is available at
https://github.com/aiaudit-org/raw2logit.
|
http://arxiv.org/abs/2211.02578v3
|
cs.LG
|
not_new_dataset
| 0.992192 |
2211.02578
|
MultiWOZ-DF -- A Dataflow implementation of the MultiWOZ dataset
|
Semantic Machines (SM) have introduced the use of the dataflow (DF) paradigm
to dialogue modelling, using computational graphs to hierarchically represent
user requests, data, and the dialogue history [Semantic Machines et al. 2020].
Although the main focus of that paper was the SMCalFlow dataset (to date, the
only dataset with "native" DF annotations), they also reported some results of
an experiment using a transformed version of the commonly used MultiWOZ dataset
[Budzianowski et al. 2018] into a DF format. In this paper, we expand the
experiments using DF for the MultiWOZ dataset, exploring some additional
experimental set-ups. The code and instructions to reproduce the experiments
reported here have been released. The contributions of this paper are: 1) a DF
implementation capable of executing MultiWOZ dialogues; 2) several versions of
the conversion of MultiWOZ into a DF format; and 3) experimental results on
state match and translation accuracy.
|
http://arxiv.org/abs/2211.02303v1
|
cs.CL
|
not_new_dataset
| 0.992075 |
2211.02303
|
Making Machine Learning Datasets and Models FAIR for HPC: A Methodology and Case Study
|
The FAIR Guiding Principles aim to improve the findability, accessibility,
interoperability, and reusability of digital content by making them both human
and machine actionable. However, these principles have not yet been broadly
adopted in the domain of machine learning-based program analyses and
optimizations for High-Performance Computing (HPC). In this paper, we design a
methodology to make HPC datasets and machine learning models FAIR after
investigating existing FAIRness assessment and improvement techniques. Our
methodology includes a comprehensive, quantitative assessment of selected data,
followed by concrete, actionable suggestions to improve FAIRness with respect
to common issues related to persistent identifiers, rich metadata descriptions,
license and provenance information. Moreover, we select a representative
training dataset to evaluate our methodology. The experiment shows the
methodology can effectively improve the dataset and model's FAIRness from an
initial score of 19.1% to the final score of 83.0%.
|
http://arxiv.org/abs/2211.02092v1
|
cs.LG
|
not_new_dataset
| 0.992197 |
2211.02092
|
Seeing the Unseen: Errors and Bias in Visual Datasets
|
From face recognition in smartphones to automatic routing on self-driving
cars, machine vision algorithms lie in the core of these features. These
systems solve image based tasks by identifying and understanding objects,
subsequently making decisions based on this information. However, errors in
datasets are usually propagated or even magnified by algorithms, at times
resulting in issues such as recognising black people as gorillas and
misrepresenting ethnicities in search results. This paper tracks the errors in
datasets and their impacts, revealing that a flawed dataset can result from
limited categories, incomplete sourcing, and poor classification.
|
http://arxiv.org/abs/2211.01847v1
|
cs.CV
|
not_new_dataset
| 0.99205 |
2211.01847
|
Evaluating a Synthetic Image Dataset Generated with Stable Diffusion
|
We generate synthetic images with the "Stable Diffusion" image generation
model using the Wordnet taxonomy and the definitions of concepts it contains.
This synthetic image database can be used as training data for data
augmentation in machine learning applications, and it is used to investigate
the capabilities of the Stable Diffusion model.
Analyses show that Stable Diffusion can produce correct images for a large
number of concepts, but also a large variety of different representations. The
results show differences depending on the test concepts considered and problems
with very specific concepts. These evaluations were performed using a vision
transformer model for image classification.
|
http://arxiv.org/abs/2211.01777v2
|
cs.CV
|
new_dataset
| 0.994407 |
2211.01777
|
Crime Prediction using Machine Learning with a Novel Crime Dataset
|
Crime is an unlawful act that carries legal repercussions. Bangladesh has a
high crime rate due to poverty, population growth, and many other
socio-economic issues. For law enforcement agencies, understanding crime
patterns is essential for preventing future criminal activity. For this
purpose, these agencies need a structured crime database. This paper introduces a
novel crime dataset that contains temporal, geographic, weather, and
demographic data about 6574 crime incidents in Bangladesh. We manually gather
crime news articles spanning seven years from a daily newspaper archive.
We extract basic features from these raw texts. Using these basic features, we
then consult standard geo-location and weather data providers to gather this
information for the collected crime incidents. Furthermore, we collect
demographic information from Bangladesh National Census data. All this
information is combined into a standard machine learning dataset. In total,
36 features are engineered for the crime prediction
task. Five supervised machine learning classification algorithms are then
evaluated on this newly built dataset and satisfactory results are achieved. We
also conduct exploratory analysis on various aspects of the dataset. This dataset
is expected to serve as the foundation for crime incidence prediction systems
for Bangladesh and other countries. The findings of this study will help law
enforcement agencies to forecast and contain crime as well as to ensure optimal
resource allocation for crime patrol and prevention.
|
http://arxiv.org/abs/2211.01551v1
|
cs.LG
|
new_dataset
| 0.994499 |
2211.01551
|
MT-GenEval: A Counterfactual and Contextual Dataset for Evaluating Gender Accuracy in Machine Translation
|
As generic machine translation (MT) quality has improved, the need for
targeted benchmarks that explore fine-grained aspects of quality has increased.
In particular, gender accuracy in translation can have implications in terms of
output fluency, translation accuracy, and ethics. In this paper, we introduce
MT-GenEval, a benchmark for evaluating gender accuracy in translation from
English into eight widely-spoken languages. MT-GenEval complements existing
benchmarks by providing realistic, gender-balanced, counterfactual data in
eight language pairs where the gender of individuals is unambiguous in the
input segment, including multi-sentence segments requiring inter-sentential
gender agreement. Our data and code is publicly available under a CC BY SA 3.0
license.
|
http://arxiv.org/abs/2211.01355v1
|
cs.CL
|
new_dataset
| 0.994366 |
2211.01355
|
Confidence-Nets: A Step Towards better Prediction Intervals for regression Neural Networks on small datasets
|
The recent decade has seen an enormous rise in the popularity of deep
learning and neural networks. These algorithms have broken many previous
records and achieved remarkable results. Their outstanding performance has
significantly sped up the progress of AI, and so far various milestones have
been achieved earlier than expected. However, in the case of relatively small
datasets, the performance of Deep Neural Networks (DNN) may suffer from reduced
accuracy compared to other Machine Learning models. Furthermore, it is
difficult to construct prediction intervals or evaluate the uncertainty of
predictions when dealing with regression tasks. In this paper, we propose an
ensemble method that attempts to estimate the uncertainty of predictions,
increase their accuracy and provide an interval for the expected variation.
Compared with traditional DNNs that only provide a prediction, our proposed
method can output a prediction interval by combining DNNs, extreme gradient
boosting (XGBoost) and dissimilarity computation techniques. Despite its simple
design, this approach significantly increases accuracy on small datasets and
does not introduce much complexity to the architecture of the neural network.
The proposed method is tested on various datasets, and a significant
improvement in the performance of the neural network model is seen. The model's
prediction interval can include the ground truth value at an average rate of
71% and 78% across training sizes of 90% and 55%, respectively. Finally, we
highlight other aspects and applications of the approach in experimental error
estimation, and the application of transfer learning.
|
http://arxiv.org/abs/2210.17092v1
|
cs.LG
|
not_new_dataset
| 0.992179 |
2210.17092
|
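The 71% and 78% figures in the abstract above are empirical coverage rates: the fraction of test points whose ground-truth value falls inside the predicted interval. A minimal sketch of that metric (illustrative data, not the paper's results):

```python
def coverage_rate(intervals, truths):
    """Fraction of ground-truth values inside their prediction interval."""
    hits = sum(lo <= y <= hi for (lo, hi), y in zip(intervals, truths))
    return hits / len(truths)

intervals = [(0.0, 2.0), (1.0, 3.0), (4.0, 6.0), (2.0, 5.0)]
truths = [1.5, 3.5, 5.0, 4.0]
assert coverage_rate(intervals, truths) == 0.75  # 3 of 4 covered
```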
Multi-feature Dataset for Windows PE Malware Classification
|
This paper describes a multi-feature dataset for training machine learning
classifiers for detecting malicious Windows Portable Executable (PE) files. The
dataset includes four feature sets from 18,551 binary samples belonging to five
malware families including Spyware, Ransomware, Downloader, Backdoor and
Generic Malware. The feature sets include the list of DLLs and their functions,
values of different fields of PE Header and Sections. First, we explain the
data collection and creation phase, and then we explain how we labeled the
samples using VirusTotal's services. Finally, we explore the dataset to
describe how this dataset can benefit the researchers for static malware
analysis. The dataset is made public in the hope that it will help inspire
machine learning research for malware detection.
|
http://arxiv.org/abs/2210.16285v1
|
cs.CR
|
new_dataset
| 0.994437 |
2210.16285
|
Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning
|
We analyze the growth of dataset sizes used in machine learning for natural
language processing and computer vision, and extrapolate these using two
methods; using the historical growth rate and estimating the compute-optimal
dataset size for future predicted compute budgets. We investigate the growth in
data usage by estimating the total stock of unlabeled data available on the
internet over the coming decades. Our analysis indicates that the stock of
high-quality language data will be exhausted soon; likely before 2026. By
contrast, the stock of low-quality language data and image data will be
exhausted only much later; between 2030 and 2050 (for low-quality language) and
between 2030 and 2060 (for images). Our work suggests that the current trend of
ever-growing ML models that rely on enormous datasets might slow down if data
efficiency is not drastically improved or new sources of data become available.
|
http://arxiv.org/abs/2211.04325v1
|
cs.LG
|
not_new_dataset
| 0.992094 |
2211.04325
|
Towards emotion recognition for virtual environments: an evaluation of EEG features on benchmark dataset
|
One of the challenges in virtual environments is the difficulty users have in
interacting with these increasingly complex systems. Ultimately, endowing
machines with the ability to perceive users emotions will enable a more
intuitive and reliable interaction. Consequently, using the
electroencephalogram as a bio-signal sensor, the affective state of a user can
be modelled and subsequently utilised in order to achieve a system that can
recognise and react to the user's emotions. This paper investigates features
extracted from electroencephalogram signals for the purpose of affective state
modelling based on Russell's Circumplex Model. Investigations are presented
that aim to provide the foundation for future work in modelling user affect to
enhance interaction experience in virtual environments. The DEAP dataset was
used within this work, along with a Support Vector Machine and Random Forest,
which yielded reasonable classification accuracies for Valence and Arousal
using feature vectors based on statistical measurements and band power from the
α, β, θ, and γ waves and High Order Crossing of the EEG signal.
|
http://arxiv.org/abs/2210.13876v1
|
cs.HC
|
not_new_dataset
| 0.991981 |
2210.13876
|
On the Robustness of Dataset Inference
|
Machine learning (ML) models are costly to train as they can require a
significant amount of data, computational resources and technical expertise.
Thus, they constitute valuable intellectual property that needs protection from
adversaries wanting to steal them. Ownership verification techniques allow the
victims of model stealing attacks to demonstrate that a suspect model was in
fact stolen from theirs.
Although a number of ownership verification techniques based on watermarking
or fingerprinting have been proposed, most of them fall short either in terms
of security guarantees (well-equipped adversaries can evade verification) or
computational cost. A fingerprinting technique, Dataset Inference (DI), has
been shown to offer better robustness and efficiency than prior methods.
The authors of DI provided a correctness proof for linear (suspect) models.
However, in a subspace of the same setting, we prove that DI suffers from high
false positives (FPs) -- it can incorrectly identify an independent model
trained with non-overlapping data from the same distribution as stolen. We
further prove that DI also triggers FPs in realistic, non-linear suspect
models. We then confirm empirically that DI in the black-box setting leads to
FPs, with high confidence.
Second, we show that DI also suffers from false negatives (FNs) -- an
adversary can fool DI (at the cost of incurring some accuracy loss) by
regularising a stolen model's decision boundaries using adversarial training,
thereby leading to an FN. To this end, we demonstrate that black-box DI fails
to identify a model adversarially trained from a stolen dataset -- the setting
where DI is the hardest to evade.
Finally, we discuss the implications of our findings, the viability of
fingerprinting-based ownership verification in general, and suggest directions
for future work.
|
http://arxiv.org/abs/2210.13631v3
|
cs.LG
|
not_new_dataset
| 0.992148 |
2210.13631
|
PcMSP: A Dataset for Scientific Action Graphs Extraction from Polycrystalline Materials Synthesis Procedure Text
|
Scientific action graphs extraction from materials synthesis procedures is
important for reproducible research, machine automation, and material
prediction. But the lack of annotated data has hindered progress in this field.
We demonstrate an effort to annotate Polycrystalline Materials Synthesis
Procedures (PcMSP) from 305 open access scientific articles for the
construction of synthesis action graphs. This is a new dataset for material
science information extraction that simultaneously contains the synthesis
sentences extracted from the experimental paragraphs, as well as the entity
mentions and intra-sentence relations. A two-step human annotation and
inter-annotator agreement study guarantee the high quality of the PcMSP corpus.
We introduce four natural language processing tasks: sentence classification,
named entity recognition, relation classification, and joint extraction of
entities and relations. Comprehensive experiments validate the effectiveness of
several state-of-the-art models for these challenges while leaving large space
for improvement. We also perform the error analysis and point out some unique
challenges that require further investigation. We will release our annotation
scheme, the corpus, and codes to the research community to alleviate the
scarcity of labeled data in this domain.
|
http://arxiv.org/abs/2210.12401v1
|
cs.CL
|
new_dataset
| 0.994462 |
2210.12401
|
NEREL-BIO: A Dataset of Biomedical Abstracts Annotated with Nested Named Entities
|
This paper describes NEREL-BIO -- an annotation scheme and corpus of PubMed
abstracts in Russian and smaller number of abstracts in English. NEREL-BIO
extends the general domain dataset NEREL by introducing domain-specific entity
types. NEREL-BIO annotation scheme covers both general and biomedical domains
making it suitable for domain transfer experiments. NEREL-BIO provides
annotation for nested named entities as an extension of the scheme employed for
NEREL. Nested named entities may cross entity boundaries to connect to shorter
entities nested within longer entities, making them harder to detect.
NEREL-BIO contains annotations for 700+ Russian and 100+ English abstracts.
All English PubMed annotations have corresponding Russian counterparts. Thus,
NEREL-BIO offers the following specific features: annotation of nested named
entities, and suitability as a benchmark for cross-domain (NEREL -> NEREL-BIO)
and cross-language (English -> Russian) transfer. We experiment with both
transformer-based sequence models and machine reading comprehension (MRC)
models and report their results.
The dataset is freely available at https://github.com/nerel-ds/NEREL-BIO.
|
http://arxiv.org/abs/2210.11913v1
|
cs.CL
|
new_dataset
| 0.994522 |
2210.11913
|
Performance of different machine learning methods on activity recognition and pose estimation datasets
|
With advancements in computer vision taking place day by day, recently a lot
of light is being shed on activity recognition. With the range for real-world
applications utilizing this field of study increasing across a multitude of
industries such as security and healthcare, it becomes crucial for businesses
to distinguish which machine learning methods perform better than others in the
area. This paper strives to aid in this predicament: building upon previous
related work, it employs both classical and ensemble approaches on rich pose
estimation (OpenPose) and HAR datasets. Using appropriate metrics to evaluate
the performance of each model, the results show that, overall, random forest
yields the highest accuracy in classifying ADLs. Nearly all the models perform
well across both datasets, except for logistic regression and AdaBoost, which
perform poorly on the HAR dataset. With the limitations of this paper discussed
at the end, the scope for further research is vast and can use this paper as a
base with the aim of producing better results.
|
http://arxiv.org/abs/2210.10247v1
|
cs.CV
|
not_new_dataset
| 0.992207 |
2210.10247
|
Potrika: Raw and Balanced Newspaper Datasets in the Bangla Language with Eight Topics and Five Attributes
|
Knowledge is central to human and scientific developments. Natural Language
Processing (NLP) allows automated analysis and creation of knowledge. Data is a
crucial NLP and machine learning ingredient. The scarcity of open datasets is a
well-known problem in machine and deep learning research. This is very much the
case for textual NLP datasets in English and other major world languages. For
the Bangla language, the situation is even more challenging and the number of
large datasets for NLP research is practically nil. We hereby present Potrika,
a large single-label Bangla news article textual dataset curated for NLP
research from six popular online news portals in Bangladesh (Jugantor,
Jaijaidin, Ittefaq, Kaler Kontho, Inqilab, and Somoyer Alo) for the period
2014-2020. The articles are classified into eight distinct categories
(National, Sports, International, Entertainment, Economy, Education, Politics,
and Science & Technology) providing five attributes (News Article, Category,
Headline, Publication Date, and Newspaper Source). The raw dataset contains
185.51 million words and 12.57 million sentences contained in 664,880 news
articles. Moreover, using NLP augmentation techniques, we create from the raw
(unbalanced) dataset another (balanced) dataset comprising 320,000 news
articles with 40,000 articles in each of the eight news categories. Potrika
contains both datasets (raw and balanced) to suit a wide range of NLP
research. By far, to the best of our knowledge, Potrika is the largest and the
most extensive dataset for news classification.
|
http://arxiv.org/abs/2210.09389v1
|
cs.CL
|
new_dataset
| 0.99451 |
2210.09389
|
Space, Time, and Interaction: A Taxonomy of Corner Cases in Trajectory Datasets for Automated Driving
|
Trajectory data analysis is an essential component for highly automated
driving. Complex models developed with these data predict other road users'
movement and behavior patterns. Based on these predictions - and additional
contextual information such as the course of the road, (traffic) rules, and
interaction with other road users - the highly automated vehicle (HAV) must be
able to reliably and safely perform the task assigned to it, e.g., moving from
point A to B. Ideally, the HAV moves safely through its environment, just as we
would expect a human driver to do. However, if unusual trajectories occur,
so-called trajectory corner cases, a human driver can usually cope well, but an
HAV can quickly get into trouble. In the definition of trajectory corner cases,
which we provide in this work, we will consider the relevance of unusual
trajectories with respect to the task at hand. Based on this, we will also
present a taxonomy of different trajectory corner cases. The categorization of
corner cases into the taxonomy will be shown with examples and is done by cause
and required data sources. To illustrate the complexity between the machine
learning (ML) model and the corner case cause, we present a general processing
chain underlying the taxonomy.
|
http://arxiv.org/abs/2210.08885v1
|
cs.RO
|
not_new_dataset
| 0.992026 |
2210.08885
|
Massive MIMO Channel Prediction Via Meta-Learning and Deep Denoising: Is a Small Dataset Enough?
|
Accurate channel knowledge is critical in massive multiple-input
multiple-output (MIMO), which motivates the use of channel prediction. Machine
learning techniques for channel prediction hold much promise, but current
schemes are limited in their ability to adapt to changes in the environment
because they require large training overheads. To accurately predict wireless
channels for new environments with reduced training overhead, we propose a fast
adaptive channel prediction technique based on a meta-learning algorithm for
massive MIMO communications. We exploit the model-agnostic meta-learning (MAML)
algorithm to achieve quick adaptation with a small amount of labeled data.
Also, to improve the prediction accuracy, we adopt the denoising process for
the training data by using deep image prior (DIP). Numerical results show that
the proposed MAML-based channel predictor can improve the prediction accuracy
with only a few fine-tuning samples. The DIP-based denoising process gives an
additional gain in channel prediction, especially in low signal-to-noise ratio
regimes.
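As a toy illustration only (the abstract's predictor operates on MIMO channels, not scalars), here is a first-order MAML update on a single parameter with an invented quadratic per-task loss; all names are assumptions of the sketch.

```python
import random

def fomaml_scalar(task_targets, meta_steps=200, inner_lr=0.1, meta_lr=0.05, seed=0):
    """First-order MAML on a scalar parameter.

    Each task t has loss L_t(w) = (w - t)^2; one inner gradient step adapts
    w toward t, and the meta-update moves the initialization so that a single
    adaptation step works well on average across tasks.
    """
    rng = random.Random(seed)
    w = 0.0
    for _ in range(meta_steps):
        t = rng.choice(task_targets)
        w_adapted = w - inner_lr * 2.0 * (w - t)  # inner adaptation step
        w -= meta_lr * 2.0 * (w_adapted - t)      # first-order meta-update
    return w

# Tasks clustered around 3.0: the learned initialization should land nearby,
# so a few fine-tuning samples suffice for any new, similar task.
init = fomaml_scalar([2.5, 3.0, 3.5])
print(round(init, 2))  # typically close to 3.0, the centre of the task distribution
```

The same two-loop structure underlies fast adaptation to new propagation environments: the meta-learned initialization absorbs what the tasks share, leaving only a small residual to be learned per environment.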
|
http://arxiv.org/abs/2210.08770v1
|
cs.IT
|
not_new_dataset
| 0.991926 |
2210.08770
|
A Large-Scale Annotated Multivariate Time Series Aviation Maintenance Dataset from the NGAFID
|
This paper presents the largest publicly available, non-simulated, fleet-wide
aircraft flight recording and maintenance log data for use in predicting part
failure and maintenance need. We present 31,177 hours of flight data across
28,935 flights, which occur relative to 2,111 unplanned maintenance events
clustered into 36 types of maintenance issues. Flights are annotated as before
or after maintenance, with some flights occurring on the day of maintenance.
Collecting data to evaluate predictive maintenance systems is challenging
because it is difficult, dangerous, and unethical to generate data from
compromised aircraft. To overcome this, we use the National General Aviation
Flight Information Database (NGAFID), which contains flights recorded during
regular operation of aircraft, and maintenance logs to construct a part failure
dataset. We use a novel framing of Remaining Useful Life (RUL) prediction and
consider the probability that the RUL of a part is greater than 2 days. Unlike
previous datasets generated with simulations or in laboratory settings, the
NGAFID Aviation Maintenance Dataset contains real flight records and
maintenance logs from different seasons, weather conditions, pilots, and flight
patterns. Additionally, we provide Python code to easily download the dataset
and a Colab environment to reproduce our benchmarks on three different models.
Our dataset presents a difficult challenge for machine learning researchers and
a valuable opportunity to test and develop prognostic health management methods.
|
http://arxiv.org/abs/2210.07317v1
|
cs.LG
|
new_dataset
| 0.994519 |
2210.07317
|
A Systematic Review of Machine Learning Techniques for Cattle Identification: Datasets, Methods and Future Directions
|
Increased biosecurity and food safety requirements may increase demand for
efficient traceability and identification systems of livestock in the supply
chain. The advanced technologies of machine learning and computer vision have
been applied in precision livestock management, including critical disease
detection, vaccination, production management, tracking, and health monitoring.
This paper offers a systematic literature review (SLR) of vision-based cattle
identification. More specifically, this SLR is to identify and analyse the
research related to cattle identification using Machine Learning (ML) and Deep
Learning (DL). For the two main applications of cattle detection and cattle
identification, all the ML based papers only solve cattle identification
problems. However, both detection and identification problems were studied in
the DL based papers. Based on our survey report, the most used ML models for
cattle identification were support vector machine (SVM), k-nearest neighbour
(KNN), and artificial neural network (ANN). Convolutional neural network (CNN),
residual network (ResNet), Inception, You Only Look Once (YOLO), and Faster
R-CNN were popular DL models in the selected papers. Among these papers, the
most distinguishing features were the muzzle prints and coat patterns of
cattle. Local binary pattern (LBP), speeded up robust features (SURF),
scale-invariant feature transform (SIFT), and Inception or CNN were identified
as the most used feature extraction methods.
|
http://arxiv.org/abs/2210.09215v1
|
cs.CV
|
not_new_dataset
| 0.992128 |
2210.09215
|
Generative Adversarial Nets: Can we generate a new dataset based on only one training set?
|
A generative adversarial network (GAN) is a class of machine learning
frameworks designed by Goodfellow et al. in 2014. In the GAN framework, the
generative model is pitted against an adversary: a discriminative model that
learns to determine whether a sample is from the model distribution or the data
distribution. GAN generates new samples from the same distribution as the
training set. In this work, we aim to generate a new dataset that has a
different distribution from the training set. In addition, the Jensen-Shannon
divergence between the distributions of the generative and training datasets
can be controlled by some target $\delta \in [0, 1]$. Our work is motivated by
applications in generating new kinds of rice that have similar characteristics
as good rice.
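The controllability claim hinges on the Jensen-Shannon divergence; a small sketch of the divergence itself for discrete distributions (base-2 logarithms put the value in [0, 1], matching the stated target range for δ):

```python
from math import log2

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    With base-2 logs the value lies in [0, 1]: 0 for identical
    distributions, 1 for distributions with disjoint supports.
    """
    m = [(pi + qi) / 2.0 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence; zero-probability terms contribute 0.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))  # 0.0 (identical)
print(js_divergence([1.0, 0.0], [0.0, 1.0]))            # 1.0 (disjoint)
```

Steering the generator toward a target δ between these extremes is what lets the generated dataset differ from the training set by a controlled amount.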
|
http://arxiv.org/abs/2210.06005v1
|
cs.LG
|
not_new_dataset
| 0.991987 |
2210.06005
|
Computer Vision based inspection on post-earthquake with UAV synthetic dataset
|
The area affected by the earthquake is vast and often difficult to entirely
cover, and the earthquake itself is a sudden event that causes multiple defects
simultaneously, that cannot be effectively traced using traditional, manual
methods. This article presents an innovative approach to the problem of
detecting damage after sudden events by using an interconnected set of deep
machine learning models organized in a single pipeline and allowing for easy
modification and swapping models seamlessly. Models in the pipeline were
trained with a synthetic dataset and were adapted to be further evaluated and
used with unmanned aerial vehicles (UAVs) in real-world conditions. Thanks to
the methods presented in the article, it is possible to obtain high accuracy in
detecting buildings defects, segmenting constructions into their components and
estimating their technical condition based on a single drone flight.
|
http://arxiv.org/abs/2210.05282v1
|
cs.CV
|
not_new_dataset
| 0.992165 |
2210.05282
|
Combining datasets to increase the number of samples and improve model fitting
|
For many use cases, combining information from different datasets can be of
interest to improve a machine learning model's performance, especially when the
number of samples from at least one of the datasets is small. However, a
potential challenge in such cases is that the features from these datasets are
not identical, even though there are some commonly shared features among the
datasets. To tackle this challenge, we propose a novel framework called Combine
datasets based on Imputation (ComImp). In addition, we propose a variant of
ComImp that uses Principal Component Analysis (PCA), called PCA-ComImp, in
order to reduce dimensionality before combining datasets. This is useful when the datasets
have a large number of features that are not shared between them. Furthermore,
our framework can also be utilized for data preprocessing by imputing missing
data, i.e., filling in the missing entries while combining different datasets.
To illustrate the power of the proposed methods and their potential usages, we
conduct experiments for various tasks: regression, classification, and for
different data types: tabular data, time series data, when the datasets to be
combined have missing data. We also investigate how the devised methods can be
used with transfer learning to provide even further model training improvement.
Our results indicate that the proposed methods are somewhat similar to transfer
learning in that the merge can significantly improve the accuracy of a
prediction model on smaller datasets. In addition, the methods can boost
performance by a significant margin when combining small datasets together and
can provide extra improvement when being used with transfer learning.
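A toy sketch of the combine-then-impute idea (the function name and the plain mean imputation are illustrative; ComImp itself can plug in other imputers): stack two datasets on the union of their features, then fill the entries each dataset is missing.

```python
def combine_with_imputation(ds_a, ds_b):
    """Stack two datasets (lists of feature dicts, assumed homogeneous per
    dataset) on the union of their features, then mean-impute missing entries."""
    features = sorted(set(ds_a[0]) | set(ds_b[0]))
    # Rows from either dataset get None for features they do not carry.
    rows = [{f: r.get(f) for f in features} for r in ds_a + ds_b]
    for f in features:
        vals = [r[f] for r in rows if r[f] is not None]
        mean = sum(vals) / len(vals)
        for r in rows:
            if r[f] is None:
                r[f] = mean
    return rows

# Dataset A has features x, y; dataset B has y, z; only y is shared.
a = [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}]
b = [{"y": 6.0, "z": 10.0}]
combined = combine_with_imputation(a, b)
print(combined[2]["x"])  # 2.0: B's row gets the mean of A's x values
```

The combined table now has one consistent feature space, so a single model can be trained on all samples.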
|
http://arxiv.org/abs/2210.05165v2
|
stat.ML
|
not_new_dataset
| 0.992284 |
2210.05165
|
FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in Realistic Healthcare Settings
|
Federated Learning (FL) is a novel approach enabling several clients holding
sensitive data to collaboratively train machine learning models, without
centralizing data. The cross-silo FL setting corresponds to the case of few
($2$--$50$) reliable clients, each holding medium to large datasets, and is
typically found in applications such as healthcare, finance, or industry. While
previous works have proposed representative datasets for cross-device FL, few
realistic healthcare cross-silo FL datasets exist, thereby slowing algorithmic
research in this critical application. In this work, we propose a novel
cross-silo dataset suite focused on healthcare, FLamby (Federated Learning
AMple Benchmark of Your cross-silo strategies), to bridge the gap between
theory and practice of cross-silo FL. FLamby encompasses 7 healthcare datasets
with natural splits, covering multiple tasks, modalities, and data volumes,
each accompanied with baseline training code. As an illustration, we
additionally benchmark standard FL algorithms on all datasets. Our flexible and
modular suite allows researchers to easily download datasets, reproduce results
and re-use the different components for their research. FLamby is available
at www.github.com/owkin/flamby.
|
http://arxiv.org/abs/2210.04620v3
|
cs.LG
|
new_dataset
| 0.99404 |
2210.04620
|
An Instance Selection Algorithm for Big Data in High imbalanced datasets based on LSH
|
Training of Machine Learning (ML) models in real contexts often deals with
big data sets and high-class imbalance samples where the class of interest is
unrepresented (minority class). Practical solutions using classical ML models
address the problem of large data sets using parallel/distributed
implementations of training algorithms, approximate model-based solutions, or
applying instance selection (IS) algorithms to eliminate redundant information.
However, the combined problem of big and high imbalanced datasets has been less
addressed. This work proposes three new methods for IS to be able to deal with
large and imbalanced data sets. The proposed methods use Locality Sensitive
Hashing (LSH) as a base clustering technique, and then three different sampling
methods are applied on top of the clusters (or buckets) generated by LSH. The
algorithms were developed in the Apache Spark framework, guaranteeing their
scalability. The experiments carried out on three different datasets suggest
that the proposed IS methods can improve the performance of a base ML model
between 5% and 19% in terms of the geometric mean.
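A rough sketch of the selection pipeline, under the assumption of random-hyperplane LSH and uniform per-bucket sampling (the paper's three sampling schemes and Spark implementation are not reproduced here; all names are illustrative):

```python
import random

def lsh_bucket(point, hyperplanes):
    """Bucket signature: sign of the dot product with each random hyperplane."""
    return tuple(int(sum(p * h for p, h in zip(point, hp)) >= 0)
                 for hp in hyperplanes)

def lsh_instance_selection(points, n_planes=4, keep_per_bucket=2, seed=0):
    """Group instances into LSH buckets, then keep a small sample per bucket,
    so redundant near-duplicates are discarded while coverage is preserved."""
    rng = random.Random(seed)
    dim = len(points[0])
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_planes)]
    buckets = {}
    for p in points:
        buckets.setdefault(lsh_bucket(p, planes), []).append(p)
    selected = []
    for members in buckets.values():
        selected.extend(rng.sample(members, min(keep_per_bucket, len(members))))
    return selected

rng = random.Random(1)
pts = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(100)]
kept = lsh_instance_selection(pts)
print(len(kept), "of", len(pts), "instances kept")
```

Because similar points hash to the same bucket, sampling per bucket shrinks the majority class far more than the sparse minority class, which is the property the imbalanced setting relies on.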
|
http://arxiv.org/abs/2210.04310v1
|
cs.LG
|
not_new_dataset
| 0.992219 |
2210.04310
|
Synthetic Dataset Generation for Privacy-Preserving Machine Learning
|
Machine Learning (ML) has achieved enormous success in solving a variety of
problems in computer vision, speech recognition, object detection, to name a
few. The principal reason for this success is the availability of huge datasets
for training deep neural networks (DNNs). However, datasets can not be publicly
released if they contain sensitive information such as medical or financial
records. In such cases, data privacy becomes a major concern. Encryption
methods offer a possible solution to this issue, however their deployment on ML
applications is non-trivial, as they seriously impact the classification
accuracy and result in substantial computational overhead. Alternatively,
obfuscation techniques can be used, but maintaining a good balance between
visual privacy and accuracy is challenging. In this work, we propose a method
to generate secure synthetic datasets from the original private datasets. In
our method, given a network with Batch Normalization (BN) layers pre-trained on
the original dataset, we first record the layer-wise BN statistics. Next, using
the BN statistics and the pre-trained model, we generate the synthetic dataset
by optimizing random noises such that the synthetic data match the layer-wise
statistical distribution of the original model. We evaluate our method on image
classification dataset (CIFAR10) and show that our synthetic data can be used
for training networks from scratch, producing reasonable classification
performance.
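A 1-D toy of the optimization step described above (the actual method matches layer-wise BN statistics of images through a pre-trained network; all names here are illustrative): optimize random noise so its sample mean and variance approach recorded targets.

```python
import random

def synthesize_to_stats(target_mean, target_var, n=256, steps=300, lr=0.05, seed=0):
    """Gradient-descend random noise toward recorded first- and second-order
    statistics -- a scalar stand-in for matching stored Batch-Norm statistics."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(steps):
        mean = sum(x) / n
        var = sum((xi - mean) ** 2 for xi in x) / n
        for i in range(n):
            # Per-sample gradient of (mean - tm)^2 + (var - tv)^2, scaled by n.
            g = (2.0 * (mean - target_mean)
                 + 4.0 * (var - target_var) * (x[i] - mean))
            x[i] -= lr * g
    return x

x = synthesize_to_stats(3.0, 4.0)
m = sum(x) / len(x)
v = sum((xi - m) ** 2 for xi in x) / len(x)
print(round(m, 2), round(v, 2))  # ≈ 3.0 4.0, the recorded targets
```

In the paper's setting the same principle is applied per layer of a pre-trained network, so the synthetic images carry the statistical signature of the private data without exposing individual records.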
|
http://arxiv.org/abs/2210.03205v5
|
cs.CR
|
not_new_dataset
| 0.992201 |
2210.03205
|
QUAK: A Synthetic Quality Estimation Dataset for Korean-English Neural Machine Translation
|
With the recent advance in neural machine translation demonstrating its
importance, research on quality estimation (QE) has been steadily progressing.
QE aims to automatically predict the quality of machine translation (MT) output
without reference sentences. Despite its high utility in the real world, there
remain several limitations concerning manual QE data creation: inevitably
incurred non-trivial costs due to the need for translation experts, and issues
with data scaling and language expansion. To tackle these limitations, we
present QUAK, a Korean-English synthetic QE dataset generated in a fully
automatic manner. This consists of three sub-QUAK datasets QUAK-M, QUAK-P, and
QUAK-H, produced through three strategies that are relatively free from
language constraints. Since each strategy requires no human effort, which
facilitates scalability, we scale our data up to 1.58M for QUAK-P, H and 6.58M
for QUAK-M. As an experiment, we quantitatively analyze word-level QE results
in various ways while performing statistical analysis. Moreover, we show that
datasets scaled in an efficient way also contribute to performance improvements
by observing meaningful performance gains in QUAK-M, P when adding data up to
1.58M.
|
http://arxiv.org/abs/2209.15285v2
|
cs.CL
|
new_dataset
| 0.994447 |
2209.15285
|
No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy"
|
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic
results when a system implementing a ``privacy-preserving'' method is attacked.
A recent work selected for an Outstanding Paper Award at ICML 2022 (Dong et
al., 2022) claims that dataset condensation (DC) significantly improves data
privacy when training machine learning models. This claim is supported by
theoretical analysis of a specific dataset condensation technique and an
empirical evaluation of resistance to some existing membership inference
attacks.
In this note we examine the claims in the work of Dong et al. (2022) and
describe major flaws in the empirical evaluation of the method and its
theoretical analysis. These flaws imply that their work does not provide
statistically significant evidence that DC improves the privacy of training ML
models over a naive baseline. Moreover, previously published results show that
DP-SGD, the standard approach to privacy preserving ML, simultaneously gives
better accuracy and achieves a (provably) lower membership attack success rate.
|
http://arxiv.org/abs/2209.14987v1
|
cs.LG
|
not_new_dataset
| 0.991808 |
2209.14987
|
TruEyes: Utilizing Microtasks in Mobile Apps for Crowdsourced Labeling of Machine Learning Datasets
|
The growing use of supervised machine learning in research and industry has
increased the need for labeled datasets. Crowdsourcing has emerged as a popular
method to create data labels. However, working on large batches of tasks leads
to worker fatigue, negatively impacting labeling quality. To address this, we
present TruEyes, a collaborative crowdsourcing system, enabling the
distribution of micro-tasks to mobile app users. TruEyes allows machine
learning practitioners to publish labeling tasks, mobile app developers to
integrate task ads for monetization, and users to label data instead of
watching advertisements. To evaluate the system, we conducted an experiment
with N=296 participants. Our results show that the quality of the labeled data
is comparable to traditional crowdsourcing approaches and most users prefer
task ads over traditional ads. We discuss extensions to the system and address
how mobile advertisement space can be used as a productive resource in the
future.
|
http://arxiv.org/abs/2209.14708v1
|
cs.HC
|
not_new_dataset
| 0.992039 |
2209.14708
|
METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets
|
The COVID-19 pandemic continues to bring up various topics discussed or
debated on social media. In order to explore the impact of pandemics on
people's lives, it is crucial to understand the public's concerns and attitudes
towards pandemic-related entities (e.g., drugs, vaccines) on social media.
However, models trained on existing named entity recognition (NER) or targeted
sentiment analysis (TSA) datasets have limited ability to understand
COVID-19-related social media texts because these datasets are not designed or
annotated from a medical perspective. This paper releases METS-CoV, a dataset
containing medical entities and targeted sentiments from COVID-19-related
tweets. METS-CoV contains 10,000 tweets with 7 types of entities, including 4
medical entity types (Disease, Drug, Symptom, and Vaccine) and 3 general entity
types (Person, Location, and Organization). To further investigate tweet users'
attitudes toward specific entities, 4 types of entities (Person, Organization,
Drug, and Vaccine) are selected and annotated with user sentiments, resulting
in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets). To the
best of our knowledge, METS-CoV is the first dataset to collect medical
entities and corresponding sentiments of COVID-19-related tweets. We benchmark
the performance of classical machine learning models and state-of-the-art deep
learning models on NER and TSA tasks with extensive experiments. Results show
that the dataset has vast room for improvement for both NER and TSA tasks.
METS-CoV is an important resource for developing better medical social media
tools and facilitating computational social science research, especially in
epidemiology. Our data, annotation guidelines, benchmark models, and source
code are publicly available (https://github.com/YLab-Open/METS-CoV) to ensure
reproducibility.
|
http://arxiv.org/abs/2209.13773v1
|
cs.CL
|
new_dataset
| 0.994551 |
2209.13773
|
Critical Evaluation of LOCO dataset with Machine Learning
|
Purpose: Object detection is rapidly evolving through machine learning
technology in automation systems. Well prepared data is necessary to train the
algorithms. Accordingly, the objective of this paper is to describe a
re-evaluation of the so-called Logistics Objects in Context (LOCO) dataset,
which is the first dataset for object detection in the field of intralogistics.
Methodology: We use an experimental research approach with three steps to
evaluate the LOCO dataset. Firstly, the images on GitHub were analyzed to
understand the dataset better. Secondly, Google Drive Cloud was used for
training purposes to revisit the algorithmic implementation and training.
Lastly, the LOCO dataset was examined to determine whether it is possible to
achieve the same training results as in the original publications.
Findings: The mean average precision, a common benchmark in object detection,
achieved in our study was 64.54%, and shows a significant increase from the
initial study by the LOCO authors, which achieved 41%. However, potential for
improvement remains, specifically for the forklift and pallet truck object
types.
Originality: This paper presents the first critical replication study of the
LOCO dataset for object detection in intralogistics. It shows that the training
with better hyperparameters based on LOCO can even achieve a higher accuracy
than presented in the original publication. However, there is also further room
for improving the LOCO dataset.
|
http://arxiv.org/abs/2209.13499v1
|
cs.CV
|
not_new_dataset
| 0.992138 |
2209.13499
|
WikiDes: A Wikipedia-Based Dataset for Generating Short Descriptions from Paragraphs
|
As free online encyclopedias with massive volumes of content, Wikipedia and
Wikidata are key to many Natural Language Processing (NLP) tasks, such as
information retrieval, knowledge base building, machine translation, text
classification, and text summarization. In this paper, we introduce WikiDes, a
novel dataset to generate short descriptions of Wikipedia articles for the
problem of text summarization. The dataset consists of over 80k English samples
on 6987 topics. We set up a two-phase summarization method - description
generation (Phase I) and candidate ranking (Phase II) - as a strong approach
that relies on transfer and contrastive learning. For description generation,
T5 and BART show their superiority compared to other small-scale pre-trained
models. By applying contrastive learning with the diverse input from beam
search, the metric fusion-based ranking models outperform the direct
description generation models significantly, by up to 22 ROUGE points, in both
the topic-exclusive and topic-independent splits. Furthermore, in human
evaluation against the gold descriptions, the Phase II descriptions are
preferred in over 45.33% of cases, compared to 23.66% for Phase I. Regarding
sentiment analysis, the generated descriptions cannot effectively capture all
sentiment polarities from the paragraphs, a task the gold descriptions perform
better on. The automatic generation of new descriptions reduces the human
efforts in creating them and enriches Wikidata-based knowledge graphs. Our
paper shows a practical impact on Wikipedia and Wikidata since there are
thousands of missing descriptions. Finally, we expect WikiDes to be a useful
dataset for related works in capturing salient information from short
paragraphs. The curated dataset is publicly available at:
https://github.com/declare-lab/WikiDes.
|
http://arxiv.org/abs/2209.13101v1
|
cs.CL
|
new_dataset
| 0.994466 |
2209.13101
|
OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics
|
Clinical diagnosis of the eye is performed over multifarious data modalities
including scalar clinical labels, vectorized biomarkers, two-dimensional fundus
images, and three-dimensional Optical Coherence Tomography (OCT) scans.
Clinical practitioners use all available data modalities for diagnosing and
treating eye diseases like Diabetic Retinopathy (DR) or Diabetic Macular Edema
(DME). Enabling usage of machine learning algorithms within the ophthalmic
medical domain requires research into the relationships and interactions
between all relevant data over a treatment period. Existing datasets are
limited in that they neither provide data nor consider the explicit
relationship modeling between the data modalities. In this paper, we introduce
the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset
that addresses the above limitation. This is the first OCT and near-IR fundus
dataset that includes clinical labels, biomarker labels, disease labels, and
time-series patient treatment information from associated clinical trials. The
dataset consists of 1268 near-IR fundus images each with at least 49 OCT scans,
and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR
or DME. In total, there are 96 eyes' data averaged over a period of at least
two years with each eye treated for an average of 66 weeks and 7 injections. We
benchmark the utility of OLIVES dataset for ophthalmic data as well as provide
benchmarks and concrete research directions for core and emerging machine
learning paradigms within medical image analysis.
|
http://arxiv.org/abs/2209.11195v1
|
eess.IV
|
new_dataset
| 0.99454 |
2209.11195
|
SPICE, A Dataset of Drug-like Molecules and Peptides for Training Machine Learning Potentials
|
Machine learning potentials are an important tool for molecular simulation,
but their development is held back by a shortage of high-quality datasets to
train them on. We describe the SPICE dataset, a new quantum chemistry dataset
for training potentials relevant to simulating drug-like small molecules
interacting with proteins. It contains over 1.1 million conformations for a
diverse set of small molecules, dimers, dipeptides, and solvated amino acids.
It includes 15 elements, charged and uncharged molecules, and a wide range of
covalent and non-covalent interactions. It provides both forces and energies
calculated at the {\omega}B97M-D3(BJ)/def2-TZVPPD level of theory, along with
other useful quantities such as multipole moments and bond orders. We train a
set of machine learning potentials on it and demonstrate that they can achieve
chemical accuracy across a broad region of chemical space. It can serve as a
valuable resource for the creation of transferable, ready-to-use potential
functions for use in molecular simulations.
|
http://arxiv.org/abs/2209.10702v2
|
physics.chem-ph
|
new_dataset
| 0.994538 |
2209.10702
|
ESTA: An Esports Trajectory and Action Dataset
|
Sports, due to their global reach and impact-rich prediction tasks, are an
exciting domain to deploy machine learning models. However, data from
conventional sports is often unsuitable for research use due to its size,
veracity, and accessibility. To address these issues, we turn to esports, a
growing domain that encompasses video games played in a capacity similar to
conventional sports. Since esports data is acquired through server logs rather
than peripheral sensors, esports provides a unique opportunity to obtain a
massive collection of clean and detailed spatiotemporal data, similar to those
collected in conventional sports. To parse esports data, we develop awpy, an
open-source esports game log parsing library that can extract player
trajectories and actions from game logs. Using awpy, we parse 8.6m actions,
7.9m game frames, and 417k trajectories from 1,558 game logs from professional
Counter-Strike tournaments to create the Esports Trajectory and Actions (ESTA)
dataset. ESTA is one of the largest and most granular publicly available sports
data sets to date. We use ESTA to develop benchmarks for win prediction using
player-specific information. The ESTA data is available at
https://github.com/pnxenopoulos/esta and awpy is made public through PyPI.
|
http://arxiv.org/abs/2209.09861v1
|
cs.LG
|
new_dataset
| 0.994394 |
2209.09861
|
GLARE: A Dataset for Traffic Sign Detection in Sun Glare
|
Real-time machine learning detection algorithms are often found within
autonomous vehicle technology and depend on quality datasets. It is essential
that these algorithms work correctly in everyday conditions as well as under
strong sun glare. Reports indicate glare is one of the two most prominent
environment-related reasons for crashes. However, existing datasets, such as
LISA and the German Traffic Sign Recognition Benchmark, do not reflect the
existence of sun glare at all. This paper presents the GLARE traffic sign
dataset: a collection of images with U.S.-based traffic signs under heavy visual
interference by sunlight. GLARE contains 2,157 images of traffic signs with sun
glare, pulled from 33 videos of dashcam footage of roads in the United States.
It provides an essential enrichment to the widely used LISA Traffic Sign
dataset. Our experimental study shows that although several state-of-the-art
baseline methods demonstrate superior performance when trained and tested
against traffic sign datasets without sun glare, they greatly suffer when
tested against GLARE (e.g., ranging from 9% to 21% mean mAP, which is
significantly lower than the performances on LISA dataset). We also notice that
current architectures have better detection accuracy (e.g., on average 42% mean
mAP gain for mainstream algorithms) when trained on images of traffic signs in
sun glare.
|
http://arxiv.org/abs/2209.08716v1
|
cs.CV
|
new_dataset
| 0.994499 |
2209.08716
|
RDD2022: A multi-national image dataset for automatic Road Damage Detection
|
The data article describes the Road Damage Dataset, RDD2022, which comprises
47,420 road images from six countries, Japan, India, the Czech Republic,
Norway, the United States, and China. The images have been annotated with more
than 55,000 instances of road damage. Four types of road damage, namely
longitudinal cracks, transverse cracks, alligator cracks, and potholes, are
captured in the dataset. The annotated dataset is envisioned for developing
deep learning-based methods to detect and classify road damage automatically.
The dataset has been released as a part of the Crowd sensing-based Road Damage
Detection Challenge (CRDDC2022). The challenge CRDDC2022 invites researchers
from across the globe to propose solutions for automatic road damage detection
in multiple countries. The municipalities and road agencies may utilize the
RDD2022 dataset, and the models trained using RDD2022 for low-cost automatic
monitoring of road conditions. Further, computer vision and machine learning
researchers may use the dataset to benchmark the performance of different
algorithms for other image-based applications of the same type (classification,
object detection, etc.).
|
http://arxiv.org/abs/2209.08538v1
|
cs.CV
|
new_dataset
| 0.994468 |
2209.08538
|
HAPI: A Large-scale Longitudinal Dataset of Commercial ML API Predictions
|
Commercial ML APIs offered by providers such as Google, Amazon and Microsoft
have dramatically simplified ML adoption in many applications. Numerous
companies and academics pay to use ML APIs for tasks such as object detection,
OCR and sentiment analysis. Different ML APIs tackling the same task can have
very heterogeneous performance. Moreover, the ML models underlying the APIs
also evolve over time. As ML APIs rapidly become a valuable marketplace and a
widespread way to consume machine learning, it is critical to systematically
study and compare different APIs with each other and to characterize how APIs
change over time. However, this topic is currently underexplored due to the
lack of data. In this paper, we present HAPI (History of APIs), a longitudinal
dataset of 1,761,417 instances of commercial ML API applications (involving
APIs from Amazon, Google, IBM, Microsoft and other providers) across diverse
tasks including image tagging, speech recognition and text mining from 2020 to
2022. Each instance consists of a query input for an API (e.g., an image or
text) along with the API's output prediction/annotation and confidence scores.
HAPI is the first large-scale dataset of ML API usages and is a unique resource
for studying ML-as-a-service (MLaaS). As examples of the types of analyses that
HAPI enables, we show that ML APIs' performance can change substantially over
time--several APIs' accuracies dropped on specific benchmark datasets. Even
when the API's aggregate performance stays steady, its error modes can shift
across different subtypes of data between 2020 and 2022. Such changes can
substantially impact the entire analytics pipelines that use some ML API as a
component. We further use HAPI to study commercial APIs' performance
disparities across demographic subgroups over time. HAPI can stimulate more
research in the growing field of MLaaS.
|
http://arxiv.org/abs/2209.08443v1
|
cs.SE
|
new_dataset
| 0.994496 |
2209.08443
|
Dataset Inference for Self-Supervised Models
|
Self-supervised models are increasingly prevalent in machine learning (ML)
since they reduce the need for expensively labeled data. Because of their
versatility in downstream applications, they are increasingly used as a service
exposed via public APIs. At the same time, these encoder models are
particularly vulnerable to model stealing attacks due to the high
dimensionality of vector representations they output. Yet, encoders remain
undefended: existing mitigation strategies for stealing attacks focus on
supervised learning. We introduce a new dataset inference defense, which uses
the private training set of the victim encoder model to attribute its ownership
in the event of stealing. The intuition is that the log-likelihood of an
encoder's output representations is higher on the victim's training data than
on test data if it is stolen from the victim, but not if it is independently
trained. We compute this log-likelihood using density estimation models. As
part of our evaluation, we also propose measuring the fidelity of stolen
encoders and quantifying the effectiveness of the theft detection without
involving downstream tasks; instead, we leverage mutual information and
distance measurements. Our extensive empirical results in the vision domain
demonstrate that dataset inference is a promising direction for defending
self-supervised models against model stealing.
|
http://arxiv.org/abs/2209.09024v3
|
cs.LG
|
not_new_dataset
| 0.991948 |
2209.09024
|
Quantum Transfer Learning for Real-World, Small, and High-Dimensional Datasets
|
Quantum machine learning (QML) networks promise to have some computational
(or quantum) advantage for classifying supervised datasets (e.g., satellite
images) over some conventional deep learning (DL) techniques due to their
expressive power via their local effective dimension. There are, however, two
main challenges regardless of the promised quantum advantage: 1) Currently
available quantum bits (qubits) are very small in number, while real-world
datasets are characterized by hundreds of high-dimensional elements (i.e.,
features). Additionally, there is not a single unified approach for embedding
real-world high-dimensional datasets in a limited number of qubits. 2) Some
real-world datasets are too small for training intricate QML networks. Hence,
to tackle these two challenges for benchmarking and validating QML networks on
real-world, small, and high-dimensional datasets in one-go, we employ quantum
transfer learning composed of a multi-qubit QML network, and a very deep
convolutional network (with a VGG16 architecture) extracting informative
features from any small, high-dimensional dataset. We use real-amplitude and
strongly-entangling N-layer QML networks with and without data re-uploading
layers as a multi-qubit QML network, and evaluate their expressive power
quantified by using their local effective dimension; the lower the local
effective dimension of a QML network, the better its performance on unseen
data. Our numerical results show that the strongly-entangling N-layer QML
network has a lower local effective dimension than the real-amplitude QML
network and outperforms it on the hard-to-classify three-class labelling
problem. In addition, quantum transfer learning helps tackle the two challenges
mentioned above for benchmarking and validating QML networks on real-world,
small, and high-dimensional datasets.
|
http://arxiv.org/abs/2209.07799v4
|
quant-ph
|
not_new_dataset
| 0.992248 |
2209.07799
|
COMPASS: A Formal Framework and Aggregate Dataset for Generalized Surgical Procedure Modeling
|
Purpose: We propose a formal framework for the modeling and segmentation of
minimally-invasive surgical tasks using a unified set of motion primitives
(MPs) to enable more objective labeling and the aggregation of different
datasets.
Methods: We model dry-lab surgical tasks as finite state machines,
representing how the execution of MPs as the basic surgical actions results in
the change of surgical context, which characterizes the physical interactions
among tools and objects in the surgical environment. We develop methods for
labeling surgical context based on video data and for automatic translation of
context to MP labels. We then use our framework to create the COntext and
Motion Primitive Aggregate Surgical Set (COMPASS), including six dry-lab
surgical tasks from three publicly-available datasets (JIGSAWS, DESK, and
ROSMA), with kinematic and video data and context and MP labels.
Results: Our context labeling method achieves near-perfect agreement between
consensus labels from crowd-sourcing and expert surgeons. Segmentation of tasks
to MPs results in the creation of the COMPASS dataset that nearly triples the
amount of data for modeling and analysis and enables the generation of separate
transcripts for the left and right tools.
Conclusion: The proposed framework results in high quality labeling of
surgical data based on context and fine-grained MPs. Modeling surgical tasks
with MPs enables the aggregation of different datasets and the separate
analysis of left and right hands for bimanual coordination assessment. Our
formal framework and aggregate dataset can support the development of
explainable and multi-granularity models for improved surgical process
analysis, skill assessment, error detection, and autonomy.
|
http://arxiv.org/abs/2209.06424v5
|
cs.RO
|
new_dataset
| 0.994565 |
2209.06424
|
Intrusion Detection Systems Using Support Vector Machines on the KDDCUP'99 and NSL-KDD Datasets: A Comprehensive Survey
|
With the growing rates of cyber-attacks and cyber espionage, the need for
better and more powerful intrusion detection systems (IDS) is even more
warranted nowadays. The basic task of an IDS is to act as the first line of
defense, in detecting attacks on the internet. As intrusion tactics from
intruders become more sophisticated and difficult to detect, researchers have
started to apply novel Machine Learning (ML) techniques to effectively detect
intruders and hence preserve internet users' information and overall trust in
the entire internet network security. Over the last decade, there has been an
explosion of research on intrusion detection techniques based on ML and Deep
Learning (DL) architectures on various cyber security-based datasets such as
the DARPA, KDDCUP'99, NSL-KDD, CAIDA, CTU-13, UNSW-NB15. In this research, we
review contemporary literature and provide a comprehensive survey of different
types of intrusion detection techniques that apply Support Vector Machine
(SVM) algorithms as a classifier. We focus only on studies that have been
evaluated on the two most widely used datasets in cybersecurity namely: the
KDDCUP'99 and the NSL-KDD datasets. We provide a summary of each method,
identifying the role of the SVM classifier, and all other algorithms involved
in the studies. Furthermore, we present a critical review of each method, in
tabular form, highlighting the performance measures, strengths, and limitations
of each of the methods surveyed.
|
http://arxiv.org/abs/2209.05579v1
|
cs.CR
|
not_new_dataset
| 0.992208 |
2209.05579
|
Examining Uniqueness and Permanence of the WAY EEG GAL dataset toward User Authentication
|
This study evaluates the discriminating capacity (uniqueness) of the EEG data
from the WAY EEG GAL public dataset to authenticate individuals against one
another as well as its permanence. In addition to the EEG data, Luciw et al.
provide EMG (Electromyography), and kinematics data for engineers and
researchers to utilize WAY EEG GAL for further studies. However, evaluating the
EMG and kinematics data is outside the scope of this study. The goal of the
state-of-the-art is to determine whether EEG data can be utilized to control
prosthetic devices. On the other hand, this study aims to evaluate the
separability of individuals through EEG data to perform user authentication. A
feature importance algorithm is utilized to select the best features for each
user to authenticate them against all others. The authentication platform
implemented for this study is based on Machine Learning models/classifiers. As
an initial test, two pilot studies are performed using Linear Discriminant
Analysis (LDA) and Support Vector Machine (SVM) to observe the learning trends
of the models by multi-labeling the EEG dataset. Utilizing kNN first as the
classifier for user authentication, an accuracy of around 75% is observed.
Thereafter, to improve performance, both linear and non-linear SVMs are used to
perform
classification. The overall average accuracies of 85.18% and 86.92% are
achieved using linear and non-linear SVMs respectively. In addition to
accuracy, F1 scores are also calculated. The overall average F1 score of 87.51%
and 88.94% are achieved for linear and non-linear SVMs respectively. Beyond the
overall performance, high performing individuals with 95.3% accuracy (95.3% F1
score) using linear SVM and 97.4% accuracy (97.3% F1 score) using non-linear
SVM are also observed.
|
http://arxiv.org/abs/2209.04802v1
|
cs.LG
|
not_new_dataset
| 0.991981 |
2209.04802
|
Analyzing Wearables Dataset to Predict ADLs and Falls: A Pilot Study
|
Healthcare is an important aspect of human life. Use of technologies in
healthcare has increased manifold after the pandemic. Internet of Things based
systems and devices proposed in literature can help elders, children and adults
facing/experiencing health problems. This paper exhaustively reviews
thirty-nine wearable based datasets which can be used for evaluating the system
to recognize Activities of Daily Living and Falls. A comparative analysis on
the SisFall dataset using five machine learning methods, i.e., Logistic
Regression, Linear Discriminant Analysis, K-Nearest Neighbor, Decision Tree and
Naive Bayes is performed in Python. The dataset is modified in two ways: in
the first, all the attributes present in the dataset are used as-is and
labelled in binary form; in the second, the magnitude of the three axes (x, y,
z) for each of the three sensors' values is computed and then used in the
experiment together with the label attribute. The experiments
are performed on one subject, ten subjects and all the subjects and compared in
terms of accuracy, precision and recall. The results obtained from this study
prove that KNN outperforms other machine learning methods in terms of
accuracy, precision and recall. It is also concluded that personalization of
data improves accuracy.
|
http://arxiv.org/abs/2209.04785v1
|
cs.LG
|
new_dataset
| 0.993654 |
2209.04785
|
Data Feedback Loops: Model-driven Amplification of Dataset Biases
|
Datasets scraped from the internet have been critical to the successes of
large-scale machine learning. Yet, this very success puts the utility of future
internet-derived datasets at potential risk, as model outputs begin to replace
human annotations as a source of supervision.
In this work, we first formalize a system where interactions with one model
are recorded as history and scraped as training data in the future. We then
analyze its stability over time by tracking changes to a test-time bias
statistic (e.g. gender bias of model predictions). We find that the degree of
bias amplification is closely linked to whether the model's outputs behave like
samples from the training distribution, a behavior which we characterize and
define as consistent calibration. Experiments in three conditional prediction
scenarios - image classification, visual role-labeling, and language generation
- demonstrate that models that exhibit a sampling-like behavior are more
calibrated and thus more stable. Based on this insight, we propose an
intervention to help calibrate and stabilize unstable feedback systems.
Code is available at https://github.com/rtaori/data_feedback.
|
http://arxiv.org/abs/2209.03942v1
|
cs.LG
|
not_new_dataset
| 0.992168 |
2209.03942
|
Impact of dataset size and long-term ECoG-based BCI usage on deep learning decoders performance
|
In brain-computer interfaces (BCI) research, recording data is time-consuming
and expensive, which limits access to big datasets. This may influence the BCI
system performance as machine learning methods depend strongly on the training
dataset size. Important questions arise: taking into account neuronal signal
characteristics (e.g., non-stationarity), can we achieve higher decoding
performance with more data to train decoders? What is the perspective for
further improvement with time in the case of long-term BCI studies? In this
study, we investigated the impact of long-term recordings on motor imagery
decoding from two main perspectives: model requirements regarding dataset size
and potential for patient adaptation. We evaluated the multilinear model and
two deep learning (DL) models on a long-term BCI and Tetraplegia NCT02550522
clinical trial dataset containing 43 sessions of ECoG recordings performed with
a tetraplegic patient. In the experiment, a participant executed 3D virtual
hand translation using motor imagery patterns. We designed multiple
computational experiments in which training datasets were increased or
translated to investigate the relationship between models' performance and
different factors influencing recordings. Our analysis showed that adding more
data to the training dataset may not instantly increase performance for
datasets already containing 40 minutes of the signal. DL decoders showed
similar requirements regarding the dataset size compared to the multilinear
model while demonstrating higher decoding performance. Moreover, high decoding
performance was obtained with relatively small datasets recorded later in the
experiment, suggesting motor imagery patterns improvement and patient
adaptation. Finally, we proposed UMAP embeddings and local intrinsic
dimensionality as a way to visualize the data and potentially evaluate data
quality.
|
http://arxiv.org/abs/2209.03789v1
|
eess.SP
|
not_new_dataset
| 0.99212 |
2209.03789
|
A crowdsourced dataset of aerial images with annotated solar photovoltaic arrays and installation metadata
|
Photovoltaic (PV) energy generation plays a crucial role in the energy
transition. Small-scale PV installations are deployed at an unprecedented pace,
and their integration into the grid can be challenging since public authorities
often lack quality data about them. Overhead imagery is increasingly used to
improve the knowledge of residential PV installations with machine learning
models capable of automatically mapping these installations. However, these
models cannot be easily transferred from one region or data source to another
due to differences in image acquisition. To address this issue known as domain
shift and foster the development of PV array mapping pipelines, we propose a
dataset containing aerial images, annotations, and segmentation masks. We
provide installation metadata for more than 28,000 installations. We provide
ground truth segmentation masks for 13,000 installations, including 7,000 with
annotations for two different image providers. Finally, we provide installation
metadata that matches the annotation for more than 8,000 installations. Dataset
applications include end-to-end PV registry construction, robust PV
installations mapping, and analysis of crowdsourced datasets.
|
http://arxiv.org/abs/2209.03726v2
|
cs.CV
|
new_dataset
| 0.994496 |
2209.03726
|
Avast-CTU Public CAPE Dataset
|
There is a limited amount of publicly available data to support research in
malware analysis technology. Particularly, there are virtually no publicly
available datasets generated from rich sandboxes such as Cuckoo/CAPE. The
benefit of using dynamic sandboxes is the realistic simulation of file
execution in the target machine and obtaining a log of such execution. The
machine can be infected by malware hence there is a good chance of capturing
the malicious behavior in the execution logs, thus allowing researchers to
study such behavior in detail. Although the subsequent analysis of log
information is extensively covered in industrial cybersecurity backends, to our
knowledge there has been only limited effort invested in academia to advance
such log analysis capabilities using cutting-edge techniques. We make this
sample dataset available to support designing new machine learning methods for
malware detection, especially for automatic detection of generic malicious
behavior. The dataset has been collected in cooperation between Avast Software
and Czech Technical University - AI Center (AIC).
|
http://arxiv.org/abs/2209.03188v1
|
cs.CR
|
new_dataset
| 0.994422 |
2209.03188
|
A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset
|
This study presents machine learning models that forecast and categorize lost
circulation severity preemptively using a large class imbalanced drilling
dataset. We demonstrate reproducible core techniques involved in tackling a
large drilling engineering challenge utilizing easily interpretable machine
learning approaches.
We utilized a 65,000+ records data with class imbalance problem from Azadegan
oilfield formations in Iran. Eleven of the dataset's seventeen parameters are
chosen to be used in the classification of five lost circulation events. To
generate classification models, we used six basic machine learning algorithms
and four ensemble learning methods. Linear Discriminant Analysis (LDA),
Logistic Regression (LR), Support Vector Machines (SVM), Classification and
Regression Trees (CART), k-Nearest Neighbors (KNN), and Gaussian Naive Bayes
(GNB) are the six fundamental techniques. We also used bagging and boosting
ensemble learning techniques in the investigation of solutions for improved
predicting performance. The performance of these algorithms is measured using
four metrics: accuracy, precision, recall, and F1-score. The F1-score weighted
to represent the data imbalance is chosen as the preferred evaluation
criterion.
The CART model was found to be the best in class for identifying drilling
fluid circulation loss events with an average weighted F1-score of 0.9904 and
standard deviation of 0.0015. Upon application of ensemble learning techniques,
a Random Forest ensemble of decision trees showed the best predictive
performance. It identified and classified lost circulation events with a
perfect weighted F1-score of 1.0. Using Permutation Feature Importance (PFI),
the measured depth was found to be the most influential factor in accurately
recognizing lost circulation events while drilling.
|
http://arxiv.org/abs/2209.01607v2
|
cs.LG
|
not_new_dataset
| 0.99202 |
2209.01607
|
MultiCoNER: A Large-scale Multilingual dataset for Complex Named Entity Recognition
|
We present MultiCoNER, a large multilingual dataset for Named Entity
Recognition that covers 3 domains (Wiki sentences, questions, and search
queries) across 11 languages, as well as multilingual and code-mixing subsets.
This dataset is designed to represent contemporary challenges in NER, including
low-context scenarios (short and uncased text), syntactically complex entities
like movie titles, and long-tail entity distributions. The 26M token dataset is
compiled from public resources using techniques such as heuristic-based
sentence sampling, template extraction and slotting, and machine translation.
We applied two NER models on our dataset: a baseline XLM-RoBERTa model, and a
state-of-the-art GEMNET model that leverages gazetteers. The baseline achieves
moderate performance (macro-F1=54%), highlighting the difficulty of our data.
GEMNET, which uses gazetteers, improves significantly (an average macro-F1
improvement of +30%). MultiCoNER poses challenges even for large pre-trained
language models, and we believe that it can help further research in building
robust NER systems. MultiCoNER is publicly available at
https://registry.opendata.aws/multiconer/ and we hope that this resource will
help advance research in various aspects of NER.
|
http://arxiv.org/abs/2208.14536v1
|
cs.CL
|
new_dataset
| 0.994555 |
2208.14536
|
Annotated Dataset Creation through General Purpose Language Models for non-English Medical NLP
|
Obtaining text datasets with semantic annotations is an effortful process,
yet crucial for supervised training in natural language processing (NLP). In
general, developing and applying new NLP pipelines for domain-specific tasks
often requires custom-designed datasets to address NLP tasks in a supervised
machine learning fashion. When operating in non-English languages
for medical data processing, this exposes several minor and major,
interconnected problems such as lack of task-matching datasets as well as
task-specific pre-trained models. In our work we suggest to leverage pretrained
language models for training data acquisition in order to retrieve sufficiently
large datasets for training smaller and more efficient models for use-case
specific tasks. To demonstrate the effectiveness of our approach, we create a
custom dataset which we use to train a medical NER model for German texts,
GPTNERMED, yet our method remains language-independent in principle. Our
obtained dataset as well as our pre-trained models are publicly available at:
https://github.com/frankkramer-lab/GPTNERMED
|
http://arxiv.org/abs/2208.14493v1
|
cs.CL
|
new_dataset
| 0.99418 |
2208.14493
|
Fraud Dataset Benchmark and Applications
|
Standardized datasets and benchmarks have spurred innovations in computer
vision, natural language processing, multi-modal and tabular settings. We note
that, as compared to other well researched fields, fraud detection has unique
challenges: high-class imbalance, diverse feature types, frequently changing
fraud patterns, and adversarial nature of the problem. Due to these, the
modeling approaches evaluated on datasets from other research fields may not
work well for the fraud detection. In this paper, we introduce Fraud Dataset
Benchmark (FDB), a compilation of publicly available datasets catered to fraud
detection. FDB comprises a variety of fraud-related tasks, ranging from
identifying fraudulent card-not-present transactions, detecting bot attacks,
classifying malicious URLs, estimating risk of loan default to content
moderation. The Python based library for FDB provides a consistent API for data
loading with standardized training and testing splits. We demonstrate several
applications of FDB that are of broad interest for fraud detection, including
feature engineering, comparison of supervised learning algorithms, label noise
removal, class-imbalance treatment and semi-supervised learning. We hope that
FDB provides a common playground for researchers and practitioners in the fraud
detection domain to develop robust and customized machine learning techniques
targeting various fraud use cases.
|
http://arxiv.org/abs/2208.14417v3
|
cs.LG
|
new_dataset
| 0.734783 |
2208.14417
|
A Dataset and Baseline Approach for Identifying Usage States from Non-Intrusive Power Sensing With MiDAS IoT-based Sensors
|
The state identification problem seeks to identify power usage patterns of
any system, like buildings or factories, of interest. In this challenge paper,
we make power usage dataset available from 8 institutions in manufacturing,
education and medical institutions from the US and India, and an initial
un-supervised machine learning based solution as a baseline for the community
to accelerate research in this area.
|
http://arxiv.org/abs/2209.00987v2
|
eess.SP
|
new_dataset
| 0.994397 |
2209.00987
|
An Energy Activity Dataset for Smart Homes
|
A smart home energy dataset that records miscellaneous energy consumption
data is publicly offered. The proposed energy activity dataset (EAD) has a high
data type diversity in contrast to existing load monitoring datasets. In EAD, a
simple data point is labeled with the appliance, brand, and event information,
whereas a complex data point has an extra application label. Several
discoveries have been made on the energy consumption patterns of many
appliances. Load curves of the appliances are measured when different events
and applications are triggered and utilized. A revised
longest-common-subsequence (LCS) similarity measurement algorithm is proposed
to calculate energy dataset similarities. Thus, prior information about data
quality becomes available before training machine learning models. In
addition, a subsample convolutional neural network (SCNN) is put forward. It
serves as a non-intrusive optical character recognition (OCR) approach to
obtain energy data directly from monitors of power meters. The link for the EAD
dataset is:
https://drive.google.com/drive/folders/1zn0V6Q8eXXSKxKgcs8ZRValL5VEn3anD
|
http://arxiv.org/abs/2208.13416v2
|
eess.SP
|
new_dataset
| 0.994449 |
2208.13416
|
Interpreting Black-box Machine Learning Models for High Dimensional Datasets
|
Deep neural networks (DNNs) have been shown to outperform traditional machine
learning algorithms in a broad variety of application domains due to their
effectiveness in modeling complex problems and handling high-dimensional
datasets. Many real-life datasets, however, are of increasingly high
dimensionality, where a large number of features may be irrelevant for both
supervised and unsupervised learning tasks. The inclusion of such features
would not only introduce unwanted noise but also increase computational
complexity. Furthermore, due to high non-linearity and dependency among a large
number of features, DNN models tend to be unavoidably opaque and perceived as
black-box methods because of their not well-understood internal functioning.
Their algorithmic complexity is often simply beyond the capacities of humans to
understand the interplay among myriads of hyperparameters. A well-interpretable
model can identify statistically significant features and explain the way they
affect the model's outcome. In this paper, we propose an efficient method to
improve the interpretability of black-box models for classification tasks in
the case of high-dimensional datasets. First, we train a black-box model on a
high-dimensional dataset to learn the embeddings on which the classification is
performed. To decompose the inner working principles of the black-box model and
to identify top-k important features, we employ different probing and
perturbing techniques. We then approximate the behavior of the black-box model
by means of an interpretable surrogate model on the top-k feature space.
Finally, we derive decision rules and local explanations from the surrogate
model to explain individual decisions. Our approach outperforms
state-of-the-art methods like TabNet and XGBoost when tested on different
datasets with dimensionality varying between 50 and 20,000, with respect to
both metrics and explainability.
|
http://arxiv.org/abs/2208.13405v2
|
cs.LG
|
not_new_dataset
| 0.992224 |
2208.13405
|
Machine Learning Models Evaluation and Feature Importance Analysis on NPL Dataset
|
Predicting the probability of non-performing loans for individuals has a
vital and beneficial role for banks to decrease credit risk and make the right
decisions before giving the loan. The trend to make these decisions are based
on credit study and in accordance with generally accepted standards, loan
payment history, and demographic data of the clients. In this work, we evaluate
how different Machine learning models such as Random Forest, Decision tree,
KNN, SVM, and XGBoost perform on the dataset provided by a private bank in
Ethiopia. Further, motivated by this evaluation we explore different feature
selection methods to state the important features for the bank. Our findings
show that XGBoost achieves the highest F1 score on the KMeans SMOTE
over-sampled data. We also found that the most important features are the age
of the applicant, years of employment, and total income of the applicant rather
than collateral-related features in evaluating credit risk.
|
http://arxiv.org/abs/2209.09638v1
|
cs.LG
|
not_new_dataset
| 0.992097 |
2209.09638
|
MangoLeafBD: A Comprehensive Image Dataset to Classify Diseased and Healthy Mango Leaves
|
Agriculture is of one of the few remaining sectors that is yet to receive
proper attention from the machine learning community. The importance of
datasets in the machine learning discipline cannot be overemphasized. The lack
of standard and publicly available datasets related to agriculture impedes
practitioners of this discipline to harness the full benefit of these powerful
computational predictive tools and techniques. To improve this scenario, we
develop, to the best of our knowledge, the first-ever standard, ready-to-use,
and publicly available dataset of mango leaves. The images are collected from
four mango orchards of Bangladesh, one of the top mango-growing countries of
the world. The dataset contains 4000 images of about 1800 distinct leaves
covering seven diseases. Although the dataset is developed using mango leaves
of Bangladesh only, since we deal with diseases that are common across many
countries, this dataset is likely to be applicable to identify mango diseases
in other countries as well, thereby boosting mango yield. This dataset is
expected to draw wide attention from machine learning researchers and
practitioners in the field of automated agriculture.
|
http://arxiv.org/abs/2209.02377v1
|
cs.CV
|
new_dataset
| 0.994523 |
2209.02377
|
Deep Learning-based ECG Classification on Raspberry PI using a Tensorflow Lite Model based on PTB-XL Dataset
|
The number of IoT devices in healthcare is expected to rise sharply due to
increased demand since the COVID-19 pandemic. Deep learning and IoT devices are
being employed to monitor body vitals and automate anomaly detection in
clinical and non-clinical settings. Most of the current technology requires the
transmission of raw data to a remote server, which is not efficient for
resource-constrained IoT devices and embedded systems. Additionally, it is
challenging to develop a machine learning model for ECG classification due to
the lack of an extensive open public database. To overcome this challenge to an
extent, the PTB-XL dataset has been used. In this work, we have developed machine
learning models to be deployed on Raspberry Pi. We present an evaluation of our
TensorFlow Model with two classification classes. We also present the
evaluation of the corresponding TensorFlow Lite FlatBuffers to demonstrate
their minimal run-time requirements while maintaining acceptable accuracy.
|
http://arxiv.org/abs/2209.00989v1
|
eess.SP
|
not_new_dataset
| 0.992058 |
2209.00989
|
Ontology-Driven Self-Supervision for Adverse Childhood Experiences Identification Using Social Media Datasets
|
Adverse Childhood Experiences (ACEs) are defined as a collection of highly
stressful, and potentially traumatic, events or circumstances that occur
throughout childhood and/or adolescence. They have been shown to be associated
with increased risks of mental health diseases or other abnormal behaviours in
later lives. However, the identification of ACEs from textual data with Natural
Language Processing (NLP) is challenging because (a) there are no NLP-ready ACE
ontologies; (b) few resources are available for machine learning, necessitating
data annotation by clinical experts; and (c) annotation by domain experts is
costly, and large numbers of documents are needed to support large machine
learning models. In this paper, we present an ontology-driven self-supervised
approach (derive concept embeddings using an auto-encoder from baseline NLP
results) for producing a publicly available resource that would support
large-scale machine learning (e.g., training transformer based large language
models) on social media corpus. This resource as well as the proposed approach
are aimed to facilitate the community in training transferable NLP models for
effectively surfacing ACEs in low-resource scenarios like NLP on clinical notes
within Electronic Health Records. The resource including a list of ACE ontology
terms, ACE concept embeddings and the NLP annotated corpus is available at
https://github.com/knowlab/ACE-NLP.
|
http://arxiv.org/abs/2208.11701v1
|
cs.CL
|
not_new_dataset
| 0.858809 |
2208.11701
|
Minimizing the Effect of Noise and Limited Dataset Size in Image Classification Using Depth Estimation as an Auxiliary Task with Deep Multitask Learning
|
Generalizability is the ultimate goal of Machine Learning (ML) image
classifiers, for which noise and limited dataset size are among the major
concerns. We tackle these challenges through utilizing the framework of deep
Multitask Learning (dMTL) and incorporating image depth estimation as an
auxiliary task. On a customized and depth-augmented derivation of the MNIST
dataset, we show a) multitask loss functions are the most effective approach of
implementing dMTL, b) limited dataset size primarily contributes to
classification inaccuracy, and c) depth estimation is mostly impacted by noise.
In order to further validate the results, we manually labeled the NYU Depth V2
dataset for scene classification tasks. As a contribution to the field, we have
made the data in Python-native format publicly available as an open-source
dataset and provided the scene labels. Our experiments on MNIST and
NYU-Depth-V2 show dMTL improves generalizability of the classifiers when the
dataset is noisy and the number of examples is limited.
|
http://arxiv.org/abs/2208.10390v1
|
cs.CV
|
not_new_dataset
| 0.992131 |
2208.10390
|
Evaluating and Crafting Datasets Effective for Deep Learning With Data Maps
|
Rapid development in deep learning model construction has prompted an
increased need for appropriate training data. The popularity of large datasets
- sometimes known as "big data" - has diverted attention from assessing their
quality. Training on large datasets often requires excessive system resources
and an infeasible amount of time. Furthermore, the supervised machine learning
process has yet to be fully automated: for supervised learning, large datasets
require more time for manually labeling samples. We propose a method of
curating smaller datasets that achieve comparable out-of-distribution model
accuracy after an initial training session, using an appropriate distribution
of samples classified by how difficult it is for a model to learn from them.
|
http://arxiv.org/abs/2208.10033v2
|
cs.LG
|
not_new_dataset
| 0.992038 |
2208.10033
|
Scalable mRMR feature selection to handle high dimensional datasets: Vertical partitioning based Iterative MapReduce framework
|
While building machine learning models, Feature selection (FS) stands out as
an essential preprocessing step used to handle the uncertainty and vagueness in
the data. Recently, the minimum Redundancy and Maximum Relevance (mRMR)
approach has proven to be effective in obtaining the irredundant feature
subset. Owing to the generation of voluminous datasets, it is essential to
design scalable solutions using distributed/parallel paradigms. MapReduce
solutions are proven to be one of the best approaches to designing
fault-tolerant and scalable solutions. This work analyses the existing
MapReduce approaches for mRMR feature selection and identifies the limitations
thereof. In the current study, we propose VMR_mRMR, an efficient vertical
partitioning-based approach that uses memorization, thereby overcoming the
limitations of extant approaches. The experimental analysis shows that VMR_mRMR
significantly outperformed extant approaches and achieved a better
computational gain (C.G). In addition, we also conducted a comparative analysis
with the horizontal partitioning approach HMR_mRMR [1] to assess the strengths
and limitations of the proposed approach.
|
http://arxiv.org/abs/2208.09901v1
|
cs.DC
|
not_new_dataset
| 0.992158 |
2208.09901
|
Improving Multilayer-Perceptron(MLP)-based Network Anomaly Detection with Birch Clustering on CICIDS-2017 Dataset
|
Machine learning algorithms have been widely used in intrusion detection
systems, including Multi-layer Perceptron (MLP). In this study, we proposed a
two-stage model that combines the Birch clustering algorithm and MLP classifier
to improve the performance of network anomaly multi-classification. In our
proposed method, we first apply Birch or Kmeans as an unsupervised clustering
algorithm to the CICIDS-2017 dataset to pre-group the data. The generated
pseudo-label is then added as an additional feature to the training of the
MLP-based classifier. The experimental results show that using Birch and
K-Means clustering for data pre-grouping can improve intrusion detection system
performance. Our method can achieve 99.73% accuracy in multi-classification
using Birch clustering, which is better than similar research using a
stand-alone MLP model.
|
http://arxiv.org/abs/2208.09711v2
|
cs.CR
|
not_new_dataset
| 0.99217 |
2208.09711
|
Commander's Intent: A Dataset and Modeling Approach for Human-AI Task Specification in Strategic Play
|
Effective Human-AI teaming requires the ability to communicate the goals of
the team and constraints under which you need the agent to operate. Providing
the ability to specify the shared intent or operation criteria of the team can
enable an AI agent to perform its primary function while still being able to
cater to the specific desires of the current team. While significant work has
been conducted to instruct an agent to perform a task, via language or
demonstrations, prior work lacks a focus on building agents which can operate
within the parameters specified by a team. Worse yet, there is a dearth of
research pertaining to enabling humans to provide their specifications through
unstructured, naturalistic language. In this paper, we propose the use of goals
and constraints as a scaffold to modulate and evaluate autonomous agents. We
contribute to this field by presenting a novel dataset, and an associated data
collection protocol, which maps language descriptions to goals and constraints
corresponding to specific strategies developed by human participants for the
board game Risk. Leveraging state-of-the-art language models and augmentation
procedures, we develop a machine learning framework which can be used to
identify goals and constraints from unstructured strategy descriptions. To
empirically validate our approach we conduct a human-subjects study to
establish a human-baseline for our dataset. Our results show that our machine
learning architecture is better able to interpret unstructured language
descriptions into strategy specifications than human raters tasked with
performing the same machine translation task (F(1,272.53) = 17.025, p < 0.001).
|
http://arxiv.org/abs/2208.08374v1
|
cs.AI
|
new_dataset
| 0.994589 |
2208.08374
|
Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides
|
Lecture slide presentations, a sequence of pages that contain text and
figures accompanied by speech, are constructed and presented carefully in order
to optimally transfer knowledge to students. Previous studies in multimedia and
psychology attribute the effectiveness of lecture presentations to their
multimodal nature. As a step toward developing AI to aid in student learning as
intelligent teacher assistants, we introduce the Multimodal Lecture
Presentations dataset as a large-scale benchmark testing the capabilities of
machine learning models in multimodal understanding of educational content. Our
dataset contains aligned slides and spoken language, for 180+ hours of video
and 9000+ slides, with 10 lecturers from various subjects (e.g., computer
science, dentistry, biology). We introduce two research tasks which are
designed as stepping stones towards AI agents that can explain (automatically
captioning a lecture presentation) and illustrate (synthesizing visual figures
to accompany spoken explanations) educational content. We provide manual
annotations to help implement these two research tasks and evaluate
state-of-the-art models on them. Comparing baselines and human student
performances, we find that current models struggle in (1) weak crossmodal
alignment between slides and spoken text, (2) learning novel visual mediums,
(3) technical language, and (4) long-range sequences. Towards addressing this
issue, we also introduce PolyViLT, a multimodal transformer trained with a
multi-instance learning loss that is more effective than current approaches. We
conclude by shedding light on the challenges and opportunities in multimodal
understanding of educational presentations.
|
http://arxiv.org/abs/2208.08080v1
|
cs.AI
|
new_dataset
| 0.99452 |
2208.08080
|
The Conversational Short-phrase Speaker Diarization (CSSD) Task: Dataset, Evaluation Metric and Baselines
|
The conversation scenario is one of the most important and most challenging
scenarios for speech processing technologies because people in conversation
respond to each other in a casual style. Detecting the speech activities of
each person in a conversation is vital to downstream tasks, like natural
language processing, machine translation, etc. People refer to the detection
technology of "who speaks when" as speaker diarization (SD). Traditionally,
diarization error rate (DER) has been used as the standard evaluation metric of
SD systems for a long time. However, DER fails to give enough importance to
short conversational phrases, which are short but important on the semantic
level. Also, a carefully and accurately manually-annotated testing dataset
suitable for evaluating the conversational SD technologies is still unavailable
in the speech community. In this paper, we design and describe the
Conversational Short-phrases Speaker Diarization (CSSD) task, which consists of
training and testing datasets, evaluation metric and baselines. In the dataset
aspect, despite the previously open-sourced 180-hour conversational
MagicData-RAMC dataset, we prepare an individual 20-hour conversational speech
test dataset with carefully and manually verified speaker timestamp
annotations for the CSSD task. In the metric aspect, we design the new
conversational DER (CDER) evaluation metric, which calculates the SD accuracy
at the utterance level. In the baseline aspect, we adopt a commonly used
method: Variational Bayes HMM x-vector system, as the baseline of the CSSD
task. Our evaluation metric is publicly available at
https://github.com/SpeechClub/CDER_Metric.
|
http://arxiv.org/abs/2208.08042v1
|
cs.CL
|
new_dataset
| 0.994514 |
2208.08042
|
Ex-Ante Assessment of Discrimination in Dataset
|
Data owners face increasing liability for how the use of their data could
harm under-priviliged communities. Stakeholders would like to identify the
characteristics of data that lead to algorithms being biased against any
particular demographic groups, for example, defined by their race, gender, age,
and/or religion. Specifically, we are interested in identifying subsets of the
feature space where the ground truth response function from features to
observed outcomes differs across demographic groups. To this end, we propose
FORESEE, a FORESt of decision trEEs algorithm, which generates a score that
captures how likely an individual's response varies with sensitive attributes.
Empirically, we find that our approach allows us to identify the individuals
who are most likely to be misclassified by several classifiers, including
Random Forest, Logistic Regression, Support Vector Machine, and k-Nearest
Neighbors. The advantage of our approach is that it allows stakeholders to
characterize risky samples that may contribute to discrimination, as well as,
use the FORESEE to estimate the risk of upcoming samples.
|
http://arxiv.org/abs/2208.07918v2
|
cs.LG
|
not_new_dataset
| 0.992276 |
2208.07918
|
BDSL 49: A Comprehensive Dataset of Bangla Sign Language
|
Language is a method by which individuals express their thoughts. Each
language has its own set of alphabetic and numeric characters. People can
communicate with one another through either oral or written communication.
However, each language has a sign language counterpart. Individuals who are
deaf and/or mute communicate through sign language. The Bangla language also
has a sign language, which is called BDSL. The dataset consists of Bangla hand
sign images, covering 49 individual Bangla alphabet signs. BDSL49 comprises
29,490 images with 49
labels. Images of 14 different adult individuals, each with a distinct
background and appearance, have been recorded during data collection. Several
strategies have been used to eliminate noise from datasets during preparation.
This dataset is available to researchers for free. They can develop automated
systems using machine learning, computer vision, and deep learning techniques.
In addition, two models were used in this dataset. The first is for detection,
while the second is for recognition.
|
http://arxiv.org/abs/2208.06827v1
|
cs.CV
|
new_dataset
| 0.994464 |
2208.06827
|
A hands-on gaze on HTTP/3 security through the lens of HTTP/2 and a public dataset
|
Following QUIC protocol ratification on May 2021, the third major version of
the Hypertext Transfer Protocol, namely HTTP/3, was published around one year
later in RFC 9114. In light of these consequential advancements, the current
work aspires to provide a full-blown coverage of the following issues, which to
our knowledge have received feeble or no attention in the literature so far.
First, we provide a complete review of attacks against HTTP/2, and elaborate on
if and in which way they can be migrated to HTTP/3. Second, through the
creation of a testbed comprising the at present six most popular HTTP/3-enabled
servers, we examine the effectiveness of a quartet of attacks, either stemming
directly from the HTTP/2 relevant literature or being entirely new. This
scrutiny led to the assignment of at least one CVE ID with a critical base
score by MITRE. No less important, by capitalizing on a realistic testbed
abundant in devices, we compiled a voluminous, labeled corpus containing traces of
ten diverse attacks against HTTP and QUIC services. An initial evaluation of
the dataset mainly by means of machine learning techniques is included as well.
Given that the 30 GB dataset is made available in both pcap and CSV formats,
forthcoming research can easily take advantage of any subset of features,
contingent upon the specific network topology and configuration.
|
http://arxiv.org/abs/2208.06722v2
|
cs.CR
|
not_new_dataset
| 0.934479 |
2208.06722
|
MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis
|
Autonomous bin picking poses significant challenges to vision-driven robotic
systems given the complexity of the problem, ranging from various sensor
modalities, to highly entangled object layouts, to diverse item properties and
gripper types. Existing methods often address the problem from one perspective.
Diverse items and complex bin scenes require diverse picking strategies
together with advanced reasoning. As such, to build robust and effective
machine-learning algorithms for solving this complex task requires significant
amounts of comprehensive and high quality data. Collecting such data in real
world would be too expensive and time prohibitive and therefore intractable
from a scalability perspective. To tackle this big, diverse data problem, we
take inspiration from the recent rise in the concept of metaverses, and
introduce MetaGraspNet, a large-scale photo-realistic bin picking dataset
constructed via physics-based metaverse synthesis. The proposed dataset
contains 217k RGBD images across 82 different article types, with full
annotations for object detection, amodal perception, keypoint detection,
manipulation order and ambidextrous grasp labels for a parallel-jaw and vacuum
gripper. We also provide a real dataset consisting of over 2.3k fully annotated
high-quality RGBD images, divided into 5 levels of difficulties and an unseen
object set to evaluate different object and layout properties. Finally, we
conduct extensive experiments showing that our proposed vacuum seal model and
synthetic dataset achieve state-of-the-art performance and generalize to
real-world use cases.
|
http://arxiv.org/abs/2208.03963v1
|
cs.CV
|
new_dataset
| 0.994558 |
2208.03963
|
Customs Import Declaration Datasets
|
Given the huge volume of cross-border flows, effective and efficient control
of trade becomes more crucial in protecting people and society from illicit
trade. However, limited accessibility of the transaction-level trade datasets
hinders the progress of open research, and lots of customs administrations have
not benefited from the recent progress in data-based risk management. In this
paper, we introduce an import declaration dataset to facilitate the
collaboration between domain experts in customs administrations and researchers
from diverse domains, such as data science and machine learning. The dataset
contains 54,000 artificially generated trades with 22 key attributes, and it is
synthesized with a conditional tabular GAN while maintaining correlated features.
Synthetic data has several advantages. First, releasing the dataset is free
from restrictions that do not allow disclosing the original import data. The
fabrication step minimizes the possible identity risk which may exist in trade
statistics. Second, the published data follow a similar distribution to the
source data so that it can be used in various downstream tasks. Hence, our
dataset can be used as a benchmark for testing the performance of any
classification algorithm. With the provision of data and its generation
process, we open baseline codes for fraud detection tasks, as we empirically
show that more advanced algorithms can better detect fraud.
|
http://arxiv.org/abs/2208.02484v3
|
cs.LG
|
new_dataset
| 0.994493 |
2208.02484
|
Style Transfer of Black and White Silhouette Images using CycleGAN and a Randomly Generated Dataset
|
CycleGAN can be used to transfer an artistic style to an image. It does not
require pairs of source and stylized images to train a model. Taking this
advantage, we propose using randomly generated data to train a machine learning
model that can transfer a traditional art style to a black and white silhouette
image. The result is noticeably better than the previous neural style transfer
methods. However, there are some areas for improvement, such as removing
artifacts and spikes from the transformed image.
|
http://arxiv.org/abs/2208.04140v1
|
cs.LG
|
not_new_dataset
| 0.991568 |
2208.04140
|
A Case for Dataset Specific Profiling
|
Data-driven science is an emerging paradigm where scientific discoveries
depend on the execution of computational AI models against rich,
discipline-specific datasets. With modern machine learning frameworks, anyone
can develop and execute computational models that reveal concepts hidden in the
data that could enable scientific applications. For important and widely used
datasets, computing the performance of every computational model that can run
against a dataset is cost prohibitive in terms of cloud resources. Benchmarking
approaches used in practice use representative datasets to infer performance
without actually executing models. While practicable, these approaches limit
extensive dataset profiling to a few datasets and introduce bias that favors
models suited for representative datasets. As a result, each dataset's unique
characteristics are left unexplored and subpar models are selected based on
inference from generalized datasets. This necessitates a new paradigm that
introduces dataset profiling into the model selection process. To demonstrate
the need for dataset-specific profiling, we answer two questions: (1) Can
scientific datasets significantly permute the rank order of computational
models compared to widely used representative datasets? (2) If so, could
lightweight model execution improve benchmarking accuracy? Taken together, the
answers to these questions lay the foundation for a new dataset-aware
benchmarking paradigm.
|
http://arxiv.org/abs/2208.03315v1
|
cs.LG
|
not_new_dataset
| 0.992111 |
2208.03315
|
CircuitNet: An Open-Source Dataset for Machine Learning Applications in Electronic Design Automation (EDA)
|
The electronic design automation (EDA) community has been actively exploring
machine learning (ML) for very large-scale integrated computer-aided design
(VLSI CAD). Many studies explored learning-based techniques for cross-stage
prediction tasks in the design flow to achieve faster design convergence.
Although building ML models usually requires a large amount of data, most
studies can only generate small internal datasets for validation because of the
lack of large public datasets. In this paper, we present the first open-source
dataset called CircuitNet for ML tasks in VLSI CAD.
|
http://arxiv.org/abs/2208.01040v4
|
cs.LG
|
new_dataset
| 0.994331 |
2208.01040
|
Gotham Testbed: a Reproducible IoT Testbed for Security Experiments and Dataset Generation
|
The growing adoption of the Internet of Things (IoT) has brought a
significant increase in attacks targeting those devices. Machine learning (ML)
methods have shown promising results for intrusion detection; however, the
scarcity of IoT datasets remains a limiting factor in developing ML-based
security systems for IoT scenarios. Static datasets get outdated due to
evolving IoT architectures and threat landscape; meanwhile, the testbeds used
to generate them are rarely published. This paper presents the Gotham testbed,
a reproducible and flexible security testbed extendable to accommodate new
emulated devices, services or attackers. Gotham is used to build an IoT
scenario composed of 100 emulated devices communicating via MQTT, CoAP and RTSP
protocols, among others, in a topology composed of 30 switches and 10 routers.
The scenario presents three threat actors, including the entire Mirai botnet
lifecycle and additional red-teaming tools performing DoS, scanning, and
attacks targeting IoT protocols. The testbed has many purposes, including a
cyber range, testing security solutions, and capturing network and application
data to generate datasets. We hope that researchers can leverage and adapt
Gotham to include other devices, state-of-the-art attacks and topologies to
share scenarios and datasets that reflect the current IoT settings and threat
landscape.
|
http://arxiv.org/abs/2207.13981v3
|
cs.CR
|
new_dataset
| 0.988946 |
2207.13981
|
Towards overcoming data scarcity in materials science: unifying models and datasets with a mixture of experts framework
|
While machine learning has emerged in recent years as a useful tool for rapid
prediction of materials properties, generating sufficient data to reliably
train models without overfitting is still impractical for many applications.
Towards overcoming this limitation, we present a general framework for
leveraging complementary information across different models and datasets for
accurate prediction of data scarce materials properties. Our approach, based on
a machine learning paradigm called mixture of experts, outperforms pairwise
transfer learning on 16 of 19 materials property regression tasks, performing
comparably on the remaining three. Unlike pairwise transfer learning, our
framework automatically learns to combine information from multiple source
tasks in a single training run, alleviating the need for brute-force
experiments to determine which source task to transfer from. The approach also
provides an interpretable, model-agnostic, and scalable mechanism to transfer
information from an arbitrary number of models and datasets to any downstream
property prediction task. We anticipate the performance of our framework will
further improve as better model architectures, new pre-training tasks, and
larger materials datasets are developed by the community.
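The mixture-of-experts combination described above can be pictured as a learned gate that softmax-weights per-expert predictions; a toy sketch (the gate logits and expert outputs below are invented, not the paper's trained values):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mixture_predict(expert_preds, gate_logits):
    """Weighted combination of expert predictions: the gate decides,
    per input, how much to trust each source model/dataset."""
    weights = softmax(gate_logits)
    return sum(w * p for w, p in zip(weights, expert_preds))

# Three hypothetical experts predicting a material property (in eV):
preds = [1.10, 0.95, 1.40]
# Gate strongly favors the second expert for this input (invented logits):
logits = [0.0, 2.0, -1.0]
y = mixture_predict(preds, logits)
```

In a single training run, the gate learns such weightings jointly over all source tasks, which is what removes the brute-force search over pairwise transfer sources.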
|
http://arxiv.org/abs/2207.13880v1
|
cond-mat.mtrl-sci
|
not_new_dataset
| 0.992192 |
2207.13880
|
Deep Learning for Classification of Thyroid Nodules on Ultrasound: Validation on an Independent Dataset
|
Objectives: The purpose is to apply a previously validated deep learning
algorithm to a new thyroid nodule ultrasound image dataset and compare its
performance with that of radiologists. Methods: A prior study presented an
algorithm able to detect thyroid nodules and then make malignancy
classifications from two ultrasound images. A multi-task deep convolutional
neural network was trained from 1278 nodules and originally tested with 99
separate nodules. The results were comparable with those of radiologists. The
algorithm was further tested with 378 nodules imaged with ultrasound machines
from different manufacturers and product types than the training cases. Four
experienced radiologists were requested to evaluate the nodules for comparison
with deep learning. Results: The Area Under Curve (AUC) of the deep learning
algorithm and four radiologists were calculated with parametric, binormal
estimation. For the deep learning algorithm, the AUC was 0.69 (95% CI: 0.64 -
0.75). The AUCs of the radiologists were 0.63 (95% CI: 0.59 - 0.67), 0.66 (95%
CI: 0.61 - 0.71), 0.65 (95% CI: 0.60 - 0.70), and 0.63 (95% CI: 0.58 - 0.67).
Conclusion: In the new testing dataset, the deep learning algorithm achieved
performance similar to that of all four radiologists. The relative performance
difference between the algorithm and the radiologists was not significantly
affected by the change in ultrasound scanner.
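The parametric binormal AUC estimation mentioned above has a closed form once each class's scores are modeled as Gaussian: AUC = Phi((mu_pos - mu_neg) / sqrt(var_pos + var_neg)). A minimal sketch with simulated scores (the distributions below are illustrative assumptions, not the study's data):

```python
import math
import random

def binormal_auc(neg_scores, pos_scores):
    """Binormal AUC: fit a Gaussian to each class's scores, then
    AUC = Phi((mu_pos - mu_neg) / sqrt(var_pos + var_neg))."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    m0, v0 = mean_var(neg_scores)
    m1, v1 = mean_var(pos_scores)
    z = (m1 - m0) / math.sqrt(v0 + v1)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Simulated malignancy scores (illustrative only, not the study's data).
rng = random.Random(0)
benign = [rng.gauss(0.40, 0.15) for _ in range(300)]
malignant = [rng.gauss(0.55, 0.15) for _ in range(80)]
auc = binormal_auc(benign, malignant)
```

Well-separated classes push z up and the AUC toward 1; identical score distributions give exactly 0.5.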
|
http://arxiv.org/abs/2207.13765v2
|
eess.IV
|
not_new_dataset
| 0.991989 |
2207.13765
|
Continuous User Authentication Using Machine Learning and Multi-Finger Mobile Touch Dynamics with a Novel Dataset
|
As technology grows and evolves rapidly, it is increasingly clear that mobile
devices are more commonly used for sensitive matters than ever before.
Continuous user authentication is sought after because single-factor or
multi-factor authentication only validates a user initially, which does not
help if an impostor can bypass that initial validation. The field of touch
dynamics emerges as a clear way to non-intrusively collect data about a user
and their behaviors in order to make important security-related decisions in
real time. In this paper, we present a novel dataset tracking 25 users playing
two mobile games, Snake.io and Minecraft, each for 10 minutes, along with their
relevant gesture data. From this data, we ran machine learning binary
classifiers, namely Random Forest and K-Nearest Neighbors, to attempt to
authenticate whether a sample of a particular user's actions was genuine. Our
strongest model returned an average accuracy of roughly 93% for both games,
showing touch dynamics can differentiate users effectively and is a feasible
consideration for authentication schemes. Our dataset can be observed at
https://github.com/zderidder/MC-Snake-Results
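As a rough sketch of the binary authentication task described above (deciding whether a gesture sample is genuine), here is a tiny K-Nearest-Neighbors classifier over synthetic touch features; the feature names and cluster parameters are invented for illustration and are not the paper's pipeline:

```python
import math
import random

def knn_predict(train, query, k=3):
    """Classify a feature vector by majority vote of its k nearest
    training samples. `train` is a list of (features, label) pairs."""
    dists = sorted(
        (math.dist(feats, query), label) for feats, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Synthetic (swipe_speed, pressure) features: the genuine user clusters
# around (1.0, 0.5); impostors around (2.0, 0.9). Illustrative only.
rng = random.Random(42)
train = (
    [((rng.gauss(1.0, 0.1), rng.gauss(0.5, 0.05)), "genuine") for _ in range(20)]
    + [((rng.gauss(2.0, 0.1), rng.gauss(0.9, 0.05)), "impostor") for _ in range(20)]
)

label = knn_predict(train, (1.05, 0.52))  # query sits in the genuine cluster
```

A continuous-authentication system would run such a check repeatedly on fresh gesture windows rather than once at login.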
|
http://arxiv.org/abs/2207.13648v1
|
cs.HC
|
new_dataset
| 0.994424 |
2207.13648
|
The Bearable Lightness of Big Data: Towards Massive Public Datasets in Scientific Machine Learning
|
In general, large datasets enable deep learning models to perform with good
accuracy and generalizability. However, massive high-fidelity simulation
datasets (from molecular chemistry, astrophysics, computational fluid dynamics
(CFD), etc.) can be challenging to curate due to dimensionality and storage
constraints. Lossy compression algorithms can help mitigate limitations from
storage, as long as the overall data fidelity is preserved. To illustrate this
point, we demonstrate that deep learning models, trained and tested on data
from a petascale CFD simulation, are robust to errors introduced during lossy
compression in a semantic segmentation problem. Our results demonstrate that
lossy compression algorithms offer a realistic pathway for exposing
high-fidelity scientific data to open-source data repositories for building
community datasets. In this paper, we outline, construct, and evaluate the
requirements for establishing a big data framework, demonstrated at
https://blastnet.github.io/, for scientific machine learning.
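The fidelity-versus-storage trade described above can be illustrated with a crude lossy scheme: uniform quantization of floating-point values with a bounded reconstruction error (the step size and the stand-in data below are invented):

```python
import random

def quantize(xs, step):
    """Lossy compress: snap each value to the nearest multiple of `step`.
    The pointwise reconstruction error is bounded by step / 2."""
    return [round(x / step) * step for x in xs]

rng = random.Random(1)
field = [rng.uniform(-1.0, 1.0) for _ in range(1000)]  # stand-in for CFD data
recon = quantize(field, step=0.01)
max_err = max(abs(a - b) for a, b in zip(field, recon))
```

Downstream models that tolerate this bounded error can train on the compressed copy, which is the premise behind exposing petascale simulation data through open repositories.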
|
http://arxiv.org/abs/2207.12546v1
|
cs.LG
|
not_new_dataset
| 0.992037 |
2207.12546
|
Transition1x -- a Dataset for Building Generalizable Reactive Machine Learning Potentials
|
Machine Learning (ML) models have, in contrast to their usefulness in
molecular dynamics studies, had limited success as surrogate potentials for
reaction barrier search. This is due to the scarcity of training data in relevant
transition state regions of chemical space. Currently, available datasets for
training ML models on small molecular systems almost exclusively contain
configurations at or near equilibrium. In this work, we present the dataset
Transition1x containing 9.6 million Density Functional Theory (DFT)
calculations of forces and energies of molecular configurations on and around
reaction pathways at the wB97x/6-31G(d) level of theory. The data was generated
by running Nudged Elastic Band (NEB) calculations with DFT on 10k reactions
while saving intermediate calculations. We train state-of-the-art equivariant
graph message-passing neural network models on Transition1x and cross-validate
on the popular ANI1x and QM9 datasets. We show that ML models cannot learn
features in transition-state regions solely by training on hitherto popular
benchmark datasets. Transition1x is a new challenging benchmark that will
provide an important step towards developing next-generation ML force fields
that also work far away from equilibrium configurations and reactive systems.
|
http://arxiv.org/abs/2207.12858v2
|
physics.chem-ph
|
new_dataset
| 0.994405 |
2207.12858
|
HouseX: A Fine-grained House Music Dataset and its Potential in the Music Industry
|
Machine sound classification has been one of the fundamental tasks of music
technology. A major branch of sound classification is the classification of
music genres. However, though covering most genres of music, existing music
genre datasets often do not contain fine-grained labels that indicate the
detailed sub-genres of music. In consideration of the consistency of genres of
songs in a mixtape or in a DJ (live) set, we have collected and annotated a
dataset of house music that provides four sub-genre labels, namely future house,
bass house, progressive house and melodic house. Experiments show that our
annotations well exhibit the characteristics of different categories. Also, we
have built baseline models that classify the sub-genre based on the
mel-spectrograms of a track, achieving strongly competitive results. Besides,
we have put forward a few application scenarios of our dataset and baseline
model, with a simulated sci-fi tunnel as a short demo built and rendered in
3D modeling software, with the colors of the lights automated by the output of
our model.
|
http://arxiv.org/abs/2207.11690v2
|
cs.SD
|
new_dataset
| 0.994542 |
2207.11690
|
GreenDB -- A Dataset and Benchmark for Extraction of Sustainability Information of Consumer Goods
|
The production, shipping, usage, and disposal of consumer goods have a
substantial impact on greenhouse gas emissions and the depletion of resources.
Machine Learning (ML) can help to foster sustainable consumption patterns by
accounting for sustainability aspects in product search or recommendations of
modern retail platforms. However, the lack of large high quality publicly
available product data with trustworthy sustainability information impedes the
development of ML technology that can help to reach our sustainability goals.
Here we present GreenDB, a database that collects products from European online
shops on a weekly basis. As a proxy for the products' sustainability, it relies
on sustainability labels, which are evaluated by experts. The GreenDB schema
extends the well-known schema.org Product definition and can be readily
integrated into existing product catalogs. We present initial results
demonstrating that ML models trained with our data can reliably (F1 score 96%)
predict the sustainability label of products. These contributions can help to
complement existing e-commerce experiences and ultimately encourage users toward
more sustainable consumption patterns.
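For reference, the F1 score quoted above is the harmonic mean of precision and recall; a quick sketch (the confusion counts below are hypothetical, not GreenDB's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical sustainability-label predictions: 96 true positives,
# 4 false positives, 4 false negatives.
print(round(f1_score(96, 4, 4), 2))  # -> 0.96
```

Unlike accuracy, F1 ignores true negatives, which makes it a reasonable headline metric when the positive class (correctly labeled sustainable products) is what matters.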
|
http://arxiv.org/abs/2207.10733v3
|
cs.LG
|
new_dataset
| 0.99449 |
2207.10733
|
Benchmark tests of atom segmentation deep learning models with a consistent dataset
|
The information content of atomic resolution scanning transmission electron
microscopy (STEM) images can often be reduced to a handful of parameters
describing each atomic column, chief amongst which is the column position.
Neural networks (NNs) are a high-performance, computationally efficient method
to automatically locate atomic columns in images, which has led to a profusion
of NN models and associated training datasets. We have developed a benchmark
dataset of simulated and experimental STEM images and used it to evaluate the
performance of two sets of recent NN models for atom location in STEM images.
Both models exhibit high performance for images of varying quality from several
different crystal lattices. However, there are important differences in
performance as a function of image quality, and both models perform poorly for
images outside the training data, such as interfaces with large differences in
background intensity. Both the benchmark dataset and the models are available
using the Foundry service for dissemination, discovery, and reuse of machine
learning models.
|
http://arxiv.org/abs/2207.10173v1
|
cond-mat.mtrl-sci
|
new_dataset
| 0.994315 |
2207.10173
|
The Anatomy of Video Editing: A Dataset and Benchmark Suite for AI-Assisted Video Editing
|
Machine learning is transforming the video editing industry. Recent advances
in computer vision have leveled-up video editing tasks such as intelligent
reframing, rotoscoping, color grading, or applying digital makeups. However,
most of the solutions have focused on video manipulation and VFX. This work
introduces the Anatomy of Video Editing, a dataset, and benchmark, to foster
research in AI-assisted video editing. Our benchmark suite focuses on video
editing tasks, beyond visual effects, such as automatic footage organization
and assisted video assembling. To enable research on these fronts, we annotate
more than 1.5M tags, with concepts relevant to cinematography, from 196176
shots sampled from movie scenes. We establish competitive baseline methods and
detailed analyses for each of the tasks. We hope our work sparks innovative
research towards underexplored areas of AI-assisted video editing.
|
http://arxiv.org/abs/2207.09812v2
|
cs.CV
|
new_dataset
| 0.99434 |
2207.09812
|