awacke1 committed · Commit 4fefc43 · verified · 1 Parent(s): 05c1b6a

Create text_file.txt

Files changed (1): text_file.txt (+404, -0)

text_file.txt ADDED
GPT-4o Key Features that are Helpful:
1. Automated Posting and Scheduling
2. Advanced editing, including video
3. Voice interaction
4. Character consistency
5. Multi-Image Processing
6. Video Processing
7. AI Screenshare


# 🩺🔍 Search Results
### 11 Jul 2023 | [FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven Social-Critical Algorithms](https://arxiv.org/abs/2307.05029) | [⬇️](https://arxiv.org/pdf/2307.05029)
*Normen Yu, Gang Tan, Saeid Tizpaz-Niari*

This thesis explores open-sourced machine learning (ML) model explanation tools
to understand whether these tools can allow a layman to visualize, understand,
and suggest intuitive remedies to unfairness in ML-based decision-support
systems. Machine learning models trained on datasets biased against minority
groups are increasingly used to guide life-altering social decisions, prompting
the urgent need to study their logic for unfairness. Because this problem
affects vast populations of the general public, it is critical for the
layperson -- not just subject matter experts in social justice or machine
learning experts -- to understand the nature of unfairness within these
algorithms and the potential trade-offs. Existing research on fairness in
machine learning focuses mostly on the mathematical definitions and tools to
understand and remedy unfair models, with some directly citing user-interactive
tools as necessary for future work. This thesis presents FairLay-ML, a
proof-of-concept GUI that integrates some of the most promising existing
research tools (e.g., Local Interpretable Model-Agnostic Explanations) with an
existing ML-focused GUI framework (e.g., Python Streamlit) to provide intuitive
explanations for unfair logic in ML models. We test FairLay-ML using models of
various accuracy and fairness generated by an unfairness detector tool,
Parfait-ML, and validate our results using Themis. Our study finds that the
technology stack used for FairLay-ML makes it easy to install and provides
real-time black-box explanations of pre-trained models to users. Furthermore,
the explanations provided translate to actionable remedies.
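
FairLay-ML's real-time black-box explanations build on LIME. As a minimal
sketch of that building block (plain LIME on toy tabular data, not FairLay-ML's
own code; the data and feature names are illustrative):

```python
# Hedged sketch: black-box explanation with LIME on tabular data,
# the kind of component FairLay-ML wraps in a Streamlit GUI.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X = np.random.rand(200, 4)               # toy feature matrix
y = (X[:, 0] > 0.5).astype(int)          # toy labels
model = RandomForestClassifier().fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["f0", "f1", "f2", "f3"], class_names=["0", "1"]
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())                     # per-feature contributions
```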

---------------

### 29 Jan 2020 | [stream-learn -- open-source Python library for difficult data stream batch analysis](https://arxiv.org/abs/2001.11077) | [⬇️](https://arxiv.org/pdf/2001.11077)
*Paweł Ksieniewicz, Paweł Zyblewski*

stream-learn is a Python package compatible with scikit-learn and developed for
drifting and imbalanced data stream analysis. Its main component is a stream
generator, which can produce a synthetic data stream incorporating each of the
three main concept drift types (i.e., sudden, gradual and incremental drift) in
their recurring or non-recurring versions. The package allows conducting
experiments following established evaluation methodologies (i.e.,
Test-Then-Train and Prequential). In addition, estimators adapted for data
stream classification have been implemented, including both simple classifiers
and state-of-the-art chunk-based and online classifier ensembles. To improve
computational efficiency, the package uses its own implementations of
prediction metrics for imbalanced binary classification tasks.
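
A minimal sketch of a Test-Then-Train experiment with the package (class and
argument names follow the stream-learn documentation and may differ across
versions):

```python
# Hedged sketch: evaluate a classifier on a synthetic drifting stream.
from sklearn.naive_bayes import GaussianNB
from strlearn.streams import StreamGenerator
from strlearn.evaluators import TestThenTrain
from strlearn.metrics import balanced_accuracy_score

stream = StreamGenerator(n_chunks=100, chunk_size=250, n_drifts=2)
evaluator = TestThenTrain(metrics=(balanced_accuracy_score,))
evaluator.process(stream, GaussianNB())   # test on each chunk, then train on it
print(evaluator.scores.shape)             # (n_classifiers, n_chunks - 1, n_metrics)
```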

---------------

### 16 Oct 2022 | [POTATO: exPlainable infOrmation exTrAcTion framewOrk](https://arxiv.org/abs/2201.13230) | [⬇️](https://arxiv.org/pdf/2201.13230)
*Ádám Kovács, Kinga Gémes, Eszter Iklódi, Gábor Recski*

We present POTATO, a task- and language-independent framework for
human-in-the-loop (HITL) learning of rule-based text classifiers using
graph-based features. POTATO handles any type of directed graph and supports
parsing text into Abstract Meaning Representations (AMR), Universal
Dependencies (UD), and 4lang semantic graphs. A Streamlit-based user interface
allows users to build rule systems from graph patterns, provides real-time
evaluation based on ground truth data, and suggests rules by ranking graph
features using interpretable machine learning models. Users can also provide
patterns over graphs using regular expressions, and POTATO can recommend
refinements of such rules. POTATO is applied in projects across domains and
languages, including classification tasks on German legal text and English
social media data. All components of our system are written in Python, can be
installed via pip, and are released under an MIT License on GitHub.
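
POTATO's rule systems match graph patterns against parsed text. As a conceptual
illustration only (networkx stand-in, not POTATO's actual API), a rule can be
seen as a small pattern graph that fires when it is contained in the parse:

```python
# Conceptual illustration of graph-pattern rules (not POTATO's API):
# a rule fires if the pattern graph is contained in the parsed graph.
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

# Toy dependency-style parse: sue -nsubj-> company, sue -obj-> damages
sent = nx.DiGraph()
sent.add_edge("sue", "company", label="nsubj")
sent.add_edge("sue", "damages", label="obj")

# Toy rule: fire on any head with an "obj" dependent (edge labels only;
# real POTATO rules also constrain node labels).
pattern = nx.DiGraph()
pattern.add_edge("head", "dep", label="obj")

m = DiGraphMatcher(sent, pattern,
                   edge_match=lambda e1, e2: e1["label"] == e2["label"])
print(m.subgraph_is_monomorphic())  # True -> the rule matches this sentence
```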

---------------

### 01 Aug 2019 | [ProSper -- A Python Library for Probabilistic Sparse Coding with Non-Standard Priors and Superpositions](https://arxiv.org/abs/1908.06843) | [⬇️](https://arxiv.org/pdf/1908.06843)
*Georgios Exarchakis, Jörg Bornschein, Abdul-Saboor Sheikh, Zhenwen Dai, Marc Henniges, Jakob Drefs, Jörg Lücke*

ProSper is a Python library containing probabilistic algorithms to learn
dictionaries. Given a set of data points, the implemented algorithms seek to
learn the elementary components that have generated the data. The library
widens the scope of dictionary learning approaches beyond implementations of
standard approaches such as ICA, NMF or standard L1 sparse coding. The
implemented algorithms are especially well-suited when data consist of
components that combine non-linearly and/or when the data require flexible
prior distributions. Furthermore, the implemented algorithms go beyond standard
approaches by inferring prior and noise parameters of the data, and they
provide rich a-posteriori approximations for inference. The library is designed
to be extendable and currently includes: Binary Sparse Coding (BSC), Ternary
Sparse Coding (TSC), Discrete Sparse Coding (DSC), Maximal Causes Analysis
(MCA), Maximum Magnitude Causes Analysis (MMCA), and Gaussian Sparse Coding
(GSC, a recent spike-and-slab sparse coding approach). The algorithms are
scalable due to a combination of variational approximations and
parallelization. Implementations of all algorithms allow for parallel execution
on multiple CPUs and multiple machines for medium to large-scale applications.
Typical large-scale runs of the algorithms can use hundreds of CPUs to learn
hundreds of dictionary elements from data with tens of millions of
floating-point numbers, so that models with several hundred thousand
parameters can be optimized. The library is designed to have minimal
dependencies and to be easy to use. It targets users of dictionary learning
algorithms and machine learning researchers.

---------------

### 27 Jul 2020 | [metric-learn: Metric Learning Algorithms in Python](https://arxiv.org/abs/1908.04710) | [⬇️](https://arxiv.org/pdf/1908.04710)
*William de Vazelhes, CJ Carey, Yuan Tang, Nathalie Vauquier, Aurélien Bellet*

metric-learn is an open source Python package implementing supervised and
weakly-supervised distance metric learning algorithms. As part of
scikit-learn-contrib, it provides a unified interface compatible with
scikit-learn, which makes it easy to perform cross-validation, model selection,
and pipelining with other machine learning estimators. metric-learn is
thoroughly tested and available on PyPI under the MIT license.
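
A minimal sketch of the scikit-learn-compatible usage (estimator and parameter
names follow the package docs; treat the specifics as assumptions):

```python
# Hedged sketch: learn a metric with metric-learn, then classify in the
# transformed space via an ordinary scikit-learn pipeline.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from metric_learn import NCA

X, y = load_iris(return_X_y=True)

# NCA learns a linear transformation; k-NN then operates in the new space.
pipe = make_pipeline(NCA(max_iter=100), KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(pipe, X, y, cv=5).mean())
```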

---------------

### 10 Nov 2023 | [Deep Fast Vision: A Python Library for Accelerated Deep Transfer Learning Vision Prototyping](https://arxiv.org/abs/2311.06169) | [⬇️](https://arxiv.org/pdf/2311.06169)
*Fabi Prezja*

Deep learning-based vision is characterized by intricate frameworks that often
necessitate a profound understanding, presenting a barrier to newcomers and
limiting broad adoption. With many researchers grappling with the constraints
of smaller datasets, there is a pronounced reliance on pre-trained neural
networks, especially for tasks such as image classification. This reliance is
further intensified in niche imaging areas where obtaining vast datasets is
challenging. Despite the widespread use of transfer learning as a remedy to the
small-dataset dilemma, a conspicuous absence of tailored auto-ML solutions
persists. Addressing these challenges is "Deep Fast Vision", a Python library
that streamlines the deep learning process. The tool offers a user-friendly
experience, enabling results through a simple nested dictionary definition and
helping to democratize deep learning for non-experts. Designed for simplicity
and scalability, Deep Fast Vision serves as a bridge, connecting the
complexities of existing deep learning frameworks with the needs of a diverse
user base.
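
For orientation, this is the transfer-learning pattern such a wrapper
automates, written in plain Keras (not Deep Fast Vision's own
nested-dictionary API):

```python
# Hedged sketch: standard transfer learning with a frozen pre-trained base.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # freeze pre-trained weights for small datasets

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # new task head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```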

---------------

### 12 Jul 2021 | [Online Graph Dictionary Learning](https://arxiv.org/abs/2102.06555) | [⬇️](https://arxiv.org/pdf/2102.06555)
*Cédric Vincent-Cuaz, Titouan Vayer, Rémi Flamary, Marco Corneli, Nicolas Courty*

Dictionary learning is a key tool for representation learning that explains the
data as a linear combination of a few basic elements. Yet, this analysis is not
directly applicable in the context of graph learning, as graphs usually belong
to different metric spaces. We fill this gap by proposing a new online Graph
Dictionary Learning approach, which uses the Gromov-Wasserstein divergence for
the data-fitting term. In our work, graphs are encoded through their nodes'
pairwise relations and modeled as convex combinations of graph atoms, i.e.,
dictionary elements, estimated by an online stochastic algorithm that operates
on a dataset of unregistered graphs with potentially different numbers of
nodes. Our approach naturally extends to labeled graphs and is completed by a
novel upper bound that can be used as a fast approximation of
Gromov-Wasserstein in the embedding space. We provide numerical evidence
showing the interest of our approach for unsupervised embedding of graph
datasets and for online graph subspace estimation and tracking.
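
The Gromov-Wasserstein divergence the method builds on is implemented in the
POT library; a minimal sketch of comparing two graphs through their
pairwise-relation matrices (function names per POT's documentation, not the
paper's own code):

```python
# Hedged sketch: Gromov-Wasserstein divergence between two graphs, each
# encoded by its nodes' pairwise relations (here, adjacency matrices).
import numpy as np
import networkx as nx
import ot

G1, G2 = nx.cycle_graph(6), nx.path_graph(5)
C1 = nx.to_numpy_array(G1)            # pairwise-relation matrices
C2 = nx.to_numpy_array(G2)
p = np.full(len(C1), 1 / len(C1))     # uniform node weights
q = np.full(len(C2), 1 / len(C2))

gw = ot.gromov.gromov_wasserstein2(C1, C2, p, q, loss_fun="square_loss")
print(gw)                             # GW discrepancy between the two graphs
```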

---------------

### 25 Nov 2021 | [Online Orthogonal Dictionary Learning Based on Frank-Wolfe Method](https://arxiv.org/abs/2103.01484) | [⬇️](https://arxiv.org/pdf/2103.01484)
*Ye Xue and Vincent Lau*

Dictionary learning is a widely used unsupervised learning method in signal
processing and machine learning. Most existing dictionary learning methods
operate in an offline manner, in one of two main ways. One is to alternately
optimize both the dictionary and the sparse code; the other is to optimize the
dictionary by restricting it over the orthogonal group. The latter, called
orthogonal dictionary learning, has a lower-complexity implementation and is
hence more favorable for low-cost devices. However, existing schemes for
orthogonal dictionary learning only work with batch data and cannot be
implemented online, which makes them unsuitable for real-time applications.
This paper proposes a novel online orthogonal dictionary scheme to dynamically
learn the dictionary from streaming data without storing the historical data.
The proposed scheme includes a novel problem formulation and an efficient
online algorithm design with convergence analysis. In the problem formulation,
we relax the orthogonal constraint to enable an efficient online algorithm. In
the algorithm design, we propose a new Frank-Wolfe-based online algorithm with
a convergence rate of $O(\ln t / t^{1/4})$. The convergence rate in terms of
key system parameters is also derived. Experiments with synthetic data and
real-world sensor readings demonstrate the effectiveness and efficiency of the
proposed online orthogonal dictionary learning scheme.
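
The classic Frank-Wolfe step replaces projection with a linear minimization
over the constraint set. A generic sketch over an l1 ball (illustrating the
method itself, not this paper's specific online scheme):

```python
# Generic Frank-Wolfe iteration: minimize f over an l1 ball of radius r.
import numpy as np

def frank_wolfe(grad_f, x0, r=1.0, steps=200):
    x = x0.copy()
    for t in range(steps):
        g = grad_f(x)
        # Linear minimization oracle over the l1 ball: a signed vertex.
        s = np.zeros_like(x)
        i = np.argmax(np.abs(g))
        s[i] = -r * np.sign(g[i])
        gamma = 2.0 / (t + 2.0)           # standard step size
        x = (1 - gamma) * x + gamma * s   # stays inside the convex set
    return x

# Example: least squares min ||Ax - b||^2 subject to ||x||_1 <= 1.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
x = frank_wolfe(lambda x: 2 * A.T @ (A @ x - b), np.zeros(10), r=1.0)
print(np.linalg.norm(A @ x - b))
```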

---------------

### 14 Jun 2022 | [Supervised Dictionary Learning with Auxiliary Covariates](https://arxiv.org/abs/2206.06774) | [⬇️](https://arxiv.org/pdf/2206.06774)
*Joowon Lee, Hanbaek Lyu, Weixin Yao*

Supervised dictionary learning (SDL) is a classical machine learning method
that simultaneously seeks feature extraction and classification, which are not
necessarily a priori aligned objectives. The goal of SDL is to learn a
class-discriminative dictionary, which is a set of latent feature vectors that
can explain both the features and the labels of observed data. In this paper,
we provide a systematic study of SDL, including the theory, algorithms, and
applications of SDL. First, we provide a novel framework that `lifts' SDL as a
convex problem in a combined factor space and propose a low-rank projected
gradient descent algorithm that converges exponentially to the global minimizer
of the objective. We also formulate generative models of SDL and provide global
estimation guarantees of the true parameters depending on the hyperparameter
regime. Second, viewing SDL as a nonconvex constrained optimization problem, we
provide an efficient block coordinate descent algorithm that is guaranteed to
find an $\varepsilon$-stationary point of the objective in
$O(\varepsilon^{-1}(\log \varepsilon^{-1})^{2})$ iterations. For the
corresponding generative model, we establish a novel non-asymptotic local
consistency result for constrained and regularized maximum likelihood
estimation problems, which may be of independent interest. Third, we apply SDL
to imbalanced document classification by supervised topic modeling and to
pneumonia detection from chest X-ray images. We also provide simulation studies
demonstrating that SDL becomes more effective when there is a discrepancy
between the best reconstructive and the best discriminative dictionaries.
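
Schematically, an SDL objective couples reconstruction and classification; a
generic form (for illustration, not necessarily this paper's exact
formulation) is

$$\min_{D,\,W,\,A}\; \|X - DA\|_F^2 \;+\; \xi\,\mathcal{L}\big(y,\, W^\top A\big) \;+\; \lambda \|A\|_1,$$

where $D$ is the dictionary, $A$ the codes, $W$ the classifier weights, and
$\xi$ trades off reconstruction against label fit.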

---------------

### 07 Oct 2013 | [Online Unsupervised Feature Learning for Visual Tracking](https://arxiv.org/abs/1310.1690) | [⬇️](https://arxiv.org/pdf/1310.1690)
*Fayao Liu, Chunhua Shen, Ian Reid, Anton van den Hengel*

Feature encoding with respect to an over-complete dictionary learned by
unsupervised methods, followed by spatial pyramid pooling and linear
classification, has exhibited powerful strength in various vision applications.
Here we propose to use this feature learning pipeline for visual tracking.
Tracking is implemented using tracking-by-detection, and the resulting
framework is very simple yet effective. First, online dictionary learning is
used to build a dictionary, which captures the appearance changes of the
tracking target as well as the background changes. Given a test image window,
we extract local image patches from it, and each local patch is encoded with
respect to the dictionary. The encoded features are then pooled over a spatial
pyramid to form an aggregated feature vector. Finally, a simple linear
classifier is trained on these features. Our experiments show that the
proposed tracker, powerful albeit simple, outperforms all the state-of-the-art
tracking methods that we have tested. Moreover, we evaluate the performance of
different dictionary learning and feature encoding methods in the proposed
tracking framework and analyse the impact of each component in the tracking
scenario. We also demonstrate the flexibility of feature learning by plugging
it into Hare et al.'s tracking method. The outcome is, to our knowledge, the
best tracker ever reported, which combines the advantages of both feature
learning and structured output prediction.
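
The encoding stage of this pipeline can be sketched with scikit-learn's online
dictionary learning as a stand-in (the paper's own implementation and the
pooling/classification stages are not shown):

```python
# Hedged sketch: learn a dictionary on image patches, then use the
# resulting sparse codes as patch features.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

patches = np.random.rand(500, 64)   # toy 8x8 patches, flattened
dico = MiniBatchDictionaryLearning(n_components=100, batch_size=32,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5)
codes = dico.fit(patches).transform(patches)   # sparse code per patch
print(codes.shape)                             # (500, 100)
```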

---------------

### 04 Mar 2024 | [Automated Generation of Multiple-Choice Cloze Questions for Assessing English Vocabulary Using GPT-turbo 3.5](https://arxiv.org/abs/2403.02078) | [⬇️](https://arxiv.org/pdf/2403.02078)
*Qiao Wang, Ralph Rose, Naho Orita, Ayaka Sugawara*

A common way of assessing language learners' mastery of vocabulary is via
multiple-choice cloze (i.e., fill-in-the-blank) questions. But the creation of
test items can be laborious for individual teachers or in large-scale language
programs. In this paper, we evaluate a new method for automatically generating
these types of questions using large language models (LLMs). The VocaTT
(vocabulary teaching and training) engine is written in Python and comprises
three basic steps: pre-processing target word lists, generating sentences and
candidate word options using GPT, and finally selecting suitable word options.
To test the efficiency of this system, 60 questions were generated targeting
academic words. The generated items were reviewed by expert reviewers who
judged the well-formedness of the sentences and word options, adding comments
to items judged not well-formed. Results showed a 75% rate of well-formedness
for sentences and a 66.85% rate for suitable word options. This is a marked
improvement over the generator used earlier in our research, which did not take
advantage of GPT's capabilities. Post-hoc qualitative analysis reveals several
points for improvement in future work, including cross-referencing
part-of-speech tagging, better sentence validation, and improving GPT prompts.
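
As a minimal sketch of the sentence-and-options generation step (the prompt
and model name here are illustrative assumptions, not the VocaTT engine's
actual code):

```python
# Hedged sketch: generate a cloze item with the OpenAI chat API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def cloze_item(target_word: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Write one academic sentence using '{target_word}', then "
                "replace it with a blank and list three distractor words."
            ),
        }],
    )
    return resp.choices[0].message.content

print(cloze_item("mitigate"))
```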

---------------

### 13 Dec 2016 | TF.Learn: TensorFlow's High-level Module for Distributed Machine Learning | ⬇️
*Yuan Tang*

TF.Learn is a high-level Python module for distributed machine learning inside
TensorFlow. It provides an easy-to-use scikit-learn-style interface to simplify
the process of creating, configuring, training, evaluating, and experimenting
with machine learning models. TF.Learn integrates a wide range of
state-of-the-art machine learning algorithms built on top of TensorFlow's
low-level APIs for small- to large-scale supervised and unsupervised problems.
This module focuses on bringing machine learning to non-specialists using a
general-purpose high-level language, as well as researchers who want to
implement, benchmark, and compare their new methods in a structured
environment. Emphasis is put on ease of use, performance, documentation, and
API consistency.

---------------

### 11 Dec 2019 | Majorization Minimization Technique for Optimally Solving Deep Dictionary Learning | ⬇️
*Vanika Singhal and Angshul Majumdar*

The concept of deep dictionary learning has been proposed recently. Unlike
shallow dictionary learning, which learns a single level of dictionary to
represent the data, it uses multiple layers of dictionaries. So far, the
problem could only be solved in a greedy fashion; this was achieved by learning
a single layer of dictionary in each stage, where the coefficients from the
previous layer acted as inputs to the subsequent layer (only the first layer
used the training samples as inputs). This was not optimal; there was feedback
from shallower to deeper layers but not the other way. This work proposes an
optimal solution to deep dictionary learning whereby all the layers of
dictionaries are solved simultaneously. We employ the Majorization Minimization
approach. Experiments have been carried out on benchmark datasets; they show
that optimal learning indeed improves over greedy piecemeal learning.
Comparisons with other unsupervised deep learning tools (stacked denoising
autoencoder, deep belief network, contractive autoencoder and K-sparse
autoencoder) show that our method surpasses their performance in both accuracy
and speed.
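
Schematically, deep dictionary learning represents the data through nested
dictionaries (a generic schematic, not the paper's exact notation):

$$X \approx D_1\,\varphi\!\big(D_2\,\varphi(\cdots D_L Z)\big),$$

where the $D_i$ are layer-wise dictionaries, $\varphi$ a nonlinearity, and $Z$
the deepest coefficients; this work solves for all $D_i$ and $Z$ jointly
rather than greedily, layer by layer.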

---------------

### 17 May 2022 | Applications of Deep Neural Networks with Keras | ⬇️
*Jeff Heaton*

Deep learning is a group of exciting new technologies for neural networks.
Through a combination of advanced training techniques and neural network
architectural components, it is now possible to create neural networks that can
handle tabular data, images, text, and audio as both input and output. Deep
learning allows a neural network to learn hierarchies of information in a way
that is like the function of the human brain. This course will introduce the
student to classic neural network structures, Convolutional Neural Networks
(CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), Generative
Adversarial Networks (GAN), and reinforcement learning. Application of these
architectures to computer vision, time series, security, natural language
processing (NLP), and data generation will be covered. High-Performance
Computing (HPC) aspects will demonstrate how deep learning can be leveraged
both on graphical processing units (GPUs) and on grids. Focus is primarily on
the application of deep learning to problems, with some introduction to
mathematical foundations. Readers will use the Python programming language to
implement deep learning using Google TensorFlow and Keras. It is not necessary
to know Python prior to this book; however, familiarity with at least one
programming language is assumed.

---------------

### 26 Feb 2015 | Learning computationally efficient dictionaries and their implementation as fast transforms | ⬇️
*Luc Le Magoarou (INRIA - IRISA), Rémi Gribonval (INRIA - IRISA)*

Dictionary learning is a branch of signal processing and machine learning that
aims at finding a frame (called a dictionary) in which some training data
admits a sparse representation. The sparser the representation, the better the
dictionary. The resulting dictionary is in general a dense matrix, and its
manipulation can be computationally costly both at the learning stage and later
in the usage of this dictionary, for tasks such as sparse coding. Dictionary
learning is thus limited to relatively small-scale problems. In this paper,
inspired by usual fast transforms, we consider a general dictionary structure
that allows cheaper manipulation, and propose an algorithm to learn such
dictionaries, together with their fast implementation, over training data. The
approach is demonstrated experimentally with the factorization of the Hadamard
matrix and with synthetic dictionary learning experiments.
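
For reference, the standard dictionary learning objective being referred to
here, in its common $\ell_1$-regularized form (one standard formulation among
several), is

$$\min_{D,\,\Gamma}\; \tfrac{1}{2}\|X - D\Gamma\|_F^2 + \lambda\|\Gamma\|_1,$$

where $X$ holds the training data, $D$ is the dictionary, and $\Gamma$ the
sparse representation coefficients.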

---------------

### 03 Dec 2021 | SSDL: Self-Supervised Dictionary Learning | ⬇️
*Shuai Shao, Lei Xing, Wei Yu, Rui Xu, Yanjiang Wang, Baodi Liu*

Label-embedded dictionary learning (DL) algorithms generate influential
dictionaries by introducing discriminative information. However, they share a
limitation: all label-embedded DL methods rely on labels and therefore achieve
ideal performance only in supervised learning, while in semi-supervised and
unsupervised settings they are no longer effective. Inspired by the concept of
self-supervised learning (e.g., setting a pretext task to generate a universal
model for the downstream task), we propose a Self-Supervised Dictionary
Learning (SSDL) framework to address this challenge. Specifically, we first
design a $p$-Laplacian Attention Hypergraph Learning (pAHL) block as the
pretext task to generate pseudo soft labels for DL. Then, we adopt the pseudo
labels to train a dictionary from a primary label-embedded DL method. We
evaluate SSDL on two human activity recognition datasets. Comparisons with
other state-of-the-art methods demonstrate the efficiency of SSDL.

---------------

### 05 Jun 2018 | Scikit-learn: Machine Learning in Python | ⬇️
*Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Andreas Müller, Joel Nothman, Gilles Louppe, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, Édouard Duchesnay*

Scikit-learn is a Python module integrating a wide range of state-of-the-art
machine learning algorithms for medium-scale supervised and unsupervised
problems. This package focuses on bringing machine learning to non-specialists
using a general-purpose high-level language. Emphasis is put on ease of use,
performance, documentation, and API consistency. It has minimal dependencies
and is distributed under the simplified BSD license, encouraging its use in
both academic and commercial settings. Source code, binaries, and documentation
can be downloaded from http://scikit-learn.org.
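
The unified estimator API referred to above looks like this in practice; a
minimal, standard example:

```python
# Minimal sketch of scikit-learn's estimator API: a preprocessing +
# classification pipeline evaluated with 5-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
print(cross_val_score(pipe, X, y, cv=5).mean())  # mean accuracy over folds
```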

---------------

### 15 Jul 2020 | Complete Dictionary Learning via $\ell_p$-norm Maximization | ⬇️
*Yifei Shen, Ye Xue, Jun Zhang, Khaled B. Letaief, Vincent Lau*

Dictionary learning is a classic representation learning method that has been
widely applied in signal processing and data analytics. In this paper, we
investigate a family of $\ell_p$-norm ($p>2$, $p \in \mathbb{N}$) maximization
approaches for the complete dictionary learning problem from theoretical and
algorithmic aspects. Specifically, we prove that the global maximizers of these
formulations are very close to the true dictionary with high probability, even
when Gaussian noise is present. Based on the generalized power method (GPM), an
efficient algorithm is then developed for the $\ell_p$-based formulations. We
further show the efficacy of the developed algorithm: for the population GPM
algorithm over the sphere constraint, it first quickly enters the neighborhood
of a global maximizer and then converges linearly in this region. Extensive
experiments demonstrate that the $\ell_p$-based approaches enjoy higher
computational efficiency and better robustness than conventional approaches,
and that $p=3$ performs the best.
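
Schematically, such formulations recover an orthonormal dictionary by
maximizing $\ell_p$ norms of the resulting codes; a generic form (an
illustration of the idea, not necessarily this paper's exact objective) is

$$\max_{D \in \mathcal{O}(n)} \sum_{j=1}^{m} \big\| D^\top y_j \big\|_p^p, \qquad p > 2,$$

exploiting the fact that, at a fixed $\ell_2$ norm, sparser vectors have
larger $\ell_p$ norms for $p > 2$.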

---------------

### 27 Nov 2023 | Utilizing Explainability Techniques for Reinforcement Learning Model Assurance | ⬇️
*Alexander Tapley, Kyle Gatesman, Luis Robaina, Brett Bissey, Joseph Weissman*

Explainable Reinforcement Learning (XRL) can provide transparency into the
decision-making process of a Deep Reinforcement Learning (DRL) model and
increase user trust and adoption in real-world use cases. By utilizing XRL
techniques, researchers can identify potential vulnerabilities within a trained
DRL model prior to deployment, thereby limiting the potential for mission
failure or mistakes by the system. This paper introduces the ARLIN (Assured RL
Model Interrogation) Toolkit, an open-source Python library that identifies
potential vulnerabilities and critical points within trained DRL models through
detailed, human-interpretable explainability outputs. To illustrate ARLIN's
effectiveness, we provide explainability visualizations and vulnerability
analysis for a publicly available DRL model. The open-source code repository is
available for download at https://github.com/mitre/arlin.

---------------

### 19 Sep 2019 | InterpretML: A Unified Framework for Machine Learning Interpretability | ⬇️
*Harsha Nori, Samuel Jenkins, Paul Koch, Rich Caruana*

InterpretML is an open-source Python package which exposes machine learning
interpretability algorithms to practitioners and researchers. InterpretML
exposes two types of interpretability: glassbox models, which are machine
learning models designed for interpretability (e.g., linear models, rule lists,
generalized additive models), and blackbox explainability techniques for
explaining existing systems (e.g., Partial Dependence, LIME). The package
enables practitioners to easily compare interpretability algorithms by exposing
multiple methods under a unified API and by having a built-in, extensible
visualization platform. InterpretML also includes the first implementation of
the Explainable Boosting Machine, a powerful, interpretable, glassbox model
that can be as accurate as many blackbox models. The MIT-licensed source code
can be downloaded from github.com/microsoft/interpret.
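
A minimal sketch of training the Explainable Boosting Machine (the import path
follows the package documentation; treat the specifics as assumptions):

```python
# Hedged sketch: fit an interpretable glassbox model with InterpretML.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_tr, y_tr)
print(ebm.score(X_te, y_te))   # accuracy of an interpretable model
# ebm.explain_global() exposes per-feature shape functions for inspection.
```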