Loss functions for classification : For proper loss functions, the loss margin can be defined as $\mu_\phi = -\frac{\phi'(0)}{\phi''(0)}$ and shown to be directly related to the regularization properties of the classifier. Specifically, a loss function of larger margin increases regularization and produces better estimates of the posterior probability. For example, the margin of the logistic loss can be increased by introducing a parameter $\gamma$ and writing the logistic loss as $\frac{1}{\gamma}\log(1+e^{-\gamma v})$, where smaller $0 < \gamma < 1$ increases the margin of the loss. This is directly equivalent to decreasing the learning rate in gradient boosting, $F_m(x) = F_{m-1}(x) + \gamma h_m(x)$, where decreasing $\gamma$ improves the regularization of the boosted classifier. The theory makes it clear that when a learning rate of $\gamma$ is used, the correct formula for retrieving the posterior probability is now $\eta = f^{-1}(\gamma F(x))$. In conclusion, by choosing a loss function with larger margin (smaller $\gamma$) we increase regularization and improve our estimates of the posterior probability, which in turn improves the ROC curve of the final classifier.
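As an illustration, a minimal Python sketch of posterior retrieval under a learning rate $\gamma$, assuming the logistic link $f^{-1}(v) = 1/(1+e^{-v})$ (the function name and numbers are illustrative):

```python
import numpy as np

# Sketch: retrieving the posterior from a boosted score F(x) trained with
# learning rate gamma, via eta = f^{-1}(gamma * F(x)), assuming the logistic
# link f^{-1}(v) = 1 / (1 + exp(-v)). Values are illustrative.
def posterior(F_x, gamma):
    return 1.0 / (1.0 + np.exp(-gamma * F_x))

print(posterior(F_x=2.5, gamma=0.3))   # ~0.68, vs. ~0.92 if gamma is ignored
```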
|
Loss functions for classification : While more commonly used in regression, the square loss function can be re-written as a function $\phi(yf(\vec{x}))$ and utilized for classification. It can be generated using (2) and Table-I as follows: $\phi(v) = C[f^{-1}(v)] + \left(1 - f^{-1}(v)\right)C'[f^{-1}(v)] = 4\left(\tfrac{1}{2}(v+1)\right)\left(1-\tfrac{1}{2}(v+1)\right) + \left(1-\tfrac{1}{2}(v+1)\right)\left(4 - 8\left(\tfrac{1}{2}(v+1)\right)\right) = (1-v)^2.$ The square loss function is both convex and smooth. However, the square loss function tends to penalize outliers excessively, leading to slower convergence rates (with regards to sample complexity) than for the logistic loss or hinge loss functions. In addition, functions which yield high values of $f(\vec{x})$ for some $x \in X$ will perform poorly with the square loss function, since high values of $yf(\vec{x})$ will be penalized severely, regardless of whether the signs of $y$ and $f(\vec{x})$ match. A benefit of the square loss function is that its structure lends itself to easy cross validation of regularization parameters. Specifically for Tikhonov regularization, one can solve for the regularization parameter using leave-one-out cross-validation in the same time as it would take to solve a single problem. The minimizer of $I[f]$ for the square loss function can be directly found from equation (1) as $f^*_{\text{Square}} = 2\eta - 1 = 2p(1\mid x) - 1.$
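A minimal numeric sketch (in Python; helper names are illustrative) of the two formulas above: the loss $\phi(v) = (1-v)^2$ evaluated at the margin $v = yf(\vec{x})$, and inversion of the minimizer to recover the posterior. The nonzero penalty at $v = 3$ shows how the square loss penalizes even predictions that are "too correct":

```python
import numpy as np

def square_loss(v):
    # phi(v) = (1 - v)^2, evaluated at the margin v = y * f(x)
    return (1.0 - v) ** 2

def posterior_from_score(f):
    # Invert the minimizer f* = 2*eta - 1 to recover eta = p(1|x)
    return (f + 1.0) / 2.0

margins = np.array([-2.0, 0.0, 1.0, 3.0])
print(square_loss(margins))        # [9. 1. 0. 4.] -- note the penalty at v = 3
print(posterior_from_score(0.5))   # 0.75
```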
|
Loss functions for classification : The logistic loss function can be generated using (2) and Table-I as follows: $\phi(v) = C[f^{-1}(v)] + \left(1-f^{-1}(v)\right)C'[f^{-1}(v)] = \frac{1}{\log(2)}\left[-\frac{e^v}{1+e^v}\log\frac{e^v}{1+e^v} - \left(1-\frac{e^v}{1+e^v}\right)\log\left(1-\frac{e^v}{1+e^v}\right)\right] + \left(1-\frac{e^v}{1+e^v}\right)\left[-\frac{1}{\log(2)}\log\left(\frac{\frac{e^v}{1+e^v}}{1-\frac{e^v}{1+e^v}}\right)\right] = \frac{1}{\log(2)}\log(1+e^{-v}).$ The logistic loss is convex and grows linearly for negative values, which makes it less sensitive to outliers. The logistic loss is used in the LogitBoost algorithm. The minimizer of $I[f]$ for the logistic loss function can be directly found from equation (1) as $f^*_{\text{Logistic}} = \log\left(\frac{\eta}{1-\eta}\right) = \log\left(\frac{p(1\mid x)}{1-p(1\mid x)}\right).$ This function is undefined when $p(1\mid x) = 1$ or $p(1\mid x) = 0$ (tending toward ∞ and −∞ respectively), but predicts a smooth curve which grows when $p(1\mid x)$ increases and equals 0 when $p(1\mid x) = 0.5$. It is easy to check that the logistic loss and binary cross-entropy loss (log loss) are in fact the same (up to a multiplicative constant $\frac{1}{\log(2)}$). The cross-entropy loss is closely related to the Kullback–Leibler divergence between the empirical distribution and the predicted distribution. The cross-entropy loss is ubiquitous in modern deep neural networks.
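A numeric check (a Python sketch; names are illustrative) that the logistic loss coincides with binary cross-entropy up to the factor $\frac{1}{\log 2}$:

```python
import numpy as np

def logistic_loss(v):
    # phi(v) = (1/log 2) * log(1 + exp(-v)), with v = y * f(x), y in {-1, +1}
    return np.log1p(np.exp(-v)) / np.log(2)

def cross_entropy(y01, p):
    # Binary cross-entropy with label y01 in {0, 1} and predicted p = p(1|x)
    return -(y01 * np.log(p) + (1 - y01) * np.log(1 - p))

f = 1.3                          # raw score f(x)
p = 1 / (1 + np.exp(-f))         # sigmoid maps the score to p(1|x)
# For y = +1 (y01 = 1) and y = -1 (y01 = 0), the two losses agree up to 1/log(2).
assert np.isclose(logistic_loss(+f) * np.log(2), cross_entropy(1, p))
assert np.isclose(logistic_loss(-f) * np.log(2), cross_entropy(0, p))
print("logistic loss and cross-entropy match up to 1/log(2)")
```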
|
Loss functions for classification : The exponential loss function can be generated using (2) and Table-I as follows: $\phi(v) = C[f^{-1}(v)] + \left(1-f^{-1}(v)\right)C'[f^{-1}(v)] = 2\sqrt{\left(\frac{e^{2v}}{1+e^{2v}}\right)\left(1-\frac{e^{2v}}{1+e^{2v}}\right)} + \left(1-\frac{e^{2v}}{1+e^{2v}}\right)\frac{1-\frac{2e^{2v}}{1+e^{2v}}}{\sqrt{\frac{e^{2v}}{1+e^{2v}}\left(1-\frac{e^{2v}}{1+e^{2v}}\right)}} = e^{-v}.$ The exponential loss is convex and grows exponentially for negative values, which makes it more sensitive to outliers. The exponentially-weighted 0-1 loss is used in the AdaBoost algorithm, implicitly giving rise to the exponential loss. The minimizer of $I[f]$ for the exponential loss function can be directly found from equation (1) as $f^*_{\text{Exp}} = \frac{1}{2}\log\left(\frac{\eta}{1-\eta}\right) = \frac{1}{2}\log\left(\frac{p(1\mid x)}{1-p(1\mid x)}\right).$
|
Loss functions for classification : The Savage loss can be generated using (2) and Table-I as follows: $\phi(v) = C[f^{-1}(v)] + \left(1-f^{-1}(v)\right)C'[f^{-1}(v)] = \left(\frac{e^v}{1+e^v}\right)\left(1-\frac{e^v}{1+e^v}\right) + \left(1-\frac{e^v}{1+e^v}\right)\left(1-\frac{2e^v}{1+e^v}\right) = \frac{1}{(1+e^v)^2}.$ The Savage loss is quasi-convex and is bounded for large negative values, which makes it less sensitive to outliers. The Savage loss has been used in gradient boosting and the SavageBoost algorithm. The minimizer of $I[f]$ for the Savage loss function can be directly found from equation (1) as $f^*_{\text{Savage}} = \log\left(\frac{\eta}{1-\eta}\right) = \log\left(\frac{p(1\mid x)}{1-p(1\mid x)}\right).$
|
Loss functions for classification : The Tangent loss can be generated using (2) and Table-I as follows: $\phi(v) = C[f^{-1}(v)] + \left(1-f^{-1}(v)\right)C'[f^{-1}(v)] = 4\left(\arctan(v)+\tfrac{1}{2}\right)\left(1-\left(\arctan(v)+\tfrac{1}{2}\right)\right) + \left(1-\left(\arctan(v)+\tfrac{1}{2}\right)\right)\left(4-8\left(\arctan(v)+\tfrac{1}{2}\right)\right) = \left(2\arctan(v)-1\right)^2.$ The Tangent loss is quasi-convex and is bounded for large negative values, which makes it less sensitive to outliers. Interestingly, the Tangent loss also assigns a bounded penalty to data points that have been classified "too correctly". This can help prevent over-training on the data set. The Tangent loss has been used in gradient boosting, the TangentBoost algorithm and Alternating Decision Forests. The minimizer of $I[f]$ for the Tangent loss function can be directly found from equation (1) as $f^*_{\text{Tangent}} = \tan\left(\eta-\tfrac{1}{2}\right) = \tan\left(p(1\mid x)-\tfrac{1}{2}\right).$
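A quick numeric comparison (a Python sketch; the margins chosen are illustrative) of the losses derived in the last few sections, showing the outlier sensitivity discussed above: the exponential loss blows up at large negative margins, the logistic loss grows only linearly, and the Savage and Tangent losses stay bounded:

```python
import numpy as np

# Evaluate each loss at a few margins v = y * f(x); large negative v
# corresponds to a badly misclassified point (a potential outlier).
v = np.array([-10.0, -2.0, 0.0, 2.0])
losses = {
    "exp":      np.exp(-v),                        # unbounded, ~22026 at v=-10
    "logistic": np.log1p(np.exp(-v)) / np.log(2),  # grows linearly for v -> -inf
    "savage":   1.0 / (1.0 + np.exp(v)) ** 2,      # bounded by 1
    "tangent":  (2 * np.arctan(v) - 1) ** 2,       # bounded by (pi + 1)^2
}
for name, L in losses.items():
    print(f"{name:9s}", np.round(L, 3))
```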
|
Loss functions for classification : The hinge loss function is defined with $\phi(\upsilon) = \max(0, 1-\upsilon) = [1-\upsilon]_+$, where $[a]_+ = \max(0, a)$ is the positive part function: $V(f(\vec{x}), y) = \max(0, 1-yf(\vec{x})) = [1-yf(\vec{x})]_+.$ The hinge loss provides a relatively tight, convex upper bound on the 0–1 indicator function. Specifically, the hinge loss equals the 0–1 indicator function when $\operatorname{sgn}(f(\vec{x})) = y$ and $|yf(\vec{x})| \geq 1$. In addition, the empirical risk minimization of this loss is equivalent to the classical formulation for support vector machines (SVMs). Correctly classified points lying outside the margin boundaries of the support vectors are not penalized, whereas points within the margin boundaries or on the wrong side of the hyperplane are penalized linearly in proportion to their distance from the correct boundary. While the hinge loss function is both convex and continuous, it is not smooth (is not differentiable) at $yf(\vec{x}) = 1$. Consequently, the hinge loss function cannot be used with gradient descent methods or stochastic gradient descent methods which rely on differentiability over the entire domain. However, the hinge loss does have a subgradient at $yf(\vec{x}) = 1$, which allows for the utilization of subgradient descent methods. SVMs utilizing the hinge loss function can also be solved using quadratic programming. The minimizer of $I[f]$ for the hinge loss function is $f^*_{\text{Hinge}}(\vec{x}) = \begin{cases} 1 & \text{if } p(1\mid\vec{x}) > p(-1\mid\vec{x}) \\ -1 & \text{if } p(1\mid\vec{x}) < p(-1\mid\vec{x}) \end{cases}$ when $p(1\mid x) \neq 0.5$, which matches that of the 0–1 indicator function. This conclusion makes the hinge loss quite attractive, as bounds can be placed on the difference between expected risk and the sign of the hinge loss function. The hinge loss cannot be derived from (2) since $f^*_{\text{Hinge}}$ is not invertible.
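A sketch of one subgradient-descent step on the hinge loss, as permitted by the subgradient noted above (toy data; function and variable names are illustrative):

```python
import numpy as np

def hinge_loss(y, fx):
    # V(f(x), y) = max(0, 1 - y*f(x))
    return np.maximum(0.0, 1.0 - y * fx)

def hinge_subgradient(y, x, w):
    # A subgradient of max(0, 1 - y*<w, x>) with respect to w.
    # At the kink y*<w, x> == 1 any convex combination of -y*x and 0 is
    # valid; we pick 0.
    margin = y * np.dot(w, x)
    return -y * x if margin < 1.0 else np.zeros_like(w)

# One subgradient-descent step on a single example.
w = np.zeros(2)
x, y, lr = np.array([1.0, -2.0]), 1.0, 0.1
w -= lr * hinge_subgradient(y, x, w)
print(w)  # [ 0.1 -0.2]
```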
|
Loss functions for classification : The generalized smooth hinge loss function with parameter $\alpha$ is defined as $f^*_\alpha(z) = \begin{cases} \frac{\alpha}{\alpha+1} - z & \text{if } z \leq 0 \\ \frac{1}{\alpha+1}z^{\alpha+1} - z + \frac{\alpha}{\alpha+1} & \text{if } 0 < z < 1 \\ 0 & \text{if } z \geq 1 \end{cases}$, where $z = yf(\vec{x})$. It is monotonically decreasing and reaches 0 when $z = 1$.
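A piecewise implementation sketch of the generalized smooth hinge above (illustrative; assumes $\alpha \geq 1$):

```python
import numpy as np

def smooth_hinge(z, alpha=2.0):
    # Piecewise generalized smooth hinge, evaluated at margins z = y * f(x).
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)                 # 0 for z >= 1 by initialization
    a = alpha / (alpha + 1.0)
    out[z <= 0] = a - z[z <= 0]
    mid = (z > 0) & (z < 1)
    out[mid] = z[mid] ** (alpha + 1) / (alpha + 1) - z[mid] + a
    return out

print(smooth_hinge([-1.0, 0.0, 0.5, 1.0, 2.0]))
# [1.6667 0.6667 0.2083 0.     0.    ] -- monotonically decreasing, 0 at z >= 1
```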
|
Loss functions for classification : Differentiable programming Scoring function
|
Manifold alignment : Manifold alignment is a class of machine learning algorithms that produce projections between sets of data, given that the original data sets lie on a common manifold. The concept was first introduced as such by Ham, Lee, and Saul in 2003, adding a manifold constraint to the general problem of correlating sets of high-dimensional vectors.
|
Manifold alignment : Manifold alignment assumes that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another. Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial data sets. Consider the case of aligning two data sets, $X$ and $Y$, with $X_i \in \mathbb{R}^m$ and $Y_i \in \mathbb{R}^n$. Manifold alignment algorithms attempt to project both $X$ and $Y$ into a new $d$-dimensional space such that the projections both minimize distance between corresponding points and preserve the local manifold structure of the original data. The projection functions are denoted $\phi_X\colon \mathbb{R}^m \rightarrow \mathbb{R}^d$ and $\phi_Y\colon \mathbb{R}^n \rightarrow \mathbb{R}^d$. Let $W$ represent the binary correspondence matrix between points in $X$ and $Y$: $W_{i,j} = \begin{cases} 1 & \text{if } X_i \leftrightarrow Y_j \\ 0 & \text{otherwise} \end{cases}$ Let $S_X$ and $S_Y$ represent pointwise similarities within data sets. This is usually encoded as the heat kernel of the adjacency matrix of a k-nearest neighbor graph. Finally, introduce a coefficient $0 \leq \mu \leq 1$, which can be tuned to adjust the weight of the 'preserve manifold structure' goal versus the 'minimize corresponding point distances' goal. With these definitions in place, the loss function for manifold alignment can be written: $\arg\min_{\phi_X,\phi_Y} \mu\sum_{i,j}\left\Vert\phi_X(X_i)-\phi_X(X_j)\right\Vert^2 S_{X,i,j} + \mu\sum_{i,j}\left\Vert\phi_Y(Y_i)-\phi_Y(Y_j)\right\Vert^2 S_{Y,i,j} + (1-\mu)\sum_{i,j}\left\Vert\phi_X(X_i)-\phi_Y(Y_j)\right\Vert^2 W_{i,j}$ Solving this optimization problem is equivalent to solving a generalized eigenvalue problem using the graph Laplacian of the joint matrix $G$: $G = \begin{bmatrix} \mu S_X & (1-\mu)W \\ (1-\mu)W^T & \mu S_Y \end{bmatrix}$
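A sketch of the one-step alignment via the joint graph Laplacian, assuming $S_X$, $S_Y$ and the correspondence matrix are given (function and variable names are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def manifold_align(S_X, S_Y, W_xy, d, mu=0.5):
    nx = S_X.shape[0]
    # Joint weight matrix G as defined above.
    G = np.block([[mu * S_X, (1 - mu) * W_xy],
                  [(1 - mu) * W_xy.T, mu * S_Y]])
    D = np.diag(G.sum(axis=1))
    L = D - G                                # graph Laplacian of G
    # Generalized eigenproblem L v = lambda D v; drop the trivial eigenvector.
    vals, vecs = eigh(L, D)
    emb = vecs[:, 1:d + 1]                   # d smallest nontrivial eigenvectors
    return emb[:nx], emb[nx:]                # embeddings of X and of Y

# Toy usage with random symmetric similarities and a diagonal correspondence.
rng = np.random.default_rng(0)
A = rng.random((5, 5)); S_X = (A + A.T) / 2
B = rng.random((5, 5)); S_Y = (B + B.T) / 2
phi_X, phi_Y = manifold_align(S_X, S_Y, np.eye(5), d=2)
print(phi_X.shape, phi_Y.shape)  # (5, 2) (5, 2)
```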
|
Manifold alignment : The algorithm described above requires full pairwise correspondence information between input data sets, i.e. a supervised learning paradigm. However, this information is usually difficult or impossible to obtain in real-world applications. Recent work has extended the core manifold alignment algorithm to semi-supervised, unsupervised, and multiple-instance settings.
|
Manifold alignment : The algorithm described above performs a "one-step" alignment, finding embeddings for both data sets at the same time. A similar effect can also be achieved with "two-step" alignments, following a slightly modified procedure: first, project each input data set to a lower-dimensional space independently, using any of a variety of dimension reduction algorithms; then, perform linear manifold alignment on the embedded data, holding the first data set fixed and mapping each additional data set onto the first's manifold. This approach has the benefit of decomposing the required computation, which lowers memory overhead and allows parallel implementations.
|
Manifold alignment : Manifold alignment can be used to find linear (feature-level) projections, or nonlinear (instance-level) embeddings. While the instance-level version generally produces more accurate alignments, it sacrifices a great degree of flexibility as the learned embedding is often difficult to parameterize. Feature-level projections allow any new instances to be easily embedded in the manifold space, and projections may be combined to form direct mappings between the original data representations. These properties are especially important for knowledge-transfer applications.
|
Manifold alignment : Manifold alignment is suited to problems with several corpora that lie on a shared manifold, even when each corpus is of a different dimensionality. Many real-world problems fit this description, but traditional techniques are not able to take advantage of all corpora at the same time. Manifold alignment also facilitates transfer learning, in which knowledge of one domain is used to jump-start learning in correlated domains. Applications of manifold alignment include: cross-language information retrieval and automatic translation (by representing documents as vectors of word counts, manifold alignment can recover the mapping between documents of different languages; cross-language document correspondence is relatively easy to obtain, especially from multi-lingual organizations like the European Union); transfer learning of policy and state representations for reinforcement learning; alignment of protein NMR structures; and accelerating model learning in robotics by sharing data generated by other robots.
|
Manifold alignment : Manifold hypothesis
|
Manifold alignment : Xiong, L.; F. Wang; C. Zhang (2007). "Semi-definite manifold alignment". Proceedings of the 18th European Conference on Machine Learning. CiteSeerX 10.1.1.91.7346. Wang, Chang; Sridhar Mahadevan (2009). "A General Framework for Manifold Alignment" (PDF). AAAI Fall Symposium on Manifold Learning and Its Applications. Wang, Chang; Sridhar Mahadevan (2010). "Multiscale Manifold Alignment" (PDF). Univ. Of Massachusetts TR UM-CS-2010-049. Ma, Yunqian (Apr 15, 2012). Manifold Learning Theory and Applications. Taylor & Francis Group. p. 376. ISBN 978-1-4398-7109-6. Chang Wang's Manifold alignment overview
|
Minimum redundancy feature selection : Minimum redundancy feature selection is an algorithm frequently used to accurately identify characteristics of genes and phenotypes and narrow down their relevance; it is usually described in its pairing with relevance-based feature selection as minimum redundancy maximum relevance (mRMR). The method was first proposed in 2003 by Hanchuan Peng and Chris Ding, followed by a theoretical formulation based on mutual information, along with the first definition of multivariate mutual information, published in IEEE Transactions on Pattern Analysis and Machine Intelligence in 2005. Feature selection, one of the basic problems in pattern recognition and machine learning, identifies subsets of data that are relevant to the parameters used, an approach normally called maximum relevance. These subsets often contain material which is relevant but redundant, and mRMR attempts to address this problem by removing those redundant subsets. mRMR has a variety of applications in many areas such as cancer diagnosis and speech recognition. Features can be selected in many different ways. One scheme is to select features that correlate strongest to the classification variable. This has been called maximum-relevance selection. Many heuristic algorithms can be used, such as the sequential forward, backward, or floating selections. On the other hand, features can be selected to be mutually far away from each other while still having "high" correlation to the classification variable. This scheme, termed minimum redundancy maximum relevance (mRMR) selection, has been found to be more powerful than maximum-relevance selection. As a special case, the "correlation" can be replaced by the statistical dependency between variables, with mutual information used to quantify the dependency. In this case, it can be shown that mRMR is an approximation to maximizing the dependency between the joint distribution of the selected features and the classification variable. Studies have tried different measures for redundancy and relevance; a recent study compared several measures within the context of biomedical images.
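A sketch of greedy mRMR selection with mutual information (MI) as the dependency measure, using scikit-learn's MI estimators (function and variable names are illustrative, not the authors' reference implementation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr(X, y, k):
    # Greedy mRMR: at each step add the feature maximizing
    # relevance (MI with the class) minus mean redundancy (MI with
    # already-selected features).
    n = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(n) if j not in selected]
        scores = []
        for j in candidates:
            red = np.mean([mutual_info_regression(X[:, [s]], X[:, j],
                                                  random_state=0)[0]
                           for s in selected])
            scores.append(relevance[j] - red)
        selected.append(candidates[int(np.argmax(scores))])
    return selected

X, y = make_classification(n_samples=200, n_features=8, n_informative=3,
                           random_state=0)
print(mrmr(X, y, k=3))   # indices of the three selected features
```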
|
Minimum redundancy feature selection : Peng, H.C., Long, F., and Ding, C., "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 8, pp. 1226–1238, 2005. Chris Ding and Hanchuan Peng, "Minimum Redundancy Feature Selection from Microarray Gene Expression Data". 2nd IEEE Computer Society Bioinformatics Conference (CSB 2003), 11–14 August 2003, Stanford, CA, USA. Pages 523–529. Penglab mRMR
|
Mixture of experts : Mixture of experts (MoE) is a machine learning technique where multiple expert networks (learners) are used to divide a problem space into homogeneous regions. MoE represents a form of ensemble learning. They were also called committee machines.
|
Mixture of experts : MoE always has the following components, but they are implemented and combined differently according to the problem being solved: Experts $f_1, \ldots, f_n$, each taking the same input $x$ and producing outputs $f_1(x), \ldots, f_n(x)$. A weighting function (also known as a gating function) $w$, which takes input $x$ and produces a vector of outputs $(w(x)_1, \ldots, w(x)_n)$. This may or may not be a probability distribution, but in both cases its entries are non-negative. $\theta = (\theta_0, \theta_1, \ldots, \theta_n)$ is the set of parameters. The parameter $\theta_0$ is for the weighting function. The parameters $\theta_1, \ldots, \theta_n$ are for the experts. Given an input $x$, the mixture of experts produces a single output by combining $f_1(x), \ldots, f_n(x)$ according to the weights $w(x)_1, \ldots, w(x)_n$ in some way, usually by $f(x) = \sum_i w(x)_i f_i(x)$. Both the experts and the weighting function are trained by minimizing some loss function, generally via gradient descent. There is much freedom in choosing the precise form of experts, the weighting function, and the loss function.
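A toy sketch of the combination rule $f(x) = \sum_i w(x)_i f_i(x)$ with linear experts and a softmax gating function (all names, shapes and parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out = 4, 3, 2
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]  # theta_1..n
gate = rng.normal(size=(d_in, n_experts))                             # theta_0

def moe(x):
    logits = x @ gate
    w = np.exp(logits - logits.max())
    w /= w.sum()                               # softmax weights w(x)_1..n
    outs = np.stack([x @ E for E in experts])  # expert outputs f_i(x)
    return np.einsum('i,io->o', w, outs)       # f(x) = sum_i w(x)_i f_i(x)

x = rng.normal(size=d_in)
print(moe(x))    # combined prediction of shape (d_out,)
```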
|
Mixture of experts : The previous section described MoE as it was used before the era of deep learning. After deep learning, MoE found applications in running the largest models, as a simple way to perform conditional computation: only parts of the model are used, the parts chosen according to what the input is. The earliest paper that applies MoE to deep learning dates back to 2013, which proposed to use a different gating network at each layer in a deep neural network. Specifically, each gating is a linear-ReLU-linear-softmax network, and each expert is a linear-ReLU network. Since the output from the gating is not sparse, all expert outputs are needed, and no conditional computation is performed. The key goal when using MoE in deep learning is to reduce computing cost. Consequently, for each query, only a small subset of the experts should be queried. This makes MoE in deep learning different from classical MoE. In classical MoE, the output for each query is a weighted sum of all experts' outputs. In deep learning MoE, the output for each query can only involve a few experts' outputs. Consequently, the key design choice in MoE becomes routing: given a batch of queries, how to route the queries to the best experts.
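A sketch of the sparse routing idea described above: score all experts, but evaluate only the top-k (names are illustrative; production systems add load-balancing losses and expert capacity limits):

```python
import numpy as np

def top_k_route(x, gate, experts, k=2):
    logits = x @ gate
    top = np.argsort(logits)[-k:]               # indices of the k best experts
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()                                # renormalize over chosen experts
    # Conditional computation: non-selected experts are never evaluated.
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

rng = np.random.default_rng(1)
d, n = 4, 8
gate = rng.normal(size=(d, n))
experts = [(lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)))
           for _ in range(n)]
print(top_k_route(rng.normal(size=d), gate, experts, k=2))
```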
|
Mixture of experts : Product of experts Mixture models Mixture of gaussians Ensemble learning
|
Mixture of experts : Before the deep learning era: McLachlan, Geoffrey J.; Peel, David (2000). Finite Mixture Models. Wiley Series in Probability and Statistics: Applied Probability and Statistics Section. New York: John Wiley & Sons, Inc. ISBN 978-0-471-00626-8. Yuksel, S. E.; Wilson, J. N.; Gader, P. D. (August 2012). "Twenty Years of Mixture of Experts". IEEE Transactions on Neural Networks and Learning Systems. 23 (8): 1177–1193. doi:10.1109/TNNLS.2012.2200299. ISSN 2162-237X. PMID 24807516. S2CID 9922492. Masoudnia, Saeed; Ebrahimpour, Reza (12 May 2012). "Mixture of experts: a literature survey". Artificial Intelligence Review. 42 (2): 275–293. doi:10.1007/s10462-012-9338-y. S2CID 3185688. Nguyen, Hien D.; Chamroukhi, Faicel (July 2018). "Practical and theoretical aspects of mixture-of-experts modeling: An overview". WIREs Data Mining and Knowledge Discovery. 8 (4). doi:10.1002/widm.1246. ISSN 1942-4787. S2CID 49301452. Practical techniques for training MoE Transformer models: Zoph, Barret; Bello, Irwan; Kumar, Sameer; Du, Nan; Huang, Yanping; Dean, Jeff; Shazeer, Noam; Fedus, William (2022). "ST-MoE: Designing Stable and Transferable Sparse Expert Models". arXiv:2202.08906 [cs.CL]. Muennighoff, Niklas; Soldaini, Luca; Groeneveld, Dirk; Lo, Kyle; Morrison, Jacob; Min, Sewon; Shi, Weijia; Walsh, Pete; Tafjord, Oyvind (2024-09-03). OLMoE: Open Mixture-of-Experts Language Models. arXiv:2409.02060, with associated data release at allenai/OLMoE, Ai2, 2024-10-17, retrieved 2024-10-18. Rajbhandari, Samyam; Li, Conglong; Yao, Zhewei; Zhang, Minjia; Aminabadi, Reza Yazdani; Awan, Ammar Ahmad; Rasley, Jeff; He, Yuxiong (January 14, 2022). "DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale". arXiv:2201.05596 [cs.LG]. DeepSeek-AI (June 19, 2024). DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model. arXiv:2405.04434. DeepSeek-AI (2024-12-27). DeepSeek-V3 Technical Report. arXiv:2412.19437. Literature review for the deep learning era: Fedus, William; Dean, Jeff; Zoph, Barret (2022-09-04). A Review of Sparse Expert Models in Deep Learning. arXiv:2209.01667. Fuzhao, Xue (2024-07-21). "XueFuzhao/awesome-mixture-of-experts". GitHub. Retrieved 2024-07-21. Vats, Arpita (2024-09-02). "arpita8/Awesome-Mixture-of-Experts-Papers". GitHub. Retrieved 2024-09-06. Cai, Weilin; Jiang, Juyong; Wang, Fan; Tang, Jing; Kim, Sunghun; Huang, Jiayi (2024-08-08). "A Survey on Mixture of Experts". arXiv:2407.06204 [cs.LG].
|
Multi expression programming : Multi Expression Programming (MEP) is an evolutionary algorithm for generating mathematical functions describing a given set of data. MEP is a Genetic Programming variant encoding multiple solutions in the same chromosome. The MEP representation is not specific (multiple representations have been tested). In the simplest variant, MEP chromosomes are linear strings of instructions. This representation was inspired by three-address code. MEP's strength consists in its ability to encode multiple solutions of a problem in the same chromosome. In this way, one can explore larger zones of the search space. For most problems this advantage comes with no running-time penalty compared with genetic programming variants encoding a single solution in a chromosome.
|
Multi expression programming : MEP chromosomes are arrays of instructions represented in Three-address code format. Each instruction contains a variable, a constant, or a function. If the instruction is a function, then the arguments (given as instruction's addresses) are also present.
|
Multi expression programming : When the chromosome is evaluated, it is unclear which instruction will provide the output of the program. In many cases, a set of programs is obtained, some of them completely unrelated (they do not have common instructions). For example, for a chromosome whose instructions are 1: a; 2: b; 3: + 1, 2; 4: c; 5: d; 6: + 4, 5; 7: * 3, 5, the list of possible programs obtained during decoding is: E1 = a; E2 = b; E3 = a + b; E4 = c; E5 = d; E6 = c + d; E7 = (a + b) * d. Each instruction is evaluated as a possible output of the program. The fitness (or error) is computed in a standard manner. For instance, in the case of symbolic regression, the fitness is the sum of differences (in absolute value) between the expected output (called target) and the actual output.
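A sketch of this decoding and fitness computation (the encoding is illustrative; addresses are 0-based here, whereas the prose above numbers instructions from 1):

```python
import operator

OPS = {'+': operator.add, '*': operator.mul}

# Each gene is either a terminal (a variable name) or (op, addr1, addr2),
# where the addresses point to earlier genes in the chromosome.
chromosome = ['a', 'b', ('+', 0, 1), 'c', 'd', ('+', 3, 4), ('*', 2, 4)]

def evaluate(chromosome, env):
    values = []
    for gene in chromosome:
        if isinstance(gene, tuple):
            op, i, j = gene
            values.append(OPS[op](values[i], values[j]))
        else:
            values.append(env[gene])
    return values                     # one value per possible program E1..E7

def fitness(chromosome, cases):
    # Best (lowest) sum of absolute errors over all programs encoded
    # in the chromosome.
    errors = [0.0] * len(chromosome)
    for env, target in cases:
        for k, v in enumerate(evaluate(chromosome, env)):
            errors[k] += abs(v - target)
    return min(errors)

cases = [({'a': 1, 'b': 2, 'c': 3, 'd': 4}, 12)]   # target: (a+b)*d = 12
print(fitness(chromosome, cases))                  # 0.0 -- E7 matches exactly
```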
|
Multi expression programming : Which expression will represent the chromosome? Which one will give the fitness of the chromosome? In MEP, the best of them (which has the lowest error) will represent the chromosome. This is different from other GP techniques: In Linear genetic programming the last instruction will give the output. In Cartesian Genetic Programming the gene providing the output is evolved like all other genes. Note that, for many problems, this evaluation has the same complexity as in the case of encoding a single solution in each chromosome. Thus, there is no penalty in running time compared to other techniques.
|
Multi expression programming : Genetic programming Cartesian genetic programming Gene expression programming Grammatical evolution Linear genetic programming
|
Multi expression programming : Multi Expression Programming website Multi Expression Programming source code
|
Multiple kernel learning : Multiple kernel learning refers to a set of machine learning methods that use a predefined set of kernels and learn an optimal linear or non-linear combination of kernels as part of the algorithm. Reasons to use multiple kernel learning include a) the ability to select for an optimal kernel and parameters from a larger set of kernels, reducing bias due to kernel selection while allowing for more automated machine learning methods, and b) combining data from different sources (e.g. sound and images from a video) that have different notions of similarity and thus require different kernels. Instead of creating a new kernel, multiple kernel algorithms can be used to combine kernels already established for each individual data source. Multiple kernel learning approaches have been used in many applications, such as event recognition in video, object recognition in images, and biomedical data fusion.
|
Multiple kernel learning : Multiple kernel learning algorithms have been developed for supervised, semi-supervised, as well as unsupervised learning. Most work has been done on the supervised learning case with linear combinations of kernels; however, many algorithms have been developed. The basic idea behind multiple kernel learning algorithms is to add an extra parameter to the minimization problem of the learning algorithm. As an example, consider the case of supervised learning of a linear combination of a set of $n$ kernels $K$. We introduce a new kernel $K' = \sum_{i=1}^n \beta_i K_i$, where $\beta$ is a vector of coefficients for each kernel. Because the kernels are additive (due to properties of reproducing kernel Hilbert spaces), this new function is still a kernel. For a set of data $X$ with labels $Y$, the minimization problem can then be written as $\min_{\beta,c} E(Y, K'c) + R(K, c)$ where $E$ is an error function and $R$ is a regularization term. $E$ is typically the square loss function (Tikhonov regularization) or the hinge loss function (for SVM algorithms), and $R$ is usually an $\ell_n$ norm or some combination of the norms (i.e. elastic net regularization). This optimization problem can then be solved by standard optimization methods. Adaptations of existing techniques such as Sequential Minimal Optimization have also been developed for multiple kernel SVM-based methods.
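A sketch of forming the combined kernel $K' = \sum_i \beta_i K_i$ with fixed (not learned) weights, illustrating that a non-negative combination of kernels is again a kernel (names and values are illustrative):

```python
import numpy as np

def rbf(X, gamma):
    # Gram matrix of the RBF kernel on the rows of X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def linear(X):
    return X @ X.T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
kernels = [rbf(X, 0.5), rbf(X, 2.0), linear(X)]
beta = np.array([0.5, 0.3, 0.2])       # non-negative weight per base kernel
K_prime = sum(b * K for b, K in zip(beta, kernels))
# K' is again a valid kernel: non-negative sums of PSD matrices are PSD.
print(np.all(np.linalg.eigvalsh(K_prime) > -1e-9))  # True
```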
|
Multiple kernel learning : Available MKL libraries include SPG-GMKL: A scalable C++ MKL SVM library that can handle a million kernels. GMKL: Generalized Multiple Kernel Learning code in MATLAB, does $\ell_1$ and $\ell_2$ regularization for supervised learning. (Another) GMKL: A different MATLAB MKL code that can also perform elastic net regularization. SMO-MKL: C++ source code for a Sequential Minimal Optimization MKL algorithm. Does $p$-norm regularization. SimpleMKL: A MATLAB code based on the SimpleMKL algorithm for MKL SVM. MKLPy: A Python framework for MKL and kernel machines scikit-compliant with different algorithms, e.g. EasyMKL and others.
|
Neural radiance field : A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation.
|
Neural radiance field : The NeRF algorithm represents a scene as a radiance field parametrized by a deep neural network (DNN). The network predicts a volume density and view-dependent emitted radiance given the spatial location (x, y, z) and viewing direction in Euler angles (θ, Φ) of the camera. By sampling many points along camera rays, traditional volume rendering techniques can produce an image.
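A sketch of the volume-rendering quadrature commonly used to turn sampled densities and colors along a camera ray into a pixel color; `field` stands in for the trained network (here a dummy returning constant values, so all names and numbers are illustrative):

```python
import numpy as np

def field(xyz, view_dir):
    # Stand-in for the trained DNN: volume density sigma and view-dependent
    # radiance (RGB) at each sampled point.
    sigma = np.full(len(xyz), 0.5)
    rgb = np.tile([0.8, 0.2, 0.1], (len(xyz), 1))
    return sigma, rgb

def render_ray(origin, direction, t_near=0.0, t_far=4.0, n_samples=64):
    t = np.linspace(t_near, t_far, n_samples)
    pts = origin + t[:, None] * direction
    sigma, rgb = field(pts, direction)
    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))   # sample spacing
    alpha = 1.0 - np.exp(-sigma * delta)               # opacity per segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)        # accumulated pixel color

print(render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```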
|
Neural radiance field : Early versions of NeRF were slow to optimize and required that all input views were taken with the same camera in the same lighting conditions. These performed best when limited to orbiting around individual objects, such as a drum set, plants or small toys. Since the original paper in 2020, many improvements have been made to the NeRF algorithm, with variations for special use cases.
|
Neural radiance field : NeRFs have a wide range of applications, and are starting to grow in popularity as they become integrated into user-friendly applications.
|
Non-negative matrix factorization : Non-negative matrix factorization (NMF or NNMF), also non-negative matrix approximation is a group of algorithms in multivariate analysis and linear algebra where a matrix V is factorized into (usually) two matrices W and H, with the property that all three matrices have no negative elements. This non-negativity makes the resulting matrices easier to inspect. Also, in applications such as processing of audio spectrograms or muscular activity, non-negativity is inherent to the data being considered. Since the problem is not exactly solvable in general, it is commonly approximated numerically. NMF finds applications in such fields as astronomy, computer vision, document clustering, missing data imputation, chemometrics, audio signal processing, recommender systems, and bioinformatics.
|
Non-negative matrix factorization : In chemometrics non-negative matrix factorization has a long history under the name "self modeling curve resolution". In this framework the vectors in the right matrix are continuous curves rather than discrete vectors. Also early work on non-negative matrix factorizations was performed by a Finnish group of researchers in the 1990s under the name positive matrix factorization. It became more widely known as non-negative matrix factorization after Lee and Seung investigated the properties of the algorithm and published some simple and useful algorithms for two types of factorizations.
|
Non-negative matrix factorization : Let matrix V be the product of the matrices W and H, $\mathbf{V} = \mathbf{W}\mathbf{H}.$ Matrix multiplication can be implemented as computing the column vectors of V as linear combinations of the column vectors in W using coefficients supplied by columns of H. That is, each column of V can be computed as follows: $\mathbf{v}_i = \mathbf{W}\mathbf{h}_i,$ where $\mathbf{v}_i$ is the i-th column vector of the product matrix V and $\mathbf{h}_i$ is the i-th column vector of the matrix H. When multiplying matrices, the dimensions of the factor matrices may be significantly lower than those of the product matrix and it is this property that forms the basis of NMF. NMF generates factors with significantly reduced dimensions compared to the original matrix. For example, if V is an m × n matrix, W is an m × p matrix, and H is a p × n matrix then p can be significantly less than both m and n. Here is an example based on a text-mining application: Let the input matrix (the matrix to be factored) be V with 10000 rows and 500 columns where words are in rows and documents are in columns. That is, we have 500 documents indexed by 10000 words. It follows that a column vector v in V represents a document. Assume we ask the algorithm to find 10 features in order to generate a features matrix W with 10000 rows and 10 columns and a coefficients matrix H with 10 rows and 500 columns. The product of W and H is a matrix with 10000 rows and 500 columns, the same shape as the input matrix V and, if the factorization worked, it is a reasonable approximation to the input matrix V. From the treatment of matrix multiplication above it follows that each column in the product matrix WH is a linear combination of the 10 column vectors in the features matrix W with coefficients supplied by the coefficients matrix H. This last point is the basis of NMF because we can consider each original document in our example as being built from a small set of hidden features. NMF generates these features. It is useful to think of each feature (column vector) in the features matrix W as a document archetype comprising a set of words where each word's cell value defines the word's rank in the feature: The higher a word's cell value the higher the word's rank in the feature. A column in the coefficients matrix H represents an original document with a cell value defining the document's rank for a feature. We can now reconstruct a document (column vector) from our input matrix by a linear combination of our features (column vectors in W) where each feature is weighted by the feature's cell value from the document's column in H.
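A sketch of the text-mining example above using scikit-learn's NMF (the input here is random, standing in for a real term-document matrix, so the run only illustrates the shapes involved):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((10000, 500))              # stand-in for word counts
model = NMF(n_components=10, init='nndsvda', max_iter=200, random_state=0)
W = model.fit_transform(V)                # features matrix, 10000 x 10
H = model.components_                     # coefficients matrix, 10 x 500
print(W.shape, H.shape)                   # (10000, 10) (10, 500)
# Each document (column of V) is approximated as a non-negative linear
# combination of the 10 feature columns of W; for random V the
# reconstruction error is naturally large.
print(np.linalg.norm(V - W @ H, 'fro') / np.linalg.norm(V, 'fro'))
```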
|
Non-negative matrix factorization : NMF has an inherent clustering property, i.e., it automatically clusters the columns of input data $\mathbf{V} = (v_1, \dots, v_n)$. More specifically, the approximation of $\mathbf{V}$ by $\mathbf{V} \simeq \mathbf{W}\mathbf{H}$ is achieved by finding $W$ and $H$ that minimize the error function (using the Frobenius norm) $\|\mathbf{V} - \mathbf{W}\mathbf{H}\|_F$, subject to $\mathbf{W} \geq 0$, $\mathbf{H} \geq 0$. If we furthermore impose an orthogonality constraint on $\mathbf{H}$, i.e. $\mathbf{H}\mathbf{H}^T = I$, then the above minimization is mathematically equivalent to the minimization of K-means clustering. Furthermore, the computed $H$ gives the cluster membership, i.e., if $\mathbf{H}_{kj} > \mathbf{H}_{ij}$ for all $i \neq k$, this suggests that the input data $v_j$ belongs to the $k$-th cluster. The computed $W$ gives the cluster centroids, i.e., the $k$-th column gives the cluster centroid of the $k$-th cluster. This centroid's representation can be significantly enhanced by convex NMF. When the orthogonality constraint $\mathbf{H}\mathbf{H}^T = I$ is not explicitly imposed, the orthogonality holds to a large extent, and the clustering property holds too. When the error function to be used is Kullback–Leibler divergence, NMF is identical to probabilistic latent semantic analysis (PLSA), a popular document clustering method.
|
Non-negative matrix factorization : There are several ways in which the W and H may be found: Lee and Seung's multiplicative update rule has been a popular method due to the simplicity of implementation. This algorithm is: initialize W and H with non-negative values; then update the values in W and H by computing the following, with $n$ as an index of the iteration: $\mathbf{H}_{[i,j]}^{n+1} \leftarrow \mathbf{H}_{[i,j]}^{n} \frac{((\mathbf{W}^n)^T \mathbf{V})_{[i,j]}}{((\mathbf{W}^n)^T \mathbf{W}^n \mathbf{H}^n)_{[i,j]}}$ and $\mathbf{W}_{[i,j]}^{n+1} \leftarrow \mathbf{W}_{[i,j]}^{n} \frac{(\mathbf{V}(\mathbf{H}^{n+1})^T)_{[i,j]}}{(\mathbf{W}^n \mathbf{H}^{n+1}(\mathbf{H}^{n+1})^T)_{[i,j]}}$ until W and H are stable. Note that the updates are done on an element-by-element basis, not by matrix multiplication. We note that the multiplicative factors for W and H, i.e. the $\frac{\mathbf{W}^T\mathbf{V}}{\mathbf{W}^T\mathbf{W}\mathbf{H}}$ and $\frac{\mathbf{V}\mathbf{H}^T}{\mathbf{W}\mathbf{H}\mathbf{H}^T}$ terms, are matrices of ones when $\mathbf{V} = \mathbf{W}\mathbf{H}$. More recently other algorithms have been developed. Some approaches are based on alternating non-negative least squares: in each step of such an algorithm, first H is fixed and W found by a non-negative least squares solver, then W is fixed and H is found analogously. The procedures used to solve for W and H may be the same or different, as some NMF variants regularize one of W and H. Specific approaches include the projected gradient descent methods, the active set method, the optimal gradient method, and the block principal pivoting method, among several others. Current algorithms are sub-optimal in that they only guarantee finding a local minimum, rather than a global minimum of the cost function. A provably optimal algorithm is unlikely in the near future as the problem has been shown to generalize the k-means clustering problem, which is known to be NP-complete. However, as in many other data mining applications, a local minimum may still prove to be useful. In addition to the optimization step, initialization has a significant effect on NMF. The initial values chosen for W and H may affect not only the rate of convergence, but also the overall error at convergence. Some options for initialization include complete randomization, SVD, k-means clustering, and more advanced strategies based on these and other paradigms.
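A direct implementation sketch of the multiplicative updates above (the `eps` guard against division by zero is an implementation detail, not part of the original rule):

```python
import numpy as np

def nmf_multiplicative(V, p, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, p)) + eps
    H = rng.random((p, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # element-wise update of H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # element-wise update of W
    return W, H

# Recover an exactly factorizable (rank-4, non-negative) matrix.
rng = np.random.default_rng(1)
V = rng.random((30, 4)) @ rng.random((4, 20))
W, H = nmf_multiplicative(V, p=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```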
|
Non-negative matrix factorization : In "Learning the parts of objects by non-negative matrix factorization", Lee and Seung proposed NMF mainly for parts-based decomposition of images. The paper compares NMF to vector quantization and principal component analysis, and shows that although the three techniques may be written as factorizations, they implement different constraints and therefore produce different results. It was later shown that some types of NMF are an instance of a more general probabilistic model called "multinomial PCA". When NMF is obtained by minimizing the Kullback–Leibler divergence, it is in fact equivalent to another instance of multinomial PCA, probabilistic latent semantic analysis, trained by maximum likelihood estimation. That method is commonly used for analyzing and clustering textual data and is also related to the latent class model. NMF with the least-squares objective is equivalent to a relaxed form of K-means clustering: the matrix factor W contains cluster centroids and H contains cluster membership indicators. This provides a theoretical foundation for using NMF for data clustering. However, k-means does not enforce non-negativity on its centroids, so the closest analogy is in fact with "semi-NMF". NMF can be seen as a two-layer directed graphical model with one layer of observed random variables and one layer of hidden random variables. NMF extends beyond matrices to tensors of arbitrary order. This extension may be viewed as a non-negative counterpart to, e.g., the PARAFAC model. Other extensions of NMF include joint factorization of several data matrices and tensors where some factors are shared. Such models are useful for sensor fusion and relational learning. NMF is an instance of nonnegative quadratic programming, just like the support vector machine (SVM). However, SVM and NMF are related at a more intimate level than that of NQP, which allows direct application of the solution algorithms developed for either of the two methods to problems in both domains.
|
Non-negative matrix factorization : The factorization is not unique: a matrix and its inverse can be used to transform the two factorization matrices by, e.g., $\mathbf{W}\mathbf{H} = \mathbf{W}\mathbf{B}\mathbf{B}^{-1}\mathbf{H}.$ If the two new matrices $\mathbf{\tilde{W}} = \mathbf{W}\mathbf{B}$ and $\mathbf{\tilde{H}} = \mathbf{B}^{-1}\mathbf{H}$ are non-negative they form another parametrization of the factorization. The non-negativity of $\mathbf{\tilde{W}}$ and $\mathbf{\tilde{H}}$ applies at least if $\mathbf{B}$ is a non-negative monomial matrix. In this simple case it will just correspond to a scaling and a permutation. More control over the non-uniqueness of NMF is obtained with sparsity constraints.
|
Non-negative matrix factorization : Current research (since 2010) in nonnegative matrix factorization includes, but is not limited to, Algorithmic: searching for global minima of the factors and factor initialization. Scalability: how to factorize million-by-billion matrices, which are commonplace in Web-scale data mining, e.g., see Distributed Nonnegative Matrix Factorization (DNMF), Scalable Nonnegative Matrix Factorization (ScalableNMF), Distributed Stochastic Singular Value Decomposition. Online: how to update the factorization when new data comes in without recomputing from scratch, e.g., see online CNSC Collective (joint) factorization: factorizing multiple interrelated matrices for multiple-view learning, e.g. multi-view clustering, see CoNMF and MultiNMF Cohen and Rothblum 1993 problem: whether a rational matrix always has an NMF of minimal inner dimension whose factors are also rational. Recently, this problem has been answered negatively.
|
Non-negative matrix factorization : Multilinear algebra Multilinear subspace learning Tensor Tensor decomposition Tensor software
|
NSynth : NSynth (a portmanteau of "Neural Synthesis") is a WaveNet-based autoencoder for synthesizing audio, outlined in a paper in April 2017.
|
NSynth : The model generates sounds through a neural network based synthesis, employing a WaveNet-style autoencoder to learn its own temporal embeddings from four different sounds. Google then released an open source hardware interface for the algorithm called NSynth Super, used by notable musicians such as Grimes and YACHT to generate experimental music using artificial intelligence. The research and development of the algorithm was part of a collaboration between Google Brain, Magenta and DeepMind.
|
NSynth : In 2018 Google released a hardware interface for the NSynth algorithm, called NSynth Super, designed to provide an accessible physical interface to the algorithm for musicians to use in their artistic production. Design files, source code and internal components are released under an open source Apache License 2.0, enabling hobbyists and musicians to freely build and use the instrument. At the core of the NSynth Super there is a Raspberry Pi, extended with a custom printed circuit board to accommodate the interface elements.
|
NSynth : Despite not being publicly available as a commercial product, NSynth Super has been used by notable artists, including Grimes and YACHT. Grimes reported using the instrument in her 2020 studio album Miss Anthropocene. YACHT announced an extensive use of NSynth Super in their album Chain Tripping. Claire L. Evans compared the potential influence of the instrument to the Roland TR-808. The NSynth Super design was honored with a D&AD Yellow Pencil award in 2018.
|
NSynth : Engel, Jesse; Resnick, Cinjon; Roberts, Adam; Dieleman, Sander; Eck, Douglas; Simonyan, Karen; Norouzi, Mohammad (2017). "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders". arXiv:1704.01279 [cs.LG].
|
NSynth : Official Nsynth Super site Official Magenta site In-browser emulation of the Nsynth algorithm
|
Online machine learning : In computer science, online machine learning is a method of machine learning in which data becomes available in a sequential order and is used to update the best predictor for future data at each step, as opposed to batch learning techniques which generate the best predictor by learning on the entire training data set at once. Online learning is a common technique used in areas of machine learning where it is computationally infeasible to train over the entire dataset, necessitating out-of-core algorithms. It is also used in situations where the algorithm must dynamically adapt to new patterns in the data, or when the data itself is generated as a function of time, e.g., prediction of prices in international financial markets. Online learning algorithms may be prone to catastrophic interference, a problem that can be addressed by incremental learning approaches.
|
Online machine learning : In the setting of supervised learning, a function $f\colon X \to Y$ is to be learned, where $X$ is thought of as a space of inputs and $Y$ as a space of outputs, that predicts well on instances that are drawn from a joint probability distribution $p(x, y)$ on $X \times Y$. In reality, the learner never knows the true distribution $p(x, y)$ over instances. Instead, the learner usually has access to a training set of examples $(x_1, y_1), \ldots, (x_n, y_n)$. In this setting, the loss function is given as $V\colon Y \times Y \to \mathbb{R}$, such that $V(f(x), y)$ measures the difference between the predicted value $f(x)$ and the true value $y$. The ideal goal is to select a function $f \in \mathcal{H}$, where $\mathcal{H}$ is a space of functions called a hypothesis space, so that some notion of total loss is minimized. Depending on the type of model (statistical or adversarial), one can devise different notions of loss, which lead to different learning algorithms.
|
Online machine learning : In statistical learning models, the training samples $(x_i, y_i)$ are assumed to have been drawn from the true distribution $p(x, y)$ and the objective is to minimize the expected "risk" $I[f] = \mathbb{E}[V(f(x), y)] = \int V(f(x), y)\,dp(x, y).$ A common paradigm in this situation is to estimate a function $\hat{f}$ through empirical risk minimization or regularized empirical risk minimization (usually Tikhonov regularization). The choice of loss function here gives rise to several well-known learning algorithms such as regularized least squares and support vector machines. A purely online model in this category would learn based on just the new input $(x_{t+1}, y_{t+1})$, the current best predictor $f_t$ and some extra stored information (which is usually expected to have storage requirements independent of training data size). For many formulations, for example nonlinear kernel methods, true online learning is not possible, though a form of hybrid online learning with recursive algorithms can be used, where $f_{t+1}$ is permitted to depend on $f_t$ and all previous data points $(x_1, y_1), \ldots, (x_t, y_t)$. In this case, the space requirements are no longer guaranteed to be constant since all previous data points must be stored, but the solution may take less time to compute with the addition of a new data point, as compared to batch learning techniques. A common strategy to overcome the above issues is to learn using mini-batches, which process a small batch of $b \geq 1$ data points at a time; this can be considered as pseudo-online learning for $b$ much smaller than the total number of training points. Mini-batch techniques are used with repeated passing over the training data to obtain optimized out-of-core versions of machine learning algorithms, for example, stochastic gradient descent. When combined with backpropagation, this is currently the de facto training method for training artificial neural networks.
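A sketch (illustrative names, least-squares loss) of the mini-batch strategy described above: $b$ points are processed at a time, with repeated passes over the training data:

```python
import numpy as np

def minibatch_sgd(X, y, b=8, lr=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                    # repeated passes over the data
        idx = rng.permutation(len(X))
        for start in range(0, len(X), b):
            batch = idx[start:start + b]       # mini-batch of size <= b
            grad = X[batch].T @ (X[batch] @ w - y[batch]) / len(batch)
            w -= lr * grad
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
print(minibatch_sgd(X, y))   # approximately [1.0, -2.0, 0.5]
```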
|
Online machine learning : Continual learning means constantly improving the learned model by processing continuous streams of information. Continual learning capabilities are essential for software systems and autonomous agents interacting in an ever changing real world. However, continual learning is a challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting.
|
Online machine learning : The paradigm of online learning has different interpretations depending on the choice of the learning model, each of which has distinct implications about the predictive quality of the sequence of functions $f_1, f_2, \ldots, f_n$. The prototypical stochastic gradient descent algorithm is used for this discussion. As noted above, its recursion is given by $w_t = w_{t-1} - \gamma_t \nabla V(\langle w_{t-1}, x_t\rangle, y_t).$ The first interpretation considers the stochastic gradient descent method as applied to the problem of minimizing the expected risk $I[w]$ defined above. Indeed, in the case of an infinite stream of data, since the examples $(x_1, y_1), (x_2, y_2), \ldots$ are assumed to be drawn i.i.d. from the distribution $p(x, y)$, the sequence of gradients of $V(\cdot, \cdot)$ in the above iteration is an i.i.d. sample of stochastic estimates of the gradient of the expected risk $I[w]$, and therefore one can apply complexity results for the stochastic gradient descent method to bound the deviation $I[w_t] - I[w^*]$, where $w^*$ is the minimizer of $I[w]$. This interpretation is also valid in the case of a finite training set; although with multiple passes through the data the gradients are no longer independent, complexity results can still be obtained in special cases. The second interpretation applies to the case of a finite training set and considers the SGD algorithm as an instance of the incremental gradient descent method. In this case, one instead looks at the empirical risk: $I_n[w] = \frac{1}{n}\sum_{i=1}^n V(\langle w, x_i\rangle, y_i).$ Since the gradients of $V(\cdot, \cdot)$ in the incremental gradient descent iterations are also stochastic estimates of the gradient of $I_n[w]$, this interpretation is also related to the stochastic gradient descent method, but applied to minimize the empirical risk as opposed to the expected risk. Since this interpretation concerns the empirical risk and not the expected risk, multiple passes through the data are readily allowed and actually lead to tighter bounds on the deviations $I_n[w_t] - I_n[w_n^*]$, where $w_n^*$ is the minimizer of $I_n[w]$.
|
Online machine learning : Vowpal Wabbit: Open-source fast out-of-core online learning system which is notable for supporting a number of machine learning reductions, importance weighting and a selection of different loss functions and optimisation algorithms. It uses the hashing trick for bounding the size of the set of features independent of the amount of training data. scikit-learn: Provides out-of-core implementations of algorithms for Classification: Perceptron, SGD classifier, Naive Bayes classifier. Regression: SGD Regressor, Passive Aggressive regressor. Clustering: Mini-batch k-means. Feature extraction: Mini-batch dictionary learning, Incremental PCA.
|
Online machine learning : Learning paradigms Incremental learning Lazy learning Offline learning, the opposite model Reinforcement learning Multi-armed bandit Supervised learning General algorithms Online algorithm Online optimization Streaming algorithm Stochastic gradient descent Learning models Adaptive Resonance Theory Hierarchical temporal memory k-nearest neighbor algorithm Learning vector quantization Perceptron
|
Online machine learning : 6.883: Online Methods in Machine Learning: Theory and Applications. Alexander Rakhlin. MIT
|
Out-of-bag error : Out-of-bag (OOB) error, also called out-of-bag estimate, is a method of measuring the prediction error of random forests, boosted decision trees, and other machine learning models utilizing bootstrap aggregating (bagging). Bagging uses subsampling with replacement to create training samples for the model to learn from. OOB error is the mean prediction error on each training sample xi, using only the trees that did not have xi in their bootstrap sample. Bootstrap aggregating allows one to define an out-of-bag estimate of the prediction performance improvement by evaluating predictions on those observations that were not used in the building of the next base learner.
|
Out-of-bag error : When bootstrap aggregating is performed, two independent sets are created. One set, the bootstrap sample, is the data chosen to be "in-the-bag" by sampling with replacement. The out-of-bag set is all data not chosen in the sampling process. When this process is repeated, such as when building a random forest, many bootstrap samples and OOB sets are created. The OOB sets can be aggregated into one dataset, but each sample is only considered out-of-bag for the trees that do not include it in their bootstrap sample. For each bag sampled, the data is thus separated into two groups. As an example, consider how bagging could be used in the context of diagnosing disease: a set of patients forms the original dataset, but each model is trained only by the patients in its bag. The patients in each out-of-bag set can be used to test their respective models; the test would consider whether the model can accurately determine if the patient has the disease.
|
Out-of-bag error : Since each out-of-bag set is not used to train the model, it is a good test for the performance of the model. The specific calculation of OOB error depends on the implementation of the model, but a general calculation is as follows. Find all models (or trees, in the case of a random forest) that are not trained by the OOB instance. Take the majority vote of these models' results for the OOB instance, compared to the true value of the OOB instance. Compile the OOB error for all instances in the OOB dataset. The bagging process can be customized to fit the needs of a model. To ensure an accurate model, the bootstrap training sample size should be close to that of the original set. Also, the number of iterations (trees) of the model (forest) should be considered to find the true OOB error. The OOB error will stabilize over many iterations, so starting with a high number of iterations is a good idea. Once the forest is set up, the OOB error can be found using the method above.
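As a concrete illustration, scikit-learn's random forest exposes exactly this estimate via `oob_score=True` (a minimal sketch; the dataset is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)
# With oob_score=True, each sample is scored only by the trees whose
# bootstrap sample did not contain it, as described above.
forest = RandomForestClassifier(n_estimators=200, oob_score=True,
                                bootstrap=True, random_state=0).fit(X, y)
print(1.0 - forest.oob_score_)   # OOB error = 1 - OOB accuracy
```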
|
Out-of-bag error : Out-of-bag error and cross-validation (CV) are different methods of measuring the error estimate of a machine learning model. Over many iterations, the two methods should produce a very similar error estimate. That is, once the OOB error stabilizes, it will converge to the cross-validation (specifically leave-one-out cross-validation) error. The advantage of the OOB method is that it requires less computation and allows one to test the model as it is being trained.
|
Out-of-bag error : Out-of-bag error is used frequently for error estimation within random forests. However, a study by Silke Janitza and Roman Hornung concluded that out-of-bag error tends to overestimate in settings that include an equal number of observations from all response classes (balanced samples), small sample sizes, a large number of predictor variables, small correlation between predictors, and weak effects.
|
Out-of-bag error : Boosting (meta-algorithm) Bootstrap aggregating Bootstrapping (statistics) Cross-validation (statistics) Random forest Random subspace method (attribute bagging)
|
Prefrontal cortex basal ganglia working memory : Prefrontal cortex basal ganglia working memory (PBWM) is an algorithm that models working memory in the prefrontal cortex and the basal ganglia. It can be compared to long short-term memory (LSTM) in functionality, but is more biologically explainable. It uses the primary value learned value (PVLV) model to train the prefrontal cortex working-memory updating system, based on the biology of the prefrontal cortex and basal ganglia. It is used as part of the Leabra framework and was implemented in Emergent in 2019.
|
Prefrontal cortex basal ganglia working memory : The prefrontal cortex has long been thought to subserve both working memory (the holding of information online for processing) and "executive" functions (deciding how to manipulate working memory and perform processing). Although many computational models of working memory have been developed, the mechanistic basis of executive function remains elusive. PBWM is a computational model of the prefrontal cortex to control both itself and other brain areas in a strategic, task-appropriate manner. These learning mechanisms are based on subcortical structures in the midbrain, basal ganglia and amygdala, which together form an actor/critic architecture. The critic system learns which prefrontal representations are task-relevant and trains the actor, which in turn provides a dynamic gating mechanism for controlling working memory updating. Computationally, the learning mechanism is designed to simultaneously solve the temporal and structural credit assignment problems. The model's performance compares favorably with standard backpropagation-based temporal learning mechanisms on the challenging 1-2-AX working memory task, and other benchmark working memory tasks.
|
Prefrontal cortex basal ganglia working memory : There are multiple separate stripes (groups of units) in the prefrontal cortex and striatum layers. Each stripe can be independently updated, so the system can remember several different things at once, each with a different "updating policy" governing when memories are updated and maintained. The active maintenance of memory occurs in the prefrontal cortex (PFC), while the updating signals (and the updating policy more generally) come from the striatum units (a subset of basal ganglia units). PVLV provides reinforcement learning signals to train the dynamic gating system in the basal ganglia.
|
Prefrontal cortex basal ganglia working memory : State–action–reward–state–action Sammon Mapping Constructing skill trees == References ==
|
Prescription monitoring program : In the United States, prescription monitoring programs (PMPs) or prescription drug monitoring programs (PDMPs) are state-run programs which collect and distribute data about the prescription and dispensation of federally controlled substances and, depending on state requirements, other potentially abusable prescription drugs. PMPs are meant to help prevent adverse drug-related events such as opioid overdoses, drug diversion, and substance abuse by decreasing the amount and/or frequency of opioid prescribing, and by identifying patients who obtain prescriptions from multiple providers (i.e., "doctor shopping") or physicians who overprescribe opioids. Most US health care workers support the idea of PMPs, which are intended to assist physicians, physician assistants, nurse practitioners, dentists and other prescribers, as well as the pharmacists, chemists and support staff of dispensing establishments. The database, whose use is required by state law, typically requires prescribers and pharmacies dispensing controlled substances to register with their respective state PMPs and (for pharmacies and providers who dispense from their offices) to report the dispensation of such prescriptions to an electronic online database. The majority of PMPs are authorized to notify law enforcement agencies, licensing boards, or physicians when a prescriber or a patient receiving prescriptions exceeds thresholds established by the state. All states have implemented PDMPs, although evidence for the effectiveness of these programs is mixed. While prescription of opioids has decreased with PMP use, overdose deaths in many states have actually increased, with those states sharing data with neighboring jurisdictions or requiring reporting of more drugs experiencing the highest increases in deaths. This may be because patients declined opioid prescriptions turn to street drugs, whose potency and contaminants carry greater overdose risk.
|
Prescription monitoring program : Prescription drug monitoring programs, or PDMPs, are an example of one initiative proposed to alleviate effects of the opioid crisis. The programs are designed to restrict prescription drug abuse by limiting a patient's ability to obtain similar prescriptions from multiple providers (i.e. "doctor shopping") and reducing diversion of controlled substances. This is meant to reduce the risk of fatal overdose caused by high doses of opioids or interactions between opioids and benzodiazepines, and to enable better decision making on the part of healthcare providers who may be unaware of a patient's prescription drug use, history or other prescriptions. PDMPs have been implemented in state legislation since 1939 in California, a time before electronic medical records, though implementation increased with growing awareness of overprescribing of opioids and overdose. A later New York state program was challenged and upheld by the U.S. Supreme Court in Whalen v. Roe. By 2019, 49 states, the District of Columbia, and Guam had enacted PDMP legislation. In 2021 Missouri, the last state without a PMP, adopted legislation to create one. PMPs are constantly being updated to increase speed of data collection, sharing of data across states, and ease of interpretation. This is being done by integrating PDMP reports with other health information technologies such as health information exchanges (HIE), electronic health record (EHR) systems, and/or pharmacy dispensing software systems. One program that has been implemented in nine states is called the PDMP Electronic Health Records Integration and Interoperability Expansion, also known as PEHRIIE. Another software, marketed by Bamboo Health and integrated with PMPs in 43 states, uses an algorithm to track factors thought to increase risk of diversion, abuse or overdose, and assigns patients a three-digit score based on presumed indicators of risk. While some studies have suggested that PDMP-HIT integration and sharing of interstate data brings benefits such as reduced opioid-related inpatient morbidity, others have found no or negative impact on mortality compared to states without PMP data sharing. Patient and media reports suggest the need for testing and evaluation of the algorithmic software used to score risk, with some patients reporting denial of prescriptions without clear explanation or clarity of data.
|
Prescription monitoring program : Most health care workers support PMPs, which are intended to assist physicians, physician assistants, nurse practitioners, dentists and other prescribers, the pharmacists, chemists and support staff of dispensing establishments, as well as law-enforcement agencies. The collaboration supports the legitimate medical use of controlled substances while limiting their abuse and diversion. Pharmacies dispensing controlled substances and prescribers typically must register with their respective state PMPs and (for pharmacies and providers who dispense controlled substances from their offices) report the dispensation to an electronic online database. Some pharmacy software can submit these reports automatically to multiple states.
|
Prescription monitoring program : Many doctors and researchers support the idea of PDMPs as a tool in combatting the opioid epidemic. Opioid prescribing, opioid diversion and supply, opioid misuse, and opioid-related morbidity and mortality are common elements in data entered into PDMPs. Prescription Monitoring Programs are purported to offer economic benefits for the states who implement them by decreasing overall health care costs, lost productivity, and investigation times. However, there are many studies that conclude the impact of PDMPs is unclear. While use of PMPs has been accompanied by decrease in opioid prescribing, few analyses consider corresponding use of street opioids, extramedical use, or diversion, which might provide a more holistic method for evaluation of PMP intent and efficacy. Evidence for PDMP impact on fatal overdoses is decidedly mixed, with multiple studies finding increased overdose rates in some states, decreases in others, or no clear impact. Interestingly, an increase in heroin overdoses after PDMP implementation has been commonly reported, presumably as denial of prescription opioids sends patients in search of street drugs. Narx Scores have been criticized by researchers and patient advocates for the lack of transparency in the generation process as well as the potential for disparate treatment of women and minority groups. Writing in Duke Law Journal, Jennifer Oliva stated that "black-box algorithms" are used to generate the scores.
|
Prototype methods : Prototype methods are machine learning methods that use data prototypes. A data prototype is a data value that reflects other values in its class, e.g., the centroid in a K-means clustering problem.
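As an illustrative sketch only (the data and class structure below are hypothetical), a nearest-prototype classifier can use one class centroid per class, in the spirit of the definition above:

    import numpy as np

    def fit_prototypes(X, y):
        """One prototype (the class centroid) per class; methods such as
        K-means or learning vector quantization refine prototypes further."""
        classes = np.unique(y)
        return classes, np.array([X[y == c].mean(axis=0) for c in classes])

    def predict(X, classes, prototypes):
        # Assign each point to the class of its nearest prototype.
        d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
        return classes[d.argmin(axis=1)]

    X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
    y = np.array([0, 0, 1, 1])
    classes, protos = fit_prototypes(X, y)
    print(predict(np.array([[0.1, 0.1], [5.0, 5.0]]), classes, protos))  # [0 1]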
|
Prototype methods : The following are some prototype methods: K-means clustering, learning vector quantization (LVQ), and Gaussian mixtures.
|
Prototype methods : While the k-nearest neighbors algorithm does not use prototypes, it is similar in spirit to prototype methods like K-means clustering. == References ==
|
PVLV : The primary value learned value (PVLV) model is a possible explanation for the reward-predictive firing properties of dopamine (DA) neurons. It simulates behavioral and neural data on Pavlovian conditioning and the midbrain dopaminergic neurons that fire in proportion to unexpected rewards. It is an alternative to the temporal-differences (TD) algorithm. It is used as part of Leabra. == References ==
|
Randomized weighted majority algorithm : The randomized weighted majority algorithm is an algorithm in machine learning theory for aggregating expert predictions over a series of decision problems. It is a simple and effective method based on weighted voting that improves on the mistake bound of the deterministic weighted majority algorithm. In fact, in the limit, its prediction rate can be arbitrarily close to that of the best-predicting expert.
|
Randomized weighted majority algorithm : Imagine that every morning before the stock market opens, we get a prediction from each of our "experts" about whether the stock market will go up or down. Our goal is to somehow combine this set of predictions into a single prediction that we then use to make a buy or sell decision for the day. The principal challenge is that we do not know which experts will give better or worse predictions. The RWMA gives us a way to do this combination such that our prediction record will be nearly as good as that of the single expert which, in hindsight, gave the most accurate predictions.
|
Randomized weighted majority algorithm : In machine learning, the weighted majority algorithm (WMA) is a deterministic meta-learning algorithm for aggregating expert predictions. In pseudocode, the WMA is as follows: initialize all experts to weight 1; for each round: add each expert's weight to the option they predicted, predict the option with the largest weighted sum, and multiply the weights of all experts who predicted wrongly by 1/2. Suppose there are n experts and the best expert makes m mistakes. Then the weighted majority algorithm makes at most 2.4(\log_2 n + m) mistakes. This bound is highly problematic in the case of highly error-prone experts. Suppose, for example, the best expert makes a mistake 20% of the time; that is, in N = 100 rounds using n = 10 experts, the best expert makes m = 20 mistakes. Then the weighted majority algorithm only guarantees an upper bound of 2.4(\log_2 10 + 20) \approx 56 mistakes. As this is a known limitation of the weighted majority algorithm, various strategies have been explored in order to improve the dependence on m. In particular, we can do better by introducing randomization. Drawing inspiration from the multiplicative weights update method (MWUM), we will probabilistically make predictions based on how the experts have performed in the past. Similarly to the WMA, every time an expert makes a wrong prediction, we will decrement their weight. Mirroring the MWUM, we will then use the weights to make a probability distribution over the actions and draw our action from this distribution (instead of deterministically picking the majority vote as the WMA does). A runnable version of the deterministic WMA is sketched below.
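A minimal Python sketch of the deterministic WMA for binary predictions, assuming 0/1 encoding of the options and hypothetical data:

    def weighted_majority(expert_preds, truths, penalty=0.5):
        """Deterministic WMA. expert_preds[t][i] is expert i's 0/1 prediction
        at round t; truths[t] is the correct answer. Returns the mistake count."""
        n = len(expert_preds[0])
        w = [1.0] * n
        mistakes = 0
        for preds, truth in zip(expert_preds, truths):
            totals = [0.0, 0.0]
            for wi, p in zip(w, preds):
                totals[p] += wi                 # weighted vote for each option
            guess = 0 if totals[0] >= totals[1] else 1
            mistakes += (guess != truth)
            # Halve the weight of every expert that predicted wrongly.
            w = [wi * penalty if p != truth else wi for wi, p in zip(w, preds)]
        return mistakes

    # Three experts over four rounds; expert 0 is always right.
    preds = [[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 0, 1]]
    truths = [1, 0, 1, 0]
    print(weighted_majority(preds, truths))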
|
Randomized weighted majority algorithm : The randomized weighted majority algorithm is an attempt to improve the dependence of the mistake bound of the WMA on m. Instead of predicting based on majority vote, the weights are used as probabilities for choosing the experts in each round and are updated over time (hence the name randomized weighted majority). Precisely, if w_i is the weight of expert i, let W = \sum_i w_i. We will follow expert i with probability w_i / W. This results in the following algorithm: initialize all experts to weight 1; for each round: add all experts' weights together to obtain the total weight W, choose expert i randomly with probability w_i / W, predict as the chosen expert predicts, and multiply the weights of all experts who predicted wrongly by \beta. (A runnable sketch follows below.) The goal is to bound the worst-case expected number of mistakes, assuming that the adversary has to select one of the answers as correct before we make our coin toss. This is a reasonable assumption in, for instance, the stock market example provided above: the variance of a stock price should not depend on the opinions of experts that influence private buy or sell decisions, so we can treat the price change as if it was decided before the experts gave their recommendations for the day. The randomized algorithm is better in the worst case than the deterministic algorithm (weighted majority algorithm): in the latter, the worst case was when the weights were split 50/50, whereas in the randomized version, since the weights are used as probabilities, there would still be a 50/50 chance of getting it right. In addition, generalizing to multiplying the weights of the incorrect experts by \beta < 1 instead of strictly 1/2 allows us to trade off between the dependence on m and on \log_2 n. This trade-off will be quantified in the analysis section.
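A matching sketch of the randomized version, with the same hypothetical 0/1 encoding:

    import random

    def randomized_weighted_majority(expert_preds, truths, beta=0.5, seed=0):
        """RWMA: follow expert i with probability w_i / W each round, then
        multiply the weights of the wrong experts by beta."""
        rng = random.Random(seed)
        n = len(expert_preds[0])
        w = [1.0] * n
        mistakes = 0
        for preds, truth in zip(expert_preds, truths):
            # Sample an expert in proportion to its current weight.
            i = rng.choices(range(n), weights=w)[0]
            mistakes += (preds[i] != truth)
            w = [wi * beta if p != truth else wi for wi, p in zip(w, preds)]
        return mistakes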
|
Randomized weighted majority algorithm : Let W_t denote the total weight of all experts at round t, let F_t denote the fraction of that weight placed on experts which predict the wrong answer at round t, and let N be the total number of rounds in the process. By definition, F_t is the probability that the algorithm makes a mistake on round t. It follows from the linearity of expectation that if M denotes the total number of mistakes made during the entire process, then E[M] = \sum_{t=1}^{N} F_t. After round t, the total weight is decreased by (1-\beta)F_t W_t, since all weights corresponding to a wrong answer are multiplied by \beta < 1. It then follows that W_{t+1} = W_t(1-(1-\beta)F_t). By telescoping, since W_1 = n, the total weight after the process concludes is W_{N+1} = n \prod_{t=1}^{N}(1-(1-\beta)F_t). On the other hand, suppose that m is the number of mistakes made by the best-performing expert. At the end, this expert has weight \beta^m, so the total weight is at least this much; in other words, W_{N+1} \geq \beta^m. This inequality and the above result imply \beta^m \leq n \prod_{t=1}^{N}(1-(1-\beta)F_t). Taking the natural logarithm of both sides yields m \ln \beta \leq \ln n + \sum_{t=1}^{N} \ln(1-(1-\beta)F_t). Now, the Taylor series of the natural logarithm is \ln(1-x) = -x - x^2/2 - x^3/3 - \cdots; in particular, it follows that \ln(1-(1-\beta)F_t) < -(1-\beta)F_t. Thus, m \ln \beta < \ln n - (1-\beta)\sum_{t=1}^{N} F_t. Recalling that E[M] = \sum_{t=1}^{N} F_t and rearranging, it follows that E[M] < \frac{m \ln(1/\beta) + \ln n}{1-\beta}. Now, as \beta \to 1 from below, the first coefficient, \ln(1/\beta)/(1-\beta), tends to 1; however, the second coefficient, 1/(1-\beta), tends to +\infty. To quantify this tradeoff, define \varepsilon = 1-\beta to be the penalty associated with getting a prediction wrong. Then, again applying the Taylor series of the natural logarithm, \ln(1/\beta) = -\ln(1-\varepsilon) = \varepsilon + \varepsilon^2/2 + O(\varepsilon^3). It then follows that the mistake bound, for small \varepsilon, can be written in the form E[M] \leq (1 + \varepsilon/2 + O(\varepsilon^2))\,m + \varepsilon^{-1}\ln(n). In English, the less we penalize experts for their mistakes, the more the additional experts contribute initial mistakes, but the closer we get to capturing the predictive accuracy of the best expert as time goes on. In particular, given a sufficiently low value of \varepsilon and enough rounds, the randomized weighted majority algorithm can get arbitrarily close to the correct prediction rate of the best expert. In particular, as long as m is sufficiently large compared to \ln(n) (so that their ratio is sufficiently small), we can assign \varepsilon = \sqrt{\ln(n)/m} and obtain an upper bound on the number of mistakes equal to m + O(\sqrt{m \ln(n)}). This implies that the "regret bound" on the algorithm (that is, how much worse it performs than the best expert) is sublinear, at O(\sqrt{m \ln(n)}).
|
Randomized weighted majority algorithm : Recall that the motivation for the randomized weighted majority algorithm was given by an example where the best expert makes a mistake 20% of the time. Precisely, in N = 100 rounds, with n = 10 experts, where the best expert makes m = 20 mistakes, the deterministic weighted majority algorithm only guarantees an upper bound of 2.4(\log_2 10 + 20) \approx 56 mistakes. By the analysis above, minimizing the number of worst-case expected mistakes is equivalent to minimizing the function f(\beta) = \frac{m \ln(1/\beta) + \ln n}{1-\beta} over \beta \in (0,1). Computational methods show that the optimal value is roughly \beta \approx 0.641, which results in a minimal worst-case number of expected mistakes of E[M] \approx 31.19. When the number of rounds is increased (say, to N = 1{,}000{,}000) while the accuracy rate of the best expert is kept the same, the improvement can be even more dramatic; the weighted majority algorithm guarantees only a worst-case mistake rate of 48.0%, but the randomized weighted majority algorithm, when properly tuned to the optimal value of \varepsilon \approx 0.0117, achieves a worst-case mistake rate of 20.2%.
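Given the bound E[M] < (m ln(1/β) + ln n)/(1−β) from the analysis, the optimal β can be found numerically; a small illustrative sketch:

    import numpy as np

    def expected_mistake_bound(beta, m, n):
        # E[M] <= (m*ln(1/beta) + ln(n)) / (1 - beta), from the analysis above.
        return (m * np.log(1.0 / beta) + np.log(n)) / (1.0 - beta)

    betas = np.linspace(0.01, 0.999, 100_000)
    bounds = expected_mistake_bound(betas, m=20, n=10)
    print(betas[bounds.argmin()], bounds.min())   # roughly 0.641 and 31.19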
|
Randomized weighted majority algorithm : The Randomized Weighted Majority Algorithm can be used to combine multiple algorithms in which case RWMA can be expected to perform nearly as well as the best of the original algorithms in hindsight. Note that the RWMA can be generalized to solve problems which do not have binary mistake variables, which makes it useful for a wide class of problems. Furthermore, one can apply the Randomized Weighted Majority Algorithm in situations where experts are making choices that cannot be combined (or can't be combined easily). For example, RWMA can be applied to repeated game-playing or the online shortest path problem. In the online shortest path problem, each expert is telling you a different way to drive to work. You pick one path using RWMA. Later you find out how well you would have done using all of the suggested paths and penalize appropriately. The goal is to have an expected loss not much larger than the loss of the best expert.
|
Randomized weighted majority algorithm : Multi-armed bandit problem. Efficient algorithm for some cases with many experts. Sleeping experts/"specialists" setting.
|
Randomized weighted majority algorithm : Machine learning Weighted majority algorithm Game theory Multi-armed bandit
|
Randomized weighted majority algorithm : Avrim Blum (2004). "Weighted Majority & Randomized Weighted Majority". Machine learning theory lecture notes. Rob Schapire (2006). Foundations of Machine Learning: "Predicting From Experts' Advice". Uri Feige, Robi Krauthgamer, Moni Naor. Algorithmic Game Theory. Nika Haghtalab (2020). Theoretical Foundations of Machine Learning (notes).
|
Repeated incremental pruning to produce error reduction (RIPPER) : In machine learning, repeated incremental pruning to produce error reduction (RIPPER) is a propositional rule learner proposed by William W. Cohen as an optimized version of IREP.
|
Rule-based machine learning : Rule-based machine learning (RBML) is a term in computer science intended to encompass any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate or apply. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. Rule-based machine learning approaches include learning classifier systems, association rule learning, artificial immune systems, and any other method that relies on a set of rules, each covering contextual knowledge. While rule-based machine learning is conceptually a type of rule-based system, it is distinct from traditional rule-based systems, which are often hand-crafted, and from other rule-based decision makers. This is because rule-based machine learning applies some form of learning algorithm (such as rough set theory) to identify and minimise the set of features and to automatically identify useful rules, rather than a human needing to apply prior domain knowledge to manually construct rules and curate a rule set.
|
Rule-based machine learning : Rules typically take the form of an 'IF–THEN' expression (e.g., IF 'condition' THEN 'result', or, as a more specific example, IF 'gene A is expressed' AND 'gene B is expressed' THEN 'cancer is present'). An individual rule is not in itself a model, since a rule is only applicable when its condition is satisfied. Therefore, rule-based machine learning methods typically comprise a set of rules, or knowledge base, that collectively make up the prediction model, usually known as a decision algorithm. Rules can also be interpreted in various ways depending on the domain knowledge, the data types (discrete or continuous), and combinations thereof; a minimal sketch of such a rule set follows below.
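As an illustrative sketch only (the gene names and rules are hypothetical, and rule-based learners induce such rules from data rather than hand-coding them), a rule set can be represented as condition–result pairs:

    # Each rule is an (IF-condition, THEN-result) pair; the set is the model.
    rules = [
        (lambda x: x["gene_A"] and x["gene_B"], "cancer is present"),
        (lambda x: x["gene_A"] and not x["gene_B"], "benign"),
    ]

    def apply_rules(rules, sample, default="unknown"):
        """Fire the first rule whose IF-condition the sample satisfies."""
        for condition, result in rules:
            if condition(sample):
                return result
        return default

    print(apply_rules(rules, {"gene_A": True, "gene_B": True}))  # cancer is present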
|
Skill chaining : Skill chaining is a skill discovery method in continuous reinforcement learning. It has been extended to high-dimensional continuous domains by the related Deep skill chaining algorithm.
|
Skill chaining : Konidaris, George; Andrew Barto (2009). "Skill discovery in continuous reinforcement learning domains using skill chaining". Advances in Neural Information Processing Systems 22. Bagaria, Akhil; George Konidaris (2020). "Option discovery using deep skill chaining". International Conference on Learning Representations.
|
Sparse PCA : Sparse principal component analysis (SPCA or sparse PCA) is a technique used in statistical analysis and, in particular, in the analysis of multivariate data sets. It extends the classic method of principal component analysis (PCA) for the reduction of dimensionality of data by introducing sparsity structures to the input variables. A particular disadvantage of ordinary PCA is that the principal components are usually linear combinations of all input variables. SPCA overcomes this disadvantage by finding components that are linear combinations of just a few input variables, called sparse principal components (SPCs). This means that some of the coefficients of the linear combinations defining the SPCs, called loadings, are equal to zero. The number of nonzero loadings is called the cardinality of the SPC.
|
Sparse PCA : Consider a data matrix, X, where each of the p columns represents an input variable, and each of the n rows represents an independent sample from the data population. One assumes each column of X has mean zero; otherwise one can subtract the column-wise mean from each element of X. Let \Sigma = \frac{1}{n-1}X^\top X be the empirical covariance matrix of X, which has dimension p \times p. Given an integer k with 1 \leq k \leq p, the sparse PCA problem can be formulated as maximizing the variance along a direction represented by a vector v \in \mathbb{R}^p while constraining its cardinality: \max_v v^\top \Sigma v subject to \|v\|_2 = 1 and \|v\|_0 \leq k (Eq. 1). The first constraint specifies that v is a unit vector. In the second constraint, \|v\|_0 represents the \ell_0 pseudo-norm of v, defined as the number of its non-zero components; the constraint therefore requires that the number of non-zero components in v be at most k, typically an integer much smaller than the dimension p. The optimal value of Eq. 1 is known as the k-sparse largest eigenvalue. If one takes k = p, the problem reduces to ordinary PCA, and the optimal value becomes the largest eigenvalue of the covariance matrix \Sigma. After finding the optimal solution v, one deflates \Sigma to obtain a new matrix \Sigma_1 = \Sigma - (v^\top \Sigma v)vv^\top and iterates this process to obtain further principal components. However, unlike PCA, sparse PCA cannot guarantee that different principal components are orthogonal; to achieve orthogonality, additional constraints must be enforced. The following equivalent definition is in matrix form. Let V be a p \times p symmetric matrix; one can rewrite the sparse PCA problem as \max_V \mathrm{Tr}(\Sigma V) subject to \mathrm{Tr}(V) = 1, \|V\|_0 \leq k^2, \mathrm{Rank}(V) = 1 and V \succeq 0 (Eq. 2). Here \mathrm{Tr} is the matrix trace, and \|V\|_0 represents the number of non-zero elements in the matrix V. The rank-one and positive-semidefiniteness conditions mean that one can write V = vv^\top, so Eq. 2 is equivalent to Eq. 1. Moreover, the rank constraint in this formulation is actually redundant, and therefore sparse PCA can be cast as the following mixed-integer semidefinite program: \max_{V,z} \mathrm{Tr}(\Sigma V) subject to \mathrm{Tr}(V) = 1; |V_{i,i}| \leq z_i for all i; |V_{i,j}| \leq \tfrac{1}{2}z_i for all i \neq j; V \succeq 0; z \in \{0,1\}^p; and \sum_i z_i \leq k (Eq. 3). Because of the cardinality constraint, the maximization problem is hard to solve exactly, especially when the dimension p is high. In fact, the sparse PCA problem in Eq. 1 is NP-hard in the strong sense. A brute-force sketch for small p appears below.
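Since for a fixed support S the maximum of v^\top \Sigma v over unit vectors supported on S is the largest eigenvalue of the corresponding k \times k submatrix, Eq. 1 can be solved exactly for small p by enumerating supports. A minimal Python sketch with hypothetical data:

    import numpy as np
    from itertools import combinations

    def sparse_pca_brute_force(Sigma, k):
        """Exactly solve Eq. 1 for small p by enumerating all size-k supports."""
        p = Sigma.shape[0]
        best_val, best_v = -np.inf, None
        for S in combinations(range(p), k):
            vals, vecs = np.linalg.eigh(Sigma[np.ix_(S, S)])  # eigh: symmetric input
            if vals[-1] > best_val:                           # eigenvalues are ascending
                best_val = vals[-1]
                best_v = np.zeros(p)
                best_v[list(S)] = vecs[:, -1]
        return best_val, best_v

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 6))
    X -= X.mean(axis=0)                    # column-wise mean removal, as assumed above
    Sigma = X.T @ X / (X.shape[0] - 1)     # empirical covariance
    val, v = sparse_pca_brute_force(Sigma, k=2)
    print(val, np.nonzero(v)[0])           # 2-sparse largest eigenvalue and its support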
|
Sparse PCA : As with most sparse problems, variable selection in SPCA is a computationally intractable, non-convex, NP-hard problem, so greedy, sub-optimal algorithms are often employed to find solutions. Note also that SPCA introduces hyperparameters controlling how strongly large parameter values are penalized. These may need tuning to achieve satisfactory performance, adding to the total computational cost.
|
Sparse PCA : Several alternative approaches (to Eq. 1) have been proposed, including a regression framework, a penalized matrix decomposition framework, a convex relaxation/semidefinite programming framework, a generalized power method framework, an alternating maximization framework, forward-backward greedy search and exact methods using branch-and-bound techniques, a certifiably optimal branch-and-bound approach, a Bayesian formulation framework, and a certifiably optimal mixed-integer semidefinite branch-and-cut approach. The methodological and theoretical developments of sparse PCA, as well as its applications in scientific studies, were recently reviewed in a survey paper.
|
Sparse PCA : amanpg – R package for sparse PCA using the alternating manifold proximal gradient method. elasticnet – R package for sparse estimation and sparse PCA using elastic nets. epca – R package for exploratory principal component analysis of large-scale datasets, including sparse principal component analysis and sparse matrix approximation. nsprcomp – R package for sparse and/or non-negative PCA based on thresholded power iterations. scikit-learn – Python library for machine learning which contains sparse PCA and other techniques in the decomposition module. A usage sketch for the scikit-learn implementation follows below.
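As a brief illustration (hypothetical data; note that scikit-learn's SparsePCA optimizes an \ell_1-penalized relaxation rather than the exact \ell_0-constrained problem of Eq. 1):

    import numpy as np
    from sklearn.decomposition import SparsePCA

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    X -= X.mean(axis=0)                  # center the columns

    # alpha controls the L1 penalty: larger alpha yields sparser loadings.
    spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
    scores = spca.fit_transform(X)       # projected data, shape (200, 3)

    # Cardinality (number of non-zero loadings) of each component.
    print((spca.components_ != 0).sum(axis=1))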
|
Sparse PCA : == References ==
|
State–action–reward–state–action : State–action–reward–state–action (SARSA) is an algorithm for learning a Markov decision process policy, used in the reinforcement learning area of machine learning. It was proposed by Rummery and Niranjan in a technical note with the name "Modified Connectionist Q-Learning" (MCQ-L). The alternative name SARSA, proposed by Rich Sutton, was only mentioned as a footnote. This name reflects the fact that the main function for updating the Q-value depends on the current state of the agent "S1", the action the agent chooses "A1", the reward "R2" the agent gets for choosing this action, the state "S2" that the agent enters after taking that action, and finally the next action "A2" the agent chooses in its new state. The acronym for the quintuple (St, At, Rt+1, St+1, At+1) is SARSA. Some authors use a slightly different convention and write the quintuple (St, At, Rt, St+1, At+1), depending on which time step the reward is formally assigned. The rest of the article uses the former convention.
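As a minimal sketch (the update rule below is the standard textbook SARSA rule, not spelled out in the excerpt above; the state/action sizes, learning rate α, and discount γ are hypothetical), one step of SARSA adjusts Q(St, At) toward Rt+1 + γQ(St+1, At+1):

    import numpy as np

    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
        """One SARSA update from the quintuple (S_t, A_t, R_{t+1}, S_{t+1}, A_{t+1})."""
        td_target = r + gamma * Q[s_next, a_next]   # bootstrap on the *chosen* next action
        Q[s, a] += alpha * (td_target - Q[s, a])
        return Q

    Q = np.zeros((5, 2))                            # 5 states, 2 actions
    Q = sarsa_update(Q, s=0, a=1, r=1.0, s_next=2, a_next=0)
    print(Q[0, 1])                                  # 0.1 after one update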