State–action–reward–state–action : Q^{new}(S_t, A_t) \leftarrow (1-\alpha)\,Q(S_t, A_t) + \alpha\,[R_{t+1} + \gamma\,Q(S_{t+1}, A_{t+1})] A SARSA agent interacts with the environment and updates the policy based on actions taken, hence this is known as an on-policy learning algorithm. The Q value for a state–action pair is updated by an error, scaled by the learning rate \alpha. Q values represent the possible reward received in the next time step for taking action a in state s, plus the discounted future reward received from the next state–action observation. Watkins's Q-learning updates an estimate of the optimal state–action value function Q^* based on the maximum reward of the available actions. While SARSA learns the Q values associated with the policy it follows itself, Watkins's Q-learning learns the Q values associated with the optimal policy while following an exploration/exploitation policy. Some optimizations of Watkins's Q-learning may also be applied to SARSA.
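As an illustration of the update rule above, the following is a minimal tabular sketch in Python; the dictionary-based Q table, the learning rate, discount factor and epsilon-greedy action selection are illustrative assumptions rather than part of the formal description:

    import random
    from collections import defaultdict

    # Q = defaultdict(float) gives every unseen state-action pair an initial value of 0.
    def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
        # Q_new(S_t, A_t) <- (1 - alpha) * Q(S_t, A_t) + alpha * (R_{t+1} + gamma * Q(S_{t+1}, A_{t+1}))
        Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * Q[(s_next, a_next)])

    def epsilon_greedy(Q, s, actions, epsilon=0.1):
        # On-policy: the same policy that is being improved is used to pick A_{t+1}.
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])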
|
State–action–reward–state–action : Prefrontal cortex basal ganglia working memory Sammon mapping Constructing skill trees Q-learning Temporal difference learning Reinforcement learning == References ==
|
Stochastic gradient descent : Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate. The basic idea behind stochastic approximation can be traced back to the Robbins–Monro algorithm of the 1950s. Today, stochastic gradient descent has become an important optimization method in machine learning.
|
Stochastic gradient descent : Both statistical estimation and machine learning consider the problem of minimizing an objective function that has the form of a sum: Q(w) = \frac{1}{n}\sum_{i=1}^{n} Q_i(w), where the parameter w that minimizes Q(w) is to be estimated. Each summand function Q_i is typically associated with the i-th observation in the data set (used for training). In classical statistics, sum-minimization problems arise in least squares and in maximum-likelihood estimation (for independent observations). The general class of estimators that arise as minimizers of sums are called M-estimators. However, in statistics, it has long been recognized that requiring even local minimization is too restrictive for some problems of maximum-likelihood estimation. Therefore, contemporary statistical theorists often consider stationary points of the likelihood function (or zeros of its derivative, the score function, and other estimating equations). The sum-minimization problem also arises for empirical risk minimization. There, Q_i(w) is the value of the loss function at the i-th example, and Q(w) is the empirical risk. When used to minimize the above function, a standard (or "batch") gradient descent method would perform the following iterations: w := w - \eta\,\nabla Q(w) = w - \frac{\eta}{n}\sum_{i=1}^{n}\nabla Q_i(w). The step size is denoted by \eta (sometimes called the learning rate in machine learning) and here ":=" denotes the update of a variable in the algorithm. In many cases, the summand functions have a simple form that enables inexpensive evaluations of the sum-function and the sum gradient. For example, in statistics, one-parameter exponential families allow economical function evaluations and gradient evaluations. However, in other cases, evaluating the sum-gradient may require expensive evaluations of the gradients from all summand functions. When the training set is enormous and no simple formulas exist, evaluating the sums of gradients becomes very expensive, because evaluating the gradient requires evaluating all the summand functions' gradients. To economize on the computational cost at every iteration, stochastic gradient descent samples a subset of summand functions at every step. This is very effective in the case of large-scale machine learning problems.
|
Stochastic gradient descent : In stochastic (or "on-line") gradient descent, the true gradient of Q(w) is approximated by the gradient at a single sample: w := w - \eta\,\nabla Q_i(w). As the algorithm sweeps through the training set, it performs the above update for each training sample. Several passes can be made over the training set until the algorithm converges. If this is done, the data can be shuffled for each pass to prevent cycles. Typical implementations may use an adaptive learning rate so that the algorithm converges. In pseudocode, stochastic gradient descent can be presented as in the sketch below. A compromise between computing the true gradient and the gradient at a single sample is to compute the gradient against more than one training sample (called a "mini-batch") at each step. This can perform significantly better than "true" stochastic gradient descent as described, because the code can make use of vectorization libraries rather than computing each step separately, as was first shown in the work that called it "the bunch-mode back-propagation algorithm". It may also result in smoother convergence, as the gradient computed at each step is averaged over more training samples. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and of stochastic approximation. Briefly, when the learning rates \eta decrease at an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudoconvex, and otherwise converges almost surely to a local minimum. This is in fact a consequence of the Robbins–Siegmund theorem.
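The pseudocode referred to above can be sketched as follows; the user-supplied mini-batch gradient function, the constant step size and the batch size are assumptions made only for illustration:

    import numpy as np

    def sgd(grad_batch, w0, n, eta=0.01, epochs=10, batch_size=32, seed=0):
        # grad_batch(w, idx) should return the average gradient of the summand functions Q_i for i in idx.
        rng = np.random.default_rng(seed)
        w = np.asarray(w0, dtype=float)
        for _ in range(epochs):
            order = rng.permutation(n)                  # shuffle each pass to prevent cycles
            for start in range(0, n, batch_size):
                idx = order[start:start + batch_size]   # a mini-batch (batch_size = 1 gives plain SGD)
                w = w - eta * grad_batch(w, idx)        # w := w - eta * grad Q_i(w)
        return w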
|
Stochastic gradient descent : Suppose we want to fit a straight line \hat{y} = w_1 + w_2 x to a training set with observations ((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)) and corresponding estimated responses (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n) using least squares. The objective function to be minimized is Q(w) = \sum_{i=1}^{n} Q_i(w) = \sum_{i=1}^{n} (\hat{y}_i - y_i)^2 = \sum_{i=1}^{n} (w_1 + w_2 x_i - y_i)^2. The last line in the above pseudocode for this specific problem becomes: \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \leftarrow \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} - \eta \begin{bmatrix} \frac{\partial}{\partial w_1}(w_1 + w_2 x_i - y_i)^2 \\ \frac{\partial}{\partial w_2}(w_1 + w_2 x_i - y_i)^2 \end{bmatrix} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} - \eta \begin{bmatrix} 2(w_1 + w_2 x_i - y_i) \\ 2 x_i (w_1 + w_2 x_i - y_i) \end{bmatrix}. Note that in each iteration or update step, the gradient is only evaluated at a single x_i. This is the key difference between stochastic gradient descent and batch gradient descent. In general, given a linear regression problem \hat{y} = \sum_{k \in 1:m} w_k x_k, stochastic gradient descent behaves differently when m < n (underparameterized) and m \geq n (overparameterized). In the overparameterized case, stochastic gradient descent converges to \arg\min_{w :\, w^T x_k = y_k \,\forall k \in 1:n} \|w - w_0\|. That is, SGD converges to the interpolation solution with minimum distance from the starting point w_0. This is true even when the learning rate remains constant. In the underparameterized case, SGD does not converge if the learning rate remains constant.
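A minimal sketch of the straight-line example above; the synthetic data, noise level, number of passes and constant step size are arbitrary choices made only for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 1.5 + 2.0 * x + rng.normal(scale=0.1, size=200)   # noisy line with true (w1, w2) = (1.5, 2.0)

    w1, w2, eta = 0.0, 0.0, 0.05
    for _ in range(20):                        # several passes over the training set
        for i in rng.permutation(len(x)):
            err = w1 + w2 * x[i] - y[i]        # residual at the single sample x_i
            w1 -= eta * 2 * err                # d/dw1 (w1 + w2*x_i - y_i)^2
            w2 -= eta * 2 * x[i] * err         # d/dw2 (w1 + w2*x_i - y_i)^2
    print(w1, w2)                              # approaches (1.5, 2.0)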
|
Stochastic gradient descent : In 1951, Herbert Robbins and Sutton Monro introduced the earliest stochastic approximation methods, preceding stochastic gradient descent. Building on this work one year later, Jack Kiefer and Jacob Wolfowitz published an optimization algorithm very close to stochastic gradient descent, using central differences as an approximation of the gradient. Later in the 1950s, Frank Rosenblatt used SGD to optimize his perceptron model, demonstrating the first applicability of stochastic gradient descent to neural networks. Backpropagation was first described in 1986, with stochastic gradient descent being used to efficiently optimize parameters across neural networks with multiple hidden layers. Soon after, another improvement was developed: mini-batch gradient descent, where small batches of data are substituted for single samples. In 1997, the practical performance benefits from vectorization achievable with such small batches were first explored, paving the way for efficient optimization in machine learning. As of 2023, this mini-batch approach remains the norm for training neural networks, balancing the benefits of stochastic gradient descent with those of gradient descent. By the 1980s, momentum had already been introduced, and it was added to SGD optimization techniques in 1986. However, these optimization techniques assumed constant hyperparameters, i.e. a fixed learning rate and momentum parameter. In the 2010s, adaptive approaches to applying SGD with a per-parameter learning rate were introduced, with AdaGrad (for "Adaptive Gradient") in 2011 and RMSprop (for "Root Mean Square Propagation") in 2012. In 2014, Adam (for "Adaptive Moment Estimation") was published, applying the adaptive approaches of RMSprop to momentum; many improvements and branches of Adam were then developed, such as Adadelta, AdamW, and Adamax. Within machine learning, approaches to optimization in 2023 are dominated by Adam-derived optimizers. TensorFlow and PyTorch, by far the most popular machine learning libraries, as of 2023 largely include only Adam-derived optimizers, as well as predecessors to Adam such as RMSprop and classic SGD. PyTorch also partially supports Limited-memory BFGS, a line-search method, but only for single-device setups without parameter groups.
|
Stochastic gradient descent : Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression (see, e.g., Vowpal Wabbit) and graphical models. When combined with the backpropagation algorithm, it is the de facto standard algorithm for training artificial neural networks. Its use has also been reported in the geophysics community, specifically for applications of Full Waveform Inversion (FWI). Stochastic gradient descent competes with the L-BFGS algorithm, which is also widely used. Stochastic gradient descent has been used since at least 1960 for training linear regression models, originally under the name ADALINE. Another stochastic gradient descent algorithm is the least mean squares (LMS) adaptive filter.
|
Stochastic gradient descent : Many improvements on the basic stochastic gradient descent algorithm have been proposed and used. In particular, in machine learning, the need to set a learning rate (step size) has been recognized as problematic. Setting this parameter too high can cause the algorithm to diverge; setting it too low makes it slow to converge. A conceptually simple extension of stochastic gradient descent makes the learning rate a decreasing function ηt of the iteration number t, giving a learning rate schedule, so that the first iterations cause large changes in the parameters, while the later ones do only fine-tuning. Such schedules have been known since the work of MacQueen on k-means clustering. Practical guidance on choosing the step size in several variants of SGD is given by Spall.
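As a sketch of such a schedule, a simple decreasing function of the iteration number can look like the following; the 1/(1 + decay*t) form and the constants are illustrative assumptions, not a recommendation from the text:

    def learning_rate(t, eta0=0.1, decay=0.01):
        # eta_t decreases with the iteration number t: large early steps, fine-tuning later.
        return eta0 / (1.0 + decay * t)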
|
Stochastic gradient descent : For a small learning rate \eta, stochastic gradient descent (w_n)_{n \in \mathbb{N}_0} can be viewed as a discretization of the gradient flow ODE \frac{d}{dt} W_t = -\nabla Q(W_t) subject to additional stochastic noise. This approximation is only valid on a finite time-horizon in the following sense: assume that all the coefficients Q_i are sufficiently smooth. Let T > 0 and let g : \mathbb{R}^d \to \mathbb{R} be a sufficiently smooth test function. Then, there exists a constant C > 0 such that for all \eta > 0, \max_{k = 0, \ldots, \lfloor T/\eta \rfloor} \left| \mathbb{E}[g(w_k)] - g(W_{k\eta}) \right| \leq C\eta, where \mathbb{E} denotes taking the expectation with respect to the random choice of indices in the stochastic gradient descent scheme. Since this approximation does not capture the random fluctuations around the mean behavior of stochastic gradient descent, solutions to stochastic differential equations (SDEs) have been proposed as limiting objects. More precisely, the solution to the SDE dW_t = -\nabla\left( Q(W_t) + \tfrac{1}{4}\eta |\nabla Q(W_t)|^2 \right) dt + \sqrt{\eta}\, \Sigma(W_t)^{1/2}\, dB_t, for \Sigma(w) = \frac{1}{n^2} \sum_{i=1}^{n} \left( \nabla Q_i(w) - \nabla Q(w) \right)\left( \nabla Q_i(w) - \nabla Q(w) \right)^T, where dB_t denotes the Itô integral with respect to a Brownian motion, is a more precise approximation in the sense that there exists a constant C > 0 such that \max_{k = 0, \ldots, \lfloor T/\eta \rfloor} \left| \mathbb{E}[g(w_k)] - \mathbb{E}[g(W_{k\eta})] \right| \leq C\eta^2. However, this SDE only approximates the one-point motion of stochastic gradient descent. For an approximation of the stochastic flow one has to consider SDEs with infinite-dimensional noise.
|
Stochastic gradient descent : Backtracking line search Broken Neural Scaling Law Coordinate descent – changes one coordinate at a time, rather than one example Linear classifier Online machine learning Stochastic hill climbing Stochastic variance reduction
|
Stochastic gradient descent : Bottou, Léon (2004), "Stochastic Learning", Advanced Lectures on Machine Learning, LNAI, vol. 3176, Springer, pp. 146–168, ISBN 978-3-540-23122-6 Buduma, Nikhil; Locascio, Nicholas (2017), "Beyond Gradient Descent", Fundamentals of Deep Learning : Designing Next-Generation Machine Intelligence Algorithms, O'Reilly, ISBN 9781491925584 LeCun, Yann A.; Bottou, Léon; Orr, Genevieve B.; Müller, Klaus-Robert (2012), "Efficient BackProp", Neural Networks: Tricks of the Trade, Springer, pp. 9–48, ISBN 978-3-642-35288-1 Spall, James C. (2003), Introduction to Stochastic Search and Optimization, Wiley, ISBN 978-0-471-33052-3
|
Stochastic gradient descent : "Gradient Descent, How Neural Networks Learn". 3Blue1Brown. October 16, 2017. Archived from the original on 2021-12-22 – via YouTube. Goh (April 4, 2017). "Why Momentum Really Works". Distill. 2 (4). doi:10.23915/distill.00006. Interactive paper explaining momentum.
|
Stochastic variance reduction : (Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical Stochastic approximation setting. Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.
|
Stochastic variance reduction : A function f is considered to have finite sum structure if it can be decomposed into a summation or average: f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x), where the function value and derivative of each f_i can be queried independently. Although variance reduction methods can be applied for any positive n and any f_i structure, their favorable theoretical and practical properties arise when n is large compared to the condition number of each f_i, and when the f_i have similar (but not necessarily identical) Lipschitz smoothness and strong convexity constants. The finite sum structure should be contrasted with the stochastic approximation setting, which deals with functions of the form f(\theta) = \mathbb{E}_{\xi}[F(\theta, \xi)], the expected value of a function depending on a random variable \xi. Any finite sum problem can be optimized using a stochastic approximation algorithm by using F(\cdot, \xi) = f_\xi.
|
Stochastic variance reduction : Stochastic variance reduced methods without acceleration are able to find a minimum of f within accuracy \epsilon > 0, i.e. f(x) - f(x^*) \leq \epsilon, in a number of steps of the order: O\left( \left( \frac{L}{\mu} + n \right) \log\left( \frac{1}{\epsilon} \right) \right). The number of steps depends only logarithmically on the level of accuracy required, in contrast to the stochastic approximation framework, where the number of steps required, O(L/(\mu\epsilon)), grows proportionally to the accuracy required. Stochastic variance reduction methods converge almost as fast as the gradient descent method's O((L/\mu)\log(1/\epsilon)) rate, despite using only a stochastic gradient, at a 1/n lower cost per step than gradient descent. Accelerated methods in the stochastic variance reduction framework achieve even faster convergence rates, requiring only O\left( \left( \sqrt{\tfrac{nL}{\mu}} + n \right) \log\left( \frac{1}{\epsilon} \right) \right) steps to reach \epsilon accuracy, potentially \sqrt{n} times faster than non-accelerated methods. Lower complexity bounds for the finite sum class establish that this rate is the fastest possible for smooth strongly convex problems.
|
Stochastic variance reduction : Variance reduction approaches fall within 3 main categories: table averaging methods, full-gradient snapshot methods and dual methods. Each category contains methods designed for dealing with convex, non-smooth, and non-convex problems, each differing in hyper-parameter settings and other algorithmic details.
|
Stochastic variance reduction : Accelerated variance reduction methods are built upon the standard methods above. The earliest approaches make use of proximal operators to accelerate convergence, either approximately or exactly. Direct acceleration approaches have also been developed.
|
Stochastic variance reduction : Stochastic gradient descent Coordinate descent Online machine learning Proximal operator Stochastic optimization Stochastic approximation == References ==
|
T-distributed stochastic neighbor embedding : t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. It is based on Stochastic Neighbor Embedding, originally developed by Geoffrey Hinton and Sam Roweis, with Laurens van der Maaten and Hinton later proposing the t-distributed variant. It is a nonlinear dimensionality reduction technique for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence (KL divergence) between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate. A Riemannian variant is UMAP. t-SNE has been used for visualization in a wide range of applications, including genomics, computer security research, natural language processing, music analysis, cancer research, bioinformatics, geological domain interpretation, and biomedical signal processing. For a data set with n elements, t-SNE runs in O(n^2) time and requires O(n^2) space.
|
T-distributed stochastic neighbor embedding : Given a set of N high-dimensional objects x_1, \ldots, x_N, t-SNE first computes probabilities p_{ij} that are proportional to the similarity of objects x_i and x_j, as follows. For i \neq j, define p_{j|i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)} and set p_{i|i} = 0. Note that the above denominator ensures \sum_j p_{j|i} = 1 for all i. As van der Maaten and Hinton explained: "The similarity of datapoint x_j to datapoint x_i is the conditional probability, p_{j|i}, that x_i would pick x_j as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at x_i." Now define p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}. This is motivated because p_i and p_j from the N samples are estimated as 1/N, so the conditional probability can be written as p_{i|j} = N p_{ij} and p_{j|i} = N p_{ji}. Since p_{ij} = p_{ji}, the previous formula follows. Also note that p_{ii} = 0 and \sum_{i,j} p_{ij} = 1. The bandwidth of the Gaussian kernels \sigma_i is set in such a way that the entropy of the conditional distribution equals a predefined entropy using the bisection method. As a result, the bandwidth is adapted to the density of the data: smaller values of \sigma_i are used in denser parts of the data space. The entropy increases with the perplexity of this distribution P_i; this relation is Perp(P_i) = 2^{H(P_i)}, where H(P_i) is the Shannon entropy H(P_i) = -\sum_j p_{j|i} \log_2 p_{j|i}. The perplexity is a hand-chosen parameter of t-SNE, and as the authors state, "perplexity can be interpreted as a smooth measure of the effective number of neighbors. The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50." Since the Gaussian kernel uses the Euclidean distance \lVert x_i - x_j \rVert, it is affected by the curse of dimensionality; in high-dimensional data, when distances lose the ability to discriminate, the p_{ij} become too similar (asymptotically, they would converge to a constant). It has been proposed to adjust the distances with a power transform, based on the intrinsic dimension of each point, to alleviate this. t-SNE aims to learn a d-dimensional map y_1, \ldots, y_N (with y_i \in \mathbb{R}^d and d typically chosen as 2 or 3) that reflects the similarities p_{ij} as well as possible. To this end, it measures similarities q_{ij} between two points in the map, y_i and y_j, using a very similar approach. Specifically, for i \neq j, define q_{ij} = \frac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_k \sum_{l \neq k} (1 + \lVert y_k - y_l \rVert^2)^{-1}} and set q_{ii} = 0. Herein a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points in order to allow dissimilar objects to be modeled far apart in the map.
The locations of the points y_i in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution P from the distribution Q, that is: \mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}. The minimization of the Kullback–Leibler divergence with respect to the points y_i is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs.
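The two similarity computations and the objective described above can be sketched compactly as follows; using fixed, user-supplied bandwidths sigma instead of the perplexity-driven bisection search is a simplifying assumption:

    import numpy as np

    def high_dim_affinities(X, sigma):
        # p_{j|i} proportional to a Gaussian centred at x_i, then symmetrised to p_{ij}.
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        P = np.exp(-d2 / (2.0 * sigma[:, None] ** 2))
        np.fill_diagonal(P, 0.0)
        P /= P.sum(axis=1, keepdims=True)        # conditional p_{j|i}, each row sums to 1
        return (P + P.T) / (2.0 * len(X))        # p_{ij} = (p_{j|i} + p_{i|j}) / (2N)

    def low_dim_affinities(Y):
        # q_{ij} uses a Student t kernel with one degree of freedom (a Cauchy kernel).
        d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        Q = 1.0 / (1.0 + d2)
        np.fill_diagonal(Q, 0.0)
        return Q / Q.sum()

    def kl_divergence(P, Q, eps=1e-12):
        # KL(P || Q), the quantity minimised over the map points by gradient descent.
        return np.sum(P * np.log((P + eps) / (Q + eps)))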
|
T-distributed stochastic neighbor embedding : While t-SNE plots often seem to display clusters, the visual clusters can be strongly influenced by the chosen parameterization (especially the perplexity) and so a good understanding of the parameters for t-SNE is needed. Such "clusters" can be shown to even appear in structured data with no clear clustering, and so may be false findings. Similarly, the size of clusters produced by t-SNE is not informative, and neither is the distance between clusters. Thus, interactive exploration may be needed to choose parameters and validate results. It has been shown that t-SNE can often recover well-separated clusters, and with special parameter choices, approximates a simple form of spectral clustering.
|
T-distributed stochastic neighbor embedding : A C++ implementation of Barnes-Hut t-SNE is available on the GitHub account of one of the original authors. The R package Rtsne implements t-SNE in R. ELKI contains tSNE, also with Barnes-Hut approximation. scikit-learn, a popular machine learning library in Python, implements t-SNE with both exact solutions and the Barnes-Hut approximation. Tensorboard, the visualization kit associated with TensorFlow, also implements t-SNE (online version). The Julia package TSne implements t-SNE.
|
T-distributed stochastic neighbor embedding : Wattenberg, Martin; Viégas, Fernanda; Johnson, Ian (2016-10-13). "How to Use t-SNE Effectively". Distill. 1 (10): e2. doi:10.23915/distill.00002. ISSN 2476-0757.. Interactive demonstration and tutorial. Visualizing Data Using t-SNE, Google Tech Talk about t-SNE Implementations of t-SNE in various languages, A link collection maintained by Laurens van der Maaten
|
Wake-sleep algorithm : The wake-sleep algorithm is an unsupervised learning algorithm for deep generative models, especially Helmholtz Machines. The algorithm is similar to the expectation-maximization algorithm, and optimizes the model likelihood for observed data. The name of the algorithm derives from its use of two learning phases, the “wake” phase and the “sleep” phase, which are performed alternately. It can be conceived as a model for learning in the brain, but is also being applied for machine learning.
|
Wake-sleep algorithm : The goal of the wake-sleep algorithm is to find a hierarchical representation of observed data. In a graphical representation of the algorithm, data is applied to the algorithm at the bottom, while higher layers form gradually more abstract representations. Between each pair of layers are two sets of weights: Recognition weights, which define how representations are inferred from data, and generative weights, which define how these representations relate to data.
|
Wake-sleep algorithm : Training consists of two phases – the “wake” phase and the “sleep” phase. It has been proven that this learning algorithm is convergent.
|
Wake-sleep algorithm : Since the recognition network is limited in its flexibility, it might not be able to approximate the posterior distribution of latent variables well. To better approximate the posterior distribution, it is possible to employ importance sampling, with the recognition network as the proposal distribution. This improved approximation of the posterior distribution also improves the overall performance of the model.
|
Wake-sleep algorithm : Restricted Boltzmann machine, a type of neural net that is trained with a conceptually similar algorithm. Helmholtz machine, a neural network model trained by the wake-sleep algorithm. == References ==
|
Weighted majority algorithm (machine learning) : In machine learning, the weighted majority algorithm (WMA) is a meta-learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which could be any type of learning algorithms, classifiers, or even real human experts. The algorithm assumes that we have no prior knowledge about the accuracy of the algorithms in the pool, but there are sufficient reasons to believe that one or more will perform well. Assume that the problem is a binary decision problem. To construct the compound algorithm, a positive weight is given to each of the algorithms in the pool. The compound algorithm then collects weighted votes from all the algorithms in the pool, and gives the prediction that has the higher vote. If the compound algorithm makes a mistake, the algorithms in the pool that contributed to the wrong prediction are discounted by a certain ratio \beta, where 0 < \beta < 1. It can be shown that the upper bound on the number of mistakes made in a given sequence of predictions from a pool of algorithms A is O(\log|A| + m) if one algorithm in A makes at most m mistakes. There are many variations of the weighted majority algorithm to handle different situations, like shifting targets, infinite pools, or randomized predictions. The core mechanism remains similar, with the final performance of the compound algorithm bounded by a function of the performance of the specialist (the best-performing algorithm) in the pool.
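A sketch of the binary scheme described above; the expert interface (callables returning 0 or 1), the labelled stream and the value of beta are illustrative assumptions:

    def weighted_majority(experts, stream, beta=0.5):
        # experts: list of prediction functions; stream: iterable of (x, true_label) with labels 0/1.
        weights = [1.0] * len(experts)
        mistakes = 0
        for x, label in stream:
            votes = [e(x) for e in experts]
            vote_1 = sum(w for w, v in zip(weights, votes) if v == 1)
            vote_0 = sum(w for w, v in zip(weights, votes) if v == 0)
            prediction = 1 if vote_1 >= vote_0 else 0
            if prediction != label:
                mistakes += 1
                # discount the experts that contributed to the wrong prediction
                weights = [w * beta if v == prediction else w
                           for w, v in zip(weights, votes)]
        return weights, mistakes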
|
Weighted majority algorithm (machine learning) : Randomized weighted majority algorithm == References ==
|
Zero-shot learning : Zero-shot learning (ZSL) is a problem setup in deep learning where, at test time, a learner observes samples from classes which were not observed during training, and needs to predict the class that they belong to. The name is a play on words based on the earlier concept of one-shot learning, in which classification can be learned from only one, or a few, examples. Zero-shot methods generally work by associating observed and non-observed classes through some form of auxiliary information, which encodes observable distinguishing properties of objects. For example, given a set of images of animals to be classified, along with auxiliary textual descriptions of what animals look like, an artificial intelligence model which has been trained to recognize horses, but has never been given a zebra, can still recognize a zebra when it also knows that zebras look like striped horses. This problem is widely studied in computer vision, natural language processing, and machine perception.
|
Zero-shot learning : The first paper on zero-shot learning in natural language processing appeared in a 2008 paper by Chang, Ratinov, Roth, and Srikumar, at the AAAI’08, but the name given to the learning paradigm there was dataless classification. The first paper on zero-shot learning in computer vision appeared at the same conference, under the name zero-data learning. The term zero-shot learning itself first appeared in the literature in a 2009 paper from Palatucci, Hinton, Pomerleau, and Mitchell at NIPS’09. This terminology was repeated later in another computer vision paper and the term zero-shot learning caught on, as a take-off on one-shot learning that was introduced in computer vision years earlier. In computer vision, zero-shot learning models learned parameters for seen classes along with their class representations and rely on representational similarity among class labels so that, during inference, instances can be classified into new classes. In natural language processing, the key technical direction developed builds on the ability to "understand the labels"—represent the labels in the same semantic space as that of the documents to be classified. This supports the classification of a single example without observing any annotated data, the purest form of zero-shot classification. The original paper made use of the Explicit Semantic Analysis (ESA) representation but later papers made use of other representations, including dense representations. This approach was also extended to multilingual domains, fine entity typing and other problems. Moreover, beyond relying solely on representations, the computational approach has been extended to depend on transfer from other tasks, such as textual entailment and question answering. The original paper also points out that, beyond the ability to classify a single example, when a collection of examples is given, with the assumption that they come from the same distribution, it is possible to bootstrap the performance in a semi-supervised like manner (or transductive learning). Unlike standard generalization in machine learning, where classifiers are expected to correctly classify new samples to classes they have already observed during training, in ZSL, no samples from the classes have been given during training the classifier. It can therefore be viewed as an extreme case of domain adaptation.
|
Zero-shot learning : Naturally, some form of auxiliary information has to be given about these zero-shot classes, and this type of information can be of several types. Learning with attributes: classes are accompanied by pre-defined structured description. For example, for bird descriptions, this could include "red head", "long beak". These attributes are often organized in a structured compositional way, and taking that structure into account improves learning. While this approach was used mostly in computer vision, there are some examples for it also in natural language processing. Learning from textual description. As pointed out above, this has been the key direction pursued in natural language processing. Here class labels are taken to have a meaning and are often augmented with definitions or free-text natural-language description. This could include for example a wikipedia description of the class. Class-class similarity. Here, classes are embedded in a continuous space. A zero-shot classifier can predict that a sample corresponds to some position in that space, and the nearest embedded class is used as a predicted class, even if no such samples were observed during training.
|
Zero-shot learning : The above ZSL setup assumes that at test time, only zero-shot samples are given, namely, samples from new unseen classes. In generalized zero-shot learning, samples from both new and known classes may appear at test time. This poses new challenges for classifiers at test time, because it is very challenging to estimate whether a given sample is new or known. Some approaches to handle this include: a gating module, which is first trained to decide whether a given sample comes from a new class or from an old one, and then, at inference time, outputs either a hard decision or a soft probabilistic decision; and a generative module, which is trained to generate feature representations of the unseen classes--a standard classifier can then be trained on samples from all classes, seen and unseen.
|
Zero-shot learning : Zero-shot learning has been applied to the following fields: image classification, semantic segmentation, image generation, object detection, natural language processing, and computational biology.
|
Zero-shot learning : One-shot learning in computer vision Transfer learning Fast mapping Explanation-based learning == References ==
|
Loss function : In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
|
Loss function : In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — the problem that Ragnar Frisch highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities and the European subsidies for equalizing unemployment rates among 271 German regions.
|
Loss function : In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X.
|
Loss function : A decision rule makes a choice using an optimality criterion. Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: \operatorname{arg\,min}_{\delta} \max_{\theta \in \Theta} R(\theta, \delta). Invariance: Choose the decision rule which satisfies an invariance requirement. Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): \operatorname{arg\,min}_{\delta} \operatorname{E}_{\theta \in \Theta}[R(\theta, \delta)] = \operatorname{arg\,min}_{\delta} \int_{\theta \in \Theta} R(\theta, \delta)\, p(\theta)\, d\theta.
|
Loss function : Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, L(a) = a^2, and the absolute loss, L(a) = |a|. However, the absolute loss has the disadvantage that it is not differentiable at a = 0. The squared loss has the disadvantage that it tends to be dominated by outliers—when summing over a set of a's (as in \sum_{i=1}^{n} L(a_i)), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive, and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.
|
Loss function : Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk
|
Loss function : Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011). "Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
|
Cross-entropy : In information theory, the cross-entropy between two probability distributions p and q , over the same underlying set of events, measures the average number of bits needed to identify an event drawn from the set when the coding scheme used for the set is optimized for an estimated probability distribution q , rather than the true distribution p .
|
Cross-entropy : The cross-entropy of the distribution q relative to a distribution p over a given set is defined as follows: H(p, q) = -\operatorname{E}_p[\log q], where \operatorname{E}_p[\cdot] is the expected value operator with respect to the distribution p. The definition may be formulated using the Kullback–Leibler divergence D_{\mathrm{KL}}(p \parallel q), the divergence of p from q (also known as the relative entropy of p with respect to q): H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q), where H(p) is the entropy of p. For discrete probability distributions p and q with the same support \mathcal{X}, this means H(p, q) = -\sum_{x \in \mathcal{X}} p(x) \log q(x). The situation for continuous distributions is analogous. We have to assume that p and q are absolutely continuous with respect to some reference measure r (usually r is a Lebesgue measure on a Borel σ-algebra). Let P and Q be probability density functions of p and q with respect to r. Then -\int_{\mathcal{X}} P(x) \log Q(x)\, \mathrm{d}x = \operatorname{E}_p[-\log Q], and therefore H(p, q) = -\int_{\mathcal{X}} P(x) \log Q(x)\, \mathrm{d}x. NB: The notation H(p, q) is also used for a different concept, the joint entropy of p and q.
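For discrete distributions the definition translates directly into a few lines of code; this is a minimal sketch (the natural logarithm gives nats, log base 2 would give bits, and the small eps guards against taking the log of zero):

    import numpy as np

    def cross_entropy(p, q, eps=1e-12):
        # H(p, q) = -sum_x p(x) * log q(x) = H(p) + KL(p || q)
        p, q = np.asarray(p, float), np.asarray(q, float)
        return -np.sum(p * np.log(q + eps))

    def kl_divergence(p, q, eps=1e-12):
        p, q = np.asarray(p, float), np.asarray(q, float)
        return np.sum(p * np.log((p + eps) / (q + eps)))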
|
Cross-entropy : In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value x_i out of a set of possibilities \{x_1, \ldots, x_n\} can be seen as representing an implicit probability distribution q(x_i) = \left(\tfrac{1}{2}\right)^{\ell_i} over \{x_1, \ldots, x_n\}, where \ell_i is the length of the code for x_i in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution q is assumed while the data actually follows a distribution p. That is why the expectation is taken over the true probability distribution p and not q. Indeed, the expected message-length under the true distribution p is \operatorname{E}_p[\ell] = -\operatorname{E}_p\left[\frac{\ln q(x)}{\ln(2)}\right] = -\operatorname{E}_p[\log_2 q(x)] = -\sum_{x_i} p(x_i) \log_2 q(x_i) = -\sum_x p(x) \log_2 q(x) = H(p, q).
|
Cross-entropy : There are many situations where cross-entropy needs to be measured but the distribution of p is unknown. An example is language modeling, where a model is created based on a training set T, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, p is the true distribution of words in any corpus, and q is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula: H(T, q) = -\frac{1}{N}\sum_{i=1}^{N} \log_2 q(x_i), where N is the size of the test set, and q(x) is the probability of event x estimated from the training set. In other words, q(x_i) is the probability estimate of the model that the i-th word of the text is x_i. The sum is averaged over the N words of the test. This is a Monte Carlo estimate of the true cross-entropy, where the test set is treated as samples from p(x).
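As a sketch of this Monte Carlo estimate, assuming a hypothetical model_prob(x) that returns the model's probability q(x) for a test item x:

    import math

    def estimated_cross_entropy(test_set, model_prob):
        # H(T, q) = -(1/N) * sum_i log2 q(x_i), averaged over the N items of the test set.
        return -sum(math.log2(model_prob(x)) for x in test_set) / len(test_set)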
|
Cross-entropy : The cross-entropy arises in classification problems when introducing a logarithm in the guise of the log-likelihood function. This section is concerned with the estimation of the probability of different possible discrete outcomes. To this end, denote a parametrized family of distributions by q_\theta, with \theta subject to the optimization effort. Consider a given finite sequence of N values x_i from a training set, obtained from conditionally independent sampling. The likelihood assigned to any considered parameter \theta of the model is then given by the product over all probabilities q_\theta(X = x_i). Repeated occurrences are possible, leading to equal factors in the product. If the count of occurrences of the value equal to x_i (for some index i) is denoted by \#x_i, then the frequency of that value equals \#x_i / N. Denote the latter by p(X = x_i), as it may be understood as an empirical approximation to the probability distribution underlying the scenario. Further denote by PP := e^{H(p, q_\theta)} the perplexity, which can be seen to equal \prod_{x_i} q_\theta(X = x_i)^{-p(X = x_i)} by the calculation rules for the logarithm, where the product is over the values without double counting. So \mathcal{L}(\theta; x) = \prod_i q_\theta(X = x_i) = \prod_{x_i} q_\theta(X = x_i)^{\#x_i} = PP^{-N} = e^{-N \cdot H(p, q_\theta)}, or \log \mathcal{L}(\theta; x) = -N \cdot H(p, q_\theta). Since the logarithm is a monotonically increasing function, it does not affect extremization. So observe that likelihood maximization amounts to minimization of the cross-entropy.
|
Cross-entropy : Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. When comparing a distribution q against a fixed reference distribution p, cross-entropy and KL divergence are identical up to an additive constant (since p is fixed): H(p, q) = D_{\mathrm{KL}}(p \parallel q) + H(p). According to Gibbs' inequality, both take on their minimal values when p = q, which is 0 for KL divergence, and H(p) for cross-entropy. In the engineering literature, the principle of minimizing KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent. However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution q is the fixed prior reference distribution, and the distribution p is optimized to be as close to q as possible, subject to some constraint. In this case the two minimizations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by restating cross-entropy to be D_{\mathrm{KL}}(p \parallel q), rather than H(p, q). In fact, cross-entropy is another name for relative entropy; see Cover and Thomas and Good. On the other hand, H(p, q) does not agree with the literature and can be misleading.
|
Cross-entropy : Cross-entropy can be used to define a loss function in machine learning and optimization. Mao, Mohri, and Zhong (2023) give an extensive analysis of the properties of the family of cross-entropy loss functions in machine learning, including theoretical learning guarantees and extensions to adversarial learning. The true probability p_i is the true label, and the given distribution q_i is the predicted value of the current model. This is also known as the log loss (or logarithmic loss or logistic loss); the terms "log loss" and "cross-entropy loss" are used interchangeably. More specifically, consider a binary regression model which can be used to classify observations into two possible classes (often simply labelled 0 and 1). The output of the model for a given observation, given a vector of input features x, can be interpreted as a probability, which serves as the basis for classifying the observation. In logistic regression, the probability is modeled using the logistic function g(z) = 1/(1 + e^{-z}), where z is some function of the input vector x, commonly just a linear function. The probability of the output y = 1 is given by q_{y=1} = \hat{y} \equiv g(w \cdot x) = \frac{1}{1 + e^{-w \cdot x}}, where the vector of weights w is optimized through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output y = 0 is simply given by q_{y=0} = 1 - \hat{y}. Having set up our notation, p \in \{y, 1 - y\} and q \in \{\hat{y}, 1 - \hat{y}\}, we can use cross-entropy to get a measure of dissimilarity between p and q: H(p, q) = -\sum_i p_i \log q_i = -y \log \hat{y} - (1 - y)\log(1 - \hat{y}). Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. Other loss functions that penalize errors differently can also be used for training, resulting in models with different final test accuracy. For example, suppose we have N samples with each sample indexed by n = 1, \ldots, N. The average of the loss function is then given by: J(w) = \frac{1}{N}\sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N}\sum_{n=1}^{N}\left[ y_n \log \hat{y}_n + (1 - y_n)\log(1 - \hat{y}_n) \right], where \hat{y}_n \equiv g(w \cdot x_n) = 1/(1 + e^{-w \cdot x_n}), with g(z) the logistic function as before. The logistic loss is sometimes called cross-entropy loss. It is also known as log loss. (In this case, the binary label is often denoted by \{-1, +1\}.) Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared-error loss for linear regression. That is, define X^T = \begin{pmatrix} 1 & x_{11} & \dots & x_{1p} \\ 1 & x_{21} & \cdots & x_{2p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \cdots & x_{np} \end{pmatrix} \in \mathbb{R}^{n \times (p+1)}, \quad \hat{y}_i = \hat{f}(x_{i1}, \dots, x_{ip}) = \frac{1}{1 + \exp(-\beta_0 - \beta_1 x_{i1} - \dots - \beta_p x_{ip})}, \quad L(\beta) = -\sum_{i=1}^{N}\left[ y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i) \right]. Then we have the result \frac{\partial}{\partial \beta} L(\beta) = X^T(\hat{Y} - Y). The proof is as follows.
For any \hat{y}_i, we have \frac{\partial}{\partial \beta_0} \ln \frac{1}{1 + e^{-\beta_0 + k_0}} = \frac{e^{-\beta_0 + k_0}}{1 + e^{-\beta_0 + k_0}}, \quad \frac{\partial}{\partial \beta_0} \ln\left(1 - \frac{1}{1 + e^{-\beta_0 + k_0}}\right) = \frac{-1}{1 + e^{-\beta_0 + k_0}}, \quad \frac{\partial}{\partial \beta_0} L(\beta) = -\sum_{i=1}^{N}\left[ y_i \cdot \frac{e^{-\beta_0 + k_0}}{1 + e^{-\beta_0 + k_0}} - (1 - y_i)\frac{1}{1 + e^{-\beta_0 + k_0}} \right] = -\sum_{i=1}^{N}\left[ y_i - \hat{y}_i \right] = \sum_{i=1}^{N}(\hat{y}_i - y_i), \quad \frac{\partial}{\partial \beta_1} \ln \frac{1}{1 + e^{-\beta_1 x_{i1} + k_1}} = \frac{x_{i1} e^{k_1}}{e^{\beta_1 x_{i1}} + e^{k_1}}, \quad \frac{\partial}{\partial \beta_1} \ln\left[1 - \frac{1}{1 + e^{-\beta_1 x_{i1} + k_1}}\right] = \frac{-x_{i1} e^{\beta_1 x_{i1}}}{e^{\beta_1 x_{i1}} + e^{k_1}}, \quad \frac{\partial}{\partial \beta_1} L(\beta) = -\sum_{i=1}^{N} x_{i1}(y_i - \hat{y}_i) = \sum_{i=1}^{N} x_{i1}(\hat{y}_i - y_i). In a similar way, we eventually obtain the desired result.
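The stated gradient can be checked numerically on random data, as in the sketch below; the design matrix is oriented with rows as observations (a leading column of ones for the intercept), and the data, sizes and finite-difference step are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    design = np.hstack([np.ones((50, 1)), rng.normal(size=(50, 3))])   # rows (1, x_i1, ..., x_ip)
    y = rng.integers(0, 2, size=50).astype(float)
    beta = rng.normal(size=4)

    def loss(b):
        p = 1.0 / (1.0 + np.exp(-design @ b))            # predicted probabilities y_hat_i
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # analytic gradient: design matrix (transposed) times the residual vector (y_hat - y)
    analytic = design.T @ (1.0 / (1.0 + np.exp(-design @ beta)) - y)

    h = 1e-6
    numeric = np.array([(loss(beta + h * np.eye(4)[j]) - loss(beta - h * np.eye(4)[j])) / (2 * h)
                        for j in range(4)])
    print(np.allclose(analytic, numeric, atol=1e-4))     # True: the two gradients agree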
|
Cross-entropy : It may be beneficial to train an ensemble of models that have diversity, such that when they are combined, their predictive accuracy is augmented. Assuming a simple ensemble of K classifiers is assembled via averaging the outputs, then the amended cross-entropy is given by e_k = H(p, q^k) - \frac{\lambda}{K}\sum_{j \neq k} H(q^j, q^k), where e_k is the cost function of the k-th classifier, q^k is the output probability of the k-th classifier, p is the true probability to be estimated, and \lambda is a parameter between 0 and 1 that defines the 'diversity' that we would like to establish among the ensemble. When \lambda = 0 we want each classifier to do its best regardless of the ensemble, and when \lambda = 1 we would like the classifier to be as diverse as possible.
|
Cross-entropy : Cross-entropy method Logistic regression Conditional entropy Kullback–Leibler distance Maximum-likelihood estimation Mutual information Perplexity
|
Cross-entropy : de Boer, Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research 134 (1), 19–67.
|
Huber loss : In statistics, the Huber loss is a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss. A variant for classification is also sometimes used.
|
Huber loss : The Huber loss function describes the penalty incurred by an estimation procedure f. Huber (1964) defines the loss function piecewise by L_\delta(a) = \begin{cases} \tfrac{1}{2} a^2 & |a| \leq \delta, \\ \delta \cdot \left( |a| - \tfrac{1}{2}\delta \right) & \text{otherwise.} \end{cases} This function is quadratic for small values of a, and linear for large values, with equal values and slopes of the different sections at the two points where |a| = \delta. The variable a often refers to the residuals, that is, to the difference between the observed and predicted values a = y - f(x), so the former can be expanded to L_\delta(y, f(x)) = \begin{cases} \tfrac{1}{2}(y - f(x))^2 & |y - f(x)| \leq \delta, \\ \delta \cdot \left( |y - f(x)| - \tfrac{1}{2}\delta \right) & \text{otherwise.} \end{cases} The Huber loss is the convolution of the absolute value function with the rectangular function, scaled and translated. Thus it "smoothens out" the former's corner at the origin.
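A direct transcription of the piecewise definition, as a minimal sketch (the default delta is an arbitrary choice):

    import numpy as np

    def huber_loss(a, delta=1.0):
        # Quadratic for |a| <= delta, linear beyond, with matching value and slope at |a| = delta.
        a = np.asarray(a, dtype=float)
        quadratic = 0.5 * a ** 2
        linear = delta * (np.abs(a) - 0.5 * delta)
        return np.where(np.abs(a) <= delta, quadratic, linear)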
|
Huber loss : Two very commonly used loss functions are the squared loss, L(a) = a^2, and the absolute loss, L(a) = |a|. The squared loss function results in an arithmetic mean-unbiased estimator, and the absolute-value loss function results in a median-unbiased estimator (in the one-dimensional case, and a geometric median-unbiased estimator for the multi-dimensional case). The squared loss has the disadvantage that it has the tendency to be dominated by outliers—when summing over a set of a's (as in \sum_{i=1}^{n} L(a_i)), the sample mean is influenced too much by a few particularly large a-values when the distribution is heavy-tailed: in terms of estimation theory, the asymptotic relative efficiency of the mean is poor for heavy-tailed distributions. As defined above, the Huber loss function is strongly convex in a uniform neighborhood of its minimum a = 0; at the boundary of this uniform neighborhood, the Huber loss function has a differentiable extension to an affine function at the points a = -\delta and a = \delta. These properties allow it to combine much of the sensitivity of the mean-unbiased, minimum-variance estimator of the mean (using the quadratic loss function) and the robustness of the median-unbiased estimator (using the absolute value function).
|
Huber loss : The Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function. It combines the best properties of L2 squared loss and L1 absolute loss by being strongly convex when close to the target/minimum and less steep for extreme values. The scale at which the Pseudo-Huber loss function transitions from L2 loss for values close to the minimum to L1 loss for extreme values, and the steepness at extreme values, can be controlled by the \delta value. The Pseudo-Huber loss function ensures that derivatives are continuous for all degrees. It is defined as L_\delta(a) = \delta^2 \left( \sqrt{1 + (a/\delta)^2} - 1 \right). As such, this function approximates a^2/2 for small values of a, and approximates a straight line with slope \delta for large values of a. While the above is the most common form, other smooth approximations of the Huber loss function also exist.
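The smooth approximation is equally short; a sketch with an arbitrary default delta:

    import numpy as np

    def pseudo_huber_loss(a, delta=1.0):
        # delta^2 * (sqrt(1 + (a/delta)^2) - 1): roughly a^2/2 near 0, roughly delta*|a| for large |a|.
        a = np.asarray(a, dtype=float)
        return delta ** 2 * (np.sqrt(1.0 + (a / delta) ** 2) - 1.0)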
|
Huber loss : For classification purposes, a variant of the Huber loss called modified Huber is sometimes used. Given a prediction f(x) (a real-valued classifier score) and a true binary class label y ∈ {+1, −1}, the modified Huber loss is defined as L(y, f(x)) = \begin{cases} \max(0,\, 1 - y\,f(x))^2 & \text{for } y\,f(x) > -1, \\ -4\,y\,f(x) & \text{otherwise.} \end{cases} The term \max(0,\, 1 - y\,f(x)) is the hinge loss used by support vector machines; the quadratically smoothed hinge loss is a generalization of L.
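A minimal sketch of the modified Huber loss for labels y in {-1, +1}, assuming a real-valued classifier score f(x); the names are illustrative only:

import numpy as np

def modified_huber_loss(y, score):
    # Quadratically smoothed hinge loss for y*f(x) > -1, linear penalty otherwise.
    margin = y * score
    return np.where(margin > -1.0,
                    np.maximum(0.0, 1.0 - margin) ** 2,
                    -4.0 * margin)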
|
Huber loss : The Huber loss function is used in robust statistics, M-estimation and additive modelling.
|
Huber loss : Winsorizing Robust regression M-estimator Visual comparison of different M-estimators == References ==
|
Mean squared error : In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors—that is, the average squared difference between the estimated values and the true value. MSE is a risk function, corresponding to the expected value of the squared error loss. The fact that MSE is almost always strictly positive (and not zero) is because of randomness or because the estimator does not account for information that could produce a more accurate estimate. In machine learning, specifically empirical risk minimization, MSE may refer to the empirical risk (the average loss on an observed data set), as an estimate of the true MSE (the true risk: the average loss on the actual population distribution). The MSE is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero. The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator (how widely spread the estimates are from one data sample to another) and its bias (how far off the average estimated value is from the true value). For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard error.
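For illustration, a minimal Python sketch of the MSE and its square root, the RMSE (the function name and sample values are assumptions):

import numpy as np

def mean_squared_error(y_true, y_pred):
    # Average of the squared differences between estimated and true values.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean((y_true - y_pred) ** 2)

mse = mean_squared_error([3.0, -0.5, 2.0], [2.5, 0.0, 2.0])   # 0.1666...
rmse = np.sqrt(mse)                                           # root-mean-square error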
|
Mean squared error : The MSE either assesses the quality of a predictor (i.e., a function mapping arbitrary inputs to a sample of values of some random variable), or of an estimator (i.e., a mathematical function mapping a sample of data to an estimate of a parameter of the population from which the data is sampled). In the context of prediction, understanding the prediction interval can also be useful as it provides a range within which a future observation will fall, with a certain probability. The definition of an MSE differs according to whether one is describing a predictor or an estimator.
|
Mean squared error : In regression analysis, plotting is a more natural way to view the overall trend of the whole data. The mean of the distance from each point to the predicted regression model can be calculated, and shown as the mean squared error. Squaring the distances is essential: it prevents positive and negative errors from cancelling and penalizes larger deviations more heavily. Minimizing the MSE makes the model more accurate, meaning that its predictions lie closer to the actual data. One example of a linear regression using this method is the least squares method, which evaluates the appropriateness of a linear regression model for a bivariate dataset, but whose limitation is that it assumes a known distribution of the data. The term mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n−p) for p regressors or (n−p−1) if an intercept is used (see errors and residuals in statistics for more details). Although the MSE (as defined in this article) is not an unbiased estimator of the error variance, it is consistent, given the consistency of the predictor. In regression analysis, "mean squared error", often referred to as mean squared prediction error or "out-of-sample mean squared error", can also refer to the mean value of the squared deviations of the predictions from the true values, over an out-of-sample test space, generated by a model estimated over a particular sample space. This also is a known, computed quantity, and it varies by sample and by out-of-sample test space. In the context of gradient descent algorithms, it is common to introduce a factor of 1/2 to the MSE for ease of computation after taking the derivative. So a value which is technically half the mean of squared errors may be called the MSE.
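The different denominators can be illustrated with a small, hypothetical simple-regression example (all data values are assumptions made for illustration):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

slope, intercept = np.polyfit(x, y, 1)              # least squares fit with an intercept
residuals = y - (slope * x + intercept)

n, p = len(y), 1
mse_mean = np.sum(residuals ** 2) / n               # plain average of squared residuals
error_var = np.sum(residuals ** 2) / (n - p - 1)    # unbiased estimate of error variance (intercept used)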
|
Mean squared error : An MSE of zero, meaning that the estimator θ̂ predicts observations of the parameter θ with perfect accuracy, is ideal (but typically not possible). Values of MSE may be used for comparative purposes. Two or more statistical models may be compared using their MSEs—as a measure of how well they explain a given set of observations: An unbiased estimator (estimated from a statistical model) with the smallest variance among all unbiased estimators is the best unbiased estimator or MVUE (Minimum-Variance Unbiased Estimator). Both analysis of variance and linear regression techniques estimate the MSE as part of the analysis and use the estimated MSE to determine the statistical significance of the factors or predictors under study. The goal of experimental design is to construct experiments in such a way that when the observations are analyzed, the MSE is close to zero relative to the magnitude of at least one of the estimated treatment effects. In one-way analysis of variance, the MSE can be calculated by dividing the sum of squared errors by its degrees of freedom. Also, the F-value is the ratio of the mean squared treatment to the MSE. MSE is also used in several stepwise regression techniques as part of the determination as to how many predictors from a candidate set to include in a model for a given set of observations.
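A small, hypothetical one-way ANOVA example showing how the MSE and F-value mentioned above are computed (the group values are assumptions):

import numpy as np

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([6.0, 7.0, 8.0]),
          np.array([9.0, 10.0, 11.0])]

grand_mean = np.mean(np.concatenate(groups))
n_total = sum(len(g) for g in groups)
k = len(groups)

ss_treatment = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_treatment = ss_treatment / (k - 1)   # mean squared treatment
mse = ss_error / (n_total - k)          # MSE = sum of squared errors / degrees of freedom
f_value = ms_treatment / mse            # ratio used to judge significance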
|
Mean squared error : Minimizing MSE is a key criterion in selecting estimators: see minimum mean-square error. Among unbiased estimators, minimizing the MSE is equivalent to minimizing the variance, and the estimator that does this is the minimum variance unbiased estimator. However, a biased estimator may have lower MSE; see estimator bias. In statistical modelling the MSE can represent the difference between the actual observations and the observation values predicted by the model. In this context, it is used to determine the extent to which the model fits the data as well as whether removing some explanatory variables is possible without significantly harming the model's predictive ability. In forecasting and prediction, the Brier score is a measure of forecast skill based on MSE.
|
Mean squared error : Squared error loss is one of the most widely used loss functions in statistics, though its widespread use stems more from mathematical convenience than considerations of actual loss in applications. Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds. The mathematical benefits of mean squared error are particularly evident in its use at analyzing the performance of linear regression, as it allows one to partition the variation in a dataset into variation explained by the model and variation explained by randomness.
|
Mean squared error : Bias–variance tradeoff Hodges' estimator James–Stein estimator Mean percentage error Mean square quantization error Reduced chi-squared statistic Mean squared displacement Mean squared prediction error Minimum mean square error Overfitting Peak signal-to-noise ratio
|
Mean squared prediction error : In statistics the mean squared prediction error (MSPE), also known as mean squared error of the predictions, of a smoothing, curve fitting, or regression procedure is the expected value of the squared prediction errors (PE), the squared difference between the fitted values implied by the predictive function ĝ and the values of the (unobservable) true value g. It is an inverse measure of the explanatory power of ĝ, and can be used in the process of cross-validation of an estimated model. Knowledge of g would be required in order to calculate the MSPE exactly; in practice, MSPE is estimated.
|
Mean squared prediction error : If the smoothing or fitting procedure has projection matrix (i.e., hat matrix) L, which maps the observed values vector y to predicted values vector \hat{y} = Ly, then PE and MSPE are formulated as: \operatorname{PE}_i = g(x_i) - \hat{g}(x_i), \qquad \operatorname{MSPE} = \operatorname{E}\left[\operatorname{PE}_i^2\right] = \sum_{i=1}^{n} \operatorname{PE}_i^2 / n. The MSPE can be decomposed into two terms: the squared bias (mean error) of the fitted values and the variance of the fitted values: \operatorname{MSPE} = \operatorname{ME}^2 + \operatorname{VAR}, where \operatorname{ME} = \operatorname{E}\left[\hat{g}(x_i) - g(x_i)\right] and \operatorname{VAR} = \operatorname{E}\left[\left(\hat{g}(x_i) - \operatorname{E}\left[\hat{g}(x_i)\right]\right)^2\right]. The quantity SSPE = n·MSPE is called the sum squared prediction error. The root mean squared prediction error is the square root of MSPE: RMSPE = √MSPE.
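A hedged simulation sketch of these quantities for a linear smoother (here ordinary least squares, so the hat matrix is L = X(XᵀX)⁻¹Xᵀ); the true function g is known only because the data are simulated:

import numpy as np

rng = np.random.default_rng(0)
n = 20
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
g_true = 1.0 + 2.0 * x                      # (unobservable) true values g(x_i)
y = g_true + 0.5 * rng.standard_normal(n)

L = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix mapping y to fitted values
g_hat = L @ y

pe = g_true - g_hat                         # prediction errors PE_i
mspe = np.mean(pe ** 2)                     # sample version of the MSPE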
|
Mean squared prediction error : The mean squared prediction error can be computed exactly in two contexts. First, with a data sample of length n, the data analyst may run the regression over only q of the data points (with q < n), holding back the other n – q data points with the specific purpose of using them to compute the estimated model’s MSPE out of sample (i.e., not using data that were used in the model estimation process). Since the regression process is tailored to the q in-sample points, normally the in-sample MSPE will be smaller than the out-of-sample one computed over the n – q held-back points. If the increase in the MSPE out of sample compared to in sample is relatively slight, that results in the model being viewed favorably. And if two models are to be compared, the one with the lower MSPE over the n – q out-of-sample data points is viewed more favorably, regardless of the models’ relative in-sample performances. The out-of-sample MSPE in this context is exact for the out-of-sample data points that it was computed over, but is merely an estimate of the model’s MSPE for the mostly unobserved population from which the data were drawn. Second, as time goes on more data may become available to the data analyst, and then the MSPE can be computed over these new data.
|
Mean squared prediction error : When the model has been estimated over all available data with none held back, the MSPE of the model over the entire population of mostly unobserved data can be estimated as follows. For the model y_i = g(x_i) + \sigma\varepsilon_i, where \varepsilon_i \sim \mathcal{N}(0,1), one may write n \cdot \operatorname{MSPE}(L) = g^{\mathsf{T}}(I - L)^{\mathsf{T}}(I - L)g + \sigma^2 \operatorname{tr}\left[L^{\mathsf{T}}L\right]. Using in-sample data values, the first term on the right side is equivalent to \sum_{i=1}^{n}\left(\operatorname{E}\left[g(x_i) - \hat{g}(x_i)\right]\right)^2 = \operatorname{E}\left[\sum_{i=1}^{n}\left(y_i - \hat{g}(x_i)\right)^2\right] - \sigma^2\operatorname{tr}\left[(I - L)^{\mathsf{T}}(I - L)\right]. Thus, n \cdot \operatorname{MSPE}(L) = \operatorname{E}\left[\sum_{i=1}^{n}\left(y_i - \hat{g}(x_i)\right)^2\right] - \sigma^2\left(n - \operatorname{tr}\left[L\right]\right). If \sigma^2 is known or well-estimated by \hat{\sigma}^2, it becomes possible to estimate MSPE by n \cdot \widehat{\operatorname{MSPE}}(L) = \sum_{i=1}^{n}\left(y_i - \hat{g}(x_i)\right)^2 - \hat{\sigma}^2\left(n - \operatorname{tr}\left[L\right]\right). Colin Mallows advocated this method in the construction of his model selection statistic Cp, which is a normalized version of the estimated MSPE: C_p = \frac{\sum_{i=1}^{n}\left(y_i - \hat{g}(x_i)\right)^2}{\hat{\sigma}^2} - n + 2p, where p is the number of estimated parameters and \hat{\sigma}^2 is computed from the version of the model that includes all possible regressors.
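A minimal sketch of the estimated MSPE and Mallows's Cp as given above (the helper names are assumptions; sigma2_hat would come from the full model with all regressors):

import numpy as np

def estimated_mspe(y, g_hat, L, sigma2_hat):
    # n * MSPE-hat = residual sum of squares - sigma2_hat * (n - tr(L))
    n = len(y)
    rss = np.sum((y - g_hat) ** 2)
    return (rss - sigma2_hat * (n - np.trace(L))) / n

def mallows_cp(y, g_hat, sigma2_hat, p):
    # Normalized version of the estimated MSPE.
    n = len(y)
    rss = np.sum((y - g_hat) ** 2)
    return rss / sigma2_hat - n + 2 * p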
|
Mean squared prediction error : Akaike information criterion Bias-variance tradeoff Mean squared error Errors and residuals in statistics Law of total variance Mallows's Cp Model selection == References ==
|
Loss function : In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
|
Loss function : In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a utility function) in a form suitable for optimization — the problem that Ragnar Frisch has highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities and the European subsidies for equalizing unemployment rates among 271 German regions.
|
Loss function : In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X.
|
Loss function : A decision rule makes a choice using an optimality criterion. Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: \underset{\delta}{\operatorname{arg\,min}}\ \max_{\theta\in\Theta}\ R(\theta,\delta). Invariance: Choose the decision rule which satisfies an invariance requirement. Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): \underset{\delta}{\operatorname{arg\,min}}\ \operatorname{E}_{\theta\in\Theta}[R(\theta,\delta)] = \underset{\delta}{\operatorname{arg\,min}}\ \int_{\theta\in\Theta} R(\theta,\delta)\,p(\theta)\,d\theta.
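A tiny numerical illustration of the two criteria, using a hypothetical risk table R(θ, δ) with two states and two decision rules (all numbers are assumptions):

import numpy as np

risk = np.array([[1.0, 4.0],     # rows: states theta; columns: decision rules delta
                 [3.0, 2.0]])
prior = np.array([0.5, 0.5])     # p(theta), needed only for the average-loss criterion

minimax_rule = np.argmin(risk.max(axis=0))   # rule with the smallest worst-case risk
average_rule = np.argmin(prior @ risk)       # rule with the smallest expected (average) risk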
|
Loss function : Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering. For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, L(a) = a^2, and the absolute loss, L(a) = |a|. However the absolute loss has the disadvantage that it is not differentiable at a = 0. The squared loss has the disadvantage that it has the tendency to be dominated by outliers—when summing over a set of a's (as in \sum_{i=1}^{n} L(a_i)), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.
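The mean/median contrast above can be checked numerically; a hedged sketch with made-up observations, one of them an outlier:

import numpy as np

a = np.array([1.0, 2.0, 2.0, 3.0, 10.0])
candidates = np.linspace(0.0, 12.0, 1201)

squared_total = [np.sum((a - c) ** 2) for c in candidates]
absolute_total = [np.sum(np.abs(a - c)) for c in candidates]

best_squared = candidates[np.argmin(squared_total)]    # ~3.6, the mean, pulled toward the outlier
best_absolute = candidates[np.argmin(absolute_total)]  # ~2.0, the median, robust to the outlier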
|
Loss function : Bayesian regret Loss functions for classification Discounted maximum loss Hinge loss Scoring rule Statistical risk
|
Loss function : Aretz, Kevin; Bartram, Söhnke M.; Pope, Peter F. (April–June 2011). "Asymmetric Loss Functions and the Rationality of Expected Stock Returns" (PDF). International Journal of Forecasting. 27 (2): 413–437. doi:10.1016/j.ijforecast.2009.10.008. SSRN 889323. Berger, James O. (1985). Statistical decision theory and Bayesian Analysis (2nd ed.). New York: Springer-Verlag. Bibcode:1985sdtb.book.....B. ISBN 978-0-387-96098-2. MR 0804611. Cecchetti, S. (2000). "Making monetary policy: Objectives and rules". Oxford Review of Economic Policy. 16 (4): 43–59. doi:10.1093/oxrep/16.4.43. Horowitz, Ann R. (1987). "Loss functions and public policy". Journal of Macroeconomics. 9 (4): 489–504. doi:10.1016/0164-0704(87)90016-4. Waud, Roger N. (1976). "Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty". Econometrica. 44 (1): 53–66. doi:10.2307/1911380. JSTOR 1911380.
|
Sum of absolute differences : In digital image processing, the sum of absolute differences (SAD) is a measure of the similarity between image blocks. It is calculated by taking the absolute difference between each pixel in the original block and the corresponding pixel in the block being used for comparison. These differences are summed to create a simple metric of block similarity, the L1 norm of the difference image or Manhattan distance between two image blocks. The sum of absolute differences may be used for a variety of purposes, such as object recognition, the generation of disparity maps for stereo images, and motion estimation for video compression.
|
Sum of absolute differences : This example uses the sum of absolute differences to identify which part of a search image is most similar to a template image. In this example, the template image is 3 by 3 pixels in size, while the search image is 3 by 5 pixels in size. Each pixel is represented by a single integer from 0 to 9.

Template:    Search image:
2 5 5        2 7 5 8 6
4 0 7        1 7 4 2 7
7 5 9        8 4 6 8 5

There are exactly three unique locations within the search image where the template may fit: the left side of the image, the center of the image, and the right side of the image. To calculate the SAD values, the absolute value of the difference between each corresponding pair of pixels is used: the difference between 2 and 2 is 0, 4 and 1 is 3, 7 and 8 is 1, and so forth. Calculating the values of the absolute differences for each pixel, for the three possible template locations, gives the following:

Left:    Center:    Right:
0 2 0    5 0 3      3 3 1
3 7 3    3 4 5      0 2 0
1 1 3    3 1 1      1 3 4

For each of these three image patches, the 9 absolute differences are added together, giving SAD values of 20, 25, and 17, respectively. From these SAD values, it could be asserted that the right side of the search image is the most similar to the template image, because it has the lowest sum of absolute differences as compared to the other two locations.
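The same computation as a short Python sketch (the array layout follows the worked example above):

import numpy as np

template = np.array([[2, 5, 5],
                     [4, 0, 7],
                     [7, 5, 9]])
search = np.array([[2, 7, 5, 8, 6],
                   [1, 7, 4, 2, 7],
                   [8, 4, 6, 8, 5]])

sad_values = []
for col in range(search.shape[1] - template.shape[1] + 1):
    window = search[:, col:col + template.shape[1]]
    sad_values.append(int(np.abs(window - template).sum()))

# sad_values == [20, 25, 17]; the right-hand location is the best match.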
|
Sum of absolute differences : Computer stereo vision Hadamard transform Motion compensation Motion estimation Object recognition (computer vision) Rate-distortion optimization
|
Sum of absolute differences : E. G. Richardson, Iain (2003). H.264 and MPEG-4 Video Compression: Video Coding for Next-generation Multimedia. Chichester: John Wiley & Sons Ltd.
|
Sum of absolute transformed differences : The sum of absolute transformed differences (SATD) is a block matching criterion widely used in fractional motion estimation for video compression. It works by taking a frequency transform, usually a Hadamard transform, of the differences between the pixels in the original block and the corresponding pixels in the block being used for comparison. The transform itself is often of a small block rather than the entire macroblock. For example, in x264, a series of 4×4 blocks are transformed rather than doing the more processor-intensive 16×16 transform.
|
Sum of absolute transformed differences : SATD is slower than the sum of absolute differences (SAD), both due to its increased complexity and the fact that SAD-specific MMX and SSE2 instructions exist, while there are no such instructions for SATD. However, SATD can still be optimized considerably with SIMD instructions on most modern CPUs. The benefit of SATD is that it more accurately models the number of bits required to transmit the residual error signal. As such, it is often used in video compressors, either as a way to drive and estimate rate explicitly, such as in the Theora encoder (since 1.1 alpha2), as an optional metric used in wide motion searches, such as in the Microsoft VC-1 encoder, or as a metric used in sub-pixel refinement, such as in x264.
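A hedged sketch of SATD for a single 4×4 residual block using a Hadamard transform; real encoders add implementation-specific normalization that is omitted here, and the function name is an assumption:

import numpy as np

H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])

def satd_4x4(original_block, reference_block):
    # Transform the pixel differences, then sum the absolute transformed coefficients.
    residual = original_block.astype(int) - reference_block.astype(int)
    transformed = H @ residual @ H.T
    return int(np.abs(transformed).sum())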
|
Sum of absolute transformed differences : Hadamard transform Motion compensation Motion estimation Rate–distortion optimization Sum of absolute differences
|
Sum of absolute transformed differences : E. G. Richardson, Iain (2003). H.264 and MPEG-4 Video Compression: Video Coding for Next-generation Multimedia. Chichester: John Wiley & Sons Ltd.
|
Taguchi loss function : The Taguchi loss function is a graphical depiction of loss developed by the Japanese business statistician Genichi Taguchi to describe a phenomenon affecting the value of products produced by a company. Praised by Dr. W. Edwards Deming (the business guru of the 1980s American quality movement), it made clear the concept that quality does not suddenly plummet when, for instance, a machinist exceeds a rigid blueprint tolerance. Instead 'loss' in value progressively increases as variation increases from the intended condition. This was considered a breakthrough in describing quality, and helped fuel the continuous improvement movement. The concept of Taguchi's quality loss function was in contrast with the American concept of quality, popularly known as the goal post philosophy, the concept given by American quality guru Phil Crosby. The goal post philosophy emphasizes that if a product feature doesn't meet the designed specifications it is termed a product of poor quality (rejected), irrespective of the amount of deviation from the target value (the mean value of the tolerance zone). This concept has similarity with the concept of scoring a 'goal' in the game of football or hockey, because a goal is counted as 'one' irrespective of where the ball strikes the 'goal post', whether in the center or towards a corner. This means that if the product dimension goes out of the tolerance limit the quality of the product drops suddenly. Through his concept of the quality loss function, Taguchi explained that from the customer's point of view this drop of quality is not sudden. The customer experiences a loss of quality the moment the product specification deviates from the 'target value'. This 'loss' is depicted by a quality loss function and it follows a parabolic curve mathematically given by L = k(y − m)², where m is the theoretical 'target value' or 'mean value' and y is the actual size of the product, k is a constant and L is the loss. This means that if the difference between 'actual size' and 'target value', i.e. (y − m), is large, the loss is greater, irrespective of tolerance specifications. In Taguchi's view tolerance specifications are given by engineers and not by customers; what the customer experiences is 'loss'. This equation is true for a single product; if 'loss' is to be calculated for multiple products the loss function is given by L = k[S² + (ȳ − m)²], where S² is the 'variance of product size' and ȳ is the average product size.
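A minimal sketch of the two loss expressions above (the function names are assumptions):

def taguchi_loss(y, m, k):
    # Quality loss for a single item: L = k * (y - m)**2
    return k * (y - m) ** 2

def taguchi_loss_multiple(variance, mean_size, m, k):
    # Average loss over many items: L = k * (S**2 + (y_bar - m)**2)
    return k * (variance + (mean_size - m) ** 2)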
|
Taguchi loss function : The Taguchi loss function is important for a number of reasons—primarily, to help engineers better understand the importance of designing for variation.
|
Taguchi loss function : Taguchi methods. Taguchi also focuses on the robust design of models. == References ==
|
Log-linear model : A log-linear model is a mathematical model that takes the form of a function whose logarithm equals a linear combination of the parameters of the model, which makes it possible to apply (possibly multivariate) linear regression. That is, it has the general form \exp\left(c + \sum_i w_i f_i(X)\right), in which the fi(X) are quantities that are functions of the variable X, in general a vector of values, while c and the wi stand for the model parameters. The term may specifically be used for: A log-linear plot or graph, which is a type of semi-log plot. Poisson regression for contingency tables, a type of generalized linear model. The specific applications of log-linear models are where the output quantity lies in the range 0 to ∞, for values of the independent variables X, or more immediately, the transformed quantities fi(X) in the range −∞ to +∞. This may be contrasted to logistic models, similar to the logistic function, for which the output quantity lies in the range 0 to 1. Thus the contexts where these models are useful or realistic often depend on the range of the values being modelled.
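As a small illustration, a hedged sketch of evaluating a log-linear model with two assumed feature functions fi(X); the feature choices and parameter values are made up:

import numpy as np

def log_linear(X, c, w, features):
    # exp of a linear combination of feature functions, so the output is always positive.
    return np.exp(c + sum(w_i * f(X) for w_i, f in zip(w, features)))

features = [lambda X: X[0], lambda X: X[0] * X[1]]   # hypothetical f_i(X)
value = log_linear(np.array([2.0, 3.0]), c=0.1, w=[0.5, -0.2], features=features)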
|
Log-linear model : Log-linear analysis General linear model Generalized linear model Boltzmann distribution Elasticity
|
Log-linear model : Gujarati, Damodar N.; Porter, Dawn C. (2009). "How to Measure Elasticity: The Log-Linear Model". Basic Econometrics. New York: McGraw-Hill/Irwin. pp. 159–162. ISBN 978-0-07-337577-9.
|
Boosting (machine learning) : In machine learning (ML), boosting is an ensemble metaheuristic for primarily reducing bias (as opposed to variance). It can also improve the stability and accuracy of ML classification and regression algorithms. Hence, it is prevalent in supervised learning for converting weak learners to strong learners. The concept of boosting is based on the question posed by Kearns and Valiant (1988, 1989): "Can a set of weak learners create a single strong learner?" A weak learner is defined as a classifier that is only slightly correlated with the true classification. A strong learner is a classifier that is arbitrarily well-correlated with the true classification. Robert Schapire answered the question in the affirmative in a paper published in 1990. This has had significant ramifications in machine learning and statistics, most notably leading to the development of boosting. Initially, the hypothesis boosting problem simply referred to the process of turning a weak learner into a strong learner. Algorithms that achieve this quickly became known as "boosting". Freund and Schapire's arcing (Adapt[at]ive Resampling and Combining), as a general technique, is more or less synonymous with boosting.
|
Boosting (machine learning) : While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are weighted in a way that is related to the weak learners' accuracy. After a weak learner is added, the data weights are readjusted, known as "re-weighting". Misclassified input data gain a higher weight and examples that are classified correctly lose weight. Thus, future weak learners focus more on the examples that previous weak learners misclassified. There are many boosting algorithms. The original ones, proposed by Robert Schapire (a recursive majority gate formulation), and Yoav Freund (boost by majority), were not adaptive and could not take full advantage of the weak learners. Schapire and Freund then developed AdaBoost, an adaptive boosting algorithm that won the prestigious Gödel Prize. Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit to boosting algorithms are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms. The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and the most significant historically as it was the first algorithm that could adapt to the weak learners. It is often the basis of introductory coverage of boosting in university machine learning courses. There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in a function space using a convex cost function.
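A compact sketch of the re-weighting loop described above, in the style of AdaBoost with scikit-learn decision stumps as the weak learners; the helper names are assumptions, and this is an illustration rather than any particular reference implementation:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_fit(X, y, n_rounds=10):
    # y is expected to take values in {-1, +1}.
    n = len(y)
    weights = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=weights)
        pred = stump.predict(X)
        err = np.clip(np.sum(weights * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1.0 - err) / err)   # weight of this weak learner
        weights *= np.exp(-alpha * y * pred)      # misclassified examples gain weight
        weights /= weights.sum()
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def boost_predict(X, learners, alphas):
    scores = sum(a * l.predict(X) for l, a in zip(learners, alphas))
    return np.sign(scores)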
|
Boosting (machine learning) : Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization.
|
Boosting (machine learning) : Boosting algorithms can be based on convex or non-convex optimization algorithms. Convex algorithms, such as AdaBoost and LogitBoost, can be "defeated" by random noise such that they can't learn basic and learnable combinations of weak hypotheses. This limitation was pointed out by Long & Servedio in 2008. However, by 2009, multiple authors demonstrated that boosting algorithms based on non-convex optimization, such as BrownBoost, can learn from noisy datasets and can specifically learn the underlying classifier of the Long–Servedio dataset.
|
Boosting (machine learning) : scikit-learn, an open source machine learning library for Python Orange, a free data mining software suite, module Orange.ensemble Weka is a machine learning set of tools that offers variate implementations of boosting algorithms like AdaBoost and LogitBoost R package GBM (Generalized Boosted Regression Models) implements extensions to Freund and Schapire's AdaBoost algorithm and Friedman's gradient boosting machine. jboost; AdaBoost, LogitBoost, RobustBoost, Boostexter and alternating decision trees R package adabag: Applies Multiclass AdaBoost.M1, AdaBoost-SAMME and Bagging R package xgboost: An implementation of gradient boosting for linear and tree-based models.
|
Boosting (machine learning) : Freund, Yoav; Schapire, Robert E. (1997). "A Decision-Theoretic Generalization of On-line Learning and an Application to Boosting" (PDF). Journal of Computer and System Sciences. 55 (1): 119–139. doi:10.1006/jcss.1997.1504. Schapire, Robert E. (1990). "The strength of weak learnability". Machine Learning. 5 (2): 197–227. doi:10.1007/BF00116037. S2CID 6207294. Schapire, Robert E.; Singer, Yoram (1999). "Improved Boosting Algorithms Using Confidence-Rated Predictors". Machine Learning. 37 (3): 297–336. doi:10.1023/A:1007614523901. S2CID 2329907. Zhou, Zhihua (2008). "On the margin explanation of boosting algorithm" (PDF). In: Proceedings of the 21st Annual Conference on Learning Theory (COLT'08): 479–490. Zhou, Zhihua (2013). "On the doubt about margin explanation of boosting" (PDF). Artificial Intelligence. 203: 1–18. arXiv:1009.3613. doi:10.1016/j.artint.2013.07.002. S2CID 2828847.