Evolutionary multimodal optimization : De Jong's crowding method, Goldberg's sharing function approach, Petrowski's clearing method, restricted mating, and the maintenance of multiple subpopulations are some of the popular approaches that the community has proposed. The first two methods are especially well studied; however, they do not perform an explicit separation of solutions into different basins of attraction. The application of multimodal optimization within ES was not explicit for many years and has been explored only recently. A niching framework utilizing derandomized ES was introduced by Shir, proposing the CMA-ES as a niching optimizer for the first time. The underpinning of that framework was the selection of a peak individual per subpopulation in each generation, followed by its sampling to produce the consecutive dispersion of search points. The biological analogy of this machinery is an alpha male winning all the imposed competitions and thereafter dominating its ecological niche, obtaining all the sexual resources therein to generate its offspring. Recently, an evolutionary multiobjective optimization (EMO) approach was proposed, in which a suitable second objective is added to the originally single-objective multimodal optimization problem, so that the multiple solutions form a weak Pareto-optimal front; the multimodal optimization problem can then be solved for its multiple solutions using an EMO algorithm. Improving upon their work, the same authors have made their algorithm self-adaptive, thus eliminating the need for pre-specifying the parameters. An approach that uses the space topology instead of a radius to separate the population into subpopulations (or species) has also been proposed.
Evolutionary multimodal optimization : Multi-modal optimization using Particle Swarm Optimization (PSO) Niching in Evolution Strategies (ES) Multimodal optimization page at Chair 11, Computer Science, TU Dortmund University IEEE CIS Task Force on Multi-modal Optimization
Expectation–maximization algorithm : In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step. It can be used, for example, to estimate a mixture of Gaussians, or to solve the multiple linear regression problem.
Expectation–maximization algorithm : The EM algorithm was explained and given its name in a classic 1977 paper by Arthur Dempster, Nan Laird, and Donald Rubin. They pointed out that the method had been "proposed many times in special circumstances" by earlier authors. One of the earliest is the gene-counting method for estimating allele frequencies by Cedric Smith. Another was proposed by H.O. Hartley in 1958, and Hartley and Hocking in 1977, from which many of the ideas in the Dempster–Laird–Rubin paper originated. Another was proposed by S.K. Ng, Thriyambakam Krishnan and G.J. McLachlan in 1977. Hartley's ideas can be broadened to any grouped discrete distribution. A very detailed treatment of the EM method for exponential families was published by Rolf Sundberg in his thesis and several papers, following his collaboration with Per Martin-Löf and Anders Martin-Löf. The Dempster–Laird–Rubin paper in 1977 generalized the method and sketched a convergence analysis for a wider class of problems, establishing EM as an important tool of statistical analysis. See also Meng and van Dyk (1997). The convergence analysis of the Dempster–Laird–Rubin paper was flawed, and a correct convergence analysis was published by C. F. Jeff Wu in 1983. Wu's proof established the EM method's convergence also outside of the exponential family, as claimed by Dempster–Laird–Rubin.
Expectation–maximization algorithm : The EM algorithm is used to find (local) maximum likelihood parameters of a statistical model in cases where the equations cannot be solved directly. Typically these models involve latent variables in addition to unknown parameters and known data observations. That is, either missing values exist among the data, or the model can be formulated more simply by assuming the existence of further unobserved data points. For example, a mixture model can be described more simply by assuming that each observed data point has a corresponding unobserved data point, or latent variable, specifying the mixture component to which each data point belongs. Finding a maximum likelihood solution typically requires taking the derivatives of the likelihood function with respect to all the unknown values, the parameters and the latent variables, and simultaneously solving the resulting equations. In statistical models with latent variables, this is usually impossible. Instead, the result is typically a set of interlocking equations in which the solution to the parameters requires the values of the latent variables and vice versa, but substituting one set of equations into the other produces an unsolvable equation. The EM algorithm proceeds from the observation that there is a way to solve these two sets of equations numerically. One can simply pick arbitrary values for one of the two sets of unknowns, use them to estimate the second set, then use these new values to find a better estimate of the first set, and then keep alternating between the two until the resulting values both converge to fixed points. It's not obvious that this will work, but it can be proven in this context. Additionally, it can be proven that the derivative of the likelihood is (arbitrarily close to) zero at that point, which in turn means that the point is either a local maximum or a saddle point. In general, multiple maxima may occur, with no guarantee that the global maximum will be found. Some likelihoods also have singularities in them, i.e., nonsensical maxima. For example, one of the solutions that may be found by EM in a mixture model involves setting one of the components to have zero variance and the mean parameter for the same component to be equal to one of the data points.
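The alternation is easiest to see in a concrete case. Below is a minimal sketch of EM for a two-component one-dimensional Gaussian mixture, assuming NumPy; the initialization and iteration count are arbitrary illustrative choices, and the variance floor one would add to guard against the degenerate zero-variance solutions mentioned above is omitted for brevity.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Arbitrary starting values for the parameters theta.
    mu = rng.choice(x, size=2)              # component means
    var = np.array([x.var(), x.var()])      # component variances
    weights = np.array([0.5, 0.5])          # mixing proportions
    for _ in range(n_iter):
        # E step: responsibilities, i.e. expected component assignments.
        dens = np.stack([
            weights[k]
            * np.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
            / np.sqrt(2 * np.pi * var[k])
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M step: re-estimate the parameters from the responsibilities.
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk
        weights = nk / len(x)
    return mu, var, weights

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 1.0, 200)])
print(em_gmm_1d(x))   # estimated means should approach -2 and 3
```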
Expectation–maximization algorithm : Although an EM iteration does increase the observed data (i.e., marginal) likelihood function, no guarantee exists that the sequence converges to a maximum likelihood estimator. For multimodal distributions, this means that an EM algorithm may converge to a local maximum of the observed data likelihood function, depending on starting values. A variety of heuristic or metaheuristic approaches exist to escape a local maximum, such as random-restart hill climbing (starting with several different random initial estimates $\theta^{(t)}$), or applying simulated annealing methods. EM is especially useful when the likelihood is an exponential family, see Sundberg (2019, Ch. 8) for a comprehensive treatment: the E step becomes the sum of expectations of sufficient statistics, and the M step involves maximizing a linear function. In such a case, it is usually possible to derive closed-form updates for each step, using the Sundberg formula (proved and published by Rolf Sundberg, based on unpublished results of Per Martin-Löf and Anders Martin-Löf). The EM method was modified to compute maximum a posteriori (MAP) estimates for Bayesian inference in the original paper by Dempster, Laird, and Rubin. Other methods exist to find maximum likelihood estimates, such as gradient descent, conjugate gradient, or variants of the Gauss–Newton algorithm. Unlike EM, such methods typically require the evaluation of first and/or second derivatives of the likelihood function.
Expectation–maximization algorithm : Expectation-Maximization works to improve $Q(\theta \mid \theta^{(t)})$ rather than directly improving $\log p(\mathbf{X} \mid \theta)$. Here it is shown that improvements to the former imply improvements to the latter. For any $\mathbf{Z}$ with non-zero probability $p(\mathbf{Z} \mid \mathbf{X}, \theta)$, we can write $\log p(\mathbf{X} \mid \theta) = \log p(\mathbf{X}, \mathbf{Z} \mid \theta) - \log p(\mathbf{Z} \mid \mathbf{X}, \theta)$. We take the expectation over possible values of the unknown data $\mathbf{Z}$ under the current parameter estimate $\theta^{(t)}$ by multiplying both sides by $p(\mathbf{Z} \mid \mathbf{X}, \theta^{(t)})$ and summing (or integrating) over $\mathbf{Z}$. The left-hand side is the expectation of a constant, so we get: $\log p(\mathbf{X} \mid \theta) = \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \theta^{(t)}) \log p(\mathbf{X}, \mathbf{Z} \mid \theta) - \sum_{\mathbf{Z}} p(\mathbf{Z} \mid \mathbf{X}, \theta^{(t)}) \log p(\mathbf{Z} \mid \mathbf{X}, \theta) = Q(\theta \mid \theta^{(t)}) + H(\theta \mid \theta^{(t)})$, where $H(\theta \mid \theta^{(t)})$ is defined by the negated sum it is replacing. This last equation holds for every value of $\theta$ including $\theta = \theta^{(t)}$: $\log p(\mathbf{X} \mid \theta^{(t)}) = Q(\theta^{(t)} \mid \theta^{(t)}) + H(\theta^{(t)} \mid \theta^{(t)})$, and subtracting this last equation from the previous equation gives $\log p(\mathbf{X} \mid \theta) - \log p(\mathbf{X} \mid \theta^{(t)}) = Q(\theta \mid \theta^{(t)}) - Q(\theta^{(t)} \mid \theta^{(t)}) + H(\theta \mid \theta^{(t)}) - H(\theta^{(t)} \mid \theta^{(t)})$. However, Gibbs' inequality tells us that $H(\theta \mid \theta^{(t)}) \geq H(\theta^{(t)} \mid \theta^{(t)})$, so we can conclude that $\log p(\mathbf{X} \mid \theta) - \log p(\mathbf{X} \mid \theta^{(t)}) \geq Q(\theta \mid \theta^{(t)}) - Q(\theta^{(t)} \mid \theta^{(t)})$. In words, choosing $\theta$ to improve $Q(\theta \mid \theta^{(t)})$ causes $\log p(\mathbf{X} \mid \theta)$ to improve at least as much.
Expectation–maximization algorithm : The EM algorithm can be viewed as two alternating maximization steps, that is, as an example of coordinate ascent. Consider the function $F(q, \theta) := \operatorname{E}_{q}[\log L(\theta; x, Z)] + H(q)$, where $q$ is an arbitrary probability distribution over the unobserved data $z$ and $H(q)$ is the entropy of the distribution $q$. This function can be written as $F(q, \theta) = -D_{\mathrm{KL}}\big(q \parallel p_{Z \mid X}(\cdot \mid x; \theta)\big) + \log L(\theta; x)$, where $p_{Z \mid X}(\cdot \mid x; \theta)$ is the conditional distribution of the unobserved data given the observed data $x$ and $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence. Then the steps in the EM algorithm may be viewed as: Expectation step: choose $q$ to maximize $F$: $q^{(t)} = \operatorname{arg\,max}_{q} F(q, \theta^{(t)})$. Maximization step: choose $\theta$ to maximize $F$: $\theta^{(t+1)} = \operatorname{arg\,max}_{\theta} F(q^{(t)}, \theta)$.
Expectation–maximization algorithm : EM is frequently used for parameter estimation of mixed models, notably in quantitative genetics. In psychometrics, EM is an important tool for estimating item parameters and latent abilities of item response theory models. With its ability to deal with missing data and unobserved variables, EM is becoming a useful tool to price and manage risk of a portfolio. The EM algorithm (and its faster variant ordered subset expectation maximization) is also widely used in medical image reconstruction, especially in positron emission tomography, single-photon emission computed tomography, and X-ray computed tomography. See below for other faster variants of EM. In structural engineering, the Structural Identification using Expectation Maximization (STRIDE) algorithm is an output-only method for identifying natural vibration properties of a structural system using sensor data (see Operational Modal Analysis). EM is also used for data clustering. In natural language processing, two prominent instances of the algorithm are the Baum–Welch algorithm for hidden Markov models, and the inside-outside algorithm for unsupervised induction of probabilistic context-free grammars. In the analysis of intertrade waiting times, i.e. the time between subsequent trades in shares of stock at a stock exchange, the EM algorithm has proved to be very useful.
Expectation–maximization algorithm : A Kalman filter is typically used for on-line state estimation and a minimum-variance smoother may be employed for off-line or batch state estimation. However, these minimum-variance solutions require estimates of the state-space model parameters. EM algorithms can be used for solving joint state and parameter estimation problems. Filtering and smoothing EM algorithms arise by repeating this two-step procedure: E-step: operate a Kalman filter or a minimum-variance smoother designed with current parameter estimates to obtain updated state estimates. M-step: use the filtered or smoothed state estimates within maximum-likelihood calculations to obtain updated parameter estimates. Suppose that a Kalman filter or minimum-variance smoother operates on measurements of a single-input single-output system that possess additive white noise. An updated measurement noise variance estimate can be obtained from the maximum likelihood calculation $\hat{\sigma}_v^2 = \frac{1}{N} \sum_{k=1}^{N} (z_k - \hat{x}_k)^2$, where $\hat{x}_k$ are scalar output estimates calculated by a filter or a smoother from $N$ scalar measurements $z_k$. The above update can also be applied to updating a Poisson measurement noise intensity. Similarly, for a first-order auto-regressive process, an updated process noise variance estimate can be calculated by $\hat{\sigma}_w^2 = \frac{1}{N} \sum_{k=1}^{N} (\hat{x}_{k+1} - \hat{F} \hat{x}_k)^2$, where $\hat{x}_k$ and $\hat{x}_{k+1}$ are scalar state estimates calculated by a filter or a smoother. The updated model coefficient estimate is obtained via $\hat{F} = \frac{\sum_{k=1}^{N} (\hat{x}_{k+1} - \hat{F} \hat{x}_k)^2}{\sum_{k=1}^{N} \hat{x}_k^2}$. The convergence of parameter estimates such as those above is well studied.
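As an illustration, the two variance updates above translate directly into code. The sketch below assumes the state estimates have already been produced by some filter or smoother and are simply passed in as NumPy arrays; the function names are illustrative.

```python
import numpy as np

def measurement_noise_variance(z, x_hat):
    """sigma_v^2 = (1/N) * sum over k of (z_k - x_hat_k)^2"""
    return np.mean((z - x_hat) ** 2)

def process_noise_variance(x_hat, F_hat):
    """sigma_w^2 = (1/N) * sum over k of (x_hat_{k+1} - F_hat * x_hat_k)^2"""
    return np.mean((x_hat[1:] - F_hat * x_hat[:-1]) ** 2)
```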
Expectation–maximization algorithm : A number of methods have been proposed to accelerate the sometimes slow convergence of the EM algorithm, such as those using conjugate gradient and modified Newton's methods (Newton–Raphson). Also, EM can be used with constrained estimation methods. The parameter-expanded expectation maximization (PX-EM) algorithm often provides a speed-up by "us[ing] a `covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data". Expectation conditional maximization (ECM) replaces each M step with a sequence of conditional maximization (CM) steps in which each parameter θi is maximized individually, conditionally on the other parameters remaining fixed. ECM can itself be extended into the Expectation conditional maximization either (ECME) algorithm. This idea is further extended in the generalized expectation maximization (GEM) algorithm, in which only an increase in the objective function F is sought for both the E step and the M step, as described in the As a maximization–maximization procedure section. GEM has also been developed for distributed environments, with promising results. It is also possible to consider the EM algorithm as a subclass of the MM (Majorize/Minimize or Minorize/Maximize, depending on context) algorithm, and therefore use any machinery developed in the more general case.
Expectation–maximization algorithm : EM is a partially non-Bayesian, maximum likelihood method. Its final result gives a probability distribution over the latent variables (in the Bayesian style) together with a point estimate for θ (either a maximum likelihood estimate or a posterior mode). A fully Bayesian version of this may be wanted, giving a probability distribution over θ and the latent variables. The Bayesian approach to inference is simply to treat θ as another latent variable. In this paradigm, the distinction between the E and M steps disappears. If using the factorized Q approximation as described above (variational Bayes), solving can iterate over each latent variable (now including θ) and optimize them one at a time. Now, k steps per iteration are needed, where k is the number of latent variables. For graphical models this is easy to do as each variable's new Q depends only on its Markov blanket, so local message passing can be used for efficient inference.
Expectation–maximization algorithm : In information geometry, the E step and the M step are interpreted as projections under dual affine connections, called the e-connection and the m-connection; the Kullback–Leibler divergence can also be understood in these terms.
Expectation–maximization algorithm : EM typically converges to a local optimum, not necessarily the global optimum, with no bound on the convergence rate in general. It can be arbitrarily poor in high dimensions, and there can be an exponential number of local optima. Hence, a need exists for alternative methods for guaranteed learning, especially in the high-dimensional setting. Alternatives to EM exist with better guarantees for consistency, which are termed moment-based approaches or the so-called spectral techniques. Moment-based approaches to learning the parameters of a probabilistic model enjoy guarantees such as global convergence under certain conditions, unlike EM, which is often plagued by the issue of getting stuck in local optima. Algorithms with guarantees for learning can be derived for a number of important models such as mixture models, HMMs etc. For these spectral methods, no spurious local optima occur, and the true parameters can be consistently estimated under some regularity conditions.
Expectation–maximization algorithm : mixture distribution compound distribution density estimation Principal component analysis total absorption spectroscopy The EM algorithm can be viewed as a special case of the majorize-minimization (MM) algorithm.
Expectation–maximization algorithm : Hogg, Robert; McKean, Joseph; Craig, Allen (2005). Introduction to Mathematical Statistics. Upper Saddle River, NJ: Pearson Prentice Hall. pp. 359–364. Dellaert, Frank (February 2002). The Expectation Maximization Algorithm (PDF) (Technical Report number GIT-GVU-02-20). Georgia Tech College of Computing. gives an easier explanation of EM algorithm as to lowerbound maximization. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2. Gupta, M. R.; Chen, Y. (2010). "Theory and Use of the EM Algorithm". Foundations and Trends in Signal Processing. 4 (3): 223–296. CiteSeerX 10.1.1.219.6830. doi:10.1561/2000000034. A well-written short book on EM, including detailed derivation of EM for GMMs, HMMs, and Dirichlet. Bilmes, Jeff (1997). A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models (Technical Report TR-97-021). International Computer Science Institute. includes a simplified derivation of the EM equations for Gaussian Mixtures and Gaussian Mixture Hidden Markov Models. McLachlan, Geoffrey J.; Krishnan, Thriyambakam (2008). The EM Algorithm and Extensions (2nd ed.). Hoboken: Wiley. ISBN 978-0-471-20170-0.
Expectation–maximization algorithm : Various 1D, 2D and 3D demonstrations of EM together with Mixture Modeling are provided as part of the paired SOCR activities and applets. These applets and activities show empirically the properties of the EM algorithm for parameter estimation in diverse settings. Class hierarchy in C++ (GPL) including Gaussian Mixtures The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay includes simple examples of the EM algorithm such as clustering using the soft k-means algorithm, and emphasizes the variational view of the EM algorithm, as described in Chapter 33.7 of version 7.2 (fourth edition). Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs (chapters). The Expectation Maximization Algorithm: A short tutorial, A self-contained derivation of the EM Algorithm by Sean Borman. The EM Algorithm, by Xiaojin Zhu. EM algorithm and variants: an informal tutorial by Alexis Roche. A concise and very clear description of EM and many interesting variants.
FastICA : FastICA is an efficient and popular algorithm for independent component analysis invented by Aapo Hyvärinen at Helsinki University of Technology. Like most ICA algorithms, FastICA seeks an orthogonal rotation of prewhitened data, through a fixed-point iteration scheme, that maximizes a measure of non-Gaussianity of the rotated components. Non-Gaussianity serves as a proxy for statistical independence, which is a very strong condition and requires infinite data to verify. FastICA can also be derived as an approximate Newton iteration.
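To make the fixed-point scheme concrete, here is a minimal sketch of the one-unit FastICA update with the common tanh (logcosh) nonlinearity, assuming the data have already been centered and prewhitened; it illustrates the structure of the iteration, not the reference implementation.

```python
import numpy as np

def fastica_one_unit(X, n_iter=200, tol=1e-6, seed=0):
    """X: prewhitened data, shape (n_features, n_samples)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ X
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
        # Fixed-point update: w <- E[x g(w^T x)] - E[g'(w^T x)] w
        w_new = (X * g).mean(axis=1) - g_prime.mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:   # converged up to sign flips
            return w_new
        w = w_new
    return w   # one row of the unmixing matrix
```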
FastICA : Unsupervised learning Machine learning The IT++ library features a FastICA implementation in C++ Infomax
FastICA : FastICA in Python FastICA package for Matlab or Octave fastICA package in R programming language FastICA in Java on SourceForge FastICA in Java in RapidMiner. FastICA in Matlab FastICA in MDP FastICA in Julia
Federated Learning of Cohorts : Federated Learning of Cohorts (FLoC) is a type of web tracking. It groups people into "cohorts" based on their browsing history for the purpose of interest-based advertising. FLoC was being developed as a part of Google's Privacy Sandbox initiative, which includes several other advertising-related technologies with bird-themed names. Despite "federated learning" in the name, FLoC does not utilize any federated learning. Google began testing the technology in Chrome 89 released in March 2021 as a replacement for third-party cookies. By April 2021, every major browser aside from Google Chrome that is based on Google's open-source Chromium platform had declined to implement FLoC. The technology was criticized on privacy grounds by groups including the Electronic Frontier Foundation and DuckDuckGo, and has been described as anti-competitive; it generated an antitrust response in multiple countries as well as questions about General Data Protection Regulation compliance. In July 2021, Google quietly suspended development of FLoC; Chrome 93, released on August 31, 2021, became the first version which disabled FLoC, but did not remove the internal programming. On January 25, 2022, Google officially announced it had ended development of FLoC technologies and proposed the new Topics API to replace it. Brave developers criticized Topics API as a rebranding of FLoC with only minor changes and without addressing their main concerns.
Federated Learning of Cohorts : The Federated Learning of Cohorts algorithm analyzes users' online activity within the browser, and generates a "cohort ID" using the SimHash algorithm to group a given user with other users who access similar content. Each cohort contains several thousand users in order to make identifying individual users more difficult, and cohorts are updated weekly. Websites are then able to access the cohort ID using an API and determine what advertisements to serve. Google does not label cohorts based on interest beyond grouping users and assigning an ID, so advertisers need to determine the user types of each cohort on their own.
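For illustration, below is a minimal sketch of the generic SimHash scheme that cohort assignment builds on: each browsing-history feature contributes a hash, and the sign pattern of the summed bits yields a fingerprint that is similar for users with similar histories. The feature set, hash function, and fingerprint width are made-up choices for the example; this is not Google's implementation.

```python
import hashlib

def simhash(features, n_bits=16):
    counts = [0] * n_bits
    for feat in features:
        h = int.from_bytes(hashlib.md5(feat.encode()).digest(), "big")
        for i in range(n_bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Similar feature sets flip few sign bits, so the fingerprint can
    # serve as a coarse "cohort ID" shared by similar users.
    return sum((1 << i) for i, c in enumerate(counts) if c > 0)

print(simhash(["news.example", "cooking.example", "sports.example"]))
```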
Federated Learning of Cohorts : Google claimed in January 2021 that FLoC was at least 95% effective compared to tracking using third-party cookies, but AdExchanger reported that some people in the advertising technology industry expressed skepticism about the claim and the methodology behind it. As every website that opts into FLoC will have the same access to information about which cohort a user belongs to, the technology's developers say this democratizes access to some information about a user's general browser history, in contrast to the status quo, where websites have to use tracking techniques. The Electronic Frontier Foundation has criticized FLoC, with one EFF researcher calling the testing of the technology in Chrome "a concrete breach of user trust in service of a technology that should not exist" in a post on the organization's blog. The EFF also created a website which allows Chrome users to check whether FLoC is being tested in their browsers. The EFF criticized the fact that every site will be able to access data about a user, without having to track them across the web first. Additionally on the EFF blog, Cory Doctorow praised Chrome's planned removal of third-party cookies, but added that "[just] because FLoC is billed as pro-privacy and also criticized as anti-competitive, it doesn't mean that privacy and competition aren't compatible", stating that Google is "appointing itself the gatekeeper who decides when we're spied on while skimming from advertisers with nowhere else to go." On April 10, 2021, the CEO of DuckDuckGo released a statement telling people not to use Google Chrome, stating that Chrome users can be included in FLoC without choosing to be and that no other browser vendor has expressed interest in using the tracking method. The statement said that "there is no such thing as a behavioral tracking mechanism imposed without consent that respects people's privacy" and that Google should make FLoC "explicitly opt-in" and "free of dark patterns". DuckDuckGo also announced that its website will not collect FLoC IDs or use them to target ads, and updated its Chrome extension to block websites from interacting with FLoC. On April 12, 2021, Brave, a web browser built on the Chromium platform, criticized FLoC in a blog post and announced plans to disable FLoC in the Brave browser and make the company's main website opt out of FLoC. The blog post, co-written by the company's CEO Brendan Eich, described Google's efforts to replace third-party cookies as "Titanic-level deckchair-shuffling" and "a step backward from more fundamental, privacy-and-user focused changes the Web needs." Tech and media news site The Verge noted that not all possible repercussions of FLoC for ad tech are known, and that its structure could benefit or harm smaller ad tech companies, noting specifically that larger ad tech companies may be better equipped to "parse what FLoCs mean and what ads to target against them." Multiple companies including GitHub, Drupal and Amazon declined to enable FLoC, instead opting to disable FLoC outright by including the HTTP header Permissions-Policy: interest-cohort=(). WordPress, a widely used website framework, floated a proposal to disable FLoC-based tracking across all websites that used the framework. Almost all major browsers based on Google's open-source Chromium platform declined to implement FLoC, including Microsoft Edge, Vivaldi, Brave, and Opera.
In May 2021, The Economist reported that it may be hard for Google to "stop the system from grouping people by characteristics they wish to keep private, such as race or sexuality."
Federated Learning of Cohorts : Am I FLoCed? – EFF website reporting to users if FLoC is enabled FLoCs explained at the Privacy Sandbox Initiative website More detailed FLoC Origin Trial & Clustering – information from the Chromium project
Gaussian splatting : Gaussian splatting is a volume rendering technique that deals with the direct rendering of volume data without converting the data into surface or line primitives. The technique was originally introduced as splatting by Lee Westover in the early 1990s. With advancements in computer graphics, newer methods such as 3D Gaussian splatting and 3D Temporal Gaussian splatting have been developed to offer real-time radiance field rendering and dynamic scene rendering respectively.
Gaussian splatting : 3D Gaussian splatting is a technique used in the field of real-time radiance field rendering. It enables the creation of high-quality real-time novel-view scenes by combining multiple photos or videos, addressing a significant challenge in the field. The method represents scenes with 3D Gaussians that retain properties of continuous volumetric radiance fields, integrating sparse points produced during camera calibration. It introduces an anisotropic representation using 3D Gaussians to model radiance fields, along with an interleaved optimization and density control of the Gaussians. A fast visibility-aware rendering algorithm supporting anisotropic splatting, tailored to GPU usage, is also proposed.
Gaussian splatting : Extending 3D Gaussian splatting to dynamic scenes, 3D Temporal Gaussian splatting incorporates a time component, allowing for real-time rendering of dynamic scenes with high resolutions. It represents and renders dynamic scenes by modeling complex motions while maintaining efficiency. The method uses a HexPlane to connect adjacent Gaussians, providing an accurate representation of position and shape deformations. By utilizing only a single set of canonical 3D Gaussians and predictive analytics, it models how they move over different timestamps. It is sometimes referred to as "4D Gaussian splatting"; however, this naming convention implies the use of 4D Gaussian primitives (parameterized by a 4-dimensional mean vector and a 4×4 covariance matrix). Most work in this area still employs 3D Gaussian primitives, applying temporal constraints as an extra parameter of optimization. Achievements of this technique include real-time rendering on dynamic scenes with high resolutions, while maintaining quality. It showcases potential applications for future developments in film and other media, although there are current limitations regarding the length of motion captured.
Gaussian splatting : 3D Gaussian splatting has been adapted and extended across various computer vision and graphics applications, from dynamic scene rendering to autonomous driving simulations and 4D content creation: Text-to-3D using Gaussian Splatting: Applies 3D Gaussian splatting to text-to-3D generation. End-to-end Autonomous Driving: Mentions 3D Gaussian splatting as a data-driven sensor simulation method for autonomous driving, highlighting its ability to generate realistic novel views of a scene. SuGaR: Proposes a method to extract precise and fast meshes from 3D Gaussian splatting. SplaTAM: Applies 3D Gaussian-based radiance fields to Simultaneous Localization and Mapping (SLAM), leveraging fast rendering and optimization capabilities to achieve state-of-the-art results. Align Your Gaussians: Uses dynamic 3D Gaussians for 4D content creation from text.
Gaussian splatting : Computer graphics Neural radiance field Volume rendering
GeneRec : GeneRec is a generalization of the recirculation algorithm, and approximates Almeida-Pineda recurrent backpropagation. It is used as part of the Leabra algorithm for error-driven learning. The symmetric, midpoint version of GeneRec is equivalent to the contrastive Hebbian learning algorithm (CHL).
GeneRec : Leabra O'Reilly (1996; Neural Computation)
Genetic Algorithm for Rule Set Production : Genetic Algorithm for Rule Set Production (GARP) is a computer program based on a genetic algorithm that creates ecological niche models for species. The generated models describe the environmental conditions (precipitation, temperature, elevation, etc.) under which the species should be able to maintain populations. As input, it uses local observations of the species and related environmental parameters that describe the potential limits of the species' ability to survive. Such environmental parameters are commonly stored in geographical information systems. A GARP model is a random set of mathematical rules which can be read as limiting environmental conditions. Each rule is considered a gene; the set of genes is combined in random ways to generate many possible models describing the potential of the species to occur.
Genetic Algorithm for Rule Set Production : OpenModeller – (related GARP page) Lifemapper
Graphical time warping : Graphical time warping (GTW) is a framework for jointly aligning multiple pairs of time series or sequences. GTW considers both the alignment accuracy of each sequence pair and the similarity among pairs. By contrast, alignment with dynamic time warping (DTW) considers the pairs independently and minimizes only the distance between the two sequences in a given pair. GTW therefore generalizes DTW and can achieve better alignment performance when similarity among pairs is expected. One application of GTW is signal propagation analysis in time-lapse bio-imaging data, where the propagation patterns in adjacent pixels are generally similar. Other applications include signature identification, binocular stereo depth calculation, and liquid chromatography–mass spectrometry (LC-MS) profile alignment in proteomics data analysis. Indeed, as long as the data are structured with inter-dependent time series or sequences, they can be analyzed with GTW. GTW is able to model constraints or similarities between warping paths by transforming the DTW-equivalent shortest path problem into a maximum flow problem in the dual graph, which can be solved by most max-flow algorithms. However, when the data are large, these algorithms become time-consuming and the memory usage is high. An efficient algorithm, Bidirectional pushing with Linear Component Operations (BILCO), was developed to solve the GTW problem. It achieves an average 10-fold improvement in both computation time and memory usage compared with state-of-the-art generic maximum-flow algorithms in GTW applications.
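For contrast with GTW's joint formulation, below is a minimal sketch of the classic DTW recurrence, which aligns a single pair of sequences in isolation; this quadratic dynamic program is the per-pair baseline that GTW couples across pairs.

```python
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessor steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))
```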
Graphical time warping : Dynamic time warping Elastic matching Sequence alignment Multiple sequence alignment
IDistance : In pattern recognition, iDistance is an indexing and query processing technique for k-nearest neighbor queries on point data in multi-dimensional metric spaces. The kNN query is one of the hardest problems on multi-dimensional data, especially when the dimensionality of the data is high. iDistance is designed to process kNN queries in high-dimensional spaces efficiently and performs extremely well for skewed data distributions, which usually occur in real-life data sets. iDistance employs a two-phase search strategy involving an initial filtering of candidate regions and a subsequent refinement of results, an approach aligned with the Filter and Refine Principle (FRP). This means that the index first prunes the search space to eliminate unlikely candidates, then verifies the true nearest neighbors in a refinement step, following the general FRP paradigm used in database search algorithms. The iDistance index can also be augmented with machine learning models to learn data distributions for improved searching and storage of multi-dimensional data.
IDistance : Building the iDistance index has two steps: A number of reference points in the data space are chosen. There are various ways of choosing reference points; using cluster centers as reference points is the most efficient. The data points are then partitioned into Voronoi cells based on the chosen reference points. The distance between a data point and its closest reference point is calculated; this distance plus a scaling value is called the point's iDistance. By this means, points in a multi-dimensional space are mapped to one-dimensional values, and a B+-tree can then be adopted to index the points using the iDistance as the key. For example, with three reference points (O1, O2, O3), each data point is mapped to a one-dimensional value and indexed in a B+-tree. Various extensions have been proposed to make the selection of reference points more effective for query performance, including employing machine learning to identify reference points.
IDistance : To process a kNN query, the query is mapped to a number of one-dimensional range queries, which can be processed efficiently on a B+-tree. The query point Q is mapped to a value in the B+-tree, while the kNN search "sphere" is mapped to a range in the B+-tree. The search sphere expands gradually until the k nearest neighbors are found; this corresponds to gradually expanding range searches in the B+-tree. The iDistance technique can be viewed as a way of accelerating the sequential scan. Instead of scanning records from the beginning to the end of the data file, iDistance starts the scan from spots where the nearest neighbors can be obtained early with a very high probability.
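A minimal sketch of both steps, with a sorted array of (key, point index) pairs standing in for the B+-tree, is shown below. The reference-point choice, the scaling constant c (assumed larger than any within-partition distance), and the expansion step size are illustrative assumptions, not part of the original proposal.

```python
import bisect
import numpy as np

def build_idistance(points, refs, c):
    """Map each point to the key ref_id * c + dist(point, nearest ref)."""
    keys = []
    for idx, p in enumerate(points):
        d = [np.linalg.norm(p - r) for r in refs]
        j = int(np.argmin(d))
        keys.append((j * c + d[j], idx))
    keys.sort()                                 # stands in for the B+-tree
    return keys

def knn_query(q, k, points, refs, c, keys, step=0.1):
    radius = step
    while True:
        cand = set()
        for j, r in enumerate(refs):            # one 1-D range per partition
            dq = np.linalg.norm(q - r)
            lo = bisect.bisect_left(keys, (j * c + max(dq - radius, 0.0), -1))
            hi = bisect.bisect_right(keys, (j * c + dq + radius, len(points)))
            cand.update(idx for _, idx in keys[lo:hi])
        # Keep only candidates already proven inside the search sphere.
        hits = [i for i in cand if np.linalg.norm(q - points[i]) <= radius]
        if len(hits) >= k:
            return sorted(hits, key=lambda i: np.linalg.norm(q - points[i]))[:k]
        radius += step                          # expand the sphere and retry

rng = np.random.default_rng(0)
pts = rng.random((500, 4))
refs = pts[:3]                                  # illustrative reference points
keys = build_idistance(pts, refs, c=100.0)
print(knn_query(pts[7], 5, pts, refs, c=100.0, keys=keys))
```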
IDistance : The iDistance has been used in many applications including Image retrieval Video indexing Similarity search in P2P systems Mobile computing Recommender system
IDistance : The iDistance was first proposed by Cui Yu, Beng Chin Ooi, Kian-Lee Tan and H. V. Jagadish in 2001. Later, together with Rui Zhang, they improved the technique and performed a more comprehensive study on it in 2005.
IDistance : Filter and refine
IDistance : iDistance implementation in C by Rui Zhang Google's iDistance implementation in C++ Early Separation of Filter and Refinement Steps in Spatial Query Optimization Filter and Refine Principle (FRP)
Incremental learning : In computer science, incremental learning is a method of machine learning in which input data is continuously used to extend the existing model's knowledge, i.e. to further train the model. It represents a dynamic technique of supervised learning and unsupervised learning that can be applied when training data becomes available gradually over time or when its size exceeds system memory limits. Algorithms that can facilitate incremental learning are known as incremental machine learning algorithms. Many traditional machine learning algorithms inherently support incremental learning; others can be adapted to facilitate it. Examples of incremental algorithms include decision trees (IDE4, ID5R and gaenari), decision rules, artificial neural networks (RBF networks, Learn++, Fuzzy ARTMAP, TopoART, and IGNG) and the incremental SVM. The aim of incremental learning is for the learning model to adapt to new data without forgetting its existing knowledge. Some incremental learners have some built-in parameter or assumption that controls the relevancy of old data, while others, called stable incremental machine learning algorithms, learn representations of the training data that are not even partially forgotten over time. Fuzzy ART and TopoART are two examples of this second approach. Incremental algorithms are frequently applied to data streams or big data, addressing issues in data availability and resource scarcity respectively. Stock trend prediction and user profiling are some examples of data streams where new data becomes continuously available. Applying incremental learning to big data aims to produce faster classification or forecasting times.
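As a concrete illustration, scikit-learn exposes incremental learning through its partial_fit interface; the sketch below trains a linear classifier one mini-batch at a time on a made-up stream of batches, extending the model without retraining from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def batches(n_batches=10, batch_size=100, seed=0):
    """A made-up data stream: each yield is one newly arrived mini-batch."""
    rng = np.random.default_rng(seed)
    for _ in range(n_batches):
        X = rng.standard_normal((batch_size, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels
        yield X, y

clf = SGDClassifier()
classes = np.array([0, 1])        # all classes must be declared up front
for X, y in batches():
    clf.partial_fit(X, y, classes=classes)  # extends the existing model
print(clf.predict([[1.0, 1.0], [-1.0, -1.0]]))
```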
Incremental learning : Transduction (machine learning)
Incremental learning : charleslparker (March 12, 2013). "Brief Introduction to Streaming data and Incremental Algorithms". BigML Blog. Gepperth, Alexander; Hammer, Barbara (2016). Incremental learning algorithms and applications (PDF). ESANN. pp. 357–368. LibTopoART: A software library for incremental learning tasks "Creme: Library for incremental learning". Archived from the original on 2019-08-03. gaenari: C++ incremental decision tree algorithm YouTube search results Incremental Learning
K-nearest neighbors algorithm : In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method. It was first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. Most often, it is used for classification, as a k-NN classifier, the output of which is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. The k-NN algorithm can also be generalized for regression. In k-NN regression, also known as nearest neighbor smoothing, the output is the property value for the object. This value is the average of the values of k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor, also known as nearest neighbor interpolation. For both classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that nearer neighbors contribute more to the average than distant ones. For example, a common weighting scheme consists of giving each neighbor a weight of 1/d, where d is the distance to the neighbor. The input consists of the k closest training examples in a data set. The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. A peculiarity (sometimes even a disadvantage) of the k-NN algorithm is its sensitivity to the local structure of the data. In k-NN classification the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance, if the features represent different physical units or come in vastly different scales, then feature-wise normalizing of the training data can greatly improve its accuracy.
K-nearest neighbors algorithm : Suppose we have pairs $(X_1, Y_1), (X_2, Y_2), \dots, (X_n, Y_n)$ taking values in $\mathbb{R}^d \times \{1, 2\}$, where $Y$ is the class label of $X$, so that $X \mid Y = r \sim P_r$ for $r = 1, 2$ (and probability distributions $P_r$). Given some norm $\|\cdot\|$ on $\mathbb{R}^d$ and a point $x \in \mathbb{R}^d$, let $(X_{(1)}, Y_{(1)}), \dots, (X_{(n)}, Y_{(n)})$ be a reordering of the training data such that $\|X_{(1)} - x\| \leq \dots \leq \|X_{(n)} - x\|$.
K-nearest neighbors algorithm : The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is Euclidean distance. For discrete variables, such as for text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for example, k-NN has been employed with correlation coefficients, such as Pearson and Spearman, as a metric. Often, the classification accuracy of k-NN can be improved significantly if the distance metric is learned with specialized algorithms such as Large Margin Nearest Neighbor or Neighbourhood components analysis. A drawback of the basic "majority voting" classification occurs when the class distribution is skewed. That is, examples of a more frequent class tend to dominate the prediction of the new example, because they tend to be common among the k nearest neighbors due to their large number. One way to overcome this problem is to weight the classification, taking into account the distance from the test point to each of its k nearest neighbors. The class (or value, in regression problems) of each of the k nearest points is multiplied by a weight proportional to the inverse of the distance from that point to the test point. Another way to overcome skew is by abstraction in data representation. For example, in a self-organizing map (SOM), each node is a representative (a center) of a cluster of similar points, regardless of their density in the original training data. K-NN can then be applied to the SOM.
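A minimal sketch of the distance-weighted voting described above (brute-force Euclidean distances, weight 1/d per neighbor) might look as follows; the small dataset is made up for illustration.

```python
import numpy as np
from collections import defaultdict

def knn_classify(X_train, y_train, x_query, k=5, eps=1e-12):
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest points
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (dists[i] + eps)  # nearer -> larger vote
    return max(votes, key=votes.get)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [5., 5.], [5., 6.]])
y = np.array(["a", "a", "a", "b", "b"])
print(knn_classify(X, y, np.array([0.5, 0.5]), k=3))  # -> "a"
```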
K-nearest neighbors algorithm : The best choice of k depends upon the data; generally, larger values of k reduce the effect of noise on the classification but make boundaries between classes less distinct. A good k can be selected by various heuristic techniques (see hyperparameter optimization). The special case where the class is predicted to be the class of the closest training sample (i.e. when k = 1) is called the nearest neighbor algorithm. The accuracy of the k-NN algorithm can be severely degraded by the presence of noisy or irrelevant features, or if the feature scales are not consistent with their importance. Much research effort has been put into selecting or scaling features to improve classification. A particularly popular approach is the use of evolutionary algorithms to optimize feature scaling. Another popular approach is to scale features by the mutual information of the training data with the training classes. In binary (two-class) classification problems, it is helpful to choose k to be an odd number as this avoids tied votes. One popular way of choosing the empirically optimal k in this setting is via the bootstrap method.
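As one such heuristic technique, the sketch below selects k empirically by cross-validation over a small grid of odd values (cross-validation stands in for the bootstrap purely for ease of illustration; the dataset is made up).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

search = GridSearchCV(
    KNeighborsClassifier(),
    {"n_neighbors": [1, 3, 5, 7, 9, 11]},  # odd values avoid tied votes
    cv=5,
)
search.fit(X, y)
print(search.best_params_)   # the empirically best k on this data
```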
K-nearest neighbors algorithm : The most intuitive nearest neighbour type classifier is the one nearest neighbour classifier, which assigns a point $x$ to the class of its closest neighbour in the feature space, that is $C_n^{1nn}(x) = Y_{(1)}$. As the size of the training data set approaches infinity, the one nearest neighbour classifier guarantees an error rate of no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data).
K-nearest neighbors algorithm : The k-nearest neighbour classifier can be viewed as assigning the k nearest neighbours a weight $1/k$ and all others weight $0$. This can be generalised to weighted nearest neighbour classifiers, in which the $i$th nearest neighbour is assigned a weight $w_{ni}$, with $\sum_{i=1}^{n} w_{ni} = 1$. An analogous result on the strong consistency of weighted nearest neighbour classifiers also holds. Let $C_n^{wnn}$ denote the weighted nearest neighbour classifier with weights $\{w_{ni}\}_{i=1}^{n}$. Subject to regularity conditions on the class distributions, the excess risk has the following asymptotic expansion: $\mathcal{R}(C_n^{wnn}) - \mathcal{R}(C^{\mathrm{Bayes}}) = (B_1 s_n^2 + B_2 t_n^2)\,\{1 + o(1)\}$, for constants $B_1$ and $B_2$, where $s_n^2 = \sum_{i=1}^{n} w_{ni}^2$ and $t_n = n^{-2/d} \sum_{i=1}^{n} w_{ni}\,\{i^{1+2/d} - (i-1)^{1+2/d}\}$. The optimal weighting scheme $\{w_{ni}^*\}_{i=1}^{n}$, which balances the two terms in the display above, is given as follows: set $k^* = \lfloor B n^{4/(d+4)} \rfloor$, $w_{ni}^* = \frac{1}{k^*}\left[1 + \frac{d}{2} - \frac{d}{2 (k^*)^{2/d}}\,\{i^{1+2/d} - (i-1)^{1+2/d}\}\right]$ for $i = 1, 2, \dots, k^*$, and $w_{ni}^* = 0$ for $i = k^* + 1, \dots, n$. With optimal weights the dominant term in the asymptotic expansion of the excess risk is $O(n^{-4/(d+4)})$. Similar results are true when using a bagged nearest neighbour classifier.
K-nearest neighbors algorithm : k-NN is a special case of a variable-bandwidth, kernel density "balloon" estimator with a uniform kernel. The naive version of the algorithm is easy to implement by computing the distances from the test example to all stored examples, but it is computationally intensive for large training sets. Using an approximate nearest neighbor search algorithm makes k-NN computationally tractable even for large data sets. Many nearest neighbor search algorithms have been proposed over the years; these generally seek to reduce the number of distance evaluations actually performed. k-NN has some strong consistency results. As the amount of data approaches infinity, the two-class k-NN algorithm is guaranteed to yield an error rate no worse than twice the Bayes error rate (the minimum achievable error rate given the distribution of the data). Various improvements to the k-NN speed are possible by using proximity graphs. For multi-class k-NN classification, Cover and Hart (1967) prove an upper bound error rate of $R^* \leq R_{kNN} \leq R^* \left(2 - \frac{M R^*}{M-1}\right)$, where $R^*$ is the Bayes error rate (which is the minimal error rate possible), $R_{kNN}$ is the asymptotic k-NN error rate, and $M$ is the number of classes in the problem. This bound is tight in the sense that both the lower and upper bounds are achievable by some distribution. For $M = 2$ and as the Bayesian error rate $R^*$ approaches zero, this limit reduces to "not more than twice the Bayesian error rate".
K-nearest neighbors algorithm : There are many results on the error rate of the k nearest neighbour classifiers. The k-nearest neighbour classifier is strongly consistent (that is, for any joint distribution on $(X, Y)$) provided $k := k_n$ diverges and $k_n / n$ converges to zero as $n \to \infty$. Let $C_n^{knn}$ denote the k nearest neighbour classifier based on a training set of size $n$. Under certain regularity conditions, the excess risk yields the following asymptotic expansion: $\mathcal{R}(C_n^{knn}) - \mathcal{R}(C^{\mathrm{Bayes}}) = \left\{ B_1 \frac{1}{k} + B_2 \left(\frac{k}{n}\right)^{4/d} \right\} \{1 + o(1)\}$, for some constants $B_1$ and $B_2$. The choice $k^* = \lfloor B n^{4/(d+4)} \rfloor$ offers a trade-off between the two terms in the above display, for which the $k^*$-nearest neighbour error converges to the Bayes error at the optimal (minimax) rate $O(n^{-4/(d+4)})$.
K-nearest neighbors algorithm : The K-nearest neighbor classification performance can often be significantly improved through (supervised) metric learning. Popular algorithms are neighbourhood components analysis and large margin nearest neighbor. Supervised metric learning algorithms use the label information to learn a new metric or pseudo-metric.
K-nearest neighbors algorithm : When the input data to an algorithm is too large to be processed and is suspected to be redundant (e.g. the same measurement in both feet and meters), the input data is transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into the set of features is called feature extraction. If the features extracted are carefully chosen, it is expected that the feature set will extract the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input. Feature extraction is performed on raw data prior to applying the k-NN algorithm on the transformed data in feature space. An example of a typical computer vision computation pipeline for face recognition using k-NN, including feature extraction and dimension reduction pre-processing steps (usually implemented with OpenCV): Haar face detection Mean-shift tracking analysis PCA or Fisher LDA projection into feature space, followed by k-NN classification
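A hedged sketch of such a pipeline is shown below, using OpenCV's bundled Haar cascade for detection and scikit-learn for the PCA and k-NN stages; the mean-shift tracking stage is omitted, and the training arrays X_faces and y_labels are assumed to exist rather than defined here.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

# Haar cascade shipped with opencv-python.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_and_crop(gray_image, size=(64, 64)):
    # Haar face detection; take the first detected face region.
    x, y, w, h = detector.detectMultiScale(gray_image, 1.3, 5)[0]
    return cv2.resize(gray_image[y:y + h, x:x + w], size).ravel()

# Assumed training data: rows of flattened cropped faces plus labels.
# pca = PCA(n_components=50).fit(X_faces)
# knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_faces), y_labels)
# prediction = knn.predict(pca.transform([detect_and_crop(new_gray_image)]))
```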
K-nearest neighbors algorithm : For high-dimensional data (e.g., with number of dimensions more than 10) dimension reduction is usually performed prior to applying the k-NN algorithm in order to avoid the effects of the curse of dimensionality. The curse of dimensionality in the k-NN context basically means that Euclidean distance is unhelpful in high dimensions because all vectors are almost equidistant to the search query vector (imagine multiple points lying more or less on a circle with the query point at the center; the distance from the query to all data points in the search space is almost the same). Feature extraction and dimension reduction can be combined in one step using principal component analysis (PCA), linear discriminant analysis (LDA), or canonical correlation analysis (CCA) techniques as a pre-processing step, followed by clustering by k-NN on feature vectors in reduced-dimension space. This process is also called low-dimensional embedding. For very-high-dimensional datasets (e.g. when performing a similarity search on live video streams, DNA data or high-dimensional time series) running a fast approximate k-NN search using locality sensitive hashing, "random projections", "sketches" or other high-dimensional similarity search techniques from the VLDB toolbox might be the only feasible option.
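As one concrete option among those mentioned, the sketch below reduces made-up 500-dimensional data with a Gaussian random projection before running a nearest-neighbour query in the 20-dimensional image.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 500))          # high-dimensional points

proj = GaussianRandomProjection(n_components=20, random_state=0)
X_low = proj.fit_transform(X)                  # 500 -> 20 dimensions

nn = NearestNeighbors(n_neighbors=5).fit(X_low)
dist, idx = nn.kneighbors(proj.transform(X[:1]))
print(idx[0])   # approximate neighbours found in the reduced space
```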
K-nearest neighbors algorithm : Nearest neighbor rules in effect implicitly compute the decision boundary. It is also possible to compute the decision boundary explicitly, and to do so efficiently, so that the computational complexity is a function of the boundary complexity.
K-nearest neighbors algorithm : Data reduction is one of the most important problems for work with huge data sets. Usually, only some of the data points are needed for accurate classification. Those data are called the prototypes and can be found as follows: Select the class-outliers, that is, training data that are classified incorrectly by k-NN (for a given k) Separate the rest of the data into two sets: (i) the prototypes that are used for the classification decisions and (ii) the absorbed points that can be correctly classified by k-NN using prototypes. The absorbed points can then be removed from the training set.
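A minimal sketch of this two-step selection is given below, in the spirit of condensed nearest neighbour; integer class labels and the simple greedy scan order are simplifying assumptions for the example.

```python
import numpy as np

def one_nn_label(P, Py, x):
    return Py[np.argmin(np.linalg.norm(P - x, axis=1))]

def select_prototypes(X, y, k=3):
    # Step 1: remove class outliers, i.e. points misclassified by
    # k-NN computed on the rest of the training data.
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        votes = y[np.argsort(d)[:k]]
        if np.bincount(votes).argmax() == y[i]:
            keep.append(i)
    X, y = X[keep], y[keep]
    # Step 2: greedily grow a prototype set; "absorbed" points are those
    # the current prototypes already classify correctly and can be dropped.
    proto = [0]
    for i in range(1, len(X)):
        if one_nn_label(X[proto], y[proto], X[i]) != y[i]:
            proto.append(i)
    return X[proto], y[proto]
```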
K-nearest neighbors algorithm : In k-NN regression, also known as k-NN smoothing, the k-NN algorithm is used for estimating continuous variables. One such algorithm uses a weighted average of the k nearest neighbors, weighted by the inverse of their distance. This algorithm works as follows: Compute the Euclidean or Mahalanobis distance from the query example to the labeled examples. Order the labeled examples by increasing distance. Find a heuristically optimal number k of nearest neighbors, based on RMSE. This is done using cross validation. Calculate an inverse distance weighted average with the k-nearest multivariate neighbors.
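Those steps, with a fixed k in place of the cross-validated one, reduce to a minimal sketch (Euclidean distance; the toy data is made up):

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=3, eps=1e-12):
    d = np.linalg.norm(X_train - x_query, axis=1)    # step 1: distances
    nearest = np.argsort(d)[:k]                      # step 2: order, take k
    w = 1.0 / (d[nearest] + eps)                     # inverse-distance weights
    return np.sum(w * y_train[nearest]) / np.sum(w)  # step 4: weighted average

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])
print(knn_regress(X, y, np.array([1.5]), k=2))       # between 1 and 4
```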
K-nearest neighbors algorithm : The distance to the kth nearest neighbor can also be seen as a local density estimate and thus is also a popular outlier score in anomaly detection. The larger the distance to the k-NN, the lower the local density and the more likely the query point is an outlier. Although quite simple, this outlier model, along with another classic data mining method, local outlier factor, also works quite well in comparison to more recent and more complex approaches, according to a large-scale experimental analysis.
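A minimal sketch of this outlier score, the distance to the kth nearest neighbour computed by brute force on a made-up dataset:

```python
import numpy as np

def knn_outlier_scores(X, k=3):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    d_sorted = np.sort(d, axis=1)
    return d_sorted[:, k]   # column 0 is the point itself (distance 0)

X = np.vstack([np.random.default_rng(0).standard_normal((20, 2)),
               [[8.0, 8.0]]])           # one obvious outlier appended
print(knn_outlier_scores(X).argmax())   # -> 20, the appended point
```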
K-nearest neighbors algorithm : A confusion matrix or "matching matrix" is often used as a tool to validate the accuracy of k-NN classification. More robust statistical methods such as likelihood-ratio test can also be applied.
K-nearest neighbors algorithm : Nearest centroid classifier Closest pair of points problem Nearest neighbor graph Segmentation-based object categorization
Kernel methods for vector output : Kernel methods are a well-established tool to analyze the relationship between input data and the corresponding output of a function. Kernels encapsulate the properties of functions in a computationally efficient way and allow algorithms to easily swap functions of varying complexity. In typical machine learning algorithms, these functions produce a scalar output. Recent development of kernel methods for functions with vector-valued output is due, at least in part, to interest in simultaneously solving related problems. Kernels which capture the relationship between the problems allow them to borrow strength from each other. Algorithms of this type include multi-task learning (also called multi-output learning or vector-valued learning), transfer learning, and co-kriging. Multi-label classification can be interpreted as mapping inputs to (binary) coding vectors with length equal to the number of classes. In Gaussian processes, kernels are called covariance functions. Multiple-output functions correspond to considering multiple processes. See Bayesian interpretation of regularization for the connection between the two perspectives.
Kernel methods for vector output : The history of learning vector-valued functions is closely linked to transfer learning: storing knowledge gained while solving one problem and applying it to a different but related problem. The fundamental motivation for transfer learning in the field of machine learning was discussed in a NIPS-95 workshop on "Learning to Learn", which focused on the need for lifelong machine learning methods that retain and reuse previously learned knowledge. Research on transfer learning has attracted much attention since 1995 under different names: learning to learn, lifelong learning, knowledge transfer, inductive transfer, multitask learning, knowledge consolidation, context-sensitive learning, knowledge-based inductive bias, metalearning, and incremental/cumulative learning. Interest in learning vector-valued functions was particularly sparked by multitask learning, a framework which tries to learn multiple, possibly different tasks simultaneously. Much of the initial research in multitask learning in the machine learning community was algorithmic in nature, and applied to methods such as neural networks, decision trees and k-nearest neighbors in the 1990s. The use of probabilistic models and Gaussian processes was pioneered and largely developed in the context of geostatistics, where prediction over vector-valued output data is known as cokriging. Geostatistical approaches to multivariate modeling are mostly formulated around the linear model of coregionalization (LMC), a generative approach for developing valid covariance functions that has been used for multivariate regression and in statistics for computer emulation of expensive multivariate computer codes. The regularization and kernel theory literature for vector-valued functions followed in the 2000s. While the Bayesian and regularization perspectives were developed independently, they are in fact closely related.
Kernel methods for vector output : In this context, the supervised learning problem is to learn the function f which best predicts vector-valued outputs \mathbf{y}_i given inputs (data) \mathbf{x}_i: f(\mathbf{x}_i) = \mathbf{y}_i for i = 1, \ldots, N, where \mathbf{x}_i \in \mathcal{X}, an input space (e.g. \mathcal{X} = \mathbb{R}^p), and \mathbf{y}_i \in \mathbb{R}^D. In general, each component of \mathbf{y}_i could have different input data \mathbf{x}_{d,i} with different cardinality p and even different input spaces \mathcal{X}. The geostatistics literature calls this case heterotopic, and uses isotopic to indicate that each component of the output vector has the same set of inputs. Here, for simplicity of notation, we assume that the number and the sample space of the data for each output are the same.
Kernel methods for vector output : When implementing an algorithm using any of the kernels above, the practical issues of tuning the parameters and ensuring reasonable computation time must be addressed.
Kernel principal component analysis : In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space.
Kernel principal component analysis : Recall that conventional PCA operates on zero-centered data; that is, \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i = \mathbf{0}, where \mathbf{x}_i is one of the N multivariate observations. It operates by diagonalizing the covariance matrix C = \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i \mathbf{x}_i^\top; in other words, it gives an eigendecomposition of the covariance matrix, \lambda \mathbf{v} = C\mathbf{v}, which can be rewritten as \lambda \mathbf{x}_i^\top \mathbf{v} = \mathbf{x}_i^\top C \mathbf{v} for i = 1, \ldots, N. (See also: Covariance matrix as a linear operator)
Kernel principal component analysis : To understand the utility of kernel PCA, particularly for clustering, observe that, while N points cannot, in general, be linearly separated in d < N dimensions, they can almost always be linearly separated in d ≥ N dimensions. That is, given N points \mathbf{x}_i, if we map them to an N-dimensional space with \Phi(\mathbf{x}_i), where \Phi : \mathbb{R}^d \to \mathbb{R}^N, it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this \Phi creates linearly independent vectors, so there is no covariance on which to perform eigendecomposition explicitly as we would in linear PCA. Instead, in kernel PCA, a non-trivial, arbitrary \Phi function is 'chosen' that is never calculated explicitly, allowing the possibility of using very-high-dimensional \Phi's if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the \Phi-space, which we will call the 'feature space', we can create the N-by-N kernel K = k(\mathbf{x}, \mathbf{y}) = (\Phi(\mathbf{x}), \Phi(\mathbf{y})) = \Phi(\mathbf{x})^\top \Phi(\mathbf{y}), which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve for the eigenvectors and eigenvalues of the covariance matrix in the \Phi(\mathbf{x})-space (see Kernel trick). The N elements in each column of K represent the dot product of one point of the transformed data with all the transformed points (N points). Some well-known kernels are shown in the example below. Because we never work directly in the feature space, the kernel formulation of PCA is restricted in that it computes not the principal components themselves, but the projections of our data onto those components. To evaluate the projection from a point in the feature space \Phi(\mathbf{x}) onto the kth principal component V^k (where the superscript k denotes the component k, not powers of k): (V^k)^\top \Phi(\mathbf{x}) = \left( \sum_{i=1}^{N} a_i^k \Phi(\mathbf{x}_i) \right)^\top \Phi(\mathbf{x}). We note that \Phi(\mathbf{x}_i)^\top \Phi(\mathbf{x}) denotes a dot product, which is simply an element of the kernel K. It seems all that is left is to calculate and normalize the a_i^k, which can be done by solving the eigenvector equation N\lambda \mathbf{a} = K\mathbf{a}, where N is the number of data points in the set, and \lambda and \mathbf{a} are the eigenvalues and eigenvectors of K. Then, to normalize the eigenvectors \mathbf{a}^k, we require that 1 = (V^k)^\top V^k. Care must be taken regarding the fact that, whether or not \mathbf{x} has zero mean in its original space, it is not guaranteed to be centered in the feature space (which we never compute explicitly). Since centered data are required to perform an effective principal component analysis, we 'centralize' K to become K' = K - \mathbf{1}_N K - K \mathbf{1}_N + \mathbf{1}_N K \mathbf{1}_N, where \mathbf{1}_N denotes an N-by-N matrix in which each element takes the value 1/N. We use K' to perform the kernel PCA algorithm described above. One caveat of kernel PCA should be illustrated here. In linear PCA, we can use the eigenvalues to rank the eigenvectors based on how much of the variation of the data is captured by each principal component. This is useful for data dimensionality reduction, and it can also be applied to kernel PCA.
However, in practice there are cases in which the captured variance is the same for all components; this is typically caused by a poor choice of kernel scale.
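As a concrete illustration of the steps above, a minimal NumPy sketch with a Gaussian kernel (the function name, kernel choice and the small eigenvalue floor are illustrative assumptions):

```python
import numpy as np

def kernel_pca(X, n_components, sigma=1.0):
    """Kernel PCA with a Gaussian kernel, following the steps above."""
    N = X.shape[0]
    # Gaussian (RBF) kernel matrix K
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    # Centre in feature space: K' = K - 1_N K - K 1_N + 1_N K 1_N
    one_n = np.full((N, N), 1.0 / N)
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Solve N*lambda*a = K'a; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(K_c)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Normalise a^k so that (V^k)^T V^k = 1, i.e. a^k <- a^k / sqrt(lambda_k)
    a = eigvecs[:, :n_components] / np.sqrt(
        np.maximum(eigvals[:n_components], 1e-12))
    # Projections of the training points onto the components: K' a
    return K_c @ a
```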
Kernel principal component analysis : In practice, a large data set leads to a large K, and storing K may become a problem. One way to deal with this is to perform clustering on the data set and populate the kernel with the means of the clusters. Since even this method may yield a relatively large K, it is common to compute only the top P eigenvalues and their eigenvectors.
Kernel principal component analysis : Consider three concentric clouds of points (shown); we wish to use kernel PCA to identify these groups. The color of the points does not represent information involved in the algorithm, but only shows how the transformation relocates the data points. First, consider the kernel k(\mathbf{x}, \mathbf{y}) = (\mathbf{x}^\top \mathbf{y} + 1)^2. Applying this to kernel PCA yields the next image. Now consider a Gaussian kernel: k(\mathbf{x}, \mathbf{y}) = e^{-\frac{\|\mathbf{x} - \mathbf{y}\|^2}{2\sigma^2}}. That is, this kernel is a measure of closeness, equal to 1 when the points coincide and equal to 0 at infinity. Note in particular that the first principal component is enough to distinguish the three different groups, which is impossible using only linear PCA, because linear PCA operates only in the given (in this case two-dimensional) space, in which these concentric point clouds are not linearly separable.
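As a usage sketch, scikit-learn's KernelPCA applied to an analogous data set of two concentric rings (parameter values such as gamma=10 are illustrative):

```python
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
# Gaussian kernel: k(x, y) = exp(-gamma * ||x - y||^2)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10)
X_kpca = kpca.fit_transform(X)
# The first kernel principal component already separates the two rings,
# which is impossible with linear PCA in the original 2-D space.
```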
Kernel principal component analysis : Kernel PCA has been demonstrated to be useful for novelty detection and image de-noising.
Kernel principal component analysis : Cluster analysis Nonlinear dimensionality reduction Spectral clustering
Label propagation algorithm : Label propagation is a semi-supervised algorithm in machine learning that assigns labels to previously unlabeled data points. At the start of the algorithm, a (generally small) subset of the data points have labels (or classifications). These labels are propagated to the unlabeled points throughout the course of the algorithm. Within complex networks, real networks tend to have community structure, and label propagation is also an algorithm for finding communities. In comparison with other algorithms, label propagation has advantages in its running time and in the amount of a priori information needed about the network structure (no parameter is required to be known beforehand). The disadvantage is that it produces no unique solution, but an aggregate of many solutions.
Label propagation algorithm : Initially, each node carries a label that denotes the community it belongs to. Membership in a community changes based on the labels of the neighboring nodes: a node adopts the label held by the maximum number of its neighbors. Every node is initialized with a unique label, and the labels then diffuse through the network. Consequently, densely connected groups quickly reach a common label. When many such dense (consensus) groups have formed throughout the network, they continue to expand outwards until it is impossible to do so. The process has five steps: 1. Initialize the labels at all nodes in the network; for a given node x, C_x(0) = x. 2. Set t = 1. 3. Arrange the nodes in the network in a random order and set it to X. 4. For each x ∈ X, chosen in that specific order, let C_x(t) = f(C_{x_{i1}}(t), \ldots, C_{x_{im}}(t), C_{x_{i(m+1)}}(t-1), \ldots, C_{x_{ik}}(t-1)), where f returns the label occurring with the highest frequency among the neighbors of x (the first m neighbors have already been updated in the current iteration, while the rest still carry labels from the previous one); select a label at random if there are multiple highest-frequency labels. 5. If every node has a label that the maximum number of its neighbors have, stop the algorithm; else, set t = t + 1 and go to (3). A minimal implementation is sketched below.
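A pure-Python sketch of the procedure above, under the simplifying assumption that the algorithm stops when a full pass changes no label; the adjacency-dict representation and function name are illustrative:

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Asynchronous label propagation on a graph given as an adjacency
    dict {node: [neighbours]}. Each node starts with its own label."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}            # step 1: unique labels
    for _ in range(max_iter):               # steps 2-5
        nodes = list(adj)
        rng.shuffle(nodes)                  # step 3: random order
        changed = False
        for v in nodes:                     # step 4: majority label
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            top = [lab for lab, c in counts.items() if c == best]
            new = rng.choice(top)           # random tie-break
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:                     # step 5: stopping rule
            break
    return labels

# Two triangles joined by a single edge; each triangle typically
# converges to its own community label.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```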
Label propagation algorithm : Label propagation offers an efficient solution to the challenge of labeling datasets in machine learning by reducing the need for manual labels. In text classification, for instance, a graph-based technique is used: a nearest neighbor graph is built from network embeddings, labels are propagated along edges according to cosine similarity, and the resulting pseudo-labeled data points are merged into supervised training.
Label propagation algorithm : In contrast with other algorithms, label propagation can result in various community structures from the same initial condition. The range of solutions can be narrowed if some nodes are given preliminary labels while others are left unlabelled; the unlabelled nodes will then be more likely to adopt the preliminary labels. To find communities more accurately, Jaccard's index is used to aggregate multiple community structures while retaining all important information.
Label propagation algorithm : Python implementation of label propagation algorithm.
Lasso (statistics) : In statistics and machine learning, lasso (least absolute shrinkage and selection operator; also Lasso, LASSO or L1 regularization) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model. The lasso method assumes that the coefficients of the linear model are sparse, meaning that few of them are non-zero. It was originally introduced in geophysics, and later by Robert Tibshirani, who coined the term. Lasso was originally formulated for linear regression models. This simple case reveals a substantial amount about the estimator. These include its relationship to ridge regression and best subset selection and the connections between lasso coefficient estimates and so-called soft thresholding. It also reveals that (like standard linear regression) the coefficient estimates do not need to be unique if covariates are collinear. Though originally defined for linear regression, lasso regularization is easily extended to other statistical models including generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. Lasso's ability to perform subset selection relies on the form of the constraint and has a variety of interpretations including in terms of geometry, Bayesian statistics and convex analysis. The LASSO is closely related to basis pursuit denoising.
Lasso (statistics) : Lasso was introduced in order to improve the prediction accuracy and interpretability of regression models. It selects a reduced set of the known covariates for use in a model. Lasso was developed independently in geophysics literature in 1986, based on prior work that used the \ell_1 penalty for both fitting and penalization of the coefficients. Statistician Robert Tibshirani independently rediscovered and popularized it in 1996, based on Breiman's nonnegative garrote. Prior to lasso, the most widely used method for choosing covariates was stepwise selection. That approach only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can increase prediction error. At the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression improves prediction error by shrinking the sum of the squares of the regression coefficients to be less than a fixed value in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable. Lasso achieves both of these goals by forcing the sum of the absolute value of the regression coefficients to be less than a fixed value, which forces certain coefficients to zero, excluding them from impacting prediction. This idea is similar to ridge regression, which also shrinks the size of the coefficients; however, ridge regression does not set coefficients to zero (and, thus, does not perform variable selection).
Lasso (statistics) : Lasso regularization can be extended to other objective functions such as those for generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators. Given the objective function \frac{1}{N}\sum_{i=1}^{N} f(x_i, y_i, \alpha, \beta), the lasso-regularized version of the estimator is the solution to \min_{\alpha, \beta} \frac{1}{N}\sum_{i=1}^{N} f(x_i, y_i, \alpha, \beta) \quad \text{subject to } \|\beta\|_1 \leq t, where only \beta is penalized while \alpha is free to take any allowed value, just as \beta_0 was not penalized in the basic case.
Lasso (statistics) : Lasso variants have been created in order to remedy limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or exploiting dependencies among the covariates. Elastic net regularization adds an additional ridge regression-like penalty that improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy. Group lasso allows groups of related covariates to be selected as a single unit, which can be useful in settings where it does not make sense to include some covariates without others. Further extensions of group lasso perform variable selection within individual groups (sparse group lasso) and allow overlap between groups (overlap group lasso). Fused lasso can account for the spatial or temporal characteristics of a problem, resulting in estimates that better match system structure. Lasso-regularized models can be fit using techniques including subgradient methods, least-angle regression (LARS), and proximal gradient methods. Determining the optimal value for the regularization parameter is an important part of ensuring that the model performs well; it is typically chosen using cross-validation.
Lasso (statistics) : The loss function of the lasso is not differentiable, but a wide variety of techniques from convex analysis and optimization theory have been developed to compute the solution path of the lasso. These include coordinate descent, subgradient methods, least-angle regression (LARS), and proximal gradient methods. Subgradient methods are the natural generalization of traditional methods such as gradient descent and stochastic gradient descent to the case in which the objective function is not differentiable at all points. LARS is a method that is closely tied to lasso models, and in many cases allows them to be fit efficiently, though it may not perform well in all circumstances; LARS generates complete solution paths. Proximal methods have become popular because of their flexibility and performance and are an area of active research. The choice of method will depend on the particular lasso variant, the data and the available resources; however, proximal methods generally perform well. The "glmnet" package in R, where "glm" refers to "generalized linear models" and "net" to the "net" from "elastic net", provides an extremely efficient way to implement LASSO and some of its variants. The "celer" package in Python provides a highly efficient solver for the lasso problem, often outperforming traditional solvers like scikit-learn by up to 100 times in certain scenarios, particularly with high-dimensional datasets; it leverages dual extrapolation techniques to achieve its performance gains. The celer package is available at GitHub.
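As a concrete illustration of the proximal-gradient approach, a minimal NumPy sketch of ISTA for the lasso (the function names and the fixed iteration count are illustrative assumptions, not a production solver):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimize (1/(2n))*||y - X b||^2 + lam*||b||_1 by proximal gradient."""
    n, p = X.shape
    beta = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n                  # smooth-part gradient
        beta = soft_threshold(beta - grad / L, lam / L)  # proximal step
    return beta
```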
Lasso (statistics) : Choosing the regularization parameter (λ) is a fundamental part of lasso. A good value is essential to the performance of lasso since it controls the strength of shrinkage and variable selection, which, in moderation, can improve both prediction accuracy and interpretability. However, if the regularization becomes too strong, important variables may be omitted and coefficients may be shrunk excessively, which can harm both predictive capacity and inference. Cross-validation is often used to find the regularization parameter. Information criteria such as the Bayesian information criterion (BIC) and the Akaike information criterion (AIC) might be preferable to cross-validation, because they are faster to compute and their performance is less volatile in small samples. An information criterion selects the estimator's regularization parameter by maximizing a model's in-sample accuracy while penalizing its effective number of parameters/degrees of freedom. Zou et al. proposed to measure the effective degrees of freedom by counting the number of parameters that deviate from zero. The degrees-of-freedom approach was considered flawed by Kaufman and Rosset and by Janson et al., because a model's degrees of freedom might increase even when it is penalized harder by the regularization parameter. As an alternative, the relative simplicity measure defined above can be used to count the effective number of parameters. For the lasso, this measure is given by \hat{\mathcal{P}} = \frac{\sum_{i=1}^{p} |\beta_i - \beta_{0,i}|}{\frac{1}{p}\sum_{l} |b_{\text{OLS},l} - \beta_{0,l}|}, which monotonically increases from zero to p as the regularization parameter decreases from ∞ to zero.
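For example, scikit-learn exposes both selection strategies; a short sketch (the synthetic data and cv value are illustrative, and alpha is scikit-learn's name for the regularization parameter):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, LassoLarsIC

X, y = make_regression(n_samples=100, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)
# Cross-validation over a grid of regularization strengths
cv_model = LassoCV(cv=5).fit(X, y)
# Information-criterion-based selection (here BIC), no resampling needed
bic_model = LassoLarsIC(criterion="bic").fit(X, y)
print(cv_model.alpha_, bic_model.alpha_)
```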
Lasso (statistics) : LASSO has been applied in economics and finance, where it was found to improve prediction and to select sometimes-neglected variables, for example in the corporate bankruptcy prediction literature and in predicting high-growth firms.
Lasso (statistics) : Least absolute deviations Model selection Nonparametric regression Tikhonov regularization
Local outlier factor : In anomaly detection, the local outlier factor (LOF) is an algorithm proposed by Markus M. Breunig, Hans-Peter Kriegel, Raymond T. Ng and Jörg Sander in 2000 for finding anomalous data points by measuring the local deviation of a given data point with respect to its neighbours. LOF shares some concepts with DBSCAN and OPTICS such as the concepts of "core distance" and "reachability distance", which are used for local density estimation.
Local outlier factor : The local outlier factor is based on a concept of a local density, where locality is given by k nearest neighbors, whose distance is used to estimate the density. By comparing the local density of an object to the local densities of its neighbors, one can identify regions of similar density, and points that have a substantially lower density than their neighbors. These are considered to be outliers. The local density is estimated by the typical distance at which a point can be "reached" from its neighbors. The definition of "reachability distance" used in LOF is an additional measure to produce more stable results within clusters. The "reachability distance" used by LOF has some subtle details that are often found incorrect in secondary sources, e.g., in the textbook of Ethem Alpaydin.
Local outlier factor : Let k-distance(A) be the distance of the object A to its k-th nearest neighbor. Note that the set of the k nearest neighbors includes all objects at this distance, which in the case of a "tie" can be more than k objects. We denote the set of k nearest neighbors as N_k(A). This distance is used to define what is called the reachability distance: \text{reachability-distance}_k(A, B) = \max\{k\text{-distance}(B), d(A, B)\}. In words, the reachability distance of an object A from B is the true distance between the two objects, but at least the k-distance of B. Objects that belong to the k nearest neighbors of B (the "core" of B, see DBSCAN cluster analysis) are considered to be equally distant. The reason for this is to reduce the statistical fluctuations between all points A close to B, where increasing the value of k increases the smoothing effect. Note that this is not a distance in the mathematical definition, since it is not symmetric. (It is a common mistake to always use k-distance(A) instead, which yields a slightly different method referred to as Simplified-LOF.) The local reachability density of an object A is defined by \text{lrd}_k(A) := \frac{|N_k(A)|}{\sum_{B \in N_k(A)} \text{reachability-distance}_k(A, B)}, which is the inverse of the average reachability distance of the object A from its neighbors. Note that it is not the average reachability of the neighbors from A (which by definition would be the k-distance(A)), but the distance at which A can be "reached" from its neighbors. With duplicate points, this value can become infinite. The local reachability densities are then compared with those of the neighbors using \text{LOF}_k(A) := \frac{1}{|N_k(A)|} \sum_{B \in N_k(A)} \frac{\text{lrd}_k(B)}{\text{lrd}_k(A)} = \frac{1}{|N_k(A)| \cdot \text{lrd}_k(A)} \sum_{B \in N_k(A)} \text{lrd}_k(B), which is the average local reachability density of the neighbors divided by the object's own local reachability density. In summary: LOF_k(A) ≈ 1 means a density similar to the neighbors (and thus not an outlier); LOF_k(A) < 1 means a higher density than the neighbors (an inlier); LOF_k(A) > 1 means a lower density than the neighbors (an outlier).
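In practice one would typically use a library implementation, for instance scikit-learn's LocalOutlierFactor; a short usage sketch (the data set and n_neighbors value are illustrative):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),  # dense cluster
               [[8.0, 8.0]]])                    # an obvious outlier
lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)              # -1 for outliers, 1 for inliers
scores = -lof.negative_outlier_factor_   # LOF values; >> 1 means outlier
print(labels[-1], scores[-1])
```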
Local outlier factor : Due to the local approach, LOF is able to identify outliers in a data set that would not be outliers in another area of the data set. For example, a point at a "small" distance to a very dense cluster is an outlier, while a point within a sparse cluster might exhibit similar distances to its neighbors. While the geometric intuition of LOF is only applicable to low-dimensional vector spaces, the algorithm can be applied in any context in which a dissimilarity function can be defined. It has experimentally been shown to work very well in numerous setups, often outperforming the competitors, for example in network intrusion detection and on processed classification benchmark data. The LOF family of methods can be easily generalized and then applied to various other problems, such as detecting outliers in geographic data, video streams or authorship networks.
Local outlier factor : The resulting values are quotients and hard to interpret. A value of 1 or even less indicates a clear inlier, but there is no clear rule for when a point is an outlier. In one data set, a value of 1.1 may already be an outlier; in another dataset and parameterization (with strong local fluctuations) a value of 2 could still be an inlier. These differences can also occur within a dataset due to the locality of the method. There exist extensions of LOF that try to improve over LOF in these aspects: Feature Bagging for Outlier Detection runs LOF on multiple projections and combines the results for improved detection quality in high dimensions; this is the first ensemble learning approach to outlier detection (for other variants see ref.). Local Outlier Probability (LoOP) is a method derived from LOF that uses inexpensive local statistics to become less sensitive to the choice of the parameter k; in addition, the resulting values are scaled to the value range [0:1]. Interpreting and Unifying Outlier Scores proposes a normalization of the LOF outlier scores to the interval [0:1] using statistical scaling to increase usability, and can be seen as an improved version of the LoOP ideas. On Evaluation of Outlier Rankings and Outlier Scores proposes methods for measuring the similarity and diversity of methods for building advanced outlier detection ensembles using LOF variants and other algorithms, improving on the Feature Bagging approach discussed above. Local outlier detection reconsidered: a generalized view on locality with applications to spatial, video, and network outlier detection discusses the general pattern in various local outlier detection methods (including, e.g., LOF, a simplified version of LOF and LoOP), abstracts from this into a general framework, and applies the framework, e.g., to detecting outliers in geographic data, video streams and authorship networks.
Logic learning machine : Logic learning machine (LLM) is a machine learning method based on the generation of intelligible rules. LLM is an efficient implementation of the Switching Neural Network (SNN) paradigm, developed by Marco Muselli, Senior Researcher at the Italian National Research Council CNR-IEIIT in Genoa. LLM has been employed in many different sectors, including the field of medicine (orthopedic patient classification, DNA micro-array analysis and Clinical Decision Support Systems), financial services and supply chain management.
Logic learning machine : The Switching Neural Network approach was developed in the 1990s to overcome the drawbacks of the most commonly used machine learning methods. In particular, black box methods, such as multilayer perceptron and support vector machine, had good accuracy but could not provide deep insight into the studied phenomenon. On the other hand, decision trees were able to describe the phenomenon but often lacked accuracy. Switching Neural Networks made use of Boolean algebra to build sets of intelligible rules able to obtain very good performance. In 2014, an efficient version of Switching Neural Network was developed and implemented in the Rulex suite with the name Logic Learning Machine. Also, an LLM version devoted to regression problems was developed.
Logic learning machine : Like other machine learning methods, LLM uses data to build a model able to make good forecasts of future behavior. LLM starts from a table including a target variable (output) and some inputs, and generates a set of rules that return the output value y corresponding to a given configuration of inputs. A rule is written in the form if premise then consequence, where the consequence contains the output value whereas the premise includes one or more conditions on the inputs. According to the input type, conditions can have different forms: for categorical variables, the input value must lie in a given subset of its admissible values, x_1 \in S; for ordered variables, the condition is written as an inequality or an interval, x_2 \leq \alpha or \beta \leq x_3 \leq \gamma. A possible rule is therefore of the form: if x_1 \in S AND x_2 \leq \alpha AND \beta \leq x_3 \leq \gamma then y = \bar{y}.
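A hypothetical sketch of how such a rule might be represented and evaluated in Python; the rule contents, subset S and helper names are illustrative, not Rulex's actual API:

```python
# Hypothetical encoding of an LLM-style rule:
# if x1 in S AND x2 <= alpha AND beta <= x3 <= gamma then y = y_bar
rule = {
    "premise": [
        ("x1", "in", {"A", "B"}),       # categorical condition (subset S assumed)
        ("x2", "<=", 3.5),              # ordered condition, threshold alpha
        ("x3", "between", (1.0, 2.0)),  # interval condition, [beta, gamma]
    ],
    "consequence": ("y", "positive"),   # output value y_bar
}

def fires(rule, example):
    """Return True if every condition in the premise holds for the example."""
    for var, op, val in rule["premise"]:
        x = example[var]
        if op == "in" and x not in val:
            return False
        if op == "<=" and not x <= val:
            return False
        if op == "between" and not val[0] <= x <= val[1]:
            return False
    return True

print(fires(rule, {"x1": "A", "x2": 2.0, "x3": 1.5}))  # True
```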
Logic learning machine : According to the output type, different versions of the Logic Learning Machine have been developed: Logic Learning Machine for classification, when the output is a categorical variable, which can assume values in a finite set Logic Learning Machine for regression, when the output is an integer or real number.
Logic learning machine : Rulex Official Website
Loss functions for classification : In machine learning and mathematical optimization, loss functions for classification are computationally feasible loss functions representing the price paid for inaccuracy of predictions in classification problems (problems of identifying which category a particular observation belongs to). Given \mathcal{X} as the space of all possible inputs (usually \mathcal{X} \subset \mathbb{R}^d), and \mathcal{Y} = \{-1, 1\} as the set of labels (possible outputs), a typical goal of classification algorithms is to find a function f : \mathcal{X} \to \mathcal{Y} which best predicts a label y for a given input \vec{x}. However, because of incomplete information, noise in the measurement, or probabilistic components in the underlying process, it is possible for the same \vec{x} to generate different y. As a result, the goal of the learning problem is to minimize expected loss (also known as the risk), defined as I[f] = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y) \, p(\vec{x}, y) \, d\vec{x} \, dy, where V(f(\vec{x}), y) is a given loss function, and p(\vec{x}, y) is the probability density function of the process that generated the data, which can equivalently be written as p(\vec{x}, y) = p(y \mid \vec{x}) \, p(\vec{x}). Within classification, several commonly used loss functions are written solely in terms of the product of the true label y and the predicted label f(\vec{x}). Therefore, they can be defined as functions of only one variable \upsilon = y f(\vec{x}), so that V(f(\vec{x}), y) = \phi(y f(\vec{x})) = \phi(\upsilon) with a suitably chosen function \phi : \mathbb{R} \to \mathbb{R}. These are called margin-based loss functions. Choosing a margin-based loss function amounts to choosing \phi. Selection of a loss function within this framework impacts the optimal f^*_\phi which minimizes the expected risk; see empirical risk minimization. In the case of binary classification, it is possible to simplify the calculation of the expected risk from the integral specified above. Specifically, I[f] = \int_{\mathcal{X} \times \mathcal{Y}} V(f(\vec{x}), y) \, p(\vec{x}, y) \, d\vec{x} \, dy = \int_{\mathcal{X}} \int_{\mathcal{Y}} \phi(y f(\vec{x})) \, p(y \mid \vec{x}) \, p(\vec{x}) \, dy \, d\vec{x} = \int_{\mathcal{X}} [\phi(f(\vec{x})) \, p(1 \mid \vec{x}) + \phi(-f(\vec{x})) \, p(-1 \mid \vec{x})] \, p(\vec{x}) \, d\vec{x} = \int_{\mathcal{X}} [\phi(f(\vec{x})) \, p(1 \mid \vec{x}) + \phi(-f(\vec{x})) \, (1 - p(1 \mid \vec{x}))] \, p(\vec{x}) \, d\vec{x}. The second equality follows from the properties described above. The third equality follows from the fact that 1 and −1 are the only possible values for y, and the fourth because p(-1 \mid \vec{x}) = 1 - p(1 \mid \vec{x}). The term within brackets, \phi(f(\vec{x})) \, p(1 \mid \vec{x}) + \phi(-f(\vec{x})) \, (1 - p(1 \mid \vec{x})), is known as the conditional risk. One can solve for the minimizer of I[f] by taking the functional derivative of the last equality with respect to f and setting the derivative equal to 0. This results in the following equation: \frac{\partial \phi(f)}{\partial f} \eta + \frac{\partial \phi(-f)}{\partial f} (1 - \eta) = 0, \quad (1) where \eta = p(y = 1 \mid \vec{x}), which is also equivalent to setting the derivative of the conditional risk equal to zero.
Given the binary nature of classification, a natural selection for a loss function (assuming equal cost for false positives and false negatives) would be the 0-1 loss function (0–1 indicator function), which takes the value 0 if the predicted classification equals that of the true class, and the value 1 if it does not. This selection is modeled by V(f(\vec{x}), y) = H(-y f(\vec{x})), where H indicates the Heaviside step function. However, this loss function is non-convex and non-smooth, and solving for the optimal solution is an NP-hard combinatorial optimization problem. As a result, it is better to substitute surrogate loss functions which are tractable for commonly used learning algorithms, as they have convenient properties such as being convex and smooth. In addition to their computational tractability, one can show that the solutions to the learning problem using these loss surrogates allow for the recovery of the actual solution to the original classification problem. Some of these surrogates are described below. In practice, the probability distribution p(\vec{x}, y) is unknown. Consequently, utilizing a training set of n independently and identically distributed sample points S = \{(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n)\} drawn from the data sample space, one seeks to minimize the empirical risk I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(\vec{x}_i), y_i) as a proxy for the expected risk. (See statistical learning theory for a more detailed description.)
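To illustrate, a small NumPy sketch comparing the 0-1 loss with two common convex surrogates as functions of the margin \upsilon = y f(\vec{x}); the hinge and logistic losses are standard examples rather than an exhaustive list, and the convention H(0) = 1 below is an assumption:

```python
import numpy as np

def zero_one_loss(margin):
    """0-1 loss: H(-v), i.e. 1 when the margin is non-positive."""
    return (margin <= 0).astype(float)

def hinge_loss(margin):
    """Hinge loss: max(0, 1 - v), a convex upper bound on the 0-1 loss."""
    return np.maximum(0.0, 1.0 - margin)

def logistic_loss(margin):
    """Logistic loss: log2(1 + exp(-v)), a smooth convex surrogate."""
    return np.log2(1.0 + np.exp(-margin))

v = np.linspace(-2, 2, 5)
print(zero_one_loss(v), hinge_loss(v), logistic_loss(v))
```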
Loss functions for classification : Utilizing Bayes' theorem, it can be shown that the optimal f^*_{0/1}, i.e., the one that minimizes the expected risk associated with the zero-one loss, implements the Bayes optimal decision rule for a binary classification problem and is of the form f^*_{0/1}(\vec{x}) = \begin{cases} 1 & \text{if } p(1 \mid \vec{x}) > p(-1 \mid \vec{x}) \\ 0 & \text{if } p(1 \mid \vec{x}) = p(-1 \mid \vec{x}) \\ -1 & \text{if } p(1 \mid \vec{x}) < p(-1 \mid \vec{x}). \end{cases} A loss function is said to be classification-calibrated or Bayes consistent if its optimal f^*_\phi is such that f^*_{0/1}(\vec{x}) = \operatorname{sgn}(f^*_\phi(\vec{x})) and is thus optimal under the Bayes decision rule. A Bayes consistent loss function allows us to find the Bayes optimal decision function f^*_\phi by directly minimizing the expected risk and without having to explicitly model the probability density functions. For a convex margin loss \phi(\upsilon), it can be shown that \phi(\upsilon) is Bayes consistent if and only if it is differentiable at 0 and \phi'(0) < 0. Yet, this result does not exclude the existence of non-convex Bayes consistent loss functions. A more general result states that Bayes consistent loss functions can be generated using the following formulation: \phi(v) = C[f^{-1}(v)] + (1 - f^{-1}(v)) \, C'[f^{-1}(v)], \quad (2) where f(\eta), 0 \leq \eta \leq 1, is any invertible function such that f^{-1}(-v) = 1 - f^{-1}(v), and C(\eta) is any differentiable strictly concave function such that C(\eta) = C(1 - \eta). Table-I shows the generated Bayes consistent loss functions for some example choices of C(\eta) and f^{-1}(v). Note that the Savage and Tangent losses are not convex. Such non-convex loss functions have been shown to be useful in dealing with outliers in classification. For all loss functions generated from (2), the posterior probability p(y = 1 \mid \vec{x}) can be found using the invertible link function as p(y = 1 \mid \vec{x}) = \eta = f^{-1}(v). Such loss functions, where the posterior probability can be recovered using the invertible link, are called proper loss functions. The sole minimizer of the expected risk, f^*_\phi, associated with the above generated loss functions can be found directly from equation (1) and shown to equal the corresponding f(\eta). This holds even for the non-convex loss functions, which means that gradient-descent-based algorithms such as gradient boosting can be used to construct the minimizer.