Differential evolution : A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement then it is accepted and forms part of the population, otherwise the new position is simply discarded. The process is repeated and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered. Formally, let f : ℝ^n → ℝ be the fitness function which must be minimized (note that maximization can be performed by considering the function h := −f instead). The function takes a candidate solution as argument in the form of a vector of real numbers. It produces a real number as output which indicates the fitness of the given candidate solution. The gradient of f is not known. The goal is to find a solution m for which f(m) ≤ f(p) for all p in the search-space, which means that m is the global minimum. Let x ∈ ℝ^n designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows: Choose the parameters NP ≥ 4, CR ∈ [0, 1], and F ∈ [0, 2]. NP is the population size, i.e. the number of candidate agents or "parents". The parameter CR ∈ [0, 1] is called the crossover probability, and the parameter F ∈ [0, 2] is called the differential weight. Typical settings are NP = 10n, CR = 0.9 and F = 0.8. Optimization performance may be greatly impacted by these choices; see below. Initialize all agents x with random positions in the search-space. Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following: For each agent x in the population do: Pick three agents a, b, and c from the population at random; they must be distinct from each other as well as from agent x. (a is called the "base" vector.) Pick a random index R ∈ {1, …, n}, where n is the dimensionality of the problem being optimized. Compute the agent's potentially new position y = [y_1, …, y_n] as follows: For each i ∈ {1, …, n}, pick a uniformly distributed random number r_i ∼ U(0, 1). If r_i < CR or i = R, then set y_i = a_i + F × (b_i − c_i); otherwise set y_i = x_i. (Index position R is replaced for certain.) If f(y) ≤ f(x), then replace the agent x in the population with the improved or equal candidate solution y. Pick the agent from the population that has the best fitness and return it as the best found candidate solution.
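The loop above translates almost directly into code. The following is a minimal sketch of the classic DE/rand/1/bin scheme described in this section; the function name, the sphere test function and the default parameter values are illustrative choices, not part of the original description.

```python
import numpy as np

def differential_evolution(f, bounds, NP=40, F=0.8, CR=0.9, max_iter=1000, seed=0):
    """Minimal sketch of the basic DE loop described above."""
    rng = np.random.default_rng(seed)
    n = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    # Initialize all agents with random positions in the search space.
    pop = lo + rng.random((NP, n)) * (hi - lo)
    fitness = np.array([f(x) for x in pop])
    for _ in range(max_iter):
        for i in range(NP):
            # Pick three distinct agents a, b, c, all different from agent i.
            idx = [j for j in range(NP) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            R = rng.integers(n)  # index that is replaced for certain
            y = pop[i].copy()
            for d in range(n):
                if rng.random() < CR or d == R:
                    y[d] = a[d] + F * (b[d] - c[d])
            fy = f(y)
            # Greedy selection: keep the trial vector if it is no worse.
            if fy <= fitness[i]:
                pop[i], fitness[i] = y, fy
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

# Example: minimize the 5-dimensional sphere function.
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere, bounds=[(-5, 5)] * 5)
```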
Differential evolution : The choice of DE parameters NP , CR and F can have a large impact on optimization performance. Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al. and Liu and Lampinen. Mathematical convergence analysis regarding parameter selection was done by Zaharie.
Differential evolution : Differential evolution can be utilized for constrained optimization as well. A common method involves modifying the target function to include a penalty for any violation of constraints, expressed as f̃(x) = f(x) + ρ × CV(x). Here, CV(x) represents either a constraint violation (an L1 penalty) or the square of a constraint violation (an L2 penalty). This method, however, has certain drawbacks. One significant challenge is the appropriate selection of the penalty coefficient ρ. If ρ is set too low, it may not effectively enforce constraints. Conversely, if it's too high, it can greatly slow down or even halt the convergence process. Despite these challenges, this approach remains widely used due to its simplicity and because it doesn't require altering the differential evolution algorithm itself. There are alternative strategies, such as projecting onto a feasible set or reducing dimensionality, which can be used for box-constrained or linearly constrained cases. However, in the context of general nonlinear constraints, the most reliable methods typically involve penalty functions.
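As a concrete illustration, the penalized objective f̃ can be built as a thin wrapper around the unmodified target function and then handed to any DE implementation unchanged. The sketch below assumes an L1 penalty, constraints written in the g(x) ≤ 0 form, and an illustrative value of ρ.

```python
import numpy as np

def penalized(f, constraints, rho=1e3):
    """Wrap f with an L1 penalty: f~(x) = f(x) + rho * CV(x),
    where CV(x) sums the violations of constraints g_j(x) <= 0."""
    def f_tilde(x):
        cv = sum(max(0.0, g(x)) for g in constraints)  # total constraint violation
        return f(x) + rho * cv
    return f_tilde

# Example: minimize the sphere function subject to x1 + x2 >= 1,
# rewritten as g(x) = 1 - (x1 + x2) <= 0.
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
g = lambda x: 1.0 - (x[0] + x[1])
f_constrained = penalized(sphere, [g], rho=1e3)
# f_constrained can now be passed to any unmodified DE minimizer.
```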
Differential evolution : Variants of the DE algorithm are continually being developed in an effort to improve optimization performance. The following directions of development can be outlined: New schemes for performing crossover and mutation of agents Various strategies for handling constraints Adaptive strategies that dynamically adjust population size, F and CR parameters Specialized algorithms for large-scale optimization Multi-objective and many-objective algorithms Techniques for handling binary/integer variables
Differential evolution : Artificial bee colony algorithm CMA-ES Evolution strategy Genetic algorithm == References ==
Dispersive flies optimisation : Dispersive flies optimisation (DFO) is a bare-bones swarm intelligence algorithm which is inspired by the swarming behaviour of flies hovering over food sources. DFO is a simple optimiser which works by iteratively trying to improve a candidate solution with regard to a numerical measure that is calculated by a fitness function. Each member of the population, a fly or an agent, holds a candidate solution whose suitability can be evaluated by their fitness value. Optimisation problems are often formulated as either minimisation or maximisation problems. DFO was introduced with the intention of analysing a simplified swarm intelligence algorithm with the fewest tunable parameters and components. In the first work on DFO, this algorithm was compared against a few other existing swarm intelligence techniques using error, efficiency and diversity measures. It is shown that despite the simplicity of the algorithm, which only uses agents’ position vectors at time t to generate the position vectors for time t + 1, it exhibits a competitive performance. Since its inception, DFO has been used in a variety of applications including medical imaging and image analysis as well as data mining and machine learning.
Dispersive flies optimisation : DFO bears many similarities with other existing continuous, population-based optimisers (e.g. particle swarm optimization and differential evolution), in that the swarming behaviour of the individuals consists of two tightly connected mechanisms: one is the formation of the swarm and the other is its breaking or weakening. DFO works by facilitating the information exchange between the members of the population (the swarming flies). Each fly x represents a position in a d-dimensional search space, x = (x_1, x_2, …, x_d), and the fitness of each fly is calculated by the fitness function f(x), which takes into account the fly's d dimensions: f(x) = f(x_1, x_2, …, x_d). The pseudocode below represents one iteration of the algorithm:

for i = 1 : N flies
    x_i.fitness = f(x_i)
end for i
x_s = argmin [ f(x_i) ], i ∈ {1, …, N}
for i = 1 : N and i ≠ s
    for d = 1 : D dimensions
        if U(0, 1) < Δ
            x_id^(t+1) = U(x_min,d , x_max,d)
        else
            x_id^(t+1) = x_(i_n)d^t + U(0, 1) (x_sd^t − x_id^t)
        end if
    end for d
end for i

In the algorithm above, x_id^(t+1) represents fly i at dimension d and time t + 1; x_(i_n)d^t represents x_i's best neighbouring fly in the ring topology (left or right, using the flies' indexes), at dimension d and time t; and x_sd^t is the swarm's best fly. Using this update equation, the swarm's population update depends on each fly's best neighbour (which is used as the focus μ), and the difference between the current fly and the best in the swarm represents the spread of movement, σ. Other than the population size N, the only tunable parameter is the disturbance threshold Δ, which controls the dimension-wise restart in each fly vector. This mechanism is proposed to control the diversity of the swarm. Other notable minimalist swarm algorithms are bare bones particle swarms (BB-PSO), which is based on particle swarm optimisation, and bare bones differential evolution (BBDE), which is a hybrid of the bare bones particle swarm optimiser and differential evolution, aiming to reduce the number of parameters. Alhakbani in her PhD thesis covers many aspects of the algorithm, including several DFO applications in feature selection as well as parameter tuning.
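The update rule is compact enough to write out directly. The sketch below implements one DFO iteration under the assumptions stated in the pseudocode (ring neighbourhood over fly indexes, dimension-wise restart with probability Δ); the function name and the default value of Δ are illustrative.

```python
import numpy as np

def dfo_step(pop, f, lower, upper, delta=0.001, rng=None):
    """One DFO iteration for a population `pop` of shape (N, D)."""
    if rng is None:
        rng = np.random.default_rng()
    N, D = pop.shape
    fitness = np.array([f(x) for x in pop])
    s = int(np.argmin(fitness))                # swarm's best fly
    new_pop = pop.copy()
    for i in range(N):
        if i == s:
            continue                            # best fly is left unchanged
        left, right = (i - 1) % N, (i + 1) % N
        n = left if fitness[left] < fitness[right] else right  # best ring neighbour
        for d in range(D):
            if rng.random() < delta:
                # dimension-wise restart (disturbance)
                new_pop[i, d] = rng.uniform(lower[d], upper[d])
            else:
                new_pop[i, d] = pop[n, d] + rng.random() * (pop[s, d] - pop[i, d])
    return new_pop
```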
Dispersive flies optimisation : Some of the recent applications of DFO are listed below: Optimising support vector machine kernel to classify imbalanced data Quantifying symmetrical complexity in computational aesthetics Analysing computational autopoiesis and computational creativity Identifying calcifications in medical images Building non-identical organic structures for game's space development Deep Neuroevolution: Training Deep Neural Networks for False Alarm Detection in Intensive Care Units Identification of animation key points from 2D-medialness maps == References ==
Effective fitness : In natural evolution and artificial evolution (e.g. artificial life and evolutionary computation) the fitness (or performance or objective measure) of a schema is rescaled to give its effective fitness, which takes into account crossover and mutation. Effective fitness is used in evolutionary computation to understand population dynamics. While a biological fitness function only looks at reproductive success, an effective fitness function tries to encompass what must be fulfilled for survival at the population level. In homogeneous populations, reproductive fitness and effective fitness are equal. When a population moves away from homogeneity, a higher effective fitness is reached for the recessive genotype. This advantage will decrease while the population moves toward an equilibrium. The deviation from this equilibrium displays how close the population is to achieving a steady state. When this equilibrium is reached, the maximum effective fitness of the population is achieved. Problem solving with evolutionary computation is realized with a cost function. If cost functions are applied to swarm optimization they are called a fitness function. Strategies like reinforcement learning and NEAT neuroevolution create a fitness landscape which describes the reproductive success of cellular automata. The effective fitness function models the number of fit offspring and is used in calculations that include evolutionary processes, such as mutation and crossover, that are important at the population level. The effective fitness model is superior to its predecessor, the standard reproductive fitness model. It advances the qualitative and quantitative understanding of evolutionary concepts like bloat, self-adaptation, and evolutionary robustness. While reproductive fitness only looks at pure selection, effective fitness describes the flow of a population and natural selection by taking genetic operators into account. A normal fitness function is fitted to a problem, while an effective fitness function is an assumption about whether the objective was reached. The difference is important for designing fitness functions with algorithms like novelty search, in which the objective of the agents is unknown. In the case of bacteria, effective fitness could include production of toxins and the rate of mutation of different plasmids, which are mostly stochastically determined.
Effective fitness : When evolutionary equations of the studied population dynamics are available, one can algorithmically compute the effective fitness of a given population. Though the perfect effective fitness model is yet to be found, it is already known to be a good framework for better understanding the movement of the genotype-phenotype map, population dynamics, and the flow on fitness landscapes. Models using a combination of Darwinian fitness functions and effective fitness functions are better at predicting population trends. Effective models could be used to determine therapeutic outcomes of disease treatment. Other models could determine effective protein engineering and work towards finding novel or heightened biochemistry.
Effective fitness : Foundations of Genetic Programming
Evolutionary programming : Evolutionary programming is an evolutionary algorithm in which a share of the new population is created by mutation of the previous population, without crossover. Evolutionary programming differs from the evolution strategy ES(μ + λ) in one detail: all individuals are selected for the new population, while in ES(μ + λ), every individual has the same probability of being selected. It is one of the four major evolutionary algorithm paradigms.
Evolutionary programming : It was first used by Lawrence J. Fogel in the US in 1960 in order to use simulated evolution as a learning process aiming to generate artificial intelligence. It was used to evolve finite-state machines as predictors.
Evolutionary programming : Artificial intelligence Genetic algorithm Genetic operator
Evolutionary programming : The Hitch-Hiker's Guide to Evolutionary Computation: What's Evolutionary Programming (EP)? Evolutionary Programming by Jason Brownlee (PhD) Archived 2013-01-18 at the Wayback Machine
Fitness approximation : Fitness approximation aims to approximate the objective or fitness functions in evolutionary optimization by building up machine learning models based on data collected from numerical simulations or physical experiments. The machine learning models for fitness approximation are also known as meta-models or surrogates, and evolutionary optimization based on approximated fitness evaluations is also known as surrogate-assisted evolutionary approximation. Fitness approximation in evolutionary optimization can be seen as a sub-area of data-driven evolutionary optimization.
Fitness approximation : A complete list of references on Fitness Approximation in Evolutionary Computation, by Yaochu Jin. The cyber shack of Adaptive Fuzzy Fitness Granulation (AFFG) Archived 2021-12-06 at the Wayback Machine That is designed to accelerate the convergence rate of EAs. Inverse reinforcement learning Reinforcement learning from human feedback == References ==
Fitness function : A fitness function is a particular type of objective or cost function that is used to summarize, as a single figure of merit, how close a given candidate solution is to achieving the set aims. It is an important component of evolutionary algorithms (EA), such as genetic programming, evolution strategies or genetic algorithms. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. For this purpose, many candidate solutions are generated, which are evaluated using a fitness function in order to guide the evolutionary development towards the desired goal. Similar quality functions are also used in other metaheuristics, such as ant colony optimization or particle swarm optimization. In the field of EAs, each candidate solution, also called an individual, is commonly represented as a string of numbers (referred to as a chromosome). After each round of testing or simulation the idea is to delete the n worst individuals, and to breed n new ones from the best solutions. Each individual must therefore be assigned a quality number indicating how close it has come to the overall specification, and this is generated by applying the fitness function to the test or simulation results obtained from that candidate solution. Two main classes of fitness functions exist: one where the fitness function does not change, as in optimizing a fixed function or testing with a fixed set of test cases; and one where the fitness function is mutable, as in niche differentiation or co-evolving the set of test cases. Another way of looking at fitness functions is in terms of a fitness landscape, which shows the fitness for each possible chromosome. In the following, it is assumed that the fitness is determined based on an evaluation that remains unchanged during an optimization run. A fitness function does not necessarily have to be able to calculate an absolute value, as it is sometimes sufficient to compare candidates in order to select the better one. A relative indication of fitness (candidate a is better than b) is sufficient in some cases, such as tournament selection or Pareto optimization.
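As a toy illustration of both points (absolute fitness values and purely relative comparison), consider scoring candidates against a fixed target bit string; the target, the encoding and the function name are illustrative assumptions, not taken from the text.

```python
def fitness(chromosome, target):
    """Toy fitness function: number of positions at which the candidate
    matches a fixed target bit string (higher is fitter)."""
    return sum(1 for c, t in zip(chromosome, target) if c == t)

# A relative comparison is often all that is needed, e.g. for tournament selection:
better = max([[1, 0, 1, 1], [0, 0, 1, 0]], key=lambda c: fitness(c, [1, 1, 1, 1]))
```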
Fitness function : The quality of the evaluation and calculation of a fitness function is fundamental to the success of an EA optimisation. It implements Darwin's principle of "survival of the fittest". Without fitness-based selection mechanisms for mate selection and offspring acceptance, EA search would be blind and hardly distinguishable from the Monte Carlo method. When setting up a fitness function, one must always be aware that it is about more than just describing the desired target state. Rather, the evolutionary search on the way to the optimum should also be supported as much as possible (see also the section on auxiliary objectives), if and insofar as this is not already done by the fitness function alone. If the fitness function is designed badly, the algorithm will either converge on an inappropriate solution, or will have difficulty converging at all. Definition of the fitness function is not straightforward in many cases and often is performed iteratively if the fittest solutions produced by an EA are not what is desired. Interactive genetic algorithms address this difficulty by outsourcing evaluation to external agents, which are normally humans.
Fitness function : The fitness function should not only closely align with the designer's goal, but also be computationally efficient. Execution speed is crucial, as a typical evolutionary algorithm must be iterated many times in order to produce a usable result for a non-trivial problem. Fitness approximation may be appropriate, especially in the following cases: the fitness computation time of a single solution is extremely high; a precise model for fitness computation is missing; or the fitness function is uncertain or noisy. Alternatively, or in addition to fitness approximation, the fitness calculations can also be distributed to a parallel computer in order to reduce the execution times. Depending on the population model of the EA used, both the EA itself and the fitness calculations of all offspring of one generation can be executed in parallel.
Fitness function : Practical applications usually aim at optimizing multiple and at least partially conflicting objectives. Two fundamentally different approaches are often used for this purpose: Pareto optimization and optimization based on fitness calculated using the weighted sum.
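A weighted-sum fitness can be sketched in a few lines; the two objectives, their normalization, and the weights below are illustrative assumptions.

```python
def weighted_sum_fitness(objectives, weights):
    """Collapse several (normalized) objective values into one scalar fitness
    using the weighted sum; the weights encode the designer's trade-off."""
    return sum(w * o for w, o in zip(weights, objectives))

# Example with two hypothetical, already-normalized objectives
# (e.g. cost and tardiness, both to be minimized):
fitness_value = weighted_sum_fitness([0.3, 0.7], weights=[0.6, 0.4])
```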
Fitness function : In addition to the primary objectives resulting from the task itself, it may be necessary to include auxiliary objectives in the assessment to support the achievement of one or more primary objectives. An example of a scheduling task is used for illustration purposes. The optimization goals include not only a general fast processing of all orders but also the compliance with a latest completion time. The latter is especially necessary for the scheduling of rush orders. The second goal is not achieved by the exemplary initial schedule, as shown in the adjacent figure. A following mutation does not change this, but schedules the work step d earlier, which is a necessary intermediate step for an earlier start of the last work step e of the order. As long as only the latest completion time is evaluated, however, the fitness of the mutated schedule remains unchanged, even though it represents a relevant step towards the objective of a timely completion of the order. This can be remedied, for example, by an additional evaluation of the delay of work steps. The new objective is an auxiliary one, since it was introduced in addition to the actual optimization objectives to support their achievement. A more detailed description of this approach and another example can be found in.
Fitness function : Evolutionary computation Inferential programming Test functions for optimization Loss function
Fitness function : A Nice Introduction to Adaptive Fuzzy Fitness Granulation (AFFG) (PDF), A promising approach to accelerate the convergence rate of EAs. The cyber shack of Adaptive Fuzzy Fitness Granulation (AFFG) That is designed to accelerate the convergence rate of EAs. Fitness functions in evolutionary robotics: A survey and analysis (AFFG) (PDF), A review of fitness functions used in evolutionary robotics. Ford, Neal; Richards, Mark; Sadalage, Pramod; Dehghani, Zhamak (2021). Software Architecture: The Hard Parts. O'Reilly Media, Inc. ISBN 9781492086895. == References ==
Gaussian adaptation : Gaussian adaptation (GA), also called normal or natural adaptation (NA), is an evolutionary algorithm designed for the maximization of manufacturing yield due to statistical deviation of component values of signal processing systems. In short, GA is a stochastic adaptive process where a number of samples of an n-dimensional vector x [xT = (x1, x2, ..., xn)] are taken from a multivariate Gaussian distribution, N(m, M), having mean m and moment matrix M. The samples are tested for fail or pass. The first- and second-order moments of the Gaussian restricted to the pass samples are m* and M*. The outcome of x as a pass sample is determined by a function s(x), 0 < s(x) < q ≤ 1, such that s(x) is the probability that x will be selected as a pass sample. The average probability of finding pass samples (yield) is P(m) = ∫ s(x) N(x − m) dx. Then the theorem of GA states: For any s(x) and for any value of P < q, there always exists a Gaussian p.d.f. [probability density function] that is adapted for maximum dispersion. The necessary conditions for a local optimum are m = m* and M proportional to M*. The dual problem is also solved: P is maximized while keeping the dispersion constant (Kjellström, 1991). Proofs of the theorem may be found in the papers by Kjellström, 1970, and Kjellström & Taxén, 1981. Since dispersion is defined as the exponential of entropy/disorder/average information, it immediately follows that the theorem is valid also for those concepts. Altogether, this means that Gaussian adaptation may carry out a simultaneous maximisation of yield and average information (without any need for the yield or the average information to be defined as criterion functions). The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be used by cyclic repetition of random variation and selection (like the natural evolution). In every cycle a sufficiently large number of Gaussian distributed points are sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian, m, is then moved to the centre of gravity of the approved (selected) points, m*. Thus, the process converges to a state of equilibrium fulfilling the theorem. A solution is always approximate because the centre of gravity is always determined for a limited number of points. It was used for the first time in 1969 as a pure optimization algorithm making the regions of acceptability smaller and smaller (in analogy to simulated annealing, Kirkpatrick 1983). Since 1970 it has been used for both ordinary optimization and yield maximization.
Gaussian adaptation : It has also been compared to the natural evolution of populations of living organisms. In this case s(x) is the probability that the individual having an array x of phenotypes will survive by giving offspring to the next generation; a definition of individual fitness given by Hartl 1981. The yield, P, is replaced by the mean fitness determined as a mean over the set of individuals in a large population. Phenotypes are often Gaussian distributed in a large population and a necessary condition for the natural evolution to be able to fulfill the theorem of Gaussian adaptation, with respect to all Gaussian quantitative characters, is that it may push the centre of gravity of the Gaussian to the centre of gravity of the selected individuals. This may be accomplished by the Hardy–Weinberg law. This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability independent of the structure (Kjellström, 1996). In this case the rules of genetic variation such as crossover, inversion, transposition etcetera may be seen as random number generators for the phenotypes. So, in this sense Gaussian adaptation may be seen as a genetic algorithm.
Gaussian adaptation : Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape is known. The real landscape is not known, but figure below shows a fictitious profile (blue) of a landscape along a line (x) in a room spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of figure. It is obtained by letting the bell curve slide along the x-axis, calculating the mean at every location. As can be seen, small peaks and pits are smoothed out. Thus, if evolution is started at A with a relatively small variance (the red bell curve), then climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain, and the mutation rate is too small. If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. Then the climbing will take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides — dependent on the landscape — the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may as well look like a punctuated equilibrium as suggested by Gould (see Ridley).
Gaussian adaptation : Thus far the theory only considers mean values of continuous distributions corresponding to an infinite number of individuals. In reality however, the number of individuals is always limited, which gives rise to an uncertainty in the estimation of m and M (the moment matrix of the Gaussian). And this may also affect the efficiency of the process. Unfortunately very little is known about this, at least theoretically. The implementation of normal adaptation on a computer is a fairly simple task. The adaptation of m may be done by one sample (individual) at a time, for example m(i + 1) = (1 – a) m(i) + ax where x is a pass sample, and a < 1 a suitable constant so that the inverse of a represents the number of individuals in the population. M may in principle be updated after every step y leading to a feasible point x = m + y according to: M(i + 1) = (1 – 2b) M(i) + 2byyT, where yT is the transpose of y and b << 1 is another suitable constant. In order to guarantee a suitable increase of average information, y should be normally distributed with moment matrix μ2M, where the scalar μ > 1 is used to increase average information (information entropy, disorder, diversity) at a suitable rate. But M will never be used in the calculations. Instead we use the matrix W defined by WWT = M. Thus, we have y = Wg, where g is normally distributed with the moment matrix μU, and U is the unit matrix. W and WT may be updated by the formulas W = (1 – b)W + bygT and WT = (1 – b)WT + bgyT because multiplication gives M = (1 – 2b)M + 2byyT, where terms including b2 have been neglected. Thus, M will be indirectly adapted with good approximation. In practice it will suffice to update W only W(i + 1) = (1 – b)W(i) + bygT. This is the formula used in a simple 2-dimensional model of a brain satisfying the Hebbian rule of associative learning; see the next section (Kjellström, 1996 and 1999). The figure below illustrates the effect of increased average information in a Gaussian p.d.f. used to climb a mountain Crest (the two lines represent the contour line). Both the red and green cluster have equal mean fitness, about 65%, but the green cluster has a much higher average information making the green process much more efficient. The effect of this adaptation is not very salient in a 2-dimensional case, but in a high-dimensional case, the efficiency of the search process may be increased by many orders of magnitude.
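A small numerical sketch of the update rules above (the running mean m and the factor W with M = W Wᵀ) might look as follows; the acceptance region, the constants a, b and μ, and the starting point are illustrative choices, not values prescribed by the text.

```python
import numpy as np

def gaussian_adaptation(accept, m0, n_steps=10000, a=0.05, b=0.01, mu=1.06, seed=0):
    """Sketch of the updates above: m tracks the mean of accepted (pass) samples,
    and W (with M = W W^T) is expanded after every accepted step to push average
    information upwards. `accept(x)` returns True for pass samples."""
    rng = np.random.default_rng(seed)
    n = len(m0)
    m = np.array(m0, dtype=float)
    W = np.eye(n)
    for _ in range(n_steps):
        g = rng.normal(size=n) * np.sqrt(mu)      # g has moment matrix mu * U
        y = W @ g                                  # proposal step, y = W g
        x = m + y
        if accept(x):                              # test membership in the region of acceptability
            m = (1 - a) * m + a * x                # m(i+1) = (1 - a) m(i) + a x
            W = (1 - b) * W + b * np.outer(y, g)   # W(i+1) = (1 - b) W(i) + b y g^T
    return m, W

# Example: adapt to a spherical region of acceptability of radius 1 around the origin.
m, W = gaussian_adaptation(lambda x: np.linalg.norm(x) < 1.0, m0=[1.5, 1.5])
```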
Gaussian adaptation : In the brain the evolution of DNA-messages is supposed to be replaced by an evolution of signal patterns and the phenotypic landscape is replaced by a mental landscape, the complexity of which will hardly be second to the former. The metaphor with the mental landscape is based on the assumption that certain signal patterns give rise to a better well-being or performance. For instance, the control of a group of muscles leads to a better pronunciation of a word or performance of a piece of music. In this simple model it is assumed that the brain consists of interconnected components that may add, multiply and delay signal values. A nerve cell kernel may add signal values, a synapse may multiply with a constant and an axon may delay values. This is a basis of the theory of digital filters and neural networks consisting of components that may add, multiply and delay signal values, and also of many brain models, Levine 1991. In the figure below the brain stem is supposed to deliver Gaussian distributed signal patterns. This may be possible since certain neurons fire at random (Kandel et al.). The stem also constitutes a disordered structure surrounded by more ordered shells (Bergström, 1969), and according to the central limit theorem the sum of signals from many neurons may be Gaussian distributed. The triangular boxes represent synapses and the boxes with the + sign are cell kernels. In the cortex signals are supposed to be tested for feasibility. When a signal is accepted the contact areas in the synapses are updated according to the formulas below in agreement with the Hebbian theory. The figure shows a 2-dimensional computer simulation of Gaussian adaptation according to the last formula in the preceding section. m and W are updated according to: m1 = 0.9 m1 + 0.1 x1; m2 = 0.9 m2 + 0.1 x2; w11 = 0.9 w11 + 0.1 y1g1; w12 = 0.9 w12 + 0.1 y1g2; w21 = 0.9 w21 + 0.1 y2g1; w22 = 0.9 w22 + 0.1 y2g2; As can be seen, this is very much like a small brain ruled by the theory of Hebbian learning (Kjellström, 1996, 1999 and 2002).
Gaussian adaptation : Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution. Such a random process gives us much freedom of choice, but hardly any will. An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal seeking. That is, it prefers higher peaks in the landscape over lower ones, and better alternatives over worse ones. In this way an illusory will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999.
Gaussian adaptation : The efficiency of Gaussian adaptation relies on the theory of information due to Claude E. Shannon (see information content). When an event occurs with probability P, then the information −log(P) may be achieved. For instance, if the mean fitness is P, the information gained for each individual selected for survival will be −log(P) on average, and the work/time needed to get the information is proportional to 1/P. Thus, if efficiency, E, is defined as information divided by the work/time needed to get it, we have: E = −P log(P). This function attains its maximum when P = 1/e = 0.37. The same result has been obtained by Gaines with a different method. E = 0 if P = 0, for a process with infinite mutation rate, and if P = 1, for a process with mutation rate = 0 (provided that the process is alive). This measure of efficiency is valid for a large class of random search processes provided that certain conditions are at hand: (1) the search should be statistically independent and equally efficient in different parameter directions; this condition may be approximately fulfilled when the moment matrix of the Gaussian has been adapted for maximum average information to some region of acceptability, because linear transformations of the whole process do not affect efficiency; (2) all individuals have equal cost and the derivative at P = 1 is < 0. Then, the following theorem may be proved: All measures of efficiency that satisfy the conditions above are asymptotically proportional to −P log(P/q) when the number of dimensions increases, and are maximized by P = q exp(−1) (Kjellström, 1996 and 1999). The figure above shows a possible efficiency function for a random search process such as Gaussian adaptation. To the left the process is most chaotic when P = 0, while there is perfect order to the right where P = 1. In an example by Rechenberg, 1971, 1973, a random walk is pushed through a corridor maximizing the parameter x1. In this case the region of acceptability is defined as an (n − 1)-dimensional interval in the parameters x2, x3, ..., xn, but a x1-value below the last accepted will never be accepted. Since P can never exceed 0.5 in this case, the maximum speed towards higher x1-values is reached for P = 0.5/e = 0.18, in agreement with the findings of Rechenberg. A point of view that also may be of interest in this context is that no definition of information (other than that sampled points inside some region of acceptability give information about the extension of the region) is needed for the proof of the theorem. Then, because the formula may be interpreted as information divided by the work needed to get the information, this is also an indication that −log(P) is a good candidate for being a measure of information.
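The claim that E = −P log(P) peaks at P = 1/e follows from a one-line calculus check (written here with the natural logarithm):

```latex
\[
E(P) = -P\ln P, \qquad
\frac{dE}{dP} = -\ln P - 1 = 0
\;\Longrightarrow\; P = e^{-1} \approx 0.37,
\qquad
\frac{d^{2}E}{dP^{2}} = -\frac{1}{P} < 0 .
\]
```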
Gaussian adaptation : Gaussian adaptation has also been used for other purposes as for instance shadow removal by "The Stauffer-Grimson algorithm" which is equivalent to Gaussian adaptation as used in the section "Computer simulation of Gaussian adaptation" above. In both cases the maximum likelihood method is used for estimation of mean values by adaptation at one sample at a time. But there are differences. In the Stauffer-Grimson case the information is not used for the control of a random number generator for centering, maximization of mean fitness, average information or manufacturing yield. The adaptation of the moment matrix also differs very much as compared to "the evolution in the brain" above.
Gaussian adaptation : Entropy in thermodynamics and information theory Fisher's fundamental theorem of natural selection Free will Genetic algorithm Hebbian learning Information content Simulated annealing Stochastic optimization Covariance matrix adaptation evolution strategy (CMA-ES) Unit of selection
Gaussian adaptation : Bergström, R. M. An Entropy Model of the Developing Brain. Developmental Psychobiology, 2(3): 139–152, 1969. Brooks, D. R. & Wiley, E. O. Evolution as Entropy, Towards a unified theory of Biology. The University of Chicago Press, 1986. Brooks, D. R. Evolution in the Information Age: Rediscovering the Nature of the Organism. Semiosis, Evolution, Energy, Development, Volume 1, Number 1, March 2001 Gaines, Brian R. Knowledge Management in Societies of Intelligent Adaptive Agents. Journal of intelligent Information systems 9, 277–298 (1997). Hartl, D. L. A Primer of Population Genetics. Sinauer, Sunderland, Massachusetts, 1981. Hamilton, WD. 1963. The evolution of altruistic behavior. American Naturalist 97:354–356 Kandel, E. R., Schwartz, J. H., Jessel, T. M. Essentials of Neural Science and Behavior. Prentice Hall International, London, 1995. S. Kirkpatrick and C. D. Gelatt and M. P. Vecchi, Optimization by Simulated Annealing, Science, Vol 220, Number 4598, pages 671–680, 1983. Kjellström, G. Network Optimization by Random Variation of component values. Ericsson Technics, vol. 25, no. 3, pp. 133–151, 1969. Kjellström, G. Optimization of electrical Networks with respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157–175, 1970. Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981. Kjellström, G., Taxén, L. and Lindberg, P. O. Discrete Optimization of Digital Filters Using Gaussian Adaptation and Quadratic Function Minimization. IEEE Trans. on Circ. and Syst., vol. CAS-34, no 10, October 1987. Kjellström, G. On the Efficiency of Gaussian Adaptation. Journal of Optimization Theory and Applications, vol. 71, no. 3, December 1991. Kjellström, G. & Taxén, L. Gaussian Adaptation, an evolution-based efficient global optimizer; Computational and Applied Mathematics, In, C. Brezinski & U. Kulish (Editors), Elsevier Science Publishers B. V., pp 267–276, 1992. Kjellström, G. Evolution as a statistical optimization algorithm. Evolutionary Theory 11:105–117 (January, 1996). Kjellström, G. The evolution in the brain. Applied Mathematics and Computation, 98(2–3):293–300, February, 1999. Kjellström, G. Evolution in a nutshell and some consequences concerning valuations. EVOLVE, ISBN 91-972936-1-X, Stockholm, 2002. Levine, D. S. Introduction to Neural & Cognitive Modeling. Laurence Erlbaum Associates, Inc., Publishers, 1991. MacLean, P. D. A Triune Concept of the Brain and Behavior. Toronto, Univ. Toronto Press, 1973. Maynard Smith, J. 1964. Group Selection and Kin Selection, Nature 201:1145–1147. Maynard Smith, J. Evolutionary Genetics. Oxford University Press, 1998. Mayr, E. What Evolution is. Basic Books, New York, 2001. Müller, Christian L. and Sbalzarini Ivo F. Gaussian Adaptation revisited - an entropic view on Covariance Matrix Adaptation. Institute of Theoretical Computer Science and Swiss Institute of Bioinformatics, ETH Zurich, CH-8092 Zurich, Switzerland. Pinel, J. F. and Singhal, K. Statistical Design Centering and Tolerancing Using Parametric Sampling. IEEE Transactions on Circuits and Systems, Vol. Das-28, No. 7, July 1981. Rechenberg, I. (1971): Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Fromman-Holzboog (1973). Ridley, M. Evolution. Blackwell Science, 1996. Stauffer, C. & Grimson, W.E.L. Learning Patterns of Activity Using Real-Time Tracking, IEEE Trans. on PAMI, 22(8), 2000. Stehr, G. 
On the Performance Space Exploration of Analog Integrated Circuits. Technischen Universität Munchen, Dissertation 2005. Taxén, L. A Framework for the Coordination of Complex Systems’ Development. Institute of Technology, Linköping University, Dissertation, 2003. Zohar, D. The quantum self : a revolutionary view of human nature and consciousness rooted in the new physics. London, Bloomsbury, 1990.
Genetic representation : In computer programming, genetic representation is a way of presenting solutions/individuals in evolutionary computation methods. The term encompasses both the concrete data structures and data types used to realize the genetic material of the candidate solutions in the form of a genome, and the relationships between search space and problem space. In the simplest case, the search space corresponds to the problem space (direct representation). The choice of problem representation is tied to the choice of genetic operators, both of which have a decisive effect on the efficiency of the optimization. Genetic representation can encode appearance, behavior, physical qualities of individuals. Difference in genetic representations is one of the major criteria drawing a line between known classes of evolutionary computation. Terminology is often analogous with natural genetics. The block of computer memory that represents one candidate solution is called an individual. The data in that block is called a chromosome. Each chromosome consists of genes. The possible values of a particular gene are called alleles. A programmer may represent all the individuals of a population using binary encoding, permutational encoding, encoding by tree, or any one of several other representations.
Genetic representation : Genetic algorithms (GAs) typically use linear representations; these are often, but not always, binary. Holland's original description of GA used arrays of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size. This facilitates simple crossover operation. Depending on the application, variable-length representations have also been successfully used and tested in evolutionary algorithms (EA) in general and genetic algorithms in particular, although the implementation of crossover is more complex in this case. Evolution strategy uses linear real-valued representations, e.g., an array of real values. It uses mostly Gaussian mutation and blending/averaging crossover. Genetic programming (GP) pioneered tree-like representations and developed genetic operators suitable for such representations. Tree-like representations are used in GP to represent and evolve functional programs with desired properties. Human-based genetic algorithm (HBGA) offers a way to avoid solving hard representation problems by outsourcing all genetic operators to outside agents, in this case, humans. The algorithm has no need for knowledge of a particular fixed genetic representation as long as there are enough external agents capable of handling those representations, allowing for free-form and evolving genetic representations.
Genetic representation : Analogous to biology, EAs distinguish between problem space (corresponds to phenotype) and search space (corresponds to genotype). The problem space contains concrete solutions to the problem being addressed, while the search space contains the encoded solutions. The mapping from search space to problem space is called genotype-phenotype mapping. The genetic operators are applied to elements of the search space, and for evaluation, elements of the search space are mapped to elements of the problem space via genotype-phenotype mapping.
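A toy decoding step can make the genotype/phenotype split concrete; the knapsack-style items, the capacity and the function names below are purely illustrative assumptions.

```python
# Hypothetical knapsack instance: the genotype is a bit string (search space),
# the phenotype is the selected set of items (problem space).
items = [("tent", 11.0), ("stove", 3.5), ("rope", 1.2)]  # (name, weight)

def genotype_to_phenotype(bits):
    """Genotype-phenotype mapping: decode a bit string into the item selection."""
    return [name for bit, (name, _) in zip(bits, items) if bit == 1]

def fitness(bits, capacity=16.0):
    """Evaluation happens in problem space, on the decoded phenotype."""
    chosen = genotype_to_phenotype(bits)
    weight = sum(w for name, w in items if name in chosen)
    return len(chosen) if weight <= capacity else 0  # infeasible selections score 0

print(fitness([1, 0, 1]))  # genetic operators act on the bit string only
```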
Genetic representation : The importance of an appropriate choice of search space for the success of an EA application was recognized early on. The following requirements can be placed on a suitable search space and thus on a suitable genotype-phenotype mapping:
Genetic representation : When mapping the genotype to the phenotype being evaluated, domain-specific knowledge can be used to improve the phenotype and/or ensure that constraints are met. This is a commonly used method to improve EA performance in terms of runtime and solution quality. It is illustrated below by two of the three examples.
Genotypic and phenotypic repair : Genotypic and phenotypic repair are optional components of an evolutionary algorithm (EA). An EA reproduces essential elements of biological evolution as a computer algorithm in order to solve demanding optimization or planning tasks, at least approximately. A candidate solution is represented by a (usually linear) data structure that plays the role of an individual's chromosome. New solution candidates are generated by mutation and crossover operators following the example of biology. These offspring may be defective, which is corrected or compensated for by genotypic or phenotypic repair.
Genotypic and phenotypic repair : Genotypic repair, also known as genetic repair, is the removal or correction of impermissible entries in the chromosome that violate restrictions. In phenotypic repair, the corrections are only made in the genotype-phenotype mapping and the chromosome remains unchanged. Michalewicz wrote about the importance of restrictions in real-world applications: "In general, constraints are an integral part of the formulation of any problem". Restriction violations are application-specific and therefore it depends on the current problem whether and which type of repair is useful. They can usually also be treated by a correspondingly extended evaluation and it depends on the problem which measures are possible and which is the most suitable. If a phenotypic repair is feasible, then it is usually the most efficient compared to the other measures. A survey on repair methods used as constraint handling techniques can be found in. Violations of the range limits of genes should be avoided as far as possible by the formulation of the genome. If this is not possible or if restrictions within the search space defined by the genome are involved, their violations are usually handled by the evaluation. This can be done, for example, by penalty functions that lower the fitness. Repair is often also required for combinatorial tasks. The application of a 1- or n-point crossover operator can, for example, lead to genes being missing in one of the child genomes that are present in duplicate in the other. In this case, a suitable genotypic repair measure is to move the surplus genes to the other genome in a positional manner. The use of the aforementioned operators in combinatorial tasks has also proven to be useful in combination with crossover types specially developed for permutations, at least for certain problems. Particularly in combinatorial problems, it has been observed that genotypic repair can promote premature convergence to a suboptimum, but can also significantly accelerate a successful search. Studies on various tasks have shown that this is application-dependent. An effective measure to avoid premature convergence is generally the use of structured populations instead of the usual panmictic ones. Sequence restrictions play a role in many scheduling tasks, for example when it comes to planning workflows. If, for example, it is specified that step A must be carried out before step B and the gene of step B is located before the gene of A in the chromosome, then there is an impermissible gene sequence. This is because the scheduling operation of step B requires the planned end of step A for correct scheduling, but this is not yet scheduled at the time gene B is processed. The problem can be solved in two ways: The scheduling operation of step B is postponed until the gene from step A has been processed. The genome remains unchanged and the repair only influences the genotype-phenotype mapping. Since only the phenotype is changed, this is referred to as phenotypic repair. If, on the other hand, the gene of step B is moved behind the gene of step A, this is a genotypic repair. The same applies to the alternative shift of gene A in front of gene B. In this case, genotypic repair has the disadvantage that it prevents a meaningful restructuring of the gene sequence in the chromosome if this requires several intermediate steps (mutations) that at least partially violate restrictions. == References ==
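The positional exchange of surplus genes described above can be sketched for permutation chromosomes as follows; the helper names and the small example are illustrative, not taken from a specific implementation.

```python
def repair_permutations(child_a, child_b):
    """Genotypic repair sketch: after 1-/n-point crossover of two permutations,
    genes duplicated in one child are missing in the other. Surplus copies are
    exchanged between the children position by position."""
    def surplus_positions(child):
        seen, positions = set(), []
        for pos, gene in enumerate(child):
            if gene in seen:
                positions.append(pos)      # second occurrence is the surplus copy
            seen.add(gene)
        return positions
    pos_a, pos_b = surplus_positions(child_a), surplus_positions(child_b)
    for pa, pb in zip(pos_a, pos_b):
        child_a[pa], child_b[pb] = child_b[pb], child_a[pa]
    return child_a, child_b

# Example: one-point crossover of two permutations of 1..5 after position 2.
p1, p2 = [1, 2, 3, 4, 5], [3, 5, 1, 2, 4]
child_a, child_b = p1[:2] + p2[2:], p2[:2] + p1[2:]   # [1,2,1,2,4], [3,5,3,4,5]
print(repair_permutations(child_a, child_b))           # ([1,2,3,5,4], [3,5,1,4,2])
```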
Learning classifier system : Learning classifier systems, or LCS, are a paradigm of rule-based machine learning methods that combine a discovery component (e.g. typically a genetic algorithm in evolutionary computation) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). Learning classifier systems seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions (e.g. behavior modeling, classification, data mining, regression, function approximation, or game strategy). This approach allows complex solution spaces to be broken up into smaller, simpler parts, in the spirit of the reinforcement learning studied within artificial intelligence research. The founding concepts behind learning classifier systems came from attempts to model complex adaptive systems, using rule-based agents to form an artificial cognitive system (i.e. artificial intelligence).
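A heavily simplified illustration of this piecewise, rule-based prediction is given below; the ternary condition encoding with '#' wildcards is common in descriptions of Michigan-style systems, but the specific rules, fitness values and voting scheme here are illustrative assumptions rather than a full LCS.

```python
# Toy illustration (not a full LCS): ternary condition strings with '#' wildcards,
# each rule predicting a class; matching rules vote, weighted by a fitness value.
rules = [
    {"condition": "1#0", "action": 1, "fitness": 0.9},
    {"condition": "##0", "action": 0, "fitness": 0.4},
    {"condition": "11#", "action": 1, "fitness": 0.7},
]

def matches(condition, instance):
    return all(c == "#" or c == x for c, x in zip(condition, instance))

def predict(instance):
    match_set = [r for r in rules if matches(r["condition"], instance)]
    votes = {}
    for r in match_set:
        votes[r["action"]] = votes.get(r["action"], 0.0) + r["fitness"]
    return max(votes, key=votes.get) if votes else None

print(predict("110"))  # all three rules match; class 1 wins the weighted vote
```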
Learning classifier system : The architecture and components of a given learning classifier system can be quite variable. It is useful to think of an LCS as a machine consisting of several interacting components. Components may be added or removed, or existing components modified/exchanged to suit the demands of a given problem domain (like algorithmic building blocks) or to make the algorithm flexible enough to function in many different problem domains. As a result, the LCS paradigm can be flexibly applied to many problem domains that call for machine learning. The major divisions among LCS implementations are as follows: (1) Michigan-style architecture vs. Pittsburgh-style architecture, (2) reinforcement learning vs. supervised learning, (3) incremental learning vs. batch learning, (4) online learning vs. offline learning, (5) strength-based fitness vs. accuracy-based fitness, and (6) complete action mapping vs best action mapping. These divisions are not necessarily mutually exclusive. For example, XCS, the best known and best studied LCS algorithm, is Michigan-style, was designed for reinforcement learning but can also perform supervised learning, applies incremental learning that can be either online or offline, applies accuracy-based fitness, and seeks to generate a complete action mapping.
Learning classifier system : Adaptive: They can acclimate to a changing environment in the case of online learning. Model free: They make limited assumptions about the environment, or the patterns of association within the data. They can model complex, epistatic, heterogeneous, or distributed underlying patterns without relying on prior knowledge. They make no assumptions about the number of predictive vs. non-predictive features in the data. Ensemble Learner: No single model is applied to a given instance that universally provides a prediction. Instead a relevant and often conflicting set of rules contribute a 'vote' which can be interpreted as a fuzzy prediction. Stochastic Learner: Non-deterministic learning is advantageous in large-scale or high complexity problems where deterministic or exhaustive learning becomes intractable. Implicitly Multi-objective: Rules evolve towards accuracy with implicit and explicit pressures encouraging maximal generality/simplicity. This implicit generalization pressure is unique to LCS. Effectively, more general rules will appear more often in match sets. In turn, they have a more frequent opportunity to be selected as parents, and pass on their more general genomes to offspring rules. Interpretable: In the interest of data mining and knowledge discovery, individual LCS rules are logical and can be made to be human-interpretable IF:THEN statements. Effective strategies have also been introduced to allow for global knowledge discovery, identifying significant features and patterns of association from the rule population as a whole. Flexible application: single- or multi-step problems; supervised, reinforcement or unsupervised learning; binary-class and multi-class classification; regression; discrete or continuous features (or some mix of both types); clean or noisy problem domains; balanced or imbalanced datasets; accommodates missing data (i.e. missing feature values in training instances).
Learning classifier system : Limited Software Availability: There are a limited number of open source, accessible LCS implementations, and even fewer that are designed to be user friendly or accessible to machine learning practitioners. Interpretation: While LCS algorithms are certainly more interpretable than some advanced machine learners, users must interpret a set of rules (sometimes large sets of rules) to comprehend the LCS model. Methods for rule compaction and interpretation strategies remain an area of active research. Theory/Convergence Proofs: There is a relatively small body of theoretical work behind LCS algorithms. This is likely due to their relative algorithmic complexity (applying a number of interacting components) as well as their stochastic nature. Overfitting: Like any machine learner, LCS can suffer from overfitting despite implicit and explicit generalization pressures. Run Parameters: LCSs often have many run parameters to consider/optimize. Typically, most parameters can be left to the community-determined defaults with the exception of two critical parameters: the maximum rule population size and the maximum number of learning iterations. Optimizing these parameters is likely to be very problem dependent. Notoriety: Despite their age, LCS algorithms are still not widely known even in machine learning communities. As a result, LCS algorithms are rarely considered in comparison to other established machine learning approaches. This is likely due to the following factors: (1) LCS is a relatively complicated algorithmic approach, (2) rule-based modeling with LCS is a different paradigm of modeling than almost all other machine learning approaches, and (3) LCS software implementations are not as common. Computationally Expensive: While certainly more feasible than some exhaustive approaches, LCS algorithms can be computationally expensive. For simple, linear learning problems there is no need to apply an LCS. LCS algorithms are best suited to complex problem spaces, or problem spaces in which little prior knowledge exists.
Learning classifier system : Adaptive-control Data Mining Engineering Design Feature Selection Function Approximation Game-Play Image Classification Knowledge Handling Medical Diagnosis Modeling Navigation Optimization Prediction Querying Robotics Routing Rule-Induction Scheduling Strategy
Learning classifier system : The name, "Learning Classifier System (LCS)", is a bit misleading since there are many machine learning algorithms that 'learn to classify' (e.g. decision trees, artificial neural networks), but are not LCSs. The term 'rule-based machine learning (RBML)' is useful, as it more clearly captures the essential 'rule-based' component of these systems, but it also generalizes to methods that are not considered to be LCSs (e.g. association rule learning, or artificial immune systems). More general terms such as, 'genetics-based machine learning', and even 'genetic algorithm' have also been applied to refer to what would be more characteristically defined as a learning classifier system. Due to their similarity to genetic algorithms, Pittsburgh-style learning classifier systems are sometimes generically referred to as 'genetic algorithms'. Beyond this, some LCS algorithms, or closely related methods, have been referred to as 'cognitive systems', 'adaptive agents', 'production systems', or generically as a 'classifier system'. This variation in terminology contributes to some confusion in the field. Up until the 2000s nearly all learning classifier system methods were developed with reinforcement learning problems in mind. As a result, the term ‘learning classifier system’ was commonly defined as the combination of ‘trial-and-error’ reinforcement learning with the global search of a genetic algorithm. Interest in supervised learning applications, and even unsupervised learning have since broadened the use and definition of this term.
Learning classifier system : Rule-based machine learning Production system Expert system Genetic algorithm Association rule learning Artificial immune system Population-based Incremental Learning Machine learning
Mating pool : The mating pool is a concept used in evolutionary algorithms and refers to the population of parents for the next population. The mating pool is formed by candidate solutions that the selection operators deem to have the highest fitness in the current population. Solutions that are included in the mating pool are referred to as parents. Individual solutions can be repeatedly included in the mating pool, with individuals of higher fitness values having a higher chance of being included multiple times. Crossover operators are then applied to the parents, resulting in recombination of genes recognized as superior. Lastly, random changes in the genes are introduced through mutation operators, increasing the genetic variation in the gene pool. Those two operators improve the chance of creating new, superior solutions. A new generation of solutions is thereby created, the children, who will constitute the next population. Depending on the selection method, the total number of parents in the mating pool can differ from the size of the initial population, resulting in a new population that is smaller. To continue the algorithm with an equally sized population, random individuals from the old populations can be chosen and added to the new population. At this point, the fitness value of the new solutions is evaluated. If the termination conditions are fulfilled, the process comes to an end. Otherwise, it is repeated. The repetition of the steps results in candidate solutions that evolve towards an optimal solution over time. The genes will become increasingly uniform towards the optimal gene, a process called convergence. If 95% of the population share the same version of a gene, the gene has converged. When all the individual fitness values have reached the value of the best individual, i.e. all the genes have converged, population convergence is achieved.
Mating pool : Several methods can be applied to create a mating pool. All of these processes involve the selective breeding of a particular number of individuals within a population. There are multiple criteria that can be employed to determine which individuals make it into the mating pool and which are left behind. The selection methods can be split into three general types: fitness proportionate selection, ordinal based selection and threshold based selection.
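To make the selection step concrete, the following minimal Python sketch builds a mating pool with fitness proportionate (roulette-wheel) selection; the function name and the toy population are illustrative assumptions, not part of any particular library.

```python
import random

def fitness_proportionate_mating_pool(population, fitnesses, pool_size):
    """Build a mating pool by roulette-wheel (fitness proportionate) selection.

    Individuals with higher fitness are more likely to be drawn, and the same
    individual may appear several times in the returned pool. Assumes all
    fitness values are non-negative and that higher is better.
    """
    total = sum(fitnesses)
    if total == 0:  # degenerate case: fall back to uniform selection
        return [random.choice(population) for _ in range(pool_size)]
    pool = []
    for _ in range(pool_size):
        pick = random.uniform(0, total)
        running = 0.0
        for individual, fit in zip(population, fitnesses):
            running += fit
            if running >= pick:
                pool.append(individual)
                break
    return pool

# Example: a small population of bit-strings with their fitness values
population = ["0101", "1111", "0011", "1000"]
fitnesses = [2.0, 4.0, 3.0, 1.0]
parents = fitness_proportionate_mating_pool(population, fitnesses, pool_size=4)
```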
Memetic algorithm : In computer science and operations research, a memetic algorithm (MA) is an extension of an evolutionary algorithm (EA) that aims to accelerate the evolutionary search for the optimum. An EA is a metaheuristic that reproduces the basic principles of biological evolution as a computer algorithm in order to solve challenging optimization or planning tasks, at least approximately. An MA uses one or more suitable heuristics or local search techniques to improve the quality of solutions generated by the EA and to speed up the search. The effects on the reliability of finding the global optimum depend on both the use case and the design of the MA. Memetic algorithms represent one of the recent growing areas of research in evolutionary computation. The term MA is now widely used as a synergy of evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search. Quite often, MAs are also referred to in the literature as Baldwinian evolutionary algorithms, Lamarckian EAs, cultural algorithms, or genetic local search.
Memetic algorithm : Inspired by both Darwinian principles of natural evolution and Dawkins' notion of a meme, the term memetic algorithm (MA) was introduced by Pablo Moscato in his technical report in 1989 where he viewed MA as being close to a form of population-based hybrid genetic algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. The metaphorical parallels, on the one hand, to Darwinian evolution and, on the other hand, between memes and domain specific (local search) heuristics are captured within memetic algorithms thus rendering a methodology that balances well between generality and problem specificity. This two-stage nature makes them a special case of dual-phase evolution. In the context of complex optimization, many different instantiations of memetic algorithms have been reported across a wide range of application domains, in general, converging to high-quality solutions more efficiently than their conventional evolutionary counterparts. In general, using the ideas of memetics within a computational framework is called memetic computing or memetic computation (MC). With MC, the traits of universal Darwinism are more appropriately captured. Viewed in this perspective, MA is a more constrained notion of MC. More specifically, MA covers one area of MC, in particular dealing with areas of evolutionary algorithms that marry other deterministic refinement techniques for solving optimization problems. MC extends the notion of memes to cover conceptual entities of knowledge-enhanced procedures or representations.
Memetic algorithm : The no-free-lunch theorems of optimization and search state that all optimization strategies are equally effective with respect to the set of all optimization problems. Conversely, this means that one can expect the following: The more efficiently an algorithm solves a problem or class of problems, the less general it is and the more problem-specific knowledge it builds on. This insight leads directly to the recommendation to complement generally applicable metaheuristics with application-specific methods or heuristics, which fits well with the concept of MAs.
Memetic algorithm : The learning method (meme) used has a significant impact on the improvement results, so care must be taken in deciding which meme or memes to use for a particular optimization problem. The frequency and intensity of individual learning directly determine the balance between evolution (exploration) and individual learning (exploitation) in the MA search, for a given fixed and limited computational budget. Clearly, more intense individual learning provides a greater chance of convergence to local optima, but it limits the amount of evolution that can take place without incurring excessive computational cost. Therefore, care should be taken when setting these two parameters to balance the available computational budget and achieve maximum search performance. When only a portion of the population undergoes learning, the question of which subset of individuals to improve needs to be considered in order to maximize the utility of the MA search. Last but not least, it has to be decided whether the respective individual should be changed by the learning success (Lamarckian learning) or not (Baldwinian learning). Thus, five design questions must be answered: which meme or memes to use, how frequently and how intensely individual learning is applied, which individuals undergo learning, and whether the result of learning is written back into the individual (Lamarckian) or only affects its fitness (Baldwinian). The first of these questions is addressed by all second-generation MAs during a run, while the extended form of meta-Lamarckian learning expands this to the first four design decisions.
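As an illustration of these design decisions, here is a minimal Python sketch of a memetic algorithm that applies a simple hill-climbing meme to every offspring and writes the result back into the genome (Lamarckian learning); the operators, parameter values, and the sphere test function are assumptions chosen for brevity, not a reference implementation.

```python
import random

def local_search(x, f, step=0.1, iters=20):
    """Simple hill-climbing meme: repeatedly try small random perturbations."""
    best, best_val = x[:], f(x)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in best]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best

def memetic_algorithm(f, dim, bounds, pop_size=20, generations=100):
    """A minimal (mu + lambda)-style memetic algorithm for minimisation.

    Every offspring is refined by the local_search meme, and the refined
    genotype replaces the original one (Lamarckian learning).
    """
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            # intermediate recombination plus Gaussian mutation
            child = [(ai + bi) / 2 + random.gauss(0, 0.1) for ai, bi in zip(a, b)]
            child = local_search(child, f)   # individual learning, written back (Lamarckian)
            offspring.append(child)
        pop = sorted(pop + offspring, key=f)[:pop_size]  # (mu + lambda) survivor selection
    return min(pop, key=f)

# Usage: minimise the sphere function in 5 dimensions
best = memetic_algorithm(lambda x: sum(xi * xi for xi in x), dim=5, bounds=(-5, 5))
```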
Memetic algorithm : Memetic algorithms have been successfully applied to a multitude of real-world problems. Although many people employ techniques closely related to memetic algorithms, alternative names such as hybrid genetic algorithms are also employed. Researchers have used memetic algorithms to tackle many classical NP problems. To cite some of them: graph partitioning, multidimensional knapsack, travelling salesman problem, quadratic assignment problem, set cover problem, minimal graph coloring, max independent set problem, bin packing problem, and generalized assignment problem. More recent applications include (but are not limited to) business analytics and data science, training of artificial neural networks, pattern recognition, robotic motion planning, beam orientation, circuit design, electric service restoration, medical expert systems, single machine scheduling, automatic timetabling (notably, the timetable for the NHL), manpower scheduling, nurse rostering optimisation, processor allocation, maintenance scheduling (for example, of an electric distribution network), scheduling of multiple workflows to constrained heterogeneous resources, multidimensional knapsack problem, VLSI design, clustering of gene expression profiles, feature/gene selection, parameter determination for hardware fault injection, and multi-class, multi-objective feature selection.
Memetic algorithm : IEEE Workshop on Memetic Algorithms (WOMA 2009). Program Chairs: Jim Smith, University of the West of England, U.K.; Yew-Soon Ong, Nanyang Technological University, Singapore; Gustafson Steven, University of Nottingham, U.K.; Meng Hiot Lim, Nanyang Technological University, Singapore; Natalio Krasnogor, University of Nottingham, U.K. Memetic Computing Journal, first issue appeared in January 2009. 2008 IEEE World Congress on Computational Intelligence (WCCI 2008), Hong Kong, Special Session on Memetic Algorithms. Special Issue on 'Emerging Trends in Soft Computing - Memetic Algorithm', Soft Computing Journal, 2008. IEEE Computational Intelligence Society Emergent Technologies Task Force on Memetic Computing. IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, Special Session on Memetic Algorithms. 'Memetic Computing' listed by Thomson Scientific's Essential Science Indicators as an Emerging Front Research Area. Special Issue on Memetic Algorithms, IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, Vol. 37, No. 1, February 2007. Recent Advances in Memetic Algorithms, Series: Studies in Fuzziness and Soft Computing, Vol. 166, ISBN 978-3-540-22904-9, 2005. Special Issue on Memetic Algorithms, Evolutionary Computation Fall 2004, Vol. 12, No. 3: v-vi.
Minimum Population Search : In evolutionary computation, Minimum Population Search (MPS) is a computational method that optimizes a problem by iteratively trying to improve a set of candidate solutions with regard to a given measure of quality. It solves a problem by evolving a small population of candidate solutions by means of relatively simple arithmetical operations. MPS is a metaheuristic, as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. For problems where finding the precise global optimum is less important than finding an acceptable local optimum in a fixed amount of time, using a metaheuristic such as MPS may be preferable to alternatives such as brute-force search or gradient descent. MPS is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means MPS does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. MPS can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc.
Minimum Population Search : In a similar way to differential evolution, MPS uses difference vectors between the members of the population in order to generate new solutions. It attempts to make efficient use of function evaluations by maintaining a small population size. If the population size is smaller than the dimensionality of the search space, then the solutions generated through difference vectors will be constrained to an (n − 1)-dimensional hyperplane, where n is the population size; a smaller population size therefore leads to a more restricted subspace. With a population size equal to the dimensionality of the problem ( n = d ), the "line/hyperplane points" in MPS will be generated within a (d − 1)-dimensional hyperplane. Taking a step orthogonal to this hyperplane allows the search process to cover all the dimensions of the search space. Population size is a fundamental parameter in the performance of population-based heuristics. Larger populations promote exploration, but they also allow fewer generations within a fixed budget, and this can reduce the chance of convergence. Searching with a small population can increase the chances of convergence and the efficient use of function evaluations, but it can also induce the risk of premature convergence. If the risk of premature convergence can be avoided, then a population-based heuristic can benefit from the efficiency and faster convergence rate of a smaller population. To avoid premature convergence, it is important to have a diversified population. By including techniques for explicitly increasing diversity and exploration, it is possible to use smaller populations with less risk of premature convergence.
Minimum Population Search : A basic variant of the MPS algorithm works by having a population of size equal to the dimension of the problem. New solutions are generated by exploring the hyperplane defined by the current solutions (by means of difference vectors) and performing an additional orthogonal step in order to avoid getting caught in this hyperplane. The step sizes are controlled by the Thresheld Convergence technique, which gradually reduces step sizes as the search process advances. An outline for the algorithm is given below:
Generate the initial population. Allowing these solutions to lie near the bounds of the search space generally gives good results: s_k = (rs_1 * bound_1 / 2, rs_2 * bound_2 / 2, ..., rs_n * bound_n / 2), where s_k is the k-th population member, the rs_i are random numbers which can be −1 or 1, and the bound_i are the bounds on each dimension.
While a stop condition is not reached:
Update the threshold convergence values (min_step and max_step).
Calculate the centroid of the current population (x_c).
For each member of the population (x_i), generate a new offspring as follows:
Uniformly generate a scaling factor F_i between −max_step and max_step.
Generate a vector x_o orthogonal to the difference vector between x_i and x_c.
Calculate a scaling factor for the orthogonal vector: min_orth = sqrt(max(min_step^2 − F_i^2, 0)), max_orth = sqrt(max(max_step^2 − F_i^2, 0)), orth_step = uniform(min_orth, max_orth).
Generate the new solution by adding the scaled difference and orthogonal vectors to the original solution: new_solution = x_i + F_i * (x_i − x_c) + orth_step * x_o.
Pick the best members between the old population and the new one by discarding the least fit members.
Return the single best solution or the best population found as the final result.
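A rough Python rendering of this outline is given below; the thresheld-convergence schedule (the alpha and gamma parameters), the clipping to the search bounds, and the random-number handling are assumptions made for the sketch, since the outline above does not fix them.

```python
import numpy as np

def mps_minimize(f, dim, bound, max_evals=20000, alpha=0.3, gamma=3.0):
    """Minimal Minimum Population Search sketch for minimising f on [-bound, bound]^dim.

    The step-size bounds follow an assumed thresheld-convergence schedule:
    max_step = alpha * diagonal * (remaining_evals / max_evals)**gamma and
    min_step = max_step / 2; the exact schedule varies in the literature.
    """
    rng = np.random.default_rng()
    pop = rng.choice([-1.0, 1.0], size=(dim, dim)) * bound / 2   # initialise near the bounds
    fit = np.array([f(x) for x in pop])
    evals = dim
    diagonal = 2.0 * bound * np.sqrt(dim)
    while evals + dim <= max_evals:
        max_step = alpha * diagonal * ((max_evals - evals) / max_evals) ** gamma
        min_step = max_step / 2.0
        centroid = pop.mean(axis=0)
        new_pop, new_fit = [], []
        for x in pop:
            diff = x - centroid
            F = rng.uniform(-max_step, max_step)
            rand = rng.normal(size=dim)                          # random direction ...
            if diff.dot(diff) > 1e-12:
                rand -= rand.dot(diff) / diff.dot(diff) * diff   # ... made orthogonal to diff
            x_o = rand / (np.linalg.norm(rand) + 1e-12)
            min_orth = np.sqrt(max(min_step**2 - F**2, 0.0))
            max_orth = np.sqrt(max(max_step**2 - F**2, 0.0))
            orth_step = rng.uniform(min_orth, max_orth)
            y = np.clip(x + F * diff + orth_step * x_o, -bound, bound)
            new_pop.append(y)
            new_fit.append(f(y))
            evals += 1
        combined = np.vstack([pop, np.array(new_pop)])
        combined_fit = np.concatenate([fit, np.array(new_fit)])
        keep = np.argsort(combined_fit)[:dim]                    # keep the best dim members
        pop, fit = combined[keep], combined_fit[keep]
    best = int(np.argmin(fit))
    return pop[best], float(fit[best])

# Usage: 10-dimensional sphere function
best_x, best_f = mps_minimize(lambda x: float(np.sum(x * x)), dim=10, bound=5.0)
```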
Population model (evolutionary algorithm) : The population model of an evolutionary algorithm (EA) describes the structural properties of its population to which its members are subject. A population is the set of all proposed solutions of an EA considered in one iteration, which are also called individuals according to the biological role model. The individuals of a population can generate further individuals as offspring with the help of the genetic operators of the procedure. The simplest and widely used population model in EAs is the global or panmictic model, which corresponds to an unstructured population. It allows each individual to choose any other individual of the population as a partner for the production of offspring by crossover, whereby the details of the selection are irrelevant as long as the fitness of the individuals plays a significant role. Due to global mate selection, the genetic information of even slightly better individuals can prevail in a population after a few generations (iteration of an EA), provided that no better other offspring have emerged in this phase. If the solution found in this way is not the optimum sought, that is called premature convergence. This effect can be observed more often in panmictic populations. In nature global mating pools are rarely found. What prevails is a certain and limited isolation due to spatial distance. The resulting local neighbourhoods initially evolve independently and mutants have a higher chance of persisting over several generations. As a result, genotypic diversity in the gene pool is preserved longer than in a panmictic population. It is therefore obvious to divide the previously global population by substructures. Two basic models were introduced for this purpose, the island models, which are based on a division of the population into fixed subpopulations that exchange individuals from time to time, and the neighbourhood models, which assign individuals to overlapping neighbourhoods, also known as cellular genetic or evolutionary algorithms (cGA or cEA). The associated division of the population also suggests a corresponding parallelization of the procedure. For this reason, the topic of population models is also frequently discussed in the literature in connection with the parallelization of EAs.
Population model (evolutionary algorithm) : In the island model, also called the migration model or coarse grained model, evolution takes place in strictly divided subpopulations. These can be organised panmictically, but do not have to be. From time to time an exchange of individuals takes place, which is called migration. The time between an exchange is called an epoch and its end can be triggered by various criteria: E.g. after a given time or given number of completed generations, or after the occurrence of stagnation. Stagnation can be detected, for example, by the fact that no fitness improvement has occurred in the island for a given number of generations. Island models introduce a variety of new strategy parameters: Number of subpopulations Size of the subpopulations Neighbourhood relations between islands: they determine which islands are considered neighbouring and can thus exchange individuals, see picture of a simple unidirectional ring (black arrows) and its extension by additional bidirectional neighbourhood relations (additional green arrows) Criteria for the termination of an epoch, synchronous or asynchronous migration Migration rate: number or proportion of individuals involved in migration. Migrant selection: There are many alternatives for this. E.g. the best individuals can replace the worst or randomly selected ones. Depending on the migration rate, this can affect one or more individuals at a time. With these parameters, the selection pressure can be influenced to a considerable extent. For example, it increases with the interconnectedness of the islands and decreases with the number of subpopulations or the epoch length.
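The following minimal Python sketch illustrates an island model with a unidirectional migration ring; the inner evolutionary step, the epoch length, and the migration policy (the best individuals replace the worst on the next island) are illustrative choices among the many options listed above.

```python
import random

def island_model_ea(f, dim, bounds, n_islands=4, island_size=20,
                    epoch_length=25, n_epochs=20, migrants=2):
    """Island-model sketch: independent sub-populations that exchange their best
    individuals along a unidirectional ring after every epoch. The inner step is
    a simple mutation-based steady-state EA; any EA could be substituted."""
    lo, hi = bounds
    islands = [[[random.uniform(lo, hi) for _ in range(dim)]
                for _ in range(island_size)] for _ in range(n_islands)]
    for _ in range(n_epochs):
        # evolve each island independently for one epoch
        for isl in islands:
            for _ in range(epoch_length):
                parent = min(random.sample(isl, 2), key=f)              # binary tournament
                child = [x + random.gauss(0, 0.1 * (hi - lo)) for x in parent]
                worst = max(range(island_size), key=lambda i: f(isl[i]))
                if f(child) < f(isl[worst]):
                    isl[worst] = child
        # migration: best individuals replace the worst on the next island in the ring
        for i, isl in enumerate(islands):
            emigrants = sorted(isl, key=f)[:migrants]
            target = islands[(i + 1) % n_islands]
            target.sort(key=f)
            target[-migrants:] = [e[:] for e in emigrants]
    return min((ind for isl in islands for ind in isl), key=f)

best = island_model_ea(lambda x: sum(xi * xi for xi in x), dim=5, bounds=(-5, 5))
```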
Population model (evolutionary algorithm) : The neighbourhood model, also called diffusion model or fine grained model, defines a topological neighbourhood relation between the individuals of a population that is independent of their phenotypic properties. The fundamental idea of this model is to provide the EA population with a special structure defined as a connected graph, in which each vertex is an individual that communicates with its nearest neighbours. Particularly, individuals are conceptually set in a toroidal mesh, and are only allowed to recombine with close individuals. This leads to a kind of locality known as isolation by distance. The set of potential mates of an individual is called its neighbourhood or deme. The adjacent figure illustrates that by showing two slightly overlapping neighbourhoods of two individuals marked yellow, through which genetic information can spread between the two demes. It is known that in this kind of algorithm, similar individuals tend to cluster and create niches that are independent of the deme boundaries and, in particular, can be larger than a deme. There is no clear borderline between adjacent groups, and close niches could be easily colonized by competitive ones and maybe merge solution contents during this process. Simultaneously, farther niches can be affected more slowly. EAs with this type of population are also well known as cellular EAs (cEA) or cellular genetic algorithms (cGA). A commonly used structure for arranging the individuals of a population is a 2D toroidal grid, although the number of dimensions can be easily extended (to 3D) or reduced (to 1D, e.g. a ring, see the figure on the right). The neighbourhood of a particular individual in the grid is defined in terms of the Manhattan distance from it to others in the population. In the basic algorithm, all the neighbourhoods have the same size and identical shapes. The two most commonly used neighbourhoods for two-dimensional cEAs are L5 and C9, see the figure on the left. Here, L stands for Linear while C stands for Compact. Each deme represents a panmictic subpopulation within which mate selection and the acceptance of offspring takes place by replacing the parent. The rules for the acceptance of offspring are local in nature and based on the neighbourhood: for example, it can be specified that the best offspring must be better than the parent being replaced or, less strictly, only better than the worst individual in the deme. The first rule is elitist and creates a higher selective pressure than the second non-elitist rule. In elitist EAs, the best individual of a population always survives. In this respect, they deviate from the biological model. The overlap of the neighbourhoods causes a mostly slow spread of genetic information across the neighbourhood boundaries, hence the name diffusion model. A better offspring now needs more generations than in panmixy to spread in the population. This promotes the emergence of local niches and their local evolution, thus preserving genotypic diversity over a longer period of time. The result is a better and dynamic balance between breadth and depth search adapted to the search space during a run. Depth search takes place in the niches and breadth search in the niche boundaries and through the evolution of the different niches of the whole population. For the same neighbourhood size, the spread of genetic information is larger for elongated figures like L9 than for a block like C9, and again significantly larger than for a ring.
This means that ring neighbourhoods are well suited for achieving high quality results, even if this requires comparatively long run times. On the other hand, if one is primarily interested in fast and good, but possibly suboptimal results, 2D topologies are more suitable.
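A small Python sketch of one generation of a cellular EA on a toroidal grid with L5 neighbourhoods is shown below; the recombination and mutation operators and the elitist local acceptance rule are illustrative assumptions.

```python
import random

def l5_neighbourhood(i, j, rows, cols):
    """Indices of the L5 neighbourhood of cell (i, j) on a toroidal grid:
    the cell itself plus its four nearest neighbours (Manhattan distance <= 1)."""
    return [(i, j),
            ((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]

def cellular_ea_step(grid, fitness, rows, cols, mutate):
    """One synchronous generation of a minimal cellular EA: each cell picks a mate
    from its own deme, and the offspring replaces the parent only if it is better
    (an elitist local acceptance rule)."""
    new_grid = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            deme = [grid[a][b] for a, b in l5_neighbourhood(i, j, rows, cols)]
            mate = min(random.sample(deme, 2), key=fitness)        # local tournament selection
            mid = [(x + y) / 2 for x, y in zip(grid[i][j], mate)]  # intermediate recombination
            child = mutate(mid)
            new_grid[i][j] = child if fitness(child) < fitness(grid[i][j]) else grid[i][j]
    return new_grid

# Usage on a 10x10 torus with 3-dimensional real-valued individuals
rows = cols = 10
grid = [[[random.uniform(-5, 5) for _ in range(3)] for _ in range(cols)] for _ in range(rows)]
sphere = lambda x: sum(v * v for v in x)
mutate = lambda x: [v + random.gauss(0, 0.1) for v in x]
for _ in range(50):
    grid = cellular_ea_step(grid, sphere, rows, cols, mutate)
```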
Population model (evolutionary algorithm) : When applying both population models to genetic algorithms, evolution strategies and other EAs, splitting the total population into subpopulations usually reduces the risk of premature convergence and leads to better results overall, more reliably and faster than would be expected with panmictic EAs. Island models have the disadvantage compared to neighbourhood models that they introduce a large number of new strategy parameters. Despite the existing studies on this topic in the literature, a certain risk of unfavourable settings remains for the user. With neighbourhood models, on the other hand, only the size of the neighbourhood has to be specified and, in the case of the two-dimensional model, the choice of the neighbourhood figure.
Population model (evolutionary algorithm) : Since both population models imply population partitioning, they are well suited as a basis for parallelizing an EA. This applies even more to cellular EAs, since they rely only on locally available information about the members of their respective demes. Thus, in the extreme case, an independent execution thread can be assigned to each individual, so that the entire cEA can run on a parallel hardware platform. The island model also supports parallelization, e.g. by assigning a processor to each island. If the subpopulations of the islands are organized panmictically, all evaluations of the descendants of a generation can be parallelized additionally. In real-world applications the evaluations are usually by far the most time-consuming part. Of course, it is also possible to design the island sub-populations as cEAs, so that the statements made before about parallelizing cEAs apply. In this way, hierarchical population structures with the appropriate parallelizations can be created. Not only comparatively expensive computer clusters but also inexpensive graphics cards (GPUs) or the computers of a grid can be used for parallelization. However, it is important to stress that cEAs, or EAs with a population distributed across islands, represent a search model that differs in many ways from traditional EAs. Moreover, they can run on both sequential and parallel platforms, which highlights the fact that model and implementation are two different concepts.
Population model (evolutionary algorithm) : Cellular automaton Dual-phase evolution Evolutionary algorithm Metaheuristic
Premature convergence : Premature convergence is an unwanted effect in evolutionary algorithms (EAs), a class of metaheuristics that mimic the basic principles of biological evolution as a computer algorithm for solving an optimization problem. The effect means that the population of an EA has converged too early, resulting in a suboptimal solution. In this situation, the parental solutions, through the aid of genetic operators, are no longer able to generate offspring that are superior to, or outperform, their parents. Premature convergence is a common problem in evolutionary algorithms, as it leads to the loss, through convergence, of a large number of alleles, subsequently making it very difficult to search for a specific gene in which the lost alleles were present. An allele is considered lost when all individuals in the population share the same value for that particular gene. An allele is, as defined by De Jong, considered to be a converged allele when 95% of a population share the same value for a certain gene.
Premature convergence : Strategies to regain genetic variation can be: a mating strategy called incest prevention, uniform crossover, mimicking sexual selection, favored replacement of similar individuals (preselection or crowding), segmentation of individuals of similar fitness (fitness sharing), and increasing the population size. Genetic variation can also be regained by mutation, though this process is highly random. A general strategy to reduce the risk of premature convergence is to use structured populations instead of the commonly used panmictic ones.
Premature convergence : It is hard to determine when premature convergence has occurred, and it is equally hard to predict its presence in the future. One measure is to use the difference between the average and maximum fitness values, as used by Patnaik & Srinivas, to then vary the crossover and mutation probabilities. Population diversity is another measure that has been used extensively in studies of premature convergence. However, although it is widely accepted that a decrease in population diversity directly leads to premature convergence, there have been few studies that analyse population diversity itself. In other words, an argument that relies on the notion of population diversity to justify a method for preventing premature convergence lacks robustness unless it states precisely which definition of population diversity is being used.
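For illustration, a small Python sketch of two such measures is given below: a De Jong-style gene convergence check (a gene counts as converged when 95% of the population share its value) and the average-versus-maximum fitness difference; both functions and their names are illustrative.

```python
def gene_convergence(population, threshold=0.95):
    """Fraction of genes that have converged in the De Jong sense: a gene counts as
    converged when at least `threshold` of the population shares the same value.
    `population` is a list of equal-length genomes (e.g. bit-strings as lists)."""
    n_genes = len(population[0])
    converged = 0
    for g in range(n_genes):
        values = [ind[g] for ind in population]
        most_common = max(set(values), key=values.count)
        if values.count(most_common) / len(population) >= threshold:
            converged += 1
    return converged / n_genes

def fitness_spread(fitnesses):
    """Difference between maximum and average fitness, the kind of signal used to
    adapt crossover and mutation probabilities in adaptive GAs."""
    return max(fitnesses) - sum(fitnesses) / len(fitnesses)
```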
Premature convergence : There are a number of presumed or hypothesized causes for the occurrence of premature convergence.
Premature convergence : Evolutionary computation Evolution
Ensemble learning : In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Unlike a statistical ensemble in statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
Ensemble learning : Supervised learning algorithms search through a hypothesis space to find a suitable hypothesis that will make good predictions for a particular problem. Even if this space contains hypotheses that are very well suited for a particular problem, it may be very difficult to find a good one. Ensembles combine multiple hypotheses to form one which should, in theory, be better. Ensemble learning trains two or more machine learning algorithms on a specific classification or regression task. The algorithms within the ensemble model are generally referred to as "base models", "base learners", or "weak learners" in the literature. These base models can be constructed using a single modelling algorithm or several different algorithms. The idea is to train a diverse set of weak models on the same modelling task, such that the outputs of each weak learner have poor predictive ability (i.e., high bias), and among all weak learners the outcome and error values exhibit high variance. Fundamentally, an ensemble learning model trains at least two high-bias (weak) and high-variance (diverse) models that are combined into a better-performing model. The set of weak models, which would not produce satisfactory predictive results individually, is combined or averaged to produce a single high-performing, accurate, and low-variance model that fits the task as required. Ensemble learning typically refers to bagging (bootstrap aggregating), boosting, or stacking/blending techniques to induce high variance among the base models. Bagging creates diversity by generating random samples from the training observations and fitting the same model to each different sample (also known as homogeneous parallel ensembles). Boosting follows an iterative process by sequentially training each base model on the up-weighted errors of the previous base model, producing an additive model to reduce the final model errors (also known as sequential ensemble learning). Stacking or blending consists of different base models, each trained independently (i.e. diverse/high variance), that are combined into the ensemble model, producing a heterogeneous parallel ensemble. Common applications of ensemble learning include random forests (an extension of bagging), boosted tree models, and gradient boosted tree models. Models in applications of stacking are generally more task-specific, such as combining clustering techniques with other parametric and/or non-parametric techniques. The broader term of multiple classifier systems also covers hybridization of hypotheses that are not induced by the same base learner. Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning with one non-ensemble model. An ensemble may be more efficient at improving overall accuracy for the same increase in compute, storage, or communication resources by using that increase on two or more methods than it would be by increasing resource use for a single method. Fast algorithms such as decision trees are commonly used in ensemble methods (e.g., random forests), although slower algorithms can benefit from ensemble techniques as well.
By analogy, ensemble techniques have been used also in unsupervised learning scenarios, for example in consensus clustering or in anomaly detection.
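The three ensemble families mentioned above can be sketched with scikit-learn roughly as follows; the synthetic dataset, the base models, and the hyperparameters are illustrative choices, not recommendations.

```python
# Bagging, boosting, and stacking on a synthetic binary classification task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, StackingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50)               # homogeneous, parallel
boosting = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50)  # sequential, reweighted
stacking = StackingClassifier(                                                       # heterogeneous base models
    estimators=[("tree", DecisionTreeClassifier()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression())

for name, model in [("bagging", bagging), ("boosting", boosting), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))
```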
Ensemble learning : Empirically, ensembles tend to yield better results when there is significant diversity among the models. Many ensemble methods therefore seek to promote diversity among the models they combine. Although perhaps non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees). Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt to dumb down the models in order to promote diversity. It is possible to increase diversity in the training stage of the model using correlation for regression tasks or using information measures such as cross entropy for classification tasks. Theoretically, the diversity concept can be justified because the lower bound of the error rate of an ensemble system can be decomposed into accuracy, diversity, and a remaining term.
Ensemble learning : While the number of component classifiers of an ensemble has a great impact on the accuracy of prediction, only a limited number of studies have addressed this problem. Determining the ensemble size a priori, as well as the volume and velocity of big data streams, makes this even more crucial for online ensemble classifiers. Statistical tests were mostly used for determining the proper number of components. More recently, a theoretical framework suggested that there is an ideal number of component classifiers for an ensemble, such that having more or fewer classifiers than this number would deteriorate the accuracy. It is called "the law of diminishing returns in ensemble construction." This theoretical framework shows that using the same number of independent component classifiers as class labels gives the highest accuracy.
Ensemble learning : R: at least three packages offer Bayesian model averaging tools, including the BMS (an acronym for Bayesian Model Selection) package, the BAS (an acronym for Bayesian Adaptive Sampling) package, and the BMA package. Python: scikit-learn, a package for machine learning in Python offers packages for ensemble learning including packages for bagging, voting and averaging methods. MATLAB: classification ensembles are implemented in Statistics and Machine Learning Toolbox.
Ensemble learning : In recent years, growing computational power, which allows large ensembles to be trained in a reasonable time frame, has led to an increasing number of ensemble learning applications. Some of the applications of ensemble classifiers include:
Ensemble learning : Ensemble averaging (machine learning) Bayesian structural time series (BSTS) Mixture of experts
AdaBoost : AdaBoost (short for Adaptive Boosting) is a statistical classification meta-algorithm formulated by Yoav Freund and Robert Schapire in 1995, who won the 2003 Gödel Prize for their work. It can be used in conjunction with many types of learning algorithm to improve performance. The output of multiple weak learners is combined into a weighted sum that represents the final output of the boosted classifier. Usually, AdaBoost is presented for binary classification, although it can be generalized to multiple classes or bounded intervals of real values. AdaBoost is adaptive in the sense that subsequent weak learners (models) are adjusted in favor of instances misclassified by previous models. In some problems, it can be less susceptible to overfitting than other learning algorithms. The individual learners can be weak, but as long as the performance of each one is slightly better than random guessing, the final model can be proven to converge to a strong learner. Although AdaBoost is typically used to combine weak base learners (such as decision stumps), it has been shown to also effectively combine strong base learners (such as deeper decision trees), producing an even more accurate model. Every learning algorithm tends to suit some problem types better than others, and typically has many different parameters and configurations to adjust before it achieves optimal performance on a dataset. AdaBoost (with decision trees as the weak learners) is often referred to as the best out-of-the-box classifier. When used with decision tree learning, information gathered at each stage of the AdaBoost algorithm about the relative 'hardness' of each training sample is fed into the tree-growing algorithm such that later trees tend to focus on harder-to-classify examples.
AdaBoost : AdaBoost refers to a particular method of training a boosted classifier. A boosted classifier is a classifier of the form F_T(x) = \sum_{t=1}^{T} f_t(x), where each f_t is a weak learner that takes an object x as input and returns a value indicating the class of the object. For example, in the two-class problem, the sign of the weak learner's output identifies the predicted object class and the absolute value gives the confidence in that classification. Each weak learner produces an output hypothesis h which fixes a prediction h(x_i) for each sample in the training set. At each iteration t, a weak learner is selected and assigned a coefficient \alpha_t such that the total training error E_t of the resulting t-stage boosted classifier is minimized: E_t = \sum_i E[F_{t-1}(x_i) + \alpha_t h(x_i)]. Here F_{t-1}(x) is the boosted classifier that has been built up to the previous stage of training and f_t(x) = \alpha_t h(x) is the weak learner that is being considered for addition to the final classifier.
AdaBoost : This derivation follows Rojas (2009). Suppose we have a data set \{(x_1, y_1), \ldots, (x_N, y_N)\} where each item x_i has an associated class y_i \in \{-1, 1\}, and a set of weak classifiers \{k_1, \ldots, k_L\}, each of which outputs a classification k_j(x_i) \in \{-1, 1\} for each item. After the (m-1)-th iteration our boosted classifier is a linear combination of the weak classifiers of the form C_{(m-1)}(x_i) = \alpha_1 k_1(x_i) + \cdots + \alpha_{m-1} k_{m-1}(x_i), where the class will be the sign of C_{(m-1)}(x_i). At the m-th iteration we want to extend this to a better boosted classifier by adding another weak classifier k_m with weight \alpha_m: C_m(x_i) = C_{(m-1)}(x_i) + \alpha_m k_m(x_i). So it remains to determine which weak classifier is the best choice for k_m, and what its weight \alpha_m should be. We define the total error E of C_m as the sum of its exponential loss on each data point: E = \sum_{i=1}^{N} e^{-y_i C_m(x_i)} = \sum_{i=1}^{N} e^{-y_i C_{(m-1)}(x_i)} e^{-y_i \alpha_m k_m(x_i)}. Letting w_i^{(1)} = 1 and w_i^{(m)} = e^{-y_i C_{m-1}(x_i)} for m > 1, we have E = \sum_{i=1}^{N} w_i^{(m)} e^{-y_i \alpha_m k_m(x_i)}. We can split this summation between those data points that are correctly classified by k_m (so y_i k_m(x_i) = 1) and those that are misclassified (so y_i k_m(x_i) = -1): E = \sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m} = \sum_{i=1}^{N} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} \left(e^{\alpha_m} - e^{-\alpha_m}\right). Since the only part of the right-hand side of this equation that depends on k_m is \sum_{y_i \neq k_m(x_i)} w_i^{(m)}, we see that the k_m that minimizes E is the one in \{k_1, \ldots, k_L\} that minimizes \sum_{y_i \neq k_m(x_i)} w_i^{(m)} (assuming that \alpha_m > 0), i.e. the weak classifier with the lowest weighted error (with weights w_i^{(m)} = e^{-y_i C_{m-1}(x_i)}). To determine the desired weight \alpha_m that minimizes E with the k_m that we just determined, we differentiate: dE/d\alpha_m = \frac{d}{d\alpha_m}\left(\sum_{y_i = k_m(x_i)} w_i^{(m)} e^{-\alpha_m} + \sum_{y_i \neq k_m(x_i)} w_i^{(m)} e^{\alpha_m}\right). Setting this to zero and solving for \alpha_m yields \alpha_m = \frac{1}{2} \ln\left(\frac{\sum_{y_i = k_m(x_i)} w_i^{(m)}}{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}}\right). We calculate the weighted error rate of the weak classifier to be \epsilon_m = \frac{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}}{\sum_{i=1}^{N} w_i^{(m)}}, so it follows that \alpha_m = \frac{1}{2} \ln\left(\frac{1 - \epsilon_m}{\epsilon_m}\right), which is the negative logit function multiplied by 0.5. Due to the convexity of E as a function of \alpha_m, this new expression for \alpha_m gives the global minimum of the loss function. Note: this derivation only applies when k_m(x_i) \in \{-1, 1\}, though it can be a good starting guess in other cases, such as when the weak learner is biased (k_m(x) \in \{a, b\} with a \neq -b), has multiple leaves (k_m(x) \in \{a, b, \ldots\}), or is some other real-valued function (k_m(x) \in \mathbb{R}).
Thus we have derived the AdaBoost algorithm: at each iteration, choose the classifier k_m that minimizes the total weighted error \sum_{y_i \neq k_m(x_i)} w_i^{(m)}, use this to calculate the error rate \epsilon_m = \frac{\sum_{y_i \neq k_m(x_i)} w_i^{(m)}}{\sum_{i=1}^{N} w_i^{(m)}}, use this to calculate the weight \alpha_m = \frac{1}{2} \ln\left(\frac{1 - \epsilon_m}{\epsilon_m}\right), and finally use this to improve the boosted classifier C_{m-1} to C_m = C_{(m-1)} + \alpha_m k_m.
AdaBoost : Boosting is a form of linear regression in which the features of each sample x_i are the outputs of some weak learner h applied to x_i. Regression tries to fit F(x) to y(x) as precisely as possible without loss of generalization, typically using least squares error E(f) = (y(x) - f(x))^2, whereas the AdaBoost error function E(f) = e^{-y(x) f(x)} takes into account the fact that only the sign of the final result is used; thus |F(x)| can be far larger than 1 without increasing the error. However, the error for sample x_i increases exponentially as -y(x_i) f(x_i) increases, which results in excessive weight being assigned to outliers. One feature of the choice of the exponential error function is that the error of the final additive model is the product of the errors of each stage, that is, e^{\sum_i -y_i f(x_i)} = \prod_i e^{-y_i f(x_i)}. Thus it can be seen that the weight update in the AdaBoost algorithm is equivalent to recalculating the error on F_t(x) after each stage. There is a lot of flexibility allowed in the choice of loss function. As long as the loss function is monotonic and continuously differentiable, the classifier is always driven toward purer solutions. Zhang (2004) provides a loss function based on least squares, a modified Huber loss function: \phi(y, f(x)) = -4 y f(x) if y f(x) < -1; (y f(x) - 1)^2 if -1 \leq y f(x) \leq 1; and 0 if y f(x) > 1. This function is better behaved than LogitBoost for f(x) close to 1 or -1, does not penalise 'overconfident' predictions (y f(x) > 1), unlike unmodified least squares, and only penalises samples misclassified with confidence greater than 1 linearly, as opposed to quadratically or exponentially, and is thus less susceptible to the effects of outliers.
AdaBoost : Boosting can be seen as minimization of a convex loss function over a convex set of functions. Specifically, the loss being minimized by AdaBoost is the exponential loss \sum_i \phi(i, y, f) = \sum_i e^{-y_i f(x_i)}, whereas LogitBoost performs logistic regression, minimizing \sum_i \phi(i, y, f) = \sum_i \ln\left(1 + e^{-y_i f(x_i)}\right). In the gradient descent analogy, the output of the classifier for each training point is considered a point (F_t(x_1), \ldots, F_t(x_n)) in n-dimensional space, where each axis corresponds to a training sample, each weak learner h(x) corresponds to a vector of fixed orientation and length, and the goal is to reach the target point (y_1, \ldots, y_n) (or any region where the value of the loss function E_T(x_1, \ldots, x_n) is less than the value at that point) in the fewest steps. Thus AdaBoost algorithms perform either Cauchy (find h(x) with the steepest gradient, choose \alpha to minimize test error) or Newton (choose some target point, find \alpha h(x) that brings F_t closest to that point) optimization of training error.
AdaBoost : With: samples x_1, \ldots, x_n; desired outputs y_1, \ldots, y_n with y \in \{-1, 1\}; initial weights w_{1,1}, \ldots, w_{n,1} set to \frac{1}{n}; an error function E(f(x), y_i) = e^{-y_i f(x_i)}; and weak learners h \colon x \to \{-1, 1\}.
For t in 1, \ldots, T:
Choose h_t(x): find the weak learner h_t(x) that minimizes \epsilon_t, the weighted sum error for misclassified points, \epsilon_t = \sum_{i:\, h_t(x_i) \neq y_i} w_{i,t}.
Choose \alpha_t = \frac{1}{2} \ln\left(\frac{1 - \epsilon_t}{\epsilon_t}\right).
Add to the ensemble: F_t(x) = F_{t-1}(x) + \alpha_t h_t(x).
Update the weights: w_{i,t+1} = w_{i,t} \, e^{-y_i \alpha_t h_t(x_i)} for i in 1, \ldots, n, and renormalize so that \sum_i w_{i,t+1} = 1. (Note: it can be shown that after this update the total weight of the correctly classified examples equals the total weight of the misclassified ones, \sum_{h_t(x_i) = y_i} w_{i,t+1} = \sum_{h_t(x_i) \neq y_i} w_{i,t+1}, which can simplify the calculation of the new weights.)
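A straightforward NumPy implementation of this scheme with decision stumps as weak learners might look as follows; the exhaustive stump search and the small epsilon floor that guards against a zero error rate are implementation choices of this sketch.

```python
import numpy as np

def adaboost_train(X, y, T=50):
    """Discrete AdaBoost with decision stumps, following the scheme above.
    X: (n, d) feature matrix, y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        best, best_err = None, np.inf
        # exhaustively choose the stump h_t that minimises the weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for polarity in (1, -1):
                    pred = np.where(X[:, j] >= thr, polarity, -polarity)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, polarity)
        eps = max(best_err, 1e-12)                  # avoid division by zero / log of zero
        alpha = 0.5 * np.log((1 - eps) / eps)
        j, thr, polarity = best
        pred = np.where(X[:, j] >= thr, polarity, -polarity)
        w *= np.exp(-y * alpha * pred)              # up-weight misclassified samples
        w /= w.sum()                                # renormalise
        ensemble.append((j, thr, polarity, alpha))
    return ensemble

def adaboost_predict(ensemble, X):
    """Weighted vote of the stumps, thresholded by its sign."""
    agg = np.zeros(X.shape[0])
    for j, thr, polarity, alpha in ensemble:
        agg += alpha * np.where(X[:, j] >= thr, polarity, -polarity)
    return np.sign(agg)
```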
AdaBoost : Bootstrap aggregating CoBoosting BrownBoost Gradient boosting Multiplicative weight update method § AdaBoost algorithm
BrownBoost : BrownBoost is a boosting algorithm that may be robust to noisy datasets. BrownBoost is an adaptive version of the boost by majority algorithm. As is the case for all boosting algorithms, BrownBoost is used in conjunction with other machine learning methods. BrownBoost was introduced by Yoav Freund in 2001.
BrownBoost : AdaBoost performs well on a variety of datasets; however, it can be shown that AdaBoost does not perform well on noisy data sets. This is a result of AdaBoost's focus on examples that are repeatedly misclassified. In contrast, BrownBoost effectively "gives up" on examples that are repeatedly misclassified. The core assumption of BrownBoost is that noisy examples will be repeatedly mislabeled by the weak hypotheses and non-noisy examples will be correctly labeled frequently enough to not be "given up on." Thus only noisy examples will be "given up on," whereas non-noisy examples will contribute to the final classifier. In turn, if the final classifier is learned from the non-noisy examples, the generalization error of the final classifier may be much better than if learned from noisy and non-noisy examples. The user of the algorithm can set the amount of error to be tolerated in the training set. Thus, if the training set is noisy (say 10% of all examples are assumed to be mislabeled), the booster can be told to accept a 10% error rate. Since the noisy examples may be ignored, only the true examples will contribute to the learning process.
BrownBoost : BrownBoost uses a non-convex potential loss function, thus it does not fit into the AdaBoost framework. The non-convex optimization provides a method to avoid overfitting noisy data sets. However, in contrast to boosting algorithms that analytically minimize a convex loss function (e.g. AdaBoost and LogitBoost), BrownBoost solves a system of two equations in two unknowns using standard numerical methods. The only parameter of BrownBoost (c in the algorithm) is the "time" the algorithm runs. The theory of BrownBoost states that each hypothesis takes a variable amount of time (t in the algorithm) which is directly related to the weight given to the hypothesis \alpha. The time parameter in BrownBoost is analogous to the number of iterations T in AdaBoost. A larger value of c means that BrownBoost will treat the data as if it were less noisy and therefore will give up on fewer examples. Conversely, a smaller value of c means that BrownBoost will treat the data as more noisy and give up on more examples. During each iteration of the algorithm, a hypothesis is selected with some advantage over random guessing. The weight of this hypothesis \alpha and the "amount of time passed" t during the iteration are simultaneously solved in a system of two non-linear equations (1. the hypothesis is uncorrelated with the example weights and 2. the potential is held constant) with two unknowns (the weight of the hypothesis \alpha and the time passed t). This can be solved by bisection (as implemented in the JBoost software package) or Newton's method (as described in the original paper by Freund). Once these equations are solved, the margins of each example (r_i(x_j) in the algorithm) and the amount of time remaining s are updated appropriately. This process is repeated until there is no time remaining. The initial potential is defined to be \frac{1}{m} \sum_{j=1}^{m} \left(1 - \mathrm{erf}(\sqrt{c})\right) = 1 - \mathrm{erf}(\sqrt{c}). Since a constraint of each iteration is that the potential be held constant, the final potential is \frac{1}{m} \sum_{j=1}^{m} \left(1 - \mathrm{erf}\left(r_i(x_j)/\sqrt{c}\right)\right) = 1 - \mathrm{erf}(\sqrt{c}). Thus the final error is likely to be near 1 - \mathrm{erf}(\sqrt{c}). However, the final potential function is not the 0-1 loss error function. For the final error to be exactly 1 - \mathrm{erf}(\sqrt{c}), the variance of the loss function must decrease linearly w.r.t. time to form the 0-1 loss function at the end of the boosting iterations. This is not yet discussed in the literature and is not in the definition of the algorithm below. The final classifier is a linear combination of weak hypotheses and is evaluated in the same manner as most other boosting algorithms.
BrownBoost : Input: m training examples (x_1, y_1), \ldots, (x_m, y_m) where x_j \in X, y_j \in Y = \{-1, +1\}, and the parameter c.
Initialise: s = c (the value of s is the amount of time remaining in the game) and r_i(x_j) = 0 for all j, where r_i(x_j) is the margin at iteration i for example x_j.
While s > 0:
Set the weight of each example: W_i(x_j) = e^{-(r_i(x_j) + s)^2 / c}, where r_i(x_j) is the margin of example x_j.
Find a classifier h_i : X \to \{-1, +1\} such that \sum_j W_i(x_j) h_i(x_j) y_j > 0.
Find values \alpha, t that satisfy the equation \sum_j h_i(x_j) y_j \, e^{-(r_i(x_j) + \alpha h_i(x_j) y_j + s - t)^2 / c} = 0. (Note this is similar to the condition E_{W_{i+1}}[h_i(x_j) y_j] = 0 set forth by Schapire and Singer: in this setting we are numerically finding the W_{i+1} = \exp\left(\frac{\cdots}{\cdots}\right) such that E_{W_{i+1}}[h_i(x_j) y_j] = 0.) This update is subject to the constraint \sum_j \left(\Phi\left(r_i(x_j) + \alpha h(x_j) y_j + s - t\right) - \Phi\left(r_i(x_j) + s\right)\right) = 0, where \Phi(z) = 1 - \mathrm{erf}(z/\sqrt{c}) is the potential loss for a point with margin r_i(x_j).
Update the margin of each example: r_{i+1}(x_j) = r_i(x_j) + \alpha h(x_j) y_j.
Update the time remaining: s = s - t.
Output: H(x) = \mathrm{sign}\left(\sum_i \alpha_i h_i(x)\right).
BrownBoost : In preliminary experimental results with noisy datasets, BrownBoost outperformed AdaBoost's generalization error; however, LogitBoost performed as well as BrownBoost. An implementation of BrownBoost can be found in the open source software JBoost.
BrownBoost : Boosting AdaBoost Alternating decision trees
Cascading classifiers : Cascading is a particular case of ensemble learning based on the concatenation of several classifiers, using all information collected from the output of a given classifier as additional information for the next classifier in the cascade. Unlike voting or stacking ensembles, which are multiexpert systems, cascading is a multistage one. Cascading classifiers are trained with several hundred "positive" sample views of a particular object and arbitrary "negative" images of the same size. After the classifier is trained, it can be applied to a region of an image to detect the object in question. To search for the object in the entire frame, the search window can be moved across the image, checking every location with the classifier. This process is most commonly used in image processing for object detection and tracking, primarily face detection and recognition. The first cascading classifier was the face detector of Viola and Jones (2001). The requirement for this classifier was to be fast in order to run on low-power CPUs, such as those in cameras and phones.
Cascading classifiers : The term is also used in statistics to describe a model that is staged. For example, a classifier (for example k-means) takes a vector of features (decision variables) and outputs, for each possible classification result, the probability that the vector belongs to the class. This is usually used to take a decision (classify into the class with the highest probability), but cascading classifiers use this output as the input to another model (another stage). This is particularly useful for models that have highly combinatorial or counting rules (for example, class1 if exactly two features are negative, class2 otherwise), which cannot be fitted without looking at all the interaction terms. Having cascading classifiers enables the successive stage to gradually approximate the combinatorial nature of the classification, or to add interaction terms in classification algorithms that cannot express them in one stage. As a simple example, if we try to match the rule (class1 if exactly 2 features out of 3 are negative, class2 otherwise), a decision tree would be:
feature 1 negative
  feature 2 negative
    feature 3 negative -> class2
    feature 3 positive -> class1
  feature 2 positive
    feature 3 negative -> class1
    feature 3 positive -> class2
feature 1 positive
  feature 2 negative
    feature 3 negative -> class1
    feature 3 positive -> class2
  feature 2 positive
    feature 3 negative -> class2
    feature 3 positive -> class2
The tree has all the combinations of possible leaves to express the full ruleset, whereas (feature1 positive, feature2 negative) and (feature1 negative, feature2 positive) should actually join to the same rule. This leads to a tree with too few samples on the leaves. A two-stage algorithm can effectively merge these two cases by giving a medium-high probability to class1 if feature1 or (exclusive) feature2 is negative. The second classifier can pick up this higher probability and make a decision on the sign of feature3. In a bias-variance decomposition, cascaded models are usually seen as lowering bias while raising variance.
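A minimal two-stage cascade in this spirit can be sketched with scikit-learn as follows: the class probabilities of the first model are appended to the feature vector and passed to a second model. The dataset and the choice of models are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# Stage 1: a simple model whose per-class probabilities become extra features
stage1 = LogisticRegression(max_iter=1000).fit(X, y)
stage1_proba = stage1.predict_proba(X)

# Stage 2: trained on the original features plus the stage-1 output
X_stage2 = np.hstack([X, stage1_proba])
stage2 = DecisionTreeClassifier(max_depth=3).fit(X_stage2, y)

def cascade_predict(X_new):
    """Run the two stages in sequence on new data."""
    p = stage1.predict_proba(X_new)
    return stage2.predict(np.hstack([X_new, p]))
```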
Cascading classifiers : Boosting (meta-algorithm) Bootstrap aggregating
Gaussian process emulator : In statistics, Gaussian process emulator is one name for a general type of statistical model that has been used in contexts where the problem is to make maximum use of the outputs of a complicated (often non-random) computer-based simulation model. Each run of the simulation model is computationally expensive and each run is based on many different controlling inputs. The outputs of the simulation model are expected to vary reasonably smoothly with the inputs, but in an unknown way. The overall analysis involves two models: the simulation model, or "simulator", and the statistical model, or "emulator", which notionally emulates the unknown outputs from the simulator. The Gaussian process emulator model treats the problem from the viewpoint of Bayesian statistics. In this approach, even though the output of the simulation model is fixed for any given set of inputs, the actual outputs are unknown unless the computer model is run and hence can be made the subject of a Bayesian analysis. The main element of the Gaussian process emulator model is that it models the outputs as a Gaussian process on a space that is defined by the model inputs. The model includes a description of the correlation or covariance of the outputs, which encodes the idea that differences in the output will be small if there are only small differences in the inputs.
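A minimal sketch of the emulator idea using scikit-learn's Gaussian process regressor is shown below; the toy "simulator", the kernel, and the design of 30 input points are illustrative stand-ins for a genuinely expensive computer model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    """Placeholder for an expensive computer model with two controlling inputs."""
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

rng = np.random.default_rng(0)
X_design = rng.uniform(-1, 1, size=(30, 2))            # 30 simulator runs at chosen inputs
y_design = np.array([simulator(x) for x in X_design])

# Fit the emulator: a GP whose covariance encodes "similar inputs give similar outputs"
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.5, 0.5])
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_design, y_design)

# Predict at untried inputs, with uncertainty, without running the simulator again
X_new = rng.uniform(-1, 1, size=(5, 2))
mean, std = emulator.predict(X_new, return_std=True)
```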
Gaussian process emulator : Kriging Computer experiment