Anti-unification : Calculus of constructions: Pfenning, Frank (Jul 1991). "Unification and Anti-Unification in the Calculus of Constructions" (PDF). Proc. 6th LICS. pp. 74–85. Simply typed lambda calculus (input: terms in eta-long beta-normal form; output: higher-order patterns): Baumgartner, Alexander; Kutsia, Temur; Levy, Jordi; Villaret, Mateu (Jun 2013). "A Variant of Higher-Order Anti-Unification". Proc. RTA 2013. LIPIcs, vol. 21. Schloss Dagstuhl. pp. 113–127. Simply typed lambda calculus (input: terms in eta-long beta-normal form; output: various fragments of the simply typed lambda calculus, including patterns): Cerna, David; Kutsia, Temur (Jun 2019). "A Generic Framework for Higher-Order Generalizations" (PDF). Proc. 4th International Conference on Formal Structures for Computation and Deduction (FSCD), June 24–30, 2019, Dortmund, Germany. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. Restricted higher-order substitutions: Wagner, Ulrich (Apr 2002). Combinatorically Restricted Higher Order Anti-Unification. TU Berlin. Schmidt, Martin (Sep 2010). Restricted Higher-Order Anti-Unification for Heuristic-Driven Theory Projection (PDF). PICS-Report, vol. 31-2010. Univ. Osnabrück, Germany. ISSN 1610-5389.
First-order inductive learner : In machine learning, first-order inductive learner (FOIL) is a rule-based learning algorithm.
First-order inductive learner : Developed in 1990 by Ross Quinlan, FOIL learns function-free Horn clauses, a subset of first-order predicate calculus. Given positive and negative examples of some concept and a set of background-knowledge predicates, FOIL inductively generates a logical concept definition or rule for the concept. The induced rule must not involve any constants (color(X,red) becomes color(X,Y), red(Y)) or function symbols, but may allow negated predicates; recursive concepts are also learnable. Like the ID3 algorithm, FOIL hill climbs using a metric based on information theory to construct a rule that covers the data. Unlike ID3, however, FOIL uses a separate-and-conquer method rather than divide-and-conquer, focusing on creating one rule at a time and collecting uncovered examples for the next iteration of the algorithm.
First-order inductive learner : The FOIL algorithm is as follows:

Input: List of examples and predicate to be learned
Output: A set of first-order Horn clauses

FOIL(Pred, Pos, Neg)
    Let Pos be the positive examples
    Let Pred be the predicate to be learned
    Until Pos is empty do:
        Let Neg be the negative examples
        Set Body to empty
        Call LearnClauseBody
        Add Pred ← Body to the rule
        Remove from Pos all examples which satisfy Body

    Procedure LearnClauseBody
        Until Neg is empty do:
            Choose a literal L
            Conjoin L to Body
            Remove from Neg examples that do not satisfy L
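First-order inductive learner : In Python, this separate-and-conquer loop might be sketched as below. The example representation, the candidate generator gen_literals, and the scoring hook foil_gain are simplified placeholders assumed for illustration (real FOIL also extends example tuples when a literal introduces new variables); this is a sketch, not Quinlan's implementation.

# Simplified FOIL skeleton. Literals are callables over examples;
# gen_literals and foil_gain are placeholder hooks, not Quinlan's originals.
def foil(pos, neg, gen_literals, foil_gain):
    """Learn a list of clause bodies jointly covering all positive examples."""
    rules = []
    pos = set(pos)
    while pos:                              # separate-and-conquer: one clause per pass
        body = []
        covered_pos, covered_neg = set(pos), set(neg)
        while covered_neg:                  # specialise until no negative is covered
            # greedily pick the candidate literal with the best information gain
            lit = max(gen_literals(body),
                      key=lambda l: foil_gain(l, covered_pos, covered_neg))
            body.append(lit)
            covered_pos = {e for e in covered_pos if lit(e)}
            covered_neg = {e for e in covered_neg if lit(e)}
        rules.append(body)
        pos -= covered_pos                  # keep only examples not yet covered
    return rules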
First-order inductive learner : Suppose FOIL's task is to learn the concept grandfather(X,Y) given the relations father(X,Y) and parent(X,Y). Furthermore, suppose our current Body consists of grandfather(X,Y) ← parent(X,Z). This can be extended by conjoining Body with any of the literals father(X,X), father(Y,Z), parent(U,Y), or many others – to create this literal, the algorithm must choose both a predicate name and a set of variables for the predicate (at least one of which is required to be present already in an unnegated literal of the clause). If FOIL extends a clause grandfather(X,Y) ← true by conjoining the literal parent(X,Z), it is introducing the new variable Z. Positive examples now consist of those values <X,Y,Z> such that grandfather(X,Y) is true and parent(X,Z) is true; negative examples are those where grandfather(X,Y) is true but parent(X,Z) is false. On the next iteration of FOIL after parent(X,Z) has been added, the algorithm will consider all combinations of predicate names and variables such that at least one variable in the new literal is present in the existing clause. This results in a very large search space. Several extensions of the FOIL theory have shown that additions to the basic algorithm may reduce this search space, sometimes drastically.
First-order inductive learner : The FOCL algorithm (First Order Combined Learner) extends FOIL in a variety of ways, which affect how FOCL selects literals to test while extending a clause under construction. Constraints on the search space are allowed, as are predicates that are defined on a rule rather than on a set of examples (called intensional predicates); most importantly a potentially incorrect hypothesis is allowed as an initial approximation to the predicate to be learned. The main goal of FOCL is to incorporate the methods of explanation-based learning (EBL) into the empirical methods of FOIL. Even when no additional knowledge is provided to FOCL over FOIL, however, it utilizes an iterative widening search strategy similar to depth-first search: first FOCL attempts to learn a clause by introducing no free variables. If this fails (no positive gain), one additional free variable per failure is allowed until the number of free variables exceeds the maximum used for any predicate.
First-order inductive learner : http://www.csc.liv.ac.uk/~frans/KDD/Software/FOIL_PRM_CPAR/foil.html
Progol : Progol is an implementation of inductive logic programming that combines inverse entailment with general-to-specific search through a refinement graph.
Progol : Inverse entailment is used with mode declarations to derive the bottom clause, the most-specific clause within the mode language which subsumes a given example. This clause is used to guide a refinement-graph search. Unlike the searches of Ehud Shapiro's model inference system (MIS) and J. Ross Quinlan's FOIL, Progol's search has a provable guarantee of returning a solution having the maximum compression in the search space. To do so it performs an admissible A*-like search, guided by compression, over clauses which subsume the most specific clause. Progol deals with noisy data by using a compression measure to trade off the description of errors against the hypothesis description length. Progol allows arbitrary Prolog programs as background knowledge and arbitrary definite clauses as examples.
Progol : Progol was introduced by Stephen Muggleton in 1995. In 1996, it was used by Ashwin Srinivasan, Muggleton, Michael Sternberg and Ross King to predict the mutagenic activity of nitroaromatic compounds. This was considered a landmark application of inductive logic programming, as a general-purpose inductive learner had discovered results that were both novel and meaningful to domain experts. Progol proved very influential in the field, and the widely used inductive logic programming system Aleph builds directly on Progol.
Theta-subsumption : Theta-subsumption (θ-subsumption, or just subsumption) is a decidable relation between two first-order clauses that guarantees that one clause logically entails the other. It was first introduced by John Alan Robinson in 1965 and has become a fundamental notion in inductive logic programming. Deciding whether a given clause θ-subsumes another is an NP-complete problem.
Theta-subsumption : A clause, that is, a disjunction of first-order literals, can be considered as the set of all its disjuncts. With this convention, a clause c1 θ-subsumes a clause c2 if there is a substitution θ such that the clause obtained by applying θ to c1 is a subset of c2.
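Theta-subsumption : This definition yields a direct brute-force decision procedure: search over substitutions built by matching literals of the first clause against literals of the second. The clause encoding below (literals as tuples of a predicate name and arguments, with variables as capitalized strings) is an assumption made for the sketch, not a standard interface; the exponential search reflects the NP-completeness noted above.

# Brute-force theta-subsumption check. Clauses are sequences of literals; a
# literal is a (predicate, arg, ...) tuple; variables start with an uppercase letter.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(lit1, lit2, theta):
    """Extend substitution theta so that lit1*theta == lit2, or return None."""
    if lit1[0] != lit2[0] or len(lit1) != len(lit2):
        return None
    theta = dict(theta)
    for a, b in zip(lit1[1:], lit2[1:]):
        if is_var(a):
            if theta.get(a, b) != b:
                return None          # variable already bound to a different term
            theta[a] = b
        elif a != b:
            return None              # constant mismatch
    return theta

def subsumes(c1, c2, theta=None):
    """True iff some substitution theta makes c1*theta a subset of c2."""
    theta = theta if theta is not None else {}
    if not c1:
        return True
    first, rest = c1[0], c1[1:]
    for lit2 in c2:
        extended = match(first, lit2, theta)
        if extended is not None and subsumes(rest, c2, extended):
            return True
    return False

# Example: p(X, Y) theta-subsumes p(a, b) v q(a) via {X: a, Y: b}.
assert subsumes([("p", "X", "Y")], [("p", "a", "b"), ("q", "a")])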
Theta-subsumption : θ-subsumption is a weaker relation than logical entailment: whenever a clause c1 θ-subsumes a clause c2, then c1 logically entails c2. The converse is not true, however: a clause can logically entail another clause without θ-subsuming it. θ-subsumption is decidable; more precisely, the problem of whether one clause θ-subsumes another is NP-complete in the length of the clauses. This remains true when the setting is restricted to pairs of Horn clauses. As a binary relation among Horn clauses, θ-subsumption is reflexive and transitive; it therefore defines a preorder. It is not antisymmetric, since different clauses can be syntactic variants of each other. However, in every equivalence class of clauses that mutually θ-subsume each other, there is a unique shortest clause up to variable renaming, which can be effectively computed. The set of equivalence classes under this relation forms a complete lattice, which has both infinite ascending and infinite descending chains. A subset of this lattice is known as a refinement graph.
Theta-subsumption : θ-subsumption was first introduced by J. Alan Robinson in 1965 in the context of resolution, and was first applied to inductive logic programming by Gordon Plotkin in 1970 for finding and reducing least general generalisations of sets of clauses. In 1977, Lewis D. Baxter proved that deciding θ-subsumption is NP-complete, and the seminal 1979 work on NP-complete problems, Computers and Intractability, includes it in its list of NP-complete problems.
Theta-subsumption : Theorem provers based on the resolution or superposition calculus use θ-subsumption to prune redundant clauses. In addition, θ-subsumption is the most prominent notion of entailment used in inductive logic programming, where it is the fundamental tool to determine whether one clause is a specialisation or a generalisation of another. It is further used to test whether a clause covers an example, and to determine whether a given pair of clauses is redundant.
Theta-subsumption : Baxter, Lewis Denver (September 1977). The complexity of unification (PDF) (Thesis). University of Waterloo. De Raedt, Luc (2008), Logical and Relational Learning, Cognitive Technologies, Berlin, Heidelberg: Springer, Bibcode:2008lrl..book.....D, doi:10.1007/978-3-540-68856-3, ISBN 978-3-540-20040-6 Kietz, Jörg-Uwe; Lübbe, Marcus (1994), "An Efficient Subsumption Algorithm for Inductive Logic Programming", Machine Learning Proceedings 1994, Elsevier, pp. 130–138, doi:10.1016/b978-1-55860-335-6.50024-6, ISBN 9781558603356, retrieved 2023-11-26 Plotkin, Gordon D. (1970). Automatic Methods of Inductive Inference (PDF) (PhD). University of Edinburgh. hdl:1842/6656. Robinson, J. A. (1965). "A Machine-Oriented Logic Based on the Resolution Principle". Journal of the ACM. 12 (1): 23–41. doi:10.1145/321250.321253. S2CID 14389185. Waldmann, Uwe; Tourret, Sophie; Robillard, Simon; Blanchette, Jasmin (November 2022). "A Comprehensive Framework for Saturation Theorem Proving". Journal of Automated Reasoning. 66 (4): 499–539. doi:10.1007/s10817-022-09621-7. ISSN 0168-7433. PMC 9637109. PMID 36353684.
Genetic programming : Genetic programming (GP) is an evolutionary algorithm, an artificial intelligence technique mimicking natural evolution, which operates on a population of programs. It applies the genetic operators of selection (according to a predefined fitness measure), mutation, and crossover. The crossover operation involves swapping specified parts of selected pairs (parents) to produce new and different offspring that become part of the new generation of programs. Some programs not selected for reproduction are copied from the current generation to the new generation. Mutation involves substituting some random part of a program with some other random part of a program. The selection and other operations are then recursively applied to the new generation of programs. Typically, members of each new generation are on average more fit than the members of the previous generation, and the best-of-generation program is often better than the best-of-generation programs from previous generations. Termination of the evolution usually occurs when some individual program reaches a predefined proficiency or fitness level. It may and often does happen that a particular run of the algorithm results in premature convergence to some local maximum which is not a globally optimal or even good solution. Multiple runs (dozens to hundreds) are usually necessary to produce a very good result. A large starting population size and high variability of the individuals may also be necessary to avoid pathologies.
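Genetic programming : The generational loop just described can be made concrete with a deliberately small sketch in Python. The expression encoding (nested tuples), the primitive set, the target function x^2 + x, and all parameter values are illustrative assumptions for a minimal sketch, not a canonical GP implementation.

import random, operator

# Minimal tree-based GP sketch: evolve arithmetic expression trees to fit x^2 + x.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def rand_tree(depth=3):
    # grow a random expression tree; leaves are the variable "x" or small constants
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # lower is better: squared error against the target x^2 + x
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def random_subtree(tree):
    while isinstance(tree, tuple) and random.random() < 0.5:
        tree = random.choice(tree[1:])
    return tree

def mutate(tree):
    # replace a randomly chosen subtree with a fresh random subtree
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return rand_tree(2)
    op, left, right = tree
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

def crossover(a, b):
    # graft a random subtree of b into a randomly chosen position of a
    if not isinstance(a, tuple) or random.random() < 0.3:
        return random_subtree(b)
    op, left, right = a
    return (op, crossover(left, b), right) if random.random() < 0.5 else (op, left, crossover(right, b))

pop = [rand_tree() for _ in range(200)]
for generation in range(30):
    pop.sort(key=fitness)              # rank-based selection
    parents = pop[:50]                 # survivors are copied unchanged
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(150)]
best = min(pop, key=fitness)
print(best, fitness(best))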
Genetic programming : The first record of a proposal to evolve programs is probably that of Alan Turing in 1950. There was a gap of 25 years before the publication of John Holland's 'Adaptation in Natural and Artificial Systems' laid out the theoretical and empirical foundations of the science. In 1981, Richard Forsyth demonstrated the successful evolution of small programs, represented as trees, to perform classification of crime scene evidence for the UK Home Office. Although the idea of evolving programs, initially in the computer language Lisp, was current amongst John Holland's students, it was not until they organised the first Genetic Algorithms (GA) conference in Pittsburgh that Nichael Cramer published evolved programs in two specially designed languages, which included the first statement of modern "tree-based" genetic programming (that is, procedural languages organized in tree-based structures and operated on by suitably defined GA operators). In 1988, John Koza (also a PhD student of John Holland) patented his invention of a GA for program evolution, followed by publication at the International Joint Conference on Artificial Intelligence (IJCAI-89). Koza followed this with 205 publications on "Genetic Programming" (GP), a name coined by David Goldberg, also a PhD student of John Holland. However, it is the series of four books by Koza, starting in 1992 with accompanying videos, that really established GP. Subsequently, there was an enormous expansion of the number of publications, with the Genetic Programming Bibliography surpassing 10,000 entries. In 2010, Koza listed 77 results where genetic programming was human-competitive. In 1996, Koza started the annual Genetic Programming conference, which was followed in 1998 by the annual EuroGP conference and the first book in a GP series edited by Koza. 1998 also saw the first GP textbook. GP continued to flourish, leading to the first specialist GP journal, and three years later (2003) the annual Genetic Programming Theory and Practice (GPTP) workshop was established by Rick Riolo. Genetic programming papers continue to be published at a diversity of conferences and in associated journals. Today there are nineteen GP books, including several for students.
Genetic programming : GP has been successfully used as an automatic programming tool, a machine learning tool and an automatic problem-solving engine. GP is especially useful in domains where the exact form of the solution is not known in advance or an approximate solution is acceptable (possibly because finding the exact solution is very difficult). Some of the applications of GP are curve fitting, data modeling, symbolic regression, feature selection, and classification. John R. Koza mentions 76 instances where genetic programming has been able to produce results that are competitive with human-produced results (called human-competitive results). Since 2004, the annual Genetic and Evolutionary Computation Conference (GECCO) has held the Human Competitive Awards (called the Humies) competition, where cash awards are presented to human-competitive results produced by any form of genetic and evolutionary computation. GP has won many awards in this competition over the years.
Genetic programming : Meta-genetic programming is the proposed meta-learning technique of evolving a genetic programming system using genetic programming itself. It suggests that chromosomes, crossover, and mutation were themselves evolved; therefore, like their real-life counterparts, they should be allowed to change on their own rather than being determined by a human programmer. Meta-GP was formally proposed by Jürgen Schmidhuber in 1987. Doug Lenat's Eurisko is an earlier effort that may be the same technique. It is a recursive but terminating algorithm, allowing it to avoid infinite recursion. In the "autoconstructive evolution" approach to meta-genetic programming, the methods for the production and variation of offspring are encoded within the evolving programs themselves, and programs are executed to produce new programs to be added to the population. Critics of this idea often say this approach is overly broad in scope. However, it might be possible to constrain the fitness criterion onto a general class of results, and so obtain an evolved GP that would more efficiently produce results for sub-classes. This might take the form of a meta-evolved GP for producing human walking algorithms which is then used to evolve human running, jumping, etc. The fitness criterion applied to the meta-GP would simply be one of efficiency.
Genetic programming : Bio-inspired computing Covariance Matrix Adaptation Evolution Strategy (CMA-ES) Evolutionary image processing Fitness approximation Genetic improvement Genetic representation Grammatical evolution Inductive programming Linear genetic programming Multi expression programming Propagation of schema
Genetic programming : Aymen S Saket & Mark C Sinclair Genetic Programming and Evolvable Machines, a journal Evo2 for genetic programming GP bibliography The Hitch-Hiker's Guide to Evolutionary Computation Riccardo Poli, William B. Langdon, Nicholas F. McPhee, John R. Koza, "A Field Guide to Genetic Programming" (2008) Genetic Programming, a community maintained resource
Cartesian genetic programming : Cartesian genetic programming is a form of genetic programming that uses a graph representation to encode computer programs. It grew from a method of evolving digital circuits developed by Julian F. Miller and Peter Thomson in 1997. The term 'Cartesian genetic programming' first appeared in 1999 and was proposed as a general form of genetic programming in 2000. It is called 'Cartesian' because it represents a program using a two-dimensional grid of nodes. Miller's keynote explains how CGP works. He edited a book entitled Cartesian Genetic Programming, published in 2011 by Springer. The open-source project dCGP, developed at the European Space Agency by Dario Izzo, Francesco Biscani and Alessio Mereta, implements a differentiable version of CGP able to tackle symbolic regression tasks, find solutions to differential equations, find prime integrals of dynamical systems, represent variable-topology artificial neural networks, and more.
Cartesian genetic programming : Genetic programming Gene expression programming Grammatical evolution Linear genetic programming Multi expression programming
Grammatical evolution : Grammatical evolution (GE) is a genetic programming (GP) technique (or approach) from evolutionary computation pioneered by Conor Ryan, JJ Collins and Michael O'Neill in 1998 at the BDS Group in the University of Limerick. As in any other GP approach, the objective is to find an executable program, program fragment, or function, which will achieve a good fitness value for a given objective function. In most published work on GP, a LISP-style tree-structured expression is directly manipulated, whereas GE applies genetic operators to an integer string, subsequently mapped to a program (or similar) through the use of a grammar, which is typically expressed in Backus–Naur form. One of the benefits of GE is that this mapping simplifies the application of search to different programming languages and other structures.
Grammatical evolution : In type-free, conventional Koza-style GP, the function set must meet the requirement of closure: all functions must be capable of accepting as their arguments the output of all other functions in the function set. Usually, this is implemented by dealing with a single data-type such as double-precision floating point. While modern Genetic Programming frameworks support typing, such type-systems have limitations that Grammatical Evolution does not suffer from.
Grammatical evolution : GE offers a solution to the single-type limitation by evolving solutions according to a user-specified grammar (usually a grammar in Backus-Naur form). Therefore the search space can be restricted, and domain knowledge of the problem can be incorporated. The inspiration for this approach comes from a desire to separate the "genotype" from the "phenotype": in GP, the objects the search algorithm operates on and what the fitness evaluation function interprets are one and the same. In contrast, GE's "genotypes" are ordered lists of integers which code for selecting rules from the provided context-free grammar. The phenotype, however, is the same as in Koza-style GP: a tree-like structure that is evaluated recursively. This model is more in line with how genetics work in nature, where there is a separation between an organism's genotype and the final expression of phenotype in proteins, etc. Separating genotype and phenotype allows a modular approach. In particular, the search portion of the GE paradigm needn't be carried out by any one particular algorithm or method. Observe that the objects GE performs search on are the same as those used in genetic algorithms. This means, in principle, that any existing genetic algorithm package, such as the popular GAlib, can be used to carry out the search, and a developer implementing a GE system need only worry about carrying out the mapping from list of integers to program tree. It is also in principle possible to perform the search using some other method, such as particle swarm optimization (see the remark below); the modular nature of GE creates many opportunities for hybrids as the problem of interest to be solved dictates. Brabazon and O'Neill have successfully applied GE to predicting corporate bankruptcy, forecasting stock indices, bond credit ratings, and other financial applications. GE has also been used with a classic predator-prey model to explore the impact of parameters such as predator efficiency, niche number, and random mutations on ecological stability. It is possible to structure a GE grammar that for a given function/terminal set is equivalent to genetic programming.
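Grammatical evolution : The genotype-to-phenotype mapping can be sketched compactly: each integer codon selects, modulo the number of applicable productions, a rule for expanding the leftmost nonterminal, wrapping around the codon list when it is exhausted. The toy grammar and the wrapping limit below are illustrative assumptions, not part of any particular GE package.

# Minimal GE mapping sketch: integer codons pick productions (codon mod n)
# for the leftmost nonterminal; the codon list wraps, capped at max_wraps passes.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def ge_map(codons, start="<expr>", max_wraps=10):
    seq, i = [start], 0
    for _ in range(max_wraps * max(len(codons), 1)):
        nts = [j for j, s in enumerate(seq) if s in GRAMMAR]
        if not nts:
            return "".join(seq)              # fully expanded program
        j = nts[0]                           # leftmost nonterminal
        rules = GRAMMAR[seq[j]]
        choice = rules[codons[i % len(codons)] % len(rules)]
        seq[j:j + 1] = choice
        i += 1                               # consume one codon (wrapping around)
    return None                              # mapping failed: individual is invalid

print(ge_map([0, 1, 0, 2]))  # -> "x+1" for this codon sequence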
Grammatical evolution : Despite its successes, GE has been the subject of some criticism. One issue is that, as a result of its mapping operation, GE's genetic operators do not achieve high locality, a property of genetic operators that is highly regarded in evolutionary algorithms.
Grammatical evolution : Although GE was originally described in terms of using an evolutionary algorithm, specifically a genetic algorithm, other variants exist. For example, GE researchers have experimented with using particle swarm optimization to carry out the search instead of genetic algorithms, with results comparable to those of normal GE; this is referred to as a "grammatical swarm". Using only the basic PSO model, it has been found that PSO is probably equally capable of carrying out the search process in GE as simple genetic algorithms are. (Although PSO is normally a floating-point search paradigm, it can be discretized, e.g., by simply rounding each vector to the nearest integer, for use with GE.) Yet another variation that has been experimented with in the literature is attempting to encode semantic information in the grammar in order to further bias the search process. Other work showed that, with biased grammars that leverage domain knowledge, even random search can be used to drive GE.
Grammatical evolution : GE was originally a combination of the linear representation as used by the Genetic Algorithm for Developing Software (GADS) and Backus Naur Form grammars, which were originally used in tree-based GP by Wong and Leung in 1995 and Whigham in 1996. Other related work noted in the original GE paper was that of Frederic Gruau, who used a conceptually similar "embryonic" approach, as well as that of Keller and Banzhaf, which similarly used linear genomes.
Grammatical evolution : There are several implementations of GE.
Grammatical evolution : Genetic programming Java Grammatical Evolution Cartesian genetic programming Gene expression programming Linear genetic programming Multi expression programming
Linear genetic programming : "Linear genetic programming" is unrelated to "linear programming". Linear genetic programming (LGP) is a particular method of genetic programming wherein computer programs in a population are represented as a sequence of register-based instructions from an imperative programming language or machine language. The adjective "linear" stems from the fact that each LGP program is a sequence of instructions and the sequence of instructions is normally executed sequentially. Like in other programs, the data flow in LGP can be modeled as a graph that will visualize the potential multiple usage of register contents and the existence of structurally noneffective code (introns) which are two main differences of this genetic representation from the more common tree-based genetic programming (TGP) variant. Like other Genetic Programming methods, Linear genetic programming requires the input of data to run the program population on. Then, the output of the program (its behaviour) is judged against some target behaviour, using a fitness function. However, LGP is generally more efficient than tree genetic programming due to its two main differences mentioned above: Intermediate results (stored in registers) can be reused and a simple intron removal algorithm exists that can be executed to remove all non-effective code prior to programs being run on the intended data. These two differences often result in compact solutions and substantial computational savings compared to the highly constrained data flow in trees and the common method of executing all tree nodes in TGP. Furthermore, LGP naturally has multiple outputs by defining multiple output registers and easily cooperates with control flow operations. Linear genetic programming has been applied in many domains, including system modeling and system control with considerable success. Linear genetic programming should not be confused with linear tree programs in tree genetic programming, program composed of a variable number of unary functions and a single terminal. Note that linear tree GP differs from bit string genetic algorithms since a population may contain programs of different lengths and there may be more than two types of functions or more than two types of terminals.
Linear genetic programming : Because LGP programs are basically represented by a linear sequence of instructions, they are simpler to read and to operate on than their tree-based counterparts. For example, a simple program written to solve a Boolean function problem with 3 inputs (in R1, R2, R3) and one output (in R0) could read as in the sketch below. R1, R2, R3 have to be declared as input (read-only) registers, while R0 and R4 are declared as calculation (read-write) registers. Such a program is very simple, having just 5 instructions, but mutation and crossover operators can change both the length of the program and the content of each of its instructions. Note that one instruction is non-effective, an intron (marked below), since it does not impact the output register R0. Recognition of such instructions is the basis for the intron removal algorithm, which is used to analyze code prior to execution. Technically, this happens by copying an individual and then running the intron removal once on the copy. The copy with removed introns is then executed as many times as dictated by the number of training cases, while the original individual is left intact, so as to continue participating in the evolutionary process; it is only the executed copy that is compressed by removing these "structural" introns. Another style of simple program, written in the LGP language Slash/A, looks like a series of instructions separated by a slash; by representing such code in bytecode format, i.e. as an array of bytes each representing a different instruction, one can implement mutation simply by changing an element of such an array.
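Linear genetic programming : The original example listing was lost in this text; the following is an illustrative reconstruction consistent with the description above (3 Boolean inputs in R1-R3, output in R0, calculation register R4, 5 instructions, exactly one structural intron), not the original program from the source.

R4 = R1 AND R2
R4 = R4 OR R3
R0 = R4 AND R1
R4 = R3 XOR R1   (intron: R4 is never read afterwards, so R0 is unaffected)
R0 = R0 OR R2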
Linear genetic programming : Multi expression programming Cartesian genetic programming Grammatical evolution Genetic programming
Linear genetic programming : Slash/A A programming language and C++ library specifically designed for linear GP DigitalBiology.NET Vertical search engine for GA/GP resources Discipulus Genetic-Programming Software MicroGP Genetic-Programming Software (open source) An open-source Linear GP project based on a Java-based Evolutionary Computation Research System (ECJ)
Joseph Nechvatal : Joseph Nechvatal (born January 15, 1951) is an American post-conceptual digital artist and art theoretician who creates computer-assisted paintings and computer animations, often using custom computer viruses.
Joseph Nechvatal : Joseph Nechvatal was born in Chicago. He studied fine art and philosophy at Southern Illinois University Carbondale, Cornell University and Columbia University. He earned a Doctor of Philosophy in the philosophy of art and technology at the Planetary Collegium at University of Wales, Newport and has taught art theory and art history at the School of Visual Arts. He has had many solo exhibitions and is one of five artists that art historian Patrick Frank examines in his 2024 book Art of the 1980s: As If the Digital Mattered. His work in the late 1970s and early 1980s chiefly consisted of postminimal gray palimpsest-like drawings that were often photo-mechanically enlarged. Beginning in 1979 he became associated with the artist group Colab, organized the Public Arts International/Free Speech series, and helped establish the non-profit group ABC No Rio. In 1983 he co-founded the avant-garde electronic art music audio project Tellus Audio Cassette Magazine. In 1984, Nechvatal began work on an opera called XS: The Opera Opus (1984-6) with the no wave musical composer Rhys Chatham. He began using computers and robotics to make post-conceptual paintings in 1986 and later, in his signature work, began to employ self-created computer viruses. From 1991 to 1993, he was artist-in-residence at the Louis Pasteur Atelier in Arbois, France and at the Saline Royale/Ledoux Foundation's computer lab. There he worked on The Computer Virus Project, his first artistic experiment with computer viruses and computer virus animation. He exhibited computer-robotic paintings at Documenta 8 in 1987. In 2002 he extended his experimentation into viral artificial life through a collaboration with the programmer Stephane Sikora of music2eye in a work called the Computer Virus Project II. Nechvatal has also created a noise music work called viral symphOny, a collaborative sound symphony created by using his computer virus software at the Institute for Electronic Arts at Alfred University. In 2021 Pentiments released Nechvatal's retrospective audio cassette Selected Sound Works (1981-2021), and in 2022 his The Viral Tempest, a double vinyl LP of new audio work. From 1999 to 2013, Nechvatal taught art theories of immersive virtual reality and the viractual at the School of Visual Arts in New York City (SVA). A book of his collected essays entitled Towards an Immersive Intelligence: Essays on the Work of Art in the Age of Computer Technology and Virtual Reality (1993–2006) was published by Edgewise Press in 2009. Also in 2009, his virtual reality art theory and art history book Immersive Ideals / Critical Distances was published. In 2011, his book Immersion Into Noise was published by Open Humanities Press in conjunction with the University of Michigan Library's Scholarly Publishing Office. Nechvatal has also published three books with Punctum Books: Minóy (noise music, ed., 2014), Destroyer of Naivetés (poetry, 2015) and Styling Sagaciousness (poetry, 2022). In 2023 his art-theory cybersex farce novella venus©~Ñ~vibrator, even was published by Orbis Tertius Press. The Joseph Nechvatal archive is housed at The Fales Library Downtown Collection at the NYU Special Collections Library in New York City.
Joseph Nechvatal : Joseph Nechvatal's website
Santa Fe Trail problem : The Santa Fe Trail problem is a genetic programming exercise in which artificial ants search for food pellets according to a programmed set of instructions. The layout of food pellets in the Santa Fe Trail problem has become a standard for comparing different genetic programming algorithms and solutions. One method for programming and testing algorithms on the Santa Fe Trail problem is by using the NetLogo application. There is at least one case of a student creating a Lego robotic ant to solve the problem.
Santa Fe Trail problem : Genetic programming Agent-based model Java Grammatical Evolution
Santa Fe Trail problem : Genetic-programming.org Grammatical-evolution.org Teamwork in genetic programming java Grammatical Evolution
Symbolic regression : Symbolic regression (SR) is a type of regression analysis that searches the space of mathematical expressions to find the model that best fits a given dataset, both in terms of accuracy and simplicity. No particular model is provided as a starting point for symbolic regression. Instead, initial expressions are formed by randomly combining mathematical building blocks such as mathematical operators, analytic functions, constants, and state variables. Usually, a subset of these primitives is specified by the user, but that is not a requirement of the technique. The symbolic regression problem for mathematical functions has been tackled with a variety of methods, including recombining equations, most commonly using genetic programming, as well as more recent methods utilizing Bayesian methods and neural networks. Another non-classical alternative method is the Universal Functions Originator (UFO), which has a different mechanism, search space, and building strategy. Further methods such as Exact Learning attempt to transform the fitting problem into a moments problem in a natural function space, usually built around generalizations of the Meijer G-function. By not requiring a priori specification of a model, symbolic regression is not affected by human bias or unknown gaps in domain knowledge. It attempts to uncover the intrinsic relationships of the dataset by letting the patterns in the data itself reveal the appropriate models, rather than imposing a model structure that is deemed mathematically tractable from a human perspective. The fitness function that drives the evolution of the models takes into account not only error metrics (to ensure the models accurately predict the data) but also special complexity measures, thus ensuring that the resulting models reveal the data's underlying structure in a way that is understandable from a human perspective. This facilitates reasoning and improves the odds of gaining insight into the data-generating system, as well as improving generalisability and extrapolation behaviour by preventing overfitting. Accuracy and simplicity may be left as two separate objectives of the regression, in which case the optimum solutions form a Pareto front, or they may be combined into a single objective by means of a model selection principle such as minimum description length. It has been proven that symbolic regression is an NP-hard problem, in the sense that one cannot always find the best possible mathematical expression to fit a given dataset in polynomial time. Nevertheless, if the sought-for equation is not too complex, it is possible to solve the symbolic regression problem exactly by generating every possible function (built from some predefined set of operators) and evaluating them on the dataset in question.
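Symbolic regression : The exhaustive strategy mentioned in the last sentence can be sketched directly: enumerate every expression up to a given size over a small set of primitives and keep the best fit. The primitive set and the toy dataset below are illustrative assumptions, not a recommendation for realistic problem sizes.

import itertools, math

# Brute-force symbolic regression sketch: enumerate all expression trees up to
# a given size over a small primitive set and keep the one with least error.
def expressions(size):
    if size == 1:
        yield from ("x", "1")
    else:
        for left_size in range(1, size - 1):
            for op in ("+", "-", "*"):
                for left in expressions(left_size):
                    for right in expressions(size - 1 - left_size):
                        yield f"({left}{op}{right})"

data = [(x, x * x + 1) for x in range(-3, 4)]      # hidden target: x^2 + 1

best, best_err = None, math.inf
for size in range(1, 8):
    for expr in expressions(size):
        err = sum((eval(expr, {"x": x}) - y) ** 2 for x, y in data)
        if err < best_err:
            best, best_err = expr, err
print(best, best_err)   # finds ((x*x)+1) with zero error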
Symbolic regression : While conventional regression techniques seek to optimize the parameters for a pre-specified model structure, symbolic regression avoids imposing prior assumptions and instead infers the model from the data. In other words, it attempts to discover both model structures and model parameters. This approach has the disadvantage of a much larger space to search: not only is the search space in symbolic regression infinite, but there are an infinite number of models which will perfectly fit a finite data set (provided that the model complexity is not artificially limited). This means that a symbolic regression algorithm may take longer than traditional regression techniques to find an appropriate model and parametrization. This can be attenuated by limiting the set of building blocks provided to the algorithm, based on existing knowledge of the system that produced the data; but in the end, using symbolic regression is a decision that has to be balanced with how much is known about the underlying system. Nevertheless, this characteristic of symbolic regression also has advantages: because the evolutionary algorithm requires diversity in order to effectively explore the search space, the result is likely to be a selection of high-scoring models (and their corresponding sets of parameters). Examining this collection could provide better insight into the underlying process, and allows the user to identify an approximation that better fits their needs in terms of accuracy and simplicity.
Symbolic regression : Most symbolic regression algorithms prevent combinatorial explosion by implementing evolutionary algorithms that iteratively improve the best-fit expression over many generations. Recently, researchers have proposed algorithms utilizing other tactics in AI. Silviu-Marian Udrescu and Max Tegmark developed the "AI Feynman" algorithm, which attempts symbolic regression by training a neural network to represent the mystery function, then runs tests against the neural network to attempt to break up the problem into smaller parts. For example, if f(x_1, ..., x_i, x_{i+1}, ..., x_n) = g(x_1, ..., x_i) + h(x_{i+1}, ..., x_n), tests against the neural network can recognize the separation and proceed to solve for g and h separately, with different variables as inputs. This is an example of divide and conquer, which reduces the size of the problem to be more manageable. AI Feynman also transforms the inputs and outputs of the mystery function in order to produce a new function which can be solved with other techniques, and performs dimensional analysis to reduce the number of independent variables involved. The algorithm was able to "discover" 100 equations from The Feynman Lectures on Physics, while a leading software using evolutionary algorithms, Eureqa, solved only 71. AI Feynman, in contrast to classic symbolic regression methods, requires a very large dataset in order to first train the neural network and is naturally biased towards equations that are common in elementary physics.
Symbolic regression : Closed-form expression § Conversion from numerical forms Genetic programming Gene expression programming Kolmogorov complexity Linear genetic programming Mathematical optimization Multi expression programming Regression analysis Reverse mathematics Discovery system (AI research)
Symbolic regression : Ivan Zelinka (2004). "Symbolic regression — an overview". Hansueli Gerber (1998). "Simple Symbolic Regression Using Genetic Programming". (Java applet) — approximates a function by evolving combinations of simple arithmetic operators, using algorithms developed by John Koza. Katya Vladislavleva. "Symbolic Regression: Function Discovery & More". Archived from the original on 2014-12-18.
Evolutionary algorithm : Evolutionary algorithms (EAs) reproduce essential elements of biological evolution in a computer algorithm in order to solve "difficult" problems, at least approximately, for which no exact or satisfactory solution methods are known. They belong to the class of metaheuristics and are a subset of population-based bio-inspired algorithms and evolutionary computation, which themselves are part of the field of computational intelligence. The mechanisms of biological evolution that an EA mainly imitates are reproduction, mutation, recombination and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also loss function). Evolution of the population then takes place after the repeated application of the above operators. Evolutionary algorithms often perform well approximating solutions to all types of problems because they ideally do not make any assumption about the underlying fitness landscape. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes and planning models based upon cellular processes. In most real applications of EAs, computational complexity is a prohibiting factor; in fact, this computational complexity is due to fitness function evaluation. Fitness approximation is one way to overcome this difficulty. However, seemingly simple EAs can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity.
Evolutionary algorithm : The following is an example of a generic evolutionary algorithm:
1. Randomly generate the initial population of individuals, the first generation.
2. Evaluate the fitness of each individual in the population.
3. Check whether the goal is reached; if so, terminate.
4. Select individuals as parents, preferably of higher fitness.
5. Produce offspring with optional crossover (mimicking reproduction).
6. Apply mutation operations on the offspring.
7. Select individuals, preferably of lower fitness, for replacement with new individuals (mimicking natural selection).
8. Return to step 2.
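Evolutionary algorithm : In code, this loop might be sketched as follows. The representation, the operators, and the termination threshold are problem-specific placeholders supplied by the user, and the example at the bottom (minimising x^2 over a real-valued genome) is purely illustrative.

import random

# Generic EA skeleton following the steps above; init, fitness, crossover and
# mutate are problem-specific callables supplied by the caller (placeholders).
def evolve(init, fitness, crossover, mutate,
           pop_size=100, generations=200, target=0.0):
    population = [init() for _ in range(pop_size)]                 # step 1
    for _ in range(generations):
        ranked = sorted(population, key=fitness)                   # step 2 (minimising)
        if fitness(ranked[0]) <= target:                           # step 3
            break
        parents = ranked[: pop_size // 2]                          # step 4
        offspring = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size // 2)                          # steps 5-6
        ]
        population = parents + offspring                           # step 7
    return min(population, key=fitness)

# Example use: minimise x^2 with a real-valued genome.
best = evolve(
    init=lambda: random.uniform(-10, 10),
    fitness=lambda x: x * x,
    crossover=lambda a, b: (a + b) / 2,        # intermediate recombination
    mutate=lambda x: x + random.gauss(0, 0.1), # small Gaussian perturbation
)
print(best)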
Evolutionary algorithm : Similar techniques differ in genetic representation and other implementation details, as well as in the nature of the particular applied problem.
Genetic algorithm – This is the most popular type of EA. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of EA is often used in optimization problems.
Genetic programming – Here the solutions are in the form of computer programs, and their fitness is determined by their ability to solve a computational problem. There are many variants of genetic programming, including Cartesian genetic programming, gene expression programming, grammatical evolution, linear genetic programming and multi expression programming.
Evolutionary programming – Similar to evolution strategy, but with a deterministic selection of all parents.
Evolution strategy (ES) – Works with vectors of real numbers as representations of solutions, and typically uses self-adaptive mutation rates. The method is mainly used for numerical optimization, although there are also variants for combinatorial tasks. Variants include CMA-ES and natural evolution strategies.
Differential evolution – Based on vector differences and is therefore primarily suited for numerical optimization problems.
Coevolutionary algorithm – Similar to genetic algorithms and evolution strategies, but the created solutions are compared on the basis of their outcomes from interactions with other solutions. Solutions can either compete or cooperate during the search process. Coevolutionary algorithms are often used in scenarios where the fitness landscape is dynamic, complex, or involves competitive interactions.
Neuroevolution – Similar to genetic programming, but the genomes represent artificial neural networks by describing structure and connection weights. The genome encoding can be direct or indirect.
Learning classifier system – Here the solution is a set of classifiers (rules or conditions). A Michigan-LCS evolves at the level of individual classifiers whereas a Pittsburgh-LCS uses populations of classifier-sets. Initially, classifiers were only binary, but now include real, neural net, or S-expression types. Fitness is typically determined with either a strength- or accuracy-based reinforcement learning or supervised learning approach.
Quality–diversity algorithms – QD algorithms simultaneously aim for high-quality and diverse solutions. Unlike traditional optimization algorithms that solely focus on finding the best solution to a problem, QD algorithms explore a wide variety of solutions across a problem space and keep those that are not just high performing, but also diverse and unique.
Evolutionary algorithm : The following theoretical principles apply to all or almost all EAs.
Evolutionary algorithm : The areas in which evolutionary algorithms are practically used are almost unlimited and range from industry, engineering, complex scheduling, agriculture, robot movement planning and finance to research and art. The application of an evolutionary algorithm requires some rethinking from the inexperienced user, as the approach to a task using an EA is different from conventional exact methods, and this is usually not part of the curriculum of engineers or other disciplines. For example, the fitness calculation must not only formulate the goal but also support the evolutionary search process towards it, e.g. by rewarding improvements that do not yet lead to a better evaluation of the original quality criteria. For example, if peak utilisation of resources such as personnel deployment or energy consumption is to be avoided in a scheduling task, it is not sufficient to assess the maximum utilisation; rather, the number and duration of exceedances of a still acceptable level should also be recorded, so that reductions below the actual maximum peak value can be rewarded. There are therefore some publications aimed at the beginner that help avoid beginners' mistakes and lead an application project to success, including clarification of the fundamental question of when an EA should be used to solve a problem and when it is better not to.
Evolutionary algorithm : There are some other proven and widely used nature-inspired global search techniques, such as:
Memetic algorithm – A hybrid method, inspired by Richard Dawkins's notion of a meme. It commonly takes the form of a population-based algorithm (frequently an EA) coupled with individual learning procedures capable of performing local refinements. It emphasizes the exploitation of problem-specific knowledge and tries to orchestrate local and global search in a synergistic way.
Cellular evolutionary or memetic algorithm – Uses a topological neighbourhood relation between the individuals of a population to restrict mate selection and thereby reduce the propagation speed of above-average individuals. The idea is to maintain genotypic diversity in the population over a longer period of time, reducing the risk of premature convergence.
Ant colony optimization – Based on the ideas of ant foraging by pheromone communication to form paths. Primarily suited for combinatorial optimization and graph problems.
Particle swarm optimization – Based on the ideas of animal flocking behaviour. Also primarily suited for numerical optimization problems.
Gaussian adaptation – Based on information theory. Used for maximization of manufacturing yield, mean fitness or average information. See for instance Entropy in thermodynamics and information theory.
In addition, many new nature-inspired or metaphor-guided algorithms have been proposed since the beginning of this century. For criticism of most publications on these, see the remarks at the end of the introduction to the article on metaheuristics.
Evolutionary algorithm : In 2020, Google stated that their AutoML-Zero can successfully rediscover classic algorithms such as neural networks. The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics.
Evolutionary algorithm : Ashlock, D. (2006), Evolutionary Computation for Modeling and Optimization, Springer, New York, doi:10.1007/0-387-31909-3 ISBN 0-387-22196-4. Bäck, T. (1996), Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford Univ. Press, New York, ISBN 978-0-19-509971-3. Bäck, T., Fogel, D., Michalewicz, Z. (1999), Evolutionary Computation 1: Basic Algorithms and Operators, CRC Press, Boca Raton, USA, ISBN 978-0-7503-0664-5. Bäck, T., Fogel, D., Michalewicz, Z. (2000), Evolutionary Computation 2: Advanced Algorithms and Operators, CRC Press, Boca Raton, USA, doi:10.1201/9781420034349 ISBN 978-0-3678-0637-8. Banzhaf, W., Nordin, P., Keller, R., Francone, F. (1998), Genetic Programming - An Introduction, Morgan Kaufmann, San Francisco, ISBN 978-1-55860-510-7. Eiben, A.E., Smith, J.E. (2003), Introduction to Evolutionary Computing, Springer, Heidelberg, New York, doi:10.1007/978-3-662-44874-8 ISBN 978-3-662-44873-1. Holland, J. H. (1992), Adaptation in Natural and Artificial Systems, MIT Press, Cambridge, MA, ISBN 978-0-262-08213-6. Michalewicz, Z.; Fogel, D.B. (2004), How To Solve It: Modern Heuristics. Springer, Berlin, Heidelberg, ISBN 978-3-642-06134-9, doi:10.1007/978-3-662-07807-5. Benko, Attila; Dosa, Gyorgy; Tuza, Zsolt (2010). "Bin Packing/Covering with Delivery, solved with the evolution of algorithms". 2010 IEEE Fifth International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA). pp. 298–302. doi:10.1109/BICTA.2010.5645312. ISBN 978-1-4244-6437-1. S2CID 16875144. Poli, R.; Langdon, W. B.; McPhee, N. F. (2008). A Field Guide to Genetic Programming. Lulu.com, freely available from the internet. ISBN 978-1-4092-0073-4. Archived from the original on 2016-05-27. Retrieved 2011-03-05. Price, K., Storn, R.M., Lampinen, J.A., (2005). Differential Evolution: A Practical Approach to Global Optimization, Springer, Berlin, Heidelberg, ISBN 978-3-642-42416-8, doi:10.1007/3-540-31306-0. Ingo Rechenberg (1971), Evolutionsstrategie - Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Fromman-Holzboog (1973). ISBN 3-7728-1642-8 Hans-Paul Schwefel (1974), Numerische Optimierung von Computer-Modellen (PhD thesis). Reprinted by Birkhäuser (1977). Hans-Paul Schwefel (1995), Evolution and Optimum Seeking. Wiley & Sons, New York. ISBN 0-471-57148-2 Simon, D. (2013), Evolutionary Optimization Algorithms Archived 2014-03-10 at the Wayback Machine, Wiley & Sons, ISBN 978-0-470-93741-9 Kruse, Rudolf; Borgelt, Christian; Klawonn, Frank; Moewes, Christian; Steinbrecher, Matthias; Held, Pascal (2013), Computational Intelligence: A Methodological Introduction. Springer, London. ISBN 978-1-4471-5012-1, doi:10.1007/978-1-4471-5013-8. Rahman, Rosshairy Abd.; Kendall, Graham; Ramli, Razamin; Jamari, Zainoddin; Ku-Mahamud, Ku Ruhana (2017). "Shrimp Feed Formulation via Evolutionary Algorithm with Power Heuristics for Handling Constraints". Complexity. 2017: 1–12. doi:10.1155/2017/7053710.
Evolutionary algorithm : An Overview of the History and Flavors of Evolutionary Algorithms
Crossover (evolutionary algorithm) : Crossover in evolutionary algorithms and evolutionary computation, also called recombination, is a genetic operator used to combine the genetic information of two parents to generate new offspring. It is one way to stochastically generate new solutions from an existing population, and is analogous to the crossover that happens during sexual reproduction in biology. New solutions can also be generated by cloning an existing solution, which is analogous to asexual reproduction. Newly generated solutions may be mutated before being added to the population. The aim of recombination is to transfer good characteristics from two different parents to one child. Different algorithms in evolutionary computation may use different data structures to store genetic information, and each genetic representation can be recombined with different crossover operators. Typical data structures that can be recombined with crossover are bit arrays, vectors of real numbers, or trees. The list of operators presented below is by no means complete and serves mainly as an exemplary illustration of this dyadic genetic operator type. More operators and more details can be found in the literature.
Crossover (evolutionary algorithm) : Traditional genetic algorithms store genetic information in a chromosome represented by a bit array. Crossover methods for bit arrays are popular and an illustrative example of genetic recombination.
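Crossover (evolutionary algorithm) : As an illustration, one-point crossover on bit arrays takes only a few lines; the encoding of chromosomes as Python lists of 0/1 integers is an assumption made for this sketch.

import random

# One-point crossover on bit arrays (encoded as lists of 0/1 ints):
# cut both parents at the same random point and swap the tails.
def one_point_crossover(p1, p2):
    assert len(p1) == len(p2)
    cut = random.randint(1, len(p1) - 1)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

a = [0, 0, 0, 0, 0, 0]
b = [1, 1, 1, 1, 1, 1]
print(one_point_crossover(a, b))
# e.g. ([0, 0, 1, 1, 1, 1], [1, 1, 0, 0, 0, 0]) for cut == 2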
Crossover (evolutionary algorithm) : For the crossover operators presented above, and for most other crossover operators for bit strings, it holds that they can also be applied accordingly to integer or real-valued genomes whose genes each consist of an integer or real-valued number. Instead of individual bits, integer or real-valued numbers are then simply copied into the child genome. The offspring lie on the remaining corners of the hyperbody spanned by the two parents P1 = (1.5, 6, 8) and P2 = (7, 2, 1), as exemplified in the accompanying image for the three-dimensional case.
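Crossover (evolutionary algorithm) : This gene-wise carry-over can be sketched as a uniform crossover on real-valued genomes: each gene of a child is copied from one parent or the other, so every offspring is a corner of the axis-aligned box (hyperbody) spanned by the parents, as with P1 and P2 above. The list encoding is an assumption made for the sketch.

import random

# Uniform crossover applied gene-wise to real-valued genomes. Each child gene
# comes from one parent or the other, so offspring lie on corners of the
# box spanned by the parents.
def uniform_crossover(p1, p2):
    child1, child2 = [], []
    for a, b in zip(p1, p2):
        if random.random() < 0.5:
            a, b = b, a               # swap this gene between the two children
        child1.append(a)
        child2.append(b)
    return child1, child2

print(uniform_crossover([1.5, 6, 8], [7, 2, 1]))
# e.g. ([1.5, 2, 8], [7, 6, 1]) -- two opposite corners of the spanned box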
Crossover (evolutionary algorithm) : For combinatorial tasks, genomes are usually used that are themselves permutations of a set, and the crossover operators applied to them are specifically designed for such genomes. The underlying set is usually a subset of the natural numbers N or N_0. If 1-point, n-point, or uniform crossover for integer genomes is used for such genomes, a child genome may contain some values twice while others are missing. This can be remedied by genetic repair, e.g. by positionally replacing the duplicate genes with the values that are missing from the child but present in the other child genome. In order to avoid the generation of invalid offspring in the first place, special crossover operators for permutations have been developed which fulfill the basic requirement of such operators, namely that all elements of the initial permutation are also present in the new one and only their order is changed. A distinction can be made between combinatorial tasks where all sequences are admissible and those with constraints in the form of inadmissible partial sequences. A well-known representative of the first task type is the traveling salesman problem (TSP), where the goal is to visit a set of cities exactly once on the shortest tour. An example of the constrained task type is the scheduling of multiple workflows: workflows involve sequence constraints on some of the individual work steps; for example, a thread cannot be cut until the corresponding hole has been drilled in a workpiece. Such problems are also called order-based permutation problems. In the following, two crossover operators are presented as examples: the partially mapped crossover (PMX), motivated by the TSP, and the order crossover (OX1), designed for order-based permutations. A second offspring can be produced in each case by exchanging the roles of the parent chromosomes.
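Crossover (evolutionary algorithm) : The following is a minimal sketch of the order crossover (OX1) in its usual textbook form; the names and the choice of Python lists are illustrative.

```python
import random

def order_crossover(parent1, parent2):
    """Order crossover (OX1) for permutation genomes.

    A random slice of parent1 is kept in place; the remaining positions
    are filled with the missing genes in the order in which they occur
    in parent2, so the child is again a valid permutation."""
    n = len(parent1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]        # slice inherited from parent1
    kept = set(child[i:j + 1])
    fill = []                                # parent2's genes, read after cut j
    for k in range(n):
        gene = parent2[(j + 1 + k) % n]
        if gene not in kept:
            fill.append(gene)
    for k, gene in enumerate(fill):          # fill remaining slots, wrapping
        child[(j + 1 + k) % n] = gene
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [8, 6, 4, 2, 7, 5, 3, 1]
offspring = order_crossover(p1, p2)
```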
Crossover (evolutionary algorithm) : Evolutionary algorithm Genetic representation Fitness function Selection (genetic algorithm)
Crossover (evolutionary algorithm) : John Holland (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, Michigan. ISBN 0-262-58111-6. Schwefel, Hans-Paul (1995). Evolution and Optimum Seeking. New York: John Wiley & Sons. ISBN 0-471-57148-2. Davis, Lawrence (1991). Handbook of genetic algorithms. New York: Van Nostrand Reinhold. ISBN 0-442-00173-8. OCLC 23081440. Eiben, A.E.; Smith, J.E. (2015). Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932. Yu, Xinjie; Gen, Mitsuo (2010). Introduction to Evolutionary Algorithms. Decision Engineering. London: Springer. doi:10.1007/978-1-84996-129-5. ISBN 978-1-84996-128-8. Bäck, Thomas; Fogel, David B.; Michalewicz, Zbigniew, eds. (1999). Evolutionary computation. Vol. 1, Basic algorithms and operators. Bristol: Institute of Physics Pub. ISBN 0-585-30560-9. OCLC 45730387.
Crossover (evolutionary algorithm) : Newsgroup: comp.ai.genetic FAQ - see section on crossover (also known as recombination).
Genetic operator : A genetic operator is an operator used in evolutionary algorithms (EA) to guide the algorithm towards a solution to a given problem. There are three main types of operators (mutation, crossover and selection), which must work in conjunction with one another in order for the algorithm to be successful. Genetic operators are used to create and maintain genetic diversity (mutation operator), combine existing solutions (also known as chromosomes) into new solutions (crossover) and select between solutions (selection). The classic representatives of evolutionary algorithms include genetic algorithms, evolution strategies, genetic programming and evolutionary programming. In his book discussing the use of genetic programming for the optimization of complex problems, computer scientist John Koza has also identified an 'inversion' or 'permutation' operator; however, the effectiveness of this operator has never been conclusively demonstrated and this operator is rarely discussed in the field of genetic programming. For combinatorial problems, however, these and other operators tailored to permutations are frequently used by other EAs. Mutation (or mutation-like) operators are said to be unary operators, as they only operate on one chromosome at a time. In contrast, crossover operators are said to be binary operators, as they operate on two chromosomes at a time, combining two existing chromosomes into one new chromosome.
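Genetic operator : To illustrate how the three operator types cooperate, here is a hedged, minimal generational loop on a toy one-max problem (maximize the number of 1-bits); all names and parameter values are illustrative choices, not a definitive implementation.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 50
MUTATION_RATE = 1 / GENOME_LEN

def fitness(genome):           # toy "one-max" problem: count the 1-bits
    return sum(genome)

def tournament(pop):           # selection: best of two random individuals
    return max(random.sample(pop, 2), key=fitness)

def crossover(a, b):           # one-point crossover (binary operator)
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome):            # bit-flip mutation (unary operator)
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Each child is produced by selection, then crossover, then mutation.
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]
best = max(pop, key=fitness)
```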
Genetic operator : Genetic variation is a necessity for the process of evolution. Genetic operators used in evolutionary algorithms are analogous to those in the natural world: survival of the fittest, or selection; reproduction (crossover, also called recombination); and mutation.
Genetic operator : While each operator acts to improve the solutions produced by the evolutionary algorithm working individually, the operators must work in conjunction with each other for the algorithm to be successful in finding a good solution. Using the selection operator on its own will tend to fill the solution population with copies of the best solution from the population. If the selection and crossover operators are used without the mutation operator, the algorithm will tend to converge to a local minimum, that is, a good but sub-optimal solution to the problem. Using the mutation operator on its own leads to a random walk through the search space. Only by using all three operators together can the evolutionary algorithm become a noise-tolerant global search algorithm, yielding good solutions to the problem at hand. == References ==
Mutation (evolutionary algorithm) : Mutation is a genetic operator used to maintain genetic diversity of the chromosomes of a population of an evolutionary algorithm (EA), including genetic algorithms in particular. It is analogous to biological mutation. The classic example of a mutation operator of a binary coded genetic algorithm (GA) involves a probability that an arbitrary bit in a genetic sequence will be flipped from its original state. A common method of implementing the mutation operator involves generating a random variable for each bit in a sequence; this random variable tells whether or not a particular bit will be flipped. This mutation procedure, based on the biological point mutation, is called single point mutation. Other types of mutation operators are commonly used for representations other than binary, such as floating-point encodings or representations for combinatorial problems. The purpose of mutation in EAs is to introduce diversity into the sampled population. Mutation operators are used in an attempt to avoid local minima by preventing the population of chromosomes from becoming too similar to each other, since such similarity would slow or even stop convergence to the global optimum. This reasoning also leads most EAs to avoid taking only the fittest of the population when generating the next generation, and instead to select a random (or semi-random) set with a weighting toward those that are fitter. The following requirements apply to all mutation operators used in an EA: every point in the search space must be reachable by one or more mutations; there must be no preference for parts or directions in the search space (no drift); and small mutations should be more probable than large ones. For different genome types, different mutation types are suitable; common examples include Gaussian, uniform, zigzag, scramble, insertion, inversion, and swap mutations. An overview and more operators than those presented below can be found in the introductory book by Eiben and Smith or in the literature.
Mutation (evolutionary algorithm) : The mutation of bit strings occurs through bit flips at random positions. Example: The probability of mutating a given bit is 1/l, where l is the length of the binary vector. Thus, on average, one bit is flipped per individual selected for mutation.
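Mutation (evolutionary algorithm) : A minimal Python sketch of this bit-flip mutation (names illustrative):

```python
import random

def bit_flip_mutation(genome):
    """Flip each bit independently with probability 1/l,
    so that on average one bit per call is flipped."""
    rate = 1 / len(genome)
    return [1 - bit if random.random() < rate else bit for bit in genome]

mutant = bit_flip_mutation([0, 1, 1, 0, 1, 0, 0, 1])
```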
Mutation (evolutionary algorithm) : Many EAs, such as the evolution strategy or the real-coded genetic algorithms, work with real numbers instead of bit strings. This is due to the good experience that has been made with this type of coding. The value of a real-valued gene can either be changed or redetermined. A mutation that implements the latter should only ever be used in conjunction with value-changing mutations, and then only with comparatively low probability, as it can lead to large changes. In practical applications, the decision variables of the optimization problem to be solved are usually restricted to a limited value range. Accordingly, the values of the associated genes are each restricted to an interval [x_min, x_max]. Mutations may or may not take these restrictions into account; in the latter case, suitable post-treatment is then required, such as clamping the mutated value back into the admissible interval.
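Mutation (evolutionary algorithm) : As a hedged sketch of a value-changing mutation that respects such interval restrictions, the following adds Gaussian noise to each gene and clamps the result; the step size, here an illustrative fraction of the interval width, is a problem-dependent choice.

```python
import random

def gaussian_mutation(genome, bounds, sigma=0.1):
    """Add Gaussian noise to each real-valued gene and clamp the
    result to its admissible interval [x_min, x_max]."""
    child = []
    for value, (x_min, x_max) in zip(genome, bounds):
        # Standard deviation scaled to the interval width (illustrative).
        value += random.gauss(0.0, sigma * (x_max - x_min))
        child.append(min(max(value, x_min), x_max))
    return child

genome = [1.5, 6.0, 8.0]
bounds = [(0.0, 10.0)] * 3
mutant = gaussian_mutation(genome, bounds)
```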
Mutation (evolutionary algorithm) : Mutations of permutations are specially designed for genomes that are themselves permutations of a set. These are often used to solve combinatorial tasks. In the two mutations presented, parts of the genome are rotated or inverted.
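Mutation (evolutionary algorithm) : A minimal sketch of the inversion variant, which reverses a randomly chosen segment of a permutation genome (names illustrative):

```python
import random

def inversion_mutation(perm):
    """Reverse a randomly chosen segment of a permutation.

    The child contains exactly the same elements; only the order within
    the segment changes, so the result is again a valid permutation."""
    i, j = sorted(random.sample(range(len(perm)), 2))
    return perm[:i] + perm[i:j + 1][::-1] + perm[j + 1:]

tour = [3, 1, 4, 5, 2, 6]
mutant = inversion_mutation(tour)
```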
Mutation (evolutionary algorithm) : Evolutionary algorithms Genetic algorithms Evolution strategy Genetic programming Evolutionary programming
Mutation (evolutionary algorithm) : John Holland (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, Michigan. ISBN 0-262-58111-6. Schwefel, Hans-Paul (1995). Evolution and Optimum Seeking. New York: John Wiley & Sons. ISBN 0-471-57148-2. Davis, Lawrence (1991). Handbook of genetic algorithms. New York: Van Nostrand Reinhold. ISBN 0-442-00173-8. OCLC 23081440. Eiben, A.E.; Smith, J.E. (2015). Introduction to Evolutionary Computing. Natural Computing Series. Berlin, Heidelberg: Springer. doi:10.1007/978-3-662-44874-8. ISBN 978-3-662-44873-1. S2CID 20912932. Yu, Xinjie; Gen, Mitsuo (2010). Introduction to Evolutionary Algorithms. Decision Engineering. London: Springer. doi:10.1007/978-1-84996-129-5. ISBN 978-1-84996-128-8. De Jong, Kenneth A. (2006). Evolutionary computation : a unified approach. Cambridge, Mass.: MIT Press. ISBN 978-0-262-25598-1. OCLC 69652176. Fogel, David B.; Bäck, Thomas; Michalewicz, Zbigniew, eds. (1999). Evolutionary computation. Vol. 1, Basic algorithms and operators. Bristol: Institute of Physics Pub. ISBN 0-585-30560-9. OCLC 45730387.
Artificial development : Artificial development, also known as artificial embryogeny or computational development, is an area of computer science and engineering concerned with computational models motivated by genotype–phenotype mappings in biological systems. Artificial development is often considered a sub-field of evolutionary computation, although the principles of artificial development have also been used within stand-alone computational models. Within evolutionary computation, the need for artificial development techniques was motivated by the perceived lack of scalability and evolvability of direct solution encodings (Tufte, 2008). Artificial development entails indirect solution encoding. Rather than describing a solution directly, an indirect encoding describes (either explicitly or implicitly) the process by which a solution is constructed. Often, but not always, these indirect encodings are based upon biological principles of development such as morphogen gradients, cell division and cellular differentiation (e.g. Doursat 2008), gene regulatory networks (e.g. Guo et al., 2009), degeneracy (Whitacre et al., 2010), grammatical evolution (de Salabert et al., 2006), or analogous computational processes such as re-writing, iteration, and time. The influences of interaction with the environment, spatiality and physical constraints on differentiated multi-cellular development have been investigated more recently (e.g. Knabe et al. 2008). Artificial development approaches have been applied to a number of computational and design problems, including electronic circuit design (Miller and Banzhaf 2003), robotic controllers (e.g. Taylor 2004), and the design of physical structures (e.g. Hornby 2004).
Artificial development : Rene Doursat, "Organically grown architectures: Creating decentralized, autonomous systems by embryomorphic engineering", Organic Computing, R. P. Würtz, (ed.), Springer-Verlag, Ch. 8, pp. 167-200, 2008. Guo, H., Y. Meng and Y. Jin (2009). "A cellular mechanism for multi-robot construction via evolutionary multi-objective optimization of a gene regulatory network." BioSystems 98(3): 193-203. (https://web.archive.org/web/20110719123923/http://www.ece.stevens-tech.edu/~ymeng/publications/BioSystems09_Meng.pdf) Whitacre, J. M., P. Rohlfshagen, X. Yao and A. Bender (2010). The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments. Parallel Problem Solving from Nature (PPSN) XI, Kraków, Poland. (https://www.researchgate.net/profile/James_Whitacre/publication/220701596_The_Role_of_Degenerate_Robustness_in_the_Evolvability_of_Multi-agent_Systems_in_Dynamic_Environments/links/0d2b2c6889b5121d730dd3be.pdf) Gregory S. Hornby, "Functional Scalability through Generative Representations: the Evolution of Table Designs", Environment and Planning B: Planning and Design, 31(4), 569-587, July 2004. (abstract) Julian F. Miller and Wolfgang Banzhaf (2003): "Evolving the Program for a Cell: From French Flags to Boolean Circuits", On Growth, Form and Computers, S. Kumar and P. Bentley, (eds.), Elsevier Academic Press, 2003. ISBN 978-0-12-428765-5 Arturo de Salabert, Alfonso Ortega and Manuel Alfonseca, (2006) "Optimizing Ecology-friendly Drawing of Plans of Buildings by means of Grammatical Evolution," Proc. ISC'2006, Eurosis, pp. 493-497. ISBN 90-77381-26-0 Kenneth Stanley and Risto Miikkulainen (2003): "A Taxonomy for artificial embryogeny", Artificial Life 9(2):93-130, 2003. Tim Taylor (2004): "A Genetic Regulatory Network-Inspired Real-Time Controller for a Group of Underwater Robots" Archived 2017-08-13 at the Wayback Machine, Intelligent Autonomous Systems 8 (Proceedings of IAS8), F. Groen, N. Amato, A. Bonarini, E. Yoshida and B. Kröse (eds.), IOS Press, Amsterdam, 2004. ISBN 978-1-58603-414-6 Gunnar Tufte (2008): "Phenotypic, Developmental and Computational Resources: Scaling in Artificial Development", Proc. Genetic and Evolutionary Computation Conf. (GECCO) 2008, ACM, 2008. Knabe, J. F., Nehaniv, C. L. and Schilstra, M. J. "Evolution and Morphogenesis of Differentiated Multicellular Organisms: Autonomously Generated Diffusion Gradients for Positional Information". In Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, pages 321-328, MIT Press, 2008.
Cellular evolutionary algorithm : A cellular evolutionary algorithm (cEA) is a kind of evolutionary algorithm (EA) in which individuals cannot mate arbitrarily: each one interacts only with its nearby neighbors, on which a basic EA is applied (selection, variation, replacement). The cellular model simulates natural evolution from the point of view of the individual, which encodes a tentative (optimization, learning, search) problem solution. The essential idea of this model is to provide the EA population with a special structure defined as a connected graph, in which each vertex is an individual that communicates with its nearest neighbors. In particular, individuals are conceptually set in a toroidal mesh and are only allowed to recombine with close individuals. This leads to a kind of locality known as isolation by distance. The set of potential mates of an individual is called its neighborhood. It is known that, in this kind of algorithm, similar individuals tend to cluster and create niches, and these groups operate as if they were separate sub-populations (islands). However, there is no clear borderline between adjacent groups: close niches can easily be colonized by competitive niches and may merge solution contents during the process, while more distant niches are affected more slowly.
Cellular evolutionary algorithm : A cellular evolutionary algorithm (cEA) usually evolves a structured two-dimensional grid of individuals, although other topologies are also possible. In this grid, clusters of similar individuals are naturally created during evolution, promoting exploration at their boundaries, while exploitation is mainly performed by direct competition and merging inside them. The grid is usually a 2D toroidal structure, although the number of dimensions can easily be extended (to 3D) or reduced (to 1D, e.g. a ring). The neighborhood of a particular point of the grid (where an individual is placed) is defined in terms of the Manhattan distance from it to others in the population. Each point of the grid has a neighborhood that overlaps the neighborhoods of nearby individuals. In the basic algorithm, all the neighborhoods have the same size and identical shape. The two most commonly used neighborhoods are L5, also called Von Neumann or NEWS (North, East, West and South), and C9, also known as the Moore neighborhood; here, L stands for linear and C for compact. In cEAs, the individuals can only interact with their neighbors in the reproductive cycle where the variation operators are applied. This reproductive cycle is executed inside the neighborhood of each individual and generally consists of selecting two parents among its neighbors according to a certain criterion, applying the variation operators to them (recombination and mutation, for example), and replacing the considered individual by the newly created offspring following a given criterion, for instance, replacing it if the offspring represents a better solution.
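Cellular evolutionary algorithm : To make the neighborhood definitions concrete, the following minimal Python sketch computes the L5 (Von Neumann) and C9 (Moore) neighborhoods of a cell on a 2D toroidal grid; the function names are illustrative.

```python
def l5_neighborhood(x, y, width, height):
    """Von Neumann (NEWS) neighborhood of (x, y) on a toroidal grid."""
    return [(x, y),
            (x, (y - 1) % height),   # North
            ((x + 1) % width, y),    # East
            ((x - 1) % width, y),    # West
            (x, (y + 1) % height)]   # South

def c9_neighborhood(x, y, width, height):
    """Moore neighborhood: the 3x3 block centred on (x, y)."""
    return [((x + dx) % width, (y + dy) % height)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

# On a 5x5 torus, the L5 neighborhood of the corner cell (0, 0)
# wraps around to (0, 4), (1, 0), (4, 0) and (0, 1).
print(l5_neighborhood(0, 0, 5, 5))
```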
Cellular evolutionary algorithm : In a regular synchronous cEA, the algorithm proceeds from the top-left individual to the right and then row by row, using the information in the old population to create a new temporary population. After the bottom-right individual has been processed, the temporary population is full of newly computed individuals and the replacement step starts: the old population is completely and synchronously replaced with the newly computed one according to some criterion. Usually, the replacement keeps the best individual in the same position of both populations, that is, elitism is used. Note that, depending on the update policy of the population used, an asynchronous cEA can also be defined; this is also a well-known issue in cellular automata. In asynchronous cEAs, the order in which the individuals in the grid are updated changes depending on the criterion used: line sweep, fixed random sweep, new random sweep, or uniform choice. These are the four most common ways of updating the population. All of them immediately use the newly computed individual (or the original one, if better) in the computations of its neighbors. As a result, the population holds at any time individuals in different states of evolution, defining an interesting line of research. The overlap of the neighborhoods provides an implicit mechanism of solution migration to the cEA. Since the best solutions spread smoothly through the whole population, genetic diversity is preserved longer than in non-structured EAs. This soft dispersion of the best solutions through the population is one of the main reasons for the good tradeoff between exploration and exploitation that cEAs achieve during the search. It is then easy to see that this tradeoff (and hence the genetic diversity level along the evolution) can be tuned, for instance, by modifying the size of the neighborhood used, since the overlap degree between neighborhoods grows with the neighborhood size. A cEA can be seen as a cellular automaton (CA) with probabilistic rewritable rules, where the alphabet of the CA is equivalent to the set of potential solutions of the problem. Hence, if cEAs are seen as a kind of CA, it is possible to import knowledge from the field of CAs into cEAs, and in fact this is an interesting open research line.
Cellular evolutionary algorithm : Cellular EAs are very amenable to parallelism and are thus often found in the literature on parallel metaheuristics. In particular, fine-grained parallelism can be used to assign an independent thread of execution to every individual, allowing the whole cEA to run on a concurrent or truly parallel hardware platform. In this way, large time reductions can be obtained when running cEAs on FPGAs or GPUs. However, it is important to stress that cEAs are a model of search, in many senses different from traditional EAs. Moreover, they can be run on both sequential and parallel platforms, reinforcing the fact that the model and the implementation are two different concepts. A complete description of the fundamentals for the understanding, design, and application of cEAs can be found in the references below.
Cellular evolutionary algorithm : Cellular automaton Dual-phase evolution Enrique Alba Evolutionary algorithm Metaheuristic Parallel metaheuristic
Cellular evolutionary algorithm : E. Alba, B. Dorronsoro, Cellular Genetic Algorithms, Springer-Verlag, ISBN 978-0-387-77609-5, 2008. A.J. Nebro, J.J. Durillo, F. Luna, B. Dorronsoro, E. Alba, MOCell: A New Cellular Genetic Algorithm for Multiobjective Optimization, International Journal of Intelligent Systems, 24:726-746, 2009. E. Alba, B. Dorronsoro, F. Luna, A.J. Nebro, P. Bouvry, L. Hogie, A Cellular Multi-Objective Genetic Algorithm for Optimal Broadcasting Strategy in Metropolitan MANETs, Computer Communications, 30(4):685-697, 2007. E. Alba, B. Dorronsoro, Computing Nine New Best-So-Far Solutions for Capacitated VRP with a Cellular GA, Information Processing Letters, Elsevier, 98(6):225-230, 30 June 2006. M. Giacobini, M. Tomassini, A. Tettamanzi, E. Alba, The Selection Intensity in Cellular Evolutionary Algorithms for Regular Lattices, IEEE Transactions on Evolutionary Computation, IEEE Press, 9(5):489-505, 2005. E. Alba, B. Dorronsoro, The Exploration/Exploitation Tradeoff in Dynamic Cellular Genetic Algorithms, IEEE Transactions on Evolutionary Computation, IEEE Press, 9(2):126-142, 2005.
Cellular evolutionary algorithm : The site on Cellular Evolutionary Algorithms NEO Research Group at University of Málaga, Spain Archived 2018-09-28 at the Wayback Machine
Chromosome (evolutionary algorithm) : A chromosome or genotype in evolutionary algorithms (EA) is a set of parameters which define a proposed solution of the problem that the evolutionary algorithm is trying to solve. The set of all solutions, also called individuals according to the biological model, is known as the population. The genome of an individual consists of one, more rarely of several, chromosomes and corresponds to the genetic representation of the task to be solved. A chromosome is composed of a set of genes, where a gene consists of one or more semantically connected parameters, which are often also called decision variables. They determine one or more phenotypic characteristics of the individual or at least have an influence on them. In the basic form of genetic algorithms, the chromosome is represented as a binary string, while in later variants and in EAs in general, a wide variety of other data structures are used.
Chromosome (evolutionary algorithm) : When creating the genetic representation of a task, it is determined which decision variables and other degrees of freedom of the task should be improved by the EA and possibly additional heuristics, and what the genotype–phenotype mapping should look like. The design of a chromosome translates these considerations into concrete data structures, for which an EA then has to be selected, configured, extended or, in the worst case, created. Finding a suitable representation of the problem domain for a chromosome is an important consideration, as a good representation will make the search easier by limiting the search space; similarly, a poorer representation will allow a larger search space. In this context, suitable mutation and crossover operators must also be found or newly defined to fit the chosen chromosome design. An important requirement for these operators is that they not only allow all points in the search space to be reached in principle, but also make this as easy as possible. A well-suited chromosome should meet the following requirements: it must allow all admissible points in the search space to be reached; it should cover only the search space and no additional areas, so that there is no redundancy, or as little redundancy as possible; it should observe strong causality, meaning that small changes in the chromosome lead only to small changes in the phenotype (this is also called locality of the relationship between search and problem space); and it should exclude prohibited regions of the search space completely or as much as possible. While the first requirement is indispensable, depending on the application and the EA used, one usually has to be satisfied with fulfilling the remaining requirements as far as possible. The evolutionary search is supported, and possibly considerably accelerated, by as complete a fulfillment as possible.
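Chromosome (evolutionary algorithm) : A classic illustration of the strong-causality requirement, offered here as a hedged aside, is the use of binary-reflected Gray codes for integer genes: neighboring phenotype values then always differ in a single genotype bit, whereas plain binary coding can require many bit flips for a phenotypic step of one.

```python
def gray_encode(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray encoding."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Phenotype values 7 and 8 are neighbors; in plain binary they differ in
# four bits (0111 vs 1000), while their Gray codes differ in only one
# (0100 vs 1100), which better matches the strong-causality requirement.
assert gray_decode(gray_encode(7)) == 7
```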
Chromosome (evolutionary algorithm) : Thomas Bäck (1996): Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford Univ. Press. ISBN 978-0-19-509971-3 Wolfgang Banzhaf, P. Nordin, R. Keller, F. Francone (1998): Genetic Programming - An Introduction, Morgan Kaufmann, San Francisco. ISBN 1-55860-510-X Kenneth A. de Jong (2006): Evolutionary Computation: A Unified Approach. MIT Press, Cambridge, MA. ISBN 0-262-04194-4 Melanie Mitchell (1996): An Introduction to Genetic Algorithms. MIT Press, Cambridge MA. ISBN 978-0-262-63185-3 Hans-Paul Schwefel (1995): Evolution and Optimum Seeking. Wiley & Sons, New York. ISBN 0-471-57148-2 == References ==
Cultural algorithm : Cultural algorithms (CA) are a branch of evolutionary computation where there is a knowledge component that is called the belief space in addition to the population component. In this sense, cultural algorithms can be seen as an extension to a conventional genetic algorithm. Cultural algorithms were introduced by Reynolds (see references).
Cultural algorithm : The belief space of a cultural algorithm is divided into distinct categories. These categories represent different domains of knowledge that the population has of the search space. The belief space is updated after each iteration by the best individuals of the population. The best individuals can be selected using a fitness function that assesses the performance of each individual in the population, much like in genetic algorithms.
Cultural algorithm : The population component of the cultural algorithm is approximately the same as that of the genetic algorithm.
Cultural algorithm : Cultural algorithms require an interface between the population and belief space. The best individuals of the population can update the belief space via the update function. Also, the knowledge categories of the belief space can affect the population component via the influence function. The influence function can affect population by altering the genome or the actions of the individuals.
Cultural algorithm : Initialize population space (choose initial population)
Initialize belief space (e.g. set domain-specific knowledge and normative value ranges)
Repeat until termination condition is met:
  Perform actions of the individuals in population space
  Evaluate each individual by using the fitness function
  Select the parents to reproduce a new generation of offspring
  Let the belief space alter the genome of the offspring by using the influence function
  Update the belief space by using the accept function (this is done by letting the best individuals affect the belief space)
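Cultural algorithm : A hedged Python skeleton of this loop follows; the accept and influence functions are problem-specific, so the stubs shown here for a toy one-variable maximization problem are purely illustrative.

```python
import random

def cultural_algorithm(fitness, new_individual, influence, accept,
                       pop_size=50, generations=100):
    """Skeleton of a cultural algorithm: a population component plus a
    belief space that is updated by the best individuals (accept) and
    biases the creation of offspring (influence)."""
    population = [new_individual() for _ in range(pop_size)]
    belief_space = {"situational": None, "normative": None}
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        accept(belief_space, ranked[: pop_size // 5])   # elite updates beliefs
        population = [influence(belief_space, random.choice(ranked[: pop_size // 2]))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

# Illustrative stubs for maximizing f(x) = -x^2 over one real variable:
def accept(bs, elite):                 # remember the best solution seen
    bs["situational"] = max(elite, key=lambda x: -x * x)

def influence(bs, parent):             # mutate the offspring toward the belief
    target = bs["situational"] if bs["situational"] is not None else parent
    return parent + 0.5 * (target - parent) + random.gauss(0, 0.1)

best = cultural_algorithm(lambda x: -x * x, lambda: random.uniform(-10, 10),
                          influence, accept)
```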
Cultural algorithm : Various optimization problems Social simulation Real-parameter optimization
Cultural algorithm : Artificial intelligence Artificial life Evolutionary computation Genetic algorithm Harmony search Machine learning Memetic algorithm Memetics Metaheuristic Social simulation Sociocultural evolution Stochastic optimization Swarm intelligence
Cultural algorithm : Robert G. Reynolds, Ziad Kobti, Tim Kohler: Agent-Based Modeling of Cultural Change in Swarm Using Cultural Algorithms. R. G. Reynolds, “An Introduction to Cultural Algorithms,” in Proceedings of the 3rd Annual Conference on Evolutionary Programming, World Scientific Publishing, pp. 131–139, 1994. Robert G. Reynolds, Bin Peng, Knowledge Learning and Social Swarms in Cultural Systems, Journal of Mathematical Sociology, 29:1-18, 2005. Reynolds, R. G., and Ali, M. Z., “Embedding a Social Fabric Component into Cultural Algorithms Toolkit for an Enhanced Knowledge-Driven Engineering Optimization”, International Journal of Intelligent Computing and Cybernetics (IJICC), Vol. 1, No. 4, pp. 356–378, 2008. Reynolds, R. G., and Ali, M. Z., Exploring Knowledge and Population Swarms via an Agent-Based Cultural Algorithms Simulation Toolkit (CAT), in Proceedings of IEEE Congress on Computational Intelligence 2007.
Defining length : In genetic algorithms and genetic programming, the defining length L(H) is the maximum distance between two defining symbols (that is, symbols that have a fixed value, as opposed to the don't-care symbols, commonly denoted # or *, that can take any value) in a schema H. In tree GP schemata, L(H) is the number of links in the minimum tree fragment that includes all the non-= symbols within a schema H.
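Defining length : As a hedged illustration, the computation for string schemata (treating '#' and '*' as don't-care symbols) can be sketched in a single Python function:

```python
def defining_length(schema, wildcards="#*"):
    """Distance between the first and last fixed (non-wildcard) symbol."""
    fixed = [i for i, s in enumerate(schema) if s not in wildcards]
    return fixed[-1] - fixed[0] if fixed else 0

for h in ("00##0", "1###1", "01###", "##0##"):
    print(h, defining_length(h))   # prints 4, 4, 1, 0 respectively
```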
Defining length : Schemata "00##0", "1###1", "01###", and "##0##" have defining lengths of 4, 4, 1, and 0, respectively. The length is computed by determining the last fixed position and subtracting the first fixed position from it. In genetic algorithms, as the defining length of a schema increases, so does its susceptibility to disruption due to mutation or crossover. == References ==
Differential evolution : Differential evolution (DE) is an evolutionary algorithm that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics, as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee that an optimal solution is ever found. DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc. DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way, the optimization problem is treated as a black box that merely provides a measure of quality for a given candidate solution; the gradient is therefore not needed.
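Differential evolution : A hedged sketch of the classic DE/rand/1/bin scheme on a toy sphere function follows; the control parameters F and CR shown are common textbook defaults rather than universal settings, and all names are illustrative.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Classic DE/rand/1/bin: mutate with a scaled difference vector,
    binomially cross over with the target, and keep the better vector."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Three distinct individuals, none of them the target vector.
            a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)   # guarantees one mutated gene
            trial = [a[k] + F * (b[k] - c[k])
                     if (random.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            # Clamp to bounds, then apply greedy selection (minimization).
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(lambda v: sum(x * x for x in v),
                              bounds=[(-5.0, 5.0)] * 3)
```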
Differential evolution : Storn and Price introduced Differential Evolution in 1995. Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, and constrained optimization; these books also contain surveys of application areas. Surveys of the multi-faceted research on DE can be found in journal articles.