Overfitting : Usually, a learning algorithm is trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training. Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where training data for y can be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function is a priori less probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training data fit to offset the complexity increase, then the new complex function "overfits" the data and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset. When comparing different types of models, complexity cannot be measured solely by counting how many parameters exist in each model; the expressivity of each parameter must be considered as well. For example, it is nontrivial to directly compare the complexity of a neural net (which can track curvilinear relationships) with m parameters to a regression model with n parameters. Overfitting is especially likely in cases where learning was performed too long or where training examples are rare, causing the learner to adjust to very specific random features of the training data that have no causal relation to the target function. In this process of overfitting, the performance on the training examples still increases while the performance on unseen data becomes worse. As a simple example, consider a database of retail purchases that includes the item bought, the purchaser, and the date and time of purchase. It's easy to construct a model that will fit the training set perfectly by using the date and time of purchase to predict the other attributes, but this model will not generalize at all to new data because those past times will never occur again. Generally, a learning algorithm is said to overfit relative to a simpler one if it is more accurate in fitting known data (hindsight) but less accurate in predicting new data (foresight). One can intuitively understand overfitting from the fact that information from all past experience can be divided into two groups: information that is relevant for the future, and irrelevant information ("noise"). Everything else being equal, the more difficult a criterion is to predict (i.e., the higher its uncertainty), the more noise exists in past information that needs to be ignored. The problem is determining which part to ignore. A learning algorithm that can reduce the risk of fitting noise is called "robust."
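To make the parameter-counting point concrete, the following sketch fits both a simple linear model and a needlessly flexible degree-15 polynomial to noisy data generated by a linear function, then compares training and validation error. It is a hedged illustration, not taken from any cited study; the data-generating process, sample sizes, noise level, and polynomial degrees are all assumptions.

```python
# Minimal overfitting sketch using NumPy only; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: y depends linearly on x, plus noise.
x_train = rng.uniform(-1, 1, 20)
y_train = 2.0 * x_train + 1.0 + rng.normal(0, 0.3, x_train.size)
x_val = rng.uniform(-1, 1, 200)
y_val = 2.0 * x_val + 1.0 + rng.normal(0, 0.3, x_val.size)

def fit_and_score(degree):
    """Fit a polynomial of the given degree and return (train MSE, validation MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
    return mse(x_train, y_train), mse(x_val, y_val)

for degree in (1, 15):
    train_mse, val_mse = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, validation MSE {val_mse:.3f}")
# Typically the degree-15 fit has lower training error but higher validation error:
# it has adjusted to the noise in the 20 training points rather than to the signal.
```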
Overfitting : Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is that there is a high bias and low variance detected in the current model or algorithm used (the inverse of overfitting: low bias and high variance). This can be gathered from the Bias-variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error, and irreducible error. With a high bias and low variance, the model will inaccurately represent the data points and thus be insufficiently able to predict future data results (see Generalization error). As shown in Figure 5, a straight line cannot represent all the given data points because it does not follow their curvature; we would expect to see a parabola-shaped curve, as shown in Figure 6 and Figure 1. If we were to use Figure 5 for analysis, we would get false predictive results contrary to the results obtained by analyzing Figure 6. Burnham & Anderson state the following (p. 32): ... an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings.
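A complementary sketch of underfitting, fitting a straight line to parabola-shaped data as described above; the numbers are again illustrative assumptions.

```python
# Minimal underfitting (high bias) sketch; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200)
y = x ** 2 + rng.normal(0, 0.5, x.size)   # parabola-shaped data

line = np.polyfit(x, y, 1)       # underfit: a line cannot follow the curvature
parabola = np.polyfit(x, y, 2)   # matches the true model class

for name, coeffs in (("line", line), ("parabola", parabola)):
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(f"{name}: training MSE {mse:.2f}")
# The line already fails on the training data itself (high bias), while the
# quadratic fit captures the structure.
```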
Overfitting : Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such as linear regression. In particular, it has been shown that overparameterization is essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.
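The following toy sketch conveys the flavour of benign overfitting in linear regression: when many weak, unimportant feature directions exceed the sample size, the minimum-norm interpolator fits the noisy training labels exactly yet still predicts reasonably on fresh data. The dimensions, covariance spectrum, and noise level are illustrative assumptions, and this is an illustration of the phenomenon rather than a verification of the formal conditions.

```python
# Toy illustration of benign overfitting with the minimum-norm linear interpolator.
import numpy as np

rng = np.random.default_rng(0)
n, d_signal, d_junk = 100, 5, 2000          # sample size, strong dims, weak dims (assumed)
sigma_noise = 0.5

def sample(num):
    strong = rng.normal(0, 1.0, (num, d_signal))     # directions that matter
    weak = rng.normal(0, 0.05, (num, d_junk))        # many unimportant directions
    X = np.hstack([strong, weak])
    y = strong @ np.ones(d_signal) + rng.normal(0, sigma_noise, num)
    return X, y

X_train, y_train = sample(n)
X_test, y_test = sample(2000)

# Minimum-norm solution of the underdetermined system X w = y.
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

train_mse = np.mean((X_train @ w - y_train) ** 2)
test_mse = np.mean((X_test @ w - y_test) ** 2)
print(f"train MSE {train_mse:.2e} (interpolates the noisy labels)")
print(f"test  MSE {test_mse:.3f} (compare with the noise floor {sigma_noise**2:.3f})")
```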
Neural network Gaussian process : A Neural Network Gaussian Process (NNGP) is a Gaussian process (GP) obtained as the limit of a certain type of sequence of neural networks. Specifically, a wide variety of network architectures converges to a GP in the infinite-width limit, in the sense of distribution. The concept constitutes an intensional definition, i.e., an NNGP is just a GP, but distinguished by how it is obtained.
Neural network Gaussian process : Bayesian networks are a modeling tool for assigning probabilities to events, and thereby characterizing the uncertainty in a model's predictions. Deep learning and artificial neural networks are approaches used in machine learning to build computational models which learn from training examples. Bayesian neural networks merge these fields. They are a type of neural network whose parameters and predictions are both probabilistic. While standard neural networks often assign high confidence even to incorrect predictions, Bayesian neural networks can more accurately evaluate how likely their predictions are to be correct. Computation in artificial neural networks is usually organized into sequential layers of artificial neurons. The number of neurons in a layer is called the layer width. When we consider a sequence of Bayesian neural networks with increasingly wide layers, they converge in distribution to an NNGP. This large width limit is of practical interest, since the networks often improve as layers get wider, and the limit may give a closed-form way to evaluate networks. NNGPs also appear in several other contexts: they describe the distribution over predictions made by wide non-Bayesian artificial neural networks after random initialization of their parameters, but before training; they appear as a term in neural tangent kernel prediction equations; they are used in deep information propagation to characterize whether hyperparameters and architectures will be trainable. They are related to other large width limits of neural networks.
Neural network Gaussian process : Neural Tangents is a free and open-source Python library used for computing and doing inference with the NNGP and neural tangent kernel corresponding to various common ANN architectures.
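As a rough illustration of the large-width limit (written from scratch, not using the Neural Tangents API), the following NumPy sketch evaluates the NNGP kernel of a fully connected network with a single infinitely wide ReLU hidden layer. The weight and bias variances and the toy inputs are assumptions.

```python
# NNGP kernel of a one-hidden-layer ReLU network in the infinite-width limit (sketch).
import numpy as np

sigma_w2, sigma_b2 = 1.0, 0.1   # assumed prior variances of weights and biases

def nngp_kernel(X):
    """Return the NNGP covariance matrix of the network outputs for inputs X (n x d)."""
    n, d = X.shape
    K = sigma_b2 + sigma_w2 * (X @ X.T) / d          # covariance after the input layer
    # Propagate through one infinitely wide ReLU layer (arc-cosine kernel formula):
    diag = np.sqrt(np.diag(K))
    norm = np.outer(diag, diag)
    cos_theta = np.clip(K / norm, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    relu_expect = norm * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)
    return sigma_b2 + sigma_w2 * relu_expect         # covariance of the scalar output

X = np.array([[1.0, 0.0], [0.8, 0.6], [-1.0, 0.0]])
print(np.round(nngp_kernel(X), 3))
# Similar inputs (rows 0 and 1) get a larger covariance than dissimilar ones (rows 0 and 2).
```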
Cache language model : A cache language model is a type of statistical language model. These occur in the natural language processing subfield of computer science and assign probabilities to given sequences of words by means of a probability distribution. Statistical language models are key components of speech recognition systems and of many machine translation systems: they tell such systems which possible output word sequences are probable and which are improbable. The particular characteristic of a cache language model is that it contains a cache component and assigns relatively high probabilities to words or word sequences that occur elsewhere in a given text. The primary, but by no means sole, use of cache language models is in speech recognition systems. To understand why it is a good idea for a statistical language model to contain a cache component one might consider someone who is dictating a letter about elephants to a speech recognition system. Standard (non-cache) N-gram language models will assign a very low probability to the word "elephant" because it is a very rare word in English. If the speech recognition system does not contain a cache component, the person dictating the letter may be annoyed: each time the word "elephant" is spoken another sequence of words with a higher probability according to the N-gram language model may be recognized (e.g., "tell a plan"). These erroneous sequences will have to be deleted manually and replaced in the text by "elephant" each time "elephant" is spoken. If the system has a cache language model, "elephant" will still probably be misrecognized the first time it is spoken and will have to be entered into the text manually; however, from this point on the system is aware that "elephant" is likely to occur again – the estimated probability of occurrence of "elephant" has been increased, making it more likely that if it is spoken it will be recognized correctly. Once "elephant" has occurred several times, the system is likely to recognize it correctly every time it is spoken until the letter has been completely dictated. This increase in the probability assigned to the occurrence of "elephant" is an example of a consequence of machine learning and more specifically of pattern recognition. There exist variants of the cache language model in which not only single words but also multi-word sequences that have occurred previously are assigned higher probabilities (e.g., if "San Francisco" occurred near the beginning of the text subsequent instances of it would be assigned a higher probability). The cache language model was first proposed in a paper published in 1990, after which the IBM speech-recognition group experimented with the concept. The group found that implementation of a form of cache language model yielded a 24% drop in word-error rates once the first few hundred words of a document had been dictated. A detailed survey of language modeling techniques concluded that the cache language model was one of the few new language modeling techniques that yielded improvements over the standard N-gram approach: "Our caching results show that caching is by far the most useful technique for perplexity reduction at small and medium training data sizes". 
The development of the cache language model has generated considerable interest among those concerned with computational linguistics in general and statistical natural language processing in particular: recently, there has been interest in applying the cache language model in the field of statistical machine translation. The success of the cache language model in improving word prediction rests on the human tendency to use words in a "bursty" fashion: when one is discussing a certain topic in a certain context, the frequency with which one uses certain words will be quite different from their frequencies when one is discussing other topics in other contexts. The traditional N-gram language models, which rely entirely on information from a very small number (four, three, or two) of words preceding the word to which a probability is to be assigned, do not adequately model this "burstiness". Recently, the cache language model concept – originally conceived for the N-gram statistical language model paradigm – has been adapted for use in the neural paradigm. For instance, recent work on continuous cache language models in the recurrent neural network (RNN) setting has applied the cache concept to much larger contexts than before, yielding significant reductions in perplexity. Another recent line of research involves incorporating a cache component in a feed-forward neural language model (FN-LM) to achieve rapid domain adaptation.
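A minimal sketch of the idea at the unigram level follows; the background counts, cache size, interpolation weight, and smoothing floor are illustrative assumptions rather than values from the cited work.

```python
# Cache language model sketch: interpolate a fixed background unigram model with a
# cache of recently observed words, so that rare words like "elephant" become likely
# once they have appeared in the document being dictated.
from collections import deque

class CacheUnigramLM:
    def __init__(self, background_counts, cache_size=200, lam=0.2):
        total = sum(background_counts.values())
        self.background = {w: c / total for w, c in background_counts.items()}
        self.cache = deque(maxlen=cache_size)   # most recent words of the document
        self.lam = lam                          # weight of the cache component (assumed)

    def prob(self, word):
        p_bg = self.background.get(word, 1e-7)  # unseen-word floor (assumed)
        p_cache = self.cache.count(word) / len(self.cache) if self.cache else 0.0
        return (1 - self.lam) * p_bg + self.lam * p_cache

    def observe(self, word):
        self.cache.append(word)                 # adapt to the text seen so far

lm = CacheUnigramLM({"the": 60000, "a": 30000, "plan": 40, "elephant": 2})
print(f"before: P(elephant) = {lm.prob('elephant'):.2e}")
for _ in range(3):
    lm.observe("elephant")                      # 'elephant' has now occurred in the text
print(f"after : P(elephant) = {lm.prob('elephant'):.2e}")
```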
Cache language model : Artificial intelligence, History of natural language processing, History of machine translation, Speech recognition, Statistical machine translation
GPTeens : GPTeens (short for GPT for Teens) is an AI-based chatbot developed by the South Korean company ACROSSPACE. It is built on the Generative pre-trained transformer (GPT) model and incorporates a pipeline structure with additional models to enhance its functionality. The chatbot is expanded using supervised fine-tuning to provide educational materials required by school curricula.
GPTeens : The development of GPTeens began in response to the growing demand for AI-based educational tools in South Korea. In May 2023, a prototype version of the chatbot was introduced to schools for limited testing by teachers and students. The first public version of GPTeens was officially launched in October 2024, featuring educational materials aligned with the South Korean national curriculum.
GPTeens : Interactive Format: Utilizes Natural language processing technology to support conversational interactions with learners. Age-Appropriate Design: Designed to deliver responses suitable for teenage users. Curriculum Integration: Trained on educational materials aligned with the South Korean national curriculum. Safety Measures: Implements content filtering to maintain a safe environment for younger audiences.
GPTeens : GPTeens has been noted as an example of an AI-powered educational resource that integrates curriculum-based content for learners.
GPTeens : Natural Language Processing
Empowerment (artificial intelligence) : Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment-maximising policy acts to maximise future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, thus is a form of intrinsic motivation. The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent state and actions are modelled by random variables \(S\) (with values \(s \in \mathcal{S}\)) and \(A\) (with values \(a \in \mathcal{A}\)), indexed by time \(t\). The choice of action depends on the current state, and the future state depends on the choice of action, thus the perception-action loop unrolled in time forms a causal Bayesian network.
Empowerment (artificial intelligence) : Empowerment (\(E\)) is defined as the channel capacity (\(C\)) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors: \(E := C(A_t \longrightarrow S_{t+1}) \equiv \max_{p(a_t)} I(A_t; S_{t+1})\). In a discrete time model, empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment: \(E(A_t^n \longrightarrow S_{t+n}) = \max_{p(a_t,\ldots,a_{t+n-1})} I(A_t,\ldots,A_{t+n-1}; S_{t+n})\). The unit of empowerment depends on the logarithm base. Base 2 is commonly used, in which case the unit is bits.
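The following sketch computes 1-step empowerment for a small discrete environment by treating \(p(s_{t+1} \mid a_t)\) as a channel and estimating its capacity with the Blahut-Arimoto algorithm; the toy transition matrix is an illustrative assumption.

```python
# 1-step empowerment as channel capacity of p(s'|a), via Blahut-Arimoto (sketch).
import numpy as np

def empowerment(p_s_given_a, iters=200):
    """Channel capacity in bits of p(s'|a), given as an (actions x states) matrix."""
    n_actions = p_s_given_a.shape[0]
    q = np.full(n_actions, 1.0 / n_actions)          # action distribution p(a_t)
    for _ in range(iters):
        r = q @ p_s_given_a                          # marginal over successor states
        with np.errstate(divide="ignore", invalid="ignore"):
            logratio = np.where(p_s_given_a > 0, np.log2(p_s_given_a / r), 0.0)
        d = (p_s_given_a * logratio).sum(axis=1)     # per-action KL divergence in bits
        q = q * np.exp2(d)
        q /= q.sum()
    r = q @ p_s_given_a
    with np.errstate(divide="ignore", invalid="ignore"):
        logratio = np.where(p_s_given_a > 0, np.log2(p_s_given_a / r), 0.0)
    return float((q * (p_s_given_a * logratio).sum(axis=1)).sum())

# Toy channel: actions 0 and 1 lead reliably to distinct states; action 2 is a noisy mix.
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.1],
              [0.25, 0.25, 0.25, 0.25]])
print(f"1-step empowerment ~ {empowerment(P):.3f} bits")
```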
Empowerment (artificial intelligence) : Empowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to the agent. Empowerment has been applied in studies of collective behaviour and in continuous domains. As is the case with Bayesian methods in general, computation of empowerment becomes computationally expensive as the number of actions and time horizon extends, but approaches to improve efficiency have led to usage in real-time control. Empowerment has been used for intrinsically motivated reinforcement learning agents playing video games, and in the control of underwater vehicles.
Granular computing : Granular computing is an emerging computing paradigm of information processing that concerns the processing of complex information entities called "information granules", which arise in the process of data abstraction and derivation of knowledge from information or data. Generally speaking, information granules are collections of entities that usually originate at the numeric level and are arranged together due to their similarity, functional or physical adjacency, indistinguishability, coherency, or the like. At present, granular computing is more a theoretical perspective than a coherent set of methods or principles. As a theoretical perspective, it encourages an approach to data that recognizes and exploits the knowledge present in data at various levels of resolution or scales. In this sense, it encompasses all methods which provide flexibility and adaptability in the resolution at which knowledge or information is extracted and represented.
Granular computing : As mentioned above, granular computing is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity, much as different features become salient in satellite images of greater or lesser resolution. On a low-resolution satellite image, for example, one might notice interesting cloud patterns representing cyclones or other large-scale weather phenomena, while in a higher-resolution image, one misses these large-scale atmospheric phenomena but instead notices smaller-scale phenomena, such as the interesting pattern that is the streets of Manhattan. The same is generally true of all data: At different resolutions or granularities, different features and relationships emerge. The aim of granular computing is to try to take advantage of this fact in designing more effective machine-learning and reasoning systems. There are several types of granularity that are often encountered in data mining and machine learning, and we review them below:
Granular computing : Granular computing can be conceived as a framework of theories, methodologies, techniques, and tools that make use of information granules in the process of problem solving. In this sense, granular computing is used as an umbrella term to cover topics that have been studied in various fields in isolation. By examining all of these existing studies in light of the unified framework of granular computing and extracting their commonalities, it may be possible to develop a general theory for problem solving. In a more philosophical sense, granular computing can describe a way of thinking that relies on the human ability to perceive the real world under various levels of granularity (i.e., abstraction) in order to abstract and consider only those things that serve a specific interest and to switch among different granularities. By focusing on different levels of granularity, one can obtain different levels of knowledge, as well as a greater understanding of the inherent knowledge structure. Granular computing is thus essential in human problem solving and hence has a very significant impact on the design and implementation of intelligent systems.
Granular computing : Rough sets, Discretization, Type-2 fuzzy sets and systems
SUPS : In computational neuroscience, SUPS (for Synaptic Updates Per Second) or formerly CUPS (Connections Updates Per Second) is a measure of neuronal network performance, useful in fields of neuroscience, cognitive science, artificial intelligence, and computer science.
SUPS : For a processor or computer designed to simulate a neural network, SUPS is measured as the product of the number of simulated neurons \(N\) and the average connectivity \(c\) (synapses) per neuron, per second: \(\mathrm{SUPS} = c \times N\). Depending on the type of simulation, it is usually equal to the total number of synapses simulated. In an "asynchronous" dynamic simulation, if a neuron spikes at \(\upsilon\) Hz, the average rate of synaptic updates provoked by the activity of that neuron is \(\upsilon c N\). In a synchronous simulation with step \(\Delta t\), the number of synaptic updates per second would be \(\frac{cN}{\Delta t}\). Since \(\Delta t\) has to be chosen much smaller than the average interval between two successive afferent spikes, which implies \(\Delta t < \frac{1}{\upsilon N}\), the average number of synaptic updates per second equals \(\upsilon c N^{2}\). Therefore, spike-driven synaptic dynamics leads to a linear scaling of computational complexity, O(N) per neuron, compared with O(N²) in the "synchronous" case.
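A short worked example of these formulas, with purely hypothetical values for the network size, connectivity, firing rate, and time step:

```python
# Worked example of the SUPS formulas above; all parameter values are hypothetical.
N = 10_000        # simulated neurons
c = 1_000         # average synapses per neuron
v = 10.0          # average firing rate in Hz
dt = 1e-6         # synchronous time step in seconds (must satisfy dt < 1/(v*N))

async_updates = v * c * N          # event-driven (asynchronous) synaptic updates per second
sync_updates = c * N / dt          # synchronous: every synapse updated at every step
print(f"asynchronous: {async_updates:.2e} SUPS")
print(f"synchronous : {sync_updates:.2e} SUPS")
print(f"dt < 1/(v*N) satisfied: {dt < 1 / (v * N)}")
```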
SUPS : Developed in the 1980s, Adaptive Solutions' CNAPS-1064 Digital Parallel Processor chip is a full neural network (NNW). It was designed as a coprocessor to a host and has 64 sub-processors arranged in a 1D array and operating in a SIMD mode. Each sub-processor can emulate one or more neurons, and multiple chips can be grouped together. At 25 MHz it is capable of 1.28 GMAC. After the presentation of the RN-100 (12 MHz) single-neuron chip in Seattle in 1991, Ricoh developed the multi-neuron chip RN-200. It had 16 neurons and 16 synapses per neuron. The chip has on-chip learning ability using a proprietary backpropagation algorithm. It came in a 257-pin PGA encapsulation and drew 3.0 W at a maximum. It was capable of 3 GCPS (1 GCPS at 32 MHz). In 1991–97, Siemens developed the MA-16 chip, SYNAPSE-1 and SYNAPSE-3 Neurocomputer. The MA-16 was a fast matrix-matrix multiplier that can be combined to form systolic arrays. It could process 4 patterns of 16 elements each (16-bit), with 16 neuron values (16-bit) at a rate of 800 MMAC or 400 MCPS at 50 MHz. The SYNAPSE3-PC PCI card contained 2 MA-16 chips with a peak performance of 2560 MOPS (1.28 GMAC); 7160 MOPS (3.58 GMAC) when using three boards. In 2013, the K computer was used to simulate a neural network of 1.73 billion neurons with a total of 10.4 trillion synapses (1% of the human brain). The simulation ran for 40 minutes to simulate 1 s of brain activity at a normal activity level (4.4 on average). The simulation required 1 petabyte of storage.
SUPS : FLOP, SPECint, SPECfp, Multiply–accumulate operation, Orders of magnitude (computing), SyNAPSE
Embedding (machine learning) : Embedding in machine learning refers to a representation learning technique that maps complex, high-dimensional data into a lower-dimensional vector space of numerical vectors. It also denotes the resulting representation, where meaningful patterns or relationships are preserved. As a technique, it learns these vectors from data like words, images, or user interactions, differing from manually designed methods such as one-hot encoding. This process reduces complexity and captures key features without needing prior knowledge of the problem area (domain). For example, in natural language processing (NLP), it might represent "cat" as [0.2, -0.4, 0.7], "dog" as [0.3, -0.5, 0.6], and "car" as [0.8, 0.1, -0.2], placing "cat" and "dog" close together in the space—reflecting their similarity—while "car" is farther away. The resulting embeddings vary by type, including word embeddings for text (e.g., Word2Vec), image embeddings for visual data, and knowledge graph embeddings for knowledge graphs, each tailored to tasks like NLP, computer vision, or recommendation systems. This dual role enhances model efficiency and accuracy by automating feature extraction and revealing latent similarities across diverse applications.
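Using the toy vectors from the example above, a few lines of code show how cosine similarity in the embedding space reflects the intended semantic relationships.

```python
# Cosine similarity on the toy embedding vectors quoted in the text.
import numpy as np

embeddings = {
    "cat": np.array([0.2, -0.4, 0.7]),
    "dog": np.array([0.3, -0.5, 0.6]),
    "car": np.array([0.8, 0.1, -0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(f"cat~dog: {cosine(embeddings['cat'], embeddings['dog']):.2f}")  # high similarity
print(f"cat~car: {cosine(embeddings['cat'], embeddings['car']):.2f}")  # near zero / negative
```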
Embedding (machine learning) : Feature extraction, Dimensionality reduction, Word embedding, Neural network, Reinforcement learning
Formal concept analysis : In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s. Formal concept analysis finds practical application in fields including data mining, text mining, machine learning, knowledge management, semantic web, software development, chemistry and biology.
Formal concept analysis : The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that the extent A consists of all objects that share the attributes in B, and dually the intent B consists of all attributes shared by the objects in A. In this way, formal concept analysis formalizes the semantic notions of extension and intension. The formal concepts of any formal context can—as explained below—be ordered in a hierarchy called more formally the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which then may be helpful for understanding the data. Often however these lattices get too large for visualization. Then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure which is easier to interpret. The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Other previous approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic.
Formal concept analysis : In his article "Restructuring Lattice Theory" (1982), initiating formal concept analysis as a mathematical discipline, Wille starts from a discontent with the current lattice theory and pure mathematics in general: The production of theoretical results—often achieved by "elaborate mental gymnastics"—were impressive, but the connections between neighboring domains, even parts of a theory were getting weaker. Restructuring lattice theory is an attempt to reinvigorate connections with our general culture by interpreting the theory as concretely as possible, and in this way to promote better communication between lattice theorists and potential users of lattice theory This aim traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for restructuring sciences in view of better teaching and in order to make sciences mutually available and more generally (i.e. also without specialized knowledge) critiqueable. Hence, by its origins formal concept analysis aims at interdisciplinarity and democratic control of research. It corrects the starting point of lattice theory during the development of formal logic in the 19th century. Then—and later in model theory—a concept as unary predicate had been reduced to its extent. Now again, the philosophy of concepts should become less abstract by considering the intent. Hence, formal concept analysis is oriented towards the categories extension and intension of linguistics and classical conceptual logic. Formal concept analysis aims at the clarity of concepts according to Charles S. Peirce's pragmatic maxim by unfolding observable, elementary properties of the subsumed objects. In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, by the triade concept, judgement and conclusion. Mathematics is an abstraction of logic, develops patterns of possible realities and therefore may support rational communication. On this background, Wille defines: The aim and meaning of Formal Concept Analysis as mathematical theory of concepts and concept hierarchies is to support the rational communication of humans by mathematically developing appropriate conceptual structures which can be logically activated.
Formal concept analysis : The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes. For the purpose here it has been simplified. The data table represents a formal context, the line diagram next to it shows its concept lattice. Formal definitions follow below. The above line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The lines allow to read off the subconcept-superconcept hierarchy. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute. In the diagram shown, e.g. the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural. The original formal context can be reconstructed from the labelled diagram, as well as the formal concepts. The extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept. The intent consists of those attributes to which there is an ascending path from that concept circle (in the diagram). In this diagram the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea.
Formal concept analysis : A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation operators as follows: A′ = {m ∈ M | (g, m) ∈ I for all g ∈ A}, i.e., the set of all attributes shared by all objects from A, and dually B′ = {g ∈ G | (g, m) ∈ I for all m ∈ B}, i.e., the set of all objects sharing all attributes from B. Applying either derivation operator and then the other constitutes two closure operators: A ↦ A′′ = (A′)′ for A ⊆ G (extent closure), and B ↦ B′′ = (B′)′ for B ⊆ M (intent closure). The derivation operators define a Galois connection between sets of objects and of attributes. This is why in French a concept lattice is sometimes called a treillis de Galois (Galois lattice). With these derivation operators, Wille gave an elegant definition of a formal concept: a pair (A, B) is a formal concept of a context (G, M, I) provided that: A ⊆ G, B ⊆ M, A′ = B, and B′ = A. Equivalently and more intuitively, (A, B) is a formal concept precisely when: every object in A has every attribute in B, for every object in G that is not in A, there is some attribute in B that the object does not have, and for every attribute in M that is not in B, there is some object in A that does not have that attribute. For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry ki,j equals 1 if "object i has attribute j". In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose elements equal 1. It is however misleading to consider a formal context as boolean, because the negated incidence ("object g does not have attribute m") is not concept forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence.
Formal concept analysis : The concepts (Ai, Bi) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ⊆ A2. Equivalently, (A1, B1) ≤ (A2, B2) whenever B1 ⊇ B2. In this order, every set of formal concepts has a greatest common subconcept, or meet. Its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes which all objects of that set of concepts have. These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism).
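The derivation operators and the concept order just defined can be made concrete with a few lines of code; the tiny context below is a made-up illustration (loosely inspired by, but not identical to, the bodies-of-water example), and the enumeration is the naive one, not one of the fast algorithms mentioned later.

```python
# Derivation operators, formal concepts, and the subconcept order for a toy context.
from itertools import combinations

G = ["puddle", "river", "sea"]                       # objects (illustrative)
M = ["stagnant", "natural", "maritime"]              # attributes (illustrative)
I = {("puddle", "stagnant"), ("puddle", "natural"),
     ("river", "natural"),
     ("sea", "stagnant"), ("sea", "natural"), ("sea", "maritime")}

def prime_objects(A):      # A' : attributes shared by all objects in A
    return {m for m in M if all((g, m) in I for g in A)}

def prime_attributes(B):   # B' : objects having all attributes in B
    return {g for g in G if all((g, m) in I for m in B)}

# Enumerate all formal concepts (A, B) by closing every subset of G.
concepts = set()
for r in range(len(G) + 1):
    for subset in combinations(G, r):
        B = prime_objects(set(subset))
        A = prime_attributes(B)
        concepts.add((frozenset(A), frozenset(B)))

for A, B in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(A), "|", sorted(B))

# The subconcept order: (A1, B1) <= (A2, B2) iff A1 is a subset of A2.
leq = lambda c1, c2: c1[0] <= c2[0]
```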
Formal concept analysis : Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context. The method is called conceptual scaling. The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., with (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation. But pairs of attributes which are negations of each other often naturally occur, for example in contexts derived from conceptual scaling. For possible negations of formal concepts see the section concept algebras below.
Formal concept analysis : An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G,M,I) is a formal context and A, B are subsets of the set M of attributes (i.e., A,B ⊆ M), then the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis, an irredundant set of implications from which all valid implications can be derived by the natural inference (Armstrong rules). This is used in attribute exploration, a knowledge acquisition method based on implications.
Formal concept analysis : Formal concept analysis has elaborate mathematical foundations, making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined as follows: For g ∈ G and m ∈ M let g ↗ m ⇔ (g, m) ∉ I and, if m′ ⊆ n′ and m′ ≠ n′, then (g, n) ∈ I, and dually g ↙ m ⇔ (g, m) ∉ I and, if g′ ⊆ h′ and g′ ≠ h′, then (h, m) ∈ I. Since only non-incident object-attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice.
Formal concept analysis : Triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence ⁠ ( g , m , c ) ⁠ then expresses that the object g has the attribute m under the condition c. Although triadic concepts can be defined in analogy to the formal concepts above, the theory of the trilattices formed by them is much less developed than that of concept lattices, and seems to be difficult. Voutsadakis has studied the n-ary case. Fuzzy concept analysis: Extensive work has been done on a fuzzy version of formal concept analysis. Concept algebras: Modelling negation of formal concepts is somewhat problematic because the complement (G \ A, M \ B) of a formal concept (A, B) is in general not a concept. However, since the concept lattice is complete one can consider the join (A, B)Δ of all concepts (C, D) that satisfy C ⊆ G \ A; or dually the meet (A, B)𝛁 of all concepts satisfying D ⊆ M \ B. These two operations are known as weak negation and weak opposition, respectively. This can be expressed in terms of the derivation operators. Weak negation can be written as (A, B)Δ = ((G \ A)″, (G \ A)'), and weak opposition can be written as (A, B)𝛁 = ((M \ B)', (M \ B)″). The concept lattice equipped with the two additional operations Δ and 𝛁 is known as the concept algebra of a context. Concept algebras generalize power sets. Weak negation on a concept lattice L is a weak complementation, i.e. an order-reversing map Δ: L → L which satisfies the axioms xΔΔ ≤ x and (x⋀y) ⋁ (x⋀yΔ) = x. Weak opposition is a dual weak complementation. A (bounded) lattice such as a concept algebra, which is equipped with a weak complementation and a dual weak complementation, is called a weakly dicomplemented lattice. Weakly dicomplemented lattices generalize distributive orthocomplemented lattices, i.e. Boolean algebras.
Formal concept analysis : There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov or the book by Ganter and Obiedkov, where also some pseudo-code can be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms usually is given with respect to the output size. Concept lattices with a few million elements can be handled without problems. Many FCA software applications are available today. The main purpose of these tools varies from formal context creation to formal concept mining and generating the concepts lattice of a given formal context and the corresponding implications and association rules. Most of these tools are academic open-source applications, such as: ConExp ToscanaJ Lattice Miner Coron FcaBedrock GALACTIC
Formal concept analysis : Formal concept analysis can be used as a qualitative method for data analysis. Since the beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), in fields including medicine and cell biology, genetics, ecology, software engineering, ontology, information and library sciences, office administration, law, linguistics, and political science. Many more examples are described, e.g., in Formal Concept Analysis. Foundations and Applications, and in conference papers at regular conferences such as the International Conference on Formal Concept Analysis (ICFCA), Concept Lattices and their Applications (CLA), or the International Conference on Conceptual Structures (ICCS).
Domain adaptation : Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain). A common example is spam filtering, where a model trained on emails from one user (source domain) is adapted to handle emails for another user with significantly different patterns (target domain). Domain adaptation techniques can also leverage unrelated data sources to improve learning. When multiple source distributions are involved, the problem extends to multi-source domain adaptation. Domain adaptation is a specialized area within transfer learning. In domain adaptation, the source and target domains share the same feature space but differ in their data distributions. In contrast, transfer learning encompasses broader scenarios, including cases where the target domain’s feature space differs from that of the source domain(s).
Domain adaptation : Domain adaptation setups are classified in two different ways: according to the distribution shift between the domains, and according to the available data from the target domain.
Domain adaptation : Let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a learning sample \(S = \{(x_i, y_i) \in (X \times Y)\}_{i=1}^{m}\). Usually in supervised learning (without domain adaptation), we suppose that the examples \((x_i, y_i) \in S\) are drawn i.i.d. from a distribution \(D_S\) of support X × Y (unknown and fixed). The objective is then to learn h (from S) such that it commits the least error possible for labelling new examples coming from the distribution \(D_S\). The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions \(D_S\) and \(D_T\) on X × Y. The domain adaptation task then consists of the transfer of knowledge from the source domain \(D_S\) to the target one \(D_T\). The goal is then to learn h (from labeled or unlabelled samples coming from the two domains) such that it commits as little error as possible on the target domain \(D_T\). The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
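One simple approach is sketched below: correlation alignment (CORAL) re-colors the labelled source sample so that its second-order statistics match the target sample before an ordinary classifier is trained. CORAL is only one of many domain adaptation techniques, and the synthetic source and target domains here are illustrative assumptions.

```python
# Unsupervised domain adaptation sketch via correlation alignment (CORAL).
import numpy as np

def matrix_sqrt(C):
    vals, vecs = np.linalg.eigh(C)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def coral(Xs, Xt, eps=1e-3):
    """Map source features Xs into the second-order statistics of the target Xt."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return (Xs - Xs.mean(0)) @ np.linalg.inv(matrix_sqrt(Cs)) @ matrix_sqrt(Ct) + Xt.mean(0)

def fit_linear(X, y):
    """Least-squares linear classifier; returns a predict function."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.lstsq(Xb, 2.0 * y - 1.0, rcond=None)[0]
    return lambda Z: (np.hstack([Z, np.ones((len(Z), 1))]) @ w > 0).astype(int)

rng = np.random.default_rng(0)
Zs, Zt = rng.normal(size=(500, 2)), rng.normal(size=(500, 2))
Xs, ys = Zs, (Zs.sum(axis=1) > 0).astype(int)                     # labelled source domain
Xt = Zt @ np.diag([3.0, 0.5]) + np.array([2.0, -1.0])             # shifted/stretched target
yt = (Zt.sum(axis=1) > 0).astype(int)                             # same underlying concept

acc_plain = (fit_linear(Xs, ys)(Xt) == yt).mean()
acc_coral = (fit_linear(coral(Xs, Xt), ys)(Xt) == yt).mean()
print(f"target accuracy without adaptation: {acc_plain:.2f}")
print(f"target accuracy with CORAL:         {acc_coral:.2f}")
```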
Domain adaptation : Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades: SKADA (Python), ADAPT (Python), TLlib (Python), Domain-Adaptation-Toolbox (MATLAB).
Google Clips : Google Clips is a discontinued miniature clip-on camera device developed by Google.
Google Clips : It was announced during Google's "Made By Google" event on October 4, 2017. It was released for sale on January 27, 2018. With a flashing light-emitting diode (LED) that indicates it is recording, Google Clips automatically captures video clips at moments its machine learning algorithms determine to be interesting or relevant. Google Clips' artificial intelligence (AI) learns the faces of people so it can learn to take photos with certain people. Google Clips can automatically set lighting and framing. It had 16 GB of built-in storage and could record clips for up to 3 hours. The camera was originally priced at $249 in the United States. The product was pulled from the Google Store on October 15, 2019. Google said that the product would be supported until the end of December 2021.
Google Clips : The Independent wrote that Google Clips is "an impressive little device, but one that also has the potential to feel very creepy." According to The Verge's review, "it didn't capture anything special" over a couple of weeks' worth of testing. This, added to the steep price, made Google Clips a tough sell.
Language resource : In linguistics and language technology, a language resource is a "[composition] of linguistic material used in the construction, improvement and/or evaluation of language processing applications, (...) in language and language-mediated research studies and applications." According to Bird & Simons (2003), this includes data, i.e. "any information that documents or describes a language, such as a published monograph, a computer data file, or even a shoebox full of handwritten index cards. The information could range in content from unanalyzed sound recordings to fully transcribed and annotated texts to a complete descriptive grammar", tools, i.e., "computational resources that facilitate creating, viewing, querying, or otherwise using language data", and advice, i.e., "any information about what data sources are reliable, what tools are appropriate in a given situation, what practices to follow when creating new data". The latter aspect is usually referred to as "best practices" or "(community) standards". In a narrower sense, language resource is specifically applied to resources that are available in digital form, and then, "encompassing (a) data sets (textual, multimodal/multimedia and lexical data, grammars, language models, etc.) in machine readable form, and (b) tools/technologies/services used for their processing and management".
Language resource : As of May 2020, no widely used standard typology of language resources has been established (current proposals include the LREMap, METASHARE, and, for data, the LLOD classification). Important classes of language resources include: data, such as lexical resources (e.g., machine-readable dictionaries), linguistic corpora (i.e., digital collections of natural language data), and linguistic databases such as the Cross-Linguistic Linked Data collection; tools, such as linguistic annotations and tools for creating such annotations in a manual or semiautomated fashion (e.g., tools for annotating interlinear glossed text such as Toolbox and FLEx, or other language documentation tools), applications for search and retrieval over such data (corpus management systems), and tools for automated annotation (part-of-speech tagging, syntactic parsing, semantic parsing, etc.); and metadata and vocabularies, such as vocabularies and repositories of linguistic terminology and language metadata, e.g., MetaShare (for language resource metadata), the ISO 12620 data category registry (for linguistic features, data structures and annotations within a language resource), or the Glottolog database (identifiers for language varieties and a bibliographical database).
Language resource : A major concern of the language resource community has been to develop infrastructures and platforms to present, discuss and disseminate language resources. Selected contributions in this regard include: a series of International Conferences on Language Resources and Evaluation (LREC); the European Language Resources Association (ELRA, EU-based) and the Linguistic Data Consortium (LDC, US-based), which represent commercial hosting and dissemination platforms for language resources; the Open Languages Archives Community (OLAC), which provides and aggregates language resource metadata; the Language Resources and Evaluation Journal (LREJ); and the European Language Grid, a European platform for language technologies (e.g., services), data and resources. As for the development of standards and best practices for language resources, these are the subject of several community groups and standardization efforts, including ISO Technical Committee 37: Terminology and other language and content resources (ISO/TC 37), developing standards for all aspects of language resources; the W3C Community Group Best Practices for Multilingual Linked Open Data (BPMLOD), working on best practice recommendations for publishing language resources as Linked Data or in RDF; the W3C Community Group Linked Data for Language Technology (LD4LT), working on linguistic annotations on the web and language resource metadata; the W3C Community Group Ontology-Lexica (OntoLex), working on lexical resources; the Open Linguistics working group of the Open Knowledge Foundation, working on conventions for publishing and linking open language resources and developing the Linguistic Linked Open Data cloud; and the Text Encoding Initiative (TEI), working on XML-based specifications for language resources and digitally edited text.
T5 (language model) : T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI introduced in 2019. Like the original Transformer model, T5 models are encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text. T5 models are usually pretrained on a massive dataset of text and code, after which they can perform the text-based tasks that are similar to their pretrained tasks. They can also be finetuned to perform other tasks. T5 models have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics.
T5 (language model) : The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications. The T5 models were pretrained on many tasks, all in the format of <input text> -> <output text>. Some examples are: restoring corrupted text: Thank you <X> me to your party <Y> week. -> <X> for inviting <Y> last <Z>, where the <Z> means "end of output", and the <X> and <Y> denote blanks to be filled, called "sentinels" in the original report. translation: translate English to German: That is good. -> Das ist gut.. judging the grammatical acceptability of a sentence (CoLA sentence): The course is jumping well. -> not acceptable .
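For illustration, the translation format above can be reproduced with a public checkpoint through the Hugging Face transformers library (assuming transformers, sentencepiece, and a PyTorch backend are installed; "t5-small" is one of the released sizes).

```python
# Running the "translate English to German:" task format with a pretrained T5 checkpoint.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: That is good.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # e.g. "Das ist gut."
```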
T5 (language model) : The T5 series encompasses several models with varying sizes and capabilities, all encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text. These models are often distinguished by their parameter count, which indicates the complexity and potential capacity of the model. The original paper reported five models: small, base, large, 3B, and 11B. The encoder and the decoder have the same shape; for example, T5-small has 6 layers in the encoder and 6 layers in the decoder. In the original paper's notation, n_layer is the number of layers in the encoder (the decoder always has the same number of layers); n_head is the number of attention heads in each attention block; d_model is the dimension of the embedding vectors; d_ff is the dimension of the feedforward network within each encoder and decoder layer; and d_kv is the dimension of the key and value vectors used in the self-attention mechanism. Note that, unlike typical Transformers, the 3B and 11B models do not satisfy \(d_\text{model} = d_\text{kv} \cdot n_\text{head}\). Compared to the original Transformer, T5 uses a few minor modifications: layer normalization with no additive bias; placing the layer normalization outside the residual path; relative positional embedding. For all experiments, they used a WordPiece tokenizer, with vocabulary size 32,000. The tokenizer is shared across both the input and output of each model. It was trained on a mixture of English, German, French, and Romanian data from the C4 dataset, at a ratio of 10:1:1:1.
T5 (language model) : Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them. This section attempts to collect the main ones. An exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X. Some models are trained from scratch while others are trained by starting from a previously trained model. By default, each model is trained from scratch, except where otherwise noted. T5 small, base, large, 3B, 11B (2019): The original models. T5 1.1 small, base, large, XL, XXL: Improved versions of the original T5 series, with roughly equal parameter counts. The activation function is GEGLU instead of ReLU. The 3B and the 11B were changed to "XL" and "XXL", and their shapes are changed. LM-adapted T5 (2021): a series of models (from small to XXL) that started from checkpoints of the T5 series, but were trained further on 100B additional tokens from C4. Switch Transformer (2021): a mixture-of-experts variant of T5, obtained by replacing the feedforward layers in the encoder and decoder blocks with mixture-of-experts feedforward layers. T0 3B, 11B (2021): a series of models that started from checkpoints of LM-adapted T5, and were further trained to perform tasks based only on task instruction (zero-shot). Different entries in the series use different finetuning data. ByT5 (2021): a byte-level version of T5, trained on the mC4 (multilingual C4) dataset. It operates on text encoded as UTF-8 bytes, without tokenizers. Flan-T5-XL (2022): a model that started with a checkpoint of T5 XL, then instruction-tuned on the FLAN dataset. T5X (2022): a JAX-based re-implementation of the original T5 codebase. It is not a model. The original T5 codebase was implemented in TensorFlow with MeshTF. UL2 20B (2022): a model with the same architecture as the T5 series, but scaled up to 20B, and trained with a "mixture of denoisers" objective on C4. It was trained for longer than planned when a training run on a TPU cluster was accidentally left running for a month. Flan-UL2 20B (2022): UL2 20B instruction-finetuned on the FLAN dataset. Pile-T5 (2024): has the same architecture as T5, except it used the Llama tokenizer. It was trained on The Pile. It came in sizes of base, large, XL, XXL.
T5 (language model) : The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following. The encoder encodes the instruction, and the decoder autoregressively generates the reply. The T5 encoder can be used as a text encoder, much like BERT. It encodes a text into a sequence of real-number vectors, which can be used for downstream applications. For example, Google Imagen uses T5-XXL as text encoder, and the encoded text vectors are used as conditioning on a diffusion model. As another example, the AuraFlow diffusion model uses Pile-T5-XL.
T5 (language model) : "T5 release - a google Collection". huggingface.co. 2024-07-31. Retrieved 2024-10-16. == Notes ==
Admissible heuristic : In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path. In other words, it should act as a lower bound. It is related to the concept of consistent heuristics. While all consistent heuristics are admissible, not all admissible heuristics are consistent.
Admissible heuristic : An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In order for a heuristic to be admissible to the search problem, the estimated cost must always be lower than or equal to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated optimal path to the goal state from the current node. For example, in A* search the evaluation function (where n is the current node) is f(n) = g(n) + h(n), where f(n) is the evaluation function, g(n) is the cost from the start node to the current node, and h(n) is the estimated cost from the current node to the goal; h(n) is calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook the optimal solution to a search problem due to an overestimation in f(n).
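To make the evaluation function concrete, the following sketch runs A* on a small 4-connected grid, using the Manhattan distance as h(n); with unit step costs this heuristic never overestimates the remaining cost and is therefore admissible. The grid, the wall layout and the function names are illustrative assumptions, not part of the article:

```python
# Sketch: A* search on a 4-connected grid with an admissible heuristic
# (Manhattan distance, which never overestimates when each step costs 1).
import heapq

def a_star(start, goal, walls, width, height):
    def h(node):  # admissible heuristic: Manhattan distance to the goal
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g                      # cost of an optimal path
        if g > best_g.get(node, float("inf")):
            continue                      # stale queue entry
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if nxt in walls:
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None                           # goal unreachable

print(a_star((0, 0), (3, 3), walls={(1, 1), (2, 1)}, width=4, height=4))  # 6
```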
Admissible heuristic : Let n be a node, h a heuristic, h(n) the cost indicated by h to reach a goal from n, and h*(n) the optimal cost to reach a goal from n. Then h(n) is admissible if, for all n, h(n) ≤ h*(n).
Admissible heuristic : An admissible heuristic can be derived from a relaxed version of the problem, or by information from pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods.
Admissible heuristic : Two different examples of admissible heuristics apply to the fifteen puzzle problem: Hamming distance and Manhattan distance. The Hamming distance is the total number of misplaced tiles. It is clear that this heuristic is admissible, since the total number of moves to order the tiles correctly is at least the number of misplaced tiles (each tile not in place must be moved at least once). The cost (number of moves) to the goal (an ordered puzzle) is at least the Hamming distance of the puzzle. The Manhattan distance of a puzzle is defined as: h(n) = Σ over all tiles of distance(tile, correct position). Consider the puzzle below in which the player wishes to move each tile such that the numbers are ordered. The Manhattan distance is an admissible heuristic in this case because every tile will have to be moved at least the number of spots in between itself and its correct position. The subscripts show the Manhattan distance for each tile. The total Manhattan distance for the shown puzzle is: h(n) = 3 + 1 + 0 + 1 + 2 + 3 + 3 + 4 + 3 + 2 + 4 + 4 + 4 + 1 + 1 = 36
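Both heuristics can be computed directly from a puzzle state. The sketch below assumes a flat-list board encoding (row by row, with 0 for the blank and the goal being 1–15 followed by the blank); the example board is an arbitrary illustrative state, not the puzzle referred to above:

```python
# Sketch: Hamming and Manhattan heuristics for the fifteen puzzle.
# The board is a flat list of 16 entries, row by row, with 0 for the blank;
# the goal state is 1..15 followed by the blank.
def hamming(board):
    return sum(1 for i, tile in enumerate(board)
               if tile != 0 and tile != i + 1)

def manhattan(board):
    total = 0
    for i, tile in enumerate(board):
        if tile == 0:
            continue                                # the blank does not count
        row, col = divmod(i, 4)                     # current position
        goal_row, goal_col = divmod(tile - 1, 4)    # where the tile belongs
        total += abs(row - goal_row) + abs(col - goal_col)
    return total

example = [5, 1, 2, 4,
           9, 6, 3, 8,
           13, 10, 7, 11,
           14, 15, 0, 12]                           # illustrative scrambled state
print(hamming(example), manhattan(example))
```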
Admissible heuristic : If an admissible heuristic is used in an algorithm that, per iteration, progresses only the path of lowest evaluation (current cost + heuristic) of several candidate paths, terminates the moment its exploration reaches the goal and, crucially, never closes all optimal paths before terminating (something that is possible with the A* search algorithm if special care is not taken), then this algorithm can only terminate on an optimal path. To see why, consider the following proof by contradiction. Assume such an algorithm managed to terminate on a path T with a true cost T_true greater than that of the optimal path S with true cost S_true. This means that before terminating, the evaluated cost of T was less than or equal to the evaluated cost of S (or else S would have been picked). Denote these evaluated costs T_eval and S_eval respectively. The above can be summarized as follows: S_true < T_true and T_eval ≤ S_eval. If our heuristic is admissible, it follows that at this penultimate step T_eval = T_true, because any increase of the true cost by the heuristic on T would be inadmissible and the heuristic cannot be negative. On the other hand, an admissible heuristic requires that S_eval ≤ S_true, which combined with the above inequalities gives T_eval < T_true and more specifically T_eval ≠ T_true. Since T_eval and T_true cannot be both equal and unequal, the assumption must have been false, and so it is impossible to terminate on a path more costly than the optimal one. As an example, consider the following graph, in which the number above or below a node is its heuristic value and the number on an edge is the actual edge cost:

  0      10      0      100      0
START -------- O ------------- GOAL
  |                              |
 0|                              |100
  |                              |
  O --------- O --------------- O
 100     1   100        1      100

We would start off visiting the top middle node, since its expected total cost, i.e. f(n), is 10 + 0 = 10. Then the goal would become a candidate, with f(n) equal to 10 + 100 + 0 = 110. We would then pick the bottom nodes one after the other, followed by the updated goal, since they all have f(n) lower than the f(n) of the current goal candidate; their f(n) values are 100, 101, 102, 102. So even though the goal was a candidate, we could not pick it, because better paths were still available. This way, an admissible heuristic can ensure optimality. Note, however, that although an admissible heuristic can guarantee final optimality, it is not necessarily efficient.
Admissible heuristic : Consistent heuristic Heuristic function Search algorithm == References ==
Programming by example : In computer science, programming by example (PbE), also termed programming by demonstration or, more generally, demonstrational programming, is an end-user development technique for teaching a computer new behavior by demonstrating actions on concrete examples. The system records user actions and infers a generalized program that can be used on new examples. PbE is intended to be easier than traditional computer programming, which generally requires learning and using a programming language. Many PbE systems have been developed as research prototypes, but few have found widespread real-world application. More recently, PbE has proved to be a useful paradigm for creating scientific workflows; it is used in two independent clients for the BioMOBY protocol: Seahawk and Gbrowse moby. The term programming by demonstration (PbD) has mostly been adopted by robotics researchers for teaching a robot new behaviors through a physical demonstration of the task. The usual distinction in the literature between these terms is that in PbE the user gives a prototypical product of the computer execution, such as a row in the desired results of a query, while in PbD the user performs a sequence of actions that the computer must repeat, generalizing them for use on different data sets. For end users wishing to automate a workflow in a complex tool (e.g. Photoshop), the simplest case of PbD is the macro recorder.
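The inference step at the heart of PbE can be illustrated with a toy sketch: given a few input–output pairs, enumerate a tiny space of candidate string transformations and keep the first one consistent with every example. Real PbE systems search far richer program spaces; the candidate set and names below are purely illustrative assumptions:

```python
# Toy programming-by-example: infer a string transformation from examples
# by enumerating a tiny space of candidate programs.
CANDIDATES = {
    "uppercase": str.upper,
    "lowercase": str.lower,
    "first word": lambda s: s.split()[0] if s.split() else "",
    "initials": lambda s: "".join(w[0].upper() for w in s.split()),
    "reverse": lambda s: s[::-1],
}

def infer_program(examples):
    """Return the name and function of the first candidate matching all examples."""
    for name, fn in CANDIDATES.items():
        if all(fn(inp) == out for inp, out in examples):
            return name, fn
    return None

examples = [("ada lovelace", "AL"), ("alan turing", "AT")]
result = infer_program(examples)
if result is not None:
    name, program = result
    print(name)                       # "initials"
    print(program("grace hopper"))    # generalizes to a new input: "GH"
```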
Programming by example : Query by Example Automated machine learning Example-based machine translation Inductive programming Lapis (text editor), which allows simultaneous editing of similar items in multiple selections created by example Programming by demonstration Test-driven development
Programming by example : Henry Lieberman's page on Programming by Example Online copy of Watch What I Do, Allen Cypher's book on Programming by Demonstration Online copy of Your Wish is My Command, Henry Lieberman's sequel to Watch What I Do A Visual Language for Data Mapping, John Carlson's description of an Integrated Development Environment (IDE) that used Programming by Example (desktop objects) for data mapping, and an iconic language for recording operations
Tensor (machine learning) : In machine learning, the term tensor informally refers to two different concepts: (i) a way of organizing data and (ii) a multilinear (tensor) transformation. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an M-way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods. Tensor decomposition factorizes data tensors into smaller tensors. Operations on data tensors can be expressed in terms of matrix multiplication and the Kronecker product. The computation of gradients, a crucial aspect of backpropagation, can be performed using software libraries such as PyTorch and TensorFlow. Computations are often performed on graphics processing units (GPUs) using CUDA, and on dedicated hardware such as Google's Tensor Processing Unit or Nvidia's Tensor Core. These developments have greatly accelerated neural network architectures and increased the size and complexity of models that can be trained.
Tensor (machine learning) : A tensor is by definition a multilinear map. In mathematics, this may express a multilinear relationship between sets of algebraic objects. In physics, tensor fields, considered as tensors at each point in space, are useful in expressing mechanics such as stress or elasticity. In machine learning, the exact use of tensors depends on the statistical approach being used. In 2001, the fields of signal processing and statistics were making use of tensor methods. Pierre Comon surveys the early adoption of tensor methods in the fields of telecommunications, radio surveillance, chemometrics and sensor processing. Linear tensor rank methods (such as PARAFAC/CANDECOMP) analyzed M-way arrays ("data tensors") composed of higher-order statistics that were employed in blind source separation problems to compute a linear model of the data. He noted several early limitations in determining the tensor rank and in efficient tensor rank decomposition. In the early 2000s, multilinear tensor methods crossed over into computer vision, computer graphics and machine learning with papers by Vasilescu, or in collaboration with Terzopoulos, such as Human Motion Signatures, TensorFaces, TensorTextures and Multilinear Projection. Multilinear algebra, the algebra of higher-order tensors, is a suitable and transparent framework for analyzing the multifactor structure of an ensemble of observations and for addressing the difficult problem of disentangling the causal factors based on second-order or higher-order statistics associated with each causal factor. Tensor (multilinear) factor analysis disentangles and reduces the influence of different causal factors with multilinear subspace learning. When treating an image or a video as a 2- or 3-way array, i.e., a "data matrix/tensor", tensor methods reduce spatial or time redundancies, as demonstrated by Wang and Ahuja. Yoshua Bengio, Geoff Hinton and their collaborators briefly discuss the relationship between deep neural networks and tensor factor analysis beyond the use of M-way arrays ("data tensors") as inputs. One of the early uses of tensors for neural networks appeared in natural language processing. A single word can be expressed as a vector via Word2vec, so a relationship between two words can be encoded in a matrix. However, for more complex relationships such as subject-object-verb, it is necessary to build higher-dimensional networks. In 2009, the work of Sutskever introduced Bayesian Clustered Tensor Factorization to model relational concepts while reducing the parameter space. From 2014 to 2015, tensor methods became more common in convolutional neural networks (CNNs). Tensor methods organize neural network weights in a "data tensor" and analyze them to reduce the number of network weights. Lebedev et al. accelerated CNN networks for character classification (the recognition of letters and digits in images) by using 4D kernel tensors.
Tensor (machine learning) : Let F be a field such as the real numbers R or the complex numbers C. A tensor T ∈ F^(I_0 × I_1 × … × I_C) is a multilinear transformation from a set of domain vector spaces to a range vector space: T : {F^(I_1) × F^(I_2) × … × F^(I_C)} ↦ F^(I_0). Here, C and I_0, I_1, …, I_C are positive integers, and (C + 1) is the number of modes of the tensor (also known as the number of ways of a multi-way array). The dimensionality of mode c is I_c, for 0 ≤ c ≤ C. In statistics and machine learning, an image is vectorized when viewed as a single observation, and a collection of vectorized images is organized as a "data tensor". For example, a set of facial images {d_(i_p, i_e, i_l, i_v) ∈ R^(I_X)} with I_X pixels that are the consequences of multiple causal factors, such as a facial geometry i_p (1 ≤ i_p ≤ I_P), an expression i_e (1 ≤ i_e ≤ I_E), an illumination condition i_l (1 ≤ i_l ≤ I_L), and a viewing condition i_v (1 ≤ i_v ≤ I_V), may be organized into a data tensor (i.e. multi-way array) D ∈ R^(I_X × I_P × I_E × I_L × I_V), where I_P is the total number of facial geometries, I_E the total number of expressions, I_L the total number of illumination conditions, and I_V the total number of viewing conditions. Tensor factorization methods such as TensorFaces and multilinear (tensor) independent component analysis factorize the data tensor into a set of vector spaces that span the causal factor representations, where an image is the result of a tensor transformation T that maps a set of causal factor representations to the pixel space. Another approach to using tensors in machine learning is to embed various data types directly. For example, a grayscale image is commonly represented as a discrete 2-way array D ∈ R^(I_RX × I_CX), where I_RX is the number of rows and I_CX the number of columns. When an image is treated as a 2-way array or 2nd-order tensor (i.e. as a collection of column/row observations), tensor factorization methods compute the image column space, the image row space, and the normalized PCA coefficients or the ICA coefficients. Similarly, a color image with RGB channels, D ∈ R^(N × M × 3), may be viewed as a 3rd-order data tensor or 3-way array. In natural language processing, a word might be expressed as a vector v via the Word2vec algorithm; thus v becomes a mode-1 tensor, v ↦ A ∈ R^N. Embedding subject-object-verb semantics requires embedding relationships among three words. Because a word is itself a vector, subject-object-verb semantics could be expressed using mode-3 tensors, v_a × v_b × v_c ↦ A ∈ R^(N × N × N). In practice the neural network designer is primarily concerned with the specification of embeddings, the connection of tensor layers, and the operations performed on them in a network. Modern machine learning frameworks manage the optimization, tensor factorization and backpropagation automatically.
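As a concrete sketch of the data-tensor viewpoint (all shapes and factor counts below are illustrative assumptions, not values from the literature), a collection of vectorized grayscale images indexed by two causal factors can be arranged as a 3-way array in NumPy, and any mode can be unfolded into an ordinary matrix on which standard factorizations such as the SVD apply:

```python
# Sketch: organizing vectorized images as a data tensor and unfolding a mode.
import numpy as np

# Hypothetical collection: 5 people x 8 illumination conditions, each a
# 32x32 grayscale image (random data stands in for real observations).
rng = np.random.default_rng(0)
images = rng.random((5, 8, 32, 32))            # (person, illumination, rows, cols)

# Vectorize each image: the pixel mode has dimensionality I_X = 32*32.
data_tensor = images.reshape(5, 8, 32 * 32)    # 3-way "data tensor"

def unfold(tensor, mode):
    """Mode-n unfolding: move the chosen mode to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Unfolding along the pixel mode yields a matrix whose columns are images,
# ready for standard matrix factorizations (e.g. PCA via the SVD) per mode.
pixel_unfolding = unfold(data_tensor, 2)       # shape (1024, 40)
U, s, Vt = np.linalg.svd(pixel_unfolding, full_matrices=False)
print(pixel_unfolding.shape, U.shape)
```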
Tensor (machine learning) : Tensors provide a unified way to train neural networks on more complex data sets. However, training is expensive to compute on classical CPU hardware. In 2014, Nvidia developed cuDNN, CUDA Deep Neural Network, a library of optimized primitives written in the parallel CUDA language. CUDA, and thus cuDNN, run on dedicated GPUs that implement massive parallelism in hardware. These GPUs were not yet dedicated chips for tensors, but rather existing hardware adapted for parallel computation in machine learning. In the period 2015–2017, Google invented the Tensor Processing Unit (TPU). TPUs are dedicated, fixed-function hardware units that specialize in the matrix multiplications needed for tensor products. Specifically, the first TPU implements an array of 65,536 multiply units that can perform a 256×256 matrix sum-product in just one global instruction cycle. Later in 2017, Nvidia released its own Tensor Core with the Volta GPU architecture. Each Tensor Core is a microunit that can perform a 4×4 matrix sum-product. There are eight Tensor Cores for each streaming multiprocessor (SM). The first GV100 GPU card has 84 SMs, resulting in 672 Tensor Cores. This device accelerated machine learning by 12× over the previous Tesla GPUs. The number of Tensor Cores scales as the number of cores and SM units continues to grow in each new generation of cards. The development of GPU hardware, combined with the unified architecture of tensor cores, has enabled the training of much larger neural networks. In 2022, the largest neural network was Google's PaLM with 540 billion learned parameters (network weights); for comparison, the older GPT-3 language model has over 175 billion learned parameters, although parameter count is not everything: Stanford's much smaller 2023 Alpaca model claims competitive performance despite being finetuned from the 7-billion-parameter variant of Meta/Facebook's 2023 LLaMA model. The widely popular chatbot ChatGPT is built on top of GPT-3.5 (and, after an update, GPT-4) using supervised and reinforcement learning.
Mode collapse : In machine learning, mode collapse is a failure mode observed in generative models, originally noted in Generative Adversarial Networks (GANs). It occurs when the model produces outputs that are less diverse than expected, effectively "collapsing" to generate only a few modes of the data distribution while ignoring others. This phenomenon undermines the goal of generative models to capture the full diversity of the training data. There are typically two times at which a model can collapse: either during training or during post-training finetuning. Mode collapse reduces the utility of generative models in applications, such as in image synthesis (repetitive or near-identical images); data augmentation (limited diversity in synthetic data); scientific simulations (failure to explore all plausible scenarios).
Mode collapse : Mode collapse is distinct from overfitting, where a model learns detailed patterns in the training data that do not generalize to the test data, and from underfitting, where it fails to learn patterns. Memorization is where a model learns to reproduce data from the training data. Memorization is often confused with mode collapse, but a model can memorize the training dataset without mode collapse; indeed, if a model is severely mode-collapsed, then it has failed to memorize large parts of the training dataset. Model collapse is one particular mechanism behind mode collapse: a generative model 2 is pretrained mainly on the outputs of model 1, then another new generative model 3 is pretrained mainly on the outputs of model 2, and so on. When models are trained in this way, each model is typically more mode-collapsed than the previous one. However, there are other mechanisms for mode collapse.
Mode collapse : Training-time mode collapse was originally noted and studied in GANs, where it arises primarily from imbalances in the training dynamics between the generator and discriminator. In the original GAN paper, it was also called the "Helvetica scenario". Common causes include: If the discriminator learns too slowly, the generator may exploit weaknesses by producing a narrow set of outputs that consistently fool the discriminator. Traditional GAN loss functions (e.g., Jensen-Shannon divergence) may be too lenient on generating same-looking outputs. The adversarial training process can lead to oscillatory behavior, where the generator and discriminator fail to converge to a stable equilibrium but instead engage in a rock-paper-scissors kind of cycling: the generator generates just "rock" until the discriminator learns to classify that as generated, then the generator switches to generating just "scissors", and so on. The generator is always mode-collapsed, though the precise mode it collapses to changes during training. Several GAN-specific strategies were developed to mitigate mode collapse: the two time-scale update rule; mini-batch discrimination, which allows the discriminator to evaluate entire batches of samples, encouraging diversity; unrolled GANs, which optimize the generator against future states of the discriminator; Wasserstein GAN, which uses the Earth Mover's distance to provide more stable gradients; using a big and balanced training dataset; and regularization methods such as the gradient penalty and spectral normalization.
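Of the mitigation strategies listed above, the gradient penalty is among the simplest to express in code. The sketch below (assuming PyTorch and a user-supplied critic/discriminator module operating on 4-D image batches) computes a WGAN-GP-style penalty that pushes the critic's gradient norm toward 1 on points interpolated between real and generated samples; it is an illustrative fragment, not a complete training loop:

```python
# Sketch: WGAN-GP style gradient penalty (assumes PyTorch and a user-supplied
# critic/discriminator operating on 4-D image batches).
import torch

def gradient_penalty(critic, real, fake):
    batch_size = real.size(0)
    # Random per-sample interpolation between real and generated samples.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    mixed = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(mixed)
    # Gradient of the critic's output with respect to the interpolated inputs.
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=mixed, create_graph=True
    )
    grads = grads.reshape(batch_size, -1)
    # Penalize deviation of the per-sample gradient norm from 1.
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# In a training step the penalty is added to the critic loss, typically
# weighted by a factor such as 10: loss_d = wasserstein_loss + 10 * gp
```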
Mode collapse : Large language models are usually trained in two steps. In the first step ("pretraining"), the model is trained to simply generate text sampled from a large dataset. In the second step ("finetuning"), the model is trained to perform specific tasks by training it on a small dataset containing just the task-specific data. For example, to make a chatbot in this way, one first pretrains a large transformer model over a few trillion words of text scraped from the Internet, then finetunes it on a few million words of example chatlogs that the model should imitate. Mode collapse may occur during finetuning, as the model learns to generate text that accomplishes the specific task but loses the ability to generate other forms of text. It may also collapse to generating only a smaller subset of the texts that accomplish the specific task. It is hypothesized that there is a tradeoff between quality and diversity: given a single pretrained model, more finetuning results in higher average task performance but less diverse outputs, while less finetuning results in lower average performance but more diverse outputs. A similar tradeoff has been observed in image generation models and GAN-based text generators. Similarly, mode collapse may occur during RLHF, via reward hacking of the reward model or other mechanisms.
Mode collapse : Variational autoencoder Generative model Generative artificial intelligence Generative pre-trained transformer Overfitting == References ==
ReRites : ReRites (also known as RERITES, ReadingRites, Big Data Poetry) is a literary work of "Human + A.I. poetry" by David Jhave Johnston that used neural network models trained to generate poetry which the author then edited. ReRites won the Robert Coover Award for a Work of Electronic Literature in 2022.
ReRites : The ReRites project began as a daily rite of writing with a neural network, expanded into a series of performances from which video documentation has been published online, and concluded with a set of 12 books and an accompanying book of essays published by Anteism Books in 2019. In Electronic Literature, Scott Rettberg describes the early phases of the project in 2016, when it bore the preliminary name Big Data Poetry. Jhave (the artist name that David Jhave Johnston goes by) describes the process of writing ReRites as a rite: "Every morning for 2 hours (normally 6:30–8:30am) I get up and edit the poetic output of a neural net. Deleting, weaving, conjugating, lineating, cohering. Re-writing. Re-wiring authorship: hybrid augmented enhanced evolutionary". There is video documentation of the writing process. The human editing of the neural network's output is fundamental to this project, and Jhave gives examples of both unedited text extracts and his edited versions in publications about the project. Kyle Booten describes ReRites as "simultaneously dusty and outrageously verdant, monotonously sublime and speckled with beautiful and rare specimens".
ReRites : ReRites is described by John Cayley as "one of the most thorough and beautiful" poetic responses to machine learning. The work's influence on the field of electronic literature was acknowledged in 2022, when the work won the Electronic Literature Organization's Robert Coover Award for a Work of Electronic Literature. The jury described ReRites as particularly poignant in the time of the pandemic, as it was "a documentation of the performance of the private ritual of writing and the obsessive-compulsive need for writers to communicate — even when no one else is reading". The question of authorship and voice in ReRites has been raised by several critics. Although generated poetry is an established genre in electronic literature, Cayley notes that unlike the combinatory poems created by authors like Nick Montfort, where the author explicitly defines which words and phrases will be recombined, ReRites has "not been directed by literary preconceptions inscribed in the program itself, but only by patterns and rhythms pre-existing in the corpora". In an essay for the Australian journal TEXT, David Thomas Henry Wright asks how to understand authorship and authority in ReRites: "Who or what is the authority of the work? The original data fed into the machine, that is not currently retrievable or discernible from the final works? The code that was taken and adapted for his purposes? Or Jhave, the human editor?" Wright concludes that Jhave is the only actor with any intentionality and therefore the authority of the work. The centrality of the human editor is also emphasised by other scholars. In a chapter analysing ReRites Malthe Stavning Erslev argues that the machine learning misrepresents the dataset it is trained on. While ReRites uses 21st century neural networks, it has been compared to earlier literary traditions. Poet Victoria Stanton, who read at one of the ReRites performances, has compared ReRites to found poetry, while David Thomas Henry Wright compares it to the Oulipo movement and Mark Amerika to the cut-up technique. Scholars also position ReRites firmly within the long tradition of generative poetry both in electronic literature and print, stretching from the I Ching, Queneau's Cent Mille Milliards de Poemes and Nabokov's Pale Fire to computer-generated poems like Christopher Strachey's Love Letter Generator (1952) and more contemporary examples. Jhave describes the process of working with the output from the neural network as "carving". In his book My Life as an Artificial Creative Intelligence, Mark Amerika writes that the "method of carving the digital outputs provided by the language model as part of a collaborative remix jam session with GPT-2, where the language artist and the language model play off each other’s unexpected outputs as if caught in a live postproduction set, is one I share with electronic literature composer David Jhave Johnston, whose AI poetry experiments precede my own investigations." == References ==
OpenAI o3 : OpenAI o3 is a reflective generative pre-trained transformer (GPT) model developed by OpenAI as a successor to OpenAI o1. It is designed to devote additional deliberation time when addressing questions that require step-by-step logical reasoning. OpenAI released a smaller model, o3-mini, on January 31, 2025.
OpenAI o3 : The OpenAI o3 model was announced on December 20, 2024, with the designation "o3" chosen to avoid a trademark conflict with the mobile carrier brand O2. OpenAI invited safety and security researchers to apply for early access to these models until January 10, 2025. As with o1, there are two different models: o3 and o3-mini. On January 31, 2025, OpenAI released o3-mini to all ChatGPT users (including the free tier) and some API users. OpenAI describes o3-mini as a "specialized alternative" to o1 for "technical domains requiring precision and speed". o3-mini features three reasoning effort levels: low, medium and high; the free version uses medium. The variant using more compute is called o3-mini-high and is available to paid subscribers. Subscribers to ChatGPT's Pro tier have unlimited access to both o3-mini and o3-mini-high. On February 2, OpenAI launched OpenAI Deep Research, a ChatGPT service using a version of o3 that produces comprehensive reports within 5 to 30 minutes, based on web searches. On February 6, in response to pressure from rivals such as DeepSeek, OpenAI announced an update aimed at enhancing the transparency of the thought process in its o3-mini model. On February 12, OpenAI further increased rate limits for o3-mini-high to 50 requests per day (from 50 requests per week) for ChatGPT Plus subscribers, and implemented file/image upload support.
OpenAI o3 : Reinforcement learning was used to teach o3 to "think" before generating answers, using what OpenAI refers to as a "private chain of thought". This approach enables the model to plan ahead and reason through tasks, performing a series of intermediate reasoning steps to assist in solving the problem, at the cost of additional computing power and increased latency of responses. o3 demonstrates significantly better performance than o1 on complex tasks, including coding, mathematics, and science. OpenAI reported that o3 achieved a score of 87.7% on the GPQA Diamond benchmark, which contains expert-level science questions not publicly available online. On SWE-bench Verified, a software engineering benchmark assessing the ability to solve real GitHub issues, o3 scored 71.7%, compared to 48.9% for o1. On Codeforces, o3 reached an Elo score of 2727, whereas o1 scored 1891. On the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) benchmark, which evaluates an AI's ability to handle new logical and skill acquisition problems, o3 attained three times the accuracy of o1. == References ==
Jeph Acheampong : Jeph Acheampong is a Ghanaian-American businessman. He is an Expo Live Global Innovator, a World Economic Forum Global Shaper, and was named a Future of Ghana Pioneer.
Jeph Acheampong : Acheampong was born in Accra, Ghana, and grew up in New York. He graduated in 2012 from Forest Hills High School in New York, and went on to study economics at New York University. Later, he pursued a master's degree at Harvard University.
Jeph Acheampong : While at New York University, Acheampong founded Anansi Global, a fashion company. In 2018, he founded Blossom Academy, Ghana's first data science academy. He also co-founded Blossom Corporate Training to upskill working professionals. In 2016, he was awarded the President Service Award. In September 2021, he was featured on CNN's What the Big Idea program and was recognized by Nigerian newspapers among the 5 Most Influential People making an impact. In 2022, the Government of Ghana appointed Acheampong to support the development of the Ghana National Artificial Intelligence Strategy 2023–2033. He was invited as a speaker by the West Africa Business Forum for youths and young women entrepreneurs from 15 countries across Africa, and also at the 2024 World Youth Development Forum in Guangzhou, China. He is an advocate for systemic change that addresses the root causes of poverty and unemployment, and in 2025 he was listed among the top ten under-40 African Ivy League graduates making an impact in Africa.
Jeph Acheampong : He is a 2018 Princeton in Africa Fellow, a 2023 Acumen Fellow, and a 2024 Cheng Fellow at the Harvard Kennedy School. == References ==
Overfitting : In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In a mathematical sense, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.: 45 Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.) Such a model, though, will typically fail severely when making predictions. Overfitting is directly related to approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known as shrinkage). In particular, the value of the coefficient of determination will shrink relative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
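A minimal numerical sketch of the parameter-count argument (synthetic data and NumPy only; the degrees and sample sizes are arbitrary illustrative choices): fitting a high-degree polynomial to a few noisy points drives the training error toward zero, while the error on fresh data drawn from the same process typically grows:

```python
# Sketch: overfitting a polynomial to noisy data (synthetic, illustrative).
import numpy as np

rng = np.random.default_rng(0)
true_fn = lambda x: 0.5 * x + 1.0                 # simple underlying trend

x_train = np.linspace(0, 1, 10)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = true_fn(x_test) + rng.normal(0, 0.2, x_test.size)

for degree in (1, 9):                             # few vs. many parameters
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
```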
Overfitting : In statistics, an inference is drawn from a statistical model, which has been selected via some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony". The authors also state the following.: 32–33 Overfitted models ... are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). False treatment effects tend to be identified, and false variables are included with overfitted models. ... A best approximating model is achieved by properly balancing the errors of underfitting and overfitting. Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The book Model Selection and Model Averaging (2008) puts it this way. Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?
Overfitting : Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but is studied from a theoretical perspective in the context of much simpler models, such as linear regression. In particular, it has been shown that overparameterization is essential for benign overfitting in this setting. In other words, the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size.
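The role of overparameterization can be illustrated with a minimal linear-regression sketch (synthetic data and NumPy only; all sizes are illustrative assumptions). With more parameters than observations, the minimum-norm least-squares solution fits the noisy training targets exactly, and its error on held-out data can then simply be measured. Whether such an interpolating fit actually generalizes well, i.e. whether the overfitting is benign, depends on the geometry of the features, which this sketch does not attempt to establish:

```python
# Sketch: an overparameterized linear model that interpolates noisy training
# data. np.linalg.lstsq returns the minimum-norm solution when the system is
# underdetermined (more parameters than observations).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 20, 200, 400     # parameters >> observations
w_true = rng.normal(size=n_features) / np.sqrt(n_features)

X_train = rng.normal(size=(n_train, n_features))
y_train = X_train @ w_true + rng.normal(0, 0.1, n_train)   # noisy targets
X_test = rng.normal(size=(n_test, n_features))
y_test = X_test @ w_true + rng.normal(0, 0.1, n_test)

w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)  # min-norm interpolator

train_mse = np.mean((X_train @ w_hat - y_train) ** 2)
test_mse = np.mean((X_test @ w_hat - y_test) ** 2)
print(f"train MSE {train_mse:.2e} (interpolation), test MSE {test_mse:.3f}")
```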
Overfitting : Leinweber, D. J. (2007). "Stupid data miner tricks". The Journal of Investing. 16: 15–22. doi:10.3905/joi.2007.681820. S2CID 108627390. Tetko, I. V.; Livingstone, D. J.; Luik, A. I. (1995). "Neural network studies. 1. Comparison of Overfitting and Overtraining" (PDF). Journal of Chemical Information and Modeling. 35 (5): 826–833. doi:10.1021/ci00027a006. Tip 7: Minimize overfitting. Chicco, D. (December 2017). "Ten quick tips for machine learning in computational biology". BioData Mining. 10 (35): 35. doi:10.1186/s13040-017-0155-3. PMC 5721660. PMID 29234465.
Overfitting : Christian, Brian; Griffiths, Tom (April 2017), "Chapter 7: Overfitting", Algorithms To Live By: The computer science of human decisions, William Collins, pp. 149–168, ISBN 978-0-00-754799-9
Overfitting : The Problem of Overfitting Data – Stony Brook University What is "overfitting," exactly? – Andrew Gelman blog CSE546: Linear Regression Bias / Variance Tradeoff – University of Washington What is Underfitting – IBM
Aporia (company) : Aporia is a machine learning observability platform based in Tel Aviv, Israel, with a US office in San Jose, California. Aporia has developed software for monitoring and controlling undetected defects and failures, used by other companies to detect and report anomalies and to warn of faults at an early stage.
Aporia (company) : Aporia was founded in 2019 by Liran Hason and Alon Gubkin. In April 2021, the company raised a $5 million seed round for its monitoring platform for ML models. In February 2022, the company closed a Series A round of $25 million for its ML observability platform. Aporia was named by Forbes as a Next Billion-Dollar Company in June 2022. In November, the company partnered with ClearML, an MLOps platform, to improve ML pipeline optimization. In January 2023, Aporia launched Direct Data Connectors (DDC), a technology intended to let organizations monitor their ML models in minutes, whereas previously the process of integrating ML monitoring into a customer's cloud environment took weeks or more. DDC enables users to connect Aporia to their preferred data source and monitor all of their data at once, without data sampling or data duplication (which poses a significant security risk for major organizations). In April 2023, Aporia announced a partnership with Amazon Web Services (AWS) to provide more reliable ML observability to AWS customers by deploying Aporia's architecture in their AWS environment, allowing customers to monitor their models in production regardless of platform.
Aporia (company) : In 2022, Aporia faced significant challenges when a cybersecurity breach exposed sensitive client data stored within its machine learning observability platform. The breach was traced to a vulnerability in Aporia’s Direct Data Connectors (DDC), which allowed unauthorized access to integrated data sources. This incident compromised the confidentiality and integrity of data from several high-profile clients, including financial institutions and healthcare providers. Investigations revealed that Aporia had delayed patching the identified vulnerability despite prior warnings from independent security researchers. == References ==