Kórsafn : Nature Manifesto – 2024 sound installation by Björk
Neuro-symbolic AI : Neuro-symbolic AI is a type of artificial intelligence that integrates neural and symbolic AI architectures to address the weaknesses of each, providing a robust AI capable of reasoning, learning, and cognitive modeling. As argued by Leslie Valiant and others, the effective construction of rich computational cognitive models demands the combination of symbolic reasoning and efficient machine learning. Gary Marcus argued, "We cannot construct rich cognitive models in an adequate, automated way without the triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning." Further, "To build a robust, knowledge-driven approach to AI we must have the machinery of symbol manipulation in our toolkit. Too much useful knowledge is abstract to proceed without tools that represent and manipulate abstraction, and to date, the only known machinery that can manipulate such abstract knowledge reliably is the apparatus of symbol manipulation." Angelo Dalli, Henry Kautz, Francesca Rossi, and Bart Selman also argued for such a synthesis. Their arguments attempt to address the two kinds of thinking discussed in Daniel Kahneman's book Thinking, Fast and Slow. The book describes cognition as encompassing two components: System 1 is fast, reflexive, intuitive, and unconscious. System 2 is slower, step-by-step, and explicit. System 1 is used for pattern recognition. System 2 handles planning, deduction, and deliberative thinking. In this view, deep learning best handles the first kind of cognition while symbolic reasoning best handles the second kind. Both are needed for a robust, reliable AI that can learn, reason, and interact with humans to accept advice and answer questions. Such dual-process models with explicit references to the two contrasting systems have been worked on since the 1990s, both in AI and in Cognitive Science, by multiple researchers.
Neuro-symbolic AI : Approaches for integration are diverse. Henry Kautz's taxonomy of neuro-symbolic architectures follows, along with some examples: Symbolic Neural symbolic is the current approach of many neural models in natural language processing, where words or subword tokens are the ultimate input and output of large language models. Examples include BERT, RoBERTa, and GPT-3. Symbolic[Neural] is exemplified by AlphaGo, where symbolic techniques are used to invoke neural techniques. In this case, the symbolic approach is Monte Carlo tree search and the neural techniques learn how to evaluate game positions. Neural | Symbolic uses a neural architecture to interpret perceptual data as symbols and relationships that are then reasoned about symbolically. The Neuro-Symbolic Concept Learner is an example. Neural: Symbolic → Neural relies on symbolic reasoning to generate or label training data that is subsequently learned by a deep learning model, e.g., to train a neural model for symbolic computation by using a Macsyma-like symbolic mathematics system to create or label examples. Neural_{Symbolic} uses a neural net that is generated from symbolic rules. An example is the Neural Theorem Prover, which constructs a neural network from an AND-OR proof tree generated from knowledge base rules and terms. Logic Tensor Networks also fall into this category. Neural[Symbolic] allows a neural model to directly call a symbolic reasoning engine, e.g., to perform an action or evaluate a state. An example would be ChatGPT using a plugin to query Wolfram Alpha. These categories are not exhaustive, as they do not consider multi-agent systems. In 2005, Bader and Hitzler presented a more fine-grained categorization that considered, e.g., whether the use of symbols included logic and, if it did, whether the logic was propositional or first-order logic. The 2005 categorization and Kautz's taxonomy above are compared and contrasted in a 2021 article. Recently, Sepp Hochreiter argued that Graph Neural Networks "...are the predominant models of neural-symbolic computing" since "[t]hey describe the properties of molecules, simulate social networks, or predict future states in physical and engineering applications with particle-particle interactions."
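To make the Neural | Symbolic pattern concrete, here is a minimal illustrative sketch (not drawn from any of the cited systems): a stand-in "perception" function maps raw observations to symbolic facts, and a tiny forward-chaining rule engine then reasons over them. The perceive function, the rules, and the example scene are all hypothetical.

```python
# Minimal illustrative sketch of the "Neural | Symbolic" pattern:
# a perception component (stood in for here by a stub) emits symbolic
# facts, and a small forward-chaining rule engine reasons over them.
# All names (perceive, RULES, the example scene) are hypothetical.

def perceive(scene):
    """Stand-in for a neural perception model: raw input -> symbolic facts."""
    # A real system would run an image or text model here.
    return set(scene)  # e.g. {("cube", "a"), ("sphere", "b"), ("left_of", "a", "b")}

# Horn-style rules: if all premises hold, add the conclusion.
RULES = [
    ({("left_of", "a", "b")}, ("right_of", "b", "a")),
    ({("cube", "a"), ("right_of", "b", "a")}, ("has_cube_to_its_left", "b")),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

if __name__ == "__main__":
    raw_scene = [("cube", "a"), ("sphere", "b"), ("left_of", "a", "b")]
    facts = forward_chain(perceive(raw_scene), RULES)
    print(("has_cube_to_its_left", "b") in facts)  # True
```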
Neuro-symbolic AI : Gary Marcus argues that "...hybrid architectures that combine learning and symbol manipulation are necessary for robust intelligence, but not sufficient", and that there are ...four cognitive prerequisites for building robust artificial intelligence: hybrid architectures that combine large-scale learning with the representational and computational powers of symbol manipulation, large-scale knowledge bases—likely leveraging innate frameworks—that incorporate symbolic knowledge along with other forms of knowledge, reasoning mechanisms capable of leveraging those knowledge bases in tractable ways, and rich cognitive models that work together with those mechanisms and knowledge bases. This echoes earlier calls for hybrid models as early as the 1990s.
Neuro-symbolic AI : Garcez and Lamb described research in this area as ongoing at least since the 1990s. At that time, the terms symbolic and sub-symbolic AI were popular. A series of workshops on neuro-symbolic AI, the Workshop Series on Neural-Symbolic Learning and Reasoning, has been held annually since 2005. In the early 1990s, an initial set of workshops on this topic was organized.
Neuro-symbolic AI : Key research questions remain, such as: What is the best way to integrate neural and symbolic architectures? How should symbolic structures be represented within neural networks and extracted from them? How should common-sense knowledge be learned and reasoned about? How can abstract knowledge that is hard to encode logically be handled?
Neuro-symbolic AI : Implementations of neuro-symbolic approaches include: AllegroGraph: an integrated Knowledge Graph based platform for neuro-symbolic application development. Scallop: a language based on Datalog that supports differentiable logical and relational reasoning. Scallop can be integrated into Python and with a PyTorch learning module. Logic Tensor Networks: encode logical formulas as neural networks and simultaneously learn term encodings, term weights, and formula weights. DeepProbLog: combines neural networks with the probabilistic reasoning of ProbLog. SymbolicAI: a compositional differentiable programming library. Explainable Neural Networks (XNNs): combine neural networks with symbolic hypergraphs and are trained using a mixture of backpropagation and a symbolic learning process called induction.
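A common thread in several of these systems (Logic Tensor Networks, DeepProbLog, Scallop) is treating logical formulas as differentiable constraints. The sketch below is not the API of any of those libraries; it is a minimal hand-rolled illustration of the general idea in PyTorch, using a product t-norm for conjunction. The predicate names and the rule are made up.

```python
# Minimal hand-rolled illustration of differentiable logic (not the API of
# Logic Tensor Networks, DeepProbLog, or Scallop). Truth values of ground
# atoms are learnable parameters in (0, 1); a rule is turned into a
# differentiable satisfaction score via a product t-norm, and gradient
# descent pushes the atoms toward satisfying it. All names are hypothetical.
import torch

# Learnable logits for three ground atoms: smokes(a), friends(a, b), smokes(b)
logits = torch.nn.Parameter(torch.zeros(3))

def truth():
    return torch.sigmoid(logits)  # map logits to (0, 1) truth values

def implies(p, q):
    # Reichenbach implication: 1 - p + p*q, a common fuzzy relaxation
    return 1.0 - p + p * q

opt = torch.optim.SGD([logits], lr=0.5)
for step in range(200):
    smokes_a, friends_ab, smokes_b = truth()
    # Rule: smokes(a) AND friends(a, b) -> smokes(b), product t-norm for AND
    rule_sat = implies(smokes_a * friends_ab, smokes_b)
    # Evidence we want to hold: smokes(a) and friends(a, b) are true
    evidence = smokes_a * friends_ab
    loss = (1.0 - rule_sat) + (1.0 - evidence)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(truth().detach())  # all three truth values are driven toward 1
```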
Neuro-symbolic AI : Symbolic AI Connectionist AI Hybrid intelligent systems
Neuro-symbolic AI : Bader, Sebastian; Hitzler, Pascal (2005-11-10). "Dimensions of Neural-symbolic Integration – A Structured Survey". arXiv:cs/0511042. Garcez, Artur S. d'Avila; Broda, Krysia; Gabbay, Dov M. (2002). Neural-Symbolic Learning Systems: Foundations and Applications. Springer Science & Business Media. ISBN 978-1-85233-512-0. Garcez, Artur; Besold, Tarek; De Raedt, Luc; Földiák, Peter; Hitzler, Pascal; Icard, Thomas; Kühnberger, Kai-Uwe; Lamb, Luís; Miikkulainen, Risto; Silver, Daniel (2015). Neural-Symbolic Learning and Reasoning: Contributions and Challenges. AAAI Spring Symposium – Knowledge Representation and Reasoning: Integrating Symbolic and Neural Approaches. Stanford, CA. doi:10.13140/2.1.1779.4243. Garcez, Artur d'Avila; Gori, Marco; Lamb, Luis C.; Serafini, Luciano; Spranger, Michael; Tran, Son N. (2019). "Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning". arXiv:1905.06088 [cs.AI]. Garcez, Artur d'Avila; Lamb, Luis C. (2020). "Neurosymbolic AI: The 3rd Wave". arXiv:2012.05876 [cs.AI]. Hitzler, Pascal; Sarker, Md Kamruzzaman (2022). Neuro-Symbolic Artificial Intelligence: The State of the Art. IOS Press. ISBN 978-1-64368-244-0. Hitzler, Pascal; Sarker, Md Kamruzzaman; Eberhart, Aaron (2023). Compendium of Neurosymbolic Artificial Intelligence. IOS Press. ISBN 978-1-64368-406-2. Hochreiter, Sepp (2022). "Toward a Broad AI". Communications of the ACM. 65 (4): 56–57. Honavar, Vasant (1995). Symbolic Artificial Intelligence and Numeric Artificial Neural Networks: Towards a Resolution of the Dichotomy. The Springer International Series in Engineering and Computer Science. Springer US. pp. 351–388. doi:10.1007/978-0-585-29599-2_11. Kautz, Henry (2020-02-11). The Third AI Summer, AAAI 2020 Robert S. Engelmore Memorial Award Lecture. Retrieved 2022-07-06. Kautz, Henry (2022). "The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture". AI Magazine. 43 (1): 93–104. doi:10.1609/aimag.v43i1.19122. ISSN 2371-9621. S2CID 248213051. Retrieved 2022-07-12. Mao, Jiayuan; Gan, Chuang; Kohli, Pushmeet; Tenenbaum, Joshua B.; Wu, Jiajun (2019). "The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision". arXiv:1904.12584 [cs.CV]. Marcus, Gary; Davis, Ernest (2019). Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage. Marcus, Gary (2020). "The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence". arXiv:2002.06177 [cs.AI]. Dalli, Angelo (2025-02-13). "Why Neurosymbolic AI is the Future of Trustworthy AI" (WAICF 2025 Keynote). Retrieved 2025-03-06. Rossi, Francesca (2022-07-06). "Thinking Fast and Slow in AI" (AAAI 2022 Invited Talk). Retrieved 2022-07-06. Selman, Bart (2022-07-06). "Presidential Address: The State of AI" (AAAI 2022). Retrieved 2022-07-06. Serafini, Luciano; Garcez, Artur d'Avila (2016-07-07). "Logic Tensor Networks: Deep Learning and Logical Reasoning from Data and Knowledge". arXiv:1606.04422 [cs.AI]. Sun, Ron (1995). "Robust reasoning: Integrating rule-based and similarity-based reasoning". Artificial Intelligence. 75 (2): 241–296. doi:10.1016/0004-3702(94)00028-Y. Sun, Ron; Bookman, Lawrence (1994). Computational Architectures Integrating Neural and Symbolic Processes. Kluwer. Sun, Ron; Alexandre, Frederic (1997). Connectionist Symbolic Integration. Lawrence Erlbaum Associates. Sun, R. (2001). "Hybrid systems and connectionist implementationalism". Encyclopedia of Cognitive Science. Macmillan Publishing Company, 2001. Valiant, Leslie G. (2008). "Knowledge Infusion: In Pursuit of Robustness in Artificial Intelligence". IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science. doi:10.4230/LIPIcs.FSTTCS.2008.1770.
Neuro-symbolic AI : Artificial Intelligence: Workshop series on Neural-Symbolic Learning and Reasoning
Sample complexity : The sample complexity of a machine learning algorithm represents the number of training samples that it needs in order to successfully learn a target function. More precisely, the sample complexity is the number of training samples that we need to supply to the algorithm, so that the function returned by the algorithm is within an arbitrarily small error of the best possible function, with probability arbitrarily close to 1. There are two variants of sample complexity: The weak variant fixes a particular input-output distribution; The strong variant takes the worst-case sample complexity over all input-output distributions. The no free lunch theorem, discussed below, proves that, in general, the strong sample complexity is infinite, i.e. there is no algorithm that can learn the globally optimal target function using a finite number of training samples. However, if we are only interested in a particular class of target functions (e.g., only linear functions) then the sample complexity is finite, and it depends linearly on the VC dimension of the class of target functions.
Sample complexity : Let $X$ be a space which we call the input space, and $Y$ be a space which we call the output space, and let $Z$ denote the product $X \times Y$. For example, in the setting of binary classification, $X$ is typically a finite-dimensional vector space and $Y$ is the set $\{-1, 1\}$. Fix a hypothesis space $\mathcal{H}$ of functions $h \colon X \to Y$. A learning algorithm over $\mathcal{H}$ is a computable map from $Z^{*}$ to $\mathcal{H}$. In other words, it is an algorithm that takes as input a finite sequence of training samples and outputs a function from $X$ to $Y$. Typical learning algorithms include empirical risk minimization, without or with Tikhonov regularization. Fix a loss function $L \colon Y \times Y \to \mathbb{R}_{\geq 0}$, for example, the square loss $L(y, y') = (y - y')^{2}$, where $h(x) = y'$. For a given distribution $\rho$ on $X \times Y$, the expected risk of a hypothesis (a function) $h \in \mathcal{H}$ is $\mathcal{E}(h) := \mathbb{E}_{\rho}[L(h(x), y)] = \int_{X \times Y} L(h(x), y) \, d\rho(x, y)$. In our setting, we have $h = \mathcal{A}(S_{n})$, where $\mathcal{A}$ is a learning algorithm and $S_{n} = ((x_{1}, y_{1}), \ldots, (x_{n}, y_{n})) \sim \rho^{n}$ is a sequence of vectors which are all drawn independently from $\rho$. Define the optimal risk $\mathcal{E}_{\mathcal{H}}^{*} = \inf_{h \in \mathcal{H}} \mathcal{E}(h)$. Set $h_{n} = \mathcal{A}(S_{n})$ for each sample size $n$; then $h_{n}$ is a random variable and depends on the random variable $S_{n}$, which is drawn from the distribution $\rho^{n}$. The algorithm $\mathcal{A}$ is called consistent if $\mathcal{E}(h_{n})$ converges in probability to $\mathcal{E}_{\mathcal{H}}^{*}$. In other words, for all $\epsilon, \delta > 0$, there exists a positive integer $N$ such that, for all sample sizes $n \geq N$, we have $\Pr_{\rho^{n}}[\mathcal{E}(h_{n}) - \mathcal{E}_{\mathcal{H}}^{*} \geq \varepsilon] < \delta$. The sample complexity of $\mathcal{A}$ is then the minimum $N$ for which this holds, as a function of $\rho$, $\epsilon$, and $\delta$. We write the sample complexity as $N(\rho, \epsilon, \delta)$ to emphasize that this value of $N$ depends on $\rho$, $\epsilon$, and $\delta$. If $\mathcal{A}$ is not consistent, then we set $N(\rho, \epsilon, \delta) = \infty$. If there exists an algorithm for which $N(\rho, \epsilon, \delta)$ is finite, then we say that the hypothesis space $\mathcal{H}$ is learnable. In other words, the sample complexity $N(\rho, \epsilon, \delta)$ defines the rate of consistency of the algorithm: given a desired accuracy $\epsilon$ and confidence $\delta$, one needs to sample $N(\rho, \epsilon, \delta)$ data points to guarantee that the risk of the output function is within $\epsilon$ of the best possible, with probability at least $1 - \delta$. In probably approximately correct (PAC) learning, one is concerned with whether the sample complexity is polynomial, that is, whether $N(\rho, \epsilon, \delta)$ is bounded by a polynomial in $1/\epsilon$ and $1/\delta$. If $N(\rho, \epsilon, \delta)$ is polynomial for some learning algorithm, then one says that the hypothesis space $\mathcal{H}$ is PAC-learnable. This is a stronger notion than being learnable.
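For intuition, a standard bound (quoted here for illustration, not derived in the text above) shows how the definition plays out for a finite hypothesis space in the realizable case, i.e. when some hypothesis in $\mathcal{H}$ attains zero risk:

```latex
% Standard PAC bound for a finite hypothesis class, realizable case:
% if some h^* \in \mathcal{H} has \mathcal{E}(h^*) = 0, then empirical risk
% minimization with
%     n \ge \frac{1}{\epsilon}\Bigl(\ln|\mathcal{H}| + \ln\tfrac{1}{\delta}\Bigr)
% samples returns h_n with \mathcal{E}(h_n) \le \epsilon with probability at
% least 1 - \delta, for every distribution \rho. Hence
N(\rho, \epsilon, \delta) \;\le\; \left\lceil \frac{1}{\epsilon}\Bigl(\ln|\mathcal{H}| + \ln\tfrac{1}{\delta}\Bigr) \right\rceil .
```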
Sample complexity : One can ask whether there exists a learning algorithm so that the sample complexity is finite in the strong sense, that is, there is a bound on the number of samples needed so that the algorithm can learn any distribution over the input-output space with a specified target error. More formally, one asks whether there exists a learning algorithm $\mathcal{A}$ such that, for all $\epsilon, \delta > 0$, there exists a positive integer $N$ such that for all $n \geq N$ we have $\sup_{\rho} \left( \Pr_{\rho^{n}}[\mathcal{E}(h_{n}) - \mathcal{E}_{\mathcal{H}}^{*} \geq \varepsilon] \right) < \delta$, where $h_{n} = \mathcal{A}(S_{n})$, with $S_{n} = ((x_{1}, y_{1}), \ldots, (x_{n}, y_{n})) \sim \rho^{n}$ as above. The No Free Lunch Theorem says that without restrictions on the hypothesis space $\mathcal{H}$, this is not the case, i.e., there always exist "bad" distributions for which the sample complexity is arbitrarily large. Thus, in order to make statements about the rate of convergence of the quantity $\sup_{\rho} \left( \Pr_{\rho^{n}}[\mathcal{E}(h_{n}) - \mathcal{E}_{\mathcal{H}}^{*} \geq \varepsilon] \right)$, one must either constrain the space of probability distributions $\rho$, e.g. via a parametric approach, or constrain the space of hypotheses $\mathcal{H}$, as in distribution-free approaches.
Sample complexity : The latter approach leads to concepts such as VC dimension and Rademacher complexity which control the complexity of the space $\mathcal{H}$. A smaller hypothesis space introduces more bias into the inference process, meaning that $\mathcal{E}_{\mathcal{H}}^{*}$ may be greater than the best possible risk in a larger space. However, by restricting the complexity of the hypothesis space it becomes possible for an algorithm to produce more uniformly consistent functions. This trade-off leads to the concept of regularization. It is a theorem from VC theory that the following three statements are equivalent for a hypothesis space $\mathcal{H}$: $\mathcal{H}$ is PAC-learnable. The VC dimension of $\mathcal{H}$ is finite. $\mathcal{H}$ is a uniform Glivenko-Cantelli class. This gives a way to prove that certain hypothesis spaces are PAC learnable, and by extension, learnable.
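To make the equivalence quantitative, the standard distribution-free rates in terms of the VC dimension $d$ of $\mathcal{H}$ are as follows (a well-known result quoted for illustration; constants are omitted):

```latex
% Sample complexity in terms of the VC dimension d of H (constants omitted):
% agnostic (general) case and realizable case, respectively.
N(\epsilon, \delta) \;=\; \Theta\!\left( \frac{d + \ln(1/\delta)}{\epsilon^{2}} \right),
\qquad
N(\epsilon, \delta) \;=\; O\!\left( \frac{d \ln(1/\epsilon) + \ln(1/\delta)}{\epsilon} \right)
\ \text{(realizable case)}.
```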
Sample complexity : In addition to the supervised learning setting, sample complexity is relevant to semi-supervised learning problems including active learning, where the algorithm can ask for labels to specifically chosen inputs in order to reduce the cost of obtaining many labels. The concept of sample complexity also shows up in reinforcement learning, online learning, and unsupervised algorithms, e.g. for dictionary learning.
Sample complexity : In reinforcement learning, a high sample complexity means that many samples, for example many Monte Carlo tree search simulations, are needed to find a good policy; in the extreme it amounts to a model-free brute-force search of the state space. In contrast, a high-efficiency algorithm has a low sample complexity. Possible techniques for reducing the sample complexity include metric learning and model-based reinforcement learning.
Sample complexity : Active learning (machine learning)
Figure AI : Figure AI, Inc. is a United States-based robotics company specializing in the development of AI-powered humanoid robots. It was founded in 2022 by Brett Adcock, the founder of Archer Aviation and Vettery. Figure AI's team is composed of experts in robotics, artificial intelligence, sensing, perception, and navigation, blending experience from notable companies such as Boston Dynamics and Tesla.
Figure AI : In 2022, the company introduced its prototype, Figure 01, a bipedal robot designed for manual labor, initially targeting the logistics and warehousing sectors. In May 2023 the company raised $70 million from investors led by Parkway Venture Capital. On January 18, 2024, Figure announced a partnership with BMW to deploy humanoid robots in automotive manufacturing facilities. In February 2024, Figure AI secured $675 million in venture capital funding from a consortium that included Jeff Bezos, Microsoft, Nvidia, Intel, and the startup-funding divisions of Amazon and OpenAI. The funding valued the company at $2.6 billion. It also announced a partnership with OpenAI, under which OpenAI would build specialized AI models for Figure's humanoid robots, accelerating Figure's development timeline by enabling its robots to "process and reason from language". In 2025, Figure ended its collaboration with OpenAI, and in February it announced Helix, the next generation of its humanoid robot. On March 15, 2025, Figure AI introduced BotQ, a high-volume manufacturing facility for humanoid robots. According to Figure AI, BotQ is a manufacturing line aiming to produce 12,000 humanoids per year.
Group method of data handling : Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models. GMDH is used in such fields as data mining, knowledge discovery, prediction, complex systems modeling, optimization, and pattern recognition. GMDH algorithms are characterized by an inductive procedure that sorts through gradually more complicated polynomial models and selects the best solution by means of an external criterion. The last section of contains a summary of the applications of GMDH in the 1970s. Other names include "polynomial feedforward neural network" and "self-organization of models". It was one of the first deep learning methods, used to train an eight-layer neural net in 1971.
Group method of data handling : Like linear regression, which fits a linear equation over data, GMDH fits arbitrarily high orders of polynomial equations over data. To choose between models, two or more subsets of a data sample are used, similar to the train-validation-test split. GMDH combined ideas from black-box modeling, successive genetic selection of pairwise features, Gabor's principle of "freedom of choice of decisions", and Beer's principle of external additions. Inspired by an analogy between constructing a model out of noisy data and sending messages through a noisy channel, the authors proposed "noise-immune modelling": the higher the noise, the fewer parameters the optimal model should have, since the noisy channel does not allow many bits to be sent through. The model is structured as a feedforward neural network, but without restrictions on depth; the authors had a procedure for automatic generation of the model structure, which imitates the process of biological selection with pairwise genetic features.
Group method of data handling : The method was originated in 1968 by Prof. Alexey G. Ivakhnenko at the Institute of Cybernetics in Kyiv. The period 1968–1971 was characterized by the application of the regularity criterion alone to problems of identification, pattern recognition, and short-term forecasting. Polynomials, logical nets, fuzzy Zadeh sets, and Bayes probability formulas were used as reference functions. The authors were stimulated by the very high forecasting accuracy of the new approach. Noise immunity was not yet investigated. In the period 1972–1975, the problem of modeling noisy data and incomplete information bases was addressed. Multicriteria selection and the use of additional a priori information to increase noise immunity were proposed. The best experiments showed that, with an extended definition of the optimal model by an additional criterion, the noise level could be ten times greater than the signal. The approach was then improved using Shannon's theorem of general communication theory. In the period 1976–1979, the convergence of multilayered GMDH algorithms was investigated. It was shown that some multilayered algorithms have a "multilayerness error", analogous to the static error of control systems. In 1977, a solution of objective systems analysis problems by multilayered GMDH algorithms was proposed. It turned out that sorting out by an ensemble of criteria finds the only optimal system of equations and can therefore reveal the elements of a complex object and their main input and output variables. In the period 1980–1988, many important theoretical results were obtained. It became clear that full physical models cannot be used for long-term forecasting. It was proved that non-physical GMDH models are more accurate for approximation and forecasting than physical models from regression analysis. Two-level algorithms that use two different time scales for modeling were developed. Since 1989, new algorithms (AC, OCC, PF) for non-parametric modeling of fuzzy objects and SLP for expert systems have been developed and investigated. The present stage of GMDH development can be described as a flourishing of deep learning neural networks and parallel inductive algorithms for multiprocessor computers. Such procedures are currently used in deep learning networks.
Group method of data handling : There are many different ways to choose an order in which partial models are considered. The very first consideration order used in GMDH, originally called the multilayered inductive procedure, is the most popular one. It sorts through gradually more complicated models generated from a base function. The best model is indicated by the minimum of the external criterion characteristic. The multilayered procedure is equivalent to an artificial neural network with a polynomial activation function of neurons. Therefore, an algorithm with such an approach is usually referred to as a GMDH-type neural network or polynomial neural network. Li showed that a GMDH-type neural network performed better than classical forecasting algorithms such as single exponential smoothing, double exponential smoothing, ARIMA, and a back-propagation neural network.
Group method of data handling : Another important approach to the consideration of partial models, which is becoming increasingly popular, is a combinatorial search that is either limited or exhaustive. This approach has some advantages over polynomial neural networks, but requires considerable computational power and is therefore not effective for objects with a large number of inputs. An important achievement of combinatorial GMDH is that it fully outperforms the linear regression approach if the noise level in the input data is greater than zero, and it guarantees that the optimal model will be found during exhaustive sorting. The basic combinatorial algorithm takes the following steps: it divides the data sample into at least two samples A and B; generates subsamples from A according to partial models of steadily increasing complexity; estimates the coefficients of the partial models at each layer of model complexity; calculates the value of the external criterion for the models on sample B; chooses the best model (or set of models) indicated by the minimal value of the criterion; and, for the selected model of optimal complexity, recalculates the coefficients on the whole data sample. In contrast to GMDH-type neural networks, the combinatorial algorithm usually does not stop at a given level of complexity, because an increase in the criterion value can simply be a local minimum. A minimal illustrative sketch of the train-and-select step follows.
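The sketch below illustrates one train-and-select step in this spirit on synthetic data; it is not a faithful implementation of any published COMBI or MIA variant, and all parameters are invented.

```python
# Minimal sketch of a GMDH-style selection step (illustrative only, not a
# faithful implementation of any published COMBI/MIA variant). Candidate
# partial models are quadratic polynomials over pairs of inputs, fitted by
# least squares on subsample A and ranked by an external criterion (mean
# squared error) on subsample B. All data here are synthetic.
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                        # 4 candidate inputs
y = 1.5 * X[:, 0] * X[:, 1] - X[:, 2] + rng.normal(scale=0.1, size=200)

A, B = slice(0, 100), slice(100, 200)                # train / selection split

def partial_model_features(xi, xj):
    """Ivakhnenko-style quadratic partial model in two variables."""
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

candidates = []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    Fa = partial_model_features(X[A, i], X[A, j])
    coef, *_ = np.linalg.lstsq(Fa, y[A], rcond=None)  # fit on sample A
    Fb = partial_model_features(X[B, i], X[B, j])
    ext_crit = np.mean((Fb @ coef - y[B]) ** 2)       # external criterion on B
    candidates.append((ext_crit, (i, j), coef))

candidates.sort(key=lambda c: c[0])
best_crit, (i, j), coef = candidates[0]
print(f"selected inputs ({i}, {j}) with external criterion {best_crit:.4f}")
# A multilayered GMDH procedure would now feed the outputs of the best few
# partial models into the next layer and repeat until the criterion stops
# improving.
```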
Group method of data handling : Combinatorial (COMBI) Multilayered Iterative (MIA) GN Objective System Analysis (OSA) Harmonical Two-level (ARIMAD) Multiplicative–Additive (MAA) Objective Computer Clusterization (OCC); Pointing Finger (PF) clusterization algorithm; Analogues Complexing (AC) Harmonical Rediscretization Algorithm on the base of Multilayered Theory of Statistical Decisions (MTSD) Group of Adaptive Models Evolution (GAME)
Group method of data handling : FAKE GAME Project — Open source. Cross-platform. GEvom — Free upon request for academic use. Windows-only. GMDH Shell — GMDH-based, predictive analytics and time series forecasting software. Free Academic Licensing and Free Trial version available. Windows-only. KnowledgeMiner — Commercial product. Mac OS X-only. Free Demo version available. PNN Discovery client — Commercial product. Sciengy RPF! — Freeware, Open source. wGMDH — Weka plugin, Open source. R Package – Open source. R Package for regression tasks – Open source. Python library of MIA algorithm - Open source. Python library of basic GMDH algorithms (COMBI, MULTI, MIA, RIA) - Open source.
Group method of data handling : A.G. Ivakhnenko. Heuristic Self-Organization in Problems of Engineering Cybernetics. Automatica, vol. 6, 1970, pp. 207–219. S.J. Farlow. Self-Organizing Methods in Modelling: GMDH Type Algorithms. New York, Basel: Marcel Dekker Inc., 1984, 350 pp. H.R. Madala, A.G. Ivakhnenko. Inductive Learning Algorithms for Complex Systems Modeling. CRC Press, Boca Raton, 1994.
Group method of data handling : Library of GMDH books and articles Group Method of Data Handling
Category utility : Category utility is a measure of "category goodness" defined in Gluck & Corter (1985) and Corter & Gluck (1992). It attempts to maximize both the probability that two objects in the same category have attribute values in common, and the probability that objects from different categories have different attribute values. It was intended to supersede more limited measures of category goodness such as "cue validity" (Reed 1972; Rosch & Mervis 1975) and "collocation index" (Jones 1983). It provides a normative information-theoretic measure of the predictive advantage gained by the observer who possesses knowledge of the given category structure (i.e., the class labels of instances) over the observer who does not possess knowledge of the category structure. In this sense the motivation for the category utility measure is similar to the information gain metric used in decision tree learning. In certain presentations, it is also formally equivalent to the mutual information, as discussed below. A review of category utility in its probabilistic incarnation, with applications to machine learning, is provided in Witten & Frank (2005, pp. 260–262).
Category utility : The probability-theoretic definition of category utility given in Fisher (1987) and Witten & Frank (2005) is as follows: $CU(C, F) = \frac{1}{p} \sum_{c_j \in C} p(c_j) \left[ \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik}|c_j)^2 - \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik})^2 \right]$ where $F = \{f_i\},\ i = 1 \ldots n$ is a size-$n$ set of $m$-ary features, and $C = \{c_j\},\ j = 1 \ldots p$ is a set of $p$ categories. The term $p(f_{ik})$ designates the marginal probability that feature $f_i$ takes on value $k$, and the term $p(f_{ik}|c_j)$ designates the category-conditional probability that feature $f_i$ takes on value $k$ given that the object in question belongs to category $c_j$. The motivation and development of this expression for category utility, and the role of the multiplicand $\frac{1}{p}$ as a crude overfitting control, is given in the above sources. Loosely (Fisher 1987), the term $p(c_j) \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik}|c_j)^2$ is the expected number of attribute values that can be correctly guessed by an observer using a probability-matching strategy together with knowledge of the category labels, while $p(c_j) \sum_{f_i \in F} \sum_{k=1}^{m} p(f_{ik})^2$ is the expected number of attribute values that can be correctly guessed by an observer using the same strategy but without any knowledge of the category labels. Their difference therefore reflects the relative advantage accruing to the observer from having knowledge of the category structure.
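As a concrete illustration of this probability-theoretic form, the following sketch computes category utility for a tiny, made-up set of objects with nominal features; the data, feature names, and categories are invented for the example.

```python
# Illustrative computation of (probability-theoretic) category utility for a
# toy data set of objects described by m-ary features. The data, features,
# and categories are made up for the example.
from collections import Counter

# Each object: (category, {feature_name: value})
objects = [
    ("bird",   {"flies": "yes", "legs": "2"}),
    ("bird",   {"flies": "yes", "legs": "2"}),
    ("bird",   {"flies": "no",  "legs": "2"}),
    ("mammal", {"flies": "no",  "legs": "4"}),
    ("mammal", {"flies": "no",  "legs": "4"}),
    ("mammal", {"flies": "yes", "legs": "2"}),   # e.g. a bat
]

features = ["flies", "legs"]
N = len(objects)
categories = sorted({c for c, _ in objects})

def category_utility(objects):
    cu = 0.0
    for c in categories:
        members = [f for cat, f in objects if cat == c]
        p_c = len(members) / N
        guess_with = guess_without = 0.0
        for feat in features:
            within = Counter(f[feat] for f in members)
            overall = Counter(f[feat] for _, f in objects)
            # expected number of correctly guessed values, probability matching
            guess_with += sum((n / len(members)) ** 2 for n in within.values())
            guess_without += sum((n / N) ** 2 for n in overall.values())
        cu += p_c * (guess_with - guess_without)
    return cu / len(categories)   # the 1/p multiplicand

print(round(category_utility(objects), 4))
```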
Category utility : The information-theoretic definition of category utility for a set of entities with a size-$n$ binary feature set $F = \{f_i\},\ i = 1 \ldots n$, and a binary category $C = \{c, \bar{c}\}$ is given in Gluck & Corter (1985) as follows: $CU(C, F) = \left[ p(c) \sum_{i=1}^{n} p(f_i|c) \log p(f_i|c) + p(\bar{c}) \sum_{i=1}^{n} p(f_i|\bar{c}) \log p(f_i|\bar{c}) \right] - \sum_{i=1}^{n} p(f_i) \log p(f_i)$ where $p(c)$ is the prior probability of an entity belonging to the positive category $c$ (in the absence of any feature information), $p(f_i|c)$ is the conditional probability of an entity having feature $f_i$ given that the entity belongs to category $c$, $p(f_i|\bar{c})$ is likewise the conditional probability of an entity having feature $f_i$ given that the entity belongs to category $\bar{c}$, and $p(f_i)$ is the prior probability of an entity possessing feature $f_i$ (in the absence of any category information). The intuition behind the above expression is as follows: the term $p(c) \sum_{i=1}^{n} p(f_i|c) \log p(f_i|c)$ represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category $c$. Similarly, the term $p(\bar{c}) \sum_{i=1}^{n} p(f_i|\bar{c}) \log p(f_i|\bar{c})$ represents the cost (in bits) of optimally encoding (or transmitting) feature information when it is known that the objects to be described belong to category $\bar{c}$. The sum of these two terms in the brackets is therefore the weighted average of these two costs. The final term, $\sum_{i=1}^{n} p(f_i) \log p(f_i)$, represents the cost (in bits) of optimally encoding (or transmitting) feature information when no category information is available. The value of the category utility will, in the above formulation, be non-negative.
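The non-negativity asserted in the last sentence can be checked term by term with Jensen's inequality; the notation below is generic and the short check is offered only as a sketch of why the claim holds:

```latex
% For each feature f_i, write g(x) = x \log x (a convex function) and note
% p(f_i) = p(c)\,p(f_i \mid c) + p(\bar{c})\,p(f_i \mid \bar{c}).
% Jensen's inequality then gives
p(c)\,g\!\bigl(p(f_i \mid c)\bigr) + p(\bar{c})\,g\!\bigl(p(f_i \mid \bar{c})\bigr)
\;\ge\; g\!\bigl(p(f_i)\bigr),
% so each feature contributes a non-negative amount to the bracketed sum
% minus the final term, and hence CU(C, F) \ge 0.
```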
Category utility : Like the mutual information, the category utility is not sensitive to any ordering in the feature or category variable values. That is, as far as the category utility is concerned, one set of category labels is not qualitatively different from any other, since the formulation of the category utility does not account for any ordering of the class variable. Similarly, a feature variable adopting one set of values is not qualitatively different from a feature variable adopting any other set of values. As far as the category utility or mutual information are concerned, all category and feature variables are nominal variables. For this reason, category utility does not reflect any gestalt aspects of "category goodness" that might be based on such ordering effects. One possible adjustment for this insensitivity to ordinality is given by the weighting scheme described in the article for mutual information.
Category utility : This section provides some background on the origins of, and need for, formal measures of "category goodness" such as the category utility, and some of the history that led to the development of this particular metric.
Category utility : Category utility is used as the category evaluation measure in the popular conceptual clustering algorithm called COBWEB (Fisher 1987).
Category utility : Abstraction Concept learning Universals Unsupervised learning
Artificial intelligence of things : The Artificial Intelligence of Things (AIoT) is the combination of artificial intelligence (AI) technologies with the Internet of things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions, and enhance data management and analytics. In 2018, KPMG published a foresight study on the future of AI including scenarios until 2040. The analysts describe in detail a scenario in which a community of things would see each device contain its own AI that could link autonomously to other AIs to perform tasks together intelligently. Value creation would be controlled and executed in real time using swarm intelligence. Many industries could be transformed by the application of swarm intelligence, including automotive, cloud, medical, military, research, and technology. An important facet of the AIoT is that AI is performed on some thing. In its purest form this involves performing the AI on the device itself, i.e. at the edge (edge computing), with no need for external connections: an Internet connection is not required. The AIoT is an evolution of the concept of the IoT, and that is where the comparison ends. The combined power of AI and IoT promises to unlock unrealized customer value in a broad swath of industry verticals such as edge analytics, autonomous vehicles, personalized fitness, remote healthcare, precision agriculture, smart retail, predictive maintenance, and industrial automation.
Artificial intelligence of things : As defined by the 21st Century Cures Act in 2016, a medical device is a device that performs a function in healthcare with the intention of using it "in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals". Under the Federal Food, Drug, and Cosmetic Act, all AI systems falling within this definition are regulated by the FDA. Medical devices are classified by the FDA into three classes based on their uses and risks: the higher the risk, the stricter the control. Class I includes devices with the smallest risk, while Class III carries the greatest risk. The number of approved medical devices that utilize artificial intelligence or machine learning (AI/ML) has been increasing steadily. By 2020, the United States Food and Drug Administration (FDA) had approved a large number of medical devices that utilized AI/ML. A year later, the FDA released a regulatory framework for devices that use AI/ML software, in addition to the EU medical device regulation, which replaced the earlier EU Medical Device Directive. As technology continues to improve, it has rapidly changed how the medical field works and diagnoses. Various AI applications can improve productivity and reduce medical errors, for example in diagnosis and treatment selection, and in creating risk predictions and stratifying disease. AI also helps patients through patient data, electronic health records, mobile apps, and easy access to devices and sensors for patients who need such technologies. The need to protect patients' data is critical, and keeping patient data confidential in electronic records becomes increasingly difficult as the data becomes integrated into clinical care. Easy access to their data may benefit patients, but it also raises skepticism about data protection. Technology and AI have combined to provide opportunities for better management of healthcare information and technology integration in the medical industry. AI can be implemented to recognize abnormalities and flag suspicious access to sensitive data by third parties. On the other hand, it will be necessary to rethink confidentiality and other core principles of medical ethics in order to implement deep learning systems, since we cannot rely solely on technology.
Artificial intelligence of things : When AI is integrated into cloud engineering, it can help multiple professional fields maximize data collection and can improve performance and efficiency through digital management. Cloud engineering applies engineering methods to cloud computing and focuses on technological cloud services. In conceiving, developing, operating, and maintaining cloud computing systems, it adopts a systematic approach to commercialization, standardization, and governance. Among its diverse aspects are contributions from development engineering, software engineering, web development, performance engineering, security engineering, platform engineering, risk engineering, and quality engineering. AI is implemented in the information technology framework to streamline workloads and automate repetitive processes. Using these tools, organizations can better manage data as they accumulate larger amounts of collective data, and can integrate data recognition, classification, and management processes over time. AI can bring efficiency to organizations by providing strategic methods and saving time on repeated tasks and analyses.
Artificial intelligence of things : Artificial intelligence Medical Device - Artificial Intelligence Cloud Computing - Cloud Engineering Internet of things Edge Computing
Attributional calculus : Attributional calculus is a logic and representation system defined by Ryszard S. Michalski. It combines elements of predicate logic, propositional calculus, and multi-valued logic. Attributional calculus provides a formal language for natural induction, which is an inductive learning process whose outcomes are in human-readable forms.
Attributional calculus : Michalski, R.S., "ATTRIBUTIONAL CALCULUS: A Logic and Representation Language for Natural Induction," Reports of the Machine Learning and Inference Laboratory, MLI 04–2, George Mason University, Fairfax, VA, April, 2004.
The Fable of Oscar : The Fable of Oscar is a fable proposed by John L. Pollock in his book How to Build a Person (ISBN 9780262161138) to defend the idea of token physicalism, agent materialism, and strong AI. It ultimately illustrates what is needed for an Artificial Intelligence to be built and why humans are just like intelligent machines.
The Fable of Oscar : Once in a distant land there lived a race of Engineers. All of their physical needs were provided by the machines they had invented. One of the Engineers decided to create an "intelligent machine" far more ingenious than the other machines, one that could actually sense, learn, and adapt to its environment like an intelligent animal.
The Fable of Oscar : Mind–body problem Robot
The Fable of Oscar : http://johnpollock.us/ftp/OSCAR-web-page/oscar.html http://philpapers.org/rec/POLOAC
Deep lambertian networks : Deep Lambertian Networks (DLN) combine a deep belief network (DBN) with the Lambertian reflectance assumption to deal with the challenges posed by illumination variation in visual perception. The Lambertian reflectance model gives an illumination-invariant representation which can be used for recognition; it is widely used for modeling illumination variations and is a good approximation for diffuse object surfaces. The DLN is a hybrid undirected-directed generative model that combines DBNs with the Lambertian reflectance model. In the DLN, the visible layer consists of image pixel intensities v ∈ R^{N_v}, where N_v is the number of pixels in the image. For every pixel i there are two latent variables, namely the albedo and the surface normal. Gaussian restricted Boltzmann machines (GRBMs) are used to model the albedo and surface normals. By combining deep belief nets with the Lambertian reflectance assumption, the model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable. By transferring learned knowledge from similar objects, albedo and surface-normal estimation from a single image is also possible. Experiments demonstrate that this model is able to generalize as well as improve over standard baselines in one-shot face recognition. The model has been successfully applied to the reconstruction of shadowed facial images under arbitrary lighting conditions, and it has also been tested on non-living objects. The method outperforms most other methods and is faster than them.
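For reference, the Lambertian reflectance assumption underlying the model takes the standard image-formation form below; the notation here is generic rather than taken from the DLN paper itself:

```latex
% Lambertian image formation: the intensity of pixel i is the product of its
% albedo a_i and the (clamped) dot product of its unit surface normal n_i
% with the light direction \ell:
v_i \;=\; a_i \,\max\!\bigl(0,\; \mathbf{n}_i^{\top} \boldsymbol{\ell}\bigr).
```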
Semantic analysis (machine learning) : In machine learning, semantic analysis of a text corpus is the task of building structures that approximate concepts from a large set of documents. It generally does not involve prior semantic understanding of the documents. Semantic analysis strategies include: Metalanguages based on first-order logic, which can analyze the speech of humans. Understanding the semantics of a text is symbol grounding: if language is grounded, it is equal to recognizing a machine-readable meaning. For the restricted domain of spatial analysis, a computer-based language understanding system was demonstrated. Latent semantic analysis (LSA), a class of techniques where documents are represented as vectors in a term space. A prominent example is probabilistic latent semantic analysis (PLSA). Latent Dirichlet allocation, which involves attributing document terms to topics. n-grams and hidden Markov models, which work by representing the term stream as a Markov chain, in which each term is derived from preceding terms.
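As an illustration of the LSA family mentioned above, here is a minimal sketch using scikit-learn's TF-IDF vectorizer and truncated SVD; the tiny corpus and the number of components are made up for the example.

```python
# Minimal latent semantic analysis (LSA) sketch: documents -> TF-IDF term
# vectors -> low-rank "concept" space via truncated SVD. The tiny corpus and
# parameter choices are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the cat sat on the mat",
    "a cat chased a mouse",
    "stocks fell as the market closed",
    "the market rallied and stocks rose",
]

tfidf = TfidfVectorizer().fit_transform(corpus)      # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0)   # 2 latent "concepts"
doc_vectors = svd.fit_transform(tfidf)               # documents x concepts

# Documents about the same topic end up close together in concept space.
print(cosine_similarity(doc_vectors))
```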
Semantic analysis (machine learning) : Explicit semantic analysis Information extraction Semantic similarity Stochastic semantic analysis Ontology learning
Software agent : In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency. The term agent is derived from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate. Some agents are colloquially known as bots, from robot. They may be embodied, as when execution is paired with a robot body, or as software such as a chatbot executing on a computer, such as a mobile device, e.g. Siri. Software agents may be autonomous or work together with other agents or people. Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality or embody humanoid form (see Asimo). Related and derived concepts include intelligent agents (in particular exhibiting some aspects of artificial intelligence, such as reasoning), autonomous agents (capable of modifying the methods of achieving their objectives), distributed agents (being executed on physically distinct computers), multi-agent systems (distributed agents that work together to achieve an objective that could not be accomplished by a single agent acting alone), and mobile agents (agents that can relocate their execution onto different processors).
Software agent : The basic attributes of an autonomous software agent are that agents: are not strictly invoked for a task, but activate themselves; may reside in a wait status on a host, perceiving context; may move to a run status on a host upon starting conditions; do not require user interaction; and may invoke other tasks, including communication. The concept of an agent provides a convenient and powerful way to describe a complex software entity that is capable of acting with a certain degree of autonomy in order to accomplish tasks on behalf of its host. But unlike objects, which are defined in terms of methods and attributes, an agent is defined in terms of its behavior. Various authors have proposed different definitions of agents; these commonly include concepts such as: persistence: code is not executed on demand but runs continuously and decides for itself when it should perform some activity; autonomy: agents have capabilities of task selection, prioritization, goal-directed behavior, and decision-making without human intervention; social ability: agents are able to engage other components through some sort of communication and coordination, and may collaborate on a task; reactivity: agents perceive the context in which they operate and react to it appropriately.
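To make these attributes concrete, here is a minimal, hypothetical sketch of an autonomous agent loop exhibiting persistence, reactivity, and simple goal-directed behavior; none of the names come from a specific agent framework.

```python
# Minimal sketch of an autonomous agent loop: the agent persists, perceives
# its environment, reacts to context, and decides for itself when to act.
# Environment, goal, and actions are hypothetical placeholders.
import random
import time

class ThermostatAgent:
    """Toy agent whose goal is to keep a simulated temperature near a setpoint."""

    def __init__(self, setpoint=21.0):
        self.setpoint = setpoint
        self.heater_on = False          # internal state persists across cycles

    def perceive(self):
        """Stand-in for sensing the environment."""
        drift = 1.0 if self.heater_on else -1.0
        return self.setpoint + drift + random.uniform(-2.0, 2.0)

    def decide(self, temperature):
        """Goal-directed decision made without human intervention (autonomy)."""
        return temperature < self.setpoint

    def act(self, turn_on):
        if turn_on != self.heater_on:
            self.heater_on = turn_on
            print("heater", "on" if turn_on else "off")

    def run(self, cycles=5, period=0.1):
        for _ in range(cycles):            # persistence: runs continuously
            temp = self.perceive()         # reactivity: senses its context
            self.act(self.decide(temp))    # autonomy: acts on its own decision
            time.sleep(period)

if __name__ == "__main__":
    ThermostatAgent().run()
```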
Software agent : Software agents may offer various benefits to their end users by automating complex or repetitive tasks. However, there are organizational and cultural impacts of this technology that need to be considered prior to implementing software agents.
Software agent : Issues to consider in the development of agent-based systems include: how tasks are scheduled and how synchronization of tasks is achieved; how tasks are prioritized by agents; how agents can collaborate or recruit resources; how agents can be re-instantiated in different environments, and how their internal state can be stored; how the environment will be probed and how a change of environment leads to behavioral changes of the agents; how messaging and communication can be achieved; and what hierarchies of agents are useful (e.g. task execution agents, scheduling agents, resource providers). For software agents to work together efficiently they must share the semantics of their data elements. This can be done by having computer systems publish their metadata. The definition of agent processing can be approached from two interrelated directions: internal state processing and ontologies for representing knowledge; and interaction protocols, i.e. standards for specifying the communication of tasks. Agent systems are used to model real-world systems with concurrency or parallel processing. Agent Machinery – engines of various kinds, which support the varying degrees of intelligence. Agent Content – data employed by the machinery in reasoning and learning. Agent Access – methods to enable the machinery to perceive content and perform actions as outcomes of reasoning. Agent Security – concerns related to distributed computing, augmented by a few special concerns related to agents. The agent uses its access methods to go out into local and remote databases to forage for content. These access methods may include setting up news stream delivery to the agent, retrieval from bulletin boards, or using a spider to walk the Web. The content that is retrieved in this way is probably already partially filtered – by the selection of the newsfeed or the databases that are searched. The agent next may use its detailed searching or language-processing machinery to extract keywords or signatures from the body of the content that has been received or retrieved. This abstracted content (or event) is then passed to the agent's reasoning or inferencing machinery in order to decide what to do with the new content. This process combines the event content with the rule-based or knowledge content provided by the user. If this process finds a good hit or match in the new content, the agent may use another piece of its machinery to do a more detailed search on the content. Finally, the agent may decide to take an action based on the new content; for example, to notify the user that an important event has occurred. This action is verified by a security function and then given the authority of the user. The agent makes use of a user-access method to deliver that message to the user. If the user confirms that the event is important by acting quickly on the notification, the agent may also employ its learning machinery to increase its weighting for this kind of event. Bots can act on behalf of their creators to do good as well as harm. There are a few ways in which bots can show that they are designed with the best intentions and are not built to do harm. The first is for a bot to identify itself in the user-agent HTTP header when communicating with a site. The source IP address must also be validated to establish the bot as legitimate. Next, the bot must always respect a site's robots.txt file, since it has become the standard across most of the web. Beyond respecting robots.txt, bots should avoid being too aggressive and should respect any crawl-delay instructions. A minimal sketch of such a well-behaved bot follows.
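The sketch below illustrates the behavior just described using Python's standard library; the bot name and target URL are hypothetical placeholders.

```python
# Minimal sketch of a "well-behaved" bot along the lines described above:
# it identifies itself via the User-Agent header, checks robots.txt before
# fetching, and honors any crawl-delay directive. The bot name and target
# URL are hypothetical placeholders.
import time
import urllib.robotparser
import urllib.request

USER_AGENT = "ExampleResearchBot/1.0 (+https://example.org/bot-info)"  # hypothetical
TARGET = "https://example.org/some/page"

# 1. Fetch and parse robots.txt for the target site.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.org/robots.txt")
rp.read()

# 2. Only proceed if robots.txt allows this user agent to fetch the URL.
if rp.can_fetch(USER_AGENT, TARGET):
    # 3. Honor a crawl-delay directive if one is present.
    delay = rp.crawl_delay(USER_AGENT)
    if delay:
        time.sleep(delay)
    request = urllib.request.Request(TARGET, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(request) as response:
        print(response.status, len(response.read()), "bytes fetched")
else:
    print("robots.txt disallows fetching", TARGET)
```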
Software agent : Agent architecture Chatbot Data loss prevention Endpoint detection and response Software bot
Software agent : Software Agents: An Overview Archived July 17, 2011, at the Wayback Machine, Hyacinth S. Nwana. Knowledge Engineering Review, 11(3):1–40, September 1996. Cambridge University Press. FIPA The Foundation for Intelligent Physical Agents JADE Java Agent Developing Framework, an Open Source framework developed by Telecom Italia Labs European Software-Agent Research Center Archived 2017-09-14 at the Wayback Machine JAFIMA JAFIMA: A Java based Agent Framework for Intelligent and Mobile Agents SemanticAgent An Open Source framework to develop SWRL based Agents on top of JADE Mobile-C A Multi-Agent Platform for Mobile C/C++ Agents. HLL High-Level Logic (HLL) Open Source Project. Open source project KATO for PHP and Java developers to write software agents
Syman : SYMAN is an artificial intelligence technology that uses data from social media profiles to identify trends in the job market. SYMAN is designed to organize actionable data for products and services including recruiting, human capital management, CRM, and marketing. SYMAN was developed with a $21 million series B financing round secured by Identified, which was led by VantagePoint Capital Partners and Capricorn Investment Group.
Syman : Workday
Multitask optimization : Multi-task optimization is a paradigm in the optimization literature that focuses on solving multiple self-contained tasks simultaneously. The paradigm has been inspired by the well-established concepts of transfer learning and multi-task learning in predictive analytics. The key motivation behind multi-task optimization is that if optimization tasks are related to each other in terms of their optimal solutions or the general characteristics of their function landscapes, the search progress on one task can be transferred to substantially accelerate the search on the others. The success of the paradigm is not necessarily limited to one-way knowledge transfers from simpler to more complex tasks; in practice, one may intentionally attempt a more difficult task in order to incidentally solve several smaller problems along the way. There is a direct relationship between multitask optimization and multi-objective optimization.
Multitask optimization : There are several common approaches for multi-task optimization: Bayesian optimization, evolutionary computation, and approaches based on Game theory.
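Among these, the evolutionary flavor is the easiest to illustrate. The sketch below is a generic toy example, not any published multifactorial evolutionary algorithm: two related tasks are optimized by simple random-mutation hill climbers that periodically exchange their best solutions, so progress on one task can seed the other. The tasks and all parameters are invented.

```python
# Generic sketch of knowledge transfer between two related optimization tasks:
# two hill climbers occasionally swap their best-so-far solutions. The tasks
# (shifted sphere functions) and all parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
DIM = 10

def task_a(x):  # sphere function centered at 1.0
    return np.sum((x - 1.0) ** 2)

def task_b(x):  # related task: sphere function centered at 1.2
    return np.sum((x - 1.2) ** 2)

def hill_climb_step(x, f, step=0.1):
    candidate = x + rng.normal(scale=step, size=DIM)
    return candidate if f(candidate) < f(x) else x

xa = rng.normal(size=DIM)
xb = rng.normal(size=DIM)

for it in range(500):
    xa = hill_climb_step(xa, task_a)
    xb = hill_climb_step(xb, task_b)
    if it % 50 == 0:
        # Transfer: each task also tries the other's current best solution.
        if task_a(xb) < task_a(xa):
            xa = xb.copy()
        if task_b(xa) < task_b(xb):
            xb = xa.copy()

print(f"task A best: {task_a(xa):.4f}, task B best: {task_b(xb):.4f}")
```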
Multitask optimization : Algorithms for multi-task optimization span a wide array of real-world applications. Recent studies highlight the potential for speed-ups in the optimization of engineering design parameters by conducting related designs jointly in a multi-task manner. In machine learning, the transfer of optimized features across related data sets can enhance the efficiency of the training process as well as improve the generalization capability of learned models. In addition, the concept of multi-tasking has led to advances in automatic hyperparameter optimization of machine learning models and ensemble learning. Applications have also been reported in cloud computing, with future developments geared towards cloud-based on-demand optimization services that can cater to multiple customers simultaneously. Recent work has additionally shown applications in chemistry. In addition, some recent works have applied multi-task optimization algorithms in industrial manufacturing.
Multitask optimization : Multi-objective optimization Multi-task learning Multicriteria classification Multiple-criteria decision analysis
AZFinText : Arizona Financial Text System (AZFinText) is a textual-based quantitative financial prediction system written by Robert P. Schumaker of University of Texas at Tyler and Hsinchun Chen of the University of Arizona.
AZFinText : This system differs from other systems in that it uses financial text as one of its key means of predicting stock price movement. This reduces the information lag-time problem evident in many similar systems, where new information (e.g., losing a costly court battle or having a product recall) must be transcribed before the quant can react appropriately. AZFinText overcomes these limitations by using the terms that appear in financial news articles to predict future stock prices twenty minutes after the news article has been released. It is believed that certain article terms can move stocks more than others. Terms such as "factory exploded" or "workers strike" will have a depressing effect on stock prices, whereas terms such as "earnings rose" will tend to increase stock prices. When a human trading expert sees certain terms, they will react in a somewhat predictable fashion. AZFinText capitalizes on the arbitrage opportunities that exist when investment experts over- and under-react to certain news stories. By analyzing breaking financial news articles and focusing on specific parts of speech, portfolio selection, term weighting, and even article sentiment, the AZFinText system becomes a powerful tool and a radically different way of looking at stock market prediction.
AZFinText : The foundation of AZFinText can be found in the ACM TOIS article. Within this paper, the authors tested several different prediction models and linguistic textual representations. From this work, it was found that using the article terms and the price of the stock at the time the article was released was the most effective model and using proper nouns was the most effective textual representation technique. Combining the two, AZFinText netted a 2.84% trading return over the five-week study period. AZFinText was then extended to study what combination of peer organizations help to best train the system. Using the premise that IBM has more in common with Microsoft than GM, AZFinText studied the effect of varying peer-based training sets. To do this, AZFinText trained on the various levels of GICS and evaluated the results. It was found that sector-based training was most effective, netting an 8.50% trading return, outperforming Jim Cramer, Jim Jubak and DayTraders.com during the study period. AZFinText was also compared against the top 10 quantitative systems and outperformed 6 of them. A third study investigated the role of portfolio building in a textual financial prediction system. From this study, Momentum and Contrarian stock portfolios were created and tested. Using the premise that past winning stocks will continue to win and past losing stocks will continue to lose, AZFinText netted a 20.79% return during the study period. It was also noted that traders were generally overreacting to news events, creating the opportunity of abnormal returns. A fourth study looked into using author sentiment as an added predictive variable. Using the premise that an author can unwittingly influence market trades simply by the terms they use, AZFinText was tested using tone and polarity features. It was found that Contrarian activity was occurring within the market, where articles of a positive tone would decrease in price and articles of a negative tone would increase in price. A further study investigated what article verbs have the most influence on stock price movement. From this work, it was found that planted, announcing, front, smaller and crude had the highest positive impact on stock price.
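The general recipe described above (article terms plus the stock price at release time, fed to a learned predictor) can be sketched generically. The snippet below is not the authors' AZFinText pipeline: the toy articles, the bag-of-words features, and the ridge regression model are stand-ins chosen for illustration.

```python
# Generic sketch of a term-plus-price prediction model in the spirit of the
# approach described above (NOT the authors' actual AZFinText pipeline).
# The toy articles, prices, and model choice are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

articles = [
    "Acme earnings rose on strong demand",
    "Acme workers strike halts production",
    "Beta Corp announces product recall",
    "Beta Corp earnings rose beating estimates",
]
price_at_release = np.array([10.0, 10.2, 25.0, 24.5])   # price when article hit
price_20min_later = np.array([10.3, 9.8, 24.2, 25.1])   # prediction target

vec = CountVectorizer(binary=True)
term_features = vec.fit_transform(articles).toarray()
X = np.hstack([term_features, price_at_release.reshape(-1, 1)])

model = Ridge(alpha=1.0).fit(X, price_20min_later)

new_article = ["Acme announces product recall"]
new_X = np.hstack([vec.transform(new_article).toarray(), [[10.1]]])
print(model.predict(new_X))   # predicted price 20 minutes after release
```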
AZFinText : AZFinText has been the topic of discussion by numerous media outlets. Some of the more notable ones include The Wall Street Journal, MIT's Technology Review, Dow Jones Newswire, WBIR in Knoxville, TN, Slashdot and other media outlets.
AZFinText : https://blogs.wsj.com/digits/2010/06/21/using-artificial-intelligence-to-digest-financial-news/ https://slashdot.org/story/10/06/12/1341212/Quant-AI-Picks-Stocks-Better-Than-Humans https://www.technologyreview.com/blog/guest/25308/
AI Dungeon : AI Dungeon is a single-player/multiplayer text adventure game which uses artificial intelligence (AI) to generate content and allows players to create and share adventures and custom prompts. The game's first version was made available in May 2019, and its second version (initially called AI Dungeon 2) was released on Google Colaboratory in December 2019. It was later ported that same month to its current cross-platform web application. The AI model was then upgraded in July 2020.
AI Dungeon : AI Dungeon is a text adventure game that uses artificial intelligence to generate random storylines in response to player-submitted stimuli. In the game, players are prompted to choose a setting for their adventure (e.g. fantasy, mystery, apocalyptic, cyberpunk, zombies), followed by other options relevant to the setting (such as character class for fantasy settings). After beginning an adventure, four main interaction methods can be chosen for the player's text input: Do: Must be followed by a verb, allowing the player to perform an action. Say: Must be followed by dialogue sentences, allowing players to communicate with other characters. Story: Can be followed by sentences describing something that happens to progress the story, or that players want the AI to know for future events. See: Must be followed by a description, allowing the player to perceive events, objects, or characters; using this command creates an AI-generated image and does not affect gameplay. The game adapts and responds to most actions the player enters. Providing blank inputs can be used to prompt the AI to generate further content, and the game also provides players with options to undo, redo, or modify recent events to improve the game's narrative. Players can also tell the AI what elements to "remember" for reference in future parts of their playthrough.
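AI Dungeon : A rough sketch of how the four input modes might be formatted into story text before being passed to a text-generation model is shown below; the function name and prompt conventions are assumptions for illustration and are not AI Dungeon's actual code.

```python
# Hypothetical sketch of turning AI Dungeon-style commands into story text.
# The formatting conventions below are invented for illustration only.
def format_player_input(mode: str, text: str, player_name: str = "You") -> str:
    text = text.strip()
    if mode == "do":
        # "Do" expects a verb phrase describing an action.
        return f"> {player_name} {text}"
    if mode == "say":
        # "Say" wraps the input as spoken dialogue.
        return f'> {player_name} say "{text}"'
    if mode == "story":
        # "Story" injects narration the AI should continue from.
        return text
    if mode == "see":
        # "See" describes something to perceive; in the real game this also
        # triggers generation of an image, which is not modeled here.
        return f"{player_name} look around and see {text}"
    raise ValueError(f"unknown input mode: {mode}")

story = ["You wake up in a torch-lit dungeon."]
story.append(format_player_input("do", "search the cell for a way out"))
story.append(format_player_input("say", "Is anyone there?"))
prompt = "\n".join(story)  # this prompt would be passed to the text-generation model
print(prompt)
```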
AI Dungeon : Approximately two thousand people played the original version of the game within the first month of its May 2019 release. Within a week of its December 2019 relaunch, the game reached over 100,000 players and over 500,000 playthroughs, and it reached 1.5 million players by June 2020. As of December 2019, the game's corresponding Patreon campaign was raising approximately $15,000 per month.
AI Dungeon : Official website Original open-source code for AI Dungeon on GitHub (archived)
Microsoft Copilot : Microsoft Copilot (or simply Copilot) is a generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana. The service was introduced in February 2023 under the name Bing Chat, as a built-in feature for Microsoft Bing and Microsoft Edge. Over the course of 2023, Microsoft began to unify the Copilot branding across its various chatbot products, cementing the "copilot" analogy. At its Build 2023 conference, Microsoft announced its plans to integrate Copilot into Windows 11, allowing users to access it directly through the taskbar. In January 2024, a dedicated Copilot key was announced for Windows keyboards. Copilot utilizes the Microsoft Prometheus model, built upon OpenAI's GPT-4 foundational large language model, which in turn has been fine-tuned using both supervised and reinforcement learning techniques. Copilot's conversational interface style resembles that of ChatGPT. The chatbot is able to cite sources, create poems, generate songs, and use numerous languages and dialects. Microsoft operates Copilot on a freemium model. Users on its free tier can access most features, while priority access to newer features, including custom chatbot creation, is provided to paid subscribers under the "Microsoft Copilot Pro" paid subscription service. Several default chatbots are available in the free version of Microsoft Copilot, including the standard Copilot chatbot as well as Microsoft Designer, which is oriented towards using its Image Creator to generate images based on text prompts.
Microsoft Copilot : In 2019, Microsoft partnered with OpenAI and began investing billions of dollars into the organization. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. In September 2020, Microsoft announced that it had licensed OpenAI's GPT-3 exclusively. Others can still receive output from its public API, but Microsoft has exclusive access to the underlying model. In November 2022, OpenAI launched ChatGPT, a chatbot which was based on GPT-3.5. ChatGPT gained worldwide attention following its release, becoming a viral Internet sensation. On January 23, 2023, Microsoft announced a multi-year US$10 billion investment in OpenAI. On February 6, Google announced Bard (later rebranded as Gemini), a ChatGPT-like chatbot service, fearing that ChatGPT could threaten Google's place as a go-to source for information. Multiple media outlets and financial analysts described Google as "rushing" Bard's announcement to preempt rival Microsoft's planned February 7 event unveiling Copilot, as well as to avoid playing "catch-up" to Microsoft.
Microsoft Copilot : Tom Warren, a senior editor at The Verge, has noted the conceptual similarity of Copilot and other Microsoft assistant features like Cortana and Clippy. Warren also believes that large language models, as they develop further, could change how users work and collaborate. Rowan Curran, an analyst at Forrester, states that the integration of AI into productivity software may lead to improvements in user experience. Concerns over the speed of Microsoft's recent release of AI-powered products and investments have led to questions surrounding ethical responsibilities in the testing of such products. One ethical concern the public has vocalized is that GPT-4 and similar large language models may reinforce racial or gender bias. Individuals, including Tom Warren, have also voiced concerns about Copilot after witnessing the chatbot produce several instances of hallucination. In June 2024, Copilot was found to have repeated misinformation about the 2024 United States presidential debates. In response to these concerns, Jon Friedman, the Corporate Vice President of Design and Research at Microsoft, stated that Microsoft was "applying [the] learning" from experience with Bing to "mitigate [the] risks" of Copilot. Microsoft claimed that it was gathering a team of researchers and engineers to identify and alleviate any potential negative impacts. The stated aim was to achieve this through the refinement of training data, blocking queries about sensitive topics, and limiting harmful information. Microsoft stated that it intended to employ InterpretML and Fairlearn to detect and rectify data bias, provide links to its sources, and state any applicable constraints.
Microsoft Copilot : Tabnine – Coding assistant Tay (chatbot) – Chatbot developed by Microsoft Zo (chatbot) – Chatbot developed by Microsoft
Microsoft Copilot : Official website Media related to Microsoft Copilot at Wikimedia Commons Microsoft Copilot Terms of Use
Learning to rank : Learning to rank or machine-learned ranking (MLR) is the application of machine learning, typically supervised, semi-supervised or reinforcement learning, in the construction of ranking models for information retrieval systems. Training data may, for example, consist of lists of items with some partial order specified between items in each list. This order is typically induced by giving a numerical or ordinal score or a binary judgment (e.g. "relevant" or "not relevant") for each item. The goal of constructing the ranking model is to rank new, unseen lists in a similar way to rankings in the training data.
Learning to rank : For the convenience of MLR algorithms, query-document pairs are usually represented by numerical vectors, which are called feature vectors. Such an approach is sometimes called bag of features and is analogous to the bag-of-words model and vector space model used in information retrieval for the representation of documents. Components of such vectors are called features, factors or ranking signals. They may be divided into three groups (features from document retrieval are shown as examples): Query-independent or static features — features that depend only on the document, not on the query. For example, PageRank or the document's length. Such features can be precomputed offline during indexing. They may be used to compute a document's static quality score (or static rank), which is often used to speed up search query evaluation. Query-dependent or dynamic features — features that depend on both the contents of the document and the query, such as TF-IDF score or other non-machine-learned ranking functions. Query-level features or query features — features that depend only on the query. For example, the number of words in a query. Some examples of features used in the well-known LETOR dataset: TF, TF-IDF, BM25, and language-modeling scores of a document's zones (title, body, anchor text, URL) for a given query; lengths and IDF sums of a document's zones; the document's PageRank, HITS ranks and their variants. Selecting and designing good features is an important area of machine learning called feature engineering.
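Learning to rank : As a concrete illustration, a query-document pair can be mapped to a short feature vector that mixes query-dependent, query-independent, and query-level signals; the particular features and values below are illustrative choices, not taken from LETOR.

```python
import math

# Illustrative feature vector for one query-document pair.
# Feature choices and values are made up for the example.
def feature_vector(query_terms, doc_terms, doc_pagerank, doc_length,
                   collection_size, doc_freq):
    # Query-dependent: raw term frequency and a simple TF-IDF over the document body.
    tf = sum(doc_terms.count(t) for t in query_terms)
    tf_idf = sum(
        doc_terms.count(t) * math.log(collection_size / (1 + doc_freq.get(t, 0)))
        for t in query_terms
    )
    # Query-independent (static): PageRank and document length, computable at index time.
    # Query-level: number of words in the query.
    return [tf, tf_idf, doc_pagerank, doc_length, len(query_terms)]

doc_freq = {"learning": 120, "rank": 45}
vec = feature_vector(
    query_terms=["learning", "rank"],
    doc_terms="learning to rank orders documents by relevance learning".split(),
    doc_pagerank=0.73,
    doc_length=9,
    collection_size=10_000,
    doc_freq=doc_freq,
)
print(vec)  # e.g. [3, <tf-idf>, 0.73, 9, 2]
```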
Learning to rank : There are several measures (metrics) which are commonly used to judge how well an algorithm is doing on training data and to compare the performance of different MLR algorithms. Often a learning-to-rank problem is reformulated as an optimization problem with respect to one of these metrics. Examples of ranking quality measures: Mean average precision (MAP); DCG and NDCG; Precision@n and NDCG@n, where "@n" denotes that the metrics are evaluated only on the top n documents; Mean reciprocal rank; Kendall's tau; Spearman's rho. DCG and its normalized variant NDCG are usually preferred in academic research when multiple levels of relevance are used. Other metrics, such as MAP, MRR and precision, are defined only for binary judgments. Several new evaluation metrics have recently been proposed that claim to model users' satisfaction with search results better than the DCG metric: Expected reciprocal rank (ERR); Yandex's pfound. Both of these metrics are based on the assumption that the user is more likely to stop looking at search results after examining a more relevant document than after a less relevant one.
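Learning to rank : For example, DCG and NDCG at a cutoff n can be computed directly from graded relevance labels, as in the sketch below; the exponential-gain, log-discount formulation shown is a common one, though variants exist.

```python
import math

def dcg_at_n(relevances, n):
    # Discounted cumulative gain over the top-n results, using the common
    # (2^rel - 1) / log2(rank + 1) formulation with 1-based ranks.
    return sum(
        (2 ** rel - 1) / math.log2(rank + 2)
        for rank, rel in enumerate(relevances[:n])
    )

def ndcg_at_n(relevances, n):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal_dcg = dcg_at_n(sorted(relevances, reverse=True), n)
    return dcg_at_n(relevances, n) / ideal_dcg if ideal_dcg > 0 else 0.0

# Graded relevance of the documents in the order the system ranked them
# (3 = highly relevant, 0 = not relevant).
ranked_relevances = [3, 2, 3, 0, 1, 2]
print(round(ndcg_at_n(ranked_relevances, 5), 4))
```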
Learning to rank : Learning-to-rank approaches are often categorized into one of three types: pointwise (where individual documents are scored), pairwise (where pairs of documents are ordered relative to each other), and listwise (where an entire list of documents is ordered). Tie-Yan Liu of Microsoft Research Asia has analyzed existing algorithms for learning-to-rank problems in his book Learning to Rank for Information Retrieval. He categorized them into three groups by their input spaces, output spaces, hypothesis spaces (the core function of the model) and loss functions: the pointwise, pairwise, and listwise approaches. In practice, listwise approaches often outperform pairwise and pointwise approaches. This statement was further supported by a large-scale experiment on the performance of different learning-to-rank methods on a large collection of benchmark data sets. In this section, unless otherwise noted, x denotes an object to be evaluated (for example, a document or an image), f(x) denotes a single-value hypothesis, h(·) denotes a bi-variate or multi-variate function, and L(·) denotes the loss function.
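Learning to rank : To make the pairwise approach concrete, the toy sketch below fits a linear scoring function f(x) by gradient descent on a RankNet-style logistic loss over document pairs; the data and hyperparameters are invented for illustration and do not correspond to any specific published system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: feature vectors for the documents of one query and their relevance labels.
X = rng.normal(size=(6, 4))          # 6 documents, 4 ranking features each
y = np.array([2, 0, 1, 0, 2, 1])     # graded relevance judgments

w = np.zeros(4)                      # linear scoring function f(x) = w . x
lr = 0.1

for _ in range(200):
    grad = np.zeros_like(w)
    # Pairwise approach: for every pair where doc i should rank above doc j,
    # penalize the model when f(x_i) - f(x_j) is not sufficiently positive.
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                diff = X[i] - X[j]
                p = 1.0 / (1.0 + np.exp(w @ diff))   # P(model orders the pair wrongly)
                grad -= p * diff                     # gradient of the logistic pairwise loss
    w -= lr * grad / len(y)

scores = X @ w
print("Ranking (best first):", np.argsort(-scores))
```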
Learning to rank : Norbert Fuhr introduced the general idea of MLR in 1992, describing learning approaches in information retrieval as a generalization of parameter estimation; a specific variant of this approach (using polynomial regression) had been published by him three years earlier. Bill Cooper proposed logistic regression for the same purpose in 1992 and used it with his Berkeley research group to train a successful ranking function for TREC. Manning et al. suggest that these early works achieved limited results in their time due to little available training data and poor machine learning techniques. Several conferences, such as NeurIPS, SIGIR and ICML, have had workshops devoted to the learning-to-rank problem since the mid-2000s.
Learning to rank : Similar to recognition applications in computer vision, recent neural network based ranking algorithms are also found to be susceptible to covert adversarial attacks, both on the candidates and the queries. With small perturbations imperceptible to human beings, ranking order could be arbitrarily altered. In addition, model-agnostic transferable adversarial examples are found to be possible, which enables black-box adversarial attacks on deep ranking systems without requiring access to their underlying implementations. Conversely, the robustness of such ranking systems can be improved via adversarial defenses such as the Madry defense.
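Learning to rank : A toy illustration of the underlying vulnerability, under the simplifying assumption of a linear scoring model: a small, gradient-aligned perturbation of one candidate's features is enough to swap two items in the ranking. Real attacks target deep rankers and raw image or text inputs, but the mechanics are analogous.

```python
import numpy as np

# A toy linear ranking model: score(x) = w . x. Real neural rankers are deep,
# but the gradient-based attack idea carries over.
w = np.array([0.9, -0.4, 0.3])

doc_a = np.array([1.0, 0.2, 0.5])    # currently ranked above doc_b
doc_b = np.array([0.8, 0.1, 0.6])
print("before:", w @ doc_a, ">", w @ doc_b)

# Adversarial perturbation of doc_b: step in the direction that increases its
# score (for a linear model the gradient is simply w), with a small norm bound.
epsilon = 0.15
delta = epsilon * w / np.linalg.norm(w)
doc_b_adv = doc_b + delta

flipped = w @ doc_a < w @ doc_b_adv
print("after: ", w @ doc_a, "<" if flipped else ">", w @ doc_b_adv)
```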
Learning to rank : Content-based image retrieval Multimedia information retrieval Image retrieval Triplet loss
Learning to rank : Competitions and public datasets LETOR: A Benchmark Collection for Research on Learning to Rank for Information Retrieval Yandex's Internet Mathematics 2009 Yahoo! Learning to Rank Challenge Microsoft Learning to Rank Datasets
Testsigma : Testsigma is a low-code, AI-driven automated testing platform for software testing, CI/CD, and agile teams. It provides testing products and solutions for web, mobile, and API applications and can be integrated with popular CI/CD tools.
Testsigma : Testsigma was founded by Rukmangada Kandyala in 2019. It offers multiple products that let software testing teams test web apps, mobile apps, APIs, and ERP applications such as Salesforce. Testsigma claims Nagra, Samsung, Cisco, Bosch, and NTUC FairPrice as customers.
Testsigma : In 2022, Testsigma raised $4.6 million in funding led by Accel and Strive. In June 2024, it raised $8.2 million in a round led by MassMutual Ventures.
Testsigma : Testsigma offers a number of continuous testing capabilities as part of its cloud testing platform, including mobile testing, API testing, web testing, and Salesforce testing.
Testsigma : Official website
DABUS : DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) is an artificial intelligence (AI) system created by Stephen Thaler. It reportedly conceived of two novel products — a food container constructed using fractal geometry, which enables rapid reheating, and a flashing beacon for attracting attention in an emergency. The filing of patent applications designating DABUS as inventor has led to decisions by patent offices and courts on whether a patent can be granted for an invention reportedly made by an AI system. DABUS itself is a patented AI paradigm capable of accommodating trillions of computational neurons within extensive artificial neural systems that emulate the limbo-thalamo-cortical loop within the mammalian brain. Such systems utilize arrays of trainable neural modules, each containing interrelated memories representative of some conceptual space. Through simple learning rules, these modules bind together to represent both complex ideas (e.g., juxtapositional inventions) and their consequences as chaining topologies. An electro-optical attention window scans the entire array of neural modules in search of so-called “hot buttons,” those neural modules containing impactful memories. Detection of such hot buttons within consequence chains triggers the release or retraction of synaptic disturbances into the system, selectively reinforcing the most salient chain-based notions.
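DABUS : The sketch below is a loose, toy interpretation of the dynamics described above and is not the patented DABUS architecture: modules are reduced to named concepts with a salience score, candidate chains bind modules at random, and chains that reach a high-salience "hot button" are reinforced.

```python
import random

# Loose, toy interpretation only -- NOT the patented DABUS system. Neural modules
# are reduced to named concepts with a salience score; "chains" are random
# bindings of modules, and chains touching a high-salience "hot button" module
# are reinforced over many iterations.
random.seed(1)

modules = {          # concept -> salience ("hot buttons" have high salience)
    "container": 0.2, "fractal surface": 0.3, "rapid heating": 0.9,
    "beacon": 0.2, "pulsed light": 0.3, "attracts attention": 0.9,
}
chain_strength = {}

for step in range(1000):
    # Bind a few modules together into a candidate idea-consequence chain.
    chain = tuple(sorted(random.sample(list(modules), 3)))
    # "Attention window": check the chain for hot-button modules.
    salience = max(modules[m] for m in chain)
    if salience > 0.8:
        # Reinforce chains whose consequences reach a hot button.
        chain_strength[chain] = chain_strength.get(chain, 0) + 1

best = max(chain_strength, key=chain_strength.get)
print("most reinforced chain:", best)
```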
DABUS : The Artificial Inventor Project The latest news on the DABUS patent case (ipstars.com)
Living Intelligence : Living Intelligence is the convergence of artificial intelligence, biotechnology, and advanced sensors.
Living Intelligence : The conceptual framework of Living Intelligence was introduced in 2024 with a report published by Amy Webb and Sam Jordan from Future Today Institute. The report described it as a convergence of three technologies (artificial intelligence, biotechnology, and advanced sensors) for systems capable of sensing, learning, adapting, and evolving. Living Intelligence relies on the interaction between AI systems (such as Large Action Models), sensor networks that collect and transmit data, and biological engineering applications which include generative biology.
Living Intelligence : Living Intelligence can be used in a variety of industries, including business and education. In education, it focuses on human cognition to personalize learning experiences. It can also assist in training AI models with empathy for applications in customer service and healthcare. Notable early developments in Living Intelligence include DishBrain, a biological computer created by Cortical Labs using brain cells, and various applications of generative biology by companies like Ginkgo Bioworks and Google DeepMind's AlphaProteo project. == References ==
Toronto Declaration : The Toronto Declaration: Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems is a declaration that advocates responsible practices for machine learning practitioners and governing bodies. It is a joint statement issued by groups including Amnesty International and Access Now, with other notable signatories including Human Rights Watch and The Wikimedia Foundation. It was published at RightsCon on May 16, 2018. The Declaration focuses on concerns of algorithmic bias and the potential for discrimination that arises from the use of machine learning and artificial intelligence in applications that may affect people's lives, "from policing, to welfare systems, to healthcare provision, to platforms for online discourse." A secondary concern of the document is the potential for violations of information privacy. The goal of the Declaration is to outline "tangible and actionable standards for states and the private sector." The Declaration calls for tangible solutions, such as reparations for the victims of algorithmic discrimination.
Toronto Declaration : The Toronto Declaration consists of 59 articles, broken into six sections, concerning international human rights law, duties of states, responsibilities of private sector actors, and the right to an effective remedy.
Machine learning in video games : Artificial intelligence and machine learning techniques are used in video games for a wide variety of applications such as non-player character (NPC) control and procedural content generation (PCG). Machine learning is a subset of artificial intelligence that uses historical data to build predictive and analytical models. This is in sharp contrast to traditional methods of artificial intelligence such as search trees and expert systems. Information on machine learning techniques in the field of games is mostly known to the public through research projects, as most gaming companies choose not to publish specific information about their intellectual property. The most publicly known application of machine learning in games is likely the use of deep learning agents that compete with professional human players in complex strategy games. There has been significant application of machine learning on games such as Atari/ALE, Doom, Minecraft, StarCraft, and car racing. Other games that did not originally exist as video games, such as chess and Go, have also been affected by machine learning.
Machine learning in video games : Machine learning agents have been used to take the place of a human player rather than function as NPCs, which are deliberately added into video games as part of designed gameplay. Deep learning agents have achieved impressive results when used in competition with both humans and other artificial intelligence agents.
Machine learning in video games : Computer vision focuses on training computers to gain a high-level understanding of digital images or videos. Many computer vision techniques also incorporate forms of machine learning and have been applied to various video games. This application of computer vision focuses on interpreting game events using visual data. In some cases, artificial intelligence agents have used model-free techniques to learn to play games without any direct connection to internal game logic, solely using video data as input.
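Machine learning in video games : A compressed sketch of this vision-only, model-free setup in the style of DQN-like agents is shown below: raw frames are downsampled and stacked, and a convolutional network maps them to one value per action, with no access to internal game state. The preprocessing, network shape, and random stand-in frames are illustrative assumptions, and PyTorch is assumed to be available.

```python
# Sketch of a vision-only (model-free) game agent in the DQN style: the network
# sees only preprocessed screen frames and outputs a value per action.
import numpy as np
import torch
import torch.nn as nn

def preprocess(frame_rgb: np.ndarray) -> np.ndarray:
    """Convert a raw RGB game frame to a small grayscale image."""
    gray = frame_rgb.mean(axis=2)                       # crude grayscale
    h, w = gray.shape
    ys = np.linspace(0, h - 1, 84).astype(int)          # naive 84x84 downsample
    xs = np.linspace(0, w - 1, 84).astype(int)
    return gray[ys][:, xs] / 255.0

class QNetwork(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),                  # one Q-value per action
        )

    def forward(self, x):
        return self.net(x)

# Fake frame stack standing in for four consecutive screen captures.
frames = np.stack(
    [preprocess(np.random.randint(0, 255, (210, 160, 3))) for _ in range(4)]
)
q = QNetwork(n_actions=6)
q_values = q(torch.as_tensor(frames, dtype=torch.float32).unsqueeze(0))
print("greedy action:", int(q_values.argmax()))
```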
Machine learning in video games : Machine learning has seen research for use in content recommendation and generation. Procedural content generation is the process of creating data algorithmically rather than manually. This type of content is used to add replayability to games without relying on constant additions by human developers. PCG has been used in various games for different types of content generation, examples of which include weapons in Borderlands 2, all world layouts in Minecraft and entire universes in No Man's Sky. Common approaches to PCG include techniques that involve grammars, search-based algorithms, and logic programming. These approaches require humans to manually define the range of content possible, meaning that a human developer decides what features make up a valid piece of generated content. Machine learning is theoretically capable of learning these features when given examples to train on, thus greatly reducing the complicated step of developers specifying the details of content design. Machine learning techniques used for content generation include long short-term memory (LSTM) recurrent neural networks (RNNs), generative adversarial networks (GANs), and K-means clustering. Not all of these techniques make use of ANNs, but the rapid development of deep learning has greatly increased the potential of techniques that do.
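Machine learning in video games : As a sketch of the LSTM-based route to content generation, the toy example below treats levels as strings of tile characters and trains a character-level recurrent model to propose new tile sequences; the tiny corpus, tile alphabet, and model size are purely illustrative, and PyTorch is assumed to be available.

```python
# Toy character-level LSTM for procedural content generation: levels are encoded
# as strings of tile symbols ('-' empty, '#' ground, 'E' enemy, 'o' coin).
# The corpus and model are deliberately tiny; real systems train on many levels.
import torch
import torch.nn as nn

corpus = "----o----##E##----o--o--#####--E--o----####\n" * 20
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

class TileLSTM(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = TileLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
seq_len = 32

for step in range(200):                      # next-character prediction training
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.transpose(1, 2), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Sample a new level fragment tile by tile.
out, state = [stoi['#']], None
for _ in range(40):
    logits, state = model(torch.tensor([[out[-1]]]), state)
    out.append(torch.multinomial(logits[0, -1].softmax(-1), 1).item())
print("".join(chars[i] for i in out))
```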
Machine learning in video games : Music is often seen in video games and can be a crucial element for influencing the mood of different situations and story points. Machine learning has seen use in the experimental field of music generation; it is uniquely suited to processing raw unstructured data and forming high-level representations that could be applied to the diverse field of music. Most attempted methods have involved the use of ANNs in some form. Methods include the use of basic feedforward neural networks, autoencoders, restricted Boltzmann machines, recurrent neural networks, convolutional neural networks, generative adversarial networks (GANs), and compound architectures that use multiple methods.
Learning curve (machine learning) : In machine learning (ML), a learning curve (or training curve) is a graphical representation that shows how a model's performance on a training set (and usually a validation set) changes with the number of training iterations (epochs) or the amount of training data. Typically, the number of training epochs or training set size is plotted on the x-axis, and the value of the loss function (and possibly some other metric such as the cross-validation score) on the y-axis. Synonyms include error curve, experience curve, improvement curve and generalization curve. More abstractly, learning curves plot the difference between learning effort and predictive performance, where "learning effort" usually means the number of training samples, and "predictive performance" means accuracy on testing samples. Learning curves have many useful purposes in ML, including: choosing model parameters during design, adjusting optimization to improve convergence, and diagnosing problems such as overfitting (or underfitting). Learning curves can also be tools for determining how much a model benefits from adding more training data, and whether the model suffers more from a variance error or a bias error. If both the validation score and the training score converge to a certain value, then the model will no longer significantly benefit from more training data.
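Learning curve (machine learning) : A short sketch of producing a training-set-size learning curve with scikit-learn's learning_curve utility follows; the logistic-regression estimator and synthetic dataset are arbitrary stand-ins.

```python
# Sketch: plot training and cross-validation scores against training set size.
# The classifier and synthetic dataset are arbitrary choices for illustration.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5,
)

plt.plot(train_sizes, train_scores.mean(axis=1), label="training score")
plt.plot(train_sizes, val_scores.mean(axis=1), label="validation score")
plt.xlabel("training set size")
plt.ylabel("accuracy")
plt.legend()
plt.show()
# A persistent gap between the two curves suggests a variance error (overfitting);
# two low, converged curves suggest a bias error (underfitting).
```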