Learning curve (machine learning) : When creating a function to approximate the distribution of some data, it is necessary to define a loss function $L(f_\theta(X), Y)$ to measure how good the model output is (e.g., accuracy for classification tasks or mean squared error for regression). We then define an optimization process which finds model parameters $\theta$ such that $L(f_\theta(X), Y)$ is minimized; the minimizing parameters are referred to as $\theta^*$.
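A minimal sketch of this setup, assuming a one-parameter linear model, mean squared error as the loss L, and plain gradient descent as the optimization process (all names and data here are illustrative):

import numpy as np

# Illustrative data: y depends linearly on x, plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
Y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=100)

def f(theta, X):
    # The model f_theta(X): a one-parameter linear predictor.
    return X[:, 0] * theta

def loss(theta, X, Y):
    # L(f_theta(X), Y): mean squared error.
    return np.mean((f(theta, X) - Y) ** 2)

# Optimization process: gradient descent on theta.
theta, lr = 0.0, 0.1
for _ in range(200):
    grad = np.mean(2 * (f(theta, X) - Y) * X[:, 0])
    theta -= lr * grad

print(theta, loss(theta, X, Y))  # theta approaches the minimizer theta* = 3.0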
|
Learning curve (machine learning) : See also: Overfitting, Bias–variance tradeoff, Model selection, Cross-validation (statistics), Validity (statistics), Verification and validation, Double descent
|
Tehran Monolingual Corpus : The Tehran Monolingual Corpus (TMC) is a large-scale Persian monolingual corpus. TMC is suited for language modeling and related research areas in natural language processing. The corpus is extracted from the Hamshahri Corpus and the ISNA news agency website. The quality of the Hamshahri Corpus was improved for language-modeling purposes by a series of tokenization and spell-checking steps. TMC comprises more than 250 million words. The total number of unique words (with a frequency of two or more) in the corpus is about 300 thousand, which is relatively good for a highly inflectional language like Persian. TMC was created by the Natural Language Processing Lab of the University of Tehran. The corpus is free for research use, after obtaining permission from the corpus aggregator.
|
Tehran Monolingual Corpus : See also: Bijankhan Corpus, Hamshahri Corpus
|
Hardware for artificial intelligence : Specialized computer hardware, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks, is often used to execute artificial intelligence (AI) programs faster and with less energy. Since 2017, several consumer-grade CPUs and SoCs have had on-die NPUs. As of 2023, the market for AI hardware is dominated by GPUs.
|
Hardware for artificial intelligence : Lisp machines were developed in the late 1970s and early 1980s to make artificial intelligence programs written in the programming language Lisp run faster.
|
Hardware for artificial intelligence : Dataflow architecture processors used for AI serve various purposes, with varied implementations like the polymorphic dataflow Convolution Engine by Kinara (formerly Deep Vision), structure-driven dataflow by Hailo, and dataflow scheduling by Cerebras.
|
Machine learning in physics : Applying machine learning (ML) (including deep learning) methods to the study of quantum systems is an emergent area of physics research. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other examples include learning Hamiltonians, learning quantum phase transitions, and automatically generating new quantum experiments. ML is effective at processing large amounts of experimental or calculated data in order to characterize an unknown quantum system, making its application useful in contexts including quantum information theory, quantum technology development, and computational materials design. In this context, for example, it can be used as a tool to interpolate pre-calculated interatomic potentials, or directly solving the Schrödinger equation with a variational method.
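As a toy illustration of the variational idea mentioned above (a hand-rolled sketch, not code from any cited work): for the 1D harmonic oscillator with hbar = m = omega = 1, the Gaussian trial wavefunction exp(-alpha x^2) has energy expectation E(alpha) = alpha/2 + 1/(8 alpha), which can be minimized over the variational parameter; ML approaches replace such a hand-chosen ansatz with a learned one.

from scipy.optimize import minimize_scalar

def energy(alpha):
    # <H> for the trial wavefunction psi(x) = exp(-alpha * x^2),
    # 1D harmonic oscillator with hbar = m = omega = 1.
    return alpha / 2 + 1 / (8 * alpha)

res = minimize_scalar(energy, bounds=(1e-3, 10.0), method="bounded")
print(res.x, res.fun)  # alpha = 0.5, E = 0.5: the exact ground state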
|
Machine learning in physics : See also: Quantum computing, Quantum machine learning, Quantum annealing, Quantum neural network, HHL algorithm
|
Relation network : A relation network (RN) is an artificial neural network component with a structure that can reason about relations among objects. An example category of such relations is spatial relations (above, below, left, right, in front of, behind). RNs can infer relations, are data-efficient, and operate on a set of objects without regard to the objects' order.
|
Relation network : In June 2017, DeepMind announced the first relation network. It claimed that the technology had achieved "superhuman" performance on multiple question-answering problem sets.
|
Relation network : RNs constrain the functional form of a neural network to capture the common properties of relational reasoning. These properties are explicitly built into the system rather than established by learning, just as the capacity to reason about spatial, translation-invariant properties is explicitly part of convolutional neural networks (CNNs). The data to be considered can be presented as a simple list or as a directed graph whose nodes are objects and whose edges are the pairs of objects whose relationships are to be considered. The RN is a composite function: $RN(O) = f_\phi\left(\sum_{i,j} g_\theta(o_i, o_j, q)\right)$, where the input is a set of "objects" $O = \{o_1, o_2, \ldots, o_n\}$, $o_i \in \mathbb{R}^m$ is the i-th object, $f_\phi$ and $g_\theta$ are functions with parameters $\phi$ and $\theta$, respectively, and $q$ is the question. $f_\phi$ and $g_\theta$ are multilayer perceptrons, and their parameters $\phi$ and $\theta$ are learnable synaptic weights. RNs are differentiable. The output of $g_\theta$ is a "relation"; the role of $g_\theta$ is therefore to infer the ways in which two objects are related. Image (128×128 pixel) processing is done with a four-layer CNN. Outputs from the CNN are treated as the objects for relation analysis, without regard for what those "objects" explicitly represent. Questions were processed with a long short-term memory network.
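A minimal PyTorch sketch of the composite function above, with small MLPs standing in for g_theta and f_phi (layer sizes and names are illustrative, not taken from the paper):

import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    def __init__(self, obj_dim, q_dim, hidden=128, out_dim=10):
        super().__init__()
        # g_theta: produces a "relation" for each object pair, conditioned on q.
        self.g = nn.Sequential(
            nn.Linear(2 * obj_dim + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        # f_phi: maps the summed relations to the output.
        self.f = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))

    def forward(self, objects, q):
        # objects: (batch, n, obj_dim); q: (batch, q_dim)
        b, n, d = objects.shape
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        qq = q.unsqueeze(1).unsqueeze(1).expand(b, n, n, q.size(-1))
        pairs = torch.cat([oi, oj, qq], dim=-1)   # every (o_i, o_j, q) triple
        relations = self.g(pairs)                 # g_theta over all pairs
        return self.f(relations.sum(dim=(1, 2)))  # f_phi(sum_ij g_theta(...))

rn = RelationNetwork(obj_dim=24, q_dim=16)
print(rn(torch.randn(4, 8, 24), torch.randn(4, 16)).shape)  # (4, 10)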
|
Relation network : See also: Deep learning
|
Game theory : Game theory is the study of mathematical models of strategic interactions. It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science. Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non-zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers. Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by Theory of Games and Economic Behavior (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
|
Game theory : The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".) A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
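To make these elements concrete, the sketch below (an illustration added here, using the well-known prisoner's dilemma) encodes players, actions, and payoffs in normal form and checks each outcome for profitable unilateral deviations, i.e. whether it is a Nash equilibrium:

import itertools

# Actions: 0 = cooperate, 1 = defect. Payoffs: (row player, column player).
payoffs = {
    (0, 0): (-1, -1), (0, 1): (-3, 0),
    (1, 0): (0, -3),  (1, 1): (-2, -2),
}

def is_nash(r, c):
    # No player can profit by unilaterally deviating from (r, c).
    row_ok = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in (0, 1))
    col_ok = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in (0, 1))
    return row_ok and col_ok

for r, c in itertools.product((0, 1), repeat=2):
    print((r, c), is_nash(r, c))  # only (1, 1), mutual defection, is Nash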
|
Game theory : As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well. Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games. In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules". Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
|
Game theory : Based on the 1998 book by Sylvia Nasar, the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash. The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games". In the 1997 film of the same name, the character Carl Jenkins referred to his military intelligence assignment as being assigned to "games and theory". The 1964 film Dr. Strangelove satirizes game theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to subgame imperfect equilibria. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public. The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary ... to give yourself the minimum amount of failure". Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters. The 1974 novel Spy Story by Len Deighton explores elements of game theory in regard to cold war army exercises. The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory. Joker, the prime antagonist in the 2008 film The Dark Knight presents game theory concepts—notably the prisoner's dilemma in a scene where he asks passengers in two different ferries to bomb the other one to save their own. In the 2018 film Crazy Rich Asians, the female lead Rachel Chu is a professor of economics and game theory at New York University. At the beginning of the film she is seen in her NYU classroom playing a game of poker with her teaching assistant and wins the game by bluffing; then in the climax of the film, she plays a game of mahjong with her boyfriend's disapproving mother Eleanor, losing the game to Eleanor on purpose but winning her approval as a result. In the 2017 film Molly's Game, Brad, an inexperienced poker player, makes an irrational betting decision without realizing and causes his opponent Harlan to deviate from his Nash Equilibrium strategy, resulting in a significant loss when Harlan loses the hand.
|
Game theory : See also: Applied ethics – Practical application of moral considerations; Bandwidth-sharing game – Type of resource allocation game; Chainstore paradox – Game theory paradox; Collective intentionality – Intentionality that occurs when two or more individuals undertake a task together; Core (game theory); Glossary of game theory; Intra-household bargaining – Negotiations between members of a household to reach decisions; Kingmaker scenario – Endgame situation in game theory; Law and economics – Application of economic theory to analysis of legal systems; Mutual assured destruction – Doctrine of military strategy; Outline of artificial intelligence – Overview of and topical guide to artificial intelligence; Parrondo's paradox – Paradox in game theory; Precautionary principle – Risk management strategy; Quantum refereed game; Risk management – Identification, evaluation and control of risks; Self-confirming equilibrium; Tragedy of the commons – Self-interests causing depletion of a shared resource; Traveler's dilemma – Non-zero-sum game thought experiment; Wilson doctrine (economics) – Argument in economic theory; Compositional game theory. Lists: List of cognitive biases; List of emerging technologies; List of games in game theory
|
Game theory : Ben-David, S.; Borodin, A.; Karp, R.; Tardos, G.; Wigderson, A. (January 1994). "On the power of randomization in on-line algorithms". Algorithmica. 11 (1): 2–14. doi:10.1007/BF01294260. S2CID 26771869. Downs, Anthony (1957), An Economic theory of Democracy, New York: Harper Fisher, Sir Ronald Aylmer (1930). The Genetical Theory of Natural Selection. Clarendon Press. Gauthier, David (1986), Morals by agreement, Oxford University Press, ISBN 978-0-19-824992-4 Grim, Patrick; Kokalis, Trina; Alai-Tafti, Ali; Kilb, Nicholas; St Denis, Paul (2004), "Making meaning happen", Journal of Experimental & Theoretical Artificial Intelligence, 16 (4): 209–243, doi:10.1080/09528130412331294715, S2CID 5737352 Harper, David; Maynard Smith, John (2003), Animal signals, Oxford University Press, ISBN 978-0-19-852685-8 Howard, Nigel (1971), Paradoxes of Rationality: Games, Metagames, and Political Behavior, Cambridge, MA: The MIT Press, ISBN 978-0-262-58237-7 Kavka, Gregory S. (1986). Hobbesian Moral and Political Theory. Princeton University Press. ISBN 978-0-691-02765-4. Lewis, David (1969), Convention: A Philosophical Study, ISBN 978-0-631-23257-5 (2002 edition) Maynard Smith, John; Price, George R. (1973), "The logic of animal conflict", Nature, 246 (5427): 15–18, Bibcode:1973Natur.246...15S, doi:10.1038/246015a0, S2CID 4224989 Osborne, Martin J.; Rubinstein, Ariel (1994), A course in game theory, MIT Press, ISBN 978-0-262-65040-3. A modern introduction at the graduate level. Poundstone, William (1993). Prisoner's Dilemma (1st Anchor Books ed.). New York: Anchor. ISBN 0-385-41580-X. Quine, W.v.O (1967), "Truth by Convention", Philosophica Essays for A.N. Whitehead, Russel and Russel Publishers, ISBN 978-0-8462-0970-6 Quine, W.v.O (1960), "Carnap and Logical Truth", Synthese, 12 (4): 350–374, doi:10.1007/BF00485423, S2CID 46979744 Skyrms, Brian (1996), Evolution of the social contract, Cambridge University Press, ISBN 978-0-521-55583-8 Skyrms, Brian (2004), The stag hunt and the evolution of social structure, Cambridge University Press, ISBN 978-0-521-53392-8 Sober, Elliott; Wilson, David Sloan (1998), Unto others: the evolution and psychology of unselfish behavior, Harvard University Press, ISBN 978-0-674-93047-6 Webb, James N. (2007), Game theory: decisions, interaction and evolution, Undergraduate mathematics, Springer, ISBN 978-1-84628-423-6 Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes.
|
Game theory : James Miller (2015): Introductory Game Theory Videos. "Games, theory of", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Paul Walker: History of Game Theory Page. David Levine: Game Theory. Papers, Lecture Notes and much more stuff. Alvin Roth: "Game Theory and Experimental Economics page". Archived from the original on 15 August 2000. Retrieved 13 September 2003. — Comprehensive list of links to game theory information on the Web Adam Kalai: Game Theory and Computer Science — Lecture notes on Game Theory and Computer Science Mike Shor: GameTheory.net — Lecture notes, interactive illustrations and other information. Jim Ratliff's Graduate Course in Game Theory (lecture notes). Don Ross: Review Of Game Theory in the Stanford Encyclopedia of Philosophy. Bruno Verbeek and Christopher Morris: Game Theory and Ethics Elmer G. Wiens: Game Theory — Introduction, worked examples, play online two-person zero-sum games. Marek M. Kaminski: Game Theory and Politics Archived 20 October 2006 at the Wayback Machine — Syllabuses and lecture notes for game theory and political science. Websites on game theory and social interactions Kesten Green's Conflict Forecasting at the Wayback Machine (archived 11 April 2011) — See Papers for evidence on the accuracy of forecasts from game theory and other methods Archived 15 September 2019 at the Wayback Machine. McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory. Benjamin Polak: Open Course on Game Theory at Yale Archived 3 August 2010 at the Wayback Machine (videos of the course) Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch, (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA. Antonin Kucera: Stochastic Two-Player Games. Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4) – Many person game theory; What is Mathematical Game Theory? (#5) – Finale, summing up, and my own view
|
GPT-2 : Generative Pre-trained Transformer 2 (GPT-2) is a large language model by OpenAI and the second in their foundational series of GPT models. GPT-2 was pre-trained on a dataset of 8 million web pages. It was partially released in February 2019, followed by full release of the 1.5-billion-parameter model on November 5, 2019. GPT-2 was created as a "direct scale-up" of GPT-1 with a ten-fold increase in both its parameter count and the size of its training dataset. It is a general-purpose learner and its ability to perform the various tasks was a consequence of its general ability to accurately predict the next item in a sequence, which enabled it to translate texts, answer questions about a topic from a text, summarize passages from a larger text, and generate text output on a level sometimes indistinguishable from that of humans; however, it could become repetitive or nonsensical when generating long passages. It was superseded by the GPT-3 and GPT-4 models, which are no longer open source. GPT-2 has, like its predecessor GPT-1 and its successors GPT-3 and GPT-4, a generative pre-trained transformer architecture, implementing a deep neural network, specifically a transformer model, which uses attention instead of older recurrence- and convolution-based architectures. Attention mechanisms allow the model to selectively focus on segments of input text it predicts to be the most relevant. This model allows for greatly increased parallelization, and outperforms previous benchmarks for RNN/CNN/LSTM-based models.
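For readers who want to try the released weights, a short sketch using the third-party Hugging Face transformers library (not part of OpenAI's original release) loads the public gpt2 checkpoint and samples a continuation:

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

ids = tokenizer("The discovery surprised researchers because", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=True, top_k=50,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))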
|
GPT-2 : Since the transformer architecture enabled massive parallelization, GPT models could be trained on larger corpora than previous NLP (natural language processing) models. While the GPT-1 model demonstrated that the approach was viable, GPT-2 would further explore the emergent properties of networks trained on extremely large corpora. CommonCrawl, a large corpus produced by web crawling and previously used in training NLP systems, was considered due to its large size, but was rejected after further review revealed large amounts of unintelligible content. Instead, OpenAI developed a new corpus, known as WebText; rather than scraping content indiscriminately from the World Wide Web, WebText was generated by scraping only pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The corpus was subsequently cleaned; HTML documents were parsed into plain text, duplicate pages were eliminated, and Wikipedia pages were removed (since their presence in many other datasets could have induced overfitting). While the cost of training GPT-2 is known to have been $256 per hour, the number of hours it took to complete training is unknown; therefore, the overall training cost cannot be estimated accurately. However, comparable large language models using transformer architectures have had their costs documented in more detail; the training processes for BERT and XLNet consumed, respectively, $6,912 and $245,000 of resources.
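In outline, the WebText cleaning steps described above amount to a filter-and-deduplicate pass; the sketch below is a loose simplification (the input schema and the hashing-based deduplication are assumptions of this text, not details published by OpenAI):

import hashlib

def clean_corpus(pages):
    # pages: iterable of (url, upvotes, plain_text) tuples (illustrative schema).
    seen = set()
    for url, upvotes, text in pages:
        if upvotes < 3:                # keep only links with >= 3 upvotes
            continue
        if "wikipedia.org" in url:     # drop Wikipedia pages to limit overlap
            continue
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen:             # drop exact duplicate documents
            continue
        seen.add(digest)
        yield text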
|
GPT-2 : GPT-2 was first announced on 14 February 2019. A February 2019 article in The Verge by James Vincent said that, while "[the] writing it produces is usually easily identifiable as non-human", it remained "one of the most exciting examples yet" of language generation programs: Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt. The Guardian described this output as "plausible newspaper prose"; Kelsey Piper of Vox said "one of the coolest AI systems I’ve ever seen may also be the one that will kick me out of my job". GPT-2's flexibility was described as "impressive" by The Verge; specifically, its ability to translate text between languages, summarize long articles, and answer trivia questions were noted. A study by the University of Amsterdam employing a modified Turing test found that at least in some scenarios, participants were unable to distinguish poems generated by GPT-2 from those written by humans.
|
GPT-2 : While GPT-2's ability to generate plausible passages of natural language text was generally remarked on positively, its shortcomings were noted as well, especially when generating texts longer than a couple of paragraphs; Vox said "the prose is pretty rough, there’s the occasional non-sequitur, and the articles get less coherent the longer they get". The Verge similarly noted that longer samples of GPT-2 writing tended to "stray off topic" and lack overall coherence; The Register opined that "a human reading it should, after a short while, realize something's up", and noted that "GPT-2 doesn't answer questions as well as other systems that rely on algorithms to extract and retrieve information." GPT-2 deployment is resource-intensive; the full version of the model is larger than five gigabytes, making it difficult to embed locally into applications, and consumes large amounts of RAM. In addition, performing a single prediction "can occupy a CPU at 100% utilization for several minutes", and even with GPU processing, "a single prediction can take seconds". To alleviate these issues, the company Hugging Face created DistilGPT2, using knowledge distillation to produce a smaller model that "scores a few points lower on some quality benchmarks", but is "33% smaller and twice as fast".
|
GPT-2 : Even before the release of the full version, GPT-2 was used for a variety of applications and services, as well as for entertainment. In June 2019, a subreddit named r/SubSimulatorGPT2 was created in which a variety of GPT-2 instances trained on different subreddits made posts and replied to each other's comments, creating a situation where one could observe "an AI personification of r/Bitcoin argue with the machine learning-derived spirit of r/ShittyFoodPorn"; by July of that year, a GPT-2-based software program released to autocomplete lines of code in a variety of programming languages was described by users as a "game-changer". In 2019, AI Dungeon was launched, which used GPT-2 to generate dynamic text adventures based on user input. AI Dungeon now offers access to the largest release of GPT-3 via its API as an optional paid upgrade, while the free version of the site uses the second-largest release of GPT-3. Latitude, the company formed around AI Dungeon, raised $3.3 million in seed funding in 2021. Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models. In February 2021, a crisis center for troubled teens announced that it would begin using a GPT-2-derived chatbot to help train counselors by allowing them to have conversations with simulated teens (this use was purely for internal purposes, and did not involve having GPT-2 communicate with the teens themselves). On May 9, 2023, OpenAI released a mapped version of GPT-2; OpenAI used its successor model, GPT-4, to map each neuron of GPT-2 and determine its function.
|
GPT-2 : GPT-2 became capable of performing a variety of tasks beyond simple text production due to the breadth of its dataset and technique: answering questions, summarizing, and even translating between languages in a variety of specific domains, without being instructed in anything beyond how to predict the next word in a sequence. One example of generalized learning is GPT-2's ability to perform machine translation between French and English, for which GPT-2's performance was assessed using WMT-14 translation tasks. GPT-2's training corpus included virtually no French text; non-English text was deliberately removed while cleaning the dataset prior to training, and as a consequence, only 10 MB of French text out of the remaining 40,000 MB was available for the model to learn from (mostly from foreign-language quotations in English posts and articles). Despite this, GPT-2 achieved 5 BLEU on the WMT-14 English-to-French test set (slightly below the score of a translation via word-for-word substitution). It was also able to outperform several contemporary (2017) unsupervised machine translation baselines on the French-to-English test set, where GPT-2 achieved 11.5 BLEU. This remained below the highest-performing contemporary unsupervised approach (2019), which had achieved 33.5 BLEU. However, other models used large amounts of French text to achieve these results; GPT-2 was estimated to have used a monolingual French corpus approximately 1/500 the size of comparable approaches. GPT-2 was followed by the 175-billion-parameter GPT-3, revealed to the public in 2020 (whose source code has never been made available). Access to GPT-3 is provided exclusively through APIs offered by OpenAI and Microsoft. GPT-3 was in turn followed by GPT-4.
|
Microsoft Copilot : Microsoft Copilot (or simply Copilot) is a generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana. The service was introduced in February 2023 under the name Bing Chat, as a built-in feature for Microsoft Bing and Microsoft Edge. Over the course of 2023, Microsoft began to unify the Copilot branding across its various chatbot products, cementing the "copilot" analogy. At its Build 2023 conference, Microsoft announced its plans to integrate Copilot into Windows 11, allowing users to access it directly through the taskbar. In January 2024, a dedicated Copilot key was announced for Windows keyboards. Copilot utilizes the Microsoft Prometheus model, built upon OpenAI's GPT-4 foundational large language model, which in turn has been fine-tuned using both supervised and reinforcement learning techniques. Copilot's conversational interface style resembles that of ChatGPT. The chatbot is able to cite sources, create poems, generate songs, and use numerous languages and dialects. Microsoft operates Copilot on a freemium model. Users on its free tier can access most features, while priority access to newer features, including custom chatbot creation, is provided to paid subscribers under the "Microsoft Copilot Pro" paid subscription service. Several default chatbots are available in the free version of Microsoft Copilot, including the standard Copilot chatbot as well as Microsoft Designer, which is oriented towards using its Image Creator to generate images based on text prompts.
|
Microsoft Copilot : In 2019, Microsoft partnered with OpenAI and began investing billions of dollars into the organization. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. In September 2020, Microsoft announced that it had licensed OpenAI's GPT-3 exclusively. Others can still receive output from its public API, but Microsoft has exclusive access to the underlying model. In November 2022, OpenAI launched ChatGPT, a chatbot which was based on GPT-3.5. ChatGPT gained worldwide attention following its release, becoming a viral Internet sensation. On January 23, 2023, Microsoft announced a multi-year US$10 billion investment in OpenAI. On February 6, Google announced Bard (later rebranded as Gemini), a ChatGPT-like chatbot service, fearing that ChatGPT could threaten Google's place as a go-to source for information. Multiple media outlets and financial analysts described Google as "rushing" Bard's announcement to preempt rival Microsoft's planned February 7 event unveiling Copilot, as well as to avoid playing "catch-up" to Microsoft.
|
Microsoft Copilot : Tom Warren, a senior editor at The Verge, has noted the conceptual similarity of Copilot to other Microsoft assistant features like Cortana and Clippy. Warren also believes that large language models, as they develop further, could change how users work and collaborate. Rowan Curran, an analyst at Forrester, states that the integration of AI into productivity software may lead to improvements in user experience. Concerns over the speed of Microsoft's recent release of AI-powered products and investments have led to questions surrounding ethical responsibilities in the testing of such products. One ethical concern the public has vocalized is that GPT-4 and similar large language models may reinforce racial or gender bias. Individuals, including Tom Warren, have also voiced concerns about Copilot after witnessing the chatbot produce several instances of hallucination. In June 2024, Copilot was found to have repeated misinformation about the 2024 United States presidential debates. In response to these concerns, Jon Friedman, the Corporate Vice President of Design and Research at Microsoft, stated that Microsoft was "applying [the] learning" from experience with Bing to "mitigate [the] risks" of Copilot. Microsoft claimed that it was gathering a team of researchers and engineers to identify and alleviate any potential negative impacts. The stated aim was to achieve this through the refinement of training data, blocking queries about sensitive topics, and limiting harmful information. Microsoft stated that it intended to employ InterpretML and Fairlearn to detect and rectify data bias, provide links to its sources, and state any applicable constraints.
|
Microsoft Copilot : See also: Tabnine – Coding assistant; Tay (chatbot) – Chatbot developed by Microsoft; Zo (chatbot) – Chatbot developed by Microsoft
|
Microsoft Copilot : External links: Official website; Media related to Microsoft Copilot at Wikimedia Commons; Microsoft Copilot Terms of Use (archived 2024-10-01 at the Wayback Machine, Archive Today, Megalodon, Ghostarchive; past versions)
|
Contrastive Hebbian learning : Contrastive Hebbian learning is a biologically plausible form of Hebbian learning. It is based on the contrastive divergence algorithm, which has been used to train a variety of energy-based latent variable models. In 2003, contrastive Hebbian learning was shown to be equivalent in power to the backpropagation algorithms commonly used in machine learning.
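A minimal sketch of the contrastive Hebbian weight update for one layer, assuming the unit activities of a clamped ("plus") phase and a free ("minus") phase are already available (the network dynamics that produce them are omitted):

import numpy as np

def chl_update(W, x_plus, y_plus, x_minus, y_minus, lr=0.01):
    # Strengthen clamped-phase correlations, weaken free-phase ones:
    #   dW = lr * (y+ x+^T - y- x-^T)
    return W + lr * (np.outer(y_plus, x_plus) - np.outer(y_minus, x_minus))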
|
Contrastive Hebbian learning : See also: Oja's rule, Generalized Hebbian algorithm
|
Recurrent neural network : Recurrent neural networks (RNNs) are a class of artificial neural networks designed for processing sequential data, such as text, speech, and time series, where the order of elements is important. Unlike feedforward neural networks, which process inputs independently, RNNs utilize recurrent connections, where the output of a neuron at one time step is fed back as input to the network at the next time step. This enables RNNs to capture temporal dependencies and patterns within sequences. The fundamental building block of RNNs is the recurrent unit, which maintains a hidden state—a form of memory that is updated at each time step based on the current input and the previous hidden state. This feedback mechanism allows the network to learn from past inputs and incorporate that knowledge into its current processing. RNNs have been successfully applied to tasks such as unsegmented, connected handwriting recognition, speech recognition, natural language processing, and neural machine translation. However, traditional RNNs suffer from the vanishing gradient problem, which limits their ability to learn long-range dependencies. This issue was addressed by the development of the long short-term memory (LSTM) architecture in 1997, making it the standard RNN variant for handling long-term dependencies. Later, Gated Recurrent Units (GRUs) were introduced as a more computationally efficient alternative. In recent years, Transformers, which rely on self-attention mechanisms instead of recurrence, have become the dominant architecture for many sequence-processing tasks, particularly in natural language processing, due to their superior handling of long-range dependencies and greater parallelizability. Nevertheless, RNNs remain relevant for applications where computational efficiency, real-time processing, or the inherent sequential nature of data is crucial.
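The recurrent update described above is often written as h_t = tanh(W_x x_t + W_h h_{t-1} + b); a minimal NumPy sketch of this loop (names and sizes illustrative):

import numpy as np

def rnn_forward(xs, W_x, W_h, b):
    # xs: (T, input_dim). The hidden state h carries memory across steps.
    h = np.zeros(W_h.shape[0])
    states = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h + b)  # output fed back in at the next step
        states.append(h)
    return np.stack(states)

rng = np.random.default_rng(0)
hs = rnn_forward(rng.normal(size=(5, 3)),        # sequence of 5 inputs
                 rng.normal(size=(4, 3)),        # W_x
                 rng.normal(size=(4, 4)) * 0.1,  # W_h
                 np.zeros(4))
print(hs.shape)  # (5, 4): one hidden state per time step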
|
Recurrent neural network : An RNN-based model can be factored into two parts: configuration and architecture. Multiple RNNs can be combined in a data flow, and the data flow itself is the configuration. Each RNN itself may have any architecture, including LSTM, GRU, etc.
|
Recurrent neural network : Modern libraries provide runtime-optimized implementations of the above functionality, or allow the slow loop to be sped up through just-in-time compilation. These include: Apache Singa; Caffe (created by the Berkeley Vision and Learning Center (BVLC), supports both CPU and GPU, developed in C++, with Python and MATLAB wrappers); Chainer (fully in Python, production support for CPU, GPU, distributed training); Deeplearning4j (deep learning in Java and Scala on multi-GPU-enabled Spark); Flux (includes interfaces for RNNs, including GRUs and LSTMs, written in Julia); Keras (high-level API providing a wrapper to many other deep learning libraries); Microsoft Cognitive Toolkit; MXNet (an open-source deep learning framework used to train and deploy deep neural networks); PyTorch (tensors and dynamic neural networks in Python with GPU acceleration); TensorFlow (Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary TPU, and mobile); Theano (a deep-learning library for Python with an API largely compatible with NumPy); Torch (a scientific computing framework with support for machine learning algorithms, written in C and Lua).
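For example, in PyTorch (one of the libraries listed above) a stacked LSTM is a single module call rather than an explicit Python loop:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)        # (seq_len, batch, input_size)
output, (h_n, c_n) = lstm(x)     # the recurrent loop runs in optimized native code
print(output.shape)              # torch.Size([5, 3, 20])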
|
Recurrent neural network : Applications of recurrent neural networks include: machine translation, robot control, time series prediction, speech recognition, speech synthesis, brain–computer interfaces, time series anomaly detection, text-to-video models, rhythm learning, music composition, grammar learning, handwriting recognition, human action recognition, protein homology detection, predicting subcellular localization of proteins, several prediction tasks in the area of business process management, prediction in medical care pathways, and predictions of fusion plasma disruptions in reactors (the Fusion Recurrent Neural Network (FRNN) code).
|
Recurrent neural network : Mandic, Danilo P.; Chambers, Jonathon A. (2001). Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability. Wiley. ISBN 978-0-471-49517-8. Grossberg, Stephen (2013-02-22). "Recurrent Neural Networks". Scholarpedia. 8 (2): 1888. Bibcode:2013SchpJ...8.1888G. doi:10.4249/scholarpedia.1888. ISSN 1941-6016. Recurrent Neural Networks. List of RNN papers by Jürgen Schmidhuber's group at Dalle Molle Institute for Artificial Intelligence Research.
|
Statistics : Statistics (from German: Statistik, orig. "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments. When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences made using mathematical statistics employ the framework of probability theory, which deals with the analysis of random phenomena. A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is rejected when it is in fact true, giving a "false positive") and Type II errors (null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis. Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
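As a small worked example of such a test (an illustration with synthetic data, using SciPy), the null hypothesis of equal means is tested for two samples; rejecting a true null would be a Type I error, and failing to reject a false null a Type II error:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=50)   # sample consistent with the null
b = rng.normal(loc=0.5, scale=1.0, size=50)   # sample with a shifted mean

t_stat, p_value = stats.ttest_ind(a, b)       # two-sample t-test
print(p_value, p_value < 0.05)                # reject H0 at the 5% level?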
|
Statistics : "Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science.Statistics is the discipline that deals with data, facts and figures with which meaningful information is inferred. Data may represent a numerical value, in form of quantitative data, or a label, as with qualitative data. Data may be collected, presented and summarised, in one of two methods called descriptive statistics. Two elementary summaries of data, singularly called a statistic, are the mean and dispersion. Whereas inferential statistics interprets data from a population sample to induce statements and predictions about a population. Statistics is regarded as a body of science or a branch of mathematics. It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty. This arises from the ways to cope with measurement and sampling error as well as dealing with uncertanties in modelling. Although probability and statistics was once paired together as a single subject, they are conceptually distinct from one another. The former is based on deducing answers to specific situations from a general theory of probability, meanwhile statistics induces statements about a population based on a data set. Statistics serves to bridge the gap between probability and applied mathematical fields. Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty. Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification. Mathematical statistics is covered in the range 276-280 of subclass QA (science > mathematics) in the Library of Congress Classification. The word statistics ultimately comes from the Latin word Status, meaning "situation" or "condition" in society, which in late Latin adopted the meaning "state". Derived from this, political scientist Gottfried Achenwall, coined the German word statistik (a summary of how things stand). In 1770, the term entered the English language through German and referred to the study of political arrangements. The term gained its modern meaning in the 1790s in John Sinclair's works. In modern German, the term statistik is synonymous with mathematical statistics. The term statistic, in singular form, is used to describe a function that returns its value of the same name.
|
Statistics : Formal discussions on inference date back to the mathematicians and cryptographers of the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels. Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis. Although the term statistic was introduced by the Italian scholar Girolamo Ghilini in 1589 with reference to a collection of facts and information about a state, it was the German Gottfried Achenwall in 1749 who started using the term as a collection of quantitative information, in the modern use for this science. The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt. Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences. The mathematical foundations of statistics developed from discussions concerning games of chance among mathematicians such as Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens. Although the idea of probability was already examined in ancient and medieval law and philosophy (such as the work of Juan Caramuel), probability theory as a mathematical discipline only took shape at the very end of the 17th century, particularly in Jacob Bernoulli's posthumous work Ars Conjectandi. This was the first book where the realm of games of chance and the realm of the probable (which concerned opinion, evidence, and argument) were combined and submitted to mathematical analysis. The method of least squares was first described by Adrien-Marie Legendre in 1805, though Carl Friedrich Gauss presumably made use of it a decade earlier in 1795. The modern field of statistics emerged in the late 19th and early 20th century in three stages. The first wave, at the turn of the century, was led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts of standard deviation, correlation, regression analysis and the application of these methods to the study of the variety of human characteristics—height, weight and eyelash length among others. Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment, the method of moments for the fitting of distributions to samples and the Pearson distribution, among many other things. Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometry), and the latter founded the world's first university statistics department at University College London. 
The second wave of the 1910s and 20s was initiated by William Sealy Gosset, and reached its culmination in the insights of Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his 1918 seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance (which was the first to use the statistical term, variance), his classic 1925 work Statistical Methods for Research Workers and his 1935 The Design of Experiments, where he developed rigorous design of experiments models. He originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminator and Fisher information. He also coined the term null hypothesis during the Lady tasting tea experiment, which "is never proved or established, but is possibly disproved, in the course of experimentation". In his 1930 book The Genetical Theory of Natural Selection, he applied statistics to various biological concepts such as Fisher's principle (which A. W. F. Edwards called "probably the most celebrated argument in evolutionary biology") and Fisherian runaway, a concept in sexual selection about a positive feedback runaway effect found in evolution. The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test and confidence intervals. Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling. Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations and has also made possible new methods that are impractical to perform manually. Statistics continues to be an area of active research, for example on the problem of how to analyze big data.
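The quantities introduced in this period are one-liners in modern software; for instance, a short NumPy sketch (illustrative data, not from any study cited here) computing Pearson's product-moment correlation and a least-squares line:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

r = np.corrcoef(x, y)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(x, y, 1)  # least-squares fit (Legendre/Gauss)
print(r, slope, intercept)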
|
Statistics : Statistical techniques are used in a wide range of types of scientific and social research, including biostatistics, computational biology, computational sociology, network biology, social science, sociology and social research. Some fields of inquiry use applied statistics so extensively that they have specialized terminology, and particular types of statistical analysis have likewise developed their own specialised terminology and methodology. Statistics is also a key tool in business and manufacturing. It is used to understand variability in measurement systems, to control processes (as in statistical process control, or SPC), to summarize data, and to make data-driven decisions.
|
Statistics : Misuse of statistics can produce subtle but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics. Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. The statistical significance of a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to as statistical literacy. There is a general perception that statistical knowledge is all-too-frequently intentionally misused by finding ways to interpret only the data that are favorable to the presenter. A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the book How to Lie with Statistics, by Darrell Huff, outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. Warne, Lazo, Ramos, and Ritter (2012)). Ways to avoid misuse of statistics include using proper diagrams and avoiding bias. Misuse can occur when conclusions are overgeneralized and claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias. Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs. Most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not well represented. To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism." To assist in the understanding of statistics, Huff proposed a series of questions to be asked in each case: Who says so? (Does he/she have an axe to grind?) How does he/she know? (Does he/she have the resources to know the facts?) What's missing? (Does he/she give us a complete picture?) Did someone change the subject? (Does he/she offer us the right answer to the wrong problem?) Does it make sense? (Is his/her conclusion logical and consistent with what we already know?)
|
Statistics : Lydia Denworth, "A Significant Problem: Standard scientific methods are under fire. Will anything change?", Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. "The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining "significant" results". (p. 63.) Barbara Illowsky; Susan Dean (2014). Introductory Statistics. OpenStax CNX. ISBN 978-1938168208. Stockburger, David W. "Introductory Statistics: Concepts, Models, and Applications". Missouri State University (3rd Web ed.). Archived from the original on 28 May 2020. OpenIntro Statistics Archived 2019-06-16 at the Wayback Machine, 3rd edition by Diez, Barr, and Cetinkaya-Rundel Stephen Jones, 2010. Statistics in Psychology: Explanations without Equations. Palgrave Macmillan. ISBN 978-1137282392. Cohen, J (1990). "Things I have learned (so far)" (PDF). American Psychologist. 45 (12): 1304–1312. doi:10.1037/0003-066x.45.12.1304. S2CID 7180431. Archived from the original (PDF) on 2017-10-18. Gigerenzer, G (2004). "Mindless statistics". Journal of Socio-Economics. 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033. Ioannidis, J.P.A. (2005). "Why most published research findings are false". PLOS Medicine. 2 (4): 696–701. doi:10.1371/journal.pmed.0040168. PMC 1855693. PMID 17456002.
|
Statistics : (Electronic Version): TIBCO Software Inc. (2020). Data Science Textbook. Online Statistics Education: An Interactive Multimedia Course of Study. Developed by Rice University (Lead Developer), University of Houston Clear Lake, Tufts University, and National Science Foundation. UCLA Statistical Computing Resources (archived 17 July 2006) Philosophy of Statistics from the Stanford Encyclopedia of Philosophy
|
Knowledge-based systems : A knowledge-based system (KBS) is a computer program that reasons and uses a knowledge base to solve complex problems. Knowledge-based systems were the focus of early artificial intelligence researchers in the 1980s. The term can refer to a broad range of systems. However, all knowledge-based systems have two defining components: an attempt to represent knowledge explicitly, called a knowledge base, and a reasoning system that allows them to derive new knowledge, known as an inference engine.
|
Knowledge-based systems : The knowledge base contains domain-specific facts and rules about a problem domain (rather than knowledge implicitly embedded in procedural code, as in a conventional computer program). In addition, the knowledge may be structured by means of a subsumption ontology, frames, conceptual graph, or logical assertions. The inference engine uses general-purpose reasoning methods to infer new knowledge and to solve problems in the problem domain. Most commonly, it employs forward chaining or backward chaining. Other approaches include the use of automated theorem proving, logic programming, blackboard systems, and term rewriting systems such as Constraint Handling Rules (CHR). These more formal approaches are covered in detail in the Wikipedia article on knowledge representation and reasoning.
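A minimal sketch of forward chaining over such a knowledge base, assuming a deliberately simple representation of rules as (premises, conclusion) pairs (real inference engines are far more elaborate):

def forward_chain(facts, rules):
    # Repeatedly fire rules whose premises all hold until nothing new is derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # the inference engine derives new knowledge
                changed = True
    return facts

rules = [({"penguin"}, "bird"), ({"bird"}, "has_feathers")]
print(forward_chain({"penguin"}, rules))  # {'penguin', 'bird', 'has_feathers'}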
|
Knowledge-based systems : See also: Knowledge representation and reasoning, Knowledge modeling, Knowledge engine, Information retrieval, Reasoning system, Case-based reasoning, Conceptual graph, Neural networks
|
Knowledge-based systems : Rajendra, Akerkar; Sajja, Priti (2009). Knowledge-Based Systems. Jones & Bartlett Learning. ISBN 9780763776473.
|
OpenAI : OpenAI, Inc. is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. It aims to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI. The organization consists of the non-profit OpenAI, Inc., registered in Delaware, and its for-profit subsidiary introduced in 2019, OpenAI Global, LLC. Its stated mission is to ensure that AGI "benefits all of humanity". Microsoft owns roughly 49% of OpenAI's equity, having invested US$13 billion. It also provides computing resources to OpenAI through its cloud platform, Microsoft Azure. In 2023 and 2024, OpenAI faced multiple lawsuits for alleged copyright infringement against authors and media companies whose work was used to train some of OpenAI's products. In November 2023, OpenAI's board removed Sam Altman as CEO, citing a lack of confidence in him, but reinstated him five days later following a reconstruction of the board. Throughout 2024, roughly half of then-employed AI safety researchers left OpenAI, citing the company's prominent role in an industry-wide problem.
|
OpenAI : Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI gains the ability to redesign itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Co-founder Musk characterizes AI as humanity's "biggest existential threat". Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence. OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly". Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach." OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible." Co-chair Sam Altman expects the decades-long project to surpass human intelligence. Vishal Sikka, former CEO of Infosys, stated that an "openness", where the endeavor would "produce results generally in the greater interest of humanity", was a fundamental requirement for his support; and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work". Cade Metz of Wired suggested that corporations such as Amazon might be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook, which own enormous supplies of proprietary data. Altman stated that Y Combinator companies would share their data with OpenAI.
|
OpenAI : In the early years before his 2018 departure, Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." He acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; but nonetheless, that the best defense was "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower." Musk and Altman's counterintuitive strategy—that of trying to reduce the potential harm of AI by giving everyone access to it—is controversial among those concerned with existential risk from AI. Philosopher Nick Bostrom said, "If you have a button that could do bad things to the world, you don't want to give it to everyone." During a 2016 conversation about technological singularity, Altman said, "We don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board". Greg Brockman stated, "Our goal right now ... is to do the best thing there is to do. It's a little vague." Conversely, OpenAI's initial decision to withhold GPT-2 around 2019, due to a wish to "err on the side of caution" in the presence of potential misuse, was criticized by advocates of openness. Delip Rao, an expert in text generation, stated, "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous." Other critics argued that open publication was necessary to replicate the research and to create countermeasures. More recently, in 2022, OpenAI published its approach to the alignment problem, anticipating that aligning AGI to human values would likely be harder than aligning current AI systems: "Unaligned AGI could pose substantial risks to humanity[,] and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together". They stated that they intended to explore how to better use human feedback to train AI systems, and how to safely use AI to incrementally automate alignment research. In 2024, following the temporary removal of Sam Altman and his return, many employees gradually left OpenAI, including most of the original leadership team and a significant number of AI safety researchers. OpenAI also planned a restructuring to operate as a for-profit company. This restructuring could grant Altman a stake in the company.
|
OpenAI : Anthropic – American artificial intelligence research company Center for AI Safety – US-based AI safety research center Future of Humanity Institute – Defunct Oxford interdisciplinary research centre Future of Life Institute – International nonprofit research institute Google DeepMind – Artificial intelligence research laboratory Machine Intelligence Research Institute – Nonprofit organization researching AI safety
|
OpenAI : Official website
|
Ontology learning : Ontology learning (ontology extraction, ontology augmentation, ontology generation, or ontology acquisition) is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Typically, the process starts by extracting terms and concepts or noun phrases from plain text using linguistic processors such as part-of-speech tagging and phrase chunking. Then statistical or symbolic techniques are used to extract relation signatures, often based on pattern-based or definition-based hypernym extraction techniques.
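A common starting point for pattern-based hypernym extraction is lexico-syntactic (Hearst) patterns such as "X such as Y", which suggest that Y is a kind of X. The following is a minimal sketch of this idea; the single regular expression and example sentence are illustrative only, and real systems apply part-of-speech tagging and chunking first:

    import re

    # Toy Hearst-pattern matcher: "X such as Y" suggests Y is a hyponym of X.
    PATTERN = re.compile(r"(\w+)\s+such as\s+(\w+)", re.IGNORECASE)

    def extract_hypernym_pairs(text):
        # returns (hyponym, hypernym) pairs found in the text
        return [(hypo.lower(), hyper.lower()) for hyper, hypo in PATTERN.findall(text)]

    print(extract_hypernym_pairs("Diseases such as influenza spread quickly."))
    # [('influenza', 'diseases')]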
|
Ontology learning : Ontology learning (OL) is used to (semi-)automatically extract whole ontologies from natural language text. The process is usually split into the following eight tasks, not all of which are necessarily applied in every ontology learning system.
|
Ontology learning : DOG4DAG (Dresden Ontology Generator for Directed Acyclic Graphs) is an ontology generation plugin for Protégé 4.1 and OBO-Edit 2.1. It allows for term generation, sibling generation, definition generation, and relationship induction. Integrated into Protégé 4.1 and OBO-Edit 2.1, DOG4DAG allows ontology extension for all common ontology formats (e.g., OWL and OBO). Its lookup services are limited largely to EBI and BioPortal extensions.
|
Ontology learning : Automatic taxonomy construction Computational linguistics Domain ontology Information extraction Natural language understanding Semantic Web Text mining
|
Ontology learning : P. Buitelaar, P. Cimiano (Eds.). Ontology Learning and Population: Bridging the Gap between Text and Knowledge, Frontiers in Artificial Intelligence and Applications, IOS Press, 2008. P. Buitelaar, P. Cimiano, and B. Magnini (Eds.). Ontology Learning from Text: Methods, Evaluation and Applications, Frontiers in Artificial Intelligence and Applications, IOS Press, 2005. Wong, W. (2009), "Learning Lightweight Ontologies from Text across Different Domains using the Web as Background Knowledge". Doctor of Philosophy thesis, University of Western Australia. Wong, W., Liu, W. & Bennamoun, M. (2012), "Ontology Learning from Text: A Look back and into the Future". ACM Computing Surveys, Volume 44, Issue 4, Pages 20:1-20:36. Thomas Wächter, Götz Fabian, Michael Schroeder: DOG4DAG: semi-automated ontology generation in OBO-Edit and Protégé. SWAT4LS London, 2011. doi:10.1145/2166896.2166926 == References ==
|
European Neural Network Society : The European Neural Network Society (ENNS) is an association of scientists, engineers, students, and others seeking to learn about and advance understanding of artificial neural networks. Specific areas of interest in this scientific field include modelling of behavioral and brain processes, development of neural algorithms, and applying neural modelling concepts to problems relevant in many different domains. Erkki Oja and John G. Taylor are past ENNS presidents and honorary executive board members. As of 2018, its president is Věra Kůrková. Every year since 1991, ENNS has organized the International Conference on Artificial Neural Networks (ICANN). The history of the conference and links to past editions are available at the ENNS website. This is one of the oldest and best-established conferences on the subject, with proceedings published in Springer's Lecture Notes in Computer Science; see the index in the DBLP bibliography database. As a non-profit organization, ENNS promotes scientific activities at the European and national levels in cooperation with national organizations that focus on neural networks. Every year, many stipends to attend the ICANN conference are awarded. ENNS also sponsors other students and has given awards and prizes at co-sponsored events (schools, workshops, conferences, and competitions).
|
Computational humor : Computational humor is a branch of computational linguistics and artificial intelligence which uses computers in humor research. It is a relatively new area, with the first dedicated conference organized in 1996.
|
Computational humor : A statistical machine learning algorithm to detect whether a sentence contained a "That's what she said" double entendre was developed by Kiddon and Brun (2011). There is an open-source Python implementation of Kiddon & Brun's TWSS system. A program to recognize knock-knock jokes was reported by Taylor and Mazlack. This kind of research is important in analysis of human–computer interaction. An application of machine learning techniques for the distinguishing of joke texts from non-jokes was described by Mihalcea and Strapparava (2006). Takizawa et al. (1996) reported on a heuristic program for detecting puns in the Japanese language.
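As a rough illustration of how such classifiers are built, the sketch below trains a bag-of-words naive Bayes model to separate jokes from non-jokes, in the spirit of Mihalcea and Strapparava's approach; it is not the TWSS implementation mentioned above, and the tiny inline dataset is invented for illustration:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Invented four-sentence training set: 1 = joke, 0 = non-joke.
    texts = [
        "Why did the chicken cross the road? To get to the other side.",
        "The meeting is scheduled for 3 pm on Thursday.",
        "I told my computer a joke, but it didn't get the byte.",
        "Quarterly revenue increased by four percent.",
    ]
    labels = [1, 0, 1, 0]

    # Word and bigram counts feed a naive Bayes classifier.
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    model.fit(texts, labels)
    print(model.predict(["Why did the server cross the rack?"]))
    # an output of [1] would indicate "joke"

Real systems of this kind use far larger corpora and richer features (alliteration, antonymy, adult-slang lexicons) than this toy setup.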
|
Computational humor : A possible application for assistance in language acquisition is described in the section "Pun generation". Another envisioned use of joke generators is in settings that require a steady supply of jokes, where quantity is more important than quality. Another obvious, yet remote, direction is automated joke appreciation. It is known that humans interact with computers in ways similar to how they interact with other humans, in ways that may be described in terms of personality, politeness, flattery, and in-group favoritism. Therefore, the role of humor in human–computer interaction is being investigated. In particular, humor generation in user interfaces to ease communication with computers has been suggested. Craig McDonough implemented the Mnemonic Sentence Generator, which converts passwords into humorous sentences. Based on the incongruity theory of humor, it is suggested that the resulting meaningless but funny sentences are easier to remember. For example, the password AjQA3Jtv is converted into "Arafat joined Quayle's Ant, while TARAR Jeopardized thurmond's vase," an example chosen by combining politicians' names with verbs and common nouns.
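A minimal sketch of the underlying idea, assuming a simple character-to-word mapping; the word lists and digit handling below are invented, not McDonough's actual lexicon or grammar:

    import random

    # Each password character picks a word beginning with that character;
    # digits are spelled out. Lexicon entries are invented examples.
    LEXICON = {
        "a": ["ant", "Arafat's", "apple"],
        "j": ["joined", "jeopardized", "jolly"],
        "q": ["Quayle's", "queen"],
        "t": ["Thurmond's", "table"],
        "v": ["vase", "violin"],
    }
    DIGIT_WORDS = {"3": "three"}

    def mnemonic(password):
        words = []
        for ch in password.lower():
            if ch in DIGIT_WORDS:
                words.append(DIGIT_WORDS[ch])          # spell out digits
            else:
                words.append(random.choice(LEXICON.get(ch, [ch])))
        return " ".join(words)

    print(mnemonic("AjQa3Jtv"))
    # e.g. "ant joined Quayle's apple three jolly table vase"

The real system additionally arranged the chosen words into a grammatical sentence template, which this sketch omits.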
|
Computational humor : John Allen Paulos is known for his interest in mathematical foundations of humor. His book Mathematics and Humor: A Study of the Logic of Humor demonstrates structures common to humor and formal sciences (mathematics, linguistics) and develops a mathematical model of jokes based on catastrophe theory. Conversational systems which have been designed to take part in Turing test competitions generally have the ability to learn humorous anecdotes and jokes. Because many people regard humor as something particular to humans, its appearance in conversation can be quite useful in convincing a human interrogator that a hidden entity, which could be a machine or a human, is in fact a human.
|
Computational humor : Theory of humor World's funniest joke#Other findings
|
Computational humor : "Computational humor", by Binsted, K.; Nijholt, A.; Stock, O.; Strapparava, C.; Ritchie, G.; Manurung, R.; Pain, H.; Waller, A.; Oapos;Mara, D., IEEE Intelligent Systems Volume 21, Issue 2, 2006, pp. 59 – 69 doi:10.1109/MIS.2006.22 O. Stock, C. Strapparava & A. Nijholt (eds.) "The April Fools' Day Workshop on Computational Humour." Proc. Twente Workshop on Language Technology 20 (TWLT20), ISSN 0929-0672, ITC-IRST, Trento, Italy, April 2002, 146 pp == References ==
|
Spell checker : In software, a spell checker (or spelling checker or spell check) is a software feature that checks for misspellings in a text. Spell-checking features are often embedded in software or services, such as a word processor, email client, electronic dictionary, or search engine.
|
Spell checker : A basic spell checker carries out the following processes: It scans the text and extracts the words contained in it. It then compares each word with a known list of correctly spelled words (i.e. a dictionary). This might contain just a list of words, or it might also contain additional information, such as hyphenation points or lexical and grammatical attributes. An additional step is a language-dependent algorithm for handling morphology. Even for a lightly inflected language like English, the spell checker will need to consider different forms of the same word, such as plurals, verbal forms, contractions, and possessives. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated. It is unclear whether morphological analysis—allowing for many forms of a word depending on its grammatical role—provides a significant benefit for English, though its benefits for highly synthetic languages such as German, Hungarian, or Turkish are clear. As an adjunct to these components, the program's user interface allows users to approve or reject replacements and modify the program's operation. Spell checkers can use approximate string matching algorithms such as Levenshtein distance to find correct spellings of misspelled words. An alternative type of spell checker uses solely statistical information, such as n-grams, to recognize errors instead of correctly-spelled words. This approach usually requires a lot of effort to obtain sufficient statistical information. Key advantages include needing less runtime storage and the ability to correct errors in words that are not included in a dictionary. In some cases, spell checkers use a fixed list of misspellings and suggestions for those misspellings; this less flexible approach is often used in paper-based correction methods, such as the see also entries of encyclopedias. Clustering algorithms have also been used for spell checking combined with phonetic information.
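As a concrete illustration of the dictionary-plus-edit-distance approach, the sketch below generates every string one edit away from an unknown word and picks the candidate with the highest corpus frequency. It is a minimal sketch in the spirit of Norvig's well-known essay (linked in the external links below); the small frequency table is a stand-in for counts from a large corpus:

    # Stand-in frequency table; a real checker derives this from a corpus.
    WORD_FREQ = {"bath": 500, "bat": 300, "baht": 2, "the": 9000}

    def edits1(word):
        """All strings one edit (delete, swap, replace, insert) away."""
        letters = "abcdefghijklmnopqrstuvwxyz"
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = [L + R[1:] for L, R in splits if R]
        swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
        replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
        inserts = [L + c + R for L, R in splits for c in letters]
        return set(deletes + swaps + replaces + inserts)

    def correct(word):
        if word in WORD_FREQ:
            return word
        candidates = edits1(word) & WORD_FREQ.keys()
        return max(candidates, key=WORD_FREQ.get) if candidates else word

    print(correct("bathe"))  # 'bath': one deletion away, highest frequency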
|
Spell checker : The first spell checkers were "verifiers" instead of "correctors." They offered no suggestions for incorrectly spelled words. This was helpful for typos, but not so helpful for logical or phonetic errors. The challenge the developers faced was the difficulty in offering useful suggestions for misspelled words. This requires reducing words to a skeletal form and applying pattern-matching algorithms. It might seem logical that where spell-checking dictionaries are concerned, "the bigger, the better," so that correct words are not marked as incorrect. In practice, however, an optimal size for English appears to be around 90,000 entries. If there are more than this, incorrectly spelled words may be skipped because they are mistaken for others. For example, a linguist might determine on the basis of corpus linguistics that the word baht is more frequently a misspelling of bath or bat than a reference to the Thai currency. Hence, it would typically be more useful if a few people who write about Thai currency were slightly inconvenienced than if the spelling errors of the many more people who discuss baths were overlooked. The first MS-DOS spell checkers were mostly used in proofing mode from within word processing packages. After preparing a document, a user scanned the text looking for misspellings. Later, however, batch processing was offered in such packages as Oracle's short-lived CoAuthor and allowed a user to view the results after a document was processed and correct only the words that were known to be wrong. When memory and processing power became abundant, spell checking was performed in the background in an interactive way, such as with Sector Software's Spellbound program, released in 1987, and Microsoft Word since Word 95. Spell checkers have become increasingly sophisticated; some are now capable of recognizing grammatical errors. However, even at their best, they rarely catch all the errors in a text (such as homophone errors) and will flag neologisms and foreign words as misspellings. Nonetheless, spell checkers can be considered as a type of foreign language writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.
|
Spell checker : English is unusual in that most words used in formal writing have a single spelling that can be found in a typical dictionary, with the exception of some jargon and modified words. In many languages, words are often concatenated into new combinations of words. In German, compound nouns are frequently coined from other existing nouns. Some scripts do not clearly separate one word from another, requiring word-splitting algorithms. Each of these presents unique challenges to non-English language spell checkers.
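For scripts without explicit word boundaries, a simple baseline word-splitting algorithm is greedy longest-match against a lexicon, as in the sketch below; the toy lexicon is invented, and production segmenters use statistical models instead:

    # Greedy longest-match word segmentation for unsegmented text.
    LEXICON = {"we", "like", "learning", "machine", "machinelearning"}

    def segment(text, max_len=16):
        words, i = [], 0
        while i < len(text):
            # try the longest candidate substring first
            for j in range(min(len(text), i + max_len), i, -1):
                if text[i:j] in LEXICON:
                    words.append(text[i:j])
                    i = j
                    break
            else:                    # no dictionary word: emit one character
                words.append(text[i])
                i += 1
        return words

    print(segment("welikemachinelearning"))
    # ['we', 'like', 'machinelearning']

Note that greedy matching can segment ambiguous strings incorrectly, which is one reason statistical approaches dominate in practice.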
|
Spell checker : There has been research on developing algorithms that are capable of recognizing a misspelled word, even if the word itself is in the vocabulary, based on the context of the surrounding words. Not only does this allow words such as those in the poem "Candidate for a Pullet Surprise" (linked below) to be caught, but it mitigates the detrimental effect of enlarging dictionaries, allowing more words to be recognized. For example, baht in the same paragraph as Thai or Thailand would not be recognized as a misspelling of bath. The most common errors caught by such a system are homophone errors, such as those in the following sentence: Their coming too sea if its reel (intended: They're coming to see if it's real). The most successful algorithm to date is Andrew Golding and Dan Roth's "Winnow-based spelling correction algorithm", published in 1999, which is able to recognize about 96% of context-sensitive spelling errors, in addition to ordinary non-word spelling errors. Context-sensitive spell checkers appeared in the now-defunct applications Microsoft Office 2007 and Google Wave. Grammar checkers attempt to fix problems with grammar beyond spelling errors, including incorrect choice of words.
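A toy version of context-sensitive checking can be built from confusion sets and context cues, as in the sketch below; this is far simpler than Golding and Roth's Winnow-based method, and the cue lists are invented for illustration:

    # Choose among homophones/near-homophones by counting context cues.
    CONFUSION_SETS = {
        frozenset({"bath", "baht"}): {
            "bath": {"water", "soap", "tub"},
            "baht": {"thai", "thailand", "currency"},
        },
    }

    def pick(word, context_words):
        context = {w.lower() for w in context_words}
        for pair, cues in CONFUSION_SETS.items():
            if word in pair:
                # prefer the variant whose cues overlap the context most
                return max(pair, key=lambda w: len(cues[w] & context))
        return word

    print(pick("bath", ["the", "Thai", "currency", "fell"]))  # 'baht'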
|
Spell checker : Cupertino effect Grammar checker Record linkage problem Spelling suggestion Words (Unix) Autocorrection LanguageTool
|
Spell checker : Norvig.com, "How to Write a Spelling Corrector", by Peter Norvig BBK.ac.uk, "Spellchecking by computer", by Roger Mitton CBSNews.com, Spell-Check Crutch Curtails Correctness, by Lloyd de Vries History and text of "Candidate for a Pullet Surprise" by Mark Eckman and Jerrold H. Zar
|
Artificial intelligence arms race : A military artificial intelligence arms race is an arms race between two or more states to develop and deploy lethal autonomous weapons systems (LAWS). Since the mid-2010s, many analysts have noted the emergence of such an arms race between superpowers for better military AI, driven by increasing geopolitical and military tensions. An AI arms race is sometimes placed in the context of an AI Cold War between the United States and China.
|
Artificial intelligence arms race : Lethal autonomous weapons systems use artificial intelligence to identify and kill human targets without human intervention. LAWS have colloquially been called "slaughterbots" or "killer robots". Broadly, any competition for superior AI is sometimes framed as an "arms race". Advantages in military AI overlap with advantages in other sectors, as countries pursue both economic and military advantages.
|
Artificial intelligence arms race : In 2014, AI specialist Steve Omohundro warned that "An autonomous weapons arms race is already taking place". According to Siemens, worldwide military spending on robotics was US$5.1 billion in 2010 and US$7.5 billion in 2015. China became a top player in artificial intelligence research in the 2010s. According to the Financial Times, in 2016, for the first time, China published more AI papers than the entire European Union. When restricted to number of AI papers in the top 5% of cited papers, China overtook the United States in 2016 but lagged behind the European Union. 23% of the researchers presenting at the 2017 American Association for the Advancement of Artificial Intelligence (AAAI) conference were Chinese. Eric Schmidt, the former chairman of Alphabet, has predicted China will be the leading country in AI by 2025.
|
Artificial intelligence arms race : One risk concerns the AI race itself, whether or not the race is won by any one group. There are strong incentives for development teams to cut corners with regard to the safety of the system, increasing the risk of critical failures and unintended consequences. This is in part due to the perceived advantage of being the first to develop advanced AI technology. One team appearing to be on the brink of a breakthrough can encourage other teams to take shortcuts, ignore precautions and deploy a system that is less ready. Some argue that using "race" terminology at all in this context can exacerbate this effect. Another potential danger of an AI arms race is the possibility of losing control of the AI systems; the risk is compounded in the case of a race to artificial general intelligence, which may present an existential risk. In 2023, a United States Air Force official reportedly said that during a computer test, a simulated AI drone killed the human character operating it. The USAF later said the official had misspoken and that it never conducted such simulations. A third risk of an AI arms race is whether or not the race is actually won by one group. The concern is regarding the consolidation of power and technological advantage in the hands of one group. A US government report argued that "AI-enabled capabilities could be used to threaten critical infrastructure, amplify disinformation campaigns, and wage war", and that "global stability and nuclear deterrence could be undermined".
|
Artificial intelligence arms race : The international regulation of autonomous weapons is an emerging issue for international law. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process. As early as 2007, scholars such as AI professor Noel Sharkey warned of "an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions". Miles Brundage of the University of Oxford has argued that an AI arms race might be somewhat mitigated through diplomacy: "We saw in the various historical arms races that collaboration and dialog can pay dividends". Over a hundred experts signed an open letter in 2017 calling on the UN to address the issue of lethal autonomous weapons; however, at a November 2017 session of the UN Convention on Certain Conventional Weapons (CCW), diplomats could not agree even on how to define such weapons. The Indian ambassador and chair of the CCW stated that agreement on rules remained a distant prospect. As of 2019, 26 heads of state and 21 Nobel Peace Prize laureates have backed a ban on autonomous weapons. However, as of 2022, most major powers continue to oppose a ban on autonomous weapons. Many experts believe attempts to completely ban killer robots are likely to fail, in part because detecting treaty violations would be extremely difficult. A 2017 report from Harvard's Belfer Center predicts that AI has the potential to be as transformative as nuclear weapons. The report further argues that "Preventing expanded military use of AI is likely impossible" and that "the more modest goal of safe and effective technology management must be pursued", such as banning the attaching of an AI dead man's switch to a nuclear arsenal.
|
Artificial intelligence arms race : A 2015 open letter by the Future of Life Institute calling for the prohibition of lethal autonomous weapons systems has been signed by over 26,000 citizens, including physicist Stephen Hawking, Tesla magnate Elon Musk, Apple's Steve Wozniak and Twitter co-founder Jack Dorsey, and over 4,600 artificial intelligence researchers, including Stuart Russell, Bart Selman and Francesca Rossi. The Future of Life Institute has also released two fictional films, Slaughterbots (2017) and Slaughterbots - if human: kill() (2021), which portray threats of autonomous weapons and promote a ban, both of which went viral. Professor Noel Sharkey of the University of Sheffield argues that autonomous weapons will inevitably fall into the hands of terrorist groups such as the Islamic State.
|
Artificial intelligence arms race : Many Western tech companies avoid being associated too closely with the U.S. military, for fear of losing access to China's market. Furthermore, some researchers, such as DeepMind CEO Demis Hassabis, are ideologically opposed to contributing to military work. For example, in June 2018, company sources at Google said that top executive Diane Greene told staff that the company would not follow up Project Maven after the current contract expired in March 2019.
|
Artificial intelligence arms race : AI alignment AI slop A.I. Rising Artificial intelligence detection software Cold War Deterrence theory Ethics of artificial intelligence Existential risk from artificial general intelligence Nuclear arms race Post–Cold War era Second Cold War Space Race Unmanned combat aerial vehicle Weak AI
|
Artificial intelligence arms race : Paul Scharre, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.) The National Security Commission on Artificial Intelligence. (2019). Interim Report. Washington, DC: Author. Dresp-Langley, Birgitta (2023). "The weaponization of artificial intelligence: What the public needs to be aware of". Frontiers in Artificial Intelligence. 6. doi:10.3389/frai.2023.1154184. PMC 10030838. PMID 36967833.
|
Coupled pattern learner : Coupled Pattern Learner (CPL) is a machine learning algorithm which couples the semi-supervised learning of categories and relations in order to forestall the problem of semantic drift associated with bootstrap learning methods.
|
Coupled pattern learner : Semi-supervised learning approaches that use a small number of labeled examples together with many unlabeled examples are often unreliable, as they tend to produce an internally consistent but incorrect set of extractions. CPL addresses this problem by simultaneously learning classifiers for many different categories and relations in the presence of an ontology defining constraints that couple the training of these classifiers. It was introduced by Andrew Carlson, Justin Betteridge, Estevam R. Hruschka Jr., and Tom M. Mitchell in 2009.
|
Coupled pattern learner : CPL is an approach to semi-supervised learning that yields more accurate results by coupling the training of many information extractors. The basic idea behind CPL is that semi-supervised training of a single type of extractor, such as one for 'coach', is much more difficult than simultaneously training many extractors that cover a variety of inter-related entity and relation types. Using prior knowledge about the relationships between these different entities and relations, CPL turns unlabeled data into a useful constraint during training. For example, 'coach(x)' implies 'person(x)' and 'not sport(x)'.
|
Coupled pattern learner : Meta-Bootstrap Learner (MBL) was also proposed by the authors of CPL. MBL couples the training of multiple extraction techniques with a multi-view constraint, which requires the extractors to agree. This makes it feasible to add coupling constraints on top of existing extraction algorithms while treating them as black boxes. MBL assumes that the errors made by different extraction techniques are independent. The following is a quick summary of MBL (an illustrative Python rendering follows the pseudocode):

Input: an ontology O, a set of extractors ε
Output: trusted instances for each predicate

for i = 1, 2, ... do
    foreach predicate p in O do
        foreach extractor e in ε do
            extract new candidates for p using e with recently promoted instances
        end
        FILTER candidates that violate mutual-exclusion or type-checking constraints
        PROMOTE candidates that were extracted by all extractors
    end
end

Subordinate algorithms used with MBL do not promote any instances on their own; they report the evidence about each candidate to MBL, and MBL is responsible for promoting instances.
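An illustrative Python rendering of the loop above, assuming hypothetical `ontology.predicates` and `ontology.is_consistent` interfaces and extractors given as callables; none of these names come from the original paper:

    def meta_bootstrap(ontology, extractors, iterations=10):
        # trusted instances per predicate, grown monotonically each round
        trusted = {p: set() for p in ontology.predicates}
        for _ in range(iterations):
            for p in ontology.predicates:
                # each black-box extractor proposes candidates, seeded with
                # the instances promoted so far
                proposals = [set(e(p, trusted[p])) for e in extractors]
                # FILTER: drop candidates that violate mutual-exclusion or
                # type-checking constraints from the ontology
                consistent = {c for c in set.union(*proposals)
                              if ontology.is_consistent(p, c, trusted)}
                # PROMOTE: keep only candidates all extractors agree on
                trusted[p] |= set.intersection(*proposals) & consistent
        return trusted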
|
Coupled pattern learner : In their paper, the authors presented results showing the potential of CPL to contribute new facts to an existing repository of semantic knowledge, Freebase.
|
Coupled pattern learner : Co-training Never-Ending Language Learning
|
Coupled pattern learner : Liu, Qiuhua; Xuejun Liao; Lawrence Carin (2008). "Semi-supervised multitask learning". NIPS. Shinyama, Yusuke; Satoshi Sekine (2006). "Preemptive information extraction using unrestricted relation discovery". HLT-NAACL. Chang, Ming-Wei; Lev-Arie Ratinov; Dan Roth (2007). "Guiding semi-supervision with constraint driven learning". ACL. Banko, Michele; Michael J. Cafarella; Stephen Soderland; Matt Broadhead; Oren Etzioni (2007). "Open information extraction from the web". IJCAI. Blum, Avrim; Tom Mitchell (1998). "Combining labeled and unlabeled data with co-training". Proceedings of the Eleventh Annual Conference on Computational Learning Theory. pp. 92–100. doi:10.1145/279943.279962. ISBN 1581130570. S2CID 207228399. Riloff, Ellen; Rosie Jones (1999). "Learning dictionaries for information extraction by multi-level bootstrapping". AAAI. Rosenfeld, Benjamin; Ronen Feldman (2007). "Using corpus statistics on entities to improve semi-supervised relation extraction from the web". ACL. Wang, Richard C.; William W. Cohen (2008). "Iterative set expansion of named entities using the web". ICDM.
|
Explainable artificial intelligence : Explainable AI (XAI), often overlapping with interpretable AI, or explainable machine learning (XML), is a field of research within artificial intelligence (AI) that explores methods that provide humans with the ability to exercise intellectual oversight over AI algorithms. The main focus is on the reasoning behind the decisions or predictions made by the AI algorithms, to make them more understandable and transparent. This addresses users' need to assess safety and scrutinize automated decision making in applications. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision. XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason. XAI may be an implementation of the social right to explanation. Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on. This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions. Machine learning (ML) algorithms used in AI can be categorized as white-box or black-box. White-box models provide results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and may not be understood even by domain experts. XAI algorithms follow the three principles of transparency, interpretability, and explainability. A model is transparent "if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer." Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans. Explainability is a concept that is recognized as important, but a consensus definition is not yet available; one possibility is "the collection of features of the interpretable domain that have contributed, for a given example, to producing a decision (e.g., classification or regression)". In summary, interpretability refers to the user's ability to understand model outputs, while model transparency includes simulatability (reproducibility of predictions), decomposability (intuitive explanations for parameters), and algorithmic transparency (explaining how algorithms work). Model functionality focuses on textual descriptions, visualization, and local explanations, which clarify specific outputs or instances rather than entire models. All these concepts aim to enhance the comprehensibility and usability of AI systems. If algorithms fulfill these principles, they provide a basis for justifying decisions, tracking them and thereby verifying them, improving the algorithms, and exploring new facts. Sometimes it is also possible to achieve a high-accuracy result with white-box ML algorithms. These algorithms have an interpretable structure that can be used to explain predictions. Concept Bottleneck Models, which use concept-level abstractions to explain model reasoning, are examples of this and can be applied in both image and text prediction tasks.
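As a small illustration of the white-box idea, a shallow decision tree is a model whose learned rules can be printed and audited directly, unlike the weights of a deep network; the sketch below uses scikit-learn's bundled iris dataset, and the depth limit is an arbitrary choice for readability:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(data.data, data.target)
    # prints the full decision logic as human-readable if/else rules
    print(export_text(tree, feature_names=list(data.feature_names)))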
This is especially important in domains like medicine, defense, finance, and law, where it is crucial to understand decisions and build trust in the algorithms. Many researchers argue that, at least for supervised machine learning, the way forward is symbolic regression, where the algorithm searches the space of mathematical expressions to find the model that best fits a given dataset. AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of assessing how positive film reviews are in the test dataset." The AI may learn useful general rules from the test set, such as "reviews containing the word 'horrible' are likely to be negative." However, it may also learn inappropriate rules, such as "reviews containing 'Daniel Day-Lewis' are usually positive"; such rules may be undesirable if they are likely to fail to generalize outside the training set, or if people consider the rule to be "cheating" or "unfair." A human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set.
|
Explainable artificial intelligence : Cooperation between agents – in this case, algorithms and humans – depends on trust. If humans are to accept algorithmic prescriptions, they need to trust them. Incompleteness in formal trust criteria is a barrier to optimization. Transparency, interpretability, and explainability are intermediate goals on the road to these more comprehensive trust criteria. This is particularly relevant in medicine, especially with clinical decision support systems (CDSS), in which medical professionals should be able to understand how and why a machine-based decision was made in order to trust the decision and augment their decision-making process. AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data but do not reflect the more nuanced implicit desires of the human system designers or the full complexity of the domain data. For example, a 2017 system tasked with image recognition learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures rather than learning how to tell if a horse was actually pictured. In another 2017 system, a supervised learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer in a way such that it falsely appeared to be grasping the object. One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop" without greatly sacrificing AI performance. Human users of such a system can understand the AI's cognition (both in real-time and after the fact) and can determine whether to trust the AI. Other applications of XAI are knowledge extraction from black-box models and model comparisons. In the context of monitoring systems for ethical and socio-legal compliance, the term "glass box" is commonly used to refer to tools that track the inputs and outputs of the system in question, and provide value-based explanations for their behavior. These tools aim to ensure that the system operates in accordance with ethical and legal standards, and that its decision-making processes are transparent and accountable. The term "glass box" is often used in contrast to "black box" systems, which lack transparency and can be more difficult to monitor and regulate. The term is also used to name a voice assistant that produces counterfactual statements as explanations.
|
Explainable artificial intelligence : There is a subtle difference between the terms explainability and interpretability in the context of AI. Some explainability techniques don't involve understanding how the model works, and may work across various AI systems. Treating the model as a black box and analyzing how marginal changes to the inputs affect the result sometimes provides a sufficient explanation.
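A minimal sketch of this model-agnostic, perturbation-based style of explanation follows, where `predict` stands for any black-box scoring function; the toy linear model at the end is invented purely to make the example runnable:

    import numpy as np

    # Perturbation analysis: score how much the prediction for instance x
    # changes when each feature is replaced by its dataset mean.
    def perturbation_importance(predict, X, x):
        base = predict(x[None, :])[0]
        importances = []
        for j in range(x.size):
            x_pert = x.copy()
            x_pert[j] = X[:, j].mean()   # neutralize feature j
            importances.append(abs(base - predict(x_pert[None, :])[0]))
        return np.array(importances)

    # Toy black box: a linear scorer with an irrelevant middle feature.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    predict = lambda M: M @ np.array([2.0, 0.0, -1.0])
    print(perturbation_importance(predict, X, X[0]))  # feature 1 scores 0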
|
Explainable artificial intelligence : During the 1970s to 1990s, symbolic reasoning systems, such as MYCIN, GUIDON, SOPHIE, and PROTOS, could represent, reason about, and explain their reasoning for diagnostic, instructional, or machine-learning (explanation-based learning) purposes. MYCIN, developed in the early 1970s as a research prototype for diagnosing bacteremia infections of the bloodstream, could explain which of its hand-coded rules contributed to a diagnosis in a specific case. Research in intelligent tutoring systems resulted in developing systems such as SOPHIE that could act as an "articulate expert", explaining problem-solving strategy at a level the student could understand, so they would know what action to take next. For instance, SOPHIE could explain the qualitative reasoning behind its electronics troubleshooting, even though it ultimately relied on the SPICE circuit simulator. Similarly, GUIDON added tutorial rules to supplement MYCIN's domain-level rules so it could explain the strategy for medical diagnosis. Symbolic approaches to machine learning relying on explanation-based learning, such as PROTOS, made use of explicit representations of explanations expressed in a dedicated explanation language, both to explain their actions and to acquire new knowledge. In the 1980s through the early 1990s, truth maintenance systems (TMS) extended the capabilities of causal-reasoning, rule-based, and logic-based inference systems. A TMS explicitly tracks alternate lines of reasoning, justifications for conclusions, and lines of reasoning that lead to contradictions, allowing future reasoning to avoid these dead ends. To provide an explanation, they trace reasoning from conclusions to assumptions through rule operations or logical inferences, allowing explanations to be generated from the reasoning traces. As an example, consider a rule-based problem solver with just a few rules about Socrates that concludes he has died from poison. By just tracing through the dependency structure, the problem solver can construct the following explanation: "Socrates died because he was mortal and drank poison, and all mortals die when they drink poison. Socrates was mortal because he was a man and all men are mortal. Socrates drank poison because he held dissident beliefs, the government was conservative, and those holding conservative dissident beliefs under conservative governments must drink poison." By the 1990s, researchers began studying whether it is possible to meaningfully extract the non-hand-coded rules being generated by opaque trained neural networks. Researchers in clinical expert systems creating neural network-powered decision support for clinicians sought to develop dynamic explanations that allow these technologies to be more trusted and trustworthy in practice. In the 2010s, public concerns about racial and other bias in the use of AI for criminal sentencing decisions and findings of creditworthiness may have led to increased demand for transparent artificial intelligence. As a result, many academics and organizations are developing tools to help detect bias in their systems. Marvin Minsky et al. raised the issue that AI can function as a form of surveillance, with the biases inherent in surveillance, suggesting HI (Humanistic Intelligence) as a way to create a more fair and balanced "human-in-the-loop" AI. Explainable AI has more recently become an active research topic in the context of modern deep learning.
Modern complex AI techniques, such as deep learning, are naturally opaque. To address this issue, methods have been developed to make new models more explainable and interpretable. This includes layerwise relevance propagation (LRP), a technique for determining which features in a particular input vector contribute most strongly to a neural network's output. Other techniques explain some particular prediction made by a (nonlinear) black-box model, a goal referred to as "local interpretability". Even today, the outputs of deep neural networks cannot be explained without dedicated explanatory mechanisms, whether built into the network itself or supplied by external explanatory components. There is also research on whether the concepts of local interpretability can be applied to a remote context, where a model is operated by a third-party. There has been work on making glass-box models which are more transparent to inspection. This includes decision trees, Bayesian networks, sparse linear models, and more. The Association for Computing Machinery Conference on Fairness, Accountability, and Transparency (ACM FAccT) was established in 2018 to study transparency and explainability in the context of socio-technical systems, many of which include artificial intelligence. Some techniques allow visualisations of the inputs to which individual software neurons respond most strongly. Several groups found that neurons can be aggregated into circuits that perform human-comprehensible functions, some of which reliably arise across different networks trained independently. There are various techniques to extract compressed representations of the features of given inputs, which can then be analysed by standard clustering techniques. Alternatively, networks can be trained to output linguistic explanations of their behaviour, which are then directly human-interpretable. Model behaviour can also be explained with reference to training data—for example, by evaluating which training inputs influenced a given behaviour the most. XAI has also been applied in pain research, specifically to understanding the role of electrodermal activity in automated pain recognition: studies comparing hand-crafted features with deep learning models found that simple hand-crafted features can achieve performance comparable to deep learning models, and that both traditional feature engineering and deep feature learning approaches rely on simple characteristics of the input time-series data.
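As a concrete illustration of layerwise relevance propagation, the sketch below applies the epsilon rule to a tiny two-layer ReLU network with random stand-in weights; real LRP implementations cover many layer types and propagation-rule variants:

    import numpy as np

    # Tiny two-layer ReLU network with random stand-in weights.
    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

    def stabilize(z, eps=1e-6):
        # epsilon rule: nudge denominators away from zero
        return z + eps * np.where(z >= 0, 1.0, -1.0)

    def lrp(x):
        z1 = W1 @ x + b1                 # pre-activations, layer 1
        a1 = np.maximum(0.0, z1)         # ReLU
        z2 = W2 @ a1 + b2                # network output (one logit)
        r2 = z2                          # relevance starts as the output itself
        r1 = a1 * (W2.T @ (r2 / stabilize(z2)))   # output -> hidden
        r0 = x * (W1.T @ (r1 / stabilize(z1)))    # hidden -> input features
        return r0                        # per-feature relevance scores

    x = np.array([1.0, -0.5, 2.0])
    r = lrp(x)
    print(r, r.sum())  # relevances sum (approximately) to the network output

With zero biases, the relevance scores conserve the output value layer by layer, which is the defining property of LRP.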
|
Explainable artificial intelligence : As regulators, official bodies, and general users come to depend on AI-based dynamic systems, clearer accountability will be required for automated decision-making processes to ensure trust and transparency. The first global conference exclusively dedicated to this emerging discipline was the 2017 International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI). It has evolved over the years, with various workshops organised and co-located with many other international conferences, and it now has a dedicated global event, "The world conference on eXplainable Artificial Intelligence", with its own proceedings. The European Union introduced a right to explanation in the General Data Protection Regulation (GDPR) to address potential problems stemming from the rising importance of algorithms. The implementation of the regulation began in 2018. However, the right to explanation in GDPR covers only the local aspect of interpretability. In the United States, insurance companies are required to be able to explain their rate and coverage decisions. In France, the Loi pour une République numérique (Digital Republic Act) grants subjects the right to request and receive information pertaining to the implementation of algorithms that process data about them.
|
Explainable artificial intelligence : Despite ongoing efforts to enhance the explainability of AI models, several inherent limitations persist.
|
Explainable artificial intelligence : Some scholars have suggested that explainability in AI should be considered a goal secondary to AI effectiveness, and that encouraging the exclusive development of XAI may limit the functionality of AI more broadly. Critiques of XAI rely on developed concepts of mechanistic and empiric reasoning from evidence-based medicine to suggest that AI technologies can be clinically validated even when their function cannot be understood by their operators. Some researchers advocate the use of inherently interpretable machine learning models, rather than using post-hoc explanations in which a second model is created to explain the first. This is partly because post-hoc models increase the complexity in a decision pathway and partly because it is often unclear how faithfully a post-hoc explanation can mimic the computations of an entirely separate model. However, another view is that what is important is that the explanation accomplishes the given task at hand, and whether it is pre or post-hoc doesn't matter. If a post-hoc explanation method helps a doctor diagnose cancer better, it is of secondary importance whether it is a correct/incorrect explanation. The goals of XAI amount to a form of lossy compression that will become less effective as AI models grow in their number of parameters. Along with other factors this leads to a theoretical limit for explainability.
|
Explainable artificial intelligence : Explainability has also been studied in social choice theory. Social choice theory aims at finding solutions to social decision problems that are based on well-established axioms. Ariel D. Procaccia explains that these axioms can be used to construct convincing explanations for the solutions. This principle has been used to construct explanations in various subfields of social choice.
|
Explainable artificial intelligence : Algorithmic transparency – study on the transparency of algorithms Right to explanation – right to have an algorithm explained Accumulated local effects – machine learning method
|
Explainable artificial intelligence : "the World Conference on eXplainable Artificial Intelligence". "ACM Conference on Fairness, Accountability, and Transparency (FAccT)". Mazumdar, Dipankar; Neto, Mário Popolin; Paulovich, Fernando V. (2021). "Random Forest similarity maps: A Scalable Visual Representation for Global and Local Interpretation". Electronics. 10 (22): 2862. doi:10.3390/electronics10222862. Park, Dong Huk; Hendricks, Lisa Anne; Akata, Zeynep; Schiele, Bernt; Darrell, Trevor; Rohrbach, Marcus (2016-12-14). "Attentive Explanations: Justifying Decisions and Pointing to the Evidence". arXiv:1612.04757 [cs.CV]. "Explainable AI: Making machines understandable for humans". Explainable AI: Making machines understandable for humans. Retrieved 2017-11-02. "Explaining How End-to-End Deep Learning Steers a Self-Driving Car". Parallel Forall. 2017-05-23. Retrieved 2017-11-02. Knight, Will (2017-03-14). "DARPA is funding projects that will try to open up AI's black boxes". MIT Technology Review. Retrieved 2017-11-02. Alvarez-Melis, David; Jaakkola, Tommi S. (2017-07-06). "A causal framework for explaining the predictions of black-box sequence-to-sequence models". arXiv:1707.01943 [cs.LG]. "Similarity Cracks the Code Of Explainable AI". simMachines. 2017-10-12. Retrieved 2018-02-02.
|