Art Recognition : Official website Art magazine website
Galaxy AI : Galaxy AI is a suite of artificial intelligence (AI) features developed by Samsung Electronics for its Galaxy line of mobile devices. Introduced in January 2024 with the Samsung Galaxy S24 series, Galaxy AI combines on-device and cloud-based AI technologies to enable a range of intelligent features across Samsung’s ecosystem. The suite includes functions such as real-time translation, AI-powered photo editing, and generative search tools.
Galaxy AI : Galaxy AI combines Samsung’s proprietary AI models with partner technologies, such as Google's Gemini AI, to deliver a range of intelligent, context-aware functionalities. The AI suite is designed to enhance user experience by providing real-time assistance, content creation tools, and productivity features.
Galaxy AI : Galaxy AI encompasses several AI-powered tools across different functionalities:
Galaxy AI : Galaxy AI debuted with the Galaxy S24 series in January 2024. Samsung has announced that AI features will be free to use on supported devices until the end of 2025. Samsung initially introduced Galaxy AI with the Galaxy S24 series. However, through software updates, the company has expanded AI support to various existing Galaxy models. As of 2025, Galaxy AI is available on: Galaxy S series: S25, S25+, S25 Ultra; S24, S24+, S24 Ultra; S23, S23+, S23 Ultra; S23 FE; S22, S22+, S22 Ultra; S21, S21+, S21 Ultra (limited AI features). Galaxy Z series: Z Fold6, Z Flip6; Z Fold5, Z Flip5; Z Fold4, Z Flip4; Z Fold3, Z Flip3 (limited AI features). Galaxy Tab series: Galaxy Tab S9, Tab S9+, Tab S9 Ultra; Galaxy Tab S8, Tab S8+, Tab S8 Ultra. Galaxy A series: Galaxy A55 5G, A54 5G, A35 5G, A34 5G. Galaxy wearables: Galaxy Watch7, Galaxy Watch6 (limited AI features); Galaxy Buds3, Galaxy Buds3 Pro (limited AI-enhanced audio processing). Samsung has also indicated plans to expand Galaxy AI to additional Galaxy devices through future software updates.
Galaxy AI : Samsung emphasizes privacy protection and ethical AI development as key priorities for Galaxy AI. To safeguard user data, on-device processing is used for sensitive AI functions, minimizing data transmission to external servers. Encrypted cloud-based AI is employed selectively for high-processing tasks. Samsung’s AI ethics principles focus on fairness, transparency, and accountability in AI deployment. Samsung has stated that user data processed by AI features will not be stored or used for training AI models unless explicitly permitted by the user.
Galaxy AI : Official website (US)
Natural-language user interface : Natural-language user interface (LUI or NLUI) is a type of human-computer interface in which linguistic phenomena such as verbs, phrases and clauses act as UI controls for creating, selecting and modifying data in software applications. In interface design, natural-language interfaces are sought after for their speed and ease of use, but most face the challenge of understanding the wide variety of ambiguous input they may receive. Natural-language interfaces are an active area of study in the field of natural-language processing and computational linguistics. An intuitive general natural-language interface is one of the active goals of the Semantic Web. Text interfaces are "natural" to varying degrees. Many formal (un-natural) programming languages incorporate idioms of natural human language. Likewise, a traditional keyword search engine could be described as a "shallow" natural-language user interface.
Natural-language user interface : A natural-language search engine would in theory find targeted answers to user questions (as opposed to keyword search). For example, when confronted with a question of the form 'which U.S. state has the highest income tax?', conventional search engines ignore the question and instead search on the keywords 'state', 'income' and 'tax'. Natural-language search, on the other hand, attempts to use natural-language processing to understand the nature of the question and then to search and return a subset of the web that contains the answer to the question. If it works, the results would have higher relevance than those from a keyword search engine, because the whole question is taken into account rather than just its keywords.
Natural-language user interface : Prototype NL interfaces had already appeared in the late sixties and early seventies. SHRDLU was a natural-language interface that manipulated blocks in a virtual "blocks world". LUNAR, by William A. Woods, was a natural-language interface to a database containing chemical analyses of Apollo 11 Moon rocks. Chat-80 transformed English questions into Prolog expressions, which were evaluated against a Prolog database; its code was circulated widely and formed the basis of several other experimental NL interfaces, and an online demo is available on the LPA website. ELIZA, written at MIT by Joseph Weizenbaum between 1964 and 1966, mimicked a psychotherapist and operated by processing users' responses to scripts. Using almost no information about human thought or emotion, the DOCTOR script sometimes provided a startlingly human-like interaction; an online demo is available on the LPA website. Janus is also one of the few systems to support temporal questions. Other systems included Intellect from Trinzic (formed by the merger of AICorp and Aion); BBN's Parlance, built on experience from the development of the Rus and Irus systems; IBM LanguageAccess; Q&A from Symantec; Datatalker from Natural Language Inc.; Loqui from BIM Systems; and English Wizard from Linguistic Technology Corporation.
Natural-language user interface : Natural-language interfaces have in the past led users to anthropomorphize the computer, or at least to attribute more intelligence to machines than is warranted. On the part of the user, this has led to unrealistic expectations of the capabilities of the system. Such expectations will make it difficult to learn the restrictions of the system if users attribute too much capability to it, and will ultimately lead to disappointment when the system fails to perform as expected, as was the case in the AI winter of the 1970s and 80s. A 1995 paper titled 'Natural Language Interfaces to Databases – An Introduction' describes some challenges: Modifier attachment: the request "List all employees in the company with a driving licence" is ambiguous unless you know that companies can't have driving licences. Conjunction and disjunction: "List all applicants who live in California and Arizona" is ambiguous unless you know that a person can't live in two places at once. Anaphora resolution: resolving what a user means by 'he', 'she' or 'it' in a self-referential query. Other goals to consider more generally are the speed and efficiency of the interface; for any algorithm, these two factors are the main determinants of whether some methods work better than others and therefore achieve greater success in the market. In addition, localisation across multiple language sites requires extra consideration, owing to differing sentence structure and language syntax variations between most languages. Finally, regarding the methods used, the main problem to be solved is creating a general algorithm that can recognize the entire spectrum of different voices, while disregarding nationality, gender or age. The significant differences between the extracted features - even from speakers who say the same word or phrase - must be successfully overcome.
Natural-language user interface : The natural-language interface gives rise to technology used for many different applications. Some of the main uses are: Dictation: the most common use for automated speech recognition (ASR) systems today. This includes medical transcriptions, legal and business dictation, and general word processing. In some cases special vocabularies are used to increase the accuracy of the system. Command and control: ASR systems designed to perform functions and actions on the system are defined as command and control systems. Utterances like "Open Netscape" and "Start a new xterm" will do just that. Telephony: some PBX/voice mail systems allow callers to speak commands instead of pressing buttons to send specific tones. Wearables: because inputs are limited for wearable devices, speaking is a natural possibility. Medical and disability applications: many people have difficulty typing due to physical limitations such as repetitive strain injuries (RSI), muscular dystrophy, and many others. For example, people with difficulty hearing could use a system connected to their telephone to convert a caller's speech to text. Embedded applications: some new cellular phones include C&C speech recognition that allows utterances such as "call home". This may be a major factor in the future of automatic speech recognition and Linux. Below are named and defined some of the applications that use natural-language recognition and thus integrate the utilities listed above.
Natural-language user interface : Conversational user interface Natural user interface Natural-language programming Voice user interface Chatbot, a computer program that simulates human conversations Noisy text Question answering Selection-based search Semantic search Semantic query Semantic Web
ALOPEX : ALOPEX (an abbreviation of "algorithms of pattern extraction") is a correlation-based machine learning algorithm first proposed by Tzanakou and Harth in 1974.
ALOPEX : In machine learning, the goal is to train a system to minimize a cost function or (referring to ALOPEX) a response function. Many training algorithms, such as backpropagation, have an inherent susceptibility to getting "stuck" in local minima or maxima of the response function. ALOPEX uses a cross-correlation of differences and a stochastic process to overcome this in an attempt to reach the absolute minimum (or maximum) of the response function.
ALOPEX : ALOPEX, in its simplest form, is defined by the updating equation \Delta W_{ij}(n) = \gamma \, \Delta W_{ij}(n-1) \, \Delta R(n) + r_i(n), where: n ≥ 0 is the iteration or time-step; \Delta W_{ij}(n) is the difference between the current and previous value of the system variable W_{ij} at iteration n; \Delta R(n) is the difference between the current and previous value of the response function R at iteration n; \gamma is the learning rate parameter (\gamma < 0 minimizes R, and \gamma > 0 maximizes R); and r_i(n) \sim N(0, \sigma^2) is a Gaussian noise term.
ALOPEX : Essentially, ALOPEX changes each system variable W_{ij}(n) based on a product of: the previous change in the variable, \Delta W_{ij}(n-1); the resulting change in the cost function, \Delta R(n); and the learning rate parameter \gamma. Further, to find the absolute minimum (or maximum), the stochastic process r_{ij}(n) (Gaussian or other) is added to stochastically "push" the algorithm out of any local minima.
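ALOPEX : A minimal NumPy sketch of the update rule above. The quadratic response function, step sizes, and loop structure are illustrative assumptions added for the example, not part of the original formulation:

```python
import numpy as np

def alopex_update(W, dW_prev, dR, gamma=-0.1, sigma=0.01, rng=None):
    """One ALOPEX step: dW(n) = gamma * dW(n-1) * dR(n) + r(n), r(n) ~ N(0, sigma^2)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, sigma, size=W.shape)   # stochastic term r(n)
    dW = gamma * dW_prev * dR + noise              # correlation-based update
    return W + dW, dW

# Toy usage (hypothetical): minimize R(W) = sum(W^2), so gamma < 0.
def response(W):
    return float(np.sum(W ** 2))

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 2))
dW = rng.normal(0.0, 0.01, size=W.shape)           # initial random perturbation
R_old = response(W)
for _ in range(5000):
    R_new = response(W)
    W, dW = alopex_update(W, dW, R_new - R_old, gamma=-0.05, sigma=0.002, rng=rng)
    R_old = R_new
```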
ALOPEX : Harth, E., & Tzanakou, E. (1974) Alopex: A stochastic method for determining visual receptive fields. Vision Research, 14:1475-1482. Abstract from ScienceDirect
Natural Language Processing (journal) : Natural Language Processing is a bimonthly peer-reviewed academic journal published by Cambridge University Press which covers research and software in natural language processing. It was established in 1995 as Natural Language Engineering, obtaining its current title in 2024. Other than original publications on theoretical and applied aspects of computational linguistics, the journal also contains Industry Watch and Emerging Trends columns tracking developments in the field. The editor-in-chief is Ruslan Mitkov (Lancaster University). From 2024 the journal is published completely open access. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.3.
Data exploration : Data exploration is an approach similar to initial data analysis, whereby a data analyst uses visual exploration to understand what is in a dataset and the characteristics of the data, rather than through traditional data management systems. These characteristics can include size or amount of data, completeness of the data, correctness of the data, possible relationships amongst data elements or files/tables in the data. Data exploration is typically conducted using a combination of automated and manual activities. Automated activities can include data profiling or data visualization or tabular reports to give the analyst an initial view into the data and an understanding of key characteristics. This is often followed by manual drill-down or filtering of the data to identify anomalies or patterns identified through the automated actions. Data exploration can also require manual scripting and queries into the data (e.g. using languages such as SQL or R) or using spreadsheets or similar tools to view the raw data. All of these activities are aimed at creating a mental model and understanding of the data in the mind of the analyst, and defining basic metadata (statistics, structure, relationships) for the data set that can be used in further analysis. Once this initial understanding of the data has been gained, the data can be pruned or refined by removing unusable parts of the data (data cleansing), correcting poorly formatted elements and defining relevant relationships across datasets. This process is also known as determining data quality. Data exploration can also refer to the ad hoc querying or visualization of data to identify potential relationships or insights that may be hidden in the data, and does not require assumptions to be formulated beforehand. Traditionally, this had been a key area of focus for statisticians, with John Tukey being a key evangelist in the field. Today, data exploration is more widespread and is the focus of data analysts and data scientists; the latter being a relatively new role within enterprises and larger organizations.
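Data exploration : As a small illustration of the automated-profiling-then-manual-drill-down flow described above, here is a sketch using Python and pandas rather than the SQL or R mentioned in the text; the file name and the "amount" column are hypothetical placeholders:

```python
import pandas as pd

# Hypothetical dataset; any tabular file would do.
df = pd.read_csv("sales.csv")

# Automated first pass: size, completeness, structure, and basic statistics.
print(df.shape)                        # amount of data (rows, columns)
print(df.dtypes)                       # structure: column types
print(df.isna().mean())                # completeness: fraction missing per column
print(df.describe(include="all"))      # summary statistics / basic profiling

# Manual drill-down: filter for anomalies flagged by the first pass,
# e.g. negative values in a column expected to be non-negative.
print(df[df["amount"] < 0].head())

# Possible relationships among numeric columns.
print(df.select_dtypes("number").corr())
```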
Data exploration : This area of data exploration has become an area of interest in the field of machine learning. This is a relatively new field and is still evolving. At its most basic level, a machine-learning algorithm can be fed a data set and can be used to identify whether a hypothesis is true based on the dataset. Common machine learning algorithms can focus on identifying specific patterns in the data. Common patterns include regression, classification, and clustering, but there are many possible patterns and algorithms that can be applied to data via machine learning. By employing machine learning, it is possible to find patterns or relationships in the data that would be difficult or impossible to find via manual inspection, trial and error or traditional exploration techniques.
Data exploration : Trifacta – a data preparation and analysis platform Paxata – self-service data preparation software Alteryx – data blending and advanced data analytics software Microsoft Power BI - interactive visualization and data analysis tool OpenRefine - a standalone open source desktop application for data clean-up and data transformation Tableau software – interactive data visualization software
Data exploration : Exploratory data analysis Machine learning Data profiling Data visualization
Virtual intelligence : Virtual intelligence (VI) is the term given to artificial intelligence that exists within a virtual world. Many virtual worlds have options for persistent avatars that provide information, training, role-playing, and social interactions. The immersion of virtual worlds provides a platform for VI beyond the traditional paradigm of past user interfaces (UIs). The benchmark Alan Turing established for telling the difference between human and computerized intelligence was devised without any visual influences. With today's VI bots, virtual intelligence has moved beyond the constraints of such testing to a new level of demonstrated machine intelligence. The immersive features of these environments provide nonverbal elements that affect the realism provided by virtually intelligent agents. Virtual intelligence is the intersection of these two technologies: Virtual environments: Immersive 3D spaces provide for collaboration, simulations, and role-playing interactions for training. Many of these virtual environments are currently being used for government and academic projects, including Second Life, VastPark, Olive, OpenSim, Outerra, Oracle's Open Wonderland, Duke University's Open Cobalt, and many others. Some of the commercial virtual worlds are also taking this technology in new directions, including the high-definition virtual world Blue Mars. Artificial intelligence: Artificial intelligence (AI) is a branch of computer science that aims to create intelligent machines capable of performing tasks that typically require human intelligence. AI technology seeks to mimic human behavior and intelligence through computational algorithms and data analysis. Within the realm of AI, there is a specialized area known as virtual intelligence (VI), which operates within virtual environments to simulate human-like interactions and responses.
Virtual intelligence : Cutlass Bomb Disposal Robot: Northrop Grumman developed a virtual training opportunity because of the prohibitive real-world cost and dangers associated with bomb disposal. By replicating a complicated system without having to learn advanced code, the virtual robot carries no risk of damage, trainee safety hazards, or accessibility constraints. MyCyberTwin: NASA is among the companies that have used the MyCyberTwin AI technologies. They used it for the Phoenix rover in the virtual world Second Life. Their MyCyberTwin used a programmed profile to relay information about the Phoenix rover, telling people what it was doing and its purpose. Second China: The University of Florida developed the "Second China" project as an immersive training experience for learning how to interact with the culture and language in a foreign country. Students are immersed in an environment that provides role-playing challenges coupled with language and cultural sensitivities magnified during country-level diplomatic missions or during times of potential conflict or regional destabilization. The virtual training provides participants with opportunities to access information, take part in guided learning scenarios, communicate, collaborate, and role-play. While China was the country for the prototype, this model can be modified for use with any culture to help better understand social and cultural interactions and see how other people think and what their actions imply. Duke School of Nursing Training Simulation: Extreme Reality developed virtual training to test critical thinking, with a nurse performing trained procedures, identifying critical data to make decisions, and performing the correct steps for intervention. Bots are programmed to respond to the nurse's actions as the patient, with the patient's condition improving if the nurse performs the correct actions.
Virtual intelligence : Artificial conversational entity Autonomous agent Avatar (computing) Embodied agent Multi-Agent System Intelligent agent Non-player character Player character Virtual reality X.A.N.A.
Virtual intelligence : Virtual Intelligence, David Burden and Dave Fliesen, ModSim World Canada, June 2010. Sun Tzu Virtual Intelligence demonstration, MODSIM World, October 2009.
Backpropagation through structure : Backpropagation through structure (BPTS) is a gradient-based technique for training recursive neural networks, proposed in a 1996 paper written by Christoph Goller and Andreas Küchler. == References ==
Autognostics : Autognostics is a new paradigm that describes the capacity for computer networks to be self-aware. It is considered one of the major components of Autonomic Networking.
Autognostics : One of the most important characteristics of today's Internet that has contributed to its success is its basic design principle: a simple and transparent core with intelligence at the edges (the so-called "end-to-end principle"). Based on this principle, the network carries data without knowing the characteristics of that data (e.g., voice, video, etc.) - only the end-points have application-specific knowledge. If something goes wrong with the data, only the edge may be able to recognize that, since it knows about the application and what the expected behavior is. The core has no information about what should happen with that data - it only forwards packets. Although an effective and beneficial attribute, this design principle has also led to many of today's problems, limitations, and frustrations. Currently, it is almost impossible for most end-users to know why certain network-based applications do not work well and what they need to do to make them better. Also, network operators who interact with the core in low-level terms such as router configuration have problems translating their high-level goals into low-level actions. In high-level terms, this may be summarized as a weak coupling between the network and application layers of the overall system. As a consequence of the Internet end-to-end principle, the network performance experienced by a particular application is difficult to attribute to the behavior of the individual elements. At any given moment, the measure of performance between any two points is typically unknown and applications must operate blindly. As a further consequence, changes to the configuration of a given element, or changes in the end-to-end path, cannot easily be validated. Optimization and provisioning cannot then be automated except against only the simplest design specifications. There is an increasing interest in Autonomic Networking research, and a strong conviction that an evolution from the current networking status quo is necessary. Although to date there have not been any practical implementations demonstrating the benefits of an effective autonomic networking paradigm, there seems to be a consensus as to the characteristics which such implementations would need to demonstrate. These specifically include continuous monitoring, identifying, diagnosing and fixing problems based on high-level policies and objectives. Autognostics, as a major part of the autonomic networking concept, intends to bring networks to a new level of awareness and eliminate the lack of visibility which currently exists in today's networks.
Autognostics : Autognostics is a new paradigm that describes the capacity for computer networks to be self-aware, in part and as a whole, and dynamically adapt to the applications running on them by autonomously monitoring, identifying, diagnosing, resolving issues, subsequently verifying that any remediation was successful, and reporting the impact with respect to the application's use (i.e., providing visibility into the changes to networks and their effects). Although similar to the concept of network awareness, i.e., the capability of network devices and applications to be aware of network characteristics (see References section below), it is noteworthy that autognostics takes that concept one step further. The main difference is the auto part of autognostics, which entails that network devices are self-aware of network characteristics, and have the capability to adapt themselves as a result of continuous monitoring and diagnostics.
Autognostics : Autognostics, or in other words deep self-knowledge, can be best described as the ability of a network to know itself and the applications that run on it. This knowledge is used to autonomously adapt to dynamic network and application conditions such as utilization, capacity, quality of service/application/user experience, etc. In order to achieve autognosis, networks need a means to: Continuously monitor/test the network for application-specific performance Analyze the monitoring/test data to detect problems (e.g., performance degradation) Diagnose, identify and localize sources of degradation Automatically take actions to resolve problems via remediation/provisioning Verify the problems have been resolved (potentially rolling back changes if ineffective) Subsequently, continue to monitor/test for performance
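Autognostics : A schematic Python sketch of the monitor/analyze/diagnose/remediate/verify cycle listed above. The `network` and `policy` objects and every method called on them are placeholders invented for illustration; no real network-management API is implied:

```python
import time

def autognostic_loop(network, policy, interval_s=60):
    """Schematic autognostics cycle: monitor, analyze, diagnose, remediate, verify, report."""
    while True:
        metrics = network.measure_performance()          # continuous monitoring/testing
        problems = policy.detect_degradation(metrics)    # analyze against high-level objectives
        for problem in problems:
            cause = network.localize(problem)            # diagnose and localize the source
            change = network.remediate(cause)            # corrective/provisioning action
            if not policy.verify(network.measure_performance(), problem):
                network.rollback(change)                 # roll back if remediation was ineffective
            network.report(problem, change)              # report impact to the application layer
        time.sleep(interval_s)                           # then keep monitoring
```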
Autognostics : Liang Cheng and Ivan Marsic, "Piecewise Network Awareness Service for Wireless/Mobile Pervasive Computing", Mobile Networks and Applications (ACM/Springer MONET), Vol. 7, No. 4, pp. 269–278, 2002. Michael Bednarczyk, Claudia Giuli and Jason Bednarczyk, "Network Awareness: Adopting a Modern Mindset", white paper. Evan Hughes and Anil Somayaji, "Towards Network Awareness". "Network Awareness on Windows Vista".
Autognostics : Autonomic Computing Autonomic Networking Autonomic Systems
Parity learning : Parity learning is a problem in machine learning. An algorithm that solves this problem must find a function ƒ, given some samples (x, ƒ(x)) and the assurance that ƒ computes the parity of bits at some fixed locations. The samples are generated using some distribution over the input. The problem is easy to solve using Gaussian elimination provided that a sufficient number of samples (from a distribution which is not too skewed) are provided to the algorithm.
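Parity learning : A small NumPy sketch of the noise-free case described above: recovering the secret parity locations from samples (x, f(x)) by Gaussian elimination over GF(2). The toy data generation and the specific secret are illustrative assumptions:

```python
import numpy as np

def solve_parity(X, y):
    """Recover s with f(x) = <x, s> mod 2 from noise-free samples, by Gaussian
    elimination over GF(2). Returns s, or None if the system is underdetermined."""
    A = np.concatenate([X % 2, (y % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    n = X.shape[1]
    row, pivots = 0, []
    for col in range(n):
        pivot = next((r for r in range(row, len(A)) if A[r, col]), None)
        if pivot is None:
            continue                            # no pivot in this column
        A[[row, pivot]] = A[[pivot, row]]       # move pivot row into place
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]                  # eliminate over GF(2) (XOR)
        pivots.append(col)
        row += 1
    if len(pivots) < n:
        return None                             # not enough independent samples
    s = np.zeros(n, dtype=np.uint8)
    for r, col in enumerate(pivots):
        s[col] = A[r, -1]
    return s

# Toy usage: secret parity over bits {1, 3} of 5-bit inputs.
rng = np.random.default_rng(0)
secret = np.array([0, 1, 0, 1, 0], dtype=np.uint8)
X = rng.integers(0, 2, size=(40, 5), dtype=np.uint8)
y = (X @ secret) % 2
print(solve_parity(X, y))                       # recovers `secret`
```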
Parity learning : In Learning Parity with Noise (LPN), the samples may contain some error. Instead of samples (x, ƒ(x)), the algorithm is provided with (x, y), where for a random boolean b ∈ {0, 1}, y = \begin{cases} f(x), & b \\ 1 - f(x), & \text{otherwise} \end{cases}. The noisy version of the parity learning problem is conjectured to be hard and is widely used in cryptography.
Parity learning : Learning with errors
Parity learning : Avrim Blum, Adam Kalai, and Hal Wasserman, “Noise-tolerant learning, the parity problem, and the statistical query model,” J. ACM 50, no. 4 (2003): 506–519. Adam Tauman Kalai, Yishay Mansour, and Elad Verbin, “On agnostic boosting and parity learning,” in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 629–638, http://portal.acm.org/citation.cfm?id=1374466. Oded Regev, “On lattices, learning with errors, random linear codes, and cryptography,” in Proceedings of the thirty-seventh annual ACM symposium on Theory of computing (Baltimore, MD, USA: ACM, 2005), 84–93, http://portal.acm.org/citation.cfm?id=1060590.1060603.
Augmented Analytics : Augmented Analytics is an approach to data analytics that employs machine learning and natural language processing to automate analysis processes normally done by a specialist or data scientist. The term was introduced in 2017 by Rita Sallam, Cindi Howson, and Carlie Idoine in a Gartner research paper. Augmented analytics is based on business intelligence and analytics. In the graph extraction step, data from different sources are investigated.
Augmented Analytics : Machine Learning – a systematic computing method that uses algorithms to sift through data to identify relationships, trends, and patterns. It is a process that allows algorithms to dynamically learn from data instead of having a set base of programmed rules. Natural language generation (NLG) – a software capability that takes unstructured data and translates it into readable, plain-English language. Automating Insights – using machine learning algorithms to automate data analysis processes. Natural Language Query – enabling users to query data using business terms that are either typed into a search box or spoken.
Augmented Analytics : Data Democratization is the democratization of data access in order to relieve data congestion and get rid of any sense of data "gatekeepers". This process must be implemented alongside a method for users to make sense of the data. This process is used in hopes of speeding up company decision making and uncovering opportunities hidden in data. There are three aspects to democratising data: Data Parameterisation and Characterisation. Data Decentralisation using an OS of blockchain and DLT technologies, as well as an independently governed secure data exchange to enable trust. Consent Market-driven Data Monetisation. When it comes to connecting assets, there are two features that will accelerate the adoption and usage of data democratisation: decentralized identity management and business data object monetization of data ownership. Decentralized identity management enables multiple individuals and organizations to identify, authenticate, and authorize participants and organizations, enabling them to access services, data or systems across multiple networks, organizations, environments, and use cases. It empowers users and enables a personalized, self-service digital onboarding system so that users can self-authenticate without relying on a central administration function to process their information. Simultaneously, decentralized identity management ensures the user is authorized to perform actions subject to the system’s policies based on their attributes (role, department, organization, etc.) and/or physical location.
Augmented Analytics : Agriculture – Farmers collect data on water use, soil temperature, moisture content and crop growth; augmented analytics can be used to make sense of this data and possibly identify insights that the user can then use to make business decisions. Smart Cities – Many cities across the United States, known as Smart Cities, collect large amounts of data on a daily basis. Augmented analytics can be used to simplify this data in order to increase effectiveness in city management (transportation, natural disasters, etc.). Analytic Dashboards – Augmented analytics has the ability to take large data sets and create highly interactive and informative analytical dashboards that assist in many organizational decisions. Augmented Data Discovery – Using an augmented analytics process can assist organizations in automatically finding, visualizing and narrating potentially important data correlations and trends. Data Preparation – Augmented analytics platforms have the ability to take large amounts of data and organize and "clean" the data in order for it to be usable for future analyses. Business – Businesses collect large amounts of data daily. Some examples of types of data collected in business operations include: sales data, consumer behavior data, and distribution data. An augmented analytics platform provides access to analysis of this data, which could be used in making business decisions.
Artificial general intelligence : Artificial general intelligence (AGI) is a hypothesized type of highly autonomous artificial intelligence (AI) that would match or surpass human capabilities across most or all economically valuable cognitive work. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI. Creating AGI is a primary goal of AI research and of companies such as OpenAI, Google, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect. There is debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. AGI is a common topic in science fiction and futures studies. Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be in too remote a stage to present such a risk.
Artificial general intelligence : AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term "strong AI" for computer programs that will experience sentience or consciousness. In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans. Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution. A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI (comparable to unskilled humans). Regarding the autonomy of AGI and associated risks, they define five levels: tool (fully in human control), consultant, collaborator, expert, and agent (fully autonomous).
Artificial general intelligence : Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches.
Artificial general intelligence : While the development of transformer models like in ChatGPT is considered the most promising path to AGI, whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain. Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.
Artificial general intelligence : AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems. AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society. AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life. Advancements in medicine and healthcare: AGI would improve healthcare by making medical diagnostics faster, cheaper, and more accurate. AI-driven systems can analyse patient data and detect diseases at an early stage. This means patients will get diagnosed quicker and be able to seek medical attention before their medical condition gets worse. AGI systems could also recommend personalised treatment plans based on genetics and medical history. Additionally, AGI could accelerate drug discovery by simulating molecular interactions, reducing the time it takes to develop new medicines for conditions like cancer and Alzheimer's. In hospitals, AGI-powered robotic assistants could assist in surgeries, monitor patients, and provide real-time medical support. It could also be used in elderly care, helping aging populations maintain independence through AI-powered caregivers and health-monitoring systems. By evaluating large datasets, AGI can assist in developing personalised treatment plans tailored to individual patient needs. This approach ensures that therapies are optimised based on a patient's unique medical history and genetic profile, improving outcomes and reducing adverse effects. Advancements in science and technology: AGI can become a tool for scientific research and innovation. In fields such as physics and mathematics, AGI could help solve complex problems that require massive computational power, such as modeling quantum systems, understanding dark matter, or proving mathematical theorems. Problems that have remained unsolved for decades may be solved with AGI. AGI could also drive technological breakthroughs that could reshape society. It can do this by optimising engineering designs, discovering new materials, and improving automation. For example, AI is already playing a role in developing more efficient renewable energy sources and optimising supply chains in manufacturing. Future AGI systems could push these innovations even further. Enhancing education and productivity: AGI can personalize education by creating learning programs that are specific to each student's strengths, weaknesses, and interests.
Unlike traditional teaching methods, AI-driven tutoring systems could adapt lessons in real-time, ensuring students understand difficult concepts before moving on. In the workplace, AGI could automate repetitive tasks, freeing up workers for more creative and strategic roles. It could also improve efficiency across industries by optimising logistics, enhancing cybersecurity, and streamlining business operations. If properly managed, the wealth generated by AGI-driven automation could reduce the need for people to work for a living. Working may become optional. Mitigating global crises: AGI could play a crucial role in preventing and managing global threats. It could help governments and organizations predict and respond to natural disasters more effectively, using real-time data analysis to forecast hurricanes, earthquakes, and pandemics. By analyzing vast datasets from satellites, sensors, and historical records, AGI could improve early warning systems, enabling faster disaster response and minimising casualties. In climate science, AGI could develop new models for reducing carbon emissions, optimising energy resources, and mitigating climate change effects. It could also enhance weather prediction accuracy, allowing policymakers to implement more effective environmental regulations. Additionally, AGI could help regulate emerging technologies that carry significant risks, such as nanotechnology and bioengineering, by analysing complex systems and predicting unintended consequences. Furthermore, AGI could assist in cybersecurity by detecting and mitigating large-scale cyber threats, protecting critical infrastructure, and preventing digital warfare.
Artificial general intelligence : The AGI portal maintained by Pei Wang
Proaftn : Proaftn is a fuzzy classification method that belongs to the class of supervised learning algorithms. The acronym Proaftn stands for PROcédure d'Affectation Floue pour la problématique du Tri Nominal, which means in English: Fuzzy Assignment Procedure for Nominal Sorting. The method determines fuzzy indifference relations by generalizing the indices (concordance and discordance) used in the ELECTRE III method. To determine the fuzzy indifference relations, Proaftn uses the general scheme of the discretization technique described in the literature, which establishes a set of pre-classified cases called a training set. To resolve classification problems, Proaftn proceeds by the following stages: Stage 1. Modeling of classes: In this stage, the prototypes of the classes are conceived using the two following steps: Step 1. Structuring: The prototypes and their parameters (thresholds, weights, etc.) are established using the available knowledge given by the expert. Step 2. Validation: One of the two following techniques is used to validate or adjust the parameters obtained in the first step through the assignment examples known as a training set. Direct technique: It consists of adjusting the parameters through the training set and with the expert's intervention. Indirect technique: It consists of fitting the parameters without the expert's intervention, as is done in machine learning approaches. In multicriteria classification problems, the indirect technique is known as preference disaggregation analysis. This technique requires less cognitive effort than the former; it uses an automatic method to determine the optimal parameters, which minimize the classification errors. Furthermore, several heuristics and metaheuristics have been used to learn the multicriteria classification method Proaftn. Stage 2. Assignment: After conceiving the prototypes, Proaftn proceeds to assign the new objects to specific classes.
Proaftn : Site dedicated to the sorting problematic of MCDA
Multi-task learning : Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately. Inherently, multi-task learning is a multi-objective optimization problem with trade-offs between different tasks. Early versions of MTL were called "hints". In a widely cited 1997 paper, Rich Caruana gave the following characterization: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but not so for Russian speakers. Yet there is a definite commonality in this classification task across users, for example one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification. Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled. However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.
Multi-task learning : The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different tasks agree with each other, or contradict each other. There are several ways to address this challenge:
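Multi-task learning : One common approach is hard parameter sharing: a representation shared by all tasks feeds task-specific output heads. A minimal PyTorch sketch, where the layer sizes, the two-task setup, and the random batches are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared encoder with one output head per task (hard parameter sharing)."""
    def __init__(self, in_dim, hidden_dim, num_tasks):
        super().__init__()
        self.shared = nn.Sequential(            # representation shared by all tasks
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )
        self.heads = nn.ModuleList(             # task-specific parameters
            [nn.Linear(hidden_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.shared(x))

# Toy usage: two related binary tasks (e.g. spam filters for two users).
model = HardSharingMTL(in_dim=20, hidden_dim=32, num_tasks=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for task_id in (0, 1):                          # alternate over tasks each step
    x = torch.randn(8, 20)                      # placeholder batch for this task
    y = torch.randint(0, 2, (8, 1)).float()
    loss = loss_fn(model(x, task_id), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```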
Multi-task learning : A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR) implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning, Multi-Task Learning with Joint Feature Selection, Robust Multi-Task Feature Learning, Trace-Norm Regularized Multi-Task Learning, Alternating Structural Optimization, Incoherent Low-Rank and Sparse Learning, Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning, Multi-Task Learning with Graph Structures.
Multi-task learning : Multi-Target Prediction: A Unifying View on Problems and Methods Willem Waegeman, Krzysztof Dembczynski, Eyke Huellermeier https://arxiv.org/abs/1809.02352v1
Multi-task learning : The Biosignals Intelligence Group at UIUC. Washington University in St. Louis Department of Computer Science.
Data augmentation : Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models on several slightly-modified copies of existing data.
Data augmentation : Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, leading to biased model performance. For example, in a medical diagnosis dataset with 90 samples representing healthy individuals and only 10 samples representing individuals with a particular disease, traditional algorithms may struggle to accurately classify the minority class. SMOTE rebalances the dataset by generating synthetic samples for the minority class. For instance, if there are 100 samples in the majority class and 10 in the minority class, SMOTE can create synthetic samples by randomly selecting a minority class sample and its nearest neighbors, then generating new samples along the line segments joining these neighbors. This process helps increase the representation of the minority class, improving model performance.
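Data augmentation : A minimal NumPy sketch of the interpolation step described above; the neighbour count, toy data, and sample counts are illustrative choices, and in practice a library implementation (e.g. imbalanced-learn's SMOTE) would typically be used instead:

```python
import numpy as np

def smote_sample(minority, k=5, n_new=90, rng=None):
    """Generate synthetic minority samples by interpolating between a randomly
    chosen minority sample and one of its k nearest minority-class neighbours."""
    rng = rng or np.random.default_rng()
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        dists = np.linalg.norm(minority - x, axis=1)     # distances within the minority class
        neighbours = np.argsort(dists)[1:k + 1]          # k nearest, excluding x itself
        x_nn = minority[rng.choice(neighbours)]
        lam = rng.random()                               # random point on the line segment
        synthetic.append(x + lam * (x_nn - x))
    return np.array(synthetic)

# Toy usage: 10 minority samples in 2-D, oversampled to match a 100-sample majority.
rng = np.random.default_rng(0)
minority = rng.normal(loc=2.0, size=(10, 2))
new_samples = smote_sample(minority, k=3, n_new=90, rng=rng)
print(new_samples.shape)   # (90, 2)
```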
Data augmentation : When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to use, especially considering that some part of the overall dataset should be spared for later testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels, which were complemented by so-called elastic distortions in 2003, and the technique was widely used as of the 2010s. Data augmentation can enhance CNN performance and acts as a countermeasure against CNN profiling attacks. Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection.
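Data augmentation : A small example of such label-preserving geometric and color-space augmentations using torchvision; the specific transforms and parameter values are illustrative choices, not those used in the works cited above:

```python
from torchvision import transforms

# Label-preserving perturbations applied on the fly during training.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=10,               # small rotations
                            translate=(0.1, 0.1),     # shifts
                            scale=(0.9, 1.1),         # zoom in/out
                            shear=5),                 # shear (affine transformations)
    transforms.ColorJitter(brightness=0.2,            # color-space adjustments
                           contrast=0.2,
                           saturation=0.2),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),                            # PIL image -> tensor in [0, 1]
])

# Typical usage: pass `augment` as the `transform` of an image dataset, e.g.
# torchvision.datasets.CIFAR10(root="data", train=True, transform=augment).
```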
Data augmentation : Residual or block bootstrap can be used for time series augmentation.
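Data augmentation : A minimal sketch of a moving-block bootstrap for a univariate series; the block length and the toy series are illustrative assumptions:

```python
import numpy as np

def block_bootstrap(series, block_len=20, rng=None):
    """Resample a time series by concatenating randomly chosen contiguous blocks,
    preserving short-range dependence within each block."""
    rng = rng or np.random.default_rng()
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)   # random block start indices
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]                            # trim to original length

# Toy usage: augment a random-walk-like series with 5 bootstrap replicates.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500)) * 0.1
replicates = [block_bootstrap(x, block_len=25, rng=rng) for _ in range(5)]
```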
Data augmentation : Oversampling and undersampling in data analysis Surrogate data Generative adversarial network Variational autoencoder Data pre-processing Convolutional neural network Regularization (mathematics) Data preparation Data fusion
Case-based reasoning : Case-based reasoning (CBR), broadly construed, is the process of solving new problems based on the solutions of similar past problems. In everyday life, an auto mechanic who fixes an engine by recalling another car that exhibited similar symptoms is using case-based reasoning. A lawyer who advocates a particular outcome in a trial based on legal precedents or a judge who creates case law is using case-based reasoning. So, too, an engineer copying working elements of nature (practicing biomimicry) is treating nature as a database of solutions to problems. Case-based reasoning is a prominent type of analogy solution making. It has been argued that case-based reasoning is not only a powerful method for computer reasoning, but also a pervasive behavior in everyday human problem solving; or, more radically, that all reasoning is based on past cases personally experienced. This view is related to prototype theory, which is most deeply explored in cognitive science.
Case-based reasoning : Case-based reasoning has been formalized for purposes of computer reasoning as a four-step process: Retrieve: Given a target problem, retrieve cases relevant to solving it from memory. A case consists of a problem, its solution, and, typically, annotations about how the solution was derived. For example, suppose Fred wants to prepare blueberry pancakes. Being a novice cook, the most relevant experience he can recall is one in which he successfully made plain pancakes. The procedure he followed for making the plain pancakes, together with justifications for decisions made along the way, constitutes Fred's retrieved case. Reuse: Map the solution from the previous case to the target problem. This may involve adapting the solution as needed to fit the new situation. In the pancake example, Fred must adapt his retrieved solution to include the addition of blueberries. Revise: Having mapped the previous solution to the target situation, test the new solution in the real world (or a simulation) and, if necessary, revise. Suppose Fred adapted his pancake solution by adding blueberries to the batter. After mixing, he discovers that the batter has turned blue – an undesired effect. This suggests the following revision: delay the addition of blueberries until after the batter has been ladled into the pan. Retain: After the solution has been successfully adapted to the target problem, store the resulting experience as a new case in memory. Fred, accordingly, records his new-found procedure for making blueberry pancakes, thereby enriching his set of stored experiences, and better preparing him for future pancake-making demands.
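Case-based reasoning : A schematic Python sketch of the four-step cycle described above; the similarity measure, case representation, and adaptation functions are placeholder assumptions supplied by the application, not part of any standard CBR library:

```python
def cbr_solve(target_problem, case_base, similarity, adapt, evaluate_and_revise):
    """One pass of the retrieve / reuse / revise / retain cycle.

    `similarity`, `adapt`, and `evaluate_and_revise` are domain-specific
    functions; a case is a (problem, solution) pair.
    """
    # Retrieve: the stored case most similar to the target problem.
    problem, solution = max(case_base,
                            key=lambda case: similarity(case[0], target_problem))
    # Reuse: map/adapt the retrieved solution to the new situation.
    proposed = adapt(solution, problem, target_problem)
    # Revise: test the proposed solution and correct it if necessary.
    revised = evaluate_and_revise(proposed, target_problem)
    # Retain: store the confirmed experience as a new case.
    case_base.append((target_problem, revised))
    return revised
```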
Case-based reasoning : At first glance, CBR may seem similar to the rule induction algorithms of machine learning. Like a rule-induction algorithm, CBR starts with a set of cases or training examples; it forms generalizations of these examples, albeit implicit ones, by identifying commonalities between a retrieved case and the target problem. If for instance a procedure for plain pancakes is mapped to blueberry pancakes, a decision is made to use the same basic batter and frying method, thus implicitly generalizing the set of situations under which the batter and frying method can be used. The key difference, however, between the implicit generalization in CBR and the generalization in rule induction lies in when the generalization is made. A rule-induction algorithm draws its generalizations from a set of training examples before the target problem is even known; that is, it performs eager generalization. For instance, if a rule-induction algorithm were given recipes for plain pancakes, Dutch apple pancakes, and banana pancakes as its training examples, it would have to derive, at training time, a set of general rules for making all types of pancakes. It would not be until testing time that it would be given, say, the task of cooking blueberry pancakes. The difficulty for the rule-induction algorithm is in anticipating the different directions in which it should attempt to generalize its training examples. This is in contrast to CBR, which delays (implicit) generalization of its cases until testing time – a strategy of lazy generalization. In the pancake example, CBR has already been given the target problem of cooking blueberry pancakes; thus it can generalize its cases exactly as needed to cover this situation. CBR therefore tends to be a good approach for rich, complex domains in which there are myriad ways to generalize a case. In law, there is often explicit delegation of CBR to courts, recognizing the limits of rule-based reasoning: limiting delay, limited knowledge of future context, limits of negotiated agreement, etc. While CBR in law and cognitively inspired CBR have long been associated, the former is more clearly an interpolation of rule-based reasoning and judgment, while the latter is more closely tied to recall and process adaptation. The difference is clear in their attitude toward error and appellate review. Another name for case-based reasoning in problem solving is symptomatic strategies. It does require a priori domain knowledge that is gleaned from past experience, which established connections between symptoms and causes. This knowledge is referred to as shallow, compiled, evidential, history-based, or case-based knowledge. This is the strategy most associated with diagnosis by experts. Diagnosis of a problem transpires as a rapid recognition process in which symptoms evoke appropriate situation categories. An expert knows the cause by virtue of having previously encountered similar cases. Case-based reasoning is the most powerful strategy, and the one used most commonly. However, the strategy won't work independently with truly novel problems, or where deeper understanding of whatever is taking place is sought. An alternative approach to problem solving is the topographic strategy, which falls into the category of deep reasoning. With deep reasoning, in-depth knowledge of a system is used. Topography in this context means a description or an analysis of a structured entity, showing the relations among its elements.
Also known as reasoning from first principles, deep reasoning is applied to novel faults when experience-based approaches aren't viable. The topographic strategy is therefore linked to a priori domain knowledge that is developed from a more fundamental understanding of a system, possibly using first-principles knowledge. Such knowledge is referred to as deep, causal or model-based knowledge. Hoc and Carlier noted that symptomatic approaches may need to be supported by topographic approaches because symptoms can be defined in diverse terms. The converse is also true – shallow reasoning can be used abductively to generate causal hypotheses, and deductively to evaluate those hypotheses, in a topographical search.
Case-based reasoning : Critics of CBR argue that it is an approach that accepts anecdotal evidence as its main operating principle. Without statistically relevant data for backing and implicit generalization, there is no guarantee that the generalization is correct. However, all inductive reasoning where data is too scarce for statistical relevance is inherently based on anecdotal evidence.
Case-based reasoning : CBR traces its roots to the work of Roger Schank and his students at Yale University in the early 1980s. Schank's model of dynamic memory was the basis for the earliest CBR systems: Janet Kolodner's CYRUS and Michael Lebowitz's IPP. Other schools of CBR and closely allied fields emerged in the 1980s, directed at topics such as legal reasoning, memory-based reasoning (a way of reasoning from examples on massively parallel machines), and combinations of CBR with other reasoning methods. In the 1990s, interest in CBR grew internationally, as evidenced by the establishment of an International Conference on Case-Based Reasoning in 1995, as well as European, German, British, Italian, and other CBR workshops. CBR technology has resulted in the deployment of a number of successful systems, the earliest being Lockheed's CLAVIER, a system for laying out composite parts to be baked in an industrial convection oven. CBR has been used extensively in applications such as the Compaq SMART system and has found a major application area in the health sciences, as well as in structural safety management. There is recent work that develops CBR within a statistical framework and formalizes case-based inference as a specific type of probabilistic inference. Thus, it becomes possible to produce case-based predictions equipped with a certain level of confidence. One description of the difference between CBR and induction from instances is that statistical inference aims to find what tends to make cases similar while CBR aims to encode what suffices to claim similarity.
Case-based reasoning : AI alignment Artificial intelligence detection software Abductive reasoning Duck test I know it when I see it Commonsense reasoning Purposeful omission Decision tree Genetic algorithm Pattern matching Analogy K-line (artificial intelligence) Ripple down rules Casuistry Similarity heuristic
Case-based reasoning : Aamodt, Agnar, and Enric Plaza. "Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches" Artificial Intelligence Communications 7, no. 1 (1994): 39–52. Althoff, Klaus-Dieter, Ralph Bergmann, and L. Karl Branting, eds. Case-Based Reasoning Research and Development: Proceedings of the Third International Conference on Case-Based Reasoning. Berlin: Springer Verlag, 1999. Bergmann, Ralph Experience Management: Foundations, Development Methodology, and Internet-Based Applications. Springer, LNAI 2432, 2002. Bergmann, R., Althoff, K.-D., Breen, S., Göker, M., Manago, M., Traphöner, R., and Wess, S. Developing industrial case-based reasoning applications: The INRECA methodology. Springer LNAI 1612, 2003. Kolodner, Janet. Case-Based Reasoning. San Mateo: Morgan Kaufmann, 1993. Leake, David. "CBR in Context: The Present and Future", In Leake, D., editor, Case-Based Reasoning: Experiences, Lessons, and Future Directions. AAAI Press/MIT Press, 1996, 1-30. Leake, David, and Enric Plaza, eds. Case-Based Reasoning Research and Development: Proceedings of the Second International Conference on Case-Based Reasoning. Berlin: Springer Verlag, 1997. Lenz, Mario; Bartsch-Spörl, Brigitte; Burkhard, Hans-Dieter; Wess, Stefan, eds. (1998). Case-Based Reasoning Technology: From Foundations to Applications. Lecture Notes in Artificial Intelligence. Vol. 1400. Springer. doi:10.1007/3-540-69351-3. ISBN 978-3-540-64572-6. S2CID 10517603. Oxman, Rivka. Precedents in Design: a Computational Model for the Organization of Precedent Knowledge, Design Studies, Vol. 15 No. 2 pp. 141–157 Riesbeck, Christopher, and Roger Schank. Inside Case-based Reasoning. Northvale, NJ: Erlbaum, 1989. Veloso, Manuela, and Agnar Aamodt, eds. Case-Based Reasoning Research and Development: Proceedings of the First International Conference on Case-Based Reasoning. Berlin: Springer Verlag, 1995. Watson, Ian. Applying Case-Based Reasoning: Techniques for Enterprise Systems. San Francisco: Morgan Kaufmann, 1997.
Case-based reasoning : GAIA – Group of Artificial Intelligence Applications
Machine-learned interatomic potential : Machine-learned interatomic potentials (MLIPs), or simply machine learning potentials (MLPs), are interatomic potentials constructed by machine learning programs. Beginning in the 1990s, researchers have employed such programs to construct interatomic potentials by mapping atomic structures to their potential energies. Such machine learning potentials promised to fill the gap between density functional theory, a highly accurate but computationally intensive modelling method, and empirically derived or intuitively-approximated potentials, which were far lighter computationally but substantially less accurate. Improvements in artificial intelligence technology have heightened the accuracy of MLPs while lowering their computational cost, increasing the role of machine learning in fitting potentials. Machine learning potentials began by using neural networks to tackle low-dimensional systems. While promising, these models could not systematically account for interatomic energy interactions; they could be applied to small molecules in a vacuum, or molecules interacting with frozen surfaces, but not much else – and even in these applications, the models often relied on force fields or potentials derived empirically or with simulations. These models thus remained confined to academia. Modern neural networks construct highly accurate and computationally light potentials, because theoretical understanding of materials science has increasingly been built into their architectures and preprocessing. Almost all are local, accounting for all interactions between an atom and its neighbors up to some cutoff radius. There exist some nonlocal models, but these have been experimental for almost a decade. For most systems, reasonable cutoff radii enable highly accurate results. Almost all neural networks take atomic coordinates as input and output potential energies. For some, these atomic coordinates are converted into atom-centered symmetry functions. From this data, a separate atomic neural network is trained for each element; each atomic network is evaluated whenever that element occurs in the given structure, and the results are then pooled together at the end. This process – in particular, the atom-centered symmetry functions, which convey translational, rotational, and permutational invariances – has greatly improved machine learning potentials by significantly constraining the neural network search space. Other models use a similar process but emphasize bonds over atoms, using pair symmetry functions and training one network per atom pair. Still other models learn their own descriptors rather than using predetermined symmetry-dictating functions. These models, called message-passing neural networks (MPNNs), are graph neural networks. Treating molecules as three-dimensional graphs (where atoms are nodes and bonds are edges), the model takes feature vectors describing the atoms as input, and iteratively updates these vectors as information about neighboring atoms is processed through message functions and convolutions. These feature vectors are then used to predict the final potentials. The flexibility of this method often results in stronger, more generalizable models. In 2017, the first-ever MPNN model (a deep tensor neural network) was used to calculate the properties of small organic molecules.
Such technology was commercialized, leading to the development of Matlantis in 2022, which extracts properties through both the forward and backward passes.
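The atom-centered pipeline described above (coordinates, symmetry-function descriptors, one small network per element, pooled total energy) can be sketched roughly as follows. This is a toy illustration under assumed functional forms: the cutoff function, the single radial symmetry function, the tiny network sizes, and the random weights are placeholders, not any published architecture.

import numpy as np

def cutoff(r, rc=5.0):
    """Smooth cutoff so contributions vanish beyond the radius rc."""
    return np.where(r < rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def radial_symmetry(positions, i, eta=0.5, rs=1.0, rc=5.0):
    """One radial, atom-centered symmetry-function descriptor for atom i (illustrative)."""
    rij = np.linalg.norm(positions - positions[i], axis=1)
    rij = rij[rij > 1e-12]                      # exclude the atom itself
    return np.sum(np.exp(-eta * (rij - rs) ** 2) * cutoff(rij, rc))

def atomic_energy(descriptor, w1, b1, w2, b2):
    """Tiny per-element network: descriptor -> atomic energy contribution."""
    h = np.tanh(w1 * descriptor + b1)
    return float(w2 @ h + b2)

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 4.0, size=(6, 3))   # hypothetical atomic coordinates
elements = ["H", "O", "H", "O", "H", "H"]

# One (randomly initialised) network per element; in practice these are trained on reference energies.
params = {el: (rng.normal(size=8), rng.normal(size=8),
               rng.normal(size=8), rng.normal()) for el in set(elements)}

total_energy = sum(
    atomic_energy(radial_symmetry(positions, i), *params[el])
    for i, el in enumerate(elements)
)
print(total_energy)

The pooled sum over per-atom contributions is what makes the descriptor permutationally invariant; the distance-only descriptor supplies the translational and rotational invariance mentioned above.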
Machine-learned interatomic potential : One popular class of machine-learned interatomic potential is the Gaussian Approximation Potential (GAP), which combines compact descriptors of local atomic environments with Gaussian process regression to learn the potential energy surface of a given system. To date, the GAP framework has been used to successfully develop a number of MLIPs for various systems, including elemental systems such as carbon, silicon, phosphorus, and tungsten, as well as multicomponent systems such as Ge2Sb2Te5 and austenitic stainless steel (Fe7Cr2Ni).
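The GAP recipe of pairing local-environment descriptors with Gaussian process regression can be imitated with a general-purpose library, as in the hedged sketch below. The descriptors and energies are synthetic placeholders, and scikit-learn's generic RBF kernel stands in for the specialised descriptors and kernels (such as SOAP) used in actual GAP potentials.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Placeholder "descriptors" of local atomic environments and synthetic energies.
X_train = rng.normal(size=(50, 8))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=50)

# Gaussian process regression over descriptor space, in the spirit of the GAP framework.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = rng.normal(size=(5, 8))
energies, std = gp.predict(X_new, return_std=True)   # predictions with uncertainty
print(energies, std)

The returned standard deviation hints at one attraction of GP-based potentials: predictions come with an uncertainty estimate that can flag unfamiliar atomic environments.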
Boltzmann machine : A Boltzmann machine (also called Sherrington–Kirkpatrick model with external field or stochastic Ising model), named after Ludwig Boltzmann, is a spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model, that is a stochastic Ising model. It is a statistical physics technique applied in the context of cognitive science. It is also classified as a Markov random field. Boltzmann machines are theoretically intriguing because of the locality and Hebbian nature of their training algorithm (being trained by Hebb's rule), and because of their parallelism and the resemblance of their dynamics to simple physical processes. Boltzmann machines with unconstrained connectivity have not been proven useful for practical problems in machine learning or inference, but if the connectivity is properly constrained, the learning can be made efficient enough to be useful for practical problems. They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. They were heavily popularized and promoted by Geoffrey Hinton, Terry Sejnowski and Yann LeCun in cognitive sciences communities, particularly in machine learning, as part of "energy-based models" (EBM), because Hamiltonians of spin glasses as energy are used as a starting point to define the learning task.
Boltzmann machine : A Boltzmann machine, like a Sherrington–Kirkpatrick model, is a network of units with a total "energy" (Hamiltonian) defined for the overall network. Its units produce binary results. Boltzmann machine weights are stochastic. The global energy $E$ in a Boltzmann machine is identical in form to that of Hopfield networks and Ising models: $E = -\left(\sum_{i<j} w_{ij}\, s_i\, s_j + \sum_i \theta_i\, s_i\right)$, where: $w_{ij}$ is the connection strength between unit $j$ and unit $i$; $s_i \in \{0, 1\}$ is the state of unit $i$; and $\theta_i$ is the bias of unit $i$ in the global energy function ($-\theta_i$ is the activation threshold for the unit). Often the weights $w_{ij}$ are represented as a symmetric matrix $W = [w_{ij}]$ with zeros along the diagonal.
Boltzmann machine : The difference in the global energy that results from a single unit $i$ equaling 0 (off) versus 1 (on), written $\Delta E_i$, assuming a symmetric matrix of weights, is given by: $\Delta E_i = \sum_{j>i} w_{ij}\, s_j + \sum_{j<i} w_{ji}\, s_j + \theta_i$. This can be expressed as the difference of energies of two states: $\Delta E_i = E_{i=\text{off}} - E_{i=\text{on}}$. Substituting the energy of each state with its relative probability according to the Boltzmann factor (the property of a Boltzmann distribution that the energy of a state is proportional to the negative log probability of that state) yields: $\Delta E_i = -k_B T \ln(p_{i=\text{off}}) - \left(-k_B T \ln(p_{i=\text{on}})\right)$, where $k_B$ is the Boltzmann constant and is absorbed into the artificial notion of temperature $T$. Noting that the probabilities of the unit being on or off sum to 1 allows for the simplification: $-\frac{\Delta E_i}{k_B T} = -\ln(p_{i=\text{on}}) + \ln(p_{i=\text{off}}) = \ln\left(\frac{1 - p_{i=\text{on}}}{p_{i=\text{on}}}\right) = \ln\left(p_{i=\text{on}}^{-1} - 1\right)$, whence the probability that the $i$-th unit is on is given by $p_{i=\text{on}} = \frac{1}{1 + \exp\left(-\frac{\Delta E_i}{k_B T}\right)}$, where the scalar $T$ is referred to as the temperature of the system. This relation is the source of the logistic function found in probability expressions in variants of the Boltzmann machine.
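A minimal sketch of the energy function and the resulting stochastic update rule is given below; the weights, biases, temperature, and network size are arbitrary placeholder values.

import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(scale=0.5, size=(n, n))
W = np.triu(W, 1); W = W + W.T            # symmetric weights, zero diagonal
theta = rng.normal(scale=0.1, size=n)     # biases
s = rng.integers(0, 2, size=n)            # binary unit states
T = 1.0                                   # temperature (k_B absorbed)

def energy(s):
    # E = -( sum_{i<j} w_ij s_i s_j + sum_i theta_i s_i )
    return -(0.5 * s @ W @ s + theta @ s)

def update_unit(s, i):
    # Delta E_i = sum_j w_ij s_j + theta_i ;  p(s_i = on) = 1 / (1 + exp(-Delta E_i / T))
    delta_E = W[i] @ s + theta[i]
    p_on = 1.0 / (1.0 + np.exp(-delta_E / T))
    s[i] = 1 if rng.random() < p_on else 0

for _ in range(1000):                     # repeatedly pick a unit and resample its state
    update_unit(s, rng.integers(n))
print(s, energy(s))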
Boltzmann machine : The network runs by repeatedly choosing a unit and resetting its state. After running for long enough at a certain temperature, the probability of a global state of the network depends only upon that global state's energy, according to a Boltzmann distribution, and not on the initial state from which the process was started. This means that log-probabilities of global states become linear in their energies. This relationship is true when the machine is "at thermal equilibrium", meaning that the probability distribution of global states has converged. The network is run beginning from a high temperature, and its temperature is gradually decreased until it reaches thermal equilibrium at a lower temperature. It then may converge to a distribution where the energy level fluctuates around the global minimum. This process is called simulated annealing. To train the network so that it converges to global states according to an external distribution over those states, the weights must be set so that the global states with the highest probabilities receive the lowest energies. This is done by training.
Boltzmann machine : The units in the Boltzmann machine are divided into 'visible' units, V, and 'hidden' units, H. The visible units are those that receive information from the 'environment', i.e. the training set is a set of binary vectors over the set V. The distribution over the training set is denoted $P^{+}(V)$. The distribution over global states converges as the Boltzmann machine reaches thermal equilibrium. We denote this distribution, after we marginalize it over the hidden units, as $P^{-}(V)$. Our goal is to approximate the "real" distribution $P^{+}(V)$ using the $P^{-}(V)$ produced by the machine. The similarity of the two distributions is measured by the Kullback–Leibler divergence, $G$: $G = \sum_{v} P^{+}(v) \ln\left(\frac{P^{+}(v)}{P^{-}(v)}\right)$, where the sum is over all the possible states of $V$. $G$ is a function of the weights, since they determine the energy of a state, and the energy determines $P^{-}(v)$, as promised by the Boltzmann distribution. A gradient descent algorithm over $G$ changes a given weight, $w_{ij}$, by subtracting the partial derivative of $G$ with respect to the weight. Boltzmann machine training involves two alternating phases. One is the "positive" phase where the visible units' states are clamped to a particular binary state vector sampled from the training set (according to $P^{+}$). The other is the "negative" phase where the network is allowed to run freely, i.e. only the input nodes have their state determined by external data, but the output nodes are allowed to float. The gradient with respect to a given weight, $w_{ij}$, is given by the equation: $\frac{\partial G}{\partial w_{ij}} = -\frac{1}{R}\left[p_{ij}^{+} - p_{ij}^{-}\right]$, where: $p_{ij}^{+}$ is the probability that units $i$ and $j$ are both on when the machine is at equilibrium in the positive phase; $p_{ij}^{-}$ is the probability that units $i$ and $j$ are both on when the machine is at equilibrium in the negative phase; and $R$ denotes the learning rate. This result follows from the fact that at thermal equilibrium the probability $P^{-}(s)$ of any global state $s$ when the network is free-running is given by the Boltzmann distribution. This learning rule is biologically plausible because the only information needed to change the weights is provided by "local" information. That is, the connection (synapse, biologically) does not need information about anything other than the two neurons it connects. This is more biologically realistic than the information needed by a connection in many other neural network training algorithms, such as backpropagation. The training of a Boltzmann machine does not use the EM algorithm, which is heavily used in machine learning. By minimizing the KL-divergence, it is equivalent to maximizing the log-likelihood of the data. Therefore, the training procedure performs gradient ascent on the log-likelihood of the observed data. This is in contrast to the EM algorithm, where the posterior distribution of the hidden nodes must be calculated before the maximization of the expected value of the complete data likelihood during the M-step. Training the biases is similar, but uses only single node activity: $\frac{\partial G}{\partial \theta_i} = -\frac{1}{R}\left[p_i^{+} - p_i^{-}\right]$.
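The two-phase learning rule can be sketched as follows. The sampling is deliberately crude (short Gibbs chains, a random toy dataset, assumed learning rate and temperature); the point is only to show how the clamped-phase and free-phase correlation statistics enter the weight and bias updates.

import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 4, 3
n = n_vis + n_hid
W = np.zeros((n, n))                          # symmetric weights, zero diagonal
theta = np.zeros(n)
data = rng.integers(0, 2, size=(20, n_vis))   # toy training vectors standing in for P+(V)
lr, T = 0.05, 1.0

def gibbs_step(s, free_units):
    for i in free_units:
        delta_E = W[i] @ s + theta[i]
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-delta_E / T)) else 0

def correlations(clamped):
    """Average <s_i s_j> over short Gibbs chains (positive or negative phase)."""
    acc = np.zeros((n, n))
    for v in data:
        s = rng.integers(0, 2, size=n)
        if clamped:
            s[:n_vis] = v                     # positive phase: clamp visibles to a training vector
            free = range(n_vis, n)
        else:
            free = range(n)                   # negative phase: every unit floats
        for _ in range(20):
            gibbs_step(s, free)
        acc += np.outer(s, s)
    return acc / len(data)

for epoch in range(50):
    p_plus, p_minus = correlations(True), correlations(False)
    dW = lr * (p_plus - p_minus)              # gradient ascent on the log-likelihood
    np.fill_diagonal(dW, 0.0)
    W += dW
    theta += lr * (np.diag(p_plus) - np.diag(p_minus))   # bias update from single-unit activities
print(W.round(2))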
Boltzmann machine : Theoretically the Boltzmann machine is a rather general computational medium. For instance, if trained on photographs, the machine would theoretically model the distribution of photographs, and could use that model to, for example, complete a partial photograph. Unfortunately, Boltzmann machines experience a serious practical problem: learning appears to stop working correctly when the machine is scaled up to anything larger than a trivial size. This is due to two important effects. First, the time required to collect equilibrium statistics grows exponentially with the machine's size and with the magnitude of the connection strengths. Second, connection strengths are more plastic when the connected units have activation probabilities intermediate between zero and one, leading to a so-called variance trap. The net effect is that noise causes the connection strengths to follow a random walk until the activities saturate.
Boltzmann machine : The Boltzmann machine is based on the Sherrington–Kirkpatrick spin glass model by David Sherrington and Scott Kirkpatrick. The seminal publication by John Hopfield (1982) applied methods of statistical mechanics, mainly the recently developed (1970s) theory of spin glasses, to study associative memory (later named the "Hopfield network"). The original contribution in applying such energy-based models in cognitive science appeared in papers by Geoffrey Hinton and Terry Sejnowski. In a 1995 interview, Hinton stated that in February or March 1983 he was going to give a talk on simulated annealing in Hopfield networks, so he had to design a learning algorithm for the talk, resulting in the Boltzmann machine learning algorithm. The idea of applying the Ising model with annealed Gibbs sampling was used in Douglas Hofstadter's Copycat project (1984). The explicit analogy drawn with statistical mechanics in the Boltzmann machine formulation led to the use of terminology borrowed from physics (e.g., "energy"), which became standard in the field. The widespread adoption of this terminology may have been encouraged by the fact that its use led to the adoption of a variety of concepts and methods from statistical mechanics. The various proposals to use simulated annealing for inference were apparently independent. Similar ideas (with a change of sign in the energy function) are found in Paul Smolensky's "Harmony Theory". Ising models can be generalized to Markov random fields, which find widespread application in linguistics, robotics, computer vision and artificial intelligence. In 2024, Hopfield and Hinton were awarded the Nobel Prize in Physics for their foundational contributions to machine learning, such as the Boltzmann machine.
Boltzmann machine : Restricted Boltzmann machine Helmholtz machine Markov random field (MRF) Ising model (Lenz–Ising model) Hopfield network
Boltzmann machine : Hinton, G. E.; Sejnowski, T. J. (1986). D. E. Rumelhart; J. L. McClelland (eds.). "Learning and Relearning in Boltzmann Machines" (PDF). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations: 282–317. Archived from the original (PDF) on 2010-07-05. Hinton, G. E. (2002). "Training Products of Experts by Minimizing Contrastive Divergence" (PDF). Neural Computation. 14 (8): 1771–1800. CiteSeerX 10.1.1.35.8613. doi:10.1162/089976602760128018. PMID 12180402. S2CID 207596505. Hinton, G. E.; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets" (PDF). Neural Computation. 18 (7): 1527–1554. CiteSeerX 10.1.1.76.1541. doi:10.1162/neco.2006.18.7.1527. PMID 16764513. S2CID 2309950. Kothari P (2020): https://www.forbes.com/sites/tomtaulli/2020/02/02/coronavirus-can-ai-artificial-intelligence-make-a-difference/?sh=1eca51e55817 Montufar, Guido (2018). "Restricted Boltzmann Machines: Introduction and Review" (PDF). MPI MiS (Preprint). Retrieved 1 August 2023.
Boltzmann machine : Scholarpedia article by Hinton about Boltzmann machines Talk at Google by Geoffrey Hinton
Probabilistic neural network : A probabilistic neural network (PNN) is a feedforward neural network, which is widely used in classification and pattern recognition problems. In the PNN algorithm, the parent probability distribution function (PDF) of each class is approximated by a Parzen window and a non-parametric function. Then, using the PDF of each class, the class probability of new input data is estimated, and Bayes' rule is employed to allocate the class with the highest posterior probability to the new input. By this method, the probability of misclassification is minimized. This type of artificial neural network (ANN) was derived from the Bayesian network and a statistical algorithm called Kernel Fisher discriminant analysis. It was introduced by D.F. Specht in 1966. In a PNN, the operations are organized into a multilayered feedforward network with four layers: input layer, pattern layer, summation layer, and output layer.
Probabilistic neural network : PNNs are often used in classification problems. When an input is presented, the first layer computes the distance from the input vector to each of the training input vectors. This produces a vector whose elements indicate how close the input is to each training input. The second layer sums the contributions for each class of inputs and produces its net output as a vector of probabilities. Finally, a compete transfer function on the output of the second layer picks the maximum of these probabilities, and produces a 1 (positive identification) for that class and a 0 (negative identification) for the non-targeted classes.
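The four layers map onto a few lines of code: squared distances for the pattern layer, a Gaussian Parzen kernel, per-class averaging for the summation layer, and an argmax for the output layer. The training points and smoothing parameter below are arbitrary placeholders.

import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    """Classify x with a probabilistic neural network (Parzen-window estimate)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)          # pattern layer: distances to training vectors
    k = np.exp(-d2 / (2.0 * sigma ** 2))             # Gaussian kernel activations
    classes = np.unique(y_train)
    scores = np.array([k[y_train == c].mean() for c in classes])  # summation layer: per-class density
    probs = scores / scores.sum()                    # estimated class probabilities
    return classes[np.argmax(probs)], probs          # output layer: winner takes all

# Toy two-class problem with hypothetical training points.
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.1, 0.2]), X_train, y_train))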
Probabilistic neural network : There are several advantages and disadvantages to using a PNN instead of a multilayer perceptron. PNNs are much faster to train than multilayer perceptron networks. PNNs can be more accurate than multilayer perceptron networks. PNNs are relatively insensitive to outliers. PNNs generate accurate predicted target probability scores. PNNs approach Bayes optimal classification.
Probabilistic neural network : PNNs are slower than multilayer perceptron networks at classifying new cases. PNNs require more memory space to store the model.
Probabilistic neural network : Reported applications of probabilistic neural networks include: modelling structural deterioration of stormwater pipes; diagnosis of gastric endoscope samples based on FTIR spectroscopy; population pharmacokinetics; class prediction of leukemia and embryonal tumors of the central nervous system; ship identification; sensor configuration management in wireless ad hoc networks; character recognition; and remote-sensing image classification.
Radial basis function : In mathematics, a radial basis function (RBF) is a real-valued function $\varphi$ whose value depends only on the distance between the input and some fixed point: either the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$, or some other fixed point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x} - \mathbf{c}\|)$. Any function $\varphi$ that satisfies the property $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$ is a radial function. The distance is usually Euclidean distance, although other metrics are sometimes used. They are often used as a collection $\{\varphi_k\}_k$ which forms a basis for some function space of interest, hence the name. Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977. RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.
Radial basis function : A radial function is a function $\varphi : [0, \infty) \to \mathbb{R}$. When paired with a norm $\|\cdot\| : V \to [0, \infty)$ on a vector space, a function of the form $\varphi_{\mathbf{c}} = \varphi(\|\mathbf{x} - \mathbf{c}\|)$ is said to be a radial kernel centered at $\mathbf{c} \in V$. A radial function and the associated radial kernels are said to be radial basis functions if, for any finite set of nodes $\{\mathbf{x}_k\}_{k=1}^{n} \subseteq V$, all of the following conditions are true:
Radial basis function : Radial basis functions are typically used to build up function approximations of the form $y(\mathbf{x}) = \sum_{i=1}^{N} w_i\, \varphi(\|\mathbf{x} - \mathbf{x}_i\|)$, where the approximating function $y(\mathbf{x})$ is represented as a sum of $N$ radial basis functions, each associated with a different center $\mathbf{x}_i$ and weighted by an appropriate coefficient $w_i$. The weights $w_i$ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights $w_i$. Approximation schemes of this kind have been particularly used in time series prediction and control of nonlinear systems exhibiting sufficiently simple chaotic behaviour and 3D reconstruction in computer graphics (for example, hierarchical RBF and Pose Space Deformation).
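A minimal sketch of this scheme, using a Gaussian radial basis function and a linear least-squares solve for the weights; the centers, shape parameter, and target function are arbitrary choices for illustration.

import numpy as np

def phi(r, eps=2.0):
    return np.exp(-(eps * r) ** 2)        # Gaussian radial basis function

# Centers x_i and target values of a function to approximate (illustrative).
x_centers = np.linspace(0.0, 1.0, 15)
y_targets = np.sin(2 * np.pi * x_centers)

# Interpolation matrix A_ij = phi(|x_i - x_j|); weights from linear least squares.
A = phi(np.abs(x_centers[:, None] - x_centers[None, :]))
w, *_ = np.linalg.lstsq(A, y_targets, rcond=None)

def y_approx(x):
    return phi(np.abs(x - x_centers)) @ w   # y(x) = sum_i w_i phi(|x - x_i|)

print(y_approx(0.37), np.sin(2 * np.pi * 0.37))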
Radial basis function : The sum can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if a sufficiently large number $N$ of radial basis functions is used. The approximant $y(\mathbf{x})$ is differentiable with respect to the weights $w_i$. The weights could thus be learned using any of the standard iterative methods for neural networks. Using radial basis functions in this manner yields a reasonable interpolation approach provided that the fitting set has been chosen such that it covers the entire range systematically (equidistant data points are ideal). However, without a polynomial term that is orthogonal to the radial basis functions, estimates outside the fitting set tend to perform poorly.
Radial basis function : Radial basis functions are used to approximate functions and so can be used to discretize and numerically solve partial differential equations (PDEs). This was first done in 1990 by E. J. Kansa, who developed the first RBF-based numerical method. It is called the Kansa method and was used to solve the elliptic Poisson equation and the linear advection–diffusion equation. The function values at points $\mathbf{x}$ in the domain are approximated by a linear combination of RBFs, $u(\mathbf{x}) \approx \sum_{i=1}^{N} \lambda_i\, \varphi(\|\mathbf{x} - \mathbf{x}_i\|)$, and derivatives are approximated by applying the differential operator to the RBFs, $\mathcal{L}u(\mathbf{x}) \approx \sum_{i=1}^{N} \lambda_i\, \mathcal{L}\varphi(\|\mathbf{x} - \mathbf{x}_i\|)$, where $N$ is the number of points in the discretized domain, $d$ the dimension of the domain (so $\mathbf{x} \in \mathbb{R}^d$), and $\lambda_i$ the scalar coefficients, which are unchanged by the differential operator. Different numerical methods based on radial basis functions were developed thereafter, including the RBF-FD method, the RBF-QR method, and the RBF-PUM method.
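As a rough sketch of the collocation idea behind the Kansa method, the fragment below solves a one-dimensional Poisson problem u'' = f with multiquadric RBFs. The node layout, shape parameter, and right-hand side are arbitrary, and no attempt is made to handle the conditioning issues that practical RBF-PDE solvers must address.

import numpy as np

c = 0.2                                          # multiquadric shape parameter (arbitrary)

def phi(r):
    return np.sqrt(r**2 + c**2)                  # multiquadric RBF

def phi_xx(r):
    return c**2 / (r**2 + c**2)**1.5             # d^2/dx^2 of phi(|x - x_j|)

x = np.linspace(0.0, 1.0, 25)                    # collocation nodes (also the RBF centers)
f = lambda t: -np.pi**2 * np.sin(np.pi * t)      # right-hand side of u'' = f; exact u = sin(pi t)
interior = (x > 0.0) & (x < 1.0)

R = x[:, None] - x[None, :]                      # pairwise offsets x_i - x_j
A = np.where(interior[:, None], phi_xx(R), phi(np.abs(R)))   # PDE rows at interior nodes, boundary rows elsewhere
b = np.where(interior, f(x), 0.0)                # f at interior nodes, u = 0 on the boundary

lam, *_ = np.linalg.lstsq(A, b, rcond=None)      # scalar coefficients lambda_j

def u(t):
    return phi(np.abs(t - x)) @ lam              # u(t) = sum_j lambda_j phi(|t - x_j|)

print(u(0.5), np.sin(np.pi * 0.5))               # compare with the exact solution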
Radial basis function : Matérn covariance function Radial basis function interpolation Kansa method
Radial basis function : Hardy, R.L. (1971). "Multiquadric equations of topography and other irregular surfaces". Journal of Geophysical Research. 76 (8): 1905–1915. Bibcode:1971JGR....76.1905H. doi:10.1029/jb076i008p01905. Hardy, R.L. (1990). "Theory and applications of the multiquadric-biharmonic method, 20 years of discovery, 1968–1988". Comp. Math Applic. 19 (8/9): 163–208. doi:10.1016/0898-1221(90)90272-l. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 3.7.1. Radial Basis Function Interpolation", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8. Sirayanone, S. (1988). Comparative studies of kriging, multiquadric-biharmonic, and other methods for solving mineral resource problems. PhD dissertation, Dept. of Earth Sciences, Iowa State University, Ames, Iowa. Sirayanone, S.; Hardy, R.L. (1995). "The Multiquadric-biharmonic Method as Used for Mineral Resources, Meteorological, and Other Applications". Journal of Applied Sciences and Computations. 1: 437–475.
Linear predictor function : In statistics and in machine learning, a linear predictor function is a linear function (linear combination) of a set of coefficients and explanatory variables (independent variables), whose value is used to predict the outcome of a dependent variable. This sort of function usually arises in linear regression, where the coefficients are called regression coefficients. However, such functions also occur in various types of linear classifiers (e.g. logistic regression, perceptrons, support vector machines, and linear discriminant analysis), as well as in various other models, such as principal component analysis and factor analysis. In many of these models, the coefficients are referred to as "weights".
Linear predictor function : The basic form of a linear predictor function $f(i)$ for data point $i$ (consisting of $p$ explanatory variables), for $i = 1, \ldots, n$, is $f(i) = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip}$, where $x_{ik}$, for $k = 1, \ldots, p$, is the value of the $k$-th explanatory variable for data point $i$, and $\beta_0, \ldots, \beta_p$ are the coefficients (regression coefficients, weights, etc.) indicating the relative effect of a particular explanatory variable on the outcome.
Linear predictor function : An example of the usage of a linear predictor function is in linear regression, where each data point is associated with a continuous outcome $y_i$, and the relationship is written $y_i = f(i) + \varepsilon_i = \boldsymbol{\beta}^{\mathsf{T}} \mathbf{x}_i + \varepsilon_i$, where $\varepsilon_i$ is a disturbance term or error variable, an unobserved random variable that adds noise to the linear relationship between the dependent variable and the predictor function.
Linear predictor function : In some models (standard linear regression, in particular), the equations for each of the data points $i = 1, \ldots, n$ are stacked together and written in vector form as $\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}$, where $\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad \mathbf{X} = \begin{pmatrix} \mathbf{x}'_1 \\ \mathbf{x}'_2 \\ \vdots \\ \mathbf{x}'_n \end{pmatrix} = \begin{pmatrix} x_{11} & \cdots & x_{1p} \\ x_{21} & \cdots & x_{2p} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{np} \end{pmatrix}, \quad \boldsymbol{\beta} = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_p \end{pmatrix}, \quad \boldsymbol{\varepsilon} = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.$ The matrix $\mathbf{X}$ is known as the design matrix and encodes all known information about the independent variables. The variables $\varepsilon_i$ are random variables, which in standard linear regression are distributed according to a standard normal distribution; they express the influence of any unknown factors on the outcome. This makes it possible to find optimal coefficients through the method of least squares using simple matrix operations. In particular, the optimal coefficients $\hat{\boldsymbol{\beta}}$ as estimated by least squares can be written as follows: $\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}\mathbf{y}$. The matrix $(\mathbf{X}^{\mathsf{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathsf{T}}$ is known as the Moore–Penrose pseudoinverse of $\mathbf{X}$. The use of the matrix inverse in this formula requires that $\mathbf{X}$ is of full rank, i.e. there is not perfect multicollinearity among different explanatory variables (i.e. no explanatory variable can be perfectly predicted from the others). In such cases, the singular value decomposition can be used to compute the pseudoinverse.
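The stacked form and the least-squares solution translate directly into code. The data below are synthetic, and a pseudoinverse is used rather than an explicit matrix inverse, in line with the remark above about rank deficiency.

import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # design matrix with an intercept column
beta_true = np.array([1.0, 2.0, -0.5, 0.3])
y = X @ beta_true + 0.1 * rng.normal(size=n)                  # y = X beta + epsilon

beta_hat = np.linalg.pinv(X) @ y          # Moore-Penrose pseudoinverse, robust to rank deficiency
# Equivalent when X has full rank: beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)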
Linear predictor function : When a fixed set of nonlinear functions are used to transform the value(s) of a data point, these functions are known as basis functions. An example is polynomial regression, which uses a linear predictor function to fit an arbitrary degree polynomial relationship (up to a given order) between two sets of data points (i.e. a single real-valued explanatory variable and a related real-valued dependent variable), by adding multiple explanatory variables corresponding to various powers of the existing explanatory variable. Mathematically, the form looks like this: $y_i = \beta_0 + \beta_1 x_i + \beta_2 x_i^2 + \cdots + \beta_p x_i^p$. In this case, for each data point $i$, a set of explanatory variables is created as follows: $(x_{i1} = x_i, \; x_{i2} = x_i^2, \; \ldots, \; x_{ip} = x_i^p)$ and then standard linear regression is run. The basis functions in this example would be $\boldsymbol{\phi}(x) = (\phi_1(x), \phi_2(x), \ldots, \phi_p(x)) = (x, x^2, \ldots, x^p)$. This example shows that a linear predictor function can actually be much more powerful than it first appears: it only really needs to be linear in the coefficients. All sorts of non-linear functions of the explanatory variables can be fit by the model. There is no particular need for the inputs to basis functions to be univariate or single-dimensional (or their outputs, for that matter, although in such a case, a K-dimensional output value is likely to be treated as K separate scalar-output basis functions). An example of this is radial basis functions (RBFs), which compute some transformed version of the distance to some fixed point: $\phi(\mathbf{x}; \mathbf{c}) = \phi(\|\mathbf{x} - \mathbf{c}\|) = \phi\left(\sqrt{(x_1 - c_1)^2 + \ldots + (x_K - c_K)^2}\right)$. An example is the Gaussian RBF, which has the same functional form as the normal distribution: $\phi(\mathbf{x}; \mathbf{c}) = e^{-b\|\mathbf{x} - \mathbf{c}\|^2}$, which drops off rapidly as the distance from $\mathbf{c}$ increases. A possible usage of RBFs is to create one for every observed data point. This means that the result of an RBF applied to a new data point will be close to 0 unless the new point is near to the point around which the RBF was applied. That is, the application of the radial basis functions will pick out the nearest point, and its regression coefficient will dominate. The result will be a form of nearest neighbor interpolation, where predictions are made by simply using the prediction of the nearest observed data point, possibly interpolating between multiple nearby data points when they are all similar distances away. This type of nearest neighbor method for prediction is often considered diametrically opposed to the type of prediction used in standard linear regression: but in fact, the transformations that can be applied to the explanatory variables in a linear predictor function are so powerful that even the nearest neighbor method can be implemented as a type of linear regression. It is even possible to fit some functions that appear non-linear in the coefficients by transforming the coefficients into new coefficients that do appear linear.
For example, a function of the form $a + b^2 x_{i1} + c x_{i2}$ for coefficients $a, b, c$ could be transformed into the appropriate linear function by applying the substitutions $b' = b^2$, $c' = c$, leading to $a + b' x_{i1} + c' x_{i2}$, which is linear. Linear regression and similar techniques could be applied and will often still find the optimal coefficients, but their error estimates and such will be wrong. The explanatory variables may be of any type: real-valued, binary, categorical, etc. The main distinction is between continuous variables (e.g. income, age, blood pressure, etc.) and discrete variables (e.g. sex, race, political party, etc.). Discrete variables referring to more than two possible choices are typically coded using dummy variables (or indicator variables), i.e. separate explanatory variables taking the value 0 or 1 are created for each possible value of the discrete variable, with a 1 meaning "variable does have the given value" and a 0 meaning "variable does not have the given value". For example, a four-way discrete variable of blood type with the possible values "A, B, AB, O" would be converted to separate two-way dummy variables, "is-A, is-B, is-AB, is-O", where only one of them has the value 1 and all the rest have the value 0. This allows for separate regression coefficients to be matched for each possible value of the discrete variable. Note that, for K categories, not all K dummy variables are independent of each other. For example, in the above blood type example, only three of the four dummy variables are independent, in the sense that once the values of three of the variables are known, the fourth is automatically determined. Thus, it's really only necessary to encode three of the four possibilities as dummy variables, and in fact if all four possibilities are encoded, the overall model becomes non-identifiable. This causes problems for a number of methods, such as the simple closed-form solution used in linear regression. The solution is either to avoid such cases by eliminating one of the dummy variables, and/or introduce a regularization constraint (which necessitates a more powerful, typically iterative, method for finding the optimal coefficients).
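A brief sketch of the constructions described above: a polynomial basis expansion, a Gaussian RBF basis, and dummy (one-hot) coding of a categorical variable with one category dropped so the design matrix stays full rank. All data are synthetic.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=60)
y = np.sin(x) + 0.1 * rng.normal(size=60)

# Polynomial basis phi(x) = (1, x, x^2, x^3): nonlinear in x, linear in the coefficients.
X_poly = np.column_stack([x**k for k in range(4)])
beta_poly, *_ = np.linalg.lstsq(X_poly, y, rcond=None)

# Gaussian RBF basis centered at a few fixed points c.
centers = np.linspace(-2, 2, 7)
X_rbf = np.exp(-2.0 * (x[:, None] - centers[None, :]) ** 2)
beta_rbf, *_ = np.linalg.lstsq(X_rbf, y, rcond=None)

# Dummy (one-hot) coding of blood type, dropping "O" as the reference category.
blood = rng.choice(["A", "B", "AB", "O"], size=60)
dummies = np.column_stack([(blood == c).astype(float) for c in ["A", "B", "AB"]])
print(beta_poly, beta_rbf.shape, dummies[:3])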
Linear predictor function : Linear model Linear regression
Overfitting : In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In the simple case of polynomial curve fitting, for example, the number of parameters grows with the degree of the polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.: 45 Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing. Underfitting would occur, for example, when fitting a linear model to nonlinear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model. For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; overfitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. Such a model, though, will typically fail severely when making predictions. Overfitting is directly related to the approximation error of the selected function class and the optimization error of the optimization procedure. A function class that is too large, in a suitable sense, relative to the dataset size is likely to overfit. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new dataset than on the dataset used for fitting (a phenomenon sometimes known as shrinkage). In particular, the value of the coefficient of determination will shrink relative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is to either (1) explicitly penalize overly complex models or (2) test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
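The memorization effect described above can be reproduced with a few lines of polynomial fitting: as the degree approaches the number of training points, the training error collapses while the error on fresh data grows. The data, noise level, and degrees below are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)                 # "true" underlying relationship
x_train = rng.uniform(0, 1, 10)
y_train = f(x_train) + 0.2 * rng.normal(size=10)    # small, noisy training set
x_test = rng.uniform(0, 1, 200)
y_test = f(x_test) + 0.2 * rng.normal(size=200)     # fresh data the model has never seen

for degree in (1, 3, 9):                            # degree 9 gives 10 parameters for 10 points
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, round(train_err, 4), round(test_err, 4))

The low-degree fit underfits (both errors high), the intermediate fit generalizes best, and the highest-degree fit nearly memorizes the training set while performing worst on the test data.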
Overfitting : In statistics, an inference is drawn from a statistical model, which has been selected via some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony". The authors also state the following.: 32–33 Overfitted models ... are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). False treatment effects tend to be identified, and false variables are included with overfitted models. ... A best approximating model is achieved by properly balancing the errors of underfitting and overfitting. Overfitting is more likely to be a serious concern when there is little theory available to guide the analysis, in part because then there tend to be a large number of models to select from. The book Model Selection and Model Averaging (2008) puts it this way. Given a data set, you can fit thousands of models at the push of a button, but how do you choose the best? With so many candidate models, overfitting is a real danger. Is the monkey who typed Hamlet actually a good writer?