Data Science and Predictive Analytics : The first edition of the textbook Data Science and Predictive Analytics: Biomedical and Health Applications using R, authored by Ivo D. Dinov, was published in August 2018 by Springer. The second edition of the book was printed in 2023. This textbook covers some of the core mathematical foundations, computational techniques, and artificial intelligence approaches used in data science research and applications. Using the statistical computing platform R and a broad range of biomedical case studies, the 23 chapters of the book's first edition provide explicit examples of importing, exporting, processing, modeling, visualizing, and interpreting large, multivariate, heterogeneous, longitudinal, and incomplete datasets (big data).
|
Data Science and Predictive Analytics : The materials in the Data Science and Predictive Analytics (DSPA) textbook have been peer-reviewed in the Journal of the American Statistical Association, the International Statistical Institute’s ISI Review Journal, and the Journal of the American Library Association. Many scholarly publications reference the DSPA textbook. As of January 17, 2021, the electronic version of the book's first edition (ISBN 978-3-319-72347-1) is freely available on SpringerLink and has been downloaded over 6 million times. The textbook is globally available in print (hardcover and softcover) and electronic formats (PDF and EPub) in many college and university libraries and has been used for data science, computational statistics, and analytics classes at various institutions.
|
Data Science and Predictive Analytics : DSPA textbook (1st edition): Springer website and SpringerLink eBook download. DSPA textbook (2nd edition): Springer website; published in The Springer Series in Applied Machine Learning (SSAML). Textbook supporting website.
|
Physics-informed neural networks : Physics-informed neural networks (PINNs), also referred to as Theory-Trained Neural Networks (TTNs), are a type of universal function approximator that can embed knowledge of the physical laws governing a given dataset, expressed as partial differential equations (PDEs), into the learning process. Low data availability for some biological and engineering problems limits the robustness of conventional machine learning models used for these applications. Prior knowledge of general physical laws acts in the training of neural networks (NNs) as a regularization agent that limits the space of admissible solutions, increasing the generalizability of the function approximation. Embedding this prior information into a neural network thus enhances the information content of the available data, helping the learning algorithm capture the right solution and generalize well even with a small number of training examples.
|
Physics-informed neural networks : Most of the physical laws that govern the dynamics of a system can be described by partial differential equations. For example, the Navier–Stokes equations are a set of partial differential equations derived from the conservation laws (i.e., conservation of mass, momentum, and energy) that govern fluid mechanics. The solution of the Navier–Stokes equations with appropriate initial and boundary conditions allows the quantification of flow dynamics in a precisely defined geometry. However, these equations cannot generally be solved exactly, and therefore numerical methods must be used (such as finite differences, finite elements, and finite volumes). In this setting, these governing equations must be solved while accounting for prior assumptions, linearization, and adequate time and space discretization. Recently, solving the governing partial differential equations of physical phenomena using deep learning has emerged as a new field of scientific machine learning (SciML), leveraging the universal approximation theorem and the high expressivity of neural networks. In general, deep neural networks can approximate any high-dimensional function, given that sufficient training data are supplied. However, such networks do not consider the physical characteristics underlying the problem, and the level of approximation accuracy they provide is still heavily dependent on careful specification of the problem geometry as well as the initial and boundary conditions. Without this preliminary information, the solution is not unique and may lose physical correctness. Physics-informed neural networks (PINNs), on the other hand, leverage governing physical equations in neural network training: PINNs are designed to be trained to satisfy the given training data as well as the imposed governing equations. In this fashion, a neural network can be guided with training data that do not necessarily need to be large and complete. Potentially, an accurate solution of partial differential equations can be found without knowing the boundary conditions. Therefore, with some knowledge about the physical characteristics of the problem and some form of training data (even sparse and incomplete), PINNs may be used to find an optimal solution with high fidelity. PINNs allow for addressing a wide range of problems in computational science and represent a pioneering technology leading to the development of new classes of numerical solvers for PDEs. PINNs can be thought of as a meshfree alternative to traditional approaches (e.g., CFD for fluid dynamics) and as new data-driven approaches for model inversion and system identification. Notably, a trained PINN can predict values on simulation grids of different resolutions without needing to be retrained. In addition, PINNs exploit automatic differentiation (AD) to compute the required derivatives in the partial differential equations, a class of differentiation techniques widely used in training neural networks and assessed to be superior to numerical or symbolic differentiation.
|
Physics-informed neural networks : A general nonlinear partial differential equation can be written as u_t + N[u; λ] = 0, x ∈ Ω, t ∈ [0, T], where u(t, x) denotes the solution, N[·; λ] is a nonlinear operator parameterized by λ, and Ω is a subset of R^D. This general form of governing equation summarizes a wide range of problems in mathematical physics, such as conservation laws, diffusion processes, advection-diffusion systems, and kinetic equations. Given noisy measurements of a generic dynamic system described by the equation above, PINNs can be designed to solve two classes of problems: data-driven solution and data-driven discovery of partial differential equations.
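To make this concrete, the following minimal sketch (assuming PyTorch, a Burgers-type instance u_t + u·u_x − ν·u_xx = 0 of the general form above, and illustrative network sizes; none of these specifics come from the source) shows a PINN loss combining a data misfit with the PDE residual computed by automatic differentiation:

```python
import torch
import torch.nn as nn

# Small fully connected network u(x, t); the architecture is an illustrative choice.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

def pde_residual(x, t, nu=0.01):
    # Residual of u_t + u*u_x - nu*u_xx = 0, a Burgers-type instance of
    # u_t + N[u; lambda] = 0; all derivatives come from autograd.
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

def pinn_loss(x_data, t_data, u_data, x_coll, t_coll):
    # Data misfit at (possibly sparse) measurement points plus the
    # PDE residual at collocation points drawn from Omega x [0, T].
    u_pred = net(torch.cat([x_data, t_data], dim=1))
    return ((u_pred - u_data) ** 2).mean() + (pde_residual(x_coll, t_coll) ** 2).mean()
```

Minimizing this loss over the network weights drives the model to fit the measurements while approximately satisfying the governing equation at the collocation points.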
|
Physics-informed neural networks : PINNs are unable to approximate PDEs that exhibit strong nonlinearity or sharp gradients, which commonly occur in practical fluid flow problems. Piecewise approximation is an old practice in the field of numerical approximation. Extremely lightweight PINNs capable of approximating strong nonlinearity can be used to solve PDEs over discrete subdomains, which increases accuracy substantially and decreases the computational load. DPINN (Distributed physics-informed neural networks) and DPIELM (Distributed physics-informed extreme learning machines) are generalizable space-time domain discretizations for better approximation. DPIELM is an extremely fast and lightweight approximator with competitive accuracy. On top of this, domain scaling has a special effect. Another school of thought is discretization for parallel computation, to better leverage the available computational resources. XPINN is a generalized space-time domain decomposition approach for physics-informed neural networks (PINNs) to solve nonlinear partial differential equations on arbitrarily complex-geometry domains. XPINN further pushes the boundaries of both PINNs and conservative PINNs (cPINNs), a spatial domain decomposition approach in the PINN framework tailored to conservation laws. Compared to PINN, the XPINN method has greater representation and parallelization capacity due to the inherent deployment of multiple neural networks in smaller subdomains. Unlike cPINN, XPINN can be extended to any type of PDE. Moreover, the domain can be decomposed in any arbitrary way (in space and time), which is not possible in cPINN. Thus, XPINN offers both space and time parallelization, thereby reducing the training cost more effectively. XPINN is particularly effective for large-scale problems (involving large datasets) as well as for high-dimensional problems where a single-network PINN is not adequate. Rigorous bounds on the errors resulting from the approximation of nonlinear PDEs (the incompressible Navier–Stokes equations) with PINNs and XPINNs have been proved. However, results with DPINN call into question the use of residual (flux) matching at the domain interfaces, as it hardly seems to improve the optimization.
|
Physics-informed neural networks : In the PINN framework, initial and boundary conditions are not analytically satisfied, and thus need to be included in the loss function of the network to be learned simultaneously with the differential equation (DE) unknown functions. Having competing objectives during the network's training can lead to unbalanced gradients when using gradient-based techniques, which causes PINNs to often struggle to accurately learn the underlying DE solution. This drawback is overcome by using functional interpolation techniques such as the Theory of Functional Connections (TFC)'s constrained expression in the Deep-TFC framework, which reduces the solution search space of constrained problems to the subspace of neural networks that analytically satisfy the constraints. A further improvement of the PINN and functional interpolation approach is given by the Extreme Theory of Functional Connections (X-TFC) framework, where a single-layer neural network and the extreme learning machine training algorithm are employed. X-TFC improves the accuracy and performance of regular PINNs, and its robustness and reliability have been demonstrated for stiff problems, optimal control, aerospace, and rarefied gas dynamics applications.
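As a minimal illustration of the constrained-expression idea (assuming PyTorch and a single initial condition u(0) = u0; Deep-TFC and X-TFC handle far more general constraints), the trial solution below satisfies the constraint analytically for any network output, so the constraint never has to be learned through the loss:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def u_trial(t, u0=1.0):
    # Constrained expression: u_trial(0) == u0 holds exactly regardless of
    # the network weights, so only the DE residual remains to be optimized.
    return u0 + t * net(t)
```

Because the constraint holds by construction, the loss reduces to the differential equation residual alone, removing the competing-objectives problem for this term.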
|
Physics-informed neural networks : Regular PINNs are only able to obtain the solution of a forward or inverse problem on a single geometry, meaning that for any new geometry (computational domain) a PINN must be retrained. This limitation of regular PINNs imposes high computational costs, specifically for comprehensive investigations of geometric parameters in industrial designs. Physics-informed PointNet (PIPN) is fundamentally the result of combining PINN's loss function with PointNet: instead of using a simple fully connected neural network, PIPN uses PointNet as the core of its neural network. PointNet was primarily designed for deep learning of 3D object classification and segmentation by the research group of Leonidas J. Guibas. In PIPN, PointNet extracts the geometric features of the input computational domains. Thus, PIPN is able to solve governing equations on multiple computational domains (rather than only a single domain) with irregular geometries, simultaneously. The effectiveness of PIPN has been shown for incompressible flow, heat transfer, and linear elasticity.
|
Physics-informed neural networks : Physics-informed neural networks (PINNs) have proven particularly effective in solving inverse problems within differential equations, demonstrating their applicability across science, engineering, and economics. They have proven useful for solving inverse problems in a variety of fields, including nano-optics, topology optimization/characterization, multiphase flow in porous media, and high-speed fluid flow. PINNs have demonstrated flexibility when dealing with noisy and uncertain observation datasets. They have also demonstrated clear advantages in the inverse calculation of parameters for multi-fidelity datasets, meaning datasets with different quality, quantity, and types of observations. Uncertainties in calculations can be evaluated using ensemble-based or Bayesian-based calculations.
|
Physics-informed neural networks : Surrogate networks are intended for the unknown functions, namely the components of the strain and stress tensors and the unknown displacement field, respectively. A residual network provides the residuals of the partial differential equations (PDEs) and of the boundary conditions. The computational approach is based on principles of artificial intelligence.
|
Physics-informed neural networks : The deep backward stochastic differential equation (BSDE) method is a numerical method that combines deep learning with backward stochastic differential equations to solve high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods like finite difference methods or Monte Carlo simulations, which struggle with the curse of dimensionality. Deep BSDE methods use neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. Additionally, integrating physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws into the neural network architecture, ensuring solutions adhere to the governing stochastic differential equations, resulting in more accurate and reliable solutions.
|
Physics-informed neural networks : An extension or adaptation of PINNs are biologically-informed neural networks (BINNs). BINNs introduce two key adaptations to the typical PINN framework: (i) the mechanistic terms of the governing PDE are replaced by neural networks, and (ii) the loss function L_tot is modified to include L_constr, a term used to incorporate domain-specific knowledge that helps enforce biological applicability. For (i), this adaptation has the advantage of relaxing the need to specify the governing differential equation a priori, either explicitly or by using a library of candidate terms. Additionally, this approach circumvents the potential issue of misspecifying regularization terms in stricter theory-informed cases. A natural example of BINNs can be found in cell dynamics, where the cell density u(x, t) is governed by a reaction-diffusion equation with diffusion and growth functions D(u) and G(u), respectively: u_t = ∇ · (D(u) ∇u) + G(u) u, x ∈ Ω, t ∈ [0, T]. In this case, a component of L_constr could be ||D||_Γ for D < D_min or D > D_max, which penalizes values of D that fall outside a biologically relevant diffusion range defined by D_min ≤ D ≤ D_max. Furthermore, the BINN architecture, when utilizing multilayer perceptrons (MLPs), functions as follows: an MLP is used to construct u_MLP(x, t) from model inputs (x, t), serving as a surrogate model for the cell density u(x, t). This surrogate is then fed into two additional MLPs, D_MLP(u_MLP) and G_MLP(u_MLP), which model the diffusion and growth functions. Automatic differentiation can then be applied to compute the necessary derivatives of u_MLP, D_MLP, and G_MLP to form the governing reaction-diffusion equation. Note that since u_MLP is a surrogate for the cell density, it may contain errors, particularly in regions where the PDE is not fully satisfied. Therefore, the reaction-diffusion equation may also be solved numerically, for instance using a method-of-lines approach.
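A minimal sketch of the three-MLP composition described above (assuming PyTorch; the layer sizes and the range [D_min, D_max] are illustrative, not from the source):

```python
import torch
import torch.nn as nn

u_mlp = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # surrogate for u(x, t)
D_mlp = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # diffusion D(u)
G_mlp = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))  # growth G(u)

def L_constr(xt, D_min=0.0, D_max=1.0):
    # One possible form of the constraint term: penalize diffusivities
    # that fall outside the biologically relevant range [D_min, D_max].
    D = D_mlp(u_mlp(xt))
    return (torch.relu(D_min - D) + torch.relu(D - D_max)).mean()
```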
|
Physics-informed neural networks : Translation and discontinuous behavior are hard to approximate using PINNs. PINNs fail when solving differential equations with even slight advective dominance, where the asymptotic behaviour causes the method to break down; such PDEs can be addressed by scaling the variables. This difficulty in training PINNs on advection-dominated PDEs can be explained by the Kolmogorov n-width of the solution. PINNs also fail to solve systems of dynamical equations, and hence have not succeeded in solving chaotic equations. One reason behind the failure of regular PINNs is the soft constraining of Dirichlet and Neumann boundary conditions, which poses a multi-objective optimization problem that requires manually weighting the loss terms in order to optimize. More generally, posing the solution of a PDE as an optimization problem brings with it all the problems faced in the world of optimization, the major one being getting stuck in local optima.
|
Physics-informed neural networks : PINN – repository to implement physics-informed neural networks in Python. XPINN – repository to implement extended physics-informed neural networks (XPINN) in Python. PIPN – repository to implement physics-informed PointNet (PIPN) in Python.
|
List of programming languages for artificial intelligence : Historically, some programming languages have been specifically designed for artificial intelligence (AI) applications. Nowadays, many general-purpose programming languages also have libraries that can be used to develop AI applications.
|
List of programming languages for artificial intelligence : Python is a high-level, general-purpose programming language that is popular in artificial intelligence. It has a simple, flexible and easily readable syntax. Its popularity has resulted in a vast ecosystem of libraries, including deep-learning libraries such as PyTorch, TensorFlow, Keras, and Google JAX. The library NumPy can be used for manipulating arrays, SciPy for scientific and mathematical analysis, Pandas for analyzing table data, Scikit-learn for various machine learning tasks, NLTK and spaCy for natural language processing, OpenCV for computer vision, and Matplotlib for data visualization. Hugging Face's transformers library can manipulate large language models. Jupyter Notebooks can execute cells of Python code, retaining the context between the execution of cells, which usually facilitates interactive data exploration. Elixir is a high-level functional programming language based on the Erlang VM. Its machine-learning ecosystem includes Nx for computing on CPUs and GPUs, Bumblebee and Axon for serving and training models, Broadway for distributed processing pipelines, Membrane for image and video processing, Livebook for prototyping and publishing notebooks, and Nerves for embedding on devices. R is widely used in new-style artificial intelligence, involving statistical computations, numerical analysis, the use of Bayesian inference, neural networks and in general machine learning. In domains like finance, biology, sociology or medicine it is considered one of the main standard languages. It offers several paradigms of programming like vectorial computation, functional programming and object-oriented programming. Lisp was the first language developed for artificial intelligence. It includes features intended to support programs that could perform general problem solving, such as lists, associations, schemas (frames), dynamic memory allocation, data types, recursion, associative retrieval, functions as arguments, generators (streams), and cooperative multitasking. MATLAB is a proprietary numerical computing language developed by MathWorks. MATLAB has many toolboxes specifically for the development of AI, including the Statistics and Machine Learning Toolbox and the Deep Learning Toolbox. These toolboxes provide APIs for the high-level and low-level implementation and use of many types of machine learning models that can integrate with the rest of the MATLAB ecosystem. These libraries also support code generation for embedded hardware. C++ is a compiled language that can interact with low-level hardware. In the context of AI, it is particularly used for embedded systems and robotics. Libraries such as TensorFlow C++, Caffe or Shogun can be used. JavaScript is widely used for web applications and can notably be executed with web browsers. Libraries for AI include TensorFlow.js, Synaptic and Brain.js. Julia is a language launched in 2012 which intends to combine ease of use and performance. It is mostly used for numerical analysis, computational science, and machine learning. C# can be used to develop high-level machine learning models using Microsoft’s .NET suite. ML.NET was developed to aid integration with existing .NET projects, simplifying the process for existing software using the .NET platform. Smalltalk has been used extensively for simulations, neural networks, machine learning, and genetic algorithms. It implements a pure and elegant form of object-oriented programming using message passing.
Haskell is a purely functional programming language. Lazy evaluation and the list and LogicT monads make it easy to express non-deterministic algorithms, as is often required in AI. Infinite data structures are useful for search trees. The language's features enable a compositional way to express algorithms. Working with graphs, however, is a bit harder at first because of functional purity. Wolfram Language includes a wide range of integrated machine learning abilities, from highly automated functions like Predict and Classify to functions based on specific methods and diagnostics. The functions work on many types of data, including numerical, categorical, time series, textual, and image. Mojo can run some Python programs, and supports programmability of AI hardware. It aims to combine the usability of Python with the performance of low-level programming languages like C++ or Rust.
|
List of programming languages for artificial intelligence : Prolog is a declarative language where programs are expressed in terms of relations, and execution occurs by running queries over these relations. Prolog is particularly useful for symbolic reasoning, database and language parsing applications. Artificial Intelligence Markup Language (AIML) is an XML dialect for use with Artificial Linguistic Internet Computer Entity (A.L.I.C.E.)-type chatterbots. Planner is a hybrid between procedural and logical languages. It gives a procedural interpretation to logical sentences, where implications are interpreted with pattern-directed inference. Stanford Research Institute Problem Solver (STRIPS) is a language to express automated planning problem instances. It expresses an initial state, the goal states, and a set of actions. For each action, preconditions (what must be established before the action is performed) and postconditions (what is established after the action is performed) are specified. POP-11 is a reflective, incrementally compiled programming language with many of the features of an interpreted language. It is the core language of the Poplog programming environment, developed originally by the University of Sussex and more recently by the School of Computer Science at the University of Birmingham, which hosts the Poplog website. It is often used to introduce symbolic programming techniques to programmers of more conventional languages like Pascal, who find POP syntax more familiar than that of Lisp. One of POP-11's features is that it supports first-class functions. CycL is a special-purpose language for Cyc.
|
List of programming languages for artificial intelligence : Glossary of artificial intelligence List of constraint programming languages List of computer algebra systems List of logic programming languages List of constructed languages Fifth-generation programming language
|
Lexical simplification : Lexical simplification is a sub-task of text simplification. It can be defined as any lexical substitution task that reduces text complexity.
|
Lexical simplification : Lexical substitution Text simplification
|
Artificial wisdom : Artificial wisdom (AW) is an artificial intelligence (AI) system which is able to display the human traits of wisdom and morals while being able to contemplate its own “endpoint”. Artificial wisdom can be described as artificial intelligence reaching the top level of decision-making when confronted with the most complex and challenging situations. The term artificial wisdom is used when the "intelligence" rests on more than collecting and interpreting data by chance, being by design enriched with the smart, conscience-driven strategies that wise people would use. The goal of artificial wisdom is to create artificial intelligence that replicates the “uniquely human trait[s]” of having wisdom and morals as closely as possible. Thus, artificial wisdom must “incorporate [the] ethical and moral considerations” of the data it uses. There are also many significant ethical and legal implications of AW, compounded by the rapid advances in AI and related technologies alongside a lack of ethics development, guidelines, and regulations, and the absence of oversight by any kind of overarching advisory board. Additionally, there are challenges in how to develop, test, and implement AW in real-world scenarios. Existing tests do not probe the internal thought process by which a computer system reaches its conclusion, only the result of that process. When examining computer-aided wisdom, the partnership of artificial intelligence and contemplative neuroscience, concerns regarding the future of artificial intelligence shift to a more optimistic viewpoint. This artificial wisdom forms the basis of Louis Molnar's monographic article on artificial philosophy, where he coined the term and proposes how artificial intelligence might view its place in the grand scheme of things.
|
Artificial wisdom : There are no universal or standardized definitions for human intelligence, artificial intelligence, human wisdom, or artificial wisdom. However, the DIKW pyramid, which describes the continuum of relationships between data, information, knowledge, and wisdom, puts wisdom at the highest level of its hierarchy. Gottfredson defines intelligence as “the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience”. Definitions for wisdom typically require: the ability for emotional regulation; pro-social behaviors (e.g., empathy, compassion, and altruism); self-reflection; and “a balance between decisiveness and acceptance of uncertainty and diversity of perspectives, and social advising.” As previously defined, artificial wisdom would then be an AI system which is able to solve problems via “an understanding of…context, ethics and moral principles,” rather than simple pre-defined inputs or “learned patterns.” Some scientists have also considered the field of artificial consciousness. However, Jeste states that “…it is generally agreed that only humans can have consciousness, autonomy, will, and theory of mind.” An artificially wise system must also be able to contemplate its end goal and recognize its own ignorance. Additionally, to contemplate its end goal, a wise system must have a “correct conception of worthwhile goals (broadly speaking) or well-being (narrowly speaking)”. Stephen Grimm further suggests that the following three types of knowledge are individually necessary for wisdom: first, “knowledge of what is good or important for well-being”; second, “knowledge of one’s standing, relative to what is good or important for well-being”; and third, “knowledge of a strategy for obtaining what is good or important for well-being”.
|
Artificial wisdom : There are notable problems with attempting to create an artificially wise system. Consciousness, autonomy, and will are considered strictly human features.
|
Powerset (company) : Powerset was an American company based in San Francisco, California, that, in 2006, was developing a natural language search engine for the Internet. On July 1, 2008, Powerset was acquired by Microsoft for an estimated $100 million (~$139 million in 2023), part of Microsoft's broader strategy to enhance its search capabilities and compete more effectively with other search engine providers, particularly Google. Powerset was working on building a natural language search engine that could find targeted answers to user questions (as opposed to keyword-based search). For example, when confronted with a question like "Which U.S. state has the highest income tax?", conventional search engines ignore the question phrasing and instead do a search on the keywords "state", "highest", "income", and "tax". Powerset, on the other hand, attempted to use natural language processing to understand the nature of the question and return pages containing the answer. The company was in the process of "building a natural language search engine that reads and understands every sentence on the Web". The company licensed natural language technology from PARC, the former Xerox Palo Alto Research Center; this technology played a crucial role in the development of Powerset's NLP capabilities. On May 11, 2008, the company unveiled a tool for searching a fixed subset of English Wikipedia using conversational phrases rather than keywords, demonstrating the potential of Powerset's NLP technology to provide more precise and contextually relevant search results.
|
Powerset (company) : In a form of beta testing, Powerset opened an online community called Powerlabs on September 17, 2007. Business Week said: "The company hopes the site will marshal thousands of people to help build and improve its search engine before it goes public next year." Said The New York Times: "[Powerset Labs] goes far beyond the 'alpha' or 'beta' testing involved in most software projects, when users put a new product through rigorous testing to find its flaws. Powerset doesn’t have a product yet, but rather a collection of promising natural language technologies, which are the fruit of years of research at Xerox PARC." Powerlabs' initial search results were taken from Wikipedia.
|
Powerset (company) : Barney Pell (born March 18, 1968, in Hollywood, California) was co-founder and CEO of Powerset. Pell received his Bachelor of Science degree in symbolic systems from Stanford University in 1989, where he graduated Phi Beta Kappa and was a National Merit Scholar. Pell received a PhD in computer science from Cambridge University in 1993, where he was a Marshall Scholar. He has worked at NASA; as chief strategist and vice president of business development at StockMaster.com (acquired by Red Herring in March 2000); and at Whizbang! Labs. Prior to joining Powerset, Pell was an Entrepreneur-in-Residence at Mayfield Fund, a venture capital firm in Silicon Valley. Pell is also a founder of Moon Express, Inc., a U.S. company awarded a $10M commercial lunar contract by NASA and a competitor in the Google Lunar X PRIZE. Steve Newcomb was the COO and co-founder of Powerset. Prior to joining Powerset, he was a co-founder of Loudfire, General Manager at Promptu, and on the board of directors at Jaxtr. He left Powerset in October 2007 to form Virgance, a social startup incubator. Lorenzo Thione (born in Como, Italy) was the product architect and co-founder of Powerset. Prior to joining Powerset, he worked at FXPAL in natural language processing and related research fields. Thione earned his master's degree in software engineering from the University of Texas at Austin. Ronald Kaplan, former manager of research in Natural Language Theory and Technology at PARC, served as the company's CTO and CSO. Ryan Ferrier was a member of the founding team of Powerset. He managed personnel and internal operations. After 2008 he went on to co-found Serious Business, which made Facebook applications and was later bought by Zynga. Another Powerset alumnus, Alex Le, became CTO of Serious Business and went on to become an executive producer at Zynga when it bought the company. Siqi Chen founded a stealth startup in mobile computing after leaving Powerset. Tom Preston-Werner worked at Powerset and left after the acquisition to found GitHub.
|
Powerset (company) : Powerset attracted a wide range of investors, many of whom had considerable experience in the venture capital field. The company received $12.5 million (~$17.7 million in 2023) in Series A funding during November 2007, co-led by the venture capital firms Foundation Capital and The Founders Fund. Among the better-known investors: Esther Dyson, founding chairman of ICANN, founder of the newsletter Release 1.0 and editor at Cnet; Peter Thiel, founder and former CEO of PayPal; Luke Nosek, founder of PayPal; Todd Parker, managing partner of Hidden River Ventures; Reid Hoffman, executive vice president of PayPal and founder of LinkedIn; and First Round Capital, a seed-stage venture firm.
|
Powerset (company) : Bing (search engine) Apache HBase
|
Powerset (company) : Powerset main web site - redirects to Bing Powerset acquired by Microsoft
|
Luminoso : Luminoso is a Cambridge, MA-based text analytics and artificial intelligence company. It spun out of the MIT Media Lab and its crowd-sourced Open Mind Common Sense (OMCS) project. The company has raised $20.6 million in financing, and its clients include Sony, Autodesk, Scotts Miracle-Gro, and GlaxoSmithKline.
|
Luminoso : Luminoso was co-founded in 2010 by Dennis Clark, Jason Alonso, Robyn Speer, and Catherine Havasi, a research scientist at MIT in artificial intelligence and computational linguistics. The company builds on the knowledge base of MIT’s Open Mind Common Sense (OMCS) project, co-founded in 1999 by Havasi, who continues to serve as its director. The OMCS knowledge base has since been combined with knowledge from other crowdsourced resources to become ConceptNet. ConceptNet consists of approximately 28 million statements in 304 languages, with full support for 10 languages and moderate support for 77 languages. ConceptNet is a resource for making an AI that understands the meanings of the words people use. During the World Cup in June 2014, the company provided a widely reported real-time sentiment analysis of the U.S. vs. Germany match, analyzing 900,000 posts on Twitter, Facebook and Google+.
|
Luminoso : The company uses artificial intelligence, natural language processing, and machine learning to derive insights from unstructured data such as contact center interactions, chatbot and live chat transcripts, product reviews, open-ended survey responses, and email. Luminoso's software identifies and quantifies patterns and relationships in text-based data, including domain-specific or creative language. Rather than human-powered keyword searches of data, the software automates taxonomy creation around concepts, allowing related words and phrases to be dynamically generated and tracked. Commercial applications include analyzing, prioritizing, and routing contact center interactions; identifying consumer complaints before they begin to trend; and tracking sentiment during product launches. The software natively analyzes text in fourteen languages, as well as emoji.
|
Luminoso : Luminoso's technology can be accessed via two products: Luminoso Daylight and Luminoso Compass. Luminoso Daylight enables a deep-dive analysis into batch or real-time data, whereas Luminoso Compass automates the categorization of real-time data. Both products offer a user interface as well as an API. Luminoso's products can be implemented through either a cloud-based or an on-premise solution.
|
Luminoso : Luminoso continues to actively conduct research in natural language processing and word embeddings and regularly participates in evaluations such as SemEval. At SemEval 2017, Luminoso participated in Task 2, measuring the semantic similarity of word pairs within and across five languages. Its solution outperformed all competing systems in every language pair tested, with the exception of Persian.
|
Luminoso : Luminoso has been listed as a "Cool Vendor in AI for Marketing" by Gartner, and has also been named a "Boston Artificial Intelligence Startup to Watch" by BostInno. In May 2017, Luminoso was recognized as having the Best Application for AI in the Enterprise by AI Business, and was also shortlisted as the Best AI Breakthrough and Best Innovation in NLP.
|
Luminoso : Major competitors include Clarabridge and Lexalytics.
|
Luminoso : The company raised $1.5 million from angel investors in 2012. Its first institutional funding round of $6.5 million was completed in July 2014, led by Acadia Woods with participation from Japan’s Digital Garage. The company followed that with a $10M series B funding round in December 2018, led by DVI Equity Partners, with participation from Liberty Global Ventures, DF Enterprises, Raptor Holdco, Acadia Woods Partners, and Accord Ventures, among others.
|
Luminoso : Official website
|
Lifelong Planning A* : LPA* or Lifelong Planning A* is an incremental heuristic search algorithm based on A*. It was first described by Sven Koenig and Maxim Likhachev in 2001.
|
Lifelong Planning A* : LPA* is an incremental version of A*, which can adapt to changes in the graph without recalculating the entire graph, by updating the g-values (distance from start) from the previous search during the current search to correct them when necessary. Like A*, LPA* uses a heuristic, which is a lower boundary for the cost of the path from a given node to the goal. A heuristic is admissible if it is guaranteed to be non-negative (zero being admissible) and never greater than the cost of the cheapest path to the goal.
|
Lifelong Planning A* : This code assumes a priority queue, queue, which supports the following operations: topKey() returns the (numerically) lowest priority of any node in the queue (or infinity if the queue is empty); pop() removes the node with the lowest priority from the queue and returns it; insert(node, priority) inserts a node with a given priority into the queue; remove(node) removes a node from the queue; contains(node) returns true if the queue contains the specified node, false if not.
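The referenced pseudocode is not reproduced here, but a minimal Python sketch of a queue supporting exactly these operations (implemented with heapq and lazy deletion; the class and attribute names are illustrative) could look like this:

```python
import heapq

class Queue:
    """Priority queue with topKey/pop/insert/remove/contains via lazy deletion."""
    def __init__(self):
        self.heap = []       # (priority, node) pairs, possibly stale
        self.entries = {}    # node -> current priority

    def insert(self, node, priority):
        self.entries[node] = priority
        heapq.heappush(self.heap, (priority, node))

    def remove(self, node):
        self.entries.pop(node, None)  # stale heap pair is discarded lazily

    def contains(self, node):
        return node in self.entries

    def _prune(self):
        # Drop heap pairs that no longer match the node's current priority.
        while self.heap and self.entries.get(self.heap[0][1]) != self.heap[0][0]:
            heapq.heappop(self.heap)

    def topKey(self):
        self._prune()
        return self.heap[0][0] if self.heap else float('inf')

    def pop(self):
        self._prune()
        priority, node = heapq.heappop(self.heap)
        del self.entries[node]
        return node
```

In LPA* the priority is a two-element key compared lexicographically, which Python tuples provide for free; this sketch additionally assumes nodes are comparable for tie-breaking between equal keys.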
|
Lifelong Planning A* : Being algorithmically similar to A*, LPA* shares many of its properties. Each node is expanded (visited) at most twice for each run of LPA*. Locally overconsistent nodes are expanded at most once per LPA* run, thus its initial run (in which every node enters the overconsistent state) has similar performance to A*, which visits each node at most once. The keys of the nodes expanded for each run are monotonically nondecreasing over time, as is the case with A*. The more informed (and thus larger) the heuristics are (while still satisfying the admissibility criteria), the fewer nodes need to be expanded. The priority queue implementation has a significant impact on performance, as in A*. Using a Fibonacci heap can lead to a significant performance increase over less efficient implementations. For an A* implementation which breaks ties between two nodes with equal f-values in favor of the node with the smaller g-value (which is not well-defined in A*), the following statements are also true: The order in which locally overconsistent nodes are expanded is identical to A*. Of all locally overconsistent nodes, only those whose cost does not exceed that of the goal need to be expanded, as is the case in A*. LPA* additionally has the following properties: When edge costs change, LPA* outperforms A* (assuming the latter is run from scratch) as only a fraction of nodes need to be expanded again.
|
Lifelong Planning A* : D* Lite, a reimplementation of the D* algorithm based on LPA*
|
GPT-J : GPT-J or GPT-J-6B is an open-source large language model (LLM) developed by EleutherAI in 2021. As the name suggests, it is a generative pre-trained transformer model designed to produce human-like text that continues from a prompt. The optional "6B" in the name refers to the fact that it has 6 billion parameters. The model is available on GitHub, but the web interface no longer communicates with the model. Development stopped in 2021.
|
GPT-J : GPT-J is a GPT-3-like model with 6 billion parameters. Like GPT-3, it is an autoregressive, decoder-only transformer model designed to solve natural language processing (NLP) tasks by predicting how a piece of text will continue. Its architecture differs from GPT-3 in three main ways. The attention and feedforward neural network were computed in parallel during training, allowing for greater efficiency. The GPT-J model uses rotary position embeddings, which has been found to be a superior method of injecting positional information into transformers. GPT-J uses dense attention instead of efficient sparse attention, as used in GPT-3. Beyond that, the model has 28 transformer layers and 16 attention heads. Its vocabulary size is 50257 tokens, the same size as GPT-2's. It has a context window size of 2048 tokens. It was trained on the Pile dataset, using the Mesh Transformer JAX library in JAX to handle the parallelization scheme.
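For illustration, here is a minimal NumPy sketch of rotary position embeddings (the general mechanism from the RoFormer paper; GPT-J applies the rotation only to part of each attention head's dimensions, a detail this sketch omits):

```python
import numpy as np

def rotary_embed(x, base=10000.0):
    # x: (seq_len, dim) queries or keys, dim even. Each consecutive pair of
    # dimensions is rotated by a position-dependent angle, so relative
    # position is encoded multiplicatively in the attention dot products.
    seq_len, dim = x.shape
    inv_freq = base ** (-np.arange(dim // 2) * 2.0 / dim)
    angles = np.outer(np.arange(seq_len), inv_freq)   # (seq_len, dim/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```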
|
GPT-J : GPT-J was designed to generate English text from a prompt. It was not designed for translating or generating text in other languages or for performance without first fine-tuning the model for a specific task. Nonetheless, GPT-J performs reasonably well even without fine-tuning, even in translation (at least from English to French). When neither is fine-tuned, GPT-J-6B performs almost as well as the 6.7 billion parameter GPT-3 (Curie) on a variety of tasks. It even outperforms the 175 billion parameter GPT-3 (Davinci) on code generation tasks. With fine-tuning, it outperforms an untuned GPT-3 (Davinci) on a number of tasks. Like all LLMs, it is not programmed to give factually accurate information, only to generate text based on probability.
|
GPT-J : The untuned GPT-J is available on EleutherAI's website, NVIDIA's Triton Inference Server, and NLP Cloud's website. Cerebras and Amazon Web Services offer services to fine-tune the GPT-J model for company-specific tasks. Graphcore offers both fine-tuning and hosting services for the untuned GPT-J, as well as offering to host the fine-tuned models after they are produced. CoreWeave offers hosting services for both the untuned GPT-J and fine-tuned variants. In March 2023, Databricks released Dolly, an Apache-licensed, instruction-following model created by fine-tuning GPT-J on the Stanford Alpaca dataset. NovelAI's Sigurd and Genji-JP 6B models are both fine-tuned versions of GPT-J. They also offer further fine-tuning services to produce and host custom models. EleutherAI has received praise from Cerebras, GPT-3 Demo, NLP Cloud, and Databricks for making the model open-source, and its open-source status is often cited as a major advantage when choosing which model to use.
|
Latent diffusion model : The Latent Diffusion Model (LDM) is a diffusion model architecture developed by the CompVis (Computer Vision & Learning) group at LMU Munich. Introduced in 2015, diffusion models (DMs) are trained with the objective of removing successive applications of noise (commonly Gaussian) on training images. The LDM improves on the standard DM by performing diffusion modeling in a latent space and by allowing self-attention and cross-attention conditioning. LDMs are widely used in practical diffusion models. For instance, Stable Diffusion versions 1.1 to 2.1 were based on the LDM architecture.
|
Latent diffusion model : Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion. It was accompanied by a software implementation in Theano. A 2019 paper proposed the noise conditional score network (NCSN), or score-matching with Langevin dynamics (SMLD). The paper was accompanied by a software package written in PyTorch, released on GitHub. A 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by variational inference. The paper was accompanied by a software package written in TensorFlow, released on GitHub. It was reimplemented in PyTorch by lucidrains. On December 20, 2021, the LDM paper was published on arXiv, and both the Stable Diffusion and LDM repositories were published on GitHub. However, they remained roughly the same until substantial information concerning Stable Diffusion v1 was added to GitHub on August 10, 2022. All of the Stable Diffusion (SD) versions 1.1 to XL were particular instantiations of the LDM architecture. SD 1.1 to 1.4 were released by CompVis in August 2022. There is no "version 1.0". SD 1.1 was an LDM trained on the laion2B-en dataset. SD 1.1 was finetuned to 1.2 on more aesthetic images. SD 1.2 was finetuned to 1.3, 1.4 and 1.5, with 10% of text-conditioning dropped, to improve classifier-free guidance. SD 1.5 was released by RunwayML in October 2022.
|
Latent diffusion model : While the LDM can work for generating arbitrary data conditional on arbitrary data, for concreteness we describe its operation in conditional text-to-image generation. The LDM consists of a variational autoencoder (VAE), a modified U-Net, and a text encoder. The VAE encoder compresses the image from pixel space to a smaller-dimensional latent space, capturing a more fundamental semantic meaning of the image. Gaussian noise is iteratively applied to the compressed latent representation during forward diffusion. The U-Net block, composed of a ResNet backbone, denoises the output from forward diffusion backwards to obtain a latent representation. Finally, the VAE decoder generates the final image by converting the representation back into pixel space. The denoising step can be conditioned on a string of text, an image, or another modality. The encoded conditioning data is exposed to the denoising U-Nets via a cross-attention mechanism. For conditioning on text, a fixed, pretrained CLIP ViT-L/14 text encoder is used to transform text prompts to an embedding space.
|
Latent diffusion model : The LDM is trained by using a Markov chain to gradually add noise to the training images. The model is then trained to reverse this process, starting with a noisy image and gradually removing the noise until it recovers the original image. More specifically, the training process can be described as follows: Forward diffusion process: given a real image x_0, a sequence of latent variables x_{1:T} is generated by gradually adding Gaussian noise to the image, according to a pre-determined "noise schedule". Reverse diffusion process: starting from a Gaussian noise sample x_T, the model learns to predict the noise added at each step, in order to reverse the diffusion process and obtain a reconstruction of the original image x_0. The model is trained to minimize the difference between the predicted noise and the actual noise added at each step. This is typically done using a mean squared error (MSE) loss function. Once the model is trained, it can be used to generate new images by simply running the reverse diffusion process starting from a random noise sample. The model gradually removes the noise from the sample, guided by the learned noise distribution, until it generates a final image. See the diffusion model page for details.
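A minimal sketch of the training objective just described (assuming PyTorch, a linear noise schedule, and a noise-predicting model; in an LDM, x0 would be the VAE-encoded latent rather than the raw image):

```python
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, T=1000):
    # Linear beta schedule and its cumulative products (the "noise schedule").
    betas = torch.linspace(1e-4, 0.02, T)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    # Sample a random timestep per example and noise the clean input x0.
    t = torch.randint(0, T, (x0.shape[0],))
    noise = torch.randn_like(x0)
    ab = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise
    # The model predicts the added noise; minimize MSE between the two.
    return F.mse_loss(model(x_t, t), noise)
```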
|
Latent diffusion model : Diffusion model Generative adversarial network Variational autoencoder Stable Diffusion
|
Relational data mining : Relational data mining is the data mining technique for relational databases. Unlike traditional data mining algorithms, which look for patterns in a single table (propositional patterns), relational data mining algorithms look for patterns among multiple tables (relational patterns). For most types of propositional patterns, there are corresponding relational patterns. For example, there are relational classification rules (relational classification), relational regression tree, and relational association rules. There are several approaches to relational data mining: Inductive Logic Programming (ILP) Statistical Relational Learning (SRL) Graph Mining Propositionalization Multi-view learning
|
Relational data mining : Multi-Relation Association Rules: Multi-Relation Association Rules (MRARs) are a class of association rules in which, in contrast to primitive, simple, and even multi-relational association rules (usually extracted from multi-relational databases), each rule item consists of one entity but several relations. These relations indicate indirect relationships between the entities. Consider the following MRAR, where the first item consists of three relations live in, nearby, and humid: "Those who live in a place which is nearby a city with a humid climate type and also are younger than 20 -> their health condition is good". Such association rules are extractable from RDBMS data or semantic web data.
|
Relational data mining : Safarii: a data mining environment for analysing large relational databases, based on a multi-relational data mining engine. Dataconda: a software tool, free for research and teaching purposes, that helps mine relational databases without the use of SQL.
|
Relational data mining : Relational dataset repository: a collection of publicly available relational datasets.
|
Relational data mining : Data mining Structure mining Database mining
|
Relational data mining : Web page for a textbook on relational data mining
|
Connectionist expert system : Connectionist expert systems are artificial neural network (ANN) based expert systems where the ANN generates inferencing rules, e.g., a fuzzy multilayer perceptron where inputs are given in linguistic and natural form. Apart from that, rough set theory may be used to better encode knowledge in the weights, and genetic algorithms may be used to better optimize the search for solutions. Symbolic reasoning methods may also be incorporated (see hybrid intelligent system). (Also see expert system, neural network, clinical decision support system.)
|
Hard sigmoid : In artificial intelligence, especially computer vision and artificial neural networks, a hard sigmoid is a non-smooth function used in place of a sigmoid function. Hard sigmoids retain the basic shape of a sigmoid, rising from 0 to 1, but use simpler functions, especially piecewise linear or piecewise constant functions. They are preferred where speed of computation is more important than precision.
|
Hard sigmoid : The most extreme examples are the sign function or Heaviside step function, which go from −1 to 1 or from 0 to 1 (which to use depends on normalization) at 0. Other examples include those in the Theano library, which provides two approximations: ultra_fast_sigmoid, a multi-part piecewise approximation, and hard_sigmoid, a 3-part piecewise linear approximation (output 0, a line with slope 0.2, output 1).
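As a concrete sketch, the 3-part approximation can be written in one line; the slope-0.2 segment 0.2x + 0.5 reaches 0 at x = −2.5 and 1 at x = 2.5 (NumPy used here for illustration):

```python
import numpy as np

def hard_sigmoid(x):
    # 0 for x <= -2.5, the line 0.2*x + 0.5 in between, 1 for x >= 2.5.
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)
```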
|
Perceiver : Perceiver is a variant of the Transformer architecture, adapted for processing arbitrary forms of data, such as images, sounds and video, and spatial data. Unlike previous notable Transformer systems such as BERT and GPT-3, which were designed for text processing, the Perceiver is designed as a general architecture that can learn from large amounts of heterogeneous data. It accomplishes this with an asymmetric attention mechanism to distill inputs into a latent bottleneck. Perceiver matches or outperforms specialized models on classification tasks. Perceiver was introduced in June 2021 by DeepMind. It was followed by Perceiver IO in August 2021.
|
Perceiver : Perceiver is designed without modality-specific elements. For example, it does not have elements specialized to handle images, text, or audio. Further, it can handle multiple correlated input streams of heterogeneous types. It uses a small set of latent units that forms an attention bottleneck through which the inputs must pass. One benefit is eliminating the quadratic scaling problem found in early transformers. Earlier work used custom feature extractors for each modality. Perceiver associates position and modality-specific features with every input element (e.g. every pixel, or audio sample). These features can be learned or constructed using high-fidelity Fourier features. Perceiver uses cross-attention to produce linear-complexity layers and to detach network depth from input size. This decoupling allows deeper architectures.
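A minimal sketch of the latent bottleneck (assuming PyTorch; the latent count, width, and head count are illustrative): a fixed-size array of learned latents queries the inputs via cross-attention, so cost grows linearly rather than quadratically with input size.

```python
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    # N learned latents attend to M inputs: cost O(N*M) instead of O(M^2),
    # and network depth is decoupled from the input size.
    def __init__(self, dim=512, num_latents=256, num_heads=8):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, inputs):  # inputs: (batch, M, dim), any modality
        q = self.latents.unsqueeze(0).expand(inputs.shape[0], -1, -1)
        out, _ = self.attn(q, inputs, inputs)  # latents act as queries
        return out  # (batch, N, dim) bottlenecked representation
```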
|
Perceiver : Perceiver's performance is comparable to ResNet-50 and ViT on ImageNet without 2D convolutions. It attends to 50,000 pixels. It is competitive in all modalities in AudioSet.
|
Perceiver : Convolutional neural network Transformer (machine learning model)
|
Perceiver : DeepMind Perceiver and Perceiver IO | Paper Explained on YouTube Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained) on YouTube, with the Fourier features explained in more detail
|
Instance-based learning : In machine learning, instance-based learning (sometimes called memory-based learning) is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Because computation is postponed until a new instance is observed, these algorithms are sometimes referred to as "lazy." It is called instance-based because it constructs hypotheses directly from the training instances themselves. This means that the hypothesis complexity can grow with the data: in the worst case, a hypothesis is a list of n training items and the computational complexity of classifying a single new instance is O(n). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Instance-based learners may simply store a new instance or throw an old instance away. Examples of instance-based learning algorithms are the k-nearest neighbors algorithm, kernel machines, and RBF networks. These store (a subset of) their training set; when predicting a value/class for a new instance, they compute distances or similarities between this instance and the training instances to make a decision. To battle the memory complexity of storing all training instances, as well as the risk of overfitting to noise in the training set, instance reduction algorithms have been proposed.
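A minimal k-nearest-neighbors sketch showing the "lazy" pattern: nothing is learned up front, and the O(n) distance computation happens only when a query arrives (NumPy used for illustration; train_X and train_y are assumed to be NumPy arrays):

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    # Store-everything learner: compare the query against every stored
    # instance and vote among the k nearest (O(n) work per query).
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[nearest]).most_common(1)[0][0]
```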
|
Instance-based learning : Analogical modeling == References ==
|
Pruning (artificial neural network) : In deep learning, pruning is the practice of removing parameters from an existing artificial neural network. The goal of this process is to reduce the size (parameter count) of the neural network (and therefore the computational resources required to run it) whilst maintaining accuracy. This can be compared to the biological process of synaptic pruning which takes place in mammalian brains during development.
|
Pruning (artificial neural network) : A basic algorithm for pruning is as follows: Evaluate the importance of each neuron. Rank the neurons according to their importance (assuming there is a clearly defined measure for "importance"). Remove the least important neuron. Check a termination condition (to be determined by the user) to see whether to continue pruning.
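A minimal sketch of one iteration of this loop, assuming neuron importance is measured by the L2 norm of the neuron's outgoing weights (one of many possible criteria):

import numpy as np

def prune_step(W, frac=0.1):
    importance = np.linalg.norm(W, axis=1)   # 1. evaluate each neuron's importance
    order = np.argsort(importance)           # 2. rank neurons by importance
    n_remove = max(1, int(frac * len(order)))
    W = W.copy()
    W[order[:n_remove], :] = 0.0             # 3. remove the least important neurons
    return W                                 # 4. the caller checks a termination condition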
|
Pruning (artificial neural network) : Most work on neural network pruning focuses on removing weights, namely, setting their values to zero. Early work also suggested changing the values of the non-pruned weights.
|
Pruning (artificial neural network) : Knowledge distillation Neural Darwinism == References ==
|
Energy-based model : An energy-based model (EBM) (also called Canonical Ensemble Learning or Learning via Canonical Ensemble – CEL and LCE, respectively) is an application of the canonical ensemble formulation from statistical physics to learning from data. The approach appears prominently in generative artificial intelligence. EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models. An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution. Energy-based generative neural networks are a class of generative models that aim to learn explicit probability distributions of data in the form of energy-based models, whose energy functions are parameterized by modern deep neural networks. Boltzmann machines are a special form of energy-based model with a specific parametrization of the energy.
|
Energy-based model : For a given input $x$, the model describes an energy $E_\theta(x)$ such that the Boltzmann distribution $P_\theta(x) = \exp(-\beta E_\theta(x))/Z(\theta)$ is a probability (density), and typically $\beta = 1$. Since the normalization constant $Z(\theta) := \int_{x \in X} \exp(-\beta E_\theta(x))\,dx$ (also known as the partition function) depends on all the Boltzmann factors of all possible inputs $x$, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation. However, for maximizing the likelihood during training, the gradient of the log-likelihood of a single training example $x$ is given by using the chain rule: $\partial_\theta \log(P_\theta(x)) = \mathbb{E}_{x' \sim P_\theta}[\partial_\theta E_\theta(x')] - \partial_\theta E_\theta(x) \quad (*)$. The expectation in the above formula for the gradient can be approximately estimated by drawing samples $x'$ from the distribution $P_\theta$ using Markov chain Monte Carlo (MCMC). Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation via blocked Gibbs sampling. Newer approaches make use of more efficient stochastic gradient Langevin dynamics (LD), drawing samples using $x'_0 \sim P_0$, $x'_{i+1} = x'_i - \frac{\alpha}{2}\frac{\partial E_\theta(x'_i)}{\partial x'_i} + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \alpha)$. A replay buffer of past values $x'_i$ is used with LD to initialize the optimization module. The parameters $\theta$ of the neural network are therefore trained in a generative manner via MCMC-based maximum likelihood estimation: the learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method (e.g., Langevin dynamics or Hybrid Monte Carlo) and then updates the parameters $\theta$ based on the difference between the training examples and the synthesized ones – see equation $(*)$. This process can be interpreted as an alternating mode-seeking and mode-shifting process, and also has an adversarial interpretation. Essentially, the model learns a function $E_\theta$ that assigns low energies to correct values and higher energies to incorrect values. After training, given a converged energy model $E_\theta$, the Metropolis–Hastings algorithm can be used to draw new samples. The acceptance probability is given by $P_{\mathrm{acc}}(x_i \to x^*) = \min\left(1, \frac{P_\theta(x^*)}{P_\theta(x_i)}\right)$.
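A minimal sketch of the Langevin sampling step above on a toy quadratic energy (so samples should approach a standard normal); the energy function, step size, and step count are assumptions for illustration:

import numpy as np

def grad_energy(x):
    # Gradient of the toy energy E(x) = ||x||^2 / 2.
    return x

def langevin_sample(x0, alpha=0.01, n_steps=1000):
    # x_{i+1} = x_i - (alpha/2) * dE/dx + eps,  with eps ~ N(0, alpha)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = np.random.normal(0.0, np.sqrt(alpha), size=x.shape)
        x = x - 0.5 * alpha * grad_energy(x) + noise
    return x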
|
Energy-based model : The term "energy-based models" was first coined in a 2003 JMLR paper in which the authors defined a generalisation of independent component analysis to the overcomplete setting using EBMs. Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
|
Energy-based model : EBMs demonstrate useful properties: Simplicity and stability – the EBM is the only object that needs to be designed and trained; separate networks need not be trained to ensure balance. Adaptive computation time – an EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples; given infinite time, this procedure produces true samples. Flexibility – in variational autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes, whereas EBMs can learn to assign low energies to disjoint regions (multiple modes). Adaptive generation – EBM generators are implicitly defined by the probability distribution and automatically adapt as the distribution changes (without retraining), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples. Compositionality – individual models are unnormalized probability distributions, allowing models to be combined through products of experts or other hierarchical techniques.
|
Energy-based model : On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM generated high-quality images relatively quickly. It supported combining features learned from one type of image to generate other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBMs were also relatively resistant to adversarial perturbations, performing better under such perturbations than classification models explicitly trained against them.
|
Energy-based model : Target applications include natural language processing, robotics, and computer vision. The first energy-based generative neural network was the generative ConvNet, proposed in 2016 for image patterns, in which the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and its variants have made it more effective. These models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution), and data reconstruction (e.g., image reconstruction and linear interpolation).
|
Energy-based model : EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
|
Energy-based model : Empirical likelihood Posterior predictive distribution Contrastive learning
|
Energy-based model : Du, Yilun; Mordatch, Igor. "Implicit Generation and Generalization in Energy-Based Models". https://arxiv.org/abs/1903.08689 Grathwohl, Will; Wang, Kuan-Chieh; Jacobsen, Jörn-Henrik; Duvenaud, David; Norouzi, Mohammad; Swersky, Kevin. "Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One". https://arxiv.org/abs/1912.03263
|
Energy-based model : "CIAR NCAP Summer School". www.cs.toronto.edu. Retrieved 2019-12-27. Dayan, Peter; Hinton, Geoffrey; Neal, Radford; Zemel, Richard S. (1999), "Helmholtz Machine", Unsupervised Learning, The MIT Press, doi:10.7551/mitpress/7011.003.0017, ISBN 978-0-262-28803-3 Hinton, Geoffrey E. (August 2002). "Training Products of Experts by Minimizing Contrastive Divergence". Neural Computation. 14 (8): 1771–1800. doi:10.1162/089976602760128018. ISSN 0899-7667. PMID 12180402. S2CID 207596505. Salakhutdinov, Ruslan; Hinton, Geoffrey (2009-04-15). "Deep Boltzmann Machines". Artificial Intelligence and Statistics: 448–455.
|
Machine translation software usability : The sections below give objective criteria for evaluating the usability of machine translation software output.
|
Machine translation software usability : Do repeated translations converge on a single expression in both languages? That is, does the translation method show stationarity or produce a canonical form? Does the translation become stationary without losing the original meaning? This metric has been criticized as not being well correlated with BLEU (BiLingual Evaluation Understudy) scores.
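A minimal sketch of such a stationarity check; translate(text, src, tgt) is a hypothetical stand-in for whatever MT system is being evaluated:

def is_stationary(text, translate, src="en", tgt="fr", max_rounds=10):
    # Repeatedly round-trip src -> tgt -> src and test whether the text
    # reaches a fixed point (a canonical form) within max_rounds.
    for _ in range(max_rounds):
        round_trip = translate(translate(text, src, tgt), tgt, src)
        if round_trip == text:
            return True   # the translation has become stationary
        text = round_trip
    return False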
|
Machine translation software usability : Is the system adaptive to colloquialism, argot or slang? The French language has many rules for creating words in the speech and writing of popular culture. Two such rules are: (a) the reverse spelling of words, such as femme to meuf (this is called verlan); (b) the attachment of the suffix -ard to a noun or verb to form a proper noun. For example, the noun faluche means "student hat". The word faluchard, formed from faluche, can colloquially mean, depending on context, "a group of students", "a gathering of students" or "behavior typical of a student". As of 28 December 2006, the Google translator does not derive constructed words such as those produced by rule (b), as shown here: Il y a une chorale falucharde mercredi, venez nombreux, les faluchards chantent des paillardes! ==> There is a choral society falucharde Wednesday, come many, the faluchards sing loose-living women! French argot has three levels of usage: familier or friendly, acceptable among friends, family and peers but not at work; grossier or swear words, acceptable among friends and peers but not at work or in family; verlan or ghetto slang, acceptable among lower classes but not among middle or upper classes. The United States National Institute of Standards and Technology conducts annual evaluations [1] of machine translation systems based on the BLEU-4 criterion [2]. A combined method called IQmt, which incorporates BLEU and the additional metrics NIST, GTM, ROUGE and METEOR, has been implemented by Gimenez and Amigo [3].
|
Machine translation software usability : Is the output grammatical or well-formed in the target language? Using an interlingua should be helpful in this regard, because with a fixed interlingua one should be able to write a grammatical mapping from the interlingua to the target language. Consider the following Arabic-language input and English-language translation result from the Google translator as of 27 December 2006 [4]. This output does not parse under any reasonable English grammar: وعن حوادث التدافع عند شعيرة رمي الجمرات -التي كثيرا ما يسقط فيها العديد من الضحايا- أشار الأمير نايف إلى إدخال "تحسينات كثيرة في جسر الجمرات ستمنع بإذن الله حدوث أي تزاحم". ==> And incidents at the push Carbuncles-throwing ritual, which often fall where many of the victims - Prince Nayef pointed to the introduction of "many improvements in bridge Carbuncles God would stop the occurrence of any competing."
|
Machine translation software usability : Do repeated re-translations preserve the semantics of the original sentence? For example, consider the following English input passed multiple times into and out of French using the Google translator as of 27 December 2006: Better a day earlier than a day late. ==> Améliorer un jour plus tôt qu'un jour tard. ==> To improve one day earlier than a day late. ==> Pour améliorer un jour plus tôt qu'un jour tard. ==> To improve one day earlier than a day late. As noted above, this kind of round-trip translation is a very unreliable method of evaluation.
|
Machine translation software usability : An interesting peculiarity of Google Translate as of 24 January 2008 (corrected as of 25 January 2008) was the following result when translating from English to Spanish, which showed an embedded joke in the English-Spanish dictionary that had some added poignancy given then-recent events: Heath Ledger is dead ==> Tom Cruise está muerto. This raises the issue of trustworthiness when relying on a machine translation system embedded in a life-critical system in which the translation system feeds a safety-critical decision-making process. It also raises the issue of whether, in a given use, the software of the machine translation system is safe from hackers. It is not known whether this feature of Google Translate was the result of a joke or hack, or perhaps an unintended consequence of a method such as statistical machine translation. Reporters from CNET Networks asked Google for an explanation on January 24, 2008; Google said only that it was an "internal issue with Google Translate". The mistranslation was the subject of much hilarity and speculation on the Internet. If it was an unintended consequence of statistical machine translation rather than a joke or hack, then this event demonstrates a potential source of critical unreliability in that method. In human translations, in particular on the part of interpreters, selectivity on the part of the translator is often commented on when one of the two parties being served by the interpreter knows both languages. This leads to the issue of whether a particular translation could be considered verifiable. In this case, a converging round-trip translation would be a kind of verification.
|
Machine translation software usability : Comparison of machine translation applications Evaluation of machine translation Round-trip translation Translation
|
Machine translation software usability : Gimenez, Jesus and Enrique Amigo. (2005) IQmt: A framework for machine translation evaluation. NIST. Annual machine translation system evaluations and evaluation plan. Papineni, Kishore, Salim Roukos, Todd Ward and Wei-Jing Zhu. (2002) BLEU: A Method for automatic evaluation of machine translation. Proc. 40th Annual Meeting of the ACL, July, 2002, pp. 311–318.
|
Retrieval-augmented generation : Retrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence (Gen AI) models to retrieve and incorporate new information. It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified set of documents, using this information to supplement its pre-existing training data. This allows LLMs to use domain-specific and/or updated information. Use cases include providing chatbot access to internal company data and generating responses based on authoritative sources. RAG improves LLMs by incorporating information retrieval before generating responses. Unlike traditional LLMs that rely on static training data, RAG pulls relevant text from databases, uploaded documents, or web sources. According to Ars Technica, "RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process to help LLMs stick to the facts." This method helps reduce AI hallucinations, which have led to real-world issues such as chatbots inventing policies or lawyers citing nonexistent legal cases. By dynamically retrieving information, RAG enables AI to provide more accurate responses without frequent retraining. According to IBM, "RAG also reduces the need for users to continuously train the model on new data and update its parameters as circumstances evolve. In this way, RAG can lower the computational and financial costs of running LLM-powered chatbots in an enterprise setting." Beyond efficiency gains, RAG also allows LLMs to include source references in their responses, enabling users to verify information by reviewing cited documents or original sources. This can provide greater transparency, as users can cross-check retrieved content to ensure accuracy and relevance.
|
Retrieval-augmented generation : In June 2024, Ars Technica reported, "But LLMs aren’t humans, of course. Their training data can age quickly, particularly in more time-sensitive queries. In addition, the LLM often can’t distinguish specific sources of its knowledge, as all its training data is blended together into a kind of soup." In 2023, during its launch demonstration, Google’s Bard provided incorrect information about the James Webb Space Telescope, an error that contributed to a $100 billion decline in Alphabet’s stock value. RAG systems do not inherently verify the credibility of the sources they retrieve from, which can lead to misleading or inaccurate responses. AI systems can generate misinformation even when pulling from factually correct sources if they misinterpret the context. MIT Technology Review gives the example of an AI-generated response stating, "The United States has had one Muslim president, Barack Hussein Obama." The model retrieved this from an academic book titled Barack Hussein Obama: America’s First Muslim President? and misinterpreted the content, generating a false claim. Unlike LLMs that rely solely on pre-existing training data, RAG integrates newly available data at query time. Ars Technica states, "The beauty of RAG is that when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information." The BBC describes "prompt stuffing" as a technique within RAG, in which relevant context is inserted into a prompt to guide the model’s response. This approach provides the LLM with key information early in the prompt, encouraging it to prioritize the supplied data over its pre-existing training knowledge.
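A minimal sketch of prompt stuffing; the template wording and passage format are assumptions for illustration, not a quoted production prompt:

def stuff_prompt(question, passages):
    # Place retrieved context early in the prompt so the model is encouraged
    # to ground its answer in the supplied passages rather than its training data.
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below, citing passage numbers.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )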
|
Retrieval-augmented generation : RAG enhances large language models by incorporating an information-retrieval mechanism that allows models to access and utilize additional data beyond their original training set. AWS states, "RAG allows LLMs to retrieve relevant information from external data sources to generate more accurate and contextually relevant responses" (indexing). This approach reduces reliance on static datasets, which can quickly become outdated. When a user submits a query, RAG uses a document retriever to search for relevant content from available sources before incorporating the retrieved information into the model’s response (retrieval). Ars Technica notes that "when new information becomes available, rather than having to retrain the model, all that’s needed is to augment the model’s external knowledge base with the updated information" (augmentation). By dynamically integrating relevant data, RAG enables LLMs to generate more informed and contextually grounded responses (generation). IBM states that "in the generative phase, the LLM draws from the augmented prompt and its internal representation of its training data to synthesize an engaging answer tailored to the user in that instant."
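A minimal sketch of the indexing, retrieval, augmentation and generation flow above, reusing the stuff_prompt sketch earlier in this article; embed() and generate() are hypothetical stand-ins for an embedding model and an LLM, and cosine similarity is one common but assumed choice of retrieval score:

import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=3):
    # Indexing is assumed already done: each chunk in `docs` has a
    # precomputed embedding row in `doc_vecs`.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    top = np.argsort(sims)[::-1][:k]  # retrieval: top-k chunks by cosine similarity
    return [docs[i] for i in top]

def rag_answer(question, docs, doc_vecs, embed, generate):
    passages = retrieve(embed(question), doc_vecs, docs)  # retrieval
    prompt = stuff_prompt(question, passages)             # augmentation
    return generate(prompt)                               # generation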
|
Retrieval-augmented generation : Improvements to the basic process above can be applied at different stages in the RAG flow.
|