Data preprocessing : Data preprocessing can refer to the manipulation, filtration or augmentation of data before it is analyzed, and is often an important step in the data mining process. Data collection methods are often loosely controlled, resulting in out-of-range values, impossible data combinations, and missing values, amongst other issues. Preprocessing transforms raw, unstructured data into intelligible representations suitable for machine-learning models, dealing with noise and missing values so that analysis of the original, noisy data set yields better results. The preprocessing pipeline used can have large effects on the conclusions drawn from the downstream analysis, so ensuring the representation and quality of the data is necessary before running any analysis. Data preprocessing is often the most important phase of a machine learning project, especially in computational biology. If a high proportion of irrelevant and redundant information is present, or the data is noisy and unreliable, then knowledge discovery during the training phase may be more difficult. Data preparation and filtering steps can take a considerable amount of processing time. Examples of methods used in data preprocessing include cleaning, instance selection, normalization, one-hot encoding, data transformation, feature extraction and feature selection.
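As an illustration, here is a minimal sketch of such a pipeline using scikit-learn; the dataset and column names are invented for the example, which shows handling of missing and out-of-range values, normalization, and one-hot encoding:

```python
# A minimal sketch of a typical preprocessing pipeline using scikit-learn.
# The column names and data are hypothetical; they only illustrate the
# cleaning, normalization, and one-hot encoding steps named above.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, None, 47, 390],          # missing and out-of-range values
    "income": [40_000, 52_000, None, 61_000],
    "city": ["Oslo", "Lima", "Oslo", None],
})

# Clip an impossible out-of-range value before modeling (simple filtration).
df["age"] = df["age"].clip(upper=120)

numeric = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # handle missing values
    ("scale", StandardScaler()),                    # normalization
])
categorical = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("onehot", OneHotEncoder(handle_unknown="ignore")),  # one-hot encoding
])

preprocess = ColumnTransformer([
    ("num", numeric, ["age", "income"]),
    ("cat", categorical, ["city"]),
])

X = preprocess.fit_transform(df)
print(X.shape)
```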
|
Data preprocessing : Online Data Processing Compendium. Data preprocessing in predictive data mining. Knowledge Engineering Review 34: e1 (2019).
|
Feature engineering : Feature engineering is a preprocessing step in supervised machine learning and statistical modeling which transforms raw data into a more effective set of inputs. Each input comprises several attributes, known as features. By providing models with relevant information, feature engineering significantly enhances their predictive accuracy and decision-making capability. Beyond machine learning, the principles of feature engineering are applied in various scientific fields, including physics. For example, physicists construct dimensionless numbers such as the Reynolds number in fluid dynamics, the Nusselt number in heat transfer, and the Archimedes number in sedimentation. They also develop first approximations of solutions, such as analytical solutions for the strength of materials in mechanics.
|
Feature engineering : One application of feature engineering is the clustering of feature-objects or sample-objects in a dataset. In particular, feature engineering based on matrix decomposition has been extensively used for data clustering under non-negativity constraints on the feature coefficients. These methods include Non-negative Matrix Factorization (NMF), Non-negative Matrix Tri-Factorization (NMTF), and Non-negative Tensor Decomposition/Factorization (NTF/NTD). The non-negativity constraints on the coefficients of the feature vectors mined by these algorithms yield a part-based representation, and the different factor matrices exhibit natural clustering properties. Several extensions of these feature engineering methods have been reported in the literature, including orthogonality-constrained factorization for hard clustering and manifold learning to overcome inherent issues with these algorithms. Another class of feature engineering algorithms leverages a common hidden structure across multiple inter-related datasets to obtain a consensus (common) clustering scheme. An example is Multi-view Classification based on Consensus Matrix Decomposition (MCMD), which mines a common clustering scheme across multiple datasets. MCMD is designed to output two types of class labels (scale-variant and scale-invariant clustering); it is computationally robust to missing information, can detect shape- and scale-based outliers, and can handle high-dimensional data effectively. Coupled matrix and tensor decompositions are also popular in multi-view feature engineering.
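For illustration, here is a minimal sketch of NMF-based clustering on synthetic non-negative data, using scikit-learn's NMF as one common implementation; cluster labels are read off the non-negative coefficient matrix, as described above:

```python
# A minimal sketch of NMF-based clustering: the non-negative factor matrix W
# is used directly for cluster assignment. Data here is synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 20)))   # non-negative data matrix

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)               # sample coefficients (100 x 3)
H = model.components_                    # feature basis (3 x 20)

# Each sample is assigned to the factor with the largest coefficient,
# exploiting the natural clustering property of the part-based representation.
labels = W.argmax(axis=1)
print(labels[:10])
```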
|
Feature engineering : Feature engineering in machine learning and statistical modeling involves selecting, creating, transforming, and extracting data features. Key components include creating features from existing data, transforming and imputing missing or invalid features, reducing data dimensionality through methods like Principal Components Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and selecting the most relevant features for model training based on importance scores and correlation matrices. Features vary in significance, and even relatively insignificant features may contribute to a model. Feature selection can reduce the number of features to prevent a model from becoming too specific to the training data set (overfitting). Feature explosion occurs when the number of identified features is too large for effective model estimation or optimization. Common causes include feature templates (implementing feature templates instead of coding new features) and feature combinations (combinations that cannot be represented by a linear system). Feature explosion can be limited via techniques such as regularization, kernel methods, and feature selection.
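As a hedged illustration of two of these components, the sketch below applies PCA for dimensionality reduction and importance-score-based feature selection to synthetic data; the variance threshold and estimator are arbitrary choices, not prescribed by any particular method:

```python
# A minimal sketch combining two techniques named above: dimensionality
# reduction with PCA and feature selection by importance scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           random_state=0)

# Reduce dimensionality: keep enough components for 95% of the variance.
X_pca = PCA(n_components=0.95).fit_transform(X)

# Select features by importance scores from a fitted model.
selector = SelectFromModel(RandomForestClassifier(random_state=0)).fit(X, y)
X_sel = selector.transform(X)

print(X.shape, X_pca.shape, X_sel.shape)
```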
|
Feature engineering : Automation of feature engineering is a research topic that dates back to the 1990s. Machine learning software that incorporates automated feature engineering has been commercially available since 2016. Related academic literature can be roughly separated into two types: Multi-relational decision tree learning (MRDTL) uses a supervised algorithm that is similar to a decision tree. Deep Feature Synthesis uses simpler methods.
|
Feature engineering : The feature store is where the features are stored and organized for the explicit purpose of being used to either train models (by data scientists) or make predictions (by applications that have a trained model). It is a central location where you can either create or update groups of features created from multiple different data sources, or create and update new datasets from those feature groups for training models or for use in applications that do not want to compute the features but simply retrieve them when needed to make predictions. A feature store includes the ability to store code used to generate features, apply the code to raw data, and serve those features to models upon request. Useful capabilities include feature versioning and policies governing the circumstances under which features can be used. Feature stores can be standalone software tools or built into machine learning platforms.
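A toy sketch of these capabilities follows; real feature stores are far more elaborate, and every name here is hypothetical:

```python
# A toy in-memory feature store illustrating the capabilities described above:
# storing feature-generating code, applying it to raw data, and serving
# versioned features on request.
from typing import Any, Callable

class FeatureStore:
    def __init__(self) -> None:
        self._features: dict[tuple[str, int], Callable[[Any], Any]] = {}

    def register(self, name: str, version: int,
                 fn: Callable[[Any], Any]) -> None:
        """Store the code used to generate a feature, keyed by version."""
        self._features[(name, version)] = fn

    def serve(self, name: str, version: int, raw: Any) -> Any:
        """Apply the stored code to raw data and serve the feature value."""
        return self._features[(name, version)](raw)

store = FeatureStore()
store.register("age_bucket", 1, lambda row: row["age"] // 10)
print(store.serve("age_bucket", 1, {"age": 37}))  # -> 3
```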
|
Feature engineering : Feature engineering can be a time-consuming and error-prone process, as it requires domain expertise and often involves trial and error. Deep learning algorithms may be used to process a large raw dataset without having to resort to feature engineering. However, deep learning algorithms still require careful preprocessing and cleaning of the input data. In addition, choosing the right architecture, hyperparameters, and optimization algorithm for a deep neural network can be a challenging and iterative process.
|
Feature engineering : Covariate Data transformation Feature extraction Feature learning Hashing trick Instrumental variables estimation Kernel method List of datasets for machine learning research Scale co-occurrence matrix Space mapping
|
CIML community portal : The computational intelligence and machine learning (CIML) community portal is an international multi-university initiative. Its primary purpose is to help facilitate a virtual scientific community infrastructure for all those involved with, or interested in, computational intelligence and machine learning. This includes CIML research-, education-, and application-oriented resources residing at the portal and others that are linked from the CIML site.
|
CIML community portal : The CIML community portal was created to facilitate an online virtual scientific community wherein anyone interested in CIML can share research, obtain resources, or simply learn more. The effort is currently led by Jacek Zurada (principal investigator), with Rammohan Ragade and Janusz Wojtusiak, aided by a team of 25 volunteer researchers from 13 different countries. The ultimate goal of the CIML community portal is to accommodate and cater to a broad range of users, including experts, students, the public, and outside researchers interested in using CIML methods and software tools. Each community member and user will be guided through the portal resources and tools based on their respective CIML experience (e.g. expert, student, outside researcher) and goals (e.g. collaboration, education). A preliminary version of the community's portal, with limited capabilities, is now operational and available for users. All electronic resources on the portal are peer-reviewed to ensure high quality and cite-ability for literature.
|
CIML community portal : Jacek M. Zurada, Janusz Wojtusiak, Fahmida Chowdhury, James E. Gentle, Cedric J. Jeannot, and Maciej A. Mazurowski, Computational Intelligence Virtual Community: Framework and Implementation Issues, Proceedings of the IEEE World Congress on Computational Intelligence, Hong Kong, June 1–6, 2008. Jacek M. Zurada, Janusz Wojtusiak, Maciej A. Mazurowski, Devendra Mehta, Khalid Moidu, Steve Margolis, Toward Multidisciplinary Collaboration in the CIML Virtual Community, Proceedings of the 2008 Workshop on Building Computational Intelligence and Machine Learning Virtual Organizations, pp. 62–66 Chris Boyle, Artur Abdullin, Rammohan Ragade, Maciej A. Mazurowski, Janusz Wojtusiak, Jacek M. Zurada, Workflow considerations in the emerging CI-ML virtual organization, Proceedings of the 2008 Workshop on Building Computational Intelligence and Machine Learning Virtual Organizations, pp. 67–70
|
CIML community portal : Artificial Intelligence Computational Intelligence Machine Learning National Science Foundation
|
Neural gas : Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten. Neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was coined "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied where data compression or vector quantization is an issue, for example in speech recognition, image processing or pattern recognition. As a robustly converging alternative to k-means clustering, it is also used for cluster analysis.
|
Neural gas : Suppose we want to model a probability distribution $P(x)$ of data vectors $x$ using a finite number of feature vectors $w_i$, where $i = 1, \dots, N$. For each time step $t$: sample a data vector $x$ from $P(x)$; compute the distance between $x$ and each feature vector; rank the distances, letting $i_0$ be the index of the closest feature vector, $i_1$ the index of the second closest, and so on; then update each feature vector by $w_{i_k}^{t+1} = w_{i_k}^{t} + \varepsilon \cdot e^{-k/\lambda} \cdot (x - w_{i_k}^{t}), \quad k = 0, \dots, N-1$. In the algorithm, $\varepsilon$ can be understood as the learning rate and $\lambda$ as the neighborhood range. $\varepsilon$ and $\lambda$ are reduced with increasing $t$ so that the algorithm converges after many adaptation steps. The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. By adapting not only the closest feature vector but all of them, with a step size decreasing with increasing distance order, a much more robust convergence can be achieved than with (online) k-means clustering. The neural gas model neither deletes nodes nor creates new ones.
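A minimal NumPy sketch of this adaptation loop follows; the data distribution and the exponential parameter schedules are illustrative choices, not canonical settings:

```python
# A minimal NumPy sketch of the neural gas update described above.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))        # samples from P(x)
N = 10                                   # number of feature vectors
w = rng.normal(size=(N, 2))              # feature vectors w_i

T = 5000
eps_i, eps_f = 0.5, 0.01                 # learning-rate schedule
lam_i, lam_f = N / 2.0, 0.01             # neighborhood-range schedule

for t in range(T):
    x = data[rng.integers(len(data))]    # sample a data vector
    # Rank feature vectors by distance to x: position k is the distance order.
    order = np.argsort(np.linalg.norm(w - x, axis=1))
    eps = eps_i * (eps_f / eps_i) ** (t / T)
    lam = lam_i * (lam_f / lam_i) ** (t / T)
    # Update every feature vector with a step decaying in its rank k.
    k = np.arange(N)
    w[order] += eps * np.exp(-k / lam)[:, None] * (x - w[order])
```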
|
Neural gas : A number of variants of the neural gas algorithm exist in the literature to mitigate some of its shortcomings. Perhaps the most notable is Bernd Fritzke's growing neural gas, but further elaborations such as the Growing When Required network and the incremental growing neural gas should also be mentioned. A performance-oriented approach that avoids the risk of overfitting is the plastic neural gas model.
|
Neural gas : To find the ranking $i_0, i_1, \ldots, i_{N-1}$ of the feature vectors, the neural gas algorithm involves sorting, a procedure that does not lend itself easily to parallelization or implementation in analog hardware. However, implementations in both parallel software and analog hardware have in fact been designed.
|
Neural gas : T. Martinetz, S. Berkovich, and K. Schulten. "Neural-gas" Network for Vector Quantization and its Application to Time-Series Prediction. IEEE Transactions on Neural Networks, 4(4):558–569, 1993. Martinetz, T.; Schulten, K. (1994). "Topology representing networks". Neural Networks. 7 (3): 507–522. doi:10.1016/0893-6080(94)90109-0.
|
Neural gas : DemoGNG.js, a JavaScript simulator for neural gas (and other network models); Java Competitive Learning Applications, unsupervised neural networks (including the self-organizing map) in Java with source code; a formal description of the neural gas algorithm; a GNG and GWR classifier implementation in MATLAB.
|
Agentic AI : Agentic AI is a class of artificial intelligence that focuses on autonomous systems which can make decisions and perform tasks without human intervention. These independent systems automatically respond to conditions to produce process results. The field is closely linked to agentic automation, also known as agent-based process management systems (APMS), when applied to process automation. Applications include software development, customer support, cybersecurity and business intelligence.
|
Agentic AI : The core concept of agentic AI is the use of AI agents to perform automated tasks without human intervention. While robotic process automation (RPA) and AI agents can be programmed to automate specific tasks or support rule-based decisions, the rules are usually fixed. Agentic AI operates independently, making decisions through continuous learning and analysis of external data and complex data sets. Functioning agents can require various AI techniques, such as natural language processing, machine learning (ML), and computer vision, depending on the environment.
|
Agentic AI : Some scholars trace the conceptual roots of agentic AI to Alan Turing's mid-20th-century work on machine intelligence and Norbert Wiener's work on feedback systems. The term agent-based process management system (APMS) was used as far back as 1998 to describe the concept of using autonomous agents for business process management. The psychological principle of agency was also discussed in the 2008 work of psychologist Albert Bandura, who studied how humans can shape their environments; this research would shape how humans modeled and developed artificial intelligence agents. Additional milestones of agentic AI include IBM's Deep Blue, which demonstrated how agency could work within a confined domain, advances in machine learning in the 2000s, the integration of AI into robotics, and the rise of generative AI such as OpenAI's GPT models and Salesforce's Agentforce platform. Research firm Forrester named agentic AI a top emerging technology for 2025.
|
Agentic AI : Applications using agentic AI include: Software development - AI coding agents can write large pieces of code and review it; agents can even perform non-code tasks such as reverse engineering specifications from code. Customer support automation - AI agents can improve customer service by enabling chatbots to answer a wider variety of questions, rather than relying on a limited set of answers pre-programmed by humans. Enterprise workflows - AI agents can automate routine tasks by processing pooled data, as opposed to a company needing APIs preprogrammed for specific tasks. Cybersecurity and threat detection - AI agents can automatically detect and mitigate threats in real time, and security responses can be automated based on the type of threat. Business intelligence - AI agents can support business intelligence to produce more useful analytics, such as responding to natural language voice prompts.
|
Agentic AI : Agentic automation, sometimes referred to as agentic process automation, refers to applying agentic AI to generate and operate workflows. In one example, large language models can construct and execute automated (agentic) workflows, reducing or eliminating the need for human intervention. While agentic AI is characterized by its decision-making and action-taking capabilities, generative AI is distinguished by its ability to generate original content based on learned patterns. Robotic process automation (RPA) describes how software tools can automate repetitive tasks with predefined workflows and structured data handling; RPA's static instructions limit its value. Agentic AI is more dynamic, allowing unstructured data to be processed and analyzed, including contextual analysis, and allowing interaction with users.
|
Is This What We Want? : Is This What We Want? is an album by various artists, released on 25 February 2025 through Virgin Music Group. It consists of silence recorded in recording studios, protesting the use of unlicensed copyrighted work to train artificial intelligence. The track titles form the sentence "The British Government must not legalise music theft to benefit AI companies". Profits from the album will go toward Help Musicians.
|
Is This What We Want? : Rapid progress in AI technology, constituting an AI boom, was brought to widespread public attention in the early 2020s by text-to-image models such as DALL-E, Midjourney, and Stable Diffusion, which were able to generate complex images that convincingly resembled human-made artworks. The proliferation of such image generation algorithms coincided with the release of GPT-3 and development of GPT-4, advanced large language models which produce highly convincing text. These transformer-based models designed to create new content from prompts are collectively called generative artificial intelligence, and they require vast sets of training data. This data often consists of text, images, and other media scraped from the web, prompting concerns that the AI products may violate intellectual property rights. Suno AI and Udio, two AI startups whose products generate music recordings following user-submitted prompts, were sued in 2024 by Sony Music, Warner Music Group, and Universal Music Group, who alleged that the companies used copyrighted recordings in their training data without authorization. In December 2024, the UK government announced a consultation on copyright and AI, outlining a preferred approach that would see the introduction of a data mining copyright exception with a rights reservation package for rights holders. In the months following the announcement of the consultation, a number of prominent musicians, including Paul McCartney and Elton John, warned of the threat it posed to artists.
|
Is This What We Want? : Is This What We Want? consists of 12 tracks, each uncredited. 1,000 artists are credited as co-writers, including Kate Bush, Damon Albarn, Tori Amos, Annie Lennox, Pet Shop Boys, Billy Ocean, the Clash, Ed O'Brien, Dan Smith, Jamiroquai, Mystery Jets, Hans Zimmer, Imogen Heap, Yusuf/Cat Stevens, Max Richter, the King's Singers, the Sixteen, John Rutter, and James MacMillan. The album was organised by the British composer Ed Newton-Rex, who had previously held a position in Stable Diffusion's parent company Stability AI.
|
Is This What We Want? : The album debuted at number 38 on the UK Albums Downloads Chart.
|
Is This What We Want? : Sleepify by Vulfpeck, an entirely silent album; 4'33", a John Cage composition which instructs the performers to remain silent.
|
Schema-agnostic databases : Schema-agnostic databases or vocabulary-independent databases aim to abstract users from the representation of the data by supporting automatic semantic matching between queries and databases. Schema-agnosticism is the property of a database of automatically mapping a query, issued with the user's terminology and structure, to the dataset vocabulary. The increase in the size and in the semantic heterogeneity of database schemas brings new requirements for users querying and searching structured data. At this scale it can become unfeasible for data consumers to be familiar with the representation of the data in order to query it. At the center of this discussion is the semantic gap between users and databases, which becomes more central as the scale and complexity of the data grow.
|
Schema-agnostic databases : The evolution of data environments towards the consumption of data from multiple data sources and the growth in the schema size, complexity, dynamicity and decentralisation (SCoDD) of schemas increases the complexity of contemporary data management. The SCoDD trend emerges as a central data management concern in Big Data scenarios, where users and applications have a demand for more complete data, produced by independent data sources, under different semantic assumptions and contexts of use, which is the typical scenario for Semantic Web Data applications. The evolution of databases in the direction of heterogeneous data environments strongly impacts the usability, semiotics and semantic assumptions behind existing data accessibility methods such as structured queries, keyword-based search and visual query systems. With schema-less databases containing potentially millions of dynamically changing attributes, it becomes unfeasible for some users to become aware of the 'schema' or vocabulary in order to query the database. At this scale, the effort in understanding the schema in order to build a structured query can become prohibitive.
|
Schema-agnostic databases : Schema-agnostic queries can be defined as query approaches over structured databases which allow users to satisfy complex information needs without understanding the representation (schema) of the database. Similarly, Tran et al. define them as "search approaches, which do not require users to know the schema underlying the data". Approaches such as keyword-based search over databases allow users to query databases without employing structured queries. However, as discussed by Tran et al.: "From these points, users however have to do further navigation and exploration to address complex information needs. Unlike keyword search used on the Web, which focuses on simple needs, the keyword search elaborated here is used to obtain more complex results. Instead of a single set of resources, the goal is to compute complex sets of resources and their relations." The development of approaches to support natural language interfaces (NLI) over databases has aimed towards the goal of schema-agnostic queries. Complementarily, some approaches based on keyword search have targeted keyword-based queries which express more complex information needs. Other approaches have explored the construction of structured queries over databases where schema constraints can be relaxed. All these approaches (natural language, keyword-based search and structured queries) have targeted different degrees of sophistication in addressing the problem of supporting a flexible semantic matching between queries and data, varying from the complete absence of the semantic concern to more principled semantic models. While the demand for schema-agnosticism has been an implicit requirement across semantic search and natural language query systems over structured data, it has not been sufficiently individuated as a concept and as a necessary requirement for contemporary database management systems. Recent works have started to define and model the semantic aspects involved in schema-agnostic queries.
|
Schema-agnostic databases : As of 2016 the concept of schema-agnostic queries has been developed primarily in academia. Most schema-agnostic query systems have been investigated in the context of natural language interfaces over databases or over the Semantic Web. These works explore the application of semantic parsing techniques over large, heterogeneous and schema-less databases. More recently, the individuation of the concept of schema-agnostic query systems and databases has appeared more explicitly in the literature. Freitas et al. provide a probabilistic model of the semantic complexity of mapping schema-agnostic queries.
|
Arabic Ontology : Arabic Ontology is a linguistic ontology for the Arabic language, which can be used as an Arabic WordNet with ontologically clean content. It is also used as a tree (i.e., a classification) of the concepts/meanings of Arabic terms. It is a formal representation of the concepts that Arabic terms convey, and its content is ontologically well-founded and benchmarked to scientific advances and rigorous knowledge sources rather than to speakers' naïve beliefs, as wordnets typically are. The ontology tree can be explored online.
|
Arabic Ontology : The ontology structure (i.e., data model) is similar to the WordNet structure. Each concept in the ontology is given a unique concept identifier (URI), informally described by a gloss, and lexicalized by one or more synonymous lemma terms. Each term-concept pair is called a sense and is given a SenseID. A set of senses is called a synset. Concepts and senses are described by further attributes such as era and area (to specify when and where they are used), lexicalization type, example sentence, example instances, ontological analysis, and others. Semantic relations (e.g., SubTypeOf, PartOf, and others) are defined between concepts. Some important individuals are included in the ontology, such as individual countries and seas. These individuals are given separate IndividualIDs and linked with their concepts through the InstanceOf relation.
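A rough sketch of this data model in Python dataclasses follows; the field names are inferred from the description above, not taken from the ontology's actual schema:

```python
# A hedged sketch of the concept/sense/synset data model described above.
from dataclasses import dataclass, field

@dataclass
class Concept:
    concept_id: str            # unique concept identifier (URI)
    gloss: str                 # informal description
    subtype_of: list[str] = field(default_factory=list)  # semantic relations

@dataclass
class Sense:
    sense_id: str              # SenseID of a term-concept pair
    term: str                  # lemma term lexicalizing the concept
    concept_id: str
    era: str | None = None     # when the sense is used
    area: str | None = None    # where the sense is used

# A synset is the set of senses sharing one concept.
synset = [Sense("s1", "بحر", "c293198"), Sense("s2", "يمّ", "c293198")]
```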
|
Arabic Ontology : Concepts in the Arabic Ontology are mapped to synsets in WordNet, as well as to BFO and DOLCE. Terms used in the Arabic Ontology are mapped to lemmas in the LDC's SAMA database.
|
Arabic Ontology : The Arabic Ontology can be seen as a next generation of WordNet - or as an ontologically clean Arabic WordNet. It follows the same structure (i.e., data model) as WordNet, and it is fully mapped to WordNet. However, there are critical foundational differences between them: the ontology is benchmarked on state-of-the-art scientific discoveries, while WordNet is benchmarked on native speakers' naïve knowledge; the ontology is governed by scientifically and philosophically well-established top levels; unlike WordNet, all concepts in the ontology are formal, i.e., a concept is a set of individuals (i.e., a class), so concepts like (horizon) are not allowed in the ontology; and glosses in the ontology are strictly formulated and focus on the distinguishing characteristics, which is not the case in WordNet.
|
Arabic Ontology : The Arabic Ontology can be used in many application domains; such as: Information retrieval, to enrich queries (e.g., in search engines) and improve the quality of the results, i.e. meaningful search rather than string-matching search; Machine translation and word-sense disambiguation, by finding the exact mapping of concepts across languages, especially that the Arabic ontology is also mapped to the WordNet; Data Integration and interoperability in which the Arabic ontology can be used as a semantic reference to link databases and information systems; Semantic Web and Web 3.0, by using the Arabic ontology as a semantic reference to disambiguate the meanings used in websites; among many other applications.
|
Arabic Ontology : The URLs in the Arabic Ontology are designed according to the W3C's Best Practices for Publishing Linked Data, as described in the following URL schemes. This allows one to also explore the whole database like exploring a graph: Ontology Concept: Each concept in the Arabic Ontology has a ConceptID and can be accessed using: https:///concept/. In case of a term, the set of concepts that this term lexicalizes are all retrieved. In case of a ConceptID, the concept and its direct subtypes are retrieved, e.g. https://ontology.birzeit.edu/concept/293198 Semantic relations: Relationships between concepts can be accessed using these schemes: (i) the URL: https:///concept// allows retrieval of relationships among ontology concepts. (ii) the URL: https:///lexicalconcept// allows retrieval of relations between lexical concepts. For example, https://ontology.birzeit.edu/concept/instances/293121 retrieves the instances of the concept 293121. The relations that are currently used in our database are: .
|
GPT-4o : GPT-4o ("o" for "omni") is a multilingual, multimodal generative pre-trained transformer developed by OpenAI and released in May 2024. GPT-4o is free, but ChatGPT Plus subscribers have higher usage limits. It can process and generate text, images and audio. Its application programming interface (API) is faster and cheaper than its predecessor, GPT-4 Turbo.
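For reference, here is a minimal sketch of calling GPT-4o through the OpenAI Python SDK; it assumes an OPENAI_API_KEY in the environment, and the prompt is an arbitrary example:

```python
# A minimal sketch of using the GPT-4o API via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize GPT-4o in one sentence."}],
)
print(response.choices[0].message.content)
```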
|
GPT-4o : Multiple versions of GPT-4o were secretly launched under different names on the Large Model Systems Organization's (LMSYS) Chatbot Arena as three different models: gpt2-chatbot, im-a-good-gpt2-chatbot, and im-also-a-good-gpt2-chatbot. On 7 May 2024, OpenAI CEO Sam Altman tweeted "im-a-good-gpt2-chatbot", which was commonly interpreted as confirmation that these were new OpenAI models being A/B tested.
|
GPT-4o : When released in May 2024, GPT-4o achieved state-of-the-art results in voice, multilingual, and vision benchmarks, setting new records in audio speech recognition and translation. GPT-4o scored 88.7 on the Massive Multitask Language Understanding (MMLU) benchmark compared to 86.5 for GPT-4. Unlike GPT-3.5 and GPT-4, which rely on other models to process sound, GPT-4o natively supports voice-to-voice. The Advanced Voice Mode was delayed and finally released to ChatGPT Plus and Team subscribers in September 2024. On 1 October 2024, the Realtime API was introduced. When released, the model supported over 50 languages, which OpenAI claims cover over 97% of speakers. Mira Murati demonstrated the model's multilingual capability by speaking Italian to the model and having it translate between English and Italian during the live-streamed OpenAI demonstration event on 13 May 2024. In addition, the new tokenizer uses fewer tokens for certain languages, especially languages that are not based on the Latin alphabet, making it cheaper for those languages. GPT-4o has knowledge up to October 2023, but can access the Internet if up-to-date information is needed. It has a context length of 128k tokens. On March 25, 2025, OpenAI released an image-generation feature that is native to GPT-4o, as an alternative to DALL-E 3. It was made available to paid users, with the rollout to free users being delayed. The use of the feature was subsequently limited, with Sam Altman noting in a Tweet that "[their] GPUs were melting" from its unprecedented popularity.
|
GPT-4o : On July 18, 2024, OpenAI released a smaller and cheaper version, GPT-4o mini. According to OpenAI, its low cost is expected to be particularly useful for companies, startups, and developers that seek to integrate it into their services, which often make a high number of API calls. Its API costs $0.15 per million input tokens and $0.6 per million output tokens, compared to $2.50 and $10, respectively, for GPT-4o. It is also significantly more capable and 60% cheaper than GPT-3.5 Turbo, which it replaced on the ChatGPT interface. The price after fine-tuning doubles: $0.3 per million input tokens and $1.2 per million output tokens. It is estimated that its parameter count is 8B. GPT-4o mini is the default model for guests and those who have hit the limit for GPT-4o.
|
GPT-4o : Llama (language model) Apple Intelligence
|
Text watermarking : Text watermarking is a technique for embedding hidden information within textual content to verify its authenticity, origin, or ownership. Research on text watermarking began in 1997. With the rise of generative AI systems using large language models (LLMs), there has been significant development focused on watermarking AI-generated text. Potential applications include detecting fake news and academic cheating, and excluding AI-generated material from LLM training data. With LLMs the focus is on linguistic approaches that involve selecting words to form patterns within the text that can later be identified. The results of the first reported large-scale public deployment, a trial using Google's Gemini chatbot, appeared in October 2024: across 20 million responses, users found watermarked and unwatermarked text to be of equal quality.
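To make the idea of word-pattern watermarks concrete, here is a simplified detector sketch in the spirit of published "green-list" schemes; it is not Google's deployed method, and the vocabulary and partitioning are toy choices:

```python
# A simplified sketch of one linguistic watermarking idea: the previous token
# seeds a pseudorandom "green list" of words, generation favors green words,
# and a detector counts how many tokens fall in their predecessor's green list.
import hashlib

VOCAB = ["the", "a", "quick", "swift", "fox", "dog", "jumps", "leaps",
         "over", "lazy", "idle", "brown"]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    def score(w: str) -> int:
        h = hashlib.sha256((prev_token + "|" + w).encode()).hexdigest()
        return int(h, 16)
    ranked = sorted(VOCAB, key=score)
    return set(ranked[: int(len(VOCAB) * fraction)])

def green_count(tokens: list[str]) -> int:
    """Detector: count tokens that fall in the green list of their predecessor."""
    return sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))

text = "the swift fox leaps over the idle dog".split()
print(green_count(text), "of", len(text) - 1, "tokens are green")
```

An unwatermarked text would score near the chance rate (here, half the tokens green), while a watermarked one would score significantly above it.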
|
Text watermarking : Digital watermarking
|
Kernel embedding of distributions : In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space $\Omega$ on which a sensible kernel function (measuring similarity between elements of $\Omega$) may be defined. For example, various kernels have been proposed for learning from data which are: vectors in $\mathbb{R}^d$, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf. A review of recent works on kernel embedding of distributions can be found in the literature. The analysis of distributions is fundamental in machine learning and statistics, and many algorithms in these fields rely on information theoretic approaches such as entropy, mutual information, or Kullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation, or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data. Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g. Gaussian mixture models), while nonparametric methods like kernel density estimation (note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) or characteristic function representation (via the Fourier transform of the distribution) break down in high-dimensional settings. Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages: data may be modeled without restrictive assumptions about the form of the distributions and relationships between variables; intermediate density estimation is not needed; practitioners may specify the properties of a distribution most relevant for their problem (incorporating prior knowledge via choice of the kernel); if a characteristic kernel is used, then the embedding can uniquely preserve all information about a distribution, while, thanks to the kernel trick, computations on the potentially infinite-dimensional RKHS can be implemented in practice as simple Gram matrix operations; and dimensionality-independent rates of convergence for the empirical kernel mean (estimated using samples from the distribution) to the kernel embedding of the true underlying distribution can be proven. Learning algorithms based on this framework exhibit good generalization ability and finite sample convergence, while often being simpler and more effective than information theoretic methods. Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information theoretic approaches and is a framework which not only subsumes many popular methods in machine learning and statistics as special cases, but also can lead to entirely new learning algorithms.
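As a concrete sketch, the empirical kernel mean embedding and the maximum mean discrepancy (the RKHS distance between two embeddings) can be computed from Gram matrices alone; the RBF kernel, bandwidth, and sample sizes below are illustrative choices:

```python
# A minimal NumPy sketch of empirical kernel mean embeddings with an RBF
# kernel. Comparing two embeddings by their RKHS distance gives the
# (biased estimate of the) maximum mean discrepancy, computable purely
# from Gram matrix entries.
import numpy as np

def rbf(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x: np.ndarray, y: np.ndarray) -> float:
    """Squared RKHS distance between the empirical mean embeddings of x and y."""
    return rbf(x, x).mean() - 2 * rbf(x, y).mean() + rbf(y, y).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2(rng.normal(size=(200, 2)), rng.normal(2.0, 1.0, size=(200, 2)))
print(f"same distribution: {same:.4f}, different: {diff:.4f}")
```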
|
Kernel embedding of distributions : Let $X$ denote a random variable with domain $\Omega$ and distribution $P$. Given a symmetric, positive-definite kernel $k : \Omega \times \Omega \to \mathbb{R}$, the Moore–Aronszajn theorem asserts the existence of a unique RKHS $\mathcal{H}$ on $\Omega$ (a Hilbert space of functions $f : \Omega \to \mathbb{R}$ equipped with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$ and a norm $\| \cdot \|_{\mathcal{H}}$) for which $k$ is a reproducing kernel, i.e., in which the element $k(x, \cdot)$ satisfies the reproducing property $\langle f, k(x, \cdot) \rangle_{\mathcal{H}} = f(x) \quad \forall f \in \mathcal{H}, \forall x \in \Omega$. One may alternatively consider $x \mapsto k(x, \cdot)$ as an implicit feature mapping $\varphi : \Omega \to \mathcal{H}$ ($\mathcal{H}$ is therefore also called the feature space), so that $k(x, x') = \langle \varphi(x), \varphi(x') \rangle_{\mathcal{H}}$ can be viewed as a measure of similarity between points $x, x' \in \Omega$. While the similarity measure is linear in the feature space, it may be highly nonlinear in the original space depending on the choice of kernel.
|
Kernel embedding of distributions : The expectation of any function $f$ in the RKHS can be computed as an inner product with the kernel embedding: $\mathbb{E}[f(X)] = \langle f, \mu_X \rangle_{\mathcal{H}}$. In the presence of large sample sizes, manipulations of the $n \times n$ Gram matrix may be computationally demanding. Through use of a low-rank approximation of the Gram matrix (such as the incomplete Cholesky factorization), the running time and memory requirements of kernel-embedding-based learning algorithms can be drastically reduced without suffering much loss in approximation accuracy.
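A sketch of this low-rank idea using scikit-learn's Nystroem approximation, a practical relative of incomplete Cholesky factorization, follows; the kernel parameters and sizes are arbitrary:

```python
# Replacing an n x n Gram matrix with an n x m feature map (m << n) via
# the Nystroem kernel approximation, then checking the approximation error.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))

phi = Nystroem(kernel="rbf", gamma=0.5, n_components=100,
               random_state=0).fit_transform(X)  # 2000 x 100, not 2000 x 2000
K_approx = phi @ phi.T
K_exact = rbf_kernel(X, gamma=0.5)
print("max abs error:", np.abs(K_exact - K_approx).max())
```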
|
Kernel embedding of distributions : This section illustrates how basic probabilistic rules may be reformulated as (multi)linear algebraic operations in the kernel embedding framework and is primarily based on the work of Song et al. The following notation is adopted: $P(X, Y)$ denotes the joint distribution over random variables $X, Y$; $P(X) = \int_{\Omega} P(X, \mathrm{d}y)$ denotes the marginal distribution of $X$, and $P(Y)$ the marginal distribution of $Y$; $P(Y \mid X) = \frac{P(X, Y)}{P(X)}$ denotes the conditional distribution of $Y$ given $X$, with corresponding conditional embedding operator $\mathcal{C}_{Y \mid X}$; $\pi(Y)$ denotes a prior distribution over $Y$; and $Q$ is used to distinguish distributions which incorporate the prior from distributions $P$ which do not rely on the prior. In practice, all embeddings are empirically estimated from data $\{(x_1, y_1), \dots, (x_n, y_n)\}$, and it is assumed that a set of samples $\{\tilde{y}_1, \ldots, \tilde{y}_n\}$ may be used to estimate the kernel embedding of the prior distribution $\pi(Y)$.
|
Kernel embedding of distributions : In this simple example, which is taken from Song et al., $X, Y$ are assumed to be discrete random variables which take values in the set $\{1, \ldots, K\}$ and the kernel is chosen to be the Kronecker delta function, so $k(x, x') = \delta(x, x')$. The feature map corresponding to this kernel is the standard basis vector $\varphi(x) = \mathbf{e}_x$. The kernel embeddings of such distributions are thus vectors of marginal probabilities, while the embeddings of joint distributions in this setting are $K \times K$ matrices specifying joint probability tables; the explicit form of these embeddings is $\mu_X = \mathbb{E}[\mathbf{e}_X] = \begin{pmatrix} P(X=1) \\ \vdots \\ P(X=K) \end{pmatrix}$ and $\mathcal{C}_{XY} = \mathbb{E}[\mathbf{e}_X \otimes \mathbf{e}_Y] = \big(P(X=s, Y=t)\big)_{s,t \in \{1,\ldots,K\}}$. When $P(X=s) > 0$ for all $s \in \{1,\ldots,K\}$, the conditional distribution embedding operator $\mathcal{C}_{Y \mid X} = \mathcal{C}_{YX} \mathcal{C}_{XX}^{-1}$ is in this setting a conditional probability table $\mathcal{C}_{Y \mid X} = \big(P(Y=s \mid X=t)\big)_{s,t \in \{1,\ldots,K\}}$ with $\mathcal{C}_{XX} = \operatorname{diag}\big(P(X=1), \ldots, P(X=K)\big)$. Thus, the embedding of the conditional distribution under a fixed value of $X$ may be computed as $\mu_{Y \mid x} = \mathcal{C}_{Y \mid X} \varphi(x) = \begin{pmatrix} P(Y=1 \mid X=x) \\ \vdots \\ P(Y=K \mid X=x) \end{pmatrix}$. In this discrete-valued setting with the Kronecker delta kernel, the kernel sum rule becomes $\underbrace{\begin{pmatrix} P(X=1) \\ \vdots \\ P(X=K) \end{pmatrix}}_{\mu_X^{\pi}} = \underbrace{\big(P(X=s \mid Y=t)\big)_{s,t}}_{\mathcal{C}_{X \mid Y}} \underbrace{\begin{pmatrix} \pi(Y=1) \\ \vdots \\ \pi(Y=K) \end{pmatrix}}_{\mu_Y^{\pi}}$, and the kernel chain rule in this case is given by $\underbrace{\big(P(X=s, Y=t)\big)_{s,t}}_{\mathcal{C}_{XY}^{\pi}} = \underbrace{\big(P(X=s \mid Y=t)\big)_{s,t}}_{\mathcal{C}_{X \mid Y}} \underbrace{\operatorname{diag}\big(\pi(Y=1), \ldots, \pi(Y=K)\big)}_{\mathcal{C}_{YY}^{\pi}}$.
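The same discrete example can be written out with NumPy; the joint table values below are made up for illustration:

```python
# With the Kronecker delta kernel, embeddings are probability vectors and
# tables, and the kernel sum and chain rules reduce to matrix products.
import numpy as np

K = 3
C_XY = np.array([[0.10, 0.05, 0.05],      # joint table P(X=s, Y=t)
                 [0.05, 0.20, 0.05],
                 [0.05, 0.05, 0.40]])
mu_X = C_XY.sum(axis=1)                   # marginal embedding of X
C_XX = np.diag(mu_X)
C_YX = C_XY.T
C_Y_given_X = C_YX @ np.linalg.inv(C_XX)  # conditional table P(Y=s | X=t)

# Conditional embedding for a fixed X = x, with phi(x) a standard basis vector.
x = 1
mu_Y_given_x = C_Y_given_X @ np.eye(K)[x]

# Kernel sum rule: a prior pi(Y) is pushed through the conditional operator.
pi_Y = np.array([0.2, 0.3, 0.5])
C_X_given_Y = C_XY @ np.linalg.inv(np.diag(C_XY.sum(axis=0)))
mu_X_pi = C_X_given_Y @ pi_Y
print(mu_Y_given_x, mu_X_pi)
```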
|
Kernel embedding of distributions : Information Theoretical Estimators toolbox (distribution regression demonstration).
|
Symbol level : In knowledge-based systems, agents choose actions based on the principle of rationality to move closer to a desired goal. The agent is able to make decisions based on knowledge it has about the world (see knowledge level). But for the agent to actually change its state, it must use whatever means it has available. This level of description for the agent's behavior is the symbol level. The term was coined by Allen Newell in 1982. For example, in a computer program, the knowledge level consists of the information contained in its data structures that it uses to perform certain actions. The symbol level consists of the program's algorithms, the data structures themselves, and so on.
|
Symbol level : Knowledge level modeling
|
Matchbox Educable Noughts and Crosses Engine : The Matchbox Educable Noughts and Crosses Engine (sometimes called the Machine Educable Noughts and Crosses Engine or MENACE) was a mechanical computer made from 304 matchboxes designed and built by artificial intelligence researcher Donald Michie in 1961. It was designed to play human opponents in games of noughts and crosses (tic-tac-toe) by returning a move for any given state of play and to refine its strategy through reinforcement learning. This was one of the first types of artificial intelligence. Michie did not have a computer readily available, so he worked around this restriction by building it out of matchboxes. The matchboxes used by Michie each represented a single possible layout of a noughts and crosses grid. When the computer first played, it would randomly choose moves based on the current layout. As it played more games, through a reinforcement loop, it disqualified strategies that led to losing games, and supplemented strategies that led to winning games. Michie held a tournament against MENACE in 1961, wherein he experimented with different openings; in this maiden tournament MENACE demonstrated a successful learning strategy. Michie's essays on MENACE's weight initialisation and the BOXES algorithm used by MENACE became popular in the field of computer science research. Michie was honoured for his contribution to machine learning research, and was twice commissioned to program a MENACE simulation on an actual computer.
|
Matchbox Educable Noughts and Crosses Engine : Donald Michie (1923–2007) had been on the team decrypting the German Tunny code during World War II. Fifteen years later, he wanted to further demonstrate his mathematical and computational prowess with an early machine learning system. Since computer equipment was not obtainable for such uses, and Michie did not have a computer readily available, he decided to display and demonstrate artificial intelligence in a more esoteric format and constructed a functional mechanical computer out of matchboxes and beads. MENACE was constructed as the result of a bet with a computer science colleague who postulated that such a machine was impossible. Michie undertook the task of collecting and defining each matchbox as a "fun project", later turned into a demonstration tool. Michie completed his essay on MENACE, "Experiments on the mechanization of game-learning", in 1963, as well as his essay on the BOXES algorithm, written with R. A. Chambers, and had built up an AI research unit in Hope Park Square, Edinburgh, Scotland. MENACE learned by playing successive matches of noughts and crosses. Each time it lost, the human player confiscated the beads that corresponded to each move, eliminating the losing strategy. It reinforced winning strategies by supplying extra beads, making those moves more likely. This was one of the earliest versions of a reinforcement loop: repeating trials and dropping unsuccessful strategies until only the winning ones remain. The model starts completely random and gradually learns.
|
Matchbox Educable Noughts and Crosses Engine : MENACE was made from 304 matchboxes glued together in an arrangement similar to a chest of drawers. Each box had a code number, which was keyed into a chart. This chart had drawings of tic-tac-toe game grids with various configurations of X, O, and empty squares, corresponding to all possible permutations a game could go through as it progressed. After removing duplicate arrangements (ones that were simply rotations or mirror images of other configurations), MENACE used 304 permutations in its chart and thus that many matchboxes. Each individual matchbox tray contained a collection of coloured beads. Each colour represented a move on a square on the game grid, and so matchboxes with arrangements where positions on the grid were already taken would not have beads for that position. Additionally, at the front of the tray were two extra pieces of card in a "V" shape, the point of the "V" pointing at the front of the matchbox. Michie and his artificial intelligence team called MENACE's algorithm "Boxes", after the apparatus used for the machine. The first stage "Boxes" operated in five phases, each setting a definition and a precedent for the rules of the algorithm in relation to the game.
|
Matchbox Educable Noughts and Crosses Engine : MENACE played first, as O, since all matchboxes represented permutations only relevant to the "X" player. To retrieve MENACE's choice of move, the opponent or operator located the matchbox that matched the current game state, or a rotation or mirror image of it. For example, at the start of a game, this would be the matchbox for an empty grid. The tray would be removed and lightly shaken so as to move the beads around. Then, the bead that had rolled into the point of the "V" shape at the front of the tray was the move MENACE had chosen to make. Its colour was used as the position to play on and, after accounting for any rotations or flips needed based on the chosen matchbox configuration's relation to the current grid, the O would be placed on that square. Then the player performed their move, the new state was located, a new move selected, and so on, until the game was finished. When the game had finished, the human player observed the game's outcome. As a game was played, each matchbox used for MENACE's turn had its tray returned ajar, with the bead used kept aside, so that MENACE's choice of moves and the game states they belonged to were recorded. Michie described his reinforcement system in terms of "reward" and "punishment". If MENACE had won, it received a "reward" for its victory: the removed beads, which showed the sequence of the winning moves, were returned to their respective trays (easily identifiable since they were slightly open), together with three bonus beads of the same colour. In this way, in future games MENACE would become more likely to repeat those winning moves, reinforcing winning strategies. If it lost, the removed beads were not returned, "punishing" MENACE and making it less likely, and eventually incapable if that colour of bead became absent, to repeat the moves that caused the loss. If the game was a draw, one additional bead was added to each box.
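A condensed software sketch of this reward/punishment scheme follows; board handling, symmetry reduction, and the opponent are simplified away, and the initial bead counts are illustrative rather than Michie's actual allocation:

```python
# Each game state maps to a "matchbox" of beads, a move is drawn in proportion
# to bead counts, and bead counts are adjusted after the game, mirroring the
# physical machine: +3 beads for a win, +1 for a draw, -1 for a loss.
import random
from collections import defaultdict

boxes: dict[str, dict[int, int]] = defaultdict(lambda: {m: 4 for m in range(9)})

def choose_move(state: str, legal: list[int]) -> int:
    beads = {m: n for m, n in boxes[state].items() if m in legal and n > 0}
    moves, counts = zip(*beads.items())
    return random.choices(moves, weights=counts)[0]   # "shake the matchbox"

def learn(history: list[tuple[str, int]], result: str) -> None:
    delta = {"win": 3, "draw": 1, "loss": -1}[result]
    for state, move in history:
        boxes[state][move] = max(0, boxes[state][move] + delta)

history = [("---------", 4), ("-x--o----", 0)]   # (state, MENACE's move) pairs
learn(history, "win")
print(boxes["---------"][4])   # 7 beads: the winning opening is reinforced
```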
|
Matchbox Educable Noughts and Crosses Engine : Donald Michie's MENACE proved that a computer could learn from failure and success to become good at a task. It used what would become core principles within the field of machine learning before they had been properly theorised. For example, the combination of starting with equal numbers of each type of bead in every matchbox and then selecting beads at random creates a learning behaviour similar to weight initialisation in modern artificial neural networks. In 1968, Donald Michie and R. A. Chambers made another BOXES-based algorithm called GLEE (Game Learning Expectimaxing Engine), which had to learn how to balance a pole on a cart. After the resounding reception of MENACE, Michie was invited to the US Office of Naval Research, where he was commissioned to build a BOXES-running program for an IBM computer for use at Stanford University. Michie also created a simulation program of MENACE on a Pegasus 2 computer with the aid of D. Martin. There have been multiple recreations of MENACE in more recent years, both in its original physical form and as a computer program. Its approach was a precursor to Christopher Watkins's Q-learning algorithm. Although not a functional computer in these demonstrations, MENACE has been used as a teaching aid for various neural network classes, including a public demonstration by University College London researcher Matthew Scroggs. A copy of MENACE built by Scroggs was featured in the 2019 Royal Institution Christmas Lectures and in a 2023 episode of QI XL.
|
Matchbox Educable Noughts and Crosses Engine : Michie, D.; Chambers, R. A. (1968), "BOXES: An Experiment in Adaptive Control", Machine Intelligence, Edinburgh, UK: Oliver and Boyd, S2CID 18229198 – via Semantic Scholar, Michie and R. A Chambers' paper on the AI implications of BOXES and MENACE. Russell, David W. (2012), The BOXES Methodology: Black Box Dynamic Control, Springer London, ISBN 978-1849965286, a book on the "Boxes" algorithm employed by MENACE.
|
Automated machine learning : Automated machine learning (AutoML) is the process of automating the tasks of applying machine learning to real-world problems. It is the combination of automation and ML. AutoML potentially includes every stage from beginning with a raw dataset to building a machine learning model ready for deployment. AutoML was proposed as an artificial intelligence-based solution to the growing challenge of applying machine learning. The high degree of automation in AutoML aims to allow non-experts to make use of machine learning models and techniques without requiring them to become experts in machine learning. Automating the process of applying machine learning end-to-end additionally offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models. Common techniques used in AutoML include hyperparameter optimization, meta-learning and neural architecture search.
|
Automated machine learning : In a typical machine learning application, practitioners have a set of input data points to be used for training. The raw data may not be in a form that all algorithms can be applied to. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods. After these steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert. Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively. AutoML plays an important role within the broader approach of automating data science, which also includes challenging tasks such as data engineering, data exploration and model interpretation and prediction.
|
Automated machine learning : Automated machine learning can target various stages of the machine learning process. Steps to automate include:
Data preparation and ingestion (from raw data and miscellaneous formats)
Column type detection; e.g., Boolean, discrete numerical, continuous numerical, or text
Column intent detection; e.g., target/label, stratification field, numerical feature, categorical text feature, or free text feature
Task detection; e.g., binary classification, regression, clustering, or ranking
Feature engineering, feature selection, and feature extraction
Meta-learning and transfer learning
Detection and handling of skewed data and/or missing values
Model selection - choosing which machine learning algorithm to use, often including multiple competing software implementations
Ensembling - a form of consensus where using multiple models often gives better results than any single model
Hyperparameter optimization of the learning algorithm and featurization
Neural architecture search
Pipeline selection under time, memory, and complexity constraints
Selection of evaluation metrics and validation procedures
Problem checking: leakage detection and misconfiguration detection
Analysis of obtained results
Creating user interfaces and visualizations
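As one concrete illustration, hyperparameter optimization, a single step from this list, can be sketched with scikit-learn's randomized search; a full AutoML system would automate many more of the steps above, and the search space here is an arbitrary example:

```python
# A minimal sketch of automated hyperparameter optimization using
# scikit-learn's RandomizedSearchCV over a small random forest search space.
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(10, 200),
                         "max_depth": randint(2, 10)},
    n_iter=20, cv=5, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```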
|
Automated machine learning : There are a number of key challenges being tackled around automated machine learning. A major issue is referred to as "development as a cottage industry": machine learning development still relies on manual decisions and the biases of experts, in contrast with the goal of machine learning to create systems that learn and improve from their own usage and analysis of the data. In essence, the tension is between how much experts should be involved in the learning of the systems versus how much autonomy the machines should be given; yet experts and developers must help create and guide these machines to prepare them for their own learning. Creating such a system requires labor-intensive work and knowledge of machine learning algorithms and system design. Other challenges include meta-learning challenges and computational resource allocation.
|
Automated machine learning : Artificial intelligence Artificial intelligence and elections Neural architecture search Neuroevolution Self-tuning Neural Network Intelligence ModelOps Hyperparameter optimization
|
Automated machine learning : "Open Source AutoML Tools: AutoGluon, TransmogrifAI, Auto-sklearn, and NNI". Bizety. 2020-06-16. Ferreira, Luís, et al. "A comparison of AutoML tools for machine learning, deep learning and XGBoost." 2021 International Joint Conference on Neural Networks (IJCNN). IEEE, 2021. https://repositorium.sdum.uminho.pt/bitstream/1822/74125/1/automl_ijcnn.pdf Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., & Hutter, F. (2015). Efficient and robust automated machine learning. Advances in neural information processing systems, 28. https://proceedings.neurips.cc/paper_files/paper/2015/file/11d0e6287202fced83f79975ec59a3a6-Paper.pdf
|
Evolutionary developmental robotics : Evolutionary developmental robotics (evo-devo-robo for short) refers to methodologies that systematically integrate evolutionary robotics, epigenetic robotics and morphogenetic robotics to study the evolution, physical and mental development and learning of natural intelligent systems in robotic systems. The field was formally suggested and fully discussed in a published paper and further discussed in a published dialogue. The theoretical foundation of evo-devo-robo includes evolutionary developmental biology (evo-devo), evolutionary developmental psychology, developmental cognitive neuroscience etc. Further discussions on evolution, development and learning in robotics and design can be found in a number of papers, including papers on hardware systems and computing tissues.
|
Evolutionary developmental robotics :
- Artificial life
- Cognitive robotics
- Morphogenetic robotics
- Developmental robotics
- Evolutionary robotics
|
Microsoft Copilot : Microsoft Copilot (or simply Copilot) is a generative artificial intelligence chatbot developed by Microsoft. Based on the GPT-4 series of large language models, it was launched in 2023 as Microsoft's primary replacement for the discontinued Cortana. The service was introduced in February 2023 under the name Bing Chat, as a built-in feature for Microsoft Bing and Microsoft Edge. Over the course of 2023, Microsoft began to unify the Copilot branding across its various chatbot products, cementing the "copilot" analogy. At its Build 2023 conference, Microsoft announced its plans to integrate Copilot into Windows 11, allowing users to access it directly through the taskbar. In January 2024, a dedicated Copilot key was announced for Windows keyboards. Copilot utilizes the Microsoft Prometheus model, built upon OpenAI's GPT-4 foundational large language model, which in turn has been fine-tuned using both supervised and reinforcement learning techniques. Copilot's conversational interface style resembles that of ChatGPT. The chatbot is able to cite sources, create poems, generate songs, and use numerous languages and dialects. Microsoft operates Copilot on a freemium model. Users on its free tier can access most features, while priority access to newer features, including custom chatbot creation, is provided to paid subscribers under the "Microsoft Copilot Pro" paid subscription service. Several default chatbots are available in the free version of Microsoft Copilot, including the standard Copilot chatbot as well as Microsoft Designer, which is oriented towards using its Image Creator to generate images based on text prompts.
|
Microsoft Copilot : In 2019, Microsoft partnered with OpenAI and began investing billions of dollars into the organization. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft. In September 2020, Microsoft announced that it had licensed OpenAI's GPT-3 exclusively. Others can still receive output from its public API, but Microsoft has exclusive access to the underlying model. In November 2022, OpenAI launched ChatGPT, a chatbot which was based on GPT-3.5. ChatGPT gained worldwide attention following its release, becoming a viral Internet sensation. On January 23, 2023, Microsoft announced a multi-year US$10 billion investment in OpenAI. On February 6, Google announced Bard (later rebranded as Gemini), a ChatGPT-like chatbot service, fearing that ChatGPT could threaten Google's place as a go-to source for information. Multiple media outlets and financial analysts described Google as "rushing" Bard's announcement to preempt rival Microsoft's planned February 7 event unveiling Copilot, as well as to avoid playing "catch-up" to Microsoft.
|
Microsoft Copilot : Tom Warren, a senior editor at The Verge, has noted the conceptual similarity of Copilot and other Microsoft assistant features like Cortana and Clippy. Warren also believes that large language models, as they develop further, could change how users work and collaborate. Rowan Curran, an analyst at Forrester, states that the integration of AI into productivity software may lead to improvements in user experience. Concerns over the speed of Microsoft's recent release of AI-powered products and investments have led to questions surrounding ethical responsibilities in the testing of such products. One ethical concern raised by the public is that GPT-4 and similar large language models may reinforce racial or gender bias. Individuals, including Tom Warren, have also voiced concerns about Copilot after witnessing the chatbot produce several instances of hallucination. In June 2024, Copilot was found to have repeated misinformation about the 2024 United States presidential debates. In response to these concerns, Jon Friedman, the Corporate Vice President of Design and Research at Microsoft, stated that Microsoft was "applying [the] learning" from experience with Bing to "mitigate [the] risks" of Copilot. Microsoft claimed that it was gathering a team of researchers and engineers to identify and alleviate any potential negative impacts. The stated aim was to achieve this through the refinement of training data, blocking queries about sensitive topics, and limiting harmful information. Microsoft stated that it intended to employ InterpretML and Fairlearn to detect and rectify data bias, provide links to its sources, and state any applicable constraints.
|
Microsoft Copilot :
- Tabnine – coding assistant
- Tay (chatbot) – chatbot developed by Microsoft
- Zo (chatbot) – chatbot developed by Microsoft
|
Microsoft Copilot :
- Official website
- Media related to Microsoft Copilot at Wikimedia Commons
- Microsoft Copilot Terms of Use (archived 2024-10-01)
|
Belief–desire–intention model : The belief–desire–intention (BDI) model of human practical reasoning was developed by Michael Bratman as a way of explaining future-directed intention. BDI is fundamentally reliant on folk psychology (the 'theory theory'), the notion that our mental models of the world are theories. It was used as a basis for developing the belief–desire–intention software model.
|
Belief–desire–intention model : BDI was part of the inspiration behind the BDI software architecture, which Bratman was also involved in developing. Here, the notion of intention was seen as a way of limiting the time spent deliberating about what to do, by eliminating choices inconsistent with current intentions. BDI has also aroused some interest in psychology, and it formed the basis for CRIBB, a computational model of childlike reasoning.
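To make the filtering role of intentions concrete, here is a minimal, illustrative sketch of a BDI-style deliberation step in Python. All names, preconditions, and the inconsistency table are hypothetical, not taken from any particular BDI framework.

```python
# A minimal, illustrative BDI-style deliberation step (all names hypothetical).
# Intentions act as a filter: options inconsistent with what the agent has
# already committed to are discarded, which bounds deliberation time.
beliefs = {"raining": True}
desires = ["go_jogging", "read_book"]
intentions = []

# hypothetical domain knowledge: achievability preconditions and exclusions
achievable = {
    "go_jogging": lambda b: not b["raining"],
    "read_book": lambda b: True,
}
inconsistent = {("go_jogging", "read_book"), ("read_book", "go_jogging")}

def deliberate():
    options = [d for d in desires if achievable[d](beliefs)]  # option generation
    for option in options:                                    # intention filter
        if all((option, i) not in inconsistent for i in intentions):
            intentions.append(option)

deliberate()
print(intentions)  # ['read_book']: jogging is ruled out by current beliefs
```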
|
Belief–desire–intention model : Bratman, M. E. (1999) [1987]. Intention, Plans, and Practical Reason. CSLI Publications. ISBN 1-57586-192-5.
|
Algorithm selection : Algorithm selection (sometimes also called per-instance algorithm selection or offline algorithm selection) is a meta-algorithmic technique to choose an algorithm from a portfolio on an instance-by-instance basis. It is motivated by the observation that on many practical problems, different algorithms have different performance characteristics. That is, while one algorithm performs well in some scenarios, it performs poorly in others and vice versa for another algorithm. If we can identify when to use which algorithm, we can optimize for each scenario and improve overall performance. This is what algorithm selection aims to do. The only prerequisite for applying algorithm selection techniques is that there exists (or that there can be constructed) a set of complementary algorithms.
|
Algorithm selection : Given a portfolio P of algorithms A ∈ P, a set of instances i ∈ I, and a cost metric m : P × I → ℝ, the algorithm selection problem consists of finding a mapping s : I → P from instances I to algorithms P such that the total cost ∑_{i∈I} m(s(i), i) across all instances is optimized.
|
Algorithm selection : The algorithm selection problem is mainly solved with machine learning techniques. By representing the problem instances by numerical features f, algorithm selection can be cast as a multi-class classification problem: learn a mapping f(i) ↦ A for a given instance i. Instance features are numerical representations of instances. For example, we can count the number of variables, the number of clauses, and the average clause length of a Boolean formula, or the number of samples, the number of features, and the class balance of an ML data set, to get an impression of its characteristics.
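A minimal sketch of this classification view, assuming scikit-learn and purely synthetic data: each instance's label is the algorithm with the lowest measured cost, and a classifier learns the selector s.

```python
# A minimal sketch of per-instance algorithm selection as multi-class
# classification. The features and per-algorithm costs are synthetic stand-ins
# for real instance features f(i) and a real cost metric m(A, i).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_instances, n_features, n_algorithms = 500, 6, 3

X = rng.random((n_instances, n_features))        # instance features f(i)
costs = rng.random((n_instances, n_algorithms))  # measured cost m(A, i)
y = costs.argmin(axis=1)                         # label = cheapest algorithm

selector = RandomForestClassifier(random_state=0).fit(X, y)  # learn s: I -> P

new_instance = rng.random((1, n_features))
print("chosen algorithm:", selector.predict(new_instance)[0])
```

In practice, the cost of computing the features themselves, discussed below, must also be accounted for when the cost metric is running time.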
|
Algorithm selection : The algorithm selection problem can be effectively applied under the following assumptions: The portfolio P of algorithms is complementary with respect to the instance set I, i.e., there is no single algorithm A ∈ P that dominates the performance of all other algorithms over I. In some applications, the computation of instance features is associated with a cost. For example, if the cost metric is running time, we also have to consider the time needed to compute the instance features. In such cases, the cost of computing features should not be larger than the performance gain obtained through algorithm selection.
|
Algorithm selection : Algorithm selection is not limited to single domains but can be applied to any kind of algorithm if the above requirements are satisfied. Application domains include:
- hard combinatorial problems: SAT, Mixed Integer Programming, CSP, AI Planning, TSP, MAXSAT, QBF, and Answer Set Programming
- combinatorial auctions
- machine learning, where the problem is known as meta-learning
- software design
- black-box optimization
- multi-agent systems
- numerical optimization
- linear algebra and differential equations
- evolutionary algorithms
- the vehicle routing problem
- power systems
For an extensive list of literature about algorithm selection, we refer to a literature overview.
|
Algorithm selection :
- Algorithm Selection Library (ASlib)
- Algorithm selection literature
|
IBM Watsonx : Watsonx is IBM's commercial, cloud-based platform for generative AI and scientific data. It offers a studio, a data store, and a governance toolkit. It supports multiple large language models (LLMs) along with IBM's own Granite models. The platform is described as an AI tool tailored to companies that can be customized to customers' needs and trained on their confidential data; IBM states that client data is not collected for further training of its models. The platform also supports fine-tuning, an approach that makes it possible to further train pre-trained models on newly introduced data.
|
IBM Watsonx : Watsonx was revealed on May 9, 2023, at IBM's annual Think conference as a platform comprising multiple services. Like the similarly named Watson AI computer, Watsonx was named after Thomas J. Watson, IBM's founder and first CEO. On February 13, 2024, Anaconda partnered with IBM to embed its open-source Python packages into Watsonx. Watsonx is currently used in ESPN's fantasy football app for managing players' performance, and by the Italian telecommunications company Wind Tre. It was also used to generate editorial content around nominees during the 66th Annual Grammy Awards.
|
IBM Watsonx :
- IBM Watson
- Generative AI
- Large language model
- ChatGPT
|
IBM Watsonx :
- Official webpage
- Official introductory video for watsonx
- AI Prompt Lab
|
AFNLP : AFNLP (Asian Federation of Natural Language Processing Associations) is the organization coordinating natural language processing activities and events in the Asia-Pacific region.
|
AFNLP : AFNLP was founded on 4 October 2000.
|
AFNLP :
- ALTA – Australasian Language Technology Association
- ANLP (Japan) – Association of Natural Language Processing
- ROCLING (Taiwan) – ROC Computational Linguistics Society
- SIG-KLC (Korea) – SIG-Korean Language Computing of the Korea Information Science Society
|
AFNLP :
- NLPRS: Natural Language Processing Pacific Rim Symposium
- IRAL: International Workshop on Information Retrieval with Asian Languages
- PACLING: Pacific Association for Computational Linguistics
- PACLIC: Pacific Asia Conference on Language, Information and Computation
- PRICAI: Pacific Rim International Conference on AI
- ICCPOL: International Conference on Computer Processing of Oriental Languages
- ROCLING: Research on Computational Linguistics Conference
|
AFNLP :
- IJCNLP-04: the 1st International Joint Conference on Natural Language Processing, Hainan Island, China
- IJCNLP-05: the 2nd International Joint Conference on Natural Language Processing, Jeju Island, Korea
- IJCNLP-08: the 3rd International Joint Conference on Natural Language Processing, Hyderabad, India
- ACL-IJCNLP-2009: Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics (ACL) and the 4th International Joint Conference on Natural Language Processing (IJCNLP), Singapore
- IJCNLP-11: the 5th International Joint Conference on Natural Language Processing, Chiang Mai, Thailand
|
AFNLP : http://www.afnlp.org/
|
PHerc. Paris. 4 : PHerc. Paris. 4 is a carbonized scroll of papyrus dating to the 1st century BC to the 1st century AD. Part of a corpus known as the Herculaneum papyri, it was buried by hot ash in the Roman city of Herculaneum during the eruption of Mount Vesuvius in 79 AD, and subsequently discovered in excavations of the Villa of the Papyri from 1752–1754. Held by the Institut de France in its rolled state, it is now a cornerstone example of non-invasive reading: in February 2024 it was announced that the scroll's contents could be unveiled with the use of non-invasive imaging and machine learning, paving the way towards the decipherment and scanning of other Herculaneum papyri and otherwise heavily damaged texts.
|
PHerc. Paris. 4 : The Villa of the Papyri was buried during the eruption of Vesuvius in 79 AD, subjecting the scrolls to temperatures of 310–320 °C, compacting them and converting them to charcoal. The first scrolls were uncovered in 1752, with subsequent excavations uncovering more. There were attempts to unroll the scrolls, as they were realized to contain writings by classical philosophers from schools such as Epicureanism. PHerc. Paris. 4 was among a set of six scrolls that entered their present-day location at the Institut de France. They were a diplomatic gift commemorating peace between the Kingdom of Naples and Sicily, under the reign of Ferdinand IV, and Napoleon, with the negotiations mediated by Charles Alquier. In 1803, a tribute of vases and the scrolls arrived in France under the supervision of Francesco Carelli and was personally exhibited to Napoleon and Joséphine, whereupon the scrolls entered the collection of the Institut. Of the scrolls that entered the collection, PHerc. Paris. 3 and Paris. 4 remain intact; Paris. 1 is in fragments and bits, Paris. 2 is better preserved, Paris. 5 exploded upon unpeeling, and Paris. 6 crumbled.
|
PHerc. Paris. 4 : The 20th century yielded progress in the reading of Herculaneum texts utilizing microscopes, digital photography, and multispectral filters, including the use of infrared spectroscopy to gain better clarity of the texts. In 2015, PHerc. Paris. 1 and PHerc. Paris. 4 were studied side by side; Paris. 1 had a history of successful limited readings in 1986–1987, with sequences of letters such as "ΠIΠTOIE" and words such as "EIΠOI" (Greek: "would say") proving decipherable. Utilizing a pre-filtered X-ray beam with a double Laue monochromator to produce a monochromatic X-ray beam, the first letters of the unrolled scroll were identified. After the virtual unrolling of the En-Gedi Scroll in 2015, Brent Seales, a computer scientist at the University of Kentucky, spearheaded the effort to uncover the Herculaneum corpus through non-invasive means. On 15 March 2023, Nat Friedman, former CEO of GitHub, and Daniel Gross of Cue, upon hearing a lecture by Seales, launched the Vesuvius Challenge to apply machine learning and new imaging techniques to scans of the papyri made with the Diamond Light Source particle accelerator; an improved scan of PHerc. Paris. 4 had been completed in 2019 and was subsequently released to the public. The scans were completed at a resolution of 4–8 μm per voxel. The Vesuvius Challenge raised US$1 million, with the objective of clear readings of the scroll and the future aim of reading other carbonized, sealed fragments of the Herculaneum corpus, with a distant idea of excavating more portions of the Villa of the Papyri in order to recover more scrolls. In October 2023, 21-year-old college student and SpaceX intern Luke Farritor and physicist Casey Handmer identified the word "porphyras" (πορφυρας), meaning "purple", on the scroll, using a neural network to differentiate the papyrus from the ink; Farritor subsequently won US$40,000 for his find. On 5 February 2024, the Grand Prize for reading PHerc. Paris. 4 was awarded to Farritor, ETH Zurich robotics student Julian Schilliger, and Egyptian PhD student Youssef Nader of the Free University of Berlin, for recovering 11 columns of text, or 2,000 characters in total, about 5% of the contents of the scroll. The uncovered text is believed to have been written by the Epicurean philosopher Philodemus; it is a previously unrecorded text about pleasure and how it is affected by the abundance or scarcity of goods, with Philodemus rejecting the idea that scarcity makes things more pleasant: "As too in the case of food, we do not right away believe things that are scarce to be absolutely more pleasant than those which are abundant". The text revealed from the scroll was published in a paper in Zeitschrift für Papyrologie und Epigraphik.
|
Tensor product network : A tensor product network, in artificial neural networks, is a network that exploits the properties of tensors to model associative concepts such as variable assignment. Orthonormal vectors are chosen to model the ideas (such as variable names and target assignments), and the tensor product of these vectors constructs a network whose mathematical properties allow the user to easily extract the association from it.
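A minimal numerical sketch of this idea, using NumPy: bindings are stored as a sum of outer (tensor) products of orthonormal role and filler vectors, and orthonormality makes retrieval exact. The stored associations here are arbitrary illustrative choices.

```python
# A minimal sketch of tensor-product binding and retrieval.
import numpy as np

roles = np.eye(4)    # orthonormal vectors for variable names, e.g. x, y, ...
fillers = np.eye(4)  # orthonormal vectors for candidate values

# bind x := value2 and y := value0 by summing outer (tensor) products
memory = np.outer(roles[0], fillers[2]) + np.outer(roles[1], fillers[0])

# retrieve the value bound to x: cross-terms vanish by orthonormality
retrieved = roles[0] @ memory
assert np.allclose(retrieved, fillers[2])
```

With non-orthogonal vectors the cross-terms would not cancel exactly, which is why orthonormality is assumed in the construction.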
|
Quantum artificial life : Quantum artificial life is the application of quantum algorithms with the ability to simulate biological behavior. Quantum computers offer many potential improvements to processes performed on classical computers, including machine learning and artificial intelligence. Artificial intelligence applications are often inspired by the idea of mimicking human brains through closely related biomimicry. This has been implemented to a certain extent on classical computers (using neural networks), but quantum computers offer many advantages in the simulation of artificial life. Artificial life and artificial intelligence are closely related, with a difference in aim: the goal of studying artificial life is to understand living beings better, while the goal of artificial intelligence is to create intelligent beings. In 2016, Alvarez-Rodriguez et al. developed a proposal for a quantum artificial life algorithm with the ability to simulate life and Darwinian evolution. In 2018, the same research team led by Alvarez-Rodriguez performed the proposed algorithm on the IBM ibmqx4 quantum computer and obtained promising results, accurately simulating a system with the ability to undergo self-replication at the quantum scale.
|
Quantum artificial life : The growing advancement of quantum computers has led researchers to develop quantum algorithms for simulating life processes, including a quantum algorithm that can accurately simulate Darwinian evolution. Since the complete simulation of artificial life on quantum computers has so far been realized by only one group, this section focuses on the implementation by Alvarez-Rodriguez, Sanz, Lamata, and Solano on an IBM quantum computer. Individuals were realized as two qubits, one representing the genotype of the individual and the other representing the phenotype. The genotype is copied to transmit genetic information through generations, and the phenotype depends on the genetic information as well as the individual's interactions with its environment. To set up the system, the state of the genotype is instantiated by some rotation of an ancillary state (|0⟩⟨0|). The environment is a two-dimensional spatial grid occupied by individuals and ancillary states. The environment is divided into cells that are able to hold one or more individuals. Individuals move throughout the grid and occupy cells randomly; when two or more individuals occupy the same cell, they interact with each other.
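The genotype–phenotype setup described above can be sketched with a couple of gates, assuming Qiskit. This only illustrates instantiating a genotype by rotating an ancilla and passing its basis-state information to a phenotype qubit; the full algorithm of Alvarez-Rodriguez et al. additionally involves cloning-like unitaries, environment interactions, and dissipation.

```python
# A minimal sketch of one "individual": a genotype qubit prepared by rotating
# an ancillary |0> state, and a phenotype qubit receiving its basis-state
# information via a CNOT (perfect cloning of arbitrary states is impossible).
import numpy as np
from qiskit import QuantumCircuit

theta = 2 * np.arccos(np.sqrt(0.7))  # hypothetical genotype parameter

qc = QuantumCircuit(2)
qc.ry(theta, 0)  # instantiate the genotype from the ancillary |0> state
qc.cx(0, 1)      # entangle: phenotype inherits the genotype's classical bit
print(qc.draw())
```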
|
Hierarchical navigable small world : The Hierarchical navigable small world (HNSW) algorithm is a graph-based approximate nearest neighbor search technique used in many vector databases. Nearest neighbor search without an index involves computing the distance from the query to each point in the database, which for large datasets is computationally prohibitive. For high-dimensional data, tree-based exact vector search techniques such as the k-d tree and R-tree do not perform well enough because of the curse of dimensionality. To remedy this, approximate k-nearest neighbor searches have been proposed, such as locality-sensitive hashing (LSH) and product quantization (PQ) that trade performance for accuracy. The HNSW graph offers an approximate k-nearest neighbor search which scales logarithmically even in high-dimensional data. It is an extension of the earlier work on navigable small world graphs presented at the Similarity Search and Applications (SISAP) conference in 2012 with an additional hierarchical navigation to find entry points to the main graph faster. HNSW-based libraries are among the best performers in the approximate nearest neighbors benchmark. A related technique is IVFFlat.
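A minimal sketch of building and querying an HNSW index, assuming the hnswlib library; the dimensionality, index parameters, and data are illustrative choices.

```python
# A minimal sketch of approximate k-NN search with an HNSW index (hnswlib).
import numpy as np
import hnswlib

dim, n = 128, 10_000
data = np.random.rand(n, dim).astype(np.float32)

index = hnswlib.Index(space='l2', dim=dim)  # squared-L2 distance
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(data)
index.set_ef(50)  # query-time trade-off between recall and speed

labels, distances = index.knn_query(data[:5], k=10)  # 10 neighbors each
```

The parameters M (graph degree) and ef_construction / ef control the recall–speed trade-off that distinguishes approximate from exact search.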
|