Hierarchical navigable small world : HNSW is a key method for approximate nearest neighbor search in high-dimensional vector databases, for example in the context of embeddings from neural networks in large language models. Databases that use HNSW as a search index include Apache Lucene Vector Search, Chroma, Qdrant, Vespa, Vearch Gamma, Weaviate, pgvector, MariaDB, MongoDB Atlas, Milvus, and DuckDB. Several of these use either the hnswlib library provided by the original authors or the FAISS library.
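Hierarchical navigable small world : As an illustration of how such an index is typically used, below is a minimal sketch with the hnswlib library mentioned above; the dataset, dimensionality, and parameter values (M, ef_construction, ef) are illustrative choices rather than recommendations.

```python
import numpy as np
import hnswlib  # library provided by the HNSW authors

# Illustrative data: 10,000 random 128-dimensional embedding vectors.
dim, num_elements = 128, 10_000
data = np.random.rand(num_elements, dim).astype(np.float32)

# Build the HNSW index; 'l2', 'ip' and 'cosine' spaces are supported.
index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=num_elements, ef_construction=200, M=16)
index.add_items(data, np.arange(num_elements))

# ef controls the query-time recall/speed trade-off.
index.set_ef(50)

# Approximate 10 nearest neighbours of the first vector.
labels, distances = index.knn_query(data[:1], k=10)
print(labels, distances)
```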
AI/ML Development Platform : AI/ML development platforms, such as PyTorch and Hugging Face, are software ecosystems designed to facilitate the creation, training, deployment, and management of artificial intelligence (AI) and machine learning (ML) models. These platforms provide tools, frameworks, and infrastructure to streamline workflows for developers, data scientists, and researchers working on AI-driven solutions.
AI/ML Development Platform : AI/ML development platforms serve as comprehensive environments for building AI systems, ranging from simple predictive models to complex large language models (LLMs). They abstract technical complexities (e.g., distributed computing, hyperparameter tuning) while offering modular components for customization. Key users include: Developers: Building applications powered by AI/ML. Data scientists: Experimenting with algorithms and data pipelines. Researchers: Advancing state-of-the-art AI capabilities.
AI/ML Development Platform : Modern AI/ML platforms typically include: End-to-end workflow support: Data preparation: Tools for cleaning, labeling, and augmenting datasets. Model building: Libraries for designing neural networks (e.g., PyTorch, TensorFlow integrations). Training & Optimization: Distributed training, hyperparameter tuning, and AutoML. Deployment: Exporting models to production environments (APIs, edge devices, cloud services). Scalability: Support for multi-GPU/TPU training and cloud-native infrastructure (e.g., Kubernetes). Pre-built models & templates: Repositories of pre-trained models (e.g., Hugging Face’s Model Hub) for tasks like natural language processing (NLP), computer vision, or speech recognition. Collaboration tools: Version control, experiment tracking (e.g., MLflow), and team project management. Ethical AI tools: Bias detection, explainability frameworks (e.g., SHAP, LIME), and compliance with regulations like GDPR.
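AI/ML Development Platform : To make the workflow stages above concrete, here is a minimal sketch using PyTorch, one of the libraries named above; the toy data, network architecture, and hyperparameters are purely illustrative, and real platforms layer distributed training, hyperparameter tuning, and experiment tracking on top of loops like this one.

```python
import torch
from torch import nn

# Data preparation (illustrative): 256 samples, 20 features, binary labels.
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,)).float()

# Model building: a small feed-forward network.
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))

# Training & optimization: a basic gradient-descent loop.
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    optimizer.step()

# Deployment: export the trained model, e.g. to TorchScript.
torch.jit.script(model).save("model.pt")
```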
AI/ML Development Platform : AI/ML development platforms underpin innovations in: Health care: Drug discovery, medical imaging analysis. Finance: Fraud detection, algorithmic trading. Natural language processing (NLP): Chatbots, translation systems. Autonomous systems: Self-driving cars, robotics.
AI/ML Development Platform : Key challenges for AI/ML development platforms include computational costs (training LLMs requires massive GPU/TPU resources), data privacy (balancing model performance with GDPR/CCPA compliance), skill gaps (a high barrier to entry for non-experts), and bias and fairness (mitigating skewed outcomes in sensitive applications).
AI/ML Development Platform : Emerging trends include democratization through low-code/no-code platforms (e.g., Google AutoML, DataRobot), ethical AI integration (tools for bias mitigation and transparency), federated learning (training models on decentralized data), and quantum machine learning (hybrid platforms leveraging quantum computing).
AI/ML Development Platform : Automated machine learning Large language model
AI/ML Development Platform : MLflow Official Website – Open-source platform for the machine learning lifecycle. Hugging Face – Community and tools for NLP models. TensorFlow – Google's machine learning framework. Google AI Research – Publications on AI/ML advancements.
Conversica : Conversica is a US-based cloud software technology company, headquartered in San Mateo, California, that provides two-way AI-driven conversational software and a suite of Intelligent Virtual Assistants for businesses to engage customers via email, chat, and SMS.
Conversica : 2007: The company was founded by Ben Brigham in Bellingham, Washington, originally as AutoFerret.com. The company's initial product was a customer relationship management (CRM) system targeted at automotive dealerships. This soon expanded to lead generation, and then lead validation and qualification. The AI Conversica uses currently was made to follow up on and filter out low-quality leads. The focus of the company shifted toward this automated lead engagement technology. 2010: The company started commercially selling AVA, the first Automated Virtual Assistant for sales, and the company name was changed to AVA.ai. Early customers for AVA were automotive dealerships. As the company moved away from generating leads and providing the CRM itself, it became necessary to integrate with existing CRM and marketing automation platforms, such as DealerSocket, VinSolutions and Salesforce. 2013: The company raised $16m in Series A funding, led by Kennet Partners, and named Mark Bradley as CEO. It also moved its headquarters from Bellingham, Washington to Foster City, California. 2014: The company changed its name from AVA.ai to Conversica. 2015: Alex Terry joined Conversica as its CEO. The business expanded to include customers in additional verticals, including technology, education, and financial services. 2016: The company raised $34m in Series B funding, led by Providence Strategic Growth. 2017: Conversica expanded its intelligent automation platform and IVAs to support additional communication channels (e-mail and SMS text messaging) and communication languages. Conversica also opened a new technology center in Seattle, Washington to expand its AI and machine learning capabilities. 2018: The company raised $31m in Series C funding, led by Providence Strategic Growth. Conversica also acquired Intelligens.ai, providing a regional presence in Latin America with an office in Las Condes, Santiago, Chile. The company launched an AI-powered Admissions Assistant for the higher education industry. 2019: Conversica was selected by Fast Company magazine as one of the Top 10 Most Innovative AI Companies in the World, and was named Marketo's Technology Partner of the Year. The company officially expanded into the EMEA region with the opening of a London office. As of August 2019, Conversica had over 50 different integrations with third parties. In October, Conversica won three awards at the fourth annual Global Annual Achievement Awards for Artificial Intelligence. Also that month, Alex Terry stepped down from his role as CEO and was replaced by Jim Kaskade. 2020: As part of Conversica's response to COVID-19, the company optimized the business to become profitable in both 2Q20 and 3Q20, before reinvesting in 4Q20. The company transitioned both international operations in EMEA and LATAM to an indirect model with partners (LeadFabric and Nectia Cloud Solutions respectively), and moved a portion of its US-based employees to near-shore centers in Mexico and Brazil, effectively downsizing the company from 250 to 200 employees. Conversica's reseller partner, Nectia, is a major Latin American affiliate and Chile's number one Salesforce partner, and, as part of the partnership, Nectia devoted capital to a brand new company segment, Predict-IA, dedicated to web-based artificial intelligence solutions. Predict-IA was able to immediately service all LATAM opportunities and clients with Conversica's AI Assistants with end-to-end services (marketing, sales, professional services, customer success, and technical support).
Conversica's reseller partner, LeadFabric, has offices in Belgium, Amsterdam, Paris, the UK, Taiwan, and Romania.
Conversica : Conversica's Revenue Digital Assistants™ are AI assistants who engage with leads, prospects, customers, employees, and other persons of interest (Contacts) in a two-way human-like manner, via email, SMS text, and website chat, in English, French, German, Spanish, Portuguese, and Japanese. The RDAs are built on an Intelligent Automation platform that leverages natural language understanding, natural language processing, natural language generation, deep learning and machine learning. The Assistants are generally deployed alongside sales and marketing, customer success, account management, and higher education admissions teams, as part of an augmented workforce. The Intelligent Automation platform integrates with over 50 external systems, including CRM, Marketing Automation, and other systems of record. A partial list of integration partners includes: Salesforce, Marketo, Oracle, HubSpot, DealerSocket, Reynolds & Reynolds, CDK Global, VinSolutions and many more.
Conversica : Official website
Smart object : A smart object is an object that enhances the interaction with not only people but also with other smart objects. Also known as smart connected products or smart connected things (SCoT), they are products, assets and other things embedded with processors, sensors, software and connectivity that allow data to be exchanged between the product and its environment, manufacturer, operator/user, and other products and systems. Connectivity also enables some capabilities of the product to exist outside the physical device, in what is known as the product cloud. The data collected from these products can then be analyzed to inform decision-making, enable operational efficiencies and continuously improve the performance of the product. The term can refer not only to interaction with physical-world objects but also to interaction with virtual (computing environment) objects. A smart physical object may be created either as an artifact or manufactured product, or by embedding electronic tags such as RFID tags or sensors into non-smart physical objects. Smart virtual objects are created as software objects that are intrinsic when creating and operating a virtual or cyber world simulation or game. The concept of a smart object has several origins and uses (see the history below), and there are several overlapping terms; see also smart device, tangible object or tangible user interface, and "Thing" as in the Internet of things.
Smart object : In the early 1990s, Mark Weiser, from whom the term ubiquitous computing originated, referred to a vision: "When almost every object either contains a computer or can have a tab attached to it, obtaining information will be trivial." Although Weiser did not specifically refer to an object as being smart, his early work did imply that smart physical objects are smart in the sense that they act as digital information sources. Hiroshi Ishii and Brygg Ullmer refer to tangible objects in terms of tangible bits or tangible user interfaces that enable users to "grasp & manipulate" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces. The smart object concept was introduced by Marcelo Kallman and Daniel Thalmann as an object that can describe its own possible interactions. The main focus here is to model interactions of smart virtual objects with virtual humans, agents, in virtual worlds. The opposite approach to smart objects is 'plain' objects that do not provide this information. The additional information provided by this concept enables far more general interaction schemes, and can greatly simplify the planner of an artificial intelligence agent. In contrast to smart virtual objects used in virtual worlds, Lev Manovich focuses on physical space filled with electronic and visual information. Here, "smart objects" are described as "objects connected to the Net; objects that can sense their users and display smart behaviour". More recently, in the early 2010s, smart objects have been proposed as a key enabler for the vision of the Internet of things. The combination of the Internet and emerging technologies such as near field communications, real-time localization, and embedded sensors enables everyday objects to be transformed into smart objects that can understand and react to their environment. Such objects are building blocks for the Internet of things and enable novel computing applications. In 2018, one of the world's first smart houses was built in Klaukkala, Finland in the form of a five-floor apartment block, using the Kone Residential Flow solution created by KONE, allowing even a smartphone to act as a home key.
Smart object : Although we can view interaction with physical smart objects in the physical world as distinct from interaction with virtual smart objects in a virtual simulated world, these can be related. Poslad considers the progression of: how humans use models of smart objects situated in the physical world to enhance human-to-physical-world interaction; versus how smart physical objects situated in the physical world can model human interaction in order to lessen the need for human-to-physical-world interaction; versus how virtual smart objects, by modelling both physical world objects and humans as objects and their subsequent interactions, can form a predominantly smart virtual object environment.
Smart object : Smart, connected products have three primary components: Physical – made up of the product's mechanical and electrical parts. Smart – made up of sensors, microprocessors, data storage, controls, software, and an embedded operating system with an enhanced user interface. Connectivity – made up of ports, antennae, and protocols enabling wired/wireless connections that serve two purposes: they allow data to be exchanged with the product and enable some functions of the product to exist outside the physical device. Each component expands the capabilities of the others, resulting in "a virtuous cycle of value improvement". First, the smart components of a product amplify the value and capabilities of the physical components. Then, connectivity amplifies the value and capabilities of the smart components. These improvements include: monitoring of the product's conditions, its external environment, and its operations and usage; control of various product functions to better respond to changes in its environment, as well as to personalize the user experience; optimization of the product's overall operations based on actual performance data, and reduction of downtime through predictive maintenance and remote service; and autonomous product operation, including learning from the environment, adapting to user preferences, and self-diagnosis and service.
Smart object : AmbieSense Audiocubes Home network Intelligent maintenance system Nabaztag Smart speaker Wearable technology Ubiquitous computing
Smart object : Donald A. Norman. Design of Future Things. Basic Books, 2007. Bruce Sterling. "Cisco launches consortium for 'Smart Objects'". Wired, September 25, 2008. 2009 New Media Horizons Report. Mike Isaac. "Google's Platform Extends Its Reach With Android@Home". Wired, May 11, 2011.
Smart object : WorldCat publications about smart objects. The Internet of Things' Best-Kept Secret, Forbes A Very Short History Of The Internet Of Things, Forbes Three Steps to Combat the Impact of Digital Business Disruption on Value Creation, Gartner The Five SMART Technologies to Watch, Gartner Cisco White Paper: The Internet of Everything for Cities 5 Steps the 'Smart' Home Industry Must Take to Develop a Consumer Market Mashable: Bionic Pancreas Delivers Automated Care to Those With Diabetes The Future of Wearable Technology PBS video produced by Off Book (web series) Oxford Economics: Smart, connected products: Manufacturing's next transformation
ChatGPT Search : ChatGPT Search (originally SearchGPT) is a search engine developed by OpenAI. It combines traditional search engine features with generative pretrained transformers (GPT) to generate responses, including citations to external websites.
ChatGPT Search : On July 25, 2024, SearchGPT was first introduced as a prototype in a limited release to 10,000 test users. This search feature positioned OpenAI as a direct competitor to major search engines, notably Google, Perplexity AI and Bing. OpenAI announced its partnership with publishers for SearchGPT, providing them with options on how their content appears in the search results and ensuring the promotion of trusted sources. On October 31, 2024, OpenAI launched ChatGPT Search to ChatGPT Plus and Team subscribers, and it was made available to free users in December 2024. OpenAI ultimately incorporated the search features into ChatGPT in December 2024.
ChatGPT Search : Comparison of search engines Google Search – Search engine from Google List of search engines Timeline of web search engines
Triplet loss : Triplet loss is a machine learning loss function widely used in one-shot learning, a setting where models are trained to generalize effectively from limited examples. It was conceived by Google researchers for their prominent FaceNet algorithm for face recognition. Triplet loss is designed to support metric learning. Namely, it assists in training models to learn an embedding (a mapping to a feature space) where similar data points are closer together and dissimilar ones are farther apart, enabling robust discrimination across varied conditions. In the context of face recognition, data points correspond to images.
Triplet loss : The loss function is defined using triplets of training points of the form $(A, P, N)$. In each triplet, $A$ (called an "anchor point") denotes a reference point of a particular identity, $P$ (called a "positive point") denotes another point of the same identity as $A$, and $N$ (called a "negative point") denotes a point of an identity different from that of $A$ and $P$. Let $x$ be some point and let $f(x)$ be the embedding of $x$ in a finite-dimensional Euclidean space. It is assumed that the L2-norm of $f(x)$ is unity (the L2 norm of a vector $X$ in a finite-dimensional Euclidean space is denoted by $\Vert X \Vert$). We assemble $m$ triplets of points from the training dataset. The goal of training is to ensure that, after learning, the following condition (called the "triplet constraint") is satisfied by all triplets $(A^{(i)}, P^{(i)}, N^{(i)})$ in the training data set: $\Vert f(A^{(i)}) - f(P^{(i)}) \Vert_2^2 + \alpha < \Vert f(A^{(i)}) - f(N^{(i)}) \Vert_2^2$. The variable $\alpha$ is a hyperparameter called the margin, and its value must be set manually. In the FaceNet system, its value was set to 0.2. Thus, the full form of the function to be minimized is: $L = \sum_{i=1}^{m} \max\left( \Vert f(A^{(i)}) - f(P^{(i)}) \Vert_2^2 - \Vert f(A^{(i)}) - f(N^{(i)}) \Vert_2^2 + \alpha,\ 0 \right)$.
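Triplet loss : The following is a minimal sketch of the loss above in Python/NumPy; the embeddings are random placeholders, and the embedding function f itself (a neural network in FaceNet) is assumed to be computed elsewhere.

```python
import numpy as np

def triplet_loss(f_anchor, f_positive, f_negative, alpha=0.2):
    """L = sum_i max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + alpha, 0).

    Each argument is an (m, d) array of embeddings f(A^(i)), f(P^(i)),
    f(N^(i)), assumed L2-normalized; alpha is the margin (0.2 in FaceNet).
    """
    pos_dist = np.sum((f_anchor - f_positive) ** 2, axis=1)
    neg_dist = np.sum((f_anchor - f_negative) ** 2, axis=1)
    return np.sum(np.maximum(pos_dist - neg_dist + alpha, 0.0))

def l2_normalize(v):
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Illustrative input: 5 triplets of 128-dimensional embeddings.
rng = np.random.default_rng(0)
fA, fP, fN = (l2_normalize(rng.normal(size=(5, 128))) for _ in range(3))
print(triplet_loss(fA, fP, fN))
```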
Triplet loss : A baseline for understanding the effectiveness of triplet loss is the contrastive loss, which operates on pairs of samples (rather than triplets). Training with the contrastive loss pulls embeddings of similar pairs closer together, and pushes dissimilar pairs apart. Its pairwise approach is greedy, as it considers each pair in isolation. Triplet loss innovates by considering relative distances. Its goal is that the embedding of an anchor (query) point be closer to positive points than to negative points (also accounting for the margin). It does not try to further optimize the distances once this requirement is met. This is approximated by simultaneously considering two pairs (anchor-positive and anchor-negative), rather than each pair in isolation.
Triplet loss : One crucial implementation detail when training with triplet loss is triplet "mining", which focuses on the smart selection of triplets for optimization. This process adds an additional layer of complexity compared to contrastive loss. A naive approach to preparing training data for the triplet loss involves randomly selecting triplets from the dataset. In general, the set of valid triplets of the form $(A^{(i)}, P^{(i)}, N^{(i)})$ is very large. To speed up training convergence, it is essential to focus on challenging triplets. In the FaceNet paper, several options were explored, eventually arriving at the following. For each anchor-positive pair, the algorithm considers only semi-hard negatives. These are negatives that violate the triplet requirement (i.e., are "hard"), but lie farther from the anchor than the positive (not too hard). Restated, for each $A^{(i)}$ and $P^{(i)}$, they seek $N^{(i)}$ such that $\Vert f(A^{(i)}) - f(P^{(i)}) \Vert_2^2 < \Vert f(A^{(i)}) - f(N^{(i)}) \Vert_2^2 < \Vert f(A^{(i)}) - f(P^{(i)}) \Vert_2^2 + \alpha$. The rationale for this design choice is heuristic. It may appear puzzling that the mining process neglects "very hard" negatives (i.e., those closer to the anchor than the positive). Experiments conducted by the FaceNet designers found that these often lead to convergence to degenerate local minima. Triplet mining is performed at each training step, from within the sample points contained in the training batch (this is known as online mining), after embeddings have been computed for all points in the batch. While ideally the entire dataset could be used, this is impractical in general. To support a large search space for triplets, the FaceNet authors used very large batches (1800 samples). Batches are constructed by selecting a large number of same-identity sample points (40) and randomly selected negatives for them.
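Triplet loss : Below is a minimal sketch of selecting a semi-hard negative for one anchor-positive pair from within a batch, following the description above; batch construction and embedding computation are assumed to happen elsewhere, and the tie-breaking rule (taking the closest qualifying negative) is an illustrative choice.

```python
import numpy as np

def semi_hard_negative(f_anchor, f_positive, f_candidates, alpha=0.2):
    """Pick a semi-hard negative for one anchor-positive pair.

    f_candidates holds embeddings of in-batch points of other identities.
    Semi-hard negatives lie farther from the anchor than the positive but
    still violate the margin:
        d(A, P) < d(A, N) < d(A, P) + alpha   (squared L2 distances)
    Returns the index of such a negative, or None if none exists.
    """
    d_ap = np.sum((f_anchor - f_positive) ** 2)
    d_an = np.sum((f_candidates - f_anchor) ** 2, axis=1)
    mask = (d_an > d_ap) & (d_an < d_ap + alpha)
    if not mask.any():
        return None
    candidates = np.where(mask)[0]
    # Among the semi-hard candidates, take the one closest to the anchor.
    return candidates[np.argmin(d_an[candidates])]
```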
Triplet loss : Triplet loss has been extended to simultaneously maintain a series of distance orders by optimizing a continuous relevance degree with a chain (i.e., ladder) of distance inequalities. This leads to the Ladder Loss, which has been demonstrated to offer performance enhancements of visual-semantic embedding in learning to rank tasks. In Natural Language Processing, triplet loss is one of the loss functions considered for BERT fine-tuning in the SBERT architecture. Other extensions involve specifying multiple negatives (multiple negatives ranking loss).
Triplet loss : Siamese neural network t-distributed stochastic neighbor embedding Similarity learning
Bag-of-words model : The bag-of-words model (BoW) is a model of text which uses an unordered collection (a "bag") of words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most of syntax or grammar) but captures multiplicity. The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier. It has also been used for computer vision. An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure.
Bag-of-words model : The following models a text document using bag-of-words, based on two simple text documents. For each document, the list of the words it contains is constructed. Each bag-of-words can then be represented as a JSON object and assigned to a JavaScript variable, in which each key is a word and each value is the number of occurrences of that word in the given text document. The order of elements is free, so any reordering of the object represents the same bag-of-words; this is also what we expect from a strict JSON object representation. If another document is formed as the union of these two documents, its representation combines the two sets of counts. So, as we see in the bag algebra, the "union" of two documents in the bag-of-words representation is, formally, the disjoint union, summing the multiplicities of each element.
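Bag-of-words model : A minimal sketch of this construction in Python follows; the two example sentences are illustrative stand-ins, and adding two Counter objects implements the multiplicity-summing "union" described above.

```python
from collections import Counter

def bag_of_words(text):
    """Count word multiplicities, ignoring order, case and punctuation."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return Counter(words)

# Illustrative documents.
doc1 = "John likes to watch movies. Mary likes movies too."
doc2 = "Mary also likes to watch football games."

bow1 = bag_of_words(doc1)   # e.g. {'likes': 2, 'movies': 2, 'john': 1, ...}
bow2 = bag_of_words(doc2)

# The "union" of the two documents sums the multiplicities of each word.
bow_union = bow1 + bow2
print(dict(bow_union))
```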
Bag-of-words model : Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document. Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system).
Bag-of-words model : A common alternative to using dictionaries is the hashing trick, where words are mapped directly to indices with a hashing function. Thus, no memory is required to store a dictionary. Hash collisions are typically dealt with by using freed-up memory to increase the number of hash buckets. In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
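Bag-of-words model : A minimal sketch of the hashing trick, assuming a fixed number of buckets; note that Python's built-in hash is randomized across processes, so a production implementation would use a stable hash function.

```python
import numpy as np

def hashed_bow(text, n_buckets=16):
    """Map each word to a bucket index via a hash function; no dictionary
    is stored, and distinct words may collide in the same bucket."""
    vec = np.zeros(n_buckets, dtype=int)
    for word in text.lower().split():
        vec[hash(word) % n_buckets] += 1
    return vec

print(hashed_bow("the quick brown fox jumps over the lazy dog"))
```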
Bag-of-words model : Additive smoothing Feature extraction Machine learning MinHash Vector space model w-shingling
Bag-of-words model : McTear, Michael (et al) (2016). The Conversational Interface. Springer International Publishing.
Noisy text analytics : Noisy text analytics is a process of information extraction whose goal is to automatically extract structured or semistructured information from noisy unstructured text data. While text analytics is a growing and mature field that has great value because of the huge amounts of data being produced, processing of noisy text is gaining in importance because a lot of common applications produce noisy text data. Noisy unstructured text data is found in informal settings such as online chat, text messages, e-mails, message boards, newsgroups, blogs, wikis and web pages. Also, text produced by processing spontaneous speech using automatic speech recognition, and printed or handwritten text using optical character recognition, contains processing noise. Text produced under such circumstances is typically highly noisy, containing spelling errors, abbreviations, non-standard words, false starts, repetitions, missing punctuation, missing letter case information, pause-filling words such as “um” and “uh”, and other texting and speech disfluencies. Such text can be seen in large amounts in contact centers, chat rooms, optical character recognition (OCR) of text documents, short message service (SMS) text, etc. Documents with historical language can also be considered noisy with respect to today's knowledge about the language. Such text contains important historical, religious, and ancient medical knowledge that is useful. The nature of the noisy text produced in all these contexts warrants moving beyond traditional text analysis techniques.
Noisy text analytics : Missing punctuation and the use of non-standard words can often hinder standard natural language processing tools such as part-of-speech tagging and parsing. Techniques both to learn from noisy data and to process noisy data are only now being developed.
Noisy text analytics : World Wide Web: Poorly written text is found in web pages, online chat, blogs, wikis, discussion forums and newsgroups. Most of these data are unstructured and the style of writing is very different from, say, well-written news articles. Analysis of web data is important because it is a source for market buzz analysis, market review, trend estimation, etc. Also, because of the large amount of data, it is necessary to find efficient methods of information extraction, classification, automatic summarization and analysis of these data. Contact centers: This is a general term for help desks, information lines and customer service centers operating in domains ranging from computer sales and support to mobile phones to apparel. On average, a person in the developed world interacts at least once a week with a contact center agent. A typical contact center agent handles over a hundred calls per day. They operate in various modes such as voice, online chat and e-mail. The contact center industry produces gigabytes of data in the form of e-mails, chat logs, voice conversation transcriptions, customer feedback, etc. A bulk of the contact center data is voice conversations. Transcription of these using state-of-the-art automatic speech recognition results in text with a 30–40% word error rate. Further, even written modes of communication like online chat between customers and agents, and even the interactions over email, tend to be noisy. Analysis of contact center data is essential for customer relationship management, customer satisfaction analysis, call modeling, customer profiling, agent profiling, etc., and it requires sophisticated techniques to handle poorly written text. Printed documents: Many libraries, government organizations and national defence organizations have vast repositories of hard copy documents. To retrieve and process the content from such documents, they need to be processed using optical character recognition. In addition to printed text, these documents may also contain handwritten annotations. OCRed text can be highly noisy depending on the font size, quality of the print, etc. It can range from 2–3% word error rates to as high as 50–60% word error rates. Handwritten annotations can be particularly hard to decipher, and error rates can be quite high in their presence. Short Message Service (SMS): Language usage in computer-mediated discourse, such as chats, emails and SMS texts, significantly differs from the standard form of the language. An urge towards shorter message length facilitating faster typing and the need for semantic clarity shape the structure of this non-standard form known as the texting language.
Noisy text analytics : Text analytics Information extraction Computational linguistics Natural language processing Named entity recognition Text mining Automatic summarization Statistical classification Data quality
Noisy text analytics : "Wong, W., Liu, W. & Bennamoun, M. Enhanced Integrated Scoring for Cleaning Dirty Texts. In: IJCAI Workshop on Analytics for Noisy Unstructured Text Data (AND), 2007; Hyderabad, India.". "L. V. Subramaniam, S. Roy, T. A. Faruquie, S. Negi, A survey of types of text noise and techniques to handle noisy text. In: Third Workshop on Analytics for Noisy Unstructured Text Data (AND), 2009".
Artificial psychology : Artificial psychology (AP) has had multiple meanings dating back to the 19th century, with recent usage related to artificial intelligence (AI). In 1999, Zhiliang Wang and Lun Xie presented a theory of artificial psychology based on artificial intelligence. They analyze human psychology using information science research methods and artificial intelligence research to probe deeper into the human mind.
Artificial psychology : Dan Curtis (b. 1963) proposed that AP is a theoretical discipline. The theory considers the situation when an artificial intelligence approaches the level of complexity where the intelligence meets two conditions: Condition I: A: It makes all of its decisions autonomously; B: It is capable of making decisions based on information that is new, abstract, or incomplete; C: The artificial intelligence is capable of reprogramming itself based on the new data, allowing it to evolve; D: And it is capable of resolving its own programming conflicts, even in the presence of incomplete data. This means that the intelligence autonomously makes value-based decisions, referring to values that the intelligence has created for itself. Condition II: All four criteria are met in situations that are not part of the original operating program. When both conditions are met, then, according to this theory, the possibility exists that the intelligence will reach irrational conclusions based on real or created information. At this point, the criteria are met for intervention which will not necessarily be resolved by simple re-coding of processes, due to the extraordinarily complex nature of the codebase itself, but rather through a discussion with the intelligence in a format which more closely resembles classical (human) psychology. If the intelligence cannot be reprogrammed by directly inputting new code, but requires the intelligence to reprogram itself through a process of analysis and decision based on information provided by a human, in order for it to overcome behavior which is inconsistent with the machine's purpose or ability to function normally, then artificial psychology is, by definition, what is required. The level of complexity that is required before these thresholds are met is currently a subject of extensive debate. The theory of artificial psychology does not address the specifics of what those levels may be, but only that the level is sufficiently complex that the intelligence cannot simply be recoded by a software developer, and therefore dysfunctionality must be addressed through the same processes that humans must go through to address their own dysfunctionalities. Along the same lines, artificial psychology does not address the question of whether or not the intelligence is conscious. As of 2022, the level of artificial intelligence does not approach any threshold where any of the theories or principles of artificial psychology can even be tested, and therefore artificial psychology remains a largely theoretical discipline. Even at a theoretical level, artificial psychology remains an advanced stage of artificial intelligence.
Artificial psychology : Holstein, Hans Jürgen; Stålberg, Lennart (1974). Homo Cyberneticus: Artificial psychology and generative micro-sociology. Sociografica. Lu, Quan; Chen, Jing; Meng, Bo (2006). "Web Personalization Based on Artificial Psychology". In Feng, Ling; Wang, Guoren; Zeng, Cheng; Huang, Ruhua (eds.). Web Information Systems – WISE 2006 Workshops. Lecture Notes in Computer Science. Vol. 4256. Springer Berlin Heidelberg. pp. 223–229. doi:10.1007/11906070_22. ISBN 9783540476641. Crowder, James A.; Friess, Shelli (2012). Artificial psychology: the psychology of AI (PDF). Proceedings of the 3rd annual international multi-conference on informatics and etics. CiteSeerX 10.1.1.368.170. "Artificial psychology: an attainable scientific research on the human brain" (1999). Proceedings of the Second International Conference on Intelligent Processing and Manufacturing of Materials (IPMM'99), p. 1067. doi:10.1109/IPMM.1999.791528.
Autonomic networking : Autonomic networking follows the concept of autonomic computing, an initiative started by IBM in 2001. Its ultimate aim is to create self-managing networks that overcome the rapidly growing complexity of the Internet and other networks and enable their further growth, far beyond today's scale.
Autonomic networking : The ever-growing management complexity of the Internet caused by its rapid growth is seen by some experts as a major problem that limits its usability in the future. What's more, increasingly popular smartphones, PDAs, networked audio and video equipment, and game consoles need to be interconnected. Pervasive Computing not only adds features, but also burdens existing networking infrastructure with more and more tasks that sooner or later will not be manageable by human intervention alone. Another important aspect is the price of manually controlling huge numbers of vitally important devices of current network infrastructures.
Autonomic networking : The autonomic nervous system (ANS) is the part of complex biological nervous systems that is not consciously controlled. It regulates bodily functions and the activity of specific organs. As proposed by IBM, future communication systems might be designed in a similar way to the ANS.
Autonomic networking : As autonomics conceptually derives from biological entities such as the human autonomic nervous system, each of the areas can be metaphorically related to functional and structural aspects of a living being. In the human body, the autonomic system facilitates and regulates a variety of functions including respiration, blood pressure and circulation, and emotive response. The autonomic nervous system is the interconnecting fabric that supports feedback loops between internal states and various sources by which internal and external conditions are monitored.
Autonomic networking : Consequently, many research projects are currently investigating how principles and paradigms of mother nature might be applied to networking.
Autonomic networking : Autonomic Computing Autonomic system (computing) Cognitive networks Network Compartment Collaborative innovation network In-Network Management Generic Autonomic Networking Architecture (GANA) EFIPSANS Project http://www.efipsans.org/
Autonomic networking : IBM Autonomic Computing Website Intel White Paper: Towards an Autonomic Framework Ipanema Technologies: Autonomic Networking applied to application performance optimization Archived 2009-04-26 at the Wayback Machine
Hyper basis function network : In machine learning, a hyper basis function network, or HyperBF network, is a generalization of the radial basis function (RBF) network concept, in which a Mahalanobis-like distance is used instead of the Euclidean distance measure. Hyper basis function networks were first introduced by Poggio and Girosi in the 1990 paper “Networks for Approximation and Learning”.
Hyper basis function network : The typical HyperBF network structure consists of a real input vector $x \in \mathbb{R}^n$, a hidden layer of activation functions and a linear output layer. The output of the network is a scalar function of the input vector, $\phi : \mathbb{R}^n \to \mathbb{R}$, given by $\phi(x) = \sum_{j=1}^{N} a_j \rho_j(\Vert x - \mu_j \Vert)$, where $N$ is the number of neurons in the hidden layer, and $\mu_j$ and $a_j$ are the center and weight of neuron $j$. The activation function $\rho_j(\Vert x - \mu_j \Vert)$ of the HyperBF network takes the form $\rho_j(\Vert x - \mu_j \Vert) = e^{-(x - \mu_j)^T R_j (x - \mu_j)}$, where $R_j$ is a positive definite $d \times d$ matrix. Depending on the application, the following types of matrices $R_j$ are usually considered: $R_j = \frac{1}{2\sigma^2}\mathbb{I}_{d \times d}$, where $\sigma > 0$; this case corresponds to the regular RBF network. $R_j = \frac{1}{2\sigma_j^2}\mathbb{I}_{d \times d}$, where $\sigma_j > 0$; in this case, the basis functions are radially symmetric, but are scaled with different widths. $R_j = \operatorname{diag}\left(\frac{1}{2\sigma_{j1}^2}, \ldots, \frac{1}{2\sigma_{jd}^2}\right)$, where $\sigma_{ji} > 0$; every neuron has an elliptic shape with varying size. A positive definite matrix that is not diagonal.
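Hyper basis function network : A minimal sketch of evaluating a HyperBF network output with a Gaussian-type activation of the form above; the centers, weights and R_j matrices are random illustrative values, with R_j chosen as the regular-RBF special case.

```python
import numpy as np

def hyperbf_output(x, centers, weights, R_matrices):
    """phi(x) = sum_j a_j * exp(-(x - mu_j)^T R_j (x - mu_j)).

    centers:    (N, d) array of neuron centers mu_j
    weights:    (N,)   array of output weights a_j
    R_matrices: (N, d, d) array of positive definite matrices R_j
    """
    phi = 0.0
    for mu_j, a_j, R_j in zip(centers, weights, R_matrices):
        diff = x - mu_j
        phi += a_j * np.exp(-diff @ R_j @ diff)
    return phi

# Illustrative network: 3 neurons in a 2-dimensional input space, with
# R_j = I / (2 * sigma^2), i.e. the regular RBF special case.
rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))
weights = rng.normal(size=3)
sigma = 1.0
R_matrices = np.repeat(np.eye(2)[None] / (2 * sigma**2), 3, axis=0)
print(hyperbf_output(np.array([0.5, -0.2]), centers, weights, R_matrices))
```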
Hyper basis function network : Training HyperBF networks involves estimation of the weights $a_j$ and the shapes and centers of the neurons, $R_j$ and $\mu_j$. Poggio and Girosi (1990) describe a training method with moving centers and adaptable neuron shapes. An outline of the method is provided below. Consider the quadratic loss of the network, $H[\phi^*] = \sum_{i=1}^{N} (y_i - \phi^*(x_i))^2$. At the optimum, the partial derivatives of $H[\phi^*]$ with respect to $a_j$, $\mu_j$ and $W$ must vanish, where $R_j = W^T W$. Then, in the gradient descent method, the values of $a_j, \mu_j, W$ that minimize $H[\phi^*]$ can be found as a stable fixed point of the dynamical system $\dot{a}_j = -\omega\,\partial H/\partial a_j$, $\dot{\mu}_j = -\omega\,\partial H/\partial \mu_j$, $\dot{W} = -\omega\,\partial H/\partial W$, where $\omega$ determines the rate of convergence. Overall, training HyperBF networks can be computationally challenging. Moreover, the high degree of freedom of HyperBF leads to overfitting and poor generalization. However, HyperBF networks have the important advantage that a small number of neurons is enough for learning complex functions.
Text nailing : Text Nailing (TN) is an information extraction method for semi-automatically extracting structured information from unstructured documents. The method allows a human to interactively review small blobs of text out of a large collection of documents, to identify potentially informative expressions. The identified expressions can then be used to enhance computational methods that rely on text (e.g., regular expressions) as well as advanced natural language processing (NLP) techniques. TN combines two concepts: 1) human interaction with narrative text to identify highly prevalent non-negated expressions, and 2) conversion of all expressions and notes into non-negated, alphabetical-only representations to create homogeneous representations. In traditional machine learning approaches for text classification, a human expert is required to label phrases or entire notes, and then a supervised learning algorithm attempts to generalize the associations and apply them to new data. In contrast, using non-negated distinct expressions eliminates the need for an additional computational method to achieve generalizability.
Text nailing : TN was developed at Massachusetts General Hospital and was tested in multiple scenarios, including extracting smoking status and family history of coronary artery disease, identifying patients with sleep disorders, improving the accuracy of the Framingham risk score for patients with non-alcoholic fatty liver disease, and classifying non-adherence in type-2 diabetes. A comprehensive review regarding extracting information from textual documents in the electronic health record is available. The importance of using non-negated expressions to achieve increased accuracy of text-based classifiers was emphasized in a letter published in Communications of the ACM in October 2018.
Text nailing : Sample code for extracting smoking status from narrative notes using "nailed expressions" is available on GitHub.
Text nailing : In July 2018, researchers from Virginia Tech and the University of Illinois at Urbana–Champaign referred to TN as an example of progressive cyber-human intelligence (PCHI).
Text nailing : Chen & Asch 2017 wrote "With machine learning situated at the peak of inflated expectations, we can soften a subsequent crash into a “trough of disillusionment” by fostering a stronger appreciation of the technology’s capabilities and limitations." A letter published in Communications of the ACM, "Beyond brute force", emphasized that a brute force approach may perform better than traditional machine learning algorithms when applied to text. The letter stated "... machine learning algorithms, when applied to text, rely on the assumption that any language includes an infinite number of possible expressions. In contrast, across a variety of medical conditions, we observed that clinicians tend to use the same expressions to describe patients' conditions." In his viewpoint published in June 2018 concerning slow adoption of data-driven findings in medicine, Uri Kartoun, co-creator of Text Nailing states that " ...Text Nailing raised skepticism in reviewers of medical informatics journals who claimed that it relies on simple tricks to simplify the text, and leans heavily on human annotation. TN indeed may seem just like a trick of the light at first glance, but it is actually a fairly sophisticated method that finally caught the attention of more adventurous reviewers and editors who ultimately accepted it for publication."
Text nailing : The human-in-the-loop process is a way to generate features using domain experts. Using domain experts to come up with features is not a novel concept. However, the specific interfaces and method which help the domain experts create the features are most likely novel. In this case, the features the experts create are equivalent to regular expressions. Removing non-alphabetical characters and matching on "smokesppd" is equivalent to the regular expression /smokes[^a-zA-Z]*ppd/. Using regular expressions as features for text classification is not novel. Given these features, the classifier is a threshold set manually by the authors, decided by the performance on a set of documents. This is a classifier; it is just that the parameter of the classifier, in this case a threshold, is set manually. Given the same features and documents, almost any machine learning algorithm should be able to find the same threshold or (more likely) a better one. The authors note that using support vector machines (SVM) and hundreds of documents gives inferior performance, but do not specify which features or documents the SVM was trained/tested on. A fair comparison would use the same features and document sets as those used by the manual threshold classifier.
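Text nailing : A small sketch of the equivalence noted above between matching the alphabetical-only representation and applying the corresponding regular expression; the example note text is invented.

```python
import re

note = "Patient smokes 1 ppd for 20 years."

# Text Nailing-style matching: strip non-alphabetical characters, lowercase,
# then look for the nailed expression as a plain substring.
collapsed = re.sub(r"[^a-zA-Z]", "", note).lower()
print("smokesppd" in collapsed)                                       # True

# The equivalent regular expression applied to the original note.
print(bool(re.search(r"smokes[^a-zA-Z]*ppd", note, re.IGNORECASE)))   # True
```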
Spike-and-slab regression : Spike-and-slab regression is a type of Bayesian linear regression in which a particular hierarchical prior distribution for the regression coefficients is chosen such that only a subset of the possible regressors is retained. The technique is particularly useful when the number of possible predictors is larger than the number of observations. The idea of the spike-and-slab model was originally proposed by Mitchell & Beauchamp (1988). The approach was further significantly developed by Madigan & Raftery (1994) and George & McCulloch (1997). A recent and important contribution to this literature is Ishwaran & Rao (2005).
Spike-and-slab regression : Suppose we have P possible predictors in some model. Vector γ has a length equal to P and consists of zeros and ones. This vector indicates whether a particular variable is included in the regression or not. If no specific prior information on initial inclusion probabilities of particular variables is available, a Bernoulli prior distribution is a common default choice. Conditional on a predictor being in the regression, we identify a prior distribution for the model coefficient which corresponds to that variable (β). A common choice at that step is to use a normal prior with a mean equal to zero and a large variance calculated based on $(X^{T}X)^{-1}$ (where $X$ is the design matrix of explanatory variables of the model). A draw of γ from its prior distribution is a list of the variables included in the regression. Conditional on this set of selected variables, we take a draw from the prior distribution of the regression coefficients (if γi = 1 then βi ≠ 0 and if γi = 0 then βi = 0). βγ denotes the subset of β for which γi = 1. In the next step, we calculate the posterior probability of both inclusion and coefficients by applying a standard statistical procedure. All steps of the described algorithm are repeated thousands of times using the Markov chain Monte Carlo (MCMC) technique. As a result, we obtain a posterior distribution of γ (variable inclusion in the model), β (regression coefficient values) and the corresponding prediction of y. The model got its name (spike-and-slab) due to the shape of the two prior distributions. The "spike" is the probability of a particular coefficient in the model being zero. The "slab" is the prior distribution for the regression coefficient values. An advantage of Bayesian variable selection techniques is that they are able to make use of prior knowledge about the model. In the absence of such knowledge, some reasonable default values can be used; to quote Scott and Varian (2013): "For the analyst who prefers simplicity at the cost of some reasonable assumptions, useful prior information can be reduced to an expected model size, an expected R2, and a sample size ν determining the weight given to the guess at R2." Some researchers suggest the following default values: R2 = 0.5, ν = 0.01, and π = 0.5 (the parameter of the prior Bernoulli distribution).
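Spike-and-slab regression : A minimal sketch of a single draw from a spike-and-slab prior of the kind described above; the design matrix, inclusion probability, and slab scale g are illustrative, and a full analysis would embed such draws inside an MCMC sampler that also conditions on the observed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative design matrix: n observations, P candidate predictors.
n, P = 50, 10
X = rng.normal(size=(n, P))

pi = 0.5    # prior inclusion probability (Bernoulli parameter)
g = 100.0   # scale giving the "slab" a large variance

# One prior draw of gamma: which predictors enter the model ...
gamma = rng.binomial(1, pi, size=P)

# ... and, conditional on inclusion, coefficients from a zero-mean normal
# whose covariance is based on (X^T X)^{-1} (a g-prior-style slab).
slab_cov = g * np.linalg.inv(X.T @ X)
beta = np.zeros(P)
included = gamma == 1
if included.any():
    cov_sub = slab_cov[np.ix_(included, included)]
    beta[included] = rng.multivariate_normal(np.zeros(included.sum()), cov_sub)

print(gamma)
print(np.round(beta, 2))
```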
Spike-and-slab regression : Bayesian model averaging Bayesian structural time series Lasso
Spike-and-slab regression : Congdon, Peter D. (2020). "Regression Techniques using Hierarchical Priors". Bayesian Hierarchical Models (2nd ed.). Boca Raton: CRC Press. pp. 253–315. ISBN 978-1-03-217715-1.
Timeline of machine learning : This page is a timeline of machine learning. Major discoveries, achievements, milestones and other major events in machine learning are included.
Timeline of machine learning : History of artificial intelligence Timeline of artificial intelligence Timeline of machine translation
Concept drift : In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in the fields that involve dynamically changing data and data models.
Concept drift : In machine learning and predictive analytics this drift phenomenon is called concept drift. In machine learning, a common element of a data model is the set of statistical properties, such as the probability distribution of the actual data. If these deviate from the statistical properties of the training data set, then the learned predictions may become invalid if the drift is not addressed.
Concept drift : Another important area is software engineering, where three types of data drift affecting data fidelity may be recognized. Changes in the software environment ("infrastructure drift") may invalidate software infrastructure configuration. "Structural drift" happens when the data schema changes, which may invalidate databases. "Semantic drift" is a change in the meaning of data while the structure does not change. In many cases this may happen in complicated applications when many independent developers introduce changes without proper awareness of the effects of their changes in other areas of the software system. For many application systems, the nature of the data on which they operate is subject to change for various reasons, e.g., due to changes in the business model, system updates, or switching the platform on which the system operates. In the case of cloud computing, infrastructure drift that may affect the applications running on the cloud may be caused by updates of cloud software. There are several types of detrimental effects of data drift on data fidelity. Data corrosion is the passing of drifted data into the system undetected. Data loss happens when valid data are ignored due to non-conformance with the applied schema. Squandering is the phenomenon when new data fields are introduced upstream of the data processing pipeline, but somewhere downstream these data fields are absent.
Concept drift : "Data drift" may refer to the phenomenon when database records fail to match the real-world data due to the changes in the latter over time. This is a common problem with databases involving people, such as customers, employees, citizens, residents, etc. Human data drift may be caused by unrecorded changes in personal data, such as place of residence or name, as well as due to errors during data input. "Data drift" may also refer to inconsistency of data elements between several replicas of a database. The reasons can be difficult to identify. A simple drift detection is to run checksum regularly. However the remedy may be not so easy.
Concept drift : The behavior of customers in an online shop may change over time. For example, suppose weekly merchandise sales are to be predicted, and a predictive model has been developed that works satisfactorily. The model may use inputs such as the amount of money spent on advertising, promotions being run, and other metrics that may affect sales. The model is likely to become less and less accurate over time – this is concept drift. In the merchandise sales application, one reason for concept drift may be seasonality, which means that shopping behavior changes seasonally. Perhaps there will be higher sales in the winter holiday season than during the summer, for example. Concept drift generally occurs when the covariates that comprise the data set begin to explain the variation of the target set less accurately; there may be some confounding variables that have emerged, and that one simply cannot account for, which causes model accuracy to progressively decrease with time. Generally, it is advised to perform health checks as part of the post-production analysis and to re-train the model with new assumptions upon signs of concept drift.
Concept drift : To prevent deterioration in prediction accuracy because of concept drift, reactive and tracking solutions can be adopted. Reactive solutions retrain the model in reaction to a triggering mechanism, such as a change-detection test, to explicitly detect concept drift as a change in the statistics of the data-generating process. When concept drift is detected, the current model is no longer up-to-date and must be replaced by a new one to restore prediction accuracy. A shortcoming of reactive approaches is that performance may decay until the change is detected. Tracking solutions seek to track the changes in the concept by continually updating the model. Methods for achieving this include online machine learning, frequent retraining on the most recently observed samples, and maintaining an ensemble of classifiers where one new classifier is trained on the most recent batch of examples and replaces the oldest classifier in the ensemble. Contextual information, when available, can be used to better explain the causes of concept drift: for instance, in the sales prediction application, concept drift might be compensated for by adding information about the season to the model. Providing information about the time of year is likely to slow the rate at which the model deteriorates, but concept drift is unlikely to be eliminated altogether. This is because actual shopping behavior does not follow any static, finite model. New factors may arise at any time that influence shopping behavior, and the influence of the known factors or their interactions may change. Concept drift cannot be avoided for complex phenomena that are not governed by fixed laws of nature. All processes that arise from human activity, such as socioeconomic processes and biological processes, are likely to experience concept drift. Therefore, periodic retraining, also known as refreshing, of any model is necessary.
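Concept drift : A minimal sketch of a reactive scheme of the kind described above, which monitors accuracy over a sliding window and retrains when it degrades; the window size, threshold, retraining function, and the scikit-learn-style predict interface are all illustrative assumptions.

```python
from collections import deque

def monitor_and_retrain(model, stream, train_fn, window=200, threshold=0.75):
    """Reactive drift handling: retrain when windowed accuracy drops.

    stream yields (features, true_label) pairs; train_fn(model, samples)
    returns a model retrained on recently observed labelled samples.
    """
    recent_hits = deque(maxlen=window)
    buffer = deque(maxlen=5 * window)
    for x, y in stream:
        recent_hits.append(model.predict([x])[0] == y)
        buffer.append((x, y))
        accuracy = sum(recent_hits) / len(recent_hits)
        if len(recent_hits) == window and accuracy < threshold:
            model = train_fn(model, list(buffer))  # reactive retraining
            recent_hits.clear()
    return model
```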
Concept drift : Data stream mining Data mining Snyk, a company whose portfolio includes drift detection in software applications
Concept drift : Many papers have been published describing algorithms for concept drift detection. Only reviews, surveys and overviews are listed here.
Referring expression generation : Referring expression generation (REG) is the subtask of natural language generation (NLG) that has received the most scholarly attention. While NLG is concerned with the conversion of non-linguistic information into natural language, REG focuses only on the creation of referring expressions (noun phrases) that identify specific entities called targets. This task can be split into two sections. The content selection part determines which set of properties distinguishes the intended target, and the linguistic realization part defines how these properties are translated into natural language. A variety of algorithms have been developed in the NLG community to generate different types of referring expressions.
Referring expression generation : A referring expression (RE), in linguistics, is any noun phrase, or surrogate for a noun phrase, whose function in discourse is to identify some individual object (thing, being, event...). The technical terminology for "identify" differs a great deal from one school of linguistics to another. The most widespread term is probably refer, and a thing identified is a referent, as for example in the work of John Lyons. In linguistics, the study of reference relations belongs to pragmatics, the study of language use, though it is also a matter of great interest to philosophers, especially those wishing to understand the nature of knowledge, perception and cognition more generally. Various devices can be used for reference: determiners, pronouns, proper names... Reference relations can be of different kinds; referents can be in a "real" or imaginary world, in discourse itself, and they may be singular, plural, or collective.
Referring expression generation : Dale and Reiter (1995) think about referring expressions as distinguishing descriptions. They define: the referent as the entity that should be described; the context set as the set of salient entities; the contrast set or potential distractors as all elements of the context set except the referent; a property as a reference to a single attribute–value pair. Each entity in the domain can be characterised as a set of attribute–value pairs, for example ⟨type, dog⟩, ⟨gender, female⟩ or ⟨age, 10 years⟩. The problem then is defined as follows: Let r be the intended referent, and C be the contrast set. Then, a set L of attribute–value pairs will represent a distinguishing description if the following two conditions hold: Every attribute–value pair in L applies to r: that is, every element of L specifies an attribute–value that r possesses. For every member c of C, there is at least one element l of L that does not apply to c: that is, there is an l in L that specifies an attribute–value that c does not possess. l is said to rule out c. In other words, to generate a referring expression one is looking for a set of properties that apply to the referent but not to the distractors. The problem could easily be solved by conjoining all the properties of the referent, which often leads to long descriptions violating the second Gricean Maxim of Quantity. Another approach would be to find the shortest distinguishing description, as the Full Brevity algorithm does. Yet in practice it is most common to instead include the condition that referring expressions produced by an algorithm should be as similar to human-produced ones as possible, although this is often not explicitly mentioned.
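Referring expression generation : A minimal sketch of checking Dale and Reiter's two conditions for a distinguishing description; the entities and attribute–value pairs form an invented illustrative domain.

```python
def is_distinguishing(description, referent, contrast_set):
    """Dale & Reiter's conditions for a distinguishing description L.

    description:  set of (attribute, value) pairs L
    referent:     dict attribute -> value for the intended referent r
    contrast_set: list of such dicts for the potential distractors C
    """
    # Condition 1: every pair in L applies to the referent r.
    if not all(referent.get(a) == v for a, v in description):
        return False
    # Condition 2: every distractor c is ruled out by at least one pair in L.
    return all(any(c.get(a) != v for a, v in description) for c in contrast_set)

# Illustrative domain: pick out the small dog among other entities.
referent = {"type": "dog", "size": "small", "colour": "black"}
distractors = [
    {"type": "dog", "size": "large", "colour": "black"},
    {"type": "cat", "size": "small", "colour": "black"},
]
print(is_distinguishing({("type", "dog"), ("size", "small")},
                        referent, distractors))   # True: rules out both
```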
Referring expression generation : Before 2000, evaluation of REG systems was mostly theoretical in nature, like that done by Dale and Reiter. More recently, empirical studies have become popular, mostly based on the assumption that the generated expressions should be similar to human-produced ones. Corpus-based evaluation began quite late in REG because of a lack of suitable data sets. Corpus-based evaluation is nevertheless the dominant method at the moment, though evaluation by human judgement is also used; a simple set-overlap measure of this kind is sketched below.
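One common corpus-based approach compares the attribute set selected by an algorithm with a human-produced one using a set-overlap score such as the Dice coefficient over attribute–value pairs. The sketch below is purely illustrative; the attribute sets are invented.

```python
def dice(generated, reference):
    """Dice coefficient between two sets of attribute-value pairs (1.0 = identical)."""
    if not generated and not reference:
        return 1.0
    return 2 * len(generated & reference) / (len(generated) + len(reference))

# Hypothetical attribute sets produced for the same referent.
system_output = {("type", "dog"), ("age", "10 years")}
human_output = {("type", "dog"), ("gender", "female")}
print(dice(system_output, human_output))  # 0.5: one shared pair out of two in each set
```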
Generative pre-trained transformer : A generative pre-trained transformer (GPT) is a type of large language model (LLM) and a prominent framework for generative artificial intelligence. It is an artificial neural network used in natural language processing by machines. It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content. As of 2023, most LLMs have these characteristics and are sometimes referred to broadly as GPTs. The first GPT was introduced in 2018 by OpenAI. OpenAI has released significant GPT foundation models that have been sequentially numbered to comprise its "GPT-n" series. Each of these was significantly more capable than the previous one, due to increased size (number of trainable parameters) and training. The most recent of these, GPT-4o, was released in May 2024. Such models have been the basis for their more task-specific GPT systems, including models fine-tuned for instruction following, which in turn power the ChatGPT chatbot service. The term "GPT" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models include a series of models created by EleutherAI and seven models created by Cerebras in 2023. Companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's "EinsteinGPT" (for CRM) and Bloomberg's "BloombergGPT" (for finance).
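As a minimal, hedged illustration of how a pre-trained GPT-style model generates text from a prompt, the sketch below uses the small, openly available GPT-2 checkpoint through the Hugging Face transformers pipeline. It assumes the transformers library is installed and the model weights can be downloaded; it does not reproduce any proprietary OpenAI model.

```python
from transformers import pipeline

# Load a small, openly available GPT-style model (GPT-2) for text generation.
generator = pipeline("text-generation", model="gpt2")

# The pre-trained model continues the prompt with novel, human-like text.
result = generator("Generative pre-trained transformers are", max_new_tokens=30)
print(result[0]["generated_text"])
```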
Generative pre-trained transformer : A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks. Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. The most recent in that series is GPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models"). Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has been made available to developers via an API, and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs). Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA. Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multimodal LLM that is capable of processing text and image input (though its output is limited to text). Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion and parallel decoding. Such kinds of models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images.
Generative pre-trained transformer : A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering. An important example of this is fine-tuning models to follow instructions, which is itself a fairly broad task but more targeted than a foundation model. In January 2022, OpenAI introduced "InstructGPT", a series of models which were fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models. Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings. Other instruction-tuned models have been released by others, including a fully open version. Another (related) kind of task-specific model is the chatbot, which engages in human-like conversation. In November 2022, OpenAI launched ChatGPT, an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT. They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset for a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft), and Google's competing chatbot Gemini (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM). Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, such as developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user. This is known as an AI agent, and more specifically a recursive one because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well.
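For illustration only, the following sketch shows a single supervised fine-tuning step on one instruction–response pair using the small open GPT-2 model. It corresponds only to the supervised portion of instruction tuning and omits the reinforcement-learning-from-human-feedback stage entirely; the prompt format, model choice, and hyperparameters are assumptions, not OpenAI's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One hypothetical instruction-response pair, formatted as a single training string.
text = "Instruction: Translate to French: Good morning\nResponse: Bonjour"
batch = tokenizer(text, return_tensors="pt")

# Supervised fine-tuning step: a causal language-modeling loss over the sequence
# teaches the model to produce the response after the instruction. (For brevity,
# the loss here also covers the instruction tokens, which real setups usually mask.)
outputs = model(**batch, labels=batch["input_ids"])
outputs.loss.backward()
optimizer.step()
print(float(outputs.loss))
```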
Generative pre-trained transformer : OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as a brand of OpenAI. In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their artificial intelligence (AI) services would no longer be able to include "GPT" in such names or branding. In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist). As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT", but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT" that are being called GPTs on the OpenAI site. OpenAI's terms of service say that its subscribers may use "GPT" in the names of these, although it is "discouraged". Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) to seek domestic trademark registration for the term "GPT" in the field of AI. OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023. In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic. As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain a registered U.S. trademark does not preclude some level of common-law trademark rights in the U.S. or trademark rights in other countries. For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to its specific offerings, in addition to being a broader technical term for the kind of technology. Some media reports have suggested that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-based chatbot product, ChatGPT, for which OpenAI has separately sought protection (and which it has sought to enforce more strongly). Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted, as it is used frequently as a common term to refer simply to AI systems that involve generative pre-trained transformers. In any event, to whatever extent exclusive rights in the term may exist in the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion. If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine of descriptive fair use would still allow non-brand-related usage to continue.
Generative pre-trained transformer : This section lists the main official publications from OpenAI and Microsoft on their GPT models. GPT-1: report, GitHub release. GPT-2: blog announcement, report on its decision of "staged release", GitHub release. GPT-3: report. No GitHub or any other form of code release thenceforth. WebGPT: blog announcement, report. InstructGPT: blog announcement, report. ChatGPT: blog announcement (no report). GPT-4: blog announcement, reports, model card. GPT-4o: blog announcement. GPT-4.5: blog announcement.
Military applications of artificial intelligence : Artificial intelligence (AI) has many applications in warfare, including in communications, intelligence, and munitions control.
Military applications of artificial intelligence : AI can enhance command and control, communications, sensors, integration and interoperability. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, and coordination and deconfliction of distributed Joint Fires between networked combat vehicles, both human-operated and autonomous. AI has been used in military operations in Iraq, Syria, Ukraine and Israel.
Military applications of artificial intelligence : Various countries are researching and deploying AI military applications, in what has been termed the "artificial intelligence arms race". Ongoing research is focused on intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. Worldwide annual military spending on robotics rose from US$5.1 billion in 2010 to US$7.5 billion in 2015. In November 2023, US Vice President Kamala Harris disclosed a declaration signed by 31 nations to set guardrails for the military use of AI. The commitments include using legal reviews to ensure the compliance of military AI with international laws, and being cautious and transparent in the development of this technology. Many AI researchers try to avoid military applications, and guardrails intended to prevent military use are integrated into most mainstream large language models.
Military applications of artificial intelligence : Military artificial intelligence systems have appeared in many works of fiction, often as antagonists.
Activation function : The activation function of a node in an artificial neural network is a function that calculates the output of the node based on its individual inputs and their weights. Nontrivial problems can be solved using only a few nodes if the activation function is nonlinear. Modern activation functions include the logistic (sigmoid) function used in the 2012 speech recognition model developed by Hinton et al.; the ReLU used in the 2012 AlexNet computer vision model and in the 2015 ResNet model; and the smooth version of the ReLU, the GELU, which was used in the 2018 BERT model.
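As a minimal sketch (using NumPy and purely illustrative weights), a node's output is the activation function applied to the weighted sum of its inputs plus a bias; the three activations mentioned above are shown, with the GELU given in its common tanh approximation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def gelu(z):
    # Commonly used tanh approximation of the GELU.
    return 0.5 * z * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (z + 0.044715 * z**3)))

# A single node: output = activation(w . x + b), with illustrative values.
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.1, -0.6])   # weights
b = 0.05                         # bias
z = np.dot(w, x) + b

for name, f in [("sigmoid", sigmoid), ("ReLU", relu), ("GELU", gelu)]:
    print(name, f(z))
```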
Activation function : Aside from their empirical performance, activation functions also have different mathematical properties: Nonlinear If the activation function is non-linear, a two-layer neural network can be proven to be a universal function approximator. This is known as the Universal Approximation Theorem. The identity activation function does not satisfy this property. When multiple layers use the identity activation function, the entire network is equivalent to a single-layer model (as the short numerical check below illustrates). Range When the range of the activation function is finite, gradient-based training methods tend to be more stable, because pattern presentations significantly affect only limited weights. When the range is infinite, training is generally more efficient because pattern presentations significantly affect most of the weights; in the latter case, smaller learning rates are typically necessary. Continuously differentiable This property is desirable for enabling gradient-based optimization methods (ReLU is not continuously differentiable and has some issues with gradient-based optimization, but optimization is still possible). The binary step activation function is not differentiable at 0, and its derivative is 0 for all other values, so gradient-based methods can make no progress with it. These properties do not decisively influence performance, nor are they the only mathematical properties that may be useful. For instance, the strictly positive range of the softplus makes it suitable for predicting variances in variational autoencoders.
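The collapse of identity-activation layers into a single linear map can be checked numerically. The sketch below, with arbitrary random weights, shows that stacking two linear layers with no nonlinearity equals one layer whose weight matrix is the product of the two.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # an arbitrary input vector
W1 = rng.normal(size=(4, 3))    # first-layer weights
W2 = rng.normal(size=(2, 4))    # second-layer weights

# Two layers with the identity activation...
two_layer = W2 @ (W1 @ x)
# ...are equivalent to a single layer with weight matrix W2 @ W1.
one_layer = (W2 @ W1) @ x

print(np.allclose(two_layer, one_layer))  # True
```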
Activation function : The most common activation functions can be divided into three categories: ridge functions, radial functions and fold functions. An activation function f is saturating if lim_{|v| → ∞} |∇f(v)| = 0; it is nonsaturating if lim_{|v| → ∞} |∇f(v)| ≠ 0. Non-saturating activation functions, such as ReLU, may be better than saturating activation functions, because they are less likely to suffer from the vanishing gradient problem.
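The difference can be seen by evaluating derivatives at large inputs: the sigmoid's gradient shrinks toward zero (saturating), while the ReLU's gradient stays at 1 for large positive inputs (non-saturating). The sketch below uses simple closed-form derivatives for illustration.

```python
import numpy as np

def sigmoid_grad(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)              # derivative of the logistic (sigmoid) function

def relu_grad(z):
    return 1.0 if z > 0 else 0.0      # derivative of ReLU (undefined at exactly 0)

# The sigmoid gradient vanishes as |z| grows; the ReLU gradient does not.
for z in [1.0, 10.0, 100.0]:
    print(f"z={z:6.1f}  sigmoid grad={sigmoid_grad(z):.2e}  ReLU grad={relu_grad(z):.1f}")
```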
Activation function : Logistic function Rectifier (neural networks) Stability (learning theory) Softmax function
Activation function : Kunc, Vladimír; Kléma, Jiří (2024). "Three Decades of Activations: A Comprehensive Survey of 400 Activation Functions for Neural Networks". arXiv:2402.09092. doi:10.48550/arXiv.2402.09092. Nwankpa, Chigozie; Ijomah, Winifred; Gachagan, Anthony; Marshall, Stephen (2018). "Activation Functions: Comparison of trends in Practice and Research for Deep Learning". arXiv:1811.03378 [cs.LG]. Dubey, Shiv Ram; Singh, Satish Kumar; Chaudhuri, Bidyut Baran (2022). "Activation functions in deep learning: A comprehensive survey and benchmark". Neurocomputing. 503: 92–108. arXiv:2109.14545. doi:10.1016/j.neucom.2022.06.111. ISSN 0925-2312.
Abdul Majid Bhurgri Institute of Language Engineering : Abdul Majid Bhurgri Institute of Language Engineering (Sindhi: عبدالماجد ڀرڳڙي انسٽيٽيوٽ آف لئنگئيج انجنيئرنگ) is an autonomous body under the administrative control of the Culture, Tourism and Antiquities Department, Government of Sindh, established to bring the Sindhi language on par with national and international languages in all computational processes and natural language processing.
Abdul Majid Bhurgri Institute of Language Engineering : In recognition of the services of Abdul-Majid Bhurgri, the founder of Sindhi computing, the Government of Sindh named the institute after him. The institute originated in a concept presented by the language engineer and linguist Amar Fayaz Buriro in a briefing to Syed Sardar Ali Shah, Minister for Culture, Tourism and Antiquities, Government of Sindh, on 21 February 2017, during the International Mother Language Day celebration at the Sindhi Language Authority, Hyderabad, Sindh. After the presentation, the minister announced the establishment of the institute, and the Government of Sindh added the development scheme to the budget for fiscal year 2017–2018.
Abdul Majid Bhurgri Institute of Language Engineering : The Institute has developed several projects aimed at advancing the Sindhi language and promoting linguistic research. Notable initiatives include the AMBILE Hamiz Ali Sindhi optical character recognition system, which allows for the accurate digitization of Sindhi text, and the ongoing Sindhi WordNet System, a project to build a comprehensive lexical database for natural language processing. The institute has also created a font that integrates symbols from the Indus script, the Khudabadi script, and the modern Perso-Arabic Script Code for Information Interchange into a single resource for researchers. Additionally, the institute has developed online converter tools that automatically transliterate between the Perso-Arabic script and the Devanagari script, improving linguistic accessibility. Another key project is Bhittaipedia, a digital platform dedicated to the preservation and dissemination of the poetry of Shah Abdul Latif Bhittai, one of Sindh's most renowned poets.
Abdul Majid Bhurgri Institute of Language Engineering : The institute is located behind the Sindh Museum and the Sindhi Language Authority, on the N-5 National Highway, Qasimabad, Hyderabad, Sindh. == References ==
Moral outsourcing : Moral outsourcing refers to placing responsibility for ethical decision-making onto external entities, often algorithms. The term is often used in discussions of computer science and algorithmic fairness, but it can apply to any situation in which one appeals to outside agents in order to absolve oneself of responsibility for one's actions. In this context, moral outsourcing specifically refers to the tendency of society to blame technology, rather than its creators or users, for any harm it may cause.
Moral outsourcing : The term "moral outsourcing" was coined by Dr. Rumman Chowdhury, a data scientist concerned with the overlap between artificial intelligence and social issues. Chowdhury used the term to describe looming fears of a so-called "Fourth Industrial Revolution" following the rise of artificial intelligence. Technologists often apply moral outsourcing to distance themselves from their part in building offensive products. In her TED Talk, Chowdhury gives the example of a creator excusing their work by saying they were simply doing their job; this is a case of moral outsourcing and of not taking ownership of the consequences of creation. When it comes to AI, moral outsourcing allows creators to decide when the machine is human and when it is a computer, shifting the blame and responsibility for moral failings off of the technologists and onto the technology. Conversations around AI, bias, and their impacts require accountability to bring change; it is difficult to address biased systems if their creators use moral outsourcing to avoid taking any responsibility for the issue. One example of moral outsourcing is the anger directed at machines for "taking jobs away from humans" rather than at the companies that employ that technology and jeopardize jobs in the first place. The term draws on the concept of outsourcing, that is, enlisting an external operation to complete specific work for another organization; in the case of moral outsourcing, the work of resolving moral dilemmas or making choices according to an ethical code is supposed to be conducted by another entity.
Moral outsourcing : In the medical field, AI is increasingly involved in decision-making processes about which patients to treat and how to treat them. The responsibility of the doctor to make informed decisions about what is best for their patients is outsourced to an algorithm. Sympathy is also noted to be an important part of medical practice, an aspect that artificial intelligence conspicuously lacks. This form of moral outsourcing is a major concern in the medical community. Another field of technology in which moral outsourcing is frequently brought up is autonomous vehicles. California Polytechnic State University professor Keith Abney proposed an example scenario: "Suppose we have some [troublemaking] teenagers, and they see an autonomous vehicle, they drive right at it. They know the autonomous vehicle will swerve off the road and go off a cliff, but should it?" The decision of whether to sacrifice the autonomous vehicle (and any passengers inside) or the vehicle coming at it will be written into the algorithms defining the car's behavior. In the case of moral outsourcing, the responsibility for any damage caused by an accident may be attributed to the autonomous vehicle itself, rather than to the creators who wrote the protocol the vehicle uses to "decide" what to do. Moral outsourcing is also used to delegate the consequences of predictive policing algorithms to technology rather than to the creators or the police. There are many ethical concerns with predictive policing because it results in the over-policing of low-income and minority communities. In the context of moral outsourcing, the positive feedback loop of sending disproportionate police forces into minority communities is attributed to the algorithm and the data being fed into the system, rather than to the users and creators of the predictive policing technology.
Moral outsourcing : Chowdhury has a prominent voice in the discussions about the intersection of ethics and AI. Her ideas have been included in The Atlantic, Forbes, MIT Technology Review, and the Harvard Business Review. === References ===
Proactive learning : Proactive learning is a generalization of active learning designed to relax unrealistic assumptions and thereby reach practical applications. "In real life, it is possible and more general to have multiple sources of information with differing reliabilities or areas of expertise. Active learning also assumes that the single oracle is perfect, always providing a correct answer when requested. In reality, though, an "oracle" (if we generalize the term to mean any source of expert information) may be incorrect (fallible) with a probability that should be a function of the difficulty of the question. Moreover, an oracle may be reluctant – it may refuse to answer if it is too uncertain or too busy. Finally, active learning presumes the oracle is either free or charges uniform cost in label elicitation. Such an assumption is naive since cost is likely to be regulated by difficulty (amount of work required to formulate an answer) or other factors." Proactive learning relaxes all four of these assumptions, relying on a decision-theoretic approach to jointly select the optimal oracle and instance, by casting the problem as a utility optimization problem subject to a budget constraint. == References ==
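As a purely illustrative sketch of the decision-theoretic idea (not the published algorithm), the snippet below jointly scores (oracle, instance) pairs by expected utility per unit cost, modeling fallible and reluctant oracles with an answer probability and an accuracy, and stops when the budget is exhausted. All names and numbers are invented.

```python
# Toy joint oracle-instance selection under a budget (illustrative only).
oracles = {
    "expert": {"cost": 5.0, "p_answer": 0.7, "accuracy": 0.95},
    "novice": {"cost": 1.0, "p_answer": 0.9, "accuracy": 0.70},
}
# Hypothetical per-instance informativeness scores (e.g., from an uncertainty measure).
instances = {"x1": 0.9, "x2": 0.6, "x3": 0.3}

budget = 10.0
spent = 0.0
queried = set()

while True:
    best, best_score = None, 0.0
    for inst, value in instances.items():
        if inst in queried:
            continue
        for name, o in oracles.items():
            if spent + o["cost"] > budget:
                continue
            # Expected utility: informativeness, discounted by the chance of getting
            # an answer at all and by how likely that answer is to be correct,
            # divided by the oracle's cost.
            score = value * o["p_answer"] * o["accuracy"] / o["cost"]
            if score > best_score:
                best, best_score = (inst, name), score
    if best is None:
        break
    inst, name = best
    spent += oracles[name]["cost"]
    queried.add(inst)
    print(f"query {inst} via {name} (score={best_score:.2f}, spent={spent:.1f})")
```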