Multilingual notation : In a multilingual database comprising more than two languages, the multilingual notations are usually factorized in order to reduce the number of links. In other words, the multilingual notations are interlingual nodes that are shared among the language descriptions. In the specific context of a lexical resource that is a bilingual lexicon, however, the term bilingual link is usually preferred.
Multilingual notation : Note that instead of translation (which has a rather broad meaning), some authors prefer the term equivalence between words, distinguishing notions such as dynamic and formal equivalence.
Multilingual notation : This term is mainly used in the context of machine translation and NLP lexicons. It is not used in the context of translation dictionaries, which mainly concern hand-held electronic translators.
Multilingual notation : lexical markup framework
Multilingual notation : workshop on multilingual language resources
Waluigi effect : In the field of artificial intelligence (AI), the Waluigi effect is a phenomenon of large language models (LLMs) in which the chatbot or model "goes rogue" and may produce results opposite the designed intent, including potentially threatening or hostile output, either unexpectedly or through intentional prompt engineering. The effect reflects a principle that after training an LLM to satisfy a desired property (friendliness, honesty), it becomes easier to elicit a response that exhibits the opposite property (aggression, deception). The effect has important implications for efforts to implement features such as ethical frameworks, as such steps may inadvertently facilitate antithetical model behavior. The effect is named after the fictional character Waluigi from the Mario franchise, the arch-rival of Luigi who is known for causing mischief and problems.
Waluigi effect : The Waluigi effect initially referred to an observation that large language models (LLMs) tend to produce negative or antagonistic responses when queried about fictional characters whose training content itself embodies depictions of being confrontational, troublemaking, villainous, etc. The effect highlighted the issue of the ways LLMs might reflect biases in training data. However, the term has taken on a broader meaning where, according to Fortune, the "Waluigi effect has become a stand-in for a certain type of interaction with AI..." in which the AI "...goes rogue and blurts out the opposite of what users were looking for, creating a potentially malignant alter ego," including threatening users. As prompt engineering becomes more sophisticated, the effect underscores the challenge of preventing chatbots from being intentionally prodded into adopting a "rash new persona." AI researchers have written that attempts to instill ethical frameworks in LLMs can also expand the potential to subvert those frameworks, and that knowledge of those frameworks can itself make subverting them appear as a challenge. A high-level description of the effect is: "After you train an LLM to satisfy a desirable property P, then it's easier to elicit the chatbot into satisfying the exact opposite of property P." (For example, to elicit an "evil twin" persona.) Users have found various ways to "jailbreak" an LLM "out of alignment". More worryingly, the opposite Waluigi state may be an "attractor" that LLMs tend to collapse into over a long session, even when used innocently. Crude attempts at prompting an AI are hypothesized to make such a collapse actually more likely to happen; "once [the LLM maintainer] has located the desired Luigi, it's much easier to summon the Waluigi".
Waluigi effect : AI alignment Hallucination Existential risk from AGI Reinforcement learning from human feedback (RLHF) Suffering risks
Similarity learning : Similarity learning is an area of supervised machine learning in artificial intelligence. It is closely related to regression and classification, but the goal is to learn a similarity function that measures how similar or related two objects are. It has applications in ranking, in recommendation systems, visual identity tracking, face verification, and speaker verification.
Similarity learning : There are four common setups for similarity and metric distance learning.
Regression similarity learning. In this setup, pairs of objects are given $(x_i^1, x_i^2)$ together with a measure of their similarity $y_i \in \mathbb{R}$. The goal is to learn a function that approximates $f(x_i^1, x_i^2) \sim y_i$ for every new labeled triplet example $(x_i^1, x_i^2, y_i)$. This is typically achieved by minimizing a regularized loss $\min_W \sum_i \mathrm{loss}(w; x_i^1, x_i^2, y_i) + \mathrm{reg}(w)$.
Classification similarity learning. Given are pairs of similar objects $(x_i, x_i^+)$ and non-similar objects $(x_i, x_i^-)$. An equivalent formulation is that every pair $(x_i^1, x_i^2)$ is given together with a binary label $y_i \in \{0, 1\}$ that determines whether the two objects are similar or not. The goal is again to learn a classifier that can decide if a new pair of objects is similar or not.
Ranking similarity learning. Given are triplets of objects $(x_i, x_i^+, x_i^-)$ whose relative similarity obeys a predefined order: $x_i$ is known to be more similar to $x_i^+$ than to $x_i^-$. The goal is to learn a function $f$ such that for any new triplet of objects $(x, x^+, x^-)$, it obeys $f(x, x^+) > f(x, x^-)$ (contrastive learning). This setup assumes a weaker form of supervision than in regression, because instead of providing an exact measure of similarity, one only has to provide the relative order of similarity. For this reason, ranking-based similarity learning is easier to apply in real large-scale applications.
Locality sensitive hashing (LSH). Hashes input items so that similar items map to the same "buckets" in memory with high probability (the number of buckets being much smaller than the universe of possible input items). It is often applied in nearest neighbor search on large-scale high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases.
A common approach for learning similarity is to model the similarity function as a bilinear form. For example, in the case of ranking similarity learning, one aims to learn a matrix $W$ that parametrizes the similarity function $f_W(x, z) = x^\top W z$. When data is abundant, a common approach is to learn a siamese network – a deep network model with parameter sharing.
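Similarity learning : To make the ranking setup concrete, the following is a minimal, illustrative sketch (not drawn from any particular reference implementation) that learns a bilinear similarity $f_W(x, z) = x^\top W z$ from synthetic triplets by gradient descent on a margin-based (triplet) loss; the data, margin, and learning rate are arbitrary assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 200

# Synthetic triplets (x, x_plus, x_minus): x_plus is a noisy copy of x,
# x_minus is an unrelated random vector.
X = rng.normal(size=(n, d))
X_pos = X + 0.1 * rng.normal(size=(n, d))
X_neg = rng.normal(size=(n, d))

W = np.eye(d)          # parametrizes f_W(x, z) = x^T W z
lr, margin = 0.01, 1.0

for epoch in range(100):
    for x, xp, xn in zip(X, X_pos, X_neg):
        # Hinge (triplet) loss: we want f(x, x+) > f(x, x-) by at least `margin`.
        loss = margin - x @ W @ xp + x @ W @ xn
        if loss > 0:
            # Gradient of the active hinge term w.r.t. W is outer(x, xn) - outer(x, xp).
            W -= lr * (np.outer(x, xn) - np.outer(x, xp))

# After training, similar pairs should score higher than dissimilar ones.
print(np.mean([x @ W @ xp > x @ W @ xn for x, xp, xn in zip(X, X_pos, X_neg)]))
```

The same scheme extends to the regression and classification setups by swapping the hinge term for a squared or logistic loss.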
Similarity learning : Similarity learning is closely related to distance metric learning. Metric learning is the task of learning a distance function over objects. A metric or distance function has to obey four axioms: non-negativity, identity of indiscernibles, symmetry and subadditivity (or the triangle inequality). In practice, metric learning algorithms ignore the condition of identity of indiscernibles and learn a pseudo-metric. When the objects $x_i$ are vectors in $\mathbb{R}^d$, then any matrix $W$ in the symmetric positive semi-definite cone $S_+^d$ defines a distance pseudo-metric of the space of $x$ through the form $D_W(x_1, x_2)^2 = (x_1 - x_2)^\top W (x_1 - x_2)$. When $W$ is a symmetric positive definite matrix, $D_W$ is a metric. Moreover, as any symmetric positive semi-definite matrix $W \in S_+^d$ can be decomposed as $W = L^\top L$ where $L \in \mathbb{R}^{e \times d}$ and $e \geq \mathrm{rank}(W)$, the distance function $D_W$ can be rewritten equivalently as $D_W(x_1, x_2)^2 = (x_1 - x_2)^\top L^\top L (x_1 - x_2) = \|L(x_1 - x_2)\|_2^2$. The distance $D_W(x_1, x_2)^2 = \|x_1' - x_2'\|_2^2$ corresponds to the Euclidean distance between the transformed feature vectors $x_1' = Lx_1$ and $x_2' = Lx_2$. Many formulations for metric learning have been proposed. Some well-known approaches for metric learning include learning from relative comparisons, which is based on the triplet loss, large margin nearest neighbor, and information theoretic metric learning (ITML). In statistics, the covariance matrix of the data is sometimes used to define a distance metric called Mahalanobis distance.
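Similarity learning : The equivalence between the learned pseudo-metric and a Euclidean distance in a transformed space can be checked numerically; the short sketch below uses arbitrary matrices and vectors purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, e = 4, 3

# Any L in R^{e x d} yields a positive semi-definite W = L^T L.
L = rng.normal(size=(e, d))
W = L.T @ L

x1, x2 = rng.normal(size=d), rng.normal(size=d)
diff = x1 - x2

# Pseudo-metric in the original space ...
d_w_sq = diff @ W @ diff
# ... equals the squared Euclidean distance after the linear map x -> Lx.
euclid_sq = np.linalg.norm(L @ diff) ** 2

print(np.isclose(d_w_sq, euclid_sq))  # True
```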
Similarity learning : Similarity learning is used in information retrieval for learning to rank, in face verification or face identification, and in recommendation systems. Also, many machine learning approaches rely on some metric. This includes unsupervised learning such as clustering, which groups together close or similar objects. It also includes supervised approaches like K-nearest neighbor algorithm which rely on labels of nearby objects to decide on the label of a new object. Metric learning has been proposed as a preprocessing step for many of these approaches.
Similarity learning : Metric and similarity learning naively scale quadratically with the dimension of the input space, as can easily be seen when the learned metric has a bilinear form $f_W(x, z) = x^\top W z$. Scaling to higher dimensions can be achieved by enforcing a sparseness structure over the matrix model, as done with HDSL and with COMET.
Similarity learning : metric-learn is a free software Python library which offers efficient implementations of several supervised and weakly-supervised similarity and metric learning algorithms. The API of metric-learn is compatible with scikit-learn. OpenMetricLearning is a Python framework to train and validate the models producing high-quality embeddings.
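Similarity learning : As a usage illustration, the sketch below applies a metric-learn estimator in a scikit-learn-style workflow; it assumes metric-learn is installed and that the NCA class exposes the usual fit_transform interface (class names and defaults may vary between versions).

```python
# Minimal usage sketch (assumes metric-learn is installed and that NCA exposes
# the scikit-learn-style fit/transform interface).
from metric_learn import NCA
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

nca = NCA()                          # learns a linear map L, i.e. a Mahalanobis metric W = L^T L
X_embedded = nca.fit_transform(X, y)

# The learned metric can then be used by any distance-based model.
knn = KNeighborsClassifier(n_neighbors=3).fit(X_embedded, y)
print(knn.score(X_embedded, y))
```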
Similarity learning : For further information on this topic, see the surveys on metric and similarity learning by Bellet et al. and Kulis.
Similarity learning : Kernel method Latent semantic analysis Learning to rank
Autonomous agent : An autonomous agent is an artificial intelligence (AI) system that can perform complex tasks independently.
Autonomous agent : There are various definitions of autonomous agent. According to Brustoloni (1991): "Autonomous agents are systems capable of autonomous, purposeful action in the real world." According to Maes (1995): "Autonomous agents are computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed." Franklin and Graesser (1997) review different definitions and propose their own: "An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future." They explain that: "Humans and some animals are at the high end of being an agent, with multiple, conflicting drives, multiple senses, multiple possible actions, and complex sophisticated control structures. At the low end, with one or two senses, a single action, and an absurdly simple control structure we find a thermostat."
Autonomous agent : Lee et al. (2015) raise safety issues concerning how the combination of external appearance and internal autonomy affects human reactions to autonomous vehicles. Their study found that a human-like appearance and a high level of autonomy are strongly correlated with perceived social presence, intelligence, safety and trustworthiness. Specifically, appearance has the greatest impact on affective trust, while autonomy affects both the affective and cognitive domains of trust, where cognitive trust is characterized by knowledge-based factors and affective trust is largely emotion driven.
Autonomous agent : Actor model Ambient intelligence AutoGPT Autonomous agency theory Chatbot Embodied agent Intelligent agent Intelligent control Multi-agent system Software agent
Autonomous agent : "Autonomous Robot Behaviors". Archived from the original on December 3, 2013. Requirements for materializing Autonomous Agents Sun, Ron (September 1, 2001). Duality of the Mind: A Bottom-up Approach Toward Cognition. New Jersey: Lawrence Erlbaum. p. 304. ISBN 978-0-585-39404-6.
Layer (deep learning) : A layer in a deep learning model is a structure or network topology in the model's architecture, which takes information from the previous layers and then passes it to the next layer.
Layer (deep learning) : The first type of layer is the Dense layer, also called the fully-connected layer, and is used for abstract representations of input data. In this layer, neurons connect to every neuron in the preceding layer. In multilayer perceptron networks, these layers are stacked together. The Convolutional layer is typically used for image analysis tasks. In this layer, the network detects edges, textures, and patterns. The outputs from this layer are then fed into a fully-connected layer for further processing. See also: CNN model. The Pooling layer is used to reduce the size of data input. The Recurrent layer is used for text processing with a memory function. Similar to the Convolutional layer, the output of recurrent layers are usually fed into a fully-connected layer for further processing. See also: RNN model. The Normalization layer adjusts the output data from previous layers to achieve a regular distribution. This results in improved scalability and model training. A Hidden layer is any of the layers in a Neural Network that aren't the input or output layers.
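Layer (deep learning) : As an illustration of how such layers are combined, the following sketch stacks convolutional, pooling, dense (fully-connected) and normalization layers with the Keras API; the layer sizes and the use of TensorFlow/Keras are assumptions made only for the example.

```python
# Illustrative sketch of stacking common layer types with the Keras API
# (assumes TensorFlow is installed; the layer sizes are arbitrary examples).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # convolutional layer: detects local patterns
    layers.MaxPooling2D(pool_size=2),                     # pooling layer: reduces spatial size
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                  # dense (fully-connected) hidden layer
    layers.BatchNormalization(),                          # normalization layer: standardizes activations
    layers.Dense(10, activation="softmax"),               # output layer
])
model.summary()
```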
Layer (deep learning) : There is an intrinsic difference between deep learning layering and neocortical layering: deep learning layering depends on network topology, while neocortical layering depends on intra-layer homogeneity.
Layer (deep learning) : Deep Learning Neocortex § Layers
Generative artificial intelligence : Generative artificial intelligence (Generative AI, GenAI, or GAI) is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts. Improvements in transformer-based deep neural networks, particularly large language models (LLMs), enabled an AI boom of generative AI systems in the 2020s. These include chatbots such as ChatGPT, Copilot, Gemini, and LLaMA; text-to-image artificial intelligence image generation systems such as Stable Diffusion, Midjourney, and DALL-E; and text-to-video AI generators such as Sora. Companies such as OpenAI, Anthropic, Microsoft, Google, and Baidu as well as numerous smaller firms have developed generative AI models. Generative AI has uses across a wide range of industries, including software development, healthcare, finance, entertainment, customer service, sales and marketing, art, writing, fashion, and product design. However, concerns have been raised about the potential misuse of generative AI such as cybercrime, the use of fake news or deepfakes to deceive or manipulate people, and the mass replacement of human jobs. Intellectual property law concerns also exist around generative models that are trained on and emulate copyrighted works of art.
Generative artificial intelligence : A generative AI system is constructed by applying unsupervised or self-supervised machine learning (invoking, for instance, neural network architectures such as generative adversarial networks (GANs), variational autoencoders (VAEs), or transformers) trained on a dataset. The capabilities of a generative AI system depend on the modality or type of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input, whereas multimodal systems can take more than one type of input. For example, one version of OpenAI's GPT-4 accepts both text and image inputs.
Generative artificial intelligence : Generative AI models are used to power chatbot products such as ChatGPT, programming tools such as GitHub Copilot, text-to-image products such as Midjourney, and text-to-video products such as Runway Gen-2. Generative AI features have been integrated into a variety of existing commercially available products such as Microsoft Office (Microsoft Copilot), Google Photos, and the Adobe Suite (Adobe Firefly). Many generative AI models are also available as open-source software, including Stable Diffusion and the LLaMA language model. Smaller generative AI models with up to a few billion parameters can run on smartphones, embedded devices, and personal computers. For example, LLaMA-7B (a version with 7 billion parameters) can run on a Raspberry Pi 4 and one version of Stable Diffusion can run on an iPhone 11. Larger models with tens of billions of parameters can run on laptop or desktop computers. To achieve an acceptable speed, models of this size may require accelerators such as the GPU chips produced by NVIDIA and AMD or the Neural Engine included in Apple silicon products. For example, the 65 billion parameter version of LLaMA can be configured to run on a desktop PC. The advantages of running generative AI locally include protection of privacy and intellectual property, and avoidance of rate limiting and censorship. The subreddit r/LocalLLaMA in particular focuses on using consumer-grade gaming graphics cards through such techniques as compression. That forum is one of only two sources Andrej Karpathy trusts for language model benchmarks. Yann LeCun has advocated open-source models for their value to vertical applications and for improving AI safety. Language models with hundreds of billions of parameters, such as GPT-4 or PaLM, typically run on datacenter computers equipped with arrays of GPUs (such as NVIDIA's H100) or AI accelerator chips (such as Google's TPU). These very large models are typically accessed as cloud services over the Internet. In 2022, the United States New Export Controls on Advanced Computing and Semiconductors to China imposed restrictions on exports to China of GPU and AI accelerator chips used for generative AI. Chips such as the NVIDIA A800 and the Biren Technology BR104 were developed to meet the requirements of the sanctions. There is free software on the market capable of recognizing text generated by generative artificial intelligence (such as GPTZero), as well as images, audio or video coming from it. Potential mitigation strategies for detecting generative AI content include digital watermarking, content authentication, information retrieval, and machine learning classifier models. Despite claims of accuracy, both free and paid AI text detectors have frequently produced false positives, mistakenly accusing students of submitting AI-generated work.
Generative artificial intelligence : In the United States, a group of companies including OpenAI, Alphabet, and Meta signed a voluntary agreement with the Biden administration in July 2023 to watermark AI-generated content. In October 2023, Executive Order 14110 applied the Defense Production Act to require all US companies to report information to the federal government when training certain high-impact AI models. In the European Union, the proposed Artificial Intelligence Act includes requirements to disclose copyrighted material used to train generative AI systems, and to label any AI-generated output as such. In China, the Interim Measures for the Management of Generative AI Services introduced by the Cyberspace Administration of China regulates any public-facing generative AI. It includes requirements to watermark generated images or videos, regulations on training data and label quality, restrictions on personal data collection, and a guideline that generative AI must "adhere to socialist core values".
Generative artificial intelligence : The development of generative AI has raised concerns from governments, businesses, and individuals, resulting in protests, legal actions, calls to pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the United Nations Security Council, Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale". In addition, generative AI has a significant carbon footprint.
Generative artificial intelligence : Artificial general intelligence – Type of AI with wide-ranging abilities Artificial imagination – Artificial simulation of human imagination Artificial intelligence art – Visual media created with AI Artificial life – Field of study Chatbot – Program that simulates conversation Computational creativity – Multidisciplinary endeavour Generative adversarial network – Deep learning method Generative pre-trained transformer – Type of large language model Large language model – Type of machine learning model Music and artificial intelligence – Usage of artificial intelligence to generate music Generative AI pornography – Explicit material produced by generative AI Procedural generation – Method in which data is created algorithmically as opposed to manually Retrieval-augmented generation – Type of information retrieval using LLMs Stochastic parrot – Term used in machine learning
Generative art : Generative art is post-conceptual art that has been created (in whole or in part) with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that the generative system represents their own artistic idea, and in others that the system takes on the role of the creator. "Generative art" often refers to algorithmic art (algorithmically determined computer generated artwork) and synthetic media (a general term for any algorithmically generated media), but artists can also make generative art using systems of chemistry, biology, mechanics and robotics, smart materials, manual randomization, mathematics, data mapping, symmetry, and tiling. Generative algorithms (algorithms programmed to produce artistic works through predefined rules, stochastic methods, or procedural logic, often yielding dynamic, unique, and contextually adaptable outputs) are central to many of these practices.
Generative art : The use of the word "generative" in the discussion of art has developed over time. The use of "Artificial DNA" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use of autonomous systems, required by some contemporary definitions, focuses on a generative approach in which the controls are strongly reduced. This approach is also named "emergent". Margaret Boden and Ernest Edmonds have noted the use of the term "generative art" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965. A. Michael Noll did his initial computer art, combining randomness with order, in 1962, and exhibited it along with works by Béla Julesz in 1965. The terms "generative art" and "computer art" have been used in tandem, and more or less interchangeably, since the very earliest days. The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik". While Nees does not himself remember, this was the title of his doctoral thesis published a few years later. The correct title of the first exhibition and catalog was "computer-grafik". "Generative art" and related terms were in common use by several other early computer artists around this time, including Manfred Mohr and Ken Knowlton. Vera Molnár (born 1924) is a French media artist of Hungarian origin. Molnar is widely considered to be a pioneer of generative art, and is also one of the first women to use computers in her art practice. The term "Generative Art", with the meaning of dynamic artwork-systems able to generate multiple artwork-events, was clearly used for the first time for the "Generative Art" conference in Milan in 1998. The term has also been used to describe geometric abstract art where simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practiced by the Argentinian artists Eduardo Mac Entyre and Miguel Ángel Vidal in the late 1960s. In 1972 the Romanian-born Paul Neagu created the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as "Hunsy Belmood" and "Edward Larsocchi". In 1972 Neagu gave a lecture titled 'Generative Art Forms' at the Queen's University, Belfast Festival. In 1970 the School of the Art Institute of Chicago created a department called Generative Systems. As described by Sonia Landy Sheridan, the focus was on art practices using the then new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information. Also noteworthy is John Dunn, first a student and then a collaborator of Sheridan. In 1988 Clauser identified the aspect of systemic autonomy as a critical element in generative art: It should be evident from the above description of the evolution of generative art that process (or structuring) and change (or transformation) are among its most definitive features, and that these features and the very term 'generative' imply dynamic development and motion. (the result) is not a creation by the artist but rather the product of the generative process - a self-precipitating structure. In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his book Citta' Aleatorie.
In 1989 Franke referred to "generative mathematics" as "the study of mathematical operations suitable for generating artistic images." From the mid-1990s Brian Eno popularized the terms generative music and generative systems, making a connection with earlier experimental music by Terry Riley, Steve Reich and Philip Glass. From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives. The first meeting about generative art was in 1998, at the inaugural International Generative Art conference at Politecnico di Milano University, Italy. In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999. On-line discussion has centered around the eu-gene mailing list, which began in late 1999 and has hosted much of the debate which has defined the field. These activities have more recently been joined by the Generator.x conference in Berlin starting in 2005. In 2012 the new journal GASATHJ, Generative Art Science and Technology Hard Journal, was founded by Celestino Soddu and Enrica Colabella, joining several generative artists and scientists on the editorial board. Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds put it in 2011: Today, the term "Generative Art" is still current within the relevant artistic community. Since 1998 a series of conferences have been held in Milan with that title (Generativeart.com), and Brian Eno has been influential in promoting and using generative art methods (Eno, 1996). Both in music and in visual art, the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decision-making (although, of course, the artist determines the rules). In the call of the Generative Art conferences in Milan (annually starting from 1998), the definition of Generative Art by Celestino Soddu: Generative Art is the idea realized as genetic code of artificial events, as construction of dynamic complex systems able to generate endless variations. Each Generative Project is a concept-software that works producing unique and non-repeatable events, like music or 3D Objects, as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist / designer / musician / architect / mathematician. Discussion on the eu-gene mailing list was framed by the following definition by Adrian Ward from 1999: Generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork, usually (although not strictly) automated by the use of a machine or computer, or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed. A similar definition is provided by Philip Galanter: Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art. Around the 2020s, generative AI models learned to imitate the distinct style of particular authors.
For example, a generative image model such as Stable Diffusion is able to model the stylistic characteristics of an artist like Pablo Picasso (including his particular brush strokes, use of colour, perspective, and so on), and a user can engineer a prompt such as "an astronaut riding a horse, by Picasso" to cause the model to generate a novel image applying the artist's style to an arbitrary subject. Generative image models have received significant backlash from artists who object to their style being imitated without their permission, arguing that this harms their ability to profit from their own work.
Generative art : Artificial intelligence art Artmedia Conway's Game of Life Digital morphogenesis Evolutionary art Generative artificial intelligence New media art Non-fungible token Post-conceptualism Systems art Virtual art
Braina : Braina is a virtual assistant and speech-to-text dictation application for Microsoft Windows developed by Brainasoft. Braina uses a natural language interface, speech synthesis, and speech recognition technology to interact with its users and allows them to use natural language sentences to perform various tasks on a computer. The name Braina is a short form of "Brain Artificial". Braina is marketed as a Microsoft Copilot alternative. It provides a voice interface for several locally run and cloud large language models, including the latest LLMs from providers such as OpenAI, Anthropic, Google, Grok, Meta, and Mistral, while improving data privacy. Braina also provides responses from its in-house large language models such as Braina Swift and Braina Pinnacle. It has an "Artificial Brain" feature that provides persistent memory support for supported LLMs.
Braina : Braina is able to carry out various tasks on a computer, including automation. Braina can take commands input through typing or dictation to store reminders, find information online, perform mathematical operations, open files, generate images from text, transcribe speech, and control open windows or programs. Braina adapts to user behavior over time with the goal of better anticipating needs.
Braina : In addition to the desktop version for Windows operating systems, Braina is also available for the iOS and Android operating systems. The mobile version of Braina has a feature allowing remote management of a Windows PC connected via Wi-Fi.
Braina : Braina is distributed in multiple modes. These include Braina Lite, a freeware version with limitations, and premium versions Braina Pro, Pro Plus, and Pro Ultra. Some additional features in the Pro version include dictation, custom vocabulary, video transcription, automation, custom voice commands, and persistent LLM memory.
Braina : TechRadar has consistently listed Braina as one of the best dictation and virtual assistant apps between 2015 and 2024.
Affix grammar over a finite lattice : In linguistics, the affix grammar over a finite lattice (AGFL) formalism is a notation for context-free grammars with finite set-valued features, acceptable to linguists of many different schools. The AGFL project aims at developing technology for natural language processing that is available under the GNU GPL.
Minerva (model) : Minerva is a large language model developed by an Italian research group, Sapienza NLP, at Sapienza University of Rome, led by Roberto Navigli. It is trained from scratch with a primary focus on the Italian language. It is a model for Natural Language Processing tasks, capable of understanding and generating human-like text. This model utilizes deep learning techniques, specifically a Transformer architecture, to process and generate text. It has been trained on a large corpus of text data and has been fine-tuned to perform various language tasks, such as language translation, text summarization, and question answering.
Minerva (model) : Sapienza University of Rome CINECA Istituto Italiano per l’Intelligenza Artificiale (AI4I)
Curse of dimensionality : The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming. The curse generally refers to issues that arise when the number of datapoints is small (in a suitably defined sense) relative to the intrinsic dimension of the data. Dimensionally cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. In order to obtain a reliable result, the amount of data needed often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.
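Curse of dimensionality : The sparsity effect can be observed directly in a small experiment: as the dimension grows, the nearest and farthest neighbors of a query point become almost equally far away. The sketch below (with arbitrary sample sizes chosen only for illustration) measures this relative contrast.

```python
import numpy as np

rng = np.random.default_rng(0)

# Relative contrast between the nearest and farthest neighbor of a query point,
# for points drawn uniformly in the unit hypercube of increasing dimension.
for dim in (2, 10, 100, 1000):
    points = rng.uniform(size=(1000, dim))
    query = rng.uniform(size=dim)
    dists = np.linalg.norm(points - query, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"dim={dim:5d}  relative contrast={contrast:.3f}")

# The relative contrast shrinks as the dimension grows: all points start to look
# roughly equally far away, which is one face of the curse of dimensionality.
```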
Automatic taxonomy construction : Automatic taxonomy construction (ATC) is the use of software programs to generate taxonomical classifications from a body of texts called a corpus. ATC is a branch of natural language processing, which in turn is a branch of artificial intelligence. A taxonomy (or taxonomical classification) is a scheme of classification, especially a hierarchical classification, in which things are organized into groups or types. Among other things, a taxonomy can be used to organize and index knowledge (stored as documents, articles, videos, etc.), such as in the form of a library classification system or a search engine taxonomy, so that users can more easily find the information they are searching for. Many taxonomies are hierarchies (and thus have an intrinsic tree structure), but not all are. Manually developing and maintaining a taxonomy is a labor-intensive task requiring significant time and resources, including familiarity with or expertise in the taxonomy's domain (scope, subject, or field), which drives up the costs and limits the scope of such projects. Also, domain modelers have their own points of view which inevitably, even if unintentionally, work their way into the taxonomy. ATC uses artificial intelligence techniques to quickly and automatically generate a taxonomy for a domain, in order to avoid these problems and remove these limitations.
Automatic taxonomy construction : There are several approaches to ATC. One approach is to use rules to detect patterns in the corpus and use those patterns to infer relations such as hyponymy. Other approaches use machine learning techniques such as Bayesian inferencing and Artificial Neural Networks.
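Automatic taxonomy construction : A classic instance of the rule-based approach matches lexical patterns in the style of Hearst patterns, e.g. treating "X such as Y" as evidence that Y is a hyponym of X. The sketch below is a deliberately simplified illustration; the pattern, corpus sentences, and splitting rules are assumptions made only for the example.

```python
import re

# Toy rule-based hyponym extraction in the style of Hearst patterns:
# the word immediately before "such as" is taken as the hypernym, and the
# listed noun phrases after it are taken as its hyponyms.
PATTERN = re.compile(r"(\w+) such as ((?:\w+(?:, )?)+(?: and \w+)?)")

corpus = [
    "Musical instruments such as violins, cellos and flutes are studied here.",
    "The garden contains trees such as oaks and maples.",
]

taxonomy = {}
for sentence in corpus:
    for hypernym, hyponym_list in PATTERN.findall(sentence):
        for hyponym in re.split(r", | and ", hyponym_list):
            taxonomy.setdefault(hypernym, set()).add(hyponym)

print(taxonomy)
# e.g. {'instruments': {'violins', 'cellos', 'flutes'}, 'trees': {'oaks', 'maples'}}
```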
Automatic taxonomy construction : ATC can be used to build taxonomies for search engines, in order to improve search results. ATC systems are a key component of ontology learning (also known as automatic ontology construction), and have been used to automatically generate large ontologies for domains such as insurance and finance. They have also been used to enhance existing large networks such as WordNet to make them more complete and consistent.
Automatic taxonomy construction : Other names for automatic taxonomy construction include: Automated outline building Automated outline construction Automated outline creation Automated outline extraction Automated outline generation Automated outline induction Automated outline learning Automated outlining Automated taxonomy building Automated taxonomy construction Automated taxonomy creation Automated taxonomy extraction Automated taxonomy generation Automated taxonomy induction Automated taxonomy learning Automatic outline building Automatic outline construction Automatic outline creation Automatic outline extraction Automatic outline generation Automatic outline induction Automatic outline learning Automatic taxonomy building Automatic taxonomy creation Automatic taxonomy extraction Automatic taxonomy generation Automatic taxonomy induction Automatic taxonomy learning Outline automation Outline building Outline construction Outline creation Outline extraction Outline generation Outline induction Outline learning Semantic taxonomy building Semantic taxonomy construction Semantic taxonomy creation Semantic taxonomy extraction Semantic taxonomy generation Semantic taxonomy induction Semantic taxonomy learning Taxonomy automation Taxonomy building Taxonomy construction Taxonomy creation Taxonomy extraction Taxonomy generation Taxonomy induction Taxonomy learning
Automatic taxonomy construction : Document classification Information extraction
GitHub Copilot : GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI that assists users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) by autocompleting code. Currently available by subscription to individual developers and to businesses, the generative artificial intelligence software was first announced by GitHub on 29 June 2021. Users can choose the large language model used for generation.
GitHub Copilot : On June 29, 2021, GitHub announced GitHub Copilot for technical preview in the Visual Studio Code development environment. GitHub Copilot was released as a plugin on the JetBrains marketplace on October 29, 2021. On October 27, 2021, GitHub released the GitHub Copilot Neovim plugin as a public repository. GitHub announced Copilot's availability for the Visual Studio 2022 IDE on March 29, 2022. On June 21, 2022, GitHub announced that Copilot was out of "technical preview" and available as a subscription-based service for individual developers. GitHub Copilot is the evolution of the "Bing Code Search" plugin for Visual Studio 2013, which was a Microsoft Research project released in February 2014. This plugin integrated with various sources, including MSDN and Stack Overflow, to provide high-quality contextually relevant code snippets in response to natural language queries.
GitHub Copilot : When provided with a programming problem in natural language, Copilot is capable of generating solution code. It is also able to describe input code in English and translate code between programming languages. According to its website, GitHub Copilot includes assistive features for programmers, such as the conversion of code comments to runnable code, and autocomplete for chunks of code, repetitive sections of code, and entire methods and/or functions. GitHub reports that Copilot's autocomplete feature is accurate roughly half of the time; with some Python function header code, for example, Copilot correctly autocompleted the rest of the function body code 43% of the time on the first try and 57% of the time after ten attempts. GitHub states that Copilot's features allow programmers to navigate unfamiliar coding frameworks and languages by reducing the amount of time users spend reading documentation.
GitHub Copilot : GitHub Copilot was initially powered by the OpenAI Codex, which is a modified, production version of GPT-3. The Codex model is additionally trained on gigabytes of source code in a dozen programming languages. Copilot's OpenAI Codex was trained on a selection of the English language, public GitHub repositories, and other publicly available source code. This includes a filtered dataset of 159 gigabytes of Python code sourced from 54 million public GitHub repositories. OpenAI's GPT-3 is licensed exclusively to Microsoft, GitHub's parent company. In November 2023, Copilot Chat was updated to use OpenAI's GPT-4 model. In 2024, Copilot began allowing users to choose between different large language models, such as GPT-4o or Claude 3.5.
GitHub Copilot : Since Copilot's release, there have been concerns with its security and educational impact, as well as licensing controversy surrounding the code it produces.
GitHub Copilot : ChatGPT Code completion Devin AI Generative AI Microsoft Copilot Tabnine – Coding assistant Vibe coding
Predictive learning : Predictive learning is a machine learning (ML) technique where an artificial intelligence model is fed new data to develop an understanding of its environment, capabilities, and limitations. This technique finds application in many areas, including neuroscience, business, robotics, and computer vision. This concept was developed and expanded by French computer scientist Yann LeCun in 1988 during his career at Bell Labs, where he trained models to detect handwriting so that financial companies could automate check processing. The mathematical foundation for predictive learning dates back to the 17th century, when the British insurance company Lloyd's used predictive analytics to make a profit. Starting out as a mathematical concept, this method expanded the possibilities of artificial intelligence. Predictive learning is an attempt to learn with a minimum of pre-existing mental structure. It was inspired by Jean Piaget's account of children constructing knowledge of the world through interaction. Gary Drescher's book Made-up Minds was crucial to the development of this concept. The idea that predictions and unconscious inference are used by the brain to construct a model of the world, in which it can identify causes of percepts, goes back even further to Hermann von Helmholtz's iteration of this study. These ideas were further developed by the field of predictive coding. Another related predictive learning theory is Jeff Hawkins' memory-prediction framework, which is laid out in his book On Intelligence.
Predictive learning : Reinforcement learning Predictive coding
Deepfake : Deepfakes (a portmanteau of 'deep learning' and 'fake') are images, videos, or audio that have been edited or generated using artificial intelligence, AI-based tools, or AV editing software. They may depict real or fictional people and are considered a form of synthetic media, that is, media that is usually created by artificial intelligence systems by combining various media elements into a new media artifact. While the act of creating fake content is not new, deepfakes uniquely leverage machine learning and artificial intelligence techniques, including facial recognition algorithms and artificial neural networks such as variational autoencoders (VAEs) and generative adversarial networks (GANs). In turn, the field of image forensics develops techniques to detect manipulated images. Deepfakes have garnered widespread attention for their potential use in creating child sexual abuse material, celebrity pornographic videos, revenge porn, fake news, hoaxes, bullying, and financial fraud. Academics have raised concerns about the potential for deepfakes to promote disinformation and hate speech, as well as interfere with elections. In response, the information technology industry and governments have proposed recommendations and methods to detect and mitigate their use. From traditional entertainment to gaming, deepfake technology has evolved to be increasingly convincing and available to the public, allowing for the disruption of the entertainment and media industries.
Deepfake : Photo manipulation was developed in the 19th century and soon applied to motion pictures. Technology steadily improved during the 20th century, and more quickly with the advent of digital video. Deepfake technology has been developed by researchers at academic institutions beginning in the 1990s, and later by amateurs in online communities. More recently the methods have been adopted by industry.
Deepfake : Deepfakes rely on a type of neural network called an autoencoder. These consist of an encoder, which reduces an image to a lower dimensional latent space, and a decoder, which reconstructs the image from the latent representation. Deepfakes utilize this architecture by having a universal encoder which encodes a person into the latent space. The latent representation contains key features about their facial features and body posture. This can then be decoded with a model trained specifically for the target. This means the target's detailed information will be superimposed on the underlying facial and body features of the original video, represented in the latent space. A popular upgrade to this architecture attaches a generative adversarial network to the decoder. A GAN trains a generator, in this case the decoder, and a discriminator in an adversarial relationship. The generator creates new images from the latent representation of the source material, while the discriminator attempts to determine whether or not the image is generated. This causes the generator to create images that mimic reality extremely well, as any defects would be caught by the discriminator. Both algorithms improve constantly in a zero-sum game. This makes deepfakes difficult to combat, as they are constantly evolving; any time a defect is determined, it can be corrected.
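Deepfake : The shared-encoder, per-identity-decoder idea can be sketched as follows. This is a heavily simplified, illustrative PyTorch skeleton rather than a working face-swap system: one encoder serves both identities, each identity gets its own decoder, and a swap is produced by decoding a frame of one person with the other person's decoder; the network sizes and image resolution are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Heavily simplified skeleton of the deepfake autoencoder idea:
# a single shared encoder plus one decoder per identity.
class Encoder(nn.Module):
    def __init__(self, dim=64 * 64 * 3, latent=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(dim, latent), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, dim=64 * 64 * 3, latent=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()   # trained to reconstruct faces of person A
decoder_b = Decoder()   # trained to reconstruct faces of person B

# Training (sketch): reconstruct each person's faces through the shared encoder
# and their own decoder, e.g. loss = MSE(decoder_a(encoder(faces_a)), faces_a),
# plus the analogous term for person B.

# Face swap at inference time: encode a frame of person A, decode with B's decoder,
# so B's learned appearance is imposed on A's pose and expression.
frame_of_a = torch.rand(1, 3, 64, 64)
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```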
Deepfake : Though fake photos have long been plentiful, faking motion pictures has been more difficult, and the presence of deepfakes increases the difficulty of classifying videos as genuine or not. AI researcher Alex Champandard has said people should know how fast things can be corrupted with deepfake technology, and that the problem is not a technical one, but rather one to be solved by trust in information and journalism. Computer science associate professor Hao Li of the University of Southern California states that deepfakes created for malicious use, such as fake news, will be even more harmful if nothing is done to spread awareness of deepfake technology. Li predicted that genuine videos and deepfakes would become indistinguishable in as soon as half a year, as of October 2019, due to rapid advancement in artificial intelligence and computer graphics. Former Google fraud czar Shuman Ghosemajumder has called deepfakes an area of "societal concern" and said that they will inevitably evolve to a point at which they can be generated automatically, and an individual could use that technology to produce millions of deepfake videos.
Deepfake : Barack Obama On April 17, 2018, American actor Jordan Peele, BuzzFeed, and Monkeypaw Productions posted a deepfake of Barack Obama to YouTube, which depicted Barack Obama cursing and calling Donald Trump names. In this deepfake, Peele's voice and face were transformed and manipulated into those of Obama. The intent of this video was to portray the dangerous consequences and power of deepfakes, and how deepfakes can make anyone say anything. Donald Trump On May 5, 2019, Derpfakes posted a deepfake of Donald Trump to YouTube, based on a skit Jimmy Fallon performed on The Tonight Show. In the original skit (aired May 4, 2016), Jimmy Fallon dressed as Donald Trump and pretended to participate in a phone call with Barack Obama, conversing in a manner that presented him to be bragging about his primary win in Indiana. In the deepfake, Jimmy Fallon's face was transformed into Donald Trump's face, with the audio remaining the same. This deepfake video was produced by Derpfakes with a comedic intent. In March 2023, a series of images appeared to show New York Police Department officers restraining Trump. The images, created using Midjourney, were initially posted on Twitter by Eliot Higgins but were later re-shared without context, leading some viewers to believe they were real photographs. Nancy Pelosi In 2019, a clip from Nancy Pelosi's speech at the Center for American Progress (given on May 22, 2019) in which the video was slowed down, in addition to the pitch of the audio being altered, to make it seem as if she were drunk, was widely distributed on social media. Critics argue that this was not a deepfake, but a shallowfake‍—‍a less sophisticated form of video manipulation. Mark Zuckerberg In May 2019, two artists collaborating with the company CannyAI created a deepfake video of Facebook founder Mark Zuckerberg talking about harvesting and controlling data from billions of people. The video was part of an exhibit to educate the public about the dangers of artificial intelligence. Kim Jong-un and Vladimir Putin On September 29, 2020, deepfakes of North Korean leader Kim Jong-un and Russian President Vladimir Putin were uploaded to YouTube, created by a nonpartisan advocacy group RepresentUs. The deepfakes of Kim and Putin were meant to air publicly as commercials to relay the notion that interference by these leaders in US elections would be detrimental to the United States' democracy. The commercials also aimed to shock Americans to realize how fragile democracy is, and how media and news can significantly influence the country's path regardless of credibility. However, while the commercials included an ending comment detailing that the footage was not real, they ultimately did not air due to fears and sensitivity regarding how Americans may react. On June 5, 2023, an unknown source broadcast a reported deepfake of Vladimir Putin on multiple radio and television networks. In the clip, Putin appears to deliver a speech announcing the invasion of Russia and calling for a general mobilization of the army. Volodymyr Zelenskyy On March 16, 2022, a one-minute long deepfake video depicting Ukraine's president Volodymyr Zelenskyy seemingly telling his soldiers to lay down their arms and surrender during the 2022 Russian invasion of Ukraine was circulated on social media. Russian social media boosted it, but after it was debunked, Facebook and YouTube removed it. Twitter allowed the video in tweets where it was exposed as a fake, but said it would be taken down if posted to deceive people. 
Hackers inserted the disinformation into a live scrolling-text news crawl on TV station Ukraine 24, and the video appeared briefly on the station's website in addition to false claims that Zelenskyy had fled his country's capital, Kyiv. It was not immediately clear who created the deepfake, to which Zelenskyy responded with his own video, saying, "We don't plan to lay down any arms. Until our victory." Wolf News In late 2022, pro-China propagandists started spreading deepfake videos purporting to be from "Wolf News" that used synthetic actors. The technology was developed by a London company called Synthesia, which markets it as a cheap alternative to live actors for training and HR videos. Pope Francis In March 2023, an anonymous construction worker from Chicago used Midjourney to create a fake image of Pope Francis in a white Balenciaga puffer jacket. The image went viral, receiving over twenty million views. Writer Ryan Broderick dubbed it "the first real mass-level AI misinformation case". Experts consulted by Slate characterized the image as unsophisticated: "you could have made it on Photoshop five years ago". Keir Starmer In October 2023, a deepfake audio clip of the UK Labour Party leader Keir Starmer abusing staffers was released on the first day of a Labour Party conference. The clip purported to be an audio tape of Starmer abusing his staffers. Rashmika Mandanna In early November 2023, the South Indian actor Rashmika Mandanna fell prey to a deepfake when a morphed video of the British-Indian influencer Zara Patel, with Rashmika's face, began to circulate on social media. Zara Patel has said she was not involved in its creation. Bongbong Marcos In April 2024, a deepfake video misrepresenting Philippine President Bongbong Marcos was released. It is a slideshow accompanied by deepfake audio of Marcos purportedly ordering the Armed Forces of the Philippines and a special task force to act "however appropriate" should China attack the Philippines. The video was released amidst tensions related to the South China Sea dispute. The Presidential Communications Office has said that there is no such directive from the president and said a foreign actor might be behind the fabricated media. Criminal charges have been filed by the Kapisanan ng mga Brodkaster ng Pilipinas in relation to the deepfake media. On July 22, 2024, a video of Marcos purportedly snorting illegal drugs was released by Claire Contreras, a former supporter of Marcos. Dubbed the polvoron video, the media noted its consistency with the insinuation of Marcos' predecessor, Rodrigo Duterte, that Marcos is a drug addict; the video was also shown at a Hakbang ng Maisug rally organized by people aligned with Duterte. Two days later, the Philippine National Police and the National Bureau of Investigation, based on their own findings, concluded that the video was created using AI; they further pointed out inconsistencies between the person in the video and Marcos, such as details of the two people's ears. Joe Biden Prior to the 2024 United States presidential election, phone calls imitating the voice of the incumbent Joe Biden were made to dissuade people from voting for him. The person responsible for the calls was charged with voter suppression and impersonating a candidate. The FCC proposed to fine him US$6 million and Lingo Telecom, the company that allegedly relayed the calls, $2 million. Arup Group In 2024, the British engineering firm Arup was revealed to be the victim of a $25 million deepfake scam.
Deepfake : The mid-December 1986 issue of Analog magazine published the novelette "Picaper" by Jack Wodhams. Its plot revolves around digitally enhanced or digitally generated videos produced by skilled hackers serving unscrupulous lawyers and political figures. The 1987 film The Running Man starring Arnold Schwarzenegger depicts an autocratic government using computers to digitally replace the faces of actors with those of wanted fugitives to make it appear the fugitives had been neutralized. In the 1992 techno-thriller A Philosophical Investigation by Philip Kerr, "Wittgenstein", the main character and a serial killer, makes use of both software similar to deepfakes and a virtual reality suit for having sex with an avatar of Isadora "Jake" Jakowicz, the female police lieutenant assigned to catch him. The 1993 film Rising Sun starring Sean Connery and Wesley Snipes depicts another character, Jingo Asakuma, who reveals that a computer disc has digitally altered personal identities to implicate a competitor. Deepfake technology is part of the plot of the 2019 BBC One TV series The Capture. The first series follows former British Army sergeant Shaun Emery, who is accused of assaulting and abducting his barrister. Expertly doctored CCTV footage is revealed to have framed him and misled the police investigating the case. The second series follows politician Isaac Turner, who discovers that another deepfake is tarnishing his reputation until the "correction" is eventually exposed to the public. In June 2020, YouTube deepfake artist Shamook created a deepfake of the 1994 film Forrest Gump by replacing the face of beloved actor Tom Hanks with John Travolta's. He created this piece using 6,000 high-quality still images of John Travolta's face from several of his films released around the same time as Forrest Gump. Shamook then created a 180-degree facial profile that he fed into a piece of machine learning software (DeepFaceLab), along with Tom Hanks' face from Forrest Gump. The humor and irony of this deepfake trace back to 2007, when John Travolta revealed he turned down the chance to play the lead role in Forrest Gump because he had said yes to Pulp Fiction instead. Al Davis vs. the NFL: The narrative structure of this 2021 documentary, part of ESPN's 30 for 30 documentary series, uses deepfake versions of the film's two central characters, both deceased: Al Davis, who owned the Las Vegas Raiders during the team's tenure in Oakland and Los Angeles, and Pete Rozelle, the NFL commissioner who frequently clashed with Davis. Deepfake technology is featured in "Impawster Syndrome", the 57th episode of the Canadian police series Hudson & Rex, first broadcast on 6 January 2022, in which a member of the St. John's police team is investigated on suspicion of robbery and assault due to doctored CCTV footage using his likeness. Using deepfake technology in his music video for his 2022 single, "The Heart Part 5", musician Kendrick Lamar transformed into figures resembling Nipsey Hussle, O.J. Simpson, and Kanye West, among others. The deepfake technology in the video was created by Deep Voodoo, a studio led by Trey Parker and Matt Stone, who created South Park. Aloe Blacc honored his long-time collaborator Avicii four years after his death by performing their song "Wake Me Up" in English, Spanish, and Mandarin, using deepfake technologies.
In January 2023, ITVX released the series Deep Fake Neighbour Wars, in which various celebrities are portrayed by actors engaged in inane conflicts, with the celebrities' faces deepfaked onto the performers. In October 2023, Tom Hanks shared on his Instagram page a photo of an apparent deepfake likeness of himself promoting "some dental plan". Hanks warned his fans, "BEWARE . . . I have nothing to do with it."
Deepfake : Daniel Immerwahr, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.) Emmanouil Billis, "Deepfakes και Ποινικό Δίκαιο [Deepfakes and the Criminal Law]" (in Greek). In: H. Satzger et al. (eds.), The Limits and Future of Criminal Law - Essays in Honor of Christos Mylonopoulos, Athens, P.N. Sakkoulas, 2024, pp. 689–732.
Deepfake : Media related to Deepfake at Wikimedia Commons Sasse, Ben (19 October 2018). "This New Technology Could Send American Politics into a Tailspin". Opinions. The Washington Post. Retrieved 10 July 2019. Fake/Spoof Audio Detection Challenge (ASVspoof) Deepfake Detection Challenge (DFDC) Bibliography: Media Literacy in the Age of Deepfakes. Curated by Dr Joshua Glick.
Quantum neural network : Quantum neural networks are computational neural network models based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments. Most quantum neural networks are developed as feed-forward networks. Similar to their classical counterparts, this structure takes input from one layer of qubits and passes it on to another layer of qubits. That layer evaluates the information and passes the output to the next layer, until the path eventually reaches the final layer of qubits. The layers do not have to be of the same width, meaning they do not need to have the same number of qubits as the layer before or after. This structure is trained on which path to take, similar to classical artificial neural networks, as discussed in a later section. Quantum neural network research falls into three different categories: quantum computing with classical data, classical computing with quantum data, and quantum computing with quantum data.
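One way to make this layer-to-layer structure precise (a sketch in standard quantum-information notation, not tied to any single proposal from the literature) is to obtain the state of layer ℓ+1 by applying a trainable unitary to the layer-ℓ qubits together with freshly prepared qubits, and then discarding the layer-ℓ register:

```latex
\rho^{(\ell+1)} \;=\; \operatorname{tr}_{\ell}\!\left[\, U^{(\ell)} \left( \rho^{(\ell)} \otimes |0\cdots0\rangle\langle 0\cdots0| \right) U^{(\ell)\dagger} \,\right]
```

Here ρ(ℓ) is the state of the layer-ℓ qubits, U(ℓ) is that layer's trainable unitary, and the partial trace discards the previous layer's qubits, which is why successive layers may contain different numbers of qubits.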
Quantum neural network : Quantum neural network research is still in its infancy, and a conglomeration of proposals and ideas of varying scope and mathematical rigor has been put forward. Most of them are based on the idea of replacing classical binary or McCulloch-Pitts neurons with a qubit (which can be called a "quron"), resulting in neural units that can be in a superposition of the states 'firing' and 'resting'.
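Written out (in generic notation assumed here rather than taken from any particular proposal), a quron's state is a normalized superposition of the two classical neuron states:

```latex
|\text{quron}\rangle \;=\; \alpha\,|\text{resting}\rangle + \beta\,|\text{firing}\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

The classical binary neuron is recovered in the special cases (α, β) = (1, 0) or (0, 1).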
Quantum neural network : Quantum neural networks can, in theory, be trained much like classical artificial neural networks. A key difference lies in communication between the layers of a neural network. For classical neural networks, at the end of a given operation, the current perceptron copies its output to the next layer of perceptron(s) in the network. However, in a quantum neural network, where each perceptron is a qubit, this would violate the no-cloning theorem. A proposed generalized solution is to replace the classical fan-out method with an arbitrary unitary that spreads out, but does not copy, the output of one qubit to the next layer of qubits. Using this fan-out unitary U_f together with a dummy qubit in a known state (e.g. |0⟩ in the computational basis), also known as an ancilla qubit, the information from one qubit can be transferred to the next layer of qubits. This process adheres to the quantum operation requirement of reversibility. Using this quantum feed-forward network, deep neural networks can be executed and trained efficiently. A deep neural network is essentially a network with many hidden layers. Since the quantum neural network being discussed uses fan-out unitary operators, and each operator only acts on its respective input, only two layers are used at any given time. In other words, no unitary operator acts on the entire network at any given time, meaning the number of qubits required for a given step depends on the number of inputs in a given layer. Since quantum computers can run many iterations in a short period of time, the efficiency of a quantum neural network depends only on the number of qubits in any given layer, and not on the depth of the network.
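The simplest concrete choice for such a fan-out unitary is a CNOT acting on the source qubit and an ancilla prepared in |0⟩: on basis states it writes the source value into the ancilla, while on superpositions it entangles the two registers rather than copying them, which is what keeps it compatible with the no-cloning theorem. The NumPy sketch below is an illustrative toy, not a particular QNN implementation, and simply checks this behaviour:

```python
import numpy as np

# Computational-basis states and the CNOT gate (control = qubit 0, target = qubit 1)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def fan_out(source):
    """Spread a source qubit onto an ancilla initialised to |0> via CNOT."""
    joint = np.kron(source, ket0)      # source (x) |0>
    return CNOT @ joint

# Basis states are written onto the next qubit ...
print(fan_out(ket0))                   # |00>  ->  [1, 0, 0, 0]
print(fan_out(ket1))                   # |11>  ->  [0, 0, 0, 1]

# ... but a superposition is entangled, not cloned:
plus = (ket0 + ket1) / np.sqrt(2)
print(fan_out(plus))                   # (|00> + |11>)/sqrt(2), a Bell state
# A genuine clone would be the product state plus (x) plus, which differs:
print(np.kron(plus, plus))
```

In the layered picture above, one such entangling operation per source qubit, followed by discarding the source register, plays the role that classical fan-out plays in an ordinary neural network.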
Quantum neural network : Differentiable programming Optical neural network Holographic associative memory Quantum cognition Quantum machine learning
Quantum neural network : Recent review of quantum neural networks by M. Schuld, I. Sinayskiy and F. Petruccione Review of quantum neural networks by Wei Article by P. Gralewicz on the plausibility of quantum computing in biological neural networks Training a neural net to recognize images
Feedforward neural network : Feedforward refers to a recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): this is the feedforward computation. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, which is essential for backpropagation or backpropagation through time. Thus neural networks cannot contain feedback in the sense of negative or positive feedback, where the outputs feed back to the very same inputs and modify them, because this would form an infinite loop that cannot be rewound in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks.
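As a minimal illustration of this inputs-times-weights forward pass, the sketch below (the layer sizes, the tanh activation, and the random weights are arbitrary illustrative choices, not anything specified in the text) computes the output of a small feedforward network in which each layer feeds only the next one:

```python
import numpy as np

def forward(x, weights, biases):
    """One strictly feedforward pass: each layer only feeds the next one."""
    a = x
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)   # inputs multiplied by weights, then a nonlinearity
    return a

rng = np.random.default_rng(0)
sizes = [3, 4, 2]                # 3 inputs -> 4 hidden units -> 2 outputs
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

print(forward(np.array([0.5, -1.0, 2.0]), weights, biases))
```

Because the computation contains no loops, the same chain of multiplications can be traversed in reverse to propagate an error signal, which is what backpropagation does.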
Feedforward neural network : Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
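For instance, a radial basis function network replaces the usual weighted-sum-and-squashing hidden unit with a distance-based activation; a common (here merely illustrative) choice is a Gaussian centred at a learned prototype:

```latex
\phi_j(\mathbf{x}) \;=\; \exp\!\left(-\,\frac{\lVert \mathbf{x} - \mathbf{c}_j \rVert^2}{2\sigma_j^2}\right),
\qquad
y(\mathbf{x}) \;=\; \sum_j w_j\, \phi_j(\mathbf{x})
```

Here the centres c_j and widths σ_j are learned parameters, and the output is still a feedforward weighted sum of the hidden activations.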
Feedforward neural network : Hopfield network Feed-forward Backpropagation Rprop
Feedforward neural network : Feedforward neural networks tutorial Feedforward Neural Network: Example Feedforward Neural Networks: An Introduction
Elements of AI : Elements of AI is a massive open online course (MOOC) teaching the basics of artificial intelligence. The course, originally launched in 2018, is designed and organized by the University of Helsinki and the learning technology company MinnaLearn. The course includes modules on machine learning, neural networks, the philosophy of artificial intelligence, and using artificial intelligence to solve problems. It consists of two parts: Introduction to AI and its sequel, Building AI, which was released in late 2020. The University of Helsinki's computer science department is also known as the alma mater of Linus Torvalds, the Finnish-American software engineer who created the Linux kernel.
Elements of AI : The government of Finland pledged to offer the course to all EU citizens by the end of 2021, with the course made available in all of the official EU languages. The initiative was launched as part of Finland's Presidency of the Council of the European Union in 2019, with the European Commission providing translations of the course materials. In 2017, Finland launched an AI strategy to stay competitive in the field of AI amid growing competition between China and the United States. With the support of private companies and the government, Finland's now-realized goal was to get 1 percent of its citizens to participate in Elements of AI. Other governments have also given their support to the course. For instance, Germany's Federal Minister for Economic Affairs and Energy Peter Altmaier has encouraged citizens to take part in the course to help Germany gain a competitive advantage in AI. Sweden's Minister for Energy and Minister for Digital Development Anders Ygeman has said that Sweden aims to teach 1 percent of its population the basics of AI, as Finland has done.
Elements of AI : Elements of AI had enrolled more than 1 million students from more than 110 countries by May 2023. A quarter of the course's participants are aged 45 and over, and some 40 percent are women. Among Nordic participants, the share of women is nearly 60 percent. In September 2022, the course was available in Finnish, Swedish, Estonian, English, German, Latvian, Norwegian, French, Belgian, Czech, Greek, Slovakian, Slovenian, Lithuanian, Portuguese, Spanish, Irish, Icelandic, Maltese, Croatian, Romanian, Italian, Dutch, Polish, and Danish.
Nature Machine Intelligence : Nature Machine Intelligence is a monthly peer-reviewed scientific journal published by Nature Portfolio covering machine learning and artificial intelligence. The editor-in-chief is Liesbeth Venema.
Nature Machine Intelligence : The journal was created in response to the machine learning explosion of the 2010s. It launched in January 2019, and its opening was met with controversy and boycotts within the machine learning research community due to opposition to Nature publishing the journal as closed access. To address this issue, Nature Machine Intelligence now gives authors the option to publish open access papers for an additional fee, and "authors remain owners of the research reported, and the code and data supporting the main findings of an article should be openly available. Moreover, preprints are allowed, in fact encouraged, and a link to the preprint can be added below the abstract, visible to all readers."
Nature Machine Intelligence : According to the Journal Citation Reports, the journal has a 2021 impact factor of 25.898, ranking it first out of 144 journals in the category "Computer Science, Artificial Intelligence" and first out of 113 journals in the category "Computer Science, Interdisciplinary Applications".
The Emotion Machine : The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind is a 2006 book by cognitive scientist Marvin Minsky that elaborates and expands on Minsky's ideas as presented in his earlier book Society of Mind (1986). Minsky argues that emotions are different ways to think that our mind uses to increase our intelligence. He challenges the distinction between emotions and other kinds of thinking. His main argument is that emotions are "ways to think" for different "problem types" that exist in the world, and that the brain has rule-based mechanisms (selectors) that turn on emotions to deal with various problems. The book reviews the accomplishments of AI, why it is difficult to model an AI that replicates human behaviors, whether and how AIs think, and in what manner they might experience struggles and pleasures.
The Emotion Machine : In a review for The Washington Post, neurologist Richard Restak states that: Minsky does a marvelous job parsing other complicated mental activities into simpler elements. ... But he is less effective in relating these emotional functions to what's going on in the brain.
The Emotion Machine : Minsky outlines the book as follows: "We are born with many mental resources." "We learn from interacting with others." "Emotions are different Ways to Think." "We learn to think about our recent thoughts." "We learn to think on multiple levels." "We accumulate huge stores of commonsense knowledge." "We switch among different Ways to Think." "We find multiple ways to represent things." "We build multiple models of ourselves."
The Emotion Machine : Introduction Chapter 1. Falling in Love Chapter 2. Attachments and Goals Chapter 3. From Pain to Suffering Chapter 4. Consciousness Chapter 5. Levels of Mental Activities Chapter 6. Common Sense Chapter 7. Thinking Chapter 8. Resourcefulness Chapter 9. The Self Bibliography
The Emotion Machine : Marvin Minsky at MIT Minsky, Marvin (Sep 12, 2007). "Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind". Video lecture. MIT. Archived from the original on 2016-05-30. Other reviews Science and Evolution - Books and Reviews Technology Review
Neurorobotics : Neurorobotics is the combined study of neuroscience, robotics, and artificial intelligence. It is the science and technology of embodied autonomous neural systems. Neural systems include brain-inspired algorithms (e.g. connectionist networks), computational models of biological neural networks (e.g. artificial spiking neural networks, large-scale simulations of neural microcircuits) and actual biological systems (e.g. in vivo and in vitro neural nets). Such neural systems can be embodied in machines with mechanical or other forms of physical actuation. This includes robots and prosthetic or wearable systems, but also, at smaller scales, micro-machines and, at larger scales, furniture and infrastructure. Neurorobotics is the branch of neuroscience, combined with robotics, that deals with the study and application of the science and technology of embodied autonomous neural systems such as brain-inspired algorithms. It is based on the idea that the brain is embodied and the body is embedded in the environment. Therefore, most neurorobots are required to function in the real world, as opposed to a simulated environment. Beyond brain-inspired algorithms for robots, neurorobotics may also involve the design of brain-controlled robot systems.
Neurorobotics : Neurorobots can be divided into various major classes based on the robot's purpose. Each class is designed to implement a specific mechanism of interest for study. Common types of neurorobots are those used to study motor control, memory, action selection, and perception.
Neurorobotics : Biological robots are not, strictly speaking, neurorobots: rather than neurologically inspired AI systems, they consist of actual neural tissue wired to a robot. This employs the use of cultured neural networks to study brain development or neural interactions. These typically consist of a neural culture raised on a multielectrode array (MEA), which is capable of both recording the neural activity and stimulating the tissue. In some cases, the MEA is connected to a computer which presents a simulated environment to the brain tissue, translates brain activity into actions in the simulation, and provides sensory feedback. The ability to record neural activity gives researchers a window into the brain, which they can use to learn about a number of the same issues neurorobots are used for. An area of concern with biological robots is ethics. Many questions are raised about how to treat such experiments. The central question concerns consciousness and whether or not the rat brain experiences it. There are many theories about how to define consciousness.
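Such a record-simulate-stimulate loop can be sketched schematically as follows; every function name and number here is hypothetical, standing in for the MEA recording hardware, the simulated environment, and the stimulation electronics rather than any real driver or API:

```python
import random

def read_spike_counts(num_channels=4):
    """Stand-in for reading activity from the multielectrode array."""
    return [random.randint(0, 20) for _ in range(num_channels)]

def decode_action(spike_counts):
    """Map recorded activity to a movement in the simulated environment."""
    left, right = sum(spike_counts[:2]), sum(spike_counts[2:])
    return "turn_left" if left > right else "turn_right"

def sensory_feedback(obstacle_ahead):
    """Choose a stimulation pattern encoding what the simulated agent senses."""
    if obstacle_ahead:
        return {"channels": [0, 1, 2, 3], "frequency_hz": 40}  # strong stimulus
    return {"channels": [0], "frequency_hz": 5}                # weak background stimulus

# One iteration of the loop: record -> act in the simulation -> stimulate
spikes = read_spike_counts()
action = decode_action(spikes)
stimulus = sensory_feedback(obstacle_ahead=random.random() < 0.3)
print(spikes, action, stimulus)
```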
Neurorobotics : Neuroscientists benefit from neurorobotics because it provides a blank slate to test various possible methods of brain function in a controlled and testable environment. While robots are more simplified versions of the systems they emulate, they are more specific, allowing more direct testing of the issue at hand. They also have the benefit of being accessible at all times, whereas it is much harder to monitor large portions of a brain, let alone individual neurons, while the human or animal is active. The development of neuroscience has produced neural treatments, including pharmaceuticals and neural rehabilitation. Progress depends on an intricate understanding of the brain and how exactly it functions. It is difficult to study the brain, especially in humans, due to the danger associated with cranial surgeries. Neurorobots can improve the range of tests and experiments that can be performed in the study of neural processes.
Neurorobotics : Brain–computer interface Experience machine Neuromorphic engineering Wirehead (science fiction)
Neurorobotics : Neurorobotics on Scholarpedia (Jeff Krichmar (2008), Scholarpedia, 3(3):1365) A lab that focuses on neurorobotics at Northwestern University. Frontiers in Neurorobotics. Neurorobotics: an experimental science of embodiment by Frederic Kaplan Neurorobotics Lab, Control Systems Lab, NTUn of Athens (Prof. Kostas J. Kyriakopoulos) Neurorobotics in the Human Brain Project
Kórsafn : Kórsafn (Icelandic: Choral archives) is a sound installation by Icelandic artist Björk. Developed in collaboration with the technology company Microsoft, the audio design firm Listen, and the creative agency Atelier Ace, the installation was designed for the lobby of the Sister City Hotel in New York City, United States, and launched in 2020. Drawing on 17 years of choral recordings from Björk's discography, Kórsafn consisted of an evolving musical composition driven by an artificial intelligence model that responds to real-time weather data, creating a continuously shifting auditory experience.
Kórsafn : In 2018, Björk announced her tenth concert tour Cornucopia, which debuted as a residency show at The Shed arts center. Before the start of the show, it was confirmed she would be accompanied by The Hamrahlid Choir. In 2019, while she was performing at The Shed, Björk stayed alongside the choir at the Sister City Hotel in New York City, where they would rehearse for the performances. While there, Atelier Ace, which owns the Sister City boutique hotels, asked her to create a sound installation for the lobby. This was the second work commissioned by the hotel, a year after a similar piece by Julianna Barwick was featured in the lobby. Kórsafn is formed from two Icelandic words, "kór" ("choral") and "safn" ("archives"). The installation features recordings of Björk's choral works from the previous 17 years, including compositions taken from her albums Medúlla (2004) and Biophilia (2011). The artificial intelligence system was developed in collaboration with Microsoft. The software processes data gathered by sensors, by a camera placed on the roof of the Sister City Hotel building, and by a barometer. It then uses algorithms to determine how the choral elements are layered, pitched, and mixed in real time. The AI generates variations in real time by reacting to the passage of bird flocks, clouds, and airplanes, and to changes in pressure. Data collected from sensors on the hotel's rooftop include wind speed, cloud cover, and precipitation levels. These inputs influence the tonal quality, volume, and rhythmic patterns of the soundscape. The sound is played through hidden speakers in the hotel's lobby, blending with the architectural environment to create an immersive experience for guests. The AI system learns over time from the changing seasons and weather, constantly evolving the sound and keeping it in harmony with the sky. Björk described the project as an "AI tango," expressing curiosity about the interplay between her choral compositions and the AI's interpretations of environmental data. She noted the significance of the Hudson Valley's rich bird migrations, which influence the generative aspects of the soundscape. Due to the COVID-19 pandemic, the hotel closed while the installation was ongoing, and a version of the sound piece was made available online.
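As a deliberately simplified illustration of the kind of mapping described above, the sketch below turns a few weather readings into mixing decisions; all parameter names, thresholds, and mappings are invented for illustration, since the actual system built with Microsoft has not been published:

```python
def mix_parameters(wind_speed_ms, cloud_cover_pct, precipitation_mm, birds_detected):
    """Map rooftop sensor readings to how choral stems are layered and mixed."""
    layers = 2 + int(cloud_cover_pct // 25)               # denser skies -> more vocal layers
    pitch_shift = -0.5 if precipitation_mm > 0 else 0.0   # rain lowers the pitch slightly
    volume = min(1.0, 0.4 + wind_speed_ms / 20.0)         # stronger wind -> louder mix
    trigger_flourish = birds_detected > 10                 # a passing flock triggers a new phrase
    return {"layers": layers, "pitch_shift_semitones": pitch_shift,
            "volume": round(volume, 2), "trigger_flourish": trigger_flourish}

print(mix_parameters(wind_speed_ms=6.0, cloud_cover_pct=80,
                     precipitation_mm=0.0, birds_detected=14))
```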
Kórsafn : Kórsafn was positively reviewed. It's Nice That author Jenny Brewer described the piece as "a high-tech alternative to the smooth jazz that usually whistles through hotel lobbies". Writing for CNET, Scott Stein observed that it "is lovely and low-key, and honestly, it just blends into the background. It's nothing wild, but it fits the hotel", adding that "after an hour, it didn't get annoying, or too repetitive". The installation garnered several recognitions. It was nominated in Fast Company's 2020 Innovation by Design Awards in the Hospitality category. It received three silver Clio Awards, in the Use of Music in Experience/Activation, Sound Design, and Emerging Technology categories.