Outline of natural language processing : Information extraction (IE) – field concerned with the extraction of semantic information from text. This covers tasks such as named-entity recognition, coreference resolution, relationship extraction, etc. Ontology engineering – field that studies the methods and methodologies for building ontologies, which are formal representations of a set of concepts within a domain and the relationships between those concepts. Speech processing – field that covers speech recognition, text-to-speech and related tasks. Statistical natural-language processing – Statistical semantics – a subfield of computational semantics that establishes semantic relations between words to examine their contexts. Distributional semantics – a subfield of statistical semantics that examines the semantic relationships of words across corpora or in large samples of data.
Outline of natural language processing : Natural-language processing contributes to, and makes use of (the theories, tools, and methodologies from), the following fields: Automated reasoning – area of computer science and mathematical logic dedicated to understanding various aspects of reasoning, and producing software which allows computers to reason completely, or nearly completely, automatically. A sub-field of artificial intelligence, automatic reasoning is also grounded in theoretical computer science and philosophy of mind. Linguistics – scientific study of human language. Natural-language processing requires understanding of the structure and application of language, and therefore it draws heavily from linguistics. Applied linguistics – interdisciplinary field of study that identifies, investigates, and offers solutions to language-related real-life problems. Some of the academic fields related to applied linguistics are education, linguistics, psychology, computer science, anthropology, and sociology. Some of the subfields of applied linguistics relevant to natural-language processing are: Bilingualism / Multilingualism – Computer-mediated communication (CMC) – any communicative transaction that occurs through the use of two or more networked computers. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. Many recent studies involve Internet-based social networking supported by social software. Contrastive linguistics – practice-oriented linguistic approach that seeks to describe the differences and similarities between a pair of languages. Conversation analysis (CA) – approach to the study of social interaction, embracing both verbal and non-verbal conduct, in situations of everyday life. Turn-taking is one aspect of language use that is studied by CA. Discourse analysis – various approaches to analyzing written, vocal, or sign language use or any significant semiotic event. 
Forensic linguistics – application of linguistic knowledge, methods and insights to the forensic context of law, language, crime investigation, trial, and judicial procedure. Interlinguistics – study of improving communications between people of different first languages with the use of ethnic and auxiliary languages (lingua franca). For instance by use of intentional international auxiliary languages, such as Esperanto or Interlingua, or spontaneous interlanguages known as pidgin languages. Language assessment – assessment of first, second or other language in the school, college, or university context; assessment of language use in the workplace; and assessment of language in the immigration, citizenship, and asylum contexts. The assessment may include analyses of listening, speaking, reading, writing or cultural understanding, with respect to understanding how the language works theoretically and the ability to use the language practically. Language pedagogy – science and art of language education, including approaches and methods of language teaching and study. Natural-language processing is used in programs designed to teach language, including first- and second-language training. Language planning – Language policy – Lexicography – Literacies – Pragmatics – Second-language acquisition – Stylistics – Translation – Computational linguistics – interdisciplinary field dealing with the statistical or rule-based modeling of natural language from a computational perspective. The models and tools of computational linguistics are used extensively in the field of natural-language processing, and vice versa. Computational semantics – Corpus linguistics – study of language as expressed in samples (corpora) of "real world" text. Corpora is the plural of corpus, and a corpus is a specifically selected collection of texts (or speech segments) composed of natural language. 
After it is constructed (gathered or composed), a corpus is analyzed with the methods of computational linguistics to infer the meaning and context of its components (words, phrases, and sentences), and the relationships between them. Optionally, a corpus can be annotated ("tagged") with data (manually or automatically) to make the corpus easier to understand (e.g., part-of-speech tagging). This data is then applied to make sense of user input, for example, to make better (automated) guesses of what people are talking about or saying, perhaps to achieve more narrowly focused web searches, or for speech recognition. Metalinguistics – Sign linguistics – scientific study and analysis of natural sign languages, their features, their structure (phonology, morphology, syntax, and semantics), their acquisition (as a primary or secondary language), how they develop independently of other languages, their application in communication, their relationships to other languages (including spoken languages), and many other aspects. Human–computer interaction – the intersection of computer science and behavioral sciences, this field involves the study, planning, and design of the interaction between people (users) and computers. Attention to human-machine interaction is important, because poorly designed human-machine interfaces can lead to many unexpected problems. A classic example of this is the Three Mile Island accident where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster. Information retrieval (IR) – field concerned with storing, searching and retrieving information. It is a separate field within computer science (closer to databases), but IR relies on some NLP methods (for example, stemming). Some current research and applications seek to bridge the gap between IR and NLP. 
Knowledge representation (KR) – area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. Knowledge representation research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. Semantic network – study of semantic relations between concepts. Semantic Web – Machine learning – subfield of computer science that examines pattern recognition and computational learning theory in artificial intelligence. There are three broad approaches to machine learning. Supervised learning occurs when the machine is given example inputs and outputs by a teacher so that it can learn a rule that maps inputs to outputs. Unsupervised learning occurs when the machine determines the input's structure without being provided example inputs or outputs. Reinforcement learning occurs when a machine must achieve a goal without feedback from a teacher. Pattern recognition – branch of machine learning that examines how machines recognize regularities in data. As with machine learning, teachers can train machines to recognize patterns by providing them with example inputs and outputs (i.e. supervised learning), or the machines can recognize patterns without being trained on any example inputs or outputs (i.e. unsupervised learning). Statistical classification –
Outline of natural language processing : Anaphora – type of expression whose reference depends upon another referential element. E.g., in the sentence 'Sally preferred the company of herself', 'herself' is an anaphoric expression in that it is coreferential with 'Sally', the sentence's subject. Context-free language – Controlled natural language – a natural language with a restriction introduced on its grammar and vocabulary in order to eliminate ambiguity and complexity. Corpus – body of data, optionally tagged (for example, through part-of-speech tagging), providing real world samples for analysis and comparison. Text corpus – large and structured set of texts, nowadays usually electronically stored and processed. They are used to do statistical analysis and hypothesis testing, checking occurrences or validating linguistic rules within a specific subject (or domain). Speech corpus – database of speech audio files and text transcriptions. In speech technology, speech corpora are used, among other things, to create acoustic models (which can then be used with a speech recognition engine). In linguistics, spoken corpora are used to do research into phonetics, conversation analysis, dialectology and other fields. Grammar – Context-free grammar (CFG) – Constraint grammar (CG) – Definite clause grammar (DCG) – Functional unification grammar (FUG) – Generalized phrase structure grammar (GPSG) – Head-driven phrase structure grammar (HPSG) – Lexical functional grammar (LFG) – Probabilistic context-free grammar (PCFG) – another name for stochastic context-free grammar. Stochastic context-free grammar (SCFG) – Systemic functional grammar (SFG) – Tree-adjoining grammar (TAG) – Natural language – n-gram – sequence of n tokens, where a "token" is a character, syllable, or word. The n is replaced by a number. Therefore, a 5-gram is an n-gram of 5 letters, syllables, or words. "Eat this" is a 2-gram (also known as a bigram). Bigram – n-gram of 2 tokens. 
Every sequence of 2 adjacent elements in a string of tokens is a bigram. Bigrams are used in speech recognition and can be used to solve cryptograms, and bigram frequency is one approach to statistical language identification. Trigram – special case of the n-gram, where n is 3. Ontology – formal representation of a set of concepts within a domain and the relationships between those concepts. Taxonomy – practice and science of classification, including the principles underlying classification, and the methods of classifying things or concepts. Hyponymy and hypernymy – the linguistics of hyponyms and hypernyms. A hyponym shares a type-of relationship with its hypernym. For example, pigeon, crow, eagle and seagull are all hyponyms of bird (their hypernym); which, in turn, is a hyponym of animal. Taxonomy for search engines – typically called a "taxonomy of entities". It is a tree in which nodes are labelled with entities which are expected to occur in a web search query. These trees are used to match keywords from a search query with the keywords from relevant answers (or snippets). Textual entailment – directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from another text. In the TE framework, the entailing and entailed texts are termed text (t) and hypothesis (h), respectively. The relation is directional because even if "t entails h", the reverse "h entails t" is much less certain. Triphone – sequence of three phonemes. Triphones are useful in models of natural-language processing where they are used to establish the various contexts in which a phoneme can occur in a particular natural language.
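As a small illustration of the n-gram definitions above (the function and sample sentence here are my own, not from the source), adjacent-token n-grams can be extracted with a simple slice:

```python
def ngrams(tokens, n):
    """Return all n-grams: tuples of n consecutive tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

words = "eat this tasty sandwich".split()
bigrams = ngrams(words, 2)   # every pair of adjacent tokens
trigrams = ngrams(words, 3)  # the n = 3 special case
# bigrams -> [('eat', 'this'), ('this', 'tasty'), ('tasty', 'sandwich')]
```

The same function works whether the tokens are words, syllables, or characters; only the tokenization step changes.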
Outline of natural language processing : History of natural-language processing History of machine translation History of automated essay scoring History of natural-language user interface History of natural-language understanding History of optical character recognition History of question answering History of speech synthesis Turing test – test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of an actual human. In the original illustrative example, a human judge engages in a natural-language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Universal grammar – theory in linguistics, usually credited to Noam Chomsky, proposing that the ability to learn grammar is hard-wired into the brain. The theory suggests that linguistic ability manifests itself without being taught (see poverty of the stimulus), and that there are properties that all natural human languages share. It is a matter of observation and experimentation to determine precisely what abilities are innate and what properties are shared by all languages. ALPAC – was a committee of seven scientists led by John R. Pierce, established in 1964 by the U. S. Government in order to evaluate the progress in computational linguistics in general and machine translation in particular. Its report, issued in 1966, gained notoriety for being very skeptical of research done in machine translation so far, and emphasizing the need for basic research in computational linguistics; this eventually caused the U. S. 
Government to reduce its funding of the topic dramatically. Conceptual dependency theory – a model of natural-language understanding used in artificial intelligence systems. Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence. This model was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. Augmented transition network – type of graph-theoretic structure used in the operational definition of formal languages, used especially in parsing relatively complex natural languages, and having wide application in artificial intelligence. Introduced by William A. Woods in 1970. Distributed Language Translation (project) –
Outline of natural language processing : Sukhotin's algorithm – statistical classification algorithm for classifying characters in a text as vowels or consonants. It was initially created by Boris V. Sukhotin. T9 (predictive text) – stands for "Text on 9 keys"; it is a US-patented predictive text technology for mobile phones (specifically those that contain a 3x4 numeric keypad), originally developed by Tegic Communications, now part of Nuance Communications. Tatoeba – free collaborative online database of example sentences geared towards foreign-language learners. Teragram Corporation – fully owned subsidiary of SAS Institute, a major producer of statistical analysis software, headquartered in Cary, North Carolina, USA. Teragram is based in Cambridge, Massachusetts and specializes in the application of computational linguistics to multilingual natural-language processing. TipTop Technologies – company that developed TipTop Search, a real-time web, social search engine with a unique platform for semantic analysis of natural language. TipTop Search provides results capturing individual and group sentiment, opinions, and experiences from content of various sorts including real-time messages from Twitter or consumer product reviews on Amazon.com. Transderivational search – when a search is being conducted for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory. Vocabulary mismatch – common phenomenon in the usage of natural languages, occurring when different people name the same thing or concept differently. LRE Map – Reification (linguistics) – Semantic Web – Metadata – Spoken dialogue system – Affix grammar over a finite lattice – Aggregation (linguistics) – Bag-of-words model – model that represents a text as a bag (multiset) of its words that disregards grammar and word sequence, but maintains multiplicity. 
This model is commonly used to train document classifiers. Brill tagger – Cache language model – ChaSen, MeCab – provide morphological analysis and word splitting for Japanese Classic monolingual WSD – ClearForest – CMU Pronouncing Dictionary – also known as cmudict, is a public domain pronouncing dictionary designed for use in speech technology, and was created by Carnegie Mellon University (CMU). It defines a mapping from English words to their North American pronunciations, and is commonly used in speech processing applications such as the Festival Speech Synthesis System and the CMU Sphinx speech recognition system. Concept mining – Content determination – DATR – DBpedia Spotlight – Deep linguistic processing – Discourse relation – Document-term matrix – Dragomir R. Radev – ETBLAST – Filtered-popping recursive transition network – Robby Garner – GeneRIF – Gorn address – Grammar induction – Grammatik – Hashing-Trick – Hidden Markov model – Human language technology – Information extraction – International Conference on Language Resources and Evaluation – Kleene star – Language Computer Corporation – Language model – LanguageWare – Latent semantic mapping – Legal information retrieval – Lesk algorithm – Lessac Technologies – Lexalytics – Lexical choice – Lexical Markup Framework – Lexical substitution – LKB – Logic form – LRE Map – Machine translation software usability – MAREC – Maximum entropy – Message Understanding Conference – METEOR – Minimal recursion semantics – Morphological pattern – Multi-document summarization – Multilingual notation – Naive semantics – Natural language – Natural-language interface – Natural-language user interface – News analytics – Nondeterministic polynomial – Open domain question answering – Optimality theory – Paco Nathan – Phrase structure grammar – Powerset (company) – Production (computer science) – PropBank – Question answering – Realization (linguistics) – Recursive transition network – Referring expression generation – 
Rewrite rule – Semantic compression – Semantic neural network – SemEval – SPL notation – Stemming – reduces an inflected or derived word into its word stem, base, or root form. String kernel –
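The bag-of-words model listed above can be sketched in a few lines of Python using only the standard library (the sample sentence is illustrative, not from the source):

```python
from collections import Counter

def bag_of_words(text):
    """Represent a text as a multiset of its words: grammar and word
    order are discarded, but each word's multiplicity is kept."""
    return Counter(text.lower().split())

bow = bag_of_words("John likes movies and Mary likes movies")
# bow["likes"] == 2 and bow["movies"] == 2, while word order is lost
```

In document classification, such count vectors (one dimension per vocabulary word) typically serve as the feature representation fed to the classifier.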
Outline of natural language processing : Google Ngram Viewer – graphs n-gram usage from a corpus of more than 5.2 million books
Outline of natural language processing : AFNLP (Asian Federation of Natural Language Processing Associations) – the organization for coordinating the natural-language processing related activities and events in the Asia-Pacific region. Australasian Language Technology Association – Association for Computational Linguistics – international scientific and professional society for people working on problems involving natural-language processing.
Outline of natural language processing : Daniel Bobrow – Rollo Carpenter – creator of Jabberwacky and Cleverbot. Noam Chomsky – author of the seminal work Syntactic Structures, which revolutionized linguistics with 'universal grammar', a rule-based system of syntactic structures. Kenneth Colby – David Ferrucci – principal investigator of the team that created Watson, IBM's AI computer that won the quiz show Jeopardy! Lyn Frazier – Daniel Jurafsky – Professor of Linguistics and Computer Science at Stanford University. With James H. Martin, he wrote the textbook Speech and Language Processing: An Introduction to Natural Language Processing, Speech Recognition, and Computational Linguistics. Roger Schank – introduced the conceptual dependency theory for natural-language understanding. Jean E. Fox Tree – Alan Turing – originator of the Turing Test. Joseph Weizenbaum – author of the ELIZA chatterbot. Terry Winograd – professor of computer science at Stanford University, and co-director of the Stanford Human-Computer Interaction Group. He is known within the philosophy of mind and artificial intelligence fields for his work on natural language using the SHRDLU program. William Aaron Woods – Maurice Gross – author of the concept of local grammar, taking finite automata as the competence model of language. Stephen Wolfram – CEO and founder of Wolfram Research, creator of the Wolfram Language, a programming language with natural-language understanding, and of the natural-language processing computation engine Wolfram Alpha. Victor Yngve –
Predictive state representation : In computer science, a predictive state representation (PSR) is a way to model the state of a controlled dynamical system from a history of actions taken and the resulting observations. A PSR captures the state of a system as a vector of predictions for future tests (experiments) that can be done on the system. A test is a sequence of action-observation pairs, and its prediction is the probability of the test's observation sequence occurring if the test's action sequence were executed on the system. One advantage of using PSRs is that the predictions are directly related to observable quantities. This is in contrast to other models of dynamical systems, such as partially observable Markov decision processes (POMDPs), where the state of the system is represented as a probability distribution over unobserved nominal states.
Predictive state representation : Consider a dynamical system based on a discrete set A of actions and a discrete set O of observations. A history h is a sequence a_1 o_1 … a_ℓ o_ℓ, where a_1, …, a_ℓ are the actions taken by the agent, in that order, and o_1, …, o_ℓ are the observations made by the agent. Let us write P(a_1 o_1 … a_ℓ o_ℓ) for the probability of observing o_1, …, o_ℓ given that the actions taken are a_1, …, a_ℓ. We now want to characterize the hidden state reached after some history h. To do that, we introduce the notion of a test. A test t is of the same type as a history: it is a sequence of action-observation pairs. The idea is to use a set of tests {t_1, …, t_n} to fully characterize a hidden state. To do that, we first define the probability of a test t conditional on a history h: P(t | h) := P(ht) / P(h). We then define the prediction vector p(h) = [P(t_1 | h), …, P(t_n | h)]. We say that p(h) is a predictive state representation (PSR) if and only if it forms a sufficient statistic for the system. In other words, p(h) is a PSR if and only if for every possible test t there exists a function f_t such that, for all histories h, P(t | h) = f_t(p(h)). The functions f_t are called projection functions. We say that the PSR is linear when the function f_t is linear for every possible test t. The main theorem, proved by Littman, Sutton, and Singh (2002), is stated as follows. Theorem. Consider a finite POMDP with k states. Then there exists a linear PSR with a number of tests n no larger than k.
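A minimal numeric sketch of the prediction vector: the table of sequence probabilities below is hypothetical, invented purely to illustrate the definition P(t | h) = P(ht) / P(h), with "a1o1" standing for the action-observation pair (a_1, o_1).

```python
# Hypothetical probabilities P(sequence): the chance of seeing a sequence's
# observations when its actions are executed from the system's start.
P = {
    "": 1.0,
    "a1o1": 0.5,
    "a1o1a1o1": 0.3,
    "a1o1a2o1": 0.1,
}

def prediction(test, history):
    """P(t | h) = P(ht) / P(h): probability of the test's observations
    occurring if its actions were executed after history h."""
    return P[history + test] / P[history]

tests = ["a1o1", "a2o1"]
history = "a1o1"
p_h = [prediction(t, history) for t in tests]  # prediction vector p(h)
```

Here p(h) = [0.3/0.5, 0.1/0.5] = [0.6, 0.2]; a PSR maintains exactly such a vector (for a well-chosen test set) instead of a belief distribution over hidden states.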
Predictive state representation : Littman, Michael L.; Richard S. Sutton; Satinder Singh (2002). "Predictive Representations of State" (PDF). Advances in Neural Information Processing Systems 14 (NIPS). pp. 1555–1561. Singh, Satinder; Michael R. James; Matthew R. Rudary (2004). "Predictive State Representations: A New Theory for Modeling Dynamical Systems" (PDF). Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference (UAI). pp. 512–519. Wiewiora, Eric Walter (2008), Modeling Probability Distributions with Predictive State Representations (PDF)
LRE Map : The LRE Map (Language Resources and Evaluation Map) is a large, freely accessible database of resources dedicated to natural-language processing. A distinctive feature of the LRE Map is that its records are collected during the submission process of several major natural-language processing conferences. The records are then cleaned and gathered into a global database called the "LRE Map". The LRE Map is intended to be an instrument for collecting information about language resources and to become, at the same time, a community for users: a place to share and discover resources, discuss opinions, provide feedback, discover new trends, etc. It is an instrument for discovering, searching and documenting language resources, here intended in a broad sense as both data and tools. The large amount of information contained in the Map can be analyzed in many different ways. For instance, the LRE Map can provide information about the most frequent type of resource, the most represented language, the applications for which resources are used or are being developed, the proportion of new resources vs. already existing ones, or the way in which resources are distributed to the community.
LRE Map : Several institutions worldwide maintain catalogues of language resources (ELRA, LDC, NICT Universal Catalogue, ACL Data and Code Repository, OLAC, LT World, etc.) However, it has been estimated that only 10% of existing resources are known, either through distribution catalogues or via direct publicity by providers (web sites and the like). The rest remains hidden, the only occasions where it briefly emerges being when a resource is presented in the context of a research paper or report at some conference. Even in this case, nevertheless, it might be that a resource remains in the background simply because the focus of the research is not on the resource per se.
LRE Map : The LRE Map originated under the name "LREC Map" during the preparation of the LREC 2010 conference. More specifically, the idea was discussed within the FlaReNet project, and in collaboration with ELRA and the Institute of Computational Linguistics of CNR in Pisa, the Map was put in place at LREC 2010. The LREC organizers asked authors to provide some basic information about all the resources (in a broad sense, i.e. including tools, standards and evaluation packages), either used or created, described in their papers. All these descriptors were then gathered in a global matrix called the LREC Map. The same methodology and author requirements were then applied and extended to other conferences, namely COLING-2010, EMNLP-2010, RANLP-2011, LREC 2012, LREC 2014 and LREC 2016. After this generalization to other conferences, the LREC Map was renamed the LRE Map.
LRE Map : The size of the database increases over time. The data collected amount to 4776 entries. Each resource is described according to the following attributes: Resource type, e.g. lexicon, annotation tool, tagger/parser. Resource production status, e.g. newly created-finished, existing-updated. Resource availability, e.g. freely available, from data center. Resource modality, e.g. speech, written, sign language. Resource use, e.g. named entity recognition, language identification, machine translation. Resource language, e.g. English, 23 European Union languages, official languages of India.
LRE Map : The LRE Map is an important tool for charting the NLP field. Unlike studies based on subjective scoring, the LRE Map is built from recorded facts. The map has great potential for many uses, in addition to being an information-gathering tool: It is a valuable instrument for monitoring the evolution of the field (useful for funders), if applied in different contexts and times. It can be seen as a huge joint effort, the beginning of an even larger cooperative action, not just among a few leaders but among all researchers. It is also an "educational" means towards broad acknowledgment of the need for meta-research activities with the active involvement of many. It is also instrumental in introducing the new notion of "citation of resources", which could provide an award and a means of scholarly recognition for researchers engaged in resource creation. It is used to help organize the conferences of the field, such as LREC.
LRE Map : The data were then cleaned and sorted by Joseph Mariani (CNRS-LIMSI IMMI) and Gil Francopoulo (CNRS-LIMSI IMMI + Tagmatica) in order to compute the various matrices of the final FLaReNet reports. One of them, the matrix for written data at LREC 2010, shows that English is the most studied language, followed by French and German, and then Italian and Spanish.
LRE Map : The LRE Map has been extended to Language Resources and Evaluation Journal and other conferences.
LRE Map : LREC Map research page
Algorithmic probability : In algorithmic information theory, algorithmic probability, also known as Solomonoff probability, is a mathematical method of assigning a prior probability to a given observation. It was invented by Ray Solomonoff in the 1960s. It is used in inductive inference theory and analyses of algorithms. In his general theory of inductive inference, Solomonoff uses the method together with Bayes' rule to obtain probabilities of prediction for an algorithm's future outputs. In the mathematical formalism used, the observations have the form of finite binary strings viewed as outputs of Turing machines, and the universal prior is a probability distribution over the set of finite binary strings calculated from a probability distribution over programs (that is, inputs to a universal Turing machine). The prior is universal in the Turing-computability sense, i.e. no string has zero probability. It is not computable, but it can be approximated. Formally, P is not a true probability measure and it is not computable: it is only "lower semi-computable" and a "semi-measure". "Semi-measure" means that 0 ≤ Σ_x P(x) < 1; the "probability" does not actually sum to one, unlike an actual probability. This is because some inputs to the Turing machine cause it to never halt, and the probability mass allocated to those inputs is lost. "Lower semi-computable" means there is a Turing machine that, given an input string x, can print out a sequence y_1 < y_2 < ⋯ that converges to P(x) from below, but there is no Turing machine that does the same from above.
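As a toy illustration of why mass is lost, consider a miniature machine invented purely for this sketch (it is not a real universal machine): its prefix-free programs are n ones followed by a zero, programs with even n halt, and programs with odd n never do. Summing 2^(-length) over the halting programs then stays strictly below 1:

```python
def toy_semi_measure(max_n=40):
    """Sum 2**-len(p) over the halting programs of a toy machine whose
    prefix-free programs have the form '1' * n + '0'.  In this invented
    example the even-n programs halt and the odd-n programs loop forever,
    so the mass assigned to non-halting programs is lost and the total
    stays strictly below 1 (a semi-measure, not a probability measure)."""
    total = 0.0
    for n in range(max_n + 1):
        length = n + 1            # len('1' * n + '0')
        if n % 2 == 0:            # toy halting rule: even n halts
            total += 2.0 ** -length
    return total

total = toy_semi_measure()  # geometric series 1/2 + 1/8 + 1/32 + ... = 2/3
```

The missing 1/3 of mass sits on the non-halting (odd-n) programs, mirroring how the true universal prior loses mass to inputs on which the universal machine never halts.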
Algorithmic probability : Algorithmic probability is the main ingredient of Solomonoff's theory of inductive inference, the theory of prediction based on observations; it was invented with the goal of using it for machine learning; given a sequence of symbols, which one will come next? Solomonoff's theory provides an answer that is optimal in a certain sense, although it is incomputable. Four principal inspirations for Solomonoff's algorithmic probability were: Occam's razor, Epicurus' principle of multiple explanations, modern computing theory (e.g. use of a universal Turing machine) and Bayes’ rule for prediction. Occam's razor and Epicurus' principle are essentially two different non-mathematical approximations of the universal prior. Occam's razor: among the theories that are consistent with the observed phenomena, one should select the simplest theory. Epicurus' principle of multiple explanations: if more than one theory is consistent with the observations, keep all such theories. At the heart of the universal prior is an abstract model of a computer, such as a universal Turing machine. Any abstract computer will do, as long as it is Turing-complete, i.e. every computable function has at least one program that will compute its application on the abstract computer. The abstract computer is used to give precise meaning to the phrase "simple explanation". In the formalism used, explanations, or theories of phenomena, are computer programs that generate observation strings when run on the abstract computer. Each computer program is assigned a weight corresponding to its length. The universal probability distribution is the probability distribution on all possible output strings with random input, assigning for each finite output prefix q the sum of the probabilities of the programs that compute something starting with q. Thus, a simple explanation is a short computer program. A complex explanation is a long computer program. 
Simple explanations are more likely, so a high-probability observation string is one generated by a short computer program, or perhaps by any of a large number of slightly longer computer programs. A low-probability observation string is one that can only be generated by a long computer program. Algorithmic probability is closely related to the concept of Kolmogorov complexity. Kolmogorov's introduction of complexity was motivated by information theory and problems in randomness, while Solomonoff introduced algorithmic complexity for a different reason: inductive reasoning. A single universal prior probability that can be substituted for each actual prior probability in Bayes's rule was invented by Solomonoff with Kolmogorov complexity as a side product. It predicts the most likely continuation of that observation, and provides a measure of how likely this continuation will be. Solomonoff's enumerable measure is universal in a certain powerful sense, but the computation time can be infinite. One way of dealing with this issue is a variant of Leonid Levin's Search Algorithm, which limits the time spent computing the success of possible programs, with shorter programs given more time. When run for longer and longer periods of time, it will generate a sequence of approximations which converge to the universal probability distribution. Other methods of dealing with the issue include limiting the search space by including training sequences. Solomonoff proved this distribution to be machine-invariant within a constant factor (called the invariance theorem).
Algorithmic probability : Solomonoff invented the concept of algorithmic probability with its associated invariance theorem around 1960, publishing a report on it: "A Preliminary Report on a General Theory of Inductive Inference." He clarified these ideas more fully in 1964 with "A Formal Theory of Inductive Inference," Part I and Part II. In terms of practical implications and applications, the study of bias in empirical data related to algorithmic probability emerged in the early 2010s. The bias found led to methods that combined algorithmic probability with perturbation analysis in the context of causal analysis and non-differentiable machine learning.
Algorithmic probability : Sequential Decisions Based on Algorithmic Probability is a theoretical framework proposed by Marcus Hutter to unify algorithmic probability with decision theory. The framework provides a foundation for creating universally intelligent agents capable of optimal performance in any computable environment. It builds on Solomonoff’s theory of induction and incorporates elements of reinforcement learning, optimization, and sequential decision-making.
Algorithmic probability : Ray Solomonoff Andrey Kolmogorov Leonid Levin
Algorithmic probability : Solomonoff's theory of inductive inference Algorithmic information theory Bayesian inference Inductive inference Inductive probability Kolmogorov complexity Universal Turing machine Information-based complexity
Algorithmic probability : Li, M. and Vitanyi, P., An Introduction to Kolmogorov Complexity and Its Applications, 3rd Edition, Springer Science and Business Media, N.Y., 2008 Hutter, Marcus (2005). Universal artificial intelligence: sequential decisions based on algorithmic probability. Texts in theoretical computer science. Berlin Heidelberg: Springer. ISBN 978-3-540-22139-5.
Algorithmic probability : Rathmanner, S and Hutter, M., "A Philosophical Treatise of Universal Induction" in Entropy 2011, 13, 1076-1136: A very clear philosophical and mathematical analysis of Solomonoff's Theory of Inductive Inference
Algorithmic probability : Algorithmic Probability at Scholarpedia Solomonoff's publications
SqueezeNet : SqueezeNet is a deep neural network for image classification released in 2016. SqueezeNet was developed by researchers at DeepScale, the University of California, Berkeley, and Stanford University. In designing SqueezeNet, the authors' goal was to create a smaller neural network with fewer parameters while achieving competitive accuracy. Their best-performing model achieved the same accuracy as AlexNet on ImageNet classification with a model size roughly 510Γ— smaller.
SqueezeNet : SqueezeNet was originally released on February 22, 2016. This original version of SqueezeNet was implemented on top of the Caffe deep learning software framework. Shortly thereafter, the open-source research community ported SqueezeNet to a number of other deep learning frameworks. On February 26, 2016, Eddie Bell released a port of SqueezeNet for the Chainer deep learning framework. On March 2, 2016, Guo Haria released a port of SqueezeNet for the Apache MXNet framework. On June 3, 2016, Tammy Yang released a port of SqueezeNet for the Keras framework. In 2017, companies including Baidu, Xilinx, Imagination Technologies, and Synopsys demonstrated SqueezeNet running on low-power processing platforms such as smartphones, FPGAs, and custom processors. As of 2018, SqueezeNet ships "natively" as part of the source code of a number of deep learning frameworks such as PyTorch, Apache MXNet, and Apple CoreML. In addition, third party developers have created implementations of SqueezeNet that are compatible with frameworks such as TensorFlow. Below is a summary of frameworks that support SqueezeNet.
SqueezeNet : Some of the members of the original SqueezeNet team have continued to develop resource-efficient deep neural networks for a variety of applications. A few of these works are noted in the following table. As with the original SqueezeNet model, the open-source research community has ported and adapted these newer "squeeze"-family models for compatibility with multiple deep learning frameworks. In addition, the open-source research community has extended SqueezeNet to other applications, including semantic segmentation of images and style transfer.
SqueezeNet : Convolutional neural network MobileNet EfficientNet You Only Look Once Edge computing == References ==
MindsDB : MindsDB is an artificial intelligence company headquartered in California. It is focused on enabling developers to build AI capabilities that can reason, plan, and orchestrate over enterprise data.
MindsDB : MindsDB was founded in 2017 by Jorge Torres and Adam Carrigan. The idea was incubated in early 2018 at UC Berkeley's SkyDeck accelerator during its first funded batch; this led to the MindsDB open-source project, which started in August 2018. On April 16, 2020, MindsDB raised $3 million. Among the investors were OpenOcean, MMC, Rogue Capital, Zyper founder Amber Atherton, SCM Advisors, YCombinator, and Berkeley SkyDeck. The MindsDB core was rewritten from scratch during the company's time at Y Combinator in March 2020. In late 2020, MindsDB began offering paid premium services. On November 1, 2021, MindsDB announced an investment from Walden Catalyst Ventures, bringing the total pre-seed round to $7.6 million. Additionally, MindsDB announced partnerships with Snowflake, SingleStore, and DataStax (based on Apache Cassandra) to connect its machine learning platform to these databases. In February 2023, MindsDB announced integrations with Hugging Face and OpenAI that allow natural language processing and generative AI models to be used from its database via an API accessible with SQL requests. This integration enabled advanced text classification, sentiment analysis, emotion detection, translation, and more. Later in 2023, MindsDB raised a $46.5 million seed round from Benchmark, Mayfield and NVentures, and Chetan Puttagunta joined its board of directors.
MindsDB : Minds - an intelligent turnkey private AI system able to understand complex queries, retrieve relevant data from enterprise systems, deliver insights, and take actions. It goes beyond traditional models by providing transparency in its decision-making process and respecting data privacy, allowing organizations to deploy it within secure, private environments. MindsDB - an open-source data federation and orchestration engine for AI systems that facilitates the implementation of custom AI workflows across hundreds of data and AI platforms.
MindsDB : MindsDB has formed strategic partnerships with leading companies such as Snowflake, SingleStore, DataStax, and NVIDIA. As of September 2024, the platform supports over 200 integrations, including popular large language models (LLMs) like OpenAI, Anthropic, and Mistral, as well as data platforms such as MySQL, PostgreSQL, Snowflake, and MongoDB. MindsDB also integrates with a wide range of applications, including Salesforce, HubSpot, X (formerly Twitter), and many others.
MindsDB : Named to Forbes' AI 50 list in 2021 Recognized as a 2022 Gartner Cool Vendor in Data-Centric AI Ranked 10th fastest-growing open-source startup globally and 3rd in the US by ROSS Index in 2022 Recognized by Fast Company as one of the Most Innovative AI companies in 2024 Reached over 26k GitHub stars, 4.8k forks and 750 open-source contributors as of September 2024
MindsDB : Gurevich, Natalia (2023-08-24). "AI companies flocking to Mission worry neighbors". San Francisco Examiner. Retrieved 2024-03-23.
MindsDB : Official website mindsdb on GitHub
Base rate : In probability and statistics, the base rate (also known as prior probability) is the class of probabilities unconditional on "featural evidence" (likelihoods). It is the proportion of individuals in a population who have a certain characteristic or trait. For example, if 1% of the population were medical professionals, and the remaining 99% were not, then the base rate of medical professionals would be 1%. The method for integrating base rates and featural evidence is given by Bayes' rule. In the sciences, including medicine, the base rate is critical for comparison. In medicine, a treatment's effectiveness is clear only when the base rate is available. For example, if a control group using no treatment at all had a base rate of 1/20 recoveries within one day while a treatment group had a base rate of 1/100 recoveries within one day, we would see that the treatment actively decreases the recovery rate. The base rate is an important concept in statistical inference, particularly in Bayesian statistics. In Bayesian analysis, the base rate is combined with the observed data to update our belief about the probability of the characteristic or trait of interest. The updated probability is known as the posterior probability and is denoted as P(A|B), where B represents the observed data. For example, suppose we are interested in estimating the prevalence of a disease in a population. The base rate would be the proportion of individuals in the population who have the disease. If we observe a positive test result for a particular individual, we can use Bayesian analysis to update our belief about the probability that the individual has the disease. The updated probability would be a combination of the base rate and the likelihood of the test result given the disease status. The base rate is also important in decision-making, particularly in situations where the costs of false positives and false negatives are different. 
For example, in medical testing, a false negative (failing to diagnose a disease) could be much more costly than a false positive (incorrectly diagnosing a disease). In such cases, the base rate can help inform decisions about the appropriate threshold for a positive test result.
Base rate : Many psychological studies have examined a phenomenon called base-rate neglect or the base rate fallacy, in which category base rates are not integrated with presented evidence in a normative manner, although not all evidence is consistent regarding how common this fallacy is. Mathematician Keith Devlin illustrates the risks with a hypothetical type of cancer that afflicts 1% of all people. Suppose a doctor says there is a test for this cancer that is approximately 80% reliable: the test gives a positive result for 100% of people who have the cancer, but it also gives a 'false positive' for 20% of the people who do not have cancer. Testing positive may therefore lead people to believe that it is 80% likely that they have cancer. Devlin explains that the odds are instead less than 5%. What is missing from these statistics is the relevant base rate information. The doctor should be asked, "Out of the number of people who test positive (base rate group), how many have cancer?" In assessing the probability that a given individual is a member of a particular class, information other than the base rate needs to be accounted for, especially featural evidence. For example, when a person wearing a white doctor's coat and stethoscope is seen prescribing medication, this evidence allows for the conclusion that the probability of this particular individual being a medical professional is considerably greater than the category base rate of 1%.
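Devlin's numbers can be checked directly with Bayes' rule, as in the minimal sketch below; `posterior` is an illustrative helper name, and the inputs are the base rate, the test's sensitivity, and its false-positive rate.

```python
def posterior(base_rate, sensitivity, false_positive_rate):
    """P(disease | positive test) by Bayes' rule."""
    # Total probability of a positive test, over diseased and healthy groups.
    p_positive = (base_rate * sensitivity
                  + (1 - base_rate) * false_positive_rate)
    return base_rate * sensitivity / p_positive

# Devlin's example: 1% base rate, 100% sensitivity, 20% false positives.
p = posterior(0.01, 1.0, 0.20)
```

With these inputs the posterior is about 0.048, matching Devlin's "less than 5%" despite the test being "80% reliable".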
Base rate : Bayes' rule Prior probability Prevalence == References ==
Transfer learning : Transfer learning (TL) is a technique in machine learning (ML) in which knowledge learned from a task is re-used in order to boost performance on a related task. For example, for image classification, knowledge gained while learning to recognize cars could be applied when trying to recognize trucks. This topic is related to the psychological literature on transfer of learning, although practical ties between the two fields are limited. Reusing/transferring information from previously learned tasks to new tasks has the potential to significantly improve learning efficiency. Since transfer learning makes use of training with multiple objective functions, it is related to cost-sensitive machine learning and multi-objective optimization.
Transfer learning : In 1976, Bozinovski and Fulgosi published a paper addressing transfer learning in neural network training. The paper gives a mathematical and geometrical model of the topic. In 1981, a report considered the application of transfer learning to a dataset of images representing letters of computer terminals, experimentally demonstrating positive and negative transfer learning. In 1992, Lorien Pratt formulated the discriminability-based transfer (DBT) algorithm. By 1998, the field had advanced to include multi-task learning, along with more formal theoretical foundations. Influential publications on transfer learning include the book Learning to Learn in 1998, a 2009 survey and a 2019 survey. Andrew Ng said in his NIPS 2016 tutorial that TL would become the next driver of machine learning commercial success after supervised learning. In the 2020 paper "Rethinking Pre-training and Self-training", Zoph et al. reported that pre-training can hurt accuracy, and advocated self-training instead.
Transfer learning : The definition of transfer learning is given in terms of domains and tasks. A domain D consists of a feature space 𝒳 and a marginal probability distribution P(X), where X = {x_1, ..., x_n} βŠ† 𝒳. Given a specific domain D = {𝒳, P(X)}, a task consists of two components: a label space 𝒴 and an objective predictive function f : 𝒳 β†’ 𝒴. The function f is used to predict the corresponding label f(x) of a new instance x. This task, denoted by T = {𝒴, f(x)}, is learned from training data consisting of pairs {x_i, y_i}, where x_i ∈ 𝒳 and y_i ∈ 𝒴. Given a source domain D_S and learning task T_S, and a target domain D_T and learning task T_T, where D_S β‰  D_T or T_S β‰  T_T, transfer learning aims to help improve the learning of the target predictive function f_T(Β·) in D_T using the knowledge in D_S and T_S.
Transfer learning : Algorithms are available for transfer learning in Markov logic networks and Bayesian networks. Transfer learning has been applied to cancer subtype discovery, building utilization, general game playing, text classification, digit recognition, medical imaging and spam filtering. In 2020, it was discovered that, due to their similar physical natures, transfer learning is possible between electromyographic (EMG) signals from the muscles and classifying the behaviors of electroencephalographic (EEG) brainwaves, from the gesture recognition domain to the mental state recognition domain. It was noted that this relationship worked in both directions, showing that EEG signals can likewise be used to classify EMG signals. The experiments noted that the accuracy of neural networks and convolutional neural networks was improved through transfer learning both prior to any learning (compared to standard random weight distribution) and at the end of the learning process (asymptote). That is, results are improved by exposure to another domain. Moreover, the end-user of a pre-trained model can change the structure of fully-connected layers to improve performance.
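The common pattern of freezing transferred layers and training a new output head can be sketched as follows, assuming NumPy. The "pre-trained" weights here are just a fixed random stand-in for layers learned on a large source task, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" feature extractor: a fixed projection standing in
# for layers already learned on a large source task.
W_src = rng.normal(size=(10, 4))

def features(x):
    # Frozen source-task representation; W_src is never updated.
    return np.tanh(x @ W_src)

# Small labelled target dataset.
X_tgt = rng.normal(size=(30, 10))
y_tgt = (X_tgt[:, 0] > 0).astype(float)

# "Fine-tuning": replace the output layer and fit only it (here by least
# squares), leaving the transferred weights untouched.
head, *_ = np.linalg.lstsq(features(X_tgt), y_tgt, rcond=None)
preds = (features(X_tgt) @ head > 0.5).astype(float)
```

Because only the small head is trained, the target task needs far fewer labelled examples than training the whole network from scratch would.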
Transfer learning : Crossover (genetic algorithm) Domain adaptation General game playing Multi-task learning Multitask optimization Transfer of learning in educational psychology Zero-shot learning Feature learning external validity
Transfer learning : Thrun, Sebastian; Pratt, Lorien (6 December 2012). Learning to Learn. Springer Science & Business Media. ISBN 978-1-4615-5529-2.
GPT-1 : Generative Pre-trained Transformer 1 (GPT-1) was the first of OpenAI's large language models following Google's invention of the transformer architecture in 2017. In June 2018, OpenAI released a paper entitled "Improving Language Understanding by Generative Pre-Training", in which they introduced that initial model along with the general concept of a generative pre-trained transformer. Up to that point, the best-performing neural NLP models primarily employed supervised learning from large amounts of manually labeled data. This reliance on supervised learning limited their use of datasets that were not well-annotated, in addition to making it prohibitively expensive and time-consuming to train extremely large models; many languages (such as Swahili or Haitian Creole) are difficult to translate and interpret using such models due to a lack of available text for corpus-building. In contrast, a GPT's "semi-supervised" approach involved two stages: an unsupervised generative "pre-training" stage in which a language modeling objective was used to set initial parameters, and a supervised discriminative "fine-tuning" stage in which these parameters were adapted to a target task. The use of a transformer architecture, as opposed to previous techniques involving attention-augmented RNNs, provided GPT models with a more structured memory than could be achieved through recurrent mechanisms; this resulted in "robust transfer performance across diverse tasks".
GPT-1 : BookCorpus was chosen as a training dataset partly because the long passages of continuous text helped the model learn to handle long-range information. It contained over 7,000 unpublished fiction books from various genres. The rest of the datasets available at the time, while larger, lacked this long-range structure (being "shuffled" at the sentence level). The BookCorpus text was cleaned with the ftfy library to standardize punctuation and whitespace, and then tokenized by spaCy.
GPT-1 : The GPT-1 architecture was a twelve-layer decoder-only transformer, using twelve masked self-attention heads, with 64-dimensional states each (for a total of 768). Rather than simple stochastic gradient descent, the Adam optimization algorithm was used; the learning rate was increased linearly from zero over the first 2,000 updates to a maximum of 2.5Γ—10⁻⁴, and annealed to 0 using a cosine schedule. GPT-1 has 117 million parameters. While the fine-tuning was adapted to specific tasks, its pre-training was not; to perform the various tasks, minimal changes were made to its underlying task-agnostic model architecture. Despite this, GPT-1 still improved on previous benchmarks in several language processing tasks, outperforming discriminatively-trained models with task-oriented architectures on several diverse tasks.
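The learning-rate schedule described above (linear warmup over the first 2,000 updates to 2.5Γ—10⁻⁴, then cosine annealing to 0) can be sketched as follows; the total number of training steps here is an assumed parameter, not a figure from the paper, and `gpt1_lr` is an illustrative name.

```python
import math

def gpt1_lr(step, max_lr=2.5e-4, warmup=2000, total_steps=100_000):
    """Learning rate at a given update step: linear warmup from zero,
    then cosine annealing down to zero (total_steps is an assumption)."""
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / (total_steps - warmup)
    return 0.5 * max_lr * (1 + math.cos(math.pi * progress))
```

The rate reaches its maximum exactly at step 2,000 and decays smoothly to zero at the final step.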
GPT-1 : GPT-1 achieved a 5.8% and 1.5% improvement over previous best results on natural language inference (also known as textual entailment) tasks, evaluating the ability to interpret pairs of sentences from various datasets and classify the relationship between them as "entailment", "contradiction" or "neutral". Examples of such datasets include QNLI (Wikipedia articles) and MultiNLI (transcribed speech, popular fiction, and government reports, among other sources); It similarly outperformed previous models on two tasks related to question answering and commonsense reasoningβ€”by 5.7% on RACE, a dataset of written question-answer pairs from middle and high school exams, and by 8.9% on the Story Cloze Test. GPT-1 improved on previous best-performing models by 4.2% on semantic similarity (or paraphrase detection), evaluating the ability to predict whether two sentences are paraphrases of one another, using the Quora Question Pairs (QQP) dataset. GPT-1 achieved a score of 45.4, versus a previous best of 35.0 in a text classification task using the Corpus of Linguistic Acceptability (CoLA). Finally, GPT-1 achieved an overall score of 72.8 (compared to a previous record of 68.9) on GLUE, a multi-task test. == References ==
Quantum natural language processing : Quantum natural language processing (QNLP) is the application of quantum computing to natural language processing (NLP). It computes word embeddings as parameterised quantum circuits that can solve NLP tasks faster than any classical computer. It is inspired by categorical quantum mechanics and the DisCoCat framework, making use of string diagrams to translate from grammatical structure to quantum processes.
Quantum natural language processing : The first quantum algorithm for natural language processing used the DisCoCat framework and Grover's algorithm to show a quadratic quantum speedup for a text classification task. It was later shown that quantum language processing is BQP-Complete, i.e. quantum language models are more expressive than their classical counterpart, unless quantum mechanics can be efficiently simulated by classical computers. These two theoretical results assume fault-tolerant quantum computation and a QRAM, i.e. an efficient way to load classical data on a quantum computer. Thus, they are not applicable to the noisy intermediate-scale quantum (NISQ) computers available today.
Quantum natural language processing : The algorithm of Zeng and Coecke was adapted to the constraints of NISQ computers and implemented on IBM quantum computers to solve binary classification tasks. Instead of loading classical word vectors onto a quantum memory, the word vectors are computed directly as the parameters of quantum circuits. These parameters are optimised using methods from quantum machine learning to solve data-driven tasks such as question answering, machine translation and even algorithmic music composition.
Quantum natural language processing : Categorical quantum mechanics Natural language processing Quantum machine learning Applied category theory String diagram
Quantum natural language processing : DisCoPy, a Python toolkit for computing with string diagrams lambeq, a Python library for quantum natural language processing
General regression neural network : Generalized regression neural network (GRNN) is a variation of radial basis neural networks. GRNN was suggested by D.F. Specht in 1991. GRNN can be used for regression, prediction, and classification. GRNN can also be a good solution for online dynamical systems. GRNN represents an improved technique in neural networks based on nonparametric regression. The idea is that every training sample represents the mean of a radial basis neuron.
General regression neural network : Y(x) = (Ξ£_{k=1}^{N} y_k K(x, x_k)) / (Ξ£_{k=1}^{N} K(x, x_k)) where: Y(x) is the prediction value of input x; y_k is the activation weight for the pattern layer neuron at k; K(x, x_k) is the radial basis function kernel (Gaussian kernel).
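The formula above amounts to a kernel-weighted average of the training targets, as in this minimal NumPy sketch; the bandwidth `sigma` and the function name are illustrative choices, not fixed by the definition.

```python
import numpy as np

def grnn_predict(x, X_train, y_train, sigma=0.5):
    """GRNN prediction: each training sample acts as one pattern-layer
    neuron with a Gaussian kernel centred on it."""
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to x
    k = np.exp(-d2 / (2 * sigma ** 2))            # Gaussian kernel activations
    return float(np.dot(y_train, k) / np.sum(k))  # kernel-weighted average
```

For instance, a query point equidistant from two training samples with targets 0 and 1 is predicted as exactly 0.5, since the two kernel weights are equal.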
General regression neural network : GRNN has been implemented in many computer languages including MATLAB, R, Python and Node.js. Neural networks (specifically multi-layer perceptrons) can delineate non-linear patterns in data by combining with generalized linear models that consider the distribution of outcomes (slightly different from the original GRNN). There have been several successful developments, including Poisson regression, ordinal logistic regression, quantile regression and multinomial logistic regression, as described by Fallah in 2009.
General regression neural network : Similar to RBFNN, GRNN has the following advantages: Single-pass learning, so no backpropagation is required. High accuracy in the estimation, since it uses Gaussian functions. It can handle noise in the inputs. It requires relatively little data to train. The main disadvantages of GRNN are: Its size can be huge, which would make it computationally expensive. There is no optimal method to improve it. == References ==
Quantification (machine learning) : In machine learning and data mining, quantification (variously called learning to quantify, supervised prevalence estimation, or class prior estimation) is the task of using supervised learning in order to train models (quantifiers) that estimate the relative frequencies (also known as prevalence values) of the classes of interest in a sample of unlabelled data items. For instance, in a sample of 100,000 unlabelled tweets known to express opinions about a certain political candidate, a quantifier may be used to estimate the percentage of these tweets which belong to the class 'Positive' (i.e., which manifest a positive stance towards this candidate), and to do the same for the classes 'Neutral' and 'Negative'. Quantification may also be viewed as the task of training predictors that estimate a (discrete) probability distribution, i.e., that generate a predicted distribution that approximates the unknown true distribution of the items across the classes of interest. Quantification is different from classification, since the goal of classification is to predict the class labels of individual data items, while the goal of quantification is to predict the class prevalence values of sets of data items. Quantification is also different from regression, since in regression the training data items have real-valued labels, while in quantification the training data items have class labels. It has been shown in multiple research works that performing quantification by classifying all unlabelled instances and then counting the instances that have been attributed to each class (the 'classify and count' method) usually leads to suboptimal quantification accuracy. This suboptimality may be seen as a direct consequence of 'Vapnik's principle', which states: If you possess a restricted amount of information for solving some problem, try to solve the problem directly and never solve a more general problem as an intermediate step. 
It is possible that the available information is sufficient for a direct solution but is insufficient for solving a more general intermediate problem. In our case, the problem to be solved directly is quantification, while the more general intermediate problem is classification. As a result of the suboptimality of the 'classify and count' method, quantification has evolved as a task in its own right, different (in goals, methods, techniques, and evaluation measures) from classification.
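A minimal sketch of 'classify and count' and one of its corrected variants, adjusted classify and count (ACC), which rescales the raw positive rate using the classifier's true- and false-positive rates (typically estimated on held-out validation data); function names are illustrative.

```python
def classify_and_count(predictions):
    """Naive quantifier: the fraction of items the classifier labels positive."""
    return sum(predictions) / len(predictions)

def adjusted_classify_and_count(predictions, tpr, fpr):
    """ACC: invert E[cc] = p*tpr + (1-p)*fpr to recover the prevalence p,
    then clip the estimate into the valid [0, 1] range."""
    cc = classify_and_count(predictions)
    p = (cc - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))
```

For example, with true prevalence 0.3 and a classifier with tpr = 0.9 and fpr = 0.1, classify-and-count systematically reports 0.3Β·0.9 + 0.7Β·0.1 = 0.34, while ACC recovers 0.3.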
Quantification (machine learning) : The main variants of quantification, according to the characteristics of the set of classes used, are: Binary quantification, corresponding to the case in which there are only n = 2 classes and each data item belongs to exactly one of them; Single-label multiclass quantification, corresponding to the case in which there are n > 2 classes and each data item belongs to exactly one of them; Multi-label multiclass quantification, corresponding to the case in which there are n β‰₯ 2 classes and each data item can belong to zero, one, or several classes at the same time; Ordinal quantification, corresponding to the single-label multiclass case in which a total order is defined on the set of classes; Regression quantification, a task which stands to 'standard' quantification as regression stands to classification. Strictly speaking, this task is not a quantification task as defined above (since the individual items do not have class labels but are labelled by real values), but it has enough commonalities with other quantification tasks to be considered one of them. Most known quantification methods address the binary case or the single-label multiclass case, and only a few of them address the multi-label, ordinal, and regression cases. Binary-only methods include the Mixture Model (MM) method, the HDy method, SVM(KLD), and SVM(Q). Methods that can deal with both the binary case and the single-label multiclass case include probabilistic classify and count (PCC), adjusted classify and count (ACC), probabilistic adjusted classify and count (PACC), and the Saerens-Latinne-Decaestecker EM-based method (SLD). Methods for multi-label quantification include regression-based quantification (RQ) and label powerset-based quantification (LPQ). Methods for the ordinal case include the Ordinal Quantification Tree (OQT), and ordinal versions of the above-mentioned ACC, PACC, and SLD methods. 
Methods for the regression case include Regress and splice and Adjusted regress and sum.
Quantification (machine learning) : Several evaluation measures can be used for evaluating the error of a quantification method. Since quantification consists of generating a predicted probability distribution that estimates a true probability distribution, these evaluation measures are ones that compare two probability distributions. Most evaluation measures for quantification belong to the class of divergences. Evaluation measures for binary quantification and single-label multiclass quantification include absolute error, squared error, relative absolute error, the Kullback-Leibler divergence, and the Pearson divergence. Evaluation measures for ordinal quantification include the normalized match distance (a particular case of the earth mover's distance) and the root normalized order-aware distance.
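Two of the listed measures, absolute error and the Kullback-Leibler divergence, can be sketched as follows; the smoothing term `eps` is a common practical addition to avoid division by zero, not part of the formal definitions.

```python
import math

def absolute_error(p_true, p_pred):
    """Mean absolute difference between true and predicted class prevalences."""
    return sum(abs(t - s) for t, s in zip(p_true, p_pred)) / len(p_true)

def kl_divergence(p_true, p_pred, eps=1e-12):
    """KL divergence of the predicted distribution from the true one
    (zero iff the two distributions coincide)."""
    return sum(t * math.log((t + eps) / (s + eps))
               for t, s in zip(p_true, p_pred))
```

For instance, predicting (0.3, 0.7) when the true prevalences are (0.5, 0.5) gives an absolute error of 0.2, while the KL divergence of identical distributions is zero.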
Quantification (machine learning) : Quantification is of special interest in fields such as the social sciences, epidemiology, market research, and ecological modelling, since these fields are inherently concerned with aggregate data. However, quantification is also useful as a building block for solving other downstream tasks, such as improving the accuracy of classifiers on out-of-distribution data, allocating resources, measuring classifier bias, and estimating the accuracy of classifiers on out-of-distribution data.
Quantification (machine learning) : LQ 2021: the 1st International Workshop on Learning to Quantify LQ 2022: the 2nd International Workshop on Learning to Quantify LQ 2023: the 3rd International Workshop on Learning to Quantify LQ 2024: the 4th International Workshop on Learning to Quantify LeQua 2022: the 1st Data Challenge on Learning to Quantify LeQua 2024: the 2nd Data Challenge on Learning to Quantify QuaPy: An open-source Python-based software library for quantification QuantificationLib: A Python library for quantification and prevalence estimation == References ==
Lexalytics : Lexalytics, Inc. provides sentiment and intent analysis to an array of companies using SaaS and cloud-based technology. Salience 6, the engine behind Lexalytics, was built as an on-premises, multi-lingual text analysis engine. It is leased to other companies, who use it to power filtering and reputation management programs. In July 2015, Lexalytics acquired Semantria to serve as a cloud option for its technology. In September 2021, Lexalytics was acquired by the CX company InMoment.
Lexalytics : Lexalytics spun into existence in January 2003 out of a content management startup called Lightspeed. Lightspeed consolidated its operations on America's West Coast. Jeff Catlin, a Lightspeed general manager, and Mike Marshall, a Lightspeed principal engineer, convinced investors to give them the East Coast company so as to avoid shutdown costs. Catlin and Marshall renamed the operation Lexalytics. Catlin took on the role of chief executive officer, with Marshall working as chief technology officer. Lexalytics opted not to accept venture capital. Instead, the company initially shared sales and marketing expenses with the U.K.-based document management company Infonic. The partner companies soon formed a joint venture in July 2008, which was later dissolved. Since then, Lexalytics has worked with many other companies, such as Bottlenose, Salesforce, Thomson Reuters, Oracle and DataSift. In partnerships with social media monitoring companies like DataSift, the Salience engine tends to be embedded directly into the partner's product. Lexalytics is used similarly to monitor sentiment as it relates to stock trading. In December 2014, Lexalytics announced the latest iteration of its sentiment analysis engine, Salience 6. Earlier that year Lexalytics had acquired Semantria in a bid to appeal to a wider variety of business models. Created by former Lexalytics marketing director Oleg Rogynskyy, Semantria is a SaaS text mining service offered as an API and Excel-based plugin that measures sentiment. The goal of the acquisition, which cost Lexalytics less than US$10 million, was to expand the customer base both within the United States and abroad with multilingual support. Salience, the engine that powers Semantria, is grounded in its deep-learning ability. An example of this is its concept matrix, which gives Salience an understanding of concepts and the relationships between them based on a detailed reading of the entire repository of Wikipedia. 
This matrix allows Salience to use Wikipedia for automatic categorization. Along with features like the concept matrix, Salience supports 16 international languages. The engine has earned Lexalytics a spot on EContent's "Top 100 Companies in the Digital Content Industry" list for 2014–2015. In September 2018, Lexalytics entered the document data extraction market using natural language processing (NLP).
Lexalytics : Official website
LipNet : LipNet is a deep neural network for audio-visual speech recognition (AVSR). It was created by University of Oxford researchers Yannis Assael, Brendan Shillingford, Shimon Whiteson, and Nando de Freitas. The technique, outlined in a paper published in November 2016, is able to decode text from the movement of a speaker's mouth. Traditional visual speech recognition approaches separated the problem into two stages: designing or learning visual features, and prediction. LipNet was the first end-to-end sentence-level lipreading model that learned spatiotemporal visual features and a sequence model simultaneously. Audio-visual speech recognition has enormous practical potential, with applications such as improved hearing aids, improving the recovery and wellbeing of critically ill patients, and speech recognition in noisy environments, implemented for example in Nvidia's autonomous vehicles. == References ==
Bayesian regret : In stochastic game theory, Bayesian regret is the expected difference ("regret") between the utility of a Bayesian strategy and that of the optimal strategy (the one with the highest expected payoff). The term Bayesian refers to Thomas Bayes (1702–1761), who proved a special case of what is now called Bayes' theorem and thereby provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference.
Bayesian regret : This term has been used to compare a random buy-and-hold strategy to professional traders' records. This same concept has received numerous different names, as the New York Times notes: "In 1957, for example, a statistician named James Hanna called his theorem Bayesian Regret. He had been preceded by David Blackwell, also a statistician, who called his theorem Controlled Random Walks. Other, later papers had titles like 'On Pseudo Games', 'How to Play an Unknown Game', 'Universal Coding' and 'Universal Portfolios'". == References ==
Version space learning : Version space learning is a logical approach to machine learning, specifically binary classification. Version space learning algorithms search a predefined space of hypotheses, viewed as a set of logical sentences. Formally, the hypothesis space is a disjunction H1 ∨ H2 ∨ ... ∨ Hn (i.e., one or more of hypotheses 1 through n are true). A version space learning algorithm is presented with examples, which it uses to restrict its hypothesis space; for each example x, the hypotheses that are inconsistent with x are removed from the space. This iterative refining of the hypothesis space is called the candidate elimination algorithm, and the hypothesis space maintained inside the algorithm is called its version space.
Version space learning : In settings where there is a generality-ordering on hypotheses, it is possible to represent the version space by two sets of hypotheses: (1) the most specific consistent hypotheses, and (2) the most general consistent hypotheses, where "consistent" indicates agreement with observed data. The most specific hypotheses (i.e., the specific boundary SB) cover the observed positive training examples, and as little of the remaining feature space as possible. These hypotheses, if reduced any further, exclude a positive training example, and hence become inconsistent. These minimal hypotheses essentially constitute a (pessimistic) claim that the true concept is defined just by the positive data already observed: thus, if a novel (never-before-seen) data point is observed, it should be assumed to be negative. (I.e., if data has not previously been ruled in, then it's ruled out.) The most general hypotheses (i.e., the general boundary GB) cover the observed positive training examples, but also cover as much of the remaining feature space as possible without including any negative training examples. These, if enlarged any further, include a negative training example, and hence become inconsistent. These maximal hypotheses essentially constitute an (optimistic) claim that the true concept is defined just by the negative data already observed: thus, if a novel (never-before-seen) data point is observed, it should be assumed to be positive. (I.e., if data has not previously been ruled out, then it's ruled in.) Thus, during learning, the version space (which itself is a set – possibly infinite – containing all consistent hypotheses) can be represented by just its lower and upper bounds (maximally general and maximally specific hypothesis sets), and learning operations can be performed just on these representative sets. After learning, classification can be performed on unseen examples by testing the hypotheses learned by the algorithm. 
If the example is consistent with multiple hypotheses, a majority vote rule can be applied.
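The elimination step described above can be illustrated with a deliberately tiny, hypothetical hypothesis space (threshold classifiers on a single numeric feature); the candidate elimination algorithm proper maintains only the S and G boundaries rather than the full enumeration used here.

```python
# Each hypothesis is a function mapping an example to True/False.
# The version space starts as the full hypothesis space and shrinks
# as labeled examples arrive.

def eliminate(version_space, example, label):
    """Keep only hypotheses that agree with the labeled example."""
    return [h for h in version_space if h(example) == label]

# Toy hypothesis space: threshold classifiers h_t(x) = (x >= t), t = 0..5.
hypotheses = [lambda x, t=t: x >= t for t in range(6)]

vs = hypotheses
vs = eliminate(vs, 4, True)   # x=4 positive: keeps thresholds t <= 4
vs = eliminate(vs, 1, False)  # x=1 negative: keeps thresholds t >= 2
# Remaining thresholds: 2, 3, 4

def classify(version_space, example):
    """Unanimous vote; returns None when the hypotheses disagree."""
    votes = {h(example) for h in version_space}
    return votes.pop() if len(votes) == 1 else None
```

Examples on which the surviving hypotheses disagree are exactly those left ambiguous by the data; a majority vote rule could be substituted for the unanimity rule here.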
Version space learning : The notion of version spaces was introduced by Mitchell in the early 1980s as a framework for understanding the basic problem of supervised learning within the context of solution search. Although the basic "candidate elimination" search method that accompanies the version space framework is not a popular learning algorithm, some practical implementations have been developed (e.g., Sverdlik & Reynolds 1992, Hong & Tsang 1997, Dubois & Quafafou 2002). A major drawback of version space learning is its inability to deal with noise: any pair of inconsistent examples can cause the version space to collapse, i.e., become empty, so that classification becomes impossible. One solution to this problem was proposed by Dubois and Quafafou, who introduced the Rough Version Space, where rough-set-based approximations are used to learn certain and possible hypotheses in the presence of inconsistent data.
Version space learning : Formal concept analysis Inductive logic programming Rough set. [The rough set framework focuses on the case where ambiguity is introduced by an impoverished feature set. That is, the target concept cannot be decisively described because the available feature set fails to disambiguate objects belonging to different categories. The version space framework focuses on the (classical induction) case where the ambiguity is introduced by an impoverished data set. That is, the target concept cannot be decisively described because the available data fails to uniquely pick out a hypothesis. Naturally, both types of ambiguity can occur in the same learning problem.] Inductive reasoning. [On the general problem of induction.]
Version space learning : Hong, Tzung-Pai; Shian-Shyong Tsang (1997). "A generalized version space learning algorithm for noisy and uncertain data". IEEE Transactions on Knowledge and Data Engineering. 9 (2): 336–340. doi:10.1109/69.591457. S2CID 29926783. Mitchell, Tom M. (1997). Machine Learning. Boston: McGraw-Hill. Sverdlik, W.; Reynolds, R.G. (1992). "Dynamic version spaces in machine learning". Proceedings, Fourth International Conference on Tools with Artificial Intelligence (TAI '92). Arlington, VA. pp. 308–315.
Knowledge graph embedding : In representation learning, knowledge graph embedding (KGE), also called knowledge representation learning (KRL), or multi-relation learning, is a machine learning task of learning a low-dimensional representation of a knowledge graph's entities and relations while preserving their semantic meaning. Leveraging their embedded representation, knowledge graphs (KGs) can be used for various applications such as link prediction, triple classification, entity recognition, clustering, and relation extraction.
Knowledge graph embedding : A knowledge graph G = {E, R, F} is a collection of entities E, relations R, and facts F. A fact is a triple (h, r, t) ∈ F that denotes a link r ∈ R between the head h ∈ E and the tail t ∈ E of the triple. Another notation that is often used in the literature to represent a triple (or fact) is <head, relation, tail>. This notation is called the resource description framework (RDF). A knowledge graph represents the knowledge related to a specific domain; leveraging this structured representation, it is possible to infer new knowledge from it after some refinement steps. However, the sparsity of data and computational inefficiency make knowledge graphs hard to use directly in real-world applications. The embedding of a knowledge graph is a function that translates each entity and each relation into a vector of a given dimension d, called the embedding dimension. It is even possible to embed the entities and relations with different dimensions. The embedding vectors can then be used for other tasks. A knowledge graph embedding is characterized by four aspects: Representation space: The low-dimensional space in which the entities and relations are represented. Scoring function: A measure of the goodness of a triple's embedded representation. Encoding models: The modality in which the embedded representation of the entities and relations interact with each other. Additional information: Any additional information coming from the knowledge graph that can enrich the embedded representation. Usually, an ad hoc scoring function is integrated into the general scoring function for each additional information.
Knowledge graph embedding : All algorithms for creating a knowledge graph embedding follow the same approach. First, the embedding vectors are initialized to random values. Then, they are iteratively optimized using a training set of triples. In each iteration, a batch of b triples is sampled from the training set, and for each of them a corrupted triple is generated, i.e., a triple that does not represent a true fact in the knowledge graph. The corruption of a triple involves substituting the head or the tail (or both) of the triple with another entity that makes the fact false. The original triple and the corrupted triple are added to the training batch, and then the embeddings are updated by optimizing a scoring function. Iteration stops when a stopping condition is reached. Usually, the stopping condition depends on the overfitting of the training set. At the end, the learned embeddings should have extracted semantic meaning from the training triples and should correctly predict unseen true facts in the knowledge graph.
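The loop described above might be sketched as follows, using a translation-style distance as the scoring function; the entities, training triples, and hyperparameters are made-up toy values, not from any real knowledge graph.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, dim = 5, 8

# Step 1: initialize entity and relation embeddings to random values.
E = rng.normal(scale=0.1, size=(n_entities, dim))
R = rng.normal(scale=0.1, size=(2, dim))

def dist(h, r, t):
    """Translation-style distance: a true fact should give a small distance."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def corrupt(triple):
    """Corrupt a triple by replacing its head or tail with a random entity."""
    h, r, t = triple
    if rng.random() < 0.5:
        return (int(rng.integers(n_entities)), r, t)
    return (h, r, int(rng.integers(n_entities)))

train = [(0, 0, 1), (1, 1, 2), (3, 0, 4)]
lr, margin = 0.05, 1.0
for epoch in range(100):
    for pos in train:
        neg = corrupt(pos)
        # Margin loss: the true triple should be closer than the corrupted one.
        if margin + dist(*pos) - dist(*neg) > 0:
            for (h, r, t), sign in ((pos, 1.0), (neg, -1.0)):
                diff = E[h] + R[r] - E[t]
                g = sign * diff / (np.linalg.norm(diff) + 1e-9)
                E[h] -= lr * g
                R[r] -= lr * g
                E[t] += lr * g
```

Real implementations add batching, embedding normalization, and an early-stopping condition on a validation set, as described in the text.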
Knowledge graph embedding : These indexes are often used to measure the embedding quality of a model. The simplicity of the indexes makes them very suitable for evaluating the performance of an embedding algorithm even on a large scale. Given Q as the set of all ranked predictions of a model, it is possible to define three different performance indexes: Hits@K, MR, and MRR.
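Given the rank assigned to the correct entity in each prediction list, the three indexes can be computed directly; the ranks below are illustrative values.

```python
def ranking_metrics(ranks, k=10):
    """Compute MR, MRR and Hits@K from the ranks of the true entities."""
    mr = sum(ranks) / len(ranks)                    # Mean Rank (lower is better)
    mrr = sum(1.0 / r for r in ranks) / len(ranks)  # Mean Reciprocal Rank
    hits = sum(r <= k for r in ranks) / len(ranks)  # fraction ranked in the top K
    return mr, mrr, hits

ranks = [1, 3, 2, 10, 50]  # rank of the correct entity in each test query
mr, mrr, hits = ranking_metrics(ranks, k=10)
```

MRR and Hits@K lie in [0, 1] and are higher-is-better, while MR is unbounded and lower-is-better, which is one reason MRR is often preferred for comparing models.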
Knowledge graph embedding : Given a collection of triples (or facts) F = {(h, r, t)}, the knowledge graph embedding model produces, for each entity and relation present in the knowledge graph, a continuous vector representation. (h, r, t) is the corresponding embedding of a triple, with h, t ∈ ℝ^d and r ∈ ℝ^k, where d is the embedding dimension for the entities, and k for the relations. The score function of a given model is denoted by fr(h, t) and measures the distance of the embedding of the head from the embedding of the tail given the embedding of the relation; in other words, it quantifies the plausibility of the embedded representation of a given fact. Rossi et al. propose a taxonomy of the embedding models and identify three main families of models: tensor decomposition models, geometric models, and deep learning models.
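As an illustration of two of these families, the score functions of a geometric model (a TransE-style translation) and a tensor decomposition model (a DistMult-style trilinear product) can be written in a few lines; the embedding vectors below are toy values chosen by hand.

```python
import numpy as np

h = np.array([0.1, 0.8, -0.3])  # head entity embedding
r = np.array([0.4, -0.2, 0.5])  # relation embedding
t = np.array([0.5, 0.6, 0.2])   # tail entity embedding

def transe_score(h, r, t):
    """Geometric model: relation as a translation, score = -||h + r - t||."""
    return -np.linalg.norm(h + r - t)

def distmult_score(h, r, t):
    """Tensor decomposition model: trilinear product with a diagonal relation matrix."""
    return np.sum(h * r * t)
```

For both functions a higher score means a more plausible fact; here the toy vectors satisfy h + r ≈ t, so the translation-based score is near its maximum of zero.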
Knowledge graph embedding : The machine learning task most often used to evaluate the embedding accuracy of the models is link prediction. Rossi et al. produced an extensive benchmark of the models, and other surveys have produced similar results. The benchmark involves five datasets: FB15k, WN18, FB15k-237, WN18RR, and YAGO3-10. More recently, it has been argued that these datasets are far removed from real-world applications, and that other datasets should be integrated as a standard benchmark.
Knowledge graph embedding : KGE on GitHub MEI-KGE on GitHub Pykg2vec on GitHub DGL-KE on GitHub PyKEEN on GitHub TorchKGE on GitHub AmpliGraph on GitHub OpenKE on GitHub scikit-kge on GitHub Fast-TransX on GitHub MEIM-KGE on GitHub DICEE on GitHub
Knowledge graph embedding : Knowledge graph Embedding Machine learning Knowledge base Knowledge extraction Statistical relational learning Representation learning Graph embedding
Knowledge graph embedding : Open Graph Benchmark - Stanford WordNet - Princeton
Neural modeling fields : Neural modeling field (NMF) is a mathematical framework for machine learning which combines ideas from neural networks, fuzzy logic, and model-based recognition. It has also been referred to as modeling fields, modeling fields theory (MFT), and maximum likelihood artificial neural networks (MLANS). The framework was developed by Leonid Perlovsky at the AFRL. NMF is interpreted as a mathematical description of the mind's mechanisms, including concepts, emotions, instincts, imagination, thinking, and understanding. NMF is a multi-level, hetero-hierarchical system. At each level in NMF there are concept-models encapsulating the knowledge; they generate so-called top-down signals, interacting with input, bottom-up signals. These interactions are governed by dynamic equations, which drive concept-model learning, adaptation, and formation of new concept-models for better correspondence to the input, bottom-up signals.
Neural modeling fields : In the general case, the NMF system consists of multiple processing levels. At each level, output signals are the concepts recognized in (or formed from) input, bottom-up signals. Input signals are associated with (or recognized, or grouped into) concepts according to the models at this level. In the process of learning, the concept-models are adapted for better representation of the input signals so that the similarity between the concept-models and signals increases. This increase in similarity can be interpreted as satisfaction of an instinct for knowledge, and is felt as aesthetic emotions. Each hierarchical level consists of N "neurons" enumerated by the index n = 1, 2, ..., N. These neurons receive input, bottom-up signals, X(n), from lower levels in the processing hierarchy. X(n) is a field of bottom-up neuronal synaptic activations, coming from neurons at a lower level. Each neuron has a number of synapses; for generality, each neuron activation is described as a set of numbers, X(n) = {Xd(n), d = 1..D}, where D is the number of dimensions necessary to describe an individual neuron's activation. Top-down, or priming, signals to these neurons are sent by concept-models, Mm(Sm,n), m = 1..M, where M is the number of models. Each model is characterized by its parameters, Sm; in the neuron structure of the brain they are encoded by the strength of synaptic connections; mathematically, they are given by a set of numbers, Sm = {Sam, a = 1..A}, where A is the number of dimensions necessary to describe an individual model. Models represent signals in the following way. Suppose that signal X(n) is coming from sensory neurons n activated by object m, which is characterized by parameters Sm. These parameters may include the position, orientation, or lighting of an object m. Model Mm(Sm,n) predicts the value X(n) of a signal at neuron n. 
For example, during visual perception, a neuron n in the visual cortex receives a signal X(n) from the retina and a priming signal Mm(Sm,n) from an object-concept-model m. Neuron n is activated if both the bottom-up signal from lower-level input and the top-down priming signal are strong. Various models compete for evidence in the bottom-up signals, while adapting their parameters for a better match as described below. This is a simplified description of perception. Even the most benign everyday visual perception uses many levels, from the retina to object perception. The NMF premise is that the same laws describe the basic interaction dynamics at each level. Perception of minute features, or everyday objects, or cognition of complex abstract concepts is due to the same mechanism described below. Perception and cognition involve concept-models and learning. In perception, concept-models correspond to objects; in cognition, models correspond to relationships and situations. Learning is an essential part of perception and cognition, and in NMF theory it is driven by dynamics that increase a similarity measure between the sets of models and signals, L({X(n)}, {Mm(Sm,n)}). The similarity measure is a function of model parameters and associations between the input bottom-up signals and top-down, concept-model signals. In constructing a mathematical description of the similarity measure, it is important to acknowledge two principles: first, the content of the visual field is unknown before perception occurs; second, it may contain any of a number of objects. Important information could be contained in any bottom-up signal; therefore, the similarity measure is constructed so that it accounts for all bottom-up signals, X(n):

L({X(n)}, {Mm(Sm,n)}) = Πn=1..N l(X(n)). 
(1) This expression contains a product of partial similarities, l(X(n)), over all bottom-up signals; therefore it forces the NMF system to account for every signal (if even one term in the product is zero, the product is zero, the similarity is low, and the knowledge instinct is not satisfied); this is a reflection of the first principle. Second, before perception occurs, the mind does not know which object gave rise to a signal from a particular retinal neuron. Therefore, a partial similarity measure is constructed so that it treats each model as an alternative (a sum over concept-models) for each input neuron signal. Its constituent elements are conditional partial similarities between signal X(n) and model Mm, l(X(n)|m). This measure is "conditional" on object m being present; therefore, when combining these quantities into the overall similarity measure, L, they are multiplied by r(m), which represents a probabilistic measure of object m actually being present. Combining these elements with the two principles noted above, a similarity measure is constructed as follows:

L({X(n)}, {Mm(Sm,n)}) = Πn=1..N Σm=1..M r(m) l(X(n)|m). (2)

The structure of the expression above follows standard principles of probability theory: a summation is taken over alternatives, m, and various pieces of evidence, n, are multiplied. This expression is not necessarily a probability, but it has a probabilistic structure. If learning is successful, it approximates a probabilistic description and leads to near-optimal Bayesian decisions. The name "conditional partial similarity" for l(X(n)|m) (or simply l(n|m)) follows the probabilistic terminology. If learning is successful, l(n|m) becomes a conditional probability density function, a probabilistic measure that the signal in neuron n originated from object m. Then L is the total likelihood of observing signals coming from objects described by the concept-models. 
Coefficients r(m), called priors in probability theory, contain preliminary biases or expectations; expected objects m have relatively high r(m) values. Their true values are usually unknown and should be learned, like the other parameters Sm. Note that in probability theory, a product of probabilities usually assumes that evidence is independent. The expression for L contains a product over n, but it does not assume independence among the various signals X(n). There is a dependence among signals due to concept-models: each model Mm(Sm,n) predicts expected signal values in many neurons n. During the learning process, concept-models are constantly modified. Usually, the functional forms of the models, Mm(Sm,n), are all fixed and learning-adaptation involves only the model parameters, Sm. From time to time the system forms a new concept, while retaining an old one as well; alternatively, old concepts are sometimes merged or eliminated. This requires a modification of the similarity measure L; the reason is that more models always result in a better fit between the models and data. This is a well-known problem, which is addressed by reducing the similarity L using a "skeptic penalty function" (Penalty method) p(N,M) that grows with the number of models M, and this growth is steeper for a smaller amount of data N. For example, an asymptotically unbiased maximum likelihood estimation leads to a multiplicative p(N,M) = exp(−Npar/2), where Npar is the total number of adaptive parameters in all models (this penalty function is known as the Akaike information criterion; see (Perlovsky 2001) for further discussion and references).
Neural modeling fields : The learning process consists of estimating model parameters S and associating signals with concepts by maximizing the similarity L. Note that all possible combinations of signals and models are accounted for in expression (2) for L. This can be seen by expanding the sum and multiplying all the terms, resulting in M^N items, a huge number. This is the number of combinations between all signals (N) and all models (M). This is the source of combinatorial complexity, which is solved in NMF by utilizing the idea of dynamic logic. An important aspect of dynamic logic is matching the vagueness or fuzziness of similarity measures to the uncertainty of models. Initially, parameter values are not known, and the uncertainty of models is high; so is the fuzziness of the similarity measures. In the process of learning, models become more accurate, the similarity measure becomes more crisp, and the value of the similarity increases. The maximization of similarity L is done as follows. First, the unknown parameters are randomly initialized. Then the association variables f(m|n) are computed:

f(m|n) = r(m) l(X(n)|m) / Σm'=1..M r(m') l(X(n)|m'). (3)

The equation for f(m|n) looks like the Bayes formula for a posteriori probabilities; if the l(n|m) in the result of learning become conditional likelihoods, the f(m|n) become Bayesian probabilities for signal n originating from object m. The dynamic logic of the NMF is defined as follows:

dSm/dt = Σn=1..N f(m|n) [∂ln l(n|m)/∂Mm] ∂Mm/∂Sm, (4)

df(m|n)/dt = f(m|n) Σm'=1..M [δmm' − f(m'|n)] [∂ln l(n|m')/∂Mm'] ∂Mm'/∂Sm' dSm'/dt. (5)

The following theorem has been proved (Perlovsky 2001): Theorem. Equations (3), (4), and (5) define a convergent dynamic NMF system with stationary states defined by max L. 
It follows that the stationary states of an NMF system are the maximum similarity states. When partial similarities are specified as probability density functions (pdf), or likelihoods, the stationary values of parameters are asymptotically unbiased and efficient estimates of these parameters. The computational complexity of dynamic logic is linear in N. Practically, when solving the equations through successive iterations, f(m|n) can be recomputed at every iteration using (3), as opposed to the incremental formula (5). The proof of the above theorem contains a proof that the similarity L increases at each iteration. This has a psychological interpretation: the instinct for increasing knowledge is satisfied at each step, resulting in positive emotions; the NMF dynamic logic system emotionally enjoys learning.
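Equation (3), recomputed at every iteration as noted above, can be sketched for a one-dimensional toy case with Gaussian conditional partial similarities; the signals, model predictions, and priors below are illustrative values only.

```python
import numpy as np

def association(signals, models, r, sigma=1.0):
    """Association variables f(m|n): Bayes-like normalization of
    r(m) * l(X(n)|m) over models, with Gaussian conditional similarities."""
    # l(X(n)|m): Gaussian similarity between signal n and model m's prediction.
    l = np.exp(-0.5 * ((signals[:, None] - models[None, :]) / sigma) ** 2)
    weighted = r[None, :] * l
    # Normalize over models so that each signal's associations sum to 1.
    return weighted / weighted.sum(axis=1, keepdims=True)

signals = np.array([0.1, 2.0, 4.1])  # bottom-up signals X(n)
models = np.array([0.0, 4.0])        # model predictions M_m
r = np.array([0.5, 0.5])             # priors r(m)
f = association(signals, models, r)
```

A large sigma corresponds to the initially fuzzy similarity of dynamic logic; shrinking it over iterations makes each signal's association concentrate on a single model.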
Neural modeling fields : Finding patterns below noise can be an exceedingly complex problem. If an exact pattern shape is not known and depends on unknown parameters, these parameters should be found by fitting the pattern model to the data. However, when the locations and orientations of patterns are not known, it is not clear which subset of the data points should be selected for fitting. A standard approach for solving this kind of problem is multiple hypothesis testing (Singer et al. 1974). Since all combinations of subsets and models are exhaustively searched, this method faces the problem of combinatorial complexity. In the current example, noisy 'smile' and 'frown' patterns are sought. They are shown in Fig. 1a without noise, and in Fig. 1b with the noise, as actually measured. The true number of patterns is 3, which is not known. Therefore, at least 4 patterns should be fit to the data, to decide that 3 patterns fit best. The image size in this example is 100x100 = 10,000 points. If one attempts to fit 4 models to all subsets of 10,000 data points, the computational complexity is M^N ~ 10^6000. An alternative computation, searching through the parameter space, yields lower complexity: each pattern is characterized by a 3-parameter parabolic shape. Fitting 4x3 = 12 parameters to a 100x100 grid by brute-force testing would take about 10^32 to 10^40 operations, still a prohibitive computational complexity. To apply NMF and dynamic logic to this problem one needs to develop parametric adaptive models of expected patterns. The models and conditional partial similarities for this case are described in detail: a uniform model for noise, Gaussian blobs for highly fuzzy, poorly resolved patterns, and parabolic models for 'smiles' and 'frowns'. The number of computer operations in this example was about 10^10. Thus, a problem that was not solvable due to combinatorial complexity becomes solvable using dynamic logic. 
During an adaptation process, initially fuzzy and uncertain models are associated with structures in the input signals, and the fuzzy models become more definite and crisp with successive iterations. The type, shape, and number of models are selected so that the internal representation within the system is similar to the input signals: the NMF concept-models represent structure-objects in the signals. The figure below illustrates operations of dynamic logic. In Fig. 1(a) true 'smile' and 'frown' patterns are shown without noise; (b) actual image available for recognition (signal is below noise, signal-to-noise ratio is between −2 dB and −0.7 dB); (c) an initial fuzzy model, a large fuzziness corresponds to uncertainty of knowledge; (d) through (m) show improved models at various iteration stages (total of 22 iterations). Every five iterations the algorithm tried to increase or decrease the number of models. Between iterations (d) and (e) the algorithm decided that it needs three Gaussian models for the 'best' fit. There are several types of models: one uniform model describing noise (it is not shown) and a variable number of blob models and parabolic models; their number, location, and curvature are estimated from the data. Until about stage (g) the algorithm used simple blob models; at (g) and beyond, the algorithm decided that it needs more complex parabolic models to describe the data. Iterations stopped at (h), when similarity stopped increasing.
Neural modeling fields : Above, a single processing level in a hierarchical NMF system was described. At each level of the hierarchy there are input signals from lower levels, models, similarity measures (L), emotions, which are defined as changes in similarity, and actions; actions include adaptation, behavior satisfying the knowledge instinct, i.e., maximization of similarity. An input to each level is a set of signals X(n), or in neural terminology, an input field of neuronal activations. The result of signal processing at a given level are activated models, or concepts m recognized in the input signals n; these models, along with the corresponding instinctual signals and emotions, may activate behavioral models and generate behavior at this level. The activated models initiate other actions. They serve as input signals to the next processing level, where more general concept-models are recognized or created. Output signals from a given level, serving as input to the next level, are the model activation signals, am, defined as am = Σn=1..N f(m|n). The hierarchical NMF system is illustrated in Fig. 2. Within the hierarchy of the mind, each concept-model finds its "mental" meaning and purpose at a higher level (in addition to other purposes). For example, consider a concept-model "chair." It has a "behavioral" purpose of initiating sitting behavior (if sitting is required by the body); this is the "bodily" purpose at the same hierarchical level. In addition, it has a "purely mental" purpose at a higher level in the hierarchy, a purpose of helping to recognize a more general concept, say of a "concert hall," a model of which contains rows of chairs. From time to time the system forms a new concept or eliminates an old one. At every level, the NMF system always keeps a reserve of vague (fuzzy) inactive concept-models. They are inactive in that their parameters are not adapted to the data; therefore their similarities to signals are low. 
Yet, because of their large vagueness (covariance) the similarities are not exactly zero. When a new signal does not fit well into any of the active models, its similarities to inactive models automatically increase (because, first, every piece of data is accounted for, and second, inactive models are vague-fuzzy and potentially can "grab" every signal that does not fit into more specific, less fuzzy, active models). When the activation signal am for an inactive model, m, exceeds a certain threshold, the model is activated. Similarly, when an activation signal for a particular model falls below a threshold, the model is deactivated. Thresholds for activation and deactivation are usually set based on information existing at a higher hierarchical level (prior information, system resources, numbers of activated models of various types, etc.). Activation signals for active models at a particular level form a "neuronal field," which serves as input signals to the next level, where more abstract and more general concepts are formed.
EM algorithm and GMM model : In statistics, the EM (expectation–maximization) algorithm is a method for maximum-likelihood estimation in models with latent variables, and the GMM (Gaussian mixture model) is a standard model to which it is applied.
EM algorithm and GMM model : In the picture below are shown the red blood cell hemoglobin concentration and the red blood cell volume data of two groups of people, the Anemia group and the Control group (i.e., the group of people without anemia). As expected, people with anemia have lower red blood cell volume and lower red blood cell hemoglobin concentration than those without anemia. x is a random vector, x := (red blood cell volume, red blood cell hemoglobin concentration), and from medical studies it is known that x is normally distributed in each group, i.e. x ∼ N(ΞΌ, Ξ£). z denotes the group to which x belongs, with z(i) = 0 when x(i) belongs to the Anemia group and z(i) = 1 when x(i) belongs to the Control group. Also z ∼ Categorical(k, Ο•), where k = 2, Ο•_j β‰₯ 0, and Ξ£_{j=1..k} Ο•_j = 1. See Categorical distribution. The parameters Ο•, ΞΌ, Ξ£ can be estimated by maximum likelihood:

β„“(Ο•, ΞΌ, Ξ£) = Ξ£_{i=1..m} log p(x(i); Ο•, ΞΌ, Ξ£) = Ξ£_{i=1..m} log Ξ£_{z(i)=1..k} p(x(i) | z(i); ΞΌ, Ξ£) p(z(i); Ο•)

As the z(i) for each x(i) are known, the log-likelihood function simplifies to:

β„“(Ο•, ΞΌ, Ξ£) = Ξ£_{i=1..m} [ log p(x(i) | z(i); ΞΌ, Ξ£) + log p(z(i); Ο•) ]

Setting the partial derivatives with respect to ΞΌ, Ξ£, Ο• to zero maximizes the likelihood, giving:

Ο•_j = (1/m) Ξ£_{i=1..m} 1{z(i) = j}
ΞΌ_j = Ξ£_{i=1..m} 1{z(i) = j} x(i) / Ξ£_{i=1..m} 1{z(i) = j}
Ξ£_j = Ξ£_{i=1..m} 1{z(i) = j} (x(i) βˆ’ ΞΌ_j)(x(i) βˆ’ ΞΌ_j)^T / Ξ£_{i=1..m} 1{z(i) = j}

If z(i) is known, estimating the parameters by maximum likelihood is therefore quite simple.
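With known labels, the closed-form estimates above are just per-group proportions, means, and covariances. A minimal numpy sketch, using synthetic 2-D data standing in for the blood-cell measurements (the group means and sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for (volume, hemoglobin concentration); the group
# label z is fully observed here (0 = "anemia", 1 = "control").
x0 = rng.multivariate_normal([30.0, 32.0], np.eye(2), size=100)  # z = 0
x1 = rng.multivariate_normal([90.0, 34.0], np.eye(2), size=300)  # z = 1
x = np.vstack([x0, x1])
z = np.array([0] * 100 + [1] * 300)

m = len(x)
phi, mu, sigma = [], [], []
for j in (0, 1):
    mask = (z == j)                    # indicator 1{z(i) = j}
    phi.append(mask.sum() / m)         # phi_j: group proportion
    mu_j = x[mask].mean(axis=0)        # mu_j: group mean
    mu.append(mu_j)
    d = x[mask] - mu_j
    sigma.append(d.T @ d / mask.sum())  # Sigma_j: group covariance
```

Each estimate only touches the points whose indicator is 1, which is exactly what the sums over 1{z(i) = j} express.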
But if z(i) is unknown, the problem is much more complicated. Since z is a latent variable (i.e., not observed), as in this unlabeled scenario, the expectation–maximization algorithm is needed to estimate z along with the other parameters. Generally, this problem is set up as a GMM, since the data in each group are normally distributed. In machine learning, the latent variable z is regarded as a latent pattern underlying the data, which the observer cannot see directly. x(i) is the known data, while Ο•, ΞΌ, Ξ£ are the parameters of the model. With the EM algorithm, an underlying pattern z in the data x(i) can be found, along with the estimates of the parameters. The wide applicability of this setting in machine learning is what makes the EM algorithm so important.
EM algorithm and GMM model : The EM algorithm consists of two steps: the E-step and the M-step. First, the model parameters and the z(i) can be randomly initialized. In the E-step, the algorithm tries to guess the value of z(i) based on the parameters, while in the M-step, the algorithm updates the value of the model parameters based on the guess of z(i) from the E-step. These two steps are repeated until convergence is reached. The algorithm in GMM is:

Repeat until convergence:

1. (E-step) For each i, j, set

w_j(i) := p(z(i) = j | x(i); Ο•, ΞΌ, Ξ£)

2. (M-step) Update the parameters

Ο•_j := (1/m) Ξ£_{i=1..m} w_j(i)
ΞΌ_j := Ξ£_{i=1..m} w_j(i) x(i) / Ξ£_{i=1..m} w_j(i)
Ξ£_j := Ξ£_{i=1..m} w_j(i) (x(i) βˆ’ ΞΌ_j)(x(i) βˆ’ ΞΌ_j)^T / Ξ£_{i=1..m} w_j(i)

With Bayes' rule, the E-step computes:

p(z(i) = j | x(i); Ο•, ΞΌ, Ξ£) = p(x(i) | z(i) = j; ΞΌ, Ξ£) p(z(i) = j; Ο•) / Ξ£_{l=1..k} p(x(i) | z(i) = l; ΞΌ, Ξ£) p(z(i) = l; Ο•)

According to the GMM setting, the following formulas hold:

p(x(i) | z(i) = j; ΞΌ, Ξ£) = 1 / ((2Ο€)^{n/2} |Ξ£_j|^{1/2}) Β· exp( βˆ’(1/2) (x(i) βˆ’ ΞΌ_j)^T Ξ£_j^{βˆ’1} (x(i) βˆ’ ΞΌ_j) )
p(z(i) = j; Ο•) = Ο•_j

In this way, the algorithm alternates between the E-step and the M-step, starting from the initialized parameters.
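The E-step and M-step above translate directly into numpy. This is a bare-bones sketch: the function name, the fixed iteration count, and the simple spread-along-one-axis initialization (instead of random initialization) are choices made here for illustration, not part of the algorithm as stated.

```python
import numpy as np

def em_gmm(x, k, n_iter=50):
    """Sketch of EM for a Gaussian mixture, following the E/M steps above."""
    m, n = x.shape
    phi = np.full(k, 1.0 / k)
    # Simple deterministic init: pick k points spread along the first axis.
    idx = np.argsort(x[:, 0])[np.linspace(0, m - 1, k).astype(int)]
    mu = x[idx].copy()
    sigma = np.array([np.cov(x.T) + np.eye(n) for _ in range(k)])
    for _ in range(n_iter):
        # E-step: responsibilities w_j(i) via Bayes' rule.
        w = np.empty((m, k))
        for j in range(k):
            d = x - mu[j]
            inv = np.linalg.inv(sigma[j])
            norm = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(sigma[j])))
            w[:, j] = phi[j] * norm * np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, inv, d))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted versions of the labeled-case formulas.
        for j in range(k):
            wj = w[:, j]
            phi[j] = wj.mean()
            mu[j] = (wj[:, None] * x).sum(axis=0) / wj.sum()
            d = x - mu[j]
            sigma[j] = (wj[:, None] * d).T @ d / wj.sum()
    return phi, mu, sigma, w

# Example: two well-separated synthetic clusters.
rng = np.random.default_rng(1)
x = np.vstack([rng.multivariate_normal([0.0, 0.0], np.eye(2), 200),
               rng.multivariate_normal([6.0, 6.0], np.eye(2), 200)])
phi, mu, sigma, w = em_gmm(x, k=2)
```

The M-step is literally the labeled-case maximum-likelihood update with the hard indicators 1{z(i) = j} replaced by the soft responsibilities w_j(i).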
Journal of Machine Learning Research : The Journal of Machine Learning Research is a peer-reviewed open access scientific journal covering machine learning. It was established in 2000 and the first editor-in-chief was Leslie Kaelbling. The current editors-in-chief are Francis Bach (Inria) and David Blei (Columbia University).
Journal of Machine Learning Research : The journal was established as an open-access alternative to the journal Machine Learning. In 2001, forty editorial board members of Machine Learning resigned, saying that in the era of the Internet, it was detrimental for researchers to continue publishing their papers in expensive journals with pay-access archives. The open access model employed by the Journal of Machine Learning Research allows authors to publish articles for free and retain copyright, while archives are freely available online. Print editions of the journal were published by MIT Press until 2004 and by Microtome Publishing thereafter. From its inception, the journal received no revenue from the print edition and paid no subvention to MIT Press or Microtome Publishing. In response to the prohibitive costs of arranging workshop and conference proceedings publication with traditional academic publishing companies, the journal launched a proceedings publication arm in 2007 and now publishes proceedings for several leading machine learning conferences, including the International Conference on Machine Learning, COLT, AISTATS, and workshops held at the Conference on Neural Information Processing Systems.
Multilingual notation : A multilingual notation is a representation in a lexical resource that allows translation between words of two or more languages.
Multilingual notation : For instance, within LMF, a multilingual notation could be represented as in the following diagram for English/French translation. In this diagram, two intermediate SenseAxis instances are used in order to represent a near match between fleuve in French and river in English. The SenseAxis instance on the bottom is not linked directly to any English sense because this notion does not exist in English. A more complex situation arises when more than two languages are concerned, as in the following diagram dealing with English, Italian, and Spanish.
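The sense-axis idea can be illustrated with a small data-structure sketch. This is a deliberately simplified, hypothetical rendering of LMF-style multilingual notation, not the actual LMF schema: the class names, the "exact"/"near-match" labels, and the French riviΓ¨re example (a river not flowing into the sea, contrasted with fleuve) are illustrative.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sense:
    language: str
    lemma: str

@dataclass
class SenseAxis:
    # An axis links senses across languages; its relation records
    # whether the match is exact or only approximate.
    senses: list = field(default_factory=list)
    relation: str = "exact"

# English "river" maps exactly to French "rivière", but only
# near-matches "fleuve", whose notion has no English equivalent.
river = Sense("en", "river")
fleuve = Sense("fr", "fleuve")
riviere = Sense("fr", "rivière")

axis1 = SenseAxis([river, riviere], relation="exact")
axis2 = SenseAxis([river, fleuve], relation="near-match")

def translations(sense, axes, target_lang):
    """Collect candidate translations of a sense via the sense axes."""
    return [s.lemma for a in axes if sense in a.senses
            for s in a.senses if s.language == target_lang]
```

With more languages, additional senses simply join the shared axes, which is what makes the interlingual-axis design scale beyond language pairs.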