Multiple sequence alignment : Alignment-free sequence analysis Cladistics Generalized tree alignment Multiple sequence alignment viewers PANDIT, a biological database covering protein domains Phylogenetics Sequence alignment software Structural alignment
Multiple sequence alignment : ExPASy sequence alignment tools Archived Multiple Alignment Resource Page – from the Virtual School of Natural Sciences Tools for Multiple Alignments – from Pôle Bioinformatique Lyonnais An entry point to clustal servers and information An entry point to the main T-Coffee servers An entry point to the main MergeAlign server and information European Bioinformatics Institute servers: ClustalW2 – general purpose multiple sequence alignment program for DNA or proteins. Muscle – MUltiple Sequence Comparison by Log-Expectation T-coffee – multiple sequence alignment. MAFFT – Multiple Alignment using Fast Fourier Transform KALIGN – a fast and accurate multiple sequence alignment algorithm.
Optimistic knowledge gradient : In statistics, the optimistic knowledge gradient is a smart decision-making strategy developed by Xi Chen, Qihang Lin and Dengyong Zhou in 2013 to help solve complex problems in crowdsourced data labeling (a form of optimal computing budget allocation problem). In crowdsourcing, multiple people are asked to label or classify data, but each labeling attempt comes with a cost. The main challenge is figuring out the most efficient way to allocate resources when you want to get the most accurate labels without spending too much money. Imagine you're running a project where you need to classify thousands of images, and each person you ask to label an image charges a fee. The optimistic knowledge gradient helps you determine the most cost-effective way to get the most reliable labels by strategically choosing which items to have labeled and by whom. This approach is particularly useful in machine learning and data science, where getting accurate labeled data is crucial but can be expensive. By using mathematical techniques, the method tries to maximize the information gained while minimizing the overall cost of labeling.
Optimistic knowledge gradient : The optimal computing budget allocation problem is formulated as a Bayesian Markov decision process (MDP) and solved by dynamic programming (DP), with the optimistic knowledge gradient policy serving as a tractable approximation to the computationally intractable exact DP solution. Consider a budget allocation problem in crowdsourcing; the particular problem considered here is crowd labeling. Crowd labeling consists of a large number of labeling tasks that are hard for machines to solve but turn out to be easy for human beings, so the tasks are outsourced to an unidentified group of workers in a distributed environment.
Optimistic knowledge gradient : The goal is to complete these labeling tasks by relying on the power of the crowd. For example, suppose we want to decide whether the person in a picture is an adult. This is a Bernoulli labeling problem that any of us can solve in a second or two; it is an easy task for a human being. With tens of thousands of such pictures, however, it is no longer easy, which is why a crowdsourcing framework is needed to make it fast. The framework consists of two steps. In the first step, labels for the items are acquired from the crowd dynamically: rather than sending every picture to everyone and collecting all responses at once, the system decides, based on each worker's historical labeling results, which picture to send next and which worker in the crowd to hire next. Each picture can be sent to multiple workers, and every worker can work on different pictures. In the second step, after enough labels have been collected for the different pictures, the true label of each picture is inferred from the collected labels. There are several ways to do this inference; the simplest is majority vote. The problem is that there is no free lunch: each worker must be paid for every label he or she provides, and the project budget is limited. The question is therefore how to spend the limited budget in a smart way.
Optimistic knowledge gradient : Before presenting the mathematical model, the paper describes the challenges this setting poses.
Optimistic knowledge gradient : In the mathematical model there are K items, i = 1, …, K, and a total budget T; each label costs 1, so T labels are collected in total. Each item has a true label Z_i, which is either positive or negative (the binary case; the idea extends to multi-class labeling). The positive set H* is defined as the set of items whose true label is positive. Each item i also has a soft label θ_i, a number between 0 and 1, defined as the underlying probability of the item being labeled positive by a worker picked at random from a group of perfect workers. In this first case every worker is assumed to be perfect, meaning fully reliable; being perfect does not mean that every worker gives the same answer, or the right answer, only that each tries to give the best answer in his or her own mind. Picking one of these workers at random, with probability θ_i we get one who believes the item is positive. A label Y_i is therefore drawn from Bernoulli(θ_i), and θ_i must be consistent with the true label: θ_i ≥ 0.5 if and only if the item's true label is positive. The goal is to learn H*, the set of positive items; in other words, to construct an inferred positive set H from the collected labels that maximizes ∑_{i=1}^{K} ( 1(i ∈ H) 1(i ∈ H*) + 1(i ∉ H) 1(i ∉ H*) ), which can also be written as |H ∩ H*| + |H^c ∩ (H*)^c|.
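The selection rule implied by this model can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's exact value function: it assumes a Beta(1, 1) prior on each soft label θ_i (a standard conjugate choice not spelled out above) and uses the posterior majority max(a, b)/(a + b) as the expected classification accuracy. The optimistic knowledge gradient then spends each label on the item whose better one-step outcome improves that accuracy most. Names such as okg_index and allocate are illustrative.

```python
import random

def h(a, b):
    # Expected accuracy of classifying one item by its posterior majority
    # (a simplified stand-in for the paper's value function).
    return max(a, b) / (a + b)

def okg_index(a, b):
    # Optimistic knowledge gradient: the better of the two possible
    # one-step gains from asking for one more label.
    gain_pos = h(a + 1, b) - h(a, b)   # value change if the label is positive
    gain_neg = h(a, b + 1) - h(a, b)   # value change if the label is negative
    return max(gain_pos, gain_neg)

def allocate(true_theta, budget, seed=0):
    rng = random.Random(seed)
    K = len(true_theta)
    post = [[1, 1] for _ in range(K)]  # Beta(1, 1) prior on each soft label
    for _ in range(budget):
        # Spend the next label on the item with the largest optimistic gain.
        i = max(range(K), key=lambda j: okg_index(*post[j]))
        if rng.random() < true_theta[i]:   # a perfect worker draws Bernoulli(theta_i)
            post[i][0] += 1
        else:
            post[i][1] += 1
    # Inferred positive set H: items whose posterior favors the positive label.
    return {i for i in range(K) if post[i][0] > post[i][1]}

H = allocate([0.9, 0.8, 0.2, 0.1], budget=200)
```

Because the one-step gain of an item shrinks as its posterior concentrates, the policy automatically spreads labels across items rather than fixating on one.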
PageRank : PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages. According to Google: PageRank works by counting the number and quality of links to a page to determine a rough estimate of how important the website is. The underlying assumption is that more important websites are likely to receive more links from other websites. Currently, PageRank is not the only algorithm used by Google to order search results, but it is the first algorithm that was used by the company, and it is the best known. As of September 24, 2019, all patents associated with PageRank have expired.
PageRank : PageRank is a link analysis algorithm and it assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by P R ( E ) . A PageRank results from a mathematical algorithm based on the Webgraph, created by all World Wide Web pages as nodes and hyperlinks as edges, taking into consideration authority hubs such as cnn.com or mayoclinic.org. The rank value indicates an importance of a particular page. A hyperlink to a page counts as a vote of support. The PageRank of a page is defined recursively and depends on the number and PageRank metric of all pages that link to it ("incoming links"). A page that is linked to by many pages with high PageRank receives a high rank itself. Numerous academic papers concerning PageRank have been published since Page and Brin's original paper. In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank. Other link-based ranking algorithms for Web pages include the HITS algorithm invented by Jon Kleinberg (used by Teoma and now Ask.com), the IBM CLEVER project, the TrustRank algorithm, the Hummingbird algorithm, and the SALSA algorithm.
PageRank : The eigenvalue problem behind PageRank's algorithm was independently rediscovered and reused in many scoring problems. In 1895, Edmund Landau suggested using it for determining the winner of a chess tournament. The eigenvalue problem was also suggested in 1976 by Gabriel Pinski and Francis Narin, who worked on scientometrics ranking scientific journals, in 1977 by Thomas Saaty in his concept of Analytic Hierarchy Process which weighted alternative choices, and in 1995 by Bradley Love and Steven Sloman as a cognitive model for concepts, the centrality algorithm. A search engine called "RankDex" from IDD Information Services, designed by Robin Li in 1996, developed a strategy for site-scoring and page-ranking. Li referred to his search mechanism as "link analysis," which involved ranking the popularity of a web site based on how many other sites had linked to it. RankDex, the first search engine with page-ranking and site-scoring algorithms, was launched in 1996. Li filed a patent for the technology in RankDex in 1997; it was granted in 1999. He later used it when he founded Baidu in China in 2000. Google founder Larry Page referenced Li's work as a citation in some of his U.S. patents for PageRank. Larry Page and Sergey Brin developed PageRank at Stanford University in 1996 as part of a research project about a new kind of search engine. An interview with Héctor García-Molina, Stanford Computer Science professor and advisor to Sergey, provides background into the development of the page-rank algorithm. Sergey Brin had the idea that information on the web could be ordered in a hierarchy by "link popularity": a page ranks higher as there are more links to it. The system was developed with the help of Scott Hassan and Alan Steremberg, both of whom were cited by Page and Brin as being critical to the development of Google. 
Rajeev Motwani and Terry Winograd co-authored with Page and Brin the first paper about the project, describing PageRank and the initial prototype of the Google search engine, published in 1998. Shortly after, Page and Brin founded Google Inc., the company behind the Google search engine. While just one of many factors that determine the ranking of Google search results, PageRank continues to provide the basis for all of Google's web-search tools. The name "PageRank" plays on the name of developer Larry Page, as well as the concept of a web page. The word is a trademark of Google, and the PageRank process has been patented (U.S. patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million. PageRank was influenced by citation analysis, first developed by Eugene Garfield in the 1950s at the University of Pennsylvania, and by Hyper Search, developed by Massimo Marchiori at the University of Padua. In the same year PageRank was introduced (1998), Jon Kleinberg published his work on HITS. Google's founders cite Garfield, Marchiori, and Kleinberg in their original papers.
PageRank : The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. It is assumed in several research papers that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computations require several passes, called "iterations", through the collection to adjust approximate PageRank values to more closely reflect the theoretical true value. A probability is expressed as a numeric value between 0 and 1. A 0.5 probability is commonly expressed as a "50% chance" of something happening. Hence, a document with a PageRank of 0.5 means there is a 50% chance that a person clicking on a random link will be directed to said document.
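The iterative computation described above can be sketched as a short power iteration. The function name, the damping factor default of 0.85, and the fixed iteration count are illustrative choices; the uniform starting distribution follows the description above.

```python
def pagerank(links, d=0.85, iters=50):
    # links maps each page to the list of pages it links to.
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}             # evenly divided start, as described
    for _ in range(iters):                       # the "iterations" of the text
        new = {p: (1.0 - d) / n for p in pages}  # random-jump (damping) term
        for p, outs in links.items():
            if outs:
                share = pr[p] / len(outs)        # a page splits its rank over its links
                for q in outs:
                    new[q] += d * share
            else:                                # dangling page: spread rank evenly
                for q in pages:
                    new[q] += d * pr[p] / n
        pr = new
    return pr

pr = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

In this three-page example, C is linked to by both A and B and so ends up with the highest rank; the values always sum to 1, since they form a probability distribution over pages.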
PageRank : The mathematics of PageRank are entirely general and apply to any graph or network in any domain. Thus, PageRank is now regularly used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It is used for systems analysis of road networks, and in biology, chemistry, neuroscience, and physics.
PageRank : In early 2005, Google implemented a new value, "nofollow", for the rel attribute of HTML link and anchor elements, so that website developers and bloggers can make links that Google will not consider for the purposes of PageRank—they are links that no longer constitute a "vote" in the PageRank system. The nofollow relationship was added in an attempt to help combat spamdexing. As an example, people could previously create many message-board posts with links to their website to artificially inflate their PageRank. With the nofollow value, message-board administrators can modify their code to automatically insert "rel='nofollow'" to all hyperlinks in posts, thus preventing PageRank from being affected by those particular posts. This method of avoidance, however, also has various drawbacks, such as reducing the link value of legitimate comments. (See: Spam in blogs#nofollow) In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic had been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.
PageRank : Attention inequality CheiRank Domain authority EigenTrust — a decentralized PageRank algorithm Google bombing Google Hummingbird Google matrix Google Panda Google Penguin Google Search Hilltop algorithm Katz centrality – a 1953 scheme closely related to pagerank Link building Search engine optimization SimRank — a measure of object-to-object similarity based on random-surfer model TrustRank VisualRank - Google's application of PageRank to image-search Webgraph
PageRank : Original PageRank U.S. Patent—Method for node ranking in a linked database Archived 2014-08-29 at the Wayback Machine—Patent number 6,285,999—September 4, 2001 PageRank U.S. Patent—Method for scoring documents in a linked database—Patent number 6,799,176—September 28, 2004 PageRank U.S. Patent—Method for node ranking in a linked database Archived 2019-08-28 at the Wayback Machine—Patent number 7,058,628—June 6, 2006 PageRank U.S. Patent—Scoring documents in a linked database Archived 2018-03-31 at the Wayback Machine—Patent number 7,269,587—September 11, 2007
PageRank : Algorithms by Google Our products and services by Google How Google Finds Your Needle in the Web's Haystack by the American Mathematical Society
Path dependence : Path dependence is a concept in the social sciences, referring to processes where past events or decisions constrain later events or decisions. It can be used to refer to outcomes at a single point in time or to long-run equilibria of a process. Path dependence has been used to describe institutions, technical standards, patterns of economic or social development, organizational behavior, and more. In common usage, the phrase can imply two types of claims. The first is the broad concept that "history matters," often articulated to challenge explanations that pay insufficient attention to historical factors. This claim can be formulated simply as "the future development of an economic system is affected by the path it has traced out in the past" or "particular events in the past can have crucial effects in the future." The second is a more specific claim about how past events or decisions affect future events or decisions in significant or disproportionate ways, through mechanisms such as increasing returns or positive feedback effects.
Path dependence : Path dependence theory was originally developed by economists to explain technology adoption processes and industry evolution. The theoretical ideas have had a strong influence on evolutionary economics. A common expression of the concept is the claim that predictable amplifications of small differences are a disproportionate cause of later circumstances, and, in the "strong" form, that this historical hang-over is inefficient. There are many models and empirical cases where economic processes do not progress steadily toward some pre-determined and unique equilibrium, but rather the nature of any equilibrium achieved depends partly on the process of getting there. Therefore, the outcome of a path-dependent process will often not converge towards a unique equilibrium, but will instead reach one of several equilibria (sometimes known as absorbing states). This dynamic vision of economic evolution is very different from the tradition of neo-classical economics, which in its simplest form assumed that only a single outcome could possibly be reached, regardless of initial conditions or transitory events. With path dependence, both the starting point and 'accidental' events (noise) can have significant effects on the ultimate outcome. In each of the following examples it is possible to identify some random events that disrupted the ongoing course, with irreversible consequences.
Path dependence : A general type of path dependence is a typological vestige. In typography, for example, some customs persist, although the reason for their existence no longer applies; for example, the placement of the period inside a quotation in U.S. spelling. In metal type, pieces of terminal punctuation, such as the comma and period, are comparatively small and delicate (as they must be x-height for proper kerning.) Placing the full-height quotation mark on the outside protected the smaller cast metal sort from damage if the word needed to be moved around within or between lines. This would be done even if the period did not belong to the text being quoted. Evolution is considered by some to be path-dependent and historically contingent: mutations occurring in the past have had long-term effects on current life forms, some of which may no longer be adaptive to current conditions. For instance, there is a controversy about whether the panda's thumb is a leftover trait or not. In the computer and software markets, legacy systems indicate path dependence: customers' needs in the present market often include the ability to read data or run programs from past generations of products. Thus, for instance, a customer may need not merely the best available word processor, but rather the best available word processor that can read Microsoft Word files. Such limitations in compatibility contribute to lock-in, and more subtly, to design compromises for independently developed products, if they attempt to be compatible. Also see embrace, extend and extinguish. In socioeconomic systems, commercial fisheries' harvest rates and conservation consequences are found to be path dependent as predicted by the interaction between slow institutional adaptation, fast ecological dynamics, and diminishing returns. In physics and mathematics, a non-holonomic system is a physical system in which the states depend on the physical paths taken.
Path dependence : Critical juncture theory Imprinting (organizational theory) Innovation butterfly Historicism Network effect Opportunity cost Ratchet effect Tyranny of small decisions
Population process : In applied probability, a population process is a Markov chain in which the state of the chain is analogous to the number of individuals in a population (0, 1, 2, etc.), and changes to the state are analogous to the addition or removal of individuals from the population. Typical population processes include birth–death processes and birth, death and catastrophe processes. Although named by analogy to biological populations from population dynamics, population processes find application in a much wider range of fields than just ecology and other biological sciences. These other applications include telecommunications and queueing theory, chemical kinetics and financial mathematics, and hence the population could be of packets in a computer network, of molecules in a chemical reaction, or even of units in a financial index. Population processes are typically characterized by processes of birth and immigration, and of death, emigration and catastrophe, which correspond to the basic demographic processes and broad environmental effects to which a population is subject. However, population processes are also often equivalent to other processes that may typically be characterized under other paradigms (in the literal sense of "patterns"). Queues, for example, are often characterized by an arrivals process, a service process, and the number of servers. In appropriate circumstances, however, arrivals at a queue are functionally equivalent to births or immigration and the service of waiting "customers" is equivalent to death or emigration.
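A minimal sketch of the simplest such chain, a birth–death process, using Gillespie-style exponential waiting times. The per-individual rate values in the example call are hypothetical, and the function name is illustrative.

```python
import random

def simulate_birth_death(birth, death, x0=1, t_max=50.0, seed=1):
    # Gillespie-style simulation: in state x, events occur at total rate
    # (birth + death) * x; each event adds or removes one individual.
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_max and x > 0:               # state 0 is absorbing (extinction)
        total = (birth + death) * x
        t += rng.expovariate(total)          # exponential waiting time to next event
        if rng.random() < birth / (birth + death):
            x += 1                           # birth
        else:
            x -= 1                           # death
        path.append((t, x))
    return path

path = simulate_birth_death(birth=1.0, death=1.1)
```

The same skeleton covers the other interpretations mentioned above: packet arrivals and departures in a network node, or molecule counts in a chemical reaction, differ only in how the rates are specified.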
Population process : Moran process
Quantum Markov chain : In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical definitions of probability with quantum probability.
Quantum Markov chain : Very roughly, the theory of a quantum Markov chain resembles that of a measure-many automaton, with some important substitutions: the initial state is to be replaced by a density matrix, and the projection operators are to be replaced by positive operator valued measures.
Quantum Markov chain : More precisely, a quantum Markov chain is a pair (E, ρ) with ρ a density matrix and E a quantum channel such that E : B ⊗ B → B is a completely positive trace-preserving map, and B a C*-algebra of bounded operators. The pair must obey the quantum Markov condition, that Tr ρ(b₁ ⊗ b₂) = Tr ρE(b₁, b₂) for all b₁, b₂ ∈ B.
Queueing theory : Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service. Queueing theory has its origins in research by Agner Krarup Erlang, who created models to describe the system of incoming calls at the Copenhagen Telephone Exchange Company. These ideas were seminal to the field of teletraffic engineering and have since seen applications in telecommunications, traffic engineering, computing, project management, and particularly industrial engineering, where they are applied in the design of factories, shops, offices, and hospitals.
Queueing theory : The spelling "queueing" over "queuing" is typically encountered in the academic research field. In fact, one of the flagship journals of the field is Queueing Systems.
Queueing theory : Queueing theory is one of the major areas of study in the discipline of management science. Through management science, businesses are able to solve a variety of problems using different scientific and mathematical approaches. Queueing analysis is the probabilistic analysis of waiting lines, and thus the results, also referred to as the operating characteristics, are probabilistic rather than deterministic. The probability that n customers are in the queueing system, the average number of customers in the queueing system, the average number of customers in the waiting line, the average time spent by a customer in the total queuing system, the average time spent by a customer in the waiting line, and finally the probability that the server is busy or idle are all of the different operating characteristics that these queueing models compute. The overall goal of queueing analysis is to compute these characteristics for the current system and then test several alternatives that could lead to improvement. Computing the operating characteristics for the current system and comparing the values to the characteristics of the alternative systems allows managers to see the pros and cons of each potential option. These systems help in the final decision making process by showing ways to increase savings, reduce waiting time, improve efficiency, etc. The main queueing models that can be used are the single-server waiting line system and the multiple-server waiting line system, which are discussed further below. These models can be further differentiated depending on whether service times are constant or undefined, the queue length is finite, the calling population is finite, etc.
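For the single-server waiting line with Poisson arrivals and exponential service (the M/M/1 model), the operating characteristics listed above have standard closed forms, valid only when the arrival rate is below the service rate. The function name below is illustrative.

```python
def mm1_characteristics(lam, mu):
    # Standard closed forms for the M/M/1 queue; requires lam < mu.
    assert lam < mu, "the queue is unstable otherwise"
    rho = lam / mu                     # utilization = P(server is busy)
    return {
        "rho": rho,
        "P_idle": 1 - rho,             # probability the server is idle
        "L": rho / (1 - rho),          # average number in the system
        "Lq": rho ** 2 / (1 - rho),    # average number in the waiting line
        "W": 1 / (mu - lam),           # average time in the system
        "Wq": rho / (mu - lam),        # average time in the waiting line
    }

c = mm1_characteristics(lam=2.0, mu=3.0)
```

Comparing these values for the current system against candidate alternatives (a faster server, an extra server) is exactly the improvement analysis described above.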
Queueing theory : A queue or queueing node can be thought of as nearly a black box. Jobs (also called customers or requests, depending on the field) arrive to the queue, possibly wait some time, take some time being processed, and then depart from the queue. However, the queueing node is not quite a pure black box since some information is needed about the inside of the queueing node. The queue has one or more servers which can each be paired with an arriving job. When the job is completed and departs, that server will again be free to be paired with another arriving job. An analogy often used is that of the cashier at a supermarket. Customers arrive, are processed by the cashier, and depart. Each cashier processes one customer at a time, and hence this is a queueing node with only one server. A setting where a customer will leave immediately if the cashier is busy when the customer arrives, is referred to as a queue with no buffer (or no waiting area). A setting with a waiting zone for up to n customers is called a queue with a buffer of size n.
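The queueing node just described, including the finite buffer, can be sketched with a short simulation. simulate_queue is a hypothetical helper and the parameter values are example choices; with no buffer and lam = mu, the long-run fraction of dropped arrivals approaches the single-server Erlang loss value ρ/(1 + ρ) = 1/2.

```python
import random

def simulate_queue(lam, mu, buffer, n_arrivals=20000, seed=0):
    # Single-server FIFO queue: Poisson arrivals (rate lam), exponential
    # service (rate mu); an arrival that finds the server busy and the
    # buffer full is dropped, as described above.
    rng = random.Random(seed)
    t = 0.0
    in_node = []                  # departure times of jobs still in the node
    served = dropped = 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)
        in_node = [d for d in in_node if d > t]   # purge jobs that already left
        if len(in_node) > buffer:                 # one in service + full buffer
            dropped += 1
            continue
        start = in_node[-1] if in_node else t     # FIFO: wait for the last job
        in_node.append(start + rng.expovariate(mu))
        served += 1
    return served, dropped

served, dropped = simulate_queue(lam=1.0, mu=1.0, buffer=0)
```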
Queueing theory : In 1909, Agner Krarup Erlang, a Danish engineer who worked for the Copenhagen Telephone Exchange, published the first paper on what would now be called queueing theory. He modeled the number of telephone calls arriving at an exchange by a Poisson process and solved the M/D/1 queue in 1917 and M/D/k queueing model in 1920. In Kendall's notation: M stands for "Markov" or "memoryless", and means arrivals occur according to a Poisson process D stands for "deterministic", and means jobs arriving at the queue require a fixed amount of service k describes the number of servers at the queueing node (k = 1, 2, 3, ...) If the node has more jobs than servers, then jobs will queue and wait for service. The M/G/1 queue was solved by Felix Pollaczek in 1930, a solution later recast in probabilistic terms by Aleksandr Khinchin and now known as the Pollaczek–Khinchine formula. After the 1940s, queueing theory became an area of research interest to mathematicians. In 1953, David George Kendall solved the GI/M/k queue and introduced the modern notation for queues, now known as Kendall's notation. In 1957, Pollaczek studied the GI/G/1 using an integral equation. John Kingman gave a formula for the mean waiting time in a G/G/1 queue, now known as Kingman's formula. Leonard Kleinrock worked on the application of queueing theory to message switching in the early 1960s and packet switching in the early 1970s. His initial contribution to this field was his doctoral thesis at the Massachusetts Institute of Technology in 1962, published in book form in 1964. His theoretical work published in the early 1970s underpinned the use of packet switching in the ARPANET, a forerunner to the Internet. The matrix geometric method and matrix analytic methods have allowed queues with phase-type distributed inter-arrival and service time distributions to be considered. 
Systems with coupled orbits are an important area of queueing theory, with applications to wireless networks and signal processing. Modern-day applications of queueing theory include, among other things, product development, where (material) products have a spatiotemporal existence in the sense that they have a certain volume and a certain duration. Problems such as performance metrics for the M/G/k queue remain open.
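The Pollaczek–Khinchine formula mentioned above gives the mean queue length of an M/G/1 queue from just the first two moments of the service-time distribution. A minimal sketch (the function name is illustrative):

```python
def pollaczek_khinchine_Lq(lam, mean_s, var_s):
    # Mean number waiting in an M/G/1 queue:
    #   Lq = lam^2 * E[S^2] / (2 * (1 - rho)),  rho = lam * E[S].
    rho = lam * mean_s
    assert rho < 1, "the queue is unstable otherwise"
    second_moment = var_s + mean_s ** 2     # E[S^2] = Var(S) + E[S]^2
    return (lam ** 2 * second_moment) / (2 * (1 - rho))

# Deterministic service (the M/D/1 case) halves the mean queue length
# relative to exponential service (the M/M/1 case) at the same utilization.
lq_md1 = pollaczek_khinchine_Lq(lam=2.0, mean_s=1/3, var_s=0.0)
lq_mm1 = pollaczek_khinchine_Lq(lam=2.0, mean_s=1/3, var_s=1/9)
```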
Queueing theory : Various scheduling policies can be used at queueing nodes:
First in, first out: also called first-come, first-served (FCFS), this principle states that customers are served one at a time and that the customer that has been waiting the longest is served first.
Last in, first out: this principle also serves customers one at a time, but the customer with the shortest waiting time will be served first. Also known as a stack.
Processor sharing: service capacity is shared equally between customers.
Priority: customers with high priority are served first. Priority queues can be of two types: non-preemptive (where a job in service cannot be interrupted) and preemptive (where a job in service can be interrupted by a higher-priority job). No work is lost in either model.
Shortest job first: the next job to be served is the one with the smallest size.
Preemptive shortest job first: the next job to be served is the one with the smallest original size.
Shortest remaining processing time: the next job to serve is the one with the smallest remaining processing requirement.
Service facility:
Single server: customers line up and there is only one server
Several parallel servers (single queue): customers line up and there are several servers
Several parallel servers (several queues): there are many counters and customers can decide for which to queue
Unreliable server: server failures occur according to a stochastic (random) process (usually Poisson) and are followed by setup periods during which the server is unavailable. The interrupted customer remains in the service area until the server is fixed.
Customer waiting behavior:
Balking: customers decide not to join the queue if it is too long
Jockeying: customers switch between queues if they think they will get served faster by doing so
Reneging: customers leave the queue if they have waited too long for service
Arriving customers not served (either due to the queue having no buffer, or due to balking or reneging by the customer) are also known as dropouts. The average rate of dropouts is a significant parameter describing a queue.
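A discrete-event simulation makes the FCFS discipline concrete. The sketch below (a minimal M/M/1 single-server queue; the rates and sample size are illustrative) tracks when the server frees up and measures how long each customer waits:

```python
import random

def simulate_mm1_fcfs(lam, mu, n_customers, seed=0):
    """Simulate an M/M/1 queue under first-come, first-served.

    Returns the mean time customers spend waiting (excluding service).
    """
    rng = random.Random(seed)
    arrival = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)          # Poisson arrivals
        start = max(arrival, server_free_at)     # FCFS: wait until the server is free
        total_wait += start - arrival
        server_free_at = start + rng.expovariate(mu)  # exponential service times
    return total_wait / n_customers

# For a stable M/M/1 queue, theory predicts a mean wait of lam / (mu * (mu - lam)),
# i.e. 1.0 for the parameters below; the simulation should come close.
mean_wait = simulate_mm1_fcfs(lam=0.5, mu=1.0, n_customers=50_000)
```

Extending the loop with a balking rule (skip the customer if the current backlog exceeds a threshold) would let one estimate the dropout rate discussed above.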
Queueing theory : Queue networks are systems in which multiple queues are connected by customer routing. When a customer is serviced at one node, it can join another node and queue for service, or leave the network. For networks of m nodes, the state of the system can be described by an m-dimensional vector (x1, x2, ..., xm) where xi represents the number of customers at each node. The simplest non-trivial networks of queues are called tandem queues. The first significant results in this area were Jackson networks, for which an efficient product-form stationary distribution exists and for which mean value analysis allows average metrics such as throughput and sojourn times to be computed. If the total number of customers in the network remains constant, the network is called a closed network and has been shown to also have a product-form stationary distribution by the Gordon–Newell theorem. This result was extended to the BCMP network, where a network with very general service times, regimes, and customer routing is shown to also exhibit a product-form stationary distribution. The normalizing constant can be calculated with Buzen's algorithm, proposed in 1973. Networks of customers have also been investigated, such as Kelly networks, where customers of different classes experience different priority levels at different service nodes. Another type of network is the G-network, first proposed by Erol Gelenbe in 1993: these networks do not assume exponential time distributions like the classic Jackson network.
Queueing theory : Gross, Donald; Carl M. Harris (1998). Fundamentals of Queueing Theory. Wiley. ISBN 978-0-471-32812-4. Online Zukerman, Moshe (2013). Introduction to Queueing Theory and Stochastic Teletraffic Models (PDF). arXiv:1307.2968. Deitel, Harvey M. (1984) [1982]. An introduction to operating systems (revisited first ed.). Addison-Wesley. p. 673. ISBN 978-0-201-14502-1. chap.15, pp. 380–412 Gelenbe, Erol; Isi Mitrani (2010). Analysis and Synthesis of Computer Systems. World Scientific, 2nd ed. ISBN 978-1-908978-42-4. Newell, Gordon F. (1 June 1971). Applications of Queueing Theory. Chapman and Hall. Leonard Kleinrock, Information Flow in Large Communication Nets, (MIT, Cambridge, May 31, 1961) Proposal for a Ph.D. Thesis Leonard Kleinrock. Information Flow in Large Communication Nets (RLE Quarterly Progress Report, July 1961) Leonard Kleinrock. Communication Nets: Stochastic Message Flow and Delay (McGraw-Hill, New York, 1964) Kleinrock, Leonard (2 January 1975). Queueing Systems: Volume I – Theory. New York: Wiley Interscience. pp. 417. ISBN 978-0-471-49110-1. Kleinrock, Leonard (22 April 1976). Queueing Systems: Volume II – Computer Applications. New York: Wiley Interscience. pp. 576. ISBN 978-0-471-49111-8. Lazowska, Edward D.; John Zahorjan; G. Scott Graham; Kenneth C. Sevcik (1984). Quantitative System Performance: Computer System Analysis Using Queueing Network Models. Prentice-Hall, Inc. ISBN 978-0-13-746975-8. Jon Kleinberg; Éva Tardos (30 June 2013). Algorithm Design. Pearson. ISBN 978-1-292-02394-6.
Queueing theory : Teknomo's Queueing theory tutorial and calculators Virtamo's Queueing Theory Course Myron Hlynka's Queueing Theory Page LINE: a general-purpose engine to solve queueing models
Random surfing model : The random surfing model is a graph model which describes the probability of a random user visiting a web page. The model attempts to predict the chance that a random internet surfer will arrive at a page either by clicking a link or by accessing the site directly, for example by directly entering the website's URL in the address bar. Accordingly, the model assumes that a user surfing the internet will eventually stop following links and switch to another site completely. The model is similar to a Markov chain, where the chain's states are the web pages the user lands on and the transitions are equally probable links between these pages.
Random surfing model : A user navigates the internet in two primary ways: the user may access a site directly by entering the site's URL or clicking a bookmark, or the user may use a series of hyperlinks to get to the desired page. The random surfer model assumes that the link which the user selects next is picked at random. The model also assumes that the number of successive links is not infinite – the user will at some point lose interest and leave the current site for a completely new site. The random surfer model is presented as a series of nodes which indicate web pages that can be accessed at random by users. A new node is added to the graph when a new website is published. The movement about the graph's nodes is modeled by choosing a start node at random, then performing a short random traversal of the nodes, or random walk. This traversal is analogous to a user accessing a website, then following hyperlinks t times, until the user either exits the page or accesses another site completely. Connections to other nodes in this graph are formed when outbound links are placed on the page.
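The traversal described above can be sketched directly. In this toy version (the four-page graph and the stopping probability are invented for illustration), a session starts at a random page, follows uniformly chosen out-links, and stops with fixed probability at each step:

```python
import random

def random_surf(graph, stop_prob, rng):
    """One surfing session: start at a random page, follow random out-links,
    and stop (i.e. leave for another site) with probability stop_prob each step."""
    page = rng.choice(sorted(graph))
    visited = [page]
    while graph[page] and rng.random() > stop_prob:
        page = rng.choice(graph[page])  # next link picked uniformly at random
        visited.append(page)
    return visited

# Hypothetical 4-page web graph as adjacency lists of out-links.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["A"]}
rng = random.Random(42)
session = random_surf(web, stop_prob=0.15, rng=rng)
```

Each returned list is one realization of the short random walk the model describes; averaging visit counts over many sessions estimates how often each page is reached.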
Random surfing model : In the random surfing model, webgraphs are presented as a sequence of directed graphs G_t, t = 1, 2, …, such that the graph G_t has t vertices and t edges. The process of defining graphs is parameterized with a probability p; thus we let q = 1 − p. Nodes of the model arrive one at a time, forming k connections to the existing graph G_t. In some models, connections represent directed edges, and in others, connections represent undirected edges. Models start with a single node v_0 that has k self-loops. v_t denotes the vertex added in the t-th step, and n denotes the total number of vertices.
Random surfing model : There are some caveats to the standard random surfer model, one of which is that the model ignores the content of the sites which users select – since the model assumes links are selected at random. Because users tend to have a goal in mind when surfing the internet, the content of the linked sites is a determining factor of whether or not the user will click a link.
Random surfing model : The normalized eigenvector centrality combined with random surfer model's assumption of random jumps created the foundation of Google's PageRank algorithm.
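A short power-iteration sketch shows how the random-jump assumption enters PageRank. This is an illustrative implementation only (the damping factor 0.85 is the commonly cited textbook value, and the graph is the same hypothetical one as above), not Google's production algorithm:

```python
def pagerank(graph, damping=0.85, iters=100):
    """Power iteration for PageRank on an adjacency-list graph.

    With probability `damping` the surfer follows a random out-link;
    otherwise they jump to a page chosen uniformly at random.
    """
    n = len(graph)
    rank = {page: 1.0 / n for page in graph}
    for _ in range(iters):
        new = {page: (1.0 - damping) / n for page in graph}
        for page, links in graph.items():
            if links:
                share = damping * rank[page] / len(links)
                for target in links:
                    new[target] += share
            else:  # dangling page: spread its rank uniformly
                for target in new:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["A"]}
ranks = pagerank(web)
```

Page D, which no page links to, receives only the teleportation mass and therefore ends up with the smallest rank.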
Random surfing model : Avrim Blum PageRank Webgraph
Random surfing model : Case study on random web surfers Data Mining and Analysis: Fundamental Concepts and Algorithms is freely available to download for personal use here Microsoft research on PageRank and the Random Surfer Model Paper on how Google web search implements PageRank to find relevant search results
Snakes and ladders : Snakes and ladders is a board game for two or more players regarded today as a worldwide classic. The game originated in ancient India as Moksha Patam, and was brought to the United Kingdom in the 1890s. It is played on a game board with numbered, gridded squares. A number of "ladders" and "snakes" are pictured on the board, each connecting two specific board squares. The object of the game is to navigate one's game piece, according to die rolls, from the start (bottom square) to the finish (top square), helped by climbing ladders but hindered by falling down snakes. The game is a simple race based on sheer luck, and it is popular with young children. The historic version had its roots in morality lessons, in which a player's progression up the board represented a life journey complicated by virtues (ladders) and vices (snakes). The game is also sold under other names, such as the morality-themed Chutes and Ladders, which was published by the Milton Bradley Company starting in 1943.
Snakes and ladders : The size of the grid varies, but is most commonly 8×8, 10×10 or 12×12 squares. Boards have snakes and ladders starting and ending on different squares; both factors affect the duration of play. Each player is represented by a distinct game piece token. A single die is rolled to determine random movement of a player's token in the traditional form of play; two dice may be used for a shorter game.
Snakes and ladders : Snakes and ladders originated as part of a family of Indian dice board games that included gyan chauper and pachisi (known in English as Ludo and Parcheesi). It made its way to England and was sold as "Snakes and Ladders", then the basic concept was introduced in the United States as Chutes and Ladders. The game was popular in ancient India by the name Moksha Patam. It was also associated with traditional Hindu philosophy contrasting karma and kama, or destiny and desire. It emphasized destiny, as opposed to games such as pachisi, which focused on life as a mixture of skill (free will) and luck. The underlying ideals of the game inspired a version introduced in Victorian England in 1892. The game has also been interpreted and used as a tool for teaching the effects of good deeds versus bad. The board was covered with symbolic images used in ancient India, the top featuring gods, angels, and majestic beings, while the rest of the board was covered with pictures of animals, flowers and people. The ladders represented virtues such as generosity, faith, and humility, while the snakes represented vices such as lust, anger, murder, and theft. The morality lesson of the game was that a person can attain liberation (Moksha) through doing good, whereas by doing evil one will be reborn as lower forms of life. The number of ladders was fewer than the number of snakes as a reminder that a path of good is much more difficult to tread than a path of sin. Presumably, reaching the last square (number 100) represented the attainment of Moksha (spiritual liberation). Gyan chauper, or jnan chauper, (game of wisdom), the version associated with the Jain philosophy, encompassed concepts such as karma and Moksha. A version popular in the Muslim world is known as shatranj al-'urafa and exists in various versions in India, Iran, and Turkey.
In this version, based on sufi philosophy, the game represents the dervish's quest to leave behind the trappings of worldly life and achieve union with God. When the game was brought to England, the Indian virtues and vices were replaced by English ones in hopes of better reflecting Victorian doctrines of morality. Squares of Fulfilment, Grace and Success were accessible by ladders of Thrift, Penitence and Industry, and snakes of Indulgence, Disobedience and Indolence caused one to end up in Illness, Disgrace and Poverty. While the Indian version of the game had snakes outnumbering ladders, the English counterpart was more forgiving as it contained equal numbers of each. The association of Britain's snakes and ladders with India and gyan chauper began with colonial families returning from India during the British Raj. The décor and art of the early English boards of the 20th century reflect this relationship. By the 1940s very few pictorial references to Indian culture remained, due to the economic demands of the war and the collapse of British rule in India. Although the game's sense of morality has lasted through the game's generations, the physical allusions to religious and philosophical thought in the game as presented in Indian models appear to have all but faded. There has even been evidence of a possible Buddhist version of the game existing in India during the Pala-Sena time period. In Andhra Pradesh, this game is popularly called Vaikunṭhapāḷi or Paramapada Sopāna Paṭamu (the ladder to salvation) in Telugu. In Hindi, this game is called Saanp aur Seedhi, Saanp Seedhi and Mokshapat. In Tamil Nadu the game is called Parama padam and is often played by devotees of Hindu god Vishnu during the Vaikuntha Ekadashi festival in order to stay awake during the night. In Bengali-speaking regions, West Bengal in India and Bangladesh, it is known as Shap Shiri or Shapludu respectively.
In the original game the squares of virtue are: Faith (12), Reliability (51), Generosity (57), Knowledge (76), and Asceticism (78). The squares of vice or evil are: Disobedience (41), Vanity (44), Vulgarity (49), Theft (52), Lying (58), Drunkenness (62), Debt (69), Murder (73), Rage (84), Greed (92), Pride (95), and Lust (99).
Snakes and ladders : Each player starts with a token on the starting square (usually the "1" grid square in the bottom left corner, or simply, at the edge of the board next to the "1" grid square). Players take turns rolling a single die to move their token by the number of squares indicated by the die rolled. Tokens follow a fixed route marked on the gameboard which usually follows a boustrophedon (ox-plow) track from the bottom to the top of the playing area, passing once through every square. If, on completion of a move, a player's token lands on the lower-numbered end of a "ladder", the player moves the token up to the ladder's higher-numbered square. If the player lands on the higher-numbered square of a "snake" (or chute), the player moves the token down to the snake's lower-numbered square. If a 6 is rolled, the player, after moving, immediately rolls again for another turn; otherwise play passes to the next player in turn. The player who is first to bring their token to the last square of the track is the winner.
Snakes and ladders : The most widely known edition of snakes and ladders in the United States is Chutes and Ladders, released by Milton Bradley in 1943. The playground setting replaced the snakes, which were thought to be disliked by children at the time. It is played on a 10x10 board, and players advance their pieces according to a spinner rather than a die. The theme of the board design is playground equipment, showing children climbing ladders and descending chutes. The artwork on the board teaches morality lessons: squares on the bottom of the ladders show a child doing a good or sensible deed, at the top of the ladder there is an image of the child enjoying the reward; squares at the top of the chutes show children engaging in mischievous or foolish behavior, on the bottom of the chute the image shows the children suffering the consequences. Black children were depicted in the Milton Bradley game for the first time in 1974. There have been many pop culture versions of the game, with graphics featuring such children's television characters as Dora the Explorer and Sesame Street. It has been marketed as "The Classic Up and Down Game for Preschoolers". In 1999, Hasbro released Chutes and Ladders for PCs. In Canada the game has been traditionally sold as "Snakes and Ladders" and produced by the Canada Games Company. Several Canada-specific versions have been produced over the years, including a version with toboggan runs instead of snakes. An early British version of the game depicts the path of a young boy and girl making their way through a cartoon railroad and train system. During the early 1990s in South Africa, Chutes and Ladders games made from cardboard were distributed on the back of egg boxes as part of a promotion. 
Even though the concept of major virtues against vices and related Eastern spiritualism is not much emphasized in modern incarnations of the game, the central mechanism of snakes and ladders makes it an effective tool for teaching young children about various subjects. In two separate Indonesian schools, the use of the game as a medium in English lessons for fifth graders not only improved the students' vocabulary but also stimulated their interest and excitement about the learning process. Researchers from Carnegie Mellon University found that pre-schoolers from low-income backgrounds who played an hour of numerical board games like snakes and ladders matched the performance of their middle-class counterparts by showing improvements in counting and recognizing number shapes. An eco-inspired version of the game was also used to teach students and teachers about climate change and environmental sustainability. Meyer et al. (2020) explored a free and adaptive game project based on Chutes and Ladders, the Monza project. On the one hand it draws on systemic game pedagogy: the players and the educators develop the game from the ground up and set the rules. The project's second element is mathematization: over several years, teachers and learners abstract the game experiences into the language of mathematics.
Snakes and ladders : Any version of snakes and ladders can be represented exactly as an absorbing Markov chain, since from any square the odds of moving to any other square are fixed and independent of any previous game history. The Milton Bradley version of Chutes and Ladders has 100 squares, with 19 chutes and ladders. A player will need an average of 39.2 spins to move from the starting point, which is off the board, to square 100. A two-player game is expected to end in 47.76 moves with a 50.9% chance of winning for the first player. These calculations are based on a variant where throwing a six does not lead to an additional roll; and where the player does not need to roll the exact number to reach square 100; if they overshoot, the game has still ended.
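The absorbing-chain analysis can be reproduced on a small scale. The sketch below uses a hypothetical 10-square board with one ladder and one snake (not the Milton Bradley layout) and solves the expected-rolls equations t[s] = 1 + average of t[next] by value iteration, which converges because the chain is absorbing:

```python
def expected_rolls(board_size, jumps, die=6, tol=1e-12):
    """Expected number of die rolls to finish a snakes-and-ladders board.

    `jumps` maps the foot of a ladder (or head of a snake) to its destination.
    Overshooting the last square still ends the game, as in the variant above.
    """
    def step(square, roll):
        nxt = square + roll
        if nxt >= board_size:
            return board_size  # absorbed: game over
        return jumps.get(nxt, nxt)

    t = [0.0] * (board_size + 1)  # t[board_size] stays 0 (absorbing state)
    while True:
        delta = 0.0
        for s in range(board_size - 1, -1, -1):
            new = 1.0 + sum(t[step(s, r)] for r in range(1, die + 1)) / die
            delta = max(delta, abs(new - t[s]))
            t[s] = new
        if delta < tol:
            return t[0]

# Hypothetical board: ladder 2 -> 8, snake 7 -> 3.
e = expected_rolls(10, {2: 8, 7: 3})
```

As a sanity check, adding only a ladder shortens the expected game relative to a plain board, while adding only a snake lengthens it.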
Snakes and ladders : The phrase "back to square one" originated in the game of snakes and ladders, or at least was influenced by it – the earliest attestation of the phrase refers to the game: "Withal he has the problem of maintaining the interest of the reader who is always being sent back to square one in a sort of intellectual game of snakes and ladders." Snakes & Lattes is a board game café chain headquartered in Toronto, Canada, named after snakes and ladders.
Snakes and ladders : Bibliography Augustyn, Frederick J (2004). Dictionary of toys and games in American popular culture. Haworth Press. ISBN 0-7890-1504-8. Parlett, David (1999). "Snakes & Ladders". The Oxford History of Board Games. Oxford University Press. pp. 91–94. ISBN 0-19-212998-8. Tatz, Mark; Kent, Jody (1977). Rebirth: The Tibetan Game of Liberation. Anchor Press. ISBN 0-385-11421-4.
Snakes and ladders : Berlekamp, Elwyn R; Conway, John H; Guy, Richard K (1982). Winning Ways for Your Mathematical Plays. Academic Press. ISBN 0-12-091150-7. Shimkhada, Deepak (1983), "A Preliminary Study of the Game of Karma in India, Nepal, and Tibet" in Artibus Asiae 44:4, pp. 308–322. Topsfield, Andrew (1985), "The Indian Game of Snakes and Ladders" in Artibus Asiae 46:3, pp. 203–226. Topsfield, Andrew (2006), "Snakes and Ladders in India: Some Further Discoveries" in Artibus Asiae 66:1, pp. 143–179.
Snakes and ladders : Media related to Snakes and ladders at Wikimedia Commons
Stochastic cellular automaton : Stochastic cellular automata or probabilistic cellular automata (PCA) or random cellular automata or locally interacting Markov chains are an important extension of cellular automata. Cellular automata are a discrete-time dynamical system of interacting entities whose state is discrete. The state of the collection of entities is updated at each discrete time step according to some simple homogeneous rule. All entities' states are updated in parallel or synchronously. Stochastic cellular automata are CA whose updating rule is stochastic, which means the new entities' states are chosen according to some probability distributions. A stochastic cellular automaton is thus a discrete-time random dynamical system. From the spatial interaction between the entities, despite the simplicity of the updating rules, complex behaviour may emerge, like self-organization. As a mathematical object, it may be considered in the framework of stochastic processes as an interacting particle system in discrete time. See the references for a more detailed introduction.
Stochastic cellular automaton : As a discrete-time Markov process, PCA are defined on a product space E = ∏_{k∈G} S_k (a Cartesian product), where G is a finite or infinite graph, like ℤ, and where S_k is a finite space, for instance S_k = {−1, +1} or S_k = {0, 1}. The transition probability has a product form P(dσ | η) = ⊗_{k∈G} p_k(dσ_k | η), where η ∈ E and p_k(dσ_k | η) is a probability distribution on S_k. In general some locality is required: p_k(dσ_k | η) = p_k(dσ_k | η_{V_k}), where η_{V_k} = (η_j)_{j∈V_k}, with V_k a finite neighbourhood of k. See the references for a more detailed introduction from the probability-theoretic point of view.
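A minimal one-dimensional PCA with S_k = {0, 1} can be simulated in a few lines. The local rule and noise level below are invented for illustration: each cell looks at a finite neighbourhood (its left neighbour and itself, with periodic boundary) and draws its new state independently, which gives exactly the product-form transition probability described above:

```python
import random

def pca_step(config, p, rng):
    """One synchronous update of a 1-D probabilistic cellular automaton.

    Each cell becomes 1 with probability p if its left neighbour or the cell
    itself is 1, and 0 otherwise -- a noisy monotone local rule; all cells
    are updated in parallel and independently given the current configuration.
    """
    n = len(config)
    return [
        1 if (config[i - 1] or config[i]) and rng.random() < p else 0
        for i in range(n)
    ]

rng = random.Random(7)
state = [rng.randint(0, 1) for _ in range(50)]
for _ in range(20):
    state = pca_step(state, p=0.8, rng=rng)
```

With p = 1 the rule is a deterministic cellular automaton; lowering p injects the noise that makes the system a locally interacting Markov chain.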
Stochastic cellular automaton : Almeida, R. M.; Macau, E. E. N. (2010), "Stochastic cellular automata model for wildland fire spread dynamics", 9th Brazilian Conference on Dynamics, Control and their Applications, June 7–11, 2010, vol. 285, p. 012038, doi:10.1088/1742-6596/285/1/012038. Clarke, K. C.; Hoppen, S. (1997), "A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area" (PDF), Environment and Planning B: Planning and Design, 24 (2): 247–261, Bibcode:1997EnPlB..24..247C, doi:10.1068/b240247, S2CID 40847078. Mahajan, Meena Bhaskar (1992), Studies in language classes defined by different types of time-varying cellular automata, Ph.D. dissertation, Indian Institute of Technology Madras. Nishio, Hidenosuke; Kobuchi, Youichi (1975), "Fault tolerant cellular spaces", Journal of Computer and System Sciences, 11 (2): 150–170, doi:10.1016/s0022-0000(75)80065-1, MR 0389442. Smith, Alvy Ray III (1972), "Real-time language recognition by one-dimensional cellular automata", Journal of Computer and System Sciences, 6 (3): 233–253, doi:10.1016/S0022-0000(72)80004-7, MR 0309383. Louis, P.-Y.; Nardi, F. R., eds. (2018). Probabilistic Cellular Automata. Emergence, Complexity and Computation. Vol. 27. Springer. doi:10.1007/978-3-319-65558-1. hdl:2158/1090564. ISBN 9783319655581. Agapie, A.; Andreica, A.; Giuclea, M. (2014), "Probabilistic Cellular Automata", Journal of Computational Biology, 21 (9): 699–708, doi:10.1089/cmb.2014.0074, PMC 4148062, PMID 24999557
Stochastic matrix : In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability. It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix. The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics. There are several different definitions and types of stochastic matrices:
A right stochastic matrix is a square matrix of nonnegative real numbers, with each row summing to 1 (so it is also called a row stochastic matrix).
A left stochastic matrix is a square matrix of nonnegative real numbers, with each column summing to 1 (so it is also called a column stochastic matrix).
A doubly stochastic matrix is a square matrix of nonnegative real numbers with each row and column summing to 1.
A substochastic matrix is a real square matrix whose row sums are all ≤ 1.
In the same vein, one may define a probability vector as a vector whose elements are nonnegative real numbers which sum to 1. Thus, each row of a right stochastic matrix (or column of a left stochastic matrix) is a probability vector. Right stochastic matrices act upon row vectors of probabilities by multiplication from the right (hence their name) and the matrix entry in the i-th row and j-th column is the probability of transition from state i to state j. Left stochastic matrices act upon column vectors of probabilities by multiplication from the left (hence their name) and the matrix entry in the i-th row and j-th column is the probability of transition from state j to state i. This article uses the right/row stochastic matrix convention.
Stochastic matrix : The stochastic matrix was developed alongside the Markov chain by Andrey Markov, a Russian mathematician and professor at St. Petersburg University who first published on the topic in 1906. His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields. Stochastic matrices were further developed by scholars such as Andrey Kolmogorov, who expanded their possibilities by allowing for continuous-time Markov processes. By the 1950s, articles using stochastic matrices had appeared in the fields of econometrics and circuit theory. In the 1960s, stochastic matrices appeared in an even wider variety of scientific works, from behavioral science to geology to residential planning. In addition, much mathematical work was also done through these decades to improve the range of uses and functionality of the stochastic matrix and Markovian processes more generally. From the 1970s to present, stochastic matrices have found use in almost every field that requires formal analysis, from structural science to medical diagnosis to personnel management. In addition, stochastic matrices have found wide use in land change modeling, usually under the term Markov matrix.
Stochastic matrix : A stochastic matrix describes a Markov chain Xt over a finite state space S with cardinality α. If the probability of moving from i to j in one time step is Pr(j|i) = Pi,j, the stochastic matrix P is given by using Pi,j as the i-th row and j-th column element, e.g.,
P = \begin{bmatrix} P_{1,1} & P_{1,2} & \dots & P_{1,j} & \dots & P_{1,\alpha} \\ P_{2,1} & P_{2,2} & \dots & P_{2,j} & \dots & P_{2,\alpha} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ P_{i,1} & P_{i,2} & \dots & P_{i,j} & \dots & P_{i,\alpha} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ P_{\alpha,1} & P_{\alpha,2} & \dots & P_{\alpha,j} & \dots & P_{\alpha,\alpha} \end{bmatrix}.
Since the total transition probability from a state i to all other states must be 1, \sum_{j=1}^{\alpha} P_{i,j} = 1; thus this matrix is a right stochastic matrix. The above elementwise sum across each row i of P may be more concisely written as P1 = 1, where 1 is the α-dimensional column vector of all ones. Using this, it can be seen that the product of two right stochastic matrices P′ and P′′ is also right stochastic: P′P′′1 = P′(P′′1) = P′1 = 1. In general, the k-th power Pk of a right stochastic matrix P is also right stochastic. The probability of transitioning from i to j in two steps is then given by the (i, j)-th element of the square of P, (P^2)_{i,j}. In general, the probability transition of going from any state to another state in a finite Markov chain given by the matrix P in k steps is given by Pk. An initial probability distribution of states, specifying where the system might be initially and with what probabilities, is given as a row vector. A stationary probability vector π is defined as a distribution, written as a row vector, that does not change under application of the transition matrix; that is, it is defined as a probability distribution on the state space which is also a row eigenvector of the probability matrix, associated with eigenvalue 1:
\pi P = \pi.
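The defining equation πP = π can be solved numerically by repeated multiplication. A small sketch (the 3-state matrix is illustrative; power iteration converges here because the chain is irreducible and aperiodic):

```python
def stationary_distribution(P, iters=500):
    """Approximate the stationary row vector pi with pi P = pi by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        # Row vector times right stochastic matrix: (pi P)_j = sum_i pi_i P_{i,j}.
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# A 3-state right stochastic matrix: every row sums to 1.
P = [
    [0.9, 0.1, 0.0],
    [0.2, 0.6, 0.2],
    [0.0, 0.5, 0.5],
]
pi = stationary_distribution(P)
```

Each multiplication preserves the total probability mass, so the iterates remain probability vectors and converge to the eigenvector for eigenvalue 1.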
It can be shown that the spectral radius of any stochastic matrix is one. By the Gershgorin circle theorem, all of the eigenvalues of a stochastic matrix have absolute values less than or equal to one. More precisely, the eigenvalues of n-by-n stochastic matrices are restricted to lie within a subset of the complex unit disk, known as Karpelevič regions. This result was originally obtained by Fridrikh Karpelevich, following a question originally posed by Kolmogorov and partially addressed by Nikolay Dmitriyev and Eugene Dynkin. Additionally, every right stochastic matrix has an "obvious" column eigenvector associated to the eigenvalue 1: the vector 1 used above, whose coordinates are all equal to 1. As left and right eigenvalues of a square matrix are the same, every stochastic matrix has, at least, a row eigenvector associated to the eigenvalue 1 and the largest absolute value of all its eigenvalues is also 1. Finally, the Brouwer Fixed Point Theorem (applied to the compact convex set of all probability distributions on the finite state space) implies that there is some left eigenvector which is also a stationary probability vector. On the other hand, the Perron–Frobenius theorem also ensures that every irreducible stochastic matrix has such a stationary vector, and that the largest absolute value of an eigenvalue is always 1. However, this theorem cannot be applied directly to such matrices because they need not be irreducible. In general, there may be several such vectors. However, for a matrix with strictly positive entries (or, more generally, for an irreducible aperiodic stochastic matrix), this vector is unique and can be computed by observing that for any i we have the following limit,
\lim_{k \to \infty} (P^k)_{i,j} = \pi_j,
where πj is the j-th element of the row vector π. Among other things, this says that the long-term probability of being in a state j is independent of the initial state i.
That both of these computations give the same stationary vector is a form of an ergodic theorem, which is generally true in a wide variety of dissipative dynamical systems: the system evolves, over time, to a stationary state. Intuitively, a stochastic matrix represents a Markov chain; the application of the stochastic matrix to a probability distribution redistributes the probability mass of the original distribution while preserving its total mass. If this process is applied repeatedly, the distribution converges to a stationary distribution for the Markov chain. Stochastic matrices and their products form a category, which is both a subcategory of the category of matrices and of the category of Markov kernels.
Stochastic matrix : Suppose there is a timer and a row of five adjacent boxes. At time zero, a cat is in the first box, and a mouse is in the fifth box. The cat and the mouse both jump to a random adjacent box when the timer advances. For example, if the cat is in the second box and the mouse is in the fourth, the probability that the cat will be in the first box and the mouse in the fifth after the timer advances is one fourth. If the cat is in the first box and the mouse is in the fifth, the probability that the cat will be in box two and the mouse will be in box four after the timer advances is one. The cat eats the mouse if both end up in the same box, at which time the game ends. Let the random variable K be the time the mouse stays in the game. The Markov chain that represents this game contains the following five states specified by the combination of positions (cat,mouse). Note that while a naive enumeration of states would list 25 states, many are impossible either because the mouse can never have a lower index than the cat (as that would mean the mouse occupied the cat's box and survived to move past it), or because the sum of the two indices will always have even parity. In addition, the 3 possible states that lead to the mouse's death are combined into one: State 1: (1,3) State 2: (1,5) State 3: (2,4) State 4: (3,5) State 5: game over: (2,2), (3,3) & (4,4). We use a stochastic matrix, P (below), to represent the transition probabilities of this system (rows and columns in this matrix are indexed by the possible states listed above, with the pre-transition state as the row and post-transition state as the column). For instance, starting from state 1 – 1st row – it is impossible for the system to stay in this state, so P_{1,1} = 0; the system also cannot transition to state 2 – because the cat would have stayed in the same box – so P_{1,2} = 0, and by a similar argument for the mouse, P_{1,4} = 0.
Transitions to states 3 or 5 are allowed, and thus P₁₃, P₁₅ ≠ 0.

P =
⎡  0     0    1/2    0    1/2 ⎤
⎢  0     0     1     0     0  ⎥
⎢ 1/4   1/4    0    1/4   1/4 ⎥
⎢  0     0    1/2    0    1/2 ⎥
⎣  0     0     0     0     1  ⎦
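The transition matrix above can be checked numerically, and the expected value of K (the mouse's expected survival time) computed from the transient part of the chain via the fundamental matrix. A minimal sketch using NumPy, starting from state 2, the initial (1,5) configuration:

```python
import numpy as np

# Transition matrix of the cat-and-mouse game (states 1..5, row = pre-transition).
P = np.array([
    [0,    0,    0.5,  0,    0.5 ],
    [0,    0,    1,    0,    0   ],
    [0.25, 0.25, 0,    0.25, 0.25],
    [0,    0,    0.5,  0,    0.5 ],
    [0,    0,    0,    0,    1   ],
])

# Q restricts P to the transient states 1..4; the expected number of steps
# before absorption (game over) is t = (I - Q)^(-1) 1, the row sums of the
# fundamental matrix.
Q = P[:4, :4]
t = np.linalg.solve(np.eye(4) - Q, np.ones(4))
print(t[1])  # expected survival time from state 2 = (1,5): 4.5 steps
```

Solving the linear system by hand gives t = (2.75, 4.5, 3.5, 2.75); by symmetry of the game, states 1 and 4 have equal expected survival times.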
Stochastic matrix : Density matrix Markov kernel, the equivalent of a stochastic matrix over a continuous state space Matrix difference equation Models of DNA evolution Muirhead's inequality Probabilistic automaton Transition rate matrix, used to generalize the stochastic matrix to continuous time == References ==
Switching Kalman filter : The switching Kalman filtering (SKF) method is a variant of the Kalman filter. In its generalised form, it is often attributed to Kevin P. Murphy, although related switching state-space models had been in use earlier.
Switching Kalman filter : Applications of the switching Kalman filter include brain–computer interfaces and neural decoding, real-time decoding for continuous neural-prosthetic control, and sensorimotor learning in humans. It also has applications in econometrics, signal processing, tracking, and computer vision. It is an alternative to the Kalman filter when the system's state has a discrete component, for example when an industrial plant has "multiple discrete modes of behaviour, each of which having a linear (Gaussian) dynamics". The additional error incurred by using a plain Kalman filter instead of a switching Kalman filter can be quantified in terms of the switching system's parameters.
Switching Kalman filter : There are several variants of SKF discussed in.
Variable-order Bayesian network : Variable-order Bayesian network (VOBN) models provide an important extension of both the Bayesian network models and the variable-order Markov models. VOBN models are used in machine learning in general and have shown great potential in bioinformatics applications. These models extend the widely used position weight matrix (PWM) models, Markov models, and Bayesian network (BN) models. In contrast to the BN models, where each random variable depends on a fixed subset of random variables, in VOBN models these subsets may vary based on the specific realization of observed variables. The observed realizations are often called the context and, hence, VOBN models are also known as context-specific Bayesian networks. The flexibility in the definition of conditioning subsets of variables turns out to be a real advantage in classification and analysis applications, as the statistical dependencies between random variables in a sequence of variables (not necessarily adjacent) may be taken into account efficiently, and in a position-specific and context-specific manner.
Variable-order Bayesian network : Markov chain Examples of Markov chains Variable order Markov models Markov process Markov chain Monte Carlo Semi-Markov process Artificial intelligence
Variable-order Bayesian network : VOMBAT: https://www2.informatik.uni-halle.de:8443/VOMBAT/
Variable-order Markov model : In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well known Markov chain models. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization. This realization sequence is often called the context; therefore the VOM models are also called context trees. VOM models are nicely rendered by colorized probabilistic suffix trees (PST). The flexibility in the number of conditioning random variables turns out to be of real advantage for many applications, such as statistical analysis, classification and prediction.
Variable-order Markov model : Consider for example a sequence of random variables, each of which takes a value from the ternary alphabet {a, b, c}. Specifically, consider the string constructed from infinite concatenations of the sub-string aaabc: aaabcaaabcaaabcaaabc…aaabc. The VOM model of maximal order 2 can approximate the above string using only the following five conditional probability components: Pr(a | aa) = 0.5, Pr(b | aa) = 0.5, Pr(c | b) = 1.0, Pr(a | c) = 1.0, Pr(a | ca) = 1.0. In this example, Pr(c | ab) = Pr(c | b) = 1.0; therefore, the shorter context b is sufficient to determine the next character. Similarly, the VOM model of maximal order 3 can generate the string exactly using only five conditional probability components, which are all equal to 1.0. To construct the Markov chain of order 1 for the next character in that string, one must estimate the following 9 conditional probability components: Pr(a | a), Pr(a | b), Pr(a | c), Pr(b | a), Pr(b | b), Pr(b | c), Pr(c | a), Pr(c | b), Pr(c | c). To construct the Markov chain of order 2 for the next character, one must estimate 27 conditional probability components: Pr(a | aa), Pr(a | ab), …, Pr(c | cc). And to construct the Markov chain of order three for the next character one must estimate the following 81 conditional probability components: Pr(a | aaa), Pr(a | aab), …, Pr(c | ccc). In practical settings there is seldom sufficient data to accurately estimate the exponentially increasing number of conditional probability components as the order of the Markov chain increases. The variable-order Markov model assumes that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states; accordingly, "a great reduction in the number of model parameters can be achieved."
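The longest-suffix lookup that a VOM model performs can be sketched as a toy implementation of the five components above (the function name and structure are illustrative, not from the source):

```python
# Conditional probability components of the max-order-2 VOM model
# for the string (aaabc)* described above.
contexts = {
    "aa": {"a": 0.5, "b": 0.5},
    "b":  {"c": 1.0},
    "c":  {"a": 1.0},
    "ca": {"a": 1.0},
}

def next_distribution(history, max_order=2):
    """Return Pr(next symbol | context), using the longest stored suffix
    of the history, falling back to shorter contexts as needed."""
    for k in range(min(max_order, len(history)), 0, -1):
        suffix = history[-k:]
        if suffix in contexts:
            return contexts[suffix]
    raise KeyError("no matching context")

print(next_distribution("aaab"))  # longest match is "b": {'c': 1.0}
print(next_distribution("aabc"))  # "bc" is not stored, falls back to "c": {'a': 1.0}
```

This illustrates the core idea: the context "ab" need not be stored, because the shorter suffix "b" already determines the next symbol.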
Variable-order Markov model : Let A be a state space (finite alphabet) of size |A|. Consider a sequence with the Markov property x_1^n = x_1 x_2 … x_n of n realizations of random variables, where x_i ∈ A is the state (symbol) at position i (1 ≤ i ≤ n), and the concatenation of states x_i and x_{i+1} is denoted by x_i x_{i+1}. Given a training set of observed states, x_1^n, the construction algorithm of the VOM models learns a model P that provides a probability assignment for each state in the sequence given its past (previously observed symbols) or future states. Specifically, the learner generates a conditional probability distribution P(x_i ∣ s) for a symbol x_i ∈ A given a context s ∈ A*, where the * sign represents a sequence of states of any length, including the empty context. VOM models attempt to estimate conditional distributions of the form P(x_i ∣ s) where the context length |s| ≤ D varies depending on the available statistics. In contrast, conventional Markov models attempt to estimate these conditional distributions by assuming a fixed context length |s| = D and, hence, can be considered special cases of the VOM models. Effectively, for a given training sequence, the VOM models are found to obtain better model parameterization than the fixed-order Markov models, leading to a better bias-variance tradeoff of the learned models.
Variable-order Markov model : Various efficient algorithms have been devised for estimating the parameters of the VOM model. VOM models have been successfully applied to areas such as machine learning, information theory and bioinformatics, including specific applications such as coding and data compression, document compression, classification and identification of DNA and protein sequences, [1] statistical process control, spam filtering, haplotyping, speech recognition, sequence analysis in social sciences, and others.
Variable-order Markov model : Stochastic chains with memory of variable length Examples of Markov chains Variable order Bayesian network Markov process Markov chain Monte Carlo Semi-Markov process Artificial intelligence == References ==
Viterbi algorithm : The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events. This is done especially in the context of Markov information sources and hidden Markov models (HMM). The algorithm has found universal application in decoding the convolutional codes used in both CDMA and GSM digital cellular, dial-up modems, satellite, deep-space communications, and 802.11 wireless LANs. It is now also commonly used in speech recognition, speech synthesis, diarization, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text (speech recognition), the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.
Viterbi algorithm : The Viterbi algorithm is named after Andrew Viterbi, who proposed it in 1967 as a decoding algorithm for convolutional codes over noisy digital communication links. It has, however, a history of multiple invention, with at least seven independent discoveries, including those by Viterbi, Needleman and Wunsch, and Wagner and Fischer. It was introduced to natural language processing as a method of part-of-speech tagging as early as 1987. Viterbi path and Viterbi algorithm have become standard terms for the application of dynamic programming algorithms to maximization problems involving probabilities. For example, in statistical parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is commonly called the "Viterbi parse". Another application is in target tracking, where the track is computed that assigns a maximum likelihood to a sequence of observations.
Viterbi algorithm : Given a hidden Markov model with a set of hidden states S and a sequence of T observations o_0, o_1, …, o_{T−1}, the Viterbi algorithm finds the most likely sequence of states that could have produced those observations. At each time step t, the algorithm solves the subproblem where only the observations up to o_t are considered. Two matrices of size T × |S| are constructed: P_{t,s} contains the maximum probability of ending up at state s at observation t, out of all possible sequences of states leading up to it; Q_{t,s} tracks the previous state that was used before s in this maximum-probability state sequence. Let π_s and a_{r,s} be the initial and transition probabilities respectively, and let b_{s,o} be the probability of observing o at state s. Then the values of P are given by the recurrence relation

P_{t,s} = π_s · b_{s,o_0}                             if t = 0,
P_{t,s} = max_r ( P_{t−1,r} · a_{r,s} · b_{s,o_t} )   if t > 0.

The formula for Q_{t,s} is identical for t > 0, except that max is replaced with arg max, and Q_{0,s} = 0. The Viterbi path can be found by selecting the maximum of P at the final timestep, and following Q in reverse.
Viterbi algorithm :
function Viterbi(states, init, trans, emit, obs) is
    input states: S hidden states
    input init: initial probabilities of each state
    input trans: S × S transition matrix
    input emit: S × O emission matrix
    input obs: sequence of T observations

    prob ← T × S matrix of zeroes
    prev ← empty T × S matrix
    for each state s in states do
        prob[0][s] ← init[s] * emit[s][obs[0]]
    for t = 1 to T - 1 inclusive do   // t = 0 has been dealt with already
        for each state s in states do
            for each state r in states do
                new_prob ← prob[t - 1][r] * trans[r][s] * emit[s][obs[t]]
                if new_prob > prob[t][s] then
                    prob[t][s] ← new_prob
                    prev[t][s] ← r
    path ← empty array of length T
    path[T - 1] ← the state s with maximum prob[T - 1][s]
    for t = T - 2 to 0 inclusive do
        path[t] ← prev[t + 1][path[t + 1]]
    return path
end

The time complexity of the algorithm is O(T × |S|²). If it is known which state transitions have non-zero probability, an improved bound can be found by iterating over only those r which link to s in the inner loop. Then using amortized analysis one can show that the complexity is O(T × (|S| + |E|)), where E is the number of edges in the graph, i.e. the number of non-zero entries in the transition matrix.
Viterbi algorithm : A doctor wishes to determine whether patients are healthy or have a fever. The only information the doctor can obtain is by asking patients how they feel. The patients may report that they either feel normal, dizzy, or cold. It is believed that the health condition of the patients operates as a discrete Markov chain. There are two states, "healthy" and "fever", but the doctor cannot observe them directly; they are hidden from the doctor. On each day, the chance that a patient tells the doctor "I feel normal", "I feel cold", or "I feel dizzy", depends only on the patient's health condition on that day. The observations (normal, cold, dizzy) along with the hidden states (healthy, fever) form a hidden Markov model (HMM). From past experience, the probabilities of this model have been estimated as:

init = {"Healthy": 0.6, "Fever": 0.4}
trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3}, "Fever": {"Healthy": 0.4, "Fever": 0.6}}
emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1}, "Fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

In this code, init represents the doctor's belief about how likely the patient is to be healthy initially. Note that the particular probability distribution used here is not the equilibrium one, which would be approximately {"Healthy": 0.57, "Fever": 0.43} according to the transition probabilities. The transition probabilities trans represent the change of health condition in the underlying Markov chain. In this example, a patient who is healthy today has only a 30% chance of having a fever tomorrow. The emission probabilities emit represent how likely each possible observation (normal, cold, or dizzy) is, given the underlying condition (healthy or fever). A patient who is healthy has a 50% chance of feeling normal; one who has a fever has a 60% chance of feeling dizzy. A particular patient visits three days in a row, and reports feeling normal on the first day, cold on the second day, and dizzy on the third day. Firstly, the probabilities of being healthy or having a fever on the first day are calculated. The probability that a patient will be healthy on the first day and report feeling normal is 0.6 × 0.5 = 0.3.
Similarly, the probability that a patient will have a fever on the first day and report feeling normal is 0.4 × 0.1 = 0.04. The probabilities for each of the following days can be calculated from the previous day directly. For example, the highest chance of being healthy on the second day and reporting to be cold, following reporting being normal on the first day, is the maximum of 0.3 × 0.7 × 0.4 = 0.084 and 0.04 × 0.4 × 0.4 = 0.0064. This suggests it is more likely that the patient was healthy for both of those days, rather than having a fever and recovering. The rest of the probabilities are summarised in the following table:

Day  Observation  Healthy   Fever
1    normal       0.30000   0.04000
2    cold         0.08400   0.02700
3    dizzy        0.00588   0.01512

From the table, it can be seen that the patient most likely had a fever on the third day. Furthermore, there exists a sequence of states ending on "fever", of which the probability of producing the given observations is 0.01512. This sequence is precisely (healthy, healthy, fever), which can be found by tracing back which states were used when calculating the maxima (which happens to be the best guess from each day but will not always be). In other words, given the observed activities, the patient was most likely to have been healthy on the first day and also on the second day (despite feeling cold that day), and only to have contracted a fever on the third day. The operation of Viterbi's algorithm can be visualized by means of a trellis diagram. The Viterbi path is essentially the shortest path through this trellis.
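The full computation can be reproduced with a direct implementation of the Viterbi recurrence (a minimal sketch; the dictionary layout mirrors the example above):

```python
def viterbi(states, init, trans, emit, obs):
    # prob[t][s]: max probability of any state sequence ending in s at time t
    prob = [{s: init[s] * emit[s][obs[0]] for s in states}]
    prev = [{}]
    for t in range(1, len(obs)):
        prob.append({})
        prev.append({})
        for s in states:
            # best predecessor r (emit[s][obs[t]] is constant over r)
            best = max(states, key=lambda r: prob[t - 1][r] * trans[r][s])
            prob[t][s] = prob[t - 1][best] * trans[best][s] * emit[s][obs[t]]
            prev[t][s] = best
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: prob[-1][s])]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, prev[t][path[0]])
    return path, prob[-1][path[-1]]

init = {"Healthy": 0.6, "Fever": 0.4}
trans = {"Healthy": {"Healthy": 0.7, "Fever": 0.3},
         "Fever":   {"Healthy": 0.4, "Fever": 0.6}}
emit = {"Healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
        "Fever":   {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}

path, p = viterbi(["Healthy", "Fever"], init, trans, emit,
                  ["normal", "cold", "dizzy"])
print(path, p)  # ['Healthy', 'Healthy', 'Fever'] with probability 0.01512
```

Running this recovers the (healthy, healthy, fever) path and the 0.01512 probability derived by hand above.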
Viterbi algorithm : A generalization of the Viterbi algorithm, termed the max-sum algorithm (or max-product algorithm) can be used to find the most likely assignment of all or some subset of latent variables in a large number of graphical models, e.g. Bayesian networks, Markov random fields and conditional random fields. The latent variables need, in general, to be connected in a way somewhat similar to a hidden Markov model (HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves message passing and is substantially similar to the belief propagation algorithm (which is the generalization of the forward-backward algorithm). With an algorithm called iterative Viterbi decoding, one can find the subsequence of an observation that matches best (on average) to a given hidden Markov model. This algorithm was proposed by Qi Wang et al. to deal with turbo codes. Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence. An alternative algorithm, the Lazy Viterbi algorithm, has been proposed. For many applications of practical interest, under reasonable noise conditions, the lazy decoder (using the Lazy Viterbi algorithm) is much faster than the original Viterbi decoder. While the original Viterbi algorithm calculates every node in the trellis of possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than the ordinary Viterbi algorithm for the same result. However, it is not so easy to parallelize in hardware.
Viterbi algorithm : The soft output Viterbi algorithm (SOVA) is a variant of the classical Viterbi algorithm. SOVA differs from the classical Viterbi algorithm in that it uses a modified path metric which takes into account the a priori probabilities of the input symbols, and produces a soft output indicating the reliability of the decision. The first step in the SOVA is the selection of the survivor path, passing through one unique node at each time instant, t. Since each node has 2 branches converging at it (with one branch being chosen to form the survivor path, and the other being discarded), the difference in the branch metrics (or cost) between the chosen and discarded branches indicates the amount of error in the choice. This cost is accumulated over the entire sliding window (usually at least five constraint lengths), to indicate the soft output measure of reliability of the hard bit decision of the Viterbi algorithm.
Viterbi algorithm : Expectation–maximization algorithm Baum–Welch algorithm Forward-backward algorithm Forward algorithm Error-correcting code Viterbi decoder Hidden Markov model Part-of-speech tagging A* search algorithm
Viterbi algorithm : Viterbi AJ (April 1967). "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm". IEEE Transactions on Information Theory. 13 (2): 260–269. doi:10.1109/TIT.1967.1054010. (note: the Viterbi decoding algorithm is described in section IV.) Subscription required. Feldman J, Abou-Faycal I, Frigo M (2002). "A fast maximum-likelihood decoder for convolutional codes". Proceedings IEEE 56th Vehicular Technology Conference. Vol. 1. pp. 371–375. CiteSeerX 10.1.1.114.1314. doi:10.1109/VETECF.2002.1040367. ISBN 978-0-7803-7467-6. S2CID 9783963. Forney GD (March 1973). "The Viterbi algorithm". Proceedings of the IEEE. 61 (3): 268–278. doi:10.1109/PROC.1973.9030. Subscription required. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.2. Viterbi Decoding". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Archived from the original on 2011-08-11. Retrieved 2011-08-17. Rabiner LR (February 1989). "A tutorial on hidden Markov models and selected applications in speech recognition". Proceedings of the IEEE. 77 (2): 257–286. CiteSeerX 10.1.1.381.3454. doi:10.1109/5.18626. S2CID 13618539. (Describes the forward algorithm and Viterbi algorithm for HMMs). Shinghal, R. and Godfried T. Toussaint, "Experiments in text recognition with the modified Viterbi algorithm," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-l, April 1979, pp. 184–193. Shinghal, R. and Godfried T. Toussaint, "The sensitivity of the modified Viterbi algorithm to the source statistics," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-2, March 1980, pp. 181–185.
Viterbi algorithm : Implementations in Java, F#, Clojure, C# on Wikibooks Tutorial on convolutional coding with viterbi decoding, by Chip Fleming A tutorial for a Hidden Markov Model toolkit (implemented in C) that contains a description of the Viterbi algorithm Viterbi algorithm by Dr. Andrew J. Viterbi (scholarpedia.org).
Facial recognition system : A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image. Development began on similar systems in the 1960s, beginning as a form of computer application. Since their inception, facial recognition systems have seen wider uses in recent times on smartphones and in other forms of technology, such as robotics. Because computerized facial recognition involves the measurement of a human's physiological characteristics, facial recognition systems are categorized as biometrics. Although the accuracy of facial recognition systems as a biometric technology is lower than iris recognition, fingerprint image acquisition, palm recognition or voice recognition, it is widely adopted due to its contactless process. Facial recognition systems have been deployed in advanced human–computer interaction, video surveillance, law enforcement, passenger screening, decisions on employment and housing and automatic indexing of images. Facial recognition systems are employed throughout the world today by governments and private companies. Their effectiveness varies, and some systems have previously been scrapped because of their ineffectiveness. The use of facial recognition systems has also raised controversy, with claims that the systems violate citizens' privacy, commonly make incorrect identifications, encourage gender norms and racial profiling, and do not protect important biometric data. The appearance of synthetic media such as deepfakes has also raised concerns about its security. These claims have led to the ban of facial recognition systems in several cities in the United States. 
Growing societal concerns led social networking company Meta Platforms to shut down its Facebook facial recognition system in 2021, deleting the face scan data of more than one billion users. The change represented one of the largest shifts in facial recognition usage in the technology's history. IBM also stopped offering facial recognition technology due to similar concerns.
Facial recognition system : Automated facial recognition was pioneered in the 1960s by Woody Bledsoe, Helen Chan Wolf, and Charles Bisson, whose work focused on teaching computers to recognize human faces. Their early facial recognition project was dubbed "man-machine" because a human first needed to establish the coordinates of facial features in a photograph before they could be used by a computer for recognition. Using a graphics tablet, a human would pinpoint facial feature coordinates, such as the pupil centers, the inside and outside corners of eyes, and the widow's peak in the hairline. The coordinates were used to calculate 20 individual distances, including the width of the mouth and of the eyes. A human could process about 40 pictures an hour, building a database of these computed distances. A computer would then automatically compare the distances for each photograph, calculate the difference between the distances, and return the closest records as a possible match. In 1970, Takeo Kanade publicly demonstrated a face-matching system that located anatomical features such as the chin and calculated the distance ratio between facial features without human intervention. Later tests revealed that the system could not always reliably identify facial features. Nonetheless, interest in the subject grew and in 1977 Kanade published the first detailed book on facial recognition technology. In 1993, the Defense Advanced Research Projects Agency (DARPA) and the Army Research Laboratory (ARL) established the face recognition technology program FERET to develop "automatic face recognition capabilities" that could be employed in a productive real-life environment "to assist security, intelligence, and law enforcement personnel in the performance of their duties." Face recognition systems that had been trialled in research labs were evaluated.
The FERET tests found that while the performance of existing automated facial recognition systems varied, a handful of existing methods could viably be used to recognize faces in still images taken in a controlled environment. The FERET tests spawned three US companies that sold automated facial recognition systems. Vision Corporation and Miros Inc were founded in 1994, by researchers who used the results of the FERET tests as a selling point. Viisage Technology was established by an identification card defense contractor in 1996 to commercially exploit the rights to the facial recognition algorithm developed by Alex Pentland at MIT. Following the 1993 FERET face-recognition vendor test, the Department of Motor Vehicles (DMV) offices in West Virginia and New Mexico became the first DMV offices to use automated facial recognition systems to prevent people from obtaining multiple driving licenses using different names. Driver's licenses in the United States were at that point a commonly accepted form of photo identification. DMV offices across the United States were undergoing a technological upgrade and were in the process of establishing databases of digital ID photographs. This enabled DMV offices to deploy the facial recognition systems on the market to search photographs for new driving licenses against the existing DMV database. DMV offices became one of the first major markets for automated facial recognition technology and introduced US citizens to facial recognition as a standard method of identification. The increase of the US prison population in the 1990s prompted U.S. states to establish connected and automated identification systems that incorporated digital biometric databases; in some instances this included facial recognition. In 1999, Minnesota incorporated the facial recognition system FaceIT by Visionics into a mug shot booking system that allowed police, judges and court officers to track criminals across the state.
Until the 1990s, facial recognition systems were developed primarily by using photographic portraits of human faces. Research on face recognition to reliably locate a face in an image that contains other objects gained traction in the early 1990s with principal component analysis (PCA). The PCA method of face detection is also known as Eigenface and was developed by Matthew Turk and Alex Pentland. Turk and Pentland combined the conceptual approach of the Karhunen–Loève theorem and factor analysis, to develop a linear model. Eigenfaces are determined based on global and orthogonal features in human faces. A human face is calculated as a weighted combination of a number of Eigenfaces. Because few Eigenfaces were used to encode human faces of a given population, Turk and Pentland's PCA face detection method greatly reduced the amount of data that had to be processed to detect a face. Pentland in 1994 defined Eigenface features, including eigen eyes, eigen mouths and eigen noses, to advance the use of PCA in facial recognition. In 1997, the PCA Eigenface method of face recognition was improved upon using linear discriminant analysis (LDA) to produce Fisherfaces. LDA Fisherfaces became dominantly used in PCA feature-based face recognition, while Eigenfaces were also used for face reconstruction. In these approaches no global structure of the face is calculated which links the facial features or parts. Purely feature-based approaches to facial recognition were overtaken in the late 1990s by the Bochum system, which used a Gabor filter to record the face features and computed a grid of the face structure to link the features. Christoph von der Malsburg and his research team at the University of Bochum developed Elastic Bunch Graph Matching in the mid-1990s to extract a face out of an image using skin segmentation. By 1997, the face detection method developed by Malsburg outperformed most other facial detection systems on the market.
The so-called "Bochum system" of face detection was sold commercially on the market as ZN-Face to operators of airports and other busy locations. The software was "robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hairstyles and glasses—even sunglasses". Real-time face detection in video footage became possible in 2001 with the Viola–Jones object detection framework for faces. Paul Viola and Michael Jones combined Haar-like features with a cascade of classifiers trained by the AdaBoost learning algorithm to build the first real-time frontal-view face detector. By 2015, the Viola–Jones algorithm had been implemented using small low-power detectors on handheld devices and embedded systems. Therefore, the Viola–Jones algorithm has not only broadened the practical application of face recognition systems but has also been used to support new features in user interfaces and teleconferencing. Ukraine is using the US-based Clearview AI facial recognition software to identify dead Russian soldiers. Ukraine has conducted 8,600 searches and identified the families of 582 deceased Russian soldiers. The IT volunteer section of the Ukrainian army using the software is subsequently contacting the families of the deceased soldiers to raise awareness of Russian activities in Ukraine. The main goal is to destabilise the Russian government. It can be seen as a form of psychological warfare. About 340 Ukrainian government officials in five government ministries are using the technology. It is used to catch spies that might try to enter Ukraine. Clearview AI's facial recognition database is only available to government agencies who may only use the technology to assist in the course of law enforcement investigations or in connection with national security. The software was donated to Ukraine by Clearview AI.
Russia is thought to be using it to find anti-war activists. Clearview AI was originally designed for US law enforcement. Using it in war raises new ethical concerns. One London based surveillance expert, Stephen Hare, is concerned it might make the Ukrainians appear inhuman: "Is it actually working? Or is it making [Russians] say: 'Look at these lawless, cruel Ukrainians, doing this to our boys'?"
Facial recognition system : While humans can recognize faces without much effort, facial recognition is a challenging pattern recognition problem in computing. Facial recognition systems attempt to identify a human face, which is three-dimensional and changes in appearance with lighting and facial expression, based on its two-dimensional image. To accomplish this computational task, facial recognition systems perform four steps. First, face detection is used to segment the face from the image background. In the second step the segmented face image is aligned to account for face pose, image size and photographic properties, such as illumination and grayscale. The purpose of the alignment process is to enable the accurate localization of facial features in the third step, the facial feature extraction. Features such as eyes, nose and mouth are pinpointed and measured in the image to represent the face. The feature vector thus established is then, in the fourth step, matched against a database of faces.
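The feature-extraction and matching steps can be illustrated with a toy eigenface-style sketch: PCA compresses each image into a short feature vector, and a probe is matched to the nearest gallery vector. All data here is randomly generated; this is a sketch of the idea, not a working recognizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "gallery": 20 flattened 16x16 grayscale images, one per person.
# (Random data stands in for real face images in this sketch.)
gallery = rng.random((20, 256))

# Feature extraction: PCA via SVD of the mean-centred gallery;
# the rows of vt play the role of "eigenfaces".
mean = gallery.mean(axis=0)
_, _, vt = np.linalg.svd(gallery - mean, full_matrices=False)
eigenfaces = vt[:8]                 # keep the 8 strongest components

def features(img):
    """Project a flattened image onto the eigenface basis."""
    return (img - mean) @ eigenfaces.T

gallery_feats = np.array([features(g) for g in gallery])

# Matching: a probe image (person 3's image with slight noise) is assigned
# to the gallery entry with the nearest feature vector.
probe = gallery[3] + 0.01 * rng.standard_normal(256)
d = np.linalg.norm(gallery_feats - features(probe), axis=1)
print(int(d.argmin()))  # matches gallery entry 3
```

Because only 8 coefficients are kept per image rather than 256 pixels, this mirrors the data reduction that made the Eigenface approach practical.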
Facial recognition system : In the 18th and 19th century, the belief that facial expressions revealed the moral worth or true inner state of a human was widespread, and physiognomy was a respected science in the Western world. From the early 19th century onwards, photography was used in the physiognomic analysis of facial features and facial expression to detect insanity and dementia. In the 1960s and 1970s the study of human emotions and their expressions was reinvented by psychologists, who tried to define a normal range of emotional responses to events. Research on automated emotion recognition has since the 1970s focused on facial expressions and speech, which are regarded as the two most important ways in which humans communicate emotions to other humans. In the 1970s the Facial Action Coding System (FACS) categorization for the physical expression of emotions was established. Its developer Paul Ekman maintains that there are six emotions that are universal to all human beings and that these can be coded in facial expressions. Research into automatic emotion-specific expression recognition has in the past decades focused on frontal-view images of human faces. Facial thermography is considered a promising tool for emotion recognition. In 2016, facial feature emotion recognition algorithms were among the new technologies, alongside high-definition CCTV, high-resolution 3D face recognition and iris recognition, that found their way out of university research labs. In 2016, Facebook acquired FacioMetrics, a facial feature emotion recognition spin-off of Carnegie Mellon University. In the same year Apple Inc. acquired the facial feature emotion recognition start-up Emotient. By the end of 2016, commercial vendors of facial recognition systems offered to integrate and deploy emotion recognition algorithms for facial features.
The MIT Media Lab spin-off Affectiva offered, by late 2019, a facial expression emotion detection product that can recognize the emotions of people while they are driving.
Facial recognition system : The development of anti-facial recognition technology is effectively an arms race between privacy researchers and big data companies. Big data companies increasingly use convolutional AI technology to create ever more advanced facial recognition models. Solutions that block facial recognition may not work on newer software, or on different types of facial recognition models. One frequently cited example of facial-recognition blocking is the CV Dazzle makeup and haircut system, but its creators note on their website that it was designed to defeat a particular facial recognition algorithm, has been outdated for some time, and may no longer work. Another example is the emergence of facial recognition that can identify people wearing facemasks and sunglasses, especially after the COVID-19 pandemic. Given that big data companies have much more funding than privacy researchers, it is very difficult for anti-facial recognition systems to keep up. There is also no guarantee that obfuscation techniques applied to images taken and stored in the past, such as masks or software obfuscation, would protect users from facial-recognition analysis of those images by future technology. In January 2013, Japanese researchers from the National Institute of Informatics created 'privacy visor' glasses that use near-infrared light to make the face underneath unrecognizable to face recognition software that relies on infrared. The latest version uses a titanium frame, light-reflective material and a mask which uses angles and patterns to disrupt facial recognition technology through both absorbing and bouncing back light sources. However, these methods target infrared facial recognition and would not work against facial recognition of plain images. Some projects use adversarial machine learning to come up with new printed patterns that confuse existing face recognition software.
One method that may protect against facial recognition systems is the use of specific haircuts and make-up patterns that prevent the algorithms from detecting a face, known as computer vision dazzle. Incidentally, the makeup styles popular with Juggalos may also protect against facial recognition. Facial masks that are worn to protect from contagious viruses can reduce the accuracy of facial recognition systems. A 2020 NIST study tested popular one-to-one matching systems and found failure rates between five and fifty percent on masked individuals. The Verge speculated that mass surveillance systems, which were not included in the study, would be even less accurate. The facial recognition of Apple Pay can work through many barriers, including heavy makeup, thick beards and even sunglasses, but fails with masks. However, facial recognition of masked faces is becoming increasingly reliable. Another solution is the application of obfuscation to images that may fool facial recognition systems while still appearing normal to a human user. These could be applied when images are posted online or on social media. However, as it is hard to remove images once they are on the internet, the obfuscation on these images may be defeated and the face of the user identified by future advances in technology. Two examples of this technique, developed in 2020, are the ANU's 'Camera Adversaria' camera app, and the University of Chicago's Fawkes image cloaking software, which applies obfuscation to already taken photos. However, by 2021 the Fawkes obfuscation algorithm had already been specifically targeted by Microsoft Azure, which changed its algorithm to lower Fawkes' effectiveness.
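The basic idea behind image-cloaking tools such as Fawkes is to add a small, bounded per-pixel perturbation that leaves the photo visually unchanged while shifting the values a recognizer's feature extractor sees. A toy sketch of the bounded-perturbation part follows; real cloaking tools optimize the perturbation against a target feature extractor rather than drawing it at random, and the function names here are illustrative only.

```python
import random

# Toy sketch of bounded image perturbation, the mechanism image
# cloaking builds on: every pixel moves by at most `eps` intensity
# levels, so the image stays visually similar. Real tools (e.g.
# Fawkes) choose the perturbation by optimizing against a feature
# extractor; random noise alone will NOT defeat modern recognizers.

def cloak(image, eps=3, seed=0):
    """image: 2-D list of 0-255 grayscale pixel values. Returns a
    perturbed copy whose per-pixel change is bounded by eps and whose
    values stay in [0, 255]."""
    rng = random.Random(seed)
    out = []
    for row in image:
        out.append([min(255, max(0, p + rng.randint(-eps, eps))) for p in row])
    return out
```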
Facial recognition system : Lists List of computer vision topics List of emerging technologies Outline of artificial intelligence
Facial recognition system : Farokhi, Sajad; Shamsuddin, Siti Mariyam; Flusser, Jan; Sheikh, U.U; Khansari, Mohammad; Jafari-Khouzani, Kourosh (2014). "Near infrared face recognition by combining Zernike moments and undecimated discrete wavelet transform". Digital Signal Processing. 31 (1): 13–27. Bibcode:2014DSP....31...13F. doi:10.1016/j.dsp.2014.04.008. "The Face Detection Algorithm Set to Revolutionize Image Search" (Feb. 2015), MIT Technology Review Garvie, Clare; Bedoya, Alvaro; Frankle, Jonathan (October 18, 2016). Perpetual Line Up: Unregulated Police Face Recognition in America. Center on Privacy & Technology at Georgetown Law. Retrieved October 22, 2016. "Facial Recognition Software 'Sounds Like Science Fiction,' but May Affect Half of Americans". As It Happens. Canadian Broadcasting Corporation. October 20, 2016. Retrieved October 22, 2016. Interview with Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law and co-author of Perpetual Line Up: Unregulated Police Face Recognition in America. Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
Facial recognition system : Media related to Facial recognition system at Wikimedia Commons A Photometric Stereo Approach to Face Recognition (master's thesis). The University of the West of England, Bristol.
FERET (facial recognition technology) : The Facial Recognition Technology (FERET) program was a government-sponsored project that aimed to create a large, automatic face-recognition system for intelligence, security, and law enforcement purposes. The program began in 1993 under the combined leadership of Dr. Harry Wechsler at George Mason University (GMU) and Dr. Jonathon Phillips at the Army Research Laboratory (ARL) in Adelphi, Maryland and resulted in the development of the Facial Recognition Technology (FERET) database. The goal of the FERET program was to advance the field of face recognition technology by establishing a common database of facial imagery for researchers to use and setting a performance baseline for face-recognition algorithms. Potential areas where this face-recognition technology could be used include: Automated searching of mug books using surveillance photos Controlling access to restricted facilities or equipment Checking the credentials of personnel for background and security clearances Monitoring airports, border crossings, and secure manufacturing facilities for particular individuals Finding and logging multiple appearances of individuals over time in surveillance videos Verifying identities at ATMs Searching photo ID records for fraud detection The FERET database has been used by more than 460 research groups and is currently managed by the National Institute of Standards and Technology (NIST). By 2017, the FERET database had been used to train artificial intelligence programs and computer vision algorithms to identify and sort faces.
FERET (facial recognition technology) : The origin of facial recognition technology is largely attributed to Woodrow Wilson Bledsoe and his work in the 1960s, when he developed a system to identify faces from a database of thousands of photographs. The FERET program first began as a way to unify a large body of face-recognition research under a standard database. Before the program's inception, most researchers created their own facial imagery database attuned to their own specific area of study. These personal databases were small and usually consisted of images from fewer than 50 individuals. The only notable exceptions were the following: Alex Pentland’s database of around 7,500 facial images at the Massachusetts Institute of Technology (MIT) Joseph Wilder's database of around 250 individuals at Rutgers University Christoph von der Malsburg’s database of around 100 facial images at the University of Southern California (USC) The lack of a common database made it difficult to compare the results of face recognition studies in the scientific literature because each report involved different assumptions, scoring methods, and images. Most of the papers that were published did not use images from a common database nor follow a standard testing protocol. As a result, researchers were unable to make informed comparisons between the performances of different face-recognition algorithms. In September 1993, the FERET program was launched under Dr. Harry Wechsler and Dr. Jonathon Phillips, sponsored by the U.S. Department of Defense Counterdrug Technology Development Program through DARPA, with ARL serving as technical agent.
FERET (facial recognition technology) : FERET NIST Website Color FERET Database FERET NIST Documents
FERET database : The Facial Recognition Technology (FERET) database is a dataset used for facial recognition system evaluation as part of the Face Recognition Technology (FERET) program. It was first established in 1993 under a collaborative effort between Harry Wechsler at George Mason University and Jonathon Phillips at the Army Research Laboratory in Adelphi, Maryland. The FERET database serves as a standard database of facial images for researchers to use to develop various algorithms and report results. The use of a common database also allowed researchers to compare the effectiveness of different methodological approaches and gauge their strengths and weaknesses. The facial images for the database were collected between December 1993 and August 1996, accumulating a total of 14,126 images pertaining to 1,199 individuals along with 365 duplicate sets of images that were taken on a different day. In 2003, the Defense Advanced Research Projects Agency (DARPA) released a high-resolution, 24-bit color version of these images. This release includes 2,413 still facial images representing 856 individuals. The FERET database has been used by more than 460 research groups and is managed by the National Institute of Standards and Technology (NIST).
FERET database : Official website about the gray-scale version Official website about the color version More official information IEEE Transactions on Pattern Analysis and Machine Intelligence, VOL. 22, NO. 10, October 2000 More documents about FERET
Handwriting recognition : Handwriting recognition (HWR), also known as handwritten text recognition (HTR), is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning (optical character recognition) or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface, a generally easier task as there are more clues available. A handwriting recognition system handles formatting, performs correct segmentation into characters, and finds the most plausible words.
Handwriting recognition : Offline handwriting recognition involves the automatic conversion of text in an image into letter codes that are usable within computer and text-processing applications. The data obtained by this form is regarded as a static representation of handwriting. Offline handwriting recognition is comparatively difficult, as different people have different handwriting styles. Today, OCR engines are primarily focused on machine-printed text, while ICR handles hand-"printed" (written in capital letters) text.
Handwriting recognition : Online handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching. This kind of data is known as digital ink and can be regarded as a digital representation of handwriting. The obtained signal is converted into letter codes that are usable within computer and text-processing applications. The elements of an online handwriting recognition interface typically include: a pen or stylus for the user to write with a touch-sensitive surface, which may be integrated with, or adjacent to, an output display a software application which interprets the movements of the stylus across the writing surface, translating the resulting strokes into digital text. The process of online handwriting recognition can be broken down into a few general steps: preprocessing, feature extraction and classification. The purpose of preprocessing is to discard irrelevant information in the input data that can negatively affect recognition; this concerns both speed and accuracy. Preprocessing usually consists of binarization, normalization, sampling, smoothing and denoising. The second step is feature extraction: from the two- or higher-dimensional vector field produced by the preprocessing algorithms, higher-dimensional features are extracted. The purpose of this step is to highlight information important to the recognition model. This data may include information like pen pressure, velocity or the changes of writing direction. The last big step is classification. In this step, various models are used to map the extracted features to different classes and thus identify the characters or words the features represent.
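Two of the preprocessing steps named above, normalization and (re)sampling, can be sketched for a single pen stroke. The sketch below assumes a stroke is just a list of (x, y) points captured by the digitizer; the function names are illustrative.

```python
# Sketch of two online-handwriting preprocessing steps:
# (1) size normalization into the unit square, and
# (2) equidistant resampling along the pen path, which removes the
#     speed variation of the writer's hand from the point spacing.

def normalize(stroke):
    """Scale and translate a stroke into the unit square,
    preserving its aspect ratio."""
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - min(xs)) / s, (y - min(ys)) / s) for x, y in stroke]

def resample(stroke, n):
    """Resample a stroke to n points spaced equally along its path,
    interpolating linearly between the original samples."""
    # cumulative arc length at each original point
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
        dists.append(dists[-1] + ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5)
    total = dists[-1]
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = dists[j + 1] - dists[j] or 1.0
        t = (target - dists[j]) / seg
        x = stroke[j][0] + t * (stroke[j + 1][0] - stroke[j][0])
        y = stroke[j][1] + t * (stroke[j + 1][1] - stroke[j][1])
        out.append((x, y))
    return out
```

After such preprocessing, features like direction changes and pen pressure are extracted from the cleaned stroke before classification.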
Handwriting recognition : Handwriting recognition has an active community of academics studying it. The biggest conferences for handwriting recognition are the International Conference on Frontiers in Handwriting Recognition (ICFHR), held in even-numbered years, and the International Conference on Document Analysis and Recognition (ICDAR), held in odd-numbered years. Both of these conferences are endorsed by the IEEE and IAPR. In 2021, the ICDAR proceedings were published in Springer's Lecture Notes in Computer Science (LNCS). Active areas of research include: Online recognition Offline recognition Signature verification Postal address interpretation Bank-check processing Writer recognition
Handwriting recognition : Since 2009, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won several international handwriting competitions. In particular, the bi-directional and multi-dimensional Long short-term memory (LSTM) of Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages (French, Arabic, Persian) to be learned. Recent GPU-based deep learning methods for feedforward networks by Dan Ciresan and colleagues at IDSIA won the ICDAR 2011 offline Chinese handwriting recognition contest; their neural networks also were the first artificial pattern recognizers to achieve human-competitive performance on the famous MNIST handwritten digits problem of Yann LeCun and colleagues at NYU. Benjamin Graham of the University of Warwick won a 2013 Chinese handwriting recognition contest, with only a 2.61% error rate, by using an approach to convolutional neural networks that evolved (by 2017) into "sparse convolutional neural networks".