Generalized blockmodeling : According to Doreian, generalized blockmodeling has the following characteristics and open issues: unknown sensitivity to particular data features; the examination of boundary problems; a computational burden that constrains the practical network size (generalized blockmodeling is thus primarily used to analyse smaller networks, below 100 units); the identification of structure from incomplete network information; a focus on binary networks in most of generalized blockmodeling, although the field of valued networks is also developing; a criterion function that is minimized for a specified blockmodel, which raises issues of statistical evaluation based on the structural data alone; problems regarding three-dimensional network data; and problems regarding the evolution of fundamental network structure.
Generalized blockmodeling : The book with the same title, Generalized blockmodeling, written by Patrick Doreian, Vladimir Batagelj and Anuška Ferligoj, was awarded the 2007 Harrison White Outstanding Book Award by the Mathematical Sociology Section of the American Sociological Association.
Generalized blockmodeling : Patrick Doreian, Vladimir Batagelj, Anuška Ferligoj, Mark Granovetter (Series Editor), Generalized Blockmodeling (Structural Analysis in the Social Sciences), Cambridge University Press 2004 (ISBN 0-521-84085-6)
Generalized blockmodeling of binary networks : Generalized blockmodeling of binary networks (also relational blockmodeling) is an approach within generalized blockmodeling for analysing binary networks. As most network analyses deal with binary networks, this approach is also considered the fundamental approach of blockmodeling. This is especially noted because the set of ideal blocks, when used for interpretation of blockmodels, have binary link patterns, which precludes them from being directly compared with valued empirical blocks. When analysing binary networks, the criterion function measures block inconsistencies while also reporting the possible errors. The ideal block in binary blockmodeling has only three types of conditions: "a certain cell must be (at least) 1, a certain cell must be 0 and the sum over each row (or column) must be at least 1". It is also used as a basis for developing the generalized blockmodeling of valued networks.
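The three conditions above can be illustrated by counting block inconsistencies against a few common ideal binary blocks. The following is a minimal sketch with simplified deviation counts (illustrative only, not the full weighted criterion function used in generalized blockmodeling):

```python
import numpy as np

def block_inconsistency(block, ideal):
    """Count deviations of an empirical binary block from an ideal block.

    Simplified conventions (real criterion functions may weigh deviations
    differently):
      null        -- every cell must be 0; each 1 is an inconsistency
      complete    -- every cell must be 1; each 0 is an inconsistency
      row-regular -- each row must contain at least one 1; each
                     all-zero row counts as one inconsistency
    """
    block = np.asarray(block)
    if ideal == "null":
        return int(block.sum())
    if ideal == "complete":
        return int(block.size - block.sum())
    if ideal == "row-regular":
        return int((block.sum(axis=1) == 0).sum())
    raise ValueError(f"unknown ideal block type: {ideal}")

b = np.array([[1, 0, 0],
              [0, 0, 0],
              [0, 1, 1]])
print(block_inconsistency(b, "null"))         # 3 (three 1s present)
print(block_inconsistency(b, "complete"))     # 6 (six 0s present)
print(block_inconsistency(b, "row-regular"))  # 1 (one all-zero row)
```

A fitting procedure would evaluate such counts over all blocks induced by a candidate partition and minimize their total.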
Generalized blockmodeling of binary networks : homogeneity blockmodeling, binary relation, binary matrix
Generalized blockmodeling of valued networks : Generalized blockmodeling of valued networks is an approach of generalized blockmodeling dealing with valued (i.e., non-binary) networks. While generalized blockmodeling signifies a "formal and integrated approach for the study of the underlying functional anatomies of virtually any set of relational data", it is in principle used for binary networks. This is evident from the set of ideal blocks, which are used to interpret blockmodels, being binary, based on characteristic link patterns. Because of this, such templates are "not readily comparable with valued empirical blocks". To allow generalized blockmodeling of valued directional (one-mode) networks (e.g., allowing direct comparisons of empirical valued blocks with ideal binary blocks), a non-parametric approach is used. With this, "an optional parameter determines the prominence of valued ties as a minimum percentile deviation between observed and expected flows". Such a two-sided application of the parameter then introduces "the possibility of non-determined ties, i.e. valued relations that are deemed neither prominent (1) nor non-prominent (0)." The resulting occurrences of such links then motivate a modification of the calculation of inconsistencies between empirical and ideal blocks. At the same time, such links also make it possible to measure the interpretational certainty that is specific to each ideal block. The maximum two-sided deviation threshold that holds the aggregate uncertainty score at zero or near-zero levels is then proposed as "a measure of interpretational certainty for valued blockmodels, in effect transforming the optional parameter into an outgoing state". A problem with blockmodeling is the standard set of ideal blocks, as they are all specified using binary link (tie) patterns; this makes it "a non-trivial exercise to match and count inconsistencies between such ideal binary ties and empirical valued ties".
One approach to solve this is to use dichotomization to transform the network into a binary version. Two other approaches were first proposed by Aleš Žiberna in 2007: valued (generalized) blockmodeling and homogeneity blockmodeling. The basic idea of the latter is "that the inconsistency of an empirical block with its ideal block can be measured by within block variability of appropriate values". The newly formed ideal blocks, which are appropriate for blockmodeling of valued networks, are then presented together with the definitions of their block inconsistencies. Two further approaches were later suggested by Carl Nordlund in 2019: a deviational approach and a correlation-based generalized approach. Both of Nordlund's approaches are based on the idea that valued networks can be compared with ideal blocks without values. With these approaches, more information is retained for analysis, which also means that fewer partitions have identical values of the criterion function; generalized blockmodeling of valued networks thus measures inconsistencies more precisely. Usually only one optimal partition is found, especially with homogeneity blockmodeling. In contrast, binary blockmodeling applied to the same data has on several occasions produced more than one optimal partition.
Generalized blockmodeling of valued networks : Generalized blockmodeling of binary networks, homogeneity blockmodeling
Homogeneity blockmodeling : In the mathematical analysis of social structures, homogeneity blockmodeling is an approach in blockmodeling which is best suited as a preliminary or main approach to valued networks when prior knowledge about these networks is not available. This is because homogeneity blockmodeling emphasizes the similarity of link (tie) strengths within the blocks over the pattern of links. In this approach, tie (link) values (or statistical data computed on them) are assumed to be equal (homogeneous) within blocks. This approach to the generalized blockmodeling of valued networks was first proposed by Aleš Žiberna in 2007 with the basic idea "that the inconsistency of an empirical block with its ideal block can be measured by within block variability of appropriate values". The newly formed ideal blocks, which are appropriate for blockmodeling of valued networks, are then presented together with the definitions of their block inconsistencies. A similar approach to homogeneity blockmodeling, dealing with a direct approach for structural equivalence, was previously suggested by Stephen P. Borgatti and Martin G. Everett (1992).
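The idea that block inconsistency can be measured by within-block variability can be sketched with a sum-of-squares measure. This is a minimal illustration (one common choice of variability measure, not the full set of ideal blocks and inconsistencies defined in the literature):

```python
import numpy as np

def ss_inconsistency(block):
    """Within-block variability as block inconsistency: the sum of
    squared deviations of the tie values from the block mean.
    Under sum-of-squares homogeneity blockmodeling, an ideal block
    has all values equal, so its inconsistency is zero."""
    block = np.asarray(block, dtype=float)
    return float(((block - block.mean()) ** 2).sum())

# A perfectly homogeneous block has zero inconsistency ...
print(ss_inconsistency([[4, 4], [4, 4]]))  # 0.0
# ... while heterogeneous tie values are penalised.
print(ss_inconsistency([[1, 9], [1, 9]]))  # 64.0
```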
Homogeneity blockmodeling : Generalized blockmodeling of binary networks, implicit blockmodeling, blockmodeling linked networks, homogeneity and heterogeneity
Implicit blockmodeling : Implicit blockmodeling is an approach in blockmodeling, similar to valued and homogeneity blockmodeling, in which an additional normalization is initially applied and, when specifying the parameter, the value of the relevant link is replaced by the block maximum. This approach was first proposed by Batagelj and Ferligoj in 2000 and further developed by Aleš Žiberna in 2007/08. Compared with homogeneity blockmodeling, implicit blockmodeling performs similarly with max-regular equivalence, but slightly worse in other settings. It performs worse than valued and homogeneity blockmodeling with a pre-specified blockmodel.
Andrej Mrvar : Andrej Mrvar is a Slovenian computer scientist and a professor at the University of Ljubljana. He is known for his work in network analysis, graph drawing, decision making, virtual reality, electronic timing and data processing of sports competitions.
Andrej Mrvar : He is well known for his work on Pajek, a free software package for the analysis and visualization of large networks. Mrvar began work on Pajek in 1996 with Vladimir Batagelj. His book Exploratory Social Network Analysis with Pajek, coauthored with Wouter de Nooy and Vladimir Batagelj, is his most cited work. It was published by Cambridge University Press in three editions (first 2005, second 2011, and third 2018). The book was translated into Japanese (2009) and Chinese (first edition 2012, second 2014). With Anuška Ferligoj, he was a founding co-editor-in-chief of the Metodološki zvezki journal.
Andrej Mrvar : Vidmar Award (Faculty of Electrical and Computer Engineering, University of Ljubljana): 1988, 1990. First prizes for contributions (with Vladimir Batagelj) to the Graph Drawing Contests in 1995, 1996, 1997, 1998, 1999, 2000 and 2005; Graph Drawing Hall of Fame. Award of the University of Ljubljana for contributions in education and research (Svečana listina Univerze v Ljubljani za pomembne dosežke na področju vzgojnoizobraževalnega in znanstvenoraziskovalega dela): 2001. The INSNA's William D. Richards Software Award for work on Pajek (with Vladimir Batagelj): 2013. Award of the Faculty of Social Sciences, University of Ljubljana, for scientific excellence (Priznanje za znanstveno odličnost): 2013.
Andrej Mrvar : Wouter de Nooy, Andrej Mrvar, Vladimir Batagelj, Mark Granovetter (Series Editor), Exploratory Social Network Analysis with Pajek (Structural Analysis in the Social Sciences), Cambridge University Press (First Edition: 2005, Second Edition: 2011, Third Edition: 2018) (ISBN 0-521-60262-9); Japanese translation (2010); Chinese translation (First Edition: 2012, Second Edition: 2014).
Andrej Mrvar and Vladimir Batagelj, Analysis and visualization of large networks with program package Pajek. Complex Adaptive Systems Modeling, 4:6, SpringerOpen, 2016.
Vladimir Batagelj and Andrej Mrvar, Some Analyses of Erdős Collaboration Graph. Social Networks, 22, 173–186, 2000.
Vladimir Batagelj and Andrej Mrvar, A Subquadratic Triad Census Algorithm for Large Sparse Networks with Small Maximum Degree. Social Networks, 23, 237–243, 2001.
Patrick Doreian and Andrej Mrvar, A Partitioning Approach to Structural Balance. Social Networks, 18, 149–168, 1996.
Patrick Doreian and Andrej Mrvar, Partitioning Signed Social Networks. Social Networks, 31, 1–11, 2009.
Andrej Mrvar and Patrick Doreian, Partitioning Signed Two-Mode Networks. Journal of Mathematical Sociology, 33, 196–221, 2009.
Patrick Doreian and Andrej Mrvar, The international reach of the Koch brothers network. In: Antonyuk, A. and Basov, N. (Eds.): Networks in the Global World V. NetGloW 2020. Lecture Notes in Networks and Systems, 181, 225–235. Springer, 2021.
Patrick Doreian and Andrej Mrvar, Delineating Changes in the Fundamental Structure of Signed Networks. Frontiers in Physics, 294, 1–11, 2021.
Patrick Doreian and Andrej Mrvar, Hubs and Authorities in the Koch Brothers Network. Social Networks, 64, 148–157, 2021.
Patrick Doreian and Andrej Mrvar, Public issues, policy proposals, social movements, and the interests of the Koch Brothers network of allies. Quality and Quantity, 56, 305–322, 2022.
Douglas R. White, Vladimir Batagelj, Andrej Mrvar, Analyzing Large Kinship and Marriage Networks with Pgraph and Pajek. Social Science Computer Review, 17, 245–274, 1999.
Ion Georgiou, Ronald Concer, Andrej Mrvar, A Systemic Approach to Sociometric Group Research: Advancing The Work of Leslie Day Zeleny, 1939–1947. Social Networks, 63, 174–200, 2020.
Stochastic block model : The stochastic block model is a generative model for random graphs. This model tends to produce graphs containing communities, subsets of nodes characterized by being connected with one another with particular edge densities. For example, edges may be more common within communities than between communities. Its mathematical formulation was first introduced in 1983 in the field of social network analysis by Paul W. Holland et al. The stochastic block model is important in statistics, machine learning, and network science, where it serves as a useful benchmark for the task of recovering community structure in graph data.
Stochastic block model : The stochastic block model takes the following parameters: the number n of vertices; a partition of the vertex set into disjoint subsets C_1, …, C_r, called communities; and a symmetric r × r matrix P of edge probabilities. The edge set is then sampled at random as follows: any two vertices u ∈ C_i and v ∈ C_j are connected by an edge with probability P_ij. An example problem is: given a graph with n vertices, where the edges are sampled as described, recover the groups C_1, …, C_r.
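The sampling procedure just described can be written down directly. A minimal (deliberately naive, O(n²)) generator sketch, with illustrative community sizes and probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(sizes, P, rng=rng):
    """Sample an undirected graph from a stochastic block model.

    sizes -- size of each community C_1, ..., C_r
    P     -- symmetric r x r matrix of edge probabilities
    Returns the n x n adjacency matrix and each vertex's community label.
    """
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    A = np.zeros((n, n), dtype=int)
    for u in range(n):
        for v in range(u + 1, n):  # each pair is sampled independently
            if rng.random() < P[labels[u], labels[v]]:
                A[u, v] = A[v, u] = 1
    return A, labels

P = np.array([[0.8, 0.05],
              [0.05, 0.8]])          # an assortative planted partition
A, labels = sample_sbm([50, 50], P)
print(A.shape)  # (100, 100)
```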
Stochastic block model : If the probability matrix is a constant, in the sense that P_ij = p for all i, j, then the result is the Erdős–Rényi model G(n, p). This case is degenerate (the partition into communities becomes irrelevant), but it illustrates a close relationship to the Erdős–Rényi model. The planted partition model is the special case in which the values of the probability matrix P are a constant p on the diagonal and another constant q off the diagonal. Thus two vertices within the same community share an edge with probability p, while two vertices in different communities share an edge with probability q. Sometimes it is this restricted model that is called the stochastic block model. The case where p > q is called an assortative model, while the case p < q is called disassortative. Returning to the general stochastic block model, a model is called strongly assortative if P_ii > P_jk whenever j ≠ k: all diagonal entries dominate all off-diagonal entries. A model is called weakly assortative if P_ii > P_ij whenever i ≠ j: each diagonal entry is only required to dominate the rest of its own row and column. Disassortative forms of this terminology exist, obtained by reversing all inequalities. For some algorithms, recovery might be easier for block models with assortative or disassortative conditions of this form.
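The strong and weak assortativity conditions can be checked mechanically. A small sketch (the example matrix is hypothetical, chosen so that the two conditions differ):

```python
import numpy as np

def is_strongly_assortative(P):
    """Every diagonal entry dominates every off-diagonal entry."""
    P = np.asarray(P)
    off = P[~np.eye(len(P), dtype=bool)]
    return bool(P.diagonal().min() > off.max())

def is_weakly_assortative(P):
    """Each diagonal entry dominates the rest of its own row and column."""
    P = np.asarray(P)
    ok = True
    for i in range(len(P)):
        row = np.delete(P[i], i)
        col = np.delete(P[:, i], i)
        ok &= P[i, i] > max(row.max(), col.max())
    return bool(ok)

# Weakly but not strongly assortative: P[0,0] = 0.5 beats its own row
# and column, but not the off-diagonal entry 0.6 elsewhere in P.
P = np.array([[0.5, 0.1, 0.1],
              [0.1, 0.9, 0.6],
              [0.1, 0.6, 0.9]])
print(is_strongly_assortative(P))  # False
print(is_weakly_assortative(P))    # True
```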
Stochastic block model : Much of the literature on algorithmic community detection addresses three statistical tasks: detection, partial recovery, and exact recovery.
Stochastic block model : Stochastic block models exhibit a sharp threshold effect reminiscent of percolation thresholds. Suppose that we allow the size n of the graph to grow, keeping the community sizes in fixed proportions. If the probability matrix remains fixed, tasks such as partial and exact recovery become feasible for all non-degenerate parameter settings. However, if we scale down the probability matrix at a suitable rate as n increases, we observe a sharp phase transition: for certain settings of the parameters, it will become possible to achieve recovery with probability tending to 1, whereas on the opposite side of the parameter threshold, the probability of recovery tends to 0 no matter what algorithm is used. For partial recovery, the appropriate scaling is to take P_ij = P̃_ij / n for fixed P̃, resulting in graphs of constant average degree. In the case of two equal-sized communities, in the assortative planted partition model with probability matrix P = ( p̃/n q̃/n ; q̃/n p̃/n ), partial recovery is feasible with probability 1 − o(1) whenever (p̃ − q̃)² > 2(p̃ + q̃), whereas any estimator fails partial recovery with probability 1 − o(1) whenever (p̃ − q̃)² < 2(p̃ + q̃). For exact recovery, the appropriate scaling is to take P_ij = P̃_ij log n / n, resulting in graphs of logarithmic average degree. Here a similar threshold exists: for the assortative planted partition model with r equal-sized communities, the threshold lies at √p̃ − √q̃ = √r. In fact, the exact recovery threshold is known for the fully general stochastic block model.
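The partial-recovery condition for the two-community planted partition model is simple enough to evaluate directly; a small sketch of the threshold stated above:

```python
def partial_recovery_feasible(p_tilde, q_tilde):
    """Two equal-sized communities, assortative planted partition with
    edge probabilities p~/n and q~/n: partial recovery is feasible
    precisely when (p~ - q~)^2 > 2 (p~ + q~)."""
    return (p_tilde - q_tilde) ** 2 > 2 * (p_tilde + q_tilde)

print(partial_recovery_feasible(7, 1))  # (7-1)^2 = 36 > 16 -> True
print(partial_recovery_feasible(5, 3))  # (5-3)^2 = 4  > 16 -> False
```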
Stochastic block model : In principle, exact recovery can be solved in its feasible range using maximum likelihood, but this amounts to solving a constrained or regularized cut problem such as minimum bisection that is typically NP-complete. Hence, no known efficient algorithms will correctly compute the maximum-likelihood estimate in the worst case. However, a wide variety of algorithms perform well in the average case, and many high-probability performance guarantees have been proven for algorithms in both the partial and exact recovery settings. Successful algorithms include spectral clustering of the vertices, semidefinite programming, forms of belief propagation, and community detection among others.
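As a toy illustration of the spectral approach, one common variant splits the vertices by the sign of the adjacency matrix's eigenvector for its second-largest eigenvalue. The six-vertex graph below (two triangles joined by a single edge) is illustrative, not a benchmark:

```python
import numpy as np

def spectral_bisect(A):
    """Spectral clustering sketch for two communities: label vertices
    by the sign of the eigenvector belonging to the second-largest
    eigenvalue of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)  # eigenvalues in ascending order
    second = vecs[:, -2]            # eigenvector of 2nd-largest eigenvalue
    return (second > 0).astype(int)

# Two dense blocks (triangles) joined by a single sparse link.
A = np.zeros((6, 6), dtype=int)
for grp in ([0, 1, 2], [3, 4, 5]):
    for u in grp:
        for v in grp:
            if u != v:
                A[u, v] = 1
A[2, 3] = A[3, 2] = 1
print(spectral_bisect(A))  # the two triangles receive different labels
```

In practice, spectral methods for SBMs operate on (regularized) Laplacians or non-backtracking matrices to reach the thresholds discussed above; the sign split here is only the simplest instance of the idea.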
Stochastic block model : Several variants of the model exist. One minor tweak allocates vertices to communities randomly, according to a categorical distribution, rather than in a fixed partition. More significant variants include the degree-corrected stochastic block model, the hierarchical stochastic block model, the geometric block model, censored block model and the mixed-membership block model.
Stochastic block model : The stochastic block model has been recognised as a topic model on bipartite networks. In a network of documents and words, the stochastic block model can identify topics: groups of words with a similar meaning.
Stochastic block model : Signed graphs allow for both favorable and adverse relationships and serve as a common model choice for various data analysis applications, e.g., correlation clustering. The stochastic block model can be trivially extended to signed graphs by assigning both positive and negative edge weights or equivalently using a difference of adjacency matrices of two stochastic block models.
Stochastic block model : GraphChallenge encourages community approaches to developing new solutions for analyzing graphs and sparse data derived from social media, sensor feeds, and scientific data to enable relationships between events to be discovered as they unfold in the field. Streaming stochastic block partition is one of the challenges since 2017. Spectral clustering has demonstrated outstanding performance compared to the original and even improved base algorithm, matching its quality of clusters while being multiple orders of magnitude faster.
Stochastic block model : blockmodeling; Girvan–Newman algorithm – community detection algorithm; Lancichinetti–Fortunato–Radicchi benchmark – algorithm for generating benchmark networks with communities
Harrison White : Harrison Colyar White (March 21, 1930 – May 18, 2024) was an American sociologist who was the Giddings Professor of Sociology at Columbia University. White played an influential role in the “Harvard Revolution” in social networks and the New York School of relational sociology. He is credited with the development of a number of mathematical models of social structure including vacancy chains and blockmodels. He has been a leader of a revolution in sociology that is still in process, using models of social structure that are based on patterns of relations instead of the attributes and attitudes of individuals. Among social network researchers, White is widely respected. For instance, at the 1997 International Network of Social Network Analysis conference, the organizer held a special “White Tie” event, dedicated to White. Social network researcher Emmanuel Lazega refers to him as both “Copernicus and Galileo” because he invented both the vision and the tools. The most comprehensive documentation of his theories can be found in the book Identity and Control, first published in 1992. A major rewrite of the book appeared in June 2008. In 2011, White received the W.E.B. DuBois Career of Distinguished Scholarship Award from the American Sociological Association, which honors "scholars who have shown outstanding commitment to the profession of sociology and whose cumulative work has contributed in important ways to the advancement of the discipline." Before his retirement to live in Tucson, Arizona, White was interested in sociolinguistics and business strategy as well as sociology.
Harrison White : A good summary of White's sociological contributions is provided by his former student and collaborator, Ronald Breiger: White addresses problems of social structure that cut across the range of the social sciences. Most notably, he has contributed (1) theories of role structures encompassing classificatory kinship systems of native Australian peoples and institutions of the contemporary West; (2) models based on equivalences of actors across networks of multiple types of social relation; (3) theorization of social mobility in systems of organizations; (4) a structural theory of social action that emphasizes control, agency, narrative, and identity; (5) a theory of artistic production; (6) a theory of economic production markets leading to the elaboration of a network ecology for market identities and new ways of accounting for profits, prices, and market shares; and (7) a theory of language use that emphasizes switching between social, cultural, and idiomatic domains within networks of discourse. His most explicit theoretical statement is Identity and Control: A Structural Theory of Social Action (1992), although several of the major components of his theory of the mutual shaping of networks, institutions, and agency are also readily apparent in Careers and Creativity: Social Forces in the Arts (1993), written for a less-specialized audience. More generally, White and his students sparked interest in looking at society as networks rather than as aggregates of individuals. This view is still controversial. In sociology and organizational science, it is difficult to measure cause and effect in a systematic way. Because of that, it is common to use sampling techniques to discover some sort of average in a population. For instance, we are told almost daily how the average European or American feels about a topic. 
Such aggregation allows social scientists and pundits to make inferences about cause and say "people are angry at the current administration because the economy is doing poorly." This kind of generalization certainly makes sense, but it does not tell us anything about an individual. This leads to the idea of an idealized individual, something that is the bedrock of modern economics. Most modern economic theories look at social formations, like organizations, as products of individuals all acting in their own best interest. While this has proved to be useful in some cases, it does not account well for the knowledge that is required for the structures to sustain themselves. White and his students (and his students' students) have been developing models that incorporate the patterns of relationships into descriptions of social formations. This line of work includes: economic sociology, network sociology and structuralist sociology.
Harrison White : In addition to his own publications, White is widely credited with training many influential generations of network analysts in sociology. This includes the early work of the 1960s and 1970s during the Harvard Revolution, as well as the 1980s and 1990s at Columbia during the New York School of relational sociology. White's student and teaching assistant, Michael Schwartz, took notes in the spring of 1965, known as Notes on the Constituents of Social Structure, of White's undergraduate Introduction to Social Relations course (Soc Rel 10). These notes were circulated among network analysis students and aficionados until finally published in 2008 in Sociologica. As the popular social science blog Orgtheory.net explains, "in contemporary American sociology, there are no set of student-taken notes that have had as much underground influence as those from Harrison White's introductory Soc Rel 10 seminar at Harvard." The first generation of Harvard graduate students that trained with White during the 1960s went on to be a formidable cohort of network-analytically inclined sociologists. His first graduate student at Harvard was Edward Laumann, who went on to develop one of the most widely used methods of studying personal networks, known as ego-network surveys (developed with one of Laumann's students at the University of Chicago, Ronald Burt). Several of them went on to contribute to the "Toronto school" of structural analysis. Barry Wellman, for instance, contributed heavily to the cross-fertilization of network analysis and community studies, later contributing to the earliest studies of online communities. Another of White's earliest students at Harvard was Nancy Lee (now Nancy Howell), who used social network analysis in her groundbreaking study of how women seeking an abortion found willing doctors before Roe v. Wade. She found that women found doctors through links of friends and acquaintances and were, on average, four degrees removed from the doctor.
White also trained later additions to the Toronto school, Harriet Friedmann ('77) and Bonnie Erickson ('73). One of White's most well-known graduate students was Mark Granovetter, who attended Harvard as a Ph.D. student from 1965 to 1970. Granovetter studied how people got jobs and discovered that they were more likely to get them through acquaintances than through friends. Recounting the development of his widely cited 1973 article, "The Strength of Weak Ties", Granovetter credits White's lectures, and specifically White's description of sociometric work by Anatol Rapoport and William Horvath, with giving him the idea. This, tied with earlier work by Stanley Milgram (who was also in the Harvard Department of Social Relations 1963–1967, though not one of White's students), gave scientists a better sense of how the social world was organized: into many dense groups with "weak ties" between them. Granovetter's work provided the theoretical background for Malcolm Gladwell's The Tipping Point. This line of research is still actively being pursued by Duncan Watts, Albert-László Barabási, Mark Newman, Jon Kleinberg and others. White's research on "vacancy chains" was assisted by a number of graduate students, including Michael Schwartz and Ivan Chase. The outcome of this was the book Chains of Opportunity. The book described a model of social mobility where the roles and the people that filled them were independent. The idea of a person being partially created by their position in patterns of relationships has become a recurring theme in his work. This provided a quantitative analysis of social roles, allowing scientists new ways to measure society that were not based on statistical aggregates. During the 1970s, White worked with his students Scott Boorman, Ronald Breiger, and François Lorrain on a series of articles that introduced a procedure called "blockmodeling" and the concept of "structural equivalence."
The key idea behind these articles was identifying a "position" or "role" through similarities in individuals' social structure, rather than characteristics intrinsic to the individuals or a priori definitions of group membership. At Columbia, White trained a new cohort of researchers who pushed network analysis beyond methodological rigor to theoretical extension and the incorporation of previously neglected concepts, namely culture and language. Many of his students and mentees have had a strong impact in sociology. Other former students include Michael Schwartz and Ivan Chase, both professors at Stony Brook; Joel Levine, who founded Dartmouth College's Math/Social Science program; Edward Laumann, who pioneered survey-based egocentric network research and became a dean and provost at the University of Chicago; Kathleen Carley at Carnegie Mellon University; Ronald Breiger at the University of Arizona; Barry Wellman at the University of Toronto and then the NetLab Network; Peter Bearman at Columbia University; Bonnie Erickson (Toronto); Christopher Winship (Harvard University); Nicholas Mullins (Virginia Tech, deceased); Margaret Theeman (Boulder); Brian Sherman (retired, Atlanta); Nancy Howell (retired, Toronto); David R. Gibson (University of Notre Dame); Matthew Bothner (University of Chicago); Ann Mische (University of Notre Dame); and Kyriakos Kontopoulos (Temple University).
Harrison White : White died at an assisted living facility in Tucson, on May 18, 2024, at the age of 94.
Harrison White : Faculty Website at Columbia University. Interview with Harrison White by Alair MacLean and Andy Olds. Event held in honor of Harrison White. SocioSite: Famous Sociologists – Harrison White: information resources on the life, academic work and intellectual influence of Harrison White; editor: dr. Albert Benschop (University of Amsterdam).
Bayesian network : A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
Bayesian network : Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if m parent nodes represent m Boolean variables, then the probability function could be represented by a table of 2 m entries, one entry for each of the 2 m possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
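The table-of-2^m-entries representation of a node's probability function can be made concrete. A sketch for a Boolean node with m = 2 Boolean parents; the probabilities are illustrative values in the style of the sprinkler example's "grass wet" CPT:

```python
from itertools import product

# CPT for a Boolean node GrassWet with parents (Sprinkler, Rain).
# Each entry gives P(GrassWet=True | parents); illustrative values.
cpt_grass_wet = {
    (False, False): 0.0,
    (False, True):  0.8,
    (True,  False): 0.9,
    (True,  True):  0.99,
}

m = 2
# One entry for each of the 2^m possible parent combinations.
assert len(cpt_grass_wet) == 2 ** m
for combo in product([False, True], repeat=m):
    print(combo, cpt_grass_wet[combo])
```

P(node=False | parents) needs no extra column, since it is just one minus the stored value.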
Bayesian network : Let us use an illustration to enforce the concepts of a Bayesian network. Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state, i.e. whether it is on or not), the presence or absence of rain, and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false). The joint probability function is, by the chain rule of probability,

Pr(G, S, R) = Pr(G | S, R) Pr(S | R) Pr(R)

where G = "Grass wet (true/false)", S = "Sprinkler turned on (true/false)", and R = "Raining (true/false)". The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:

Pr(R = T | G = T) = Pr(G = T, R = T) / Pr(G = T) = Σ_{x ∈ {T,F}} Pr(G = T, S = x, R = T) / Σ_{x,y ∈ {T,F}} Pr(G = T, S = x, R = y)

Using the expansion for the joint probability function Pr(G, S, R) and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example,

Pr(G = T, S = T, R = T) = Pr(G = T | S = T, R = T) Pr(S = T | R = T) Pr(R = T) = 0.99 × 0.01 × 0.2 = 0.00198.

Then the numerical results (subscripted by the associated variable values) are

Pr(R = T | G = T) = (0.00198_TTT + 0.1584_TFT) / (0.00198_TTT + 0.288_TTF + 0.1584_TFT + 0.0_TFF) = 891/2491 ≈ 35.77%.

To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function

Pr(S, R | do(G = T)) = Pr(S | R) Pr(R)

obtained by removing the factor Pr(G | S, R) from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action:

Pr(R | do(G = T)) = Pr(R).

To predict the impact of turning the sprinkler on:

Pr(R, G | do(S = T)) = Pr(R) Pr(G | R, S = T)

with the term Pr(S = T | R) removed, showing that the action affects the grass but not the rain. These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action do(x) can still be predicted, however, whenever the back-door criterion is satisfied. It states that, if a set Z of nodes can be observed that d-separates (or blocks) all back-door paths from X to Y, then

Pr(Y, Z | do(x)) = Pr(Y, Z, X = x) / Pr(X = x | Z).

A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations.
In that case P(G | do(S = T)) is not "identified". This reflects the fact that, lacking interventional data, one cannot tell whether the observed dependence between S and G is due to a causal connection or is spurious (apparent dependence arising from a common cause, R). (See Simpson's paradox.) To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data. Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for 2^10 = 1024 values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most 10 · 2^3 = 80 values. One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
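The worked sprinkler computation above can be reproduced by brute-force enumeration of the joint distribution. The CPT values below are reconstructed from the numeric terms in the example (e.g. 0.00198 = 0.99 × 0.01 × 0.2); the diagram itself is not reproduced here, so treat them as an assumption:

```python
from itertools import product

# CPTs reconstructed from the worked terms: Pr(R), Pr(S=T | R), Pr(G=T | S, R)
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: 0.01, False: 0.4}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) via the chain-rule factorization Pr(G|S,R) Pr(S|R) Pr(R)."""
    pg = P_G_given_SR[(s, r)] if g else 1.0 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1.0 - P_S_given_R[r]
    return pg * ps * P_R[r]

# Pr(R=T | G=T): condition on G=T and sum out the nuisance variable S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
print(round(num / den, 4))   # 0.3577, i.e. 891/2491
```

Because the full joint over three binary variables has only eight entries, enumeration here is exact; for larger networks one would use an algorithm such as the junction tree algorithm or sampling instead.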
Bayesian network : Bayesian networks perform three main inference tasks: inferring unobserved variables, learning parameters, and learning structure.
Bayesian network : Given data x and parameter θ , a simple Bayesian analysis starts with a prior probability (prior) p ( θ ) and likelihood p ( x ∣ θ ) to compute a posterior probability p ( θ ∣ x ) ∝ p ( x ∣ θ ) p ( θ ) . Often the prior on θ depends in turn on other parameters φ that are not mentioned in the likelihood. So, the prior p ( θ ) must be replaced by a likelihood p ( θ ∣ φ ) , and a prior p ( φ ) on the newly introduced parameters φ is required, resulting in a posterior probability p ( θ , φ ∣ x ) ∝ p ( x ∣ θ ) p ( θ ∣ φ ) p ( φ ) . This is the simplest example of a hierarchical Bayes model. The process may be repeated; for example, the parameters φ may depend in turn on additional parameters ψ , which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.
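The hierarchical posterior p(θ, φ | x) ∝ p(x | θ) p(θ | φ) p(φ) can be evaluated numerically on a grid. The Gaussian choices below (x | θ ~ N(θ, 1), θ | φ ~ N(φ, 1), φ ~ N(0, 1)) and the single data point are illustrative assumptions, not taken from the text; a minimal sketch:

```python
import math

def normal_pdf(v, mean, sd):
    return math.exp(-0.5 * ((v - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

x = 1.5                                        # one observed data point (assumed)
grid = [i * 0.05 for i in range(-100, 101)]    # grid for both theta and phi

# Unnormalized joint posterior p(theta, phi | x) on the grid:
# likelihood p(x | theta) times p(theta | phi) times prior p(phi).
post = {(t, f): normal_pdf(x, t, 1.0)
                * normal_pdf(t, f, 1.0)
                * normal_pdf(f, 0.0, 1.0)
        for t in grid for f in grid}

z = sum(post.values())                         # normalizing constant
theta_mean = sum(t * p for (t, f), p in post.items()) / z
print(round(theta_mean, 2))                    # 1.0 (= 2x/3 for x = 1.5)
```

For this conjugate Gaussian chain the marginal posterior of θ is N(2x/3, 2/3), so the grid estimate of the posterior mean lands at 1.0 for x = 1.5; the grid serves only to show the ∝ relation being evaluated directly.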
Bayesian network : Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V,E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V.
Bayesian network : In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks. First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ɛ < 1/2 with confidence probability greater than 1/2. At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor 2^(n^(1−ɛ)) for every ɛ > 0, even for Bayesian networks with restricted architecture, is NP-hard. In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by 1/p(n), where p(n) was any polynomial in the number of nodes in the network, n.
Bayesian network : Notable software for Bayesian networks includes: Just another Gibbs sampler (JAGS) – an open-source alternative to WinBUGS; uses Gibbs sampling. OpenBUGS – an open-source development of WinBUGS. SPSS Modeler – commercial software that includes an implementation for Bayesian networks. Stan – an open-source package for obtaining Bayesian inference using the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo. PyMC – a Python library implementing an embedded domain-specific language to represent Bayesian networks, and a variety of samplers (including NUTS). WinBUGS – one of the first computational implementations of MCMC samplers; no longer maintained.
Bayesian network : The term Bayesian network was coined by Judea Pearl in 1985 to emphasize: the often subjective nature of the input information; the reliance on Bayes' conditioning as the basis for updating information; and the distinction between causal and evidential modes of reasoning. In the late 1980s Pearl's Probabilistic Reasoning in Intelligent Systems and Neapolitan's Probabilistic Reasoning in Expert Systems summarized their properties and established them as a field of study.
Bayesian network : Conrady S, Jouffe L (2015-07-01). Bayesian Networks and BayesiaLab – A practical introduction for researchers. Franklin, Tennessee: Bayesian USA. ISBN 978-0-9965333-0-0. Charniak E (Winter 1991). "Bayesian networks without tears" (PDF). AI Magazine. Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P (2013). Computational Intelligence A Methodological Introduction. London: Springer-Verlag. ISBN 978-1-4471-5012-1. Borgelt C, Steinbrecher M, Kruse R (2009). Graphical Models – Representations for Learning, Reasoning and Data Mining (Second ed.). Chichester: Wiley. ISBN 978-0-470-74956-2.
Bayesian network : An Introduction to Bayesian Networks and their Contemporary Applications On-line Tutorial on Bayesian nets and probability Web-App to create Bayesian nets and run it with a Monte Carlo method Continuous Time Bayesian Networks Bayesian Networks: Explanation and Analogy A live tutorial on learning Bayesian networks A hierarchical Bayes Model for handling sample heterogeneity in classification problems, provides a classification model taking into consideration the uncertainty associated with measuring replicate samples. Hierarchical Naive Bayes Model for handling sample uncertainty Archived 2007-09-28 at the Wayback Machine, shows how to perform classification and learning with continuous and discrete variables with replicated measurements.
Bayesian hierarchical modeling : Bayesian hierarchical modeling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method. The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate them with the observed data and account for all the uncertainty that is present. The result of this integration is the posterior distribution, which provides an updated probability estimate. Frequentist statistics may yield conclusions seemingly incompatible with those offered by Bayesian statistics due to the Bayesian treatment of the parameters as random variables and its use of subjective information in establishing assumptions on these parameters. As the approaches answer different questions, the formal results aren't technically contradictory, but the two approaches disagree over which answer is relevant to particular applications. Bayesians argue that relevant information regarding decision-making and updating beliefs cannot be ignored and that hierarchical modeling has the potential to overrule classical methods in applications where respondents give multiple observational data. Moreover, the model has proven to be robust, with the posterior distribution less sensitive to the more flexible hierarchical priors. Hierarchical modeling, as its name implies, retains nested data structure, and is used when information is available at several different levels of observational units. For example, in epidemiological modeling to describe infection trajectories for multiple countries, observational units are countries, and each country has its own time-based profile of daily infected cases. In decline curve analysis to describe oil or gas production decline for multiple wells, observational units are oil or gas wells in a reservoir region, and each well has its own time-based profile of oil or gas production rates (usually, barrels per month). Hierarchical modeling is used to devise computation-based strategies for multiparameter problems.
Bayesian hierarchical modeling : Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters. Individual degrees of belief, expressed in the form of probabilities, come with uncertainty, and these degrees of belief change over time. As was stated by Professor José M. Bernardo and Professor Adrian F. Smith, “The actuality of the learning process consists in the evolution of individual and subjective beliefs about the reality.” These subjective probabilities are more directly involved in the mind rather than the physical probabilities. Hence, it is with this need to update beliefs that Bayesians have formulated an alternative statistical model which takes into account the prior occurrence of a particular event.
Bayesian hierarchical modeling : The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief attached, by an individual, to the events defining the options. Suppose in a study of the effectiveness of cardiac treatments, with the patients in hospital j having survival probability θ_j, the survival probability will be updated with the occurrence of y, the event in which a controversial serum is created which, as believed by some, increases survival in cardiac patients. In order to make updated probability statements about θ_j, given the occurrence of event y, we must begin with a model providing a joint probability distribution for θ_j and y. This can be written as a product of the two distributions that are often referred to as the prior distribution P(θ) and the sampling distribution P(y | θ) respectively:

P(θ, y) = P(θ) P(y | θ)

Using the basic property of conditional probability, the posterior distribution will yield:

P(θ | y) = P(θ, y) / P(y) = P(y | θ) P(θ) / P(y)

This equation, showing the relationship between the conditional probability and the individual events, is known as Bayes' theorem. This simple expression encapsulates the technical core of Bayesian inference, which aims to deconstruct the probability P(θ | y) relative to solvable subsets of its supportive evidence.
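For a concrete instance of this updating, take a single hospital j and give its survival probability θ_j a conjugate Beta prior. The prior parameters and the outcome counts below are invented for illustration (the text specifies neither):

```python
# Minimal conjugate sketch of P(theta | y) ∝ P(y | theta) P(theta)
# for one hospital's survival probability theta_j.

prior_a, prior_b = 2.0, 2.0        # assumed Beta(2, 2) prior on theta_j
survived, died = 15, 5             # assumed observed outcomes y in hospital j

# With a Beta prior and a binomial sampling distribution, Bayes' theorem
# gives a Beta posterior in closed form: Beta(a + survived, b + died).
post_a, post_b = prior_a + survived, prior_b + died

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)
print(prior_mean, post_mean)       # 0.5 -> 0.7083...
```

The data pull the estimate of θ_j from the prior mean 0.5 toward the observed survival fraction 0.75; the closed form exists only because the Beta prior is conjugate to the binomial likelihood, which is why it is a convenient teaching case.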
Bayesian hierarchical modeling : The usual starting point of a statistical analysis is the assumption that the n values y_1, y_2, …, y_n are exchangeable. If no information – other than data y – is available to distinguish any of the θ_j’s from any others, and no ordering or grouping of the parameters can be made, one must assume symmetry of prior distribution parameters. This symmetry is represented probabilistically by exchangeability. Generally, it is useful and appropriate to model data from an exchangeable distribution as independently and identically distributed, given some unknown parameter vector θ with distribution P(θ).
Bayesian hierarchical modeling : A three stage version of Bayesian hierarchical modeling could be used to calculate probability at 1) an individual level, 2) at the level of population and 3) the prior, which is an assumed probability distribution that takes place before evidence is initially acquired:

Stage 1: Individual-Level Model
y_ij = f(t_ij; θ_1i, θ_2i, …, θ_li, …, θ_Ki) + ε_ij, ε_ij ~ N(0, σ²), i = 1, …, N, j = 1, …, M_i.

Stage 2: Population Model
θ_li = α_l + Σ_{b=1}^{P} β_lb x_ib + η_li, η_li ~ N(0, ω_l²), i = 1, …, N, l = 1, …, K.

Stage 3: Prior
σ² ~ π(σ²), α_l ~ π(α_l), (β_l1, …, β_lb, …, β_lP) ~ π(β_l1, …, β_lb, …, β_lP), ω_l² ~ π(ω_l²), l = 1, …, K.

Here, y_ij denotes the continuous response of the i-th subject at the time point t_ij, and x_ib is the b-th covariate of the i-th subject. Parameters involved in the model are written in Greek letters. The variable f(t; θ_1, …, θ_K) is a known function parameterized by the K-dimensional vector (θ_1, …, θ_K). Typically, f is a nonlinear function and describes the temporal trajectory of individuals. In the model, ε_ij and η_li describe within-individual variability and between-individual variability, respectively. If the prior is not considered, the relationship reduces to a frequentist nonlinear mixed-effect model.
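Stages 1 and 2 above can be simulated directly as a generative model. The text leaves f unspecified, so the exponential-decay trajectory f(t; θ_1, θ_2) = θ_1 exp(−θ_2 t) and all numeric settings below are assumptions (and there are no covariates x_ib, i.e. P = 0):

```python
import math
import random

random.seed(0)
N, M = 5, 10                      # subjects, time points per subject
alpha = [2.0, 0.5]                # population-level means alpha_l (assumed)
omega = [0.1, 0.05]               # between-individual s.d. omega_l (assumed)
sigma = 0.05                      # within-individual s.d. (assumed)

data = []
for i in range(N):
    # Stage 2: individual parameters theta_li scatter around population values.
    theta = [alpha[l] + random.gauss(0, omega[l]) for l in range(2)]
    for j in range(M):
        t = j * 0.5
        f = theta[0] * math.exp(-theta[1] * t)    # Stage 1 mean trajectory
        y = f + random.gauss(0, sigma)            # within-individual noise
        data.append((i, t, y))

print(len(data))                  # N * M = 50 simulated observations
```

Fitting such a model (Stage 3 onward) would add priors on σ², α_l and ω_l² and sample the posterior, e.g. with an MCMC package; the sketch here only shows the two variability levels the text distinguishes.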
A central task in the application of Bayesian nonlinear mixed-effect models is to evaluate the posterior density:

π({θ_li}_{i=1,l=1}^{N,K}, σ², {α_l}_{l=1}^{K}, {β_lb}_{l=1,b=1}^{K,P}, {ω_l²}_{l=1}^{K} | {y_ij}_{i=1,j=1}^{N,M_i})
∝ π({y_ij}_{i=1,j=1}^{N,M_i}, {θ_li}_{i=1,l=1}^{N,K}, σ², {α_l}_{l=1}^{K}, {β_lb}_{l=1,b=1}^{K,P}, {ω_l²}_{l=1}^{K})
= π({y_ij} | {θ_li}, σ²) [Stage 1: Individual-Level Model]
  × π({θ_li} | {α_l}, {β_lb}, {ω_l²}) [Stage 2: Population Model]
  × p(σ², {α_l}, {β_lb}, {ω_l²}) [Stage 3: Prior]

The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. A standard research cycle involves 1) literature review, 2) defining a problem and 3) specifying the research question and hypothesis. The Bayesian-specific workflow stratifies this approach to include three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function f; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
Causal Markov condition : The Markov condition, sometimes called the Markov assumption, is an assumption made in Bayesian probability theory, that every node in a Bayesian network is conditionally independent of its nondescendants, given its parents. Stated loosely, it is assumed that a node has no bearing on nodes which do not descend from it. In a DAG, this local Markov condition is equivalent to the global Markov condition, which states that d-separations in the graph also correspond to conditional independence relations. This also means that a node is conditionally independent of the entire network, given its Markov blanket. The related Causal Markov (CM) condition states that, conditional on the set of all its direct causes, a node is independent of all variables which are not effects or direct causes of that node. In the event that the structure of a Bayesian network accurately depicts causality, the two conditions are equivalent. However, a network may accurately embody the Markov condition without depicting causality, in which case it should not be assumed to embody the causal Markov condition.
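The local Markov condition can be checked numerically on a toy network. For the chain A → B → C below, node C's only parent is B and its nondescendant is A; the CPT values are made up for illustration, and conditioning on A in addition to B leaves C's distribution unchanged:

```python
# Tiny chain A -> B -> C with illustrative CPTs over binary variables.
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {(1, 0): 0.3, (1, 1): 0.9}      # Pr(B=1 | A=a)
P_C_given_B = {(1, 0): 0.2, (1, 1): 0.7}      # Pr(C=1 | B=b)

def p(a, b, c):
    """Joint Pr(A=a, B=b, C=c) from the chain factorization."""
    pb = P_B_given_A[(1, a)] if b else 1 - P_B_given_A[(1, a)]
    pc = P_C_given_B[(1, b)] if c else 1 - P_C_given_B[(1, b)]
    return P_A[a] * pb * pc

def p_c_given(b, a=None):
    """Pr(C=1 | B=b), optionally also conditioning on A=a."""
    if a is None:
        num = sum(p(x, b, 1) for x in (0, 1))
        den = sum(p(x, b, c) for x in (0, 1) for c in (0, 1))
    else:
        num, den = p(a, b, 1), sum(p(a, b, c) for c in (0, 1))
    return num / den

# Given its parent B, C is independent of its nondescendant A:
for b in (0, 1):
    for a in (0, 1):
        assert abs(p_c_given(b) - p_c_given(b, a)) < 1e-12
print("C is conditionally independent of A given B")
```

The same check fails if run on a common-cause structure without conditioning on the parent, which is exactly the distinction the Markov condition formalizes.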
Causal Markov condition : Statisticians are enormously interested in the ways in which certain events and variables are connected. A precise notion of what constitutes a cause and effect is necessary to understand the connections between them. The central idea behind the philosophical study of probabilistic causation is that causes raise the probabilities of their effects, all else being equal. A deterministic interpretation of causation means that if A causes B, then A must always be followed by B. In this sense, smoking does not cause cancer because some smokers never develop cancer. On the other hand, a probabilistic interpretation simply means that causes raise the probability of their effects. In this sense, changes in meteorological readings associated with a storm do cause that storm, since they raise its probability. (However, simply looking at a barometer does not change the probability of the storm.)
Causal Markov condition : In a simple view, releasing one's hand from a hammer causes the hammer to fall. However, doing so in outer space does not produce the same outcome, calling into question whether releasing one's fingers from a hammer always causes it to fall. A causal graph could be created to acknowledge that both the presence of gravity and the release of the hammer contribute to its falling. However, it would be very surprising if the surface underneath the hammer affected its falling. This essentially states the Causal Markov Condition: given the existence of gravity and the release of the hammer, it will fall regardless of what is beneath it.
Causal Markov condition : See also: Causal model
Influence diagram : An influence diagram (ID) (also called a relevance diagram, decision diagram or a decision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of a Bayesian network, in which not only probabilistic inference problems but also decision making problems (following the maximum expected utility criterion) can be modeled and solved. ID was first developed in the mid-1970s by decision analysts with an intuitive semantic that is easy to understand. It is now adopted widely and becoming an alternative to the decision tree which typically suffers from exponential growth in number of branches with each variable modeled. ID is directly applicable in team decision analysis, since it allows incomplete sharing of information among team members to be modeled and solved explicitly. Extensions of ID also find their use in game theory as an alternative representation of the game tree.
Influence diagram : An ID is a directed acyclic graph with three types (plus one subtype) of node and three types of arc (or arrow) between nodes. Nodes: A decision node (corresponding to each decision to be made) is drawn as a rectangle. An uncertainty node (corresponding to each uncertainty to be modeled) is drawn as an oval. A deterministic node (corresponding to a special kind of uncertainty whose outcome is deterministically known whenever the outcomes of some other uncertainties are known) is drawn as a double oval. A value node (corresponding to each component of an additively separable Von Neumann–Morgenstern utility function) is drawn as an octagon (or diamond). Arcs: Functional arcs (ending in a value node) indicate that one of the components of the additively separable utility function is a function of all the nodes at their tails. Conditional arcs (ending in an uncertainty node) indicate that the uncertainty at their heads is probabilistically conditioned on all the nodes at their tails. Conditional arcs (ending in a deterministic node) indicate that the uncertainty at their heads is deterministically conditioned on all the nodes at their tails. Informational arcs (ending in a decision node) indicate that the decision at their heads is made with the outcome of all the nodes at their tails known beforehand. Given a properly structured ID: Decision nodes and incoming informational arcs collectively state the alternatives (what can be done when the outcome of certain decisions and/or uncertainties are known beforehand). Uncertainty/deterministic nodes and incoming conditional arcs collectively model the information (what is known and its probabilistic/deterministic relationships). Value nodes and incoming functional arcs collectively quantify the preference (how things are preferred over one another). Alternatives, information, and preference are termed the decision basis in decision analysis; they represent the three required components of any valid decision situation.
Formally, the semantics of an influence diagram are based on sequential construction of nodes and arcs, which implies a specification of all conditional independencies in the diagram. The specification is defined by the d-separation criterion of Bayesian networks. According to this semantics, every node is probabilistically independent of its non-successor nodes given the outcome of its immediate predecessor nodes. Likewise, a missing arc between non-value node X and non-value node Y implies that there exists a set of non-value nodes Z, e.g., the parents of Y, that renders Y independent of X given the outcome of the nodes in Z.
Influence diagram : Consider the simple influence diagram representing a situation where a decision-maker is planning their vacation. There is 1 decision node (Vacation Activity), 2 uncertainty nodes (Weather Condition, Weather Forecast), and 1 value node (Satisfaction). There are 2 functional arcs (ending in Satisfaction), 1 conditional arc (ending in Weather Forecast), and 1 informational arc (ending in Vacation Activity). The functional arcs ending in Satisfaction indicate that Satisfaction is a utility function of Weather Condition and Vacation Activity. In other words, their satisfaction can be quantified if they know what the weather is like and what their choice of activity is. (Note that they do not value Weather Forecast directly.) The conditional arc ending in Weather Forecast indicates their belief that Weather Forecast and Weather Condition can be dependent. The informational arc ending in Vacation Activity indicates that they will only know Weather Forecast, not Weather Condition, when making their choice. In other words, the actual weather will be known after they make their choice, and only the forecast is what they can count on at this stage. It also follows semantically, for example, that Vacation Activity is independent of (irrelevant to) Weather Condition given that Weather Forecast is known.
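The node and arc taxonomy, applied to this vacation diagram, can be encoded as a small data structure; the class and method names here are illustrative, not from any standard library. Note how each arc's type is determined by the kind of node at its head:

```python
from dataclasses import dataclass, field

NODE_KINDS = {"decision", "uncertainty", "deterministic", "value"}

@dataclass
class InfluenceDiagram:
    nodes: dict = field(default_factory=dict)   # name -> kind
    arcs: list = field(default_factory=list)    # (tail, head) pairs

    def add_node(self, name, kind):
        assert kind in NODE_KINDS
        self.nodes[name] = kind

    def add_arc(self, tail, head):
        self.arcs.append((tail, head))

    def arc_type(self, tail, head):
        # Functional if the head is a value node, conditional if it is an
        # uncertainty or deterministic node, informational if a decision node.
        return {"value": "functional",
                "uncertainty": "conditional",
                "deterministic": "conditional",
                "decision": "informational"}[self.nodes[head]]

d = InfluenceDiagram()
for name, kind in [("WeatherCondition", "uncertainty"),
                   ("WeatherForecast", "uncertainty"),
                   ("VacationActivity", "decision"),
                   ("Satisfaction", "value")]:
    d.add_node(name, kind)
d.add_arc("WeatherCondition", "WeatherForecast")    # conditional
d.add_arc("WeatherForecast", "VacationActivity")    # informational
d.add_arc("WeatherCondition", "Satisfaction")       # functional
d.add_arc("VacationActivity", "Satisfaction")       # functional
print(d.arc_type("WeatherForecast", "VacationActivity"))  # informational
```

A fuller implementation would also attach CPTs to uncertainty nodes and utility tables to value nodes; the sketch above only captures the graph-level taxonomy described in the text.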
Influence diagram : The above example highlights the power of the influence diagram in representing an extremely important concept in decision analysis known as the value of information. Consider the following three scenarios: Scenario 1: The decision-maker could make their Vacation Activity decision while knowing what Weather Condition will be like. This corresponds to adding an extra informational arc from Weather Condition to Vacation Activity in the above influence diagram. Scenario 2: The original influence diagram as shown above. Scenario 3: The decision-maker makes their decision without even knowing the Weather Forecast. This corresponds to removing the informational arc from Weather Forecast to Vacation Activity in the above influence diagram. Scenario 1 is the best possible scenario for this decision situation, since there is no longer any uncertainty about what they care about (Weather Condition) when making their decision. Scenario 3, however, is the worst possible scenario for this decision situation, since they need to make their decision without any hint (Weather Forecast) about how what they care about (Weather Condition) will turn out. The decision-maker is usually better off (definitely no worse off, on average) moving from scenario 3 to scenario 2 through the acquisition of new information. The most they should be willing to pay for such a move is called the value of information on Weather Forecast, which is essentially the value of imperfect information on Weather Condition. The applicability of this simple ID and the value of information concept is tremendous, especially in medical decision making, where most decisions have to be made with imperfect information about patients, diseases, etc.
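The three scenarios can be compared numerically. The probabilities and utilities below are invented for illustration; only the ordering EU(scenario 1) ≥ EU(scenario 2) ≥ EU(scenario 3), and hence the nonnegative value of information, reflects the text:

```python
P_W = {"sunny": 0.7, "rainy": 0.3}                         # Pr(Weather Condition), assumed
P_F_given_W = {("sunny", "sunny"): 0.8, ("sunny", "rainy"): 0.2,
               ("rainy", "sunny"): 0.2, ("rainy", "rainy"): 0.8}  # Pr(F=f | W=w), keyed (f, w)
U = {("outdoor", "sunny"): 100, ("outdoor", "rainy"): 0,
     ("indoor", "sunny"): 40, ("indoor", "rainy"): 50}     # Satisfaction utilities, assumed

acts, weathers = ("outdoor", "indoor"), ("sunny", "rainy")

# Scenario 3: choose an activity knowing nothing.
eu_no_info = max(sum(P_W[w] * U[(a, w)] for w in weathers) for a in acts)

# Scenario 1: choose after observing Weather Condition itself (perfect info).
eu_perfect = sum(P_W[w] * max(U[(a, w)] for a in acts) for w in weathers)

# Scenario 2: choose after observing only Weather Forecast. For each forecast
# f, weight utilities by the joint Pr(F=f, W=w); summing each forecast's best
# weighted utility over f gives the overall expected utility.
eu_forecast = sum(
    max(sum(P_F_given_W[(f, w)] * P_W[w] * U[(a, w)] for w in weathers)
        for a in acts)
    for f in weathers)

print(round(eu_no_info, 1), round(eu_forecast, 1), round(eu_perfect, 1))  # 70.0 73.6 85.0
print("value of the forecast:", round(eu_forecast - eu_no_info, 1))       # 3.6
```

The difference eu_forecast − eu_no_info (3.6 here) is the most the decision-maker should pay for the forecast, while eu_perfect − eu_no_info (15.0) is the value of perfect information on Weather Condition, an upper bound on what any forecast can be worth.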
Influence diagram : Influence diagrams are hierarchical and can be defined either in terms of their structure or in greater detail in terms of the functional and numerical relation between diagram elements. An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical representation and is referred to as a well-formed influence diagram (WFID). WFIDs can be evaluated using reversal and removal operations to yield answers to a large class of probabilistic, inferential, and decision questions. More recent techniques have been developed by artificial intelligence researchers concerning Bayesian network inference (belief propagation). An influence diagram having only uncertainty nodes (i.e., a Bayesian network) is also called a relevance diagram. An arc connecting node A to B implies not only that "A is relevant to B", but also that "B is relevant to A" (i.e., relevance is a symmetric relationship).
Influence diagram : Detwarasiti, A.; Shachter, R.D. (December 2005). "Influence diagrams for team decision analysis" (PDF). Decision Analysis. 2 (4): 207–228. doi:10.1287/deca.1050.0047. Holtzman, Samuel (1988). Intelligent decision systems. Addison-Wesley. ISBN 978-0-201-11602-1. Howard, R.A. and J.E. Matheson, "Influence diagrams" (1981), in Readings on the Principles and Applications of Decision Analysis, eds. R.A. Howard and J.E. Matheson, Vol. II (1984), Menlo Park CA: Strategic Decisions Group. Koller, D.; Milch, B. (October 2003). "Multi-agent influence diagrams for representing and solving games" (PDF). Games and Economic Behavior. 45: 181–221. doi:10.1016/S0899-8256(02)00544-4. Pearl, Judea (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Representation and Reasoning Series. San Mateo CA: Morgan Kaufmann. ISBN 0-934613-73-7. Shachter, R.D. (November–December 1986). "Evaluating influence diagrams" (PDF). Operations Research. 34 (6): 871–882. doi:10.1287/opre.34.6.871. Shachter, R.D. (July–August 1988). "Probabilistic inference and influence diagrams" (PDF). Operations Research. 36 (4): 589–604. doi:10.1287/opre.36.4.589. hdl:10338.dmlcz/135724. Virine, Lev; Trumper, Michael (2008). Project Decisions: The Art and Science. Vienna VA: Management Concepts. ISBN 978-1-56726-217-9. Pearl, J. (1985). Bayesian Networks: A Model of Self-Activated Memory for Evidential Reasoning (UCLA Technical Report CSD-850017). Proceedings of the Seventh Annual Conference of the Cognitive Science Society 15–17 April 1985. University of California, Irvine, CA. pp. 329–334. Retrieved 2010-05-01.
Influence diagram : What are influence diagrams? Pearl, J. (December 2005). "Influence Diagrams — Historical and Personal Perspectives" (PDF). Decision Analysis. 2 (4): 232–4. doi:10.1287/deca.1050.0055.
Junction tree algorithm : The junction tree algorithm (also known as 'Clique Tree') is a method used in machine learning to perform marginalization in general graphs. In essence, it entails performing belief propagation on a modified graph called a junction tree. The graph is called a tree because it branches into different sections of data; nodes of variables are the branches. The basic premise is to eliminate cycles by clustering them into single nodes. Multiple extensive classes of queries can be compiled at the same time into larger structures of data. There are different algorithms to meet specific needs and for what needs to be calculated. Inference algorithms gather new developments in the data and update the calculations based on the new information provided.
Junction tree algorithm : Lauritzen, Steffen L.; Spiegelhalter, David J. (1988). "Local Computations with Probabilities on Graphical Structures and their Application to Expert Systems". Journal of the Royal Statistical Society. Series B (Methodological). 50 (2): 157–224. doi:10.1111/j.2517-6161.1988.tb01721.x. JSTOR 2345762. MR 0964177. Dawid, A. P. (1992). "Applications of a general propagation algorithm for probabilistic expert systems". Statistics and Computing. 2 (1): 25–26. doi:10.1007/BF01890546. S2CID 61247712. Huang, Cecil; Darwiche, Adnan (1996). "Inference in Belief Networks: A Procedural Guide". International Journal of Approximate Reasoning. 15 (3): 225–263. CiteSeerX 10.1.1.47.3279. doi:10.1016/S0888-613X(96)00069-2. Lepar, V., Shenoy, P. (1998). "A Comparison of Lauritzen-Spiegelhalter, Hugin, and Shenoy-Shafer Architectures for Computing Marginals of Probability Distributions." https://arxiv.org/ftp/arxiv/papers/1301/1301.7394.pdf
Latent and observable variables : In statistics, latent variables (from Latin: present participle of lateo, “lie hidden”) are variables that can only be inferred indirectly through a mathematical model from other observable variables that can be directly observed or measured. Such latent variable models are used in many disciplines, including engineering, medicine, ecology, physics, machine learning/artificial intelligence, natural language processing, bioinformatics, chemometrics, demography, economics, management, political science, psychology and the social sciences. Latent variables may correspond to aspects of physical reality. These could in principle be measured, but may not be for practical reasons. Among the earliest expressions of this idea is Francis Bacon's polemic the Novum Organum, itself a challenge to the more traditional logic expressed in Aristotle's Organon: But the latent process of which we speak, is far from being obvious to men’s minds, beset as they now are. For we mean not the measures, symptoms, or degrees of any process which can be exhibited in the bodies themselves, but simply a continued process, which, for the most part, escapes the observation of the senses. In this situation, the term hidden variables is commonly used, reflecting the fact that the variables are meaningful, but not observable. Other latent variables correspond to abstract concepts, like categories, behavioral or mental states, or data structures. The terms hypothetical variables or hypothetical constructs may be used in these situations. The use of latent variables can serve to reduce the dimensionality of data. Many observable variables can be aggregated in a model to represent an underlying concept, making it easier to understand the data. In this sense, they serve a function similar to that of scientific theories. At the same time, latent variables link observable "sub-symbolic" data in the real world to symbolic data in the modeled world.
Latent and observable variables : There exists a range of different model classes and methodology that make use of latent variables and allow inference in the presence of latent variables. Models include: linear mixed-effects models and nonlinear mixed-effects models Hidden Markov models Factor analysis Item response theory Analysis and inference methods include: Principal component analysis Instrumented principal component analysis Partial least squares regression Latent semantic analysis and probabilistic latent semantic analysis EM algorithms Metropolis–Hastings algorithm
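Among the inference methods listed above, the EM algorithm can be illustrated on the classic two-coin mixture problem, where the identity of the coin behind each trial is the latent variable (a minimal sketch; the data, starting values, and iteration count are invented):

```python
import math

# Observed heads out of n = 10 flips per trial; which coin produced
# each trial is latent. Data and starting values are invented.
heads = [9, 8, 7, 2, 1, 3, 8, 2]
n = 10
p1, p2, pi = 0.6, 0.4, 0.5   # initial coin biases and mixing weight

def binom(h, p):
    return math.comb(n, h) * p**h * (1 - p)**(n - h)

for _ in range(50):                      # EM iterations
    # E-step: responsibility of coin 1 for each trial
    r = [pi * binom(h, p1) / (pi * binom(h, p1) + (1 - pi) * binom(h, p2))
         for h in heads]
    # M-step: re-estimate parameters from the expected assignments
    p1 = sum(ri * h for ri, h in zip(r, heads)) / (n * sum(r))
    p2 = sum((1 - ri) * h for ri, h in zip(r, heads)) / (n * sum(1 - ri for ri in r))
    pi = sum(r) / len(heads)

print(round(p1, 2), round(p2, 2))  # recovered biases of the two coins
```

Starting from a rough guess, the loop alternates between inferring the latent coin assignments and re-fitting the parameters, converging to the two clusters visible in the data.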
Latent and observable variables : Kmenta, Jan (1986). "Latent Variables". Elements of Econometrics (Second ed.). New York: Macmillan. pp. 581–587. ISBN 978-0-02-365070-3.
Markov blanket : In statistics and machine learning, when one wants to infer a random variable from a set of variables, usually a subset is enough, and the remaining variables are uninformative. Such a subset that contains all the useful information is called a Markov blanket. If a Markov blanket is minimal, meaning that no variable can be dropped from it without losing information, it is called a Markov boundary. Identifying a Markov blanket or a Markov boundary helps to extract useful features. The terms Markov blanket and Markov boundary were coined by Judea Pearl in 1988. A Markov blanket can be constituted by a set of Markov chains.
Markov blanket : A Markov blanket of a random variable Y in a random variable set S = {X_1, …, X_n} is any subset S_1 of S such that, conditioned on S_1, the other variables are independent of Y: Y ⊥⊥ (S ∖ S_1) ∣ S_1. This means that S_1 contains at least all the information one needs to infer Y; the variables in S ∖ S_1 are redundant. In general, a given Markov blanket is not unique. Any set in S that contains a Markov blanket is also a Markov blanket itself. Specifically, S is a Markov blanket of Y in S.
Markov blanket : A Markov boundary of Y in S is a subset S_2 of S such that S_2 itself is a Markov blanket of Y, but any proper subset of S_2 is not a Markov blanket of Y. In other words, a Markov boundary is a minimal Markov blanket. The Markov boundary of a node A in a Bayesian network is the set of nodes composed of A's parents, A's children, and A's children's other parents. In a Markov random field, the Markov boundary for a node is the set of its neighboring nodes. In a dependency network, the Markov boundary for a node is the set of its parents.
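The Bayesian-network rule above (parents, children, and the children's other parents) is mechanical enough to sketch directly; the DAG below is invented for illustration:

```python
# Markov boundary of a node in a Bayesian network: its parents, its
# children, and its children's other parents. The DAG is invented.
dag = {              # node -> list of parents
    "A": [],
    "B": ["A"],
    "C": ["A", "F"],
    "D": ["B", "C"],
    "F": [],
}

def markov_boundary(node):
    parents = set(dag[node])
    children = {n for n, ps in dag.items() if node in ps}
    spouses = {p for c in children for p in dag[c]}  # children's other parents
    return (parents | children | spouses) - {node}

print(markov_boundary("C"))  # parents {A, F}, child D, co-parent B
```

Conditioned on this set, C is independent of every remaining node in the network.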
Markov blanket : Andrey Markov Free energy minimisation Moral graph Separation of concerns Causality Causal inference == Notes ==
Moral graph : In graph theory, a moral graph is used to find the equivalent undirected form of a directed acyclic graph. It is a key step of the junction tree algorithm, used in belief propagation on graphical models. The moralized counterpart of a directed acyclic graph is formed by adding edges between all pairs of non-adjacent nodes that have a common child, and then making all edges in the graph undirected. Equivalently, a moral graph of a directed acyclic graph G is an undirected graph in which each node of the original G is now connected to its Markov blanket. The name stems from the fact that, in a moral graph, two nodes that have a common child are required to be married by sharing an edge. Moralization may also be applied to mixed graphs, called in this context "chain graphs". In a chain graph, a connected component of the undirected subgraph is called a chain. Moralization adds an undirected edge between any two vertices that both have outgoing edges to the same chain, and then forgets the orientation of the directed edges of the graph.
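The moralization step described above (marry all co-parents, then drop edge directions) can be sketched in a few lines; the example DAG is invented:

```python
from itertools import combinations

# Moralize a DAG: add an edge between every pair of co-parents of each
# node, then forget edge directions (edges become unordered pairs).
parents = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}

def moralize(parents):
    edges = set()
    for child, ps in parents.items():
        for p in ps:                          # original edges, undirected
            edges.add(frozenset((p, child)))
        for u, v in combinations(ps, 2):      # marry co-parents
            edges.add(frozenset((u, v)))
    return edges

moral = moralize(parents)
print(sorted(sorted(e) for e in moral))  # A-B is the "marriage" edge
```

Here A and B share the child C, so the edge A-B is added even though A and B are non-adjacent in the original DAG.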
Moral graph : A graph is weakly recursively simplicial if it has a simplicial vertex and the subgraph after removing a simplicial vertex and some edges (possibly none) between its neighbours is weakly recursively simplicial. A graph is moral if and only if it is weakly recursively simplicial. A chordal graph (a.k.a., recursive simplicial) is a special case of weakly recursively simplicial when no edge is removed during the elimination process. Therefore, a chordal graph is also moral. But a moral graph is not necessarily chordal.
Moral graph : Unlike chordal graphs that can be recognised in polynomial time, Verma & Pearl (1993) proved that deciding whether or not a graph is moral is NP-complete.
Moral graph : D-separation Tree decomposition
Moral graph : M. Studeny: On mathematical description of probabilistic conditional independence structures
Plate notation : In Bayesian inference, plate notation is a method of representing variables that repeat in a graphical model. Instead of drawing each repeated variable individually, a plate or rectangle is used to group variables that repeat together into a subgraph, and a number is drawn on the plate to represent the number of repetitions of the subgraph in the plate. The assumptions are that the subgraph is duplicated that many times, the variables in the subgraph are indexed by the repetition number, and any links that cross a plate boundary are replicated once for each subgraph repetition.
Plate notation : In this example, we consider Latent Dirichlet allocation, a Bayesian network that models how documents in a corpus are topically related. There are two variables not in any plate; α is the parameter of the uniform Dirichlet prior on the per-document topic distributions, and β is the parameter of the uniform Dirichlet prior on the per-topic word distribution. The outermost plate represents all the variables related to a specific document, including θ_i, the topic distribution for document i. The M in the corner of the plate indicates that the variables inside are repeated M times, once for each document. The inner plate represents the variables associated with each of the N_i words in document i: z_ij is the topic assignment for the jth word in document i, and w_ij is the actual word used. The N in the corner represents the repetition of the variables in the inner plate N_i times, once for each word in document i. The circle representing the individual words is shaded, indicating that each w_ij is observable, and the other circles are empty, indicating that the other variables are latent. The directed edges between variables indicate dependencies: for example, each w_ij depends on z_ij and β.
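The nested plates correspond directly to nested loops in the generative process. A hedged sketch of that process (the vocabulary, sizes, and hyperparameter values are invented; per-topic word distributions are drawn up front in place of the β plate):

```python
import random

random.seed(0)

# Generative sketch of the LDA plate structure: for each of M documents,
# draw a topic distribution theta ~ Dir(alpha); for each of its N_i
# words, draw a topic z_ij ~ theta and then a word w_ij from that
# topic's word distribution.
alpha, beta = 0.5, 0.5
K, vocab = 2, ["ball", "goal", "vote", "law"]

def dirichlet(conc, size):
    # Standard construction: normalized independent Gamma draws.
    draws = [random.gammavariate(conc, 1.0) for _ in range(size)]
    s = sum(draws)
    return [d / s for d in draws]

phi = [dirichlet(beta, len(vocab)) for _ in range(K)]  # per-topic word dists
docs = []
for i in range(3):                                     # outer plate: M = 3
    theta = dirichlet(alpha, K)                        # per-document topics
    words = []
    for j in range(5):                                 # inner plate: N_i = 5
        z = random.choices(range(K), weights=theta)[0]          # z_ij
        words.append(random.choices(vocab, weights=phi[z])[0])  # w_ij
    docs.append(words)
print(docs)
```

Each loop level replicates exactly the variables inside the corresponding plate, with the loop index playing the role of the repetition index in the diagram.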
Plate notation : A number of extensions have been created by various authors to express more information than simply the conditional relationships. However, few of these have become standard. Perhaps the most commonly used extension is to use rectangles in place of circles to indicate non-random variables—either parameters to be computed, hyperparameters given a fixed value (or computed through empirical Bayes), or variables whose values are computed deterministically from a random variable. The diagram on the right shows a few more non-standard conventions used in some articles in Wikipedia (e.g. variational Bayes): Variables that are actually random vectors are indicated by putting the vector size in brackets in the middle of the node. Variables that are actually random matrices are similarly indicated by putting the matrix size in brackets in the middle of the node, with commas separating row size from column size. Categorical variables are indicated by placing their size (without a bracket) in the middle of the node. Categorical variables that act as "switches", and which pick one or more other random variables to condition on from a large set of such variables (e.g. mixture components), are indicated with a special type of arrow containing a squiggly line and ending in a T junction. Boldface is consistently used for vector or matrix nodes (but not categorical nodes).
Plate notation : Plate notation has been implemented in various TeX/LaTeX drawing packages, but also as part of graphical user interfaces to Bayesian statistics programs such as BUGS and BayesiaLab and PyMC. == References ==
Variational message passing : Variational message passing (VMP) is an approximate inference technique for continuous- or discrete-valued Bayesian networks, with conjugate-exponential parents, developed by John Winn. VMP was developed as a means of generalizing the approximate variational methods used by such techniques as latent Dirichlet allocation, and works by updating an approximate distribution at each node through messages in the node's Markov blanket.
Variational message passing : Given some set of hidden variables H and observed variables V, the goal of approximate inference is to maximize a lower bound on the log probability that a graphical model is in the configuration V. Over some probability distribution Q (to be defined later), ln P(V) = ∑_H Q(H) ln [ P(H,V) / P(H|V) ] = ∑_H Q(H) [ ln ( P(H,V) / Q(H) ) − ln ( P(H|V) / Q(H) ) ]. So, if we define our lower bound to be L(Q) = ∑_H Q(H) ln ( P(H,V) / Q(H) ), then the log likelihood is simply this bound plus the relative entropy between Q and P(H|V). Because the relative entropy is non-negative, the function L defined above is indeed a lower bound of the log likelihood of our observation V. The distribution Q will have a simpler character than that of P because marginalizing over P is intractable for all but the simplest of graphical models. In particular, VMP uses a factorized distribution Q(H) = ∏_i Q_i(H_i), where each H_i is a disjoint part of the graphical model.
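In cleaner notation, the decomposition above can be written as a single identity (a restatement of the same terms, with KL denoting relative entropy):

```latex
\ln P(V)
= \underbrace{\sum_{H} Q(H)\,\ln \frac{P(H,V)}{Q(H)}}_{\mathcal{L}(Q)}
\; + \;
\underbrace{\sum_{H} Q(H)\,\ln \frac{Q(H)}{P(H \mid V)}}_{\mathrm{KL}\left(Q \,\middle\|\, P(\cdot \mid V)\right) \,\ge\, 0}
```

Since the left-hand side does not depend on Q, maximizing the bound L(Q) is equivalent to minimizing the KL divergence from Q to the true posterior.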
Variational message passing : The likelihood estimate needs to be as large as possible; because it is a lower bound, getting closer to ln P(V) improves the approximation of the log likelihood. By substituting in the factorized version of Q, the bound L(Q), parameterized over the hidden nodes H_i as above, is simply the negative relative entropy between Q_j and Q_j* plus terms independent of Q_j, if Q_j* is defined as Q_j*(H_j) = (1/Z) e^{ E_{−j}[ ln P(H,V) ] }, where E_{−j} denotes the expectation over all distributions Q_i except Q_j and Z is a normalizing constant. Thus, if we set Q_j to be Q_j*, the bound L is maximized.
Variational message passing : Parents send their children the expectation of their sufficient statistic while children send their parents their natural parameter, which also requires messages to be sent from the co-parents of the node.
Variational message passing : Because all nodes in VMP come from exponential families and all parents of nodes are conjugate to their children nodes, the expectation of the sufficient statistic can be computed from the normalization factor.
Variational message passing : The algorithm begins by computing the expected value of the sufficient statistics for each node. Then, until the likelihood converges to a stable value (usually accomplished by setting a small threshold value and running the algorithm until the bound increases by less than that threshold), do the following at each node: Get all messages from parents. Get all messages from children (this might require the children to get messages from the co-parents). Compute the expected value of the node's sufficient statistics.
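Specialized by hand to a single Gaussian with unknown mean and precision (the conjugate-exponential case discussed below), the update loop reduces to the classic mean-field coordinate updates. A hedged sketch with invented data and deliberately weak priors, iterating a fixed number of times rather than testing a threshold:

```python
# Mean-field variational inference for x_i ~ N(mu, 1/tau), with priors
# mu ~ N(mu0, 1/(lam0*tau)) and tau ~ Gamma(a0, b0) -- a minimal
# special case of the coordinate updates that VMP automates through
# message passing. Data and hyperparameters are invented.
data = [4.1, 3.9, 4.3, 3.8, 4.0, 4.2, 3.7, 4.0]
N = len(data)
xbar = sum(data) / N
mu0, lam0, a0, b0 = 0.0, 0.01, 0.01, 0.01

E_tau = 1.0                      # initial guess for E[tau]
for _ in range(100):             # iterate the coupled updates to convergence
    # Update q(mu) = Normal(mu_N, 1/lam_N)
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # Update q(tau) = Gamma(a_N, b_N), using
    # E[(x - mu)^2] = (x - mu_N)^2 + 1/lam_N under q(mu)
    a_N = a0 + (N + 1) / 2
    b_N = b0 + 0.5 * (sum((x - mu_N) ** 2 for x in data) + N / lam_N
                      + lam0 * ((mu_N - mu0) ** 2 + 1 / lam_N))
    E_tau = a_N / b_N

print(round(mu_N, 2))  # posterior mean of mu, close to the sample mean
```

Each update uses only expectations of the other factor's sufficient statistics, which is exactly the information carried by VMP's parent and child messages.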
Variational message passing : Because every child must be conjugate to its parent, this has limited the types of distributions that can be used in the model. For example, the parents of a Gaussian distribution must be a Gaussian distribution (corresponding to the Mean) and a gamma distribution (corresponding to the precision, or one over σ in more common parameterizations). Discrete variables can have Dirichlet parents, and Poisson and exponential nodes must have gamma parents. More recently, VMP has been extended to handle models that violate this conditional conjugacy constraint.
Variational message passing : Winn, J.M.; Bishop, C. (2005). "Variational Message Passing" (PDF). Journal of Machine Learning Research. 6: 661–694. Beal, M.J. (2003). Variational Algorithms for Approximate Bayesian Inference (PDF) (PhD). Gatsby Computational Neuroscience Unit, University College London. Archived from the original (PDF) on 2005-04-28. Retrieved 2007-02-15.
Variational message passing : Infer.NET: an inference framework which includes an implementation of VMP with examples. dimple: an open-source inference system supporting VMP. An older implementation of VMP with usage examples.
Activity recognition : Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations on the agents' actions and the environmental conditions. Since the 1980s, this research field has captured the attention of several computer science communities due to its strength in providing personalized support for many different applications and its connection to many different fields of study such as medicine, human-computer interaction, or sociology. Due to its multifaceted nature, different fields may refer to activity recognition as plan recognition, goal recognition, intent recognition, behavior recognition, location estimation and location-based services.
Activity recognition : There are some popular datasets that are used for benchmarking activity recognition or action recognition algorithms. UCF-101: It consists of 101 human action classes, over 13k clips and 27 hours of video data. Action classes include applying makeup, playing dhol, cricket shot, shaving beard, etc. HMDB51: This is a collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,849 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. Kinetics: This is a significantly larger dataset than the previous ones. It contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. This dataset was created by DeepMind.
Activity recognition : By automatically monitoring human activities, home-based rehabilitation can be provided for people suffering from traumatic brain injuries. One can find applications ranging from security-related applications and logistics support to location-based services. Activity recognition systems have been developed for wildlife observation and energy conservation in buildings.
Activity recognition : AI effect Applications of artificial intelligence Conditional random field Gesture recognition Hidden Markov model Motion analysis Naive Bayes classifier Support vector machines Object co-segmentation Outline of artificial intelligence Video content analysis == References ==
AlchemyAPI : AlchemyAPI was a software company in the field of machine learning. Its technology employed deep learning for various applications in natural language processing, such as semantic text analysis and sentiment analysis, as well as computer vision. AlchemyAPI offered both traditionally licensed software products as well as API access under a Software as a service model. After acquisition by IBM in 2015, the products were integrated into the Watson line of products and the brand name eventually disappeared.
AlchemyAPI : As the name suggests, the business model of charging for access to an API was central to the company's identity and uncommon for its time: A TechCrunch article highlighted that even though the technology was similar to IBM's Watson, the pay-per-use model made it more accessible, especially to non-enterprise customers. At one point, AlchemyAPI served over 3 billion API calls per month.
AlchemyAPI : AlchemyAPI was founded by Elliot Turner in 2005, and launched its API in 2009. In September 2011, ProgrammableWeb added AlchemyAPI to its API Billionaires Club, alongside giants such as Google and Facebook. In February 2013, it was announced that AlchemyAPI had raised US$2 million to improve the capabilities of its deep learning technology. In September 2013, it was reported that AlchemyAPI had created a Google Glass app that could identify what a person was looking at, and that AlchemyAPI would soon be rolling out deep learning-based image recognition as a service. As of February 2014 (prior to the IBM acquisition), it claimed to have clients in 36 countries and process over 3 billion documents a month. In May 2014, it was reported that AlchemyAPI had released a computer vision API known as AlchemyVision, capable of recognizing objects in photographs and providing image similarity search capabilities. In March 2015, it was announced that AlchemyAPI had been acquired by IBM and the company's breakthroughs in deep learning would accelerate IBM's development of next generation cognitive computing applications. IBM reported plans to integrate AlchemyAPI's deep learning technology into the core Watson platform.
AlchemyAPI : A February 2013 article in VentureBeat about big data named AlchemyAPI as one of the primary forces responsible for bringing natural language processing capabilities to the masses. In November 2013, GigaOm listed AlchemyAPI as one of the top startups working in deep learning, along with Cortica and Ersatz.
AlchemyAPI : Official website
AlphaDev : AlphaDev is an artificial intelligence system developed by Google DeepMind to discover enhanced computer science algorithms using reinforcement learning. AlphaDev is based on AlphaZero, a system that mastered the games of chess, shogi and go by self-play. AlphaDev applies the same approach to finding faster algorithms for fundamental tasks such as sorting and hashing.
AlphaDev : On June 7, 2023, Google DeepMind published a paper in Nature introducing AlphaDev, which discovered new algorithms that outperformed the state-of-the-art methods for small sort algorithms. For example, AlphaDev found a faster assembly language sequence for sorting 5-element sequences. Upon analysing the algorithms in-depth, AlphaDev discovered two unique sequences of assembly instructions called the AlphaDev swap and copy moves that avoid a single assembly instruction each time they are applied. For variable sort algorithms, AlphaDev discovered fundamentally different algorithm structures. For example, for VarSort4 (sort up to 4 elements) AlphaDev discovered an algorithm 29 assembly instructions shorter than the human benchmark. AlphaDev also improved on the speed of hashing algorithms by up to 30% in certain cases. In January 2022, Google DeepMind submitted its new sorting algorithms to the organization that manages C++, one of the most popular programming languages in the world, and after independent vetting, AlphaDev's algorithms were added to the library. This was the first change to the C++ Standard Library sorting algorithms in more than a decade and the first update to involve an algorithm discovered using AI. In January 2023, DeepMind also added its hashing algorithm for inputs from 9 to 16 bytes to Abseil, an open-source collection of prewritten C++ algorithms that can be used by anyone coding with C++. Google estimates that these two algorithms are used trillions of times every day.
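The small fixed-size sorts AlphaDev optimizes are typically branchless sorting networks, i.e. fixed sequences of compare-and-exchange operations. The sketch below shows a standard optimal 3-element network for illustration only; it is not AlphaDev's discovered code:

```python
def sort3(a, b, c):
    # A fixed compare-exchange sequence: three comparators, no
    # data-dependent control flow beyond the min/max selections.
    # Shortening such fixed sequences is the kind of saving the
    # AlphaDev swap and copy moves achieve at the assembly level.
    a, b = min(a, b), max(a, b)
    b, c = min(b, c), max(b, c)
    a, b = min(a, b), max(a, b)
    return a, b, c

print(sort3(3, 1, 2))  # → (1, 2, 3)
```

Because the instruction sequence is fixed regardless of the input, removing even one instruction from it saves time on every invocation, which is why shaving single instructions from such kernels matters at scale.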
AlphaDev : AlphaDev is built on top of AlphaZero, the reinforcement-learning model that DeepMind trained to master games such as Go and chess. The company's breakthrough was to treat the problem of finding a faster algorithm as a game and then train its AI to win it. AlphaDev plays a single-player game where the objective is to iteratively build an algorithm in assembly language that is both fast and correct. AlphaDev uses a neural network to guide its search for optimal moves, and learns from its own experience and synthetic demonstrations. AlphaDev showcases the potential of AI to advance the foundations of computing and optimize code for different criteria. Google DeepMind hopes that AlphaDev will inspire further research on using AI to discover new algorithms and improve existing ones.
AlphaDev : The primary learning algorithm in AlphaDev is an extension of AlphaZero.