diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzpktf" "b/data_all_eng_slimpj/shuffled/split2/finalzzpktf" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzpktf" @@ -0,0 +1,5 @@ +{"text":"\n\\chapter*{Preface to Internet Supplement}\nThis document is an \ninternet supplement to my book ``Partially Observed Markov Decision Processes -- From Filtering to Controlled Sensing'' published\nby Cambridge University Press in 2016.\\footnote{Online ISBN:9781316471104\nand \nHardback ISBN:9781107134607}\n\nThis internet supplement contains exercises, examples and case studies.\nThe material appears in this internet supplement (instead of the book) so that it can be updated.\n This document will evolve over time and further discussion and examples will be added.\n\n\n\n\nThe website \\url{http:\/\/www.pomdp.org} contains downloadable software for solving POMDPs and several examples of POMDPs.\nI have found that by interfacing the POMDP solver with Matlab, one can solve several interesting types of POMDPs such as those \nwith nonlinear costs (in terms of the information state) and bandit problems. \n\nI have given considerable thought into designing the exercises and case studies in this internet supplement.\nThey are mainly mini-research type exercises rather than simplistic drill type exercises. Some of the problems\nare extensions of the material in the book. As can be seen from the content list, this document also contains some short (and in some cases, fairly incomplete) case studies which will be made more detailed over time.\nThese case studies were put in this internet supplement in order to keep the size of the book manageable.\nAs time progresses, I hope to incorporate additional case studies and other pedagogical notes to this document to assist in understanding some of the material in the book. Time permitting, future plans include adding a detailed discussion on structural results for POMDP games; structural results for quasi-variational\ninequalities, etc.\n\n\nTo avoid confusion in numbering, the equations in this internet supplement are numbered consecutively starting from (1) and not chapter wise.\nIn comparison, the equations in the book are numbered chapterwise.\n\n\n\n\n This internet supplement document is work in progress and will be updated periodically.\nI welcome constructive comments from readers of the book and this internet supplement.\nPlease email me at {\\tt vikramk@ece.ubc.ca} \\\\\n\\\\ \\\\\n\n\\hfill\n\\begin{minipage}{6cm}\nVikram Krishnamurthy,\\\\\n 2016\n \\end{minipage}\n\n\\newpage\n\n\\setcounter{chapter}{1}\n\n\n\\chapter{Stochastic State Space Models} \n\n\n\\begin{compactenum}\n\n\n\\item Theorem \\ref{thm:pf} dealt with the stationary distribution and eigenvalues of a stochastic matrix (transition probability matrix of a Markov chain).\nParts of Theorem \\ref{thm:pf} can be shown via elementary linear algebra.\n\n{\\bf Statement 2}:\n Define spectral radius $\\bar{\\lambda}(P) = \\max_i|\\lambda_i|$ \\\\\n{\\em Lemma} : $\\bar{\\lambda}(P) \\leq \\|P\\|_\\infty$ where $\\|P\\|_\\infty =\\max_i \\sum_j P_{ij} $\\\\\n{\\em Proof}: For all eigenvalues $\\lambda$, $|\\lambda | \\|x\\| = \\|\\lambda x\\| = \\|P x \\| \\leq \\|P \\| \\|x \\| \\implies\n |\\lambda| < \\|P\\|$.\n\nFor a stochastic matrix, $\\|P\\|_\\infty =1 $ and $P$ has an eigenvalue at 1. 
So $\\bar{\\lambda}=1$.\n\n\n{\\bf Statement 3}: For non-negative matrix $P$, $P^\\prime \\pi = \\pi $ implies $P^\\prime|\\pi| = |\\pi|$ where $|\\pi|$ denotes the vector with element-wise absolute values. \\\\\nProof: $|\\pi| = |P^\\prime \\pi| \\leq |P^\\prime| |\\pi| = P^\\prime |\\pi|$\nSo $P^\\prime|\\pi| - |\\pi| \\geq 0$.\\\\\nBut $P^\\prime|\\pi| - |\\pi| > 0$ is impossible, since it implies $1^\\prime P^\\prime |\\pi| > 1^\\prime |\\pi|$,\ni.e., $1^\\prime |\\pi| > 1^\\prime |\\pi|$.\n\n\n\\item Farkas' lemma \\index{Farkas' lemma} is a widely used result in linear algebra. It states: Let $M$ be an $m \\times n$ matrix and $b$ an $m$-dimensional vector. Then only one of the following statements is true:\n\\begin{enumerate}\n\\item There exists a vector $x\\in {\\rm I\\hspace{-.07cm}R}^n$ such that $M x = b$ and $ x \\geq 0$. \n\\item There exists a vector $y\\in {\\rm I\\hspace{-.07cm}R}^m$ such that $M^\\prime y \\geq 0$ and $b^\\prime y < 0$.\n\\end{enumerate}\nHere $x\\geq 0$ means that all components of the vector $x$ are non-negative.\n\nUse Farkas lemma to prove that every transition matrix $P$ has a stationary distribution. That is, for any $X \\timesX$ stochastic matrix $P$, there\nexists a probability vector $\\pi$ such that $P^\\prime \\pi = \\pi$. (Recall a probability vector $\\pi$ satisfies\n$\\pi(i) \\geq 0$, $\\sum_i \\pi(i) = 1$).\n\nHint: Write alternative (a) of Farkas lemma as\n$$ \\begin{bmatrix} (P - I)^\\prime \\\\ \\mathbf{1}^\\prime \\end{bmatrix} \\pi = \\begin{bmatrix} 0_{X} \\\\ 1 \\end{bmatrix} , \\quad \\pi > 0$$\nShow that this has a solution by demonstrating that alternative (b) does not have a solution.\n\n\n\n\n\n\\item\nUsing the maneuvering target model of Chapter \\ref{chp:manuever}, \\index{maneuvering target}\nsimulate the dynamics and measurement process of a target with the following specifications:\\\\\n{\\small\n\\begin{tabular}{|l|ll|} \\hline \\hline\nSampling interval & $\\Delta = 7$ s & \\\\ \\hline\nNumber of measurements & $N = 50$ &\\\\ \\hline\nInitial target position & $(-500 , -500)'$ m &\\\\ \\hline\nInitial target velocity & $(0.0, 5.0)'$ m\/s &\\\\ \\hline\nTransition probability matrix & $P_{ij} = \n\\left\\{\\begin{array}{cc} 0.9 \\mbox{ if } i=j\\\\\n 0.05 \\mbox{ otherwise }\\end{array}\\right.$ &\\\\\n\\hline \nManeuver commands (three)& $f \\mc = \\left( \\begin{array}{llll}\n 0 & 0 & 0 & 0\n \\end{array} \\right)'$ &(straight)\n \\\\\n& $f \\mc = \\left( \\begin{array}{llll}\n -1.225 & -0.35 & 1.225 & 0.35\n \\end{array} \\right)'$ &(left turn)\n \\\\\n& $f \\mc = \\left( \\begin{array}{llll}\n 1.225 & 0.35 & -1.225 & -0.35\n \\end{array} \\right)'$ &(right turn) \\\\ \\hline\nObservation matrix & $H=I_{4\\times 4}$ &\\\\ \\hline\nProcess noise & $Q=(0.1)^2 I_{4\\times 4}$ &\\\\ \\hline\nMeasurement noise & $R= \\mbox{diag}( 20.0^2, 1.0^2, 20.0^2, 1.0^2)$ &\\\\ \\hline\nMeasurement volume $V$ & $[-1000, 1000]$ m in $x$ and $y$ position &\\\\ \n & $[-10.0, 10.0]$ m\/s in $x$ and $y$ velocity &\n\\\\ \\hline \\hline\n\\end{tabular} }\n\n\\item Simulate the optimal predictor via the composition method. The composition method is discussed in \\S \\ref{sec:jmlspred}.\n\n\n\\item As should be apparent from an elementary linear systems course, the algebraic Lyapunov equation (\\ref{eq:algebraic_lyapunov})\nis intimately linked with the stability of a linear discrete time system. 
Prove that $A$ has all its eigenvalues strictly inside the unit circle\niff for every positive definite matrix $Q$, there exists a positive definite matrix $\\Sigma_\\infty$ such that (\\ref{eq:algebraic_lyapunov}) holds.\n\n\\item Theorem \\ref{thm:dobproperty} states that \n$|\\lambda_2| \\leq \\dob(P) $. That is, the Dobrushin coefficient upper bounds the second largest eigenvalue modulus of a stochastic matrix $P$. \nShow that \n$$\n\\log |\\lambda_2| = \\lim_{k\\rightarrow \\infty} \\frac{1}{k} \\log \\dob(P^k) $$\n\n\n\\item \nOften for sparse transition matrices, $\\dob(P)$ is typically equal to 1 and therefore not useful since it provides a trivial upper bound for $|\\lambda_2|$.\nFor example, consider a random walk characterized by the tridiagonal transition matrix\n$$ P = \\begin{bmatrix} \nr_0 & p_0 & 0 & 0 & \\cdots & 0 \\\\\nq_1 & r_1 & p_1 & 0 & \\cdots & 0 \\\\\n0 & q_2 & r_2 & p_2 & \\cdots & 0 \\\\\n\\vdots & & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n0 & \\cdots & 0 & q_{X-1} & r_{X-1} & p_{X-1} \\\\\n0 & \\cdots & 0 & 0 & q_{X} & r_{X} \\end{bmatrix}\n$$\nThen using Property 3 of $\\dob(\\cdot)$ above, clearly\n$\\sum_l \\min \\{ P_{il}, P_{jl} \\} = 0$, implying that\n $\\dob(P) = 1$. So for this example, the Dobrushin coefficient does not say anything about the initial condition being forgotten geometrically fast.\n \n For such cases, it is often useful to consider the Dobrushin coefficient of powers of $P$.\nIn the above example, clearly every state communicates with every other state in at least $X$ time points. So\n $P^X$ has strictly positive elements. Therefore $\\dob(P^X)$ is strictly smaller than 1 and is a useful bound. Geometric ergodicity follows\n by consider blocks of length $X$, i.e., \n $$ \\dvar{ { P^X}^\\prime \\pi} {{P^X}^\\prime \\bar{\\pi} } \\leq \\dob(P^X) \\dvar{\\pi}{\\bar{\\pi}} $$\n\n\\item Show that the inhomogeneous Markov chain with transition matrix\n$$ P(2n-1) = \\begin{bmatrix} 0.5 & 0.5 \\\\ 1 & 0 \\end{bmatrix} ,\\quad P(2n) = \\begin{bmatrix} 0 & 1 \\\\ 1 & 0 \\end{bmatrix}$$\nis weakly ergodic.\n\n\n\\item {\\bf Wasserstein distance.} \n\\index{Wasserstein distance}\nAs mentioned in \\S \\ref{sec:modelnotes} of the book, the Dobrushin coefficient is a special case of a more general coefficient of ergodicity.\nThis general definition is in terms of the \nWasserstein metric which we now define: Let $d$ be a metric on the state space $\\mathcal{X} = \\{e_1,e_2,\\ldots,\\}$ where the state space is possibly denumerable.\nConsider the bivariate random vector $(x,y) \\in \\mathcal{X}\\times \\mathcal{X}$ with marginals $\\pi_x$ and $\\pi_y$, respectively.\n\nDefine the Wasserstein distance as\n$$ d(\\pi_x,\\pi_y) = \\inf \\E\\{ d(x,y) \\} $$\nwhere the infimum is over the joint distribution of $(x,y)$.\n\n\\begin{compactenum}\n\\item Show that the variational distance is a special case of the Wasserstein distance obtained by choosing $d(x,y)$ as the discrete metric\n$$ d(x,y) = \\begin{cases} 1 & x \\neq y \\\\ 0 & x = y .\\end{cases} $$\n\\item Define the coefficient of ergodicity associated with the Wasserstein distance as \n$$ \\dob(P) = \\sup_{i \\neq j} \\frac{d( P^\\prime e_i, P^\\prime e_j)}{d(e_i,e_j)} $$\nShow that the Dobrushin coefficient is a special case of the above coefficient of ergodicity corresponding to the discrete metric.\n\n\\item Show that the above coefficient of ergodicity satisfies properties 2, 4 and 5 of Theorem \\ref{thm:dobproperty}.\n\\end{compactenum}\n\n\\item {\\bf Ultrametric transition matrices.} 
\\index{ultrametric matrix}\nIt is trivial to verify that $P^n$ is a stochastic matrix for any integer $n \\geq 0$.\nUnder what conditions is $P^{1\/n}$ a stochastic matrix?\nA symmetric ultrametric stochastic matrix $P$ defined in \\S \\ref{sec:blackwelldom} of the book satisfies this property.\n\n\n\\begin{comment}\n\\item \nThe MATLAB code to generate a discrete valued\n random variable with pmf {\\tt prob} is\n {\\small \n\\begin{lstlisting}\nfunction rand_number = simulatestate(prob);\nstatedim = numel(prob); \nmm = max(prob);\nc = mm * statedim;\naccept = false;\nwhile accept == false\n y = floor(statedim * rand) + 1; \n if rand <= prob(y) \/ mm\n rand_number = y;\n accept = true;\n end\nend\n\\end{lstlisting}} \\end{comment}\n\\end{compactenum}\n\n\\chapter{Optimal Filtering} \n\n\\section{Problems}\n\\begin{compactenum}\n\\item Standard drill exercises include: \n\\begin{compactenum}\n\\item Compare via simulations the recursive least squares with the Kalman filter\n\\item Compare via simulations the recursive least squares and the least mean squares\n (LMS) algorithm with a HMM filter when tracking a slow Markov chain. Note that Chapter \\ref{sec:hmmsa} of the book gives performance bounds on how well a LMS algorithm can track a slow Markov chain.\n \\item\nAnother standard exercise is to try out variations of the particle filter with different importance distributions and resampling strategies on different models.\nCompare via simulations the cubature filter, unscented Kalman filter and a particle filter for a bearings only target tracking model.\n\\item\nA classical result involving the Kalman filter is the so called innovations state space model representation and the associated spectral factorization\nproblem for the Riccati equation, see \\cite{AM79}.\n\\item {\\bf Posterior Cramer Rao bound.} The posterior Cramer Rao bound \\cite{TMN98} for filtering can be used to compute a lower bound to the mean square error. \\index{posterior Cramer Rao bound}\nThis requires twice differentiability of the logarithm of the joint density. For HMMs, one possibility is to consider the Weiss-Weinstein\nbounds \\index{Weiss-Weinstein bounds}, see \\cite{RO05}. Chapter~\\ref{chp:filterstructure}\n of the book gives more useful sample path bounds on the HMM filter using\nstochastic dominance.\n\\end{compactenum}\n\n\\item {\\bf Bayes' rule interpretation of Lasso.}\\cite{PC08} \\index{Lasso} Suppose that \nthe state $x \\in {\\rm I\\hspace{-.07cm}R}^X$ is a random variable with prior pdf\n$$ p(x) = \\prod_{j=1}^X \\frac{\\lambda}{2} \\, \\exp\\left( -\\lambda |x(j)| \\right) .$$\nSuppose $x$ is observed via the observation equation\n$$ y = F x + v, \\qquad v \\sim \\normal(0,\\sigma^2 I) $$\nwhere $F$ is a known $n \\times X$ matrix.\nThe variance $\\sigma^2$ is not known and has a prior pdf $p(\\sigma^2)$. 
Then show that the posterior of $(x,\\sigma^2)$ given \nthe observation $y$ is of the form\n$$ \np(x,\\sigma^2| y) \\propto\np(\\sigma^2) \\, (\\sigma^2)^{-\\frac{n+1}{2}} \\exp\\big( - \\frac{1}{2\\sigma^2}\\, \\text{Lasso}(x,y,\\mu)\\,\\big) $$\nwhere $\\mu = 2 \\sigma^2 \\lambda$ and \n$$ \\text{Lasso}(x,y,\\mu) = \\|y-F x\\|^2 + \\mu \\|x\\|_1 .$$\nTherefore for fixed $\\sigma^2$, computing the mode $\\hat{x}$ of the posterior is equivalent to computing the minimizer $\\hat{x}$ of $\\text{Lasso}(x,y,\\mu) $.\n\nThe resulting Lasso (least absolute shrinkage and selection operator) estimator $\\hat{x}$ was proposed in \\cite{Tib96} which is one of the most influential papers \n in statistics since the 1990s. Since $\\text{Lasso}(x,y,\\mu) $ is convex in $x$ it can be computed efficiently via convex optimization algorithms.\n\n\\item Show that if $X \\leq Y$ (with probability 1), then $\\E\\{X|Z\\} \\leq \\E\\{Y|Z\\}$ for any information $Z$.\n\\item Show that for a linear Gaussian system (\\ref{eq:filter_linears}), (\\ref{eq:filter_linearobs}),\n$$ p(y_k|y_{1:k-1}) = \\normal(y_k - y_{k|k-1}, H_k \\Sigma_{k|k-1} H^\\prime_k + R_k ) $$\nwhere $y_{k|k-1}$ and $\\Sigma_{k|k-1}$ are defined in (\\ref{eq:statepred}), (\\ref{eq:covpred}), respectively.\n\n\n\\item Simulate in Matlab the HMM filter, and fixed lag smoother. Study empirically how the error probability of the estimates\ndecreases with lag. (The filter is a fixed lag smoother with lag of zero). Please also refer to \\cite{GSV05} for a very nice analysis\nof error probabilities.\n\n\\item Consider a HMM where the Markov chain evolves slowly with transition matrix $P = I + \\epsilon Q$ where $\\epsilon $ is a small\npositive constant and \n$Q$ is a generator matrix. That is $Q_{ii} < 0$, $Q_{ij} > 0$ and each row of $Q$ sums to zero.\nCompare the performance of the HMM filter with the recursive least squares algorithm (with an appropriate\nforgetting factor chosen) for estimating the underlying state.\n\n\n\\item Consider the following Markov modulated auto-regressive time series model: \n$$ z_{k+1} = F(r_{k+1})\\,z_k + \\Gamma(r_{k+1})\\,w_{k+1} + f(r_{k+1})\\,u_{k+1} $$\nwhere $w_{k} \\sim \\normal(0,1)$, $u_k$ is a known exogenous input.\nAssume the sequence $\\{z_k\\}$ is observed. \n Derive an optimal filter for the underlying Markov chain $r_k$. \n (In comparison to a jump Markov linear system, $z_k$ is observed without noise in this problem. The optimal filter is very\n similar to the HMM filter).\n \n \n \\item Consider a Markov chain $x_k$ corrupted by iid zero mean Gaussian noise and a sinusoid:\n $$y_k = x_k + \\sin(k\/100) + v_k$$\nObtain a filtering algorithm for extracting $x_k$ given the observations.\n \n \\item {\\bf Image Based Tracking.} The idea is to estimate the coordinates $z_k$ of the target by measuring its orientation $r_k$ in noise.\n For example an imager can determine which direction an aircraft's nose is pointing thereby giving useful information about which\n direction it can move.\n Assume that the target's orientation evolves according to a finite state Markov chain. (In other words, the imager quantizes the target\n orientation to one of a finite number of possibilities.) Then the model for the filtering problem is\n \\[\n \\begin{split}\n z_{k+1} &= F(r_{k+1})\\,z_k + \\Gamma(r_{k+1})\\,w_{k+1} \\\\\ny_k &\\sim p(y | r_k) \n \\end{split} \\]\n Derive the filtering expression for $\\E\\{z_k | y_1,\\ldots,y_k\\}$. The papers \\cite{SSD93,KE97a,EE99} consider image based filtering. 
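\n\nAs a hint (this is a sketch of one possible approach, not taken from the book), note that since $y_k$ depends only on the finite-state chain $r_k$ and $w_{k+1}$ is zero mean, one can propagate, together with the un-normalized HMM filter $q_k(j) = p(r_k = j, y_{1:k})$, the un-normalized first moment $m_k(j) = \\int z \\, p(z_k = z, r_k = j, y_{1:k})\\, dz$. With $P_{ij} = \\mathbb{P}(r_{k+1}=j | r_k = i)$,\n\\begin{align*}\nq_{k+1}(j) &= p(y_{k+1}| r_{k+1}=j) \\sum_i P_{ij}\\, q_k(i), \\\\\nm_{k+1}(j) &= p(y_{k+1}| r_{k+1}=j)\\, F(j) \\sum_i P_{ij}\\, m_k(i),\n\\end{align*}\ninitialized by $q_0(j) = \\pi_0(j)$ and $m_0(j) = \\pi_0(j)\\, \\E\\{z_0 | r_0 = j\\}$. Then $\\E\\{z_k | y_{1:k}\\} = \\frac{\\sum_j m_k(j)}{\\sum_j q_k(j)}$.\n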
\\index{image-based tracking}\n \n \\item Consider a jump Markov linear system. Via computer simulations, compare the IMM algorithm, Unscented Kalman filter and particle filter.\n \n \n\n \n \\item {\\bf Radar pulse train de-interleaving.} \\index{de-interleaving} \n \\index{jump Markov linear system! pulse de-interleaving} \n In radar signal processing, radar pulses are received from multiple periodic sources.\n It is of interest to estimate the periods of these sources. For example, suppose:\n \\begin{compactitem} \n\\item source 1 pulses are received at times\n$$ 2, 7, 12, 17, 22 , 27, 32, 37, 42,\\ldots , \\quad \\text{ (period = 5, phase =2)} $$\n\\item source 2 pulses are received at times\n$$ 4, 15, 26, 37, 48, 59, 70, 81,\\ldots , \\quad \\text{ (period = 11, phase = 4) }.$$\\end{compactitem} The interleaved signal consists of pulses at times\n$$2,4,7,12,15,22,26,27, 32, 37, 42,\\ldots. $$ So the above interleaved signal contains time of arrival information. \nNote that at time 37, pulses are received from both sources; but it is assumed that there is no amplitude information - so the received signal\nis simply a time of arrival event at time 37.\nAt the receiver, interleaved signal (time of arrivals) is corrupted by jitter noise (modeled as iid noise).\n So the noisy received signal are, for example, $$2.4, \\; 4.1, \\; 6.7, \\; 11.4, \\; 15.5, \\; 21.9, \\; 26.2, \\; 27.5, \\; 30.9, \\; 38.2,\\; 43.6, \\ldots.$$\nGiven this noisy interleaved signal, the de-interleaving problem aims to determine which pulses came from which source.\nThis can be done by estimating the periods (namely, 5 and 11) and phases (namely, 2 and 4) of the 2 sources.\n\n\n\n\n\nThe de-interleaving problem can be formulated as a jump Markov linear system.\nDefine the state \n $x_k' = (T', \\tau_k')$,\nconsists of the periods $T' = (T^{(1)}, \\ldots, T^{(N)})$ of the $N$ sources and \n$\\tau_k'= (\\tau_k^{(1)}, \\ldots, \\tau_k^{(N)})$, where $\\tau_k^{(i)}$ denotes the last\ntime source $i$ was active up to and including the arrival of the $k$th pulse.\nLet $\\tau_1 =( \\phi^{(1)}, \\ldots, \\phi^{(N)})$, be the phases of periodic pulse-train sources.\nThen\n\\begin{equation}\n \\tau_{k+1}^i = \\left\\{\\begin{array}{cc} \\tau_k^i + T^i & \\mbox{if $(k+1)$th pulse is due to source $i$} \\\\\n \\tau_k^i & \\mbox{otherwise} \\end{array} \\right.; \\;\\;\\tau_1^i = \\phi^{(i)}.\n \\label{eq:basic}\\end{equation}\n \nLet $e_i, i=1,\\ldots, N$, be the unit $N$-dimensional vectors with $1$ in the $i$th\nposition. Let $r_k \\in \\{1,\\ldots,N\\}$ denote the active source at time $k$. Then one can express the time of arrivals as\nthe jump Markov linear system\n\\begin{align*} x_{k+1} & = A(r_{k+1}) x_k + w_{k} \\\\\ny_k &= C(r_k) x_k + v_k \\end{align*}\nwhere \n$$\nA(r_{k+1}) = \\begin{bmatrix}\nI_N & 0_{N\\times N} \\\\\n\\mbox{diag}(e_{r_{k+1}}) & I_N\n\\end{bmatrix},\\;\nC(r_k) = \n\\begin{bmatrix}\n 0_{1\\times N} & e_{r_k}'\n\\end{bmatrix}\n$$\nNote that $r_k $ is a periodic process and so has transition probabilities $$P_{i,i+1} = 1, \\; \\text{ for $i< N$, and }\\, P_{N,1} = 1. $$\n$v_k$ denotes the measurement (jitter) noise; while $w_k$ can be used to model time varying periods.\\\\\n{\\em Remark}: Obviously, there are identifiability issues; for example, if $\\phi^{(1)} = \\phi^{(2)}$ and $T^{(1}$ is a multiple of $T^{(2)}$ then\nit is impossible to detect source 1.\n \n \\item {\\bf Narrowband Interference and JMLS.} \\index{jump Markov linear system! 
narrowband interference} Narrowband interference corrupting a Markov chain can be modeled as a jump Markov linear\n system. Narrowband interference can be modeled as an auto-regressive (AR) process with poles close to the unit circle: for example\n$$i_k = a\\, i_{k-1} + w_k $$\nwhere $a = 1- \\epsilon$ and $\\epsilon $ is a small positive number.\nConsider the observation model\n$$ y_k = x_k + i_k + v_k $$\nwhere $x_k$ is a finite state Markov chain, $i_k$ is narrowband interference and $v_k$ is observation noise. Show that the above\nmodel can be represented as a jump Markov linear system.\n \n \\item {\\bf Bayesian estimation of Stochastic context free grammar.} \\index{stochastic context free grammars}\n First some perspective: HMMs with a finite observation space are also called regular grammars.\n They are a subset of a more general class of models called stochastic context free grammars as depicted by Chomsky's \n hierarchy in Figure \\ref{fig:chomsky}.\n \\begin{figure}[h] \\centering\n\\includegraphics[scale=0.8]{chomsky.pdf}\n\\caption{The Chomsky hierarchy of languages} \\label{fig:chomsky}\n\\end{figure} \n\n Stochastic context free grammars (SCFGs) provide a powerful modeling tool for strings of alphabets and are used widely in\n natural language processing \\cite{MS99a}. For example, consider the randomly generated string $a^n c^m b^n$ where $m,n$ are non-negative integer valued random variables. Here $a^n$ means the alphabet $a$ repeated $n$ times.\n The string $a^n c^m b^n$\n could model the trajectory of a target that moves $n$ steps north and then an arbitrary number of steps east or west and then\n $n$ steps south, implying that the target performs a U-turn.\nA basic course in computer science would show (using a pumping lemma) \nthat such strings cannot be generated exclusively using a Markov chain (since the memory $n$ is variable).\n\n If the string $a^n c^m b^n$ was observed in noise, then Bayesian estimation\n(stochastic parsing) algorithms can be used to estimate the underlying string. Such meta-level tracking algorithms have polynomial computational\ncost (in the data length) and\nare useful for estimating {\\em trajectories} of targets (given noisy position and velocity measurements). They allow a human radar operator\nto interpret tracks and can be viewed as middleware in the human-sensor interface.\nSuch stochastic context free grammars generalize HMMs and facilitate modeling complex spatial trajectories of targets.\n\nPlease refer to \\cite{MS99a} for Bayesian signal processing algorithms and EM algorithms for stochastic context free grammars.\n\\cite{FK13,FK14} gives examples of meta-level target tracking using stochastic context free grammars.\n \n \n \\item {\\bf Kalman vs HMM filter}. \n A Kalman filter is the optimal state estimator for the linear Gaussian state space model\n \\begin{align*} x_{k+1} &= F x_k + w_k, \\\\\ny_k & = H^\\prime x_k + v_k . \\end{align*}\nwhere $w$ and $v$ are mutually independent iid Gaussian processes.\n\n\nRecall from (\\ref{eq:intro_hmms}), (\\ref{eq:intro_hmmobs}) that for a Markov chain with state space \n$\\mathcal{X} = \\{e_1,\\ldots, e_{X}\\}$ of unit vectors, an HMM can be expressed as\n\\begin{align*} x_{k+1} &= P^\\prime x_k + w_k, \\\\\ny_k & = H^\\prime x_k + v_k . 
\\end{align*}\nA key difference is that in (\\ref{eq:intro_hmms}), $w$ is no longer i.i.d; instead it is \n a martingale difference process:\n $\\E\\{w_k|x_0,x_1,\\ldots,x_{k}\\} = 0$.\n \nFrom \\S \\ref{sec:linearminvar} of the book, it follows that the Kalman filter is the minimum variance linear estimator for the above HMM.\nOf course the optimal {\\em linear} estimator (Kalman filter) can perform substantially worse than the optimal estimator (HMM filter).\nCompare the performance of the HMM filter and Kalman filter numerically for the above example.\n\n \n \\item {\\bf Interpolation of a HMM.} \\index{interpolation of HMM} Consider a Markov chain $x_k$ with transition matrix $P$ where the discrete time clock ticks at intervals of 10 seconds.\n Assume noisy measurements of the state are obtained at each time $k$. Devise a smoothing algorithm to estimate the state of the Markov chain at 5 second intervals.\n (Note: Obviously on the 5 second time scale, the transition matrix is $P^{1\/2}$. For this to be a valid stochastic matrix\n it is sufficient that $P$ is a symmetric ultrametric matrix or more generally $P^{-1}$ is an M-matrix \\cite{HL11}; see also \\S \\ref{sec:blackwelldom}\n of the book.)\n \n \n \n \\end{compactenum}\n \\section{Case Study. Sensitivity of HMM filter to transition matrix} A proof almost identical to the geometric ergodicity proof of the HMM filter in \\S \\ref{sec:hmmforget} can be used to obtain expressions for the sensitivity of the HMM filter to the HMM parameters.\n \\index{HMM filter! sensitivity bound to transition matrix}\n \n{\\bf Aim}: We are interested in\na recursion for $\\|\\pi_k - \\underline{\\pi}_k\\|_1$ when $\\pi_k$ is updated with the HMM filter using \ntransition matrix $P$ and $\\underline{\\pi}_k$ is updated with the HMM filter using transition matrix $\\underline{\\tp}$. That is,\nwe want an expression\nfor \n\\begin{equation} \\|T(\\pi,y;P) - T(\\underline{\\pi},y; \\underline{\\tp}) \\|_1 \\text{ in terms of } \\|\\pi - \\underline{\\pi}\\|_1. \\label{eq:hmmsensc} \\end{equation}\nSuch a bound is useful when the HMM filter is implemented with an incorrect transition matrix $\\underline{\\tp}$ instead of the actual transition matrix $P$.\nThe idea is that when $P $ is close to $\\underline{\\tp} $ then $T(\\pi,y;P) $ is close to $T(\\pi,y;\\underline{\\tp}) $.\n\nA special case of (\\ref{eq:hmmsensc}) is to obtain an expression for \n\\begin{equation} \\|T(\\pi,y;P) - T(\\pi,y; \\underline{\\tp}) \\|_1 \\label{eq:hmmsensc2}\\end{equation} that is, when both HMM filters have the same initial belief $\\pi$ but are updated\nwith different transition matrices, namely $P$ and $\\underline{\\tp}$. 
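\n\nBefore stating the theorem, it is instructive to simulate the effect of a mis-specified transition matrix. The MATLAB sketch below (the matrices $P$, $\\underline{\\tp}$, $B$ and the horizon are illustrative assumptions, not taken from the book) runs two HMM filters on the same observation sequence, one with the true matrix $P$ and one with a perturbed matrix, and records the deviation $\\|\\pi_k - \\underline{\\pi}_k\\|_1$, which can then be compared with the bounds of the theorem below.\n{\\small\n\\begin{lstlisting}\n% Sketch: sensitivity of the HMM filter to the transition matrix.\n% All parameters below are assumed toy values.\nX = 3; N = 200;\nP   = [0.8 0.1 0.1; 0.1 0.8 0.1; 0.1 0.1 0.8];  % true transition matrix\nPul = [0.7 0.2 0.1; 0.1 0.8 0.1; 0.1 0.2 0.7];  % mis-specified matrix\nB   = [0.7 0.2 0.1; 0.2 0.6 0.2; 0.1 0.2 0.7];  % B(i,y) = P(y_k = y | x_k = i)\npi1 = ones(X,1)\/X; pi2 = pi1;                   % common initial belief\nx = 1; dev = zeros(N,1);\nfor k = 1:N\n  x = find(rand <= cumsum(P(x,:)), 1);          % simulate Markov chain\n  y = find(rand <= cumsum(B(x,:)), 1);          % simulate observation\n  pi1 = diag(B(:,y))*P'*pi1;   pi1 = pi1\/sum(pi1);   % filter with P\n  pi2 = diag(B(:,y))*Pul'*pi2; pi2 = pi2\/sum(pi2);   % filter with Pul\n  dev(k) = norm(pi1 - pi2, 1);                  % ||pi_k - piul_k||_1\nend\nplot(dev); xlabel('k'); ylabel('l_1 deviation');\n\\end{lstlisting}}\n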
\n\n The theorem below obtains expressions for both (\\ref{eq:hmmsensc}) and (\\ref{eq:hmmsensc2}).\n \n \\begin{theorem-non} Consider a HMM with transition matrix $P$ and state levels $g$.\n Let $\\epsilon > 0$ denote the user defined parameter.\n Suppose $\\|\\underline{\\tp}-P\\|_1 \\leq \\epsilon$, where $\\|\\cdot\\|_1$ denotes the induced 1-norm for matrices.\\footnote{\nThe three statements $\\| P^\\prime \\pi - \\underline{\\tp}^\\prime \\pi \\|_1 \\leq \\epsilon$, $\\|\\underline{\\tp}-P\\|_1 \\leq \\epsilon$ and $\\sum_{i=1}^X \\| (P^\\prime - \\underline{\\tp}^\\prime)_{:,i} \\|_1 \\pi(i) \\leq \\epsilon$\nare all equivalent since $\\|\\pi\\|_1 = 1$.}\nThen\n\\begin{compactenum}\n\\item The expected absolute deviation between one step of filtering using $P$ versus $\\underline{\\tp}$ is upper bounded as:\n\\begin{multline} \\E_{y} \\left| g^\\prime \\left( T(\\pi,y;P) - T(\\pi,y;\\underline{\\tp}) \\right) \\right| \\leq \n\\epsilon \\sum_y \\max_{i,j} g^\\prime (I - T(\\pi,y;\\underline{\\tp}) \\mathbf{1}^\\prime) B_y (e_i - e_j)\n\\label{eq:ebound}\n\\end{multline}\n\n\\item The sample paths of the filtered posteriors and conditional means have the following explicit bounds at each time $k$:\n\\begin{multline} \\label{eq:samplepath1}\n\\| \\pi_k - \\underline{\\pi}_k\\|_1 \\leq \\frac{\\epsilon}{ \\max\\{ F(\\underline{\\pi}_{k-1},y_k) - \\epsilon, \\, \\mu(y_k) \\}} + \\frac{ \\dob(\\underline{\\tp}) \\, \\|\\pi_{k-1} - \\underline{\\pi}_{k-1} \\|_1}{F(\\underline{\\pi}_{k-1},y_k)}\n\n\\end{multline}\nHere $\\dob(\\underline{\\tp})$ denotes the Dobrushin coefficient of the transition matrix $\\underline{\\tp}$ and $\\underline{\\pi}_k$ is the posterior\ncomputed using the HMM filter with $\\underline{\\tp}$, and\n\\begin{equation} \nF(\\underline{\\pi},y) = \\frac{\\mathbf{1}^\\prime B_{y} \\underline{\\tp}^\\prime \\underline{\\pi}}{\\max_i B_{i,y}}, \\quad\n\\mu(y) = \\frac{ \\min_i B_{iy} } { \\max_i B_{iy} } .\n\\end{equation}\n\\end{compactenum}\n\\end{theorem-non} \n\n\n The above theorem gives explicit upper bounds between the filtered distributions using \n transition matrices $\\underline{\\tp}$ and $\\bar{\\tp}$. The $\\E_y$ in (\\ref{eq:ebound}) is with respect to the \n measure $\\sigma(\\pi,y;P) = \\mathbf{1}^\\prime B_y P^\\prime \\pi$ which corresponds to $\\mathbb{P}(y_k=y| \\pi_{k-1} = \\pi)$. \n \n \n\\begin{proof}\nThe triangle inequality for norms yields\n\\begin{align}\n& \\dvar{\\pi_{k+1} } { \\underline{\\pi}_{k+1} } = \\dvar{ T(\\pi_k,y_{k+1};P) }{ T(\\underline{\\pi}_k,y_{k+1};\\underline{\\tp}) } \\nonumber \\\\\n& \\leq \n\\dvar{ T(\\pi_k,y_{k+1};P) } { T(\\pi_k,y_{k+1};\\underline{\\tp}) } \\nonumber \\\\ & \\hspace{1cm} + \\dvar{ T(\\pi_k,y_{k+1};\\underline{\\tp}) } { T( \\underline{\\pi}_k,y_{k+1}; \\underline{\\tp})}. 
\n\\label{eq:triangle}\n\\end{align}\n\n{\\bf Part 1}: Consider the first normed term in the right hand side of (\\ref{eq:triangle}).\nApplying (\\ref{eq:rhshmmgeom}) with \n $\\pi = P^\\prime \\pi_k$ and $\\pi^0 = \\underline{\\tp}^\\prime \\pi_k$ yields\n\\begin{multline*}\n g^\\prime (T(\\pi_k,y;P) - T(\\pi_k,y;\\underline{\\tp})) = \\frac{1}{\\sigma(\\pi,y;P)}\n g^\\prime \\left[ I - T(\\pi,y,\\underline{\\tp}) \\mathbf{1}^\\prime\\right] B_y (P - \\underline{\\tp})^\\prime \\pi \n\\end{multline*}\nwhere $\\sigma(\\pi,y;P) = \\mathbf{1}^\\prime B_y P^\\prime \\pi$.\n Then Lemma \\ref{lem:cs}(i) yields \n\\begin{multline*} g^\\prime (T(\\pi_k,y;P) - T(\\pi_k,y;\\underline{\\tp}))\\\\ \\leq \n\\max_{i,j} \\frac{1}{\\sigma(\\pi,y;P)} g^\\prime \\left[ I - T(\\pi,y,\\underline{\\tp}) \\mathbf{1}^\\prime\\right] B_y (e_i - e_j) \\dvar{P^\\prime \\pi}{ \\underline{\\tp}^\\prime \\pi} \\end{multline*}\nSince $\\dvar{P^\\prime \\pi}{ \\underline{\\tp}^\\prime \\pi} \\leq \\epsilon$, taking expectations with respect to the measure $\\sigma(\\pi,y;P)$, completes the proof\nof the first assertion.\n \n{\\bf Part 2}:\nApplying Theorem \\ref{thm:bayesbackvar}(i) with the notation $\\pi = P^\\prime \\pi_k$ and $\\pi^0 = \\underline{\\tp}^\\prime \\pi_k$ yields\n\\begin{align} &\\dvar{T(\\pi_k,y;P) } { T(\\pi_k,y;\\underline{\\tp}) } \\leq \n \\frac{ \\max_i B_{i,y} \\dvar{P^\\prime \\pi_k}{\\underline{\\tp}^\\prime \\pi_k} } { \\mathbf{1}^\\prime B_{y} \\underline{\\tp}^\\prime \\pi_k } \\nonumber\\\\\n& \\leq \\frac{\\epsilon}{2}\\,\\frac{ \\max_i B_{i,y} } { \\mathbf{1}^\\prime B_{y} \\underline{\\tp}^\\prime \\pi_k } \n\\leq \\frac{ \\max_i B_{i,y} \\, \\epsilon\/2} { \\max\\{ \\mathbf{1}^\\prime B_{y} \\underline{\\tp}^\\prime \\underline{\\pi}_k - \\epsilon \\max_i B_{iy}, \\min_i B_{iy} \\}}.\n\\label{eq:term1}\n\\end{align}\nThe second last inequality follows from the construction of $\\underline{\\tp}$ satisfying (\\ref{eq:con2}) (recall the variational norm is half the $l_1$ norm). The last inequality follows from \nTheorem \\ref{thm:bayesbackvar}(ii).\n\nConsider the second normed term in the right hand side of (\\ref{eq:triangle}).\nApplying Theorem \\ref{thm:bayesbackvar}(i) with notation $\\pi = \\underline{\\tp}^\\prime \\pi_k$ and $\\pi^0 = \\underline{\\tp}^\\prime \\underline{\\pi}_k$ yields\n\\begin{multline} \\dvar{ T(\\pi_k,y;\\underline{\\tp}) } { T( \\underline{\\pi}_k,y; \\underline{\\tp}) } \n \\leq \n \\frac{ \\max_i B_{i,y} \\dvar{\\underline{\\tp}^\\prime \\pi_k}{\\underline{\\tp}^\\prime \\underline{\\pi}_k} } { \\mathbf{1}^\\prime B_{y} \\underline{\\tp}^\\prime \\underline{\\pi}_k } \\\\ \\leq\n\\frac{ \\max_i B_{i,y} \\, \\dob(\\underline{\\tp}) \\, \\dvar{ \\pi_k}{\\underline{\\pi}_k} } { \\mathbf{1}^\\prime B_{y} \\underline{\\tp}^\\prime \\underline{\\pi}_k } \\label{eq:term2}\n \\end{multline}\n where the last inequality follows from the submultiplicative property of the Dobrushin coefficient.\nSubstituting (\\ref{eq:term1}) and (\\ref{eq:term2}) into the right hand side of the triangle inequality (\\ref{eq:triangle}) proves the result.\n\\end{proof}\n \n \n \n \\section{Case Study. 
Reference Probability Method for Filtering}\nWe describe here the so called {\\em reference probability method} for deriving the un-normalized filtering recursion (\\ref{eq:unnormalized}).\nThe main idea is to start with the joint probability mass function of all observations and states until time $k$, namely, $p(x_{0:k}, y_{1:k})$.\nSince this joint density contains all the information we need, it is not surprising that by suitable marginalization and integration, the filtering\nrecursion and hence the conditional\nmean estimate can be computed.\n\nGiven the relatively straightforward derivations of the filtering recursions given in Chapter \\ref{sec:filtermain} of the book, the reader might wonder why we present\nyet another derivation. The reason is that in more complicated filtering problems, the reference probability method gives a systematic way of deriving\nfiltering expressions.\nIt is used extensively in \\cite{EAM95} to derive filters in both discrete and continuous time.\nIn continuous time,\nthe reference probability measure is extremely useful -- it yields the so called Duncan-Mortenson-Zakai equations for nonlinear filtering.\n\n\\subsubsection{The Engineering Version}\nSuppose the state and observation processes $\\{x_k\\}$ and $\\{y_k\\}$ are in a probability space with probability measure $\\mathbb{P}$.\nSince the state and observation noise processes are iid, under $\\mathbb{P}$, we have the following factorization:\n\\begin{align} & p(x_{0:k}, y_{1:k}) = \\prod_{n=1}^k p(y_n|x_n)\\, p(x_n| x_{n-1})\\, \\pi_0(x_0) \n \\label{eq:factor}\\\\ \n &\\propto \\prod_{n=1}^k p_v\\left(D_n^{-1}(x_n)\\left[y_n - H_n(x_k)\\right] \\right)\n p_w\\left(\\Gamma_{n-1}^{-1}(x_{n-1})\\,\\left[x_{n}- F_{n-1}(x_{n-1})\\right]\\right) \\, \\pi_0(x_0) \\nonumber\n\\end{align}\nStarting with $p(x_{0:k}, y_{1:k})$, the \nconditional expectation of any function $\\phi(x_k)$ is\n\\begin{equation}\n\\E\\{ \\phi(x_k) | y_{1:k}\\} = \\frac{ \\int \\phi(x_k) p(x_{0:k}, y_{1:k}) dx_{0:k}}{\\int p(x_{0:k}, y_{1:k}) dx_{0:k}} \n= \\frac{ \\int_\\mathcal{X} \\phi(x_k) \\left[ \\int p(x_{0:k}, y_{1:k}) dx_{0:k-1} \\right] dx_k}{\\int p(x_{0:k}, y_{1:k}) dx_{0:k}} \n\\label{eq:erp}\n\\end{equation}\nThe main idea then is to define the term within the square brackets in the numerator as \n the un-normalized density \n$ {q}_k(x_k) = \\int p(x_{0:k}, y_{1:k}) dx_{0:k-1}$.\n(Of course then ${q}_k(x_k) = p(x_k, y_{1:k})$).\nWe now derive the recursion (\\ref{eq:unnormalized}) for the un-normalized density ${q}_k$:\n\\begin{align*}\n\\int_\\mathcal{X} \\phi(x_k) {q}_k(x_k) dx_k&= \\int_\\mathcal{X} \\phi(x_k) \\int p(x_{0:k}, y_{1:k}) dx_{0:k-1} dx_k \\\\\n &\\hspace{-1.5cm}= \\int_\\mathcal{X} \\int_\\mathcal{X} \\phi(x_k) p(y_k| x_k) p(x_k| x_{k-1}) \\left[\\int p(x_{0:k-1},y_{1:k-1}) dx_{0:k-2} \\right]\\, dx_{k-1} dx_k\n \\\\\n &\\hspace{-1.5cm}= \\int_\\mathcal{X} \\int_\\mathcal{X} \\phi(x_k) p(y_k| x_k) p(x_k| x_{k-1}) {q}_{k-1}(x_{k-1}) dx_{k-1} dx_k\n \\end{align*}\nwhere the second equality follows from (\\ref{eq:factor}). \nSince the above holds for any test function $\\phi$, it follows that the integrands within the outside integral are equal, thereby yielding the\nun-normalized filtering\nrecursion (\\ref{eq:unnormalized}). 
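\n\nFor a finite-state HMM with transition matrix $P$ and observation likelihoods $B_{iy} = p(y_k = y| x_k = i)$, the above un-normalized recursion specializes to $q_k = B_{y_k} P^\\prime q_{k-1}$ where $B_y = \\text{diag}(B_{1y},\\ldots,B_{Xy})$, and $\\mathbf{1}^\\prime q_k = p(y_{1:k})$. The following MATLAB sketch (the two-state HMM parameters are assumed toy values) illustrates the recursion; normalizing $q_k$ recovers the HMM filter posterior.\n{\\small\n\\begin{lstlisting}\n% Sketch of the un-normalized filter q_k = B_{y_k} P' q_{k-1}.\n% All parameters are assumed toy values.\nP = [0.9 0.1; 0.2 0.8];          % transition matrix\nB = [0.8 0.2; 0.3 0.7];          % B(i,y) = P(y_k = y | x_k = i)\ny = [1 1 2 2 1];                 % an observation sequence\nq = [0.5; 0.5];                  % q_0 = prior pi_0\nfor k = 1:numel(y)\n  q = diag(B(:,y(k))) * P' * q;  % un-normalized update\nend\npik = q\/sum(q);                  % normalized posterior pi_k\nlik = sum(q);                    % 1'q_k = p(y_{1:k})\n\\end{lstlisting}}\n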
\n\n\n\n\\subsubsection{Interpretation as Change of Measure}\nWe now interpret the above derivation as the engineering version of the reference probability method.\\footnote{In continuous time, the change of measure of a random process involves Girsanov's theorem, see \\cite{EAM95}. Indeed the Zakai form of the continuous time filters in the appendix of the book can be derived in a fairly\nstraightforward manner using Girsanov's theorem.}\nDefine a new probability measure $\\bar{\\mathbb{P}}$ as having associated density\n$$q} % reference prob method density under \\bar{P(x_{0:k}, y_{1:k}) = \\prod_{n=1}^k p_v(y_n) \\,\n p_w(x_n) \\, \\pi_0(x_0) . $$\n The above equation is tantamount to saying that under this new measure $ \\bar{\\mathbb{P}}$, the processes $\\{x_k\\}$ and $\\{y_k\\}$ are iid sequences\nwith density functions $p_w$ and $p_v$, respectively. $\\bar{\\mathbb{P}}$ will be called the {\\em reference probability measure} - under this measure, due to the iid nature of $\\{x_k\\}$ and $\\{y_k\\}$, the filtering\nrecursion can be derived conveniently, as we now describe.\n\nLet\n $\\bar{\\E}$ denote expectation associated with measure $\\bar{P}$, so that for any function $\\phi(x_k)$, the conditional expectation is\n $$ \\bar{\\E}\\{ \\phi(x_k) | y_{1:k}\\} = \\int \\phi(x_k) q} % reference prob method density under \\bar{P(x_{0:k}, y_{1:k}) dx_{0:k} $$\n Obviously, to obtain the expectation $\\E\\{\\phi(x_k)| y_{1:k}\\}$ under the probability\nmeasure $\\mathbb{P}$, it follows from (\\ref{eq:erp}) that\n \\begin{align}\n \\E\\{ \\phi(x_k) | y_{1:k}\\} &= \\frac{\\int \\phi(x_k) \\Lambda_k q} % reference prob method density under \\bar{P(x_{0:k}, y_{1:k}) dx_{0:k}}\n {\\int \\Lambda_k q} % reference prob method density under \\bar{P(x_{0:k}, y_{1:k}) dx_{0:k}} ,\n \\quad \\text{ where } \\Lambda_k = \\frac{p(x_{0:k}, y_{1:k}) }{ q} % reference prob method density under \\bar{P(x_{0:k}, y_{1:k}) } \\\\\n &= \\frac{\\bar{\\E}\\{ \\Lambda_k\\phi(x_k) | y_{1:k} \\} }{ \\bar{\\E}\\{ \\Lambda_k| y_{1:k}\\}}\n \\nonumber\n \\end{align}\n \n\n\nThe derivation then proceeds as follows. 
\\begin{align*}\n& \\int_\\mathcal{X} {q}_k(x) \\phi(x) dx =\n\\bar{\\E}\\{\\Lambda_k \\phi(x_k) | \\mathcal{Y}_k\\} \\qquad \\text{ (definition of ${q}_k$) } \\\\ \n&= \\int \\frac{p(x_{0:k},y_{1:k})}{\\bar{q}(x_{0:k},y_{1:k})} \\phi(x_k) \\bar{q}(x_{0:k},y_{1:k}) dx_{0:k} \\\\\n&= \\int \\frac{p(x_{0:k-1},y_{1:k-1})}{\\bar{q}(x_{0:k-1},y_{1:k-1})} \n\\frac{p(y_k | x_k) p(x_k| x_{k-1})}{\\cancel{p_v(y_k) } \\cancel{ p_w(x_k)} }\n\\phi(x_k) \\cancel{p_v(y_k)} \\cancel{p_w(x_k) } \\bar{q}(x_{0:k-1},y_{1:k-1}) dx_{0:k} \\\\\n&= \\int \\frac{p(x_{0:k-1},y_{1:k-1})}{\\bar{q}(x_{0:k-1},y_{1:k-1})} \n\\left[\\int_\\mathcal{X} {p(y_k | x_k) p(x_k| x_{k-1})} \\phi(x_k) dx_k\\right] \\bar{q}(x_{0:k-1},y_{1:k-1}) dx_{0:k-1} \n\\\\\n&= \\bar{\\E}\\{ \\Lambda_{k-1} \\left[\\int_\\mathcal{X} {p(y_k | x_k) p(x_k| x_{k-1})} \\phi(x_k) dx_k\\right] | y_{1:k-1} \\} \\\\\n&= \\int {q}_{k-1}(x_{k-1}) \\left[\\int_\\mathcal{X} {p(y_k | x_k) p(x_k| x_{k-1})} \\phi(x_k) dx_k\\right] dx_{k-1}\n\\end{align*}\nwhere the last equality follows from the definition of ${q}$ in the first equality.\n\nSince this holds for any test function $\\phi(x)$, the integrands on the left and right hand sides must be equal.\nSo \n$$ {q}_k(x_k ) = p(y_k| x_k) \\int_\\mathcal{X} {q}_{k-1}(x_{k-1}) p(x_k| x_{k-1}) dx_{k-1}. $$\n\n\n \n \n\n\\chapter{Algorithms for Maximum Likelihood Parameter Estimation} \n\n\\begin{compactenum}\n\n\\item A standard drill exercise involves deriving the Cram\\'er-Rao bound in terms of the Fisher information matrix; see Wikipedia or any book\nin statistical signal processing for an elementary\ndescription.\n\n\\item {\\bf Minorization Maximization Algorithm (MM Algorithm).} The EM algorithm is a special case of the MM algorithm\\footnote{MM can also be used equivalently to denote majorization minimization}; see \\cite{HL04} for a nice tutorial on MM \n\\index{Minorization Maximization algorithm} \nalgorithms. MM algorithms constitute a general purpose method for optimization and are not restricted just to maximum likelihood estimation.\n\nThe main idea behind the MM algorithm is as follows: Suppose we wish to compute the maximizer $\\th^*$ of a function $\\phi(\\theta)$.\nThe idea is to construct a minorizing function $g(\\theta, \\theta^{(m)})$ such that\n\\begin{equation} \\begin{split}\ng(\\theta, \\theta^{(m)}) & \\leq \\phi(\\theta) \\quad \\text{ for all } \\theta \\\\\n g(\\theta^{(m)} , \\theta^{(m)} ) &= \\phi(\\theta^{(m)}) . \\label{eq:minorf}\n\\end{split} \\end{equation}\nThat is, the minorizing function $g(\\theta, \\theta^{(m)}) $ lies below $ \\phi(\\theta)$ and is tangent to it at the point $\\theta^{(m)}$.\nHere $$\\theta^{(m)} = \\operatornamewithlimits{argmax}_\\th g(\\th, \\th^{(m-1)}) $$\n denotes the estimate of the maximizer at iteration $m$ of the MM algorithm.\n\nThe property (\\ref{eq:minorf})\nimplies that successive iterations of the MM algorithm yield $$\\phi(\\theta^{(m+1)}) \\geq \\phi(\\theta^{(m)}). 
$$\nIn words, successive iterations of the MM algorithm yield increasing values of the objective function which is a very useful property for a general\npurpose numerical optimization algorithm.\nThis is shown straightforwardly as follows:\n\\begin{align*} \\phi(\\th^{(m+1}) &= \\phi(\\th^{(m+1}) - g(\\theta^{(m+1)}, \\theta^{(m)}) + g(\\theta^{(m+1)}, \\theta^{(m)}) \\\\\n& \\stackrel{a}{\\geq} \\phi(\\th^{(m+1}) - g(\\theta^{(m+1)}, \\theta^{(m)}) + g(\\theta^{(m)}, \\theta^{(m)}) \\\\\n& \\stackrel{b}{\\geq} \\phi(\\th^{(m}) - \\cancel{g(\\theta^{(m)}, \\theta^{(m)}) } + \\cancel{g(\\theta^{(m)}, \\theta^{(m)})}\n \\end{align*}\nInequality (a) follows since $g(\\theta^{(m+1)}, \\theta^{(m)}) \\geq g(\\theta^{(m)}, \\theta^{(m)})$ by definition since $\\theta^{(m+1)} =\n\\operatornamewithlimits{argmax}_\\th g(\\theta, \\theta^{(m)}) $. Inequality (b) follows from (\\ref{eq:minorf}).\n\nThe EM algorithm is a special case of the MM algorithm where \\index{EM algorithm} \\index{Minorization Maximization algorithm! EM algorithm}\n$$ g(\\th,\\theta^{(m)} = Q(\\th,\\theta^{(m)}) - Q(\\theta^{(m)},\\theta^{(m)}), \\quad\n\\phi(\\th) = \\logl(\\th) - \\logl(\\th^{(m)} ) $$\nHere $\\logl(\\theta) = \\log p(y_{1:N} | \\theta)$ is the log likelihood which we want to maximize to compute the MLE\nand $Q(\\th,\\theta^{(m)}) $ is the auxiliary log likelihood defined\nin (\\ref{eq:auxl0}) which is maximized in the M step of the EM algorithm.\n\nIndeed the minorization property (\\ref{eq:minorf}) was established for the EM algorithm in Lemma \\vref{lem:em} of the book by using Jensen's inequality.\n\n\\item {\\bf EM algorithm in more elegant (abstract) notation.}\nLet $\\{P_\\theta\\,,\\,\\theta\\in \\Theta\\}$ be a family of probability\nmeasures on a measurable space $(\\Omega, {\\cal F})$ all absolutely\ncontinuous with respect to a fixed probability measure $P_0$,\nand let ${\\cal Y} \\subset {\\cal F}$. The likelihood function for\ncomputing an estimate of the parameter $\\theta$ based on the\ninformation available in ${\\cal Y}$ is\n\\begin{displaymath} \n L(\\theta) = \\E_0[ \\frac{dP_\\theta}{dP_0}\\mid {\\cal Y}]\\ , \n\\end{displaymath}\nand the MLE estimate is defined by\n\\begin{displaymath} \n \\widehat{\\theta}\\in \\operatornamewithlimits{argmax}_{\\theta\\in \\Theta} L(\\theta)\\ . \n\\end{displaymath}\nIn general, the MLE is difficult to compute directly,\nand the EM algorithm provides an iterative approximation method~:\n\\begin{description}\n \\item{{\\it Step 1.}} Set $p=0$ and choose $\\widehat{\\theta}_0$.\n\n \\item{{\\it Step 2.}} (E--step) Set $\\theta' = \\widehat{\\theta}_p$\nand compute $Q(\\cdot,\\theta')$, where\n\\begin{displaymath} \n Q(\\theta,\\theta') \n = \\E_{\\theta'}[ \\log\\frac{dP_\\theta}{dP_{\\theta'}}\\mid {\\cal Y}]\\ . \n\\end{displaymath}\n\n \\item{{\\it Step 3.}} (M--step) Find\n\\begin{displaymath} \n \\widehat{\\theta}_{p+1}\\in \\operatornamewithlimits{argmax}_{\\theta \\in \\Theta} Q(\\theta,\\theta')\\ . 
\n\\end{displaymath}\n\n \\item{{\\it Step 4.}} Replace $p$ by $p+1$ and repeat beginning\nwith Step~2, until a stopping criterion is satisfied.\n\\end{description}\nThe sequence generated $\\{\\widehat{\\theta}_p\\,,\\,p\\geq 0\\}$\ngives non--decreasing values of the likelihood function~:\nindeed, it follows from Jensen's inequality that\n\\begin{displaymath} \n \\log L(\\widehat{\\theta}_{p+1}) - \\log L(\\widehat{\\theta}_p) \n \\geq Q(\\widehat{\\theta}_{p+1},\\widehat{\\theta}_p) \n \\geq Q(\\widehat{\\theta}_p,\\widehat{\\theta}_p) = 0\\ , \n\\end{displaymath}\nwith equality if and only if $\\widehat{\\theta}_{p+1} = \\widehat{\\theta}_p$.\n\n\\item {\\bf Forward-only EM algorithm for Linear Gaussian Model.} In \\S 4.4 of the book, we described a forward-only EM algorithm for ML parameter estimation of an HMM.\nForward-only EM algorithms can also be constructed for maximum likelihood estimation of the parameters of a linear Gaussian state space model\n\\cite{EK99}. These involve computing filters for functionals of the state and use Kalman filter estimates.\n\n\\item {\\bf Sinusoid in HMM.} Consider a sinusoid with amplitude $A$ and phase $\\phi$. It is observed as\n$$y_k = x_k + A\\, \\sin(k\/100 + \\phi) + v_k$$\nwhere $v_k$ is an iid Gaussian noise process.\nUse the EM algorithm to estimate $A, \\phi$ and the parameters of the Markov chain and noise variance.\n\n\\item In the forward-only EM algorithm of \\S 4.4, the filters for the number of jumps involve $O(X^4)$ computations at each time while filters for the duration time involve\n$O(X^3)$ at each time. Is it possible to reduce the computational cost by approximating some of these estimates? \n\n\\item Using computer simulations, compare the method of moments estimator for a HMM in \\S 4.5 with the maximum likelihood estimator in terms of efficiency. That is, generate several $N$ point trajectories of an HMM with a fixed set of parameters, then compute the variance\nof the estimates. (Of course, instances where the MLE algorithm converges to local maxima should be eliminated from the computation.)\n\n\n\\item Non-asymptotic statistical inference using concentration of measure is very popular today.\nAssuming the likelihood is a Lipschitz function of the observations, and the observations are Markovian, show that\nthe likelihood function concentrates to the Kullback-Leibler function.\n\n\\item {\\bf EM Algorithm for State Estimation.} The EM algorithm was used in Chapter \\ref{chp:mle} as a numerical algorithm\nfor maximum likelihood {\\em parameter} estimation. It turns out that the EM algorithm can be used for {\\em state} estimation, particularly for\na jump Markov linear system (JMLS). Recall from \\S \\ref{sec:particle} that a JMLS has model\n\\begin{align*}\nz_{k+1} &= F(\\mc_{k+1})\\,z_k + \\Gamma(\\mc_{k+1})\\,w_{k+1} + f (\\mc_{k+1})\\,u_{k+1} \\\\\ny_k &= H(\\mc_k)\\,z_k + D(\\mc_k)\\,v_k + g(\\mc_k)u_k. \n\\end{align*}\nAs described in \\S \\ref{sec:particle}, \n the optimal filter for a JMLS is computationally intractable.\n In comparison, for a JMLS the EM algorithm can be used to compute the MAP (maximum a posteriori) state estimate (assuming the parameters of the JMLS are known). Show how \none can compute this MAP state estimate $\\operatornamewithlimits{argmax}_{z_{1:k},r_{1:k}} p(z_{1:k},r_{1:k} | y_{1:k})$ using the EM algorithm.\nIn \\cite{LK99} it is shown that the resulting EM algorithm involves the cross coupling of a Kalman and HMM smoother.\nA data augmentation algorithm in a similar spirit appears in \\cite{DLK00}. 
\\index{jump Markov linear system! EM algorithm for state estimation}\n\n\n\\item {\\bf Quadratic Convergence of Newton Algorithm.}\n\n\\index{Newton algorithm! quadratic convergence}\nWe start with some definitions:\nGiven a sequence $\\{\\th^{(n)}\\}$ generated by an optimization algorithm, the {\\em order} of convergence is $p$ if \n\\begin{equation} \\beta = \\limsup_{n \\rightarrow \\infty} \\frac{\\|\\th^{(n+1) } - \\th^*\\|} {\\|\\th^{(n) } - \\th^*\\|^p} \\text{ exists } \n\\label{eq:convergenceorder}\n\\end{equation}\nAlso if $p=1$ and $\\beta < 1$, the sequence is said to converge linearly to $\\th^*$ with {\\em convergence ratio (rate)} $\\beta$.\nMoreover, the case $p=1$ and $\\beta = 0$ is referred to as superlinear convergence.\n\n\n\\begin{compactenum}\n\\item\nRecall that the Newton Raphson algorithm computes the MLE iteratively as\n$$ \\th^{(n+1)} = \\th^{(n)} + \\bigl({\\nabla^2} \\logl (\\th^{(n)}) \\big)^{-1} \\nabla \\logl(\\th^{(n)}) $$\n The Newton Raphson algorithm has quadratic order of convergence in the following sense.\nSuppose the log likelihood $\\logl(\\th)$ is twice continuous differentiable and that at a local maximum $\\th^*$, the Hessian\n$\\nabla_\\th^2\\logl$ is positive definite. Then if started sufficient close to $\\th^*$, Newton Raphson converges to $\\th^*$ at a quadratic rate.\nthat the model estimates satisfy $\\th^{(n)}$ satisfy\n$$\\| \\th^{(n+1)} - \\th^* \\| \\leq \\beta \\| \\th^{(n)} - \\th^*\\|^2 $$\nfor some constant $\\beta$.\n\nThis is shown straightforwardly (see any optimization textbook) as follows:\n\\begin{equation}\n\\begin{split} \\|\\th^{(n+1)} - \\th^* \\| & = \\| \\th^{(n)} - \\th^* + \\bigl({\\nabla^2} \\logl (\\th^{(n)}) \\big)^{-1} \\nabla \\logl(\\th^{(n)}) \\|\n\\\\ \n&= \\| \\bigl({\\nabla^2} \\logl (\\th^{(n)}) \\big)^{-1} \\biggl( \\nabla \\logl(\\th^{(n)}) - \\nabla \\logl(\\th^*) - \\nabla^2 \\logl (\\th^{(n)}) \\big(\\th^{(n) }- \\th^* \\big)\n\\biggr)\n\\end{split}\n\\end{equation}\nFor $\\|\\th^{(n)} - \\th^*\\| < \\rho$, it is clear from a Taylor series expansion that\n$$ \\|\\nabla \\logl(\\th^*) - \\nabla \\logl(\\th^{(n)}) - \\nabla^2 \\logl (\\th^{(n)}) \\big(\\th^* - \\th^{(n)} \\big) \\| \\leq \\beta_1\\| \\th^{(n)} - \\th^*\\|^2$$\nfor some positive constant $\\beta_1$.\nAlso, $\\| \\bigl({\\nabla^2} \\logl (\\th^{(n)}) \\big)^{-1} \\| \\leq \\beta_2$.\n\n\\item The convergence order and rate\nof the EM algorithm has been studied in great detail since the early 1980s; there are numerous papers in the area; see \\cite{XJ96} and the\nreferences therein.\nThe EM algorithm has linear convergence order, i.e., $p=1$ in (\\ref{eq:convergenceorder}). \nPlease see\n\\cite{MXJ00} and the references therein for examples where EM exhibits superlinear convergence.\n\n\\end{compactenum}\n\\end{compactenum}\n\n\\chapter{Multi-agent Sensing: Social Learning and Data Incest} \n\n\\section{Problems}\n\\begin{enumerate}\n\n\\item A substantial amount of insight can be gleaned by actually simulating the setup (in Matlab) of the social learning\nfilter for both the random variable and Markov chain case. 
Also simulate the risk-averse social learning filter discussed in \\S \\ref{sec:classicalsocial}\nof the book.\n\n\n\\item {\\bf CVaR Social Learning Filter.} Consider the risk averse social learning \\index{CVaR social learning filter}\ndiscussed in \\S \\ref{sec:classicalsocial}.\nSuppose\nagents choose their actions $a_{k}$ to minimize the CVaR risk averse measure \n$$\na_{k} = {\\underset{a \\in \\mathcal{A}}{\\text{argmin}}} \\{ {\\underset{z \\in \\mathbb{R}}{\\text{min}}} ~ \\{ z + \\frac{1}{\\alpha} \\mathbb{E}_{y_{k}}[{\\max} \\{ (c(x_{k},a)-z),0 \\rbrace] \\} \\} \n$$\nHere $\\alpha \\in (0,1]$ reflects the degree of risk-aversion for the agent (the smaller $\\alpha$ is, the more risk-averse the agent is). \nShow that the structural result Theorem \\ref{thm:monotone} continues to hold for the CVaR social learning filter.\nAlso show that for sufficiently risk-averse agents (namely, $\\alpha$ close to zero), social learning ceases and agents always herd.\n\nGeneralize the above result to any coherent risk measure.\n\n\\item The necessary and sufficient condition given in Theorem \\ref{thm:sufficient} for exact data incest removal requires\n that \n$$ A_n(j,n)=0 \\implies w_n(j)= 0, \\text{ where } \\quad w_n = T_{n-1}^{-1} t_n,\n$$\nand $T_n = \\text{sgn}((\\mathbf{I}_n-A_n)^{-1}) = \\begin{bmatrix} T_{n-1} & t_n \\\\ 0_{1 \\times n-1} & 1 \\end{bmatrix} $ is the transitive closure matrix.\nThus the condition depends purely on the adjacency matrix. Discuss what types of matrices satisfy the above condition.\n\n\\item Theorem \\ref{thm:sufficient} also applies to data incest where the prior and likelihood are Gaussian. The posterior is then\nevaluated by a Kalman filter. Compare the performance of exact data incest removal with the covariance intersection algorithm\nin \\cite{CAM02} which assumes no knowledge of the correlation structure (and hence of the network).\n\n\\item Consensus algorithms \\cite{SM04} have been extremely popular during the last decade and there are numerous papers in the area.\n They are non-Bayesian and seek to compute,\nfor example, the average over measurements observed at a number of nodes in a graph. It is worthwhile comparing the performance\nof the optimal Bayesian incest removal algorithms with consensus algorithms.\n\n\n\\item The data incest removal algorithm in \\S \\ref{sec:incest} of the book arises assumes that agents do not send additional information\napart from their incest free estimates. Suppose agents are allowed to send a fixed number of labels of previous agents from whom\nthey have received information. What is the minimum about of additional labels the agents need to send in order to completely remove data incest.\n\n\\item Quantify the bias introduced by data incest as a function of the adjacency matrix.\n\n\\item Prospect theory \\index{prospect theory} (pioneered by the psychologist Kahneman \\cite{KT79} who won the 2003 Nobel prize in economics) is a behavioral economic theory\nthat seek to model how humans make decisions amongst probabilistic alternatives. (It is an alternative to expected utility theory considered\nin the social learning models of this chapter.) 
The main features are:\n\\begin{compactenum}\n\\item Preference is an S-shaped curve with reference point $x=0$\n\\item The investor maximizes the expected value $V(x)$ where $V$ is a preference and $x$ is the change in wealth.\n\n\\item Decision maker employ decision weight $w(p)$ rather than objective probability $p$, where the weight function $w(F)$ has a reverse\nS shape where $F$ is the cumulative probability.\n\\end{compactenum}\nConstruct a social learning filter where the utility function satisfies the above assumptions. Under what conditions do information cascades\noccur?\n\n\\item {\\bf Rational Inattention.} Another powerful way for modeling the behavior of (human) decision makers is in terms of rational inattention. \nSee the seminal work of \\cite{Sim03} where essentially the ability of the human to absorb information is modeled via the \ninformation theoretic capacity of a communication channel.\\index{rational inattention}\n\n\\item There are several real life experiments that seek to understand how humans interact in decision making. See for example \\cite{BOL10}\nand \\cite{KH15}. In \\cite{BOL10}, four models are considered. How can these models be linked to social learning?\n\\end{enumerate}\n\n\n\n\n\\section{Social Learning with limited memory} \\index{social learning! limited memory}\nHere we briefly describe a variation of the vanilla social learning protocol. In order to mitigate herding, assume that agents randomly\nsample only a fixed number of previous actions. The aim below is to describe the resulting setup; see \\cite{SS97} for a detailed discussion.\n\n\nLet the variable $\\theta \\in \\{1,2\\}$ denote the states. Let $a \\in \\{1,2\\} $ denote the action alphabet and $y \\in \\{1,2\\} $ denote the observation alphabet. In this model of social learning with limited memory, it is assumed that each agent (at time $t \\geq N+1$) observes \\textit{only} $N$ randomly selected actions from the history $h_{t}= \\{a_{1},a_{2},\\hdots,a_{t-1} \\}$. In the periods $t \\leq N$, each agent acts according to his private belief. This phase is termed as the seed phase in the model. \\\\\n\nLet $z_{t}^{(1)}$ denote the number of times action $1$ is chosen until time $t$, i.e, $$z_{t}^{(1)} = \\sum_{j=1}^{t} I(a_{j}=1).$$ Let $\\hat{z}_{t}^{(1)}$ denote the number of times action $1$ is chosen in a sample of $N$ randomly observed actions in the past, i.e, $\\hat{z}_{t}^{(1)} = \\sum_{j=1}^{N} I(a_{j}=1)$. \\\\\n\nThe social learning protocol with limited memory is as follows:\n\\begin{itemize}\n\\item[1.)] \\textit{Private belief update}: Agent $t$ makes two observations at each instant $t(>N)$. These observations correspond to a noisy private signal $y_{t}$ and a sample of $N$ past actions from the history $h_{t}$ sampled uniformly randomly. Let $B_{y_{t}}$ and $D_{z_{t}=k}$ denote the probability of observing $y_{t}$ and $(\\hat{z}_{t}^{(1)}=k)$ respectively. The private belief is updated as follows.\n\nFor each draw from the past, the probability of observing action 1 is $z_{t}^{(1)}\/ (t-1)$. 
So the\nprobability that at time $t$, action $1$ occurs $k$ times in a random sample of $N$ observed actions is\n\\begin{equation*}\n\\mathbb{P}(\\hat{z}_{t}^{(1)} = k|z_{t}^{(1)}) = \\frac{N!}{(N-k)!k!} \\left( \\frac{z_{t}^{(1)}}{t}\\right) ^{k} \\left(1-\\frac{z_{t}^{(1)}}{t}\\right)^{N-k}\n\\end{equation*}\nTherefore,\nthe number of times action `$1$' is chosen in the sample, $\\hat{z}_{t}^{(1)}$, has a distribution that depends on $\\theta$ according to:\n\\begin{equation*}\n\\mathbb{P}(\\hat{z}_{t}^{(1)} = k|\\theta) = \\sum_{z_{t}^{(1)}=1}^{t} \\mathbb{P}(\\hat{z}_{t}^{(1)} = k|z_{t}^{(1)}) \\mathbb{P}(z_{t}^{(1)}|\\theta)\n\\end{equation*}\n\nAfter obtaining a private noisy signal $y_{t}$, and having observed ($\\hat{z}_{t}^{(1)}=k$), the belief $\\pi_t = [\\pi_t(1), \\pi_t(2)]^\\prime$ where\n$\\pi_t(i) = \\mathbb{P}(\\th=i| \\hat{z}_{t}^{(1)},y_t)$\nis updated by agent $t$ as:\n\\begin{equation*}\n{\\hspace{1.5cm}}\\pi_{t} = \\frac{B_{y_{t}}D_{z_{t}=k}}{\\textbf{1}'B_{y_{t}}D_{z_{t}=k}}.\n\\end{equation*}\nHere $B$ and $D$ are the observation likelihoods of $y_t$ and $\\hat{z}_{t}^{(1)}$ given the state:\n\\[ ~~B_{y_{t}}=\\text{diag}(\\mathbb{P}(y_{t}|\\theta=i),i\\in \\{1,2\\}), \\quad \nD_{z_{t}=k} = \\begin{bmatrix}\n\\mathbb{P}(\\hat{z}_{t}^{(1)}=k|\\theta=1) \\\\\n\\mathbb{P}(\\hat{z}_{t}^{(1)}=k|\\theta=2)\n\\end{bmatrix}\n\\]\n\n\\item[2.)] \\textit{Agent's decision}: With the private belief $\\pi_{t}$, the agent $t$ makes a decision as:\n\\begin{equation*}\na_{t} = {\\underset{a\\in \\{1,2\\}}{\\text{argmin}}}~ c_{a}^{T} \\pi_{t}\n\\end{equation*}\nwhere $c_{a}$ denotes the cost vector.\n\n\\item[3.)] \\textit{Action distribution}: The distribution of actions $\\mathbb{P}(z_{t}^{(1)}|\\theta)$ in the two states $\\theta = 1,2$ is assumed to be common knowledge at time $t$. It is updated after the decision of agent $t$ as follows. \\\\\n\nThe probability of $(a_{t}=1)$ in period $t$ depends on the actual number of `$1$' actions $z_{t}^{(1)}$ and on the state according to:\n\\begin{equation*}\n{\\hspace{-0.75cm}}\\mathbb{P}(a_{t}=1|z_{t}^{(1)}=n,\\theta) = \\sum_{k=0}^{N} \\sum_{i=1}^{2} \\mathbb{P}(a_{t}=1|y=i,\\hat{z}_{t}^{(1)}=k,z_{t}^{(1)}=n,\\theta)~ \\mathbb{P}(y=i|\\theta)~\\mathbb{P}(\\hat{z}_{t}^{(1)} = k|z_{t}^{(1)}=n)\n\\end{equation*}\n\nwhere,\n\\[ \\mathbb{P}(a_{t}=1|y=i,\\hat{z}_{t}^{(1)}=k,z_{t}^{(1)}=n,\\theta) = \\left\\{ \\begin{array}{ll}\n 1 & \\mbox{if $c_{1}^{T}B_{y=i}D_{z_{t}=k} < c_{2}^{T}B_{y=i}D_{z_{t}=k}$};\\\\\n 0 & \\mbox{otherwise}.\\end{array} \\right. \\] \n\n\nAfter agent $t$ takes an action, the distribution is updated as:\n\\begin{equation}{\\label{eq:dis}}\n{\\hspace{-0.75cm}}\\mathbb{P}(z_{t+1}^{(1)}=n|\\theta) = \\mathbb{P}(z_{t}^{(1)}=n|\\theta)(1-\\mathbb{P}(a_{t}=1|z_{t}^{(1)}=n,\\theta)) + \\mathbb{P}(z_{t}^{(1)}=n-1|\\theta)\\mathbb{P}(a_{t}=1|z_{t}^{(1)}=n,\\theta)\n\\end{equation}\n\\end{itemize}\n\nAccording to equation~\\eqref{eq:dis}, the sufficient statistic $\\mathbb{P}(z_{t}^{(1)}|\\theta)$ is growing with time $t$. It is noted that this has $(t-2)$ numbers at time $t$ and hence grows with time. $\\mathbb{P}(z_{t+1}^{(1)}=n|\\theta)$ in equation~\\eqref{eq:dis} is used to compute $D_{z_{t+1}}$.\n\n\nWith the above model, consider the following questions:\n\\begin{compactenum}\n\\item Show that there is asymptotic herding when $N =1$.\n\\item Show that for $N=2A$, reduction in the historical information will improve social learning. 
Also, comment on whether there is herding when $N=2$.\n\\item Show that as $N$ increases, the convergence to the true state is slower.\nHint: Even though more observations are chosen, greater weight on the history precludes the use of private information.\n\\end{compactenum}\n\n\\chapter{Fully Observed Markov Decision Processes} \n\n\n\\section{Problems}\n\\begin{compactenum}\n\n\\item The following nice example from \\cite{KV86} gives a useful motivation for feedback control in stochastic systems. It shows that for\nstochastic systems, using feedback control can result in behavior that cannot be obtained by an open loop system.\n\\index{why feedback control?}\n\\begin{compactenum}\n\\item First, recall from undergraduate control courses that for a deterministic linear time invariant system with forward transfer function $G(z^{-1})$ and negative feedback $H(z^{-1})$, the equivalent transfer function\nis $\\frac{G(z^{-1})}{1+G(z^{-1}) H(z^{-1}) }$. So an open loop system with this equivalent transfer function is identical to a feedback system.\n\n\\item More generally, consider the deterministic system $$x_{k+1} = \\phi(x_k, u_k), y_k = \\psi(x_k,u_k) $$ Suppose the actions are given\nby a policy of the form $$u_k = \\mu(x_{0:k},y_{1:k})$$\nThen clearly, the open loop system, $$x_{k+1} = \\phi(x_k, \\mu(x_{0:k},y_{1:k})), y_k = \\psi(x_k,\\mu(x_{0:k},y_{1:k})) $$ generates the same\nstate and observation sequences. \n\nSo for a deterministic system (with fully specified model), open and closed loop behavior are identical.\n\n\\item Now consider a fully observed stochastic system with feedback:\n\\begin{equation}\n\\begin{split} x_{k+1} &= x_k + u_k + w_k , \\\\\n u_k &= - x_k \\end{split} \\end{equation}\n where $w_k$ is iid with zero mean and variance $\\sigma^2$ (as usual we assume $x_0$ is independent of $\\{w_k\\}$.)\n Then $x_{k+1} = w_k$ and so $u_k = -w_{k-1}$ for $k=1,2,\\dots$.\n Therefore $\\E\\{x_k\\} = 0$ and $\\operatorname{Var}\\{x_k^2\\} = \\sigma^2$.\n \n\\item Finally, consider an open loop stochastic system where $u_k$ is a deterministic sequence:\n$$\n x_{k+1}= x_k + u_k + w_k $$\n Then $\\E\\{x_k\\} = \\E\\{x_0\\} + \\sum_{n=0}^{k-1} u_k $ and $\\operatorname{Var}\\{x_k^2\\} = \\E\\{x_0^2\\} + k \\sigma^2$.\n Clearly, it is impossible to construct a deterministic input sequence that yields a zero mean state with variance $\\sigma^2$.\n \\end{compactenum}\n\n\\item {\\bf Trading of call options.} An investor buys a call option at a price $p$. He has $N$ days to exercise this option. If the investor exercises the option when the stock price is $x$, he\ngets $x-p$ dollars. The investor can also decide not the exercise the option at all.\n\nAssume the stock price evolves as $x_k = x_0 + \\sum_{n=1}^k w_n $ where $\\{w_n \\}$ is in iid process.\nLet $\\tau$ denote the day the investor decides to exercise the option. Determine the optimal investment strategy to maximize\n$$\\E\\{ (x_\\tau - p) I(\\tau \\leq T) \\}. $$\nThis is an example of a fully observed stopping time problem. Chapter \\ref{ch:pomdpstop} considers more general stopping time POMDPs.\n\nNote: Define $s_k \\in \\{0,1\\}$ where $s_k = 0$ means that the option has not been exercised until time $k$.\n$s_k = 1$ means that the option has been exercised before time $k$. 
Define the state $z_k = (x_k,s_k)$.\n\nDenote the action $u_k = 1$ to exercise option and $u_k = 0$ means do not exercise option.\nThen the dynamics are\n$$ s_{k+1} = \\max\\{ s_k,u_k\\} , \\quad x_{k+1} = x_k + w_k $$\nThe reward at each time $k$ is $r(z_k,u_k,k) = (1-s_k) u_k (x_k-p)$ and the problem can be formulated as\n$$\\max_\\mu \\E\\{ \\sum_{k=1}^N r(z_k,u_k,k)\\} $$\n\n\\item\nDiscounted cost problems can also be motivated as stopping time problems (with a random termination time).\nSuppose at each time $k$, the MDP can terminate with probability $1 - \\rho$ or continue with probability $\\rho$.\nLet $\\tau$ denote the random variable for the termination time. Consider the undiscounted cost MDP\n\\begin{align*}\n \\E_{{\\boldsymbol{\\mu}}} \\left\\{ \\sum_{k=0}^{\\tau} c(x_k,u_k) \\mid x_0 = i\\right\\} &= \n\\E_{{\\boldsymbol{\\mu}}} \\left\\{ \\sum_{k=0}^{\\infty} I(k \\leq \\tau) \\, c(x_k,u_k) \\mid x_0 = i\\right\\} \\\\\n&= \\E_{{\\boldsymbol{\\mu}}} \\left\\{ \\sum_{k=0}^{\\infty} \\rho^k c(x_k,u_k) \\mid x_0 = i\\right\\}.\n\\end{align*}\nThe last equality follows since $\\mathbb{P}(k \\leq \\tau) = \\rho^k$. \n\n\n\n\\begin{comment}\n\n\\item Deterministic and stochastic LQ control\n\n\\item Modified Policy iteration\n\n\\item Afriat's test for a single jump change in utility function.\n\n\\end{comment}\n\n\\item We discussed risk averse utilities and dynamic risk measures briefly in \\S \\ref{sec:riskaverse}. Also \\S \\ref{subsec:afriat} discussed revealed\npreferences for constructing a utility function from a dataset. \nGiven a utility function $U(x)$,\na widely used measure for the degree of risk aversion is the Arrow-Pratt risk aversion coefficient which is defined as \n$$ a(x) =- \\frac{d^2 U\/dx^2}{ dU\/dx} .$$\nThis is often termed as an absolute risk aversion measure, while $x\\, a(x)$ is termed a relative risk aversion measure. \nCan this risk averse coefficient be used for mean semi-deviation risk, conditional value at risk( CVaR) and exponential risk?\n\n\n\\item A classical result involving utility functions is the following \\cite[pp.42]{HS84}:\nA rational decision maker who compares random variables only according to their means and variances must have preferences consistent\nwith a quadratic utility function. Prove this result.\n\n\\end{compactenum}\n\n\n\n\n\\section{Case study. Non-cooperative Discounted Cost Markov games} \\index{Markov game} \\label{sec:markovgame}\n\\S \\ref{sec:mdpdiscount} of the book dealt with infinite horizon discounted MDPs.\nBelow we introduce briefly some elementary ideas in non-cooperative infinite horizon discounted Markov games. There are several excellent \nbooks in the area\n\\cite{KV12,BO91}. \n\nMarkov games can be viewed as a multi-agent decentralized extension of MDPs. They arise in a variety of applications including dynamic spectrum allocation, financial models and smart grids. Our aim here is to consider some simple cases where the Nash\nequilibrium can be obtained by solving a linear programming problem.\\footnote{The reader should be cautious with decentralized stochastic\ncontrol. 
The famous Witsenhausen's counterexample formulated in the 1960s shows that even a deceptively simple toy problem in decentralized\nstochastic control can be very difficult to solve, see \\url{https:\/\/en.wikipedia.org\/wiki\/Witsenhausen\\%27s_counterexample} \\index{Witsenhausen's counterexample}}\n\nConsider the following infinite horizon discounted cost two-player Markovian game.\nThere are two decision makers (players) indexed by $l=1,2$.\n\\begin{compactitem}\n\\item Let $\\act1_k \\in \\,\\mathcal{U} $ and $\\act2_k\\in \\,\\mathcal{U} $ denote the action of player 1 and player 2, respectively, at time $k$. For convenience we assume\nthe same action space for both players.\n\\item The cost incurred\nby player $l \\in \\{1,2\\}$ for state $x$, actions $\\act1,\\act2$ is $c_l(x,\\act1,\\act2)$.\n\\item The transition probabilities of the Markov process $x$ depend on the actions of both players:\n$$ P_{ij}(\\act1,\\act2) = \\mathbb{P}(x_{k+1} = j | x_k = i, \\act1_k = \\act1, \\act2_k = \\act2) $$\n\\item \nDenote the stationary (randomized) Markovian policies of the two players as $\\pol1$, $\\pol2$, respectively.\nSo $\\act1_k $ is chosen from probability distribution $\\pol1(x_k)$ and $\\act2_k $ is chosen from probability distribution $\\pol2(x_k)$.\nFor convenience denote the class of stationary Markovian policies as ${\\boldsymbol{\\policy}}_S$.\n\\item The cumulative cost incurred by each player $l \\in \\{1,2\\}$ is \n\\begin{equation} \\Jc{l}_{\\pol1,\\pol2}(x) = \\E\\big\\{ \\sum_{k=0}^\\infty \\rho^k c_l(x_k,\\act1_k,\\act2_k) \\vert x_0 = x\\big\\} \n\\label{eq:cumcostgame} \\end{equation}\nwhere as usual $\\rho \\in (0,1)$ is the discount factor.\n\\end{compactitem}\n\nThe non-cooperative assumption in game theory is that the players are interested in minimizing their individual cumulative costs only; they do not\ncollude.\n\n\\subsection{Nash equilibrium of general sum Markov game} \\index{Markov game! Nash equilibrium}\n \\index{game theory! Markov game}\nAssume that each player has complete knowledge of the other player's cost function. Then the policies\n${\\pol1}^*, {\\pol2}^*$ of the non-cooperative infinite horizon Markov game constitute a Nash equilibrium if \n\\begin{equation} \\label{eq:nasheq}\n\\begin{split}\n \\Jc{1}_{{\\pol1}^*,{\\pol2}^*}(x) &\\leq \\Jc{1}_{\\pol1,{\\pol2}^*}(x), \\quad \\text{ for all } \\pol1 \\in {\\boldsymbol{\\policy}}_S \\\\\n \\Jc{2}_{{\\pol1}^*,{\\pol2}^*}(x) &\\leq \\Jc{2}_{ {\\pol1}^*,{\\pol2}}(x), \\quad \\text{ for all } \\pol2 \\in {\\boldsymbol{\\policy}}_S.\n \\end{split} \\end{equation}\nThis means that a unilateral deviation from ${\\pol1}^*, {\\pol2}^*$ cannot make the deviating player better off (it cannot decrease that player's cumulative cost).\nSince in a non-cooperative game collusion is not allowed, there is no rational reason for players to deviate from the Nash equilibrium\n(\\ref{eq:nasheq}).\n\n\nIn game theory, two important issues are: \n\\begin{compactenum}\n\\item {\\em Does a Nash equilibrium exist? 
} \nFor the above discounted cost game with finite action and state space, the answer is \"yes\".\n\\begin{theorem}\nA discounted Markov game has at least one Nash equilibrium within the class of Markovian stationary (randomized) policies.\n\\end{theorem}\nThe proof is in \\cite{FV12} and involves Kakutani's fixed point theorem.\\footnote{Existence proofs for equilibria involve using either Kakutani's fixed point theorem (which generalizes Brouwer's fixed point theorem to set valued correspondences) or Tarski's fixed point theorem (which applies to supermodular games). Please see \\cite{MWG95} for a nice intuitive visual illustration\nof these fixed point theorems.}\n\n\\item {\\em How can the Nash equilibria be computed?} Define the randomized policies of player 1 (corresponding to $\\pol1$) and player 2 \n(corresponding to $\\pol2$) as \n$$p(i,\\act1) = \\mathbb{P}(\\act1_k = \\act1 | x_k = i), \\quad q(i,\\act2) = \\mathbb{P}(\\act2_k = \\act2 | x_k = i) $$\nThen for an infinite horizon discounted cost Markov game, the Nash equilibria $(p^*,q^*)$ are global optima of the following non-convex optimization problem:\n\\begin{equation} \\begin{split} & \\text{ Compute } \\max \\sum_{l=1}^2 \\sum_{i=1}^X \\alpha_i \\bigg( \\underline{V}^{(l)}(i)\n- \\sum_{\\act1,\\act2} c_l(i,\\act1,\\act2) p(i,\\act1) q(i,\\act2) \\\\ & - \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act1,\\act2} P_{ij}(\\act1,\\act2) p(i,\\act1) q(i,\\act2) \\underline{V}^{(l)}(j) \\bigg) \\\\ &\\text{ with respect to $(\\underline{V}^{(1)},\\underline{V}^{(2)},p, q)$} \\\\\n \\text{ subject to } &\\underline{V}^{(1)}(i) \\leq \\sum_{\\act2} c_1(i,\\act1,\\act2) q(i,\\act2) + \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act2} P_{ij}(\\act1,\\act2) q(i,\\act2) \\underline{V}^{(1)}(j) ,\\\\\n&\\underline{V}^{(2)}(i) \\leq \\sum_{\\act1} c_2(i,\\act1,\\act2) p(i,\\act1) + \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act1} P_{ij}(\\act1,\\act2) p(i,\\act1) \\underline{V}^{(2)}(j) ,\\\\\n& q(i,\\act2) \\geq 0, \\quad \\sum_{\\act2} q(i,\\act2) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act2 = 1,\\ldots,U\n\\\\\n& p(i,\\act1) \\geq 0, \\quad \\sum_{\\act1} p(i,\\act1) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act1 = 1,\\ldots,U.\n\\end{split} \\label{eq:genmarkovgame} \\end{equation}\n In general, solving the non-convex optimization problem (\\ref{eq:genmarkovgame}) is difficult; there\n can be multiple global optima (each corresponding to a Nash equilibrium) and multiple local optima. In fact there is a fascinating property that if all\n the parameters (transition probabilities, costs) are rational numbers, the Nash equilibrium policy can involve irrational numbers. This points to the \n fact that in general one can only approximately compute the Nash equilibrium.\n\n\\index{Markov game! 
general sum}\n\\end{compactenum}\n\n{\\bf Proof}.\nFirst write (\\ref{eq:genmarkovgame}) in more abstract but intuitive notation in terms of the randomized policies $p,q$ as\n\\begin{equation} \\begin{split} & \n \\max \\sum_{l=1}^2 \\alpha^\\prime \\bigg( \\underline{V}^{(l)} - c_l(p,q) - \\rho P(p,q) \\underline{V}^{(l)} \\biggr) \\\\\n\\text{ subject to } \\; & \\underline{V}^{(1)} \\leq c_1(\\act1,q) + \\rho P(\\act1,q) \\underline{V}^{(1)} , \\quad \\act1 = 1,\\ldots,U \\\\\n & \\underline{V}^{(2)} \\leq c_2(p,\\act2) + \\rho P(p,\\act2) \\underline{V}^{(2)} , \\quad \\act2 = 1,\\ldots,U \\\\\n& p , q \\text{ valid pmfs } \n \\end{split} \\label{eq:genmarkovgame2} \\end{equation}\n It is clear from the constraints that the objective function is always $\\leq 0$. In fact the maximum is attained when the objective function\n is zero, in which case the constraints hold with equality. When the constraints hold at equality, they satisfy\n $$ V_*^{(l)} = \\big(I - \\rho P(p^*,q^*) \\big)^{-1} c_l(p^*,q^*) , \\quad l = 1,2. $$\n This serves as definition of $V_*^{(l)}$ and \nis equivalent to saying\\footnote{This holds since from (\\ref{eq:cumcostgame}), $ \\Jc{l}_{p^*,q^*}(x) = c_l(p^*,q^*) + \n \\rho P c_l(p^*,q^*) + \\rho^2 P^2 c_l(p^*,q^*) + \\cdots + $. Indeed a similar expression holds for discounted cost MDPs.}\n that $V_*^{(l)}$ is the infinite horizon cost attained by the policies $(p^*,q^*)$. That is,\n\\begin{equation} V_*^{(l)} = c_l(p^*,q^*) + \\rho P(p^*,q^*) V_*^{(l)} \\implies \n \\Jc{l}_{p^*,q^*}(x) = V_*^{(l)}. \\label{eq:neeqa} \\end{equation}\n\n \nAlso setting $\\underline{V}^{(l)} =V_*^{(l)}$, the constraints in (\\ref{eq:genmarkovgame2}) satisfy\n$$ V_*^{(1)} \\leq c_1(p,q^*) + \\rho P(p,q^*)V_*^{(1)} , \\quad V_*^{(2)} \\leq c_2(p^*,q) + \\rho P(p^*,q) V_*^{(2)} $$\nimplying that\n\\begin{equation} \\Jc{1}_{p,q^*}(x) \\geq V_*^{(1)}, \\quad \\Jc{2}_{p^*,q}(x) \\geq V_*^{(2)}. \\label{eq:neeqb} \\end{equation}\n(\\ref{eq:neeqa}) and (\\ref{eq:neeqb}) imply that $(p^*,q^*)$ constitute a Nash equilibrium. \\qed\n \n{\\em Remark}.\nThe reader should compare the above proof with the linear programming formulation for a discounted cost MDP. In that derivation we started with\na similar constraint\n\\begin{equation} \\underline{V} \\leq c_1(\\act1) + \\rho P(\\act1) \\underline{V} . \\label{eq:edmdpc} \\end{equation}\n This implies that $\\underline{V} < V$ where $V$ denotes the unique value function of Bellman's equation. Therefore the objective was to find $\\max \\alpha^\\prime \\underline{V}$\nsubject to (\\ref{eq:edmdpc}). \nSo in MDP case we obtain a linear program.\nIn the dynamic game case, in general, there is no value function to clamp (upper bound) $\\underline{V}$.\n\n\n\n\n\n\\subsection{Zero-sum discounted Markov game} \\index{Markov game! zero sum}\nWith the above brief introduction,\nthe main aim below is to give special cases of {\\em zero-sum} Markov games where the Nash equilibrium can be \ncomputed via linear programming. 
(Recall \\S \\ref{sec:vipilp} of the book shows how a discounted cost MDP can be solved via linear programming.)\n\nA discounted Markovian game is said to be zero sum\\footnote{A constant sum game \n $c_1(x,\\act1,\\act2) + c_2(x,\\act1,\\act2) = K$ for constant $K$ is equivalent to a zero sum game.\n Define $\\bar{c}_l(x,\\act1,\\act2) = c_l(x,\\act1,\\act2)+K\/2 $, $l=1,2$, resulting in a zero sum game in terms of $\\bar{c}_l$.}\n if \n$$ c_1(x,\\act1,\\act2) + c_2(x,\\act1,\\act2) = 0.$$ \nThat is, \n$$c(x,\\act1,\\act2) \\stackrel{\\text{defn}}{=} c_1(x,\\act1,\\act2) = - c_2(x,\\act1,\\act2) .$$\nFor a zero sum game, the Nash equilibrium (\\ref{eq:nasheq}) becomes a saddle point:\n$$J_{{\\pol1}^*,{\\pol2}}(x) \\leq J_{{\\pol1}^*,{\\pol2}^*}(x) \\leq J_{{\\pol1},{\\pol2}^*}(x),\n$$\nthat is, it is a minimum in the $\\pol1$ direction and a maximum in the $\\pol2$ direction.\n\n\nA well known result from the 1950s due to Shapley is: \\index{Markov game! Shapley's theorem}\n\\begin{theorem}[Shapley] A zero sum infinite horizon discounted cost Markov game has a unique value function, even though\nthere could be multiple Nash equilibria (saddle points). Thus all the Nash equilibria are equivalent.\n\\end{theorem}\nThe value function of the zero-sum game is \n$$ J_{{\\pol1}^*,{\\pol2}^*}(i) = V(i)$$ where\n$V$ satisfies an equation that resembles dynamic programming:\n\\begin{equation} V(i) = \\val \\big[ (1-\\rho) c(i,\\act1,\\act2) + \\rho \\sum_{j} P_{ij} (\\act1,\\act2) V(j) \\big]_{\\act1,\\act2} \\label{eq:dpzg} \\end{equation}\nHere $\\val[M]_{\\act1,\\act2}$ denotes the value of the matrix\\footnote{A zero sum matrix game is of the form: Given a $m\\times n$ \nmatrix $M$, determine the Nash equilibrium\n$$ (x^*, y^*) = \\operatornamewithlimits{argmax}_x \\operatornamewithlimits{argmin}_y y^\\prime M x, \\quad \\text{ where } x, y \\text{ are probability vectors }$$\nThe value of this matrix game is $\\val[M] = {y^*}^\\prime M x^*$ and is computed as the solution of a linear programming (LP) problem as follows:\nClearly $ \\max_ x \\min_y y^\\prime M x = \\max_x \\min_{i} e_i^\\prime M x $ where $e_i$, $i=1,2,\\ldots,m$ denotes the unit $m$-dimensional vector with 1 in the $i$-th position. This follows since a linear function is minimized at its extreme points. So the minimization over continuum has been reduced\nto one over a finite set. Denoting $z = \\min_{i} e_i^\\prime M x $, the value of the game is the solution of the following LP:\n\\begin{equation} \\val[M] = \\begin{cases}\n \\text{ Compute } \\max z \\\\\n z < e_i^\\prime M x , \\quad i =1,2,\\ldots, m, \\\\\n \\mathbf{1}^\\prime x = 1 , \\quad x_j \\geq 0, j=1,2\\ldots,n \\end{cases}\n\\end{equation}\n}\n game with elements $M(\\act1,\\act2)$.\nEven though for a specific vector $V$, the $\\val[\\cdot]$ in the right hand side of (\\ref{eq:dpzg}) can be evaluated by solving an LP, it is not useful for the Markov zero sum\ngame, since we have a functional equation in the variable $V$. So solving a zero sum Markov game is difficult in general.\n\n\\subsubsection*{Nash Equilibrium as a Non-convex Bilinear Program} \\index{Markov game! Nash equilibrium as bilinear program}\nTo give more insight, as we did in the discounted cost MDP case, let us formulate computing the Nash equilibrium (saddle point) of the zero sum Markov game as an optimization problem. 
In the MDP case we obtained a LP; for the Markov game (as shown below) we obtain a non-convex\nbilinear optimization problem.\n\nDefine the randomized policy of player 1 (minimizer) and player 2 (maximizer) as \n$$p(i,\\act1) = \\mathbb{P}(\\act1_k = \\act1 | x_k = i), \\quad q(i,\\act2) = \\mathbb{P}(\\act2_k = \\act2 | x_k = i) $$\nIn complete analogy to the discounted MDP case in (\\ref{eq:discountprimal}), player 2 optimal strategy $q^*$ is the solution of the bilinear program\n\\begin{equation} \\begin{split} & \\max \\sum_i \\alpha_i \\underline{V}(i) \\;\\text{ with respect to $(\\underline{V}, q)$} \\\\\n& \\text{ subject to } \\underline{V}(i) \\leq \\sum_{\\act2} c(i,\\act1,\\act2) q(i,\\act2) + \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act2} P_{ij}(\\act1,\\act2) q(i,\\act2) \\underline{V}(j) ,\\\\\n& q(i,\\act2) \\geq 0, \\quad \\sum_{\\act2} q(i,\\act2) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act2 = 1,2,\\ldots,U.\n\\end{split} \\label{eq:gdiscountprimal} \\end{equation}\nBy symmetry, player 1 optimal strategy $p^*$ is the solution of the bilinear program\n\\begin{equation} \\begin{split} & \\min \\sum_i \\alpha_i \\underline{V}(i) \\;\\text{ with respect to $(\\underline{V}, p)$} \\\\\n& \\text{ subject to } \\underline{V}(i) \\geq \\sum_{\\act2} c(i,\\act1,\\act2) p(i,\\act1) + \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act1} P_{ij}(\\act1,\\act2) p(i,\\act1) \\underline{V}(j) ,\\\\\n& p(i,\\act1) \\geq 0, \\quad \\sum_{\\act1} p(i,\\act1) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act1 = 1,2,\\ldots,U.\n\\end{split} \\label{eq:gdiscountprimal2} \\end{equation}\nThe key difference between the above discounted Markov game problem and the discounted MDP (\\ref{eq:discountprimal}) is that the above equations\nare no longer LPs. Indeed the constraints are {\\em bilinear} in $ (\\underline{V},q)$ and $(\\underline{V},p)$. So the constraint set for a zero-sum Markov game is non-convex. Despite (\\ref{eq:gdiscountprimal}) and (\\ref{eq:gdiscountprimal2}) being nonconvex, in light of Shapley's theorem all local minima are global minima.\n\nFinally (\\ref{eq:gdiscountprimal}) and (\\ref{eq:gdiscountprimal2}) can be combined into a single optimization problem. To summarize, the (randomized) Nash equilibrium $p^*,q^*$ of \na zero-sum Markov game is the solution of the following bilinear (noconvex) optimization problem:\n\\begin{equation} \\begin{split} & \\max \\sum_i \\alpha_i \\big( \\underline{V}^{(1)}(i) -\\underline{V}^{(2)}(i) \\big) \\;\\text{ with respect to $(\\underline{V}^{(1)},\\underline{V}^{(2)},p, q)$} \\\\\n \\text{ subject to } &\\underline{V}^{(1)}(i) \\leq \\sum_{\\act2} c(i,\\act1,\\act2) q(i,\\act2) + \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act2} P_{ij}(\\act1,\\act2) q(i,\\act2) \\underline{V}^{(1)}(j) ,\\\\\n&\\underline{V}^{(2)}(i) \\geq \\sum_{\\act2} c(i,\\act1,\\act2) p(i,\\act1) + \n\\rho \\sum_{j \\in \\mathcal{X}} \\sum_{\\act1} P_{ij}(\\act1,\\act2) p(i,\\act1) \\underline{V}^{(2)}(j) ,\\\\\n& q(i,\\act2) \\geq 0, \\quad \\sum_{\\act2} q(i,\\act2) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act2 = 1,2,\\ldots,U\n\\\\\n& p(i,\\act1) \\geq 0, \\quad \\sum_{\\act1} p(i,\\act1) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act1 = 1,2,\\ldots,U.\n\\end{split} \\label{eq:cgdiscountprimal2} \\end{equation}\n\n\n\\subsubsection*{Special cases where computing Nash Equilibrium is an LP} \\index{Markov game! 
linear programming}\nWe now give two special examples of zero-sum Markov games that can be solved as a linear programming problem (LP); single controller\ngames and switched controller games. In both cases the bilinear terms in (\\ref{eq:cgdiscountprimal2}) vanish and the computing the Nash\nequilibrium reduces to solving linear programs.\n\n\\subsection{Example 1. Single Controller zero-sum Markov Game} \\index{Markov game! single controller}\nIn a single controller Markov game, the transition probabilities are controlled by one player only; we assume that this is player 1. So \n$$ P_{ij}(\\act1,\\act2) = P_{ij}(\\act1) = \\mathbb{P}(x_{k+1} = j | x_k = i, \\act1_k = \\act1) $$\n Due to this assumption, the bilinear constraint in (\\ref{eq:gdiscountprimal}) becomes {\\em linear}, namely \n$$ \\underline{V}(i) \\leq \\sum_{\\act2} c(i,\\act1,\\act2) q(i,\\act2) + \n\\rho \\sum_{j \\in \\mathcal{X}} P_{ij}(\\act1) \\underline{V}(j) $$\nsince $\\sum_{\\act2} q(i,\\act2) = 1$. Therefore (\\ref{eq:gdiscountprimal}) is now an LP which can be solved for $q^*$, namely:\n \\begin{equation} \\begin{split} & \\max_{\\underline{V}} \\sum_i \\alpha_i \\underline{V}(i) \\;\\text{ with respect to $(\\underline{V}, q)$} \\\\\n& \\text{ subject to } \\underline{V}(i) \\leq \\sum_{\\act2} c(i,\\act1,\\act2) q(i,\\act2) + \n\\rho \\sum_{j \\in \\mathcal{X}} P_{ij}(\\act1) \\underline{V}(j) , \\\\\n& q(i,\\act2) \\geq 0, \\quad \\sum_{\\act2} q(i,\\act2) = 1, \\quad i = 1,2,\\ldots, X ,\\; \\act2 = 1,2,\\ldots,U.\n\\end{split} \\label{eq:zdiscountprimal} \\end{equation}\nSolving the above LP yields the Nash equilibrium policy $\\pol2$ for player 2.\n\nThe dual problem to (\\ref{eq:zdiscountprimal}) is the linear program\n\\begin{equation} \\begin{split}\n \\text{ Minimize } & \\sum_{i\\in \\mathcal{X}} \n z(i) \n\n \\;\\text{ with respect to $(z, p)$} \\nonumber \\\\\n\\text{ subject to } & p({i,\\act1}) \\geq 0, \\quad i \\in \\mathcal{X}, u \\in \\,\\mathcal{U} \\\\\n& \\sum_{\\act1} p({j,\\act1} )= \\rho\\, \\sum_{i} \\sum_{\\act1} P_{ij}(\\act1)\\, p({i,\\act1}) + \\alpha_j ,\\; j \\in \\mathcal{X}. \\\\\n& z(i) \\geq \\sum_{\\act1} p(i,\\act1)\\, c(i,\\act1,\\act2)\n\\end{split}\n\\label{eq:gdiscountdual}\n\\end{equation}\n The above dual gives the randomized Nash equilibrium policy $p^*$ for player 1.\n \n \\subsection{Example 2. Switching Controller Markov Game} \\index{Markov game! 
switched controller}\n This is a special case of a zero sum Markov game where the state space $\\mathcal{X}$ is partitioned into disjoint sets $\\sr1,\\sr2$ such\n that $\\sr1 \\cup \\sr2 = \\mathcal{X}$ and \n $$ P_{ij}(\\act1,\\act2) = \\begin{cases} P_{ij}(\\act1), & i \\in \\sr1 \\\\\n \t\t\t\t\t\t\t P_{ij}(\\act2), & i \\in \\sr2 \\end{cases} $$\nSo for states in $\\sr1$, controller 1 controls that transition matrix, while for states in $\\sr2$, controller 2 controls the transition matrix.\n\n\n\nObviously for $ i \\in\\sr2$, (\\ref{eq:gdiscountprimal}) becomes an linear program while for $i \\in \\sr1$, (\\ref{eq:gdiscountprimal2}) becomes a\nlinear program.\nAs discussed in \\cite{FV12}, the Nash equilibrium can be computed by solving a finite sequence of linear programming problems.\n\n\n \n\\chapter{Partially Observed Markov Decision Processes (POMDPs)} \n\nSeveral well studied instances of POMDPs and their parameter files can be found at \\url{http:\/\/www.pomdp.org\/examples\/}\n\\\\ \\\\\n\n\\begin{compactenum}\n\\item Much insight can be gained by simulating the dynamic programming recursion for a 3-state POMDP. The belief\nstate needs to be quantized to a finite grid. We also strongly recommend using the exact POMDP solver in \\cite{POMDP} to gain\ninsight into the piecewise linear concave nature of the value function.\n\n\\item Implement Lovejoy's suboptimal algorithm and compare its performance with the optimal policy.\n\n\\item {\\bf Tiger problem}: \\index{POMDP tiger problem} This is a colorful name given to the following POMDP problem.\n\nA tiger resides behind one of two doors, a left door $(l)$ and a right door $(r)$.\nThe state $x \\in \\{l,r\\}$ denotes the position of a tiger. The action $u \\in \\{l,r,h\\}$ denotes a human either opening the left door $(l)$,\nopening the right door $(r)$, or simply hearing $(h)$ the growls of the tiger. If the human opens a door, he gets a perfect measurement\nof the position of the tiger (if the tiger is not behind the door he opens, then it must be behind the other door). If the human chooses action\n$h$ then he hears the growls of the tiger which gives noisy information about the tiger's position. Denote the probabilities\n$B_{ll}(h) = p$, $B_{rr}(h) = q$.\n\nEvery time the human chooses the action to open a door, the problem resets and the tiger is put with equal probability behind one of the doors.\n(So the transition probabilities for the actions $l$ and $r$ are $0.5$). \n\nThe cost of opening the door behind where the tiger is hiding is $\\alpha$, possibly reflecting injury from the tiger. The cost\nof opening the other door is $-\\beta$ indicating a reward. Finally the cost of hearing and not opening a door is $\\gamma$.\n\nThe aim is to minimize the cost (maximize the reward) over a finite or infinite horizon.\nTo summarize, the POMDP parameters of the tiger problem are:\n\\begin{align*} \\mathcal{X} &= \\{l,r\\}, \\mathcal{Y}= \\{l,r\\}, \\,\\mathcal{U} = \\{l, r, h\\}, \\\\\nB(l) &= B(r) = I_{2 \\times 2}, B(h) = \\begin{bmatrix} p & 1-p \\\\ 1-q & q \\end{bmatrix} \\\\\nP(l) &= P(r) = \\begin{bmatrix} 0.5 & 0.5 \\\\ 0.5 & 0.5 \\end{bmatrix} ,\nP(h) = I_{2 \\times 2}, \\\\\nc_l &= (\\alpha, -\\beta)^\\prime, c_r = (-\\beta, \\alpha)^\\prime, c_h = (\\gamma, \\gamma)^\\prime\n\\end{align*}\n\n\n\n\n\n\\item {\\bf Open Loop Feedback Control.} As described in \\S \\ref{sec:olfc}, open loop feedback control is a useful suboptimal scheme for solving\nPOMDPs. 
Is it possible to exploit knowledge that the value function of a POMDP is piecewise linear and concave in\nthe design of an open loop feedback controller? \n\n\n\\item Finitely transient policies were discussed in \\S 7.6. For a 2-state, 2-action, 2-observation POMDP, give an example of POMDP parameters that yield a finitely transient policy with $n^* = 2$.\n\n\n\\item {\\bf Uniform sampling from Belief space.} Recall that the belief space $\\Pi(\\statedim)$ is the unit $X-1$ dimensional simplex.\nShow that a convenient way of sampling uniformly from $\\Pi(\\statedim)$ is to use \nthe Dirichlet distribution $$\\pi_0(i) = \\frac{x_i}{\\sum_{j=1}^X x_j}, \\quad \\text{ where } x_i \\sim \\text{ unit exponential distribution. } $$ \\index{Dirichlet distribution}\n \n\n\\item {\\bf Adaptive Control of a fully observed MDP formulated as a POMDP problem.}\n\\label{prob:adaptive control} \\index{adaptive control of MDP as a POMDP}\nConsider a fully observed MDP with transition matrix $P(u)$ and cost $c(i,u)$, where $u\\in \\{1,2,\\ldots,U\\}$ denotes the action. Suppose the true transition matrices \n$P(u)$ are not known.\nHowever, it is known apriori that they belong to a known finite set of matrices $P(u,\\th)$ where $\\th \\in \\{1,2, ,\\ldots, L\\}$. As data accumulates, the controller\nmust simultaneously control the Markov chain and also estimate the transition matrices.\n\n\nThe above problem can be formulated straightforwardly as a POMDP.\nLet $\\th_k$ denote the parameter process. Since the parameter $\\th_k = \\th$ does not evolve with time, it has identity transition matrix.\nNote that $\\th$ is not known; it is partially observed since we only see the sample path realization of the Markov chain $x$ with transition matrix\n$P(u,\\th)$.\\\\\n{\\bf Aim}: Compute the optimal policy\n$$ \\mu^* = \\operatornamewithlimits{argmin}_\\mu J_\\mu(\\pi_0) = \\E\\{ \\sum_{k=0}^{N-1} c\\big(x_k, u_k\n \\big) | \\pi_0 \\} $$\nwhere $\\pi_0$ is the prior pmf of $\\th$. The key point here is that as in a POMDP (and unlike an MDP),\nthe action $u_k$ will now depend on the history of past actions and the trajectory of the Markov chain as we will now describe.\n\n{\\bf Formulation}:\nDefine the augmented state $(x_k, \\th_k)$. Since $\\th_k =\\th$ does not evolve, clearly the augmented state has transition probabilities\n$$ \\mathbb{P}(x_{k+1} = j, \\th_{k+1} = m | x_{k} = i, \\th_k = l, u_k = u) = P_{ij}(u,l)\\, \\delta(l-m), \\quad \nm=1,\\ldots, L. $$\nAt time $k$, denote the history as\n$\\mathcal{H}_k = \\{x_0,\\ldots,x_k, u_1,\\ldots,u_{k-1}\\}$. Then define the belief state which is the posterior pmf of the model\nparameter estimate:\n$$ \\pi_k(l) = \\mathbb{P}( \\th_k = l| \\mathcal{H}_k) , \\qquad l = 1,2.\\ldots, L.$$\n\\begin{compactenum}\n\\item Show that the posterior is updated via Bayes' formula as\n\\begin{equation}\n\\begin{split}\n \\pi_{k+1}(l) &= T(\\pi_k, x_k,x_{k+1},u_k)(l) \\stackrel{\\text{defn}}{=} \\frac{P_{x_k,x_{k+1}}(u_k, l) \\, \\pi_k( l)}\n{\\sigma(\\pi_k,x_k,x_{k+1})}, \\; l = 1,2.\\ldots, L \\\\ \\text{ where } \\; &\n\\sigma(\\pi_k,x_k,x_{k+1},u_k) = \\sum_m P_{x_k,x_{k+1}}(u_k, m)\\, \\pi_k( m). \\end{split}\n\\end{equation}\nNote that $\\pi_k$ lives in the $L-1$ dimensional unit simplex.\n\nDefine the belief state as $(x_k, \\pi_k)$. 
The actions are then chosen as\n$$u_{k} = \\mu_k(x_k,\\pi_k) $$\n\nThen the optimal policy $\\mu_k^*(i,\\pi)$ satisfies Bellman's equation\n\\begin{equation} \\begin{split}\nJ_k(i,\\pi) &= \\min_u Q_k( i, u, \\pi) , \\quad \\mu^*_k(i,\\pi) = \\operatornamewithlimits{argmin}_u Q_k( i, u, \\pi) \\\\\nQ_k(i,u,\\pi) &= \nc(i,u) + \\sum_{j} J_{k+1}\\big(j, T(\\pi,i,j,u) \\big) \\, \n\\sigma(\\pi,i,j,u)\n \\end{split}\n\\end{equation} \ninitialized with the terminal cost $J_N(i,\\pi) = c_N(i)$.\n\n\\item Show that the value function $J_k$ is piecewise linear and concave in $\\pi$. Also show how the exact POMDP solution algorithms \nin Chapter \\ref{ch:pomdpbasic}\ncan be used to compute\nthe optimal policy. \n\\end{compactenum}\nThe above problem is related to the concept of {\\em dual control} which dates back to the 1960s \\cite{Fel65}; see also\n\\cite{Lov93} for the use of Lovejoy's suboptimal algorithm to this problem. \\index{dual control}\nDual control relates to the tradeoff between estimation and control: if the controller is uncertain about the model\n parameter, it needs to control \nthe system more aggressively in order to probe the system to estimate it; if the controller is more certain about the model parameter, \nit can deploy a less aggressive control.\nIn other words, initially the controller explores and as the controller becomes more certain it exploits. Multi-armed bandit problems optimize the\ntradeoff between exploration and exploitation. \\index{dual control}\n\n\n\\item{\\bf Optimal Search and Dynamic (Active) hypothesis testing.} \\index{dynamic hypothesis testing} \\index{active hypothesis testing} In \\S 7.7.4 of the book, we considered the classical optimal search problem where the objective was to search for a non-moving target amongst a finite number of cells.\nA crucial assumption was that there are no false alarms; if an object is not present in a cell and the cell is searched, the observation recorded\nis $\\bar{F}$ (not found). \n\nA generalization of this problem is studied in \\cite{Cas95}. Assume there are $\\,\\mathcal{U} = \\{1,2,\\ldots,U\\}$ cells. When cell $u$ is searched\n\\begin{compactitem}\n\\item If the target is in cell $u$ then an observation $y$ is generated with pdf or pmf $\\phi(y)$ if \nthe target is in cell $u$\n\\item If the target is {\\bf not} in cell $u$, then an observation $y$ is generated with pdf or pmf $\\bar{\\phi}(y)$. \n(Recall in classical search $\\bar{\\phi}(y)$ is dirac measure on the observation symbol $\\bar{F}$.)\n\\end{compactitem}\nThe aim is to determine the optimal search policy ${\\boldsymbol{\\mu}}$ over a time\nhorizon $N$\nto maximize\n$$ J_{\\boldsymbol{\\mu}} = \\E_{{\\boldsymbol{\\mu}}} \\max_{u \\in \\{1,\\ldots, U\\}} \\pi_N (u) \\} $$\nat the final time $N$.\n\nAssume the pdf or pmf $\\bar{\\phi}(y)$ is symmetric in $y$, that is $\\bar{\\phi}(y) = \\bar{\\phi}(b-y)$ for some real constant $b$. Then \\cite[Proposition 3] {Cas95} shows the nice\nresult that the optimal policy is to search either of the two most likely locations given the belief $\\pi_k$.\n\nThe above problem can be viewed as an active hypothesis testing problem, which is an instance of a controlled\nsensing problem.\nThe decision maker seeks to adaptively select the most informative sensing action for making a decision in a hypothesis testing problem. Active hypothesis testing goes all the way\nback to the 1959 paper by \nChernoff \\cite{Che59}.\nFor a more general and recent take of active hypothesis testing please see~\\cite{NJ13}. 
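\n\nTo get a feel for the above search problem, the following is a minimal simulation sketch. It is not from \\cite{Cas95}: the Gaussian densities $\\phi = N(1,\\sigma^2)$ and $\\bar{\\phi} = N(0,\\sigma^2)$ (the latter satisfies the symmetry condition with $b=0$), the number of cells, the horizon and the two heuristic policies below are purely illustrative assumptions. The sketch implements the Bayesian belief (posterior) update for a stationary target when cell $u$ is searched, and estimates the terminal reward $\\E\\{\\max_u \\pi_N(u)\\}$ by Monte Carlo for two heuristic policies: always search the most likely cell, and randomize over the two most likely cells. It does not compute the optimal policy.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nU, N, sigma = 5, 30, 1.0   # cells, horizon, observation noise (assumed values)\n\ndef update(belief, u, y):\n    # p(y | search cell u, target in cell i) is phi(y) if i == u, phibar(y) otherwise\n    phi = np.exp(-0.5 * ((y - 1.0) / sigma) ** 2)\n    phibar = np.exp(-0.5 * (y / sigma) ** 2)\n    lik = np.full(U, phibar)\n    lik[u] = phi\n    post = lik * belief          # Bayes rule; stationary target (identity transition)\n    return post / post.sum()\n\ndef run(policy, n_runs=2000):\n    total = 0.0\n    for _ in range(n_runs):\n        target = rng.integers(U)\n        belief = np.full(U, 1.0 / U)\n        for _ in range(N):\n            u = policy(belief)\n            y = (1.0 if u == target else 0.0) + sigma * rng.standard_normal()\n            belief = update(belief, u, y)\n        total += belief.max()    # terminal reward max_u pi_N(u)\n    return total / n_runs\n\nmost_likely = lambda b: int(np.argmax(b))\ntwo_most_likely = lambda b: int(rng.choice(np.argsort(b)[-2:]))\n\nprint('search most likely cell      :', run(most_likely))\nprint('search one of two most likely:', run(two_most_likely))\n\\end{verbatim}\nVarying $\\sigma$ and $U$ in this sketch gives a feel for how quickly the posterior concentrates under the two heuristic policies.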
\n\\end{compactenum}\n\n\n\\chapter{POMDPs in Controlled Sensing and Sensor Scheduling} \n\n\\begin{compactenum}\n\n\\item {\\bf Optimal Observer Trajectory for Estimating a Markovian Target.} This problem is identical to the search problem described in\n\\S \\ref{chp:optimal search}. A target moves in space according to a Markov chain. (For convenience assume $X$-cells in two dimensional\nspace.\nA moving observer (sensor) measures the target's state (position) in noise. Assume that the noise depends on the relative distance between the target\nand the observer. \\index{optimal observer trajectory}\nHow should the observer move amongst the $X$-cells in order to locate where the target is? \nOne metric that has been used in the literature \\cite{LI99} is the stochastic observability (which is related to the mutual information) of the target; see also \\S \\ref{sec:radarkf}. The aim of the observer is to move so as to maximize\nthe stochastic observability of the target. As described in \\S \\ref{chp:optimal search}, the problem is equivalent to a POMDP.\n\nA more fancy version of the setup involves multiple observers (sensors) that move within the state space and collaboratively seek to locate the \ntarget. Assume that the observers exchange information about their observations and actions. The problem can again be formulated as a POMDP with a larger action and observation space.\n\nSuppose the exchange of information between the observers occurs over a noisy communication channel where the error probabilities\nevolve according to a Markov chain as in \\S \\ref{sec:minh}. Formulate the problem as a POMDP.\n\n\n\\item {\\bf Risk averse sensor scheduling.} As described in \\S \\ref{sec:nonlinearpomdpmotivation}, in controlled sensing applications, one is interested in incorporating\nthe uncertainty in the state estimate into the instantaneous cost. This cannot be modeled using a linear cost since the uncertainty is minimized\nat each vertex of the simplex $\\Pi(\\statedim)$. In \\S \\ref{sec:nonlinearpomdpmotivation}, quadratic functions of the belief were used to model the conditional\nvariance. A more principled alternative is to use dynamic coherent risk measures; recall three examples of such risk measures were discussed\nin \\S \\ref{sec:riskaverse}. \n\nDiscuss how open loop feedback control can be used for a POMDP with dynamic coherent risk measure.\n\n \\item {\\bf Sensor Usage Constraints.} The aim here is to how the POMDP formulation of a controlled sensing problem can be modified \n straightforwardly to incorporate sensing \n constraints \n on the total usage of particular sensors.\nSuch constraints are often used in \nsensor resource management.\n\n\\begin{compactenum}\n\\item Consider a $N$ horizon problem \\index{sensor management! usage constraints}\nwhere sensor 1 can be used at most $L$ times\nwhere $L \\leq N$. 
For notational simplicity, assume that there are two sensors, so $\\,\\mathcal{U} = \\{1,2\\}$.\nAssume that there are no constraints on the usage of the other sensors.\n\nFor notational convenience we consider rewards denoted as $R(\\pi,u) = \\sum_{i=1}^X R(i,u) \\pi(i) $ instead of costs $C(\\pi,u)$ expressed in terms of the belief state\n$\\pi$.\nShow that Bellman's equation is given by\n\\begin{multline*}\n V_{n+1}(\\pi,l) = \\max\\{ R(\\pi,1) + \\sum_y V_{n}(T(\\pi,y,1), l-1) \\sigma(\\pi,y,1), \\\\\nR(\\pi,2) + \\sum_y V_{n}(T(\\pi,y,2), l) \\sigma(\\pi,y,2)\\}\n\\end{multline*}\nwith boundary condition $V_n(\\pi,0) = 0$, $n=0,1,\\ldots, N$.\n\n\\item If the constraint is that sensor 1 needs to be used exactly $L$ times, then show that the following additional boundary condition needs to be included:\n$$V_n(\\pi,n) = R(\\pi,1) + \\sum_y V_{n-1}(T(\\pi,y,1), n-1) \\sigma(\\pi,y,1) , \\; \\text{ for } n=1,\\ldots,L .$$\n\n\\item In terms of the POMDP solver software, the constraint for using sensor~1 at most $L$ times is easily incorporated by augmenting the state space. Define the controlled finite state process $r_k \\in \\{0,2,\\ldots,L\\}$ with $(L+1 )\\times (L+1)$ transition matrices\n$$ Q(1) = \\begin{bmatrix} 0 & 1 & 0 & \\cdots & 0\\\\\n 0 & 0 & 1 & \\cdots & 0 \\\\\n \\vdots & \\vdots& \\vdots & \\ddots & 1 \\\\\n 0 & 0 & 0 & \\cdots & 1 \\end{bmatrix}, \\quad Q(2) = I .$$\nThen define the POMDP with:\n\\begin{compactitem}\n\\item transition matrices $P(1) \\otimes Q(1)$ and $P(2) \\otimes Q(2)$,\n\\item observation probabilities\n$p(y | x, r,u) = p(y| x, u)$, \n\\item rewards $R(x,r,u) = R(x,u)$ for $r> 0$ and $R(x,r=0,u) = 0$.\n\\end{compactitem}\n\n\n\nIn the problems for Chapter \\ref{ch:pomdpstop}, we consider a simpler version of the above problem for optimal measurement selection of a HMM.\nIn that simpler case, one can develop structural results for the optimal policy.\n\\end{compactenum}\n\n\\begin{comment}\nLet $S_1 = \\{e_1,\\ldots,e_{N_1+1}\\}$ denote the set of $N_1+1$\ndimensional unit vectors, where $e_i$ has \n 1 in the $i$-th position. 
\nWe will use process $n_k$ to denote the number of times sensor 1 is used.\nLet $n_k = e_i$ \nif sensor 1 has been used $i-1$ times up to time $t$.\nThen the process $n_k \\in S_1$ can be modelled as follows:\nIf sensor 1 is used (i.e., $u_k =1$) and $n_k = e_i$,\nthen $n_{k+1}$ jumps to state $e_{i+1}$.\nIf any other sensor is used then $n_{k+1}=n_k = e_i$.\nThus $n_k$ is a deterministic\nMarkov chain with dynamics given by\n\\begin{equation}\nn_k = Q(u_k) n_{k-1}, \\qquad n_0 = e_1\n\\end{equation}\nwhere the transition probability matrix $Q(\\cdot)$ is defined as\n $$Q(u_k = 1) = \n\\begin{bmatrix} 0 & 1 & 0 & \\cdots & 0\\\\\n 0 & 0 & 1 & \\cdots & 0 \\\\\n \\vdots & \\vdots& \\vdots & \\ddots & 1 \\\\\n 0 & 0 & 0 & \\cdots & 1 \\end{bmatrix}\n\\quad \\text{and } Q(u_k) = I_{(N_1+1)\\times (N_1+1)} \\text { if $u_k \\neq 1$}$$\n\n\nThe action space $U_{k,n_k}$ is defined as follows:\n$$\nU_{k,n_k} =\n\\begin{cases} \n \\{2,\\ldots,U\\} &\\text{ if } n_k = e_{N_1+1}\\\\\n \\{1,\\ldots,U\\} & \\text{ if } n_k \\neq e_{N_1+1} \\end{cases}\n$$\n\nThe one-step ahead scheduling policy is given by\n$\nV_N(\\pi_N,n_N) = \\epsilon _{N}(\\pi ^{z_{N},r_{N}})\n$\nand for $t = N-1, N-2, \\ldots, 0$\n\\begin{equation} \\label{eq:mgment}\nV_{k}(\\pi ^{z_{k},r_{k}},n) = \\min_{u\\in U_{k+1,n_{k+1}}} \\left[\\; \\epsilon\n_{k}(u,\\pi _{k}^{z_{k},r_{k}}) + \\E_{y_{k+1}}\\left\\{ V_{k+1}\\left( T(\\pi\n_{k}^{z_{k},r_{k}},y_{k+1},u), Q^{\\prime} n\\right) \\right\\} \\right]\n\\end{equation}\nThe above dynamic programming recursion can be recast into a form\nsimilar to that for a Jump Markov linear\nsystem by the following coordinate change:\nConsider the augmented Markov chain $(r_k,n_k)$. This has transition\nprobability matrix \n$ P \\otimes Q$ where $P$ denotes\nthe transition probability matrix of $r_k$ and\n$\\otimes$ denotes tensor (Kronecker product).\nBecause $n_k$ is a fully observed Markov chain, the information state of \n$(x_k,n_k)$ is $\\pi_k^{r_k} \\otimes n_k$ .\n This augmented information state \n is identical to that of a jump Markov linear\nsystem (with larger state space) and can be computed via the \n IMM algorithm in \\S \\ref{sec:IMM} or particle filtering algorithm. Thus the one-step ahead\nalgorithm described above can be used for a practical suboptimal\nsolution. \\end{comment}\n\n\n\n\\item As described in \\S \\ref{sec:nonlinearpomdpmotivation}, in controlled sensing it makes sense to choose a cost that is nonlinear in the belief state $\\pi$ in order to penalize uncertainty in the state estimate.\nOne choice of a nonlinear cost that has zero cost at the vertices of the belief space is \n$$C(\\pi,u) = \\min_{ i \\in \\{1,\\ldots,X\\}} \\pi(i) . 
$$\nThis cost $C(\\pi,u)$ is piecewise linear and concave in $\\pi \\in \\Pi(\\statedim)$ where $\\Pi(\\statedim)$ denotes the belief space.\n\nSince $C(\\pi,u)$ is positively homogeneous, show that the value function is piecewise linear and concave for any finite horizon $N$.\nHence the optimal POMDP solvers of Chapter \\ref{ch:pomdpbasic} can be used to solve this nonlinear cost POMDP exactly and therefore compute the optimal\n policy.\n\n\\end{compactenum}\n\n\\chapter{Structural Results for Markov Decision Processes} \n\n\\begin{compactenum}\n\n\n\\item {\\bf Supermodularity, Single Crossing Condition \\& Interval Dominance Order.} \\index{supermodular} \\index{single crossing} \\index{interval dominance order}\nA key step in establishing structural results for MDPs is to give sufficient conditions for $u^*(x) = \\operatornamewithlimits{argmax}_u \\phi(x,u)$ to be increasing in $x$.\nIn \\S \\ref{chp:fullsupermod} of Chapter \\ref{chp:monotonemdp} we gave two conditions, namely supermodularity and the single crossing condition (which\nis a more general condition than supermodularity).\nMore recently, the interval dominance order has been introduced in \\cite{QS09} as an even more general condition.\nAll three conditions boil down to the following statement:\n\\begin{equation} \\phi(x+1,u+1) - \\phi(x+1,u) \\geq \\rho(u) \\, \\big( \\phi(x,u+1) - \\phi(x,u) \\big) \\label{eq:intdominance} \\end{equation}\nwhere $\\rho(u)$ is a strictly positive function of $u$. In particular,\n\\begin{compactitem}\n\\item Choosing $\\rho(u)=1$ in (\\ref{eq:intdominance}) yields the supermodularity condition.\n\\item If there exists a fixed positive constant $\\rho(u)$ such that (\\ref{eq:intdominance}) holds, then the single crossing condition holds.\n\\item If there exists a positive function $\\rho(u)$ that is increasing\\footnote{Recall that in the book we use increasing in the weak sense to mean non-decreasing} in $u$, then (\\ref{eq:intdominance}) yields the interval dominance order condition (actually this is a sufficient condition for interval dominance, see \\cite{QS09} for details).\n\\end{compactitem}\nNote that single crossing and interval dominance are ordinal properties in the sense that they are preserved by monotone transformations.\n\nThe sum of supermodular functions is supermodular. Unfortunately, in general, the um of single crossing functions is not single crossing; however, see\n\\cite{QS12} for some results.\nDiscuss if the interval dominance order holds for sums of functions. Can it be used to develop structural results for an MDP?\n\n\\item Clearly, in general, the sum of single crossing functions is not single crossing. Even a constant plus a single crossing function\nis not necessarily single crossing. Sketch the curve of a single crossing function which wiggles close to zero. Then adding a positive constant\nimplies that the curve will cross zero more than once.\nAlso the sum of a supermodular plus single crossing is not single crossing. In terms of $\\phi(x) = f(x,u+1) - f(x,u)$, supermodular implies\n$\\phi(x) $ is increasing in $x$. 
Clearly the sum of an increasing function and a single crossing is not single crossing in general.\n\n\n\n\\item {\\bf Invariance of optimal policy to costs.} Recall that Theorem \\ref{thm:mdpmonotone} require that the MDP costs satisfy assumptions\n(A1) and (A3) for the optimal policy to be monotone.\nShow that for a discounted cost infinite horizon MDP, assumption (A1) and (A3) can be relaxed as follows:\n\nThere exists a single vector $\\phi \\in {\\rm I\\hspace{-.07cm}R}^X$ such that for every action $u \\in \\,\\mathcal{U}$, \n\\begin{compactenum}\n\\item[(A1')] $(I - \\rho P(u)) \\phi $ is a vector with increasing elements.\n(Recall $\\rho$ is the discount factor.)\n\\item[(A3')] $ (P(u+1) - P(u)) \\phi$ is a vector with decreasing elements.\n\\end{compactenum}\nIn other words the structure of the transition matrix is enough to ensure a monotone policy and no assumptions are required on the cost\n(of course the costs are assumed to be bounded)\n\n{\\em Hint}: Define the new value function\n$\\bar{V}(i) = V(i) - \\phi(i)$ . Clearly the optimal policy remains unchanged and $\\bar{V}$ satisfies Bellman's equation\n$$ \\bar{V}(i) = \\min_u \\{ c(i,u) - \\phi(i) + \\rho \\sum_j \\phi(j) P_{ij}(u) + \\rho \\sum_j \\bar{V}(j) P_{ij}(u)\\} $$\nwhere $\\rho \\in (0,1)$ denotes the discount factor.\n\n\\item {\\bf Myopic lower bound to optimal policy.}\nRecall that supermodularity of the transition matrix (A4) was a key requirement for the optimal policy to be monotone.\nIn particular, Theorem \\ref{thm:mdpmonotone} shows that $Q(i,u)$ is submodular, i.e., \n$Q(i,u+1) - Q(i,u)$ is decreasing in $i$.\nSometimes supermodularity of the transition matrix is too much to ask for. \nConsider instead of (A4) the relaxed condition\n\\begin{compactitem}\n\\item[(A4')]$P_i(u+1) \\geq_s P_i(u)$ for each row $i$.\n\\end{compactitem}\nShow that (A4') together with (A1), (A2) implies that \n$$ \\sum_j P_{ij}(u+1) V(j) \\leq \\sum_j P_{ij}(u) V(j) $$\nDefine the myopic policy $\\policyl(i) = \\operatornamewithlimits{argmin}_u c(i,u)$.\nShow that \nunder (A1), (A2), (A4'),\n$\\mu^*(i) \\geq \\underline{\\mu}(i)$. In other words, the myopic policy $\\policyl$ forms a lower bound to the optimal policy $\\mu^*$.\n\n\\item {\\bf Monotone policy iteration algorithm.} Suppose an MDP has a monotone policy.\n If the MDP parameters are known,\nthen the policy iteration algorithm of \\S \\ref{sec:vipilp} can be used. If the policy $\\mu_{n-1}$ at iteration $n-1$ is monotone then show that under the assumptions of (A1), (A2) of Theorem \\ref{thm:mdpmonotone}, the policy evaluation step yields $J_{\\mu_{n-1}}$ as a decreasing vector. Also show that under (A1)-(A4), (a similar proof to Theorem \\ref{thm:mdpmonotone}) implies that the policy improvement\n step yields $\\mu_n$ that is monotone. So the policy iteration algorithm will automatically be confined to monotone policies if initialized\n by a monotone policy.\n\n\n\n\\item {\\bf Stochastic knapsack problem.} Consider the following version of the stochastic knapsack problem;\\footnote{The classical NP hard knapsack problem deals with\n$U$ items with costs $c(1),c(2),\\ldots,c(U)$ and lifetimes $t_1,t_2,\\ldots t_U$. The aim is to compute \nthe minimum cost subset of these items whose total lifetime is at most~$T$.}\nsee \\cite{Ros83} and also \\cite{CR14a}. \\index{stochastic knapsack problem}\nA machine must operate for $T$ time points. Suppose that one specific component of the machine fails intermittently.\nThis component is replaced when it fails. 
There are $U$ possible brands one can choose to replace this component when it fails.\nBrand $u\\in \\{1,2,\\ldots,U\\}$ costs $c_u$ and has an operating lifetime that is exponentially distributed with rate $\\lambda_u$. \nThe aim is to minimize the expected total cost incurred by replacing the failed component so that the machine operates for $T$ time points.\n\nSuppose a component has just failed. Let $t$ denote the remaining time left to operate the machine.\nThe optimal policy for deciding which of the $U$ possible brands to choose for the replacement satisfies Bellman's equation\n\\begin{align*} Q(t,u) &= c(u) + \\int_0^t V(t-\\tau) \\, \\lambda_u e^{-\\lambda_u \\tau} d\\tau , \\quad Q(0,u) = 0, \\\\\nV(t) &= \\min_{u \\in \\{1,2,\\ldots,U\\}} Q(t,u) , \\quad \\policy^*(t) = \\operatornamewithlimits{argmin}_{u \\in \\{1,2,\\ldots,U\\}} Q(t,u)\n\\end{align*}\nShow that if $\\lambda_u c(u)$ is decreasing with $u$, then $Q(t,u)$ is submodular.\nIn particular, show that \n$$\\frac{d}{dt} Q(t,u) = \\lambda_u c(u) $$\nTherefore, the optimal policy $\\policy^*(t)$ has the following structure:\nUse brand 1 when the time remaining is small, then switch to brand 2 when the time increases, then brand 3, etc.\n\nGeneralize the above result to the case when time $k$ is discrete and brand $u$ has lifetime pmf $p(k,u)$, $k=0,1,\\ldots$. \nThen Bellman's equation reads\n\\begin{align*} Q(n,u) &= c(u) + \\sum_{k=0}^n V(n-k) \\, p(k,u) \\\\\n V(n) &= \\min_{u \\in \\{1,2,\\ldots,U\\}} Q(n,u), \\quad \\policy^*(n) = \\operatornamewithlimits{argmin}_{u \\in \\{1,2,\\ldots,U\\}} Q(n,u)\n \\end{align*}\n What are sufficient conditions in terms of submodularity of the lifetime pmf $p(k,u)$ for the optimal policy to be monotone?\n \n \n \\item {\\bf Monotonicity of optimal policy with respect to horizon.} \\index{effect of planning horizon} Show that the following result holds for a \n finite horizon MDP.\n If $Q_n(i,u)$ is supermodular in $(i,u,n)$ then $V_n(i) = \\max_u Q_n(i,u)$ is supermodular in $(i,n)$.\n Note that checking supermodularity with respect to $(i,u,n)$ is pairwise: so it suffices to check\n supermodularity with respect to $(i,u)$, $(i,n)$ and $(u,n)$.\n \nWith the above result, consider a finite horizon MDP that satisfies the assumptions (A1)-(A4) of \\S \\ref{sec:monotonecond}.\nUnder what further conditions is \n $\\mu_n^*(i)$ increasing in $n$ for fixed $i$? What does this mean intuitively?\n \n \n \\item {\\bf Monotone Discounted Cost Markov Games.} \\index{structural result! Markov game} \\index{Markov game! structural result} \n \\index{Markov game! Nash equilibrium! structural result} \n In \\S \\vref{sec:markovgame} of this internet supplement we briefly described the formulation of infinite horizon discounted cost Markov games.\n Below we comment briefly on structural results for the Nash equilibrium of such games.\n \n Consider the infinite horizon discounted cumulative cost of (\\ref{eq:cumcostgame}).\n The structural results developed in this chapter for MDPs extend straightforwardly to \n infinite horizon discounted cost Markov games.\n The assumptions (A1) to (A4) of \\S \\ref{sec:monotonecond} of the book need to be extended as follows:\n \n \\begin{description} \n\\item[(A1)] Costs $c(x,u,u^-)$ are decreasing in $x$ and $u^-$. Here $u^-$ denotes the actions of the other players.\n\\item[(A2)] $P_i(u,u^-) \\leq_s P_{i+1}(u,u^-)$ for each $i$ and fixed $u,u^-$. 
\nHere $P_i(u,u^-)$ denotes the $i$-th row of the transition matrix for action $u,u^-$.\n\\item[(A3)] $c(x,u,u^-)$ is submodular in $(x,u)$ and $(u,u^-)$\n\\item[(A4)] $P_{ij}(u,u^-)$ is tail-sum supermodular in $(i,u,u^-)$. That is,\n$$\\sum_{j\\geq l} \\big(P_{i j}(u+1,u^-) - P_{i j}(u,u^-) \\big) \\text{ is increasing in } i .$$\n\\end{description}\n\\begin{theorem}\nUnder conditions (A1)-(A4), there exists a pure Nash equilibrium $({\\pol1}^*, {\\pol2}^*)$ such that the pure policies\n${\\pol1}^*$ and ${\\pol2}^*$ are increasing in state $i$. \n\\end{theorem}\nContrast this with the case of a general Markov game (\\S \\ref{sec:markovgame} of this internet supplement) where one can only\nguarantee the existence of a randomized Nash equilibrium\nin general.\n \n The proof of the above theorem is as follows. First for any increasing fixed policy $\\pol2$ for player 2, one can show via an identical proof to Theorem\n \\ref{thm:mdpmonotone}, the optimal policy ${\\pol1}^*(x,\\pol2(x))$ is increasing in $x$. Similarly, for any increasing fixed policy $\\pol1$ for player 1, \n ${\\pol2}^*(x,\\pol1(x))$ is increasing in $x$. These are obtained as the solution of Bellman's equation. In game theory, these are called best response strategies.\n Therefore the vector function $[{\\pol1}^*(x), {\\pol2}^*(x)]$ is increasing in $x$.\nIt then follows from Tarski's fixed point theorem\\footnote{Let $X$ denote a compact lattice and $f: X \\rightarrow X$ denote an increasing function.\nThen there exists a fixed point $x^* \\in X$ such that $f(x^*) = x^*$} that such a function has a fixed point. Clearly this fixed point is a Nash equilibrium since any unilateral deviation\nmakes one of the players worse off.\n\nActually for submodular games a lot more holds. The smallest and largest Nash equilibria are pure (non-randomized) and satisfy the monotone property\nof the above theorem. These can be obtained\nvia a best response algorithm the simply iterates the best responses ${\\pol1}^*(x,\\pol2(x))$ and ${\\pol2}^*(x,\\pol1(x))$ until convergence.\nThere are numerous papers and books in the area.\n \n\\end{compactenum}\n\n\\chapter{Structural Results for Optimal Filters} \n\n\\begin{compactenum}\n\n\\item In the structural results presented in the book, we have only considered first order stochastic dominance and monotone likelihood ratio dominance (MLR) since they are sufficient\nfor our purposes. Naturally there are many other concepts of stochastic dominance \\cite{MS02}. Show that \n$$ \\text{ MLR } \\implies \\text{Hazard rate order} \\implies \\text{first order} \\implies \\text{second order} $$\nEven though second order stochastic dominance is useful for concave decreasing functions (such as the value function of a POMDP),\njust like first order dominance, it cannot cope with conditioning (Bayes' rule).\n\n\\item Consider a reversible Markov chain with transition matrix $P$, initial distribution $\\pi_0$ and stationary distribution $\\pi_\\infty$.\nSuppose $\\pi_0 \\leq_r \\pi_\\infty$. Show that if $P$ has rows that are first order increasing then $\\pi_n \\leq_r \\pi_\\infty$. \n\n\n\n\\item {\\bf TPn matrix.} A key assumption \\ref{A3} in the structural results is that the transition matrix $P$ is TP2. 
More generally, suppose $n =2,3,\\ldots$.\nThen an $X\\times X$ \nmatrix $P$ is said to be totally positive of order $n$ (denoted as TPn) if for each $k \\leq n$, all the $k \\times k$ minors of $P$ are non-negative.\n\n\n\\item {\\bf TP2 matrix properties.\\footnote{Note that a TP2 matrix does not need to be a square matrix; we consider $P$ to be square here since it is a transition probability matrix.}}\n\\S \\ref{sec:assumpdiscussion} gave some useful properties of TP2 matrices.\n\nSuppose the $X \\times X$ stochastic matrix $P$ is TP2. \n\\begin{compactenum}\n\\item Show that this implies that the elements satisfy\n\\begin{align*} & P_{11} \\geq P_{21} \\geq \\cdots \\geq P_{X1} \\\\\n & P_{1X} \\leq P_{2X} \\leq \\cdots \\leq P_{XX} \\end{align*}\nthat is, the first column is decreasing and the last column is increasing down the rows.\n \\item\n Suppose $P$ has no null columns. Show that if $P_{ij} = 0$, then either $P_{kl} = 0$ for $k \\leq i$ and $l \\geq j$, or\n $P_{kl} = 0$ for $k \\geq i$ and $l \\leq j$.\n \n \\item Show that \n $$ e_1^\\prime ({P^{n}})^\\prime e_1 \\downarrow n, \\qquad e_X^\\prime ({P^{n}})^\\prime e_1 \\uparrow n.\n $$\nAlso show that for each $n$,\n$$ e_1^\\prime ({P^{n}})^\\prime e_i \\downarrow i, \\qquad e_X^\\prime ({P^{n}})^\\prime e_i \\uparrow i\n$$\n\n\n \\end{compactenum}\n Please see \\cite{KK77} for several other interesting properties of TP2 matrices.\n\n\\item MLR dominance is intimately linked with the TP2 property. Show that \n$$\\pi_1 \\leq_r \\pi_2 \\iff \n\\begin{bmatrix} \\pi_1^\\prime \\\\ \\pi_2^\\prime \\end{bmatrix} \\text{ is TP2.} $$\n\n\\item{\\bf Properties of MLR dominance.} Suppose $X$ and $Y$ are random variables and recall that $\\geq_r$ denotes MLR dominance.\\footnote{Stochastic dominance is a property of the distribution of a random variable and has nothing to do with the random variable itself. Therefore in the book, we defined stochastic dominance in terms of the pdf or pmf. Here to simplify notation we use the random variable instead of its distribution.}\n\\begin{compactenum}\n\\item Show that $X\\geq_r Y$ is equivalent to \n$$ \\{ X | X \\in A\\} \\geq_s \\{Y | Y \\in A\\} $$\nfor all events $A$ with $P(X \\in A) > 0$ and $P(Y\\in A) > 0$ where $\\geq_s$ denotes first order dominance. This property is due to \\cite{Whi82}.\n\\item Show that $X\\geq_r Y$ implies that $g(X) \\geq_r g(Y)$ for any increasing function $g$.\n\\item Show that $X \\geq_r Y$ implies that $\\max\\{X,c\\} \\geq_r \\max\\{Y,c\\}$ for any positive constant $c$.\n\n\\item Under what conditions does $X \\geq_r Y$ imply that $-X \\leq_r -Y$?\n\\end{compactenum}\nDo the above properties hold for first order dominance?\n\n\n\\item {\\bf MLR monotone optimal predictor.} Consider the HMM predictor given by the Chapman Kolmogorov equation\n$ \\pi_k = P^\\prime \\pi_{k-1}$.\nShow that if $P$ is a TP2 matrix and $\\pi_0 \\leq_r \\pi_1$, then \n$\\pi_0 \\leq_r \\pi_1 \\leq_r \\pi_2 \\leq_r \\ldots$.\n\n\\item {\\bf MLR constrained importance sampling.}\nOne of the main results of this chapter was to construct reduced complexity HMM filters that provably form lower and upper bounds to the optimal HMM filter\nin the MLR sense. \nIn this regard, consider the following problem. Suppose it is known that \n$ \\underline{\\tp}^\\prime \\pi \\leq_r P^\\prime \\pi$. 
Then given the reduced complexity computation of $\\underline{\\tp}^\\prime \\pi$,\nhow can this be exploited to compute $P^\\prime \\pi$?\n\nIt is helpful to think of the following toy example:\nSuppose it is known that $x^\\prime p \\leq 1$ for a positive vector $x$ and probability vector $p$. How can this constraint be exploited to actually compute the inner \nproduct \n$x^\\prime p$? Obviously from a deterministic point of view there is little one can do to exploit this constraint. \\index{constrained importance sampling}\nBut one can use constrained important sampling: one simple estimator is as follows:\n$$ \\frac{1}{N} \\sum_{i=1}^N x_i I(x_i \\leq 1)$$ \nwhere index $i$ is simulated iid from probability vector $p$.\nIn \\cite{KR14} a more sophisticated constrained importance sampling approach is used to estimate $P^\\prime \\pi$ by exploiting the constraint\n$ \\underline{\\tp}^\\prime \\pi \\leq_r P^\\prime \\pi$.\n\n\n\n\\item {\\bf Posterior Cramer Rao bound.} The posterior Cramer Rao bound \\cite{TMN98} for filtering can be used to compute a lower bound to the mean square error. \\index{posterior Cramer Rao bound}\nThis requires twice differentiability of the logarithm of the joint density. For HMMs, one possibility is to consider the Weiss-Weinstein\nbounds \\index{Weiss-Weinstein bounds}, see \\cite{RO05}. Alternatively, the analysis of \\cite{GSV05} can be used. Compare these \nwith the sample path bounds for the HMM filter obtained in this chapter.\n\n\n \\item \\index{shifted likelihood ratio order} The shifted likelihood ratio order is a stronger order than the MLR order. Indeed, $p> q$ in the shifted likelihood ratio order sense\nif $p_i\/q_{i+j}$ is increasing in $i$ for any $j$. (If $j=0$ it coincides with the standard MLR order.) What additional assumptions are required\nto preserve the shifted likelihood ratio order under Bayes' rule? Show that \nthe shifted likelihood ratio order is closed under convolution. How can this property be exploited to bound an optimal filter?\n\n\n\\item In deriving sample path bounds for the optimal filter, we did not exploit the fact that $T(\\pi,y)$ increases with $y$.\nHow can this fact be used in bounding the sample path of an optimal filter?\n\n\n\n\\item {\\bf Neyman-Pearson Detector} \\index{Neyman-Pearson detector}\nHere we briefly review elementary Neyman-Pearson detection theory and show the classical result that MLR dominance results in a threshold optimal detector.\n\n\nGiven the observation $x$ of a random variable, we wish to decide if $x$ is from pdf $f$ or $g$. To do this, we construct a decision policy $\\phi(x)$. The detector\ndecides\n\\begin{equation} \\begin{split} f \\quad \\text{ if } \\phi(x) & = 0 \\\\\n g \\quad \\text{ if } \\phi(x) & = 1 \\end{split} \\label{eq:detectornp} \\end{equation}\nThe performance of the decision policy $\\phi$ in (\\ref{eq:detectornp}) is determined in terms of two metrics:\n\\begin{compactenum}\n\\item $\\mathcal{P} = \\mathbb{P}( \\text{ reject } f | f \\text{ is true }) $\n\\item $\\mathcal{Q} = \\mathbb{P}( \\text{ reject } f | f \\text{ is false }) $\n\\end{compactenum}\nClearly for the decision policy $\\phi(\\cdot)$ in (\\ref{eq:detectornp}),\n$$ \\mathcal{P} = \\int_{\\rm I\\hspace{-.07cm}R} f(x) \\phi(x) dx, \\quad \\mathcal{Q} = \\int_{\\rm I\\hspace{-.07cm}R} g(x) \\phi(x) dx. 
$$\nThe well known Neyman-Pearson detector seeks to determine the optimal decision policy $\\phi^*$ that maximizes $\\mathcal{Q}$ subject to\nthe constraint $\\mathcal{P} \\leq \\alpha$ for some user specified $\\alpha \\in (0,1]$.\nThe main result is\n\\begin{theorem-non}[Neyman-Pearson lemma]\nAmongst all decision rules $\\phi$ such that $\\mathcal{P} \\leq \\alpha$, the decision rule $\\phi^*$ which maximizes $\\mathcal{Q}$ is given by\n$$ \\phi^*(x) = \\begin{cases} 0 & \\frac{f(x)}{g(x)} \\geq c \\\\\n\t\t1 & \\frac{f(x)}{g(x)} < c \\end{cases} $$\nwhere $c$ is chosen so that $\\mathcal{P} = \\alpha$.\n\\end{theorem-non}\n\\begin{proof}\nClearly for any $x \\in {\\rm I\\hspace{-.07cm}R}$,\n$$ \\big(\\phi^*(x) - \\phi(x) \\big) \\big( c g(x) - f(x) \\big) \\geq 0. $$\nPlease verify the above inequality by showing that if $\\phi^*(x) =1$ then both the terms in the above product are nonnegative; while if $\\phi^*(x) = 0$, then\nboth the terms are nonpositive.\nTherefore,\n$$\nc \\bigg( \\int \\phi^*(x) g(x) dx - \\int \\phi(x) g(x) dx\\bigg) \\geq \\int \\phi^*(x) f(x) dx - \\int \\phi(x) f(x) dx \n$$\nThe right hand side is non-negative since by construction $ \\int \\phi^*(x) f(x) dx = \\alpha$ , while $ \\int \\phi(x) f(x) dx \\leq \\alpha$.\n\\end{proof}\n\n{\\bf Threshold structure of optimal detector.} \\index{Neyman-Pearson detector! optimal threshold structure}\nLet us now give conditions so that the optimal Neyman-Pearson decision policy is a threshold policy:\nSuppose now that $f$ MLR dominates $g$, that is $f(x)\/g(x) \\uparrow x$. \nThen clearly\n\\begin{equation} \\phi^*(x) = \\begin{cases} 0 & x \\geq x^*\\\\\n\t\t\t\t\t1 & x < x^* \\end{cases} \\label{eq:npt}\\end{equation}\n\t\t\t\t\twhere threshold $x^*$ satisfies\n\t\t\t\t\t$$ \\int_{-\\infty}^{x^*} f(x) dx = \\alpha $$\nThus if $f \\geq_r g$, then the optimal detector (in the Neyman-Pearson sense) is the threshold detector (\\ref{eq:npt}).\n\n\n\\end{compactenum}\n\n\n\\chapter{Monotonicity of Value Function for POMDPs} \n\n\\begin{compactenum}\n\n\n\\item Theorem \\ref{thm:pomdpmonotoneval} is the main result of the chapter and it gives conditions under which the value function\nof a POMDP is MLR decreasing. Condition \\ref{A1} was the main assumption on the possibly non-linear cost.\nGive sufficient conditions for a quadratic cost $1-\\pi^\\prime \\pi + c_u^\\prime \\pi$ to satisfy \\ref{A1}. Under what conditions does the entropy\n$-\\sum_i \\pi(i) \\log \\pi(i) + c_u^\\prime \\pi$ satisfy \\ref{A1}.\n\n\\item The shifted likelihood ratio order is a stronger order than the MLR order. Indeed, $p> q$ in the shifted likelihood ratio order sense\nif $p_i\/q_{i+j}$ is increasing in $i$ for any $j$. If $j=0$ it coincides with the standard MLR order. \n(Recall also the problem in the previous chapter which says that the shifted likelihood ratio order is closed under convolution.)\nBy using the shifted likelihood ratio order, what further results on the value function\n$V(\\pi)$ can one get by using Theorem \\ref{thm:pomdpmonotoneval}. \n\n\\item Theorem \\ref{thm:pomdp2state} gives sufficient conditions for a 2-state POMDP to have a threshold policy. We have assumed that \nthe observation probabilities are not action dependent. 
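As a numerical companion to Theorem \ref{thm:pomdp2state}, the following Python sketch runs value iteration on a uniform belief grid for an illustrative discounted 2-state POMDP and prints the resulting policy as a function of $\pi(2)$. All numerical values below ($P(u)$, $B$, $c(x,u)$, the discount factor and the grid size) are assumptions chosen purely for illustration, and the grid-plus-interpolation scheme is a crude approximation rather than the algorithms of the book.
\begin{verbatim}
import numpy as np

# Illustrative 2-state, 2-action, 2-observation POMDP (all values are assumptions).
P = {1: np.array([[0.9, 0.1], [0.3, 0.7]]),   # transition matrices P(u)
     2: np.array([[0.8, 0.2], [0.1, 0.9]])}
B = np.array([[0.8, 0.2], [0.3, 0.7]])        # observation probabilities B_{xy} (action independent)
c = np.array([[1.0, 2.0],                     # costs c(x,u): rows = states, columns = actions
              [3.0, 0.5]])
rho = 0.9                                     # discount factor (assumption)

grid = np.linspace(0.0, 1.0, 201)             # grid over pi(2)
V = np.zeros_like(grid)

def belief_update(p2, u, y):
    """HMM filter T(pi,y,u) for the 2-state case; returns updated pi(2) and sigma(pi,y,u)."""
    pi = np.array([1.0 - p2, p2])
    unnorm = B[:, y] * (P[u].T @ pi)
    sigma = unnorm.sum()
    return unnorm[1] / sigma, sigma

for _ in range(200):                          # value iteration sweeps
    Vnew = np.zeros_like(grid)
    policy = np.zeros_like(grid, dtype=int)
    for n, p2 in enumerate(grid):
        pi = np.array([1.0 - p2, p2])
        Q = []
        for u in (1, 2):
            q = c[:, u - 1] @ pi
            for y in (0, 1):
                p2_next, sigma = belief_update(p2, u, y)
                q += rho * sigma * np.interp(p2_next, grid, V)
            Q.append(q)
        Vnew[n], policy[n] = min(Q), int(np.argmin(Q)) + 1
    V = Vnew

print(policy)   # inspect where the policy switches between actions
\end{verbatim}
Inspecting where the printed policy switches gives a numerical estimate of the threshold belief. To experiment with action dependent observation probabilities one would replace $B$ by $B(u)$ inside the belief update, which is precisely the modification considered in the next question.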
How should the assumptions and proof be modified to allow for action
dependent observation probabilities?

\item How can Theorem \ref{thm:pomdp2state} be modified if dynamic risk measures of \S \ref{sec:riskaverse} are considered?
(see also \S \ref{sec:risk}).


\item {\bf Finite dimensional characterization of Gittins index for POMDP bandit} \cite{KW09}: \S \ref{sec:POMDPbandit} dealt with
the POMDP multi-armed bandit problem.
Consider a POMDP bandit where the Gittins index (\ref{eq:gittindef}) is characterized as the solution of Bellman's equation
(\ref{eq:bellmanbandit}).
 Since the value function of a POMDP is piecewise linear and concave (and therefore has a finite dimensional characterization), it follows that 
a value iteration algorithm for (\ref{eq:bellmanbandit}) that characterizes the Gittins index also has a finite dimensional characterization. Obtain an expression for 
this finite dimensional characterization of the Gittins index (\ref{eq:gittindef}) for a horizon $N$ value iteration algorithm.

\item \S \ref{sec:POMDPbandit} of the book deals with structural results for POMDP bandits. Consider the problem where several searchers are looking
for a stationary target. Only one searcher can operate at a given time and the searchers cannot receive state estimate information from other searchers or a base-station. The base station simply sends a 0 or 1 signal to each searcher telling it when to operate and when to shut down.
When it operates, the searcher moves according to a Markov chain and obtains noisy information about the target.
Show how the problem can be formulated as a POMDP multi-armed bandit.

Show how a radar seeking to hide its emissions (low probability of intercept radar) can be formulated approximately as a POMDP bandit.


\item How does the structural result for the Gittins index for a POMDP bandit specialize to that of a fully observed Markov decision
process bandit problem?



\item Consider Problem \vref{prob:adaptive control} of Chapter \ref{ch:pomdpbasic} where optimal adaptive control of a fully observed MDP was 
formulated as a POMDP. Give conditions that ensure that the value function $J_k(i,\pi)$ is MLR decreasing in $\pi$ and also monotone in $i$. What are the implications of this monotonicity in terms of dual control (i.e., exploration vs exploitation)?


\item {\bf Optimality of Threshold Policy for 2-state POMDP} \index{POMDP! optimality of threshold policy}
Recall that Theorem \ref{thm:pomdp2state} in the book gave sufficient conditions for the optimal policy of a 2-state POMDP to be a threshold.
Consider the proof of Theorem \ref{thm:pomdp2state} in Appendix 11.A of the book. The last step involved going from (\ref{eq:subex}) 
to a simpler expression via tedious but elementary steps.
Here we specify what these steps are.\n\nStart with (\\ref{eq:subex}) in the book:\n\\begin{align}\n\\begin{aligned}\\label{eq:i3}\nI_3 &= \\left[\\sigma(\\bar{\\pi},y,2) + \\sigma(\\bar{\\pi},y,1)\\cfrac{T(\\bar{\\pi},y,1) - T(\\pi,y,2)}{T(\\pi,y,2) - T(\\bar{\\pi},y,2)} + \\sigma(\\pi,y,1)\\cfrac{T(\\pi,y,2) - T(\\pi,y,1)}{T(\\pi,y,2) - T(\\bar{\\pi},y,2)}\\right]\\\\\n&=\\cfrac{I_{31} + I_{32} + I_{33}}{\\sigma(\\pi,y,2)\\left(T(\\pi, y, 2) - T(\\bar{\\pi}, y, 2)\\right)}\\\\\nI_{31} &= \\sigma(\\pi,y,2)\\sigma(\\bar{\\pi},y,1)\\left(T(\\bar{\\pi},y,1) - T(\\pi,y,2)\\right)\\\\ \nI_{32} &= \\sigma(\\pi,y,2)\\sigma(\\bar{\\pi},y,2)\\left(T(\\pi,y,2) - T(\\bar{\\pi},y,2)\\right)\\\\ \nI_{33} &= \\sigma(\\pi,y,2)\\sigma(\\pi,y,1)\\left(T(\\pi,y,2) - T(\\pi,y,1)\\right) \n\\end{aligned}\n\\end{align}\n\nThe second element of HMM predictors $P(a)^\\prime\\pi$ and $(P(a)^\\prime\\bar{\\pi})$ are denoted by $b_{a2}$, $b_{a1}, a = 1, 2$ respectively. Here $b_{a2}$ is defined as follows\n\\begin{align}\\label{eq:sim_fil}\nb_{a2} = (1-\\pi(2))P_{12}(a) + \\pi(2)P_{22}(a). \n\\end{align}\n\nConsider the following simplification of the term $I_{31}$ by using $b_{a2}$ and $b_{a1}$.\n\\begin{align}\\label{eq:i31}\n\\begin{aligned}\nI_{31} = &(B_{1y}(1-b_{22}) + B_{2y}b_{22})B_{2y}b_{11} - (B_{1y}(1-b_{11}) + B_{2y}b_{11})B_{2y}b_{22}\\\\\n = &B_{1y}B_{2y}(b_{11} - b_{22}) \n\\end{aligned}\n\\end{align}\nSimilarly, $I_{32}$ and $I_{33}$ are simplified as follows\n\\begin{align}\\label{eq:i3233}\n\\begin{aligned}\nI_{32} = B_{1y}B_{2y}(b_{22} - b_{21}), I_{33} = B_{1y}B_{2y}(b_{22} - b_{12}) \n\\end{aligned}\n\\end{align}\nSubstituting \\eqref{eq:i31}, \\eqref{eq:i3233} in \\eqref{eq:i3} yields the following\n\\begin{align}\\label{eq:i3f}\n\\begin{aligned}\nI_3 &= B_{1y}B_{2y}\\cfrac{b_{11} + b_{22} - b_{21} - b_{12}}{\\sigma(\\pi,y,2)\\left(T(\\pi, y, 2) - T(\\bar{\\pi}, y, 2)\\right)}\n\\end{aligned}\n\\end{align}\nSubstituting \\eqref{eq:sim_fil} for $b_{ij}$ and some trivial algebraic manipulations yield the following\n\\begin{align}\\label{eq:i3f}\n\\begin{aligned}\nI_3 &= B_{1y}B_{2y}(\\pi(2) - \\bar{\\pi}(2))\\cfrac{P_{22}(2) - P_{12}(2) - (P_{22}(1) - P_{12}(1))}{\\sigma(\\pi,y,2)\\left(T(\\pi, y, 2) - T(\\bar{\\pi}, y, 2)\\right)}.\n\\end{aligned}\n\\end{align}\n\n\\item Consider the following special case of a POMDP. Suppose the prior belief $\\pi_0 \\in \\Pi(\\statedim) $ is known. From time 1 onwards,\nthe state is fully observed. How can the structural results in this chapter be used to characterize the optimal policy?\n\\end{compactenum}\n\n\n\\chapter{Structural Results for Stopping Time POMDPs} \n\n\\section{Problems}\nMost results in stopping time POMDPs in the literature use the fact that the stopping set is convex\n(namely, Theorem \\ref{thm:pomdpconvex}). \nRecall that the only requirements of \nTheorem \\ref{thm:pomdpconvex} \nare that the value function is convex and the stopping cost is linear.\nAnother important result for finite horizon POMDP stopping time problems is the nested \nstopping set property $\\mathcal{S}_0 \\subseteq \\mathcal{S}_1 \\subseteq \\mathcal{S}_2 \\ldots$. The following exercises discuss both these aspects.\n\n\\begin{compactenum}\n\\item {\\bf Nested stopping set structure.} \\index{stopping time POMDP! 
nested stopping sets}
Consider the stopping time POMDP dynamic programming equation
$$ V(\pi) = \min\{c_1^\prime \pi, c_2^\prime \pi + \sum_y V( T(\pi,y,u) ) \sigma(\pi,y,u) \}.$$
Define the stopping set as
$$ \mathcal{S} = \{ \pi: c_1^\prime \pi \leq c_2^\prime \pi + \sum_y V( T(\pi,y,u) ) \sigma(\pi,y,u) \} = \{\pi: \mu^*(\pi) = 1 \text{ (stop) } \}$$
Recall the value iteration algorithm is
$$ V_{n+1}(\pi) = \min\{c_1^\prime \pi, c_2^\prime \pi + \sum_y V_n( T(\pi,y,u) ) \sigma(\pi,y,u) \}, \quad V_0(\pi) = 0. $$
Define the stopping sets $\mathcal{S}_n = \{\pi: c_1^\prime \pi \leq c_2^\prime \pi + \sum_y V_n( T(\pi,y,u) ) \sigma(\pi,y,u) \}$.

Show that the stopping sets satisfy $\mathcal{S}_0 \subseteq \mathcal{S}_1 \subseteq \mathcal{S}_2 \subseteq \ldots$, implying that $$\mathcal{S} = \cup_{n} \mathcal{S}_n $$


\item {\bf Explicit characterization of stopping set.} Theorem \ref{thm:pomdpconvex} showed that for a stopping time POMDP, the stopping set $\mathcal{S}$ is convex.
By imposing further conditions, the set $\mathcal{S}$ can be determined explicitly.
Consider the following set of belief states
\begin{equation} \mathcal{S}^o= \{ \pi: c_1^\prime \pi \leq c_2^\prime \pi + c_1^\prime P^\prime \pi \} \label{eq:stopsetoo} \end{equation}
Suppose the transition matrix $P$ and observation probabilities $B$ of the stopping time POMDP satisfy the following property:
\begin{equation} \pi \in \mathcal{S}^o \implies T(\pi,y) \in \mathcal{S}^o, \quad \forall y \in \mathcal{Y} . \label{eq:onesteppp}
\end{equation}

\begin{compactenum}
\item Prove that $\mathcal{S}^o = \mathcal{S}$. Therefore, the hyperplane $c_1^\prime \pi = c_2^\prime \pi + c_1^\prime P^\prime \pi $ determines the stopping set $\mathcal{S}$.


The proof proceeds in two steps: First prove by induction on the value iteration algorithm that for $\pi \in \mathcal{S}^o$, $V_n(\pi) = c_1^\prime \pi$, for $n=1,2,\ldots$. \index{stopping time POMDP! characterization of stopping set}

Second, consider a belief $\pi $ such that the optimal policy goes one step and then stops. This implies that the value function is $V(\pi) = c_2^\prime \pi + c_1^\prime P^\prime \pi $. Therefore clearly $c_2^\prime \pi + c_1^\prime P^\prime \pi 
< c_1^\prime \pi$. This implies that $\pi \notin \mathcal{S}^o$. So for any belief $\pi $ such that $\policy^*(\pi) $ goes one step and stops, $\pi \notin \mathcal{S}^o$. Therefore, for any belief $\pi$ such that $\policy^*(\pi) $ goes more than one step and stops, $\pi \notin \mathcal{S}^o$.

The two steps imply that $\mathcal{S}^o = \mathcal{S}$. Therefore the stopping set is explicitly given by the polytope in (\ref{eq:stopsetoo}).

\item Give sufficient conditions on $P$ and $B$ so that condition (\ref{eq:onesteppp}) holds for a stopping time POMDP.
\end{compactenum}

\item 

Show that an identical proof to Theorem \ref{thm:pomdpconvex}
implies that the stopping sets $\mathcal{S}_n$, $n=1,2,\ldots$ are convex for a finite horizon problem.


\item {\bf Choosing a single sample from a HMM.} \index{multiple stopping problem}
Suppose a Markov chain $x_k$ is observed in noise sequentially over time as $y_k \sim B_{x_k,y}$, $k=1,2,\ldots,N$.
 Over a horizon of length $N$, I need to choose a single observation $y_k$ to maximize
$\E\{y_k\}$, $k \in \{1,\ldots,N\}$.
If at time $k$ I decide to choose observation $y_k$, then I get reward 
$\E\{y_k\}$ and the problem stops.
If I decide not to choose observation $y_k$, then I can use it to update my estimate of the state and proceed to the next time
instant. However, I am not allowed to choose $y_k$ at a later time.

\begin{compactenum}
\item Which single observation should I choose?

 Show that Bellman's equation becomes
$$ V_{n+1}(\pi) = \max_{u\in \{1,2\} } \{ r^\prime \pi , \sum_y V_n( T(\pi, y)) \sigma(\pi,y) \} $$
where the elements of $r$ are $r(i) = \sum_{y} y B_{iy}$, $i=1,\ldots,X$.
Here $u=1$ denotes choosing an observation, while $u=2$ denotes not choosing an observation.


\item Show using an identical proof to Theorem \ref{thm:pomdpconvex} that the region of the belief space $\mathcal{S}_n = \{\pi: \mu^*(\pi) = 1\}$ is convex.
Moreover if (\ref{A2},\ref{A3}) hold, show that $e_1$ belongs to $\mathcal{S}_n$. 
Also show that $\mathcal{S}_0 \subseteq \mathcal{S}_1 \subseteq \mathcal{S}_2 \subseteq \ldots$.

\item {\bf Optimal channel sensing.}
Another interpretation of the above problem is as follows: The quality $x_k$ of a communication channel is observed in noise. I need to
transmit a packet using this channel. If the channel is in state $x$, I incur a cost $c(x)$ for transmission. Given $N$ slots, when should
I transmit? \index{optimal channel sensing}
\end{compactenum}

\item {\bf Optimal measurement selection for a Hidden Markov Model (Multiple stopping problem)}. \index{multiple stopping problem}
 The following problem generalizes the previous problem. \index{multiple stopping problem}
 I need to choose the best $L$ observations of a Hidden Markov model in a horizon of length $N$,
where $L \leq N$. If I select observation $k$ then I get a reward $\E\{y_k\}$; if I reject the observation then I get no reward.
In either case, I use the observation $y_k$ to update my belief state. (This problem is also called the multiple stopping problem in 
\cite{Nak95}.)
Show that Bellman's dynamic programming recursion reads:
\begin{multline*} V_{n+1}(\pi,l) = \max\{ r^\prime \pi + \sum_y V_{n}(T(\pi,y),l-1) \sigma(\pi,y) , \\ \sum_y V_{n}(T(\pi,y),l) \sigma(\pi,y) \} , \quad n=1,\ldots,N \end{multline*}
with initial condition $V_n(\pi,0) = 0$, $n=0,1,\ldots$ and boundary conditions
$$V_{n}(\pi,n) = r^\prime \pi + \sum_y V_{n-1} ( T(\pi,y), n-1) \sigma(\pi,y) , \quad n=1,\ldots,L. $$
The boundary condition says that if I have only $n$ time points left to make $n$ observations, then I need to make an observation
at each of these $n$ time points. Obtain a structural result for the optimal measurement selection policy. (Notice that the actions
do not affect the evolution of the belief state $\pi$; they only affect $l$, so the problem is simpler than a full blown POMDP.)


\item {\bf Separable POMDPs.} \index{separable POMDPs} 
Recall that the action space is denoted as $\,\mathcal{U} = \{1,2,\ldots,U\}$.
In analogy to \cite[Chapter 7.4]{HS84}, define a POMDP to be separable if
there exists a subset $\bar{\,\mathcal{U}}= \{1,2,\ldots,\bar{U}\}$ of the action space $\,\mathcal{U}$ such that for $u \in \bar{\,\mathcal{U}}$
\begin{compactenum}
\item The cost is additively separable: $c(x,u) = \phi(u) + g(x)$ for some scalars $\phi(u)$ and $g(x)$.
\item The transition matrix $P_{ij}(u)$ depends only on $j$.
That is, the process evolves independently of the previous state.
\end{compactenum}
Assuming that the actions $u \in \bar{\,\mathcal{U}}$ are ordered so that $\phi(1) < \phi(2) < \ldots < \phi(\bar{U})$, clearly it is never
optimal to pick actions $2,\ldots,\bar{U}$. So solving the POMDP involves choosing between actions $
\{1, \bar{U}+1,\ldots, U\}$.
So from Theorem \ref{thm:pomdpconvex}, the set of beliefs where the optimal policy 
$\mu^*(\pi) = 1$ is convex.

Solving for the optimal policy for which the actions $\{\bar{U}+1,\ldots, U\}$ arise is still as complex as solving a standard POMDP. However, the 
bounds proposed in Chapter \ref{chp:myopicul} can be used. 

Consider the special case of the above model where $\bar{\,\mathcal{U}} = \,\mathcal{U} $ and instead of (a), $c(x,u)$ are arbitrary costs. Then
show that the optimal policy is a linear threshold policy.
\end{compactenum}

\section{Case Study: Bayesian Nash equilibrium of one-shot global game for coordinated sensing} \index{coordinated sensing}
\index{Bayesian global game}
\index{Bayesian Nash equilibrium (BNE)}
 \index{game theory! global game}
This section gives a short description of Bayesian global games.
The ideas involve MLR dominance of posterior distributions and supermodularity, and serve as a useful illustration of the 
structural results developed in the chapter.

We start with some perspective:
 Recall that in classical Bayesian social learning, agents act sequentially in time. The global games model, which has been studied
in economics during the last two decades, considers multiple agents that act simultaneously by predicting the behavior
of other agents. 
The theory of global games was first introduced in \cite{CD93} as a tool for refining equilibria in economic game theory;
see \cite{MS00} for an excellent exposition.
Global games
represent a useful method for decentralized coordination amongst agents; they 
have been used to model speculative currency attacks and regime change in social systems, see
\cite{MS00,KLM07,AHP07}. Applications in sensor networks and cognitive radio appear in \cite{Kri08,Kri09}.

\subsection{Global Game Model}
Consider a continuum of agents in which each agent $i$ obtains noisy measurements $Y^{(i)}$
of an underlying state of nature $X$. Here 
$$ Y^{(i)} = X + W^{(i)} , \quad X \sim \pi,\;
W^{(i)} \sim p_{W}(\cdot) $$

Assume all agents have the same noise distribution $p_W$.
 Based on its observation $y^{(i)}$, each agent 
takes an action $u^i \in \{1, 2\}$ to optimize its expected reward \begin{equation} \label{eq:utilityi}
 R(X,\alpha,u=2) = X + 
 f(\alpha), \quad
 R(X,u=1) = 0 
 \end{equation}
Here $\alpha\in [0,1]$ denotes the fraction of agents that choose action 2 and $f(\alpha)$ is a user specified function. We will call
$f$ the congestion function for reasons explained below.

As an illustrative
example, suppose $x$ (state of nature) denotes the quality of a social group and $y^{(i)}$ denotes the measurement of this quality by agent $i$. The action $u^i = 1$ means that agent $i$ decides not to join the social group, while $u^i = 2$ means that agent $i$ joins the group.
The utility function $R(u^i=2,\alpha)$ for joining the social group depends on $\alpha$, where $\alpha$ is the fraction of people who decide to join the group.
\nIf $\\alpha \\approx 1$, i.e., too many people join the group, then the utility to each agent is small since the group is too congested and agents do not receive sufficient individual service.\nOn the other hand, if $\\alpha \\approx 0$, i.e., too few people join the group, then the utility is also small since there is not enough social interaction.\nIn this case the congestion function $f(\\alpha)$ would be chosen as a quasi-concave function of $\\alpha$ (that increases with $\\alpha$ up to a certain value of $\\alpha$ and then decreases with $\\alpha$).\n\nSince each agent is rational, it uses its observation $y^{(i)}$ to predict $\\alpha$, i.e., the fraction of other agents \nthat choose action 2. The main question is: {\\em What is the optimal strategy for each agent $i$ to maximize its expected reward?}\n\n\\subsection{Bayesian Nash Equilibrium}\nLet us now formulate this problem:\nEach agent\nchooses its action $u \\in \\{1 , 2 \\}$ based on a (possibly randomized) strategy $\\mu^{(i)}$ that maps the current\nobservation $Y^{(i)}$ to the action $u$. In a global game we are interested\nin {\\em symmetric strategies}, i.e., where all choose the same\nstrategy denoted as $\\mu$. \nThat is, each agent $i$ deploys the strategy $$ \\mu:Y^{(i)} \\rightarrow\\{1 ,2 \\} . $$\n(Of course, the action $\\mu(Y^{(i)})$ picked by individual agents $i$ \ndepend on their random observation $Y^{(i)}$. So the actions picked\nare not necessarily identical even though the strategies are identical).\n\n\nLet $\\alpha(x)$ denote the fraction of agents that\nselect action $u=2$ (go) given the quality of music $X = x$.\nSince we are considering an infinite number of agents that behave independently, \n$\\alpha(x)$ is also (with probability 1) the \nconditional probability that an agent receives signal $Y^{(i)}$\nand decides to pick $u=2$, given\n$X$. So\n\\begin{equation} \\label{eq:alfagdef}\n\\alpha(x) = P(\\mu(Y) =2 | X=x) .\n\\end{equation}\n\nWe can now define the Bayesian Nash equilibrium (BNE) of the global game.\nFor each agent $i$ \ngiven its observation $Y^{(i)}$, the goal is to choose a strategy to optimize its local reward. That is, \nagent $i$ seeks to compute strategy $\\mu^{(i),*}$ such that\n\\begin{equation} \\label{eq:localpolicy}\n\\mu^{(i),*}(Y^{(i)})\\in \\{1 \\text{ (stay) } ,2 \\text{ (go) }\\} \\text{ maximizes }\n\\E [R(X,\\alpha(X),\\mu^{(i)}(Y^{(i)}) )|Y^{(i)} ].\n \\end{equation}\nHere $ R(X,\\alpha(X),u )$ is defined as in (\\ref{eq:utilityi}) with $\\alpha(X)$\ndefined in (\\ref{eq:alfagdef}). \n\nIf such\na strategy $\\mu^{(i),*}$ in (\\ref{eq:localpolicy})\n exists and is the same for all agents $i$,\n then they constitute a {\\em symmetric} BNE for the global game.\nWe will use the notation $\\mu^*(Y)$ to denote this symmetric BNE.\n\n\n\\noindent {\\em Remark}: Since we are dealing with an incomplete information game, players\nuse randomized strategies. If a BNE exists, then a pure (non-randomized) version exists straightforwardly\n(see Proposition 8E.1, pp.225 in \\cite{MWG95}). 
Indeed, with $y^{(i)}$ denoting the realization
of the random variable $Y^{(i)}$, 
$$ \E[R\big(X,\alpha(X),\mu(Y^{(i)}) \big)| Y^{(i)} = y^{(i)}] = \sum_{u=1}^2 \E[ R(X,\alpha(X),u)|Y^{(i)}= y^{(i)}] P(u|Y^{(i)}= y^{(i)}) .$$
Since a linear combination is maximized at its extreme values,
 the optimal (BNE) strategy is to choose $P(u^*|Y^{(i)} = y^{(i)}) = 1$ where 
\begin{equation}\label{eq:bang} u^* = \mu^*(y^{(i)}) = \operatornamewithlimits{argmax}_{u \in \{1,2\}} \E[R(X,\alpha(X),u)|Y^{(i)}= y^{(i)}] .\end{equation}

For notational convenience denote
$$ R(y,u) = \E[R(X,\alpha(X),u)|Y^{(i)}= y] $$

\subsection{Main Result. Monotone BNE} \index{monotone Bayesian Nash equilibrium}
With the above description, we will now give sufficient conditions for the BNE $\mu^*(y)$ to be monotone increasing in $y$ (denoted
$\mu^*(y) \uparrow y$).
This implies that the BNE is a threshold policy of the form:
$$ \mu^*(y) = \begin{cases} 1 & y \leq y^* \\
					2 & y > y^* \end{cases} $$
Before proving this monotone structure, first note that $\mu^*(y) \uparrow y$ implies that $\alpha(x)$ in (\ref{eq:alfagdef}) becomes
$$ \alpha(x) = P( y > y^* | X = x) = P(x + w > y^*) = P( w > y^* - x) = 1- F_W(y^*-x)$$

Clearly from (\ref{eq:bang}), a sufficient condition for $\mu^*(y) \uparrow y$ is that
$$R(y,u ) = \int R(x, \alpha(x) ,u ) \,p( x | y) dx $$
is supermodular in $(y,u)$, that is, 
$$ R(y,u+1) - R(y,u) \uparrow y.$$
Since $R(X,u=1) = 0$, it follows that $R(y,1) = 0$. So it suffices that $R(y,2) \uparrow y $.

\begin{compactenum}
\item What are sufficient conditions on the noise pdf $p_W(\cdot)$ and the congestion function $f(\cdot)$ in
(\ref{eq:utilityi}) so that $R(y,2) \uparrow y$ and hence the BNE $\mu^*(y) \uparrow y$?

Clearly sufficient conditions for $R(y,2) \uparrow y$ are:
\begin{compactenum}
\item $p(x|y)$ is MLR increasing in $y$,
\item $R(x,\alpha(x),2)$ is increasing in $x$.
\end{compactenum}

But we know that $p(x|y)$ is MLR increasing in $y$ if the noise distribution is such that $p_W(y - x)$ is TP2 in $(x,y)$.

Also $R(x,\alpha(x),2)$ is increasing in $x$ if its derivative with respect to $x$ is positive. That is,
$$ \frac{d}{dx} R(x,\alpha(x),2) = 1 + \frac{df}{d\alpha} \frac{d \alpha} {dx} = 1 + \frac{df}{d\alpha} p_W(y^*-x) > 0 $$

To summarize: The BNE $\mu^*(y) \uparrow y$ if the following two conditions hold:
\begin{compactenum}
\item $p(y|x) = p_W(y-x)$ is TP2 in $(x,y)$
\item $$ \frac{df}{d\alpha} > -\frac{1}{p_W(y^*-x) } $$
\end{compactenum}
Note that a sufficient condition for the second condition is that 
$$ \frac{df}{d\alpha} > -\frac{1}{\max_w p_W(w) } $$

\item Suppose $W$ is uniformly distributed on $[-1,1]$. Then using the above conditions show that a sufficient condition on the congestion function $f(\alpha)$ 
for the BNE to be monotone is that $df/d\alpha > - 2$.


\item Suppose $W$ is zero mean Gaussian noise with variance $\sigma^2$. Then using the above conditions show that a sufficient condition on the congestion function $f(\alpha)$ 
for the BNE to be monotone is that $df/d\alpha > - \sqrt{2 \pi} \sigma$.

 
\subsection{One-shot HMM Global Game} \index{HMM global game}
Suppose that $X_0 \sim \pi_0$, and given $X_0$, $X_1$ is obtained by simulating from transition matrix $P$. The observation for agent
$i$ is obtained as the HMM observation
$$ Y^{(i)} = X_1 + W^{(i)}, \quad W^{(i)} \sim p_W(\cdot).
$$ \nIn analogy to the above\nderivation, characterize the BNE of the resulting one-shot HMM global game. (This will require assuming that $P$ is TP2.)\n\n\\end{compactenum}\n\n\n\n\n\\chapter{Stopping Time POMDPs for Quickest Change Detection} \n\n\\begin{compactenum}\n\n\\item For classical detection theory, a ``classic\" book is the multi-volume \\cite{Van68}.\n\\item As mentioned in the book, \nthere are two approaches to quickest change detection: Bayesian and minimax.\nChapter \\ref{chp:stopapply} of the book deals with Bayesian quickest detection which assumes that the change point distribution is known (e.g. phase distribution). The focus of Chapter \\ref{chp:stopapply} was to determine the structure of the optimal policy of the \nBayesian detector by showing that the problem is a special case of a stopping time POMDP.\n\\cite{VB13} uses nonlinear renewal theory to analyze the performance of the optimal Bayesian detector.\n\nThe minimax formulation for quickest detection assumes that the change point is either deterministic or has an unknown distribution.\nFor an excellent starting point on performance analysis of change detectors with minimax formulations please see \\cite{TM10} and \\cite{PH08}.\nThe papers \\cite{Lor71,Mou86} gives a lucid description of the analysis of change detection in this framework.\n\n\\item {\\bf Shiryaev Detection Statistic.} \nIn the classical Bayesian formulation of quickest detection described in \\S \\ref{sec:convexstop},\na two state Markov chain is considered to model geometric distributed change times. Recall (\\ref{eq:tpqdp}), namely,\n\\begin{equation} P = \\begin{bmatrix} 1 & 0 \\\\ 1- P_{22} & P_{22} \\end{bmatrix} , \\; \\pi_0 = \\begin{bmatrix} 0 \\\\ 1 \\end{bmatrix} , \\quad\n\\tau^0 = \\inf\\{ k: x_k = 1\\}. \\end{equation}\nwhere $1-P_{22}$ is the parameter of the geometric prior.\n\nIn classical detection theory, the belief state $\\pi_k$ is written in terms of the {\\em Shiryaev detection\nstatistic} $r_k$ which is defined as follows: \\index{Shiryaev detection statistic}\n\\begin{equation}\n r_k \\stackrel{\\text{defn}}{=} \\frac{1}{1-P_{22}}\\times \\frac{\\pi_k(2) }{1 - \\pi_k(2)} \\end{equation}\nClearly $r_k$ is an increasing function of $\\pi_k(2)$ and so all the monotonicity results in the chapter continue to hold. \nIn particular Corollary \\ref{cor:qdclassical} in the book holds for $r_k$ implying a threshold policy in terms of $r_k$.\n\nIn terms of the Shiryaev \nstatistic $r_k$, it is straightforward to write\nthe belief state update (HMM filter for 2 state Markov chain) as a function of the likelihood ratio as follows:\n\\begin{empheq}[box=\\fbox]{equation}\nr_k = \\frac{1}{1-p} \\left( r_{k-1} + 1 \\right) L(y_k) \\label{eq:shirstat}\n\\end{empheq}\nwhere\n$$ p = 1 - P_{22}, \\quad L(y_k ) = \\frac{B_{2y_k}}{ B_{1y_k}} \\text{ (likelihood ratio) }$$\nIn (\\ref{eq:shirstat}) by choosing $p \\rightarrow 0$, the Shiryaev detection statistic converges to the so called {\\em Shiryaev-Roberts detection statistic}.\nNote that as $p \\rightarrow 0$, the Markov chain becomes a slow Markov chain. We have analyzed in detail how to track the state of such a slow\nMarkov chain via a stochastic approximation algorithm in Chapter \\ref{chp:markovtracksa} of the book. 
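For readers who wish to experiment, here is a minimal Python sketch that simulates a geometrically distributed change point and runs the Shiryaev recursion (\ref{eq:shirstat}). The observation pmfs, the value of $p$ and the alarm threshold are illustrative assumptions, and the likelihood ratio is taken as the post-change over the pre-change observation probability.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: geometric prior parameter p, pre/post-change
# observation pmfs on a binary alphabet, and an alarm threshold.
p = 0.05
B_pre = np.array([0.7, 0.3])
B_post = np.array([0.3, 0.7])
threshold = 50.0

tau0 = rng.geometric(p)       # change point
y = [rng.choice(2, p=(B_post if k >= tau0 else B_pre)) for k in range(1, tau0 + 201)]

# Shiryaev recursion: r_k = (r_{k-1} + 1) L(y_k) / (1 - p)
r, alarm = 0.0, None
for k, yk in enumerate(y, start=1):
    r = (r + 1.0) * (B_post[yk] / B_pre[yk]) / (1.0 - p)
    if alarm is None and r >= threshold:
        alarm = k
print("change point:", tau0, " alarm raised at:", alarm)
\end{verbatim}
Replacing the factor $1/(1-p)$ by one in the recursion gives the Shiryaev-Roberts statistic discussed next.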
\index{Shiryaev-Roberts detection statistic}

The Shiryaev-Roberts detector for change detection reads:
\begin{compactenum}
\item Update the Shiryaev-Roberts statistic 
$$ r_k = \left( r_{k-1} + 1 \right) L(y_k) $$
\item If $r_k \geq r^*$ then stop and declare a change.
Here $r^*$ is a suitably chosen detection threshold.


\end{compactenum}
Please see \cite{PT12} for a nice survey description of minimax change detection and also the sense in which the above Shiryaev-Roberts detector
is optimal.


\item {\bf Classical Bayesian sequential detection.} 
\index{classical sequential detection}
 This problem shows that classical Bayesian sequential detection is a trivial case of the results
developed in Chapter \ref{ch:pomdpstop}.



Consider a random variable $x \in \{1,2\}$.
So the transition matrix is $P = I$. Given noisy observations $y_k \sim B_{x y}$, the aim is to decide if the underlying state is either 1 or 2. Taking stop action 1 declares that the state is 1 and stops. Taking stop action 2 declares that the state is 2 and stops. 
Taking action 3 at time $k$ simply takes another measurement $y_{k+1}$.
The misclassification costs are: $$c(x=2,u=1) = c(x=1,u=2) = L.$$ 
The cost of taking an additional measurement is $c(x,u=3) = C$. What is the optimal policy $\mu^*(\pi)$?

Since $P = I$, show that the dynamic programming equation reads
\begin{align*}
 V(\pi) &= \min\{ \pi_2 L, \;\pi_1 L, \; C + \sum_y V(T(\pi,y)) \sigma(\pi,y) \} \\
 T(\pi,y) &= \frac{B_y \pi}{\mathbf{1}^\prime B_y \pi} , \quad \sigma(\pi,y) = \mathbf{1}^\prime B_y \pi ,
 \end{align*}
where $\pi = [\pi(1), \pi(2)]^\prime$ is the belief state. Note that $y \in \mathcal{Y}$ where $\mathcal{Y}$ can be finite or
 a continuum (in which case $\sum$ denotes integration over $\mathcal{Y}$).

From Theorem \ref{thm:pomdpconvex} we immediately know that the stopping sets $$\mathcal{R}_1 = \{\pi: \mu^*(\pi) = 1\}, \text{ and }
\mathcal{R}_2 = \{\pi: \mu^*(\pi) = 2\}$$ are convex sets. Since the belief state is two dimensional, in terms of the second component $\pi(2)$, $\mathcal{R}_1 $ and 
$\mathcal{R}_2$ are intervals in the unit interval $[0,1]$. Clearly the belief with $\pi(2) = 0$ lies in $\mathcal{R}_1$ and the belief with $\pi(2)=1$ lies in $\mathcal{R}_2$.
Therefore $\mathcal{R}_1 = [0,\pi_1^*]$ and $\mathcal{R}_2 = [\pi_2^*,1]$ for some $\pi_1^* \leq \pi_2^*$. So the continue region is
$[\pi_1^*,\pi_2^*]$. 

Of course, Theorem \ref{thm:pomdpconvex} is much more general since it does not require $X=2$ states, and $x_k$ can evolve
according to a Markov chain with transition matrix $P$ (whereas in the simplistic setting above, $x$ is a random variable).

\item {\bf Stochastic Ordering of Passage Times for Phase-Distribution.} \index{passage time}
In quickest detection, we formulated the 
change point $\tau^0$ to have a 
{\em phase type (PH) distribution}. 
A systematic investigation of the statistical properties of PH-distributions can be found in \cite{Neu89}.
The family of all PH-distributions forms a dense subset of the set of all distributions
\cite{Neu89}, i.e., for any given distribution function $F$ such that $F(0) = 0$, one can find a sequence of PH-distributions
$\{F_n , n \geq 1\}$ that approximates $F$ uniformly over $[0, \infty)$.
Thus PH-distributions can be used to approximate change points with an arbitrary distribution.
This is done by\n\tconstructing a multi-state Markov chain as follows:\nAssume state `1' (corresponding to belief $e_1$) is an absorbing state\nand denotes the state after the jump change. The states $2,\\ldots,X$ (corresponding to beliefs $e_2,\\ldots,e_X$) can be viewed as a single composite state that $x$ resides in before the jump. \nTo avoid trivialities, assume that the change occurs after at least one measurement. So the initial distribution $\\pi_0$ satisfies $\\pi_0(1) = 0$.\nThe \ntransition probability matrix is of the form\n\\begin{equation} \\label{eq:phmatrix}\nP = \\begin{bmatrix} 1 & 0 \\\\ \\underline{P}_{(X-1)\\times 1} & \\bar{P}_{(X-1)\\times (X-1)} \\end{bmatrix}.\n\\end{equation}\nThe {\\em first passage time} $\\tau^0$ to state 1 denotes the time at which $x_k$ enters the absorbing state 1:\n \\begin{equation} \\tau^0 = \\min\\{k: x_k = 1\\}. \\label{eq:tau} \\end{equation}\n As described in \\S \\ref{sec:qdphform} of the book,\nthe distribution of $\\tau^0$ is determined by choosing the transition probabilities $\\underline{P}, \\bar{P}$ in \n (\\ref{eq:phmatrix}). \n The \ndistribution of the absorption time to state 1 is denoted by $$\\nu_k = \\mathbb{P}(\\tau^0 = k)$$ and given by\n\\begin{equation} \\label{eq:nu}\n \\nu_0 = \\pi_0(1), \\quad \\nu_k = \\bar{\\pi}_0^\\prime \\bar{P}^{k-1} \\underline{P}, \\quad k\\geq 1, \\end{equation}\n where $\\bar{\\pi}_0 = [\\pi_0(2),\\ldots,\\pi_0(X)]^\\prime$.\n\n\n{\\bf Definition. Increasing Hazard Rate}: A pmf $p$ is said to be increasing hazard rate (IHR) if \\index{increasing hazard rate}\n$$ \\frac{\\bar{F}_{i+1}}{{\\bar{F}_{i}}} \\downarrow i, \\quad \\text{ where } \\bar{F}_i = \\sum_{j=i}^\\infty p_j$$\n\n\n{\\bf Aim}. Show that if the transition matrix $P$ in (\\ref{eq:phmatrix}) is TP2 and initial condition $\\pi_0 = e_X$, then the passage time distribution $\\nu_k$ \nin (\\ref{eq:nu})\nsatisfies the increasing hazard rate (IHR)\nproperty; see \\cite{Sha88} for a detailed proof.\n\n\\item {\\bf Order book high frequency trading and social learning.} \\index{order book high frequency trading}\nAgent based models for high frequency trading with an order book have been studied a lot recently \\cite{AS08}. Agents trade (buy or sell) stocks by exploiting information about the decisions of previous agents (social learning) via an order book in addition to a private (noisy) signal they receive on the value of the stock. We are interested in the following: (1) Modeling the dynamics of these risk averse agents, (2) Sequential detection of a market shock based on the behavior of these agents.\n\nThe agents perform social learning according to the protocol in \\S \\ref{sec:herdaa} of the book. A market maker needs to decide based on the actions of the agents if there\nis a sudden change (shock) in the underlying value of an asset. Assume that the shock occurs with a phase distributed change time.\nThe individual agents perform social learning with a CVaR social learning filter as in \\S \\ref{sec:classicalsocial} of the book. The market maker aims\nto determine the shock as soon as possible.\n\nFormulate this decision problem as a quickest detection problem. Simulate the value function and optimal policy. Compare it with the \nmarket maker's optimal policy\nobtained when the agents perform risk\nneutral social learning. 
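As a building block for such a simulation, the following Python sketch evaluates the pmf $\nu_k$ of (\ref{eq:nu}) for a phase-distributed change time specified by a transition matrix of the form (\ref{eq:phmatrix}), draws the change time by Monte Carlo, and checks the hazard rate ratio appearing in the passage time exercise above. The numerical values of $\underline{P}$, $\bar{P}$ and $\bar{\pi}_0$ are illustrative assumptions (chosen so that $P$ is TP2).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative PH-distribution with X = 3: state 1 absorbing, states 2,3 transient.
P_under = np.array([0.3, 0.05])               # \underline{P}: absorption probabilities
P_bar = np.array([[0.6, 0.1],                 # \bar{P}: transitions among states 2,...,X
                  [0.15, 0.8]])
pi_bar0 = np.array([0.0, 1.0])                # \bar{\pi}_0, i.e. pi_0 = e_X

# pmf of the change time: nu_k = \bar{\pi}_0' \bar{P}^{k-1} \underline{P}, k >= 1
K = 100
nu = np.array([pi_bar0 @ np.linalg.matrix_power(P_bar, k - 1) @ P_under
               for k in range(1, K + 1)])

def draw_tau0():
    """Monte Carlo draw of tau0 by simulating the chain until absorption in state 1."""
    full_P = np.block([[1.0, np.zeros(2)], [P_under[:, None], P_bar]])
    state, k = 2, 0                           # start in state X (index 2)
    while state != 0:
        state = rng.choice(3, p=full_P[state])
        k += 1
    return k

# Hazard rate check: the ratios \bar{F}_{k+1}/\bar{F}_k should be (weakly) decreasing (IHR).
F_bar = nu[::-1].cumsum()[::-1]
print(draw_tau0(), np.all(np.diff(F_bar[1:] / F_bar[:-1]) <= 1e-12))
\end{verbatim}
With the change time mechanism in place, the social learning agents and the market maker's stopping problem can be layered on top.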
See \\cite{KB16} for details.\n\\end{compactenum}\n\n\\chapter{Myopic Policy Bounds for POMDPs and Sensitivity} \n\n\\begin{compactenum}\n\n\\item To obtain upper and lower bounds to the optimal policy, the key idea was to change the cost vector but still preserve the optimal policy. \\cite{KP15} gives a complete description of this idea.\nWhat if a nonlinear cost was subtracted from the costs thereby still keeping the optimal policy the same. Does that allow for larger regions of the belief space\nwhere the upper and lower bounds coincide?\nIs it possible to construct different transition matrices that yield the same optimal policy? \n\n\n\n\\item{\\bf First order dominance of Markov chain sample paths.} In \\S \\ref{sec:tpcp} of the book we defined the importance concept of copositive dominance\nto say that if two transition matrices $P_1$ and $P_2$ satisfy $P_1 \\preceq P_2$ (see Definition \\ref{def:lR}), then the one step ahead predicted belief satisfies the MLR dominance property\n$$ P_1^\\prime \\pi \\leq_r P_2 ^\\prime \\pi. $$ \nIf we only want first order stochastic dominance, then the following condition suffices: \nLet $U$ denote the $X \\times X $ dimensional triangular matrix with elements \n$U_{ij} = 0, i > j$ and $U_{ij} = 1, i \\leq j$.\n\\begin{compactenum}\n\\item \nShow the following result:\n$$ P_1 U \\geq P_2 U \\implies P_1^\\prime \\pi_1 \\geq_s P_2^\\prime \\pi_2\\; \\text{ if } \\pi_1 \\geq_s \\pi_2. $$\n\\item\n Consider the following special case of a POMDP. Suppose the prior belief $\\pi_0 \\in \\Pi(\\statedim) $ is known. From time 1 onwards,\nthe state is fully observed. How can the structural results in this chapter be used to characterize the optimal policy?\n\\end{compactenum}\n\n\\item In \\cite{Lov87} it is assumed that one can construct a POMDP with observation matrices $B(1),B(2)$ such that \n(i) $T(\\pi,y,2) \\geq_r T(\\pi,y,1)$ for each $y$ and (ii) $\\sigma(\\pi,2) \\geq_s \\sigma(\\pi,1) $. Prove that it is impossible\nto construct an example that satisfies (i) and (ii) apart from the trivial case where $B(1) = B(2)$. Therefore Theorem \\ref{th:theorem1udit} does not apply\nwhen the transition probabilities are the same and only the observation probabilities are action dependent. For such cases, Blackwell dominance is used.\n\n\n\\item {\\bf Extensions of Blackwell dominance idea in POMDPs to more general cases.}\nBlackwell dominance \\index{Blackwell dominance} was used in \\S \\ref{sec:blackwelldom} of the book to construct myopic policies that bound the optimal policy of a POMDP.\nBut Blackwell dominance\nis quite finicky. In Theorem \\ref{thm:compare2} we assumed that the POMDP has dependency structure $x \\rightarrow y^{(2)} \\rightarrow y^{(1)}$. 
That is, the observation distributions are\n$B(2) = p(y^{(2)}|x)$ and $B(1) = p(y^{(1)}|y^{(2)})$.\n\n\\begin{compactenum}\n\\item \nRecall the proof of Theorem \\ref{thm:compare2} which is written element wise below for maximum clarity:\n\\[\n\\begin{split}\n T_j(\\pi,y^{(1)},1) = \\frac{\\sum_{y^{(2)}} \\sum_i \\pi(i) P_{ij} p(y^{(2)}|j,2) p(y^{(1)} | y^{(2)})}\n{\\sum_m \\sum_{y^{(2)}} \\sum_i \\pi(i) P_{im} p(y^{(2)}|m,2)p(y^{(1)} | y^{(2)}) } \\\\\n= \n \\frac{\\sum_{y^{(2)}} \\dfrac{ \\sum_i \\pi(i) P_{ij} p(y^{(2)}|j,2) }\n{ \\cancel{ \\sum_i \\sum_m \\pi(i) P_{im} p(y^{(2)}|m,2) }} \\cancel{ \\sum_i \\sum_m \\pi(i) P_{im} p(y^{(2)}|m,u)} \\,\n p(y^{(1)} | y^{(2)})}\n{\\sum_m \\sum_{y^{(2)}} \\sum_i \\pi(i) P_{im} p(y^{(2)}|m,2) p(y^{(1)} | y^{(2)}) } \\\\ =\n \\frac{\\sum_{y^{(2)}} T_j(\\pi,y^{(2)},2) \\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)})}{ \\sum_{y^{(2)}} \\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)})}\n\\end{split}\n\\]\n\nThen clearly $ \\frac{ \\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)})}{ \\sum_{y^{(2)}} \\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)})}$\nis a probability measure w.r.t $y^{(2)}$.\n\n\\item Consider now the more general POMDP where $p(y^{(1)}| y^{(2)},x)$ depends on the state $x$. (In Theorem \\ref{thm:compare2}\nthis was functionally independent of $x$.)\nThen \n$$ T_j(\\pi,y^{(1)},1) = \\frac{\\sum_{y^{(2)}} T_j(\\pi,y^{(2)},2) \\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)},j)}{ \\sum_{y^{(2)}} \\sum_m\\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)},m)} $$\nNow \n$$ \\frac{ \\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)},j)}{ \\sum_{y^{(2)}} \\sum_m\\sigma(\\pi,y^{(2)},2) p(y^{(1)}| y^{(2)},m)} $$ is no longer a probability measure w.r.t. $y^{(2)}$.\nThe proof of Theorem \\ref{thm:compare2} no longer holds.\n\n\\item Next consider the case where the observation distribution is $p(y^{(2)}_k | x_k, x_{k-1})$ and $p(y^{(1)} | y^{(2)})$. Then the proof\nof Theorem \\ref{thm:compare2} continues to hold.\n\n\\end{compactenum}\n\n\n\\item {\\bf Blackwell dominance implies higher channel capacity.} Show that if $B(1)$ Blackwell dominates $ B(2) $, i.e., $B(2) = B(1) Q$ for some stochastic matrix $Q$, then the capacity of a channel\nwith likelihood probabilities given by $B(1)$ is higher than that with likelihood probabilities $B(2)$. \n\n\\item \nRecall that the structural result involving Blackwell dominance deals with action dependent observation probabilities but assumes identical transition matrices for the various actions.\nShow that copositive dominance and Blackwell dominance can be combined to deal with a POMDP with action dependent transition and\nobservation probabilities of the form:\n\\\\ Action $u = 1$: $P^2, B$.\n\\\\ Action $u=2$: $P, B^2$.\n\\\\\nGive numerical examples of POMDPs with the above structure.\n\\end{compactenum}\n\n\n\\newpage\n\\chapter{Part IV. Stochastic Approximation and Reinforcement Learning}\n\nHere we present three case studies of stochastic approximation algorithms.\nThe first case study deals with online HMM parameter estimation and extends the method described in Chapter \\ref{chp:markovtracksa}.\nThe second case study deals with reinforcement learning of equilibria in repeated games. The third case study deals with discrete stochastic optimization\n(recall \\S \\ref{sec:dopt} gave two algorithms)\nand provides a simple example of such an algorithm. \n\n\n\n\\section{Case Study. 
Online HMM parameter estimation} \index{online HMM estimation} 
Recall from Chapter \ref{chp:markovtracksa} that estimating the parameters of an HMM in real time is motivated by adaptive control of a POMDP.
The parameter estimation algorithm can be used to estimate the parameters of the POMDP
for a fixed policy; then the policy can be updated using dynamic programming (or an approximation) based on the parameters, and so on.


This case study outlines several algorithms for recursive estimation of HMM parameters. The reader should implement these 
algorithms in Matlab to get a good feel for how they work.

Consider the loss function for $N$ data points of an HMM or Gaussian state space model:
\begin{equation} J_N(\th) = \E\{ \sum_{k=1}^N c_\th(x_k,y_k,\pi_k^\theta) \} \label{eq:lossfunction} \end{equation}
where $x_k$ denotes the state, $y_k$ denotes the observation, $\pi_k^\th$ denotes the belief state, and $\th$ denotes the model variable.


The aim is to determine the model $\th$ that minimizes this loss function.

An offline gradient algorithm operates iteratively to minimize this loss as follows:
\begin{equation} \th^{(l+1)} = \th^{(l)} - \epsilon \nabla_\th J_N(\th)\big\vert_{\th = \th^{(l)}} \label{eq:offlinegrad} \end{equation}
The notation $ \vert_{\th = \th^{(l)}}$ above means that the derivatives are evaluated at $\th = \th^{(l)}$.

An offline Newton type algorithm operates iteratively as follows:
\begin{equation} \th^{(l+1)} = \th^{(l)} - \big[ \nabla_\th^2 J_N(\th) \big]^{-1} \nabla_\th J_N(\th)\big\vert_{\th = \th^{(l)}} 
\label{eq:offlinenewton}
\end{equation}

\subsection{Recursive Gradient and Gauss-Newton Algorithms}
A 
recursive online version of the above gradient algorithm (\ref{eq:offlinegrad}) is 
\begin{empheq}[box=\fbox]{equation} \begin{split}
 & \th_{k} = \th_{k-1} - \epsilon \, \nabla_\th c_\th(x_k,y_k,\pi^\th_k)\big\vert_{\th = \th_{k-1}} \, 
 \\ & \pi_k^{\th_{k-1}} = T( \pi_{k-1}^{\th_{k-1}}, y_k; \th_{k-1}) 
 \end{split} \label{eq:onlinegrad}
\end{empheq}
where $ T( \pi_{k-1}^{\th_{k-1}}, y_k; \th_{k-1})$ is the optimal filtering recursion at time $k$ using prior $\pi_{k-1}^{\th_{k-1}}$,
model $\th_{k-1}$ and observation $y_k$. The notation $ \vert_{\th = \th_{k-1}}$ above means that the derivatives are evaluated at $\th = \th_{k-1}$.
Finally, $\epsilon$ is a small positive step size.

The recursive Gauss-Newton algorithm is an online implementation of (\ref{eq:offlinenewton}) and reads
\begin{empheq}[box=\fbox]{equation} \begin{split}
 & \th_{k} = \th_{k-1} - \mathcal{I}_k^{-1} \, \nabla_\th c_\th(x_k,y_k,\pi_k^{\th})\big\vert_{\th = \th_{k-1}} \\
& \mathcal{I}_k = \mathcal{I}_{k-1} + \epsilon \nabla^2 c_\th(x_k,y_k,\pi_k^{\th})\big\vert_{\th = \th_{k-1}} \\
 & \pi_k^{\th_{k-1}} = T( \pi_{k-1}^{\th_{k-1}}, y_k; \th_{k-1}) 
 \end{split} \label{eq:recgaussnewton}
\end{empheq}
Note that the above recursive Gauss-Newton algorithm is a stochastic approximation algorithm with a matrix step size $\mathcal{I}_k$.

\subsection{Justification of (\ref{eq:onlinegrad})}
Before proceeding with examples, we give a 
 heuristic derivation of (\ref{eq:onlinegrad}). Write (\ref{eq:offlinegrad}) as 
$$ \th_{k}^{(k)} = \th_k^{(k-1)} - \epsilon \nabla_\th J_N(\th)\big\vert_{\th = \th_k^{(k-1)}} $$
Here the subscript $k$ denotes the estimate based on observations $y_{1:k}$. 
The superscript $(k)$ denotes the iteration of the offline optimization
algorithm.

Suppose that at each iteration $k$ we collect one more observation. Then the above algorithm becomes
\begin{equation} \th_{k}^{(k)} = \th_{k-1}^{(k-1)} - \epsilon \nabla_\th J_k(\th)\big\vert_{\th = \th_{k-1}^{(k-1)}} \label{eq:onoff}\end{equation}
Introduce the convenient notation
$$ \th_k = \th_{k}^{(k)} . $$
Next we use the following two crucial approximations:
\begin{compactitem}
\item First, make the inductive assumption that $\th_{k-1}$ minimized $J_{k-1}(\th)$ so that 
$$ \nabla_\th J_{k-1}(\th)\big\vert_{\th = \th_{k-1}} = 0 $$ Then from (\ref{eq:lossfunction}) it follows that
\begin{equation} \nabla_\th J_{k}(\th)\big\vert_{\th = \th_{k-1}} = \nabla_\th \E\{ c_\th(x_k,y_k,\pi^\th_k) \}\big\vert_{\th = \th_{k-1}} 
\label{eq:approx1}
\end{equation}
\item Note that evaluating the right hand side of (\ref{eq:approx1}) requires running a filter and its derivatives with respect to $\th$ from time 0 to $k$ for fixed model $\th_{k-1}$. We want a recursive
approximation for this. It is here that the
second approximation is used. We re-evaluate the filtering recursion using
the sequence of available model estimates $\th_t$, $t=1,\ldots, k$, at each time $t$. In other words, we make the approximation
\begin{equation} \pi_k^{\th_{k-1}} = T( \pi_{k-1}^{\th_{k-1}}, y_k; \th_{k-1}) , \quad k = 1,2,\ldots \label{eq:approx2} \end{equation}

\end{compactitem}
To summarize, introducing approximations (\ref{eq:approx1}) and (\ref{eq:approx2}) in (\ref{eq:onoff}) yields the online gradient algorithm (\ref{eq:onlinegrad}). The derivation of the Gauss-Newton algorithm is similar.

\subsection{Examples of online HMM estimation algorithm}
With the algorithms (\ref{eq:onlinegrad}) and (\ref{eq:recgaussnewton}) we can obtain several types of online HMM parameter estimators by choosing different loss functions $J$ in (\ref{eq:lossfunction}). Below we outline three popular choices.

\subsubsection*{1. Recursive EM algorithm\footnote{This name is a misnomer. More accurately, the algorithm below is a stochastic approximation algorithm that seeks to approximate the EM algorithm.}} \index{online HMM estimation! recursive EM} 
Recall from the EM algorithm that the auxiliary likelihood for fixed parameter $\bar{\th}$ is 
$$Q_n(\th,\bar{\th}) = \E\{ \log p(x_{0:n},y_{1:n}| \th) \,|\, y_{1:n}, \bar{\th}\} = \E\{ \sum_{k=1}^n \log p_\th(x_k,y_k| x_{k-1}) | y_{1:n}, \bar{\th} \} $$
With $\th^o$ denoting the true model, $\th$ denoting the model variable, and $\bar{\model}$ denoting a fixed model value, define
$$J_n(\th,\bar{\model}) = \E_{y_{1:n}}\{ Q_n(\th, \bar{\th}) | \th^o\}. $$
To be more specific, 
for an HMM, from (\ref{eq:emqfn}),
in the notation of (\ref{eq:lossfunction}),
\begin{equation} c_\th (y_k,\bar{\model}) = 
\sum_{i=1}^X \pi_{k|n}^{\bar{\th}}(i) \; \log B^\theta_{iy_k}
+ \sum_{i=1}^X \sum_{j=1}^X \pi^{\bar{\model}}_{k|n} (i,j )
\log P^\theta_{ij} . 
\label{eq:receminst}
\end{equation}
where $P^\theta$ denotes the transition matrix and $B^\theta$ is the observation matrix,
and $\bar{\model}$ is a fixed model for which the smoothed posterior $\pi^{\bar{\model}}_{k|n}$ is computed.

Note that $c_\th$ is a reward and not a loss; our aim is to maximize $J_n$.
The idea then is to implement a Gauss-Newton stochastic gradient algorithm for maximizing $J_n(\th,\bar{\model})$ for fixed model $\bar{\model}$,
then update $\bar{\model}$ and so on. This yields the following {\em recursive EM algorithm}:


\begin{compactenum}
\item For $ k = n \Delta +1, \ldots, (n+1) \Delta$ run
\begin{equation} \label{eq:recem} 
\begin{split} \th_{k} &= \th_{k-1} + \mathcal{I}_k^{-1} 
\sum_i \nabla_\model c_\th (y_k, \bar{\model}_{n-1}) \pi_k^{\bar{\model}_{n}}(i) \\
 \mathcal{I}_k & = \mathcal{I}_{k-1} + \epsilon \sum_i \nabla^2 c_\th(i,y_k,\pi_k^{\th})\big\vert_{\th = \th_{k-1}} \, \pi_k^{\bar{\model}_{n}}(i) \\
 
 \pi^{\bar{\model}_{n-1}}_{k} & = T(\pi^{\bar{\model}_{n-1}}_{k-1}, y_k; \th_{k-1}) \quad \text{ (HMM filter update) } 
\end{split}
\end{equation}
Here $\pi_{k|n}^{\bar{\th}}$ and $\pi^{\bar{\model}}_{k|n} (i,j )$ in (\ref{eq:receminst}) are replaced by the filtered estimates
$\pi_{k}^{\bar{\th}}$ and $\pi^{\bar{\model}}_{k-1} (i) P^{\bar{\model}}_{ij} B^{\bar{\model}}_{j y_{k}}$.
\item Then update 
$ \bar{\model}_{n+1} = \theta_{(n+1)\Delta} $, set $n$ to $n+1$ and go to step 1.
\end{compactenum}

 


To ensure that the transition matrix estimate is a valid stochastic matrix, one can parametrize it in terms of spherical coordinates,
see~(\ref{eq:alfadef}).

As an illustrative example, suppose we wish to estimate the $X$-dimensional vector of state levels $g = (g(1),g(2),\ldots,g(X))^\prime $
 of an HMM in zero mean Gaussian noise 
with known variance $\sigma^2$. Assume the transition matrix $P$ is known.
Then $\th = g$ and 
$$ c_\th (y_k,\bar{\model}) 
= - \frac{1}{2 \sigma^2} \sum_{i} \pi_k^{\bar{\model}}(i) \big( y_k - g(i) \big)^2 + \text{ constant }$$



 \subsubsection*{2. Recursive Prediction Error (RPE)} \index{online HMM estimation! recursive prediction error} 
 Suppose $g$ is the vector of state levels of the underlying Markov chain and $P$ the transition matrix. Then the model to estimate
 is $\th = (g,P)$. 
 Offline prediction error methods seek to find the model $\th$ that minimizes the loss function
 $$ J_N(\th) = \E\{ \sum_{k=1}^N (y_k - g^\prime \pi_{k|k-1}) ^2\} $$ 
 So the squared prediction error at each time $k$ is 
\begin{equation} c_\th(x_k,\theta_k,\pi^\theta_k) = \big(y_k - g^\prime P^\prime \pi^\th_{k-1}\big)^2 \label{eq:rpecost} \end{equation}
Note that unlike (\ref{eq:lossfunction}) there is no conditional expectation in the loss function.
 Note the key difference compared to the recursive EM. In the recursive 
EM, $c_\th (x_k,y_k) $ is functionally independent of $\pi^\th$ and hence the recursive EM does not involve derivatives (sensitivity) of the HMM filter. In comparison, the RPE cost (\ref{eq:rpecost}) involves derivatives of $\pi^\th_{k-1}$ with respect to $\th$.
 Then the derivatives with respect to $\th$ can be evaluated as in \S \ref{sec:rmle}.
 
 
 \subsubsection*{3. Recursive Maximum Likelihood} This was discussed in \S \ref{sec:rmle}. 
The cost function is
$$c_\th(x_k,\theta_k,\pi^\theta_k) =\log \left[ \mathbf{1}^\prime B_{y_k}(\th) \pi_{k|k-1}^\th \right]
$$
 \index{online HMM estimation! recursive maximum likelihood} 


Recursive versions of the method of moments estimation algorithm for the HMM parameters are presented in \cite{MKW15}.

\section{Case Study. Reinforcement Learning of Correlated Equilibria} \index{reinforcement learning of correlated equilibria}
This case study illustrates the use of stochastic approximation algorithms for learning the correlated equilibrium in a repeated
game. Recall that in Chapter \ref{chp:markovtracksa} we used the ordinary differential equation analysis of a stochastic approximation algorithm to 
characterize where it converges to. For a game, we will show that the stochastic approximation algorithm converges to
a differential inclusion (rather than a differential equation). Differential inclusions are generalizations of ordinary differential equations (ODEs) and arise naturally in game-theoretic learning, since the strategies according to which others play are unknown. Then by a straightforward
Lyapunov function type proof, we show that the differential inclusion converges to the set of correlated equilibria of the game, implying that the stochastic approximation algorithm also converges to the set of correlated equilibria.

\subsection{Finite Game Model}
Consider a finite action static game\footnote{For notational convenience we assume two players with identical action spaces.} comprising two players $l=1,2$ with
costs
$c_l(\act1,\act2)$ where $\act1, \act2 \in \{1,\ldots,U\}$. Let $p$ and $q$ denote the randomized policies (strategies) of the two players:
$p(i) = \mathbb{P}( \act1 = i) $ and $q(i) = \mathbb{P}(\act2=i)$. So $p,q$ are $U$-dimensional probability vectors that live in the $(U-1)$-dimensional unit simplex $\Pi$. Then the policies
$(p^*,q^*)$ constitute a Nash equilibrium if the following inequalities hold:
\begin{empheq}[box=\fbox]{equation} 
\begin{split}
 \sum_{\act1,\act2} c_1(\act1,\act2)\, p^*(\act1) \, q^*(\act2) \leq \sum_{\act2} c_1(u,\act2) \, q^*(\act2) , \quad u =1,\ldots,U \\
 \sum_{\act1,\act2} c_2(\act1,\act2) \, p^*(\act1) \,q^*(\act2) \leq \sum_{\act1} c_2(\act1,u) \, p^*(\act1) , \quad u =1,\ldots,U.
 \end{split} \label{eq:staticnashone}
 \end{empheq}
Equivalently, $(p^*,q^*)$ constitute a Nash equilibrium if for all policies $p,q \in \Pi$,
\begin{equation} 
\begin{split}
 \sum_{\act1,\act2} c_1(\act1,\act2) \, p^*(\act1) \, q^*(\act2) \leq \sum_{\act1,\act2} c_1(\act1,\act2)\, p(\act1) \, q^*(\act2) \\
 \sum_{\act1,\act2} c_2(\act1,\act2) \, p^*(\act1)\, q^*(\act2) \leq \sum_{\act1,\act2} c_2(\act1,\act2) \, p^*(\act1) \, q(\act2) 
 \end{split} \label{eq:staticnash}
 \end{equation}
The first inequality in (\ref{eq:staticnash}) says that if player 1 cheats and deploys policy $p$ instead of $p^*$, then it is no better off: it incurs a cost at least as high. The second inequality says the same thing for player 2. So in a non-cooperative game, since collusion is not allowed, there is no rational reason for any of the players to unilaterally deviate from the Nash equilibrium
$(p^*,q^*)$.

By a standard application of Kakutani's fixed point theorem, it can be shown that for a finite action game, at least
one Nash equilibrium always exists. 
However, computing it can be difficult since the above constraints are bilinear and therefore nonconvex.\n\n\n\n\\subsection{Correlated Equilibrium} The Nash equilibrium assumes that the player's act independently. The correlated equilibrium is a generalization of the Nash equilibrium. The two players now choose their action from the joint probability distribution\n$\\pi(\\act1,\\act2) $ where $$\\pi(i,j) = \\mathbb{P}(\\act1 = i,\\act2=j).$$ \\index{correlated equilibrium}\nHence the actions of the players are correlated. Then the policy $\\pi^*$ is said to be a correlated equilibrium if\n\\begin{empheq}[box=\\fbox]{equation} \n\\begin{split}\n \\sum_{\\act2} c_1(\\act1,\\act2)\\, \\pi^*(\\act1,\\act2) \\leq \\sum_{\\act2} c_1(u,\\act2)\\, \\pi^*(\\act1,\\act2) \\\\\n \\sum_{\\act1} c_2(\\act1,\\act2) \\pi^*(\\act1,\\act2) \\leq \\sum_{\\act1} c_2(\\act1,u) \\,\\pi^*(\\act1,\\act2) \n \\end{split} \\label{eq:staticcoeq}\n \\end{empheq}\n Define the set of correlated equilibria as\n \\begin{equation} \\label{eq:coreqset}\n \\mathcal{C} = \\bigg\\{\\pi: \\text{(\\ref{eq:staticcoeq}) holds and $\\pi(\\act1,\\act2)\\geq 0, \\sum_{\\act1,\\act2}\n \\pi(\\act1,\\act2)=1$ } \\bigg\\} \\end{equation}\n{\\em Remark}: In the special case where the players act independently, the correlated equilibrium specializes to a Nash equilibrium.\nIndependence implies the joint distribution $\\pi^*(\\act1,\\act2)$ becomes the product of marginals:\nso $\\pi^*(\\act1,\\act2)= p^*(\\act1) q^*(\\act2) $. Then clearly (\\ref{eq:staticcoeq}) reduces to the definition (\\ref{eq:staticnashone})\nof a Nash equilibrium. Note that the set of correlated equilibria specified by (\\ref{eq:coreqset}) is a convex polytope in $\\pi$.\n\n\n\\subsubsection{Why Correlated Equilibria?} \\index{game theory! correlated equilibrium}\n\nJohn F. Nash proved in his famous paper~\\cite{Nas51} that every game with a finite set of players and actions has at least one mixed strategy Nash equilibrium.\nHowever, as asserted by Robert J. Aumann\n\\footnote{Robert J. Aumann was awarded the Nobel Memorial Prize in Economics in 2005 for his work on conflict and cooperation through game-theoretic analysis. He is the first to conduct a full-fledged formal analysis of the so-called infinitely repeated games.}\nin the following extract from~\\cite{Aum87}, ``Nash equilibrium does make sense if one starts by assuming that, for some specified reason, each player knows which strategies the other players are using.'' Evidently, this assumption is rather restrictive and, more importantly, is rarely true in any strategic interactive situation. He adds:\n\\begin{quote} {\\em\n``Far from being inconsistent with the Bayesian view of the world, the notion of equilibrium is an unavoidable consequence of that view. It turns out, though, that the appropriate equilibrium notion is not the ordinary mixed strategy equilibrium of Nash (1951), but the more general notion of correlated equilibrium.''} -- Robert J. Aumann\n\\end{quote}\nThis, indeed, is the very reason why correlated equilibrium~\\cite{Aum87} best suits and is central to the analysis of strategic decision-making. \n\n\nThere is much to be said about correlated equilibrium; see Aumann~\\cite{Aum87} for rationality arguments. Some advantages that make it ever more appealing include:\n\\begin{enumerate}\n \\item \\emph{Realistic:} Correlated equilibrium is realistic in multi-agent learning. 
Indeed, Hart and Mas-Colell observe in~\\cite{HM01b} that for most simple adaptive procedures, ``\\ldots there is a natural coordination device: the common history, observed by all players. It is thus reasonable to expect that, at the end, independence among players will not obtain;''\n\n \\item \\emph{Structural Simplicity:} The correlated equilibria set constitutes a compact convex polyhedron, whereas the Nash equilibria are isolated points at the extrema of this set~\\cite{NCH04}. Indeed from (\\ref{eq:coreqset}), the set of correlated equilibria is a convex polytope in $\\pi$.\n\n \\item \\emph{Computational Simplicity:} Computing correlated equilibrium only requires solving a linear feasibility problem (linear program with null objective function) that can be done in polynomial time, whereas computing Nash equilibrium requires finding fixed points;\n\n \\item \\emph{Payoff Gains:} The coordination among agents in the correlated equilibrium can lead to potentially higher payoffs than if agents take their actions independently (as required by Nash equilibrium)~\\cite{Aum87};\n\n \\item \\emph{Learning:} There is no natural process that is known to converge to a Nash equilibrium in a general non-cooperative game that is not essentially equivalent to exhaustive search. There are, however, natural processes that do converge to correlated equilibria (the so-called law of conservation of coordination~\\cite{HM03}), e.g., \\index{regret-matching procedure} regret-matching~\\cite{HM00}.\n\\end{enumerate}\n\n\n\nExistence of a centralized coordinating device neglects the distributed essence of social networks. Limited information at each agent about the strategies of others further complicates the process of computing correlated equilibria. In fact, even if agents could compute correlated equilibria, they would need a mechanism that facilitates coordinating on the same equilibrium state in the presence of multiple equilibria---each describing, for instance, a stable coordinated behavior of manufacturers on targeting influential nodes in the competitive diffusion process~\\cite{TAM12}. This highlights the significance of adaptive learning algorithms \\index{game theory! adaptive learning} that, through repeated interactive play and simple strategy adjustments by agents, ensure reaching correlated equilibrium. The most well-known of such algorithms, fictitious play, was first introduced in 1951~\\cite{Rob51}, and is extensively treated in~\\cite{FL98}. It, however, requires monitoring the behavior of all other agents that contradicts the information exchange structure in social networks. The focus below is on the more recent regret-matching learning algorithms~\\cite{BHS06,Cah04,HM00,HM01b}. \n\nFigure~\\ref{fig:strategy-sets} illustrates how the various notions of equilibrium are related in terms of the relative size and inclusion in other equilibria sets. As discussed earlier in this subsection, dominant strategies and pure strategy Nash equilibria do not always exist---the game of ``Matching Pennies'' being a simple\nexample. Every finite game, however, has at least one mixed strategy Nash equilibrium. Therefore, the ``nonexistence critique'' does not apply to any notion that generalizes the mixed strategy Nash equilibrium in Figure~\\ref{fig:strategy-sets}. 
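To make the computational-simplicity point above concrete, a correlated equilibrium of a two-player game can be obtained by solving a linear feasibility problem over the joint distribution $\pi$: the constraints are (\ref{eq:staticcoeq}) written out for every recommended action and every possible deviation, together with non-negativity and normalization. The sketch below (Python with \texttt{scipy.optimize.linprog}; the helper name and the $2\times 2$ cost matrices are illustrative assumptions, not from the text) sets this up with a null objective.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(C1, C2):
    # C1, C2: U x U cost matrices; returns one point of the polytope (eq:coreqset)
    U = C1.shape[0]
    nvar = U * U                       # variables pi(i, j), flattened as index i*U + j
    A_ub, b_ub = [], []
    for i in range(U):                 # player 1: recommended action i, deviation u
        for u in range(U):
            if u != i:
                row = np.zeros(nvar)
                row[i * U:(i + 1) * U] = C1[i, :] - C1[u, :]
                A_ub.append(row); b_ub.append(0.0)
    for j in range(U):                 # player 2: recommended action j, deviation u
        for u in range(U):
            if u != j:
                row = np.zeros(nvar)
                row[j::U] = C2[:, j] - C2[:, u]
                A_ub.append(row); b_ub.append(0.0)
    res = linprog(np.zeros(nvar), A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.ones((1, nvar)), b_eq=[1.0], bounds=[(0, 1)] * nvar)
    return res.x.reshape(U, U) if res.success else None

C1 = np.array([[6.0, 2.0], [7.0, 0.0]])    # illustrative 2 x 2 cost matrices
C2 = np.array([[6.0, 7.0], [2.0, 0.0]])
print(correlated_equilibrium(C1, C2))
\end{verbatim}
Because the set (\ref{eq:coreqset}) is a non-empty convex polytope, this feasibility problem always has a solution; a particular point can be singled out by replacing the null objective with, for instance, the expected social cost.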
A Hannan consistent strategy (also known as ``universally consistent'' strategies~\\cite{FL95}) is one that ensures, no matter what other players do, the player's average payoff is asymptotically no worse than if she were to play any \\emph{constant} strategy for in all previous periods. Hannan consistent strategies guarantee no asymptotic external regrets and lead to the so-called ``coarse correlated equilibrium''~\\cite{MV78} notion that generalizes the Aumann's correlated equilibrium.\n\n\n\n\\begin{figure}[h]\n\\begin{tikzpicture}[decoration=zigzag]\n \n \\filldraw[fill=gray!10, draw=gray!80] (0,0) ellipse (8.0cm and 4.0cm);\n \\path[postaction={decorate,\n decoration={\n raise=-1em,\n text along path,\n text={|\\tt|Hannan Consistent},\n text align=center,\n },\n }] (0,0) ++(110:8.0cm and 4.0cm) arc(110:70:8.0cm and 4.0cm);\n \\filldraw[fill=gray!10, draw=gray!80] (0,0) ellipse (6.4cm and 3.2cm);\n \\path[postaction={decorate,\n decoration={\n raise=-1em,\n text along path,\n text={|\\tt|Correlated Equilibria},\n text align=center,\n },\n }] (0,0) ++(110:6.4cm and 3.2cm) arc(110:70:6.4cm and 3.2cm);\n \\filldraw[fill=gray!20, draw=gray!80] (0,0) ellipse (4.8cm and 2.4cm);\n \\path[postaction={decorate,\n decoration={\n raise=-1em,\n text along path,\n text={|\\tt|Randomized Nash},\n text align=center,\n },\n }] (0,0) ++(110:4.8cm and 2.4cm) arc(110:70:4.8cm and 2.4cm);\n \\filldraw[fill=gray!30, draw=gray!80] (0,0) ellipse (3.2cm and 1.6cm);\n \\path[postaction={decorate,\n decoration={\n raise=-1em,\n text along path,\n text={|\\tt|Pure Nash},\n text align=center,\n },\n }] (0,0) ++(110:3.2cm and 1.6cm) arc(110:70:3.2cm and 1.6cm);\n \\filldraw[fill=gray!40, draw=gray!80] (0,0) ellipse (1.6cm and 0.8cm) node\n {\\texttt{Dominant Strategies}};\n\\end{tikzpicture}\n\\caption{Equilibrium notions in non-cooperative games. Enlarging the equilibria set weakens the behavioral sophistication on the player's part to distributively reach equilibrium through repeated plays of the game.}\n\\label{fig:strategy-sets}\n\\end{figure}\n\n\\subsection{Reinforcement Learning Algorithm}\nTo describe the learning algorithm and the concept of regret, it is convenient to deal with rewards rather than costs.\nEach agent $l$ has utility reward $r_l(u^{(l)},u^{-l})$ where $u^{(l)}$ denotes the action of agent $l$ and $u^{-l}$ denotes the action of the other agents.\nThe action space for each agent $l$ is $\\{1,2,\\ldots, U\\}$.\nDefine the inertia parameter\n\\begin{equation} \\mu \\geq U \\big( \\max r_l(u, u^{-l}) - \\min r_l(u, u^{-l}) \\big) \\label{eq:inertia}\\end{equation}\n\nEach agent then runs the regret matching Algorithm \\ref{alg:rm}. Algorithm \\ref{alg:rm} assumes that once a decision is made by an agent, it is observable by all other agents. However, agent $l$ does not know the utility function of other agents. Therefore, a learning algorithms\nsuch as Algorithm \\ref{alg:rm} is required to learn the correlated equilibria.\n\nThe assumption that the actions of each agent are known to all other agents can be relaxed; see \\cite{HM01b} for \"blind\" algorithms that do not require this. \\index{regret matching algorithm} \\index{reinforcement learning of correlated equilibria! regret matching algorithm}\n\n\\begin{algorithm}\nEach agent $l$ with utility reward $r_l(u^{(l)},u^{-l})$ independently executes the following:\n\\begin{compactenum}\n \\item \\bf Initialization: \\rm\n Choose action $u_0^{(l)}\\in \\{1,\\ldots,U\\}$ arbitrarily. 
\n Set $R^l_1=0.$\n \\item\n Repeat for $n=1,2,\\ldots$, the following steps:\\\\\n { \\bf Choose Action:} $u^{(l)}_{n} \\in \\{1,\\ldots,U\\}$ with probability\n \\begin{align}\n \\label{rmtrans} \\mathbb{P}(u^{(l)}_{n}=j| u_{n-1}^{(l)}=i,R^l_n)\n &= \\begin{cases}\n \\frac{ |R_n^l(i,j)|^+} {\\mu } & j\\neq i, \\\\\n 1-\\sum_{m\\neq i}\\frac{ |R_n^l(i,m)|^+}{\\mu} & j=i\n\\end{cases}\n \\end{align}\nwhere inertia parameter $\\mu$ is defined in (\\ref{eq:inertia}) and $|x|^+ \\stackrel{\\text{defn}}{=} \\max\\{x,0\\}.$\n\n{\\bf Regret Update}: \n Update the $U \\times U$ regret matrix $R^l_{n+1}$ as\n \\begin{equation}\n \\label{sa-d}\n \\begin{array}{ll}&\\!\\!\\!\\displaystyle\n R^l_{n+1}(i,j) =R^l_{n}(i,j)+\\epsilon\\left(I\\{u^{(l)}_n=i\\}\\big(r_l(j,u^{-l}_n)-r_l(i,u^{-l}_n)\\big) -R^l_{n}(u,j)\\right).\n \n \\end{array}\n \\end{equation}\n Here $\\varepsilon \\ll1$ denotes a constant positive step size.\n\\end{compactenum}\n\\caption{Regret Matching Algorithm for Learning Correlated Equilibrium} \\label{alg:rm}\n\\end{algorithm}\n\n\n\\subsubsection{Discussion and Intuition of Algorithm \\ref{alg:rm}}\n{\\em 1. Adaptive Behavior:} In~(\\ref{sa-d}), $\\epsilon$ serves as a forgetting factor to foster adaptivity to the evolution of the non-cooperative game parameters. That is, as agents repeatedly take actions, the effect of the old underlying parameters on their current decisions vanishes.\n\n\\noindent {\\em 2. Inertia:} The choice of $\\mu$ guarantees that there is always a positive probability of playing the same action as the last period. Therefore, $\\mu$ can be viewed as an ``inertia'' parameter: A higher $\\mu$\nyields switching with lower probabilities. It plays a significant role in breaking away from bad cycles. It is worth emphasizing that the speed of convergence to the correlated equilibria set is closely related to this inertia parameter.\n\n\\noindent {\\em 3. Better-reply vs. Best-reply:} In light of the above discussion, the most distinctive feature of the regret-matching procedure, that differentiates it from other works such as~\\cite{FL99a}, is that it implements a better-reply rather than a best-reply strategy\\footnote{This has the additional effect of making the behavior\ncontinuous, without need for approximations~\\cite{HM00}.}. This inertia assigns positive probabilities to any actions that are just better. Indeed, the behavior of a regret-matching decision maker is very far from that of a rational decision maker that makes optimal decisions given his (more or less well-formed) beliefs about the environment. Instead, it resembles the model of a reflex-oriented individual that reinforces decisions with ``pleasurable'' consequences~\\cite{HM01b}.\n\nWe also point out the generality of Algorithm \\ref{alg:rm}, by noting that it can be easily transformed into the well-known\n{\\em fictitious play} algorithm by choosing $u^{(l)}_{n+1}=\\arg\\max_k R^l_{n+1}(i,j)$ deterministically, where $u_n^{(l)}=i$,\nand the extremely simple {\\em best response} algorithm by further specifying $\\epsilon=1$.\n\n\n\n\n\\noindent {\\em 4. Computational Cost:} The computational burden (in terms of calculations per iteration) of the regret-matching algorithm does not grow with the number of agents and is hence scalable. At each iteration, each agent needs to execute two multiplications, two additions, one comparison and two table lookups (assuming random numbers are stored in a table) to calculate the next decision. 
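To make this operation count concrete, the per-iteration work of Algorithm \ref{alg:rm} can be written in a few lines. The sketch below (Python/NumPy; the function names, the toy coordination reward and the step-size values are our own illustrative choices) implements the action-selection rule (\ref{rmtrans}) and the regret update (\ref{sa-d}) for a single agent; $\mu$ is assumed to satisfy (\ref{eq:inertia}) so that the sampling probabilities are valid.
\begin{verbatim}
import numpy as np

def choose_action(R, prev_action, mu, rng):
    # sample u_n^{(l)} from the probabilities in (rmtrans); only one row of R is used
    probs = np.maximum(R[prev_action, :], 0.0) / mu
    probs[prev_action] = 0.0
    probs[prev_action] = 1.0 - probs.sum()   # inertia: leftover mass stays on old action
    return rng.choice(len(probs), p=probs)

def update_regret(R, action, other_action, reward, eps):
    # regret update (sa-d): the row of the played action receives the new increment,
    # and eps acts as an exponential forgetting factor on all entries
    U = R.shape[0]
    r_now = reward(action, other_action)
    gain = np.array([reward(j, other_action) for j in range(U)]) - r_now
    incr = np.zeros_like(R)
    incr[action, :] = gain                   # indicator I{u_n^{(l)} = i}
    return R + eps * (incr - R)

# illustrative run against an exogenous random opponent
rng = np.random.default_rng(0)
reward = lambda a, b: 1.0 if a == b else 0.0     # toy coordination reward, U = 2
R, act = np.zeros((2, 2)), 0
for n in range(1000):
    other = rng.integers(2)
    act = choose_action(R, act, mu=2.0, rng=rng)
    R = update_regret(R, act, other, reward, eps=0.05)
\end{verbatim}
Each iteration manipulates a single row of $R$ and a $U$-dimensional probability vector, so the per-agent work does not grow with the number of agents.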
Therefore, it is suitable for implementation in sensors with limited local computational capability.\n\n\\noindent {\\em 5. Global performance metric}\nFinally, we introduce a metric for the global behavior of the system. \nThe global behavior $z_n$ at time $k$ is defined as the empirical frequency of joint play of all agents up to period $k$. Formally,\n\\begin{equation}\n\\label{eq:global-behavior}\nz_n = \\sum_{\\tau\\leq k} (1-\\epsilon)^{k-\\tau} e_{\\mathbf{u}_\\tau}\n\\end{equation}\nwhere $e_{\\mathbf{u}_\\tau}$ denotes the unit vector with the element corresponding to the joint play $\\mathbf{u}_\\tau$ being equal to one. Given $z_n$, the average payoff accrued by each agent can be straightforwardly evaluated, hence the name global behavior. It is more convenient to define $z_n$ via the stochastic approximation recursion\n\\begin{equation}\n\\label{eq:global-behavior-SA}\nz_n = z_{k-1} + \\epsilon \\left[ e_{\\mathbf{u}_n} - z_{k-1}\\right].\n\\end{equation}\n\nThe global behavior $z_n$ is a system ``diagnostic'' and is only used for the analysis of the emergent collective behavior of agents. That is, it does not need to be computed by individual agents.\nIn real-life application such as smart sensor networks, however, a network controller can monitor $z_n$ and use it to adjust agents' payoff functions to achieve the desired global behavior.\n\n\n\\subsection{Ordinary Differential Inclusion Analysis of Algorithm \\ref{alg:rm}} \\index{ordinary differential inclusion analysis}\nRecall from Chapter \\ref{chp:markovtracksa} that the dynamics of a stochastic approximation algorithm can be characterized by an\nordinary differential equation obtained by averaging the equations in the algorithm. In particular,\nusing Theorem \\ref{thm:odeweak} of Chapter \\ref{chp:markovtracksa}, the estimates generated by the stochastic approximation\nalgorithm converge weakly to the averaged system corresponding to (\\ref{sa-d}) and\n(\\ref{eq:global-behavior-SA}), namely,\n\\begin{equation}\n\\begin{split}\n\\frac{d R(i,j)}{dt} & = \\E_{\\pi} \\left\\{ I(u_t= i ) \\big( r_l(j,u^{-l}) - r_l(i,u^{-l}) \\big) - R(i,j) \\right\\} \\\\\n&= \\sum_{u^{-l}} \\biggl[ \\pi(i | u^{-l}) \\,\\bigg( r_l(j,u^{-l}) - r_l(i,u^{-l}) \\bigg) \\biggr] \\pi(u^{-l}) - R(i,j) \\\\\n\\frac{dz}{dt} &= \\pi(i|u^{-l})\\, \\pi(u^{-l}) - z \n\\end{split} \\label{eq:avgame1}\n\\end{equation}\nwhere $\\pi(u^{(l)}, u^{-l}) = \\pi(u^{-l} | u^{(l)}) \\pi( u^{(l)} ) $ is the stationary distribution of the Markov process $(u^{(l)}, u^{-l})$.\n\nNext note that the transition probabilities in (\\ref{rmtrans}) of $u_n^{(l)}$ given $R_n$ are conditionally independent of $u_n^{-l}$. So given\n$R_n$, $\\pi(i | u^{-l}) = \\pi(i)$.\nSo given the transition probabilities in (\\ref{rmtrans}), clearly the stationary distribution $\\pi(u^{(l)})$ satisfies the linear algebraic equation\n$$\n\\pi(i) = \\pi(i) \\biggl[ 1 - \\sum_{j\\neq i} \\frac{ | R(j,i)|^+ }{ \\mu} \\biggr] + \\sum_{j\\neq i} \\pi(j) \\frac{|R(i,j)|^+}{\\mu} .\n$$\nwhich after cancelling out $\\pi(i)$ on both sides yields\n\\begin{equation}\n\\sum_{i\\neq j} \\pi(i) |R(i,j)|^+ = \\sum_{i\\neq j} \\pi(j) |R(j,i)|^+ \\label{eq:avgame2}\n\\end{equation}\nTherefore the stationary distribution $\\pi$ is functionally independent of the inertia parameter~$\\mu$.\n\nFinally note that as far as player $l$ is concerned, the strategy $\\pi(u^{-l})$ is not known. 
All is known is that $\\pi(u^{-l})$ is a valid pmf.\nSo we can write\nthe averaged dynamics of the regret matching Algorithm \\ref{alg:rm} as\n \\begin{equation} \\displaystyle\n \\begin{split}\n& \\begin{rcases} \n \\dfrac{d R(i,j)}{dt} \\in \\displaystyle \\sum_{u^{-l}} \\biggl[ \\pi(i ) \\,\\bigg( r_l(j,u^{-l}) - r_l(i,u^{-l}) \\bigg) \\biggr] \\pi(u^{-l}) - R(i,j) \n\\\\\n \\dfrac{dz}{dt} \\in \\pi(i)\\, \\pi(u^{-l}) - z \\end{rcases} \\; \\pi(u^{-l}) \\in \\text{ valid pmf} \\\\\n&\\sum_{i\\neq j} \\pi(i) |R(i,j)|^+ = \\sum_{i\\neq j} \\pi(j) |R(j,i)|^+ \\end{split} \\label{eq:odi}\n\\end{equation}\nThe above averaged dynamics \n constitute an algebraically constrained ordinary differential inclusion.\\footnote{Differential inclusions \\index{differential inclusion} are a generalization of the concept of ordinary differential equations. A generic differential inclusion is of the form $dx\/dt \\in \\mathcal{F}(x,t)$, where $\\mathcal{F}(x,t)$ specifies a family of trajectories rather than a single trajectory as in the ordinary differential equations $dx\/dt = F(x,t)$.}\nWe refer the reader to \\cite{BHS05,BHS06} for an excellent exposition of the use of differential inclusions for analyzing game theoretical type learning algorithms.\n\n{\\em Remark}:\nThe asymptotics of a stochastic approximation algorithm is typically captured by an ordinary differential equation (ODE). Here, although agents observe $u^{-l}$, they are oblivious to the strategies $\\pi(u^{-l})$ from which $u^{-l}$ has been drawn. Different strategies\n$\\pi(u^{-l})$ result in \ndifferent trajectories of $R_n$. Therefore, $R_t$ and $z_t$ are specified by a differential inclusions rather than ODEs \\index{ordinary differential equation}.\n\n\n\\subsection{Convergence of Algorithm \\ref{alg:rm} to the set of correlated equilibria}\nThe previous subsection says that the regret matching Algorithm \\ref{alg:rm} behaves asymptotically as an \nalgebraically constrained differential inclusion (\\ref{eq:odi}). So we only need to analyze the behavior of this differential inclusion to characterize the\nbehavior of the regret matching algorithm. \n\n\\begin{theorem}\nSuppose every agent follows the ``regret-matching''Algorithm~\\ref{alg:rm}. Then as $t\\to\\infty$:\n(i) $R(t) $ converges to the negative orthant in the sense that\n\\begin{equation}\n\\dis\\big[ R(t),\\mathbb{R}^-\\big] = \\inf_{\\boldsymbol{r} \\in {\\rm I\\hspace{-.07cm}R}^-}\\big\\| R(t) - \\boldsymbol{r}\\big\\| \\Rightarrow 0;\n\\end{equation}\n\n(ii) $z(t) $ converges to the correlated equilibria set $\\mathcal{C}$ in the sense that\n\\begin{equation}\n\\label{eq:convergence-1}\n\\dis [ z(t),\\mathcal{C} ] = \\inf_{\\boldsymbol{z} \\in \\mathcal{C}}\\left\\| z(t) - \\boldsymbol{z}\\right\\| \\Rightarrow 0.\n\\end{equation}\n\\end{theorem}\nThe proof below shows the simplicity and elegance of the ordinary differential equation (inclusion) approach for analyzing stochastic approximation algorithm. 
Just a few elementary lines\nbased on the Lyapunov function yields the proof.\n\n\\begin{proof}\n\nDefine the Lyapunov function \\index{Lyapunov function}\n\\begin{equation}\nV\\big(R\\big) = \\frac{1}{2}\\big( \\textmd{dist}\\big[R,{\\rm I\\hspace{-.07cm}R}^-\\big] \\big)^2= \\frac{1}{2}\\sum_{i,j} \\big(\\big| R(i,j)\\big|^+\\big)^2.\n\\end{equation}\nEvaluating the time-derivative and substituting for $dR(i,j) \/ dt$ from (\\ref{eq:odi}) we obtain\n\\begin{align}\n{d\\over dt}V\\big(R\\big) &= \\sum_{i,j} \\big| R(i,j)\\big|^+ \\cdot {d\\over dt} R(i,j) \\nonumber\\\\\n& = \\sum_{i,j} \\big| r(i,j)\\big|^+ \\Big[ ( r_l\\big(j,u^{-l}\\big) - r_l\\big(i,u^{-l}\\big)) \\pi(i) - R(i,j) \\Big]\\nonumber\\\\\n& = \\underbrace{\\sum_{i,j} \\big| R(i,j)\\big|^+ \\big( r_l \\big(j,u^{-l}\\big) - r_l\\big(i,u^{-l}\\big)\\big) \\pi(i)}_{= 0\\;\\text{from (\\ref{eq:avgame2}) }} - \\sum_{i,j} \\big| R(i,j)\\big|^+ R(i,j)\\nonumber\\\\\n& = -2 V\\big(R\\big).\n\\end{align}\nIn the last equality we used\n\\begin{equation}\n\\sum_{i,j} \\big| R(i,j)\\big|^+ R(i,j) = \\sum_{i,j} \\big( \\big| R(i,j)\\big|^+ \\big)^2 = 2 V\\big(R\\big).\n\\end{equation}\nThis completes the proof of the first assertion, namely that Algorithm \\ref{alg:rm} eventually generates regrets that are non-positive.\n\nTo prove the second assertion, from Algorithm \\ref{alg:rm}, the elements of the regret matrix are\n\\begin{align}\nR_k(i,j) &= \\epsilon \\sum_{\\tau \\leq k} (1-\\epsilon)^{k-\\tau} \\bigg[ r_l \\big(j,u_\\tau^{-l}\\big) - r_l \\big(u^{(l)}_\\tau, u^{-l}_\\tau)\\bigg]\nI(u_\\tau^{(l)} = i) \\nonumber\\\\\n& = \\sum_{u^{-l}} z (i,u^{-l}) \\big[ r_l(j,u^{-l}) - r_l(i, u^{-l})\\big] \n\\end{align}\nwhere $z(i,u^{-l})$ denotes the empirical distribution of agent $l$ choosing action $i$ and the rest playing $u^{-l}$. \nOn any convergent subsequence $\\lbrace z_{\\underline{k}}\\rbrace_{\\underline{k}\\geq 0}\\rightarrow {\\pi}$, then\n\\begin{equation}\n\\label{eq:thrm-5-3}\n\\lim_{k\\to\\infty} R_k(i,j) = \\sum_{u^{-l}} \\pi(i, u^{-l}) \\big[ r_l (j,u^{-l}) - r_l(i,u^{-l})\\big]\n\\end{equation}\nwhere $\\pi(i,u^{-l})$ denotes the probability of agent $l$ choosing action $i$ and the rest playing $u^{-l}$.\nThe first assertion of the theorem proved that the regrets converge to non-positive values (negative orthant). Therefore (\\ref {eq:thrm-5-3})\nyields that \n$$ \\sum_{u^{-l}} \\pi(i, u^{-l}) \\big[ r_l (j,u^{-l}) - r_l(i,u^{-l})\\big] \\leq 0 $$\nimplying that $\\pi$ is a correlated equilibrium.\n\\end{proof}\n\n\\subsection{Extension to switched Markov games} \\index{reinforcement learning of correlated equilibria! switched Markov game}\nConsider the case now where rewards $r_l(u^{(l)},u^{-l})$ evolve according to an unknown Markov chain $\\th_n$. Such a time varying\ngame can result from utilities in a social network evolving with time or the number of players changing with time. \nThe reward for agent $l$ is now $r_l(u^{(l)},u^{-l}, \\th_n)$. The aim is to track the set of correlated equilibria $\\mathcal{C}(\\th_n)$; that is\nuse the regret matching algorithm \\ref{alg:rm} so that agents eventually deploy strategies from $\\mathcal{C}(\\th_n)$.\nIf $\\th_n$ evolves with transition matrix $I + \\epsilon^2 Q$ (where $Q$ is a generator), then it is on a slower time scale than the dynamics of the regret\nmatching Algorithm \\ref{alg:rm}. 
Then a more general proof in the spirit of Theorem \\ref{ode-lim} yields that the regret matching algorithm can track the time\nvarying correlated equilibrium set $\\mathcal{C}(\\th_n)$. Moreover, in analogy to \\S \\ref {sec:fastmc}, if the transition matrix for $\\th_n$ is $I + \\epsilon Q$, then the asymptotic dynamics\nare given by a switched Markov differential inclusion, see \\cite{KMY08,NKY13}.\n\n\n\\section{Stochastic Search-Ruler Algorithm} \\index{discrete stochastic optimization! stochastic ruler}\n\n\nWe discuss two simple variants of Algorithm \\ref{alg:RS} that\nrequire less restrictive conditions for convergence than condition (O). Assume $c_n(\\th)$ are uniformly bounded for $\\th \\in \\Theta$.\nNeither of the algorithms given below are particularly novel; but they are useful from a pedagogical point of view.\n\n It is convenient to normalize the objective (\\ref{eq:discobj}) as follows:\n Let $\\alpha \\leq c_n(\\th) \\leq \\beta$ where $\\alpha$ denotes a finite lower\nbound and $\\beta>0$ denotes a finite upper bound. Define the\nnormalized costs $m_n(\\th)$ as\n\\begin{equation} \\label{eq:mn}\nm_n(\\th) = \\frac{c_n(\\th) -\\alpha}{\\beta-\\alpha} , \\quad\n\\text{ where } 0 \\leq m_n(\\th) \\leq 1. \n\\end{equation}\nThen the stochastic optimization problem (\\ref{eq:discobj}) is equivalent to \n\\begin{equation}\n\\th^* = \n\\arg \\min_{\\th \\in \\Theta}m(\\th) \\text{ where } m(\\th) = \\E\\{m_n(\\th)\\} \n\\label{eq:scale}\n\\end{equation}\nsince scaling the cost function does not affect the minimizing solution.\nRecall $\\Theta = \\{1,2,\\ldots,S\\}$.\n\n\nDefine the loss function\n\\begin{equation} Y_n(\\th,u_n) = I\\left(m_n(\\th) - u_n\\right)\n\\text{ where } I(x) = \\begin{cases} 1 & \\text{ if } x> 0 \\\\\n 0 & \\text{ otherwise } \\end{cases} \\label{eq:yn}\\end{equation}\nHere\n$u_n $ is a independent uniform random number in $[0,1]$. The uniform random number $u_n$ is a stochastic ruler against which the candidate\n$m_n(\\th)$ is measured.\nThe result was originally used\nin devising stochastic ruler optimization algorithms \\cite{AA01} -- although here we \npropose a more efficient algorithm than the stochastic ruler.\nApplying Algorithm \\ref{alg:RS} to the cost function $ \\E\\{Y_n(\\th,u_n)\\}$ defined \nin (\\ref{eq:yn}) yields the following stochastic search-ruler algorithm:\n\n\\begin{algorithm}\n\\caption{Stochastic Search-Ruler}\n\\label{alg:SR}\nIdentical to Algorithm \\ref{alg:RS} with $c_n(\\th_n)$ and\n$c_n(\\tilde{\\theta}_n)$ replaced by $Y_n(\\th_n,u_n)$ and\n$Y_n(\\tilde{\\th}_n,\\tilde{u}_n)$. Here $u_n$ and $\\tilde{u}_n$ are \nindependent uniform random numbers in $[0,1]$.\n\\end{algorithm}\n\nAnalogous to Theorem \\ref{thm:RS} we have the following result:\n\n\\begin{theorem} \\label{thm:SR} Consider the discrete stochastic optimization problem (\\ref{eq:discobj}).\nThen the Markov chain $\\{\\th_n\\}$ generated by Algorithm \\ref{alg:SR} has \nthe following property for its stationary distribution $\\pi_\\infty$:\n\\begin{equation} \\frac{\\pi_\\infty(\\th^*)}{\\pi_\\infty(\\th)} =\n \\frac{m(\\th)}{m(\\th^*)}\n\\frac{ (1 - m(\\th^*))}{ (1-m(\\th)) } > 1 . \\label{eq:ratio}\\end{equation}\n\\end{theorem}\n\n The theorem says that Algorithm \\ref{alg:SR} is attracted to set the global minimizers $\\mathcal{G}$. It spends more time in $\\mathcal{G}$\nthan any other candidates. 
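A small simulation makes this attraction property visible. The sketch below assumes that Algorithm \ref{alg:RS} has the usual random-search form (sample one uniformly chosen candidate per iteration, accept it when its sampled loss is strictly smaller, and estimate the minimizer by the most frequently visited candidate); with that caveat, it applies the ruler loss (\ref{eq:yn}) to a toy problem whose numbers are purely illustrative (Python/NumPy).
\begin{verbatim}
import numpy as np

def search_ruler(sample_cost, S, n_iter, alpha, beta, rng):
    # sample_cost(theta) returns a noisy cost c_n(theta); alpha, beta bound it
    def ruler_loss(theta):
        m = (sample_cost(theta) - alpha) / (beta - alpha)   # normalization (eq:mn)
        return 1.0 if m > rng.uniform() else 0.0            # Y_n(theta, u_n), eq. (yn)

    theta = rng.integers(S)                # current iterate in {0, ..., S-1}
    visits = np.zeros(S)
    for _ in range(n_iter):
        cand = rng.integers(S)
        while cand == theta:
            cand = rng.integers(S)
        # independent rulers u_n and u~_n are drawn inside the two calls below
        if ruler_loss(cand) < ruler_loss(theta):
            theta = cand
        visits[theta] += 1
    return int(np.argmax(visits)), visits / n_iter

means = np.array([0.7, 0.5, 0.2, 0.6])     # theta* = 2 has the smallest mean cost
rng = np.random.default_rng(1)
noisy_cost = lambda th: means[th] + 0.1 * rng.standard_normal()
print(search_ruler(noisy_cost, S=4, n_iter=20000, alpha=-0.5, beta=1.5, rng=rng))
\end{verbatim}
On this toy problem the most visited candidate is, with high probability, the global minimizer, and the empirical occupation frequencies approximate the ratios predicted by (\ref{eq:ratio}).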
The restrictive condition (O) is not required for Algorithm \\ref{alg:SR} to be attracted to $\\mathcal{G}$.\n Theorem \\ref{thm:SR}\ngives an explicit representation of the discriminative power of the algorithm\nbetween the optimizer $\\th^*$ and any other candidate $\\th$ in terms of the normalized\nexpected costs $m(\\th)$ and $m(\\th^*)$.\n Algorithm \\ref{alg:SR} is more efficient than the stochastic ruler algorithm of \\cite{And99} when the candidate samples are chosen with equal probability.\nThe stochastic ruler algorithm of \\cite{And99} has asymptotic efficiency\n$\\pi(\\th^*)\/\\pi(\\th) = (1 - m(\\th^*))\/ (1-m(\\th)) $.\nSo Algorithm \\ref{alg:SR} has the additional improvement in efficiency due to the \nadditional multiplicative term $m(\\th)\/m(\\th^*)$ in (\\ref{eq:ratio}).\n\n\n\\noindent {\\em Variance reduction using common random numbers}:\nA more efficient implementation of Algorithm \\ref{alg:SR} can be obtained\nby using variance reduction based on common random numbers (discussed in Appendix \\ref{sec:crn} of the book) as follows:\nSince $u_n$ is uniformly distributed in $[0,1]$, so is $1-u_n$.\nSimilar to Theorem \\ref{thm:SR}\nit can be shown that \nthe optimizer\n $\\th^*$ is the minimizing solution of\nthe following stochastic optimization problem\n$\n\\th^* = \\arg \\min_\\th \\E\\{Z_n(\\th,u_n)\\} $\nwhere \n\\begin{equation}\nZ_n(\\th,u_n) = \\frac{1}{2} \\left[ Y_n(\\th,u_n) + Y_n(\\th,1-u_n)\\right]\n\\label{eq:zn} \\end{equation}\nwhere the normalized sample cost $m_n(\\th)$ is defined in (\\ref{eq:scale}).\nApplying Algorithm \\ref{alg:SR} with $ Z_n(\\th_n,u_n)$ and $Z_n(\\tilde{\\th}_n,u_n)$ replacing $ Y_n(\\th_n,u_n)$ and $Y_n(\\tilde{\\th}_n,u_n)$,\nrespectively,\nyields the variance reduced search-ruler algorithm.\n\nIn particular, since the indicator function $I(\\cdot)$ in (\\ref{eq:yn}) is a monotone function of its argument, it follows that \n$\\operatorname{Var}\\{Z_n(\\th,u_n)\\} \\leq \\operatorname{Var}\\{Y_n(\\th,u_n)\\} $.\n As a result one would expect that the stochastic\noptimization algorithm using $Z_n$ would converge faster.\n\n\\begin{proof}\nWe first show that \n $\\th^*$ defined in (\\ref{eq:scale}) is the minimizing solution of\nthe stochastic optimization problem\n$\n\\th^* = \\arg \\min_\\th \\E\\{Y_n(\\th,u_n)\\}$.\n Using the smoothing property of conditional expectations (\\ref{eq:lieeng})\n yields \\index{smoothing property of conditional expectation}\n\\begin{align*}\n\\E\\{ I\\left(m_n(\\th) - u_n\\right) \\} &= \n\\E\\{\\E\\{I\\left(m_n(\\th) - u_n\\right)| m_n(\\th) \\} \\} \\\\\n&\\hspace{-1.3cm}= \\E\\{ \\mathbb{P}(u_n < m_n(\\th))\\} = \\E\\{m_n(\\th)\\} = m(\\th)\\end{align*}\n The second equality follows since expectation of an indicator function is probability,\n the third equality holds because $u_n$ is a uniform random number in [0,1] so\n that $\\mathbb{P}(u_n u_n)\n\\end{align*}\nFinally, for this transition matrix, it is easily verified that \n\\begin{equation} \\pi_\\infty(\\th) = \\kappa (1 - m(\\th)) \\prod_{j\\neq \\th} m(j) \\end{equation}\nis the invariant distribution\n where $\\kappa$ denotes a normalization constant.\n Hence \n$$\\frac{\\pi_\\infty(\\th^*)}{ \\pi_\\infty(\\th)} = \\frac{m(\\th)}{m(\\th^*)}\\frac{ (1 - m(\\th^*))}{ (1-m(\\th)) }\n= \\frac{ 1\/m(\\th^*)-1}{1\/m(\\th) - 1} > 1$$\nsince $m(\\th^*)$ is the global minimum\nand therefore $ m(\\th^*) < m(\\th)$ for $\\th \\in \\Theta - \\mathcal{G}$.\n\\end{proof}\n\n\n\\begin{comment}\n\\section{Problems}\n\\begin{compactenum}\n\\item 
Classical RLS vs LMS\n\\item Convergence of UCB algorithm and comparison with Thompson's sampling.\n\n\n\\item Polyak averaging for stochastic approximations\n\n\\item A detection aided test to adapt step size (kim \\& felisa)\n\n\\item Consensus diffusion algorithms for discrete stochastic optimization\n\n\\item Design of optimal observer trajectory\n\n\n\\end{compactenum} \\end{comment}\n\n\\newpage\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nFeature selection has been widely investigated and used by the machine learning and data mining community. In this context, a feature, also called attribute or variable, represents a property of a process or system than has been measured, or constructed from the original input variables. The goal of feature selection is to select the smallest feature subset given a certain generalization error, or alternatively finding the best feature subset with $k$ features, that yields the minimum generalization error. Additional objectives of feature selection are: (i) improve the generalization performance with respect to the model built using the whole set of features, (ii) provide a more robust generalization and a faster response with unseen data, and (iii) achieve a better and simpler understanding of the process that generates the data \\cite{Kohavi97,Guyon03}. We will assume that the feature selection method is used either as a preprocessing step or in conjunction with a learning machine for classification or regression purposes.\nFeature selection methods are usually classified in three main groups: wrapper, embedded and filter methods \\cite{Guyon03}. Wrappers \\cite{Kohavi97} use the induction learning algorithm as part of the function evaluating feature subsets. The performance is usually measured in terms of the classification rate obtained on a testing set, i.e., the classifier is used as a black box for assessing feature subsets. Although these techniques may achieve a good generalization, the computational cost of training the classifier a combinatorial number of times becomes prohibitive for high dimensional datasets. In addition, many classifiers are prone to over-learning and show sensitiveness to initialization. Embedded methods \\cite{Lal06}, incorporate knowledge about the specific structure of the class of functions used by a certain learning machine, e.g. bounds on the leave-one-out error of SVMs \\cite{Weston00}. Although usually less computationally expensive than wrappers, embedded methods still are much slower than filter approaches, and the features selected are dependent on the learning machine. Filter methods \\cite{Duch03} assume complete independence between the learning machine and the data, and therefore use a metric independent of the induction learning algorithm to assess feature subsets. Filter methods are relatively robust against overfitting, but may fail to select the best feature subset for classification or regression.\nIn the literature, several criteria have been proposed to evaluate single features or feature subsets, among them: inconsistency rate \\cite{Huang2003Inconsistency}, inference correlation \\cite{Mo2011Inference}, classification error \\cite{Estevez1998Clas}, fractal dimension \\cite{Mo2012Fractal}, distance measure \\cite{Sebban2002Dist,Bins2001Dist}, etc. Mutual information (MI) is a measure of statistical independence, that has two main properties. 
First, it can measure any kind of relation between random variables, including nonlinear relationships \\cite{Cover06}. Second, MI is invariant under transformations in the feature space that are invertible and differentiable, e.g. translations, rotations and any transformation preserving the order of the original elements of the feature vectors \\cite{Kullback97,Kullback51}. Many advances in the field have been reported in the last 20 years since the pioneer work of Battiti \\cite{Battiti94}. Battiti defined the problem of feature selection as the process of selecting the $k$ most relevant variables from an original feature set of $m$ variables, $k0$. In this case, $x_{2}$ and $x_{3}$ interact positively to predict $C$, and this yields a positive value of the multi-information among these variables. The multi-information among the variables $x_{1}$, $x_{4}$ and $C$ is given by: $I(x_{1};x_{4};C)=I(\\{x_{1},x_{4}\\};C)-I(x_{1};C)-I(x_{4};C)$. The relevance of individual features $x_{1}$ and $x_{4}$ is the same, i.e., $I(x_{1};C)=I(x_{4};C)>0$. In this case the joint information provided by $x_{1}$ and $x_{4}$ with respect to $C$ is the same as that of each variable acting separately, i.e., $I(\\{x_{1},x_{4}\\};C)=I(x_{1};C)=I(x_{4};C)$. This yields a negative value of the multi-information among these variables. We can deduce that the interaction between $x_{1}$ and $x_{4}$ does not provide any new information about $C$. Let us consider now the multi-information among $x_{1}$, $x_{2}$ and $C$, which is zero: $I(x_{1};x_{2};C)=I(\\{x_{1},x_{2}\\};C)-I(x_{1};C)-I(x_{2};C)=0$. Since feature $x_{2}$ only provides information about $C$ when interacting with $x_{3}$, then $I(\\{x_{1},x_{2}\\};C)=I(x_{1};C)$. In this case, features $x_{1}$ and $x_{2}$ do not interact in the knowledge of $C$.\n\nFrom the viewpoint of feature selection, the value of the multi-information (positive, negative or zero) gives rich information about the kind of interaction there is among the variables. Let us consider the case where we have a set of already selected features $S$ and a candidate feature $f_{i}$, and we measure the multi-information of these variables with the class variable $C$, $I(f_{i};S;C)=I(S;C|f_{i})-I(S;C)$. When the multi-information is positive, it means that feature $f_i$ and $S$ are complementary. On the other hand, when the multi-information is negative, it means that by adding $f_{i}$ we are diminishing the dependence between $S$ and $C$, because $f_{i}$ and $S$ are redundant. Finally, when the multi-information is zero, it means that $f_{i}$ is irrelevant with respect to the dependency between $S$ and $C$.\n\nThe mutual information between a set of $m$ features and the class variable $C$ can be expressed compactly in terms of multi-information as follows:\n\\begin{equation}\nI(\\left\\{ x_{1},x_{2},...,x_{m}\\right\\} ;C)=\\sum_{k=1}^{m}\\sum_{\\begin{array}{c}\\forall S\\subseteq\\{x_{1},...,x_{m}\\}\\\\ |S|=k\\end{array}}I([S\\cup C]),\\label{eq:Multiple}\n\\end{equation}\nwhere $I([S\\cup C])=I(s_{1};s_{2};\\cdots;s_{k};C).$ Note that the sum on the right side of eq. (\\ref{eq:Multiple}), is taken over all subsets $S$ of size $k$ drawn from the set $\\{x_{1},...,x_{m}\\}$.\n\n\n\\section{Relevance, Redundancy and Complementarity}\n\nThe filter approach to feature selection is based on the idea of relevance, which we will explore in more detail in this section. 
Basically the problem is to find the feature subset of minimum cardinality that preserves the information contained in the whole set of features with respect to $C$. This problem is usually solved by finding the relevant features and discarding redundant and irrelevant features. In this section, we review the different definitions of relevance, redundancy and complementarity found in the literature.\n\n\\subsection{Relevance}\n\nIntuitively, a given feature is relevant when either individually or together with other variables, it provides information about $C$. In the literature there are many definitions of relevance, including different levels of relevance \\cite{Bell00,Battiti94,Guyon03,Kohavi97,Yu04,Peng05,Almuallim92,Almuallim91,Brown2012,Davies94}. Kohavi and John \\cite{Kohavi97} used a probabilistic framework to define three levels of relevance: strongly relevant, weakly relevant, and irrelevant features, as shown in Table \\ref{tab:relevance}. Strongly relevant features provide unique information about $C$, i.e., they cannot be replaced by other features. Weakly relevant features provide information about $C$, but they can be replaced by other features without losing information about $C$. Irrelevant features do not provide information about $C$, and they can be discarded without losing information. A drawback of the probabilistic approach is the need of testing the conditional independence for all possible feature subsets, and estimating the probability density functions (pdfs) \\cite{Raudy1991}.\n\nAn alternative definition of relevance is given under the framework of mutual information \\cite{Somol04,Bell00,Kojadinovic05,Koller96,Yu04,Kwak02b,Fleuret04,Tishby99}. An advantage of this approach is that there are several good methods for estimating MI. The last column of Table \\ref{tab:relevance} shows how the three levels of individual relevance are defined in terms of MI.\n\\begin{table}[H]\n\\caption{\\label{tab:relevance}Levels of relevance for candidate feature $f_{i}$, according to probabilistic framework \\cite{Kohavi97} and mutual information framework \\cite{Meyer08}}\n\\centering{}\\begin{tabular}{>{\\centering}m{2cm}|c|>{\\centering}m{3.7cm}|>{\\centering}m{2.8cm}}\n\\hline Relevance Level & Condition & Probabilistic Approach & Mutual Information Approach\\tabularnewline \\hline \\hline Strongly\\\\\n Relevant & $\\nexists$ & $p(C|f_{i},\\neg f_{i})\\neq p(C|\\neg f_{i})$ & $I(f_{i};C|\\neg\n f_{i})>0$\\tabularnewline\n\\hline Weakly\\\\\n Relevant & $\\exists\\, S\\subset\\neg f_{i}$ & $p(C|f_{i},\\neg f_{i})=p(C|\\neg f_{i})$\\\\\n {$\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\wedge\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,$}\n$p(C|f_{i},S)\\neq p(C|S)$ & $I(f_{i};C|\\neg f_{i})=0$\\\\\n {$\\,\\,\\,\\,\\,\\,\\,\\,\\wedge\\,\\,\\,\\,\\,\\,\\,\\,$\n$I(f_{i};C|S)>0$}\\tabularnewline \\hline Irrelevant & $\\forall\\, S\\subseteq\\neg f_{i}$ &\n$p(C|f_{i},S)=p(C|S)$ & $I(f_{i};C|S)=0$\\tabularnewline \\end{tabular}\n\\end{table}\n\nThe definitions shown in Table \\ref{tab:relevance} give rise to several drawbacks, which are summarized as follows:\n\\begin{enumerate}\n\\item To classify a given feature $f_{i}$, as irrelevant, it is necessary to assess all possible subsets $S$ of $\\neg f_{i}$. Therefore this procedure is subject to the curse of dimensionality \\cite{Bellman61,Trunk1979}.\n\\item The definition of strongly relevant features is too restrictive. 
If two features provides information about the class but are redundant, then both features will be discarded by this criterion. For example, let $\\{x_{1},x_{2},x_{3}\\}$ be a set of 3 variables, where $x_{1}=x_{2}$, and $x_{3}$ is noise, and the output class is defined as $C=x_{1}$. Following the strong relevance criterion we have $I(x_{1};C|\\{x_{2},x_{3}\\})=$$I(x_{2};C|\\{x_{1},x_{3}\\})=$$I(x_{3};C|\\{x_{1},x_{2}\\})=0$.\n\\item The definition of weak relevance is not enough for deciding whether to discard a feature from the optimal feature set. It is necessary to discriminate between redundant and non-redundant features.\n\\end{enumerate}\n\n\\subsection{Redundancy}\n\nYu and Liu \\cite{Yu04} proposed a finer classification of features into weakly relevant but redundant and weakly relevant but non-redundant. Moreover, the authors defined the set of optimal features as the one composed by strongly relevant features and weakly relevant but non-redundant features. The concept of redundancy is associated with the level of dependency among two or more features. In principle we can measure the dependency of a given feature $f_{i}$ with respect to a feature subset $S\\subseteq\\text{\\ensuremath{\\neg}}f_{i}$, by simply using the MI, $I(f_{i};S)$.\nThis information theoretic measure of redundancy satisfies the following properties: it is symmetric, non-linear, non-negative, and does not diminish when adding new features \\cite{Meyer08}.\nHowever, using this measure it is not possible to determine concretely with which features of $S$ is $f_{i}$ redundant. This calls for more elaborated criteria of redundancy, such as the Markov blanket \\cite{Koller96,Yu04}, and total correlation \\cite{Watanabe1960}. The Markov blanket is a strong condition for conditional independence, and is defined as follows.\n\\begin{definition}[Markov blanket]\nGiven a feature $f_{i}$, the subset $M\\subseteq\\neg f_{i}$ is a Markov blanket\nof $f_{i}$ iff \\cite{Koller96,Yu04}:\n\\begin{equation}\np(\\{F\\backslash\\{f_{i}\\,,M\\},C\\}\\,\\text{|\\,}\\{f_{i}\\,,M\\}) = p(\\{F\\backslash\\{f_{i}\\,,M\\},C\\}\\,\\text{|\\,}M).\\label{eq:MB_prob}\n\\end{equation}\n\\end{definition}\n\nThis condition requires that M subsumes all the information that $f_{i}$ has about $C$, but also about all other features $\\{F\\backslash\\{f_{i}\\,,M\\}\\}$. It can be proved that strongly relevant features do not have a Markov blanket \\cite{Yu04}.\n\nThe Markov blanket condition given by Eq. (\\ref{eq:MB_prob}) can be rewritten in the context of information theory as follows \\cite{Meyer08}:\n\\begin{equation}\nI(f_{i};\\{C,\\neg {f_{i},M}\\}\\text{|\\,}M)=0.\\label{MBMI}\n\\end{equation}\n\nAn alternative measure of redundancy is the total correlation or multivariate correlation \\cite{Watanabe1960}. Given a set of features $F=\\{f_{1},...,f_{m}\\}$, the total correlation is defined as follows:\n\\begin{equation}\nC(f_{1};...;f_{m})=\\sum_{i=1}^{m}H(f_{i})-H(f_{1},...,f_{m}).\\label{eq:CORRT1}\n\\end{equation}\n\nTotal correlation measures the common information (redundancy) among all the variables in $F$. 
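For discrete data, (\ref{eq:CORRT1}) can be estimated directly from empirical frequencies. The sketch below (Python/NumPy; the function names and toy data are ours) computes a plug-in estimate of the total correlation; for continuous features one would instead need dedicated entropy estimators.
\begin{verbatim}
import numpy as np
from collections import Counter

def joint_entropy(X):
    # plug-in (empirical frequency) estimate, in nats, of the joint entropy of
    # the discrete columns of the (n_samples x k) array X
    p = np.array(list(Counter(map(tuple, X)).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

def total_correlation(X):
    # C(f_1;...;f_m) = sum_i H(f_i) - H(f_1,...,f_m), cf. (eq:CORRT1)
    return sum(joint_entropy(X[:, [i]]) for i in range(X.shape[1])) - joint_entropy(X)

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, size=5000)
x2 = x1.copy()                         # fully redundant copy of x1
x3 = rng.integers(0, 2, size=5000)     # independent of the others
print(total_correlation(np.column_stack([x1, x2, x3])))   # approx log(2) nats
\end{verbatim}
Only the bit shared by the redundant pair contributes to the total correlation; the independent variable adds nothing.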
If we want to measure the redundancy between a given variable $f_{i}$ and any feature subset $S\\subseteq\\neg f_{i}$, then we can use the total correlation as:\n\\begin{equation}\nC(f_{i};S)=H(f_{i})+H(S)-H(f_{i},S),\n\\end{equation}\nhowever this corresponds to the classic definition of MI, i.e., $C(f_{i};S)=I(f_{i};S)$.\n\n\n\\subsection{Complementarity}\n\nThe concept of complementarity has been re-discovered several times \\cite{Meyer08,Bonev08,Brown2012,Vidal2003,Cheng2011}. Recently, it has become more relevant because of the development of more efficient techniques to estimate MI in high-dimensional spaces \\cite{Kraskov2004,Hero99}. Complementarity, also known as synergy, measures the degree of interaction between an individual feature $f_{i}$ and feature subset $S$ given $C$, through the following expression $(I(f_{i};S|C))$. To illustrate the concept of complementarity, we will start expanding the multi-information among $f_{i}$, $C$ and $S$. Decomposing the multi-information in its three possible expressions we have:\n\\begin{equation}\nI(f_{i};S;C)=\n\\begin{cases}\nI(f_{i};S|C)-I(f_{i};S)\\\\\nI(f_{i};C|S)-I(f_{i};C)\\\\\nI(S;C|f_{i})-I(S;C).\n\\end{cases}\\label{eq:INTERATC1}\n\\end{equation}\n\n\nAccording to eq. (\\ref{eq:INTERATC1}), the first row shows that the multi-information can be expressed as the difference between complementarity $(I(f_{i};S|C))$ and redundancy $(I(f_{i};S))$. A positive value of the multi-information entails a dominance of complementarity over redundancy. Analyzing the second row of eq. (\\ref{eq:INTERATC1}), we observe that this expression becomes positive when the information that $f_{i}$ has about $C$ is greater when it interacts with subset $S$ with respect to the case when it does not. This effect is called complementarity. The third row of eq. (\\ref{eq:INTERATC1}), gives us another viewpoint of the complementarity effect. The multi-information is positive when the information that $S$ has about $C$ is greater when it interacts with feature $f_{i}$ compared to the case when it does not interact. Assuming that the complementarity effect is dominant over redundancy, Fig. \\ref{fig:ResRel} illustrates a Venn diagram with the relationships among complementarity, redundancy and relevancy.\n\\begin{center}\n\\begin{figure}[h]\n\\centering{}\\includegraphics[scale=0.4]{fig2.eps}\n\\caption{\\label{fig:ResRel} Venn diagram showing the relationships among complementarity, redundancy and relevancy, assuming that the multi-information among $f_{i}$, $S$ and $C$ is positive.}\n\\end{figure}\n\\par\\end{center}\n\n\n\\section{Optimal Feature Subset}\n\nIn this section we review the different definitions of the optimal feature subset, $S_{opt}$, given in the literature, as well as the search strategies used for obtaining this optimal set. According to \\cite{Tsamardinos03c}, in practice the feature selection problem must include a classifier or an ensemble of classifiers, and a performance metric. The optimal feature subset is defined as the one that maximizes the performance metric having minimum cardinality. However, filter methods are independent of both the learning machine and the performance metric. Any filter method corresponds to a definition of relevance that employs only the data distribution \\cite{Tsamardinos03c}. Yu and Liu \\cite{Yu04} defined the optimal feature set as composed of all strongly relevant features and the weakly relevant but not redundant features. 
In this section we review the definitions of the optimal feature subset from the viewpoint of filter methods, in particular MI feature selection methods. The key notion is conditional independence, which allows defining the sufficient feature subset as follows \\cite{Bell00,GuyonFE}:\n\\begin{definition}\n$S\\subseteq F$ is a sufficient feature subset iff\n\\begin{equation}\np(C|F)=p(C|S).\\label{eq:SFS}\n\\end{equation}\n\\end{definition}\n\nThis definition implies that $C$ and $\\neg S$ are conditionally independent, i.e., $\\neg S$ provides no additional information about $C$ in the context of $S$. However, we still need a search strategy to select the feature subset $S$, and an exhaustive search using this criterion is impractical due to the curse of dimensionality.\n\nIn probability the measure of sufficient feature subset can be expressed as the expected value over $p(F)$ of the Kullback-Leibler divergence between $p(C|F)$ and $p(C|S)$ \\cite{Koller96}. According to Guyon \\textit{et al.} \\cite{GuyonFE}, this can be expressed in terms of MI as follows:\n\\begin{equation}\nDMI(S)=I(F;C)-I(S;C).\\label{eq:SFS}\n\\end{equation}\n\nGuyon \\textit{et al.} \\cite{GuyonFE} proposed solving the following optimization problem:\n\\begin{equation}\n\\min_{S\\subseteq F}|S|+\\lambda\\cdot DMI(S),\\label{eq:OSF}\n\\end{equation}\nwhere $\\lambda>0$ represents the Lagrange multiplier. If $S$ is a sufficient feature subset, then $DMI(S)=0$, and eq. (\\ref{eq:OSF}) is reduced to $\\min_{S\\subseteq F}|S|$. Since $I(F;C)$ is constant, eq. (\\ref{eq:OSF}) is equivalent to:\n\\begin{equation}\n\\min_{S\\subseteq F}|S|-\\lambda\\cdot I(S;C).\\label{eq:OFS_MI}\n\\end{equation}\n\nThe feature selection problem corresponds to finding the smallest feature subset that maximizes $I(S;C)$. Since the term $\\min_{S\\subseteq F}|S|$ is discrete, the optimization of (\\ref{eq:OFS_MI}) is difficult. Tishby \\textit{et al.} \\cite{Tishby99} proposed replacing the term $\\min_{S\\subseteq F}|S|$ with $I(F;S)$.\n\nAn alternative approach to optimal feature subset selection is using the concept of the Markov blanket (MB). Remember that the Markov blanket, $M$, of a target variable $C$, is the smallest subset of $F$ such that $C$ is independent of the rest of the variables $F\\backslash M$. Koller and Sahami \\cite{Koller96} proposed using MBs as the basis for feature elimination. They proved that features eliminated sequentially based on this criterion remain unnecessary. However, the time needed for inducing an MB grows exponentially with the size of this set, when considering full dependencies. Therefore most MB algorithms implement approximations based on heuristics, e.g. finding the set of $k$ features that are strongly correlated with a given feature \\cite{Koller96}. Fast MB discovery algorithms have been developed for the case of distributions that are faithful to a Bayesian Network \\cite{Tsamardinos03c,Tsamardinos03b}. However, these algorithms require that the optimal feature subset does not contain multivariate associations among variables, which are individually irrelevant but become relevant in the context of others \\cite{BrownyTsamardinos}. In practice, this means for example that current MB discovery algorithms cannot solve Example 1 due to the XOR function.\n\nAn important caveat is that both feature selection approaches, sufficient feature subset and MBs, are based on estimating the probability distribution of $C$ given the data. 
Estimating posterior probabilities is a harder problem than classification, e.g. in using a $0\\backslash 1$-loss function only the most probable classification is needed. Therefore, this effect may render some features contained in sufficient feature subset or in the MB of $C$ unnecessary \\cite{Tsamardinos03c,Torkkola,GuyonFE}.\n\n\\subsection{Relation between MI and Bayes error classification}\n\nThere are some interesting results relating the MI between a random discrete variable $f$ and a random discrete target variable $C$, with the minimum error obtained by maximum a posteriori classifier (Bayer classification error) \\cite{Hellman70,Cover06,Feder94}. The Bayes error is bounded above and below according to the following expression:\n\\begin{equation}\n1-\\frac{I(f;C)+\\log(2)}{\\log(|C|)}\\leq e_{bayes}(f)\\leqslant\\frac{1}{2}\\left(H(C)-I(f;C)\\right).\\label{eq:IM_lim_inf}\n\\end{equation}\nInterestingly, Eq. (\\ref{eq:IM_lim_inf}) shows that both limits are minimized when the MI, $I(f;C)$, is maximized.\n\n\n\\subsection{Search strategies}\n\\label{search}\nAccording to Guyon \\textit{et al.} \\cite{GuyonFE}, a feature selection method has three components: 1) Evaluation criterion definition, e.g. relevance for filter methods, 2) evaluation criterion estimation, e.g. sufficient feature selection or MB for filter methods, and 3) search strategies for feature subset generation. In this section, we briefly review the main search strategies used by MI feature selection methods. Given a feature set $F$ of cardinality $m$, there are $2^{m}$ possible subsets, therefore an exhaustive search is impractical for high-dimensional datasets. \n\nThere are two basic search strategies: optimal methods and sub-optimal methods \\cite{Webb02}. Optimal search strategies include exhaustive search and accelerated methods based on the monotonic property of a feature selection criterion, such as branch and bound. But optimal methods are impractical for high-dimensional datasets, therefore sub-optimal strategies must be used.\n\nMost popular search methods are sequential forward selection (SFS) \\cite{Whitney1971} and sequential backward elimination (SBE) \\cite{Marill193}. Sequential forward selection is a bottom-up search, which starts with an empty set, and adds new features one at a time. Formally, it adds the candidate feature $f_i$ that maximizes $I(S;C)$ to the subset of selected features $S$, i.e.,\n\\begin{equation}\nS=S\\cup\\{\\underset{f_{i}\\in F\\backslash S}{\\arg\\,\\max}(I(\\{S,f_{i}\\};C))\\}.\n\\end{equation}\n\nSequential backward elimination is a top-down approach, which starts with the whole set of features, and deletes one feature at a time. Formally, it starts with $S=F$, and proceeds deleting the less informative features one at a time, i.e,\n\\begin{equation}\nS=S\\backslash\\{\\underset{f_{i}\\in S}{\\arg\\,\\min}(I(\\{S\\backslash f_{i}\\};C)\\}.\n\\end{equation}\nUsually backward elimination is computationally more expensive than forward selection, e.g. when searching for a small subset of features. However, backward elimination can usually find better feature subsets, because most forward selection methods do not take into account the relevance of variables in the context of features not yet included in the subset of selected features \\cite{Guyon03}. 
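As a concrete illustration of the forward recursion above, the sketch below implements greedy SFS with a plug-in estimate of $I(\{S,f_{i}\};C)$ for discrete features (Python/NumPy; the estimator, names and toy data are our own illustrative choices and make no claim of statistical efficiency).
\begin{verbatim}
import numpy as np

def entropy(Z):
    _, counts = np.unique(Z, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mi_subset(X_S, y):
    # plug-in estimate of I(S;C) = H(S) + H(C) - H(S,C) for discrete data
    y = y.reshape(-1, 1)
    return entropy(X_S) + entropy(y) - entropy(np.column_stack([X_S, y]))

def forward_selection(X, y, k):
    # at each step add the candidate feature maximizing I({S, f_i}; C)
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = [mi_subset(X[:, selected + [f]], y) for f in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best); remaining.remove(best)
    return selected

# toy data: C = AND(x0, x1); x2 duplicates x0 (redundant); x3 is pure noise
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(4000, 4))
X[:, 2] = X[:, 0]
y = X[:, 0] & X[:, 1]
print(forward_selection(X, y, k=2))
# returns one feature from {x0, x2} together with x1; the redundant duplicate
# and the noise feature are never both retained
\end{verbatim}
Backward elimination can be sketched analogously by repeatedly removing the feature whose deletion least reduces the estimated mutual information.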
Both kinds of searching methods suffer from the nested effect, meaning that in forward selection a variable cannot be deleted from the feature set once it has been added, and in backward selection a variable cannot be reincorporated once it has been deleted. Instead of adding a single feature at a time, some generalized forward selection variants add several features, to take into account the statistical relationship between variables \\cite{Webb02}. Likewise, the generalized backward elimination deletes several variables at a time. An enhancement may be obtained by combining forward and backward selection, avoiding the nested effect. The strategy ``plus-l-take-away-r'' \\cite{Stearns1976} adds to $S$ $l$ features and then removes the worst $r$ features if $l>r$, or deletes $r$ features and then adds $l$ features if $r0$. The authors considered a Naive Bayes classifier, which assumes independence between variables.\n\nEq. (\\ref{eq:CMIM}) allows deriving the Conditional Mutual Information Maximization (CMIM) criterion \\cite{Fleuret04}, when we consider only the first term on the right hand side of this equation and replace the mean operator with a minimum operator. CMIM discards the second term on the right hand side of eq.(\\ref{eq:CMIM}) completely, taking into account only one-to-one relationships among variables and neglecting the multi-information among $f_{i}, \\neg s_{j}$ and $C$ in the context of $s_j$ $\\forall j$. On the other hand, CMIM-2 \\cite{Vergara2010} criterion corresponds exactly to the first term on the right hand side of eq. (\\ref{eq:CMIM}). These methods are able to detect pairs of relevant variables that act complementarily in predicting the class. In general CMIM-2 outperformed CMIM in experiments using artificial and benchmark datasets \\cite{Vergara2010}.\n\nSo far we have reviewed feature selection approaches that avoid estimating MI in high-dimensional spaces.\nBonev \\textit{et al.} \\cite{Bonev08} proposed an extension of the MD criterion, called Max-min-Dependence (MmD), which is defined as follows:\n\\begin{equation}\nJ_{MmD}(f_{i})=I(\\{f_{i},S\\};C)-I(\\neg \\{f_{i},S\\};C).\n\\end{equation}\nThe procedure starts with the empty set $S=\\emptyset$ and sequentially generates $S^{t+1}$ as:\n\\begin{equation}\nS^{t+1}=S^{t}\\cup\\underset{f_{i}\\in F\\backslash S}{\\max}\\left(J_{MmD}(f_{i})\\right).\n\\end{equation}\nThe MmD criterion is heuristic, and is not derived from a principled approach. However, Bonev \\textit{et al.} \\cite{Bonev08} were one of the first in selecting variables estimating MI in high-dimensional spaces \\cite{Hero99}, which allows using set of variables instead of individual variables. Chow and Huang \\cite{Chow05} proposed combining a pruned Parzen window estimator with quadratic mutual information \\cite{Principe}, using Renyi entropies, to estimate directly the MI between the feature subset $S^{t}$ and the classes $C$, $I(S^{t};C)$, in an effective and efficient way.\n\n\\section{Open Problems}\nIn this section we present some open problems and challenges in the field of feature selection, in particular from the point of view of information theoretic methods. Here can be found a non-exhaustive list of open problems or challenges.\n\n\\begin{enumerate}\n\\item \\textbf{Further developing a unifying framework for information theoretic feature selection.}\nAs we reviewed in section \\ref{UF}, a unifying framework able to explain the advantages and limitations of successful heuristics has been proposed. 
This theoretical framework should be further developed in order to derive new efficient feature selection algorithms that include in their functional terms information related to the three types of features: relevant, redundant and complementary. A stronger connection between this framework and the Markov blanket is also needed. Developing hybrid methods that combine maximal dependency with minimal conditional mutual information is another possibility.\n\n\\item \\textbf{Further improving the efficacy and efficiency of information theoretic feature selection methods in high-dimensional spaces.}\nThe computational time depends on the search strategy and the evaluation criterion \\cite{GuyonFE}. As we enter the era of Big Data, there is an urgent need for developing very fast feature selection methods able to work with millions of features and billions of samples. An important challenge is developing more efficient methods for estimating MI in high-dimensional spaces. Automatically determining the optimal size of the feature subset is also of interest; many feature selection methods do not have a stopping criterion. Developing new search strategies that go beyond greedy optimization is another interesting possibility.\n\n\\item \\textbf{Further investigating the relationship between mutual information and Bayes error classification.}\nSo far, lower and upper bounds on the classification error have been found for the case of a single random variable and the target class. Extending these results to the case of mutual information between feature subsets and the target class is an interesting open problem.\n\n\\item \\textbf{Further investigating the effect of a finite sample on the statistical criteria employed and on MI estimation.}\nGuyon \\textit{et al.} \\cite{GuyonFE} argued that feature subsets that are not sufficient may yield better performance than sufficient feature subsets. For example, in the bio-informatics domain, it is common to have very large input dimensionality and small sample size \\cite{Saeys07}.\n\n\\item \\textbf{Further developing a framework for studying the relation between feature selection and causal discovery.}\nGuyon \\textit{et al.} \\cite{GuyonCFS} investigated causal feature selection. The authors argued that the knowledge of causal relationships can benefit feature selection and vice versa. A challenge is to develop efficient Markov blanket induction algorithms for non-faithful distributions.\n\n\\item \\textbf{Developing new criteria of statistical dependence beyond correlation and MI.}\nSeth and Principe \\cite{Seth2010} revised the postulates of measuring dependence according to Renyi, in the context of feature selection. An important topic is normalization, because a measure of dependence defined on different kinds of random variables should be comparable. There is no standard theory about MI normalization \\cite{Estevez09,Duch2006}. Another problem is that estimators of dependence measures should behave well even when only a few realizations are available, in the sense of preserving the desired properties of these measures. Seth and Principe \\cite{Seth2010} argued that this property is not satisfied by MI estimators, because they do not reach the maximum value under strict dependence, and are not invariant to one-to-one transformations.\n\\end{enumerate}\n\n\n\\section{Conclusions}\nWe have presented a review of the state-of-the-art in information theoretic feature selection methods. 
We showed that modern feature selection methods must go beyond the concepts of relevance and redundance to include complementarity (synergy). In particular, new feature selection methods that assess features in context are necessary. Recently, a unifying framework has been proposed, which is able to retrofit successful heuristic criteria. In this work, we have further developed this framework, presenting some new results and derivations. The unifying theoretical framework allows us to indicate the approximations made by each method, and therefore their limitations. A number of open problems in the field are suggested as challenges for the avid reader.\n\n\\section{Acknowledgement}\nThis work was funded by CONICYT-CHILE under grant FONDECYT 1110701.\n\n\\bibliographystyle{spbasic}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nEnd-to-end (E2E) systems have become dominant in automatic speech recognition (ASR) because of their simplicity and better performance. \nIn addition to new advancements in model architectures \\cite{Han2020-CIC,Gulati2020-CCA,Weninger2021-DEA},\none of the major efforts is to make E2E ASR systems support online streaming applications with strict latency requirements. \nThe Recurrent Neural Network Transducer (RNN-T) has been a favorite E2E model architecture for online streaming because of its time-synchronous processing of input audio and superior performance over the CTC model \\cite{Graves2012-STW,Battenberg2017-ENT,Prabhavalkar2017-ACO}.\n\nThere have been significant improvements to RNN-T since it was proposed in \\cite{Graves2012-STW}, such as replacing the LSTM\/BLSTM encoder with Transformer \\cite{Yeh2019-TTE,Zhang2020-TTA}, Conformer \\cite{Gulati2020-CCA}, and ContextNet \\cite{Han2020-CIC}.\nA major difference between online streaming and offline batch-mode E2E model is that the former is subject to strict and often application-dependent latency constraints. \nTo reduce the deterministic latency incurred during inference, an online ASR system is only allowed to access limited future context.\nSince many popular E2E ASR systems are based on bidirectional long-range context modeling (BLSTM, Transformer, etc.) in the encoder, this is the primary reason that online E2E ASR systems generally underperform their offline counterparts. \nThe degradation in accuracy is largely determined by the accessed amount of future context.\nThere has been extensive research in effectively utilizing future context with limited latency for improving online E2E model performance \\cite{Yeh2019-TTE,Zhang2020-TTA,Li2020-OTC,Li2021-ABA,ChunkedMicrosoft,EMFormer}.\n\nFrom a deployment efficiency point of view, it is beneficial to have a single model able to serve multiple different applications: from offline batch-mode to online streaming under different latency requirements.\nUnfortunately, a model trained for the offline use case generally does not perform well in the online use case and vice versa. \nTherefore, there is a direction of research towards making a single model suitable for multiple use cases with different latency requirements \\cite{Tripathi2020-TTO,Gao2020-UAU,Audhkhasi2021-MMA,Yu2021-DMA,Kim2021-MMT}.\n\n\\textbf{Contributions of our paper:} We extend the dual-mode ASR work in \\cite{Yu2021-DMA} in several aspects that were not covered there or in similar works \\cite{Audhkhasi2021-MMA,Kim2021-MMT}:\n1. 
Comprehensive evaluation of different online streaming approaches (i.e., autoregressive and chunked attention) based on Conformer Transducer in dual-mode training on two very different data sets. \n2. Evaluation of different distillation approaches for offline-to-online distillation in dual-mode training and the importance of modeling the output shift between offline and online modes.\n3. Propose a dual-mode model trained with shared convolution (i.e., causal convolution) and normalization layers across modes.\n\n\n\\section{Methodology}\n\n\n\n\n\n\\subsection{Dual-mode Conformer Transducer}\n\nIn our paper, we use end-to-end ASR systems based on the Conformer Transducer (Conf-T) architecture, which combines the concept of the recurrent neural network transducer (RNN-T) \\cite{Graves2012-STW} with the Conformer encoder \\cite{Zhang2020-TTA,Gulati2020-CCA}.\nEach Conformer encoder block consists of feedforward, multi-head self-attention (MHSA) \\cite{Vaswani2017-AIA}, and convolution layers.\nFollowing the dual-mode approach \\cite{Yu2021-DMA}, a single Conformer Transducer model can operate in both online and offline mode.\nIn online mode, the outputs of convolutions and attention layers are calculated by masking the weights corresponding to future frames.\nMoreover, online and offline mode use different sets of (batch \/ layer) normalization parameters (running average statistics and scales \/ offsets). \nFor the convolutions, the alternative approach proposed in our paper is to simply use causal (left) padding everywhere.\n\nOffline and online Transducer outputs are calculated as $z_{\\text{on}} = M_{\\text{on}}^{\\theta^\\text{on}}(x)$, $z_{\\text{off}} = M_{\\text{off}}^{\\theta^\\text{off}}(x) \\in [0,1]^{T \\times U \\times K}$, where $M$ is the model, $\\theta^{\\text{on}} = [ \\theta; \\nu^\\text{on} ]$, $\\theta^{\\text{off}} = [ \\theta; \\nu^\\text{off} ]$ are the online and offline model weights, $\\nu^\\text{on}$, $\\nu^\\text{off}$ are the corresponding normalization parameters, and $T$, $U$, $K$ denote \\# frames, \\# tokens, and vocabulary size.\nIn dual-mode training \\cite{Yu2021-DMA}, the offline and online mode are trained jointly while knowledge transfer is done via in-place distillation from the offline to the online mode. \nMore precisely, the following loss is minimized:\n\\begin{equation}\n \n \\mathcal{L} = \\alpha \\mathcal{L}_\\text{trd}(y^*,z_\\text{on}) + \\beta \\mathcal{L}_\\text{trd}(y^*,z_\\text{off}) + \\gamma \\mathcal{L}_\\text{dist}(z_\\text{off},z_\\text{on}) ,\n \\label{eq:loss}\n\\end{equation}\nwhere $\\mathcal{L}_\\text{trd}$ is the transducer loss \\cite{Graves2012-STW}, $\\mathcal{L}_\\text{dist}$ is a distillation loss (cf.\\ Section \\ref{sec:dist}), \n$y^*$ are the training labels, and $\\alpha, \\beta, \\gamma \\geq 0$ are hyperparameters.\n\n\\subsection{Chunked Attention}\n\n\\label{sec:chunked_attn}\n\nTo adapt Transformer-like architectures for the streaming use case, the key part to consider is the MHSA block.\nThe MHSA can be made strictly online by using autoregressive attention \\cite{Vaswani2017-AIA}, i.e., every frame in the encoder can only attend to previous frames.\nThis constraint is efficiently implemented by adding $-\\infty$ (or a large negative value) to the attention logits at the `invalid' positions.\nThere are several ways to modify this under specified latency requirements, such as truncated lookahead~\\cite{Zhang2020-TTA} or contextual lookahead~\\cite{EMFormer}. 
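To make the masking concrete, the following sketch (our own illustration with an assumed chunk-size parameter, not code from the paper) builds the additive logit masks for the autoregressive, chunked (described below) and full-context attention patterns; the comparison between the truncated and contextual lookahead variants resumes after the sketch.
\\begin{verbatim}
# Illustrative sketch: additive attention masks implementing the
# "large negative value at invalid positions" recipe described above.
# T is the number of encoder frames; `chunk` is an assumed parameter.
import numpy as np

NEG_INF = -1e9  # stands in for -infinity in the attention logits

def autoregressive_mask(T):
    """Each frame may attend to itself and to previous frames only."""
    return np.where(np.tril(np.ones((T, T))) > 0, 0.0, NEG_INF)

def chunked_mask(T, chunk):
    """Each frame may attend to all frames of its own and previous chunks."""
    ids = np.arange(T) // chunk
    return np.where(ids[:, None] >= ids[None, :], 0.0, NEG_INF)

def full_context_mask(T):
    """Offline mode: nothing is masked."""
    return np.zeros((T, T))

# The chosen mask is simply added to the attention logits before the softmax:
#   attention = softmax(Q @ K.T / sqrt(d) + mask)
\\end{verbatim}
Switching between online and offline inference then amounts to swapping the mask (and, in the dual-mode setup above, the normalization parameters), which is what allows a single set of weights to serve both modes.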
\nWhile the first approach builds lookahead that accumulates through layers into a larger overall lookahead of the encoder, the latter increases the computational cost at inference by overlapping the input audio chunks. \nIn this work, we focus on the chunked attention approach~\\cite{ChunkedMicrosoft}, where we divide the input audio into non-overlapping chunks.\nFor each encoder input chunk, the MHSA query at each position uses as memory (key and values) all the other positions that belong to the same chunk or previous chunks.\nFigure~\\ref{fig:dual_mode_conformer_chunked_attn} shows the dual-mode Conformer encoder with chunked attention mask.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{Conf.pdf}\n \\caption{Schema of the dual-mode Conformer encoder with chunked\/global attention mask (added to logits), causal\/non-causal convolutions and dual-mode normalization with parameters $\\nu^{(\\cdot)}_{(\\cdot)}$. Dashed lines indicate optional components.\n \n }\n \\label{fig:dual_mode_conformer_chunked_attn}\n\\end{figure}\n\n\\subsection{Distillation}\n\n\\label{sec:dist}\n\nIn addition to architectural advancements and various approaches for effectively leveraging limited future context, knowledge distillation from offline to online model is another way of improving online model performance \\cite{Kurata2020-KDF}. \nWe compare the approaches proposed in \\cite{Panchapagesan2021-EKD, Yang2021-KDF} in the context of in-place distillation in dual-mode training.\n\nKnowledge distillation by using the Kullback-Leibler divergence (KLD) loss \\cite{Yu2013-KDR,Weninger2019-LAS} directly is inefficient for Transducers due to the large size of the output lattice.\nThe efficient KLD \\cite{Panchapagesan2021-EKD} and 1-best distillation \\cite{Yang2021-KDF} losses address this issue by restricting the calculation to a reduced output lattice.\nMoreover, in order to make these approaches work for offline-to-online distillation, we consider the potential emission delay between online and offline model via a tunable shift parameter $\\tau$, similar to \\cite{Yu2021-DMA,Yang2021-KDF}.\n\nThe efficient KLD loss \\cite{Panchapagesan2021-EKD} collapses the probability distribution of the tokens as follows:\n\\begin{equation}\n \n \n \n \n \n \\mathcal{L}_\\text{dist}^\\text{eff}\n = \\sum_{t,u} \\sum_{l \\in \\{y, \\varnothing, r\\}} p_\\text{off}(l | t,u) \\log \\frac{p_\\text{off}(l | t,u)}{p_\\text{on}(l | t-\\tau,u)} ,\n\\end{equation}\nwhere $y$, $\\varnothing$ and $r$ denote the correct, the blank, and all other labels, $p_{(\\cdot)}$ denotes the probability obtained from the Transducer output $z_{(\\cdot)}$, and $t$ and $u$ denote time frame and token indices.\n\nConversely, the 1-best distillation loss \\cite{Yang2021-KDF} takes into account the full probability distribution, but only along the 1-best path in the teacher lattice.\nWe extend this approach to in-place distillation by regenerating the 1-best path of the offline model on-the-fly in each training step. 
\nFor consistency, we also use KLD, not cross-entropy as in \\cite{Yang2021-KDF}:\n\\begin{equation}\n \n \n \n \\mathcal{L}_\\text{dist}^\\text{1-best}\n = \\sum_{(t,u) \\in \\text{1-best}} \\sum_{k=1}^K p_\\text{off}(k | t, u) \\log \\frac{p_\\text{off}(k | t, u)}{p_\\text{on}(k | t-\\tau, u)} ,\n\\end{equation}\nwhere $k$ is the index of a symbol in the vocabulary.\n\n\n\\section{Experiments and Results}\n\n\\subsection{Librispeech Data}\n\n\\subsubsection{Training recipe}\nWe first perform a comparative evaluation using the 100 hour training subset of the Librispeech \\cite{Panayotov2015-LAA} corpus.\nSpeed perturbation \\cite{Ko2015-AAF} with factors 0.9, 1.0 and 1.1 and SpecAugment \\cite{Park2019-SAA} are applied to improve generalization.\nThe topology of the Conformer Transducer and the training recipe are similar to \\cite{Higuchi2021-ACS}. \nThe encoder consists of a feature frontend that extracts 80-dimensional log-Mel features, two convolutional layers that perform downsampling on the time axis by a factor of 4, and 18 Conformer blocks with hidden dimension 256 and feed-forward dimension 1024. \nThe prediction network has a single LSTM layer with 256 hidden units, and the joint network has 256 units.\nThe vocabulary contains 30 characters.\nModels are trained for 300 epochs.\nThe training hyperparameters (especially learning rate schedule) were tuned for the offline model using a limited grid search on the clean development set of Librispeech, then applied to all other models (online and dual-mode) without further tuning.\nFor dual-mode training, the online and offline losses are weighted equally ($\\alpha = \\beta = 0.5$) and the distillation weight is set to $\\gamma=0$ (no distillation) or $\\gamma=0.01$.\nWe measure the word error rate (WER) on the `clean' and `other' test set of Librispeech.\nDecoding is done by beam search with beam size 8, without using an external language model.\n\n\\subsubsection{Online and offline baselines}\n\n\\begin{table}[t]\n \\caption{Single-mode baselines on Librispeech 100h (LA: lookahead).}\n \\label{tab:results_ls100_single_mode}\n \\centering\n \\begin{tabular}{l|l|cc}\n \\bf Mode & \\bf Attention & \\multicolumn{2}{c}{\\bf Test WER [\\%]} \\\\\n & & \\bf cln & \\bf other \\\\\n \\hline\n Online & Autoregressive & 9.6 & 26.9 \\\\\n Online & Autoreg.\\ LA & 8.4 & 24.8 \\\\\n Online & Chunked & 7.9 & 23.4 \\\\\n \\hline\n Offline (causal conv) & Full context & 6.3 & 18.4 \\\\\n \\quad (non-causal conv) & Full context & 6.3 & 18.3 \\\\\n Offline \\cite{Higuchi2021-ACS} & Full context & 6.8 & 18.9 \\\\\n \\end{tabular}\n\\end{table}\n\nThe results of our single-mode baselines are shown in \\tablename~\\ref{tab:results_ls100_single_mode}.\nOur offline Conformer Transducer system outperforms the reference result obtained by ESPnet \\cite{Higuchi2021-ACS}.\nWe also investigated the usage of causal (left padded) 1-D depthwise convolutions in the Conformer blocks in the offline model. \nThe WER was similar to the standard non-causal (centered) convolutions. 
\nHence, we chose to apply causal convolutions for offline mode as well, thereby simplifying the implementation compared to the original dual-mode Conf-T \\cite{Yu2021-DMA}.\n\nFor the online systems, we compare autoregressive attention, autoregressive attention with 12 frames ($\\approx$ 0.5 seconds) lookahead in the 9th encoder layer\\footnote{We did not observe significant performance differences when putting the lookahead in another encoder layer or distributing it across multiple encoder layers.}, and chunked attention (see Section \\ref{sec:chunked_attn}) with a chunk size of 25 frames ($\\approx$ 1 second).\nUsing autoregressive attention leads to a drastic WER increase compared to the offline model (52\\,\\% relative).\nHowever, the relative WER increase is still much smaller than the one reported in \\cite{Yu2021-DMA}, suggesting that our online baseline is competitive.\nAs expected, the lookahead reduces the gap between online and offline WER significantly.\nFurthermore, despite having the same average lookahead of about 0.5 seconds, the chunked attention performs better than the autoregressive attention with lookahead (6\\,\\% WER reduction (WERR)).\n\n\\begin{table}[t]\n \\caption{Librispeech 100h task: WER obtained by dual-mode systems in online and offline inference with and without efficient KLD distillation (loss weight $\\gamma$, shift $\\tau$).}\n \\label{tab:results_ls100_dual_mode}\n \\centering\n \\begin{tabular}{l|c|c|cc|cc}\n \\bf Online att. & \\bf $\\gamma$ & $\\tau$ & \\multicolumn{4}{c}{\\bf Test WER [\\%]} \\\\\n & & & \\multicolumn{2}{c|}{\\bf Online} & \\multicolumn{2}{c}{\\bf Offline} \\\\\n & & & \\bf cln & \\bf other & \\bf cln & \\bf other \\\\\n \\hline\n Autoreg. & 0.0 & -- & 9.0 & 25.2 & 7.2 & 21.8 \\\\\n Autoreg. & 0.01 & 0 & 9.0 & 25.8 & 7.2 & 21.2 \\\\\n Autoreg. 
& 0.01 & -6 & 8.4 & 24.2 & 7.0 & 20.6 \\\\\n \\hline\n Autoreg.\\ LA & 0.0 & -- & 7.7 & 23.1 & 6.8 & 19.9 \\\\\n Autoreg.\\ LA & 0.01 & 0 & 7.7 & 22.6 & 7.0 & 19.8 \\\\\n Autoreg.\\ LA & 0.01 & -6 & 7.5 & 22.2 & 6.7 & 19.8 \\\\\n \\hline\n Chunked & 0.0 & -- & 7.4 & 22.0 & 6.4 & 19.2 \\\\\n Chunked & 0.01 & 0 & 7.7 & 22.4 & 6.4 & 19.2 \\\\\n Chunked & 0.01 & -6 & \\bf 7.1 & 21.5 & \\bf 6.1 & 18.9 \\\\\n \\end{tabular}\n\\end{table}\n\n\n\\subsubsection{Dual-mode systems}\n\n\\tablename~\\ref{tab:results_ls100_dual_mode} shows the results obtained by dual-mode training for various types of online attention.\nCompared to the online baselines in \\tablename~\\ref{tab:results_ls100_single_mode}, the WER in online mode is improved by dual-mode training in all cases (6\\,\\%, 8\\,\\% and 6\\,\\% WERR on test\\_clean for autoregressive, autoregressive with lookahead and chunked attention, respectively).\nFurthermore, we observe that there is a consistent gain in online performance from using distillation with shift $\\tau=-6$, but no gain from distillation without shift.\nStill, the gain from distillation is diminished when lookahead is used, likely because this brings the online performance closer to the offline model and reduces the benefit of knowledge distillation.\nThe dual-mode system using autoregressive attention in the online mode improves on the WER of the corresponding single-mode online system by 12\\,\\% relative.\nConversely, the offline performance is degraded by 11\\,\\% relative.\nThe trend is similar for the autoregressive attention with lookahead, despite overall better performance.\nIn contrast, using chunked attention in the online mode avoids the degradation in offline mode, and the corresponding\ndual-mode system performs better than the single-mode baselines in both offline and online mode, achieving 10\\,\\% and 3\\,\\% relative WERR, respectively.\nThis is likely because the lookahead for a given frame is not constant in chunked attention,\nwhich makes the online prediction task more similar to the offline one and thus facilitates joint training.\n\nWe also investigate the impact of the shift $\\tau$ between offline teacher and online student in the in-place distillation with both efficient distillation and 1-best distillation.\nAs can be seen from \\figurename~\\ref{fig:ls100_shift}, gains from distillation can be achieved only with fairly large shifts (e.g., $\\tau=-6$ corresponds to a $\\approx$ 240\\,ms emission delay), which is consistent with the findings in \\cite{Yang2021-KDF}, while results are unstable for small shifts (in fact, the experiments with $\\tau=-2$ diverged).\nThe best result with efficient distillation is achieved at $\\tau=-6$ (8.4\\,\\% WER), whereas the 1-best distillation performs best with $\\tau=-8$.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{LS100_shift.pdf}\n \\caption{Dual-mode WER on Librispeech 100h task with efficient and 1-best distillation, varying the shift parameter $\\tau$.}\n \\label{fig:ls100_shift}\n\\end{figure}\n\n\\subsection{Medical Data}\n\nAdditionally, we conduct experiments on an internal data set which consists of conversational speech data in the medical domain (doctor-patient conversations).\nThe experiments are based on a training set of 1\\,k hours manually end-pointed and transcribed speech covering various medical specialties.\nWe measure WER on a speaker-independent test set consisting of 263\\,k words.\n\nThe model topology and training recipe are similar to the one used for 
Librispeech.\nIn the encoder, we use 16 Conformer blocks with hidden dimension 512 and feed-forward dimension 1024 after the frontend.\nThe prediction network consists of a single Transformer layer with the same dimensions, and the joint network has 512 units.\nThe vocabulary contains 2\\,k word-pieces.\n\n\\begin{table}[t]\n  \\caption{Single-mode baselines on medical conversation data}\n  \\label{tab:results_dax_single_mode}\n  \\centering\n  \\begin{tabular}{l|l|c}\n    \\bf Mode & \\bf Attention & \\bf WER [\\%] \\\\\n    \\hline\n    Online & Autoreg.\\ LA & 14.7 \\\\\n    Online & Chunked & 14.4 \\\\\n    \\hline\n    Offline (causal conv) & Full context & 13.1 \\\\\n    \\quad (non-causal conv) & Full context & 13.2 \\\\\n  \\end{tabular}\n\\end{table}\n\n\\tablename~\\ref{tab:results_dax_single_mode} shows the single-mode baselines.\nFor the online systems, we compare autoregressive attention with lookahead (12 frames) and chunked attention (24 frames).\nUnlike on Librispeech, a pure autoregressive model (without any lookahead) did not yield satisfactory performance.\nThe chunked attention improves the WER of the online system by 2\\,\\% relative compared to the autoregressive attention with 12 frames lookahead.\nStill, there remains a gap of about 9\\,\\% relative WER between the online and the offline system.\n\n\\begin{table}[th]\n  \\caption{Medical conversation data: WER obtained by dual-mode systems in online and offline inference with and without efficient distillation (loss weight $\\gamma$, shift $\\tau$).}\n  \\label{tab:results_dax_dual_mode}\n  \\centering\n  \\begin{tabular}{l|c|c|cc}\n    \\bf Online att. & \\bf $\\gamma$ & $\\tau$ & \\multicolumn{2}{c}{\\bf WER [\\%]} \\\\\n    & & & \\bf Online & \\bf Offline \\\\\n    \\hline\n    Autoreg.\\ LA & 0.0 & -- & 14.2 & 13.3 \\\\\n    Autoreg.\\ LA & 0.01 & 0 & 14.2 & 13.3 \\\\\n    Autoreg.\\ LA & 0.01 & -6 & 14.1 & 13.2 \\\\\n    \\hline\n    Chunked & 0.0 & -- & \\bf 13.7 & \\bf 12.9 \\\\\n    Chunked & 0.01 & 0 & \\bf 13.7 & \\bf 12.9 \\\\\n    Chunked & 0.01 & -6 & 13.8 & 13.0 \\\\\n  \\end{tabular}\n\\end{table}\n\n\\tablename~\\ref{tab:results_dax_dual_mode} shows the results obtained by dual-mode training.\nWe use the efficient KLD loss whenever $\\gamma > 0$.\nAs in the Librispeech scenario, using chunked attention in the online mode helps improve both online and offline performance. The dual-mode system with chunked attention obtains 3.7\\,\\% \/ 3.1\\,\\% relative WERR compared to the one using autoregressive attention with lookahead, and 5.4\\,\\% \/ 1.6\\,\\% with respect to the corresponding single-mode online \/ offline system.\nHowever, unlike on Librispeech, we do not observe any gain from distillation, even with $\\tau=-6$.\nOne possible reason is that the performance difference between the online and offline models on the medical data set is small (about 9\\,\\% relative, see \\tablename~\\ref{tab:results_dax_single_mode}) compared to that on the Librispeech data set (23\\,\\% relative, see \\tablename~\\ref{tab:results_ls100_single_mode}).\n\n\\subsection{Emission Timing}\n\n\\figurename~\\ref{fig:dax_latency} shows the time delay of transcriptions produced by our dual-mode models, with respect to a single-mode reference.\nWe compute this delay as the average time difference over all matched pairs of correctly recognized words. For each word we consider the time of the emitting frame of its last word-piece in the RNN-T output alignment. 
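A minimal sketch of this delay computation follows (our own illustration, not the evaluation code used for the paper: the word matching, the frame duration and the optional chunk rounding, which is explained right after the sketch, are simplifying assumptions).
\\begin{verbatim}
# Illustrative sketch: average emission-time difference over words shared by
# two system outputs.  `times_*` hold the emission frame of the last
# word-piece of each word; `frame_dur` (seconds per encoder output frame)
# and `chunk` (frames per chunk) are assumed parameters.
import difflib

def emission_delay(words_a, times_a, words_b, times_b,
                   frame_dur=0.04, chunk=None):
    if chunk is not None:  # round frame indices up to the end of their chunk
        times_a = [(t // chunk + 1) * chunk for t in times_a]
        times_b = [(t // chunk + 1) * chunk for t in times_b]
    matcher = difflib.SequenceMatcher(a=words_a, b=words_b, autojunk=False)
    diffs = [times_a[m.a + k] - times_b[m.b + k]
             for m in matcher.get_matching_blocks()
             for k in range(m.size)]
    return frame_dur * sum(diffs) / max(len(diffs), 1)
\\end{verbatim}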
If the encoder is chunked, times are rounded up to the end of their corresponding encoder chunks.\nThe absolute emission delay is similar between autoregressive attention with lookahead and chunked attention models.\n\nAs can be seen in \\figurename~\\ref{fig:dax_latency}, dual-mode models typically emit faster than the single-mode ones.\nWhen the dual-mode model is trained without distillation, we observe a slightly lower delay in the chunked configuration compared with the autoregressive one, while a significant improvement in terms of emission delay ($\\approx$ 110\\,ms) is evident in both cases when the distillation is enabled.\nHowever, the latency gain from distillation vanishes when the teacher targets are shifted with a negative $\\tau$ value, because this configuration encourages later emission.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=.9\\linewidth]{Latency.pdf}\n \\caption{Emission delay of dual-mode systems vs.\\ single-mode online reference measured on medical data (lower means earlier).}\n \\label{fig:dax_latency}\n\\end{figure}\n\n\n\n\\subsection{Effect of Dual-mode Normalization and Joint Training}\n\nIn \\tablename~\\ref{tab:results_ls100h_dax_dual_single_norm}, we assess the importance of the dual-mode normalization layers in the Conformer blocks proposed by \\cite{Yu2021-DMA} vs.\\ simply sharing the normalization layers between online and offline mode.\nOn Librispeech (using chunked attention and efficient distillation with $\\tau=-6$), the online performance is very similar between dual and single-mode normalization, while there is a small degradation in offline WER.\nThe medical conversation task (using chunked attention but no distillation) shows a similar picture.\nSince we use causal convolutions for both online and offline mode as in the previous experiments, using a single set of normalization layers means that convolutional and feedforward components are identical to the single-mode Conformer, and the MHSA layers vary only the attention mask.\nThus, single-mode normalization considerably simplifies the implementation while yielding similar performance.\n\n\n\\begin{table}[t]\n \\caption{WER obtained by dual-mode systems in online and offline inference, using dual normalization layers (one for online and one for offline) or single normalization layers (shared between online and offline mode).}\n \\label{tab:results_ls100h_dax_dual_single_norm}\n \n \\centering\n \\begin{tabular}{l|c|c}\n \\bf Norm.\\ layers & \\multicolumn{2}{c}{\\bf WER [\\%]} \\\\\n & \\bf Online & \\bf Offline \\\\\n \\hline\n \\multicolumn{3}{c}{\\em Librispeech 100h (clean \/ other)} \\\\\n \\hline\n dual & 7.1 \/ 21.5 & 6.1 \/ 18.9 \\\\\n single & 7.1 \/ 21.3 & 6.3 \/ 19.0 \\\\\n \n \n \n \n \n \\hline\n \\multicolumn{3}{c}{\\em Medical conversation task} \\\\\n \\hline\n dual & 13.7 & 12.9 \\\\\n single & 13.7 & 13.0 \\\\\n \\end{tabular}\n\\end{table}\n\nMotivated by these results, we also investigated a further simplification of the dual-mode training where the attention mask for all MHSA layers is randomly chosen as the global (offline) or chunked (online) one for each line in the current mini-batch, instead of training both modes on the entire batch (joint training, cf.\\ Eq.\\ \\eqref{eq:loss}).\nThis is similar in spirit to the sampling techniques in \\cite{Yu2021-DMA,Audhkhasi2021-MMA,Kim2021-MMT}.\nThe advantage is that only one model (online or offline) is computed for each utterance, thus saving approximately 50\\,\\% of computation and memory 
requirement.\nWe found such dual-mode training to yield a single model for both online and offline mode that performed similar to the dedicated single-mode models (14.2\\,\\% \/ 13.3\\,\\% WER on the medical task).\nHowever, unlike joint training, it did not result in a sizable WER gain compared to the single-mode baselines.\n\n\n\n\n\n\n\n\\section{Conclusions}\n\nIn this paper, we presented an in-depth study on the performance of dual-mode training for online Conformer Transducer architectures. \nWe could obtain significant WER improvements in online mode on both the Librispeech and a medical conversational speech task, even without in-place distillation, and match the performance of dedicated offline models.\nBest results in online mode were obtained using chunked attention.\nOur results also shed light on the importance of modeling emission delay when doing offline-to-online knowledge distillation: we found that distillation without shift is helpful for reducing latency, while distillation with shift can reduce the WER at the expense of emission delay.\nThe latter could potentially be mitigated by techniques such as FastEmit \\cite{Yu2021-FLS}.\nIn general, the gain from distillation depends on the online configuration (especially the lookahead) and the data set.\nFurthermore, we explored several modifications to the original training approach, and found a simplified version, where only the attention mask is exchanged between online and offline modes,\nto perform equally well as the original proposal \\cite{Yu2021-DMA}.\nIn future work, we will apply our findings to multi-mode ASR \\cite{Kim2021-MMT} for improving robustness of the online model in multiple latency requirements.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{#1}}\n\n\\def\\mbox{sh}{\\mbox{sh}}\n\\def\\mbox{ch}{\\mbox{ch}}\n\\def\\mbox{e}{\\mbox{e}}\n\\def\\,\\mbox{i}\\,{\\,\\mbox{i}\\,}\n\\def\\,\\mbox{\\scriptsize i}\\,{\\,\\mbox{\\scriptsize i}\\,}\n\\def\\lambda{\\lambda}\n\\def\\hbox{th}{\\hbox{th}}\n\\def\\vartheta_1{\\vartheta_1}\n\\def\\vartheta_3{\\vartheta_3}\n\\def\\vartheta_4{\\vartheta_4}\n\\def\\case#1#2{{\\textstyle{#1\\over #2}}}\n\\def\\W#1#2#3#4#5{W #1 \\! \\left(\\hspace{-1mm}\n \\begin{array}{cc}#5 & #4 \\\\ #2 & #3 \\end{array}\n \\hspace{-1mm}\\right)}\n\\def\\case#1#2{{\\textstyle{#1\\over #2}}}\n\n\\begin{document}\n\n\\title{Lattice Ising model in a field: E$_8$ scattering theory}\n\n\\author{\nV. V. Bazhanov\\thanks{\nIAS, Australian National University,\nTheoretical Physics and Mathematics,\nGPO Box 4, Canberra, ACT 2601, Australia, e-mail {\\vartheta_2 vvb105@phys.anu.edu.au}}\n\\thanks{\nOn leave of absence from the Institute\nfor High Energy Physics,\nProtvino, Moscow Region, 142284, Russia.} ,\nB. Nienhuis\\thanks{\nInstituut voor Theoretische Fysica,\nUniversiteit van Amsterdam,\nValckenierstraat~65,\n1018~XE Amsterdam,\nThe Netherlands, e-mail {\\vartheta_2 nienhuis@phys.uva.nl}}\n\\hspace{0.2mm} and\nS. O. 
Warnaar$^{\\mbox{\\footnotesize\\ddag}}$\\thanks{\nPresent address: Mathematics Department,\nUniversity of Melbourne, Parkville, Victoria 3052,\nAustralia, e-mail {\\vartheta_2 warnaar@mundoe.maths.mu.oz.au}}}\n\n\\date{\\ }\n\\maketitle\n\n\\begin{abstract}\nZamolodchikov found an integrable field theory\nrelated to the Lie algebra E$_8$, which describes the\nscaling limit of the Ising model in a magnetic field.\nHe conjectured that there also exist solvable lattice\nmodels based on E$_8$ in the universality class of the Ising model in a\nfield.\nThe dilute A$_3$ model is a solvable lattice model with\na critical point in the Ising universality class.\nThe parameter by which the model can be taken away\nfrom the critical point acts like a magnetic field by breaking\nthe $\\:\\mbox{\\sf Z} \\hspace{-0.82em} \\mbox{\\sf Z}\\,_2$ symmetry between the states.\nThe expected direct relation of the model with E$_8$\nhas not been found hitherto.\nIn this letter we study the thermodynamics of the dilute\nA$_3$ model and show that in the scaling limit it exhibits\nan appropriate E$_8$ structure, which naturally leads to the E$_8$\nscattering theory for massive excitations over the ground state.\n\\end{abstract}\n\n\\newlength{\\mathin}\n\\setlength{\\mathin}{\\mathindent}\n\\setcounter{page}{1}\n\\nsection{Introduction}\nSince the work \\cite{Zamolodchikov-A87b}\nby A.B. Zamolodchikov it is known that certain\nperturbations of conformal field theories (CFT's) lead to completely\nintegrable models of massive quantum field theory (QFT).\nThe\nexistence of non-trivial higher integrals\nof motion and other dynamical symmetries\n\\cite{Zamolodchikov-A88a,A-B-LC,Reshetikhin90d,LeClair,Eguchi}\nin such a QFT allows to compute the spectrum of the particles\nand their $S$-matrix explicitly.\nAt the same time, these QFT models can be obtained as the\nscaling limit of appropriate non-critical solvable lattice models\nin statistical mechanics\n(see \\cite{Baxter-book} for an introduction\nand references on solvable lattice models).\nIn the latter approach the spectrum and the $S$-matrices\ncan be calculated from the Bethe Ansatz equations for the\ncorresponding lattice model\n\\cite{Reshetikhin87a,Bazhanov90d,Reshetikhin91c}.\nThe natural problem arising in this connection is to find lattice\ncounterparts for all known integrable perturbed CFT's and\nvice versa. A description of known results of such\ncorrespondence lies outside the scope of this letter and we refer the\ninterested reader to [1-10]\nand references therein.\nHere we consider one\nparticularly important example of this correspondence associated with\nthe Ising model at its critical temperature in a magnetic field,\nhereafter referred to as the magnetic Ising model.\n\nA.B. Zamolodchikov has shown \\cite{Zamolodchikov-A89a}\nthat the $c=1\/2$ CFT (corresponding to the critical\nIsing model) perturbed with the spin operator $\\phi_{1,2}=\\phi_{2,2}$\nof dimension $(1\/16,1\/16)$ describes an exactly integrable QFT\ncontaining eight massive particles with a reflectionless factorised\n$S$-matrix. 
Up to normalisation the masses of these particles coincide\nwith the components $S_i$ of the Perron-Frobenius vector of the Cartan\nmatrix of the Lie algebra E$_8$:\n\\begin{equation}\n\\frac{m_i}{m_j} = \\frac{S_i}{S_j}.\n\\label{massratio}\n\\end{equation}\nThe element of the\n$S$-matrix describing the scattering of the lightest particles, with\nmass $m_1$,\nreads \\cite{Zamolodchikov-A89a}\n\\begin{equation}\nS_{1,1}(\\beta)=\\frac\n{\\hbox{th}\\left(\\frac{\\beta}{2} + \\,\\mbox{i}\\, \\frac{\\pi}{6} \\right)\n \\hbox{th}\\left(\\frac{\\beta}{2} + \\,\\mbox{i}\\, \\frac{\\pi}{5} \\right)\n \\hbox{th}\\left(\\frac{\\beta}{2} + \\,\\mbox{i}\\, \\frac{\\pi}{30} \\right)}\n{\\hbox{th}\\left(\\frac{\\beta}{2} - \\,\\mbox{i}\\, \\frac{\\pi}{6} \\right)\n \\hbox{th}\\left(\\frac{\\beta}{2} - \\,\\mbox{i}\\, \\frac{\\pi}{5} \\right)\n \\hbox{th}\\left(\\frac{\\beta}{2} - \\,\\mbox{i}\\, \\frac{\\pi}{30} \\right)},\n\\end{equation}\nwith $\\beta$ the rapidity.\nThe other elements are uniquely determined by the\nbootstrap program \\cite{Zamolodchikov-A89a}.\n\nThe aim of this letter is to show\nthat the above QFT describes the scaling limit of\nthe dilute A$_3$ model of\nWarnaar, Nienhuis and Seaton \\cite{WNS,WPSN}\nin the appropriate regime.\nIt should be noted that there were some earlier, rather strong\nindications supporting the above correspondence.\nAll these parts remarkably fit together with our results,\ncompleting a sequence of arguments which can be summarised as follows:\n\\begin{description}\n\\item[{\\rm (i)}]\nThe dilute A$_3$ model is an interaction-round-a-face model on\nthe square lattice with spins taking three values (detailed\ndefinitions are given in equations\n(\\ref{Incidence})-(\\ref{regimes})). Admissible values of the\nadjacent spins are\ndetermined by the incidence matrix (\\ref{Incidence}),\nwhich has largest eigenvalue\nequal to $1+\\sqrt{2}$.\n\\item[{\\rm (ii)}]\nThe model has two\nphysically distinct regimes of relevance to our discussion,\nhere denoted as {\\it i)} and {\\it ii)},\ndepending on the region of the spectral parameter or, equivalently, of\na sign of the Hamiltonian of the associated one-dimensional chain.\n(These are the regimes $2^+$ and $3^+$ of ref~\\cite{WPSN}, respectively).\nThe central charges and the conformal dimensions of the leading perturbation\ncomputed from exact expressions for the free energy and the local\nstate probabilities\nof the dilute A$_3$ model\nfor these two regimes read \\cite{WBN,WPSN}\n\\begin{equation}\ni) \\quad c=1\/2, \\quad \\Delta=1\/16; \\qquad ii) \\quad c=6\/5, \\quad\n\\Delta=15\/16.\n\\label{canddelta}\n\\end{equation}\n\\item[{\\rm (iii)}]\nIn ref~\\cite{Bazhanov90d,Bazhanov90b} Bazhanov and Reshetikhin\nproposed thermodynamic Bethe Ansatz equations (TBAE) related\nto the A-D-E Lie algebras, corresponding to\nnon-critical models in statistical mechanics.\nUsing standard\nthermodynamics calculations and the high level Bethe Ansatz (see\n\\cite{Reshetikhin87a} and references therein)\nthey computed: the central charges of the corresponding scaling field\ntheories, dimensions of the leading\nperturbations, the spectra and scattering amplitudes of the\nmassive excitations, expressing them through fused Boltzmann weights.\nIn particular, in the case relevant to our discussion\n($\\cal G$=E$_8$, $g=30$, $p=\\ell=1$,\nin the notation of \\cite{Bazhanov90d}) the\nexponents they found \\footnote{Note that the equations (5$\\cdot$1)\nand (5$\\cdot$4) in \\cite{Bazhanov90d}\nhave been misprinted. 
Correcting (5$\\cdot$1)\nto $c=c^{\\cal G}(l)+c^{\\cal G}(r-l-g)-c^{\\cal G}(r-g)+\n\\mbox{rank}\\:{\\cal G}$ yields the following result for\nthe central charge in (5$\\cdot$4):\n$c=2\\:\\mbox{rank}\\:{\\cal G}\/(g+2)$. Also the phrases\n``minimal unitary'', just before, and\n``by the operator $\\phi_{(1,3)}$'' just after (5$\\cdot$4)\nshould be deleted.}\nprecisely match (\\ref{canddelta})\nin both regimes.\nFurthermore, the TBAE allowing the calculation of\nthe largest eigenvalue of the incidence matrix of the underlying\nlattice model, gave in this case precisely the value $1+\\sqrt{2}$\n\\cite{Bazhanov-remark-in-Kuniba's-paper}.\n\\item[{\\rm (iv)}]\nFinally, the\nspectrum and $S$-matrix of the scaling field theory in regime\n$\\it i)\\\/$ found in \\cite{Bazhanov90d} from the\nhigh level Bethe Ansatz for E$_8$\ncoincide with those of Zamolodchikov's magnetic Ising model.\n\\end{description}\nAll the above arguments strongly suggest that the TBAE\nbased on the Lie algebra E$_8$ as\nproposed in \\cite{Bazhanov90d,Bazhanov90b},\nare those of the the dilute A$_3$ model.\n\nIn this paper we present the Bethe Ansatz equations (BAE)\nfor the non-critical, dilute A$_L$ model.\nAs these equations, at criticality, are very similar to those of the\nIzergin-Korepin model \\cite{IK,Vichirko}, it is not\nsurprising that, when specialised to $L=3$,\nthey do not display any explicit structure related\nto the root system of E$_8$.\nIt turns out however that this structure reveals itself\nin a quite complicated string structure of the solutions\nto the BAE.\nMotivated by an extensive numerical\ninvestigation of the BAE\nwe formulate an exact conjecture\nfor the thermodynamically significant strings.\nThis leads to TBAE, which, rewritten in a\nnew string basis precisely yield the E$_8$ based TBAE of\nref~\\cite{Bazhanov90d} discussed under (iii).\nAs a result of (iv) this\nfinalises the\ncorrespondence between the dilute A$_3$ model and the\nmagnetic Ising model.\n\n\\nsection{The dilute A models}\nThe dilute A$_L$ model, belonging to the more\ngeneral class of dilute A-D-E models,\nis an exactly solvable, restricted solid-on-solid\nmodel defined on the square lattice.\nEach site of the lattice can take one of $L$ possible\n(height) values, subject to the restriction that\nneighbouring sites of the lattice either have the\nsame height, or differ by $\\pm 1$.\nThis adjacency condition can be\nconveniently expressed by a so-called\nincidence matrix $M$:\n\\begin{equation}\nM_{a,b} = \\delta_{a,b-1} + \\delta_{a,b} + \\delta_{a,b+1}\n\\qquad a,b\\in \\{1,\\ldots,L\\},\n\\label{Incidence}\n\\end{equation}\nwhere we note that $M$ relates to the\nCartan matrix $C^{\\mbox{\\scriptsize A}_L}$ of\nthe Lie algebra A$_L$\nby $M=3 I - C^{\\mbox{\\scriptsize A}_L}$,\nwith $I$ the identity matrix.\nThe eigenvalues of the incidence matrix are\nfound to be\n\\begin{equation}\n\\Lambda_j = 1 + 2\\cos\n\\left(\\frac{\\pi j}{L+1} \\right) \\qquad j=1,\\ldots,L.\n\\end{equation}\nFor the case of interest here, $L=3$, we thus find the\nlargest eigenvalue to be $1+\\sqrt{2}$, in accordance with\nthe prediction for the E$_8$ TBAE as mentioned in (iii)\nof the introduction.\n\nUsing standard definitions of $\\vartheta_{i}(u,q)$-functions,\nsuppressing the dependence on the nome $q=\\mbox{e}^{-\\tau}$, $\\tau>0$,\nthe Boltzmann weights of the allowed height configurations of\nan elementary face of the lattice are\n\\setlength{\\mathindent}{0 
cm}\n\\begin{eqnarray}\n\\lefteqn{\\W{}{a}{a}{a}{a}=\n\\frac{\\vartheta_1(6\\lambda-u)\\vartheta_1(3\\lambda+u)}{\\vartheta_1(6\\lambda)\\vartheta_1(3\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\hphantom{\\W{}{a}{a}{a}{a}}\n-\\left(\\frac{S(a+1)}{S(a)}\\frac{\\vartheta_4(2a\\lambda-5\\lambda)}{\\vartheta_4(2a\\lambda+\\lambda)}\n +\\frac{S(a-1)}{S(a)}\\frac{\\vartheta_4(2a\\lambda+5\\lambda)}{\\vartheta_4(2a\\lambda-\\lambda)}\\right)\n\\frac{\\vartheta_1(u)\\vartheta_1(3\\lambda-u)}{\\vartheta_1(6\\lambda)\\vartheta_1(3\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\W{}{a}{a}{a}{a\\pm 1}=\\W{}{a}{a\\pm 1}{a}{a}=\n\\frac{\\vartheta_1(3\\lambda-u)\\vartheta_4(\\pm 2a\\lambda+\\lambda-u)}{\\vartheta_1(3\\lambda)\\vartheta_4(\\pm 2a\\lambda+\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\W{}{a\\pm 1}{a}{a}{a}=\\W{}{a}{a}{a\\pm 1}{a}=\n\\left(\\frac{S(a\\pm 1)}{S(a)}\\right)^{1\/2}\n\\frac{\\vartheta_1(u)\\vartheta_4(\\pm 2a\\lambda-2\\lambda+u)}{\\vartheta_1(3\\lambda)\\vartheta_4(\\pm 2a\\lambda+\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\W{}{a}{a\\pm 1}{a\\pm 1}{a}=\\W{}{a}{a}{a\\pm 1}{a\\pm 1}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{ \\hphantom{\\W{}{a}{a\\pm 1}{a\\pm 1}{a}}\n=\\left(\\frac{\\vartheta_4(\\pm 2a\\lambda+3\\lambda)\\vartheta_4(\\pm 2a\\lambda-\\lambda)}\n {\\vartheta_4^2(\\pm 2a\\lambda+\\lambda)}\\right)^{1\/2}\n\\frac{\\vartheta_1(u)\\vartheta_1(3\\lambda-u)}{\\vartheta_1(2\\lambda)\\vartheta_1(3\\lambda)} }\n\\nonumber \\\\ & & \\label{Bweights} \\\\\n\\lefteqn{\\W{}{a}{a\\mp 1}{a}{a\\pm 1}=\n\\frac{\\vartheta_1(2\\lambda-u)\\vartheta_1(3\\lambda-u)}{\\vartheta_1(2\\lambda)\\vartheta_1(3\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\W{}{a\\pm 1}{a}{a\\mp 1}{a}=\n-\\left(\\frac{S(a-1)S(a+1)}{S^2(a)}\\right)^{1\/2}\n\\frac{\\vartheta_1(u)\\vartheta_1(\\lambda-u)}{\\vartheta_1(2\\lambda)\\vartheta_1(3\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\W{}{a\\pm 1}{a}{a\\pm 1}{a}=\n\\frac{\\vartheta_1(3\\lambda-u)\\vartheta_1(\\pm 4a\\lambda+2\\lambda+u)}{\\vartheta_1(3\\lambda)\\vartheta_1(\\pm 4a\\lambda+2\\lambda)}\n+\\frac{S(a\\pm 1)}{S(a)}\n\\frac{\\vartheta_1(u)\\vartheta_1(\\pm 4a\\lambda-\\lambda+u)}{\\vartheta_1(3\\lambda) \\vartheta_1(\\pm 4a\\lambda+2\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\hphantom{\\W{}{a\\pm 1}{a}{a\\pm 1}{a}}=\n\\frac{\\vartheta_1(3\\lambda+u)\\vartheta_1(\\pm 4a\\lambda-4\\lambda+u)}\n{\\vartheta_1(3\\lambda)\\vartheta_1(\\pm 4a\\lambda-4\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{\\hphantom{\\W{}{a\\pm 1}{a}{a\\pm 1}{a}}+\n\\left(\\frac{S(a\\mp 1)}{S(a)}\\frac{\\vartheta_1(4\\lambda)}{\\vartheta_1(2\\lambda)}\n-\\frac{\\vartheta_4(\\pm 2a\\lambda-5\\lambda)}{\\vartheta_4(\\pm 2a\\lambda+\\lambda)} \\right)\n\\frac{\\vartheta_1(u)\\vartheta_1(\\pm 4a\\lambda-\\lambda+u)}{\\vartheta_1(3\\lambda) \\vartheta_1(\\pm 4a\\lambda-4\\lambda)}}\n\\nonumber \\\\ & & \\nonumber \\\\\n\\lefteqn{S(a)=(-)^{\\displaystyle a} \\;\n\\frac{\\vartheta_1(4a\\lambda)}{\\vartheta_4(2a\\lambda)} \\, .}\n\\nonumber\n\\end{eqnarray}\nThe variable $\\lambda$ and the range of the spectral parameter $u$\nin the above weights are\ngiven by\\footnote{In \\cite{WPSN} two more regimes were\ndefined, which are omitted being of no relevance here.}\n\\setlength{\\mathindent}{\\mathin}\n\\begin{equation}\n\\lambda = \\frac{\\pi}{4} \\, \\frac{L+2}{L+1}\n\\qquad\n\\left\\{\n\\begin{array}{lll}\n00.\n\\label{Energy}\n\\end{equation}\nwhere 
$\\epsilon=-1$ for regime {\\it i)} and\n$\\epsilon=1$ for regime {\\it ii)} in (\\ref{regimes}).\n\nThe densities $\\rho_t$ are normalised such\nthat\n\\begin{equation}\n\\int_{-\\tau r\/\\pi}^{\\tau r\/\\pi}\n\\rho_t(\\alpha)\\: d\\alpha = N^{(t)}\/N.\n\\end{equation}\nTherefore, from equation (\\ref{Nsum}), we have\n\\begin{equation}\n\\sum_{t=0}^8\n\\int_{-\\tau r\/\\pi}^{\\tau r\/\\pi}\nn^{(t)} \\rho_t(\\alpha) \\: d\\alpha = 1.\n\\end{equation}\nThis relation together with equation (\\ref{TBAE}) for $t=0$\nimplies\n\\begin{equation}\n\\tilde{\\rho}_0(\\alpha) =0.\n\\label{zero}\n\\end{equation}\nHence we conclude that the strings of type 0 have no holes in any\nstate, and we eliminate $\\rho_0(\\alpha)$ from (\\ref{TBAE}).\nAfter a tedious calculation we find that the resulting integral\nequations can naturally be described in\nterms of the E$_8$ root system as follows.\n\nLet $C^{\\mbox{\\scriptsize E}_8}_{t,s}$ $t,s=1,\\ldots,8$\nbe the elements of the Cartan matrix for\nE$_8$, where we use the following enumeration of the nodes of the\ncorresponding Dynkin diagram:\n\\[\n\\setlength{\\unitlength}{0.008in}%\n\\begingroup\\makeatletter\n\\def\\x#1#2#3#4#5#6#7\\relax{\\def\\x{#1#2#3#4#5#6}}%\n\\expandafter\\x\\fmtname xxxxxx\\relax \\def\\y{splain}%\n\\ifx\\x\\y \n\\gdef\\SetFigFont#1#2#3{%\n \\ifnum #1<17\\tiny\\else \\ifnum #1<20\\small\\else\n \\ifnum #1<24\\normalsize\\else \\ifnum #1<29\\large\\else\n \\ifnum #1<34\\Large\\else \\ifnum #1<41\\LARGE\\else\n \\huge\\fi\\fi\\fi\\fi\\fi\\fi\n \\csname #3\\endcsname}%\n\\else\n\\gdef\\SetFigFont#1#2#3{\\begingroup\n \\count@#1\\relax \\ifnum 25<\\count@\\count@25\\fi\n \\def\\x{\\endgroup\\@setsize\\SetFigFont{#2pt}}%\n \\expandafter\\x\n \\csname \\romannumeral\\the\\count@ pt\\expandafter\\endcsname\n \\csname @\\romannumeral\\the\\count@ pt\\endcsname\n \\csname #3\\endcsname}%\n\\fi\n\\endgroup\n\\begin{picture}(250,76)(75,710)\n\\thicklines\n\\put( 80,740){\\circle*{10}}\n\\put(120,740){\\circle*{10}}\n\\put(160,740){\\circle*{10}}\n\\put(200,740){\\circle*{10}}\n\\put(240,740){\\circle*{10}}\n\\put(280,740){\\circle*{10}}\n\\put(320,740){\\circle*{10}}\n\\put(240,780){\\circle*{10}}\n\\put(240,740){\\line( 0, 1){ 40}}\n\\put( 80,740){\\line( 1, 0){240}}\n\\put( 80,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}1}}}\n\\put(120,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}2}}}\n\\put(160,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}3}}}\n\\put(200,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}4}}}\n\\put(240,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}5}}}\n\\put(280,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}6}}}\n\\put(320,710){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}7}}}\n\\put(255,775){\\makebox(0,0)[b]{\\smash{\\SetFigFont{14}{16.8}{bf}8}}}\n\\end{picture}\n\\]\nFurthermore, define the functions\n$K^{\\mbox{\\scriptsize E}_8}_{t,s}$,\n$A^{\\mbox{\\scriptsize E}_8}_{t,s}$,\n$a^{\\mbox{\\scriptsize E}_8}_{t,s}$\nand $s$ by their FT\n\\begin{eqnarray}\n\\hat{K}^{\\mbox{\\scriptsize E}_8}_{t,s}(x)&=&\n\\delta_{t,s} + \\hat{s}(x)\n\\left(C^{\\mbox{\\scriptsize E}_8}_{t,s} - 2\\delta_{t,s}\\right)\n\\nonumber \\\\\n\\hat{A}^{\\mbox{\\scriptsize E}_8}_{t,s}(x)&=&\n\\left[\\hat{K}^{\\mbox{\\scriptsize E}_8}(x)\\right]^{-1}_{t,s}\n\\nonumber \\\\\n\\hat{a}^{\\mbox{\\scriptsize E}_8}_{t,s}(x)&=&\n\\hat{s}(x)\n\\hat{A}^{\\mbox{\\scriptsize E}_8}_{t,s}(x)\\\\\n\\hat{s}(x) &=& \\frac{1}{2\\cosh x} \\nonumber\n\\end{eqnarray}\nWith these definitions, and 
after eliminating $\\rho_0$,\n the integral equations (\\ref{TBAE}) and\nthe energy expression (\\ref{Energy}) take the form\n\\begin{eqnarray}\na_{1,t}^{\\mbox{\\scriptsize E}_8}(\\alpha)\n&=& \\tilde{\\rho}_t(\\alpha)\n+ \\sum_{s=1}^8 A^{\\mbox{\\scriptsize E}_8}_{t,s} \\ast \\rho_s\n\\, (\\alpha) \\qquad t=1,\\ldots,8, \\nonumber \\\\\n\\frac{E}{N} &=&-\\epsilon\\sum_{t=1}^8\n\\int_{-\\tau r\/\\pi}^{\\tau r\/\\pi}\na_{1,t}^{\\mbox{\\scriptsize E}_8}(\\alpha) \\, \\rho_t(\\alpha)\n\\: d\\alpha +\\mbox{const}.\n\\label{BAE8}\n\\end{eqnarray}\n\nWe can now use (\\ref{BAE8}) to study the scaling limit of the\nmodel. In fact, all relevant calculations have already been\ncarried out in ref~\\cite{Bazhanov90d} and we only need\nto refer to the appropriate results therein.\nTo make the correspondence with ref~\\cite{Bazhanov90d}\nsomewhat more transparent, let us give the\nexpression for the equilibrium free energy $F(T)$ of\nthe one-dimensional spin chain\nat finite temperature $T$,\nas it follows from (\\ref{BAE8}) via standard TBA calculations\n\\cite{Yang-Yang},\n\\begin{equation}\n\\frac{F(T)}{N} = -\\sum_{t=1}^8\n\\int_{-\\tau r\/\\pi}^{\\tau r\/\\pi}\na_{1,t}^{\\mbox{\\scriptsize E}_8}(\\alpha) \\,\nT \\log \\left(1+\\mbox{e}^{-\\beta \\epsilon_t(\\alpha)}\\right)\n\\, d\\alpha +\\mbox{const},\n\\label{FE}\n\\end{equation}\nwhere $\\beta=1\/T$ is the inverse temperature.\nThe functions $\\epsilon_t=T\\log(\\tilde{\\rho}_t\/\\rho_t)$ are\nthe solutions of the integral equation\n\\begin{equation}\n\\epsilon \\delta_{1,t} s(\\alpha) =\nT \\log \\left(1+\\mbox{e}^{-\\beta \\epsilon_t(\\alpha)}\\right)\n- \\sum_{s=1}^8 K_{t,s}^{\\mbox{\\scriptsize E}_8}\n\\ast\nT \\log \\left(1+\\mbox{e}^{\\beta \\epsilon_s(\\alpha)}\\right)\n(\\alpha).\n\\label{NLIE}\n\\end{equation}\n\nThe above two equations are equivalent to\n(3$\\cdot$20) and (3$\\cdot$21) of ref~\\cite{Bazhanov90d},\nrespectively, with their $\\cal G$=E$_8$, $r=32$, $g=30$, $p=\\ell = 1$,\ntheir nome $q$ replaced by $q^{1\/2}$ and\nwith their $\\epsilon_j^a$ negated.\nThis last difference reflects the fact that our TBA equations are dual\nto those of ref~\\cite{Bazhanov90d} in the sence that the densities of\nstrings and holes are interchanged. From (\\ref{FE}) and (\\ref{NLIE})\nit follows that for $T=0$\n\\begin{equation}\n\\begin{array}{lll}\ni) & \\epsilon =-1 \\quad & \\epsilon_t(\\alpha) =\na_{1,t}^{\\mbox{\\scriptsize E}_8} \\\\\n& & \\\\\nii) & \\epsilon =+1 & \\epsilon_t(\\alpha) =\n-\\delta_{t,1} s(\\alpha).\n\\end{array}\n\\end{equation}\nThe functions $|\\epsilon_t(\\alpha)|$ are the energies\nof the excitations over the ground state.\n\nFor $\\epsilon=-1$ the ground state is formed by type 0\nstrings. As was remarked after equation (\\ref{zero}),\nthese strings have no holes for any state. Therefore the\nDirac sea is ``frozen'', and the excitations correspond to\nthe remaining eight string types. The phenomenon of ``freezing''\nof the Dirac sea which can be interpreted as the confinement of\n``holes'' has been first observed in\nthe TBAE calculations of ref~\\cite{Bazhanov}\nfor the RSOS models of\nAndrews, Baxter and Forrester~\\cite{A-B-F}.\n\nFor $\\epsilon=1$ the Dirac sea is formed by the type 1\nstrings, and the only excitations correspond to holes\nin the Dirac sea. These excitations are of the kink type.\n\nNow we consider the scaling limit. 
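Before doing so, we note in passing that the E$_8$ data entering eq.~(\\ref{massratio}) and the structure of (\\ref{BAE8}) are easy to verify numerically; the following sketch is our own illustrative check, not part of the derivation.
\\begin{verbatim}
# Perron-Frobenius vector of the E_8 Cartan matrix, using the node
# enumeration of the Dynkin diagram above (nodes 1-7 form the chain,
# node 8 is attached to node 5).
import numpy as np

C = 2 * np.eye(8)
for i, j in [(1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (5, 8)]:
    C[i - 1, j - 1] = C[j - 1, i - 1] = -1

# The all-positive (Perron-Frobenius) eigenvector of the Cartan matrix is
# the one associated with the largest eigenvalue of the adjacency matrix
# 2*I - C, which for E_8 equals 2*cos(pi/30) (Coxeter number g = 30).
w, v = np.linalg.eigh(2 * np.eye(8) - C)
S = np.abs(v[:, np.argmax(w)])
print(np.round(np.sort(S) / S.min(), 3))
# -> [1.  1.618  1.989  2.405  2.956  3.218  3.891  4.783]:
#    the mass ratios m_t / m_1 of the E_8 scattering theory.

# Largest eigenvalue of the dilute A_3 incidence matrix, 1 + 2*cos(pi/4):
print(1 + 2 * np.cos(np.pi / 4))   # = 1 + sqrt(2), as quoted in the introduction
\\end{verbatim}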
We introduce a dimensional\nspacing parameter $d$ for our chain and\ntake the limit $N\\to\\infty$, $d\\to 0$,\nkeeping the (dimensional) length of the chain $L=N d $ to\nbe macroscopically bigger than the correlation length:\n$L>>R_c= q^{-\\xi}d$, where $\\xi$ is the index of the correlation\nlength.\nIn the scaling limit we thus have $d\\sim q^{\\xi}$,\n$N>>q^{-\\xi}$, $q\\to 0$, and we\nobtain the massive relativistic spectrum\nof excitations.\nTo find this, one has to compute the energy dispersion law for the\nphysical excitations in the $q\\to 0$ limit keeping the\nrapidities $\\alpha$ of the order of $\\alpha_0=\\tau r\/\\pi$, where\nthe functions $|\\epsilon_t(\\alpha)|$ have their minima.\nTaking into account the correspondence in notation\ndiscussed after (\\ref{NLIE}), one gets from\n(4$\\cdot$1) and (4$\\cdot$2) of ref~\\cite{Bazhanov90d}\n\\begin{eqnarray}\ni) & & \\epsilon_t\n\\left(\\case{30}{\\pi} \\beta + \\alpha_0 \\right) = m_t \\cosh \\beta +\no(q^{\\xi}\n) \\nonumber \\\\\n& & m_t = \\mbox{const } S_t q^{\\xi}, \\quad \\xi = \\case{8}{15}\n\\nonumber \\\\\nii) & & \\left|\\epsilon_1\n\\left(\\case{2}{\\pi} \\beta + \\alpha_0 \\right)\n\\right| = m \\cosh \\beta +\no(q^{\\xi}) \\label{Mass} \\\\\n& & m = \\mbox{const } q^{\\xi}, \\quad \\xi = 8,\n\\nonumber\n\\end{eqnarray}\nwhere $\\beta$ here denotes the rapidity variable and\n$S_t$ was defined just before equation (\\ref{massratio}).\n\nUsing the scaling relation\n$\\xi = (2-2\\Delta)^{-1}$ it is\nseen that the values of $\\xi$ in (\\ref{Mass})\nlead exactly to the dimensions of the leading perturbations\nas given in (\\ref{canddelta}).\n\nThe values of the central charges of the corresponding\n(ultraviolet) conformal field theories listed in\n(\\ref{canddelta}) have also been previously calculated.\nFor regime $i)$ in \\cite{Bazhanov90d,K-M,AlZamolodchikov}\nand for regime $ii)$ in \\cite{Bazhanov90d}.\n\nFinally, the\n$S$-matrix for regime {\\it i)}, where all the string excitations\nfor $t=1\\ldots 8$ correspond to distinct particles, can be found\nstraightforwardly from equation (\\ref{BAE8}).\nThe result is \\cite{Bazhanov90d}\n\\begin{equation}\nS_{t,s}(\\beta) = \\exp\\left\\{ \\,\\mbox{i}\\,\n\\int_0^{\\infty} A_{t,s}^{\\mbox{\\scriptsize E}_8}(x) \\,\n\\frac{\\sin (30\\beta x\/{\\pi})}{x} \\: dx \\right\\},\n\\end{equation}\nwhich coincides with Zamolodichkov's E$_8$ $S$-matrix\n\\cite{Zamolodchikov-A89a}.\n\nFor regime {\\it ii)} the kink-kink $S$-matrix is of\nthe RSOS type related to the E$_7$ Lie algebra\n\\cite{Bazhanov90d}, and will be discussed elsewhere\n\\cite{BNW}.\n\n\n\\nsection{Summary and Conclusion}\nIn this paper we have established the final link between\nZamolodchikov's E$_8$ $S$-matrix of the critical Ising model\nin a field \\cite{Zamolodchikov-A89a}\nand its underlying lattice model.\nBy making a conjecture for the possible string\nsolutions of its Bethe Ansatz equations,\nwe have derived a system of thermodynamic BAE for the\ndilute A$_3$ lattice model of Warnaar et al. 
\\cite{WNS}.\nAfter a suitable transformation\nwe have recast these TBAE in terms of the root system\nof the Lie algebra E$_8$.\nThese E$_8$ TBAE are found to be precisely those conjectured\nearlier by Bazhanov and Reshetikhin \\cite{Bazhanov90d}, and\nusing their results, the correspondence\nbetween the dilute A$_3$ model and the E$_8$ $S$-matrix\nis made.\n\nTo conclude\nwe mention that two more remarkable integrable $\\phi_{1,2}$ perturbations\nof CFT's are known, notably those related to $S$-matrices with\nhidden E$_7$ (c=7\/10) and E$_6$ (c=6\/7) structure \\cite{F-Z}.\nLike the E$_8$ case, the underlying lattice models\nof these integrable QFT's correspond to models in the\ndilute A hierarchy.\nThe working for these two extra cases, corresponding\nto dilute A$_4$ and A$_6$, respectively, as well as some\nadditional results for the dilute A$_3$ model will be the\nsubject of a future publication \\cite{BNW}\n\n\\section*{Acknowledgements}\nWe wish to thank\nM.~T.~Batchelor, R.~J.~Baxter, U.~Grimm, P.~A.~Pearce,\nand N.~Yu.~Reshetikhin for interesting discussions.\nWe thank U.~Grimm, P.~A.~Pearce and Y.~K.~Zhou\nfor sending us their work prior to publication.\nOne the authors (VVB) thanks\nthe University of Amsterdam for hospitality during his\nvisit in the summer of 1992, when this work\nhas been initiated.\nThis work has been supported by the Stichting voor Fundamenteel\nOnderzoek der Materie (FOM).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMore than two years ago N.Seiberg and E.Witten proposed a way\n\\cite{SW1,SW2} leading to considerable progress in\nunderstanding of the low-energy effective actions.\nAmong other things, it provided a non-trivial confirmation\nof the old {\\it belief} that the low-energy effective actions\n(the end-points of the renormalization group flows) fit into universality\nclasses depending on the vacuum structure of the theory.\nThe trivial (and generic) case is that of isolated vacua,\ni.e. when there are no marginal degrees of freedom and effective\ntheories on the moduli space of the vacua. If degenerated vacua form\nfinite-\\-dimensional varieties, the effective actions can be essentially\ndescribed in terms of $1d$ systems\n\\footnote{By $1d$ $(0+1)$ systems here we mean the conventional\nclassical\/quantum mechanical systems with {\\it finite}-dimensional phase\nspace (\\# of degrees of freedom is equal to the rank of the gauge group).}\n(representatives of the\nsame universality class), though the original\ntheory lives in a many-dimensional space-time. Moreover, these effective\ntheories can be integrable\n\\footnote{When degenerated vacua form\ninfinite-dimensional structures one should expect generalized integrability\n(with spectral curve substituted by spectral hypersurface).}.\nOriginally,\nthis phenomenon was studied in the context of $2d$ topological theories,\nwhile the Seiberg-Witten (SW) construction can be considered as\nan extension of this approach from two to four space-time\n(``world sheet'') dimensions.\nThe natural context where one encounters continuous degeneration of vacua in\nfour dimensions is {\\it supersymmetry}; the SW construction {\\it per se}\nconcerns the $4d$ ${\\cal N}=2$ SUSY Yang-Mills models. Integrable structure\nbehind the SW theory has been found in \\cite{GKMMM} and later examined in\ndetail in \\cite{SWI1}-\\cite{N}. 
Its intimate relation to the previously\nknown topological theories has been revealed recently in \\cite{MMM}\nwhere it was proven that the $4d$ prepotentials do satisfy the\nWDVV equations \\cite{W,DVV}.\nThe purpose of the present paper is to give more evidence\n(and more examples) confirming this relation.\n\nNaively, the SW low-energy effective action is not\ntopological: it describes propagating particles. This is,\nhowever, a necessary consequence of the vacuum degeneracy and\nunbroken supersymmetry. All the correlators in the low-energy\ntheory can be expressed through the correlators of ``holomorphic''\n(or ``pure topological'') subsector. Technically, propagators\nare contained in the $\\langle\\phi\\bar\\phi\\rangle$ correlators,\n(derivatives of) which are related by the ${\\cal N}=2$ supersymmetry\nto the $\\langle\\phi\\phi\\rangle$ ones. This is summarized in the statement\n(sometimes called special geometry) that the low-energy effective action is\n{\\it completely} expressed in terms of a {\\it holomorphic} prepotential\n$F(a_I)$, $\\frac{\\partial}{\\partial\\bar a_I} F = 0$. Even if one accepts the\nterm ``topological'' for this kind of models, what we consider is a $4d$\ntopological theory, {\\it a priori} very different from the familiar ($2d$)\nexamples. However, if one believes in the above-mentioned universality, it is\nclear that only the structure of the moduli space of vacua (but not the\nspace-time dimension) is essential. Since the vacuum variety in the SW\ntheory is finite-dimensional, one should expect that this theory is not too\nfar from the conventional topological models, devised to describe\nexactly this kind of vacua; as was shown in \\cite{MMM} and confirmed below in\nthe present paper this is indeed the case. The way to see this is to recall\nthat the prepotential of conventional topological theory satisfies the\nWDVV equations \\cite{W,DVV} and so does the prepotential of the SW theory\n\\cite{MMM} (similar claims were made in \\cite{BM,dBdW}).\n\nMathematically, the WDVV equations reflect the specific properties of\nthe moduli space of Riemann spheres with punctures \\cite{Ma,KoMa}, in the $2d$\ncontext these spheres being interpreted as the world sheets.\nNaturally, in the $4d$ context they could be substituted by\nthe four-dimensional world hypersurfaces, and naively instead of integrable\nsystems, associated with the spectral curves one would deal with\nthose, associated with spectral hypersurfaces -- with\nno reason for the standard WDVV eqs. to emerge. This is of course\ntrue for ``generic topological $4d$ models'', but this is not\nthe only relevant problem that can be addressed in four dimensions.\nIf one instead emphasizes the issue of vacuum structures and\nuniversality classes, then the {\\it standard} WDVV eqs should\nbe expected in the ({\\it not} the most degenerate from the $4d$ point\nof view) situation with finite-dimensional (in the\nspace of all fields) variety of vacua -- but the conventional\nderivation of these equations is not (at least, directly) applicable.\nMoreover, as argued in \\cite{MMM}, the SW example emphasizes\nthat the natural form (and thus the structure behind) the\nWDVV eqs is covariant under the linear transformations of\ndistinguished (flat) coordinates on the moduli space of vacua --\nwhat is true, but not {\\it transparent} in the conventional\npresentation. 
For us, all this implies that the true\norigin of the WDVV eqs is still obscure.\n\nOur goal in this paper is rather modest: given (relatively)\nexplicit expressions for particular SW prepotentials\nwe check that they indeed satisfy the WDVV eqs.\n\nIn sect.\\ref{eqs} we briefly remind what are the\nWDVV eqs and how they are related in \\cite{MMM} to\nthe algebra of meromorphic 1-differentials on spectral curves.\n\nThen in sect.\\ref{pert} we discuss the perturbative prepotentials\nin $4d$ and $5d$ ${\\cal N}=2$ SUSY Yang-Mills theories which\nare expected to satisfy the WDVV eqs themselves. We find that this is indeed\ntrue, but only when matter hypermultiplets\nare absent or belong to the first fundamental representation\nof the gauge group (the representation of the lowest dimension). Remarkably,\nthis is in agreement with the\nfact that no Seiberg-Witten curves are known beyond such cases,\nand there are no string model to produce them in the field theory limit.\n\nIn sect.\\ref{npsw} the WDVV eqs are derived at the non-perturbative\nlevel from the representation\nof the prepotential in terms of spectral Riemann surfaces.\n\nThe general reasoning is illustrated in sect.\\ref{Examples} by examples\nwhich include\n${\\cal N}=2$ SUSY YM models with classical simple gauge groups and the\nmatter hypermultiplets in the first fundamental representation.\nAlso the $5d$ $SU(N_c)$ pure gauge model is analyzed, and Seiberg-Witten\ntheory is shown to produce a prepotential, corrected as compared\nto the naive field-theory expectations. Exactly these corrections are\nnecessary to make the WDVV equations true.\n\nFinally, in sect.\\ref{Calogero} the $SU(N_c)$ model with the matter\nhypermultiplet in adjoint representation is analyzed in\ndetail. We demonstrate by explicit calculation that the WDVV equations\nare {\\it not} satisfied in this case. Technically the reason is that\nthe spectral curve is essentially non-hyperelliptic and the\nalgebra of 1-differentials (though existing and being closed) is\nnon-associative.\nThe deep reason for the breakdown of WDVV in its standard from\nis the appearance of new modulus: the parameter $\\tau$\nof the complex structure on {\\it bare} elliptic curve, which is interpreted\nas the ultraviolet (UV) coupling constant of the UV-finite $4d$ theory.\nFurther understanding of the relevant deformation of the WDVV\nequations (presumably related to the analogue of WDVV eqs for\nthe generating function of the elliptic Gromov-Witten classes\n\\cite{Getzler}) is important for further investigation of\nstring models (since $\\tau$ is the remnant of heterotic dilaton\nfrom that point of view).\n\n\\section{The full set of the WDVV equations \\label{eqs}}\n\\setcounter{equation}{0}\nThe WDVV equations are non-linear relations for the third derivatives\nof the prepotential\n\\ba\nF_{IJK} \\equiv \\frac{\\partial^3 F}{\\partial a_I\n\\partial a_J \\partial a_K}.\n\\ea\nwritten in the most convenient way in terms of matrices\n$F_I$, $(F_I)_{MN} \\equiv F_{IMN}$.\nThe {\\it moduli} $a_I$ are defined up to linear transformations\n(i.e. 
define the {\\it flat structure} on the {\\it moduli space})\nwhich leave the whole set (\\ref{WDVV}) invariant.\n\nAny linear combination\nof these matrices can be used to form a ``metric''\n\\ba\\label{metric}\nQ = \\sum_K q_KF_K.\n\\ea\nGenerically it is non-degenerate square matrix and can be\nused to raise the indices:\n\\ba\\label{3}\nC^{(Q)}_I \\equiv Q^{-1}F_I, \\ \\ \\ {\\rm or}\\ \\\n{C^{(Q)}}^M_{IN} = (Q^{-1})^{ML}F_{ILN}.\n\\ea\nThe WDVV equations state that for any given $Q$ all the\nmatrices $C^{(Q)}_I$ commute with each other:\n\\ba\nC^{(Q)}_IC^{(Q)}_J = C^{(Q)}_JC^{(Q)}_I \\ \\ \\ \\ \\ \\forall I,J\n\\label{WDVV}\n\\ea\nIn particular, if $Q = F_K$, then $C^{(K)}_I = F_K^{-1}F_I$ and \\cite{MMM}\n\\ba\nF_IF_K^{-1}F_J = F_JF_K^{-1}F_I, \\ \\ \\ \\forall I,J,K.\n\\label{WDVVikj}\n\\ea\nIf (\\ref{WDVVikj}) holds for some index $K$, say $K=0$,\nit is automatically true for any other $K$\n(with the only restriction for $F_K$ to be non-degenerate).\nIndeed,\n\\footnote{\nThis simple proof was suggested to us by A.Rosly.\n}\nsince $F_I = F_0 C_I^{(0)}$,\n\\ba\nF_IF_K^{-1}F_J =\nF_0 \\left(C_I^{(0)}(C_K^{(0)})^{-1} C_J^{(0)}\\right)\n\\ea\nis obviously symmetric w.r.t. the permutation $I\\leftrightarrow J$.\nOf course, it also holds for any other non-degenerate $Q$ in\n(\\ref{metric}).\n\nEqs.(\\ref{WDVV}) are just the associativity condition of the formal algebra\n\\ba\n\\varphi_I\\circ \\varphi_J = {C^{(Q)}}_{IJ}^K\n\\varphi_K, \\nonumber \\\\\n(\\ref{WDVV}) \\Leftrightarrow\n(\\varphi_I \\circ \\varphi_M )\\circ \\varphi_J =\n\\varphi_I \\circ (\\varphi_M \\circ \\varphi_J).\n\\label{alge}\n\\ea\nThe way to prove the WDVV eqs, used in \\cite{MMM} and\nin the present paper, is to obtain (\\ref{alge}) from\na family of algebras formed by a certain set of (meromorphic) $(1,0)$-forms on\nthe curves, associated with particular\n$4d$ Yang-Mills models:\n\\ba\ndW_I(\\zeta) dW_J(\\zeta) = {C^{(Q)}}_{IJ}^K dW_K(\\zeta)\ndQ(\\zeta) \\ {\\rm mod}\\ \\left(d\\omega(\\zeta),d\\lambda(\\zeta)\\right).\n\\label{algdiff} \\label{2.8}\n\\ea\nIf this algebra exists and is associative we get the WDVV equations in context\nof the SW approach.\nThe structure constants $C_{IJ}^K$ depend on the choice of the linear\ncombination $dQ(\\zeta) = \\sum_K q_KdW_K(\\zeta)$.\nIn all relevant cases\n$dW_I(\\zeta) \\sim P_I(\\zeta)$, where for appropriately chosen\ncoordinates $\\{\\zeta\\}$, $P_I(\\zeta)$ are polynomials\n(perhaps, of several variables), thus $dQ(\\zeta) \\sim\n{\\cal Q}'(\\zeta) = \\sum_K q_KP_K(\\zeta)$. Also\n$d\\omega(\\zeta) \\sim {\\cal P}'(\\zeta)$,\n$d\\lambda(\\zeta) \\sim \\dot{\\cal P}(\\zeta)$\nwith some polynomials\n${\\cal P}'(\\zeta)$, $\\dot{\\cal P}(\\zeta)$ and (\\ref{WDVV})\nis equivalent to a certain associative algebra of polynomials\n\\ba\nP_I P_J = {C^{(Q)}}_{IJ}^K P_K{\\cal Q}'\\ {\\rm mod}\\\n({\\cal P}',\\dot{\\cal P}).\n\\label{algpol}\n\\ea\nIn order to obtain the WDVV equations one still needs to express\nthe structure constants matrices $C_I^{(Q)}$ through $F_I$\n(which, in contrast to $C_I$, are independent of the ``metric''\n$Q$). 
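\n\nAs a side remark, the commutation form (\\ref{WDVVikj}) is straightforward to test numerically\nonce the third derivatives $F_{IJK}$ of a candidate prepotential are known. The following is a\nminimal sketch of such a check (an illustration only, not part of the derivation): it builds\n$F_{IJK}$ by finite differences for a prepotential of the perturbative $SU(4)$ type,\n$F=\\frac{1}{4}\\sum_{1\\le i<j\\le 4}(a_i-a_j)^2\\log|a_i-a_j|$ with $a_4=-(a_1+a_2+a_3)$\n(the overall normalization and any quadratic terms are immaterial for the check), and then\ncompares $F_IF_K^{-1}F_J$ with $F_JF_K^{-1}F_I$.\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\n\ndef third_derivatives(F, a, h=1e-3):\n    # numerical F_{IJK} at the point a, by central finite differences\n    n = len(a)\n    T = np.zeros((n, n, n))\n    for I, J, K in product(range(n), repeat=3):\n        s = 0.0\n        for sI, sJ, sK in product((1.0, -1.0), repeat=3):\n            da = np.zeros(n)\n            da[I] += sI*h; da[J] += sJ*h; da[K] += sK*h\n            s += sI*sJ*sK*F(a + da)\n        T[I, J, K] = s/(8*h**3)\n    return T\n\ndef wdvv_violation(T):\n    # largest entry of F_I F_K^{-1} F_J - F_J F_K^{-1} F_I over all I, J, K\n    n = T.shape[0]\n    worst = 0.0\n    for K in range(n):\n        invK = np.linalg.inv(T[K])\n        for I, J in product(range(n), repeat=2):\n            comm = T[I] @ invK @ T[J] - T[J] @ invK @ T[I]\n            worst = max(worst, abs(comm).max())\n    return worst\n\ndef F_pert(a):\n    # perturbative-type SU(4) prepotential in the three independent moduli\n    x = np.append(a, -a.sum())\n    return 0.25*sum((x[i]-x[j])**2*np.log(abs(x[i]-x[j]))\n                    for i in range(4) for j in range(i+1, 4))\n\nprint(wdvv_violation(third_derivatives(F_pert, np.array([1.3, 0.4, -0.6]))))\n# prints a number of the order of the finite-difference error, i.e. compatible with zero\n\\end{verbatim}\nSuch a direct check is, of course, no substitute for the reasoning below, which identifies the\nstructure constants $C^{(Q)}_I$ with those of the algebra of 1-differentials (\\ref{algdiff}).\n\n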
In all relevant cases the third derivatives of the\nprepotential are expressed in terms of the same 1-differentials\nas in (\\ref{algdiff}):\n\\ba\nF_{IJK} = \\stackreb{d\\omega = 0}{\\hbox{res}}\n\\frac{dW_IdW_JdW_K}{d\\lambda d\\omega},\n\\label{2.10}\n\\ea\nwhere $\\delta\\lambda \\wedge \\delta\\omega$ is the symplectic form\non the phase space of $1d$ integrable system, associated with given\n$4d$ model.\n\nFrom (\\ref{metric}) and (\\ref{2.10}),\n\\ba\nQ_{LK} \\equiv \\sum_M q_M F_{LKM} =\n\\stackreb{d\\omega = 0}{\\hbox{res}} \\frac{dW_LdW_KdQ}{d\\lambda d\\omega}\n\\ea\nand according to (\\ref{2.8})\n\\ba\nF_{IJK} = {C^{(Q)}}_{IJ}^LQ_{LK}.\n\\ea\nGiven (\\ref{2.8}) and (\\ref{2.10}) this provides the proof of the WDVV eqs.\n\nWe derive eq.(\\ref{2.10}) in detail in sect.\\ref{npsw} and consider\nparticular examples of the SW theory and associated algebras\n(\\ref{2.8}) in sect.\\ref{Examples}. Let us now make\nseveral comments.\n\nConventional derivations of the WDVV equations rely upon the\nassumption that at least one of the ``metrics'' $Q$\nin (\\ref{WDVV}) is flat. Under this assumption one can relate\nthe associativity equation (\\ref{WDVV}) to the flatness of\nthe connection\n\\ba\n\\partial_I \\delta^A_B + zC^A_{IB}\n\\ea\nwith {\\it arbitrary} spectral parameter $z$, and then to the theory\nof deformations of Hodge structures and the standard theory\nof quantum cohomologies.\nThere is no {\\it a priori} reason for any\n\\ba\nQ_{IJ} = \\stackreb{d\\omega = 0}{\\hbox{res}}\n\\frac{dW_IdW_JdQ}{d\\lambda d\\omega},\n\\ea\nto be flat and in this context such kind of derivation -- even if\nexists -- would look somewhat artificial.\n\nIn our presentation the WDVV equations depend on a\n{\\it triple} of differentials on the spectral curve,\n$d\\omega$, $d\\lambda$, $dQ$. The first two, $d\\omega$ and $d\\lambda$,\ndescribe the symplectic structure $\\delta\\omega \\wedge \\delta\\lambda$\nof the relevant integrable system,\nthis is the pair, discussed in \\cite{DKN,KriW,KriPho,M} and in another\ncontext in \\cite{KM}. The choice of the third differential $dQ$\nis more or less arbitrary.\\footnote{\nIn the conventional Landau-Ginzburg topological models the spectral\ncurve is Riemann sphere. In this case $dQ$ can be considered as\n``dressed'' $d\\lambda$: $dQ(\\lambda) = q(\\lambda)d\\lambda$\nwith some {\\it polynomial} $q(\\lambda)$ and there is a distinguished choice,\n$q(\\lambda)=1$, i.e. $dQ = d\\lambda$ when the associated metric is\nflat \\cite{Losev,KriW}.\n}\nSometime (for example, for hyperelliptic\nspectral surfaces) the algebra (\\ref{algdiff})\ndepends only on the pair $dQ$, $d\\omega$ (or ${\\cal Q}'$,\n${\\cal P}'$ in the formulation (\\ref{algpol})).\nWe shall demonstrate below that just in this case the algebra (\\ref{2.8}) is\nassociative and the WDVV equations are fulfilled.\nThe pair of differentials in this case is essentially the same\nas arising in many related contexts: $(p,q)$\nin minimal conformal models, a\npair of polynomials in associated matrix models \\cite{Dou,FKN,KM}, a pair of\noperators in the Kac-Schwarz problem \\cite{Sch,KM}\nand in a bispectral problem in the\ntheory of KP\/Toda hierarchies \\cite{OrHar} etc. The simplest example of this\nstructure\nis the family of associative rings in the number theory, with multiplication\n$ab = qc + pd$ with co-prime $q$ and $p>q$ where $a,b,c$ belong to the field\nof residues modulo $p$, while $d$ is defined modulo $q$. 
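\nTo make the rule concrete: for $p=5$, $q=3$ and $a=2$, $b=4$ one has $ab=8=3\\cdot 1+5\\cdot 1$,\nso that the product of $a=2$ and $b=4$ equals $c=1$ (with $d=1$); in general $c\\equiv q^{-1}ab$\nmodulo $p$ and $d\\equiv p^{-1}ab$ modulo $q$.\n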
All the algebras\nwith multiplication rule $a\\cdot_q b = c$ are isomorphic, but if considered\nas a {\\it family} over the ``moduli space'' of $q$'s, they form some new\nnon-trivial structure.\nIn this oversimplified example the ``moduli space'' is discrete,\nbut it becomes continuous as soon as one switches to the families of the\npolynomial rings (\\ref{algpol}). One can assume that it is this structure -- a\n{\\it family} of associative algebras (not just a {\\it single} associative\nalgebra) -- that stands behind the WDVV equations, and then it should be\nsearched for in the theory of quantum cohomologies, and in topological\ntheories. (Let us emphasize \\cite{MMM} that even if some particular metric\n$Q$ is flat, this is normally not so for all other $Q$'s.)\n\n\\section{The SW prepotentials. Perturbative examples \\label{pert}}\n\\setcounter{equation}{0}\n\\subsection{General formulas}\nIf the exact prepotential of the SW theory satisfies the WDVV\nequations, the same should be true for its perturbative part\n{\\it per se}, since in $4d$ SUSY YM theory the perturbative contribution\nto effective action is one-loop, pure logarithmic, and well\ndefined as a {\\it leading} term in $1\/a$ expansion.\nIn \\cite{MMM} we discussed this phenomenon for the pure gauge ${\\cal N}=2$\nSUSY model and in this section we are going to present more general examples.\n\nPerturbatively, the prepotential\nof the pure gauge ${\\cal N}=2$ SUSY YM model with the gauge group\n$G=SU(N_c)$ is given by\n\\ba\nF_0 = \\frac{1}{2}\\sum_{1\\leq rj>k}a_ia_j\na_k$ so that the complete perturbative prepotential satisfying the WDVV eqs\nis\n\\ba\\label{FARTC}\nF={1\\over 4}\\sum_{i,j}\\left({1\n\\over 3}a_{ij}^3-{1\\over 2}Li_3\\left(e^{-2a_{ij}}\\right)\\right)-\n{N_c\\over 4}\\sum_{i>j>k}a_ia_ja_k\n\\ea\nwhere $a_{ij}\\equiv a_i-a_j$.\nThe cubic terms in (\\ref{FARTC}) could be compared with the $U^3$-terms in\nthe perturbative part of\nthe prepotential of heterotic string, coming from the\nrequirement of modular invariance \\cite{HM}. We show in sect.5.2 how these\nterms can be obtained in the\nperturbative limit of the SW non-perturbative description of $5d$ theory.\n\n\\section{Non-perturbative proof of WDVV equations \\label{npsw}}\n\\setcounter{equation}{0}\n\\subsection{SW theory and the general theory of Riemann surfaces}\nNon-perturbatively, the SW construction implies that\nfor any $4d$ ${\\cal N}=2$ SUSY Yang-Mills model one associates\na (finite-dimensional) integrable system, its phase space is a family of\nabelian varieties (Hamiltonian tori) over a moduli space ${\\cal M}$,\nparametrized by the angle and action variables respectively.\nThe phase space itself is directly \"observable\" when the $4d$\nmodel is compactified to $3d$, i.e. defined in the space-time\n$R^3\\times S^1$ rather than $R^4$: then the angle variables\nof the integrable system are identified with the Wilson loops along\nthe compactified dimension in the $4d$ model \\cite{SW3}.\nIn the non-compactified case the only four-dimensional observables\nare the periods of the generating 1-form $dS=\"pdq\"$\nalong the non-contractable contours on the Hamiltonian tori. The period\nmatrix for the given system consists of the second derivatives of the\nprepotential while the $B$-periods of $dS$ are its first derivatives.\n$A$-periods of $dS$ define the {\\it flat} structure on moduli space.\n\nIn the context of SW theory one actually deals with specific\nintegrable systems associated with Riemann surfaces\n(complex curves). 
This can be understood from the point of view of\nstring theory, which relates the emergency of Yang-Mills theories\nfrom string compactifications to emergency of Hitchin-like integrable\nsystems from more general systems \\cite{DoMa} associated with the Calabi-Yau\nfamilies. In practice it means that the families of abelian varieties,\nrelevant for the SW theory are in fact the Jacobian tori of complex curves.\n\nIn the context of SW theory one starts with a {\\it bare} spectral curve,\nwith a holomorphic 1-form $d\\omega $, which is elliptic curve (torus)\n\\ba\\label{torus}\nE(\\tau):\\ \\ \\ \\ y^2 = \\prod_{a=1}^3 (x - e_a(\\tau)), \\ \\ \\\n\\sum_{a=1}^3 e_a(\\tau) = 0,\\ \\ \\ \\ \\ d\\omega = \\frac{dx}{y},\n\\ea\nwhen the YM theory is UV finite, or its degeneration $\\tau\n\\rightarrow i\\infty$ -- the double-punctured sphere (``annulus''):\n\\ba\nx\\rightarrow w\\pm\\frac{1}{w},\\ \\\ny \\rightarrow w\\mp\\frac{1}{w},\\ \\ \\ \\ \\ d\\omega = \\frac{dw}{w}\n\\ea\nFrom the point of view of integrable system, the Lax operator\n${\\cal L}(x,y)$ is defined as a function (1-differential) on the\n{\\it bare} spectral curve \\cite{KriCal}, while the {\\it full} spectral curve\n${\\cal C}$ is given by the Lax-eigenvalue equation: $\\det({\\cal L}(x,y) -\n\\lambda ) = 0$. As a result, ${\\cal C}$ arises as a ramified covering over the\n{\\it bare} spectral curve:\n\\ba\n{\\cal C}:\\ \\ \\ {\\cal P}(\\lambda; x,y) = 0\n\\ea\nIn the case of the gauge group $G=SU(N_c)$, the function ${\\cal P}$ is a\npolynomial of degree $N_c$ in $\\lambda$.\n\nThe function ${\\cal P}$ depends also on parameters (moduli)\n$s_I$, parametrizing the moduli space ${\\cal M}$. From the\npoint of view of integrable system, the Hamiltonians\n(integrals of motion) are some specific co-ordinates on\nthe moduli space. From\nthe four-dimensional point of view, the co-ordinates $s_I$ include $s_i$ --\n(the Schur polynomials of) the adjoint-scalar expectation values $h_k =\n\\frac{1}{k}\\langle{\\rm Tr} \\phi^k\\rangle$ of the vector ${\\cal N}=2$\nsupermultiplet, as well as $s_\\iota = m_\\iota$ -- the masses of the\nhypermultiplets.\n\nThe generating 1-form $dS \\cong \\lambda d\\omega$ is meromorphic on\n${\\cal C}$ (hereafter the equality modulo total derivatives\nis denoted by ``$\\cong$''). The prepotential is defined in terms of the\ncohomological class of $dS$:\n\\ba\na_I = \\oint_{A_I} dS, \\ \\ \\ \\ \\ \\\n\\frac{\\partial F}{\\partial a_I} = \\int_{B_I} dS \\nonumber \\\\\nA_I \\circ B_J = \\delta_{IJ}.\n\\label{defprep}\n\\ea\nThe cycles $A_I$ include the $A_i$'s wrapping around the handles\nof ${\\cal C}$ and $A_\\iota$'s, going around the singularities\nof $dS$ (for simplicity we assume that they are\nsimple poles, generalization is obvious).\nThe conjugate contours $B_I$ include the cycles $B_i$ and the\n{\\it non-closed} contours $B_\\iota$, ending at the singularities\nof $dS$ (see sect.5 of \\cite{IM2} for more details).\nThe integrals $\\int_{B_\\iota} dS$ are actually divergent, but\nthe coefficient of divergent part is equal to residue of $dS$\nat particular singularity, i.e. to $a_\\iota$. Thus the divergent\ncontribution to the prepotential is quadratic in $a_\\iota$, while\nthe prepotential is normally defined {\\it modulo} quadratic combination\nof its arguments (since only its {\\it third} derivatives appear in the\nWDVV equations). 
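\nIn more explicit terms: since the residue of $dS$ at the puncture equals $a_\\iota$, cutting the\nnon-closed contour $B_\\iota$ off at a small distance $\\varepsilon$ from the puncture gives\n\\ba\n\\frac{\\partial F}{\\partial a_\\iota} = \\int_{B_\\iota} dS =\na_\\iota\\log\\frac{1}{\\varepsilon} + \\mbox{finite},\n\\ea\nso that the cutoff-dependent part of $F$ is $\\frac{1}{2}a_\\iota^2\\log\\frac{1}{\\varepsilon}$ (up\nto $a_\\iota$-independent terms) and indeed drops out of all the third derivatives entering the\nWDVV equations.\n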
In\nparticular models $\\oint_{A,B} dS$ for some conjugate pairs of contours are\nidentically zero on entire ${\\cal M}$: such pairs are not included into our\nset of indices $\\{I\\}$.\n\nIn the context of SW theory with the data $\\{ {\\cal M}, {\\cal C}, dS\\}$\nthe number of independent $a_I$ is exactly the same as ${\\rm dim}_C{\\cal M}$.\nThis allows us to use the functions $a_I(s)$ as co-\\-ordinates on\nmoduli space ${\\cal M}$.\n\nThe self-consistency of the definition (\\ref{defprep}) of $F$,\ni.e. the fact that the matrix\n$\\frac{\\partial^2 F}{\\partial a_I\\partial a_J}$ is symmetric is guaranteed\nby the following reasoning.\nLet us take the derivative of $dS$ over moduli:\n\\ba\n\\frac{\\partial dS}{\\partial s_I} \\cong \\frac{\\partial\\lambda}\n{\\partial s_I} d\\omega = -\\frac{\\partial {\\cal P}}{\\partial s_I}\n\\frac{d\\omega}{{\\cal P}'} \\equiv dv_I.\n\\label{defdv}\n\\ea\nwhere prime denotes the $\\lambda$-derivative (for constant $x, d\\omega$ and\n$s_I$).\nNote that we did not include $\\tau$ into the set of moduli\n$\\{s_I\\}$, therefore we can take $s_I$-derivatives keeping $(x,y)$\n(and thus $d\\omega$) fixed: the variation of coordinate on bare curve would\nnot change the cohomology classes.\nThe variation of ${\\cal P}(\\lambda; x,y) = 0$\n\\ba\n{\\cal P}'d\\lambda + \\frac{\\partial{\\cal P}}{\\partial \\omega}\nd\\omega = 0,\n\\ea\nimplies that the zeroes of $d\\omega$ and ${\\cal P}'$ on ${\\cal C}$\ncoincide (since $d\\lambda$ and $d\\omega$ are supposed to have non-coinciding\nzeroes), and, therefore, $\\frac{d\\omega}{{\\cal P}'}$ is a {\\it holomorphic}\n1-differential on ${\\cal C}$. If $\\partial{\\cal P}\/\\partial s_I$ is\nnon-singular, $dv_I$ is also holomorphic\\footnote{Since curves with punctures\nand the corresponding meromorphic differentials can be obtained by\ndegeneration of smooth curves of higher genera we do not make any distinction\nbetween punctured and smooth curves below. We remind that the holomorphic\n1-differentials can have at most simple poles at the punctures while\nquadratic differentials can have certain double poles etc.}\n -- this is the case of $dv_i$, while\n$dv_\\iota$ have poles encircled by contours $A_\\iota$. The second derivative\n\\ba\n\\frac{\\partial^2 F}{\\partial a_I\\partial a_J} = F_{IJ}\n\\ea\nis essentially the period matrix of the punctured Riemann surface\n${\\cal C}$\n\\ba\n\\sum_J F_{IJ}\\oint_{A_J} dv_L=\\int_{B_I} dv_L +\\int_{{\\partial B_I\\over \\partial s_L}}\ndS\n\\ea\nThe second term at the r.h.s. can be non-vanishing if the contour $B_I$ is\nnot closed. The end-points of such contours (punctures) are included into the\nset of moduli (this actually implies that a local coordinate is specified in\nthe vicinity of the puncture). The derivative ${\\partial B_I\\over \\partial s_L}$ is\nactually a point, thus, $\\int_{{\\partial B_I\\over \\partial s_L}} dS={dS\\over\nd\\lambda}\\left(\\lambda= {\\partial B_I\\over \\partial s_L}\\right)$. This term also cancels\nthe possible divergency in $\\int_{B_I} dv_L$, making $F_{IJ}$ finite.\n\nAs any period matrix $F_{IJ}$ is symmetric:\n\\ba\n\\sum_{IJ} (F_{IJ} - F_{JI}) \\oint_{A_I}dv_K\\oint_{A_J}dv_L =\\\\=\n\\sum_I\\left(\\oint_{A_I}dv_K\\int_{B_I}dv_L -\n\\int_{B_I}dv_K\\oint_{A_I}dv_L\\right)\n+ \\sum_I\\left(\\oint_{A_I}dv_K \\int_{{\\partial B_I\\over \\partial s_L}}\n- \\int_{{\\partial B_I\\over \\partial s_K}} \\oint_{A_I}dv_L\\right)\n\\ea\nThe first sum at the r.h.s. 
is equal to\n\\ba\n\\sum_I\\left(\\oint_{A_I}dv_K\\int_{B_I}dv_L -\n\\int_{B_I}dv_K\\oint_{A_I}dv_L\\right)=\n{\\hbox{res}} \\left( v_K dv_L \\right) = 0\n\\ea\nsince all the singularities of $dv_K$ are already taken into account\nby inclusion of $A_\\iota$ into our set of $A$-cycles. The second sum is also\nzero because the only non-vanishing entries are those with $K=L$.\n\nIn this paper we also need the moduli derivatives of the prepotential.\nIt is easy to get:\n\\ba\n\\sum_{IJ} \\frac{\\partial F_{IJ}}{\\partial s_M}\n\\oint_{A_I}dv_K\\oint_{A_J}dv_L =\n\\sum_I\\left(\\oint_{A_I}dv_K\\int_{B_I}\\frac{\\partial dv_L}{\\partial s_M} -\n\\int_{B_I}dv_K\\oint_{A_I}\\frac{\\partial dv_L}{\\partial s_M}\\right) +\\Sigma\n_{KLM}=\n\\nonumber \\\\ =\n{\\hbox{res}} \\left(dv_K \\frac{\\partial v_L}{\\partial s_M}\\right)+\\Sigma_{KLM}\n\\label{prom1}\n\\ea\nwhere\n\\ba\\label{sigma}\n\\Sigma_{KLM}=\\sum_I\\left(\\oint_{A_I}dv_K {\\partial\\over\\partial s_M}\\int_{{\\partial B_I\\over \\partial\ns_L}}dS+ \\oint_{A_I}dv_K \\int_{{\\partial B_I\\over \\partial s_M}}dv_L-\n\\oint_{A_I}{\\partial dv_L\\over\\partial s_M} \\int_{{\\partial B_I\\over \\partial s_K}}dS\n\\right)\n\\ea\nis the contribution of the moduli dependence of punctures.\n\nThe residue in (\\ref{prom1}) does not need to\nvanish, since the derivatives w.r.t. the\nmoduli produces new singularities. Indeed, from (\\ref{defdv}) one obtains\n\\ba\n-\\frac{\\partial dv_L}{\\partial s_M} =\n\\frac{\\partial^2{\\cal P}}{\\partial s_L\\partial s_M}\n\\frac{d\\omega}{{\\cal P}'} +\n\\left(\\frac{\\partial{\\cal P}}{\\partial s_L}\\right)'\n\\left(-\\frac{\\partial{\\cal P}}{\\partial s_M}\\right)\n\\frac{d\\omega}{{\\cal P}'} -\n\\frac{\\partial{\\cal P}}{\\partial s_L}\n\\frac{\\partial{\\cal P}'}{\\partial s_M}\n\\frac{d\\omega}{({\\cal P}')^2} + \\frac{\\partial{\\cal P}}{\\partial s_L}\n\\frac{\\partial{\\cal P}}{\\partial s_M}\n\\frac{{\\cal P}''d\\omega}{({\\cal P}')^3}=\n\\nonumber \\\\\n= \\left[\n\\left(\\frac{ {\\partial{\\cal P}\/\\partial s_L}\n {\\partial{\\cal P}\/\\partial s_M} }{ {\\cal P}'} \\right)' +\n\\frac{\\partial^2{\\cal P}}{\\partial s_L\\partial s_M} \\right]\n\\frac{d\\omega}{{\\cal P}'},\n\\ea\nand this expression has new singularities (second order poles)\nat the zeroes of ${\\cal P}'$ (i.e. of $d\\omega$). Note,\nthat the contributions from the singularities of\n$\\partial{\\cal P}\/\\partial s_L$, if any, are already taken into\naccount in the l.h.s. of (\\ref{prom1}).\nPicking up the coefficient at the leading singularity, we obtain:\n\\ba\\label{resch}\n{\\hbox{res}} \\left(dv_K \\frac{\\partial v_L}{\\partial s_M}\\right) +\\Sigma_{KLM}=\n-\\stackreb{d\\omega = 0}{\\hbox{res}}\n\\frac{\\partial{\\cal P}}{\\partial s_K}\n\\frac{\\partial{\\cal P}}{\\partial s_L}\n\\frac{\\partial{\\cal P}}{\\partial s_M}\n\\frac{d\\omega^2}{({\\cal P}')^3 d\\lambda} =\n\\stackreb{d\\omega = 0}{\\hbox{res}}\n\\frac{dv_Kdv_Ldv_M}{d\\omega d\\lambda}\n\\ea\nThe integrals at the l.h.s. of (\\ref{prom1}) serve to convert\nthe differentials $dv_I$ into {\\it canonical} ones $dW_I$,\nsatisfying $\\oint_{A_I} dW_J = \\delta_{IJ}$. The same matrix\n$\\oint_{A_I} dv_J$ relates the derivative w.r.t. the moduli $s_I$\nand the periods $a_I$. 
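\nSpelling this step out: since $a_I=\\oint_{A_I}dS$, one has\n$\\frac{\\partial a_I}{\\partial s_K}=\\oint_{A_I}dv_K$, and therefore\n\\ba\ndW_I \\cong \\frac{\\partial dS}{\\partial a_I} = \\sum_K\\frac{\\partial s_K}{\\partial a_I}dv_K,\n\\ \\ \\ \\ \\oint_{A_J}dW_I = \\sum_K\\frac{\\partial s_K}{\\partial a_I}\\frac{\\partial a_J}{\\partial s_K}\n= \\delta_{IJ}\n\\ea\n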
Putting all together we obtain\n(see also \\cite{KriW}):\n\\ba\n\\frac{\\partial F_{IJ}}{\\partial s^K} =\n\\stackreb{d\\omega = 0}{\\hbox{res}}\n\\frac{dW_IdW_Jdv_K}{d\\omega d\\lambda}; \\nonumber \\\\\n\\frac{\\partial^3 F}{\\partial a^I\\partial a^J\\partial a^K} =\n\\frac{\\partial F_{IJ}}{\\partial a^K} =\n\\stackreb{d\\omega = 0}{\\hbox{res}}\n\\frac{dW_IdW_JdW_K}{d\\omega d\\lambda}\n\\label{resfor}\n\\ea\n\n\\subsection{Algebra of 1-differentials \\label{algebdiff}}\n\nWe consider particular examples of the algebra (\\ref{2.8}) in\nsect.\\ref{Examples}. Here we sketch some common features\nof these examples that seem to be relevant for existence of such\nalgebras and show that they always exist in the hyperelliptic case.\n\n{\\bf 1)} The linear space of differentials $dW_I$ in (\\ref{2.8})\n(or the space of $dv_I$ in (\\ref{defdv})) differs a little bit from the set\nof {\\it all} holomorphic 1-differentials\\footnote{For the gauge\ngroups $B_n$, $C_n$, $D_n$ one should take {\\it all} 1-differentials which\nare {\\it odd} under the involution $\\lambda\\to -\\lambda$.}\non punctured\n\\footnote{We mean here by\n\"punctured\" that the \"holomorphic differentials\" can have at most {\\it\nsimple} poles at punctures. } spectral curve ${\\cal C}$ for the following\nreasons. In elliptic (UV-finite) cases, when the bare spectral curve is\n$E(\\tau)$, $d\\omega$ is holomorphic on ${\\cal C}$, but it is not included\ninto the set $\\{dW_I\\}$, because the algebra of $dW_I$ is defined {\\it\nmodulo} $d\\omega$. Up to such possible corrections, the number of linearly\nindependent $dW_I$'s is $g + n$, where $g$ is the genus of ${\\cal C}$ and\n$n+1$ -- the number of punctures.\n\n{\\bf 2)}\nThe products $dW_IdW_J$ are holomorphic quadratic differentials\non the punctured curve ${\\cal C}$. There are $\\frac{(g+n)(g+n-1)}{2}$\n(or $\\frac{(g+n-1)(g+n-2)}{2}$) such products, but at most\n$3g-3+2n$ of them are linearly independent, since this is the\nquantity of independent quadratic differentials.\n\n{\\bf 3)} We want to obtain an associative algebra of 1-differentials\n$dW_I$,\n\\ba\ndW_I\\circ dW_J = \\sum_K C_{IJ}^K dW_K\n\\ea\nwhich is an ordinary algebra of multiplications\n{\\it modulo} $d\\omega$ and $d\\lambda$.\nIn addition to these two 1-differentials one should also\nspecify $dQ \\equiv \\sum_I q_IdW_I$.\nThen, any quadratic differential can be represented as their ``linear\ncombination'', namely:\n\\ba\ndW_I dW_J = \\left(\\sum_K C_{IJ}^K dW_K\\right) dQ\n+ d\\widetilde W_{IJ}^\\omega d\\omega +\nd\\widetilde W_{IJ}^\\lambda d\\lambda\n\\label{algmult}\n\\ea\nand existence of such a representation follows already from\nthe estimation of the number of independent adjustment parameters.\n\nFor example, if there are no punctures, all $dW_I$ are holomorphic (have no\npoles) and so does $d\\omega$ (which is not included into the set $\\{dW_I\\}$),\nwhile $d\\lambda$ has the second order pole in a single point (this is\nliterally the case for the Calogero model, see sect.6), then $d\\widetilde\nW_{IJ}^Q \\equiv \\sum_K C_{IJ}^K dW_K$ has $g-1$ adjustment parameters ($g-1$\nstructure constants $C_{IJ}^K$ with $K = 1,2,\\ldots,g-1$), while $d\\widetilde\nW_{IJ}^\\omega$ and $d\\widetilde W_{IJ}^\\lambda$ are holomorphic\ndifferentials, the latter one has the second order zero at the puncture,\ni.e. they form $g$- and $g-2$-parametric families respectively. 
Thus,\ntotally we get $(g-1) +g+ (g-2)$ adjustment parameters, exactly exhausting\nthe $3g-3$-dimensional set of independent holomorphic quadratic $dW_IdW_J$.\nAccurate analysis shows that the matching is preserved at punctured surface\nas well.\n\n\\subsection{Associativity and the WDVV equations}\nHowever, the existence of the closed algebra does not immediately imply its\nassociativity. Indeed, to check associativity one needs to consider the\nexpansion of products of the 3-differentials:\n\\ba\\label{47}\ndW_IdW_JdW_K=dW_{IJK}dQ^2+d^2\\widetilde{\\widetilde W^{\\omega}\n\\!\\!\\!}_{_{IJK}}d\\omega+\nd^2\\widetilde{\\widetilde W^{\\lambda}\\!\\!\\!}_{_{IJK}}d\\lambda\n\\ea\nwhere $dW_{IJK}dQ$ is a set of holomorphic 1-differentials not\ncontaining $d\\omega$. Now repeating the above counting, we get that there\nexist totally $(g-1)+(2g-1)+(3g-5)$, since $d^2\\widetilde{\\widetilde\nW^{\\omega}\\!\\!\\!\n}_{_{IJK}}$ are holomorphic quadratic differentials that do not contain\n$d\\lambda$, and $d^2\\widetilde{\\widetilde W^{\\lambda}\\!\\!\\!\n}_{_{IJK}}$ are holomorphic\nquadratic differentials with second order zeroes. Thus, for the $5g-5$\nindependent holomorphic 3-differentials we have too many, namely, $6g-7$\nparameters. Therefore, among the differentials $dW_{IJK}dQ$, $d^2\\widetilde\n{\\widetilde W^{\\lambda}\\!\\!\\!}_{_{IJK}}$ and $d^2\\widetilde\n{\\widetilde W^{\\lambda}\\!\\!\\!}_{_{IJK}}$ there are some relations that\ngenerally spoil associativity. We present in sect.6 the explicit example\nwhere associativity is indeed violated.\n\nThe situation simplifies different in the hyperelliptic case when there is\nthe additional ${\\bf Z}_2$-transformation $\\sigma: y\\to -y$. In this case,\nthe holomorphic differentials, ${P_k(\\lambda)d\\lambda\\over Y}$,\n$k=0,\\ldots g-1$ ($P_k$ denotes a polynomial of degree $k$) are {\\it odd}\nunder $\\sigma$ and their quadratic combinations, ${P_k(\\lambda)\n(d\\lambda)^2\\over Y^2}$,\n$k=0,\\ldots,2g-2$ are {\\it even}.\nThe remaining $g-2$ quadratic holomorphic differentials, ${P_k(\\lambda)\n(d\\lambda)^2\\over\nY}$, $k=0,\\ldots,g-3$ are $\\sigma$-odd and do not appear in the l.h.s. of\n(\\ref{algmult}). On the other hand, in the expansion (\\ref{algmult}),\none can throw away the term with $d\\lambda$ at the r.h.s., since $d\\lambda$\nis $\\sigma$-even and, therefore, the coefficient in front of it\nis not a linear combination of the holomorphic differentials $dW_I$. This\nrestores the correct counting of $2g-1$ adjusting parameters for $2g-1$\ndifferentials. It turns out that this restricted algebra spanned by the two\ndifferentials $dQ$ and $d\\omega$ is both closed and {\\it associative}.\nIndeed, if the term with $d\\lambda$ is omitted at the r.h.s. of (\\ref{47}),\nthere remain $3g-2$ adjusting parameters -- exactly as many as\nthere are holomorphic $\\sigma$-odd 3-differentials ${P_k(\\lambda)d\\lambda\\over\ny^3}$,\n$k=0,\\ldots,3g-3$ that can arise at the l.h.s. in the triple products of\n$dW_I$. 
In other words the algebra in hyperelliptic case is obviously\nassociative since it is isomorphic to the ring of polynomials\n$\\{ P_k(\\lambda) \\}$ -- multiplication of all polynomials (of one variable)\nmodulo some ideal.\n\nThe same counting can be easily repeated for the general surface with $n$\npunctures\\footnote{The simplest way to do this is to obtain all the\npunctures degenerating handles of the Riemann surface of higher genus.} and\nleads to the conclusion that the closed associative algebra does always exist\nfor the hyperelliptic case.\n\n\nAccording to sect.\\ref{eqs} the WDVV eqs are immediately\nimplied by the residue formula (\\ref{resfor}) and associativity of the\nalgebra (\\ref{algmult}) because the terms with $d\\omega$ at the r.h.s. of\n(\\ref{algmult}) do not contribute into (\\ref{resfor}).\n\n\\section{Examples \\label{Examples}}\n\\setcounter{equation}{0}\n\\subsection{Hyperelliptic curves in the Seiberg-Witten theory}\nThis section contains a series of examples of the algebras of 1-differentials\nand WDVV equations, arising in particular ${\\cal N}=2$ supersymmetric models\nin four and five dimensions.\nAccording to \\cite{GKMMM} and \\cite{SWI1}, models without matter\n(pure gauge theories) can be described in terms of integrable systems\nof the periodic Toda-chain family. The corresponding spectral curves\nare hyperelliptic, at least, for the\nclassical simple groups $G$. The function ${\\cal P}(\\lambda, w)$ is nothing\nbut\nthe characteristic polynomial of the dual affine algebra ${\\hat G}^{\\vee}$\nand looks like (parameter $s$ was introduced in (\\ref{adjot}))\n\\ba\n{\\cal P}(\\lambda, w) = 2P(\\lambda) - w - \\frac{Q_0(\\lambda)}{w},\n\\ \\ \\ \\ d\\omega={dw\\over w},\\ \\ \\ \\ Q_0(\\lambda)=\\lambda^{2s}\n\\label{curven}\n\\ea\nwhile\n\\ba\ndS = \\lambda \\frac{dw}{w}\n\\label{dSn}\n\\ea\nHere $P(\\lambda)$ is characteristic polynomial of the\nalgebra $G$ itself, i.e.\n\\ba\nP(\\lambda) = \\det(G - \\lambda I) =\n\\prod_i (\\lambda - \\lambda_i)\n\\ea\nwhere determinant is taken in the first fundamental representation\nand $\\lambda_i$'s are the eigenvalues of the algebraic element $G$.\nFor classical simple algebras we have:\n\\ba\nA_n:\\ \\ \\ P(\\lambda) = \\prod_{i=1}^{N_c}(\\lambda - \\lambda_i), \\ \\ \\\nN_c = n+1,\\ \\ \\ \\sum_{i=1}^{N_c} \\lambda_i = 0; \\ \\ \\\ns=0; \\nonumber \\\\\nB_n:\\ \\ \\ P(\\lambda) = \\lambda\\prod_{i=1}^n(\\lambda^2 - \\lambda_i^2);\n\\ \\ \\ s=2 \\nonumber \\\\\nC_n:\\ \\ \\ P(\\lambda) = \\prod_{i=1}^n(\\lambda^2 - \\lambda_i^2);\n\\ \\ \\ s=-2 \\nonumber \\\\\nD_n:\\ \\ \\ P(\\lambda) = \\prod_{i=1}^n(\\lambda^2 - \\lambda_i^2);\n\\ \\ \\ s=2\n\\label{charpo}\n\\ea\nFor exceptional groups, the curves arising as the characteristic polynomial\nof the dual affine algebras do not acquire the hyperelliptic form -- and\nthey are left beyond the scope of this paper (though they can still have\nenough symmetries to suit into the general theory of s.4).\n\nThe list (\\ref{charpo}) implies that one needs to analyze\nseparately the cases of $A_n$ and the other three series.\n\nAbove formulas are well adjusted for generalizations.\nIn particular, in order to include massive hypermultiplets in the\nfirst fundamental representation one can just change\n$Q_0(\\lambda)$ for $Q(\\lambda) = Q_0(\\lambda)\\prod_{\\iota = 1}^{N_f}\n(\\lambda - m_\\iota)$ if $G=A_n$ \\cite{HO} and $Q(\\lambda) = Q_0(\\lambda)\n\\prod_{\\iota = 1}^{N_f}(\\lambda^2 - m^2_\\iota)$ if $G=B_n,C_n,D_n$ \\cite{AS}\n(sometimes it can also make sense\nto 
shift $P(\\lambda)$ by an $m$-dependent polynomial $R(\\lambda)$, which does\nnot depend on $\\lambda_i$).\n\nThese cases of the matter hypermultiplets included can be described by the\nperiodic $XXX$-chain family \\cite{SWI2}.\n\nWorking with\nthese models it is sometimes convenient to\nchange the $w$-variable, $w \\ \\rightarrow \\ \\sqrt{Q(\\lambda)}w$\nso that the curve ${\\cal P}(\\lambda, w) = 0$ turns into\n\\ba\nw + \\frac{1}{w} = 2\\frac{P(\\lambda) + R(\\lambda)}{\\sqrt{Q(\\lambda)}}\n\\label{curven2}\n\\ea\n\nAnother possible generalization is the extension\nto $5d$ models \\cite{N}. Then it is enough to change only the\nformula (\\ref{dSn}),\n\\ba\ndS^{(5)} = \\log\\lambda\\ \\frac{dw}{w}\n\\label{dS5}\n\\ea\nand put $Q_0=\\lambda^{N_c\/2}$ while eq.(\\ref{curven}) remains intact.\nFrom the point of view of integrable systems this corresponds to changing\nthe Toda chain for the relativistic Toda chain.\n\nExamples that do not suit into this scheme are related to the\nUV-finite models, of which the simplest one is the\n$SU(N_c)$ gauge theory in $4d$ with massive hypermultiplet in the\nadjoint representation. The corresponding integrable system\nis the elliptic Calogero model, and our analysis in s.\\ref{Calogero}\nreveals that it does not satisfy WDVV equations: the closed algebra\nof holomorphic differentials still exists, but is not\nassociative -- in accordance with the general reasoning in s.4.\n\nTo conclude these short remarks, let us note that, as we shall see below,\nassociativity of algebras of the holomorphic 1-differentials on\nhyperelliptic curve reduces afterwards to associativity of the polynomial\nalgebras. Therefore, we consider here the reference pattern of such an\nalgebra. Namely, we look at the algebra of $N$ one-variable polynomials\n$p_{i}(\\lambda)$ of degree $N-1$ whose product defined by modulo\nof a polynomial $P'$ of degree $N$. More concretely, the algebra\nis defined as\n\\ba\\label{polass}\np_i(\\lambda)p_j(\\lambda)=C^k_{ij}p_k(\\lambda)W(\\lambda)+\\xi_{ij}(\\lambda)\nP'(\\lambda)\n\\ea\nwhere $W(\\lambda)$ is some fixed linear combination of $p_i(\\lambda)$:\n$W(\\lambda)\\equiv \\sum q_lp_l(\\lambda)$\nco-prime with $P'(\\lambda)$. Then, the l.h.s. of this relation is a\npolynomial of degree $2(N-1)$ with $2N-1$ coefficients which are to be\nadjusted by the free parameters in the r.h.s. Since $\\xi_{ij}$ is of degree\n$N-2$, there are exactly $N+(N-1)=2N-1$ free parameters. The same counting is\ncorrect for the triple products of the polynomials and the algebra\nis associative, since the condition $P'(\\lambda)=0$ leads to the\nfactor-algebra over the ideal $P'(\\lambda)$. The same argument does not work\nfor the algebra of holomorphic 1-differentials, since their product belongs\nto a different space (of quadratic differentials) -- see \\cite{MMM2}.\n\n\\subsection{Perturbative limit of the pure gauge theories}\nNow we discuss the simplest case of curves corresponding to\nthe perturbative formulas of sect.3.\nPerturbative limit of the curve (\\ref{curven2}) is\n\\ba\\label{pertcurv}\nw = 2\\frac{P(\\lambda)}{\\sqrt{Q(\\lambda)}}\n\\ea\nand the corresponding\n\\ba\\label{pertdS}\ndS^{(4)} = \\lambda d\\log\\left(\\frac{P(\\lambda)}{\\sqrt{Q(\\lambda)}}\\right);\n\\nonumber \\\\\ndS^{(5)} = \\log\\lambda\\ d\\log\\left(\\frac{P(\\lambda)}{\\sqrt{Q(\\lambda)}}\\right)\n\\ea\nWe consider first the simplest case of the pure gauge $SU(N_c)$ Yang-Mills\ntheory, i.e. 
$Q(\\lambda)=Q_0(\\lambda)=1$.\nThe both cases of $4d$ and $5d$ theories\ncan be treated simultaneously. Indeed, they are described by the same Riemann\nsurface (\\ref{pertcurv}) which is nothing but\nthe sphere with punctures\\footnote{These punctures emerge as a\ndegeneration of the handles of the hyperelliptic surface so that the\n$a$-cycles encircle the punctures.}, the only difference being the\nrestrictions imposed onto the set of moduli $\\{\\lambda_i\\}$'s --\nin the $4d$ case\nthey are constrained to satisfy the condition $\\sum_i^{N_c} \\lambda_i=0$,\nwhile in\nthe $5d$ one -- $\\prod_i^{N_c} \\lambda_i=1$.\nNow the set of the $N_c-1$ independent\ncanonical holomorphic differentials is\\footnote{It can be instructive to\ndescribe in more details why (\\ref{difff}) arises not only in $4d$, but also\nin $5d$ models. Strictly speaking, in the $5d$ case, one should consider\ndifferentials on annulus, not on sphere. Therefore, instead of\n${d\\lambda\\over\\lambda-\\lambda_i}$, one rather needs to take\n$\\displaystyle{\\sum_{m=-\\infty}^{+\\infty}{da\\over\na-a_i+2n}\\sim\\coth{a-a_i\\over 2}{da\\over\n2}={\\lambda+\\lambda_i\\over\\lambda-\\lambda_i}{d\\lambda\\over 2\\lambda}}$, where\n$\\lambda\\equiv e^a$. Considering now, instead of\n$\\displaystyle{d\\omega_i={d\\lambda\\over\n\\lambda-\\lambda_i}-{d\\lambda\\over\\lambda-\\lambda_{N_c}}={\\lambda_{iN_c}\nd\\lambda\\over (\\lambda-\\lambda_i)(\\lambda-\\lambda_{N_c}}}$, differentials\n$\\displaystyle{d\\omega_i={\\lambda+\\lambda_i\\over\\lambda-\\lambda_i}\n{d\\lambda\\over 2\\lambda} -{\\lambda+\\lambda_{N_c}\\over\\lambda-\\lambda_{N_c}}\n{d\\lambda\\over\n2\\lambda}= {\\lambda_{iN_c}d\\lambda\\over\n(\\lambda-\\lambda_i)(\\lambda-\\lambda_{N_c})}}$, we obtain exactly formula\n(\\ref{difff}).}\n\n\\ba\\label{difff}\nd\\omega_i=\\left({1\\over\\lambda-\\lambda\n_i}-{1\\over\\lambda -\\lambda_{N_c}}\\right)d\\lambda=\n{\\lambda_{iN_c}d\\lambda\\over\n(\\lambda -\\lambda_i)(\\lambda -\\lambda_{N_c})},\\ \\ \\ i=1,...,N_c,\n\\ \\ \\ \\lambda_{ij}\\equiv \\lambda_i-\\lambda_j\n\\ea\nand one can easily check that $\\lambda_i=a_i$ ($a$-periods) in the $4d$ case\nand $\\lambda_i=e^{a_i}$ in the $5d$ case, i.e. $\\sum_i a_i=0$.\n\nThe next step is to expand the product of two such differentials\n\\ba\nd\\omega_id\\omega_j=\\left(C_{ij}^kd\\omega_k\\right)\\left(q_ld\\omega_l\\right)+\n\\left(D^k_{ij}d\\omega_k\\right)d\\log P(\\lambda),\\ \\ \\ d\\log P(\\lambda)=\n\\sum_l^{N_c}{d\\lambda\\over\\lambda -\\lambda_l}\n\\ea\nConverting this expression to the normal form with\ncommon denominator $P(\\lambda)$,\none reproduces (\\ref{polass}) for polynomial algebra.\nTherefore, the structure constants $C_{ij}^k$\nform the closed associative algebra and, in\naccordance with our general formulas, can be expressed through the\nprepotential by residue formulas\n\\ba\nF_{ijk}=\\stackreb{d\\log P=0}{\\hbox{res}} {d\\omega_id\\omega_jd\\omega_k\\over d\\log\nP(\\lambda)d\\lambda} \\ \\ \\ \\hbox{for the $4d$ case},\\\\\nF_{ijk}=\\stackreb{d\\log P=0}{\\hbox{res}} {d\\omega_id\\omega_jd\\omega_k\\over d\\log\nP(\\lambda){d\\lambda\\over\\lambda}} \\ \\ \\ \\hbox{for the $5d$ case}\n\\ea\nThese residues can be easily calculated. The only technical trick is to\ncalculate the residues not at zeroes of $d\\log P$ but at poles of\n$d\\omega$'s. 
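\n(The trick rests on the fact that the full sum of residues of the meromorphic 1-form\n$\\frac{d\\omega_id\\omega_jd\\omega_k}{d\\log P(\\lambda)\\,d\\lambda}$ on the $\\lambda$-sphere\nvanishes, so the sum over the zeroes of $d\\log P$ equals minus the sum over the remaining poles,\nlocated at the poles of the $d\\omega$'s and, possibly, at $\\lambda=\\infty$; the same argument\napplies in the $5d$ case with $d\\lambda$ replaced by $d\\lambda/\\lambda$.)\n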
This can be done immediately since there are no contributions\nfrom the infinity $\\lambda=\\infty$.\nThe final results have the following form\n\\ba\n\\hbox{\\underline{for the $4d$ case}}:\\\\\nF_{iii}=\\sum_{k\\ne i}{1\\over a_{ik}}+{6\\over a_{iN_c}}+\n\\sum_{k\\ne N_c}{1\\over a_{kN_c}},\n\\\\\nF_{iij}= {3\\over a_{iN_c}}+{2\\over a_{jN_c}}+\\sum_{k\\ne i,j,N_c}\n{1\\over a_{kN_c}}-{a_{jN_c}\\over a_{iN_c}a_{ij}},\n\\ \\ \\ i\\ne j,\\\\\nF_{ijk}=2\\sum_{l\\ne N_c}{1\\over a_{lN_c}}-\\sum_{l\\ne i,j,k,N_c}\n{1\\over a_{lN_c}},\n\\ \\ \\ i\\ne j\\ne k;\n\\ea\n\n\\ba\n\\hbox{\\underline{for the $5d$ case}}:\\\\\n2F_{iii}=\\sum_{k\\ne i}\\coth a_{ik}+\n6\\coth a_{iN_c}+\\sum_{k\\ne N_c}\\coth\n a_{kN_c},\\\\\n2F_{iij}= -\\coth a_{ij}+\n4\\coth a_{iN_c}+2\\coth a_{jN_c}+\\sum_{k\\ne\ni,j,N_c}\\coth a_{kN_c}+N_c,\\ \\ \\ i\\ne j,\\\\\n2F_{ijk}= 2\\sum_{l\\ne N_c}\\coth a_{lN_c}-\\sum_{l\\ne i,j,k,N_c}\\coth\n a_{lN_c}+N_c,\\ \\ \\ i\\ne j\\ne k\n\\ea\nNow it is immediate\nto check that the prepotential in the $4d$ case really is given by the\nformula (\\ref{FA}), while in the $5d$ case -- by (\\ref{FARTC}).\nThese results are in perfect agreement with the results of s.3.\n\nNow let us turn to the case of massive hypermultiplets included and, for the\nsake of simplicity, restrict ourselves to the $4d$ case. Then, there\narise additional differentials corresponding to the derivatives of $dS$\n(\\ref{pertdS}) with respect to masses. They are of the form\n\\ba\\label{massdiff}\ndW_{\\iota}=-{1\\over 2}{d\\lambda\\over \\lambda -\\lambda_{\\iota}}\n\\ea\nAgain, full set of the differentials gives rise to\nthe closed associative algebra.\nAs earlier, this can be proved by reducing the system of differentials to the\npolynomial one (this time the common denominator should be\n$P(\\lambda)Q(\\lambda)$). Then, applying the residue formula, one reproduces\nformula (\\ref{26}) for the perturbative prepotential with $r=1$. In order to\nobtain (\\ref{26}) with arbitrary coefficient $r$, one needs to consider\ninstead of $\\sqrt{Q(\\lambda)}$ in (\\ref{pertcurv})-(\\ref{pertdS})\n$Q(\\lambda)^{r\/2}$. This only changes normalization of the differentials\n(\\ref{massdiff}): $dW_{\\iota}=-{r\\over 2}{d\\lambda\\over \\lambda -\n\\lambda_{\\iota}}$. 
Now it is straightforward\nto use the residue formula\\footnote{Let us note that, despite\nthe differentials\n$dW_{\\iota}$ have the pole at infinity, this does not contribute\ninto the residue formula because of the quadratic pole of $d\\lambda$ in the\ndenominator.}\n\n\\ba\\label{residueQ}\nF_{IJK}=\\stackreb{d\\log {P\\over\\sqrt{Q}}=0}{\\hbox{res}}\n{dW_IdW_JdW_K\\over d\\log {P(\\lambda)\\over\\sqrt{Q(\\lambda)}}d\\lambda},\n\\ \\ \\ \\ \\\n\\left\\{I,J,K,\\dots\\right\\}=\\left\\{i,j,k,\\ldots|\\iota,\\iota',\n\\iota'',\\ldots\\right\\}\n\\ea\nand obtain\n\\ba\\label{perth}\nF_{iii}=\\sum_{k\\ne i}{1\\over \\lambda_{ik}}+{6\\over\n\\lambda_{iN_c}}+ \\sum_{k\\ne N_c}{1\\over\n\\lambda_{kN_c}}-{r\\over 2}\\sum_{\\iota}{\\lambda_{iN_c}\\over\n\\lambda_{i{\\iota}}\\lambda_{{\\iota}N_c}},\n\\\\\nF_{iij}= {3\\over \\lambda_{iN_c}}+{2\\over \\lambda_{jN_c}}+\\sum_{k\\ne i,j,N_c}\n{1\\over \\lambda_{kN_c}}-\n{\\lambda_{jN_c}\\over \\lambda_{iN_c}\\lambda_{ij}}-{r\\over 2}\\sum_{\\iota}{1\\over\n\\lambda_{{\\iota}N_c}},\n\\ \\ \\ i\\ne j,\\\\\nF_{ijk}=2\\sum_{l\\ne N_c}{1\\over \\lambda_{lN_c}}-\\sum_{l\\ne i,j,k,N_c}\n{1\\over \\lambda_{lN_c}}-{r\\over 2}\\sum_{\\iota}{1\\over\\lambda_{{\\iota}N_c}},\n\\ \\ \\ i\\ne j\\ne k;\\\\\nF_{ii{\\iota}}= {r\\over 2}\\left({1\\over\\lambda_{i{\\iota}}}-{1\\over\\lambda_{{\\iota}N_c}}\n\\right);\\\\\nF_{ij{\\iota}}=-{r\\over 2}{1\\over\\lambda_{{\\iota}N_c}},\\ \\ \\ i\\ne j;\\\\\nF_{i{\\iota}{\\iota}}=-{r\\over 2}{\\lambda_{iN_c}\\over\\lambda_{i{\\iota}}\n\\lambda_{{\\iota}N_c}};\\\\\nF_{{\\iota}{\\iota}{\\iota}}=\n{r\\over 2}\\sum_i{1\\over\\lambda_{{\\iota}i}}+{r^2\\over 4}\\sum_{{\\iota}'}\n{1\\over\\lambda_{{\\iota}{\\iota}'}};\\\\\nF_{{\\iota}{\\iota}{\\iota}'}=\n-{r^2\\over 4}{1\\over \\lambda_{{\\iota}{\\iota}'}},\\ \\ \\ {\\iota}\\ne {\\iota}';\\\\\nF_{i{\\iota}{\\iota}'}=F_{{\\iota}{\\iota}'{\\iota}''}=0\n\\ea\nThese formulas immediately lead to the prepotential (\\ref{26}) upon the\nidentification $\\lambda_i=a_i$ and $\\lambda_{\\iota}=m_{\\iota}$.\n\nAt last, let us consider the case of other classical groups. We also take into\naccount the massive hypermultiplets. Let us also note that, in this case, the\ncurve is invariant w.r.t. the involution $\\rho: \\lambda\\to\n-\\lambda$. Among all the holomorphic differentials there is a subset of the\n$\\rho$-odd ones that can be obtained by differentiating the $\\rho$-odd\ngenerating differential $dS\\cong \\lambda\\left(\\sum_i {2\\lambda d\\lambda\\over\n\\lambda^2-a_i^2}-r\\sum_{\\iota} {\\lambda d\\lambda\\over \\lambda^2-m_{\\iota}^2}-\ns{d\\lambda\\over\\lambda}\\right)$ w.r.t. the moduli:\n\\ba\ndW_i={2a_i d\\lambda\\over \\lambda^2-a_i^2},\\ \\ \\ \\\ndW_{\\iota}=-{r m_{\\iota} d\\lambda\\over \\lambda^2-m_{\\iota}^2}\n\\ea\nLook at the algebra of these differentials:\n\\ba\\label{ch}\n{d\\lambda\\over \\lambda^2-\\lambda_I^2} {d\\lambda\\over \\lambda^2-\\lambda_J^2}\n\\sim C_{IJ}^K {d\\lambda\\over \\lambda^2-\\lambda_K^2} dW+\n\\xi_{IJ}(\\lambda)\n\\left(\\sum_i {2\\lambda d\\lambda\\over \\lambda^2-a_i^2}-r\\sum_{\\iota}\n{\\lambda d\\lambda\\over \\lambda^2-m_{\\iota}^2}-s{d\\lambda\\over\\lambda}\\right)\n\\ea\nTo understand that this algebra is closed and associative, it is sufficient\nto note that the l.h.s. of (\\ref{ch}) is a linear combination of the\ndifferentials ${(d\\lambda)^2\\over \\lambda^2-\\lambda_I^2}$ and ${(d\\lambda)^2\n\\over (\\lambda^2-\\lambda_I^2)^2}$ -- totally $2N\\equiv 2n+2N_f$ independent\nquadratic $\\rho$-even differentials. 
However, one should also take into\naccount the condition which cancels the pole at infinity of\n${(d\\lambda)^2\\over \\lambda^2-\\lambda_I^2}$. Thus, the number of\n{\\it holomorphic} $\\rho$-even quadratic differentials is equal to $2N-1$.\nThis quantity perfectly suits the $2N-1$ adjusting parameters in the r.h.s.\nof (\\ref{ch}) -- $N$ $\\rho$-odd differentials in the first term and $N-1$\n$\\rho$-even differentials in the second term. Indeed, even differentials are\n${\\lambda d\\lambda\\over \\lambda^2-\\lambda_I^2}$, i.e. have the pole at\ninfinity. The same does the\n${dw\\over w}=\\left(\\sum_i {2\\lambda d\\lambda\\over \\lambda^2-a_i^2}-\\sum_{\\iota}\n{r\\lambda d\\lambda\\over \\lambda^2-m_{\\iota}^2}-s{d\\lambda\\over\\lambda}\\right)$.\nHowever, since all these quantities can be rewritten as depending on\n$\\lambda^2$, the cancellation of the pole of the differentials leads to\ntheir large-$\\lambda$ behaviour ${d\\lambda\\over\\lambda^3}$ that\nsimultaneously cancels the pole of ${dw\\over w}$. Thus, there is exactly one\ncondition imposed onto $N$ $rho$-even differentials.\n\nThe associativity is checked in complete analogy with the above\nconsideration -- there are $3N-3$ holomorphic $\\rho$-odd cubic differentials\nand exactly $N+(2N-3)=3N-3$ adjusting parameters.\nThus, the algebra (\\ref{ch}) is really closed associative\\footnote{In sect.5.5\nwe show how this algebra reduces to the\npolynomial one.}. The application of the residue formula (\\ref{residueQ})\ngives now (\\ref{adjot}).\n\nNow we turn to the less trivial examples corresponding to full hyperelliptic\ncurves.\n\n\\subsection{Pure gauge model (Toda chain system)}\nThis example was analyzed in detail in ref.\\cite{MMM}.\n\nThe spectral curve has the form\n\\ba\\label{specTC}\n{\\cal P}(\\lambda,w)=2P(\\lambda)-w-{1\\over w},\\ \\ \\\nP(\\lambda ) \\equiv \\sum^{N_c}_{k=0} s_k\\lambda ^k\n\\ea\nand can be represented in the standard hyperelliptic form after the change of\nvariables $Y = \\frac{1}{2}\\left(w - \\frac{1}{w}\\right)$\n\\ba\nY^2 = P^2(\\lambda ) - 1\n\\ea\nand is of genus $g = N_c-1$\n\\footnote{One can take for the $g$ moduli\nthe set $\\{h_k\\}$ or instead the set of periods $\\{a_i\\}$.\nThis particular family is associated with the Toda-chain\nhierarchy, $N_c$ being the length of the chain.}.\nNote that $s_{N_c}=1$ and that the condition of the unit determinant of the\nmatrix in the group $SU(N_c)$ implies here that $s_{N_c-1}=0$ (this is\nequivalent to the condition $\\sum a_i=0$).\nThe generating differential $dS$ has the form\n\\ba\\label{dSTC}\ndS=\\lambda{dw\\over w}=\\lambda{dP\\over Y}\n\\ea\n\nNow let us discuss associativity of the algebra of holomorphic differentials\n\\ba\nd\\omega_i(\\lambda )d\\omega_j(\\lambda ) =\nC_{ij}^k d\\omega_k(\\lambda )\ndW(\\lambda ) \\ {\\hbox{mod}}\\ {dw\\over w}\n\\label{.}\n\\ea\nWe require that $dW(\\lambda)$ and $dP(\\lambda )$ are co-prime.\n\nIf the algebra (\\ref{.}) exists, the structure constants\n$C_{ij}^k$ satisfy the associativity condition.\nBut we still need to show that\nit indeed exists, i.e. that once $dW$ is given, one can find\n($\\lambda $-independent) $C_{ij}^k$. 
This is a simple exercise:\nall $d\\omega_i$ are linear combinations of\n\\ba\\label{match}\ndv_k(\\lambda ) = \\frac{\\lambda ^{k}d\\lambda }{Y}\n\\cong {\\partial dS\\over\\partial s_k}, \\ \\ \\ k=0,\\ldots,g-1\n\\ea\nwhere\n\\ba\ndv_k(\\lambda ) = \\sigma_{ki}d\\omega_i(\\lambda ), \\ \\ \\\nd\\omega_i = (\\sigma^{-1})_{ik}dv_k, \\ \\ \\\n\\sigma_{ki} = \\oint_{A_i}dv_k,\n\\label{sigmadef}\n\\ea\nalso $dW(\\lambda ) = q_kdv_k(\\lambda )$.\nThus, (\\ref{.}), after multiplying all 1-differentials by\n$Y$, reduces to the algebra of polynomials\n$\\left(\\sigma^{-1}_{ii'}\\lambda ^{i'-1}\\right)$ that is as usual closed and\nassociative. This means that $C_{ij}^k$ satisfy the\nassociativity condition\n\\ba\nC_iC_j = C_j C_i\n\\ea\nNow the residue formula has the form \\cite{MMM}:\n\\ba\nF_{ijk} = \\frac{\\partial^3F}{\\partial a_i\\partial a_j\n\\partial a_k} = \\frac{\\partial F_{ij}}{\\partial a_k} = \\nonumber \\\\\n= \\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_id\\omega_j\nd\\omega_k}{d\\lambda\\left(\\frac{dw}{w}\\right)} =\n\\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_id\\omega_j\nd\\omega_k}{d\\lambda\\frac{dP}{Y}} =\n\\sum_{\\alpha} \\frac{\\hat\\omega_i(\\lambda_\\alpha)\\hat\\omega_j\n(\\lambda_\\alpha)\\hat\\omega_k(\\lambda_\\alpha)}{P'(\\lambda_\\alpha)\n\/\\hat Y(\\lambda_\\alpha)}\n\\label{v}\n\\ea\nThe sum at the r.h.s. goes over all the $2g+2$ ramification points\n$\\lambda_\\alpha$ of the hyperelliptic curve (i.e. over the zeroes\nof $Y^2 = P^2(\\lambda )-1 = \\prod_{\\alpha=1}^N(\\lambda - \\lambda_\\alpha)$);\n\\ $d\\omega_i(\\lambda) = (\\hat\\omega_i(\\lambda_\\alpha) +\nO(\\lambda-\\lambda_\\alpha))d\\lambda$,\\ $\\ \\ \\ \\hat Y^2(\\lambda_\\alpha) =\n\\prod_{\\beta\\neq\\alpha}(\\lambda_\\alpha - \\lambda_\\beta)$.\n\nNow following the line of sect.2, we can\ndefine the metric:\n\\ba\n\\eta_{kl}(dW) =\n\\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_kd\\omega_l\ndW}{d\\lambda\\left(\\frac{dw}{w}\\right)} =\n\\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_kd\\omega_l\ndW}{d\\lambda\\frac{dP}{Y}} = \\\\ =\n\\sum_{\\alpha} \\frac{\\hat\\omega_k(\\lambda_\\alpha)\\hat\\omega_l\n(\\lambda_\\alpha)\\hat W(\\lambda_\\alpha)}{P'(\\lambda_\\alpha)\n\/\\hat Y(\\lambda_\\alpha)}\n\\label{vv}\n\\ea\nIn particular, for $dW = d\\omega_k$ (which is evidently co-prime with\n${dP\\over Y}$), $\\eta_{ij}(d\\omega_k) = F_{ijk}$ -- this choice immediately\ngive rise to the WDVV in the form (5) of paper \\cite{MMM}.\n\nGiven (\\ref{.}), (\\ref{v}) and (\\ref{vv}), one can check that\n\\ba\nF_{ijk} = \\eta_{kl}(dW)C_{ij}^k(dW).\n\\label{vvv}\n\\ea\nNote that $F_{ijk} = {\\partial^3F\\over\\partial a_i\\partial a_j\n\\partial a_k}$ at the l.h.s. of (\\ref{vvv}) is independent\nof $dW$! The r.h.s. 
of (\\ref{vvv}) is equal to:\n\\ba\n\\eta_{kl}(dW)C_{ij}^l(dW) =\n\\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_kd\\omega_l\ndW}{d\\lambda\\left(\\frac{dw}{w}\\right)} C_{ij}^l(dW)\n\\stackrel{(\\ref{.})}{=} \\\\ =\n\\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_k}\n{d\\lambda\\left(\\frac{dw}{w}\\right)}\n\\left(d\\omega_id\\omega_j - p_{ij}\\frac{dPd\\lambda}{Y^2}\\right) =\nF_{ijk} - \\stackreb{d\\lambda =0}{{\\hbox{res}}} \\frac{d\\omega_k}\n{d\\lambda\\left(\\frac{dP}{Y}\\right)}p_{ij}(\\lambda)\\frac{dPd\\lambda}{Y^2}\n= \\\\ = F_{ijk} - \\stackreb{d\\lambda =0}{{\\hbox{res}}}\n\\frac{p_{ij}(\\lambda )d\\omega_k(\\lambda)} {Y}\n\\ea\nIt remains to prove\nthat the last item is indeed vanishing for any $i,j,k$.\nThis follows from the\nfact that $\\frac{p_{ij}(\\lambda )d\\omega_k(\\lambda )}{Y}$\nis singular only at zeroes of $Y$, it is not singular at\n$\\lambda =\\infty$ because\n$p_{ij}(\\lambda)$ is a polynomial of low enough degree\n$g-2 < g+1$. Thus the sum of its residues at ramification points\nis thus the sum over {\\it all} the residues and therefore vanishes.\n\nThis completes the proof of associativity condition for any $dW$.\n\n\\subsection{$5d$ pure gauge theory (Relativistic Toda chain)}\nThe above proof can be almost literally transferred to the case of the\nrelativistic Toda chain system corresponding to the $5d$ $N=2$ SUSY pure gauge\nmodel with one compactified dimension \\cite{N}. The main reason is\nthat in the case of relativistic Toda chain the spectral curve is a minor\nmodification of (\\ref{specTC}) having the form \\cite[eq.(2.23)]{N}\n\\ba\\label{specRTC}\nw + {1\\over w} = \\left(\\zeta\\lambda\\right)^{-N_c\/2}P(\\lambda ),\n\\ea\nwhich can be again rewritten as a {\\em hyperelliptic} curve in terms of the\nnew variable $Y\\equiv \\left(\\zeta\\lambda\\right)^{N_c\/2}\\left(w-\n{1\\over w}\\right)$\n\\ba\nY^2 = P^2(\\lambda ) - 4\\zeta^{2N_c}\\lambda^{N_c}\n\\ea\nwhere $\\lambda \\equiv e^{2\\xi}$, $\\xi $ is the \"true\" spectral\nparameter of the relativistic Toda chain and $\\zeta$ is its coupling constant.\n\nThe difference with the $4d$ case is at the following two points.\nThe first is that this time $s_0\\sim \\prod e^{a_i}=1$\n\\footnote{This formula looks quite natural since the relativistic Toda chain\nis a sort of \"group generalization\" of the ordinary Toda chain.} but instead\n$s_{N_c-1}$ no longer vanishes and becomes a modulus.\n\nThe second new point is that the generating differential instead of\n(\\ref{dSTC}) is now\n\\ba\\label{dSRTC}\ndS^{(5)} = \\xi{dw\\over w} \\sim \\log\\lambda {dw\\over w}\n\\ea\nso that its periods are completely different from those of $dS^{(4)}$.\nHowever, the set of holomorphic differentials\n\\ba\ndv_k = {\\partial dS^{(5)}\n\\over\\partial s_k} \\cong {\\lambda^{k-1}d\\lambda\\over Y},\n\\ \\ \\ k=1,...,g\n\\ea\nliterally coincides with the set (\\ref{match}), because $k$ now runs through\n$\\ 1,\\ldots,g\\ $ instead of $\\ 0,\\ldots,g-1\\ $ but there is the extra factor of\n$\\lambda$ in the denominator. Thus, the algebra of differentials remains just\nthe same closed associative algebra, but the residue formula is a little\nmodified.\n\nIndeed, when passing from the first line of (\\ref{resfor}) to the second\nline, there will arise the extra factor $\\lambda$ due to the shift of the\nindex of differentials $dv_k$ by 1 in the $5d$ case as compared with the $4d$\none. 
\subsection{$5d$ pure gauge theory (Relativistic Toda chain)}
The above proof can be almost literally transferred to the case of the
relativistic Toda chain system corresponding to the $5d$ $N=2$ SUSY pure gauge
model with one compactified dimension \cite{N}. The main reason is
that in the case of the relativistic Toda chain the spectral curve is a minor
modification of (\ref{specTC}), having the form \cite[eq.(2.23)]{N}
\ba\label{specRTC}
w + {1\over w} = \left(\zeta\lambda\right)^{-N_c/2}P(\lambda ),
\ea
which can again be rewritten as a {\em hyperelliptic} curve in terms of the
new variable $Y\equiv \left(\zeta\lambda\right)^{N_c/2}\left(w-
{1\over w}\right)$
\ba
Y^2 = P^2(\lambda ) - 4\zeta^{2N_c}\lambda^{N_c}
\ea
where $\lambda \equiv e^{2\xi}$, $\xi $ is the ``true'' spectral
parameter of the relativistic Toda chain and $\zeta$ is its coupling constant.

The difference from the $4d$ case lies in the following two points.
The first is that this time $s_0\sim \prod e^{a_i}=1$,
\footnote{This formula looks quite natural since the relativistic Toda chain
is a sort of ``group generalization'' of the ordinary Toda chain.} while instead
$s_{N_c-1}$ no longer vanishes and becomes a modulus.

The second new point is that the generating differential, instead of
(\ref{dSTC}), is now
\ba\label{dSRTC}
dS^{(5)} = \xi{dw\over w} \sim \log\lambda {dw\over w}
\ea
so that its periods are completely different from those of $dS^{(4)}$.
However, the set of holomorphic differentials
\ba
dv_k = {\partial dS^{(5)}
\over\partial s_k} \cong {\lambda^{k-1}d\lambda\over Y},
\ \ \ k=1,\ldots,g
\ea
literally coincides with the set (\ref{match}): $k$ now runs through
$\ 1,\ldots,g\ $ instead of $\ 0,\ldots,g-1\ $, but this shift is compensated by
the extra factor of $\lambda$ in the denominator. Thus, the algebra of
differentials remains just the same closed associative algebra, while the
residue formula is slightly modified.

Indeed, when passing from the first line of (\ref{resfor}) to the second
line, an extra factor of $\lambda$ arises due to the shift of the
index of the differentials $dv_k$ by 1 in the $5d$ case as compared with the $4d$
one. Thus, finally, the residue formula acquires the following form
\ba
F_{ijk} =
\stackreb{d\lambda=0}{{\hbox{res}}} \frac{d\omega_id\omega_j
d\omega_k}{\left(\frac{d\lambda}{\lambda}\right)\left(\frac{dw}{w}\right)} =
\stackreb{d\lambda=0}{{\hbox{res}}} \frac{d\omega_id\omega_j
d\omega_k}{\left(\frac{d\lambda}{\lambda}\right)\left(\frac{dP}{Y}\right)} =
\sum_{\alpha} \lambda_{\alpha}\frac{\hat\omega_i(\lambda_\alpha)\hat\omega_j
(\lambda_\alpha)\hat\omega_k(\lambda_\alpha)}{P'(\lambda_\alpha)
/\hat Y(\lambda_\alpha)}
\ea

\subsection{Pure gauge theory with $B_n$, $C_n$ and $D_n$ gauge groups}
Now we briefly consider the case of the pure gauge theory with the groups
$G\ne A_n$. This example is of importance, since it demonstrates that, in the
cases when naively there is no closed associative algebra of holomorphic
1-differentials, it actually exists due to some additional symmetries of the
curve.

The curve in this case has the form (\ref{curven}) with $Q_0(\lambda)
=\lambda^{2s}$ as in
(\ref{charpo}), while the generating differential is (\ref{dSn}). After the
change of variables
\ba
2Y(\lambda)=w-{Q_0(\lambda)\over w}
\ea
the curve acquires the standard hyperelliptic form
\ba\label{hra}
Y^2=P^2(\lambda)-Q_0(\lambda)
\ea
Naively, we can repeat the procedure applied in the previous examples. The
new problem, however, is that the genus of this curve is now $2n-1$ (see
(\ref{charpo})), while there are only $n$ moduli and, therefore, $n$
holomorphic differentials which can be obtained from the generating
differential (\ref{dSn}). Therefore, one might expect that the algebra formed
by these differentials is too small to be closed (and associative).

However, it turns out that, due to the additional $Z_2$-symmetry, this is not
the case. Indeed, let us note that the curve (\ref{hra}) admits the
involution $\rho:\lambda\to -\lambda$ (in addition to the involution $Y\to -Y$).
Therefore, in the complete set of holomorphic $\sigma$-odd
differentials on the
hyperelliptic curve (\ref{hra}) (see sect.3.4)
$dv_i={\lambda^i d\lambda\over Y}$,
$i=0,\ldots,2n-2$, there are $n-1$ $\rho$-even and $n-1$ $\rho$-odd
differentials. Keeping in mind the rational examples of sect.5.2, we should
look at the subalgebra formed by the $\rho$-odd differentials, which are the
only ones that can be
obtained from the generating differential $dS$, i.e. at $dv_i$ with $i=2k$ --
or at the canonical $\rho$-odd differentials $d\omega^o_{i}$
\ba\label{pred}
d\omega^o_id\omega^o_j=C^k_{ij}d\omega^o_kdW\ \ \hbox{mod}\;{dw\over w}
\ea
Since ${dw\over w}$ is $\rho$-even, it can be multiplied only by $\rho$-even
holomorphic differentials. On the other hand, there are $2n-2$ $\rho$-even
quadratic holomorphic differentials. This perfectly matches the number of
free adjusting parameters on the r.h.s. of (\ref{pred}), i.e. the algebra
(\ref{pred}) is really closed (and associative, as one can check by
analogous counting).

The closedness and associativity of this
algebra can be demonstrated in
another, easier way -- as was done in our previous examples.
Indeed, let us multiply all the differentials by $\lambda Y$, which is the
common denominator (since ${dw\over w}={dP\over Y}+
s\left(1-{P\over Y}\right){d\lambda\over \lambda}$). Then the algebra is
reduced (as usual in the hyperelliptic case) to a polynomial algebra. More
precisely, this is the closed associative algebra of $n-1$ odd polynomials
$p_i^o$ of degree $2n-3$ factorized over the ideal generated by the even polynomial
$\lambda P'(\lambda)$ of degree $2n$:
\ba
p_i^o(\lambda)p_j^o(\lambda)=C^k_{ij}p_k^o(\lambda)W(\lambda)
+\xi_{ij}^e(\lambda)\lambda P'(\lambda)
\ea
where $\xi_{ij}^e$ is some even polynomial. The degree of this expression is
$4n-6$: on the l.h.s. there are $2n-3$ independent coefficients of an
even polynomial of degree $4n-6$ with no constant term, while on the r.h.s.
there are $n-1$ adjusting parameters $C_{ij}^k$ and $n-2$ adjusting
coefficients of the polynomial $\xi^e_{ij}(\lambda)$, i.e. $2n-3$ free
parameters in total. This counting proves that the described algebra is
closed (and associative as a polynomial algebra).
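The counting can be made completely explicit. The following sketch
(Python with {\tt sympy}; the polynomials below are randomly chosen
illustrative data for $n=3$, not the actual $B_n$, $C_n$ or $D_n$ curve data)
verifies that every product $p^o_ip^o_j$ admits a unique decomposition of the
above form with $2n-3=3$ adjustable parameters:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')
n = 3                                       # illustrative rank

Peven = lam**6 - 4*lam**4 + 2*lam**2 - 7    # generic even P(lambda), deg 2n
ideal = sp.expand(lam*sp.diff(Peven, lam))  # even ideal generator lambda*P'
p = [lam**3 + 2*lam, 3*lam**3 - lam]        # n-1 generic odd polys, deg 2n-3
W = lam**3 + lam                            # generic odd W(lambda)

C1, C2, xi0 = sp.symbols('C1 C2 xi0')       # 2n-3 = 3 adjusting parameters
                                            # (xi^e has degree 2n-6 = 0 here)
for i in range(n - 1):
    for j in range(n - 1):
        lhs = sp.expand(p[i]*p[j])
        rhs = sp.expand((C1*p[0] + C2*p[1])*W + xi0*ideal)
        # both sides are even, of degree 4n-6, with no constant term:
        # match the coefficients of lambda^2, lambda^4, lambda^6
        eqs = [(lhs - rhs).coeff(lam, k) for k in (2, 4, 6)]
        sol = sp.solve(eqs, [C1, C2, xi0], dict=True)
        assert len(sol) == 1                # decomposition exists, is unique
        print((i, j), sol[0])
\end{verbatim}
For larger $n$ the even polynomial $\xi^e_{ij}$ acquires more coefficients,
but the balance of $2n-3$ equations against $2n-3$ parameters persists.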
\subsection{Hypermultiplets in the fundamental representation
(Spin chains)}
To conclude the section, we consider the example of the hyperelliptic curve
associated with the $SU(N_c)$ Yang-Mills theory with massive
hypermultiplets.

The set of holomorphic 1-differentials $\{d\omega_i\}$ relevant
in the Toda chain case (s.5.3 and \cite{MMM}) should
now be enlarged to include meromorphic ones, with simple poles at
the points $m_{\iota}^\pm$. The spectral curve is given by:
\ba\label{qcd}
{\cal P}(\lambda,w) = 2P(\lambda) -
w - \frac{Q(\lambda)}{w}
\ea
where
\ba
P(\lambda) = \sum_{k=0}^{N_c} s_k\lambda^k + R(\lambda) \ \ \ \ \ \
Q(\lambda) = \prod_{\iota=1}^{N_f} (\lambda - m_{\iota})
\ea
$s_k$ are the Schur polynomials of the Hamiltonians $h_k$ and do not
depend on $m_{\iota}$, while
$R(\lambda)$ is instead an $h_k$-independent polynomial in $\lambda$
of degree $N_c-1$, which is non-vanishing only for $N_f>N_c$
\cite{HO} (see also \cite{SWI2}).
It is convenient to introduce another function $Y$
\ba
2Y = w - \frac{Q(\lambda)}{w}, \ \ \ \ \
Y^2 = P^2(\lambda) - Q(\lambda) = \prod(\lambda - \lambda_\alpha)
\ea
The generating differential
\ba\label{dS1}
dS = \lambda\frac{dw}{w} = \lambda\left(
\frac{dP}{Y} - \frac{PdQ}{2QY} + \frac{dQ}{2Q}\right)
\ea
satisfies the defining property
\ba
\frac{\partial dS}{\partial a_k} = d\omega_k\ \
\left(\{ dW_a\} = \frac{Pol}{QY}d\lambda\right)
\ea
and the multiplication algebra acquires the form:
\ba
dW_a dW_b = C_{ab}^c dW_cdW \ {\hbox{mod}}\ \frac{dw}{w}
\ea
with $a = 1,\ldots,g+ N_f = \{i,\iota\}$, $g = N_c-1$. The general formulas in the
particular case (\ref{qcd}) are
\ba
{\partial dS\over \partial a_I} \cong
{{\partial P\over\partial a_I}- {1\over 2w}{\partial Q\over\partial a_I}
\over {P'-{Q'\over 2w}}}{dw\over w}
\equiv {\cal T}_I{dw\over {\cal P}'w}
\ea
and
\ba
{dw\over w} = {dP\over Y} - {dQ\over 2Y(P+Y)}
\nonumber \\
\left.Y{dw\over w}\right|_{Y=0} = dP - {dQ\over 2P}
\ea
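The two representations of ${dw\over w}$ (and hence of $dS$) written above
are equivalent on the curve. A minimal symbolic check (Python with
{\tt sympy}; here $P$, $Q$, $Y$ and the differentials $dP$, $dQ$ are treated
as values at a point, subject only to $Y^2=P^2-Q$ and $w=P+Y$) reads:
\begin{verbatim}
import sympy as sp

P, Q, Y, dP, dQ = sp.symbols('P Q Y dP dQ')

dY = (2*P*dP - dQ)/(2*Y)                 # differential of Y^2 = P^2 - Q
w, dw = P + Y, dP + dY                   # the branch with 2Y = w - Q/w

dlogw = dw/w
form1 = dP/Y - dQ/(2*Y*(P + Y))
form2 = dP/Y - P*dQ/(2*Q*Y) + dQ/(2*Q)

for form in (form1, form2):
    # impose the curve relation by eliminating Q = P^2 - Y^2
    assert sp.simplify((dlogw - form).subs(Q, P**2 - Y**2)) == 0
print("dw/w = dP/Y - dQ/(2Y(P+Y)) = dP/Y - P dQ/(2QY) + dQ/(2Q)")
\end{verbatim}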
The calculations (along the lines of \cite{MMM}) lead to the same conclusion:
\ba
a_k = \oint_{A_k} dS, \ \ \ \
\oint \frac{\partial\ dS}{\partial s_l} = \oint
\frac{\lambda^ld\lambda}{Y}
\ea
\ba
\frac{\partial F_{ij}}{\partial
 a_k} =
\frac{\partial F_{ij}}{\partial \lambda_\alpha}
\frac{\partial\lambda_\alpha}{\partial s_l}
\frac{\partial s_l}{\partial a_k}
\ea
The first derivative is (as usual for any hyperelliptic curve)
equal to
$\hat\omega_i\hat\omega_j(\lambda_\alpha)$, and the
last one to $\sigma^{-1}_{kl}$. In order to find the
middle one, consider
\ba
\left.\frac{\partial}{\partial s_l}
\left(P^2 - Q = \prod_\beta (\lambda - \lambda_\beta)\right)
\right|_{\lambda = \lambda_\alpha}
\ea
It gives
\ba
\left. 2P\frac{\partial P}{\partial s_l} =
(2PP' - Q')\frac{\partial \lambda_\alpha}{\partial s_l}
\right|_{\lambda = \lambda_\alpha}
\ea
i.e.
\ba
\frac{\partial\lambda_\alpha}{\partial s_l} = \left.
\frac{\lambda^ld\lambda}{dP - \frac{1}{2}\frac{dQ}{P}}
\right|_{\lambda = \lambda_\alpha}
\ea
Since $Y=0$ at $\lambda = \lambda_\alpha$, we obtain:
\ba
\frac{\partial\lambda_\alpha}{\partial s_l} =
 \left.\frac{\lambda_\alpha^ld\lambda}{Y\frac{dw}{w}}
\right|_{\lambda = \lambda_\alpha} =
\left.\sigma_{lk}\frac{d\omega_k}{\frac{dw}{w}}
\right|_{\lambda = \lambda_\alpha}
\ea
Putting all this together, we obtain:
\ba
\frac{\partial F_{ij}}{\partial a_k} =
-\stackreb{d\lambda = 0}{\hbox{res}}
\frac{d\omega_id\omega_jd\omega_k}{d\lambda\frac{dw}{w}} =
\stackreb{d\omega = 0}{\hbox{res}}
\frac{d\omega_id\omega_jd\omega_k}{d\lambda\frac{dw}{w}}
\ea
Since the differentials in the numerator have no poles, the
integration contour that goes around the zeroes of $d\lambda$
can also be considered as encircling the zeroes of
$d\omega = \frac{dw}{w}$ -- and we reproduce the general
result (\ref{resfor}).

A similar identity can be proved for the expressions containing
one derivative w.r.t. the remaining moduli $m_{\iota} \sim \oint dS$,
as we can still work with the period matrix. More concretely, since
\ba\label{dW1}
dW_{\iota} \cong \frac{\partial dS}{\partial m_{\iota}} \cong -
\frac{d\lambda}{w}\frac{\partial w}{\partial m_{\iota}} = -
\frac{d\lambda}{Y}\left(\frac{Q}{2(P+Y)(\lambda - m_{\iota})} +
\frac{\partial R}{\partial m_{\iota}}\right)
\ea
the derivative of the ramification point is
\ba
\frac{\partial \lambda_\alpha}{\partial m_{\iota}} =
-\left. \frac{Q}{2P}\frac{d\lambda}{Y\frac{dw}{w}}
\left(\frac{1}{\lambda - m_{\iota}} +
\frac{2P\frac{\partial R}{\partial m_{\iota}}}{Q}\right)\right|_{\lambda =
\lambda_\alpha} =-
\left. \frac{d\lambda}{Y\frac{dw}{w}}
\left(\frac{Q}{2(P+Y)(\lambda - m_{\iota})} +
\frac{\partial R}{\partial m_{\iota}}\right)\right|_{\lambda =
\lambda_\alpha}
=\left.\frac{dW_{\iota}}{\frac{dw}{w}}
\right|_{\lambda = \lambda_\alpha}
\ea
Thus
\ba
\frac{\partial F_{ij}}{\partial m_{\iota}} =
\frac{\partial F_{ij}}{\partial \lambda_\alpha}
\frac{\partial\lambda_\alpha}{\partial m_{\iota}} =
-\stackreb{d\lambda = 0}{\hbox{res}}
\frac{d\omega_i d\omega_j dW_{\iota}}{d\lambda \frac{dw}{w}}
= \stackreb{\frac{dw}{w} = 0}{\hbox{res}}
\frac{d\omega_i d\omega_j dW_{\iota}}{d\lambda \frac{dw}{w}}
\ea
again in accordance with (\ref{resfor}).
Note that, according to our conventions, the punctures $m_\iota$
are included in the set of zeroes of $d\lambda$ (along with
the ramification points $\lambda_\alpha$).

Double and triple derivatives of ${\cal F}$ with respect to the
masses $m_\iota$ cannot be obtained from the ordinary
period matrices of the hyperelliptic curves. They can only be
evaluated by the general method of s.4.
This calculation,
which is the subject of the next subsection, s.\ref{rfmd},
also helps to illustrate the possible subtleties of the general proof.

Let us note that there are two natural choices of the generating differential
$dS$ for the case of the theory with fundamental matter. One choice is given by
formula (\ref{dS1}) used throughout this subsection. Another one is
given by $dS=\lambda{d\hat w\over\hat w}=-{1\over 2}\lambda d\log{P-Y\over P+Y}$,
where $\hat w$ is defined in another parameterization of the curve, $\hat w
+{1\over\hat w}={P\over\sqrt{Q}}$. The latter choice is
the one usually accepted in the literature \cite{SW2,SWI2,HO,AS}, and has been used in
the course of our perturbative consideration of sect.5.2. Note that the
differentials (\ref{dW1})
\ba
dW_{\iota}^{(1)} \cong
-\frac{d\lambda}{Y}\left(\frac{P-Y}{2(\lambda - m_{\iota})} +
\frac{\partial R}{\partial m_{\iota}}\right)=
dW^{(2)}_{\iota}+{1\over 2}{d\lambda\over\lambda -m_{\iota}}
\ea
have poles at $m_{\iota}$ on one of the sheets only, while the differentials
$dW^{(2)}_{\iota}$ associated with the other $dS$
possess poles on both sheets.
However, these two choices give rise to identical results.
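The relation $dW^{(1)}_{\iota}=dW^{(2)}_{\iota}+{1\over 2}{d\lambda\over
\lambda-m_{\iota}}$ written above is a one-line computation; a small check of
it (Python with {\tt sympy}, with $P$, $Y$ and $\partial R/\partial m_\iota$
treated as values at a point) is:
\begin{verbatim}
import sympy as sp

lam, m, P, Y, dR = sp.symbols('lambda m P Y dR')

dW1 = -((P - Y)/(2*(lam - m)) + dR)/Y        # coefficient of d lambda
dW2 = -(P/(2*Y))/(lam - m) - dR/Y

assert sp.simplify(dW1 - dW2 - sp.Rational(1, 2)/(lam - m)) == 0
print("dW^(1) = dW^(2) + d lambda/(2(lambda - m_iota))")
\end{verbatim}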
\subsection{Residue formula for mass differentials \label{rfmd}}

To make our consideration of concrete examples exhaustive, in this
subsection we discuss the subtle points arising in the course of the
derivation of the residue formula when two or all three of the differentials
$dv_I$ are associated
with punctures (masses). This is the case when the $\Sigma$-term (\ref{sigma})
contributes to the result. In what follows we demonstrate
how this specific case of the residue formula can be treated in the example
of the theory with a matter hypermultiplet in the fundamental representation,
starting with the perturbative limit. To match the
notations of sect.5.2, we use the
generating differential $dS=\lambda{d\hat w\over\hat w}=-{1\over 2}\lambda
d\log{P-Y\over P+Y}$ and, accordingly, the holomorphic differentials
$dW_{\iota}^{(2)} \cong -{P\over 2Y}\frac{d\lambda}{(\lambda - m_{\iota})} -
\frac{\partial R}{\partial m_{\iota}}{d\lambda\over Y}$. Then,
in the perturbative limit
this $dW_{\iota}$ acquires the form (\ref{massdiff}), with
$s_{\iota}=\lambda_{\iota}$.

First, we derive formula (\ref{resch})
in the perturbative limit with two mass
differentials. The symmetry of the l.h.s. of (\ref{resch})
w.r.t. permutations of the
indices is not obvious, since the individual terms on the l.h.s.
of (\ref{resch}) are not symmetric and one needs
to consider the entire sum. Therefore, for illustrative purposes, we
discuss here all possible arrangements of the indices in (\ref{resch}) --
$\{KLM\}=\{\iota,\iota,\iota'\}$, $\{KLM\}=\{\iota,\iota',\iota\}$ and
$\{KLM\}=\{\iota',\iota,\iota\}$ -- checking that they lead to the same result.
Note that the term $\oint_{A_I}{\partial dv_L\over\partial s_M}
\int_{{\partial B_I\over \partial s_K}}dS$ is zero in all the cases considered below.
The other terms are equal to
\ba\label{sigma1}
\Sigma_{KLM}^{(1)}\equiv
\stackreb{\frac{\partial v_L}{\partial \lambda_M}=0}{\hbox{res}}
\left(dv_K \frac{\partial v_L}{\partial \lambda_M}\right) =
\left\{\begin{array}{cl}
0 & \hbox{when}\ \ \ K=L =\iota,\ M=\iota'
\\
0 & \hbox{when}\ \ \ K=M =\iota,\ L=\iota'
\\
-{1\over 4}{1\over\lambda_{\iota\iota'}} & \hbox{when}\ \ \ L=M =\iota,\
K=\iota'
\end{array} \right.
\ea

\ba
\Sigma_{KLM}^{(2)}\equiv\sum_I\oint_{A_I}dv_K {\partial\over\partial s_M}\int_{{\partial
B_I\over \partial \lambda_L}}dS=
\left\{\begin{array}{cl}
-{1\over 4}{1\over \lambda_{\iota\iota'}}& \hbox{when}\ \ \
K=L=\iota,\ M=\iota'
\\
0 & \hbox{when}\ \ \ K=M=\iota,\ L=\iota'
\\
0 & \hbox{when}\ \ \ L=M=\iota,\ K=\iota'
\end{array}
\right.
\ea

\ba
\Sigma_{KLM}^{(3)}\equiv \sum_I
\oint_{A_I}dv_K \int_{{\partial B_I\over \partial \lambda_M}}dv_L=
\left\{\begin{array}{cl}
0 & \hbox{when}\ \ \ K=L =\iota,\ M=\iota'
\\
-{1\over 4}{1\over \lambda_{\iota\iota'}}& \hbox{when}\ \ \
K=M =\iota,\ L=\iota'
\\
0 & \hbox{when}\ \ \ L=M =\iota,\ K=\iota'
\end{array}
\right.
\ea
since $\oint_{A_I}dv_{\iota}=-{1\over 2}\delta_{I\iota}$,
$\int_{{\partial B_{I}\over \partial \lambda_{\iota}}}dv_{\iota'}=
{1\over 2}\delta_{I\iota}{1\over \lambda_{\iota\iota'}}$ and
$\int_{{\partial B_I\over \partial \lambda_{\iota}}}dS=
\left.\delta_{I\iota}\log\left({P(\lambda)\over
\sqrt{Q(\lambda)}}\right)\right|_{\lambda=
\lambda_{\iota}}$.

The sum $\Sigma^{(1)}+\Sigma^{(2)}+\Sigma^{(3)}$
in each of the three cases $K=L$, $K=M$ and $L=M$
is the same, as
required by the symmetry w.r.t. permutations of the indices. This sum
is actually equal to $F_{\iota\iota\iota'}$ in (\ref{perth}).

Analogously, one can check the residue formula for three coinciding mass
differentials. In this case the result looks like
\ba
\Sigma_{\iota\iota\iota}^{(1)}=-{1\over 4}\hbox{res} {1\over (\lambda-
\lambda_{\iota})^2}=0
\\
\Sigma_{\iota\iota\iota}^{(2)}={1\over 2}\sum_i{1\over\lambda_{i\iota }}
+{1\over 4}\sum_{\iota'\ne\iota}{1\over\lambda_{\iota\iota'}}
+\left.{1\over 4}{1\over\lambda-\lambda_{\iota}}
\right|_{\lambda\to\lambda_{\iota}}
\\
\Sigma_{\iota\iota\iota}^{(3)}=-\left.
{1\over 4}{1\over\lambda-\lambda_{\iota}}
\right|_{\lambda\to\lambda_{\iota}}
\ea
and the sum of these terms is equal to $F_{\iota\iota\iota}$ in
(\ref{perth}). Let us note that the only role of $\Sigma^{(3)}$ is to cancel
the singularity arising in $\Sigma^{(2)}$.

Now we can turn to a less trivial case and discuss the residue formula in the
hyperelliptic case of sect.5.6. Again, we start with the consideration of
$F_{\iota'\iota\iota}$.
However, this time we restrict ourselves to
the case $L=M$ -- the most convenient ordering of the indices, since
it is the case when only the first term on the l.h.s. of (\ref{resch})
contributes. In the hyperelliptic case the residue acquires two different
contributions. The first one comes from the pole at $\lambda=m_{\iota}$
and can be
obtained with the help of the expansion
\ba\label{vsm}
{\partial v_{\iota}\over\partial m_{\iota}}\stackreb{\lambda
\to m_{\iota}}{\longrightarrow}{1\over 2}{1\over\lambda
-m_{\iota}}+{\cal O}
(\lambda- m_{\iota})
\ea
Therefore, the contribution of this pole is equal to that in
the perturbative case (\ref{sigma1}):
${1\over 4}{1\over \lambda_{\iota'\iota}}$. However, now there also emerge
contributions from the ramification points. Indeed,
\ba
{\partial dv_{\iota}\over\partial m_{\iota}} \stackreb{\lambda\to \lambda_{\alpha}}
{\longrightarrow} {1\over 2}\left({P(\lambda_{\alpha})\over
2(\lambda_{\alpha}-m_{\iota})}+{\partial R(\lambda_{\alpha})
\over\partial m_{\iota}}\right)
{\partial Y^2\over\partial m_{\iota}}{d\lambda\over Y^3}
={P(\lambda_{\alpha})\over
\hat Y^3(\lambda_{\alpha})}
\left({P(\lambda_{\alpha})\over
2(\lambda_{\alpha}-m_{\iota})}+{\partial R(\lambda_{\alpha})
\over\partial m_{\iota}}\right)
{d\lambda\over (\lambda-\lambda_{\alpha})^{3/2}}
\ea
since ${\partial Y^2\over \partial m_{\iota}}=2P{\partial R\over\partial m_{\iota}}+
{Q\over (\lambda-m_{\iota})}$. Therefore,
\ba
{\partial v_{\iota}\over\partial m_{\iota}} \stackreb{\lambda\to \lambda_{\alpha}}
{\longrightarrow}
-2{P(\lambda_{\alpha})\over \hat Y^4(\lambda_{\alpha})}
\left({P(\lambda_{\alpha})\over
2(\lambda_{\alpha}-m_{\iota})}+{\partial R(\lambda_{\alpha})
\over\partial m_{\iota}}\right)^2
{1\over \sqrt{\lambda-\lambda_{\alpha}}}
\ea
and the full answer becomes
\ba
F_{\iota\iota\iota'}=\Sigma_{\iota'\iota\iota}^{(1)}=2
\sum_{\alpha}
{P(\lambda_{\alpha})\over\hat Y^4(\lambda_{\alpha})}
\left({P(\lambda_{\alpha})\over
2(\lambda_{\alpha}-m_{\iota})}+{\partial R(\lambda_{\alpha})
\over\partial m_{\iota}}\right)^2
\left({P(\lambda_{\alpha})\over
2(\lambda_{\alpha}-m_{\iota'})}+{\partial R(\lambda_{\alpha})
\over\partial m_{\iota'}}\right)
-{1\over 4}{1\over \lambda_{\iota\iota'}}\label{fghb}
\ea
This result should be compared with the residue formula
\ba
F_{\iota\iota\iota'}=\stackreb{{dw\over w}=0}{\hbox{res}} {(dv_{\iota})^2
dv_{\iota'}\over d\lambda {dw\over w}}=\stackreb{{dw\over w}=0}{\hbox{res}}
{1\over Y^2P}
\left({P\over
2(\lambda-m_{\iota})}+{\partial R\over\partial m_{\iota}}\right)^2
\left({P\over
2(\lambda-m_{\iota'})}+{\partial R\over\partial m_{\iota'}}\right)
{d\lambda\over
\left(\log(P/\sqrt{Q})\right)'}
\ea
As we did earlier, instead of the quite involved calculation of this residue
at the zeroes of ${dw\over w}$, we calculate it at all the other poles (i.e.
the zeroes of $d\lambda$). Since $\left(\log(P/\sqrt{Q})\right)'$ contains
$\left[(\lambda-m_{\iota})(\lambda -m_{\iota'})\right]^{-1}$,
only the ramification points and
$\lambda=m_{\iota}$ contribute to the result.
Using the formula
\ba\label{aux1}
\left.\left(\log(P/\sqrt{Q})\right)'\right|_{\lambda=\lambda_{\alpha}}
={\hat Y^2(\lambda_{\alpha})\over
2P^2(\lambda_{\alpha})}
\ea
one can easily check that this indeed reproduces (\ref{fghb}).

Analogously, one can
study the residue formula in the hyperelliptic case for
three coinciding differentials, $F_{\iota\iota\iota}$. This time the
calculation is a little longer. Using (\ref{vsm}), one gets for
$\Sigma^{(1)}_{\iota\iota\iota}$
\ba\label{sigma1mmm}
\Sigma^{(1)}_{\iota\iota\iota}=
2\sum_{\alpha}
{P(\lambda_{\alpha})\over\hat Y^4(\lambda_{\alpha})}
\left({P(\lambda_{\alpha})\over
2(\lambda_{\alpha}-m_{\iota})}+{\partial R(\lambda_{\alpha})
\over\partial m_{\iota}}\right)^3
-{1\over 2P(m_{\iota})}{\partial R(m_{\iota})\over
\partial m_{\iota}}
-\left.{1\over 4}{\partial(P/Y)\over \partial\lambda}\right|_{\lambda=m_{\iota}}
\ea
i.e. the sum over the ramification points looks similar
to that in the case of two coinciding differentials. The other $\Sigma$-terms are
even simpler:
\ba\label{sigma2mmm}
\Sigma^{(2)}_{\iota\iota\iota}=
-\left.{1\over 2}{\partial P/\partial\lambda\over P}\right|_{\lambda=m_{\iota}}
-{1\over 2P(m_{\iota})}{\partial R(m_{\iota})\over
\partial m_{\iota}}
+{1\over 4}\sum_{\iota'\ne\iota}{1\over\lambda_{\iota\iota'}}
+\left.{1\over 4}{1\over\lambda-\lambda_{\iota}}
\right|_{\lambda\to\lambda_{\iota}}
\ea
\ba\label{sigma3mmm}
\Sigma^{(3)}_{\iota\iota\iota}=-{1\over 2P(m_{\iota})}{\partial R(m_{\iota})\over
\partial m_{\iota}} -\left.
{1\over 4}{1\over\lambda-\lambda_{\iota}}
\right|_{\lambda\to\lambda_{\iota}}
\ea
The sum of all three terms should be compared with the residue formula
\ba\label{resmmm}
F_{\iota\iota\iota}=\stackreb{{dw\over w}=0}{\hbox{res}} {(dv_{\iota})^3
\over d\lambda {dw\over w}}=-\stackreb{{dw\over w}=0}{\hbox{res}}
{1\over Y^2P}
\left({P\over
2(\lambda-m_{\iota})}+{\partial R\over\partial m_{\iota}}\right)^3
{d\lambda\over
\left(\log(P/\sqrt{Q})\right)'}
\ea
Again taking the residues at the zeroes of $d\lambda$ and using formula
(\ref{aux1}) along with
\ba
\left.(\lambda-m_{\iota})\left(\log{P\over\sqrt{Q}}\right)'
\right|_{\lambda=m_{\iota}}=-{1\over 2}
\\
\left.\left[(\lambda-m_{\iota})\left(\log{P\over\sqrt{Q}}\right)'\right]'
\right|_{\lambda=m_{\iota}}=\left.{\partial P/\partial\lambda\over P}
\right|_{\lambda=m_{\iota}}-{1\over 2}
\sum_{\iota'\ne\iota}{1\over\lambda_{\iota\iota'}}
\ea
one finally proves that (\ref{resmmm}) is indeed equal to the sum of
(\ref{sigma1mmm}), (\ref{sigma2mmm}) and (\ref{sigma3mmm}).
The contributions of the
ramification points to (\ref{resmmm}) entirely originate
from the analogous term in $\Sigma^{(1)}_{\iota\iota\iota}$, and
all the terms in the $\Sigma$'s proportional to $\partial R/\partial m_{\iota}$
come from the first order pole at $\lambda=m_{\iota}$. The remaining
terms in $\Sigma^{(1)}$ and $\Sigma^{(2)}$ are equal to the residue of
the second order pole at $\lambda=m_{\iota}$.
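Formula (\ref{aux1}) itself is a direct consequence of $Q=P^2$ at the
ramification points; a short verification (Python with {\tt sympy}, treating
$P$ and $Q$ as arbitrary functions of $\lambda$) is:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')
P = sp.Function('P')(lam)
Q = sp.Function('Q')(lam)

lhs = sp.diff(P, lam)/P - sp.diff(Q, lam)/(2*Q)      # (log(P/sqrt(Q)))'
rhs = sp.diff(P**2 - Q, lam)/(2*P**2)                # (Y^2)'/(2*P^2)

# the difference is proportional to Y^2 = P^2 - Q, hence it vanishes
# at the ramification points, where moreover (Y^2)' = hat Y^2
assert sp.simplify(lhs - rhs + sp.diff(Q, lam)*(P**2 - Q)/(2*P**2*Q)) == 0
print("(log(P/sqrt(Q)))' = hat Y^2/(2 P^2) at the zeroes of Y")
\end{verbatim}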
\section{Hypermultiplet in the adjoint representation (Calogero
system) \label{Calogero}}
\setcounter{equation}{0}
\subsection{General formulas \label{genCa} }
To conclude our examples, in this section we discuss in detail the case of
the Calogero system, when
the closed algebra exists but is {\it not} associative, as was
discussed in sects.3-4. The physics of this example is the
$4d$ ${\cal N}=2$ SUSY YM model with one matter hypermultiplet in the
adjoint representation of the gauge group $SU(N_c)$, which
is described in the SW
framework by the elliptic Calogero model \cite{SWI3,IM1,IM2}. The bare
curve for the UV finite model is the elliptic curve (\ref{torus}), and its modulus
$\tau $ can be identified with the bare (UV) coupling constant of the YM
theory. The mass of the $4d$ hypermultiplet is proportional to the
coupling constant $g$ of the Calogero-Moser model. The spectral curve of
the elliptic Calogero model
\ba
\det ({\cal L}(x,y) - \lambda ) = 0
\ea
is given by
\ba\label{calocurve}
y^2 = (x-e_1(\tau))(x - e_2(\tau))(x-e_3(\tau)) =
x^3 - \frac{1}{4}g_2(\tau)x - \frac{1}{4}g_3(\tau) \nonumber \\
{\cal P}(\lambda ,x) = \sum _{i=0}^{N_c}g^is_i{\cal T}_i(\lambda, x) =0
\nonumber \\
{\cal T}_0 = 1, \ \ \ \ {\cal T}_1 = \lambda,\ \ \ \ \
{\cal T}_2 = \lambda^2 - x,
\ \ \ \ \ {\cal T}_3 = \lambda ^3 - 3\lambda x + 2y, \ \ \ \ \
{\cal T}_4 = \lambda^4 - 6\lambda^2 x + 8\lambda y - 3(x^2 + \mu_4),\nonumber \\
{\cal T}_5 = \lambda^5 - 10\lambda^3x + 20\lambda^2y -
15\lambda(x^2 + \mu_4) + 4xy, \ \ \ \ \
{\cal T}_6 = \lambda^6 - 15\lambda^4x + 40\lambda^3y
- 45\lambda^2(x^2 + \mu_4) + 24\lambda xy - 5,
\nonumber \\
\ldots
\label{explT}
\ea
(recall that $e_1+e_2+e_3=0$).
Using the gradation $\deg\ \lambda\ = 1$, $\deg\ x\ = 2$,
$\deg\ y\ =3$, $\ldots$, each
${\cal T}_i$ becomes a homogeneous polynomial of degree $i$.
Actually this is a ``semi''-gradation from the point of view of
$x$ and $y$, because some coefficients also have non-vanishing
(but always positive) degree: $\deg\ e_i(\tau) = 2$,
$\deg\ \mu_4 = 4$ etc.
As functions of $\lambda$, the ${\cal T}_i$ behave similarly to the
Schur polynomials:
\ba\label{T'}
{\cal T}_i' \equiv \frac{\partial{\cal T}_i}{\partial \lambda} =
i{\cal T}_{i-1}.
\ea
As a corollary, ${\cal P}'$ is again a linear combination of
the ${\cal T}_i$:
\ba
{\cal P}' \equiv \frac{\partial{\cal P}}{\partial\lambda} =
\sum_i ig^is_i{\cal T}_{i-1}.
\label{calP'}
\ea
For our purposes we can absorb the $g^i$ in (\ref{calocurve}) into
the moduli $s_i$.
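Property (\ref{T'}) can be verified directly for the polynomials listed in
(\ref{explT}); a minimal check (Python with {\tt sympy}, treating $x$, $y$
and $\mu_4$ as $\lambda$-independent) is:
\begin{verbatim}
import sympy as sp

lam, x, y, mu4 = sp.symbols('lambda x y mu4')

T = [sp.Integer(1),
     lam,
     lam**2 - x,
     lam**3 - 3*lam*x + 2*y,
     lam**4 - 6*lam**2*x + 8*lam*y - 3*(x**2 + mu4),
     lam**5 - 10*lam**3*x + 20*lam**2*y - 15*lam*(x**2 + mu4) + 4*x*y]

for i in range(1, len(T)):
    assert sp.expand(sp.diff(T[i], lam) - i*T[i-1]) == 0
print("T_i' = i T_{i-1} holds for i = 1,...,5")
\end{verbatim}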
The full spectral curve (\ref{calocurve}) is an $N_c$-fold
covering of the bare elliptic curve; it has genus $N_c$
(not $\sim N_c^2/2$ as one could naively expect), due to specific
properties of the functions ${\cal T}_i$, which may be demonstrated in
the Hitchin-like interpretation
of the Calogero model, where the matrix-valued moment map is restricted
to have a pole with $N_c-1$ coinciding eigenvalues.
In the limit $x \sim \frac{1}{\xi^2} \rightarrow \infty$,
the coordinate $\lambda \sim \frac{1}{\xi}$ on $N_c-1$ of the sheets
and $\lambda \sim -\frac{N_c-1}{\xi}$ on the last sheet. We shall
refer to these two kinds of sheets as ``$+$'' and ``$-$'' respectively.
On the $N_c-1$ ``$+$'' sheets all the functions ${\cal T}_i$
behave as
\ba
{\cal T}_i \sim \lambda
\label{asympt}
\ea
in the limit $\lambda \longrightarrow \infty$, while on the ``$-$'' sheet
${\cal T}_i \sim \lambda^i$. Since ${\cal P}$ and ${\cal P'}$ are both
linear combinations of the ${\cal T}_i$, they have the same asymptotics
$\sim\lambda^1$ on the ``$+$'' sheets.

For the generating form $dS=\lambda d\omega = \lambda
\frac{dx}{y}$, we have
\ba
dv_k = \frac{\partial dS}{\partial s_k} =
-\frac{{\cal T}_k d\omega}{{\cal P}'}, \ \ \
k = 0,1,2,\ldots,N_c-2.
\label{calodiffs}
\ea
The complete basis of $N_c$ holomorphic 1-differentials on ${\cal C}$
consists of the $dv_k$ with $k=0,1,2,\ldots,N_c-2,N_c-1$.
The holomorphic 1-differential
$dv_{N_c-1}$ is (up to the addition of other $dv_k$ with $k\leq N_c-2$)
equal to $d\omega = \frac{dx}{y}$. The only non-vanishing
integrals of $d\omega$ are along the cycles on the {\it bare}
spectral curve $E(\tau)$. The periods $a_i$ relevant for the
SW theory are instead associated with the other cycles,
which turn out to be
trivial after projection onto $E(\tau)$ (see sect.8 of \cite{IM2} for a discussion
of the $N_c=2$ example).

In other words, the polynomial ${\cal T}_{N_c-1}$ is linearly equivalent
to a combination of ${\cal T}_i$ with $i