diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzizeg" "b/data_all_eng_slimpj/shuffled/split2/finalzzizeg" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzizeg" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n{\\it Tomography} builds up higher dimensional objects from lower dimensional projections. {\\it Quantum tomography} \\cite{Fano} is a strategy to reconstruct all that can be observed about a quantum physical system. After becoming a focal point of quantum computing, quantum tomography has recently been applied in a variety of domains \\cite{2004PhRvA..69d2108H, quantcomp00, quantcomp10, 2016JETPL.104..510S, 2010PhRvL.105o0401G, 2017arXiv170208751B, 2015NJPh...17d3063F, 2012arXiv1204.5936A}. \n\nThe method of quantum tomography uses a known ``probe'' to explore an unknown system. Data is related directly to matrix elements, with minimal model dependence and optimal efficiency. \n\nCollider physics is conventionally set up in a framework of unobservable and model-dependent scattering amplitudes. In quantum tomography these unobservable features are skipped to deal directly with observables. The unknown system is parameterized by a certain density matrix $\\rho(X)$, which is model-independent. The probe is described by a known density matrix $\\rho(probe).$ The matrices are represented by {\\it numbers} generated and fit to experimental data, not {\\it abstract operators.} Quantum mechanics predicts an experiment will measure $tr(\\rho(probe) \\cdot \\rho(X))$, where $tr$ is the trace. In many cases $\\rho(probe)$ is extremely simple: A $3\\times 3$ matrix, say. {\\it What will be observed is strictly limited by the dimension and symmetries of the probe.} The powerful efficiency of quantum tomography comes from exploiting the probe's simplicity in the first steps. The description never involves more variables than will actually be measured. \n\nWe illustrate the advantages of quantum tomography with inclusive lepton-pair production. It is a relatively mature subject chosen for its pedagogical convenience. Despite the maturity of the subject, we discover new things. For example, the puzzling plethora of plethora of ad hoc invariant quantities is completely cleared up. We also find new ways to assist experimental data analysis. Positivity is a central issue overlooked in the literature, which we show how to control. Moreover, the tomography procedure carries over straightforwardly to many final states, including the inclusive production of charmonium, bottomonium, dijets, including boosted tops, $HH$, $W^+ W^-$, $ZZ$ \\cite{Abelev:2011md, Aad:2016izn, Melnitchouk:2011tq, Brambilla:2010cs, Aaltonen:2011nr, Chatrchyan:2012ty, Chatrchyan:2013yna,Chatrchyan:2013cla, Chatrchyan:2012woa, Aaij:2013nlm, Aaij:2014qea, Mcclellan:2016cul, Peng:2014hta, Peng:2015spa, Stirling:2012zt, Han:2011vw, CMS:2012xwa, Khachatryan:2015qpa, Cheung:2017loo, Cheung:2017osx, Kang:2014pya, Cervera-Lierta:2017tdt, Kharzeev:2017qzs}. \nOur practical guide to analyzing experimental data uses density matrices at each step and circumvents the more elaborate traditional theoretical formalism. We concentrate on making tools available to experimentalists. We give a step-by-step guide where density matrices stand as definite arrays of {\\it numbers}, bypassing unnecessary formalism. \n\n\\section{The quantum tomography procedure applied to inclusive lepton pair production}\n\nThe tomography procedure reconstructs all that can be observed about a quantum physical system. 
For inclusive lepton pair production, what can be observed is the invariant mass distribution, the lepton pair angular distribution $dN\/d\\Omega$ and the polarization of the unknown intermediate state of the system, contained in $\\rho(X)$.\\footnote{ Polarization and spin are different concepts. The polarization (and density matrix of the unknown state) predicts the spin, while the spin cannot predict the density matrix.} In this section we reconstruct $dN\/d\\Omega$ and $\\rho(X)$ from first principles using tomography. Structure functions and model-dependent assumptions about the intermediate state, common to the traditional formalism \\cite{Terazawa:1974ci, Vasavada:1977ef, Mirkes:1992hu, Mirkes:1994eb, Mirkes:1994dp}, do not appear.\n\nExpert readers, who are accustomed to seeing some of these formulas derived, might note that the method of derivation is particularly simple. We point out the particular steps we do {\\it not} follow. That also explains why some of the relations we find seem to have been overlooked in the past.\n \n\\subsection{Kinematics}\n \n\nConsider inclusive production of a lepton pair with 4-momenta $k$, $k'$ from the collision of two hadrons with 4-momenta $P_{A}$, $P_{B}$: $$P_{A}P_{B} \\rightarrow \\ell^{+}(k)\\ell^{-}(k')+{\\cal X},$$ where ${\\cal X}$ and the final state lepton spins are unobserved and thus summed over. In the high energy limit $k^{2} = k^{'2} =0$. \n\nLet the total pair momentum $Q=k+k'$. The azimuthal distribution of total pair momenta in the lab frame is isotropic. Lepton pair angular distributions are described in the pair rest-frame defined event-by-event. In this frame the pair momenta are back-to-back and equal in magnitude. The frame orientation depends on the beam momenta and the pair total momentum. \n\nDefining momentum\\footnote{This is a great advantage compared to making calculations with a complicated (and error prone) sequence of rotations and boosts.} observables via a Lorentz-covariant frame convention allows calculations to be done in {\\it any} frame. In its rest frame the total pair momentum $Q^{\\mu} = (\\sqrt{Q^{2}}, \\, \\vec Q=0)$. A set of $xyz$ spatial axes in this frame will be defined by three 4-vectors $X^{\\mu}, \\, Y^{\\mu}, \\, Z^{\\mu}$, satisfying \\ba Q\\cdot X=Q\\cdot Y =Q\\cdot Z=0. \\label{Qdot} \\ea The frame vectors being orthogonal implies \\ba X\\cdot Y=Y \\cdot Z =X\\cdot Z=0. \\label{last} \\ea Taking $P_{A}$=(1, 0, 0, 1), $P_{B}$=(1, 0, 0, -1) (light-cone $\\pm$ vectors), a frame satisfying the relations of Eq. \\ref{Qdot} and Eq. \\ref{last} is given by\\footnote{We use $\\epsilon^{0123}=1$. The mirror symmetry of $pp$ collisions also strongly supports a convention where the direction of the $Z$ axis is determined by the sign of the pair rapidity. The formulas shown do not include this detail.} \\ba \\tilde Z^{\\mu} &= P_{A}^{\\mu} Q\\cdot P_{B} - P_{B} ^{\\mu} Q\\cdot P_{A}; \\nonumber \\\\ \\tilde X^{\\mu} &= Q^{\\mu}- P_{A}^{\\mu} {Q^{2}\\over 2 Q\\cdot P_{A}} - P_{B} ^{\\mu} {Q^{2}\\over 2 Q\\cdot P_{B} }; \\nonumber \\\\ \\tilde Y^{\\mu}&= \\epsilon^{\\mu \\nu \\a \\b}P_{ A\\nu}P_{B \\a}Q_{\\b}. 
\\nonumber \\ea These frame vectors define the Collins-Soper ($CS$) frame.\\footnote{These expressions simplify a more complicated convention that included finite mass effects in the original definition.} \nThe normalized frame vectors are \\ba (X^{\\mu}, \\, Y^{\\mu}, \\, Z^{\\mu}) = ({\\tilde X^{\\mu} \\over \\sqrt{-\\tilde X\\cdot \\tilde X} }, \\, {\\tilde Y^{\\mu} \\over \\sqrt{-\\tilde Y\\cdot \\tilde Y} }, \\, {\\tilde Z^{\\mu} \\over \\sqrt{-\\tilde Z\\cdot \\tilde Z} }) . \\nonumber \\ea\n\nTo analyze data for each event labeled $J$: \\ba \\text{Compute} & \\quad Q_{J} =k_{J}+k_{J}'; \\quad \\ell_{J} =k_{J}-k_{J}'; \\quad (X_{J}^{\\mu}, \\, Y_{J}^{\\mu}, \\, Z_{J}^{\\mu}) ; \\nonumber \\\\ & \\vec \\ell_{XYZ, J} = ( X_{J}\\cdot \\ell_{J}, \\, Y_{J}\\cdot \\ell_{J} , \\, Z_{J}\\cdot \\ell_{J}); \\nonumber \\\\ &\\hat \\ell_{J} =\\ell_{XYZ, \\, J} \/\\sqrt{-\\ell_{XYZ, \\, J}\\cdot \\ell_{XYZ, \\, J}}. \\label{hatell} \\ea\n\nIn fact, $\\hat \\ell_J = (\\sin \\theta \\cos \\phi, \\,\\sin \\theta \\sin \\phi, \\,\\cos \\theta )_J,$ where $\\theta$, $\\phi$ are the polar and azimuthal angles of one (e.g. plus-charge) lepton in the rest frame of $Q$. The meaning of a ``Lorentz invariant $\\cos\\theta$'' is a scalar $Z_{\\mu}(k-k')^{\\mu}$ which becomes $\\hat z \\cdot \\hat{(k - k')}$ in the rest frame of $Q.$\n\n\\subsection{The angular distribution, in terms of the probe and target density matrices}\n\nThe standard amplitude for inclusive production of a fermion--anti-fermion pair of spin $s$, $s'$ has a string of gamma-matrices contracted with final state spinors $v_{a}(k's')$, $\\bar u_{b}(k, s)$. When the amplitude is squared, these factors appear bi-linearly, as in \\ba u_{a}(ks)\\bar u_{a'}(ks) =(1\/2)[( \\slasha k+m)(1+\\gamma_{5}\\slasha s)]_{aa'}. \\nonumber \\ea Summing over unobserved $s$ and dropping $m \\delta_{aa'}$, a form of density matrix appears: \\ba \\sum_{s} \\, u_{a}(ks)\\bar u_{a'}(ks) \\rightarrow k_{\\mu}\\gamma_{aa'}^{\\mu}. \\nonumber \\ea The Feynman rules for the density matrix of two relativistic final state fermions (or anti-fermions, or any combination) give a factor \\ba \\rho_{aa', \\, bb'}(k, \\, k') \\rightarrow \\slasha k_{aa'} \\slasha k'_{bb'}. \\label{clutter} \\ea This fundamental equality is not present in pure state quantum systems.\n{\\it There is no spinor corresponding to a fermion averaged over initial spins}, nor to a fermion summed over final spins.\n\nAs shown in the Appendix, the rest of the cross section appears in the target density matrix $\\rho(X)$, which must have four indices to contract with the probe indices: \\ba d\\sigma \\sim \\sum_{aa'bb'} \\, \\rho_{aa', \\, bb'}(k, \\, k')\\rho_{aa', \\, bb'}(X) dLIPS = tr \\left(\\rho(k, \\, k')\\rho(X) \\right) dLIPS, \\label{LIPS} \\ea where $dLIPS$ is the Lorentz invariant phase space.\n \nNote that $ u_{a}(ks)\\bar u_{a'}(ks)$ is not positive definite since the Dirac adjoint $ \\bar u_{a'}(ks)= (u^{\\dagger}(ks) \\gamma_{0})_{a}$ has a factor of $ \\gamma_{0}$, introduced by convention. Removing it, $\\sum_{s} \\, u_{a}(ks)u_{a'}^{\\dagger}(ks)$ becomes positive by inspection. (Any matrix of the form $M \\cdot M^{\\dagger}$ has positive eigenvalues.) 
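Before continuing with the normalization of the probe, we pause to illustrate the event-level kinematics of Eq. \\ref{hatell} with a short numerical sketch. The sketch below is ours, not the ROOT\/Mathematica analysis code released with this paper; it assumes \\texttt{numpy}, the metric signature $(+,-,-,-)$, and $\\epsilon^{0123}=1$ as above, and the helper names are illustrative only.
\\begin{verbatim}
import itertools
import numpy as np

G = np.diag([1.0, -1.0, -1.0, -1.0])      # metric, signature (+,-,-,-)

def mdot(a, b):                           # Minkowski product a.b
    return float(a @ G @ b)

def parity(p):                            # sign of a permutation
    p, s = list(p), 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

EPS = np.zeros((4, 4, 4, 4))              # epsilon^{mu nu alpha beta}
for p in itertools.permutations(range(4)):
    EPS[p] = parity(p)                    # epsilon^{0123} = +1

def cs_axes(Q, PA, PB):
    """Normalized Collins-Soper frame vectors (X, Y, Z), Eqs. (Qdot)-(last)."""
    Zt = PA * mdot(Q, PB) - PB * mdot(Q, PA)
    Xt = (Q - PA * mdot(Q, Q) / (2.0 * mdot(Q, PA))
            - PB * mdot(Q, Q) / (2.0 * mdot(Q, PB)))
    # Yt^mu = eps^{mu nu a b} P_{A nu} P_{B a} Q_b, indices lowered with G
    Yt = np.einsum('mnab,n,a,b->m', EPS, G @ PA, G @ PB, G @ Q)
    unit = lambda v: v / np.sqrt(-mdot(v, v))   # X, Y, Z are spacelike
    return unit(Xt), unit(Yt), unit(Zt)

def ell_hat(k, kp, PA, PB):
    """Unit lepton direction of Eq. (hatell) for one event."""
    Q, ell = k + kp, k - kp
    X, Y, Z = cs_axes(Q, PA, PB)
    v = np.array([mdot(X, ell), mdot(Y, ell), mdot(Z, ell)])
    return v / np.linalg.norm(v)
\\end{verbatim}
One can check numerically that the orthogonality relations of Eq. \\ref{Qdot} and Eq. \\ref{last} hold for any timelike $Q$, which is a useful validation of an experimental implementation.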
$\\rho(k, \\, k')$ as written is not normalized, because the Feynman rules shuffle spinor normalizations into overall factors. To make the arrow in Eq. \\ref{clutter} into an equality, multiply on the right by $\\gamma_{0}$ twice, and standardize the normalizations. The same steps applied to $\\rho_{aa', \\, bb'}(X)$ cancel the $\\gamma_{0}$ factors. The result is that the probability to find two fermions has the fundamental quantum mechanical form\\footnote{We remind the reader that the phase space factors $dLIPS$ originate in further organizational steps computing the quantum mechanical transition probability per volume per time, which afterwards restore the phase space factors. } $P(k, \\,k')=tr(\\rho(k, \\, k')\\rho(X))$. \n\nThe left side of Eq. \\ref{LIPS} is $d\\sigma(k, \\, k')$, the same as the joint probability $P(Q, \\, \\ell \\big | \\, init)$ where $init$ are the initial state variables. The phase space for two leptons converts as \\ba k_{0}k_{0}' {d\\sigma \\over d^{3}k d^{3}k'} = {d \\sigma \\over d^{4}Q d\\Omega } . \\nonumber \\ea We can write \\ba P(Q, \\, \\ell \\big | \\, init)=P(\\ell \\big |Q, \\, init)P(Q \\big |init). \\nonumber \\ea Here $P(Q \\big |init) = d\\sigma\/ d^{4}Q$, and $P(\\ell \\big |Q, \\, init)=dN\/d\\Omega$ is the conditional probability to find $\\ell$ given $Q$ and the initial state. This factorization is general and unrelated to one-boson exchange, parton model, or other considerations. Since $P(\\ell \\big |Q, \\, init)$ is a probability, quantum mechanics predicts it is a trace: \\ba & {dN\\over d \\Omega} = {1\\over \\sigma}{d\\sigma \\over d\\Omega} = P(\\ell \\big |Q, \\, init)={3\\over 4 \\pi}tr(\\rho(\\ell) \\rho(X)), \\label{dsigma} \\ea where $tr$ indicates the trace, $d \\Omega = d\\cos \\theta \\cdot d \\phi,$ and $\\rho(\\ell),$ the probe, is a $3\\times 3$ matrix to be defined momentarily which depends only on the directions $\\hat \\ell_J$. The target hadronic system is represented by $\\rho(X)$. Since the probe $\\rho(\\ell)$ is a $3\\times 3$ matrix, $\\rho(X)$ is also a $3\\times 3$ matrix of numbers.\\footnote{This is a {\\it more general} statement than enumerating ``structure functions''.}\n\nThe description has just been {\\it reduced} from $\\rho_{aa', \\, bb'}(k, \\, k')$, a Dirac tensor with $4^{4}$ possible matrix elements, to a $3\\times 3$ Hermitian matrix with 8 independent elements, since $tr(\\rho(\\ell))=1$ is one condition. Equation \\ref{dsigma} is the most general angular distribution that can be observed. It is valid for {\\it like-sign and unlike sign pairs, and assumes no model for how the pairs are produced}. The Dirac form (and Dirac traces) is over-complicated, because describing every possible {\\it exclusive} reaction for every possible in and out state is over-achieved in the formalism. \n\n\\subsection{The probe matrix}\n\nThe probe matrix $\\rho(\\ell)$ is given by \\ba \\rho_{ij}(\\ell) = {1+a\\over 3}\\delta_{ij} -a \\hat \\ell_{i}\\hat \\ell_{j} -\\imath b \\epsilon_{ijk}\\hat \\ell_{k}, \\label{rholep} \\ea which is derived in the Appendix. The Standard Model predicts only two parameters, $a$ and $b$. If on-shell lepton helicity is conserved (as in lowest order production by a minimally-coupled vector boson) then $a =1\/2$ and $b = c_{A}c_{V}.$ The latter is not a prediction but a definition. If the production is parity-symmetric then $c_{A}=0$. {\\it The only non-trivial prediction of the Standard Model is the value of $c_{A}c_{V}$}. 
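The algebraic properties of Eq. \\ref{rholep} are easy to verify numerically. The following minimal sketch is ours, assuming \\texttt{numpy}; the defaults $a=1\/2$, $b\\approx 0.22$ are the Standard Model values discussed next, and the name \\texttt{rho\\_probe} is illustrative.
\\begin{verbatim}
import numpy as np

EPS3 = np.zeros((3, 3, 3))                # 3d Levi-Civita symbol
EPS3[0, 1, 2] = EPS3[1, 2, 0] = EPS3[2, 0, 1] = 1.0
EPS3[0, 2, 1] = EPS3[2, 1, 0] = EPS3[1, 0, 2] = -1.0

def rho_probe(l_hat, a=0.5, b=0.22):
    """Probe density matrix of Eq. (rholep)."""
    l = np.asarray(l_hat, dtype=float)
    return (((1.0 + a) / 3.0) * np.eye(3)
            - a * np.outer(l, l)
            - 1j * b * np.einsum('ijk,k->ij', EPS3, l))

rho = rho_probe([0.0, 0.0, 1.0])
assert np.allclose(rho, rho.conj().T)            # Hermitian
assert np.isclose(np.trace(rho).real, 1.0)       # unit trace
print(np.linalg.eigvalsh(rho))                   # [0.  0.28  0.72]
\\end{verbatim}
The printed eigenvalues are $(1\/3-2a\/3, \\, 1\/3+a\/3-b, \\, 1\/3+a\/3+b)$, which for $a=1\/2$ reproduces the helicity-conserving spectrum $(0, \\, 1\/2-b, \\, 1\/2+b)$ quoted later in the text.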
Lowest-order production by $Z$ bosons predicts $b=\\sin^{2}\\theta_{W}\\sim 0.22$.\n\nMore generally, the probe matrix {\\it itself} represents a reduced system that is unknown {\\it a priori}. It should be determined experimentally. Consider the angular distribution of $e^{+}e^{-} \\rightarrow \\mu^{+}\\mu^{-}$. Let $\\rho(e; \\, \\hat z)$ describe electrons with parameters $a_{e}, \\, b_{e}$ colliding along the $z$ axis. Let $\\rho(\\mu; \\, \\hat \\ell)$ describe muons with parameters $a_{\\mu}, \\, b_{\\mu}$ emerging along direction $\\hat \\ell$. A short calculation using Eq. \\ref{rholep} twice gives\\footnote{This may be a new result, which goes beyond what is known from one-boson exchange with or without radiative corrections. The production details can only renormalize the parameters.} \\ba {3\\over 4 \\pi}tr \\left(\\rho(e; \\, \\hat z) \\rho(\\mu; \\, \\hat \\ell) \\right ) = & {3\\over 4 \\pi} \\left({1\\over 3} + 2 b_{e}b_{\\mu} \\hat \\ell \\cdot \\hat z+ a_{e}a_{\\mu}((\\hat z\\cdot \\hat \\ell)^{2} -1\/3) \\right), \\nonumber \\\\ &= {3\\over 4 \\pi}\\left({1\\over 3} + 2 b_{e}b_{\\mu}\\cos\\theta+ a_{e}a_{\\mu} (\\cos^{2}\\theta-1\/3) \\right). \\label{dist1} \\ea Fitting experimental data will give $ a_{e}a_{\\mu}$ and $b_{e}b_{\\mu}$. If lepton universality is assumed the probe $\\rho(\\mu; \\hat \\ell)$ has measured the probe $\\rho(e; \\hat z)$. \n\n\\subsection{How tomography works: $dN\/d\\Omega$ as a function of $\\rho(\\ell)$, $\\rho(X)$} \n\n\\label{sec:mirrror}\n\nLet $\\hat G_{\\ell}$ be a set of probe operators, with expectation values $<G_{\\ell}> =tr(G_{\\ell}\\rho(X))$. The trace defines the Hilbert-Schmidt inner product of operators. The condition for operators (matrices) to be orthonormal is \\ba tr(G_{\\ell}G_{k}) = \\delta_{\\ell k} \\quad \\text{orthonormal matrices}. \\label{ortho} \\ea There are $N^{2}-1$ orthonormal $N\\times N$ Hermitian operators, not including the identity. When a complete set of probe operators has been measured, the density matrix is tomographically reconstructed from observables as \\ba \\rho(X) = \\sum_{\\ell} \\, G_{\\ell}tr(G_{\\ell}\\rho) = \\sum_{\\ell} \\, G_{\\ell} < G_{\\ell}>. \\nonumber \\ea For a pure state density matrix, there exists a basis $\\{ G_{\\ell} \\}$ such that only one term appears in the sum over $\\ell$. Then $\\rho_{pure} =|\\psi><\\psi|,$ and $|\\psi>$ is reconstructed as the eigenvector of $\\rho_{pure}$. \n\nEach orthogonal probe operator measures the corresponding component of the unknown system, and is classified by its transformation properties. For angular distributions the transformations of interest are rotations. \n$\\rho(\\ell)$ contains tensors transforming like spin-0, spin-1 and spin-2. Each tensor of a given type is orthogonal to the others. \n\nOrganizing transformation properties simplifies things significantly. Recall the general form of $\\rho(\\ell)$, from Eq. \\ref{rholep}. 
The most general form for $\\rho(X)$ that is observable will have the same general expansion, with new parameters: \n \\ba & \\text{Probe:} \\quad \\rho_{ij}(\\ell) = {1\\over 3}\\delta_{ij} +b \\hat \\ell \\cdot \\vec J_{ij} +a U_{ij}(\\hat \\ell); \\quad \\text{where} \\quad U_{ij}(\\hat \\ell)= {\\delta_{ij} \\over 3} -\\hat \\ell_{i}\\hat \\ell_{j} =U_{ji}(\\ell); \\, tr(U(\\ell)) =0; \\label{lastline} \\\\& \\text{System:} \\quad \\rho_{ij}(X) = {1\\over 3}\\delta_{ij} +{1\\over 2}\\vec S \\cdot \\vec J_{ij} +U_{ij}(X) ; \\quad \\text{where} \\quad U(X)=U^{T}(X); \\quad tr(U(X))=0. \\label{nextline} \\ea These formulas reiterate Eq. \\ref{rholep} while identifying $(J_{k})_{ij} = -\\imath \\epsilon_{ijk}$ as the generator of the rotation group in the $3\\times 3$ representation.\\footnote{The real Cartesian basis for $\\vec J$ is being used because it is more transparent than the $J_{z} \\rightarrow m$ basis that is an alternative. It would have complex parameters.} Upon taking the trace as an inner product, orthogonality selects each term in $\\rho(X)$ that matches its counterpart in $\\rho(\\ell)$. For example $\\vec J$ is orthogonal to all the other terms except the same component of $\\vec J$: \\ba & {1\\over 2} tr(J_{i}J_{k}) =\\delta_{ik} ;\\nonumber \\\\ &\\text{hence} \\quad {1\\over 2} tr( \\hat \\ell \\cdot \\vec J \\, \\vec S\\cdot \\vec J ) = \\hat \\ell \\cdot \\vec S. \\nonumber \\ea Orthogonality makes it trivial to predict which density matrix terms can be measured by probe matrix terms. We call the matching of terms ``the mirror trick.''\n\nWe now make several relevant comments about Eq. \\ref{lastline} and Eq. \\ref{nextline}: \\begin{itemize} \\item All density matrices can be written as $1_{N\\times N}\/N$ to take care of the normalization, plus a traceless Hermitian part. The unit matrix is the spin-0 part and invariant under rotations. The only contribution of the $1$ terms is $tr( 1\\times 1)\/N^{2} =1\/N$. \\item The textbook density matrix {\\it spin vector} $\\vec S$ consists of those parameters coupled to the angular momentum operator. This is also called the spin-1 contribution. The quantum mechanical average angular momentum of the system is \\ba <\\vec J> =tr(\\rho_{X}\\vec J) =\\vec S. \\nonumber \\ea \nWhen the coordinates are rotated, the $\\vec J$ matrices transform exactly so that $\\vec S$ rotates like a vector under proper rotations, and a pseudovector under a change of parity. \\item The last term of Eq. \\ref{lastline}, the spin-2 part, is real, symmetric and traceless. By the mirror trick it can only communicate with a corresponding spin-2 term in $\\rho(X)$ denoted $U_{ij}(X)$, which is real, symmetric and traceless. It can be considered a measure of angular momentum fluctuations: \\ba < {1\\over 2} \\left( J_{i}J_{j}+ J_{j}J_{i} \\right)- {1\\over 3}\\vec J^{2}\\delta_{ij}> = U_{ij}(X). \\nonumber \\ea \nA common mistake assumes the quadrupole $U$ should be zero in a pure ``spin state.'' Actually a pure state with $|\\vec S|=1$ has a density matrix \\ba \\rho_{pure, \\, ij}(\\vec S) ={1\\over 2} (\\delta_{ij}- \\hat S_{i}\\hat S_{j})-{ \\imath \\over 2}\\epsilon_{ijk}\\hat S_{k}. \\label{pure} \\ea For example, when $\\vec S =\\hat z$ the density matrix has one circular polarization eigenstate with eigenvalue unity, and two zero eigenvalues. 
Pure states exist with $\\vec S=0$: They have real eigenvectors corresponding to linear polarization. From the spectral resolution $\\rho(X) =\\sum_{\\a} \\, \\lambda_{\\a} |e_{\\a}><e_{\\a}|$, the system is a mixture of pure states $|e_{\\a}>$ occurring with probabilities $\\lambda_{\\a}$, which are the density matrix eigenvalues. \\item As it stands the $U_{ij}$ matrices in Eq. \\ref{lastline} and Eq. \\ref{nextline} have not been expanded in a complete set of symmetric, orthonormal $3 \\times 3$ matrices. Regardless, $\\rho(X)$ can be fit to data whether or not an expansion is done. The purpose of such work is to complete the classification process to assist with interpreting data. We sketch the steps here. Details are provided in an Appendix. Let $E_{M}$ be a basis of traceless orthonormal matrices where $U(\\ell) = \\sum_{M} \\, tr(U(\\ell)E_{M}) E_{M}$. This is the tomographic expansion of the probe. Choose $E_{M}$ so the outputs are normalized real-valued spin-2 spherical harmonics $Y_{M}(\\theta, \\, \\phi)$. The expansion of the unknown system will be $U(X) = \\sum_{M} \\, tr(\\rho(X)E_{M}) E_{M} =\\sum_{M} \\, \\rho_{M}(X)E_{M}$. By orthogonality the spin-2 contribution to the angular distribution will be \\ba {d N\\over d \\Omega } \\sim tr(\\rho(\\ell) \\rho(X))_{spin-2} \\sim \\sum_{M} \\, \\rho_{M}(X) Y_{M}(\\theta, \\, \\phi). \\nonumber \\ea Writing out the terms gives \\ba {dN \\over d \\Omega}=& {1\\over 4 \\pi}+\\frac{3}{4 \\pi}S_{x}\\sin\\theta \\cos\\phi+\\frac{3}{4 \\pi}S_{y}\\sin\\theta \\sin\\phi+\\frac{3}{4 \\pi}S_{z}\\cos\\theta \\nonumber \\\\ &+c\\rho_{0} ({1\\over \\sqrt{3}} - \\sqrt{3} \\cos^2\\theta) - c \\rho_{1} \\sin(2 \\theta) \\cos\\phi \n+ c\\rho_{2} \\sin^{2}\\theta\\cos(2 \\phi) \\nonumber \\\\ & +c\\rho_{3} \\sin^{2}\\theta \\sin(2 \\phi) - c \\rho_{4} \\sin(2 \\theta) \\sin \\phi .\\label{res1} \\ea The label $X$ has been dropped in $\\rho_{M}$ and $c= 3 \/(8\\sqrt{2} \\pi ) $. \nSince $E_{M}$ transform like $Y_{M}$, the coefficients $\\rho_{M}$ transform under rotations like spin-2. That means $\\rho_{M} \\rightarrow R^{(2)}_{MM'}\\rho_{M'}(X)$, where $ R^{(2)}_{MM'}$ is a matrix available from textbooks \\cite{book}. The traditional $A_{k}$, $\\lambda_{k}$ conventions do not use orthogonal functions. Transformations from the traditional conventions to the $\\rho_M$ convention are given in an Appendix.\n\n\\end{itemize}\n\nNote the transformation properties listed are {\\it exact}. The systematic and statistical errors of a measurement appear in fitting $\\rho(X)$. \n\n\\subsection{Fitting $\\rho(X)$, $dN\/d\\Omega$} \n\n\nQuantum mechanics requires $\\rho(X)$ to be {\\it positive}, which means it has positive eigenvalues. Positivity produces subtle non-linear constraints, similar to unitarity. In the $3\\times 3$ case the relations are generally cubic polynomials. Positivity is not the same concept as yielding a positive cross section, and generally {\\it is a more restrictive} set of relations.\\footnote{When $tr(\\rho(X))=1$ is maintained, positivity is violated when one or more eigenvalues of $\\rho(X)$ exceeds unity, and one or more goes negative. Then for some vector $|e>$ the quadratic form $<e|\\rho(X)|e> \\, <0$, which would appear to provide a signal. Yet no such signal might be found in the angular distribution, because $tr(\\rho(\\ell) \\rho(X))>0$ is a much weaker condition. 
Thus, positivity cannot generally be reduced to bounds on angular distribution coefficients, unless the bounds are so intricately constructed as to be equivalent to positivity of the density matrix eigenvalues.} If density matrices are not used it is quite straightforward to fit data yielding a positive cross section while {\\it violating positivity}.\n\nFortunately positivity can be implemented by the Cholesky decomposition of $\\rho_X$ \\cite{chole}, which is discussed in the Appendix. For the $3\\times 3$ case it is: \\ba & \\rho(X)(m)=M(m)\\cdot M^{\\dagger}(m); \\nonumber \\\\ & \nM(m) = {1 \\over \\sqrt{ \\sum_{k} m^{2}_k}} \\left(\n\\begin{array}{ccc}\n m_1 & m_4+i m_5 & m_6+i m_7 \\\\\n 0 & m_2 & m_8+i m_9 \\\\\n 0 & 0 & m_3 \\\\\n\\end{array}\n\\right), \\label{normed} \\ea where the parameters $-1 \\leq m_{\\alpha} \\leq 1$. \n\nEvent by event $\\rho(\\ell)$ is an array of numbers, and $\\rho(X)(m)$ is an array of parameters. \nThe results are combined to make the $J$th instance of $tr(\\rho_{J}(\\ell)\\rho(X)(m))$, where $\\rho(X)(m)$ has been parameterized in Eq. \\ref{normed}. Fit the $m_{\\a}$ parameters to the data set. For example, the log likelihood ${\\cal L}$ of the set $J=1...J_{max}$ is \\ba {\\cal L}(m) =\\sum_{J=1}^{J_{max}} \\, \\log\\left( tr\\left(\\rho^{(J)}(\\ell) \\cdot \\rho(X)(m) \\right) \\right)+ J_{max}\\log(3\/4\\pi) . \\label{like} \\ea Sample code available online\\footnote{To help readers appreciate the practical value of these advantages, we constructed standalone analysis code in both ROOT and Mathematica \\cite{url}. We expect the code to provide useful cross-checks on code users might write for themselves. \\label{link}} carries out these steps, returning parameters $m_{\\a}$.\nThe details of cuts and acceptance appear in fitting the {\\it numbers} $m_{\\a}$ using {\\it numbers} for the lepton matrix $\\rho(lep)$ (not angles, nor trigonometric functions). In one example with simulated $Z$-boson data we found \\ba \\rho_{fit}(X) = \\left(\n\\begin{array}{ccc}\n 0.5574 & 0.01399-0.07144 i &\n -0.004026+0.013487 i \\\\\n 0.01399+0.07144 i & 0.4422 & 0.003138-0.002670\n i \\\\\n -0.004026-0.013487 i & 0.003138+0.002670 i &\n 0.0004268 \\\\\n\\end{array}\n\\right) \\nonumber \\ea Using the Standard Model parameters for $\\rho(\\ell)$, Eq. \\ref{hatell} and Eq. \\ref{rholep}, the trace yields \\ba & {d N \\over d \\Omega_{fit}}\\sim tr(\\rho(\\ell)\\rho(X))=0.5000+ 0.0007739 \\sin (\\phi ) \\sin (\\theta )+0.3090 \\cos\n ^2(\\phi ) \\cos ^2(\\theta ) \\nonumber \\\\ & +0.1904 \\sin ^2(\\phi )\n \\cos^2(\\theta )+... \\nonumber \\ea where ... indicates several terms that there is no need to write out. Integrated over $\\phi$, this expression becomes \\ba {dN_{fit} \\over d \\cos\\theta } ={3\\over 4 \\pi}\\left( 1.57 + 0.137 \\cos \\theta + 1.56 \\cos^{2} \\theta \\right). \\nonumber \\ea A $1+\\cos^{2}\\theta$ distribution is the leading order Drell-Yan prediction for virtual spin-1 boson annihilation, while $ 0.137 \\, \\cos\\theta$ represents a charge asymmetry. \n\nIt is trivial to go from $tr(\\rho(\\ell)\\rho(X))$ to a conventional parameterization of an angular distribution by taking inner products of orthogonal functions. It is also easy to expand $\\rho(X)$ in a basis of orthonormal matrices with the same results. 
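The fitting procedure just described is compact enough to sketch in full. The following minimal illustration is ours (it is not the released ROOT\/Mathematica code); it assumes \\texttt{numpy} and \\texttt{scipy}, reuses the \\texttt{rho\\_probe} construction sketched after Eq. \\ref{rholep}, and drops the additive constant $J_{max}\\log(3\/4\\pi)$ of Eq. \\ref{like}, which does not affect the maximum.
\\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def rho_X(m):
    """rho(X) = M M^dagger with M from Eq. (normed): positive, unit trace."""
    m = np.asarray(m, dtype=float)
    M = np.array([[m[0], m[3] + 1j * m[4], m[5] + 1j * m[6]],
                  [0.0,  m[1],             m[7] + 1j * m[8]],
                  [0.0,  0.0,              m[2]]]) / np.sqrt(m @ m)
    return M @ M.conj().T

def neg_log_like(m, probes):
    """Negative log likelihood; probes[J] is rho_J(ell) for event J."""
    rx = rho_X(m)
    p = np.einsum('jik,ki->j', probes, rx).real   # tr(rho_J(ell) rho(X))
    return -np.sum(np.log(np.maximum(p, 1e-300)))

def fit(probes, seed=0):
    m0 = np.random.default_rng(seed).uniform(-1.0, 1.0, 9)
    res = minimize(neg_log_like, m0, args=(probes,),
                   method='L-BFGS-B', bounds=[(-1.0, 1.0)] * 9)
    return rho_X(res.x)

# probes = np.stack([rho_probe(l) for l in unit_lepton_directions])
# rho_fit = fit(probes)
\\end{verbatim}
By construction every trial $\\rho(X)(m)$ is positive with unit trace, so positivity never has to be imposed as a side condition, and by the convexity property discussed below a local maximum of the likelihood is the global one.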
Note these steps are {\\it exact}, and very different from fitting data to trigonometric functions in some convention, which tends to yield multiple solutions and violations of positivity, and which can introduce pathological convention-dependence. Perhaps struggles with the convention-dependence of quarkonium data \\cite{Faccioli:2010ji, Faccioli:2010kd} are related to this. It would be interesting to investigate.\n\n\\subsection{Summary of quantum tomography procedure}\n\\label{sec:summary}\n\nTo analyze data for each event labeled $J$: \\begin{itemize} \\item\n$\\text{Compute} \\; Q_{J} =k_{J}+k_{J}'; \\; \\; \\; \\; \\ell_{J} =k_{J}-k_{J}'; \\; \\; \\; \\; (X_{J}^{\\mu}, \\, Y_{J}^{\\mu}, \\, Z_{J}^{\\mu}) ; \\smallskip \\\\ \\smallskip\n \\vec \\ell_{XYZ, J} = ( X_{J}\\cdot \\ell_{J}, \\, Y_{J}\\cdot \\ell_{J} , \\, Z_{J}\\cdot \\ell_{J});\\\\\n\\hat \\ell_{J} =\\ell_{XYZ, \\, J} \/\\sqrt{-\\ell_{XYZ, \\, J}\\cdot \\ell_{XYZ, \\, J}}.$\n\n \\item Make the lepton density matrix. For $Z$ bosons in the Standard Model it is \\ba \\rho_{ij}(\\ell) = {1 \\over 2}(\\delta_{ij} - \\hat \\ell_{i} \\hat \\ell_{j}) - 0.22\\, \\imath \\epsilon_{ijk} \\hat \\ell_{k}. \\ea \n \\item The results are combined to make the $J$th instance of $tr(\\rho_{J}(\\ell)\\rho(X)(m))$, where $\\rho(X)(m)$ has been parameterized in Eq. \\ref{normed}. Fit the $m_{\\a}$ parameters to the data set. For example, the log likelihood ${\\cal L}$ of the set $J=1...J_{max}$ is again given by Eq. \\ref{like}, \\ba {\\cal L}(m) =\\sum_{J=1}^{J_{max}} \\, \\log\\left( tr\\left(\\rho^{(J)}(\\ell) \\cdot \\rho(X)(m) \\right) \\right)+ J_{max}\\log(3\/4\\pi) . \\nonumber \\ea Sample code available online (see footnote \\ref{link}) carries out these steps, returning parameters $m_{\\a}$. \\end{itemize}\n\n\\subsection{Comments}\n1. The possible symmetries of $\\rho(\\ell)$ enter here. Suppose $c_{A}=0$. Then $\\rho(\\ell)$ is even under parity, real and symmetric. The imaginary antisymmetric elements of $\\rho(X)$ are orthogonal, and contribute nothing to the angular distribution. When known in advance, the redundant parameters of $\\rho(X)$ can be set to zero while making the fit. (That does not mean unmeasured parameters can be forgotten when dealing with positivity.)\nIn general a fitting routine will either report a degeneracy for redundant parameters, or converge to values generated by round-off errors. Degeneracy will always be detected in the Hessian matrix computed to evaluate uncertainties. \n\n2. The normalization condition $ \\sum_{k} \\, m_{k}^{2}=1$ can be postponed by removing the factor $1\/\\sqrt{\\sum_{k} m_{k}^{2}}$ from Eq. \\ref{normed}, and subtracting $J_{max}\\, \\log( \\sum_{k} \\, m_{k}^{2})$ from the log-likelihood (Eq.\\ref{like}). When that is done the fitted density matrix will not be automatically normalized, due to the symmetry $\\rho(X) \\rightarrow \\lambda \\rho(X)$ of the modified likelihood. The density matrix becomes normalized by dividing by its trace. Incorporating such tricks improved the speed of the code available online (see footnote \\ref{link}) by a factor of about 100.\n\n3. Algorithms are said to compute a ``unique'' Cholesky decomposition, which would seem to predict $m_{\\a}$ given $\\rho(X)$. The algorithms choose certain signs of $m_{\\a}$ by a convention making the diagonals of $M$ positive. However that is not quite enough to assure a numerical fit finds a unique solution. 
\n\nThe fundamental issue is that $MM^{\\dagger}=\\rho(X)$ is solved by $M= \\sqrt{\\rho(X)}$, and the square root is not unique. There are $2^{N}$ arbitrary sign choices possible among $N$ eigenvalues of $\\sqrt{\\rho(X)}$. Forcing the diagonals of $M$ to be positive reduces the possibilities greatly, and an algorithm exists to force a unique, canonical form of $m_{\\a}$ in a data fitting routine. We did not make use of such a routine, since fitting $\\rho(X)$ is the objective. Depending upon the data fitting method, increasing the number of ways for $M(m_{\\a})$ to make a fit sometimes makes convergence faster. \n\n4. Let $<>_{exp}$ stand for the expectation value of a quantity in the experimental distribution of events. By symmetry $ <\\hat \\ell>_{exp}$ and $< \\hat \\ell_{i} \\hat \\ell_{j} >_{exp}$ are vector and tensor estimators, respectively, which must depend on the vector and tensor parameters $\\vec S$, $U_{ij}(X)$ in the underlying density matrix. A calculation finds \\ba & <\\hat \\ell>_{exp} = {1 \\over J_{max}}\\sum_{J} \\hat \\ell_{J} = - {1 \\over 4}\\vec S; \\nonumber \\\\ & < \\hat \\ell_{i} \\hat \\ell_{j} >_{exp}={1 \\over J_{max}}\\sum_{J} \\hat \\ell_{Ji}\\hat \\ell_{Jj} = {1\\over 3}\\delta_{ij} - {1\\over 5} Re[U_{ij}]. \\nonumber \\ea An estimate of $\\rho(X)$ not needing a parameter search then exists directly from data. However positivity of $\\rho(X)$ is more demanding, and not automatically maintained by such estimates.\n\n\\section{Results}\n\\subsection{Analysis Bonuses of the Quantum Tomography Procedure}\n\n\\subsubsection{Convex Optimization}\nThe issue of multiple solutions for $\\rho(X)$ is different. Multiple minima of $\\chi^{2}$ statistics affect fits to cross sections parameterized by trigonometric functions. However, quantum tomography using maximum likelihood happens to be a problem of {\\it convex optimization}. In brief, when $\\rho$ is positive then $<e|\\rho|e>$ is a positive convex function of $|e>$. Then $tr(\\rho(\\ell) \\rho(X))$ is convex, being equivalent to a positively weighted sum of such terms. The logarithm is a concave function, leading to a convex optimization problem. That means that {\\it when $\\rho(X)$ is a local maximum of likelihood it is the global maximum.} Exceptions can only come from degeneracies due to symmetry or an inadequate number of data points \\cite{DBLP:journals\/mp\/BurerM05}. 
Convex optimization is \nimportant because without such a property the evaluation of high-dimensional fits by trial and error can be exponentially difficult.\n\n\\subsubsection{Discrete Transformation Properties} \n\n\\begin{small} \n\n\\begin{table*}[ht]\n\\centering\n\n$\\begin{array}{|lll || c | c | c | c | c |}\n\\hline\nterm & origin & dN\/d\\Omega & C_{\\ell} & P & T & C_{\\ell}P &PT \\\\ \n\\hline \n \\cdot & \\ell & \\cdot & - & - & - & + & + \\\\ \n \\cdot & X & \\cdot & \\cdot & -& - &+ & + \\\\ \n \\cdot & Y &\\cdot & \\cdot & + & + &+ & + \\\\ \n \\cdot & Z & \\cdot & \\cdot & - & - & + & + \\\\ \nS_{x}& X\\ell & \\sin\\theta \\cos \\phi &- & + & + & - & +\\\\ \nS_{y} & Y\\ell & \\sin\\theta \\sin \\phi & - & - & - & + & + \\\\ \nS_{z} & Z\\ell & \\cos\\theta & - & + &+ & - & + \\\\ \n\\rho_{2} & XX \\ell \\ell & \\sin^{2}\\theta\\cos 2\\phi & + & + & + & + &+ \\\\ \n\\rho_{3} & XY \\ell \\ell & \\sin^{2}\\theta\\sin2\\phi & + & - & - & + & + \\\\ \n\\rho_{1} & XZ \\ell \\ell & \\sin 2\\theta \\cos\\phi & + & + & + & + & + \\\\ \n\\rho_{4} & YZ \\ell \\ell & \\sin 2\\theta\\sin\\phi & + & - & - & + & + \\\\ \n\\rho_{0} & ZZ\\ell \\ell & 1\/\\sqrt{3} -\\sqrt{3} \\cos^{2}\\theta & + & + & + & + & + \\\\ \\hline \n \\end{array}$\n\n \\caption{ \\small Terms in the angular distribution with their properties under discrete transformations $C_{\\ell}$, $P$, and $T$. Here $\\ell$ stands for $\\hat \\ell$, $ X\\ell $ stands for $\\hat X \\cdot \\hat \\ell=-X_{\\mu} \\ell^{\\mu}$, and so on with scalar normalization factors removed. $T$-odd scattering observables from the imaginary parts of amplitudes generally exist without violating fundamental $T$ symmetry. See the text for more explanation. }\\label{tab:symmetries}\n\\end{table*} \\end{small} \n\nTable \\ref{tab:symmetries} lists discrete transformation properties of all terms under parity $P$, time reversal $T$, and\nlepton charge conjugation $C_{\\ell}$. If leptons have different flavors (as in like or unlike sign $e \\mu$) the $C_{\\ell}$ operation swaps the particle defining $\\hat \\ell$. \n\nWhen coordinates $XYZ$ are defined the direction of $\\hat Y =\\hat Z\\times \\hat X$ is even under time reversal and parity, which is exactly the opposite of $X$ and $Z$. Then $\\vec S \\cdot \\hat Y$ is $T$-odd, contributing the $\\sin \\theta \\sin\\phi$ term.\\footnote{In a forthcoming study \\cite{INT} of inclusive lepton pair production near the $Z$ pole, we find interesting, new features in the $S_y$ data of Ref. \\cite{Aad:2016izn}.} The $XY$ and $ZY$ matrix elements of $\\rho(X)$ are also odd under $T$, contributing the terms shown. $T$-odd terms come from imaginary parts of amplitudes, which are generated by loop corrections in perturbative QCD.\n\nNotice that every term in the lepton density matrix (Eq. \\ref{rholep}) is automatically symmetric under $C_{\\ell}P$. This is a kinematic fact of the lepton pair probe which does not originate in the Standard Model. As a result the $C_{\\ell}P$ transformations of the angular distribution depend on the coupling to the unknown system. If overall $CP$ symmetry exists the target density matrix will have $CP$ odd terms where $C_{\\ell}P$ odd terms are found. 
In the Standard Model these $\\cos\\theta$ and $\\sin \\theta \\cos \\phi$ terms correspond to charge asymmetries of leptons correlated with charge asymmetries of the system, namely the beam quark and anti-quark distributions.\n\nWhile weak $CP$ violation is a mainstream topic, $P$ and $CP$ symmetry of the strong interactions at high energies has not been tested \\cite{mihailo}. The gauge sector of $QCD$ is {\\it kinematically} $CP$ symmetric, because the non-Abelian $tr(\\vec E\\cdot \\vec B)$ term is a pure divergence.\\footnote{Non-perturbative strong $CP$ violation in $QCD$ by a surface term has been proposed. Tests have been dominated by the neutron dipole moment, while calculations of non-perturbative effects are problematic.} Higher order terms in a gauge-covariant derivative expansion are expected to exist, and can violate $CP$ symmetry \\cite{mihailo}.\n\nHowever, measuring violation of $CP$ or fundamental $T$ symmetry in scattering experiments is invariably frustrated by the experimental impossibility of preparing time-reversed counterparts. Some ingenuity is needed to devise a signal. It appears that any signal will involve four independent 4-momenta $p_{J}$ and a quantity of the form $\\Omega_{4}= \\epsilon_{\\a \\b\\lambda \\sigma } p_{1}^{\\a}p_{2}^{\\b}p_{3}^{\\lambda}p_{4}^{\\sigma}$. For example a term going like $\\ell \\cdot Y \\sim \\epsilon_{\\a \\b\\lambda \\sigma}\\ell_{\\a}Q_{\\b}P_{A \\lambda}P_{B \\sigma}$ might possibly originate in fundamental $T$ symmetry violation, and be mistaken for perturbative loop effects. A more creative road to finding $CP$ violation involves two pairs with sum and difference vectors $Q, \\, \\ell; \\, Q', \\, \\ell'$, and the scalar $\\epsilon_{\\a \\b\\lambda \\sigma}\\ell_{\\a}Q_{\\b}Q_{\\lambda}' \\ell'_{ \\sigma}$, which is even under $C$ and odd under $P$. The pairs need not be leptons (although ``double Drell Yan'' has long been discussed) but might be (say) $\\mu^{+}\\mu^{-} \\pi^{+}\\pi^{-}$. It would be interesting to explore further what a tomographic approach to such observables might uncover. \n\n\n\\subsubsection{Density Matrix Invariants}\n\\label{sec:Invariants} \n\nWe mentioned that scattering planes, trig functions, boosts and rotations could be avoided, and the examples show how. Once a frame convention is defined the lepton ``coordinates'' $( X_{J}\\cdot \\ell_{J}, \\, Y_{J}\\cdot \\ell_{J} , \\, Z_{J}\\cdot \\ell_{J})$ are actually Lorentz scalars. However they depend on the convention for $XYZ$, which is arbitrary. At least four different conventions compete for attention. Moreover, once a frame is chosen, at least two naming schemes (the ``$A_{k}$'' and ``$\\lambda_{k}$'' schemes) exist to describe the angular distribution in terms of trigonometric polynomials.\n\nWell-constructed invariants can reduce the confusion associated with convention-dependent quantities \\cite{Palestini:2010xu, Shao:2012fs, INT, Ma:2017hfg}. Since $\\vec S$ transforms like a vector its magnitude-squared $\\vec S^{2}$ is rotationally invariant. The spin-1 part of $\\rho(X)$ does not mix with the real symmetric part under rotations. Since it is traceless, the real symmetric (spin-2) part has two independent eigenvalues, which are rotationally invariant.\\footnote{Work by Faccioli and collaborators \\cite{Faccioli:2010ps, Faccioli:2010ej} attempted to construct invariants by inspecting the transformation properties of ratios of sums of angular distribution coefficients upon making rotation about the conventional $Y$ axis. 
The method cannot identify a true invariant unless $Y$ happens to be an eigenvector of the matrix. By the same method the group also identified $\\vec S^{2}$ as a ``parity violating invariant,'' while $\\vec S^{2}$ is actually even under parity. Parity violation is not required to measure $\\vec S$ with polarized beams.} Finally the dot-products of three eigenvectors $\\hat e_{J}$ of the spin-2 part with $\\vec S$ are rotationally invariant. Then $(\\hat e_{j}\\cdot \\vec S)^{2}$ are three invariants not depending on the sign of eigenvectors. That suggests six possible invariants, but $\\sum_{j} \\, (\\hat e_{j}\\cdot \\vec S)^{2}=\\vec S^{2}$ makes the $\\vec S$ invariants dependent, leaving five independent rotational invariants. That is consistent with counting 8 real parameters in a $3\\times 3$ Hermitian matrix, subject to 3 free parameters of the rotation group, leaving 8-3=5 rotational invariants. The same counting for unitary transformations would leave only the two independent eigenvalues of the matrix. \n\n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=4in]{EntyContourOverlap.pdf}\n\\caption{ Contours of constant entropy ${\\cal S}$ of the lepton density matrix $\\rho(\\ell)$ (Eq. \\ref{rholep}) in the plane of parameters $(a, \\, b)$. Contours are separated by 1\/10 unit with ${\\cal S}=0$ at the central intersection. The horizontal dashed line shows the lowest order Standard Model prediction $ b=\\sin^{2}\\theta_{W}$. Annihilation with on-shell helicity conservation is indicated by the vertical dashed line $a=1\/2$. The left corner of the triangle is a pure state with longitudinal polarization, while the two right corners are pure states of circular polarization. The interior lines represent matrices with maximal symmetry, where two eigenvalues are equal. They cross at the unpolarized limit. The curved gray region represents the much less restrictive constraints of a positive distribution using Eq. \\ref{dist1} and lepton universality. }\n\\label{fig:EntyContours}\n\\end{center}\n\\end{figure}\n\n\nAny function of invariants is invariant. The combinations below have useful physical interpretations: \\begin{itemize} \\item The {\\it degree of polarization $d$} is a standard measure of the deviation from the unpolarized case. It comes from the sum of the squares of the eigenvalues of $\\rho$ minus 1\/3, normalized to the maximum possible: \\ba & d = \\sqrt{ (3 tr(\\rho_X^{2})-1)\/2} ,\\nonumber \\\\ & \\qquad \\text{where} \\qquad 0 \\leq d \\leq 1. \\nonumber \\ea When $d=0$ the system is unpolarized, and when $d=1$ the system is a pure state. \n\n\\item The {\\it entanglement entropy ${\\cal S}$} is the quantum mechanical measure of order. The formula is \\ba & {\\cal S}=-tr(\\rho_X \\, \\log(\\rho_X)) . \\nonumber \\ea In terms of eigenvalues $\\rho_{\\a}$, ${\\cal S} = -\\sum_{\\a } \\, \\rho_{\\a} \\log(\\rho_{\\a})$.\nWhen $\\rho \\rightarrow 1_{N\\times N}\/N$ the system is unpolarized, and ${\\cal S}=\\log(N)$. That is the maximum possible entropy, and minimum possible information. When ${\\cal S}=0$ the entropy is the minimum possible, providing the maximum possible information, and the system is a pure state. \n\nIt is instructive to interpret $e^{{\\cal S}}$ as the ``effective dimension'' of the system. 
For example the eigenvalues $(1\/2 +b, \\, 1\/2-b, \\, 0)$ occur in the density matrix of on-shell fermion annihilation with helicity conservation. One zero-eigenvalue describes an elliptical disk-shaped object. The entropy ranges from ${\\cal S}=0$ ($e^{{\\cal S}}=1$ for $b=1\/2$, a one dimensional stick shape) to ${\\cal S}=\\log(2)$ ($e^{{\\cal S}}=2$ for a disk-shaped object with maximum symmetry). As expected, an unpolarized 3-dimensional system has three equal eigenvalues, is shaped like a sphere, and $e^{{\\cal S}} \\rightarrow 3.$ \n\n\\begin{figure}[htp]\n\\begin{center}\n\\includegraphics[width=4in]{PlaneCut.jpg}\n\\caption{Boundary of the positivity region of a density matrix depending on three parameters $a, \\, b, \\, c$ described in the text. The two-dimensional region cut by the plane $c=0$ corresponds to Figure \\ref{fig:EntyContours}. }\n\\label{fig:PlaneCut}\n\\end{center}\n\\end{figure}\n\nFigure \\ref{fig:EntyContours} shows the entropy of the lepton density matrix $\\rho(\\ell)$ (Eq. \\ref{rholep}) in the plane of parameters $(a, \\, b)$. The matrix eigenvalues are\\footnote{It can be shown that the eigenvalues $\\lambda_{k} =1\/3+( 2d\/3) \\cos(\\theta_{k} )$, where $d$ is the degree of polarization and $\\theta_{k} = \\cos^{-1}(det( (3\\rho(X)-1_{3\\times 3})\/d)\/2+ 2 \\pi k\/3)$.} $(1\/3 - 2 a\/3, \\, 1\/3 + a\/3 - b, \\, 1\/3 + a\/3 + b)$. The triangular boundaries are the positivity bounds on these parameters, outside of which the entropy has an imaginary part. The corners of the triangle are pure states. The left corner represents a purely longitudinal polarization, $\\rho_{L}=|L><L|$ with $|L>=(0, \\, 0, \\, 1)$ in a coordinate system where $\\hat \\ell=\\hat z$. The two right corners are purely circular polarizations, $\\rho_{\\pm}=|\\epsilon_{\\pm}><\\epsilon_{\\pm}|$, where in the same coordinates $ \\epsilon_{\\pm} =(1, \\, \\pm \\imath, \\, 0)\/\\sqrt{2}.$ The interior lines $a=\\pm b, \\, b=0$ represent maximal symmetry matrices having two equal eigenvalues. They cross at the unpolarized limit. The figure also indicates the constraints of a positive distribution for the example of Eq. \\ref{dist1} assuming lepton universality. The values of $a$ and $b$ are actually unrestricted in all directions, so long as they lie within the bounding curves.\n\nThe Standard Model leptons from lowest order $s$-channel $Z$ production have $a=1\/2, \\, b=\\sin^{2}\\theta_{W}$, which is shown in Fig. \\ref{fig:EntyContours} as a dot. The edge $a=1\/2$ corresponds to on-shell helicity conservation, with eigenvalues $0, \\, 1\/2 - \\sin^{2}\\theta_{W}, \\, 1\/2 + \\sin^{2}\\theta_{W}$. The $a, \\, b$ parameters of leptons from a different production process, or subject to radiative corrections, must still lie inside the triangle. Maximal symmetry with eigenvalues (1\/2, 1\/2, 0) occurs where the line of $b=\\sin^{2}\\theta_{W}$ just touches the $b=-a$ line, which happens at $\\sin^{2}\\theta_{W}=1\/4$. That is not far from the Standard Model value, which is very interesting. Since no established theory predicts $\\sin^{2}\\theta_{W}$ one cannot rule out a deeper connection. \n\nIt is tempting but incorrect to assume the bounds discussed would apply to the same terms of a more general density matrix. For example, add $-c \\hat n_{i} \\hat n_{j}$ to the expression in Eq. \\ref{rholep}, where $\\hat n \\cdot \\hat \\ell=0$, and update the normalization condition. 
The resulting positivity region of $a, \\, b, \\, c$ is shown in Figure \\ref{fig:PlaneCut}, which also shows the plane $c=0$ equivalent to Figure \\ref{fig:EntyContours}. At the extrema $c=\\pm 1$ the region of consistent $(a, \\, b)$ parameters shrinks to single points.\n\n\nThe matrix for $\\rho(X)$ computed earlier is an example where all terms in any standard convention happen to occur. By inspection this system (mostly quark-antiquark annihilation) is superficially much like the lepton one. Its entropy is 0.68 and $e^{{\\cal S}}=1.96$, and one eigenvalue is close to zero. Of course there is much more information in the other parameters, the orientation of eigenvectors, $\\vec S$, and its magnitude.\\end{itemize}\n\n\\section{Discussion} \n\nThe quantum tomography procedure offers {\\it at least seven} significant advantages over standard methods of analyzing the angular correlations of inclusive reactions:\n\n\\begin{itemize}\n\\item{\\it Simplicity and Efficiency.} Tomography exploits a structured order of analysis. By construction, unobservable elements never appear. \n\n\\item{\\it Covariance.} Physical quantities are expressed covariantly every step of the way. That is not always the case with quantities like angular distributions.\n\n\\item{\\it Complete polarization information.} The unknown density matrix $\\rho(X)$ contains all possible information, ready for classification under symmetry groups.\n\n\\item{\\it Model-independence.} No theoretical planning, nor processing, nor assumptions are made about the unknown state. The process of defining general structure functions has been completely bypassed. It is not even necessary to assume anything about the spin of $s$- or $t$-channel intermediates. The {\\it observable} target structure is always a mirror of the probe structure. The ``mirror trick'' is universal as described in Section \\ref{sec:mirrror}. \n\n\\item{\\it Manifest positivity.} A pattern of misconceptions in the literature misidentifies positivity as being equivalent to positive cross sections. It is not difficult to fit data to an angular distribution and violate positivity. In fact, {\\it an angular distribution expressed in terms of expansion coefficients actually lacks the quantum mechanical information to enforce positivity.} \n\n\\item{\\it Convex optimization.} The positive character of the density matrix leads to convex optimization procedures to fit experimental data. This provides a powerful analysis tool that ensures convergence. \n\n\\item{\\it Frame independence.} Once the unknown density matrix has been reconstructed, rotationally invariant quantities can be made by straightforward methods. This is illustrated in Section \\ref{sec:Invariants}, which includes a discussion of the entanglement entropy.\n\n\\end{itemize}\n\nQuantum tomography has already yielded significant results. Our tomographic analysis \\cite{INT} of a recent ATLAS study of Drell-Yan lepton pairs with invariant mass near the $Z$ pole \\cite{Aad:2016izn} discovered surprising features in the density matrix eigenvalues and entanglement entropy. By way of advertising, we have also gained insight into the mysterious Lam-Tung relation \\cite{LamTung78,*LamTung80}, including why it holds at NLO but fails at NNLO. These topics will be presented in separate papers.\n\n\\begin{acknowledgments}\nThe authors thank the organizers of the INT17-65W workshop ``Probing QCD in Photon-Nucleus Interactions at RHIC and LHC: the Path to EIC'' for the opportunity to present this work. 
We also thank workshop participants for useful comments.\n\\end{acknowledgments}\n\n\\section*{References}\n\n\\section{Introduction \\& Related Work}\n\nThe recurrent neural network transducer (RNN-T) model\n\\cite{graves2012seqtransduction,graves2013speechrnnt}\nis an end-to-end model which allows for time-synchronous decoding,\nwhich is a more natural fit for many applications such as online recognition.\nThus\nRNN-T and many variations have recently gained interest\n\\cite{zhang2020trafotransducer,han2020contextnet,gulati2020conformer,%\nvariani2020hat,%\nzeyer2020:transducer,%\nzhou2021phonemetransducer}.\n\nIn a Bayesian interpretation,\na discriminative acoustic model $p_{\\mathrm{AM}}(y \\mid x)$\ncan be combined with an external language model $p_{\\mathrm{LM}}(y)$\nby\n\\[ p(y \\mid x) =\n\\frac{p_{\\mathrm{AM}}(y \\mid x)}{ p_{\\mathrm{AM}}(y) }\n\\cdot p_{\\mathrm{AM}}(x) \\cdot p_{\\mathrm{LM}}(y)\n\\cdot \\frac{1}{p(x)}. \\]\nIn recognition, when searching for $\\argmax_{y} p(y\\mid x)$,\nwe can omit $p(x)$ and $p_{\\mathrm{AM}}(x)$.\nIn shallow fusion, $p_{\\mathrm{AM}}(y)$ is omitted as well.\nIn the density ratio approach \\cite{mcdermott2019density},\n$p_{\\mathrm{AM}}(y)$ is estimated by a separate language model\ntrained on just the acoustic training transcriptions.\nIn the hybrid autoregressive transducer (HAT) \\cite{variani2020hat},\n$p_{\\mathrm{AM}}(y)$ is estimated directly based on the implicit internal LM (ILM)\nof $p_{\\mathrm{AM}}(y \\mid x)$.\nThe HAT model has a particularly simple architecture\nwhich was designed such that there is a simple approximation\nfor this ILM estimation by setting the encoder input to 0.\nWe follow up on the ILM estimation approach\nand try some variations of the estimation.\nUsing 0 as encoder input also works\nbut we found some other variations to be better.\n\n\n\\section{Model}\n\nWe follow a transducer variant as defined in \\cite{zeyer2020:transducer}.\nThe whole model can be seen in \\Cref{fig:librispeech_transducer}.\nLet $x_1^{T'}$ be the acoustic input features (MFCC in our case) of length $T'$,\nand $y_1^S$ some label sequence of length $S$ over labels $\\Sigma$\n(excluding blank \\ensuremath{\\epsilon}).\nWe use \\emph{byte pair encoding (BPE)}-based \\emph{subword units}\n\\cite{sennrich2015neuralbpe,zeyer2018:asr-attention}\nwith a vocabulary size of about 1000 labels%\n\\footnote{In earlier work on attention-based encoder decoder models,\nwe used 10k BPE labels for Librispeech.\nHowever, because of computation time and memory constraints,\nwe reduced it to 1k for the transducer model.}.\n\n\\begin{figure}\n\\makebox[\\textwidth][l]{%\n\\hspace{-7mm}\n\\includegraphics[width=1.1\\columnwidth]{figures\/drawio\/RnnTDec_Librispeech_Encoder_bidir_accurate.pdf}%\n}\n\\caption[Transducer model]{Our transducer model with all dependencies.\nThe decoder is unrolled over the alignment axis $u$.\nCompare to \\cite{zeyer2020:transducer}.}\n\\label{fig:librispeech_transducer}\n\\end{figure}\n\nWe have a multi-layer bidirectional LSTM \\cite{hochreiter1997lstm} \\emph{encoder} model\nwith interchanged max-pooling in time to downscale the input to length $T$\nwith factor 6.\nThis results in\n\\[ h_1^T := \\operatorname{Encoder}(x_1^{T'}) . 
\\]\n\nWe define the probability for the label sequence $y_1^S$ as\n\\begin{align*}\np(y_1^S \\mid x_1^{T'} ) & :=\n\\sum_{\\alpha_1^U : y_1^S} p(\\alpha_1^U \\mid x_1^{T'}), \\\\\np(\\alpha_1^U \\mid x_1^{T'})\n& := \\prod_{u=1}^U p_u(\\alpha_u \\mid \\alpha_1^{u-1}, x_1^{T'}),\n\\end{align*}\nwith alignment label $\\alpha_u \\in \\Sigma' := \\Sigma \\cup \\{\\ensuremath{\\epsilon}\\}$,\nand where $\\alpha_1^U : y_1^S$ is defined by the label topology $\\mathcal{A}$.\nSpecifically, we use alignment length $U = T + S$\nand allow all alignment label sequences $\\alpha_1^U$\nwhich match the sequence $y_1^S$ after removing all blanks \\ensuremath{\\epsilon},\nalso notated as $\\mathcal{A}(\\alpha_1^U) = y_1^S$.\nThis defines an alignment between $h$ and $y$\nas can be seen in \\Cref{fig:lattice_rnnt}.\n\n\\begin{figure}\n\t\\centering\n\t\\resizebox{\\columnwidth}{!}{%\n\t\\input{figures\/label_topologies\/topology_rnnt.tikz}\n}\n\\caption[Label topology]{Unrolled label topology with allowed vertical transitions,\nwith a highlighted path for the sequence\n$\\mathcal{A}(\\text{``\\ensuremath{\\epsilon}\\blank{}th\\ensuremath{\\epsilon}\\blank e\\ensuremath{\\epsilon}\\blank s\\ensuremath{\\epsilon} i\\ensuremath{\\epsilon} s\\ensuremath{\\epsilon} \\ensuremath{\\epsilon}''}) = \\text{``thesis''}$.\nThe terminal node is marked in the top-right corner.}\n\\label{fig:lattice_rnnt}\n\\end{figure}\n\nOur \\emph{decoder} model defines the probability distribution\nover labels $\\alpha_u$ as\n\\begin{align*}\np_u(\\alpha_u \\mid \\makebox[0.8em][c]{...}) & :=\n\\begin{cases}\np_u(\\Delta t_u {=} 1 \\mid \\makebox[0.8em][c]{...}), & \\alpha_u = \\ensuremath{\\epsilon}, \\\\\np_u(\\Delta t_u {=} 0 \\mid \\makebox[0.8em][c]{...}) \\cdot q_u(\\alpha_u \\mid \\makebox[0.8em][c]{...}) , & \\alpha_u \\in \\Sigma\n\\end{cases}\n\\\\\np_u( \\Delta t_u \\mid \\makebox[0.8em][c]{...}) & :=\n\\begin{cases}\n\\sigma(-\\operatorname{FF}_{\\operatorname{emit}}( z_u^{\\operatorname{fast}} )), & \\Delta t_u = 1 \\\\\n\\sigma(\\phantom{-}\\operatorname{FF}_{\\operatorname{emit}}(z_u^{\\operatorname{fast}})), &\n\\Delta t_u = 0\n\\end{cases}\n\\\\\nq_u(\\alpha_u \\mid \\makebox[0.8em][c]{...}) &:=\n\\operatorname{softmax}_{\\Sigma}\n( \\operatorname{FF}_{\\Sigma}( z^{\\text{fast}}_{u} )), \\quad \\alpha_u \\in \\Sigma\n\\\\\nz^{\\text{fast}}_{u} & :=\n\\operatorname{Readout}(h_{t_u}, z^{\\text{slow}}_{s_{u}}) \\\\\nz^{\\text{slow}}_{s_{u}} & := \\operatorname{SlowRNN} (y_1^{s_{u} - 1})\n\\end{align*}\nwhere $\\sigma$ is the sigmoid function\nand $\\Delta t_u \\in \\{0,1\\}$,\nwhere $\\Delta t_u = 0$ means that we emit a new non-blank label ($\\Delta s_u = 1$)\nand $\\Delta t_u = 1$ means that we proceed forward in the time dimension\nwithout emitting a non-blank label ($\\Delta s_u = 0$).\nThus $\\Delta t_u = 1$ can be understood as a reinterpretation of the blank label $\\ensuremath{\\epsilon}$.\nThen we have a separate probability distribution $q$\nover the labels $\\Sigma$ (excluding $\\ensuremath{\\epsilon}$).\n$\\operatorname{FF}_{\\operatorname{emit}}$ and $\\operatorname{FF}_{\\Sigma}$\nare linear transformations,\n$\\operatorname{Readout}$ is a linear transformation with maxout activation,\nand $\\operatorname{SlowRNN}$ is an LSTM.\n\n\\section{Training}\n\nThe loss is defined as\n\\[ L := -\\log p(y_1^S \\mid x_1^{T'}) = -\\log \\sum_{\\alpha_1^U : y_1^S} p(\\alpha_1^U \\mid x_1^{T'}) . 
\n\n\\section{Training}\n\nThe loss is defined as\n\\[ L := -\\log p(y_1^S \\mid x_1^{T'}) = -\\log \\sum_{\\alpha_1^U : y_1^S} p(\\alpha_1^U \\mid x_1^T) . \\]\nAs we use the simplified transducer model\nwhere $\\operatorname{Readout}$ does not depend on $\\alpha_{u-1}$,\nwe can efficiently calculate the exact full sum over all alignments $\\alpha_1^U : y_1^S$\nand do not need the maximum approximation \\cite{zeyer2020:transducer}.\n\nWe use zoneout \\cite{krueger2017zoneout} for the $\\operatorname{SlowRNN}$\nand optionally recurrent weight dropout \\cite{wan13dropconnect} for the encoder BLSTMs.\n\nWe use the Adam optimizer \\cite{kingma2015adam}\nwith learning rate scheduling based on cross validation scores.\nAdditionally, we reset the learning rate back to the initial value after a larger number of epochs,\nonce the model has already converged,\nand start over with the learning rate scheduling.\nWe train for 128 epochs.\nThis long training schedule had a huge effect on the overall performance.\n\n\\subsection{Pretraining}\n\nWe use a pretraining scheme\nwhere we schedule multiple aspects of the training:\n\\begin{itemize}\n\\item We grow the encoder from 3 layers with 500 dimensions\nup to 6 layers with 1000 dimensions \\cite{zeyer2018:attanalysis}.\n\\item We increase the dropout rates.\n\\item\nWe use curriculum learning and start with shorter sequences initially.\n\\item We use linear learning rate warmup from $0.0001$ to $0.001$.\n\\item We use a higher initial time reduction factor of 20 in the encoder\nand reduce it to the final factor of 6.\n\\end{itemize}\n\n\\subsection{Distributed Multi-GPU Training}\n\nOur distributed training implementation\nuses independent trainer (worker) instances per GPU.\nEach worker independently loads the dataset.\nTo make sure that every worker uses a different part of the dataset,\nit is common to use striding.\nStriding has the disadvantage that it is very IO intensive\nin this setting where every worker loads the dataset independently,\nand it often becomes the bottleneck.\nInstead, we use a different random seed\nfor the shuffling of the dataset in every worker,\nwhich replaces the striding.\nThis greatly improved the IO in our case\nand made the training much faster.\n\nAdditionally, every worker independently trains its own copy of the model\nfor multiple update steps, until the models get synchronized\nby averaging the parameters over all workers.\nAs a further improvement, we do not synchronize\nafter a fixed number of steps,\nbut instead after a fixed time interval.\nA fixed number of steps implies that the training is always as slow\nas the slowest worker,\nand variations in the runtime often lead to some workers\nbeing slower than others even on the same hardware.\nSynchronizing after a fixed time interval does not have this problem,\nwhile being more stochastic.\n\nWe synchronize only after 100 seconds\nto reduce the communication between workers.\nThe workers can potentially be on different computing nodes\nand might need to communicate over network,\nwhich can result in 1-2 seconds for the synchronization.\nWe train on either 8 or 16 GPUs.
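\n\nThe following is a schematic sketch of this training loop per worker,\nwritten with mpi4py for illustration (our actual implementation uses Horovod);\nthe dataset and model interfaces are assumptions.\n\\begin{verbatim}\nimport time\nfrom mpi4py import MPI\n\nSYNC_INTERVAL_SECS = 100.0\n\ndef train_worker(dataset, model, comm=MPI.COMM_WORLD):\n    # Each worker shuffles with its own seed instead of striding.\n    rank, size = comm.Get_rank(), comm.Get_size()\n    dataset.shuffle(seed=rank)     # assumed dataset interface\n    last_sync = time.time()\n    for batch in dataset:\n        model.update(batch)        # independent local update step\n        if time.time() - last_sync >= SYNC_INTERVAL_SECS:\n            # All workers started together, so they reach this point at\n            # roughly the same wall-clock time, possibly after different\n            # numbers of local steps. Average the parameters.\n            for p in model.params:  # numpy arrays\n                comm.Allreduce(MPI.IN_PLACE, p, op=MPI.SUM)\n                p /= size\n            last_sync = time.time()\n\\end{verbatim}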
\n\n\\section{Decoding \\& Language Model Combination}\n\nOur beam search decoding tries to find the sequence $\\hat{y}_1^{\\hat{S}}$\ngiven $x_1^{T'}$ which maximizes the probability,\ni.e.~specifically\n\\begin{align*}\nx_1^{T'} \\mapsto \\hat{S}, \\hat{y}_1^{\\hat{S}} & := \\argmax_{S, y_1^S} \\log p(y_1^S \\mid x_1^{T'}) \\\\\n& \\approx \\mathcal{A} \\circ \\argmax_{U, \\alpha_1^U} \\log p(\\alpha_1^U \\mid x_1^{T'})\n\\end{align*}\nWe perform alignment-synchronous decoding,\ni.e.~all hypotheses are in the same alignment step $u$\nwhen being pruned \\cite{zeyer2020:transducer,saon2020rnnt}.\nWe merge hypotheses by summing their scores\nwhen they correspond to the same word sequence after BPE-merging.\n\nThe training recipe for our BPE-10K LSTM LM \\cite{irie2020:phd}\nhas been adapted for the new BPE-1k label set,\nbut otherwise no changes have been made.\n\\emph{Shallow fusion} (SF) \\cite{gulcehre2016monolingual} is a log-linear combination\nof the $\\log$-scores of the external LM and the ASR model during the recognition process,\nwith scale $\\ensuremath{\\beta}$ for the LM\nand scale $\\ensuremath{\\lambda}$ for the acoustic (non-blank) label probability $q$,\nwhile we do not add a separate scale for $p(\\Delta t)$.\nSpecifically, we use the score\n\\begin{align*}\n\\log p^{\\text{SF}}_u(\\alpha_u \\mid \\makebox[0.8em][c]{...}) & :=\n\\begin{cases}\n\\log p_u(\\Delta t_u {=} 1 \\mid \\makebox[0.8em][c]{...}), & \\alpha_u = \\ensuremath{\\epsilon}, \\\\\n\\log p_u(\\Delta t_u {=} 0 \\mid \\makebox[0.8em][c]{...}) \\\\\n\t\t\\quad \\phantom{x} + \\ensuremath{\\lambda} \\cdot \\log q_u (\\alpha_u \\mid \\makebox[0.8em][c]{...}) \\\\\n\t\t\\quad \\phantom{x} + \\ensuremath{\\beta} \\cdot \\log p_{\\operatorname{\\small LM}}( \\alpha_u \\mid \\makebox[0.8em][c]{...} )\n\t\t, & \\alpha_u \\in \\Sigma\n\\end{cases} .\n\\end{align*}\nWe experimented with fixing the label scale at $\\ensuremath{\\lambda}=1$ or $\\ensuremath{\\lambda}=1 - \\ensuremath{\\beta}$.\n\nInspired by \\cite{mcdermott2019density,variani2020hat,meng2021ilm},\nwe also tried to \\emph{subtract the internal LM} log score.\nThis assumes that we can factorize our model into a language model and an acoustic model.\nAlthough our model is not directly formulated as such,\nwe can approximate the internal language model.\nFor that we used the estimated score $\\log p_{\\operatorname{\\small ILM}}$\nas shown in \\cref{sec:internal_lm_estimation},\nwhere we use the average of the encoder features over the time dimension.\n\\begin{align*}\n\\log p^{\\text{SF-ILM}}_u(\\alpha_u \\mid \\makebox[0.8em][c]{...}) & :=\n\\begin{cases}\n\\log p_u(\\Delta t_u {=} 1 \\mid \\makebox[0.8em][c]{...}), & \\alpha_u = \\ensuremath{\\epsilon}, \\\\\n\\log p_u(\\Delta t_u {=} 0 \\mid \\makebox[0.8em][c]{...}) \\\\\n\t\t\\phantom{.} + \\ensuremath{\\lambda} \\cdot \\log q_u (\\alpha_u \\mid \\makebox[0.8em][c]{...}) \\\\\n\t\t\\phantom{.} + \\ensuremath{\\beta} \\cdot \\log p_{\\operatorname{\\small LM}}( \\alpha_u \\mid \\makebox[0.8em][c]{...} ) \\\\\n\t\t\\phantom{.} - \\ensuremath{\\gamma} \\cdot \\log p_{\\operatorname{\\small ILM}}( \\alpha_u \\mid \\makebox[0.8em][c]{...} )\n\t\t, & \\alpha_u \\in \\Sigma\n\\end{cases}\n\\end{align*}
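\n\nIn log-space this per-step score combination is straightforward to implement;\nthe following NumPy sketch (with illustrative argument names)\ncomputes the SF-ILM scores for one alignment step, with blank at index 0.\n\\begin{verbatim}\nimport numpy as np\n\ndef combined_scores(log_p_blank, log_p_emit, log_q, log_p_lm,\n                    log_p_ilm, label_scale, lm_scale, ilm_scale):\n    # log_q, log_p_lm, log_p_ilm: arrays over the labels Sigma;\n    # log_p_blank, log_p_emit: scalars log p(dt=1), log p(dt=0).\n    label_scores = (log_p_emit\n                    + label_scale * log_q\n                    + lm_scale * log_p_lm\n                    - ilm_scale * log_p_ilm)  # subtract the internal LM\n    return np.concatenate([[log_p_blank], label_scores])\n\\end{verbatim}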
\n\n\\subsection{Internal LM Estimation}\n\\label{sec:internal_lm_estimation}\n\nThe transducer is trained on audio-text pairs\nbut learns an implicit prior model on the text.\nThis is explicitly given by the context dependency on previous labels.\nIn our transducer, the \\ensuremath{\\operatorname{SlowRNN}}{} is explicitly modeled\nsuch that it captures the most important part of this prior,\nas it operates only on the text part\nand runs label-synchronously.\nThis prior is an implicit internal LM in our acoustic model\n\\[ p_{\\mathrm{prior}}( y ) = \\sum_{x} p_{\\mathrm{AM}}(y \\mid x) \\cdot p(x) \\]\nwhich cannot be calculated efficiently in general.\nTo approximate the internal LM,\nwe replace the encoder input\nto the rest of the model ($\\operatorname{Readout}$).\nWe either use a $0$ vector or the encoder mean (\\ensuremath{\\operatorname{avg}}).\nThe mean is computed over the time dimension for each sequence separately.%\n\\footnote{We also tested several other variants but got mixed, inconclusive results.\nIn another work \\cite{zeineldeen2021ilm}, we investigate variants of the ILM estimation in more detail\nfor attention-based encoder-decoder models.}
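\n\nThe following sketch makes this substitution explicit;\nthe decoder interface (\\texttt{slow\\_rnn}, \\texttt{readout}, \\texttt{label\\_softmax})\nis an assumed stand-in for our model, and only the replacement of the encoder input matters here.\n\\begin{verbatim}\nimport numpy as np\n\ndef ilm_label_scores(decoder, y_prefix, h_enc, variant=\"avg\"):\n    # Estimate p_ILM(. | y_prefix) by replacing the encoder input\n    # of the Readout. h_enc: [T, dim] encoder output of one sequence.\n    z_slow = decoder.slow_rnn(y_prefix)\n    if variant == \"avg\":       # mean over time, per sequence\n        h_sub = h_enc.mean(axis=0)\n    else:                      # variant 0, as in the HAT approximation\n        h_sub = np.zeros_like(h_enc[0])\n    z_fast = decoder.readout(h_sub, z_slow)\n    return decoder.label_softmax(z_fast)  # distribution over Sigma\n\\end{verbatim}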
\n\nWe evaluate the estimated internal LM on text-only data.\nIn \\cref{tab:internal_lm_estimation}, the BPE-level perplexities (PPL)\nare shown and compared against the LSTM LM, which was\ntrained only on text data\nwithout any overlap to the audio transcriptions \\cite{panayotov2015librispeech}.\n\n\\begin{table}[t]\n\\centering\n\\caption[Internal LM estimation on Librispeech]{Perplexity and WER measurements on Librispeech dev-other of a transducer model.\nNote that the BPE-level (1k units) perplexity is evaluated without the EOS token, since the transducer has no explicit end-of-sequence symbol.\nCompared are both setting the encoder ($h$) to $0$ and to the mean over the time dimension (\\ensuremath{\\operatorname{avg}}).\nThe LSTM and Trafo LM are trained on text-only data without overlap to\nthe audio transcriptions \\cite{panayotov2015librispeech}.\n}\n\\label{tab:internal_lm_estimation}\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Model} & \\multirow{2}{*}{Epochs} & \\multicolumn{2}{c|}{Perplexity} & WER\\\\\n & & $h=0$ & \\ensuremath{\\operatorname{avg}} & [\\%]\\\\\n\\hline\\hline\n\\multirow{5}{*}{Transducer BPE-1K} & 8 & 82.76 & 67.47 & 36.41\\\\\n\\cline{2-5}\n & 16 & 49.32 & 38.89 & 17.16\\\\\n\\cline{2-5}\n & 32 & 45.13 & 32.86 & 11.85\\\\\n\\cline{2-5}\n & 64 & 46.53 & 31.94 & \\phantom{0}9.69\\\\\n\\cline{2-5}\n & 133 & 47.05 & 31.37 & \\phantom{0}8.92\\\\\n\\hline\nLSTM LM & 20 & \\multicolumn{2}{c|}{15.40} & $-$\\\\\n\\hline\nTrafo LM & 39 & \\multicolumn{2}{c|}{14.44} & $-$\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{EOS Modelling}\n\\label{sec:lm_eos_modelling}\n\nIn contrast to language models or attention models,\ntransducers and models with explicit time modeling\ndo not have to model the end-of-sentence\/sequence explicitly\nwith an additional token (denoted as \\ensuremath{\\langle\\operatorname{eos}\\rangle}).\nInstead, the search ends when all input frames have been consumed.\nHowever, for LM integration, when only actual output symbols are considered,\nthe information about when the sequence should end is lost.\nThis information is usually ignored in the literature,\neven though it provides a valuable signal to the search process.\n\nOur approach is to combine the LM EOS~probability\nwith $p_u(\\Delta t {=} 1)$ ($\\ensuremath{\\epsilon}$) in the last time frame ($t_u = T$),\nbecause that determines the EOS in the transducer.\n\\begin{align*}\n&\\log p^{\\text{SF-ILM+EOS}}_u(\\alpha_u \\mid \\makebox[0.8em][c]{...}) \\\\\n& \\phantom{x} :=\n\\begin{cases}\n\\log \\blankproblibrilhs{},\n& \\ensuremath{\\alpha_u} = \\ensuremath{\\epsilon}, t_{\\ensuremath{u}-1} < T \\\\\n\\ensuremath{{\\emitscale}_{\\eos}} \\log \\blankproblibrilhs{} \\\\\n\\phantom{x} + \\ensuremath{{\\lmscale}_{\\eos}} \\log p_{\\operatorname{LM}}(\\ensuremath{\\langle\\operatorname{eos}\\rangle} \\mid \\makebox[0.8em][c]{...}),\n& \\ensuremath{\\alpha_u} = \\ensuremath{\\epsilon}, t_{\\ensuremath{u}-1} = T\\\\\n\\log \\emitproblibrilhs{} \\\\\n\\phantom{x} + \\ensuremath{\\lambda} \\cdot \\log q_u (\\alpha_u \\mid \\makebox[0.8em][c]{...}) \\\\\n\\phantom{x} + \\ensuremath{\\beta} \\cdot \\log p_{\\operatorname{\\small LM}}( \\alpha_u \\mid \\makebox[0.8em][c]{...} ) \\\\\n\\phantom{x} - \\ensuremath{\\gamma} \\cdot \\log p_{\\operatorname{\\small ILM}}( \\alpha_u \\mid \\makebox[0.8em][c]{...} ) ,\n& \\ensuremath{\\alpha_u} \\in \\Sigma\n\\end{cases}\n\\end{align*}\nUsually,\n$\\ensuremath{{\\emitscale}_{\\eos}} = \\ensuremath{{\\lmscale}_{\\eos}} = 0.5$ yielded good performance,\nalthough these scales were not tuned extensively.\n\n\\section{Experiments}\n\nWe perform experiments on LibriSpeech \\cite{panayotov2015librispeech}.\nOur model training and decoding is implemented in RETURNN \\cite{zeyer2018:returnn},\nbased on TensorFlow \\cite{tensorflow2015}.\nThe distributed multi-GPU training is implemented with Horovod \\cite{sergeev2018horovod}.\nWe make use of Mingkun Huang's warp-transducer loss implementation%\n\\footnote{\\scriptsize\\url{https:\/\/github.com\/HawkAaron\/warp-transducer}}.\nOur decoder uses the builtin RETURNN features for stochastic variables\nand searches over $\\alpha_u$.\nThis uses GPU-based batched one-pass decoding with the external LM and internal LM subtraction.\nWe publish all the configuration files needed to reproduce the experiments%\n\\footnote{\\scriptsize\\url{https:\/\/github.com\/rwth-i6\/returnn-experiments\/tree\/master\/2021-transducer}}.\n\nWe have a variety of exponent scales for our log-linear model combination,\nas well as additional parameters for EOS-modeling:\nthe label scale $\\ensuremath{\\lambda}$, which is set to either $\\ensuremath{\\lambda}=1$ or $\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$,\nthe emission model scale $\\ensuremath{\\delta}$, and the scales \\ensuremath{\\beta} and \\ensuremath{\\gamma} for the external and internal LM,\nrespectively.\nAdditionally, for EOS-modeling, $\\ensuremath{{\\emitscale}_{\\eos}}$ and $\\ensuremath{{\\lmscale}_{\\eos}}$ are used, although they were fixed to $\\ensuremath{{\\emitscale}_{\\eos}}=\\ensuremath{{\\lmscale}_{\\eos}}=0.5$.\nThe scaling factors $\\ensuremath{\\beta}$ and $\\ensuremath{\\gamma}$ have to be tuned jointly\non a held-out dataset, as can be seen in \\cref{fig:librispeech_lm_scales},\nwith $\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$.\nThey were tuned separately for each of the subsets dev-clean and dev-other.\nResults for LM integration are presented in \\cref{tab:libri_lm_integration}\nand in \\cref{tab:libri_lm_integration_eos} with additional EOS-modeling.\nWith shallow fusion of just the LM, we already see a significant WER improvement\nof over 22\\% relative.\nWhen additionally subtracting the internal LM,\na further significant improvement of over 14\\% relative over shallow fusion is observed.\nThe \\ensuremath{\\operatorname{avg}} ILM estimation seems to be better than \\ensuremath{0},\nexcept on test-other.\nThe EOS-modeling gives a further 7\\% relative improvement.\nWe also test a stronger Transformer LM in \\cref{tab:libri_lm_integration_eos}\n(perplexities in \\cref{tab:internal_lm_estimation})\nand see further improvement.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{{figures\/plots\/rnnt-fs.bpe1k.readout.zoneout.lm-embed256.lr1e_3.no-curric.bs12k.mgpu.retrain1.lm-lstm.with-eos.tars.160.dev-other.beam24.fusion-div-by-prior}.pdf}\n\\vspace{-8mm}\n\\end{center}\n\\caption[Tuning of LM scales on Librispeech dev-other]{Tuning of LM scales of a transducer\nwith EOS-modeling.\nThe baseline without an external LM has $8.66$\\% WER on dev-other with a beam size of 24.\n$\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$.\n}\n\\label{fig:librispeech_lm_scales}\n\\end{figure}
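\n\nThe joint tuning of $\\ensuremath{\\beta}$ and $\\ensuremath{\\gamma}$ can be done with a plain grid search;\na minimal sketch is shown below, where the function \\texttt{decode\\_wer}\n(running recognition on the held-out subset for given scales) and the grid ranges\nare assumptions for illustration.\n\\begin{verbatim}\nimport math\n\ndef tune_scales(decode_wer, betas, gammas):\n    # Joint grid search over the LM scale beta and the ILM scale gamma\n    # on a held-out set; the label scale is lambda = 1 - beta.\n    best = (None, None, math.inf)\n    for beta in betas:\n        for gamma in gammas:\n            wer = decode_wer(beta, gamma)  # runs recognition, returns WER\n            if wer < best[2]:\n                best = (beta, gamma, wer)\n    return best\n\n# e.g. tune_scales(decode_wer, [i / 20 for i in range(2, 16)],\n#                  [i / 20 for i in range(0, 12)])\n\\end{verbatim}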
\n\n\\begin{table}[t]\n\\centering\n\\caption[External LM integration results using different methods]{\nWe investigate the effect of LM integration for the model\nwith either shallow fusion (SF) or additional internal LM (ILM) subtraction.\nAll experiments were conducted with a fixed \\textbf{beam-size 24} and\n\\textbf{without EOS-modeling}; the LSTM-LM has a BPE-1K level perplexity of $15.4$ on dev-other.}\n\\setlength{\\tabcolsep}{0.25em}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{3}{*}{LM} & \\multirow{3}{*}{\\shortstack{LM\\\\Integration\\\\Method}} & \\multirow{3}{*}{\\shortstack{Label scale\\\\\\ensuremath{\\lambda}}}& \\multicolumn{4}{c|}{WER [\\%]}\\\\\n& & & \\multicolumn{2}{c|}{{dev}} & \\multicolumn{2}{c|}{{test}}\\\\\n& & & {clean} & {other} & {clean} & {other}\\\\\n\\hline\\hline\n\\textemdash & \\textemdash & \\multirow{2}{*}{$\\ensuremath{\\lambda}=1\\phantom{{} - \\ensuremath{\\beta}}$} & 3.22 & 8.76 & 3.30 & 8.70 \\\\\n\\cline{1-2}\\cline{4-7}\n\\multirow{3}{*}{LSTM} & \\multirow{2}{*}{SF} & &\n2.53 & 6.79 & 2.66 & 6.99 \\\\\n\\cline{3-7}\n& & \\multirow{2}{*}{$\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$} &\n2.47 & 6.50 & 2.57 & 6.70 \\\\\n\\cline{2-2}\\cline{4-7}\n& SF-ILM(\\ensuremath{\\operatorname{avg}}) & &\n\\textbf{2.29} & \\textbf{5.63} & \\textbf{2.36} & \\textbf{6.39}\\\\\n\\hline\n\\end{tabular}\n\\label{tab:libri_lm_integration}\n\\end{table}\n\n\\begin{table}\n\\centering\n\\caption[External LM integration results with EOS modeling]{\nWe investigate the effect of LM integration for the model\nwith either shallow fusion (SF) or additional internal LM (ILM) subtraction.\nAll experiments were conducted with a fixed \\textbf{beam-size 24} and \\textbf{EOS-modeling} (last blank frame); the LSTM-LM has a BPE-1K level perplexity of $15.4$ on dev-other.\n\\cref{fig:librispeech_lm_scales} shows the heat map for the joint tuning over\n$\\ensuremath{{\\lmscale}_{\\textsc{other}}}$ and $\\ensuremath{{\\ilmscale}_{\\textsc{other}}}$.\n$\\ensuremath{{\\emitscale}_{\\eos}} = \\ensuremath{{\\lmscale}_{\\eos}} = 0.5$.\n}\n\\setlength{\\tabcolsep}{0.25em}\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{3}{*}{LM} & \\multirow{3}{*}{\\shortstack{LM\\\\Integration\\\\Method}} &\n\\multirow{3}{*}{\\shortstack{Label scale\\\\\\ensuremath{\\lambda}}} & \\multicolumn{4}{c|}{WER [\\%]}\\\\\n& & & \\multicolumn{2}{c|}{{dev}} & \\multicolumn{2}{c|}{{test}}\\\\\n& & & {clean} & {other} & {clean} & {other}\\\\\n\\hline\\hline\n\\textemdash & \\textemdash & $\\ensuremath{\\lambda}=1\\phantom{{}-\\ensuremath{\\beta}}$ & 3.20 & 8.66 & 3.28 & 8.60\\\\\n\\hline\\hline\n\\multirow{3}{*}{LSTM} & \\multirow{2}{*}{SF} & $\\ensuremath{\\lambda}=1\\phantom{{}-\\ensuremath{\\beta}}$&\n2.52 & 6.69 & 2.65 & 6.85\\\\\n\\cline{3-7}\n& & \\multirow{2}{*}{$\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$} &\n2.45 & 6.35 & 2.55 & 6.68\\\\\n\\cline{2-2}\\cline{4-7}\n& SF-ILM(\\ensuremath{\\operatorname{avg}}) & &\n\\textbf{2.26} & \\textbf{5.49} & \\textbf{2.42} & \\textbf{5.91}\\\\\n\\hline\\hline\n\\multirow{3}{*}{Trafo} & SF & \\multirow{3}{*}{$\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$} &\n2.41 & 6.29 & 2.52 & 6.56\\\\\n\\cline{2-2}\\cline{4-7}\n& SF-ILM(\\ensuremath{\\operatorname{avg}}) & &\n\\textbf{2.17} & \\textbf{5.28} & \\textbf{2.23} & {5.74} \\\\\n\\cline{2-2}\\cline{4-7}\n& SF-ILM(\\ensuremath{0}) & &\n{2.22} & {5.32} & {2.25} & \\textbf{5.60} \\\\\n\\hline\n\\end{tabular}\n\\label{tab:libri_lm_integration_eos}\n\\end{table}
\n\n\\subsection{Error Analysis}\n\nTo better understand the behavior of the whole system, we analyze the types of errors the models make when their predictions are wrong.\nWe look at the percentages of substitution, deletion, and insertion errors within the word error rate (WER),\ncomputed below for several models and their respective LM integration.\nAlso of interest is how long the hypothesized sentences are, relative to the reference transcription.\nThe transducer seems to model the hypothesis length better than the hybrid (without rescoring) and attention-based models, although adding an external LM seems to help the attention model.\nOverall we can see that introducing the external LM helps with substitution and insertion errors,\nwhile the deletions actually increase.\nIn comparison to the attention-based model,\nthe transducer model has significantly fewer insertion errors,\nbut more deletion errors, relative to the overall WER.
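\n\nThe per-type counts can be obtained from one optimal Levenshtein alignment per sentence,\nas in the following minimal sketch\n(when several alignments are optimal, the tie-breaking can slightly shift the per-type counts):\n\\begin{verbatim}\ndef edit_op_counts(ref, hyp):\n    # Levenshtein distance table between ref[:i] and hyp[:j] word lists.\n    n, m = len(ref), len(hyp)\n    d = [[0] * (m + 1) for _ in range(n + 1)]\n    for i in range(n + 1):\n        d[i][0] = i\n    for j in range(m + 1):\n        d[0][j] = j\n    for i in range(1, n + 1):\n        for j in range(1, m + 1):\n            d[i][j] = min(d[i-1][j-1] + (ref[i-1] != hyp[j-1]),\n                          d[i-1][j] + 1,   # deletion\n                          d[i][j-1] + 1)   # insertion\n    sub = dele = ins = 0\n    i, j = n, m\n    while i > 0 or j > 0:  # backtrace one optimal alignment\n        if (i > 0 and j > 0\n                and d[i][j] == d[i-1][j-1] + (ref[i-1] != hyp[j-1])):\n            sub += ref[i-1] != hyp[j-1]\n            i, j = i - 1, j - 1\n        elif i > 0 and d[i][j] == d[i-1][j] + 1:\n            dele, i = dele + 1, i - 1\n        else:\n            ins, j = ins + 1, j - 1\n    return sub, dele, ins  # WER = (sub + dele + ins) / len(ref)\n\\end{verbatim}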
\n\n\\begin{table}[t]\n\\caption{We investigate the different types of word errors of various models:\nHybrid \\cite{luescher2019:librispeech}, Attention \\cite{zeyer2019:trafo-vs-lstm-asr},\nand Transducer (ours),\nwith either shallow fusion (SF) or additional internal LM (ILM) subtraction.\nFor the log-linear combination in the transducer case, $\\ensuremath{\\lambda}=1-\\ensuremath{\\beta}$.}\n\\centering\n\\setlength{\\tabcolsep}{0.2em}\n\\begin{tabular}{|c|c|c|S[table-format=2.2]|S[table-format=2.2]|S[table-format=2.2]|c|}\n\\hline\n\\multirow{2}{*}{Model} & \\multirow{2}{*}{LM} & \\multirow{2}{*}{\\shortstack{LM\\\\integration}} & \\multicolumn{3}{c|}{Edit operations [\\%]} & WER\\\\\n & & & {Sub.} & {Del.} & {Ins.} & [\\%] \\\\\n\\hline\\hline\n Attention & None & \\textemdash & 80.40 & 6.96 & 12.65 & 9.93 \\\\\n\\cline{2-7}\n & LSTM & SF & 79.42 & 5.68 & 14.90 & 7.50 \\\\\n\\hline\\hline\nHybrid & 4-gr. & SF & 75.75 & 12.98 & 11.27 & 9.37 \\\\\n\\hline\\hline\n\\multirow{5}{*}{Transd.} & None & \\textemdash & 82.52 & 7.55 & 9.93 & 8.76 \\\\\n\\cline{2-7}\n& \\multirow{3}{*}{LSTM} & SF & 79.10 & 10.93 & 9.97 & 6.50 \\\\\n\\cline{3-7}\n& & SF-ILM(\\ensuremath{\\operatorname{avg}}) & 80.90 & 9.06 & 10.04 & 5.63 \\\\\n\\cline{3-7}\n & & \\multirow{2}{*}{\\shortstack{SF-ILM(\\ensuremath{\\operatorname{avg}})\\\\+EOS}} & 79.81 & 10.15 & 10.04 & 5.49 \\\\\n\\cline{2-2}\\cline{4-7}\n & Trafo& & 80.45 & 9.55 & 10.00 & 5.28 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\section{Conclusions \\& Future Work}\n\nThe subtraction of the ILM improved the results considerably (over 14\\% relative)\nover the already strong shallow fusion.\nThe EOS modeling also helped (7\\% relative).\nWe noticed that all recognition experiments are very sensitive to the LM\/ILM scales.\nThe long training time also had a huge effect on the final performance.\n\nAs future work,\nwe plan to study the effect of the label unit and to test simple characters and other subword variations,\nsimilar to \\cite{zeineldeen20:phon-att}.\nThe encoder model might benefit from more recent advancements \\cite{gulati2020conformer}.\nThe decoder can be extended as well \\cite{zeyer2020:transducer}.\nWe can also potentially improve the ILM estimation.\nFinally, we expect to get improvements from min.~WER training.\n\n\\section{Acknowledgements}\n\nWe thank Yingbo Gao for providing us with the Transformer LM\non our BPE-1k label set.\nThis project has received funding from the European Research Council (ERC)\nunder the European Union's Horizon 2020 research and innovation programme\n(grant agreement n\\textsuperscript{o}~694537, project \"SEQCLAS\"). The work\nreflects only the authors' views and the European Research Council\nExecutive Agency (ERCEA) is not responsible for any use that may be made\nof the information it contains.\nThis work was partly funded by the Google Focused Award \"Pushing the\nFrontiers of ASR: Training Criteria and Semi-Supervised Learning\".\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Appendix}\\label{s:appendix}\nFor compactness we use the following shorthand notations for the transition and observation functions throughout the next two subsections: $\\ensuremath{T}\\xspace = \\ensuremath{T(s, a, s')}\\xspace$, $\\ensuremath{\\widehat{T}}\\xspace = \\ensuremath{\\widehat{T}(s, a, s')}\\xspace$ and $\\ensuremath{Z}\\xspace = \\ensuremath{Z(s, a, o)}\\xspace$, $\\ensuremath{\\widehat{Z}}\\xspace = \\ensuremath{\\widehat{Z}(s, a, o)}\\xspace$. Additionally, in Section~\\ref{ssec:proofLem4} we use the notations $\\ensuremath{T}\\xspace_{k} = T(\\ensuremath{s}\\xspace_k, \\ensuremath{a}\\xspace, \\ensuremath{s'}\\xspace)$ and $\\ensuremath{\\widehat{T}}\\xspace_{k} = \\widehat{T}(\\ensuremath{s}\\xspace_k, \\ensuremath{a}\\xspace, \\ensuremath{s'}\\xspace)$.\n\\subsection{Proof of Lemma \\ref{lem:lem0}}\\label{ssec:lemma_0_proof}\nConsider any $\\sigma \\in \\Gamma$ with its action $\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace$ and observation strategy $\\nu$. 
Then for any $\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace$ \n\\begin{align}\\label{eq:first_lemma_0}\n &&&\\left | \\alpha_{\\sigma}(\\ensuremath{s}\\xspace) - \\widehat{\\alpha}_{\\sigma}(\\ensuremath{s}\\xspace) \\right | \\nonumber \\\\\n &&=&\\left | \\rewFuncComp{\\ensuremath{s}\\xspace}{\\ensuremath{a}\\xspace} + \\gamma \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace \\alpha_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace)\\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right. \\nonumber\\\\ \n &&&\\left. -\\rewFuncComp{\\ensuremath{s}\\xspace}{\\ensuremath{a}\\xspace} - \\gamma \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\ensuremath{\\widehat{T}}\\xspace \\ensuremath{\\widehat{Z}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right | \\nonumber \\\\\n &&=&\\gamma \\left | \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace \\alpha_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) - \\ensuremath{\\widehat{T}}\\xspace\\ensuremath{\\widehat{Z}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right | \\nonumber \\\\\n &&\\leq&\\gamma \\left ( \\left |\\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace \\left [\\alpha_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) -\\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\right ] \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right | \\right. \\nonumber \\\\\n & && + \\left. 
\\left |\\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\left [\\ensuremath{T}\\xspace\\ensuremath{Z}\\xspace -\\ensuremath{\\widehat{T}}\\xspace \\ensuremath{\\widehat{Z}}\\xspace \\right ] \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\\right | \\right )\n\\end{align}\n\nLet's have a look at the second term on the right-hand side of \\eref{eq:first_lemma_0}, that is\n\\begin{align}\\label{eq:first_lemma_1}\n&&term2(\\ensuremath{s}\\xspace, \\ensuremath{a}\\xspace) =& \\left | \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\left [\\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{T}}\\xspace \\ensuremath{\\widehat{Z}}\\xspace \\right ] \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right |\n\\end{align}\n\nWe can expand this term as follows:\n\\begin{align}\\label{eq:first_lemma_2}\n&&&term2(\\ensuremath{s}\\xspace, \\ensuremath{a}\\xspace) \\nonumber \\\\\n&&=& \\left | \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\left [ \\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{T}}\\xspace\\ensuremath{Z}\\xspace + \\ensuremath{\\widehat{T}}\\xspace\\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{T}}\\xspace \\ensuremath{\\widehat{Z}}\\xspace\\right ] \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right |\\nonumber \\\\\n&&\\leq& \\left | \\ensuremath{\\int_{s' \\in S}}\\xspace \\left [ \\ensuremath{T}\\xspace - \\ensuremath{\\widehat{T}}\\xspace\\right ] \\ensuremath{\\int_{o \\in O}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace)\\ensuremath{Z}\\xspace \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\\right | \\nonumber \\\\\n& && + \\left |\\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\widehat{T}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace)\\left [\\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{Z}}\\xspace \\right ] \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\\right | \\nonumber \\\\\n&&\\leq& \\ensuremath{\\int_{s' \\in S}}\\xspace \\left |\\ensuremath{T}\\xspace- \\ensuremath{\\widehat{T}}\\xspace \\right | \\ensuremath{\\int_{o \\in O}}\\xspace \\left | \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\right | \\ensuremath{Z}\\xspace \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\nonumber \\\\\n& && + \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\widehat{T}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace \\left | \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\right | \\left |\\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{Z}}\\xspace \\right | \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\n\\end{align}\n\nThe term $\\left | \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\right |$ can be upper-bounded via $\\left | 
\\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace)\\right | \\leq \\frac{R_{m}}{1-\\gamma}$ for any $\\ensuremath{s}\\xspace \\in \\ensuremath{S}\\xspace$, which yields\n\\begin{align}\\label{eq:first_lemma_3}\n&&&term2(\\ensuremath{s}\\xspace, \\ensuremath{a}\\xspace) \\nonumber \\\\\n&&\\leq& \\frac{R_{m}}{1-\\gamma} \\left [\\ensuremath{\\int_{s' \\in S}}\\xspace \\left |\\ensuremath{T}\\xspace - \\ensuremath{\\widehat{T}}\\xspace \\right | \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace + \\ensuremath{\\int_{s' \\in S}}\\xspace\\ensuremath{\\widehat{T}}\\xspace\\ensuremath{\\int_{o \\in O}}\\xspace\\left | \\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{Z}}\\xspace \\right | \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right ]\n\\end{align}\n\nFrom the definition of the total variation distance, it follows that $\\ensuremath{\\int_{s' \\in S}}\\xspace \\left | \\ensuremath{T}\\xspace - \\ensuremath{\\widehat{T}}\\xspace\\right |\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace = 2D_{TV}^{\\ensuremath{s}\\xspace, \\ensuremath{a}\\xspace}(\\ensuremath{T}\\xspace, \\ensuremath{\\widehat{T}}\\xspace)$ for any given $\\ensuremath{s}\\xspace \\in \\ensuremath{S}\\xspace$ and $\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace$ and $\\ensuremath{\\int_{o \\in O}}\\xspace \\left |\\ensuremath{Z}\\xspace - \\ensuremath{\\widehat{Z}}\\xspace \\right | \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace= 2D_{TV}^{\\ensuremath{s'}\\xspace, \\ensuremath{a}\\xspace}(\\ensuremath{Z}\\xspace, \\ensuremath{\\widehat{Z}}\\xspace)$ for any given $\\ensuremath{s'}\\xspace \\in \\ensuremath{S}\\xspace$. Substituting these equalities into \\eref{eq:first_lemma_3} and taking the supremum over the conditionals $\\ensuremath{s}\\xspace, \\ensuremath{s'}\\xspace$ and $\\ensuremath{a}\\xspace$ allows us to upper-bound \\eref{eq:first_lemma_3} by\n\\begin{equation}\nterm2(\\ensuremath{s}\\xspace, \\ensuremath{a}\\xspace) \\leq 2\\frac{R_{m}}{1-\\gamma} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace}\n\\end{equation}\n\nSubstituting this upper bound into \\ref{eq:first_lemma_0} yields\n\\begin{align}\\label{eq:lemm2_eq_5}\n&&&\\left | \\alpha_{\\sigma}(\\ensuremath{s}\\xspace) - \\widehat{\\alpha}_{\\sigma}(\\ensuremath{s}\\xspace) \\right | \\nonumber \\\\\n&&\\leq& \\gamma\\biggl | 2\\frac{R_{m}}{1-\\gamma} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} \\biggr. \\nonumber \\\\\n&&&\\left. + \\ensuremath{\\int_{s' \\in S}}\\xspace\\ensuremath{\\int_{o \\in O}}\\xspace \\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace \\left [\\alpha_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) - \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\right] \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right | \\nonumber \\\\\n&&\\leq& \\gamma \\biggl ( 2 \\frac{R_{m}}{1-\\gamma} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} \\biggr.\\nonumber \\\\\n &&&\\left. 
+ \\ensuremath{\\int_{s' \\in S}}\\xspace\\ensuremath{\\int_{o \\in O}}\\xspace \\ensuremath{T}\\xspace \\ensuremath{Z}\\xspace \\left |\\alpha_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) - \\widehat{\\alpha}_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\right| \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right )\n\\end{align}\n\nThe last term on the right-hand side of \\eref{eq:lemm2_eq_5} is essentially a recursion. Unfolding this recursion yields\n\n\\begin{equation}\n\\left | \\alphaPiS{\\ensuremath{s}\\xspace} - \\alphaPiHatS{\\ensuremath{s}\\xspace} \\right | \\leq 2\\gamma\\frac{R_{m}}{(1-\\gamma)^2} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace}\n\\end{equation}\n\nwhich is exactly \\lref{lem:lem0}. $\\square$\n\n\\subsection{Proof of \\lref{lem:lemApprox}}\\label{ssec:proofLem4}\nWe can write the absolute difference between the SNM\\xspace-values conditioned on two states $\\ensuremath{s}\\xspace_1, \\ensuremath{s}\\xspace_2 \\in \\ensuremath{S}\\xspace_i$ as\n\\begin{align}\n&&&\\left | \\ensuremath{\\Psi_T}\\xspace(\\ensuremath{s}\\xspace_1) - \\ensuremath{\\Psi_T}\\xspace(\\ensuremath{s}\\xspace_2) \\right | \\nonumber \\\\\n&&=& \\left | \\sup_{\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace} D_{TV}(\\ensuremath{T}\\xspace_1, \\ensuremath{\\widehat{T}}\\xspace_1) - \\sup_{\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace} D_{TV}(\\ensuremath{T}\\xspace_2, \\ensuremath{\\widehat{T}}\\xspace_2) \\right | \\nonumber \\\\\n&&=&\\left |\\frac{1}{2} \\sup_{\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace} \\sup_{\\left |f \\right | \\leq 1} \\left | \\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace) \\left [\\ensuremath{T}\\xspace_1 - \\ensuremath{\\widehat{T}}\\xspace_1 \\right ] \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right| \\right. \\nonumber \\\\\n&&&\\left. - \\frac{1}{2} \\sup_{\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace} \\sup_{\\left |f \\right | \\leq 1}\\left |\\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace) \\left [\\ensuremath{T}\\xspace_2 - \\ensuremath{\\widehat{T}}\\xspace_2 \\right ]\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\\right | \\right | \n\\end{align}\n\nRearranging terms allows us to write\n\\begin{align}\\label{e:lem3_eq2}\n&&&\\left | \\ensuremath{\\Psi_T}\\xspace(\\ensuremath{s}\\xspace_1) - \\ensuremath{\\Psi_T}\\xspace(\\ensuremath{s}\\xspace_2) \\right | \\nonumber \\\\\n&&\\leq& \\frac{1}{2}\\sup_{\\ensuremath{a}\\xspace\\in\\ensuremath{A}\\xspace}\\left | \\sup_{\\left |f \\right | \\leq 1} \\left ( \\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace) \\left [\\ensuremath{T}\\xspace_1 - \\ensuremath{T}\\xspace_2 \\right ] \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right. \\right.\\nonumber \\\\\n&&&\\left.\\left. + \\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace)\\left [\\ensuremath{\\widehat{T}}\\xspace_1 - \\ensuremath{\\widehat{T}}\\xspace_2 \\right] \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right ) \\right |\\nonumber \\\\\n&&\\leq& \\frac{1}{2}\\sup_{\\ensuremath{a}\\xspace \\in \\ensuremath{A}\\xspace} \\left (\\sup_{\\left |f \\right | \\leq 1}\\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace) \\left |\\ensuremath{T}\\xspace_1 - \\ensuremath{T}\\xspace_2 \\right | \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace \\right. \\nonumber \\\\\n&&&\\left. 
+ \\sup_{\\left |f \\right | \\leq 1} \\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace) \\left |\\ensuremath{\\widehat{T}}\\xspace_1 - \\ensuremath{\\widehat{T}}\\xspace_2 \\right |\\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\\right ) \\nonumber \\\\\n&&\\leq& \\frac{1}{2}D_{S}(\\ensuremath{s}\\xspace_1, \\ensuremath{s}\\xspace_2)\\left [C_{\\ensuremath{T}\\xspace_i} + C_{\\ensuremath{\\widehat{T}}\\xspace_i} \\right ]\n\\end{align}\n\nFor the last inequality we bound the terms $\\left | \\ensuremath{T}\\xspace_1 - \\ensuremath{T}\\xspace_2 \\right |$ and $\\left |\\ensuremath{\\widehat{T}}\\xspace_1 - \\ensuremath{\\widehat{T}}\\xspace_2 \\right |$ using \\dref{d:partition}. Furthermore we use the fact that $\\sup_{\\left |f \\right | \\leq 1} \\ensuremath{\\int_{s' \\in S}}\\xspace f(\\ensuremath{s'}\\xspace) \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace = 1$, assuming that the state space $\\ensuremath{S}\\xspace$ is normalized. This concludes the proof of \\lref{lem:lemApprox}. $\\square$\n\n\\section{Introduction}\n\\label{section:introduction}\nAn autonomous robot must be able to compute reliable motion strategies, despite various errors in actuation and in the prediction of their effects on the robot and its environment, and despite various errors in sensors and sensing. Computing such robust strategies is computationally hard even for a 3 DOFs point robot\\ccite{Can87:New},\\ccite{Nat88:Complexity}. Conceptually, this problem can be solved in a systematic and principled manner when framed as the Partially Observable Markov Decision Process (POMDP)\\ccite{Kae98:Planning}. A POMDP represents the aforementioned errors as probability distribution functions and estimates the state of the system as probability distribution functions called \\emph{beliefs}. It then computes the best motion strategy with respect to beliefs rather than single states, thereby accounting for the fact that the actual state is never known due to the above errors. Although the concept of POMDPs was proposed in the '60s\\ccite{Son71:The}, it is only recently that POMDPs have started to become practical for robotics problems (e.g.\\ccite{Hoe19:POMDP,Hor13:Interactive,Tem09:Unmanned}). This advancement is achieved by trading optimality for approximate optimality, gaining speed and memory. But even then, in general, computing close to optimal POMDP solutions for systems with complex dynamics remains difficult.\n\nSeveral general POMDP solvers ---solvers that do not restrict the type of dynamics and sensing model of the system, nor the type of distributions used to represent uncertainty--- can now compute good motion strategies on-line with a 1-10Hz update rate for a number of robotic problems\\ccite{Kur13:An,Sil10:Monte,Som13:Despot,Sei15:An}. \nHowever, their speed degrades when the robot has complex non-linear dynamics. \nTo compute a good strategy, today's POMDP solvers forward-simulate the effect of many sequences of actions from different beliefs. For problems whose dynamics have no closed-form solutions, a simulation run generally invokes many numerical integrations, and complex dynamics tend to increase the cost of each numerical integration, which in turn significantly increases the total planning cost of these methods. Of course, this cost will increase even more for problems that require more or longer simulation runs, such as in problems with long planning horizons.\n\nMany linearization-based POMDP solvers have been proposed\\ccite{Sun15:High,Agh13:Firm,Ber10:LQGMP,Ber12:LQG,Pre10:The}. 
They rely on many forward simulations from different beliefs too, but use a linearized model of the dynamics and sensing for simulation. Together with linearization, many of these methods assume that beliefs are Gaussian distributions. This assumption improves the speed of simulation further, because the subsequent belief after an action is performed and an observation is perceived can be computed in closed-form. In contrast, the aforementioned general solvers typically represent beliefs as sets of particles and estimate subsequent beliefs using particle filters. Particle filters are particularly expensive when particle trajectories have to be simulated and each simulation run is costly, as is the case for motion planning of systems with complex dynamics. As a result, the linearization-based planners require less time to estimate the effect of performing a sequence of actions from a belief, and therefore can \\emph{potentially} find a good strategy faster than the general methods. However, it is known that linearization in control and estimation performs well only when the system's non-linearity is ``weak\"\\ccite{Li12:Measure}. The question is, what constitutes ``weak\" non-linearity in motion planning under uncertainty? Where will it be useful, and where will it be damaging, to use linearization (and Gaussian) simplifications? \n\nThis paper extends our previous work\\ccite{Hoe16:Linearization} towards answering the aforementioned questions. Specifically, we propose a measure of non-linearity for stochastic systems, called \\emph{Statistical-distance-based Non-linearity Measure\\xspace (SNM\\xspace)}, to help identify the suitability of linearization in a given problem of motion planning under uncertainty. SNM\\xspace is based on the total variation distance between the original dynamics and sensing models and their corresponding linearized models. It is general enough to be applied to any type of motion and sensing errors, and any linearization technique, regardless of the type of approximation of the true beliefs (e.g., with and without Gaussian simplification). We show that the difference between the value of the optimal strategy generated if we plan using the original model and if we plan using the linearized model can be upper-bounded by a function linear in SNM\\xspace. Furthermore, our experimental results indicate that, compared to recent state-of-the-art non-linearity measures for stochastic systems, SNM\\xspace is more sensitive to the effect that obstacles have on the effectiveness of linearization, which is critical for motion planning.\n\nTo further test the applicability of SNM\\xspace in motion planning, we develop a simple on-line planner that uses a local estimate of SNM\\xspace to automatically switch between a general planner\\ccite{Kur13:An} that uses the original POMDP model and a linearization-based planner (adapted from\\ccite{Sun15:High}) that uses the linearized model. Experimental results on a car-like robot with acceleration control, and on 4-DOFs and 6-DOFs manipulators with torque control, indicate that this simple planner can appropriately decide if and when linearization should be used, and therefore computes better strategies faster than each of the component planners.\n\n\\section{Acknowledgements}\nThis work is partially funded by ANU Futures Scheme QCE20102. The early part of this work is funded by a UQ and CSIRO scholarship for Marcus Hoerger. 
\n\n\\bibliographystyle{ieeetr}\n\n\\section{SNM}\\label{sec:SNM}\nIntuitively, our proposed measure SNM\\xspace is based on the total variation distance between the effect of performing an action and perceiving an observation under the true dynamics and sensing model, and the effect under the linearized dynamic and sensing model. The total variation distance $D_{TV}$ between two probability measures $\\mu$ and $\\nu$ over a measurable space $\\Omega$ is defined as $D_{TV}(\\mu, \\nu) = \\sup_{E \\in \\Omega} \\left |\\mu(E) - \\nu(E) \\right |$. An alternative expression of $D_{TV}$ which we use throughout the paper is the functional form $D_{TV}(\\mu, \\nu) = \\frac{1}{2}\\sup_{\\left |f \\right | \\leq 1}\\left |\\int f\\ensuremath{d}\\xspace\\mu - \\int f\\ensuremath{d}\\xspace\\nu \\right |$.\nFormally, SNM\\xspace is defined as:\n\\begin{definition}\n\\label{def:mon}\nLet $\\ensuremath{P}\\xspace = \\ensuremath{\\langle S, A, O, T, Z, R, \\ensuremath{b_0}\\xspace, \\gamma \\rangle}\\xspace$ be the POMDP model of the system and $\\ensuremath{\\widehat{P}}\\xspace = \\ensuremath{\\langle S, A, O, \\widehat{T}, \\widehat{Z}, R, \\ensuremath{b_0}\\xspace, \\gamma \\rangle}\\xspace$ be a linearization of \\ensuremath{P}\\xspace, where \\ensuremath{\\widehat{T}}\\xspace is a linearization of the transition function \\ensuremath{T}\\xspace and \\ensuremath{\\widehat{Z}}\\xspace is a linearization of the observation function \\ensuremath{Z}\\xspace of \\ensuremath{P}\\xspace, while all other components of \\ensuremath{P}\\xspace and \\ensuremath{\\widehat{P}}\\xspace are the same. Then, the SNM\\xspace (denoted as \\ensuremath{\\Psi}\\xspace) between \\ensuremath{P}\\xspace and \\ensuremath{\\widehat{P}}\\xspace is $\\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} = \\nmT{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} + \\nmZ{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace}$, where \n\\begin{align}\n\\nmT{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} &= \\sup_{s \\in S, a \\in A} D_{TV}(\\ensuremath{T(s, a, s')}\\xspace, \\ensuremath{\\widehat{T}(s, a, s')}\\xspace) \\\\\n\\nmZ{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} &= \\sup_{s \\in S, a \\in A} D_{TV}(\\ensuremath{Z(s, a, o)}\\xspace, \\ensuremath{\\widehat{Z}(s, a, o)}\\xspace)\n\\end{align}\n\\end{definition}\nNote that SNM\\xspace can be applied as both a global and a local measure. In the latter case, the supremum over the state $s$ can be restricted to a subset of \\ensuremath{S}\\xspace, rather than the entire state space. Furthermore, SNM\\xspace is general enough for any approximation to the true dynamics and sensing model, which means that it can be applied to any type of linearization and belief approximation techniques, including those that assume and those that do not assume Gaussian belief simplifications. \n\nWe want to use the measure \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} to bound the difference between the expected total reward received if the system were to run the optimal policy of the true model \\ensuremath{P}\\xspace and if it were to run the optimal policy of the linearized model \\ensuremath{\\widehat{P}}\\xspace. Note that since our interest is in the actual reward received, the values of these policies are evaluated with respect to the original model \\ensuremath{P}\\xspace (we assume \\ensuremath{P}\\xspace is a faithful model of the system). 
More precisely, we want to show that:\n\\begin{theorem}\\label{th:valUpperBound}\nIf \\ensuremath{\\pi^*}\\xspace denotes the optimal policy for \\ensuremath{P}\\xspace and \\ensuremath{\\widehat{\\pi}^*}\\xspace denotes the optimal policy for \\ensuremath{\\widehat{P}}\\xspace, then for any $\\ensuremath{b}\\xspace\\in\\mathbb{B}$, \n\\begin{align}\n&V_{\\ensuremath{\\pi^*}\\xspace}(\\ensuremath{b}\\xspace) - V_{\\ensuremath{\\widehat{\\pi}^*}\\xspace}(\\ensuremath{b}\\xspace) \\leq 4\\gamma\\frac{R_{m}}{(1-\\gamma)^2} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace} \\nonumber\n\\end{align}\nwhere $R_m = \\max\\{\\left |R_{min} \\right |, R_{max}\\}$ and \\newline\n$V_{\\pi}(b) = R(b, \\pi(b)) + \\gamma \\int_{o \\in O}Z(b, a, o)V_{\\pi}(\\tau(b, a, o)) \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace$ for any policy $\\pi$, with $\\tau(b, a, o)$ the belief transition function as defined in \\eref{e:belTrans}.\n\\end{theorem}\n\nTo prove \\thref{th:valUpperBound}, we first assume, without loss of generality, that a policy $\\pi$ for a belief \\ensuremath{b}\\xspace is represented by a conditional plan $\\sigma\\in\\Gamma$, where $\\Gamma$ is the set of all conditional plans. $\\sigma$ can be specified by a pair $\\left \\langle \\ensuremath{a}\\xspace, \\nu \\right \\rangle$, where $\\ensuremath{a}\\xspace\\in\\ensuremath{A}\\xspace$ is the action of $\\sigma$ and $\\nu: \\ensuremath{O}\\xspace \\rightarrow \\Gamma$ is an observation strategy which maps an observation to a conditional plan $\\sigma'\\in\\Gamma$.\n\nEvery $\\sigma$ corresponds to an $\\alpha$-function $\\alpha_{\\sigma}: \\ensuremath{S}\\xspace \\rightarrow \\mathbb{R}$ which specifies the expected total discounted reward the robot receives when executing $\\sigma$ starting from $\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace$, i.e.\n\\begin{align}\\label{eq:alpha_s}\n&\\alpha_{\\sigma}(s) = \\rewFuncComp{\\ensuremath{s}\\xspace}{\\ensuremath{a}\\xspace}\\nonumber\\\\ &+ \\gamma \\ensuremath{\\int_{s' \\in S}}\\xspace \\ensuremath{\\int_{o \\in O}}\\xspace T(\\ensuremath{s}\\xspace, \\ensuremath{a}\\xspace, \\ensuremath{s'}\\xspace) Z(\\ensuremath{s'}\\xspace, \\ensuremath{a}\\xspace, \\ensuremath{o}\\xspace)\\alpha_{\\nu(\\ensuremath{o}\\xspace)}(\\ensuremath{s'}\\xspace) \\ensuremath{d}\\xspace\\ensuremath{o}\\xspace \\ensuremath{d}\\xspace\\ensuremath{s'}\\xspace\n\\end{align} \n\nwhere $\\ensuremath{a}\\xspace\\in\\ensuremath{A}\\xspace$ is the action of $\\sigma$ and $\\alpha_{\\nu(\\ensuremath{o}\\xspace)}$ is the $\\alpha$-function corresponding to conditional plan $\\nu(\\ensuremath{o}\\xspace)$.\n\nFor a given belief \\ensuremath{b}\\xspace, the value of the policy $\\pi$ represented by the conditional plan $\\sigma$ is then $V_{\\pi}(\\ensuremath{b}\\xspace) = \\int_{\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace} \\ensuremath{b}\\xspace(\\ensuremath{s}\\xspace)\\alpha_{\\sigma}(\\ensuremath{s}\\xspace)\\ensuremath{d}\\xspace\\ensuremath{s}\\xspace$. Note that \\eref{eq:alpha_s} is defined with respect to POMDP \\ensuremath{P}\\xspace. 
Analogously we define the linearized $\\alpha$-function $\\widehat{\\alpha}_{\\sigma}$ with respect to the linearized POMDP \\ensuremath{\\widehat{P}}\\xspace by replacing the transition and observation functions in \\eref{eq:alpha_s} with their linearized versions.\n\nNow, suppose that for a given belief \\ensuremath{b}\\xspace, $\\sigma^* = \\argsup_{\\sigma\\in\\Gamma} \\int_{\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace}\\ensuremath{b}\\xspace(\\ensuremath{s}\\xspace)\\alpha_{\\sigma}(\\ensuremath{s}\\xspace)\\ensuremath{d}\\xspace\\ensuremath{s}\\xspace$ and $\\widehat{\\sigma}^* = \\argsup_{\\sigma\\in\\Gamma}\\int_{\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace}\\ensuremath{b}\\xspace(\\ensuremath{s}\\xspace)\\widehat{\\alpha}_{\\sigma}(\\ensuremath{s}\\xspace)\\ensuremath{d}\\xspace\\ensuremath{s}\\xspace$. $\\sigma^*$ and $\\widehat{\\sigma}^*$ represent the policies $\\pi^*$ and $\\widehat{\\pi}^*$ that are optimal at \\ensuremath{b}\\xspace for POMDP \\ensuremath{P}\\xspace and \\ensuremath{\\widehat{P}}\\xspace respectively. For any $\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace$ we have that $\\alphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\geq \\linAlphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} - \\left |\\alphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\right |$ and $\\linAlphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} \\geq \\alphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} - \\left | \\alphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace}\\right |$. Therefore \n\\begin{align}\\label{eq:geq_1}\n\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\alphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace \\geq &\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\linAlphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace\\nonumber \\\\ & - \\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\left | \\alphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\right | \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace\n\\end{align}\n\nand \n\\begin{align}\\label{eq:geq_2}\n\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\linAlphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace \\geq &\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace}\\alphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace\\nonumber \\\\&- \\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace}\\left |\\alphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} \\right |\\ensuremath{d}\\xspace\\ensuremath{s}\\xspace\n\\end{align}\n\nSince $\\widehat{\\sigma}^*$ is the optimal conditional plan for POMDP \\ensuremath{\\widehat{P}}\\xspace at \\ensuremath{b}\\xspace, we also know that\n\\begin{equation}\\label{eq:geq_3}\n\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\linAlphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace \\geq \\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\linAlphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} 
\\ensuremath{d}\\xspace\\ensuremath{s}\\xspace\n\\end{equation}\n\nFrom \\eref{eq:geq_1}, \\eref{eq:geq_2} and \\eref{eq:geq_3} it immediately follows that\n\\begin{alignat}{2}\\label{eq:geq_4}\n\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\alphaFunctPolComp{\\widehat{\\sigma}^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace \\geq & &&\\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace} \\alphaFunctPolComp{\\sigma^*}{\\ensuremath{s}\\xspace} \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace \\nonumber \\\\ & && - 2 \\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace}\\sup_{\\sigma\\in\\Gamma}\\left |\\alphaFunctPolComp{\\sigma}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\sigma}{\\ensuremath{s}\\xspace} \\right |\\ensuremath{d}\\xspace\\ensuremath{s}\\xspace \\nonumber \\\\\nV_{\\widehat{\\pi}^*}(b) \\geq& && V_{\\pi^*}(b) \\nonumber \\\\ & &&- 2 \\int_{s \\in S}\\xspace \\belS{\\ensuremath{s}\\xspace}\\sup_{\\sigma\\in\\Gamma}\\left |\\alphaFunctPolComp{\\sigma}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\sigma}{\\ensuremath{s}\\xspace} \\right | \\ensuremath{d}\\xspace\\ensuremath{s}\\xspace\n\\end{alignat}\n\nBefore we continue, we first have to show the following Lemma:\n\\begin{lemma}\\label{lem:lem0}\nLet $R_m = \\max\\{\\left |R_{min} \\right |, R_{max}\\}$, where $R_{min} = \\min_{s,a} R(s, a)$ and $R_{max} = \\max_{s,a} R(s, a)$. For any conditional plan $\\sigma\\in\\Gamma$ and any $\\ensuremath{s}\\xspace \\in \\ensuremath{S}\\xspace$, the absolute difference between the original and linearized $\\alpha$-functions is upper bounded by\n\\begin{align}\n\\left | \\alphaFunctPolComp{\\sigma}{\\ensuremath{s}\\xspace} - \\linAlphaFunctPolComp{\\sigma}{\\ensuremath{s}\\xspace} \\right | \\leq 2\\gamma\\frac{R_{m}}{(1-\\gamma)^2} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace}\\nonumber\n\\end{align}\n\\end{lemma}\n\nThe proof of \\lref{lem:lem0} is presented in the Appendix~\\ref{ssec:lemma_0_proof}.\n \nUsing the result of \\lref{lem:lem0}, we can now conclude the proof for \\thref{th:valUpperBound}. Substituting the upper bound derived in \\lref{lem:lem0} into the right-hand side of \\eref{eq:geq_4} and re-arranging the terms gives us\n\\begin{equation}\nV_{\\pi^*}(\\ensuremath{b}\\xspace) - V_{\\widehat{\\pi}^*}(\\ensuremath{b}\\xspace) \\leq 4\\gamma\\frac{R_{m}}{(1-\\gamma)^2} \\nm{\\ensuremath{P}\\xspace}{\\ensuremath{\\widehat{P}}\\xspace}\n\\end{equation}\n\nwhich is what we are looking for. $\\square$\n\n\\section{Approximating SNM\\xspace}\\label{ssec:monApprox}\nNow, the question is how can we compute SNM\\xspace sufficiently fast, so that this measure can be used as a heuristic during on-line planning to decide when a linearization-based solver will likely yield a good policy and when a general solver should be used. Unfortunately, such a computation is often infeasible when the planning time per step is limited. Therefore, we approximate SNM\\xspace off-line and re-use the results during run-time. 
Here we discuss how to approximate the transition component \\ensuremath{\\Psi_T}\\xspace of SNM\\xspace; the same method applies to the observation component \\ensuremath{\\Psi_Z}\\xspace.\n\nLet us first rewrite \\ensuremath{\\Psi_T}\\xspace as\n\\begin{align}\\label{e:snmTransCompRe}\n\\ensuremath{\\Psi_T}\\xspace &= \\sup_{\\ensuremath{s}\\xspace \\in \\ensuremath{S}\\xspace} \\ensuremath{\\Psi_T}\\xspace(s)\\nonumber \\\\&= \\sup_{\\ensuremath{s}\\xspace\\in\\ensuremath{S}\\xspace}\\sup_{\\ensuremath{a}\\xspace\\in\\ensuremath{A}\\xspace}D_{TV}(\\ensuremath{T(s, a, s')}\\xspace, \\ensuremath{\\widehat{T}(s, a, s')}\\xspace)\n\\end{align}\nwhere $\\ensuremath{\\Psi_T}\\xspace(s)$ is the transition component of SNM\\xspace, given a particular state. To approximate \\ensuremath{\\Psi_T}\\xspace, we replace \\ensuremath{S}\\xspace in \\eref{e:snmTransCompRe} by a sampled representation of \\ensuremath{S}\\xspace, which we denote as $\\tilde{\\ensuremath{S}\\xspace}$. The value $\\ensuremath{\\Psi_T}\\xspace(s)$ is then evaluated for each $\\ensuremath{s}\\xspace\\in\\tilde{\\ensuremath{S}\\xspace}$ off-line, and the results are saved in a lookup-table. This lookup-table can then be used during run-time to get a local approximation of \\ensuremath{\\Psi_T}\\xspace around the current belief.\n\nThe first question that arises is, how do we efficiently sample the state space? A naive approach would be to employ a simple uniform sampling strategy. However, for large state spaces this is often wasteful, because for motion planning problems, large portions of the state space are often irrelevant, since they either cannot be reached from the initial belief or are unlikely to be traversed by the robot during run-time. A better strategy is to consider only the subset of the state space that is reachable from the support set of the initial belief under any policy, denoted as $\\ensuremath{S}\\xspace_{\\ensuremath{b_0}\\xspace}$. To sample from $\\ensuremath{S}\\xspace_{\\ensuremath{b_0}\\xspace}$, we use a simple but effective method: Assuming deterministic dynamics, we solve the motion planning problem off-line using kinodynamic RRTs and use the nodes in the RRT-trees as a sampled representation of $\\ensuremath{S}\\xspace_{\\ensuremath{b_0}\\xspace}$. In principle any deterministic sampling-based motion planner can be used to generate samples from $\\ensuremath{S}\\xspace_{\\ensuremath{b_0}\\xspace}$; however, in our case RRT is particularly suitable due to its space-filling property \\ccite{kuffner2011space}. Note that RRT generates states according to a deterministic transition function only. If required, one could also generate additional samples according to the actual stochastic transition function of the robot. However, in our experiments the state samples generated by RRT were sufficient.
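\n\nTo illustrate the sampling step, the following is a heavily simplified random-tree sketch in the spirit of kinodynamic RRT, not our actual planner: states are assumed normalized, \\texttt{step} is the deterministic dynamics, and random controls replace proper steering and goal biasing.\n\\begin{verbatim}\nimport numpy as np\n\ndef sample_reachable_states(x0, step, act_dim, n_nodes=5000, rng=None):\n    # Grow a tree from x0 under deterministic dynamics step(x, a):\n    # sample a random target state and extend from the nearest node\n    # with a random control. The node set approximates S_{b0}.\n    rng = rng or np.random.default_rng(0)\n    nodes = [np.asarray(x0, dtype=float)]\n    for _ in range(n_nodes - 1):\n        target = rng.random(len(x0))              # uniform in [0,1]^n\n        nearest = min(nodes, key=lambda x: np.linalg.norm(x - target))\n        a = rng.uniform(-1.0, 1.0, size=act_dim)  # random control\n        nodes.append(np.asarray(step(nearest, a), dtype=float))\n    return nodes\n\\end{verbatim}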
Using the set $\tilde{A}$, we approximate \eref{e:snmTransCompRe} for each state in $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ (the sampled set of $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$) as follows:
Given a particular state $\ensuremath{s}\xspace \in \tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ and action $\ensuremath{a}\xspace \in \tilde{\ensuremath{A}\xspace}$, we draw $n$ samples from the original and linearized transition functions and construct a multidimensional histogram from each sample set. In other words, we discretize the distributions that follow from the original and linearized transition functions, given a particular state and action. Suppose the histogram consists of $k$ bins. The value $\ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace, \ensuremath{a}\xspace)$ is then approximated as
\begin{equation}\label{eq:smmTransCompStAct}
 \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace, \ensuremath{a}\xspace) \approx \frac{1}{2} \sum_{i = 1}^{k} \left |p_i - \widehat{p}_i \right |
\end{equation}
where $p_i = \frac{n_i}{\sum_{j=1}^k n_j}$ and $n_i$ is the number of states inside bin $i$ sampled from the original transition function, while $\widehat{p}_i = \frac{\widehat{n}_i}{\sum_{j=1}^k \widehat{n}_j}$ and $\widehat{n}_i$ is the number of states inside bin $i$ sampled from the linearized transition function. The right-hand side of \eref{eq:smmTransCompStAct} is simply the definition of the total variation distance between two discrete distributions.

By repeating the above process for each action in $\tilde{A}$ and taking the maximum, we end up with an approximation of $\ensuremath{\Psi_T}\xspace(s)$. This procedure is repeated for every state in the set $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$. As a result, we get a lookup-table that assigns each state in $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ an approximated value of $\ensuremath{\Psi_T}\xspace(s)$.

During planning, we can use the lookup-table and a sampled representation of a belief \ensuremath{b}\xspace to approximate SNM\xspace at \ensuremath{b}\xspace. Suppose $\tilde{\ensuremath{b}\xspace}$ is the sampled representation of \ensuremath{b}\xspace (e.g., a particle set); then for each state $s \in \tilde{\ensuremath{b}\xspace}$, we take the state $\ensuremath{s}\xspace_{near} \in \tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ that is nearest to $s$ and assign $\ensuremath{\Psi_T}\xspace(s) = \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_{near})$. The maximum value $\max_{s \in \tilde{\ensuremath{b}\xspace}} \ensuremath{\Psi_T}\xspace(s)$ then gives us an approximation of the transition component of SNM\xspace with respect to the belief \ensuremath{b}\xspace.

Clearly, this approximation method assumes that states that are close together yield similar SNM\xspace values. At first glance this is a very strong assumption. In the vicinity of obstacles or constraints, states that are close together could potentially yield very different SNM\xspace values.
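To make the per-state computation concrete, the following minimal Python sketch implements the approximation of \eref{eq:smmTransCompStAct}. It is a sketch under the assumption that callables \texttt{sample\_T} and \texttt{sample\_T\_lin} return arrays of successor states drawn from the original and linearized transition functions (all names are hypothetical):
\begin{verbatim}
import numpy as np

def psi_T_state(s, actions, sample_T, sample_T_lin, n=1000, bins=10):
    """Approximate Psi_T(s) = max_a 0.5 * sum_i |p_i - p_hat_i|.

    sample_T(s, a, n) and sample_T_lin(s, a, n) are assumed to return
    (n x dim) arrays of successor states sampled from the original and
    linearized transition functions, respectively.
    """
    best = 0.0
    for a in actions:
        x = sample_T(s, a, n)      # samples from T(s, a, .)
        y = sample_T_lin(s, a, n)  # samples from T_hat(s, a, .)
        # Shared bin edges so both histograms discretize the same region.
        lo = np.minimum(x.min(axis=0), y.min(axis=0))
        hi = np.maximum(x.max(axis=0), y.max(axis=0))
        edges = [np.linspace(lo[d], hi[d], bins + 1)
                 for d in range(x.shape[1])]
        p, _ = np.histogramdd(x, bins=edges)
        q, _ = np.histogramdd(y, bins=edges)
        p, q = p / p.sum(), q / q.sum()
        # Total variation distance between the discretized distributions.
        best = max(best, 0.5 * np.abs(p - q).sum())
    return best
\end{verbatim}
Running this function for every state in $\tilde{\ensuremath{S}\xspace}_{\ensuremath{b_0}\xspace}$ yields the lookup-table described above. With this procedure in place, we return to the locality assumption.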
However, we will now show that, under mild assumptions, pairs of states that are elements of certain subsets of the state space indeed yield similar SNM\xspace values.

Consider a partitioning of the state space into a finite number of local-Lipschitz subsets $S_i$ that are defined as follows:
\begin{definition}\label{d:partition}
Let \ensuremath{S}\xspace be a metric space with distance metric $D_{\ensuremath{S}\xspace}$. $\ensuremath{S}\xspace_i$ is called a local-Lipschitz subset of $\ensuremath{S}\xspace$ if for any $\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2 \in \ensuremath{S}\xspace_i$, any $\ensuremath{s'}\xspace \in \ensuremath{S}\xspace$ and any $\ensuremath{a}\xspace \in \ensuremath{A}\xspace: \left |\ensuremath{T}\xspace(\ensuremath{s}\xspace_1, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) - \ensuremath{T}\xspace(\ensuremath{s}\xspace_2, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) \right | \leq C_{\ensuremath{T}\xspace_i}D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2)$ and $\left |\ensuremath{\widehat{T}}\xspace(\ensuremath{s}\xspace_1, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) - \ensuremath{\widehat{T}}\xspace(\ensuremath{s}\xspace_2, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) \right | \leq C_{\ensuremath{\widehat{T}}\xspace_i}D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2)$, where $C_{\ensuremath{T}\xspace_i} \geq 0$ and $C_{\ensuremath{\widehat{T}}\xspace_i} \geq 0$ are finite local-Lipschitz constants.
\end{definition}
In other words, the $\ensuremath{S}\xspace_i$ are subsets of $\ensuremath{S}\xspace$ in which the original and linearized transition functions are Lipschitz continuous with Lipschitz constants $C_{\ensuremath{T}\xspace_i}$ and $C_{\ensuremath{\widehat{T}}\xspace_i}$. With this definition at hand, we can now show the following lemma:
\begin{lemma}\label{lem:lemApprox}
Let \ensuremath{S}\xspace be an $n$-dimensional metric space with distance metric $D_{\ensuremath{S}\xspace}$ and assume \ensuremath{S}\xspace is normalized to $\left [0, 1 \right ]^n$. Furthermore, let $\ensuremath{S}\xspace_i$ be a local-Lipschitz subset of \ensuremath{S}\xspace; then
\begin{equation}
\left | \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_1) - \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_2) \right | \leq \frac{1}{2} \sqrt{n} D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2) \left [C_{\ensuremath{T}\xspace_i} + C_{\ensuremath{\widehat{T}}\xspace_i} \right ] \nonumber
\end{equation}
 for any $\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2 \in S_i$.
\label{l:subsetLipschitz}
\end{lemma}
The proof of this lemma is presented in \appref{ssec:proofLem4}. The lemma indicates that the difference between the SNM\xspace values for two states from the same local-Lipschitz subset $\ensuremath{S}\xspace_i$ depends only on the distance $D_{\ensuremath{S}\xspace}$ between them, since $C_{\ensuremath{T}\xspace_i}$ and $C_{\ensuremath{\widehat{T}}\xspace_i}$ are constant for each subset $\ensuremath{S}\xspace_i$. Thus, as the distance between two states converges towards zero, the SNM\xspace value difference converges towards zero as well.
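As a concrete illustration: in a normalized two-dimensional state space ($n = 2$) with local-Lipschitz constants $C_{\ensuremath{T}\xspace_i} = C_{\ensuremath{\widehat{T}}\xspace_i} = 1$, two states within the same subset at distance $D_{\ensuremath{S}\xspace}(\ensuremath{s}\xspace_1, \ensuremath{s}\xspace_2) = 0.1$ satisfy $\left | \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_1) - \ensuremath{\Psi_T}\xspace(\ensuremath{s}\xspace_2) \right | \leq \frac{1}{2}\sqrt{2} \cdot 0.1 \cdot 2 \approx 0.14$, regardless of where in $\ensuremath{S}\xspace_i$ the two states lie.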
This implies that we can approximate SNM\xspace for a sparse, sampled representation of $\ensuremath{S}\xspace_{\ensuremath{b_0}\xspace}$ and re-use these approximations on-line with a small error, without requiring an explicit representation of the $\ensuremath{S}\xspace_i$ subsets.

\section{SNM-Planner: An Application of SNM\xspace for Planning}
\label{sec:method}
SNM-Planner\xspace is an on-line planner that uses SNM\xspace as a heuristic to decide whether a general or a linearization-based POMDP solver should be used to compute the policy from the current belief. The general solver is Adaptive Belief Tree (ABT)\ccite{Kur13:An}, while the linearization-based method is Modified High Frequency Replanning (MHFR), an adaptation of HFR\ccite{Sun15:High}. HFR is designed for chance-constrained POMDPs, i.e., it explicitly minimizes the collision probability, while MHFR is a POMDP solver whose objective is to maximize the expected total reward. An overview of SNM-Planner\xspace is shown in \aref{alg:smnd}. During run-time, at each planning step, SNM-Planner\xspace computes a local approximation of SNM\xspace around the current belief $\ensuremath{b}\xspace_i$ (line 5). If this value is smaller than a given threshold, SNM-Planner\xspace uses MHFR to compute a policy from the current belief, whereas ABT is used when the value exceeds the threshold (lines 8-12). The robot then executes an action according to the computed policy (line 13) and receives an observation (line 14). Based on the executed action and perceived observation, we update the belief (line 15). SNM-Planner\xspace represents beliefs as sets of particles and updates the belief using a SIR particle filter\ccite{arulampalam2002tutorial}. Note that MHFR assumes that beliefs are multivariate Gaussian distributions. Therefore, in case MHFR is used for the policy computation, we compute the first two moments (mean and covariance) of the particle set to obtain a multivariate Gaussian approximation of the current belief (a sketch of this step is given below). The process then repeats from the updated belief until the robot has entered a terminal state (we assume that we know when the robot enters a terminal state) or until a maximum number of steps is reached.

In the following two subsections we provide a brief overview of the two component planners, ABT and MHFR.

\begin{algorithm}
\caption{SNM-Planner\xspace (initial belief \ensuremath{b_0}\xspace, SNM\xspace threshold $\mu$, max. planning time per step $t$, max. number of steps $N$)}\label{alg:smnd}
\begin{algorithmic}[1]
\State InitializeABT(\ensuremath{P}\xspace)
\State InitializeMHFR(\ensuremath{P}\xspace)
\State $i=0$, $\ensuremath{b}\xspace_i = \ensuremath{b_0}\xspace$, terminal = False
\While{terminal is False and $i < N$}
\State $\widehat{\Psi} =\ $approximateSNM($\ensuremath{b}\xspace_i$)
\State $t_p = t - t_a$
\Comment $t_{a}$ is the time the algorithm takes to approximate SNM\xspace
\If{$\widehat{\Psi} < \mu$}
	\State $a =\ $MHFR($\ensuremath{b}\xspace_i$, $t_p$)
\Else
 \State $a =\ $ABT($\ensuremath{b}\xspace_i$, $t_p$)
\EndIf
\State terminal = executeAction($a$)
\State $o =\ $get observation
\State $b_{i+1} = \tau(\ensuremath{b}\xspace_i, a, o)$
\State $i = i + 1$
\EndWhile
\end{algorithmic}
\end{algorithm}
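As mentioned above, when MHFR is selected, the particle-based belief must first be converted into a multivariate Gaussian by moment matching. A minimal numpy sketch of this step (function and variable names are hypothetical):
\begin{verbatim}
import numpy as np

def gaussian_from_particles(particles, weights=None):
    """Moment-matching: approximate a particle belief by a Gaussian.

    particles: (N x n) array of belief particles.
    weights:   optional (N,) array of particle weights; uniform if None.
    Returns the mean and covariance handed to MHFR.
    """
    particles = np.asarray(particles, dtype=float)
    if weights is None:
        weights = np.full(len(particles), 1.0 / len(particles))
    weights = weights / weights.sum()
    mean = weights @ particles
    centered = particles - mean
    cov = (weights[:, None] * centered).T @ centered
    return mean, cov
\end{verbatim}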
\subsection{Adaptive Belief Tree (ABT)}\label{ssec:ABT}
ABT is a general and anytime on-line POMDP solver based on Monte-Carlo Tree Search (MCTS). ABT updates (rather than recomputes) its policy at each planning step. To update the policy for the current belief, ABT iteratively constructs and maintains a belief tree, a tree whose nodes are beliefs and whose edges are pairs of actions and observations. ABT evaluates sequences of actions by sampling episodes, that is, sequences of state--action--observation--reward tuples, starting from the current belief. Details of ABT can be found in\ccite{Kur13:An}.

\subsection{Modified High-Frequency Replanning (MHFR)}\label{ssec:MHFR}
As noted above, the main difference between HFR and MHFR is that HFR is designed for chance-constrained POMDPs, i.e., it explicitly minimizes the collision probability, while MHFR is a POMDP solver whose objective is to maximize the expected total reward. Similar to HFR, MHFR approximates the current belief by a multivariate Gaussian distribution. To compute the policy from the current belief, MHFR samples a set of trajectories from the mean of the current belief to a goal state using multiple instances of RRTs\ccite{kuffner2011space} in parallel. It then computes the expected total discounted reward of each trajectory by tracking the beliefs around the trajectory using a Kalman Filter, assuming maximum-likelihood observations. The policy then becomes the first action of the trajectory with the highest expected total discounted reward. After executing the action and perceiving an observation, MHFR updates the belief using an Extended Kalman Filter. The process then repeats from the updated belief. To increase efficiency, MHFR additionally adjusts the previous trajectory with the highest expected total discounted reward to start from the mean of the updated belief and adds this trajectory to the set of sampled trajectories. More details on HFR and precise derivations of the method are available in\ccite{Sun15:High}.

\section{Background and Related Work}\label{sec:relWork}
\subsection{Background}
In this paper, we consider motion planning problems in which a robot must move from a given initial state to a state in the goal region while avoiding obstacles. The robot operates inside deterministic, bounded, and perfectly known 2D or 3D environments populated by static obstacles.

The robot's transition and observation models are uncertain and defined as follows.
Let $\ensuremath{S}\xspace \subset \mathbb{R}^n$ be the bounded $n$-dimensional state space, $A \subset \mathbb{R}^d$ the bounded $d$-dimensional control space and $\ensuremath{O}\xspace \subset \mathbb{R}^l$ the bounded $l$-dimensional observation space of the robot.
The state of the robot evolves according to a discrete-time non-linear function, which we model in the general form $\ensuremath{s}\xspace_{t+1} = f(\ensuremath{s}\xspace_t, \ensuremath{a}\xspace_t, v_t)$, where $\ensuremath{s}\xspace_t \in \ensuremath{S}\xspace$ is the state of the robot at time $t$, $\ensuremath{a}\xspace_t \in \ensuremath{A}\xspace$ is the control input at time $t$, and $v_t \in \mathbb{R}^d$ is a random transition error.
At each time step $t$, the robot perceives imperfect information regarding its current state according to a non-linear stochastic function of the form $\ensuremath{o}\xspace_t = h(\ensuremath{s}\xspace_t, w_t)$, where $\ensuremath{o}\xspace_t \in \ensuremath{O}\xspace$ is the observation at time $t$ and $w_t \in \mathbb{R}^l$ is a random observation error.

This class of motion planning problems under uncertainty can naturally be formulated as a Partially Observable Markov Decision Process (POMDP). Formally, a POMDP is a tuple \ensuremath{\langle S, A, O, T, Z, R, \ensuremath{b_0}\xspace, \gamma \rangle}\xspace, where \ensuremath{S}\xspace, \ensuremath{A}\xspace and \ensuremath{O}\xspace are the state, action, and observation spaces of the robot. $T$ is a conditional probability function $T(\ensuremath{s}\xspace, \ensuremath{a}\xspace, \ensuremath{s'}\xspace) = p(\ensuremath{s'}\xspace \,|\, \ensuremath{s}\xspace, \ensuremath{a}\xspace)$ (where $\ensuremath{s}\xspace, \ensuremath{s'}\xspace \in \ensuremath{S}\xspace$ and $\ensuremath{a}\xspace \in \ensuremath{A}\xspace$) that models the uncertainty in the effect of performing actions, while $Z(\ensuremath{s}\xspace, \ensuremath{a}\xspace, \ensuremath{o}\xspace) = p(\ensuremath{o}\xspace | \ensuremath{s}\xspace, \ensuremath{a}\xspace)$ (where $\ensuremath{o}\xspace\in\ensuremath{O}\xspace$) is a conditional probability function that models the uncertainty in perceiving observations. $R(\ensuremath{s}\xspace, \ensuremath{a}\xspace)$ is a reward function, which encodes the planning objective. \ensuremath{b_0}\xspace is the initial belief, capturing the uncertainty in the robot's initial state, and $\gamma \in (0, 1)$ is a discount factor.

At each time-step, a POMDP agent is at a state $s \in \ensuremath{S}\xspace$, takes an action $a \in \ensuremath{A}\xspace$, perceives an observation $o \in \ensuremath{O}\xspace$, receives a reward based on the reward function $R(s, a)$, and moves to the next state. Due to the uncertainty in the effects of actions and in sensing, the agent never knows its exact state; instead, it estimates its state as a probability distribution, called a belief. The solution to the POMDP problem is an optimal policy (denoted as \ensuremath{\pi^*}\xspace), which is a mapping $\ensuremath{\pi^*}\xspace: \mathbb{B} \rightarrow \ensuremath{A}\xspace$ from beliefs ($\mathbb{B}$ denotes the set of all beliefs, called the belief space) to actions that maximizes the expected total reward the robot receives, i.e.,
\begin{align}
&V^*(\ensuremath{b}\xspace) = \nonumber\\ &\max_{a \in \ensuremath{A}\xspace} \left(R(b, a) + \gamma \int_{o \in \ensuremath{O}\xspace} p(o | b, a) V^*(\tau(b, a, o)) \, \ensuremath{d}\xspace\ensuremath{o}\xspace\right)
\end{align}
where $\tau(b, a, o)$ computes the updated belief after the robot performs action $a \in \ensuremath{A}\xspace$ and perceives $o \in \ensuremath{O}\xspace$ from belief $b$, and is defined as:
\begin{align}\label{e:belTrans}
b'(s') &= \tau(b, a, o)(s') \nonumber \\&= \eta \, Z(s', a, o) \int_{s \in \ensuremath{S}\xspace} T(s, a, s') b(s) ds
\end{align}
where $\eta$ is a normalization constant.
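For the continuous spaces considered here, the belief update \eref{e:belTrans} is typically carried out approximately on a particle representation of the belief, as in the SIR particle filter used by SNM-Planner\xspace above. A minimal sketch, where the sampler and likelihood callables are hypothetical stand-ins for $T$ and $Z$:
\begin{verbatim}
import numpy as np

def belief_update(particles, a, o, sample_T, obs_likelihood, rng):
    """Approximate tau(b, a, o) on a particle representation of b.

    sample_T(s, a):           samples a successor s' from T(s, a, .).
    obs_likelihood(s2, a, o): evaluates Z(s', a, o).
    """
    propagated = np.array([sample_T(s, a) for s in particles])
    weights = np.array([obs_likelihood(s2, a, o) for s2 in propagated])
    weights = weights / weights.sum()  # eta, the normalization constant
    # Resample to obtain an unweighted particle set representing b'.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return propagated[idx]
\end{verbatim}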
For the motion planning problems considered in this work, we define the spaces $S$, $A$, and $O$ to be the same as those of the robotic system (for simplicity, we use the same notation). The transition function $T$ represents the dynamics model $f$, while $Z$ represents the sensing model $h$. The reward function represents the task's objective, for example, a high reward for goal states and a negative reward for states that cause the robot to collide with the obstacles. The initial belief \ensuremath{b_0}\xspace represents the uncertainty in the starting state of the robot.

\subsection{Related Work on Non-Linearity Measures}
Linearization is a common practice in solving non-linear control and estimation problems. It is known that linearization performs well only when the system's non-linearity is ``weak''\ccite{Li12:Measure}. To identify the effectiveness of linearization in solving non-linear problems, a number of non-linearity measures have been proposed in the control and information fusion community.

Many of these measures (e.g.,\ccite{Bat80:Relative,Bea60:Confidence,Ema93:A}) have been designed for deterministic systems. For instance,\ccite{Bat80:Relative} proposed a measure derived from the curvature of the non-linear function. The work in\ccite{Bea60:Confidence,Ema93:A} computes a measure based on the distance between the non-linear function and its nearest linearization. A brief survey of non-linearity measures for deterministic systems is available in\ccite{Li12:Measure}.

Non-linearity measures for stochastic systems have also been proposed. For instance,\ccite{Li12:Measure} extends the measures in\ccite{Bea60:Confidence,Ema93:A} to be based on the average distance between the non-linear function that models the motion and sensing of the system, and the set of all possible linearizations of the function.

Another example is\ccite{Dun13:Nonlinearity}, which proposes a measure based on the distance between a distribution over states and its Gaussian approximation, called the Measure of Non-Gaussianity\xspace (MoNG\xspace), rather than on the non-linear function itself. Assuming a passive stochastic system, this measure computes the negentropy between a transformed belief and its Gaussian approximation. The results indicate that this measure is more suitable for measuring the non-linearity of stochastic systems, as it takes into account the effect that non-linear transformations have on the shape of the transformed beliefs. This advancement is encouraging, and we will use MoNG\xspace as a comparator for SNM\xspace. However, for this purpose, MoNG\xspace must be modified, since we consider non-passive problems in this work. The exact modifications we made can be found in Section~\ref{ssec:mong}.

Despite the various non-linearity measures that have been proposed, most are not designed to take the effect of obstacles on the non-linearity of the system into account. Except for MoNG\xspace, all of the aforementioned non-linearity measures will have difficulties in reflecting these effects, even when they are embedded in the motion and sensing models. For instance, curvature-based measures require the non-linear function to be twice continuously differentiable, but the presence of obstacles is very likely to break the differentiability of the motion model. Furthermore, the effect of obstacles is likely to violate the additive Gaussian error assumption required, for instance, by\ccite{Li12:Measure}. Although MoNG\xspace can potentially take the effect of obstacles into account, it is not designed to.
In the presence of obstacles, beliefs have support only in the valid region of the state space, and therefore computing the difference between beliefs and their Gaussian approximations is likely to underestimate the effect of obstacles.

SNM\xspace is designed to address these issues. Instead of building upon existing non-linearity measures, SNM\xspace adopts approaches commonly used for sensitivity analysis\ccite{Mas12:Loss,Mul97:Does} of Markov Decision Processes (MDPs), a special class of POMDPs where the observation model is perfect and therefore the system is fully observable. These approaches use statistical distance measures between the original transition dynamics and their perturbed versions. Linearized dynamics can be viewed as a special case of perturbed dynamics, and hence this statistical distance measure can be applied as a non-linearity measure, too. We do need to extend these analyses, as they are generally defined for discrete state spaces and only with respect to the transition model (an MDP assumes the state of the system is fully observable). Nevertheless, such extensions are feasible, and the generality of this measure could help identify the effectiveness of linearization in motion planning under uncertainty problems.

\section{Experiments and Results}
The purpose of our experiments is twofold: to test the applicability of SNM\xspace to motion planning under uncertainty problems and to test SNM-Planner\xspace. For our first objective, we compare SNM\xspace with a modified version of the Measure of Non-Gaussianity (MoNG\xspace)\ccite{Dun13:Nonlinearity}. Details on this measure are in Section~\ref{ssec:mong}. We evaluate both measures using two robotic systems, a car-like robot with 2$^{nd}$-order dynamics and a torque-controlled 4DOFs manipulator, where both robots are subject to increasing uncertainties and increasing numbers of obstacles in the operating environment. Furthermore, we test both measures when the robots are subject to highly non-linear collision dynamics and different observation models. Details on the robot models are presented in Section~\ref{ssec:robot_models}, whereas the evaluation experiments are presented in Section~\ref{ssec:testing_snm}.

To test SNM-Planner\xspace, we compare it with ABT and MHFR on three problem scenarios, including a torque-controlled 7DOFs manipulator operating inside a 3D office environment. Additionally, we test how sensitive SNM-Planner\xspace is to the choice of the SNM\xspace-threshold. The results for these experiments are presented in Section~\ref{ssec:testing_snm-planner}.

All problem environments are modelled within the OPPT framework\ccite{hoerger2018software}. The solvers are implemented in C++. For the parallel construction of the RRTs in MHFR, we utilize 8 CPU cores throughout the experiments. All parameters are set based on preliminary runs over the possible parameter space; the parameters that generated the best results were then chosen to generate the experimental results.

\subsection{Measure of Non-Gaussianity\xspace}\label{ssec:mong}
The Measure of Non-Gaussianity (MoNG\xspace) proposed in\ccite{Dun13:Nonlinearity} is based on the negentropy between the PDF of a random variable and its Gaussian approximation. Consider an $n$-dimensional random variable $X$ distributed according to PDF $p(x)$.
Furthermore, let \lin{X} be a Gaussian approximation of $X$ with PDF $\widehat{p}(x)$, such that $\lin{X} \sim N(\mu , \Sigma_x)$, where $\mu$ and $\Sigma_x$ are the first two moments of $p(x)$. The negentropy between $p$ and \lin{p} (denoted as \J{p}{\lin{p}}) is then defined as
\begin{equation}
\J{p}{\lin{p}} = H(\lin{p}) - H(p)
\end{equation}
where
\begin{equation}\label{e:entropies}
\begin{split}
\entr{\lin{p}} &= \frac{1}{2} \ln \left [(2 \pi e)^n \left |\det(\Sigma_x) \right | \right ] \\
\entr{p} &= - \int p(x) \ln p(x) dx
\end{split}
\end{equation}
are the differential entropies of $\lin{p}$ and $p$, respectively.
A (multivariate) Gaussian distribution has the largest differential entropy amongst all distributions with equal first two moments; therefore, \J{p}{\lin{p}} is always non-negative. In practice, since the PDF $p(x)$ is not known exactly in all but the simplest cases, \entr{p} has to be approximated.

In\ccite{Dun13:Nonlinearity} this measure was originally used to assess the non-linearity of passive systems. Therefore, in order to achieve comparability with SNM\xspace, we need to extend the non-Gaussianity measure to general active stochastic systems of the form $s_{t+1} = f(s_t, a_t, v_t)$. We do this by evaluating the non-Gaussianity of the distributions that follow from the transition function \ensuremath{T(s, a, s')}\xspace, given state $s$ and action $a$. In particular, for a given $s$ and $a$, we can find a Gaussian approximation of \ensuremath{T(s, a, s')}\xspace (denoted by \ensuremath{T_G(s, a, s')}\xspace) by calculating the first two moments of the distribution that follows from \ensuremath{T(s, a, s')}\xspace.

Using this Gaussian approximation, we define the Measure of Non-Gaussianity\xspace as
\begin{align}
&MoNG\xspace(\ensuremath{T}\xspace, \ensuremath{T_G}\xspace) = \nonumber\\ &\sup_{\ensuremath{s}\xspace \in \ensuremath{S}\xspace, \ensuremath{a}\xspace \in \ensuremath{A}\xspace} \left [H(\ensuremath{T_G(s, a, s')}\xspace) - H(\ensuremath{T(s, a, s')}\xspace)\right ]
\end{align}

Similarly, we can compute the Measure of Non-Gaussianity\xspace for the observation function:
\begin{align}
&MoNG\xspace(\ensuremath{Z}\xspace, \ensuremath{Z_G}\xspace) = \nonumber\\ &\sup_{\ensuremath{s}\xspace \in \ensuremath{S}\xspace, \ensuremath{a}\xspace \in \ensuremath{A}\xspace} \left [H(\ensuremath{Z_G(s, a, o)}\xspace) - H(\ensuremath{Z(s, a, o)}\xspace) \right ]
\end{align}
where \ensuremath{Z_G}\xspace is a Gaussian approximation of \ensuremath{Z}\xspace.

To approximate the entropies $\entr{\ensuremath{T(s, a, s')}\xspace}$ and \entr{\ensuremath{Z(s, a, o)}\xspace}, we use a histogram-based approach similar to the one discussed in Section~\ref{ssec:monApprox}. The entropy terms for the Gaussian approximations can be computed in closed form, according to the first equation in \eref{e:entropies}\ccite{Ahmed89Entropy}.
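For a given state and action, this computation can be sketched in a few lines of Python; the sampler callable is a hypothetical stand-in for the transition model, and the histogram-based entropy estimate uses equal-width cells of volume $V$, so that $H(p) \approx -\sum_i p_i \ln (p_i / V)$:
\begin{verbatim}
import numpy as np

def negentropy_T(s, a, sample_T, n=1000, bins=10):
    """Estimate H(T_G(s,a,.)) - H(T(s,a,.)) for one state-action pair.

    sample_T(s, a) is assumed to sample a successor state (a vector)
    from the original transition function T(s, a, .).
    """
    x = np.asarray([sample_T(s, a) for _ in range(n)], dtype=float)
    dim = x.shape[1]
    # Closed-form entropy of the Gaussian with matched first two moments.
    cov = np.atleast_2d(np.cov(x, rowvar=False))
    h_gauss = 0.5 * np.log((2.0 * np.pi * np.e) ** dim
                           * abs(np.linalg.det(cov)))
    # Histogram-based estimate of the differential entropy of T(s,a,.).
    counts, edges = np.histogramdd(x, bins=bins)
    p = counts / counts.sum()
    vol = np.prod([e[1] - e[0] for e in edges])  # equal-width cell volume
    nz = p[p > 0]
    h_T = -np.sum(nz * np.log(nz / vol))
    return h_gauss - h_T
\end{verbatim}
The observation component is estimated analogously by sampling observations from \ensuremath{Z(s, a, o)}\xspace.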
\subsection{Robot Models}\label{ssec:robot_models}
\subsubsection{4DOFs-Manipulator. }\label{sssec:4DOFManipulator}
The 4DOFs-manipulator consists of 4 links connected by 4 torque-controlled revolute joints. The first joint is connected to a static base. In all problem scenarios the manipulator must move from a known initial state to a state where the end-effector lies inside a goal region located in the workspace of the robot, while avoiding collisions with the obstacles the environment is populated with.

The state of the manipulator is defined as $\ensuremath{s}\xspace=(\theta, \dot{\theta}) \in \mathbb{R}^{8}$, where $\theta$ is the vector of joint angles and $\dot{\theta}$ the vector of joint velocities. Both joint angles and joint velocities are subject to linear constraints: The joint angles are constrained by $(-3.14, 3.14)rad$, whereas the joint velocities are constrained by $(6,\allowbreak 2,\allowbreak 2,\allowbreak 2)rad/s$ in each direction. Each link of the robot has a mass of $1kg$.

The control inputs of the manipulator are the joint torques, where the maximum joint torques are $(20,\allowbreak 20,\allowbreak 10,\allowbreak 5)Nm$ in each direction. Since ABT assumes a discrete action space, we discretize the joint torques for each joint using the maximum torque in each direction, which leads to $2^4 = 16$ actions.

The dynamics of the manipulator are defined using the well-known Newton-Euler formalism\ccite{spong06:RobotModelling}. For both manipulators we assume that the input torque for each joint is affected by zero-mean additive Gaussian noise. Note, however, that even though the error is Gaussian, the beliefs will in general not be Gaussian, due to the non-linearities of the motion dynamics. Since the transition dynamics for this robot are quite complex, we assume that the joint torques are applied for 0.1s, and we use the ODE physics engine\ccite{drumwright2010:extending} for the numerical integration of the dynamics, where the discretization (\textrm{i.e.}\ $\delta t$) of the integrator is set to $\delta t = 0.004s$.

The robot is equipped with two sensors: The first sensor measures the position of the end-effector inside the robot's workspace, whereas the second sensor measures the joint velocities. Consider a function $g: \mathbb{R}^{8} \rightarrow \mathbb{R}^3$ that maps the state of the robot to an end-effector position inside the workspace; the observation model is then defined as
\begin{equation}\label{eq:obs4DOF}
o = [g(s), \dot{\theta}]^T + w
\end{equation}
where $w$ is an error term drawn from a zero-mean multivariate Gaussian distribution with covariance matrix $\Sigma_w$.

The initial state of the robot is the state where all joint angles and velocities are zero.

When the robot performs an action that leads to a collision with an obstacle, it enters a terminal state and receives a penalty of -500. When it reaches the goal area, it also enters a terminal state, but receives a reward of 1,000. To encourage the robot to reach the goal area quickly, it receives a small penalty of -1 for every other action.
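This reward structure translates directly into code. A minimal sketch, where the collision and goal predicates are hypothetical helper functions:
\begin{verbatim}
def reward(s, in_collision, in_goal):
    """Reward model for the manipulator scenarios.

    in_collision(s) and in_goal(s) are assumed helper predicates;
    both collision and goal states are terminal.
    """
    if in_collision(s):
        return -500.0   # collision penalty
    if in_goal(s):
        return 1000.0   # goal reward
    return -1.0         # step penalty, encourages reaching the goal quickly
\end{verbatim}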
\subsubsection{7DOFs Kuka iiwa manipulator. }\label{sssec:7DOFManipulator}
The 7DOFs Kuka iiwa manipulator is very similar to the 4DOFs-manipulator, but consists of 7 links connected via 7 revolute joints. We set the POMDP model to be similar to that of the 4DOFs-manipulator, but expand it to handle 7DOFs. For this robot, the joint velocities are constrained by $(3.92,\allowbreak 2.91,\allowbreak 2.53,\allowbreak 2.23,\allowbreak 2.23,\allowbreak 2.23,\allowbreak 1.0)rad/s$ in each direction and the link masses are $(4,\allowbreak 4,\allowbreak 3,\allowbreak 2.7,\allowbreak 1.7,\allowbreak 1.8,\allowbreak 0.3)kg$. Additionally, the torque limits of the joints are $(25,\allowbreak 20,\allowbreak 10,\allowbreak 10,\allowbreak 5,\allowbreak 5,\allowbreak 0.5)Nm$ in each direction. For ABT we use the same discretization of the joint torques as in the 4DOFs-manipulator case, \textrm{i.e.,} we use the maximum torque per joint in each direction, resulting in $2^7 = 128$ actions. Similarly to the 4DOFs-manipulator, we assume that the input torques are applied for 0.1s, and we use the ODE physics engine with an integration step size of 0.004s to simulate the transition dynamics. The observation and reward models are the same as for the 4DOFs-manipulator. The initial joint velocities are all zero, and almost all initial joint angles are zero too, except for the second joint, whose initial joint angle is $-1.5rad$. \fref{f:scenarioCompStudy}(c) shows the Kuka manipulator operating inside an office scenario.

\subsubsection{Car-like robot. }\label{sssec:SecOrderCar}
A nonholonomic car-like robot of size ($0.12\times0.07\times0.01$) drives on a flat xy-plane inside a 3D environment populated by obstacles. The robot must drive from a known start state to a position inside a goal region without colliding with any of the obstacles. The state of the robot at time $t$ is defined as a 4D vector $s_t = (x_t, y_t, \theta_t, \upsilon_t) \in \mathbb{R}^4$, where $x_t, y_t \in [-1, 1]$ is the position of the center of the robot on the $xy$-plane, $\theta_t \in [-3.14, 3.14]rad$ the orientation and $\upsilon_t \in [-0.2, 0.2]$ the linear velocity of the robot. The initial state of the robot is $(-0.7, -0.7, 1.57rad, 0)$, while the goal region is centered at $(0.7, 0.7)$ with radius $0.1$. The control input at time $t$, $a_t = (\alpha_t, \phi_t)$, is a 2D real vector consisting of the acceleration $\alpha_t \in [-1, 1]$ and the steering wheel angle $\phi_t \in [-1rad, 1rad]$. The robot's dynamics are subject to control noise $v_t = (\tilde{\alpha}_t, \tilde{\phi}_t) \sim N(0, \Sigma_v)$. The robot's transition model is
\begin{equation}\label{eq:carDynamics}
 s_{t+1} = f(s_t, a_t, v_t) = \begin{bmatrix}
x_t + \Delta t \upsilon_t \cos \theta_t\\
y_t + \Delta t \upsilon_t \sin \theta_t\\
\theta_t + \Delta t \tan(\phi_t + \tilde{\phi}_t) / 0.11\\
\upsilon_t + \Delta t (\alpha_t + \tilde{\alpha}_t)
\end{bmatrix}
\end{equation}
where $\Delta t = 0.3s$ is the duration of a timestep and the value $0.11$ is the distance between the front and rear axles of the wheels.
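A direct Python transcription of \eref{eq:carDynamics}, with the noise covariance supplied by the caller, reads:
\begin{verbatim}
import numpy as np

DT = 0.3      # duration of a timestep (s)
AXLE = 0.11   # distance between the front and rear axles

def car_step(s, a, sigma_v, rng):
    """One step of the car-like robot's transition model."""
    x, y, theta, v = s
    alpha, phi = a
    # Control noise (alpha_err, phi_err) ~ N(0, sigma_v).
    alpha_err, phi_err = rng.multivariate_normal(np.zeros(2), sigma_v)
    return np.array([
        x + DT * v * np.cos(theta),
        y + DT * v * np.sin(theta),
        theta + DT * np.tan(phi + phi_err) / AXLE,
        v + DT * (alpha + alpha_err),
    ])
\end{verbatim}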
This robot is equipped with two types of sensors: a localization sensor that receives signals from two beacons located at $(\hat{x}_1, \hat{y}_1)$ and $(\hat{x}_2, \hat{y}_2)$, and a velocity sensor mounted on the car. With these two sensors, the observation model is defined as
\begin{equation}\label{e:obstFunctCarMazeAdditive}
o_t = \begin{bmatrix}
\frac{1}{((x_t - \hat{x}_1)^2 + (y_t - \hat{y}_1)^2 + 1)}\\
\frac{1}{((x_t - \hat{x}_2)^2 + (y_t - \hat{y}_2)^2 + 1)}\\
\upsilon_t
\end{bmatrix} + w_t
\end{equation}
where $w_t$ is an error vector drawn from a zero-mean multivariate Gaussian distribution with covariance matrix $\Sigma_w$.

Similar to the manipulators described above, the robot receives a penalty of -500 when it collides with an obstacle, a reward of 1,000 when reaching the goal area, and a small penalty of -1 for any other action.


\subsection{Testing SNM\xspace}\label{ssec:testing_snm}
In this set of experiments we want to understand the performance of SNM\xspace compared to MoNG\xspace in various scenarios. In particular, we are interested in the effect of increasing uncertainties and the effect that obstacles have on the effectiveness of SNM\xspace, and whether these results are consistent with the performance of a general solver relative to a linearization-based solver. Additionally, we want to see how highly non-linear collision dynamics and different observation models (one with additive and one with non-additive Gaussian noise) affect our measure.
For the experiments with increasing motion and sensing errors, recall from Section~\ref{ssec:robot_models} that the control errors are drawn from zero-mean multivariate Gaussian distributions with covariance matrices $\Sigma_v$. We define the control error (denoted as \ensuremath{e_T}\xspace) to be the standard deviation of these Gaussian distributions, such that $\Sigma_v = \ensuremath{e_T}\xspace^2 \times \mathds{1}$. Similarly, for the covariance matrices of the zero-mean multivariate Gaussian sensing errors, we define the observation error as \ensuremath{e_Z}\xspace, such that $\Sigma_w = \ensuremath{e_Z}\xspace^2 \times \mathds{1}$. Note that during all the experiments we use normalized spaces, which means that the error vectors affect the normalized action and observation vectors.
For SNM\xspace and MoNG\xspace we first generated 100,000 state samples for each scenario and computed a lookup table for each error value off-line, as discussed in Section~\ref{ssec:monApprox}. Then, during run-time, we calculated the average approximated SNM\xspace and MoNG\xspace values.
\subsubsection{Effects of increasing uncertainties in cluttered environments. }\label{ssec:increasingly_uncertain}
\begin{figure*}
\centering
\begin{tabular}{c@{\hspace*{5pt}}c@{\hspace*{5pt}}c@{\hspace*{5pt}}}
\includegraphics[height=4cm]{DubinMaze} &
\includegraphics[height=4cm]{4DOFFactory1-cropped} &
\includegraphics[height=4cm]{KukaOfficeEnvironment} \\
(a) Maze & (b) Factory & (c) KukaOffice
\end{tabular}
\caption{Test scenarios for the different robots. The objects colored black and gray are obstacles, while the green sphere is the goal region. (a) The Maze scenario for the car-like robot. The blue squares represent the beacons, while the orange square at the bottom left represents the initial state. (b) The 4DOFs-manipulator scenario.
(c) The KukaOffice scenario.}
\label{f:scenarioCompStudy}
\end{figure*}

To investigate the effects of increasing control and observation errors on SNM\xspace, MoNG\xspace, and the two solvers ABT and MHFR in cluttered environments, we ran a set of experiments where the 4DOFs-manipulator and the car-like robot operate in empty environments and in environments with obstacles, with increasing values of \ensuremath{e_T}\xspace and \ensuremath{e_Z}\xspace, ranging between $0.001$ and $0.075$. The environments with obstacles are the Maze and Factory environments shown in \fref{f:scenarioCompStudy}(a) and (b). For each scenario and each control-sensing error value (we set $\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$), we ran 100 simulation runs using ABT and MHFR, respectively, with a planning time of 2s per step.

The average values for SNM\xspace and MoNG\xspace and the relative value differences between ABT and MHFR in the empty environments are presented in \tref{t:measureCompareEmpty}.
The results show that for both scenarios SNM\xspace and MoNG\xspace are sensitive to increasing transition and observation errors. This is consistent with the relative value difference between ABT and MHFR. The more interesting question is how sensitive both measures are to obstacles in the environment. \tref{t:measureCompareClutter}(a) and (b) show the results for the Factory and the Maze scenario, respectively. It is evident that SNM\xspace increases significantly compared to the empty environments, whereas MoNG\xspace is almost unaffected. Overall, obstacles increase the relative value difference between ABT and MHFR, except for large uncertainties in the Maze scenario. This indicates that MHFR suffers more from the additional non-linearities that obstacles introduce. SNM\xspace is able to capture these effects well.

An interesting remark regarding the results for the Maze scenario in \tref{t:measureCompareClutter}(b) is that the relative value difference actually decreases for large uncertainties. The reason for this can be seen in \fref{f:relValCarMaze}. As the uncertainties increase, the problem becomes so difficult that both solvers fail to compute a reasonable policy within the given planning time.
However, MHFR clearly suffers from these large uncertainties earlier than ABT does.

\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{(a) Empty environment 4DOFs-manipulator}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.207 & 0.548 & 0.0110 \\ \hline
0.0195 & 0.213 & 0.557 & 0.0346 \\ \hline
0.038 & 0.243 & 0.603 & 0.0385 \\ \hline
0.057 & 0.254 & 0.617 & 0.0437 \\ \hline
0.075 & 0.313 & 0.686 & 0.0470 \\ \hline \hline
\multicolumn{4}{|c|}{\textbf{(b) Empty environment Car-like robot}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.169 & 0.473 & 0.1426 \\ \hline
0.0195 & 0.213 & 0.479 & 0.1793 \\ \hline
0.038 & 0.295 & 0.458 & 0.1747 \\ \hline
0.057 & 0.350 & 0.476 & 0.1839 \\ \hline
0.075 & 0.395 & 0.446 & 0.2641 \\ \hline
\end{tabular}}
\caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator (a) and the car-like robot (b) operating inside empty environments.}
\label{t:measureCompareEmpty}
\end{table}

\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{(a) Factory environment}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.293 & 0.539 & 0.0892 \\ \hline
0.0195 & 0.351 & 0.567 & 0.1801 \\ \hline
0.038 & 0.470 & 0.621 & 0.5818 \\ \hline
0.057 & 0.502 & 0.637 & 0.7161 \\ \hline
0.075 & 0.602 & 0.641 & 1.4286 \\ \hline \hline
\multicolumn{4}{|c|}{\textbf{(b) Maze environment}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.215 & 0.482 & 0.2293 \\ \hline
0.0195 & 0.343 & 0.483 & 1.4473 \\ \hline
0.038 & 0.470 & 0.491 & 1.1686 \\ \hline
0.057 & 0.481 & 0.497 & 0.0985 \\ \hline
0.075 & 0.555 & 0.502 & 0.0040 \\ \hline
\end{tabular}}
\caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator operating inside the Factory environment (a) and the car-like robot operating inside the Maze environment (b).}
\label{t:measureCompareClutter}
\end{table}

\begin{figure}
\centering
\includegraphics[width=0.485\textwidth]{resDubinMaze2}
\caption{The average total discounted rewards achieved by ABT and MHFR in the Maze scenario as the uncertainties increase. Vertical bars are the 95\% confidence intervals.}
\label{f:relValCarMaze}
\end{figure}

\subsubsection{Effects of increasingly cluttered environments. }
To investigate the effects of increasingly cluttered environments on both measures, we ran a set of experiments in which the Car-like robot and the 4DOFs-manipulator operate inside environments with an increasing number of randomly distributed obstacles. For this we generated test scenarios with 5, 10, 15, 20, 25 and 30 obstacles that are uniformly distributed across the environment. For each of these test scenarios, we randomly generated 100 environments. \fref{f:randObstacles}(a)-(b) shows two example environments with 30 obstacles for the Car-like robot and the 4DOFs-manipulator. For this set of experiments we do not take collision dynamics into account. The control and observation errors are fixed to $\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace = 0.038$, which corresponds to the median of the uncertainty values. \tref{t:increasingObstacles} presents the results for SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator (a) and the car-like robot (b). From these results it is clear that, as the environments become increasingly cluttered, the advantage of ABT over MHFR increases, indicating that the obstacles have a significant effect on the Gaussian belief assumption of MHFR. Additionally, SNM\xspace is clearly more sensitive to these effects than MoNG\xspace, whose values remain virtually unaffected by the degree of clutter in the environments.

\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.20\textwidth]{DubinRandom30-cropped} &
\includegraphics[width=0.20\textwidth]{4DOFRandom30-cropped} \\
(a) Car-like robot & (b) 4DOFs-manipulator
\end{tabular}
\caption{Two example scenarios for the Car-like robot (a) and the 4DOFs-manipulator (b) with 30 randomly distributed obstacles.}
\label{f:randObstacles}
\end{figure}

\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{(a) 4DOFs-manipulator with increasing number of obstacles}} \\ \hline \hline
Num obstacles & SNM\xspace & MoNG\xspace & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
5 & 0.359 & 0.650 & 0.0276 \\ \hline
10 & 0.449 & 0.643 & 0.0683 \\ \hline
15 & 0.514 & 0.673 & 0.2163 \\ \hline
20 & 0.527 & 0.683 & 0.2272 \\ \hline
25 & 0.651 & 0.690 & 0.2675 \\ \hline
30 & 0.698 & 0.672 & 0.3108 \\ \hline \hline
\multicolumn{4}{|c|}{\textbf{(b) Car-like robot with increasing number of obstacles}} \\ \hline \hline
Num obstacles & SNM\xspace & MoNG\xspace & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
5 & 0.327 & 0.459 & 0.0826 \\ \hline
10 & 0.387 & 0.473 & 0.1602 \\ \hline
15 & 0.446 & 0.482 & 0.1846 \\ \hline
20 & 0.468 & 0.494 & 0.4813 \\ \hline
25 & 0.529 & 0.489 & 0.5788 \\ \hline
30 & 0.685 & 0.508 & 0.7884 \\ \hline
\end{tabular}}
\caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator (a) and the car-like robot (b) operating inside environments with increasing numbers of obstacles.}
\label{t:increasingObstacles}
\end{table}


\subsubsection{Effects of collision dynamics. }
Intuitively, collision dynamics are highly non-linear effects. Here we investigate SNM\xspace's capability of capturing these effects compared to MoNG\xspace. For this, the robots are allowed to collide with the obstacles.
In other words, colliding states are not terminal, and the dynamic effects of collisions are reflected in the transition model.
For the 4DOFs-manipulator these collisions are modeled as additional constraints (contact points) that are resolved by applying ``correcting velocities'' to the colliding bodies in the opposite direction of the contact normals.

For the Car-like robot, we modify the transition model \eref{eq:carDynamics} to consider collision dynamics such that
\begin{equation}\label{eq:car_coll_transition}
s_{t+1} = \begin{cases}
f_{coll}(s_t, a_t, v_t) & \text{ if } f(s_t, a_t, v_t)\ \text{collides} \\
f(s_t, a_t, v_t) & \text{ else }
\end{cases}
\end{equation}
where
\begin{equation}\label{eq:car_coll_funct}
f_{coll}(s_t, a_t, v_t) = \left [x_t, y_t, \theta_t, -3\upsilon_t \right ]^T
\end{equation}

This transition function causes the robot to slightly ``bounce'' off obstacles upon collision. Two remarks regarding this transition function are in order: The first is that \eref{eq:car_coll_funct} is deterministic. In other words, a collision causes an immediate reduction of the uncertainty regarding the state of the robot. Second, while the collision effects \eref{eq:car_coll_funct} are linear, \eref{eq:car_coll_transition} is not smooth, since the collision dynamics induce discontinuities when the robot operates in the vicinity of obstacles.
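In code, the collision-aware transition \eref{eq:car_coll_transition} simply wraps the nominal step. A minimal sketch, re-using the hypothetical \texttt{car\_step} function from above together with an assumed collision-checking helper:
\begin{verbatim}
import numpy as np

def car_step_with_collisions(s, a, sigma_v, rng, collides):
    """Collision-aware car transition.

    collides(s) is an assumed helper that checks a candidate
    successor state against the environment's obstacles.
    """
    s_next = car_step(s, a, sigma_v, rng)
    if collides(s_next):
        # f_coll: keep the pose, reverse and scale the velocity.
        x, y, theta, v = s
        return np.array([x, y, theta, -3.0 * v])
    return s_next
\end{verbatim}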
\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{(a) Maze environment with collision dynamics}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.425 & 0.490 & 0.3807 \\ \hline
0.0195 & 0.576 & 0.505 & 7.0765 \\ \hline
0.038 & 0.636 & 0.542 & 8.6847 \\ \hline
0.057 & 0.740 & 0.569 & 2.0194 \\ \hline
0.075 & 0.776 & 0.611 & 1.7971 \\ \hline \hline
\multicolumn{4}{|c|}{\textbf{(b) Factory environment with collision dynamics}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.492 & 0.639 & 0.07141 \\ \hline
0.0195 & 0.621 & 0.621 & 0.4007 \\ \hline
0.038 & 0.725 & 0.738 & 0.6699 \\ \hline
0.057 & 0.829 & 0.742 & 1.0990 \\ \hline
0.075 & 0.889 & 0.798 & 1.7100 \\ \hline
\end{tabular}}
\caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the car-like robot operating inside the Maze environment (a) and the 4DOFs-manipulator operating inside the Factory environment (b) while being subject to collision dynamics.}
\label{t:measureCompareColl}
\end{table}


\tref{t:measureCompareColl} shows the comparison between SNM\xspace and MoNG\xspace and the relative value difference between ABT and MHFR for the car-like robot operating inside the Maze environment (a) and the 4DOFs-manipulator operating inside the Factory environment (b) while being subject to collision dynamics. It can be seen that the additional non-linear effects are captured well by SNM\xspace. Interestingly, comparing the results in \tref{t:measureCompareColl}(b) to those in \tref{t:measureCompareClutter}(a), where the 4DOFs-manipulator operates in the same environment without collision dynamics, shows that MoNG\xspace captures the effects of collision dynamics as well, which indicates that collision dynamics have a large effect on the Gaussian assumption made by MHFR. Looking at the relative value difference between ABT and MHFR confirms this: MHFR suffers more than ABT from the increased non-linearity caused by collision dynamics. This effect is aggravated as the uncertainty increases, which is a clear indication that the problem becomes increasingly non-linear with larger uncertainties.
The results for the car-like robot operating in the Maze scenario present a similar picture: comparing the results in \tref{t:measureCompareColl}(a), where collision dynamics are taken into account, to \tref{t:measureCompareClutter}(b) shows that collision dynamics have a significant effect on both SNM\xspace and MoNG\xspace.


\subsubsection{Effects of non-linear observation functions with non-additive errors. }\label{ss:observationComparison}
In the previous experiments we assumed that the observation functions are non-linear functions with additive Gaussian noise, a special class of non-linear observation functions. This class of observation functions has some interesting implications: First of all, the resulting observation distribution remains Gaussian. This in turn means that MoNG\xspace for the observation function evaluates to zero. Second, linearizing the observation function results in a Gaussian distribution with the same mean but a different covariance. We therefore expect that the observation component of SNM\xspace remains small, even for large uncertainties.
To investigate how SNM\xspace reacts to non-linear observation functions with non-additive noise, we ran a set of experiments for the 4DOFs-manipulator operating inside the Factory environment and the car-like robot operating inside the Maze environment, where we replaced both observation functions with non-linear functions with non-additive noise.
For the 4DOFs-manipulator we replaced the observation function defined in \eref{eq:obs4DOF} with
\begin{equation}\label{e:4DOFObsNonAdditive}
o_t = g(s_t + w_t)
\end{equation}
where $w_t \sim N(0, \Sigma_w)$.
In other words, the manipulator only has access to a sensor that measures the position of the end-effector in the workspace.

For the car-like robot we use the following observation function:
\begin{equation}\label{e:obstFunctCarMazeNonAdditive}
o_t = \begin{bmatrix}
\frac{1}{((x_t + w_t^1 - \hat{x}_1)^2 + (y_t + w_t^2 - \hat{y}_1)^2 + 1)}\\
\frac{1}{((x_t + w_t^1 - \hat{x}_2)^2 + (y_t + w_t^2 - \hat{y}_2)^2 + 1)}\\
\upsilon_t + w_t^3
\end{bmatrix}
\end{equation}

where $\left (w_t^1, w_t^2, w_t^3 \right )^T \sim N(0, \Sigma_w)$.
For both robots, we set $\ensuremath{e_T}\xspace = 0.038$.

\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{\textbf{(a) Factory environment with additive observation errors}} \\ \hline \hline
$\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline
\textbf{SNM\xspace} & 0.001 & 0.004 & 0.013 & 0.036 & 0.047 \\ \hline
\textbf{MoNG\xspace} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \hline
\multicolumn{6}{|c|}{\textbf{(b) Factory environment with non-additive observation errors}} \\ \hline \hline
$\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline
\textbf{SNM\xspace} & 0.012 & 0.087 & 0.173 & 0.234 & 0.317 \\ \hline
\textbf{MoNG\xspace} & 0.0 & 0.047 & 0.094 & 0.136 & 0.173 \\ \hline
\end{tabular}}
\caption{Comparison between the observation components of SNM\xspace and MoNG\xspace for the 4DOF-manipulator operating inside the Factory environment with observation function \eref{eq:obs4DOF} (a) and \eref{e:4DOFObsNonAdditive} (b) as the observation errors increase.}
\label{t:compAdditive4DOF}
\end{table}

\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{\textbf{(a) Maze environment with additive observation errors}} \\ \hline \hline
$\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline
\textbf{SNM\xspace} & 0.002 & 0.012 & 0.037 & 0.048 & 0.060 \\ \hline
\textbf{MoNG\xspace} & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline \hline
\multicolumn{6}{|c|}{\textbf{(b) Maze environment with non-additive observation errors}} \\ \hline \hline
$\mathbf{\ensuremath{e_Z}\xspace}$ & 0.001 & 0.0195 & 0.038 & 0.057 & 0.075 \\ \hline
\textbf{SNM\xspace} & 0.083 & 0.086 & 0.101 & 0.198 & 0.207 \\ \hline
\textbf{MoNG\xspace} & 0.0 & 0.012 & 0.032 & 0.053 & 0.075 \\ \hline
\end{tabular}}
\caption{Comparison between the observation components of SNM\xspace and MoNG\xspace for the car-like robot operating inside the Maze environment with observation function \eref{e:obstFunctCarMazeAdditive} (a) and observation function \eref{e:obstFunctCarMazeNonAdditive} (b) as the observation errors increase.}
\label{t:compAdditiveCar}
\end{table}

\tref{t:compAdditive4DOF} shows the values of the observation components of SNM\xspace and MoNG\xspace for the 4DOFs-manipulator operating inside the Factory environment as the observation errors increase. As expected, for additive Gaussian errors MoNG\xspace is zero, whereas SNM\xspace is small but measurable. This shows that SNM\xspace is able to capture the difference in variance between the original and linearized observation functions. For non-additive errors the observation distribution is non-Gaussian; therefore, both measures increase as the observation errors increase.
Interestingly, for both measures the observation components yield significantly smaller values than the transition components. This indicates that the non-linearity of the problem stems mostly from the transition function.
For the car-like robot operating inside the Maze environment we see a similar picture. For the observation function with additive Gaussian errors, \tref{t:compAdditiveCar}(a) shows that MoNG\xspace remains zero for all values of \ensuremath{e_Z}\xspace, whereas SNM\xspace yields a small but measurable value. Again, both measures increase significantly in the non-additive error case in \tref{t:compAdditiveCar}(b).

The question now is how ABT and MHFR perform in both scenarios when observation functions with non-additive Gaussian errors are used. \tref{t:measureCompareNonAdditive}(a) shows this relative value difference for the 4DOFs-manipulator operating inside the Factory environment. It can be seen that as the errors increase, the relative value difference between ABT and MHFR increases significantly compared to the relative value difference shown in \tref{t:measureCompareClutter}(a), where an observation function with additive errors is used. Similarly, for the car-like robot operating inside the Maze scenario using the observation function with non-additive errors, the relative value difference between the two solvers shown in \tref{t:measureCompareNonAdditive}(b) is much larger than in \tref{t:measureCompareClutter}(b).

This is in line with our intuition that non-Gaussian observation functions are more challenging for linearization-based solvers.

\begin{table}
\centering
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{\textbf{(a) Factory environment with non-additive observation errors}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.012 & 0.0 & 0.06992 \\ \hline
0.0195 & 0.0878 & 0.0476 & 0.43861 \\ \hline
0.038 & 0.1732 & 0.0941 & 0.89720 \\ \hline
0.057 & 0.2347 & 0.1363 & 1.46063 \\ \hline
0.075 & 0.3178 & 0.1740 & 8.34832 \\ \hline \hline
\multicolumn{4}{|c|}{\textbf{(b) Maze environment with non-additive observation errors}} \\ \hline \hline
\textbf{$\ensuremath{e_T}\xspace = \ensuremath{e_Z}\xspace$} & \textbf{SNM\xspace} & \textbf{MoNG\xspace} & $\mathbf{\left |\frac{V_{ABT}(b_0) - V_{MHFR}(b_0)}{V_{ABT}(b_0)}\right |}$ \\ \hline
0.001 & 0.0837 & 0.0 & -0.12451 \\ \hline
0.0195 & 0.0868 & 0.0121 & 0.33872 \\ \hline
0.038 & 0.1017 & 0.0321 & 1.41429 \\ \hline
0.057 & 0.1983 & 0.0531 & 8.70111 \\ \hline
0.075 & 0.2072 & 0.0758 & 0.95132 \\ \hline
\end{tabular}}
\caption{Average values of SNM\xspace, MoNG\xspace, and the relative value difference between ABT and MHFR for the 4DOFs-manipulator operating inside the Factory environment (a) and the car-like robot operating inside the Maze environment (b) with non-additive observation errors.}
\label{t:measureCompareNonAdditive}
\end{table}

\subsection{Testing SNM-Planner\xspace}\label{ssec:testing_snm-planner}
In this set of experiments we want to test the performance of SNM-Planner\xspace in comparison with its two component planners, ABT and MHFR.
To this end we tested SNM-Planner\\xspace on three problem scenarios: the Maze scenario for the car-like robot shown in \\fref{f:scenarioCompStudy}(a) and the Factory scenario for the 4DOFs-manipulator. Additionally, we tested SNM-Planner\\xspace on a scenario in which the Kuka iiwa robot operates inside an office environment, as shown in \\fref{f:scenarioCompStudy}(b). As in the Factory scenario, the robot has to reach a goal area while avoiding collisions with the obstacles. The planning time per step is 8s in this scenario. For the SNM\\xspace-threshold we chose 0.5. Here we set $\\ensuremath{e_T}\\xspace=\\ensuremath{e_Z}\\xspace=0.038$.\n\n\\begin{table}\n\\centering\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n\\textbf{Planner} & \\textbf{Car-like robot} & \\textbf{4DOFs-manipulator} & \\textbf{Kuka iiwa} \\\\ \\hline \\hline\nABT & -150.54 $\\pm$ 40.6 & 801.78 $\\pm$ 25.7 & 498.33 $\\pm$ 30.6 \\\\ \\hline\nMHFR & -314.25 $\\pm$ 31.4 & 345.82 $\\pm$ 60.8 & -163.21 $\\pm$ 29.6 \\\\ \\hline\nSNM-Planner\\xspace & \\textbf{14.68 $\\pm$ 46.3} & \\textbf{833.17 $\\pm$ 13.4} & \\textbf{620.67 $\\pm$ 35.7} \\\\ \\hline\n\\end{tabular}}\n\\caption{Average total discounted reward and $\\pm$ 95\\% confidence interval over 1,000 simulation runs. The proportion of ABT being used in the Maze, Factory and Office scenarios is 37.85\\%, 56.43\\% and 42.33\\%, respectively.}\n\\label{t:snmPlannerResults}\n\\end{table}\n\nThe results in \\tref{t:snmPlannerResults} indicate that SNM-Planner\\xspace is able to approximately identify when it is beneficial to use a linearization-based solver and when a general solver should be used. In all three scenarios, SNM-Planner\\xspace outperforms the two component planners. In the Maze scenario, the difference between SNM-Planner\\xspace and the component planners is significant. The reason is that MHFR is well suited to compute a long-term strategy, as it constructs nominal trajectories from the current state estimate all the way to the goal, whereas the planning horizon of ABT is limited by the depth of the search tree. However, in the proximity of obstacles, the Gaussian belief assumption of MHFR is no longer valid, and careful planning is required to avoid collisions with the obstacles. In general, ABT handles these situations better than MHFR. SNM-Planner\\xspace combines the benefits of both planners and alleviates their shortcomings. \\fref{f:DubinSNMSamples} shows state samples for which the SNM\\xspace-values exceed the given threshold of 0.5. Many of these samples are clearly clustered around obstacles. In other words, when the support set of the current belief (i.e. the subset of the state space that is covered by the belief particles) lies in open areas, MHFR is used to drive the robot towards the goal, whereas in the proximity of obstacles, ABT is used to compute a strategy that avoids collisions with the obstacles.\n\nA similar behavior was observed in the KukaOffice environment. During the early planning steps, when the robot operates in the open area, MHFR is well suited to drive the end-effector towards the goal area, but near the narrow passage at the back of the table, ABT in general computes better motion strategies.
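The per-step solver selection just described can be summarized by the following sketch. It is a simplification for illustration only: \\texttt{approximate\\_snm} is a hypothetical stand-in for the local approximation of SNM\\xspace computed over the support set of the current belief, not our actual implementation.
\\begin{verbatim}
import numpy as np

def approximate_snm(particles):
    # Hypothetical placeholder: SNM-Planner computes a local approximation
    # of SNM over the support set of the belief. As a dummy proxy we return
    # the particle spread, which is NOT the actual measure.
    return float(np.std(particles))

def select_solver(particles, snm_threshold=0.5):
    # Large local non-linearity (typically near obstacles): use the general
    # POMDP solver ABT; otherwise use the linearization-based solver MHFR.
    return "ABT" if approximate_snm(particles) > snm_threshold else "MHFR"

print(select_solver(np.array([0.10, 0.12, 0.09])))  # prints "MHFR"
\\end{verbatim}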
Again, SNM-Planner\\xspace combines both approaches to compute better motion strategies than each of the component planners alone.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.5\\columnwidth]{DubinSNMSamples}\n\\caption{State samples in the Maze scenario for which the approximated SNM\\xspace value exceeds the chosen threshold of 0.5.}\n\\label{f:DubinSNMSamples}\n\\end{figure}\n\n\\subsubsection{Sensitivity of SNM-Planner\\xspace}\nIn this experiment we test how sensitive the performance of SNM-Planner\\xspace is to the choice of the SNM\\xspace-threshold. Recall that SNM-Planner\\xspace uses this threshold to decide, based on a local approximation of SNM\\xspace, which solver to use for the policy computation. For small thresholds, SNM-Planner\\xspace favors ABT, whereas for large thresholds, MHFR is favored.\n\nFor this experiment we test SNM-Planner\\xspace on the Factory problem (\\fref{f:scenarioCompStudy}(b)) with multiple values for the SNM\\xspace-threshold, ranging from 0.1 to 0.9. For each threshold value we estimate the average total expected discounted reward achieved by SNM-Planner\\xspace using 1,000 simulation runs. Here we set $\\ensuremath{e_T}\\xspace=\\ensuremath{e_Z}\\xspace=0.038$.\n\n\\tref{t:SNMPlannerSensitivity} summarizes the results. It can be seen that the choice of the threshold can affect the performance of SNM-Planner\\xspace, particularly for values on either end of the spectrum (very small or very large values), where SNM-Planner\\xspace favors only one of the component solvers. However, between the threshold values of 0.2 and 0.5 the results are fairly consistent, which indicates that there is a range of SNM\\xspace-threshold values for which SNM-Planner\\xspace performs well.\n\\begin{table}[htb]\n\\centering\n\\resizebox{\\columnwidth}{!}{%\n\\begin{tabular}{|c|c|c|}\n\\hline\n\\textbf{SNM-Threshold} & \\textbf{Avg. total discounted reward} & \\textbf{\\% ABT used}\\\\\n\\hline\n\\hline\n0.1 & 789.43 $\\pm$ 18.4 & 100.0 \\\\\n\\hline\n0.2 & 794.69 $\\pm$ 15.3 & 95.3 \\\\\n\\hline\n0.3 & 801.82 $\\pm$ 14.2 & 89.8 \\\\\n\\hline\n0.4 & 834.32 $\\pm$ 13.3 & 65.2\\\\\n\\hline\n0.5 & 833.17 $\\pm$ 13.4 & 59.6\\\\\n\\hline\n0.6 & 725.71 $\\pm$ 19.6 & 42.7 \\\\\n\\hline\n0.7 & 622.39 $\\pm$ 18.5 & 30.6 \\\\\n\\hline\n0.8 & 561.02 $\\pm$ 29.4 & 21.5 \\\\\n\\hline\n0.9 & 401.79 $\\pm$ 39.6 & 7.8 \\\\\n\\hline\n\\end{tabular}}\n\\caption{Average total discounted reward and 95\\% confidence intervals of SNM-Planner\\xspace on the Factory problem for varying SNM\\xspace-threshold values. The average is collected over 1,000 simulation runs. The last column shows the percentage of ABT being used as the component solver.\\vspace{-9pt}}\n\\label{t:SNMPlannerSensitivity}\n\\end{table}\n\\vspace{-9pt}\n\n\\section{Summary and Future Work}\\label{sec:discussion}\nThis paper presents our preliminary work in identifying the suitability of linearization for motion planning under uncertainty. To this end, we present a general measure of non-linearity, called Statistical-distance-based Non-linearity Measure\\xspace (SNM\\xspace), which is based on the distance between the distributions that represent the system's motion and sensing models and their linearized versions.
Comparison studies with one of the state-of-the-art non-linearity measures indicate that SNM\\xspace is better suited to taking obstacles into account when measuring the effectiveness of linearization.\n\nWe also propose a simple on-line planner that uses a local estimate of SNM\\xspace to select whether to use a general POMDP solver or a linearization-based solver for robot motion planning under uncertainty. Experimental results indicate that our simple planner can appropriately decide where linearization should be used and generates motion strategies that are comparable to or better than those of each of the component planners.\n\nFuture work abounds. For instance, the question of finding a better measure remains open. The total variation distance relies on computing a maximization, which is often difficult to estimate. Statistical distance functions that rely on expectations exist and can be computed faster. How suitable are these functions as non-linearity measures? Furthermore, our upper bound result is relatively loose and can only be applied as a sufficient condition to identify if linearization will perform well. It would be useful to find a tighter bound that remains general enough for the various linearization and distribution approximation methods in robotics.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:3-universal-priors}\n\nIn the study of universal induction, we consider an abstraction of the world in the form of a binary string. Any sequence from a finite set of possibilities can be expressed in this way, and that is precisely what contemporary computers are capable of analysing. An ``environment'' assigns probabilities to (possibly infinite) binary strings. Typically, the class $\\mathcal{M}$ of enumerable semimeasures is considered. Given the equivalence between $\\mathcal{M}$ and the set of monotone Turing machines (Lemma \\ref{lem:measures-TM}), this choice reflects the expectation that the environment can be computed by (or at least approximated by) a Turing machine.\n\nUniversal induction is an ideal Bayesian induction mechanism assigning probabilities to possible continuations of a binary string. In order to do this, a prior distribution, termed a universal prior, is defined on binary strings. This prior has the property that the Bayesian mechanism converges to the true (generating) environment for \\textit{any} environment $\\mu$ in $\\mathcal{M}$, given sufficient evidence.\n\nThere are three popular ways of defining a universal prior in the literature: as Solomonoff's prior \\cite{Solomonoff:64,Zvonkin:70,Hutter:04uaibook}, as a universal mixture \\cite{Zvonkin:70,Hutter:04uaibook,Hutter:07uspx}, or as a universally dominant semimeasure \\cite{Hutter:04uaibook,Hutter:07uspx}. Briefly, a universally dominant semimeasure is one that dominates every other semimeasure in $\\mathcal{M}$ (Definition \\ref{def:u-prior-dominant}), a universal mixture is a mixture of all semimeasures in $\\mathcal{M}$ with non-zero coefficients (Definition \\ref{def:u-prior-mixture}), and a Solomonoff prior assigns the probability that a (chosen) monotone universal Turing machine outputs a string given random input (Definition \\ref{def:u-prior-Solomonof}).
These and other relevant concepts are defined in more detail in Section \\ref{sec:definitions}.\n\nSolomonoff's construction and the universal mixture construction have been known for many years and they are often used interchangeably in textbooks and lecture notes. Their equivalence has been shown in the sense that they dominate each other \\cite{Zvonkin:70,Hutter:04uaibook,Li:08}. We extend this result in Section \\ref{sec:UTM-is-mixture}, showing that they in fact define exactly the same class of priors.\n\nFurther, it is trivial to see that both constructions produce universally dominant semimeasures. The converse is, however, not true: universally dominant semimeasures form a strictly larger class. We provide a simple example to demonstrate this in Section \\ref{sec:dominant-is-not-universal}.\n\nThese results are technically relatively undemanding. However, given their fundamental nature, the fact that to our knowledge they have not been published to date, and their relevance to Ray Solomonoff's famous work on universal induction, we present them here.\n\nThe following diagram summarises these inclusion relations:\n\n\\begin{figure}\n\\begin{displaymath}\n \\xymatrix{\n&\\text{Universally Dominant} \\ar@\\/_\\/@{<-}[ddl]_{Lemma \\ref{lem:u-mix-is-dominant}} \\ar@\\/^\\/@{.x}[ddl]^{Theorem\\; \\ref{thm:dom-not-mixture}}&\\\\\n&&\\\\\n\\text{Universal Mixture}\\ar@\\/_\\/@{<->}[rr]^{Theorem\\; \\ref{thm:UTM-eq-mixture}}\n && \\text{Solomonoff Prior} \\ar@\\/_\\/[uul]_{Corollary\\; \\ref{corol:UTM-is-dominant}}\n }\n\\end{displaymath}\n\\caption{Inclusion relations between the three classes of universal priors, together with the results establishing them.}\n\\end{figure}\n\n\\section{Definitions}\\label{sec:definitions}\n\nWe represent the sets of finite and infinite binary strings as $\\mathbb{B}^*$ and $\\mathbb{B}^\\infty$ respectively. $\\epsilon$ denotes the empty string, $xb$ the concatenation of strings $x$ and $b$, and $\\ell(x)$ the length of a string $x$. A cylinder set, the set of all infinite binary strings which start with some $x\\in\\mathbb{B}^*$, is denoted $\\Gamma_x$.\n\nA string $x$ is said to be a prefix of a string $y$ if $y=xz$ for some string $z$; we then write $x\\sqsubseteq y$, and $x\\sqsubset y$ if the prefix is proper (i.e. $z\\ne\\epsilon$). We denote the maximal prefix-free subset of a set of finite strings $\\mathcal{P}$ by $\\lfloor \\mathcal{P} \\rfloor$. It can be obtained by successively removing elements that have a proper prefix in $\\mathcal{P}$. The uniform measure of a set of strings is denoted $| \\mathcal{P} |:=\\sum_{p\\in\\lfloor\\mathcal{P}\\rfloor}2^{-\\ell(p)}$. This is the measure of the set of continuations of elements of $\\mathcal{P}$, considered as binary expansions of real numbers in $[0,1]$.\n\nThere have been several definitions of monotone Turing machines in the literature \\cite{Li:08}; however, we choose the one that is now widely accepted \\cite{Solomonoff:64,Zvonkin:70,Hutter:04uaibook,Li:08} and has the useful and intuitive property stated in Lemma \\ref{lem:measures-TM}.\n\n\\begin{definition}\nA monotone Turing machine is a computer with binary (one-way) input and output tapes, a bidirectional binary work tape (with read\\/write heads as appropriate) and a finite state machine to determine its actions given input and work tape values. The input tape is read-only; the output tape is write-only.\n\\end{definition}\n\nThe definitions of a universal Turing machine in the literature are somewhat varied or unclear.
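As a concrete illustration of the $\\lfloor \\cdot \\rfloor$ and $| \\cdot |$ notation introduced above, the following small sketch (our own illustration, not part of the formal development) computes the maximal prefix-free subset and the uniform measure of a finite set of strings.
\\begin{verbatim}
def maximal_prefix_free(strings):
    # Keep exactly the strings with no proper prefix in the set; this is
    # the maximal prefix-free subset, denoted by the floor brackets.
    s = set(strings)
    return {x for x in s if not any(x[:k] in s for k in range(len(x)))}

def uniform_measure(strings):
    # |P| = sum of 2^-len(p) over the maximal prefix-free subset of P.
    return sum(2.0 ** -len(p) for p in maximal_prefix_free(strings))

# "01" is discarded because "0" is a proper prefix of it,
# so |P| = 2^-1 + 2^-2 = 0.75.
print(uniform_measure({"0", "01", "11"}))
\\end{verbatim}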
Monotone universal Turing\nmachines are relevant here for defining the Solomonoff prior.\nIn the algorithmic information theory literature, most authors\nare concerned with the explicit construction of a single\nreference universal machine\n\\cite{Hutter:04uaibook,Li:08,Solomonoff:64,Turing:36,Zvonkin:70}.\nA more general definition is left to a relatively vague\nstatement along the lines of ``a Turing machine that can\nemulate any other Turing machine''. The definition below\nreflects the typical construction used and is often referred to\nas \\textit{universal by adjunction}\n\\cite{Downey:10book,Figueira:06}.\n\n\\begin{definition}[Monotone Universal Turing Machine]\\label{def:UTM}\nA monotone universal Turing machine is a monotone Turing\nmachine $U$ for which there exist:\n\\begin{enumerate}\n\\item an enumeration $\\{T_i:i\\in\\mathbb{N}\\}$ of all monotone Turing machines\n\\item a computable uniquely decodable self-delimiting code $I:\\mathbb{N}\\rightarrow\\mathbb{B}^*$\n\\end{enumerate}\nsuch that the programs for $U$ that produce output coincide\nwith the set $\\{I(i)p:i\\in\\mathbb{N},\\;p\\in\\mathbb{B}^*\\}$ of\nconcatenations of $I(i)$ and $p$, and\n\\[\n U(I(i)p) = T_i(p)\\quad\\forall\\, i\\in\\mathbb{N}\\;,\\;p\\in\\mathbb{B}^*\n\\]\n\\end{definition}\n\nA key concept in algorithmic information theory is the\nassignment of probability to a string $x$ as the probability\nthat some monotone Turing machine produces output beginning\nwith $x$ given unbiased coin flip input. This approach was used\nby Solomonoff to construct a universal prior\n\\cite{Solomonoff:64}. To better understand the properties of\nsuch a function, we will need the concepts of enumerability\nand semimeasures:\n\n\\begin{definition}\nA function or number $\\phi$ is said to be\n\\textbf{\\emph{enumerable}} or \\textbf{\\emph{lower\nsemicomputable}} (these terms are synonymous) if it can be\napproximated from below (pointwise) by a monotone increasing\nset $\\{\\phi_i:i\\in\\mathbb{N}\\}$ of finitely computable\nfunctions\/numbers, all calculable by a single Turing machine.\nWe write $\\phi_i\\nearrow\\phi$. Finitely computable\nfunctions\/numbers can be computed in finite time by a Turing\nmachine.\n\\end{definition}\n\n\\begin{definition}\nA \\textbf{\\emph{semimeasure}} is a ``defective'' probability\nmeasure on the $\\sigma$-algebra generated by cylinder sets in\n$\\mathbb{B}^\\infty$. We write $\\mu(x)$ for $x\\in\\mathbb{B}^*$\nas shorthand for $\\mu(\\Gamma_x)$. A probability measure must\nsatisfy $\\mu(\\epsilon)=1$,\n$\\mu(x)=\\sum_{b\\in\\mathbb{B}}\\mu(xb)$. A semimeasure allows a\nprobability ``gap'': $\\mu(\\epsilon)\\le1$ and\n$\\mu(x)\\ge\\sum_{b\\in\\mathbb{B}}\\mu(xb)$. $\\mathcal{M}$ denotes\nthe set of all enumerable semimeasures.\n\\end{definition}\n\nThe following definition explicates the relationship between\nmonotone Turing machines and enumerable semimeasures.\n\n\\begin{definition}[Solomonoff semimeasure]\n\\label{def:lambda_T}\nFor each monotone Turing machine $T$ we associate a semimeasure\n\\[\n \\lambda_T(x) := \\sum_{\\lfloor p:T(p)=x*\\rfloor}2^{-\\ell(p)} = |T^{-1}(x*)|\n\\]\nwhere $\\lfloor \\mathcal{P} \\rfloor$ indicates the maximal\nprefix-free subset of a set of finite strings $\\mathcal{P}$,\n$T(p)=x*$ indicates that $x$ is a prefix of (or equal to)\n$T(p)$ and $\\ell(p)$ is the length of $p$.\nIf there are no such programs, we set $\\lambda_T(x):=0$. 
[See \\cite{Li:08}, Definition 4.5.4.]\n\\end{definition}\n\nNote that this is the probability that $T$ outputs a string starting with $x$ given unbiased coin flip input. To see this, consider the uniform measure given by $\\lambda(\\Gamma_p):=2^{-\\ell(p)}$. This is the probability of obtaining $p$ from unbiased coin flips. $\\lambda_T(x)$ is the uniform measure of the set of programs for $T$ that produce output starting with $x$, i.e. the probability of obtaining one of those programs from unbiased coin flips. Note also that, since $T$ is monotone, this set consists of a union of disjoint cylinder sets $\\{\\Gamma_p:p\\in\\lfloor q:T(q)=x*\\rfloor\\}$. By dovetailing a search for such programs and a lower approximation of the uniform measure $\\lambda$, we can see that $\\lambda_T$ is enumerable. See Definition 4.5.4 (p.299) and Lemma 4.5.5 (p.300) in \\cite{Li:08}.\n\nAn important lemma in this discussion establishes the equivalence between the set of all monotone Turing machines and the set $\\mathcal{M}$ of all enumerable semimeasures. It is equivalent to Theorem 4.5.2 in \\cite{Li:08} (page 301) with a small correction: $\\lambda_T(\\epsilon)=1$ for any $T$ by construction, but $\\mu(\\epsilon)$ may not be $1$, so this case must be excluded.\n\n\\begin{lemma}\\label{lem:measures-TM}\nA semimeasure $\\mu$ is lower semicomputable if and only if there is a monotone Turing machine $T$ such that $\\mu=\\lambda_T$ except on $\\Gamma_\\epsilon \\equiv \\mathbb{B}^\\infty$ and $\\mu(\\epsilon)$ is lower semicomputable.\n\\end{lemma}\n\nWe are now equipped to formally define the three formulations of a universal prior:\n\n\\begin{definition}[Solomonoff prior]\n\\label{def:u-prior-Solomonof}\nThe Solomonoff prior for a given universal monotone Turing machine $U$ is\n\\[\n M:=\\lambda_U\n\\]\nThe class of all Solomonoff priors we denote $\\mathcal{U}_M$.\n\\end{definition}\n\n\\begin{definition}[Universal mixture]\\label{def:u-prior-mixture}\nA universal mixture is a mixture $\\xi$ with non-zero positive weights over an enumeration $\\{\\nu_i:i\\in\\mathbb{N}, \\nu_i\\in\\mathcal{M}\\}$ of all enumerable semimeasures $\\mathcal{M}$:\n\\[\n \\xi = \\sum_{i\\in\\mathbb{N}}w_i\\nu_i\\quad:\\quad \\mathbb{R}\\ni w_i>0\\;,\\;\\sum_{i\\in\\mathbb{N}}w_i\\le1\n\\]\nWe require the weights $w_{()}$ to be a lower semicomputable function. The mixture $\\xi$ is then itself an enumerable semimeasure, i.e. $\\xi\\in\\mathcal{M}$. The class of all universal mixtures we denote $\\mathcal{U}_\\xi$.\n\\end{definition}\n\n\\begin{definition}[Universally dominant semimeasure]\\label{def:u-prior-dominant}\nA universally dominant semimeasure is an enumerable semimeasure $\\delta$ for which there exists a real number $c_\\mu>0$ for each enumerable semimeasure $\\mu$ satisfying:\n\\[\n \\delta(x) \\ge c_\\mu\\mu(x)\\quad\\forall x\\in\\mathbb{B}^*\n\\]\nThe class of all universally dominant semimeasures we denote $\\mathcal{U}_\\delta$.\n\\end{definition}\n\nDominance implies absolute continuity: every enumerable semimeasure is absolutely continuous with respect to a universally dominant enumerable semimeasure. The converse (absolute continuity implies dominance) is, however, not true.\n\n\\section{Equivalence between Solomonoff priors and universal mixtures}\\label{sec:UTM-is-mixture}\n\nWe show here that every Solomonoff prior $M\\in\\mathcal{U}_M$ can be expressed as a universal mixture (i.e. $M\\in\\mathcal{U}_\\xi$) and vice versa.
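Before turning to the proofs, the lower semicomputability of $\\lambda_T$ can be made concrete by the following toy sketch. It is our own illustration under the simplifying assumption of a total toy machine given as a function; a genuine dovetailing argument must also interleave a time bound, since monotone machines need not halt.
\\begin{verbatim}
from itertools import product

def lambda_T_lower(T, x, n):
    # Lower approximation of lambda_T(x): enumerate all programs of length
    # at most n, keep only the minimal ones whose output starts with x
    # (this realizes the maximal prefix-free subset), and sum 2^-len(p).
    # As n grows the value increases monotonically towards lambda_T(x).
    total = 0.0
    for length in range(n + 1):
        for bits in product("01", repeat=length):
            p = "".join(bits)
            if T(p).startswith(x):
                # Count p only if no proper prefix of p already suffices.
                if not any(T(p[:k]).startswith(x) for k in range(length)):
                    total += 2.0 ** -length
    return total

identity = lambda p: p  # a trivially monotone toy machine
print(lambda_T_lower(identity, "01", 6))  # 0.25 = 2^-len("01")
\\end{verbatim}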
In other words the class\nof Solomonoff priors and the class of universal mixtures are\nidentical: $\\mathcal{U}_M=\\mathcal{U}_\\xi$.\n\nPreviously, it was known\n\\cite{Zvonkin:70,Hutter:04uaibook,Li:08} that a Solomonoff\nprior $M$ and a universal mixture $\\xi$ are equivalent up to\nmultiplicative constants\n\\begin{align}\n M(x) &\\le c_1\\xi(x) &\\forall x\\in\\mathbb{B}^* \\notag\\\\\n \\xi(x) &\\le c_2M(x) &\\forall x\\in\\mathbb{B}^* \\notag\n\\end{align}\nThe result we present is stronger, stating that the two classes\nare exactly identical. Again we exclude the case $x=\\epsilon$\nas $M(\\epsilon)$ is always one for a Solomonoff prior, but\n$\\xi(\\epsilon)$ is never one for a universal mixture $\\xi$ (as\nthere are $\\mu\\in\\mathcal{M}$ with $\\mu(\\epsilon)<1$).\n\n\\begin{lemma} \\label{thm:UTM-is-mixture}\nFor any monotone universal Turing machine $U$ the associated\n\\linebreak Solomonoff prior $M$ can be expressed as a universal\nmixture. i.e. there exists an enumeration\n$\\{\\nu_i\\}_{i=1}^\\infty$ of the set of enumerable semimeasures\n$\\mathcal{M}$ and computable function\n$w_{()}:\\mathbb{N}\\rightarrow\\mathbb{R}$ such that\n\\[\n M(x)=\\sum_{i\\in\\mathbb{N}} w_i\\nu_i(x)\\quad\\forall x\\in\\mathbb{B}^*\\backslash\\epsilon\n\\]\nwith $\\sum_{i\\in\\mathbb{N}} w_i\\le 1$ and $w_i>0\\;\\forall\ni\\in\\mathbb{N}$. In other words the class of Solomonoff priors\nis a subset of the class of universal mixtures:\n$\\mathcal{U}_M\\subseteq\\mathcal{U}_\\xi$.\n\\end{lemma}\n\\begin{proof}\nWe note that all programs that produce output from $U$ are\nuniquely of the form $q=I(i)p$. This allows us to split the sum\nin (\\ref{eqn:split-sum}) below.\n\\begin{align}\nM(x) &= \\sum_{\\lfloor q:U(q)=x*\\rfloor}2^{-\\ell(q)} &\\notag\\\\\n&=\\sum_{i\\in\\mathbb{N}}\\sum_{\\lfloor p:U(I(i)p)=x*\\rfloor}2^{-\\ell(I(i)p)} &\\label{eqn:split-sum} \\\\\n&=\\sum_{i\\in\\mathbb{N}}2^{-l(I(i))}\\sum_{\\lfloor p:T_i(p)=x*\\rfloor}2^{-\\ell(p)} & \\label{eqn:UTM-is-mix-take-prefix} \\notag\\\\\n&=\\sum_{i\\in\\mathbb{N}}2^{-l(I(i))}\\lambda_{T_i}(x) &\\notag\n\\end{align}\n\nClearly $2^{-l(I(i))}>0$ and is a computable function of $i$.\nSince $I$ is a self-delimiting code it must be prefix free, and\nso satisfy Kraft's inequality:\n\\begin{equation}\n \\sum_{i\\in\\mathbb{N}}2^{-l(I(i))} \\le 1 \\notag\n\\end{equation}\n\nLemma \\ref{lem:measures-TM} tells us that the $\\lambda_{T_i}$\ncover every enumerable semimeasure if $\\epsilon$ is excluded\nfrom their domain, which shows that\n$\\sum_{i\\in\\mathbb{N}}2^{-l(I(i))}\\lambda_{T_i}(x)$ is a\nuniversal mixture. This completes the proof.\n\\end{proof}\n\n\\begin{corollary} \\label{corol:UTM-is-dominant}\n\\cite{Zvonkin:70} The Solomonoff prior $M$ for a universal\nmonotone Turing machine $U$ is universally dominant. Thus, the\nclass of Solomonoff priors is a subset of the class of\nuniversally dominant lower semicomputable semimeasures:\n$\\mathcal{U}_M\\subseteq\\mathcal{U}_\\delta$.\n\\end{corollary}\n\\begin{proof}\nFrom Lemma \\ref{thm:UTM-is-mixture} we have for each\n$\\nu\\in\\mathcal{M}$ there exists $j\\in\\mathbb{N}$ with\n$\\nu=\\lambda_{T_j}$ and for all $x\\in\\mathbb{B}^*$:\n\\begin{align*}\n M(x) &= \\sum_{i\\in\\mathbb{N}}2^{-l(I(i))}\\lambda_{T_i}(x) \\\\\n &\\ge 2^{-l(I(j))}\\nu(x)\n\\end{align*}\nas required.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:u-mix-is-dominant}\nEvery universal mixture $\\xi$ is universally dominant. 
Thus, the class of universal mixtures is a subset of the class of universally dominant lower semicomputable semimeasures: $\\mathcal{U}_\\xi\\subseteq\\mathcal{U}_\\delta$.\n\\end{lemma}\n\\begin{proof}\nThis follows from a similar argument to that in Corollary \\ref{corol:UTM-is-dominant}.\n\\end{proof}\n\n\\begin{lemma} \\label{thm:mixture-is-UTM}\nFor every universal mixture $\\xi$ there exists a universal monotone Turing machine and associated Solomonoff prior $M$ such that\n\\[\n\\xi(x)=M(x)\\quad\\forall x\\in\\mathbb{B}^*\\backslash\\epsilon\n\\]\nIn other words the class of universal mixtures is a subset of the class of Solomonoff priors: $\\mathcal{U}_\\xi\\subseteq\\mathcal{U}_M$.\n\\end{lemma}\n\n\\begin{proof}\nFirst note that by Lemma \\ref{lem:measures-TM} we can find (by dovetailing possible repetitions of some indices) parallel enumerations $\\{\\nu_i\\}_{i\\in\\mathbb{N}}$ of $\\mathcal{M}$ and $\\{T_i\\}_{i\\in\\mathbb{N}}$ of all monotone Turing machines with $\\lambda_{T_i}=\\nu_i$, and a computable weight function $w_{()}$ with\n\\[\n \\xi = \\sum_{i\\in\\mathbb{N}} w_i\\nu_i \\quad , \\quad \\sum_{i\\in\\mathbb{N}}w_i \\le 1\n\\]\n\nTake a computable index and lower approximation $\\phi(i,t)\\nearrow w_i$ with $\\phi(i,0)=0$, so that\n\\begin{align}\nw_i &= \\sum_t|\\phi(i,t+1)-\\phi(i,t)| \\\\\n&= \\sum_j 2^{-k_{ij}}\\\\\ni,j&\\mapsto k_{ij} \\;\\text{computable}\n\\end{align}\nThe K-C theorem \\cite{Levin:71,Schnorr:73,Chaitin:75,Downey:10book} says that for any computable sequence of pairs $ \\{k_{ij}\\in\\mathbb{N},\\; \\tau_{ij} \\in\\mathbb{B}^*\\}_{i,j\\in\\mathbb{N}}$ with $\\sum 2^{-k_{ij}}\\le 1$, there exists a prefix Turing machine $P$ and strings $\\{\\sigma_{ij}\\in\\mathbb{B}^*\\}$ such that\n\\begin{equation}\n\\ell(\\sigma_{ij})=k_{ij}\\;,\\;P(\\sigma_{ij})=\\tau_{ij}\n\\end{equation}\nChoosing distinct $\\tau_{ij}$ and the existence of the prefix machine $P$ ensures that $\\{\\sigma_{ij}\\}$ is prefix free. We now define a monotone Turing machine $U$. For strings of the form $\\sigma_{ij}p$ for some $i,j$:\n\\begin{equation}\n U(\\sigma_{ij}p) := T_i(p)\n\\end{equation}\nFor strings not of this form, $U$ produces no output. $U$ inherits monotonicity from the $T_i$, and since $\\{T_i\\}_{i\\in\\mathbb{N}}$ enumerates all monotone Turing machines, $U$ is universal. The Solomonoff prior associated with $U$ is then:\n\\begin{align}\n\\lambda_U(x) &= |U^{-1}(x*)| \\\\\n&= \\sum_{i,j}2^{-\\ell(\\sigma_{ij})}|T_i^{-1}(x*)| \\\\\n&= \\sum_i (\\sum_j2^{-k_{ij}})\\lambda_{T_i}(x) \\\\\n&= \\sum_i w_i \\nu_i(x) \\\\\n&= \\xi(x)\n\\end{align}\n\\end{proof}\n\nThe main theorem for this section is now trivial:\n\n\\begin{theorem} \\label{thm:UTM-eq-mixture}\nThe classes $\\mathcal{U}_M$ of Solomonoff priors and $\\mathcal{U}_\\xi$ of universal mixtures are exactly equivalent. In other words, the two constructions define exactly the same set of priors: $\\mathcal{U}_M=\\mathcal{U}_\\xi$.\n\\end{theorem}\n\\begin{proof}\nFollows directly from Lemma \\ref{thm:UTM-is-mixture} and Lemma \\ref{thm:mixture-is-UTM}.\n\\end{proof}\n\n\\section{Not all universally dominant enumerable semimeasures are universal mixtures}\\label{sec:dominant-is-not-universal}\n\nIn this section, we see that the relative ``gap'' in the semimeasure inequality of a universal mixture at $x$ must be at least $c\\,2^{-K(\\ell(x))}$ for some constant $c>0$ independent of $x$, and that there are universally dominant enumerable semimeasures that fail this requirement.
This shows that not all universally dominant enumerable semimeasures are universal mixtures.\n\n\\begin{lemma}\n\\label{lem:u-mix-no-gaps} For every Solomonoff prior $M$ and associated universal monotone Turing machine $U$, there exists a real constant $c>0$ such that\n\\[\n \\frac{M(x)-M(x0)-M(x1)}{M(x)}\\ge c\\,2^{-K(\\ell(x))}\\quad\\forall x\\in\\mathbb{B}^*\n \\]\nwhere the Kolmogorov complexity $K(n)$ of an integer $n$ is the length of the shortest prefix code for $n$.\n\\end{lemma}\n\\begin{proof}\nFirst, note that $M(x)-M(x0)-M(x1)$ measures the set of programs $U^{-1}(x)$ for which $U$ outputs $x$ and no more.\nConsider the set\n\\[\n \\mathcal{P}:=\\{ql'p\\,|\\,p\\in \\mathbb{B}^*,\\,U(p)\\sqsupseteq x\\}\n\\]\nwhere $l'$ is a shortest prefix code for $\\ell(x)$ and $q$ is a program such that $U(q{l}'p)$ executes $U(p)$ until $\\ell(x)$ bits are output, then stops.\n\nNow, for each $r=q{l}'p\\in\\mathcal{P}$ we have $U(r)=x$ since $U(p)\\sqsupseteq x$ and $q$ executes $U(p)$ until $\\ell(x)$ bits are output. Thus $\\mathcal{P}\\subseteq U^{-1}(x)$ and\n\\begin{equation}\n |\\mathcal{P}|\\le |U^{-1}(x)| \\label{eqn:p-u-1x}\n\\end{equation}\nAlso $\\mathcal{P}=q{l}'U^{-1}(x*):=\\{s=q{l}'p\\,|\\,p\\in U^{-1}(x*)\\}$, and so\n\\begin{equation}\n |\\mathcal{P}|=2^{-\\ell(q{l}')}|U^{-1}(x*)| \\label{eqn:p-u-1x*}\n\\end{equation}\nCombining (\\ref{eqn:p-u-1x}) and (\\ref{eqn:p-u-1x*}), and noting that $M(x)-M(x0)-M(x1)=|U^{-1}(x)|$ and $M(x)=|U^{-1}(x*)|$, we obtain\n\\begin{align}\nM(x)-M(x0)-M(x1) &= |U^{-1}(x)| \\notag \\\\\n&\\ge |\\mathcal{P}| \\notag\\\\\n&= 2^{-\\ell(q{l}')}|U^{-1}(x*)| \\notag \\\\\n&= 2^{-\\ell(q)}2^{-K(\\ell(x))}M(x) \\notag\n\\end{align}\nSetting $c:=2^{-\\ell(q)}$ this proves the result.\n\\end{proof}\n\n\\begin{theorem} \\label{thm:dom-not-mixture}\nNot all universally dominant enumerable semimeasures are universal mixtures: $\\mathcal{U}_\\xi\\subset\\mathcal{U}_\\delta$.\n\\end{theorem}\n\n\\begin{proof}\nTake some universally dominant semimeasure $\\delta$, then define\n$\n\\delta'(\\epsilon):= 1,\\;\n\\delta'(0)=\n\\delta'(1):=\\frac{1}{2},\\;\n\\delta'(bx):=\\frac{1}{2}\\delta(bx)$ for $b\\in\\mathbb{B}$, $x\\in\\mathbb{B}^*\\backslash\\epsilon\n$. $\\delta'$ is clearly a universally dominant enumerable semimeasure with $\\delta'(0)+\\delta'(1)=\\delta'(\\epsilon)$, i.e. with zero relative gap at $\\epsilon$. If $\\delta'$ were a universal mixture then, by Lemma \\ref{thm:mixture-is-UTM}, it would agree on $\\mathbb{B}^*\\backslash\\epsilon$ with some Solomonoff prior $M$, which satisfies $M(\\epsilon)=1=\\delta'(\\epsilon)$; this $M$ would then have zero relative gap at $x=\\epsilon$, contradicting Lemma \\ref{lem:u-mix-no-gaps}.\n\\end{proof}\n\n\\section{Conclusions}\n\nOne of Solomonoff's more famous contributions is the invention of a theoretically ideal universal induction mechanism. The universal prior used in this mechanism can be defined\\/constructed in several ways.\nWe clarify the relationships between three different definitions of universal priors, namely universal mixtures, Solomonoff priors and universally dominant semimeasures.
We\nshow that the class of universal mixtures and the class of\nSolomonoff priors are exactly the same while the class of\nuniversally dominant lower semicomputable semimeasures is a\nstrictly larger set.\n\nWe have identified some aspects of the discrepancy between\nSolomonoff priors\/universal mixtures and universally dominant\nlower semicomputable semimeasures, however a clearer\nunderstanding and characterisation would be of interest.\n\nSince universal dominance is all that is needed to prove\nconvergence for universal induction\n\\cite{Hutter:04uaibook,Solomonoff:78} it is interesting to ask\nwhether the extra properties of the smaller class of Solomonoff\npriors have any positive consequences for universal induction.\n\n\\subsubsection*{Acknowledgements.}\nWe would like to acknowledge the contribution of an anonymous\nreviewer to a more elegant presentation of the proof of Lemma\n\\ref{thm:mixture-is-UTM}. This work was supported by ARC grant\nDP0988049.\n\n\\begin{small}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nA $p$-algebra is a central simple algebra of degree $p^m$ over a field $F$ with $\\operatorname{char}(F)=p$ for some prime number $p$ and positive integer $m$.\n\nIt was proven in \\cite[Chapter 7, Theorem 30]{Albert} that every $p$-algebra of exponent $p$ is Brauer equivalent to a tensor product of algebras of the form\n$$[\\alpha,\\beta)_{p,F}=F \\langle x,y : x^p-x=\\alpha, y^p=\\beta, y x-x y=y \\rangle$$\nfor some $\\alpha \\in F$ and $\\beta \\in F^\\times$.\nWe call such algebras ``symbol algebras\".\nAn equivalent statement was proven in \\citep{MS} when $\\operatorname{char}(F) \\neq p$ and $F$ contains primitive $p$th roots of unity.\n\nThis means the group $\\prescript{}{p}{Br}(F)$ is generated by symbol algebras.\nThe minimal number of symbol algebras required in order to express a given $A$ in $\\prescript{}{p}{Br}(F)$ is called the symbol length of $A$.\nThe symbol length of $\\prescript{}{p}{Br}(F)$ is the supremum of the symbol lengths of all $A \\in \\prescript{}{p}{Br}(F)$.\nThis number gives an indication of how complicated this group is, and due to the significance of this group, the symbol length has received special attention in papers such as \\cite{Florence} and \\cite{Matzri}.\n\nIn \\cite[Theorem 3.4]{Matzri} it was proven that when $\\operatorname{char}(F) \\neq p$, $F$ contains primitive $p$th roots of unity and the maximal dimension of an anisotropic homogeneous polynomial form of degree $p$ over $F$ is a finite integer $d$, the symbol length of $A$ is at most $\\left \\lceil \\frac{d+1}{p} \\right \\rceil-1$. The case of $p$-algebras was also considered in that paper, but only over $C_m$-fields, which involve assumptions on the greatest dimensions of anisotropic homogeneous polynomial forms of any degree, not only $p$.\n\nWe prove that when $\\operatorname{char}(F)=p$ and the maximal dimension of an anisotropic form of degree $p$ over $F$ is a finite integer $d$ greater than 1, the symbol length of $A$ is at most $\\left \\lceil \\frac{d-1}{p} \\right \\rceil-1$.\nWe obtain this bound by first proving that every two tensor products of symbol algebras $\\bigotimes_{i=1}^k C_i$ and $\\bigotimes_{i=1}^\\ell D_i$ with $(k+\\ell)p \\geq d-1$ can be modified so that they share a common slot.\n\nIn the last section, we focus on the case of $p=2$. We explain how the same argument holds in this case if we replace $d$ with the $u$-invariant, i.e. 
the maximal dimension of an anisotropic nonsingular quadratic form over $F$, and why the upper bound $\\frac{u(F)}{2}-1$ for the symbol length is sharp when $I_q^3 F=0$.\n\n\\section{Preliminaries}\n\nWe denote by $M_p(F)$ the $p \\times p$ matrix algebra over $F$.\nFor two given central simple algebras $A$ and $B$ over $F$, we write $A=B$ if the algebras are isomorphic.\nWe recall most of the facts we need on symbol algebras.\nFor general background on these algebras see \\cite[Chapter 7]{Albert}.\n\nWe start with some basic symbol changes.\nIn the following remark and lemmas, let $p$ be a prime integer and $F$ be a field with $\\operatorname{char}(F)=p$.\nWe say that a homogeneous polynomial form $\\varphi$ over $F$ is anisotropic if it has no nontrivial zeros.\n\n\\begin{rem}\\label{rem1}\n\\begin{itemize}\n\\item[(a)] $M_p(F)=[\\alpha,1)_{p,F}=[0,\\beta)_{p,F}$ for every $\\alpha \\in F$ and $\\beta \\in F^\\times$.\n\\item[(b)] $[\\alpha,\\beta)_{p,F} \\otimes [\\alpha,\\gamma)_{p,F}=M_p(F) \\otimes [\\alpha,\\beta \\gamma)_{p,F}$.\n\\item[(c)] $[\\alpha,\\gamma)_{p,F} \\otimes [\\beta,\\gamma)_{p,F}=M_p(F) \\otimes [\\alpha+\\beta,\\gamma)_{p,F}$.\n\\item[(d)] If $[\\alpha,\\beta)_{p,F}$ contains zero divisors then it is $M_p(F)$.\n\\item[(e)] Given $K=F[x : x^p-x=\\alpha]$, we denote by $\\operatorname{N}_{K\/F}$ the norm form $K \\rightarrow F$. This form is homogeneous of degree $p$ over $F$ of dimension $[K:F]=p$.\n\\end{itemize}\n\\end{rem}\n\n\\begin{lem}\\label{addtofirst}\nConsider $[\\alpha,\\beta)_{p,F}=F \\langle x,y : x^p-x=\\alpha, y^p=\\beta, yx-xy=y \\rangle$ for some $\\alpha \\in F$ and $\\beta \\in F^\\times$.\nThen \n\\begin{itemize}\n\\item[(a)] $[\\alpha,\\beta)_{p,F}=[\\alpha,\\operatorname{N}_{F[x]\/F}(f) \\beta)_{p,F}$ for any $f \\in F[x]$ with $\\operatorname{N}_{F[x]\/F}(f) \\neq 0$.\n\\item[(b)] $[\\alpha,\\beta)_{p,F}=[\\alpha+\\beta,\\beta)_{p,F}$.\n\\item[(c)] $[\\alpha,\\beta)_{p,F}=[\\alpha+v^p-v,\\beta)_{p,F}$ for any $v \\in F$.\n\\item[(d)] For any $v \\in F$ with $\\beta+v^p \\neq 0$, $[\\alpha,\\beta)_{p,F}=[\\alpha',\\beta+v^p)_{p,F}$ for some $\\alpha' \\in F$.\n\\end{itemize}\n\\end{lem}\n\n\\begin{proof}\nLet $f$ be an element in $F[x]$ with $\\operatorname{N}_{F[x]\/F}(f) \\neq 0$.\nThen $(fy)^p=\\operatorname{N}_{F[x]\/F}(f)y^p=\\operatorname{N}_{F[x]\/F}(f) \\beta$.\nSince $(fy) x-x (fy)=fy$, we have $[\\alpha,\\beta)_{p,F}=[x^p-x,(fy)^p)_{p,F}=[\\alpha,\\operatorname{N}_{F[x]\/F}(f) \\beta)_{p,F}$.\n\nWrite $z=x+y$.\nBy \\cite[Lemma 3.1]{Chapman:2015}, $z$ satisfies $z^p-z=x^p-x+y^p=\\alpha+\\beta$. Since $y z-z y=y$ we have $[\\alpha,\\beta)_{p,F}=[z^p-z,y^p)_{p,F}=[\\alpha+\\beta,\\beta)_{p,F}$.\n\nWrite $t=x+v$. 
Then $t^p-t=x^p-x+v^p-v=\\alpha+v^p-v$.\nSince $y t-t y=y$, we have $[\\alpha,\\beta)_{p,F}=[\\alpha+v^p-v,\\beta)_{p,F}$.\n\nLet $v \\in F$ be such that $\\beta+v^p \\neq 0$.\nConsider the element $y+v$.\nSince $(y+v)^p=\\beta+v^p \\in F^\\times$, by \\cite[Chapter 7, Lemma 10]{Albert} we have $[\\alpha,\\beta)_{p,F}=[\\alpha',\\beta+v^p)_{p,F}$ for some $\\alpha' \\in F$.\n\\end{proof}\n\nNote that by Lemma \\ref{addtofirst} $(a)$, we have $[\\alpha,\\beta)_{p,F}=[\\alpha,-\\beta)_{p,F}$ for any symbol algebra $[\\alpha,\\beta)_{p,F}$.\n\n\\begin{lem}\\label{two}\nConsider $[\\alpha,\\beta)_{p,F}$ and $[\\gamma,\\delta)_{p,F}$ for some $\\alpha,\\gamma \\in F$ and $\\beta,\\delta \\in F^\\times$.\nThen\n\\begin{itemize}\n\\item[(a)] $[\\alpha,\\beta)_{p,F} \\otimes [\\gamma,\\delta)_{p,F}=[\\alpha+\\gamma,\\beta)_{p,F} \\otimes [\\gamma,\\beta^{-1} \\delta)_{p,F}.$\n\\item[(b)] If $\\beta+\\delta \\neq 0$ then $[\\alpha,\\beta)_{p,F} \\otimes [\\gamma,\\delta)_{p,F}=[\\alpha+\\gamma,\\beta+\\delta)_{p,F} \\otimes C$ for some symbol algebra $C$ of degree $p$.\n\\end{itemize}\n\\end{lem}\n\n\\begin{proof}\nWrite $[\\alpha,\\beta)_{p,F}=F \\langle x,y : x^p-x=\\alpha, y^p=\\beta, yx-xy=y \\rangle$ and $[\\gamma,\\delta)_{p,F}=F \\langle z,w : z^p-z=\\gamma, w^p=\\delta, wz-zw=w \\rangle$.\nNote that the elements $x+z$ and $y$ commute with the elements $z$ and $y^{-1} w$, and that\n$$(x+z)^p-(x+z)=x^p-x+z^p-z=\\alpha+\\gamma,$$\n$$y (x+z)-(x+z) y=y,$$\n$$(y^{-1}w)^p=y^{-p} w^p=\\beta^{-1} \\delta, \\enspace \\operatorname{and}$$\n$$(y^{-1} w) z-z (y^{-1} w)=y^{-1} (w z-z w)=y^{-1} w.$$\nTherefore $[\\alpha,\\beta)_{p,F} \\otimes [\\gamma,\\delta)_{p,F}=F \\langle x,y,z,w \\rangle = F\\langle x+z,y \\rangle \\otimes F \\langle z, y^{-1} w \\rangle=[\\alpha+\\gamma,\\beta)_{p,F} \\otimes [\\gamma,\\beta^{-1} \\delta)_{p,F}$.\n\nNow, the elements $x+z$ and $y+w$ satisfy\n$$(y+w)^p=y^p+w^p=\\beta+\\delta, \\enspace \\operatorname{and}$$\n$$(y+w) (x+z)-(x+z) (y+w)=y+w.$$\nTherefore $F \\langle x+z,y+w \\rangle=[\\alpha+\\gamma,\\beta+\\delta)_{p,F}$.\nHence $[\\alpha,\\beta)_{p,F} \\otimes [\\gamma,\\delta)_{p,F}=[\\alpha+\\gamma,\\beta+\\delta)_{p,F} \\otimes C$ for some central simple algebra $C$ over $F$ of degree $p$.\nSince $C$ contains $y^{-1}w$, which satisfies $(y^{-1}w)^p \\in F^\\times$, $C$ must be a symbol algebra (\\cite[Chapter 7, Lemma 10]{Albert}).\n\\end{proof}\n\n\\section{Bounding the Symbol Length}\n\n\\begin{prop}\\label{change_left}\nLet $p$ be a prime integer and let $F$ be a field with $\\operatorname{char}(F) = p$.\nConsider the algebras $[\\alpha_1,\\beta_1)_{p,F},\\dots,[\\alpha_k,\\beta_k)_{p,F}$ for some $\\alpha_1,\\dots,\\alpha_k \\in F$ and $\\beta_1,\\dots,\\beta_k \\in F^\\times$ where $k$ is some positive integer.\nFor each $i \\in \\{1,\\dots,k\\}$, write $[\\alpha_i,\\beta_i)_{p,F}=F \\langle x_i, y_i : x_i^p-x_i=\\alpha_i, y_i^p=\\beta_i, y_i x_i-x_i y_i=y_i \\rangle$.\n\nLet $\\varphi$ be the homogeneous polynomial form defined on $$V=F \\oplus F \\oplus F[x_1] \\oplus \\dots \\oplus F[x_k]$$ by $$\\varphi(u,v,f_1,\\dots,f_k)=u^p (\\alpha_1+\\dots+\\alpha_k)-u^{p-1} v+v^p+\\operatorname{N}_{F[x_1]\\/F}(f_1) \\beta_1+\\dots+\\operatorname{N}_{F[x_k]\\/F}(f_k) \\beta_k.$$\n\n\\begin{itemize}\n\\item[(a)] For every $(u,v,f_1,\\dots,f_k) \\in V$ with $u \\neq 0$, we have $\\bigotimes_{i=1}^k [\\alpha_i,\\beta_i)_{p,F}=\\bigotimes_{i=1}^k C_i$ where $C_1,\\dots,C_k$ are symbol algebras of degree $p$ and\\\\
$C_1=[\\varphi(1,\\frac{v}{u},\\frac{f_1}{u},\\dots,\\frac{f_k}{u}),\\beta')_{p,F}$ for some $\\beta' \\in F^\\times$.\n\\item[(b)] Let $(0,v,f_1,\\dots,f_k) \\in V$ such that the following elements are nonzero: $f_1,\\dots,f_t$ for some $t \\in \\{1,\\dots,k\\}$, $\\sum_{i=1}^s \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i$ for any $s \\in \\{1,\\dots,t\\}$ and $(\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i)+v^p$. Then $\\bigotimes_{i=1}^k [\\alpha_i,\\beta_i)_{p,F}=\\bigotimes_{i=1}^k C_i$ where $C_1,\\dots,C_k$ are symbol algebras of degree $p$ and $C_1=[\\alpha',(\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i)+v^p)_{p,F}$ for some $\\alpha' \\in F$.\n\\item[(c)] If there exists $(u,v,f_1,\\dots,f_k) \\in V \\setminus \\{(0,\\dots,0)\\}$ such that $\\varphi(u,v,f_1,\\dots,f_k)=0$, then $\\bigotimes_{i=1}^k [\\alpha_i,\\beta_i)_{p,F}=\\bigotimes_{i=1}^k C_i$ where $C_1=M_p(F)$ and $C_2,\\dots,C_k$ are symbol algebras of degree $p$.\n\\end{itemize}\n\\end{prop}\n\n\n\\begin{proof}\n\\sloppy\nLet $(u,v,f_1,\\dots,f_k) \\in V$ with $u \\neq 0$.\nFor each $i \\in \\{1,\\dots,k\\}$ with $\\operatorname{N}_{F[x_i]\/F}(\\frac{f_i}{u}) \\neq 0$, we apply Lemma \\ref{addtofirst} to change $[\\alpha_i,\\beta_i)_{p,F}$ to $[\\alpha_i+\\operatorname{N}_{F[x_i]\/F}(\\frac{f_i}{u}) \\beta_i,\\operatorname{N}_{F[x_i]\/F}(\\frac{f_i}{u}) \\beta_i)_{p,F}$.\nThen for each $i \\in \\{2,\\dots,k\\}$, we apply Lemma \\ref{two} (a) on the first and $i$th symbol algebras.\nThe first algebra in the tensor product obtained after these modifications is $[\\sum_{i=1}^k (\\alpha_i+\\operatorname{N}_{F[x_i]\/F}(\\frac{f_i}{u}) \\beta_i),\\operatorname{N}_{F[x_1]\/F}(\\frac{f_1}{u}) \\beta_1)_{p,F}$ or $[\\sum_{i=1}^k (\\alpha_i+\\operatorname{N}_{F[x_i]\/F}(\\frac{f_i}{u}) \\beta_i),\\beta_1)_{p,F}$ depending on the value of $\\operatorname{N}_{F[x_1]\/F}(\\frac{f_1}{u})$.\nBy Lemma \\ref{addtofirst} $(c)$ we can add $(\\frac{v}{u})^p-\\frac{v}{u}$ to the left slot of the first algebra. That proves part $(a)$.\nIf $\\varphi(u,v,f_1,\\dots,f_k)=0$ then also $\\varphi(1,\\frac{v}{u},\\frac{f_1}{u},\\dots,\\frac{f_k}{u})=0$, which means that the first algebra in the modified tensor product is a matrix algebra.\n\n\\sloppy\nLet $(0,v,f_1,\\dots,f_k) \\in V$ such that the following elements are nonzero: $f_1,\\dots,f_t$ for some $t \\in \\{1,\\dots,k\\}$, $\\sum_{i=1}^s \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i$ for any $s \\in \\{1,\\dots,t\\}$ and $(\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i)+v^p$.\nFor each $i \\in \\{1,\\dots,t\\}$ change $[\\alpha_i,\\beta_i)_{p,F}$ to $[\\alpha_i,\\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i)_{p,F}$.\nThen for each $i \\in \\{2,\\dots,t\\}$ apply Lemma \\ref{two} (b) on the first and the $i$th symbol algebras.\nThe first algebra in the resulting tensor product is $[\\alpha_1+\\dots+\\alpha_t,\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i)_{p,F}$. Then by Lemma \\ref{addtofirst} (d) we can change this algebra to $[\\alpha',(\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\/F}(f_i) \\beta_i)+v^p)_{p,F}$ for some $\\alpha' \\in F$. 
That proves part $(b)$.\n\nLet $(0,v,f_1,\\dots,f_k) \\in V \\setminus \\{(0,\\dots,0)\\}$ such that $\\varphi(0,v,f_1,\\dots,f_k)=0$.\nIf $f_1=\\dots=f_k=0$ then $\\varphi(0,v,f_1,\\dots,f_k)=v^p=0$, which is impossible since $v$ must then be in $F^\\times$.\nTherefore at least one $f_i$ is nonzero.\nIf there is some nonzero $f_i$ with $\\operatorname{N}_{F[x_i]\\/F}(f_i)=0$ then the symbol algebra $[\\alpha_i,\\beta_i)_{p,F}$ contains a zero divisor and so it is a matrix algebra.\nAssume every nonzero $f_i$ has $\\operatorname{N}_{F[x_i]\\/F}(f_i) \\neq 0$.\nWithout loss of generality we can assume $f_1,\\dots,f_t$ are nonzero for some $t \\in \\{1,\\dots,k\\}$ and $f_i=0$ for $i>t$.\nAssume $\\sum_{i=1}^s \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i=0$ for some $s \\in \\{1,\\dots,t\\}$, and assume $s$ is minimal.\nThen we can change the tensor product so that the first algebra is $[\\alpha',\\sum_{i=1}^{s-1} \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i)_{p,F}$ and the $s$th algebra is $[\\alpha_s,\\operatorname{N}_{F[x_s]\\/F}(f_s) \\beta_s)_{p,F}$.\nSince $\\sum_{i=1}^{s-1} \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i=-\\operatorname{N}_{F[x_s]\\/F}(f_s) \\beta_s$, we have $[\\alpha',\\sum_{i=1}^{s-1} \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i)_{p,F} \\otimes [\\alpha_s,\\operatorname{N}_{F[x_s]\\/F}(f_s) \\beta_s)_{p,F}=[\\alpha'+\\alpha_s,\\operatorname{N}_{F[x_s]\\/F}(f_s) \\beta_s)_{p,F} \\otimes M_p(F)$.\nAssume $\\sum_{i=1}^s \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i \\neq 0$ for all $s \\in \\{1,\\dots,t\\}$.\nThen the tensor product can be changed such that the first algebra is $[\\alpha',\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i)_{p,F}$.\nThis algebra contains a zero divisor because $\\varphi(0,v,f_1,\\dots,f_k)=(\\sum_{i=1}^t \\operatorname{N}_{F[x_i]\\/F}(f_i) \\beta_i)+v^p=0$, which means it is a matrix algebra. That completes part $(c)$.\n\\end{proof}\n\n\\begin{thm}\\label{linkage}\nLet $p$ be a prime integer and let $F$ be a field with $\\operatorname{char}(F) = p$.\nAssume the maximal dimension of an anisotropic homogeneous polynomial form of degree $p$ over $F$ is a finite integer $d$.\nThen every two tensor products $A=\\bigotimes_{i=1}^k [\\alpha_i,\\beta_i)_{p,F}$ and $B=\\bigotimes_{i=1}^\\ell [\\gamma_i,\\delta_i)_{p,F}$ with $(k+\\ell) p \\geq d-1$ can be changed such that $\\alpha_1=\\gamma_1$.\n\\end{thm}\n\n\\begin{proof}\nLet $\\varphi$ and $\\psi$ be the homogeneous polynomial forms of degree $p$ as constructed in Proposition \\ref{change_left} for $A$ and $B$ respectively.\nIf $\\varphi$ has a nontrivial zero then the first algebra in $A$ can be assumed to be a matrix algebra, and so it can be written as $[\\gamma_1,1)_{p,F}$ and the statement follows.\nSimilarly, the statement follows when $\\psi$ has a nontrivial zero.\n\n\\sloppy\nAssume that both forms are anisotropic.\nThe form $\\varphi(u,v,f_1,\\dots,f_k)-\\psi(u,0,f_1',\\dots,f_\\ell')$ is of dimension $(k+\\ell)p+2$ over $F$.\nBy assumption, $(k+\\ell) p+2 \\geq d+1$, which means that this form has a nontrivial zero, i.e.
$\\varphi(u,v,f_1,\\dots,f_k)=\\psi(u,0,f_1',\\dots,f_\\ell')$ for some $u,v,f_1,\\dots,f_k,f_1',\\dots,f_\\ell'$ such that not all of them are $0$.\nIf $u \\neq 0$ then $\\varphi(1,\\frac{v}{u},\\frac{f_1}{u},\\dots,\\frac{f_k}{u})=\\psi(1,0,\\frac{f_1'}{u},\\dots,\\frac{f_\\ell'}{u})$.\nBy Proposition \\ref{change_left} (a) we can change both tensor products to have $\\varphi(1,\\frac{v}{u},\\frac{f_1}{u},\\dots,\\frac{f_k}{u})$ in the left slot of the first algebra.\n\n\\sloppy\nAssume $u=0$.\nAt least one of the elements $v,f_1,\\dots,f_k,f_1',\\dots,f_\\ell'$ is nonzero.\nIf $f_i'=0$ for every $i \\in \\{1,\\dots,\\ell\\}$ then $\\psi(0,0,f_1',\\dots,f_\\ell')=0$ and so $\\varphi(0,v,f_1,\\dots,f_k)=0$, but since $\\varphi$ is anisotropic we get $v=f_1=\\dots=f_k=0$, a contradiction. Therefore $f_i' \\neq 0$ for at least one $i$ in $\\{1,\\dots,\\ell\\}$. If $f_i=0$ for every $i \\in \\{1,\\dots,k\\}$ then $v^p=\\psi(0,0,f_1',\\dots,f_\\ell')$, which means $0=\\psi(0,0,f_1',\\dots,f_\\ell')-v^p=\\psi(0,-v,f_1',\\dots,f_\\ell')$, i.e. $\\psi$ has a nontrivial zero, a contradiction.\nTherefore $f_i \\neq 0$ for at least one $i$ in $\\{1,\\dots,k\\}$.\nBy changing the order of the symbol algebras, we can assume $f_1,\\dots,f_t \\neq 0$ for some $t \\in \\{1,\\dots,k\\}$, $f_i=0$ for every $i$ with $i \\geq t+1$, $f_1',\\dots,f'_r \\neq 0$ for some $r \\in \\{1,\\dots,\\ell\\}$ and $f_i'=0$ for every $i$ with $i \\geq r+1$.\nBoth $(0,v,f_1,\\dots,f_k)$ and $(0,0,f_1',\\dots,f_\\ell')$ meet the requirements of Proposition \\ref{change_left} (b), and so we can change the first tensor product so that it has $\\varphi(0,v,f_1,\\dots,f_k)$ in the second slot of the first symbol algebra and the second tensor product so that it has $\\psi(0,0,f_1',\\dots,f_\\ell')$ in the second slot of the first symbol algebra.\nThe statement then follows from \\cite[Theorem 3.2]{Chapman:2015}.\n\\end{proof}\n\n\\begin{cor}\\label{corcharp}\nLet $p$ be a prime integer and let $F$ be a field with $\\operatorname{char}(F) = p$.\nAssume the maximal dimension of an anisotropic homogeneous polynomial form of degree $p$ over $F$ is a finite integer $d$ greater than 1.\nThen the symbol length of $p$-algebras of exponent $p$ over $F$ is bounded from above by $\\left \\lceil \\frac{d-1}{p} \\right \\rceil -1$.\n\\end{cor}\n\n\\begin{proof}\nWrite $n=\\left \\lceil \\frac{d-1}{p} \\right \\rceil -1$.\nThen it suffices to prove that for every tensor product $A=\\bigotimes_{i=1}^k [\\alpha_i,\\beta_i)_{p,F}$ with $k \\geq n+1$, $A=M_p(F) \\otimes C_1 \\otimes \\dots \\otimes C_{k-1}$ for some symbol algebras $C_1,\\dots,C_{k-1}$ of degree $p$; the statement then follows by induction.\n\nNote that if $k \\geq n+1=\\left \\lceil \\frac{d-1}{p} \\right \\rceil$ then $kp \\geq d-1$.\nConsider the algebra $[\\alpha_1,\\beta_1)_{p,F}$ and the tensor product $\\bigotimes_{i=2}^k [\\alpha_i,\\beta_i)_{p,F}$.\nBy Theorem \\ref{linkage}, since $p(1+(k-1))=kp \\geq d-1$ we can assume $\\alpha_1=\\alpha_2$, and so\n$$[\\alpha_1,\\beta_1)_{p,F} \\otimes [\\alpha_2,\\beta_2)_{p,F} \\otimes \\dots \\otimes [\\alpha_k,\\beta_k)_{p,F}=M_p(F) \\otimes [\\alpha_1,\\beta_1 \\beta_2)_{p,F} \\otimes [\\alpha_3,\\beta_3)_{p,F} \\otimes \\dots \\otimes [\\alpha_k,\\beta_k)_{p,F}.$$\n\\end{proof}\n\n\\begin{rem}\nBy using similar methods, the upper bound for the symbol length of algebras of exponent $p$ over a field $F$ with $\\operatorname{char}(F) \\neq p$ containing primitive $p$th roots of unity appearing in \\cite[Theorem 3.4]{Matzri} can
be sharpened from $\\left \\lceil \\frac{d+1}{p} \\right \\rceil -1$ to $\\left \\lceil \\frac{d-1}{p} \\right \\rceil -1$ as well, where the maximal dimension of an anisotropic homogeneous polynomial form of degree $p$ over $F$ is $d$ greater than 1.\nIn the case of $p=2$ and $\\operatorname{char}(F) \\neq 2$, $d$ is the $u$-invariant of $F$, $u(F)$, and so this upper bound is $\\left \\lceil \\frac{u(F)-1}{2} \\right \\rceil$ which appears in \\cite[Theorem 2 (a)]{Kahn:1990}. When we assume further that $I^3 F=0$, this bound is sharp (see \\cite[Theorem 2 (b)]{Kahn:1990}). For the construction of fields $F$ with $\\operatorname{char}(F) \\neq 2$, $I^3 F=0$ and any even $u(F)$ see \\cite[Theorem 4]{Merkurjev1991}.\n\\end{rem}\n\n\\section{Quaternion Algebras}\n\nIn this section we focus on the case of $p=2$ and $\\operatorname{char}(F)=2$.\nIn this case, the symbol algebras are quaternion algebras.\nThe corresponding homogeneous polynomial forms of degree $p$ constructed in the previous section are now quadratic forms.\nQuadratic forms in this case are of the shape\n$$[a_1,b_1] \\perp \\dots \\perp [a_r,b_r] \\perp \\langle c_1,\\dots,c_t \\rangle$$\nfor some $a_1,b_1,\\dots,a_r,b_r,c_1,\\dots,c_t \\in F$.\nEach $[a_i,b_i]$ stands for the quadratic form $a_i u_i^2+u_i v_i+b_i v_i^2$, $\\langle c_1,\\dots,c_t \\rangle$ is the diagonal form $c_1 w_1^2+\\dots+c_t w_t^2$, and $\\perp$ is the orthogonal sum.\nA quadratic form is nonsingular if $t=0$.\nIn particular, the dimension of a nonsingular quadratic form is always even.\nNote that $c [a,b]$ is isometric to $[\\frac{a}{c},bc]$ for $c \\in F^\\times$, so orthogonal sums of quadratic forms of the shape $c [a,b]$ are also nonsingular.\n\nGiven $K=F[x : x^2+x=\\alpha]$, the norm form ${\\operatorname{N}}_{K\/F} : K \\rightarrow F$ mentioned in Remark \\ref{rem1} $(e)$ is the quadratic form $[\\alpha,1]$.\n\nThe maximal dimension of an anisotropic quadratic form over $F$ is denoted by $\\hat{u}(F)$. The number $d$ appearing in Theorem \\ref{linkage} and Corollary \\ref{corcharp} is equal to $\\hat{u}(F)$ when $p=2$.\nThe $u$-invariant of $F$, denoted by $u(F)$, is the maximal dimension of an anisotropic nonsingular quadratic form over $F$.\nClearly $u(F) \\leq \\hat{u}(F)$, and there are many examples in the literature where this inequality is strict (see for example \\cite{MammoneTignolWadsworth:1991}).\nTherefore, using $u(F)$ instead of $\\hat{u}(F)$ gives a better upper bound for the symbol length in the case of $p=2$.\nWe now rephrase Theorem \\ref{linkage} and Corollary \\ref{corcharp} in terms of $u(F)$.\n\n\\begin{thm}\nLet $F$ be a field with $\\operatorname{char}(F) = 2$ and $u(F) < \\infty$.\nThen every two tensor products of quaternion algebras $A=\\bigotimes_{i=1}^k [\\alpha_i,\\beta_i)_{2,F}$ and $B=\\bigotimes_{i=1}^\\ell [\\gamma_i,\\delta_i)_{2,F}$ with $(k+\\ell) 2 \\geq u(F)$ can be changed such that $\\alpha_1=\\gamma_1$.\n\\end{thm}\n\n\\begin{proof}\nThe proof is essentially the same as in Theorem \\ref{linkage}.\nOne just has to note that the form \n$\\varphi(u,v,f_1,\\dots,f_k)-\\psi(u,0,f_1',\\dots,f_\\ell')$ in this case is the quadratic form\n$$[\\alpha_1+\\dots+\\alpha_k+\\gamma_1+\\dots+\\gamma_\\ell,1] \\perp \\beta_1 [\\alpha_1,1] \\perp \\dots \\perp \\beta_k [\\alpha_k,1] \\perp \\delta_1 [\\gamma_1,1] \\perp \\dots \\perp \\delta_\\ell [\\gamma_\\ell,1].$$\nThis quadratic form is nonsingular of dimension $(k+\\ell) 2+2$, which is greater than $u(F)$. 
Hence it has a nontrivial zero, and the proof continues in the same manner as the proof of Theorem \\ref{linkage}.\n\\end{proof}\n\n\\begin{cor}\\label{corchar2}\nLet $F$ be a field with $\\operatorname{char}(F) = 2$ and $2 \\leq u(F) < \\infty$.\nThen the symbol length of algebras of exponent $2$ over $F$ is bounded from above by $\\frac{u(F)}{2} -1$.\n\\end{cor}\n\nIn certain cases this upper bound is sharp, as Proposition \\ref{remchar2} shows. Before stating this proposition, we recall a few facts about quadratic forms: the unique (up to isometry) isotropic nonsingular quadratic form of dimension 2 over $F$ is the hyperbolic plane $\\varmathbb{H}=[0,1]$. Every nonsingular quadratic form $\\varphi$ over $F$ can be written uniquely as $\\varphi_{\\operatorname{an}} \\perp \\underbrace{\\varmathbb{H} \\perp \\dots \\perp \\varmathbb{H}}_{m \\enspace \\operatorname{times}}$ for some anisotropic form $\\varphi_{\\operatorname{an}}$ and some nonnegative integer $m$ called the Witt index of $\\varphi$. Two forms $\\varphi$ and $\\phi$ are Witt equivalent if their underlying anisotropic subforms $\\varphi_{\\operatorname{an}}$ and $\\phi_{\\operatorname{an}}$ are isometric. The group of Witt equivalence classes of nonsingular quadratic forms over $F$ with $\\perp$ as the group operation is denoted by $I_q F$.\nThe ``Arf invariant\" or ``discriminant\" of a nonsingular form $\\varphi=[a_1,b_1] \\perp \\dots \\perp [a_r,b_r]$ is the class of $\\sum_{i=1}^r a_i b_i$ in the additive group $F\\/\\wp(F)$ where $\\wp(F)=\\{\\lambda^2+\\lambda : \\lambda \\in F\\}$. The subgroup of $I_q F$ of forms with trivial discriminant is $I_q^2 F$.\nThe Clifford invariant maps $2r$-dimensional nonsingular forms $\\varphi=[a_1,b_1] \\perp \\dots \\perp [a_r,b_r]$ with trivial discriminant to tensor products of $r-1$ quaternion algebras $$E(\\varphi)=[a_1 b_1,b_r b_1)_{2,F} \\otimes \\dots \\otimes [a_{r-1} b_{r-1},b_r b_{r-1})_{2,F}.$$ For every tensor product $T$ of $r-1$ quaternion algebras there exists a nonsingular form $\\varphi$ of dimension $2r$ with trivial discriminant such that $E(\\varphi)=T$ (see \\cite[Proof of Theorem 4.1]{Chapman:2015:chain}).\nThe Clifford invariant defines a group epimorphism from $I_q^2 F$ to $\\prescript{}{2} Br(F)$ whose kernel is exactly $I_q^3 F$.\n\n\\begin{lem}\\label{addedlem}\nIf $\\operatorname{char}(F)=2$, $I_q^3 F=0$ and $4 \\leq u(F) < \\infty$ then there exists an anisotropic nonsingular quadratic form with trivial discriminant of dimension $u(F)$.\n\\end{lem}\n\n\\begin{proof}\nLet $\\phi$ be an anisotropic nonsingular form of dimension $u(F)$. Denote its discriminant by $\\delta$.\nIf $\\delta \\in \\wp(F)$, we are done, so assume $\\delta \\not \\in \\wp(F)$.\nWrite $\\psi=\\phi \\perp [\\delta,1]$.\nSince $\\psi$ is of dimension $u(F)+2$, it is isotropic.\nThe Witt index of $\\psi$ can be either $1$ or $2$.\nIf it is $2$ then $\\psi=\\psi_{\\operatorname{an}} \\perp \\varmathbb{H} \\perp \\varmathbb{H}=\\psi_{\\operatorname{an}} \\perp [\\delta,1] \\perp [\\delta,1]$, and so $\\phi=\\psi_{\\operatorname{an}} \\perp [\\delta,1]$.\nNote that the discriminant of $\\psi_{\\operatorname{an}}$ is trivial.
Since $I_q^3 F=0$, $\\psi_{\\operatorname{an}}$ is universal (see \\cite[Proof of Theorem 4.1]{BarryChapman:2015}), which contradicts the assumption that $\\phi$ is anisotropic.\nConsequently the Witt index of $\\psi$ must be 1.\nTherefore $\\psi_{\\operatorname{an}}$ is an anisotropic nonsingular form of dimension $u(F)$ with trivial discriminant.\n\\end{proof}\n\n\\begin{rem}\nBy \\cite[Theorem 2]{Kahn:1990} $(b)$, $u(F)$ is even when $\\operatorname{char}(F) \\neq 2$, $I^3 F=0$ and $u(F) > 1$. One can therefore prove in a similar way to Lemma \\ref{addedlem} that if $\\operatorname{char}(F) \\neq 2$, $I^3 F=0$ and $4 \\leq u(F) < \\infty$ then there exists an anisotropic quadratic form with trivial discriminant of dimension $u(F)$.\n\\end{rem}\n\n\n\\begin{prop}\\label{remchar2}\nWhen $\\operatorname{char}(F)=2$, $I_q^3 F=0$ and $2 \\leq u(F) < \\infty$, the bound $\\frac{u(F)}{2}-1$ for the symbol length of algebras of exponent 2 over $F$ is sharp.\n\\end{prop}\n\n\\begin{proof}\nWrite $u(F)=2 n$ for some positive integer $n$. The statement is clearly true when $n=1$, so assume $n \\geq 2$. It is enough to find an algebra of exponent 2 over $F$ whose symbol length is $\\frac{u(F)}{2}-1=n-1$. \nBy Lemma \\ref{addedlem} there exists an anisotropic nonsingular form $\\varphi$ with trivial discriminant of dimension $2n$.\nWe claim that the symbol length of $E(\\varphi)$ is $n-1$.\nClearly it is no greater than $n-1$.\nSuppose $E(\\varphi)$ is Brauer equivalent to a tensor product $T$ of $k$ quaternion algebras where $k < n-1$.\nThen there exists some nonsingular form $\\phi$ of dimension $2(k+1)$ with trivial discriminant such that $E(\\phi)=T$.\nSince $E(\\phi)$ and $E(\\varphi)$ are Brauer equivalent, the forms $\\phi$ and $\\varphi$ are equivalent modulo $I_q^3 F$. However, $I_q^3 F=0$, which means that $\\phi$ and $\\varphi$ are Witt equivalent, but this is impossible because $\\varphi$ is anisotropic of dimension $2n$ and $\\phi$ is of dimension $2(k+1) < 2n$.\n\\end{proof}\n\nCorollary \\ref{corchar2} and Proposition \\ref{remchar2} provide characteristic 2 analogues to parts $(a)$ and $(b)$ of \\cite[Theorem 2]{Kahn:1990}.\nFor the construction of fields $F$ with $\\operatorname{char}(F)=2$, $I_q^3 F=0$ and any even $u$-invariant see \\cite[Theorem 38.4]{EKM}.\n\n\\begin{rem}\nThere is a typographical error in the first sentence, second paragraph of the proof of \\cite[Theorem 4.1]{BarryChapman:2015}. The correct sentence should be: ``Now we show that if $f \\in I_q^2 F$ is an anisotropic form of dimension at least $6$ and $f_K$ is isotropic for some $K=F[x: x^2+x=a]$ then $f_K=\\varmathbb{H} \\perp f_0$ where $\\varmathbb{H}$ is a hyperbolic plane and $f_0$ is anisotropic\".\n\\end{rem}\n\n\\section*{Acknowledgments}\n\nThe author thanks the anonymous referee for the helpful comments on the manuscript.\n\n\\section*{Bibliography}\n\\bibliographystyle{amsalpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}