\\section{EXIT Charts and the Matching Condition for BEC}\nTo start, let us review the case of transmission over the $\\ensuremath{\\text{BEC}}(\\ensuremath{{\\tt{h}}})$\nusing a degree distribution pair $(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}})$.\nIn this case density evolution is equivalent to the EXIT chart\napproach and the condition for successful decoding under \\BP\\ reads\n\\begin{align*}\nc(x) \\defas 1-\\ensuremath{{\\rho}}(1-x) \\leq \\ensuremath{{\\lambda}}^{-1}(x\/\\ensuremath{{\\tt{h}}}) \\defas v^{-1}_{\\ensuremath{{\\tt{h}}}}(x).\n\\end{align*}\nThis is shown in Fig.~\\ref{fig:becmatching} for the degree distribution pair\n$(\\ensuremath{{\\lambda}}(x)=x^3, \\ensuremath{{\\rho}}(x)=x^4)$. \n\\begin{figure}[hbt]\n\\centering\n\\setlength{\\unitlength}{1.0bp}%\n\\begin{picture}(110,110)\n\\put(0,0){\\includegraphics[scale=1.0]{exitchartbecldpcc}}\n\\put(112, 0){\\makebox(0,0)[l]{\\small $x$}}\n\\put(40, 60){\\makebox(0,0){\\small $c(x)$}}\n\\put(16, 89){\\makebox(0,0){\\small $v^{-1}_{\\ensuremath{{\\tt{h}}}}(x)$}}\n\\put(58, 102){\\makebox(0,0)[b]{{\\small $\\ensuremath{{\\tt{h}}}=0.58$}}}\n\\put(100, -2){\\makebox(0,0)[tr]{$h_{\\text{out-variable}}=h_{\\text{in-check}}$}}\n\\put(-4,100){\\makebox(0,0)[tr]{\\rotatebox{90}{$h_{\\text{out-check}}=h_{\\text{in-variable}}$}}}\n\\end{picture}\n\\caption{\\label{fig:becmatching} The EXIT chart method for\nthe degree distribution $(\\ensuremath{{\\lambda}}(x)=x^3, \\ensuremath{{\\rho}}(x)=x^4)$ and\ntransmission over the $\\ensuremath{\\text{BEC}}(\\ensuremath{{\\tt{h}}} = 0.58)$.}\n\\end{figure}\nThe area under the curve $c(x)$ equals $1-\\int \\!\\ensuremath{{\\rho}}$ and the \narea to the left of the curve $v^{-1}_{\\ensuremath{{\\tt{h}}}}(x)$ is equal to 
\n$\\ensuremath{{\\tt{h}}} \\int \\!\\ensuremath{{\\lambda}}$. By the previous remarks, a necessary condition\nfor successful \\BP\\ decoding\nis that these two areas do not overlap.\nSince the total area equals $1$ we get the necessary condition\n$\\ensuremath{{\\tt{h}}} \\int \\ensuremath{{\\lambda}}+1-\\int \\ensuremath{{\\rho}}\\leq 1$. Rearranging terms, this \nis equivalent to the condition\n\\begin{align*}\n1-C_{\\Shsmall} = \\ensuremath{{\\tt{h}}} \\leq \\frac{\\int \\ensuremath{{\\rho}}}{\\int \\ensuremath{{\\lambda}}}= 1 - r(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}}).\n\\end{align*}\nIn words, the rate $r(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}})$ of any LDPC ensemble which, for\nincreasing block lengths, allows successful \ndecoding over the $\\ensuremath{\\text{BEC}}(\\ensuremath{{\\tt{h}}})$, cannot exceed the Shannon limit\n$1-\\ensuremath{{\\tt{h}}}$. \nAs pointed out in the introduction, an argument very similar to the above was introduced\nby Shokrollahi and Oswald \\cite{Sho00,OsS01} (albeit not using the language and geometric\ninterpretation of EXIT functions and applying a slightly different range of integration).\nIt was the first bound on the performance of iterative systems in which the Shannon capacity\nappeared explicitly, expressed purely in terms of quantities of density evolution.\nA substantially more general version of this bound can be found in \\cite{AKtB02a,AKTB02,AKtB04}\n(see also Forney \\cite{For05}).\n\nAlthough the final result (namely that transmission above capacity\nis not possible) is trivial, the method of proof is well worth the effort\nsince it shows how capacity enters in the calculation of the performance \nof iterative coding systems. By turning this bound around, we \ncan find conditions under which iterative systems achieve capacity:\nin particular, it shows that the two component-wise\nEXIT curves have to be matched perfectly. 
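For the BEC all of this is easy to check numerically. The short sketch below (our own illustration, with our own function names, not part of the original text) iterates density evolution for the pair $(\lambda(x)=x^3, \rho(x)=x^4)$; numerically the BP threshold of this pair lies close to $0.6$, so decoding succeeds at the figure's value $\ensuremath{{\tt{h}}}=0.58$ and stalls above the threshold, consistent with the area bound.

```python
# Density evolution for the BEC(h): x_{l+1} = h * lam(1 - rho(1 - x_l)).
# Successful BP decoding corresponds to c(x) <= v_h^{-1}(x) for all x,
# equivalently to this recursion converging to 0.

def lam(x):      # lambda(x) = x^3 (edge perspective, variable nodes)
    return x ** 3

def rho(x):      # rho(x) = x^4 (edge perspective, check nodes)
    return x ** 4

def density_evolution(h, iters=5000):
    x = h        # initial erasure fraction equals the channel parameter
    for _ in range(iters):
        x = h * lam(1.0 - rho(1.0 - x))
    return x

# design rate: r = 1 - int(rho) / int(lam) = 1 - (1/5) / (1/4) = 1/5
rate = 1.0 - (1.0 / 5.0) / (1.0 / 4.0)
```

The same few lines also confirm the necessary condition above: since decoding succeeds at $\ensuremath{{\tt{h}}}=0.58$, the rate $1/5$ must lie below the Shannon limit $1-0.58=0.42$.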
Indeed, all currently known\ncapacity achieving degree-distributions for the BEC can be derived \nby starting with this perfect matching condition and working backwards.\n\n\\section{GEXIT Charts and the Matching Condition for BMS Channels}\nLet us now derive the equivalent result for general BMS channels.\nAs a first ingredient we show how to interpolate the sequence of densities\nwhich we get from density evolution so as to form a complete family of\ndensities.\n\n\\begin{definition}[Interpolating Channel Families]\n\\label{def:interpolation}\nConsider a degree distribution pair $(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}})$\nand transmission over the BMS channel characterized by its\n$L$-density $\\Ldens{c}$. Let $\\Ldens{a}_{-1}=\\Delta_0$\nand $\\Ldens{a}_0=\\Ldens{c}$ and set $\\Ldens{a}_{\\alpha}$,\n$\\alpha \\in [-1, 0]$, to \n$\\Ldens{a}_{\\alpha}=-\\alpha \\Ldens{a}_{-1} + (1+\\alpha) \\Ldens{a}_0$.\nThe {\\em interpolating density evolution families} \n$\\{\\Ldens{a}_{\\alpha}\\}_{\\alpha=-1}^{\\infty}$\nand $\\{\\Ldens{b}_{\\alpha}\\}_{\\alpha=0}^{\\infty}$ are then defined as follows:\n\\begin{align*}\n\\Ldens{b}_{\\alpha} & = \\sum_{i} \\ensuremath{{\\rho}}_i \\Ldens{a}_{\\alpha-1}^{\\boxast (i-1)},\n\\;\\;\\;\\;\\; \\alpha \\geq 0,\\\\\n\\Ldens{a}_{\\alpha} & = \n\\sum_{i} \\ensuremath{{\\lambda}}_i \\Ldens{c} \\star \\Ldens{b}_{\\alpha}^{\\star (i-1)},\n\\;\\;\\;\\;\\;\\alpha \\geq 0,\n\\end{align*}\nwhere $\\star$ denotes the standard convolution of densities and\n$\\Ldens{a} \\boxast \\Ldens{b}$ denotes the density at the output of\na check node, assuming that the input densities are $\\Ldens{a}$ and $\\Ldens{b}$,\nrespectively.\n\\end{definition}\nDiscussion: First note that $\\Ldens{a}_{\\ell}$ ($\\Ldens{b}_{\\ell}$), \n$\\ell \\in \\ensuremath{\\mathbb{N}}$,\nrepresents the sequence of $L$-densities of density evolution\nemitted by the variable (check) nodes in the $\\ell$-th iteration.\nBy starting density evolution not only with 
$\\Ldens{a}_{0}=\\Ldens{c}$\nbut with all possible convex combinations of $\\Delta_0$ and\n$\\Ldens{c}$, this discrete sequence of densities is completed to\nform a continuous family of densities ordered by physical degradation.\nThe fact that the densities are ordered by physical degradation\ncan be seen as follows: note that the computation tree for $\\Ldens{a}_{\\alpha}$ \ncan be constructed by taking \nthe standard computation tree of $\\Ldens{a}_{\\lceil \\alpha \\rceil}$ \nand independently erasing the observation associated to each variable leaf node with probability\n$\\lceil \\alpha \\rceil-\\alpha$. It follows that we can convert the computation tree of \n$\\Ldens{a}_{\\alpha}$ to that of $\\Ldens{a}_{\\alpha-1}$ by erasing all\nobservations at the leaf nodes and by independently erasing\neach observation in the second (from the bottom) row of variable nodes\nwith probability $\\lceil \\alpha \\rceil-\\alpha$.\nThe same statement is true for $\\Ldens{b}_{\\alpha}$.\nIf $\\lim_{\\ell \\rightarrow \\infty} \\entropy(\\Ldens{a}_{\\ell})=0$, i.e., \nif \\BP\\\ndecoding is successful in the limit of large blocklengths, then\nthe families are both complete.\n\n\\begin{example}[Density Evolution and Interpolation]\nConsider transmission over the $\\ensuremath{\\text{BSC}}(0.07)$ using a\n$(3, 6)$-regular ensemble. 
Fig.~\\ref{fig:debsc} depicts\nthe density evolution process for this case.\n\\begin{figure}[htp]\n\\setlength{\\unitlength}{0.5bp}%\n\\begin{center}\n\\begin{picture}(740,170)\n\\put(0,0)\n{\n\\put(0,0){\\includegraphics[scale=0.5]{de1}}\n\\put(120,50){\\includegraphics[scale=0.5]{b1}}\n\\put(0,120){\\includegraphics[scale=0.5]{a0}}\n\\put(50, 110){\\makebox(0,0)[c]{\\tiny $a_{0}$}}\n\\put(170, 40){\\makebox(0,0)[c]{\\tiny $b_{1}$}}\n\\put(50, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n}\n\\put(260,0)\n{\n\\put(0,0){\\includegraphics[scale=0.5]{de2}}\n\\put(120,50){\\includegraphics[scale=0.5]{b2}}\n\\put(0,120){\\includegraphics[scale=0.5]{a1}}\n\\put(50, 110){\\makebox(0,0)[c]{\\tiny $a_{1}$}}\n\\put(170, 40){\\makebox(0,0)[c]{\\tiny $b_{2}$}}\n\\put(50, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n}\n\\put(500,0)\n{\n\\put(0,0){\\includegraphics[scale=0.5]{de25}}\n\\put(120,50){\\includegraphics[scale=0.5]{b13}}\n\\put(0,120){\\includegraphics[scale=0.5]{a12}}\n\\put(50, 110){\\makebox(0,0)[c]{\\tiny $a_{12}$}}\n\\put(170, 40){\\makebox(0,0)[c]{\\tiny $b_{13}$}}\n\\put(50, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n}\n\\end{picture}\n\\end{center}\n\\caption{\\label{fig:debsc} Density evolution for $(3, 6)$-regular ensemble over $\\ensuremath{\\text{BSC}}(0.07)$.}\n\\end{figure}\nThis process gives rise to the sequences of densities $\\{\\Ldens{a}_{\\ell}\\}_{\\ell =0}^{\\infty}$,\nand $\\{ \\Ldens{b}_{\\ell}\\}_{\\ell=1}^{\\infty}$. 
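These sequences can be approximated numerically by population dynamics, a Monte Carlo stand-in for exact (quantized) density evolution in which each $L$-density is represented by a large sample of LLR values. The sketch below is our own; population size, iteration count and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, dv, dc = 0.07, 3, 6                        # BSC(0.07), (3,6)-regular ensemble
llr = np.log((1 - p) / p)                     # channel LLR magnitude
N = 200_000                                   # population (sample) size

# Under the all-zero codeword assumption the channel L-density has
# mass 1-p at +llr and mass p at -llr; a_0 = c.
a = np.where(rng.random(N) < p, -llr, llr)
for _ in range(60):
    # check-node update: b = 2 atanh( prod_{i=1}^{dc-1} tanh(a_i / 2) )
    t = np.prod(np.tanh(a[rng.integers(0, N, (N, dc - 1))] / 2), axis=1)
    b = 2 * np.arctanh(np.clip(t, -1 + 1e-15, 1 - 1e-15))
    # variable-node update: a = channel LLR + sum of dv-1 check messages
    ch = np.where(rng.random(N) < p, -llr, llr)
    a = ch + b[rng.integers(0, N, (N, dv - 1))].sum(axis=1)

err = np.mean(a < 0) + 0.5 * np.mean(a == 0)  # error probability of the final a
```

Since $0.07$ lies below the BP threshold of the $(3,6)$-regular ensemble over the BSC, the sampled error probability collapses to essentially zero, mirroring the progression of densities in the figure.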
Fig.~\\ref{fig:interpolation} shows\nthe interpolation of these sequences for the choices $\\alpha=1.0, 0.95, 0.9$ and $0.8$,\nas well as the complete family.\n\\begin{figure}[htp]\n\\setlength{\\unitlength}{0.6bp}%\n\\begin{center}\n\\begin{picture}(650,110)\n\\put(0,0){\\includegraphics[scale=0.6]{de25}}\n\\put(50, 102){\\makebox(0,0)[b]{\\tiny $\\alpha=1.0$}}\n\\put(50, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n\\put(130,0){\\includegraphics[scale=0.6]{de52}}\n\\put(180, 102){\\makebox(0,0)[b]{\\tiny $\\alpha=0.95$}}\n\\put(180, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(108, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n\\put(260,0){\\includegraphics[scale=0.6]{de53}}\n\\put(310, 102){\\makebox(0,0)[b]{\\tiny $\\alpha=0.9$}}\n\\put(310, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(258, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n\\put(390,0){\\includegraphics[scale=0.6]{de54}}\n\\put(440, 102){\\makebox(0,0)[b]{\\tiny $\\alpha=0.8$}}\n\\put(440, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(388, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n\\put(520,0){\\includegraphics[scale=0.6]{de55}}\n\\put(570, -2){\\makebox(0,0)[t]{\\tiny $\\entropy(\\Ldens{a})$}}\n\\put(518, 50){\\makebox(0,0)[r]{\\tiny \\rotatebox{90}{$\\entropy(\\Ldens{b})$}}}\n\\end{picture}\n\\end{center}\n\\caption{\\label{fig:interpolation} Interpolation of densities.}\n\\end{figure}\n\\end{example}\n\nAs a second ingredient we recall from \\cite{MMRU04} the definition of GEXIT functions. \nThese GEXIT functions fulfill the Area Theorem for the case of general\nBMS channels.\nTo date, GEXIT functions have mainly been\nused to derive upper bounds on the \\MAP\\ threshold of iterative\ncoding systems, see e.g., \\cite{MMRU04,MMRU05}. 
Here we will apply them\nto the components of LDPC ensembles.\n\n\\begin{definition}[The GEXIT Functional]\nGiven two families of $L$-densities\n$\\{\\Ldens{c}_\\cp\\}$ and $\\{\\Ldens{a}_\\cp\\}$ parameterized by $\\epsilon$ define\nthe GEXIT functional $\\gentropy(\\Ldens{c}_{\\cp}, \\Ldens{a}_{\\cp})$ by\n\\begin{align*}\n\\gentropy(\\Ldens{c}_{\\cp}, \\Ldens{a}_{\\cp}) & =\n\\int_{-\\infty}^{\\infty} \\Ldens{a}_{\\cp}(z) \\gexitkl {\\Ldens{c}_{\\cp}} z \\text{d}z,\n\\end{align*}\nwhere\n\\begin{align*}\n\\gexitkl {\\Ldens{c}_{\\cp}} z\n& =\n\\frac{\\int_{-\\infty}^{\\infty} \\frac{\\text{d} \\Ldens{c}_{\\cp}(w)}{\\text{d} \\cp} \n\\log(1+e^{-z-w}) \\text{d}w}{\n\\int_{-\\infty}^{\\infty} \\frac{\\text{d} \\Ldens{c}_{\\cp}(w)}{\\text{d} \\cp}\n\\log(1+e^{-w}) \\text{d}w}.\n\\end{align*}\nNote that the kernel is normalized not with respect to $d \\epsilon$ but\nwith respect to $d \\ensuremath{{\\tt{h}}}$, i.e., with respect to changes in the entropy.\nThe families are required to be smooth in the sense that \n$\\{\\entropy(\\Ldens{c}_{\\cp}), \\gentropy(\\Ldens{c}_{\\cp}, \\Ldens{a}_{\\cp})\\}$\nforms a piecewise continuous curve.\n\\end{definition}\n\\begin{lemma}[GEXIT and Dual GEXIT Function]\n\\label{lem:dualgexit}\nConsider a binary code $C$ and transmission over a complete\nfamily of BMS channels characterized by their family of $L$-densities\n$\\{\\Ldens{c}_\\cp\\}$. Let $\\{\\Ldens{a}_{\\cp}\\}$ denote\nthe corresponding family of (average) extrinsic \\MAP\\ densities.\nThen the standard GEXIT curve is given in parametric form by\n$\\{\\entropy(\\Ldens{c}_{\\cp}), \\gentropy(\\Ldens{c}_{\\cp}, \\Ldens{a}_{\\cp})\\}$.\nThe {\\em dual}\nGEXIT curve is defined by \n$\\{\\gentropy(\\Ldens{a}_{\\cp}, \\Ldens{c}_{\\cp}), \\entropy(\\Ldens{a}_{\\cp})\\}$. 
\nBoth the standard and the dual GEXIT curve have an area equal to \n$r(C)$, the rate of the code.\n\\end{lemma}\nDiscussion:\nNote that both curves are ``comparable'' in that their first component measures\nthe channel $\\Ldens{c}$ and their second component measures the \\MAP\\ density\n$\\Ldens{a}$. The difference between the two \nlies in the choice of measure which is applied to each component.\n\n\\begin{proof}\nConsider the entropy\n$\\entropy(\\Ldens{c}_{\\cp} \\star \\Ldens{a}_{\\cp})$. We have\n\\begin{align*}\n\\entropy(\\Ldens{c}_{\\cp} \\star \\Ldens{a}_{\\cp}) & =\n\\int_{-\\infty}^{\\infty} \\Bigl(\\int_{-\\infty}^{\\infty}\n\\Ldens{c}_{\\cp}(w) \\Ldens{a}_{\\cp}(v-w) \\text{d}w \\Bigr) \\log(1+e^{-v}) \\text{d}v \\\\\n& = \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}\n\\Ldens{c}_{\\cp}(w) \\Ldens{a}_{\\cp}(z) \\log(1+e^{-w-z}) \\text{d}w \\text{d}z.\n\\end{align*}\nConsider now $\\frac{\\text{d} \\entropy(\\Ldens{c}_{\\cp} \\star \\Ldens{a}_{\\cp})}{\\text{d} \\cp}$.\nUsing the previous representation we get\n\\begin{align*}\n\\frac{\\text{d} \\entropy(\\Ldens{c}_{\\cp} \\star \\Ldens{a}_{\\cp})}{\\text{d} \\cp} & =\n\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}\n\\frac{\\text{d}\\Ldens{c}_{\\cp}(w)}{\\text{d} \\cp} \\Ldens{a}_{\\cp}(z) \\log(1+e^{-w-z}) \\text{d}w \\text{d}z + \\\\\n& \\phantom{=} \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty}\n\\Ldens{c}_{\\cp}(w) \\frac{\\text{d} \\Ldens{a}_{\\cp}(z)}{\\text{d} \\cp} \\log(1+e^{-w-z}) \\text{d}w \\text{d}z.\n\\end{align*}\nThe first expression can be identified with the standard GEXIT curve \nexcept that it is parameterized by a generic parameter $\\cp$.\nThe second expression is essentially the same, but the roles of\nthe two densities are exchanged.\n\nNow integrate this relationship over the whole range of $\\cp$ and\nassume that this range goes from ``perfect'' (channel) to ``useless''. \nThe integral on the left clearly equals 1. 
To perform the integrals\non the right-hand side, reparameterize the first expression with respect to \n$\\ensuremath{{\\tt{h}}} \\defas \\int_{-\\infty}^{\\infty} \\Ldens{c}_{\\cp}(w) \\log(1+e^{-w}) \\text{d} w$\nso that it becomes the standard GEXIT curve given by \n$\\{\\entropy(\\Ldens{c}_{\\cp}), \\gentropy(\\Ldens{c}_{\\cp}, \\Ldens{a}_{\\cp})\\}$.\nIn the same manner reparameterize the second expression by\n$\\ensuremath{{\\tt{h}}} \\defas \\int_{-\\infty}^{\\infty} \\Ldens{a}_{\\cp}(w) \\log(1+e^{-w}) \\text{d} w$\nso that it becomes the curve given by\n$\\{\\entropy(\\Ldens{a}_{\\cp}), \\gentropy(\\Ldens{a}_{\\cp}, \\Ldens{c}_{\\cp})\\}$.\nSince the sum of the two areas equals one and the area under the \nstandard GEXIT curve equals $r(C)$, it follows that the area under\nthe second curve equals $1-r(C)$. Finally, note that if we consider the inverse\nof the second curve by exchanging the two coordinates, i.e., if we consider the\ncurve \n$\\{\\gentropy(\\Ldens{a}_{\\cp}, \\Ldens{c}_{\\cp}), \\entropy(\\Ldens{a}_{\\cp})\\}$,\nthen the area under this curve is equal to $1-(1-r(C))=r(C)$, as claimed. \n\\end{proof}\n\\begin{example}[EXIT Versus GEXIT]\nFig.~\\ref{fig:exitversusgexit} compares the EXIT function to the\nGEXIT function for the $[3,1,3]$ repetition code and the $[6,5,2]$ single parity-check\ncode when transmission takes place over the \\ensuremath{\\text{BSC}}. As we can see, the two curves\nare similar but distinct. 
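The distinction can be quantified through the areas. The following numerical sketch (our own illustration, not a computation from the text) evaluates the extrinsic EXIT curve of the $[3,1,3]$ repetition code: over the BEC its area equals the rate $1/3$, whereas over the BSC it is strictly larger, so the EXIT area cannot equal the rate for both channel families.

```python
import numpy as np

def trap(y, x):                 # simple trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * (x[1:] - x[:-1])) / 2)

def h2(e):                      # binary entropy function (bits)
    return -e * np.log2(e) - (1 - e) * np.log2(1 - e)

# Extrinsic EXIT value of one bit of the [3,1,3] repetition code over BSC(e):
# the extrinsic L-density is the two-fold convolution of the channel density,
# i.e. atoms at +2a, 0, -2a with a = log((1-e)/e), under the all-zero assumption.
def exit_rep3_bsc(e):
    a = np.log((1 - e) / e)
    return ((1 - e) ** 2 * np.log2(1 + np.exp(-2 * a))
            + 2 * e * (1 - e) * 1.0
            + e ** 2 * np.log2(1 + np.exp(2 * a)))

eps = np.linspace(1e-6, 0.5, 20001)
h = h2(eps)                                  # channel entropy, x-axis of the chart
area_bsc = trap(exit_rep3_bsc(eps), h)

# Over the BEC the extrinsic EXIT curve is h^2; its area is exactly 1/3 = rate.
hh = np.linspace(0.0, 1.0, 20001)
area_bec = trap(hh ** 2, hh)
```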
In particular note that the areas\nunder the GEXIT curves are equal to the rate of the codes but that this is not\ntrue for the EXIT functions.\n\\begin{figure}[htp]\n\\setlength{\\unitlength}{1.0bp}%\n\\begin{center}\n\\begin{picture}(300,120)\n\\put(0,0)\n{\n\\put(0,0){\\includegraphics[scale=1.0]{gexitchartrep}}\n\\put(90,30){\\makebox(0,0)[c]{\\small $\\frac13$}}\n\\put(60, -2){\\makebox(0,0)[t]{\\small $\\entropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}})$}}\n\\put(-2, 60){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\entropy(\\Ldens{a}_{\\ensuremath{{\\tt{h}}}}), \\gentropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}}, \\Ldens{a}_{\\ensuremath{{\\tt{h}}}})$}}}\n\\put(-2, -2){\\makebox(0,0)[rt]{\\small $0$}}\n\\put(120,-2){\\makebox(0,0)[t]{\\small $1$}}\n\\put(-2,120){\\makebox(0,0)[r]{\\small $1$}}\n\\put(60,80){\\makebox(0,0)[b]{\\small $[3, 1, 3]$}}\n}\n\\put(180,0)\n{\n\\put(0,0){\\includegraphics[scale=1.0]{gexitchartspc}}\n\\put(90,30){\\makebox(0,0)[c]{\\small $\\frac56$}}\n\\put(60, -2){\\makebox(0,0)[t]{\\small $\\entropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}})$}}\n\\put(-2, 60){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\entropy(\\Ldens{a}_{\\ensuremath{{\\tt{h}}}}), \\gentropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}}, \\Ldens{a}_{\\ensuremath{{\\tt{h}}}})$}}}\n\\put(-2, -2){\\makebox(0,0)[rt]{\\small $0$}}\n\\put(120,-2){\\makebox(0,0)[t]{\\small $1$}}\n\\put(-2,120){\\makebox(0,0)[r]{\\small $1$}}\n\\put(60,40){\\makebox(0,0)[b]{\\small $[6, 5, 2]$}}\n}\n\\end{picture}\n\\end{center}\n\\caption{\\label{fig:exitversusgexit} A comparison of the EXIT with the GEXIT function for the\n$[3,1,3]$ and the $[6, 5, 2]$ code.} \n\\end{figure}\n\\end{example}\n\\begin{example}[GEXIT Versus Dual GEXIT]\nFig.~\\ref{fig:gexitanddualgexit} shows the standard\nGEXIT function and the dual GEXIT function for the $[5, 4, 2]$ code\nand transmission over the $\\ensuremath{\\text{BSC}}$. 
Although the two curves have quite\ndistinct shapes, the area under the two curves is the same.\n\\begin{figure}[htp]\n\\setlength{\\unitlength}{1.0bp}%\n\\begin{center}\n\\begin{picture}(400,120)\n\\put(0,0)\n{\n\\put(0,0){\\includegraphics[scale=1.0]{dualgexitbsc1}}\n\\put(60, -2){\\makebox(0,0)[t]{\\small $\\entropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}})$}}\n\\put(-2, 60){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\gentropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}}, \\Ldens{a}_{\\ensuremath{{\\tt{h}}}})$}}}\n\\put(60, 30){\\makebox(0,0)[t]{\\small standard GEXIT}}\n\\put(-2, -2){\\makebox(0,0)[rt]{\\small $0$}}\n\\put(120,-2){\\makebox(0,0)[t]{\\small $1$}}\n\\put(-2,120){\\makebox(0,0)[r]{\\small $1$}}\n}\n\\put(140,0)\n{\n\\put(0,0){\\includegraphics[scale=1.0]{dualgexitbsc2}}\n\\put(60, -2){\\makebox(0,0)[t]{\\small $\\gentropy(\\Ldens{a}_{\\ensuremath{{\\tt{h}}}}, \\Ldens{c}_{\\ensuremath{{\\tt{h}}}})$}}\n\\put(-2, 60){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\entropy(\\Ldens{a}_{\\ensuremath{{\\tt{h}}}})$}}}\n\\put(60, 30){\\makebox(0,0)[t]{\\small dual GEXIT}}\n\\put(-2, -2){\\makebox(0,0)[rt]{\\small $0$}}\n\\put(120,-2){\\makebox(0,0)[t]{\\small $1$}}\n\\put(-2,120){\\makebox(0,0)[r]{\\small $1$}}\n}\n\\put(280,0)\n{\n\\put(0,0){\\includegraphics[scale=1.0]{dualgexitbsc3}}\n\\put(60, -2){\\makebox(0,0)[t]{\\small $\\entropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}})$, $\\gentropy(\\Ldens{a}_{\\ensuremath{{\\tt{h}}}}, \\Ldens{c}_{\\ensuremath{{\\tt{h}}}})$}}\n\\put(-2, 60){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\gentropy(\\Ldens{c}_{\\ensuremath{{\\tt{h}}}}, \\Ldens{a}_{\\ensuremath{{\\tt{h}}}})$,$\\entropy(\\Ldens{a}_{\\ensuremath{{\\tt{h}}}})$}}}\n\\put(60, 30){\\makebox(0,0)[t]{\\small both GEXIT}}\n\\put(-2, -2){\\makebox(0,0)[rt]{\\small $0$}}\n\\put(120,-2){\\makebox(0,0)[t]{\\small $1$}}\n\\put(-2,120){\\makebox(0,0)[r]{\\small $1$}}\n}\n\\end{picture}\n\\end{center}\n\\caption{\\label{fig:gexitanddualgexit} Standard and dual GEXIT function of $[5, 
4, 2]$\ncode and transmission over the $\\ensuremath{\\text{BSC}}$.}\n\\end{figure}\n\\end{example}\n\\begin{lemma}\nConsider a degree distribution pair $(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}})$\nand transmission over a BMS channel characterized by its\n$L$-density $\\Ldens{c}$ so that density evolution converges to \n$\\Delta_{\\infty}$. \nLet $\\{\\Ldens{a}_{\\alpha}\\}_{\\alpha=-1}^{\\infty}$\nand $\\{\\Ldens{b}_{\\alpha}\\}_{\\alpha=0}^{\\infty}$ denote the interpolated\nfamilies as defined in Definition \\ref{def:interpolation}.\n\nThen the two GEXIT curves parameterized by\n\\begin{align*}\n\\{ \\entropy(\\Ldens{a}_{\\alpha}), \n\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1}) \\}, \\tag*{GEXIT of check nodes} \\\\\n\\{ \\entropy(\\Ldens{a}_{\\alpha}),\n\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha}) \\}, \\tag*{inverse of dual GEXIT of variable nodes}\n\\end{align*}\ndo not overlap and faithfully represent density evolution.\nFurther, the area under the ``check-node'' GEXIT function\nis equal to $1-\\int \\!\\ensuremath{{\\rho}}$ and the area to the left of the\n``inverse dual variable node'' GEXIT function is equal to $\\entropy(\\Ldens{c}) \\int \\!\\ensuremath{{\\lambda}}$.\nIt follows that $r(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}}) \\leq 1-\\entropy(\\Ldens{c})$, i.e., \nthe transmission rate cannot exceed the Shannon limit.\n\nThis implies that transmission approaching capacity requires\na perfect matching of the two curves.\n\\end{lemma}\n\\begin{proof}\nFirst note that $\\{ \\entropy(\\Ldens{a}_{\\alpha}),\n\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1}) \\}$\nis the standard GEXIT curve representing the action\nof the check nodes: $\\Ldens{a}_{\\alpha}$ corresponds to\nthe density of the messages {\\em entering} the check nodes and\n$\\Ldens{b}_{\\alpha+1}$ represents the density of the corresponding \noutput messages.\nOn the other hand,\n$\\{ 
\\entropy(\\Ldens{a}_{\\alpha}),\n\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha}) \\}$\nis the inverse of the dual GEXIT curve \ncorresponding to the action at the variable nodes:\nnow the input density to the variable nodes is \n$\\Ldens{b}_{\\alpha}$ and $\\Ldens{a}_{\\alpha}$ denotes the \ncorresponding output density.\n\nThe fact that the two curves do not overlap can be seen as follows.\nFix an entropy value. This entropy value corresponds to a\ndensity $\\Ldens{a}_{\\alpha}$ for a unique value of $\\alpha$.\nThe fact that\n$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha}) \\geq\n\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1})$ now follows from\nthe fact that $\\Ldens{b}_{\\alpha+1} \\prec \\Ldens{b}_{\\alpha}$ and\nthat for any symmetric $\\Ldens{a}_{\\alpha}$ this ordering\nis preserved under the GEXIT functional.\n\nThe statements regarding the areas of the two curves \nfollow in a straightforward manner from the GAT and Lemma \\ref{lem:dualgexit}.\nThe bound on the achievable rate follows in the same manner as for\nthe BEC: the total area of the GEXIT box equals one and the two curves do not\noverlap and have areas $1-\\int \\ensuremath{{\\rho}}$ and $\\entropy(\\Ldens{c}) \\int \\ensuremath{{\\lambda}}$, respectively.\nIt follows that\n$1-\\int \\!\\ensuremath{{\\rho}} + \\entropy(\\Ldens{c}) \\int \\!\\ensuremath{{\\lambda}} \\leq 1$,\nwhich is equivalent to the claim $r(\\ensuremath{{\\lambda}}, \\ensuremath{{\\rho}}) \\leq 1-\\entropy(\\Ldens{c})$.\n\\end{proof}\n\nWe see that the matching condition still holds even for general channels.\nThere are a few important differences between the general case and the simple\ncase of transmission over the BEC. For the BEC, the intermediate densities\nare always BEC densities, independent of the degree distribution.\nThis of course enormously simplifies the task. Further, for the BEC, given\nthe two EXIT curves, the progress of density evolution is simply given\nby a staircase function bounded by the two EXIT curves. 
For the general case,\nthis staircase function still has vertical pieces but the ``horizontal''\npieces are in general at an angle. This is true since the $y$-axis for\nthe ``check node'' step measures \n$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1})$, but\nin the subsequent ``inverse variable node'' step \nit measures \n$\\gentropy(\\Ldens{a}_{\\alpha+1}, \\Ldens{b}_{\\alpha+1})$.\nTherefore, one should think of two sets of labels on the $y$-axis,\none measuring $\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1})$,\nand the second one measuring $\\gentropy(\\Ldens{a}_{\\alpha+1}, \\Ldens{b}_{\\alpha+1})$. The ``horizontal'' step then consists of first\nswitching from the first $y$-axis to the second, so that the labels\ncorrespond to the same density $\\Ldens{b}$ and then drawing a horizontal\nline until it crosses the ``inverse variable node'' GEXIT curve.\nThe ``vertical'' step stays as before, i.e., it really corresponds to\ndrawing a vertical line. All this is certainly best clarified by\na simple example.\n\\begin{example}[$(3, 6)$-Regular Ensemble and Transmission over $\\ensuremath{\\text{BSC}}$]\nConsider the $(3, 6)$-regular ensemble and transmission over the $\\ensuremath{\\text{BSC}}(0.07)$.\nThe corresponding illustrations are shown in Fig.~\\ref{fig:componentgexit}.\nThe top-left figure shows the standard GEXIT curve for the check node side.\nThe top-right figure shows the dual GEXIT curve corresponding to the\nvariable node side. In order to use these two curves in the same figure,\nit is convenient to consider the inverse function for the variable\nnode side. This is shown in the bottom-left figure. In the bottom-right\nfigure both curves are shown together with the ``staircase'' like function\nwhich represents density evolution. 
As we see, the two curves do not overlap\nand both have the correct areas.\n\\begin{figure}[hbt]\n\\centering\n\\setlength{\\unitlength}{1.5bp}\n\\begin{picture}(220,220)\n\\put(0,120){\n\\put(0,0){\\includegraphics[scale=1.5]{componentgexit1}}\n\\put(50, -2){\\makebox(0,0)[t]{\\small $\\entropy(\\Ldens{a}_{\\alpha})$}}\n\\put(102, 50){\\makebox(0,0)[l]{\\small \\rotatebox{90}{$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1})$}}}\n\\put(50, 40){\\makebox(0,0)[t]{\\small GEXIT: check nodes}}\n\\put(50, 30){\\makebox(0,0)[t]{\\small $\\text{area}=\\frac56$}}\n\\put(50, 10){\\makebox(0,0)[c]{$\\Ldens{b}_{\\alpha+1} = \\sum_{i} \\ensuremath{{\\rho}}_i \\Ldens{a}_{\\alpha}^{\\boxast (i-1)} $}}\n}\n\\put(120, 120)\n{\n\\put(0,0){\\includegraphics[scale=1.5]{componentgexit2}}\n\\put(50, -2){\\makebox(0,0)[t]{\\small {$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha})$}}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\entropy(\\Ldens{a}_{\\alpha})$}}}\n\\put(50, 70){\\makebox(0,0)[t]{\\small dual GEXIT: variable nodes}}\n\\put(50, 60){\\makebox(0,0)[t]{\\small $\\text{area}=\\frac13 h(0.07)$}}\n\\put(102, 36.6){\\makebox(0,0)[l]{\\small \\rotatebox{90}{$h(0.07) \\approx 0.366$}}}\n\\put(50, 40){\\makebox(0,0)[c]{$\\Ldens{a}_{\\alpha} = \\Ldens{c} \\star \\sum_{i} \\ensuremath{{\\lambda}}_i \\Ldens{b}_{\\alpha}^{\\star (i-1)} $}}\n}\n\\put(0,0)\n{\n\\put(0,0){\\includegraphics[scale=1.5]{componentgexit3}}\n\\put(50, -2){\\makebox(0,0)[t]{\\small $\\entropy(\\Ldens{a}_{\\alpha})$}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha})$}}}\n\\put(50, 30){\\makebox(0,0)[t]{\\small inverse of dual GEXIT:}}\n\\put(50, 20){\\makebox(0,0)[t]{\\small variable nodes}}\n\\put(36.6, 102){\\makebox(0,0)[b]{\\small $h(0.07) \\approx 0.366$}}\n}\n\\put(120,0)\n{\n\\put(0,0){\\includegraphics[scale=1.5]{componentgexit4}}\n\\put(50, -2){\\makebox(0,0)[t]{\\small 
$\\entropy(\\Ldens{a}_{\\alpha})$}}\n\\put(-2, 50){\\makebox(0,0)[r]{\\small \\rotatebox{90}{$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha})$}}}\n\\put(102, 50){\\makebox(0,0)[l]{\\small \\rotatebox{90}{$\\gentropy(\\Ldens{a}_{\\alpha}, \\Ldens{b}_{\\alpha+1})$}}}\n\\put(36.6, 102){\\makebox(0,0)[b]{\\small $h(0.07) \\approx 0.366$}}\n}\n\\end{picture}\n\\caption{\n\\label{fig:componentgexit}\nFaithful representation of density evolution by two non-overlapping component-wise\nGEXIT functions which represent the ``actions'' of the check nodes and variable nodes,\nrespectively. The area between the two curves is proportional to the additive\ngap to capacity.\n}\n\\end{figure}\n\\end{example}\n\nAs remarked earlier, one potential use of the matching condition\nis to find capacity-approaching degree distribution pairs. Let us\nquickly outline a further such potential application. Assuming that\nwe have found a sequence of capacity-achieving degree distributions,\nhow does the number of required iterations scale as we approach capacity?\nIt has been conjectured that the number of required iterations\nscales like $1\/\\delta$, where $\\delta$ is the gap to capacity.\nThis conjecture is based on the geometric picture which the\nmatching condition implies. To make things simple, imagine\nthe two GEXIT curves as two parallel lines, let us say both\nat a 45 degree angle, a certain distance\napart, and think of density evolution as a staircase function.\nFrom the previous results, the area between the lines is proportional\nto $\\delta$. Therefore, if we halve $\\delta$, the distance between\nthe lines has to be halved and one would expect that we need\ntwice as many steps. Obviously, the above discussion was \nbased on a number of simplifying assumptions. It remains to \nbe seen if this conjecture can be proven rigorously. 
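The area accounting for the $(3,6)$-regular ensemble over the BSC$(0.07)$ can be reproduced in a few lines (a sketch of ours; variable names are our own). It checks the figure's values $1-\int\rho = 5/6$ and $h(0.07)\int\lambda \approx 0.122$, the rate bound, and the identity that the area between the two curves equals $\int\lambda$ times the additive gap to capacity.

```python
import math

def h2(e):                      # binary entropy function (bits)
    return -e * math.log2(e) - (1 - e) * math.log2(1 - e)

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5 (edge perspective)
int_lam, int_rho = 1 / 3, 1 / 6        # int_0^1 lambda, int_0^1 rho
rate = 1 - int_rho / int_lam           # design rate = 1/2

H = h2(0.07)                           # channel entropy of the BSC(0.07)
area_check = 1 - int_rho               # area under the check-node GEXIT curve
area_var = H * int_lam                 # area left of the inverse dual GEXIT curve

# both curves must fit into the unit square without overlapping:
slack = 1 - area_check - area_var      # area between the two curves
gap = (1 - H) - rate                   # additive gap to capacity
```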
\n\n\n\n\\section*{Acknowledgments}\n\nThe work of A.~Montanari was partially supported by the European Union under \nthe project EVERGROW.\n\\bibliographystyle{ieeetr} \n\n\\newcommand{\\SortNoop}[1]{}\n\\subsection*{Acknowledgements}\n\nThe first author gratefully acknowledges support for this work by the Swiss National Science Foundation, Project 146356.\n\n\\newpage\n\\input{introduction}\n\n\\input{prelim}\n\n\\input{benchmark}\n\n\\input{approx}\n\n\\input{conclusion}\n\n\\input{int_est}\n\n\\pagestyle{plain}\n\n\\printbibliography[heading=bibintoc]\n\n\n\\end{document}\n\n\n\\section{Approximation Properties}\\label{sec:approx}\n\nCompared to the discussion in \\autoref{sec:introduction}, the only addition we are now able to make is to formulate the goals in terms of conditions on the ridgelet coefficient sequence, as well as a finer statement about the relation between the decay of $f$ \\eqref{eq:f_glob_decay} and the deviation $\\delta$ in \\autoref{th:approx_intro}. \\cite[Thm. 4.1]{mutilated} shows that for a function $f\\in H^t(\\bbR^d\\setminus \\{h_i\\})$, supported (compactly) on $[0,1]^d$, the ridgelet coefficient sequence $\\parens*{\\inpr*{\\phi_{j,\\ell,k},f}}_{j,\\ell,k}$ belongs to the $\\ell^p_w$ space with the best possible $p=p^*:=\\parens*{\\frac td + \\frac 12}^{-1}$, which implies the optimal $N$-term approximation rate $\\frac td$ in the $L^2$-norm.\n\nIn contrast, we will show that the coefficient sequence $\\inpr{\\Phi,f}:=\\parens*{\\inpr{\\varphi_\\lambda,f}}_\\lambda$ with $f\\in H^t(\\bbR^d\\setminus \\{h_i\\})$ satisfying the following polynomial decay condition,\n\\begin{align}\n\t\\abs{f(\\vec x)} \\lesssim \\reg{\\vec x}^{-2m},\n\\end{align}\nis in $\\ell^{p^*+\\frac dm}_w$. 
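The way $\ell^p_w$-membership with $p^* = (\frac td + \frac 12)^{-1}$ translates into the $N$-term rate $\frac td$ can be illustrated with a toy computation (our own model sequence, not part of the argument): take coefficients whose $n$-th largest magnitude is $n^{-1/p^*}$, the borderline member of $\ell^{p^*}_w$, and measure the $L^2$ error of keeping the $N$ largest terms.

```python
import numpy as np

# Model: for t = 1, d = 2 we get p* = (t/d + 1/2)^{-1} = 1, and keeping the
# N largest terms of a frame expansion whose sorted coefficients decay like
# n^{-1/p*} should give an l^2 tail (hence L^2 error) ~ N^{-t/d} = N^{-1/2}.
t, d = 1.0, 2.0
p_star = 1.0 / (t / d + 0.5)

n = np.arange(1, 10 ** 6 + 1, dtype=float)
c = n ** (-1.0 / p_star)                       # sorted coefficient magnitudes

tail = np.sqrt(np.cumsum(c[::-1] ** 2)[::-1])  # tail[N] = l^2 norm of c[N:]
Ns = np.array([10, 100, 1000, 10000])
errs = tail[Ns]                                # error after keeping N largest terms

slope = np.polyfit(np.log(Ns), np.log(errs), 1)[0]   # empirical decay exponent
```

The fitted slope comes out close to $-t/d$, matching the rate claimed for the optimal exponent $p^*$.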
In particular, if the above decay holds for all $m\\in\\bbN$ (which is satisfied for example if $f$ has exponential decay or compact support), then $\\inpr{\\Phi,f}\\in \\ell^{p^*+\\delta^*}_w$ for arbitrary $\\delta^*>0$.\n\nFinally, in the context of CDD-schemes (see \\autoref{ssec:cdd}), we need to consider $N$-term approximation with regard to the $\\Hs$-norms instead of the $L^2$-norm. Due to \\eqref{eq:RidgeletFrame}, this corresponds to bounding the $\\ell^p_w$-norm of the weighted coefficient sequence ${\\mathbf{W}}\\inpr{\\Phi,f}$, compare also with \\autoref{cor:approx} in this regard. Since the weights $w_{\\lambda}\\sim \\reg*{2^j \\vec s\\cdot \\vec s_\\jl}$ grow with scale $j$, this causes extra work as well.\n\n\\subsection{Main Theorem}\n\nWe begin straight away by formulating the goal of this section, \\autoref{th:approx}. Once we have proved this result, we will have satisfied ingredient \\ref{itm:ingr:4} from \\autoref{ssec:cdd}.\n\n\\begin{theorem}\\label{th:approx}\n\tLet $f$ be a function in $L^2(\\bbR^d)$ such that $f \\in H^t(\\bbR^d)$ apart from $N$ hyperplanes --- i.e.~of the form \\eqref{eq:Ht_except_hyp} --- such that the following decay condition (recalling $\\reg{y}=\\sqrt{1+|y|^2}$)\n\t%\n\t\\begin{align}\\label{eq:f_glob_decay}\n\t\t\\abs{f(\\vec x)} \\lesssim \\reg{\\vec x}^{-2m}\n\t\\end{align}\n\t%\n\tholds for a certain $m\\in \\bbN$. 
Then, if $\\kappa\\in H^{\\ceil{t}+\\frac d2}$ (for the terms involving $\\CS[f]$),\n\t%\n\t\\begin{align}\n\t\t\\inpr{\\Phi,f}_{L^2} &:= \\parens*{\\inpr{\\varphi_\\lambda,f}_{L^2}}_{\\lambda\\in\\Lambda}\\in \\ell^{p^* + \\frac dm}_w,\\\\\n\t\t{\\mathbf{W}}\\inpr{\\Phi,\\CS[f]}_{L^2} &:= \\parens*{w_\\lambda\\inpr{\\varphi_\\lambda,\\CS[f]}_{L^2}}_{\\lambda\\in\\Lambda}\\in \\ell^{p^* + \\frac dm}_w,\n\t\\end{align}\n\t%\n\ti.e.~the (weighted) ridgelet coefficient sequences belong to the $\\ell^p_w$-space with $p=p^* + \\frac dm$, where $p^*= (\\frac td + \\frac 12)^{-1}$ would be the best possible $p$ for $f\\in H^t(\\bbR^d)$, \\emph{even without singularities}(!), and the deviation from the optimal value\\footnote{For $\\inpr{\\Phi,f}$, the absence of ${\\mathbf{W}}$ is not completely without effect, in that the deviation from the optimal $p$ can be bounded by $\\frac{d}{2n}+\\varepsilon$ with arbitrarily small $\\varepsilon>0$.} decays linearly with $m$.\n\t\n\tIf the decay condition \\eqref{eq:f_glob_decay} holds for arbitrarily large $m\\in\\bbN$ --- which is satisfied for exponential decay or compact support, for example --- then, for arbitrary $\\delta^*>0$, $\\inpr{\\Phi,f}_{L^2} \\in \\ell^{p^* + \\delta^*}_w$ and ${\\mathbf{W}} \\inpr{\\Phi,\\CS[f]}_{L^2} \\in \\ell^{p^* + \\delta^*}_w$.\n\\end{theorem}\n\nWe split some essential parts of the proof into separate results, namely the \\emph{localisation in angle} of the ridgelet coefficients in \\autoref{prop:loc_angle} and --- after proving decay rates for (modified) ridgelets in space in \\autoref{lem:decay_ridgelet} --- the \\emph{localisation in space} for the response to the singularity in \\autoref{prop:loc_space}, as well as the general decay of the coefficients for functions satisfying \\eqref{eq:f_glob_decay} in \\autoref{lem:ridge_coeff_tail_decay}.\n\nBefore we continue, let us briefly state the following corollary of \\autoref{th:approx}, recalling the non-linear set $\\Sigma_N$ of functions 
being linear combinations of fewer than $N$ terms from $\Phi$ (from \autoref{def:nonlin_basics}).\n\n\begin{corollary}\label{cor:approx}\n\tFor $f \in H^t(\bbR^d\setminus \{h_i\})$ such that \eqref{eq:f_glob_decay} is satisfied for all $m\in\bbN$, we have, for arbitrary $\delta>0$,\n\t%\n\t\begin{align}\n\t\t&\inf_{g\in \Sigma_N(\Phi)} \norm{f-g}_{L^2} \lesssim N^{-\frac{t}{d}+\delta},\\\n\t\t&\inf_{g\in \Sigma_N(\Phi)} \norm{\CS[f]-g}_{\Hs} \lesssim N^{-\frac{t}{d}+\delta}.\n\t\end{align}\n\end{corollary}\n\n\begin{proof}\n\tThis statement is a direct consequence of \autoref{prop:wlp_Nterm_frame}, combined with \eqref{eq:RidgeletFrame} resp.~the fact that $\Phi$ is also a frame (without the weight ${\mathbf{W}}$) for $L^2$.\n\end{proof}\n\n\subsection{Localisation in Angle}\label{sec:loc_angle}\n\nIn \autoref{lem:fourier_decomp}, we considered how the Fourier transform of a function $f(\vec x)=H(\vec x\cdot \vec n-v) g(\vec x)$ splits into two parts, compare \eqref{eq:sing_fourier_rot}. Based on this decomposition, we will likewise split the coefficient sequence in two, where our main interest is with the \emph{singular} contribution arising from the cut-off.\n\nThe crucial property of ridgelets we want to explore in this section concerns the fact that large coefficients will only appear for ridgelets that are aligned with the singularity, and we are able to describe the decay of the coefficients in terms of the smoothness of $g$ and the angle between $\vec s_\jl$ and $\vec n$.\n\n\begin{proposition}\label{prop:loc_angle}\n\tLet $f(\vec x)=H(\vec x\cdot \vec n-v) g(\vec x)$ with $g\in H^t$, $t\in\bbN$. Furthermore, let $\kappa\in H^{t+\frac d2}$ and $\kappa\ge \gamma>0$. 
Then, the sequences\n\t%\n\t\\begin{align}\n\t\t\\alpha_\\lambda&:=\\inpr*{\\varphi_\\lambda,f}=\\inpr*{\\hat\\varphi_\\lambda,\\hat f}=\\alpha^\\mathrm{r}_\\lambda+\\alpha^\\mathrm{s}_\\lambda,\\\\\n\t\t\\beta_\\lambda&:=\\inpr*{\\varphi_\\lambda,\\CS[f]}=\\inpr*{\\hat\\varphi_\\lambda,\\CF[\\CS[f]]}= \\beta^\\mathrm{r}_\\lambda+\\beta^\\mathrm{s}_\\lambda,\n\t\\end{align}\n\t%\n\tcan be split into a \\emph{regular contribution} $\\alpha^\\mathrm{r}$ resp.~$\\beta^\\mathrm{r}$ (coming from the function) and a \\emph{singular contribution} $\\alpha^\\mathrm{s}$ resp.~$\\beta^\\mathrm{s}$ (coming from the cut-off). The regular part of both sequences shows decay with scale $j$ proportional to $t$,\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\ell,\\vec k} \\abs*{\\alpha^\\mathrm{r}_{\\smash[t]{{j\\!\\?\\?,\\?\\ell\\!\\?\\?,\\?\\vec k}}}}^2 &\\lesssim \\varepsilon_j^2 2^{-2jt}\\norm{g}_{H^t}^2,\\\\\n\t\t\\sum_{\\ell,\\vec k} \\abs*{w_{j\\!\\?\\?,\\?\\ell\\!\\?\\?,\\?\\vec k} \\beta^\\mathrm{r}_{\\smash[t]{{j\\!\\?\\?,\\?\\ell\\!\\?\\?,\\?\\vec k}}}}^2 &\\lesssim \\varepsilon_j^2 2^{-2jt}\\norm{g}_{H^t}^2, \\label{eq:gamma_est_reg}\n\t\\end{align}\n\t%\n\twhere $\\sum_j \\varepsilon_j^2 \\le 1$. Defining the angle $\\theta_{j\\!\\?\\?,\\ell}(\\vec n):=\\arccos (\\vec s_\\jl\\cdot\\vec n)$ that the vectors $\\vec s_\\jl$ enclose with $\\vec n$ and setting\n\t%\n\t\\begin{align}\\label{eq:Lambda_jr}\n\t\t\\Lambda^j_r(\\vec n):=\\begin{cases}\n\t\t\t\\set{\\ell}{2^{-r}\\le \\abs*{\\sin \\theta_{j\\!\\?\\?,\\ell}(\\vec n)} \\le 2^{-r+1}}, & 1\\le r1$. For $r=1$, we have $\\abs{\\sin\\theta}\\gtrsim1$ and thus, the integral in \\eqref{eq:branch_r1} can be estimated by a constant, and is trivially less than (a constant independent of $j$ times) $2^{r(2t-1)}$.\n\t\n\tFinally, for $r=j$, the support $P^j_j$ is contained in a cylinder aligned with the $\\xi_1$-axis (cf. \\cite[Prop. 
A.6]{compress}), and therefore\n\t%\n\t\begin{align}\n\t\t\int_{P^j_j} \abs*{\wh{g\normalr|_{h}} (\CP_{\vec e_1} \vec \xi)}^2 \d \vec \xi \le \int_{[-2^{j+1},2^{j+1}]\times B_{\bbR^{d-1}}(0,8)} \abs*{\wh{g\normalr|_{h}} (\CP_{\vec e_1} \vec \xi)}^2 \d \vec \xi \lesssim 2^j \norm*{\wh{g\normalr|_{h}}}_{L^2(\bbR^{d-1})}^2 \le 2^j \norm{g}_{H^{t}(\bbR^{d})}^2.\n\t\end{align}\n\t%\n\tReturning to \eqref{eq:gamma_est_ind_hyp} and collecting all the factors, we see that the latter two terms (combined into $\alpha^\mathrm{s}_\lambda$ resp.~$\beta^\mathrm{s}_\lambda$) satisfy \eqref{eq:gamma_est_sum_Lambda}, as claimed.\n\end{proof}\n\n\subsection{Localisation in Space}\label{sec:loc_space}\n\nSimilarly to the localisation in angle (i.e.~$\ell$), we require a localisation in space (i.e.~$\vec k$) for a function cut off across a hyperplane, which we will establish in this section. Before we are able to tackle the proof, we need to investigate the spatial decay of the ridgelets (and a modified variant) in physical space.\n\nFinally, after having dealt with the singular functions in \autoref{prop:loc_space}, we investigate how spatial decay of a general $L^2$-function transfers to the ridgelet coefficients in \autoref{lem:ridge_coeff_tail_decay}.\n\n\begin{lemma}\label{lem:decay_ridgelet}\n\tFor arbitrary $n\in\bbN$ and an arbitrary rotation $R\in \mathrm{SO}(d)$, we have that\n\t%\n\t\begin{align}\n\t\t\abs{\varphi_\lambda(\vec x)} &\lesssim \frac{2^{\frac j2}}{\reg*{U_{j\!\?\?,\ell}^{-1}\vec x-\vec k}^{2n}}, \label{eq:ridge_est} \\\n\t\t\abs[\Big]{\CF^{-1}\bracket[\Big]{\frac{\xi_1}{\abs*{\vec \xi}^2}\hat\varphi_\lambda (R \vec \xi)}(\vec x)} &\lesssim \frac{2^{-\frac j2}}{\reg*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}^{2n}},\label{eq:mod_ridge_est}\n\t\end{align}\n\t%\n\twhere the implicit constants depend only on $n$ (and the choice of window function used to construct 
$\\hat\\psi$).\n\\end{lemma}\n\n\\begin{proof}\n\tWe start by proving \\eqref{eq:mod_ridge_est}, inserting the definition and transforming by $\\vec \\xi = R^{-1}U_{j\\!\\?\\?,\\ell}^{-\\top}\\vec\\eta$, which yields (because, for rotations, $R^{-1}=R^{\\top}$)\n\t%\n\t\\begin{align}\n\t\t\\CF^{-1}\\bracket[\\Big]{\\frac{\\xi_1}{\\abs*{\\vec \\xi}^2}\\hat\\varphi_\\lambda (R\\vec \\xi)}(\\vec x)\n\t\t&= 2^{-\\frac{j}{2}}\\int \\frac{\\xi_1}{\\abs*{\\vec \\xi}^2} \\hat \\psi_{j\\!\\?\\?,\\ell}(R\\vec \\xi) \\exp\\parens*{2\\pi\\mathrm{i} (\\vec \\xi\\cdot \\vec x-R\\vec \\xi \\cdot U_{j\\!\\?\\?,\\ell} \\vec k)} \\d \\vec \\xi\\\\\n\t\t&= 2^{-\\frac{j}{2}}\\int \\frac{\\xi_1}{\\abs*{\\vec \\xi}^2} \\hat \\psi_{j\\!\\?\\?,\\ell}(R\\vec \\xi) \\exp\\parens*{2\\pi\\mathrm{i} R\\vec \\xi \\cdot U_{j\\!\\?\\?,\\ell}( U_{j\\!\\?\\?,\\ell}^{-1}R\\vec x- \\vec k)} \\d \\vec \\xi\\\\\n\t\t&=2^{\\frac{j}{2}}\\int \\frac{\\parens*{R^{-1}U_{j\\!\\?\\?,\\ell}^{-\\top} \\vec \\eta}_1}{\\abs*{U_{j\\!\\?\\?,\\ell}^{-\\top} \\vec \\eta}^2} \\hat \\psi_{({j\\!\\?\\?,\\ell})}(\\vec \\eta) \\exp\\parens*{2\\pi\\mathrm{i}\\vec \\eta \\cdot (U_{j\\!\\?\\?,\\ell}^{-1}R\\vec x-\\vec k)} \\d \\vec \\eta. \\label{eq:mod_ridge_est_trafo}\n\t\\end{align}\n\t%\n\tdue to the representation from \\autoref{assump:psi_smooth}. 
The idea now is to use integration by parts and\n\t%\n\t\begin{align}\label{eq:laplace_exp}\n\t\t\Delta_{\vec \eta} \exp\parens*{2\pi\mathrm{i} \vec \eta \cdot (U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k)} = - (2\pi)^2 \abs*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}^2 \exp\parens*{2\pi\mathrm{i} \vec \eta \cdot (U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k)}\n\t\end{align}\n\t%\n\tto generate sufficiently high powers of $\abs*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}$ in the denominator.\n\t\n\tThe support $U_{j\!\?\?,\ell}^{\top}P_{j\!\?\?,\ell} \subseteq \bracket*{\frac 14,2}\times B_{\bbR^{d-1}}(0,4)$ is bounded from above and below (compare again \cite[Prop. A.6]{compress}) and so we just have to bound the derivatives independently of $j$. Due to \autoref{assump:psi_smooth}, this is not a problem for the $\psi_{({j\!\?\?,\ell})}(\vec \eta)$. If we can show the same for the fraction in front, we will have the desired estimate due to the product rule. To this end, write\n\t%\n\t\begin{align}\n\t\td_{j\!\?\?,\ell}(\vec \eta):=\frac{\parens*{R^{-1}U_{j\!\?\?,\ell}^{-\top} \vec \eta}_1}{\abs*{U_{j\!\?\?,\ell}^{-\top} \vec \eta}^2} =\frac{\parens*{R^{-1}R_\jl^{-1} D_{2^{j}} \vec \eta}_1}{\abs*{D_{2^{j}} \vec \eta}^2} = \frac{2^jc_1\eta_1+c_2\eta_2+\ldots +c_d\eta_d}{2^{2j}\eta_1^2+\eta_2^2+\ldots +\eta_d^2}.\n\t\end{align}\n\t%\n\tDue to the fact that the factor $2^j$ only appears in connection with $\eta_1$, we only need to consider what happens when differentiating with respect to $\eta_1$, as all other derivatives just produce higher powers of the denominator without adding powers of $2^j$ in the numerator. 
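To see this bookkeeping at work, the first derivative can be written out directly by the quotient rule (a sanity check; the constants $c_1,\dots,c_d$ are as in the display above):

```latex
\begin{align}
	\Dpn{1}{\eta_1} d_{j\!\?\?,\ell}(\vec \eta)
	= \frac{2^jc_1\parens*{2^{2j}\eta_1^2+\ldots+\eta_d^2} - \parens*{2^jc_1\eta_1+\ldots+c_d\eta_d}\, 2^{2j+1}\eta_1}{\parens*{2^{2j}\eta_1^2+\eta_2^2+\ldots+\eta_d^2}^2},
\end{align}
```

where, on the support, $\eta_1\sim 1$, so the numerator is of order at most $2^{3j}$ while the denominator is of order $2^{4j}$ --- the quotient is again $\lesssim 2^{-j}$, in line with the general estimate below.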
By splitting the term into $\eta_1$ and the rest, respectively, and adding\n\t%\n\t\begin{align}\n\t\t0=-\mathrm{i} c_1\sqrt{\eta_2^2+\ldots+\eta_d^2}+\mathrm{i} c_1\sqrt{\eta_2^2+\ldots+\eta_d^2}\n\t\end{align}\n\t%\n\tin the numerator of the first term, we see that (since the quantity we're interested in is clearly real),\n\t%\n\t\begin{align}\n\t\t\Dpn{k}{\eta_1} d_{j\!\?\?,\ell} = \Re\parens[\bigg]{\Dpn{k}{\eta_1}\frac{c_1}{2^j\eta_1+\mathrm{i}\sqrt{\eta_2^2+\ldots+\eta_d^2}}} + \Dpn{k}{\eta_1}\frac{c_2\eta_2+\ldots +c_d\eta_d}{2^{2j}\eta_1^2+\eta_2^2+\ldots +\eta_d^2}.\n\t\end{align}\n\t%\n\tThe first term resolves to\n\t%\n\t\begin{align}\n\t\t\Re\parens[\Bigg]{\frac{c_1 (-1)^k k! 2^{jk}}{\parens[\Big]{2^j\eta_1+\mathrm{i}\sqrt{\eta_2^2+\ldots+\eta_d^2}}^{k+1}}},\n\t\end{align}\n\t%\n\twhere, clearly, the power of $2^j$ is always at least one higher in the denominator than in the numerator. For the second term, we anticipate Fa\`{a} di Bruno's formula \eqref{app:eq:faa_di_bruno} --- the generalisation of the chain rule to arbitrary order of differentiation --- and the generating function machinery of \autoref{ssec:genfunc} to deal with the Bell polynomials that are involved, i.e.~\eqref{app:eq:genfunc_bell}. 
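For orientation, recall the shape of Fa\`{a} di Bruno's formula in terms of Bell polynomials (stated here in its standard form; conventions may differ slightly from \eqref{app:eq:faa_di_bruno}):

```latex
\begin{align}
	\Dn{n}{x} F(g(x))
	= \sum_{k=1}^{n} F^{(k)}(g(x))\, B_{n,k}\parens*{g'(x), g''(x), \ldots, g^{(n-k+1)}(x)}.
\end{align}
```

With the inner function $g(x)=2^{2j}x^2$ used below, only $g'=2^{2j+1}x$ and $g''=2^{2j+1}$ are non-zero, which is why only the first two arguments of $B_{n,k}$ survive.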
Then, with ``inner'' function $g(x):=2^{2j}x^2$,\n\t%\n\t\begin{align}\n\t\tB_{n,k}(2^{2j+1}x,2^{2j+1},0,\ldots)= 2^{2jk}\frac{n!}{k!} \parens[\Big]{\Dn{n}{t} t^k(2x+t)^k}\Bigr|_{t=0}, \quad \text{for} \quad \ceil{\frac n2}\le k\le n,\n\t\end{align}\n\t%\n\tand thus\n\t%\n\t\begin{align}\n\t\t\Dpn{n}{\eta_1} \frac{c_2\eta_2+\ldots +c_d\eta_d}{2^{2j}\eta_1^2+\eta_2^2+\ldots +\eta_d^2} = \sum_{k=\ceil{\frac n2}}^n \frac{2^{2jk} p_k(\eta_1) (c_2\eta_2+\ldots +c_d\eta_d)}{(2^{2j}\eta_1^2+\eta_2^2+\ldots +\eta_d^2)^{k+1}},\n\t\end{align}\n\t%\n\twhere $p_k(\eta_1)=(-1)^k n!\parens{\Dn{n}{t} t^k(2\eta_1+t)^k}\bigr|_{t=0}$ is a polynomial in $\eta_1$ independent of $2^j$. In particular, in terms of powers of $2^j$, the power in the denominator is always at least two larger than in the numerator.\n\t\n\tTaken together, we can see that because $\abs{\vec \eta}\sim 1$ for $\vec \eta \in U_{j\!\?\?,\ell}^{\top}P_{j\!\?\?,\ell}$, for any multi-index $\alpha$ with $\abs{\alpha}\ge 0$,\n\t%\n\t\begin{align}\label{eq:mod_ridge_djl}\n\t\t\Dpi{\alpha}[d_{j\!\?\?,\ell}]{\vec\eta}(\vec \eta) \lesssim 2^{-j}.\n\t\end{align}\n\t\n\tThis allows us to apply \eqref{eq:laplace_exp} to \eqref{eq:mod_ridge_est_trafo} --- boundary terms vanish because of compact support. Consequently, using \cite[Cor. 
C.3]{compress}, we achieve\n\t%\n\t\begin{align}\n\t\MoveEqLeft\n\t\t\abs[\Big]{\CF^{-1}\bracket[\Big]{\frac{\xi_1}{\abs*{\vec \xi}^2}\hat\varphi_\lambda (R\vec \xi)}(\vec x)}\\\n\t\t&= \abs[\bigg]{2^{\frac j2} \frac{(-4\pi^2)^{-n}}{\abs*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}^{2n}} \int_{U_{j\!\?\?,\ell}^{\top}P_{j\!\?\?,\ell}} \Delta^n\parens{d_{j\!\?\?,\ell}(\vec\eta) \hat \psi_{({j\!\?\?,\ell})}(\vec \eta)} \exp\parens*{2\pi\mathrm{i}\vec \eta \cdot (U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k)} \d \vec \eta}\\\n\t\t&\lesssim \frac{2^{\frac j2}}{\abs*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}^{2n}} \int_{U_{j\!\?\?,\ell}^{\top}P_{j\!\?\?,\ell}} \abs*{d_{j\!\?\?,\ell}(\vec \eta)}_{\CC^{2n}} \, \abs*{\hat \psi_{({j\!\?\?,\ell})}(\vec \eta)}_{\CC^{2n}} \d \vec \eta \lesssim \frac{2^{-\frac j2}}{\abs*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}^{2n}}.\n\t\end{align}\n\t%\n\tFinally, this implies\n\t%\n\t\begin{align}\n\t\t\abs[\Big]{\CF^{-1}\bracket[\Big]{\frac{\xi_1}{\abs*{\vec \xi}^2}\hat\varphi_\lambda (R\vec \xi)}(\vec x)}\n\t\t&\lesssim \frac{2^{-\frac j2}}{\reg*{U_{j\!\?\?,\ell}^{-1}R\vec x-\vec k}^{2n}},\n\t\end{align}\n\t%\n\tbecause the compact support of $\hat\varphi_\lambda$ implies that the function on the left-hand side belongs to $\CC^\infty(\bbR^d)$ --- i.e.~has no singularities --- with respect to $\vec x$ (and is bounded independently of $\vec k$) and therefore, we may replace the absolute value with its regularised version (up to a constant).\n\t\n\tThe proof of \eqref{eq:ridge_est} proceeds along the same lines, with the simplification that we do not have to consider $d_{j\!\?\?,\ell}$ or its derivatives --- which also removes the factor $2^{-j}$ that we had gained from \eqref{eq:mod_ridge_djl}. 
This finishes the proof.\n\end{proof}\n\nWith these tools in hand, we proceed to the promised localisation in space for a function cut off across a hyperplane.\n\n\begin{proposition}\label{prop:loc_space}\n\tLet $f(\vec x)=H(\vec x \cdot \vec n-v)g(\vec x)$ with $g\in H^{\frac 12}(\bbR^d)$, cut off across the hyperplane $h=\set{\vec x}{\vec x\cdot \vec n=v}$, such that the restriction $g\normalr|_{h}\in L^2(\bbR^{d-1})$ satisfies\n\t%\n\t\begin{align}\n\t\t\abs*{g(\vec x)\delta_{\{\vec x\cdot \vec n=v\}}}=\abs*{g\normalr|_{h}(\vec x{}'_h)} &\lesssim \reg{\vec x{}'_h}^{-2m} = \reg{\CP_{\vec n}\vec x}^{-2m} \quad \text{almost everywhere}, \label{eq:decay_sing_f}\n\t\t\intertext{\n\t%\n\twhere $\vec x{}'_h\in\bbR^{d-1}$ is identified with $\CP_{\vec n}\vec x=\vec x - (\vec x\cdot \vec n)\vec n$.\n\tFor the solution $u=\CS[f]$, the mutilated part (cf. \autoref{prop:sol_smooth_except_hyp}) is of the same form, i.e.~$u(\vec x)=u_0(\vec x)+H(\vec x \cdot \vec n-v)u_1(\vec x)$, and we require\n\t%\n\t}\n\t\t\abs*{u_1(\vec x)\delta_{\{\vec x\cdot \vec n=v\}}}=\abs*{u_1\normalr|_{h}(\vec x{}'_h)} &\lesssim \reg{\vec x{}'_h}^{-2m} = \reg{\CP_{\vec n}\vec x}^{-2m}\quad \text{almost everywhere}.\label{eq:decay_sing_u}\n\t\end{align}\n\t%\n\tCondition \eqref{eq:decay_sing_f} does not necessarily imply \eqref{eq:decay_sing_u}, but the latter follows for example from the stronger assumption that $\abs{g(\vec x)} \lesssim \reg{\vec x}^{-2m}$, compare \autoref{lem:decay_sol}.\n\t\n\tIf the ridgelets are constructed with a window function such that \eqref{eq:mod_ridge_est} holds for $n$ sufficiently large, we have the following \emph{localisation in space} in terms of a modified translation parameter $\vec t(\vec k)$,\n\t%\n\t\begin{align}\label{eq:gamma_est_w}\n\t\t\abs*{\alpha^\mathrm{s}_\lambda}&\lesssim \frac{1}{a} \frac{2^{-\frac j2}}{(\abs[]{\vec t{}''} + 
\\rho_1^2+\\rho_2^2+1)^m}, &\n\t\t\\abs*{w_\\lambda\\beta^\\mathrm{s}_\\lambda}&\\lesssim \\frac{1}{a} \\frac{2^{\\frac j2}}{(\\abs[]{\\vec t{}''} + \\rho_1^2+\\rho_2^2+1)^m},\n\t\t\\intertext{\n\t%\n\twhere, with $\\theta_{\\vec n}:=\\arccos(\\vec n\\cdot \\vec e_1)$ and $\\phi:=\\theta_{j\\!\\?\\?,\\ell}-\\theta_{\\vec n}$,\n\t%\n\t\t}\n\t\t\\vec t(\\vec k)&=\\vec k-vU_{j\\!\\?\\?,\\ell}^{-1}\\vec n, & \\rho_1(\\vec t)&:= \\lcopywidth{\\frac 1a}{\\frac{1}{a^2}} (\\cos \\phi\\, t_1-2^j\\sin\\phi\\, t_2),\\\\\n\t\ta&:=\\sqrt{2^{2j} \\sin^2 \\phi + \\cos^2\\phi}, &\n\t\t\\rho_2(\\vec t)&:= \\frac{1}{a^2}\\parens*{2^j\\sin\\phi\\, t_1 + \\cos\\phi\\, t_2}.\n\t\\end{align}\n\t%\n\tThe essence \\eqref{eq:gamma_est_w} is, in slightly obscured form, that the magnitude of the coefficients of the modified translation parameter $\\vec t$ appear unchanged --- for $d-1$ dimensions --- in the decay of the ridgelet coefficients, while in one dimension (basically, along $t_1$), the magnitude is attenuated by a factor $\\sim 2^{-j}$, as soon as $\\sin\\phi$ is not negligible. 
This behaviour corresponds directly to the way the grid of translations in \\autoref{def:phi_jlk} is refined in one direction (resp.~the shape of the ridgelets themselves).\n\\end{proposition}\n\n\\begin{proof}\n\tSince $g$ and $u_1$ satisfy exactly the same condition (i.e.~\\eqref{eq:decay_sing_f} and \\eqref{eq:decay_sing_u}), we can deal with both cases at the same time (here exemplarily for $\\alpha^\\mathrm{s}_\\lambda$).\n\tTransforming with $\\vec\\zeta=\\Rn\\vec \\xi$, using \\eqref{eq:sing_fourier_rot} and applying the Plancherel identity, this results in (because $\\zeta_1$ is real)\n\t%\n\t\\begin{align}\n\t\t\\alpha^\\mathrm{s}_\\lambda\n\t\t&=\\frac{-\\mathrm{i}}{2\\pi}\\int \\overline{\\hat \\varphi_\\lambda (\\vec \\xi)} \\frac{(\\Rn\\vec \\xi)_1}{\\abs*{\\vec\\xi}^2} \\wh{g\\normalr|_{h}}(\\CP_{\\vec n}\\vec \\xi) \\d \\vec \\xi\n\t\n\t\n\t\t=\\frac{ -\\mathrm{i}}{2\\pi}\\int \\overline{\\hat \\varphi_\\lambda (\\Rn^{-1}\\vec \\zeta) \\frac{\\zeta_1}{\\abs*{\\vec \\zeta}^2}} \\CF\\bracket*{g(\\Rn^{-1}\\vec x)\\delta_{\\{\\vec x\\cdot \\vec e_1=v\\}}}(\\vec \\zeta) \\d \\vec \\zeta \\\\\n\t\n\t\n\t\n\t\t&= \\frac{-\\mathrm{i}}{2\\pi} \\int \\overline{\\CF^{-1}\\bracket[\\Big]{\\frac{\\xi_1}{\\abs*{\\vec \\xi}^2}\\hat\\varphi_\\lambda (\\Rn^{-1}\\vec \\xi)}\\!(\\vec x)} \\, g(\\Rn^{-1}\\vec x)\\delta_{\\{x_1=v\\}} \\d \\vec x\\\\\n\t\t&= \\frac{-\\mathrm{i}}{2\\pi} \\int_{[\\vec e_1]^\\bot} \\overline{\\CF^{-1}\\bracket[\\Big]{\\frac{\\xi_1}{\\abs*{\\vec \\xi}^2}\\hat\\varphi_\\lambda (\\Rn^{-1}\\vec \\xi)}\\!\\binom{v}{\\vec x{}'}} \\, g\\parens{\\Rn^{-1}\\binom{v}{\\vec x{}'}} \\d \\vec x{}',\n\t\\end{align}\n\t%\n\twhere $[\\vec e_1]$ denotes the span of $\\vec e_1$, and $[\\vec e_1]^\\bot$ its orthogonal complement. Furthermore, it is easy to check that $\\Rn^{-1}\\binom{v}{\\vec x{}'}= \\Rn^{-1}\\binom{0}{\\vec x{}'}+v\\vec n\\in h$, as well as $\\CP_{\\vec n} \\Rn^{-1}\\binom{v}{\\vec x{}'} = \\Rn^{-1}\\binom{0}{\\vec x{}'}$. 
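The two claims at the end can be checked in one line each (a short verification, using $\Rn^{-1}\vec e_1=\vec n$ and the fact that rotations preserve inner products):

```latex
\begin{align}
	\Rn^{-1}\binom{v}{\vec x{}'}
	= \Rn^{-1}\binom{0}{\vec x{}'} + v\,\Rn^{-1}\vec e_1
	= \Rn^{-1}\binom{0}{\vec x{}'} + v\vec n,
	\qquad
	\vec n\cdot \Rn^{-1}\binom{0}{\vec x{}'}
	= \vec e_1\cdot\binom{0}{\vec x{}'} = 0,
\end{align}
```

so the point satisfies $\vec x\cdot\vec n=v$, i.e.~lies in $h$, and subtracting $(\vec x\cdot\vec n)\vec n=v\vec n$ yields the second identity.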
Therefore, using \\eqref{eq:mod_ridge_est} and \\eqref{eq:decay_sing_f},\n\t%\n\t\\begin{align}\n\t\t\\abs{\\alpha^\\mathrm{s}_\\lambda}\n\t\t&\\lesssim 2^{-\\frac j2}\\int_{[\\vec e_1]^\\bot} \\frac{1}{\\reg*{U_{j\\!\\?\\?,\\ell}^{-1}\\Rn^{-1}\\binom{v}{\\vec x{}'}-\\vec k}^{2n}} \\frac{1}{\\reg*{\\Rn^{-1} \\binom{0}{\\vec x{}'}}^{2m}} \\d \\vec x{}'\\\\\n\t\t&= 2^{-\\frac j2}\\int_{[\\vec e_1]^\\bot} \\frac{1}{\\reg*{U_{j\\!\\?\\?,\\ell}^{-1}\\Rn^{-1}\\binom{0}{\\vec x{}'}-\\vec t}^{2n}} \\frac{1}{\\reg*{\\binom{0}{\\vec x{}'}}^{2m}} \\d \\vec x{}',\n\t\\end{align}\n\t%\n\twhere we have split off the component involving $v$ in the first denominator, i.e.\n\t%\n\t\\begin{align}\n\t\tU_{j\\!\\?\\?,\\ell}^{-1}\\binom{v}{\\vec x{}'}-\\vec k=U_{j\\!\\?\\?,\\ell}^{-1}\\binom{0}{\\vec x{}'}-\\vec t, \\quad \\text{using} \\quad \\vec t:=\\vec k-vU_{j\\!\\?\\?,\\ell}^{-1}\\Rn^{-1}\\vec e_1=\\vec k-vU_{j\\!\\?\\?,\\ell}^{-1}\\vec n.\n\t\\end{align}\n\t%\n\tSince $U_{j\\!\\?\\?,\\ell}^{-1}\\Rn^{-1}[\\vec e_1]^\\bot$ is still a hyperplane, the first denominator attains its minimum when the left term corresponds to the orthogonal projection of $\\vec t$ onto that plane. The normal vector transforms with the transpose of the inverse of the transformation, i.e.\n\t%\n\t\\begin{align}\n\t\tU_{j\\!\\?\\?,\\ell}^{-1}\\Rn^{-1}[\\vec e_1]^\\bot \\bot (U_{j\\!\\?\\?,\\ell}^{-1}\\Rn^{-1})^{-\\top}\\vec e_1 = U_{j\\!\\?\\?,\\ell}^\\top \\vec n,\n\t\t\\quad \\text{and we set} \\quad\n\t\t\\vec s{}^*:= \\frac{U_{j\\!\\?\\?,\\ell}^\\top \\vec n}{\\abs*{U_{j\\!\\?\\?,\\ell}^\\top \\vec n}}.\n\t\\end{align}\n\t%\n\tThus, the minimiser of the first denominator is $\\tau(\\vec t):= \\Rn U_{j\\!\\?\\?,\\ell}(\\vec t-\\vec s{}^*(\\vec s{}^*\\cdot \\vec t)) \\in [\\vec e_1]^\\bot$. 
By the transformation $\binom{0}{\vec y{}'}=\binom{0}{\vec x{}'}+\tau(\vec t)$, we have\n\t%\n\t\begin{align}\n\t\t\abs{\alpha^\mathrm{s}_\lambda}&\lesssim 2^{-\frac j2}\int_{[\vec e_1]^\bot} \frac{1}{\reg*{U_{j\!\?\?,\ell}^{-1}\Rn^{-1}\binom{0}{\vec y{}'} - \vec s{}^* (\vec s{}^*\cdot \vec t)}^{2n}} \frac{1}{\reg*{\binom{0}{\vec y{}'} - \tau(\vec t)}^{2m}} \d \vec y{}'\\\n\t\t&= 2^{-\frac j2}\int_{[\vec e_1]^\bot} \frac{1}{\parens*{\abs*{U_{j\!\?\?,\ell}^{-1}\Rn^{-1}\binom{0}{\vec y{}'}}^2 + (\vec s{}^*\cdot \vec t)^2+1}^{n}} \frac{1}{\reg*{\binom{0}{\vec y{}'} - \tau(\vec t)}^{2m}} \d \vec y{}', \label{eq:gamma_est_y_trafo}\n\t\end{align}\n\t%\n\twhere the two summands in the first denominator are now orthogonal to each other, and the last equality follows by Pythagoras' theorem.\n\t\n\tWe will now construct a rotation which transforms the terms in the integral into a form that can be dealt with. For reasons of clarity, this will be a two-step process, first constructing the main rotation $Q$, and then a small correction $T$. The condition on the rotation (that $R_\jl \vec s_\jl=\vec e_1$) fixes the first component of $R_\jl\vec e_1$ to be $\cos \theta_{j\!\?\?,\ell}$. 
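Indeed, $R_\jl\vec s_\jl=\vec e_1$ is equivalent to $R_\jl^{-1}\vec e_1=\vec s_\jl$, so (with $\theta_{j\!\?\?,\ell}$ here denoting the angle between $\vec s_\jl$ and $\vec e_1$, as in the surrounding computation)

```latex
\begin{align}
	\parens*{R_\jl\vec e_1}_1
	= \vec e_1\cdot R_\jl\vec e_1
	= R_\jl^{-1}\vec e_1\cdot\vec e_1
	= \vec s_\jl\cdot\vec e_1
	= \cos\theta_{j\!\?\?,\ell}.
\end{align}
```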
Otherwise, we have the freedom to choose $R_\\jl$ such that\n\t%\n\t\\begin{align}\n\t\tR_\\jl \\vec e_1=\\begin{pmatrix}\n\t\t\\phantom{-}\\cos \\theta_{j\\!\\?\\?,\\ell} \\\\ -\\sin\\theta_{j\\!\\?\\?,\\ell}\\\\0\\\\ \\vdots\n\t\t\\end{pmatrix},\n\t\\qquad \\text{and in the same way,} \\qquad\n\t\t\\Rn \\vec e_1=\\begin{pmatrix}\n\t\t\\phantom{-}\\cos \\theta_{\\vec n} \\\\ -\\sin\\theta_{\\vec n}\\\\0\\\\ \\vdots\n\t\t\\end{pmatrix}.\n\t\\end{align}\n\t%\n\tWe may choose further images for our rotations, as long as we maintain the property that $R\\vec s\\cdot R\\vec s{}'=\\vec s\\cdot\\vec s{}'$ for any pair of images $R\\vec s,R\\vec s{}'$ we choose for vectors $\\vec s, \\vec s{}'\\in{\\mathbb{S}^{d-1}}$, because rotations maintain the intrinsic (geodesic) distance between points on the sphere. Defining $\\vec s_\\jl':=P_{\\vec e_1}\\vec s_\\jl$ (with norm $\\abs*{\\sin\\theta_{j\\!\\?\\?,\\ell}}$), we may therefore choose\n\t%\n\t\\begin{align}\n\t\t\\Rn \\frac{\\vec s_\\jl'}{\\sin\\theta_{j\\!\\?\\?,\\ell}}=\\begin{pmatrix}\n\t\t\\sin \\theta_{\\vec n} \\\\ \\cos\\theta_{\\vec n}\\\\0\\\\ \\vdots\n\t\t\\end{pmatrix},\n\t\\end{align}\n\t%\n\tsince, by our choice for $\\Rn \\vec e_1$ above, $\\Rn \\vec e_1\\cdot \\Rn \\frac{\\vec s_\\jl'}{\\sin\\theta_{j\\!\\?\\?,\\ell}} = 0 = \\vec e_1 \\cdot \\frac{\\vec s_\\jl'}{\\sin\\theta_{j\\!\\?\\?,\\ell}}$. 
In case that $\\vec s_\\jl=\\vec e_1$ (and thus $\\sin\\theta_{j\\!\\?\\?,\\ell}=0$), we take $\\Rn e_2= (\\sin \\theta_{\\vec n}, \\cos\\theta_{\\vec n}, 0,\\ldots)^\\top$, which is a permissible choice for the same reason.\n\t\n\tOnce more in the same fashion, we are able to determine a rotation $Q$ in the following way\n\t%\n\t\\begin{align}\n\tQ^{-1} \\vec e_1=\\Rn \\vec e_1 \\qquad \\text{and} \\qquad Q^{-1}\\vec e_2 = \\Rn\\frac{\\vec s_\\jl'}{\\sin\\theta_{j\\!\\?\\?,\\ell}},\n\t\\end{align}\n\t%\n\twhereby we achieve\n\t%\n\t\\begin{align}\n\t\tQ^{-1} &= \\left(\\begin{array}{cc|c}\n\t\t\t\\phantom{-}\\cos \\theta_{\\vec n} & \\sin \\theta_{\\vec n} & 0 \\\\\n\t\t\t-\\sin\\theta_{\\vec n} & \\cos\\theta_{\\vec n} & 0 \\\\ \\hline\n\t\t\t0 & 0 & (Q'')^{-1}\n\t\t\\end{array}\\right),\\\\\n\t\tR_\\jl \\Rn^{-1}Q^{-1} &= \\left(\\begin{array}{cc|c}\n\t\t\t\\phantom{-}\\cos \\theta_{j\\!\\?\\?,\\ell} & \\sin \\theta_{j\\!\\?\\?,\\ell} & 0 \\\\\n\t\t\t-\\sin\\theta_{j\\!\\?\\?,\\ell} & \\cos\\theta_{j\\!\\?\\?,\\ell} & 0 \\\\ \\hline\n\t\t\t0 & 0 & (V'')^{-1}\n\t\t\\end{array}\\right). \\label{eq:RjlQ}\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\n\t\\end{align}\n\t%\n\tThe form of the second matrix can be seen since, due to the choice of $Q^{-1}\\vec e_1$, the first column is just $R_\\jl \\vec e_1$, while the second column is calculated as follows. 
The first entry evaluates to\n\t%\n\t\\begin{align}\n\t\t\\vec e_1 \\cdot R_\\jl \\Rn^{-1} Q^{-1} \\vec e_2\n\t\t&= R_\\jl^{-1} \\vec e_1 \\cdot \\Rn^{-1} \\Rn \\frac{\\vec s_\\jl'}{\\sin\\theta_{j\\!\\?\\?,\\ell}} = \\vec s_\\jl \\cdot \\frac{\\CP_{\\vec e_1} \\vec s_\\jl}{\\sin\\theta_{j\\!\\?\\?,\\ell}} = \\frac{\\sum_{k=2}^d(\\vec s_\\jl)_k^2}{\\sin\\theta_{j\\!\\?\\?,\\ell}} = \\frac{1-(\\vec s_\\jl)_1^2}{\\sin\\theta_{j\\!\\?\\?,\\ell}} \\\\\n\t\t&=\\frac{1-\\cos^2 \\theta_{j\\!\\?\\?,\\ell}}{\\sin\\theta_{j\\!\\?\\?,\\ell}} = \\sin \\theta_{j\\!\\?\\?,\\ell},\n\t\\end{align}\n\t%\n\tsince $(\\vec s_\\jl)_1=\\vec s_\\jl \\cdot \\vec e_1= \\cos \\theta_{j\\!\\?\\?,\\ell}$. As the norm of the first row is thus one already, the other entries must be zero. Similarly,\n\t%\n\t\\begin{align}\n\t\t\\vec e_2 \\cdot R_\\jl \\Rn^{-1}Q^{-1} \\vec e_2 = \\vec e_2 \\cdot R_\\jl \\parens{\\!\\vec s_\\jl - \\begin{pmatrix}\n\t\t(\\vec s_\\jl)_1\\\\0\\\\ \\vdots\n\t\t\\end{pmatrix}\\!} \\frac{1}{\\sin\\theta_{j\\!\\?\\?,\\ell}} = \\frac{\\vec e_2}{\\sin\\theta_{j\\!\\?\\?,\\ell}} \\!\\cdot\\! (\\vec e_1 - \\cos\\theta_{j\\!\\?\\?,\\ell} R_\\jl \\vec e_1) = \\cos\\theta_{j\\!\\?\\?,\\ell}\n\t\\end{align}\n\t%\n\timplies the zeros in the second row and column. The matrices $Q''$ resp.~$V''$ (restricted to have determinant one) represent the degrees of freedom we still have in choosing the rotation $Q$.\n\t\n\tFinally, we define the two matrices\n\t%\n\t\\begin{align}\n\t\tS_{\\theta} = \\begin{pmatrix}\n\t\t\t\\cos \\theta &- \\sin\\theta \\\\ \\sin\\theta & \\phantom{-}\\cos \\theta\n\t\t\\end{pmatrix}, \\qquad T:= \\begin{pmatrix}\n\t\t\tS_{\\theta_{\\vec n}} & 0 \\\\ 0 & \\bbI_{d-2}\n\t\t\\end{pmatrix},\n\t\\end{align}\n\t%\n\twhere $S_{\\theta}$ is the standard rotation matrix in two dimensions satisfying $S_{\\theta}S_{\\phi}=S_{\\theta+\\phi}$. 
Inverting \\eqref{eq:RjlQ}, we see that\n\t%\n\t\\begin{align}\n\t\tQ&= \\begin{pmatrix}\n\t\t\tS_{\\theta_{\\vec n}} & 0 \\\\ 0 & Q''\n\t\t\\end{pmatrix},\n\t\t& Q\\RnR_\\jl^{-1} &= \\begin{pmatrix}\n\t\t\tS_{\\theta_{j\\!\\?\\?,\\ell}} & 0 \\\\ 0 & V''\n\t\t\\end{pmatrix},\n\t\t& T^{-1}Q &= \\begin{pmatrix}\n\t\t\t\\bbI_{2} & 0 \\\\ 0 & Q''\n\t\t\\end{pmatrix},\n\t\\end{align}\n\t%\n\tand consequently, setting $\\phi:=\\theta_{j\\!\\?\\?,\\ell}-\\theta_{\\vec n}$,\n\t%\n\t\\begin{align}\\label{eq:RjlQ_inv}\n\t\tT^{-1}Q\\RnR_\\jl^{-1} &= \\begin{pmatrix}\n\t\t\tS_{\\phi} & 0 \\\\ 0 & V''\n\t\t\\end{pmatrix},\n\t\t&R_\\jl \\Rn^{-1} &=R_\\jl \\Rn^{-1}Q^{-1}Q\n\t\n\t\n\t\n\t\n\t\n\t\n\t\t=\\begin{pmatrix}\n\t\t\tS_{-\\phi} & 0 \\\\ 0 & (V'')^{-1}Q''\n\t\t\\end{pmatrix}.\n\t\\end{align}\n\t\n\tComing back to \\eqref{eq:gamma_est_y_trafo}, for reasons that will become apparent shortly, we transform with $\\binom{0}{\\vec z{}'} = T^{-1}Q\\binom{0}{\\vec y{}'}$ (since $T^{-1}Q$ leaves the first component invariant!) to arrive at\n\t%\n\t\\begin{align}\n\t\t\\abs{\\alpha^\\mathrm{s}_\\lambda}\n\t\t&\\!\\begin{multlined}[t][\\textwidth-\\mathindent-\\widthof{$\\abs{\\alpha^\\mathrm{s}_\\lambda}$}-\\multlinegap]\n\t\t\t\\lesssim 2^{-\\frac j2}\\int_{[\\vec e_1]^\\bot} \\! \\parens{\\abs*{D_{2^j}R_\\jl \\Rn^{-1}Q^{-1}T\\vec z{}'}^2 + (\\vec s{}^*\\cdot \\vec t)^2+1}^{-n} \\cdot \\ldots \\\\\n\t\t\t\\ldots \\cdot \\reg*{Q^{-1}T\\parens*{(0,\\vec z{}')^\\top - T^{-1}Q\\tau(\\vec t)}}^{-2m} \\d \\vec z{}'\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\textwidth-\\mathindent-\\widthof{$\\abs{\\alpha^\\mathrm{s}_\\lambda}$}-\\multlinegap]\n\t\t\t=2^{-\\frac j2}\\int_{[\\vec e_1]^\\bot} \\! 
\\parens{\\abs*{\\parens*{-2^j \\sin \\phi\\, z_2, \\cos \\phi\\, z_2, (V'')^{-1}\\vec z{}''}^\\top}^2 + (\\vec s{}^*\\cdot \\vec t)^2+1}^{-n} \\cdot \\ldots \\label{eq:gamma_est_z_trafo} \\\\\n\t\t\t\\ldots \\cdot \\reg*{(0,\\vec z{}')^\\top - T^{-1}Q\\tau(\\vec t)}^{-2m} \\d \\vec z{}',\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\twhere we used the (inverse of the) first equality from \\eqref{eq:RjlQ_inv} to evaluate the first denominator. The next step is to calculate $T^{-1}Q\\tau(\\vec t)$, which we tackle as follows, this time with the second equality in \\eqref{eq:RjlQ_inv},\n\t%\n\t\\begin{align}\n\t\tU_{j\\!\\?\\?,\\ell}^\\top \\vec n &= D_{2^{-j}}R_\\jl\\Rn^{-1}\\vec e_1 = \\parens*{2^{-j}\\cos \\phi,-\\sin \\phi,0,\\ldots}^\\top,\\\\\n\t\n\t\t\\abs*{U_{j\\!\\?\\?,\\ell}^\\top \\vec n}&=2^{-j}\\sqrt{2^{2j} \\sin^2\\phi +\\cos\\phi}=:2^{-j} a.\n\t\\end{align}\n\t%\n\tFrom there on, we compute\n\t%\n\t\\begin{align}\n\t\t\\vec s{}^* = \\frac{1}{a} \\parens*{\\cos \\phi,-2^j\\sin \\phi,0,\\ldots}^\\top, \\qquad \\vec s{}^* \\cdot \\vec t =\\frac{1}{a}(\\cos\\phi \\,t_1 - 2^j\\sin\\phi \\,t_2) = \\rho_1(\\vec t),\n\t\\end{align}\n\t%\n\tand therefore, again with \\eqref{eq:RjlQ_inv},\n\t%\n\t\\begin{align}\n\t\tT^{-1}Q\\tau(\\vec t)\n\t\t&=T^{-1}Q\\Rn R_\\jl^{-1}D_{2^{-j}}\\parens*{\\vec t - (\\vec s{}^* \\cdot \\vec t)\\vec s{}^*}\\\\\n\t\t&=T^{-1}Q\\Rn R_\\jl^{-1} \\parens{\n\t\t\\begin{pmatrix}\n\t\t\t2^{-j}t_1\\\\t_2\\\\ \\vec t{}''\n\t\t\\end{pmatrix}\n\t\t-\\frac{\\rho_1(\\vec t)}{a}\n\t\t\\begin{pmatrix}\n\t\t\t2^{-j}\\cos\\phi\\\\-2^j\\sin\\phi\\\\ 0\n\t\t\\end{pmatrix}} \\\\\n\t\t&= \\begin{pmatrix}\n\t\t\t2^{-j}\\cos \\phi \\, t_1 - \\sin \\phi \\, t_2\\\\2^{-j}\\sin\\phi \\, t_1 + \\cos\\phi \\, t_2\\\\ V''\\vec t{}''\n\t\t\\end{pmatrix}\n\t\t-\\frac{\\rho_1(\\vec t)}{a}\n\t\t\\begin{pmatrix}\n\t\t\t2^{-j}\\cos^2\\phi+2^j\\sin^2\\phi\\\\2^{-j}\\sin\\phi\\cos\\phi -2^j\\sin\\phi\\cos\\phi\\\\ 0\n\t\t\\end{pmatrix} \\\\\n\t\t&= 
\\begin{pmatrix}\n\t\t\t0 \\\\ \\frac{1}{a^2}\\parens*{2^{j} \\sin \\phi \\, t_1 + \\cos \\phi \\, t_2} \\\\ V''\\vec t{}''\n\t\t\\end{pmatrix}\n\t\t= \\begin{pmatrix}\n\t\t\t0 \\\\ \\rho_2(\\vec t) \\\\ V''\\vec t{}''\n\t\t\\end{pmatrix},\n\t\\end{align}\n\t%\n\twhere the first two components simplify substantially, after inserting $\\rho_1(\\vec t)$, and expanding the first term with $\\frac{a^2}{a^2}$. We also note that $\\rho_1^2+a^2\\rho_2^2=t_1^2+t_2^2$.\n\t\n\tContinuing from \\eqref{eq:gamma_est_z_trafo}, we can now bring this in the following form (for convenience, we will not further denote the dependence of $\\rho_1,\\,\\rho_2$ on $\\vec t$)\n\t%\n\t\\begin{align}\n\t\t\\abs{\\alpha^\\mathrm{s}_\\lambda}\n\t\t&\\lesssim 2^{-\\frac j2} \\int_{-\\infty}^{\\infty} \\int_{[\\vec e_1,\\vec e_2]^\\bot} \\parens*{|\\vec z{}''|^2 + {\\underbrace{a^2z_2^2+\\rho_1^2+1}_{=:p^2}}}^{-n} \\parens*{\\abs[]{\\vec z{}''-V''\\vec t{}''}^2+{\\underbrace{(z_2 - \\rho_2)^2+1}_{=:q^2}}}^{-m} \\d \\vec z{}'' \\d z_2.\n\t\\end{align}\n\t%\n\tUsing \\autoref{cor:Imn_higher_dim} for $\\vec z{}''$ (basically applying \\autoref{th:Imn_est} in direction $V''\\vec t{}''$, transforming to polar coordinates and then once more \\autoref{th:Imn_est} with respect to the radius), this yields\n\t%\n\t\\begin{align}\n\t\t\\abs{\\alpha^\\mathrm{s}_\\lambda}\n\t\t&\\lesssim 2^{-\\frac j2} \\int_{-\\infty}^{\\infty} p^{-2n} \\parens{\\abs[]{\\vec t{}''} + p^2+q^2}^{-m} \\d z_2\\\\\n\t\n\t\t&\\le 2^{-\\frac j2} \\int_{-\\infty}^{\\infty} \\parens{a^2z_2^2+ \\rho_1^2+1}^{-m} \\parens{(z_2 - \\rho_2)^2+\\abs[]{\\vec t{}''} + \\rho_1^2+1}^{-m} \\d z_2\n\t\t\\intertext{\n\t%\n\tand again with the help of \\eqref{eq:Imn_est},\n\t%\n\t\t}\n\t\t&\\lesssim \\frac{2^{-\\frac j2}}{(a^2\\rho_2^2+a^2(\\abs[]{\\vec t{}''} + \\rho_1^2+1)+\\rho_1^2+1)^m} \\parens[\\bigg]{\\frac{a^{2m-1}}{(\\rho_1^2+1)^{\\frac{2m-1}2}} + \\frac{1}{(\\abs[]{\\vec t{}''} + \\rho_1^2+1)^{\\frac{2m-1}2}}}\\\\\n\t\t&\\le 
\\frac{2^{-\\frac j2}}{(\\abs[]{\\vec t{}''} + \\rho_1^2+\\rho_2^2+1)^m} \\parens{\\frac{1}{a} + \\frac{1}{a^{2m}}} \\le \\frac{1}{a} \\frac{2^{-\\frac j2}}{(\\abs[]{\\vec t{}''} + \\rho_1^2+\\rho_2^2+1)^m},\n\t\\end{align}\n\t%\n\tsince $1\\le a\\le 2^{j}$, which is what we wanted to show for $\\alpha^\\mathrm{s}_\\lambda$.\n\t\n\tRepeating the whole procedure for $u_1$ to estimate $w_\\lambda\\beta_\\lambda^\\mathrm{s}$ increases the right-hand side by $2^j$ at worst, due to the weight (it is not apparent if or how the additional smoothness in the direction of $\\vec s$ can be utilised, since we are only integrating over a hyperplane), and the proof is complete.\n\\end{proof}\n\nAs the last result in this section, we consider how general decay of the function being tested against the ridgelet frame transfers to the coefficients.\n\n\\begin{lemma}\\label{lem:ridge_coeff_tail_decay}\n\tFor a function $f\\in L^2$ satisfying the decay condition\n\t%\n\t\\begin{align}\n\t\t\\abs{f(\\vec x)} \\lesssim \\reg{\\vec x}^{-2m}\n\t\\end{align}\n\t%\n\tfor some $m\\in\\bbN$, the ridgelet coefficients satisfy\n\t%\n\t\\begin{align}\\label{eq:ridge_coeff_tail_decay}\n\t\t\\abs*{\\inpr{\\varphi_\\lambda,f}}\\lesssim \\frac{2^{-\\frac j2}}{\\parens*{\\parens*{\\frac{k_1}{2^j}}^2+\\abs[]{\\vec k{}'}^2+1}^m}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{proof}\n\tUsing \\eqref{eq:ridge_est} and the required decay of $f$, we estimate and then transform by $\\vec y = R_\\jl \\vec x$, yielding\n\t%\n\t\\begin{align}\n\t\t\\abs*{\\inpr{\\varphi_\\lambda,f}}\n\t\t&=\\abs[\\bigg]{\\int \\overline{\\varphi_\\lambda(\\vec x)} f(\\vec x) \\d \\vec x}\n\t\t\\le \\int \\frac{2^{\\frac j2}}{\\reg*{U_{j\\!\\?\\?,\\ell}^{-1}\\vec x-\\vec k}^{2n}} \\frac{1}{\\reg*{\\vec x}^{2m}} \\d \\vec x\n\t\t= \\int \\frac{2^{\\frac j2}}{\\reg*{D_{2^j}\\vec y-\\vec k}^{2n}} \\frac{1}{\\reg*{\\vec y}^{2m}} \\d \\vec y\\\\\n\t\t&=2^{\\frac j2} \\int_{-\\infty}^{\\infty}\\int_{\\bbR^{d-1}} 
\\frac{1}{\\parens*{\\abs[]{\\vec y{}'-\\vec k{}'}^2+(2^jy_1-k_1)^2+1}^{n}} \\frac{1}{\\parens*{\\abs[]{\\vec y{}'}^2+(y_1)^2+1}^{m}} \\d \\vec y{}' \\d y_1.\n\t\\end{align}\n\t%\n\tBy \\autoref{cor:Imn_higher_dim}, this can be estimated in the following way,\n\t%\n\t\\begin{align}\n\t\t\\abs*{\\inpr{\\varphi_\\lambda,f}}\n\t\t&\\lesssim 2^{\\frac j2} \\int_{-\\infty}^{\\infty} \\frac{1}{\\parens*{\\abs[]{\\vec k{}'}^2+(2^jy_1-k_1)^2+y_1^2+1}^{m}} \\frac{1}{\\parens*{(2^jy_1-k_1)^2+1}^{m}} \\d y_1\\\\\n\t\t&\\le 2^{\\frac j2} \\int_{-\\infty}^{\\infty} \\frac{1}{\\parens*{2^{2j}\\parens*{y_1-\\frac{k_1}{2^j}}^2+1}^{m}} \\frac{1}{\\parens*{y_1^2+\\abs[]{\\vec k{}'}^2+1}^{m}} \\d y_1,\n\t\t\\intertext{\n\t%\n\twhich, by \\eqref{eq:Imn_est}, is less than a multiple of\n\t%\n\t\t}\n\t\t&\\lesssim \\frac{2^{\\frac j2}}{\\parens*{2^{2j}\\parens*{\\frac{k_1}{2^j}}^2+2^{2j}(\\abs[]{\\vec k{}'}^2+1)+1}^{m}} \\parens[\\bigg]{2^{j(2m-1)} + \\frac{1}{\\parens*{\\abs[]{\\vec k{}'}^2+1}^{\\frac{2m-1}{2}}}} \\le \\frac{2^{-\\frac j2}}{\\parens*{\\parens*{\\frac{k_1}{2^j}}^2+\\abs[]{\\vec k{}'}^2+1}^m}.\n\t\\end{align}\n\\end{proof}\n\n\\subsection{Proof of \\autoref{th:approx}}\\label{ssec:proof_approx}\n\nWith the results of Sections \\ref{sec:loc_angle} and \\ref{sec:loc_space}, we are now in a position to prove \\autoref{th:approx}.\n\n\\begin{proof}[Proof of \\autoref{th:approx}]\n\tRecall that $\\alpha_\\lambda=\\inpr*{\\varphi_\\lambda,f}$ and $\\beta_\\lambda=w_\\lambda\\inpr*{\\varphi_\\lambda,\\CS[f]}$ from \\autoref{prop:loc_angle}. 
Thus, the claim of the theorem will follow if we are able to prove\n\t%\n\t\\begin{align}\n\t\t\\norm{\\alpha_\\Lambda}_{\\ell^{p^* + \\frac dm}_w} \\lesssim \\sum_{i=0}^N\\norm{f_i}_{H^t} \\qquad \\text{and} \\qquad\n\t\t\\norm{{\\mathbf{W}} \\beta_\\Lambda}_{\\ell^{p^* + \\frac dm}_w} \\lesssim \\sum_{i=0}^N\\norm{f_i}_{H^t},\n\t\\end{align}\n\t%\n\twhere the deviation $\\frac dm$ from the best possible value $p^*=\\parens*{\\frac td + \\frac 12}^{-1}$ (for $f_i\\in H^t$) decays with $m$.\n\t\n\t\\step{Preparations}\n\tFirst off, we note that due to the linearity of the ridgelet coefficients, resp.~the differential equation \\eqref{eq:LinTrans}, it suffices to treat a function $f(\\vec x)=H(\\vec x\\cdot \\vec n-v)g(\\vec x)$ and the rest follows through superposition. Also, we will be able to prove the results for $\\alpha_\\Lambda$ and $\\beta_\\Lambda$ almost completely simultaneously, and will mostly deal with $\\beta_\\Lambda$ (noting, where appropriate, the mitigating factors in the easier $\\alpha_\\Lambda$-case). The only additional thing we have to check for $u=\\CS[f]$ is that the decay condition for \\autoref{lem:ridge_coeff_tail_decay} holds, but this follows directly from \\autoref{lem:decay_sol}.\n\t\n\tFurthermore, we recall from \\autoref{prop:sol_smooth_except_hyp} that $u(\\vec x)=u_0(\\vec x)+H(\\vec x\\cdot \\vec n-v)u_1(\\vec x)$ with $\\norm{u_i}_{H^t}\\lesssim \\norm{g}_{H^t}$. Due to \\eqref{eq:f_glob_decay}, we also have the necessary conditions to apply \\autoref{prop:loc_space},\n\t%\n\t\\begin{align}\n\t\t\\abs*{g\\normalr|_{h}(\\CP_{\\vec n}\\vec x)} &\\lesssim \\reg{\\CP_{\\vec n}\\vec x}^{-2m}, \\qquad \\abs*{u_1\\normalr|_{h}(\\CP_{\\vec n}\\vec x)}\\lesssim \\reg{\\CP_{\\vec n}\\vec x}^{-2m}.\n\t\\end{align}\n\t\n\tWe begin by choosing $\\delta>0$, and will show membership of ${\\mathbf{W}}\\beta_\\Lambda$ in $\\ell^{p'}_w$, where $p'=p^* + \\delta^{*}$ and $\\delta^{*}=\\delta\\parens{t+\\frac d2}^{-1}$. 
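For orientation, in the hypothetical example $d=2$ and $t=1$ we would have $p^*=\\parens*{\\frac 12+\\frac 12}^{-1}=1$, so the claim amounts to weak-$\\ell^{1+\\frac dm}$ summability of both coefficient sequences, approaching weak-$\\ell^{1}$ for large $m$.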
The modified $\\delta^{*}$ will have to satisfy a lower bound to ensure that all the arguments hold, and we will then show that $\\frac dm$ is enough to achieve this bound.\n\tFollowing the decomposition $\\beta_\\lambda=\\beta^\\mathrm{r}_\\lambda+\\beta^\\mathrm{s}_\\lambda$, we will show this for both subsequences separately.\n\t\n\t\\step{Singular part, large coefficients}\n\tWe begin with ${\\mathbf{W}}\\beta^\\mathrm{s}_\\Lambda$, splitting the sequence into two parts, namely the \\emph{tail}\n\t%\n\t\\begin{align}\\label{eq:approx_main_sing_tail}\n\t\tT_{j\\!\\?\\?,\\ell}&:=\\set[\\Big]{\\vec k\\in\\bbZ^d}{\\abs*{V_{j\\!\\?\\?,\\ell}\\vec t(\\vec k)} > \\parens*{2^{j+1}\\abs*{\\sin\\theta_{j\\!\\?\\?,\\ell}}}^{\\frac \\delta d}+\\frac{\\sqrt{d}}{2}},\n\t\\end{align}\n\t%\n\tand its complement $T_{j\\!\\?\\?,\\ell}^\\mathsf{c}:=\\bbZ^d\\setminus T_{j\\!\\?\\?,\\ell}$. Here, $\\vec t(\\vec k)$ is defined as in \\autoref{prop:loc_space}, and similarly, $a$ and $\\phi$ are reused to define $V_{j\\!\\?\\?,\\ell}$ as the following block-diagonal matrix (with determinant $\\frac 1a$),\n\t\\begin{align}\n\tV_{j\\!\\?\\?,\\ell}= \\left(\\begin{array}{cc|c}\n\t\t\\frac{\\cos\\phi}{a} & -\\frac{2^j\\sin\\phi}{a} & 0 \\\\\n\t\t\\frac{2^j\\sin\\phi}{a^2} & \\frac{\\cos\\phi}{a^2} & 0 \\\\ \\hline\n\t\t0 & 0 &\\bbI_{d-2}\n\t\\end{array}\\right).\n\t\\end{align}\n\t\n\tAs the first step, we will show membership of the sequence ${\\mathbf{W}}\\beta^\\mathrm{s}_\\Lambda\\normalr|_{\\{\\lambda:\\vec k\\not\\in T_{j\\!\\?\\?,\\ell}\\}}$ in $\\ell^{p'}_w$. 
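In the estimates below we will repeatedly use that, inserting the definitions of $\\rho_1$ and $\\rho_2$ from the proof of \\autoref{prop:loc_space}, one readily checks $V_{j\\!\\?\\?,\\ell}\\parens*{t_1,t_2,\\vec t{}''}^\\top=\\parens*{\\rho_1(\\vec t),\\rho_2(\\vec t),\\vec t{}''}^\\top$, and hence $\\abs*{V_{j\\!\\?\\?,\\ell}\\vec t}^2=\\rho_1^2+\\rho_2^2+\\abs[]{\\vec t{}''}^2$, which is precisely the quantity appearing in the denominator of \\eqref{eq:gamma_est_w}.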
\n\tTaking the number of elements whose absolute value exceeds $\\varepsilon$, i.e.\n\t%\n\t\\begin{align}\n\t\tN(\\varepsilon):=\\card*{\\lambda}{\\vec k \\not\\in T_{j\\!\\?\\?,\\ell} \\land \\abs{w_\\lambda\\beta^\\mathrm{s}_\\lambda}\\ge \\varepsilon},\n\t\\end{align}\n\t%\n\tan equivalent definition of the $\\ell^{p'}_w$-norm is\n\t%\n\t\\begin{align}\n\t\t\\norm*{{\\mathbf{W}}\\beta^\\mathrm{s}_\\Lambda\\normalr|_{\\{\\lambda:\\vec k\\not\\in T_{j\\!\\?\\?,\\ell}\\}}}_{\\ell^{p'}_w}\\sim\\sup_{\\varepsilon>0} \\varepsilon N^{\\frac 1{p'}}(\\varepsilon) \\lesssim \\norm{g}_{H^t},\n\t\\end{align}\n\t%\n\twhich is what we will show in the following.\n\t\n\tTo do so, we define a refinement of $N(\\varepsilon)$,\n\t%\n\t\\begin{align}\n\t\tN_{j,r}(\\varepsilon):=\\card*{(\\ell,\\vec k)}{\\ell\\in\\Lambda^j_r \\land \\vec k \\not\\in T_{j\\!\\?\\?,\\ell} \\land \\abs{w_\\lambda\\beta^\\mathrm{s}_\\lambda}\\ge \\varepsilon}.\n\t\\end{align}\n\t%\n\tClearly, \\eqref{eq:gamma_est_sum_Lambda} implies that $\\abs{w_\\lambda\\beta^\\mathrm{s}_\\lambda}^2 \\le C 2^{-j} \\norm{g}^2_{H^t}$, and therefore $N_{j,r}(\\varepsilon)=0$ if $2^j\\ge C\\varepsilon^{-2}\\norm{g}^2_{H^t}$.\n\t\n\tNext, we need to determine the cardinality of $T_{j\\!\\?\\?,\\ell}^\\mathsf{c}$ for $\\ell \\in \\Lambda^j_r$. 
First we estimate the sum by an integral (where the maximum of each cell of size $[0,1]^d$ can be at most $\\frac{\\sqrt{d}}2$ away from the value of the function at $\\vec k \\in \\bbZ^d$), and then transform with $\\vec x=V_{j\\!\\?\\?,\\ell}\\vec t(\\vec k)=V_{j\\!\\?\\?,\\ell} (\\vec k-vU_{j\\!\\?\\?,\\ell}^{-1}\\vec n)$ --- introducing a factor $a$ from the determinant --- to estimate\n\t%\n\t\\begin{align}\n\t\t\\# T_{j\\!\\?\\?,\\ell}^\\mathsf{c}\n\t\t&\\le \\int \\mathbbm{1}_{\\curly*{\\abs[]{V_{j\\!\\?\\?,\\ell} \\vec t(\\vec k)}\\le \\parens{2^{j+1}\\abs{\\sin\\theta_{j\\!\\?\\?,\\ell}}}^{\\frac \\delta d}+\\sqrt{d}}} \\d \\vec k = \\int a \\,\\mathbbm{1}_{\\curly*{\\abs[]{\\vec x}\\le \\parens{2^{j+1}\\abs{\\sin\\theta_{j\\!\\?\\?,\\ell}}}^{\\frac \\delta d}+\\sqrt{d}}}\\d \\vec x\\\\\n\t\t&= a \\,\\mu\\parens[\\Big]{B_{\\bbR^{d}}\\parens*{0, \\parens*{2^{j+1}\\abs*{\\sin\\theta_{j\\!\\?\\?,\\ell}}}^{\\frac \\delta d}+\\sqrt{d}}} \\lesssim 2^{(j-r)(1+\\delta)}. \\label{eq:card_Sjl}\n\t\\end{align}\n\t%\n\tHere, $\\mu$ is the $d$-dimensional Lebesgue measure.\n\tThe restriction defining $T_{j\\!\\?\\?,\\ell}$ may now seem somewhat arbitrary, but the result is that the number of translations in the singular set is essentially one-dimensional (even though the set of translations is $d$-dimensional!), which gives rise to the following bound for the cardinality $N_{j,r}$ (regardless of the condition that the absolute value be larger than $\\varepsilon$),\n\t%\n\t\\begin{align}\\label{eq:card_Njr}\n\t\tN_{j,r}(\\varepsilon)\\lesssim \\underbrace{2^{(j-r)(d-1)}}_{\\mathllap{\\#\\Lambda^j_r}\\lesssim} \\cdot \\underbrace{2^{(j-r)(1+\\delta)}}_{\\mathllap{\\#T_{j\\!\\?\\?,\\ell}^\\mathsf{c}}\\lesssim} = 2^{(j-r)(d+\\delta)}.\n\t\\end{align}\n\t%\n\tAs we shall see, the factor $d+\\delta$ will directly influence the best $p$ possible for the $\\ell^p_w$-norm of the subsequence involving $\\vec k \\in T_{j\\!\\?\\?,\\ell}$, in the sense that we will 
achieve\n\t%\n\t\\begin{align}\\label{eq:est_p4sing}\n\t\tp=\\frac{2(d+\\delta)}{2t+(d+\\delta)} \\le \\frac{2(d+\\delta)}{2t+d} = \\parens[\\Big]{\\frac td + \\frac 12}^{-1} + \\frac{2\\delta}{2t+d} = p^*+\\delta^{*} =p',\n\t\\end{align}\n\t%\n\twhere we recall $\\delta^{*}=\\delta\\parens*{t+\\frac d2}^{-1}$.\n\t\n\tOf course, restricting the translations to achieve this bound is only half the battle --- everything we exclude now needs to be bounded afterwards --- but this shall work out in our favour (and make sense of the definition of $T_{j\\!\\?\\?,\\ell}$), because the tail is precisely defined to contain only the small coefficients.\n\t\n\tLet $\\eta:=\\varepsilon\/\\norm{g}_{H^t}$. Then, by \\eqref{eq:gamma_est_sum_Lambda} and \\eqref{eq:card_Njr},\n\t%\n\t\\begin{align}\n\t\tN_{j,r}(\\varepsilon)\\lesssim \\min\\parens*{2^{(j-r)(d+\\delta)},\\eta^{-2} 2^{-j} 2^{-(j-r)(2t-1)}}.\n\t\\end{align}\n\t%\n\tCalculating the maximal $r$ such that the second term is the minimum, we find\n\t%\n\t\\begin{align}\n\t\t\\eta^{-2} 2^{-j} 2^{-(j-r)(2t-1)} \\le 2^{(j-r)(d+\\delta)} \\qquad \\Longleftrightarrow \\qquad r\\le j- \\log_2(\\eta^{-\\frac 2\\sigma} 2^{-\\frac{j}\\sigma}),\n\t\\end{align}\n\t%\n\twhere $\\sigma:=d+\\delta+2t-1$. 
Therefore,\n\t%\n\t\\begin{align}\n\t\tN_j(\\varepsilon)=\\sum_{r=1}^j N_{j,r}(\\varepsilon) \\lesssim \\min\\parens*{2^{j(d+\\delta)},\\eta^{-2} 2^{-j} (\\eta^{-\\frac 2\\sigma} 2^{-\\frac{j}\\sigma})^{-(2t-1)}}=\\min\\parens*{2^{j(d+\\delta)},\\eta^{-\\frac{2(d+\\delta)}\\sigma} 2^{-\\frac{j(d+\\delta)}\\sigma}}.\n\t\\end{align}\n\t%\n\tAgain, we determine the critical $j$ where the minimum switches from one term to the other, and see that\n\t%\n\t\\begin{align}\n\t\t\\eta^{-\\frac{2(d+\\delta)}\\sigma} 2^{-\\frac{j(d+\\delta)}\\sigma} \\le 2^{j(d+\\delta)} \\qquad \\Longleftrightarrow \\qquad \\eta^{-\\frac{2}{\\sigma+1}} \\le 2^j,\n\t\\end{align}\n\t%\n\twhich implies\n\t%\n\t\\begin{align}\n\t\tN_j(\\varepsilon)\\lesssim\\begin{cases}\n\t\t\t2^{j(d+\\delta)}, & 2^j\\le \\eta^{-\\frac{2}{\\sigma+1}}, \\\\\n\t\t\t\\eta^{-\\frac{2(d+\\delta)}\\sigma} 2^{-\\frac{j(d+\\delta)}\\sigma}, & \\eta^{-\\frac{2}{\\sigma+1}} \\le 2^j \\le C\\eta^{-2},\\\\ \n\t\t\t0, & 2^j\\ge C\\eta^{-2}.\n\t\t\\end{cases}\n\t\\end{align}\n\t%\n\tFinally, we have\n\t%\n\t\\begin{align}\n\t\tN(\\varepsilon)\n\t\t&=\\sum_{j=0}^\\infty N_j(\\varepsilon)\\lesssim \\sum_{j\\colon 2^j \\le \\eta^{-2\/(\\sigma+1)}} 2^{j(d+\\delta)} + \\sum_{j\\colon\\eta^{-2\/(\\sigma+1)}\\le 2^j} \\eta^{-\\frac{2(d+\\delta)}\\sigma} 2^{-\\frac{j(d+\\delta)}\\sigma} \\\\\n\t\t&\\lesssim \\eta^{-\\frac{2(d+\\delta)}{\\sigma+1}}+ \\eta^{-\\frac{2(d+\\delta)}\\sigma + \\frac{d+\\delta}\\sigma \\frac{2}{\\sigma+1}} \\lesssim \\eta^{-\\frac{2(d+\\delta)}{2t+d+\\delta}} \\le \\eta^{-p'},\n\t\\end{align}\n\t%\n\tby \\eqref{eq:est_p4sing} since $\\eta<1$ (note that both exponents in the penultimate expression equal $-\\frac{2(d+\\delta)}{\\sigma+1}$, and $\\sigma+1=2t+d+\\delta$), which finishes the argument for the first subsequence.\n\t\n\t\\step{Singular part, tail}\n\tNext we will show membership of the sequence ${\\mathbf{W}}\\beta^\\mathrm{s}_\\Lambda\\normalr|_{\\{\\lambda:\\vec k\\in T_{j\\!\\?\\?,\\ell}\\}}$ in $\\ell^{p'}\\subseteq \\ell^{p'}_w$. 
We begin by taking $q<1$, and --- after applying \\eqref{eq:gamma_est_w} --- again estimate the sum by an integral (recalling that the maximum per cell can be at most $\\frac{\\sqrt{d}}2$ away from the value at $\\vec k$, which cancels with the shift in the right-hand side of the defining inequality of $T_{j\\!\\?\\?,\\ell}$),\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\abs{w_\\lambda \\beta^\\mathrm{s}_\\lambda}^q\n\t\t&\\lesssim \\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\frac{2^{\\frac {jq}2} a^{-q}}{\\parens*{|\\vec t{}''|^2+\\rho_1^2+\\rho_2^2+1}^{mq}} \\\\\n\t\t&\\le \\int \\mathbbm{1}_{\\curly*{\\abs[]{V_{j\\!\\?\\?,\\ell} \\vec t}> \\sqrt[d\/\\delta]{2^{j+1}\\abs{\\sin\\theta_{j\\!\\?\\?,\\ell}}}}} \\frac{2^{\\frac {jq}2} a^{-q}}{\\parens*{|\\vec t{}''|^2+\\rho_1^2+\\rho_2^2+1}^{mq}} \\d \\vec k.\n\t\\end{align}\n\t%\n\tCalculating $V_{j\\!\\?\\?,\\ell} \\parens*{t_1, t_2, t_3,\\ldots}^\\top=\\parens*{\\rho_1, \\rho_2, t_3,\\ldots}^\\top$,\n\n\n\n\n\n\twe transform the first term by $\\vec x=V_{j\\!\\?\\?,\\ell} (\\vec k-vU_{j\\!\\?\\?,\\ell}^{-1}\\vec n)$ to yield\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\abs{w_\\lambda \\beta^\\mathrm{s}_\\lambda}^q \n\t\t&\\lesssim \\int \\mathbbm{1}_{\\curly*{\\abs[]{\\vec x} > \\parens[]{2^{j+1}\\abs{\\sin\\theta_{j\\!\\?\\?,\\ell}}}^{\\frac \\delta d}}} \\frac{2^{\\frac {jq}2}}{(|\\vec x|^2+1)^{mq}} a^{1-q}\\d \\vec x \\\\\n\t\t&\\lesssim \\int_{\\parens[]{2^{j+1}\\abs{\\sin\\theta_{j\\!\\?\\?,\\ell}}}^{\\frac \\delta d}}^\\infty \\frac{2^{\\frac {jq}2}}{(s^2+1)^{mq}} s^{d-1} a^{1-q} \\d s \\\\\n\t\n\t\t&\\lesssim 2^{\\frac {jq}2} 2^{(j-r)(-\\frac{2mq-d}{d}\\delta+1-q)}\n\t\t\\lesssim 2^{\\frac {jq}2}2^{(j-r)(-\\frac{2mq\\delta}{d}+1+\\delta)}. 
\\label{eq:est_transl_sing}\n\t\\end{align}\n\t%\n\n\tSumming over $\\ell \\in \\Lambda^j_r$, which has cardinality $\\# \\Lambda^j_r\\lesssim 2^{(j-r)(d-1)}$, this implies\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\ell\\in\\Lambda^j_r}\\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\abs{w_\\lambda \\beta^\\mathrm{s}_\\lambda}^q \\lesssim 2^{\\frac {jq}2}2^{-(j-r)(\\frac{2mq\\delta}{d}-d-\\delta)}.\n\t\\end{align}\n\t\n\tAt first glance this estimate might seem of questionable benefit, since the first term has a positive power of $j$. However, \\eqref{eq:gamma_est_sum_Lambda} and the following interpolation inequality\\footnote{Log-convexity of $L^p$-norms; follows from applying H\\\"older's inequality to $\\abs{f}=\\abs{f}^\\theta \\abs{f}^{1-\\theta}$.} for a sequence $(c_i)_{i\\in I}$ will save the day,\n\t%\n\t\\begin{align}\n\t\t\\norm{c}_{\\ell^p} \\le \\norm{c}_{\\ell^q}^\\theta \\norm{c}_{\\ell^2}^{1-\\theta} \\qquad \\text{where} \\qquad \\frac 1p = \\frac{\\theta}q + \\frac{1-\\theta}{2}.\n\t\\end{align}\n\t%\n\tIn particular, we have that\n\t%\n\t\\begin{align}\n\t\t\\parens[\\bigg]{\\sum_{\\ell\\in\\Lambda^j_r}\\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\abs{w_\\lambda \\beta^\\mathrm{s}_\\lambda}^p}^{\\frac 1p} \n\t\t&\\lesssim \\parens{2^{\\frac {j}2}2^{-(j-r)(\\frac{2m\\delta}{d}-\\frac{d+\\delta}{q})}}^\\theta \\parens{2^{-\\frac j2} 2^{-(j-r)(t-\\frac12)}}^{1-\\theta}\\\\\n\t\t&=2^{-j(\\frac 12 -\\theta)} 2^{-(j-r)\\parens{\\theta(\\frac{2m\\delta}{d}-\\frac{d+\\delta}{q})+(1-\\theta)(t-\\frac12)}}, \\label{eq:sum_gamma_p_theta}\n\t\\end{align}\n\t%\n\twhich implies that for $\\theta<\\frac 12$ (i.e.~$q$ sufficiently small) and $m$ sufficiently large, we have achieved a finite $\\ell^p$-norm for the tail of the singular part, as we can now simply sum over $r$:\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\ell}\\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\abs{w_\\lambda \\beta^\\mathrm{s}_\\lambda}^p\n\t\t&\\lesssim 2^{-jp(\\frac 12 -\\theta)}.\n\t\\end{align}\n\t%\n\tThe 
condition $\\theta<\\frac 12$ prescribes an upper bound for $q$ depending on the desired $p$, namely that $\\theta<\\frac 12 \\Longleftrightarrow \\frac 1q > \\frac 2p-\\frac 12$.\n\t\n\tIn the same way as above, we get that\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\ell\\in\\Lambda^j_r}\\sum_{\\vec k \\in T_{j\\!\\?\\?,\\ell}} \\abs{\\alpha^\\mathrm{s}_\\lambda}^q \\lesssim 2^{-\\frac {jq}2}2^{-(j-r)(\\frac{2nq\\delta}{d}-d-\\delta)},\n\t\\end{align}\n\t%\n\twhich lets us set $q=p'$ directly without having to interpolate (which leads to a slightly better estimate for $\\delta^*$ than the one below, compare also Step 7).\n\t\n\t\\step{Singular part, estimating $\\delta^{*}$}[\\label{stp:approx:sing_delta}]\n\tTo determine a lower bound for $\\delta^{*}$ in dependence of $m$, we observe that $\\frac{2m\\delta}{d}>\\frac{d+\\delta}{q}$ has to hold for the second exponent in \\eqref{eq:sum_gamma_p_theta} to be negative (actually this is a source for potential improvement for $\\delta$, since, due to $\\theta<\\frac 12$, we could afford to let the first term become slightly negative; we will not deal with this, though). Together with the condition for $\\theta$ and for the $p$ we want to achieve, this implies\n\t%\n\t\\begin{align}\n\t\t\\frac{2m\\delta}{d(d+\\delta)}>\\frac{1}{q}>\\frac{2}{p^*+\\delta^{*}}-\\frac 12 = \\frac{2t+d}{d+\\delta}-\\frac 12,\n\t\\end{align}\n\t%\n\twhere we recalled $p^*=(\\frac td+\\frac 12)^{-1}$ and $\\delta^{*}=\\delta(t+\\frac d2)^{-1}$. 
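For completeness, the last equality in the display can be verified directly: since $p^*=\\frac{2d}{2t+d}$ and $\\delta^{*}=\\frac{2\\delta}{2t+d}$, we have\n\t%\n\t\\begin{align}\n\t\tp^*+\\delta^{*}=\\frac{2(d+\\delta)}{2t+d} \\qquad \\Longrightarrow \\qquad \\frac{2}{p^*+\\delta^{*}}=\\frac{2t+d}{d+\\delta}.\n\t\\end{align}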
From this, we can deduce a worst-case lower bound for the deviation from the optimal $p^*$ to make our argument work, namely\n\t%\n\t\\begin{align}\n\t\t\\delta\\parens[\\Big]{\\frac{2m}{d}+\\frac 12}> 2t+\\frac d2 \\qquad \\Longrightarrow \\qquad \\delta^{*}>2 \\smash{\\underbrace{\\frac{t+\\frac d4}{t+\\frac d2}}_{<1}} \\frac{1}{\\frac {2m}d+\\frac 12},\n\t\\end{align}\n\t%\n\twhich is satisfied in particular if\n\t%\n\t\\begin{align}\n\t\\delta^{*}\\ge \\frac{4d}{4m+d}.\n\t\\end{align}\n\t%\n\tSaid otherwise, everything we showed is true if we choose $\\delta^{*}=\\frac dm$, which, conversely, means that the coefficient sequence is \\emph{at least} in the space $\\ell^{p^*+\\frac dm}_w$, and possibly in an even smaller space $\\ell^{p^*+\\bar{\\delta}}_w$, i.e.~with $\\bar{\\delta}\\le\\frac dm$.\n\t\n\t\\step{Regular part, large coefficients}[\\label{stp:approx:reg_large}]\n\tLike for the singular part, we split the sequence into two parts, again defining a \\emph{tail}\n\t%\n\t\\begin{align}\\label{eq:tail_reg}\n\t\tT&:=\\set[\\Big]{\\vec k\\in\\bbZ^d}{\\abs*{D_{2^{-j}}\\vec k} > 2^{j\\frac{\\delta}{d}}+\\frac{\\sqrt{d}}2},\n\t\\end{align}\n\t%\n\tonly this time, both subsequences will be summable in $\\ell^{p'}\\subseteq \\ell^{p'}_w$. Like before, we estimate the cardinality of the complement $T^\\mathsf{c}:=\\bbZ^d\\setminus T$,\n\t%\n\t\\begin{align}\n\t\t\\#T^\\mathsf{c} \\le \\int \\mathbbm{1}_{\\{\\abs[]{D_{2^{-j}}\\vec k}<2^{j\\frac{\\delta}{d}}+\\sqrt{d}\\}} \\d \\vec k = 2^j \\int_{0}^{2^{j\\frac{\\delta}{d}}+\\sqrt{d}} r^{d-1} \\d r \\lesssim 2^{j(1+\\delta)}.\n\t\\end{align}\n\t\n\tThe following estimate will be the key to finish this step. 
For sequences $f,g$ with $g>0$ almost everywhere and some $q>1$, we apply H\\\"older's inequality,\n\t%\n\t\\begin{align}\n\t\t\\norm*{\\abs{f}^{\\frac 1q}}_{\\ell^1} = \\norm*{\\abs{fg}^{\\frac 1q}\\abs{g}^{-\\frac 1q}}_{\\ell^1} \\le \\norm*{\\abs{fg}^{\\frac 1q}}_{\\ell^q} \\norm*{\\abs{g}^{-\\frac 1q}}_{\\ell^{q'}} = \\norm*{fg}_{\\ell^1}^{\\frac 1q} \\norm*{\\abs{g}^{-\\frac 1{q-1}}}_{\\ell^{1}}^{\\frac{q-1}q}\n\t\\end{align}\n\t%\n\twhere $q'=\\frac{q}{q-1}$, which is sometimes called the \\emph{reverse H\\\"older inequality}, because then, by taking $q$\\nth powers and bringing the last factor to the left-hand side, ``$\\norm{fg}_{\\ell^1}\\ge \\norm{f}_{\\ell^{\\frac 1q}} \\norm{g}_{\\ell^{-\\frac{1}{q-1}}}$'', if we allowed negative exponents in the $\\ell^p$-quasi-norms.\n\t\n\tIn our case, we choose $q=\\frac 2{p'}$, $f:=\\abs{{\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda}^2$ and $g=\\mathbbm{1}_{\\{j,\\vec k\\in T^\\mathsf{c}\\}}$, allowing us to apply the above inequality,\n\t%\n\t\\begin{align}\n\t\t\\norm*{{\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda\\normalr|_{j,T^\\mathsf{c}}}_{\\ell^{p'}}^2 = \\parens[\\bigg]{\\sum_{\\ell=1}^{L_j} \\sum_{\\vec k\\in T^\\mathsf{c}}\\abs{w_\\lambda \\beta^\\mathrm{r}_\\lambda}^{p'}}^{\\frac 2{p'}} = \\norm*{\\abs{{\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda\\normalr|_{j,T^\\mathsf{c}}}^2}_{\\ell^{\\frac 1q}} \\le \\norm*{{\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda\\normalr|_{j,T^\\mathsf{c}}}_{\\ell^2}^2 \\norm*{\\mathbbm{1}_{\\{j,\\vec k\\in T^\\mathsf{c}\\}}}_{\\ell^{1}}^{q-1}.\n\t\\end{align}\n\t%\n\tSince $g$ is constant, the last norm simply evaluates to $L_j\\cdot\\#T^\\mathsf{c}\\lesssim 2^{j(d+\\delta)}$ and therefore, using \\eqref{eq:gamma_est_reg}, we have\n\t%\n\t\\begin{align}\n\t\t\\norm*{{\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda\\normalr|_{j,T^\\mathsf{c}}}_{\\ell^{p'}}^2 \\lesssim \\varepsilon_j^2 2^{-2jt} 2^{j(d+\\delta)(\\frac 
2{p'}-1)}.\n\t\\end{align}\n\t%\n\tComputing\n\t%\n\t\\begin{align}\n\t\t\\frac{1}{p'}=\\parens[\\Big]{p^*+\\frac{\\delta}{t+\\frac d2}}^{-1}=\\frac{\\frac td +\\frac 12}{1+ \\frac \\delta d}\n\t\\end{align}\n\t%\n\tmakes it clear that the second exponent simplifies to\n\t%\n\t\\begin{align}\n\t\t2jd\\parens[\\Big]{1+\\frac \\delta d}\\parens[\\bigg]{\\frac{\\frac td +\\frac 12}{1+ \\frac \\delta d}-\\frac 12}\n\t\t=2jd\\parens[\\Big]{\\frac td - \\frac{\\delta}{2d}}=2j\\parens[\\Big]{t - \\frac{\\delta}{2}},\n\t\\end{align}\n\t%\n\tand thus\n\t%\n\t\\begin{align}\n\t\t\\norm*{{\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda\\normalr|_{j,T^\\mathsf{c}}}_{\\ell^{p'}} \\lesssim \\varepsilon_j 2^{-j\\frac{\\delta}{2}},\n\t\\end{align}\n\t%\n\twhich is summable in $j$.\n\t\n\t\\step{Regular part, tail}\n\tFor the tail of ${\\mathbf{W}}\\beta^\\mathrm{r}_\\Lambda$, we need to use the decay of $f$ resp.~$\\CS[f]$ to apply \\autoref{lem:ridge_coeff_tail_decay},\n\t%\n\t\\begin{align}\n\t\t\\sum_{\\vec k\\in T} \\abs{w_\\lambda \\beta^\\mathrm{r}_\\lambda}^{p'}\n\t\t&\\lesssim \\sum_{\\vec k\\in T} \\frac{2^{-\\frac {jp'}2}}{\\parens*{\\parens*{\\frac{k_1}{2^j}}^2+\\abs[]{\\vec k{}'}^2+1}^{mp'}}\n\t\t\\le 2^{-\\frac {jp'}2} \\int \\mathbbm{1}_{\\{\\abs[]{D_{2^{-j}}\\vec k} > 2^{j\\frac{\\delta}{d}}\\}} \\frac{1}{\\parens*{\\parens*{\\frac{k_1}{2^j}}^2+\\abs[]{\\vec k{}'}^2+1}^{mp'}} \\d \\vec k.\n\t\t\\intertext{\n\t%\n\tTransforming with $\\vec t=D_{2^{-j}}\\vec k$, we are able to continue,\n\t%\n\t\t}\n\t\t&= 2^{-\\frac {jp'}2} \\int_{\\abs[]{\\vec t}>2^{j\\frac{\\delta}{d}}} \\frac{2^j}{\\parens*{\\abs[]{\\vec t}^2+1}^{mp'}} \\d \\vec t\n\t\t= 2^{j\\parens[]{1-\\frac{p'}2}} \\!\\! 
\\int_{2^{j\\frac{\\delta}{d}}}^\\infty r^{d-1-2mp'} \\d r = 2^{-j\\parens[]{(2mp'-d)\\frac{\\delta}{d}+\\frac{p'}2-1}}.\n\t\\end{align}\n\t%\n\tSince this estimate is independent of $\\ell$, we can sum over the $L_j\\sim 2^{j(d-1)}$ directions on scale $j$ to arrive at\n\t%\n\t\\begin{align}\\label{eq:sum_reg_tail}\n\t\t\\sum_{\\ell=1}^{L_j} \\sum_{\\vec k\\in T} \\abs{w_\\lambda \\beta^\\mathrm{r}_\\lambda}^{p'} \\lesssim 2^{-j\\parens{\\frac{2mp'\\delta}{d}+\\frac{p'}2-d-\\delta}}.\n\t\\end{align}\n\t%\n\tFor $m$ sufficiently large, resp.~$\\delta$ not too small (see below), the exponent is negative, and we can thus sum in $j$ and have shown that the tail of the regular part is in $\\ell^{p'}\\subseteq \\ell^{p'}_w$.\n\t\n\t\\step{Regular part, estimating $\\delta^{*}$}\n\tSimilarly to \\autoref{stp:approx:sing_delta}, we check which condition $\\delta$ (resp.~$\\delta^{*}$) needs to satisfy so that the results from above hold (i.e.~that the exponent in \\eqref{eq:sum_reg_tail} is negative). Apart from the negligible term $\\frac{p'}2$ in the exponent, this is the same condition we had for the tail of the singular part, only that now, we don't need to interpolate,\n\t%\n\t\\begin{align}\n\t\t\\frac{2m\\delta}{d(d+\\delta)}>\\frac{1}{p'}=\\frac{1}{p^*+\\delta^{*}} = \\frac{t+\\frac d2}{d+\\delta}.\n\t\\end{align}\n\t%\n\tTherefore, the condition for $\\delta^{*}$ becomes\n\t%\n\t\\begin{align}\n\t\t\\delta>\\frac{d}{2m}\\parens[\\Big]{t+\\frac d2} \\qquad \\Longrightarrow \\qquad \\delta^{*}> \\frac{d}{2m},\n\t\\end{align}\n\t%\n\twhich is weaker than the condition from \\autoref{stp:approx:sing_delta}, and satisfied in particular for $\\frac dm$.\n\t\n\tThis allows us to collect the results thus far, and we observe that, taken together, Steps 1--\\thestep\\ prove the claim of the theorem for $t\\in\\bbN$.\n\t\n\t\\step{Interpolating in $t$}\n\tWhat remains is to extend this to the half-line $t>0$ via interpolation theory. 
Taking the operators defined by\n\t%\n\t\\begin{align}\n\t\tT_1 f:=\\inpr{\\Phi,f}, \\qquad T_2 f:={\\mathbf{W}}\\inpr{\\Phi,\\CS[f]},\n\t\\end{align}\n\t%\n\twe know that, by the frame property, resp.~by \\eqref{eq:ridge_Hs_stable},\n\t%\n\t\\begin{align}\n\t\t\\norm{T_i}_{L^2\\to\\ell^2}&<\\infty, \\quad i=1,2.\n\t\\end{align}\n\t%\n\tOn the other hand, as we have just proved above for any $t\\in\\bbN$, the $T_i$ also satisfy\n\t%\n\t\\begin{align}\n\t\t\\norm{T_i}_{H^t\\to\\ell^{p^*+\\delta^*}_w}&<\\infty, \\quad i=1,2,\n\t\\end{align}\n\t%\n\twhere $\\delta^*\\le \\frac dm$ and $\\frac{1}{p^*}=\\frac{t}{d}+\\frac 12$. Using the results from \\autoref{thm:interp}, we see that\n\t%\n\t\\begin{align}\n\t\t\\norm{T_i}_{H^{t\\theta}\\to\\ell^{\\bar{p}}_2}&<\\infty,\n\t\\end{align}\n\t%\n\twhere $\\frac{1}{\\bar{p}}=\\frac{1-\\theta}{2} + \\frac{\\theta}{p^*+\\delta^*}$. Letting $\\bbR^+\\ni \\bar{t}=t\\theta$ (regardless of the choice of $t\\in\\bbN$ and $0<\\theta<1$), we compute\n\t%\n\t\\begin{align}\n\t\t\\frac{1}{\\bar{p}}=\\frac{1-\\theta}{2} + \\frac{\\theta(t+\\frac d2)}{d+\\delta}=\\frac{\\bar{t}}{d+\\delta}+\\frac{\\theta}{2}\\parens[\\Big]{\\frac{d}{d+\\delta}-1} +\\frac 12,\n\t\\end{align}\n\t%\n\tand thus, since $\\theta<1$, we can deduce\n\t%\n\t\\begin{align}\n\t\t\\bar{p}=\\frac{d+\\delta}{\\bar{t}+\\frac d2 + \\frac{\\delta}{2}(1-\\theta)} \\le \\frac{d+\\delta}{\\bar{t}+\\frac d2} = \\parens[\\Big]{\\frac {\\bar{t}}d+\\frac 12}^{-1} +\\delta^{*}.\n\t\\end{align}\n\t%\n\tFinally, we remark that $\\ell^{\\bar{p}}_2\\subseteq \\ell^{\\bar{p}}_w$, and thus the proof is finished, since for arbitrary $t\\in\\bbR^+$, we have shown that for $f\\in H^{t}(\\bbR^d)$ with solution $\\CS[f]$, it holds that\n\t%\n\t\\begin{align}\n\t\\inpr{\\Phi,f}\\in\\ell^{p^*+\\frac dm}_w \\quad \\text{as well as} \\quad {\\mathbf{W}}\\inpr{\\Phi,\\CS[f]}\\in\\ell^{p^*+\\frac dm}_w \\quad \\text{where} \\quad p^*=\\parens[\\Big]{\\frac td+\\frac 
12}^{-1}.\n\t\\end{align}\n\\end{proof}\n\n\\subsection{Implications}\n\nTo conclude this section, we briefly discuss differences to the proof and results in \\cite{mutilated}.\n\n\\begin{remark}\n\tAn obvious question that presents itself with regard to \\cite{mutilated} is why Cand\\`es was able to achieve $p=p^*$, while we only achieve this value up to an arbitrarily small $\\delta^*$. Aside from the fact that the functions we treat do not have to have compact support, one major source for this loss is the fact that we have so many more translations to deal with ($d$ dimensions vs. one in \\cite{mutilated}).\n\t\n\tTo achieve $p=p^*$, we would have to:\n\t%\n\t\\begin{enumerate}[(A)]\n\t\\item\n\t\tBound \\eqref{eq:card_Sjl} in a way that depends only linearly on $2^{j-r}$ --- since we can see that the deviation $\\delta$ directly impacts \\eqref{eq:est_p4sing} through \\eqref{eq:card_Njr}.\n\t\\item\n\t\tOn the other hand, the $\\ell^q$ norm of the tail in \\eqref{eq:est_transl_sing} still has to have an exponent of $2^{j-r}$ that is negative for sufficiently large $m$.\n\t\\end{enumerate}\n\t%\n\tThe transformation $V_{j\\!\\?\\?,\\ell}$ in $T_{j\\!\\?\\?,\\ell}$ is already chosen in a way to be able to optimally estimate \\eqref{eq:est_transl_sing} by achieving radial symmetry, and due to its functional determinant $a\\sim 2^{j-r}$, (A) becomes almost impossible. One may squeeze a slight improvement out of \\eqref{eq:gamma_est_w} by pulling a factor $a^{\\frac 1m}$ into the denominator, which would reduce the determinant of transforming with $V_{j\\!\\?\\?,\\ell}$ to $a^{1-\\frac{d}{2m}}$. Still, this improvement is not enough, as it would force $\\delta\\le\\frac{d}{2m}$ in (A), making (B) impossible, since $m$ would disappear from the exponent. 
Also, the $\\ell^p$-interpolation does not save us, due to the fact that the condition on $\\theta$ would necessitate $q<0$.\n\t\n\tOne solution would be if the fraction $\\frac 1a$ in \\eqref{eq:gamma_est_w} had any power that grew with $m$, even if it were as slow as $\\log(m)$ --- as this would save (B) in the above scenario. But in general, due to the sharpness of \\eqref{eq:Imn_est}, this seems unlikely to be possible. Nevertheless, we do not rule out that $\\delta^*$ could be eliminated through other proof techniques for functions with compact support.\n\\end{remark}\n\n\\begin{remark}\n\tFinally, while we were able to follow the general approach of \\cite{mutilated} --- i.e.~localisation in angle, localisation in space, split off tails etc. --- we had very different issues to deal with. On the one hand, we were able to avoid the sampling estimates in Fourier space (which is essentially due to the fact that Cand\\`es' construction is supported on $[\\vec s_\\jl]$ in frequency, while our construction diffuses this to the sets $P_{j\\!\\?\\?,\\ell}$ where we are able to integrate), but on the other hand, issues like the localisation in space became much more intricate and required ``heavy machinery'' (in the form of \\autoref{th:Imn_est}) to deal with. 
The removal of the condition of compact support --- resp.~being also able to quantify the coefficient decay for functions with only polynomial decay --- is a welcome bonus, as is the fact that we were able to avoid the numerous case distinctions that are often necessary in other proofs of optimality (compare e.g.~\\cite{curvelet_cartoon}).\n\\end{remark}\n\\section{$N$-Term Approximation $\\&$ Benchmark Rates}\\label{sec:benchmark}\n\nWe briefly recall some core concepts of (non-linear) approximation theory, see \\cite{DeVore1998} for a survey.\n\n\\begin{definition}\\label{def:nonlin_basics}\n\tConsider a (relatively) compact class $\\FC\\subseteq \\CH$, where $\\CH$ is a separable Hilbert space, as well as a dictionary $\\Phi=(\\varphi_\\lambda)_{\\lambda\\in\\Lambda}\\subseteq\\CH$. To measure the quality of approximation in dependence of $N$, we introduce the set\n\t%\n\t\\begin{align}\n\t\t\\Sigma_N(\\Phi):=\\set[\\bigg]{f=\\sum_{\\lambda\\in\\Lambda_N} c_\\lambda \\varphi_\\lambda}{\\#\\Lambda_N\\le N},\n\t\\end{align}\n\t%\n\twhich is non-linear (as $f,g\\in \\Sigma_N(\\Phi)$ generally don't imply $f+g\\in\\Sigma_N(\\Phi)$) --- consequently, analysis of the quantity\n\t%\n\t\\begin{align}\n\t\t\\varsigma_N(f,\\Phi):=\\inf_{g\\in \\Sigma_N(\\Phi)} \\norm{f-g}_\\CH\n\t\\end{align}\n\t%\n\tis referred to as the study of \\emph{non-linear approximation}.\n\\end{definition}\n\nComparing $\\varsigma_N(f,\\Phi)$ with $N^{-\\alpha}$, we have the following definition of approximation spaces (which can be done in much greater generality, see \\cite[Sec. 
4.1]{DeVore1998}).\n\n\\begin{definition}\n\tLet\n\t%\n\t\\begin{align}\n\t\t\\CA^\\alpha_q(\\Phi):=\\set{f\\in \\CH}{\\norm{f}_{\\CA^\\alpha_q}:=\\snorm{f}_{\\CA^\\alpha_q}+\\norm{f}_\\CH <\\infty},\n\t\\end{align}\n\t%\n\twhere\n\t%\n\t\\begin{align}\n\t\t\\snorm{f}_{\\CA^\\alpha_q} :=\n\t\t\\begin{cases}\n\t\t\t\\parens{\\sum_{n\\in\\bbN} \\parens*{n^\\alpha \\varsigma_n(f,\\Phi)}^q \\frac 1n}^{\\frac 1q}, & 0<q<\\infty,\\\\\n\t\t\t\\sup_{n\\in\\bbN} n^\\alpha \\varsigma_n(f,\\Phi), & q=\\infty,\n\t\t\\end{cases}\n\t\\end{align}\n\t%\n\tfor $\\alpha>0$ and $0<q\\le\\infty$.\n\\end{definition}\n\nFor a signal class $\\FC\\subseteq\\CH$ and a dictionary $\\wt\\Phi$, the best achievable approximation rate is consequently\n\\begin{align}\n\t\\sigma^*(\\FC,\\wt\\Phi):=\\sup\\set{\\sigma>0}{\\FC\\subseteq \\CA^\\sigma(\\wt \\Phi)}.\n\\end{align}\nThe more interesting question is how well \\emph{any} dictionary could possibly approximate $\\FC$ (in terms of $N$-term approximation). This can be done very abstractly with encoding\/decoding schemes, see \\cite{hypercubes_don,hypercubes}.\n\n\\begin{definition}\\label{def:encoding_rate}\nFor a class of signals $\\FC\\subseteq \\CH$ with $\\CH$ a separable Hilbert space, we define:\n\t\\begin{itemize}\n\t\\item\n\t\tAn \\emph{encoding\/decoding pair} $(E,D)\\in \\CE\\CD(R)$ consists of two mappings\n\t\t%\n\t\t\\begin{align}\n\t\t\tE:\\FC\\to \\{0,1\\}^R, \\quad D:\\{0,1\\}^R\\to \\CH,\n\t\t\\end{align}\n\t\t%\n\t\twhere we call $R$ the \\emph{runlength} of $(E,D)$ and $\\CE\\CD:=\\bigcup_{R\\in\\bbN} \\CE\\CD(R)$ is the set of all such pairs.\n\t\\item\n\t\tThe \\emph{distortion} of $(E,D)$ is defined as\n\t\t%\n\t\t\\begin{align}\n\t\t\t\\delta(E,D):=\\sup_{f\\in\\FC} \\norm{ f-D(E(f))}_\\CH.\n\t\t\\end{align}\n\t\\item\n\t\tThe \\emph{optimal encoding rate} is defined as\n\t\t%\n\t\t\\begin{align}\n\t\t\t\\sigma^*(\\FC):= \\sup\\set{\\sigma>0}{ \\exists C>0 \\;\\forall N\\in \\bbN \\;\\exists (E_N,D_N)\\in \\CE\\CD(N): \\delta(E_N,D_N)\\le C N^{-\\sigma}}\\tag*{\\qedhere}\n\t\t\\end{align}\n\t\\end{itemize}\n\\end{definition}\n\n\\begin{remark}\n\tThe quantity $\\sigma^*(\\FC)$ limits the best approximation rate of $\\FC$ for any decoding scheme, in particular, for the discretisation with \\emph{any} dictionary (as long as the natural restriction of polynomial depth
search\\footnote{This is to exclude pathological behaviour --- namely, choosing arbitrary functions from a countable set $\\Phi$ which is dense in $\\CH$. This would give a perfect $1$-term approximation, but no way to efficiently compute the approximation or even store the dictionary. Polynomial depth search means that the set $\\Lambda_N$ we select for our $N$-term approximation may only search through the first $\\pi(N)$ dictionary elements, where $\\pi$ is an arbitrary polynomial.} is imposed), i.e.\n\t%\n\t\\begin{align}\n\t\t\\sigma^*(\\FC,\\Phi)\\le \\sigma^*(\\FC).\n\t\\end{align}\n\t\n\t\n\tIt is also intimately related to the \\emph{Kolmogorov} (or metric) $\\varepsilon$-entropy (see e.g. \\cite{kolmog_entr}) --- which we will denote by $H_\\varepsilon(\\FC)$ --- in the sense that (\\cite[Rem. 5.10]{hypercubes})\n\t%\n\t\\begin{align}\n\t\t\\sigma^*(\\FC)=\\sup \\set[\\Big]{\\sigma>0}{\\sup_{\\varepsilon>0} \\varepsilon^{\\frac 1\\sigma} H_\\varepsilon(\\FC) < \\infty}.\\tag*{\\qedhere}\n\t\\end{align}\n\\end{remark}\n\nThe interesting thing is that --- despite its very general definition --- $\\sigma^*(\\FC)$ can be estimated with the help of the following definition and \\autoref{thm:hypercubes}.\n\n\\begin{definition}\\label{def:hypercubes}\n\tLet $\\FC\\subseteq \\CH$ be a signal class in a separable Hilbert space $\\CH$.\n\t\\begin{itemize}\n\t\\item\n\t\tWe say $\\FC$ \\emph{contains an embedded orthogonal hypercube of dimension $m$ and sidelength $\\delta$} if there exist $f_0\\in\\FC$ and orthogonal functions $(\\psi_i)_{i=1}^m\\subseteq \\CH$ with $\\norm{\\psi_i}_\\CH=\\delta$, such that the collection of vertices\n\t\t%\n\t\t\\begin{align}\n\t\t\t\\Fh\\parens*{f_0,(\\psi_i)_{i=1}^m}:=\\set[\\bigg]{h=f_0+\\sum_{i=1}^{m}\\varepsilon_i \\psi_i}{\\varepsilon_i\\in\\{0,1\\}}\n\t\t\\end{align}\n\t\t%\n\t\tcan be embedded into $\\FC$.\n\t\\item\n\t\tFor $p>0$, $\\FC$ is said to \\emph{contain a copy of} $\\ell^p_0$ if there exists a sequence of
orthogonal\n\t\thypercubes $(\\Fh_k)_{k\\in\\bbN}$ with dimensions $m_k$ and sidelength $\\delta_k$ embedded in $\\FC$, such that $\\delta_k\\to0$ and for some constant $C>0$\n\t\t%\n\t\t\\begin{align}\\label{eq:def_lp0}\n\t\t\t\\delta_k\\ge C m_k^{-\\frac 1p}, \\quad \\forall k\\in\\bbN.\n\t\t\\end{align}\n\t\\end{itemize}\n\\end{definition}\n\n\\begin{remark}\\label{rem:hypercubes}\n\tThe motivation behind \\autoref{def:hypercubes} is, on the one hand, that we know precisely how many bits we need to encode the vertices of a hypercube, and if $\\FC$ contains such hypercubes, then it must be at least as complex.\n\t\n\tOn the other hand, $\\ell^p$ trivially contains orthogonal hypercubes of dimension $m$ and sidelength $m^{-\\frac 1p}$; in fact every $\\ell^p_q$ with $q>0$ contains a copy of $\\ell^p_0$, which partly motivates the choice of notation (since $\\ell^p_{q_1}\\subseteq \\ell^p_{q_2}$ for $q_1\\le q_2$).\n\t\n\tEven more, since $\\ell^{p_1}\\subseteq \\ell^{p_2}$ for $p_1\\le p_2$, it is perhaps not surprising (and easy to check from \\eqref{eq:def_lp0}) that $\\ell^p_q$ with $q>0$ contains a copy of $\\ell^\\tau_0$ for all $\\tau \\le p$.\n\\end{remark}\n\nAs hinted in \\autoref{rem:hypercubes}, being able to precisely understand the complexity of hypercubes allowed \\cite[Thm. 2]{hypercubes_don} to prove the following landmark result. Since the proof is highly non-trivial, we also refer to \\cite[Thm. 5.12]{hypercubes}, where an elementary proof of this result was given.\n\n\\begin{theorem}\\label{thm:hypercubes}\n\tIf the signal class $\\FC\\subseteq \\CH$ contains a copy of $\\ell^p_0$ for $p\\in(0,2]$, the optimal encoding rate satisfies\n\t%\n\t\\begin{align}\n\t\t\\sigma^*(\\FC) \\le \\frac{2-p}{2p}.\n\t\\end{align}\n\\end{theorem}\n\nFinally, since we are interested in functions in the Sobolev space $H^t$, computing the optimal encoding rate of this class with the tools we have just introduced is a simple adaptation of \\cite[Thm. 
5.17]{hypercubes}.\n\n\\begin{lemma}[{\\cite[Lem. 2.3.11]{my_thesis}}]\\label{lem:Ht_complexity}\n\tThe Sobolev space $H^t(\\bbR^d)\\subseteq L^2(\\bbR^d)$ contains a copy of $\\ell^{p^*}_0$, where $p^*=\\parens*{\\frac td + \\frac 12}^{-1}$. In particular, $\\sigma^*\\parens*{H^t(\\bbR^d)}\\le \\frac td$.\n\\end{lemma}\n\n\\begin{remark}\\label{rem:benchmark_Ht_ex_hyp}\n\tThe model class of functions $H^t(\\bbR\\setminus\\{h_i\\})$ --- see \\autoref{def:Ht_except_hyp} --- we will deal with regarding \\eqref{eq:LinTrans} consists of functions in $H^t$ that are allowed to have cut-offs across hyperplanes. This is trivially a superset of $H^t$, and so it is clear that\n\t%\n\t\\begin{align}\n\t\t\\sigma^*\\parens*{H^t(\\bbR\\setminus\\{h_i\\})}\\le \\sigma^*\\parens*{H^t(\\bbR^d)} \\le \\frac td.\n\t\\end{align}\n\t%\n\tIn other words, these ``mutilated'' $H^t$-functions have --- potentially --- even worse approximation rates than functions in $H^t$.\n\t\n\tNevertheless, although the possibilities of achieving this benchmark seem slim (considering the very abstract and extremely general definition of $\\sigma^*(\\FC)$), there is some hope: there \\emph{are} constructions for several classes $\\FC$ that \\emph{do} achieve $\\sigma^*(\\FC)$, cf. \\cite[Chap. 5]{hypercubes}, and even in the more complicated case of allowing cut-offs, wavelets (for example) can be shown to achieve the best possible approximation rate for piece-wise $\\CC^k$-functions over an interval, cf. \\cite[Cor. 
5.36]{hypercubes}, and point-like singularities in $\\bbR^d$ in general.\n\t\n\tTo summarise, we have now found the definitive benchmark to aim for, since, in view of \\autoref{prop:wlp_Nterm_frame}, we know that $\\inpr{\\Phi,f}\\in \\ell^{p^*}_w$ with $p^*=\\parens*{\\frac td + \\frac 12}^{-1}$ is the \\emph{absolute best} that is (even theoretically) possible for generic functions in $H^t$ (or the superset $H^t(\\bbR\\setminus\\{h_i\\})$) --- any $p<p^*$ would contradict the bound on $\\sigma^*$ from \\autoref{lem:Ht_complexity}. Achieving this decay --- and thus the optimal $N$-term rate --- for this class (as set out in \\autoref{sec:benchmark}) allows us to combine \\autoref{th:approx} with \\cite{compress} to achieve \\autoref{th:main_result}. To the best of our knowledge,\nthis is the first construction of an optimally adapted PDE solver with non-standard frames and for non-elliptic\nproblems.\n\nAs indicated in \\autoref{ssec:impact}, this can be utilised for solving more involved transport equations \\eqref{eq:RTE}, based on the fact that the ridgelet frame $\\Phi$ covers all directions $\\vec s$ simultaneously, while the multiscale structure makes it possible to alleviate the curse of dimensionality, for example by the ``sparse discrete ordinates method'' (see e.g. \\cite{grella}).\n\nFinally, as a numerical check of the claimed results, we return to \\autoref{fig:sing_sol}, which is the solution of \\eqref{eq:LinTrans} with a source function that is a box function times a Gaussian, rotated so that the edges of the box are aligned with the transport direction $\\vec s$ (which is chosen to not coincide with any of the $\\vec s_\\jl$ in the frame).
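The benchmark rate itself can be illustrated in isolation. The following is a minimal numerical sketch with a hypothetical model sequence (not data from the paper): for coefficients $c_n = n^{-1/p}$, which lie on the boundary of $\ell^p_w$, keeping the $N$ largest coefficients of an orthonormal expansion yields an error decaying like $N^{-(1/p-1/2)}$, i.e.~exactly the rate $\frac{2-p}{2p}$ appearing in \autoref{thm:hypercubes}.

```python
import math

# Hypothetical model sequence on the boundary of weak-l^p: c_n = n^{-1/p}.
p = 0.8                     # some p* < 2
sigma = 1.0 / p - 0.5       # predicted N-term rate, equal to (2-p)/(2p)

M = 10**6                   # truncation of the infinite sequence
# squared coefficients, already sorted in descending order for this model
sq = [n ** (-2.0 / p) for n in range(1, M + 1)]

# suffix[N] = sum of the squared coefficients NOT among the N largest
suffix = [0.0] * (M + 1)
for i in range(M - 1, -1, -1):
    suffix[i] = suffix[i + 1] + sq[i]

def nterm_err(N):
    # best N-term error in the Hilbert-space norm: drop all but the N largest terms
    return math.sqrt(suffix[N])

# slope of the error curve in a doubly logarithmic plot
slope = math.log(nterm_err(10**4) / nterm_err(10**3)) / math.log(10.0)
print(f"predicted rate -{sigma:.2f}, measured slope {slope:.3f}")
```

The measured slope is close to $-0.75$ for $p=0.8$, matching the prediction; a genuinely faster rate would require $p<p^*$, which the hypercube argument excludes.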
Since --- apart from the singularities --- the function is $\\CC^\\infty$, we even observe super-polynomial decay, in the sense that in the doubly logarithmic plot \\autoref{fig:sing_sol:nterm}, the curve has non-zero curvature (as far as we were able to calculate) and overtakes any straight line (which would correspond to a fixed power $N^{-\\sigma}$).\n\nFurthermore, the localisation in angle discussed in \\autoref{prop:loc_angle} is also confirmed numerically. Indeed, the results are again very convincing, in that, for at least the first $100'000$ largest coefficients (on scales $1$--$10$), the corresponding rotational parameters $\\ell$ are all either the $\\vec s_\\jl$ closest to the direction $\\vec s$ or its immediate left\/right neighbours --- across all scales! To put this into perspective, there are $2^{j+1}$ different $\\vec s_\\jl$ on each scale, and we can discard \\emph{all but three} of them without discernible loss in accuracy. For further numerical results in this context see \\cite[Sec. 8.2]{my_thesis}.\n\n\\begin{figure}\n\t\\setlength{\\plotsize}{0.5\\linewidth}\n\t\n\t\\subcaptionbox{Solution for singular $f$\\label{fig:sing_sol:rhs}}{%\n\t\t\\tikzsetnextfilename{box_sol_sml}%\n\t\t\\input{figures\/source\/box_sol_sml.tikz}}%\n\t\\hfill\\setlength{\\plotsize}{0.35\\linewidth}%\n\t\\subcaptionbox{$N$-term approximation rate\\label{fig:sing_sol:nterm}}{%\n\t\t\\tikzsetnextfilename{Nterm_sing}%\n\t\t\\input{figures\/source\/Nterm_sing.tikz}%\n\n\t}\n\t\\caption{Solution to the advection equation for a singular right-hand side, as well as the $N$-term approximation rate of the ridgelet frame}\\label{fig:box_sol}\n\\end{figure}\n\n\\section{An Integral (In)Equality}\\label{sec:int_est}\n\nThe following result turned out to be necessary for the proof of \\autoref{th:approx}, but has proven useful in other contexts as well (e.g.~\\autoref{lem:decay_sol}).
It is very well-suited for quantifying interactions between (offset) decaying functions --- particularly for convolutions (see \\autoref{app:cor:Imn_higher_dim}) --- and substantially stronger than other results of this type we are aware of (see \\autoref{rem:grafakos}).\n\n\\subsection{Main Theorem \\texorpdfstring{$\\&$}{and} Consequences}\n\nThe following theorem is formulated not in its most general form, but in a form that any one-dimensional problem of this type can be transformed into.\n\n\\begin{theorem}\\label{app:th:Imn_est}\n\tFor $m,n\\in\\bbN$, $a\\in\\bbR^+_0$, $b\\in\\bbR$, $c,d\\in\\bbR^+$, we have\n\t%\n\t\\begin{align}\n\t\tI_{m,n}&:=\\int_{-\\infty}^{\\infty} \\frac{1}{\\parens{a^2(x-b)^2+c^2}^m} \\frac{1}{\\parens{x^2+d^2}^n} \\d x \\notag \\\\\n\t\t&\\phantom{:}= \\frac{\\pi}{\\parens{(c+ad)^2+a^2 b^2}^{m+n-1}} \\frac{1}{c^{2m-1}} \\frac{1}{d^{2n-1}} \\sum_{\\substack{i+j+2k=2(m+n)-3\\\\i\\ge 2m-1 \\, \\lor\\, j\\ge 2n-1}} c^{m,n}_{i,j} c^{i} (ad)^{j} (ab)^{2k}\\notag \\\\\n\t\t&\\phantom{:}\\lesssim \\frac{a^{2n-1}}{\\parens{(c+ad)^2+a^2 b^2}^n} \\frac{1}{c^{2m-1}} + \\frac{1}{\\parens{(c+ad)^2+a^2 b^2}^m} \\frac{1}{d^{2n-1}} \\label{app:eq:Imn_est_best}\\\\\n\t\t&\\phantom{:}\\le \\frac{a^{2n-1}}{\\parens{a^2b^2+a^2d^2+c^2}^n} \\frac{1}{c^{2m-1}} + \\frac{1}{\\parens{a^2b^2+a^2d^2+c^2}^m} \\frac{1}{d^{2n-1}}.
\\label{app:eq:Imn_est}\n\t\\end{align}\n\t%\n\tFurthermore, the generating function for $I_{m,n}$ is\n\t%\n\t\\begin{align}\\label{app:eq:genfunc_Imn_intro}\n\t\t\\sum_{m,n\\ge 1} I_{m,n} y^m z^n = \\frac{\\pi yz}{\\sqrt{c^2-\\smash{y}}\\sqrt{d^2-z}} \\frac{\\sqrt{c^2-\\smash{y}}+a\\sqrt{d^2- z}}{(\\sqrt{c^2-\\smash{y}}+a\\sqrt{d^2- z})^2+a^2b^2},\n\t\\end{align}\n\t%\n\tand with $h(v,w):=(v+w)^2+1$, we also have a generating function for the coefficients,\n\t%\n\t\\begin{multline}\\label{app:eq:genfunc_coeff}\n\t\t\\sum_{\\substack{i,j\\ge 0\\\\m,n\\ge 1}} c^{m,n}_{i,j} v^i w^j y^m z^n \\\\\n\t\t=\\frac{h(v,w)yz}{\\sqrt{1-h(v,w)y}\\sqrt{1-h(v,w)z}} \\frac{v\\sqrt{1-h(v,w)y}+w\\sqrt{1-h(v,w)z}}{\\parens*{v\\sqrt{1-h(v,w)y}+w\\sqrt{1-h(v,w)z}}^2+1}.\\qquad\n\t\\end{multline}\n\t%\n\tThe coefficients are zero \\emph{unless} the following conditions are satisfied,\n\t%\n\t\\begin{align}\n\ti+j&\\equiv 1\\bmod{2}, & i+j&\\le 2(m+n)-3, & (i&\\ge 2m-1 \\lor j\\ge 2n-1).\n\t\\end{align}\n\t%\n\tIf these are satisfied, the coefficients can be calculated as follows,\n\t%\n\t\\begin{align}\n\t\tc^{m,n}_{i,j}\n\t\t\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t=\\sum_{r=0}^{i}\\sum_{s=0}^{j} \\delta_{\\{r+s\\equiv1\\bmod{2}\\}} (-1)^{m+n+\\frac{r+s-1}{2}} \\binom{r+s}{s} \\cdot\\ldots \\\\\n\t\t\t\\ldots \\cdot \\binom{\\frac{r-1}{2}}{m-1}\\binom{\\frac{s-1}{2}}{n-1} \\binom{m+n-1}{m+n-1-\\frac{i+j-r-s}{2}} \\binom{i+j-r-s}{i-r}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tAnother representation of the coefficients can be found in \\autoref{app:prop:coeff_expl_long}.\n\n\\end{theorem}\n\n\\begin{remark}\\label{rem:grafakos}\n\tIn \\cite[App.
B.1]{grafakos}, it is shown that for dimension $k\\ge 1$, powers $m,n>k$, factors $p,q>0$ and vectors $\\vec r, \\vec s\\in \\bbR^k$, the following inequality holds\n\t%\n\t\\begin{align}\n\t\t\\int_{\\bbR^k} \\frac{p^k}{\\parens{1+p\\abs{\\vec x-\\vec r}}^m} \\frac{q^k}{\\parens{1+q\\abs{\\vec x-\\vec s}}^n} \\mathrm{d} \\vec x \\lesssim \\frac{\\min(p,q)^k}{\\parens{1+\\min(p,q)\\abs{\\vec r-\\vec s}}^{\\min(m,n)}}.\n\t\\end{align}\n\t%\n\tApplied to our context (using the fact that $1+x^2\\ge \\frac 12 (1+\\abs{x})^2\\ge \\frac 12 (1+x^2)$, resp.~$k=1$), this implies\n\t%\n\t\\begin{align}\n\t\tI_{m,n} \\lesssim \\frac{1}{a} \\frac{1}{(b^2+d^2)^{\\min(m,n)}} \\frac{1}{c^{2m-1}} \\frac{1}{d^{2n-2\\min(m,n)}} + \\frac{1}{(a^2b^2+c^2)^{\\min(m,n)}} \\frac{1}{c^{2m-2\\min(m,n)}} \\frac{1}{d^{2n-1}}.\n\t\\end{align}\n\t%\n\tEven though we know that in our case, we can take $m$ to be arbitrarily large (but fixed), this only yields\n\t%\n\t\\begin{align}\\label{app:eq:grafakos}\n\t\tI_{m,n} \\lesssim \\frac{1}{a} \\frac{1}{(b^2+d^2)^{n}} \\frac{1}{c^{2m-1}} + \\frac{1}{\\parens{a^2b^2+c^2}^{n}} \\frac{1}{c^{2(m-n)}} \\frac{1}{d^{2n-1}}.\n\t\\end{align}\n\t%\n\tIn one of the key estimates that we need for \\autoref{th:approx} (see the proof of \\autoref{prop:loc_space}), $c$ and $d$ will contain variables in other dimensions to be integrated over, and the fact that one term now has three factors renders a second application of \\eqref{app:eq:grafakos} impossible in this case.
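As a rough, stand-alone numerical sanity check of \eqref{app:eq:Imn_est} (with arbitrary, hypothetical parameter choices; the suppressed constant depends only on $m$ and $n$), one can compare a direct quadrature of $I_{m,n}$ against the right-hand side:

```python
import math

def I_quad(m, n, a, b, c, d, L=100.0, steps=200000):
    # composite trapezoidal rule for the integrand of I_{m,n};
    # the integrand decays like |x|^{-2(m+n)}, so truncating to [-L, L] suffices
    h = 2 * L / steps
    total = 0.0
    for i in range(steps + 1):
        x = -L + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w / ((a**2 * (x - b)**2 + c**2)**m * (x**2 + d**2)**n)
    return total * h

def rhs_bound(m, n, a, b, c, d):
    # right-hand side of the final estimate, with the suppressed constant omitted
    s = a**2 * b**2 + a**2 * d**2 + c**2
    return a**(2*n - 1) / (s**n * c**(2*m - 1)) + 1.0 / (s**m * d**(2*n - 1))

# hypothetical sample parameters (m, n, a, b, c, d)
cases = [(2, 2, 1.0, 1.0, 1.0, 1.0), (3, 2, 2.0, 0.5, 1.0, 2.0), (2, 4, 0.5, 3.0, 2.0, 1.0)]
ratios = [I_quad(*p) / rhs_bound(*p) for p in cases]
print([f"{r:.3f}" for r in ratios])
```

For these parameters the ratios stay of moderate size, consistent with a constant depending only on $m$ and $n$.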
We would need to accept considerable slack in the estimates (by dropping one factor in the second term of the right-hand side of \\eqref{app:eq:grafakos}), which would make achieving sufficiently strong estimates much more difficult (if not impossible).\n\t\n\tCompare also with \\autoref{app:cor:Imn_higher_dim}, where we are able to leverage \\eqref{app:eq:Imn_est} into higher dimensions as well --- in our case obtaining a denominator that contains both $p$ and $q$ (in the notation of this remark), see \\eqref{eq:Imn_higher_dim}.\n\\end{remark}\n\n\\begin{remark}\\label{rem:Imn_est_decay_a}\n\tThe estimate has one deficiency in terms of the behaviour of $a$ --- namely, that $I_{m,n}$ always decreases with increasing $a$ (albeit much slower than might be expected; the decay with $b,c,d$ is much more pronounced), whereas the first term in the estimates increases with $a$ until around\n\t%\n\t\\begin{align}\n\t\ta\\sim \\sqrt{\\frac{n c^2}{b^2+d^2}} \\quad \\text{for \\eqref{app:eq:Imn_est}}\n\t\t\\quad \\text{resp.} \\quad\n\t\ta\\sim \\frac{ncd+\\sqrt{n(b^2+d^2)c^2}}{b^2+d^2} \\quad \\text{for \\eqref{app:eq:Imn_est_best}}\n\t\\end{align}\n\t%\n\tand only then starts to decrease with $a$.\n\t\n\tThis is not avoidable, as the first term of \\eqref{app:eq:Imn_est_best} actually appears as such (modulo a constant) in the explicit representation of $I_{m,n}$, but there, its growth is eliminated by the decay with $a$ of terms like the second one in \\eqref{app:eq:Imn_est_best}, which have a higher weight in practice.\n\t\n\tConsequently, barring a more precise analysis of the constant's dependence on $m$ and $n$, the estimate can be made more efficient in some cases by directly estimating away $a\\ge1$ within $I_{m,n}$ and then applying \\eqref{app:eq:Imn_est_best} --- namely when\n\t%\n\t\\begin{align}\n\t\t1& \\ll a \\lesssim \\frac{(b^2+(c+d)^2)^n}{(b^2+d^2)^n}\n\t\\end{align}\n\t%\n\tif only the first term should be minimised, which is the case if $a\\sim c \\gg 
b,d$, for example. Considering both terms simultaneously, estimating $a\\ge1$ is still beneficial in the following regime,\n\t%\n\t\\begin{align}\n\t\t1& \\ll a \\lesssim \\frac{(b^2+(c+d)^2)^n}{(b^2+d^2)^n}\\frac{(b^2+(c+d)^2)^{m}d^{2n-1}}{(b^2+(c+d)^2)^m d^{2n-1}+(b^2+(c+d)^2)^n c^{2m-1}}.\\tag*{\\qedhere}\n\t\\end{align}\n\\end{remark}\n\n\\begin{proof}[Proof of \\autoref{app:th:Imn_est}]\n\tThe proof is split into several parts. First, we need to determine the partial fraction decomposition (PFD) of the integrand, which we do in \\autoref{app:prop:pfd}. Since $m$ and $n$ are arbitrary, we will only be able to formulate a recursion at first. However, we can leverage this recursion into explicit generating functions for the terms appearing in the PFD, which we do in \\autoref{app:prop:genfunc}. This machinery is necessary, unfortunately, since mere induction is hopelessly inadequate for the task at hand.\n\t\n\tWith the help of these two tools, we are able to calculate the generating function \\eqref{app:eq:genfunc_Imn_intro}\n\twhich we prove in \\autoref{app:prop:genfunc_Imn}.\n\t\n\tIn the form \\eqref{app:eq:genfunc_Imn_intro}, we have already achieved (essentially) the crucial cancellation (compared to the PFD) that eliminates the ``bad'' factors from the denominator. However, what remains to be shown compared to \\autoref{app:th:Imn_est} is that $c^{m,n}_{i,j}=0$ if $i\\le 2(m-1) \\land j\\le 2(n-1)$. To gain explicit control over these coefficients, we first ``disassemble'' the function \\eqref{app:eq:genfunc_Imn_intro} into its parts (by differentiation) in \\autoref{app:prop:coeff_expl_long}, which yields another formula for $c^{m,n}_{i,j}$ (which is more complex, but without binomial coefficients of non-integers).\n\t\n\tThen, inserting the ``indicators'' we need, we put it back together to arrive at the formula \\eqref{app:eq:genfunc_coeff} in \\autoref{app:prop:genfunc_coeff}. 
Finally, we take apart \\eqref{app:eq:genfunc_coeff} one last time in a different way that allows us to conclude that the required coefficients are actually zero in \\autoref{app:prop:coeff_zero}. This will finish the proof. In \\autoref{rem:conject_coeff_pos}, we additionally mention the conjecture that $c^{m,n}_{i,j}\\ge0$ always holds, which, however, we have not (seriously) attempted to prove.\n\\end{proof}\n\nBefore we continue, we record a corollary of \\autoref{app:th:Imn_est} for higher dimensions.\n\n\\begin{corollary}\\label{app:cor:Imn_higher_dim}\n\tFor $m,n\\in\\bbN$ and $c,d>0$, as well as vectors $\\vec r,\\vec s \\in\\bbR^k$ and invertible matrices $A,B$ such that $AB^{-1}$ is diagonalisable\\footnote{The restriction that $AB^{-1}$ has to be diagonalisable is obviously artificial and can be removed in principle (although the formula would become much more complicated).}, we assume that two functions satisfy\n\t%\n\t\\begin{align}\n\t\t\\abs{f(\\vec x)}\\lesssim \\parens*{\\abs*{A(\\vec x+\\vec r)}^2+c^2}^{-m}\n\t\t\\quad \\text{and} \\quad\n\t\t\\abs{g(\\vec x)}\\lesssim \\parens*{\\abs*{B(\\vec x+\\vec s)}^2+d^2}^{-n}.\n\t\\end{align}\n\t%\n\tThen, if $m,n> \\ceil*{\\frac{k}{2}}$, we have the following estimate for the convolution of $f$ and $g$,\n\t%\n\t\\begin{multline}\n\t\t\\abs{[f*g](\\vec t)}\n\t\t\\lesssim\\frac{1}{\\abs{\\det B}} \\biggl( \\frac{\\norm[]{BA^{-1}}^{k}}{\\parens*{|B(\\vec r+\\vec s +\\vec t)|^2+d^2+\\|BA^{-1}\\|^2 c^2}^{n}} \\frac{1}{c^{2m-k-1}} + \\ldots \\\\\n\t\t\t\t\t\\ldots+\\frac{\\norm[]{BA^{-1}}^{2m}}{\\parens*{|B(\\vec r+\\vec s +\\vec t)|^2+d^2+\\|BA^{-1}\\|^2 c^2}^{m}} \\frac{1}{d^{2n-k-1}} \\biggr)\n\t\\end{multline}\n\t\n\tand --- in a dual way --- the same estimate holds after concurrently switching $\\vec r\\leftrightarrow\\vec s$, $A\\leftrightarrow B$, $c\\leftrightarrow d$ and $m\\leftrightarrow n$, i.e.~we can choose the minimum of the two.\n\t\n\tSpecialising to $A=B=\\bbI$ and $\\vec r=\\vec s =0$, we
see that\n\t%\n\t\\begin{align}\\label{eq:Imn_higher_dim}\n\t\t\\abs{[f*g](\\vec t)} \\lesssim \\frac{1}{\\parens*{|\\vec t|^2+c^2+d^2}^n}\\frac{1}{c^{2m-k-1}} + \\frac{1}{\\parens*{|\\vec t|^2+c^2+d^2}^m}\\frac{1}{d^{2n-k-1}}.\n\t\\end{align}\n\\end{corollary}\n\n\\begin{proof}\n\tWe begin by inserting the definition, using the assumed estimates and transforming by $\\vec y=B(\\vec x+\\vec s)$.\n\t%\n\t\\begin{align}\n\t\t\\abs{[f*g](\\vec t)}\n\t\t&= \\abs[\\bigg]{\\int f(\\vec t-\\vec x) g(\\vec x) \\d \\vec x}\n\t\t\\lesssim \\int \\parens*{\\abs*{A(\\vec t-\\vec x+\\vec r)}^2+c^2}^{-m} \\parens*{\\abs*{B(\\vec x+\\vec s)}^2+d^2}^{-n} \\d \\vec x \\label{app:eq:est_conv_def}\\\\\n\t\t&=\\frac{1}{\\abs{\\det B}} \\int \\parens*{\\abs*{A(\\vec r+\\vec t-B^{-1}\\vec y +\\vec s)}^2+c^2}^{-m} \\parens*{\\abs{\\vec y}^2+d^2}^{-n} \\d \\vec y.\n\t\\end{align}\n\t%\n\tDecomposing $AB^{-1}=VDV^{-1}$ with $D$ being a diagonal matrix with eigenvalues sorted by descending absolute value and $V$ being the (unitary) matrix of corresponding eigenvectors, we set $\\vec u:=V^{-1}A(\\vec r+\\vec s+ \\vec t)$ and continue by transforming with $\\vec z= V^{-1}\\vec y$,\n\t%\n\t\\begin{align}\n\t\t\\abs{[f*g](\\vec t)}\n\t\t&\\lesssim \\frac{1}{\\abs{\\det B}} \\int \\parens*{\\abs*{-VDV^{-1}\\vec y+A(\\vec r+\\vec s+ \\vec t)}^2+c^2}^{-m} \\parens*{\\abs{\\vec y}^2+d^2}^{-n} \\d \\vec y\\\\\n\t\t&= \\frac{1}{\\abs{\\det B}} \\int \\parens*{\\abs*{D\\vec z-\\vec u}^2+c^2}^{-m} \\parens*{\\abs{\\vec z}^2+d^2}^{-n} \\d \\vec z.\n\t\\end{align}\n\t%\n\tThe problem we face now is that the above integral behaves ``elliptically'' in some sense --- and in a way we can't remove by suitable stretching --- because each component appears \\emph{both} as $z_i$ and as $\\lambda_i z_i$. With \\eqref{app:eq:Imn_est} in mind, there are two ways out of this. 
On the one hand, we could shave off dimension after dimension (since the $z_i$ are decoupled, we can apply \\eqref{app:eq:Imn_est} in each dimension sequentially), but this blows up the number of terms to (potentially) $2^k$, and we would ``lose'' at least half a power of either denominator in each step (or every second step, if one is careful).\n\t\n\tThe second way --- which we will choose --- is to (effectively) make the matrix $D$ a multiple of the identity (thus removing the ``elliptic'' influences), by estimating it with its smallest eigenvalue (by magnitude) as follows below. In view of \\autoref{rem:Imn_est_decay_a}, it is not unlikely that this might even be the more efficient estimate in many cases. A combination of the two methods is of course also possible, in fact, if $AB^{-1}$ is not diagonalisable, it is necessary to ``cut apart'' the Jordan blocks in the way described above.\n\t\n\tEach entry of $D\\vec z-\\vec u$ contributes a term $(\\lambda_i z_i- u_i)^2=\\abs{\\lambda_i}^2\\parens*{z_i- \\frac{u_i}{\\lambda_i}}^2$ to the absolute value, and we can estimate this from below (thus estimating the integral from above) by $|\\lambda_d|^2(z_i-\\frac{u_i}{\\lambda_i})^2$. 
For notational ease, we set $a:=\\abs{\\lambda_d}$ as well as $\\vec v:=D^{-1}\\vec u$, and continue from above,\n\t%\n\t\\begin{align}\n\t\t\\int \\parens*{\\abs*{D(\\vec z-D^{-1}\\vec u)}^2+c^2}^{-m} \\parens*{\\abs{\\vec z}^2+d^2}^{-n} \\d \\vec z\n\t\t&\\le \\int \\parens*{a^2\\abs{\\vec z-\\vec v}^2+c^2}^{-m} \\parens*{\\abs{\\vec z}^2+d^2}^{-n} \\d \\vec z.\n\t\\end{align}\n\t%\n\tNow we set $R:=R_{\\vec v\/ \\abs{\\vec v}}$ (compare \\autoref{def:rot_Rs}) --- satisfying $R\\vec v=\\parens*{|\\vec v|,0,\\ldots}^\\top$ --- and transform with $\\vec w:=R\\vec z$ using the invariance of the Euclidian norm under rotations,\n\t\\begin{multline}\n\t\t\\int \\parens*{a^2\\abs{\\vec z-\\vec v}^2+c^2}^{-m} \\parens*{\\abs{\\vec z}^2+d^2}^{-n} \\d \\vec z\\\\\n\t\t=\\int_{\\bbR^k} \\frac{1}{\\parens*{a^2(w_1 - |\\vec v|)^2 + a^2|\\vec w'|^2 +c^2}^m} \\frac{1}{\\parens*{w_1^2+|\\vec w'|^2+d^2}^n} \\d \\vec w,\n\t\\end{multline}\n\twhere $\\vec w=\\binom{w_1}{\\vec w'}$, i.e.~$\\vec w'$ represents the $k-1$ lower components of $\\vec w$.\n\t\n\tWe split off the integration in $w_1$ and apply \\eqref{app:eq:Imn_est},\n\t%\n\t\\begin{align}\n\t\t\\MoveEqLeft\n\t\t\\int_{\\bbR^{k-1}} \\int_{-\\infty}^{\\infty } \\frac{1}{\\parens*{a^2(w_1 - |\\vec v|)^2 + a^2|\\vec w'|^2 +c^2}^m} \\frac{1}{\\parens*{w_1^2+|\\vec w'|^2+d^2}^n} \\d w_1 \\d \\vec w'\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-2em-\\multlinegap]\n\t\t\t\\lesssim \\int_{\\bbR^{k-1}} \\frac{a^{2n-1}}{\\parens*{a^2|\\vec v|^2 + 2a^2|\\vec w'|^2+c^2+a^2d^2}^n} \\frac{1}{\\parens*{a^2|\\vec w'|^2+c^2}^{\\frac{2m-1}{2}}} + \\ldots\\\\\n\t\t\t\\ldots + \\frac{1}{\\parens*{a^2|\\vec v|^2 + 2a^2|\\vec w'|^2+c^2+a^2d^2}^m} \\frac{1}{\\parens*{|\\vec w'|^2+d^2}^{\\frac{2n-1}{2}}} \\d \\vec w'\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-2em-\\multlinegap]\n\t\t\t\\le \\int_{0}^\\infty \\frac{a^{2n-k}}{\\parens*{r^2+a^2|\\vec v|^2+c^2+a^2d^2}^n} 
\\frac{r^{k-1}}{\\parens*{r^2+c^2}^{\\frac{2m-1}{2}}} \\d r + \\ldots\\\\\n\t\t\t\\ldots + \\int_{0}^\\infty \\frac{1}{\\parens*{a^2r^2+a^2|\\vec v|^2+c^2+a^2d^2}^m} \\frac{r^{k-1}}{\\parens*{r^2+d^2}^{\\frac{2n-1}{2}}} \\d r\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-2em-\\multlinegap]\n\t\t\t\\le \\frac{1}{2} \\int_{-\\infty}^\\infty \\frac{a^{2n-k}}{\\parens*{r^2+a^2|\\vec v|^2+c^2+a^2d^2}^n} \\frac{1}{\\parens{r^2+c^2}^{m-\\ceil{\\frac{k}{2}}}} \\frac{1}{(c^2)^{\\ceil{\\frac{k}{2}}-\\frac{k}{2}}} \\d r + \\ldots\\\\\n\t\t\t\\ldots + \\frac{1}{2}\\int_{-\\infty}^\\infty \\frac{1}{\\parens*{a^2r^2+a^2|\\vec v|^2+c^2+a^2d^2}^m} \\frac{1}{\\parens{r^2+d^2}^{n-\\ceil{\\frac{k}{2}}}} \\frac{1}{(d^2)^{\\ceil{\\frac{k}{2}}-\\frac{k}{2}}} \\d r,\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\twhere we split the integrals and transformed the first term by $\\vec x'=\\frac{1}{a}\\vec w'$ before changing to polar coordinates. We then extended the integral over $r$ to $-\\infty$ in order to be able to apply \\eqref{app:eq:Imn_est} once more,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t\\lesssim \\frac{a^{2n-k}}{\\parens*{a^2|\\vec v|^2 +c^2+a^2d^2}^{m-\\ceil{\\frac k2}+n-\\frac 12}}\\frac{1}{(c^2)^{\\ceil{\\frac{k}{2}}-\\frac{k}{2}}} + \\frac{a^{2n-k}}{\\parens*{a^2|\\vec v|^2 +c^2+a^2d^2}^{n}} \\frac{1}{c^{2m-k-1}} + \\ldots \\\\\n\t\t\t\\ldots + \\frac{a^{2n-2\\ceil{\\frac{k}{2}}-1}}{\\parens*{a^2|\\vec v|^2 +c^2+a^2d^2}^{n-\\ceil{\\frac k2}+m-\\frac 12}} \\frac{1}{(d^2)^{\\ceil{\\frac{k}{2}}-\\frac{k}{2}}} + \\frac{1}{\\parens*{a^2|\\vec v|^2 +c^2+a^2d^2}^{m}} \\frac{1}{d^{2n-k-1}}\n\t\t\\end{multlined}\\\\\n\t\t&\\lesssim \\frac{a^{2n-k}}{\\parens*{a^2|\\vec v|^2 +c^2+a^2d^2}^{n}} \\frac{1}{c^{2m-k-1}} + \\frac{1}{\\parens*{a^2|\\vec v|^2 +c^2+a^2d^2}^{m}} \\frac{1}{d^{2n-k-1}}, \n\t\\end{align}\n\t%\n\tbecause, obviously, $a^2|\\vec v|^2 +c^2+a^2d^2>c^2,a^2d^2$. 
Now, $a=\\abs{\\lambda_d}$, the modulus of the smallest eigenvalue of $AB^{-1}$, corresponds to the inverse of the largest eigenvalue of $BA^{-1}$, which itself is equal to the matrix norm $\\norm{BA^{-1}}$. Furthermore, $|D^{-1}\\vec u|=|VD^{-1}\\vec u|=|B(\\vec r+\\vec s +\\vec t)|$. Putting everything together, we arrive at\n\t%\n\t\\begin{align}\n\t\t\\abs{[f*g](\\vec t)}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$\\abs{[f*g](\\vec t)}$}-\\multlinegap]\n\t\t\t\\lesssim \\frac{1}{\\abs{\\det B}} \\biggl( \\frac{\\norm[]{BA^{-1}}^{k}}{\\parens*{|B(\\vec r+\\vec s +\\vec t)|^2+d^2+\\|BA^{-1}\\|^2 c^2}^{n}} \\frac{1}{c^{2m-k-1}} + \\ldots \\\\\n\t\t\t\\ldots+\\frac{\\norm[]{BA^{-1}}^{2m}}{\\parens*{|B(\\vec r+\\vec s +\\vec t)|^2+d^2+\\|BA^{-1}\\|^2 c^2}^{m}} \\frac{1}{d^{2n-k-1}} \\biggr).\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tDepending on the quantities in question (but certainly in the case that $\\norm{AB^{-1}}\\ll\\norm{BA^{-1}}$), we transform differently from \\eqref{app:eq:est_conv_def},\n\t%\n\t\\begin{align}\n\t\t\\abs{[f*g](\\vec t)}\n\t\t&\\lesssim \\int \\parens*{\\abs*{A(\\vec t-\\vec x+\\vec r)}^2+c^2}^{-m} \\parens*{\\abs*{B(\\vec x+\\vec s)}^2+d^2}^{-n} \\d \\vec x \\\\\n\t\t&=\\frac{1}{\\abs{\\det A}} \\int \\parens*{\\abs{\\vec y}^2+c^2}^{-m} \\parens*{\\abs*{B(\\vec t+\\vec s-A^{-1}\\vec y+\\vec r)}^2+d^2}^{-n} \\d \\vec y \\\\\n\t\t&=\\frac{1}{\\abs{\\det A}} \\int \\parens*{\\abs{\\vec y}^2+c^2}^{-m} \\parens*{\\abs*{BA^{-1}\\vec y - A(\\vec r+ \\vec s + \\vec t)}^2+d^2}^{-n} \\d \\vec y.\n\t\\end{align}\n\t%\n\tProceeding like before, this means that the convolution \\emph{also} satisfies\n\t%\n\t\\begin{align}\n\t\t\\abs{[f*g](\\vec t)}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$\\abs{[f*g](\\vec t)}$}-\\multlinegap]\n\t\t\t\\lesssim \\frac{1}{\\abs{\\det A}} \\biggl( \\frac{\\norm[]{AB^{-1}}^{2n}}{\\parens*{|A(\\vec r+ \\vec s + \\vec t)|^2+c^2+\\|AB^{-1}\\|^2 d^2}^{n}} \\frac{1}{c^{2m-k-1}} + \\ldots
\\\\\n\t\t\t\\ldots+\\frac{\\norm[]{AB^{-1}}^{k}}{\\parens*{|A(\\vec r+ \\vec s + \\vec t)|^2+c^2+\\|AB^{-1}\\|^2 d^2}^{m}} \\frac{1}{d^{2n-k-1}} \\biggr),\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tand we can choose the one that is smaller.\n\\end{proof}\n\n\\subsection{Some Basic Generating Function Theory}\\label{ssec:genfunc}\n\nThe main idea of the approach of generating functions --- see e.g.~\\cite{wilf} --- can be described as follows: take a sequence $g_n$ (recursively defined, for example) and calculate\n\\begin{align}\n\tG(x):=\\ops{g_n}{x^n}:=\\sum_{n\\ge 0} g_n x^n,\n\\end{align}\nwhere ``ops'' stands for \\emph{ordinary power series} (as opposed to exponential power series, which we will not need).\nIf $G$ can be identified with a known function, $g_n$ can be recovered as\n\\begin{align}\\label{app:eq:def_coeff_op}\n\tg_n=\\frac{1}{n!}\\Dn{n}[G(x)]{x}\\biggr|_{x=0}=: \\coeff{x^n} G(x),\n\\end{align}\nwhich is often possible, even if the recursion for $g_n$ cannot be resolved by induction. The notation $[x^n]$ will henceforth denote the $n$\\nth coefficient of $G$ with respect to $x$. Two properties follow immediately from the definition,\n\\begin{align}\n\t\\coeff{x^n}(x^k G(x)) &= \\coeff*{x^{n-k}} G(x), \\label{app:eq:coeff_shift} \\\\\n\t\\coeff{x^n}G(\\beta x) &=\\beta^n\\coeff{x^n}G(x), \\label{app:eq:coeff_factor}\n\\end{align}\nwhere $\\beta\\in\\bbR$, as a simple consequence of the chain rule.\n\nThe main tools to calculate $G$ are basic identities from the theory of power series, with the advantage that we can do all calculations purely formally at first, while ultimately, if the resulting $G$ turns out to be convergent in a ball of radius $R>0$, then all our formal calculations are actually justified analytically as well.
This kind of freedom is especially useful if $g_n$ is itself a partial sum of the sequence $f_{n,k}$, since we can freely interchange the order of summation, i.e.\n\\begin{align}\n\tG(x):=\\sum_{n\\ge 0} \\sum_{k=0}^n f_{n,k} x^n = \\sum_{n\\ge 0} \\sum_{k\\ge 0} \\mathbbm{1}_{\\{k\\le n\\}} f_{n,k} x^n = \\sum_{k\\ge 0} \\sum_{n\\ge k} f_{n,k} x^n = \\sum_{k\\ge 0} x^k \\smash{\\overbrace{\\sum_{n\\ge 0} f_{n+k,k} x^n}^{=F_k(x)}}.\n\\end{align}\nIf --- as will often be the case --- we can calculate $F_k(x)$ and consequently $G(x)$, we may then find $g_n=\\sum_{k=0}^n f_{n,k}$ as $\\coeff{x^n} G$.\n\nWe consider --- again only formally --- a function $F(x)=\\ops{f_n}{x^n}$ generated by $\\{f_n\\}$, as well as another one generated by $\\{g_n\\}$, $G(x)=\\ops{g_n}{x^n}$; then\n\\begin{align}\n\t\\ops{f_{n+1}}{x^n}&=\\frac{F(x)-F(0)}{x}, \\label{app:eq:ops_shift}\\\\\n\tF(x)G(x)&=\\ops[\\bigg]{\\sum_{k=0}^n f_k g_{n-k}}{x^n}. \\label{app:eq:ops_prod}\n\\end{align}\n\nFurthermore, the last essential ingredient is the identification of as many power series as possible with known functions.
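Both the interchange-of-summation trick and the product rule can be checked on a concrete toy example; in the following Python sketch (our own illustration, not from the text) we take $f_{n,k}=\\binom{n}{k}$, for which $F_k(x)=1\/(1-x)^{k+1}$ and the partial sums are $g_n=2^n$:

```python
import math

N = 10  # truncation order

# f_{n,k} = C(n,k); interchanging the summation order gives
# G(x) = sum_{k>=0} x^k F_k(x) with F_k(x) = sum_{n>=0} C(n+k,k) x^n = 1/(1-x)^{k+1}.
def Fk_coeff(k, m):
    # [x^m] 1/(1-x)^{k+1}
    return math.comb(m + k, k)

# [x^n] G = sum_{k<=n} [x^{n-k}] F_k should recover g_n = sum_{k<=n} C(n,k) = 2^n.
g = [sum(Fk_coeff(k, n - k) for k in range(n + 1)) for n in range(N)]
assert g == [2**n for n in range(N)]

# Product rule (Cauchy product): [x^n](F*G) = sum_{k=0}^n f_k g_{n-k}.
# With F = 1/(1-x) and G = 1/(1-x)^2, the product is 1/(1-x)^3.
F = [1] * N
G2 = [n + 1 for n in range(N)]
prod = [sum(F[k] * G2[n - k] for k in range(n + 1)) for n in range(N)]
assert prod == [math.comb(n + 2, 2) for n in range(N)]
```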
But first, we define the binomial coefficient for $\\alpha\\in\\bbC$ (we will only need $\\alpha\\in\\bbR$) and $n\\in\\bbN_0$,\n\\begin{align}\\label{eq:binom_alpha}\n\t\\binom{\\alpha}{n}:=\\frac{\\alpha(\\alpha-1)\\cdot\\ldots\\cdot(\\alpha-n+1)}{n!}= \\frac{\\Gamma(\\alpha+1)}{\\Gamma(n+1)\\Gamma(\\alpha-n+1)},\n\\end{align}\nwhere, if one of the $\\Gamma$-terms has a singularity at $\\alpha$, the last equality is understood as the limit $\\alpha'\\to\\alpha$ of the right-hand side.\nBy reversing the signs and the order of the factors in the numerator, we see that the following identity holds:\n\\begin{align}\\label{app:eq:binom_alpha}\n\t\\binom{\\alpha}{n}=(-1)^n\\binom{n-\\alpha-1}{n}.\n\\end{align}\nWe only list the power series identities we will need (\\cite[Sec 2.5]{wilf}):\n\\begin{alignat}{2}\n\t\\frac{1}{1-x} & =\\sum_{n\\ge 0} x^n, && \\label{app:eq:ops_geom} \\\\\n\t\\frac{1}{(1-x)^{k+1}} & =\\sum_{n\\ge 0} \\binom{n+k}{n} x^n, && k\\in\\bbN_0, \\label{app:eq:ops_geom_k} \\\\\n\t(1+x)^\\alpha & =\\sum_{n\\ge 0} \\binom{\\alpha}{n} x^n, &\\quad& \\alpha\\in\\bbR, \\label{app:eq:ops_geom_alpha} \\\\\n\t\\frac{1}{\\sqrt{1-4x}} & =\\sum_{n\\ge 0}\\binom{2n}{n} x^n, && \\label{app:eq:ops_sqrt} \\\\\n\t\\parens[\\bigg]{\\frac{1-\\sqrt{1-4x}}{2x}}^k & =\\sum_{n\\ge 0} \\frac{k}{n+k}\\binom{2n+k-1}{n} x^n, && k\\in\\bbN.
\\label{app:eq:ops_sqrt_k}\n\\end{alignat}\nNote that, due to \\eqref{app:eq:binom_alpha}, \\eqref{app:eq:ops_geom_alpha} generalises both \\eqref{app:eq:ops_geom} and \\eqref{app:eq:ops_geom_k}.\n\n\\subsection{Partial Fraction Decomposition}\n\nBefore we can think about the integration in $I_{m,n}$, we first need to figure out the partial fraction decomposition (PFD) of the integrand.\n\n\\begin{proposition}\\label{app:prop:pfd}\n\tUsing\n\t%\n\t\\begin{align}\\label{app:eq:pfd_Delta}\n\t\t\\Delta:=\\delta_+\\delta_-:=((c+ad)^2+a^2 b^2)((c-ad)^2+a^2 b^2)\n\t\\end{align}\n\t%\n\tit holds that\n\t%\n\t\\begin{align}\n\t\\begin{split}\\label{app:eq:pfd}\n\t\tP_{m,n}&:=\\frac{1}{\\parens{a^2(x-b)^2+c^2}^m} \\frac{1}{\\parens{x^2+d^2}^n} = \\\\\n\t\t&\\phantom{:}= \\sum_{k=1}^{m} \\frac{a^{2n}}{\\Delta^{n+k-1}} \\frac{r^n_k + a^2b\\,x\\,s^n_k}{(a^2(x-b)^2+c^2)^{m-k+1}} + \\sum_{\\ell=1}^{n} \\frac{a^{2(\\ell-1)}}{\\Delta^{m+\\ell-1}} \\frac{t^\\ell_m +a^2b\\,x\\,u^\\ell_m}{(x^2+d^2)^{n-\\ell+1}},\n\t\\end{split}\n\t\\end{align}\n\t%\n\twhere, for $k,\\ell\\in\\bbN$, the coefficients can be calculated recursively:\n\t%\n\t\\mathtoolsset{showonlyrefs=false}\n\t\\begin{subequations}\\label{app:eqs:pfd_recursion}\n\t\\begin{align}\n\t\tr^\\ell_k&=\\sum_{k'=1}^k r^1_{k-k'+1}r^{\\ell-1}_{k'} + a^2b^2 (\\Delta u^1_{k-k'}- (a^2b^2+c^2) u^1_{k-k'+1}) u^{\\ell-1}_{k'} \\label{app:eq:pfd_recursion:r} \\\\\n\t\tt^\\ell_k&=\\sum_{k'=1}^k t^1_{k-k'+1}r^{\\ell-1}_{k'} + a^4b^2d^2u^1_{k-k'+1} u^{\\ell-1}_{k'} \\label{app:eq:pfd_recursion:t} \\\\\n\t\tu^\\ell_k&=\\sum_{k'=1}^k u^{1}_{k-k'+1} r^{\\ell-1}_{k'}-t^1_{k-k'+1}u^{\\ell-1}_{k'} = -s^\\ell_k \\label{app:eq:pfd_recursion:u}\n\t\\end{align}\n\t\\end{subequations}\n\t%\n\tThe calculation first needs to resolve $\\ell\\to\\ell-1\\to\\ldots\\to 1$ back to initial values\n\t%\n\t\\begin{subequations}\\label{app:eqs:pfd_init}\n\t\\begin{align}\n\t\tr^1_k&=r^1_1 t^{1}_{k-1} + a^2b^2(a^2b^2+c^2) u^1_1 u^1_{k-1}, 
\\label{app:eq:pfd_init:r}\\\\\n\t\tt^1_k&=t^1_1 t^{1}_{k-1} - a^4b^2d^2 u^1_1 u^1_{k-1}, \\label{app:eq:pfd_init:t}\\\\\n\t\tu^1_k&=u^1_1 t^{1}_{k-1} + t^1_1 u^1_{k-1} = -s^1_k, \\label{app:eq:pfd_init:u}\n\t\\end{align}\n\t\\end{subequations}\n\t%\n\twhich themselves can be resolved by recursing back (in $k$) to\n\t%\n\t\\begin{subequations}\\label{app:eqs:pfd_init_init}\n\t\\begin{align}\n\t\tr^1_1&=3a^2b^2+a^2d^2-c^2, & && r^1_0&=-1, \\label{app:eq:pfd_init_init:r}\\\\\n\t\tt^1_1&=\\phantom{3}a^2b^2-a^2d^2+c^2, &\\text{resp.}&& t^1_0&=1, \\label{app:eq:pfd_init_init:t}\\\\\n\t\tu^1_1&=2 = -s^1_1, & && u^1_0&=0=s^1_0. \\label{app:eq:pfd_init_init:u}\n\t\\end{align}\n\t\\end{subequations}\\mathtoolsset{showonlyrefs=true}%\n\t%\n\tFurthermore, we have the following important relation between the coefficients\n\t%\n\t\\begin{align}\\label{app:eq:r_plus_t_eq_u}\n\t\tr^\\ell_k+t^\\ell_k= 2a^2b^2 u^\\ell_k.\n\t\\end{align}\n\\end{proposition}\n\n\\begin{proof}\n\tFirst off, we calculate the PFD for the case $m,n=1$,\n\t%\n\t\\begin{align}\n\t\t\\frac{1}{a^2(x-b)^2+c^2} \\frac{1}{x^2+d^2} = \\frac{a^{2}}{\\Delta} \\frac{3a^2 b^2+a^2 d^2-c^2 -2a^2 b\\,x}{a^2(x-b)^2+c^2} + \\frac{1}{\\Delta} \\frac{a^2 b^2-a^2 d^2+c^2 +2a^2 b\\,x}{x^2+d^2},\n\t\\end{align}\n\t%\n\tfrom which we can read off the left half of \\eqref{app:eqs:pfd_init_init}.
Next, assuming \\eqref{app:eq:pfd} for $m$ and $n=1$, we use induction in $m$,\n\t%\n\t\\begin{align}\\label{app:eq:pfd_indm_hyp}\n\t\tP_{m+1,1}\n\t\t&= \\sum_{k=1}^{m} \\frac{a^{2}}{\\Delta^{k}} \\frac{r^1_k +a^2b\\,x\\,s^1_k}{(a^2 (x-b)^2+c^2)^{m-k+2}} + \\frac{1}{\\Delta^{m}} \\frac{t^1_m +a^2b\\,x\\,u^1_m}{(a^2(x-b)^2+c^2)(x^2+d^2)}.\n\t\\end{align}\n\t%\n\tWe continue by splitting the second term with the help of \\eqref{app:eqs:pfd_init_init},\n\t%\n\t\\begin{align}\n\t\\MoveEqLeft\n\t\t\\frac{1}{\\Delta^{m}} \\frac{t^1_m +a^2b\\,x\\,u^1_m}{(a^2(x-b)^2+c^2)(x^2+d^2)} \\\\\n\t\t&= \\frac{a^{2}}{\\Delta^{m+1}} \\frac{(r^1_1 +a^2b\\,x\\,s^1_1)(t^1_m +a^2b\\,x\\,u^1_m)}{a^2 (x-b)^2+c^2} + \\frac{1}{\\Delta^{m+1}} \\frac{(t^1_1 +a^2b\\,x\\,u^1_1)(t^1_m +a^2b\\,x\\,u^1_m)}{x^2+d^2}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-2em-\\multlinegap]\n\t\t\t=\\frac{a^{2}}{\\Delta^{m+1}} \\frac{r^1_1 t^1_m +a^2b\\,x(r^1_1 u^1_m+s^1_1 t^1_m)+a^4 b^2 x^2 s^1_1 u^1_m}{a^2 (x-b)^2+c^2} +\\ldots \\\\\n\t\t\t\\ldots+ \\frac{1}{\\Delta^{m+1}} \\frac{t^1_1 t^1_m +a^2b\\,x(t^1_1 u^1_m+u^1_1 t^1_m)+a^4 b^2 x^2 u^1_1 u^1_m}{x^2+d^2}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tConsidering \\eqref{app:eq:pfd_init_init:u}, we collect the $x^2$-terms in the numerators,\n\t%\n\t\\begin{align}\n\t\t\\frac{2a^4 b^2 u^1_m}{\\Delta^{m+1}} \\parens[\\bigg]{\\frac{x^2}{x^2+d^2} - \\frac{a^2 x^2}{a^2 (x-b)^2 + c^2}} =\\frac{2a^4 b^2 u^1_m}{\\Delta^{m+1}} \\parens[\\bigg]{1 - \\frac{d^2}{x^2+d^2} - 1 + \\frac{a^2b^2+c^2-2a^2b\\,x}{a^2 (x-b)^2 + c^2}}.\n\t\\end{align}\n\t%\n\tComing back to \\eqref{app:eq:pfd_indm_hyp}, we introduce the abbreviation $f_1(x):= a^2 (x-b)^2 + c^2$ for the first denominator, and compute that\n\t%\n\t\\begin{multline}\n\t\tP_{m+1,1} = \\sum_{k=1}^{m} \\frac{a^{2}}{\\Delta^{k}} \\frac{r^1_k +a^2b\\,x\\,s^1_k}{f_1(x)^{m-k+2}} + \\frac{\\smash{\\overbrace{t^1_1 t^1_m - a^4 b^2 d^2 u^1_1 u^1_m}^{t^1_{m+1}}}+ a^2 b\\,x (\\smash{\\overbrace{t^1_1 u^1_m + u^1_1
t^1_m}^{u^1_{m+1}}})}{\\Delta^{m+1}(x^2+d^2)} + \\ldots\\\\\n\t\t\\ldots+\\frac{a^{2}}{\\Delta^{m+1}f_1(x)} \\parens*{{\\underbrace{r^1_1 t^1_m\\! + a^2 b^2 (a^2b^2\\!+c^2) u^1_1 u^1_m}_{r^1_{m+1}}}\\!+ a^2 b\\,x ({\\underbrace{r^1_1 u^1_m\\! + s^1_1 t^1_m-4a^2b^2 u^1_m}_{=-u^1_1 t^1_m-t^1_1 u^1_m=s^1_{m+1}}})},\n\t\\end{multline}\n\t%\n\tbecause $r^1_1-4a^2 b^2 =-t^1_1$ and $s^1_1=-u^1_1$. This coincides with the recurrences in \\eqref{app:eqs:pfd_init}, as claimed.\n\t\n\tAs claimed in \\eqref{app:eq:r_plus_t_eq_u}, the identity $r^1_1+t^1_1= 4a^2 b^2$ can be extended (here first for $\\ell=1$),\n\t%\n\t\\begin{align}\n\t\\begin{split}\\label{app:eq:r_plus_t_eq_u:one}\n\t\tr^1_k+t^1_k\n\t\t&=(r^1_1+t^1_1) t^1_{k-1} +2a^2 b^2 (a^2 b^2-a^2d^2+c^2) u^1_{k-1}\\\\\n\t\t&= 4a^2b^2 t^1_{k-1}+2a^2 b^2 t^1_1 u^1_{k-1}=2a^2b^2u^1_k,\n\t\\end{split}\n\t\\end{align}\n\t%\n\twhich will help cut short some computations below.\n\t\n\tNow we come to the induction in $n$. The case for arbitrary $m$ and $n=1$ has been proved above, which covers the base case. 
Under the induction hypothesis that \\eqref{app:eq:pfd} holds for $m,n\\in\\bbN$, we have\n\t%\n\t\\begin{align}\\label{app:eq:pfd_indn_hyp}\n\t\tP_{m,n+1}\n\t\t= \\sum_{k=1}^{m} \\frac{1}{\\Delta^{n+k-1}} \\frac{a^{2n}(r^n_k + a^2b\\,x\\,s^n_k)}{f_1(x)^{m-k+1}}\\frac{1}{x^2+d^2} + \\sum_{\\ell=1}^{n} \\frac{1}{\\Delta^{m+\\ell-1}} \\frac{a^{2(\\ell-1)}(t^\\ell_m +a^2b\\,x\\,u^\\ell_m)}{(x^2+d^2)^{n-\\ell+2}}.\n\t\\end{align}\n\t%\n\tClearly, we can now apply our previous knowledge (the case with $n=1$), i.e.\n\t%\n\t\\begin{multline}\n\t\t\\frac{1}{(a^2(x-b)^2+c^2)^{m-k+1}}\\frac{1}{x^2+d^2}\\\\\n\t\t=\\sum_{k'=1}^{m-k+1} \\frac{1}{\\Delta^{k'}} \\frac{a^{2}(r^1_{k'} +a^2b\\,x\\,s^1_{k'})}{(a^2 (x-b)^2+c^2)^{m-k+1-k'+1}} + \\frac{1}{\\Delta^{m-k+1}} \\frac{t^1_{m-k+1} +a^2b\\,x\\,u^1_{m-k+1}}{x^2+d^2}.\n\t\\end{multline}\n\t%\n\tInserting this into \\eqref{app:eq:pfd_indn_hyp} yields\n\t%\n\t\\begin{align}\n\t\\MoveEqLeft\n\t\t\\sum_{k=1}^{m} \\frac{a^{2n}}{\\Delta^{n+k-1}} \\frac{r^n_k + a^2b\\,x\\,s^n_k}{(a^2(x-b)^2+c^2)^{m-k+1}}\\frac{1}{x^2+d^2} \\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-2em-\\multlinegap]\n\t\t\t=\\sum_{k=1}^{m}\\sum_{k'=1}^{m-k+1} \\frac{a^{2(n+1)}}{\\Delta^{n+k+k'-1}} \\frac{r^1_{k'} r^n_k + a^2b\\,x(r^1_{k'} s^n_k+s^1_{k'}r^n_k)+a^4 b^2 x^2 s^1_{k'}s^n_k}{(a^2(x-b)^2+c^2)^{m-k-k'+2}} + \\ldots \\\\\n\t\t\t\\ldots + \\sum_{k=1}^{m} \\frac{a^{2n}}{\\Delta^{m+n}} \\frac{t^1_{m-k+1} r^n_k +a^2b\\,x(t^1_{m-k+1}s^n_k + u^1_{m-k+1} r^n_k)+a^4 b^2 x^2 u^1_{m-k+1}s^n_k}{x^2+d^2}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tWe replace the undesired quadratic part $a^4b^2x^2$ depending on the denominator of the term we are dealing with,\n\t%\n\t\\begin{align}\n\t\ta^4 b^2 x^2&=a^2 b^2 \\parens*{a^2 (x-b)^2 + c^2}\\parens[\\Big]{1-\\frac{a^2b^2+c^2-2a^2b\\,x}{a^2 (x-b)^2 +c^2}},\\\\\n\t\ta^4 b^2 x^2&=a^4 b^2 \\parens*{x^2+ d^2}\\parens[\\Big]{1-\\frac{d^2}{x^2+d^2}},\n\t\\end{align}\n\t%\n\tand change the summation
indices as follows (demonstrated for only one term):\n\t%\n\t\\begin{align}\n\t\t\\sum_{k=1}^{m}\\sum_{k'=1}^{m-k+1} r^1_{k'} r^n_k\n\t\t&=\\sum_{k=0}^{m-1}\\sum_{k'=1}^{m-k} r^1_{k'} r^n_{k+1}\n\t\t=\\sum_{k=0}^{m-1}\\sum_{k'=0}^{m-k-1} r^1_{k'+1} r^n_{k+1}\n\t\t=\\sum_{k''=0}^{m-1}\\sum_{k+k'=k''} r^1_{k+1} r^n_{k'+1}\\\\\n\t\t&=\\sum_{k''=0}^{m-1}\\sum_{k'=0}^{k''} r^1_{k''-k'+1} r^n_{k'+1}\n\t\t=\\sum_{k=1}^{m}\\sum_{k'=0}^{k-1} r^1_{k-k'} r^n_{k'+1}\n\t\t=\\sum_{k=1}^{m}\\sum_{k'=1}^{k} r^1_{k-k'+1} r^n_{k'}\n\t\\end{align}\n\t%\n\tThis leads to\n\t%\n\t\\begin{align}\n\t\\MoveEqLeft\n\t\t\\sum_{k=1}^{m} \\frac{a^{2n}}{\\Delta^{n+k-1}} \\frac{r^n_k + a^2b\\,x\\,s^n_k}{(a^2(x-b)^2+c^2)^{m-k+1}}\\frac{1}{x^2+d^2} \\\\\n\t\t&= \\sum_{k=1}^{m} \\frac{a^{2(n+1)}}{\\Delta^{n+k}} \\biggl(\\frac{\\sum_{k'=1}^{k} r^1_{k-k'+1} r^n_{k'}-a^2 b^2 (a^2b^2+c^2) s^1_{k-k'+1}s^n_{k'}}{(a^2(x-b)^2+c^2)^{m-k+1}} + \\ldots\\\\\n\t\t&\\mathrel{\\phantom{=}}\\ldots + \\frac{a^2b\\,x\\sum_{k'=1}^{k} s^1_{k-k'+1}r^n_{k'}+(r^1_{k-k'+1}-2a^2b^2 u^1_{k-k'+1})s^n_{k'}}{(a^2(x-b)^2+c^2)^{m-k+1}} + \\ldots \\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-2em-\\multlinegap]\n\t\t\t\\mathrel{\\phantom{=}}\\ldots + \\frac{\\sum_{k'=1}^{k} a^2 b^2 s^1_{k-k'+1}s^n_{k'}}{(a^2(x-b)^2+c^2)^{m-k}} \\biggr) + \\sum_{k'=1}^{m} \\frac{a^{2n}}{\\Delta^{m+n}} \\biggl(\\frac{t^1_{m-k'+1} r^n_{k'} + a^4 b^2 d^2 s^1_{m-k'+1}s^n_{k'}}{x^2+d^2}+ \\ldots \\\\\n\t\t\t\\mathrel{\\phantom{=}}\\ldots + \\frac{a^2b\\,x(t^1_{m-k'+1}s^n_{k'} + u^1_{m-k'+1} r^n_{k'})}{x^2+d^2} + a^4 b^2 u^1_{m-k'+1}s^n_{k'} \\biggr).\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tThe terms with $x^2+d^2$ in the denominator exactly match the claimed $t$- and $u$-terms from \\eqref{app:eqs:pfd_recursion} for $k=m$ and $\\ell=n+1$, and thus we only have to deal with the remaining terms having powers of $a^2(x-b)^2+c^2$ in the denominator (as well as the very last term). 
To harmonise those powers, we perform an index shift for the third term (except the last summand, which will cancel with the very last term above), to arrive at\n\t%\n\t\\begin{align}\n\t\t\\sum_{k=2}^{m} \\frac{a^{2(n+1)}}{\\Delta^{n+k-1}} \\frac{\\sum_{k'=1}^{k-1} a^2 b^2 s^1_{k-k'}s^n_{k'}}{(a^2(x-b)^2+c^2)^{m-k+1}} + \\frac{a^{2(n+1)}}{\\Delta^{m+n}} \\sum_{k'=1}^{m} a^2 b^2 s^n_{k'} (\\smash{\\overbrace{s^1_{m-k'+1}+u^1_{m-k'+1}}^{=0}}).\n\t\\end{align}\n\t%\n\tHere, we can extend the summation to $\\sum_{k=1}^m\\sum_{k'=1}^k$, because all additional terms are zero.\n\t\n\tFurthermore, due to \\eqref{app:eq:r_plus_t_eq_u:one}, we can simplify the second term (containing all terms with the factor $a^2 b\\,x$) to\n\t%\n\t\\begin{align}\n\t\t\\sum_{k=1}^{m} \\sum_{k'=1}^{k} \\frac{a^{2(n+1)}}{\\Delta^{n+k}} \\frac{a^2b\\,x(t^1_{k-k'+1} u^n_{k'}-u^1_{k-k'+1}r^n_{k'})}{(a^2(x-b)^2+c^2)^{m-k+1}}.\n\t\\end{align}\n\t%\n\tTherefore,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\tP_{m,n+1}= \\sum_{k=1}^{m} \\frac{a^{2(n+1)}}{\\Delta^{n+k}} \\biggl( \\frac{\\sum_{k'=1}^{k} r^1_{k-k'+1} r^n_{k'}+a^2 b^2 u^n_{k'}(\\Delta u^1_{k-k'} - (a^2b^2+c^2) u^1_{k-k'+1})}{(a^2(x-b)^2+c^2)^{m-k+1}} +\\ldots \\\\\n\t\t\t\\ldots+ \\frac{a^2b\\,x\\sum_{k'=1}^{k} t^1_{k-k'+1} u^n_{k'}-u^1_{k-k'+1}r^n_{k'}}{(a^2(x-b)^2+c^2)^{m-k+1}} \\biggr) + \\sum_{\\ell=1}^{n+1} \\frac{a^{2(\\ell-1)}}{\\Delta^{m+\\ell-1}} \\frac{t^\\ell_m +a^2b\\,x\\,u^\\ell_m}{(x^2+d^2)^{n-\\ell+2}},\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\twhich now also matches the $r$- and $s$-terms in \\eqref{app:eqs:pfd_recursion}, and in particular also $u^{n+1}_m=-s^{n+1}_m$.\n\t\n\tFinally, we want to extend \\eqref{app:eq:r_plus_t_eq_u:one} to \\eqref{app:eq:r_plus_t_eq_u}, which can be done as follows,\n\t%\n\t\\begin{align}\n\t\tr^\\ell_k+t^\\ell_k\n\t\t&=\\sum_{k'=1}^k (r^1_{k-k'+1}+t^1_{k-k'+1})r^{\\ell-1}_{k'} + a^2b^2 (\\Delta u^1_{k-k'}- t^1_1
{\\underbrace{u^1_{k-k'+1}}_{=\\mathrlap{u^1_1 t^{1}_{k-k'} + t^1_1 u^1_{k-k'}}}}) u^{\\ell-1}_{k'}\\\\[-0.245cm]\n\t\t&=\\sum_{k'=1}^k 2a^2b^2 u^1_{k-k'+1} r^{\\ell-1}_{k'} + a^2b^2 \\parens*{\\underbrace{(\\overbrace{\\Delta-(t^1_1)^2}^{=4a^4b^2d^2}) u^1_{k-k'}- 2t^1_1 t^1_{k-k'}}_{=-2t^1_{k-k'+1}}} u^{\\ell-1}_{k'}=2a^2b^2 u^\\ell_k.\\tag*{\\qedhere}\n\t\\end{align}\n\\end{proof}\n\n\\begin{remark}\n\tWe note that the recursion \\eqref{app:eqs:pfd_init} is formulated in terms of $t$- and $u$-terms because we chose to induct in $m$ first --- inducting in $n$ first would lead to a different update rule in place of \\eqref{app:eqs:pfd_init}!\n\\end{remark}\n\n\\subsection{Generating Functions for the PFD}\n\nThe following result about coupled recursions answers an obvious question in this context and, as such, is certainly known already. However, since the solution is quite simple, we prove it ourselves rather than search for a reference.\n\n\\begin{proposition}\\label{app:prop:genfunc_2d_rec}\n\tFor a recursion\n\t%\n\t\\begin{align}\n\t\t\\binom{g_k}{h_k}=M \\binom{g_{k-1}}{h_{k-1}} = M^{k-i} \\binom{g_i}{h_i} \\qquad \\text{with} \\qquad M=\\begin{pmatrix}\n\t\t\tm_{1,1}& m_{1,2}\\\\ m_{2,1} & m_{2,2}\n\t\t\\end{pmatrix},\n\t\\end{align}\n\t%\n\twith arbitrary $M$ and initial value $\\binom{g_i}{h_i}$, where $i\\ge 0$, the generating functions $G(y)=\\ops*{g_k}{y^k}$ and $H(y)=\\ops*{h_k}{y^k}$ are given by\n\t%\n\t\\begin{align}\\label{app:eq:genfunc_2d_rec}\n\t\tG(y)=y^i \\frac{(m_{1,2}h_i-m_{2,2}g_i)y+g_i}{\\det(M) y^2 - \\mathrm{tr}(M) y +1}, \\qquad H(y)=y^i \\frac{(m_{2,1}g_i- m_{1,1}h_i)y+h_i}{\\det(M) y^2 - \\mathrm{tr}(M) y +1}.\n\t\\end{align}\n\\end{proposition}\n\n\\begin{proof}\n\tLet us begin by assuming that $M$ is diagonalisable.
Then there exists an invertible matrix $Q$ such that\n\t%\n\t\\begin{align}\n\t\tM=Q\\begin{pmatrix}\n\t\t\t\\lambda_1 & 0 \\\\ 0 & \\lambda_2\n\t\t\\end{pmatrix} Q^{-1}.\n\t\\end{align}\n\t%\n\tSetting $\\binom{g'_i}{h'_i}:=Q^{-1}\\binom{g_i}{h_i}$, we can calculate --- because $g_k=0$ for $k<i$ --- the generating functions of the decoupled sequences $g'_k$ and $h'_k$ and recombine them via $Q$.\n\\end{proof}\n\n\\begin{lemma}\n\tFor $n\\ge k\\ge 1$ and $c>0$, we have\n\t%\n\t\\begin{align}\\label{app:eq:bell_sqrt}\n\t\tB_{n,k}\\parens[\\Big]{\\D{y}\\sqrt{c^2-y},\\ldots,\\Dn{n-k+1}{y}\\sqrt{c^2-y}}\\Bigr|_{y=0} = (-2c)^{k-2n} \\frac{(2n-k-1)!}{(k-1)!(n-k)!}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{proof}\n\tUsing \\eqref{app:eq:ops_geom_alpha} we see that\n\t%\n\t\\begin{align}\n\t\t\\coeff{y^m}\\sqrt{c^2-y}=\\coeff{y^m} c\\sqrt{1-\\frac{y}{c^2}}=\\binom{\\frac 12}{m}(-1)^m c^{1-2m}.\n\t\\end{align}\n\t%\n\tInserting this into \\eqref{app:eq:genfunc_bell}, we obtain by virtue of \\eqref{app:eq:ops_shift} and \\eqref{app:eq:ops_sqrt_k} that\n\t%\n\t\\begin{align}\n\t\tB_{n,k}\\parens[\\Big]{\\D{y}\\sqrt{c^2-y},\\ldots}\\Bigr|_{y=0}\n\t\t&= \\coeff{t^n} \\frac{n!}{k!} c^k \\parens*{\\sqrt{1-t\/c^2}-1}^k = \\coeff{t^n} \\frac{n!}{k!} c^{k} \\parens[\\Big]{\\frac{1-\\sqrt{1-t\/c^2}}{t\/(2c^2)}}^k \\parens[\\Big]{\\frac{-t}{2c^2}}^k\\\\\n\t\t&= \\coeff*{t^{n-k}} \\frac{n!}{k!} (-2c)^{-k} \\parens[\\Big]{\\frac{1-\\sqrt{1-t\/c^2}}{t\/(2c^2)}}^k \\\\\n\t\t&= \\frac{n!}{k!} (-2c)^{-k} \\frac{k}{n}\\binom{2n-k-1}{n-k} (4c^2)^{k-n},\n\t\\end{align}\n\t%\n\twhich yields the desired result after rearranging.\n\\end{proof}\n\n\\begin{proposition}\\label{app:prop:coeff_expl_long}\n\tBy differentiating \\eqref{app:eq:Imn_as_coeff} $m-1$ times in $y$ and $n-1$ times in $z$, the coefficients in \\eqref{app:eq:Imn_est_best} can be calculated as follows\n\t%\n\t\\begin{align}\n\t\tc^{m,n}_{i,j}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t= \\frac{\\delta_{\\{i+j\\equiv1\\bmod{2}\\}}}{4^{m+n-2}}
\\sum_{k=1}^{m-1}\\sum_{\\ell=1}^{n-1}\\sum_{p=0}^{\\ell}\\sum_{q=0}^{k}\\sum_{r=0}^{k-q}\\sum_{s,t\\ge 0} \\frac{2^k k}{m-1} \\binom{2m-k-3}{\\phantom{2}m-k-1} \\frac{2^\\ell \\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\cdot\\ldots\\\\\n\t\t\t\\shoveright{\\ldots\\cdot \\binom{\\ell+1}{p} \\binom{\\ell-p}{q} \\binom{\\ell+r}{r} \\binom{m+n-2-\\ell-r}{s} \\binom{m+n-2-p-q-r-s}{t} \\cdot\\ldots} \\\\\n\t\t\t\\ldots\\cdot \\binom{m+n-1}{i-p-s} \\binom{m+n-1-i+p+s}{j-q-r-t} (-1)^{\\frac{3(i+j)+1}{2}+p+s+r+t}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tFurthermore, $c^{m,n}_{i,j}$ is zero for $i+j>2(m+n)-3$.\n\\end{proposition}\n\n\\begin{proof}\n\tLetting $v(y):=\\sqrt{c^2-y}$ and $w(z):=\\sqrt{d^2-z}$ (which behave like their namesakes above, but with different constants), we begin by rewriting \\eqref{app:eq:Imn_as_coeff},\n\t%\n\t\\begin{align}\n\t\tI_{m,n} &=\\coeff*{y^{m-1} z^{n-1}} \\frac{\\pi}{vw}\\parens{\\frac{v+aw-\\mathrm{i} ab}{(v+aw+\\mathrm{i} ab)(v+aw-\\mathrm{i} ab)} + \\frac{\\mathrm{i} ab}{(v+aw)^2+a^2b^2}}\\\\\n\t\t&=\\Re \\coeff*{y^{m-1} z^{n-1}} \\frac{\\pi}{v(y)w(z)}\\frac{1}{v(y)+aw(z)+\\mathrm{i} ab}, \\label{app:eq:Imn_simplif_frac}\n\t\\end{align}\n\t%\n\tsince, clearly, $I_{m,n}$ is always real (and the second summand always imaginary).\n\t\n\tFor now, we assume that both $n,m\\ge2$, and apply \\eqref{app:eq:faa_di_bruno} to differentiate $n-1$ times in $z$, arriving at\n\t%\n\t\\begin{align}\n\t\tI_{m,n}&=\\Re\\coeff*{y^{m-1}} \\frac{\\pi}{v(y)} \\frac{1}{(n-1)!}\\sum_{\\ell=1}^{n-1} B_{n-1,\\ell}\\parens[\\Big]{\\D{z}w(z),\\ldots}\\Bigr|_{z=0} \\Dn{\\ell}{w} \\frac{1}{w}\\frac{1}{v(y)+aw+\\mathrm{i} ab}\\Bigr|_{w=d}.\n\t\\end{align}\n\t%\n\tUsing the PFD in $w$, we see that\n\t%\n\t\\begin{align}\n\t\t\\Dn{\\ell}{w} \\frac{1}{w}\\frac{1}{v(y)+aw+\\mathrm{i} ab}\\Bigr|_{w=d} \n\t\t&= \\Dn{\\ell}{w} \\frac{1}{v(y)+\\mathrm{i} ab}\\parens[\\Big]{\\frac{1}{w}-\\frac{a}{v(y)+aw+\\mathrm{i} ab}}\\Bigr|_{w=d}
\\\\\n\t\t&=\\frac{\\ell!(-1)^\\ell}{v(y)+\\mathrm{i} ab}\\parens[\\Big]{\\frac{1}{w^{\\ell+1}}-\\frac{a^{\\ell+1}}{(v(y)+aw+\\mathrm{i} ab)^{\\ell+1}}}\\Bigr|_{w=d} \\\\\n\t\t&=\\frac{\\ell!(-1)^\\ell}{v(y)+\\mathrm{i} ab}\\frac{1}{d^{\\ell+1}}\\parens[\\Big]{1-\\frac{(ad)^{\\ell+1}}{(v(y)+ad+\\mathrm{i} ab)^{\\ell+1}}}.\n\t\\end{align}\n\t%\n\tTogether with \\eqref{app:eq:bell_sqrt}, we can continue manipulating $I_{m,n}$\n\t%\n\t\\begin{align}\n\tI_{m,n}&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$I_{m,n}$}-\\multlinegap]\n\t\t= \\Re\\coeff*{y^{m-1}} \\frac{\\pi}{v(y)}\\frac{1}{v(y)+\\mathrm{i} ab} \\frac{1}{(n-1)!}\\cdot\\ldots \\\\\n\t\t\\ldots\\cdot\\sum_{\\ell=1}^{n-1} \\frac{(-2d)^{\\ell-2n+2}(2n-\\ell-3)!}{(\\ell-1)!(n-\\ell-1)!} \\frac{\\ell!(-1)^\\ell}{d^{\\ell+1}}\\parens[\\Big]{1-\\frac{(ad)^{\\ell+1}}{(v(y)+ad+\\mathrm{i} ab)^{\\ell+1}}}\n\t\\end{multlined}\\\\\n\t&= \\Re\\coeff*{y^{m-1}} \\frac{\\pi}{v(y)}\\frac{1}{v(y)+\\mathrm{i} ab} \\sum_{\\ell=1}^{n-1} \\frac{2^{\\ell+1}}{(2d)^{2n-1}} \\frac{\\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\parens[\\Big]{1-\\frac{(ad)^{\\ell+1}}{(v(y)+ad+\\mathrm{i} ab)^{\\ell+1}}}.\n\t\\end{align}\n\t%\n\tUsing\n\t%\n\t\\begin{multline}\n\t\t1-\\frac{(ad)^{\\ell+1}}{(v(y)+ad+\\mathrm{i} ab)^{\\ell+1}} \\\\\n\t\t=\\frac{1}{(v(y)+ad+\\mathrm{i} ab)^{\\ell+1}}\\parens[\\bigg]{\\sum_{p=0}^{\\ell+1}\\binom{\\ell+1}{p}(ad)^p (v(y)+\\mathrm{i} ab)^{\\ell+1-p} -(ad)^{\\ell+1}}\n\t\\end{multline}\n\t%\n\tthis collapses further to\n\t%\n\t\\begin{align}\n\t\tI_{m,n}&=\\Re\\coeff*{y^{m-1}} \\frac{\\pi}{v(y)} \\sum_{\\ell=1}^{n-1} \\sum_{p=0}^{\\ell} \\frac{2^{\\ell+1}}{(2d)^{2n-1}} \\frac{\\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\binom{\\ell+1}{p} \\frac{(ad)^p (v(y)+\\mathrm{i} ab)^{\\ell-p}}{(v(y)+ad+\\mathrm{i} ab)^{\\ell+1}}.\n\t\\end{align}\n\t%\n\tApplying the product rule, we observe first that\n\t%\n\t\\begin{align}\n\t\t\\Dn{j}{v}\\frac{1}{v(v+ad+\\mathrm{i} 
ab)^{\\ell+1}}\n\t\t&=\\sum_{r=0}^{j}\\binom{j}{r}(-1)^j \\frac{r!(\\ell+j-r)!}{\\ell!} \\frac{1}{v^{r+1}(v+ad+\\mathrm{i} ab)^{\\ell+1+j-r}}\\\\\n\t\n\t\t&=j!(-1)^j\\sum_{r=0}^{j}\\binom{\\ell+r}{r}\\frac{1}{v^{j-r+1}(v+ad+\\mathrm{i} ab)^{\\ell+1+r}},\n\t\\end{align}\n\t%\n\twhere we reversed the order of summation for the last equation. Consequently, with $j=k-q$, once more due to the product rule, $I_{m,n}$ is equal to\n\t%\n\t\\begin{align}\n\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\\mathrel{\\phantom{=}} \\Re \\frac{\\pi}{(m-1)!} \\sum_{\\ell=1}^{n-1} \\sum_{p=0}^{\\ell} \\frac{2^{\\ell+1}}{(2d)^{2n-1}} \\frac{\\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\! \\binom{\\ell+1}{p} \\! \\sum_{k=1}^{m-1} \\! B_{m-1,k}\\parens[\\Big]{\\D{y}v(y),\\ldots}\\Bigr|_{y=0} \\! \\cdot\\ldots\\hspace{-2cm}\\\\\n\t\t\\ldots\\cdot\\sum_{q=0}^{\\mathclap{\\min(k,\\ell-p)}} \\hspace{0.3cm}\\binom{k}{q}\\frac{(\\ell-p)!}{(\\ell-p-q)!} (c+\\mathrm{i} ab)^{\\ell-p-q} \\sum_{r=0}^{k-q} \\binom{\\ell+r}{r}\\frac{(ad)^p (k-q)!(-1)^{k-q}}{c^{k-q-r+1}(c+ad+\\mathrm{i} ab)^{\\ell+1+r}}\n\t\\end{multlined}\\\\\n\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t=\\Re \\pi \\sum_{k=1}^{m-1}\\sum_{\\ell=1}^{n-1} \\frac{2^{k+1}}{(2c)^{2m-1}} \\frac{k}{m-1} \\binom{2m-k-3}{\\phantom{2}m-k-1} \\frac{2^{\\ell+1}}{(2d)^{2n-1}} \\frac{\\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\cdot\\ldots\\\\\n\t\t\\ldots\\cdot\\sum_{p=0}^{\\ell}\\sum_{q=0}^{k} \\sum_{r=0}^{k-q} \\binom{\\ell+1}{p} \\binom{\\ell-p}{q} \\binom{\\ell+r}{r} \\frac{(-1)^q c^{q+r} (ad)^p (c+\\mathrm{i} ab)^{\\ell-p-q}}{(c+ad+\\mathrm{i} ab)^{\\ell+r+1}}.\n\t\\end{multlined}\n\t\\end{align}\n\t%\n\tTo find the respective powers of $c$ and $ad$, we pull out the maximal power of the denominator --- multiplied by its conjugate to make it real; recalling $\\delta_+=(c+ad)^2+a^2b^2=(c+ad+\\mathrm{i} ab)(c+ad-\\mathrm{i} ab)$ --- to 
find\n\t%\n\t\\begin{multline}\\label{eq:Imn_coeff_unified_denom}\n\t\tI_{m,n}\n\t\t=\\frac{4\\pi}{(4\\delta_+)^{m+n-1}c^{2m-1}d^{2n-1}} \\Re\\!\\!\\sum_{k,\\ell,p,q,r} \\!\\frac{2^k k}{m-1} \\binom{2m-k-3}{\\phantom{2}m-k-1} \\frac{2^\\ell \\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\! \\binom{\\ldots}{p,q,r} \\cdot\\ldots\\\\\n\t\t\\ldots\\cdot(-1)^q c^{q+r} (ad)^p (c+\\mathrm{i} ab)^{\\ell-p-q}(c+ad+\\mathrm{i} ab)^{m+n-2-\\ell-r} (c+ad-\\mathrm{i} ab)^{m+n-1},\n\t\\end{multline}\n\t%\n\twhere we will denote\n\t%\n\t\\begin{align}\n\t\t\\binom{\\ldots}{k,\\ell,m,n,p,q,r}:=\\frac{2^k k}{m-1} \\binom{2m-k-3}{\\phantom{2}m-k-1} \\frac{2^\\ell \\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\binom{\\ell+1}{p} \\binom{\\ell-p}{q} \\binom{\\ell+r}{r}\n\t\\end{align}\n\t%\n\tfor brevity, and will often collect consecutive letters with a hyphen, e.g.~$k-n$ for $k,\\ell,m,n$. As the penultimate step, we expand the last line of \\eqref{eq:Imn_coeff_unified_denom} to\n\t%\n\t\\begin{multline}\n\t\n\t\n\t\n\t\t\\sum_{s\\ge 0} \\sum_{t\\ge 0} \\sum_{g\\ge 0} \\sum_{h\\ge 0} \\binom{m+n-2-\\ell-r}{s} \\binom{m+n-2-p-q-r-s}{t} \\binom{m+n-1}{g} \\cdot\\ldots \\\\\n\t\t\\ldots\\cdot \\binom{m+n-1-g}{h} (-1)^{g+h+m+n+q+1} c^{h+q+r+t} (ad)^{g+p+s} (\\mathrm{i} ab)^{2(m+n)-3-g-h-p-q-r-s-t}.\n\t\\end{multline}\n\t%\n\tFinally, we collect like powers of $c$ and $ad$ as powers of $i=h+q+r+t$ and $j=g+p+s$, respectively,\n\t%\n\t\\begin{align}\n\tI_{m,n}\n\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$I_{m,n}$}-\\multlinegap]\\label{eq:Imn_repr_coeff}\n\t\t=\\phantom{:}\\frac{\\pi}{\\delta_+^{m+n-1}c^{2m-1}d^{2n-1}} \\Re\\!\\! \\sum_{i,j=0}^{2(m+n)-3} \\!\\! 
\\sum_{k,\\ell,p,q,r,s,t} \\binom{\\ldots}{k,\\ell,m,n,p,q,r,s,t} \\binom{m+n-1}{j-p-s} \\cdot\\ldots\\\\\n\t\t\\ldots\\cdot \\binom{m+n-1-j+p+s}{i-q-r-t} \\frac{(-1)^{i+j+p+r+s+t+m+n+1}}{4^{m+n-2}} c^i (ad)^j (\\mathrm{i} ab)^{2(m+n)-3-i-j}\n\t\\end{multlined}\\\\\n\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$I_{m,n}$}-\\multlinegap]\n\t\t=\\phantom{:}\\frac{\\pi}{\\delta_+^{m+n-1}c^{2m-1}d^{2n-1}} \\Re\\!\\! \\sum_{i,j=0}^{2(m+n)-3} \\!\\! \\sum_{k,\\ell,p,q,r,s,t} \\binom{\\ldots}{k,\\ell,m,n,p,q,r,s,t} \\binom{m+n-1}{j-p-s} \\cdot\\ldots\\\\\n\t\t\\ldots\\cdot \\binom{m+n-1-j+p+s}{i-q-r-t} \\frac{\\mathrm{i}^{-i-j+1}(-1)^{p+r+s+t}}{4^{m+n-2}} c^i (ad)^j (ab)^{2(m+n)-3-i-j}\n\t\\end{multlined}\\\\\n\t&=:\\frac{\\pi}{\\delta_+^{m+n-1}c^{2m-1}d^{2n-1}} \\sum_{i,j=0}^{2(m+n)-3} c^{m,n}_{i,j} c^i (ad)^j (ab)^{2(m+n)-3-i-j}. \n\t\\end{align}\n\t%\n\tObserve that, since we only consider the real part of the sum, $i+j$ has to be odd, so that the power of $\\mathrm{i}$ is even.\n\t\n\tThe claim that $i+j$ can be at most $2(m+n)-3$ follows easily from the binomial coefficient in $i$, because it implies that (estimating $t$ by its maximal value, which can be read off from the corresponding binomial coefficient)\n\t%\n\t\\begin{multline}\n\t\ti-q-r-t\\le m+n-1-j+p+s \\\\\n\t\t\\Longrightarrow \\quad\n\t\ti+j\\le m+n-1+p+q+r+s+t\\le 2(m+n)-3.\\tag*{\\qedhere}\n\t\\end{multline}\n\\end{proof}\n\nNow, after having disassembled $I_{m,n}$ into its parts, we are able to insert the indicators we later want to look for --- meaning we multiply by $v^i w^j y^m z^n$ --- and try to reassemble the whole thing once more.\n\n\\begin{proposition}\\label{app:prop:genfunc_coeff}\n\tDenoting $g(v,w):=(cv+adw)^2+a^2b^2$, it can be seen that the generating function of the numerator in $I_{m,n}$ is\n\t%\n\t\\begin{align}\n\t\\MoveEqLeft\n\t\tN(v,w,y,z):=\\sum_{i,j\\ge 0,\\,m,n\\ge 1} c^{m,n}_{i,j} (cv)^i (adw)^j (ab)^{2(m+n)-3-i-j} y^m
z^n\\\\\n\t\t&=\\Re \\frac{yz}{\\sqrt{1-g(v,w)y}\\sqrt{1-g(v,w)z}} \\frac{g(v,w)}{cv\\sqrt{1-g(v,w)y} +adw\\sqrt{1-g(v,w)z}+\\mathrm{i} ab} \\label{app:eq:Imn_ops_numerator}\\\\\n\t\t&=\\frac{g(v,w)yz}{\\sqrt{1-g(v,w)y}\\sqrt{1-g(v,w)z}} \\frac{cv\\sqrt{1-g(v,w)y}+adw\\sqrt{1-g(v,w)z}}{(cv\\sqrt{1-g(v,w)y}+adw\\sqrt{1-g(v,w)z})^2+a^2b^2}, \n\t\\end{align}\n\t%\n\tand for the coefficients alone, with $h(v,w):=(v+w)^2+1$,\n\t%\n\t\\begin{align}\n\t\tC(v,w,y,z)\n\t\t&:=\\sum_{i,j\\ge 0,\\,m,n\\ge 1} c^{m,n}_{i,j} v^i w^j y^m z^n\\\\\n\t\t&\\phantom{:}=\\Re \\frac{yz}{\\sqrt{1-h(v,w)y}\\sqrt{1-h(v,w)z}} \\frac{h(v,w)}{v\\sqrt{1-h(v,w)y}+w\\sqrt{1-h(v,w)z}+\\mathrm{i}}\\\\\n\t\t&\\phantom{:}=\\frac{h(v,w)yz}{\\sqrt{1-h(v,w)y}\\sqrt{1-h(v,w)z}} \\frac{v\\sqrt{1-h(v,w)y}+w\\sqrt{1-h(v,w)z}}{(v\\sqrt{1-h(v,w)y}+w\\sqrt{1-h(v,w)z})^2+1}.\n\t\\end{align}\n\\end{proposition}\n\n\\begin{remark}\n\tUsing \\eqref{app:eq:Imn_as_coeff} and \\eqref{app:eq:coeff_factor}, we can deduce that\n\t%\n\t\\begin{align}\n\t\tI_{m,n}=\\frac{\\pi}{\\delta_+^{m+n-1}c^{2m-1}d^{2n-1}}\\coeff*{y^{m}z^{n}}\n\t\t\\Re \\frac{yz}{\\sqrt{1-\\delta_+y}\\sqrt{1-\\delta_+z}} \\frac{\\delta_+}{c\\sqrt{1-\\delta_+y}+ad\\sqrt{1-\\delta_+z}+\\mathrm{i} ab}.\\qquad \\label{app:eq:Imn_ops_rescaled}\n\t\\end{align}\n\t%\n\tComparing this with \\eqref{app:eq:Imn_ops_numerator}, we immediately see the correspondence (by setting $v,w=1$, because $g\\bigr|_{v=1,w=1}=\\delta_+$) --- the knowledge we gain with $N(v,w,y,z)$ is the ability to determine a single coefficient as opposed to the whole numerator.\n\\end{remark}\n\n\\begin{proof}[Proof of \\autoref{app:prop:genfunc_coeff}]\n\tAs outlined in \\autoref{ssec:genfunc}, we take \\eqref{eq:Imn_repr_coeff} and extend the summation to $\\infty$ for all parameters except $\\ell$ (the binomial coefficients ensure that only finitely many terms are non-zero, except for $r$, where we need to change the order of summation and use $k\\ge q+r$ to enforce $r\\le
k-q$), and interchange order so that we can sum, first in $j$ and $i$,\n\t%\n\t\\begin{align}\n\t\tN(v,w,y,z)&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$N(v,w,y,z)$}-\\multlinegap]\n\t\t\t=\\Re\\sum_{j,k,\\ell,m,n,p,q,r,s,t} \\binom{\\ldots}{k,\\ell,m,n,p,q,r,s,t} \\binom{m+n-1}{j-p-s} \\frac{(-1)^{m+n+1+j+p+r+s+t}}{4^{m+n-2}} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot (adw)^j (\\mathrm{i} ab)^{2(m+n)-3-j} y^m z^n \\sum_{i=q+r+t}^\\infty \\binom{m+n-1-j+p+s}{i-q-r-t} \\parens[\\Big]{\\frac{-cv}{\\mathrm{i} ab}}^i\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$N(v,w,y,z)$}-\\multlinegap]\n\t\t\t=\\Re\\sum_{j,k,\\ell,m,n,p,q,r,s,t} \\binom{\\ldots}{k-n,p-t} \\binom{m+n-1}{j-p-s} \\frac{(-1)^{m+n+1+j+p+r+s+t}}{4^{m+n-2}} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot (adw)^j (\\mathrm{i} ab)^{2(m+n)-3-j} y^m z^n \\parens[\\Big]{\\frac{-cv}{\\mathrm{i} ab}}^{q+r+t} \\parens[\\Big]{1-\\frac{cv}{\\mathrm{i} ab}}^{m+n-1-j+p+s}\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$N(v,w,y,z)$}-\\multlinegap]\n\t\t\t=\\Re\\sum_{k,\\ell,m,n,p,q,r,s,t} \\binom{\\ldots}{k-n,p-t} \\frac{(-1)^{m+n+1+q}}{4^{m+n-2}} (cv)^{q+r+t} (\\mathrm{i} ab)^{2(m+n)-3-q-r-t} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot y^m z^n \\parens[\\Big]{1-\\frac{cv}{\\mathrm{i} ab}}^{m+n-1} \\parens[\\Big]{\\frac{adw}{\\mathrm{i} ab}}^{p+s} \\sum_{j=0}^\\infty \\binom{m+n-1}{j} \\parens[\\Big]{\\frac{-adw}{\\mathrm{i} ab-cv}}^{j}\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$N(v,w,y,z)$}-\\multlinegap]\n\t\t\t=\\Re\\sum_{k,\\ell,m,n,p,q,r,s,t} \\binom{\\ldots}{k-n,p-t} \\frac{(-1)^{m+n+1+q}}{4^{m+n-2}} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot (cv)^{q+r+t} (adw)^{p+s} (\\mathrm{i} ab)^{2(m+n)-3-p-q-r-s-t} y^m z^n \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m+n-1},\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\twhere the last line uses $(1-\\frac{cv}{\\mathrm{i} 
ab})(1-\\frac{adw}{\\mathrm{i} ab-cv})=1-\\frac{cv+adw}{\\mathrm{i} ab}$. We continue by summing in $t$ and $s$, which are just applications of the binomial theorem,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k-n,p-s} \\binom{\\ldots}{k-n,p-s} \\frac{(-1)^{m+n+1+q}}{4^{m+n-2}} (cv)^{q+r} (adw)^{p+s} (\\mathrm{i} ab)^{2(m+n)-3-p-q-r-s} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot y^m z^n\\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m+n-1} \\sum_{t=0}^{m+n-2-\\smash{p-q}-r-s} \\binom{m+n-2-p-q-r-s}{t} \\parens[\\Big]{\\frac{cv}{\\mathrm{i} ab}}^t\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k-n,p-r} \\binom{\\ldots}{k-n,p-r} \\frac{(-1)^{m+n+1+q}}{4^{m+n-2}} (\\mathrm{i} ab)^{2(m+n)-3-p-q-r} y^m z^n \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m+n-1} \\!\\!\\!\\! \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot (cv)^{q+r} (adw)^{p} \\parens[\\Big]{1+\\frac{cv}{\\mathrm{i} ab}}^{m+n-2-p-q-r} \\sum_{s=0}^{m+n-2-\\ell-r} \\binom{m+n-2-\\ell-r}{s} \\parens[\\Big]{\\frac{adw}{cv+\\mathrm{i} ab}}^s\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k-n,p-r} \\binom{\\ldots}{k-n,p-r} \\frac{(-1)^{m+n+1+q}}{4^{m+n-2}} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{2(m+n)-3-p-q-r} y^m z^n \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m+n-1} \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{m+n-2-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tNext we sum in $n$, setting $g(v,w):=(cv+adw)^2+a^2b^2=-(\\mathrm{i} ab)^2(1-\\frac{cv+adw}{\\mathrm{i} ab})(1+\\frac{cv+adw}{\\mathrm{i} ab})$,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k-m,p-r} \\binom{\\ldots}{k-m,p-r} \\frac{(-1)^{m+1+q}}{4^{m-2}} 
(cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{2m-3-p-q-r} y^m \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m-1}\\!\\!\\!\\!\\! \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{m-2-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell} \\sum_{n=\\ell+1}^{\\infty}\\frac{2^\\ell \\ell}{n-1} \\binom{2n-\\ell-3}{\\phantom{2}n-\\ell-1} \\parens[\\Big]{\\frac{gz}{4}}^n\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k-m,p-r} \\binom{\\ldots}{k-m,p-r} \\frac{(-1)^{m+1+q}}{4^{m-2}} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{2m-3-p-q-r} y^m \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m-1}\\!\\!\\!\\!\\! \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{m-2-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell}\\parens[\\Big]{\\frac{gz}{4}}^{\\ell+1} 2^\\ell \\sum_{n=0}^{\\infty}\\frac{\\ell}{n+\\ell} \\binom{2n+\\ell-1}{n} \\parens[\\Big]{\\frac{gz}{4}}^n\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k-m,p-r} \\binom{\\ldots}{k-m,p-r} \\frac{(-1)^{m+q}}{4^{m-1}} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{2m-1-p-q-r} y^m z \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}}^{m} \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{m-1-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell,\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\twhere we used \\eqref{app:eq:ops_sqrt_k} for the last equation and distributed one surplus power back to its constituent factors.\n\t\n\tIn the case of $\\ell=0$, \\eqref{app:eq:ops_sqrt_k} is not applicable, so we would have to subtract the entire term with $\\ell$ set to $0$. 
As is immediately obvious from the last display, this only affects the case that $n=1$ (because the only remaining power of $z$ is $z^1$), and we will not deal with the cases $m=1 \\lor n=1$ here, as they can be handled in exactly the same way as the general case (e.g.~starting from \\eqref{app:eq:Imn_simplif_frac}, where either $y$ or $z$ can then be set to zero and the derivatives only need to be considered for the other variable).\n\t\n\tSimilarly, we tackle the sum in $m$ (again ignoring the error for $k=0$), allowing us to also deal with the sum in $k$,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k,\\ell,p-r} \\binom{\\ldots}{p-r} 4(-1)^{q} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{-1-p-q-r} z \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{-1-p-q-r} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell 2^k \\parens[\\Big]{\\frac{gy}{4}}^{k+1}\\sum_{m=0}^\\infty \\frac{ k}{m+k} \\binom{2m+k-1}{m} \\parens[\\Big]{\\frac{gy}{4}}^{m}\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{k,\\ell,p-r} \\binom{\\ldots}{p-r} (-1)^{q+1} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{1-p-q-r} yz \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell \\parens*{1-\\sqrt{1-gy}}^k\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re \\sum_{\\ell,p-r} \\binom{\\ldots}{p-r} (-1)^{q+1} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{1-p-q-r} yz \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i}
ab}}^{p+q-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell \\sum_{k=q+r}^\\infty \\parens*{1-\\sqrt{1-gy}}^k\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re\\frac{yz}{\\sqrt{1-gy}}\\sum_{\\ell,p-r} \\binom{\\ldots}{p,q,r} (-1)^{q+1} (cv)^{q+r} (adw)^{p} (\\mathrm{i} ab)^{1-p-q-r} \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{-p-q-r} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell \\parens*{1-\\sqrt{1-gy}}^{q+r}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tNext, we sum in $r$, by \\eqref{app:eq:ops_geom_alpha},\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re\\frac{yz}{\\sqrt{1-gy}} \\sum_{\\ell,p,q} \\binom{\\ldots}{p,q\n\t\t\t(-1)^{q+1} (cv)^{q} (adw)^{p} (\\mathrm{i} ab)^{1-p-q} \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}} \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{-p-q} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p+q-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell \\parens*{1-\\sqrt{1-gy}}^{q} \\;\\smash[b]{\\underbrace{\\sum_{r=0}^\\infty \\binom{\\ell+r}{r} \\parens[\\bigg]{\\frac{cv\\parens*{1-\\sqrt{1-gy}}}{cv+adw+\\mathrm{i} ab}}^r,}_{\\parens*{\\frac{cv+adw+\\mathrm{i} ab}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}}^{\\ell+1}}} \\vphantom{\\underbrace{\\sum_{r=0}}_{a}}\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tthen in $q$,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re\\frac{-yz}{\\sqrt{1-gy}}\\sum_{\\ell,p} \\binom{\\ell+1}{p} (adw)^{p} (\\mathrm{i} ab)^{1-p} \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}} \\parens[\\Big]{1+\\frac{cv+adw}{\\mathrm{i} ab}}^{-p} \\parens[\\Big]{1+\\frac{adw}{cv+\\mathrm{i} ab}}^{p-\\ell} \\!\\!\\!\\!\\! 
\\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens*{1-\\sqrt{1-gz}}^\\ell \\parens[\\Big]{\\frac{cv+adw+\\mathrm{i} ab}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}}^{\\ell+1} \\sum_{q=0}^{\\infty} \\binom{\\ell-p}{q} \\parens[\\bigg]{\\frac{-cv\\parens*{1-\\sqrt{1-gy}}}{cv+\\mathrm{i} ab}}^q,\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tas well as in $p$,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re\\frac{-yz}{\\sqrt{1-gy}}\\sum_{\\ell=0}^\\infty \\mathrm{i} ab \\parens[\\Big]{1-\\frac{cv+adw}{\\mathrm{i} ab}} \\parens[\\Big]{\\frac{cv+adw+\\mathrm{i} ab}{cv+\\mathrm{i} ab}}^{-\\ell} \\parens*{1-\\sqrt{1-gz}}^\\ell \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\Big]{\\frac{cv+adw+\\mathrm{i} ab}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}}^{\\ell+1} \\parens[\\bigg]{\\frac{cv\\sqrt{1-gy}+\\mathrm{i} ab}{cv+\\mathrm{i} ab}}^\\ell \\sum_{p=0}^{\\ell} \\binom{\\ell+1}{p} \\parens[\\bigg]{\\frac{adw}{cv\\sqrt{1-gy}+\\mathrm{i} ab}}^p\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re\\frac{yz}{\\sqrt{1-gy}} \\sum_{\\ell=0}^\\infty \\frac{\\parens{cv+adw-\\mathrm{i} ab}\\parens{cv+adw+\\mathrm{i} ab}}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab} \\parens*{1-\\sqrt{1-gz}}^\\ell \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\parens[\\bigg]{\\frac{cv\\sqrt{1-gy}+\\mathrm{i} ab}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}}^\\ell \\parens{\\frac{\\parens{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}^{\\ell+1}-(adw)^{\\ell+1}}{(cv\\sqrt{1-gy}+\\mathrm{i} ab)^{\\ell+1}}}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tFinally, we wrap things up by summing in $\\ell$,\n\t%\n\t\\begin{align}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\multlinegap]\n\t\t\t=\\Re\\frac{yz}{\\sqrt{1-gy}} \\frac{(cv+adw)^2+a^2b^2}{cv\\sqrt{1-gy}+\\mathrm{i} ab} \\Biggl(\\sum_{\\ell=0}^\\infty \\parens*{1-\\sqrt{1-gz}}^{\\ell} - \\ldots \\\\\n\t\t\t\\ldots 
-\\sum_{\\ell=0}^\\infty\\frac{adw}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}\\parens[\\bigg]{\\frac{adw\\parens*{1-\\sqrt{1-gz}}}{cv\\sqrt{1-gy}+adw+\\mathrm{i} ab}}^\\ell\\Biggr)\n\t\t\\end{multlined}\\\\\n\t\t&=\\Re\\frac{yz}{\\sqrt{1-gy}} \\frac{g}{cv\\sqrt{1-gy}+\\mathrm{i} ab} \\parens[\\bigg]{\\frac{1}{\\sqrt{1-gz}}-\\frac{adw}{cv\\sqrt{1-gy}+adw\\sqrt{1-gz}+\\mathrm{i} ab}}\\\\\n\t\t&=\\Re\\frac{yz}{\\sqrt{1-gy}\\sqrt{1-gz}} \\frac{g}{cv\\sqrt{1-gy}+adw\\sqrt{1-gz}+\\mathrm{i} ab}.\n\t\\end{align}\n\t%\n\tAs it turns out, this formula also holds for the case $m=1\\lor n=1$ that we have omitted thus far. The proof is left as an exercise to the reader.\n\t\n\tThe form of the function $C(v,w,y,z)$ is easily seen by using \\eqref{app:eq:coeff_factor} --- or more heuristically, setting $a=b=c=d=1$ in $N(v,w,y,z)$. This finishes the proof.\n\\end{proof}\n\n\\begin{proposition}\\label{app:prop:coeff_zero}\n\tIf any of the following conditions is \\emph{not} met, the coefficient $c^{m,n}_{i,j}$ is zero,\n\t%\n\t\\begin{align}\n\ti+j&\\equiv 1\\bmod{2}, & i+j&\\le 2(m+n)-3, & (i&\\ge 2m-1 \\lor j\\ge 2n-1).\n\t\\end{align}\n\t%\n\tIf the first two conditions \\emph{are} met, the coefficients can be calculated as follows\n\t\\begin{align}\n\t\tc^{m,n}_{i,j}\n\t\t\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t=\\sum_{r=0}^{i}\\sum_{s=0}^{j} \\delta_{\\{r+s\\equiv1\\bmod{2}\\}} (-1)^{m+n+\\frac{r+s-1}{2}} \\binom{r+s}{s} \\cdot\\ldots \\\\\n\t\t\t\\ldots \\cdot \\binom{\\frac{r-1}{2}}{m-1}\\binom{\\frac{s-1}{2}}{n-1} \\binom{m+n-1}{m+n-1-\\frac{i+j-r-s}{2}} \\binom{i+j-r-s}{i-r}.\n\t\t\\end{multlined}\n\t\\end{align}\n\\end{proposition}\n\n\\begin{proof}\n\tInstead of explicit differentiation, we will reverse the process of calculating a generating function, and ``disassemble'' $C(v,w,y,z)$ in a particular way. 
However, we will still use the information we have already, i.e.~that $i+j$ must be odd and at most $2(m+n)-3$ (e.g.~from the representation in \\autoref{app:prop:coeff_expl_long}). First we calculate\n\t%\n\t\\begin{align}\n\t\tc^{m,n}_{i,j}\n\t\t&=\\Re\\coeff*{v^i w^j y^{m}z^{n}}\\frac{yz}{\\sqrt{1-hy}\\sqrt{1-hz}} \\frac{h}{v\\sqrt{1-hy}+w\\sqrt{1-hz}+\\mathrm{i}} \\\\\n\t\t&=\\Re\\coeff*{v^i w^j y^{m}z^{n}} \\frac{hyz}{\\sqrt{1-hy}\\sqrt{1-hz}} \\sum_{r\\ge 0}\\frac{(-v\\sqrt{1-hy})^r}{(w\\sqrt{1-hz}+\\mathrm{i})^{r+1}}\\\\\n\t\t&=\\Re\\coeff*{v^i w^j y^{m}z^{n}} \\frac{hyz}{\\sqrt{1-hz}} \\sum_{r\\ge 0} (-v)^r (\\sqrt{1-hy})^{r-1} (-\\mathrm{i})^{r+1} \\sum_{s\\ge 0} \\binom{r+s}{s} \\parens*{\\mathrm{i} w\\sqrt{1-hz}}^s\\\\\n\t\t&=\\Re\\coeff*{v^i w^j y^{m}z^{n}} -hyz \\sum_{r,s\\ge 0} \\mathrm{i}^{r+s+1} \\binom{r+s}{s} v^r w^s \\sum_{k\\ge 0} \\binom{\\frac{r-1}{2}}{k} (-hy)^k \\sum_{\\ell\\ge 0} \\binom{\\frac{s-1}{2}}{\\ell} (-hz)^\\ell,\n\t\n\t\n\t\n\t\n\t\\end{align}\n\t%\n\thaving used \\eqref{app:eq:ops_geom}, \\eqref{app:eq:ops_geom_k} and \\eqref{app:eq:ops_geom_alpha}.
Extracting the appropriate powers of $y,z$, we continue\n\t%\n\t\\begin{align}\n\t\tc^{m,n}_{i,j}\n\t\t&=\\Re\\coeff*{v^i w^j} \\sum_{r,s\\ge 0} (-1)^{m+n+1} \\mathrm{i}^{r+s+1} \\binom{r+s}{s} \\binom{\\frac{r-1}{2}}{m-1} \\binom{\\frac{s-1}{2}}{n-1} v^r w^s h^{m+n-1}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t=\\Re\\coeff*{v^i w^j} \\sum_{r,s,t,u\\ge 0} (-1)^{m+n+1} \\mathrm{i}^{r+s+1} \\binom{r+s}{s} \\binom{\\frac{r-1}{2}}{m-1} \\binom{\\frac{s-1}{2}}{n-1} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot\\binom{m+n-1}{t} \\binom{2(m+n-1-t)}{u} v^{r+u} w^{s+2(m+n-1-t)-u} 1^t\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t=\\Re\\coeff*{v^i w^j} \\sum_{i',j',r,s\\ge 0} (-1)^{m+n+1} \\mathrm{i}^{r+s+1} \\binom{r+s}{s} \\binom{\\frac{r-1}{2}}{m-1} \\binom{\\frac{s-1}{2}}{n-1} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot \\binom{m+n-1}{m+n-1-\\frac{i'+j'-r-s}{2}} \\binom{i'+j'-r-s}{i'-r} v^{i'} w^{j'}\n\t\t\\end{multlined}\\\\\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t= \\Re\\sum_{r=0}^{i}\\sum_{s=0}^{j} (-1)^{m+n+1} \\mathrm{i}^{r+s+1} \\binom{r+s}{s} \\binom{\\frac{r-1}{2}}{m-1} \\binom{\\frac{s-1}{2}}{n-1} \\cdot\\ldots \\\\\n\t\t\t\\ldots\\cdot\\binom{m+n-1}{m+n-1-\\frac{i+j-r-s}{2}} \\binom{i+j-r-s}{i-r}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tFirstly, the restriction of the range in $r,s$ comes from the last binomial coefficient, because clearly the lower entry needs to be positive, and the upper one needs to be larger than the lower one. Also, like $i+j$, we see that $r+s$ has to be odd for the term to contribute to the (real part of the) sum. 
From this we can read off the formula for the coefficients claimed in the proposition.\n\t\n\tWe use $r+s\\equiv 1\\bmod{2}$ to split the sum into ($r=2p$ even, $s=2q+1$ odd) and ($r=2p+1$ odd, $s=2q$ even) for $p,q\\in\\bbN_0$,\n\t%\n\t\\begin{align}\n\t\tc^{m,n}_{i,j}\n\t\t&\\!\\begin{multlined}[t][\\linewidth-\\mathindent-\\widthof{$c^{m,n}_{i,j}$}-\\multlinegap]\n\t\t\t=\\sum_{p=0}^{\\floor{\\frac{i}{2}}} \\sum_{q=0}^{\\floor{\\frac{j-1}{2}}} (-1)^{m+n+p+q} \\binom{2p+2q+1}{2q+1} \\binom{p-\\frac{1}{2}}{m-1} \\binom{q}{n-1} \\cdot\\ldots \\\\\n\t\t\t\\shoveright{\\ldots\\cdot\\binom{m+n-1}{m+n+p+q-\\frac{i+j+1}{2}} \\binom{i+j-2p-2q-1}{i-2p} +\\ldots}\\\\\n\t\t\t\\shoveleft{\\mathrel{\\phantom{=}} \\ldots+ \\sum_{p=0}^{\\floor{\\frac{i-1}{2}}} \\sum_{q=0}^{\\floor{\\frac{j}{2}}} (-1)^{m+n+p+q} \\binom{2p+2q+1}{2q} \\binom{p}{m-1} \\cdot \\ldots}\\\\\n\t\t\t\\ldots\\cdot\\binom{q-\\frac{1}{2}}{n-1} \\binom{m+n-1}{m+n+p+q-\\frac{i+j+1}{2}} \\binom{i+j-2p-2q-1}{i-2p-1}.\n\t\t\\end{multlined}\n\t\\end{align}\n\t%\n\tNow, if $i\\le 2(m-1)$, we see that $p0$ and $N\\in\\bbN$, the function $f_N$ --- reconstructed from only the $N$ largest ridgelet coefficients of $f$ --- satisfies\n\t%\n\t\\begin{align}\\label{eq:intro_approx_f}\n\t\t\\norm{f-f_N}_{L^2} \\le C_\\delta N^{-\\frac{t}{d}+\\delta},\n\t\\end{align}\n\t%\n\twhich (up to $\\delta$) is the best theoretically possible rate.\n\t\n\tAdditionally, if $u$ is the solution to \\eqref{eq:LinTrans} with the $f$ from above, then, for $\\kappa$ smooth enough, the reconstruction $u_N$ from the $N$ largest ridgelet coefficients of $u$ similarly satisfies (note the different norm)\n\t%\n\t\\begin{align}\\label{eq:intro_approx_u}\n\t\t\\norm{u-u_N}_{\\Hs} \\le C'_\\delta N^{-\\frac{t}{d}+\\delta}.\n\t\\end{align}\n\\end{theorem}\n\nAs hinted at in the beginning, \\eqref{eq:intro_approx_f} was already shown in \\cite{mutilated} for the original ridgelet construction of \\cite{Can98}.
However, by its very construction, that frame cannot be incorporated into the kind of CDD-scheme we want to achieve. In particular, the frame elements of \\cite{mutilated} have unbounded support (which is made mathematically feasible by restricting the domain to $\\Omega=[0,1]^d$), and therefore, the necessary sparsity of the matrix that \\ref{itm:ingr:3} alludes to is impossible to achieve.\n\nIn contrast, \\cite{grohs1} constructs a frame for the full $L^2(\\bbR^d)$ (resp. $\\Hs(\\bbR^d)$), but this necessitates frame elements $\\varphi_\\lambda$ which are intrinsically in $L^2(\\bbR^d)$ themselves, and thus, have much more localised support. The price for this is that we need the full grid $\\bbZ^d$ (under a certain transformation) of translations to cover all of $\\bbR^d$ with our frame elements, while \\cite{Can98} is able to make do with a one-dimensional grid. Since the proof involves counting large coefficients (in some sense), these additional $d-1$ dimensions make the proof of our result substantially more involved and require some fairly delicate estimates to work out. In this respect, we believe that the auxiliary estimates (\\autoref{th:Imn_est} and \\autoref{cor:Imn_higher_dim}) we have proved for this purpose are of independent interest due to their increased strength compared to previous results (see \\autoref{rem:grafakos}).\n\nFinally, for the CDD-machinery to work, we need approximation estimates in the norm of the Hilbert space $\\CH=\\Hs$ in which the solution lives, compare \\eqref{eq:intro_approx_u}. This norm corresponds to multiplying the coefficient sequence (element-wise) with a growing weight (namely the ${\\mathbf{W}}$ from \\ref{itm:ingr:2}), which further complicates matters.\n\nHowever, once \\autoref{th:approx_intro} is proved, we will have shown the following, compare \\cite[Cor. 6.1, Thm. 6.2]{compress}\\footnote{Note that the requirements of \\cite[Thm.
6.2]{compress} are stated erroneously, in the sense that the decay of the $f_i$ needs to be global (compare \\eqref{eq:intro_f_glob_decay}) and not just across the interfaces of the hyperplanes.}. For a more detailed account of combining ingredients \\ref{itm:ingr:1}--\\ref{itm:ingr:4} into the result below, we refer to \\cite[Thm. 7.1.1]{my_thesis}.\n\n\\begin{theorem}\\label{th:main_result}\n\tFor arbitrary $\\vec s\\in{\\mathbb{S}^{d-1}}$, consider \\eqref{eq:LinTrans} with right-hand side $f$ that is in $H^t$ apart from singularities across hyperplanes and satisfies the decay condition\n\t%\n\t\\begin{align}\n\t\t\\abs{f(\\vec x)} \\le \\frac{C_m}{\\reg{\\vec x}^m}\n\t\\end{align}\n\t%\n\tfor all $m\\in\\bbN$ (which is possible for compact support or exponential decay, for example).\n\t\n\tAssuming that the absorption coefficient satisfies $\\kappa\\ge\\gamma>0$ and $\\kappa\\in H^{4(t+d+1)}$, an approximation $u_\\varepsilon$ to the solution of $Au=f$ satisfying\n\t%\n\t\\begin{align}\n\t\t\\norm{u-u_\\varepsilon}_{\\Hs} \\le \\varepsilon\n\t\\end{align}\n\t%\n\tcan be found with the help of a numerically feasible ridgelet-based algorithm, such that, for arbitrary $\\sigma<\\frac td$ (and ignoring quadrature cost),\n\t%\n\t\\begin{align}\n\t\t\\#\\curly*{\\text{arithmetic operations necessary to compute $u_\\varepsilon$}}\\lesssim\\varepsilon^{-\\frac 1\\sigma}.\n\t\\end{align}\n\\end{theorem}\n\n\\begin{remark}\\label{rem:sing_kappa}\n\tAs a matter of fact, in \\cite[Sec. 7.2]{my_thesis}, we show that \\autoref{th:main_result} can be extended to $\\kappa\\in H^{4(t+d+1)}$ that is also allowed to be singular across hyperplanes\\footnote{As long as any potential singularities of $f$ lie in hyperplanes that are \\emph{parallel} to the singularity in $\\kappa$.}, which is somewhat surprising, since \\ref{itm:ingr:3} depends crucially on the smoothness of $\\kappa$.
In a nutshell, one shifts the singularity from $\\kappa$ to the right-hand side $f$, where it is harmless (see above), which is possible mainly due to having an explicit formula for the solution of \\eqref{eq:LinTrans}.\n\\end{remark}\n\n\\subsection{Impact}\\label{ssec:impact}\n\nAlthough \\eqref{eq:LinTrans} is quite simple, having a highly efficient solver for such an equation opens the door to efficiently solving more complicated equations like the radiative transport equation (RTE),\n\\begin{equation}\\label{eq:RTE}\n\tB u:=\\vec s \\cdot \\nabla u(\\vec x, \\vec s) + \\beta(\\vec x) u(\\vec x,\\vec s) = f(\\vec x) + \\sigma(\\vec x) \\int_{\\bbS^{d-1}} K(\\vec s, \\vec s{}') u(\\vec x,\\vec s{}') \\d \\vec s{}',\n\\end{equation}\nwhich couples the different directions $\\vec s, \\vec s{}'$; see e.g. \\cite[Sec. 9.5]{modest2013radiative} for an introduction, resp. \\cite{appl_RTE,nummeth_RTE,RTE_schwab,RTE_schwab2} for examples of applications of \\eqref{eq:RTE} and existing methods to solve it.\n\nThere are several ways to utilise a solver for \\eqref{eq:LinTrans} to solve \\eqref{eq:RTE}. If we neglect the scattering term for the moment, the ridgelet-based solver we develop could be used to solve \\eqref{eq:RTE} by either tensor product or collocation methods in $\\vec s$ (similar to techniques used in \\cite{grella}, where one can additionally make use of the multiscale structure of $\\Phi$ to alleviate the curse of dimensionality by balancing resolution in angle with resolution in space).\n\nOne possibility to reintroduce the scattering term is via an iterative scheme --- for example by evaluating the integral for the previous iterate and adding the result to the right-hand side.
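To make the structure of this ``source iteration'' concrete, the following minimal sketch implements it for a one-dimensional toy model with only the two directions $\vec s=\pm 1$ and an isotropic kernel $K\equiv\frac12$. The upwind finite-difference sweep is merely a stand-in for an actual solver of \eqref{eq:LinTrans}; all names, parameters and discretisation choices here are illustrative assumptions, not part of the ridgelet implementation discussed below.

```python
import numpy as np

def sweep(rhs, beta, s, dx):
    """Upwind solve of s*u' + beta*u = rhs with zero inflow (toy stand-in for A^{-1})."""
    u = np.zeros_like(rhs)
    idx = range(1, len(rhs)) if s > 0 else range(len(rhs) - 2, -1, -1)
    for i in idx:
        j = i - 1 if s > 0 else i + 1
        # discrete equation: (u[i] - u[j]) / dx + beta[i] * u[i] = rhs[i]
        u[i] = (u[j] / dx + rhs[i]) / (1.0 / dx + beta[i])
    return u

def source_iteration(f, beta, sigma, dx, tol=1e-12, max_iter=500):
    """Iterate u_{k+1} = A^{-1}(f + sigma * K[u_k]) for directions s = +/-1, K = 1/2."""
    up = np.zeros_like(f)
    um = np.zeros_like(f)
    for _ in range(max_iter):
        scat = 0.5 * sigma * (up + um)       # scattering integral of the previous iterate
        up_new = sweep(f + scat, beta, +1, dx)
        um_new = sweep(f + scat, beta, -1, dx)
        err = max(np.abs(up_new - up).max(), np.abs(um_new - um).max())
        up, um = up_new, um_new
        if err < tol:
            break
    return up, um
```

As long as the scattering is subcritical (e.g. $\sigma<\beta$), the fixed-point map is a contraction and the iteration converges geometrically; in the setting of this paper, each sweep would be replaced by the ridgelet solver for \eqref{eq:LinTrans}.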
We refer to \\cite{FFT-paper}, where an \\ttt{FFT}-based ridgelet discretisation built on this ``source iteration'' has been implemented.\n\nOne crucial aspect in these procedures is that solutions for different $\\vec s$ can be added together easily, which is satisfied by our construction (since the ridgelets achieve optimal approximation for all directions $\\vec s$ simultaneously!). While we have already mentioned that FE methods with uniform (or even adaptive) refinement do not work well in this context, one might have the idea to adapt the FEM mesh anisotropically. The problem with this is that the meshes for different directions $\\vec s \\neq \\vec s{}'$ would have to be combined somehow, making cumbersome interpolation between such meshes necessary.\n\nConsidering the work already carried out in \\cite{grohs1,compress}, this paper completes the picture regarding the necessary ingredients for developing a CDD-scheme for \\eqref{eq:LinTrans}. The result of this --- \\autoref{th:main_result} --- is very strong: complexity here is measured in terms of arithmetic operations to be carried out by a processor and the solution is even allowed to possess singularities along lines (resp. hyperplanes in higher dimensions) --- for the right-hand side $f$, as well as the absorption coefficient $\\kappa$ (see \\autoref{rem:sing_kappa}).\n\nTo illustrate the strength of these results, consider a function $f$ that is in $H^t$ apart from a finite number of line singularities (in arbitrary directions) in two dimensions. Then, the approximation error of using just $N$ ridgelets is $\\CO(N^{-\\frac t2})$ and the number of flops to find these coefficients is of order $\\CO(N)$.
For functions with medium to high Sobolev regularity (apart from the line singularities), this approximation rate represents an improvement of many orders of magnitude over wavelet or FE methods, with respective $N$-term approximation rates of $\\CO(N^{-\\frac 12})$ and $\\CO(N^{-\\frac 14})$, irrespective of the magnitude of $t$. In terms of complexity, the advantage is greater still because the linear systems for other methods cannot usually be solved in linear time.\n\nOn the other hand, the convergence results are confined to the linear advection equation \\eqref{eq:LinTrans} and our analysis assumes that $\\vec x$ belongs to the full space $\\bbR^d$. The latter fact poses no problem if for instance the source term $f$ is compactly supported, but in many applications one needs to restrict $\\vec x$ to a finite domain $D\\subseteq \\bbR^d$ and impose inflow boundary conditions. The efficient incorporation of boundary conditions will require the construction of ridgelet frames on finite domains, which is the subject of future work\\footnote{To be more precise, incorporation of inflow boundary conditions is possible with the code developed in \\cite{FFT-paper} but a rigorous analysis is still lacking.}. With such a construction at hand, the theoretical analysis carried out in this paper would essentially go through also for finite domains. In this regard we mention that, very recently, shearlet frames were successfully constructed on domains (\\cite{shearlet-domains}), raising the hope that this approach can be transferred to the closely related ridgelets.\n\n\\subsection{Outline}\n\nThe outline of the paper is as follows. Below, we wrap up the section by briefly introducing the most important notational conventions we will use throughout this paper. In \\autoref{sec:prelim}, we cover some crucial estimates, the most important properties of the advection equation \\eqref{eq:LinTrans}, as well as the model class of ``mutilated'' Sobolev functions.
Furthermore, we recall the ridgelet construction of \\cite{grohs1} and a few classical results we will need later on.\n\n\\autoref{sec:benchmark} deals with a minimalistic introduction to $N$-term approximation and how one can determine the best theoretically possible approximation rate of \\emph{any} discretisation for a given class of functions $\\FC$.\n\nThe core of the paper is contained in \\autoref{sec:approx}; aside from the differences in the ridgelet constructions (see discussion after \\autoref{th:approx_intro}) and the different techniques necessary to treat them, we are able to follow the structure of \\cite{mutilated} relatively closely --- establishing the localisation of the ridgelet coefficients for a function cut off at a hyperplane first in angle in \\autoref{sec:loc_angle}, and then in space in \\autoref{sec:loc_space}, before we proceed to the proof of the main result, \\autoref{th:approx}, in \\autoref{ssec:proof_approx}.\n\nThe main part of the paper wraps up with the conclusion in \\autoref{sec:conclusion}, while the postponed proof of the crucial estimates in \\autoref{ssec:key_est} follows in \\autoref{sec:int_est}.\n\n\n\\subsection{Notation}\n\nThis subsection lists the most important conventions we will use throughout this paper. As usual, we conclude proofs by $\\Box$; additionally, we mark the end of definitions and remarks by $\\triangle$.\n\nThe letter $\\bbN$ denotes the natural numbers \\emph{without zero}, while $\\bbN_0$ includes it. Similarly, $\\bbR^+:=(0,\\infty)$, while $\\bbR^+_0:=[0,\\infty)$.\n\nWe let $B_X(x,r):=\\{x'\\in X:\\, \\mathrm{dist}_X(x,x')<r\\}$ denote the ball of radius $r$ around $x$. For two quantities $A$ and $B$ depending on a parameter $y$, we use the notation\n\\begin{align*}\n\tA(y)\\lesssim B(y) \\quad :\\Longleftrightarrow \\quad \\exists c>0:\\, A(y)\\le c B(y),\n\\end{align*}\nwhere the constant has to be independent of $y$.\nSimilarly, $A\\sim B$ denotes the case that both $A\\lesssim B$ and $B\\lesssim A$ hold.\n\nFor vector variables (and occasionally multi-indices), $i$ primes ($i=1,2,3$) will always indicate the last $d-i$ components of that vector, i.e.
$\\vec k{}''=(k_3,k_4,\\ldots,k_d)^\\top\\in\\bbR^{d-2}$.\n\nSquare brackets around a vector --- i.e. $[\\vec e_1]$ --- denote the linear span, while $[\\vec e_1]^\\bot$ denotes its orthogonal complement. The orthogonal projection along $\\vec n$ is denoted by $\\CP_{\\vec n}$.\n\nFinally, we let $H(y):=\\mathbbm{1}_{\\bbR^+}(y)$ denote the Heaviside step function, and define the \\emph{regularised absolute value} $\\langle\\vec x\\rangle:=\\sqrt{1+|\\vec x|^2}$ (to avoid problems with division by zero).\n\n\n\\section{Preparations}\\label{sec:prelim}\n\nIn this section, we set up the foundations on which the rest of the paper will be built. In \\autoref{ssec:key_est}, we introduce a crucial estimate that will be necessary later on (but whose proof we postpone to \\autoref{sec:int_est}), while in \\autoref{ssec:prop_advec}, we deal with the properties of the advection equation and the model class of solutions we will consider. In \\autoref{ssec:ridgeframes}, we briefly recall the ridgelet construction of \\cite{grohs1} in the necessary detail to prove our results, and we wrap up this section with some classical results about interpolation in \\autoref{ssec:interp}.\n\n\\subsection{An Integral Estimate}\\label{ssec:key_est}\n\nAs one of the key tools for the main proof, we introduce the following integral inequality, the proof of which we postpone to \\autoref{sec:int_est}.\n\n\\begin{theorem}[{\\autoref{app:th:Imn_est}}]\\label{th:Imn_est}\n\tFor $m,n\\in\\bbN$, $a\\in\\bbR^+_0$, $b\\in\\bbR$, $c,d\\in\\bbR^+$, we have\n\t%\n\t\\begin{align}\n\t\tI_{m,n}&:=\\int_{-\\infty}^{\\infty} \\frac{1}{\\parens{a^2(x-b)^2+c^2}^m} \\frac{1}{\\parens{x^2+d^2}^n} \\d x \\notag \\\\\n\t\t&\\phantom{:}= \\frac{\\pi}{\\parens{a^2b^2+(ad+c)^2}^{m+n-1}} \\frac{1}{c^{2m-1}} \\frac{1}{d^{2n-1}} \\sum_{\\substack{i+j+2k=2(m+n)-3\\\\i\\ge 2m-1 \\, \\lor\\, j\\ge 2n-1}} c^{m,n}_{i,j} c^{i} (ad)^{j} (ab)^{2k}\\notag \\\\\n\t\n\t\t&\\phantom{:}\\lesssim 
\\frac{a^{2n-1}}{\\parens{a^2b^2+a^2d^2+c^2}^n} \\frac{1}{c^{2m-1}} + \\frac{1}{\\parens{a^2b^2+a^2d^2+c^2}^m} \\frac{1}{d^{2n-1}}. \\label{eq:Imn_est}\n\t\\end{align}\n\t%\n\tFor an explicit representation of the constants $c^{m,n}_{i,j}$, as well as the generating functions of $I_{m,n}$ and $c^{m,n}_{i,j}$, see \\autoref{sec:int_est}.\n\\end{theorem}\n\nFor higher dimensions, we also record the following corollary of \\autoref{th:Imn_est}, which is proved in \\autoref{sec:int_est} as well.\n\n\\begin{corollary}[{\\autoref{app:cor:Imn_higher_dim}}]\\label{cor:Imn_higher_dim}\n\tFor $m,n\\ge\\ceil{\\frac k2}$ and $c,d>0$, we have the following inequality,\n\t%\n\t\\begin{align}\n\t\t\\int_{\\bbR^d} \\frac{1}{\\parens*{|\\vec x - \\vec t|^2 +c^2}^m} \\frac{1}{\\parens*{|\\vec x|^2+d^2}^n} \\d \\vec x \\lesssim \\frac{1}{\\parens*{|\\vec t|^2+c^2+d^2}^n}\\frac{1}{c^{2m-k-1}}+\\frac{1}{\\parens*{|\\vec t|^2+c^2+d^2}^m}\\frac{1}{d^{2n-k-1}}.\n\t\\end{align}\n\\end{corollary}\n\n\\subsection{The Advection Equation $\\&$ Mutilated Functions}\\label{ssec:prop_advec}\n\nTo rotate the transport direction $\\vec s$ in \\eqref{eq:LinTrans} --- resp. the normals of the hyperplanes appearing in \\autoref{def:Ht_except_hyp} --- into a canonical unit vector, we need the following rotation matrices.\n\n\\begin{definition}\\label{def:rot_Rs}\n\tFor any vector $\\vec s\\in{\\mathbb{S}^{d-1}}$, let $R_{\\vec s}$ be a matrix that maps $\\vec s$ to $\\vec e_1=(1,0,\\ldots)^\\top$, and let $R_{\\vec s}^{-1}=R_{\\vec s}^\\top$ be its inverse. This rotation is not unique in dimensions $d\\ge3$, however, the ambiguity will be irrelevant.
We also define the respective pullbacks for $f\\in L^2(\\bbR^d)$ by\n\t%\n\t\\begin{align}\n\t\t\\rho_{\\vec s} f(\\vec x):= f(R_{\\vec s}^{-1} \\vec x), \\qquad \\rho_{\\vec s}^{-1} f(\\vec x) := f(R_{\\vec s} \\vec x),\n\t\\end{align}\n\t%\n\tthus (for continuous $f$), $\\rho_{\\vec s} f(\\vec e_1) =f(\\vec s), \\; \\rho_{\\vec s}^{-1} f(\\vec s) = f(\\vec e_1)$.\n\\end{definition}\n\n\\begin{remark}\n\tUsing these pullbacks, it is easy to see that\n\t%\n\t\\begin{align}\\label{eq:trans_dir_e1}\n\t\t\\vec s \\cdot \\nabla u + \\kappa u= f\\quad \\Longleftrightarrow \\quad \\vec e_1 \\cdot \\nabla \\rho_{\\vec s} u + \\rho_{\\vec s} \\kappa \\rho_{\\vec s} u= \\rho_{\\vec s} f.\n\t\\end{align}\n\t%\n\tThis makes an explicit calculation possible (see \\cite[Sec. 1]{compress}), namely that with\n\t%\n\t\\begin{align}\\label{eq:lintrans_expl_sol}\n\t\ty(x_1,\\vec x{}') := \\mathrm{e}^{-K(x_1,\\vec x{}')} \\int_{-\\infty}^{x_1} \\rho_{\\vec s} f (t,\\vec x{}') \\mathrm{e}^{K(t,\\vec x{}')} \\d t , \\quad \\text{where } \\quad K(t,\\vec x{}')= \\int_0^t \\rho_{\\vec s} \\kappa (r,\\vec x{}') \\d r,\n\t\\end{align}\n\t%\n\tsetting $u:=\\rho_{\\vec s}^{-1}y$ yields an explicit solution to \\eqref{eq:LinTrans} for arbitrary $f\\in L^2(\\bbR^d)$, as long as $0<\\kappa_0 \\le \\kappa(\\vec x) < \\infty$ almost everywhere. With the $y$ from \\eqref{eq:lintrans_expl_sol}, we define the solution operator for $A$ as follows,\n\t%\n\t\\begin{align}\n\t\t\\CS[f]:=\\rho_{\\vec s}^{-1}y, \\qquad \\CS\\colon L^2\\to \\Hs.\\tag*{\\qedhere}\n\t\\end{align}\n\\end{remark}\n\nThe following proposition shows that $\\CS$ is bounded from $H^t$ to itself.\n\n\\begin{proposition}[{\\cite[Thm. 2.2]{compress}, \\cite[Thm. 3.3.3]{my_thesis}}]\\label{prop:sol_op_bounded}\n\tFor $\\kappa\\in H^{\\ceil{t}+\\frac d2}(\\bbR^d)$ with $t\\ge 0$, such that $\\kappa\\ge \\gamma>0$, the operator $\\CS$ is bounded (at least) from $H^t\\to H^t$.
Furthermore,\n\t%\n\t\\begin{align}\\label{eq:Hs_elliptic}\n\t\t\\norm{\\CS[f]}_{\\Hs}\\sim \\norm{Au}_{L^2}=\\norm{f}_{L^2}.\n\t\\end{align}\n\\end{proposition}\n\nHowever, in this paper we are mainly interested in functions of the following form.\n\n\\begin{definition}\\label{def:Ht_except_hyp}\n\tWe say that a function $f$ is in $H^t$ \\emph{except for $N$ hyperplanes} if there are $N$ hyperplanes $h_i$ with corresponding (normalised) orthogonal vectors $\\vec n_i$ and offsets $v_i$, as well as functions $f_0,f_1,\\ldots, f_N\\in H^t(\\bbR^d)$ such that\n\t%\n\t\\begin{align}\\label{eq:Ht_except_hyp}\n\t\tf(\\vec x)=f_0(\\vec x)+\\sum_{i=1}^N f_i(\\vec x) H(\\vec x \\cdot \\vec n_i - v_i),\n\t\\end{align}\n\t%\n\twhere, as mentioned, $H$ is the Heaviside step function. To abbreviate the concept notationally, we will sometimes write $f\\in H^t(\\bbR^d\\setminus\\{(h_i)_{i=1}^N\\})$, or $f\\in H^t(\\bbR^d\\setminus\\{h_i\\})$ for short. This is justified in the sense that --- by factoring with the right choice of equivalence relation --- these are in fact Hilbert spaces again (see \\cite[Rem. 3.2.2]{my_thesis}).\n\\end{definition}\n\nThe analysis of the ridgelet coefficients of such a function requires calculating the Fourier transform of mutilated Sobolev functions, which --- for each term --- splits into a regular and a singular contribution (from the function and the cut-off, respectively). This follows immediately from \\cite[Eq. 
(3.3)]{mutilated}, but we formulate it in the version we will use later on.\n\n\\begin{lemma}\\label{lem:fourier_decomp}\n\tThe Fourier transform of a function $f(\\vec x)=H(\\vec x\\cdot \\vec n-v) g(\\vec x)$, where $g\\in H^t$ and $t> \\frac 12$, in terms of a singularity aligned with $\\vec e_1$ is\n\t%\n\t\\begin{align}\\label{eq:sing_fourier_rot}\n\t\t\\hat f(\\Rn^{-1} \\vec \\xi)\n\t\t&= - \\frac{\\mathrm{i}}{2\\pi\\abs*{\\vec\\xi}^2} \\vec \\xi \\cdot \\CF[H(x_1-v)\\nabla g(\\Rn^{-1}\\vec x)](\\vec \\xi) - \\frac{\\mathrm{i}\\, \\xi_1}{2\\pi\\abs*{\\vec\\xi}^2} \\wh{g\\normalr|_{h}}(\\CP_{\\vec e_1}\\vec \\xi),\n\t\\end{align}\n\t%\n\twhere $g\\normalr|_{h}\\in H^{t-\\frac 12}(\\bbR^{d-1})$ is the restriction to the hyperplane $h=\\set{\\vec x\\in\\bbR^d}{\\vec x\\cdot\\vec n=v}$. Furthermore, $\\CP_{\\vec n}$ is the orthogonal projection along $\\vec n$ and we identify $\\CP_{\\vec n}\\vec \\xi$ with $\\vec \\xi{}'_h\\in \\bbR^{d-1}$, while $\\wh{g\\normalr|_{h}}$ is shorthand for $\\CF_{\\bbR^{d-1}}[g\\normalr|_{h}]$.\n\\end{lemma}\n\nSince we are dealing with half-spaces due to the cut-off with the Heaviside function, the following extension result (see e.g. \\cite[Sec. 4.5]{triebel2}) will be useful.\n\n\\begin{theorem}\\label{th:half_space_ext}\n\tFor a function $f\\in H^t$, the restriction to the half-space $\\set{\\vec x\\in\\bbR^d}{x_1>0}$ can be extended to the full space in a bounded fashion, i.e. 
there is $g\\in H^t$ such that\n\t%\n\t\\begin{align}\n\t\t\\norm{g}_{H^t(\\bbR^d)} \\lesssim \\norm{f}_{H^t(\\set{\\vec x\\in\\bbR^d}{x_1>0})} \\qquad \\text{as well as} \\qquad f(\\vec x) = g(\\vec x) \\quad \\forall \\vec x \\in \\set{\\vec x\\in\\bbR^d}{x_1>0}.\n\t\\end{align}\n\t%\n\tObviously, by rotating and translating $f$ (which leaves the norms invariant), this result also holds for half-spaces separated by arbitrary hyperplanes.\n\\end{theorem}\n\n\n\\begin{remark}\\label{rem:hyp_normals_neg}\n\tUsing \\autoref{th:half_space_ext}, we see that, for arbitrary $\\vec s$, we can restrict the representation of \\eqref{eq:Ht_except_hyp} to hyperplanes satisfying $\\vec s \\cdot \\vec n_i \\le 0$, since, if $\\vec s \\cdot \\vec n_i>0$, we can extend the mutilated $f_i$ to the full space --- absorbing it into $f_0$ --- and subtract its extension on the other side (hence, the vector $\\vec n_i$ in the Heaviside function flips sign).\n\\end{remark}\n\nThe following result extends the boundedness of $\\CS$ from \\autoref{prop:sol_op_bounded} to the spaces $H^t(\\bbR^d\\setminus\\{h_i\\})$, which can be seen as confirmation that this class of functions is well-chosen for the behaviour of \\eqref{eq:LinTrans}.\n\n\\begin{proposition}\\label{prop:sol_smooth_except_hyp}\n\tFor $f\\in H^t(\\bbR^d\\setminus\\{h_i\\})$, the solution $u=\\CS[f]\\in H^t(\\bbR^d\\setminus\\{h_i\\})$ is of the same form \\eqref{eq:Ht_except_hyp}, and furthermore, $\\norm{u_i}_{H^t} \\lesssim \\norm{f_i}_{H^t}$ for $i=1,\\ldots,N$, as well as $\\norm{u_0}_{H^t}\\lesssim \\sum_{i=0}^N \\norm{f_i}_{H^t}$. 
In other words, the solution operator $\\CS$ is bounded on this space,\n\t%\n\t\\begin{align}\n\t\t\\norm{\\CS}_{H^t(\\bbR^d\\setminus\\{h_i\\})\\to H^t(\\bbR^d\\setminus\\{h_i\\})}<\\infty.\n\t\\end{align}\n\\end{proposition}\n\n\\begin{proof}\n\tWe begin by noting that, since the differential equation \\eqref{eq:LinTrans} is linear, it suffices to deal with one term $f(\\vec x) H(\\vec x \\cdot \\vec n-v)$, and the rest follows via superposition. Furthermore, using \\eqref{eq:trans_dir_e1} and \\autoref{rem:hyp_normals_neg}, we can assume without loss of generality that $\\vec s=\\vec e_1$, and that $(\\vec n)_1\\le 0$. \n\t\n\tAssuming $(\\vec n)_1<0$ (i.e.~a strict inequality) for the moment, we use \\eqref{eq:lintrans_expl_sol} with this right-hand side, and observe that, if $x_1$ is before the cut-off, the Heaviside function does not matter, and for everything beyond that, the integral goes just until the interface of the hyperplane,\n\t%\n\t\\begin{align}\n\t\tu(\\vec x)\n\t\t&= \\mathrm{e}^{-K(\\vec x)} \\int_{-\\infty}^{x_1} f (t,\\vec x{}') H\\parens*{(t,\\vec x{}')^\\top \\cdot \\vec n -v} \\mathrm{e}^{K(t,\\vec x{}')} \\d t\\\\\n\t\t&=H(\\vec x\\cdot \\vec n-v) \\mathrm{e}^{-K(\\vec x)} \\int_{-\\infty}^{x_1} f (t,\\vec x{}') \\mathrm{e}^{K(t,\\vec x{}')} \\d t +H(v-\\vec x\\cdot \\vec n) \\mathrm{e}^{-K(\\vec x)} \\int_{-\\infty}^{\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1}} f (t,\\vec x{}') \\mathrm{e}^{K(t,\\vec x{}')} \\d t\\\\\n\t\t&=H(\\vec x\\cdot \\vec n-v) \\CS[f](\\vec x) +H(v-\\vec x\\cdot \\vec n) \\CS[f]\\binom{\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1}}{\\vec x{}'} \\mathrm{e}^{K\\parens*{(\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1},\\vec x{}')^\\top}} \\mathrm{e}^{-K(\\vec x)}.\n\t\\end{align}\n\t%\n\tDue to \\autoref{prop:sol_op_bounded}, $\\CS[f]\\in H^t(\\bbR^d)$, with norm bounded by $\\norm{f}_{H^t}$. 
In order for the second term to appear, $v-\\vec x\\cdot \\vec n$ has to be greater than zero, and therefore (since $(\\vec n)_1<0$) we have $x_1>\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1}$. As the absorption satisfies $\\kappa\\ge\\gamma>0$, $K(\\cdot,\\vec x{}')$ is (strictly) monotonically increasing, and this implies that the term $\\mathrm{e}^{K\\parens*{(\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1},\\vec x{}')^\\top}} \\mathrm{e}^{-K(\\vec x)}$ is bounded. Since the coordinate transformation in the first component is linear, we conclude that\n\t%\n\t\\begin{align}\n\t\t\\CS[f]\\binom{\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1}}{\\vec x{}'} \\mathrm{e}^{K\\parens*{(\\frac{v-\\vec x{}' \\cdot \\vec n'}{(\\vec n)_1},\\vec x{}')^\\top}-K(\\vec x)} \\in H^t\\parens*{\\set*{\\vec x\\in\\bbR^d}{\\vec x\\cdot \\vec n-v<0}},\n\t\\end{align}\n\t%\n\tsince multiplication with a smooth enough $\\kappa$ is bounded from $H^t$ to itself (which can be shown by proving that the derivatives remain in $L^2$, followed by interpolation), and noting that the exponential function does not decrease the smoothness. 
Again by \\autoref{th:half_space_ext}, we can extend this term to $y\\in H^t(\\bbR^d)$, and thus\n\t%\n\t\\begin{align}\n\t\tu(\\vec x) &= H(\\vec x\\cdot \\vec n-v) \\parens*{\\CS[f](\\vec x)-y(\\vec x)} + y(\\vec x).\n\t\\end{align}\n\t%\n\tDue to the boundedness of $\\CS$ on $H^t$ and $\\norm{y}_{H^t} \\lesssim \\norm{f}_{H^t}$, this implies $\\norm{u}_{H^t(\\bbR^d\\setminus h)}\\lesssim \\norm{f}_{H^t(\\bbR^d\\setminus h)}$.\n\t\n\tIn the last remaining case --- that $(\\vec n)_1=0$ (i.e.~$\\vec n \\bot \\vec s$) --- the path of the integral never crosses the hyperplane, and thus $u=H(\\vec x\\cdot \\vec n-v) \\CS[f](\\vec x)$.\n\\end{proof}\n\nLastly, we discuss how decay properties transfer to the solution.\n\n\\begin{lemma}\\label{lem:decay_sol}\n\tIf $f\\in L^2$ satisfies\n\t%\n\t\\begin{align}\n\t\t\\abs{f(\\vec x)} &\\lesssim \\reg{\\vec x}^{-2n},\n\t\\end{align}\n\t%\n\tthen the solution $u=\\CS[f]$ to \\eqref{eq:LinTrans} also satisfies\n\t%\n\t\\begin{align}\n\t\t\\abs{u(\\vec x)} &\\lesssim \\reg{\\vec x}^{-2n}.\n\t\\end{align}\n\\end{lemma}\n\n\\begin{proof}\n\tUsing the solution formula from \\eqref{eq:lintrans_expl_sol} (for $\\vec s =\\vec e_1$), we calculate\n\t%\n\t\\begin{align}\n\t\t\\abs{u(\\vec x)} &\\lesssim \\int_{-\\infty}^{x_1} \\reg*{(t,\\vec x{}')^\\top}^{-2n} \\mathrm{e}^{K(t,\\vec x{}')-K(x_1,\\vec x{}')} \\d t\n\t\t\\le \\int_{-\\infty}^{x_1} \\reg*{(t,\\vec x{}')^\\top}^{-2n} \\mathrm{e}^{-\\gamma(x_1-t)} \\d t,\n\t\\end{align}\n\t%\n\tsince the strict monotonicity of $K$ (due to $\\gamma>0$) implies $K(t,\\vec x{}')-K(x_1,\\vec x{}')\\le \\gamma(t-x_1)$ for $t\\le x_1$. 
At this point, we use the fact that for arbitrary $m$,\n\t%\n\t\\begin{align}\n\t\t\\mathrm{e}^{-\\gamma y} \\lesssim \\reg{y}^{-2m}, \\qquad \\text{for} \\qquad y\\ge 0.\n\t\\end{align}\n\t%\n\tInserting this into the above and resolving the square root from $\\reg{y}=\\sqrt{1+|y|^2}$, we continue\n\t%\n\t\\begin{align}\n\t\t\\abs{u(\\vec x)}\n\t\t&\\lesssim \\int_{-\\infty}^{x_1} \\reg*{(t,\\vec x{}')^\\top}^{-2n} \\reg{x_1-t}^{-2m} \\d t \\le \\int_{-\\infty}^{\\infty} \\frac{1}{(t^2+|\\vec x{}'|^2+1)^{n}} \\frac{1}{((t-x_1)^2+1)^{m}} \\d t \\\\\n\t\t&\\lesssim \\frac{1}{(x_1^2+|\\vec x{}'|^2+2)^n} + \\frac{1}{(x_1^2+|\\vec x{}'|^2+2)^m}\\frac{1}{(|\\vec x{}'|^2+1)^{n-\\frac 12}}\n\t\t\\lesssim \\frac{1}{(|\\vec x|^2+1)^n} = \\reg{\\vec x}^{-2n},\n\t\\end{align}\n\t%\n\twhere the last line uses \\eqref{eq:Imn_est}, and $m\\ge n$ (which we are free to choose this large).\n\\end{proof}\n\n\\subsection{The Ridgelet Construction \\cite{grohs1}}\\label{ssec:ridgeframes}\n\nWe recall the definition of a ridgelet frame from \\cite{grohs1,compress}, where details on the construction can be found. 
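As a quick numerical illustration of the decay transfer shown in \autoref{lem:decay_sol} --- a sketch of our own, not part of the construction in \cite{grohs1,compress}, assuming dimension one and constant absorption $\kappa\equiv\gamma$, with all parameters chosen purely for illustration --- one can check that the exponentially weighted integral from \eqref{eq:lintrans_expl_sol} indeed preserves polynomial decay:

```python
import numpy as np

# Model case of the decay lemma in d = 1 with constant absorption kappa = gamma:
#   u(x) = int_{-inf}^{x} f(t) * exp(-gamma * (x - t)) dt,  f(t) = (1 + t^2)^(-n).
# The lemma predicts |u(x)| <~ (1 + x^2)^(-n), i.e. u * (1 + x^2)^n stays bounded.
gamma, n = 1.0, 2

def f(t):
    return (1.0 + t ** 2) ** (-n)

t = np.linspace(-40.0, 40.0, 80001)  # grid; the lower limit -inf is truncated at -40
dt = t[1] - t[0]
decay = np.exp(-gamma * dt)

u = np.zeros_like(t)
for i in range(1, len(t)):
    # incremental trapezoidal update of the exponentially weighted integral
    u[i] = u[i - 1] * decay + 0.5 * dt * (f(t[i - 1]) * decay + f(t[i]))

ratio = u * (1.0 + t ** 2) ** n  # bounded iff u decays at the claimed rate
print(f"max ratio: {ratio.max():.2f}")
```

The ratio stays of moderate size over the whole grid, even though $u$ itself varies over many orders of magnitude.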
The starting point is a (sufficiently) smooth partition of unity in frequency (constructed from a given window function),\n\\begin{align}\\label{eq:partititon_unity}\n\t\\parens*{\\psi_{j\\!\\?\\?,\\ell}}_{j\\in\\bbN_0,\\, \\ell\\in\\{0,\\ldots,L_j\\}} \\quad \\text{such that} \\quad \\sum_{j=0}^\\infty \\sum_{\\ell=0}^{L_j} \\hat \\psi_{j\\!\\?\\?,\\ell}^2 = 1,\n\\end{align}\nsupported on (approximate) polar rectangles\n\\begin{align}\\label{eq:supp_Pjl}\n\t\\mathrm{supp}\\mathop{} \\hat \\psi_{j\\!\\?\\?,\\ell} \n\t&\\subseteq P_{j\\!\\?\\?,\\ell}:= \\set[\\Big]{\n\t\\vec \\xi \\in \\bbR^d}{ 2^{j-1}<|\\vec \\xi| < 2^{j+1}, \\, \\smash{\\frac{\\vec \\xi}{|\\vec \\xi|}} \\in B_{\\mathbb{S}^{d-1}}(\\vec s_{j\\!\\?\\?,\\ell}, 2^{-j+1})},\n\\end{align}\nwhere the vectors $\\vec s_{j\\!\\?\\?,\\ell}$ are an approximately uniform sampling of points on the sphere for each scale $j$, with average distance $\\sim 2^{-j}$ and cardinality $\\#\\{\\vec s_{j\\!\\?\\?,\\ell}\\}_{\\ell} \\lesssim 2^{j(d-1)}$, see \\cite{borup} for the construction or \\cite{grohs1,compress} for more details on their relation to ridgelets.\n\n\\begin{definition}\\label{def:phi_jlk}\n\tUsing the functions $\\hat \\psi_{j\\!\\?\\?,\\ell}$, a Parseval frame for $L^2(\\bbR^d)$ is defined by \n\t%\n\t\\begin{align}\n\t\t\\varphi_{j\\!\\?\\?,\\?\\ell\\!\\?\\?,\\?\\vec k} = \\, 2^{-\\frac{j}{2}} T_{U_{j\\!\\?\\?,\\ell} \\vec k}\\,\\psi_{j\\!\\?\\?,\\ell}, \\quad\n\t\tj\\in\\bbN_0, \\, \\ell \\in \\{0,\\ldots,L_j\\}, \\, \\vec k \\in \\bbZ^d, \n\t\\end{align}\n\t%\n\twith $T$ the translation operator, $T_{\\vec y}f(\\cdot) := f(\\cdot-\\vec y)$, and $U_{j\\!\\?\\?,\\ell}:= R_\\jl^{-1} D_{2^{-j}}$, where $R_\\jl:=R_{\\vec s_{j\\!\\?\\?,\\ell}}$ is the transformation introduced at the beginning of the section, and $D_a$ dilates the first component, $D_a \\vec k := (a \\, k_1, k_2,\\ldots,k_d)^\\top$. The rotation $R_\\jl$ is arbitrary (to the extent that it is ambiguous) but fixed. 
Whenever possible, we will subsume the indices of $\\varphi$ by $\\lambda=({j\\!\\?\\?,\\?\\ell\\!\\?\\?,\\?\\vec k})$.\n\\end{definition}\n\n\\begin{assumption}\\label{assump:psi_smooth}\n\tThe window functions\n\t%\n\t\\begin{align}\\label{eq:psi_trafo}\n\t\t\\hat \\psi_{({j\\!\\?\\?,\\ell})} (\\vec \\eta):= \\hat\\psi_{j\\!\\?\\?,\\ell}(U_{j\\!\\?\\?,\\ell}^{-\\top}\\vec \\eta)\n\t\\end{align}\n\t%\n\thave bounded derivatives \\emph{independently} of $j$ and $\\ell$. Thus, for all $n$ up to an upper bound $N$ dependent on the differentiability of the window functions (or possibly for all $n\\in\\bbN$ if the window functions are $\\CC^\\infty$), we have the estimate\n\t%\n\t\\begin{align}\n\t\t\\norm*{\\hat \\psi_{({j\\!\\?\\?,\\ell})}}_{\\CC^n} \\le\\beta_{n}.\n\t\\end{align}\n\tIn \\cite[Lem. B.1]{compress}, it is shown that this assumption can be satisfied with a reasonable (and still quite flexible) choice of window functions.\n\\end{assumption}\n\n\\begin{definition}\n\tWe consider the diagonal matrix\n\t%\n\t\\begin{align}\n\t\t{\\mathbf{W}}=({\\mathbf{W}}_{\\lambda,\\lambda'})_{\\lambda,\\lambda'\\in\\Lambda} \\quad \\text{with} \\quad {\\mathbf{W}}_{\\lambda,\\lambda'}:= \\begin{cases}\n\t\t0, & \\lambda\\neq\\lambda',\\\\\n\t\t1+2^j \\abs{\\vec s\\cdot \\vec s_{j\\!\\?\\?,\\ell}}, & \\lambda=\\lambda',\n\t\\end{cases}\n\t\\end{align}\n\t%\n\twhich is the right choice of weight to make $\\Phi$ a frame for $\\Hs$ (see \\cite[Thm. 10]{grohs1}, \\cite[Thm. 
4.3]{compress}), i.e.\n\t%\n\t\\begin{align}\\label{eq:RidgeletFrame}\n\t\t\\norm{f}_{\\Hs} \\sim \\norm{{\\mathbf{W}} \\inpr{\\Phi,f}}_{\\ell^2}\n\t\\end{align}\n\\end{definition}\n\n\\subsection{Interpolation between Spaces}\\label{ssec:interp}\n\nWe recall the scale of sequence spaces necessary for our analysis, as well as some classical interpolation results in the context we will work in (the mentioned sequence spaces, Sobolev spaces, and operators between them).\n\n\\begin{definition}\n\tThe \\emph{Lorentz spaces} of sequences are defined as follows (\\cite[p. 89]{DeVore1998}),\n\t%\n\t\\begin{align}\n\t\t\\ell^p_q:=\\set[\\bigg]{(c_n)_{n\\in\\bbN}}{\\snorm{c_\\bbN}_{\\ell^p_q}:=\\parens[\\bigg]{\\sum_{n\\in\\bbN}(c^*_n)^q n^{\\frac{q}{p}-1}}^{\\frac{1}{q}}<\\infty},\n\t\\end{align}\n\t%\n\twhere $c^*_n$ is the decreasing rearrangement of $(\\abs{c_n})_{n\\in\\bbN}$, $0< p <\\infty$ and $0< q < \\infty$. For $q=\\infty$, we have\n\t%\n\t\\begin{align}\n\t\t\\ell^p_\\infty:=\\set{(c_n)_{n\\in\\bbN}}{\\snorm{c_\\bbN}_{\\ell^p_\\infty}:=\\sup_{n\\in\\bbN} c^*_n n^{\\frac 1p} <\\infty}=:\\ell^p_w,\n\t\\end{align}\n\t%\n\twhich is also called the \\emph{weak $\\ell^p$-space}. It should be noted that, technically, the norms involved are generally only quasinorms, i.e.\n\t%\n\t\\begin{align}\\label{eq:lpw_triangle_ineq}\n\t\t\\snorm{a_\\bbN+b_\\bbN}_{\\ell^p_q}\\le K_{p,q} \\parens*{\\snorm{a_\\bbN}_{\\ell^p_q}+\\snorm{b_\\bbN}_{\\ell^p_q}}.\n\t\\end{align}\n\t%\n\tAdditionally, it can be shown that $\\ell^p_p=\\ell^p$, as well as $\\ell^p_{q_1}\\subseteq \\ell^p_{q_2}$ for $q_1\\le q_2$.\n\\end{definition}\n\n\n\\begin{theorem}\\label{thm:interp}\n\tFor Sobolev spaces with $t>0$, the real interpolation with $L^2$ depending on $0<\\theta<1$ yields (\\cite[Thm. 6.4.5]{interp_sp})\n\t%\n\t\\begin{align}\\label{eq:interp_sob}\n\t\t(L^2,H^t)_{\\theta,2}=H^{t\\theta}.\n\t\\end{align}\n\t%\n\tOn the sequence space side, we have that (\\cite[Thm. 
5.3.1]{interp_sp})\n\t%\n\t\\begin{align}\\label{eq:interp_seq_sp}\n\t\t(\\ell^2,\\ell^p_w)_{\\theta,2}=\\ell^{\\bar{p}}_2, \\quad \\text{where} \\quad \\bar{p}=\\parens[\\Big]{\\frac{1-\\theta}{2} + \\frac{\\theta}{p}}^{-1}.\n\t\\end{align}\n\t%\n\tFinally, for compatible couples\\footnote{I.e. both spaces are linear subspaces of a larger Hausdorff topological vector space and the embeddings are continuous, c.f. \\cite[Def. 3.1.1]{interp_op}} $(X_0,X_1)$ and $(Y_0,Y_1)$ and an admissible linear operator $T$, i.e.\n\t%\n\t\\begin{align}\n\t\tT:X_0+X_1 \\to Y_0+Y_1, \\quad \\text{such that} \\quad T\\bigr|_{X_i}:X_i\\to Y_i \\quad \\text{and} \\quad \\norm{T}_{X_i\\to Y_i}<\\infty \\quad \\text{for}\\quad i=0,1,\n\t\\end{align}\n\t%\n\tit holds that (\\cite[Thm. 3.1.2]{interp_sp}, \\cite[Thm. 5.1.12]{interp_op}), for all $\\theta\\in(0,1)$,\n\t%\n\t\\begin{align}\\label{eq:interp_op}\n\t\t\\norm{T}_{(X_0,X_1)_{\\theta,q}\\to(Y_0,Y_1)_{\\theta,q}}< \\infty.\n\t\\end{align}\n\\end{theorem}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\n\\input{sections\/intro.tex}\n\n\\section{Related Work}\n\\input{sections\/related_work.tex}\n\n\\section{Challenges in Multi-Object Tracking}\\label{sec:challenges}\n\\input{sections\/mot_challenges}\n\n\\section{The MOTCOM Metrics}\\label{sec:motcom_metrics}\n\\input{sections\/mot_task.tex}\n\\input{sections\/occlusion_method.tex}\n\\input{sections\/motion_method.tex}\n\\input{sections\/appearance_method.tex}\n\\input{sections\/motcom_method.tex}\n\n\\section{Evaluation}\\label{sec:evaluation}\n\\input{sections\/evaluation.tex}\n\n\\section{Results}\\label{sec:results}\n\\input{sections\/results.tex}\n\n\\section{Discussion}\\label{sec:discussion}\n\\input{sections\/discussion.tex}\n\n\\section{Conclusion}\n\\input{sections\/conclusion.tex}\n\n\\subsubsection*{Acknowledgements}\nThis work has been funded by the Independent Research Fund Denmark under case number 
9131-00128B.\n\n\\clearpage\n\\bibliographystyle{splncs04}\n\n\\subsubsection{Visual Similarity.}\nThe visual appearance of objects can vary widely depending on the type of object and type of scene.\nAppearance is especially important when tracking is lost, for example, due to occlusion, and re-identification is a common tool for associating broken tracklets. \nThe complexity of this process depends on the visual similarity between objects, but intra-object similarity also plays a role.\nAs an object moves through a scene, its appearance can change from the perspective of the viewer.\nThe object may turn around, increase its distance to the camera, or the illumination conditions may change. \nAside from the visual cues, the object's position is also critical.\nIntuitively, it becomes less likely to confuse objects as the spatial distance between them increases.\n\\subsection{Visual Similarity Metric}\nIn order to define a metric that links an object's visual appearance with tracking complexity, we investigate how similar an object in one frame is compared to itself and other objects in the next frame.\nTwo objects may look similar, but they cannot occupy the same spatial position.\nTherefore, we propose a spatial-aware visual similarity metric called VCOM.\n\n\\begin{figure}[!b]\n \\centering\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/VCOM_example.pdf}\n \\caption{}\n \\label{fig:blurred}\n \\end{subfigure}\n \\qquad \\quad\n \\begin{subfigure}[b]{0.35\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/distance_func_drawing.pdf}\n \\caption{}\n \\label{fig:distance}\n \\end{subfigure}\n \\caption{a) Example showing three images with the object in focus and a blurred background produced from a frame from the MOT17-05 sequence. b) The distance ratio, $r$, affects the FDR when other objects are in the proximity of the target. 
The red dot is the nearest neighbor, the green dot is the true positive match, and the remaining dots are other objects.}\n\\end{figure}\n\nVCOM consists of a preprocessing, feature extraction, and distance evaluation step.\nFor every object $k \\in K$ in every frame $t \\in F$, an image $I^k_t$ is produced with the object's bounding box in focus and a heavily blurred background.\nWe blur the image using a discrete Gaussian function, except in the region of the object's bounding box, as visualized in \\figref{blurred}.\n\nA feature embedding is then extracted from each of the preprocessed images.\nAs opposed to looking at the bounding box alone, using the entire image allows us to retain and embed spatial information in the feature vector.\nThe object's location is especially valuable in scenes with similar-looking objects, and the blurred background contributes low-frequency information about the surroundings.\n\nWe blur the image with a Gaussian kernel with a fixed size of 201 and a sigma of 38, and extract the image features using an ImageNet \\cite{ImageNet_2009_CVPR} pre-trained ResNet-18 \\cite{ResNet_2016_CVPR} model.\nWe measure the similarity between the feature vector of the target object in frame $t$ and the feature vectors of all the objects in frame $t+1$ by computing the Euclidean distance.\nThe uncertainty increases if more objects are located within the proximity of the target. 
\nTherefore, we do not only look for the nearest neighbor, but rather count the number of objects within a given distance, $d(r)$, from the target feature vector\n\\begin{equation}\\label{eq:region}\n d(r) = d_{\\mathrm{NN}} + d_{\\mathrm{NN}} \\cdot r\n\\end{equation}\nwhere $d_{\\mathrm{NN}}$ is the distance to the nearest neighbor and $r$ is a distance ratio.\nThe ratio is multiplied by the distance to the nearest neighbor in order to account for the variance in scale, e.g., as induced by object resolution or distinctiveness.\n\nAn object within the distance boundary that shares the same identity as the target object is considered a true positive (TP) and all other objects are considered false positives (FP).\nBy measuring the complexity based on the false discovery rate, $\\mathrm{FDR} = \\frac{\\mathrm{FP}}{\\mathrm{FP} + \\mathrm{TP}}$, we get an output in the range $[0,1]$ where a higher number indicates a more complex task.\nAn illustrative example of how the FDR is determined based on the distance ratio $r$ can be seen in \\figref{distance}.\nThere is no single unambiguously optimal distance ratio $r$.\nTherefore, we calculate VCOM based on the average of distance ratios from the set $R = \\{0.01,0.02,...,1.0\\}$\n\\begin{equation}\n\\mathrm{VCOM} = \\frac{1}{|R|}\\sum^{R}_{r}\\frac{1}{|F|} \\sum_t^F \\frac{1}{|K^t|} \\sum_k^{K^t} \\mathrm{FDR}_{d(r)}(k). \n\\end{equation}\n\\subsubsection{Computing MOTCOM.}\nWe have evaluated four averaging methods for combining the sub-metrics into MOTCOM.\nThe four methods are the arithmetic, quadratic, geometric, and harmonic means and they are presented in \\equationref{arithmetic}, \\equationref{quadratic}, \\equationref{geometric}, and \\equationref{harmonic}, respectively.\n\\begin{table*}[!ht]\n\\begin{tabular}{cc}\n\\begin{minipage}{0.45\\linewidth}\n \\begin{equation}\\label{eq:arithmetic}\n \\mathrm{arithmetic} = \\frac{1}{n} \\sum^{n}_{i=1} m_i\n \\end{equation}\n\\end{minipage} \n& \n\\begin{minipage}{0.45\\linewidth}\n 
\\begin{equation}\\label{eq:quadratic}\n \\mathrm{quadratic} = \\sqrt{\\frac{1}{n} \\sum^{n}_{i=1} m_i^2}\n \\end{equation}\n\\end{minipage}\n\\\\\n\\begin{minipage}{0.45\\linewidth}\n \\begin{equation}\\label{eq:geometric}\n \\mathrm{geometric} = \\sqrt[n]{\\prod^{n}_{i=1} m_i}\n \\end{equation} \n\\end{minipage}\n & \n\\begin{minipage}{0.45\\linewidth}\n \\begin{equation}\\label{eq:harmonic}\n \\mathrm{harmonic} = \\frac{n}{\\sum^{n}_{i=1} \\frac{1}{m_i}}\n \\end{equation} \n\\end{minipage}\n\\end{tabular}\n\\end{table*}\n\n\\begin{figure}[]\n\\centering\n \\includegraphics[width=0.6\\linewidth]{figs\/corr_spearman_top-30_MOT17_MOT20_means.pdf}\n \\caption{Spearman's correlation matrix. The entries represent the MOTCOM values when the sub-metrics are combined using the four different averaging methods. The HOTA performance is the average of the top-30 ranked trackers. The scores are based on the combined MOT17 and MOT20 test splits.}\n \\label{fig:correlation_means}\n\\end{figure}\n\nWe present the four variations of MOTCOM in \\figref{correlation_means} computed on the combined MOT17 and MOT20 test splits. 
\nWe see that they all correlate negatively with the HOTA score.\nHowever, the arithmetic mean has the strongest negative correlation, and it correlates positively with all the sub-metrics.\nTherefore, we suggest computing MOTCOM as the arithmetic mean of the sub-metrics.\n\n\\subsubsection{Complete Spearman's Correlation Matrix for MOT17 and MOT20.}\nIn Figure 9 in the main paper we presented a partial Spearman's correlation matrix based on the MOT17 and MOT20 sequences.\nWe used the matrix to evaluate the monotonic relationship between the three complexity metrics (MOTCOM, \\textit{density}, and \\textit{tracks}) and HOTA, MOTA, and IDF1.\nIn \\figref{spearman_top-30} we present the complete Spearman's correlation matrix, which shows additional details on the relationship between the entries.\n\n\\begin{figure}[]\n\\centering\n \\includegraphics[width=0.75\\linewidth]{figs\/corr_spearman_top-30_MOT17_MOT20.pdf}\n \\caption{Spearman's correlation matrix. Based on the average performance of the top-30 trackers on MOT17 and MOT20 test split.}\n \\label{fig:spearman_top-30}\n\\end{figure}\n \n\\newpage\n\n\\subsubsection{Complete Spearman's Correlation Matrix for MOTSynth.}\nIn the main paper we presented the Spearman's Footrule Distance and the complexity scores for the MOTSynth sequences.\nTo expand upon this, we include the complete Spearman's correlation matrix for the MOTSynth train split in \\figref{spearman_motsynth}.\nThe matrix gives a detailed overview of the monotonic relationship between the entries.\n\n\\begin{figure}[]\n \\centering\n \\includegraphics[width=0.75\\linewidth]{figs\/corr_spearman_top-None_MOTSynth.pdf}\n \\caption{Spearman's correlation matrix. 
Based on the CenterTrack performance on the MOTSynth train split.}\n \\label{fig:spearman_motsynth}\n\\end{figure}\n\n\\subsection{Ground Truth} \nIn order to create a strong foundation for the evaluation, we are in need of benchmark datasets with consistent annotation standards and leader boards with a wide range of state-of-the-art trackers.\nTherefore, we evaluate MOTCOM on the popular MOT17 \\cite{milan2016mot16} and MOT20 \\cite{dendorfer2020motchallenge} datasets\\footnote{With permission from the MOTChallenge benchmark authors.}.\nThere are seven sequences in the test split of MOT17 and four sequences in the test split of MOT20, some of which are presented in \\figref{motchl_samples}.\nFurthermore, leader boards are provided for both benchmarks with results from 212 trackers for MOT17 and 80 trackers for MOT20. \nWe use the results from the top-30 ranked trackers\\footnote{Leader board results obtained on March 4, 2022.} based on the average HOTA score, so as to limit unstable and fluctuating performances.\n\nIn order to strengthen and support the evaluation, we include the training split of the fully synthetic MOTSynth dataset \\cite{fabbri2021motsynth} which contains 764 varied sequences of pedestrians.\nA few samples from the dataset can be seen in \\figref{motsynth_samples}.\nIn order to obtain ground truth tracker performance for MOTSynth, we train and test a CenterTrack model \\cite{zhou2020tracking} on the data.\nWe have chosen CenterTrack as it has been shown to perform well when trained on synthetic data \\cite{fabbri2021motsynth}.\n\n\\subsection{Evaluation Metrics}\nWe evaluate and compare the dataset complexity metrics by their ability to rank the MOT sequences according to the HOTA score of the trackers.\nWe rank the sequences from simple to complex by their \\textit{density}, \\textit{number of tracks} (abbr. 
\\textit{tracks}), MOTCOM score, and HOTA score.\nDepending on the metric, the ranking is in decreasing (HOTA) or increasing order (\\textit{density}, \\textit{tracks}, MOTCOM).\nThe absolute difference between the ranks, known as Spearman's Footrule Distance (FD) \\cite{diaconis1977spearman}, gives the distance between the ground truth and estimated ranks \n\\begin{equation}\\label{eq:md}\n \\mathrm{FD} = \\sum^{n}_{i=1} |\\mathrm{rank}(x_i) - \\mathrm{rank}(\\mathrm{HOTA}_i)|,\n\\end{equation}\nwhere $n$ is the number of sequences and $x$ is \\textit{density}, \\textit{tracks}, or MOTCOM.\nIn order to directly compare results of sets of different lengths, we normalize the FD by the maximal possible distance $\\mathrm{FD}_{\\mathrm{max}}$, which is computed as \n\\begin{equation}\n \\mathrm{FD}_{\\mathrm{max}} = \n \\begin{cases}\n \\sum^{n}_{i=1}i - \\frac{n}{2} & \\quad n \\text{ even},\\\\\n \\sum^{n}_{i=1}i - \\frac{n+1}{2} & \\quad n \\text{ odd} \n \\end{cases}.\n\\end{equation}\nFinally, we compute the normalized FD, $\\mathrm{NFD} = \\frac{\\mathrm{FD}}{\\mathrm{FD}_{\\mathrm{max}}}$.\n\n\n\\subsubsection{Preliminaries}\nWe define a MOT sequence as a set of frames $F = \\{1, 2, \\dots\\}$ containing a set of objects $K = \\{k_1, k_2, \\dots\\}$. 
\nThe objects do not have to be present in every frame, therefore, we define the set of frames where a given object is present by $F^{k} = \\{t_1, t_2, \\dots\\}$.\nThe objects present in a given frame $t$ are defined as the set $K^t = \\{k | k \\in K \\land~t \\in F^k\\}$.\nAt each frame $t$ an object $k$ is represented by its center-position in image coordinates and the height and width of the surrounding bounding box $k_t = (x,y,h,w)$.\n\\subsection{MOTCOM}\nOcclusion alone does not necessarily indicate an overwhelming problem if the object follows a known motion model or if it is visually distinct.\nThe same is true for erratic motion and visual similarity when viewed in isolation.\nHowever, the combination of occlusion, erratic motion, and visual similarity becomes increasingly difficult to handle.\n\nTherefore, we combine the occlusion, erratic motion, and visual similarity metrics into a single MOTCOM metric that describes the overall complexity of a sequence.\nMOTCOM is computed as the weighted arithmetic mean of the three sub-metrics and is given by\n\\begin{equation}\n\\mathrm{MOTCOM} = \\frac{w_{\\mathrm{OCOM}}\\cdot\\mathrm{OCOM}+w_{\\mathrm{MCOM}}\\cdot\\mathrm{MCOM}+w_{\\mathrm{VCOM}}\\cdot\\mathrm{VCOM}}{w_{\\mathrm{OCOM}}+w_{\\mathrm{MCOM}}+w_{\\mathrm{VCOM}}}\n\\end{equation}\nwhere $w_{\\mathrm{OCOM}}$, $w_{\\mathrm{MCOM}}$, and $w_{\\mathrm{VCOM}}$ are the weights for the three sub-metrics.\nEqual weighting can be obtained by setting $w_{\\mathrm{OCOM}}=w_{\\mathrm{MCOM}}=w_{\\mathrm{VCOM}}$, while custom weights may be suitable for specific applications.\nDuring evaluation we weight the sub-metrics equally as we deem each of the sub-problems equally difficult to handle. 
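The weighted mean above is straightforward to compute; as a minimal sketch (the function name, signature, and example values are ours, chosen purely for illustration):

```python
def motcom(ocom, mcom, vcom, w_ocom=1.0, w_mcom=1.0, w_vcom=1.0):
    """Weighted arithmetic mean of the three sub-metrics (equal weights by default)."""
    weights = (w_ocom, w_mcom, w_vcom)
    scores = (ocom, mcom, vcom)
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Equal weighting, as used in the evaluation:
print(round(motcom(0.3, 0.6, 0.3), 3))  # -> 0.4

# Custom weights, e.g. emphasizing the visual-similarity sub-metric:
print(round(motcom(0.3, 0.6, 0.3, w_vcom=2.0), 3))  # -> 0.375
```

Setting all three weights equal recovers the plain arithmetic mean, which is the choice used during evaluation.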
\n\\subsubsection{Complexity Score Plots for MOT17 and MOT20}\nIn the main paper we evaluate MOTCOM, \\textit{density}, and \\textit{tracks} on the MOT17 and MOT20 test splits.\nWe focus mainly on the ranking capabilities of the metrics as we expect tracker performance to have a monotonic, but not necessarily linear, relationship with complexity.\nThe ranks of MOTCOM, \\textit{density}, and \\textit{tracks} presented in the main paper are based on the scores displayed in \\figref{complexity_metrics_hota}.\n\\begin{figure}[]\n \\centering\n \\begin{subfigure}[b]{0.55\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/MOTCOM_HOTA_top30_MOT20_MOT17.pdf}\n \\caption{MOTCOM vs. HOTA.}\n \\label{fig:motcom_hota_top30}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/density_HOTA_top30_MOT20_MOT17.pdf}\n \\caption{Density vs. HOTA.}\n \\label{fig:density_hota_top30}\n \\end{subfigure}\n ~\n \\begin{subfigure}[b]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/tracks_HOTA_top30_MOT20_MOT17.pdf}\n \\caption{Tracks vs. HOTA.}\n \\label{fig:tracks_hota_top30}\n \\end{subfigure}\n \\caption{Average HOTA performance of the top-30 trackers on MOT17 and MOT20 test split against a) MOTCOM, b) \\textit{density}, and c) \\textit{tracks}. 
Square markers represent MOT20 sequences and crosses are MOT17 sequences.}\n \\label{fig:complexity_metrics_hota}\n\\end{figure}\n\nThe position of the marker indicates the average score and the error bar is the standard deviation.\nThe marker of the MOT17 sequences is a cross and the MOT20 sequences are represented by a square.\nWe see that the MOT20 sequences have significantly higher densities and more tracks compared to the MOT17 sequences, but the HOTA performance is not correspondingly low.\nThis illustrates that \\textit{density} and \\textit{tracks} do not suffice to describe the complexity of MOT sequences.\n\\subsubsection{Erratic Motion.}\nWe use motion as a term for an object's spatial displacement between frames.\nThis is typically caused by the locomotive behavior of the object itself, camera motion, or a combination.\nAs the number of factors that influence the observed motion increases, the motion becomes harder to predict.\nAn example of two objects exhibiting different types of motion is presented in \\figref{erratic_motion}.\nThe blue object moves with approximately the same direction and speed between the time steps.\nPredicting the next state of the object seems trivial and the search space is correspondingly small.\nOn the other hand, the red object behaves erratically and unpredictably while the motion model is less confident as illustrated by the larger search space.\n\\subsection{Motion Metric}\nThe proposed motion metric, MCOM, is based on the assumption that objects move linearly when observed at small time steps.\nIf this assumption is not upheld, it is a sign of erratic motion and thereby a more complex MOT sequence.\n\nInitially, the displacement vector, $P^k_t$, between the object's position in the current and past time step is calculated as\n\\begin{equation}\n P^k_{t} = p^k_{t} - p^k_{t-\\beta},\n\\end{equation}\nwhere $p_t$ is the position of object $k$ at time $t$, defined by its $x$- and $y$-coordinates, and $\\beta$ describes 
the temporal step size.\nWhen calculating the displacement between two consecutive frames, $\\beta = 1$.\nThe displacement vector in the first frame of a trajectory is set to zero, and $\\beta$ is capped by the first and last frame of a trajectory when the object is not present at time $t\\pm\\beta$.\n\\begin{figure}[!t]\n \\centering\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/displacement.pdf}\n \\caption{}\n \\label{fig:displacement}\n \\end{subfigure}\n \\qquad\\quad\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/dist_area.pdf}\n \\caption{}\n \\label{fig:dist_area}\n \\end{subfigure}\n \\qquad\\quad\n \\begin{subfigure}[b]{0.25\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/dist_area_change.pdf}\n \\caption{}\n \\label{fig:dist_area_change}\n \\end{subfigure}\n \\caption{a) Illustrative example of how the positional error $\\Delta_t$ is calculated as the distance between the true position $p_{t+\\beta}$ and estimated position $\\lambda_{t+\\beta}$. b) The three objects have traveled an equal distance. Relative to their size, the two smaller objects are displaced by a larger amount and the bounding box overlap disappears.
c) If the size of an object increases between two time steps the displacement is relatively less important, compared to when the size of the object decreases.}\n \\label{fig:displacement_and_area}\n\\end{figure}\n\nThe position in the next time step is predicted using a linear motion model with constant velocity based on the current position and the calculated displacement vector.\nThe position is predicted by\n\n\\begin{equation}\n \\lambda^k_{t+\\beta} = P^k_{t} + p^k_t.\n\\end{equation}\nThe error between the predicted and true position of the object is calculated by\n\\begin{equation}\n \\Delta^k_{t} = \\ell_2(p^k_{t+\\beta}, \\lambda^k_{t+\\beta})\n\\end{equation}\nwhere $\\ell_2$ is the Euclidean distance function and a larger $\\Delta^k_t$ indicates a more complex motion.\nSee \\figref{displacement} for an illustration of how the displacement error is calculated.\nThis approach may seem overly simplified, but it encapsulates changes in both direction and velocity.\nFurthermore, it is deliberately sensitive to low frame rates and camera motion, as both factors add to the complexity of tracking sequences.\n\nInspired by the analysis of decreasing tracking performance with respect to smaller object sizes by Bergmann et al. \\cite{Bergmann_2019_ICCV}, the size is also taken into consideration.\nThe combination of size and movement affects the difficulty of predicting the next state of the object.\nIn \\figref{dist_area}, the rectangles are equally displaced but do not experience the same displacement relative to their size. \nIntuitively, if a set of objects are moving at similar speeds, it is harder to track the smaller objects due to their lower spatio-temporal overlap.\n\n\\begin{wrapfigure}{r}{0.45\\textwidth}\n\\centering\n \\includegraphics[width=\\linewidth]{figs\/plots\/weighted_func_MCOM.pdf}\n \\caption{$\\alpha$ controls the growth of the function $g(x,\\alpha)$ and decides when an output value of 0.5 is reached. 
The dashed line illustrates $g(x,\\alpha)$ when using the average of a set of $\\alpha$ values.}\n \\label{fig:weighted_func}\n\\end{wrapfigure}\n\nAccordingly, the motion-based complexity measure is based on the displacement relative to the size of the object.\nAs illustrated in \\figref{dist_area_change}, the size of the object may change between two time steps.\nThe direction of the change is critical as the displacement is less distinct if the size of the object is increasing, compared to the opposite situation.\nTherefore, we multiply the current size of the object with the change in object size to get the transformed object size\n\\begin{equation}\n \\rho^k_t = s^k_t \\cdot \\frac{s^k_{t+\\beta}}{s^k_t}=s^k_{t+\\beta},\n\\end{equation}\nwhere $s_t^k = \\sqrt{w^k_{t} \\cdot h^k_{t}}$ and $h^k_t$ and $w^k_t$ are the height and width of object $k$ at time step $t$, respectively. \nThe motion complexity measure is then calculated as the mean size-compensated displacement across all frames, $F$, and all objects at each frame, $K^t$, and weighted by the log-sigmoid function $g(x,\\alpha)$\n\\begin{equation} \n\\mathrm{MCOM} = \\frac{1}{|A|} \\sum_{\\alpha}^{A} g\\left( \\frac{1}{\\sum^K_k|F^k|}\\sum_k^K\\sum^{F^k}_{t} \\frac{\\Delta^{k}_{t}}{\\rho^k_t} , \\alpha \\right ),\n\\end{equation}\nwhere the average of $A = \\{0.01,0.02,...,1.0\\}$ is used to avoid manually deciding on a specific value for $\\alpha$.\nThe use of the function $g(x,\\alpha)$ is motivated by the aim of having an output in the range $[0,1]$, where a higher number describes a more complex motion. 
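Putting the pieces together, the MCOM computation can be sketched in Python as below. This is a simplification under our own assumptions: $\beta$ is fixed to 1, trajectory endpoints are skipped rather than capped, and we use the closed form $g(x,\alpha)=x/(x+\alpha)$ of the log-sigmoid weighting defined in the text; all function names are ours.

```python
import math

def g(x, alpha):
    # Log-sigmoid weighting from the text: g(x, alpha) = x / (x + alpha).
    return x / (x + alpha)

def mcom(trajectories, alphas=None):
    """trajectories: one list of (x, y, w, h) tuples per object, beta = 1.
    Returns a motion-complexity score in [0, 1]."""
    if alphas is None:
        alphas = [i / 100 for i in range(1, 101)]  # A = {0.01, 0.02, ..., 1.0}
    ratios = []  # size-compensated displacement errors Delta / rho
    for traj in trajectories:
        for t in range(1, len(traj) - 1):
            (px, py), (qx, qy) = traj[t][:2], traj[t - 1][:2]
            # Constant-velocity prediction: lambda_{t+1} = p_t + (p_t - p_{t-1})
            lx, ly = 2 * px - qx, 2 * py - qy
            nx, ny, nw, nh = traj[t + 1]
            delta = math.hypot(nx - lx, ny - ly)   # positional error Delta
            rho = math.sqrt(nw * nh)               # transformed size s_{t+1}
            ratios.append(delta / rho)
    mean_ratio = sum(ratios) / len(ratios)
    return sum(g(mean_ratio, a) for a in alphas) / len(alphas)
```

A perfectly linear trajectory yields a score of 0, while increasingly erratic motion pushes the score toward 1.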
\nThe function $g(x,\\alpha)$ is given by\n\\begin{equation}\\label{eq:weighted_func}\n g(x,\\alpha) = \\frac{1}{1+e^{-\\log(x)}\\alpha} = \\frac{1}{1+\\frac{\\alpha}{x}} = \\frac{x}{x+\\alpha},\n\\end{equation}\nwhere $\\alpha$ affects the gradient of the monotonically increasing function and indicates the point where the output of the function will reach 0.5 as illustrated in \\figref{weighted_func}.\nThe function is designed such that displacements in the lower ranges are weighted higher.\nThe argument for this choice is based on the assumption that minor increments to an extraordinarily erratic locomotive behavior have less impact on the complexity.\n\\subsubsection{Occlusion.}\nOcclusion describes situations where the visual information of an object within the camera view is partially or fully hidden.\nThere are three types of occlusion: \\textit{self-occlusion}, \\textit{scene-occlusion}, and \\textit{inter-object-occlusion} \\cite{andriyenko2011analytical}.\nSelf-occlusion can reduce the visibility of parts of an object, e.g., if a hand is placed in front of a face, but defining the level of self-occlusion is non-trivial and depends on the type of object.\nScene-occlusion occurs when a static object is located in the line of sight between the camera and the target object, thereby decreasing the visual information of the target.\nA scene-occlusion is marked by the red box in \\figref{occlusion_types}, where flowers partially occlude a sitting person.\n\n\\begin{figure}[!b]\n \\centering\n \\begin{subfigure}[t]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/occlusion_types.pdf}\n \\caption{}\n \\label{fig:occlusion_types}\n \\end{subfigure}\n \\qquad \\qquad\n \\begin{subfigure}[t]{0.45\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figs\/erratic_motion.png}\n \\caption{}\n \\label{fig:erratic_motion}\n \\end{subfigure}\n \\caption{a) Sample from MOT17-04 \\cite{milan2016mot16}. 
The yellow boxes illustrate objects partly occluded by scene-occlusion (red) and inter-object-occlusion (blue). b) The blue object displays nearly linear motion, whereas the red object is behaving erratically. The ellipsoids symbolize the confidence of an artificial underlying motion model.}\n\\end{figure}\nInter-object-occlusion is typically the most difficult to handle, especially if the objects are of the same type, as the trajectories of multiple objects cross.\nAn example can be seen in \\figref{occlusion_types}, where the blue box marks a person that partially occludes another person. \n\n\\subsection{Occlusion Metric}\nAs mentioned in \\sectionref{challenges}, occlusion can be divided into three types: self-, scene- and inter-object occlusion.\nIn order to quantify the occlusion rate in a sequence, one should ideally account for all three types.\nHowever, it is most often non-trivial to determine the level of self-occlusion and it is commonly not taken into account in MOT.\nPedersen et al. \\cite{Pedersen_2020_CVPR} used the ratio of intersecting object bounding boxes to determine the inter-object occlusion rate.\nSimilarly, the MOT16, MOT17, and MOT20 datasets include a visibility score based on the intersection over area (IoA) of both inter- and scene-objects \\cite{dendorfer2020motchallenge}, where IoA is formulated as the area of intersection over the area of the target.\n\nFollowing this trend, we omit self-occlusion and base the occlusion metric, OCOM, on the IoA and compute it as\n\\begin{equation}\n \\mathrm{OCOM} = \\frac{1}{|K|} \\sum_k^K \\bar{\\nu}^k,\n\\end{equation}\nwhere $\\bar{\\nu}^k$ is the mean level of occlusion of object $k$. 
$\\nu^k_t$ is in the interval $[0,1]$ where 0 is fully visible and 1 is fully occluded.\nIt is assumed that terrestrial objects move on a ground plane, which allows us to interpret their y-values as pseudo-depth and decide on the ordering.\nAnnotations are needed to calculate the occlusion level for objects moving in 3D.\nOCOM is defined in the interval $[0,1]$ where a higher value means more occlusion and a harder problem to solve.\n\n\\subsubsection{Occlusion.}\nOcclusions can be difficult to handle and they are often simply treated as missing data \\cite{andriyenko2011multi}.\nHowever, in scenes where the objects have weak or similar visual features this can be harmful to the tracking performance \\cite{andriyenko2011analytical,milan2013continuous,stadler2021improving}.\n\nMost authors state that a higher occlusion rate makes tracking harder \\cite{Cao2020,liu2019model,luo2014bi}, but they seldom quantify such statements.\nAn exception is the work proposed by Bergmann et al. \\cite{Bergmann_2019_ICCV}, where they analyzed the tracking results with respect to object visibility, the size of the objects, and missing detections.\nMoreover, Pedersen et al. \\cite{Pedersen_2020_CVPR} argued that the number of objects is less critical than the amount and level of occlusion when it comes to multi-object tracking of fish.\nThey described the complexity of their sequences based on occlusions alone.\n\n\\subsubsection{Erratic Motion.}\nPrior information can be used to predict the next state of an object, which minimizes the search space and hence reduces the impact of noisy or missing detections.\nA linear motion model assuming constant velocity is a simple but effective method for predicting the movement of non-erratic objects like pedestrians \\cite{luo2020multiple,milan2013continuous}.\nIn scenes that include camera motion or complex movement, more advanced models may improve tracker performance.\nPellegrini et al.
\\cite{Pellegrini2009} proposed incorporating human social behavior into their motion model and Kratz et al. \\cite{kratz2010tracking} proposed utilizing the movement of a crowd to enhance the tracking of individuals.\nA downside of many advanced motion models is an often poor ability to generalize to other types of objects or environments.\n\n\\subsubsection{Visual Similarity.}\nVisual cues are commonly used in tracklet association and re-identification and are well studied for persons \\cite{ye2021personreid}, vehicles \\cite{khan2019vehiclereid}, and animals \\cite{schneider2019animalreid} such as zebrafish \\cite{haurum2020zebrafishreid} and tigers \\cite{schnedier2020animalreid}. \nModern trackers often solve the association step using CNNs, like Siamese networks, based on a visual affinity model \\cite{Bergmann_2019_ICCV,leal2016learning,xiang2015learning,yin2020unified}.\nSuch methods rely on visual dissimilarity between the objects. \nHowever, tracklet association becomes more difficult when objects are hard to distinguish purely by their appearance.\n\n\\subsubsection{Dataset Complexity.}\nDetermining the complexity of a dataset is a non-trivial task.\nOne may have a ``feeling'' or intuition about which datasets are harder than others, but this is subjective and can differ depending on who you ask, as well as differ depending on the task at hand.\nIn order to objectively determine the complexity of a dataset, one has to develop a task-specific framework.\nAn early attempt at this was the suite of 12 complexity measures (c-measures) by Ho and Basu \\cite{HoBasu2002}, based on concepts such as inter-class overlap and linear separability.\nHowever, these c-measures are not suitable for image datasets due to unrealistic assumptions, such as the data being linearly separable.\nTherefore, Branchaud-Charron et al. 
\\cite{Branchaud-Charron_2019_CVPR} developed a complexity measure based on spectral clustering, where the inter-class overlap is quantified through the eigenvalues of an approximated adjacency matrix.\nThis approach was shown to correlate well with the CNN performance on several image datasets.\nSimilarly, Cui et al. \\cite{cui2019measuring} presented a framework for evaluating the fine-grainedness of image datasets, by measuring the average distance from data examples to the class centers.\nBoth of these approaches rely on embedding the input images into a feature space by using, e.g., a CNN, and determining the dataset complexity without any indication of what makes the dataset difficult.\n\nIn contrast, dataset complexity in the MOT field has so far been determined through simple statistics such as the number of tracks and density.\nThese quantities are currently displayed for every sequence alongside other stats such as resolution and frame rate for the MOTChallenge benchmark datasets \\cite{dendorfer2020motchallenge}.\nThe preliminary works of Bergmann et al. \\cite{Bergmann_2019_ICCV} and Pedersen et al. \\cite{Pedersen_2020_CVPR} have attempted to further explain what makes a MOT sequence difficult by investigating the effect of occlusions. \nHowever, there is no clear way of describing the complexity of MOT sequences and the current methods have not been verified.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\begin{quote}\n{\\em \nPang Juan was an ancient Chinese military general of the Wei state during the Warring States period. Both he and Sun Bin studied under the tutelage of the hermit Guiguzi, but later led the opposing armies of two countries (resp.) Wei and Qi at war. Ambushed by Qi, the Wei army suffered a crushing defeat and Pang Juan committed suicide. In traditional folklore, Sun Bin carved the words ``Pang Juan dies under this tree'' on a tree at the ambush area. 
When Pang and his men arrived, he saw that there were carvings on the tree so he lit a torch for a closer look. At that moment, the Qi troops lying in ambush attacked and Pang Juan committed suicide under that very tree. \\quad \\\\ \\quad \\hfill {(Based on \\url{https:\/\/en.wikipedia.org\/wiki\/Pang_Juan}) }\n}\n\\end{quote}\n\n\nThe Pang Juan example demonstrates that the announcement of something false may make it true. (A detailed analysis will be given later.) This is reminiscent of the logic of truthful public announcements, wherein the announcement of something true may make it false. The archetypical example is the announcement of $p \\wedge \\neg \\Box p$, for example when you are being told: ``You do not know that Pang Juan died in 342 BC.'' To get $p \\wedge \\neg \\Box p$ we have to read this sentence as its implicature ``Pang Juan died in 342 BC and you do not know that.'' This phenomenon may even be called the reason that this logic exists, that it is considered interesting, and various puzzles build upon the phenomenon. But in truthful public announcement logic we cannot announce something false thus making it true. The announced formulas are supposed to be true. In an alternative semantics for public announcement logic, that of conscious update \\citep{gerbrandyetal:1997}, and that is also known as {\\em believed (public) announcement logic}, the announcement of a formula is independent of its truth. In that logic, we can call an announcement a lie if the announced formula is false. A true lie is then the announcement of something false that makes it true. \n\nThis analysis of a lie is not entirely according to the traditional view of agents lying to each other: you are lying to me if you believe $\\neg\\phi$ but you say to me that $\\phi$ with the intention to make me believe that $\\phi$.\\footnote{This analysis of lying goes back to \\cite{Augustine:dm}. 
Lying has been a thriving topic in the philosophical community since then \\citep{siegler:1966,bok:1978,mahon:2006,mahon.stanford:2008}. More recent modal logical analyses that can be seen as a continuation of this philosophical tradition include \\cite{baltag:2002,steiner:2006,kooietal:2011,hvdetal.lying:2012,sakama:2011,liuetal:2013,hvd.lying:2014}. In modal logics with only belief operators the intentional aspect of lying is not modelled.} If we model the realization of this intention (namely that indeed I believe you), and incorporate all belief aspects, we get $\\Box_a\\neg\\phi \\rightarrow [\\Box_a\\phi]\\Box_b \\Box_a\\phi$ (where agent $a$ lies to agent $b$). Abstracting from all belief aspects gives us $\\neg\\phi \\rightarrow [\\phi]\\phi$. Although the former is more precise, it is not uncommon, if not customary, to abstract from the source and the justification of the lie so that we get $\\neg\\phi \\rightarrow [\\phi]\\Box_b \\phi$, from which it is not a big step to $\\neg\\phi \\rightarrow [\\phi]\\phi$, the main focus of our investigation. (In order to make this identification, we can think of agent $b$ as the special agent who is able to distinguish all states, for whom it holds that $\\Box_b \\phi \\leftrightarrow \\phi$.) This simplified schema $\\neg\\phi \\rightarrow [\\phi]\\phi$ is clearly in opposition to the schema $\\phi \\rightarrow [\\phi]\\phi$ for the so-called {\\em successful formulas}. (In truthful public announcement logic the successful formulas are those for which $[\\phi]\\phi$ is valid. In that logic $[\\phi]\\phi$ is equivalent to $\\phi \\rightarrow [\\phi]\\phi$. But in believed public announcement logic, where announcements are independent of the truth of the announced formula, $[\\phi]\\phi$ is not equivalent to $\\phi \\rightarrow [\\phi]\\phi$.)
In this work we mainly investigate true lies, successful formulas, and yet other notions in believed announcement logic, and also the iteration of such announcements.\n\nWe also present results in this work that are not for believed announcement logic, but that are still related to iterated announcements, or to lying. \n\nIn the logic of believed public announcements, interpreted on ${\\mathcal{KD}45}$ models encoding belief, the announced formula can be false and true before the announcement and can also be false and true after the announcement; and even beyond that, when iterating that same announcement, this value can change in arbitrary ways after each next announcement. In the logic of truthful public announcements, interpreted on ${\\mathcal{S}5}$ models encoding knowledge, lying is not possible. But it can still be that formulas other than the announcement keep changing their value when iterating a given announcement, in ways otherwise very similar to the results for belief.\n\nIn the Pang Juan example (as we will explain later) it can be argued that the action involved is not so much an announcement but an assignment. Assignments model that propositional variables change their value (it is also known as ontic change): the proposition ``Pang Juan dies under this tree'' was false, and afterwards it is true: factual change. We present a scenario involving lying wherein private (i.e., non-public) announcements and private assignments both play a role: two friends hesitating to go to a party are both being lied to that the other one is going to the party, subsequent to which they both change their minds (factual change) and go to the party.\n\nWe now give an overview of our results and of the contents of our paper. Section~\\ref{sec.two} recalls the logic of believed announcements, and the subsequent Section \\ref{sec.twoplus} introduces all novel terminology to put our results in perspective, and also gives an overview of our results. 
These results are then developed in detail in the remaining sections. In Section \\ref{sec.2valid} we present results on successful formulas, true lies, self-refuting formulas, and (so-called) impossible lies (that remain false even after announcement). In Section \\ref{sec.three} we elaborate on the result that there are models and announcements such that by iterating that announcement in the model we can change its value arbitrarily often and in arbitrarily chosen ways. In Section \\ref{sec.four} we syntactically characterize the single-agent true lies, i.e., formulas that satisfy $\\neg\\phi \\rightarrow [\\phi]\\phi$. In Section \\ref{sec.five} we discuss iteration of truthful public announcements on models encoding knowledge, and in Section \\ref{sec.six} the interaction of private announcements and private assignments, by way of the above-mentioned example of two friends going to a party. Section \\ref{sec.last} presents an integration of our different results in view of future research.\n\n\\section{Logic of believed announcements} \\label{sec.two}\n\nWe recall the modelling of lying and truthtelling (after \\citep{hvdetal.lying:2012}) in {\\em believed (public) announcement logic}, also known as `arrow elimination' (not necessarily truthful) public announcement logic \\citep{gerbrandyetal:1997}, which is a lesser-known alternative to the better known `state elimination' (truthful) public announcement logic \\citep{plaza:1989,baltagetal:1998}. Its language, structures, and semantics are as follows. Given are a finite set of agents $A$ and a countable set of propositional variables $P$ (let $a\\inA$ and $p\\inP$).\n\n\\begin{definition}[Language] \\[ \\ensuremath{\\mathcal{L}} \\ \\ni \\ \\phi ::= p \\ | \\ \\neg \\phi \\ | \\ (\\phi \\wedge \\phi) \\ | \\ \\Box_a \\phi \\ | \\ [\\phi]\\phi \\] \\end{definition}\nOther propositional connectives are defined by abbreviation. For $\\Box_a \\phi$, read `agent $a$ believes formula $\\phi$'.
If there is a single agent only, we may omit the index and write $\\Box \\phi$ instead. Agent variables are $a,b,\\agentc,\\dots$. For $[\\phi]\\psi$, read `after public announcement of $\\phi$, $\\psi$'. If $\\Box_a \\neg\\phi$, we say that $\\phi$ is {\\em unbelievable} (for $a$) and, consequently, if $\\neg \\Box_a \\neg\\phi$, for which we write $\\Diamond_a \\phi$, we say that $\\phi$ is {\\em believable} (for $a$). This is also read as `agent $a$ considers it possible that $\\phi$'. Shared belief $\\Box_A \\phi$ (everybody believes that $\\phi$) is defined as $\\bigwedge_{a\\inA} \\Box_a \\phi$. We say that {\\em $\\phi$ is believable} iff all agents consider $\\phi$ believable, i.e., $\\bigwedge_{a\\inA} \\Diamond_a \\phi$, for which we write $\\Diamond_A \\phi$ (however, this is not the dual of $\\Box_A\\phi$: we may have that $\\neg\\Box_A\\neg\\phi$ but not $\\Diamond_A\\phi$). Believability plays an important role in our setting.\n\n\\begin{definition}[Structures]\nAn {\\em epistemic model} $M = ( S, R, V )$ consists of a {\\em domain} $S$ of {\\em states} (or `worlds'), an {\\em accessibility function} $R: A \\rightarrow {\\mathcal P}(S \\times S)$, where each $R(a)$, for which we write $R_a$, is an accessibility relation, and a {\\em valuation} $V: P \\rightarrow {\\mathcal P}(S)$, where each $V(p)$ represents the set of states where $p$ is true. For $s \\in S$, $(M,s)$ is an {\\em epistemic state}. \\end{definition} An epistemic state is also known as a pointed Kripke model. We often omit the parentheses in $(M,s)$. Without any restrictions we call the model class ${\\mathcal K}$. The class of models where all accessibility relations are transitive and euclidean is called ${\\mathcal K45}$, and if they are also serial it is called ${\\mathcal KD45}$ (this class is the main focus of our investigations, because it encodes agents with consistent beliefs). The class of models where all accessibility relations are equivalence relations is ${\\mathcal S5}$. 
Class ${\\mathcal KD45}$ is said to have the {\\em properties of belief}, and ${\\mathcal S5}$ to have the {\\em properties of knowledge}.\n\\begin{definition}[Semantics] \\label{def.truthlyingpub}\nAssume an epistemic model $M = ( S, R, V )$. \n\\[ \\begin{array}{lcl}\nM,s \\models p &\\mbox{iff} & s \\in V(p) \\\\ \nM,s \\models \\neg \\phi &\\mbox{iff} & M,s \\not \\models \\phi \\\\ \nM,s \\models \\phi \\wedge \\psi &\\mbox{iff} & M,s \\models \\phi \\text{ and } M,s \\models \\psi \\\\ \nM,s \\models \\Box_a \\phi &\\mbox{iff} & \\mbox{for all \\ } t \\in S: R_a(s,t) \\text{ implies } M,t \\models \\phi \\\\ \nM,s \\models [\\phi] \\psi &\\mbox{iff} & M|\\phi,s \\models \\psi \n\\end{array} \\] \nwhere epistemic model $M|\\phi = ( S, R^\\phi, V )$ is as $M$ except that for all $a\\inA$, $R^\\phi_a \\ := \\ R_a \\cap \\ (S \\times \\II{\\phi}_M)$ (and where $\\II{\\phi}_M := \\{ s\\inS \\mid M,s\\models \\phi\\}$). \n\\end{definition} \n\nIn our semantics, $[\\phi]\\Box_a\\psi$ means that, regardless of the truth of $\\phi$, agent $a$ believes $\\psi$ after public announcement of $\\phi$. In particular, $[p]\\Box_a p$ is valid: after the announcement of a propositional variable, the agent believes that it is true, regardless of the value of that variable. This is why it is called {\\em believed (public) announcement} of $\\phi$, in contrast to the {\\em truthful (public) announcement} of $\\phi$ \\citep{plaza:1989}, wherein we restrict the model to the subdomain of all states where $\\phi$ is true (see Appendix A). As said, believed announcement logic originates with \\cite{gerbrandyetal:1997}, where it is called the logic of conscious updates. In believed announcement logic new information is accepted by the agents independently of the truth of that information. In truthful announcement logic new information is only incorporated if it is true.
\n\nIt should be noted that in believed announcement logic announcements can be truthful (namely when true) and lying (namely when false), whereas in truthful announcement logic announcements can only be truthful. In \\cite{hvdetal.lying:2012} the believed public announcement of $\\phi$ is modelled as non-deterministic choice between such truthful and lying public announcement of $\\phi$, so that `after truthful announcement of $\\phi$, $\\psi$' corresponds to $\\phi \\rightarrow [\\phi] \\psi$, and `after lying announcement of $\\phi$, $\\psi$' (after the lie that $\\phi$, $\\psi$) corresponds to $\\neg\\phi \\rightarrow [\\phi] \\psi$.\n\nThe announcing agent is not modelled in announcement logics, but only the effect of its announcements on the audience, the set of all agents. The interaction between announcement and belief can be formalized as $[\\phi] \\Box_a \\psi \\leftrightarrow \\Box_a (\\phi \\rightarrow [\\phi]\\psi)$ \\citep{gerbrandyetal:1997}.\n\nBelieved and truthful announcement are therefore closely related. And even in a technical sense: whenever a believed announcement is true, the semantics delivers results that are indistinguishable in the logics. This is because on any epistemic model, the model restriction semantics and the arrow restriction semantics result in bisimilar models, on the part of the model wherein the announcement is true. Investigations of these correspondences are made in \\cite{kooi.jancl:2007} and \\cite{hvdetal.lying:2012}. \n\n\\section{Iteration of lying and truthful announcements} \\label{sec.twoplus}\n\n\\subsection{What is a true lie?}\n\nLet $p$ be the proposition `Pang Juan dies under this tree' (Chinese does not have a future tense). As a consequence of Pang Juan observing `Pang Juan dies under this tree', he dies under this tree. 
The logic of public announcements is a logic of public observations, and observing written text is the typical action that has a straightforward formalization in logic: it corresponds to the announcement of the proposition representing that text. We could imagine Pang Juan being uncertain if he would die under this tree ($\\neg \\Box p \\wedge \\neg \\Box \\neg p$), such that observing $p$ removes this uncertainty.\n\nStill, this analysis is unsatisfactory: If the written text had been `Pang Juan does {\\bf not} die under this tree', i.e., $\\neg p$, he would still have died. If the written text had been `The moon is made of green cheese', i.e., the announcement of some unrelated proposition $q$, he would still have died. Even if there had nothing been written on that tree (it wasn't carvings, but merely a gnarled old tree trunk), i.e., the announcement of the trivial proposition $\\top$, he would still have died. It was giving away his position by lighting the torch that caused his death, irrespective of what was written on the tree. Therefore the scenario cannot be modelled as initial uncertainty that is reduced by an informative action: it does not fit the schema $\\neg \\phi \\rightarrow [\\phi] \\phi$. \n\nAlternatively, with some justification it can be said that $p$ is false before the observation of $p$ but after the observation of $p$, $p$ is true. Changing the value of a variable is factual\/ontic change. It could then be a lying public announcement of $p$ (observing $p$ while $p$ is false) linked to a public assignment changing the value of $p$ into true. However, this analysis has the same shortcomings as the previous one: what is observed does not matter. Also, Pang Juan can hardly be seen as enacting his own death. The actors are the soldiers of his enemy Sun Bin. 
Of course we can see the visual observation (informative) and the subsequent death (ontic) as two distinct, but related, actions.\n\nIndeed \\cite{peteretal:2016} argue that this action is neither epistemic (informative) nor performative (factual).\n\n\\medskip\n\nIn the movie {\\em True Lies}, Jamie Lee Curtis (unsuccessfully) plays the role of a timid, slovenly housewife, who starts enacting the lie that she is a spy in order to make her husband jealous and thus seduce him. As Arnold Schwarzenegger really is a spy, because of her enactment she also really becomes a spy (and incidentally also a sparkling, extrovert, and well-dressed woman, a role that fits her rather well). This story of gradual and (exclusively) factual change also does not fit a logic of exclusively informational change. However, this factual change interacts with belief (informative) change.\n\n\\medskip\n\nIn Section \\ref{sec.six} we will see that the combination of (private) informational and (private) factual change after all makes for a pretty good but different kind of true lie. With that background we will, in Section \\ref{sec.last}, review once more the Pang Juan and the Arnold Schwarzenegger examples. In the current section (and the subsequent three sections, altogether the core of our contribution) we focus exclusively on true lies as informational change, i.e., $\\phi$ is a true lie iff $\\neg\\phi \\rightarrow [\\phi]\\phi$ is valid in the logic of believed announcements. Also relevant is the satisfiability of $\\neg\\phi \\wedge [\\phi]\\phi$. These different perspectives are, of course, related, and we will now proceed to explain how. (In Section \\ref{sec.term} they are properly defined as special cases of a more general setup.)\n\n\\medskip\n\nConsider the formula $\\phi = p \\wedge \\Box p$ and the model $M$ consisting of two states $s$ and $t$ wherein a proposition $p$ is false and true, respectively, and that are indistinguishable for the agent.
We then have that $M,t \\models \\neg (p \\wedge \\Box p)$ whereas $M|(p \\wedge \\Box p),t \\models p \\wedge \\Box p$ so that \\[ M,t \\models \\neg (p \\wedge \\Box p) \\rightarrow [p \\wedge \\Box p] (p \\wedge \\Box p). \\] Therefore, the formula $p \\wedge \\Box p$ is a true lie in state $t$ of the model. On the other hand, although $M,s \\models \\neg (p \\wedge \\Box p)$, we have that $M|(p\\wedge\\Box p),s\\not\\models p \\wedge \\Box p$ for the simple reason that $p$ is false in $s$. So \\[ M,s \\not\\models \\neg (p \\wedge \\Box p) \\rightarrow [p \\wedge \\Box p] (p \\wedge \\Box p). \\] We illustrate both transitions below:\n\n\\[ \\begin{array}{lll}\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (2,0) {$p$};\n\\node (0s) at (0,-0.5) {$s$};\n\\node (1t) at (2,-0.5) {$t$};\n\\draw[<->] (0) to (1);\n\\draw[->] (0) edge[loop above,looseness=15] (0); \n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n&\n\\quad \\stackrel {p\\wedge\\Box p} \\Rightarrow \\quad\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (2,0) {$p$};\n\\node (0s) at (0,-0.5) {$s$};\n\\node (1t) at (2,-0.5) {$t$};\n\\end{tikzpicture}\n\\end{array} \\]\nFor another example, note that \\[ M|(p\\wedge\\Box p),t \\models \\neg (p \\wedge \\Box p) \\rightarrow [p \\wedge \\Box p] (p \\wedge \\Box p) \\] for the simple reason that $M|(p\\wedge\\Box p),t \\not\\models \\neg(p \\wedge \\Box p)$. This seems somewhat undesirable: you do not want the implication to be true because the antecedent is false. In a given model $(N,u)$ you only want to call $\\phi$ a true lie if $N,u \\models \\neg \\phi$ but $N,u \\models [\\phi] \\phi$, in other words, if $N,u \\models \\neg \\phi \\wedge [\\phi] \\phi$. \n\nThe formula $p \\wedge \\Box p$ is sometimes a true lie and sometimes not. 
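Since the models involved are finite, the believed-announcement (arrow-elimination) semantics is easy to mechanize. The following Python sketch is our own illustration (the encoding of models as successor maps and of formulas as Python functions is an assumption, not part of the text); it replays the two transitions just described on the two-state model:

```python
# A finite Kripke model: R maps each state to its set of successors,
# V is the set of states where the atom p is true.
# Formulas are functions (R, V, s) -> bool.

def box(f):
    # Box f holds at s iff f holds at every successor of s
    return lambda R, V, s: all(f(R, V, t) for t in R[s])

def update(R, V, f):
    # Believed announcement of f: delete every arrow into a state falsifying f
    return {s: {t for t in ts if f(R, V, t)} for s, ts in R.items()}

p = lambda R, V, s: s in V
phi = lambda R, V, s: p(R, V, s) and box(p)(R, V, s)   # p & Box p

# Two indistinguishable states; p is false at s and true at t.
R = {'s': {'s', 't'}, 't': {'s', 't'}}
V = {'t'}

R1 = update(R, V, phi)
print(not phi(R, V, 't') and phi(R1, V, 't'))   # prints True: a true lie at t
print(phi(R1, V, 's'))                          # prints False: not a true lie at s
```

The update leaves the set of states intact and only removes arrows, matching the believed announcement semantics; the first check witnesses $\neg\phi \wedge [\phi]\phi$ at $t$, the second its failure at $s$.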
Of additional interest are the formulas that are always true lies, i.e., formulas $\\phi$ such that $\\neg\\phi \\rightarrow [\\phi]\\phi$ is valid. Consider model class ${\\mathcal{K}45}$. We show that $\\Box p$ and $p \\vee \\Box p$ are true lies. However, the latter is more interesting than the former.\n\nLet us first show that $\\Box p$ is a true lie. Consider a ${\\mathcal{K}45}$ epistemic state $(M,s)$, where $M = (S,R,V)$, and such that $M,s \\models \\neg \\Box p$. Then there is a state $t$ with $(s,t) \\in R$ and such that $M,t \\models \\neg p$. Now consider the believed announcement update with $\\Box p$. Let $u$ be any state with $(s,u)\\in R$. In the model $M|\\Box p$ with accessibility relation $R^{\\Box p}$ we have that $(s,u) \\not \\in R^{\\Box p}$: from $(s,t)\\in R$ and $(s,u)\\in R$ follows (by euclidicity) $(u,t)\\in R$, and thus $M,u \\models \\neg \\Box p$. In $M|\\Box p$ no state is therefore accessible from $s$. Thus, $M|\\Box p, s \\models \\Box p$ and therefore $M, s \\models \\neg \\Box p \\rightarrow [\\Box p]\\Box p$. \n\nIf the $\\Box$ modality represents belief, this is not a very interesting form of lying, as in the resulting model the agent's beliefs are inconsistent: $M|\\Box p,s\\models \\Box p$ but also $M|\\Box p,s\\models \\Box \\neg p$. Differently said, as the agent did not believe $\\Box p$, she does not consider it possible that the lie $\\Box p$ is the truth. More interesting are what we will call the {\\em believable} true lies that we associate with the validity of $(\\neg \\phi \\wedge \\Diamond \\phi) \\rightarrow [\\phi]\\phi$ and with the satisfiability of $(\\neg \\phi \\wedge \\Diamond \\phi) \\wedge [\\phi]\\phi$ (and with the model class ${\\mathcal{KD}45}$ --- believable true lies preserve consistent beliefs). 
Formula $\\Box p$ is not believable in the last sense, as $\\neg \\Box p \\wedge \\Diamond \\Box p$ implies (in ${\\mathcal{KD}45}$) that $\\neg \\Box p$ and $\\Box p$ are both true, which is inconsistent.\n\nWe now show that $p \\vee \\Box p$ is a true lie. Consider a ${\\mathcal{K}45}$ epistemic state $(M,s)$, where $M = (S,R,V)$, and such that $M,s \\models \\neg p \\wedge \\neg \\Box p$. As in the previous argument, because $M,s \\models \\neg \\Box p$ and because $M$ is a ${\\mathcal{K}45}$ model, none of the accessible states satisfies $\\Box p$. But of course, some may satisfy $p$. All states that remain accessible from $s$ after the update therefore satisfy $p$, so that $M|(p \\vee \\Box p), s \\models \\Box p$ and thus $M|(p \\vee \\Box p), s \\models p \\vee \\Box p$. Formula $p \\vee \\Box p$ is therefore also a believable true lie, and even in the non-trivial sense that $\\neg (p \\vee \\Box p) \\wedge \\Diamond (p \\vee \\Box p) \\wedge [p \\vee \\Box p] (p \\vee \\Box p)$ is satisfiable (namely when there are also $p$ worlds accessible from $s$ in $M$). If $p \\vee \\Box p$ is believable, this lie will preserve seriality in the updated model (as there must be an accessible $p$ state), and therefore preserves ${\\mathcal{KD}45}$.\n\nWe illustrate the difference between these two true lies on the same model as above. 
(We recall that $p \\wedge \\Box p$ is a true lie in state $t$ of the model, but {\\em not} in state $s$.)\n\n\n\\[ \\begin{array}{lll}\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (2,0) {$p$};\n\\node (0s) at (0,-0.5) {$s$};\n\\node (1t) at (2,-0.5) {$t$};\n\\draw[<->] (0) to (1);\n\\draw[->] (0) edge[loop above,looseness=15] (0); \n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n&\n\\quad \\stackrel {\\Box p} \\Rightarrow \\quad\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (2,0) {$p$};\n\\node (0s) at (0,-0.5) {$s$};\n\\node (1t) at (2,-0.5) {$t$};\n\\end{tikzpicture} \\\\ \\ \\\\ \n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (2,0) {$p$};\n\\draw[<->] (0) to (1);\n\\draw[->] (0) edge[loop above,looseness=15] (0); \n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n&\n\\quad \\stackrel {p \\vee \\Box p} \\Rightarrow \\quad\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (2,0) {$p$};\n\\draw[->] (0) to (1);\n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n\\end{array} \\]\n\n\nIn this work we wish to investigate in depth which formulas may be true lies on a given model, which formulas may be true lies on any model, and how this depends on the class of models. In addition to true lies that become true after lying announcements, there are the successful formulas that remain true after truthful announcements, and we can consider two other options: self-refuting formulas that become false after truthful announcements, and what one might call `impossible lies' that remain false after lying announcements. 
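The two updates in the figure above can be checked mechanically. The Python sketch below is our own illustration (the representation of models and formulas is an assumption); it contrasts the announcement of $\Box p$, which empties the accessibility relation, with the announcement of $p \vee \Box p$, which keeps the arrow to the $p$ state and so preserves seriality:

```python
# Arrow-elimination update on the two-state model of the figure.
def box(f):
    # Box f holds at s iff f holds at every successor of s
    return lambda R, V, s: all(f(R, V, t) for t in R[s])

def update(R, V, f):
    # believed announcement of f: keep only arrows into states satisfying f
    return {s: {t for t in ts if f(R, V, t)} for s, ts in R.items()}

p = lambda R, V, s: s in V
box_p = box(p)                                              # Box p
p_or_box_p = lambda R, V, s: p(R, V, s) or box_p(R, V, s)   # p v Box p

R = {'s': {'s', 't'}, 't': {'s', 't'}}   # s and t indistinguishable
V = {'t'}                                # p true at t only

# Box p is a true lie at s, but the update empties the relation:
R1 = update(R, V, box_p)
print(not box_p(R, V, 's'), box_p(R1, V, 's'), R1['s'])            # True True set()

# p v Box p is a believable true lie at s: the arrow to t survives.
R2 = update(R, V, p_or_box_p)
print(not p_or_box_p(R, V, 's'), p_or_box_p(R2, V, 's'), R2['s'])  # True True {'t'}
```

The empty successor set after the $\Box p$ update corresponds to the inconsistent beliefs discussed above, while the surviving arrow after the $p \vee \Box p$ update witnesses the preservation of ${\mathcal{KD}45}$.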
We will therefore take a more general perspective: what formulas may {\\em change their value or keep their value} after their announcement, how does this depend on whether they were true or false before the announcement, and what happens when we iterate these announcements? We now first introduce terminology to address these matters. \n\n\\subsection{Terminology for iterated announcements} \\label{sec.term}\n\nGiven $\\sigma\\in \\{0,1\\}^*$ and $k\\leq |\\sigma|$, $\\sigma_k$ denotes the $k$th digit of $\\sigma$ and $\\sigma|k$ the prefix consisting of the first $k$ elements of $\\sigma$. We abuse notation and also view $\\sigma_k$ as a function $\\sigma_k:\\mathcal{L} \\to \\mathcal{L}$ on formulas such that\n$$\\sigma_k(\\phi)=\\left\\{\\begin{array}{ll}\n\\phi & \\text{if } \\sigma_k=1\\\\ \n\\neg\\phi & \\text{if } \\sigma_k=0\n\\end{array}\\right. $$\n\\begin{definition} \\label{def.above}\nLet ${\\mathcal X}$ be a class of models and $\\sigma\\in \\{0,1\\}^*$ with $n = |\\sigma| \\geq 2$. Formula $\\phi\\in\\ensuremath{\\mathcal{L}}$ is {\\em $\\sigma$-satisfiable} in ${\\mathcal X}$ iff $\\sigma_1(\\phi) \\wedge [\\phi]\\tau_2(\\phi)$ is satisfiable in ${\\mathcal X}$, and $\\phi$ is {\\em $\\sigma$-valid} in ${\\mathcal X}$ iff $\\sigma_1(\\phi) \\rightarrow [\\phi]\\tau_2(\\phi)$ is valid in ${\\mathcal X}$, where:\n$$\\tau_k(\\phi)=\\left\\{\\begin{array}{ll}\n\\sigma_k(\\phi)\\land [\\phi]\\tau_{k+1}(\\phi) & \\text{if } 1 < k < n\\\\ \n\\sigma_n(\\phi) & \\text{if } k = n\n\\end{array}\\right. $$\n\\end{definition}\nIn this terminology, a true lie is a non-trivial $01$-valid formula; similarly, successful formulas are $11$-valid, self-refuting formulas are $10$-valid, and impossible lies are $00$-valid.\n\n\\begin{proposition} \\label{prop.nonon}\nNon-trivial true lies do not exist on class ${\\mathcal K}$.\n\\end{proposition}\n\n\\begin{proof}\nLet $\\phi$ be $01$-valid on ${\\mathcal K}$. We show that $\\neg\\phi$ is unsatisfiable, so that $\\phi$ is trivial. As every pointed model is bisimilar to a tree, it suffices to show that no tree satisfies $\\neg\\phi$; note that by $01$-validity a tree satisfying $\\neg\\phi$ also satisfies $\\psi := \\neg\\phi \\wedge [\\phi]\\phi$. One first shows that no finite-depth tree satisfies $\\psi$. So suppose that $(M,s)$ is an infinite-depth tree such that $(M,s) \\models \\psi$, and let $d$ be the modal depth of $\\psi$. For $d' > d$, let $(N,t)$ be a finite-depth tree that is $d'$-bisimilar to $(M,s)$ (we refer to \\cite{blackburnetal:2001} for this notion). Then for all formulas $\\eta$ of modal depth at most $d'$, $(M,s) \\models \\eta$ iff $(N,t) \\models \\eta$. Therefore, as $(M,s) \\models \\psi$ and the modal depth of $\\psi$ is $d < d'$, also $(N,t) \\models \\psi$, i.e., $(N,t) \\models \\neg\\phi \\wedge [\\phi]\\phi$. 
This contradicts the observation above that finite-depth trees cannot satisfy true lies.\n\nAs neither finite-depth nor infinite-depth trees may satisfy $\\neg\\phi$, no tree satisfies $\\neg\\phi$, and thus no pointed model does. Therefore non-trivial true lies do not exist on class ${\\mathcal K}$.\n\\end{proof}\n\n\\begin{proposition} \nLet $\\sigma\\in\\{0,1\\}^2$. Then there are non-trivial $\\sigma$-valid formulas and non-trivial believable $\\sigma$-valid formulas on class ${\\mathcal{KD}45}$.\n\\end{proposition}\n\n\\begin{proof}\nSee Section \\ref{sec.2valid}. That section also gives references for the origin of the corresponding English terminology, in the context of validity and satisfiability.\n\\end{proof}\nWe recall that a believable true lie is a believable $01$-valid formula. The following result is a syntactical characterization of this semantic notion, on the ${\\mathcal{KD}45}$ models. The term `disjunctive lying form' is defined syntactically in Section \\ref{sec.four}.\n\\begin{proposition}[Characterization of believable true lies] \\label{prop.char} \\label{th:char}\nA formula is a believable true lie iff it is not equivalent to a disjunctive lying form.\n\\end{proposition}\n\n\\begin{proof}\nSee Section \\ref{sec.four}. The proof closely follows the proof of the syntactic characterization of successful formulas in \\cite{hollidayetal:2010}.\n\\end{proof}\nThere are successful formulas that are not true lies, such as booleans. Clearly $\\models p \\rightarrow [p]p$, whereas $\\not\\models \\neg p \\rightarrow [p]p$. There are also successful formulas that are true lies, such as (in the universal fragment) $p \\vee \\Box p$. At some stage we conjectured that {\\em all true lies are successful formulas}. But this is not the case.\n\n\\begin{proposition}[Some true lies are not successful formulas] \\label{prop.conjone}\nOn class ${\\mathcal{K}45}$ (${\\mathcal{KD}45}$), some 01-valid formulas are not 11-valid. 
\n\\end{proposition}\n\\begin{proof}\nLet $\\phi=\\Box\\bot\\lor (p\\land \\Diamond p\\land \\Diamond \\neg p)\\lor(\\neg p \\land \\Diamond p \\land \\Box p)$. We show that on ${\\mathcal{K}45}$ $\\phi$ is 01-valid but not 11-valid. \n\nDue to transitivity and euclidicity, a pointed ${\\mathcal{K}45}$ model consists of a designated point possibly pointing at a cluster of indistinguishable points. The formula $\\phi$ actually specifies its ${\\mathcal{K}45}$ models modulo bisimilarity. We list all eight ${\\mathcal{K}45}$ pointed models (modulo bisimilarity) for one variable $p$, where we distinguish those where $\\phi$ is true from those where $\\phi$ is false, and we also give the result of the $\\phi$ update. The underlined state is the designated state.\n\n\\bigskip\n\n\\noindent --- $\\phi$ is true at these four models:\n\\vspace{-.5cm}\n\\begin{center}\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$a:$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) {\\underline{$p$}};\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$b:$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$c:$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) {\\underline{$p$}};\n\\draw[<->] (0) to (1);\n\\draw[->] (0) edge[loop above,looseness=15] (0); \n\\draw[->] (1) edge[loop above,looseness=12] (1); \n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$d:$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\draw[->] (0) to (1);\n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n\\end{center}\n--- announcing $\\phi$ in those models delivers:\n\\vspace{-.5cm}\n\\begin{center}\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$a':$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) 
{\\underline{$p$}};\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$b':$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$c':$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) {\\underline{$p$}};\n\\draw[->] (0) to (1);\n\\draw[->] (1) edge[loop above,looseness=12] (1); \n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$d':$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\end{tikzpicture}\n\\end{center}\n\\noindent --- $\\phi$ is false at these four models:\n\\begin{center}\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$e:$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) {\\underline{$p$}};\n\\draw[->] (1) edge[loop above,looseness=12] (1); \n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$f:$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\draw[->] (0) edge[loop above,looseness=12] (0); \n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$g:$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\draw[<->] (0) to (1);\n\\draw[->] (0) edge[loop above,looseness=12] (0); \n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$h:$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) {\\underline{$p$}};\n\\draw[<-] (0) to (1);\n\\draw[->] (0) edge[loop above,looseness=12] (0); \n\\end{tikzpicture}\n\\end{center}\n\\noindent --- announcing $\\phi$ in those models delivers:\n\\vspace{-.5cm}\n\\begin{center}\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$e':$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) 
{\\underline{$p$}};\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$f':$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$g':$};\n\\node (0) at (0,0) {\\underline{$\\neg p$}};\n\\node (1) at (1.5,0) {$p$};\n\\draw[->] (0) to (1);\n\\draw[->] (1) edge[loop above,looseness=15] (1); \n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (min) at (-.7,0) {$h':$};\n\\node (0) at (0,0) {$\\neg p$};\n\\node (1) at (1.5,0) {\\underline{$p$}};\n\\end{tikzpicture}\n\\end{center}\nModels $e',f',g',h'$ are bisimilar to $a, b, d, a$ respectively, thus $\\neg \\phi\\to [\\phi]\\phi$ holds on $e, f, g, h$, i.e., $\\phi$ is 01-valid. However, $c'$ is bisimilar to $e$, thus $\\phi\\to [\\phi]\\phi$ is false at model $c$, i.e., $\\phi$ is not 11-valid. \n\nNot only is $\\phi$ 01-valid but not 11-valid on ${\\mathcal{K}45}$; also on ${\\mathcal{KD}45}$, $\\phi$ is 01-valid (obvious) and not 11-valid (not obvious: the transition between $c$ and $c'$ is between ${\\mathcal{KD}45}$ models). The reader may further observe that $\\phi$ is also non-trivially believable 01-valid (the transition from $g$ to $g'$ is between ${\\mathcal{KD}45}$ models).\n\\end{proof}\nA natural question to ask is whether for all $\\sigma \\in \\{0,1\\}^* \\cup \\{0,1\\}^\\omega$ there are (non-trivial) $\\sigma$-valid formulas. The answer is (un)fortunately negative. For example, there are no (non-trivial) $001$-valid formulas. We recall that $001$-valid means $\\vDash \\neg \\phi\\to [\\phi](\\neg\\phi \\land[\\phi]\\phi)$. Let $(M,s)$ be such that $M,s\\nvDash \\phi$. Then $M,s\\vDash [\\phi](\\neg \\phi \\land [\\phi]\\phi)$, i.e., $M|\\phi,s\\vDash \\neg \\phi$ and $M|\\phi,s\\models [\\phi]\\phi$. From the latter we get $M|\\phi|\\phi,s\\models \\phi$. 
However, as $\\neg \\phi\\to [\\phi](\\neg\\phi \\land[\\phi]\\phi)$ is valid it also holds on $(M|\\phi, s)$, so that $M|\\phi,s\\models\\neg \\phi\\to [\\phi](\\neg\\phi \\land[\\phi]\\phi)$, and in particular $M|\\phi,s\\models \\neg \\phi\\to [\\phi]\\neg\\phi$, so that, as $M|\\phi,s\\models \\neg\\phi$, we have $M|\\phi|\\phi,s\\models \\neg\\phi$. Contradiction.\n\nWe already observed above that if $\\sigma$ is a prefix of $\\tau$ and a formula is $\\tau$-valid then it is also $\\sigma$-valid. In the other direction this is very obviously false: for example, there are impossible lies, i.e., $00$-valid formulas (such as $\\neg\\Box p$), but we saw in the previous paragraph that there are no $001$-valid formulas. However, sometimes, if $\\sigma$ is a prefix of $\\tau$ and a formula is $\\sigma$-valid, then it is also $\\tau$-valid. \n\nGiven $\\sigma\\in\\{0,1\\}^\\omega$ and a prefix $\\tau$ of $\\sigma$, $\\sigma$ and $\\tau$ (and $\\tau$ and $\\sigma$) are {\\em completion equivalent} iff for all intermediate strings $\\tau'$ (for all prefixes $\\tau'$ of $\\sigma$ such that $\\tau$ is a prefix of $\\tau'$), a formula is $\\sigma$-valid iff it is $\\tau'$-valid. This relation is clearly an equivalence relation. The shortest sequence of a completion equivalence class is the {\\em representative} and the longest (possibly infinite) sequence of this class is the {\\em completion}.\n\n\\begin{conjecture}\nThe following completion equivalence classes of $\\sigma$-valid formulas are all non-empty and all different, and there are no other classes.\n\\[\\begin{array}{llll}\n 01^k \\ (k>0) \\quad & \\quad 10^k \\ (k>0) \\quad & \\quad 01^k0 \\ (k\\geq 0) \\quad & \\quad 10^k1 \\ (k\\geq 0)\n\\end{array}\\]\n\\end{conjecture}\n\nIf $\\sigma = 01^k0$ (for $k\\geq 0$), then $\\tau = 0(1^k0)^\\omega$ is its completion; and similarly if $\\sigma = 10^k1$ (for $k\\geq 0$), then $\\tau = 1(0^k1)^\\omega$ is its completion. 
For example, the $11$-valid formulas are also $111$-valid, or $1111$-valid, or $111\\dots$-valid: they are all completion equivalent; $111\\dots$ is the completion of this class and $11$ is its representative. On the other hand, $\\sigma$s of shape $01^k$ and $10^k$ (for $k > 0$) may have no infinite completions and be already maximal. \n\nThere are $011$-validities on ${\\mathcal{K}45}$. The true lie $p \\vee \\Box p$ is an example. \nWe recall that a true lie is by definition a $01$-validity. But formula $p \\vee \\Box p$ is also $011$-valid, because if announced when false, it results in a ${\\mathcal{K}45}$ model satisfying $\\Box p$ and such that further updates have no informative effect. Therefore, it is also $0111$-valid, and \\dots and $01^\\omega$-valid. Similarly, the {\\em believable} true lie $p \\vee \\Box p$ is an example of a $011$-validity on ${\\mathcal{KD}45}$. As it is believable, this guarantees the existence of an accessible $p$ state. That state will be preserved after the $p \\vee \\Box p$ update, and thus the resulting model is serial. More interesting would be a $011$-validity that is not a $0111$-validity. Also, it follows from Proposition \\ref{prop.nonon} that there are no $01^k$-validities on class ${\\mathcal K}$. 
But we have not investigated systematically which of the conjectured $\\sigma$-valid types are non-empty and for which classes of models.\n\n\\section{The cases 01, 11, 10, and 00} \\label{sec.2valid}\n\nIn this section we restrict ourselves to validities on class ${\\mathcal{KD}45}$ (or ${\\mathcal{S}5}$, included in ${\\mathcal{KD}45}$).\n\n\\paragraph{Successful formulas}\n\n\\[ \\models \\phi \\rightarrow [\\phi]\\phi \\]\n\nA successful update in some given model is a $11$-satisfiable formula (in that model). We then call a formula `successful' if it is always a successful update, i.e., if it is not merely $11$-satisfiable but $11$-valid. The typical example of a successful update is `no child steps forward' in the muddy children problem \\citep{mosesetal:1986,hvdetal.puzzle:2015}. Given $k$ muddy children out of $n$ children, this is a successful update for the first $k-2$ times that the father makes his request to step forward if you know whether you are muddy, and only an unsuccessful update the $(k-1)$th time, as the $k$th time that the request is made, the muddy children step forward. The formula in question, for the example of three muddy children, is $\\neg (\\Box_a m_a \\vee \\Box_a \\neg m_a) \\wedge \\neg (\\Box_b m_b \\vee \\Box_b \\neg m_b) \\wedge \\neg (\\Box_c m_c \\vee \\Box_c \\neg m_c)$ (where $m_i$ stands for ``child $i$ is muddy''). However, the archetypical unsuccessful update is $p \\wedge \\neg \\Box p$ \\citep{moore:1942,hintikka:1962}. The original 1940s setting is the incoherence of a person saying ``I went to the pictures [movies] last Tuesday, but I don't believe that I did'' \\cite[p.\\ 543]{moore:1942}. 
In fact, $p \\wedge \\neg \\Box p$ is a self-refuting formula: after announcing it truthfully, it always becomes false.\n\nThe successful formulas are the best known from the literature, and have been investigated on the class of ${\\mathcal{S}5}$ models \\citep{hvdetal.synthese:2006,hollidayetal:2010}. On the class ${\\mathcal{S}5}$, and using the (state eliminating) semantics of truthful public announcement logic, the successful formulas are those for which $\\models [\\phi]\\phi$ (and the self-refuting formulas are then those for which $\\models [\\phi]\\neg\\phi$). On ${\\mathcal{S}5}$ (but of course not on ${\\mathcal{KD}45}$), $\\phi \\rightarrow [\\phi]\\psi$ is equivalent to $[\\phi]\\psi$. A formula is successful in believed announcement logic if and only if it is successful in truthful announcement logic. This we can easily see: in truthful announcement logic, $M,s \\models [\\phi]\\phi$ iff ($M,s \\models \\phi$ implies $M|_\\phi,s \\models \\phi$), where $M|_\\phi$ denotes the submodel consisting of the states satisfying $\\phi$ (see Appendix A); whereas in believed announcement logic, $M,s \\models \\phi \\rightarrow [\\phi]\\phi$ iff ($M,s \\models \\phi$ implies $M|\\phi,s \\models \\phi$). The two coincide because, whenever the announcement formula $\\phi$ is true in a model $(M,s)$, the model $(M|_\\phi,s)$ resulting from state elimination is bisimilar to the model $(M|\\phi,s)$ resulting from arrow elimination.\n \n\nA characterization of single-agent successful formulas in ${\\mathcal{S}5}$ is known \\citep{hollidayetal:2010} (we will use it below to obtain results for true lies), but for multi-agent successful formulas there are only incidental results and no characterization is known. 
Known results are that: the positive fragment is successful \\citep{hvdetal.synthese:2006} (the positive or universal fragment of the language of public announcement logic is inductively defined by $\\phi ::= p \\mid \\neg p \\mid \\phi \\vee \\phi \\mid \\phi \\wedge \\phi \\mid \\Box_a \\phi \\mid [\\neg\\phi]\\phi$); publicly known formulas are successful (formulas $\\Box^*_A \\phi$, where $\\Box^*_A$ stands for common knowledge for the set of all agents --- however note that $\\Box^*_B \\phi$ may be unsuccessful if $B \\subset A$); and $\\neg \\Box_a p$ is successful (and non-trivially believable) \\citep{qian:2002} --- an older result, subsumed by the later characterization of \\cite{hollidayetal:2010}.\n\nA strong intuitive and technical motivation for the successful formulas in truthful announcement logic is that they describe the so-called {\\em substitution free fragment} of the logic. In general we do not have for dynamic epistemic logics that $\\models \\phi$ iff $\\models \\phi[p\/\\psi]$, for example $[p]p$ is valid but $[p \\wedge \\neg \\Box p](p \\wedge \\neg \\Box p)$ is invalid. But this {\\em uniform substitution} holds for the fragment of the language consisting of the successful formulas \\citep{hollidayetal:2013}.\n\n\\paragraph{Self-refuting formulas}\n\n\\[ \\models \\phi \\rightarrow [\\phi]\\neg\\phi \\]\n\nSimilarly to the successful formulas, a formula is self-refuting in believed announcement logic if and only if it is self-refuting in truthful announcement logic. As said, a well-known self-refuting formula (and one that is also non-trivially believable) is $p \\wedge \\neg \\Box p$. 
A syntactic characterization of single-agent self-refuting formulas in ${\\mathcal{S}5}$ is also presented in \\cite{hollidayetal:2010}, but a more general multi-agent characterization is also not known to us (researchers have been looking for this in vain for some considerable time).\n\n\\paragraph{Impossible lies}\n\n\\[ \\models \\neg\\phi \\rightarrow [\\phi]\\neg\\phi \\]\n\nClearly $\\top$ is an impossible lie, as this makes the antecedent $\\neg\\phi$ of the implication false, but this is the trivial case. Slightly more interesting impossible lies are the booleans. (And all these are non-trivially believable.) For example, $p$ is an impossible lie because $\\models \\neg p \\rightarrow [p] \\neg p$. This is obvious, as announcements do not change the value of propositional variables.\n\nAnother impossible lie is $\\neg \\Box p$. It is trivial: $\\models \\Box p \\rightarrow [\\neg \\Box p] \\Box p$ holds because in any model satisfying $\\Box p$ the accessibility relation of the agent is empty after the $\\neg \\Box p$ update. \n\nThe formula $\\neg \\Box p$ is also a successful formula. No characterization results are known for impossible lies. \n\n\\paragraph{True lies}\n\n\\[ \\models \\neg\\phi \\rightarrow [\\phi]\\phi \\]\n\nAbove we have already seen examples of true lies. The formula $\\top$ (or any other valid formula) is a trivial true lie. Examples of single-agent true lies are: $\\Box p$, $p \\vee \\Box p$ (which is non-trivially believable). \n\nUnsatisfiable formulas are not true lies, as $[\\bot]\\bot$ is equivalent to $\\bot$ (in believed announcement logic, not in truthful announcement logic), so that $\\neg \\bot \\rightarrow \\bot$ is equivalent to $\\bot$. \n\nSection \\ref{sec.four} proves a syntactical characterization of single-agent believable true lies in class ${\\mathcal{KD}45}$ (Proposition \\ref{prop.char}). 
\n\n\\section{An unstable formula} \\label{sec.three}\n\nIn this section we investigate some models and formulas where the iteration of the same update never stabilizes and continues to transform the model, forever and ever. First comes an example with a ${\\mathcal K}$ model and a single agent, and an alternating $010101...$-satisfiable update (Section \\ref{sec.threeone}). Then comes another example generalizing this to arbitrary boolean functions $\\sigma$ and $\\sigma$-satisfiability (Section \\ref{sec.threearb}). Following that comes a multi-agent example for ${\\mathcal{KD}45}$ agents (with consistent beliefs) (Section \\ref{sec.threetwo}).\n\n\\subsection{Example of an unstable formula} \\label{sec.threeone}\n\n{\\em We demonstrate that the formula $\\neg \\Box \\bot \\wedge ((\\Diamond \\Box \\bot \\wedge \\Diamond \\neg\\Box \\bot) \\rightarrow \\Diamond (p \\wedge \\Box \\bot))$ is unstable for the model defined below and for the boolean function $\\sigma \\in \\{0,1\\}^\\omega$ (i.e.\\ $\\sigma: \\Nat^+ \\rightarrow \\{0,1\\}$) such that $\\sigma_n = 1$ for $n$ even and $\\sigma_n = 0$ for $n$ odd.}\n\n\\bigskip\n\nConsider a model $M$ in Figure \\ref{fig.unstable} on the left consisting of a root $s$ with branches of length $n$ for each positive natural number $n$. We also consider a single propositional variable $p$ that is false in the leaves, and from there on towards the root alternatingly true and false. We finally take the value of $p$ in the root to be true, but in fact that does not matter. (In the figure, nodes where $p$ is true are denoted $\\circ$ and nodes where $p$ is false are denoted $\\bullet$.) \n\nWe now consider the update of $(M,s)$ with the formula $\\phi = \\neg \\Box \\bot \\wedge ((\\Diamond \\Box \\bot \\wedge \\Diamond \\neg\\Box \\bot) \\rightarrow \\Diamond (p \\wedge \\Box \\bot))$. 
The formula $\\phi$ describes, in other words: ``I am not a leaf node and if I am the root (i.e., if I am the unique node that is branching) then there is a branch of length 1 wherein $p$ is true in the leaf of that branch.'' \n\nWe take the arrow elimination update, that is, arrows pointing to states where the formula is false are deleted, and all other arrows are preserved. The result is the middle structure in Figure \\ref{fig.unstable}. The underlying frames of that model and the left model in the figure are isomorphic. This we can see by rotating the middle structure counterclockwise by 45 degrees and removing the isolated (unreachable) points. That result is pictured on the right in the figure.\n\nIf we now update again with $\\phi$, the original model reappears (in the strong sense that the root-generated submodel is bisimilar --- this is the ${\\raisebox{.3ex}[0mm][0mm]{\\ensuremath{\\medspace \\underline{\\! \\leftrightarrow\\!}\\medspace}}}$ notation --- and even isomorphic): \\[ \\begin{array}{l} M,s \\models\\neg\\phi \\\\ M|\\phi,s \\models\\phi \\\\ M|\\phi|\\phi,s \\models\\neg\\phi \\hspace{1cm} \\hfill \\text{such that } (M|\\phi|\\phi,s) {\\raisebox{.3ex}[0mm][0mm]{\\ensuremath{\\medspace \\underline{\\! \\leftrightarrow\\!}\\medspace}}} (M,s) \\end{array} \\]\nand in general we have in this case (we recall that $M|\\phi^0 = M$ and $M|\\phi^{n+1} = (M|\\phi^n)|\\phi$)\n \\[ \\begin{array}{l} M|\\phi^n,s \\models\\phi \\hspace{1cm} \\hfill \\text{ for $n\\in\\mathbb N$ odd} \\\\\nM|\\phi^n,s \\models\\neg\\phi \\hfill \\text{ for $n\\in\\mathbb N$ even (including $n=0$)} \\\\\n(M|\\phi^n,s) {\\raisebox{.3ex}[0mm][0mm]{\\ensuremath{\\medspace \\underline{\\! 
\\leftrightarrow\\!}\\medspace}}} (M|\\phi^{n+2},s) \\hspace{5cm} \\hfill \\text{for all } n \\in \\Nat \\\\\n\\end{array} \\]\n\n\\begin{figure}\n\\scalebox{.88}{\n\\begin{tikzpicture}[->,thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\bullet$};\n\\node (21) at (-.85,.85) {$\\circ$};\n\\node (22) at (-1.7,1.7) {$\\bullet$};\n\\node (31) at (0,1) {$\\bullet$};\n\\node (32) at (0,2) {$\\circ$};\n\\node (33) at (0,3) {$\\bullet$};\n\\node (41) at (.85,.85) {$\\circ$};\n\\node (42) at (1.7,1.7) {$\\bullet$};\n\\node (43) at (2.55,2.55) {$\\circ$};\n\\node (44) at (3.4,3.4) {$\\bullet$};\n\\node (0a) at (0,-.3) {\\color{white} $\\neg\\phi$};\n\\node (21a) at (-1.15,.85) {$\\phi$};\n\\node (31a) at (0.3,1) {$\\phi$};\n\\node (32a) at (0.3,2) {$\\phi$};\n\\node (41a) at (1.15,.85) {$\\phi$};\n\\node (42a) at (2,1.7) {$\\phi$};\n\\node (43a) at (2.85,2.55) {$\\phi$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[->] (0) to (21);\n\\draw[->] (21) to (22);\n\\draw[->] (0) to (31);\n\\draw[->] (31) to (32);\n\\draw[->] (32) to (33);\n\\draw[->] (0) to (41);\n\\draw[->] (41) to (42);\n\\draw[->] (42) to (43);\n\\draw[->] (43) to (44);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n{$\\stackrel {\\phi!} \\Rightarrow$}\n\\begin{tikzpicture}[->,thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\bullet$};\n\\node (21) at (-.85,.85) {$\\circ$};\n\\node (22) at (-1.7,1.7) {$\\bullet$};\n\\node (31) at (0,1) {$\\bullet$};\n\\node (32) at (0,2) {$\\circ$};\n\\node (33) at (0,3) {$\\bullet$};\n\\node (41) at (.85,.85) {$\\circ$};\n\\node (42) at (1.7,1.7) {$\\bullet$};\n\\node (43) at (2.55,2.55) {$\\circ$};\n\\node (44) at (3.4,3.4) {$\\bullet$};\n\\node (0a) at (0,-.3) {$\\phi$};\n\\node (31a) at (0.3,1) {$\\phi$};\n\\node (41a) at (1.15,.85) {$\\phi$};\n\\node (42a) at (2,1.7) {$\\phi$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (21);\n\\draw[->] (0) to (31);\n\\draw[->] (31) to (32);\n\\draw[->] (0) to 
(41);\n\\draw[->] (41) to (42);\n\\draw[->] (42) to (43);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n{${\\raisebox{.3ex}[0mm][0mm]{\\ensuremath{\\medspace \\underline{\\! \\leftrightarrow\\!}\\medspace}}}$}\n\\begin{tikzpicture}[->,thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\circ$};\n\\node (21) at (-.85,.85) {$\\bullet$};\n\\node (22) at (-1.7,1.7) {$\\circ$};\n\\node (31) at (0,1) {$\\circ$};\n\\node (32) at (0,2) {$\\bullet$};\n\\node (33) at (0,3) {$\\circ$};\n\\node (41) at (.85,.85) {$\\bullet$};\n\\node (42) at (1.7,1.7) {$\\circ$};\n\\node (43) at (2.55,2.55) {$\\bullet$};\n\\node (44) at (3.4,3.4) {$\\circ$};\n\\node (0a) at (0,-.3) {$\\phi$};\n\\node (21a) at (-1.15,.85) {$\\phi$};\n\\node (31a) at (0.3,1) {$\\phi$};\n\\node (32a) at (0.3,2) {$\\phi$};\n\\node (41a) at (1.15,.85) {$\\phi$};\n\\node (42a) at (2,1.7) {$\\phi$};\n\\node (43a) at (2.85,2.55) {$\\phi$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[->] (0) to (21);\n\\draw[->] (21) to (22);\n\\draw[->] (0) to (31);\n\\draw[->] (31) to (32);\n\\draw[->] (32) to (33);\n\\draw[->] (0) to (41);\n\\draw[->] (41) to (42);\n\\draw[->] (42) to (43);\n\\draw[->] (43) to (44);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n}\n\\caption{Example unstable formula}\n\\label{fig.unstable}\n\\end{figure}\n\n\\subsection{Arbitrary boolean function} \\label{sec.threearb}\n\nLet now any boolean function $\\sigma \\in \\{0,1\\}^\\omega$ be given. Consider again the frame of the model $M$ of Subsection \\ref{sec.threeone} and the same formula $\\phi := \\neg \\Box \\bot \\wedge ((\\Diamond \\Box \\bot \\wedge \\Diamond \\neg\\Box \\bot) \\rightarrow \\Diamond (p \\wedge \\Box \\bot))$. The value in the root of the model is irrelevant. In that model, $\\circ$ no longer means that $p$ is true. Instead, we decorate the model with values for atom $p$ according to the string $\\sigma$, as below. 
This ensures that for all $n \\in \\mathbb N$, $M|\\phi^n,s \\models \\sigma_{n+1}(\\phi)$. \n\nFor example, let $\\sigma = 011100011\\dots$. Then $\\sigma_1 = 0$ (the first digit), $\\sigma_2 = 1$, etc. Therefore $p$ is false at the leaf of the branch of length 1 (false means value 0), and in the branch of length 2, $p$ is true halfway ($\\sigma_2$) and again false at the leaf ($\\sigma_1$), and so on. For $n=0$, we should get $M|\\phi^0,s \\models \\sigma_1(\\phi)$. As $M|\\phi^0 = M$ and $\\sigma_1(\\phi) = \\neg\\phi$, we get $M,s \\models \\neg\\phi$. And this is indeed the case, because the subformula $\\Diamond (p \\wedge \\Box \\bot)$ of $\\phi$ is false in the root, because $p$ is false ($\\sigma_1 = 0$) at the leaf of the branch of length 1.\n\nAnnouncement of $\\phi$ causes all leaves to be eliminated from the model, as before, with the result that all leaves are now decorated with value $\\sigma_2 = 1$ for $p$. So we should have $M|\\phi,s \\models \\phi$, i.e., $M|\\phi^1,s \\models \\sigma_2(\\phi)$. And so it is, because $M|\\phi,s \\models \\Diamond (p \\wedge \\Box \\bot)$. 
And so on.\n\n\\begin{center}\n\\scalebox{.9}{\n\\begin{tikzpicture}[->,thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\circ$};\n\\node (21) at (-.85,.85) {$\\circ$};\n\\node (22) at (-1.7,1.7) {$\\circ$};\n\\node (31) at (0,1) {$\\circ$};\n\\node (32) at (0,2) {$\\circ$};\n\\node (33) at (0,3) {$\\circ$};\n\\node (41) at (.85,.85) {$\\circ$};\n\\node (42) at (1.7,1.7) {$\\circ$};\n\\node (43) at (2.55,2.55) {$\\circ$};\n\\node (44) at (3.4,3.4) {$\\circ$};\n\\node (11a) at (-1,-.3) {$\\sigma_1$};\n\\node (21a) at (-1.15,.85) {$\\sigma_2$};\n\\node (22a) at (-2,1.7) {$\\sigma_1$};\n\\node (31a) at (0.3,1) {$\\sigma_3$};\n\\node (32a) at (0.3,2) {$\\sigma_2$};\n\\node (33a) at (0.3,3) {$\\sigma_1$};\n\\node (41a) at (1.15,.85) {$\\sigma_4$};\n\\node (42a) at (2,1.7) {$\\sigma_3$};\n\\node (43a) at (2.85,2.55) {$\\sigma_2$};\n\\node (44a) at (3.7,3.4) {$\\sigma_1$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[->] (0) to (21);\n\\draw[->] (21) to (22);\n\\draw[->] (0) to (31);\n\\draw[->] (31) to (32);\n\\draw[->] (32) to (33);\n\\draw[->] (0) to (41);\n\\draw[->] (41) to (42);\n\\draw[->] (42) to (43);\n\\draw[->] (43) to (44);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}$\\stackrel {\\phi} \\Rightarrow$\n\\begin{tikzpicture}[->,thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\circ$};\n\\node (21) at (-.85,.85) {$\\circ$};\n\\node (22) at (-1.7,1.7) {$\\circ$};\n\\node (31) at (0,1) {$\\circ$};\n\\node (32) at (0,2) {$\\circ$};\n\\node (33) at (0,3) {$\\circ$};\n\\node (41) at (.85,.85) {$\\circ$};\n\\node (42) at (1.7,1.7) {$\\circ$};\n\\node (43) at (2.55,2.55) {$\\circ$};\n\\node (44) at (3.4,3.4) {$\\circ$};\n\\node (11a) at (-1,-.3) {$\\sigma_2$};\n\\node (21a) at (-1.15,.85) {$\\sigma_3$};\n\\node (22a) at (-2,1.7) {$\\sigma_2$};\n\\node (31a) at (0.3,1) {$\\sigma_4$};\n\\node (32a) at (0.3,2) {$\\sigma_3$};\n\\node (33a) at (0.3,3) {$\\sigma_2$};\n\\node (41a) at (1.15,.85) 
{$\\sigma_5$};\n\\node (42a) at (2,1.7) {$\\sigma_4$};\n\\node (43a) at (2.85,2.55) {$\\sigma_3$};\n\\node (44a) at (3.7,3.4) {$\\sigma_2$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[->] (0) to (21);\n\\draw[->] (21) to (22);\n\\draw[->] (0) to (31);\n\\draw[->] (31) to (32);\n\\draw[->] (32) to (33);\n\\draw[->] (0) to (41);\n\\draw[->] (41) to (42);\n\\draw[->] (42) to (43);\n\\draw[->] (43) to (44);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n$\\stackrel {\\phi} \\Rightarrow \\hspace{1cm} \\dots$\n}\n\\end{center}\n\n\\noindent \nThis proves Proposition \\ref{prop.a} that there are $\\sigma$-satisfiable formulas on class ${\\mathcal K}$ for any string $\\sigma$.\n\n\\subsection{An unstable formula for agents with consistent beliefs} \\label{sec.unstablecons} \\label{sec.threetwo}\n\nA common way to transform a model for a directed single-agent (unimodal) setting exhibiting a certain desirable property into a model for a symmetric multi-agent setting with the same property, is to replace an arrow by a pair of (undirected) links for different agents, i.e., a pair $(x,y) \\in R$ by two pairs $(x,z) \\in R_a$ and $(z,y) \\in R_b$. In a multi-agent ${\\mathcal{S}5}$ setting, this means chaining equivalence classes of size two for different agents $a$ and $b$. Instead of arbitrarily long finite chains, we then build arbitrarily long alternating and interlocking $a\/b$ chains. The technique has been used to great effect for undecidability arguments for ${\\mathcal{S}5}$ logics \\citep{frenchetal:2008} and also to great effect for expressivity arguments for such logics \\citep{hvdetal.del:2007,kooi.jancl:2007}. This chaining procedure can be extended in a natural way to ${\\mathcal{KD}45}$ models (for consistent belief), namely when (at least) the root of the model is an unreachable state wherein incorrect belief is possible, but where (almost) all remaining states constitute equivalence classes (wherein belief is correct). 
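The arrow-splitting step of this chaining procedure can be made concrete in a few lines. The following sketch (in Python; the function names and the tagging of fresh midpoints are our own illustration, not taken from the cited papers) replaces every arrow $(x,y) \\in R$ by an $a$-link into a fresh midpoint followed by a $b$-link out of it, and also shows the symmetric closure used when links are read as two-element equivalence classes.

```python
def chain_arrows(R):
    """Split every arrow (x, y) of a single-agent relation R into an
    a-step to a fresh midpoint z and a b-step from z to y."""
    Ra, Rb = set(), set()
    for i, (x, y) in enumerate(sorted(R, key=repr)):
        z = ("mid", i)          # fresh intermediate state for this arrow
        Ra.add((x, z))
        Rb.add((z, y))
    return Ra, Rb

def symmetric_closure(R):
    """For the equivalence-class reading, each link becomes an
    undirected pair, i.e. a two-element cluster (loops omitted)."""
    return R | {(y, x) for (x, y) in R}

# A chain 0 -> 1 -> 2 becomes interlocking a/b links:
Ra, Rb = chain_arrows({(0, 1), (1, 2)})
```

Iterating this on longer chains yields exactly the arbitrarily long alternating and interlocking $a$\/$b$ chains described above.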
We apply this procedure to the model $M$ of Subsection \\ref{sec.threeone}, turning that asymmetric single-agent model into a symmetric multi-agent model, in order to obtain an example of an unstable formula in a multi-agent ${\\mathcal{KD}45}$ setting. \n\nStarting with the model $M$ of that section, we thus get the following model $N$. The accessibility relation is solid for agent $a$ and dashed for agent $b$. Intuitively, to get $N$ from $M$, every node is replaced by two nodes with the same valuation that are indistinguishable for agent $b$, whereas, one might say, the previous directed arrows are replaced by indistinguishability for agent $a$. The root is treated specially: all pairs in the accessibility relation for $a$ starting in the root are directed, and we add another node to the model that is accessible from the root for agent $b$ only (in the figure: the node below the root), and from which all nodes accessible from the root for agent $a$ are also $a$-accessible (for visual clarity, we have only drawn one of those arrows in the figure). This is a ${\\mathcal{KD}45}$ model. We assume the usual visual conventions for such models: all directed arrows point to {\\em clusters} of indistinguishable nodes for that agent (see Appendix B). 
However, merely in order to emphasize the similarity with the model $M$, from the root of the model we have many outgoing arrows (whereas according to the visual convention one would have been enough).\n\n\\bigskip\n\n\\scalebox{0.8}{\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\bullet$};\n\\node (12) at (-2,0) {$\\bullet$};\n\\node (21) at (-.85,.85) {$\\circ$};\n\\node (22) at (-1.7,1.7) {$\\circ$};\n\\node (23) at (-2.55,2.55) {$\\bullet$};\n\\node (24) at (-3.4,3.4) {$\\bullet$};\n\\node (31) at (0,1) {$\\bullet$};\n\\node (32) at (0,2) {$\\bullet$};\n\\node (33) at (0,3) {$\\circ$};\n\\node (34) at (0,4) {$\\circ$};\n\\node (35) at (0,5) {$\\bullet$};\n\\node (36) at (0,6) {$\\bullet$};\n\\node (41) at (.85,.85) {$\\circ$};\n\\node (42) at (1.7,1.7) {$\\circ$};\n\\node (43) at (2.55,2.55) {$\\bullet$};\n\\node (44) at (3.4,3.4) {$\\bullet$};\n\\node (45) at (4.25,4.25) {$\\circ$};\n\\node (46) at (5.1,5.1) {$\\circ$};\n\\node (47) at (5.95,5.95) {$\\bullet$};\n\\node (48) at (6.8,6.8) {$\\bullet$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[dashed,-] (11) to (12);\n\\draw[->] (0) to (21);\n\\draw[dashed,-] (21) to (22);\n\\draw[-] (22) to (23);\n\\draw[dashed,-] (23) to (24);\n\\draw[->] (0) to (31);\n\\draw[dashed,-] (31) to (32);\n\\draw[-] (32) to (33);\n\\draw[dashed,-] (33) to (34);\n\\draw[-] (34) to (35);\n\\draw[dashed,-] (35) to (36);\n\\draw[->] (0) to (41);\n\\draw[dashed,-] (41) to (42);\n\\draw[-] (42) to (43);\n\\draw[dashed,-] (43) to (44);\n\\draw[-] (44) to (45);\n\\draw[dashed,-] (45) to (46);\n\\draw[-] (46) to (47);\n\\draw[dashed,-] (47) to (48);\n\\draw[-] (11) to (21);\n\\draw[-] (21) to (31);\n\\draw[-] (31) to (41);\n\\node (x) at (1,0.05) {$ \\ $};\n\\draw[dotted,-] (41) to (x);\n\\node (newroot) at (0,-1) {$\\bullet$};\n\\draw[dashed,->] (0) to (newroot);\n\\draw[->] (newroot) to (11);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n}\n\n\\noindent Instead of 
\\[ \\phi = \\neg \\Box \\bot \\wedge ((\\Diamond \\Box \\bot \\wedge \\Diamond \\neg\\Box \\bot) \\rightarrow \\Diamond (p \\wedge \\Box \\bot)) \\] we then get a more or less corresponding multi-agent formula \\[ \\psi = \\neg\\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p) \\wedge ((p\\wedge\\Box_b\\neg p) \\rightarrow \\Diamond_a\\Diamond_b \\Box_a p)\\] To get the latter from the former, first replace each $\\Diamond$ by $\\Diamond_a\\Diamond_b$. We recall that in the model, we replaced each directed unlabeled arrow by an $a$-step plus a $b$-step. Then, replace $\\Box \\bot$ by $\\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p)$. We recall that $\\Box \\bot$ defined a leaf node in $M$. As ${\\mathcal{KD}45}$ models are serial, we now observe that the leaves of model $N$ are the unique nodes wherein $a$ knows whether $p$: $\\Box_a p \\vee \\Box_a \\neg p$, so that $\\Diamond_b (\\Box_a p \\vee \\Box_a \\neg p)$ is only true in the final two nodes of each branch. \n\nThe extra node is needed to allow for a formula that distinguishes the root node from the other nodes: the root of $N$ is the unique node wherein agent $b$ has incorrect belief: $p \\wedge \\Box_b \\neg p$. In all other nodes, agent $b$ (correctly) knows the value of $p$. (We did not know how to distinguish the root node from the other nodes without introducing such an additional node.)\n\nThe nodes $a$-accessible from the root form an infinite $a$-equivalence class. The $a$ arrows leading from the extra node below the root to that class are there so that this node is not a leaf.\n \nWith the `translation' so far, the final part $\\Diamond (p \\wedge \\Box \\bot)$ of the formula $\\phi$ becomes $\\Diamond_a\\Diamond_b(p \\wedge \\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p))$, which is ${\\mathcal{KD}45}$-equivalent to $\\Diamond_a\\Diamond_b\\Box_a p$, as found in the formula $\\psi$. 
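Equivalences like this last one are easy to spot-check mechanically on finite models. Below is a minimal multi-agent Kripke model checker (a sketch in Python; the tuple encoding of formulas, the name `holds`, and the three-state toy model are our own assumptions, not from the text). On the small serial, transitive and euclidean model given, both $\\Diamond_a\\Diamond_b(p \\wedge \\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p))$ and $\\Diamond_a\\Diamond_b\\Box_a p$ come out true at the root; this illustrates, but of course does not prove, the ${\\mathcal{KD}45}$ equivalence.

```python
# Minimal multi-agent Kripke model checker (sketch; the encoding is ours).
# A formula is a nested tuple:
#   ('p',)  atom     ('not', f)   ('and', f, g)   ('or', f, g)
#   ('box', ag, f)   ('dia', ag, f)
# A model is (R, V): R maps an agent to a set of (x, y) arrow pairs,
# V is the set of states where the atom p holds.

def holds(model, w, f):
    R, V = model
    op = f[0]
    if op == 'p':
        return w in V
    if op == 'not':
        return not holds(model, w, f[1])
    if op == 'and':
        return holds(model, w, f[1]) and holds(model, w, f[2])
    if op == 'or':
        return holds(model, w, f[1]) or holds(model, w, f[2])
    if op in ('box', 'dia'):
        ag, g = f[1], f[2]
        successors = [v for (x, v) in R[ag] if x == w]
        quant = all if op == 'box' else any
        return quant(holds(model, v, g) for v in successors)
    raise ValueError(f'unknown operator {op!r}')

# A tiny serial, transitive, euclidean (KD45) model on states 0, 1, 2:
Ra = {(0, 1), (1, 1), (2, 2)}
Rb = {(0, 0), (1, 2), (2, 2)}
model = ({'a': Ra, 'b': Rb}, {2})       # p holds only in state 2

p = ('p',)
knows_whether_p = ('or', ('box', 'a', p), ('box', 'a', ('not', p)))
lhs = ('dia', 'a', ('dia', 'b', ('and', p, ('dia', 'b', knows_whether_p))))
rhs = ('dia', 'a', ('dia', 'b', ('box', 'a', p)))

print(holds(model, 0, lhs), holds(model, 0, rhs))   # -> True True
```

The same checker can be pointed at (finite truncations of) the model $N$ to test claims such as where $\\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p)$ holds along a branch.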
\n\nIn plain English, $\\psi$ describes that $(i)$ $b$ does not consider it possible that $a$ knows whether $p$, and $(ii)$ that in the node where $b$'s belief is incorrect (the root node), $a$ considers it possible that $b$ considers it possible for $a$ to know {\\em that} (not whether!) $p$. Condition $(i)$ is always false in the leaves and in the nodes just below the leaves, whereas condition $(ii)$ is only true in the root if there is a path of length 2 to a leaf node satisfying $p$. If we now update model $N$ with $\\psi$ we get the model $N|\\psi$ --- wherein $\\psi$ is now true in the root, as the length-2 path now leads to a $p$ node:\n\n\\bigskip\n\n\\scalebox{.8}{\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\circ$};\n\\node (12) at (-2,0) {$\\circ$};\n\\node (21) at (-.85,.85) {$\\bullet$};\n\\node (22) at (-1.7,1.7) {$\\bullet$};\n\\node (23) at (-2.55,2.55) {$\\circ$};\n\\node (24) at (-3.4,3.4) {$\\circ$};\n\\node (31) at (0,1) {$\\circ$};\n\\node (32) at (0,2) {$\\circ$};\n\\node (33) at (0,3) {$\\bullet$};\n\\node (34) at (0,4) {$\\bullet$};\n\\node (35) at (0,5) {$\\circ$};\n\\node (36) at (0,6) {$\\circ$};\n\\node (41) at (.85,.85) {$\\bullet$};\n\\node (42) at (1.7,1.7) {$\\bullet$};\n\\node (43) at (2.55,2.55) {$\\circ$};\n\\node (44) at (3.4,3.4) {$\\circ$};\n\\node (45) at (4.25,4.25) {$\\bullet$};\n\\node (46) at (5.1,5.1) {$\\bullet$};\n\\node (47) at (5.95,5.95) {$\\circ$};\n\\node (48) at (6.8,6.8) {$\\circ$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[dashed,-] (11) to (12);\n\\draw[->] (0) to (21);\n\\draw[dashed,-] (21) to (22);\n\\draw[-] (22) to (23);\n\\draw[dashed,-] (23) to (24);\n\\draw[->] (0) to (31);\n\\draw[dashed,-] (31) to (32);\n\\draw[-] (32) to (33);\n\\draw[dashed,-] (33) to (34);\n\\draw[-] (34) to (35);\n\\draw[dashed,-] (35) to (36);\n\\draw[->] (0) to (41);\n\\draw[dashed,-] (41) to (42);\n\\draw[-] (42) to (43);\n\\draw[dashed,-] (43) to 
(44);\n\\draw[-] (44) to (45);\n\\draw[dashed,-] (45) to (46);\n\\draw[-] (46) to (47);\n\\draw[dashed,-] (47) to (48);\n\\draw[dotted,-] (0) to (dots);\n\\draw[-] (11) to (21);\n\\draw[-] (21) to (31);\n\\draw[-] (31) to (41);\n\\node (x) at (1,0.05) {$ \\ $};\n\\draw[dotted,-] (41) to (x);\n\\node (newroot) at (0,-1) {$\\bullet$};\n\\draw[dashed,->] (0) to (newroot);\n\\draw[->] (newroot) to (11);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n}\n\n\\noindent The next iteration restores $N$: $(N|\\psi|\\psi,s) {\\raisebox{.3ex}[0mm][0mm]{\\ensuremath{\\medspace \\underline{\\! \\leftrightarrow\\!}\\medspace}}} (N,s)$. Again, we get:\n\\[ \\begin{array}{l} N|\\psi^n,s \\models\\psi \\hspace{1cm} \\hfill \\text{ for $n\\in\\mathbb N$ odd} \\\\\nN|\\psi^n,s \\models\\neg\\psi \\hfill \\text{ for $n\\in\\mathbb N$ even or $n=0$} \\\\\n(N|\\psi^n,s) {\\raisebox{.3ex}[0mm][0mm]{\\ensuremath{\\medspace \\underline{\\! \\leftrightarrow\\!}\\medspace}}} (N|\\psi^{n+2},s) \\hspace{5cm} \\hfill \\text{for all } n \\in \\Nat \\\\\n\\end{array} \\]\n\\weg{\n\n\\subsection{A communicative scenario for the multi-agent setting} \\label{sec.threethree}\n\n{\\em The infinitely branching model $N$ of the previous subsection seems to fall out of the sky. In this section we provide a story and a communicative scenario involving truthtelling and lying producing a very similar model with the same iterated update behaviour, as a variation on a well-known epistemic riddle. This merely shows `the power of updates': what we will do is nothing but customize an epistemic model based on the semantic operations of public announcements and agent announcements \\citep{hvd.lying:2014}. 
It shows by example that {\\em any} multi-agent ${\\mathcal{KD}45}$ structure can be justified and backed up by communicative scenarios involving truthtelling and lying.}\n\n\n\\bigskip\n\nConsider two agents Anne and Bill, and a quizmaster, Rineke, performing a number of actions and informing Anne and Bill about such actions. Anne and Bill each have a sticker with a natural number on their forehead. These numbers are consecutive. Anne and Bill each have to discover what their own number is. (So, we have the standard setting of the consecutive number riddle \\citep{littlewood:1953,hvdetal.puzzle:2015}.) \n\nWe further assume that, in actuality, Anne has 0 and Bill has 1.\n\nSo far, so good. However, from here on things proceed differently than usual... For a start, Anne's sticker has been attached to her forehead with superglue. She does not know this yet, but it will become relevant later in the story.\n\nGiven that Anne has 0 and Bill has 1, Anne and Bill have common knowledge that Anne's number is even and that Bill's number is odd, so without loss of generality we may assume that in our modelling. Some of the announcements below will be lies. We assume that Anne and Bill are {\\em skeptical agents} (in the technical sense of \\cite{hvd.lying:2014}) who believe lies if they consider it possible that the lie is true, but who otherwise do not believe the lie; i.e., they do not change their beliefs if they already believe the opposite of the lie.\n\n\\begin{enumerate}\n\\item The quizmaster tells Anne and Bill that she will inform them what the maximum even number is that Anne may have, except when Anne has 2 and Bill has 1, in which case she will only inform Bill, and when Anne has 0 and Bill has 1, in which case she informs neither Anne nor Bill. 
She then takes two envelopes, puts a sheet of paper into each envelope with the promised information (for example, two sheets with the number 6; or an empty sheet intended for Anne and a sheet with 0 intended for Bill), seals the envelopes, and hands the envelopes to Anne and to Bill.\n\n{\\em But this is not all.}\n\n\\item The quizmaster tells Anne and Bill that if the aforementioned maximum for Anne's number is a multiple of 4 (including 0), then the backside of Anne's sticker has superglue and she will need surgery to have it removed later, whereas otherwise (for example, when Anne has 2) it is, as usual, easily detachable. (Of course she really has superglue on the backside, as this is more fun.)\n\n\\item If Bill has the number 1, he says to Anne: ``You do not have the number 0.'' (This is a lie, because Anne has the number 0.)\n\n\\item The quizmaster tells Anne and Bill that Anne doesn't have superglue on her sticker. (This is a lie.)\n\n\\end{enumerate}\n\nThe model $Q$ (with root $\\mathbf{01}$) resulting from these announcements is as follows, below on the left. (The Appendix ``Customizing Consecutive Numbers'' on page \\pageref{appendix.a} provides details.)\n\nNow consider the proposition $\\eta$: \\begin{quote} {\\em ``Bill considers it possible that (Anne believes that she knows her number and that Anne believes that she knows whether she has superglue on her sticker), and if Bill incorrectly believes that Anne has no superglue on her sticker, then Anne considers it possible that Bill considers it possible that (Anne believes that she knows her number and that Anne believes {\\bf that} she has superglue on her sticker).'' } \\end{quote}\nAs before, this proposition is only true in the final two nodes of each branch and in the root $\\mathbf{01}$ in case the $01\\text{---}21\\text{---}23$ branch is one wherein Anne has superglue. 
(Initally, not.)\n\nThe updated model $Q|\\eta$ is therefore as follows, below on the right (and the update $Q|\\eta|\\eta$ of that is again isomorphic to the initial model $Q$, etc.).\n\n\\bigskip\n\n\\noindent\n\\scalebox{.7}{\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\mathbf{01}$};\n\\node (11) at (-1,0) {$21$};\n\\node (12) at (-2,0) {$23$};\n\\node (21) at (-.85,.85) {$\\mathbf{21}$};\n\\node (22) at (-1.7,1.7) {$\\mathbf{23}$};\n\\node (23) at (-2.55,2.55) {$\\mathbf{43}$};\n\\node (24) at (-3.4,3.4) {$\\mathbf{45}$};\n\\node (31) at (0,1) {$21$};\n\\node (32) at (0,2) {$23$};\n\\node (33) at (0,3) {$43$};\n\\node (34) at (0,4) {$45$};\n\\node (35) at (0,5) {$65$};\n\\node (36) at (0,6) {$67$};\n\\node (41) at (.85,.85) {$\\mathbf{21}$};\n\\node (42) at (1.7,1.7) {$\\mathbf{23}$};\n\\node (43) at (2.55,2.55) {$\\mathbf{43}$};\n\\node (44) at (3.4,3.4) {$\\mathbf{45}$};\n\\node (45) at (4.25,4.25) {$\\mathbf{65}$};\n\\node (46) at (5.1,5.1) {$\\mathbf{67}$};\n\\node (47) at (5.95,5.95) {$\\mathbf{87}$};\n\\node (48) at (6.8,6.8) {$\\mathbf{89}$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[dashed,-] (11) to (12);\n\\draw[->] (0) to (21);\n\\draw[dashed,-] (21) to (22);\n\\draw[-] (22) to (23);\n\\draw[dashed,-] (23) to (24);\n\\draw[->] (0) to (31);\n\\draw[dashed,-] (31) to (32);\n\\draw[-] (32) to (33);\n\\draw[dashed,-] (33) to (34);\n\\draw[-] (34) to (35);\n\\draw[dashed,-] (35) to (36);\n\\draw[->] (0) to (41);\n\\draw[dashed,-] (41) to (42);\n\\draw[-] (42) to (43);\n\\draw[dashed,-] (43) to (44);\n\\draw[-] (44) to (45);\n\\draw[dashed,-] (45) to (46);\n\\draw[-] (46) to (47);\n\\draw[dashed,-] (47) to (48);\n\\draw[-] (11) to (21);\n\\draw[-] (21) to (31);\n\\draw[-] (31) to (41);\n\\node (x) at (1,0.05) {$ \\ $};\n\\draw[dotted,-] (41) to (x);\n\\node (newroot) at (0,-1) {$01$};\n\\draw[dashed,->] (0) to (newroot);\n\\draw[->] (newroot) to (11);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n}\n\\ \\ \\ 
\n\\scalebox{.7}{\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\mathbf{01}$};\n\\node (11) at (-1,0) {$\\mathbf{21}$};\n\\node (12) at (-2,0) {$\\mathbf{23}$};\n\\node (21) at (-.85,.85) {$21$};\n\\node (22) at (-1.7,1.7) {$23$};\n\\node (23) at (-2.55,2.55) {$43$};\n\\node (24) at (-3.4,3.4) {$45$};\n\\node (31) at (0,1) {$\\mathbf{21}$};\n\\node (32) at (0,2) {$\\mathbf{23}$};\n\\node (33) at (0,3) {$\\mathbf{43}$};\n\\node (34) at (0,4) {$\\mathbf{45}$};\n\\node (35) at (0,5) {$\\mathbf{65}$};\n\\node (36) at (0,6) {$\\mathbf{67}$};\n\\node (41) at (.85,.85) {$21$};\n\\node (42) at (1.7,1.7) {$23$};\n\\node (43) at (2.55,2.55) {$43$};\n\\node (44) at (3.4,3.4) {$45$};\n\\node (45) at (4.25,4.25) {$65$};\n\\node (46) at (5.1,5.1) {$67$};\n\\node (47) at (5.95,5.95) {$87$};\n\\node (48) at (6.8,6.8) {$89$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[->] (0) to (11);\n\\draw[dashed,-] (11) to (12);\n\\draw[->] (0) to (21);\n\\draw[dashed,-] (21) to (22);\n\\draw[-] (22) to (23);\n\\draw[dashed,-] (23) to (24);\n\\draw[->] (0) to (31);\n\\draw[dashed,-] (31) to (32);\n\\draw[-] (32) to (33);\n\\draw[dashed,-] (33) to (34);\n\\draw[-] (34) to (35);\n\\draw[dashed,-] (35) to (36);\n\\draw[->] (0) to (41);\n\\draw[dashed,-] (41) to (42);\n\\draw[-] (42) to (43);\n\\draw[dashed,-] (43) to (44);\n\\draw[-] (44) to (45);\n\\draw[dashed,-] (45) to (46);\n\\draw[-] (46) to (47);\n\\draw[dashed,-] (47) to (48);\n\\draw[-] (11) to (21);\n\\draw[-] (21) to (31);\n\\draw[-] (31) to (41);\n\\node (x) at (1,0.05) {$ \\ $};\n\\draw[dotted,-] (41) to (x);\n\\node (newroot) at (0,-1) {$01$};\n\\draw[dashed,->] (0) to (newroot);\n\\draw[->] (newroot) to (11);\n\\draw[dotted,-] (0) to (dots);\n\\end{tikzpicture}\n}\n\n}\n\n\\noindent This completes our example of a believable $(01)^\\omega$-satisfiable formula for ${\\mathcal{KD}45}$. 
Proposition \\ref{prop.b}, stating that there are believable $\\sigma$-satisfiable formulas on class ${\\mathcal{KD}45}$ for any string $\\sigma$, now follows directly from this example: this is similar to how we generalized the example of $(01)^\\omega$-satisfiability on ${\\mathcal K}$ to a proof of $\\sigma$-satisfiability on ${\\mathcal K}$ for any $\\sigma\\in\\{0,1\\}^\\omega$, using the same formula and the same model except for a different decoration of the variable $p$.\n\n\\section{Characterization of single-agent believable true lies} \\label{sec.four}\n\nExactly which formulas are true lies? Can they be characterized syntactically, and if so, how? The corresponding problem for successful and self-refuting formulas has been studied and solved for the single-agent case \\citep{hollidayetal:2010}. These characterizations are given for ${\\mathcal{S}5}$ (see Section \\ref{sec.2valid}) but are said to hold also for ${\\mathcal{KD}45}$ ``with minor changes'' \\citep{hollidayetal:2010}. The problem is still open for the multi-agent case, for both classes of formulas. Two main reasons why the single-agent case is easier are that, in that case, first, it is well known that any formula is (${\\mathcal{S}5}$ or ${\\mathcal{KD}45}$) equivalent to a formula in disjunctive normal form without any nestings of modalities, and, second, that satisfiability of a formula in disjunctive normal form can be checked syntactically \\citep{hollidayetal:2010} (see below for details). Based on these two properties, the characterizations in \\cite{hollidayetal:2010} check that certain disjuncts (conditionally) exist in the (disjunctive normal form) formula, which is shown to ensure that the formula is of the required type.\n\nIn this section we give a similar characterization of believable true lies. It is heavily inspired by \\cite{hollidayetal:2010}. Indeed, we take the technique directly from them, as well as notation and concepts. 
Just as \\cite{hollidayetal:2010} syntactically characterize the set of successful formulas, we syntactically characterize the \\emph{complement} of the set of believable true lies. The differences are in the details of the characterization and the corresponding proof.\n\nIn the rest of this section, a ``formula'' always refers to a (single-agent) formula in the standard modal language. (Note that any formula in $\\ensuremath{\\mathcal{L}}$ is equivalent to such an announcement-free formula.) We now formally define the two properties mentioned above, disjunctive normal forms and syntactic satisfiability checking, and then give the characterization in Definition \\ref{def:syntactic}.\n\n\\begin{definition}[Disjunctive normal form] A formula $\\phi$ is in \\emph{disjunctive normal form} iff it is a disjunction of conjunctions of the form \\[\\delta = \\alpha \\wedge \\Box \\beta_1 \\wedge \\ldots \\wedge \\Box \\beta_n \\wedge \\Diamond \\gamma_1 \\wedge \\ldots \\wedge \\Diamond \\gamma_m\\] where $\\alpha$ and each $\\gamma_i$ are conjunctions of literals and each $\\beta_i$ is a disjunction of literals. \n\\end{definition}\nWe write $\\ourneg{l}$ to denote the negation of a literal $l$, where double negations are removed. Similarly, when $\\beta$ is a disjunction of literals, $\\ourneg{\\beta}$ denotes the conjunction of the negations $\\ourneg{l}$ of its literals $l$. For a given conjunction $\\delta$ of the form above, in the following we will write $\\delta^\\alpha$ for $\\alpha$ and $\\delta^{\\Box\\Diamond}$ for $\\Box \\beta_1 \\wedge \\ldots \\wedge \\Box \\beta_n \\wedge \\Diamond \\gamma_1 \\wedge \\ldots \\wedge \\Diamond \\gamma_m$. Every formula is ${\\mathcal{KD}45}$-equivalent to one in disjunctive normal form.\n\n\\begin{definition}[Clarity \\citep{hollidayetal:2010}] \nGiven a conjunction or disjunction $\\chi$ of literals, $L(\\chi)$ denotes the set of literals occurring in it. 
Set $L(\\chi)$ is \\emph{open} iff no literal in $L(\\chi)$ is the negation of any other. A conjunction $\\delta = \\alpha \\wedge \\Box \\beta_1 \\wedge \\ldots \\wedge \\Box \\beta_n \\wedge \\Diamond \\gamma_1 \\wedge \\ldots \\wedge \\Diamond \\gamma_m$ in disjunctive normal form is \\emph{clear} iff (i) $L(\\alpha)$ is open; (ii) there is an open set of literals $\\{l_1,\\ldots,l_n\\}$ with $l_i \\in L(\\beta_i)$; and (iii) for every $\\gamma_k$ there is a set of literals $\\{l_1,\\ldots,l_n\\}$ with $l_i \\in L(\\beta_i)$ such that $\\{l_1,\\ldots,l_n\\} \\cup L(\\gamma_k)$ is open. A disjunction in disjunctive normal form is clear iff at least one of the disjuncts is clear.\n\\end{definition}\nThe key point about clarity is that a formula is ${\\mathcal{KD}45}$-satisfiable iff it is clear \\cite[Lemma 3.6]{hollidayetal:2010}.\n\nWe define a formula that we call a \\emph{disjunctive lying form}. We will then show that, modulo logical equivalence, the class of these formulas corresponds exactly to the class of formulas that are not believable true lies. The characterization will be explained after the definition. \n\n\\begin{definition}[Disjunctive lying form]\n \\label{def:syntactic}\n A formula $\\phi$ in disjunctive normal form is a \\emph{disjunctive lying form} iff there exist (possibly empty) sets $S$ and $T$ of disjuncts of $\\phi$ and a conjunct $\\Box \\beta_\\theta$ of each $\\theta \\in T$, such that any disjunctive normal form of\n\\[\\chi = \\neg \\phi \\wedge \\Diamond \\phi \\wedge \\chi_1 \\wedge \\chi_2 \\wedge \\chi_3\\]\nis clear\\footnote{The definition allows both $S$ and $T$ to be empty. However, if $S$ is empty then (any disjunctive form of) $\\chi$ is not clear ($\\chi_3$ is a contradiction), so it follows that a disjunctive lying form actually must have a non-empty $S$ as a witness. If $T$ is empty then $\\chi_3$ is a tautology. 
There exist disjunctive lying forms with an empty $T$ as a witness (one example is $p \\wedge \\Box p$).}\n, where\n\\[\\chi_1 = \\bigwedge_{\\theta \\in T} t(\\theta) \\wedge \\bigwedge_{\\theta \\in \\overline{T}} \\neg t(\\theta)\\quad\\quad\\quad t(\\theta) = \\theta^\\alpha \\wedge \\bigwedge_{\\Diamond \\gamma \\mbox{ in } \\theta} \\bigvee_{\\sigma \\in S} \\Diamond(\\sigma^\\alpha \\wedge \\gamma)\\]\n\\[\\chi_2 = \\bigwedge_{\\sigma \\in S}\\sigma^{\\Box\\Diamond} \\wedge \\bigwedge_{\\sigma \\in \\overline{S}}\\neg \\sigma^{\\Box\\Diamond}\\]\n\\[\\chi_3 = \\bigwedge_{\\theta \\in T}\\bigvee_{\\sigma \\in S} \\Diamond(\\sigma^\\alpha \\wedge \\ourneg{\\beta_\\theta})\\]\nand $\\overline{X}$ denotes the set of disjuncts of $\\phi$ that are not in $X$.\n\\end{definition}\n\nWe now recall (page \\pageref{th:char}):\n\n\\bigskip\n\n\\noindent {\\bf Proposition \\ref{th:char}} \\ \\ \n{\\em A formula is a believable true lie iff it is not equivalent to a disjunctive lying form.}\n\n\\bigskip\n\n\\noindent Before we prove Proposition \\ref{th:char}, let us explain the intuition behind the definition. This will also serve as a guide to the proof of Proposition \\ref{th:char}. If $\\chi$ is true in some pointed model (equivalently, if it is clear), then $\\neg \\phi \\wedge \\Diamond\\phi$ is true there. The role of $\\chi_1$--$\\chi_3$ is to ensure that $\\phi$ remains false in the updated model --- which holds iff $\\phi$ is not a believable true lie --- by ensuring that certain disjuncts (conditionally) exist in $\\phi$.\n\nThe role of $S$ is to syntactically encode which states are still accessible from the current state after the update. These are exactly the states where $\\sigma^\\alpha$ is true for some $\\sigma \\in S$. 
Since $\\phi$ is false in the initial pointed model, there is only one way it can become true in the updated pointed model: if, for some disjunct $\\theta$ of $\\phi$ where $\\theta^\\alpha$ is already true and all $\\Diamond\\gamma$ are already true and \\emph{stay} true in the updated model, all $\\Box\\beta$ that were false in the initial pointed model \\emph{become} true in the updated model. We must avoid that. Those disjuncts that can potentially become true are exactly those in $T$. Thus, we must make sure that for every disjunct $\\theta \\in T$, there is at least one $\\Box\\beta$ such that $\\beta$ is false in at least one state that is still accessible after the update. That is ensured by $\\chi_3$.\n\nAs an example, to see that the believable true lie $p \\vee \\Box p$ (already in disjunctive normal form) is not a disjunctive lying form, we need to check for all possible subsets of disjuncts $S$ and $T$ over the set of all disjuncts $\\{p, \\Box p\\}$, whether $\\chi$ is clear. There are only two possibilities for $T$: $T = \\emptyset$ or $T = \\{\\Box p\\}$. In the former case we get that $\\chi_1 = \\neg p \\wedge \\bot$ (not clear). In the latter case, there are four possibilities for $S$: if $S=\\emptyset$ then $\\chi_3 = \\bot$; if $S=\\{p\\}$ then $\\chi_3 = \\Diamond(p \\wedge \\neg p)$; if $S=\\{\\Box p\\}$ then $\\chi_2 = \\Box p$ and $\\chi_3 = \\Diamond \\neg p$; and if $S = \\{p,\\Box p\\}$ then $\\chi_2 = \\Box p$ and $\\chi_3 = \\Diamond \\neg p \\wedge \\Diamond (p \\wedge \\neg p)$. In all cases $\\chi$ is not clear.\n\nAs another example, consider $p \\wedge \\Box p$ which is \\emph{not} a believable true lie. To see that it is a disjunctive lying form, take $S = \\{p \\wedge \\Box p\\}$ and $T = \\emptyset$. We get that $\\chi_1 = \\neg p$, and $\\chi_2 = \\Box p$ and $\\chi_3 = \\top$. 
It is easy to see that $\\chi$ is clear (it is equivalent to $\\neg p \\wedge \\Box p \\wedge \\Diamond (p \\wedge \\Box p)$).\n\n\\begin{proof}[of Prop.~\\ref{th:char}] Without loss of generality, let $\\phi$ be in disjunctive normal form.\n \nConsider first the implication towards the left: assume that $\\phi$ is a disjunctive lying form and let $S$, $T$, $\\chi$ and $\\beta_\\theta$ for each $\\theta \\in T$ be as in Def. \\ref{def:syntactic}. We must show that there exists $M,s$ such that $M,s \\models \\neg \\phi \\wedge \\Diamond \\phi$ and $\\upd{M}{\\phi},s \\models \\neg \\phi$. Since any disjunctive normal form of $\\chi$ is clear iff $\\chi$ is satisfiable, $\\chi$ is satisfiable; let $M,s$ be such that $M,s \\models \\chi$. Then $M,s \\models \\neg \\phi \\wedge \\Diamond \\phi$. Assume, towards a contradiction, that $\\upd{M}{\\phi},s \\models \\phi$. Then there exists a disjunct $\\theta$ in $\\phi$ such that $\\upd{M}{\\phi},s\\models \\theta$. As explained above, we are going to show that $\\chi_3$ ensures that this is impossible. To do that, we must first show that it must be the case that $\\theta \\in T$ (intuitively defined as the set of disjuncts that can potentially become true in the updated model), and then that $\\chi_3$ ensures that the conjunct $\\Box\\beta_\\theta$ is false in $\\upd{M}{\\phi},s$, leading to a contradiction.\n\n\\medskip\n\nThus, we first show that it must be the case that $\\theta \\in T$. From $\\upd{M}{\\phi},s\\models \\theta^\\alpha$ it follows that $M,s\\models \\theta^\\alpha$ ($\\theta^\\alpha$ is propositional). We now show that also $M,s \\models \\bigwedge_{\\Diamond \\gamma \\mbox{ in } \\theta} \\bigvee_{\\sigma \\in S} \\Diamond(\\sigma^\\alpha \\wedge \\gamma)$, and it follows from $M,s \\models \\chi_1$ that $\\theta \\in T$. Let $\\Diamond\\gamma$ be a conjunct in $\\theta$.
We have that $\\upd{M}{\\phi},s \\models \\Diamond\\gamma$, i.e., that there exists a state $t$ such that $Rst$ and $\\upd{M}{\\phi},t\\models \\gamma$ and, since $t$ is a state in $\\upd{M}{\\phi}$, $M,t \\models \\phi$. From the latter it follows that $M,t \\models \\sigma$ for some disjunct $\\sigma$ of $\\phi$. From $M,t \\models \\sigma^{\\Box\\Diamond}$ and $Rst$ it follows that $M,s \\models \\sigma^{\\Box\\Diamond}$ by standard ${\\mathcal{KD}45}$ reasoning, and thus that $\\sigma \\in S$ by $\\chi_2$. Since $\\gamma$ is propositional, $\\upd{M}{\\phi},t\\models \\gamma$ implies that $M,t \\models \\gamma$. Thus, $M,s \\models \\Diamond(\\sigma^\\alpha \\wedge \\gamma)$ for some $\\sigma \\in S$. Since $\\Diamond\\gamma$ was arbitrary, we get that $M,s \\models \\bigwedge_{\\Diamond \\gamma \\mbox{ in } \\theta} \\bigvee_{\\sigma \\in S} \\Diamond(\\sigma^\\alpha \\wedge \\gamma)$ and thus that $\\theta \\in T$.\n\nHaving shown that $\\theta \\in T$, we now get a contradiction from the fact that $M,s \\models \\chi_3$. It follows that there exists a $\\sigma \\in S$ such that $M,s \\models \\Diamond(\\sigma^\\alpha \\wedge \\ourneg{\\beta_\\theta})$. That means that there exists a state $t$ such that $Rst$ and $M,t \\models \\sigma^\\alpha \\wedge \\ourneg{\\beta_\\theta}$. From the fact that $\\sigma \\in S$ we know that $M,s \\models \\sigma^{\\Box\\Diamond}$ and thus, by standard ${\\mathcal{KD}45}$ reasoning, that $M,t \\models \\sigma^{\\Box\\Diamond}$. That means that $M,t \\models \\sigma$ and thus $M,t \\models \\phi$. The latter means that $t$ is accessible from $s$ in $\\upd{M}{\\phi}$, and thus that $\\upd{M}{\\phi},s \\models \\neg\\Box \\beta_\\theta$. But this contradicts $\\upd{M}{\\phi},s \\models \\Box \\beta_\\theta$ which follows from the assumption that $\\upd{M}{\\phi},s \\models \\theta$. 
This concludes the implication towards the left.\n\nFor the direction towards the right, assume that $\\phi$ is not a believable true lie, i.e., that there exist $M,s$ such that $M,s \\models \\neg\\phi \\wedge \\Diamond \\phi$ and $\\upd{M}{\\phi},s \\models \\neg \\phi$. We now define $T$, $S$, and $\\beta_\\theta$ for each $\\theta \\in T$, as follows. We will then show that the resulting $\\chi$ is satisfiable. For any disjunct $\\delta$ in $\\phi$, let\n \\[\\delta \\in T \\Leftrightarrow M,s \\models t(\\delta) \\quad\\quad\\quad \\delta \\in S \\Leftrightarrow M,s \\models \\delta^{\\Box\\Diamond}\\]\nLet $\\theta \\in T$. In order to define $\\beta_\\theta$, we first show that for every conjunct $\\Diamond \\gamma$ in $\\theta$, we have that $\\upd{M}{\\phi},s \\models \\Diamond \\gamma$. Let $\\Diamond \\gamma$ be a conjunct in $\\theta$. From the fact that $M,s \\models t(\\theta)$ we get that there is a $\\sigma \\in S$ such that $M,s \\models \\Diamond(\\sigma^\\alpha \\wedge \\gamma)$. In other words, there exists a state $t$ such that $Rst$ and $M,t \\models \\sigma^\\alpha \\wedge \\gamma$. From $\\sigma \\in S$ we get that $M,s \\models \\sigma^{\\Box\\Diamond}$, and thus, by standard ${\\mathcal{KD}45}$ reasoning, that $M,t \\models \\sigma^{\\Box\\Diamond}$. Thus, $M,t \\models \\sigma$. That means that $M,t \\models \\phi$ and that $t$ is accessible from $s$ in $\\upd{M}{\\phi}$, so $\\upd{M}{\\phi},s \\models \\Diamond \\gamma$; since $\\Diamond \\gamma$ was arbitrary, this holds for all $\\Diamond \\gamma$ in $\\theta$. Since $\\theta^\\alpha$ is propositional, $\\upd{M}{\\phi},s \\models \\theta^\\alpha$ follows from $M,s\\models t(\\theta)$, and thus $\\upd{M}{\\phi},s \\models \\neg \\theta$ (which follows from $\\upd{M}{\\phi},s \\models \\neg \\phi$) implies that there must be a conjunct $\\Box\\beta$ in $\\theta$ such that $\\upd{M}{\\phi},s \\models \\neg \\Box\\beta$. Let $\\beta_\\theta$ be one such $\\beta$.\n\nFinally, we show that $\\chi$ is satisfiable.
It follows that any disjunctive normal form of $\\chi$ is clear, and thus that $\\phi$ is a disjunctive lying form. We show that $M,s \\models \\chi$ (where $M,s$ is the pointed model above). We have that $M,s \\models \\neg\\phi \\wedge \\Diamond\\phi$ by assumption, and $M,s \\models \\chi_1 \\wedge \\chi_2$ by definition. It remains to be shown that $M,s \\models \\chi_3$. Let $\\theta \\in T$. By definition of $\\beta_\\theta$, $\\upd{M}{\\phi},s \\models \\Diamond \\ourneg{\\beta_\\theta}$. That means that there exists a state $t$ such that $t$ is accessible from $s$ in the updated model and $\\upd{M}{\\phi},t \\models \\ourneg{\\beta_\\theta}$. Since $t$ is accessible from $s$ in $\\upd{M}{\\phi}$, $M,t \\models \\phi$. Let $\\sigma$ be a disjunct of $\\phi$ such that $M,t \\models \\sigma$. Thus $M,t \\models \\sigma^{\\Box\\Diamond}$, and from that and standard ${\\mathcal{KD}45}$ reasoning, $M,s \\models \\sigma^{\\Box\\Diamond}$, and thus $\\sigma \\in S$. From $M,t \\models \\sigma$ we know that $M,t \\models \\sigma^\\alpha$, and since $\\ourneg{\\beta_\\theta}$ is propositional, $M,t \\models \\ourneg{\\beta_\\theta} \\wedge \\sigma^\\alpha$ for some $\\sigma \\in S$ and thus $M,s \\models \\Diamond(\\ourneg{\\beta_\\theta} \\wedge \\sigma^\\alpha)$. Since $\\theta \\in T$ was arbitrary, $M,s \\models \\chi_3$.\n\\end{proof}\n\n\\section{Knowledge and iterated announcement whether} \\label{sec.five}\n\nThe focus of our story is on lying and therefore on the believed announcement logic, not on the truthful announcement logic wherein lying is impossible. However some of our questions are also meaningful in a logic of knowledge change, interpreted on models with equivalence relations. As an example, let us address the matter of arbitrarily often iterated updates and unstable formulas. 
For ${\\mathcal{KD}45}$ models we could affirmatively answer the following question (Section \\ref{sec.threetwo}):\n\\begin{quote} {\\em \nAre there a model $(M,s)$ and a formula $\\phi$, such that if we continue to announce in this model that $\\phi$, the value of $\\phi$ never stabilizes? \\hfill (i)}\n\\end{quote}\nThis cannot happen in ${\\mathcal{S}5}$, as $\\phi$ cannot be truthfully announced when $\\phi$ is false in the actual state. An obvious question in the setting of ${\\mathcal{S}5}$ models and {\\em truthful} public announcements is:\n\\begin{quote} {\\em \nAre there a model $(M,s)$ and a formula $\\phi$, such that if we continue to announce in this model \\pmb{whether} $\\phi$, the value of $\\phi$ never stabilizes? \\hfill (ii)}\n\\end{quote}\nThe (truthful public) {\\em announcement whether} $\\phi$ announces the {\\em value} of $\\phi$. If $\\phi$ is true, it is a truthful announcement of $\\phi$, whereas if $\\phi$ is false, it is a truthful announcement of $\\neg\\phi$. To question (ii) we do not know the answer. We conjecture that the answer is: {\\bf no}. However, a related question is:\n\\begin{quote} {\\em \nAre there a model $(M,s)$ and formulas $\\phi$ \\pmb{and} $\\pmb{\\psi}$, such that if we continue to announce whether $\\phi$, the value \\pmb{of} $\\pmb{\\psi}$ never stabilizes? \\hfill (iii)}\n\\end{quote}\nThis question we can answer positively, and we think that instances of it may be found in various epistemic scenarios of interest. In the scenario we present here, we do slightly better than (iii):\n\\begin{quote} {\\em \nAre there a model $(M,s)$ and formulas $\\phi$ and $\\psi$, such that if we continue to announce \\pmb{that} $\\phi$, the value of $\\psi$ never stabilizes? \\hfill (iv)}\n\\end{quote}\nAn answer to (iv) is also an answer to (iii).\n\nLack of stability in the sense of (iv) is more formally defined as follows.
Let $\\sigma \\in \\{0,1\\}^\\omega$ be {\\em unstable}; we recall that this means that for all $n \\in \\Nat$ there are $m,k \\geq n$ such that $\\sigma_m = 1$ and $\\sigma_k = 0$. Then we wish to find $(M,s)$, $\\phi$, and $\\psi$ such that for all $n \\in \\Nat$, $M|_{\\phi^n},s \\models \\sigma_{n+1}(\\psi)$, and where $M$ is an ${\\mathcal{S}5}$ model. (We further recall that $\\sigma_n(\\psi) = \\psi$ if $\\sigma_n=1$ and else $\\sigma_n(\\psi) = \\neg\\psi$; and that $M|_\\phi$ is the (${\\mathcal{S}5}$-preserving) state elimination semantics for truthful public announcement---see Section \\ref{sec.two} and Appendix A.)\n\nNow consider $\\sigma = (01)^\\omega$ ($\\sigma = 010101...$), the alternating string. Our solution is a simple adjustment of the model $N$ used in Section \\ref{sec.threetwo} to realize an unstable formula for believed announcement logic. The model below is the equivalence closure (reflexive, symmetric, and transitive closure) of that model $N$. To achieve that visually, we only have to replace the directed arrows involving the root and the node below the root by undirected links, and assume transitivity. The alternating values of $\\sigma_i$ have been made explicit in three branches, by way of example. As in Section \\ref{sec.threetwo}, we decorate the states of $N$ with the variable $p$ according to $\\sigma$; $\\bullet$ means that $p$ is false ($\\sigma_1 = \\sigma_3 = 0$), and $\\circ$ that $p$ is true ($\\sigma_2 = 1$).
\n\n\\bigskip\n\n\\scalebox{.8}{\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\circ$};\n\\node (11) at (-1,0) {$\\bullet$};\n\\node (12) at (-2,0) {$\\bullet$};\n\\node (21) at (-.85,.85) {$\\circ$};\n\\node (22) at (-1.7,1.7) {$\\circ$};\n\\node (23) at (-2.55,2.55) {$\\bullet$};\n\\node (24) at (-3.4,3.4) {$\\bullet$};\n\\node (31) at (0,1) {$\\bullet$};\n\\node (32) at (0,2) {$\\bullet$};\n\\node (33) at (0,3) {$\\circ$};\n\\node (34) at (0,4) {$\\circ$};\n\\node (35) at (0,5) {$\\bullet$};\n\\node (36) at (0,6) {$\\bullet$};\n\\node (41) at (.85,.85) {$\\circ$};\n\\node (42) at (1.7,1.7) {$\\circ$};\n\\node (43) at (2.55,2.55) {$\\bullet$};\n\\node (44) at (3.4,3.4) {$\\bullet$};\n\\node (45) at (4.25,4.25) {$\\circ$};\n\\node (46) at (5.1,5.1) {$\\circ$};\n\\node (47) at (5.95,5.95) {$\\bullet$};\n\\node (48) at (6.8,6.8) {$\\bullet$};\n\\node (11a) at (-1,-.3) {$\\sigma_1$};\n\\node (12a) at (-2,-.3) {$\\sigma_1$};\n\\node (21a) at (-1.15,.85) {$\\sigma_2$};\n\\node (22a) at (-2,1.7) {$\\sigma_2$};\n\\node (23a) at (-2.85,2.55) {$\\sigma_1$};\n\\node (24a) at (-3.7,3.4) {$\\sigma_1$};\n\\node (31a) at (-.3,1.15) {$\\sigma_3$};\n\\node (32a) at (-.3,2) {$\\sigma_3$};\n\\node (33a) at (-.3,3) {$\\sigma_2$};\n\\node (34a) at (-.3,4) {$\\sigma_2$};\n\\node (35a) at (-.3,5) {$\\sigma_1$};\n\\node (36a) at (-.3,6) {$\\sigma_1$};\n\\node (dots) at (3,0) {$ \\ $};\n\\draw[-] (0) to (11);\n\\draw[dashed,-] (11) to (12);\n\\draw[-] (0) to (21);\n\\draw[dashed,-] (21) to (22);\n\\draw[-] (22) to (23);\n\\draw[dashed,-] (23) to (24);\n\\draw[-] (0) to (31);\n\\draw[dashed,-] (31) to (32);\n\\draw[-] (32) to (33);\n\\draw[dashed,-] (33) to (34);\n\\draw[-] (34) to (35);\n\\draw[dashed,-] (35) to (36);\n\\draw[-] (0) to (41);\n\\draw[dashed,-] (41) to (42);\n\\draw[-] (42) to (43);\n\\draw[dashed,-] (43) to (44);\n\\draw[-] (44) to (45);\n\\draw[dashed,-] (45) to (46);\n\\draw[-] (46) to (47);\n\\draw[dashed,-] (47) to (48);\n\\draw[dotted,-] (0) to 
(dots);\n\\draw[-] (11) to (21);\n\\draw[-] (21) to (31);\n\\draw[-] (31) to (41);\n\\node (x) at (1,0.05) {$ \\ $};\n\\draw[dotted,-] (41) to (x);\n\\node (newroot) at (0,-1) {$\\bullet$};\n\\draw[dashed,-] (0) to (newroot);\n\\draw[-] (newroot) to (11);\n\\end{tikzpicture}\n}\n\n\\bigskip\n\nIn order to get $N|_{\\phi^n},s \\models \\sigma_{n+1}(\\psi)$ we decorate the model, as before, with values $\\sigma_n$ for the propositional variable $p$ (where the model above demonstrates strict alternation in the first three values) and we take $\\phi := \\neg\\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p)$ and $\\psi := \\neg(\\Box_b p\\vee\\Box_b\\neg p) \\rightarrow \\Diamond_a\\Diamond_b \\Box_a p$. In other words, we split the unstable formula $\\neg\\Diamond_b(\\Box_a p \\vee \\Box_a \\neg p) \\wedge ((p\\wedge\\Box_b\\neg p) \\rightarrow \\Diamond_a\\Diamond_b \\Box_a p)$ that we employed in Section \\ref{sec.threetwo} into a part doing the job of always removing the leaf and the node before the leaf of every branch, and another part checking whether there is a branch of length two from the root to a $p$ node, where the distinguishing formula for the root has become $b$'s ignorance of $p$ (there are now two nodes that can be the root).\n\n\\bigskip\n\nFor a different example of $M|_{\\phi^n},s \\models \\sigma_{n+1}(\\psi)$, consider an infinite number of (possibly) muddy children, of which an infinite subset are in fact muddy, let $\\phi$ be the usual proposition `nobody steps forward' (nobody knows whether he or she is muddy; see Section \\ref{sec.2valid}), and let $\\psi$ be the proposition `it is common knowledge that at least $n$ children are muddy but it is not common knowledge that at least $n+1$ children are muddy, where $n$ is even', and let $\\sigma = (01)^\\omega$, again.
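The finite analogue of this muddy children scenario can be checked mechanically. The following minimal sketch is our own encoding (three children, all in fact muddy, with the state-elimination semantics for truthful public announcement); the common-knowledge bound `at least $n$ children are muddy' is computed as the minimum number of muddy children over the surviving states, which is what it amounts to in this connected model.

```python
# Finite muddy children: three children, all in fact muddy. A truthful
# public announcement is state elimination. Encoding is our own.
from itertools import product

N = 3
states = list(product([False, True], repeat=N))   # who is muddy

def indistinct(i, w, v):
    """Child i cannot see their own forehead: w ~_i v iff equal off i."""
    return all(w[j] == v[j] for j in range(N) if j != i)

def knows_own_status(i, w, states):
    return all(v[i] == w[i] for v in states if indistinct(i, w, v))

# Father's initial announcement: at least one child is muddy.
states = [w for w in states if any(w)]
# Common-knowledge lower bound on the number of muddy children.
history = [min(sum(w) for w in states)]

actual = (True,) * N
# Iterate 'nobody steps forward' while it is true at the actual state.
while not any(knows_own_status(i, actual, states) for i in range(N)):
    states = [w for w in states
              if not any(knows_own_status(i, w, states) for i in range(N))]
    history.append(min(sum(w) for w in states))

print(history)   # [1, 2, 3]: the parity of the bound alternates
```

With $n$ children the bound runs $1, 2, \dots, n$ and the iteration then halts; only with infinitely many muddy children does the alternation of $\psi$ continue forever, as in the text.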
After father's initial announcement that at least one child is muddy (which serves as the initial model $N$ in this case), $\\psi$ is false (because $n=1$, which is odd). Whereas after nobody steps forward, following father's first request to do so for those who know whether they are muddy, $\\psi$ is true (because it is now common knowledge that at least two children must be muddy: $n=2$, which is even). And so on, ad infinitum.\n\n\\bigskip\n\nIf we consider non-public events, there are yet other interesting forms of unstable iteration on ${\\mathcal{S}5}$ models. Running ahead of Section \\ref{sec.six} (and see Appendix A): there are {\\em action models} such that their iterated execution does not stabilize on a given model. The following is a vintage example in a two-agent setting. \n\nConsider a two-state epistemic model for two agents Anne (solid access) and Bob (dashed access) such that Anne is uncertain whether $p$ but Bob knows whether $p$. Actually, $p$ is true (the $\\circ$ state, boxed). Iterated execution of the action model with two actions with preconditions $\\neg \\Box_a p$ and $p \\wedge \\Box_b \\neg \\Box_a p$ transforms this two-state epistemic model into a three-state model, and vice versa, ad infinitum. We could think of the formula $\\Box_b \\neg \\Box_a p$ as being alternately true and false in the $p$-world (the $\\circ$-world) about which agent $a$ is uncertain. The execution can be visualized as follows.
\n\n\\[ \\begin{array}{lllll}\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {\\framebox{$\\circ$}};\n\\node (1) at (2,0) {$\\bullet$};\n\\draw[-] (0) to (1);\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\times$};\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\ensuremath{\\mathsf{s}}$};\n\\node (1) at (2,0) {$\\ensuremath{\\mathsf{t}}$};\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\node (0x) at (0.3,-0.5) {\\footnotesize{$\\neg \\Box_a p$}};\n\\node (1x) at (2.5,-0.5) {\\footnotesize{$p \\wedge \\Box_b\\neg \\Box_a p$}};\n\\draw[dashed,-] (0) to (1);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$=$};\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {\\framebox{$\\circ$}};\n\\node (1) at (2,0) {$\\bullet$};\n\\node (0b) at (0,-2) {$\\circ$};\n\\draw[-] (0) to (1);\n\\draw[-,dashed] (0) to (0b);\n\\end{tikzpicture}\n\\\\ \\ \\\\\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {\\framebox{$\\circ$}};\n\\node (1) at (2,0) {$\\bullet$};\n\\node (0b) at (0,-2) {$\\circ$};\n\\draw[-] (0) to (1);\n\\draw[-,dashed] (0) to (0b);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\times$};\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$\\ensuremath{\\mathsf{s}}$};\n\\node (1) at (2,0) {$\\ensuremath{\\mathsf{t}}$};\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\node (0x) at (0.3,-0.5) {\\footnotesize{$\\neg \\Box_a p$}};\n\\node (1x) at (2.5,-0.5) {\\footnotesize{$p \\wedge \\Box_b\\neg \\Box_a p$}};\n\\draw[dashed,-] (0) to (1);\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {$=$};\n\\node (0b) at (0,-2) {\\color{white} 
$\\circ$};\n\\end{tikzpicture}\n&\n\\begin{tikzpicture}[thick]\n\\node (0) at (0,0) {\\framebox{$\\circ$}};\n\\node (1) at (2,0) {$\\bullet$};\n\\draw[-] (0) to (1);\n\\node (0b) at (0,-2) {\\color{white} $\\circ$};\n\\end{tikzpicture}\n\\end{array}\\]\nIn the regrettably unpublished \\citep{sadzik:2006}, the matter of stabilization after action model execution is discussed at great length.\n\n\\section{Private lies} \\label{sec.six}\n\n\\begin{quote}\n{\\em {\\bf True Lies and Butterflies } \\\\\nMei wants to invite two friends Zhu Yingtai and Liang Shanbo to a party. She knows that they are dying to get close to each other. Thus one will come if and only if (s)he believes that the other will come. Obviously they do not yet wish to admit this to each other, because as far as they know they are both still very uncertain about each other's feelings. Given this uncertainty, neither in fact intends to come to the party. Mei now lies to Yingtai that Shanbo will come to the party and she lies to Shanbo that Yingtai will come to the party. As a result, they will both come to the party. \\\\ (This story is a free adaptation of the {\\em Butterfly Lovers}, a famous Chinese folktale, see \\url{https:\/\/en.wikipedia.org\/wiki\/Butterfly_Lovers}.)\n}\n\\end{quote}\n\nWe can consider this an example of a true lie, because when Mei is telling Yingtai that Shanbo plans to come, in fact Shanbo is not planning to come, and when she is telling Shanbo that Yingtai plans to come, in fact Yingtai is not (yet) planning to come. (For modelling convenience we assume that Yingtai is slow in making up her mind after Mei informs her about Shanbo.) After that, they both change their minds, and both lies have become true.\n\nIn order to model this story we will depart in two respects from the previous setting of believed announcement logic. In the first place, these announcements are not public but private. In the second place, the agents that we model change their minds.
As we consider the factual propositions `Yingtai plans to go to the party' and `Shanbo plans to go to the party', changing one's mind involves factual change. A logic allowing both informative and factual change to be formalized is action model logic (with factual change) \\citep{baltagetal:1998,hvdetal.del:2007,jfaketal.lcc:2006}. This logic can be seen as a straightforward generalization of believed announcement logic. (Alternatively, we can model the dynamics in more protocol-oriented dynamic epistemic logics, such as \\cite{Wang10:phd,hvdetal.aij:2014}.) We only present some examples of action models and assume familiarity with the framework. For technical details, see Appendix A on page \\pageref{sec.appendixa} or the above references.\n\nFor an initial model, we assume that Yingtai and Shanbo know of themselves whether they intend to go to the party but do not know it of the other one (and that this is common background knowledge). This comes with the following model, wherein solid access represents the uncertainty of Yingtai and dashed access represents the uncertainty of Shanbo, and where worlds are named with the facts $p_y$ (`Yingtai comes to the party') and $p_s$ (`Shanbo comes to the party') that are true there, where $01$ stands for `$p_y$ is false and $p_s$ is true', etc. The designated point of the model is boxed: initially both do not intend to go to the party.\n\n\\bigskip\n\n\\begin{tikzpicture}[thick]\n\\node (00m) at (4,0) {\\fbox{$00$}};\n\\node (10m) at (6,0) {$10$};\n\\node (01m) at (4,2) {$01$};\n\\node (11m) at (6,2) {$11$};\n\\draw[dashed,-] (00m) to (10m);\n\\draw[dashed,-] (01m) to (11m);\n\\draw[-] (00m) to (01m);\n\\draw[-] (10m) to (11m);\n\\end{tikzpicture}\n\n\\bigskip\n\nMei now lies to Yingtai, in private, that Shanbo goes to the party.
For the convenience of the reader informed about action model logic, we can model this as a three-pointed action model as follows, on the left---where for convenience we have put the similar private lie to Shanbo about Yingtai next to it, on the right.\n\n\\bigskip\n\n\\begin{tikzpicture}[thick]\n\\node (x) at (0,0) {\\fbox{$\\neg p_s$}};\n\\node (y) at (2,0) {$p_s$};\n\\node (z) at (1,2) {$\\top$};\n\\draw[->] (x) to (y);\n\\draw[dashed,->] (x) to (z);\n\\draw[dashed,->] (y) to (z);\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (x) at (0,0) {\\fbox{$\\neg p_y$}};\n\\node (y) at (2,0) {$p_y$};\n\\node (z) at (1,2) {$\\top$};\n\\draw[dashed,->] (x) to (y);\n\\draw[->] (x) to (z);\n\\draw[->] (y) to (z);\n\\end{tikzpicture}\n\n\\bigskip\n\nThe result of the first of these lying actions is as follows.\n\n\\bigskip\n\n\\begin{tikzpicture}[thick]\n\\node (00m) at (4,0) {\\fbox{$00$}};\n\\node (10m) at (6,0) {$10$};\n\\node (01m) at (4,2) {$01$};\n\\node (11m) at (6,2) {$11$};\n\\node (00r) at (8,0) {$00$};\n\\node (10r) at (10,0) {$10$};\n\\node (01r) at (8,2) {$01$};\n\\node (11r) at (10,2) {$11$};\n\\draw[dashed,-] (00r) to (10r);\n\\draw[dashed,-] (01r) to (11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->, bend left=20] (00m) to (00r);\n\\draw[dashed,->, bend left=20] (01m) to (01r);\n\\draw[dashed,->, bend left=20] (10m) to (10r);\n\\draw[dashed,->, bend left=20] (11m) to (11r);\n\\draw[->] (00m) to (01m);\n\\draw[->] (10m) to (11m);\n\\end{tikzpicture}\n\n\\bigskip\n\nNow Mei lies privately to Shanbo that Yingtai goes to the party. The result of that action is as follows. 
For the convenience of the reader we also depict (on the right) the restriction of this model to the submodel generated by the point $00$:\n\n\\bigskip\n\n\\scalebox{.8}{\n\\noindent\n\\begin{tikzpicture}[thick]\n\\node (00mb) at (4,0) {\\fbox{$00$}};\n\\node (10mb) at (6,0) {$10$};\n\\node (01mb) at (4,2) {$01$};\n\\node (11mb) at (6,2) {$11$};\n\\node (00rb) at (8,0) {$00$};\n\\node (10rb) at (10,0) {$10$};\n\\node (01rb) at (8,2) {$01$};\n\\node (11rb) at (10,2) {$11$};\n\n\\node (00m) at (4,4) {$00$};\n\\node (10m) at (6,4) {$10$};\n\\node (01m) at (4,6) {$01$};\n\\node (11m) at (6,6) {$11$};\n\\node (00r) at (8,4) {$00$};\n\\node (10r) at (10,4) {$10$};\n\\node (01r) at (8,6) {$01$};\n\\node (11r) at (10,6) {$11$};\n\n\\draw[dashed,-] (00r) to (10r);\n\\draw[dashed,-] (01r) to (11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->, bend left=20] (00m) to (00r);\n\\draw[dashed,->, bend left=20] (01m) to (01r);\n\\draw[dashed,->, bend left=20] (10m) to (10r);\n\\draw[dashed,->, bend left=20] (11m) to (11r);\n\\draw[->] (00m) to (01m);\n\\draw[->] (10m) to (11m);\n\n\\draw[dashed,->] (00rb) to (10rb);\n\\draw[dashed,->] (01rb) to (11rb);\n\\draw[->, bend left=20] (00rb) to (00r);\n\\draw[->, bend left=20] (10rb) to (10r);\n\\draw[->, bend left=20] (01rb) to (01r);\n\\draw[->, bend left=20] (11rb) to (11r);\n\\draw[dashed,->, bend left=20] (00mb) to (10rb);\n\\draw[dashed,->, bend left=20] (01mb) to (11rb);\n\\draw[dashed,->, bend left=20] (10mb) to (10rb);\n\\draw[dashed,->, bend left=20] (11mb) to (11rb);\n\\draw[->, bend left=20] (00mb) to (01m);\n\\draw[->, bend left=20] (10mb) to (11m);\n\\draw[->, bend left=20] (01mb) to (01m);\n\\draw[->, bend left=20] (11mb) to (11m);\n\\end{tikzpicture}\n\\quad\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (00mb) at (4,0) {\\fbox{$00$}};\n\\node (10rb) at (10,0) {$10$};\n\\node (01m) at (4,6) {$01$};\n\n\\node (00r) at (8,4) {$00$};\n\\node (10r) at (10,4) {$10$};\n\\node (01r) at (8,6) 
{$01$};\n\\node (11r) at (10,6) {$11$};\n\n\\draw[dashed,-] (00r) to (10r);\n\\draw[dashed,-] (01r) to (11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->, bend left=20] (00mb) to (10rb);\n\\draw[->, bend left=20] (00mb) to (01m);\n\\draw[->, bend left=20] (10rb) to (10r);\n\\draw[dashed,->, bend left=20] (01m) to (01r);\n\\end{tikzpicture}\n}\n\n\\bigskip\n\nThe above model formalizes that: Yingtai is not going to the party, believes that Shanbo goes to the party, and believes that Shanbo is uncertain whether she goes to the party; and that: Shanbo is not going to the party, believes that Yingtai goes to the party, and believes that Yingtai is uncertain whether he goes to the party.\n\n\\bigskip\n\nWe now first let Yingtai change her mind and then Shanbo change his mind. Yingtai changing her mind can again be formalized as an action model, namely as a private assignment to Yingtai; and similarly, Shanbo changing his mind as a private assignment to Shanbo. It is important here that a {\\em public} assignment is an improper way to formalize this action: a public assignment of Yingtai going to the party if she believes that Shanbo goes to the party would be informative to Shanbo in case he were to believe that she believed that he was going to the party. Because in case he was uncertain if she would go to the party, he would then learn from this public assignment that she would come to the party for his sake. Boring. Because exactly the absence of this sort of knowledge of the other's intentions makes first lovers' meetings so thrilling. That kind of uncertainty should {\\em not} be resolved. Therefore, we formalize it as a private assignment. Interestingly, in the current model the result of a public and of a private assignment (the result of Yingtai privately changing her mind or publicly changing her mind) is the same. 
But that is because both Yingtai and Shanbo believe that the other is uncertain whether they go to the party.\n\nBelow on the left is the action model for Yingtai changing her mind, and on the right, the one for Shanbo changing his mind. (So, for example, according to our conventions, in the left action model the solid relation, that of Yingtai, has identity access, and the dashed relation, for Shanbo, has only a reflexive arrow in the point that the dashed arrow is pointing to.) \n\n\\bigskip\n\n\\begin{tikzpicture}[thick]\n\\node (x) at (0,0) {\\fbox{$p_y := \\Box_y p_s$}};\n\\node (y) at (4,0) {$\\top$};\n\\draw[dashed,->] (x) to (y);\n\\end{tikzpicture}\n\\quad\\quad\\quad\n\\begin{tikzpicture}[thick]\n\\node (x) at (0,0) {\\fbox{$p_s := \\Box_s p_y$}};\n\\node (y) at (4,0) {$\\top$};\n\\draw[->] (x) to (y);\n\\end{tikzpicture}\n\n\\bigskip\n\nWe now depict, from left to right, once more the model before they change their minds, the model resulting from executing the action of Yingtai changing her mind, and the model resulting from Shanbo changing his mind, where once again we restrict the actually resulting models to the point-generated subframes.\n\n\\bigskip\n\n\\scalebox{.8}{\n\\noindent\n\\begin{tikzpicture}[thick]\n\\node (00mb) at (6.5,2.5) {\\fbox{$00$}};\n\\node (10rb) at (10,2.5) {$10$};\n\\node (01m) at (6.5,6) {$01$};\n\n\\node (00r) at (8,4) {$00$};\n\\node (10r) at (10,4) {$10$};\n\\node (01r) at (8,6) {$01$};\n\\node (11r) at (10,6) {$11$};\n\n\\draw[dashed,-] (00r) to (10r);\n\\draw[dashed,-] (01r) to (11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->] (00mb) to (10rb);\n\\draw[->] (00mb) to (01m);\n\\draw[->] (10rb) to (10r);\n\\draw[dashed,->] (01m) to (01r);\n\\end{tikzpicture}\n$\\stackrel {\\text{Yingtai}} \\Rightarrow$\n\\begin{tikzpicture}[thick]\n\\node (00mb) at (6.5,2.5) {\\fbox{$10$}};\n\\node (10rb) at (10,2.5) {$10$};\n\\node (01m) at (6.5,6) {$11$};\n\n\\node (00r) at (8,4) {$00$};\n\\node (10r) at 
(10,4) {$10$};\n\\node (01r) at (8,6) {$01$};\n\\node (11r) at (10,6) {$11$};\n\n\\draw[dashed,-] (00r) to (10r);\n\\draw[dashed,-] (01r) to (11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->] (00mb) to (10rb);\n\\draw[->] (00mb) to (01m);\n\\draw[->] (10rb) to (10r);\n\\draw[dashed,->] (01m) to (01r);\n\\end{tikzpicture}\n$\\stackrel {\\text{Shanbo}} \\Rightarrow$\n\\begin{tikzpicture}[thick]\n\\node (00mb) at (6.5,2.5) {\\fbox{$11$}};\n\\node (10rb) at (10,2.5) {$11$};\n\\node (01m) at (6.5,6) {$11$};\n\n\\node (00r) at (8,4) {$00$};\n\\node (10r) at (10,4) {$10$};\n\\node (01r) at (8,6) {$01$};\n\\node (11r) at (10,6) {$11$};\n\n\\draw[dashed,-] (00r) to (10r);\n\\draw[dashed,-] (01r) to (11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->] (00mb) to (10rb);\n\\draw[->] (00mb) to (01m);\n\\draw[->] (10rb) to (10r);\n\\draw[dashed,->] (01m) to (01r);\n\\end{tikzpicture}\n}\n\n\\bigskip\n\nNow that Yingtai and Shanbo have changed their minds, the lies have become the truth! They both go to the party, and they will both expect the other to be surprised to find them at the party.\\footnote{This is in accordance with the resulting model, above on the right; for example, in the root \\fbox{11} of that model Yingtai believes ---move to the top 11--- that Shanbo believes ---move to 01 and 11--- that Yingtai is uncertain whether Shanbo comes to the party, both when she comes to the party ---the equivalence class consisting of 00 and 01--- and when she does not come to the party ---the cluster consisting of 10 and 11.} They declare their love to each other and they live happily ever after.\n\n\\weg{\n\n\\medskip\n\nWe conclude with two further technical observations on the model constructions and the analysis.\n\nFirstly, one can imagine Yingtai already changing her mind before Mei informs Shanbo that Yingtai is going to the party---in which case Mei would no longer have been lying. But that is a modelling artifact.
To avoid such a scenario we simply assume that Mei simultaneously sends two {\\em letters} to Yingtai and to Shanbo containing the lies. At that moment both are indeed lies. Then again, the information change affected in Yingtai and Shanbo depends on the moment the letter is opened... It does not greatly matter: Yingtai changing her mind can be modelled both before and after Mei talking to Shanbo and the resulting pointed models are bisimilar. But lying twice is far more interesting than lying once only.\n\nSecondly, as mentioned, when executing the private assignments we restricted ourselves to point-generated subframes. Without that restriction, for example, based on the model on the left above, the model in the middle above would look as follows:\n\n\\bigskip\n\n\\scalebox{.8}{\n\\noindent\n\\begin{tikzpicture}[thick]\n\\node (00mb) at (6.5,2.5) {\\fbox{$10$}};\n\\node (10rb) at (10,2.5) {$10$};\n\\node (01m) at (6.5,6) {$11$};\n\n\\node (00r) at (8,4) {$00$};\n\\node (10r) at (10,4) {$10$};\n\\node (01r) at (8,6) {$01$};\n\\node (11r) at (10,6) {$11$};\n\n\\node (x00mb) at (12.5,2.5) {$00$};\n\\node (x10rb) at (16,2.5) {$10$};\n\\node (x01m) at (12.5,6) {$01$};\n\n\\node (x00r) at (14,4) {$00$};\n\\node (x10r) at (16,4) {$10$};\n\\node (x01r) at (14,6) {$01$};\n\\node (x11r) at (16,6) {$11$};\n\n\\draw[dashed,->, bend left=20] (00r) to (x00r);\n\\draw[dashed,->, bend left=20] (01r) to (x01r);\n\\draw[dashed,->, bend left=20] (10r) to (x10r);\n\\draw[dashed,->, bend left=20] (11r) to (x11r);\n\\draw[-] (00r) to (01r);\n\\draw[-] (10r) to (11r);\n\\draw[dashed,->, bend left=20] (00mb) to (x10rb);\n\\draw[->] (00mb) to (01m);\n\\draw[->] (10rb) to (10r);\n\\draw[dashed,->, bend left=20] (01m) to (x01r);\n\n\\draw[dashed,-] (x00r) to (x10r);\n\\draw[dashed,-] (x01r) to (x11r);\n\\draw[-] (x00r) to (x01r);\n\\draw[-] (x10r) to (x11r);\n\\draw[dashed,->] (x00mb) to (x10rb);\n\\draw[->] (x00mb) to (x01m);\n\\draw[->] (x10rb) to (x10r);\n\\draw[dashed,->] (x01m) to 
(x01r);\n\\end{tikzpicture}\n}\n\n\\bigskip\n\nClearly the simplified visualization is better.\n\n}\n\n\\section{What lies in the future?} \\label{sec.last}\n\nWe have modelled the true lie $\\phi$ in believed announcement logic as the validity \n\\[ \\neg\\phi \\rightarrow [\\phi]\\phi \\]\nor as the satisfiability of \\[ \\neg \\phi \\wedge [\\phi]\\phi, \\] \nwhere the believed announcement models {\\em informative change} and not factual change. Finding satisfiable true lies, of which there are many, seems a first step towards finding valid true lies, of which there are few. The latter seem to give more insight. They have the shape of correctness statements $\\phi \\rightarrow [\\alpha] \\psi$ in Hoare logic; and indeed we have seen that valid true lies come with strong syntactic restrictions. In the private lie example we also modelled actions involving {\\em factual change}. With this tool (and action model logic in its full generality) in hand we can also formalize the Pang Juan and the Arnold Schwarzenegger true lies, and make them fit a similar pattern. However, in this case we stop at satisfiability and the mere description of the examples. We defer a more general treatment and generic patterns involving validity to future research.\n\nThe Pang Juan true lie can be formalized as a two-pointed action model where one action has precondition $p$, the other action has precondition $\\neg p$, where the postcondition is $p := \\top$ in both actions (`no matter what was the case, you will now die'), and where the agent only considers the action with precondition $p$ possible. The designated point is the action where $p$ is false (Pang Juan does not die). 
Let us call this action $\alpha(p)$.\n\n\medskip\n\n\begin{tikzpicture}[thick]\n\node (x) at (0,0) {\fbox{$\neg p; p := \top$}};\n\node (y) at (4,0) {$p; p := \top$};\n\draw[->] (x) to (y);\n\end{tikzpicture}\n\n\medskip\n\nAlternatively, we can separate the observation part from the ontic change part, and consider the composition of those two actions. The former is then the {\em lie that $p$}, i.e., the believed announcement that $p$ when $p$ is false. The action model equivalent of a believed announcement is a two-point action model (see Appendix A). The latter is a singleton action called a public assignment (namely of $\top$ to $p$). So we get:\n\n\medskip\n\n\begin{tikzpicture}[thick]\n\node (x) at (0,0) {\fbox{$\neg p$}};\n\node (y) at (2,0) {$p$};\n\draw[->] (x) to (y);\n\end{tikzpicture}\n\quad \n\begin{tikzpicture}[thick]\n\node (x) at (0,0) {composed with};\n\end{tikzpicture}\n\quad \n\begin{tikzpicture}[thick]\n\node (x) at (0,0) {\fbox{$p := \top$}};\n\end{tikzpicture}\n\n\medskip\n\nAs the observing agent is Pang Juan and the killing agent is one of his opponents, one may prefer the sequence of two epistemic actions over the single epistemic action. On the other hand, as Pang Juan's observation and his subsequent death seem inextricably linked, one might prefer the single epistemic action. However, there is no notion of agency in dynamic epistemic logics: the composition of two actions is indistinguishable from the single action that combines the informative and ontic part: in the logic we have that $[\alpha(p)]\phi$ is equivalent to $[p][p := \top]\phi$, for all $\phi$. So it does not matter. (We also recall that Pang Juan would die no matter which observation he made. We can satisfy this requirement by replacing the announcement that $p$ by the announcement that $q$ in the modelling with two epistemic actions. But that does not give any insight.) 
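For illustration, the designated point of this composed action can be rendered as a tiny executable update. The encoding below (worlds as dictionaries of atoms) is an assumption made for this sketch only, not the paper's formal action-model semantics:

```python
# Minimal sketch (assumed encoding) of the designated point of alpha(p):
# precondition ~p, postcondition p := True ("no matter what, you will now die").

def alpha_p_designated(world):
    """Apply the designated action of alpha(p) to a world (a dict of atoms)."""
    assert not world["p"], "the designated action has precondition ~p"
    updated = dict(world)
    updated["p"] = True   # ontic change: the assignment p := True
    return updated

w = {"p": False}               # initially ~p holds (Pang Juan has not died)
after = alpha_p_designated(w)
print(w["p"], after["p"])      # False True: ~p before the action, p after it
```

Running the sketch confirms the intended effect: ~p holds before the action and p holds after it, the shape of a true lie with factual change.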
We end up with the satisfiability of \n\[ \neg p \wedge [\alpha(p)]p \] but without being able to generalize this to a validity of shape $\neg p \rightarrow [\alpha(p)]p$. This would require stronger or different logics, involving agency, or other notions of action or interaction \citep{peteretal:2016}, or of {\em self-fulfilling prophecy}.\n\nIn the Arnold Schwarzenegger example there is a lot of belief change and factual change going on. The epistemic goals of both agents, and their respective protocols in order to reach those goals, seem to play an important part. \n\nLet $p_j$ represent `Jamie Lee Curtis is a spy' and $p_a$ represent `Arnold Schwarzenegger is a spy'. At first Arnold is a spy ($p_a$), Arnold believes that he is a spy ($B_a p_a$ ---of course, the reader might say; but hang on!), Jamie believes he is not a spy ($B_j \neg p_a$), and Arnold's goal is to keep it that way ($B_j \neg p_a$). Jamie is not a spy ($\neg p_j$), she believes that she is not a spy ($B_j \neg p_j$), and Arnold also believes this ($B_a \neg p_j$). Jamie's goal is also to keep it that way. So they both believe that the other is not a spy, and this is a stable situation, because both want the other to believe that. We now get a number of transitions where beliefs, facts, and goals all change. The initial situation is (i). 
\\[\\begin{array}{l|llllll|l|l}\n&&&&&&& \\text{goal } a & \\text{goal } j \\\\\n\\hline\n(i) & p_a & \\neg p_j & B_a p_a & B_j \\neg p_a & B_a \\neg p_j & B_j \\neg p_j & B_j \\neg p_a & B_a \\neg p_j \\\\\n(ii) & p_a & \\neg p_j & B_a p_a & B_j \\neg p_a & B_a \\neg p_j & \\pmb{B_j p_j} & B_j \\neg p_a & \\pmb{B_a p_j} \\\\\n(iii) & p_a & \\neg p_j & B_a p_a & \\pmb{B_j p_a} & B_a \\neg p_j & B_j p_j & B_j \\neg p_a & B_a p_j \\\\\n(iv) & p_a & \\pmb{p_j} & B_a p_a & B_j p_a & \\pmb{B_a p_j} & B_j p_j & \\pmb{B_j p_a} & B_a p_j \\\\\n\\end{array} \\]\nThe epistemic goals of the agents are only realized in (i) and (iv); only there they are stable. Indeed, the last stage is the happy ending. (For details, please see the movie.) In (ii), Jamie has started to enact a spy in order to make Arnold believe that she is a spy; she even believes that she has become a spy, without realizing that she has been set up: she is not really a spy. In (iii), she finds out that Arnold is a spy (because unlike Arnold, third parties believe that she is a spy and that she is collaborating with Arnold). In (iv) ---abstracting from intervening details involving helicopter fights and Miami skyscrapers--- they end up collaborating as spies and believing in each other, now truthfully so. Again, a stable situation.\n\nLet us call the composition of all these actions $\\alpha(p_a,p_j)$. Again we have satisfiability of \n\\[ (p_a \\wedge \\neg p_j) \\wedge [\\alpha(p_a,p_j)](p_a \\wedge p_j) \\]\nbut no general pattern involving validity of $(p_a \\wedge \\neg p_j) \\rightarrow [\\alpha(p_a,p_j)](p_a \\wedge p_j)$ or any other additional insight. Lacking in our formal analysis in dynamic epistemic logic are much stronger notions of {\\em epistemic protocol} (how does one realize an epistemic goal given a belief state?), protocol logics, and, again, notions of agency \\citep{bolanderetal:2011,Wang10:phd}. 
Second-order false belief scenarios are modelled in \\cite{brauneretal:2016,arslanetal:2015}.\n\n\n\\section{Conclusions}\n\nWe have presented a large variety of communicative scenarios involving agents truthtelling and lying while keeping their beliefs consistent. Of particular interest were iterations of announcements where the beliefs of the agents never stabilize: the models keep changing with every next announcement. The investigation can be seen as the obvious next step given existing in-depth investigations of successful and self-refuting updates changing agents' knowledge, i.e., on models where epistemic stances are interpreted with equivalence relations. We have carried this forward not merely to the ${\\mathcal{KD}45}$ models for believing agents, but also to models without structural properties. Two new types of update then come to the fore: true lies (that become true after a lying announcement) and impossible lies (that stay false even after a lying announcement), and also iterations of such announcements. An open question on these iterations is which $\\sigma$-valid formulas are realizable.\n\n\\section*{Acknowledgements}\n\nYanjing Wang thanks C\\'edric D\\'egremont and Andreas Witzel for discussions on the example in Section \\ref{sec.five}, and he acknowledges support from the National Program for Special Support of Eminent Professionals. Hans van Ditmarsch acknowledges support from ERC project EPS 313360. He is also affiliated to IMSc (Institute for Mathematical Sciences), Chennai, India. We thank the reviewers of Synthese for their helpful comments.\n\n\\bibliographystyle{natbib}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{intro} Introduction}\n\n\\begin{figure}[tp]\n\\scalebox {0.6}{\\includegraphics{scfig1}}\n\\caption{Shown schematically is a magnetic flux tube which exits the solar \nphotosphere (shaded region) and enters the solar corona (clear region) through \na sunspot on the chromosphere boundary. 
The magnetic flux tube then exits the \nsolar corona and enters back into the photosphere through a second sunspot on \nthe chromosphere boundary. The walls of the magnetic flux tube are a vortex \nof circulating electric currents.}\n\label{fig1}\n\end{figure}\n\nFor physical reasons which are presently not entirely clear, dark sunspots exist \non the optical solar surface. They have been observed ever since Galileo first saw \nthem with his optical telescope. It was later found that magnetic flux tubes \nexit out of some solar sunspots and enter back into \nothers\cite{Hale:1908,Hale:1919}. Employing modern X-ray telescopes,\nspectacular pictures have been taken of magnetic flux \ntubes\cite{Priest:1998} which arch into and out of the solar corona,\nwell above the optical solar surface. The situation is schematically shown in \nFIG.\ref{fig1}. The closed magnetic flux tubes are pictured in both the solar \nphotosphere and the solar corona. The closed flux tube magnetic field lines enter \ninto the solar corona through one sunspot and exit out of the solar corona \nthrough another sunspot. The floating flux tubes in the solar corona are held up by \na magnetic buoyancy\cite{Parker:1955}. The outer walls of the magnetic flux tube consist \nof large circulating electric currents forming a turbulent vortex with a darker, \ncomparatively quiet magnetic core. When the magnetic flux tubes explode\cite{Parker:1957} \ninto a solar flare, with or without a coronal mass ejection\cite{Kahler:2007}, \ncharged particles with very high energy are produced\cite{Forbush:1946,Dorman:1993,Reams:2004,Belov:2005,Vashenyuka:2005,Bostanjyan:2007}, \nsay up to \begin{math} \sim 10^2 \end{math} GeV. 
These relativistic particles can escape \nthe sun and be observed on earth as ground level cosmic ray \nenhancements\cite{Shea:2001,Cliver:2006,L3:2006} induced by solar flares.\n\nOur purpose is to discuss how such energetic particles arise in the solar corona \nand how these particles induce nuclear reactions well above the solar \nphotosphere. The central feature of our explanation, which rests on \nFaraday's law, is the notion of a solar accelerator closely analogous to the \nbetatron\cite{Kirst:1941,Serber:1941}. Conceptually, the betatron is a step up \ntransformer whose secondary coil is a toroidal ring of accelerating charged \nparticles circulating about a Faraday law (time varying) magnetic flux tube.\n\nActing as a step up transformer, a solar magnetic flux tube can \ntransfer circulating charged particle kinetic energy upward from the \nphotosphere to circulating charged particles located in the corona. Circulating \ncurrents located deep in the photosphere can be viewed conceptually as a net \ncurrent \begin{math} I_{\cal P} \end{math} circulating around a primary coil. \nCirculating currents found high in the corona can be viewed as a net current \n\begin{math} I_{\cal S} \end{math} circulating around a secondary coil. \nIf \begin{math} K_{\cal P} \end{math} and \begin{math} K_{\cal S} \end{math} \nrepresent, respectively, the charged particle kinetic energies in the primary and \nsecondary coils, then one finds the step up transformer power equation \n\begin{math} \n\dot{K}_{\cal P}=V_{\cal P}I_{\cal P} =V_{\cal S}I_{\cal S}=\dot{K}_{\cal S} \n\end{math}, \nwherein \begin{math} V_{\cal P} \end{math} and \begin{math} V_{\cal S} \end{math} \nrepresent, respectively, the voltages across the primary and secondary coils. \nThe total kinetic energy transfer is \n\begin{math} \n\Delta K_{\cal P}=\int V_{\cal P}I_{\cal P}dt=\int V_{\cal S}I_{\cal S}dt=\Delta K_{\cal S} \n\end{math}. 
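The transformer bookkeeping above can be illustrated with a toy calculation; the particle numbers below are assumptions chosen purely for illustration:

```python
# Illustrative sketch of the step up transformer energy balance:
# the same total transferred kinetic energy, Delta K_P = Delta K_S,
# shared among far fewer secondary particles, raises the energy per particle.

N_primary = 1.0e12        # many charged particles circulating in the photosphere (assumed)
N_secondary = 1.0e6       # far fewer charged particles circulating in the corona (assumed)
E_primary_each = 1.0      # kinetic energy per primary particle, arbitrary units

delta_K = N_primary * E_primary_each        # Delta K_P = integral of V_P I_P dt
E_secondary_each = delta_K / N_secondary    # Delta K_S = Delta K_P, shared among fewer particles
print(E_secondary_each / E_primary_each)    # a 1e6-fold gain in energy per particle
```

The ratio of particle numbers directly sets the per-particle energy amplification, which is the sense in which the transfer is collective.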
\nThe essence of the step up transformer mechanism is that the kinetic energy distributed \namong a very large number of charged particles in the photosphere can \nbe transferred via the magnetic flux tube to a distributed kinetic energy shared among \na distant, much smaller number of charged particles located in the corona, i.e.,\na small accelerating voltage in the primary coil produces a large accelerating voltage \nin the secondary coil. The resulting transfer of kinetic energy is \n{\em collective} from a large group of charged particles to a smaller group of charged \nparticles. The kinetic energy per charged particle of the dilute gas in the corona \nmay then become much higher than the kinetic energy per charged particle of the \nmore dense fluid in the photosphere. In terms of the connection between temperature \nand kinetic energy, the temperature of the dilute gas in the corona will be much higher \nthan the temperature of the more dense fluid in the photosphere. \n\nIf the kinetic energy of the circulating currents in that part of flux tubes floating \nin the corona becomes sufficiently high, then the flux tubes can explode violently into \na solar flare which may be accompanied by a coronal mass ejection. \nThe loss of magnetic energy during the flux tube explosion is rapidly converted into \ncharged particle kinetic energy. The relativistic high energy products of the explosion \nyield both nuclear and elementary particle interactions. These processes are discussed \nin Sec.\ref{flare}. For magnetic flux tubes of smaller diameter which do not explode \ninto a flare and\/or a coronal mass ejection, one may still have low energy nuclear \nreactions that occur in a roughly steady state by continual conversion of magnetic \nfield energy into charged particle energy. Such processes can account for the fact \nthat the solar corona remains continually much hotter than the photosphere.\nSteady state low energy nuclear processes are discussed in Sec.\ref{lenr}. 
In the \nconcluding Sec.\ref{conc} we further discuss the notion that not all nuclear \nprocesses necessarily take place in or near the solar core. \n\n\n\section{\label{flare} Solar Flares}\n\n\begin{figure}[bp]\n\scalebox {0.6}{\includegraphics{scfig2}}\n\caption{Boson exchange diagrams for electron-proton scattering \ninto a lepton plus ``anything'' $\{\ e^- +p^+ \to l+X\ \}$ include photon \n$\gamma $ and $Z$ exchange wherein the final lepton is an electron, \nas well as charged $W^-$ exchange wherein the final state lepton is a neutrino. \nOn an energy scale of $\sim 300\ {\rm GeV}$, all of these exchange processes have \namplitudes of similar orders of magnitude.}\n\label{fig2}\n\end{figure}\n\nThe magnetic flux through a cylindrical tube of inner cross sectional area \n\begin{math} \Delta S \end{math} and mean magnetic field \begin{math} B \end{math} \nis \n\begin{equation}\n\Delta \Phi =B\Delta S.\n\label{flare1}\n\end{equation}\nIf a tube explodes in a time period \begin{math} \Delta t \end{math}, \nthen the resulting loss of magnetic flux yields a mean Faraday law \naccelerator voltage around the tornado walls as given by \n\begin{equation}\n\overline{\rm V}=\frac{\Delta \Phi}{\Delta t},\n\label{flare2}\n\end{equation}\ni.e. 
\n\\begin{equation}\ne\\overline{\\rm V}=ecB\\left(\\frac{\\Delta S}{\\Lambda }\\right) \n\\ \\ \\ {\\rm wherein}\\ \\ \\ \\Lambda=c\\Delta t.\n\\label{flare3}\n\\end{equation}\nA useful identity for numerical estimates is \n\\begin{equation}\necB\\equiv 29.9792458\n\\left[\\frac{\\rm GeV}{\\rm kilometer}\\right]\n\\left[\\frac{B}{\\rm kiloGauss}\\right].\n\\label{flare4}\n\\end{equation}\nFor a coronal mass ejection exploding coil with a Faraday flux loss \ntime \\begin{math} \\Delta t \\sim 10^2\\ {\\rm second} \\end{math} and \nwith substantial sun spots at the ends of the magnetic flux coil, one \nmay estimate\\cite{Dikpati:2002,Lozitska:1994,Benz:2008} \n\\begin{eqnarray}\n\\Delta S\\approx \\pi R^2 ,\n\\nonumber \\\\ \nR\\sim 10^4\\ {\\rm kilometer}, \n\\nonumber \\\\ \nB\\sim 1\\ {\\rm kiloGauss},\n\\nonumber \\\\ \n\\Lambda \\sim 3\\times 10^7\\ {\\rm kilometer}, \n\\nonumber \\\\ \ne\\overline{\\rm V}\\sim 300\\ {\\rm GeV}.\n\\label{flare5}\n\\end{eqnarray}\n\nThe uncharged walls of the circulating vortex are represented roughly by \nan electron beam circulating in one direction and proton beam circulating \nin the other direction. These two colliding beams are hit with a flare or \ncoronal mass ejecting Faraday law voltage pulse, as in Eq.(\\ref{flare5}), \nsetting an electron proton collision energy scale of \n\\begin{math} E\\sim 300\\ {\\rm GeV} \\end{math}. At such a high energy \nscale, electron-proton scattering\\cite{Roberts:1990} is ruled by electro-weak \nexchange interactions all of the same order of magnitude in probability. 
\nShown in FIG.\ref{fig2} is the electro-weak boson exchange Feynman diagram \nfor electron-proton scattering \n\begin{equation}\ne^- + p^+ \to l+X.\n\label{flare6}\n\end{equation}\nThe final state lepton is an electron for the case of photon \n\begin{math} \gamma \end{math} or \begin{math} Z \end{math} \nexchange and the final state lepton is a neutrino for the case \nof \begin{math} W^- \end{math} exchange.\n\nA solar flare or coronal mass ejection event is thereby accompanied \nby an increased emission of solar neutrinos over a broad energy scale \nas well as relativistic protons\cite{Bostanjyan:2007}, \nneutrons\cite{Hurford:2006,Murphy:1999,Hua:2002,Murphy:2007, Watanabe:2003} \nand electrons\cite{Kahler:2007}. The full \nplethora\cite{Ryan:2006,Li:2007} of final \begin{math} X \end{math} \nstates including electron, muon and pion particle anti-particle pairs \nshould also be present in such events. The conversion of magnetic field \nenergy into relativistic particle kinetic energy via the Faraday law voltage \npulse is collective in that the magnetic flux in the core of the vortex depends \non the rotational currents of {\em all} of the initial protons and electrons. \n\n\n\section{\label{lenr} Low Energy Nuclear Reactions}\n \nEven without spectacular solar flare explosions ejecting mass through the \nsolar corona, there exists (within flux tubes) collective magnetic \nenergy which allows for a significant occurrence of low energy nuclear reactions \nat many different locations in and around the sun. \nIn particular, let us consider the inverse beta decay reaction \n\begin{equation}\nW_{\rm magnetic}+e^- + p^+ \to \nu_e + n\n\label{lenr1}\n\end{equation} \nwherein the final state lepton is a neutrino, the final state \n``\begin{math} X \end{math}'' is a neutron \begin{math} n \end{math} \nand \begin{math} W_{\rm magnetic} \end{math} is magnetic field energy \nfed into the reaction. 
\n\nIn a steady state flux tube (which does {\\em not} explode) entering \ninto the solar corona from one sunspot and exiting out of the solar \ncorona through another sunspot, there is a substantial amount of \nstored magnetic energy. If there \nis a small change \\begin{math} \\delta I \\end{math} in the current going \naround the vortex circumference, then the small change in the magnetic \nfield energy \n\\begin{math} \\delta {\\cal E} \\end{math} obeys \n\\begin{equation}\n\\delta {\\cal E}=\\Phi \\delta I. \n\\label{lenr2}\n\\end{equation}\nIf \\begin{math} L \\end{math} denotes the length of the vortex circumference of the \nmagnetic flux tube, then the change in current due to the weak interaction reaction \nEq.(\\ref{lenr1}) is given by \n\\begin{equation}\n\\delta I=-\\frac{ev}{L}\\ ,\n\\label{lenr3}\n\\end{equation}\nwherein \\begin{math} v \\end{math} is the relative velocity component \n(tangent to the circumference) between the proton and electron. \nPutting \\begin{math} \\Phi = B\\Delta S \\end{math} and \n\\begin{math} \\delta {\\cal E}=-W_{\\rm magnetic} \\end{math} yields \n\\begin{equation}\nW_{\\rm magnetic}=ec\\left(\\frac{\\Phi }{L}\\right)\\frac{v}{c}=\necB\\left(\\frac{\\Delta S}{L}\\right)\\frac{v}{c}\\ .\n\\label{lenr4}\n\\end{equation}\nThe product \\begin{math} (ecB) \\end{math} is given in Eq.(\\ref{flare4}). 
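Eq. (lenr4) can likewise be spot-checked numerically for a cylindrical tube (cross section ΔS = πR², circumference L = 2πR); the values below are the order-of-magnitude estimates used in the text:

```python
import math

# Spot-check of W_magnetic = (ecB)(Delta S / L)(v/c), Eq. (lenr4),
# for a cylindrical flux tube.
ecB = 29.9792458        # GeV per (kilometer * kiloGauss), Eq. (flare4)
R = 1.0e2               # tube radius in kilometers (text estimate)
B = 1.0                 # mean field in kiloGauss (text estimate)
v_over_c = 1.0e-2       # relative tangential proton-electron velocity (text estimate)

dS = math.pi * R**2     # inner cross section, km^2
L = 2.0 * math.pi * R   # vortex circumference, km; dS/L reduces to R/2
W = ecB * B * (dS / L) * v_over_c   # W_magnetic in GeV
print(round(W, 1))      # about 15 GeV
```

The result reproduces the ~15 GeV scale of available magnetic energy per reaction quoted in the text.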
\nFor the case of a cylindrical flux tube, \n\begin{equation}\n\frac{\Delta S}{L}=\left(\frac{\pi R^2}{2\pi R}\right)=\frac{R}{2}\ ,\n\label{lenr5}\n\end{equation}\nyielding \n\begin{equation}\nW_{\rm magnetic}\approx 15{\rm \ GeV} \n\left[\frac{R}{\rm kilometer}\right]\n\left[\frac{B}{\rm kiloGauss}\right]\n\frac{v}{c}\ .\n\label{lenr6}\n\end{equation}\nEmploying the estimates \n\begin{eqnarray} \nR\sim 10^2\ {\rm kilometer}, \n\nonumber \\ \nB\sim 1\ {\rm kiloGauss},\n\nonumber \\ \n\frac{v}{c}\sim 10^{-2}, \n\nonumber \\ \nW_{\rm magnetic}\sim 15\ {\rm GeV}.\n\label{lenr7}\n\end{eqnarray}\nOn the energy scale \begin{math} W_{\rm magnetic} \ll 300\ {\rm GeV} \end{math} of \nEq.(\ref{lenr7}), the weak interaction \begin{math} p^+ e^- \end{math} \nprocesses Eq.(\ref{lenr1}) that produce neutrons proceed more slowly than the purely \nelectromagnetic \begin{math} p^+ e^- \end{math} processes. Nevertheless one finds \nappreciable neutron production in the solar corona. The production of neutrons among \nthe protons allows for the creation of nuclei with higher mass numbers via neutron-capture \nnuclear reactions and subsequent beta decays. \n\n \n\n\n\section{\label{conc} Conclusion}\n\nMagnetic flux tubes arise\cite{Dikpati:2002,Lozitska:1994} out of\nthe turbulent magneto-fluid mechanics of a solar fluid plasma with high electrical \nconductivity. Turbulent magneto-fluid flows yield a full spectrum of magnetic \nfield values\cite{Benz:2008} expected to vary randomly over many different \nlength scales. The estimates of the magnetic field \ndiscussed in this work are merely order of magnitude. 
They are based on observations \nof the magnetic flux tubes entering into and exiting out of sunspots and also into and \nout of smaller crevices and holes that are commonly observed on the sun's optical surface.\n\nFor many years, the source of relativistic particle \nfluxes\\cite{Yousef:2005,Roussev:2004} often observed to emanate from the solar corona \nhas been theoretically obscure. Our explanation for these fluxes is simply derived from \nFaraday's law \n\\begin{equation}\n-\\frac{\\partial {\\bf B}}{\\partial t}=curl{\\bf E}\n\\label{conc1} \n\\end{equation} \nas it appears in well understood transformer and inductor circuits. Circulating currents\naround the walls of a flux tube can transfer energy into some parts of the magnetic field \nconfiguration while removing equal amounts of energy from other distant parts of the \nmagnetic field configuration. This transformer action is very well understood. \n\n\n\n\n\nThe resulting energy balance allows a large number of low energy charged particles to \ncollectively transfer their kinetic energy to a significantly smaller number of charged \nparticles whose energy per particle then becomes very high. When the charged particle \nenergy of low density solar corona particles is made sufficiently high, reactions of \nthe form in Eqs.(\\ref{flare6}) and (\\ref{lenr1}) clearly can take place, \nleading to neutron production. Once neutrons are created and added to the \nelectron-proton plasma, a variety of nuclear synthesis reactions \nbecome possible\\cite{Fowler:1965}. If that is the case, then Coulomb barrier-penetrating \nfusion reactions in the sun's core are not necessarily the sun's {\\em only} significant \nsource of solar nuclear energy. Relativistic particle fluxes have been clearly observed \nemanating from regions located well above the sun's surface and very far away \nfrom the solar core. 
Finally, it has not escaped our notice that the energetic particle \nproduction via the collective mechanisms discussed in this work may shed some light on the \norigin of the anomalous short-lived isotopes observed on other astronomical \nobjects\\cite{Cowley:2004}. \n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}