\\section{Introduction}\nIn the recent papers~\\cite{finite, infinite} the fermionic signature operator was introduced\non globally hyperbolic Lorentzian spin manifolds. It is a bounded symmetric operator on the Hilbert space\nof solutions of the Dirac equation which depends on the global geometry of space-time.\nThis raises the question of how the geometry of space-time is related to spectral properties of\nthe fermionic signature operator. The first step in developing the resulting ``Lorentzian spectral geometry''\nis the paper~\\cite{drum} where the simplest situation of Lorentzian surfaces is considered.\nIn the present paper, we proceed in a somewhat different direction and \nshow that there is a nontrivial index associated to the fermionic signature operator.\nThis is the first time that an index is defined for a geometric operator on a Lorentzian manifold.\n\nWe make essential use of the decomposition of spinors in even space-time dimension\ninto left- and right-handed components (the ``chiral grading''). 
The basic idea is to\ndecompose the fermionic signature operator~$\\mathscr{S}$ using the chiral grading as\n\\begin{equation} \\label{Sigprop}\n\\mathscr{S} = \\mathscr{S}_L + \\mathscr{S}_R \\qquad \\text{with} \\qquad \\mathscr{S}_L^* = \\mathscr{S}_R \\:,\n\\end{equation}\nand to define the so-called {\\em{chiral index}} of~$\\mathscr{S}$ as the Noether index of~$\\mathscr{S}_L$.\nAfter providing the necessary preliminaries (Section~\\ref{secprelim}), this definition will be given in\nSection~\\ref{secindex} in space-times of finite lifetime.\nIn order to work out the mathematical essence of our index, in Section~\\ref{seccfs} we \nalso give its definition in the general setting of causal fermion systems\n(for an introduction to causal fermion systems see~\\cite{rrev} or~\\cite{topology}).\nSection~\\ref{secodd} is devoted to a variant of the chiral index which applies in the special case\nof the massless Dirac equation and a Dirac operator which is odd with respect to the chiral grading.\nIn Section~\\ref{secstable} we analyze the invariance properties of the chiral indices when\nspace-time or the Dirac operator is deformed by a homotopy.\nIn Sections~\\ref{secex1}--\\ref{secex3} we construct examples of fermionic signature operators\nwith a non-trivial index and illustrate the homotopy invariance.\nFinally, in Section~\\ref{secoutlook} we discuss our results and give an outlook on\npotential extensions and applications, such as the generalization to space-times of infinite lifetime.\n\nWe point out that the purpose of this paper is to define the chiral index, to study a few basic\nproperties and to show in simple examples that it is in general non-trivial.\nBut we do not work out any physical applications, nor do we make the connection\nto geometric or topological invariants.\nThese shortcomings are mainly due to the fact that we only succeeded in computing\nthe index explicitly in highly symmetric and rather artificial examples.\nMoreover, it does not seem 
easy to verify the conditions needed for the homotopy invariance.\nFor these reasons, we leave physically interesting examples and geometric stability results\nas a subject of future research.\nAll we can say for the moment is that the chiral index describes the ``chiral asymmetry''\nof the Dirac operator in terms of an integer. This integer seems to depend on the geometry of the\nboundary of space-time and on the singular behavior of the potentials in the\nDirac equation. Smooth potentials in the Dirac equation, however, tend to not affect the\nindex.\n\n\\section{Preliminaries} \\label{secprelim}\nWe recall a few basic constructions from~\\cite{finite}. Let~$(\\mycal M, g)$ be a smooth, globally\nhyperbolic Lorentzian spin manifold of even dimension~$k \\geq 2$.\nFor the signature of the metric we use the convention~$(+ ,-, \\ldots, -)$.\nWe denote the spinor bundle by~$S\\mycal M$. Its fibers~$S_x\\mycal M$ are endowed\nwith an inner product~$\\mathopen{\\prec} .|. \\mathclose{\\succ}_x$ of signature~$(n,n)$\nwith~$n=2^{k\/2-1}$ (for details see~\\cite{baum, lawson+michelsohn}),\nwhich we refer to as the spin scalar product. Clifford multiplication is described by a mapping~$\\gamma$\nwhich satisfies the anti-commutation relations,\n\\[ \\gamma \\::\\: T_x\\mycal M \\rightarrow \\text{\\rm{L}}(S_x\\mycal M) \\qquad\n\\text{with} \\qquad \\gamma(u) \\,\\gamma(v) + \\gamma(v) \\,\\gamma(u) = 2 \\, g(u,v)\\,\\mbox{\\rm 1 \\hspace{-1.05 em} 1}_{S_x(\\mycal M)} \\:. \\]\nWe write Clifford multiplication in components with the Dirac matrices~$\\gamma^j$\nand use the short notation with the Feynman dagger, $\\gamma(u) \\equiv u^j \\gamma_j \\equiv \\slashed{u}$.\nThe metric connections on the tangent bundle and the spinor bundle are denoted by~$\\nabla$.\n\nIn the even-dimensional situation under consideration, the spinor bundle has a decomposition\ninto left- and right-handed components. 
We describe this chiral grading by an operator~$\\Gamma$\n(the ``pseudoscalar operator,'' in physics usually denoted by~$\\gamma^5$),\n\\[ \\Gamma \\::\\: S_x\\mycal M \\rightarrow S_x\\mycal M \\:, \\]\nhaving for all~$u \\in T_x\\mycal M$ the properties\n\\begin{equation} \\label{gammaprop}\n\\Gamma^* = -\\Gamma\\:,\\qquad \\Gamma^2 = \\mbox{\\rm 1 \\hspace{-1.05 em} 1} \\:,\\qquad \n\\Gamma\\, \\gamma(u) = -\\gamma(u) \\,\\Gamma \\:,\\qquad \\nabla \\Gamma = 0\n\\end{equation}\n(where the star denotes the adjoint with respect to the spin scalar product).\nWe denote the chiral projections to the left- and right-handed components by\n\\begin{equation} \\label{chiLRdef}\n\\chi_L = \\frac{1}{2} \\big( \\mbox{\\rm 1 \\hspace{-1.05 em} 1} - \\Gamma \\big) \\qquad \\text{and} \\qquad \\chi_R = \\frac{1}{2} \\big( \\mbox{\\rm 1 \\hspace{-1.05 em} 1} + \\Gamma \\big) \\:.\n\\end{equation}\n\nThe sections of the spinor bundle are also referred to as wave functions.\nWe denote the smooth sections of the spinor bundle by~$C^\\infty(\\mycal M, S\\mycal M)$.\nSimilarly, $C^\\infty_0(\\mycal M, S\\mycal M)$ denotes the smooth sections with compact support.\nOn the compactly supported wave functions, one can introduce the Lorentz invariant\ninner product\n\\begin{gather}\n\\mathopen{<} .|. \\mathclose{>} \\::\\: C^\\infty_0(\\mycal M, S\\mycal M) \\times C^\\infty_0(\\mycal M, S\\mycal M) \\rightarrow \\mathbb{C} \\:, \\\\\n\\mathopen{<} \\psi|\\phi \\mathclose{>} := \\int_\\mycal M \\mathopen{\\prec} \\psi | \\phi \\mathclose{\\succ}_x\\: d\\mu_\\mycal M\\:. 
\\label{stip}\n\\end{gather}\nThe Dirac operator~${\\mathcal{D}}$ is defined by\n\\[ {\\mathcal{D}} := i \\gamma^j \\nabla_j + {\\mathscr{B}} \\::\\: C^\\infty(\\mycal M, S\\mycal M) \\rightarrow C^\\infty(\\mycal M, S\\mycal M)\\:, \\]\nwhere~${\\mathscr{B}} \\in \\text{\\rm{L}}(S_x)$ (the ``external potential'') typically is a smooth multiplication operator\nwhich is symmetric with respect to the spin scalar product.\nIn some of our examples, ${\\mathscr{B}}$ will be chosen more generally as a convolution operator\nwhich is symmetric with respect to the inner product~\\eqref{stip}.\nFor a given real parameter~$m \\in \\mathbb{R}$ (the ``mass''), the Dirac equation reads\n\\[ ({\\mathcal{D}} - m) \\,\\psi = 0 \\:. \\]\nWe mainly consider solutions in the class~$C^\\infty_{\\text{sc}}(\\mycal M, S\\mycal M)$ of smooth sections\nwith spatially compact support. On such solutions one has the scalar product\n\\begin{equation} \\label{print}\n(\\psi | \\phi) = 2 \\pi \\int_\\mycal N \\mathopen{\\prec} \\psi | \\slashed{\\nu} \\phi \\mathclose{\\succ}_x\\: d\\mu_\\mycal N(x) \\:,\n\\end{equation}\nwhere~$\\mycal N$ denotes any Cauchy surface and~$\\nu$ its future-directed normal.\nDue to current conservation, the scalar product is\nindependent of the choice of~$\\mycal N$ (for details see~\\cite[Section~2]{finite}).\nForming the completion gives the Hilbert space~$(\\H_m, (.|.))$.\n\nFor the construction of the fermionic signature operator, we need to\nextend the bilinear form~\\eqref{stip} to the solution space of the Dirac equation.\nIn order to ensure that the integral in~\\eqref{stip} exists, we need\nto make the following assumption (for more details see~\\cite[Section~3.2]{finite}).\n\\begin{Def} \\label{defmfinite}\nA globally hyperbolic space-time~$(\\mycal M,g)$ is said to be {\\bf{{\\em{m}}-finite}} if\nthere is a constant~$c>0$ such that for\nall~$\\phi, \\psi \\in \\H_m \\cap C^\\infty_{\\text{sc}}(\\mycal M, S\\mycal M)$, the\nfunction~$\\mathopen{\\prec} 
\\phi | \\psi \\mathclose{\\succ}_x$ is integrable on~$\\mycal M$ and\n\\[ |\\mathopen{<} \\phi | \\psi \\mathclose{>}| \\leq c \\:\\|\\phi\\|\\: \\|\\psi\\| \\]\n(where~$\\| . \\| = (.|.)^\\frac{1}{2}$ is the norm on~$\\H_m$).\n\\end{Def} \\noindent\nUnder this assumption, the space-time inner product is well-defined as\na bounded bilinear form on~$\\H_m$,\n\\[ \\mathopen{<} .|. \\mathclose{>} \\::\\: \\H_m \\times \\H_m \\rightarrow \\mathbb{C}\\:. \\]\nApplying the Riesz representation theorem, we can\nuniquely represent this bilinear form with a signature operator~$\\mathscr{S}$,\n\\begin{equation} \\label{Sdef}\n\\mathscr{S} \\::\\: \\H_m \\rightarrow \\H_m \\qquad \\text{with} \\qquad\n\\mathopen{<} \\phi | \\psi \\mathclose{>} = ( \\phi \\,|\\, \\mathscr{S}\\, \\psi) \\:.\n\\end{equation}\nWe refer to~$\\mathscr{S}$ as the {\\bf{fermionic signature operator}}.\nIt is obviously a bounded symmetric operator on~$\\H_m$.\nWe note that the construction of the fermionic signature operator is manifestly covariant\nand independent of the choice of a Cauchy surface.\n\n\\section{The Chiral Index} \\label{secindex}\nWe now modify the construction of the fermionic signature operator by inserting the chiral projection\noperators into~\\eqref{stip}. 
We thus obtain the bilinear forms\n\\begin{equation} \\label{stipLR}\n\\mathopen{<} \\psi|\\phi \\mathclose{>}_{L\\!\/\\!R} = \\int_\\mycal M \\mathopen{\\prec} \\psi \\,|\\, \\chi_{L\\!\/\\!R} \\,\\phi \\mathclose{\\succ}_x\\: d\\mu_\\mycal M\\:.\n\\end{equation}\nFor the space-time integrals to exist, we need the following assumption.\n\\begin{Def} \\label{defGfinite}\nA globally hyperbolic space-time~$(\\mycal M,g)$ is said to be {\\bf{$\\Gamma$-finite}} if\nthere is a constant~$c>0$ such that for\nall~$\\phi, \\psi \\in \\H_m \\cap C^\\infty_{\\text{sc}}(\\mycal M, S\\mycal M)$, the\nfunction~$\\mathopen{\\prec} \\phi | \\Gamma \\psi \\mathclose{\\succ}_x$ is integrable on~$\\mycal M$ and\n\\[ |\\mathopen{<} \\phi | \\Gamma \\psi \\mathclose{>}| \\leq c \\:\\|\\phi\\|\\: \\|\\psi\\| \\:. \\]\n\\end{Def} \\noindent\nThere seems to be no simple relation between $m$-finiteness and $\\Gamma$-finiteness.\nBut both conditions are satisfied if we assume that the space-time~$(\\mycal M,g)$ has {\\bf{finite lifetime}}\nin the sense that it admits a foliation~$(\\mycal N_t)_{t \\in (t_0, t_1)}$ by Cauchy surfaces with~$t_0, t_1 \\in \\mathbb{R}$\nsuch that the function~$\\langle \\nu, \\partial_t \\rangle$ is bounded on~$\\mycal M$\n(see~\\cite[Definition~3.4]{finite}).\nThe following proposition is an immediate generalization of~\\cite[Proposition~3.5]{finite}.\n\\begin{Prp} Every globally hyperbolic manifold of finite lifetime is $m$-finite\nand $\\Gamma$-finite.\n\\end{Prp}\n\\begin{proof} Let~$\\psi \\in \\H_m \\cap C^\\infty_{\\text{sc}}(\\mycal M, S\\mycal M)$ and let~$C(x)$ be one of the operators~$\\mbox{\\rm 1 \\hspace{-1.05 em} 1}_{S_x}$ or~$i \\Gamma(x)$.\nApplying Fubini's theorem and decomposing the volume measure, we obtain\n\\[ \\mathopen{<} \\psi | C \\psi \\mathclose{>} = \\int_\\mycal M \\mathopen{\\prec} \\psi | C \\psi \\mathclose{\\succ}(x)\\: d\\mu_\\mycal M(x) \\\\\n=\\int_{t_0}^{t_1} \\int_{\\mycal N_t} \\mathopen{\\prec} \\psi | C \\psi \\mathclose{\\succ}\\, 
\\langle \\nu, \\partial_t \\rangle \\,dt \\,d\\mu_{\\mycal N_t} \\]\nand thus\n\\[ \\big| \\mathopen{<} \\psi | C \\psi \\mathclose{>} \\big| \\leq \\sup_\\mycal M \\langle \\nu, \\partial_t \\rangle\n\\int_{t_0}^{t_1} dt \\int_{\\mycal N_t} |\\mathopen{\\prec} \\psi | C \\psi \\mathclose{\\succ}|\\,d\\mu_{\\mycal N_t} \\:. \\]\nWe rewrite the integrand as\n\\[ |\\mathopen{\\prec} \\psi | C \\psi \\mathclose{\\succ}| = |\\mathopen{\\prec} \\psi | \\slashed{\\nu}\\, (\\slashed{\\nu} C) \\psi \\mathclose{\\succ}| \\:. \\]\nHere the bilinear form~$\\mathopen{\\prec} .| \\slashed{\\nu} . \\mathclose{\\succ}$ is a scalar product, and the operator~$\\slashed{\\nu} C$\nis symmetric with respect to this scalar product. Using that\n\\[ (\\slashed{\\nu})^2 = \\mbox{\\rm 1 \\hspace{-1.05 em} 1} = (i \\slashed{\\nu} \\Gamma)^2 \\:, \\]\nwe conclude that the norm of the operator~$\\slashed{\\nu} C$ with respect to the\nscalar product~$\\mathopen{\\prec} .| \\slashed{\\nu} . \\mathclose{\\succ}$ is equal to one. Hence\n\\[ \\int_{\\mycal N_t} |\\mathopen{\\prec} \\psi | C \\psi \\mathclose{\\succ}|\\,d\\mu_{\\mycal N_t} \\leq\n\\int_{\\mycal N_t} \\mathopen{\\prec} \\psi | \\slashed{\\nu} \\psi \\mathclose{\\succ}\\,d\\mu_{\\mycal N_t} \\leq (\\psi | \\psi) \\:, \\]\nand consequently\n\\[ \\big| \\mathopen{<} \\psi | C \\psi \\mathclose{>} \\big| \\leq (t_1-t_0)\\: \\sup_\\mycal M \\langle \\nu, \\partial_t \\rangle\\; \\|\\psi\\|^2\\:. \\]\nPolarization and a denseness argument give the result.\n\\end{proof} \\noindent\n\nAssuming that our space-time is $m$-finite and $\\Gamma$-finite, the bilinear forms~\\eqref{stipLR}\nare bounded on~$\\H_m \\times \\H_m$. 
Thus we may represent them with respect to the Hilbert space scalar\nproduct in terms of signature operators~$\\mathscr{S}_{L\\!\/\\!R}$,\n\\begin{equation} \\label{SLRdef2}\n\\mathscr{S}_{L\\!\/\\!R} \\::\\: \\H_m \\rightarrow \\H_m \\qquad \\text{with} \\qquad\n\\mathopen{<} \\phi | \\psi \\mathclose{>}_{L\\!\/\\!R} = ( \\phi \\,|\\, \\mathscr{S}_{L\\!\/\\!R}\\, \\psi) \\:.\n\\end{equation}\nWe refer to~$\\mathscr{S}_{L\\!\/\\!R}$ as the {\\bf{chiral signature operators}}. \nTaking the complex conjugate of the equation in~\\eqref{SLRdef2} and using\nthat~$\\chi_L^*=\\chi_R$, we find that~\\eqref{Sigprop} holds, where\nthe star denotes the adjoint in~$\\text{\\rm{L}}(\\H_m)$.\n\nWe now define the chiral index as the Noether index of~$\\mathscr{S}_L$\n(sometimes called the Fredholm index; for basics see for example~\\cite[\\S27.1]{lax}).\n\\begin{Def} \\label{defind}\nThe fermionic signature operator is said to have finite chiral index if the operators~$\\mathscr{S}_L$\nand~$\\mathscr{S}_R$ both have a finite-dimensional kernel.\nThe {\\bf{chiral index}} of the fermionic signature operator is defined by\n\\begin{equation} \\label{ind}\n\\ind \\mathscr{S} = \\dim \\ker \\mathscr{S}_L - \\dim \\ker \\mathscr{S}_R \\:.\n\\end{equation}\n\\end{Def}\n\n\\section{Generalization to the Setting of Causal Fermion Systems} \\label{seccfs}\nOur starting point is a causal fermion system as introduced in~\\cite{rrev}.\n\\begin{Def} {\\em{\nGiven a complex Hilbert space~$(\\H, \\langle .|. \\rangle_\\H)$ (the {\\em{``particle space''}})\nand a parameter~$n \\in \\mathbb{N}$ (the {\\em{``spin dimension''}}), we let~${\\mathscr{F}} \\subset \\text{\\rm{L}}(\\H)$ be the set of all\nself-adjoint operators on~$\\H$ of finite rank, which (counting with multiplicities) have\nat most~$n$ positive and at most~$n$ negative eigenvalues. 
On~${\\mathscr{F}}$ we are given\na positive measure~$\\rho$ (defined on a $\\sigma$-algebra of subsets of~${\\mathscr{F}}$), the so-called\n{\\em{universal measure}}. We refer to~$(\\H, {\\mathscr{F}}, \\rho)$ as a {\\em{causal fermion system}}.\n}}\n\\end{Def} \\noindent\nStarting from a Lorentzian spin manifold, one can construct a corresponding causal fermion system by\nchoosing~$\\H$ as a suitable subspace of the solution space of the Dirac equation, \nforming the local correlation operators (possibly introducing an ultraviolet regularization) and defining~$\\rho$\nas the push-forward of the volume measure on~$\\mycal M$\n(see~\\cite[Section~4]{finite} or the examples in~\\cite{topology}).\nThe advantage of working with a causal fermion system\nis that the underlying space-time does not need to be a Lorentzian manifold, but it can be a more\ngeneral ``quantum space-time'' (for more details see~\\cite{lqg}).\n\nWe now recall a few basic notions from~\\cite{rrev}. \nOn~${\\mathscr{F}}$ we consider the topology induced by the\noperator norm~$\\|A\\| := \\sup \\{ \\|A u \\|_\\H \\text{ with } \\| u \\|_\\H = 1 \\}$.\nFor every~$x \\in {\\mathscr{F}}$\nwe define the {\\em{spin space}}~$S_x$ by~$S_x = x(\\H)$; it is a subspace of~$\\H$ of dimension\nat most~$2n$. On~$S_x$ we introduce the {\\em{spin scalar product}} $\\mathopen{\\prec} .|. 
\\mathclose{\\succ}_x$ by\n\\begin{equation} \\label{ssp}\n\\mathopen{\\prec} u | v \\mathclose{\\succ}_x = -\\langle u | x v \\rangle_\\H \\qquad \\text{(for all $u,v \\in S_x$)}\\:;\n\\end{equation}\nit is an indefinite inner product of signature~$(p,q)$ with~$p,q \\leq n$.\nMoreover, we define {\\em{space-time}}~$M$ as the support of the universal measure, $M = \\text{supp}\\, \\rho$.\nIt is a closed subset of~${\\mathscr{F}}$.\n\nIn order to extend the chiral grading to causal fermion systems, we\nassume for every~$x \\in M$ an operator~$\\Gamma(x) \\in \\text{\\rm{L}}(\\H)$ with the properties\n\\begin{equation} \\label{pseudodef}\n\\Gamma(x)|_{S_x} \\::\\: S_x \\rightarrow S_x \\qquad \\text{and} \\qquad\nx\\, \\Gamma(x) = -\\Gamma(x)^*\\, x \\:.\n\\end{equation}\nWe define the operators~$\\chi_{L\\!\/\\!R}(x) \\in \\text{\\rm{L}}(\\H)$ again by~\\eqref{chiLRdef}.\nIn order to explain the equations~\\eqref{pseudodef}, we first note\nthat the right side of~\\eqref{pseudodef} obviously vanishes on the\northogonal complement of~$S_x$. Using furthermore that, by definition of the spin space,\nthe operator~$x$ is invertible on~$S_x$, we infer that\n\\[ \\Gamma(x)|_{S_x^\\perp} = 0 \\:. \\]\nMoreover, the computation\n\\begin{align*}\n\\mathopen{\\prec} \\psi \\,|\\, \\Gamma(x)\\, \\phi \\mathclose{\\succ}_x &= -\\langle \\psi \\,|\\, x \\,\\Gamma(x)\\, \\phi \\rangle_\\H = \n-\\langle \\Gamma(x)^* x\\, \\psi \\,|\\, \\phi \\rangle_\\H \\\\\n&\\!\\!\\overset{\\eqref{pseudodef}}{=}\n\\langle x \\,\\Gamma(x)\\, \\psi \\,|\\, \\phi \\rangle_\\H = -\\mathopen{\\prec} \\Gamma(x)\\, \\psi \\,|\\, \\phi \\mathclose{\\succ}_x\n\\end{align*}\n(with~$\\psi, \\phi \\in S_x$) shows that~$\\Gamma(x) \\in \\text{\\rm{L}}(S_x)$ is antisymmetric with respect to the\nspin scalar product. Thus the first equation in~\\eqref{gammaprop} again holds.\nThis implies that the adjoint of~$\\chi_L(x)$ with respect to~$\\mathopen{\\prec} .|. 
\\mathclose{\\succ}_x$ equals~$\\chi_R(x)$.\nHowever, we point out that our assumptions~\\eqref{pseudodef} do not imply that~$\\Gamma(x)$\nis idempotent (in the sense that~$\\Gamma(x)^2|_{S_x}=\\mbox{\\rm 1 \\hspace{-1.05 em} 1}_{S_x}$). Hence the analog\nof the second equation in~\\eqref{gammaprop} does not need to hold on a causal fermion system.\nThis property could be imposed in addition, but will not be needed here.\nThe last two relations in~\\eqref{gammaprop} do not have an obvious correspondence on causal fermion systems,\nand they will also not be needed in what follows.\n\nWe now have all the structures needed for defining the fermionic signature operator\nand its chiral index.\nNamely, replacing the scalar product in~\\eqref{Sdef} by the scalar product on the\nparticle space~$\\langle .|. \\rangle_\\H$, we now demand in analogy to~\\eqref{stip} and~\\eqref{Sdef}\nthat the relation\n\\[ \\langle u | \\mathscr{S} v \\rangle_\\H = \\int_M \\mathopen{\\prec} u | v \\mathclose{\\succ}_x\\: d\\rho(x) \\]\nshould hold for all~$u, v \\in \\H$. Using~\\eqref{ssp}, we find that the fermionic signature operator\nis given by the integral\n\\[ \\mathscr{S} = -\\int_M x \\: d\\rho(x) \\:. 
\\]\nSimilarly, the left-handed signature operator can be introduced by\n\\begin{equation} \\label{sigLint}\n\\mathscr{S}_L = -\\int_M x \\,\\chi_L\\: d\\rho(x) \\:.\n\\end{equation}\nIn the setting of a globally hyperbolic manifold, \nwe had to assume that the manifold was $m$-finite and $\\Gamma$-finite (see Definitions~\\ref{defmfinite}\nand~\\ref{defGfinite}).\nNow we need to assume correspondingly that the integral~\\eqref{sigLint} converges.\nFor the sake of greater generality we prefer to work with weak convergence.\n\n\\begin{Def} The causal fermion system is {\\bf{$\\mathscr{S}_L$-bounded}} if the integral in~\\eqref{sigLint}\nconverges weakly to a bounded operator, i.e.\\ if there is an operator~$\\mathscr{S}_L \\in \\text{\\rm{L}}(\\H)$\nsuch that for all~$u,v \\in \\H$,\n\\[ -\\int_M \\langle u \\,|\\, x \\,\\chi_L v \\rangle_\\H \\: d\\rho(x) = \\langle u | \\mathscr{S}_L v \\rangle_\\H \\:. \\]\n\\end{Def} \\noindent\nIntroducing the right-handed signature operator by~$\\mathscr{S}_R := \\mathscr{S}_L^*$,\nwe can define the {\\bf{chiral index}} again by~\\eqref{ind}.\n\n\\section{The Chiral Index in the Massless Odd Case} \\label{secodd}\nWe return to the setting of Section~\\ref{secindex}\nand consider the special case that the mass vanishes and that the Dirac operator is odd,\n\\begin{equation} \\label{massless}\nm=0 \\qquad \\text{and} \\qquad \\Gamma \\,{\\mathcal{D}} = - {\\mathcal{D}}\\, \\Gamma \\:.\n\\end{equation}\nIn this case, the solution space of the Dirac equation is obviously invariant under~$\\Gamma$,\n\\[ \\Gamma \\::\\: \\H_0 \\rightarrow \\H_0 \\:. 
\\]\nTaking the adjoint with respect to the scalar product~\\eqref{print} and noting that~$\\Gamma$\nanti-commutes with~$\\slashed{\\nu}$, one sees that~$\\Gamma$ is symmetric on~$\\H_0$.\nHence~$\\chi_L$ and~$\\chi_R$ are orthogonal projection operators,\ngiving rise to the orthogonal sum decomposition\n\\begin{equation} \\label{HLR}\n\\H_0 = \\H_L \\oplus \\H_R \\qquad \\text{with} \\qquad \\H_{L\\!\/\\!R} := \\chi_{L\\!\/\\!R}\\, \\H_0\\:.\n\\end{equation}\nMoreover, the computation\n\\begin{align*}\n\\mathopen{<} \\chi_L \\psi | \\chi_{c'} \\phi \\mathclose{>}_c &=\n\\int_\\mycal M \\mathopen{\\prec} \\chi_L \\psi \\,|\\, \\chi_c \\,\\chi_{c'} \\phi \\mathclose{\\succ}_x\\: d\\mu_\\mycal M \\\\\n&= \\int_\\mycal M \\mathopen{\\prec} \\psi \\,|\\, \\chi_R \\,\\chi_c \\,\\chi_{c'} \\phi \\mathclose{\\succ}_x\\: d\\mu_\\mycal M\n= \\delta_{Rc} \\:\\delta_{cc'} \\:\\mathopen{<} \\psi | \\phi \\mathclose{>}_c\n\\end{align*}\nwith~$c, c' \\in \\{L, R\\}$ (and similarly for~$L$ replaced by~$R$) shows that~$\\mathscr{S}$\nmaps the right-handed component to the left-handed component and vice versa.\nMoreover, in a block matrix notation corresponding to the decomposition~\\eqref{HLR},\nthe operators~$\\mathscr{S}_L$ and~$\\mathscr{S}_R$ have the simple form\n\\[ \\mathscr{S}_L = \\begin{pmatrix} 0 & 0 \\\\ A & 0 \\end{pmatrix} \\qquad \\text{and} \\qquad\n\\mathscr{S}_R = \\begin{pmatrix} 0 & A^* \\\\ 0 & 0 \\end{pmatrix} \\]\nwith a bounded operator~$A : \\H_L \\rightarrow \\H_R$.\nAs a consequence, both~$\\mathscr{S}_L$ and~$\\mathscr{S}_R$ have an infinite-dimensional kernel,\nso that the index cannot be defined by~\\eqref{ind}. 
This problem can easily be cured by\nrestricting the operators to the respective subspaces~$\\H_L$ and~$\\H_R$.\n\n\\begin{Def} \\label{defind0}\nIn the massless odd case~\\eqref{massless}, the fermionic signature operator is said to have finite\nchiral index if the operators~$\\mathscr{S}_L|_{\\H_L}$ and~$\\mathscr{S}_R|_{\\H_R}$ both have a finite-dimensional kernel.\nWe define the index~$\\ind_0 \\mathscr{S}$ by\n\\[ \\ind_0 \\mathscr{S} = \\dim\\ker (\\mathscr{S}_L)|_{\\H_L} - \\dim \\ker (\\mathscr{S}_R)|_{\\H_R} \\:. \\]\n\\end{Def}\n\n\\section{Homotopy Invariance} \\label{secstable}\nWe first recall Dieudonn{\\'e}'s general theorem on the homotopy invariance of the Noether index\n(see for example~\\cite[Theorem~27.1.5'']{lax}).\n\\begin{Thm} \\label{thmhomotopy}\nLet~$T(t) : U \\rightarrow V$, $0 \\leq t \\leq 1$, be a one-parameter family of bounded linear operators\nbetween Banach spaces~$U$ and~$V$ which is continuous in the norm topology. If\nfor every~$t \\in [0,1]$ the vector spaces\n\\begin{equation} \\label{kerfinite}\n\\text{$\\ker T(t)$ and~$V\/T(t)(U)$ are both finite-dimensional}\\:,\n\\end{equation}\nthen\n\\[ \\ind T(0) = \\ind T(1) \\:, \\]\nwhere~$\\ind T := \\dim \\ker(T) - \\dim V\/T(U)$.\n\\end{Thm}\n\nIn most applications of this theorem, one knows from general arguments that the index\nof~$T$ remains finite under homotopies (for example, in the prominent case of the\nAtiyah-Singer index, this follows from elliptic estimates on a compact manifold).\nFor our chiral index, however, there is no general reason why the chiral index of~$\\mathscr{S}$ should\nremain finite. Indeed, the fermionic signature operator is bounded and typically has many eigenvalues\nnear zero. 
It may well happen that for a certain value of~$t$, an infinite number of these\neigenvalues becomes zero (for an explicit example see Example~\\ref{exinstable} below).\n\nAnother complication when applying Theorem~\\ref{thmhomotopy} to the fermionic signature operator is that the\nimage of~$\\mathscr{S}_L$ need not be a closed subspace of our Hilbert space.\nTo explain the difficulty, we first consider the chiral index of Definition~\\ref{defind}.\nUsing that~$\\ker \\mathscr{S}_R = \\ker \\mathscr{S}_L^* = \\mathscr{S}_L(\\H_m)^\\perp$,\nthe assumption that the fermionic signature operator has finite chiral index can be restated\nas saying that the vector spaces~$\\ker \\mathscr{S}_L$ and~$\\mathscr{S}_L(\\H_m)^\\perp$ are finite-dimensional subspaces\nof~$\\H_m$. Since\n\\[ \\dim \\mathscr{S}_L(\\H_m)^\\perp = \\dim \\H_m \/ \\big( \\overline{\\mathscr{S}_L(\\H_m)} \\big) \\:, \\]\nthis implies that the {\\em{closure}} of the image of~$\\mathscr{S}_L$ has finite co-dimension.\nIf the image of~$\\mathscr{S}_L$ were closed in~$\\H_m$, the finiteness of the chiral index\nwould imply that the conditions~\\eqref{kerfinite} hold if we set~$T=\\mathscr{S}_L$ and~$U=V=\\H_m$.\nHowever, the image of~$\\mathscr{S}_L$ will in general {\\em{not}} be a closed subspace of~$\\H_m$,\nand in this case it is possible that the condition~\\eqref{kerfinite} is violated for~$T=\\mathscr{S}_L$\nand~$U=V=\\H_m$, although~$\\mathscr{S}$ has finite chiral index (according to Definition~\\ref{defind}).\nIn the massless odd case, the analogous problem occurs if we choose~$T=\\mathscr{S}_L$,\n$U=\\H_L$ and~$V=\\H_R$ (see Definition~\\ref{defind0}).\n\nOur method for making Theorem~\\ref{thmhomotopy} applicable is to endow a subspace\nof the Hilbert space with a finer topology, such that the image of~$\\mathscr{S}_L$\nlies in this subspace and is closed in this topology.\n\n\\begin{Thm} \\label{thmstable}\nLet~$\\mathscr{S}(t) : \\H_m \\rightarrow \\H_m$, $t \\in [0,1]$, be a family of 
fermionic signature operators\nwith finite chiral index.\nLet~$E$ be a Banach space together with an embedding~$\\iota : E \\hookrightarrow \\H_m$\nwith the following properties:\n\\begin{itemize}\n\\item[(i)] For every~$t \\in [0,1]$, the image of~$\\mathscr{S}_L(t)$ lies in~$\\iota(E)$, giving rise to the mapping\n\\begin{equation} \\label{SigLE}\n\\mathscr{S}_L(t) \\,:\\, \\H_m \\rightarrow E \\:.\n\\end{equation}\n\\item[(ii)] For every~$t \\in [0,1]$, the image of the operator~$\\mathscr{S}_L(t)$, \\eqref{SigLE}, is a closed subspace of~$E$.\n\\item[(iii)] The family~$\\mathscr{S}_L(t) : \\H_m \\rightarrow E$ is continuous in the norm topology.\n\\end{itemize}\nThen the chiral index is a homotopy invariant,\n\\[ \\ind \\mathscr{S}(0) = \\ind \\mathscr{S}(1)\\:. \\]\n\\end{Thm}\n\nIn the massless odd case, the analogous result is stated as follows.\n\\begin{Thm} \\label{thmstablem0}\nLet~$\\mathscr{S}(t) : \\H_0 \\rightarrow \\H_0$, $t \\in [0,1]$, be\na family of fermionic signature operators of finite chiral index in the massless odd case (see~\\eqref{massless}).\nMoreover, let~$E$ be a Banach space together with an embedding~$\\iota : E \\hookrightarrow \\H_R$\nsuch that the operator~$\\mathscr{S}_L|_{\\H_L} : \\H_L \\rightarrow \\H_R$ has the following properties:\n\\begin{itemize}\n\\item[(i)] For every~$t \\in [0,1]$, the image of~$\\mathscr{S}_L(t)$ lies in~$\\iota(E)$, giving rise to the mapping\n\\begin{equation} \\label{SigLE2}\n\\mathscr{S}_L(t) \\,:\\, \\H_L \\rightarrow E \\:.\n\\end{equation}\n\\item[(ii)] For every~$t \\in [0,1]$, the image of the operator~$\\mathscr{S}_L(t)$, \\eqref{SigLE2}, is a closed subspace of~$E$.\n\\item[(iii)] The family~$\\mathscr{S}_L(t) : \\H_L \\rightarrow E$ is continuous in the norm topology.\n\\end{itemize}\nThen the chiral index in the massless odd case is a homotopy invariant,\n\\[ \\ind_0 \\mathscr{S}(0) = \\ind_0 \\mathscr{S}(1)\\:. 
\\]\n\\end{Thm}\nIn Example~\\ref{exstable} below, it will be explained how these theorems can be applied.\n\n\\section{Example: Shift Operators in the Setting of Causal Fermion Systems}\nIn the remainder of this paper we illustrate the previous constructions in several examples.\nThe simplest examples of fermionic signature operators with a non-trivial chiral index\ncan be given in the setting of causal fermion systems.\nWe let~$\\H=\\ell^2(\\mathbb{N})$ be the Hilbert space of square-summable sequences with the scalar product\n\\[ \\langle u | v \\rangle_\\H = \\sum_{l=1}^\\infty \\overline{u_l} v_l \\:. \\]\nFor any~$k \\in \\mathbb{N}$ we define the operators~$x_k$ by\n\\[ (x_k \\,u)_k = -u_{k+1} \\:,\\qquad (x_k \\,u)_{k+1} = -u_k \\:, \\]\nand all other components of~$x_k u$ vanish. Thus, writing the sequence in components,\n\\begin{equation} \\label{xkdef}\nx_k \\,u = (\\underbrace{0,\\ldots, 0}_{\\text{$k-1$ entries}}, -u_{k+1}, -u_k, 0, \\ldots ) \\:.\n\\end{equation}\nEvery operator~$x_k$ obviously has rank two with the non-trivial eigenvalues~$\\pm 1$.\nWe let~$\\mu$ be the counting measure on~$\\mathbb{N}$ and~$\\rho = x_*(\\mu)$ the\npush-forward measure of the mapping~$x : k \\mapsto x_k \\in {\\mathscr{F}} \\subset \\text{\\rm{L}}(\\H)$.\nWe thus obtain a causal fermion system~$(\\H, {\\mathscr{F}}, \\rho)$ of spin dimension one.\n\nNext, we introduce the pseudoscalar operators~$\\Gamma(x_k)$ by\n\\begin{equation} \\label{Gkdef}\n\\Gamma(x_k) \\,u = (\\underbrace{0,\\ldots, 0}_{\\text{$k-1$ entries}}, u_k, -u_{k+1}, 0, \\ldots ) \\:.\n\\end{equation}\nObviously, these operators have the properties~\\eqref{pseudodef}. 
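The defining relations \eqref{xkdef} and \eqref{Gkdef} and the second property in \eqref{pseudodef} can be checked numerically on a finite truncation of $\ell^2(\mathbb{N})$. The following sketch is illustrative only; the truncation dimension and the explicit matrix representation are our own choices, not part of the construction.

```python
import numpy as np

N = 8  # truncation dimension of ell^2(N); arbitrary choice

def x_op(k):
    # matrix of x_k from (xkdef): (x_k u)_k = -u_{k+1}, (x_k u)_{k+1} = -u_k
    X = np.zeros((N, N))
    X[k - 1, k] = X[k, k - 1] = -1.0  # 0-based indices for components k, k+1
    return X

def Gamma_op(k):
    # matrix of Gamma(x_k) from (Gkdef)
    G = np.zeros((N, N))
    G[k - 1, k - 1] = 1.0   # keeps the k-th component
    G[k, k] = -1.0          # flips the sign of the (k+1)-th component
    return G

for k in range(1, N - 1):
    X, G = x_op(k), Gamma_op(k)
    # x_k has rank two with the non-trivial eigenvalues +1 and -1
    assert np.linalg.matrix_rank(X) == 2
    ev = np.sort(np.linalg.eigvalsh(X))
    assert np.isclose(ev[0], -1.0) and np.isclose(ev[-1], 1.0)
    # second relation in (pseudodef): x Gamma(x) = -Gamma(x)^* x
    assert np.allclose(X @ G, -G.T @ X)
```

In the truncated model the index necessarily vanishes (finite square matrices satisfy $\dim\ker A = \dim\ker A^*$); the non-trivial index arises only in the infinite-dimensional limit.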
Moreover,\n\\begin{eqnarray*}\nx_k\\, \\chi_L(x_k)\\, u = (0,\\ldots, 0, & \\!\\!\\!\\!\\!-u_{k+1}, \\;\\;0, & \\!\\!\\!0, \\ldots ) \\\\\nx_k\\, \\chi_R(x_k)\\, u = (0,\\ldots, 0, & \\:\\;\\;\\;0, \\;\\;-u_k, & \\!\\!\\!0, \\ldots ) \\:.\n\\end{eqnarray*}\nConsequently, the operators\n\\begin{equation} \\label{shiftsum}\n\\mathscr{S}_{L\\!\/\\!R} = -\\sum_{k=1}^\\infty x_k\\, \\chi_{L\\!\/\\!R}(x_k)\n\\end{equation}\ntake the form\n\\[ \\mathscr{S}_L \\,u = (u_2, u_3, u_4, \\ldots) \\:,\\qquad \\mathscr{S}_R \\,u = (0, u_1, u_2, \\ldots) \\]\n(note that the series in~\\eqref{shiftsum} converges weakly; in fact it even converges strongly\nin the sense that the series $\\sum_k x_k\\, \\chi_{L\\!\/\\!R}(x_k)\\, u$ converges in~$\\H$ for every~$u \\in \\H$).\nThese are the usual shift operators, implying that\n\\[ \\ind \\mathscr{S} = 1 \\:. \\]\n\nWe finally remark that a general index~$p \\in \\mathbb{N}$ can be arranged by modifying~\\eqref{xkdef}\nand~\\eqref{Gkdef} to\n\\begin{align*}\nx_k \\,u &= (\\underbrace{0,\\ldots, 0}_{\\text{$k-1$ entries}}, -u_{k+p}, \\!\\!\n\\underbrace{0, \\ldots, 0}_{\\text{$p-1$ entries}}, \\;-u_k, \\;\\:0, \\ldots ) \\\\\n\\Gamma(x_k) \\,u &= (\\;\\;\\overbrace{0,\\ldots, 0}\\;\\:, \\;\\;\\;\\;u_k,\\;\\;\\; \\overbrace{0, \\ldots, 0}, \\:-u_{k+p}, 0, \\ldots ) \\:.\n\\end{align*}\nMoreover, a negative index can be arranged by exchanging the left- and right-handed components.\n\n\\section{Example: A Dirac Operator with~$\\ind_0 \\mathscr{S} \\neq 0$} \\label{secex1}\nWe now construct a two-dimensional space-time~$(\\mycal M,g)$ together with an odd Dirac operator~${\\mathcal{D}}$\nsuch that the resulting fermionic signature operator in the massless case has a non-trivial chiral\nindex~$\\ind_0$ (see Definition~\\ref{defind0}).\nWe choose~$\\mycal M=(0, 2 \\pi) \\times S^1$ with coordinates~$t \\in (0, 2 \\pi)$ and~$\\varphi \\in [0, 2 \\pi)$.\nWe begin with the flat Lorentzian metric\n\\begin{equation} \\label{2dl}\nds^2 = dt^2 - d\\varphi^2 \\:.\n\\end{equation}\nWe 
consider two-component complex spinors, with the spin scalar product\n\\begin{equation} \\label{ssprod}\n\\mathopen{\\prec} \\psi | \\phi \\mathclose{\\succ} = \\langle \\psi | \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\phi \\rangle_{\\mathbb{C}^2}\\:.\n\\end{equation}\nWe choose the pseudoscalar matrix as\n\\begin{equation} \\label{pseudoc}\n\\Gamma = \\begin{pmatrix} -1 & 0 \\\\ 0 & 1 \\end{pmatrix} \\:,\n\\end{equation}\nso that\n\\begin{equation} \\label{defchir}\n\\chi_L = \\begin{pmatrix} 1 & 0 \\\\ 0 & 0 \\end{pmatrix} \\:,\\qquad\n\\chi_R = \\begin{pmatrix} 0 & 0 \\\\ 0 & 1 \\end{pmatrix} \\:.\n\\end{equation}\nThe space-time inner product~\\eqref{stip} becomes\n\\begin{equation} \\label{stiptorus}\n\\mathopen{<} \\psi|\\phi \\mathclose{>} = \\int_0^{2 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} \\psi(t, \\varphi) \\,|\\, \\phi(t, \\varphi) \\mathclose{\\succ}\\:d\\varphi\\, dt \\:.\n\\end{equation}\n\nThe Dirac operator~${\\mathcal{D}}$ should be chosen to be odd (see the right equation in~\\eqref{massless}).\nThis means that~${\\mathcal{D}}$ has the matrix representation\n\\begin{equation} \\label{Direx}\n{\\mathcal{D}} = \\begin{pmatrix} 0 & {\\mathcal{D}}_R \\\\ {\\mathcal{D}}_L & 0 \\end{pmatrix}\n\\end{equation}\nwith suitable operators~${\\mathcal{D}}_L$ and~${\\mathcal{D}}_R$.\nIn order for current conservation to hold, the Dirac operator should be symmetric \nwith respect to the inner product~\\eqref{stiptorus}. This implies that the operators~${\\mathcal{D}}_L$\nand~${\\mathcal{D}}_R$ must both be symmetric,\n\\begin{equation} \\label{DLRsymm}\n{\\mathcal{D}}_L^* = {\\mathcal{D}}_L \\:,\\qquad {\\mathcal{D}}_R^* = {\\mathcal{D}}_R \\:,\n\\end{equation}\nwhere the star denotes the formal adjoint with respect to the scalar product on\nthe Hilbert space~$L^2(\\mycal M, \\mathbb{C})$. 
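The symmetry condition~\eqref{DLRsymm} can be illustrated in a finite-dimensional toy model in which the operators ${\mathcal{D}}_L$ and ${\mathcal{D}}_R$ are replaced by Hermitian matrices (a sketch; the spatial dimension and the random seed are arbitrary choices): with respect to the spin scalar product~\eqref{ssprod}, the adjoint swaps the chiral components, so the odd operator~\eqref{Direx} is symmetric precisely when both blocks are.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # spatial dimension of the toy model (arbitrary)

# spin scalar product matrix (ssprod), acting trivially in space
S = np.kron(np.array([[0.0, 1.0], [1.0, 0.0]]), np.eye(n))

def spin_adjoint(A):
    # adjoint with respect to <psi | S phi>; note that S is its own inverse
    return S @ A.conj().T @ S

# chiral projectors (defchir): the spin adjoint exchanges them
chi_L = np.kron(np.diag([1.0, 0.0]), np.eye(n))
chi_R = np.kron(np.diag([0.0, 1.0]), np.eye(n))
assert np.allclose(spin_adjoint(chi_L), chi_R)

# an odd operator (Direx) with Hermitian blocks D_L, D_R as in (DLRsymm)
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D_L, D_R = B + B.conj().T, C + C.conj().T
D = np.block([[np.zeros((n, n)), D_R], [D_L, np.zeros((n, n))]])
assert np.allclose(spin_adjoint(D), D)  # symmetric w.r.t. the spin scalar product
```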
We consider the massless Dirac equation\n\\begin{equation} \\label{Dir0}\n{\\mathcal{D}} \\psi = 0 \\:.\n\\end{equation}\nThe scalar product~\\eqref{print} on the solutions takes the form\n\\begin{equation} \\label{printtorus}\n(\\psi | \\phi) = 2 \\pi \\int_0^{2 \\pi} \\langle \\psi(t,\\varphi) | \\phi(t, \\varphi) \\rangle_{\\mathbb{C}^2}\\: d\\varphi \\:,\n\\end{equation}\ngiving rise to the Hilbert space~$(\\H_0, (.|.))$. As a consequence of current conservation,\nthis scalar product is independent of the choice of~$t$.\n\nWe assume that the system is invariant under time translations and that the Dirac operator\nis of first order in time. More precisely, we assume that\n\\begin{equation} \\label{DirHam}\n{\\mathcal{D}}_{L\\!\/\\!R} = i \\partial_t - H_{L\\!\/\\!R}\n\\end{equation}\nwith purely spatial operators~$H_{L\\!\/\\!R}$, referred to as the left- and right-handed Hamiltonians.\nMoreover, we assume that these Hamiltonians are homogeneous.\nThis implies that they can be diagonalized by plane waves,\n\\[ H_c \\:e^{i k \\varphi} = \\omega_{k,c}\\: e^{i k \\varphi} \\qquad \\text{with~$k \\in \\mathbb{Z}$ and~$c \\in \\{L, R\\}$}\\:. \\]\nAs a consequence, the Dirac equation~\\eqref{Dir0} can be solved by the plane waves\n\\begin{equation} \\label{esols}\n{\\mathfrak{e}}_{k, L} = \\frac{1}{2 \\pi}\\: e^{-i \\omega_{k, L} t + i k \\varphi} \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} \\:,\\qquad\n{\\mathfrak{e}}_{k, R} = \\frac{1}{2 \\pi}\\: e^{-i \\omega_{k, R} t + i k \\varphi} \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} \\:.\n\\end{equation}\nThe vectors~$({\\mathfrak{e}}_{k, c})_{k \\in \\mathbb{Z}, c \\in \\{L, R\\}}$ form an orthonormal basis of the\nHilbert space~$\\H_0$.\nWe remark that the Dirac operator of the Minkowski vacuum is obtained by choosing\n\\[ H_L = i \\partial_\\varphi \\:,\\qquad H_R = -i \\partial_\\varphi \\]\n(see for example~\\cite{drum} or~\\cite[Section~7.2]{topology}).\nIn this case, $\\omega_{k, L\\!\/\\!R} = \\mp k$. 
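The normalization in~\eqref{esols} can be checked by quadrature: with the scalar product~\eqref{printtorus}, the plane waves are orthonormal, and the result does not depend on the time at which the product is evaluated (a numerical sketch for the Minkowski vacuum dispersion; the grid size and the sample values of $t$ are arbitrary choices).

```python
import numpy as np

phi = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)
dphi = phi[1] - phi[0]

def omega(k, c):
    # Minkowski vacuum dispersion omega_{k,L/R} = -k / +k
    return -k if c == "L" else k

def e(k, c, t):
    # plane-wave solutions (esols)
    w = np.exp(-1j * omega(k, c) * t + 1j * k * phi) / (2 * np.pi)
    z = np.zeros_like(w)
    return np.array([w, z]) if c == "L" else np.array([z, w])

def sprod(psi, ph):
    # scalar product (printtorus): 2 pi times the integral of <psi|phi>_{C^2} dphi
    return 2 * np.pi * np.sum(np.conj(psi) * ph) * dphi

for t in (0.0, 1.3):  # independence of the choice of t
    assert abs(sprod(e(3, "L", t), e(3, "L", t)) - 1) < 1e-10  # unit norm
    assert abs(sprod(e(3, "L", t), e(4, "L", t))) < 1e-10      # different momenta
    assert abs(sprod(e(3, "L", t), e(3, "R", t))) < 1e-10      # different chirality
```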
More generally, choosing~${\\mathcal{D}}_c$ as a\nhomogeneous differential operator of first order, the eigenvalues~$\\omega_{k,c}$ are linear in~$k$.\nHere we do not want to assume that the operators~${\\mathcal{D}}_c$ are differential operators.\nThen the eigenvalues~$\\omega_{k, L}$ and~$\\omega_{k,R}$\ncan be chosen arbitrarily and independently, except for the constraint coming from the\nsymmetry~\\eqref{DLRsymm} that these eigenvalues must be real.\n\nMore specifically, for a given parameter~$p \\in \\mathbb{N}$ we choose\n\\begin{equation} \\label{omegaex}\n\\omega_{k,L} = -k \\qquad \\text{and} \\qquad\n\\omega_{k, R} = \\left\\{ \\begin{array}{cl} k & \\text{if~$k \\leq 0$} \\\\\nk+p & \\text{if~$k > 0$}\n\\end{array} \\right.\n\\end{equation}\n(see Figure~\\ref{figdisperse}).\n\\begin{figure}\n\\scalebox{1}\n{\n\\begin{pspicture}(0,-2.52)(6.92,2.52)\n\\psdots[dotsize=0.24,fillstyle=solid,dotstyle=o](1.86,0.1)\n\\psdots[dotsize=0.24,fillstyle=solid,dotstyle=o](2.46,-0.5)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(6.61,-0.715){$k$}\n\\psline[linewidth=0.04cm,arrowsize=0.3cm 1.0,arrowlength=1.5,arrowinset=0.5]{->}(0.04,-1.1)(6.7,-1.1)\n\\psline[linewidth=0.04cm,arrowsize=0.3cm 1.0,arrowlength=1.5,arrowinset=0.5]{->}(3.06,-2.5)(3.06,2.5)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(3.45,2.325){$\\omega$}\n\\psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](1.66,-2.5)(6.06,1.9)\n\\psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 
0.16cm](0.06,1.9)(4.46,-2.5)\n\\psdots[dotsize=0.12](3.66,0.7)\n\\psdots[dotsize=0.12](3.06,-1.1)\n\\psdots[dotsize=0.12](2.46,-0.5)\n\\psdots[dotsize=0.12](1.26,0.7)\n\\psdots[dotsize=0.12](0.06,1.9)\n\\psdots[dotsize=0.12](4.26,-2.3)\n\\psdots[dotsize=0.12](2.46,-1.7)\n\\psdots[dotsize=0.12](1.86,-2.3)\n\\psdots[dotsize=0.12](1.86,0.1)\n\\psdots[dotsize=0.12](0.66,1.3)\n\\psdots[dotsize=0.12](4.26,1.3)\n\\psdots[dotsize=0.12](4.86,1.9)\n\\psdots[dotsize=0.12](3.66,-1.7)\n\\psline[linewidth=0.04cm](3.66,-1.0)(3.66,-1.2)\n\\psline[linewidth=0.04cm](4.26,-1.0)(4.26,-1.2)\n\\psline[linewidth=0.04cm](4.86,-1.0)(4.86,-1.2)\n\\psline[linewidth=0.04cm](2.46,-1.0)(2.46,-1.2)\n\\psline[linewidth=0.04cm](1.86,-1.0)(1.86,-1.2)\n\\psline[linewidth=0.04cm](1.26,-1.0)(1.26,-1.2)\n\\psline[linewidth=0.04cm](0.66,-1.0)(0.66,-1.2)\n\\psline[linewidth=0.04cm](6.06,-1.0)(6.06,-1.2)\n\\psline[linewidth=0.04cm](5.46,-1.0)(5.46,-1.2)\n\\psline[linewidth=0.04cm](0.06,-1.0)(0.06,-1.2)\n\\psline[linewidth=0.04cm](3.16,-0.5)(2.96,-0.5)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](0.06,0.7)(6.26,0.7)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](0.06,1.3)(6.26,1.3)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](0.06,1.9)(6.26,1.9)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](0.06,-1.7)(6.26,-1.7)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](0.06,-2.3)(6.26,-2.3)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(3.35,-0.435){$1$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(1.87,-0.715){$\\omega_{-1,L}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(1.29,-0.135){$\\omega_{-2,L}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(3.73,-0.815){$1$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(3.72,0.425){$\\omega_{1,R}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.63,-2.055){$\\omega_{2,L}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.28,1.025){$\\omega_{2,R}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(1.3,-2.055){$\\omega_{-2,R}$}\n\\end{pspicture} \n}\n\\caption{The eigenvalues~$\\omega_{k, L\\!\/\\!R}$ in the 
case~$p=2$.}\n\\label{figdisperse}\n\\end{figure}\nThen the space-time inner product of the basis vectors~$({\\mathfrak{e}}_{k, c})_{k \\in \\mathbb{Z}, c \\in \\{L, R\\}}$\nis computed by\n\\begin{align*}\n\\mathopen{<} {\\mathfrak{e}}_{k,L} | {\\mathfrak{e}}_{k', L} \\mathclose{>} &= 0 = \\mathopen{<} {\\mathfrak{e}}_{k,R} | {\\mathfrak{e}}_{k', R} \\mathclose{>} \\\\\n\\mathopen{<} {\\mathfrak{e}}_{k,R} | {\\mathfrak{e}}_{k', L} \\mathclose{>} \n&= \\frac{1}{2 \\pi} \\: \\delta_{k,k'} \\int_0^{2 \\pi} e^{i (\\omega_{k,R} - \\omega_{k,L}) t} \\,dt =\n\\delta_{k,k'} \\: \\delta_{\\omega_{k,R}, \\;\\omega_{k,L}} = \\delta_{k,0} \\: \\delta_{k',0} \\:.\n\\end{align*}\nWe conclude that~$\\mathscr{S}$ does not have a finite chiral index.\n\nIn order to obtain a non-trivial index, we need to modify our example.\nThe idea is to change the space-time inner product in such a way that\nthe inner product between two different plane-wave solutions with\nthe same frequencies becomes non-zero. As a consequence,\nthe corresponding pair of plane-wave solutions will disappear from the kernel.\nThe only vectors which remain in the kernel are those which do not have a partner\nfor pairing, so that\n\\[ \\ker \\mathscr{S}_L|_{\\H_L} = \\text{span} \\big( {\\mathfrak{e}}_{-1,L}, \\ldots, {\\mathfrak{e}}_{-p,L} \\big) \\:,\\qquad\n\\ker \\mathscr{S}_R|_{\\H_R} = \\{0\\}\\:. 
\\]\n(see again Figure~\\ref{figdisperse}, where the pairs are indicated by horizontal dashed lines,\nwhereas the vectors in the kernel correspond to the circled dots).\nGenerally speaking, the method to modify the space-time inner product for states with\nthe same frequency is to insert a potential into the Dirac equation which\nis time-independent but has a non-trivial spatial dependence.\nIt is most convenient to work with a {\\em{conformal transformation}}.\nThus we go over from the Minkowski metric~\\eqref{2dl} to the conformally flat metric\n\\begin{equation} \\label{conform}\nd\\tilde{s}^2 = f(\\varphi)^2 \\left( dt^2 - d\\varphi^2 \\right) \\:,\n\\end{equation}\nwhere~$f \\in C^\\infty(\\mathbb{R}\/(2 \\pi \\mathbb{Z}))$ is a strictly positive, smooth, $2 \\pi$-periodic function.\nThe conformal invariance of the Dirac equation (for details see for example~\\cite[Section~8.1]{topology}\nand the references therein) implies that in our situation the Dirac operator transforms as\n\\begin{equation}\n\\tilde{{\\mathcal{D}}} = f^{-\\frac{3}{2}} \\,{\\mathcal{D}}\\, f^{\\frac{1}{2}} \\:, \\label{Dirconf}\n\\end{equation}\nso that\n\\[ \\tilde{{\\mathcal{D}}} = \\begin{pmatrix} 0 & \\tilde{{\\mathcal{D}}}_R \\\\ \\tilde{{\\mathcal{D}}}_L & 0 \\end{pmatrix}\n\\qquad \\text{with} \\qquad\n\\tilde{{\\mathcal{D}}}_{L\\!\/\\!R} = f^{-\\frac{3}{2}} \\,{\\mathcal{D}}_{L\\!\/\\!R}\\, f^{\\frac{1}{2}} \\:. 
\\]\nThe solutions of the massless Dirac equation are modified simply by a conformal factor,\n\\begin{equation} \\label{psiconf}\n\\tilde{\\psi} = f^{-\\frac{1}{2}}\\: \\psi \\:.\n\\end{equation}\nThe space-time inner product~\\eqref{stiptorus} and the scalar product~\\eqref{printtorus} transform to\n\\begin{align}\n\\mathopen{<} \\tilde{\\psi} | \\tilde{\\phi} \\mathclose{>}\n&= \\int_0^{2 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} \\tilde{\\psi}(t, \\varphi) \\,|\\, \\tilde{\\phi}(t, \\varphi) \\mathclose{\\succ}\\:\nf(\\varphi)^2 \\,d\\varphi\\, dt \\nonumber \\\\\n&= \\int_0^{2 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} \\psi(t, \\varphi) \\,|\\, \\phi(t, \\varphi) \\mathclose{\\succ}\\:\nf(\\varphi) \\,d\\varphi\\, dt \\label{stiptrans} \\\\\n(\\tilde{\\psi} | \\tilde{\\phi}) &= 2 \\pi \\int_0^{2 \\pi} \\langle \\tilde{\\psi}(t,\\varphi) | \\tilde{\\phi}(t, \\varphi) \\rangle_{\\mathbb{C}^2}\\: \nf(\\varphi) \\:d\\varphi \\nonumber \\\\\n&= 2 \\pi \\int_0^{2 \\pi} \\langle \\psi(t,\\varphi) | \\phi(t, \\varphi) \\rangle_{\\mathbb{C}^2}\\: \nd\\varphi = (\\psi | \\phi) \\:. \\label{printtrans}\n\\end{align}\nTo understand these transformation laws, one should keep in mind that the spin scalar product\nremains unchanged under conformal transformations. The same is true for the \nintegrand~$\\mathopen{\\prec} \\psi | \\slashed{\\nu} \\phi \\mathclose{\\succ}_x$ of the scalar product~\\eqref{print}, because\nthe operator~$\\slashed{\\nu}$ is normalized by~$\\slashed{\\nu}^2=\\mbox{\\rm 1 \\hspace{-1.05 em} 1}$.\n\nFrom~\\eqref{printtrans} we conclude that the scalar product does not change under conformal\ntransformations. In particular, the conformally transformed plane-wave solutions\n\\begin{equation} \\label{vertical}\n\\tilde{{\\mathfrak{e}}}_{k, L\\!\/\\!R} = f(\\varphi)^{-\\frac{1}{2}}\\: {\\mathfrak{e}}_{k, L\\!\/\\!R}\n\\end{equation}\nare an orthonormal basis of~$\\tilde{\\H}_0$. 
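The invariance of the scalar product under the conformal transformation can also be seen numerically: the weight $f$ in~\eqref{printtrans} cancels against the factors $f^{-1\/2}$ of the transformed solutions (a sketch; the conformal function $f(\varphi) = 2 + \cos\varphi$ is an arbitrary strictly positive example).

```python
import numpy as np

phi = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)
dphi = phi[1] - phi[0]
f = 2.0 + np.cos(phi)  # an arbitrary strictly positive conformal function

def e(k, c, t=0.4):
    # plane-wave solutions (esols) in the Minkowski vacuum, omega = -k / +k
    w = np.exp(-1j * (-k if c == "L" else k) * t + 1j * k * phi) / (2 * np.pi)
    z = np.zeros_like(w)
    return np.array([w, z]) if c == "L" else np.array([z, w])

def sprod_flat(psi, ph):
    # scalar product (printtorus)
    return 2 * np.pi * np.sum(np.conj(psi) * ph) * dphi

def sprod_conf(psi, ph):
    # transformed scalar product (printtrans), with conformal weight f
    return 2 * np.pi * np.sum(np.conj(psi) * ph * f) * dphi

for (k, c), (kk, cc) in [((1, "L"), (1, "L")), ((1, "L"), (2, "L")), ((3, "R"), (3, "R"))]:
    lhs = sprod_conf(e(k, c) / np.sqrt(f), e(kk, cc) / np.sqrt(f))
    rhs = sprod_flat(e(k, c), e(kk, cc))
    assert abs(lhs - rhs) < 1e-10  # (psi~ | phi~) = (psi | phi)
```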
The space-time inner product~\\eqref{stiptrans}, however,\ninvolves a conformal factor~$f(\\varphi)$. As a consequence, the space-time inner product\nof the basis vectors~$(\\tilde{{\\mathfrak{e}}}_{k, c})_{k \\in \\mathbb{Z}, c \\in \\{L, R\\}}$ can be computed by\n\\begin{align*}\n\\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,L} | \\tilde{{\\mathfrak{e}}}_{k', L} \\mathclose{>} &= 0 = \\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,R} | \\tilde{{\\mathfrak{e}}}_{k', R} \\mathclose{>} \\\\\n\\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,R} | \\tilde{{\\mathfrak{e}}}_{k', L} \\mathclose{>} &= \\int_0^{2 \\pi} dt \\int_0^{2 \\pi} f(\\varphi)\\, d\\varphi\\; \n\\mathopen{\\prec} \\tilde{{\\mathfrak{e}}}_{k,R}(t,\\varphi) \\,|\\, \\tilde{{\\mathfrak{e}}}_{k', L}(t,\\varphi) \\mathclose{\\succ} \\\\\n&= \\frac{1}{2 \\pi} \\:\\delta_{\\omega_{k,R}, \\,\\omega_{k',L}}\n\\int_0^{2 \\pi} f(\\varphi)\\: e^{-i (k-k') \\varphi}\\, d\\varphi\n= \\frac{1}{2 \\pi} \\:\\delta_{\\omega_{k,R}, \\omega_{k',L}}\\: \\hat{f}_{k-k'}\\:,\n\\end{align*}\nwhere~$\\hat{f}_k$ is the $k^\\text{th}$ Fourier coefficient of~$f$,\n\\[ f(\\varphi) = \\frac{1}{2 \\pi} \\sum_{k \\in \\mathbb{Z}} \\hat{f}_k\\: e^{i k \\varphi} \\:. 
\\]\n\nUsing the explicit form of the frequencies~\\eqref{omegaex}, we obtain the following\ninvariant subspaces and corresponding matrix representations of~$\\mathscr{S}$,\n\\begin{align*}\n\\hat{\\mathscr{S}}|_{\\text{span}(\\tilde{{\\mathfrak{e}}}_{-k, L}, \\tilde{{\\mathfrak{e}}}_{k, R})} &=\n\\frac{1}{2 \\pi} \\begin{pmatrix} 0 & \\overline{\\hat{f}_{2k}} \\\\ \\hat{f}_{2k} & 0 \\end{pmatrix} \n&&\\hspace*{-0.8cm} \\text{if~$k \\leq 0$} \\\\\n\\hat{\\mathscr{S}}|_{\\text{span}(\\tilde{{\\mathfrak{e}}}_{-k-p, L}, \\tilde{{\\mathfrak{e}}}_{k, R})} &=\n\\frac{1}{2 \\pi} \\begin{pmatrix} 0 & \\overline{\\hat{f}_{2k+p}} \\\\ \\hat{f}_{2k+p} & 0 \\end{pmatrix} \n&&\\hspace*{-0.8cm} \\text{if~$k>0$} \\\\\n\\hat{\\mathscr{S}}|_{\\text{span}(\\tilde{{\\mathfrak{e}}}_{-1, L}, \\ldots, \\tilde{{\\mathfrak{e}}}_{-p, L})} &= 0 \\:.\n\\end{align*}\nIn particular, we can read off the chiral index:\n\\begin{Prp} \\label{prpindex} Assume that almost all Fourier coefficients~$\\hat{f}_k$ \nof the conformal function in~\\eqref{conform} are non-zero.\nThen the fermionic signature operator in the massless odd case has a finite chiral\nindex (see Definition~\\ref{defind0}) and~$\\ind_0 \\mathscr{S} = p$.\n\\end{Prp}\n\nWe finally compute the Dirac operator in position space. 
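Before doing so, we note that the counting behind Proposition~\ref{prpindex} can be reproduced in a momentum cutoff (a combinatorial sketch; the cutoff $K$ is arbitrary, and we compare frequencies only inside the window $|\omega| \leq K$ in order to avoid boundary artifacts of the truncation): assuming that all relevant Fourier coefficients $\hat{f}$ are non-zero, a mode lies in the kernel exactly when there is no mode of the opposite chirality with the same frequency.

```python
p, K = 3, 50  # index parameter and momentum cutoff (arbitrary choices)

# frequencies (omegaex)
omega_L = {k: -k for k in range(-K, K + 1)}
omega_R = {k: (k if k <= 0 else k + p) for k in range(-K, K + 1)}

def window(d):
    # restrict to |omega| <= K, where the truncation is faithful
    return {k: w for k, w in d.items() if abs(w) <= K}

freqs_L = set(window(omega_L).values())
freqs_R = set(window(omega_R).values())

unpaired_L = [k for k, w in window(omega_L).items() if w not in freqs_R]
unpaired_R = [k for k, w in window(omega_R).items() if w not in freqs_L]

assert sorted(unpaired_L) == list(range(-p, 0))  # the modes e_{-1,L}, ..., e_{-p,L}
assert unpaired_R == []
assert len(unpaired_L) - len(unpaired_R) == p    # the chiral index equals p
```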
The dispersion relations in~\\eqref{omegaex}\nare realized by the operators\n\\begin{align*}\n{\\mathcal{D}}_L &= i (\\partial_t - \\partial_\\varphi) \\\\\n{\\mathcal{D}}_R &= i (\\partial_t + \\partial_\\varphi) + {\\mathscr{B}} \\:,\n\\end{align*}\nwhere~${\\mathscr{B}}$ is the spatial integral operator\n\\[ \\big( {\\mathscr{B}} \\psi \\big)(t,\\varphi) = \\int_0^{2 \\pi} {\\mathscr{B}}(\\varphi, \\varphi')\\: \\psi(t, \\varphi')\\: d\\varphi' \\]\nwith the distributional integral kernel\n\\[ {\\mathscr{B}}(\\varphi, \\varphi') = -\\frac{p}{2 \\pi} \\sum_{k=1}^\\infty e^{i k (\\varphi-\\varphi')}\n= -\\frac{p}{2}\\: \\delta(\\varphi-\\varphi') -\\frac{p}{2 \\pi}\\:\\frac{\\text{PP}}{e^{-i(\\varphi-\\varphi')}-1} \\:. \\]\nHence, choosing the Dirac matrices as\n\\begin{equation} \\label{gammadef}\n\\gamma^0 = \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\:,\\qquad\n\\gamma^1 = \\begin{pmatrix} 0 & 1 \\\\ -1 & 0 \\end{pmatrix}\n\\end{equation}\nand using~\\eqref{defchir}, we obtain\n\\begin{equation} \\label{Diracform}\n{\\mathcal{D}} = i \\gamma^0 \\partial_t + i \\gamma^1 \\partial_\\varphi\n+ \\gamma^1 \\chi_R \\, {\\mathscr{B}} \\:.\n\\end{equation}\nPerforming the conformal transformation~\\eqref{Dirconf}, we finally obtain\n\\begin{align}\n\\big(\\tilde{{\\mathcal{D}}} \\psi \\big)(t, \\varphi) &= \\frac{i}{f(\\varphi)} \\left( \\gamma^0 \\partial_t + \\gamma^1 \\partial_\\varphi \n+ \\gamma^1\\, \\frac{f'(\\varphi)}{2 f(\\varphi)} \\right) \\psi(t, \\varphi)\n-\\frac{p}{2 f(\\varphi)} \\: \\gamma^1 \\chi_R \\, \\psi(t, \\varphi) \\label{Dir1} \\\\\n&\\qquad -\\frac{p}{2 \\pi}\\: \\frac{\\gamma^1 \\chi_R}{f(\\varphi)^\\frac{3}{2}}\n\\int_0^{2 \\pi} \\frac{\\text{PP}}{e^{-i(\\varphi-\\varphi')}-1}\\:\n\\psi(t, \\varphi')\\: \\sqrt{f(\\varphi')}\\: d\\varphi' \\:. \\label{Dir2}\n\\end{align}\nThus~\\eqref{Dir1} is the Dirac operator in the Lorentzian metric~\\eqref{conform}\nwith a constant right-handed potential. 
Moreover, the summand~\\eqref{Dir2} is a nonlocal integral\noperator involving a singular integral kernel.\n\nThis example shows that the index of Proposition~\\ref{prpindex} in general does not encode\nthe topology of space-time, because for a fixed space-time topology the index can take\nany integer value. The way we understand the index is that it gives topological information\non the singular behavior of the potential in the Dirac operator.\n\n\\section{Example: A Dirac Operator with~$\\ind \\mathscr{S} \\neq 0$} \\label{secex2}\nWe now construct an example of a fermionic signature operator for which the\nindex~$\\ind \\mathscr{S}$ of Definition~\\ref{defind} is non-trivial.\nTo this end, we want to modify the example of the previous section.\nThe major difference from the previous setting is that the Hilbert space~$\\H_m$\ndoes not have a decomposition into two subspaces~$\\H_L$ and~$\\H_R$, making\nit necessary to consider the operators~$\\mathscr{S}_L$ and~$\\mathscr{S}_R$ as operators on the whole\nsolution space~$\\H_m$. Our first task is to remove the infinite-dimensional kernels of the operators~$\\mathscr{S}_L$\nand~$\\mathscr{S}_R$. This can typically be achieved by perturbing the Dirac operator, for example by introducing\na rest mass. The second and more substantial modification is to arrange that the\noperators~$\\mathscr{S}_L$ and~$\\mathscr{S}_R$ have {\\em{infinite-dimensional invariant subspaces}}.\nThis is needed for the following reason: In the example of the previous section,\nthe operator~$\\mathscr{S}_L|_{\\H_L} : \\H_L \\rightarrow \\H_R$ mapped one Hilbert\nspace to another Hilbert space. 
Therefore, we obtained a non-trivial index simply by arranging\nthat the operator~$\\mathscr{S}_L|_{\\H_L}$ gives a non-trivial ``pairing'' of vectors of~$\\H_L$ with\nvectors of~$\\H_R$ (as indicated in Figure~\\ref{figdisperse} by the horizontal dashed lines).\nIn particular, if considered as an operator on~$\\H_0$, the operator~$\\mathscr{S}_L$ had at most two-dimensional\ninvariant subspaces.\nFor the chiral index of Definition~\\ref{defind}, however, we have only one Hilbert space~$\\H_m$\nat our disposal, so that the operator~$\\mathscr{S}_L : \\H_m \\rightarrow \\H_m$ is an endomorphism of~$\\H_m$.\nAs a consequence, the chiral index is trivial whenever~$\\H_m$ splits into a direct sum\nof finite-dimensional subspaces which are invariant under~$\\mathscr{S}_L$\n(because on each invariant subspace, the index is trivial due to the\nrank-nullity theorem of linear algebra).\n\nThe following example is designed with the aim of showing in explicit detail that the\nindex is non-zero. Our starting point is the plane-wave solutions~\\eqref{esols}\nwith the frequencies according to~\\eqref{omegaex} with~$p \\in \\mathbb{N}$.\nIn Figure~\\ref{figsnail} the transformed plane-wave solutions~$\\tilde{{\\mathfrak{e}}}_{k,c}$ \n(where the transformation from~${\\mathfrak{e}}_{k,c}$ to~$\\tilde{{\\mathfrak{e}}}_{k,c}$ will be explained below) are\narranged according to their frequencies and momenta on a lattice.\n\\begin{figure}\n\\scalebox{1}\n{\n\\begin{pspicture}(0,-2.175)(7.06,2.18)\n\\psdots[dotsize=0.24,fillstyle=solid,dotstyle=o](3.22,0.06)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(6.23,-0.175){$k$}\n\\psline[linewidth=0.04cm,arrowsize=0.3cm 1.0,arrowlength=1.5,arrowinset=0.5]{->}(1.62,-0.54)(6.42,-0.54)\n\\psline[linewidth=0.04cm,arrowsize=0.3cm 1.0,arrowlength=1.5,arrowinset=0.5]{->}(3.82,-2.14)(3.82,2.16)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.21,1.925){$\\omega$}\n\\psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 
0.16cm](2.42,-1.94)(6.02,1.66)\n\\psline[linewidth=0.02cm,linestyle=dashed,dash=0.16cm 0.16cm](1.62,1.66)(5.22,-1.94)\n\\psdots[dotsize=0.12](4.42,0.66)\n\\psdots[dotsize=0.12](3.82,-0.54)\n\\psdots[dotsize=0.12](3.22,0.06)\n\\psdots[dotsize=0.12](2.02,1.26)\n\\psdots[dotsize=0.12](5.02,-1.74)\n\\psdots[dotsize=0.12](3.22,-1.14)\n\\psdots[dotsize=0.12](2.62,-1.74)\n\\psdots[dotsize=0.12](2.62,0.66)\n\\psdots[dotsize=0.12](5.02,1.26)\n\\psdots[dotsize=0.12](4.42,-1.14)\n\\psline[linewidth=0.04cm](4.42,-0.44)(4.42,-0.64)\n\\psline[linewidth=0.04cm](5.02,-0.44)(5.02,-0.64)\n\\psline[linewidth=0.04cm](5.62,-0.44)(5.62,-0.64)\n\\psline[linewidth=0.04cm](3.22,-0.44)(3.22,-0.64)\n\\psline[linewidth=0.04cm](2.62,-0.44)(2.62,-0.64)\n\\psline[linewidth=0.04cm](2.02,-0.44)(2.02,-0.64)\n\\psline[linewidth=0.04cm](3.92,0.06)(3.72,0.06)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.11,0.125){$1$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(1.44,1.245){$\\tilde{{\\mathfrak{e}}}_{-3,L}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.49,-0.255){$1$}\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(2.02,1.46)(2.02,1.06)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(5.02,1.46)(5.02,1.06)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(4.42,0.86)(4.42,0.46)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(2.62,0.86)(2.62,0.46)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(3.22,0.26)(3.22,-0.14)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(4.42,-0.94)(4.42,-1.34)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(3.22,-0.94)(3.22,-1.34)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(2.62,-1.54)(2.62,-1.94)\n\\psline[linewidth=0.03cm,tbarsize=0.07055555cm 5.0]{|*-|*}(5.02,-1.54)(5.02,-1.94)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(2.22,1.26)(3.02,1.54)(3.92,1.7)(4.82,1.26)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 
3.0,arrowlength=1.2,arrowinset=0.4]{->}(5.22,1.16)(5.92,0.2)(5.9,-1.0)(5.2,-1.66)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(4.84,-1.78)(4.12,-2.1)(3.7,-2.16)(2.8,-1.78)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(2.44,-1.7)(1.74,-1.0)(1.72,-0.06)(2.48,0.64)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(2.8,0.68)(3.24,0.88)(3.66,1.0)(4.24,0.68)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(4.58,0.6)(5.04,-0.02)(4.98,-0.58)(4.58,-1.1)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(4.28,-1.18)(4.1,-1.3)(3.68,-1.36)(3.4,-1.18)\n\\psbezier[linewidth=0.03,arrowsize=0.08cm 3.0,arrowlength=1.2,arrowinset=0.4]{->}(3.06,-1.14)(2.72,-0.7)(2.8,-0.32)(3.04,0.04)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(5.64,-1.875){$\\tilde{{\\mathfrak{e}}}_{2,L}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(5.43,1.645){$\\tilde{{\\mathfrak{e}}}_{2,R}$}\n\\psline[linewidth=0.04cm,tbarsize=0.07055555cm 5.0]{|*-|*}(3.82,-0.34)(3.82,-0.74)\n\\end{pspicture} \n}\n\\caption{The action of~$\\mathscr{S}_L$ on the transformed plane-wave solutions in the case~$p=1$.}\n\\label{figsnail}\n\\end{figure}\nWe shall construct the operator~$\\mathscr{S}_L$ in such a way that\nthese plane-wave solutions are mapped to each other as indicated by the arrows.\nThus, similar to a shift operator, $\\mathscr{S}_L$ maps the basis vectors to each other ``spiraling in,''\nimplying that the vector~$\\tilde{{\\mathfrak{e}}}_{-1,L}$ (depicted with the circled dot) is in the kernel of~$\\mathscr{S}_L$.\nLikewise, the operator~$\\mathscr{S}_R$ acts like a ``spiraling out'' shift operator, so that it is injective.\nIn this way, we arrange that~$\\ind \\mathscr{S} = 1$. 
Similarly, in the case~$p>1$ we shall obtain~$p$\nspirals, so that~$\\ind \\mathscr{S}=p$.\n\nBefore entering the detailed construction, we point out that our method\nis driven by the wish that the example should be explicit and that the kernels of the\nchiral signature operators should be given in closed form.\nThis makes it necessary to introduce a Dirac operator which seems somewhat artificial.\nIn particular, instead of introducing a rest mass,\nwe arrange a mixing of the left- and right-handed components\nusing a time-dependent vectorial gauge transformation.\nMoreover, we again work with a conformal transformation with a carefully adjusted\nspatial and time dependence. We regard these special features merely as a\ndevice for keeping the computations as simple as possible.\nIn view of the stability result of Theorem~\\ref{thmstable}, we expect that the index\nis also non-trivial in more realistic examples involving a rest mass and less fine-tuned potentials.\nBut this presumably comes at the expense of longer computations or less explicit arguments.\n\nWe begin on the cylinder~$\\mycal M=(0, 6 \\pi) \\times S^1$, again with the Minkowski metric~\\eqref{2dl}\nand two-component spinors endowed with the spin scalar product~\\eqref{ssprod}.\nThe space-time inner product~\\eqref{stip} becomes\n\\begin{equation} \\label{stiptoru2}\n\\mathopen{<} \\psi|\\phi \\mathclose{>} = \\int_0^{6 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} \\psi(t, \\varphi) \\,|\\, \\phi(t, \\varphi) \\mathclose{\\succ}\\: d\\varphi \\:dt \\:,\n\\end{equation}\nwhereas the scalar product on solutions of the Dirac equation is again given by~\\eqref{printtorus}.\nWe again consider the massless Dirac equation~\\eqref{Dir0}\nwith the Dirac operator~\\eqref{Direx} and the left- and right-handed\noperators according to~\\eqref{DirHam}. 
Moreover, we again assume that the operators~${\\mathcal{D}}_{L\\!\/\\!R}$\nhave the plane-wave solutions~\\eqref{esols} with frequencies~\\eqref{omegaex}.\nFor a fixed real parameter~$\\nu \\neq 0$, we consider the transformation\n\\begin{equation} \\label{Utdef}\nU(t) = \\frac{\\mbox{\\rm 1 \\hspace{-1.05 em} 1} + i \\nu \\gamma^0 \\cos t\/3}{\\sqrt{1 + \\nu^2 \\cos^2 t\/3}} = \n\\frac{1}{\\sqrt{1 + \\nu^2 \\cos^2 t\/3}}\n\\begin{pmatrix} 1 & i \\nu \\cos t\/3 \\\\ i \\nu \\cos t\/3 & 1 \\end{pmatrix} \\:.\n\\end{equation}\nObviously, $U(t) \\in {\\rm{U}}(2)$ is a unitary matrix. Moreover, it commutes with~$\\gamma^0$,\nimplying that it is also unitary with respect to the spin scalar product.\nAs a consequence, the transformation~$U(t)$ is unitary both on the Hilbert space~$\\H_0$\nand with respect to the inner product~\\eqref{stiptoru2}.\nNext, we again consider a conformal transformation~\\eqref{Dirconf} and~\\eqref{psiconf},\nbut now with a conformal function~$f(t, \\varphi)$ which depends on space and time.\nThus we set\n\\begin{equation} \\label{Dirdef}\n\\tilde{{\\mathcal{D}}} = f^{-\\frac{3}{2}} U \\,{\\mathcal{D}}\\, U^* f^{\\frac{1}{2}} \\qquad \\text{and} \\qquad\n\\tilde{\\psi} = f^{-\\frac{1}{2}}\\: U \\,\\psi \\:.\n\\end{equation}\nSimilar to~\\eqref{stiptrans} and~\\eqref{printtrans}, the inner products transform to\n\\[ \\mathopen{<} \\tilde{\\psi} | \\tilde{\\phi} \\mathclose{>}\n= \\int_0^{6 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} \\psi(t, \\varphi) \\,|\\, \\phi(t, \\varphi) \\mathclose{\\succ}\\:\nf(t, \\varphi)\\, d\\varphi\\,dt \\qquad \\text{and} \\qquad\n(\\tilde{\\psi} | \\tilde{\\phi}) = (\\psi | \\phi) \\:. 
\\]\nIn particular, the transformed plane-wave solutions~$\\tilde{{\\mathfrak{e}}}_{k,c}$ are an orthonormal\nbasis of~$\\H_0$.\nKeeping in mind that the chiral projectors in~\\eqref{stipLR} do {\\em{not}} commute with~$U$,\nwe obtain\n\\[ \\mathopen{<} \\tilde{\\psi} |\\tilde{\\phi} \\mathclose{>}_L\n= \\int_0^{6 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} U(t)\\, \\psi(t, \\varphi) \\,|\\, \\chi_L\\, U(t)\\, \\phi(t, \\varphi) \\mathclose{\\succ}\\:\nf(t, \\varphi)\\, d\\varphi\\,dt \\]\nand thus, in view of~\\eqref{SLRdef2},\n\\begin{equation} \\label{SL1}\n(\\tilde{{\\mathfrak{e}}}_{k,c} \\,|\\, \\mathscr{S}_L \\,\\tilde{{\\mathfrak{e}}}_{k',c'}) = \n\\int_0^{6 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} U {\\mathfrak{e}}_{k,c} \\,|\\, \\chi_L U {\\mathfrak{e}}_{k',c'} \\mathclose{\\succ}\\:\nf(t, \\varphi)\\, d\\varphi\\,dt \\:.\n\\end{equation}\nIn order to get rid of the square roots in~\\eqref{Utdef}, it is most convenient to set\n\\begin{equation} \\label{lambdadef}\nV(t) = \\begin{pmatrix} 1 & i \\nu \\cos t\/3 \\\\ i \\nu \\cos t\/3 & 1 \\end{pmatrix}\n\\qquad \\text{and} \\qquad \\mu(t, \\varphi) = \\frac{f(t, \\varphi)}{1 + \\nu^2 \\cos^2 t\/3} \\:.\n\\end{equation}\nThen~\\eqref{SL1} simplifies to\n\\begin{equation} \\label{SL2}\n(\\tilde{{\\mathfrak{e}}}_{k,c} \\,|\\, \\mathscr{S}_L \\,\\tilde{{\\mathfrak{e}}}_{k',c'}) = \n\\int_0^{6 \\pi} \\int_0^{2 \\pi} \\mathopen{\\prec} V {\\mathfrak{e}}_{k,c} \\,|\\, \\chi_L V {\\mathfrak{e}}_{k',c'} \\mathclose{\\succ}\\:\n\\mu(t, \\varphi)\\, d\\varphi\\,dt \\:.\n\\end{equation}\n\nLet us first discuss the effect of the transformation~$V$. A left-handed spinor is mapped to\n\\[ V \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} = \n\\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} + \\frac{i \\nu}{2}\\: e^{it\/3}\\, \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}\n+ \\frac{i \\nu}{2}\\: e^{-it\/3}\\, \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} \\:. 
\\]\nThus two right-handed contributions are generated, whose frequencies differ from the frequency\nof the left-handed component by~$\\pm1\/3$. Similarly, a right-handed spinor is mapped to\n\\[ V \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} = \n\\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} + \\frac{i \\nu}{2}\\: e^{it\/3}\\, \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}\n+ \\frac{i \\nu}{2}\\: e^{-it\/3}\\, \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} \\:, \\]\ngenerating two left-handed components with frequencies shifted by $\\pm 1\/3$.\nAgain plotting the frequencies vertically, we depict the transformation~$V$ as in Figure~\\ref{figVtrans}.\n\\begin{figure}\n\\scalebox{1}\n{\n\\begin{pspicture}(0,-0.97)(10.51,0.97)\n\\psdots[dotsize=0.2](4.4,-0.15)\n\\psline[linewidth=0.04cm,tbarsize=0.07055555cm 5.0]{|*-|*}(5.6,0.45)(5.6,-0.75)\n\\psdots[dotsize=0.2](5.6,-0.15)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.01,-0.125){$V$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.98,-0.145){$=$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(4.39,0.215){$L$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(6.19,-0.145){$L \\;\\;,$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(6.0,-0.745){$R$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(6.0,0.455){$R$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(2.01,-0.145){$\\omega$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(1.68,0.455){$\\omega+\\frac{1}{3}$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(1.64,-0.745){$\\omega-\\frac{1}{3}$}\n\\psline[linewidth=0.04cm,arrowsize=0.04cm 
4.0,arrowlength=2.0,arrowinset=0.4]{->}(2.4,-0.95)(2.4,0.95)\n\\psline[linewidth=0.04cm](2.3,-0.15)(2.5,-0.15)\n\\psline[linewidth=0.04cm](2.3,0.45)(2.5,0.45)\n\\psline[linewidth=0.04cm](2.3,-0.75)(2.5,-0.75)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](2.4,0.45)(5.5,0.45)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](2.4,-0.15)(3.6,-0.15)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](2.3,-0.75)(5.6,-0.75)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(7.61,-0.125){$V$}\n\\psdots[dotsize=0.2](8.0,-0.15)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(8.58,-0.145){$=$}\n\\psline[linewidth=0.04cm,tbarsize=0.07055555cm 5.0]{|*-|*}(9.2,0.45)(9.2,-0.75)\n\\psdots[dotsize=0.2](9.2,-0.15)\n\\usefont{T1}{ptm}{m}{n}\n\\rput(9.6,-0.145){$R$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(9.59,-0.745){$L$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(9.59,0.455){$L$}\n\\usefont{T1}{ptm}{m}{n}\n\\rput(8.0,0.215){$R$}\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](6.4,0.45)(9.1,0.45)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](6.4,-0.75)(9.1,-0.75)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](6.7,-0.15)(7.3,-0.15)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](10.0,-0.75)(10.5,-0.75)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](10.0,0.45)(10.5,0.45)\n\\psline[linewidth=0.02cm,linestyle=dotted,dotsep=0.16cm](10.0,-0.15)(10.5,-0.15)\n\\end{pspicture} \n}\n\\caption{The transformation~$V$ in momentum space.}\n\\label{figVtrans}\n\\end{figure}\nThe same notation is also used in Figure~\\ref{figsnail} for the transformed\nplane-wave solutions.\n\nThe inner product~$\\mathopen{\\prec} .| \\chi_L . \\mathclose{\\succ}$ in~\\eqref{SL2} only gives a contribution if the\narguments on the left and right have the opposite chirality.\nSince the transformed plane-wave solutions~$V {\\mathfrak{e}}_{k,c}$ have a fixed chirality\nat every lattice point, one sees in particular that~\\eqref{SL2} vanishes if~$\\mu$ is chosen\nas a constant. 
By adding to the constant~$\\mu=1$ contributions with different momenta,\nwe can connect the different lattice points in Figure~\\ref{figsnail}. This leads us to the ansatz\n\\begin{equation} \\label{muform}\n\\mu(t, \\varphi) = 1 + \\mu_\\text{hor}(t, \\varphi) + \\mu_\\text{vert}(t, \\varphi) \\:,\n\\end{equation}\nwhere the last two summands should describe the horizontal and vertical arrows\nin Figure~\\ref{figsnail}, respectively. For the horizontal arrows we can work similarly to~\\eqref{vertical} with a\nspatially-dependent conformal transformation. However, in order to make sure that the\nleft-handed components generated by~$V$ (corresponding to the two $L$s at the very right\nof Figure~\\ref{figsnail}) are not connected horizontally, we include two Fourier modes\nwhich shift the frequency by~$\\pm 2\/3$,\n\\begin{equation} \\label{muhor}\n\\mu_\\text{hor}(t, \\varphi) = a(\\varphi) \\left( 1 - e^{\\frac{2 i t}{3}} - e^{-\\frac{2 i t}{3}} \\right) ,\n\\end{equation}\nwhere~$a$ has the Fourier decomposition\n\\begin{equation} \\label{aser}\na(\\varphi) = \\sum_{k =1}^\\infty \\left( a_k\\, e^{i k \\varphi} + \\overline{a_{-k}}\\, e^{-i k \\varphi} \\right) \\:.\n\\end{equation}\nFor the vertical arrows we must be careful that the left-handed contribution of~$V {\\mathfrak{e}}_{k,L}$\nis not connected to the right-handed component of~$V {\\mathfrak{e}}_{k,R}$, because then the arrow\nwould have the wrong direction. 
To this end, we avoid integer frequencies.\nInstead, we work with the frequencies in~$\\mathbb{Z} \\pm 1\/3$, because they\nconnect the left-handed component of~$V {\\mathfrak{e}}_{k,R}$ to the right-handed component of~$V {\\mathfrak{e}}_{k,L}$.\nThis leads us to the ansatz\n\\begin{equation} \\label{bser}\n\\mu_\\text{vert}(t, \\varphi) = \\mu_\\text{vert}(t) = \\sum_{n \\in \\mathbb{Z}} e^{i n t}\n\\left(b_n \\,e^{\\frac{it}{3}} + \\overline{b_{-n}}\\, e^{\\frac{-it}{3}} \\right) .\n\\end{equation}\nThe ans\\\"atze~\\eqref{aser} and~\\eqref{bser} ensure that~$\\mu$ is real-valued.\nMoreover, by choosing the Fourier coefficients sufficiently small, one can clearly arrange that\nthe first summand in~\\eqref{muform} dominates, so that~$\\mu$ is strictly positive.\nWe thus obtain the following result.\n\n\\begin{Prp} Assume that the Fourier coefficients~$a_k$ and~$b_n$\nin~\\eqref{aser} and~\\eqref{bser} are sufficiently small and\nthat almost all Fourier coefficients are non-zero.\nThen the function~$\\mu$ defined by~\\eqref{muhor} and~\\eqref{muform} is strictly positive.\nConsider the Dirac operator~\\eqref{Dirdef} with~$U$ and~$f$ according\nto~\\eqref{Utdef} (for some fixed~$\\nu \\in \\mathbb{R} \\setminus \\{0\\}$) and~\\eqref{lambdadef}.\nThen the chiral index of the fermionic signature operator (see Definition~\\ref{defind}) \nis finite and~$\\ind \\mathscr{S} = p$.\n\\end{Prp}\n\nWe finally discuss the form of the Dirac operator in position space.\nSubstituting~\\eqref{Diracform} into~\\eqref{Dirdef} and using\nthe above form of~$U$ and~$f$, the Dirac operator~$\\tilde{{\\mathcal{D}}}$\ncan be computed in closed form. 
As discussed in the previous section,\nthe Dirac operator contains a nonlocal integral operator with a singular potential.\nMoreover, the transformation~$U$ modifies the Dirac matrix~$\\gamma^1$ to\n\\[ \\gamma^1 \\rightarrow U \\gamma^1 U^* = \n\\frac{1}{1+ \\nu^2 \\cos^2(t\/3)} \\Big( \\big(1-\\nu^2 \\cos^2(t\/3) \\big) \\:\\gamma^1 - 2 \\nu \\cos^2(t\/3)\\: \\gamma^2\n\\Big) \\:, \\]\nwhere\n\\[ \\gamma^2 := \\begin{pmatrix} i & 0 \\\\ 0 & -i \\end{pmatrix}\\:. \\]\nThus the representation of the Dirac matrices becomes time-dependent; this is the main effect\nof the vectorial transformation~$U$. This transformation changes the first-order terms in the\nDirac equation. Moreover, the conformal transformation also changes the first-order terms,\njust as in~\\eqref{Dir1}, by a prefactor~$1\/f$.\n\n\\section{Examples Illustrating the Homotopy Invariance} \\label{secex3}\nWe now give two examples to illustrate our considerations on the homotopy invariance of\nthe chiral index.\nWe begin with an example which shows that the dimension of the kernel of~$\\mathscr{S}_L$\ndoes not need to be constant for deformations which are continuous in~$\\text{\\rm{L}}(\\H_0)$.\nIt may even become infinite-dimensional.\n\n\\begin{Example} \\label{exinstable}\n{\\em{ We consider the space-time~$\\mycal M=(0, T) \\times S^1$ with coordinates~$t \\in (0, T)$\nand~$\\varphi \\in [0, 2 \\pi)$ endowed with the Minkowski metric\n\\[ ds^2 = dt^2 - d\\varphi^2 \\:. \\]\nWe again choose two-component complex spinors with the spin scalar product~\\eqref{ssprod}.\nThe Dirac operator is chosen as\n\\[ {\\mathcal{D}} = i \\gamma^0 \\partial_t + i \\gamma^1 \\partial_\\varphi \\:, \\]\nwhere the Dirac matrices are again given by~\\eqref{gammadef}.\nThe pseudoscalar matrix and the chiral projectors are again\nchosen according to~\\eqref{pseudoc} and~\\eqref{defchir}.\n\nWe consider the massless Dirac equation\n\\[ {\\mathcal{D}} \\psi = 0 \\:. 
\\]\nThis equation can be solved by plane-wave solutions, which we write as\n\\begin{equation} \\label{eLR2}\n{\\mathfrak{e}}_{k,L}(\\zeta) = \\frac{1}{2 \\pi}\\: e^{+i k t + i k \\varphi} \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} \\:,\\qquad\n{\\mathfrak{e}}_{k,R}(\\zeta) = \\frac{1}{2 \\pi}\\: e^{-i k t + i k \\varphi} \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix} \\:,\n\\end{equation}\nwhere~$k \\in \\mathbb{Z}$ (the indices~$L$ and~$R$ denote the left- and right-handed components; at the\nsame time they propagate to the left and right, respectively). By direct computation, one verifies\nthat~$({\\mathfrak{e}}_{k,c})_{k \\in \\mathbb{Z}, c \\in \\{L,R\\}}$ is an orthonormal basis of~$\\H_0$.\n\nWe next compute the space-time inner product~\\eqref{stip},\n\\begin{align*}\n\\mathopen{<} {\\mathfrak{e}}_{k,R} | {\\mathfrak{e}}_{0, L} \\mathclose{>} &= \\int_0^T dt \\int_0^{2 \\pi} d\\varphi\\; \n\\mathopen{\\prec} {\\mathfrak{e}}_{k,R}(t,\\varphi) \\,|\\, {\\mathfrak{e}}_{0, L}(t,\\varphi) \\mathclose{\\succ}\n= \\frac{1}{2 \\pi} \\int_0^T \\delta_{k,0} \\,dt = \\frac{T}{2 \\pi}\\: \\delta_{k,0} \\\\\n\\mathopen{<} {\\mathfrak{e}}_{k,R} | {\\mathfrak{e}}_{k', L} \\mathclose{>} \n&= \\frac{1}{2 \\pi} \\: \\delta_{k,k'} \\int_0^T e^{2 i k t} \\,dt = \\frac{e^{2 i k T} - 1}{4 \\pi i k}\\: \\delta_{k,k'}\n\\quad \\text{($k' \\neq 0$)} \\\\\n\\mathopen{<} {\\mathfrak{e}}_{k,L} | {\\mathfrak{e}}_{k', L} \\mathclose{>} &= 0 = \\mathopen{<} {\\mathfrak{e}}_{k,R} | {\\mathfrak{e}}_{k', R} \\mathclose{>} \\:.\n\\end{align*}\nThus the fermionic signature operator~$\\mathscr{S}$ is invariant on the subspaces~$\\H^{(k)}_0$\ngenerated by the basis vectors~${\\mathfrak{e}}_{k,L}$ and~${\\mathfrak{e}}_{k,R}$. 
Moreover, in these bases it has the\nmatrix representations\n\\[ \\mathscr{S}|_{\\H^{(0)}_0} = \\frac{T}{2 \\pi} \\begin{pmatrix} 0 & 1 \\\\ 1 & 0 \\end{pmatrix} \\qquad \\text{and} \\qquad\n\\mathscr{S}|_{\\H^{(k)}_0} = \\frac{1}{4 \\pi i k}\\:\n\\begin{pmatrix} 0 & e^{2 i k T} - 1 \\\\ e^{-2 i k T} - 1 & 0 \\end{pmatrix} \\quad (k \\neq 0)\\:. \\]\n\nIf~$T \\not \\in \\pi \\mathbb{Q}$, the matrix entries~$e^{\\pm 2 i k T} - 1$ are all non-zero. As a consequence,\nthe operators~$\\mathscr{S}_L|_{\\H_L}$ and~$\\mathscr{S}_R|_{\\H_R}$ are both injective.\nThus~$\\mathscr{S}$ has finite chiral index in the massless odd case (see Definition~\\ref{defind0}).\nIf~$T \\in \\pi \\mathbb{Q}$, however, the matrix entries~$e^{\\pm 2 i k T} - 1$ vanish for all~$k$ for which~$2 k T$ is a multiple of~$2 \\pi$.\nAs a consequence, the operators~$\\mathscr{S}_L|_{\\H_L}$ and~$\\mathscr{S}_R|_{\\H_R}$\nboth have an infinite-dimensional kernel, so that~$\\mathscr{S}$ does not have a finite chiral index.\n}} \\ \\hfill $\\Diamond$\n\\end{Example}\nThis example also explains why we need additional assumptions like those in\nTheorems~\\ref{thmstable} and~\\ref{thmstablem0}. In particular, \nwhen considering homotopies of space-time or of the Dirac operator,\none must be careful to ensure that the chiral index remains finite along the chosen path.\n\nWe next want to construct examples of homotopies to which the stability result of\nTheorem~\\ref{thmstablem0} applies. To this end, it is convenient to work similarly to~\\eqref{conform}\nwith a conformal transformation.\n\\begin{Example} \\label{exstable}\n{\\em{ As in Example~\\ref{exinstable} we consider the space-time~$(0,T) \\times S^1$,\nbut now with the conformally transformed metric\n\\[ d\\tilde{s}^2 = f(t)^2 \\left( dt^2 - d\\varphi^2 \\right) \\]\nwhere~$f$ is a non-negative $C^2$-function with\n\\[ \\supp f \\subset (-T,T) \\qquad \\text{and} \\qquad f(0) > 0 \\:. 
\\]\nSimilarly to~\\eqref{vertical} and the computation thereafter, \ntransforming the plane-wave solutions~\\eqref{eLR2} conformally\nto~$\\tilde{{\\mathfrak{e}}}_{k, L\\!\/\\!R} = f(t)^{-\\frac{1}{2}}\\: {\\mathfrak{e}}_{k, L\\!\/\\!R}$,\nwe again obtain an orthonormal basis of~$\\H_0$ and\n\\begin{align*}\n\\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,R} | \\tilde{{\\mathfrak{e}}}_{k', L} \\mathclose{>} \n&= \\frac{1}{2 \\pi} \\: \\delta_{k,k'} \\int_0^T f(t)\\: e^{2 i k t} \\,dt \\\\\n\\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,L} | \\tilde{{\\mathfrak{e}}}_{k', L} \\mathclose{>} &= 0 = \\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,R} | \\tilde{{\\mathfrak{e}}}_{k', R} \\mathclose{>}\n\\end{align*}\nfor all~$k, k' \\in \\mathbb{Z}$.\n\nThe integration-by-parts argument\n\\begin{align*}\n\\int_0^T f(t)\\: e^{2 i k t} \\,dt &=\n\\frac{1}{2 i k} \\int_0^T f(t)\\: \\frac{d}{dt} e^{2 i k t} \\,dt = \n-\\frac{f(0)}{2 i k} - \\frac{1}{2 i k} \\int_0^T f'(t)\\: e^{2 i k t} \\,dt \\\\\n&= -\\frac{f(0)}{2 i k} -\\frac{f'(0)}{4 k^2}\n-\\frac{1}{4 k^2} \\int_0^T f''(t)\\: e^{2 i k t} \\,dt \n\\end{align*}\nshows that the space-time inner products have a simple explicit asymptotics for large~$k$ given by\n\\[ \\mathopen{<} \\tilde{{\\mathfrak{e}}}_{k,R} | \\tilde{{\\mathfrak{e}}}_{k', L} \\mathclose{>} = -\\frac{f(0)}{4 \\pi i k}\\; \\delta_{k,k'}\n+ \\O \\Big( \\frac{1}{k^2} \\Big) \\:. \\]\nHence the operator~$\\mathscr{S}_L$ has the form\n\\[ \\mathscr{S}_L \\tilde{{\\mathfrak{e}}}_{k,L} = c_k\\, \\tilde{{\\mathfrak{e}}}_{k,R} \\]\nwith coefficients~$c_k$ having the asymptotics\n\\[ c_k = -\\frac{f(0)}{4 \\pi i k} + \\O \\Big( \\frac{1}{k^2} \\Big) \\:. \\]\nFrom this asymptotics we can read off the following facts. First, it is obvious that~$\\mathscr{S}_L|_{\\H_L}$\nhas a finite-dimensional kernel. 
Exchanging the chirality, the same is true for~$\\mathscr{S}_R|_{\\H_R}$,\nimplying that~$\\mathscr{S}$ has a finite chiral index (according to Definition~\\ref{defind0}).\nNext, the vectors in the image of~$\\mathscr{S}_L$ are in the Sobolev space $W^{1,2}$,\n\\[ \\mathscr{S}_L|_{\\H_L} \\::\\: \\H_L \\rightarrow \\H_R \\cap W^{1,2}(S^1, \\mathbb{C}^2) \\:. \\]\nMoreover, the image of this operator is closed (in the $W^{1,2}$-norm).\nFinally, our partial integration argument also yields that\n\\[ \\|\\mathscr{S}_L \\psi\\|_{W^{1,2}} \\leq |f|_{C^2}\\: \\|\\psi\\|_{\\H_0} \\:, \\]\nshowing that the family of signature operators is norm continuous\nfor a $C^2$-homotopy of functions~$f$.\n\nHaving verified the assumptions of Theorem~\\ref{thmstablem0},\nwe conclude that the chiral index in the massless odd case is invariant\nunder $C^2$-homotopies of the conformal function~$f$, provided that~$f(0)$\nstays away from zero.\n}} \\ \\hfill $\\Diamond$\n\\end{Example}\n\n\\section{Conclusion and Outlook} \\label{secoutlook}\nOur analysis shows that the chiral index of a fermionic signature operator\nis well-defined and in general non-trivial. Moreover, it is a homotopy invariant provided\nthat the additional conditions stated in Theorems~\\ref{thmstable} and~\\ref{thmstablem0}\nare satisfied. As already mentioned at the end of the introduction,\nthe physical and geometric meaning of this index is yet to be explored.\n\nWe now outline how our definition of the chiral index could be generalized or\nextended to other situations. First, our constructions also apply in the {\\em{Riemannian setting}}\nby working, instead of with causal fermion systems, with so-called Riemannian fermion systems\nor general {\\em{topological fermion systems}} as introduced in~\\cite{topology}.\nIn this situation, one again imposes a pseudoscalar operator~$\\Gamma(x) \\in \\text{\\rm{L}}(\\H)$\nwith the properties~\\eqref{pseudodef}. Then all constructions in Section~\\ref{seccfs}\ngo through. 
Starting on an even-dimensional Riemannian spin manifold, one can proceed as\nexplained in~\\cite{topology} and first construct a corresponding topological fermion system.\nFor this construction, one must choose a particle space, typically spanned by eigensolutions of the Dirac equation.\nOnce the topological fermion system is constructed, one can again work with the index of Section~\\ref{seccfs}.\nIf the Dirac operator anti-commutes with the pseudoscalar operator, one can choose the\nparticle space~$\\H$ to be invariant under the action of~$\\Gamma$. This gives a\ndecomposition of the particle space into two chiral subspaces,\n$\\H = \\H_L \\oplus \\H_R$. Just as explained in Section~\\ref{secodd},\nthis makes it possible to introduce other indices by restricting the chiral signature operators\nto~$\\H_L$ or~$\\H_R$. Moreover, one could compose the operators from the left\nwith the projection operators onto the subspaces~$\\H_{L\\!\/\\!R}$ and consider the Noether indices\nof the resulting operators.\n\nAnother generalization concerns space-times of {\\em{infinite lifetime}}.\nUsing the constructions in~\\cite{infinite}, in such space-times one can still introduce\nthe fermionic signature operator~$\\mathscr{S}_m$ provided that the space-time satisfies\nthe so-called mass oscillation property. By inserting chiral projection operators, one\ncan again define chiral signature operators~$\\mathscr{S}^{L\\!\/\\!R}_m$ and define the\nchiral index as their Noether index. The stability results of Theorems~\\ref{thmstable}\nand~\\ref{thmstablem0} also apply in this setting. It is unknown whether the resulting indices have a\ngeometric meaning. 
Since~$\\mathscr{S}_m$ depends essentially on the asymptotic form of\nthe Dirac solutions near infinity, the corresponding chiral indices should encode \ninformation on the metric and the external potential in the asymptotic ends.\n\nWe finally remark that the fermionic signature operator could be {\\em{localized}}\nby restricting the space-time integrals to a measurable subset~$\\Omega \\subset \\mycal M$.\nFor example, one can introduce a chiral signature operator~$\\mathscr{S}_L(\\Omega)$\nsimilar to~\\eqref{stipLR} and~\\eqref{SLRdef2} by\n\\[ ( \\phi \\,|\\, \\mathscr{S}_L(\\Omega)\\, \\psi) = \n\\int_\\Omega \\mathopen{\\prec} \\psi \\,|\\, \\chi_L \\,\\phi \\mathclose{\\succ}_x\\: d\\mu_{\\mycal M}\\:. \\]\nLikewise, in the setting of causal fermion systems, one can modify~\\eqref{sigLint} to\n\\[ \\mathscr{S}_L(\\Omega) = -\\int_\\Omega x \\,\\chi_L\\: d\\rho(x) \\:. \\]\nThe corresponding indices should encode information on the behavior of the Dirac solutions\nin the space-time region~$\\Omega$.\n\n\\vspace*{.5em} \\noindent \\thanks{{\\em{Acknowledgments:}}\nI would like to thank Niky Kamran and Hermann Schulz-Baldes for helpful discussions.}\n\n\n\\providecommand{\\bysame}{\\leavevmode\\hbox to3em{\\hrulefill}\\thinspace}\n\\providecommand{\\MR}{\\relax\\ifhmode\\unskip\\space\\fi MR }\n\\providecommand{\\MRhref}[2]{%\n \\href{http:\/\/www.ams.org\/mathscinet-getitem?mr=#1}{#2}\n}\n\\providecommand{\\href}[2]{#2}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\n\nIn the trace reconstruction problem, the goal is to reconstruct an unknown bit string $x \\in \\{0,1\\}^n$ from multiple independent noisy observations of $x$.\nWe focus on the case where the noise is due to\n $x$ going through the deletion channel, where each bit is deleted independently with probability $q$ (and the remaining bits are concatenated, with no space between them, so the observer is uncertain about the original location of a 
bit in the output).\nThat is, instead of seeing $x$, we see (many independent copies of) $\\wt{X}$, which is obtained as follows:\nstart with an empty string, and for $k = 0, 1, \\dots, n-1$, do the following:\n\\begin{itemize}\n \\item \\textbf{(retention)} with probability $p=1 - q$, copy $x_k$ to the end of $\\wt{X}$ and increase $k$ by $1$;\n \\item \\textbf{(deletion)} with probability $q$, increase $k$ by $1$.\n\\end{itemize}\nVariants of the channel that also allow insertions and substitutions are discussed in Section 5.\n\nGiven $T$ i.i.d.\\ samples (known as {\\em traces\\\/}) $\\wt{X}^{1}, \\dots, \\wt{X}^{T}$, all obtained from passing the same unknown string $x$ through the deletion channel, a trace reconstruction algorithm outputs an estimate\n$\\wh{X}$ which is a function of $ \\wt{X}^{1}, \\dots, \\wt{X}^{T}$. The main question is: Given $\\delta>0$, how many samples are needed so that there is a choice of $\\wh{X}$ that satisfies $\\P_x[\\wh{X} =x] \\ge 1- \\delta$ for every $x \\in \\{0,1\\}^n$? \\newline\n(Here $\\P_x$ is the law of $\\wt{X}^{1}, \\dots, \\wt{X}^{T}$ when the original string was $x$.)\nPrior to this work, the best available upper bound, due to \\cite{HMPW08}, was $T=\\exp(\\wt{O}(n^{1\/2}))$. 
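The retention\/deletion loop above is straightforward to simulate. The following Python sketch is an editorial illustration (not part of the paper; the function names are ours) that samples traces from the deletion channel:

```python
import random

def deletion_channel(x, q, rng):
    """Pass the bit string x through the deletion channel:
    each bit is deleted independently with probability q and
    the retained bits are concatenated (no gaps left behind)."""
    trace = []
    for bit in x:                # k = 0, 1, ..., n-1
        if rng.random() >= q:    # retention, with probability p = 1 - q
            trace.append(bit)
        # deletion (probability q): the bit is simply skipped
    return trace

rng = random.Random(1)
x = [1, 0, 1, 1, 0, 0, 1, 0]
for _ in range(3):
    print(deletion_channel(x, 0.4, rng))  # each trace is a subsequence of x
```

Each trace retains $pn$ bits on average; since only the concatenation is observed, the original positions of the surviving bits are lost.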
Our main result, proved in Section 2, yields the following improvement.\n\n\\begin{theorem}\\label{thm:main}\nFor any deletion probability $q < 1$ and any $\\delta>0$, there exists a finite constant $C$ such that, for any original string $x \\in \\{0,1\\}^n$,\nit can be reconstructed with probability at least $1-\\delta$ from\n $T = \\exp \\left( C n^{1\/3} \\right)$ i.i.d.\\ samples of the deletion channel applied to $x$.\n\\end{theorem}\n\nOur estimator will only use individual bit statistics from the outputs of the deletion channel.\nThe following Theorem, proved in Section 4, shows that\n$T = \\exp \\left( \\Omega(n^{1\/3}) \\right)$ traces are needed for reconstruction if these are the only data used.\n\n\\begin{theorem}\\label{thm:opt}\nFix a deletion probability $q < 1$. For each $n$ there exist two distinct strings $x,y \\in \\{0,1\\}^{n}$,\nwith the following property: For all $j$, the total variation distance between the laws of $\\Bigl(\\wt{X}_{j}^{t}\\Bigr)_{t=1}^T$\n and $\\Bigl(\\wt{Y}_{j}^{t}\\Bigr)_{t=1}^T$ is at most $T\\exp \\left( -c n^{1\/3} \\right)$, for some $c=c(q)>0$.\n \\end{theorem}\n (Thus, for $c_10$, trace reconstruction algorithms relying on single bit statistics require $\\exp(n^c)$ traces. 
That proof was not published.\n\n\n\\item\nAfter the results of this paper were obtained, we learned that similar results were obtained independently and simultaneously by Anindya De, Ryan O'Donnell and Rocco Servedio.\n\n\\end{itemize}\n\n\n\n\n\n\\section{Proof of Theorem~\\ref{thm:main}}\n\n\nIn the proof we consider the random power series\n\\begin{equation}\\label{eq:stat}\n\\sum_{j \\geq 0} \\wt{a}_{j} w^{j},\n\\end{equation}\nwhere $ \\wt{{\\mathbf a}}$ is a sample output of the deletion channel and $w \\in \\mathbb{C}$ is chosen appropriately.\nThe first lemma expresses the expectation of such a random series using the original sequence of interest.\n\n\\begin{lemma}\\label{lem:key1}\nLet $w \\in \\mathbb{C}$, let\\\/ ${\\mathbf a} := \\left( a_{0}, a_{1}, \\dots, a_{n-1} \\right) \\in \\mathbb{R}^{n}$, let\\\/ $ \\wt{{\\mathbf a}}$ be the output of the deletion channel with input ${\\mathbf a}$, and pad\\\/ $ \\wt{{\\mathbf a}}$ with zeroes to the right. Write $p=1-q$. Then\n\\begin{equation}\\label{eq:expectation}\n\\E \\left[ \\sum_{j \\geq 0} \\wt{a}_{j} w^{j} \\right] = p \\sum_{k=0}^{n-1} a_{k} \\left( pw+q \\right)^{k}.\n\\end{equation}\n\\end{lemma}\nThe proof is given in the next section. 
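The identity~\eqref{eq:expectation} is also easy to verify numerically. The following Python sketch is an editorial sanity check (not part of the paper's argument; all names are ours) comparing a Monte Carlo estimate of the left-hand side with the exact right-hand side:

```python
import random, cmath

def deletion_channel(a, q, rng):
    # delete each entry independently with probability q
    return [v for v in a if rng.random() >= q]

def lhs_monte_carlo(a, q, w, trials, rng):
    # Estimate E[sum_j a~_j w^j].  Padding the output with zeroes
    # is implicit: missing positions contribute nothing to the sum.
    total = 0j
    for _ in range(trials):
        out = deletion_channel(a, q, rng)
        total += sum(v * w**j for j, v in enumerate(out))
    return total / trials

def rhs_exact(a, q, w):
    p = 1 - q
    return p * sum(a_k * (p * w + q)**k for k, a_k in enumerate(a))

rng = random.Random(0)
a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
q = 0.3
w = cmath.exp(0.3j)   # a point on the unit circle
est = lhs_monte_carlo(a, q, w, 100_000, rng)
print(abs(est - rhs_exact(a, q, w)))  # Monte Carlo error, close to 0
```

For $q=0$ the channel is the identity and both sides reduce to $\sum_k a_k w^k$; for $q=1$ both sides vanish.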
Intuitively speaking, this identity is useful because by averaging samples we can approximate the expectation on the left-hand side of~\\eqref{eq:expectation}, while from the right-hand side of~\\eqref{eq:expectation} we can extract the original sequence ${\\mathbf a} = \\left( a_{0}, a_{1}, \\dots, a_{n-1} \\right)$.\n\nNote that unless $\\left| w \\right| = 1$, either the first or last terms of $ \\wt{{\\mathbf a}}$ will dominate in the left-hand side of~\\eqref{eq:expectation}.\nSimilarly, if we let $z := pw+q$, then unless $\\left| z \\right| = 1$, either the first or the last terms of ${\\mathbf a}$ will dominate in the right-hand side of~\\eqref{eq:expectation}.\nWe wish to give approximately equal weight to all terms, hence we would like $\\left| w \\right|$ and $\\left| z \\right|$ to both be close to 1.\nThis only happens if both $w$ and $z$ are close to $1$; thus we will let $z$ vary along a small arc on the unit circle near $1$.\nThis explains our interest in the following lemma, which is a special case of Theorem 3.2 in~\\cite{BE97}.\n\n\\begin{lemma}[Borwein and Erd{\\'e}lyi~\\cite{BE97}] \\label{lem:key2}\nThere exists a finite constant $c$ such that the following holds.\nLet\n$${\\mathbf a} = \\left( a_{0}, a_{1}, \\dots, a_{n-1} \\right) \\in \\left\\{ -1, 0, 1 \\right\\}^{n}$$\nbe such that ${\\mathbf a} \\neq 0$.\nLet $A \\left( z \\right) := \\sum_{k = 0}^{n-1} a_{k} z^{k}$ and denote by $\\gamma_L$ the arc $\\left\\{ e^{i \\theta} : -\\pi\/L \\leq \\theta \\leq \\pi \/ L \\right\\}$. 
Then $\\max_{z \\in \\gamma_L} |A(z)| \\geq e^{-cL}$.\n\\end{lemma}\n\nWe will optimize over the length of the arc $\\gamma_L$, and in the end we shall choose $L$ of order $ n^{1\/3}$.\n\nNote that if $z$ is in the arc $\\gamma_L=\\left\\{ e^{i \\theta} : -\\pi \/ L \\leq \\theta \\leq \\pi \/ L \\right\\}$, then\n\\begin{equation} \\label{wbound}\n w = (z-q)\/p \\quad \\mbox{\\rm satisfies} \\; | w | \\le \\exp \\left( C_1 \/ L^{2} \\right) \\,\n\\end{equation}\nfor some constant $C_1=C_1(q)$. This is because writing\n$z = \\cos \\theta + i \\sin \\theta$,\nand using the Taylor expansion of cosine, we get\n\\begin{eqnarray*}\n\\left| w \\right|^{2} &=& \\frac{1+q^2-2q\\cos(\\theta)}{p^2} = \\frac{1+q^2-2q+2q(1-\\cos \\theta)}{p^2} \\\\ &\\le& 1 + \\frac{q}{p^2} \\theta^{2} + O \\left( \\theta^{4} \\right)\n = \\exp \\left( q \\theta^{2}\/p^2 + O \\left( \\theta^{4} \\right) \\right)\\, .\n\\end{eqnarray*}\nThe quadratic term $\\theta^{2}$ is to be expected: when $z$ is on the unit circle, $w=(z-q)\/p$ is on a circle of radius $1\/p$ centered at $-q\/p$; these circles are tangent at 1.\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm:main} using the lemmas}\n\nLet $x,y \\in \\left\\{ 0, 1 \\right\\}^{n}$ be two different bit sequences. Our first goal is to distinguish between $x$ and $y$.\nLet ${\\mathbf a} := x-y$ and let $A(z) := \\sum_{k = 0}^{n-1} a_{k} z^{k}$.\nGiven a large integer $L$ (which we shall choose later), fix $z$ in the arc \n$\\gamma_L=\\left\\{e^{i \\theta} : -\\pi\/L \\leq \\theta \\leq \\pi \/ L \\right\\}$ such that\n$\\left| A (z) \\right| \\geq e^{-cL} $; \nsuch a $z$ exists by Lemma~\\ref{lem:key2}.\nLet $w = (z-q)\/p$. 
Recall from the previous subsection that $\\left| w \\right| < \\exp \\left( C_1 \/ L^{2} \\right)$ for some $C_1 < \\infty$.\n\nConsidering the random series defined in~\\eqref{eq:stat}, we see via Lemma~\\ref{lem:key1} that\n\\[\n\\E \\Bigl[ \\sum_{j \\geq 0} \\wt{X}_{j} w^{j} \\Bigr] - \\E \\Bigl[ \\sum_{j \\geq 0} \\wt{Y}_{j} w^{j} \\Bigr] = A (z)\\, .\n\\]\nTaking absolute values,\n\\[\n \\sum_{j \\geq 0} \\Bigl|\\E \\Bigl[ \\wt{X}_{j}-\\wt{Y}_{j} \\Bigr] \\Bigr| \\cdot |w|^{j} \\ge |A (z)| \\ge e^{-cL}, \n\\]\nwhence by (\\ref{wbound}),\n\\[\n \\sum_{j \\geq 0} \\Bigl|\\E \\Bigl[ \\wt{X}_{j}-\\wt{Y}_{j} \\Bigr] \\Bigr| \\ge \\exp \\Bigl( -C_1 n \/ L^{2} \\Bigr) \\cdot e^{-cL} \\,\n\\]\nTo approximately maximize the right-hand side, we choose $L$ to be the integer part of $n^{1\/3} $ and obtain that for some constant $C_2$,\n\\[\n \\sum_{j \\geq 0} \\Bigl|\\E \\Bigl[ \\wt{X}_{j}-\\wt{Y}_{j} \\Bigr] \\Bigr| \\ge \\exp \\Bigl(- C_2 n^{1\/3} \\ \\Bigr) \\,.\n\\]\nWe infer that there must exist some smallest $j2C_2$ makes the right-hand side of (\\ref{union}) tend to 0.\n\\hfill $\\Box$\n\n\\section{Proof of the polynomial identity and a simplified inequality}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:key1}]\nFor $j \\leq n-1$, the output bit $\\wt{a}_{j}$ must come from an input bit $a_{k}$ for some $k \\ge j$.\nNow $\\wt{a}_{j}$ comes from $a_{k}$ if and only if exactly $j$ among $a_{0}, a_{1}, \\dots a_{k-1}$ are retained and $a_{k}$ is also retained.\nThere are $\\binom{k}{j}$ ways of choosing which $j$ bits among $a_{0}, a_{1}, \\dots a_{k-1}$ to retain, and the probability of each such choice is $p^j q^{k-j}$.\nThe probability of retaining $a_{k}$ is $p$. 
Putting everything together, we obtain that\n\\[\n\\E \\Bigl[ \\sum_{j \\geq 0} \\wt{a}_{j} w^{j} \\Bigr] = p \\sum_{j \\geq 0} w^{j} \\sum_{k = j}^{n-1} a_{k} \\binom{k}{j} p^j q^{k-j} \\,.\n\\]\nChanging the order of summation, we infer that\n\\[\n\\E \\Bigl[ \\sum_{j \\geq 0} \\wt{a}_{j} w^{j} \\Bigr] = p \\sum_{k = 0}^{n-1} a_{k} \\sum_{j=0}^{k} \\binom{k}{j} p^j q^{k-j} w^{j}.\n\\]\nFinally, observe that the sum over $j$ on the right-hand side is exactly the binomial expansion of $(pw+q)^{k}$.\n\\end{proof}\n\n\n\nSince the proof of Lemma~\\ref{lem:key2} in \\cite{BE97} is somewhat involved, for expository purposes, we prove here a weaker estimate. This is simpler to prove and it does not result in a much weaker conclusion. Specifically, if we use Lemma~\\ref{lem:key2_weak} below as a black box instead of Lemma~\\ref{lem:key2}, then we obtain that $T = \\exp \\Bigl( c n^{1\/3} \\log n \\Bigr)$ samples suffice for trace reconstruction; comparing this with Theorem~\\ref{thm:main}, we only lose a log factor in the exponent.\n\n\\begin{lemma}\\label{lem:key2_weak}\nLet\n${\\mathbf a} = \\Bigl( a_{0}, a_{1}, \\dots, a_{n-1} \\Bigr) \\in \\Bigl\\{ -1, 0, 1 \\Bigr\\}^{n}$\nbe such that ${\\mathbf a} \\neq 0$.\nLet $A (z) := \\sum_{k = 0}^{n-1} a_{k} z^{k}$.\nIf $ | A (z)| \\leq \\lambda$ on the arc $\\gamma_L:=\\Bigl\\{ z = e^{i \\theta} : -\\pi \/ L \\leq \\theta \\leq \\pi \/ L \\Bigr\\}$,\nthen $\\lambda \\geq n^{- L}$.\n\\end{lemma}\n\n\n\\begin{proof}\nWe may assume w.l.o.g.\\ that $a_{0} = 1$. (Indeed, if $a_{m}$ is the first nonzero entry and $m \\geq 1$, then replace $A(z)$ by $A(z)\/z^{m}$; this does not change the magnitude of the function on the unit circle, and yields a polynomial with $a_{0} \\neq 0$. Multiplying $A(z)$ by an appropriate sign, we can guarantee that $a_{0} = 1$.) 
In other words, $A(0) = 1$.\n \nConsider the product\n\\begin{equation}\\label{eq:A_rotations}\n{F} (z) := \\prod_{j=0}^{L-1} A \\Bigl( z \\cdot e^{2 \\pi i j \/ L} \\Bigr).\n\\end{equation}\nWe again have that $F(0) = 1$. By the maximum principle, the maximum absolute value of the polynomial $F(z)$ on the unit disc is attained on the boundary, i.e., on the unit circle.\nThus there exists $z$ such that $ | z | = 1$ and $\\Bigl| F (z) \\Bigr| \\geq 1$.\nOn the other hand, for every $z$ such that $ | z | = 1$, the assumption of the lemma guarantees that there is at least one factor in~\\eqref{eq:A_rotations} whose absolute value is at most $\\lambda$.\nUsing the trivial bound $ |A (z)| \\leq n$ for every other factor,\nwe obtain that $ | F (z) | \\leq \\lambda n^{L-1}$ for every $z$ such that $ | z | = 1$.\nPutting the two inequalities together we obtain that\n$\\lambda \\geq n^{-(L-1)}$.\n\\end{proof}\n\n\\section{Optimality for single bit tests} \n\\begin{proof}[Proof of Theorem~\\ref{thm:opt}] \nLet $L:=n^{1\/3}$. (To keep the notation light, we omit integer parts and use $c_j$ to denote absolute constants and constants that depend only on $q$). By Theorem 3.3 in~\\cite{BEK99}, there exists a polynomial $Q$ of degree $c_2 L^2$, with coefficients in $\\{-1,0,1\\}$,\nsuch that \n$$\\max_{z \\in [0,1]} |Q(z)| \\le \\exp(-c_3 L) \\,.\n$$ \n Write $Q$ in the form $Q=\\varphi -\\psi $\nwhere $\\varphi$ and $\\psi$ are polynomials of degree $c_2 L^2$ with coefficients in $\\{0,1\\}$.\n\n\nLet $\\widehat{E}_L$ denote the ellipse with foci at $1-8\/L$ and $1$ and with major axis $[1-14\/L,1+6\/L]$, i.e.,\n$$\n\\widehat{E}_L=\\{z: |z-(1-8\/L)|+|z-1| \\le 20\/L \\} \\,.\n$$\n As explained on page 11 of~\\cite{BE97},\nCorollary 4.5 of that paper implies that\n$$\n\\max_{z \\in \\widehat{E}_L} |Q(z)| \\le e^{-c_4 L} \\, .\n$$\nRecall that $p=1-q$ and let $\\Gamma$ denote the circle $\\{z: |z-q|=p\\}$. 
Then $\\Gamma$ intersects the ellipse $\\widehat{E}_L$ in an arc $\\Gamma_L$ of length $c_5\/L$, since $\\widehat{E}_L$ contains the disk of radius $6\/L$ centered at 1.\nThus we may write\n$$\\Gamma_L=\\{pe^{i\\theta}+q : -c_6\/L \\le \\theta \\le c_6\/L \\} \\,.\n$$\n\nLet $m:=(n-c_2L^2)\/2$. Define the string $x \\in \\{0,1\\}^{n}$ where the first $m$ bits are zeros, the next $c_2L^2$ bits are the coefficients of $\\varphi$, and the final $m$ bits are zeros. The string $y \\in \\{0,1\\}^{n}$ is constructed from $\\psi$ in the same way.\n Then \n$$\nA(z):=\\sum_{j=0}^{n-1} (x_j-y_j)z^j=z^mQ(z) \n$$\nsatisfies \n\\begin{equation} \\label{maxA}\n\\max_{z \\in \\Gamma_L} |A(z)| \\le e^{-c_4 L} \\,.\n\\end{equation}\nDefine $b_j:=\\E \\Bigl[ \\wt{X}_{j}-\\wt{Y}_{j} \\Bigr]$ and $B(w):=\\sum_{j=0}^{n-1} b_j w^j$. By Lemma~\\ref{lem:key1}, we have\n$B(w)=pA(pw+q)$. We can extract $b_j$ from $B(\\cdot)$ by integration:\n$$\nb_j=\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi} e^{-ij\\theta} B(e^{i\\theta})\\,d\\theta \\,.\n$$\nTherefore\n\\begin{equation} \\label{eq:b_j}\n|b_j| \\le \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi} |B(e^{i\\theta})|\\, d\\theta \\le \\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi} |A(pe^{i\\theta}+q)|\\, d\\theta \\,.\n\\end{equation}\nFor $\\theta \\in [-c_6\/L,c_6\/L]$, the integrand on the right-hand side is at most $e^{-c_4 L}$ by (\\ref{maxA}).\nTo bound that integrand for larger $\\theta$, observe that \n\\begin{eqnarray} \\label{inside}\n|pe^{i\\theta}+q|^2 &=& p^2\\cos^2\\theta+2pq\\cos\\theta+q^2+p^2\\sin^2\\theta=(p+q)^2+2pq(\\cos\\theta-1) \\\\\n&=& 1-pq\\theta^2+O(\\theta^4) \\le 1-c_7 \\theta^2 \\,.\n\\end{eqnarray}\nSince $|A(z)| \\le |z|^m (1-|z|)^{-1}$ in the unit disk and $m>n\/3$, we infer that if $|\\theta|>c_6\/L$, then\n$$\n|A(pe^{i\\theta}+q)| \\le c_8 L^2 (1-c_9 L^{-2})^{n\/3} \\le \\exp(-c_{10} nL^{-2}) =e^{-c_{10} L} \\,.\n$$\nIn conjunction with (\\ref{maxA}), we conclude that the integrand on the right-hand side of (\\ref{eq:b_j}) is uniformly 
bounded\nby $e^{-c_{11} L}$, whence\n\\begin{equation} \\label{eq:b_j2}\n|b_j| \\le e^{-c_{11} L} \\quad \\mbox{\\rm for all } \\, j \\,.\n\\end{equation}\n\n\n\nNext, fix $j$. To bound the total variation distance between the laws of $\\bigl(\\wt{X}_{j}^{t}\\bigr)_{t=1}^T$\n and $\\bigl(\\wt{Y}_{j}^{t}\\bigr)_{t=1}^T$, we will use a greedy coupling. More precise estimates can be obtained, e.g., using Hellinger distance, but the improvement will not affect the final result. Let $\\bigl(\\xi_t \\bigr)_{t=1}^T$ be i.i.d.\\ random variables, uniform in $[0,1]$.\n Then ${\\bf 1}_{\\xi_t \\le \\E(\\wt{X}_{j})}$ has the law of $\\wt{X}_{j}^t$ and ${\\bf 1}_{\\xi_t \\le \\E(\\wt{Y}_{j})}$ has the law of $\\wt{Y}_{j}^t$.\n These indicators differ with probability $|b_j|$. Altogether, this coupling implies that the total variation distance between the laws of $\\bigl(\\wt{X}_{j}^{t}\\bigr)_{t=1}^T$ and $\\bigl(\\wt{Y}_{j}^{t}\\bigr)_{t=1}^T$ is at most $T |b_j|$.\n Referring to (\\ref{eq:b_j2}) concludes the proof.\n\\end{proof}\n\n\\noindent{\\bf Remark.} Strictly speaking, padding $x$ and $y$ with zeros on the right was not really needed in the above proof.\nThe reason for it is that one can also consider single bit tests on the traces using the $j$th bit from the right in each output;\nthe additional padding and a symmetry argument ensure that these tests will also require $\\exp(\\Omega(n^{1\/3}))$ traces for reconstruction.\n\n\n\\section{Substitutions and insertions}\nIf, after $x$ goes through the deletion channel with deletion probability $q$, every bit is flipped with probability $\\lambda<1\/2$,\nthen $\\exp(O(n^{1\/3}))$ samples still suffice for reconstruction. Indeed, let $X^\\# $ be the output of this deletion-substitution channel with input $ x$, padded with zeroes to the right. Define $Y^\\#$ from $y$ similarly. Recall that $p=1-q$. 
Then $\\E(X^\\#_{j}-Y^\\#_j)=(1-2\\lambda)\\E(\\wt{X}_{j}-\\wt{Y}_j)$, so\n(\\ref{eq:expectation}) is replaced by\n\\begin{equation}\\label{eq:expectation2}\n\\E \\left[ \\sum_{j \\geq 0} (X^\\#_{j}-Y^\\#_j) w^{j} \\right] = (1-2\\lambda) p \\sum_{k=0}^{n-1} (x_k-y_k) \\left( pw+q \\right)^{k}.\n\\end{equation}\nThe analysis in Section 2 then proceeds without change, since the pre-factor $1-2\\lambda$ is immaterial.\n\n\\smallskip\n\n{\\bf Insertions} are more interesting. Suppose that before each bit $x_k$ in the input, $G_k-1$ i.i.d.\\ fair bits are inserted, where the variables\n$G_k$ are i.i.d.\\ with a Geometric$(\\alpha)$ distribution, i.e., denoting $\\beta=1-\\alpha$, for all $\\ell \\ge 1$,\n$$\n\\P(G_k=\\ell)=\\alpha\\beta^{\\ell-1} \\,.\n$$\nAfter $x_{n-1}$, at the end of the sequence, $G_{n}-1$ additional fair bits are appended. We call $\\beta=1-\\alpha$ the insertion parameter.\nThus, such an insertion channel will yield an output $X^*$ consisting of $G_0-1$ i.i.d.\\ fair bits, followed by $x_0$, followed by $G_1-1$ i.i.d.\\ fair bits, followed by $x_1$, etc., ending with $x_{n-1}$ and the $G_{n}-1$ bits after it. The next theorem is analogous to Theorem~\\ref{thm:main}.\n\n\\begin{theorem}\\label{thm:insert}\nFor any insertion parameter $\\beta < 1$ and any $\\delta>0$, there exists a finite constant $C$ such that, for any original string $x \\in \\{0,1\\}^n$,\nit can be reconstructed with probability at least $1-\\delta$ from\n $T = \\exp \\left( C n^{1\/3} \\right)$ i.i.d.\\ samples of the insertion channel applied to $x$.\n\\end{theorem}\n\\begin{proof}\nTo prove this theorem, an analog of Lemma~\\ref{lem:key1} is needed:\n\n\\begin{lemma}\\label{lem:keyins}\nGiven strings $x$ and $y$ in $\\{0,1\\}^n$, let $X^*$ and $Y^*$ denote the corresponding outputs of the insertion channel with parameter $\\beta$, where $\\alpha+\\beta=1$. 
Then for $w \\in \\mathbb{C}$, we have\n\\begin{equation}\\label{eq:expectation3}\n\\E \\left[ \\sum_{j \\geq 0} (X^*_j-Y^*_j) w^{j+1} \\right] = \\sum_{k=0}^{n-1} (x_k-y_k) \\Bigl(\\frac{\\alpha w}{1-\\beta w} \\Bigr)^{k+1} \\,.\n\\end{equation}\n\\end{lemma}\nWe also need an analog of (\\ref{wbound}): if \n$$ \n\\zeta=e^{i\\theta}=\\frac{\\alpha w}{1-\\beta w}\n$$ \nis on the unit circle, then \n\\begin{equation}\\label{analog}\nw=\\zeta\/(\\alpha+\\beta \\zeta) \\quad \\mbox{\\rm satisfies} \\; |w|^2 \\le 1+C\\theta^2 \\,,\n\\end{equation}\n for some constant $C=C(\\alpha)$. This is immediate from (\\ref{inside}), with $\\alpha,\\beta$ replacing $q,p$ there.\n\nWith Lemma~\\ref{lem:keyins} and the inequality (\\ref{analog}) in hand, the rest of the proof of Theorem~\\ref{thm:insert} is identical to the proof of Theorem~\\ref{thm:main}. \n\\end{proof}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:keyins}]\nIt is convenient to couple $X^*$ and $Y^*$ to use the same geometric variables $G_0,G_1,\\ldots$ and the same inserted bits. Note that the choice of coupling does not affect $\\E(X^*_{j}-Y^*_j)$. Write $a_k=x_k-y_k$ and $D_j:=X^*_{j}-Y^*_j$. 
Then\n$$\n\\E(D_j)=\\sum_{k=0}^j \\P(G_0+\\dots + G_k=j+1)a_k=\\sum_{k=0}^j {j \\choose k} \\alpha^{k+1} \\beta^{j-k} a_k \\,.\n$$\nTherefore, using the classical expansion\n $$\n\\sum_{j=k}^\\infty {j \\choose k} s^{j-k} =(1-s)^{-k-1}\n$$\nwe obtain that \n\\begin{eqnarray*}\n\\E\\left[ \\sum_{j \\geq 0} D_j w^{j+1} \\right] &=& \\sum_{j \\geq 0} \\sum_{k=0}^j w^{j+1} {j \\choose k} \\alpha^{k+1} \\beta^{j-k} a_k \\\\\n&=& \\sum_{k \\ge 0} a_k (\\alpha w)^{k+1} \\sum_{j \\geq k} {j \\choose k} (\\beta w)^{j-k} \\\\\n&=& \\sum_{k \\ge 0} a_k (\\alpha w)^{k+1} (1-\\beta w)^{-k-1} \\,,\n\\end{eqnarray*}\nwhich is the same as (\\ref{eq:expectation3}).\n\\end{proof} \n\n\\noindent{\\bf Remark.} To combine deletions, insertions and substitutions, simply compose the linear transformation $w \\mapsto pw+q$\nthat appears in (\\ref{eq:expectation2}) with the M\\\"obius transformation $w \\mapsto \\alpha w\/(1-\\beta w)$. Each of these transformations maps the unit circle\nto a smaller circle that is tangent to it at 1, and this also holds for their composition, in any order.\n\n\n \n\n\\section*{Acknowledgements}\nWe first learned of the trace reconstruction problem from Elchanan Mossel and Ben Morris. The second author is grateful to them, as well as to Ronen Eldan, Robin Pemantle and Perla Sousi for many discussions of the problem. The insightful suggestion by Elchanan that one should focus on deciding between\ntwo specific candidates for the original bit string was particularly influential. We are also indebted to Miki Racz and Gireeja Ranade for their help\nwith the exposition.\n\n\n\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\nNoise reduction in experiments facilitates reliable extraction of useful information from a smaller amount of data. 
This allows for more efficient use of experimental and analytical resources and enables the study of systems with intrinsically limited measurement time, e.g. cases with sample damage or out-of-equilibrium dynamics. While instrumentation development and optimization of experimental protocols are crucial in noise reduction, there are situations where computational methods can advance the improvements even further.\n\n\\noindent X-ray Photon Correlation Spectroscopy (XPCS) \\cite{Madsen_Fluerasu_Ruta, Shpyrko_2014, Sinha_2014} is a statistics-based technique that extracts information about a sample's dynamics through spatial and temporal analysis of intensity correlations between sequential images (frames) of a speckled pattern collected from a coherent X-ray beam scattered off the sample. The two-time intensity-intensity correlation function \\cite{Brown_1997, Madsen_2010} (2TCF) is a matrix calculated as: \n\\begin{equation}\\label{eq:(1)}\nC2(\\pmb{q},t_{1}, t_{2}) = \\frac{\\langle I(\\pmb{q},t_{1})I(\\pmb{q},t_{2})\\rangle}{\\langle I(\\pmb{q},t_{1})\\rangle \\langle I(\\pmb{q},t_{2})\\rangle}\n\\end{equation} \nwhere \\(I(\\pmb{q},t)\\) is the intensity of a detector pixel corresponding to the wave vector \\(\\pmb{q}\\) at time \\(t\\). The average is taken over pixels with equivalent \\(\\pmb{q}\\) values.\nAn example of a 2TCF is shown in Fig.~\\ref{fig:Figure1}. The dimensions of the matrix are \\emph{N}$\\times$\\emph{N}, where \\emph{N} is the number of frames in the experimental series. The dynamics can be traced along the lag times \\(\\delta t=|t_{1}-t_{2}|\\). 
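Eq.~(1) can be evaluated directly from a stack of detector frames for a single $q$-ring. The following is a minimal NumPy sketch; the array layout and names are our own illustration, not the authors' code:

```python
import numpy as np

def two_time_cf(frames):
    """Two-time correlation function, Eq. (1), for one q-ring.

    frames: array of shape (N, P) -- intensities of P equivalent-q pixels
    in each of N frames.  Returns the N x N matrix
    C2(t1, t2) = <I(t1) I(t2)>_px / (<I(t1)>_px <I(t2)>_px).
    """
    num = frames @ frames.T / frames.shape[1]   # pixel-averaged products <I(t1) I(t2)>
    mean = frames.mean(axis=1)                  # <I(t)> for each frame
    return num / np.outer(mean, mean)

# a constant intensity series gives C2 = 1 everywhere, as expected
c2 = two_time_cf(np.ones((5, 100)))
```

By construction the result is symmetric in $(t_1, t_2)$, which is the symmetry about the \emph{(1,1)} diagonal visible in Fig.~\ref{fig:Figure1}(A).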
In the case of equilibrium dynamics, information from a 2TCF can be 'condensed' to a single dimension by integrating along the \\emph{(1,1)} diagonal, producing a time-averaged one-time photon correlation function (1TCF) \\cite{Luxi_Li_2014}: \n\\begin{equation}\\label{eq:(2)}\nC1(\\pmb{q},\\delta t) = C_{\\infty} + \\beta|f(\\pmb{q},\\delta t)|^2\n\\end{equation}\nwhere \\(f(\\pmb{q},\\delta t)\\) is the intermediate scattering function at lag time \\(\\delta t\\), \\(\\beta\\) is the optical contrast and \\(C_{\\infty}\\) is the baseline, which equals 1 for ergodic samples. While the 1TCF can be directly obtained from raw data \\cite{Lumma_2000}, calculating the 2TCF as an intermediate step is beneficial even for presumably equilibrium cases. The 2TCF contains time-resolved information about both the sample's intrinsic dynamics and fluctuations of the experimental conditions, which enables one to distinguish between stationary and non-stationary dynamics and to judge whether or not the time-averaged 1TCF is a valid representation of the scattering series. Investigation of the 2TCF helps to identify single-frame events, such as cosmic-ray detection, and beam-induced dynamics, where timescales might vary with the accumulation of the X-ray dose absorbed by the sample during the acquisition of the dataset. \n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-1-Konstantinova.jpg}\n\\caption{Data for the model. (A) 2TCF for an experimental series consisting of 400 frames. Red squares show examples of regions selected for the model training. Yellow arrow shows the temporal direction \\emph{t} of the system's dynamics. Yellow solid line shows the 1TCF along \\emph{t}, calculated from the 2TCF. (B) Example of a 50$\\times$50 2TCF, passed as an input to the model. (C) Example of the target data for the model, obtained by averaging multiple 50$\\times$50 diagonal sections of the 2TCF. 
All images have the same intensity scale.}\n\\label{fig:Figure1}\n\\end{figure}\n\n\\noindent XPCS experiments can suffer from various sources of noise and artifacts: the probabilistic nature of photon scattering, detector shot noise, and instrumental instabilities. Significant progress in reducing the noise involved in photon detection and counting has been made by developing single-photon counting devices \\cite{Grybos_2016, Llopart_2002} and by employing the 'droplet' algorithm \\cite{Livet_2000} or pixel binning \\cite{Falus_2006}. Efforts have been dedicated to integrating feedback loops \\cite{Kongtawong_2020, Strocov_2010} into instrumental controls to reduce the impact of instabilities. Despite current advances in experimental setups and data-analysis methods for reducing noise and instability effects, achieving a high signal-to-noise ratio is still a practical challenge in many XPCS experiments. The need to suppress high-frequency fluctuations leads to extended data collection times -- an approach that can itself introduce additional errors, for instance due to slow changes in experimental conditions. Limited experimental resources may not allow for multiple repeated measurements for systems with very slow dynamics. Besides, a sample's intrinsic properties can limit the time range within which the dynamics can be considered \\cite{Madsen_2010} as equilibrium and thus quantitatively evaluated with Eq.~\\ref{eq:(2)}. A tool that helps to accurately extract parameters of a system's equilibrium dynamics from a limited amount of noisy data would be useful, but no generally applicable, out-of-the-box tool exists for XPCS results.\n\n\\noindent Solutions based on artificial neural networks are attractive candidates as they are broadly used for application-specific noise removal. 
Among such solutions are extensions of autoencoder models \\cite{ Kramer_1991}, which are unsupervised algorithms for learning a condensed representation of an input signal. The principle behind an autoencoder is based on the common fact that the information about significant non-random variations in data is contained in a much smaller number of variables than the dimensionality of the data. An autoencoder model consists of two modules: an encoder and a decoder. The encoder transforms the input signal into a set of unique variables called the latent space. The decoder part then attempts to transform the encoded variables back to the original input. As the number of components in the latent space is generally much smaller than the number of components in the original input, the nonessential information, i.e. random noise, is lost during such transformations. Thus, an autoencoder model on its own can be used as an effective noise reduction tool. However, in the scope of this work we employ a broader notion of noise. We treat all dynamic heterogeneities due to changes in a sample configuration caused by stress or diffusive processes, as well as correlated noise in the 2TCF, as an unwanted signal. Such a point of view can be preferred when one wants to quantify the average dynamics parameters with Eq.~\\ref{eq:(3)} or to separate the underlying (envelope) dynamics from stochastic heterogeneities. An autoencoder model can be modified to address the removal of deterministic, application-specific noise by replacing its targets with 'noise-free' versions of the input signals. In the case of an image-like input, such as an XPCS 2TCF, convolutional neural networks (CNNs) are the obvious choice for the encoder and decoder modules. 
CNN-based encoder-decoder (CNN-ED) models have been successfully implemented for noise removal and restoration of impaired signals in audio applications\\cite{Grais_2017, Se_Rim_Park_2017} and images \\cite{Pathak_2016, Xioa-Jiao_Mao_2016}.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-2-Konstantinova.jpg}\n\\caption{ Architecture of the CNN-ED model. The input and the output images have the same intensity scale.}\n\\label{fig:Figure2}\n\\end{figure}\n\n\\noindent Here, we demonstrate an approach for noise reduction in 2TCFs by means of CNN-ED models. An ensemble of such models, trained on real experimental data, shows noticeable suppression of noise while preserving the functional form of system's equilibrium dynamics and the temporal resolution of the signal. Addressing noise removal from 2TCF instead of the scattering signal at the detector makes the approach agnostic to the type of the registering device, the size of the selected area, the shape of the speckles, the intensity of the scattering signal and the exposure time, enabling the models' application to a wide range of XPCS experiments.\n\n\n\\section*{Results}\n\n\\textbf{Data Processing.}\nThe models are trained using data from the measurements of equilibrium dynamics of nanoparticle filled polymer systems conducted at the Coherent Hard X-ray Scattering (CHX) beamline at NSLS-II. For the nanoparticles' dynamics Eq.~\\ref{eq:(2)} can be approximated by the form \\cite{Madsen_2010}:\n\\begin{equation}\\label{eq:(3)}\nC1(\\pmb{q},t) = C_{\\infty} + \\beta e^{-2(\\Gamma t)^\\alpha}\n\\end{equation}\nwhere \\(\\Gamma\\) is the rate of the dynamics and \\(\\alpha\\) is the compression constant. The baseline \\(C_{\\infty}\\) is nearly 1 in the considered cases. Each experiment contains a series of 200-1000 frames. 
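For the special case $\alpha = 1$ of Eq.~(3), fitting reduces to linear regression on $\log(C1 - C_{\infty})$. The sketch below illustrates this under the assumptions of a known baseline and noiseless synthetic data; it is not the authors' fitting code:

```python
import numpy as np

def fit_c1_exponential(t, c1, c_inf=1.0):
    """Fit C1(t) = C_inf + beta * exp(-2 * Gamma * t), i.e. Eq. (3) with
    alpha = 1, by linear regression on log(C1 - C_inf)."""
    y = np.log(c1 - c_inf)                    # = log(beta) - 2 * Gamma * t
    slope, intercept = np.polyfit(t, y, 1)
    return -slope / 2.0, np.exp(intercept)    # (Gamma, beta)

t = np.arange(1, 51, dtype=float)
c1 = 1.0 + 0.3 * np.exp(-2 * 0.05 * t)        # synthetic noiseless 1TCF
gamma, beta = fit_c1_exponential(t, c1)
```

For noisy data or a free $\alpha$, a non-linear least-squares fit of Eq.~(3) would be used instead.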
To augment the training data, additional 2TCFs are constructed using every second frame of the original series, which is equivalent to data collection with twice the lag period. Multiple regions of interest (ROIs) -- groups of pixels on the detector with equivalent wavevectors -- are analyzed for each series and the 2TCFs are calculated for each ROI.\nFor each model datum, or an \"example\", the input image is obtained by cropping a 50$\\times$50-pixel region from a 2TCF with the center on the \\emph{(1,1)} diagonal, starting at the lower left corner, as shown in Fig.~\\ref{fig:Figure1}(A). Each subsequent datum is obtained by shifting the center of the cropped image along the diagonal by 25 frames.\nThe target image for each example is the average of all the cropped inputs extracted from the same 2TCF. Thus, groups of 3 to 39 inputs have the same target. While the target images still contain noise, its level is significantly reduced with respect to the noise of the input images. Here, the size of 50$\\times$50 pixels is chosen because, for the majority of the examples in the considered dataset, the dynamics parameters can be inferred from the first 50 frames. However, any size can be selected to train a model with little to no modification to its architecture if enough data are available. \n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\n & Training & Validation \\\\\n\\hline\nUnique Inputs & 12236 & 5449 \\\\\n\\hline\nUnique Targets & 722 & 401 \\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:Table1}Distribution of examples between the training and validation sets.}\n\\end{table}\n\n\\noindent The diagonal (lag=0) 2TCF values of the raw data reflect the normalized variance of the photon count. Such values are vastly different between experiments and detector ROIs. They can far exceed the values of photon correlation between frames (typically on a scale between 1 and 1.5) and are usually excluded from the traditional XPCS analysis. 
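The example-construction scheme just described (50$\times$50 crops centered on the \emph{(1,1)} diagonal with a 25-frame step, targets formed by averaging all crops from the same 2TCF) can be sketched as follows; function and variable names are illustrative:

```python
import numpy as np

def diagonal_crops(c2, size=50, step=25):
    """All size x size sub-matrices of a 2TCF centered on the (1,1)
    diagonal, advancing by `step` frames -- the model inputs."""
    n = c2.shape[0]
    starts = range(0, n - size + 1, step)
    return np.stack([c2[s:s + size, s:s + size] for s in starts])

c2 = np.random.default_rng(0).random((400, 400))   # stand-in for a real 2TCF
inputs = diagonal_crops(c2)
target = inputs.mean(axis=0)    # shared target for all crops of this 2TCF
```

For series of 100 to 1000 frames this yields 3 to 39 crops per 2TCF, matching the group sizes quoted above.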
To prevent the influence of the high diagonal 2TCF values on the model cost function, the pixels along the diagonal are replaced with the values randomly drawn from the distribution of 2TCF values at lag=1. In doing so, we avoid artificial discontinuities in the images.\n\n\\noindent For a proper model training process and to ensure its generalizability, we find that all the input data should be introduced to the model on the same scale. However, a commonly applied standard scaling is not suitable for the present case as the level of noise may affect the values of the essential parameters such as the baseline and the contrast. \nTo bring all examples to a similar range, the estimated contrast for each series and each ROI is scaled to unity (see Methods). After processing, the data are split into the training and validation sets as shown in Table~\\ref{tab:Table1}. The splitting is done in a way that no two inputs from different sets have the same target.\n\n\\noindent\\textbf{Model Training.} The ED model architecture used in this work is shown in Fig.~\\ref{fig:Figure2}. The encoder part consists of two convolutional layers with the kernel size 1$\\times$1. Training the model with larger kernel sizes did not improve the performance of the model. While kernels of size 1$\\times$1 are used sometimes in CNN image applications \\cite{Simonyan_2014} for creating non-linear activations, generally, they are not exclusively incorporated across the entire network. The reason for this is that the convolutional kernels are intended to catch distinctive edges, which form characteristic features of an image. To identify an edge, the distribution of intensities among the neighboring pixels is needed. However, the 2TCFs used in this work do not have sharp edges, which can partially explain the lack of improved learning with larger kernels. Besides, an equilibrium 2TCF has a unique structure, with symmetry along the diagonals. 
An equilibrium 2TCF and its modified copy with pixel values randomly shuffled along the diagonals would produce exactly the same 1TCF. This property is picked up by the model during compression of convolutional outputs to the latent space.\n\n\\noindent Both convolutional layers consist of 10 channels with rectified linear unit (\\emph{ReLU}) activation function applied to the output of each channel. \nWe find that increasing the number of channels does not significantly change the performance of the model and the smaller number of channels gives poorer performance.\nNo pooling layers\\cite{Nagi_2011} were introduced to prevent information loss\\cite{Ronneberger_2015} at the encoding stage. The output of the convolutional layers contains 25,000 features. A linear transformation is performed to convert them to the latent space of a much smaller dimension. \nWhile some ED image applications implement fully convolutional architectures \\cite{Xioa-Jiao_Mao_2016, Se_Rim_Park_2017}, we believe that the introduction of the linear layer for purpose of denoising equilibrium 2TCFs is beneficial. Not only does the bottleneck layer provide the regularization of the model, it also mixes the features derived by convolutional layers from different parts of the input image.\nThe decoder part consists of two transposed convolutional layers, symmetrical to the encoder part, that convert the latent space back to a 50$\\times$50 image.\nThe \\emph{ReLU} function is applied only to the output of the first decoder layer.\n\n\\noindent The mean squared error (MSE) between the denoised output and the target is a natural choice of cost function for many image denoising applications. The MSE is shown to be useful for image denoising even in cases of some noise being present in the target \\cite{Lehtinen_2018}. 
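The forward pass of the architecture described above can be sketched at the shape level with plain NumPy, since a 1$\times$1 convolution (and its transpose at unit stride) is just a per-pixel linear mix of channels. The weights below are random stand-ins for trained parameters, and the latent dimension of 8 anticipates the selection made later in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w, b):
    # x: (c_in, H, W); w: (c_out, c_in).  A 1x1 convolution mixes channels
    # independently at every pixel, i.e. a tensordot over the channel axis.
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

relu = lambda z: np.maximum(z, 0.0)

H = W = 50
latent_dim = 8                 # dimension chosen in the paper's selection step

# random stand-in weights (small linear-layer scale to keep values finite)
w1, b1 = rng.normal(size=(10, 1)),  np.zeros(10)
w2, b2 = rng.normal(size=(10, 10)), np.zeros(10)
enc_lin = rng.normal(size=(latent_dim, 10 * H * W)) * 1e-3
dec_lin = rng.normal(size=(10 * H * W, latent_dim)) * 1e-3
w3, b3 = rng.normal(size=(10, 10)), np.zeros(10)
w4, b4 = rng.normal(size=(1, 10)),  np.zeros(1)

def cnn_ed_forward(c2):
    x = c2[None]                              # add channel axis: (1, 50, 50)
    x = relu(conv1x1(x, w1, b1))              # encoder conv 1 (+ ReLU)
    x = relu(conv1x1(x, w2, b2))              # encoder conv 2 (+ ReLU)
    z = enc_lin @ x.ravel()                   # 25,000 features -> latent (8,)
    y = (dec_lin @ z).reshape(10, H, W)       # linear layer back to feature maps
    y = relu(conv1x1(y, w3, b3))              # decoder conv 1 (+ ReLU)
    return conv1x1(y, w4, b4)[0]              # decoder conv 2, no activation

out = cnn_ed_forward(rng.random((H, W)))
```

A trained implementation would use a deep-learning framework with learned weights; the point here is only the flow of tensor shapes through the encoder, bottleneck, and decoder.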
Moreover, the presence of noise in the input data regularizes the model weights, enforcing a contractive property \\cite{Alain_2014} on the reconstruction function of denoising EDs. The goal of the model presented here is to reduce the noise in the 2TCF in such a way that the 1TCF, calculated from the model output, is as close to the target 1TCF as possible. Thus, the model's learning objective is modified by including the MSE between the respective 1TCFs in the cost function.\n\n\\noindent We find that the regularization, which is enforced by the noise in both inputs and targets, in conjunction with the early stopping based on the cost function for the validation set, is sufficient for the model to avoid over-fitting. Introducing additional weight regularization reduced the accuracy of the model, especially for the cases of fast dynamics.\n\n\\noindent However, the cost function calculated for the validation set is not the only parameter to consider when selecting the optimal parameters for the model.\nWhen examining models trained for different latent space dimensions, the validation cost function (Fig.~\\ref{fig:Figure3}) does not have a pronounced minimum in the range of dimensions between 1 and 200.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-3-Konstantinova.jpg}\n\\caption{Selection of the latent space dimension for the model. From top to bottom, as a function of the latent space dimension: the MSE(1TCF) (blue circles) and the cost function (orange triangles) for the validation set; the MRE($\\Gamma$); the MSE of $\\beta$, $C_{\\infty}$ and $\\alpha$, extracted from the fit of the corresponding CNN-ED ensemble's outputs for the examples in the validation set to Eq.~\\ref{eq:(3)}. 
The vertical line marks the choice of the latent space dimension for the model.}\n\\label{fig:Figure3}\n\\end{figure}\n\nHowever, this metric may not accurately reflect the systematic errors in reconstructing the\ndynamics parameters, such as $\\beta$, $\\Gamma$, $\\alpha$ and $C_{\\infty}$, which are essential to drawing scientific conclusions. An efficient model would precisely recover these parameters for a broad set of observations. Thus, the optimal dimension is selected based on how well the model output allows those parameters to be recovered for the validation data.\nHere, the rate of the dynamics, $\\Gamma$, is the most important parameter to consider since the variation of $\\beta$ is taken care of by pre-processing normalization and the variations of $\\alpha$ and $C_{\\infty}$ are naturally very small in the considered examples.\n\n\\noindent To reduce the variance associated with random weight initialization, ten models with different random initializations are trained for each latent space dimension. For each of the validation examples, the outputs of the ten models are converted to 1TCFs, averaged and then fit to Eq.~\\ref{eq:(3)}. The ground truth values, used for comparison, are obtained by fitting the 1TCF calculated from all (100-1000) frames in the same experiment and the same ROI as the input example is taken from.\n\n\\noindent Since values of the dynamics rate can be very close to zero, the mean absolute relative error (MRE) is considered for $\\Gamma$. The MSE is calculated for the other parameters. The accuracy of $\\Gamma$ keeps improving with an increased number of hidden variables, but the rate of improvement slows down considerably above 5-8 variables. The same is observed for $\\alpha$ and $C_{\\infty}$. This is in agreement with the MSE(1TCF) between the model output and the target, as shown in Fig.~\\ref{fig:Figure3}. The accuracy of $\\beta$ is relatively uniform across all the models. 
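The two selection metrics used above are simple to state explicitly; a trivial sketch (names are ours):

```python
import numpy as np

def mre(pred, truth):
    """Mean absolute relative error -- used for Gamma, whose values
    can be very close to zero."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return np.mean(np.abs(pred - truth) / np.abs(truth))

def mse(pred, truth):
    """Mean squared error -- used for beta, alpha and C_infinity."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return np.mean((pred - truth) ** 2)
```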
Based on the above, we select the models with eight latent variables for further consideration.\n\n\\noindent To address the variance of the selected CNN-ED, we train 100 such models with different random initialization and select among them the 10 best performing models based on the MSE(1TCF) for the validation set. Selecting only a limited number of the best performing models instead of combining all trained models also optimizes the use of storage memory and computational resources. \n\n\n\\noindent\\textbf{Model Testing.} The performance of the ensemble of models is evaluated through several tests. Firstly, we estimate the model applicability range by applying it to experiments similar to the ones used for training.\nAn example of noise removal from a test datum is shown in Fig.~\\ref{fig:Figure4}. Reduction of the noise is especially important for larger lag times, where fewer scattering frames are available for calculating the correlations. \n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-4-Konstantinova.jpg}\n\\caption{Example of 2TCF denoising with the CNN-ED models. (A) From left to right: the raw input 2TCF; the averaged target; the output of the ensemble of CNN-ED models. (B) 1TCF calculated from each 2TCF in (A). The dashed line corresponding to a baseline \\(C_{\\infty} = 1\\) is shown for convenience.}\n\\label{fig:Figure4}\n\\end{figure}\n\n\\noindent As mentioned above, despite the cost function working well for determining the optimal weights for a model, it is not sufficient to assess the reliability of the model output for quantitative analysis of the materials' dynamics. We assess the performance of the ensemble by comparing the fits with Eq.~\\ref{eq:(3)} for the 1TCFs calculated from the cropped 50$\\times$50 pixels raw data (inputs), the corresponding denoised model outputs and the full-range raw data (ground truth target) (see the Supplemental Materials). 
From the results of the test set, the noise removal from the raw cropped 2TCFs with the CNN-ED ensemble noticeably improves the precision for the dynamics parameters in a wide range of cases with $0.01~\\mathrm{frames}^{-1}<\\Gamma<0.15~\\mathrm{frames}^{-1}$ (i.e. the contrast drops by half within approximately the first 3-35 frames) in comparison with fitting the raw cropped 2TCFs. The application of the model enables reasonable estimates even in cases when the low signal-to-noise ratio of the raw cropped data prevents a convergent fit within the parameters' boundaries. In the region $\\Gamma >0.15~\\mathrm{frames}^{-1}$, the results of the model are in general no longer more accurate than the raw data. \nNote that the precision of the model depends on the accuracy of identifying the optical contrast. Accurate measurements of optical contrast in XPCS experiments with fast dynamics can be challenging as they can involve data collection with reduced exposure or relying on averaged speckle visibility.\nFurthermore, poor accuracy in identifying dynamics parameters is observed for inputs with very high noise levels (Fig.~S7) and\/or the presence of well-pronounced dynamical heterogeneities.\n\n\\noindent If 100 or more frames are available for analysis, 2TCFs with slow dynamics can be reduced by considering every $2^{nd}$, $3^{rd}$, etc. frame, as is done for augmenting the training data. This will effectively increase the exposure times and increase the $\\Gamma$ measured in $\\mathrm{frames}^{-1}$, making the model output more accurate. Alternatively, a model with a larger input 2TCF size can be trained to handle cases of slow dynamics.\n\n\n\\noindent While it is clear from the above how the model performs on average for individual independent 2TCFs, it is useful to see if application of the model can reduce the amount of data that needs to be collected in a typical XPCS experiment. We consider a single 700-frame series of scattering images among those used for creating the test set. 
The goal is to see if one can extract sufficient information about the $q$-dependence of the dynamics rate $\\Gamma$ using only the first 50 frames, with and without the model application. The target 2TCFs for each of the concentric ROIs (shown in Fig.~\\ref{fig:Figure5}(A)) are calculated using all 700 frames. The first $50\\times50$-frame regions of the 2TCFs are considered and the ensemble CNN-ED model is applied to them. The visual comparison between the level of noise in the raw data and in the model output for an ROI with large $q$ is shown in Fig.~\\ref{fig:Figure5}(B). The 1TCFs, calculated from the raw cropped 2TCFs, from the model outputs and from the target 2TCFs for each ROI, are fit to Eq.~\\ref{eq:(3)} with $\\alpha = 1$. The results are shown in Fig.~\\ref{fig:Figure5}(C-E). For the parameter $\\Gamma$, at small $q$, where the signal-to-noise ratio is high, all three fits are close. However, as $q$ grows and the noise level increases, the fit for the raw 50-frame 1TCFs starts to deviate from the target values more than the fit for the model outputs. In fact, the outcome of the model remains close to the actual values until the largest $q$ values (ROI~\\#16 and above). A similar tendency is observed for the parameters $\\beta$ and $C_{\\infty}$. This example demonstrates that application of the model can help to obtain sufficient information about an equilibrium system's dynamics from a smaller amount of data.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-5-Konstantinova.jpg}\n\\caption{Model application for recovering the $q$-dependence of the dynamics parameters. (A) Scattering image of the sample with the ROI map on top of it. Dark blue corresponds to pixels excluded from the analysis. (B) A 100-frame fragment of 2TCF from ROI \\#14. The first 50-frame part is denoised with the model. 
Variation of $\\Gamma$ (C), $\\beta$ (D) and $C_{\\infty}$ (E) parameters among ROIs with different $q$.}\n\\label{fig:Figure5}\n\\end{figure}\n\n\\noindent The applicability of the model to non-equilibrium data is also tested. Although the model is trained on equilibrium examples, it can still be applied to quasi-equilibrium regions of a 2TCF with gradually varying dynamics parameters. Here, the model performance is demonstrated for a sample with ageing dynamics that become slower with time. Since the target values cannot be obtained by averaging many frames in such a case, we calculate two 2TCFs with different noise levels, but carrying the same information, through sub-sampling pixels from the same ROI. The original ROI is a circle of small width with its center at $q$=0. This ROI is used for calculating the target 2TCF. To calculate the test 2TCF with a reduced signal-to-noise ratio, we randomly remove 74\\% of the pixels from the original ROI. The model is applied along the \\emph{(1,1)} diagonal in a sliding-window fashion (see Methods).\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-6-Konstantinova.jpg}\n\\caption{Model application for the case of non-equilibrium dynamics. (A) 2TCF calculated from the reduced ROI. (B) The same 2TCF after the model is applied along its diagonal. Temporal evolution of the $\\Gamma$ (C), $C_{\\infty}$ (D) and $\\alpha$ (E) parameters extracted from the noisy raw data, the denoised data and the target 2TCF.}\n\\label{fig:Figure6}\n\\end{figure}\n\n\\noindent To compare the test 2TCF and the result of the model application, cuts with a width of 1 frame are taken perpendicular to the \\emph{(1,1)} diagonal and the resulting 1TCFs are fit to Eq.~\\ref{eq:(3)}, in analogy to other XPCS analyses \\cite{Madsen_2010, Malik_1998}. The target parameters are obtained by taking cuts of 10 frames with a step of 10 frames from the target 2TCF and fitting the 1TCF, averaged over each cut, to Eq.~\\ref{eq:(3)}. 
Averaging is done to improve the accuracy of the target values. \nThe contrast $\\beta$ is estimated as the mean of the respective raw 2TCF(lag=1) at frames 250-300, where the dynamics are fairly slow, and is fixed during the fit.\nThe results for $\\Gamma$, $C_{\\infty}$ and $\\alpha$ are shown in Fig.~\\ref{fig:Figure6}(C-E). While the general trend of $\\Gamma(t)$ could be visually estimated from the raw test data, the output of the model produces far fewer outliers. Moreover, the temporal region where $\\Gamma$ can be reasonably estimated is wider for the model output. The fit to the raw test data does not allow one to estimate $\\Gamma$ in the first 30-40 frames, while the fit to the denoised data is close to the target in that region. The fit of the denoised data only shows high uncertainty at the corners of the 2TCF, where the corresponding 1TCFs consist of fewer than 20 points. The variance of the parameter $C_{\\infty}$ is also improved for the denoised data, but the most notable improvement in accuracy is observed for the parameter $\\alpha$. The fits to the raw noisy data have high variance, which hides the upward trend of $\\alpha$, in contrast to the fits to the denoised data. \n\n\\noindent In a typical experiment, cuts wider than 1 frame are used for estimating the dynamics parameters, achieving better accuracy for the raw data than shown in Fig.~\\ref{fig:Figure6}. However, the selection of regions with quasi-equilibrium dynamics is not trivial. 
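The 1-frame-wide cuts perpendicular to the \emph{(1,1)} diagonal, from which the time-resolved 1TCFs are obtained, can be extracted as in the following sketch; the exact indexing convention is an assumption:

```python
import numpy as np

def perpendicular_cut(c2, t):
    """1-frame-wide cut through a 2TCF perpendicular to the (1,1)
    diagonal, centered at frame t: the values C2(t + k, t - k),
    indexed by k (corresponding to lag 2k)."""
    n = c2.shape[0]
    kmax = min(t, n - 1 - t)          # stay inside the matrix
    return np.array([c2[t + k, t - k] for k in range(kmax + 1)])
```

Near the corners of the 2TCF the cut is short (here, fewer points for `t` near 0 or `n - 1`), which is why the fits above become uncertain there.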
Since the fits to 1-frame-wide cuts from the denoised data have a low variance almost across the entire experiment, application of CNN-EDs makes the data more suitable for automated analysis and for visual inspection when selecting the quasi-equilibrium regions.\n\n\n\\noindent\\textbf{Comparison to Other Techniques.} We compare the performance of our approach to several off-the-shelf solutions for noise reduction in images: linear principal-components\u2013based, Gaussian, median and total variation denoising (Chambolle's projection) \\cite{Duran_2013} filters. The comparison of the application of these techniques to the same test example as in Fig.~\\ref{fig:Figure4} is shown in Fig.~\\ref{fig:Figure7}. Principal component filters share the same idea as the ED model \u2013 preserving only the information from a few essential components of the original data. In fact, an autoencoder is a type of non-linear principal component generator. As one would expect, a filter based on linear principal components, trained on the same data as the CNN-EDs, under-performs compared to the case of non-linear components due to the larger bias of the component-extraction procedure. Gaussian and median filters are based on smoothing the intensity fluctuations between neighboring pixels, and total variation denoising is a regularized minimization of additive normally distributed noise. While these approaches help to reduce pixel-to-pixel intensity variations, unlike the CNN-ED models demonstrated here, they do not learn the functional form of the equilibrium 2TCF images and cannot be improved by having a larger training set. By considering only the local surroundings of individual pixels in a single image, such algorithms cannot recognize, for example, that the correlation function decays at larger lag times.
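For reference, the purely local smoothing filters discussed above are available in standard tools; a minimal sketch using SciPy on a toy 2TCF-like image (the principal-component and total-variation filters are omitted here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(0)
# Toy noisy 2TCF-like image: a smooth exponential decay in |t1 - t2|
# plus uncorrelated pixel noise.
t = np.arange(50)
clean = 0.2 * np.exp(-0.1 * np.abs(t[:, None] - t[None, :])) + 1.0
noisy = clean + rng.normal(scale=0.05, size=clean.shape)

smoothed_g = gaussian_filter(noisy, sigma=1.5)  # local Gaussian smoothing
smoothed_m = median_filter(noisy, size=3)       # local median smoothing

def mse(a):
    # Pixel-wise error relative to the known clean image.
    return float(np.mean((a - clean) ** 2))
```

Both filters reduce the pixel-wise error on this example, but only by pooling nearby pixels; they encode no knowledge of the global functional form of a 2TCF.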
Consequently, when an isolated high-intensity pixel (noise) is encountered in an image, applying such filters inflates the intensities in the surrounding pixels, highlighting the noise instead of correcting it. Thus, noise removal with the above filters can introduce false trends in the 1TCF, which makes them unsuitable for quantitative XPCS data analysis. On the other hand, a CNN-ED, which is a regression model, learns from numerous examples the characteristic trends in the data and is less likely to introduce artifacts.\n\n\\begin{figure}[!htb]\n\\centering\n\\includegraphics[]{Figure-7-Konstantinova.jpg}\n\\caption{Comparison of various noise removal techniques applied to an example from the test set. Top row: results of applying filters to the raw 2TCF; middle row: 1TCFs calculated from the 2TCFs for the raw input (blue dashed line), the results of the respective filters (orange solid line) and the target (green solid line); bottom row: residuals of the 1TCFs calculated from the example after denoising with the respective filters.}\n\\label{fig:Figure7}\n\\end{figure}\n\n\\section*{Discussion}\nThe CNN-ED approach to noise removal in XPCS shows a reasonable improvement in the quality of the signal, allowing for quantification of a sample's dynamics from a limited amount of data, avoiding extensive data collection, and giving access to finite regions of reciprocal space and to quasi-equilibrium intervals of non-equilibrium dynamics. The CNN-ED models go beyond a simple smoothing of intensity between neighboring pixels, and are superior to it, since these models empirically learn the structural form of the 2TCF. The models are fast to train and do not require an extensive amount of training data. Their accuracy is fairly robust with respect to the choice of hyperparameters such as the number of channels in the hidden layers, the convolutional kernel size and the latent space dimension.
The computational resources required for the application of the ensemble of 10 models are smaller than those needed to calculate 2TCFs for the number of frames typically required to achieve the same signal-to-noise ratio. \n\n\\noindent However, there are several limitations to keep in mind when applying CNN-ED models to a 2TCF. The testing results show that the models may not reliably remove the noise for the cases of very fast and very slow dynamics, as well as from very noisy data (see the illustration in Supplementary Fig.~\\ref{fig:FigureS7}). Some inaccuracy for the cases of fast dynamics comes from uncertainties in identifying the normalization factors (contrasts) for pre-processing of the inputs, which is also a challenge for traditional analysis. When the speckle visibility drops significantly within a single frame acquisition period, its estimation from the input data can have a high error. As seen from the model performance for the validation set and for the non-equilibrium test case in comparison to its performance for the equilibrium test set, a more accurate scaling of the inputs can improve the precision of the model for experiments with faster dynamics.\nIn addition, one is advised to consider the context of extracted dynamics for a given material before relying solely on the information extracted from a single 2TCF, regardless of whether a CNN-ED model is applied.\nOne benefits from a series of experiments on a single system, such as a temperature dependence or the $q$-dependence demonstrated here, to lend credibility to the dynamics extracted from one particular experiment.\n\n\\noindent In this work, only equilibrium dynamics described by stretched exponentials with a baseline close to 1 are used for training. Thus, the model learns to approximate any input with this type of dynamics. This can result in a loss of fine details, such as heterogeneities, oscillations or fast divergence of dynamics parameters in non-equilibrium cases.
However, the demonstrated approach to noise removal can be expanded to other types of dynamics given a sufficient amount of training data. Even in the absence of proper denoised target data, the autoencoder version of the model can significantly reduce the random noise. Furthermore, a CNN-ED model can be trained to correct for specific types of artifacts, such as the impact of instrumental instabilities or outlier frames, leading to a more efficient use of experimental user facilities \\cite{Campbell_2020}. As in other fields\\cite{Baur_2019, Chong_2017}, the autoencoder models can be used for identifying unusual observations in the stream of XPCS data. Additionally, the encoded low-dimensional representation of the 2TCF can be used for classification, regression and clustering tasks related to samples' dynamics. In the broader scope, the CNN-ED models presented here and their modifications have the potential for application in automated high-rate XPCS data collection\\cite{Zhang_2021} and processing pipelines, reducing the reliance on the human-in-the-loop in decision making during experiments. \n\n\\section*{Methods}\n\n\\noindent\\textbf{Training data.} The data for the training and validation sets contain experiments for 7 samples from 3 different material groups: A (1 sample), B (2 samples) and C (4 samples). The experiments are taken at various exposures, acquisition rates and temperatures. Concentric ROIs with increasing $q$ are used. Depending on the noise level and the dynamics duration, from 2 to 17 ROIs (median 10 ROIs) are considered for each experiment. The diversity of experimental conditions and regions in reciprocal space allows one to obtain a realistic distribution of dynamics parameters and noise levels. We have not included the cases with very slow dynamics, for which only a small portion of the decay is complete within the 50 frames.
To cut off the high-noise data, we excluded the cases where the fit to Eq.~\\ref{eq:(3)} did not converge for the full-range 1TCF. \nThe distributions of dynamics parameters for the training and the validation set are shown in Fig.~\\ref{fig:FigureS1}.\nFor the model training purposes, all input data (2TCF$_{noisy}$) are scaled as:\n\\begin{equation}\\label{eq:(4)}\ninput = (2TCF_{noisy} - 1)\/\\beta^{*} + 1\n\\end{equation}\nwhere $\\beta^{*}$ is the estimate of the speckle visibility for the integration time of a single frame. It is obtained from fitting the equivalent pixels' intensity fluctuations with a negative binomial distribution \\cite{Luxi_Li_2014}. For this, the speckle visibility is calculated for each frame and is averaged among all the frames in the series. The target data are scaled accordingly.\n\n\n\\noindent\n\\textbf{Test data.} The test data are collected in a similar fashion to the training\/validation data. Experiments for 5 different samples in the same material group (C) are considered. Experiments are performed for different temperatures, exposure times and acquisition rates. Concentric ROIs with increasing $q$ are used. The 10 ROIs with the smallest $q$ values are considered. However, no visual inspection of the data is done prior to model application and the ROIs with slow dynamics are not rejected. ROIs where the full-range 1TCF fit to Eq.~\\ref{eq:(3)} does not converge are not considered. Overall, 12060 inputs (679 distinct targets) are considered in the test set. The distribution of the parameters from Eq.~\\ref{eq:(3)} is shown in Fig.~\\ref{fig:FigureS2}.\n\\noindent Unlike for the training\/validation data, the contrast for normalization of the test inputs is estimated from the 1TCF (derived from the 2TCF) at lag=1 frame instead of the speckle visibility. This is done to reduce the computation time and to test the model performance for the cases when only the 50$\\times$50 2TCFs, and not the scattered images, are available.
No adjustment is done to the baseline as the input data do not provide a good estimate for it. Thus, for each of the noisy 2TCF$_{noisy}$, the model \\emph{input} is calculated as:\n\\begin{equation}\\label{eq:(5)}\ninput = (2TCF_{noisy} - 1)\/1TCF_{noisy}(lag=1) + 1\n\\end{equation}\nThe denoised 2TCF$_{denoised}$ is then obtained from the output as: \n\\begin{equation}\\label{eq:(6)}\n2TCF_{denoised} = (output - 1)*1TCF_{noisy}(lag=1) + 1\n\\end{equation}\n\n\n\\noindent\\textbf{Non-equilibrium test.}\n\\noindent For the example of ageing dynamics considered in this work, the model is applied to each $50\\times50$ piece of the raw 2TCF along its \\emph{(1,1)} diagonal with a step size of 5 frames, starting at the first frame.\nPrior to the application of the model, each input is obtained from a raw 2TCF$_{noisy}$ as:\n\\begin{equation}\\label{eq:(7)}\ninput = (2TCF_{noisy} - C_{\\infty}^{*})\/\\beta^{*}(lag = 1) +1 \n\\end{equation}\nwhere $\\beta^{*}(lag = 1)$ is the estimate of the contrast at lag=1, computed as the mean of 2TCF$_{noisy}(lag =1)$ for frames 250-300, and $C_{\\infty}^{*}$ is the estimate of the baseline, computed as the mean of 2TCF$_{noisy}$ at lags 270-300. The reverse transformation is applied to the model output.\nThe overlapping model outputs between the current and the previous steps are averaged. The values outside of the $50\\times50$ diagonal sliding window remain unchanged. The same procedure is repeated with the model sliding window moving from the last frame towards the first frame. The two results are averaged to reduce the dominating influence of the later dynamics over the earlier dynamics and vice versa. The loss of temporal resolution due to the convolution between the model and the raw signal is not significant for the considered case of slowly-evolving sample dynamics.
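A single forward pass of this sliding-window procedure can be sketched as follows; the `model` argument stands in for the trained CNN-ED (an identity model is used for illustration), and the second, reverse pass described above is omitted:

```python
import numpy as np

def denoise_along_diagonal(tcf2, model, win=50, step=5):
    # Apply `model` (a function mapping a win x win array to a denoised
    # win x win array) in a sliding window along the (1,1) diagonal,
    # averaging overlapping outputs; pixels outside every window are
    # left unchanged.
    n = tcf2.shape[0]
    acc = np.zeros_like(tcf2, dtype=float)
    cnt = np.zeros_like(tcf2, dtype=float)
    for s in range(0, n - win + 1, step):
        sl = slice(s, s + win)
        acc[sl, sl] += model(tcf2[sl, sl])
        cnt[sl, sl] += 1.0
    out = tcf2.astype(float).copy()
    covered = cnt > 0
    out[covered] = acc[covered] / cnt[covered]
    return out

# With an identity "model", the 2TCF is returned unchanged.
x = np.random.default_rng(1).normal(size=(120, 120))
y = denoise_along_diagonal(x, lambda w: w)
```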
\n\n\n\\noindent\\textbf{Model Training Details.} The cost function used for training the models is the sum of the Mean Squared Error (MSE) between the target 2TCF and the model's output and the MSE between the respective 1TCFs (without lag=0): \n\\begin{equation}\\label{eq:(8)}\ncost = \\frac{1}{2500\\cdot m}\\sum_{k =1}^{m} ||x^{out}_{k} -x^{target}_{k}||_{2} + \\frac{1}{49 \\cdot m}\\sum_{k =1}^{m} ||1TCF(x^{out}_{k}) -1TCF(x^{target}_{k})||_{2}\n\\end{equation}\nwhere \\(x^{out}_{k}\\) is the model output for the $k$\u2013th training example, \\(x^{target}_{k}\\) is the corresponding target, \\emph{m} is the number of examples, and \\(||\\cdot||_{2}\\) denotes the \\emph{2-norm}. \n\n\\noindent At every training epoch, batches of size 8 are processed. The Adam optimizer \\cite{Kingma_2014} with an initial learning rate between 2.5e-6 and 4e-5 is used. The learning rate is reduced by a factor of 0.9995 at every epoch. Initial weights in the convolutional and linear layers are assigned according to Xavier uniform initialization \\cite{Glorot_2010}. The models are trained with an Nvidia GeForce RTX 2070 Super GPU accelerator. For the selected CNN-ED configuration, the average training time is 27 seconds per epoch, with 9-82 epochs necessary to train a model. Each input or target takes 30 kB of computer memory.\n\n\\noindent A model application does not require a GPU and, in fact, can be performed faster without transferring the 2TCF data to a GPU. When using a CUDA accelerator, loading the model from a file, converting the 2TCF from numpy arrays to a CUDA Pytorch tensor, applying the model and converting the result back to a numpy array takes 2.3 ms on average, with the pure model computation taking 0.48 ms. Without a CUDA accelerator, the corresponding times are 1.4 ms and 0.57 ms, respectively. \n\n\\section*{Acknowledgements}\n\nThe authors thank A. Fluerasu and M. Fukuto for fruitful discussions.
This research used the CHX and CSX beamlines and resources of the National Synchrotron Light Source II, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory (BNL) under Contract No. DE-SC0012704, and a BNL Laboratory Directed Research and Development (LDRD) project 20-038 \"Machine Learning for Real-Time Data Fidelity, Healing, and Analysis for Coherent X-ray Synchrotron Data\".\n\n\\section*{Author contributions statement}\n\nA.M.B., A.M.D., L.W. and T.K. conceived the idea; L.W. performed the beamline experiments, generated the XPCS results, and identified individual scans for model development; T.K. processed the data for the model; A.M.D. and T.K. wrote the code; M.R. provided technical consultation; T.K., L.W., A.M.D., A.M.B. and M.R. analyzed the model performance; T.K. wrote the manuscript with contributions from all authors.\n\n\\section*{Additional information}\n\n\\noindent\\textbf{Accession codes and Data availability} The code and the data used for the model training can be found in the GitHub repository \\emph{https:\/\/github.com\/bnl\/CNN-Encoder-Decoder} at the time of publication. \n\n\\noindent\\textbf{Competing interests} The authors declare no competing interests. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sec:intro}\nA commonly encountered problem in practice is merging databases\ncontaining records collected by different sources, often via\ndissimilar methods. Different variants of this task are known as record linkage, de-duplication, and\nentity resolution. Record linkage is inherently a difficult problem\n\\cite{christen_2011, Herzog_2007,Herzog:2010}. \nThese difficulties are partially\ndue to the noise inherent in the data, which is often hard to\naccurately model \\cite{pasula_2003, steorts_2013b}. A\nmore substantial obstacle, however, is the scalability of the approaches \\cite{WYP:2010}.
With $d$ databases of $n$ records each, brute-force approaches,\nusing all-to-all comparisons, require $O(n^d)$ comparisons. This is quickly\nprohibitive for even moderate $n$ or $d$. To avoid this computational\nbottleneck, the number of comparisons made must be drastically reduced, without\ncompromising linkage accuracy. Record linkage is made scalable by ``blocking,'' which involves partitioning datafiles into\n``blocks'' of records and treating records in different blocks as non-co-referent {\\em a\n priori} \\cite{christen_2011, Herzog_2007}. Record linkage methods are only\napplied {\\em within} blocks, reducing the comparisons to\n$O(B n_{\\max}^d)$, with $n_{\\max}$ being the size of the largest of the $B$\nblocks.\n\nThe most basic method for constructing a blocking partition picks certain fields (e.g. geography, or gender and year\nof birth) and places records in the same block if and only if they agree on\nall such fields. This amounts to an {\\em a priori} judgment that these fields\nare error-free. We call this \\emph{traditional blocking} (\\S \\ref{sec:block-naive}).\n\n\nOther data-dependent blocking methods \\cite{christen_2011, WYP:2010}\n are highly application-specific or are based on placing similar records into the\nsame block using techniques of ``locality-sensitive hashing'' (LSH).\n LSH uses all of\nthe information contained in each record and can be adjusted to ensure that blocks are\nmanageably small, but it does not by itself allow for further record linkage within blocks. For example, \\cite{christen_2014} introduced novel data structures for sorting and fast approximate nearest neighbor look-up within blocks produced by LSH. Their\napproach struck a balance between speed and recall, but their technique is\nvery specific to nearest neighbor search with similarity defined by the hash\nfunction. Such\nmethods are fast and have high recall, but suffer from low precision, i.e., too\nmany false positives.
This approach is called \\emph{private} if, after the blocking is performed, all candidate record pairs are compared and classified into matches\/non-matches using computationally intensive ``private'' comparison and classification techniques \\cite{christen_2009}. \n\nSome blocking schemes involve clustering techniques to partition the records into clusters of similar records. \\cite{mccallum_2000} used canopies, a simple clustering approach that groups similar records into overlapping subsets for record linkage. Canopies involves organizing the data into overlapping clusters\/canopies using an inexpensive distance measure. A more expensive distance measure is then used to link records within each canopy, reducing the number of required comparisons of records.\n \\citep{vatsalan_2013} used a sorted nearest neighborhood clustering approach, combining $k$-anonymous clustering and the use of publicly available reference values to privately link records across multiple files. \n\nSuch clustering-based blocking schemes motivate our variants of LSH methods for blocking. \nThe first, transitive locality sensitive hashing (TLSH), is based upon the community discovery literature, such that a \\emph{soft} (relaxed) transitivity can be imposed across blocks. The second, $k$-means locality sensitive hashing (KLSH), is based upon the information retrieval literature and clusters similar records into blocks using a vector-space representation and projections.\n (KLSH has been\nused before in information retrieval, but never with record linkage\n\\citep{pauleve_2010}.)\n\nThe organization of this paper is as follows. \\S \\ref{sec:block} reviews traditional blocking. We then review other blocking methods from the computer science literature in \\S \\ref{sec:block-modern}. \\S \\ref{sec:lsh} presents two different methods based upon locality sensitive hashing, TLSH and KLSH. We discuss the computational complexity of each approach in \\S \\ref{sec:complex}.
We evaluate these methods (\\S \\ref{sec:results}) on simulated data, using recall, reduction ratio, and the empirical computational time as our evaluation criteria, comparing to the other methods discussed above. Finally, we discuss privacy protection aspects of TLSH and KLSH, given the description of LSH as a ``private'' blocking technique. \n\n\\section{Blocking Methods}\n\\label{sec:block}\nBlocking divides records into mutually\nexclusive and jointly exhaustive ``blocks,'' allowing the linkage\nto be performed within each block. \nThus, only records within the same\nblock can be linked; linkage algorithms may still aggregate information across\nblocks. \nTraditional blocking requires domain knowledge to\npick out highly reliable, if not error-free, fields for blocking. This methodology has at least two drawbacks. The first is that the resulting blocks may still be so large that linkage within them is\ncomputationally impractical. The second is that because blocks {\\em only}\nconsider selected fields, much time may be wasted comparing records that\nhappen to agree on those fields but are otherwise radically different.\n\nWe first review some simple alternatives to traditional blocking on fields, and then introduce\nother blocking approaches that stem from computer science.\n\n\n\\subsection{Simple Alternatives to Blocking}\n\\label{sec:block-naive}\nSince fields can be unreliable for many applications, blocking may miss large proportions of matches. Nevertheless, we can make use of domain-specific knowledge about the types of errors expected for field attributes. To make decisions about matches\/non-matches, we must understand the \\emph{kinds of errors} that are unlikely for a certain field or a combination of them. With this information, we can identify a pair as a non-match when it has strong disagreements in a combination of fields. It is crucial that this calculation be scalable since it must be checked for all pairs of records.
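As a toy illustration, key-based traditional blocking amounts to grouping records by the values of the chosen fields (the record layout and field names below are illustrative):

```python
from collections import defaultdict

def traditional_blocking(records, fields):
    # Place two records in the same block iff they agree exactly on
    # every blocking field -- an a priori judgment that those fields
    # are error-free.  A single O(n) pass over the records.
    blocks = defaultdict(list)
    for i, rec in enumerate(records):
        key = tuple(rec[f] for f in fields)
        blocks[key].append(i)
    return dict(blocks)

records = [
    {"name": "ANNA", "year": 1980, "gender": "F"},
    {"name": "ANNA", "year": 1980, "gender": "F"},  # likely duplicate
    {"name": "HANS", "year": 1980, "gender": "M"},
]
blocks = traditional_blocking(records, ["year", "gender"])
```

A single typo in a blocking field would send a true match to a different block, which is exactly the fragility discussed above.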
Some sequence of these steps reduces the set of pairs to a size such that more computationally expensive comparisons can be made. In \\S \\ref{ss:naive_results}, we apply these concepts.\n\n\\subsection{Cluster-Based Blocking}\n\\label{sec:block-modern}\nOthers have described blocking as a clustering problem, \nsometimes with a special emphasis on privacy, e.g., see \n\\cite{durham_2012,\nkarakasidis_2012, kuzu_2011, vatsalan_2013}. \nThe motivation is\nnatural: the records in a cluster should be similar,\nmaking good candidate pairs for linkage. \n\n\nOne clustering approach proposed for blocking is nearest neighbor clustering.\nThreshold nearest neighbor clustering (TNN) begins with a single record as the base\nof the first cluster, and recursively adds the nearest neighbors of records in\nthe cluster until the distance\\footnote{The distance metric used can vary depending on the\nnature of the records.} to the nearest neighbor exceeds some threshold.\nThen one of the remaining records is picked to be the base for the next\ncluster, and so forth. K-nearest neighbor clustering (KNN) uses a similar procedure, but ensures\nthat each cluster contains at least $k$ records\\footnote{Privacy-preserving versions of these approaches use ``reference\nvalues'' rather than the records themselves to cluster the records \\cite{vatsalan_2013}.}, to help maintain\n``$k$-anonymity'' \\cite{karakasidis_2012}.\n\nA major drawback of nearest neighbor clustering is that it requires\ncomputing a large number of distances between records, $O(n^2)$. \nBlocking a new record means finding its nearest neighbors, an\n $O(n)$ operation. \n\nThe cost of calculating distances between records in\nlarge, high-dimensional datasets led \\cite{mccallum_2000} to propose the\nmethod of \\emph{canopies}. In this approach, a computationally cheap (if inaccurate)\ndistance metric is used to place records into potentially-overlapping sets (canopies). 
\nAn initial record is picked randomly to be the base of the first\ncanopy; all records within a distance $t_1$ of the base are grouped under that\ncanopy. Those within distance $t_2 \\leq t_1$ of the base are removed from\nlater consideration. A new record is picked to be the base of the next\ncanopy, and the procedure is repeated until the list of candidate records is empty. More accurate but\nexpensive distance measures are computed only between records that fall under\nat least one shared canopy. That is, only record-pairs sharing a canopy are candidates to be linked.\n\n\nCanopies do not, strictly speaking, form a blocking method. They overlap, \nmaking the collection of canopies only a covering of the set\nof records, rather than a partition. We can derive blocks from canopies,\neither set-theoretically or by setting $t_1=t_2$. The complexity of building\nthe canopies is $O(nC_n)$, with $C_n$ being the number of canopies, itself a\ncomplicated and random function of the data, the thresholds, and the order in\nwhich records are chosen as bases. Further, finding fast, rough distance measures for complicated high-dimensional records is non-trivial.\n\n\\subsection{LSH-Based Approaches}\n\\label{sec:lsh}\nWe explore two LSH-based blocking methods. These are based, respectively, on graph\npartitioning or community discovery, and on combining random projections with\nclassical clustering. The main reason for exploring these two methods is that even with comparatively\nefficient algorithms for partitioning the similarity graph, doing so is still\ncomputationally impractical for hundreds of thousands of records.\n\n\n\n\n\\subsubsection{Shingling}\nLSH-based blocking schemes ``shingle'' records\n\\cite{rajaraman_2012}. That is, each record is treated as a string and\nis replaced by a ``bag'' (or ``multi-set'') of the length-$k$ contiguous\nsub-strings that it contains. These are known as ``$k$-grams'', ``shingles'',\nor ``tokens''.
For example, the string ``TORONTO'' yields the bag of length-two\nshingles ``TO'', ``OR'', ``RO'', ``ON'', ``NT'', ``TO''. (N.B., ``TO'' appears\ntwice.)\n\nAs an alternative to shingling, we might use a bag-of-words representation, or\neven shingle into consecutive pairs (triples, etc.) of words. \nIn our\nexperiments, shingling at the level of letters worked better than dividing by\nwords.\n\n\\subsubsection{Transitive LSH (TLSH)}\n\\label{subsec:tlsh}\n\nWe create a graph of the similarity between records. \nFor simplicity, assume that all fields are string-valued. Each record is\nshingled with a common $k$, and the bags of shingles for all $n$ records are\nreduced to an $n$-column binary-valued matrix $M$, indicating which\nshingles occur in which records. \n$M$ is large, since the number of length-$k$ shingles typically grows\nexponentially with $k$. As most shingles are absent from most records, $M$ is\n sparse. We reduce its dimension by generating a random\n``minhash'' function and applying it to each column. Such functions map\ncolumns of $M$ to integers, ensuring that the probability of two columns being\nmapped to the same value equals the Jaccard similarity between the columns\n\\cite{rajaraman_2012}. Generating $p$ different minhash functions, we reduce\nthe large, sparse matrix $M$ to a dense $p\\times n$ matrix, $M^{\\prime}$, of integer-valued\n``signatures,'' while preserving information. Each row of\n$M^{\\prime}$ is a random projection of $M$.
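The shingling and minhash-signature construction just described can be sketched as follows; sets rather than multisets are used for brevity, and a simple linear hash family over Python's built-in `hash` stands in for the minhash functions (an assumption, not the authors' implementation):

```python
import random

def shingles(s, k=2):
    # Set of length-k contiguous substrings of s (counts dropped).
    return {s[i:i + k] for i in range(len(s) - k + 1)}

P = 2**31 - 1
random.seed(0)
# p = 100 random hash functions h(x) = (a*hash(x) + b) mod P.
hash_funcs = [
    (lambda x, a=random.randrange(1, P), b=random.randrange(P):
     (a * hash(x) + b) % P)
    for _ in range(100)
]

def minhash_signature(record):
    # One column of the dense signature matrix M': the minimum hashed
    # shingle under each of the p random hash functions.
    sh = shingles(record)
    return [min(h(x) for x in sh) for h in hash_funcs]

sig1 = minhash_signature("TORONTO")
sig2 = minhash_signature("TORONTTO")
# Fraction of agreeing signature entries estimates Jaccard similarity
# (here the true Jaccard similarity of the shingle sets is 5/6).
est = sum(a == b for a, b in zip(sig1, sig2)) / len(sig1)
```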
Finally, we divide\nthe rows of $M^{\\prime}$ into $b$ non-overlapping ``bands,'' apply a hash\nfunction to each band and column, and establish an edge between two records if\ntheir columns of $M^{\\prime}$ are mapped to the same value in any\nband.\\footnote{To be mapped to the same value in a particular band, two columns\n must either be equal, or a low-probability ``collision'' must have occurred for the\n hash function.}\n\nThese edges define a graph: records are nodes, and edges indicate a certain\ndegree of similarity between them. We form blocks by dividing the graph into its connected components.\nHowever, the largest connected\ncomponents are typically very large, making them unsuitable as blocks.\nThus, we sub-divide the connected components into ``communities'' or ``modules'' \n--- sub-graphs that are densely connected\ninternally, but sparsely connected to the rest of the graph. This \nensures that the blocks produced consist of records that are all highly\nsimilar, while having relatively few ties of similarity to\nrecords in other blocks \\cite{fortunato_2010}. Specifically, we apply the\nalgorithm of \\cite{clauset_2004}\\footnote{We could use other community-discovery algorithms, e.g. \\cite{goldenberg_2010}.}, sub-dividing communities greedily, until even\nthe largest community is smaller than a specified threshold.\\footnote{This maximum\nsize ensures that record linkage is feasible.} \n The end result is a set of blocks that balances false\nnegative errors in linkage (minimized by having a few large blocks) and the\nspeed of linkage (minimized by keeping each block small). We summarize the\nwhole procedure in Algorithm \\ref{subsec:tlsh} (see Appendix \\ref{sec:app}).\n\nTLSH involves many tuning parameters (the length of shingles, the number\nof random permutations, the maximum size of communities, etc.).\nWe chose the shingle length to give \nthe highest recall possible for each application.
We used 100 random permutations, since the recall was approximately constant for any number of permutations above 100. \nFurthermore, we chose a maximum community size of 500, after tuning this specifically for the desired speed.\n\n\n\\subsubsection{K-Means Locality Sensitive Hashing (KLSH)}\n\\label{subsec:klsh}\n\nThe second LSH-based blocking method begins, like TLSH,\nby shingling the records, treated as strings, but then differs in several ways.\nFirst, we do not ignore the number of times each shingle type appears in a\nrecord, but rather keep track of these counts, leading to a bag-of-shingles\nrepresentation for records. Second, we measure similarity between\nrecords using the inner product of bag-of-shingles vectors, with\ninverse-document-frequency (IDF) weighting. Third, we reduce the\ndimensionality of the bag-of-shingles vectors by random projections, followed\nby clustering the low-dimensional projected vectors with the $k$-means\nalgorithm. \nHence, we can control the mean\nnumber of records per cluster to be $n\/c$, where $c$ is the number of block-clusters. In practice, there is a fairly small\ndispersion around this mean, leading to blocks that, by construction, have roughly the same size distribution for all applications.\\footnote{This property is not guaranteed for most LSH methods.} The KLSH algorithm is given in Appendix \\ref{sec:app}.\n\n\n\n\\section{Computational Complexity}\n\\label{sec:complex}\n\n\n\\subsection{Computational Complexity of TLSH}\n\nThe first steps of the algorithm can be done independently across records.\nShingling a single record is $O(1),$ so shingling all the records\nis $O(n)$. Similarly, applying one minhash function to the shingles of one\nrecord is $O(1),$ and there are $p$ minhash functions, so minhashing\ntakes $O(np)$ time. Hashing again, with $b$ bands, takes $O(nb)$ time.
We\n assume that $p$ and $b$ are both $O(1)$ as $n$ grows.\n\n\nWe create an edge between every pair of records that get mapped to the same\nvalue by the hash function in some band. Rather than iterating over pairs of\nrecords, it is faster to iterate over values $v$ in the range of the\nhash function. If there are $|v|$ records mapped to the value $v$, creating\ntheir edges takes $O(|v|^2)$ time. On average, $|v| = n V^{-1}$, where $V$ is the\nnumber of points in the range of the hash function, so creating the edge list\ntakes $O(V (n\/V)^2 ) = O(n^2 V^{-1})$ time. \\cite{clauset_2004} shows that creating\nthe communities from the graph is $O(n (\\log{n})^2)$.\n\n\n\nThe total complexity of TLSH is $O(n) + O(np) + O(nb) + O(n^2 V^{-1}) +\nO(n(\\log{n})^2) = O(n^2 V^{-1})$, and is dominated by actually building the graph.\n\n\n\\subsection{Computational Complexity of KLSH}\n\nAs with TLSH, the shingling phase of KLSH takes $O(n)$ time. The time required\nfor the random projections, however, is more complicated. Let $w(n)$ be the\nnumber of distinct words found across the $n$ records. The time needed to do\none random projection of one record is then $O(w(n))$, and the time for the\nwhole random projection phase is $O(npw(n))$. For $k$-means clustering, with a\nconstant number of iterations $I$, the time required to form $b$ clusters of\n$n$ $p$-dimensional vectors is $O(bnpI)$. Hence, the complexity is $O(npw(n))\n+ O(bnpI)$.\n\n\nHeaps's law suggests $w(n) = O(n^\\beta)$, where $0 < \\beta\n< 1$.\\footnote{For English text, $0.4 < \\beta < 0.6$.} Thus, the complexity\nis $O(p n^{1+\\beta}) + O(bnpI)$. \nFor record linkage to run in linear time, it must\nrun in constant time in each block. Thus, the number of records per block must be\nconstant, i.e., $b = O(n)$. Hence, the time-complexity for blocking\nis $O(p n^{1+\\beta}) + O(n^2 pI) = O(n^2 pI),$ a quadratic time algorithm\ndominated by the clustering.
Letting $b = O(1)$ yields an overall time\ncomplexity of $O(p n^{1+\\beta})$, dominated by the projection step. If we\nassume $\\beta = 0.5$ and let $b = O(\\sqrt{n}),$ then both the projection and\nthe clustering steps are $O(pn^{1.5})$. Record linkage in each block is $O(n),$ so record linkage is $O(n^{1.5}),$ \nrather than $O(n^2)$ without blocking.\n\n\\subsection{Computational Complexity of Traditional Blocking Approaches}\nTraditional blocking approaches use attributes of the records to partition records into blocks. As such, calculating the blocks using traditional approaches requires $O(n)$ computations. For example, approaches that block on birth year only require a partition of the records based on this field. That is, each record is simply mapped to one of the unique birth year values in the dataset, which is an $O(n)$ calculation for a list of size $n$. Some traditional approaches, however, require $O(n^2)$ computations. For example, in Table \\ref{t:naive_results}, we show some effective blocking strategies which require $O(n^2)$ computations, but each operation is so cheap that they can be run in reasonable time for moderately sized files.\n\n\\section{Results}\n\\label{sec:results}\nWe test the previously mentioned approaches on data from the RecordLinkage R package.\\footnote{\\url{http:\/\/www.inside-r.org\/packages\/cran\/RecordLinkage\/docs\/RLdata}}\nThese simulated datasets contain 500 and 10,000 records (denoted \\texttt{RLdata500} and \\texttt{RLdata10000}), with exactly 10\\% duplicates in each list. These datasets contain Germanic first and last names and full date of birth (DOB). Each duplicate contains one error with respect to the original record, and there is a maximum of one duplicate per original record. Each record has a unique identifier, allowing us to test the performance of the blocking methods. \n\nWe explore the performance of the previously presented methods under other scenarios of measurement error. 
\\cite{Christen05, ChristenPudjijono09, ChristenVatsalan13} developed a data generation and corruption tool that creates synthetic datasets containing various field attributes. This tool includes dependencies between fields and permits the generation of different types of errors. We now describe the characteristics of the datafiles used in the simulation. We consider three files having the following field attributes: first and last name, gender, postal code, city, telephone number, credit card number, and age. For each database, we allow either 10, 30, or 50\\% duplicates per file, and each duplicate has five errors with respect to the original record, where these five errors are allocated at random among the fields. Each original record has a maximum of five duplicates. We refer to these files as the ``noisy'' files.\n\n\n\\subsection{Traditional Blocking Approaches}\\label{ss:naive_results}\nTables \\ref{t:naive_results} -- \\ref{t:naive_results2} provide results of traditional blocking when applied to the \\texttt{RLdata10000} and ``noisy\" files. While field-specific information \\emph{can} yield favorable blocking solutions, each blocking criterion is application-specific. The overall goal of blocking is to reduce the set of candidate pairs, while minimizing the false negatives induced. Thus, we find the \\emph{recall} and \\emph{reduction ratio} (RR). These correspond to the proportion of true matches that the blocking preserves and the proportion of record pairs discarded by the blocking, respectively. \n\nCriteria 1 -- 5 (Table \\ref{t:naive_results}) and 1 -- 6 (Table \\ref{t:naive_results2}) show that \\emph{some} blocking approaches are poor: their recall is never above 90\\%. Criteria requiring exact agreement in a single field or on a combination of them are susceptible to field errors. 
More reliable criteria are constructed using combinations of fields such that multiple disagreements must occur before a pair is declared a non-match. (See Criteria 7--10 and 12 in Table \\ref{t:naive_results}, and 7 -- 8 in Table \\ref{t:naive_results2}.) We obtain high recall and RR using these, but in general their performance is context-dependent. \n\nCriterion 10 (Table \\ref{t:naive_results}) declares a pair a non-match whenever it disagrees in four or more fields, which is reliable since false-negative pairs are only induced when the datafile contains large amounts of error. For example, this criterion does not lead to good results with the noisy files; hence a stronger criterion is needed, such as Criterion 7 (Table \\ref{t:naive_results2}). Using Criteria 12 (Table \\ref{t:naive_results}) and 8 (Table \\ref{t:naive_results2}), we further reduce the set of candidate pairs whenever a pair has a strong disagreement in an important field.\\footnote{We use the Levenshtein distance (LD) of first and last names for pairs passing Criterion 10 of Table \\ref{t:naive_results} or Criterion 7 of Table \\ref{t:naive_results2}, and declare pairs as non-matches when LD $\\geq 4$ in either first or last name.} These criteria are robust. In order to induce false negatives, the error in the file must be much higher than expected. 
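To make the two evaluation measures concrete, the following sketch computes recall and RR for a blocking criterion of the kind used throughout this section. It is our own illustration; the toy records, the duplicate labels, and the year-agreement criterion are assumptions, not the paper's data.

```python
from itertools import combinations

def recall_and_rr(records, true_id, same_block):
    # Recall: fraction of truly matching pairs kept by the blocking.
    # RR: fraction of all candidate pairs discarded by the blocking.
    pairs = list(combinations(range(len(records)), 2))
    true_matches = {(i, j) for i, j in pairs if true_id[i] == true_id[j]}
    kept = {(i, j) for i, j in pairs if same_block(records[i], records[j])}
    recall = len(true_matches & kept) / len(true_matches)
    rr = 1 - len(kept) / len(pairs)
    return recall, rr

# Toy records: (first name, last name, birth year); true_id marks duplicates.
recs = [("jon", "smith", 1980), ("john", "smith", 1980),
        ("mary", "jones", 1975), ("mary", "jones", 1976)]
true_id = [0, 0, 1, 1]

# A criterion in the style above: declare non-match if year of birth disagrees.
recall, rr = recall_and_rr(recs, true_id, lambda a, b: a[2] == b[2])
# Here recall = 0.5 (the 1975/1976 duplicate pair is lost) and rr = 5/6.
```

The example shows the trade-off directly: exact agreement on a single error-prone field discards most pairs (high RR) but also loses true matches whose error falls in that field (low recall).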
\n\n\n\n\n\n\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabular}{rlrr}\n\t\t\t\t\\hline\\\\[-8pt]\n\t\t\t\t& Declare non-match if disagreement in: & Recall (\\%) & RR (\\%) \\\\\n\t\t\t\t\\hline\\\\[-8pt]\n\t\t\t\t1.&First OR last name & 39.20 & 99.98\\\\\n\t\t\t\t2.&Day OR month OR year of birth & 59.30 & 99.99\\\\\n\t\t\t\t3.&Year of birth & 84.20 & 98.75\\\\\n\t\t\t\t4.&Day of birth & 86.10 & 96.74\\\\\n\t\t\t\t5.&Month of birth & 88.40 & 91.70\\\\\n\t\t\t\t6.&Decade of birth & 93.20 & 87.76\\\\\n\t\t\t\t7.&First AND last name & 99.20 & 97.36 \\\\\n\t\t\t\t8.&\\{First AND last name\\} OR &&\\\\\n\t\t\t\t&\\{day AND month AND year of birth\\} & 99.20 & 99.67 \\\\\n\t\t\t\t9.&Day AND month AND year of birth & 100.00 & 87.61\\\\\n\t\t\t\t10.&More than three fields & 100.00 & 99.26 \\\\\n\t\t\t\t11.&Initial of first OR last name & 100.00 & 99.25\\\\\n\t\t\t\t12.&\\{More than three fields\\} OR &&\\\\\n\t\t\t\t& \\{Levenshtein dist. $\\geq 4$ in first OR last name\\}& 100.00 & 99.97\\\\\n\t\t\t\t\\hline\n\t\t\t\t\t\t\t\\end{tabular}\n\t\t\t\t\t\t\t\n\\end{center}\n \\caption{Criteria for declaring pairs as non-matches, where results correspond to the \\texttt{RLdata10000} datafile. }\n \\label{t:naive_results}\n\n\\end{table}%\n\n\n\n\n\n\\begin{table}[htbp]\n\\begin{center}\n\\begin{tabular}{rlrr}\n \\hline\\\\[-8pt]\n\t& Declare non-match if disagreement in: & Recall (\\%) & RR (\\%) \\\\\n \\hline\\\\[-8pt]\n\t1.&Gender &\t 31.96\t&\t 53.39\\\\\n\t2.&City &\t 31.53\t&\t 77.25\\\\\n\t3.&Postal Code &\t 32.65\t&\t 94.20\\\\\n\t4.&First OR last name &\t 1.30\t&\t$>$99.99\\\\\n\t5.&Initial of first OR last name&\t 78.10\t&\t 99.52\\\\\n\t6.&First AND last name &\t 26.97\t&\t 99.02\\\\\n\t7.&All fields &\t 93.28\t&\t 40.63 \\\\\n\t8.&\\{All fields\\} OR \\{Levenshtein dist. 
&&\\\\\n\t& $\\geq 4$ in first OR last name\\} &\t 92.84\t&\t 99.92\\\\\n\t\\hline\n\\end{tabular}\n\\end{center}\n \\caption{Criteria for declaring pairs as non-matches, where results correspond to the noisy datafile with 10\\% duplicates. Similar results were obtained for 30 and 50\\% duplicates. }\n \\label{t:naive_results2}\n\\end{table}%\n\n\n\n\n\\vspace*{-2em}\n\n\\subsection{Clustering Approaches}\n\\label{sec:cluster-results}\n\nOur implementations of \\cite{mccallum_2000}'s canopies approach and \\cite{vatsalan_2013}'s nearest neighbor approach perform poorly on the \\texttt{RLdata10000} and ``noisy\" datasets.\\footnote{In our implementations, we use the TF-IDF matrix representation of the records and Euclidean distance to compare pairs of records in TNN and canopies. We tried several other distance measures, each of which gave similar results.} Figure \\ref{TNN_and_canopies} gives results of these approaches for different threshold parameters ($t$ is the threshold parameter for sorted TNN) for the \\texttt{RLdata10000} dataset. For all thresholds, both TNN and canopies fail to achieve a balance of high recall and a high reduction ratio.\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{tnn-results-rldata10000-Euclidean.pdf} \\includegraphics[width=0.48\\textwidth]{canopies-results-rldata10000.pdf}\n\\caption{Performance of threshold nearest neighbors (left) and canopies (right) on the RLdata10000 datafile.}\n\\label{TNN_and_canopies}\n\\end{figure}\n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{tnn-results-Euclidean.pdf} \\includegraphics[width=0.48\\textwidth]{canopies-results.pdf}\n\\caption{Performance of TNN (left) and canopies (right) on the ``noisy\" datafile (10\\% duplicates). The other ``noisy\" datafiles exhibited behavior similar to that shown above. 
}\n\\label{TNN_and_canopies_error}\n\\end{figure}\n\n\nTurning to the ``noisy\" dataset with 10\\% duplicates, we find that TNN fails to achieve a balance of high recall and high reduction ratio, regardless of the threshold $t$ that is used. Similarly, the canopies approach does not yield a balance of high recall while reducing the number of candidate pairs.\n\nClearly, both clustering approaches fail to achieve a balance of high recall and RR for any threshold parameters. The inefficacy of these approaches is likely due to the limited number of field attributes (five fields) and the Euclidean distance metric used for these datasets. In particular, only three fields in the ``noisy\" dataset use textual information, which both of these approaches use to identify similar records. Limited field information can make it difficult for clustering approaches to group similar records together, since the resulting term frequency matrices will be very sparse. Thus, we investigate the behavior with the same number of duplicates, but vary the error rate and provide richer information at the field attribute level. Figure \\ref{TNN_and_canopies_error} illustrates that, for the various thresholds we investigated, neither method achieves a good balance between recall and RR. As such, further analysis of these approaches on more information-rich datasets is required in order to make sound conclusions about their efficacy for blocking. (We note that the metrics used in TLSH and KLSH, which shingle the records, were chosen so as to not have such problems.) \n\n\n\n\\subsection{LSH Approaches}\n\\label{sec:data500}\nSince the performance of KLSH and TLSH depends on tuning parameters, we tune these appropriately for each application. We empirically measure the scalability of these methods, finding it consistent with our derivations in \\S \\ref{sec:complex}.\n\nWe analyze the \\texttt{RLdata10000} database for TLSH and KLSH. 
As we increase $k$ under TLSH, we see that the recall peaks at $k=5,$ and is very poor (below 40\\% recall) when $k\\leq 4$. For KLSH, the highest and most consistent recall is when $k=2,$ since it is always above 80\\% and it is about the same no matter the total number of blocks chosen (see Figure \\ref{distort_10000}). In terms of RR, we see that TLSH performs extremely poorly as the total number of blocks increases, whereas KLSH performs extremely well in terms of RR comparatively (Figure \\ref{reduction}). Figure \\ref{comp_time} shows empirically that the running time for both KLSH and TLSH scales quadratically with $n$, matching our asymptotic derivation. We then analyze the ``noisy\" database for TLSH and KLSH (see Figures \\ref{reduction_new} and \\ref{klsh_recall_new}).\n\n\n\\subsubsection{Comparisons of Methods}\nIn terms of comparing to the methods presented in Table \\ref{t:naive_results}, we find that TLSH is not comparable in terms of recall or RR. However, KLSH easily beats Criteria 1--2 and competes with Criteria 3--4 on both recall and RR. It does not perform as well in terms of recall as the rest of the criteria; however, it \\emph{may} do so in other applications with more complex information for each record (this is a subject of future work). When comparing Table \\ref{t:naive_results2} with TLSH and KLSH run on the noisy datafile, we find that TLSH and KLSH usually do better when tuned properly, though not always. Due to the way these files have been constructed, more investigation needs to be done into how naive methods compare with LSH-based methods in real-world applications. \n\nComparing to other blocking methods, both KLSH and TLSH outperform TNN in terms of recall (and RR for the noisy datafiles). We find that for this dataset, canopies do not perform well in terms of recall or RR unless a specific threshold $t_1$ is chosen. 
However, given this choice of $t_1$, this approach yields either high recall and low RR or vice versa, making canopies undesirable according to our criteria. \n\nFor the \\texttt{RLdata10000} dataset, the simple yet effective traditional blocking methods and KLSH perform best in terms of balancing both high recall and high RR. As already stated, we expect the performance of these to be heavily application-dependent. Additionally, note that each method relies on high-quality labeled record linkage data to measure the recall and RR, and the clustering methods require tuning parameters, which can be quite sensitive. Our studies show that TLSH is the least sensitive in general, and further explorations should be done here. Future work should explore the characteristics of the underlying datasets for which one method would be preferred over another.\n\n\n\n\n\\subsubsection{Sensitivity Analysis on \\texttt{RLdata500} and \\texttt{RLdata10000}}\nA sensitivity analysis is given for KLSH and TLSH. \nFor TLSH, the \\texttt{RLdata500} dataset is not very sensitive to $b$ since the recall is always above 80\\%, whereas the \\texttt{RLdata10000} dataset is quite sensitive to the band, and we recommend the use of a band of 21--22 since the recall for these $b$ is $\\approx$ 96\\%, although this may change for other datasets. We then evaluate TLSH using the ``best'' choice of the band for shingled values from $k=1,\\ldots,5$. The sensitivity analysis for the ``noisy\" datafiles was quite similar to that described above, where a band of 22 was deemed the most appropriate for TLSH. For KLSH, we found that we needed to increase the number of permutations slightly to improve the recall and recommend $p=150.$\n\n\nFor KLSH, we find that when the number of random permutations $p$ is above 100, the recall does not change considerably. 
We refer back to Figure \\ref{distort_10000} (right), which illustrates the recall versus number of blocks when $p = 100.$ When $k=4,$ the recall is always above 70\\%, while when $k=2,$ it is always above 80\\%.\n\n\n\n\n\n\n\\section{Discussion}\n\\label{sec:disc} \n\nWe have explored two LSH methods for blocking, one of which would naturally fit into the privacy preserving record linkage (PPRL) framework, since the method could be made private by creating reference values for each individual in the database. This has been done for many blocking methods in the context of PPRL \\citep{durham_2012, vatsalan_2011,karakasidis_2012, kuzu_2011}. KLSH performs just as well as or better than commonly used blocking methods, such as some simple traditional blocking methods, nearest neighbor clustering approaches, and canopies \\citep{vatsalan_2013, mccallum_2000}. One drawback is that, like other LSH-based methods, it must be tuned for each application, since it is sensitive to its tuning parameters. Thus, some \\emph{reliable} training data must be available to evaluate the recall and RR (and tune KLSH or clustering type methods). In many situations, a researcher may be better off by using domain-specific knowledge to reduce the set of comparisons, as shown in \\S \\ref{ss:naive_results}.\n\nLSH methods have been described elsewhere as ``private blocking\" due to the hashing step. However, they do not in fact provide any formal privacy guarantees in our setting. The new variant that we have introduced, KLSH, does satisfy the $k$-anonymity criterion for the de-duplication of a single file. However, the data remain subject to intruder attacks, as the literature on differential privacy makes clear, and the vulnerability is greater the smaller the value of $k$. Our broader goal, however, is to merge and analyze data from multiple files. Privacy protection in that context is far more complicated. 
Even if one could provide privacy guarantees for each file separately, it would still be possible to identify specific entities or \\emph{sensitive} information regarding entities in the merged database. \n\nThe approach of PPRL reviewed in \\cite{hall12} sets out to deal with this problem. Merging data from multiple files with the same or similar values without releasing their attributes is what PPRL hopes to achieve. Indeed, one of course needs to go further, since performing statistical analyses on the merged database is the real objective of PPRL. Whether the new ``private blocking\" approaches discussed here offer any progress on this problem is unclear at best. Adequately addressing the PPRL goals remains elusive, as do formal privacy guarantees, be they from differential privacy or other methods. \n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{pics\/complexity_quadratic.pdf}\n\\includegraphics[width=0.45\\textwidth]{pics\/RLdata_bands_tlsh}\n\\caption{\\text{RLdata10000\/RLdata500} datasets. Left: square root of elapsed time versus number of records for KLSH and TLSH, illustrating that both methods scale nearly quadratically (matching the computational complexity findings). We shingle using $k=5$ for both methods. We use a band of 26 for TLSH. Right: Recall versus $b$ for both \\texttt{RLdata500} and \\texttt{RLdata10000} after running TLSH.}\n\\label{comp_time}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{pics\/newpics\/recall_tlsh_rldata10000_5_4_14_k28}\n\\includegraphics[width=0.45\\textwidth]{pics\/newpics\/tlsh_rldata10000_p100_k14.pdf}\n\\caption{\\text{RLdata10000} dataset. Left: Recall versus number of shingles $k$ for KLSH. The highest recall occurs at $k=5$. Right: Recall versus the total number of blocks, where we vary the number of shingles $k$. 
We find that the highest recall is for $k=2$.}\n\\label{distort_10000}\n\\end{figure}\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{pics\/RR_klsh.pdf}\n\\includegraphics[width=0.45\\textwidth]{pics\/RR_tlsh_new.pdf}\n\\caption{\\text{RLdata10000} dataset. Left: For TLSH, we see the RR versus the number of shingles, where the RR is always very high. We emphasize that TLSH does about as well on the RR as any of the other methods, and certainly does much better than many traditional blocking methods and TNN. (The RR is always above $98\\%$ for all shingles with $b=26$.)\nRight: For KLSH, we illustrate the RR versus the total number of blocks for various $k=1,\\ldots,4$, illustrating that as the number of blocks increases, the RR increases dramatically. When the total number of blocks is at least 25, the RR $\\geq 95\\%.$}\n\\label{reduction}\n\\end{figure}\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{pics\/tsh_b22_10000_et_newdata}\n\\includegraphics[width=0.45\\textwidth]{pics\/tsh_b22_et_newdata_2.pdf}\n\\caption{ Left: We run TLSH for 10 percent duplicates; as before, the application is quite sensitive to $b,k$. Hence, it is easy to find values of $b,k$ for which the recall is very low or, with proper tuning, values for which the recall is acceptable. We note this relies on very good ground truth. The only value of $k$ we recommend is 4 since it is close to 90\\% recall. The computational time is the same as before. Right: Elapsed time for 10, 30, and 50 percent duplicates on ``noisy\" dataset. }\n\\label{reduction_new}\n\\end{figure}\n\n\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{pics\/klsh_newdata_p20_10percent}\n\\includegraphics[width=0.48\\textwidth]{pics\/klsh_p150_10percent}\n\\caption{ We run KLSH at 10 percent duplicates with $p=100$ (left) and $p=150$ (right). 
We see that as the number of permutations increases (left versus right figure), the recall increases. The behavior is the same for 30 and 50 percent duplicates. This indicates that KLSH needs to be tuned for each application based on $p.$ \n}\n\\label{klsh_recall_new}\n\\end{figure}\n\n\n\n\n\\section*{Acknowledgements}\nWe thank Peter Christen, Patrick Ball, and Cosma Shalizi for thoughtful conversations that led to early versions of this manuscript. We also thank the reviewers for their suggestions and comments.\n\n\\clearpage\n\\newpage\n\n\\bibliographystyle{ims}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nPoint processes on the line, generated by transitions of Continuous Time Markov Chains (CTMCs), have been studied intensely by the applied probability community over the past few decades under the umbrella of Matrix Analytic Methods (MAM), see e.g. \\cite{latouche1999introduction}. These have been applied to teletraffic \\cite{akar1998matrix}, business networks \\cite{herbertsson2007pricing}, social operations research \\cite{xing2013operations}, and biological systems \\cite{olsson2015equilibrium}. The typical model, referred to as the Markovian Arrival Process (MAP), consists of a finite state irreducible CTMC which generates events at selected instances of state change and\/or according to Poisson processes modulated by the CTMC. MAPs have been shown to be dense in the class of point processes so that they can essentially approximate any point process, \\cite{asmussen1993marked}. Yet at the same time, they are analytically tractable and may often be incorporated effectively within more complex stochastic models \\cite{neuts1979versatile}. \n\nIn general, treating point processes as {\\em stationary} often yields a useful mathematical perspective which matches scenarios when there is no known dependence on time. 
In describing a point process we use $N(t)$ to denote the number of events during $[0,t]$ and further use the sequence $\\{T_n\\}$ to denote the sequence of inter-event times. Two notions of stationarity are useful in this respect. Roughly, a point process is {\\em time-stationary} if the distribution of the number of events within a given interval does not depend on the location of the interval; that is if $N(t_1+s)-N(t_1)$ is distributed as $N(t_2+s)-N(t_2)$ for any non-negative $t_1, t_2$ and $s$. A point process is {\\em event-stationary} if the joint distribution of $T_{k_1},\\ldots,T_{k_n}$ is the same as that of $T_{k_1+\\ell},\\ldots,T_{k_n+\\ell}$ for any integer sequence of indices $k_1, \\ldots,k_n$ and any integer shift $\\ell$. For a given model of a point process, one may often consider either the event-stationary or the time-stationary case. The probability laws of both cases agree in the case of the Poisson process. However, this is not true in general. \nFor MAPs, time-stationarity and event-stationarity are easily characterized by the initial distribution of the background CTMC. Starting it at its stationary distribution yields time-stationarity and starting at the stationary distribution of the embedded Markov chain (jump chain) yields event-stationarity.\n\nA common way to parameterize MAPs is by considering the generator, $Q$, of an irreducible finite state CTMC and setting $Q= C + D$. Roughly speaking, the matrix $C$ determines state transitions without event counts and the matrix $D$ determines event counts. Such parameterization hints at considering two special cases: Markov Modulated Poisson Processes (MMPP) arising from a diagonal matrix $D$, and Markovian Switched Poisson Processes (MSPP) arising from a diagonal matrix $C$. \n\nMMPPs are a widely used class of processes in modelling and are a typical example of a Cox process, also known as a doubly stochastic Poisson process, \\cite{grandell2006doubly} and \\cite{tang2009markov}. 
For a detailed outline of a variety of classic MMPP results, see~\\cite{fischer1993markov} and references therein. MSPPs were introduced in \\cite{dan1991counter} and, to date, have not been as popular for modeling. However, the duality of diagonal $D$ vs. diagonal $C$ motivates us to consider and contrast both these processes. We also note that hyper-exponential renewal processes are special cases of MSPPs as well as Markovian Transition Counting Processes (as introduced in \\cite{asanjarani2016queueing}).\n\nOur focus in this paper is on second-order properties of MMPPs and MSPPs and related traits. Consider the squared coefficient of variation and the limiting index of dispersion of counts given by,\n\\begin{equation}\n\\label{eq:3535}\nc^2 = \\frac{\\mathrm{Var}(T_1^{\\boldsymbol{\\alpha}})}{\\mathbb{E}^2\\,[ T_1^{\\boldsymbol{\\alpha}}]},\n\\qquad\n\\mbox{and}\n\\qquad\nd^2 = \\lim_{t \\to \\infty} \\frac{\\mathrm{Var}(N(t))}{\\mathbb{E}[N(t)]},\n\\end{equation} \nwhere $T_1^{\\boldsymbol{\\alpha}}$ is the time of the first event, taken from the event-stationary version. Modelling folklore of MMPPs sometimes assumes that $c^2 \\ge 1$. This is perhaps due to the fact that $d^2 \\ge 1$ is straightforward to verify and to the similarity between these measures (for example for a renewal process, $c^2 = d^2$). However, as we highlight in this paper, establishing such ``burstiness'' properties is not straightforward.\n\nA related property is having $T_1^{\\boldsymbol{\\alpha}}$ exhibit Decreasing Hazard Rate (DHR), where for a random variable with PDF $f(t)$ and CDF $F(t)$ the hazard rate is,\n\\[\nh(t)=\\frac{f(t)}{1-F(t)}.\n\\]\nA further related property is the stochastic order, $T_1^{\\boldsymbol{\\pi}} \\ge_{\\mbox{st}} T_1^{\\boldsymbol{\\alpha}}$, where $T_1^{\\boldsymbol{\\pi}}$ is the first event time in the time-stationary version. We denote the properties as follows:\n\\begin{description}\n\\item (I) $d^2 \\ge 1$.\n\\item (II) $T_1^{\\boldsymbol{\\alpha}}$ exhibits DHR. 
\n\\item (III) $c^2 \\ge 1$.\n\\item (IV) The stochastic order $T_1^{\\boldsymbol{\\pi}} \\ge_{\\mbox{st}} T_1^{\\boldsymbol{\\alpha}}$.\n\\end{description} \n\nAll these properties are related and in this paper we highlight relationships between (I), (II), (III) and (IV) and establish the following: For MSPPs and MMPPs of order $2$ we show that (I)--(IV) holds. For general MMPPs it is known that (I) holds however, a counter-example of Miklos Telek and Illes Horvath shows that (II) does not hold and we conjecture (and numerically test) that (III) and (IV) holds.\n\nOur interest in this class of problems stemmed from relationships between different types of MAPs as in \\cite{nazarathy2008asymptotic} and \\cite{asanjarani2016queueing}. Once it became evident that $c^2 \\ge 1$ for MMPPs is an open problem even though it is acknowledged as a modeling fact in folklore, we searched for alternative proof avenues. This led to the stochastic order in (IV) as well as to considering DHR properties (the latter via communication with Miklos Telek and Illes Horvaths).\n\nThe remainder of the paper is structured as follows. In Section~\\ref{sec2} we present preliminaries, focusing on the relationships between properties (I) -- (IV) as well as defining MMPPs and MSPPs. In Section~\\ref{sec3} we present our main results and the conjecture. We close in Section~\\ref{sec4}.\n\n\n\n\n\n\n\n\n\n\n\\section{Preliminaries}\n\\label{sec2}\n\nConsider first properties (I)--(IV) and their relationships. With an aim of establishing property (III), $c^2 \\ge 1$, there are several possible avenues based on properties (I), (II) and (IV). 
We now explain these relationships.\n\n\\vspace{5pt}\n\\noindent\n\\paragraph*{Using property (I):} First, from the theory of simple point processes on the line, note the relationship between $d^2$ and $c^2$: \n\\begin{equation}\\label{eq:dcR}\nd^2 = c^2\\Big(1+2\\sum_{j=1}^\\infty \\frac{\\mathrm{Cov}(T_0^{\\boldsymbol{\\alpha}},T_j^{\\boldsymbol{\\alpha}})}{\\mathrm{Var}(T_0^{\\boldsymbol{\\alpha}})}\\Big).\n\\end{equation}\nHowever, the autocorrelation structure is typically intractable and hence does not yield results. If we were focusing on a renewal process, where $T_i$ and $T_j$ are independent for $i \\neq j$, then this immediately shows that $d^2 = c^2$. Our focus is broader and hence property (I), indicating that $d^2 \\ge 1$, does not appear to be of use.\n\n\\vspace{5pt}\n\\noindent\n\\paragraph*{Using property (II):} An alternative way is to consider property (II) and use the fact that for any DHR random variable we have $c^2 \\ge 1$ (see \\cite{stoyan1983comparison}). Hence, if property (II) holds then (III) holds.\n\n\n\\vspace{5pt}\n\\noindent\n\\paragraph*{Using property (IV):} We have the following lemma, implying that (III) is a consequence of the stochastic order (IV).\n\n\n\n\\begin{lemma}\n\\label{lem:soscv}\nConsider a simple non-transient point process on the line, and let $T_1^{\\boldsymbol{\\pi}}$, $T_1^{\\boldsymbol{\\alpha}}$ represent the first inter-event time in the time-stationary case and event-stationary case respectively. Then $c^2 \\ge 1$ if and only if $\\mathbb{E}[T_1^{\\boldsymbol{\\pi}}] \\geq \\mathbb{E}[T_1^{\\boldsymbol{\\alpha}}]$.\n\\end{lemma}\n\\begin{proof}\nFrom point process theory (see, for example, Eq. 
(3.4.17) of \\cite{daley2007introduction}), it holds $$\n\\mathbb{E}[T_1^{\\boldsymbol{\\pi}}]=\\frac{1}{2}\\lambda^* \\mathbb{E}\\big[\\big(T_1^{\\boldsymbol{\\alpha}}\\big)^2\\big],\n$$\nwhere,\n\\[\n\\lambda^* = \\lim_{t \\to \\infty} \\frac{E\\big[N[0,t]\\big]}{t} = \\frac{1}{\\mathbb{E}[T_1^{\\boldsymbol{\\alpha}}]}.\n\\]\nNow,\n\\[\nc^2 =\\frac{\\mathbb{E}[\\big(T_1^{\\boldsymbol{\\alpha}}\\big)^2]-\\big(\\mathbb{E}[T_1^{\\boldsymbol{\\alpha}}]\\big)^2}{\\big(\\mathbb{E}[T_1^{\\boldsymbol{\\alpha}}]\\big)^2} = 2 \\frac{\\mathbb{E}[T_1^{\\boldsymbol{\\pi}}]}{\\mathbb{E}[T_1^{\\boldsymbol{\\alpha}}]} - 1,\n\\]\nand we obtain the result.\n\\end{proof}\n\n\\paragraph*{MAPs:} We now describe Markovian Arrival Process (MAPs). \nA MAP of order $p$ (MAP$_p$) is generated by a two-dimensional Markov process $\\{(N(t), X(t)); t \\geq 0\\}$ on state space $\\{0,1, 2, \\cdots\\}\\times \\{1,2, \\cdots, p\\}$. The counting process $N(\\cdot)$ counts the number of ``events'' in $[0,t]$ with $\\mathbb P(N(0)=0)=1$. The phase process $X(\\cdot)$ is an irreducible CTMC with state space $\\{1, \\ldots, p\\}$, initial distribution $\\boldsymbol{\\eta}$ and generator matrix $Q$. A MAP is characterized by parameters $(\\boldsymbol{\\eta}, C,D)$, where the matrix $C$ has negative diagonal elements and non-negative off-diagonal elements and records the rates of phase transitions which are not associated with an event. The matrix $D$ has non-negative elements and describes the changes of the phase process with an event (increase of $N(t)$ by 1). Moreover, we have $Q=C+D$. More details are in \\cite{asmussen2003applied} (Chapter~XI) and \\cite{he2014fundamentals} (Chapter~2).\n \nMAPs are attractive due to the tractability of many of their properties, including distribution functions, generating functions, and moments of both $N(\\cdot)$ and the sequence of inter-event times $\\{T_n\\}$. 
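The $(\boldsymbol{\eta}, C, D)$ parameterization just described, together with the diagonal structure distinguishing MMPPs (diagonal $D$) from MSPPs (diagonal $C$), can be sketched numerically. This is our own illustration, assuming NumPy; all rates are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np

# An order-2 MMPP: D is diagonal (events are Poisson arrivals modulated by the phase).
# All rates below are arbitrary illustrative values.
C_mmpp = np.array([[-3.0,  1.0],
                   [ 2.0, -6.0]])
D_mmpp = np.diag([2.0, 4.0])

# An order-2 MSPP: C is diagonal (every phase transition is accompanied by an event).
C_mspp = np.diag([-3.0, -6.0])
D_mspp = np.array([[2.0, 1.0],
                   [2.0, 4.0]])

for C, D in [(C_mmpp, D_mmpp), (C_mspp, D_mspp)]:
    Q = C + D
    # Q must be a conservative generator: zero row sums, i.e. -C 1 = D 1.
    assert np.allclose(Q.sum(axis=1), 0.0)
```

In both cases $Q = C + D$ is the generator of the same irreducible phase process; only the bookkeeping of which transitions count as events differs.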
\nSince $Q$ is assumed irreducible and finite, it has a unique stationary distribution $\\boldsymbol{\\pi}$ satisfying $\\boldsymbol{\\pi} Q = \\mathbf{0}'$, $\\boldsymbol{\\pi} \\mathbf{1} = 1$. Note that from $Q {\\mathbf 1} = \\mathbf {0}$ we have $-C {\\mathbf 1}=D {\\mathbf 1}$.\n Of further interest is the embedded discrete-time Markov chain with irreducible stochastic matrix $P = (-C)^{-1}D$ and stationary distribution $\\boldsymbol{\\alpha}$, where $\\boldsymbol{\\alpha} P=\\boldsymbol{\\alpha}$ and $\\boldsymbol{\\alpha} \\mathbf{1}=1$. \n\nObserve the relations between the stationary distributions $\\boldsymbol{\\pi}$ and $\\boldsymbol{\\alpha}$:\n\\begin{equation}\\label{Eq:pi-alpha}\n\\boldsymbol{\\alpha}=\\frac{\\boldsymbol{\\pi} D}{\\boldsymbol{\\pi} D \\mathbf{1}} \\qquad \\text{and} \\qquad \\boldsymbol{\\pi}=\\frac{\\boldsymbol{\\alpha} (-C)^{-1}}{\\boldsymbol{\\alpha} (-C)^{-1}\\mathbf{1}}=\\lambda^*\\boldsymbol{\\alpha} (-C)^{-1},\n\\end{equation}\nwhere $\\lambda^* = \\boldsymbol{\\pi} D {\\mathbf 1} = - \\boldsymbol{\\pi} C {\\mathbf 1}$.\n\n\nThe following known proposition, as distilled from the literature (see, for example, \\cite{asmussen2003applied}, Chapter~XI), provides the key results of MAPs that we use in this paper. It shows that $T_1$ is a Phase Type (PH) random variable with parameters $\\boldsymbol{\\eta}$ for the initial distribution of the phase and $C$ for the sub-generator matrix. 
It further shows that the initial distribution of the phase process may render the MAP time-stationary or event-stationary.\n\n\\begin{proposition}\nConsider a MAP with parameters $(\\boldsymbol{\\eta}, C, D)$. Then\n\\begin{equation}\n\\label{eq:T1dist}\n\\mathbb P(T_1 > t) = \\boldsymbol{\\eta} e^{C t} {\\mathbf 1}.\n\\end{equation}\nFurther, if $\\boldsymbol{\\eta} = \\boldsymbol{\\pi}$ then the MAP is time-stationary and if $\\boldsymbol{\\eta}=\\boldsymbol{\\alpha}$ it is event-stationary, where $\\boldsymbol{\\pi}$ and $\\boldsymbol{\\alpha}$ are the associated stationary distributions.\n\\end{proposition}\n\nNote that for such a $PH(\\boldsymbol{\\eta}, C)$ random variable the density $f(t)$ and the hazard rate $h(t)$ are, respectively,\n\\[\nf(t) = \\boldsymbol{\\eta} e^{C t} D {\\mathbf 1},\n\\qquad\nh(t) = \\frac{\\boldsymbol{\\eta}e^{Ct} D \\mathbf{1}}{\\boldsymbol{\\eta}e^{Ct} \\mathbf{1}}.\n\\]\nFurther, as is used below for establishing DHR, the derivative of the hazard rate is\n\\begin{equation}\n\\label{eq:derHazard}\nh^{\\prime}(t)= \\frac{\\boldsymbol{\\eta} C e^{Ct} (-C) \\mathbf{1} \\,\\,\n\\boldsymbol{\\eta}e^{Ct} \\mathbf{1} - \\boldsymbol{\\eta} C e^{Ct} \\mathbf{1}\\,\\, \\boldsymbol{\\eta}e^{Ct} (-C) \\mathbf{1}\n}{(\\boldsymbol{\\eta}e^{Ct} \\mathbf{1})^2}.\n\\end{equation}\n\nWe now describe second-order properties associated with each case. \n\\paragraph*{Event-Stationary Case:} \n The MAP is event-stationary\\footnote{Sometimes an event-stationary MAP is referred to as an interval-stationary MAP, see for instance \\cite{fischer1993markov}.} if $\\boldsymbol{\\eta}=\\boldsymbol{\\alpha}$. In this case, the (generic) inter-event time is phase-type distributed, $PH(\\boldsymbol{\\alpha}, C)$, and thus has $k$-th moment:\n$$\nM_k=\\mathbb{E}[T_n^k]=k! \\boldsymbol{\\alpha} (-C)^{-k}\\mathbf{1}\n=(-1)^{k+1} \\, k! 
\\, \\frac{1}{\\lambda^*} \\boldsymbol{\\pi} \\big(C^{-1}\\big)^{k-1} {\\mathbf 1},\n$$\nwith the first and second moments (here represented in terms of $\\boldsymbol{\\pi}$ and $C$):\n\\[\nM_1=\\frac{1}{\\lambda^*} \\boldsymbol{\\pi} {\\mathbf 1} = \\frac{1}{\\lambda^*},\n\\qquad\nM_2=2 \\frac{1}{\\lambda^*} \\boldsymbol{\\pi} (-C)^{-1} {\\mathbf 1}.\n\\]\nThe squared coefficient of variation (SCV) of the inter-event times (intervals) has a simple formula: \n\\begin{equation}\\label{Eq:SCV-Interval}\nc^2+1 = \\frac{M_2}{M_1^2} = \n\\frac{-2 \\, (1\/\\lambda^*) \\, \\boldsymbol{\\pi} \\, C^{-1} {\\mathbf 1}}{(1\/\\lambda^*)^2}\n= 2 \\boldsymbol{\\pi} C {\\mathbf 1} \\boldsymbol{\\pi} C^{-1} {\\mathbf 1}.\n\\end{equation}\n\n\n\\paragraph*{Time-Stationary Case:} \n A MAP with parameters $(\\boldsymbol{\\eta}, C,D)$ is time-stationary \\index{time-stationary MAP} if $\\boldsymbol{\\eta}=\\boldsymbol{\\pi}$. In this case, we have (see \\cite{asmussen2003applied}):\n\\begin{align}\n\\label{Eq:Mean}\n\\mathbb{E}[N(t)]&= {\\boldsymbol{\\pi}} D \\mathbf{1}\\,t,\\\\\n\\label{Eq:Var}\n\\mathrm{Var}\\big(N(t)\\big)&=\\{{\\boldsymbol{\\pi}}D \\mathbf{1}+2\\, {\\boldsymbol{\\pi}}D D_Q^{\\sharp} D \\mathbf{1}\\}\\,t- 2 {\\boldsymbol{\\pi}}D D_Q^{\\sharp} D_Q^{\\sharp}(t) D \\mathbf{1},\n\\end{align}\n where $D_Q^{\\sharp}$ is the \\textit{deviation matrix}\\index{deviation matrix} associated with $Q$, defined via\n\\begin{equation} \n\\label{Eq:deviation}\nD_Q^{\\sharp}=\\lim_{t\\rightarrow\\infty} D^{\\sharp}_Q(t), \\qquad \\text{where} \\quad D^{\\sharp}_Q(t)=\\int_0^{t}\\big(e^{Qu}-\\mathbf{1}{\\boldsymbol{\\pi}}\\big)\\, du.\n\\end{equation}\n Note that in some sources, for instance \\cite{asmussen2003applied} and \\cite{narayana1992first}, the variance formula \\eqref{Eq:Var} is presented in terms of the matrix $Q^{-}:=(\\mathbf{1}{\\boldsymbol{\\pi}} - Q)^{-1}$. 
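The formulas above lend themselves to a quick numerical sanity check. The sketch below (hypothetical MAP_2 parameters, for illustration only) verifies M_1 = 1/lambda*, the SCV identity, and the basic properties of the deviation matrix, which it computes through the known resolvent identity D_Q^# = (1 pi - Q)^{-1} - 1 pi:

```python
import numpy as np
from math import factorial

# Hypothetical MAP_2 parameters (illustration only).
C = np.array([[-4.0, 1.0],
              [2.0, -2.5]])
D = np.array([[3.0, 0.0],
              [0.0, 0.5]])
Q = C + D
ones = np.ones(2)

# Time-stationary pi, event-stationary alpha, and the event rate lambda*.
pi = np.linalg.lstsq(np.vstack([Q.T, np.ones((1, 2))]),
                     np.array([0.0, 0.0, 1.0]), rcond=None)[0]
lam_star = pi @ D @ ones
alpha = pi @ D / lam_star

# Moments of the PH(alpha, C) inter-event time: M_k = k! alpha (-C)^{-k} 1.
negCinv = np.linalg.inv(-C)
moment = lambda k: factorial(k) * alpha @ np.linalg.matrix_power(negCinv, k) @ ones
M1, M2 = moment(1), moment(2)
assert np.isclose(M1, 1.0 / lam_star)                 # M_1 = 1/lambda*

# SCV identity: c^2 + 1 = 2 (pi C 1)(pi C^{-1} 1).
c2 = M2 / M1**2 - 1.0
assert np.isclose(c2 + 1.0,
                  2.0 * (pi @ C @ ones) * (pi @ np.linalg.inv(C) @ ones))

# Deviation matrix via the known identity D_Q^# = (1 pi - Q)^{-1} - 1 pi;
# it satisfies pi D_Q^# = 0 and D_Q^# 1 = 0.
Pi = np.outer(ones, pi)
Dsharp = np.linalg.inv(Pi - Q) - Pi
assert np.allclose(pi @ Dsharp, 0.0) and np.allclose(Dsharp @ ones, 0.0)

# Asymptotic index of dispersion d^2 = 1 + (2/lambda*) pi D D_Q^# D 1.
d2 = 1.0 + (2.0 / lam_star) * (pi @ D @ Dsharp @ D @ ones)
```

For these (diagonal-D, hence MMPP) parameters both c2 and d2 exceed 1, in line with the properties discussed below.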
The relation between these two matrices is $Q^{-}=D_Q^{\\sharp} +\\mathbf{1}{\\boldsymbol{\\pi}}$, see \\cite{coolen2002deviation}.\n \nApplying \\eqref{Eq:Mean} and \\eqref{Eq:Var}, we can write $d^2$ in terms of the MAP parameters as:\n\\begin{equation}\\label{eq:d}\nd^2= 1+\n\\frac{2}{\\lambda^*}\\, {\\boldsymbol{\\pi}}D D_Q^{\\sharp} D \\mathbf{1}.\n\\end{equation}\n\n\\paragraph*{MMPP:} A MAP with a diagonal matrix $D$ is an MMPP.\nMMPPs correspond to doubly-stochastic Poisson processes (also known as Cox processes) where the modulating process is driven by a CTMC. \nMMPPs have been used extensively in stochastic modelling and analysis, see for example \\cite{fischer1993markov}.\nThe parameters of an MMPP$_p$ are $D= \\text{diag}(\\lambda_i)$, where $\\lambda_i \\geq 0$ for $i=1, \\ldots, p$, and $C=Q-D$. Here, $Q$ is the generator matrix of a CTMC. \nFor MMPPs, \\eqref{Eq:Mean} and \\eqref{Eq:Var} can be simplified by using the following relations:\n$$\n\\boldsymbol{\\pi} D \\mathbf{1}=\\sum_{i=1}^p {\\pi}_i \\lambda_i,\n\\qquad D \\mathbf{1}=\\boldsymbol{\\lambda}=(\\lambda_1, \\cdots, \\lambda_p)^\\prime,\n\\qquad \\boldsymbol{\\pi} D=({\\pi}_1 \\lambda_1, \\cdots, {\\pi}_p\\lambda_p)\\,.\n$$\n\n\\paragraph*{MSPP:} A MAP with a diagonal matrix $C$ is an MSPP. For an MSPP$_p$, events switch between $p$ Poisson processes with rates $\\lambda_1, \\cdots, \\lambda_p$, where each switch also incurs an event. Here, as for MMPPs, we denote the diagonal elements of $D$ by $\\lambda_1, \\cdots, \\lambda_p$. However, unlike for MMPPs, the matrix $D$ of an (irreducible) MSPP is not diagonal. We also remark that the modulation in the MSPP is of a discrete nature and occurs at certain event epochs of the counting process, whereas the modulation of the MMPP is performed at epochs without events. 
See \\cite{artalejo2010markovian} and \\cite{he2014fundamentals}.\n\nAs our research attempts have shown, analyzing MSPPs is considerably easier than analyzing MMPPs, because a diagonal $C$ is much easier to handle than a non-diagonal one, and in an (irreducible) MMPP the matrix $C$ must be non-diagonal.\n\n\n\\paragraph*{Properties (I)-(IV) for MAPs:} Using the results above, for any irreducible MAP with matrices $C$ and $D$, the main properties (I)-(IV) of this paper can be formulated as follows:\n\\begin{align}\n \\label{Eq:d2}(I)&\\qquad {\\boldsymbol{\\pi}}D D_Q^{\\sharp} D \\mathbf{1} \\ge 0, \\\\\n \\label{Eq:HDR}\n (II)&\\qquad \\boldsymbol{\\alpha} C e^{Ct} (-C) \\mathbf{1} \\,\\,\n\\boldsymbol{\\alpha}e^{Ct} \\mathbf{1} + (\\boldsymbol{\\alpha} C e^{Ct} \\mathbf{1})^2 \\leq 0 \\qquad \\forall t \\ge 0, \\\\\n \\label{Eq:c2} (III)&\\qquad \\boldsymbol{\\pi} C {\\mathbf 1} \\boldsymbol{\\pi} C^{-1} {\\mathbf 1} \\ge 1, \\\\\n \\label{Eq:SO}(IV)&\\qquad \\boldsymbol{\\pi} e^{C t} \\mathbf{1} \\ge \\boldsymbol{\\alpha} e^{C t} \\mathbf{1},\n\\qquad \\forall t \\ge 0. \n\\end{align} \n\n\n\n\\section{Main Results}\n\\label{sec3}\n\nWe now present results on properties (I)-(IV), as presented in the introduction, for MSPPs and MMPP$_2$. Establishing property (I), $d^2 \\ge 1$, is not difficult for either MMPPs or MSPPs:\n\n\\begin{proposition}\n\\label{eq:pp22}\nMMPP and MSPP processes have $d^2 \\ge 1$.\n\\end{proposition}\n\n\\begin{proof}\nIt is well known that all doubly stochastic Poisson processes (Cox processes) satisfy $d^2 \\ge 1$. 
This covers the MMPP case; see for instance Chapter~6 of \\cite{kingman1993poisson}.\n\nFor an MSPP, the fact that every MMPP has $d^2\\geq 1$ yields:\n\\begin{equation}\\label{eq:Ddiag}\n{\\boldsymbol{\\pi}}D D_Q^{\\sharp} D \\mathbf{1} \\geq 0,\\quad \\text{for any diagonal non-negative matrix $D$.}\n\\end{equation}\nOn the other hand, all MAPs satisfy $\n{\\boldsymbol{\\pi}}D D_Q^{\\sharp} D \\mathbf{1}={\\boldsymbol{\\pi}}(-C) D_Q^{\\sharp} (-C) \\mathbf{1}$, since $\\boldsymbol{\\pi} D = \\boldsymbol{\\pi}(-C)$ (by $\\boldsymbol{\\pi} Q=\\mathbf{0}'$) and $D\\mathbf{1}=-C\\mathbf{1}$. Since for an MSPP, $-C$ is a diagonal non-negative matrix, from \\eqref{eq:Ddiag} we have \\eqref{Eq:d2}.\n\\end{proof}\n\n\nIt is not difficult to show that property (II), DHR, holds for MSPPs:\n\n\\begin{proposition}\n\\label{prop:msppDHR}\nFor an MSPP the hazard rate of the stationary inter-event time is non-increasing.\n\\end{proposition}\n\\begin{proof}\nWrite the diagonal matrix $C$ as $C=\\text{diag}(-c_i)$ and denote the positive elements of the column vector $e^{Ct} \\mathbf{1}$ by $u_i$.\nThen the left-hand side of Eq.~\\eqref{Eq:HDR} can be written element-wise as:\n\\begin{equation}\\label{eq:dif2}\n-(\\sum _{i=1}^p\\alpha_i c_i^2 u_i)\\,(\\sum_{i=1}^p \\alpha_i u_i)+(\\sum_{i=1}^p \\alpha_i c_i u_i)^2.\n\\end{equation}\nSetting $v_i=\\alpha_i u_i$ and $p_i=\\frac{v_i}{\\sum_{j=1}^p v_j}$, and dividing \\eqref{eq:dif2} by the positive factor $\\big(\\sum_{j=1}^p v_j\\big)^2$, results in:\n$$\n-(\\sum _{i=1}^p c_i^2 p_i)+(\\sum _{i=1}^p c_i p_i)^2.\n$$\nThe above expression is minus the variance of a random variable that takes value $c_i$ with probability $p_i$, and is therefore non-positive. Hence we have \\eqref{Eq:HDR}.\n\n\\end{proof}\n\nHowever, somewhat surprisingly, MMPPs do not necessarily possess DHR. An exception is MMPP$_2$, as shown in Proposition~\\ref{prop:mmpp2}, but for higher-order MMPPs DHR does not always hold. The gist of the following example was communicated to us by Miklos Telek and Illes Horvath. 
Set \n\\begin{equation}\n\\label{Example}\nQ =\\left(\n\\begin{array}{cccc}\n-1 & 1 & 0 & 0 \\\\ \n0 & -1 & 1 & 0\\\\\n0 & 0 & -1 & 1\\\\\n1 & 0 & 0 & -1 \n \\end{array}\n\\right)\n\\qquad\n\\mbox{and}\n\\qquad\nD =\\left(\n\\begin{array}{cccc}\n0.01 & 0 & 0 & 0 \\\\ \n0 & 0.01 & 0 & 0\\\\\n0 & 0 & 1 & 0\\\\\n0 & 0 & 0 & 1\n \\end{array}\n\\right).\n\\end{equation}\nAs shown in Figure~\\ref{Fig:example}, the hazard rate function for an MMPP with the above matrices is not monotone. Hence, at least for general MMPPs, trying to establish (III), $c^2\\ge 1$, via hazard rates is not a viable avenue.\n\\begin{figure}\n\\center\n\\includegraphics[scale=0.4]{TelekExample.eps}\n\\caption{{{\\small The hazard rate of the MMPP in \\eqref{Example} is not monotone.}}}\n\\label{Fig:example}\n\\end{figure}\n\n\n\n\n\nSince hazard rates do not appear to be a viable path for establishing (III) for MMPPs, an alternative may be to consider the stochastic order (IV). Starting with MSPPs, we see that this property holds.\n\n\\begin{proposition}\n\\label{eq:msppSO}\nFor an MSPP $T_1^{\\boldsymbol{\\pi}} \\ge_{\\mbox{st}} T_1^{\\boldsymbol{\\alpha}}$.\n\\end{proposition}\n\n\\begin{proof}\n\nUsing \\eqref{Eq:SO}, the claim is,\n\\begin{equation}\n\\label{eq:351}\n(\\boldsymbol{\\pi} - \\boldsymbol{\\alpha})e^{C t} \\mathbf{1} \\ge 0,\n\\qquad \\forall t \\ge 0.\n\\end{equation}\nWithout loss of generality we assume that the diagonal rates are ordered, $0 < c_1 < c_2 < \\cdots < c_p$; the case of rates with multiplicity greater than $1$ is straightforward. \n\nNote that $\\boldsymbol{\\pi} D = -\\boldsymbol{\\pi} C$ implies $\\alpha_i = \\pi_i c_i\/\\lambda^*$. Now, $\\{\\lambda^*-c_i\\}_{i=1, \\cdots, p}$ is a non-increasing sequence and therefore in the sequence $\\{\\pi_i-\\alpha_i\\}=\\{\\frac{\\pi_i}{\\lambda^*}(\\lambda^*-c_i)\\}$ when an element $\\pi_k-\\alpha_k$ is negative, all the elements $\\pi_i-\\alpha_i$ for $i\\geq k$ are negative. 
\nMoreover, both $\\boldsymbol{\\pi}$ and $\\boldsymbol{\\alpha}$ are probability vectors, so $(\\boldsymbol{\\pi}-\\boldsymbol{\\alpha})\\mathbf{1}=\\sum_i(\\pi_i-\\alpha_i)=0$. Therefore,\n at least the first element in the sequence $\\{\\pi_i-\\alpha_i\\}=\\{\\frac{\\pi_i}{\\lambda^*}(\\lambda^*-c_i)\\}$ is positive. \nHence, there exists an index $1 < k \\leq p$ such that $\\pi_i-\\alpha_i$ for $i=1, \\cdots, k-1$ is non-negative and for $i=k,\\cdots, p$ is negative. Therefore, we have:\n\n\\begin{align*}\n ({\\boldsymbol\\pi} - {\\boldsymbol\\alpha}) e^{Ct} \\mathbf{1}\n &=\\underbrace{\\sum_{i=1}^{k-1}({\\pi_i}-{ \\alpha_i})e^{-c_i t}}_{\\text{non-negative}}+\\underbrace{\\sum_{i=k}^p({\\pi_i}-{ \\alpha_i})e^{-c_i t}}_{\\text{negative}}\\\\\n& =\\underbrace{\\sum_{i=1}^{k-1}({\\pi_i}-{ \\alpha_i})e^{-c_i t}}_{\\text{non-negative}}-\\underbrace{\\sum_{i=k}^p({\\alpha_i}-{\\pi_i})e^{-c_i t}}_{\\text{non-negative}}.\n \\end{align*}\n Assume for contradiction that $({\\boldsymbol\\pi} - {\\boldsymbol\\alpha}) e^{Ct} \\mathbf{1} <0$ for some $t \\ge 0$, that is,\n \\begin{equation}\n \\label{Eq:neg}\n \\sum_{i=1}^{k-1}({\\pi_i}-{\\alpha_i})e^{-c_i t}\n< \n\\sum_{i=k}^p({\\alpha_i}-{\\pi_i})e^{-c_i t}.\n \\end{equation}\n Then, since $0 < c_1 < \\cdots < c_p$, we have $e^{-c_i t} \\geq e^{-c_{k-1} t}$ for $i \\leq k-1$ and $e^{-c_i t} \\leq e^{-c_{k-1} t}$ for $i \\geq k$. Combining these bounds with \\eqref{Eq:neg} yields\n$$\ne^{-c_{k-1} t} \\sum_{i=1}^{k-1}({\\pi_i}-{\\alpha_i}) \\leq \\sum_{i=1}^{k-1}({\\pi_i}-{\\alpha_i})e^{-c_i t} < \\sum_{i=k}^p({\\alpha_i}-{\\pi_i})e^{-c_i t} \\leq e^{-c_{k-1} t}\\sum_{i=k}^p({\\alpha_i}-{\\pi_i}),\n$$\nand hence $\\sum_{i=1}^{k-1}({\\pi_i}-{\\alpha_i}) < \\sum_{i=k}^p({\\alpha_i}-{\\pi_i})$, contradicting $(\\boldsymbol{\\pi}-\\boldsymbol{\\alpha})\\mathbf{1}=0$. Therefore \\eqref{eq:351} holds.\n\\end{proof}\n\nWe now collect the properties of MMPP$_2$.\n\n\\begin{proposition}\n\\label{prop:mmpp2}\nFor an MMPP$_2$ with $\\lambda_1 \\neq \\lambda_2$, we have $c^2>1$ and $d^2>1$, $h(t)$ is DHR and the stochastic order $T_1^{\\boldsymbol{\\pi}} \\ge_{\\mbox{st}} T_1^{\\boldsymbol{\\alpha}}$ holds.\n\\end{proposition}\n\\begin{proof}\nConsider an MMPP$_2$ with parameters \n\\[\nD=\\left(\\begin{array}{cc}\n\\lambda_1&0\\\\\n0& \\lambda_2\n\\end{array}\\right)\n\\qquad\n\\mbox{and}\n\\qquad \nC=\\left(\\begin{array}{cc}\n-\\sigma_1-\\lambda_1 & \\sigma_1\\\\\n\\sigma_2 & -\\sigma_2-\\lambda_2\n\\end{array}\\right).\n\\]\nThen, $\\boldsymbol{\\pi}=\\frac{1}{\\sigma_1+\\sigma_2}(\\sigma_2,\\, \\sigma_1)$. As in \\cite{heffes1986markov}, evaluation of the transient deviation matrix through (e.g.) 
Laplace transform inversion yields:\n$$\n\\frac{\\mathrm{Var}(N(t))}{\\mathbb{E}[N(t)]}=1+\\frac{2\\sigma_1\\sigma_2(\\lambda_1-\\lambda_2)^2}{(\\sigma_1+\\sigma_2)^2(\\lambda_1\\sigma_2+\\lambda_2\\sigma_1)}-\\frac{2\\sigma_1\\sigma_2(\\lambda_1-\\lambda_2)^2}{(\\sigma_1+\\sigma_2)^3(\\lambda_1\\sigma_2+\\lambda_2\\sigma_1)t}(1-e^{-(\\sigma_1+\\sigma_2)t}).\n$$\nTherefore from~\\eqref{eq:3535}, we have\n$$\nd^2=1+ \\frac{2\\sigma_1\\sigma_2(\\lambda_1-\\lambda_2)^2}{(\\sigma_1+\\sigma_2)^2(\\lambda_1\\sigma_2+\\lambda_2\\sigma_1)}.\n$$ \n\nFurther, explicit computation yields,\n$$\nc^2 = 1 + \\frac{2\\sigma_1\\sigma_2(\\lambda_1-\\lambda_2)^2}{(\\sigma_1+\\sigma_2)^2(\\lambda_2\\sigma_1+\\lambda_1(\\lambda_2+\\sigma_2))}.\n$$\nThus it is evident that the MMPP$_2$ has $d^2 >1$ and $c^2 > 1$\nas long as $\\lambda_1 \\neq \\lambda_2$, and $d^2=c^2=1$ when $\\lambda_1 = \\lambda_2$. \n\nFor DHR and the stochastic order, first we note that for an MMPP$_2$ with the above parameters, $\\boldsymbol{\\alpha}=\\frac{1}{\\sigma_2 \\lambda_1+\\sigma_1 \\lambda_2}(\\sigma_2 \\lambda_1,\\, \\sigma_1 \\lambda_2)$. 
\nBy setting $B= \\sigma_1 +\\sigma_2 + \\lambda_1+\\lambda_2$ and $A=\\sigma_2 \\lambda_1 +\\lambda_2(\\sigma_1+\\lambda_1)$, after some simplification, the left-hand side of Eq.~\\eqref{Eq:HDR} is given by:\n$$\n- \\frac{A e^{-Bt}\\sigma_1 \\sigma_2(\\lambda_1-\\lambda_2)^2}{(\\sigma_2 \\lambda_1+\\sigma_1\\lambda_2)^2},\n$$\nwhich is strictly negative for $\\lambda_1 \\neq \\lambda_2$ and is zero for $\\lambda_1 = \\lambda_2$.\nFor the stochastic order, from Eq.~\\eqref{Eq:SO}, we have:\n$$\n(\\boldsymbol{\\pi} - \\boldsymbol{\\alpha})e^{C t} \\mathbf{1}= \\frac{e^{-\\frac{t}{2} \\big(B+\\sqrt{B^2-4A} \\big)\n}\\big(-1+e^{t\\sqrt{B^2-4A}}\\big)\\sigma_1 \\sigma_2 (\\lambda_1-\\lambda_2)^2 }{(\\sigma_1+\\sigma_2) (\\sigma_1 \\lambda_2+\\sigma_2 \\lambda_1) \\sqrt{B^2-4A}},\n$$\nwhich is strictly positive for $\\lambda_1 \\neq \\lambda_2$ and $t>0$, and is zero for $\\lambda_1 = \\lambda_2$.\n\\end{proof}\n\n\n\\section{Conjectures for MMPP}\n\\label{sec4}\n\n\nWe embarked on this research due to the folklore assumption that for an MMPP, $c^2 \\ge 1$ (III). Initially we believed that it would be easy to verify; however, to date there is no known proof for an arbitrary irreducible MMPP. Still, we conjecture that both (III) and (IV) hold for MMPPs:\n\n\\begin{conjecture}\n\\label{conj:1}\nFor an irreducible MMPP, $c^2~\\ge~1$.\n\\end{conjecture}\n\n\\begin{conjecture}\n\\label{conj:2}\nFor an irreducible MMPP, $T_1^{\\boldsymbol{\\pi}} \\ge_{\\mbox{st}} T_1^{\\boldsymbol{\\alpha}}$.\n\\end{conjecture}\n\nIn an attempt to disprove these conjectures or alternatively gain confidence in their validity, we carried out an extensive numerical experiment. Our experiment works by generating random instances of MMPPs. Each instance is generated by first generating a matrix $Q$ with uniform$(0,1)$ off-diagonal entries and diagonal entries that ensure row sums are $0$. We then generate a matrix $D$ with diagonal elements that are exponentially distributed with rate $1$. 
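A single pass of this experiment can be sketched as follows; the original runs were carried out in Julia, and this is a reduced-scale Python analogue. The quantity for (III) is computed directly, while (pi - alpha) e^{Ct} 1 for (IV) is evaluated through an eigendecomposition of C:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mmpp(p):
    # Q: uniform(0,1) off-diagonal entries, diagonal set so rows sum to 0.
    Q = rng.uniform(size=(p, p))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    D = np.diag(rng.exponential(size=p))   # exp(1) diagonal entries
    return Q, D

def check_III_IV(Q, D, ts):
    p = Q.shape[0]
    ones = np.ones(p)
    C = Q - D
    pi = np.linalg.lstsq(np.vstack([Q.T, np.ones((1, p))]),
                         np.r_[np.zeros(p), 1.0], rcond=None)[0]
    alpha = pi @ D / (pi @ D @ ones)
    # (III): pi C 1 . pi C^{-1} 1 - 1, which should be >= 0.
    g3 = (pi @ C @ ones) * (pi @ np.linalg.inv(C) @ ones) - 1.0
    # (IV): min over t of (pi - alpha) e^{Ct} 1, which should be >= 0;
    # e^{Ct} 1 is evaluated via the eigendecomposition C = V diag(w) V^{-1}.
    w, V = np.linalg.eig(C)
    coeff = ((pi - alpha) @ V) * np.linalg.solve(V, ones)
    g4 = min((coeff * np.exp(w * t)).sum().real for t in ts)
    return g3, g4

ts = np.arange(0.0, 10.01, 0.2)
for _ in range(200):                       # reduced-scale rerun of the experiment
    Q, D = random_mmpp(4)
    g3, g4 = check_III_IV(Q, D, ts)
    assert g3 > -1e-8 and g4 > -1e-8
```

Note that at t = 0 the quantity for (IV) is exactly (pi - alpha) 1 = 0, so a small negative numerical tolerance is needed, mirroring the tolerance discussion below.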
Such a $(Q,D)$ pair then implies ${\\boldsymbol{\\pi}}$ and ${\\boldsymbol{\\alpha}}$. For each such MMPP we calculate $\\boldsymbol{\\pi} C {\\mathbf 1} \\boldsymbol{\\pi} C^{-1} {\\mathbf 1} -1 $ as in \\eqref{Eq:c2} and $(\\boldsymbol{\\pi} - \\boldsymbol{\\alpha})e^{C t} \\mathbf{1}$ as in \\eqref{Eq:SO}, where we take $t \\in \\{0,0.2,0.4,\\ldots,9.8,10.0\\}$. We then check that both of these quantities are non-negative.\n\nWe repeated this experiment for $10^6$ random MMPP instances of orders $3,4,5$ and $6$. In all cases the calculated quantities were greater than $-10^{-15}$. Note that in certain cases, the quantity associated with (IV) was negative, lying in the range $(-10^{-15},-10^{-16}]$. We attribute this to numerical error stemming from the calculation of the matrix exponential $e^{Ct}$. We ran our experiments with the Julia programming language, V1.0. The calculation time was about 1.5 hours.\n\nThis provides some evidence for the validity of Conjectures~\\ref{conj:1} and~\\ref{conj:2}, although it is clearly not a proof. Further, we note that it is possible that some extreme cases exist that are not likely to come up by uniformly and randomly generating entries of $Q$; for example, the cyclic matrix $Q$ in \\eqref{Example}. To address this, we also considered random cyclic $Q$ matrices with a sparsity pattern similar to \\eqref{Example}. We generated $10^6$ such (order $4$) examples and all agreed with (III) and (IV).\n\n\n\n\\section{Conclusion}\n\\label{sec5}\n\nWe have highlighted various related properties for point processes on the line and, in particular, MAPs exhibiting diagonal matrices ($C$ or $D$). Showing that $c^2 \\ge 1$ for MMPPs and establishing the stochastic order $T_1^{\\boldsymbol{\\pi}} \\ge_{\\mbox{st}} T_1^{\\boldsymbol{\\alpha}}$ remain open problems. We have shown these for MMPPs of order $2$, and using a technique similar to our MSPP proof, we can also show them for MMPPs with symmetric $C$ matrices. 
However, for general MMPPs this remains an open problem.\n\nWe note that stepping outside of the matrix-analytic paradigm and considering general Cox processes is also an option. In fact, since any Cox process can be approximated by an MMPP, we believe that versions of Conjectures~\\ref{conj:1} and~\\ref{conj:2} also hold for Cox processes under suitable regularity conditions.\n\nThere is also a related branch of questions dealing with characterizing the Poisson process via $c^2 = 1$ and considering when an MMPP is Poisson. For example, for the general class of MAPs, the authors of \\cite{bean2000map} provide a condition for determining if a given MAP is Poisson. It is not hard to construct a MAP with $c^2 = 1$ that is not Poisson. However, we believe that all MMPPs with $c^2 = 1$ are Poisson, although we do not have a proof. Further, we believe that for an MMPP, if $c^2 = 1$ then all $\\lambda_i$ are equal (the converse is trivially true). We do not have a proof of this either. Related questions also hold for the more general Cox processes.\n\nWe also note that the MSPP class of processes that we considered generalizes hyper-exponential renewal processes, as well as the class of Markovian Transition Counting Processes (MTCPs) of \\cite{asanjarani2016queueing}.\n\n\n\n\\section*{Acknowledgement}\nAzam Asanjarani's research is supported by the Australian Research Council Centre of Excellence for the Mathematical and Statistical Frontiers (ACEMS). Yoni Nazarathy is supported by Australian Research Council Grant DP180101602. We thank Soren Asmussen, Qi-Ming He, Illes Horvath, Peter Taylor and Miklos Telek for useful discussions and insights related to this problem.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}