\section{Introduction}\n\nSpectators may be sent into infinite loops by Zeno, but Achilles catches up with the turtle anyway. Paradoxes do not spell trouble in the way contradictions do, as a contradiction appears in them only when improper assumptions are made. The more reasonable these assumptions seem, the brighter shines the paradox. \n\nHere we investigate the paradox of Kochen and Specker \cite{KS}, \cite{Bell}, describing a particular property of quantum mechanics by which it is distinguished from classical physics: contextuality \cite{KS}--\cite{ID}. The statement ``quantum mechanics is contextual\" means that descriptions of quantum phenomena in terms of classical statistical mechanics---so-called non-contextual hidden variable models (ncHVMs) \cite{EPR},\cite{KS},\cite{Bell}---are in general not viable. In such models, all observables are assigned pre-existing values which are merely revealed by measurement---in stark contrast with quantum mechanics.\medskip\n\nEach paradox invites us to ask what becomes of the glaring discrepancy once the (in hindsight) improper assumption is excised. For the Kochen-Specker paradox, one such inquiry leads to measurement-based quantum computation (MBQC) \cite{RB01}, a scheme of universal quantum computation driven by measurement. \n\nWe identify the mathematical structures that simultaneously capture the contextuality and the computational output of measurement-based quantum computations. These structures turn out to be cohomological. 
Put in graphical form, we explore the following triangle.\n\\begin{equation}\\label{Triangle}\n\\parbox{10cm}{\\includegraphics[width=10cm]{Triangle}}\n\\end{equation}\nIn the first part of this paper, consisting of Sections~\\ref{bg} and \\ref{CohoW}, we flesh out the above diagram for the simplest case, deterministic temporally flat MBQCs and the corresponding proofs of contextuality. Temporally flat means that these MBQCs have no non-trivial temporal order, which is a restriction. Section~\\ref{bg} reviews the necessary background on contextuality and measurement-based quantum computation, and Section~\\ref{CohoW} explains how cohomology encapsulates the essence of parity-based contextuality proofs and temporally flat MBQCs. The main results are Theorem~\\ref{CPth2} \\cite{Coho} and Theorem~\\ref{ObeT} \\cite{CohoMBQC}, which we restate here.\n\nIn the second part of the paper, consisting of Section~\\ref{TO}, we work towards removing the assumption of temporal flatness. MBQCs are typically temporally ordered. Even though the measurements driving the computation commute, measurement bases need to be adjusted depending on the outcomes of other measurements, and this introduces temporal order. This adjustment is necessary to prevent the randomness inherent in quantum measurement from creeping into the logical processing. \n\nWhile we do not yet tackle temporally ordered MBQCs, we demonstrate that a known contextuality proof exhibiting temporal ordering of measurements, the so-called ``iffy'' proof \\cite{Exa}, can be described by the {\\em{same}} cohomological formalism that is used for the temporally flat case. We conjecture that this strategy might also work for general MBQCs. 
\n\nSection~\ref{Concl} is the conclusion, and Section~\ref{TL} covers some stations of the author's own journey through the world of quantum computation and paradox.\n\n\section{Background}\label{bg}\n\n\subsection{Contextuality}\label{Crev}\n\nWe assume that the reader is familiar with the concept of contextuality \cite{KS},\cite{Bell}; see \cite{Merm} for a review. To provide a short summary, contextuality of quantum mechanics signifies that, in general, quantum mechanical phenomena cannot be described by so-called non-contextual hidden variable models (ncHVMs) \cite{EPR}. In an ncHVM, observable quantities have predetermined value assignments; i.e., each observable possesses a value, and those values are merely revealed upon measurement. The statistical character of measurement in quantum mechanics is then sought to be reproduced by a probability distribution over the value assignments. For certain sets of measurements no such probability distribution exists. If that's the case, then the physical setting at hand is contextual.\n\nIn this paper, we assume that each value assignment $\lambda$ is deterministic; i.e., the value $\lambda(A)$ assigned to each observable $A$ is an eigenvalue of that observable, in accordance with the Dirac projection postulate. More general constructs are conceivable; for example, the value assignments may themselves be probability distributions over eigenvalues \cite{Spekk}; however, we do not consider such generalizations here. We remark that deterministic ncHVMs are equivalent to factorizable probabilistic ones \cite{AB}; also see \cite{Fine}. \smallskip \n\nThe Kochen-Specker (KS) theorem~\cite{KS} says that in Hilbert spaces of dimension 3 and higher, it is impossible to assign all quantum-mechanical observables deterministic non-contextual values in a consistent fashion. A very simple proof of the KS theorem, in dimension 4 and up, is provided by Mermin's square \cite{Merm}. 
It is the simplest parity proof of contextuality, where the assumption of existence of a consistent non-contextual value assignment $\lambda$ leads to a system of mod 2--linear equations with an internal inconsistency. As we will discuss below, the connection between contextuality and MBQC runs through the parity proofs.\n\nFor MBQC we employ state-dependent contextuality. In it, consistent value assignments $\lambda$ do exist, but no probability distribution over them can explain the measurement statistics for the quantum state in question. That value assignments suddenly become possible does not contradict the KS theorem; we have merely shrunk the set of observables considered. Already the original proof \cite{KS} of the KS theorem and the simpler proof via Mermin's square use a finite number of observables picked from a priori infinite sets; and in the application to MBQC we simply reduce those sets further.\smallskip\n\nThe key example for the connection between contextuality and MBQC is the state-dependent version of Mermin's star \cite{Merm}, as was observed in \cite{AB}. Consider the eight-dimensional Hilbert space of 3 qubits, a specific state in it, the Greenberger-Horne-Zeilinger (GHZ) state \cite{GHZ},\n\begin{equation}\label{GHZ}\n|\text{GHZ}\rangle = \frac{|000\rangle + |111\rangle}{\sqrt{2}},\n\end{equation}\nand furthermore the six local Pauli observables $X_i$, $Y_i$, $i=1,..,3$. 
The state-dependent contextuality question is whether those six local observables can be assigned values $\\lambda(\\cdot) = \\pm 1$ in such a way that the measurement statistics for the four non-local Pauli observables \n$X_1X_2X_3$, $X_1Y_2Y_3$, $Y_1X_2Y_3$, $Y_1Y_2X_3$\nis reproduced.\n\nThe GHZ state is a simultaneous eigenstate of these observables,\n$$\nX_1X_2X_3 \\, |\\text{GHZ}\\rangle = -X_1Y_2Y_3\\, |\\text{GHZ}\\rangle =-Y_1X_2Y_3\\, |\\text{GHZ}\\rangle= -Y_1Y_2X_3 \\, |\\text{GHZ}\\rangle = |\\text{GHZ}\\rangle.\n$$ \nThe measurement outcomes for the four non-local observables are deterministic and equal to $\\pm 1$. \n\nNow note that these non-local observables are products of the local ones $X_i$, $Y_i$, namely $X_1X_2X_3 = (X_1) (X_2) (X_3)$, $X_1Y_2Y_3 = (X_1) (Y_2) (Y_3)$, etc. Assuming an ncHVM value assignment $\\lambda$ for the local observables, the above operator constraints translate into constraints on the assigned values $\\lambda(\\cdot)$, namely $\\lambda(X_1)\\lambda(X_2) \\lambda(X_3)=+1$, $\\lambda(X_1)\\lambda(Y_2) \\lambda(Y_3)=-1$, and two more of the same kind. It is useful to write the value assignments $\\lambda$ in the form $\\lambda(\\cdot)=(-1)^{s(\\cdot)}$. In terms of the binary variables $s$, the four constraints read\n\\begin{equation}\\label{sdMS}\n\\begin{array}{rcl}\ns(X_1) + s(X_2) + s(X_3) \\mod 2 &=& 0,\\\\\ns(X_1) + s(Y_2) + s(Y_3) \\mod 2 &=& 1,\\\\\ns(Y_1) + s(X_2) + s(Y_3) \\mod 2 &=& 1,\\\\\ns(Y_1) + s(Y_2) + s(X_3) \\mod 2 &=& 1.\\\\\n\\end{array}\n\\end{equation}\nAdding those four equations mod 2 reveals a contradiction $0=1$, hence no value assignment $s$ (equivalently $\\lambda$) for the six local observables reproduces the measurement statistics of the GHZ state. The state-dependent Mermin star is thus contextual. 
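The inconsistency of Eq.~(\ref{sdMS}) can also be confirmed by brute force. The following minimal sketch (an illustration, not part of the original argument) enumerates all $2^6$ candidate value assignments, and additionally checks that any three of the four constraints remain jointly satisfiable, so the contradiction is a genuinely global one:

```python
from itertools import product

# One variable per local observable, ordered s(X1), s(X2), s(X3), s(Y1), s(Y2), s(Y3).
# Each constraint: (indices of the three observables in a context, required parity).
CONSTRAINTS = [
    ((0, 1, 2), 0),   # s(X1) + s(X2) + s(X3) = 0 mod 2
    ((0, 4, 5), 1),   # s(X1) + s(Y2) + s(Y3) = 1 mod 2
    ((3, 1, 5), 1),   # s(Y1) + s(X2) + s(Y3) = 1 mod 2
    ((3, 4, 2), 1),   # s(Y1) + s(Y2) + s(X3) = 1 mod 2
]

def satisfying_assignments(constraints):
    """All 0/1 assignments to the six variables meeting the given parity constraints."""
    return [s for s in product((0, 1), repeat=6)
            if all(sum(s[i] for i in idx) % 2 == p for idx, p in constraints)]

# No global value assignment satisfies all four equations simultaneously ...
assert satisfying_assignments(CONSTRAINTS) == []

# ... yet dropping any single equation leaves the remaining three satisfiable:
for k in range(4):
    assert satisfying_assignments(CONSTRAINTS[:k] + CONSTRAINTS[k + 1:])
```

The second check reflects the fact that each individual context is perfectly consistent; only the demand for a single assignment covering all contexts fails.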
We will return to Eq.~(\ref{sdMS}) throughout, as it relates to the simplest example of a contextual MBQC \cite{AB}.\n\medskip\n\nIn preparation for the subsequent discussion we review one further concept, the contextual fraction \cite{ABsheaf}. To define it, consider an empirical model $e$, i.e., a collection of probability distributions over measurement contexts, and split it into a contextual part $e^C$ and a non-contextual part $e^{NC}$,\n\begin{equation}\ne=\tau e^{NC} + (1-\tau) e^C,\; 0 \leq \tau \leq 1.\n\end{equation}\nThe maximum possible value of $\tau$ is called the non-contextual fraction ${\sf{NCF}}(e)$ of the model $e$,\n\begin{equation}\n{\sf{NCF}}(e) := \max_{e^{NC}} \tau.\n\end{equation}\nThe contextual fraction ${\sf{CF}}(e)$ is then the probability weight of the contextual part $e^{C}$,\n\begin{equation}\n{\sf{CF}}(e):=1-{\sf{NCF}}(e).\n\end{equation}\nIt is a measure of the ``amount'' of contextuality contained in a given physical setup.\n\n\subsection{Measurement-based quantum computation}\label{MBQCrev}\n\nAgain, we assume that the reader is familiar with the concept of measurement-based quantum computation, a.k.a. the one-way quantum computer \cite{RB01}. Here we provide only a very short summary, and then expand on one technical aspect that is of particular relevance for the connection with contextuality---the classical side processing. For a review of MBQC see e.g. \cite{RW12}. \n\nIn MBQC, the process of quantum computation is driven by local measurement on an initially entangled quantum state; no unitary evolution takes place. Further, the initial quantum state, for example a 2D cluster state, does not carry any information about the algorithm to be implemented---it is universal. 
All algorithm-relevant information is inputted to that quantum state, processed and read out by the local measurements.\n\nIn quantum mechanics, the basis of a measurement can be freely chosen but the measurement outcome is typically random; and this of course affects MBQC. There, the choice of measurement bases encodes the quantum algorithm to be implemented, and the measurement record encodes the computational output. In MBQC every individual measurement outcome is in fact completely random, and meaningful information is contained only in correlations of measurement outcomes. As it turns out, these computationally relevant correlations have a simple structure. To extract them from the measurement record, every MBQC runs a classical side processing.\n\nThe need for classical side processing in MBQC also arises in a second place: measurement bases must be adapted according to previously obtained measurement outcomes, in order to prevent the randomness of quantum measurement from creeping into the logical processing.\n\nWe confine our attention to the original MBQC scheme on cluster states \cite{RB01}, which we will henceforth call $l2$-MBQC. There are other MBQC schemes, for example using AKLT states as computational resources, in which the side processing is more involved.\n\nIn $l2$-MBQC, for each measurement $i$ there are two possible choices for the measured observable $O_i[q_i]$, depending on a binary number $q_i$. The eigenvalues of these observables are constrained to be $\pm 1$. Furthermore, both the bitwise output $\textbf{o}=(o_1,o_2,..,o_k)$ and the choice of measurement bases, $\textbf{q}=(q_1,q_2,..,q_N)$, are functions of the measurement outcomes $\textbf{s}=(s_1,s_2,..,s_N)$. In addition, $\textbf{q}$ is also a function of the classical input $\textbf{i}=(i_1,i_2,..,i_m)$. 
These functional relations are all mod 2 linear,\n\begin{subequations}\label{CCR}\n\begin{align}\label{CCR_out}\n\textbf{o}&=Z\textbf{s} \mod 2,\\ \n\label{CCR_in}\n\textbf{q} &=T\textbf{s}+S\textbf{i} \mod 2.\n\end{align}\n\end{subequations}\nTherein, the binary matrix $T$ encodes the temporal order in a given MBQC. If $T_{ij}=1$ then the basis for the measurement $i$ depends on the outcome of measurement $j$, hence the measurement $j$ must be executed before the measurement $i$. We remark that Eqs.~(\ref{CCR}) have been discussed with additional constant offset vectors on the r.h.s. \cite{TO_sym}, but we don't need that level of generality here.\n\n\subsection{Links between contextuality and MBQC}\label{Link}\n\nThe basic result relating MBQC to contextuality is the following.\n\n\begin{Theorem}[\cite{RR13}]\label{NLPCrel}\nLet ${\cal{M}}$ be an $l2$-MBQC evaluating a function $o:(\mathbb{Z}_2)^m \longrightarrow \mathbb{Z}_2$. Then, ${\cal{M}}$ is contextual if it succeeds with an average probability $p_S>1-d_H(o)\/2^m$, where $d_H(o)$ is the Hamming distance of $o$ from the closest linear function.\n\end{Theorem}\nThat is, if the function evaluated by the $l2$-MBQC is non-linear---hence outside what the classical side processing can compute by itself---then the assumption of non-contextuality puts a limit on the reachable probability of success. The reliability of the MBQC can be improved beyond this threshold only in the presence of contextuality. The more nonlinear the computed function (in terms of the Hamming distance $d_H(o)$), the lower the threshold.\nThe lowest contextuality thresholds are reached for bent functions. For $m$ even and $o$ bent, it holds that $d_H(o) = 2^{m-1} - 2^{m\/2-1}$ \cite{MWS}, and therefore the contextuality threshold for the average success probability $p_S$ approaches $1\/2$ for large $m$. 
An MBQC can thus be contextual even though its output is very close to completely random.\\medskip\n \nIn particular when comparing the above Theorem~\\ref{NLPCrel} to structurally similar theorems on the role of entanglement in MBQC \\cite{MVdN}, we observe that the above only provides a binary ``can do vs. cannot do'' separation. According to the theorem, in the presence of contextuality a success probability of unity is a priori possible, but without it the stated bound applies. Yet it is intuitively clear that the reachable success probability of function evaluation in MBQC should depend on the ``amount'' of contextuality present. In this regard, we note the following refinement of Theorem~\\ref{NLPCrel}, invoking the contextual fraction.\n\\begin{Theorem}[\\cite{CF}]\\label{T1}\nLet $f: (\\mathbb{Z}_2)^m \\longrightarrow \\mathbb{Z}_2$ be a Boolean function, and $\\mathbb{H}(f,{\\cal{L}})$ its Hamming distance to the closest linear function. For each l2-MBQC with contextual fraction ${\\sf{CF}}(\\rho)$ that computes $f$ with average success probability $\\overline{p}_S$ over all $2^m$ possible inputs it holds that \n\\begin{equation}\\label{pSbd}\n\\overline{p}_S\\leq 1- \\frac{(1-{\\sf{CF}}(\\rho))\\, \\mathbb{H}(f,{\\cal{L}})}{2^m}.\n\\end{equation}\n\\end{Theorem}\nThus, the larger the contextual fraction, the larger the achievable success probability for function evaluation through MBQC. If the contextual fraction of the resource state becomes unity, then the theorem puts no non-trivial bound on the success probability of the corresponding $l2$-MBQC.\n\nIf, on the other hand, the contextual fraction of the resource state becomes zero, i.e., when the resource state can be described by a non-contextual hidden variable model, the threshold in success probability reduces to that of Theorem~\\ref{NLPCrel}. 
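These thresholds are easily made concrete. The sketch below computes the nonlinearity $d_H$ by brute force, assuming (consistent with the offset-free side processing of Eq.~(\ref{CCR})) that ``linear'' means $x \mapsto a\cdot x \bmod 2$ without an affine offset; for the OR-gate of the GHZ example it gives $d_H=1$, hence the non-contextual bound $\overline{p}_S \leq 3/4$:

```python
from itertools import product

def hamming_to_nearest_linear(f, m):
    """Hamming distance of a Boolean function f on m bits to the nearest
    mod-2 linear function x -> a.x (no affine offset, matching the
    offset-free side processing used in the text)."""
    points = list(product((0, 1), repeat=m))
    return min(
        sum((sum(ai * xi for ai, xi in zip(a, x)) % 2) != f(*x) for x in points)
        for a in product((0, 1), repeat=m)
    )

# OR-gate of the GHZ example: nonlinearity 1, so with contextual fraction
# CF = 0 the bound on the average success probability is 1 - 1/4 = 0.75.
d = hamming_to_nearest_linear(lambda y, z: y | z, 2)
assert d == 1

# The contextual-fraction bound of Eq. (pSbd), evaluated for a few CF values;
# CF = 0 recovers the bound 0.75, CF = 1 gives the trivial bound 1.
bounds = [1 - (1 - CF) * d / 2 ** 2 for CF in (0.0, 0.5, 1.0)]
assert bounds == [0.75, 0.875, 1.0]
```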
Theorem~\\ref{T1} interpolates between those two limiting cases.\\medskip\n\nOne important aspect of the MBQC--contextuality relationship is revealed only by the proof of Theorem~\\ref{NLPCrel}, but not by the statement of the theorem itself. Namely, the contextuality of MBQC is intimately related to the classical side processing Eq.~(\\ref{CCR}). Rather than replicating the proof from \\cite{RR13}, here we illustrate the idea through the example of Anders and Browne's GHZ-MBQC \\cite{AB}, related to Mermin's star. We will return to this example throughout.\\smallskip\n\n{\\em{Example (GHZ-MBQC).}} In this scenario, the resource state is a Greenberger-Horne-Zeilinger state of Eq.~(\\ref{GHZ}), and the local measurable observables $O_i[q_i]$, depending on a binary number $q_i$, are $O_i[0]=X_i, \\; O_i[1]=Y_i$, for $i=1,..,3$. These are precisely the ingredients of the state-dependent version of Mermin's star, as we discussed in Section~\\ref{Crev}. As before, the measurement outcomes $s_i\\in \\mathbb{Z}_2$ are related to the measured eigenvalues $\\lambda_i = \\pm 1$ of the respective local Pauli observables via $\\lambda_i=(-1)^{s_i}$. There are two bits $y,z$ of input and one bit $o$ of output, and the computed function is an OR-gate, $o= y \\vee z$. \n\nThe required linear classical side processing is as follows. \n\\begin{subequations}\\label{CCRghz}\n\\begin{align}\n\\label{CCR_inGHZ}\nq_1 = y,\\, q_2 = z,\\, q_3 = y+z \\mod 2,\\\\\n\\label{CCR_outGHZ}\no= s_1+s_2+s_3 \\mod 2.\n\\end{align}\n\\end{subequations}\nThe two input bits $y$ and $z$ determine the choices $q_i$ of measured observables through Eq.~(\\ref{CCR_inGHZ}), and then the corresponding binary measurement outcomes $s_1, s_2, s_3$ determine the outputted value of the function, $o(y,z)$.\n\nLet's verify that the output is the intended OR function. First, consider $y=z=0$. Thus, by Eq.~(\\ref{CCR_inGHZ}), $q_1=q_2=q_3=0$, and all three locally measured observables are of $X$-type. 
While the outcomes $s_1,s_2,s_3$ are individually random, they are correlated since the product of the corresponding observables $X_i$ is the stabilizer of the GHZ state, $X_1X_2X_3|\\text{GHZ}\\rangle = |\\text{GHZ}\\rangle$. Therefore, $s_1+s_2+s_3\\mod 2 = 0$. Hence, with Eq.~(\\ref{CCR_outGHZ}), $o(0,0)=0$ as required for the OR-gate.\n\nWe consider one more input combination, $y=0$ and $z=1$. Then, with Eq.~(\\ref{CCR_inGHZ}), $q_1=0$ and $q_2=q_3=1$. Hence $X_1$, $Y_2$ and $Y_3$ are measured. Because of the stabilizer relation $X_1Y_2Y_3|\\text{GHZ}\\rangle = - |\\text{GHZ}\\rangle$, the three measurement outcomes $s_1,s_2,s_3$ satisfy $s_1+s_2+s_3 \\mod 2 =1$. With Eq.~(\\ref{CCR_outGHZ}), $o(0,1)=1$ as required. The discussion of the other two inputs is analogous.\\smallskip\n\nThe OR-gate is a very simple function; yet it is of consequence for the above computational setting. Every MBQC requires a classical control computer, to enact the classical side processing of Eq.~(\\ref{CCRghz}). This control computer is constrained to performing addition mod 2, and it is therefore not classically computationally universal. The OR-gate is a non-linear Boolean function. By adding it to the available operations, the extremely limited classical control computer is boosted to classical computational universality \\cite{AB}.\n\nTo understand the connection between contextuality and MBQC classical processing relations, we state Eq.~(\\ref{CCR_outGHZ}) separately for all four input values.\n\\begin{equation}\\label{GHZmbqc1}\n\\begin{array}{rrcr}\n\\textbf{input:} \\; (0,0)& \\hspace*{4mm}\\textbf{output:} \\; 0&=& s(X_1)+s(X_2)+s(X_3)\\\\\n(0,1)& 1&=& s(X_1)+s(Y_2)+s(Y_3)\\\\\n(1,0)& 1&=& s(Y_1)+s(X_2)+s(Y_3)\\\\\n(1,1)& 1&=& s(Y_1)+s(Y_2)+s(X_3)\n\\end{array}\n\\end{equation}\nNote the striking resemblance of Eq.~(\\ref{GHZmbqc1}) to the earlier Eq.~(\\ref{sdMS}). 
The only difference is that Eq.~(\ref{GHZmbqc1}) refers to the quantum mechanical measurement record, one context at a time, whereas Eq.~(\ref{sdMS}) refers to a noncontextual value assignment in an ncHVM, applying to all contexts simultaneously. Thus, if we assume an ncHVM then we obtain a contradiction; and if we do not assume it then the same equations describe a computation.\n\nThis dichotomy exists not only for the GHZ-scenario discussed here, but indeed for all MBQCs satisfying the classical processing relations Eq.~(\ref{CCR}). It is the basis for Theorems~\ref{NLPCrel} and \ref{T1}.\n\n\section{Cohomology}\label{CohoW}\n\nIn the previous section we found that for $l2$-MBQCs contextuality and computation hinge on the same algebraic structure. If we impose an ncHVM description on top of this structure, we obtain a contradiction; and if we do not impose it, we obtain a computation. \nThis raises the question: {\em{What precisely is this common algebraic structure underlying both parity-based contextuality proofs and measurement-based quantum computation?}} This is where cohomology comes in.\smallskip\n\nBelow, we build up the cohomological picture for deterministic, temporally flat MBQCs. The connection between MBQC and contextuality runs through state-dependent parity-based contextuality proofs. In Section~\ref{CohoCon}, we first introduce the cohomological description of the state-independent counterpart. It is based on a chain complex ${\cal{C}}(E)$ and is slightly simpler. We then progress to the state-dependent version, described by the relative chain complex ${\cal{C}}(E,E_0)$. In Section~\ref{CohoComp}, we explain the relation between cohomology in ${\cal{C}}(E,E_0)$ and MBQC output.\n\n\subsection{Cohomology and contextuality}\label{CohoCon} \n\nWe begin with the simpler state-independent parity proofs of contextuality, and then move on to their state-dependent cousins which are of more direct interest for MBQC. 
In all that follows we consider observables whose eigenvalues are all $\pm 1$. We denote these observables by $T_a$, a notation we now explain.\n\nThe basic objects in the cohomological discussion of the parity proofs are chain complexes ${\cal{C}}(E)=(C_0,C_1,C_2,C_3)$ consisting of points (0-chains), edges (1-chains), faces (2-chains) and volumes (3-chains), and boundary maps $\partial$ between those chains. The observables $T_a$ forming the contextuality proof are associated with the edges $a\in E$ in the complex ${\cal{C}}(E)$. More precisely, each edge $a$ corresponds to an equivalence class $\{\pm T_a\}$ of observables, $a:=\{\pm T_a\}$. From each equivalence class $a$, one observable is picked and denoted as $T_a$. \n\nFrom the perspective of contextuality, the reason for considering the observables $T_a$ and $-T_a$ as equivalent is the following. If a parity-based contextuality proof can be based on some set of observables $\{T_a, a\in E\}$, then any signed set $\{(-1)^{\gamma(a)} T_a, \;\gamma(a) \in \mathbb{Z}_2, \forall a\in E\}$ produces an equivalent proof. The signs $(-1)^{\gamma(a)}$ in the definition of the observables $T_a$ don't matter for the existence of contextuality proofs; and this leads us to consider the equivalence classes $\{\pm T_a\}$. We will return to this observation once we have set up the appropriate notation, right after Theorem~\ref{CPth1}.\n\nThe 1-chains $c_1\in C_1$ are linear combinations of the edges $a\in E$ with $\mathbb{Z}_2$ coefficients. \nThe faces of ${\cal{C}}(E)$ are sets $f=(a_1,a_2,..,a_n)$ of edge labels $a_i$ of pairwise commuting operators $T_{a_i}$, such that for every face $f$ it holds that\n\begin{equation}\label{ProdRel}\n\prod_{a\in f} T_a = I (-1)^{\beta(f)},\n\end{equation}\nfor a suitable function $\beta$ defined on the faces. 
We denote the set of faces by $F$, and the 2-chains $c_2 \in C_2$ are linear combinations of the faces $f\in F$ with coefficients in $\mathbb{Z}_2$. \n\nWe can now define a boundary operator $\partial: C_2 \longrightarrow C_1$ via $\partial(f) = \sum_{a\in f} a$, for all $f\in F$, and extension from $F$ to $C_2$ by linearity. We can then also define a coboundary operator $d: C^1 \longrightarrow C^2$ in the usual way; i.e., for every 1-cochain $x \in C^1$ it holds that $d x(f):=x(\partial f)$, for all $f\in F$.\medskip\n\nThe function $\beta: C_2 \longrightarrow \mathbb{Z}_2$ plays a central role in the cohomological discussion of contextuality. Namely, assume that a non-contextual value assignment $\lambda$ exists, and as before write $\lambda(\cdot) = (-1)^{s(\cdot)}$. Then, Eq.~(\ref{ProdRel}) implies that $\beta(f) = \sum_{a\in f}s(a) = s(\partial f)$ for all $f\in F$. We may write this in cochain notation as\n\begin{equation}\label{betads}\n\beta = ds.\n\end{equation}\nThis equation may be interpreted as a constraint on the value assignment $s$, given $\beta$. But it may as well be regarded as a constraint on $\beta$. Namely, not all functions $\beta$ are of the form Eq.~(\ref{betads}) for some 1-cochain $s$. Thus, a measurement setting based on ${\cal{C}}(E)$ is non-contextual only if $\beta = ds$ for some $s\in C^1$, or, equivalently, it is contextual if $\beta \neq ds$, for any $s\in C^1$.\n\n\begin{figure}\n\begin{center}\n\begin{tabular}{lclcl}\n(a) && (b) && (c)\\\n\includegraphics[height=3.5cm]{MerminStar2.pdf} &&\n\includegraphics[height=3.3cm]{MstarB2.pdf} &&\n\includegraphics[height=3.3cm]{Mstar3.pdf} \n\end{tabular}\n\caption{\label{MermSt} Mermin's star. (a) Standard representation. Each line represents a measurement context, composed of four commuting Pauli observables multiplying to $\pm I$. (b) Mermin's star re-arranged on a surface. 
The Pauli observables now correspond to edges, and each measurement context to the boundary of one of the four elementary faces. The exterior edges are pairwise identified. The colored edges carry a value assignment, resulting from the GHZ stabilizer. (c) Relative complex ${\\cal{C}}(E,E_0)$. The edges corresponding to observables in the GHZ stabilizer are removed by contraction.}\n\\end{center}\n\\end{figure}\n\nWe will now slightly reformulate the last statement, to better bring out its cohomological nature. The function $\\beta$ is by definition a 2-cochain. But in fact it is a 2-cocycle, $d\\beta =0$ \\cite{Coho}. Thus, we may express the above contextuality condition as follows.\n\\begin{Theorem}[\\cite{Coho}]\\label{CPth1}A set of measurements specified by the chain complex ${\\cal{C}}(E)$ is contextual if for the cocycle class $[\\beta] \\in H^2({\\cal{C}}(E),\\mathbb{Z}_2)$ it holds that\n$$[\\beta] \\neq 0.$$ \n\\end{Theorem}\n{\\em{Remark:}} We observed above that no transformation $T_a \\longrightarrow (-1)^{\\gamma(a)} T_a$, $\\forall a \\in E$, affects the existence of contextuality proofs. We can now verify this statement in Theorem~\\ref{CPth1}. At the level of the cocycle $\\beta$, the transformations act as $\\beta \\longrightarrow \\beta + d\\gamma$. Hence, $[\\beta] \\longrightarrow [\\beta]$. The parity proofs are thus indeed unchanged. We point out that the transformations discussed here---which we call gauge transformations---have a further use in characterizing MBQC output functions; see Section~\\ref{CohoComp}.\n\n\\medskip\nNow let's consider the state-independent Mermin star in this framework.\nThe ten Pauli observables $T_a$ therein are assigned to the edges $a \\in E$ in a chain complex ${\\cal{C}}$; see Fig.~\\ref{MermSt}b. For the five faces shown we have $\\beta(f_1)=\\beta(f_2) =\\beta(f_3) = \\beta(f_4)=0$, and $\\beta(f_5)=1$. 
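These values of $\beta$ can be checked directly by multiplying Pauli matrices. The following minimal sketch (pure Python, with a hand-rolled Kronecker product; the helper names are ours) verifies that the context $X_1, X_2, X_3, X_1X_2X_3$ multiplies to $+I$, while the fifth context, consisting of the four non-local observables, multiplies to $-I$:

```python
# Single-qubit Paulis as tuples of rows; 'I' is the 2x2 identity.
PAULI = {"I": ((1, 0), (0, 1)),
         "X": ((0, 1), (1, 0)),
         "Y": ((0, -1j), (1j, 0))}

def kron(A, B):
    """Kronecker product of square matrices given as tuples of rows."""
    n, m = len(A), len(B)
    return tuple(tuple(A[i // m][j // m] * B[i % m][j % m]
                       for j in range(n * m)) for i in range(n * m))

def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def pauli(word):
    """Three-qubit Pauli operator from a string such as 'XYY'."""
    M = PAULI[word[0]]
    for c in word[1:]:
        M = kron(M, PAULI[c])
    return M

def context_product(words):
    """Product of the operators of one measurement context."""
    M = pauli(words[0])
    for w in words[1:]:
        M = matmul(M, pauli(w))
    return M

id8 = pauli("III")
minus_id8 = tuple(tuple(-x for x in row) for row in id8)

# Context of f_1: X1, X2, X3 and the non-local X1X2X3 multiply to +I, so beta(f_1) = 0.
assert context_product(["XII", "IXI", "IIX", "XXX"]) == id8
# Context of f_5: the four non-local observables multiply to -I, so beta(f_5) = 1.
assert context_product(["XXX", "XYY", "YXY", "YYX"]) == minus_id8
```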
Further denote ${\\cal{F}}:=\\sum_{i=1}^5 f_i$, such that $\\partial {\\cal{F}} =0$ and $\\beta({\\cal{F}})=1$. Now assume Mermin's star were non-contextual. Then, $\\beta=ds$ for some $s\\in C^1$, and we have\n$$\n0 = s(0) = s(\\partial {\\cal{F}}) = ds({\\cal{F}}) = \\beta({\\cal{F}}) = 1.\n$$\nContradiction. Hence, Mermin's star is contextual.\\medskip\n\nWe now seek a state-dependent version of Theorem~\\ref{CPth1}, preferably formulated in an analogous way. This can be achieved by proceeding from the chain complex ${\\cal{C}}(E)$ to a relative chain complex ${\\cal{C}}(E,E_0)$. The quantum state $|\\Psi\\rangle$ now appears, and the set $E_0\\subset E$ consists of those edges $a$ for which the corresponding operator $T_a$ has $|\\Psi\\rangle$ as an eigenstate, \n\\begin{equation}\\label{ES}\nT_a|\\Psi\\rangle = (-1)^{\\mu(a)}|\\Psi\\rangle,\\;\\text{with } \\mu: E_0 \\longrightarrow \\mathbb{Z}_2.\n\\end{equation}\nGeometrically, ${\\cal{C}}(E,E_0)$ is obtained from ${\\cal{C}}(E)$ by contracting the edges in $E_0$. Thereby, the faces of ${\\cal{C}}(E)$ whose boundary lives entirely inside $E_0$ are removed. Under this contraction, the boundary map $\\partial$ changes to a relative boundary map $\\partial_R$ defined by $\\partial_R(f) = \\sum_{a\\in f\\backslash E_0}a$.\n\nExtending the above function $\\mu$ to all of $E$ by setting $\\mu(a):=0$ for all $a \\not \\in E_0$, we define a relative 2-cochain\n\\begin{equation}\\label{betaPsi}\n\\beta_\\Psi:= \\beta + d\\mu \\mod 2.\n\\end{equation}\nAgain, $\\beta_\\Psi$ is a 2-cocycle. Also, $\\beta_\\Psi$ evaluates to zero on all faces with boundary entirely inside $E_0$, and it is thus a cocycle in the relative complex ${\\cal{C}}(E,E_0)$.\n\nQuantum mechanically, the measurement record in the context corresponding to any face $f\\in F$ has to satisfy $s|_{f\\cap E_0} = \\mu |_{f \\cap E_0}$, and $\\beta(f) = s(\\partial f)$. 
Then, from the above definitions it follows that\n\begin{equation}\label{BetaPsi_s}\n\beta_\Psi(f) = s(\partial_R f).\n\end{equation}\nNow assume a value assignment $s$ exists. It has to satisfy the condition Eq.~(\ref{BetaPsi_s}) for all faces $f\in F$ simultaneously. We may thus write the constraints on such a global value assignment $s$ as $ds=\beta_\Psi$, with $d$ now being the coboundary operator in the complex ${\cal{C}}(E,E_0)$. \n\nWe thus have, in complete analogy with the state-independent case, the following result.\n\begin{Theorem}[\cite{Coho}]\label{CPth2}A set of measurements and a quantum state $|\Psi\rangle$ specified by the chain complex ${\cal{C}}(E,E_0)$ are contextual if for the cocycle class $[\beta_\Psi] \in H^2({\cal{C}}(E,E_0),\mathbb{Z}_2)$ it holds that\n$$[\beta_\Psi] \neq 0.$$ \n\end{Theorem}\n{\em{Example, Part II.}} We now apply this to the example of the state-dependent Mermin star. Four faces remain in ${\cal{C}}(E,E_0)$ after contraction of $E_0$ in ${\cal{C}}(E)$, $f_1',.., f_4'$. We have\n $\beta_\Psi(f_1')=0$, $\beta_\Psi(f_2')= \beta_\Psi(f_3')=\beta_\Psi(f_4')=1$.\nDenote ${\cal{F}}'=\sum_{i=1}^4 f_i'$ such that the relative boundary of ${\cal{F}}'$ vanishes, $\partial_R {\cal{F}}'=0$, and $\beta_\Psi({\cal{F}}')=1$. \n\nNow assume that the state-dependent Mermin star is non-contextual. Then, $\beta_\Psi=ds$ for some 1-cochain $s\in C^1({\cal{C}}(E,E_0),\mathbb{Z}_2)$, and thus\n\begin{equation}\label{cohoCount}\n1= \beta_\Psi({\cal{F}}') = ds({\cal{F}}') = s(\partial_R {\cal{F}}') = s(0) = 0.\n\end{equation}\nContradiction. Hence the state-dependent Mermin star is contextual.\n\nEq.~(\ref{cohoCount}) is the cohomological version of Eq.~(\ref{sdMS}). 
It describes the exact same system of linear constraints.\n\n\subsection{Cohomology and computation}\label{CohoComp} \n\nRecall from Section~\ref{MBQCrev} that in MBQC there are two measurable observables at each physical site $i$, $O_i[q_i]$, $q_i \in \mathbb{Z}_2$. To make use of the cohomological formalism, we now denote these observables as\n\begin{equation}\label{OT}\nO_i[0]=T_{a_i},\; O_i[1]=T_{\overline{a}_i},\;\; \forall i=1,..,n.\n\end{equation}\nWe define the notion of an input group to import the classical processing relation Eq.~(\ref{CCR_in}) into our cohomological picture. The input group is \n$Q =\langle \textbf{i}_j,\;j=1,..,m\rangle \cong \mathbb{Z}_2^m$. The generators of $Q$ act on the observables of Eq.~(\ref{OT}) as \n\begin{equation}\label{cflip}\n\begin{array}{rrl}\n\textbf{i}_j(a_i)=(a_i),&\textbf{i}_j(\overline{a}_i)=(\overline{a}_i),& \text{if $S_{ij}=0$},\\\n\textbf{i}_j(a_i)=(\overline{a}_i),&\textbf{i}_j(\overline{a}_i)=(a_i),& \text{if $S_{ij}=1$}.\n\end{array}\n\end{equation}\nDenote by ${\cal{E}}_\text{e}$ a reference context corresponding to the trivial input $\text{e} \in Q$, ${\cal{E}}_\text{e} := \{a_j,\,j = 1,..,n\}$, and by ${\cal{E}}_\textbf{i}$ the measurement context for any input $\textbf{i} \in Q$. Then, with the definitions Eq.~(\ref{OT}) and (\ref{cflip}), the relation\n\begin{equation}\label{Iact}\n{\cal{E}}_\textbf{i} = \textbf{i}({\cal{E}}_\text{e}):=\{\textbf{i}(a_j),\,j=1,.., n\}\n\end{equation}\nreproduces the classical side processing relation Eq.~(\ref{CCR_in}) in the limit of temporally flat MBQCs, $T=0$. This is the limit we are presently interested in.\n\nWe have thus far represented computational input by a group $Q$ that maps the complex ${\cal{C}}(E,E_0)$ to itself, and we now turn to the computational output. 
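Before doing so, the action of $Q$ just defined can be made concrete for the GHZ example. The sketch below encodes the matrix $S$ of Eq.~(\ref{CCR_in}) that reproduces Eq.~(\ref{CCR_inGHZ}), and lists the context ${\cal{E}}_\textbf{i}$ for each input; the edge labels follow Eq.~(\ref{OT}), with the strings "a" and "abar" standing in for $a_k$ and $\overline{a}_k$:

```python
# Binary matrix S of Eq. (CCR_in) for the GHZ-MBQC (temporally flat, T = 0);
# it reproduces q1 = y, q2 = z, q3 = y + z mod 2 of Eq. (CCR_inGHZ).
S = [(1, 0),
     (0, 1),
     (1, 1)]

def context(i):
    """Measurement context E_i = i(E_e): basis choices q = S i mod 2,
    written as edge labels a_k (for q_k = 0) or abar_k (for q_k = 1)."""
    q = [sum(s * x for s, x in zip(row, i)) % 2 for row in S]
    return tuple(("abar%d" if qk else "a%d") % k for k, qk in enumerate(q, 1))

assert context((0, 0)) == ("a1", "a2", "a3")          # measure X1, X2, X3
assert context((0, 1)) == ("a1", "abar2", "abar3")    # measure X1, Y2, Y3
assert context((1, 0)) == ("abar1", "a2", "abar3")    # measure Y1, X2, Y3
assert context((1, 1)) == ("abar1", "abar2", "a3")    # measure Y1, Y2, X3
```

The four contexts are exactly those of the state-dependent Mermin star, with the trivial input $\text{e}=(0,0)$ giving the reference context ${\cal{E}}_\text{e}$.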
In terms of the above sets ${\\cal{E}}_\\textbf{i}$, the classical side processing relations for output, Eq.~(\\ref{CCR_out}), read\n\\begin{equation}\\label{CCR_out_2}\no(\\textbf{i}) = \\sum_{a\\in {\\cal{E}}_\\textbf{i}} s(a) \\mod 2,\\;\\; \\forall \\textbf{i} \\in Q.\n\\end{equation}\nWe note that for any $\\textbf{i} \\in Q$, the observables $T_a$, $a\\in {\\cal{E}}_\\textbf{i}$, pairwise commute. Furthermore, in the setting of deterministic computation, the input group $Q$ (equivalently, the matrix $S$ in Eq.~(\\ref{CCR_in})) is chosen such that the MBQC resource state $|\\Psi\\rangle$ is an eigenstate of all observables $\\prod_{a \\in {\\cal{E}}_\\textbf{i}} T_a $. That is, $\\prod_{a \\in {\\cal{E}}_\\textbf{i}} T_a =\\pm T_x$, with $x\\in E_0$; cf. Eq.~(\\ref{ES}). Therefore, the edges $a\\in {\\cal{E}}_\\textbf{i}$ form the boundary of a face $f_\\textbf{i}$ in the contracted complex ${\\cal{C}}(E,E_0)$, i.e., $f_\\textbf{i} \\in C_2({\\cal{C}}(E,E_0))$ satisfies ${\\cal{E}}_\\textbf{i} = \\{\\partial_R f_\\textbf{i}\\}$. Finally, with Eq.~(\\ref{Iact}), ${\\cal{E}}_\\textbf{i} = \\{\\textbf{i} (\\partial_R f_\\text{e})\\}$, and the face $f_\\text{e}$ corresponds to ${\\cal{E}}_\\text{e}$. Therefore, Eq.~(\\ref{CCR_out_2}) can be rewritten in cohomological notation as\n$$\no(\\textbf{i}) = s(\\textbf{i}(\\partial_R f_\\text{e})),\n$$\nwhere $s$ is the measurement record for the observables in ${\\cal{E}}_\\textbf{i}$. 
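In plain terms, the last relation says that the output is a parity of recorded outcomes over a relabeled face boundary. A minimal sketch (our own toy encoding, with a hypothetical three-edge boundary):

```python
# Minimal sketch of Eq. (CCR_out_2): the output o(i) is the mod-2 sum of
# the measurement record s over the edges of the context E_i.

def output(record, context):
    """Parity of the outcomes s(a) over the edges a in the context."""
    return sum(record[a] for a in context) % 2

# Hypothetical record on a three-edge boundary of the reference face f_e.
record = {"a1": 1, "a2": 0, "a3": 1}
assert output(record, ["a1", "a2", "a3"]) == 0   # even parity: o = 0
```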
\n\nInserting Eq.~(\ref{BetaPsi_s}) into the last equation, we obtain the following result.\n\begin{Theorem}[\cite{CohoMBQC}]\label{ObeT}\nThe function $o: Q \longrightarrow \mathbb{Z}_2$ computed in a given deterministic and temporally flat $l2$-MBQC is related to the cocycle $\beta_\Psi \in C^2({\cal{C}}(E,E_0))$ via\n\begin{equation}\label{obeta}\no(\textbf{i}) = \beta_\Psi(\textbf{i}(f_\text{e})),\; \forall \textbf{i} \in Q.\n\end{equation}\n\end{Theorem}\nThis relation between the computational output $o$ and the 2-cocycle $\beta_\Psi$ is the main result of this section. It has been established in greater generality in \cite{CohoMBQC} (Theorem~4 therein), but we do not need the additional generality here. Theorem~\ref{ObeT} is the computational counterpart to Theorem~\ref{CPth2} in Section~\ref{CohoCon}. Both results together establish that a single cohomological object, the cocycle $\beta_\Psi$, governs contextuality and computational output in MBQC. Jointly, Theorems~\ref{CPth2} and \ref{ObeT} thus flesh out Diagram~(\ref{Triangle}).\medskip\n\n{\em{Example, Part III.}} For the GHZ-MBQC, Eq.~(\ref{obeta}) may be explicitly verified by inspecting Fig.~\ref{MermSt}c. \nThe reference context is ${\cal{E}}_{\text{e}}=(a_{X_1},a_{X_2},a_{X_3})$, hence $f_\text{e} = f'_1$, w.r.t. the labeling of Fig.~\ref{MermSt}c. The input group is $Q=\mathbb{Z}_2\times \mathbb{Z}_2$. 
Its two generators $\\textbf{i}_1, \\textbf{i}_2$ are related to the input bits $y$, $z$ of the OR-gate via $y \\mapsto \\textbf{i}_1,\\; z \\mapsto \\textbf{i}_2$, and Eq.~(\\ref{cflip}) becomes\n\\begin{equation}\\label{Qghz}\n\\begin{array}{rl}\n\\textbf{i}_1: &a_{X_1} \\leftrightarrow a_{Y_1}, \\; a_{X_3} \\leftrightarrow a_{Y_3}, \\; a_{X_2} \\circlearrowright,\\; a_{Y_2} \\circlearrowright,\\\\\n\\textbf{i}_2: &a_{X_2} \\leftrightarrow a_{Y_2}, \\; a_{X_3} \\leftrightarrow a_{Y_3}, \\; a_{X_1} \\circlearrowright,\\; a_{Y_1} \\circlearrowright.\n\\end{array}\n\\end{equation}\nWe may now verify in the cohomological calculus established above that this action does indeed lead to the execution of the OR-gate in the corresponding GHZ-MBQC. For example, if $y=z=0$ then $\\textbf{i}=\\text{e}$, and $\\textbf{i}(f_1') =f_1'$; and thus $o(0,0) = \\beta_\\Psi(f_1') = 0 = \\text{OR}(0,0)$. Further, if $y=1$ and $z=0$, then $\\textbf{i} =\\textbf{i}_1$, and $\\textbf{i}_1(f'_1) = f_3'$. Thus, $o(1,0) = \\beta_\\Psi(f_3')=1=\\text{OR}(1,0)$. The other two cases are analogous. See Fig.~\\ref{GHZsymm} for illustration of the action of the input group given by Eq.~(\\ref{cflip}).\\medskip\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=8cm]{GroupInp}\n\\caption{\\label{GHZsymm}Input group of the GHZ-MBQC. Displayed is the action of the element $\\textbf{i}_1$ of the input group $Q = \\mathbb{Z}_2\\times \\mathbb{Z}_2$. As described by Eq.~(\\ref{Qghz}), for qubits 1 and 3, $X$ and $Y$ are interchanged under the given input, and the reference context $(X_1,X_2,X_3)$ is thereby changed into $(Y_1,X_2,Y_3)$. }\n\\end{center}\n\\end{figure}\n\nOne point remains to be discussed. When comparing Theorems~\\ref{CPth2} and \\ref{ObeT}, we notice a difference. Theorem~\\ref{CPth2} invokes the cohomology class $[\\beta_\\Psi]$ whereas Theorem \\ref{ObeT} invokes the cocycle $\\beta_\\Psi$ itself. Only the former theorem is therefore truly topological. 
This prompts the question: {\\em{Is there an operationally meaningful way of grouping the MBQC output functions $o$ into equivalence classes $[o]$ that depend only on $[\\beta_\\Psi]$?}}\n\nThat is indeed the case. The equivalence classes $[o]$ of MBQC output functions are motivated and constructed as follows. We note that the signs in the observables $\\{T_a,\\,a\\in E\\backslash E_0\\}$ are a mere convention. If an observable $T_a$, for some $a\\in E\\backslash E_0$, is measured in a given MBQC, then a measurement of $-T_a$ is exactly as hard, because the corresponding projectors are the same. To obtain one measurement from the other, only the labels of the two pointer positions of the measurement device need to be switched. Therefore, the change\n\\begin{equation}\\label{GT}\nT_a \\longrightarrow (-1)^{\\gamma(a)} T_a, \\;\\forall a\\in E\\backslash E_0,\n\\end{equation}\nfor any cochain $\\gamma: C_1(E,E_0) \\longrightarrow \\mathbb{Z}_2$ is an equivalence transformation, or, as it is also called, a gauge transformation. \n\nYet, these transformations have an effect. The cocycle $\\beta_\\Psi$ changes, namely\n$$\n\\gamma: \\beta_\\Psi \\mapsto \\beta_\\Psi + d\\gamma.\n$$\nAnd thus, by Theorem~\\ref{ObeT}, the outputted function $o$ changes too. Functions obtained from one another through such a transformation should be considered computationally equivalent, as was argued above. 
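To make the effect of such a gauge transformation concrete, consider the following sketch (our own toy encoding). Flipping the outcome labels on the edges $a$ with $\gamma(a)=1$ flips the corresponding entries of the measurement record, and the output parity over a face boundary $f$ shifts by exactly $d\gamma(f) = \sum_{a \in \partial f} \gamma(a) \bmod 2$.

```python
# Toy sketch: a gauge cochain gamma relabels pointer positions on selected
# edges; the output parity over a face boundary shifts by d(gamma)(f) mod 2.

def parity(record, boundary):
    return sum(record[a] for a in boundary) % 2

def apply_gauge(record, gamma):
    """Flip the recorded outcome on every edge a with gamma(a) = 1."""
    return {a: s ^ gamma.get(a, 0) for a, s in record.items()}

boundary = ["a1", "a2", "a3"]            # edges of a hypothetical face f
record = {"a1": 1, "a2": 0, "a3": 1}     # a hypothetical measurement record
gamma = {"a3": 1}                        # relabel the pointer on edge a3

o_old = parity(record, boundary)
o_new = parity(apply_gauge(record, gamma), boundary)
d_gamma = sum(gamma.get(a, 0) for a in boundary) % 2
assert (o_new - o_old) % 2 == d_gamma    # output shifts by d(gamma)(f)
```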
It is thus meaningful to group MBQC output functions $o$ into equivalence classes\n$$\n[o(\cdot)]:=\{(\beta_\Psi+d\gamma)(\cdot f_\text{e}),\;\forall \gamma \in C^1({\cal{C}}(E,E_0))\}.\n$$\nWith this definition, Theorem~\ref{ObeT} has the following corollary.\n\begin{Cor}\label{CohoCompCor}\nFor each deterministic and temporally flat $l2$-MBQC, the equivalence class $[o]$ of output functions is fully determined by $[\beta_\Psi]$.\n\end{Cor}\nThus, the gauge-invariant information in an MBQC output function is contained in the same cohomological information that also provides the contextuality proof. \n\medskip\n\n{\em{Example, Part IV.}} In the GHZ-MBQC, we may flip $Y_3 \longrightarrow - Y_3$. As a result, the newly computed function is an AND. Therefore, AND and OR are equivalent w.r.t. MBQC. Considering the whole set of equivalence transformations for this example, we find that there are two equivalence classes of functions on two bits, the non-linear Boolean functions and the linear ones. Each member of the former class boosts the classical control computer of MBQC to computational universality, whereas the latter class has no effect on the computational power at all. \n\nFrom the cohomological perspective, $H^2({\cal{C}}(E,E_0),\mathbb{Z}_2) = \mathbb{Z}_2$, i.e., there are two equivalence classes of cocycles $\beta_\Psi$. The trivial class corresponds to the linear Boolean functions on two bits and the non-trivial class to the non-linear Boolean functions.\n\n\subsection{On the probabilistic case}\n\nIn the previous sections we focussed on deterministic MBQC. Indeed, powerful deterministic quantum algorithms do exist, notably for the Discrete Log problem \cite{MZ}. However, most known quantum algorithms are probabilistic, i.e., they succeed with a probability smaller than one. A cohomological treatment of probabilistic MBQCs is given in \cite{CohoMBQC}, based on group cohomology. 
Here we are content with alerting the reader to the additional layer of difficulty posed by the probabilistic case.\smallskip\n\nLet's trace the restriction to deterministic MBQCs back to its origin. In Theorem~\ref{ObeT}, the central result on the computational side, it is present through the cocycle $\beta_\Psi \in C^2({\cal{C}}(E,E_0),\mathbb{Z}_2)$. This cocycle is defined in Eq.~(\ref{betaPsi}), in terms of the cocycle $\beta \in C^2({\cal{C}}(E),\mathbb{Z}_2)$ and the value assignment $\mu: E_0 \longrightarrow \mathbb{Z}_2$. The value assignment $\mu$ in turn refers to eigenvalues of certain observables related to computational output, of which the resource state $|\Psi\rangle$ is an eigenstate; cf. Eq.~(\ref{ES}). \n\nIn the probabilistic case, the value assignment $\mu$ does not exist in general. Hence, $\beta_\Psi$ is not defined, and we cannot have straightforward probabilistic counterparts of Theorems \ref{CPth2} and \ref{ObeT}.\smallskip\n\nBut the problem is not merely technical; it is conceptual. Consider our running example of the GHZ-MBQC, which executes an OR-gate with certainty. As soon as probabilistic computations are admitted, we may as well say that it evaluates the constant function $y\equiv 1$ with an average success probability of 75 percent. In fact, the same computation executes any 2-bit Boolean function, except $\neg \text{OR}$, with some nonzero probability of success. How can we then say that one particular function is computed while all others are not?\n\n Key to the solution is a group $G$ of symmetry transformations that extends the input group $Q$, in the group-theoretic sense. 
$G$ maps the complex ${\cal{C}}(E,E_0)$ to itself, acting on the observables $T_a$, $a\in E\backslash E_0$ via\n\begin{equation}\label{PhiDef}\ng(T_a) = (-1)^{\tilde{\Phi}_g(a)}T_{ga},\;\;\forall g\in G.\n\end{equation}\nTherein, the phase function $\tilde{\Phi}$ is, by construction, a 1-cocycle in group cohomology.\n\nThere is a further condition on $G$. Namely, the action Eq.~(\ref{PhiDef}) of $G$ on the set of observables $\{\pm T_a, a\in E\backslash E_0\}$ induces an action on the output function $o$, and we require $o$ to be invariant under this action. It turns out that, given $G$, this invariance condition constrains $o$ up to an additive constant \cite{CohoMBQC}. Thus, the output function $o$ is {\em{defined}} through a symmetry group.\n\nFurthermore, $o$ can be expressed in terms of the phase function $\tilde{\Phi}$, and a contextuality proof can be given in terms of a group cohomology class derived from $\tilde{\Phi}$. As a result, Theorems~\ref{CPth2} and \ref{ObeT} have counterparts in the probabilistic case. They are given as Theorem 5 in \cite{Coho} and Theorem 6 in \cite{CohoMBQC}, respectively. The probabilistic counterpart of Corollary~\ref{CohoCompCor} is Corollary~2 in \cite{CohoMBQC}.\n\n\section{Temporal order}\label{TO}\n\nThe connection between contextuality and $l2$-MBQC described by Theorem~\ref{NLPCrel} is completely general. It applies to deterministic and probabilistic measurement-based computations, as well as to temporally flat and temporally ordered ones. It is only the cohomological description of this connection that is presently restricted to temporally flat computations. This is a technical limitation, and the purpose of this section is to outline an approach for overcoming it. \n\nThe idea is to not change the cohomological description at all, but to enlarge the complex ${\cal{C}}(E,E_0)$ by additional observables which take care of the temporal ordering. 
We illustrate this approach with the setting of the ``iffy'' proof \cite{Exa}.\n\nIn Section~\ref{Iffy1} we review the iffy contextuality proof, largely following the original exposition \cite{Exa}. We then explain why the signature feature of iffiness is incompatible with applications to MBQC. In Section~\ref{Iffy2} we present a cohomological contextuality proof for the iffy scenario that is MBQC-compatible. This proof includes temporal order, yet is covered by Theorem~\ref{CPth2} without any modification.\n\n\n\subsection{The ``iffy\" contextuality proof}\label{Iffy1}\n\nTo get started, we require a simple example of a contextuality proof with temporal order, a counterpart to the non-adaptive GHZ proof. Luckily, Ref.~\cite{Exa}, Section 6, offers one; in fact, it offers a whole family of examples. We begin by writing them in a stabilizer notation that suits our purpose. \n\nThe examples consist of a three-qubit resource state $|\Psi\rangle$, and local measurement settings for the three qubits. For any even integer $N$, choose\n$$\n|\Psi\rangle \sim |00\rangle |\nu\rangle + | 11\rangle |\omega\rangle,\n$$\nwhere\n$$\n\begin{array}{rcl}\n|\nu\rangle &=& \cos \frac{\lambda}{2}|0\rangle + \sin\frac{\lambda}{2}|1\rangle,\\\n|\omega\rangle &=&\sin \frac{\lambda}{2}|0\rangle + \cos\frac{\lambda}{2}|1\rangle,\n\end{array}\n$$\nand $\lambda = \pi\/2 - \pi\/N$. This defines the resource state. Now the measurements: qubit 3 will be measured in the eigenbasis of $X$ or $Y$, and qubits 1 and 2 will be measured in the eigenbases of any of the operators\n\begin{equation}\label{XkDef}\nX_k:= \cos\left( k\frac{\pi}{N}\right) X + \sin\left( k\frac{\pi}{N}\right)Y,\;\; \forall k =0,.., 2N-1.\n\end{equation}\nNote that $X_{N+k} = - X_k$, such that we really only need the observables $X_0$, .. 
, $X_{N-1}$.\n\nDenote by $P_{y,\pm}$ the projectors onto the eigenstates of $Y$ with positive and negative eigenvalue, respectively, and define the operators\n\begin{equation}\label{tauX}\n\begin{array}{rcll}\n\tau_k &:=& X_{N-1-k}^{(1)} \otimes X_{k}^{(2)}\otimes P_{y,+}^{(3)} + X_{N+1-k}^{(1)} \otimes X_{k}^{(2)}\otimes P_{y,-}^{(3)},& k=0,.., N-1,\\\n\overline{X}_k &:=& X^{(1)}_{N-k}\otimes X^{(2)}_k \otimes X^{(3)},& k=0,.., N-1.\n\end{array}\n\end{equation}\nBy direct calculation, we can verify that\n\begin{subequations}\label{Stab}\n\begin{align}\label{StabX}\n\overline{X}_k |\Psi\rangle &=- |\Psi\rangle,\; \forall k,\\ \n\label{StabY}\n\tau_k |\Psi\rangle &= - |\Psi\rangle,\; \forall k.\n\end{align}\n\end{subequations}\nThe measurement strategies considered in the contextuality proof have temporal order. Namely, first qubit 3 is measured, in the $X$ or $Y$ basis. In the latter case, the further choice of the measurement bases for qubits 1 and 2 depends on the outcome of the measurement at 3.\n\nFrom Eq.~(\ref{Stab}) we can read off the constraints on the non-contextual hidden variable model, which are provided in \cite{Exa}. Denote by $a_k$ and $b_k$ the binary measurement outcomes on qubits 1 and 2, respectively, given the measured observable $X_k$, and by $c_0$ ($c_1$) the outcome on qubit 3 if the measured observable is $X$ ($Y$). If these values form the value assignment of an ncHVM, they must satisfy the constraints\n\begin{equation}\label{ValAss}\n\begin{array}{rcll}\na_i \oplus b_j \oplus c_0 &=& 0, & \forall i,j\;\text{s.th. } i+j =0,\\\na_i \oplus b_j \oplus c_0 &=& 1, & \forall i,j\;\text{s.th. } i+j =N,\\ \\\na_i \oplus b_j &=& 0, & \forall i,j\;\text{s.th. } i+j +(-1)^{c_1} =0,\\\na_i \oplus b_j &=& 1, & \forall i,j\;\text{s.th. } i+j +(-1)^{c_1} =N.\n\end{array}\n\end{equation}\nThe contextuality proof proceeds from there, as usual, by adding up equations mod 2. 
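Before spelling this out, the inconsistency can also be confirmed by brute force. The sketch below (our own, illustrative) enumerates all $2^9$ candidate value assignments for the case $N=4$, $c_1=0$, and finds that none satisfies the constraints Eq.~(\ref{ValAss}); the branch $c_1=1$ is analogous.

```python
from itertools import product

# Brute-force check of Eq. (ValAss) for N = 4 and the branch c_1 = 0:
# no assignment (a_0..a_3, b_0..b_3, c_0) satisfies all constraints mod 2.
N = 4

def satisfies(a, b, c0):
    # Upper half: a_i + b_j + c_0 = 0 for i + j = 0, and = 1 for i + j = N.
    if (a[0] + b[0] + c0) % 2 != 0:
        return False
    for i in range(1, N):
        if (a[i] + b[N - i] + c0) % 2 != 1:
            return False
    # Lower half, c_1 = 0: a_i + b_j = 1 whenever i + j + 1 = N.
    for i in range(N):
        if (a[i] + b[N - 1 - i]) % 2 != 1:
            return False
    return True

solutions = [bits for bits in product((0, 1), repeat=2 * N + 1)
             if satisfies(bits[:N], bits[N:2 * N], bits[2 * N])]
assert solutions == []   # no non-contextual value assignment exists
```

The closed-form version of this contradiction is the mod-2 summation of the picked equations.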
This will be discussed below.\medskip\n\nWe now show how the relations Eq.~(\ref{ValAss}) are derived from the stabilizer relations Eq.~(\ref{Stab})\footnote{The original derivation of Eq.~(\ref{ValAss}) in \cite{Exa} uses a different formalism which we do not reproduce here.}. The two relations at the top of Eq.~(\ref{ValAss}) follow straightforwardly from Eq.~(\ref{StabX}); here we focus on the relations at the bottom of Eq.~(\ref{ValAss}), which derive from Eq.~(\ref{StabY}).\n\nFirst, for the observables of Eq.~(\ref{tauX}), with Eq.~(\ref{Stab}) we have the following values,\n\begin{equation}\ns_{\tau_k}= s_{\overline{X}_k} =1,\;\;k=0,.., N-1,\n\end{equation}\ncorresponding to eigenvalues $(-1)^1=-1$. Now consider separately the two cases of $c_1=0$ and $c_1=1$, respectively.\n\nCase I: $c_1=0$. We now want to argue that, in this case, the observables\n$\n\tau_k(0) = X_{N-1-k}^{(1)} \otimes X_{k}^{(2)}\n$\nare also assigned the value 1, \n$$\ns_{\tau_k(0)} =1, \;\; k=0,..,N-1.\n$$\nThe argument is as follows. If $c_1=0$, then this value could be established by measuring $Y^{(3)}$. According to quantum mechanics, the post-measurement state would be $|y,+\rangle:=P_{y,+}^{(3)}|\Psi\rangle$. For this state it holds that \n\begin{equation}\label{eqChain}\n\tau_k(0)|y,+\rangle = \tau_k|y,+\rangle = \tau_k P_{y,+}|\Psi\rangle = P_{y,+} \tau_k|\Psi\rangle = - P_{y,+} |\Psi\rangle =-|y,+\rangle. \n\end{equation}\nFor later reference, note that in the above chain of equalities we have used the properties\n\begin{equation}\label{ref}\n \tau_k(0)P_{y,+} = \tau_k P_{y,+}\mbox{ and } [\tau_k,P_{y,+}]=0.\n\end{equation}\nBy Eq.~(\ref{eqChain}), $s_{\tau_k(0)} =1$, for all $k$, as claimed. Further, by standard arguments, $s_{\tau_k(0)} =a_{N-1-k} \oplus b_k$. 
Combining the last two statements,\n$$\na_{N-1-k} \\oplus b_k = 1,\\;\\;\\forall k=0,.., N-1.\n$$\nThis provides the lower part of Eq.~(\\ref{ValAss}) for the case of $c_1=0$.\n\nCase II: $c_1=1$. A completely analogous argument establishes the bottom half of Eq.~(\\ref{ValAss}) for $c_1=1$.\\smallskip\n\nEq.~(\\ref{ValAss}) is thus established as a set of constraints that any value assignment $\\{a_k,b_l,c_m\\}$ needs to satisfy. We now complete the proof, focussing on Case I, $c_1=0$. Case II is analogous. \n\nWe assume that a value assignment exists. From the upper half of Eq.~(\\ref{ValAss}), we pick the equation $a_0 +b_0+c_0 \\mod 2=0$, and the equations $a_k+b_{N-k}+c_0 \\mod 2 =1$, for $k=1,..,N-1$. From the lower half we pick the equations $a_l + b_{N-1-l} =1$, for $l=0,..,N-1$. Summing those equations, we obtain $N c_0 +2\\sum_{k=0}^{N-1}(a_k+b_k) = 2N-1\\;\\;(\\text{mod}\\; 2)$. Since $N$ is even, this is a contradiction. $\\Box$ \\medskip\n\nNow that we have presented the iffy contextuality proof, let's take a step back and ask two questions.\n\n(1) {\\em{Where is temporal order in this contextuality proof?}}---Suppose one wants to test the correlations of Eq.~(\\ref{tauX}) through local measurement. The correlations are labeled by an integer $k \\in \\mathbb{Z}_N$, and a further binary integer $l\\in \\mathbb{Z}_2$ that decides whether qubit \\#3 is measured in the $X$-basis ($l=0$) or in the $Y$-basis $(l=1)$. Given the input $(k,l)$, the pattern of local measurements to test the correlations is fully specified. Therein, if $l=1$, the measurement basis for qubit \\#1 depends on the outcome $c_1$ obtained on qubit \\#3, cf. Eq.~(\\ref{tauX}), upper line. Thus, qubit \\#1 must be measured {\\em{after}} qubit \\#3. 
This is the same temporal ordering due to adaptive measurement as occurs in MBQC.\smallskip\n\n\begin{figure}\n\begin{center}\n\begin{tabular}{lcl}\n(a) && (b)\\\n\includegraphics[width=4cm]{surf4} & &\includegraphics[width=4cm]{surf5}\n\end{tabular}\n\caption{\label{topol}Chain complexes in the iffy proof, for $N=4$. (a) the complex ${\cal{C}}^{(0)}$ for $c_1=0$ and (b) the complex ${\cal{C}}^{(1)}$ for $c_1=1$. In either case, the four edges labeled ``$c_0$'' correspond to the same observable $X^{(3)}$, and are identified. The faces $f$ on which $\beta_\Psi(f)=1\, (0)$ are shown in color (white).}\n\end{center}\n\end{figure}\n\n(2) {\em{Is the iffy proof topological?}}---Yes, but with a caveat. The value assignment for $c_1$ is not part of the topological description. Instead there are two separate topological descriptions, one for $c_1=0$ and one for $c_1=1$. They are depicted in Fig.~\ref{topol}: (a) the complex ${\cal{C}}^{(0)}$ for $c_1=0$ and (b) the complex ${\cal{C}}^{(1)}$ for $c_1=1$. In both cases there is a surface ${\cal{F}}^{(c_1)}$ comprising all of the faces displayed. Those surfaces have the property that $\partial {\cal{F}}^{(c_1)} =0$. In both cases it holds that $\beta_\Psi^{(c_1)}({\cal{F}}^{(c_1)})=1$, which, together with the former statement, implies that $\left[\beta_\Psi^{(c_1)}\right]\neq 0$, $\forall c_1\in \mathbb{Z}_2$. The iffy proof thus has two cohomological parts, conditioned by the value of $c_1$,\n\begin{equation}\label{IP}\n\text{Iffy\,Proof} =\left\{ \mathbb{Z}_2 \ni c_1 \mapsto \left({\cal{C}}^{(c_1)}, \beta_\Psi^{(c_1)}\right)\right\}.\n\end{equation}\n\nThe conditioning on $c_1$ stands in the way of using the iffy proof as a template for describing temporally ordered MBQCs. To see why this is so, let's recap the earlier topological proofs. 
There, the assumption of a noncontextual value assignment $s$ is contradicted by $[\beta_\Psi]\neq 0$, and $\beta_\Psi$ is an object that is well-defined in quantum mechanics. Beyond the contextuality witness (see Theorem~\ref{CPth2}), $\beta_\Psi$ also contains the function computed in MBQC (see Theorem~\ref{ObeT}).\n\nThe counterpart of $\beta_\Psi$ in the present iffy proof is the quantum-classical hybrid structure given by Eq.~(\ref{IP}). It consists of the quantum-mechanically valid parts ${\cal{C}}^{(c_1)}$, $\beta_\Psi^{(c_1)}$, and one element, $c_1$, of the non-contextual value assignment, so far assumed to exist. (Recall that ruling out the existence of such a value assignment is the very purpose of the contextuality proof.) Unlike $\beta_\Psi$ in the former cases, as a whole this hybrid object is not compatible with quantum mechanics. It is thus not a suitable basis for a description of MBQC. Now that we have understood this, we seek to modify the iffy proof such that it becomes compatible with measurement-based quantum computation.\n\n\subsection{Deiffifying the iffy proof}\label{Iffy2}\n\nHere we present a topological contextuality proof for the above iffy scenario that uses a complex of the type defined in \cite{Coho}. The proof works in exactly the same way as in the temporally flat scenarios it was previously applied to.\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=8cm]{Complex1.jpg}\n\caption{\label{Comp1}Complex for the cohomological contextuality proof of the iffy scenario. There are four edges corresponding to $X^{(3)}$, and two each for $\sigma^\pm_k$, for various values of $k$, and for $Y^{(3)}$. Such edges are identified. 
The faces $f$ coloured in red have $\beta_\Psi(f)=1$, and the white faces $g$ have $\beta_\Psi(g)=0$.}\n\end{center}\n\end{figure}\n\nWe define a few extra observables, for all $k\in \mathbb{Z}_{2N}$,\n\begin{subequations}\label{EpSig}\n\begin{align}\n\epsilon_k &:= \frac{I^{(3)}+Y^{(3)}}{2} \otimes X^{(1)}_{k-1} + \frac{I^{(3)}-Y^{(3)}}{2} \otimes X^{(1)}_{k+1},\\\n\sigma^+_k &:= \frac{I^{(3)}+Y^{(3)}}{2} \otimes X^{(1)}_{k-1} + \frac{I^{(3)}-Y^{(3)}}{2} \otimes I^{(1)},\\\n\sigma^-_k &:= \frac{I^{(3)}+Y^{(3)}}{2} \otimes I^{(1)} + \frac{I^{(3)}-Y^{(3)}}{2} \otimes X^{(1)}_{k+1}.\n\end{align}\n\end{subequations}\nThese are correlated observables on qubits \#1 and \#3. They can also be considered as unitary gates in which qubit \#3 is the control and qubit \#1 the target. This is how the original iffiness enters into our topological proof, but in a fully quantum fashion. \n\nThe stabilizer relations Eq.~(\ref{Stab}) can be expressed in terms of the observables defined in Eq.~(\ref{EpSig}); only the first relation changes:\n\begin{subequations}\label{StabRel2}\n\begin{align}\n\label{SR2a}\n\epsilon_{N-k} \otimes X^{(2)}_k \, |\Psi\rangle &= - |\Psi\rangle,\\\n\label{SR2b}\nX_{N-k}^{(1)} \otimes X^{(2)}_k X^{(3)}\, |\Psi\rangle &=- |\Psi\rangle. 
\n\\end{align}\n\\end{subequations}\nFurther, the observables $\\epsilon_k$, $\\sigma^\\pm_k$ satisfy the following {\\em{recoupling relations}}:\n\\begin{subequations}\\label{esRel}\n\\begin{align}\n\\label{RelA}\n\\epsilon_k &= \\sigma_k^+ \\sigma_k^-,\\\\\n\\label{RelB}\nX^{(1)}_k &= \\sigma^+_{k+1}\\sigma^-_{k-1},\\\\\n\\label{RelC}\n-Y^{(3)} &= \\sigma^+_k \\sigma^+_{N+k},\\\\\n\\label{RelD}\nY^{(3)} &= \\sigma^-_k \\sigma^-_{N+k}.\n\\end{align}\n\\end{subequations}\nFinally, we note the commutation relations\n\\begin{subequations}\\label{CommRel}\n\\begin{align}\n[\\sigma^+_k,\\sigma^-_l]&=0,\\;\\; \\forall k,l \\in\\mathbb{Z}_{2N},\\\\\n[\\sigma^\\pm_k,Y^{(3)}]&=0,\\;\\; \\forall k, \\in\\mathbb{Z}_{2N}.\n\\end{align}\n\\end{subequations}\nWith these relations, the complex shown in Fig.~(\\ref{Comp1}) is well-composed. I.e., all faces correspond to triples of commuting operators that multiply to $\\pm I$. \n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=5cm]{Complex2}\n\\caption{\\label{Comp2}The complex for the cohomological contextuality proof of the iffy scenario, in a different colouring. Orange: faces corresponding to the stabilizer relation Eq.~(\\ref{SR2a}), purple: faces stemming from the stabilizer relation Eq.~(\\ref{SR2b}), white: faces invoking the recoupling relations Eq.~(\\ref{esRel}).}\n\\end{center}\n\\end{figure}\n\nWe first consider the case of $N=4$ which is displayed in Fig.~\\ref{Comp1}, and then the general case. Denote ${\\cal{F}}:=\\sum_i f_i$, i.e. ${\\cal{F}}$ is the complete surface shown. It is easily verified that, after identifying the outer edges, $\\partial {\\cal{F}}=0$. Further, there are 9 faces in ${\\cal{F}}$ on which $\\beta_\\Psi$ evaluates to 1, hence $\\beta_\\Psi({\\cal{F}}) = 1\\mod 2$. Both facts together imply that $[\\beta_\\Psi]\\neq 0$, and hence the arrangement is contextual. $\\Box$\\medskip\n\nWe now turn to the general case of even $N$. 
The number of edges labeled by $X^{(3)}$ in the boundary of the disc (shown in Fig.~\ref{Comp1} for $N=4$) is even if and only if $N$ is even. Hence $\partial {\cal{F}} = 0 \mod 2$ if and only if $N$ is even. We still need to establish $\beta_\Psi({\cal{F}}) = 1 \mod 2$. So let's count the number of faces $f$ with $\beta_\Psi(f)=1$. Such faces arise through the relations Eq.~(\ref{StabRel2}), and there are $2N$ of them. Hence their contribution cancels mod 2. \n\nThere is one more contribution to $\beta_\Psi({\cal{F}})$. For guidance, we look at Fig.~\ref{Comp1} and follow the red arrow in the counter-clockwise sense. The first observable we encounter that has non-trivial support only on qubit \#1 is $X_0^{(1)}$. The next such observable is $X_1^{(1)}$, then $X_2^{(1)}$, and so forth. Going around the disc, we increase the value of $k$ for such observables $X_k^{(1)}$ in increments of 1. Completing the circle, we arrive at $X_N^{(1)}$, which equals $-X_0^{(1)}$ by virtue of Eq.~(\ref{XkDef}). $X^{(1)}_0$ is already the label of the start-stop edge, and hence we obtain an additional factor of $-1$ (that is why, in Fig.~\ref{Comp1}, the color of the last face before completing the circle is white, $\beta_\Psi(f_\text{last})=0$). We have thus overcounted the contributions stemming from Eq.~(\ref{StabRel2}) by 1, which we now correct for. There are no other contributions, hence $\beta_\Psi({\cal{F}})=1$. \n\nNow assume the existence of a value assignment $s=(a_k,b_l,c_m)$, i.e., $\beta_\Psi=ds$. Then,\n$$\n1 = \beta_\Psi({\cal{F}}) = ds({\cal{F}}) = s(\partial_R {\cal{F}}) = s(0) = 0.\n$$ \nContradiction. Thus, no non-contextual value assignment exists. $\Box$\medskip\n\nTo conclude, let's compare the above proof for the iffy scenario with the original iffy proof. The ``iffiness\" is gone. 
The algebraic structure Eq.~(\\ref{IP}) underlying the iffy proof is replaced by a simpler one, namely a relative chain complex ${\\cal{C}}(N)$ with 2-cocycle $\\beta_\\Psi(N)$ living in it ($N$ even). This is exactly the same structure as in the parity-based contextuality proofs without temporal order. \n\nWe achieved this reduction to the prior case by introducing additional observables in the chain complex, namely $\\{\\epsilon_k, \\sigma^+_k, \\sigma^-_k\\}$ as defined in Eq.~(\\ref{EpSig}), to represent the temporal ordering. We propose this as a blueprint for a general method of constructing cohomological contextuality proofs describing temporally ordered measurement-based quantum computations. \n\n\\section{Conclusion}\\label{Concl}\n\nIn this paper, we have explained the contextuality--MBQC--cohomology triangle of Diagram~(\\ref{Triangle}). Its upper corners, contextuality and measurement-based quantum computation, represent the phenomenology of interest; and the lower corner, cohomology, the mathematical method to describe it. The link between MBQC and contextuality is provided by Theorems~\\ref{NLPCrel} and \\ref{T1}, the link between contextuality and cohomology by Theorem~\\ref{CPth2}, and the link between MBQC and cohomology by Theorem~\\ref{ObeT} and Corollary~\\ref{CohoCompCor}. Finally, in the center of the diagram sits the cocycle class $[\\beta_\\Psi]$, an element of the second cohomology group of the underlying chain complex. It contains the function computed in a given MBQC up to gauge equivalence, and the corresponding contextuality proof. \n\n\nA limitation of the cohomological framework established to date is that it only applies to temporally flat MBQCs, which form a small subclass. Here we made a first step towards describing MBQCs and contextuality proofs with temporal order in a cohomological fashion, by providing a cohomological contextuality proof in one concrete temporally ordered setting, the so-called ``iffy'' scenario \\cite{Exa}. 
Extending the cohomological formalism to all MBQCs with proper temporal order is a main subject of future research on the MBQC-contextuality connection.\medskip\n\n\noindent\n{\em{Acknowledgments.}} The author thanks the Yukawa Institute for Theoretical Physics Kyoto (YITP) for their hospitality. Part of this work was performed there. This work is supported by NSERC.\n\n\section{Travel log}\label{TL}\n\nAs I learned over the years, the 8th Conference on Quantum Physics and Logic, held in Nijmegen, the Netherlands in November 2011, is remembered fondly by many participants, for all sorts of reasons. Here I'd like to describe my journey towards this conference, how I spiralled out of it, and my thoughts for the future.\n\nMy journey began in Munich in 2003, the final year of my PhD. Hans Briegel and I had discovered the one-way quantum computer, a scheme of measurement-based quantum computation (as it is now known), in 2000, and had answered the obvious first question---universality. Quite naturally, the universality proof was based on a mapping to the circuit model. But, besides proving the point, the mapping seemed inadequate in many ways. For example, the temporal order among the measurements in MBQC was different and flatter than the mapping would suggest: all Clifford gates can be implemented in the first round of measurement, before all other gates, irrespective of where they are located in the simulated circuit. This and similar observations prompted us to look for a description of MBQC outside the realm of circuit simulation, and, in the first place, for the basic structures upon which such a description could be built. \n\nThere was, and is, no manual for how to approach this question. We are left to our own intuition and judgement. A structural element we focussed on early was the set of correlations among measurement outcomes that yields the computational result. 
Individually, the measurement outcomes in MBQC are completely random, and meaningful information can only be gleaned from certain correlations among them. What made the analysis of these correlations simultaneously difficult and interesting was their non-stabilizerness, i.e., the fact that the correlator observables are in general not mere tensor products of Pauli operators $X$, $Y$, $Z$.\n\nFault-tolerance seemed a path to make progress on these correlations. I figured that it could not be established for MBQC without understanding the structure of these correlations first. At the time, fault-tolerance with a high error threshold was a problem with a price tag. In addition, when solved for MBQC, we could surely learn something from the solution---a goldilocks problem. \n\nWhen first putting non-stabilizer quantum correlations on my map in early 2003, unbeknownst to me, someone in far away Moscow was finding out something about them: Sergey Bravyi. The next year we would be office mates at Caltech. \n\nHaving arrived at Caltech in October 2003, it took about two years until, resting upon the scraps of two unsuccessful attempts, I established fault-tolerant universal MBQC with 3D cluster states \cite{FT1},\cite{FT2} (joint work with Jim Harrington and Kovid Goyal). Price tag fetched: the fault-tolerance threshold was high, and the whole construction elegant. \n\nAnd yet, one thing didn't completely fall into place---the learning-from-the-solution part. As noted above, I had stipulated that in order to establish fault-tolerance for MBQC, the structure of the non-stabilizer correlations would need to be understood first. It panned out differently. Those correlations did not need to be understood, and I hadn't understood them. This realization is one of three waypoints encountered at Caltech on my journey to Nijmegen. \n\nHowever, some correlations in MBQC---those which provide the error-correction capability for Clifford gates---could be understood very well. 
Namely, it turned out that those correlations have a cohomological underpinning. 3D cluster states can be described by a pair of three-dimensional chain complexes, related by Poincar\\'{e} duality. The measurement outcomes live on the respective faces, and are thus represented by 2-cochains $s$. The cluster state stabilizer implies that, in the absence of errors, the measurement record satisfies the constraint $s(\\partial v)=0$, for all volumes $v$, and hence $s$ is a 2-cocycle. Furthermore, the output of the MBQC is given by evaluations $s(f)$, for non-trivial 2-cycles $f$. Fault-tolerance and computation on 3D cluster states are thus a matter of cohomology. This finding is the second Caltech waypoint.\n\nIn 2004, Sergey Bravyi and Alexei Kitaev developed ``magic state distillation'' \\cite{BK}, an efficient and robust technique for implementing non-Clifford gates fault-tolerantly. It was eventually incorporated into fault-tolerant MBQC, but its main effect on me was a different one. Magic state distillation operates by exploiting non-Pauli quantum correlations, such as those found in Reed-Muller quantum codes. Save the aspect of temporal order, these were precisely the quantum correlations I wanted to understand in the first place! \n\nA shortcut seemed to open: What about using quantum Reed-Muller code states as computational resource states in MBQC---could toy computations exhibiting non-trivial correlations be constructed this way? I was eager to try, and settled on the following conditions for Reed-Muller toy MBQCs: (i) The classical side processing relations Eq.~(\\ref{CCR}) have to be obeyed; in particular, the input values form a vector space, as required by Eq.~(\\ref{CCR_in}). (ii) The outcome is deterministic for every admissible value of input, and (iii) the MBQC is non-Clifford. Further, the criterion for an ``interesting'' computation was that it computed a non-linear Boolean function.
Quite a low bar, but justified as it exceeds what the classical side processing permits by itself.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=16cm]{Maple.pdf}\n\\caption{\\label{Maple}Numerical experiment on toy MBQCs using Reed-Muller quantum code states as computational resources. Shown is the output for the example based on a 31-qubit punctured Reed-Muller code. All tests worked out---the Boolean function computed was total and non-linear.}\n\\end{center}\n\\end{figure}\n\nArmed with those criteria, I got my laptop running. I started with the 15-qubit punctured Reed-Muller quantum code, and it didn't work. So I went on to the 31-qubit punctured Reed-Muller code, which, given that the next came at 63, I knew was the largest I could handle. I held my breath. There was deterministic output on 2048 inputs---a power of 2, a good sign. The output was imbalanced, hence the computed function was non-linear. A final check remained to be made: did the inputs form a vector space, as required by Eq.~(\\ref{CCR_in})? That worked out too! I was over the moon. \n\nSometime in the subsequent months, while finalizing the fault-tolerance work, it must have trickled in that being excited about such toy quantum computations required a very particular taste or preparation. They didn't achieve anything of real computational value. At any rate, the finding of these Reed-Muller toy MBQCs is my third waypoint at Caltech.\n\nIn 2008, after I had moved to the University of British Columbia by way of the Perimeter Institute (PI), at a workshop at PI I heard Dan Browne speak about similar toy MBQCs. In work by Janet Anders and him \\cite{AB}, they had considered MBQCs on a Greenberger-Horne-Zeilinger state, satisfying the above conditions (i) and (ii). Not enforcing condition (iii) (non-Cliffordness) allowed them to get by with 3 qubits rather than 31.
But much, much more importantly, they managed to relate their 3-qubit MBQC to something known and valued in the world of physics: Mermin's star. Thus the MBQC--contextuality link saw the light of day. Learning of this result, I was ready to go to QPL 2011, although the conference was still 3\\,1\/2 years ahead.\\medskip\n\nFinally, being at QPL 2011 in Nijmegen, what made my day was a talk by Samson Abramsky, Shane Mansfield and Rui Barbosa on ``The Cohomology of Non-Locality and Contextuality''. It had taken me quite a bit of effort to make it to the conference---teaching had to be rescheduled and so on. But I boarded the return plane in Amsterdam with a swagger: very, very worth the trouble. Although, honestly, in actual terms I had not learned all that much. I had understood precisely one slide of Shane Mansfield's presentation, and that was the title slide. What my journey through Caltech and PI had prepared me for was to see significance in the words ``contextuality'' and ``cohomology'' appearing side by side. I also somehow managed not to be completely bypassed by Mansfield's cohomological explanation of the GHZ scenario, at least insofar as I noted the argument's existence. Of course I tried to chase down Mansfield and Barbosa after their talk, but they seemed quite busy answering other calls.\\medskip\n\nFor me, the upshot of Nijmegen was that a cohomological theory of MBQC was in range, making sense of all the known toy examples and hopefully beyond. To get started, all I needed to do was to get to grips with the Abramsky--Mansfield--Barbosa paper \\cite{A2}, which finally happened in the spring of 2012. Then it turned out that their cohomological explanation of the GHZ example did not quite provide the desired connection to MBQC. The latter required a cohomological interpretation of precisely Mermin's argument for the GHZ scenario, not merely a cohomological explanation of that scenario.
And so, with my collaborators Cihan Okay, Stephen Bartlett, Sam Roberts and Emily Tyhurst, we set out to define our own cohomological framework. I do not need to describe the ensuing work here, since I already did in the previous sections.\\medskip\n\nThis brings me to my thoughts for the future. Regarding measurement-based quantum computation, the recent investigations into its structure---contextuality as we discussed it here, computational phases of matter \\cite{screen}--\\cite{Archi} and temporal order \\cite{CompMod}, \\cite{Gflow}, \\cite{TO_sym}---have to date remained separate. And yet they share a common ingredient at their cores: symmetry. I'm confident that these investigations will be unified into a single framework in the coming years, and that something new will spring from it.\n\nTo think about the future of our field more broadly, let's take a really long run-up and zoom right into the year 1842. Ada, Countess of Lovelace and assistant to the British computing pioneer Charles Babbage, had just invented the notion of the computer program. Also, at a time when everybody around her saw the future of computation in calculating trajectories of cannon balls, she had the fundamental insight that computers can process not only numbers but symbolic information of any kind---musical notes, images, text \\cite{Innovators}. Her insight lives on today in digital radio and television, the internet, Maple, the Google search engine, and countless other inventions of the information age. \n\nBut quantum computation extends beyond this line of thought. Quantum information is not ``symbolic''. Due to the irreversibility of quantum measurement, it cannot be perceived by looking at it. And with the limits of the reigning paradigm exposed, a new era of computation can begin---at least in the skunkworks.
On the theory side of it, whether one is thinking about measurement-based quantum computation or the circuit model, essentially everything boils down to one thing: quantum algorithms. Towering achievements such as Shor's factoring not\\-with\\-standing, we seem to have difficulty inventing new quantum algorithms, and inventing them remains largely a matter of intuition. \n\n\\begin{center}\nWhat would Ada's insight be today?\n\\end{center}\n\n\\newpage\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nOscillating time series are common in applications and are\ncharacterized by series of patterns of an upward trend followed\nby a downward trend. When oscillating time series do not exhibit apparent periodicity, such as those generated by chaotic systems, the prediction problem basically concerns the time and magnitude of the peaks and troughs, as the results of three time series competitions showed \\cite{Weigend94,Suykens98,ESTSP07}. Interestingly, the\nwinning prediction schemes in the first two competitions were\nbased on a local prediction model (with rather involved\nmodifications of the standard nearest neighbor prediction\napproach).\nLocal models stem from dynamical systems and chaos theory, are computationally efficient, and perform as well as more\ncomplicated black-box models, such as neural networks, in the\nprediction of irregular time series \\cite{Kantz97}. For multi-step ahead prediction, a higher embedding dimension $M$ is typically required. For a fixed time delay $\\tau$, the reconstructed points span a time window of length $\\tau_w=(M-1)\\tau$. This should be large enough to account for the mean orbital period of the underlying trajectory, i.e. $\\tau_w$ should cover the period of an oscillation or a pattern of oscillations \\cite{Kugiumtzis96}.\n\n\n\nTurning point prediction is of great practical interest in many applications, such as finance \\cite{Bao08}.
A recently developed approach attempts to model oscillating time series from low-dimensional systems with the so-called peak-to-peak dynamics \\cite{Piccardi08}. This approach relies on simple one- or two-dimensional maps for the peaks. In \\cite{Kugiumtzis08b}, it was shown that the prediction of\nturning points with local models is improved using state space\nreconstruction on the time series of turning points at a lower embedding dimension $m$. Here, we extend the state space reconstruction to include also the time series of the times of the turning points. This is the setting of local dynamic regression, where a local model on two time series (for magnitudes and times of turning points) is built in order to predict the magnitudes and times of turning points.\n\n\\section{State Space Reconstruction of Turning Points}\n\nSuppose an oscillating time series of length $N$,\n$\\{x(t)\\}_{t=1}^N$, is observed at a sampling time $\\tau_s$. A sample $y_i=x(t_i)$ is a turning point of $\\{x(t)\\}_{t=1}^N$ at time step $t_i$ if it is the minimum or maximum of all samples in $[t_i-p,t_i+p]$, for a scale parameter $p$.
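To make the detection rule concrete, here is a minimal sketch in Python (our own illustration, not code from the paper; the helper name `turning_points` and the simple alternation handling are our choices, and plateaus or ties would need extra care on noisy data):

```python
import numpy as np

def turning_points(x, p):
    """Alternating turning points of x: a sample x[i] is a turning point
    if it is the maximum or minimum of the window x[i-p : i+p+1]."""
    y, t, kinds = [], [], []
    for i in range(p, len(x) - p):
        w = x[i - p:i + p + 1]
        if x[i] == w.max():
            kind = +1                      # peak
        elif x[i] == w.min():
            kind = -1                      # trough
        else:
            continue
        if kinds and kind == kinds[-1]:    # keep only alternating extrema
            continue
        y.append(x[i]); t.append(i); kinds.append(kind)
    return np.array(y), np.array(t)

# three periods of a sine wave, sampled 100 points per half-period
y, t = turning_points(np.sin(np.linspace(0, 6 * np.pi, 601)), p=3)
z = np.diff(t)  # trend durations as first differences, z_i = t_i - t_{i-1}
```

On this noise-free example the detected magnitudes alternate near $\pm 1$ and all trend durations equal half a period (100 samples), matching the construction of $\{y_i\}$ and $\{z_i\}$ described next.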
Scanning all samples of $\\{x(t)\\}_{t=1}^N$, the time series $\\{y_i\\}_{i=1}^n$ and $\\{t_i\\}_{i=1}^n$ of magnitudes and times of the alternating turning points, respectively, are derived.\nInstead of the times of the turning points, we derive the durations of the upward and downward trends from the first differences $z_i=t_i-t_{i-1}$, giving the time series $\\{z_i\\}_{i=2}^n$.\nThus two successive samples of $\\{y_i\\}_{i=2}^n$ together with the synchronous samples of $\\{z_i\\}_{i=2}^n$ describe an oscillation of $\\{x(t)\\}_{t=1}^N$, as shown in Fig.~\\ref{fig:oscext}.\n\\begin{figure}[h!]\n\\hspace{7mm} \\includegraphics[height=35mm]{hypxoscext.eps}\n\\caption{Time series of turning point magnitudes and trend durations derived from an oscillating time series.}\n \\label{fig:oscext}\n\\end{figure}\nThe bivariate time series $\\{y_i,z_i\\}_{i=2}^n$ compresses the information in $\\{x(t)\\}_{t=1}^N$ with some loss of information depending on the pattern of the samples between the turning points. In the limit where the upward and downward trends are linear there is no loss of information, as any sample $x(t_i-k)$ between two turning points $x(t_{i-1})$ and $x(t_i)$, where $k \\in \\{0,1,\\ldots,t_i-t_{i-1}\\}$, can be expressed in terms of the magnitude and time of the two turning points as\n\\[\nx(t_i-k) = x(t_i)-k\\frac{x(t_i)-x(t_{i-1})}{t_i-t_{i-1}}=\n y_i-k\\frac{y_i-y_{i-1}}{t_i-t_{i-1}}.\n\\]\n\nThe state space reconstruction of $\\{y_i\\}_{i=2}^n$ can be considered as a specific state space reconstruction of $\\{x(t)\\}_{t=1}^N$ at time points $\\{t_i\\}_{i=1}^n$ for delays depending on each $t_i$. For an embedding dimension $m$, this reads\n\\begin{equation}\n \\begin{array}{rcl}\n \\mathbf{y}_i & = & [y_i,y_{i-1},\\ldots,y_{i-m+1}]^{\\prime} \\\\\n & = & [x(t_i),x(t_{i-1}),\\ldots,x(t_{i-m+1})]^{\\prime},\n \\end{array}\n \\label{eq:embedextpoi}\n\\end{equation}\nfor $i=m,\\ldots,n$ \\cite{Kugiumtzis08b}.
The advantage of this reconstruction is that it reduces the embedding dimension $M$ of the standard state space reconstruction of the type $\\mathbf{x}(t)=[x(t),x(t-\\tau_1),\\ldots,x(t-\\tau_{M-1})]^{\\prime}$ to $m$. Usually, in prediction tasks the delays $\\tau_j$ are small (and commonly a fixed delay $\\tau$ is used), suggesting a rather large $M$ in order for the time window $\\tau_w$ to cover the mean oscillation period.\n\nWe extend the state space reconstruction in (\\ref{eq:embedextpoi}) to account for the duration of trends. The analysis of the bivariate time series $\\{y_i,z_i\\}_{i=2}^n$ requires that both time series are standardized (subtracting the mean and dividing by the standard deviation for each time series). The state space reconstruction on the standardized $\\{y_i,z_i\\}_{i=2}^n$ reads\n\\begin{equation}\n \\mathbf{w}_i = [y_i,y_{i-1},\\ldots,y_{i-m_y+1},z_i,z_{i-1},\\ldots,z_{i-m_z+1}]^{\\prime}. \\label{eq:embedmagtim}\n\\end{equation}\nWe allow for different embedding dimensions $m_y$ and $m_z$ for the magnitudes of turning points and durations of trends, respectively.\n\n\\section{Dynamic Regression Prediction of Turning Points}\n\nThe prediction of $y_{i+T}$ and $z_{i+T}$ for a lead time $T$ can be posed independently, and this constitutes a problem of dynamic regression (also termed distributed lag modeling) \\cite{Pankratz91}. In this setting we apply local average models (LAM) and local linear models (LLM) \\cite{Kantz97}.
The prediction of $y_{i+T}$ with LAM is given by the average of the $T$-step-ahead mappings of the $K$ nearest neighboring points to $\\mathbf{w}_i$\n$\\{\\mathbf{w}_{i(1)},\\ldots,\\mathbf{w}_{i(K)}\\}$\n\\begin{equation}\n \\hat{y}_{i+T} = \\bar{y}_{i(K)+T} = \\frac{1}{K}\\sum_{j=1}^K y_{i(j)+T}.\n \\label{eq:lammag}\n\\end{equation}\nAssuming a linear autoregressive model restricted to the neighboring points to $\\mathbf{w}_i$, the LLM prediction of $y_{i+T}$ is\n\\begin{equation}\n \\hat{y}_{i+T} = \\bar{y}_{i(K)+T} + \\mathbf{a}^{\\prime} (\\mathbf{w}_i - \\bar{\\mathbf{w}}_{i(K)}),\n \\label{eq:llmmag}\n\\end{equation}\nwhere $\\bar{\\mathbf{w}}_{i(K)}$ is the center point of the $K$ neighboring points to $\\mathbf{w}_i$ and $\\mathbf{a}$ is estimated from the minimization of the error function\n\\begin{equation}\n\\sum_{j=1}^K \\left(y_{i(j)+T}-\\bar{y}_{i(K)+T}-\\mathbf{a}^{\\prime} (\\mathbf{w}_{i(j)} - \\bar{\\mathbf{w}}_{i(K)})\\right)^2.\n \\label{eq:errfun}\n\\end{equation}\nWe also consider regularization of the ordinary least squares solution of (\\ref{eq:errfun}), making use of principal component regression (PCR) and projection onto the first $q$ components \\cite{Kugiumtzis98}. Note that $z_{i+T}$ is predicted in the same way, but, in line with the dynamic regression setting, the suitable embedding dimensions $m_y$ and $m_z$ may be different for the models (LAM or LLM) for $y_{i+T}$ and $z_{i+T}$.\nThis approach differs from the approach in \\cite{Kugiumtzis08b} in that the neighboring points are formed not only based on the turning point magnitudes but also on the durations of trends.\nBoth LAM and LLM models are simple extensions of the respective local models used for univariate time series. Note that the direct scheme is used here, but the iterative prediction scheme can be applied in a similar way (in \\cite{Kugiumtzis08b} it was found that the iterative scheme of LAM on $\\{x(t)\\}_{t=1}^N$ gave worse results).
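The LAM predictor on the bivariate reconstruction can be sketched as follows (our own illustration, not the paper's code: the helper name `lam_predict` is hypothetical, the neighbor search is brute force, the series are assumed already standardized, and neighbors are taken from all history points whose $T$-step future is available, excluding the target point itself):

```python
import numpy as np

def lam_predict(y, z, m_y, m_z, K, T, i):
    """Local average model (LAM): average the T-step futures of the K
    nearest neighbours of the reconstructed point
    w_i = [y_i, ..., y_{i-m_y+1}, z_i, ..., z_{i-m_z+1}]."""
    def w(j):
        # most recent sample first, as in the embedding vector
        return np.concatenate([y[j - m_y + 1:j + 1][::-1],
                               z[j - m_z + 1:j + 1][::-1]])
    m = max(m_y, m_z)
    # candidate history points with enough past and a known T-step future
    candidates = [j for j in range(m - 1, len(y) - T) if j != i]
    dist = [np.linalg.norm(w(j) - w(i)) for j in candidates]
    nn = [candidates[k] for k in np.argsort(dist)[:K]]
    return np.mean([y[j + T] for j in nn])
```

On a perfectly periodic pair of series the nearest neighbors repeat the pattern exactly, so the prediction is exact; the LLM variant would replace the plain average by a local linear fit on the same $K$ neighbors.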
In the following, we compare the prediction of turning points (magnitude and time) using LAM or LLM models estimated on all the samples of the oscillating time series $\\{x(t)\\}_{t=1}^N$ (denoted osc-LAM and osc-LLM) and on the bivariate time series of turning point magnitudes and trend durations $\\{y_i,z_i\\}_{i=2}^n$ (denoted tur-LAM and tur-LLM).\n\n\\section{Turning Point Prediction on Simulated Systems}\n\nBefore presenting the results of the predictions on selected simulated systems we make some general observations regarding the implementation of the prediction schemes. For a fixed number of oscillations, $N$ is inversely proportional to $\\tau_s$, so that a better time resolution of the measurements implies a larger oscillating time series $\\{x(t)\\}_{t=1}^N$, whereas the length of the turning point time series $\\{y_i,z_i\\}_{i=2}^n$ is unaffected (being $2(n-1)$).\nA small $\\tau_s$ is actually welcome in the analysis based on turning points because it allows for more accurate detection of the turning points and especially the trend durations. For example, for a time series with an average oscillation period of 10 samples the range of $z_i$ is limited to integers from 1 to less than 10, and this range is insufficient to define neighborhoods (in the projected reconstructed state space of dimension $m_z$). Thus a smaller $\\tau_s$ would render the information in $\\{z_i\\}_{i=2}^n$ more useful in the setting of dynamic regression.\n\nThe parameter $p$ that defines the local window for the detection of turning points depends on $\\tau_s$ and should not be too large, so that turning points of short-lived oscillations can be detected, and not too small, so that glitches of noisy oscillations are not assigned to turning points.
For the latter case, a small $p$ can still be used if the time series is filtered, and then the turning points are detected on the smoothed time series to give the turning point times, whereas the turning point magnitudes are taken from the original time series. In the simulations we use $p=3$ and filter only noisy data, with a filter order depending on the system and noise amplitude.\n\nWhen predicting turning points with osc-LAM or osc-LLM on $\\{x(t)\\}_{t=1}^N$, the current point is not a turning point $x(t_i)$ but the sample $x(t_i+p)$ at the time the turning point $x(t_i)$ can first be detected. Thus using a large $p$ or $\\tau_s$ favors the prediction on $\\{x(t)\\}_{t=1}^N$ because then the current point is well into the next trend of the oscillation. The prediction schemes are illustrated in Fig.~\\ref{fig:preoscext}.\n\\begin{figure}[h!]\n\\hspace{7mm} \\includegraphics[height=35mm]{preextdisplay1.eps}\n\\caption{Turning point prediction: the real time series segment (grey lines and circles), the sample and turning point predictions with osc-LAM (dark dots and crosses), and the turning point prediction with tur-LAM (dark asterisks). The current time of the turning point is set to 0 and the delays of the standard embedding on the samples are shown with open circles.}\n \\label{fig:preoscext}\n\\end{figure}\nNote that the predicted turning points with osc-LAM are detected among the multi-step sample predictions in the same way as the turning points are detected in the oscillating time series.\n\nWe applied the LAM and LLM schemes on multiple realizations of known systems, such as the first and third variable of the R\\\"{o}ssler system \\cite{Roessler76}, the first and fourth variable of the R\\\"{o}ssler hyper-chaos system \\cite{Roessler79} (a segment of this is shown in Fig.~\\ref{fig:preoscext}), and the Mackey-Glass delay differential equation for different delays $\\Delta=17,30,100$ \\cite{Mackey77}.
The prediction measure is the normalized root mean square error (NRMSE) of the turning point prediction at the last quarter of each time series. In Fig.~\\ref{fig:MCHyp}, the average NRMSE (with the standard deviation forming the error bars) is shown for 1000 realizations of the fourth variable of the R\\\"{o}ssler hyper-chaos system using the osc-LAM and tur-LAM models as well as the osc-LLM and tur-LLM models.\n \\begin{figure}[h!]\n\\centering\n\\hbox{\\includegraphics[height=33mm]{hypwextpredlammag.eps}\n\\hspace{7mm} \\includegraphics[height=33mm]{hypwextpredlamtim.eps}}\n\\hbox{\\includegraphics[height=33mm]{hypwextpredllmmag.eps}\n\\hspace{7mm} \\includegraphics[height=33mm]{hypwextpredllmtim.eps}}\n\\caption{(a) The average NRMSE (with error bars for the standard deviation) of the prediction of next turning point magnitude of the fourth variable of the R\\\"{o}ssler hyperchaos system ($\\tau_s=0.1$, $N=2^{14}$) using osc-LAM and tur-LAM (for $m_z=0,1,2$ as given in the legend). The $\\tau_w$ in the abscissa is defined as $\\tau_w=(M-1)10$ for osc-LAM and $\\tau_w=(m-1)33$ for tur-LAM, as the mean oscillation period is estimated to be 66. (b) As in (a) but for the prediction of trend duration. (c) and (d) are as (a) and (b), respectively, but using the LLM models instead with PCR regularization parameter $q=3$.}\n \\label{fig:MCHyp}\n\\end{figure}\nThe parameters of state space reconstruction for both $\\{x(t)\\}_{t=1}^N$ and $\\{y_i,z_i\\}_{i=2}^n$ were chosen so that $\\tau_w$ covers up to three mean oscillation periods. For the latter, different combinations of $m_y$ and $m_z$ were considered and in Fig.~\\ref{fig:MCHyp} the tur-LAM and tur-LLM are shown for $m_z=0,1,2$ ($m_z=0$ denotes that the model is built only on the turning point magnitudes). In this example, there is little improvement of turning point prediction using the trend durations. Using either LAM or LLM, the prediction of turning points based on $\\{y_i,z_i\\}_{i=2}^n$ is superior. 
For osc-LLM prediction of turning point magnitudes, NRMSE is larger than one (the mean prediction) and has a large variance (not shown in Fig.~\\ref{fig:MCHyp}c). The linear mapping diverges for multi-step ahead predictions, and we conjecture that this is because temporally close points are selected in the set of the $K$ neighboring points. The large variance of NRMSE is observed with all LLM models for $m=3$ (equal to $q$) and this needs further investigation.\n\n\nThe best predictions of LAM for turning point magnitudes and trend durations for both $\\{x(t)\\}_{t=1}^N$ and $\\{y_i,z_i\\}_{i=2}^n$ are given in Table~\\ref{tab:MCHyp}.\n\\begin{table}[h!]\n \\centering\n \\begin{tabular}{|c|c|cc|ccc|}\n \\hline\n \\multicolumn{2}{|c|}{} & \\multicolumn{5}{c|}{Turning point magnitude} \\\\ \\hline\n $T$ & $K$ & $M$ & NRMSE & $m_y$ & $m_z$ & NRMSE \\\\ \\hline\n 1 & 1 & 9 & 0.604 & 3 & 1 & 0.505 \\\\\n 1 & 5 & 9 & 0.621 & 3 & 1 & 0.518 \\\\\n 1 & 10 & 9 & 0.662 & 2 & 1 & 0.558 \\\\\n\\hline\n 2 & 1 & 10 & 0.837 & 3 & 1 & 0.679 \\\\\n 2 & 5 & 8 & 0.748 & 3 & 1 & 0.642 \\\\\n 2 & 10 & 3 & 0.732 & 3 & 0 & 0.665 \\\\\n\\hline\n \\multicolumn{2}{|c|}{} & \\multicolumn{5}{c|}{Trend duration} \\\\ \\hline\n 1 & 1 & 10 & 0.669 & 2 & 1 & 0.368 \\\\\n 1 & 5 & 3 & 0.549 & 2 & 0 & 0.366 \\\\\n 1 & 10 & 3 & 0.526 & 2 & 0 & 0.414 \\\\\n\\hline\n 2 & 1 & 10 & 1.016 & 3 & 1 & 0.817 \\\\\n 2 & 5 & 10 & 0.989 & 2 & 0 & 0.772 \\\\\n 2 & 10 & 9 & 1.018 & 2 & 0 & 0.782 \\\\\n\\hline\n \\end{tabular}\n \\caption{For the system in Fig.~\\ref{fig:MCHyp} and for each combination of $T=1,2$ and $K=1,5,10$, the $M$ of best prediction with osc-LAM and $m_y$ and $m_z$ of best prediction with tur-LAM together with the respective NRMSE are given, where $M=3,\\ldots,10$ ($\\tau=10$) and $m=2,\\ldots,6$.}\n \\label{tab:MCHyp}\n\\end{table}\nThe best turning point predictions (magnitude and time) are derived with tur-LAM at small embedding dimensions (up to 3 for $m_y$ and 0 or 1 for $m_z$). 
Closer investigation showed that for some prediction tasks osc-LAM predicted better than tur-LAM, whereas in other cases it formed a turning point far from the true turning point, so overall the NRMSE was worse. This difference between osc-LAM and tur-LAM persists for different $N$ (we tested for $\\log_2N=12,13$) and, to a lesser extent, also for the addition of observational noise (we used noise amplitudes of $5\\%$ and $10\\%$). Moreover, the inclusion of the last trend duration ($m_z=1$) improved the prediction of turning point magnitudes and only marginally the prediction of trend durations. The same qualitative results were obtained from the same simulations on the other systems. For the highly complex Mackey-Glass system with $\\Delta=100$ (it has a fractal dimension of about 7) the best results of tur-LAM were obtained for high embedding dimensions of both turning point magnitudes and trend durations, indicating that for this system the trend duration is important for predicting the next peaks or troughs.\n\n\\section{Conclusion}\nWe showed that the local prediction of turning points (magnitude and time) can be improved if the nearest neighbor model of average or linear mapping is built on reconstructed points from the bivariate time series of turning point magnitudes and trend durations.\n\n\n\n\\section*{Acknowledgments}\nThe work is part of the research project 03ED748 within the framework of the ``Reinforcement Programme of Human Research Manpower'' (PENED) and it is co-financed at 90\\% jointly by the European Social Fund (75\\%) and the Greek Ministry of Development (25\\%) and at 10\\% by Rikshospitalet, Norway.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nLet $(X, \\mathbf{Z})$ be a random vector in $\\mathbb{R} \\times \\mathbb{R}^d = \\mathbb{R}^{d+1}$, $d \\ge 1$. We assume that $(X, \\mathbf{Z})$ has a joint density on $\\mathbb{R}^{d+1}$.
If we want to predict $X$ using $\\mathbf{Z}$ we usually formulate the following regression problem:\n\\begin{eqnarray}\\label{eq:RegMdl}\nX = m(\\mathbf{Z}) + \\epsilon,\n\\end{eqnarray}\nwhere $m(\\mathbf{z}) = \\mathbb E(X|\\mathbf{Z} = \\mathbf{z})$ is the conditional mean of $X$ given $\\mathbf{Z} = \\mathbf{z}$ and $\\epsilon := X - m(\\mathbf{Z})$ is the {\\it residual} (although $\\epsilon$ is usually called the error, and its estimate the residual, for this paper we feel that the term residual is more appropriate). Typically we further assume that the residual $\\epsilon$ is {\\it independent} of $\\mathbf{Z}$. However, intuitively, we are just trying to break the information in $(X,\\mathbf{Z})$ into two parts: a part that contains all relevant information about $X$, and the ``residual'' (the leftover) which does not have anything to do with the relationship between $X$ and $\\mathbf{Z}$.\n\nIn this paper we address the following question: given any random vector $(X, \\mathbf{Z})$, how do we define the notion of a ``residual'' of $X$ on $\\mathbf{Z}$ that matches the above intuition? Thus, formally, we want to find a function $\\varphi: \\mathbb{R}^{d+1} \\to \\mathbb{R}$ such that the residual $\\varphi(X, \\mathbf{Z})$ satisfies the following two conditions:\n\\begin{enumerate}\n\\item[(C.1)] $\\;\\;\\;\\;\\;$ the residual $\\varphi(X, \\mathbf{Z})$ is independent of the predictor $\\mathbf{Z}$, i.e.,\n\\begin{eqnarray*}\\label{eq:Indep}\n\\varphi(X, \\mathbf{Z}) \\perp \\! \\! \\! \\perp \\mathbf{Z}, \\qquad \\mbox{and }\n\\end{eqnarray*}\n\\item[(C.2)] $\\;\\;\\;\\;\\;$ the information content of $(X, \\mathbf{Z})$ is the same as that of $( \\varphi(X, \\mathbf{Z}), \\mathbf{Z} )$, i.e.,\n\\begin{equation}\\label{eq:Info}\n\\sigma(X, \\mathbf{Z}) = \\sigma( \\varphi(X, \\mathbf{Z}), \\mathbf{Z} ),\n\\end{equation}\nwhere $\\sigma(X, \\mathbf{Z})$ denotes the $\\sigma$-field generated by $X$ and $\\mathbf{Z}$.
We can also express~\\eqref{eq:Info} as: there exists a measurable function $h : \\mathbb{R}^{d+1} \\to \\mathbb{R} $ such that\n\\begin{equation}\\label{eq:GenX}\nX = h(\\mathbf{Z}, \\varphi(X, \\mathbf{Z}));\n\\end{equation}\nsee, e.g., Theorem 20.1 of~\\cite{Bill95}.\n\\end{enumerate}\n\nIn this paper we propose a notion of a residual that satisfies the above two conditions, under any joint distribution of $X$ and $\\mathbf{Z}$. We investigate the properties of this notion of residual in Section~\\ref{sec:NPResid}. We show that this notion indeed reduces to the usual residual (error) in the multivariate normal regression model. Further, we use this notion of residual to develop a test for conditional independence.\n\nSuppose now that $(X,Y,\\mathbf{Z})$ has a joint density on $\\mathbb{R} \\times \\mathbb{R} \\times \\mathbb{R}^d = \\mathbb{R}^{d+2}$. The assumption of conditional independence means that $X$ is independent of $Y$ given $\\mathbf{Z}$, i.e., $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$. Conditional independence is an important concept in modeling causal relations (\\cite{dawid79}, \\cite{Pearl00}), in graphical models (\\cite{Lauritzen96}; \\cite{koller09}), in economic theory (see \\cite{Chiappori00}), and in the literature of program evaluations (see \\cite{Heckman97}), among other fields. Traditional methods for testing conditional independence are either restricted to the discrete case (\\cite{Lauritzen96}; \\cite{Agresti02}) or impose simplifying assumptions when the random variables are continuous (\\cite{Lawrance76}). However, recently there have been a few nonparametric testing procedures proposed for testing conditional independence without assuming a functional form between the distributions of $X,Y$, and $\\mathbf{Z}$.
\\cite{SuWhite07} consider testing conditional independence based on the difference between the conditional characteristic functions, while \\cite{SuWhite08} use the Hellinger distance between the conditional densities of $X$ given $Y$ and $\\mathbf{Z}$, and $X$ given $\\mathbf{Z}$, to test for conditional independence. A test based on estimation of the maximal nonlinear conditional correlation is proposed in \\cite{Huang10}. \\cite{B11} develops a test based on the partial copula. \\cite{KerCondInd07} propose a measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces; \\cite{Z12} propose another kernel-based conditional independence test. \\cite{poczos12} extend the concept of distance correlation (developed by \\cite{SzekelyRizzoBakirov07} to measure dependence between two random variables or vectors) to characterize conditional dependence. \\cite{SR14} investigate a method that is easy to compute and can capture non-linear dependencies but does not completely characterize conditional independence; also see~\\cite{GW12} and the references therein.\n\n\n\nIn Section~\\ref{sec:TestCondInd} we use the notion of residual defined in Section~\\ref{sec:NPResid} to show that the conditional independence between $X$ and $Y$ given $\\mathbf{Z}$ is equivalent to the mutual independence of three random vectors: the residuals of $X$ on $\\mathbf{Z}$ and $Y$ on $\\mathbf{Z}$, and $\\mathbf{Z}$. We reduce this testing of mutual independence to a one-sample multivariate goodness-of-fit test. We further propose a modification of the easy-to-implement \\textit{energy} statistic based method (\\cite{SzekelyRizzo05}; also see \\cite{SzekelyRizzo13}) to test the goodness-of-fit; see Section~\\ref{sec:TestMutInd}. In Section~\\ref{sec:sub_test_cond} we use our notion of nonparametric residual and the proposed goodness-of-fit test to check the null hypothesis of conditional independence.
Moreover, we describe a bootstrap scheme to approximate the critical value of this test.\nIn Section \\ref{sec:simul} we compare the finite sample performance of the procedure proposed in this paper with that of other available methods in the literature through a simulation study. We end with a brief discussion, Section~\\ref{sec:Disc}, where we point to some open research problems and outline an idea, using the proposed residuals, to define (and test) a nonparametric notion of partial correlation.\n\n\\section{A nonparametric notion of residual}\\label{sec:NPResid}\nConditions (C.1)--(C.2) do not necessarily lead to a unique choice for $\\varphi$. To find a meaningful and unique function $\\varphi$ that satisfies conditions (C.1)--(C.2) we impose the following natural restrictions on $\\varphi$. We assume that\n\\begin{enumerate}\n\\item[(C.3)] $\\;\\;\\;\\;\\;$ $x \\mapsto \\varphi(x,\\mathbf{z})$ is strictly increasing in its support, for every fixed $\\mathbf{z} \\in \\mathbb{R}^d$.\n\\end{enumerate}\nNote that condition (C.3) is a slight strengthening of condition (C.2). Suppose that a function $\\varphi$ satisfies conditions (C.1) and (C.3). Then any strictly increasing transformation of $\\varphi(\\cdot, \\mathbf{z})$ would again satisfy (C.1) and (C.3). Thus, conditions (C.1) and (C.3) do not uniquely specify $\\varphi$. To handle this identifiability issue, we replace condition (C.1) with (C.4), described below.\n\nFirst observe that, by condition (C.1), the conditional distribution of the random variable $\\varphi(X, \\mathbf{Z})$ given $\\mathbf{Z} = \\mathbf{z}$ does not depend on $\\mathbf{z}$.
We assume that\n \\begin{enumerate}\n\\item[(C.4)] $\\;\\;\\;\\;\\;$ $\\varphi(X, \\mathbf{Z})| \\mathbf{Z} = \\mathbf{z}$ is uniformly distributed on $(0,1)$, for all $\\mathbf{z} \\in \\mathbb{R}^d$.\n\\end{enumerate}\nCondition (C.4) is again quite natural -- we usually assume that the residual has a fixed distribution, e.g., in regression we assume that the (standardized) residual is normally distributed with zero mean and unit variance. Note that condition (C.4) is slightly stronger than (C.1) and will help us uniquely identify $\\varphi$. The following result shows that, indeed, under conditions (C.3)--(C.4), a unique $\\varphi$ exists, and gives its form.\n\n\\begin{lemma}\\label{lem:NPError}\nLet $F_{X|\\mathbf{Z}}(\\cdot| \\mathbf{z})$ denote the conditional distribution function of $X|\\mathbf{Z} = \\mathbf{z}$. Under conditions (C.3) and (C.4), we have a unique choice of $\\varphi(x, \\mathbf{z})$, given by\n\\begin{eqnarray*}\n\\varphi(x, \\mathbf{z}) = F_{X|\\mathbf{Z}}(x| \\mathbf{z}).\n\\end{eqnarray*}\nAlso, $h(\\mathbf{z}, u)$ can be taken as\n\\begin{eqnarray}\\label{eq:InvCondDist}\nh(\\mathbf{z}, u) =F^{-1}_{X|\\mathbf{Z}}(u|\\mathbf{z}).\n\\end{eqnarray}\n\\end{lemma}\n\\begin{proof} Fix $\\mathbf{z}$ in the support of $\\mathbf{Z}$. Let $u \\in (0, 1)$. Let us write $\\varphi_\\mathbf{z}(x) = \\varphi(x, \\mathbf{z})$. By condition (C.4), we have $\\mathbb P[ \\varphi(X, \\mathbf{Z}) \\le u | \\mathbf{Z} = \\mathbf{z} ] = u$. On the other hand, by (C.3), $$\\mathbb P[ \\varphi(X, \\mathbf{Z}) \\le u | \\mathbf{Z} = \\mathbf{z} ] = \\mathbb P[ X \\le \\varphi_\\mathbf{z}^{-1}(u) | \\mathbf{Z} = \\mathbf{z} ] = F_{X|\\mathbf{Z}}( \\varphi_\\mathbf{z}^{-1}(u) | \\mathbf{z}) .$$ Thus, we have\n$$ F_{X|\\mathbf{Z}}( \\varphi_\\mathbf{z}^{-1}(u) | \\mathbf{z}) = u, \\ \\ \\mbox{ for all } u \\in (0,1), $$\nwhich is equivalent to $ \\varphi_\\mathbf{z}(x) = F_{X|\\mathbf{Z}}(x| \\mathbf{z})$.\n\nLet $h$ be as defined in~\\eqref{eq:InvCondDist}. 
Then,\n$$ h(\\mathbf{z}, \\varphi(x, \\mathbf{z})) = F^{-1}_{X|\\mathbf{Z}}( \\varphi(x, \\mathbf{z}) |\\mathbf{z}) = F^{-1}_{X|\\mathbf{Z}}( F_{X|\\mathbf{Z}}(x| \\mathbf{z}) |\\mathbf{z}) = x, $$\nas required.\n\\end{proof}\nThus from the above lemma, we conclude that in the nonparametric setup, if we want a notion of residual satisfying conditions (C.3)--(C.4), then the residual has to be $F_{X|\\mathbf{Z}}(X| \\mathbf{Z})$. The following remarks are in order now.\n\\begin{remark}\nLet us first consider the example when $(X, \\mathbf{Z})$ follows a multivariate Gaussian distribution, i.e.,\n$$ \\begin{pmatrix} X \\\\ \\mathbf{Z}\\end{pmatrix} \\sim N \\left ( \\begin{pmatrix} \\mu_1 \\\\ \\bm{\\mu}_2 \\end{pmatrix}, \\Sigma := \\begin{pmatrix} \\sigma_{11}& \\bm{\\sigma}_{12}^\\top \\\\ \\bm{\\sigma}_{12} & \\Sigma_{22} \\end{pmatrix} \\right), $$ where $\\mu_1 \\in \\mathbb{R}$, $\\bm{\\mu}_2 \\in \\mathbb{R}^d$, $\\Sigma$ is a $(d+1) \\times (d+1)$ positive definite matrix with $\\sigma_{11} > 0$, $\\bm{\\sigma}_{12} \\in \\mathbb{R}^{d \\times 1}$ and $\\Sigma_{22} \\in \\mathbb{R}^{d \\times d}$.\n\nThen the conditional distribution of $X$ given $\\mathbf{Z} = \\mathbf{z}$ is $N(\\mu_1 + \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} (\\mathbf{z} - \\bm{\\mu}_2), \\sigma_{11} - \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} \\bm{\\sigma}_{12} )$.\nTherefore, we have the following representation in the form of~\\eqref{eq:RegMdl}:\n$$ X = \\mu_1 + \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} (\\mathbf{Z} - \\bm{\\mu}_2) + \\Big( X - \\mu_1 - \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} (\\mathbf{Z} - \\bm{\\mu}_2) \\Big) $$\nwhere the usual residual is $X - \\mu_1 - \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} (\\mathbf{Z} - \\bm{\\mu}_2)$, which is known to be independent of $\\mathbf{Z}$. 
In this case, using Lemma~\\ref{lem:NPError}, we get\n$$ \\varphi(X, \\mathbf{Z}) = \\Phi \\left(\\frac{ X - \\mu_1 - \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} (\\mathbf{Z} - \\bm{\\mu}_2) }{\\sqrt{\\sigma_{11} - \\bm{\\sigma}_{12}^\\top \\Sigma_{22}^{-1} \\bm{\\sigma}_{12}} } \\right),$$\nwhere $\\Phi(\\cdot)$ is the distribution function of the standard normal distribution. Thus $\\varphi(X, \\mathbf{Z})$ is just a fixed strictly increasing transformation of the usual residual, and the two notions of residual essentially coincide. \\\\\n\\end{remark}\n\n\n\\begin{remark}\nThe above notion of residual does not extend so easily to the case of discrete random variables. Conditions (C.1) and (C.2) are equivalent to the fact that $\\sigma(X, \\mathbf{Z})$ factorizes into two sub $\\sigma$-fields as $\\sigma(X, \\mathbf{Z}) = \\sigma( \\varphi(X, \\mathbf{Z}) ) \\otimes \\sigma(\\mathbf{Z} )$. This may not always be possible, as can be seen from the following simple example.\n\nLet $(X, Z)$ take values in $\\{0, 1\\}^2$ such that $\\mathbb P[X = i, Z =j] >0$ for all $i, j \\in \\{0, 1\\}$. Then it can be shown that such a factorization exists if and only if $X$ and $Z$ are independent, in which case $\\varphi(X, Z) = X$. \\\\\n\\end{remark}\n\n\\begin{remark}\nLemma~\\ref{lem:NPError} also gives a way to generate $X$, using $\\mathbf{Z}$ and the residual. We can first generate $\\mathbf{Z}$, following its marginal distribution, and an independent random variable $U \\sim \\mathcal{U}(0,1)$ (here $\\mathcal{U} (0,1)$ denotes the Uniform distribution on $(0,1)$) which will act as the residual. Then~\\eqref{eq:GenX}, where $h$ is defined in~\\eqref{eq:InvCondDist}, shows that we can generate $X = F^{-1}_{X|\\mathbf{Z}}(U|\\mathbf{Z})$. \\\\\n\\end{remark}\n\nIn practice, we need to estimate the residual $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ from observed data, which can be done both parametrically and non-parametrically. 
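The lemma and the generative construction in the last remark are easy to see in action. The sketch below uses a toy model chosen purely for illustration (an exponential conditional law with a closed-form $F_{X|Z}$, not a model from this paper): $X$ is generated through $h(Z, U)$, and the residual $F_{X|Z}(X|Z)$ recovers $U$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model (illustrative assumption): Z ~ U(0,1) and X | Z = z ~ Exponential
# with rate 1 + z^2, so F_{X|Z}(x|z) = 1 - exp(-(1 + z^2) x) and
# h(z, u) = F^{-1}_{X|Z}(u|z) = -log(1 - u) / (1 + z^2).
Z = rng.uniform(size=n)
U = rng.uniform(size=n)                  # the residual, drawn independently of Z
X = -np.log(1.0 - U) / (1.0 + Z**2)      # generate X via h(Z, U)

# The nonparametric residual of the lemma: phi(X, Z) = F_{X|Z}(X|Z).
resid = 1.0 - np.exp(-(1.0 + Z**2) * X)

assert np.allclose(resid, U)             # phi recovers the residual exactly
print(np.corrcoef(X, Z)[0, 1], np.corrcoef(resid, Z)[0, 1])
```

Here $X$ is strongly dependent on $Z$ (the first printed correlation is clearly negative), while the residual is empirically uncorrelated with $Z$ and uniformly distributed, as conditions (C.3)--(C.4) require.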
If we have a parametric model for $F_{X|\\mathbf{Z}}(\\cdot|\\cdot)$, we can estimate the parameters using, e.g., maximum likelihood. If we do not want to assume any structure on $F_{X|\\mathbf{Z}}(\\cdot|\\cdot)$, we can use any nonparametric smoothing method, e.g., standard kernel methods, for estimation; see~\\cite{B11} for such an implementation. We will discuss the estimation of the residuals in more detail in Section~\\ref{sec:NPEst}.\n\n\\section{Conditional independence}\\label{sec:TestCondInd}\nSuppose now that $(X,Y,\\mathbf{Z})$ has a joint density on $\\mathbb{R} \\times \\mathbb{R} \\times \\mathbb{R}^d = \\mathbb{R}^{d+2}$.\nIn this section we state a simple result that reduces testing the conditional independence hypothesis $H_0: X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$ to a problem of testing mutual independence between three random variables\/vectors that involve our notion of residual. We also briefly describe a procedure to test the mutual independence of the three random variables\/vectors (see Section~\\ref{sec:TestMutInd}). We start with the statement of the crucial lemma.\n\\begin{lemma}\\label{lem:CondInd}\nSuppose that $(X,Y,\\mathbf{Z})$ has a continuous joint density on $\\mathbb{R}^{d+2}$. Then, $X \\perp \\! \\! \\! 
\\perp Y |\\mathbf{Z}$ if and only if $F_{X|\\mathbf{Z}}(X|\\mathbf{Z}), F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ and $\\mathbf{Z}$ are mutually independent.\n\\end{lemma}\n\\begin{proof}\nLet us make the following change of variable $$ (X,Y,\\mathbf{Z}) \\mapsto (U,V,\\mathbf{Z}) := (F_{X|\\mathbf{Z}}(X), F_{Y|\\mathbf{Z}}(Y), \\mathbf{Z}).$$ The joint density of $(U,V,\\mathbf{Z})$ can be expressed as\n\\begin{equation}\\label{eq:trans}\nf_{(U,V,\\mathbf{Z})}(u,v,\\mathbf{z}) = \\frac{f(x,y,\\mathbf{z})}{f_{X|\\mathbf{Z}=\\mathbf{z}}(x) f_{Y|\\mathbf{Z}=\\mathbf{z}}(y)} = \\frac{f_{(X, Y)|\\mathbf{Z}=\\mathbf{z}}(x, y)f_\\mathbf{Z}(\\mathbf{z})}{f_{X|\\mathbf{Z}=\\mathbf{z}}(x) f_{Y|\\mathbf{Z}=\\mathbf{z}}(y)},\n\\end{equation}\nwhere $x = F_{X|\\mathbf{Z}=\\mathbf{z}}^{-1}(u)$, and $y = F_{Y|\\mathbf{Z}=\\mathbf{z}}^{-1}(v)$. Note that as the Jacobian matrix is upper-triangular, the determinant is the product of the diagonal entries of the matrix, namely, $f_{X|\\mathbf{Z} = \\mathbf{z}}(x)$, $f_{Y|\\mathbf{Z}=\\mathbf{z}}(y)$ and $1$.\n\nIf $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$ then $f_{(U,V,\\mathbf{Z})}(u,v,\\mathbf{z})$ reduces to just $f_\\mathbf{Z}(\\mathbf{z})$, for $u, v \\in (0,1)$, from the definition of conditional independence, which shows that $U,V,\\mathbf{Z}$ are independent (note that it is easy to show that $U,V$ are marginally $\\mathcal{U}(0,1)$, the Uniform distribution on $(0,1)$). Now, given that $U,V,\\mathbf{Z}$ are independent, we know that $f_{(U,V,\\mathbf{Z})}(u,v,\\mathbf{z}) = f_\\mathbf{Z}(\\mathbf{z})$ for $u, v \\in (0,1)$, which from (\\ref{eq:trans}) easily shows that $X \\perp \\! \\! \\! 
\\perp Y |\\mathbf{Z}$.\n\\end{proof}\n\n\\begin{remark}\\label{rem:berg}\nNote that the joint distribution of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ is known as the \\textit{partial copula}; see e.g.,~\\cite{B11}.~\\cite{B11} developed a test for conditional independence by testing the independence of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$. However, as the following example illustrates, the independence of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ is not enough to guarantee that $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$. Let $W_1, W_2, W_3$ be i.i.d.~$\\mathcal{U}(0,1)$ random variables. Let $X = W_1+W_3$, $Y =W_2$ and $Z = \\mathrm{mod}(W_1 + W_2, 1)$, where `$\\mathrm{mod}$' stands for the modulo (sometimes called modulus) operation that finds the remainder of the division of $W_1 + W_2$ by $1$. The random vector $(X, Y, Z)$ has a joint density on $[0,2] \\times [0,1]^2$. Note that $Z$ is independent of $(W_1, W_3)$ and of $W_2$. Hence, $X, Y$ and $Z$ are pairwise independent. Thus, $F_{X|Z}(X|Z) = F_X(X)$ and $F_{Y|Z}(Y|Z) = F_Y(Y)$, where $F_X$ and $F_Y$ are the marginal distribution functions of $X$ and $Y$, respectively. From the independence of $X$ and $Y$, $F_X(X)$ and $F_Y(Y)$ are independent. On the other hand, the value of $W_1$ is clearly determined by $Y$ and $Z$, i.e., $W_1 = Z-Y$ if $Y \\le Z$ and $W_1 = Z-Y+1$ if $Y>Z$. Consequently, $X$ and $Y$ are not conditionally independent given $Z$. To see this, note that for every $z \\in (0,1)$, $$\\mathbb E[ X| Y, Z=z ] = \\left\\{ \\begin{array}{ll}\n z-Y + 0.5 & \\mbox{if $Y \\le z$}\\\\\n z - Y +1 + 0.5& \\mbox{if $Y > z$,}\\end{array} \\right.$$ which obviously depends on $Y$. In Remark~\\ref{Bergsma2} we illustrate this behavior with a finite sample simulation study. 
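A quick numerical sketch of this example (sample size and the slice of $Z$ are arbitrary choices, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

W1, W2, W3 = rng.uniform(size=(3, n))
X = W1 + W3
Y = W2
Z = np.mod(W1 + W2, 1.0)

# Pairwise correlations are ~ 0, consistent with pairwise independence ...
print([round(np.corrcoef(a, b)[0, 1], 3) for a, b in [(X, Y), (X, Z), (Y, Z)]])

# ... but conditioning on Z lying in a thin slice exposes the dependence of X on Y,
# since W1 = Z - Y (mod 1) is then (almost) determined by Y.
sl = np.abs(Z - 0.5) < 0.01
print(round(np.corrcoef(X[sl], Y[sl])[0, 1], 3))   # clearly bounded away from 0
```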
\\\\\n\\end{remark}\n\n\\begin{remark} We can extend the above result to the case when $X$ and $Y$ are random vectors in $\\mathbb{R}^p$ and $\\mathbb{R}^q$, respectively. In that case we define the conditional multivariate distribution transform $F_{X|\\mathbf{Z}}$ by successively conditioning on the co-ordinate random variables, i.e., if $X = (X_1,X_2)$ then we can define $F_{X|\\mathbf{Z}}$ as $(F_{X_2|X_1,\\mathbf{Z}}, F_{X_1|\\mathbf{Z}})$. With this definition, Lemma~\\ref{lem:CondInd} still holds. \\\\\n\\end{remark}\n\nTo use Lemma~\\ref{lem:CondInd} to test the conditional independence between $X$ and $Y$ given $\\mathbf{Z}$, we need to first estimate the residuals $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ from observed data, which can be done by any nonparametric smoothing procedure, e.g., standard kernel methods (see Section~\\ref{sec:NPEst}). Then, any procedure for testing the mutual independence of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z}), F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ and $\\mathbf{Z}$ can be used. In this paper we advocate the use of the {\\it energy} statistic (see \\cite{RizzoSzekely10}), described briefly in the next subsection, to test the mutual independence of three or more random variables\/vectors.\n\n\\subsection{Testing mutual independence of three or more random vectors with known marginals}\\label{sec:TestMutInd}\nTesting independence of two random variables (or vectors) has received much recent attention in the statistical literature; see e.g.,~\\cite{SzekelyRizzoBakirov07}, \\cite{KerIndepALT05}, and the references therein. 
However, testing the mutual independence of three or more random variables is more complicated, and we could not find any easily implementable method in the statistical literature.\n\nIn this sub-section, we test the mutual independence of three or more random variables (vectors) with known marginals by converting the problem to a one-sample goodness-of-fit test for multivariate normality. In the following we briefly describe our procedure in the general setup.\n\nSuppose that we have $r \\ge 3$ continuous random variables (or vectors) $V_1, \\ldots, V_r$ and we want to test their mutual independence. We assume that we know the marginal distributions of $V_1, \\ldots, V_r$; without loss of generality, we can assume that the $V_i$'s are standard Gaussian random variables (vectors). We write $T:= (V_1, V_2, \\ldots, V_r) \\in \\mathbb{R}^k$ and introduce $T_{\\text{ind}} := (V_1^*, V_2^*, \\ldots, V_r^*)$, where $V_j^*$ has the same marginal distribution as $V_j$, $j=1,2, \\ldots, r$, but in $T_{\\text{ind}}$ the coordinates, $V_1^*, V_2^*, \\ldots, V_r^*$, are independent. To test the mutual independence of $V_1, V_2, \\ldots, V_r$, it is enough to test whether $T$ and $T_{\\text{ind}}$ are identically distributed. 
If we observed a sample from $T$, we can test for the equality of distributions of $T$ and $T_{\\text{ind}}$ through a one-sample goodness-of-fit test for the standard multivariate normal distribution, i.e., $$H_0: T \\sim N(\\textbf{0},\\textbf{I}_{k\\times k}),$$ as $T_{\\text{ind}}\\sim N(\\textbf{0},\\textbf{I}_{k\\times k})$, where $\\textbf{I}_{k \\times k}$ is the identity matrix of order $k$ and $\\textbf{0} := (0, \\ldots, 0) \\in \\mathbb{R}^{k}.$\n\n\n\n\nIn this paper we consider the following {\\it energy} statistic (see~\\cite{SzekelyRizzo05} and \\cite{RizzoSzekely10})\n\\begin{equation}\\label{eq:EStat}\n\\Lambda(T) = 2 \\mathbb E \\|T - T_{\\text{ind}}\\| - \\mathbb E \\|T - T'\\| - \\mathbb E \\|T_{\\text{ind}} - T_{\\text{ind}}'\\|,\n\\end{equation}\nwhere $T'$ and $T_{\\text{ind}}'$ are i.i.d.~copies of $T$ and $T_{\\text{ind}}$, respectively ($\\|\\cdot\\|$ denotes the Euclidean norm). Note that $\\Lambda(T)$ is always nonnegative, and equals 0, if and only if $T$ and $T_{\\text{ind}}$ are identically distributed, i.e., if and only if $V_1, V_2, \\ldots, V_r$ are mutually independent (see Corollary 1 of~\\cite{SzekelyRizzo05}).\n\n Suppose now that we observe $n$ i.i.d.~samples $T_1, \\ldots, T_n$ of $T$. The (scaled) sample version of the energy statistic for testing the goodness-of-fit hypothesis is\n \\begin{equation}\\label{eq:teststat}\n\\mathcal{E}_n(T_1,\\ldots, T_n) :=2 \\sum_{i=1}^n \\mathbb E \\|T_i-T_\\text{ind}\\| - \\frac{1}{n} \\sum_{i=1}^n\\sum_{j=1}^{n} \\|T_i-T_j\\|- n \\mathbb E \\|T_\\text{ind}-T^\\prime_\\text{ind}\\|.\n \\end{equation}\nNote that the first expectation in the above display is with respect to $T_\\text{ind}$. 
Under the null hypothesis of mutual independence, the test statistic $\\mathcal{E}_n(T_1,\\ldots, T_n)$ has a limiting distribution, as $n \\rightarrow \\infty,$ while under the alternative hypothesis $\\mathcal{E}_n(T_1,\\ldots, T_n)$ tends to infinity; see Section 4 of \\cite{SzekelyRizzo05} and Section 8 of \\cite{SzekelyRizzo13} for detailed discussions. Thus any test that rejects the null for large values of $\\mathcal{E}_n(T_1,\\ldots, T_n)$ is consistent against general alternatives.\n\n As $T_\\text{ind}$ and $T^\\prime_\\text{ind}$ are i.i.d.~$N(\\textbf{0}, \\textbf{I}_{k\\times k})$ random vectors, the statistic $\\mathcal{E}_n(T_1,\\ldots, T_n)$ is easy to compute:\n $$\\mathbb E\\|T_\\text{ind}-T_\\text{ind}^\\prime\\| =\\sqrt{2}\\mathbb E \\|T_\\text{ind}\\|= 2 \\frac{\\Gamma \\big(\\frac{d+3}{2}\\big)}{\\Gamma \\big( \\frac{d+2}{2}\\big)}$$ and for any $a\\in \\mathbb{R}^{d+2}$, we have\n $$\\mathbb E\\|a-T_\\text{ind}\\| =\\frac{\\sqrt{2}\\Gamma \\big(\\frac{d+3}{2}\\big)}{\\Gamma \\big( \\frac{d+2}{2}\\big)} + \\sqrt{\\frac{2}{\\pi}} \\sum_{j=0}^\\infty \\frac{(-1)^j}{j!\\, 2^j} \\frac{|a|^{2j+2}}{(2j+1)(2j+2)} \\frac{\\Gamma \\big( \\frac{d+3}{2}\\big)\\Gamma \\big( j+\\frac{3}{2}\\big)}{\\Gamma \\big( j+\\frac{d}{2}+2\\big)}.$$\n\n The expression for $\\mathbb E\\|a-T_\\text{ind}\\|$ follows from the discussion in \\cite{Zacks81} (see page 55). 
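These closed forms make the statistic straightforward to code. The sketch below is our own transcription (with the series truncated at a fixed number of terms, written in terms of the dimension $k = d+2$, and using log-gamma for numerical stability); it is not the implementation used in the energy package.

```python
import math
import numpy as np

def e_dist_gauss(k):
    # E||T_ind - T'_ind|| = sqrt(2) E||T_ind|| = 2 Gamma((k+1)/2) / Gamma(k/2)
    return 2.0 * math.exp(math.lgamma((k + 1) / 2) - math.lgamma(k / 2))

def e_dist_point(a, k, terms=200):
    # E||a - T_ind|| for T_ind ~ N(0, I_k), via the alternating series above.
    r2 = float(np.dot(a, a))
    total = math.sqrt(2.0) * math.exp(math.lgamma((k + 1) / 2) - math.lgamma(k / 2))
    if r2 == 0.0:
        return total
    for j in range(terms):
        log_t = ((j + 1) * math.log(r2) - math.lgamma(j + 1) - j * math.log(2.0)
                 - math.log((2 * j + 1) * (2 * j + 2))
                 + math.lgamma((k + 1) / 2) + math.lgamma(j + 1.5)
                 - math.lgamma(j + k / 2 + 1))
        total += math.sqrt(2.0 / math.pi) * (-1.0) ** j * math.exp(log_t)
    return total

def energy_stat(T):
    # The (scaled) sample statistic E_n of the display above.
    n, k = T.shape
    pair = np.sqrt(((T[:, None, :] - T[None, :, :]) ** 2).sum(axis=-1)).sum() / n
    return 2.0 * sum(e_dist_point(t, k) for t in T) - pair - n * e_dist_gauss(k)

# Demo: i.i.d. N(0, I_3) rows give a moderate E_n; a shifted sample gives a large one.
rng = np.random.default_rng(2)
null_sample = rng.normal(size=(200, 3))
print(energy_stat(null_sample), energy_stat(null_sample + 1.0))
```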
See the\nsource code ``energy.c'' in the \\textit{energy} package of R language (\\cite{Rlang}) for a fast implementation of this; also see \\cite{SzekelyRizzo13}.\n\n\\subsection{Testing conditional independence} \\label{sec:sub_test_cond}\nIn this sub-section we use Lemma \\ref{lem:CondInd} and the test for mutual independence proposed in the previous sub-section (Section~\\ref{sec:TestMutInd}) to test for the conditional independence of $X$ and $Y$ given $\\mathbf{Z}.$ We start with a simple lemma.\n\n\n\\begin{lemma} \\label{lem:CondIndeqiv}\nSuppose that $(X,Y,\\mathbf{Z})$ has a continuous joint density on $\\mathbb{R}^{d+2}$. Then $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$ if and only if $$W:=(F_{X|\\mathbf{Z}}(X|\\mathbf{Z}), F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z}), F_\\mathbf{Z}(\\mathbf{Z})) \\sim \\mathcal{U}([0,1]^{d+2}),$$ where $F_\\mathbf{Z}(\\mathbf{z}) = \\left(F_{Z_d|Z_{d-1},\\ldots, Z_1}(z_d|z_{d-1},\\ldots, z_1), \\ldots, F_{Z_2|Z_1}(z_2|z_1), F_{Z_1}(z_1)\\right),$ $\\mathbf{Z} =$ \\\\$ (Z_1,\\ldots, Z_d),$ $\\textbf{z}=(z_1,\\ldots, z_d),$ and $\\mathcal{U}([0,1]^{d+2})$ denotes the Uniform distribution on $[0,1]^{d+2}$.\n\\end{lemma}\n\\begin{proof}\nNote that by Lemma~\\ref{lem:CondInd}, $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$ if and only if $F_{X|\\mathbf{Z}}(X|\\mathbf{Z}),$ $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ and $\\mathbf{Z}$ are mutually independent. Furthermore, note that $F_{X|\\mathbf{Z}}(X|\\mathbf{Z}),$ $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ are i.i.d.~$\\mathcal{U}(0,1)$ random variables. Thus the proof of the lemma will be complete if we show that $F_\\mathbf{Z}(\\mathbf{Z}) \\sim \\mathcal{U}([0,1]^d)$.\n\nAs each of $F_{Z_d|Z_{d-1},\\ldots, Z_1}(Z_d|Z_{d-1},\\ldots, Z_1), \\ldots, F_{Z_2|Z_1}(Z_2|Z_1),$ and $F_{Z_1}(Z_1)$ is a $\\mathcal{U}(0,1)$ random variable, it is enough to show that they are mutually independent. 
For simplicity of notation, we will only prove the independence of $F_{Z_2|Z_1}(Z_2|Z_1)$ and $F_{Z_1}(Z_1)$; the independence of the other terms can be proved similarly. Note that\n\\begin{align*}\n\\mathbb P(F_{Z_2|Z_1}(Z_2|Z_1) \\le z_2 | F_{Z_1}(Z_1)=z_1) ={}& \\mathbb P(F_{Z_2|Z_1}(Z_2|Z_1) \\le z_2 | Z_1=F_{Z_1}^{-1}(z_1))\\\\\n={}&\\mathbb P\\Big(Z_2 \\le F_{Z_2|Z_1}^{-1}\\big(z_2| F_{Z_1}^{-1}(z_1)\\big) \\Big| Z_1=F_{Z_1}^{-1}(z_1)\\Big)\\\\\n={}&F_{Z_2|Z_1} \\Big(F_{Z_2|Z_1}^{-1}\\big(z_2| F_{Z_1}^{-1}(z_1)\\big) |F_{Z_1}^{-1}(z_1)\\Big)\\\\\n={}&z_2.\n\\end{align*}\nAs the conditional distribution of $F_{Z_2|Z_1}(Z_2|Z_1)$ given $ F_{Z_1}(Z_1) = z_1$ does not depend on $z_1$, we have that $F_{Z_2|Z_1}(Z_2|Z_1)$ and $F_{Z_1}(Z_1)$ are independent.\n\\end{proof}\n\n\nLet us now assume $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$ and define\n\\begin{equation*} \\label{eq:T_def}\nW:=\\left(F_{X|\\mathbf{Z}}(X|\\mathbf{Z}), F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z}), F_{Z_d|\\mathbf{Z}_{-d}}(Z_d|\\mathbf{Z}_{-d}), \\ldots, F_{Z_2|Z_1}(Z_2|Z_1), F_{Z_1}(Z_1)\\right).\n\\end{equation*}\nBy Lemma~\\ref{lem:CondIndeqiv}, we have\n\\begin{equation*} \\label{eq:eq_dist}\nW\\stackrel{\\mathcal D}{=} (U_1, \\dots, U_{d+2}),\n\\end{equation*}\nwhere $U_1, U_2, \\ldots, U_{d+2}$ are i.i.d.~$\\mathcal{U}(0,1)$ random variables. An equivalent formulation is\n\\begin{equation} \\label{eq:mvn}\nH_0: T:= \\Phi^{-1} (W) \\sim N(\\textbf{0}, \\textbf{I}_{(d+2) \\times (d+2)}),\n\\end{equation}\n where $\\Phi$ is the distribution function corresponding to the standard Gaussian random variable, and for any $\\textbf{a} \\in \\mathbb{R}^{d+2}$, $\\Phi^{-1} (\\textbf{a}) := (\\Phi^{-1}(a_1), \\ldots, \\Phi^{-1}(a_{d+2})).$\n\n We observe i.i.d.~data $\\{(X_i,Y_i,\\mathbf{Z}_i): i = 1,\\ldots, n\\}$ from the joint distribution of $(X,Y,\\mathbf{Z})$ and we are interested in testing $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$. 
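The equivalent formulation above can be illustrated in a toy model with $d = 1$ and known Gaussian conditionals; the model below is our own assumption, used only to show the transformation at work.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(3)
n = 5_000

# Toy model (illustrative assumption): Z ~ N(0,1), X = Z + e, Y = -Z + e',
# with e, e' independent N(0,1), so X and Y are conditionally independent given Z.
Z = rng.normal(size=n)
X = Z + rng.normal(size=n)
Y = -Z + rng.normal(size=n)

# Known conditionals: F_{X|Z}(x|z) = Phi(x - z), F_{Y|Z}(y|z) = Phi(y + z), F_Z = Phi.
W = np.column_stack([[nd.cdf(v) for v in X - Z],
                     [nd.cdf(v) for v in Y + Z],
                     [nd.cdf(v) for v in Z]])   # under H_0, W ~ U([0,1]^3)
T = np.vectorize(nd.inv_cdf)(W)                 # under H_0, T ~ N(0, I_3)

# Here Phi^{-1}(W) simply recovers (X - Z, Y + Z, Z), which is indeed N(0, I_3).
assert np.allclose(T, np.column_stack([X - Z, Y + Z, Z]), atol=1e-6)
print(T.mean(axis=0).round(2), np.cov(T.T).round(2))
```

Note that $X$ and $Y$ themselves are strongly (negatively) correlated in this model; it is only after the transformation that the three coordinates become independent.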
Suppose first that the distribution functions $F_{X| \\mathbf{Z}}(\\cdot|\\cdot), F_{Y| \\mathbf{Z}}(\\cdot|\\cdot),$ and $F_{\\mathbf{Z}}(\\cdot)$ are known. Then we have an i.i.d.~sample $T_1,\\ldots, T_n$ from $T$, where\n\\begin{equation} \\label{eq:data_ver}\n T_i:=\\Phi^{-1}(F_{X|\\mathbf{Z}}(X_i|\\mathbf{Z}_i), F_{Y|\\mathbf{Z}}(Y_i|\\mathbf{Z}_i), F_{\\mathbf{Z}}(\\mathbf{Z}_i)).\n\\end{equation}\nNow we can use the test statistic \\eqref{eq:teststat} to test the hypothesis of conditional independence.\n\n As the true conditional distribution functions $F_{X| \\mathbf{Z}}, F_{Y| \\mathbf{Z}},$ and $F_{\\mathbf{Z}}$ are unknown, we can replace them by their estimates $\\widehat F_{X|\\mathbf{Z}}, \\widehat F_{Y|\\mathbf{Z}},$ and $\\widehat F_{\\mathbf{Z}}$, respectively, where $\\widehat F_\\mathbf{Z} (\\mathbf{z}) =\\left( \\widehat F_{Z_d|Z_{d-1},\\ldots, Z_1}(z_d|z_{d-1},\\ldots, z_1), \\ldots,\\widehat F_{Z_2|Z_1}(z_2|z_1), \\widehat F_{Z_1}(z_1)\\right)$; see Section \\ref{sec:NPEst} for more details on how to compute these estimates. Let us now define\n \\begin{equation} \\label{eq:data_hat_ver}\n \\widehat T_i:=\\Phi^{-1}(\\widehat F_{X|\\mathbf{Z}}(X_i|\\mathbf{Z}_i), \\widehat F_{Y|\\mathbf{Z}}(Y_i|\\mathbf{Z}_i), \\widehat F_{\\mathbf{Z}}(\\mathbf{Z}_i)),\n \\end{equation}\n for $i = 1, 2,\\ldots, n.$ We will use\n \\begin{equation} \\label{eq:en_hat}\n\\widehat{\\mathcal{E}_n}:= \\mathcal{E}_n(\\hat{T}_1, \\ldots, \\hat{T}_n)\n \\end{equation} to test the hypothesis of conditional independence.\n\\subsubsection{Approximating the asymptotic distribution through bootstrap}\n\nThe limiting behavior of $\\mathcal{E}_n$ is not very useful in computing the critical value of the test statistic $\\widehat{\\mathcal{E}_n}$ proposed in the previous sub-section. 
In a related but slightly different problem studied in~\\cite{sen14}, it was shown that the analogous versions of $\\mathcal{E}_n$ and $\\widehat{\\mathcal{E}_n}$ have very different limiting distributions.\n\nIn independence testing problems it is quite standard and natural to approximate the critical value of the test, under $H_0$, by using a permutation test; see e.g.,~\\cite{SzekelyRizzo09}, \\cite{gretton07}. However, in our problem, as we use $\\hat{T}_i$ instead of $T_i$, the permutation test is not valid; see~\\cite{sen14}.\n\n\nIn this sub-section, we propose a bootstrap procedure to approximate the distribution of $\\widehat{\\mathcal{E}_n}$, under the null hypothesis of conditional independence. We now describe the bootstrap procedure.\nLet $\\mathbb{P}_{n,\\mathbf{Z}}$ be the empirical distribution of $\\mathbf{Z}_1, \\ldots,\\mathbf{Z}_n$.\n\\begin{enumerate}[label=\\bfseries Step \\arabic*:]\n\n\\item \t Generate an i.i.d.~sample $\\{U_{i,1}^*, U_{i,2}^*, \\mathbf{Z}^*_{n,i}\\}_{ 1 \\le i \\le n}$ of size $n$ from the measure $\\mathcal{U}(0,1) \\times \\mathcal{U}(0,1) \\times \\mathbb{P}_{n,\\mathbf{Z}}$; recall that $\\mathcal{U}(0,1)$ denotes the Uniform distribution on $(0,1).$\n\n\\item \tThe bootstrap sample is then $\\{X^*_{n,i}, Y^*_{n,i}, \\mathbf{Z}^*_{n,i}\\}_{ 1 \\le i \\le n},$ where\n\\begin{equation}\nX^*_{n,i} := \\widehat{F}^{-1}_{X|\\mathbf{Z}}(U_{i,1}^*|\\mathbf{Z}_{n,i}^*) \\qquad \\text{and} \\qquad Y^*_{n,i} := \\widehat{F}^{-1}_{Y|\\mathbf{Z}}(U_{i,2}^*|\\mathbf{Z}_{n,i}^*).\n\\end{equation}\n\n\n\\item Use the bootstrap sample $\\{X^*_{n,i}, Y^*_{n,i}, \\mathbf{Z}^*_{n,i}\\}_{ 1 \\le i \\le n}$ to get smooth estimators $\\widehat F^*_{X|\\mathbf{Z}}, \\widehat F^*_{Y|\\mathbf{Z}},$ and $\\widehat F^*_{\\mathbf{Z}}$ of $F_{X| \\mathbf{Z}}, F_{Y| \\mathbf{Z}},$ and $F_{\\mathbf{Z}}$; see Section \\ref{sec:NPEst} for a discussion on smooth estimation of the conditional distribution functions.\n\n\\item\tCompute the bootstrap test statistic 
$\\mathcal{E}^*_n:= \\mathcal{E}_n(\\widehat{T}^*_1, \\ldots, \\widehat{T}^*_n) $ where\n{\\small \\begin{equation}\n\\widehat{T}^*_i= \\Phi^{-1} \\big(\\widehat F^*_{X|\\mathbf{Z}}(X^*_{n,i}|\\mathbf{Z}_{n,i}^*), \\widehat F^*_{Y|\\mathbf{Z}}(Y^*_{n,i}|\\mathbf{Z}^*_{n,i}), \\widehat F^*_{\\mathbf{Z}}(\\mathbf{Z}^*_{n,i})\\big).\n\\end{equation} }\n\\end{enumerate}\nWe can now approximate the distribution of $\\widehat{\\mathcal{E}_n}$ by the conditional distribution of $\\mathcal{E}_n^*$ given the data $\\{X_i, Y_i,\\mathbf{Z}_i\\}_{ 1 \\le i \\le n}.$\nIn Section \\ref{sec:simul} we study the finite sample performance of the above procedure through a simulation study and illustrate that our procedure indeed yields a valid test for conditional independence.\n\n\\begin{remark}\n\tIn steps 1 and 2 above, we generate the bootstrap sample from the approximated joint distribution of $(X, Y, \\mathbf{Z})$ under the null hypothesis of conditional independence. In steps 3 and 4 we mimic the evaluation of the test statistic $\\widehat{\\mathcal{E}_n}$ using the bootstrap sample. This is an example of a model-based bootstrap procedure.~\\cite{sen14} prove the consistency of a similar bootstrap procedure in a related problem. As the sample size increases, the approximated joint distribution of $(X, Y, \\mathbf{Z})$ (under $H_0$) would converge to the truth and the bootstrap distribution would replicate the distribution of $\\widehat{\\mathcal{E}_n}$.\n\\end{remark}\n\n\n\n\n\n\\subsection{Nonparametric estimation of the residuals}\\label{sec:NPEst}\nIn this sub-section we discuss procedures to nonparametrically estimate $ F_{X| \\mathbf{Z}}, F_{Y| \\mathbf{Z}},$ and $F_{\\mathbf{Z}}$ given data $\\{X_i, Y_i,\\mathbf{Z}_i\\}_{ 1 \\le i \\le n}.$ The nonparametric estimation of the conditional distribution functions would involve smoothing. 
In the following we briefly describe the standard approach to estimating the conditional distribution functions using kernel smoothing techniques (also see~\\cite{LeeLeePark06}, \\cite{YuJones98}, and \\cite{HallWolffYao99}). For notational simplicity, we restrict to the case $d=1$, i.e., $\\mathbf{Z}$ is a real-valued random variable. Given an i.i.d.~sample of $\\{(X_i,Z_i): i = 1,\\ldots, n\\}$ from $f_{X,Z}$, the joint density of $(X,Z)$, we can use the following kernel density estimator of $f_{X,Z}$: $$ \\widehat f_n(x,z) = \\frac{1}{n h_{1,n} h_{2,n}} \\sum_{i=1}^n k \\left( \\frac{x - X_i}{h_{1,n}} \\right) k \\left( \\frac{z - Z_i}{h_{2,n}} \\right)$$ where $k$ is a symmetric probability density on $\\mathbb{R}$ (e.g., the standard normal density function), and $h_{i,n}, i=1,2$, are the smoothing bandwidths. It can be shown that if $n h_{1,n} h_{2,n} \\rightarrow \\infty$ and $\\max\\{h_{1,n}, h_{2,n}\\} \\rightarrow 0$, as $n \\rightarrow \\infty,$ then $\\widehat f_n(x,z) \\stackrel{P}{\\rightarrow} f_{X,Z}(x,z)$. In fact, the theoretical properties of the above kernel density estimator are very well studied; see e.g., \\cite{FG96} and \\cite{EM05} and the references therein. 
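A minimal sketch of the resulting estimator of $F_{X|Z}(x|z)$, using a Gaussian kernel and arbitrary fixed bandwidths (in practice the bandwidths would be chosen by cross-validation, as discussed below; the toy model used for the sanity check is our own illustrative assumption):

```python
import math
import numpy as np

def cond_cdf_hat(x, z, X, Z, h1, h2):
    """Kernel estimate of F_{X|Z}(x|z): a weighted average of K((x - X_i)/h1)
    with weights w_i(z) built from the kernel values k((z - Z_i)/h2)."""
    K = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))   # standard normal CDF
    kz = np.exp(-0.5 * ((z - Z) / h2) ** 2)   # Gaussian kernel; constants cancel in w
    w = kz / kz.sum()                         # weights sum to one for every z
    return float(np.sum(w * np.array([K((x - xi) / h1) for xi in X])))

# Sanity check on a toy model where F_{X|Z}(x|z) = Phi(x - z) is known.
rng = np.random.default_rng(4)
Z = rng.normal(size=2_000)
X = Z + rng.normal(size=2_000)
print(cond_cdf_hat(0.5, 0.0, X, Z, h1=0.2, h2=0.2))   # roughly Phi(0.5) ~ 0.69
```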
For the convenience of notation, we will write $h_{i,n}$ as $h_i$, $i=1,2$.\n\nThe conditional density of $X$ given $Z$ can then be estimated by $$\\widehat f_{X|Z}(x|z) = \\frac{\\widehat f_n(x,z)}{\\widehat f_Z(z)} = \\frac{\\frac{1}{n h_{1} h_{2}} \\sum_{i=1}^n k \\left( \\frac{x - X_i}{h_{1}} \\right) k \\left( \\frac{z - Z_i}{h_{2}} \\right)}{\\frac{1}{n h_{2}} \\sum_{i=1}^n k \\left( \\frac{z - Z_i}{h_{2}} \\right)}.$$\nThus the conditional distribution function of $X$ given $Z$ can be estimated as $$ \\widehat F_{X|Z}(x|z) = \\frac{\\int_{-\\infty}^ x \\widehat f_n(t,z) \\; dt}{\\widehat f_Z(z)} = \\frac{\\frac{1}{n h_{2}} \\sum_{i=1}^n K \\left( \\frac{x - X_i}{h_{1}} \\right) k \\left( \\frac{z - Z_i}{h_{2}} \\right)}{\\frac{1}{n h_{2}} \\sum_{i=1}^n k \\left( \\frac{z - Z_i}{h_{2}} \\right)} = \\sum_{i=1}^n w_i(z) K \\left( \\frac{x - X_i}{h_{1}} \\right) $$ where $K$ is the distribution function corresponding to $k$ (i.e., $K(u) = \\int_{-\\infty}^u k(v) \\; dv$) and $w_i(z) = \\frac{\\frac{1}{n h_{2}} k \\left( \\frac{z - Z_i}{h_{2}} \\right)}{\\frac{1}{n h_{2}} \\sum_{j=1}^n k \\left( \\frac{z - Z_j}{h_{2}} \\right)}$ are weights that sum to one for every $z$. The least-squares cross-validation method proposed in \\cite{hall2004cross} can be used to find the optimal choices for $h_1$ and $h_2.$ For general $d$, the optimal bandwidths satisfy $h_1 \\sim n^{-2\/(d+4)}$ and $h_2 \\sim n^{-1\/(d+4)};$ see Section 6.2 of \\cite{LiRacine07} and \\cite{lilira13} for a thorough discussion.\n\n\n\\begin{remark}\\label{Bergsma2} Now we provide empirical evidence for the failure of the test proposed in~\\cite{B11} in the example discussed in Remark~\\ref{rem:berg}. 
We plot (see Figure~\\ref{fig:berg}) the histogram of $p$-values obtained from the proposed test (see Section~\\ref{sec:sub_test_cond}) and that of the $p$-values obtained from testing the independence of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ (using their estimates $\\widehat F_{X|\\mathbf{Z}}(\\cdot|\\cdot)$ and $\\widehat F_{Y|\\mathbf{Z}}(\\cdot|\\cdot)$). We use the distance covariance test statistic (see \\citet{SzekelyRizzoBakirov07}) to test for the independence of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$. Figure~\\ref{fig:berg} demonstrates that a test for the independence of $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ can fail to capture the conditional dependence between $X$ and $Y$ given $\\mathbf{Z}$.\n\\end{remark}\n\\begin{figure}[h!]\n\\includegraphics[scale=.8]{berg_1000_cv_all.pdf}\n\\caption{Histograms of $p$-values (estimated using 1000 bootstrap samples) over $1000$ independent replications. Here, for $i=1,\\ldots,200$, $\\{X_i,Y_i,Z_i\\}$ are i.i.d.~samples from the example discussed in Remark \\ref{rem:berg}.}\n\\label{fig:berg}\n\\end{figure}\n\n\\section{Simulation}\\label{sec:simul}\nWe now investigate the finite sample performance of the testing procedure developed in this paper through a simulation study. We also compare the performance of our testing procedure to those proposed in \\cite{KerCondInd07} and \\cite{Z12}. We denote the testing procedure proposed in \\cite{KerCondInd07} by $CI_{perm}$ and use $KCI$ to denote the kernel-based conditional independence test proposed in \\cite{Z12}.\n\nTo illustrate and compare the performance of different testing procedures, we consider the following sampling scenario borrowed from \\cite{Z12}. 
Let us assume that $X$ and $Y$ depend only on $Z_1$ (the first coordinate of $\\mathbf{Z}$) and that all other conditioning variables are independent of $X,Y,$ and $Z_1.$ We assume that $\\mathbf{Z} \\sim N_d(\\textbf{0}, \\sigma^2_z \\textbf{I}_{d\\times d})$, $X:= W+ Z_1+ \\epsilon,$ and $Y:= W+ Z_1+ \\epsilon^\\prime,$ where $\\epsilon, \\epsilon^\\prime,$ and $W$ are three independent mean zero Gaussian random variables. Moreover, we assume that $\\epsilon, \\epsilon^\\prime,$ and $W$ are independent of $\\mathbf{Z},$ $\\mathrm{var}(\\epsilon)=\\mathrm{var}(\\epsilon^\\prime)=\\sigma^2_E,$ and $\\mathrm{var}(W)=\n\\sigma^2_W,$ where for any real random variable $V$, $\\mathrm{var}(V)$ denotes its variance. Note that $X \\perp \\! \\! \\! \\perp Y |\\mathbf{Z}$ if and only if $\\sigma_W=0.$\n\n In our finite sample simulations we fix $\\sigma_E= 0.3 $ and $\\sigma_z=0.2$. We generate $500$ i.i.d.~samples $\\{X_i, Y_i, \\mathbf{Z}_i\\}_{1 \\le i \\le 500}$ for each of $d=1, 3,$ and $5$ and for different values of $\\sigma_W.$ For each such sample, we use 1000 bootstrap replicates to estimate the $p$-value of the proposed test procedure. We have used the ``\\texttt{np}'' (see \\cite{np}) package in R (\\cite{R}) to estimate the conditional distribution functions with the tuning parameters chosen using least-squares cross validation (see Section~\\ref{sec:NPEst}). In Figure \\ref{fig:power_curve} we plot the power (estimated using 500 independent experiments) of the testing procedure proposed in Section \\ref{sec:sub_test_cond} along with those of $CI_{perm}$ and $KCI$ as $\\sigma_W$ increases from $0$ to $0.25$, for dimensions $1, 3,$ and $5$. 
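The data-generating process of this scenario is straightforward to reproduce. In the sketch below (sample size, $d$, and the value of the standard deviation of $W$ are our own arbitrary choices), the mechanism is visible through the residuals $X - Z_1$ and $Y - Z_1$:

```python
import numpy as np

def generate_scenario(n, d, sigma_w, sigma_e=0.3, sigma_z=0.2, seed=0):
    """X and Y depend on Z_1 only; X and Y are conditionally independent
    given Z if and only if sigma_w == 0."""
    rng = np.random.default_rng(seed)
    Zmat = rng.normal(scale=sigma_z, size=(n, d))
    W = rng.normal(scale=sigma_w, size=n)   # shared component driving the dependence
    X = W + Zmat[:, 0] + rng.normal(scale=sigma_e, size=n)
    Y = W + Zmat[:, 0] + rng.normal(scale=sigma_e, size=n)
    return X, Y, Zmat

# With Z_1 removed, the residuals have correlation sigma_w^2 / (sigma_w^2 + sigma_e^2):
# ~0 under the null, clearly positive under the alternative.
X0, Y0, Z0 = generate_scenario(5_000, 3, sigma_w=0.0)
X1, Y1, Z1 = generate_scenario(5_000, 3, sigma_w=0.25)
r = lambda X, Y, Z: np.corrcoef(X - Z[:, 0], Y - Z[:, 0])[0, 1]
print(round(r(X0, Y0, Z0), 3), round(r(X1, Y1, Z1), 3))
```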
We fix the significance level at $0.05$.\n\n\\begin{figure}[h!]\n\\includegraphics[width=.65\\paperwidth]{Final_fig_2.pdf}\n\\caption{The power (at significance level $0.05$) of the three testing procedures for sample size $n=500$ as the dimension $d$ and $\\sigma_W$ increase.}\n\\label{fig:power_curve}\n\\end{figure}\n\n\n The distribution of the $KCI$ test statistic under the null hypothesis of conditional independence is estimated with a Monte Carlo procedure suggested in \\cite{Z12}. To implement the $CI_{perm}$ and the $KCI$ testing procedures, we have used the MATLAB source codes provided in \\cite{Z12}; the source code can be found at \\url{http:\/\/people.tuebingen.mpg.de\/kzhang\/KCI-test.zip}. The R language codes used to implement our procedure are available at \\url{http:\/\/stat.columbia.edu\/~rohit\/research.html}.\n\n\nObserve that for $CI_{perm}$, the probability of type I error is much greater than the significance level for $d=3$. Furthermore, for $d=5$, it fails to detect the alternative for all values of $\\sigma_W$. The performance of $CI_{perm}$ is sensitive to the dimension of the conditioning variable. The probabilities of type I error for both the proposed and the $KCI$ testing procedures are around the specified significance level. Moreover, the powers of $KCI$ and the proposed test increase to $1$ as $\\sigma_W$ increases. Overall, we think that for this simulation scenario the $KCI$ method has the best performance.\n\n\n\n\n\n\\section{Discussion}\\label{sec:Disc}\nGiven a random vector $(X, \\mathbf{Z})$ in $\\mathbb{R} \\times \\mathbb{R}^d = \\mathbb{R}^{d+1}$ we have defined the notion of a nonparametric residual of $X$ on $\\mathbf{Z}$ as $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$, which is always independent of the predictor $\\mathbf{Z}$. We have studied the properties of the nonparametric residual and shown that it indeed reduces to the usual residual in a multivariate normal regression model. 
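As an illustration of this reduction, in the bivariate normal case the residual is available in closed form and its claimed properties can be checked by simulation (an illustrative sketch, not part of the testing procedure): if $(X,Z)$ is standard bivariate normal with correlation $\rho$, then $F_{X|Z}(X|Z)=\Phi\big((X-\rho Z)\/\sqrt{1-\rho^2}\big)$ is a monotone transform of the usual residual $X-\rho Z$.

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

rho = 0.7
rng = np.random.default_rng(0)
Z = rng.normal(size=100000)
X = rho * Z + sqrt(1.0 - rho**2) * rng.normal(size=100000)

# Closed-form nonparametric residual for the bivariate normal case:
# F_{X|Z}(X|Z) = Phi((X - rho*Z) / sqrt(1 - rho^2)).
R = np.array([Phi(v) for v in (X - rho * Z) / sqrt(1.0 - rho**2)])
print(R.mean(), R.var())        # ~ 1/2 and ~ 1/12: Uniform(0, 1)
print(np.corrcoef(R, Z)[0, 1])  # ~ 0: the residual is independent of Z
```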
However, nonparametric estimation of $F_{X|\\mathbf{Z}}(\\cdot|\\cdot)$ requires smoothing techniques, and hence suffers from the curse of dimensionality. A natural way of mitigating this curse of dimensionality could be to use dimension reduction techniques in estimating the residual $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$. Another alternative would be to use a parametric model for the conditional distribution function.\n\nSuppose now that $(X,Y,\\mathbf{Z})$ has a joint density on $\\mathbb{R} \\times \\mathbb{R} \\times \\mathbb{R}^d = \\mathbb{R}^{d+2}$. We have used this notion of residual to show that the conditional independence between $X$ and $Y$, given $\\mathbf{Z}$, is equivalent to the mutual independence of the residuals $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$ and the predictor $\\mathbf{Z}$. We have used this result to propose a test for conditional independence, based on the energy statistic.\n\nWe can also use these residuals to come up with a nonparametric notion of partial correlation. The partial correlation of $X$ and $Y$ measures the degree of association between $X$ and $Y$, removing the effect of $\\mathbf{Z}$. In the nonparametric setting, this reduces to measuring the dependence between the residuals $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$. We can use distance covariance (\\cite{SzekelyRizzoBakirov07}), or any other measure of dependence, for this purpose. We can also test for zero partial correlation by testing for the independence of the residuals $F_{X|\\mathbf{Z}}(X|\\mathbf{Z})$ and $F_{Y|\\mathbf{Z}}(Y|\\mathbf{Z})$. \\newline\n\n\n\n\\noindent {\\bf Acknowledgements:} The second author would like to thank Arnab Sen for many helpful discussions, and for his help in writing parts of the paper. He would also like to thank Probal Chaudhuri for motivating the problem. 
The research of second and third authors is supported by National Science Foundation.\n\\bibliographystyle{elsarticle-harv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{INTRODUCTION}\n\nMost of the star formation in the Galaxy occurs in clusters associated\nwith at least one high-mass star \\citep{Adams2010ARA&A}. An\nunderstanding of star formation on global galactic and extra-galactic\nscales therefore entails the study of the early evolution of high-mass\nstars and how they impact their molecular environment.\n\n\nThe physical characterization of the places where high-mass stars form is \nan important observational achievement of the\nfar-infrared (far-IR) and submillimeter astronomy of the last decades. \nHigh-mass stars form in massive molecular clumps of sizes\n$\\lesssim 1$ pc, column densities $\\gtrsim0.1$ gr~cm$^{-2}$, densities\n$n_{\\rm H_2}\\gtrsim10^4$ cm$^{-3}$, and masses $>200$ \\mbox{\\,$M_{\\odot}$}\\ \\citep{Tan2014prpl}, with \n temperatures depending on their evolutionary stage. Determining the evolutionary\nsequence of these massive molecular clumps and their properties is currently an active field of\nstudy. We can define a schematic timeline that comprises four major\nobservational stages \\citep{Jackson2013PASA,Chambers2009ApJ}: \n\\begin{enumerate}\n\\item{Quiescent and prestellar sources, that is, molecular clumps in the earliest\n phase with no embedded high-mass young stellar objects (HMYSOs). Some of\n these clumps are called infrared dark clumps (IRDCs) because they\n appear in absorption against the bright mid-IR background associated with\n the Galactic plane.}\n\\item{Protostellar clumps are those associated with signs of star\n formation such as outflows and HMYSOs, but where\n H{\\rmfamily\\scshape{ii}}\\ regions have not developed. 
We expect the embedded young\n high-mass stars to accrete at a high rate \\citep[$\\ge10^{-4}$\n \\mbox{\\,$M_{\\odot}$}\\ yr$^{-1}$, e.g.,][]{McKee2003ApJ,Keto2006ApJ,Tan2014prpl}\n and to reach the main sequence in typically $\\lesssim10^5$ yr\n \\citep{Behrend2001AA,Molinari2008AA}. Based on the Kelvin-Helmholtz\n contraction timescale, high-mass young stars will likely be on the\n main sequence while still accreting.}\n\\item{Molecular clumps associated with compact H{\\rmfamily\\scshape{ii}}\\ regions. The young\n high-mass stars in these clumps have probably finished their main\n accretion phase and have reached their final masses. \nStrong UV radiation from the newly born high-mass stars starts to ionize the surrounding cocoon.}\n\\item{Clumps in a late evolutionary stage, where the ionizing radiation,\n wind and outflow feedback, and the expansion of the ionized gas finally\n disrupt the molecular envelope, marking the transition to an\n observational stage characterized by an extended classical H{\\rmfamily\\scshape{ii}}\\ region\n and a photodissociation region (PDR).}\n\\end{enumerate}\nStudying the dust continuum emission in the mid-IR, far-IR, and\nsubmillimeter range is one of the most reliable ways to determine the\nevolutionary phase of molecular clumps. Dust emission in the submillimeter is usually\noptically thin and traces both cold and warm environments. 
By\ncombining large infrared Galactic plane surveys like Hi-GAL\n\\citep[Herschel Infrared Galactic plane survey,][]{Molinari2010PASP},\nATLASGAL \\citep[APEX Telescope Large Area Survey of the\n Galaxy,][]{Schuller2009AA}, GLIMPSE \\citep[Galactic Legacy Infrared\n Midplane Survey Extraordinaire,][]{Benjamin2003PASP}, and MIPSGAL\n\\citep{Carey2008AAS}, we can determine the evolutionary state and\ncalculate basic physical parameters of a large\nsample of molecular clumps.\n\nWith this prospect in mind, the Millimeter Astronomy Legacy Team 90 GHz\n(MALT90) survey\\footnote{{Survey website: http:\/\/malt90.bu.edu\/. The\n molecular line data can be accessed from\n http:\/\/atoa.atnf.csiro.au\/MALT90.}} (Rathborne et al.\\ in preparation;\n\\citealp{Jackson2013PASA,Foster2011ApJS,Foster2013PASA}) has studied\n3246\\ molecular clumps identified using SExtractor\n\\citep{Bertin1996AA} from the ATLASGAL data at\n870\\um\\ \\citep{Contreras2013AA,Urquhart2014AA}. MALT90 has mapped these\nclumps in 15 molecular and one hydrogen recombination line located\nin the 90 GHz atmospheric band using the 22 m Mopra telescope. The\nobjective is to determine the main physical and chemical characteristics of\na statistically relevant sample of high-mass molecular clumps over a wide\nrange of evolutionary stages. Approximately 80\\% of the MALT90 sources\nexhibit mid-IR characteristics that allow us to classify them into one of\nthe four preceding evolutionary stages: Quiescent, Protostellar,\nH{\\rmfamily\\scshape{ii}}\\ region, or PDR. This classification of the sources was done \n by visual inspection of Spitzer images at 3.6, 4.5, 8.0, and 24\n \\micron, as described in \\citet{Hoq2013ApJ} \\citep[see also][]{Foster2011ApJS}. 
By combining the\nMALT90 dataset with far-IR continuum and molecular line data, we can\nquantitatively characterize the clumps' temperatures, column densities, volume\ndensities, distances, masses, physical sizes, kinematics, luminosities, and\nchemistry.\n\nIn this paper, we focus on the dust continuum emission of the MALT90\nmolecular clump sample. We model the far-IR and submillimeter emission to\nderive physical parameters that are, to a first approximation, distance\nindependent, such as the dust temperature and the column\ndensity. Forthcoming publications by Whitaker et al. (in preparation) and\nContreras et al.\\ (in preparation) will present kinematic distances and\nanalyze the clumps' masses, sizes, volume densities, and luminosities.\nPreliminary analysis of the molecular emission indicates that the\n relative abundances, line opacities (Rathborne et al., in preparation,\n see also \\citealp{Hoq2013ApJ}), and infall signatures (Jackson et al., in\n preparation) are consistent with the mid-IR classification acting as a\n proxy for clump evolution. The MALT90 data have already been\n used in several other studies of high-mass star formation, either\nbased on a small ($<10$) set of relevant sources\n\\citep{Rathborne2014ApJ,Stephens2015ApJ,Walker2015MNRAS,Deharveng2015AA}\nor using a statistical approach on a larger sample \\citep[$>30$,][]{Hoq2013ApJ,Miettinen2014AA,Yu2015MNRAS,He2015MNRAS}. In these\nstudies with large samples \\citep[with the exception of][]{Hoq2013ApJ}, the\ndust temperature and column density of the clumps have not been\nsimultaneously derived from a model of the far-infrared spectral energy\ndistribution (SED). This paper aims to complement future high-mass star\nformation studies based on the MALT90 sample by supplying robust\nmeasurements of these physical properties and their uncertainties.\n\n\n\n\n\nSection \\ref{sec-obs} of this work presents the main\ncharacteristics of the data set and its reduction. 
Section \\ref{sec-ana}\ndescribes the methods used for analyzing the data, the modeling of the dust\nemission, and uncertainty and degeneracy estimations. Section\n\\ref{sec-dis} discusses possible interpretations of the statistical results\nof the dust parameters and, especially, how the clump evolutionary stages\ncorrelate with the dust-derived physical parameters. Section \\ref{sec-sum}\nsummarizes the main results of this work.\n\n\n{\\section{OBSERVATIONS}\\label{sec-obs}} \n\nThe analysis presented in this\nwork is based on data taken with the \\emph{Herschel Space Observatory}\n\\citep[HSO,][]{Pilbratt2010AA} and with the APEX telescope\n\\citep{Gusten2006AA}.\n\n{\\subsection{Processing of Public HSO Hi-GAL Data}\\label{sec-higal}} \n\nWe use public HSO data from the \nHerschel Infrared Galactic Plane Survey key-project\n\\citep[Hi-GAL,][]{Molinari2010PASP} observed between\nJanuary of 2010 and November of 2012 and obtained from the Herschel\nScience Archive. The observations were made using the parallel,\nfast-scanning mode, in which five wavebands were observed\nsimultaneously using the PACS \\citep{Poglitsch2010AA} and the SPIRE\n\\citep{Griffin2010AA} bolometer arrays. The data version\nobtained from the Herschel\nScience Archive corresponds to the Standard Product Generation\nversion 9.2.0.\n\n\nColumns 1 to 4 of Table \\ref{tab-ins} {list} the instrument, the\n representative wavelength in microns of each observed band, the angular\nresolution represented by the FWHM of the point spread function\n\\citep{Olmi2013AA}, and the estimated point source sensitivity\n($\\sigma_p$), respectively. 
The point source sensitivity, assuming\n Gaussian beams, is given by $\\sigma_{\\rm\n rms}\\Omega_b\\left(\\Omega_b\/2\\Omega_{\\rm pix}\\right)^{-1\/2}$, where\n $\\sigma_{\\rm rms}$ is the rms variations in intensity units, $\\Omega_b$\n is the beam solid angle, and $\\Omega_{\\rm pix}$ is the pixel solid\n angle.\\footnote{Theoretical justification and more detailed calculations for this formula can be found at the Green Bank Telescope technical notes:\n http:\/\/www.gb.nrao.edu\/$\\mathtt{\\sim}$bmason\/pubs\/m2mapspeed.pdf\n (B. Mason, private communication)} The fifth column gives the noise\nlevel of the convolved and re-gridded maps (see Sections \\ref{sec-noi} and\n\\ref{sec-conv}) and the sixth column lists the observatory where the data\nwere taken. Throughout this work, we will refer to the data related to a\nspecific waveband by their representative wavelength in micrometers. The\nposition uncertainty of the Hi-GAL maps is $\\sim$3\\arcsec.\n\nThe generation of maps that combine the two orthogonal scan directions\nwas done using the Herschel Interactive Processing Environment\n(HIPE) versions 9.2 and 10. Cross-scan combination and destriping\nwere performed over 42 Hi-GAL fields of approximately\n$2\\fdg2\\times2\\fdg2$ using the standard tools available in HIPE. Columns 1 to 4 of Table \\ref{tab-ids} give the target name, the ID of the observation, the observing mode, and the observation dates, respectively. 
\nFor the\nSPIRE maps, we applied the extended source calibration\nprocedure (Section 5.2.5 from the SPIRE Handbook\\footnote{http:\/\/herschel.esac.esa.int\/Docs\/SPIRE\/spire\\_handbook.pdf}) \nsince most of MALT90 sources correspond to dense clumps that\nare comparable to or larger than the largest SPIRE beam size.\nThe saturation limit of\nthe nominal mode for SPIRE (Section 4.1.1 from the SPIRE Handbook) \nis approximately 200 Jy~beam$^{-1}$.\nTo prevent saturation, fields with longitudes $|l|\\le5$\\arcdeg\\ were observed with SPIRE\nusing the bright observing mode instead of the nominal observing\nmode. \n\n\n\n{\\subsection{Other HSO Data}\\label{sec-hobys}} \n\nIn addition to Hi-GAL data, we {used} data from three observations\nmade using the SPIRE bright mode by the HOBYS key project \\citep[Herschel Imaging Survey of OB YSOs,][]{Motte2010AA}. Table\n\\ref{tab-ids} lists these observations' IDs. They were\ndirected toward the NGC 6334 ridge and the central part of M17, areas which\nare heavily saturated in the Hi-GAL data.\n\n\n{\\subsection{ATLASGAL Archival Data}\\label{sec-laboca}} \n\nData at 870 \\um\\ were taken between 2007 and 2010 using the bolometer\nLABOCA \\citep{Siringo2009AA} installed on the APEX telescope located\nin Chajnantor valley, Chile, as part of the ATLASGAL key project\n\\citep{Schuller2009AA}. Calibrated and reduced fits images were\nobtained from the data public releases made by \\citet{Contreras2013AA}\nand \\citet{Urquhart2014AA}. Table \\ref{tab-ins} displays the angular\nresolution, {the point source sensitivity calculated as in Section \\ref{sec-higal} using $\\sigma_{\\rm rms}=60$ mJy beam$^{-1}$ and $\\Omega_{b}\/\\Omega_{\\rm pix}=11.6$ \\citep{Contreras2013AA}, and the typical noise of the convolved and re-gridded ATLASGAL maps}. 
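As a numerical check of the point source sensitivity formula of Section \ref{sec-higal}, applied to the ATLASGAL values just quoted (this is our reading of the formula, with $\sigma_{\rm rms}$ expressed in per-beam units so that the remaining solid-angle factor is $\Omega_b\/2\Omega_{\rm pix}$):

```python
import math

def point_source_sensitivity(sigma_rms, beam_per_pix):
    """sigma_p = sigma_rms * (Omega_b / (2 Omega_pix))**(-1/2), with sigma_rms
    in per-beam units (mJy/beam) and beam_per_pix = Omega_b / Omega_pix."""
    return sigma_rms / math.sqrt(beam_per_pix / 2.0)

# ATLASGAL values quoted in the text: 60 mJy/beam rms, Omega_b/Omega_pix = 11.6
sigma_p = point_source_sensitivity(60.0, 11.6)
print(round(sigma_p, 1))   # about 25 mJy
```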
In addition to this noise, we assume a 10\\%\nuncertainty in the absolute calibration.\n\n{\\section{ANALYSIS}\\label{sec-ana}}\n\nThe following sections describe the methods used in the model fitting and\nuncertainty estimations. There are 2573\\ ATLASGAL sources observed by MALT90 classified\naccording to their mid-IR appearance as Quiescent, Protostellar,\nH{\\rmfamily\\scshape{ii}}\\ region, or PDR. The remaining sources (\\Uncertain) exhibit no\nclear mid-IR features that allow us to classify them unambiguously in these\nevolutionary stages. We refer to these sources as ``Uncertain.'' \nThe\n MALT90 catalog includes 3557\\ entries, of which 2935\\ sources are\n associated with molecular emission detected at a single V$_{\\rm LSR}$. \nMALT90 also detected molecular emission arising at two V$_{\\rm LSR}$\n toward \\DoubleClumps\\ ATLASGAL sources, which correspond to\n 622\\ entries in the MALT90 catalog. The continuum emission from\n these sources comes from two or more clumps located at\n different distances, complicating the interpretation. We have calculated\n column densities and temperatures toward these blended sources, but we\n have excluded them from the discussion of Section \\ref{sec-dis}.\n\n{\\subsection{Noise Estimation of the HSO Data}\\label{sec-noi}}\n\nTo a first approximation, the intensity assigned to each pixel is given \n by the average of the\nbolometer readings that covers that pixel position. The spatial sampling of the maps,\non the other hand, includes $\\sim$3 pixels per beamwidth. \nObserved astronomical signals vary spatially on angular scales $\\gtrsim$ 1\nbeamsize. 
Therefore, in the large fraction of the map area\n that is away from very strong sources, we expect that the differences between adjacent pixels are\ndominated by instrumental noise.\nIn order to estimate this noise, we use the\nhigh-pass filter defined by \\citet{Rank1999IEEP} to determine the\ndistribution of pixel-to-pixel variations and filter out astronomical\nemission. The width of this distribution determines the typical noise\nthrough the relation $2.36\\sigma=\\text{FWHM}$. The advantage\nof this method is that it gives us an extra and relatively simple \n way to estimate the noise of the final maps. The noise estimation is similar to that obtained from \\emph{jackknife} maps, produced by taking the\ndifference between maps generated by the two halves of the bolometer array \\citep[see][for an analogous procedure]{Nguyen2010AA}.\n\nThe 1-$\\sigma$ point source sensitivities derived from the high-pass\nfilter method described above are typically 18 and 24 mJy for the two\nPACS bands at 70 and 160 \\micron, {and 12 mJy for the three\nSPIRE bands at 250, 350, and 500 \\micron}. These derived\nsensitivities are in good agreement with the ones expected for the\nHi-GAL survey \\citep{Molinari2010PASP} and in reasonable agreement\nwith the sensitivities expected for the parallel\nmode,\\footnote{http:\/\/herschel.esac.esa.int\/Docs\/PMODE\/html\/ch02s03.html}\nwith the possible exception of the 160 \\um\\ band where we estimate\nabout half of the expected noise. The noise value derived at 250\n\\um\\ is comparable with the noise component derived by\n\\citet{Martin2010AA} also from Hi-GAL data, indicating that our estimation\neffectively filters most of the sky emission variations, including the\ncirrus noise. Finally, and as expected, we find that the noise in\nfields observed in the SPIRE bright mode is $\\sim$4 times larger\ncompared to that in fields observed in nominal mode. 
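The idea behind this estimate can be illustrated on synthetic data. The sketch below is a simplified stand-in for the \citet{Rank1999IEEP} high-pass filter: adjacent-pixel differences suppress emission varying on scales of a beam or larger, and for independent Gaussian noise ${\rm Var}(I_{i+1}-I_i)=2\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 0.05                                 # per-pixel instrumental noise
yy, xx = np.mgrid[0:512, 0:512]
# smooth "astronomical" signal: a broad Gaussian source on a flat field
sky = 0.5 * np.exp(-((xx - 256.0)**2 + (yy - 256.0)**2) / (2 * 60.0**2))
img = sky + rng.normal(0.0, sigma_true, size=(512, 512))

# Differences of horizontally adjacent pixels cancel the smooth signal;
# the noise estimate follows from Var(diff) = 2 sigma^2.
diff = img[:, 1:] - img[:, :-1]
sigma_est = diff.std() / np.sqrt(2.0)
print(sigma_est)   # close to the input value 0.05
```

The paper instead measures the FWHM of the pixel-difference distribution ($2.36\sigma = {\rm FWHM}$), which is more robust to bright sources; for this toy case the plain standard deviation suffices.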
For subsequent\nanalyses, we consider an additional independent calibration\nuncertainty of 10\\% whenever we compare data among different bands, as\nfor example, in the SED fitting. This\n10\\% represents a conservative approximation of the combined\ncalibration uncertainty of the SPIRE photometers (5.5\\%) and the beam\nsolid angle (4\\%, see Section 5.2.13 of the SPIRE\nHandbook).\n\n{\\subsection{Convolution to a Common Resolution and Foreground\/Background Filtering}\\label{sec-conv}}\n\n\nMulti-wavelength studies of extended astronomical objects, such as\nstar-forming clumps and IRDCs, often combine data taken with\ndifferent angular resolutions. Therefore, to make an adequate\ncomparison of the observed intensities, it is necessary to transform\nthe images to a common angular resolution. We accomplish this by\nconvolving the images to the lowest available\nresolution, given by the 500 \\micron\\ SPIRE instrument, using the\nconvolution kernels of \\citet{Aniano2011PASP} in the case of HSO data.\nThe ATLASGAL data were convolved by a two-dimensional Gaussian with FWHM\nequal to $\\sqrt{35\\farcs0^2- 19\\farcs2^2}\\approx29\\farcs3$, under the\n assumption that the point spread functions of the ATLASGAL and\nthe 500 \\micron\\ data are Gaussians. In addition, to compare the\nintensity of the HSO images with that of the APEX telescope, we need\nto remove from the HSO data the low spatial frequency emission that\nhas been filtered from the ATLASGAL images. The ATLASGAL spatial filtering is\nperformed during the data reduction, and is a by-product of the\natmospheric subtraction method which removes correlated signal between\nthe bolometers \\citep{Siringo2009AA}. As a consequence, any uniform\nastronomical signal covering spatial scales larger than 2\\farcm5 is\nlost \\citep{Schuller2009AA}. \n\nWe filter the HSO data in a similar way by subtracting a background image\nfrom each field and at each band. 
We assume that this background is a\nsmooth additive component that arises from diffuse emission either behind\nor in front of the clump. \nIn addition to filtering the HSO data in order to\ncombine it with ATLASGAL, the background subtraction serves two more\npurposes: it separates the Galactic cirrus emission from the molecular\nclouds \\citep[e.g.,][]{Battersby2011AA}, and it corrects for the unknown\nzero level of the HSO photometric scale. \nOur background model consists of a\nlower-envelope of the original data under two constraints: its value at each\npixel has to be less than that in the image, within a 2-$\\sigma$ tolerance, and\nit has to vary by less than 10\\% over 2\\farcm5, which corresponds to the\nATLASGAL filter angular scale. \n\nWe construct a background image for each Hi-GAL field following a slight\nmodification of the \\emph{CUPID-findback}\nalgorithm\\footnote{http:\/\/starlink.jach.hawaii.edu\/starlink\/findback.html}\nof the \\emph{Starlink} suite \\citep{Berry2013ASPC}. \nThe iterative algorithm used to construct the background starts with\n the original image. Then, we calculate a smoothed image by setting to\n zero (in the Fourier transform plane) the spatial frequencies\n corresponding to flux variations on angular scales $<2\\farcm5$. For each\n pixel in this smoothed image with a value larger than the corresponding\n pixel in the original image plus $2 \\sigma$, the pixel value from the\n smoothed image is replaced by the one in the original image, where\n $\\sigma$ is the uncertainty of the map. The remaining pixels in the\n smoothed image are kept unchanged. The resultant map is the first\n iteration of the algorithm. This first iteration replaces the starting\n image and the cycle repeats, generating further iterations, until the\n change between two consecutive iterations is less than 5\\% in all pixels.\n\nFigure\n \\ref{fig-bac} shows an example of this process, which converges to a\n smooth lower-envelope of the original image. 
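A one-dimensional toy version of this iteration is sketched below (illustrative only: the production code is the modified \emph{CUPID-findback} described above applied to 2-D maps, and here a Gaussian low-pass taper replaces the sharp Fourier cutoff to avoid ringing).

```python
import numpy as np

def lower_envelope(data, sigma, cutoff_pix, rtol=0.05, max_iter=50):
    """Iterate: low-pass smooth, then replace pixels exceeding data + 2 sigma
    by the data value, until changes fall below rtol (or max_iter)."""
    bg = data.copy()
    freqs = np.fft.rfftfreq(data.size)            # cycles per pixel
    taper = np.exp(-0.5 * (freqs * cutoff_pix)**2)
    for _ in range(max_iter):
        smooth = np.fft.irfft(np.fft.rfft(bg) * taper, n=data.size)
        new_bg = np.where(smooth > data + 2 * sigma, data, smooth)
        if np.allclose(new_bg, bg, rtol=rtol):
            break
        bg = new_bg
    return bg

x = np.linspace(-1.0, 1.0, 1024)
clump = 5.0 * np.exp(-x**2 / 0.002)               # compact source
cirrus = 1.0 + 0.5 * np.cos(2.5 * x)              # slowly varying diffuse emission
data = clump + cirrus + np.random.default_rng(1).normal(0.0, 0.02, x.size)
bg = lower_envelope(data, sigma=0.02, cutoff_pix=100)
print((data - bg).max())   # the compact clump survives background subtraction
```

The envelope tracks the slowly varying "cirrus" while staying below the compact source, so subtracting it removes the diffuse component but preserves the clump.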
The solid black line\n shows a cut along $l=355\\fdg8$ of the intensity measured at 250 \\um.\n Negative intensity values away from the Galactic plane are a\n consequence of the arbitrary zero-level of the HSO photometry scale.\n Dashed lines show different iterations of the algorithm and the\n final adopted background is marked in red.\nThe error bar at the center of\nthe plot measures 2\\farcm5, that is, the shortest angular scale filtered by\nthe background. Note that Figure \\ref{fig-bac} shows a cut across latitude\nat a fixed longitude, but the algorithm works on the two-dimensional image,\nnot assuming any particular preferred direction.\n\n\n{\\subsection{Single Temperature Grey-Body Model}\\label{sec-fit}}\n\nWe interpret the observed intensities as arising from a single temperature \ngrey-body dust emission model. The monochromatic intensity at a frequency $\\nu$ \nis given by\n \\begin{equation}\nI_\\nu(T_d,N_g)=B_\\nu(T_d)\\left(1-e^{-\\tau_\\nu}\\right)~~,\\label{eq-Idust}\n\\end{equation}\nwhere $B_\\nu(T_d)$ is the Planck function at a dust temperature $T_d$ and \n\\begin{align}\n\\tau_\\nu&=N_{\\rm dust}\\kappa_\\nu~~,\\label{eq-tauDust}\\\\\n &=N_g\\kappa_\\nu\/{\\rm GDR}~~,\\label{eq-gdr}\n\\end{align}\nwhere $\\tau_\\nu$ is the dust optical depth, $N_{\\rm dust}$ is the dust\ncolumn density, and $\\kappa_\\nu$ is the dust absorption\ncoefficient. The relation between the dust and gas ($N_g$)\ncolumn densities is determined by the gas-to-dust mass ratio (GDR),\nwhich we assume is equal to $100$. We also define the particle column\ndensity by $N_p:=N_g\/(\\mu m_{\\rm H})$, where\n$\\mu=2.3$. The number column density of molecular hydrogen ($N_{\\rm H_2}$) is\nobtained in the same way but using $\\mu=2.8$ \\citep{Kauffmann2008AA}, under the assumption\nthat all the hydrogen is in molecular form. 
We assume throughout this work that $N_g$ is measured in gr cm$^{-2}$ and $N_{\\rm H_2}$ and $N_p$ in cm$^{-2}$.\nTo compare the dust emission model to the data, we weight the\nintensity given by Equation \\eqref{eq-Idust} by the spectral response\nfunction of the specific waveband, in order to avoid post-fitting color\ncorrections \\citep[see for example,][]{Smith2012ApJ}.\n\nWe exclude the 70 \\micron\\ intensity from the single $T_d$ fitting \nsince this emission cannot be\nadequately reproduced by Equation \\eqref{eq-Idust} (see Section \\ref{sec-mq}). \nThis problem has been noted by several authors\n\\citep[e.g.,][]{Elia2010AA,Smith2012ApJ,Battersby2011AA,Russeil2013AA},\nwho have provided at least three possible reasons:\n\\begin{enumerate}\n\\item{emission at this wavelength comes from a warmer component,}\n\\item{cold and dense IRDCs are seen in absorption against the Galactic plane\nat 70 \\um\\ rather than emission,} \n\\item{a large fraction of the\n70 \\um\\ emission comes from very small grains, where the assumption\nof a single equilibrium temperature is not valid.} \n\\end{enumerate}\n \nFor each pixel and given the observed background-subtracted \nintensities $I_{\\rm \\nu, obs}$, \nwe minimize the squared difference function,\n\\begin{equation}\n\\chi^2(T_d,N_g)=\\sum_{\\rm \\nu}\\frac{(I_{\\rm \\nu, obs}-\\tilde{I}_{\\nu})^2}{\\sigma_{\\nu}^2}~~,\\label{eq-chi2}\n\\end{equation}\nwhere the sum is taken over the observed frequencies (i.e., 5 bands) and\n$\\tilde{I}_{\\nu}$ is the intensity spectrum predicted by the model weighted\nby the respective bandpass. The best-fit dust temperature, $T_d$, and gas\ncolumn density, $N_g$, minimize the $\\chi^2$ value. The variance\n$\\sigma_{\\nu}^2$ is equal to the sum in quadrature of the noise (taken from\nTable \\ref{tab-ins}) plus 10\\% of the background-subtracted intensity. 
We\nfit the model described in Equation \\eqref{eq-Idust} for all the pixels\nwith intensities larger than $2\\sigma_\\nu$ in all bands.\n\n\nThe reduced $\\chi^2$, defined as $\\chi^2_r:=\\chi_{\\rm min}^2\/(m-p)$\n\\citep{Bevington2003DRDP}, is a simple measure of the quality of the\nmodel. Here, $\\chi^2_{\\rm min}$ is the minimized $\\chi^2$ of Equation\n\\eqref{eq-chi2}, $m$ is the number of data-points, and $p$ is the number of\nfitted parameters. In our case, we fit the dust temperature and the\nlogarithm of the gas column density, so $p=2$. Under the hypothesis that\nthe data are affected by ideal, normally distributed noise, $\\chi_r^2$ has\na mean value of 1 and a variance of $2\/(m-p)$.\n\nFigure \\ref{fig-chi2CDF} shows the $\\chi^2_r$ cumulative distribution\nfunction (CDF), calculated using all the pixels for which we fit the SED.\nThe median $\\chi^2_r$ value is 1.6. This value is less than\n$1+\\sqrt{2\/3}\\approx1.8$, which is the expected value plus 1-$\\sigma$ under\nthe assumption of normal errors for any particular fit.\nWe conclude that the SED model is in most cases adequate, or equivalently,\nthe limited amount of photometric data does not justify a more complicated\nmodel. Note that, although the distribution of $\\chi^2_r$ has a reasonable\nmean and median, it has a large tail: the 95\\% quantile is located at\n$\\chi^2_r\\approx9.6$. This value represents a poor fit to the model, which\ncan be usually attributed to a single discordant data-point. Generally,\nthis point corresponds to the 870 \\micron\\ intensity, which illustrates the\ndifficulties of trying to match the spatial filtering of the HSO with\nATLASGAL data, despite the background correction and common resolution\nconvolution. 
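In outline, the per-pixel fit described above amounts to the following sketch (simplified for illustration: a power-law $\kappa_\nu$ with arbitrary normalization instead of the tabulated opacities, no bandpass weighting, and a brute-force grid in place of the actual minimizer; all numerical values are assumed for the example).

```python
import numpy as np

H, K, C = 6.626e-34, 1.381e-23, 2.998e8           # SI constants

def greybody(nu, t_d, n_g, kappa0=1.0, nu0=1e12, beta=1.6, gdr=100.0):
    """I_nu = B_nu(T_d)(1 - exp(-tau)), tau = (N_g/GDR) kappa0 (nu/nu0)^beta."""
    b_nu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (K * t_d))
    tau = (n_g / gdr) * kappa0 * (nu / nu0)**beta
    return b_nu * -np.expm1(-tau)

wavelengths_um = np.array([160.0, 250.0, 350.0, 500.0, 870.0])
nu = C / (wavelengths_um * 1e-6)

# synthetic "pixel": T_d = 18 K, N_g = 0.05 g cm^-2 = 0.5 kg m^-2, 2% noise
rng = np.random.default_rng(0)
obs = greybody(nu, 18.0, 0.5) * (1.0 + 0.02 * rng.normal(size=nu.size))
sigma = 0.1 * obs                                  # 10% uncertainty per band

# brute-force chi^2 minimization over (T_d, log N_g)
t_grid = np.linspace(5.0, 40.0, 141)
n_grid = 10**np.linspace(-2.0, 1.0, 151)
chi2 = np.array([[np.sum((obs - greybody(nu, t, n))**2 / sigma**2)
                  for n in n_grid] for t in t_grid])
i, j = np.unravel_index(chi2.argmin(), chi2.shape)
t_best, n_best = t_grid[i], n_grid[j]
print(t_best, n_best)   # close to the input (18 K, 0.5 kg m^-2)
```

With five bands straddling the SED peak, the shape of the spectrum pins down $T_d$ while the overall amplitude pins down $N_g$, so the two-parameter fit is well constrained.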
We re-examine the fitting when the $\\chi^2_r$ value is larger\nthan 10 and remove from the fitting at most one data point only if its\nremoval decreases the $\\chi^2_r$ value by a factor of 10 or more.\n\n\n{\\subsubsection{Spectral Index of the Dust Absorption Coefficient}\\label{sec-beta}}\n\nAt frequencies $\\nu<1$~THz, the dust absorption coefficient curve\n$\\kappa_\\nu$ is well approximated by a power\nlaw dependence on frequency with spectral index $\\beta$ \\citep{Hildebrand1983QJRAS}, that is, \n\\begin{equation}\n\\kappa_\\nu=\\kappa_0(\\nu\/\\nu_0)^\\beta~~.\\label{eq-beta}\n\\end{equation}\nIn principle, it is possible to quantify $\\beta$ toward regions where the\nemission is optically thin and the temperature is high enough such that the\nRayleigh-Jeans (R-J) approximation is valid. In this case, from\nEquations \\eqref{eq-Idust} and \\eqref{eq-beta} we deduce that\n\\begin{equation}\nI_{\\nu_1}\/I_{\\nu_2}=(\\nu_1\/\\nu_2)^{\\beta+2}~~,\\label{eq-opthin}\n\\end{equation}\nwhich is independent of temperature. \n\n\nWe estimate $\\beta$ through Equation \\eqref{eq-opthin} using low frequency\n($<600$~GHz) data taken towards warm ($>30$ K) sources to ensure that the\nR-J and the dust absorption coefficient power-law approximations are valid.\nUsing this value of $\\beta$ we will be able to justify better the\n selection of a dust opacity law among the different theoretical\n models \\citep[e.g.,][]{Ormel2011AA}. In order to ensure that the\n sources used to estimate $\\beta$ are warm enough for Equation\n \\eqref{eq-opthin} to be valid, we select IRAS sources that are part of\n the 1.1 mm Bolocam Galactic Plane Survey\n \\citep[BGPS,][]{Rosolowsky2010ApJS,Ginsburg2013ApJS} and the\n ATLASGAL catalog at 870 \\micron. We also require that they fulfill\n $S_{60}\/S_{100}>0.5$, where $S_{60}$ and $S_{100}$ are their fluxes\n at 60 and 100 \\um, respectively. 
\nIn addition,\nwe select sources with $|l|>10\\arcdeg$ in order to avoid possible confusion that may arise in the crowded regions around the Galactic center. \nWe find 14 IRAS sources fulfilling these\nrequirements: 18079$-$1756, 18089$-$1837, 18114$-$1825, 18132$-$1638,\n18145$-$1557, 18159$-$1550, 18162$-$1612, 18196$-$1331, 18197$-$1351,\n18223$-$1243, 18228$-$1312, 18236$-$1241, 18247$-$1147, and 18248$-$1158.\n\nThe average $\\beta$ calculated for these sources using Equation\n\\eqref{eq-opthin} is 1.6, but with a dispersion of 0.5 among the\nsources. This dispersion is large, but it is compatible with a 15\\%\nuncertainty in the fluxes. The spectral index is in agreement with\nthe absorption coefficient law of silicate-graphite grains, with\n$3\\times10^4$ yr of coagulation, and without ice coatings according to\nthe dust models from \\citet{Ormel2011AA}. For the rest of this work,\nwe use this model of dust for the SED fitting. The tables compiled by\n\\citet{Ormel2011AA} also sample the frequency range of interest for\nthis work in more detail than the frequently used dust models of\n\\citet{Ossenkopf1994AA}. We refrain from fitting $\\beta$ together\nwith the SED for two reasons: i) we lack the adequate data to\neffectively break the degeneracy between $\\beta$ and $T_d$, that is,\ngood spectral sampling of highly sensitive data below $500$ GHz; \n\nand ii) the range of dust\nmodels explored by fitting $\\beta$ includes only power-laws instead of\nusing more physically motivated tabulated dust models.\n \nTo compare our results with previous studies, which may have\nderived temperatures and column densities using different hypotheses,\nwe review how different assumptions on $\\beta$ affect the\nbest-fit estimation of $T_d$. \n Several studies\n\\citep[e.g.,][]{Shetty2009ApJ696-676,Shetty2009ApJ696-2234,Juvela2012AA541-33}\nhave discussed this problem in association with least-squares\nSED fitting in the presence of noise. 
They find that $\\beta$ and $T_d$ are\nsomewhat degenerate and associated with \n elongated (sometimes described as banana-shaped) \nbest-fit uncertainty regions in the $\\beta$-$T_d$ plane. \nIn this work, we stress one aspect\nthat has not been sufficiently emphasized: there are \\emph{two}\nbehaviors of the $\\beta$-$T_d$ degeneracy: one is evident \nwhen the data cover the\nSED peak, and the other when the data only cover the\n R-J part of the spectrum. In the first case the\ndegeneracy is well described by the modified Wien displacement law\n\\begin{equation}\n\\frac{h\\nu_{\\rm peak}}{k T_d}\\approx(\\beta+3)~~,\\label{eq-mwdl}\n\\end{equation}\nthat is, the uncertainty region of $T_d$ and $\\beta$ is elongated along the\ncurve defined by Equation \\eqref{eq-mwdl}. In Equation \\eqref{eq-mwdl},\n$\\nu_{\\rm peak}$ represents the frequency where the SED takes its maximum value, which under optically thin\nconditions is proportional to the temperature. The proportionality constant\ndepends on $\\beta$ in a complicated way, but the approximation of Equation\n\\eqref{eq-mwdl} is correct to within 10\\% for $\\beta>1$ and within 20\\%\nfor all $\\beta\\ge0$. Note that by assuming a value of $\\beta$ and determining\n$\\nu_{\\rm peak}$ observationally we can estimate $T_d$ using Equation\n\\eqref{eq-mwdl} in a simple way. \\citet{Sadavoy2013ApJ} and \\citet[][their\n 20 K case]{Shetty2009ApJ696-676} show examples of uncertainty regions\ngiven by the iso-contours of the $\\chi^2$ function which are elongated\nalong the curve defined in Equation \\eqref{eq-mwdl}. On the other hand, if\nthe spectral range of the data does not cover the observed peak of the SED\nand covers only the R-J region, the degeneracy between $\\beta$ and $T_d$ is\nbetter described by the following relation,\n\\begin{equation}\n\\beta-\\frac{h\\nu_m}{2 k T_d}= {\\rm constant}~~,\\label{eq-RJ}\n\\end{equation}\nwhere $\\nu_m$ is the highest observed frequency. 
This relation describes\nwell the degeneracy of the high temperature curves (60 and 100 K) shown in\n\\citet{Shetty2009ApJ696-676}. The constant on the right-hand side of\nEquation \\eqref{eq-RJ} is approximately\n\\[2+\\frac{d\\ln S_{\\nu_m}}{d \\ln \\nu}~~,\\] that is, 2 plus the logarithmic \nderivative (or spectral index) of the spectrum evaluated at the\nhighest observed frequency. In practice, the exact values of the\nconstants on the right-hand side of Equations \\eqref{eq-mwdl} and\n\\eqref{eq-RJ} can be determined from the best-fit\nsolutions. In this work, the HSO bands usually cover the peak of the\nSED, so Equation \\eqref{eq-mwdl} is more pertinent.\nDepending on the spectral sampling, we can use Equation \\eqref{eq-mwdl} or\n\\eqref{eq-RJ} to compare temperatures between studies that assume\ndifferent values of $\\beta$. \nFor example, the emission in the HSO\nbands from a cloud of $T_d=15$~K with a $\\beta=1.0$ dust absorption\nlaw is also consistent, by Equation \\eqref{eq-mwdl} and assuming 10\\%\nuncertainty, with the emission coming from a cloud of $T_d=12$~K and\n$\\beta=2$. In each case, the HSO bands cover the peak\nof the SED. We use this method to re-scale and compare \nbest-fit temperatures obtained from the literature in Section \\ref{sec-dis}.\n\n{\\subsection{Model Uncertainties}\\label{sec-mq}} \n\nWe estimate the best-fit parameter uncertainties using the projection\nof the 1-$\\sigma$ contour of the function\n$\\Delta\\chi^2:=\\chi^2-\\chi^2_{\\rm min}$ \\citep{Lampton1976ApJ}. In the\ncase of 2 fitted parameters, the 1-$\\sigma$ uncertainty region is enclosed by the\n$\\Delta\\chi^2=2.3$ contour. The parameter uncertainties for the SED\nfitting are given by the projections of these uncertainty regions onto\nthe $T_d$ and $\\log N_{g}$ axes. 
For pixels in images observed using\nthe nominal observing mode, the projections are well described by\nthe following equations\n\\begin{equation}\n\\label{eq-unc}%\n\\begin{aligned}\n\\delta T^{-} &=\\eta_{10}\\left(0.3-0.4~T_{10}+ 0.4~T_{10}^2\\right)~~,\\\\\n\\delta T^{+} &= \\eta_{10}\\left(1.1-1.3~T_{10}+0.7~T_{10}^2\\right)~~,\\\\\n\\delta\\log N_g &= \\eta_{10}\\left(0.03-0.03\\log N_g\\right)~~, \n\\end{aligned}\n\\end{equation}\nwhere $T_{10}=T_d\/(10~{\\rm K})$, $N_g$ is in gr~cm$^{-2}$, and\n$\\eta_{10}$ is the flux calibration uncertainty in units of 10\\%. \nThe best fit temperature and log-column density with \n their 1-$\\sigma$ uncertainties are given by ${T_d}^{+\\delta T^{+}}_{-\\delta T^{-}}$ and \n$\\log N_g\\pm\\delta \\log N_g$, respectively. \nFor pixels in images observed using the bright observing \nmode, the projections are well described by \n\\begin{equation}\n\\label{eq-uncB}%\n\\begin{aligned}\n\\delta T^{-} &=\\eta_{10}\\left(0.7-0.71~T_{10}+ 0.53~T_{10}^2\\right)~~,\\\\\n\\delta T^{+} &=\\eta_{10}\\left(1.1-1.3~T_{10}+0.74~T_{10}^2\\right)~~,\\\\\n\\delta\\log N_g &= \\eta_{10}\\left(0.05-0.03\\log N_g\\right)~~.\n\\end{aligned}\n\\end{equation}\nEquations \\eqref{eq-unc} and \\eqref{eq-uncB} were derived by fitting\nthe upper and lower limits of $T_d$ and $\\log N_g$ projections of the\nuncertainty region. \n These approximations for the uncertainty are\nvalid for $T_d$ between 7 and 40 K, for $\\log N_g $ between $-3.4$ and\n$1.1$ (equivalent to $\\log N_{\\rm H_2} $ between 19.9 and 24.4), and for\nvalues of $\\eta_{10}$ between 1 and 2, which correspond to 10\\%\nand 20\\% calibration errors, respectively. Figure \\ref{fig-Dchi2con}\nshows an example of the prediction of Equations \\eqref{eq-unc}\ncompared to the $\\Delta\\chi^2$ contours. 
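As a convenience, the polynomial fits of Equations \eqref{eq-unc} and \eqref{eq-uncB} can be evaluated directly. The sketch below is a straightforward transcription of those equations; the function name and the example inputs are ours:

```python
def fit_uncertainties(T_d, log_Ng, eta10=1.0, bright=False):
    """1-sigma uncertainties from the polynomial fits to the Delta-chi^2
    projections: Eq. (eq-unc) for the nominal observing mode, Eq. (eq-uncB)
    for the bright mode.  T_d in K, log_Ng = log10(N_g / (1 gr cm^-2)),
    eta10 = flux calibration uncertainty in units of 10%."""
    T10 = T_d / 10.0
    if bright:
        dT_minus = eta10 * (0.7 - 0.71 * T10 + 0.53 * T10**2)
        dT_plus  = eta10 * (1.1 - 1.3  * T10 + 0.74 * T10**2)
        dlogN    = eta10 * (0.05 - 0.03 * log_Ng)
    else:
        dT_minus = eta10 * (0.3 - 0.4 * T10 + 0.4 * T10**2)
        dT_plus  = eta10 * (1.1 - 1.3 * T10 + 0.7 * T10**2)
        dlogN    = eta10 * (0.03 - 0.03 * log_Ng)
    return dT_minus, dT_plus, dlogN

# e.g., a 20 K pixel with log N_g = -1 and a 10% calibration uncertainty
print(tuple(round(u, 2) for u in fit_uncertainties(20.0, -1.0)))  # (1.1, 1.3, 0.06)
```

The example also makes the asymmetry explicit: at 20 K the upper temperature uncertainty already exceeds the lower one, consistent with the skewness discussed next.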
Within their range of\nvalidity and for data taken in the nominal mode, the intervals \n$\\left[T_d-\\delta T^{-},T_d+\\delta T^{+}\\right]$ and \n$\\left[\\log N_g-\\delta \\log N_g,\\log N_g+\\delta \\log N_g\\right]$ \ncorrespond to the projections of the uncertainty ellipse onto the \n $T_d$ and $\\log N_g$ axes\nwithin 0.2 K and 0.02 dex, respectively.\n\nEquations \\eqref{eq-unc} also indicate that, while the confidence\ninterval of $\\log N_g $ is symmetric and roughly constant, the\ntemperature uncertainties grow rapidly above 25 K and they are skewed\ntowards higher values \\citep{Hoq2013ApJ}. This is mainly due to the absence of data at wavelengths shorter than 160 \\um. \nAs explained in Section \\ref{sec-fit}, we do not use the 70\n\\micron\\ data in our model. Including the 70\n\\um\\ data increases the median of the $\\chi^2_r$ distribution to $\\sim3.5$.\nThe temperature and\nlog-column density uncertainties are also correlated along the approximate direction\nwhere the product $T_d\\times N_g $ is constant, indicated in Figure\n\\ref{fig-Dchi2con}. The better the R-J approximation is for the SED, the\nbetter the alignment of the major axis of the ellipse with the line\n$T_d\\times N_g=\\text{constant}$.\n\n\n{\\subsection{Saturated Sources}\\label{sec-sat}}\n\nThe HSO detectors in SPIRE and PACS have saturation limits that depend on\nthe observing mode. The saturation intensities for the\nnominal observing mode are 220 and 1125 Jy~beam$^{-1}$ for the 70 and\n160 \\um\\ PACS bands\\footnote{PACS Observer's Manual, v.\\ 2.5.1, Section 5.4}, \nrespectively, and 200 Jy~beam$^{-1}$ for SPIRE.\nSaturation is most problematic in the 250 \\um\\ SPIRE band. \n\nThere are 46 MALT90 sources whose Hi-GAL data are affected by\nsaturation. Of these, six are covered by HOBYS observations made\nusing the bright mode (Section \\ref{sec-hobys}), which gives reliable 250\n\\um\\ intensities. 
For the remaining 40 sources, we replace the\nsaturated pixels with the saturation limits given above, and we fit\nthe SED taking these values as lower\nbounds.\n\n{\\subsection{The 70 $\\mu$m Appearance of the Quiescent Clumps}\\label{sec-re70}}\n\nMALT90 and other previous studies\n\\citep[e.g.,][]{Molinari2008AA,Lopez-Sepulcre2010AA,Sanhueza2012ApJ,Sanchez-Monge2013MNRAS,Giannetti2013AA,Csengeri2014AA}\nuse mid-IR observations as a probe of star formation activity. However,\ndeeply embedded, early star formation activity could be undetected in the\nmid-IR yet be conspicuous in the far-IR. Quantitatively, we expect the 24\nto 70 \\um\\ flux density ratio of high-mass young stellar objects (HMYSOs) to vary between $10^{-6}$ and 1 for\na wide range of molecular core masses (60 to 240 \\mbox{\\,$M_{\\odot}$}) and central star\nmasses over 1~\\mbox{\\,$M_{\\odot}$}\\ \\citep{Zhang2014ApJ}. Therefore, despite MIPSGAL\nhaving $\\sim 50$ times better point source sensitivity at 24 \\micron\\ than\nHi-GAL at 70 \\micron, it is possible to detect embedded protostars\nin 70 \\um\\ images that would otherwise appear dark at 24 \\um\\ and would be\nclassified as Quiescent. The 70 \\um\\ data thus allow us to further refine\nthe MALT90 classification since a truly Quiescent clump should lack 70 \\um\\\ncompact sources \\citep[e.g.,][]{Sanhueza2013ApJ,Beuther2013AA553,Tan2013ApJ}.\n\nWe examined the Hi-GAL 70 \\um\\ images of the 616\\ Quiescent sources\nand found 91 (15\\%) that show compact emission at 70 \\um\\ within\n38\\arcsec\\ -- or one Mopra telescope beamsize -- of the nominal MALT90\nsource position. Hereafter, we consider these sources as part of the Protostellar sub-sample. We also\nfound 83 sources that appear in absorption at 70 \\um\\ against the\ndiffuse Galactic emission. We refer to these clumps as far-IR dark\nclumps (far-IRDCs). 
The remaining 442 Quiescent sources are either associated with diffuse emission not useful for tracing embedded star formation, or they are confused with the 70 \\um\\ diffuse emission from the Galactic plane.\n\n{\\section{DISCUSSION}\\label{sec-dis}}\n\nFigure \\ref{fig-sed} shows the dust temperature and column density obtained\nfor each pixel around the source AGAL343.756$-$00.164, which is taken as a\ntypical example. Best-fit dust temperatures, column densities, and their\nuncertainties are calculated pixel by pixel. The two plots located in the\nlower-right corner of Figure \\ref{fig-sed} show the SED measured in two\ndirections (center and periphery) toward AGAL343.756$-$00.164. The blue\ndashed line in each of these plots is the curve given by Equation\n\\eqref{eq-Idust} evaluated at the best-fit solution. The shaded region\naround the curve is the locus covered by the model when the best-fit\nparameters vary within the 1-$\\sigma$ confidence interval. As explained in\nSection \\ref{sec-fit}, the $\\chi^2$ is calculated by comparing the measured\nintensities at each band with the SED model weighted by the respective\nbandpasses.\n\n\n\nTable \\ref{tab-NT} gives the derived dust temperatures and log-column densities\nof 3218\\ MALT90 sources. \nThis corresponds to 99.1\\% of the 3246\\ ATLASGAL sources\n observed by MALT90. The remaining \\NoFit\\ sources are either not covered\n by HSO observations (24 sources) or they are too faint to reliably estimate\n the dust parameters (4 sources).\nColumn 1 indicates the ATLASGAL name of the\nsource. We include \\WithFitDouble\\ entries which\ncorrespond to multiple sources blended along the same line of sight,\nindicated with an ``m'' superscript. \nColumn 2 gives the effective angular radius of the source in arcsec,\ndefined as $\\theta_{\\rm eff}=\\sqrt{\\Omega_s\/\\pi}$, where $\\Omega_s$ is the effective angular\narea occupied by the MALT90 source. 
This area corresponds to the\nintersection between the region enclosing the source where the column\ndensity is greater than 0.01~gr~cm$^{-2}$ ($>2.0\\times10^{21}$~cm$^{-2}$ in\n\\mbox{H$_2$}\\ column density) and the 870 \\um\\ ATLASGAL mask (see\n\\citealp{Contreras2013AA} and \\citealp{Urquhart2014MNRAS}). Figure\n\\ref{fig-sed} shows an example of one of these areas (red contour in top left image).\nColumn 3 of Table \\ref{tab-NT} gives the mean dust temperature averaged over the area of each\nsource ($\\bar{{T_d}}$).\nColumns 4 and 5 list the lower and upper uncertainty of $\\bar{T_d}$,\nrespectively.\nColumns 6, 7, and 8 give the dust temperature at the position of the \n870 \\um\\ peak intensity ($T_{d,{\\rm P}}$) and its lower and upper uncertainties,\nrespectively.\nColumns 9, 10, and 11 list the average column density ($\\bar{N_{g}}$), its logarithm, and the \nuncertainty of the latter, respectively. \nColumn 12 gives the peak column density ($N_{g,{\\rm P}}$), derived using \nthe 870 \\um\\ peak intensity and $T_{d,{\\rm P}}$ (in Equations \\eqref{eq-Idust}, \\eqref{eq-tauDust} and \\eqref{eq-gdr}).\nColumns 13 and 14 give $\\log N_{g,{\\rm P}}$ and its uncertainty,\nrespectively.\nFinally, Column 15 gives the mid-IR classification of the MALT90 source, as\nQuiescent (616\\ clumps), Protostellar (749\\ clumps), H{\\rmfamily\\scshape{ii}}\\ region (844\\ clumps), PDR (343\\ clumps), or Uncertain (666\\ clumps). Note that these numbers describe the statistics of Table \\ref{tab-NT}, that is, of the 3218\\ sources for which we have dust column density and temperature estimations.\nFor the Quiescent\nsources, we indicate with a superscript ``C'' or ``D'' whether the source\nis associated with 70 \\um\\ compact emission or if it is a far-IRDC,\nrespectively (see Section \\ref{sec-re70}). 
\nNo superscript means that neither of these features appears related to the clump.\n\nPrevious studies of massive molecular clumps have relied on samples\nobtained from the IRAS catalog and fit SEDs to obtain dust temperatures and\nmasses. We find a total of 116 matches between MALT90 and those samples\nas analyzed by \\citet[94 matching sources]{Faundez2004AA} and \\citet[22\n matches]{Giannetti2013AA}. \nOther studies, such as\n\\citet{Sridharan2002ApJ} and \\citet{Williams2005AA}, have targeted the\nnorthern sky and they do not overlap significantly with MALT90 (1 source in common each).\nFrom all these sources, 63 are\nclassified as H{\\rmfamily\\scshape{ii}}\\ region, 44 as Protostellar, 3 as PDR, 6 as\nQuiescent, and 2 as Uncertain. From the relative fraction of Quiescent sources\nin the MALT90 sample, we would expect 13 or more of the 118 to be\nQuiescent with a 99\\% probability, assuming that they are randomly\nsampled. Since there are only 6 Quiescent matches, we conclude that\nprevious surveys were biased toward more evolved stages, illustrating how MALT90 helps to fill in the gap in the study of\ncold clumps.\n\nFigure \\ref{fig-comp} shows the dust temperature calculated by previous\nstudies versus the dust temperatures given in this work. We calculate a\nSpearman correlation coefficient \\citep[Section 4.2.3]{Wall2012psa} of 0.75\nwith a 95\\% confidence interval between 0.68 and 0.83, indicating a\npositive correlation between our temperature estimations and those from the\nliterature. \\citet{Faundez2004AA} assume a dust absorption spectral index\n$\\beta=1$, while our dust model is characterized by\n$\\beta\\approx1.7$. Therefore, we correct their temperatures according to\nEquation \\eqref{eq-mwdl} by multiplying them by $(3+1)\/(3+1.7)\\approx0.85$.\nThe correction decreases the mean of the differences between the dust\ntemperatures obtained by \\citet{Faundez2004AA} and the temperatures\nobtained by us from $+7$ K (uncorrected) to $+2$ K. 
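At a fixed observed $\nu_{\rm peak}$, the rescaling implied by Equation \eqref{eq-mwdl} reduces to a ratio of $(\beta+3)$ factors. A short sketch of this arithmetic (the function name is ours):

```python
def rescale_temperature(T_d, beta_assumed, beta_target):
    """Map a best-fit dust temperature obtained under one assumed beta to the
    equivalent temperature under another, using the modified Wien displacement
    law h*nu_peak/(k*T_d) ~ beta + 3 at fixed observed nu_peak (Eq. eq-mwdl)."""
    return T_d * (beta_assumed + 3.0) / (beta_target + 3.0)

# beta = 1 temperatures are multiplied by (3+1)/(3+1.7) ~ 0.85, as above ...
print(round(rescale_temperature(20.0, 1.0, 1.7), 1))  # 17.0
# ... and a 15 K, beta = 1 cloud matches a 12 K, beta = 2 cloud, as in the
# example given earlier in this section.
print(rescale_temperature(15.0, 1.0, 2.0))  # 12.0
```

Note that this rescaling is only meaningful when both fits sample the SED peak; in the pure R-J regime the relevant relation is Equation \eqref{eq-RJ} instead.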
We apply the same\ncorrection to the temperatures given by \\citet{Giannetti2013AA} using their\nreported best-fit $\\beta$ values. Figure \\ref{fig-comp} shows that\ntemperatures estimated using data from mid- and far-IR bands below 100\n\\um\\ are more often higher than the dust temperatures derived in this work.\nConsequently, the slope of a linear regression performed on the data shown in Figure \\ref{fig-comp} is slightly larger than unity ($1.13\\pm0.09$).\nThis\nis somewhat expected since our own single temperature SED model\nunderestimates the 70 \\um\\ intensity (see Section \\ref{sec-fit}). The most\nplausible reason is that in these sources there is a warmer dust component\nbetter traced by IR data below 100 \\um.\n\n\nDust temperatures and column densities of a preliminary MALT90 subsample\nconsisting of 323 sources were presented by\n\\citet{Hoq2013ApJ}.\\footnote{\\citet{Hoq2013ApJ} report 333 sources, but\n only 323 of these are part of the final catalog.} They also use the\nHi-GAL data (without ATLASGAL) and they employ a data processing\nand SED fitting procedure similar to that used in this\nwork. \\citet{Hoq2013ApJ} report dust temperatures that are consistent\nwithin 13\\% with the ones given in Table \\ref{tab-NT}. However, we\nobtain average column densities that are smaller by about 20\\%. The\ndifferences are due to our source sizes being larger than the ones assumed\nby \\citet{Hoq2013ApJ}. They use a fixed size equal to one Mopra telescope\nbeam, while we define the size of the source based on its extension in the\ncolumn density map.\n\n\nWhen there is more than one source in the same line of sight, the continuum\nemission blends two or more clumps located at different distances. This\nmakes the interpretation of the temperature, column density,\nand evolutionary stage classification uncertain. Therefore, for further\nanalysis we remove these sources from the MALT90 sample, leaving\n2907\\ sources. 
This number breaks down in the following\n way (see Table \\ref{tab-means}): there are 464\\ sources considered as Quiescent (single V$_{\\rm\n LSR}$, without a compact 70 \\um\\ source), 788\\ considered\n Protostellar (including Quiescents with a 70 \\um\\ compact source),\n 767\\ H{\\rmfamily\\scshape{ii}}\\ regions, and 326\\ PDRs. The remaining sources (562) have an\n Uncertain classification. This selection and reclassification of\nsources, as we show in Appendix \\ref{sec-stat}, does not affect the\nconclusions presented in the following sections.\n\n\\subsection{Dust Temperature versus Gas Temperature}\n\nThe dust temperature is fixed by the balance between heating\nand radiative cooling of the grain population. If the density in a\nmolecular cloud is greater than $5\\times10^4$~cm$^{-3}$\n\\citep{Goldsmith2001ApJ,Galli2002AA}, we expect the dust temperature to be\ncoupled to the gas temperature. \nWe test this hypothesis by comparing the\naverage dust temperatures with the gas kinetic temperatures determined from\nammonia observations. Figure \\ref{fig-amm} shows the ammonia temperature\nderived by \n\\citet[23 matching sources]{Dunham2011ApJ}, \n\\citet[{10 matching sources}]{Urquhart2011MNRAS}, \n\\citet[106 matching sources]{Wienen2012AA},\nand \\citet[19 sources]{Giannetti2013AA} \nversus the dust temperature, separated by evolutionary stage.\n\\citet{Dunham2011ApJ} and \\citet{Urquhart2011MNRAS} performed the NH$_3$\nobservations using the Green Bank Telescope at 33\\arcsec\\ angular\nresolution. \\citet{Wienen2012AA} used the Effelsberg Radiotelescope at\n40\\arcsec\\ angular resolution. These are comparable to the resolution of\nour dust temperature maps (35\\arcsec). On the other hand, \\citet{Giannetti2013AA} used\nNH$_3$ data obtained from ATCA with an angular resolution of\n$\\sim20\\arcsec$. 
In this last case, we compare their ammonia temperatures\nwith $T_{d,{\\rm P}}$ instead of $\\bar{T_d}$.\n\nAll but eight MALT90 sources with an ammonia temperature estimate are\nclassified in one of the four evolutionary stages. The Spearman correlation\ncoefficient of the entire sample (158 sources, including these eight with\nUncertain mid-IR classification) is 0.7, with a 95\\% confidence interval\nbetween 0.6 and 0.8, indicating a positive correlation between both\ntemperature estimators.\nThe scatter\nof the relation is larger than the typical temperature uncertainty, and it\ngrows with the temperature of the source. For sources below 22 K, ammonia\nand dust temperatures agree within $\\pm3$ K. Above 22 K, the uncertainties\nof both temperature estimators become larger \\citep[see Equations\n \\eqref{eq-unc} and, for example,][]{Walmsley1983AA}, consistent with the\nobserved increase in the scatter.\nIn addition, higher temperature clumps are\nlikely being heated from inside and therefore associated with more\nvariations in the dust temperatures along the line of sight, making the\nsingle temperature approximation less reliable. \nThe slopes of the linear regressions performed on the data are\n $0.7\\pm0.1$, $0.8\\pm0.1$, $0.7\\pm0.1$, and $0.9\\pm0.3$ for the Quiescent,\n Protostellar, H{\\rmfamily\\scshape{ii}}\\ region, and PDR samples, respectively.\nThe relation between\nammonia and dust temperatures agrees in general with that found by\n\\citet{Battersby2014ApJ786}, except that we do not find a systematically\nworse agreement for Quiescent sources compared with the other evolutionary\nstages.\n\n \n\\subsection{Temperature and Column Density Statistics}\n\n\nFigure \\ref{fig-sd} shows maps of smoothed 2-D histograms of\nthe distributions of $\\bar{T_d}$ and $\\log N_{g,{\\rm P}}$ of the\nMALT90 clumps for each mid-IR classification. \nIn the following analysis we focus on these two quantities and their\n relation with the evolutionary stage. 
We use $N_{g,{\\rm P}}$ instead of\n $\\bar{N_g}$ because $N_{g,{\\rm P}}$ is independent of the specific\n criterion used to define the extension of the clump and because the\n column density profiles are often steep \\citep[$\\propto s^{-0.8}$, where\n $s$ is the plane-of-the-sky distance to the clump\n center,][]{Garay2007ApJ}, making the average $\\bar{N_g}$ less\n representative of the clump column density values. On the other hand,\n dust temperature gradients are shallower \\citep[$\\propto r^{-0.4}$, where\n $r$ is the distance to the clump center, {see}][]{vanderTak2000ApJ} and\n $\\bar{T_d}$ has less uncertainty compared with the temperature calculated\n toward a single point.\nWe include in the\nProtostellar group those sources that are associated with a\ncompact source at 70 \\um\\ (Section \\ref{sec-re70}).\nThe most conspicuous differences between the evolutionary stages \nare evident between the Quiescent\/Protostellar \nand the H{\\rmfamily\\scshape{ii}}\\ region\/PDR populations\n\\citep{Hoq2013ApJ}. The main difference between these groups is the\ntemperature distribution. Most of the sources in the\nQuiescent\/Protostellar stage have temperatures below 19 K, while most\nH{\\rmfamily\\scshape{ii}}\\ region\/PDR sources have temperatures above 19 K. We also note that \nthe Quiescent, Protostellar, and H{\\rmfamily\\scshape{ii}}\\ region populations have peak column densities $\\gtrsim0.1$~gr~cm$^{-2}$, equivalent to\n$2.13\\times10^{22}$ \\mbox{H$_2$}\\ molecules per cm$^{2}$, while the PDR population has peak column densities of typically half of this value.\n\n\nThese differences are also apparent in Figure \\ref{fig-cdf}, where solid\nlines display the marginalized CDFs of\n$\\bar{T_d}$ and $\\log N_{g,{\\rm P}}$ for each evolutionary stage. The\ndashed lines show the distributions of the Uncertain group. 
It is clear\nfrom these plots that the median temperature increases monotonically with\nevolutionary stage, and that the Protostellar and PDR clumps are the stages\nassociated with the largest and smallest column densities, respectively.\nFigure \\ref{fig-boxNT} shows Tukey box plots \\citep[][Section\n 5.9]{Feigelson2012msma} of the marginalized distributions of $\\bar{T_d}$\nand $\\log N_{g,{\\rm P}}$ separated by evolutionary stage. In these plots,\nthe boxes indicate the interquartile range (half of the population), the\nthick horizontal line inside each box indicates the median, and the error\nbars encompass the data that are within 1.5 times the inter-quartile\ndistance from the box limits. The remaining points, in all cases less than\n4\\% of the sample, are plotted individually with small circles and we refer\nto them formally as outliers. Figure \\ref{fig-boxNT} shows that the\n$\\bar{T_d}$ and $\\log N_{g,{\\rm P}}$ interquartile range shifts with\nevolutionary stage. This is evidence of systematic differences between the\ndifferent populations, despite the large overlaps. In practice, the\noverlap between populations implies that it is unfeasible to construct\nsensible predictive criteria that could determine the evolutionary stage of\na specific source based on its temperature and peak column density; it\nalso reflects that star formation is a continuous process that cannot be\nprecisely separated into distinct stages. Nevertheless, the fact that the\nproposed evolutionary stages show a monotonic increase in mean temperature\ndemonstrates that the classification scheme has a legitimate physical\nbasis.\n\nIn the following, we focus our analysis on the Quiescent and Protostellar\npopulations. Figures \\ref{fig-sd} to \\ref{fig-boxNT} show that these two\nsamples are similarly distributed and exhibit the largest\noverlap. 
We test the statistical significance of the Quiescent and\nProtostellar differences in $\\bar{T_d}$ and $\\log N_{g,{\\rm P}}$ by\ncomparing these differences with their uncertainties. Table\n\\ref{tab-means} shows the medians, means, and r.m.s. deviations of\n$\\bar{T_d}$, $T_{d,{\\rm P}}$, $\\log \\bar{N_g}$, and $\\log N_{g,{\\rm P}}$\nfor each population. In general, these dispersions are larger than the\nuncertainties of the individual values, indicating that the dispersions are\nintrinsic to each population and not due to the fitting uncertainties. The\nmeans $\\bar{T_d}$ for the Quiescent and Protostellar populations are $16.8$\nand $18.6$ K, respectively. The Protostellar population has a mean\n$\\bar{T_d}$ larger by $+1.8$ K compared with the Quiescent population. We\ncan estimate the expected uncertainty of this mean difference using the\ndispersion and size of each population, which gives\n$\\sqrt{\\left(3.8^2\/464\\right)+\\left(4.4^2\/788\\right)}\\approx0.24$\nK. Therefore, the difference is more than seven times the\nexpected uncertainty. On the other hand, the difference of the means of\n$\\log N_{g,{\\rm P}}$ is $+0.17$ in favor of the Protostellar\npopulation. The expected uncertainty in this case is\n$\\sqrt{\\left(0.25^2\/464\\right)+\\left(0.34^2\/788\\right)}=0.017$, that is,\nten times smaller than the observed difference. We conclude that the\nobserved differences of the Quiescent and Protostellar populations are\nstatistically significant. Furthermore, the differences in temperature and\ncolumn density are orthogonal to the expected uncertainty correlation\n(Figure \\ref{fig-Dchi2con}), giving us more confidence that we are\nobserving a real effect in both parameters. We confirm the significance of\nthe difference using more sophisticated statistical tests in Appendix\n\\ref{sec-stat}. 
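The expected uncertainties quoted above are the standard errors of a difference of independent sample means, $\sqrt{\sigma_1^2/n_1+\sigma_2^2/n_2}$. The arithmetic can be checked directly (the function name is ours; for the column densities only the dispersions and sample sizes enter the error, so the means used below are placeholders):

```python
import math

def diff_and_error(m1, s1, n1, m2, s2, n2):
    """Difference of two sample means and its expected uncertainty,
    sqrt(s1**2/n1 + s2**2/n2), for independent samples."""
    diff = m2 - m1
    err = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return diff, err

# Quiescent (16.8 K, sd 3.8, n = 464) vs Protostellar (18.6 K, sd 4.4, n = 788)
d, e = diff_and_error(16.8, 3.8, 464, 18.6, 4.4, 788)
print(round(d, 1), round(e, 2), round(d / e, 1))  # 1.8 0.24 7.6

# log N_g,P: dispersions 0.25 and 0.34; placeholder means with difference 0.17
_, e_logN = diff_and_error(0.0, 0.25, 464, 0.17, 0.34, 788)
print(round(e_logN, 3))  # 0.017
```

This is the usual large-sample estimate; the non-parametric tests in Appendix \ref{sec-stat} do not rely on it.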
Note that, for the other population pairs, either the temperature or the\ncolumn density difference is larger than the corresponding difference between the\nQuiescent and Protostellar populations. \n\nAre these statistical differences evident when comparing $\\bar{N_g}$\nand $T_{d,{\\rm P}}$? On one hand, the difference between the mean\n$T_{d,{\\rm P}}$ of the Protostellar and Quiescent samples is 2.6 K, while\nthe expected uncertainty of this difference is 0.2 K. Therefore, despite\nthe larger fitting uncertainties of $T_{d,{\\rm P}}$ (see Table\n\\ref{tab-NT}), we still detect a statistically significant difference\nbetween both populations when comparing only their central temperatures.\nOn the other hand, the difference between the means of $\\log \\bar{N_g}$ is\n$0.04$ over an expected uncertainty of $0.02$, that is, the difference is\nonly 2 times the uncertainty. The latter is not highly significant, which\nis somewhat expected for the reasons explained at the beginning of\nthis section. It is also expected from previous studies, which indicate that\nneither the mass of the clumps \\citep{Hoq2013ApJ} nor their radii\n\\citep{Urquhart2014MNRAS} change conspicuously with evolutionary\nstage. This in turn implies that the average column density should remain\napproximately unchanged.\n\n\nWithin the Quiescent sample, we identified in Section \\ref{sec-re70} a\npopulation of 83 clumps that appear as far-IRDCs at 70 \\um. Of these, there\nare 77 associated with a single source along the line of sight.\nThis sample has a mean and\nmedian $\\bar{T_d}$ of $14.9$ and $14.7$ K, respectively. The remaining \nQuiescent population has mean and median $\\bar{T_d}$ equal to\n$17.2$ and $16.4$ K, respectively. \nBased on a Wilcoxon non-parametric test\n\\citep[Section 5.4]{Wall2012psa} we obtain a p-value of\n$4\\times10^{-7}$ under the null hypothesis that these distributions\nare the same. 
Therefore, the temperature differences between the far-IRDCs and the rest of the Quiescent clumps are\nsignificant. The column densities of the far-IRDC subsample are\nalso larger compared to those of the rest of the\nQuiescent sample. The far-IRDC $\\log N_{g,{\\rm P}}$ mean and median\nare $-0.76$ and $-0.78$, respectively, while for the remaining \nQuiescent clumps they are $-0.88$ and $-0.91$, respectively. Again,\nwe reject the null hypothesis (Wilcoxon p-value of\n$\\sim10^{-5}$) and conclude that the far-IRDC sample is a colder and\ndenser subsample of the Quiescent population.\n \nFinally, the Uncertain group (that is, MALT90 sources that could not be\nclassified into any evolutionary stage) seems to be a mixture of sources in\nthe four evolutionary classes, but associated with lower column densities\n(median $N_{g,{\\rm P}}\\sim0.1$ gr cm$^{-2}$). Figure \\ref{fig-cdf}\nshows that the $\\bar{T_d}$ values of the Uncertain group are distributed almost\nexactly in between those of the other evolutionary stages. Neither a Wilcoxon nor a\nKolmogorov-Smirnov test can distinguish the $\\bar{T_d}$ distribution of\nthe Uncertain sample from that of the remainder of the MALT90 sources\ncombined with a significance better than 5\\%. Figure \\ref{fig-cdf} also\nshows that the column densities of the Uncertain group are in general lower\ncompared with those of any evolutionary stage except the PDRs. 
Molecular\nclumps with low peak column density may be more difficult to classify in\nthe mid-IR, since they are probably unrelated to high-mass star formation.\nIt is also possible that a significant fraction of these sources are\nlocated behind the Galactic plane cirrus emission and possibly on the\nfar-side of the distance ambiguity, making the mid-IR classification more\ndifficult and decreasing the observed peak column density because of beam\ndilution.\n\n\\subsubsection{Column Density and Temperature Evolution in Previous Studies}\n\nSince the discovery of IRDCs \\citep[typically dark at mid-IR\n wavelengths, see][]{Egan1998ApJ,Carey1998ApJ} it has been pointed\nout that they likely consist of cold ($<20$ K) \nmolecular gas. This has been confirmed by several studies\nof molecular gas \\citep{Pillai2006AA450,Sakai2008ApJ,Chira2013AA} and dust \\citep[e.g.,][]{Rathborne2010ApJ}. \n\nSystematic \\mbox{H$_2$}\\ column density differences between IR dark, quiescent,\nand star-forming clumps have been more difficult to establish. Some\nauthors have found no significant column density differences between\nthese groups\n\\citep{Rathborne2010ApJ,Lopez-Sepulcre2010AA,Sanchez-Monge2013MNRAS}.\nHowever, most studies based on large samples agree that star forming\nclumps have larger molecular column densities compared to the quiescent ones\n\\citep{Dunham2011ApJ,Giannetti2013AA,Hoq2013ApJ,Csengeri2014AA,Urquhart2014MNRAS,He2015MNRAS}. 
\nFurthermore,\n\\citet{Beuther2002ApJ}, \\citet{Williams2005AA}, and\n\\citet{Urquhart2014MNRAS} found evidence that molecular clumps\nwhich display star formation activity have a more concentrated density\nprofile.\n\n\\citet{Urquhart2014MNRAS}, based on ATLASGAL and the Red MSX Source\n(RMS) survey \\citep{Lumsden2013ApJ}, analyze a large number ($\\sim1300$)\nof molecular clumps with signs of high-mass star formation.\nHigh-mass star formation activity was determined from associations with\nthe MSX point source catalog \\citep{Egan2003VizieR}, methanol masers\n\\citep{Urquhart2013MNRAS431}, and H{\\rmfamily\\scshape{ii}}\\ regions detected using\ncentimeter wavelength radio emission\n\\citep{Urquhart2007AA,Urquhart2009AA501,Urquhart2013MNRAS}. In \\citet{Urquhart2014MNRAS}, ATLASGAL\nclumps associated with WISE sources \\citep{Wright2010AJ} are\ncalled massive star-forming (MSF) clumps, and all the rest are termed\n``quiescent.''\n\\citet{Urquhart2014MNRAS} find that MSF clumps have larger column\ndensities than their ``quiescent'' clumps by a factor of $\\sim3$.\n\n\\citet{Urquhart2014MNRAS} and \\citet{He2015MNRAS} \nalso report that clumps associated with\nH{\\rmfamily\\scshape{ii}}\\ regions have larger column densities than the remainder of the star forming \nclumps. This result contradicts our finding that H{\\rmfamily\\scshape{ii}}\\ region sources\ntypically have lower column densities than the Protostellar\nsample (see Table \\ref{tab-means} and Figure \\ref{fig-cdf}). \nTo\nexamine this disagreement in more detail, we analyze the intersection\nbetween the MSF and the MALT90 samples. There are 515 MSF clumps in\ncommon with the MALT90 sample that are covered by Hi-GAL: 285\nclassified as H{\\rmfamily\\scshape{ii}}\\ regions, 204 as Protostellar, 22 as PDR, and 4 as\nQuiescent. We calculate that these 515 sources have a mean average\ntemperature of 24 K and a mean log-peak column density of $-0.63$. 
The\ntemperature is consistent with the H{\rmfamily\scshape{ii}}\ region sample of MALT90, but\nthe column densities are much higher. Within these 515 sources \n we find that, in agreement with \citet{Urquhart2014MNRAS},\nthose with centimeter wavelength emission have significantly higher\ncolumn densities ($\log N_{g,{\rm P}}=-0.59$) and temperatures ($26$\nK) compared with the rest ($\log N_{g,{\rm P}}=-0.67$ and $\bar{T_d}=22$\nK).\n\nThe finding of \citet{Urquhart2014MNRAS} that sources associated\nwith H{\rmfamily\scshape{ii}}\ regions have the largest column densities, in\ndisagreement with our results, most likely arises from differences in the\nclassification criteria. \citet{Urquhart2014MNRAS} report centimeter radio\nemission arising from ionized gas toward 45 out of the 204 common sources\nwe classify as Protostellar, and 94 out of the 285 clumps we classify as\nH{\rmfamily\scshape{ii}}\ regions were observed by the CORNISH survey at 5 GHz \citep[$\sim2$\n mJy sensitivity,][]{Hoare2012PASP} and were not detected. These are\nrelatively few sources, and exchanging their classification (Protostellar for\nH{\rmfamily\scshape{ii}}\ region and vice versa) does not modify the trends described in the previous\nsection. However, if they reflect an underlying fraction of misclassified\nsources between the Protostellar and H{\rmfamily\scshape{ii}}\ region groups, they might change\nthe statistics.\n\nConversely, we detect embedded HMYSOs in 641 ATLASGAL\nsources that are treated as ``quiescent'' in \citet{Urquhart2014MNRAS}, in\npart due to the better sensitivity and angular resolution of MIPS compared\nto MSX and WISE. It is likely that the ``quiescent'' sample of\n\citet{Urquhart2014MNRAS} does not currently contain young high-mass stars,\nbut does contain a large fraction of intermediate-mass star formation\nactivity, and some of these sources are also associated with PDRs. 
In\nsummary, we expect the Quiescent sample from MALT90 to be more reliably\ndevoid of star formation than the non-MSF ATLASGAL clumps, while at the same\ntime, several of our Protostellar clumps are probably associated with\nH{\rmfamily\scshape{ii}}\ regions, which are more efficiently detected using radio centimeter\nobservations. \n\n\n\n\n{\subsubsection{Temperature and Column Density Contrasts}\label{sec-cont}}\n\nWe analyze spatial variations of $T_d$ and $N_g$ by comparing their\nvalues at the peak intensity position with the average value in the\nclump. For each MALT90 clump, we define the temperature contrast and\nlog-column density contrast as $\Delta T=\bar{T_d}-T_{d,{\rm P}}$ and\n$\Delta\log N_g=\log\left(\bar{N_g}\/N_{g,{\rm P}}\right)$,\nrespectively. \n\n\nTable \ref{tab-cont} {lists} the means and medians of the temperature and\nlog-column density contrasts. Table \ref{tab-cont} also {gives} 95\\%\nconfidence intervals\footnote{The upper limit of the CI is the lowest value\n $u$ larger than the observed median for which we can reject the null\n hypothesis that $u$ is the true population median with a significance of\n 5\%. The lower limit of the CI is calculated similarly.} (CIs) for the medians of $\Delta T$ and\n$\Delta\log N_g$ per evolutionary stage, determined using the sign test\n\citep[Section 12.2]{Ross2004ipses}. They were calculated using the function\n\texttt{SIGN.test} from the R statistical suite\footnote{www.r-project.org}\n(version 3.1.1). The sign test\nis not very sensitive, but it has the advantage that it is non-parametric\nand, in contrast to the Wilcoxon test (for example), it does not assume that\nthe distributions have the same shape.\n\nA negative $\Delta\log N_g$ indicates that the clump has a centrally peaked\ncolumn density profile, with the absolute value of $\Delta\log N_g$ being a\nmeasure of its steepness. 
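For illustration, the two contrasts and the sign-test confidence interval can be sketched as follows. This is a toy example, not the actual MALT90 pipeline: the clump maps and the sample passed to the CI routine are synthetic, made up for the example.

```python
import numpy as np
from math import comb

def contrasts(T_map, N_map):
    """Temperature and log-column density contrasts of a clump,
    evaluated at the peak column density pixel (toy version)."""
    j = np.unravel_index(np.argmax(N_map), N_map.shape)
    dT = T_map.mean() - T_map[j]               # Delta T = <T_d> - T_d,P
    dlogN = np.log10(N_map.mean() / N_map[j])  # Delta log N_g
    return dT, dlogN

def sign_test_ci(data, alpha=0.05):
    """Distribution-free CI for the median from the sign test:
    [x_(k), x_(n+1-k)] with k the largest integer such that
    P(Bin(n, 1/2) <= k - 1) <= alpha/2."""
    x, n = sorted(data), len(data)
    cdf, k = 0.0, 0
    for i in range(n + 1):
        cdf += comb(n, i) * 0.5**n
        if cdf <= alpha / 2:
            k = i + 1
        else:
            break
    if k == 0:  # sample too small for the requested confidence level
        return x[0], x[-1]
    return x[k - 1], x[n - k]

# centrally peaked toy clump: N falls off from the center, T rises outward
r = np.hypot(*np.meshgrid(np.arange(-5, 6), np.arange(-5, 6)))
N = np.exp(-r**2 / 8.0)
T = 15.0 + 0.5 * r
dT, dlogN = contrasts(T, N)
print(dT > 0, dlogN < 0)            # externally heated, centrally peaked
print(sign_test_ci(range(1, 11)))   # -> (2, 9) for the sample 1..10 at 95%
```

The CI construction mirrors the binomial order-statistic argument described in the footnote: the interval endpoints are the sample order statistics that just fail to be rejected by a two-sided sign test at the 5\% level.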
As a reference, a critical Bonnor-Ebert sphere is\ncharacterized by $\Delta\log N_g=-0.5515$. Perhaps not very surprisingly,\nthe $\Delta\log N_g$ means, medians, and CIs are always negative,\nindicating that most of the clumps are centrally peaked. We find that\nthe medians are significantly different between evolutionary stages, with\nno overlap in the CIs. The H{\rmfamily\scshape{ii}}\ region clumps are those\nassociated with the steepest column density profiles, followed by the Protostellar and the PDR clumps. Clumps in the Quiescent evolutionary stage are associated with \nthe smoothest column density profiles.\n\nThe temperature contrasts are also distinct for different evolutionary\nstages. A positive $\Delta T$ indicates that the dust \n temperature increases away from the clump center, that is, the dust temperature at the peak column density\n position ($T_{d,{\rm P}}$) is lower than the average temperature\n ($\bar{T_d}$). On the other hand, $\Delta T$ is negative for decreasing\n temperature profiles. $\Delta T$ is positive for Quiescent clumps and\nPDRs, consistent with zero for the Protostellar sources (temperature\nat the peak similar to the average temperature), and negative (peak position\nwarmer than the average temperature) for the H{\rmfamily\scshape{ii}}\ region sample.\n\n\n\subsection{\boldmath Mid-IR Classification versus $T_d$, $N_g$, $\Delta T_d$, and $\Delta\log N_g$}\n\nThe previous sections have presented the differences between the\ntemperature and column densities of the MALT90 groups. These differences\nare qualitatively consistent with the evolutionary sequence sketched in\n\citet{Jackson2013PASA} that starts with the Quiescent, and proceeds\nthrough the Protostellar, H{\rmfamily\scshape{ii}}\ region, and PDR evolutionary stages. 
As Figure \\ref{fig-boxNT}\nshows, Quiescent clumps are the coldest, in agreement with the\nexpectation that these clumps are starless and there are no embedded\nyoung high-mass stars. The far-IRDC subsample\nof the Quiescent population is colder and denser on average compared\nto the rest of the Quiescent clumps, and they might represent a late\npre-stellar phase just before the onset of star-formation. The mean\ntemperature and $\\log N_{g,{\\rm P}}$ of the far-IRDC subsample are\n$\\sim15$ K and $-0.78$, respectively.\n\n\nTo establish what fraction of the Quiescent clumps might evolve to form\nhigh-mass stars, we use for now criteria defined by previous authors based\non distance independent information, such as the column density; a more\ncomplete analysis will be done in Contreras et al.\\ (in preparation).\n\\citet{Lada2010ApJ} and \\citet{Heiderman2010ApJ} propose that the star\nformation rate in a molecular cloud is proportional to the mass of gas\nwith column densities in excess of $\\sim$120~\\mbox{\\,$M_{\\odot}$}~pc$^{-2}$\n($\\sim2.43\\times10^{-2}$ gr cm$^{-2}$). Since there is considerable\noverlap in column density between MALT90 clumps that have different levels\nof star formation activity, we start by assuming that this\nrelation gives the average star formation rate over the timescale of 2 Myr\nadopted by \\citet{Lada2010ApJ} and \\citet{Heiderman2010ApJ}. We find that\n98\\% of the Quiescent clumps have $\\bar{N_g}>120$~\\mbox{\\,$M_{\\odot}$}~pc$^{-2}$, including\nall of the far-IRDCs, which suggest that most of these clumps will support\nsome level of star formation activity in the future. \n\\citet{Urquhart2014MNRAS} propose a column density threshold of $0.05$~gr~cm$^{-2}$ for what\nthey denominate ``effective'' high-mass star formation. 
\nThis same threshold \n was recently proposed by \citet{He2015MNRAS} based on a study of $405$ ATLASGAL sources.\nOf the Quiescent sample, 78\% of the\nclumps have an average column density above this threshold, with the percentage increasing to 92\% for the far-IRDCs. \nBased on these criteria, we conclude that\nvirtually all Quiescent clumps will develop at least low-mass star\nformation activity and that a large fraction ($>70\%$) will form high-mass\nstars. \nOn the other hand, \citet{Lopez-Sepulcre2010AA} suggest a third column density threshold\nbased on the observed increase in the incidence of molecular outflows for clumps with column\ndensities in excess of $0.3$ g cm$^{-2}$. This column density is\nsignificantly larger than the previous thresholds, and only 3\% and 6\%\nof the Quiescent and far-IRDC populations, respectively, have larger average\ncolumn densities. However, half of the clump sample of\n\citet{Lopez-Sepulcre2010AA} have diameters $< 35\arcsec$ (the beam size of\nour column density maps) and more than a third have masses $< 200 \mbox{\,$M_{\odot}$}$,\nwhich indicates that the $0.3$ g cm$^{-2}$ threshold may be pertinent\nfor more compact structures than the clumps considered in this work.\n\n\nThe temperature and temperature contrast of the Quiescent clumps are\nqualitatively consistent with equilibrium between the interstellar\nradiation field and dust and gas cooling \citep{Bergin2007ARA&A}. We find\nthat Quiescent clumps are the coldest among the evolutionary stages, but\nthey are typically warmer ($\sim17$ K) than expected from thermal\nequilibrium between dust cooling and cosmic ray heating alone ($T_d\sim10$\nK). We also find that the central regions of the Quiescent clumps are\nin general colder than their external layers ($\Delta T$ positive). 
\nThese characteristics are consistent with Quiescent clumps being \nheated by a combination of external radiation and cosmic rays.\nThe Quiescent sources also have\nthe flattest density structure, with the {largest}\n $\Delta \log N_g$ among all\nthe evolutionary stages. This is similar to the behavior found\nby \citet{Beuther2002ApJ}, that is, the earliest stages of\nhigh-mass star formation are characterized by flat density profiles that\nbecome steeper as they collapse and star formation ensues.\n\n\nThe Protostellar clump sample can be distinguished from Quiescent clumps\nbased on their column density and dust temperature. Protostellar clumps\nhave larger column densities ($\sim0.2$ g cm$^{-2}$) and are slightly\nwarmer ($\sim19$~K). The central temperatures of the Protostellar clumps\nalso increase and become comparable to the temperature in their outer\nregions ($\Delta T\cong0$). These characteristics indicate that\nProtostellar clumps have an internal energy source provided by the\nHMYSOs. According to the results presented by \citet{Hoq2013ApJ}, there is\nno significant difference in the distribution of masses between the\nQuiescent and Protostellar populations. If we assume that this is also the\ncase for the sample presented in this work \citep[which will be confirmed in\nupcoming publications, see also][]{He2015MNRAS}, then the most likely reason for the larger column\ndensities of the Protostellar sample compared with the Quiescent sample is\ngravitational contraction. Because contraction develops faster in the\ndensest central regions, we expect the column density profiles to become\nsteeper at the center of the clump. 
This is consistent with the observed\ndecrease of $\Delta \log N_g$ for the Protostellar clumps compared with\nthe Quiescent clumps.\n\n\n\n\nThe H{\rmfamily\scshape{ii}}\ region sample is associated with the most negative temperature and\ncolumn density contrasts {(median of $\Delta T=-0.33$ K and $\Delta \log\nN_g=-0.42$)} compared with any other population, which indicates that\nH{\rmfamily\scshape{ii}}\ region clumps are very concentrated and have a strong\ncentral heating source. This picture is consistent with the presence\nof a young high-mass star in the center of the clump. The slight\ndecrease of the peak column density compared with the Protostellar\nphase {could} be explained by the expansion induced by the\ndevelopment of the H{\rmfamily\scshape{ii}}\ region and by the fraction of gas mass that has\nbeen locked into newly formed stars.\n\n\nFinally, PDR clumps have the lowest column densities and\nhighest temperatures among the four evolutionary stages. They also\nhave colder temperatures toward the center compared to their\nouter regions. PDR clumps are possibly the remnants of molecular clumps\nthat have already been disrupted by the high-mass stars' winds, strong UV radiation field, and the expansion of \nH{\rmfamily\scshape{ii}}\ regions. These molecular remnants are being illuminated and heated\nfrom the outside by the newly formed stellar population, but probably are\nneither dense nor massive enough to \nsustain further high-mass star formation.\n\n{\section{SUMMARY}\label{sec-sum}}\n\nWe determined dust temperature and column density maps toward\n3218\ molecular clumps. This number corresponds to more than 99\% of\nthe ATLASGAL sources that form the MALT90 sample. We fit greybody\nmodels to far-IR images taken at 160, 250, 350, 500, and 870 \um. 
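For illustration, the kind of fit involved can be sketched as a minimal single-pixel greybody fit. This is a toy sketch, not the actual pipeline: the optically thin opacity law, the emissivity index $\beta=1.7$, the amplitude normalization, and the brute-force temperature grid are assumptions made for the example, and the "observed" fluxes are synthetic.

```python
import numpy as np

h, k_B, c = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants

def planck(nu, T):
    """Planck function B_nu(T) in cgs units."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def greybody(nu, T, amp, beta=1.7):
    """Optically thin greybody: amplitude * (nu / 1 THz)^beta * B_nu(T)."""
    return amp * (nu / 1e12)**beta * planck(nu, T)

# observing bands of the fit described above (Hi-GAL + ATLASGAL)
lam_um = np.array([160.0, 250.0, 350.0, 500.0, 870.0])
nu = c / (lam_um * 1e-4)                     # micron -> cm -> Hz

# synthetic "observed" fluxes from a 20 K clump
obs = greybody(nu, 20.0, amp=1.0)

# brute-force fit: for each trial T the best amplitude is a linear solve
best = (np.inf, None)
for T in np.arange(10.0, 31.0, 0.5):
    model = greybody(nu, T, amp=1.0)
    amp = np.dot(obs, model) / np.dot(model, model)
    chi2 = np.sum((obs - amp * model)**2)
    if chi2 < best[0]:
        best = (chi2, T)
print("best-fit T_d =", best[1], "K")        # recovers 20.0 K
```

A real fit would additionally convolve all bands to a common beam and propagate calibration uncertainties; the grid search here only shows how $T_d$ and the column density normalization separate.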
This\ncatalog represents the largest sample of high-mass clumps for\nwhich both dust temperature and column density have been simultaneously\nestimated. We summarize the main results and conclusions as follows.\n\begin{enumerate}\n\item{The average dust temperature increases monotonically along the\n proposed evolutionary sequence, with median temperatures ranging from $16.1$ K\n for the Quiescent clumps to $27.4$ K for the clumps associated with\n PDRs. This confirms that the MALT90 mid-IR classification broadly\n captures the physical state of the molecular clumps.}\n\item{The highest column densities are associated with the\n Protostellar clumps, that is, those that show mid-IR signs of star\n formation activity preceding the development of an\n H{\rmfamily\scshape{ii}}\ region. The average peak column density of the Protostellar\n clumps is 0.2~g~cm$^{-2}$, which is about 50\% higher than the peak\n column densities of clumps in the other evolutionary\n stages. We interpret this as evidence of gravitational contraction\n or possibly that Protostellar clumps are more massive.\n The latter possibility will be analyzed in future work (Contreras et\n al., in preparation).}\n\item{The radial temperature gradients within the clumps decrease from\n positive (higher temperatures in the outer layers of the clump), to null\n (no dust temperature gradient), and to negative (higher temperatures\n toward the center of the clump) values associated with the Quiescent,\n Protostellar, and H{\rmfamily\scshape{ii}}\ region clumps, respectively. Quantitatively,\n the mean differences between the average ($\bar{T_d}$) and the\n central ($T_{d,{\rm P}}$) clump temperatures are\n $+0.7$, $-0.1$, and $-0.6$ K for the Quiescent, Protostellar,\n and H{\rmfamily\scshape{ii}}\ region samples, respectively. 
This confirms that Quiescent\n clumps are being externally heated and Protostellar and H{\rmfamily\scshape{ii}}\ region\n clumps have an internal embedded energy source.}\n\item{The ratio between the peak and average column density for each clump\n category ranges between 1.8 and 2.6. The flattest column density\n profiles are associated with the Quiescent population, becoming steeper\n for the Protostellar {and} H{\rmfamily\scshape{ii}}\ region clumps. This is qualitatively\n consistent with the hypothesis of evolution through gravitational\n contraction, in which the contrast is a measure of evolutionary progress.}\n\item{The PDR clump population is characterized by low column densities\n ($\sim0.09$~g~cm$^{-2}$), high temperatures ($27$ K), and\n a positive radial temperature gradient (colder inner regions toward warmer\n dust on the outside). We interpret this as evidence that these\n sources are the externally illuminated remnants of molecular clumps already\n disrupted by high-mass star formation feedback.}\n\item{We identify $83$ far-IR dark clouds, that is, Quiescent clumps\n that appear in absorption at 70 \um\ against the Galactic\n background. These clumps are cooler and have higher column\n densities compared to the remainder of the Quiescent population. Therefore, \n they are likely in the latest stage of pre-stellar\n contraction or they may represent a more massive subsample of the Quiescent clumps.}\n\end{enumerate}\n\n\n\acknowledgements{A.E.G. and H.A.S. acknowledge support from NASA Grants\n NNX12AI55G and NNX10AD68G. A.E.G. acknowledges partial support from\n CONICYT through project PFB-06 and FONDECYT grant 3150570.\n J.M.J. acknowledges support from NASA Grant NNX12AE42G and NSF grant\n AST-1211844. 
We thank G.\\ Garay and an anonymous referee for careful\n reading and helpful comments.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction and notations}\nIn their ICM talk \\cite{SU06}, Skinner and Urban outline a program to connect the order of vanishing of the $L$-functions of certain\npolarized regular motives with the rank of the associated Bloch-Kato Selmer groups.\nTheir strategy is to deform the motives along certain $p$-adic eigenfamilies of Galois representations to construct the expected extensions.\nThey introduce the notion \\emph{finite slope families} to encode the local properties of these $p$-adic families. One may view finite slope families as generalizations of the $p$-adic families arising from Coleman-Mazur eigencurve, which is formulated as weakly refined families by Bellaiche-Chenevier \\cite{BC06}, in the sense that a finite slope family may have \\emph{multiple} constant Hodge-Tate weights $k_1,\\dots, k_r\\in\\mathbb{Z}$ and a Zariski dense subset of crystalline points which have prescribed crystalline periods with Hodge-Tate numbers $k_1,\\dots,k_r$. Skinner and Urban then use the (unproved) analytic continuation of these crystalline periods to deduce that the expected extensions lie in the Selmer groups. Most recently, Harris, Lan, Taylor and Thorne construct Galois representations for (non-self dual) regular algebraic cuspidal automorphic representations of $\\mathrm{GL}(n)$ over CM fields \\cite{HLTT}. Their construction also involves $p$-adic deformations, and it turns out that these Galois representations live in certain $p$-adic families which generalize Skinner-Urban's finite slope families by replacing crystalline periods with semi-stable periods. 
Furthermore, to show that the Galois representations constructed by them are geometric, as predicted by the philosophy of the Langlands correspondence, one needs the analytic continuation of semi-stable periods for these families.\n\nIn this paper, we make use of the notion of finite slope families to encode the local properties of the $p$-adic families of Galois representations in \cite{HLTT}; this generalizes the original definition of Skinner-Urban. Our main result is then to prove the analytic continuation of semi-stable periods for such families. This will provide a necessary ingredient for Skinner-Urban's ICM program. In addition, we recently learned from Taylor that, in an ongoing project of Ila Varma, she will establish the aforementioned geometric properties of Galois representations based on the results of this paper and a previous paper of ours \cite{L12}. We also note that Shah recently proved some results about interpolating Hodge-Tate and de Rham periods in families of $p$-adic Galois representations which may be applied to some related situations \cite{S}.\n\nAs the $p$-adic families over the Coleman-Mazur eigencurve are special cases of finite slope families, our result generalizes the famous result of Kisin on the analytic continuation of crystalline periods for such families \cite{Ki03}. However, even in the crystalline case, our strategy and techniques are completely different from his. In fact, in Kisin's original work as well as the recent enhancement made by us \cite{L12}, one crucially relies on the fact that the families have only one constant Hodge-Tate weight, which is obviously not the case for general finite slope families. On the other hand, the work presented in this paper is inspired by the works of Berger and Colmez on families of de Rham representations \cite{BC07} and of Kedlaya, Pottharst and Xiao on the cohomology of families of $\m$-modules \cite{KPX}. 
For a finite slope family, by adapting the techniques of \\cite{KPX}, we first cut out a sub-family of $\\m$-modules, which is expected to be generated by the desired semi-stable periods, after making a proper and surjective base change. We then develop a theory of families of Hodge-Tate and de Rham $\\m$-modules with bounded Hodge-Tate weights. Finally we prove some analogues of Berger-Colmez for such families of $\\m$-modules, and use them to conclude that the sub-family of $\\m$-modules is semi-stable.\n\nIn the remainder of this introduction, we give more precise statements about our results.\nWe fix a finite extension $K$ of $\\Q$.\nLet $K_0$ be the maximal unramified sub-extension of $K$, and let $f=[K_0:\\Q]$.\n\n\\begin{defn}\\label{def:fs}\nLet $X$ be a reduced and separated rigid analytic space over $K$. A \\emph{finite slope family} of $p$-adic representations of dimension $d$ over $X$ is a locally free coherent $\\OO_X$-module $V_X$ of rank $d$ equipped with a continuous $G_K$-action and together with the following data\n\\begin{enumerate}\n\\item[(1)]a positive integer $c$,\n\\item[(2)]a monic polynomial $Q(T)\\in\\OO_X(X)[T]$ of degree $m$ with unit constant term,\n\\item[(3)]a subset $Z$ of $X$ such that for all $z$ in $Z$, $V_z$ is semi-stable with non-positive Hodge-Tate weights, and for all $B\\in\\mathbb{Z}$ the set of $z$ in $Z$ such\nthat $V_z$ has $d-c$ Hodge-Tate weights less than $B$ is Zariski dense in $X$,\n \\item[(4)]for $z\\in Z$, a $K_0\\otimes_{\\Q}k(z)$-direct summand $\\mathcal{F}_{z}$ of $D^+_{\\mathrm{st}}(V_z)$ which is free of rank $c$ and stable under $\\varphi$ and $N$ such that $\\varphi^f$ has characteristic polynomial $Q(z)(T)$ and all Hodge-Tate weights of $\\mathcal{F}_z$ lie in $[-b,0]$ for some $b$ which is independent of $z$.\n\\end{enumerate}\n \n\n\\end{defn}\n\nOur main results are as follows.\n\n\\begin{theorem}\\label{thm:main}\nLet $V_X$ be a finite slope family over $X$. 
Then there exists a surjective proper morphism $X'\\ra X$ so that $(K\\otimes_{K_0}D^+_{\\mathrm{st}}(V_{X'}))^{Q(\\varphi)=0}$ has a rank $c$ locally free coherent $K_0\\otimes_{\\Q}\\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\\otimes_{\\Q}k(x)$-submodule in $\\D_\\rig^\\dag(V_x)$ for any $x\\in X'$. As a consequence, $D^+_{\\mathrm{st}}(V_x)^{Q(\\varphi)(x)=0}$ has a free $K_0\\otimes_{\\Q}k(x)$-submodule of rank $c$ for any $x\\in X$.\n\\end{theorem}\n\nThe following corollary is clear.\n\n\\begin{cor}\nLet $V_X$ be a finite slope family over $X$. If $V_z$ is crystalline for any $z\\in Z$, then there exists a surjective proper morphism $X'\\ra X$ so that $(K\\otimes_{K_0}D^+_{\\mathrm{crys}}(V_{X'}))^{Q(\\varphi)=0}$ has a rank $c$ locally free coherent $K_0\\otimes_{\\Q}\\OO_{X'}$-submodule which specializes to a rank $c$ free $K_0\\otimes_{\\Q}k(x)$-submodule in $\\D_\\rig^\\dag(V_x)$ for any $x\\in X'$. As a consequence, $D^+_{\\mathrm{crys}}(V_x)^{Q(\\varphi)(x)=0}$ has a free $K_0\\otimes_{\\Q}k(x)$-submodule of rank $c$ for any $x\\in X$.\n\\end{cor}\n\n\\section*{Acknowledgements}\nThanks to Christopher Skinner, Richard Taylor and Ila Varma for useful communications. We especially thank Richard Taylor for suggesting a more concise definition of finite slope families.\n\\section{Families of $\\m$-modules}\n\\begin{defn}\nLet $A$ be a Banach algebra over $\\Q$. 
For $s>0$, a \\emph{$\\varphi$-module} over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ is a finite projective $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$-module $D_A^s$ equipped with an isomorphism\n$$\\varphi^*D_A^s\\cong D_A^s\\otimes_{\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A}\\mathbf{B}^{\\dag,ps}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A.$$ A \\emph{$\\varphi$-module} $D_A$ over $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ is the base change to $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ of a $\\varphi$-module $D_A^s$ over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ for some $s>0$.\nA \\emph{$\\m$-module} over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ is a $\\varphi$-module $D_A^s$ over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ equipped with a commuting semilinear continuous action of $\\Gamma$. A \\emph{$\\m$-module} $D_A$ over $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ is the base change to $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ of a $\\m$-module $D_A^s$ over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ for some $s>0$.\n\\end{defn}\n\n\\begin{notation}\nFor a morphism $A\\ra B$ of Banach algebras over $\\Q$, we denote by $D^s_B$ (resp. $D_B$) the base change of $D^s_A$ (resp. $D_A$) to $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}B$ (resp. $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}B$). In the case when $A=S$ is an affinoid algebra over $\\Q$ and $x\\in M(S)$, we denote $D^s_{k(x)}$ (resp. $D_{k(x)}$) by $D_x^s$ (resp. $D_x$) instead.\n\\end{notation}\n\nLet $S$ be an affinoid algebra over $\\Q$. 
Recall that for sufficiently large $s$, a vector bundle over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ consists of one finite flat module $D_S^{[s_1,s_2]}$ over each ring $\\mathbf{B}^{[s_1,s_2]}_K\\widehat{\\otimes}_{\\Q}S$ with $s\\leq s_1\\leq s_2$, together with isomorphisms\n\\[\nD_S^{[s_1,s_2]}\\otimes_{\\mathbf{B}^{[s_1,s_2]}_{K}\\widehat{\\otimes}_{\\Q}S}\n\\mathbf{B}^{[s_1',s_2']}_{K}\\widehat{\\otimes}_{\\Q}S\\cong D_S^{[s'_1,s'_2]}\n\\]\nfor all $s\\leq s_1'\\leq s_1\\leq s_2\\leq s_2'$ satisfying the cocycle conditions. A $\\varphi$-bundle over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ is a vector bundle $(D_S^{[s_1,s_2]})_{s\\leq s_1\\leq s_2}$ over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ equipped with isomorphisms $\\varphi^*D_S^{[s_1,s_2]}\\cong D_S^{[ps_1,ps_2]}$ for all $s\/p\\leq s_1\\leq s_2$ satisfying the obvious compatibility conditions. When $s$ is sufficiently large, by \\cite[Proposition 2.2.7]{KPX}, the natural functor from the category of $\\varphi$-modules over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ to the category of $\\varphi$-bundles over $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ is an equivalence of categories. Note that by its definition, one can glue $\\varphi$-bundles over separated rigid analytic spaces. Therefore this equivalence of categories enables us to introduce the following definition.\n\n\\begin{defn}\nLet $X$ be a separated rigid analytic space over $\\Q$. A family of $\\m$-modules $D_X$ over $X$ is a compatible family of $\\m$-modules $D_S$ over $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ for each affinoid subdomain $M(S)$ of $X$.\n\\end{defn}\n\nThe following theorem follows from \\cite{BC07}, \\cite{KL10} and \\cite{L12}.\n\n\\begin{theorem}\nLet $A$ be a Banach algebra over $\\Q$, and let $V_A$ be a finite locally free $A$-linear representation of $G_K$. 
Then there is a $\\m$-module $\\D_\\rig^\\dag(V_A)$ over $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}A$ functorially associated to $V_A$. The rule $V_A\\mapsto \\D_\\rig^\\dag(V_A)$ is fully faithful and exact, and it commutes with base change in $A$.\n\\end{theorem}\n\n\nLet $A$ be a Banach algebra over $K_0$. Recall that one has a canonical decomposition\n\\[\nA\\otimes_{\\Q}K_0\\cong\\prod_{\\sigma\\in\\mathrm{Gal}(K_0\/\\Q)}A_{\\sigma}\n\\]\nwhere each $A_{\\sigma}$ is the base change of $A$ by the automorphism $\\sigma$. Furthermore, the $\\mathrm{Gal}(K_0\/\\Q)$-action permutes all $A_\\sigma$'s in the way that $\\tau(A_\\sigma)=A_{\\tau\\sigma}$. For any $a\\in A^\\times$, we equip $A\\otimes_{\\Q}{K_0}$ with a $\\varphi\\otimes 1$-semilinear action $\\varphi$ by setting\n\\[\n\\varphi((x_1,x_{\\varphi},\\dots, x_{\\varphi^{f-1}}))=(ax_{\\varphi^{f-1}},x_1,\\dots,x_{\\varphi^{f-2}})\n\\]\nwhere $x_{\\sigma}\\in A_{\\sigma}$ for each $\\sigma\\in\\mathrm{Gal}(K_0\/\\Q)$; we denote this $\\varphi$-module by $D_a$. It is clear that the $\\varphi$-action on $D_a$ satisfies $\\varphi^f=1\\otimes a$.\n\nWe fix a uniformizer $\\pi_K$ of $K$.\n\\begin{defn} For any continuous character $\\delta:K^\\times\\ra A^\\times$, we associate it a rank 1 $(\\varphi,\\Gamma)$-module $(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A)(\\delta)$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A$ as follows. If $\\delta|_{\\OO_K^\\times}=1$, we set $(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A)(\\delta)=(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A)\n\\otimes_{A\\otimes_{\\Q}{K_0}}D_{\\delta(\\pi_K)}$ where we equip $D_{\\delta(\\pi_K)}$ with the trivial $\\Gamma$-action. For general $\\delta$, we write $\\delta=\\delta'\\delta''$ such that $\\delta'(\\pi_K)=1$ and $\\delta''|_{\\OO_K^\\times}=\\mathrm{id}$. We view $\\delta'$ as an $A$-valued character of $W_K$, and extend it to a character of $G_K$ continuously. 
We then set\n\\[\n(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A)(\\delta)=\\D_\\rig^\\dagger(\\delta')\n\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A}\n(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A)(\\delta'').\n\\]\nFor a $(\\varphi,\\Gamma)$-module $D_A$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A$, we put $D_A(\\delta)=D_A\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A}\n(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}A)(\\delta)$.\n\nLet $X$ be a separated rigid analytic space over $\\Q$. For a continuous character $\\delta:K^\\times\\ra \\OO(X)^\\times$ and a family of $\\m$-module $D_X$ over $X$, we define the families of $\\m$-modules $(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}\\OO_X)(\\delta)$ and $D_X(\\delta)$ by gluing $(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S)(\\delta)$ and $D_S(\\delta)$ for all affinoid subdomains $M(S)$ respectively.\n\\end{defn}\n\n\\section{Cohomology of families of $\\m$-modules}\nLet $\\Delta_K$ be the $p$-torsion subgroup of $\\Gamma$. Choose $\\gamma_K\\in\\Gamma_K$ whose image in $\\Gamma\/\\Delta_K$ is a topological generator.\n\\begin{defn}\nLet $S$ be an affnioid algebra over $\\Q$. For a $\\m$-module $D_S$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$, we define the Herr complex $C^\\bullet_{\\varphi,\\gamma_K}(D_S)$ of $D_S$ concentrated in degree $[0,2]$ as follows:\n\\[\n C^{\\bullet}_{\\varphi,\\gamma_K}(D_S)=\n [D_S^{\\Delta_K}\\stackrel{d_{1}}{\\longrightarrow}D_S^{\\Delta_K}\\oplus D_S^{\\Delta_K}\n \\stackrel{d_{2}}{\\longrightarrow}D_S^{\\Delta_K}]\n\\]\nwith $d_1(x) = ((\\gamma_K - 1)x, (\\varphi - 1)x)$ and $d_2(x,y) =\n(\\varphi - 1)x - (\\gamma_K - 1)y$. One shows that this complex is independent of the choice of $\\gamma_K$ up to canonical quasi-isomorphism. 
Its cohomology group is denoted by $H^\\bullet(D_S)$.\n\\end{defn}\n\nBy the main result of \\cite{KPX}, one knows that $H^i(D_S)$ is a finite $S$-module and commutes with flat base change in $S$. This enables a cohomology theory for families of $\\m$-modules over general rigid analytic spaces.\n\n\\begin{defn}\nLet $X$ be a separated rigid analytic space over $\\Q$, and let $D_X$ be a family of $\\m$-modules over $X$. For each $0\\leq i\\leq 2$, we define $H^\\bullet(D_X)$ to be the cohomology of the complex\n\\[\nC^{\\bullet}_{\\varphi,\\gamma_K}(D_X)=\n [D_X^{\\Delta_K}\\stackrel{d_{1}}{\\longrightarrow}D_X^{\\Delta_K}\\oplus D_X^{\\Delta_K}\n \\stackrel{d_{2}}{\\longrightarrow}D_X^{\\Delta_K}]\n\\]\nwith $d_1(x) = ((\\gamma_K - 1)x, (\\varphi - 1)x)$ and $d_2(x,y) =\n(\\varphi - 1)x - (\\gamma_K - 1)y$. For each $0\\leq i\\leq 2$, $H^i(D_X)$ is therefore the coherent $\\OO_X$-module obtained by gluing $H^i(D_S)$ for all affinoid subdomains $M(S)$ of $X$.\n\\end{defn}\n\nAs a consequence of finiteness of the cohomology of families of $\\m$-modules, by a standard argument we see that locally on $X$, the complex $C^{\\bullet}_{\\varphi,\\gamma_K}(D_X)$ is quasi isomorphic to a complex of locally free coherent sheaves concentrated in degree $[0,2]$. This would enable us to flatten the cohomology of families of $\\m$-modules by blowing up the base $X$. The following lemma is a rearrangement of some arguments in \\cite[\\S6]{KPX}.\n\n\\begin{lemma}\\label{lem:modification}\nLet $X$ be a reduced, separated and irreducible rigid analytic space over $K$, and let $D_X$ be a family of $\\m$-modules of rank $d$ over $X$. 
Then the following are true.\n\\begin{enumerate}\n\\item[(1)]There exists a proper birational morphism $\\pi:X'\\ra X$ of reduced rigid analytic spaces over $K$ so that $H^0(D_{X'})$ is flat and $H^i(D_{X'})$ has Tor-dimension $\\leq 1$ for each $i=1,2$.\n\\item[(2)]Suppose that $D'_{X}$ is a family of $\\m$-modules over $X$ of rank $d'$, and that $\\lambda: D'_X\\ra D_X$ is a morphism between them such that for any $x\\in X$, the image of $\\lambda_x$ is a $\\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\\pi:X'\\ra X$ of reduced rigid analytic spaces over $K$ so that the cokernel of $\\pi^*\\lambda$ has Tor-dimension $\\leq 1$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nThe key point is that for a bounded complex $(C^\\bullet,d^\\bullet)$ of locally free coherent sheaves on $X$, there exists a blow-up $\\pi:X'\\ra X$, which depends only on the quasi-isomorphism class of $(C^\\bullet,d^\\bullet)$, so that $\\pi^*d^i$ has flat image for each $i$. Furthermore, the construction of $X'$ commutes with dominant base change in $X$ (see \\cite[Corollary 6.2.5]{KPX} for more details). Thus for (1), we can construct $X'$ locally and then glue. For (2),\nlet $Q_X$ denote the cokernel of $\\lambda$. For any $x\\in X$, since the image of $\\lambda_x$ is a $\\m$-submodule of rank $d$, by \\cite[Lemma 5.3.1]{L12}, we get that $Q_x$ is killed by a power of $t$. Now let $M(S)$ be an affinoid subdomain of $X$, and suppose that $D_S^s$ and $D'^s_S$ are defined for some suitable $s>0$. For $r>s$, set $Q_S^{[s,r]}=D^{[s,r]}_S\/\\lambda(D'^{[s,r]}_S)$. Since for any $x\\in M(S)$, the fiber of $Q_S^{[s,r]}$ at $x$ is killed by a power of $t$, we get that $Q_S^{[s,r]}$ is killed by $t^k$ for some $k>0$. This yields that $Q_S^{[s,r]}$ is a finite $S$-module. Now we apply \\cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,ps]}$ to get a blow-up $Y$ of $M(S)$ so that the pullback of $Q_S^{[s,ps]}$ has Tor-dimension $\\leq1$. 
Using the fact $(\\varphi^n)^*Q_S^{[s,ps]}\\cong Q_S^{[p^ns,p^{n+1}s]}$, we see that $Y$ is also the blow-up obtained by applying \\cite[Corollary 6.2.5(1)]{KPX} to a finite presentation of $Q_S^{[s,p^{n+1}s]}$ for any positive integer $n$. It therefore follows that for any $r>s$, the pullback of $Q_S^{[s,r]}$ has Tor-dimension $\\leq 1$; hence the pullback of $Q_S$ has Tor-dimension $\\leq 1$. Furthermore, the blow-ups for all affinoid subdomains $M(S)$ glue to form a blow-up $X'$ of $X$ which satisfies the desired condition.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:ker-birational}\nLet $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D'_X$ and $D_{X}$ be families of $\\m$-modules over $X$ of ranks $d'$ and $d$ respectively, and let $\\lambda: D'_X\\ra D_X$ be a morphism between them. Suppose that for any $x\\in X$, the image of $\\lambda_x$ is a $\\m$-submodule of rank $d$ of $D_x$. Then there exists a proper birational morphism $\\pi:X'\\ra X$ of reduced rigid analytic spaces over $K$ such that the kernel of $\\pi^*\\lambda$ is a family of $\\m$-modules of rank $d'-d$ over $X'$, and there exists a Zariski open dense subset $U\\subset X'$ such that $(\\ker(\\pi^*\\lambda))_x=\\ker((\\pi^*\\lambda)_x)$ for any $x\\in U$.\n\\end{lemma}\n\\begin{proof}\nLet $Q_X$ be the cokernel of $\\lambda$. By the previous lemma, after replacing $X$ with a suitable proper birational modification, we may suppose that $Q_X$ has Tor-dimension $\\leq1$. Now let $P_X$ denote the kernel of $\\lambda$. For any $x\\in X$, the Tor spectral sequence computing the cohomology of the complex $[D'_{X}\\stackrel{\\lambda}{\\longrightarrow}D_{X}]\\otimes^{\\mathbf{L}}_{\\OO_{X}}k(x)$ gives rise to a short exact sequence\n\[\n0\\longrightarrow P_x\\longrightarrow\\ker(\\lambda_x)\\longrightarrow\\mathrm{Tor}_1(Q_X,k(x))\\longrightarrow0.\n\]\nSince the image of $\\lambda_x$ is a $\\m$-module of rank $d$, $\\ker(\\lambda_x)$ is a $\\m$-module of rank $d'-d$. 
Since $Q_X$ is killed by a power of $t$ locally on $X$, we get that the last term of the exact sequence is killed by a power of $t$. This yields that $P_x$ is a $\\m$-module of rank $d'-d$. We therefore conclude that $P_X$ is a family of $\\m$-modules of rank $d'-d$ over $X$ by \\cite[Corollary 2.1.9]{KPX}. Furthermore, since $Q_X$ has Tor-dimension $\\leq1$, by \\cite[Lemma 6.2.7]{KPX}, we get that the set of $x\\in X$ for which $\\mathrm{Tor}_1(Q_X,k(x))\\neq0$ forms a nowhere dense Zariski closed subset of $X$; this yields the rest of the lemma.\n\\end{proof}\n\nThe following proposition modifies part of \\cite[Theorem 6.2.9]{KPX}.\n\n\n\n\\begin{prop}\\label{prop:cohomology}\nLet $X$ be a reduced, separated and irreducible rigid analytic space over $K$. Let $D_X$ be a family of $\\m$-modules of rank $d$ over $X$, and let $\\delta:K^\\times\\ra \\OO(X)^\\times$ be a continuous character. Suppose that there exist a Zariski dense subset $Z$ of closed points of $X$ and a positive integer $c\\leq d$ such that for every $z\\in Z$, $H^0(D_z^{\\vee}(\\delta_z))$ is a\n$c$-dimensional $k(z)$-vector space.\nThen there exists a proper birational morphism $\\pi:X'\\ra X$ of reduced rigid analytic spaces over $K$ and a morphism $\\lambda: D_{X'}\\ra M_{X'}=(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}\\OO_{X'})(\\delta)\\otimes_{\\OO_{X'}}L$ of $\\m$-modules, where $L$ is a locally free coherent $\\OO_{X'}$-module of rank $c$ equipped with trivial $\\varphi,\\Gamma$-actions, such that\n\\begin{enumerate}\n\\item[(1)]for any $x\\in X'$, the image of $\\lambda_{x}$ is a $\\m$-submodule of rank $c$;\n\\item[(2)]the kernel of $\\lambda$ is a family of $\\m$-modules of rank $d-c$ over $X'$, and there exists a Zariski open dense subset $U\\subset X'$ such that $(\\ker\\lambda)_x=\\ker(\\lambda_x)$ for any $x\\in U$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nUsing Lemma \\ref{lem:modification}, we first choose a proper birational morphism $\\pi:X'\\ra X$ with $X'$ 
reduced such that $N_{X'}=\\pi^*(D^{\\vee}_{X}(\\delta))$ satisfies the conditions that $H^0(N_{X'})$ is flat and $H^i(N_{X'})$ has Tor-dimension $\\leq 1$ for each $i=1,2$. Then for any $x\\in X'$, the base change spectral sequence $E^{i,j}_2=\\mathrm{Tor}_{-i}(H^j(N_{X'}),k(x))\\Rightarrow H^{i+j}(N_x)$ gives a short exact sequence\n\[\n0\\longrightarrow H^0(N_{X'})\\otimes_{\\OO_{X'}}k(x)\\longrightarrow H^0(N_x)\\longrightarrow \\mathrm{Tor}_1(H^1(N_{X'}),k(x))\\longrightarrow0.\n\]\nAs $H^1(N_{X'})$ has Tor-dimension $\\leq1$, by \\cite[Lemma 6.2.7]{KPX}, the set of $x\\in X'$ for which the last term of the above exact sequence does not vanish forms a nowhere dense Zariski closed subset $V$. For any $z\\in\\pi^{-1}(Z)\\setminus V$, we deduce from the above exact sequence that $H^0(N_{X'})\\otimes_{\\OO_{X'}}k(z)$ is a $c$-dimensional $k(z)$-vector space. Since $H^0(N_{X'})$ is flat and $\\pi^{-1}(Z)\\setminus V$ is a Zariski dense subset of $X'$, we get that $H^0(N_{X'})$ is locally free of constant rank $c$. Let $L$ be its dual; then the natural map $(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}\\OO_{X'})\\otimes_{\\OO_{X'}}H^0(N_{X'})\\ra N_{X'}$\ngives a map $\\lambda:D_{X'}\\ra M_{X'}=(\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}\\OO_{X'})(\\delta)\\otimes_{\\OO_{X'}}L$. For any $x\\in X'$, since the map $H^0(N_{X'})\\otimes_{\\OO_{X'}}k(x)\\longrightarrow H^0(N_x)$ is injective, we get that the image of $\\lambda_x$ is a rank $c$ $\\m$-submodule of $M_x$. We thus conclude the proposition using the previous lemma.\n\\end{proof}\n\n\\section{Families of Hodge-Tate $\\m$-modules}\n\nFrom now on, let $S$ be a reduced affinoid algebra over $K$.\n\n\\begin{defn}\\label{def:HT}\nLet $D_S$ be a $\\m$-module of rank $d$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$. 
For any positive integer $n$, if $D_S^{r_n}$ is defined, we set\n\[\n\\D^n_{\\Sen}(D_S)=D_S^{r_n}\\otimes_{\\mathbf{B}^{\\dag,r_n}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S}K_n\\otimes_{\\Q}S.\n\]\nWe call $D_S$ \\emph{Hodge-Tate with Hodge-Tate weights in $[a,b]$} if\nthere exists a positive integer $n$ such that\nthe natural map\n\\begin{equation}\\label{eq:def-HT}\n(\\oplus_{a\\leq i\\leq b}\\D^n_\\Sen(D_S(-i)))^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[t,t^{-1}]\\longrightarrow \\oplus_{i\\in\\mathbb{Z}}\\D_\\Sen^n(D_S(-i))\n\\end{equation}\nis an isomorphism. We denote by $h_{HT}(D_S)$ the smallest $n$ which satisfies this condition, and we define $D_{\\mathrm{HT}}(D_S)=(\\oplus_{a\\leq i\\leq b}\\D^{h_{HT}(D_S)}_\\Sen(D_S(-i)))^\\Gamma$.\n\\end{defn}\n\n\\begin{lemma}\\label{lem:HT-inv}\nLet $D_S$ be a Hodge-Tate $\\m$-module over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$. Then for any $n\\geq h_{HT}(D_S)$, (\\ref{eq:def-HT}) is an isomorphism and $\\D_\\Sen^n(D_S(-i))^{\\Gamma}=\\D_\\Sen^{h_{HT}(D_S)}(D_S(-i))^{\\Gamma}$ for any $i\\in [a,b]$. As a consequence, we have\n$(\\oplus_{a\\leq i\\leq b}\\D_\\Sen^n(D_S(-i)))^\\Gamma=D_{\\mathrm{HT}}(D_S)$.\n\\end{lemma}\n\\begin{proof}\nTensoring both sides of the isomorphism\n\[\n(\\oplus_{a\\leq i\\leq b}\\D^{h_{HT}(D_S)}_\\Sen(D_S(-i)))^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_{h_{HT}(D_S)}\n\\otimes_{\\Q}S)[t,t^{-1}]\\longrightarrow \\oplus_{i\\in\\mathbb{Z}}\\D_\\Sen^{h_{HT}(D_S)}(D_S(-i))\n\]\nwith $(K_{n}\\otimes_{\\Q}S)[t,t^{-1}]$, we get that the natural map\n\[\n(\\oplus_{a\\leq i\\leq b}\\D^{h_{HT}(D_S)}_\\Sen(D_S(-i)))^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_{n}\\otimes_{\\Q}S)[t,t^{-1}]\\longrightarrow \\oplus_{i\\in\\mathbb{Z}}\\D_\\Sen^{n}(D_S(-i))\n\]\nis an isomorphism. 
Taking $\\Gamma$-invariants on both sides, we get\n\[\n(\\oplus_{a\\leq i\\leq b}\\D^{h_{HT}(D_S)}_\\Sen(D_S(-i)))^\\Gamma=(\\oplus_{a\\leq i\\leq b}\\D^{n}_\\Sen(D_S(-i)))^\\Gamma.\n\]\nThis yields the lemma.\n\\end{proof}\n\n\\begin{remark}\nIf $D_S$ is Hodge-Tate with weights in $[a,b]$, taking $\\Gamma$-invariants on both sides of (\\ref{eq:def-HT}), we see that $\\D^n_\\Sen(D_S(-i))^{\\Gamma}=0$ for any $n\\geq h_{HT}(D_S)$ and $i\\notin [a,b]$.\n\\end{remark}\n\n\\begin{lemma}\\label{lem:HT}\nIf $D_S$ is a Hodge-Tate $\\m$-module over $\\mathbf{B}^\\dag_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$, then for any morphism $S\\ra R$ of affinoid algebras over $K$, $D_R$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_R)\\leq h_{HT}(D_S)$. Furthermore, the natural map $\\D^n_\\Sen(D_S(i))^\\Gamma\\otimes_{S}R\\ra\\D^n_\\Sen(D_R(i))^\\Gamma$ is an isomorphism for any $i\\in\\mathbb{Z}$ and $n\\geq h_{HT}(D_S)$. As a consequence, the natural map $D_{\\mathrm{HT}}(D_S)\\otimes_SR\\ra D_{\\mathrm{HT}}(D_R)$ is an isomorphism.\n\\end{lemma}\n\\begin{proof}\nLet $n\\geq h_{HT}(D_S)$. Tensoring with $R$ over $S$ on both sides of (\\ref{eq:def-HT}), we get that the natural map\n\[\n(\\oplus_{a\\leq i\\leq b}\\D^n_\\Sen(D_S(-i))^\\Gamma\\otimes_SR)\\otimes_{K\\otimes_{\\Q}R}(K_n\\otimes_{\\Q}R)[t,t^{-1}]\\longrightarrow \\oplus_{i\\in\\mathbb{Z}}\\D_\\Sen^n(D_R(-i))\n\]\nis an isomorphism. Comparing $\\Gamma$-invariants on both sides, we get that the natural map\n\[\n\\D^n_\\Sen(D_S(-i))^\\Gamma\\otimes_{S}R\\ra\\D^n_\\Sen(D_R(-i))^\\Gamma\n\]\nis an isomorphism for any $a\\leq i\\leq b$. This implies that the natural map\n\[\n(\\oplus_{a\\leq i\\leq b}\\D^n_\\Sen(D_R(-i))^\\Gamma)\\otimes_{K\\otimes_{\\Q}R}(K_n\\otimes_{\\Q}R)[t,t^{-1}]\\longrightarrow \\oplus_{i\\in\\mathbb{Z}}\\D_\\Sen^n(D_R(-i))\n\]\nis an isomorphism. 
This proves the lemma.\n\\end{proof}\n\n\\begin{cor}\nIf $D_S$ is a Hodge-Tate $\\m$-module of rank $d$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$, then $D_{\\mathrm{HT}}(D_S)$ is a locally free coherent $K\\otimes_{\\Q}S$-module of rank $d$.\n\\end{cor}\n\\begin{proof}\nBy the previous lemma, it suffices to treat the case that $S$ is a finite extension of $K$; this is clear from the isomorphism (\\ref{eq:def-HT}).\n\\end{proof}\n\n\\begin{defn}\nLet $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\\m$-modules of rank $d$ over $X$. We call $D_X$ \\emph{Hodge-Tate} with weights in $[a,b]$ if for some (hence any) admissible cover $\\{M(S_i)\\}_{i\\in I}$ of $X$, $D_{S_i}$ is Hodge-Tate with weights in $[a,b]$ for any $i\\in I$. We define $D_{\\mathrm{HT}}(D_X)$ to be the gluing of all $D_{\\mathrm{HT}}(D_{S_i})$'s.\n\\end{defn}\n\n\\begin{lemma}\\label{lem:HT-criterion}\nLet $D_S$ be a $\\m$-module over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$, and let $n$ be a positive integer such that $D_S^{r_n}$ is defined. Then (\\ref{eq:def-HT}) is an isomorphism for this $n$ if and only if the natural map\n\\begin{equation}\\label{eq:lem-HT}\n\\oplus_{a\\leq i\\leq b}\\D_\\Sen^n(D_S)^{\\Gamma_n=\\chi^i}\\longrightarrow\\D_\\Sen^n(D_S)\n\\end{equation}\nis an isomorphism.\n\\end{lemma}\n\\begin{proof}\nFor the ``$\\Rightarrow$'' part, since (\\ref{eq:def-HT}) is an isomorphism, we deduce that\n\\begin{equation}\\label{eq:lem-HT-2}\n\\D_\\Sen^n(D_S)=\\oplus_{a\\leq i\\leq b}t^i\\cdot\\D^n_{\\Sen}(D_S(-i))^\\Gamma\n\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S).\n\\end{equation}\nNote that $t^i\\cdot\\D^n_{\\Sen}(D_S(-i))^\\Gamma\\subseteq\\D_\\Sen^n(D_S)^{\\Gamma_n=\\chi^i}$. Hence (\\ref{eq:lem-HT-2}) implies that (\\ref{eq:lem-HT}) is surjective. On the other hand, it is clear that (\\ref{eq:lem-HT}) is injective; hence it is an isomorphism. Conversely, suppose that (\\ref{eq:lem-HT}) is an isomorphism. 
Note that\n\[\n\\D_\\Sen^n(D_S)^{\\Gamma_n=\\chi^i}=t^i\\cdot\\D_\\Sen^n(D_S(-i))^{\\Gamma_n}=(t^i\\cdot\\D_\\Sen^n(D_S(-i))^\\Gamma)\n\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S),\n\]\nwhere the latter equality follows from \\cite[Proposition 2.2.1]{BC07}. This implies that $D_S$ satisfies (\\ref{eq:lem-HT-2}), yielding that $D_S$ satisfies (\\ref{eq:def-HT}).\n\\end{proof}\n\n\\begin{prop}\\label{prop:HT-family}\nLet $D_S$ be a $\\m$-module over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$. Suppose that there exists a Zariski dense subset $Z\\subset M(S)$ such that $D_z$ is Hodge-Tate with weights in $[a,b]$ for any $z\\in Z$ and $\\sup_{z\\in Z}\\{h_{HT}(D_z)\\}<\\infty$. Then $D_S$ is Hodge-Tate with weights in $[a,b]$.\n\\end{prop}\n\\begin{proof}\nLet $n\\geq\\sup_{z\\in Z}\\{h_{HT}(D_z)\\}$ be such that $D_S^{r_n}$ is defined, and let $\\gamma$ be a topological generator of $\\Gamma_n$. For any $a\\leq i\\leq b$, let $p_i$ denote the operator\n$\\prod_{a\\leq j\\leq b, j\\neq i}\\frac{\\gamma-\\chi^{j}(\\gamma)}{\\chi^i(\\gamma)-\\chi^j(\\gamma)}$,\nand let $M_i=p_i(\\D_\\Sen^n(D_S))$. It is clear that $p_i$ is the identity on $\\D_{\\Sen}^n(D_S)^{\\Gamma_n=\\chi^i}$; hence $\\D_{\\Sen}^n(D_S)^{\\Gamma_n=\\chi^i}\\subseteq M_i$. On the other hand, for any $z\\in Z$, since $D_z$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_z)\\leq n$, we deduce from Lemma \\ref{lem:HT-criterion} that $p_i(\\D_\\Sen^n(D_z))=\\D^n_\\Sen(D_z)^{\\Gamma_n=\\chi^i}$. This implies that $M_i$ maps onto $\\D^n_\\Sen(D_z)^{\\Gamma_n=\\chi^i}$ under the specialization $\\D_\\Sen^n(D_S)\\ra \\D_\\Sen^n(D_z)$. Moreover, $(\\gamma-\\chi^i(\\gamma))(M_i)$ vanishes under every such specialization; since $Z$ is Zariski dense, we get $(\\gamma-\\chi^i(\\gamma))(M_i)=0$, and since $\\gamma$ topologically generates $\\Gamma_n$, we conclude $M_i\\subseteq\\D^n_\\Sen(D_S)^{\\Gamma_n=\\chi^i}$; hence $M_i=\\D^n_\\Sen(D_S)^{\\Gamma_n=\\chi^i}$.\nLet $M=\\oplus_{a\\leq i\\leq b}M_i$. We claim that the natural inclusion $M\\subseteq \\D_\\Sen^n(D_S)$ is an isomorphism. 
In fact, for any $z\\in Z$, since $\\D_\\Sen^n(D_z)=\\oplus_{a\\leq i\\leq b}\\D_\\Sen^n(D_z)^{\\Gamma_n=\\chi^i}$, we have that $M$ maps onto $\\D_\\Sen^n(D_z)$. Thus $\\D^n_\\Sen(D_S)\/M$ vanishes at $z$. We therefore conclude $\\D^n_\\Sen(D_S)\/M=0$ because $Z$ is Zariski dense. By Lemma \\ref{lem:HT-criterion} and the claim, we conclude that $D_S$ is Hodge-Tate with weights in $[a,b]$.\n\\end{proof}\n\n\n\n\\section{Families of de Rham $\\m$-modules}\n\\begin{defn}\\label{def:dR}\nLet $D_S$ be a $\\m$-module of rank $d$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$. For any positive integer $n$, if $D_S^{r_n}$ is defined, we set\n\[\n\\D^{+,n}_{\\dif}(D_S)=D_S^{r_n}\\otimes_{\\mathbf{B}^{\\dag,r_n}_{\\rig,K}\\widehat{\\otimes}_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]], \\qquad\n\\D^{n}_{\\dif}(D_S)=\\D^{+,n}_{\\dif}(D_S)[1\/t].\n\]\nWe equip $\\D_\\dif^n(D_S)$ with the filtration $\\mathrm{Fil}^i\\D_\\dif^n(D_S)=t^i\\D_\\dif^{+,n}(D_S)$. We call $D_S$ \\emph{de Rham with weights in $[a,b]$} if there exists a positive integer $n$ such that\n\\begin{enumerate}\n\\item[(1)]\nthe natural map\n\\begin{equation}\\label{eq:def-de Rham}\n\\D^n_\\dif(D_S)^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]][1\/t]\\longrightarrow \\D_\\dif^n(D_S)\n\\end{equation}\nis an isomorphism;\n\\item[(2)]$\\mathrm{Fil}^{-b}(\\D^n_\\dif(D_S)^\\Gamma)=\\D^n_\\dif(D_S)^\\Gamma$ and $\\mathrm{Fil}^{-a+1}(\\D^n_\\dif(D_S)^\\Gamma)=0$,\nwhere $\\mathrm{Fil}^{i}(\\D^n_\\dif(D_S)^\\Gamma)$ is the induced filtration on $\\D^n_\\dif(D_S)^\\Gamma$.\n\\end{enumerate}\nWe denote by $h_{dR}(D_S)$ the smallest $n$ which satisfies these conditions, and we define $D_{\\mathrm{dR}}(D_S)=\\D^{h_{dR}(D_S)}_\\dif(D_S)^\\Gamma$.\n\\end{defn}\n\n\\begin{lemma}\\label{lem:dR-inv}\nLet $D_S$ be a de Rham $\\m$-module over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$. 
Then for any $n\\geq h_{dR}(D_S)$, $\\D^n_\\dif(D_S)^\\Gamma=D_{\\mathrm{dR}}(D_S)$.\n\\end{lemma}\n\\begin{proof}\nTensoring both sides of the map\n\[\n\\D^{h_{dR}(D_S)}_\\dif(D_S)^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_{h_{dR}(D_S)}\\otimes_{\\Q}S)[[t]][1\/t]\\longrightarrow \\D_\\dif^{h_{dR}(D_S)}(D_S)\n\]\nwith $(K_{n}\\otimes_{\\Q}S)[[t]][1\/t]$, we get that the map\n\[\n\\D^{h_{dR}(D_S)}_\\dif(D_S)^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_{n}\\otimes_{\\Q}S)[[t]][1\/t]\\longrightarrow \\D_\\dif^{n}(D_S)\n\]\nis an isomorphism. Comparing $\\Gamma$-invariants on both sides, we get the desired result.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:dR-HT}\nIf $D_S$ is a de Rham $\\m$-module of rank $d$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$, then $D_S$ is Hodge-Tate with weights in $[a,b]$ and $h_{HT}(D_S)\\leq h_{dR}(D_S)$. Furthermore, we have $\\mathrm{Gr}^iD_{\\mathrm{dR}}(D_S)=\\D_\\Sen^n(D_S(i))^\\Gamma$ under the identification $\\mathrm{Gr}^i\\D_\\dif^n(D_S)=\\D_\\Sen^n(D_S(i))$ for any $n\\geq h_{dR}(D_S)$.\n\\end{lemma}\n\\begin{proof}\nLet $n\\geq h_{dR}(D_S)$. Since (\\ref{eq:def-de Rham}) is an isomorphism, we deduce that the natural map of graded modules\n\\begin{equation}\\label{eq:lem-dR-HT}\n\\oplus_{i\\in\\mathbb{Z}}\\mathrm{Gr}^iD_{\\mathrm{dR}}(D_S)\n\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[t,t^{-1}]\\longrightarrow \\oplus_{i\\in\\mathbb{Z}}\\D_\\Sen^n(D_S(i))\n\\end{equation}\nis surjective. On the other hand, since $t^i\\cdot\\mathrm{Gr}^{-i}D_{\\mathrm{dR}}(D_S)\\subset \\D_{\\Sen}^n(D_S)$, we have that the natural map\n\[\n\\oplus_{a\\leq i\\leq b}t^i\\cdot\\mathrm{Gr}^{-i}D_{\\mathrm{dR}}(D_S)\\ra \\D_\\Sen^n(D_S)\n\]\nis injective. This implies that (\\ref{eq:lem-dR-HT}) is injective; hence it is an isomorphism. Comparing the $\\Gamma$-invariants on both sides, we get $\\mathrm{Gr}^iD_{\\mathrm{dR}}(D_S)=\\D_\\Sen^n(D_S(i))^\\Gamma$ for each $i\\in\\mathbb{Z}$. 
This proves the lemma.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:dR}\nIf $D_S$ is a de Rham $\\m$-module over $\\mathbf{B}^\\dag_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$, then for any morphism $S\\ra R$ of affinoid algebras over $K$, $D_R$ is de Rham with weights in $[a,b]$ and $h_{dR}(D_R)\\leq h_{dR}(D_S)$. Furthermore, the natural maps $\\mathrm{Fil}^i D_{\\mathrm{dR}}(D_S)\\otimes_{S}R\\ra \\mathrm{Fil}^iD_{\\mathrm{dR}}(D_R)$ are isomorphisms for all $i\\in \\mathbb{Z}$.\n\\end{lemma}\n\\begin{proof}\nLet $n\\geq h_{dR}(D_S)$. Tensoring with $(K_n\\otimes_{\\Q}R)[[t]][1\/t]$ on both sides of (\\ref{eq:def-de Rham}), we get that the natural map\n\\begin{equation}\\label{eq:lem-dR}\n(\\D^n_\\dif(D_S)^\\Gamma\\otimes_S R)\\otimes_{K\\otimes_{\\Q}R}(K_n\\otimes_{\\Q}R)[[t]][1\/t]\\longrightarrow \\D_\\dif^n(D_R)\n\\end{equation}\nis an isomorphism. Comparing $\\Gamma$-invariants on both sides of (\\ref{eq:lem-dR}), we get that the natural map $\\D^n_\\dif(D_S)^\\Gamma\\otimes_{S}R\\ra\\D^n_\\dif(D_R)^\\Gamma$\nis an isomorphism; hence $D_R$ is de Rham. Then by Lemmas \\ref{lem:HT} and \\ref{lem:dR-HT}, we deduce that the natural map\n$\\mathrm{Gr}^i(D_{\\mathrm{dR}}(D_S))\\otimes_SR\\ra\\mathrm{Gr}^i(D_{\\mathrm{dR}}(D_R))$ is an isomorphism.\nThis implies the rest of the lemma.\n\\end{proof}\n\n\\begin{cor}\nIf $D_S$ is a de Rham $\\m$-module of rank $d$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q} S$, then $D_{\\mathrm{dR}}(D_S)$ is a locally free coherent $K\\otimes_{\\Q}S$-module of rank $d$.\n\\end{cor}\n\\begin{proof}\nWe first note that for each $i\\in\\mathbb{Z}$, $\\mathrm{Gr}^i(D_{\\mathrm{dR}}(D_S))$, which is isomorphic to $\\D_\\Sen^n(D_S(i))^\\Gamma$ by Lemma \\ref{lem:dR-HT}, is a coherent $K\\otimes_{\\Q}S$-module. We then deduce that $D_{\\mathrm{dR}}(D_S)$ is a coherent $K\\otimes_{\\Q}S$-module. 
Using Lemma \\ref{lem:dR}, it then suffices to treat the case that $S$ is a finite extension of $K$; this follows easily from the isomorphism (\\ref{eq:def-de Rham}).\n\\end{proof}\n\n\\begin{defn}\nLet $X$ be a reduced and separated rigid analytic space over $K$, and let $D_X$ be a family of $\\m$-modules of rank $d$ over $X$. We call $D_X$ \\emph{de Rham with weights in $[a,b]$} if for some (hence any) admissible cover $\\{M(S_i)\\}_{i\\in I}$ of $X$, $D_{S_i}$ is de Rham with weights in $[a,b]$ for any $i\\in I$. We define $D_{\\mathrm{dR}}(D_X)$ to be the gluing of all $D_{\\mathrm{dR}}(D_{S_i})$'s.\n\\end{defn}\n\n\\begin{lemma}\\label{lem:dR-weight}\nIf $D_S$ is a de Rham $\\m$-module over $\\mathbf{B}^\\dag_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ of rank $d$ with weights in $[a,b]$, then $t^{-a}\\D_\\dif^{+,n}(D_S)\\subset D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]]\\subset t^{-b}\\D_\\dif^{+,n}(D_S)$ for any $n\\geq h_{dR}(D_S)$.\n\\end{lemma}\n\\begin{proof}\nSince $\\mathrm{Fil}^{-b}D_{\\mathrm{dR}}(D_S)=D_{\\mathrm{dR}}(D_S)$, we get $D_{\\mathrm{dR}}(D_S)\\subset t^{-b}\\D^{+,n}_\\dif(D_S)$; hence $D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]]\\subset t^{-b}\\D_\\dif^{+,n}(D_S)$. By the proof of Lemma \\ref{lem:dR-HT}, we know that the natural map (\\ref{eq:lem-dR-HT}) is an isomorphism of graded modules. By the facts that $\\mathrm{Gr}^iD_{\\mathrm{dR}}(D_S)=0$ for $i\\geq -a+1$ and $\\mathrm{Fil}^i\\D_\\dif^n(D_S)$ is $t$-adically complete, we thus deduce that $t^{-a}\\D_\\dif^{+,n}(D_S)\\subset D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]]$.\n\\end{proof}\n\n\\begin{lemma}\nLet $D_S$ be a Hodge-Tate $\\m$-module over $\\mathbf{B}^\\dag_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$. 
Then for any $k\\geq b-a+1$, $i\\in[a,b]$, $n\\geq h_{HT}(D_S)$ and any topological generator $\\gamma$ of $\\Gamma_n$, the map $\\gamma-\\chi^i(\\gamma):t^k\\D_\\dif^{+,n}(D_S)\\ra t^k\\D_\\dif^{+,n}(D_S)$ is bijective.\n\\end{lemma}\n\\begin{proof}\nSince $\\D_\\dif^{+,n}(D_S)$ is $t$-adically complete, it suffices to show that\n\[\n\\gamma-\\chi^i(\\gamma):t^k\\D_\\dif^{+,n}(D_S)\/t^{k+1}\\D_\\dif^{+,n}(D_S)\\ra t^k\\D_\\dif^{+,n}(D_S)\/t^{k+1}\\D_\\dif^{+,n}(D_S)\n\]\nis bijective for any $k\\geq b-a+1$. Note that $t^k\\D_\\dif^{+,n}(D_S)\/t^{k+1}\\D_\\dif^{+,n}(D_S)$ is isomorphic to $\\D_\\Sen^n(D_S(k))$ as a $\\Gamma$-module. Moreover, $\\D^n_\\Sen(D_S(k))=\\oplus_{a\\leq j\\leq b}(\\D^n_\\Sen(D_S))^{\\Gamma_n=\\chi^{j+k}}$ by Lemma \\ref{lem:HT-criterion}. Since $j+k\\geq b+1$ for all $j\\in [a,b]$, we deduce that $\\gamma-\\chi^i(\\gamma)$ is bijective on $\\D^n_\\Sen(D_S(k))$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:dR-criterion}\nLet $D_S$ be a Hodge-Tate $\\m$-module over $\\mathbf{B}^\\dag_{\\rig,K}\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$. Then $D_S$ is de Rham if and only if there exists a positive integer $n\\geq h_{HT}(D_S)$ such that $\\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)\\D_\\dif^{+,n}(D_S)\\subset t^{b-a+1}\\D_\\dif^{+,n}(D_S)$, where $\\gamma$ is a topological generator of $\\Gamma_n$. Furthermore, if this is the case, then (\\ref{eq:def-de Rham}) holds for $n$.\n\\end{lemma}\n\\begin{proof}\nSuppose that $D_S$ is de Rham. Let $n\\geq h_{dR}(D_S)$, and put\n\[\nN=D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]].\n\]\nSince $D_S$ has weights in $[a,b]$, by Lemma \\ref{lem:dR-weight}, we have $t^{-a}\\D_\\dif^{+,n}(D_S)\\subset N\\subset t^{-b}\\D_\\dif^{+,n}(D_S)$. On the other hand, by the construction of $N$, it is clear that $(\\gamma-1)N\\subset tN$. 
It therefore follows that\n\[\n\\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)\\D_\\dif^{+,n}(D_S)\\subset\n\\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)(t^aN)\\subset t^{2b-a+1}N\\subset t^{b-a+1}\\D_\\dif^{+,n}(D_S).\n\]\nNow suppose $\\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)\\D_\\dif^{+,n}(D_S)\\subset t^{b-a+1}\\D_\\dif^{+,n}(D_S)$ for some $n\\geq h_{HT}(D_S)$. We claim that for any $j\\in[a,b]$ and $x\\in(\\D^n_\\Sen(D_S))^{\\Gamma_n=\\chi^j}$, we can lift $x$ to an element of $(\\D_{\\dif}^{+,n}(D_S))^{\\Gamma_n=\\chi^j}$. In fact, let $\\tilde{x}$ be any lift of $x$ in $\\D_\\dif^{+,n}(D_S)$, and let $\\tilde{y}=\\prod_{a\\leq i\\leq 2b-a, i\\neq j}\\frac{\\gamma-\\chi^i(\\gamma)}{\\chi^j(\\gamma)-\\chi^i(\\gamma)}\\tilde{x}$ where $\\gamma$ is a topological generator of $\\Gamma_n$; it is clear that $\\tilde{y}$ is also a lift of $x$. Furthermore, by assumption, we have $(\\gamma-\\chi^j(\\gamma))(\\tilde{y})\\in \\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)\\D_\\dif^{+,n}(D_S)\\subset t^{b-a+1}\\D^{+,n}_\\dif(D_S)$. By the previous lemma, we choose some $\\tilde{z}\\in t^{b-a+1}\\D^{+,n}_\\dif(D_S)$ satisfying $(\\gamma-\\chi^j(\\gamma))(\\tilde{y})=(\\gamma-\\chi^j(\\gamma))(\\tilde{z})$. It is then clear that $\\tilde{y}-\\tilde{z}$ is a desired lift of $x$. Since $\\D^n_\\Sen(D_S)=\\oplus_{a\\leq i\\leq b}(\\D^n_\\Sen(D_S))^{\\Gamma_n=\\chi^i}$, we have that $(\\D^n_\\Sen(D_S))^{\\Gamma_n=\\chi^i}$ is locally free for each $i\\in[a,b]$. By shrinking $M(S)$, we may further suppose that each $(\\D^n_\\Sen(D_S))^{\\Gamma_n=\\chi^i}$ is free. We then deduce from the claim that there exists a free $K_n\\otimes_{\\Q}S$-module $M\\subseteq(\\D_\\dif^{n}(D_S))^{\\Gamma_n}$ such that the natural map\n\[\nM\\otimes_{K_n\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]][1\/t]\\longrightarrow \\D_\\dif^n(D_S)\n\]\nis an isomorphism. 
It follows that the natural map\n\[\nM^\\Gamma\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)[[t]][1\/t]\\longrightarrow \\D_\\dif^n(D_S)\n\]\nis an isomorphism because $M=M^{\\Gamma}\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}S)$ by \\cite[Proposition 2.2.1]{BC07}. Taking $\\Gamma$-invariants on both sides, we get $M^{\\Gamma}=(\\D_\\dif^n(D_S))^\\Gamma$. This implies that $D_S$ is de Rham.\n\\end{proof}\n\n\n\\begin{prop}\\label{prop:dR-family}\nLet $D_S$ be a $\\m$-module over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$. Suppose that there exists a Zariski dense subset $Z\\subset M(S)$ such that $D_z$ is de Rham with weights in $[a,b]$ for any $z\\in Z$ and $\\sup_{z\\in Z}\\{h_{dR}(D_z)\\}<\\infty$. Then $D_S$ is de Rham with weights in $[a,b]$.\n\\end{prop}\n\\begin{proof}\nBy Proposition \\ref{prop:HT-family}, we first have that $D_S$ is Hodge-Tate with weights in $[a,b]$. Let $n\\geq \\max\\{h_{HT}(D_S),\\sup_{z\\in Z}\\{h_{dR}(D_z)\\}\\}$. By Lemma \\ref{lem:dR-criterion}, we have\n\[\n\\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)\\D_\\dif^{+,n}(D_z)\\subset t^{b-a+1}\\D_\\dif^{+,n}(D_z)\n\]\nfor any $z\\in Z$. This implies $\\prod_{i=a}^{2b-a}(\\gamma-\\chi(\\gamma)^i)\\D_\\dif^{+,n}(D_S)\\subset t^{b-a+1}\\D_\\dif^{+,n}(D_S)$ because $Z$ is Zariski dense. Hence $D_S$ is de Rham by Lemma \\ref{lem:dR-criterion} again.\n\\end{proof}\n\n\n\n\\section{$p$-adic local monodromy for families of de Rham $\\m$-modules}\nThe main goal of this section is to prove the $p$-adic local monodromy theorem for families of de Rham $\\m$-modules. The proof is similar to Berger-Colmez's proof of the $p$-adic local monodromy theorem for families of de Rham representations \\cite[\\S6]{BC07}. Indeed, with the results we have proved in \\S2 and \\S3, the proofs from [\\emph{loc.cit.}] carry over verbatim. 
We therefore often only sketch the proofs and refer the reader to [\\emph{loc.cit.}] for more details.\n\nWe fix $E$ to be a finite extension of the product of the complete residue fields of the Shilov boundary of $M(S)$.\n\n\\begin{prop}\\label{prop:N_dR}\nLet $D_S$ be a de Rham $\\m$-module of rank $d$ over $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}S$ with weights in $[a,b]$. For any $s>0$ such that $n(s)\\geq h_{dR}(D_S)$, let\n\[\nN_s(D_E)=\\{y\\in t^{-b}D^{s}_E\\hspace{2mm}\\text{such that}\\hspace{2mm}\\iota_n(y)\\in D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]\\hspace{1mm}\\text{for each}\\hspace{2mm}n\\geq n(s)\\}.\n\]\nThen the following are true.\n\\begin{enumerate}\n\\item[(1)]The $\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q} E$-module $N_s(D_E)$ is free of rank $d$ and stable under $\\Gamma$.\n\\item[(2)]We have\n$N_s(D_E)\\otimes_{\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}E,\\iota_n}(K_n\\otimes_{\\Q}E)[[t]]\n=D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]$ for each $n\\geq n(s)$.\n\\end{enumerate}\nFurthermore, if we put $N_{\\mathrm{dR}}(D_E)=N_s(D_E)\\otimes_{\\mathbf{B}^{\\dag,s}_{\\rig,K}\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q}E$, then the following are true.\n\\begin{enumerate}\n\\item[(3)]The $\\mathbf{B}^{\\dag}_{\\rig,K}\\widehat{\\otimes}_{\\Q} E$-module $N_{\\mathrm{dR}}(D_E)$ is free of rank $d$, stable under $\\Gamma$, and independent of the choice of $s$.\n\\item[(4)]We have $\\varphi^*(N_{\\mathrm{dR}}(D_E))=N_{\\mathrm{dR}}(D_E)$ and $\\nabla(N_{\\mathrm{dR}}(D_E))\\subset t\\cdot N_{\\mathrm{dR}}(D_E)$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nSince the localization map $\\iota_n$ is continuous, we first have that $N_s(D_E)$ is a closed $\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E$-submodule of $t^{-b}D_E^{s}$. 
It follows that\n$N_s(D_E)$ is a finite locally free $\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E$-module because $\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E$ is isomorphic to a finite product of Robba rings. On the other hand, by\nLemma \\ref{lem:dR-weight}, we get that $t^{-a}D_E^{s}$ is contained in $N_s(D_E)$. We thus conclude that $N_s(D_E)$ is a free $\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E$-module of rank $d$. To show (2), we proceed as in the proof of \\cite[Proposition 6.1.1]{BC07}. For any $y\\in D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]$ and $w\\geq \\max\\{0,b-a\\}$, since\n\[\nD_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]\\subset t^{-b}\\D_\\dif^{+,n}(D_E)\n\]\nby Lemma \\ref{lem:dR-weight}, we may pick some $y_0\\in t^{-b}D_E^{s}$ such that $\\iota_n(y_0)-y\\in t^w\nD_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]$. Let $t_{n,w}$ be the function defined in \\cite[Lemme I.2.1]{LB04}. It follows that\n\[\n\\iota_m(t_{n,w}y_0)\\in t^{w-b}\\D_\\dif^{+,m}(D_E)=t^{w-b+a}(t^{-a}\\D_\\dif^{+,m}(D_E))\\subset D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_m\\otimes_{\\Q}E)[[t]]\n\]\nfor $m>n$\nand\n\[\n\\iota_n(t_{n,w}y_0)-y\\in t^{w}D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]].\n\]\nThis implies that the natural map $N_s(D_E)\\ra D_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]\/(t^w)$ is surjective; this proves (2). We get (3) immediately from (2). The first half of (4) follows from the fact that $\\iota_{n+1}\\circ \\varphi=\\iota_n$. Note that $\\iota_n(\\nabla(N_s(D_E)))=\\nabla(\\iota_n(N_s(D_E)))\\subset tD_{\\mathrm{dR}}(D_S)\\otimes_{K\\otimes_{\\Q}S}(K_n\\otimes_{\\Q}E)[[t]]$ for any $n\\geq n(s)$; this proves the second half of (4).\n\\end{proof}\n\n\\begin{prop}\\label{prop:monodromy}\nKeep notation as in Proposition \\ref{prop:N_dR}. 
Then there exists a finite extension $L$ over $K$ such that\n\\[\nM=(N_{\\mathrm{dR}}(D_E)\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\log,L}^\\dag\\widehat{\\otimes}_{\\Q}E)^{I_L}\n\\]\nis a free $L_0'\\otimes_{\\Q}E$-module of rank $d$ and the natural map\n\\begin{equation*}\n\\begin{split}\nM\\otimes_{L_0'\\otimes_{\\Q}E}\n\\mathbf{B}_{\\log,L}^\\dag\\widehat{\\otimes}_{\\Q}E\n\\longrightarrow N_{\\mathrm{dR}}(D_E)\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\log,L}^\\dag\\widehat{\\otimes}_{\\Q}E\n\\end{split}\n\\end{equation*}\nis an isomorphism.\n\\end{prop}\n\\begin{proof}\nLet $f'=[K_0':\\Q]$. Note that there is a canonical decomposition $\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}E\\cong\\prod_{i=0}^{f'-1}\\r_E^{(i)}$\nwhere each $\\r_E^{(i)}$ is isomorphic to $\\r_E$ and stable under $\\Gamma_K$, and satisfies $\\varphi(\\r_E^{(i)})\\subset\\r_E^{(i+1)}$ ($\\r_E^{(f')}=\\r_E^{(0)}$). Let $N^{(i)}_{\\mathrm{dR}}(D_E)=N_{\\mathrm{dR}}(D_E)\n\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}E}\\r_E^{(i)}$. It follows that each $N_{\\mathrm{dR}}^{(i)}(D_E)$ is stable under $\\partial=\\nabla\/t$ and $\\varphi^{f'}$; hence it is a $p$-adic differential equation with a Frobenius structure. By the versions of the $p$-adic local monodromy theorem proved by Andr\\'e \\cite{An} or Mebkhout \\cite{Meb}, we conclude that each $N^{(i)}_{\\mathrm{dR}}(D_E)$ is potentially unipotent. This yields the proposition using the argument of \\cite[Proposition 6.2.2]{BC07} and \\cite[Corollaire 6.2.3]{BC07}.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:monodromy}\nKeep notations as in Proposition \\ref{prop:monodromy}, and let\n\\[\nM=(N_s(D_E)\\otimes_{\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\log,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E)^{I_L}\n\\]\nfor sufficiently large $s$. 
Then for any $n\\geq n(s)$, we have\n\\begin{equation}\\label{eq:lem-monodromy}\nL\\otimes_{L_0}\\iota_n(M)=(\\D_\\dif(D_E\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\rig,L}^\\dag\\widehat{\\otimes}_{\\Q}E))^{I_L}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nBy the previous proposition, the left hand side of (\\ref{eq:lem-monodromy}) is a free $L\\otimes_{L_0}L_0'\\otimes_{\\Q}E$-module of rank $d$. On the other hand, since $((L_n\\otimes_{\\Q}E)[[t]][1\/t])^{I_L}=L\\otimes_{L_0}L_0'\\otimes_{\\Q}E$, we deduce that the right hand side of (\\ref{eq:lem-monodromy}), which obviously contains the left hand side, is an $L\\otimes_{L_0}L_0'\\otimes_{\\Q}E$-module generated by at most $d$ elements. This yields the desired identity.\n\\end{proof}\n\n\\section{Proof of the main theorem}\nWe start by making some preliminary reductions. After a finite surjective base change of $X$, we may assume that $Q(T)$ factors as $\\prod_{i=1}^m(T-F_i)$. By reordering the $F_i$'s and throwing away some points of $Z$ we may further assume that for all $z\\in Z$, $v_p(F_i(z))\\geq v_p(F_j(z))$ if $i>j$ and $F_i(z)\\neq F_j(z)$ if $F_i\\neq F_j$. We then set $\\F_{i,z}=\nD_{\\mathrm{st}}^+(V_z)^{(\\varphi^f-F_1(z))\\cdots(\\varphi^f-F_{i}(z))=0}$ for all $z\\in Z$ and $1\\leq i\\leq m$. \nUsing Definition \\ref{def:fs}(3), we may suppose that $\\F_{i,z}\\subseteq \\F_z$ for all $z\\in Z$ and $1\\leq i\\leq m$ by shrinking $Z$. Furthermore, by the fact that $N\\varphi=p\\varphi N$ and the condition that $v_p(F_i(z))\\geq v_p(F_j(z))$ if $i>j$, we see that $N=0$ on each graded piece $\\F_{i,z}\/\\F_{i-1,z}$.\nLet $c_{i,z}$ be the rank of $\\F_{i,z}\/\\F_{i-1,z}$ over $K_0\\otimes k(z)$, and partition $Z$ into finitely many subsets according to the sequence $c_{i,z}$. One of these subsets of $Z$ must still be Zariski dense.
Replace $Z$ by this subset and set $c_i = c_{i,z}$ for any $z$ in this subset.\n\nFor $z\\in Z$, we will inductively construct $\\m$-submodules $\\mathrm{Fil}_{i,z}\\subset\\D_\\rig^\\dag(V_z)$ for $1\\leq i\\leq m$ such that $D_{\\mathrm{st}}(\\mathrm{Fil}_{i,z})=\\F_{i,z}$. For $i=1$, since $V_z$ has non-positive Hodge-Tate weights and $N(\\F_{1,z})=0$, we have\n\\[\n\\F_{1,z}=(D^+_{\\mathrm{crys}}(V_z))^{\\varphi^f=F_1(z)}\\subset\\D_\\rig^\\dag(V_z)^{\\Gamma}\n\\]\nby Berger's dictionary. Let $\\mathrm{Fil}_{1,z}$ be the saturation of the $\\m$-submodule generated by $\\mathcal{F}_{1,z}$. Now suppose we have constructed $\\mathrm{Fil}_{i-1,z}$ for some $i\\geq 2$. It follows that\n\\[\nD_{\\mathrm{st}}^+(\\D_\\rig^\\dag(V_z)\/\\mathrm{Fil}_{i-1,z})=D_{\\mathrm{st}}^+(V_z)\/\\F_{i-1,z}.\n\\]\nNote that\n\\[\n\\F_{i,z}\/\\F_{i-1,z}=(D_{\\mathrm{st}}^+(V_z)\/\\F_{i-1,z})^{\\varphi^f=F_{i}(z),N=0}.\n\\]\nHence\n\\[\n\\F_{i,z}\/\\F_{i-1,z}=D^+_{\\mathrm{crys}}(\\D_\\rig^\\dag(V_z)\/\\mathrm{Fil}_{i-1,z})^{\\varphi^f=F_{i}(z)}\\subset\n(\\D_\\rig^\\dag(V_z)\/\\mathrm{Fil}_{i-1,z})^\\Gamma.\n\\]\nWe then set $\\mathrm{Fil}_{i,z}$ to be the preimage of the saturation of the $\\m$-submodule of $\\D_\\rig^\\dag(V_z)\/\\mathrm{Fil}_{i-1,z}$ generated by $\\F_{i,z}\/\\F_{i-1,z}$.\nNow for each $1\\leq i\\leq m$, we define the character $\\delta_i:K^\\times\\ra\\OO(X)^\\times$ by setting $\\delta_i(p)=F_i^{-1}$ and $\\delta_i(\\OO_K^\\times)=1$. Let $D_X=\\D_\\rig^\\dag(V_X)^{\\vee}$.\n\n\\begin{lemma}\\label{lem:de Rham-part}\nSuppose that $X$ is irreducible.
Then for each $0\\leq i\\leq m$, there exists a proper birational morphism $\\pi:X'\\ra X$ and a sub-family of $\\m$-modules $D^{(i)}_{X'}\\subset D_{X'}$ over $X'$ of rank $d-c_1-\\dots-c_i$ such that\n\\begin{enumerate}\n\\item[(1)]\nfor any $x\\in X'$, the natural map $D_x^{(i)}\\ra D_x$ is injective;\n\\item[(2)]\nthere exists a Zariski open dense subset $U$ of $X'$ such that for any $z\\in Z'=\\pi^{-1}(Z)\\cap U$, the natural map $D^{(i)}_z\\ra D_z$ is the dual of the projection $\\D_\\rig^\\dag(V_{\\pi(z)})\\ra \\D_\\rig^\\dag(V_{\\pi(z)})\/\\mathrm{Fil}_{i,\\pi(z)}$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nWe proceed by induction on $i$. The initial case is trivial. Suppose that for some $1\\leq i\\leq m$, the lemma is true for $i-1$.\nNote that $\\mathcal{F}_{i,z}\/\\mathcal{F}_{i-1,z}$ maps into $\\D_\\rig^\\dag(V_{z})\/\\mathrm{Fil}_{i-1,z}$ for any $z\\in Z$. Since $\\F_{i,z}\/\\F_{i-1,z}=(D_{\\mathrm{crys}}^+(V_z)\/\\F_{i-1,z})^{\\varphi^f=F_{i}(z)}$, we get that $(D^{(i-1)}_z)^{\\vee}(\\pi^{*}(\\delta_i)(z))$ has $k(z)$-dimension $c_i$ for any $z\\in Z'$. Since $Z'$ is Zariski dense in $X'$, by Proposition \\ref{prop:cohomology}, after adapting $X'$ and $U$, we may find a sub-family of $\\m$-modules $D^{(i)}_{X'}$ of $D^{(i-1)}_{X'}$ with rank $d-c_1-\\dots-c_i$ such that\n\\begin{enumerate}\n\\item[(1')]$D_x^{(i)}\\ra D_x^{(i-1)}$ is injective for any $x\\in X'$;\n\\item[(2')]for any $z\\in \\pi^{-1}(Z)\\cap U$, $D_z^{(i)}$ is the kernel of the dual of the map\n\\[\n (\\mathbf{B}_{\\rig,K}^\\dag\\otimes_{\\Q}k(z))\\cdot(\\mathcal{F}_{i,\\pi(z)}\/\\mathcal{F}_{i-1,\\pi(z)})\\ra \\D_\\rig^\\dag(V_{\\pi(z)})\/\\mathrm{Fil}_{i-1,\\pi(z)}.\n\\]\n\\end{enumerate}\nIt is clear that (1') and (2') imply (1) and (2) respectively; this finishes the inductive step.\n\\end{proof}\n\nTo prove Theorem \\ref{thm:main}, we also need the following lemma.\n\\begin{lemma}\nLet $V_S$ be a free $S$-linear representation of $G_K$ of rank $d$.
Then there exists a positive integer $m(V_S)$ such that for any $x\\in M(S)$ and $a\\in\\D_\\dif^{+}(V_x)$, if $a$ is $\\Gamma$-invariant, then $a\\in\\D_\\dif^{+,m(V_S)}(V_x)$.\n\\end{lemma}\n\\begin{proof}\nThis is a consequence of the Tate-Sen method. Using \\cite[Th\\'eor\\`{e}me 4.2.9]{BC07}, we first choose a finite extension $L$ over $K$ and some positive integer $m$ so that $\\D_{\\rig,L}^{\\dag,r_m}(V_S)$ is a free $\\mathbf{B}_{\\rig,L}^{\\dag,r_m}\\widehat{\\otimes}_{\\Q}S$-module with a basis $\\mathrm{e}=(e_1,\\dots,e_d)$. Let $\\gamma$ be a topological generator of $\\Gamma_{L_m}$ and write $\\gamma(\\mathrm{e})=\\mathrm{e}G$ for some $G\\in\\mathrm{GL}_d(\\mathbf{B}_{\\rig,L}^{\\dag,r_m}\\widehat{\\otimes}_{\\Q}S)$. Recall that by the classical work of Tate \\cite{T}, we know that there exists a constant $c>0$ such that $v_p((\\gamma-1)x)\\leq v_p(x)+c$ for any nonzero $x\\in (1-R_{L,m})\\widehat{L}_\\infty$, where $R_{L,m}:\\widehat{L}_\\infty\\ra L_m$ is Tate's normalized trace map. Since the localization map $\\iota_m:\\mathbf{B}_{\\rig,L}^{\\dag,r_m}\\ra L_m[[t]]$ is continuous, by enlarging $m$, we may suppose that the constant term of $\\iota_m(G)-1$ has norm less than $p^{-c}$. We fix some $m_0\\in\\mathbb{N}$ such that $K_\\infty\\cap L_m=K_{m_0}\\cap L_m$.\n\nNow let $a\\in\\D_\\dif^{+,K_n}(V_x)^\\Gamma$ for some $x\\in M(S)$ and $n\\geq m$. We will show that $a\\in\\D_\\dif^{+,K_{m_0}}(V_x)^\\Gamma$. Since $\\iota_m(\\mathrm{e})$ forms a basis of $\\D^{+,L_n}_{\\dif}(V_S)$, we may write $a=\\iota_m(\\mathrm{e})(x)A$ for some\n\\[\nA\\in \\mathrm{M}_{d\\times1}((L_n\\otimes_{\\Q}k(x))[[t]]).\n\\]\nThe $\\Gamma$-invariance of $a$ implies $\\iota_m(G(x))\\gamma(A)=A$; thus $(1-R_{L,m})\\iota_m(G(x))\\gamma(A)=(1-R_{L,m})A$. Note that $\\iota_m(G(x))$ has entries in $(L_m\\otimes_{\\Q}k(x))[[t]]$. It follows that $(\\iota_m(G(x))-1)B=(1-\\gamma^{-1})B$ where $B=(1-R_{L,m})A$. Let $B_0$ be the constant term of $B$.
If $B_0\\neq0$, then the constant term of $(\\iota_m(G(x))-1)B$ has valuation $\\geq v(\\iota_m(G(x))-1)+v(B_0)>v(B_0)+c$ whereas the constant term $(1-\\gamma^{-1})B_0$ of $(1-\\gamma^{-1})B$ has valuation $\\leq v(B_0)+c$; this yields a contradiction. Hence $B_0=0$. Iterating this argument, we get $B=0$. Hence $a\\in \\D_\\dif^{+,L_m}(V_x)\\cap\\D_\\dif^{+,K_n}(V_x)\\subset\\D_\\dif^{+,K_{m_0}}(V_x)$. Thus we may choose $m(V_S)=m_0$.\n\\end{proof}\n\n\\emph{Proof of Theorem 0.2}.\nWe retain the notations as above. By passing to irreducible components, we may suppose that $X$ is irreducible. We then apply Lemma \\ref{lem:de Rham-part} to $V_X$. Note that $V_{X'}$ is again a finite slope family over $X'$ with the Zariski dense set of crystalline points $\\pi^{-1}(Z)$. We may suppose that $X'=X$. Let $\\lambda:\\D^\\dag_{\\rig}(V_X)=D^{\\vee}_X\\ra (D_X^{(m)})^{\\vee}$ be the dual of $D_X^{(m)}\\ra D_X$, and let $P_X=\\ker(\\lambda)$. For any $x\\in X$, since $D^{(m)}_x\\ra D_x$ is injective, we get that the image of $\\lambda_x$ is a $\\m$-submodule of rank $d-c_1-\\cdots-c_m$. Thus by Lemma \\ref{lem:ker-birational}, after adapting $X$, we may assume that $P_X$ is a family of $\\m$-modules of rank $c_1+\\cdots+c_m$, and there exists a Zariski open dense subset $U\\subset X$ such that $P_x=\\ker(\\lambda_x)$ for any $x\\in U$. Note that $\\ker(\\lambda_z)=\\mathrm{Fil}_{m,z}$ for any $z\\in Z$. Thus by replacing $Z$ with $Z\\cap U$, we may assume that $P_z=\\mathrm{Fil}_{m,z}$ for any $z\\in Z$. We claim that $P_{X}$ is de Rham with weights in $[-b,0]$. To do so, we set $Y$ to be the set of $x\\in X$ for which $P_x$ is de Rham with weights in $[-b,0]$. By the previous lemma, we see that for any affinoid subdomain $M(S)\\subset X$, there exists an integer $m(V_S)$ such that if $P_x$ is de Rham for some $x\\in M(S)$, then $h_{dR}(P_x)\\leq m(V_S)$. We then deduce from Proposition \\ref{prop:dR-family} that $Y\\cap M(S)$ is a Zariski closed subset of $M(S)$.
Hence $Y$ is a Zariski closed subset of $X$. On the other hand, since $P_z$ is de Rham with weights in $[-b,0]$, we get $Z\\subset Y$; thus $Y=X$ by the Zariski density of $Z$. Furthermore, using Proposition \\ref{prop:dR-family} and the previous lemma again, we deduce that $P_X$ is de Rham with weights in $[-b,0]$. As a consequence, we obtain a locally free coherent $\\OO_X\\otimes_{\\Q}K$-module $D_{\\mathrm{dR}}(P_X)$ of rank $c_1+\\cdots+c_m$.\n\n\nThe next step is to show that for any $x\\in X$, $D_{\\mathrm{dR}}(P_x)$ is contained in $D^+_{\\mathrm{st}}(V_x)\\otimes_{K_0}K$. Let $Y$ be the set of $x\\in X$ satisfying this condition. We first show that $Y$ is a Zariski closed subset of $X$. For this, it suffices to show that $Y\\cap M(S)$ is a Zariski closed subset of $M(S)$ for any affinoid subdomain $M(S)$ of $X$. To show this, we employ the $p$-adic local monodromy for families of de Rham $\\m$-modules. As in \\S5, let $E$ be the product of the complete residue fields of the Shilov boundary of $M(S)$. Since $P_S$ is a family of de Rham $\\m$-modules with weights in $[-b,0]$, by Lemma \\ref{lem:monodromy}, there exists a finite extension $L$ of $K$ such that for sufficiently large $s$ and $n\\geq n(s)$, we have\n\\[\nL\\otimes_{L_0}\\iota_n(M)=(\\D_\\dif(P_E\\otimes_{\\mathbf{B}_{\\rig,K}^\\dag\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\rig,L}^\\dag\\widehat{\\otimes}_{\\Q}E))^{I_L}\n\\]\nfor\n$M=(N_s(P_E)\\otimes_{\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\log,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E)^{I_L}$; furthermore, $N_s(P_E)\\subset P_E^{s}$. 
Thus\n\\[\n\\iota_n(M)\\subset \\iota_n(P_E\\otimes_{\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\log,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E)\\subset\n\\iota_n(\\D_\\rig^\\dag(V_E)\\otimes_{\\mathbf{B}_{\\rig,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E}\n\\mathbf{B}_{\\log,K}^{\\dag,s}\\widehat{\\otimes}_{\\Q}E)\\subset\\mathbf{B}^+_{\\mathrm{st}}\\widehat{\\otimes}_{\\Q}V_E.\n\\]\nNote that $D_{\\mathrm{dR}}(P_E)\\subset \\D_\\dif^+(P_E)\\subset\\D_\\dif^+(V_E)\\subset\\mathbf{B}_{\\mathrm{dR}}^+\\widehat{\\otimes}_{\\Q}V_E$. This yields\n\\[\nD_{\\mathrm{dR}}(P_E)\\subset (\\mathbf{B}^+_{\\mathrm{st}}\\widehat{\\otimes}_{\\Q}V_E)\\otimes_{L_0}L\\cap \\mathbf{B}_{\\mathrm{dR}}^+\\widehat{\\otimes}_{\\Q}V_E=\n(\\mathbf{B}^+_{\\mathrm{st}}\\widehat{\\otimes}_{\\Q}V_E)\\otimes_{L_0}L.\n\\]\nWe therefore deduce from \\cite[Lemme 6.3.1]{BC07} that\n\\[\nD_{\\mathrm{dR}}(P_S)\\subset (\\mathbf{B}^+_{\\mathrm{st}}\\widehat{\\otimes}_{\\Q}V_E)\\otimes_{L_0}L\\cap\n\\mathbf{B}_{\\mathrm{dR}}^+\\widehat{\\otimes}_{\\Q}V_S=(\\mathbf{B}^+_{\\mathrm{st}}\\widehat{\\otimes}_{\\Q}V_S)\\otimes_{L_0}L.\n\\]\nIt follows that $Y\\cap M(S)$, which is the set of $x\\in M(S)$ such that $D_{\\mathrm{dR}}(P_x)\\subset (\\mathbf{B}^+_{\\mathrm{st}}\\otimes_{\\Q}V_x)\\otimes_{K_0}K$, is Zariski closed in $M(S)$.\n\nTo conclude the theorem, it then suffices to show that $D_{\\mathrm{dR}}(P_x)\\subset (D^+_{\\mathrm{st}}(V_x)\\otimes_{K_0}K)^{Q(\\varphi)(x)=0}$ for any $x\\in X$; here we $K$-linearly extend the $\\varphi^f$-action to $D^+_{\\mathrm{st}}(V_x)\\otimes_{K_0}K$. Note that $\\mathrm{Fil}_{m,z}$ is semi-stable with $D_{\\mathrm{st}}(\\mathrm{Fil}_{m,z})=\\mathcal{F}_{m,z}$. 
This implies that $Q(\\varphi)(D_{\\mathrm{dR}}(P_X))$ vanishes at $z$, yielding that $Q(\\varphi)(D_{\\mathrm{dR}}(P_X))=0$ by the Zariski density of $Z$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzafp b/data_all_eng_slimpj/shuffled/split2/finalzafp new file mode 100644 index 0000000000000000000000000000000000000000..9d6bbb9f7c8dfccd7c87aba6433b9b969cb26134 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzafp @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\\label{sec:intro}\n\t\nIn the classic dichotomy between model-based and data-based approaches to solving complex tasks, Convolutional Neural Networks (CNN) correspond to a particularly efficient tradeoff. CNNs capture key geometric prior information for spatial\/temporal tasks through the notion of local translation invariance. Yet, they combine this prior with high flexibility, that allows them to be scaled to millions of parameters and leverage large datasets with gradient-descent learning strategies, typically operating in the `interpolating' regime, i.e. where the training data is fit perfectly. \n\t\nSuch regime challenges the classic notion of model selection in statistics, whereby increasing the number of parameters trades off bias by variance \\citep{zhang2016understanding}. On the one hand, several recent works studying the role of optimization in this tradeoff argue that model size is not always a good predictor for overfitting \\citep{neyshabur2018towards, zhang2016understanding, neal2018modern, geiger2019scaling, belkin2018reconciling}, and consider instead other complexity measures of the function class, which favor CNNs due to their smaller complexity \\citep{du2018many}. On the other hand, authors have also considered geometric aspects of the energy landscape, such as width of basins \\citep{keskar2016large}, as a proxy for generalisation. 
However, these properties of the landscape do not appear to account for the benefits associated with specific architectures. Additionally, considering the implicit bias due to the optimization scheme \\citep{soudry2018implicit, gunasekar2018characterizing} is not enough to justify the performance gains without considering the architectural bias. Despite the important insights on the role of over-parametrization in optimization \\citep{du2017gradient, arora2018optimization,venturi2018neural}, the architectural bias prevails as a major factor to explain good generalization in visual classification tasks -- over-parametrized CNN models generalize well, but large neural networks without any convolutional constraints do not. \n\nIn this work, we attempt to further disentangle the bias stemming from the architecture and the optimization scheme by showing that the CNN prior plays a favorable role mostly at the \\emph{beginning} of optimization. Geometrically, the CNN prior defines a low-dimensional subspace within the space of parameters of generic Fully-Connected Networks (FCN) (this subspace is linear since the CNN constraints of weight sharing and locality are linear, see Figure~\\ref{fig:sketch} for a sketch of the core idea). Even though the optimization scheme is able to minimize the training loss with or without the constraints (for sufficiently over-parametrized models \\citep{geiger18, zhang2016understanding}), the CNN subspace provides a ``better route'' that navigates the loss landscape to solutions with better generalization performance.\n\nYet, surprisingly, we observe that leaving this subspace at an appropriate time can result in a FCN with an equivalent or even better generalization than a CNN. Our numerical experiments suggest that the CNN subspace \\textit{as well as} its vicinity are good candidates for high-performance solutions. 
Furthermore, we observe a threshold distance from the CNN space beyond which the performance drops back down to the vanilla FCN accuracy level. Our results offer a new perspective on the success of the convolutional architecture: within FCN loss landscapes there exist rare basins associated with very good generalization, characterised not so much by their width as by their distance to the CNN subspace. \nThese can be accessed thanks to the CNN prior, and are otherwise missed in the usual training of FCNs. \n\n\nThe rest of the paper is structured as follows. Section~\\ref{sec:related} discusses prior work in relating architecture and optimization biases. Section~\\ref{sec:model} presents our CNN to FCN embedding algorithm and training procedure, and Section~\\ref{sec:experiments} describes and analyses the experiments performed on the CIFAR-10 dataset \\citep{cifar-10}. We conclude in Section~\\ref{sec:discussion} by describing theoretical setups compatible with our observations and consequences for practical applications. \n\n\\begin{SCfigure}\n\\sidebysidecaption{0.5\\linewidth}{0.5\\linewidth}\n\n{\\includegraphics[width=\\linewidth]{figures\/fig1.pdf}}\n\n{\\caption{\\textbf{White background:} ambient, $M$-dimensional, fully-connected space. \\textbf{Yellow subspace:} linear, $m$-dimensional convolutional subspace. We have $m \\ll M$. \\textbf{Red manifold:} the (near-)zero-loss, approximate solution set for a given training dataset. Note that it is a nontrivial manifold due to continuous symmetries (also, see the related work section on mode connectivity) and it intersects with the CNN subspace. \\textbf{Blue path:} a CNN initialized and trained with the convolutional constraints. \\textbf{Purple path:} an FCN model initialized and trained without the constraints.
\\textbf{Green paths:} Snapshots taken along the CNN training that are lifted to the ambient FCN space, and trained in the FCN space without the constraints.}}\n \\label{fig:sketch}\n\\end{SCfigure}\n\\section{CNN to FCN Embedding}\n\\label{sec:model}\n\nIn both FCNs and CNNs, each feature of a layer is calculated by applying a non-linearity to a weighted sum over the features of the previous layer (or over all the pixels of the image, for the first layer). CNNs are a particular type of FCNs, which make use of two key ingredients to reduce their number of redundant parameters: locality and weight sharing.\n \n\\textit{Locality: } In FCNs, the sum is taken over all the features of the previous layer. In locally connected networks (LCNs), locality is imposed by restricting the sum to a small receptive field (a box of adjacent features of the previous layer). The set of weights of this restricted sum is called a filter. For a given receptive field, one may create multiple features (or channels) by using several different filters. This procedure makes use of the spatial structure of the data and reduces the number of fitting parameters.\n \n\\textit{Weight sharing: } CNNs are a particular type of LCNs where all the filters of a given channel use the same set of weights. This procedure makes use of the somewhat universal properties of feature extracting filters such as edge detectors and reduces even more drastically the number of fitting parameters.\n \nWhen mapping a CNN to its equivalent FCN (eFCN), we obtain very sparse (due to locality) and redundant (due to weight sharing) weight matrices (see Sec.~A of the Supplemental Material for some intuition on the mapping). This typically results in a large memory overhead as the eFCN of a simple CNN can take several orders of magnitude more space in the memory. 
Therefore, we present the core ideas on a simple 3-layer CNN on CIFAR-10, and show similar results for AlexNet \\citep{krizhevsky2012imagenet} on CIFAR-100 in Sec.~B of the Supplemental Material.\n \nIn the mapping\\footnote{The source code may be found at: \\href{https:\/\/github.com\/sdascoli\/anarchitectural-search}{https:\/\/github.com\/sdascoli\/anarchitectural-search}.},\n all layers apart from the convolutional layers (ReLU, Dropout, MaxPool and fully-connected) are left unchanged except for proper reshaping. Each convolutional layer is mapped to a fully-connected layer. \n \n\\textit{As a result, for a given CNN, we obtain its eFCN counterpart with an end-to-end fully-connected architecture which is functionally identical to the original CNN.}\n\n\\section{Discussion and Conclusion}\n\\label{sec:discussion}\n\nIn this work, we examined the inductive bias of CNNs, and challenged the accepted view that FCNs are unable to generalize as well as CNNs on visual tasks. Specifically, we showed that the CNN prior is mainly useful during the early stages of training, to prevent the unconstrained FCN from falling prey to spurious solutions with poor generalization too early.\n\nOur experimental results show that there exists a vicinity of the CNN subspace with high generalization properties, and one may even enhance the performance of CNNs by exploring it, if one relaxes the CNN constraints at an appropriate time during training. The extra degrees of freedom are used to perform complementary tasks which alone are unhelpful.
This offers interesting theoretical perspectives, in relation to other high-dimensional estimation problems, such as in spiked tensor models \\citep{anandkumar2016homotopy}, where a smart initialization, containing prior information on the problem, is used to provide an initial condition that bypasses the regions where the estimation landscape is ``rough'' and full of spurious minima.\n\n\nOn the practical front, despite the performance gains obtained, our algorithm remains highly impractical due to the large number of degrees of freedom required by our eFCNs. However, more efficient strategies that would involve a less drastic relaxation of the CNN constraints (e.g., relaxing the weight sharing but keeping the locality constraint, as in locally-connected networks \\citep{coates2011selecting}) could be of potential interest to practitioners. \n\n\\newpage\n{\\bf Acknowledgments}\n\n\t\nWe would like to thank Riza Alp Guler and Ilija Radosavovic for helpful discussions. We acknowledge funding from the Simons Foundation (\\#454935, Giulio Biroli). JB acknowledges the partial support by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF 1845360, and Samsung Electronics.\n\n\n\\section{Visualizing the embedding}\n\\label{app:embedding}\n\nIn Fig.~\\ref{fig:visualization}, we provide an illustration of the mapping from CNN to eFCN. Denoting by $k, s, p$ the filter size, stride and padding of the convolution, we have the following:\n\\begin{align*}\n d_{in} &= 4\\\\\n (k, s, p) &= (3, 1, 0)\\\\\n d_{out} &= \\frac{d_{in} + 2p - k}{s} + 1 = 2\n\\end{align*}\nThe eFCN layer is of size $(c_{in}\\times d_{in}\\times d_{in}, c_{out}\\times d_{out}\\times d_{out}) = (16,4)$ since $c_{in} = c_{out} = 1$ here.
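To make the layer construction concrete, here is a minimal NumPy sketch (ours, not taken from the paper's released code; the function names are illustrative) that builds the dense eFCN weight matrix for the single-channel example above ($d_{in}=4$, $k=3$, $s=1$, $p=0$, hence $d_{out}=2$) and checks that it reproduces the direct convolution:

```python
import numpy as np

def conv_output_size(d_in, k, s=1, p=0):
    # Standard convolution arithmetic: d_out = (d_in + 2p - k)/s + 1.
    return (d_in + 2 * p - k) // s + 1

def efcn_weight_matrix(kernel, d_in, s=1):
    # Embed a single-channel conv filter into a dense eFCN weight matrix.
    # Rows index flattened input pixels, columns index flattened outputs;
    # every entry outside the local receptive fields remains zero.
    k = kernel.shape[0]
    d_out = conv_output_size(d_in, k, s)
    W = np.zeros((d_in * d_in, d_out * d_out))
    for oy in range(d_out):
        for ox in range(d_out):
            col = oy * d_out + ox
            for ky in range(k):
                for kx in range(k):
                    row = (oy * s + ky) * d_in + (ox * s + kx)
                    W[row, col] = kernel[ky, kx]
    return W

# Check against a direct (loop-based) valid convolution on a random input.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
kernel = rng.standard_normal((3, 3))
W = efcn_weight_matrix(kernel, d_in=4)          # shape (16, 4)
dense_out = x.reshape(-1) @ W
conv_out = np.array([[(x[i:i + 3, j:j + 3] * kernel).sum() for j in range(2)]
                     for i in range(2)])
assert np.allclose(dense_out.reshape(2, 2), conv_out)
```

Repeating the same filter weights across all columns is what produces the sparse, self-repeating block structure visible in the heatmaps below.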
In Fig.~\\ref{fig:sparsity}, we show the typical structure of the eFCN weight matrices observed in practice.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{figures\/sparsity\/mapping.pdf}\n \\caption{eFCN weight matrix (\\textbf{bottom}) obtained when acting on an input of size (4,4) (\\textbf{top left}) with a filter of size (3,3) (\\textbf{top right}). The colors of the eFCN weight matrix show where each weight stems from in the filter (the off-local blocks, in yellow, are set to zero at initialization).}\n \\label{fig:visualization}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centering\n\t\\includegraphics[width=\\textwidth]{figures\/sparsity\/sparsity_layer_0_t_0.pdf}\n\t\\includegraphics[width=\\textwidth]{figures\/sparsity\/sparsity_layer_0_t_100.pdf}\n\t\\caption{\\textbf{Top}: Heatmap of a block of weights corresponding to the first input channel and the first output channel of the first layer of the eFCN just after its initialization from the converged VanillaCNN. The colorscale indicates the natural logarithm of the absolute value of the weights. The highly sparse and self-repeating structure of the weight matrix is due to the locality and weight sharing constraints. \\textbf{Bottom}: Same after training the eFCN for 100 epochs. The off-local blocks appear in blue: their weights are several orders of magnitude smaller in absolute value than those of the local blocks, in yellow. Note that due to the padding many weights stay at zero even after relaxing the constraints. When unflattened, the first row of this heatmap gives rise to the images shown in Fig.~\\ref{fig:horse_}.}\n\t\\label{fig:sparsity}\n\\end{figure}\n\n\\section{Results with AlexNet on CIFAR-100}\n\\label{app:cifar100}\n\nIn this section, we show that the ideas we presented in the main text hold for various classes of data, architecture and optimizer.
Namely, we show that our results hold when switching from SGD to Adam on CIFAR-10, and for AlexNet~\\citep{krizhevsky2012imagenet} on the CIFAR-100 dataset. Each subsection contains figures which are counterparts of those in the main text: performance and training dynamics of the eFCNs in Fig.~\\ref{fig:performance_}, deviation from CNN subspace in Fig.~\\ref{fig:dev_appx}, role of off-local blocks in learning in Fig.~\\ref{fig:horse_}.\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{subfigure}{0.9\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/appendix_te_performance.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.9\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar100_alexnet\/appendix_te_performance.pdf}\n\t\\end{subfigure}\n\t\\caption{This figure sums up in a compact way the generalization dynamics of the eFCNs. The red curve represents the test accuracy of the model versus its training time in epochs. Above each point $t_w$ of the training, we depict as crosses the test accuracy history of the eFCN stemmed at relax time $t_w$, with colors indicating the training time of the eFCN after embedding. For comparison, the best test accuracy reached by a standard FCN of the same size is depicted as a brown horizontal dashed line. \\textbf{Left}: VanillaCNN on CIFAR-10, with Adam optimizer. \\textbf{Right}: AlexNet on CIFAR-100, with SGD optimizer.
We note that results are qualitatively similar: the eFCNs always improve after initialization, outperform the standard FCN, and we again observe that for some relax times the eFCNs even exceed the best test accuracy reached by the CNN.}\n\t\\label{fig:performance_}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar100_alexnet\/fig3b.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar100_alexnet\/fig3c.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar100_alexnet\/fig3d.pdf}\n\t\\end{subfigure}\n\t\\caption{\\textbf{Left panel:} relax time $t_w$ of the eFCN vs. $\\delta$, the measure of deviation from the CNN subspace through the locality constraint, at the final point of eFCN training. \\textbf{Middle panel:} $\\delta$ vs. the initial loss value. \\textbf{Right panel:} $\\delta$ vs. final test accuracy of eFCN models. For reference, the blue point in the \\textbf{middle} and \\textbf{right} panels indicates the deviation measure for a standard FCN, where $\\delta \\sim 97\\%$.}\n\t\\label{fig:dev_appx}\n\\end{figure}\n\n\\begin{figure}[h!]\n\t\\centering\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar100_alexnet\/fig4a.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.45\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar100_alexnet\/fig4c.pdf}\n\t\\end{subfigure}\n\t\\caption{\\textbf{Left}: Visualization of an eFCN ``filter'' from the first layer just after embedding (left column), after 11 epochs of training (middle column), and after 78 epochs of training (right column); where the eFCN is initialized at relax times $t_w=0$ (top row), $t_w=13$ (middle row), and $t_w=115$ (bottom row). The colors indicate the natural logarithm of the absolute value of the weights.
\\textbf{Right}: Contributions to the test accuracy of the local blocks (off-local blocks masked out) and off-local blocks (local blocks masked out).}\n \\label{fig:horse_}\n\\end{figure}\n\n\\section{Interpolating between CNNs and eFCNs}\n\\label{app:interp}\n\n Another way to understand the dynamics of the eFCNs is to examine the paths that connect them to the CNN they stemmed from in the FCN weight space. Interpolating in the weight space has received some attention in recent literature, in papers such as \\citep{draxler2018essentially, garipov2018loss}, where it has been shown that, contrary to previous beliefs, the bottom of the landscapes of deep neural networks resembles a flat, connected level set since one can always find a path of low energy connecting minima.\n \n Here we use two interpolation methods in weight space. The first method, labeled ``linear'', consists in sampling $n$ equally spaced points along the linear path connecting the weights. Of course, the interpolated points generally have higher training loss than the endpoints.\n \n The second method, labeled ``string'', consists in starting from the linear interpolation path, and letting the interpolated points fall down the landscape following gradient descent, while ensuring that they stay close enough together by adding an elastic term in the loss:\n\\begin{equation}\n \\mathcal{L}_{elastic} = \\frac{1}{2} k \\sum_{i=1}^{n-1} (\\mathbf{x_{i+1}} - \\mathbf{x_i})^2\n \\end{equation}\n By adjusting the stiffness constant $k$ we can control how straight the string is: at high $k$ we recover the linear interpolation, whereas at low $k$ the points decouple and reach the bottom of the landscape, but are far apart and don't give us an actual path.\n Note that this method is a simpler form of the one used in \\cite{draxler2018essentially}, where we don't use the ``nudging'' trick.
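As a minimal illustration of the elastic term (a sketch under our own conventions, not the authors' code), the following computes $\mathcal{L}_{elastic}$ and its gradient for a chain of interpolated points; in practice this gradient would simply be added to each point's training-loss gradient during descent:

```python
import numpy as np

def elastic_loss(points, k):
    # L_elastic = (k/2) * sum_i ||x_{i+1} - x_i||^2 over the chain.
    diffs = np.diff(points, axis=0)
    return 0.5 * k * np.sum(diffs ** 2)

def elastic_grad(points, k):
    # Gradient of the elastic term: with d_i = x_{i+1} - x_i, each point
    # feels grad_j = k * (d_{j-1} - d_j); endpoints feel a single spring.
    grad = np.zeros_like(points)
    diffs = np.diff(points, axis=0)
    grad[:-1] -= k * diffs
    grad[1:] += k * diffs
    return grad

# A straight, equally spaced chain exerts no elastic force on interior points.
chain = np.linspace(0.0, 1.0, 5)[:, None] * np.ones(3)
g = elastic_grad(chain, k=2.0)
assert np.allclose(g[1:-1], 0.0)
```

At large $k$ this force dominates and keeps the chain straight (recovering linear interpolation), while at small $k$ the training-loss gradients dominate and the points decouple, as described above.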
\n \n For comparison, we also show the performance obtained when interpolating directly in output space (as done in ensembling methods).\n\n Results are shown in Fig.~\\ref{fig:interp_}, with the $x$-axis representing the interpolation parameter $\\alpha \\in [0,1]$. We see that for both the linear and string interpolations, the training loss profile displays a barrier, except at late $t_w$ where the eFCN has not escaped far from the CNN subspace. Although the string method fails to find a path without a barrier, this is not sufficient to conclude that no such paths exist.\n \n However, the behavior of the test accuracy is much more surprising. In all cases, despite the increase in training loss, the interpolated paths reach higher test accuracies than the endpoints, even at early $t_w$ when the eFCN and the CNN are quite far from each other. This confirms that there is a basin of high generalization around the CNN subspace, and that optimal performance can actually be found somewhere in between the solution found by the CNN and the solution found by the eFCN. This offers yet another procedure to improve performance in practice.
However, in all cases we note that the gain in accuracy is lower than the gain obtained by interpolating in output space.\n \n \\begin{figure}[h!]\n\t\t\\centering\n\t\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/interps\/tw_000_tcnn100_tfc100.png}\n\t\t\t\\caption{\\centering {$t_w = 0$}}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/interps\/tw_005_tcnn100_tfc100.png}\n\t\t\t\\caption{\\centering {$t_w = 5$}}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/interps\/tw_018_tcnn100_tfc100.png}\n\t\t\t\\caption{\\centering {$t_w = 18$}}\n\t\t\\end{subfigure}\n\t\t\\begin{subfigure}[b]{0.49\\textwidth}\n\t\t\t\\includegraphics[width=\\textwidth]{figures\/interps\/tw_061_tcnn100_tfc100.png}\n\t\t\t\\caption{\\centering {$t_w = 61$}}\n\t\t\\end{subfigure}\n\t\t\\caption{Interpolation between the solution reached by the CNN after 100 epochs (interpolation parameter $\\alpha = 0$) and the solution found by the eFCN after 100 epochs (interpolation parameter $\\alpha = 1$), for four different relax times $t_w$ indicated below the subfigures. In each subfigure, the \\textbf{left} panel shows training loss, and the \\textbf{right} panel shows test accuracy. The orange line corresponds to linear interpolation, the blue line corresponds to string method interpolation, and the green line corresponds to interpolation in output space.}\n\t\t\\label{fig:interp_}\n\t\\end{figure}\n\n\\section{Related Work}\n\\label{sec:related}\n\n\n\nThe relationship between CNNs and FCNs is an instance of trading off prior information against expressivity within neural networks. An abundant literature has explored the relationship between different neural architectures, for different purposes.
One can roughly classify these works according to whether they attempt to map a large model into a smaller one, or vice versa. \n\nIn the first category, one of the earliest efforts to introduce structure within FCNs with the goal of improving generalization was Nowlan and Hinton's soft weight sharing networks \\citep{nowlan1992simplifying}, in which the weights are regularized via a Mixture of Gaussians. Another highly popular line of work attempts to \\emph{distill} the ``knowledge'' of a large model (or an ensemble of models) into a smaller one \\citep{bucilu2006model, hinton2015distilling, ba2014deep}, with the goal of improving both computational efficiency and generalization performance. Network pruning \\citep{han2015deep} and the recent ``Lottery Ticket Hypothesis'' \\citep{frankle2018lottery} are other remarkable instances of the benefits of model reduction. \n\nIn the second category, which is more directly related to our work, authors have attempted to build larger models by embedding small architectures into larger ones, such as the Net2Net model \\citep{chen2015net2net} or more evolved follow-ups \\citep{saxena2016convolutional}. In these works, however, the motivation is to accelerate learning by some form of knowledge transfer between the small model and the large one, whereas our motivation is to understand the specific role of architectural bias in generalization. \n\nIn the infinite-width context, \\citep{novak2018bayesian} study the role of translation equivariance of CNNs compared to FCNs. They find that in this limit, weight sharing does not play any role in the Bayesian treatment of CNNs, despite providing significant improvement in the finite-channel setup.\n\nThe links between generalization error and the geometry and topology of the optimization landscape have also been extensively studied in recent times.
\\cite{du2018many} compare generalization bounds between CNNs and FCNs, establishing a sample complexity advantage in the case of linear activations. \\citep{long2019size,lee2018learning} obtain specific generalization bounds for CNN architectures. \n\\cite{chaudhari2016entropy} proposed a different optimization objective, whereby a bilateral filtering of the landscape favors dynamics into wider valleys. \\cite{keskar2016large} explored the link between the sharpness of local minima and generalization through Hessian analysis \\citep{sagun2017empirical}, and \\cite{wu2017towards} argued in terms of the volume of basins of attraction. The characterization of the loss landscape along paths connecting different models has been studied recently, e.g. in \\cite{freeman2016topology}, \\cite{garipov2018loss}, and \\cite{draxler2018essentially}. \nThe existence of rare basins leading to better generalization was found and highlighted in simple models in \\cite{zec2,zec1}.\nThe role of the CNN prior within the ambient FCN loss landscape and its implications for generalization properties were not considered in any of these works. In the following we address this point by building on these previous investigations of the landscape properties.\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nWe are given input-label pairs for a supervised classification task, $(x,y)$, with $x \\in \\mathbb{R}^d $ and $y$ the index of the correct class for a given image $x$. The network, parametrized by $\\theta$, outputs $\\hat y = f_x(\\theta)$. To distinguish between different architectures we denote the CNN weights by $\\theta^{CNN}\\in \\mathbb{R}^m$ and the eFCN weights by $\\theta^{eFCN}\\in \\mathbb{R}^M$. Let us denote the embedding function described in Sec.~\\ref{sec:model} by $\\Phi: \\mathbb{R}^m \\mapsto \\mathbb{R}^M$, where $m \\ll M$, and with a slight abuse of notation use $f(\\cdot)$ for both the CNN and the eFCN.
Dropping the explicit input dependency for simplicity, we have: \n\\[\nf(\\theta^{CNN}) = f(\\Phi(\\theta^{CNN})) = f(\\theta^{eFCN}).\n\\]\n\nFor the experiments, we prepare the CIFAR-10 dataset for training without data augmentation. The optimizer is set to stochastic gradient descent with a constant learning rate of 0.1 and a minibatch size of 250. We turn off momentum and weight decay to focus purely on the stochastic gradient dynamics, and we do not adjust the learning rate throughout the training process. In the following, we focus on a convolutional architecture with 3 layers, 64 channels at each layer followed by ReLU and MaxPooling operators, and a single fully connected layer that outputs prediction probabilities. In our experience, this VanillaCNN strikes a good balance between simplicity and performance, in that its equivalent FCN version does not suffer from memory issues yet it significantly outperforms any FCN model trained from scratch. We study the following protocol: \n\n\\begin{enumerate}\n \\item Initialize the VanillaCNN at $\\theta^{CNN}_{init}$ and train for 150 epochs. At the end of training $\\theta^{CNN}_{final}$ reaches $\\sim 72\\%$ test accuracy.\n \\item Along the way, save $k$ snapshots of the weights at logarithmically spaced epochs: $\\{t_0=0, t_1, \\dots, t_{k-2}, t_{k-1}=150\\}$. This provides $k$ CNN points denoted by $\\{\\theta^{CNN}_{t_0}=\\theta^{CNN}_{init}, \\theta^{CNN}_{t_1}, \\dots, \\theta^{CNN}_{t_{k-1}}\\}$.\n \\item Lift each one to its eFCN: $\\{\\Phi(\\theta^{CNN}_{t_0}), \\dots, \\Phi(\\theta^{CNN}_{t_{k-1}})\\}=\\{\\theta^{eFCN}_{t_0}, \\dots, \\theta^{eFCN}_{t_{k-1}}\\}$ (so that only $m$ among a total of $M$ parameters are non-zero).\n \\item Train these $k$ eFCNs in the FCN space for 100 epochs under the same conditions, except for a smaller learning rate of 0.01. We obtain $k$ solutions $\\{\\theta^{eFCN}_{t_0, final}, \\dots, \\theta^{eFCN}_{t_{k-1}, final} \\}$.
\n \\item For comparison, train a standard FCN (with the same architecture as the eFCNs but with the default PyTorch initialization) for 100 epochs under the same conditions as the eFCNs, and denote the resulting weights by $\\theta^{FCN}_{final}$. The latter reaches $\\sim 55\\%$ test accuracy.\n\\end{enumerate}\n\nThis process gives us one CNN solution, one FCN solution, and $k$ eFCN solutions that are labeled as \n\n\\begin{equation}\n\\theta^{CNN}_{final}, \\theta^{FCN}_{final}, \\text{ and } \\{\\theta^{eFCN}_{t_0, final}, \\dots, \\theta^{eFCN}_{t_{k-1}, final}\\},\n\\end{equation}\n\nwhich we analyze in the following subsections. Note that due to the difference in size between the CNN and the eFCNs, it is unclear what learning rate would give a fair comparison. One solution, shown in Sec.~B of the Supplemental Material, is to use an adaptive learning rate optimizer such as Adam.\n\n\n\\subsection{Performance and training dynamics of eFCNs}\n\\label{sec:performance}\n \nOur first aim is to characterize the training dynamics of eFCNs and study how their training evolution depends on their \\textit{relax time} $t_w \\in \\{t_0=0, t_1, \\dots, t_{k-2}, t_{k-1}=150\\}$ (in epochs). When the architectural constraint is relaxed, the loss decreases monotonically to zero (see the left panel of Fig.~\\ref{fig:dynamics}). The initial losses are smaller for larger $t_w$s, as expected, since those $t_w$s correspond to CNNs trained for longer. In the right panel of Fig.~\\ref{fig:dynamics}, we show a more surprising result: the test accuracy increases monotonically in time for all $t_w$s, thus showing that \\textit{relaxing the constraints does not lead to overfitting or catastrophic forgetting.} Hence, from the point of view of the FCN space, it is not as if the CNN dynamics took place on an unstable region from which only the constraints of locality and weight sharing prevented it from falling off.
It is quite the contrary: the CNN dynamics takes place in a basin, and when the constraints are relaxed, the system keeps going down on the training surface and up in test accuracy, as opposed to falling back to the standard FCN regime. \n\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig2.pdf}\n \\caption{Training loss (\\textbf{left}) and test accuracy (\\textbf{right}) on CIFAR-10 vs. training time in logarithmic scale including the initial point. Different models are color coded as follows: the VanillaCNN is shown in black, the standard FCN in red, and the eFCNs with their relax times $t_w$ are indicated by the gradient ranging from purple to light green.}\n \\label{fig:dynamics}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:landscape} (left) we compare the final test accuracies reached by the eFCNs with those of the CNN and the standard FCN. We find two main results. First, the accuracy of the eFCN for $t_w=0$ is at approximately $62.5\\%$, well above the standard FCN result of $57.5\\%$. This shows that imposing an \\textit{untrained} CNN prior is already enough to find a solution with much better performance than a standard FCN. Hence the CNN prior brings us to a good region of the landscape \\textit{to start with}. The second result, perhaps even more remarkable, is that at intermediate relax times ($t_w \\sim 20$ epochs), the eFCN reaches---and exceeds---the final test accuracy reached by the CNN it stemmed from. This supports the idea that the constraints are mostly helpful for navigating the landscape \\textit{during the early stages of optimization}.
At late relax times, the eFCN is initialized close to the bottom of the landscape and has little room to move, hence the test accuracy stays the same as that of the fully trained CNN.\n\nInterestingly, very similar observations were made in a concurrent paper~\\citep{golatkar2019time}, which studies the effect of relaxing regularization procedures such as weight decay and data augmentation early in training. They relate this phenomenology to an early ``critical period'' of learning during which regularization is most important.\n\n\n\\begin{figure}[htb]\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig3a.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig3f.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig3e.pdf}\n\t\\end{subfigure}\n\t\\caption{\\textbf{Left}: Performance of eFCNs reached at the end of training (red crosses) compared to the best CNN accuracy (straight line) and the best FCN accuracy (dashed line). \\textbf{Center}: Norm of the gradient for eFCNs at the beginning and at the end of training. \\textbf{Right}: Largest eigenvalue of the Hessian for eFCNs at the beginning and at the end of training. In all figures the $x$-axis is the relax time $t_w$.}\n\t\\label{fig:landscape}\n\\end{figure}\n\n \\subsection{A closer look at the landscape} \n A widespread idea in the deep learning literature is that the sharpness of the minima of the training loss is related to generalization performance \\citep{keskar2016large, jastrzkebski2017three}, the intuition being that flat minima reduce the effect of the difference between training loss and test loss. This motivates us to compare the first and second order properties of the landscape explored by the eFCNs and the CNNs they stem from.
To do so, we investigate the norm of the gradient of the training loss, $|\\nabla \\mathcal{L}|$, and the top eigenvalue of the Hessian of the training loss, $\\lambda_{max}$, in the central and right panels of Fig.~\\ref{fig:landscape} (we calculate the latter using a power method). \n \nWe point out several interesting observations. First, the steepness ($|\\nabla \\mathcal{L}|$) and sharpness ($\\lambda_{max}$) indicators increase then decrease during the training of the CNN (as analyzed in~\\citep{achille2017critical}), and display a maximum around $t_w\\simeq 20$, which coincides with the relax time of best improvement for the eFCNs. Second, we see that after training the eFCNs, these indicators plummet by an order of magnitude, which is particularly surprising at very late relax times, where it appeared in the left panel of Fig.~\\ref{fig:landscape} (see also Fig.~\\ref{fig:deviation}) as if the eFCNs were hardly moving away from initialization. This supports the idea that when the constraints are relaxed, the extra degrees of freedom \\textit{lead us to wider basins}, possibly explaining the gain in performance.\n\n\n\\subsection{How far does the eFCN escape from the CNN subspace?}\n\\label{sec:norms}\n\n\\begin{figure}[htb]\n\t\\centering\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig3b.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig3c.pdf}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.32\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig3d.pdf}\n\t\\end{subfigure}\n\t\\caption{\\textbf{Left panel:} relax time $t_w$ of the eFCN vs. $\\delta$, the measure of deviation from the CNN subspace through the locality constraint, at the final point of eFCN training. \\textbf{Middle panel:} $\\delta$ vs. the initial loss value. \\textbf{Right panel:} $\\delta$ vs.
final test accuracy of eFCN models. For reference, the blue point in the \\textbf{middle} and \\textbf{right} panels indicates the deviation measure for a standard FCN, where $\\delta \\sim 97\\%$.}\n\t\\label{fig:deviation}\n\\end{figure}\n\n\nA major question naturally arises: how far do the eFCNs move away from their initial condition? Do they stay close to the sparse configuration they were initialized in? To answer this question, we quantify how locality is violated once the constraints are relaxed (violation of weight sharing will be studied in Sec.~\\ref{sec:representation}). To this end, we consider a natural decomposition of the weights in the FCN space into two parts, $\\theta=(\\theta_{\\text{local}}, \\theta_{\\text{off-local}})$, where $\\theta_{\\text{off-local}} = 0$ for an eFCN when it is initialized from a CNN. A visualization of these blocks may be found in Sec.~A of the Supplemental Material. We then study the ratio $\\delta$ of the norm of the off-local weights to the total norm, $\\delta(\\theta) = \\frac{||\\theta_{\\text{off-local}}||_2}{||\\theta||_2}$, which is a measure of the deviation of the model from the CNN subspace. \n\nFig.~\\ref{fig:deviation} (left) shows that the deviation $\\delta$ at the end of eFCN training decreases monotonically with the relax time $t_w$. Indeed, the earlier we relax the constraints (and therefore the higher the initial loss of the eFCN), the further the eFCN escapes from the CNN subspace, as emphasized in Fig.~\\ref{fig:deviation} (middle). However, even at early relax times, the eFCNs stay rather close to the CNN subspace, since the ratio never exceeds 8\\%, whereas it is around 97\\% for a regular FCN (since the number of off-local weights is much larger than the number of local weights).
This underlines the \\textit{persistence of the architectural bias under the stochastic gradient dynamics}.\n\nFig.~\\ref{fig:deviation} (right) shows that when we move away from the CNN subspace, performance stays high at first and then plummets down to the FCN level. \\textit{This hints at a critical distance from the CNN subspace within which eFCNs behave like CNNs, and beyond which they fall back to the standard FCN regime}. We further explore this high-performance vicinity of the CNN subspace using interpolations in weight space in Sec.~C of the Supplemental Material.\n\n\n\n\n\\subsection{What role do the extra degrees of freedom play in learning?}\n\\label{sec:representation}\n\n\\begin{wrapfigure}{Rh}{0.4\\textwidth}\n \n \\includegraphics[width=0.4\\textwidth]{figures\/cifar10_skinnynet\/fig4c.pdf}\n \n \\centering\n \\caption{\n \n Contributions to the test accuracy of the local blocks (off-local blocks masked out), in orange, and off-local blocks (local blocks masked out), in blue. Combining them together yields a large gain in performance for the eFCN, in green.\n \n }\n \n \\label{fig:mask}\n\\end{wrapfigure}\n\nHow can the eFCN use the extra degrees of freedom to improve performance? From Fig.~\\ref{fig:mask}, we see that the off-local part of the eFCN is useless on its own (with the local part masked off). However, when combined with the local part, it may greatly improve performance when the constraints are relaxed early enough. This hints at the local and off-local parts performing complementary tasks.\n\nTo understand what tasks the two parts are performing, we show in Fig.~\\ref{fig:horse} a ``filter'' from the first layer of the eFCN (whose receptive field is of the size of the images, since locality is relaxed). Note that each CNN filter gives rise to many eFCN filters: one for each position of the CNN filter on the image, since weight sharing is relaxed.
Here we show the one obtained when the CNN filter (local block) is on the top left of the image. We see that the off-local blocks stay orders of magnitude smaller than the local blocks, as expected from Sec.~\\ref{sec:norms}, where we saw that locality was almost conserved. We also see that the local blocks hardly change during training, showing that weight sharing of the local blocks is also almost conserved.\n\nMore surprisingly, we see that for $t_w>0$ distinctive shapes of the images are learned by the eFCN off-local blocks, which perform some kind of template-matching. Note that the silhouettes are particularly clear for the intermediate relax time (middle row), at which we know from Sec.~\\ref{sec:performance} that the eFCN had the best improvement over the CNN. \\textit{Hence, the eFCN is combining template-matching with convolutional feature extraction in a complementary way}.\n\nNote that by itself, template-matching is very inefficient for complicated and varied images such as those of the CIFAR-10 dataset. Hence it cannot be observed in standard FCNs, as shown in Fig.~\\ref{fig:difference}, where we reproduce the counterpart of Fig.~\\ref{fig:horse} for the FCN in the left and middle images (they correspond to the initial and final training times, respectively). To reveal the silhouettes learned, we need to look at the pixelwise difference between the two images, i.e. focus on the change due to training (this is unnecessary for the eFCN, whose off-local weights started at zero). In the right image of Fig.~\\ref{fig:difference}, we see that a loose texture emerges; however, it is not as sharp as that of the eFCN weights after training.
Template-matching is only useful as a cherry-on-the-cake alongside more efficient learning procedures.\n\n\\begin{figure}[h!]\n \\centering\n\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig4a.pdf}\n\t\\caption{Heatmap of the weights of an eFCN ``filter'' from the first layer just at relax time (\\textbf{left} column), after training for 11 epochs (\\textbf{middle} column), and after training for 78 epochs (\\textbf{right} column). The eFCNs were initialized at relax times $t_w=0$ (\\textbf{top} row), $t_w=13$ (\\textbf{middle} row), and $t_w=115$ (\\textbf{bottom} row). The colors indicate the natural logarithm of the absolute value of the weights. Note that the convolutional filters, in the top right, vary little and remain orders of magnitude larger than the off-local blocks, whereas the off-local blocks pick up strong signals from images as sharp silhouettes appear.}\n \\label{fig:horse}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centering\n\t\\includegraphics[width=\\textwidth]{figures\/cifar10_skinnynet\/fig4b.pdf}\n\t\\caption{Same heatmap of weights as shown in Fig.~\\ref{fig:horse} but for a standard FCN at a randomly initialized point (\\textbf{left}) and after training for 150 epochs (\\textbf{middle}). The pixelwise difference is shown on the \\textbf{right} panel. A loose texture appears, but it is by no means as sharp as the silhouettes of the eFCNs.}\n \\label{fig:difference}\n\\end{figure}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nComplexities of living matter include its multi-scale nature, its multi-body nature, its\ndisrespect of time-reversal symmetry, and its ability to replicate and\nto learn, or adapt, from which new properties and\/or symmetries may\nemerge. An example of a new emergent property at the cellular level is\nthe elongation of cells during development~\\cite{Paluch_2009} and at the onset of\nsome diseases~\\cite{Lamouille_2014}. 
With such changes come new, emergent\ninteractions that need to be quantified. The above complexities call for a focus on {\\it in vitro}\nsystems, with only a few types of players, or constituents, if we are to build\nquantitative models of living matter with predictive power. \n\nOne of the candidate frameworks to model living matter is known as\nactive matter~\\cite{Ramaswamy_2010,Vicsek_2012,Marchetti_2013,Gompper_2020}.\nIn active matter, each constituent is internally driven, typically by\nconsuming energy to generate motion. With this construction,\nnon-living material such as colloids, microbots, and other\nself-propelled particles also fall under the purview of active\nmatter~\\cite{Zottl2016,Lozano2016,Vutukuri2020,Giomi_2013,Chvykov_2021}. To\ndate, two main categories of theoretical approaches have been used to\nquantify active matter: analytical analysis of dynamic stochastic equations of an\nassumed form (see, for example, equations governing the hydrodynamics of\nflocking~\\cite{Toner_Tu_PRL_1998,Toner_Tu_PRE_1998}) and more\nintricate computational approaches. While results from the former \ncategory are robust, the assumptions involved may\nlimit its applicability. \n\nThe challenges of the analytic approaches have motivated machine-learning approaches at the opposite end of the spectrum of quantitative frameworks~\\cite{Cichos_2020}. \nWith machine-learning techniques, one searches out correlations by sifting through reams of data. \nWhile this approach comes with many advantages, there are shortcomings as well. \nOne of the difficulties in machine-learning is the freedom in model\nselection, which could contribute towards the non-reproducibility issues reported in \\cite{baker20161}. \nApplication of machine-learning methods in the exact sciences, such as\nphysics, encounters the additional problem of violating\nbasic physical principles.
In other words, feature and model selection in machine-learning are not constrained as severely as physical models are by the principles of physics. Therefore, a researcher may train a\nmodel that eventually violates the second law of thermodynamics, or\nthe system's energy may not be bounded from below, such that ghosts with\nnegative kinetic energy are allowed to propagate. We should mention\nthat there has been very recent progress in dealing with this issue~\\cite{PhysRevLett.126.098302}. \n\n\nOur approach in this manuscript is to take advantage of learning from\ndata while remaining within the strict framework of the governing laws of physics. Therefore, instead of working with the conventional machine-learning\nmodels, we adopt the conventional statistical models of many-body systems to\nbe trained by data, where the phase-space distribution function serves as the prominent\nfeature of the model. An advantage of our approach to learning\nfrom data is that the interpretation of both the model and its features\nis well understood. The approach also aims at overcoming the\nchallenge of dealing with unknown interactions, emergent or otherwise. The expectation with this hybrid approach is that new and\nold physical principles will readily emerge in living systems at the collective scale. \n\nWe start with the time evolution equation for the phase-space density\nfunction as the underlying equation generating the non-equilibrium\nfield equations for the order parameters, such as the number density\nand the polarization vector. The dynamical equation of the phase-space density is under-determined in its general form but is exact. \nIn analytic approaches, one needs to make simplifying assumptions\nand approximations to overcome the latter problem to find analytic\nsolutions to the field equations. In our approach, we work with the\nexact dynamical equation and estimate the phase-space density directly\nfrom data.
Since in an experiment, the phase-space density function\nis truly driven by the exact equation, our estimation would be the\nsolution to the exact field equations with no assumptions required. \nAlso, we show that our data-driven approach can be used to build an\neffective statistical field theory for those systems that are close to\nsteady state. \nFor this purpose, we use our data-driven description of the time\nevolution of the order parameters to observe their fluctuations over\ntime and their correlations over space. Since the latter is related to\nthe functional derivatives of the effective action, we can then solve\na system of equations to reconstruct an analytic effective action.\nFinally, we test our method using both simulations and experiments. \n\n\nThe structure of this paper is as follows. \nWe construct the theoretical framework in Sec.~\\ref{Sec:Theory}. We\nvalidate our data-driven method in three simulations that are\npresented in Sec.~\\ref{Sec:Simulations}. In Sec.~\\ref{Sec:Data}, we apply our method to \nan experimental system of spherical particles embedded within a bacterial swarm that respond to localized UV light. \nWe draw conclusions in Sec.~\\ref{Sec:Conclusion}. \n\n\n\\section{Theoretical setup}\n\\label{Sec:Theory}\nThe breaking of time-reversal\n symmetry in active matter results in a non-equilibrium system making the direct connection\n with equilibrium statistical mechanics approaches untenable,\n particularly in the absence of any steady-state. \n However, non-equilibrium statistical mechanics approaches exist. 
Generally speaking, the stochastic dynamics of any observable $\\hat{A}(\\Gamma,t)$ in a statistical system can be written as \n \\begin{eqnarray}\n\\frac{\\partial \\hat{A}(\\Gamma,t)}{\\partial t} = {\\cal{L}} \\hat{A}(\\Gamma,t),\n \\end{eqnarray}\nwhere $\\Gamma$ is a point in the N-body phase-space, and ${\\cal{L}}$\nconsists of combinations of derivatives with respect to real-space\npositions, velocities, angles, angular velocities, etc. of each\nof the N particles in the system. It should be noted that the exact\nform of ${\\cal{L}}$ depends on the system of interest. For\nself-propelled rods, for example, its form is given in\nRef.~\\cite{Baskaran_2010}, which is found by starting with the Ito calculus \\cite{gardiner2004handbook}. \n \nA measurable quantity is the ensemble average of the observable, which reads\n\\begin{eqnarray}\n\\label{Eq:ADynamic}\n\\langle \\hat{A}(t) \\rangle &=& \n\\int d\\Gamma\\, \\hat{f}_N(\\Gamma)\\, \\hat{A}(\\Gamma,t),\\nonumber\\\\\n&=& \\int d\\Gamma\\, \\hat{f}_N(\\Gamma,t)\\, \\hat{A}(\\Gamma),\n\\end{eqnarray}\nwhere $\\hat{f}_N(\\Gamma)$ is the N-body phase-space probability density, and in the second line, the dynamics is equivalently passed to the latter density function. Taking the partial time derivative of Eq.~\\eqref{Eq:ADynamic} and integrating by parts, we conclude that \n\\begin{eqnarray}\n\\label{Eq:TimeEvol_fN}\n\\frac{\\partial \\hat{f}_N(\\Gamma,t)}{\\partial t} + {\\cal{L}} \\hat{f}_N(\\Gamma,t) = 0. \n\\end{eqnarray}\n\nWe define the $m$-body phase-space density function, obtained by integrating out the coordinates of the remaining $N-m$ particles, as \n\\begin{eqnarray}\n\\label{Eq:fmDef}\nf_m(\\Gamma_1, \\cdots, \\Gamma_m, t) \\equiv \\int d\\Gamma_{m+1}\\,d\\Gamma_{m+2}\\, \\cdots\\, d\\Gamma_N\\, \\hat{f}_N(\\Gamma,t). \n\\end{eqnarray}\nAs will be discussed later, most of the order parameters in active matter, such as the number density in real-space, the polarization vector, and the nematic tensor, are known in terms of $f_1$.
Therefore, we are interested in the time evolution of the latter quantity. \nIn the following, for simplicity, we drop the subscript and refer to the one-body distribution function as $f\\equiv f_1$. \n\n\nWe apply $\\frac{\\partial}{\\partial t}$ to both sides of Eq.~\\eqref{Eq:fmDef} for $m=1$ and use Eq.~\\eqref{Eq:TimeEvol_fN} to rearrange the terms in the following form\n\\begin{eqnarray}\n\\label{Eq:BoltzmannEq}\n\\frac{df}{dt} = \\frac{\\partial f}{\\partial t} + \\frac{\\partial f}{\\partial q^{\\mu}}\\frac{d q^{\\mu}}{dt} = C,\n\\end{eqnarray}\nwhere points in the one-body phase-space are labeled by $\\vec{q}\\equiv\n(\\vec{x},\\vec{v},\\hat{u},\\vec{\\omega}, \\cdots)$, consisting of positions, velocities, orientations, angular velocities, and other possible internal structures, and repeated indices indicate a sum. We will discuss $C$ in the following subsections. \n\n\n\\subsection{Hydrodynamic field equations}\n\\label{Sec:Hydro}\nFor simplicity, we assume that $\\vec{q}\\equiv\n(\\vec{x},\\vec{v},\\hat{u},\\vec{\\omega})$ and neglect the rest of the possible internal structures of the particles. \nTo arrive at the field equations for the order parameters in active matter, we need to compute the moments of Eq.~\\eqref{Eq:BoltzmannEq}. 
\nThe continuity equation is the zeroth moment of Eq.~\\eqref{Eq:BoltzmannEq} and reads\n\\begin{eqnarray}\n\\label{Eq:Boltzmann0thMom}\n\\partial_t \\rho + \\partial_{\\mu}\\left(\\rho \\bar{v}_{\\mu}\\right) = \\int dv\\, d\\hat{u}\\, d\\omega\\, C,\n\\end{eqnarray}\nwhere $\\partial_{\\mu}\\equiv \\frac{\\partial}{\\partial x^{\\mu}}$, $dv$, $d\\hat{u}$, and $d\\omega$ refer to the ``volume'' integrals in their corresponding spaces, and the particle density and the average velocity are defined respectively by \n\\begin{eqnarray}\n\\label{Eq:densityAndVelocity}\n&&\\rho\\left(\\vec{x},t\\right)\\equiv \\int dv\\, d\\hat{u}\\,d\\omega\\, f,\\nonumber\\\\\n&& \\bar{v}_{\\mu}\\left(\\vec{x},t\\right) \\equiv \\frac{1}{\\rho}\\int dv\\, d\\hat{u}\\,d\\omega\\, f v_{\\mu}.\n\\end{eqnarray}\nAlso, we have set the surface term from $\\frac{\\partial}{\\partial \\hat{u}_{\\mu}}$ equal to zero. \n\nThe time evolution of the average velocity can be found through the first velocity moment of Eq.~\\eqref{Eq:BoltzmannEq}\n\\begin{eqnarray}\n\\label{Eq:Boltzmann1stMom}\n\\partial_t\\left(\\rho \\bar{v}_{\\mu}\\right)+ \\partial_{\\nu}P_{\\mu\\nu} = \\int dv\\, d\\hat{u}\\,d\\omega\\, v_{\\mu}C,\n\\end{eqnarray}\nwhere we have set $\\frac{dv^{\\mu}}{dt}=\\frac{d\\omega^{\\mu}}{dt}=0$ and assumed that no external forces are applied to the particles.\nMoreover, the stress tensor is defined by\n\\begin{eqnarray}\n\\label{Eq:StressTensor}\nP_{\\mu\\nu} \\equiv \\int dv\\, d\\hat{u}\\,d\\omega\\, f v_{\\mu} v_{\\nu}.\n\\end{eqnarray}\nIt should be noted that the definition of the stress tensor may differ in other sources, where some of the terms\nfrom the right-hand side of Eq.~\\eqref{Eq:Boltzmann1stMom} are absorbed into the definition.
Nevertheless, we prefer our definition of the stress tensor because (i) it is Eq.~\\eqref{Eq:Boltzmann1stMom} that bears the physical meaning and not the definition of the stress tensor, and (ii) our Eq.~\\eqref{Eq:StressTensor} remains the same for every form of $C$, which, as we will see later, depends on the model of active matter. As an example, if $C=-\\partial_{\\mu}\\left(f F_{\\mu}\\right)$ with $F_{\\mu}$ a constant active propulsion force, a modified stress tensor could be defined as \n\\begin{eqnarray}\nP'_{\\mu\\nu} = P_{\\mu\\nu} + F_{\\nu} \\left(\\rho \\bar{v}_{\\mu}\\right). \n\\end{eqnarray}\nFinally, the pressure is defined as the trace of the stress tensor divided by the dimension of real-space\n\\begin{eqnarray}\nP = \\frac{1}{d} P_{\\mu\\mu},\n\\end{eqnarray}\nwhere there is a sum over the repeated indices. \n\n\nThe time evolution of the polarization vector of the particles is given by the first orientation moment of Eq.~\\eqref{Eq:BoltzmannEq}\n\\begin{eqnarray}\n\\label{Eq:Boltzmann_SMom}\n\\partial_t\\left(\\rho p_{\\mu}\\right)+ \\partial_{\\nu}\\Gamma_{\\mu\\nu} \n-\\int dv\\, d\\hat{u}\\,d\\omega\\, f\\, \\omega_{\\mu}\n= \\int dv\\, d\\hat{u}\\,d\\omega\\, \\hat{u}_{\\mu}C,\\nonumber\\\\\n\\end{eqnarray}\nwhere the polarization vector is defined as\n\\begin{eqnarray}\n\\label{Eq:PolarizationVec}\np_{\\mu}\\left(\\vec{x},t\\right)\\equiv \\frac{1}{\\rho} \\int dv\\, d\\hat{u}\\,d\\omega\\, f \\hat{u}_{\\mu},\n\\end{eqnarray}\nand we have defined \n\\begin{eqnarray}\n\\Gamma_{\\mu\\nu}\\left(\\vec{x},t\\right) \\equiv \\int dv\\, d\\hat{u}\\,d\\omega\\, f \\hat{u}_{\\mu} v_{\\nu}.\n\\end{eqnarray}\nA special but common case is that in which the probability distributions of velocities and orientations are independent, such that $f = f_v f_s$. In this case, $\\Gamma_{\\mu\\nu} \\propto p_{\\mu}\\bar{v}_{\\nu}$.
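As a concrete (and deliberately simplified) illustration of how such moments are evaluated from particle data, the following one-dimensional sketch bins hypothetical particle coordinates in real space and computes the density, the average velocity, and the polarization in each bin as sample averages; the variable names and the reduction to one dimension are ours, not part of the formalism above.

```python
import numpy as np

def order_parameters(x, v, u, edges):
    """Estimate rho(x), vbar(x), and p(x) on a 1D real-space grid.

    x, v, u : (N,) particle positions, velocities, and orientation
    components; the velocity and orientation integrals in the text
    become averages over the particles found in each spatial bin.
    """
    idx = np.digitize(x, edges) - 1          # spatial bin of each particle
    nbins = len(edges) - 1
    rho = np.zeros(nbins)
    vbar = np.zeros(nbins)
    pol = np.zeros(nbins)
    for b in range(nbins):
        mask = idx == b
        rho[b] = mask.sum() / (edges[b + 1] - edges[b])   # zeroth moment
        if mask.any():
            vbar[b] = v[mask].mean()   # first velocity moment divided by rho
            pol[b] = u[mask].mean()    # first orientation moment divided by rho
    return rho, vbar, pol
```

For a uniform sample with identical velocities and orientations, the estimated $\bar{v}_{\mu}$ and $p_{\mu}$ reduce to those constants, as expected.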
\n\n\nAnother order parameter of interest is the nematic tensor defined as \n\\begin{eqnarray}\n\\label{Eq:NematicTensor}\nQ_{\\mu\\nu}\\left(\\vec{x},t\\right)\\equiv \\frac{1}{\\rho}\\int dv\\, d\\hat{u}\\,d\\omega\\, f \\left(\\hat{u}_{\\mu} \\hat{u}_{\\nu} -\\frac{1}{d}\\delta_{\\mu\\nu}\\right),\n\\end{eqnarray}\nwhose time evolution is given by the second orientation moment of Eq.~\\eqref{Eq:BoltzmannEq}\n\\begin{eqnarray}\n\\label{Eq:Boltzmann2ndSMom}\n&&\\partial_t\\left(\\rho Q_{\\mu\\nu} + \\frac{\\rho \\delta_{\\mu\\nu}}{d}\\right) + \\partial_{\\alpha} \\int dv\\, d\\hat{u}\\,d\\omega\\, f \\hat{u}_{\\mu} \\hat{u}_{\\nu} v_{\\alpha}\\nonumber\\\\\n&&-\\int dv\\, d\\hat{u}\\,d\\omega\\, f \\left(\\omega_{\\mu}\\hat{u}_{\\nu}+ \\omega_{\\nu}\\hat{u}_{\\mu}\\right)\n =\n\\int dv\\, d\\hat{u}\\,d\\omega\\, \\hat{u}_{\\mu} \\hat{u}_{\\nu} C.\\nonumber\\\\\n\\end{eqnarray}\nAgain, in the special case that $f = f_v f_s$, the integral in the second term on the left-hand side is proportional to $\\bar{v}_{\\alpha}\\left(\\rho Q_{\\mu\\nu} + \\frac{\\rho \\delta_{\\mu\\nu}}{d}\\right)$.\n\n\nSince the one-body phase-space density is a probability function, we\nuse the Shannon definition of entropy as a measure of the information in the system of particles\n\\begin{eqnarray}\n\\label{Eq:Entropy}\nS\\left(\\vec{x},t\\right) = - \\int dv\\, d\\hat{u}\\,d\\omega\\, f \\ln f.\n\\end{eqnarray}\nIt is interesting to note that when the Boltzmann H-theorem is valid,\nthe expression above is also equal to the thermodynamic entropy of the\nsystem. When the interactions are strong, the expression captures only part of the total thermodynamic entropy, but it may still be an insightful measure of the system.
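Because Eq.~(\ref{Eq:Entropy}) can be rewritten as $S = -\langle \ln f \rangle_f$, the entropy can be estimated directly by averaging $-\ln f$ over samples drawn from $f$. A minimal sketch follows; the callable `density` stands for whatever normalized estimate of $f$ is available, and the names are ours.

```python
import numpy as np

def shannon_entropy(samples, density):
    """Monte Carlo estimate of S = -int f ln f = -<ln f>_f.

    samples : points drawn from the density f
    density : callable returning the normalized density at the samples
    """
    return -np.mean(np.log(density(samples)))
```

For samples from a standard normal distribution, for example, this estimate converges to the known value $\frac{1}{2}\ln(2\pi e) \approx 1.419$.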
\n\n\nIt is key to note that the goal of solving the hydrodynamic field equations~(\\ref{Eq:Boltzmann0thMom},\\ref{Eq:Boltzmann1stMom},\\ref{Eq:Boltzmann_SMom},\\ref{Eq:Boltzmann2ndSMom}) is to find $\\rho\\left(\\vec{x},t\\right)$, $\\bar{v}_{\\mu}\\left(\\vec{x},t\\right)$, $p_{\\mu}\\left(\\vec{x},t\\right)$, and $Q_{\\mu\\nu}\\left(\\vec{x},t\\right)$. \nThe difficulty is that although the hydrodynamic equations above are\nexact, they do not form a closed set of differential equations and\ndepend on higher-order moments. Therefore, if analytic solutions are\nof interest, one has to make assumptions to break the hierarchy. As can be seen from equations~(\\ref{Eq:densityAndVelocity}, \\ref{Eq:StressTensor}, \\ref{Eq:PolarizationVec}, \\ref{Eq:NematicTensor}), the time and space dependence of all the order parameters above is known if $f$, the one-body phase-space density, can be found. \nHere, we compute the left-hand side of Eq.~\\eqref{Eq:BoltzmannEq} directly from data to arrive at $C$ without needing to break the hierarchy. \n\n\n\\subsection{Analytic approaches}\n\nEven though Eq.~\\eqref{Eq:BoltzmannEq} is exact, it is not closed because $C$ is a functional of $f_2, f_3, \\cdots$. \nTherefore, an analytic description of $f$ is not possible unless we make simplifying assumptions.\nA large subset of theoretical approaches to understanding active matter consists of proposing a form for $C$ based on a set of assumed symmetries and interactions. In a wide class of models for active matter, $C$ is given by the so-called Boltzmann equation such that \n\\begin{eqnarray} \n\\label{Eq:C_Boltzmann}\nC = I_{\\text{dif}}[f] + I_{\\text{col}}[f],\n\\end{eqnarray}\nwhere a detailed description of the two terms for active matter can be found in, for example, Refs.~\\cite{PhysRevLett.109.268701, Peshkov2014, PhysRevE.74.022101, Bertin_2009}.
The Boltzmann equation has been used by numerous researchers to describe a wide range of active matter systems, see for example \\cite{PhysRevX.4.041030,Denk31623,PhysRevLett.120.258002}. \n\nAnother prevalent class of models for active matter is based on the Smoluchowski equation and its simplifying assumptions. \nFor two-dimensional self-propelled rods, $C$ in Eq.~\\eqref{Eq:BoltzmannEq} reads \\cite{Baskaran_2010, PhysRevLett.101.268101} \n\\begin{eqnarray}\nC = D_R \\partial^2_{\\theta} f - \\partial_{\\theta}\\left(f \\tau \\right) - \\partial_{\\mu} \\left(f F_{\\mu}\\right),\n\\end{eqnarray}\nwhich is a slightly modified Smoluchowski equation, \nwhere $D_R$ is a noise coefficient, $\\theta$ is the angle of the orientation vector, $\\tau$ is the mean-field torque from other particles, and $F$ is a mean-field force. The original Smoluchowski equation for the single-particle distribution function $f$, which again falls under the form of Eq.~\\eqref{Eq:BoltzmannEq}, has been used to describe active cytoskeletal filaments \\cite{PhysRevLett.96.258103}. \nMany other assumptions regarding the form of $C$ can be found in the literature. For example, in \\cite{2020PhRvE.101b2602J}, the Gay-Berne potential is used to suggest a form for the time evolution of the one-body probability density $f$. \n\nOne can also write the most general form for $C$ by constructing all possible scalars out of $f$, $v_{\\mu}$, $\\hat{u}_{\\mu}$, $\\partial_{\\mu}$, $\\frac{\\partial}{\\partial v^{\\mu}}$, $\\frac{\\partial}{\\partial \\hat{u}^{\\mu}}$, etc.
The most general form of $C$ is the following:\n\\begin{eqnarray}\n\\label{Eq:TheMostGeneralC}\nC &=& a_1 \\partial^2 f + a_2 \\partial_{\\mu} f v_{\\mu} + a_3 \\partial_{\\mu} f \\hat{u}_{\\mu} + \\cdots \\nonumber\\\\\n&+& \\sum_{i=1}^{\\infty} a_{(5,i)} (\\sqrt{p_{\\mu}p_{\\mu}})^i \n+ \\sum_{i=1}^{\\infty} a_{(6,i)} (\\sqrt{\\hat{u}_{\\mu}\\hat{u}_{\\mu}})^i \\nonumber\\\\\n&+& \\sum_{i=1}^{\\infty} a_{(7,i)} (\\sqrt{\\partial_{\\mu}f\\partial_{\\mu}f})^i\n+\n\\cdots,\\nonumber\\\\\n\\end{eqnarray}\nwhere $a$ with any type of subscript stands for a passive or an active coefficient. As can be seen, we have an infinite number of models for active matter, each corresponding to a $C$ equal to a subset of the terms above. Nevertheless, for most of the commonly known models of active matter, only a few terms are non-zero. \nFor example, $a_1$ in Eq.~\\eqref{Eq:TheMostGeneralC} is similar to the viscosity in the Navier--Stokes equation, and $a_3$ is similar to one of the active parameters in the dry flock model presented in Ref.~\\cite{Marchetti_2013}, where the corresponding moment of the right-hand side of Eq.~\\eqref{Eq:BoltzmannEq} reads\n\\begin{eqnarray}\n\\int dv\\, d\\hat{u}\\,d\\omega\\, a_3 \\partial_{\\mu} f \\hat{u}_{\\mu} = a_3 \\partial_{\\mu}\\left(\\rho p_{\\mu}\\right). \n\\end{eqnarray}\nIn fact, this active-current term can be constructed within the Boltzmann framework of Eq.~\\eqref{Eq:C_Boltzmann}, as shown in Ref.~\\cite{Bertin_2009}. \n\nIt is also possible to impose constraints on the moments of Eq.~\\eqref{Eq:BoltzmannEq} to arrive at a certain arrangement of non-zero coefficients in Eq.~\\eqref{Eq:TheMostGeneralC}. \nOne possible route is to study the terms analytically by categorizing them according to some group properties. In that way, we would end up with a study similar to the one presented in Refs.~\\cite{Toner_Tu_PRL_1998,Toner_Tu_PRE_1998}. In this paper, we follow a different approach, to be discussed next.
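Once $C$ and the candidate terms of Eq.~(\ref{Eq:TheMostGeneralC}) have been evaluated numerically at a set of phase-space points, identifying the significant coefficients $a_i$ becomes a linear regression. The following toy sketch uses entirely synthetic columns standing in for the evaluated terms (all names and values are ours, for illustration only) and recovers a known coefficient vector by least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500                              # number of phase-space sample points
term1 = rng.normal(size=n)           # stands in for  d^2 f  evaluated on the points
term2 = rng.normal(size=n)           # stands in for  d_mu f v_mu
term3 = rng.normal(size=n)           # stands in for  d_mu f u_mu
A = np.column_stack([term1, term2, term3])

a_true = np.array([0.7, 0.0, -1.3])  # the "true" model has no second term
C_data = A @ a_true + 1e-3 * rng.normal(size=n)   # noisy numerical estimate of C

# Least-squares fit of the coefficients; near-zero entries flag absent terms.
a_fit, *_ = np.linalg.lstsq(A, C_data, rcond=None)
```

In practice the columns of `A` would be the candidate terms evaluated from the estimated $f$, and a sparsity-promoting regression could replace plain least squares.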
\n\n\n\n\\subsection{Data-driven approach}\n\\label{Sec:EstPhaseSpace}\nAs we mentioned above, $C$ accounts for all of the higher-body probability functions such that Eq.~\\eqref{Eq:BoltzmannEq} is exact. However, simplifying approximations are needed to find an analytic solution for $f$ because the aforementioned equation is not closed. \nAn alternative to making simplifying assumptions is to estimate $f$ directly from data and use the exact form of Eq.~\\eqref{Eq:BoltzmannEq} to find $C$ as well as the order parameters such as $\\rho\\left(\\vec{x},t\\right)$, $\\bar{v}_{\\mu}\\left(\\vec{x},t\\right)$, $p_{\\mu}\\left(\\vec{x},t\\right)$, and $Q_{\\mu\\nu}\\left(\\vec{x},t\\right)$. \n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.8\\columnwidth]{Figure\/EstimatorDemonstration.png}\n\\includegraphics[width=\\columnwidth]{Figure\/AlgChart.pdf}\n\\caption{(Top) The density estimator in a one-dimensional\n phase-space. Five data points are shown with black dots.\n Their contributions to the density function are denoted by the red\n dashed lines. The sum of all the contributions is represented by the\n solid blue line and serves as the estimate of the unnormalized probability function. \n(Bottom) A pictorial demonstration of the algorithm of this data-driven method. \\label{Fig:AlgChart}}\n\\end{figure}\nBiological systems are often made of many types of particles, and one copy of Eq.~\\eqref{Eq:BoltzmannEq} should be written for every particle type. In this case, $C$ accounts not only for higher-body distribution functions but also for the interactions between different types of particles. In the following, we assume that only one type of particle is of interest. We also assume that an experiment has collected a data sample\ncontaining the positions and orientations of the particle type of interest over time.
\n\nThe starting point is to note that the one-body phase-space density $f(\\vec{q})$ is equal to the probability of finding a particle at point $\\vec{q}$ in phase-space. \nPerhaps the most straightforward approach to estimating $f(\\vec{q})$ from a\ndata sample is to discretize the $\\vec{q}$ space and count the number of\nparticles that fall into each bin. The problem with this approach is\nthat if the number of dimensions of $\\vec{q}$ is high, most of the bins remain\nempty. \nFor example, a typical dataset contains only a few\nhundred to a few tens of thousands of particles. \n On the other hand, \neven for a coarse and blurry discretization of 10\nbins per dimension, \nand for a typical six-dimensional phase-space, \nwe end up with one million bins. \nAnother problem with this approach is that\nthe estimate often depends on the choice of the locations of the bin edges. \n\nAlthough the simple approach above is not necessarily practical for estimating the phase-space density, its careful interpretation can guide us in devising a better estimator. An interpretation of the binning method is that all of the observed particles contribute to a given bin with binary weights. In other words, a particle's contribution to a given bin is zero or one depending on whether the phase-space distance of that particle to the center of the bin is greater or smaller than the bin size. The estimate of the phase-space density then reads\n\\begin{eqnarray}\n\\label{Eq:PhasSpacDensEst}\nf(\\vec{q}) = {\\cal{N}}^{-1} \\sum_{i} w_i(\\vec{q}),\n\\end{eqnarray} \nwhere $i$ runs over particles, ${\\cal{N}}^{-1}$ is a normalization factor, and $w_i(\\vec{q})$ is the contribution of the $i$th particle to the probability at $\\vec{q}$.
\n\nIt is already apparent that the binary form of $w_i(\\vec{q})$ is behind the problems of the binning method, and that $w_i(\\vec{q})$ should instead be a continuous function such that all of the observed particles have a non-zero contribution to a given location of phase-space. \nIn statistics, Eq.~\\eqref{Eq:PhasSpacDensEst} with a continuous weight factor is known as a kernel density estimator \\cite{10.1214\/aoms\/1177728190}. \nAlso, the weights of particles close to a given phase-space location $\\vec{q}$ should be more significant than the weights of particles that are far from it. Therefore, we use the following Gaussian form for the weights in Eq.~\\eqref{Eq:PhasSpacDensEst} \n\\begin{eqnarray}\n\\label{Eq:W_i}\nw_i(\\vec{q}) = \\exp\\left(-\\left(\\frac{|\\vec{q} -\\vec{q}_i|}{r_{\\text{eff}}}\\right)^2\\right),\n\\end{eqnarray}\nwhere $r_{\\text{eff}}$ is a free parameter. \nThe optimal value of $r_{\\text{eff}}$ is the smallest number that still returns a smooth estimate of the phase-space density. \nHence, from the observed particle positions $\\vec{q}_i$ in phase-space, the weights $w_i(\\vec{q})$, and subsequently the density $f(\\vec{q})$, can be estimated. \nFinally, the phase-space density reads \n\\begin{eqnarray}\nf(\\vec{q},t) = {\\cal{N}}^{-1} \\sum_{i} \\exp\\left(-\\left(\\frac{|\\vec{q} -\\vec{q}_i(t)|}{r_{\\text{eff}}}\\right)^2\\right). \n\\end{eqnarray}\nA pictorial description of the algorithm for our estimator of the phase-space density can be found in Fig.~\\ref{Fig:AlgChart}.
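For concreteness, the estimator of Eq.~(\ref{Eq:PhasSpacDensEst}) with the Gaussian weights of Eq.~(\ref{Eq:W_i}) can be sketched in a few lines; the normalization ${\cal{N}}$ is omitted here, and the variable names are ours.

```python
import numpy as np

def estimate_f(q, data, r_eff):
    """Unnormalized kernel estimate of the one-body phase-space density:
    a sum of Gaussian weights exp(-(|q - q_i| / r_eff)^2) over particles.

    q    : (d,) phase-space point at which the density is evaluated
    data : (N, d) observed particle coordinates q_i
    """
    d2 = np.sum((data - q) ** 2, axis=1)    # squared phase-space distances
    return np.exp(-d2 / r_eff ** 2).sum()   # divide by the normalization to get a pdf
```

A single particle contributes a weight of 1 at its own location and $e^{-1}$ at a distance $r_{\text{eff}}$ away from it.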
\n\nBy inserting this continuous one-body phase-space density function into equations~(\\ref{Eq:densityAndVelocity}, \\ref{Eq:StressTensor}, \\ref{Eq:PolarizationVec}, \\ref{Eq:NematicTensor}, \\ref{Eq:Entropy}), we can estimate the number density, the bulk velocity, the stress tensor, the polarization vector, the nematic tensor, and the entropy as functions of time, that is, the solutions of the hydrodynamic equations of Section~\\ref{Sec:Hydro}, without needing to solve the differential equations. \n\n\nHaving estimated the phase-space density at different times $t$, and assuming that the external forces are either known or absent, all of the terms on the left-hand side of Eq.~\\eqref{Eq:BoltzmannEq} are known. Therefore, we can find an estimate of $C$, which can subsequently be compared with Eq.~\\eqref{Eq:TheMostGeneralC} to find its significant terms. \n\n\n\n\n\n\n\n\n\\subsection{Statistical field theory of active matter}\n\\label{Sec:StatFieldTheory} \nA field-theoretic description of a system is potentially powerful because of its capability of explaining various experiments that are seemingly different. It is this one-to-many relationship that highlights the importance of field theory. \nWhen an active-matter system is close to its steady state, an effective statistical field theory of its order parameters can be constructed. \nAn objective of this paper is to learn this effective field theory by observing one of the experiments that it can explain. \n\n\nTo start the construction, we note that, on the one hand, the correlations between the order parameters are determined by a probability functional, and if these correlations are known, one should be able to recover the probability functional through reverse engineering. On the other hand, in Sec.~\\ref{Sec:Theory}, we have devised a method to learn the space and time dependence of the order parameters, and consequently their correlations, from data.
\nMore specifically, by definition, the effective partition function is the sum of the probabilities of possible configurations\n\\begin{eqnarray}\n\\label{Eq:TotalPartition}\n&&Z[J] = \\int {\\cal{D}}\\varphi\\, \\exp\\bigg(-S_{\\text{total}}[\\varphi]-\\int d^3x \\varphi J \\bigg),\n\\end{eqnarray}\nwhere $\\varphi$ represents the order parameters, and $J$ is an auxiliary field. \nOn the one hand, the correlations among the order parameters are given by \n\\begin{eqnarray}\n\\label{Eq:CorrFunction}\n&&\\langle \\varphi(\\vec{x}_1)\\cdots \\varphi(\\vec{x}_k) \\rangle \n=\\frac{\\delta^k Z}{\\delta J(\\vec{x}_1)\\cdots \\delta J(\\vec{x}_k)}\\Big|_{J=0}.\n\\end{eqnarray}\nOn the other hand, the same correlations are known through the data-driven method of Sec.~\\ref{Sec:Theory}. The objective is to construct the unknown analytic expression on the right-hand side from the left-hand side, which is known numerically. \n\n\n\n\\label{Sec:Greens}\nIn the following, we pursue the construction of the effective field theory by first expanding the right-hand side of Eq.~\\eqref{Eq:CorrFunction} using perturbation theory, which is valid given the assumed stationary state of the system. Each term in the expansion is an unknown variable to be determined. Therefore, we need to observe as many correlation functions, for $k = 2, 3, \\cdots$, as there are unknown variables. By solving the system of equations, the leading terms in the expansion of the partition function are recovered from data. It should be noted that, usually, only the first few terms of the expansion are of interest, and the higher-order terms can be neglected for the sake of practicality, depending on how much data is available.
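On the numerical side, the left-hand side of Eq.~(\ref{Eq:CorrFunction}) is estimated by averaging products of the order-parameter fields over the snapshots. A minimal sketch, with an array layout of our choosing:

```python
import numpy as np

def correlations(snapshots):
    """Estimate <phi(x_i) phi(x_j)> from an ensemble of snapshots.

    snapshots : (S, M) array; row s holds the order parameter phi
    evaluated at M grid points in snapshot s (one ensemble member).
    Returns the (M, M) matrix of estimated two-point correlations.
    """
    S = snapshots.shape[0]
    return snapshots.T @ snapshots / S
```

Higher-order correlators, $k = 3, 4, \cdots$, are obtained in the same way from averages of triple and higher products.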
\n\n\nTo practically use the right-hand side of Eq.~\\eqref{Eq:CorrFunction}, we need to calculate the path integral in Eq.~\\eqref{Eq:TotalPartition} and write $Z[J]$ explicitly in terms of the leading perturbations in $S_{\\text{total}}[\\varphi]$.\nSince the system is close to equilibrium and the fluctuations around the saddle point are small, \nthe exponential function in Eq.~\\eqref{Eq:TotalPartition}, equivalent to the probability of configurations, takes the following form\n\\begin{eqnarray}\n\\label{Eq:ProbabilityCloseToEquil}\n&&{\\cal{P}} = \\nonumber\\\\\n&&\\exp\\Bigg(-\\frac{1}{2}\\int d^3x_1 d^3x_2 \n\\frac{\\delta^2 S_{\\text{total}}}{\\delta \\varphi(x_1)\\delta \\varphi(x_2)}\\Big|_{\\varphi_{_{\\text{SP}}}}\n\\varphi(\\vec{x}_1)\\varphi(\\vec{x}_2)\\nonumber\\\\\n &&+ V(\\varphi) - \\int d^3x\\varphi(\\vec{x}) J(\\vec{x}) \\Bigg),\\nonumber\\\\\n\\end{eqnarray}\nwhere $V(\\varphi)$ refers to the higher-order terms, and we have used the saddle-point condition $\\frac{\\delta S_{\\text{total}}}{\\delta \\varphi}\\Big|_{\\varphi_{_{\\text{SP}}}}=0$.
\n\nTo recover the leading term, we introduce the subscript ``0'' to refer to $V(\\varphi)=0$, and define $\\varphi_0(x)$ to be the solution of the corresponding field equation \n\\begin{eqnarray}\n\\int d^3x' \n\\frac{\\delta^2 S_{\\text{total}}}{\\delta \\varphi(x')\\delta \\varphi(x)}\\Big|_{\\varphi_{_{\\text{SP}}}}\n\\varphi_{_0}(\\vec{x}') = - J(\\vec{x}),\n\\end{eqnarray}\nwhich can be used to calculate the non-interacting partition function\n\\begin{eqnarray}\n\\label{Eq:Z_0J}\nZ_0[J] = \\exp\\left(-\\frac{1}{2}\\int d^3x_1 d^3x_2 J(\\vec{x}_1)J(\\vec{x}_2) \\Delta(\\vec{x}_1-\\vec{x}_2)\\right).\\nonumber\\\\\n\\end{eqnarray}\nHere, we have dropped a normalization factor, and $\\Delta\\left(\\vec{x}_1-\\vec{x}_2\\right)$ is the Green's function defined by \n\\begin{eqnarray}\n\\label{Eq:GreenFunctionEquation}\n\\int d^3x' \n\\frac{\\delta^2 S_{\\text{total}}}{\\delta \\varphi(x')\\delta \\varphi(x_1)}\\Big|_{\\varphi_{_{\\text{SP}}}}\n\\Delta(\\vec{x}'-\\vec{x}_2) =\\delta(\\vec{x}_1-\\vec{x}_2).\n\\end{eqnarray}\nFinally, we achieve the desired expansion of the full partition function through a straightforward calculation\n\\begin{eqnarray}\nZ[J] &=& \\exp\\left( V\\left(\\frac{\\delta}{\\delta J}\\right)\\right)Z_0[J]\\nonumber\\\\\n&=& \\left(1 + V\\left(\\frac{\\delta}{\\delta J}\\right) + \\cdots \\right) Z_0[J].\n\\end{eqnarray}\n\n\n\nThe algorithm for reconstructing the effective action is as follows.\nFirst, we use the data-driven method, developed in\nSec.~\\ref{Sec:Theory}, to find the phase-space density at different\ntimes and determine whether or not $f$ evolves with time within some\ntolerance. If it does not, we collect many snapshots of the system and estimate the phase-space density for each of them. The phase-space densities are then used to learn the order parameters as functions of the real-space position in the corresponding snapshots. \nThe snapshots act as the members of the ensemble of the system.
Correlations between the order parameters at different locations, such as $\\langle \\varphi(\\vec{x}_1)\\cdots \\varphi(\\vec{x}_k) \\rangle$, are then equal to the average of the products of the order parameters, i.e.\\ $\\varphi(\\vec{x}_1)\\cdots \\varphi(\\vec{x}_k)$, over the snapshots. \n\n\n\n\n\n\n\n\\begin{figure*}\n\\centering\n\\subfigure[]{\\includegraphics[width=\\columnwidth]{Figure\/SimData4DHist.pdf}}\n\\subfigure[]{\\includegraphics[width=\\columnwidth]{Figure\/Simf_4D.pdf}}\n\\subfigure[]{\\includegraphics[width=0.9\\columnwidth]{Figure\/data_snapshot.pdf}}\n\\subfigure[]{\\includegraphics[width=0.9\\columnwidth]{Figure\/d_eff_0_5scatter_stimated_vs_true.png}}\n\\caption{Simulation test case I: \nA five-dimensional dataset comprising two real-space dimensions, two velocity dimensions, and one orientation angle for 1000 particles. We have assumed the true phase-space density and used it to generate the data. The data are passed to our method to estimate the phase-space density. The figure shows that our estimate of the phase-space density is quite accurate, and that a simple histogram of the positions of the 1000 particles cannot appropriately represent the underlying phase-space density.\n(a) Four-dimensional representation of the positions of the simulated particles in phase-space in one single frame.\nAt each $x-y$ position, there exists a subplot of $v_x-v_y$ space that\nshows the particle counts as color values. Due to the low statistics in the\nsimulations, which is typical of real experiments, most of the\nregions of the plot are empty. \\label{Fig:Sim4DplotsBacteria}\n(b) Four-dimensional representation of our estimate of the phase-space number density $f(\\vec{q})$.\nAt each $x-y$ position, there exists a subplot of $v_x-v_y$ space that\nshows the densities as color values. 
This plot shows that the true\nvalue of the probability of finding particles at the empty regions of\nthe histogram in the previous panel is not zero. This difference\nbetween the observed particles and the true probability distribution is\naddressed by our method.\n(c) Snapshot of the simulated data. The color bar shows the velocity\nof the particles. The video clip of this dataset can be seen \\href{https:\/\/www.dropbox.com\/s\/lpslxazn30cszi1\/SimData.mp4?dl=0}{here}. \\label{Fig:dataset1}\n(d) Scatter plot of the five-dimensional phase-space density. The y-axis shows the true values and the x-axis shows the corresponding estimated values for the choice of $r_{\\text{eff}}=0.5$. \n\\label{Fig:f_comparisons1}\n} \n\\end{figure*}\n\\begin{figure*}\n\\centering\n\\subfigure[]{\\includegraphics[width=\\columnwidth]{Figure\/deff0_5_P.png}}\n\\subfigure[]{\\includegraphics[width=\\columnwidth]{Figure\/deff0_5_S.png}}\n\\caption{\n(a) The estimated pressure for simulation test case I. \n(b) The estimated entropy for simulation test case I. \\label{Fig:PressureEntropy}\n} \n\\end{figure*}\n\\section{Simulation test cases}\n\\label{Sec:Simulations}\nTo test the data-driven approach outlined in Sec.~\\ref{Sec:Theory}, we apply it to datasets whose properties are\nknown. We consider the following three test cases. \n\n\\subsection{Simulation test case I: Multivariate Gaussian distributions}\nIn this subsection, we assume the following five-dimensional phase-space distribution function\n\\begin{eqnarray}\n\\label{Eq:DistSim1}\nf(\\vec{q}) = {\\cal{N}}^{-1} \\exp\\left(-\\frac{1}{2}\\vec{q}\\cdot \\Sigma^{-1}\\cdot \\vec{q}\\right),\n\\end{eqnarray}\nwhere $\\vec{q}=(x, y, v_x,v_y,\\cos\\theta)$ with $\\theta$ being the angle of\norientation for rod-like particles, and \n\\begin{eqnarray}\n\\Sigma^{-1} = \n\\begin{bmatrix}\n 1.9 & 0.18 & -0.66 & -0.23 & -0.43\\\\\n 0.18 & 1.18 & 0. & 0. & -0.43\\\\\n -0.66 & 0. & 1. & -0.45 & 0.\\\\\n -0.23 & 0. 
& -0.45 & 1.2 & 0.\\\\\n -0.43 & -0.43 & 0. & 0. & 1.\\\\\n\\end{bmatrix}.\n\\end{eqnarray}\nNote that the Maxwell-Boltzmann distribution is a very special case of the distribution above, obtained when every component of $\\Sigma^{-1}$ is zero except $\\Sigma^{-1}_{33}=\\Sigma^{-1}_{44}\\neq 0$. It should also be noted that many interactions have been assumed between positions, velocities, and polarization, and so the system is far from a non-interacting one. \n\n\nWe use the ``numpy.random.multivariate\_normal'' function in Python to draw 15 snapshots, each with 1000 particles. \nA histogram of the four-dimensional phase-space positions of the observed particles, which is the naive binning method discussed above, is shown in Fig.~\\ref{Fig:dataset1}(a). The estimated phase-space density is shown in Fig.~\\ref{Fig:dataset1}(b). The deficiency of the simple binning approach can be seen by comparing Fig.~\\ref{Fig:dataset1}(a) and Fig.~\\ref{Fig:dataset1}(b). \nOne of the snapshots can be seen in Fig.~\\ref{Fig:dataset1}(c).\nThe true phase-space density and the one estimated using $r_{\\text{eff}}=0.5$ are compared at every point of the phase-space. The result can be seen in Fig.~\\ref{Fig:dataset1}(d). This panel shows that, unlike the histogram approach, the method can accurately estimate the true distribution function. \n\n\nWe study the effects of tuning $r_{\\text{eff}}$ by estimating the phase-space density with different values of $r_{\\text{eff}}$ for each of the 15 samples and comparing their average, as our final estimate, with Eq.~\\eqref{Eq:DistSim1}. Fig.~\\ref{Fig:SSS_dEff} shows the error in our estimate of the phase-space density in terms of $r_{\\text{eff}}$. As can be seen from the figure, $r_{\\text{eff}}\\simeq 0.5$ returns the most accurate estimate of the phase-space density. \nOn the other hand, Fig.~\\ref{Fig:SSS_NSample} shows that the error of the estimate can be reduced by increasing the number of samples, i.e., snapshots of the system.
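The bandwidth study above can be mimicked in a few lines. The following toy sketch (one-dimensional Gaussian data; the grid, sample size, and trial values of $r_{\text{eff}}$ are our choices, not those of the test case) reproduces the characteristic behavior: an intermediate $r_{\text{eff}}$ minimizes the error, while values that are too small or too large under- or over-smooth the estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(size=(2000, 1))          # samples from a 1D standard normal
grid = np.linspace(-3, 3, 61)[:, None]     # evaluation points
true_f = np.exp(-grid[:, 0] ** 2 / 2) / np.sqrt(2 * np.pi)

def kde(grid, data, r_eff):
    """Gaussian-kernel estimate, normalized by a Riemann sum on the grid."""
    d2 = (grid - data.T) ** 2              # (grid points, particles) distances^2
    f = np.exp(-d2 / r_eff ** 2).sum(axis=1)
    dx = grid[1, 0] - grid[0, 0]
    return f / (f.sum() * dx)

# Mean squared error of the estimate for a too-small, a moderate,
# and a too-large bandwidth.
errors = {r: np.mean((kde(grid, data, r) - true_f) ** 2)
          for r in (0.03, 0.3, 2.0)}
```

In a real application the true density is unknown, and one instead looks for the smallest $r_{\text{eff}}$ that still yields a smooth estimate, as described above.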
\n\n\n\nHaving estimated the phase-space density using 15 samples with $r_{\\text{eff}}=0.5$, we are ready to compute the order parameters of interest using Eqs.~(\\ref{Eq:densityAndVelocity}, \\ref{Eq:StressTensor}, \\ref{Eq:PolarizationVec}, \\ref{Eq:NematicTensor}).\nFor instance, our estimates of the pressure and the entropy of the simulated data are shown in Fig.~\\ref{Fig:PressureEntropy}.\nAlso, assuming that no significant external force exists in the system, we can estimate $C$ at every location of phase-space by inserting the estimated phase-space density into the left-hand side of Eq.~\\eqref{Eq:BoltzmannEq} and find the significant terms in Eq.~\\eqref{Eq:TheMostGeneralC}. \n\n\n\\begin{figure}[thb]\n\\centering\n\\subfigure[]{\n\\includegraphics[width=0.48\\columnwidth]{Figure\/NonLinear_Compare_f_kT_point3.pdf}}\n\\subfigure[]{\n\\includegraphics[width=0.48\\columnwidth]{Figure\/NonLinear_Compare_f_kT1point5.pdf}}\n\\subfigure[]{\n\\includegraphics[width=0.48\\columnwidth]{Figure\/NonLinear_Compare_f_kT2point6.pdf}}\n\\subfigure[]{\n\\includegraphics[width=0.48\\columnwidth]{Figure\/NonLinear_Compare_f_kT3point8.pdf}}\n\\caption{Simulation test case II: Comparison of the data-driven estimate of the phase-space\n density with the true one in a non-linear, non-steady-state system. The true distribution is $f = {\\cal{N}}\\exp(-v^4\/T)$ with the parameter $T=t^{\\frac{3}{7}}$ changing with time such that $\\frac{\\partial f}{\\partial t} \\neq 0$. We let the system evolve with time and take snapshots at four time points to make the estimation. 
(a) $T = 0.3$, (b) $T=1.5$, (c) $T=2.6$, (d) $T=3.8$.\\label{Fig:NonLinear}}\n\\end{figure}\n\\subsection{Simulation test case II: Non-linear, non-steady-state scenario}\nTo test our method in a non-linear and non-steady system, we simulate a system of particles on a surface whose phase-space density is \n\\begin{eqnarray}\n\\label{Eq:fNonLinearSim} \nf(t,v_x, v_y) = {\\cal{N}}\\exp(-\\frac{v^4}{t^{\\frac{3}{7}}}).\n\\end{eqnarray}\n\nAlthough the distribution function has a non-conventional form, we are still able to draw samples using Python's random packages with the following trick. First, we use ``numpy.random.uniform'' to generate one million data points in the $v_x$ space. We scale the velocity space by dividing $v_x$ by the length of the container of the particles such that the velocities are confined within the interval $(-4,4)$. We repeat the same procedure to generate uniform points in the $v_y$ dimension. By zipping the two arrays, we obtain one million uniformly generated data points in the two-dimensional phase-space. The number of uniformly generated data points is intentionally high to cover almost the entire phase-space. Next, we use Eq.~\\eqref{Eq:fNonLinearSim} to compute the probability of each of the data points. Finally, we use the ``random.choices'' function in Python to draw samples of 10000 particles out of the one-million population based on their computed probabilities. \n\nTo increase the accuracy of the estimation, we take ten snapshots of the system at each time, corresponding to having ten replicates, and repeat the procedure at four different times. We pass the snapshots as input to our data-driven method and estimate the phase-space density. The average of the ten estimated phase-space densities at each time is compared with the true function above. Figure~\\ref{Fig:NonLinear} shows that our method accurately estimates such non-linear, out-of-equilibrium systems. 
We choose $r_{\\text{eff}}=0.05$ at all the time points to get the best optimized result. \n\nSince our estimation of $f(t,v_x,v_y)$ is closely similar to the true density at all of the time points, our estimation of any of the order parameters of the system will be accurate. For example, the pressure of the system reads\n\\begin{eqnarray}\nP(t) = \\frac{1}{2} \\int \\, d^2v f v^2 = \\frac{{\\cal{N}} \\pi}{4}t^{\\frac{3}{4}},\n\\end{eqnarray}\nwhich is clearly out of equilibrium and non-linear and solely depends on how well we can estimate $f(t,v_x,v_y)$. \n\n\n\n\n\n\n\\begin{figure*}[thb]\n\\centering\n\\subfigure[]{\n\\includegraphics[width=\\columnwidth]{Figure\/Rods_snapshot.pdf}}\n\\subfigure[]{\n\\includegraphics[width=\\columnwidth]{Figure\/SelfPropRod_Compare_f.pdf}}\n\\caption{Simulation test case II: (a) One of the ten snapshots of the self-propelled rods\n system. The parameters are chosen such that $\\rho_0 > \\rho_N$ with\n the director unit vector $\\hat{n}$ in the y-direction such that\n there are more particles in the y-direction than those in the\n x-direction in accordance with the analytic solution in Eq.~\\ref{Eq:analytic_f_self_prop_rod}. \n(b) The estimated versus the true phase-space density at uniformly\ndistributed positions of the 6-dimensional phase-space. The\ncomparison shows fair agreement between the two. Therefore, our\nmethod's estimations for the additional order parameters are fairly\naccurate too because these parameters are simply moments of the phase-space density $f$ as defined in the theory section above and also in \\cite{PhysRevE.77.011920, PhysRevLett.101.268101}. \n\\label{Fig:RodSim}}\n\\end{figure*}\n\\subsection{Simulation test case III: Self-propelled rods}\nWe now evaluate the performance of our method by applying\nit to a system whose analytic hydrodynamic solution is known, namely, self-propelled hard rods on a substrate in two dimensions. 
Both the microscopic and continuum descriptions of the system can be found in Refs.~\cite{PhysRevE.77.011920, PhysRevLett.101.268101}, where the order parameters such as the number density, the polarization vector, and the nematic tensor are defined in terms of the one-particle phase-space density, which satisfies a modified Smoluchowski equation of the general form of Eq.~\eqref{Eq:BoltzmannEq}, where $q_{\mu}=\left(\vec{r},\theta, \vec{v},\omega \right)$ collects the position vector in the two-dimensional configuration space, the angle that defines the orientation of the rods, the velocity of the rods, and their angular velocity.
As discussed in the two references above, if the friction due to the substrate is large and the interactions are of the excluded-volume type, then, after a few more simplifying assumptions, the following is a stationary solution
\begin{eqnarray}
\label{Eq:analytic_f_self_prop_rod}
&&f\left(\vec{r},\theta, \vec{v},\omega \right) \simeq f_x(\vec{r},\theta) \times\nonumber\\
&&\exp\left(
-\frac{1}{2 k T} \left(
(\vec{v}-v_0 \hat{u})^2 + I \omega^2
	\right)
\right),
\end{eqnarray}
where $v_0$ is the self-propulsion speed along the direction of the rod, and $I =l^2/12$ with $l$ being the length of the rods. Moreover, $f_x$, the real-space probability distribution, takes the following form
\begin{eqnarray}
f_x =
\begin{cases}
\frac{\rho_0}{2\pi}, & \rho_0 < \rho_N,\\
\frac{\rho_0}{2\pi}\left( 1+ 4 Q_{\alpha\beta} \left(\hat{u}_{\alpha}\hat{u}_{\beta} - \frac{1}{2} \delta_{\alpha\beta}\right)\right), &
 \rho_0 > \rho_N,
\end{cases}
\end{eqnarray}
where $\rho_0$ is a constant, $\rho_N=\frac{3\pi}{2 l^2}$ is the Onsager transition density, and
$Q_{\alpha\beta}=S \left( \hat{n}_{\alpha} \hat{n}_{\beta} -\frac{1}{2}\delta_{\alpha\beta}\right)$, with $S$ a constant scalar and $\hat{n}$ a constant unit vector.
For more detail, we refer the reader to the two references above.

Before we proceed, it should be clarified that although many dynamical microscopic simulations exist in the literature, their corresponding phase-space density is not necessarily known. On the other hand, to evaluate our method, we need to compare our estimated phase-space density with the true one. Fortunately, the true phase-space density of the system of self-propelled rods has been analytically derived in Refs.~\cite{PhysRevE.77.011920, PhysRevLett.101.268101} starting from the underlying microscopic system. Therefore, we test our method in the same manner as described in test case II.

In the following, we simulate this system by choosing the length of the rods as $l=1$, the temperature as $kT=2$, the active parameter as $v_0=2$, and, without loss of generality, $\hat{n}=(0,1)$. We also choose $\rho_0=5$ to satisfy $\rho_0>\rho_N$, since this is the more complex scenario, whose estimation is harder. We take 10 snapshots of the system to serve as the input data for our data-driven method. One snapshot of the system can be found in Fig.~\ref{Fig:RodSim}(a).

Next, we pass the snapshots to our data-driven method to estimate the single-particle phase-space density. We find that $r_{\text{eff}}=0.1$ gives the best optimized estimate. We compare our estimate with the analytic solution in Eq.~\eqref{Eq:analytic_f_self_prop_rod} and thereby validate the method in such active systems. Figure~\ref{Fig:RodSim}(b) shows that since our method returns a fair estimate of the density function $f$, its predictions for all the rest of the order parameters, such as the number density, are fair estimates of the corresponding analytic expressions in the mentioned references.
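Since the order parameters are moments of the phase-space density, they can be estimated directly from sampled orientations. The following is a minimal sketch under our own conventions (the standard polar and nematic order parameters, with the doubled angle for the nematic case), not the estimator of the references:

```python
import numpy as np

def order_parameters(theta):
    # theta: (N,) rod orientations within one spatial cell.
    # Polar order: |<(cos th, sin th)>|; nematic order: |<exp(2 i th)>|,
    # the doubled angle making theta and theta + pi equivalent.
    polar = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    nematic = abs(np.exp(2j * theta).mean())
    return polar, nematic

theta_aligned = np.zeros(1000)                                   # ferromagnetic order
theta_nematic = np.where(np.arange(1000) % 2 == 0, 0.0, np.pi)   # head/tail mixture
```

Perfect alignment gives both parameters equal to one, while the half-and-half head/tail mixture is purely nematic.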
It should be mentioned that the ultimate advantage of our method is its performance in situations where the simplifying assumptions are not valid and no analytic solution exists.

\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth]{Figure/vid19_im000001.png}
\includegraphics[width=0.49\columnwidth]{Figure/vid19_im000850.png}
\caption{Experimental system: Stages I and II of the passive particles actively driven by the swarm of bacteria. The left panel shows an early instance of the experiment. The right panel shows the system after the ultraviolet light, indicated by the bright circular region, is turned on. The grey background shows the active bacteria while the black point-like particles are the passive particles. \label{Fig:vid19_tracer_Stages_I_II}}
\end{figure}
\section{Experimental test case: Actively-driven spherical particles}
\label{Sec:Data}
We now apply our approach to an experimental test case. Bacterial swarms~\cite{Kearns_2010} are a staple experimental system for active matter~\cite{Ramaswamy_2010,Vicsek_2012}. Each bacterium is self-propelled via molecular motors controlling flagella and moves at speeds of up to tens of microns per second. While even the motion of an individual bacterium is nontrivial, the collective motion of many bacteria exhibits a range of dynamic phenomena, from jets to whirls to turbulence~\cite{Darnton_2010,Dunkel_2013}. The predominant theoretical approach, nematic hydrodynamics, does predict the onset of bacterial turbulence with associated topological defects~\cite{Wensink_2012,Bratanov_2015}. So the collective motion of a bacterial swarm is highly non-trivial as well. In addition to quantifying the motion of the swarm, researchers have also studied particles embedded in the swarm. So while the particles themselves are inherently passive, they are actively driven~\cite{Patteson_2019} by a complex field.
See the Appendix for details on the experimental set-up and conditions.

The experiment at hand probes the effects of localized UV light on the collective motion of the swarm and, thus, on the spherical particles driven by the swarm. The perturbation of the localized UV light causes the bacteria where the light is localized to ultimately stop moving after some time window \cite{Patteson_2019}. What happens prior to this jamming was not resolved by the earlier experiments; in particular, it was not determined whether some of the bacteria flee the region of localized light before others eventually become jammed. We address this issue here.

To begin, we choose 150 consecutive movie frames well before and another 150 consecutive movie frames well after the ultraviolet light is turned on. We refer to these two movie sets, each approximately three seconds long, as stage-I and stage-II in the following. A snapshot of each of the two stages can be seen in Fig.~\ref{Fig:vid19_tracer_Stages_I_II}.
We use the trackpy package \cite{dan_allan_2019_3492186} in python to trace the positions of the spherical particles over time. We apply conventional selection criteria to remove spurious particles. After the positions of the particles are quantified in all of the movie frames, we use the change in the positions between consecutive frames to find the velocities of the particles.

Having reconstructed the positions and the velocities of the spherical particles, we implement our method to estimate the phase-space density for each movie frame. A comparison of the densities reveals that they are close to steady-state in both stage-I and stage-II. However, the system goes out of equilibrium when the ultraviolet light is turned on and migrates from stage-I to stage-II.
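The velocity-reconstruction step can be sketched as follows. This is a minimal sketch that assumes the trackpy trajectories have already been rearranged into a dense array; the frame interval `dt`, the array layout, and the use of central differences are our assumptions:

```python
import numpy as np

def velocities_from_tracks(positions, dt):
    # positions: (n_frames, n_particles, 2) array of tracked (x, y)
    # coordinates, e.g. pivoted out of a trackpy DataFrame.
    # np.gradient uses the change in position between consecutive
    # frames: central differences inside, one-sided at the ends.
    return np.gradient(positions, dt, axis=0)

# Example: particles moving at constant velocity are recovered exactly.
t = np.arange(5)[:, None, None]               # 5 frames
v_true = np.array([[1.0, -0.5], [0.2, 2.0]])  # two particles
pos = t * v_true                              # straight-line trajectories
v_est = velocities_from_tracks(pos, dt=1.0)
```

For noisy experimental tracks, the same finite differences apply, though one may wish to smooth the trajectories first.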
\begin{figure*}[thb]
\centering
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/vid19_tracer_frames_0_150_f_4D.png}}
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/diff_PhaseSpace_4D.png}}
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/vid19_tracer_frames_0_150_C_4D.png}}
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/vid19_tracer_frames_850_1000_C_4D.png}}
\caption{Experimental system: Four-dimensional presentation of quantities associated with the data, where the outer axes refer to the real-space positions, and the inner axes of the small sub-plots refer to the velocity spaces.
(a) An estimate of the phase-space density of the system in stage-I.
(b) The difference between the phase-space density of stage-II and that of stage-I. A significant drop can be seen at the center, together with a milder drop around it, while an increase in the phase-space density is visible at the bottom of the container and at the top-left corner. This result indicates that the spherical particles are migrating away from the center in real-space.
(c) Four-dimensional presentation of the right-hand side of the Boltzmann equation before the UV light.
(d) Four-dimensional presentation of the right-hand side of the Boltzmann equation after the UV light.
 \label{Fig:vid19_tracer_frames0_150_850_1000_f} \label{Fig:vid19_tracer_frames0_150_850_1000_C}}
\end{figure*}
Our estimates of the phase-space densities and of the interaction terms in stage-I and stage-II are shown in Fig.~\ref{Fig:vid19_tracer_frames0_150_850_1000_f}(a,b) and (c,d), respectively. Panels (a,b) show that the probability of finding particles at the center is reduced but increased at the upper-left corner and the bottom of the container. This figure qualitatively indicates that the spherical particles are running away from the ultraviolet light into the upper-left and bottom regions of real-space.
Since the spherical particles are passive, and assuming that they are indicators of the active bacteria around them, we argue that the living matter---the bacteria---is evading the UV light. Given the crowding effect, it is difficult for all of the bacteria to flee the UV light, just as it is for a panicked crowd trying to exit a stadium, for example. However, as the bacteria become jammed in the region of the UV light~\cite{Patteson_2019}, we speculate that bacteria just outside the region re-route, since their jammed fellows in the illuminated region now emerge as an obstacle. It is presumably this re-routing that directs both the bacteria and the spherical particles away from the UV light. It would be interesting to determine whether or not this phenomenon is robust, i.e., extends beyond our test case.

\begin{figure*}
\centering
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/mean_rho.png}}
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/mean_P.png}}
\subfigure[]{
\includegraphics[width=\columnwidth]{Figure/mean_S.png}}
\subfigure[]{
\includegraphics[width=0.9\columnwidth]{Figure/vid19_tracer_frames_0_150P_rho_fit.png}
}
\caption{Experimental system: (a) Real-space density, (b) the pressure, and (c) the entropy of the actively-driven spherical particles, reconstructed from the mean of our estimates over all of the system's snapshots in stage I. \label{Fig:mean_rho}
(d) Pressure versus the real-space density of the spherical particles in stage I. Comparing this plot with Fig.~\ref{Fig:TracerEqState2} indicates that the equation of state is the same in both stages I and II.
\n}\n\\end{figure*}\n\n\n\\begin{figure*}[tbh]\n\\centering\n\\subfigure[]{\n\\includegraphics[width=\\columnwidth]{Figure\/diff_rho.png}}\n\\subfigure[]{\n\\includegraphics[width=\\columnwidth]{Figure\/diff_P.png}}\n\\caption{Experimental system: (a) The difference between the\n real-space density of the spherical particles in stages I and II. The figure quantifies the migration of the particles away from the ultraviolet light. \\label{Fig:diff_rho}\n(b) The difference between the pressure of the spherical particles in\nstages I and II. The figure shows that ultraviolet light leads to a\ndrop in the pressure in the illumination region. \\label{Fig:diff_P}\n}\n\\end{figure*}\n\n\n\n\n\nWith this theoretical framework, we can do more. We use Eq.~\\eqref{Eq:BoltzmannEq} to estimate the four-dimensional interaction term $C$ in both stages I and II, which are shown in Figs.~\\ref{Fig:vid19_tracer_frames0_150_850_1000_C}(c,d). \nIt should be noted that in the Boltzmann form of Eq.~\\eqref{Eq:C_Boltzmann}, the interaction term has the following general form\n\\begin{eqnarray}\n\\label{Eq:CFromAmplitude}\nC[\\vec{x},\\vec{v}] = \\int d^2v'\n\\Big(g(\\vec{x},\\vec{v}\\,' , \\vec{v}) -g(\\vec{x},\\vec{v},\\vec{v}\\,') \\Big),\\nonumber\\\\\n\\end{eqnarray}\nwhere $g(\\vec{x},\\vec{v}\\,' , \\vec{v}) \\equiv \\big|{\\cal{M}}\\left(\\vec{x},\\vec{v}\\,' \\rightarrow \\vec{v}\\right)\\big| f(\\vec{x},\\vec{v}\\,')$, and ${\\cal{M}}\\left(\\vec{x},\\vec{v}\\,' \\rightarrow \\vec{v}\\right)$ is the probability amplitude of scattering one particle with velocity $\\vec{v}\\,'$ into velocity $\\vec{v}$ off the rest of the system at position $\\vec{x}$. The right hand side of the latter formulation suggests that $\\int d^2v C[\\vec{x},\\vec{v}] =0$. This is consistent with our estimation in Fig.~\\ref{Fig:vid19_tracer_frames0_150_850_1000_C} where each sub-plot is symmetric between positive and negative values with a sum of approximately zero. 
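The vanishing of $\int d^2v\, C$ follows because the gain term $g(\vec{x},\vec{v}\,'\to\vec{v})$ and the loss term $g(\vec{x},\vec{v}\to\vec{v}\,')$ cancel pairwise once both velocity arguments are summed over. A minimal discretized illustration, in which a single velocity index stands in for $\vec{v}$ and the scattering kernel is an arbitrary random matrix (both our assumptions):

```python
import numpy as np

def collision_term(g, dv):
    # g[i, j] ~ |M(v_i -> v_j)| f(v_i): rate of scattering from v_i to v_j
    # at a fixed position x.
    # C(v_j) = sum_i (g[i, j] - g[j, i]) * dv, a discrete Eq. (CFromAmplitude).
    return (g - g.T).sum(axis=0) * dv

rng = np.random.default_rng(0)
g = rng.random((64, 64))        # arbitrary positive kernel at one position
C = collision_term(g, dv=0.1)   # sums to zero by the gain/loss antisymmetry
```

Each entry of `C` is generally non-zero, but the velocity sum vanishes identically, which is exactly the sum rule used above to validate the estimated interaction term.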
Since we did not enforce Eq.~\eqref{Eq:CFromAmplitude} in our estimation of $C$, the latter observation serves as a validation of our method.

Figure~\ref{Fig:mean_rho} shows the estimated number density, the pressure, the entropy, and the equation of state of the spherical particles in stage-I. These quantities are the averages of the corresponding ones over all the movie frames of stage-I since, as we discussed above, the phase-space densities associated with the frames in stage-I do not change significantly over time, suggesting that the system is close to steady-state. The latter is consistent with the linear relationship between the number density and the pressure of the system, as can be seen in the figure. Both of these findings are nontrivial given the complexity of the bacterial swarm driving the spherical particles.

To further quantify the migration of the spherical particles, we compute the real-space density of the particles in stage-II and show its difference with respect to the number density in stage-I, $\delta\rho \equiv \rho_{\text{II}}(\vec{x})-\rho_{\text{I}}(\vec{x})$, in Fig.~\ref{Fig:diff_rho}(a). The error associated with this difference is presented in Fig.~\ref{Fig:diff_rho_Error}, which shows that the difference in the number densities is statistically significant. As can be seen, the density is increased near the walls of the container and decreased at the ultraviolet site. Fig.~\ref{Fig:diff_rho}(b) shows the difference between the pressures of the spherical particles in stage-I and stage-II. We emphasize that our estimate of the phase-space density, and hence of the real-space density, does not rely solely on counting the particles at the position of interest. Instead, we use the distance of every observed particle from the position of interest to estimate the probability of finding particles at that point.
Therefore, our estimate of the probability at a position may differ from zero even though no particle has been observed at that point in the dataset. This point is illustrated in Fig.~\ref{Fig:Sim4DplotsBacteria}, where, using our knowledge of the underlying distribution in the simulations, we have shown that the true probability at a given point may be non-zero while no particle appears at that point in the dataset due to the lack of statistics. On the other hand, in Fig.~\ref{Fig:f_comparisons1}(d), we have shown that our estimate of the same probability is fairly close to the true one and overcomes the lack of statistics in the dataset.

As can be seen in Figs.~\ref{Fig:vid19_tracer_frames0_150_850_1000_C}(c,d), the interactions in the velocity sub-spaces around the center of the container are slightly changed by the ultraviolet light shone at that location. The change in the interactions is a result of the ultraviolet light, which leads to a drop in the pressure of the spherical particles, as can be seen in Fig.~\ref{Fig:diff_P}. On the other hand, the system has reached a steady-state; therefore, the balance of forces should be zero at every location in the container. So the only force on the particles that can balance the pressure is their interaction with the rest of the system, i.e., their interactions with each other via the bacteria.
Therefore, we can use UV light as a means of controlling the net flow of the spherical particles, and perhaps of the bacteria. By shining it on a high-density region, such as the bottom of the container, we can drive the particles toward the empty regions at the top of the box.

It should be noted that to derive the equation of state of the spherical particles, we have used Eqs.~(\ref{Eq:densityAndVelocity}, \ref{Eq:StressTensor}) to compute and compare the real-space densities and the pressures of the spherical particles at every position in the container.
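The distance-weighted density estimate described above behaves like a kernel density estimate. The following is an illustrative sketch only: a Gaussian kernel with $r_{\text{eff}}$ as its width is our assumption, standing in for the estimator defined by our optimization procedure.

```python
import numpy as np

def estimate_density(samples, queries, r_eff):
    # samples: (N, d) observed phase-space points from the snapshots.
    # queries: (M, d) positions where the density is to be estimated.
    # Every observed particle contributes a weight decaying with its
    # distance from the query point, so the estimate can be non-zero
    # even where no particle was observed.
    d = samples.shape[1]
    r2 = ((queries[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
    norm = (2.0 * np.pi * r_eff**2) ** (d / 2.0)
    return np.exp(-r2 / (2.0 * r_eff**2)).sum(axis=1) / (len(samples) * norm)
```

For a well-sampled distribution and a small $r_{\text{eff}}$, the estimate converges to the true density at the query point.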
Our best-fit lines, which can be seen in Figs.~\ref{Fig:mean_rho} and \ref{Fig:TracerEqState2}, indicate that the equation of state is the same in both stages and reads
\begin{eqnarray}
P \simeq 0.04 \rho,
\end{eqnarray}
where the coefficient is the effective temperature of the spherical particles in natural units.
Here, we have presented a novel derivation of the linear equation of state that is often assumed for such systems; see, for example, Ref.~\cite{Copenhagen2021}.

At this point, we turn our attention to the statistical field theory of the actively-driven spherical particles, analyzing the 150 consecutive movie frames of stage-I for this purpose. First, we define $\varphi(\vec{x}) \equiv \rho(\vec{x}) - \langle \rho(\vec{x})\rangle$, where the expectation symbol refers to the mean of the real-space particle densities in stage-I. Second, we ask: is the perturbative formalism via the Green's function method valid for this system over this time interval? To answer this question, we look at the expectation values of the terms that might appear in the exponential of Eq.~\eqref{Eq:ProbabilityCloseToEquil} and determine their strengths. We find that $\langle \varphi(\vec{x})^2\rangle$ is the most significant term and that all the other terms, especially $\langle \partial^2 \varphi \rangle$, are at least two orders of magnitude smaller. The leading perturbation terms are $\langle \varphi \partial_x \varphi \rangle$, $\langle \varphi \partial_y \varphi \rangle$, and $\langle \varphi^3 \rangle$, which are two orders of magnitude smaller than the leading term. In the following, we neglect the rest of the corrections to the partition function. The expectation values above are shown in Fig.~\ref{Fig:vid19_tracer_frames_750_850mean_phi2}.

At this point, we construct the Green's function of the driven spherical particle system.
In light of our evaluations of the expectation values above, we can safely assume that to leading order $Z[J] \simeq Z_0[J]$. Therefore, from Eqs.~(\ref{Eq:CorrFunction},\ref{Eq:Z_0J}), we can conclude that
\begin{eqnarray}
\Delta\left(\vec{x}_1 - \vec{x}_2\right) \simeq - \langle \varphi(\vec{x}_1)\varphi(\vec{x}_2)\rangle.
\end{eqnarray}
An assessment of the four-dimensional expectation value on the right-hand side of the equation above, whose value is known from the data, shows that it can be split as
\begin{eqnarray}
\langle \varphi(\vec{x}_1)\varphi(\vec{x}_2)\rangle \equiv
\langle \varphi(\vec{x}_1)^2\rangle \delta(\vec{x}_1 - \vec{x}_2)
+
d(\vec{x}_1 - \vec{x}_2),
\end{eqnarray}
where $d(\vec{x}_1 - \vec{x}_2) \ll 1$, and $\langle \varphi(\vec{x})^2\rangle$ is the variance of $\varphi$ at position $\vec{x}$ regardless of the other positions; its numerical values at different $\vec{x}$ are shown in Fig.~\ref{Fig:vid19_tracer_frames_750_850mean_phi2}.
A direct substitution of the two equations above into Eq.~\eqref{Eq:GreenFunctionEquation} proves that
\begin{eqnarray}
\frac{\delta^2 S_{\text{total}}}{\delta \varphi(x_1)\delta \varphi(x_2)}\Big|_{\varphi_{_{\text{SP}}}}
\simeq
\frac{1}{\langle \varphi(\vec{x}_1)^2\rangle} \delta(\vec{x}_1 - \vec{x}_2).
\end{eqnarray}
The effective free energy of the system therefore reads
\begin{eqnarray}
F \simeq
\frac{1}{2}\int d^2x
\frac{1}{\langle \varphi(\vec{x})^2\rangle}
\varphi(\vec{x})^2,
\end{eqnarray}
and the effective chemical potential of the system is equal to
\begin{eqnarray}
\mu \equiv \frac{\delta F}{\delta \varphi(x)} =
\frac{1}{\langle \varphi(\vec{x})^2\rangle}
\varphi(\vec{x}).
\end{eqnarray}
The driven spherical particle system is, therefore, described by a Gaussian field theory with a spatially-dependent ``mass'' and, hence, a spatially varying chemical potential.
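With the Green's function diagonal in position, the effective free energy reduces to a sum over positions of $\varphi^2/\langle\varphi^2\rangle$. A minimal sketch of how it can be evaluated from density snapshots, with synthetic Gaussian fluctuations standing in for the measured frames (grid size, frame count, and noise level are our assumptions):

```python
import numpy as np

def effective_free_energy(rho_frames, dx):
    # rho_frames: (n_frames, nx, ny) real-space density per snapshot.
    phi = rho_frames - rho_frames.mean(axis=0)   # fluctuation field
    var = phi.var(axis=0)                        # <phi(x)^2> per position
    # F ~ (1/2) int d^2x  phi^2 / <phi^2>, evaluated frame by frame.
    F = 0.5 * (phi**2 / var).sum(axis=(1, 2)) * dx**2
    return var, F

rng = np.random.default_rng(2)
frames = 1.0 + 0.1 * rng.standard_normal((150, 8, 8))
var, F = effective_free_energy(frames, dx=1.0)
```

By construction, the frame-averaged free energy equals half the number of grid cells (times $dx^2$), since at each position the mean of $\varphi^2/\langle\varphi^2\rangle$ is one.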
We have, therefore, computed the pressure, the temperature, the entropy, the effective free energy, and the effective chemical potential of an actively-driven tracer-particle collective. We must again emphasize that the bacterial swarm is quite dynamical and complicated at the microscopic scale, which indicates that its tracers are also dynamical at the same scale, though exhibiting a different symmetry in terms of shape. Given the complex flow of the bacterial swarm, the novelty of our method is that it provides a simple field-theoretic description of the tracer particles at the continuum scale.

\section{Discussion}
\label{Sec:Conclusion}
We have presented a data-driven approach to obtain the single-particle phase-space density as the solution to the stochastic dynamic equation of an active matter system, from which physical quantities such as the number density, the bulk velocity, the stress tensor, the polarization vector, the nematic tensor, and the entropy can be extracted. We do not assume a particular form for the particle interactions beforehand. In other words, we pose an inverse method in which the data reveals the solution.

Should a stationary state exist, an analytic field theory can be constructed to make further predictions. In this paper, we have focused on a scalar field theory, given that we were analyzing spherically-shaped particles. Our approach can be readily extended to nematic or polar particles, provided the orientation and polarization data are available. It would be interesting to do so, since there are a number of nematic hydrodynamic theories addressing the onset of bacterial turbulence and the emergence of topological defects~\cite{Wensink_2012,Bratanov_2015}.
Interestingly, there exists a recent data-driven approach to quantitatively model bacterial swarms, rooted in simultaneous measurements of the orientation and velocity fields, to obtain effective parameters that can then serve as inputs into an equation of motion with an assumed form~\cite{Li_2018}. Here, again, we make no assumptions about the form of the solution.

In the experiment, the spherical particles were driven by a bacterial swarm whose motility was affected by UV light. We found that the system was in steady-state before and after the introduction of a localized region of UV light, and found that the particles obey a Gaussian field theory with a spatially-varying variance, or mass, in the language of conventional field theory. We obtained a simple equation of state in which the pressure is proportional to the density, so that an effective temperature is readily identifiable, despite the particles being driven by a complex bacterial flow. We also found that, within the swarm, the spherical particles moved away from the region of UV light. We propose that this outflow is due to the re-routing of some of the {\it Serratia}, since those within the localized region of the light become jammed, along with any spherical particles in the region. The novelty of our method is in constructing a simple continuum description of a system that is assumed to be complicated at the microscopic level.

In the active matter community, there has been key work addressing the existence of an equation of state~\cite{Solon_2015,Ginot_2015}. Specifically, when considering active fluids, one may define the pressure as the mean force per unit area exerted by the particles on a confining wall. Alternatively, one can extract the pressure from the trace of a bulk stress tensor, as is done here. The two definitions need not be equivalent, at least not in the generic case, given that active matter is a non-equilibrium phenomenon.
Some researchers have argued that an equation of state with pressure as a state function is useless unless one knows the specifics of the particle-wall interactions in steady-state~\cite{Solon_2015}. We are able to bypass such a quandary, at least in steady-state, because the various physical quantities are extracted from the data, thereby containing all the information about the interactions between the particles and between the particles and their confinement. We allow the data to talk, walls and all. We note that recent work extracting the single-particle density from data for equilibrium systems has been used to directly construct a free energy~\cite{Yatsyshin_2020}. What we present here is more general.

While we have focused on an active matter system with particles at the micron scale, we can readily apply our method to particles at the molecular scale. Consider, for example, an enzyme interacting with DNA. Our method also applies to systems at much larger scales, such as astronomical systems, where such data-driven methods have been of interest~\cite{2009MNRAS.393..703M}. Herein lies the power of the single-particle phase-space density and its dynamic equation: its applicability to {\it any} many-body physical system, regardless of scale. Moreover, while in this paper we have applied the method to a fixed number of particles, the $C$ term in Eq.~\eqref{Eq:BoltzmannEq} also takes into account matter creation and/or annihilation, such as cell birth/death or protein synthesis/degradation, depending on the system of focus.

Finally, our data-driven approach can be contrasted with machine-learning methods, which are typically devoid of physical principles. It may, therefore, be tricky to use machine-learning methods to answer a physics question that seeks to understand how a phenomenon occurs, as compared to a classification question.
Such questions can ultimately be framed as questions with a yes-or-no answer, such as whether an image looks more like a cat or a dog, or a many-particle system more like a gas or a liquid. However, some scientists appreciate this point and are trying to integrate physics-based modeling with conventional machine-learning methods~\cite{Willard_2020}. This approach could ultimately prove fruitful. In this paper, however, we have been guided by the single-particle phase-space density, the stochastic dynamic equation, and its data-driven solutions in a given situation. By studying the system in different situations and under various perturbations, we can discover the system's underlying physics out of equilibrium, which is key for living matter. Moreover, by using the data to find solutions to dynamical stochastic equations, should the form of the equations themselves evolve with time as the system adapts, our method can account for such adaptations---a hallmark property of living matter.

\clearpage

\section{Introduction}
The potentials first discovered by Calogero and Sutherland \cite{C,Su} and subsequently generalized to arbitrary root systems by Olshanetsky and Perelomov \cite{OP} play a central role in the theory of classical and quantum completely integrable systems. One of the main themes of the original work by Olshanetsky and Perelomov was to establish quantum complete integrability, that is, the existence of complete sets of commuting operators. The actual eigenfunctions of the corresponding Hamiltonians were discussed in numerous subsequent publications \cite{PRZ, RT, Se, So}.

Our purpose in this paper is to study and settle a certain number of basic structural questions concerning the exact solvability of the Olshanetsky-Perelomov Hamiltonians.
In order to outline the main results of\nour paper, we first need to give a precise definition of what we mean\nby exact solvability. We will adopt a promising approach, which has\nrecently arisen in the framework of the theory of quasi-exactly\nsolvable potentials \\cite{Sh, T1, GKO1, GKO2}, by defining a quantum\nHamiltonian $\\Hop$ to be {\\it algebraically exactly solvable} if one\ncan explicitly construct an ordered basis for the underlying Hilbert\nspace such that the corresponding flag of subspaces is $\\Hop$\n-invariant. In terms of this approach, the first step in the treatment\nof an exactly solvable operator must be the construction of an\ninfinite flag of finite dimensional vector spaces ordered by\ninclusion, the determination of a collection of basic operators that\npreserve this flag, and the demonstration that the operator in\nquestion is generated by the basic ones. The second step is to prove\nthe $L^2$ completeness in the underlying Hilbert space of this family\nof subspaces.\n\nIn order to fit the Olshanetsky-Perelomov Hamiltonians of trigonometric type into this framework, we first recall that\nthese Hamiltonians are indexed by irreducible root systems, with the Calogero-Sutherland potentials corresponding to\ntype $A_{n}$ root systems. We thus consider the vector space of trigonometric functions which are invariant under the\nWeyl group\n$W$ of the given root system $R$. The partial order relation on dominant weights gives rise to a natural flag of\nfinite-dimensional subspaces of this infinite-dimensional vector space. It is quite evident that the flag in question\nis preserved by the ordinary, multi-dimensional Laplacian. 
Less evident is the fact that one can obtain other flag-preserving operators by factoring the Weyl denominator
$$
A = \prod_{\alpha\in R^+} \left( \E^{\alpha/2} - \E^{-\alpha/2} \right)
$$
into factors corresponding to the various orbits of the Weyl group on $R$. It turns out (see Proposition \ref{prop:invspace}) that the gradient of the logarithm of each of the resulting factors also preserves the flag in question. More generally, one obtains other flag-preserving second-order operators by taking linear combinations of the Laplacian and of these gradients. The Olshanetsky-Perelomov Hamiltonians are then obtained by a ground-state conjugation. This approach also sheds light on the presence of multiple coupling constants in some of the models; the number of coupling constants is precisely the number of invariant factors of $A$, i.e. the number of Weyl group orbits in $R$, or equivalently the number of distinct root lengths. We then show that if all the coupling constants are positive, then the action of the Hamiltonian on each subspace of the flag is diagonalizable. This is the first main result of our paper; it is given in Theorem 1. The second main result concerns the $L^2$ completeness of the resulting eigenfunctions in the underlying Hilbert space of $L^2$ functions on the alcove of the root system $R$. It is also interesting to note that if all the coupling constants are equal to 1, then one recovers a second-order differential operator whose eigenfunctions are precisely the characters of the corresponding simple Lie algebras. Thus the coupling constants can be regarded as parameters in a deformation of the classical characters. In the classical case, if one re-expresses the gradient of $\log A$ in terms of a formal power series, one obtains Freudenthal's recursion formula for the character coefficients.
This trick also works for the deformed characters, and\nleads to a recursion formula that allows one to straightforwardly\ncompute the eigenfunctions of the Olshanetsky-Perelomov Hamiltonians.\nThis result is presented in Section \\ref{sect:recform}.\n\nWe should point out that the Weyl-invariant deformed characters which appear in the expressions of the eigenfunctions\nof the Olshanetsky-Perelomov trigonometric Hamiltonians are related by a change of variables to the multivariate Jacobi\npolynomials which have been investigated by Heckman and Opdam \\cite{HO}. In particular, the analogue of the Freudenthal\nmultiplicity formula which is at the basis of the recursion formula we give in Proposition 19 for the eigenfunctions of\nthe Hamiltonians also appears in the context of their study. We should also mention the interesting recent\ncontributions of Brink, Turbiner and Wyllard \\cite{Br} in the general effort aimed at understanding the exact\nsolvability for multidimensional systems in an algebraic context.\n\n\n\n\n\\section{Trigonometric-Type Potentials Associated to Root Systems.}\n\nWe first recall the abstract definition of the trigonometric\nOlsha\\-netsky-Perelomov Hamiltonians in terms of root systems. Let\n$\\Vsp$ be a finite-dimensional real vector space endowed with a\npositive-definite inner product $(u,v)\\in\\reals,\\, u,v\\in \\Vsp$. We\nuse this inner product to identify $\\Vsp$ with $\\Vsp^*$. The induced\npositive-definite inner product on $\\Vsp^*$ will also be denoted by\n$(\\cdot,\\cdot)$.
Let $\\Delta: C^{\\infty}(\\Vsp; \\reals)\\to\nC^{\\infty}(\\Vsp; \\reals)$ and $\\nabla: C^{\\infty}(\\Vsp; \\reals)\\to\n\\Gamma(\\mathrm{T}\\Vsp)$ denote the corresponding Laplace-Beltrami and\ngradient operators.\n\nFor a non-zero $\\alpha\\in \\Vsp^*$, we set\n$\\ckalpha=2\\alpha\/(\\alpha,\\alpha)$ and let $s_\\alpha$ denote the \nreflection across the hyperplane orthogonal to $\\alpha$:\n$$s_\\alpha(\\beta)= \\beta- (\\ckalpha,\\beta)\\alpha,\\quad \\beta\\in\n\\Vsp^*.$$\n\nBy a root system, we mean a finite, spanning subset $R$ of $\\Vsp^*$\nsuch that $0 \\notin R$, $s_\\alpha(R)\\subset R$ for all $\\alpha\\in R$\nand $(\\ckalpha,\\beta)\\in \\integers$ for all $\\alpha,\\beta\\in R$.\n\\par\\noindent\nA root system $R$ is said to be irreducible if it cannot be\npartitioned into a union of root systems spanning orthogonal subspaces\nof $\\Vsp$.\n\nTo any root system $R$ corresponds a root lattice $Q=\\{\\sum_R m_\\alpha\n\\alpha : m_\\alpha\\in \\integers\\}$ and a weight lattice $P=\\{\\lambda\n\\in \\Vsp^{*}:( \\ckalpha,\\lambda ) \\in \\integers \\quad \\forall\n\\alpha\\in R\\}$. The Weyl group of $R$, generated by $s_\\alpha,\\,\n\\alpha\\in R$, will be denoted by $W$. The subgroup of $W$ fixing a\nparticular $\\lambda \\in \\Vsp^*$ will be denoted by $W_{\\lambda}$.\n\nThe hyperplanes $\\{\\lambda\\in \\Vsp^*: (\\alpha,\\lambda)=0\\},\\,\\alpha\\in\nR$ define a set of open Weyl chambers in $\\Vsp^*$. We choose a Weyl\nchamber $C$ and let $R^+=\\{\\alpha\\in R:(\\alpha,\\lambda)>0\\;\\forall\\lambda\\in C\\}$ denote the corresponding\nsubset of positive roots. Let $B\\subset R^+$ denote the set of simple\nroots, i.e. the positive roots that cannot be written as the sum of\ntwo positive roots. Let $P^+=P\\cap\\overline{C}$ denote the set of\ndominant weights.\n\n\nWe will say that a real number $c>0$ is a {\\it root length} if there exists\nan $\\alpha\\in R$ such that $c=\\|\\alpha\\|$.
Let $c$ be a root\nlength, and set\n\\begin{eqnarray*}\nR_c &=& \\{ \\alpha\\in R: \\|\\alpha\\|=c\\},\\\\\nR_c^+ &=& R_c\\cap R^+,\\\\\nU_c &=& \\frac{c^2}4 \\sum_{\\alpha\\in R_c^+} \\csc^2\n\\frac\\alpha 2.\n\\end{eqnarray*}\n\n\\noindent\nNote \\cite{H} that if $c$ is a root length, then $R_c$ is nothing but\nthe $W$-orbit of any $\\alpha\\in R_c$.\n\nThe Olshanetsky-Perelomov Hamiltonians with trigonometric potentials\nassociated to a root system $R$ are defined in terms of the above\ndata by\n$$\n\\Hop=-\\Delta + \\sum_c a_c U_c,\n$$\nwhere the sum is taken over all root lengths, $c$, and where the\n$a_c$'s are real coupling constants.\n\n\\section{The Algebraic Exact Solvability of $\\Hop$.}\n\nThe affine hyperplanes $\\{\\lambda\\in \\Vsp^*:(\\alpha,\\lambda)\\in\n2\\pi\\integers\\}$ determine in $\\Vsp^*$ a set of isometric open bounded\nsubsets called alcoves. Let $A$ denote the unique alcove (usually\nreferred to as the fundamental alcove) that is contained in $C$ and\nthat has the origin as a boundary point. Let $m$ denote the Lebesgue\nmeasure on $A$. From now on we use the inner product to identify $A$\nwith the corresponding subset of $\\Vsp$ and restrict the domain of\nfunctions introduced subsequently to $A$. Our goal is to construct a\nbasis for the underlying Hilbert space $L^2(A,m)$ in which the\nalgebraic exact solvability of $\\Hop$ is manifest. The elements of\nthis basis will be products of $W$-invariant trigonometric functions\nof certain linear forms on $\\Vsp$ with a common gauge factor vanishing\nalong the walls $\\{u\\in \\Vsp:\\alpha(u) \\in 2\\pi\\integers\\},\\,\n\\alpha\\in R$ of the potential terms $U_c$.\n\nWe now proceed to define this basis. Recall that a choice of positive\nroots naturally induces a partial order relation, $\\leq$, on the\nweight lattice.
For $\\lambda\\in P^+$ set\n\\begin{align*}\n P_\\lambda &= \\bigcup_{w\\in W} \\{w(\\mu) : \\mu\\in P^+ \\text{ and }\n \\mu \\leq \\lambda\\},\\\\\n P_{\\lambda^-} &= \\bigcup_{w\\in W} \\{w(\\mu) : \\mu\\in P^+ \\text{ and }\n \\mu \\lneqq \\lambda\\}.\n\\end{align*}\n\nFor $S \\subset \\Vsp^*$ let $\\trig(S)$ denote the complex vector space\nspanned by functions of the form $\\E^{\\I\\lambda},\\, \\lambda\\in S$. If\n$S$ is a $W$-invariant subset of $\\Vsp^*$, then there is a well-defined\naction of $W$ on $\\trig(S)$, namely\n$$w\\cdot \\E^{\\I\\lambda} = \\E^{\\I w(\\lambda)},\\quad w\\in W,\n\\lambda\\in S.$$\nIn this case, let $\\trig(S)^W$ denote the subspace of\n$W$-invariant functions.\n\nRecall that a root system $R$ is said to be reduced if for every\n$\\alpha\\in R$, the only roots homothetic to $\\alpha$ are $-\\alpha$ and\n$\\alpha$ itself. A root $\\alpha$ will be called non-divisible if\n$\\alpha\/2$ is not a root. Similarly, $\\alpha$ will be called\nnon-multiplicable if $2\\alpha$ is not a root. Of course, if $R$ is\nreduced, then all roots are both non-divisible and non-multiplicable.\nAn irreducible non-reduced system must be isomorphic to a root system\nof type $\\mathrm{BC}_n$ for some $n$. To describe the latter, take\n$\\Vsp=\\reals^n$ and let $\\epsilon_1,\\ldots, \\epsilon_n$ denote the\ndual basis of the standard basis of $\\reals^n$. The root system in\nquestion consists of three types of roots: short roots $\\pm\n\\epsilon_i$, medium roots $\\pm \\epsilon_i\\pm \\epsilon_j,\\, i\\neq j$,\nand long roots $\\pm 2\\epsilon_i$.\n\nFor reasons which will become clear later, it is convenient to re-express the\ncoupling constants $a_c$ appearing in $\\Hop$ as follows. We let $a_c=\nk_c(k_c-1)$ if\n$c$ is the length of a non-multiplicable root, and $a_c = k_c(k_c+k_{2c}-1)$\nif\n$R$ is non-reduced and $c$ is the length of the short roots.
\nLet \n$$\nA_c= \\prod_{\\alpha\\in R_c^+} \\sin\\frac\\alpha2 ,\\quad\nF=\\prod_c |A_c|^{k_c},\\quad\n\\rho_c= \\frac12 \\sum_{\\alpha\\in R_c^+} \\alpha,\\quad\n\\rho =\\sum_c k_c\\rho_c.\n$$\n\nThe following theorems, which are the main results of our paper, show\nthat the Olshanetsky-Perelomov trigonometric Hamiltonians $\\Hop$ are\nexactly solvable in the algebraic sense, and that the corresponding\neigenfunctions are physically meaningful.\n\n\\begin{theorem}\n\\label{thrm:es}\n Let $\\lambda$ be a dominant weight. If\n$k_c\\geq 0$ for each root\n length $c$, then there exists a unique\n $\\phi_\\lambda\\in\\trig(P_\\lambda)^W$ such that $F\\phi_\\lambda$ is an\n eigenfunction of $\\Hop$ with eigenvalue $\\|\\lambda+\\rho\\|^2$.\n Furthermore, if $F\\phi,\\, \\phi\\in\\trig(P)^W$ is an eigenfunction of\n $\\Hop$, then $\\phi=\\phi_\\lambda$ for some $\\lambda\\in P^+$.\n\\end{theorem}\n\n\n\\begin{theorem}\n\\label{thrm:compl}\nThe subspace $F\\trig(P)^W$ is dense in $L^2(A,m)$. Moreover, if\n$k_c\\geq0$ for all root lengths $c$, then the operator $\\Hop$ is\nessentially self-adjoint on the domain $F\\trig(P)^W\\subset L^2(A,m)$.\n\\end{theorem}\n \n \nWe begin with the proof of Theorem 2, assuming Theorem 1 to be true. We first have:\n\n\n\\begin{lemma}\n Let $D$ be an open, bounded subset of Euclidean space, and\n $f:D\\rightarrow\\reals$ a bounded continuous function that does\n not vanish on $D$ {\\rm (} but may vanish on the boundary {\\rm)}.\n With these assumptions, $f L^2(D,m)$ is a dense subset of\n $L^2(D,m)$.\n\\end{lemma}\n\\begin{proof}\n Let $D_0$, an open subset of $D$, be given, and choose\n $D_1$ such that $\\overline{D}_1 \\subset {D}_0$ and such that\n $m(D_0)-m(D_1)$ is smaller than a given $\\epsilon>0$. Note\n that $h=f^{-1}\\chi_{ D_1}$ is a well-defined\n element of $L^2(D)$ and that\n $fh=\\chi_{D_1}$. Consequently $\\chi_{D_0}$\n lies in the closure of $fL^2(D)$.
The conclusion follows\n from the fact that the characteristic functions form a dense\n subset of $L^2(D)$.\n\\end{proof}\n\n\n\n\\begin{proof}[Proof of Theorem \\ref{thrm:compl}]\n Let $\\Tsp$ denote the torus $\\Vsp^*\/{(2{\\pi} Q)}$. We use\n the inner product on $\\Vsp$ to identify $\\Tsp$ with the corresponding quotient\n of $\\Vsp$. Recall that $\\trig(P)$ is dense in $L^2(\\Tsp)$ by the\n Fourier representation theorem. Now $W$ acts on $\\Tsp$ and $A$\n serves as a fundamental region for this action \\cite[Ch. VI, \\numero\n 2.1]{Bourbaki}. Consequently $\\trig(P)^W$ is dense in $L^2(\\Tsp)^W$\n and the latter is naturally isomorphic to $L^2(A,m)$. We therefore\n conclude that $F\\trig(P)^W$ is dense in $L^2(A,m)$ by applying the\n preceding Lemma with $f=F$. We now prove the essential\n self-adjointness of $\\Hop$ on the domain $F\\trig(P)^W$. Let\n $A_0\\subset A$ be an open subset with a piece-wise smooth boundary.\n Let $\\phi_1,\\phi_2\\in\\trig(P)^W$ be given. Setting\n $\\psi_i=F\\phi_i,\\, i=1,2$, we have\n \\begin{align*}\n \\int_{A_0} \\Hop(\\psi_1)\\psi_2 - \\psi_1 \\Hop(\\psi_2) &=\n \\int_{A_0} \\mathop{\\mathrm{div}}(\\psi_2 \\nabla\\psi_1 -\n \\psi_1 \\nabla\\psi_2) \\\\\n &= \\int_{\\partial A_0} F^2(\\phi_2 \\nabla\\phi_1 -\\phi_1\n \\nabla\\phi_2).\n \\end{align*}\n Hence, as the boundary of $A_0$ approaches the boundary of\n $A$, the above integrals tend to zero, so that the operator $\\Hop$ is\n symmetric. By Theorem \\ref{thrm:es} and the density of $F\\trig(P)^W$ in\n$L^2(A,m)$, the span of eigenfunctions of\n$\\Hop$ is dense in\n $L^2(A)$, and therefore $\\Hop$ must be essentially self-adjoint.\n\\end{proof}\n\n We now proceed with the proof of Theorem 1. The strategy behind the proof of this\ntheorem is to conjugate the Olshanetsky-Perelomov Hamiltonians $\\Hop$ by a suitable\nmultiplication operator chosen in such a way that the resulting operator has a simple\naction on the space $\\trig(P)^W$.
This will give rise to an essential intertwining\nrelation which will in turn imply the algebraic exact solvability. In order to\ndetermine this multiplicative factor, we need a series of facts about\nroot lengths. \n\n\nLet $M_c:W\\rightarrow\\{\\pm 1\\}$ be the class function defined by\n$$M_c(s_\\alpha)=\n\\begin{cases}\n-1 & \\text{if } \\alpha\\in B\\cap R_c \\\\\n\\,1 & \\text{if } \\alpha\\in B\\backslash R_c\n\\end{cases}\n$$\nThe following result is a straightforward consequence of the definition of\n$A_c$:\n\\begin{proposition}\n For $w\\in W$ one has $w(A_c) = M_c(w) A_c$. In other words,\n $A_c$ is a relative invariant of $W$ with multiplier $M_c$.\n\\end{proposition}\n\n\\noindent Moreover, we have:\n\n\\begin{proposition}\n \\label{prop:alpha-rho}\n Let $c$ be a root length. If $\\alpha\\in B$, then $(\n \\ckalpha,\\rho_c )$ takes one of four possible values: $1$ if\n $\\|\\alpha\\|=c$, $2$ if $\\|\\alpha\\|=c\/2$, $1\/2$ if $\\|\\alpha\\|=2c$,\n $0$ in all other cases.\n\\end{proposition}\n\\begin{proof}\n Let $\\alpha\\in B$ be given. The action of $s_\\alpha$ maps\n $\\alpha$ to $-\\alpha$ and permutes the elements of $R^+$ not\n homothetic to $\\alpha$ \\cite[Ch. VI, \\numero 1.6]{Bourbaki}. Let\n $\\beta\\in R_c^+$ be given and set $\\beta'=s_\\alpha(\\beta)$. Note\n that if $\\beta=\\beta'$, then $( \\ckalpha , \\beta) = 0$; and that if\n $\\beta'\\neq \\beta$, then $( \\ckalpha , \\beta+\\beta') = 0$. If\n $\\|\\alpha\\|\\notin\\{c,2c,c\/2\\}$, then $\\alpha$ is not homothetic to\n any element of $R_c$, and hence one can break up $\\rho_c$ into\n subterms of length one and two such that each subterm is annihilated\n by $\\ckalpha$. This proves the fourth assertion of the\n proposition. If $\\|\\alpha\\|=c$, then $\\rho_c$ is the sum of\n $\\alpha\/2$ and a remainder perpendicular to $\\ckalpha$.\n Consequently, $( \\ckalpha , \\rho_c ) = 1,$ thereby proving the first assertion. 
If\n$\\|\\alpha\\| = c\/2$, then $2\\alpha$ is also a\n root, and consequently $\\rho_c$ is the sum of $\\alpha$ and a\n remainder perpendicular to $\\ckalpha$. This implies the second assertion. The third\nassertion is proven similarly.\n\\end{proof}\n\n\\begin{corollary}\n If $c$ is the length of a non-multiplicable root, then $\\rho_c$ is a\n weight. If $R$ is non-reduced, and $c$ is the length of the short\n roots, then $\\rho_c$ is merely a half-weight.\n\\end{corollary}\n\n\\begin{corollary}\n \\label{cor:alpha-rho-int}\n Let $c$ be a root length. Then for all $\\alpha\\in R_c$, one has\n $( \\ckalpha, \\rho_c) \\in \\integers$. \n\\end{corollary}\n\\begin{proof}\n If $c$ is the length of a non-multiplicable root, then the claim follows from the\npreceding corollary. Suppose then that\n $2c$ is also a root length. For $\\alpha\\in R_c$ note that $2\n (2\\alpha)\\ck{} = \\ckalpha$ and that $2\\rho_c = \\rho_{2c}$. Hence\n $$( \\ckalpha, \\rho_c) = ( (2\\alpha)\\ck{},\n \\rho_{2c}).$$\n Since $2\\alpha$ is non-multiplicable, the right hand\n side is an integer by the preceding corollary.\n\\end{proof}\n\\begin{corollary}\n \\label{cor:rhoc2}\n Let $c$ be a root length and $w\\in W$. Then, $w(\\rho_c)\\in\n Q-\\rho_c$.\n\\end{corollary}\n\\begin{proof}\n Note that\n $$w(\\rho_c) = \\frac12 \\sum_{\\alpha\\in R_c^+} \\sigma_\\alpha(w)\n \\alpha,$$\n where $\\sigma_\\alpha(w)$ is either $1$ or $-1$. Hence,\n $\\rho_c+w(\\rho_c)$ is the sum of all $\\alpha\\in R_c^+$ such that\n $\\sigma_\\alpha(w)=1$.\n\\end{proof}\n \nWe are now ready for the next step leading to the required\nintertwining relation, which is to show that $\\trig(P_\\lambda)^W$\nis an invariant subspace of $\\nabla \\log |A_c|$. First, we have:\n\n\n\\begin{proposition}\n \\label{prop:relibase}\n Let $c$ be a root length.
If $\\phi\\in\\trig(P-\\rho_c)$ is a relative\n invariant of $W$ with multiplier $M_c$, then $\\phi=A_c\\, \\phi_0$ for\n some $\\phi_0\\in\\trig(P)^W$.\n\\end{proposition}\n\\begin{proof}\n By assumption, $\\phi_1 = \\E^{\\I\\rho_c} \\phi$ is an element of\n $\\trig(P)$. Let $\\alpha\\in R_c^+$ be given. The first claim is\n that $\\phi_1$ is divisible by $\\E^{\\I\\alpha}-1$ in $\\trig(P)$. By\n assumption, $\\phi$ is a linear combination of expressions of the\n form $\\E^{\\I\\lambda} - \\E^{\\I \\lambda'}$ where $\\lambda+\\rho_c\\in\n P$, and $\\lambda'=s_\\alpha(\\lambda)$. Since $\\lambda$ is the\n difference of a weight and $\\rho_c$, Corollary\n \\ref{cor:alpha-rho-int} shows that $( \\ckalpha , \\lambda)\n \\in \\integers$. By switching $\\lambda$ and $\\lambda'$, if necessary,\n one may assume without loss of generality that $-(\n \\ckalpha,\\lambda) \\in \\natnums $. The claim follows by noting\n that\n $$\n \\E^{\\I\\lambda} - \\E^{\\I\\lambda'} = \\E^{\\I\\lambda} \\lp 1-\\E^{-\\I(\n \\ckalpha, \\lambda ) \\alpha} \\rp,\n $$\n and by factoring the right hand side in the usual fashion.\n \n Note that $\\trig(P)$ with the natural function multiplication is\n a unique factorization domain \\cite[Ch. VI, \\numero 3.1]{Bourbaki}.\n Hence the preceding claim implies that there exists a $\\phi_0\\in\n \\trig(P)$ such that\n $$\\phi_1 = \\phi_0\\prod_{\\alpha\\in R_c^+} \\lp \\E^{\\I \\alpha} - 1\\rp.$$\n The proof is concluded by noting that up to a constant factor, $A_c$\n is equal to\n $$\\E^{-\\I\\rho_c}\\prod_{\\alpha\\in R_c^+} \\lp \\E^{\\I \\alpha} - 1\\rp.$$\n The $W$-invariance of $\\phi_0$ follows from the fact that $A_c$ and\n $\\phi$ are relative invariants with the same multiplier.\n\\end{proof}\n\n\\noindent We have:\n\n\\begin{corollary}\n \\label{cor:acformula}\n Let $c$ be a root length. One has\n \\begin{equation}\n \\label{eq:ac}\n (2\\I)^{\\#R_c^+} A_c = \\frac{1}{\\# W_{\\rho_c}} \\sum_{w\\in W} M_c(w)\n \\E^{\\I w( \\rho_c)}.
\n \\end{equation}\n\\end{corollary}\n\n\\begin{proposition}\n \\label{prop:gradlog-action}\n The differential operator $\\nabla \\log |A_c|$ has a well-defined\n action on $\\trig(P)^W$.\n\\end{proposition}\n\\begin{proof}\n Let $\\phi\\in \\trig(P)^W$. The claim is that $(\\nabla \\log\n |A_c|)\\,(\\phi)\\in \\trig(P)^W$. By Corollaries \\ref{cor:rhoc2} and\n \\ref{cor:acformula}, $A_c \\in \\trig(Q-\\rho_c)$, and hence $\\nabla\n A_c(\\phi)\\in \\trig(P-\\rho_c)$. Since $\\nabla$ is a $W$-invariant\n operator, $\\nabla A_c(\\phi)$ is a relative invariant of $W$ with\n multiplier $M_c$. Hence, by Proposition \\ref{prop:relibase},\n there exists a $\\phi_0\\in \\trig(P)^W$ such that $\\nabla A_c(\\phi) =\n A_c\\phi_0$.\n\\end{proof}\nWe now have:\n\\begin{proposition}\n \\label{prop:invspace}\n If $\\lambda\\in P^+$, then $\\trig(P_\\lambda)^W$ is an invariant\n subspace of $\\nabla\n \\log|A_c|$.\n\\end{proposition}\n\\begin{proof}\n Let $\\phi\\in\\trig(P_\\lambda)^W$ be given. Set $\\phi_0 = (\\nabla\n \\log|A_c|)(\\phi)$. By Proposition \\ref{prop:gradlog-action},\n $\\phi_0\\in\\trig(P)^W$. Let $\\mu$ be a maximal element of\n $\\supp(\\phi_0)$. Consequently $\\mu+\\rho_c$ is a maximal\n element of $\\supp(A_c\\phi_0)$. Now\n \\begin{eqnarray*}\n A_c &=& b_1 \\E^{\\I\\rho_c} + \\text{ lower order terms } , \\\\\n \\phi &=& b_2 \\E^{\\I\\lambda} + \\text{ lower order terms }, \n \\end{eqnarray*}\n where $b_1, b_2$ are non-zero constants, and hence,\n $$(\\nabla A_c) (\\phi) = -b_1 b_2 (\\rho_c,\\lambda) \\E^{\\I\n (\\rho_c+\\lambda)} + \\text{ lower order terms }.\n $$\n Since $(\\rho_c,\\lambda)>0$, one must have $\\rho_c+\\lambda =\n \\rho_c+\\mu$. 
Therefore $\\mu=\\lambda$, and\n $\\phi_0\\in\\trig(P_\\lambda)^W$.\n\\end{proof}\n\n\n\n\\noindent\nThe basic identity which will give rise to the intertwining relation which\nwe are looking for is given in the following proposition:\n\\begin{proposition}\n \\label{prop:gauge-xform}\n Let $f_1,\\ldots, f_n$ be smooth real-valued functions on $\\Vsp$, let\n $k_1,\\ldots, k_n$ be real constants and let\n$$ X = \\sum_{i=1}^n 2k_i \\nabla \\log |f_i|,\\qquad\n F = \\prod_{i=1}^n |f_i|^{k_i}. \n$$\nWe have the identity\n$$ F (-\\Delta - X) = (-\\Delta + U) F,$$\nwhere\n$$\nU = \\sum_i k_i(k_i-1) \\frac{\\|\\nabla f_i\\|^2}{f_i^2} + \\sum_{i\\neq\n j} k_i k_j \\frac{(\\nabla f_i \\, , \\nabla f_j)}{f_i f_j}+\\sum_i k_i\n\\frac{\\Delta f_i}{f_i}.\n$$\n\\end{proposition}\n\n\nThe application of this proposition to the Olshanetsky-Perelomov\nHamiltonians $\\Hop$ requires a number of intermediate formulas.\n\n\\begin{proposition}\n \\label{prop:nabla-ac}\n Let $c$ be a root length. One has\n \\begin{align}\n \\label{eq:delta_ac}\n \\Delta A_c &= -\\|\\rho_c\\|^2 A_c ,\\\\\n \\label{eq:nabla_ac2}\n \\| \\nabla A_c \\|^2 &= \\lp U_c - \\| \\rho_c \\|^2 \\rp A_c^2 , \n \\end{align}\n\\end{proposition}\n\\begin{proof}\n Note that for $\\lambda\\in \\Vsp^*$ one has $\\Delta \\E^{\\I\\lambda} = -\n \\|\\lambda\\|^2 \\E^{\\I\\lambda}.$ Formula \\eqref{eq:delta_ac} follows\n immediately from \\eqref{eq:ac}. Note that\n \\begin{equation}\n \\label{eq:nabla_ac}\n \\nabla A_c = \\frac{A_c}2 \\sum_{\\alpha\\in R_c^+}\n \\cot\\frac\\alpha2 \\nabla\\alpha. 
\n \\end{equation}\n Consequently,\n \\begin{equation}\n \\label{eq:nabac1} \n \\|\\nabla A_c\\|^2 = \n \\lp \\frac{c^2}4 \\sum_\\alpha \\cot^2 \\frac\\alpha 2\n + \\frac14 \\sum_{\\alpha\\neq\\beta} (\\alpha,\\beta)\\,\n \\cot\\frac\\alpha2\\,\\cot\\frac\\beta2\n \\rp A_c^2 .\n \\end{equation}\n Taking the divergence of \\eqref{eq:nabla_ac} one obtains\n $$\n \\frac{\\Delta A_c}{A_c} = -\\frac{(\\#R_c^+)\\, c^2 }4 + \\frac14\n \\sum_{\\alpha\\neq\\beta} (\\alpha,\\beta)\\,\n \\cot\\frac\\alpha2\\,\\cot\\frac\\beta2.\n $$\n Solving for the second term of\n the right hand side of the latter equation, substituting into \\eqref{eq:nabac1}\nand applying\n \\eqref{eq:delta_ac}, we obtain \\eqref{eq:nabla_ac2}.\n \\end{proof}\n\n\\begin{proposition}\n \\label{prop:nabla-aca2c}\n If $c_1$, $c_2$ are distinct root lengths such that the\n corresponding roots are not homothetic, then\n \\begin{equation}\n \\label{eq:nabla_ac1ac2}\n (\\nabla A_{c_1}\\,, \\nabla A_{c_2}) = -(\\rho_{c_1},\\rho_{c_2}) \\,A_{c_1}\n A_{c_2} .\n \\end{equation}\n If $R$ is non-reduced and $c$ is the length of the short roots, then\n \\begin{equation}\n \\label{eq:nabla_aca2c}\n (\\nabla A_c\\,, \\nabla A_{2c}) = [\\, U_c - (\\rho_c,\\rho_{2c})\\, ] \\,A_c\n A_{2c} .\n \\end{equation}\n\\end{proposition}\n\\begin{proof}\n Let $c_1$, $c_2$ be given.
A straightforward generalization of the\n argument in Proposition \\ref{prop:relibase} yields\n $$\n A_{c_1} A_{c_2} = \\frac1{\\# W_{\\rho_{c_1}+\\rho_{c_2}}}\n \\sum_{w\\in W} M_{c_1}\\!(w)\\, M_{c_2}\\!(w)\\, \\E^{\\I\n w(\\rho_{c_1}+\\rho_{c_2})}.\n $$\n Hence,\n $$\n \\Delta (A_{c_1} A_{c_2}) = -\\| \\rho_{c_1}+\\rho_{c_2}\\|^2 A_{c_1} A_{c_2},\n $$\n and the desired conclusion follows immediately from the usual\n product rule for the Laplacian.\n \n Next, assume that the second of the Proposition's hypotheses holds.\n Set $S_c=\\prod_{\\alpha\\in R_c^+} \\cos(\\alpha\/2),$ and note that\n $A_{2c} = 2A_c S_c.$ Since $R$ is of type $\\mathrm{BC}_n$, a\n direct calculation will show that $\\Delta S_c = -\\|\\rho_c\\|^2 S_c$.\n Consequently,\n \\begin{align*}\n 2\\,(\\nabla A_c \\,, \\nabla S_c) &= \\frac12\\Delta A_{2c} - A_c\n \\Delta\n S_c - S_c\\Delta A_c = -\\|\\rho_c\\|^2 A_{2c}. \\\\\n (\\nabla A_c\\,, \\nabla A_{2c}) &= -\\|\\rho_c\\|^2 A_c A_{2c} + 2 S_c\\,\n \\|\\nabla A_c\\|^2.\n \\end{align*}\n The formula to be proved now follows from \\eqref{eq:nabla_ac2}.\n\\end{proof}\n\nWe can now state and prove the intertwining relation which is \nfundamental to the proof of our main result.\n\\begin{proposition}\n \\label{prop:hamgauge}\n Let\n $$\n \\tHop = -\\Delta - \\sum_c 2k_c \\nabla \\log |A_c|. \n $$\n We have\n $$\n F\\tHop = \\Hop F -\\|\\rho\\|^2. $$\n\\end{proposition}\n\\begin{proof}\n Apply Propositions \\ref{prop:gauge-xform}, \\ref{prop:nabla-ac},\n \\ref{prop:nabla-aca2c}.\n\\end{proof}\n\n\nFinally, we are ready to give the proof of Theorem 1, that is of the\nalgebraic exact solvability of the Olshanetsky-Perelomov Hamiltonian\n$\\Hop$. We begin with the following simple result from linear algebra.\n\\begin{proposition}\n \\label{prop:codim1}\n Let $\\Vsp$ be a finite-dimensional vector space over $\\cnums$, and\n $\\Vsp_1\\subset \\Vsp$ a codimension $1$ subspace.
Let $T$ be an\n endomorphism of $\\Vsp$ such that $\\Vsp_1$ is an invariant subspace, and\n let $\\kappa\\in\\cnums$ denote the unique eigenvalue of the\n corresponding endomorphism of $\\Vsp\/\\Vsp_1$. If $\\kappa$ is not an\n eigenvalue of $T|_{\\Vsp_1}$, then $\\kappa$ is a multiplicity $1$\n eigenvalue of $T$.\n\\end{proposition}\n\nIt should be noted that the assumption $k_c\\geq0$ in Theorem\n\\ref{thrm:es} is crucial. The necessity of this assumption is\nexplained by the following Proposition. Indeed, one should remark that\nthere exist certain negative values of $k_c$ for which the action of\n$\\Hop$ fails to be diagonalizable.\n\\begin{proposition}\n \\label{prop:unique-evalue}\n Let $\\mu<\\lambda$ be dominant weights. If $k_c\\geq0$ for each root\n length $c$, then $\\|\\lambda+\\rho\\| > \\|\\mu+\\rho\\|$.\n\\end{proposition}\n\\begin{proof}\n Note that \n $$\n \\|\\lambda+\\rho\\|^2 - \\|\\mu+\\rho\\|^2 = \\|\\lambda\\|^2 - \\|\\mu\\|^2\n + 2\\,(\\lambda-\\mu,\\rho).\n $$\n Using the fact that $\\lambda-\\mu\\in P^+$ one can easily show\n that $\\|\\lambda\\|>\\|\\mu\\|$. Furthermore, since $\\lambda-\\mu$ is a\n linear combination of basic roots with positive coefficients,\n Proposition \\ref{prop:alpha-rho} implies that\n $(\\lambda-\\mu,\\rho)>0$.\n\\end{proof}\n\nFinally, we have:\n\n\\noindent \n\\begin{proof}[Proof of Theorem \\ref{thrm:es}]\n \n Let $\\lambda$ be a dominant weight. By Proposition\n \\ref{prop:invspace}, $\\trig(P_\\lambda)^W$ is an invariant\n subspace of $\\tHop$. Using an argument similar to the one given in\n the proof of\n Proposition \\ref{prop:invspace}, it is not hard to verify that if %\n $\\phi\\in\\trig(P_\\lambda)^W$, then\n \\begin{equation}\n \\label{eq:Hcod1}\n \\lp \\tHop - \\|\\lambda\\|^2 - 2\\,(\\rho,\\lambda) \\rp\n (\\phi)\\in\\trig(P_{\\lambda^-})^W. \n \\end{equation}\n Note that $\\trig(P_{\\lambda^-})^W$ is a codimension $1$ subspace\n of $\\trig(P_\\lambda)^W$. 
Furthermore, by Proposition\n \\ref{prop:unique-evalue},\n $$\n \\|\\lambda\\|^2 + 2(\\lambda,\\rho) > \\|\\mu\\|^2+2(\\mu,\\rho)$$\n for all\n dominant weights $\\mu<\\lambda$. Hence, by Proposition\n \\ref{prop:codim1}, there exists a unique\n $\\phi_\\lambda\\in\\trig(P_\\lambda)^W$ such that $\\tHop\\phi_\\lambda =\n (\\|\\lambda\\|^2 + 2(\\rho,\\lambda))\\phi_\\lambda$. The first of the desired\n conclusions now follows by Proposition \\ref{prop:hamgauge}.\n \n To prove the converse let $F\\phi$ with $\\phi\\in\\trig(P)^W$ be an\n eigenfunction of $\\Hop$ with eigenvalue $\\kappa$. Let $\\lambda\\in\n P^+$ be a maximal element of $\\supp(\\phi)$. Since\n $\\trig(P_{\\lambda^-})^W$ is a codimension $1$ subspace of\n $\\trig(P_\\lambda)^W$, \\eqref{eq:Hcod1} implies that\n $\\kappa=\\|\\lambda\\|^2+2(\\lambda,\\rho)$. Consequently $\\lambda$ is\n the unique maximal element of $\\supp(\\phi)$. By Proposition\n \\ref{prop:codim1}, $\\kappa$ has multiplicity $1$, and this gives the\n desired conclusion.\n\\end{proof} \n\n \n \n\\section{A recursion formula for the eigenfunctions of $\\tHop$}\n\\label{sect:recform}\nIn the present section we show how to explicitly compute the\neigenfunctions of the Olshanetsky-Perelomov Hamiltonian by using a\n$k_c$-para\\-meterized analogue of the Freudenthal multiplicity\nformula. The generalized formula actually yields the eigenfunctions\n$\\phi_\\lambda$ of the related operator $\\tHop$. One should mention\nthat the eigenfunctions $\\phi_\\lambda$ first appeared in the\ninvestigations of Heckman and Opdam \\cite{HO}, who regard these functions\nas multi-variable generalizations of the Jacobi polynomials. The\neigenfunctions of $\\Hop$ are of course obtained by multiplication with\nthe gauge factor $F$.\n\nBy way of motivation it will be useful to recall the context of the\noriginal Freudenthal formula.
Suppose that $R$ is reduced and let\n$\\chi_\\lambda,\\,\\lambda\\in P^+$ denote a character of the\ncorresponding compact, simply connected Lie group. The Weyl character\nformula states that\n\\begin{equation}\n \\label{eq:wcf}\n\\chi_\\lambda = \n\\frac{\\sum_{w\\in W }\\sgn(w)\\, \\E^{\\I w(\\lambda+\\trho)}}\n{\\sum_{w \\in W }\\sgn(w)\\, \\E^{\\I w(\\trho)}}\n\\end{equation}\nwhere $\\trho$ is the half-sum of the positive roots. Now if $k_c=1$\nfor all $c$, then the potential term of $\\Hop$ is zero, and the gauge\nfactor $F$ is nothing but the $W$-antisymmetric denominator of\n\\eqref{eq:wcf}. Furthermore the numerator in \\eqref{eq:wcf} is the\nunique $W$-antisymmetric eigenfunction of $\\Delta$ with highest order\nterm $\\E^{\\I(\\lambda+\\trho)}$. Hence, by the intertwining relation\ndescribed in Proposition \\ref{prop:hamgauge}, the Weyl character\nformula is equivalent to the statement that $\\chi_\\lambda$ is an\neigenfunction of $\\tHop$ with eigenvalue $(\\lambda,\\lambda+2\\trho)$.\nThis observation leads directly to the classical Freudenthal formula\nfor the multiplicities of $\\chi_\\lambda$, and to the following\ngeneralization involving the parameters $k_c$. (See \\cite{FH} for more\ndetails regarding the Weyl and Freudenthal formulas.)\n\\begin{proposition}\n Let $\\phi_\\lambda=\\E^{\\I\\lambda} + \\sum_{\\mu<\\lambda} n_\\mu\n \\E^{\\I\\mu}$ be the eigenfunction of $\\tHop$ described in the\n statement and proof of Theorem \\ref{thrm:es}.
Setting $n_\\lambda=1$\n and $n_\\nu=0$ for $\\nu\\not\\leq \\lambda$, the remaining coefficients\n $n_\\mu,\\,\\mu<\\lambda$, are given by the following recursion formula:\n \\begin{equation}\n \\label{eq:fmf}\n (\\|\\lambda+\\rho\\|^2-\\|\\mu+\\rho\\|^2)\\,n_\\mu =\n2\\sum_{\\alpha\\in R^+}\n\\sum_{j\\geq1} k_{|\\alpha|}\\, (\\alpha,\\mu+j\\alpha)\\,n_{\\mu+j\\alpha}\n \\end{equation}\n\\end{proposition}\n\\begin{proof}\n Rewriting \n $$A_c=\\E^{\\I\\rho_c}\\prod_{\\alpha\\in R_c^+} \\lp\n 1-\\E^{-\\I\\alpha}\\rp,\n $$\n one obtains\n $$\n \\tHop = -\\Delta - \\I\\,\\nabla\\!\\rho-2\\,\\I\\!\\sum_{\\alpha\\in R^+}\n k_{|\\alpha|} \\frac{\\E^{-\\I\\alpha}}{1-\\E^{-\\I\\alpha}}\\nabla{\\alpha} .\n $$\n Let $\\trig((P))$ denote the vector space of formal power\n series $\\sum_{\\mu\\in P} c_\\mu \\E^{\\I\\mu}$. Since elements of\n $\\trig(P)$ are finitely supported sums, one has a well-defined\n multiplication operation $\\trig((P))\\times\\trig(P)\n \\rightarrow \\trig((P)).$ Thus, setting the domain of $\\tHop$ to\n be $\\trig(P)$ one can extend the operator's coefficient ring and\n write\n $$\n \\tHop = -\\Delta - \\I\\,\\nabla\\!\\rho-2\\,\\I\\!\\sum_{\\alpha\\in\n R^+}\\sum_{j\\geq 1} k_{|\\alpha|}\\, \\E^{-j\\I \\alpha}\n \\nabla{\\alpha} .\n $$\n However, because of Proposition \\ref{prop:gradlog-action} one can\n take the codomain of $\\tHop$ to be $\\trig(P)$ rather than all of\n $\\trig((P))$. Acting with the right hand side of the latter equation on $\\phi_\\lambda$,\n collecting like terms, and using the fact that $\\phi_\\lambda$ is an\n eigenfunction with eigenvalue $(\\lambda,\\lambda+2\\rho)$ immediately\n yields \\eqref{eq:fmf}.\n\\end{proof}\nIt is important to remark that by Proposition \\ref{prop:unique-evalue}\nthe coefficient of $n_\\mu$ appearing in \\eqref{eq:fmf} is never zero.\nConsequently \\eqref{eq:fmf} can indeed be used as a recursive formula for the\ncoefficients $n_\\mu$.
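As a consistency check on the recursion \eqref{eq:fmf} (our own rank-one computation, with the normalization $\|\alpha\|^{2}=2$), the first step can be carried out by hand:

```latex
% Rank one: R^+ = {\alpha}, \lambda = m\alpha/2, \rho = k\alpha/2,
% \mu = \lambda - \alpha. Only the j = 1 term contributes, since
% n_{\mu + j\alpha} = 0 for j >= 2. One finds
\[
  \|\lambda+\rho\|^{2}-\|\mu+\rho\|^{2}
  = \tfrac{1}{2}\bigl[(m+k)^{2}-(m+k-2)^{2}\bigr]
  = 2(m+k-1),
  \qquad
  2k\,(\alpha,\lambda)\,n_{\lambda} = 2km,
\]
\[
  \text{so that}\qquad n_{\lambda-\alpha} = \frac{km}{m+k-1}.
\]
% For k = 1 this gives n_{\lambda-\alpha} = 1, in agreement with the
% A_1 characters, all of whose weight multiplicities equal 1.
```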
One should also remark that the $W$-symmetry of\n$\\phi_\\lambda$ means that it suffices to use formula \\eqref{eq:fmf} to\ncalculate $n_\\mu$ with $\\mu\\in P^+$.\n\n\n \n\\section{INTRODUCTION}\nNowadays, production processes are highly optimized, monitored and controlled.\nOptimization and control need a lot of information about the process, which can partially be provided by sensor data.\nMeasurements by these sensors, however, can only be made at selected points or from outside of the product.\nFor this reason, and because the overall number of sensors that can be used within a process is generally limited, the knowledge about the process is always incomplete.\nOne way to fill these knowledge gaps is through additional information provided by numerical analysis.\nTo date, numerical analysis is already widely used in process design and optimization.\nThe computational methods applied in numerical analysis, such as the finite element method, finite volumes or finite differences, just to name a few, are already well established and provide good predictions about the processed materials.\nThese methods, however, all have in common that they quickly become costly in terms of computational time, even when solved on so-called high performance computing (HPC) architectures.\nTime-consuming simulations cannot be used in process control, since here time is a limiting factor.\nTherefore, the development of reduced order models (ROM) has become of particular interest over the last decades.\\\\\n\\\\\nThere already exists a variety of ROM approaches, which in the literature are often categorized into hierarchical, projection-based or data-driven ROM \\cite{EldredDunlavy2006,BennerGugercinWillcox2015}.\nIn hierarchical ROM, simply the physics of the problem is reduced, e.g.
by considering only the main flow direction \\cite{FringsBehrElgeti2017}.\nIn projection-based methods, the high-dimensional solution space is projected onto a smaller, reduced space.\nThe problem is then solved on the reduced solution space.\nProjection-based methods require access to the source code and often need a problem-dependent implementation.\nExamples of this methodology can be found in \\cite{HesthavenRozzaStamm2016,KeyEtAl2021}.\nWithin the scope of this work, we will, however, focus on the remaining group, namely the data-driven approach.\\\\\n\\\\\nThe category of data-driven ROM approaches, once again, incorporates a variety of different strategies.\nIn particular, the increased interest in machine learning has given rise to a number of new data-driven ROM strategies, for example physics-informed neural networks (PINN) \\cite{RaissiPerdikarisKarniadakis2019} or convolutional autoencoders \\cite{FukamiTaichiKoji2022}.\nThe strategy we pursue within the presented work is based on the ROM strategy proposed by Hesthaven and Ubbiali \\cite{HesthavenUbbiali2018}.\nTherein, the solution to a problem formulation is approximated under consideration of existing data sets, where each data set is associated with a distinct set of process parameters.\nBased on this data, we apply the so-called proper orthogonal decomposition (POD) \\cite{Chatterjee2000} to identify a reduced basis (RB) representation for the data set.\nOnce the RB is identified, it is possible to recover reduced coefficients for a specific parameter setting.\nThe relation between process parameters and reduced coefficients is then what is trained in the ROM to approximate the solution for an unseen set of process parameters.\nPossible strategies to create this mapping are radial basis functions (RBF) \\cite{WaltonHassanMorgan2013,XiaoEtAl2015}, Gaussian process regression (GPR) \\cite{RozzaEtAl2018} or artificial neural networks (ANN) \\cite{HesthavenUbbiali2018,BerzinsEtAl2020,WangHesthavenRay2019}.\\\\\n\\\\\nIn
the presented work, we apply an RB-ANN ROM to the industrial process of plastic profile extrusion.\nInside extrusion process lines, as sketched in Figure \\ref{fig:SketchExtrusionLine}, granular raw plastics are continuously processed into profiles with a fixed cross-sectional shape.\nThe extrusion process can therein be divided into two sections: first, the hot mixing and shaping part, and subsequently the cooling and calibration part.\nIn the remaining work, we will focus on the cooling and calibration part of extrusion.\nInside the calibration unit, the still liquid plastic melt is cooled down so that it solidifies and is fixed in shape.\nThereby, the cooling of the extrudate should be as uniform as possible, since otherwise undesired warpage and distortion will damage the profile.\nThe ROM presented in this work should be capable of delivering advance information on the temperature distribution inside the profile and\nthereby function as a decision-support tool in process adaptation under varying process conditions.\n\\begin{figure}[h!]\n\t\\centering\n\t\\input{figures\/extrusionLine.tex}\n\t\\caption{Process sections in a classic extrusion process.}\n\t\\label{fig:SketchExtrusionLine}\n\\end{figure}\n\n\\section{MATERIAL AND NUMERICAL MODELING}\\label{sec:problem}\nFully describing the behavior of plastic melts in the calibration unit would require the conservation equations for mass, momentum and energy, further constitutive models to describe the non-Newtonian nature of plastics, depending on the type of plastic also crystallization models, and finally a methodology able to represent the domain changes due to warpage and deformation. 
\nThis work, however, focuses on the evaluation and exploration of the ROM aspect.\nTherefore, we consider only the temperature equation to describe the cooling of the plastic melt.\nThe temperature distribution $\\temperature$ within the calibration unit, represented by $\\domain$, is described by the following equation:\n\\begin{align}\n\t\\density \\specHeat \\advectionVel \\grad \\temperature - \\thermCond \\grad^2 \\temperature = 0\\qquad&\\text{in}~\\domain. \\label{eq:Heat}\n\\end{align}\nHere, $\\density$ represents the density, $\\specHeat$ the specific heat, $\\advectionVel$ the advection velocity and $\\thermCond$ the thermal conductivity.\nUniform temperatures can be assumed as Dirichlet conditions at the inlet of the calibration unit, whereas the emitted heat flux through the profile surface depends on the difference between the ambient temperature $\\temperatureAmbient$ and the surface temperature.\nFurther, the heat transfer coefficient between the processed material and the surrounding cooling fluid influences the heat flux through the surface:\n\\begin{align}\n\t\\temperature = \\temperature_{in}, &\\qquad \\text{on}~\\domainDB,\\\\\n\t\\normal \\cdot \\thermCond \\grad \\temperature = \\heatTransferCoef \\left(\\temperature - \\temperatureAmbient\\right) , &\\qquad \\text{on}~\\domainRB.\\label{eq:RBC}\n\\end{align}\n\nThe data used within our ROM approach is generated with our in-house HPC stabilized finite element code.\nFor a more detailed discussion of the utilized discretization and stabilization methods, we refer the interested reader to \\cite{HelmigBehrElgeti2019}.\n\n\\section{STANDARDIZED NON-INTRUSIVE REDUCED ORDER MODEL}\\label{sec:method}\nThe ROM used within this work is the so-called standardized non-intrusive reduced order model (sniROM) proposed by B\\={e}rzi\\c{n}\\v{s} et al. 
\\cite{BerzinsEtAl2020}.\nIt follows the offline-online paradigm presented by Hesthaven and Ubbiali \\cite{HesthavenUbbiali2018}.\nWithin the offline step, the underlying data for the ROM is aggregated first.\nSubsequently, this data is used to construct an RB.\nLastly, the data and the RB are utilized to train an interpolation model, which establishes a relationship between problem-specific parameters and a linear combination of RB vectors.\nIn the online stage, the ROM can then be evaluated quickly for unseen parameters at low computational cost.\n\n\\subsection{Solution representation via reduced basis}\nThe pre-computed data consist of pointwise stored solutions to equations \\eqref{eq:Heat}-\\eqref{eq:RBC} for varying parameter sets $\\param$.\nWithin the scope of this work, $\\param$ is a vector containing the ambient temperature and the heat transfer coefficient $\\heatTransferCoef$.\nThe pointwise stored solutions will be referred to synonymously as snapshots in the remaining work.\nAs in the finite element method (FEM), the pointwise solution $\\solution$ can be represented on the domain by means of spatial basis functions $\\interpolationFctFE$:\n\\begin{align}\n\t\\solution \\left(\\param;\\x\\right) = \\snapshotV\\left(\\param\\right)^T\\interpolationFctFE\\left(\\x\\right) = \\sum_{i=1}^{\\numDOFs} \\snapshotV_i\\left(\\param\\right)\\interpolationFctFE_i\\left(\\x\\right),\n\\end{align}\nwhere $\\numDOFs$ denotes the number of data points per parameter configuration.\nThe principal idea of reduced basis methods (RBM) is now to approximate any solution vector $\\snapshotV \\left(\\param\\right)$ by a linear combination of the first $\\numReducedBasis$ RB vectors $\\reducedBasisT = \\left[\\reducedBasisV^1 | \\dots | \\reducedBasisV^\\numReducedBasis\\right]$ of a given dataset:\n\\begin{align}\n\t\\snapshotV\\left(\\param\\right) \\approx \\reducedSnapshot\\left(\\param\\right) = \\sum_{l=1}^{\\numReducedBasis} \\reducedCoeffV_l \\reducedBasisV^l = 
\\reducedBasisT \\reducedCoeffV \\left(\\param\\right),\n\\end{align}\nwhere $\\reducedCoeffV$ denotes the vector of reduced coefficients.\nWith the pre-computed snapshots, the reduced coefficients for a corresponding parameter set can be recovered via the RB vectors $\\reducedBasisT$:\n\\begin{equation}\n\t\\reducedCoeffV\\left(\\param\\right) = \\reducedBasisT^T \\snapshotV\\left(\\param\\right).\n\t\\label{eq:reducedCoefficients}\n\\end{equation}\n\n\\subsection{Construction of reduced basis}\nThe RB is constructed from a pre-computed dataset.\nThe dataset consists of snapshots, each associated with a distinct parameter set.\nThese snapshots form the columns of the overall snapshot matrix $\\snapshotT \\in \\ensuremath{\\mathbb{R}}^{\\numDOFs \\times \\numSnaps}$, where $\\numSnaps$ is the total number of snapshots used to construct the RB.\nPerforming a proper orthogonal decomposition (POD) \\cite{Chatterjee2000} on the normalized snapshot matrix yields:\n\\begin{align}\n\t\\snapshotT \/ \\sqrt{\\numSnaps} = \\leftSVT \\EigenValT \\rightSVT^T,\n\\end{align}\nwhere $\\EigenValT$ is a diagonal matrix containing the singular values of $\\snapshotT\/\\sqrt{\\numSnaps}$ in descending order, whereas $\\leftSVT$ and $\\rightSVT$ contain the left and right singular vectors, respectively.\nThe first $\\numReducedBasis$ left singular vectors in $\\leftSVT$ are chosen to be the RB of our ROM.\nFollowing the suggestion in \\cite{BerzinsEtAl2020}, the snapshot data should be pre-processed, cf. 
Figure \\ref{fig:DataStandardization}.\nThe data is centered, and parameters and coefficients are standardized.\n\\begin{figure}[h!]\n\t\\begin{tikzpicture}[scale=1.0]\n\t\t\\node (0,0) {\\includegraphics[width=1.0\\textwidth,trim=6cm 1cm 6cm 1cm,clip]{figures\/standardization.png}};\n\t\t\\draw [-latex,line width=1pt](-4.5,-1.2) -- (-3.5,-1.2) node[midway,below]{zero-center};\n\t\t\\draw [-latex,line width=1pt](-0.5,-1.2) -- (0.5,-1.2)node[midway,below]{decorrelate};\n\t\t\\draw [-latex,line width=1pt](4.0,-1.2) -- (5.0,-1.2)node[midway,below]{normalize scale};\n\t\\end{tikzpicture}\n\t\\caption{Data preprocessing.}\n\t\\label{fig:DataStandardization}\n\\end{figure}\nHowever, a direct singular value decomposition is not the most efficient way to compute the RB.\nIt can be computed more efficiently by the so-called method of snapshots \\cite{Sirovich1991},\nwhere the dimensionality of our dataset is exploited. \n\n\\paragraph{Method of snapshots}\nDatasets based on FE simulations typically have significantly fewer snapshots than degrees of freedom per snapshot.\nSince the squared snapshot matrix $\\snapshotT^T\\snapshotT\/{\\numSnaps} $ is only of dimension $\\ensuremath{\\mathbb{R}}^{\\numSnaps \\times \\numSnaps}$, it is computationally less expensive to compute the reduced basis of $\\snapshotT$ via an eigendecomposition of this small matrix:\n\\begin{align}\n\t\\snapshotT^T\\snapshotT\/{\\numSnaps} = \\rightSVT \\squaredEigenValT \\rightSVT^T.\n\\end{align}\nThis can then be used to calculate the singular values as $\\EigenValT =\\squaredEigenValT^{1\/2}$ and the left singular vectors of $\\snapshotT\/\\sqrt{\\numSnaps}$ by a simple matrix product $\\leftSVT = \\snapshotT \\rightSVT \\EigenValT^{-1}\/\\sqrt{\\numSnaps}$.\n\n\\subsection{Interpolation of reduced coefficients}\nOnce the reduced basis vectors are computed for a data set, equation \\eqref{eq:reducedCoefficients} is used to identify the reduced coefficients for each snapshot vector in $\\snapshotT$. 
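The agreement between a direct SVD and the method of snapshots can be checked numerically. The following is an illustrative NumPy sketch (not part of the paper); `S` stands for the snapshot matrix and all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Ns = 500, 8                      # many DOFs, few snapshots
S = rng.standard_normal((N, Ns))    # toy snapshot matrix

# Direct POD: thin SVD of the scaled snapshot matrix S / sqrt(Ns).
U, sig, _ = np.linalg.svd(S / np.sqrt(Ns), full_matrices=False)

# Method of snapshots: eigendecompose the small Ns x Ns matrix S^T S / Ns.
lam, V = np.linalg.eigh(S.T @ S / Ns)
order = np.argsort(lam)[::-1]       # descending eigenvalues = squared singular values
lam, V = lam[order], V[:, order]
sig_snap = np.sqrt(lam)
U_snap = S @ V / (np.sqrt(Ns) * sig_snap)   # recover the left singular vectors

# Both routes agree, up to the sign of each basis vector.
assert np.allclose(sig, sig_snap)
assert np.allclose(np.abs(U), np.abs(U_snap))
```

The eigendecomposition acts on an $N_s\times N_s$ matrix instead of an $N\times N_s$ one, which is the whole point of the method when $N \gg N_s$.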
\nThis mapping from distinct parameters to a reduced coefficient vector is then utilized to train an interpolation model, such that the model proposes reduced coefficients for an unseen choice of process parameters.\nThe interpolation model chosen within this work is a feedforward neural network (FNN), as shown in Figure \\ref{fig:NN}.\nThe inputs to our network are the varying process parameters $\\param$, whereas the reduced coefficients $\\reducedCoeffV$ are the outputs.\nThe FNN is implemented with hyperbolic tangent activation functions.\nAll parameters that describe the network or specify the training configuration, such as the number of hidden layers and their size, are called hyperparameters.\nThese parameters affect the quality of the interpolation model.\nFinding the combination of hyperparameters that results in the best interpolation is part of the offline training process.\n\\begin{figure}[h!]\n\t\\centering\n\t\\resizebox{0.45\\textwidth}{!}{\n\t\t\\begin{tikzpicture}[scale=1]\n\t\t\t\n\t\t\t\\node[circle,minimum size = 6mm, fill=white!70,draw] (Input-1) at (0,-1) {$\\boldsymbol{\\zeta}$};\n\t\t\n\t\t\t\n\t\t\t\\node[circle, minimum size = 6mm, fill=white!70,\tyshift=(\\outputnum-\\inputnum)*5 mm,draw] (Output-1) at (2.5*\\hiddenlayers+2.5,-1) {$y_1$};\n\t\t\t\\node[circle, minimum size = 6mm, fill=white!70,\tyshift=(\\outputnum-\\inputnum)*5 mm,fill=none ] (Output-2) at (2.5*\\hiddenlayers+2.5,-2) {\\small $\\begin{array}{c} \\cdot\\\\ \\cdot\\\\ \\cdot\\end{array}$};\n\t\t\t\\node[circle, minimum size = 6mm, fill=white!70,\tyshift=(\\outputnum-\\inputnum)*5 mm,draw] (Output-3) at (2.5*\\hiddenlayers+2.5,-3) {$y_L$}; \t\t\n\t\t\t\n\t\t\n\t\t\t\\foreach \\j in {1,...,\\hiddenlayers}\n\t\t\t{\n\t\t\t\t\\foreach \\i in {1,...,\\hiddennum}\n\t\t\t\t{\n\t\t\t\t\t\\node[circle, \n\t\t\t\t\tminimum size = 6mm,\n\t\t\t\t\tfill=white!50,\n\t\t\t\t\tyshift=(\\hiddennum-\\inputnum)*5 mm,\n\t\t\t\t\tdraw\n\t\t\t\t\t] 
(Hidden-\\j-\\i) at (2.5*\\j,-\\i) {};\n\t\t\t\t}\n\t\t\t}\n\t\t\n\t\t\n\t\t\n\t\t\t\\foreach \\i in {1,...,\\inputnum}\n\t\t\t{\n\t\t\t\t\\foreach \\j in {1,...,\\hiddennum}\n\t\t\t\t{\n\t\t\t\t\t\\draw[black,->, shorten >=1pt] (Input-\\i) -- (Hidden-1-\\j); \n\t\t\t\t}\n\t\t\t}\n\t\t\t\n\t\t\t\\foreach \\i in {1,...,\\hiddennum}\n\t\t\t{\n\t\t\t\t\\foreach \\j in {1,...,\\hiddennum}\n\t\t\t\t{\n\t\t\t\t\t\\draw[black,->, shorten >=1pt] (Hidden-1-\\i) -- (Hidden-2-\\j); \n\t\t\t\t}\n\t\t\t}\n\t\t\t\\foreach \\i in {1,...,\\hiddennum}\n\t\t\t{\n\t\t\t\t\\foreach \\j in {1,...,\\outputnum}\n\t\t\t\t{\n\t\t\t\t\t\\draw[black,->, shorten >=1pt] (Hidden-2-\\i) -- (Output-\\j);\n\t\t\t\t}\n\t\t\t}\n\t\t\t\n\t\\end{tikzpicture}}\n\t\n\t\\caption{Exemplary NN used for reduced coefficient interpolation.}\n\t\\label{fig:NN}\n\\end{figure}\n\n\\section{TEMPERATURE DISTRIBUTION IN CALIBRATION UNIT FOR EXTRUDED 3D-DUMBBELL PROFILE}\nIn this section, we apply the model presented in Section \\ref{sec:method} to the problem described in Section \\ref{sec:problem}, using a so-called dumbbell extrusion profile as an example.\nThe dimensions of the utilized geometry are given in Figure \\ref{fig:Sketch} and in Table \\ref{tab:dimensions}.\nThe material parameters used within the FE simulations for the data aggregation are shown in Table \\ref{tab:processParameters}.\nThe parameters varied in the data set are the ambient temperature $\\temperatureAmbient$ and the heat transfer coefficient $\\heatTransferCoef$ between material and surrounding fluid.\n\\begin{figure}[H]\n\t\\centering\n\n\t\\resizebox{\\textwidth}{!}{\n\t\t\\tikzset{every picture\/.style={line width=0.75pt}}\n\t\t\\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]\n\t\t\n\t\t\t\n\t\t\n\t\t\t\\draw [draw opacity=0][line width=1.5] (159.66,160.04) .. controls (154.24,175.7) and (139.36,186.94) .. (121.86,186.94) .. controls (99.77,186.94) and (81.86,169.03) .. (81.86,146.94) .. 
controls (81.86,124.84) and (99.77,106.94) .. (121.86,106.94) .. controls (138.02,106.94) and (151.95,116.52) .. (158.26,130.32) -- (121.86,146.94) -- cycle ; \\draw [line width=1.5] (159.66,160.04) .. controls (154.24,175.7) and (139.36,186.94) .. (121.86,186.94) .. controls (99.77,186.94) and (81.86,169.03) .. (81.86,146.94) .. controls (81.86,124.84) and (99.77,106.94) .. (121.86,106.94) .. controls (138.02,106.94) and (151.95,116.52) .. (158.26,130.32) ; \n\t\t\n\t\t\t\\draw [line width=1.5] (158,130) -- (238,130) ;\n\t\t\n\t\t\t\\draw [line width=1.5] (159.66,160.04) -- (238,160) ;\n\t\t\n\t\t\t\\draw [draw opacity=0][line width=1.5] (238.14,130.03) .. controls (243.36,121.59) and (252.64,115.93) .. (263.29,115.8) .. controls (279.86,115.59) and (293.46,128.85) .. (293.66,145.42) .. controls (293.87,161.99) and (280.6,175.59) .. (264.04,175.79) .. controls (252.59,175.93) and (242.55,169.64) .. (237.38,160.27) -- (263.67,145.79) -- cycle ; \\draw [line width=1.5] (238.14,130.03) .. controls (243.36,121.59) and (252.64,115.93) .. (263.29,115.8) .. controls (279.86,115.59) and (293.46,128.85) .. (293.66,145.42) .. controls (293.87,161.99) and (280.6,175.59) .. (264.04,175.79) .. controls (252.59,175.93) and (242.55,169.64) .. 
(237.38,160.27) ; \n\t\t\n\t\t\t\\draw [line width=0.75] (123,145) -- (81.8,145.2) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (294.2,145.6) -- (263.67,145.79) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (238,180) -- (158,180) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (158,175) -- (158,185) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (238,175) -- (238,185) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (148,160) -- (148,130) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (153,130) -- (143,130) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (153,160) -- (143,160) ;\n\t\t\n\t\t\t\\draw [dash pattern={on 0.84pt off 2.51pt}] (390,115) -- (600,115) ;\n\t\t\n\t\t\t\\draw [dash pattern={on 0.84pt off 2.51pt}] (390,175) -- (600,175) ;\n\t\t\n\t\t\t\\draw [dash pattern={on 0.84pt off 2.51pt}] (390,160) -- (600,160) ;\n\t\t\n\t\t\t\\draw [dash pattern={on 0.84pt off 2.51pt}] (390,130) -- (600,130) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (600,200) -- (390,200) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (390,195) -- (390,205) ;\n\t\t\n\t\t\t\\draw [line width=0.75] (600,195) -- (600,205) ;\n\t\t\n\t\t\t\\draw [line width=1.5] (389,187) -- (600,187) -- (600,103) -- (389,103) -- cycle ;\n\t\t\n\t\t\t\\draw (50,148) -- (50,175) ;\n\t\t\t\\draw [shift={(50,145)}, rotate = 90] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\t\t\n\t\t\t\\draw (77,175) -- (50,175) ;\n\t\t\t\\draw [shift={(80,175)}, rotate = 180] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\t\t\n\t\t\t\\draw (351,148) -- (351,175) ;\n\t\t\t\\draw [shift={(351,145)}, rotate = 90] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- (0,0) -- (8.93,4.29) -- cycle ;\n\t\t\n\t\t\t\\draw (378,175) -- (351,175) ;\n\t\t\t\\draw [shift={(381,175)}, rotate = 180] [fill={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.08] [draw opacity=0] (8.93,-4.29) -- 
(0,0) -- (8.93,4.29) -- cycle ;\n\t\t\t\n\t\t\n\t\t\t\\draw (91,124) node [anchor=north west][inner sep=0.75pt] [align=left] {$r_1$};\n\t\t\n\t\t\t\\draw (267,124) node [anchor=north west][inner sep=0.75pt] [align=left] {$r_2$};\n\t\t\n\t\t\t\\draw (152,134) node [anchor=north west][inner sep=0.75pt] [align=left] {$h$};\n\t\t\n\t\t\t\\draw (192,184) node [anchor=north west][inner sep=0.75pt] [align=left] {$w$};\n\t\t\n\t\t\t\\draw (484,205) node [anchor=north west][inner sep=0.75pt] [align=left] {$l$};\n\t\t\n\t\t\t\\draw (59,180) node [anchor=north west][inner sep=0.75pt] [align=left] {$x$};\n\t\t\n\t\t\t\\draw (33,141) node [anchor=north west][inner sep=0.75pt] [align=left] {$y$};\n\t\t\n\t\t\t\\draw (360,180) node [anchor=north west][inner sep=0.75pt] [align=left] {$z$};\n\t\t\n\t\t\t\\draw (334,141) node [anchor=north west][inner sep=0.75pt] [align=left] {$y$};\n\t\t\t\n\t\t\\end{tikzpicture}\n\t}\n\t\\caption{Sketch of dumbbell profile.}\n\t\\label{fig:Sketch}\n\\end{figure}\n\n\\begin{table}[H]\n\t\\centering\n\t\\begin{tabular}{llll}\n\t\t\\multicolumn{3}{l}{\\textbf{Geometric dimensions}} \\\\\n\t\t\\hline\n\t\t$r_1 $& radius & $0.006 $ & $\\left[\\mathrm{m}\\right]$ \\\\\n\t\t$r_2$ & radius & $0.0033 $ & $\\left[\\mathrm{m}\\right]$ \\\\\n\t\t$h$ & height &$ 0.01 $ & $\\left[\\mathrm{m}\\right]$ \\\\\n\t\t$w$ & width &$ 0.003 $& $\\left[\\mathrm{m}\\right]$ \\\\ \n\t\t$l$ & length &$ 1.0 $ & $\\left[\\mathrm{m}\\right]$ \n\t\\end{tabular}\n\n\t\\caption{Geometric parameters.}\n\t\\label{tab:dimensions}\n\\end{table}\n\\begin{table}[H]\n\t\\center\n\t\\begin{tabular}{lllr}\n\t\t\\multicolumn{4}{l}{\\textbf{Process and material parameters}} \\\\\n\t\t\\hline\n\t\t$\\density$ & density & 900 & $\\left[\\mathrm{kg\\,m^{-3}}\\right]$ \\\\\n\t\t$\\advectionVel$ & advection velocity & 0.00011 & $\\left[\\mathrm{m\\,s^{-1}}\\right]$ \\\\\n\t\t$\\temperatureAmbient$ & ambient temperature & $\\left[288,298\\right] $& $\\left[\\mathrm{K}\\right]$ 
\\\\\n\t\t$\\heatTransferCoef$ & heat transfer coefficient & $\\left[-320,-218\\right]$ & $\\left[\\mathrm{W}\/\\left(\\mathrm{m}^2\\,\\mathrm{K}\\right)\\right]$ \\\\\n\t\t& & & \\\\\n\t\t\\multicolumn{4}{l}{\\textbf{ROM and hyperparameters}} \\\\\n\t\t\\hline\n\t\t\\multicolumn{3}{l}{size of training set} & $100$ \\\\\n\t\t\\multicolumn{3}{l}{size of test set} & $100 $ \\\\\n\t\t\\multicolumn{3}{l}{size of validation set} & $100 $ \\\\\n\t\t\\multicolumn{3}{l}{number of outputs\/reduced basis vectors }& $30$ \\\\\n\t\t\\multicolumn{3}{l}{number of hidden layers} & 10 \\\\\n\t\t\\multicolumn{3}{l}{number of neurons }& 40 \t\t \n\t\\end{tabular}\n\n\t\\caption{Process and model parameters.}\n\t\\label{tab:processParameters}\n\\end{table}\n\nThe NN used within the ROM is trained with the hyperparameters listed in Table \\ref{tab:processParameters}.\nThe ROM temperature predictions on the dumbbell geometry, cf. Figures \\ref{fig:InletOutlet}-\\ref{fig:tempDistributionCrossSection}, match the expected temperature distributions.\nIn Figures \\ref{fig:InletOutlet} and \\ref{fig:tempDistributionCrossSection}, it can be observed that the extruded plastic at the outlet is still hotter in the larger cylinder of the dumbbell, even though the surface temperature of the profile, cf. Figure \\ref{fig:tempDistributionSurface}, has cooled down to a uniform temperature.\nThese results indicate that the implemented ROM could be applied in process control with only small modifications, e.g. extending the model by an additional heat transfer coefficient to handle both sides of the profile differently.\n\\begin{figure}[h!]\n\t\\centering\n\t\\subfloat[Constant inlet BC. 
\\label{fig:a}]{\\includegraphics[width=0.4\\textwidth]{figures\/inlet.png}}\\qquad\n\t\\subfloat[Temperature prediction at outlet. \\label{fig:b}]{\\includegraphics[width=0.4\\textwidth]{figures\/outlet.png}}\n\t\\caption{Cross-sectional temperature distributions at the inlet (a) and outlet (b).}\n\t\\label{fig:InletOutlet}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\center\n\t\\includegraphics[width=0.9\\textwidth]{figures\/surface.png}\n\t\\caption{Predicted temperature on the surface of the extrudate.}\n\t\\label{fig:tempDistributionSurface}\n\\end{figure}\n\\begin{figure}[h!]\n\t\\center\n\t\\includegraphics[width=0.9\\textwidth]{figures\/crossSection.png}\n\t\\caption{Predicted temperature inside the extrudate.}\n\t\\label{fig:tempDistributionCrossSection}\n\\end{figure}\n\n\\section{CONCLUSION}\nWithin the scope of this work, we presented a data-driven ROM approach.\nThe ROM allows the prediction of temperature distributions in plastic extrusion profiles for varying ambient temperatures and heat-transfer coefficients.\nFor the presented dumbbell example, we could show that the model predictions for unseen process configurations yielded physically reasonable results.\nThis work is only a first conceptual study and needs to be extended by further investigations of the model.\nIn future work, the relation between accuracy and the size of the training set, as well as the model behavior for more complex geometries, should be investigated.\nNevertheless, we can conclude that the model appears to be suitable for application in process control, where a ROM can be created during the process design, similarly to the extrusion tool itself.\n\\section*{ACKNOWLEDGMENT}\nFunded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy\u2013EXC-2023 Internet of Production\u2013390621612.\nFurther, the authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA at 
Forschungszentrum J\u00fclich.\n\n\n\\bibliographystyle{ieeetr} \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section*{Introduction}\\label{S:one}\n\nDouble-pushout (DPO) graph rewriting~\\cite{Ehrig1976} is the most well-known and influential approach to algebraic graph transformation. The rewriting mechanics are specified in terms of the universal properties of pushouts---for this reason, the approach is domain-independent and instantiates across a number of concrete notions of graphs and graph-like structures. Moreover, the introduction of adhesive, quasi-adhesive, weak adhesive and $\\cM$-adhesive categories~\\cite{lack2005adhesive,garner2012axioms,ehrig2004adhesive}---which, roughly speaking, ensure that the pushouts involved are ``well-behaved'', i.e.\\ they satisfy similar exactness properties as pushouts in the category of sets and functions---entails that a standard corpus of theorems~\\cite{DBLP:conf\/gg\/1997handbook} that ensures the ``good behaviour'' of DPO rewriting holds if the underlying ambient category is (quasi-, weak,\\,$\\cM$-)adhesive.\n\nAn important classical theorem of DPO rewriting is the \\emph{Concurrency Theorem}~\\cite{ehrig:2006aa}, which involves an analysis of \\emph{two} DPO productions applied in series. Given a \\emph{dependency relation} (which, intuitively, determines how the right-hand side of the first rule overlaps with the left-hand side of the second), a purely category-theoretic construction results in a \\emph{composite} rule which applies the two rules simultaneously. 
The Concurrency Theorem then states that the two rules can be applied in series in a way consistent with the relevant dependency relation if and only if the composite rule can be applied, yielding the same result.\n\nThe operation that takes two rules together with a dependency relation and produces a composite rule\ncan be considered an algebraic operation on the set of DPO productions for a given category.\nFrom this viewpoint, it is natural to ask whether this composition\noperation is associative. It is remarkable that this appears to have been open until recently: an elementary proof of this, in the context of adhesive categories, was announced by us in the conference version~\\cite{Behr2018} of this article.\n\nIn this extended version we:\n\\begin{itemize}[label=$\\triangleright$]\n\\item generalise the associativity result to the setting of various notions of $\\cM$-adhesive categories, giving a careful account of the precise technical conditions that are involved in the proof, which is given in its entirety here for the first time;\n\\item tie the proof of associativity to the classical Concurrency Theorem~\\cite{ehrig:2006aa}, showing the relevant categorical constructions that are shared by the two results;\n\\item give a more complete and detailed account of how the associativity theorem leads to the rule algebra framework, on which we elaborate below.\n\\end{itemize}\n\n\nIndeed, associativity is advantageous for a number of reasons. In~\\cite{bdg2016,bdgh2016}, the first author and his team developed the \\emph{rule algebra} framework for a concrete notion of multigraphs. Inspired by a standard construction in mathematical physics, the operation of rule composition along a common interface yields an associative algebra: given a free vector space with basis the set of DPO rules, the product of the associative algebra takes two basis elements to a formal sum, over all possible dependency relations, of their compositions. 
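As an illustration of such a composition product (a toy model, not the paper's categorical construction), consider the simplest case of rewriting finite sets, where a rule is characterised by its number of inputs $i$ and outputs $o$: the sum over overlaps $k$ below plays the role of the sum over dependency relations, and the product is exactly the normal-ordering formula for words $a^{\dagger\,o}a^i$ in creation and annihilation operators. A minimal Python sketch, with all names ours and one fixed ordering convention assumed:

```python
from math import comb, factorial

def mul_basis(r1, r2):
    """Compositions of the basis rules r1 = (o1, i1) and r2 = (o2, i2),
    summed over all overlaps k between the i1 inputs of r1 and the o2
    outputs of r2 (normal ordering of adag^o a^i operator words)."""
    (o1, i1), (o2, i2) = r1, r2
    out = {}
    for k in range(min(i1, o2) + 1):
        coeff = factorial(k) * comb(i1, k) * comb(o2, k)
        key = (o1 + o2 - k, i1 + i2 - k)
        out[key] = out.get(key, 0) + coeff
    return out

def mul(x, y):
    """Bilinear extension of mul_basis to formal sums (dicts rule -> coeff)."""
    out = {}
    for r1, c1 in x.items():
        for r2, c2 in y.items():
            for r, c in mul_basis(r1, r2).items():
                out[r] = out.get(r, 0) + c1 * c2 * c
    return {r: c for r, c in out.items() if c}

# Associativity of the composition product on a few basis rules:
x, y, z = {(1, 2): 1}, {(2, 1): 1}, {(0, 3): 1}
assert mul(mul(x, y), z) == mul(x, mul(y, z))
# One-element creation/deletion rules satisfy the Heisenberg-Weyl relation:
a, adag = {(0, 1): 1}, {(1, 0): 1}
assert mul(a, adag) == {(1, 1): 1, (0, 0): 1}   # a adag = adag a + 1
assert mul(adag, a) == {(1, 1): 1}
```

Associativity holds here because `mul_basis` reproduces an associative operator product; the general categorical statement for DPO rules is the subject of the associativity theorem discussed in the text.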
This associative algebra is useful in applications, being the formal carrier of combinatorial information that underlies \\emph{stochastic} interpretations of rewriting. The most famous example in mathematical physics is the Heisenberg-Weyl algebra~\\cite{blasiak2010combinatorial,blasiak2011combinatorial}, which served as the starting point for~\\cite{bdg2016}. Indeed,~\\cite{bdg2016,bdgh2016} generalised the Heisenberg-Weyl construction from mere set rewriting to multigraph rewriting. Our work, since it is expressed abstractly in terms of $\\cM$-adhesive categories, entails that the Heisenberg-Weyl and the DPO graph rewriting rule algebra can \\emph{both} be seen as two instances of the same construction, expressed in abstract categorical terms.\n\n\\textbf{Structure of the paper.} The necessary categorical preliminaries are collected in Section~\\ref{sec:mAdh}. Our main original results are collected\nin Section~\\ref{sec:acDPO}: following a brief recap of the DPO framework we first return to the classic Concurrency Theorem in Section~\\ref{sec:concur}, then prove our main associativity result (Theorem~\\ref{thm:assocDPO}) in Section~\\ref{sec:assoc}. We devote Section~\\ref{sec:ACDrd} to developing the rule algebra framework in the abstract setting, and proceed to give a number of applications: Heisenberg-Weyl algebra in Section~\\ref{sec:HW}, applications to combinatorics in Section~\\ref{sec:RT} and stochastic mechanics in Section~\\ref{sec:SM}. 
Our concluding remarks are in Section~\\ref{sec:conclusion}.\n\n\\newpage\n\n\\section{Background: \\texorpdfstring{$\\cM$}{M}-adhesive categories}%\n\\label{sec:mAdh}\n\nWe briefly review standard material, following mostly~\\cite{lack2005adhesive} (see~\\cite{DBLP:conf\/gg\/CorradiniMREHL97,DBLP:conf\/gg\/1997handbook} for further references).\n\n\\medskip\n\n\\begin{defi}[Van Kampen (VK) squares~\\cite{lack2005adhesive}]\\label{def:VK}\nIn a category $\\bfC$, a pushout square $\\star$ (below left) is a \\emph{Van Kampen (VK) square} whenever the following \\emph{VK condition} holds: in every commutative cube over the pushout square such as the one below right in which the back and right faces are pullbacks, the top face is a pushout if and only if the front and left faces are pullbacks.\n\\begin{equation*}\\gdef\\mycdScale{0.85}\n\\vcenter{\\hbox{\\begin{mycd}\n&\nC\n \\ar[dl, \"n\" description]\n \\ar[dr, phantom, \"\\star\"{sloped}]\n& [-25pt]\n&\nA\n \\ar[ll, \"f\" description]\n \\ar[dl, \"m\" description]\\\\\nD\n&\n&\nB\n \\ar[ll, \"g\" description]\n&\\\\\n\\end{mycd}}}\\qquad\\qquad\n\\vcenter{\\hbox{\\begin{mycd}\n&\nC'\n \\ar[dl, \"n'\" description]\n \\ar[dd, near start, \"c\" description]\n & [-25pt]\n&\nA'\n \\ar[ll, \"f'\" description]\n \\ar[dd, \"a\" description]\n \\ar[dl, \"m'\" description]\\\\\nD'\n \\ar[dd, near start, \"d\" description] & &\nB'\n \\ar[ll,crossing over, \"g'\" description] &\\\\\n&\nC\n \\ar[dl, \"n\" description]\n \\ar[dr, phantom, \"\\star\"{sloped}] & &\nA\n \\ar[ll, \"f\" description]\n \\ar[dl, \"m\" description]\\\\\nD &&\nB\n \\ar[ll, \"g\" description]\n \\ar[uu,leftarrow,crossing over, near end, \"b\" description] &\\\\\n\\end{mycd}}}\n\\end{equation*}\nAs an important variant (cf.\\ e.g.~\\cite{Grochau_Azzi_2019}), given a class of morphisms $\\cM\\subset \\mor{\\bfC}$, we let a \\emph{weak horizontal (weak vertical) $\\cM$-VK square} be defined as a pushout square $\\star$ where the VK condition is only required 
to hold for those commutative cubes where all horizontal (all vertical) morphisms are in $\\cM$. Weak vertical $\\cM$-VK squares are alternatively referred to simply as \\emph{\\textbf{$\\mathbf{\\cM}$-VK squares}}.\n\\end{defi}\n\nVarious notions of adhesive categories of importance to rewriting theories are known in the literature, with a number of different naming conventions. We opt here to follow the traditional convention~\\cite{ehrig2010categorical}.\n\\begin{defi}[Variants of adhesive categories]\\label{def:adhc}\nLet $\\bfC$ be a category.\n\\begin{itemize}[label=$\\triangleright$]\n \\item $\\bfC$ is an \\textbf{\\emph{adhesive category}}~\\cite{lack2005adhesive} if\n \\begin{enumerate}[(I)]\n \\item $\\bfC$ has pullbacks along arbitrary morphisms,\n \\item $\\bfC$ has pushouts along monomorphisms, and\n \\item pushouts along monomorphisms are VK squares.\n \\end{enumerate}\n \\item Let $\\cM\\subset \\mono{\\bfC}$ be a class of monomorphisms.\n \\begin{itemize}[label=$\\triangleright$]\n \\item $(\\bfC,\\cM)$ is an \\textbf{\\emph{adhesive HLR category}}~\\cite{ehrig:2006aa,ehrig2010categorical} if\n \\begin{enumerate}[(I')]\n \\item $\\bfC$ has pullbacks along $\\cM$-morphisms,\n \\item $\\bfC$ has pushouts along $\\cM$-morphisms,\n \\item pushouts along $\\cM$-morphisms are VK squares, and\n \\item $\\cM$ contains all isomorphisms and is stable under composition, pullback and pushout.\n \\end{enumerate}\n \\item If instead of axiom~(III') above pushouts along $\\cM$-morphisms are only required to be weak horizontal $\\cM$-VK squares (weak vertical $\\cM$-VK squares), $(\\bfC,\\cM)$ is referred to~\\cite{Grochau_Azzi_2019} as a \\textbf{\\emph{horizontal weak (vertical weak) adhesive HLR category}}. 
 If $\\cM$-pushouts are both weak horizontal \\emph{and} weak vertical VK squares, $(\\bfC,\\cM)$ is called a \\textbf{\\emph{weak adhesive HLR category}}.\n \\end{itemize}\n\\end{itemize}\nAs proposed in~\\cite{ehrig2010categorical}, we will alternatively refer to vertical weak adhesive HLR categories simply as \\textbf{\\emph{$\\mathbf{\\cM}$-adhesive categories}}.\n\\end{defi}\n\n\\begin{table}[htpb]\n\\centering\n\\vspace{2em}\n{\\setlength{\\extrarowheight}{5pt}\n \\begin{tabular}{C{4.5cm}ccccccccc}\nCategory\\newline(underlying data type)\\newline&\n\\rot{$\\cM=\\mathsf{mono}(\\mathbf{C})$} &\n \\rot{adhesive} &\n\\rot{adhesive HLR} &\n\\rot{horizontal weak adh.\\ HLR} &\n\\rot{vertical weak adh.\\ HLR}&\n\\rot{$\\mathcal{M}$-initial object}&\n\\rot{$\\mathcal{M}$-effective unions}&\n\\rot{references}\n \\\\[-0.5em] \\toprule\n$\\mathbf{Set}$\\newline (sets) &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\cite{lack2005adhesive} \\\\\n$\\mathbf{Graph}$\\newline(\\emph{directed} multigraphs) &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\cite{lack2005adhesive}\\\\\n$\\hat{\\mathbf{S}}$\\newline(presheaves on category $\\mathbf{S}$) &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\cite{Lack2006,Grochau_Azzi_2019}\\\\\n$\\mathbf{HyperGraph}$\\newline(hypergraphs) &\n \\ensuremath{\\checkmark} & & \\ensuremath{\\checkmark} &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\cite{ehrig:2006aa}\\\\\n$\\mathbf{AGraph}_{\\Sigma}$\\newline(attributed graphs over
signature $\\Sigma$) &\n & & \\ensuremath{\\checkmark} &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & & \\ensuremath{\\checkmark} & \\cite{ehrig:2006aa,Braatz:2010aa,Grochau_Azzi_2019}\\\\\n$\\mathbf{SymbGraph}_{D}$\\newline(symbolic graphs over $\\Sigma$-algebra $D$) &\n & & \\ensuremath{\\checkmark} &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & & \\ensuremath{\\checkmark} & \\cite{orejas2010symbolic,Grochau_Azzi_2019}\\\\\n$\\mathbf{uGraph}$\\newline(finite \\emph{undirected} multigraphs) &\n \\ensuremath{\\checkmark} & & &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & $\\begin{array}{c}\\text{\\cite{padberg2017towards}}\\\\\\text{Section~\\ref{sec:RT}}\\end{array}$\\\\\n$\\mathbf{PTnets}$\\newline(place\/transition nets) &\n & & &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\cite{ehrig:2006aa,Braatz:2010aa}\\\\\n$\\mathbf{ElemNets}$\\newline(elementary Petri nets) &\n & & &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\cite{ehrig:2006aa,Braatz:2010aa}\\\\\n$\\mathbf{Spec}$\\newline(algebraic specifications) &\n & & &\n \\ensuremath{\\checkmark} & \\ensuremath{\\checkmark} & \\ensuremath{?} & \\ensuremath{?} & \\cite{ehrig:2006aa}\\\\\n$\\mathbf{lSets}$\\newline(list sets) &\n & & &\n & \\ensuremath{\\checkmark} & \\ensuremath{?} & \\ensuremath{?} & \\cite{Heindel_2010}\\\\[1.5em]\n\\bottomrule\n\\end{tabular}\n}\n\\caption{\\label{tab:adh}Examples of categories exhibiting various forms of adhesivity, with additional properties relevant to this paper and references for further technical details provided. 
The symbol \\ensuremath{?}~indicates that a certain property is (to the best of our knowledge) not known to hold.}\n\\vspace{2em}\n\\end{table}\n\n\\begin{exa}\n In order to illustrate the utility of the various notions of adhesive categories, we list in Table~\\ref{tab:adh} examples for each of these types quoted from the rewriting literature. The table also lists additional properties of relevance to the present paper that will be introduced below. The construction principles used for many of the more sophisticated examples listed in the table are rooted in notions of slice, coslice, functor and comma categories (cf.\\ e.g.~\\cite{ehrig:2006aa,Braatz:2010aa} for further details). As an example of a comma category construction, we introduce in Section~\\ref{sec:RT} the category $\\mathbf{uGraph}$ of finite undirected multigraphs.\n\\end{exa}\n\n\n\\begin{rem}\nWe would like to emphasise the hierarchy among concepts as implied by Definition~\\ref{def:adhc}: every adhesive category is an adhesive HLR category (for $\\cM=\\mono{\\bfC}$), while every adhesive HLR category is a weak adhesive HLR category. By definition, weak adhesive HLR categories are both horizontal and vertical weak adhesive HLR categories. As discussed in detail in~\\cite{ehrig2010categorical}, horizontal weak adhesive HLR categories lack certain properties relevant to rewriting, in contrast to the vertical weak variant, which is why the latter is typically considered the most general type of rewriting-compatible category. Henceforth, we will thus follow the traditional convention of referring to a vertical weak adhesive HLR category as an $\\cM$-adhesive category.\n\\end{rem}\n\nIn many applications of interest, the data structures to be rewritten satisfy a certain notion of finiteness:\n\n\\begin{defi}[{Finitary $\\cM$-adhesive categories, cf.~\\cite[Defs.~2.8 and~4.1]{Braatz:2010aa}}]\nLet $(\\bfC,\\cM)$ be an $\\cM$-adhesive category.
$(\\bfC,\\cM)$ is called \\emph{\\textbf{finitary}} if each object is \\emph{finite} (i.e.\\ has only finitely many $\\cM$-subobjects). The \\emph{finitary restriction} $(\\bfC_{{\\rm fin}},\\cM_{{\\rm fin}})$ of $(\\bfC,\\cM)$ is defined as the restriction of $\\bfC$ to the full subcategory $\\bfC_{{\\rm fin}}$ of finite objects, and with $\\cM_{{\\rm fin}}:=\\cM\\cap \\bfC_{{\\rm fin}}$.\n\\end{defi}\n\nThe following result guarantees that every $\\cM$-adhesive category gives rise to a finitary variant via a form of restriction which in particular preserves the adhesivity properties.\n\n\\begin{thmC}[{\\cite[Thm.~4.6]{Braatz:2010aa}}]\\label{thm:finRes}\nThe finitary restriction $(\\bfC_{{\\rm fin}},\\cM_{{\\rm fin}})$ of an $\\cM$-adhesive category $(\\bfC,\\cM)$ is a finitary $\\cM$-adhesive category.\n\\end{thmC}\n\nAn important concept used throughout the paper is that of (isomorphism classes of) spans of $\\cM$-morphisms. A span consists of two $\\cM$-morphisms with a common source $C\\xhookleftarrow{b}B\\xhookrightarrow{a}A$. A homomorphism of spans from\n$C\\xhookleftarrow{b}B\\xhookrightarrow{a}A$ to $C\\xhookleftarrow{b'}B'\\xhookrightarrow{a'}A$ consists of a morphism $h\\mathrel{:} B\\rightarrow B'$ such\nthat the two resulting triangles commute. The spans are said to be isomorphic when $h$ is an isomorphism. The following lemma shows that isomorphism classes of spans are composable; they thus form the arrows of a category $\\mspan{\\bfC}$\nwith the same objects as $\\bfC$.\n\\begin{lem}\\label{lem:spans}\n Let $(\\bfC,\\cM)$ be an $\\cM$-adhesive category, and let $R= (C\\xhookleftarrow{b}B\\xhookrightarrow{a}A)$ and $S=(E\\xhookleftarrow{d}D\\xhookrightarrow{c}C)$ be two composable spans with $a,b,c,d\\in \\cM$.
 Then their \\emph{composition} $S\\circ R$, calculated via taking the pullback marked $\\mathsf{PB}$ below,\n \\begin{equation}\\label{eq:PBC}\n \\begin{mycd}\n & &\n F\\ar[dl,\"f\"']\\ar[dr,\"e\"]\\ar[dd,phantom,\"\\mathsf{PB}\"] & &\\\\\n & D \\ar[dl,\"d\"'] \\ar[dr,\"c\"] & &\n B \\ar[dl,\"b\"']\\ar[dr,\"a\"] & \\\\\n E & & C & & A\n \\end{mycd}\\qquad S\\circ R:= (E\\xleftarrow{d\\circ f}F\\xrightarrow{a\\circ e}A)\\,,\n \\end{equation}\n is also a span of $\\cM$-morphisms (i.e.\\ $d\\circ f,a\\circ e\\in \\cM$).\n \\begin{proof}\n The proof follows from the $\\cM$-adhesivity properties, i.e.\\ from stability of the class $\\cM$ under pullback and composition.\n \\end{proof}\n\\end{lem}\n\nNote in particular that the pullback composition operation $\\circ$ on spans behaves in complete analogy to the composition of functions and of linear operators, at least under the following convention\\footnote{This convention is standard in much of the mathematics literature; however, traditionally the opposite convention of reading spans ``left-to-right'' is encountered in the literature on graph rewriting.
Since in our framework we will eventually assign linear operators to spans, the ``right-to-left'' convention offers the more convenient encoding.}:\n\n\\begin{conv}\\label{nc}\n We read spans in the \\textbf{\\emph{``right-to-left'' convention}}, such that if we consider spans $R,S$ as above to encode partial functions $r\\mathrel{:} A\\rightharpoonup C,\\, s\\mathrel{:} C\\rightharpoonup E$, then function composition and span composition are compatible (i.e.\\ $s\\circ r$ is computed via $S\\circ R$).\n\\end{conv}\n\n\\subsection{Some useful technical results}\n\nWe recall first some basic pasting properties of pushouts and pullbacks that hold in any category.\n\\begin{lem}%\n\\label{lem:pushoutpullback}\nGiven a commutative diagram\n\\begin{equation*}\\gdef\\mycdScale{0.85}\n\\begin{mycd}\nA\\ar[r]\\ar[d] & B \\ar[r]\\ar[d] &E\\ar[d]\\\\\nC\\ar[r] & D\\ar[r] & F\\\\\n\\end{mycd}\\,,\n\\end{equation*}\n\\begin{itemize}[label=$\\triangleright$]\n\\item (pullback version) if the right square is a pullback then the left square is a pullback if and only if the entire exterior rectangle is a pullback;\n\\item (pushout version) if the left square is a pushout then the right square is a pushout if and only if the entire exterior rectangle is a pushout.\n\\end{itemize}\n\\end{lem}\n\n\\begin{lem}\\label{lem:auxPOPBcat}\nIn any category, given commutative diagrams of the form\n \\begin{equation}\n \\begin{mycd}\n A \\ar[r,\"f\"] \\ar[d,equal]\\ar[dr,phantom,\"(A)\"] & B \\ar[d,equal]\\\\\n A \\ar[r,\"f\"'] & B\n \\end{mycd}\\qquad \\begin{mycd}\n A \\ar[r,equal]\n \\ar[d,equal]\\ar[dr,phantom,\"(B)\"] & A \\ar[d,\"g\"]\\\\\n A \\ar[r,\"g\"'] & B\n \\end{mycd}\\qquad \\begin{mycd}\n A \\ar[r,\"f\"]\n \\ar[d,equal]\\ar[dr,phantom,\"(C)\"] &\n B \\ar[d,\"g\"]\\\\\n A \\ar[r,\"g\\circ f\"'] & C\n \\end{mycd}\\,,\n \\end{equation}\nit holds that\n\\begin{enumerate}[(I)]\n\\item the square marked $(A)$ is a pushout for arbitrary morphisms $f$,\n\\item the square marked $(B)$ is 
a pullback if and only if the morphism $g$ is a monomorphism, and\n\\item the square marked $(C)$ is a pullback for arbitrary morphisms $f$ if $g$ is a monomorphism.\n\\end{enumerate}\nIn addition, if the category is an $\\cM$-adhesive category, statements (I), (III) and the ``if'' part of (II) hold for ``monomorphisms'' replaced by ``$\\cM$-morphisms''.\n\\end{lem}\n\\begin{proof}\nThe statements $(I)$ and $(II)$ are classical, whence their proof is omitted for brevity. In order to prove statement $(III)$, it suffices to combine $(I)$ and $(II)$ to conclude that a square $(C)$ as in the diagram below left\n\\begin{equation}\n \\begin{mycd}\n A \\ar[r,\"f\"]\n \\ar[d,equal]\\ar[dr,phantom,\"(C)\"] &\n B \\ar[d,\"g\"]\\\\\n A \\ar[r,\"g\\circ f\"'] & C\n \\end{mycd}\\qquad \\begin{mycd}\n A \\ar[r,\"f\"'] \\ar[rr, bend left, \"f\"] \\ar[d,equal]\n \\ar[dr,phantom,\"(D)\"]&\n B \\ar[r,equal] \\ar[d,equal]\n \\ar[dr,phantom,\"(E)\"] &\n B \\ar[d,\"g\"]\\\\\n A \\ar[r,\"f\"] \\ar[rr,bend right, \"g\\circ f\"'] &\n B \\ar[r,\"g\"] & C\n\\end{mycd}\n\\end{equation}\nis a pullback square whenever $f$ is an arbitrary morphism and $g$ is a monomorphism. Indeed, the square $(C)$ may be obtained as the horizontal composition of the square $(D)$, which is a pushout along an isomorphism (whence a monomorphism) by $(I)$ and therefore also a pullback according to Lemma~\\ref{lem:convenient}, with the square $(E)$, which is a pullback by $(II)$. By pullback composition (Lemma~\\ref{lem:pushoutpullback}), the composite $(D)+(E)$, and thus $(C)$, is a pullback.
As for the specialisation of the statements to the setting of $\\cM$-adhesive categories, the claims follow trivially for (I) (no modification), and also for (III) and the ``if'' part of (II), since $\\cM$ is assumed to be a class of monomorphisms.\n\\end{proof}\n\nNext, we recall a number of useful properties of pushouts and pushout complements in $\\cM$-adhesive categories.\n\\begin{lemC}[{\\cite[Lemma~2.6]{EHRIG:2014ma}}]%\n\\label{lem:convenient}\nIn any $\\cM$-adhesive category:\n\\begin{enumerate}[(I)]\n\\item Pushouts along $\\cM$-morphisms are also pullbacks.\n\\item ($\\cM$-pushout-pullback decomposition) if, in the following diagram\n\\begin{equation}\\label{lem:poPbDec}\n\\vcenter{\\hbox{\\begin{mycd}\nA\n \\ar[r,\"b\"']\n \\ar[d,\"c\"']\n \\ar[rr, bend left, \"=\"'] &\nB\n \\ar[r,\"e\"']\\ar[d,\"a\"'] &\nE\n \\ar[d]\\\\\nC\n \\ar[r,\"d\"]\n \\ar[rr, bend right, \"=\"] &\nD\n \\ar[r,\"f\"] & F\n\\end{mycd}}}\n\\end{equation}\nthe exterior face is a pushout, the right face is a pullback, and $f\\in \\cM$ and ($b\\in \\cM$ or $c\\in \\cM$), then the left and right squares are both pushouts and pullbacks.\n\\item (uniqueness of pushout complements) given $A\\hookrightarrow C$ in $\\cM$ and $C\\rightarrow D$, the respective pushout complement $A\\xrightarrow{b} B \\xhookrightarrow{a} D$ (if it exists) is unique up to isomorphism, and with $b\\in \\cM$ (due to stability of $\\cM$-morphisms under pushouts).\n\\end{enumerate}\n\\end{lemC}\n\n\\noindent\nNote that in~\\eqref{lem:poPbDec}, by virtue of stability of $\\cM$-morphisms under pushout and pullback, these conditions entail that since $f\\in \\cM$, we also have $e\\in \\cM$, while $b\\in \\cM$ implies $d\\in \\cM$ (and $c\\in \\cM$ implies $a\\in \\cM$).\n\n\n\\subsection{Additional category-theoretical prerequisites}\n\nPassing from adhesive categories to $\\cM$-adhesive categories on the one hand permits the study of rewriting in the most general setting known to date for DPO rewriting, yet it
comes at the price of a number of technicalities that are necessary to ensure certain associativity properties for the rewriting as introduced in the main part of the paper. The first such property concerns the existence of the analogue of the empty set in the category $\\mathbf{Set}$ or the empty graph in the category $\\mathbf{Graph}$, referred to as \\emph{$\\cM$-initial object} for a general $\\cM$-adhesive category (the existence of which is not guaranteed by $\\cM$-adhesivity, cf.\\ Table~\\ref{tab:adh}). The second requirement concerns the property of an $\\cM$-adhesive category possessing \\emph{$\\cM$-effective unions}, analogous (to a certain extent) to the notion of union of sets and related properties.\n\n\\begin{defi}[{$\\cM$-initial object;~\\cite[Def.~2.5]{Braatz:2010aa}}]\\label{def:Minit}\nAn object $\\mathbb{I}$ of an $\\cM$-adhesive category $(\\bfC,\\cM)$ is defined to be an \\emph{$\\cM$-initial object} if for each object $A\\in \\obj{\\bfC}$ there exists a unique monomorphism $i_A:\\mathbb{I}\\hookrightarrow A$, which is moreover required to be in $\\cM$.\n\\end{defi}\n\n\\begin{lemC}[{\\cite[Fact~2.6]{Braatz:2010aa}}]\\label{lem:binaryCoproducts}\n If an $\\cM$-adhesive category $(\\bfC,\\cM)$ possesses an $\\cM$-initial object $\\mathbb{I}\\in \\obj{\\bfC}$, the category has \\emph{finite coproducts}, and the coproduct injections are in $\\cM$.\n\\end{lemC}\n\\begin{proof}\n We quote the proof from~\\cite{Braatz:2010aa} for illustration of this important property: it suffices to consider the case of binary coproducts. 
One may construct the coproduct $A+B$ of two objects $A,B\\in \\obj{\\bfC}$ via taking the pushout\n \\begin{equation}\n \\begin{mycd}\n & \\mathbb{I}\\ar[dl,\"i_A\"']\\ar[dr,\"i_B\"]\\ar[dd,phantom,\"\\mathsf{PO}\"] &\\\\\n A\\ar[dr,\"in_A\"'] & & B\\ar[dl,\"in_B\"]\\\\\n & A+B\n \\end{mycd}\\,.\n \\end{equation}\n Since the underlying category is assumed to be $\\cM$-adhesive, according to Definition~\\ref{def:adhc} the above pushout is guaranteed to exist since $i_A,i_B\\in \\cM$ via the assumption of $\\mathbb{I}$ being an $\\cM$-initial object, and by virtue of stability of $\\cM$-morphisms under pushouts, we may moreover conclude that indeed $in_A,in_B\\in \\cM$.\n\\end{proof}\n\nThe second main property required for our construction of rule algebras concerns a certain compatibility property relating pushouts and pullbacks along $\\cM$-morphisms:\n\\begin{defi}[$\\cM$-effective unions]\n An $\\cM$-adhesive category $(\\bfC,\\cM)$ is said to possess \\emph{$\\cM$-effective unions} if the following property holds: given a commutative diagram as below with $b_1,b_2,c_1,c_2,d_1,d_2\\in\\cM$, where $(1)$ is a pushout, the outer square a pullback, and where $x$ is the unique morphism induced by the universal property of the pushout,\n\\begin{equation}\\label{eq:eu}\n\\begin{mycd}\nA\n \\ar[d,hook,\"b_2\"']\\ar[r,hook,\"b_1\"]\\ar[dr,phantom,\"(1)\"] & [-5pt]\nB_1\n \\ar[d,hook,\"c_1\"] \\ar[ddr,hook,bend left,\"d_1\"] &[-15pt] \\\\\nB_2\n \\ar[r,hook,\"c_2\"']\\ar[drr,hook,bend right,\"d_2\"'] & D\n \\ar[dr,near start,\"x\"]&\\\\[-15pt]\n & & E\n\\end{mycd}\n\\end{equation}\nthen $x\\in \\cM$.\n\\end{defi}\n\nAs indicated in Table~\\ref{tab:adh}, while the property of $\\cM$-effective unions is traditionally well-known in a range of important examples, it is nonetheless in general a difficult task to establish this property. 
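To make the definition concrete, the following is a minimal illustration of our own in the familiar setting of $(\\mathbf{Set},\\mono{\\mathbf{Set}})$:\n\\begin{exa}\nLet $B_1,B_2\\subseteq E$ be subsets of a set $E$, and let $A:=B_1\\cap B_2$, so that (with all morphisms in~\\eqref{eq:eu} taken to be the evident inclusions) the outer square is a pullback. The pushout $(1)$ then yields $D\\cong B_1\\cup B_2$, and the induced morphism $x:D\\rightarrow E$ is (up to isomorphism) the inclusion $B_1\\cup B_2\\hookrightarrow E$, whence injective and in $\\mono{\\mathbf{Set}}$. This motivates the terminology ``effective unions''.\n\\end{exa}\n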
Many of the positive examples may be derived via the following (variant of a) result of~\\cite{lack2005adhesive}:\n\\begin{thm}[{Variant of~\\cite[Thm.~5.1]{lack2005adhesive}}]\\label{thm:euAux}\n Let $(\\bfC,\\cM)$ be a horizontal weak adhesive HLR category. Then in a commutative diagram of the form~\\eqref{eq:eu}, $x\\in \\mono{\\bfC}$ is a monomorphism \\emph{(not necessarily in $\\cM$ if $\\cM\\neq\\mono{\\bfC}$)}.\n\\end{thm}\n\\begin{proof}\nIt may be easily verified that the proof provided in~\\cite{lack2005adhesive} in the setting of adhesive categories in fact utilises only the axioms of horizontal weak adhesive HLR categories, and thus directly generalises to this setting.\n\\end{proof}\nAs inspection of Table~\\ref{tab:adh} reveals, $\\mathbf{lSets}$ is the only known example in which the vertical, but not also the horizontal, weak adhesive HLR properties hold, while simultaneously in this category $\\cM_{\\mathbf{lSets}}\\neq \\mono{\\mathbf{lSets}}$. Thus Theorem~\\ref{thm:euAux} effectively identifies weak adhesive HLR categories with $\\cM=\\mono{\\bfC}$ as natural examples of categories with ($\\mono{\\bfC}$)-effective unions. This includes the original statement that all adhesive categories have effective unions, but also covers examples of (weak) adhesive HLR categories such as $\\mathbf{HyperGraph}$ and $\\mathbf{uGraph}$. As recently demonstrated in~\\cite{Grochau_Azzi_2019}, for $\\cM$-adhesive categories such as $\\mathbf{AGraph}_{\\Sigma}$ or $\\mathbf{SymbGraph}_{D}$ where the class $\\cM$ of monomorphisms has additional structure, it is possible to use the statement of Theorem~\\ref{thm:euAux} in order to manually prove the existence of $\\cM$-effective unions.
On the other hand, the latter two categories fail to possess an $\\cM$-initial object, which prevents them from satisfying all assumptions necessary in order to support a unital rule algebra construction (although they \\emph{do} support the concurrency and associativity properties of DPO rule compositions). We leave a more in-depth investigation into these matters of admissibility for future work.\n\n\n\\section{Double-pushout (DPO) rewriting}\\label{sec:acDPO}\n\nWe now recall \\emph{Double-Pushout (DPO) rewriting} for $\\cM$-adhesive categories (adapted according to the results of~\\cite{Braatz:2010aa} and to our \\emph{notational convention}~\\ref{nc}).\n\n\\begin{asm}\\label{as:cats}\nThroughout the remainder of this paper, we fix in each definition an \\emph{$\\cM$-adhesive category} $(\\bfC,\\cM)$ (typically just written as $\\bfC$ for brevity) that is assumed to possess an $\\cM$-initial object and $\\cM$-effective unions.\n\\end{asm}\n\n\\begin{defiC}[{\\cite[Def.~7.1]{lack2005adhesive}}]\\label{def:prod}\nA span $p$ of morphisms (with $O$utput, $K$ontext, $I$nput)\n\\begin{equation}\n\\label{eq:prod}\nO\\xleftarrow{o} K\\xrightarrow{i} I\n\\end{equation}\nis called a \\emph{production}. $p$ is said to be\n\\emph{linear} if both $i$ and $o$ are monomorphisms in $\\cM$. We denote the \\emph{set of linear productions} by $\\Lin{\\bfC}$. We will also frequently make use of the \\emph{alternative notation} $\\GRule{O}{p}{I}$ where $p=(O\\xleftarrow{o} K\\xrightarrow{i} I)\\in \\Lin{\\bfC}$.\n\\end{defiC}\nA \\emph{homomorphism of productions} $p\\rightarrow p'$ consists of arrows $O \\rightarrow O'$, $K \\rightarrow K'$ and $I\\rightarrow I'$ such that the obvious diagram commutes. A homomorphism is an isomorphism when all of its components are isomorphisms. We do not distinguish between isomorphic productions.
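As a concrete illustration (an example of our own devising, in the category $\\mathbf{Graph}$ of directed multigraphs), consider the linear production that deletes an edge between two vertices:\n\\begin{exa}\nIn $\\mathbf{Graph}$, let $I$ be the graph with vertex set $\\{1,2\\}$ and a single edge $1\\rightarrow 2$, and let $O=K$ be the discrete graph on the vertex set $\\{1,2\\}$. Then $p=(O\\xleftarrow{id_K}K\\xrightarrow{i}I)$, with $i$ the evident inclusion of the discrete graph into $I$, is a linear production; the DPO derivations along $p$ introduced below delete an edge of a graph while preserving its endpoints. Reversing the roles of $I$ and $O$ yields the production that instead \\emph{creates} an edge between two given vertices.\n\\end{exa}\n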
Note that the notion of homomorphism of productions differs from that of a homomorphism of general spans.\n\n\n\\begin{defiC}[{\\cite[Def.~7.2]{lack2005adhesive}}]\\label{def:gc}\nGiven a production $p$ as in~\\eqref{eq:prod}, a \\emph{match} of $p$ in an object $X\\in \\obj{\\bfC}$ is a morphism $m:I\\rightarrow X$. A match is said to satisfy the \\emph{gluing condition} if there exists an object $\\overline{K}$ and morphisms $k:K\\rightarrow \\overline{K}$ and $x:\\overline{K}\\rightarrow X$ such that~\\eqref{eq:gluing} is a pushout.\n\\begin{equation}\\gdef\\mycdScale{0.85}\n\\label{eq:gluing}\n\\begin{mycd}\nK\\ar[d,\"k\"',dashed]\\ar[dr,phantom,\"\\mathsf{PO}\"] &\nI \\ar[l,leftarrow,\"i\"']\\ar[d,\"m\"]\\\\\n\\overline{K} & X\\ar[l,leftarrow,dashed,\"x\"]\n\\end{mycd}\\,.\n\\end{equation}\nMore concisely, the \\emph{gluing condition} holds if there is a \\emph{pushout complement} of $K\\xrightarrow{i}I\\xrightarrow{m}X$.\n\\end{defiC}\n\n\nFrom here on, we will focus solely on \\emph{linear productions} and matches that are $\\cM$-morphisms, which, due to the above statements, entails a number of practical simplifications, and which also allows us to simplify our notation as follows:\n\\begin{conv}\nUnless mentioned otherwise, henceforward \\emph{all} arrows are understood to be morphisms of the class $\\cM$ of the underlying $\\cM$-adhesive category $\\bfC$, whence we will use the notation $\\rightarrow$ of ``ordinary'' arrows (instead of $\\hookrightarrow$) to denote arrows of $\\cM$ in all diagrams and formulae.\n\\end{conv}\n\n\\begin{defi}[{compare~\\cite[Def.~7.3]{lack2005adhesive}}]%\n\\label{def:DPOr}\nGiven an object $X\\in \\obj{\\bfC}$ and a linear production $p\\in \\Lin{\\bfC}$, we define the \\emph{set of admissible matches} $\\Match{p}{X}$ as the set of monomorphisms $m:I\\rightarrow X$ in $\\cM$ for which $m$ satisfies the \\emph{gluing condition}.
As a consequence, there exist objects and morphisms such that in the diagram below both squares are pushouts (where the square marked $\\mathsf{POC}$ is constructed as a pushout complement):\n\\begin{equation}\\label{eq:DPOr}\\gdef\\mycdScale{0.85}\n\\begin{mycd}\nO \\ar[d,\"{m^{*}}\"'] &\n K \\ar[l,\"o\"']\\ar[r,\"i\"]\\ar[d,\"k\"']\n \\ar[dl,phantom, \"{\\mathsf{PO}}\"]\\ar[dr,phantom,\"{\\mathsf{POC}}\"] &\n I \\ar[d,\"m\"]\n \\\\\n {X'} & {\\overline{K}} \\ar[l,\"o'\"]\\ar[r,\"i'\"'] & X\\\\\n\\end{mycd}\n\\end{equation}\nWe write $p_m(X):=X'$ for the object ``produced'' by the above diagram. This process is called a \\emph{derivation} of $X$ along production $p$ and admissible match $m$, and denoted $p_m(X)\\xLeftarrow[p,m]{} X$.\n\\end{defi}\n\nNote that by virtue of Lemma~\\ref{lem:convenient}, the object $p_m(X)$ produced via a given derivation of an object $X$ along a linear production $p$ and an admissible match $m$ is \\emph{unique up to isomorphism}. From here on, we will refer to linear productions as \\emph{linear (rewriting) rules}. Next, we recall the concept of \\emph{(concurrent) composition} of linear rules.\n\n\n\n\\subsection{Concurrent composition and concurrency theorem}%\n\\label{sec:concur}\n\nGiven two linear productions $p_1,p_2\\in \\Lin{\\bfC}$ and an object $X\\in \\obj{\\bfC}$, it is intuitively clear that one may consider acting with $p_2$ on a produced object $X'=p_{1_{m_1}}(X)$ (for some admissible match $m_1$). However, there is also the interesting possibility of first composing the \\emph{rules} in a suitable sense and then applying the \\emph{sequential composite} to the object $X$. To this end, consider the following well-known definition.\n\n\\begin{defi}[DPO-type concurrent composition~\\cite{lack2005adhesive}]\\label{def:DPOcc}\n Let $p_1,p_2\\in \\Lin{\\bfC}$ be two linear productions.
Then a span ${\\color{h1color}\\mathbf{m}}=(I_2{\\color{h1color}\\xleftarrow{m_2} M_{21}\\xrightarrow{m_1}}O_1)$ with $m_1,m_2\\in \\cM$---where we use the {\\color{h1color}blue colouring} to signify the overlap of $p_1$ and $p_2$---is called\\footnote{In the DPO rewriting literature, admissible matches of rules are also referred to as \\emph{dependency relations}.} an \\emph{admissible match of $p_2$ into $p_1$}, denoted $\\mathbf{m}\\in \\RMatch{p_2}{p_1}$, if in the diagram below the squares marked $\\mathsf{POC}$ are constructable as pushout complements (where the cospan $I_2\\xrightarrow{n_2}N_{21}\\xleftarrow{n_1}O_1$ is obtained by taking the pushout marked ${\\color{h1color}\\mathsf{PO}}$):\n \\begin{equation}\\label{eq:DPOccomp}\n \\begin{mycd}\n O_2\\ar[d,\"n_2^{*}\"'] &\n K_2 \\ar[l,\"o_2\"']\\ar[r,\"i_2\"]\\ar[d,\"k_2\"']\n \\ar[dl,phantom,\"\\mathsf{PO}\"] &\n I_2 \\ar[dr,h1color,bend right,\"n_2\"]\\ar[dl,phantom,\"\\mathsf{POC}\"] &\n {\\color{h1color}M_{21}}\n \\ar[l,h1color,\"m_2\"']\\ar[r,h1color,\"m_1\"]\\ar[d,h1color,phantom,\"\\mathsf{PO}\"] &\n O_1 \\ar[dl,h1color,bend left,\"n_1\"']\\ar[dr,phantom,\"\\mathsf{POC}\"]&\n K_1 \\ar[l,\"o_1\"']\\ar[r,\"i_1\"]\\ar[d,\"k_1\"]\n \\ar[dr,phantom,\"\\mathsf{PO}\"] &\n I_1\\ar[d,\"n_1^{*}\"]\\\\\n {\\color{h2color}O_{21}} &\n \\overline{K}_2\\ar[l,\"o_2'\"]\\ar[rr,\"i_2'\"'] & &\n {\\color{h1color}N_{21}} & &\n \\overline{K}_1\\ar[ll,\"o_1'\"]\\ar[r,\"i_1'\"] & {\\color{h2color}I_{21}}\\\\\n \\end{mycd}\n \\end{equation}\n In this case, we write\\footnote{It follows from the properties of $\\cM$-adhesive categories (i.e.\\ stability of $\\cM$-morphisms under pushouts) that all morphisms in~\\eqref{eq:DPOccomp} are $\\cM$-morphisms, whence the span $\\comp{p_2}{\\mathbf{m}}{p_1}$ is a span of $\\cM$-morphisms, and thus indeed an element of $\\Lin{\\bfC}$.} $\\comp{p_2}{\\mathbf{m}}{p_1}\\in \\Lin{\\bfC}$ for the \\emph{composite} of $p_2$ with $p_1$ along the admissible match $\\mathbf{m}$, defined as\n 
 \\begin{equation}\n \\comp{p_2}{\\mathbf{m}}{p_1}\\equiv {\\color{h2color}(O_{21}\\xleftarrow{o_{21}}K_{21}\\xrightarrow{i_{21}}I_{21})}:=\n ({\\color{h2color}O_{21}}\\xleftarrow{o_2'}\\overline{K}_2\\xrightarrow{i_2'}{\\color{h1color}N_{21}})\\circ\n ({\\color{h1color}N_{21}}\\xleftarrow{o_1'}\\overline{K}_1\\xrightarrow{i_1'}{\\color{h2color}I_{21}})\\,\n \\end{equation}\n where we have used the {\\color{h2color}orange colouring} to emphasise the components of the composite production, and where $\\circ$ denotes the operation of span composition (cf.\\ Lemma~\\ref{lem:spans}).\n\\end{defi}\n\nThe following theorem is a refinement of a well-known result from the literature; the novel feature of our version, which will prove quintessential in the following, is the formulation of the theorem via \\emph{admissible matches of linear rules} (rather than the less specific notion of $E$-concurrent derivations as in the work of Ehrig et al.~\\cite{EHRIG:2014ma}). In the adhesive category setting, this approach had already been investigated in~\\cite{lack2005adhesive}. The reason our modification (which hinges on Assumption~\\ref{as:cats}) provides a strong improvement over the traditional results resides in the fact that in the synthesis step (see below), one derives not merely a certain \\emph{cospan} encoding the causal interaction of the two sequentially applied rules, but in fact a \\emph{span} of $\\cM$-morphisms that is unique up to isomorphism, and that thus in a certain sense provides a \\emph{minimal} encoding of said causal interaction.
Besides practical advantages, this result is in particular strictly necessary in order to lift the notions of associativity of sequential compositions, DPO-type rule algebras and canonical representations of DPO-type rule algebras from the adhesive to the $\\cM$-adhesive setting.\n\n\\begin{thm}[{DPO-type Concurrency Theorem; modification of~\\cite[Thm.~4.17]{EHRIG:2014ma}, compare~\\cite[Thm.~7.11]{lack2005adhesive}}]\\label{thm:concur}\nLet $\\bfC$ be an $\\cM$-adhesive category satisfying Assumption~\\ref{as:cats}. Let $p_1,p_2\\in \\Lin{\\bfC}$ be two linear rules and $X_0\\in \\obj{\\bfC}$ an object.\n\\begin{itemize}\n\\item \\textbf{Synthesis:} Given a two-step sequence of derivations\n\\begin{equation*}\nX_2\\xLeftarrow[p_2,m_2]{} X_1\\xLeftarrow[p_1,m_1]{}X_0\\,,\n\\end{equation*}\nwith $X_1:=p_{1_{m_1}}(X_0)$ and $X_2:=p_{2_{m_2}}(X_1)$, there exists a composite rule $q=\\comp{p_2}{\\mathbf{n}}{p_1}$\nfor a unique $\\mathbf{n}\\in \\RMatch{p_2}{p_1}$,\n and a unique admissible match $n\\in \\Match{q}{X_0}$, such that\n \\begin{equation*}\n q_n(X_0)\\xLeftarrow[q,n]{} X_0\\qquad \\text{and}\\qquad q_n(X_0)\\cong X_2\\,.\n \\end{equation*}\n\\item \\textbf{Analysis:} Given an admissible match $\\mathbf{n}\\in \\RMatch{p_2}{p_1}$ of $p_2$ into $p_1$ and an admissible match $n\\in \\Match{q}{X_0}$ of the composite $q=\\comp{p_2}{\\mathbf{n}}{p_1}$ into $X_0$, there exists a unique pair of admissible matches $m_1\\in \\Match{p_1}{X_0}$ and $m_2\\in \\Match{p_2}{X_1}$ (with $X_1:=p_{1_{m_1}}(X_0)$) such that\n\\begin{equation*}\n X_2\\xLeftarrow[p_2,m_2]{} X_1 \\xLeftarrow[p_1,m_1]{} X_0\\qquad \\text{and}\\qquad\n X_2\\cong q_n(X_0)\\,.\n\\end{equation*}\n\\end{itemize}\n\\begin{proof}\n --- \\textbf{Synthesis:} %\n Consider the setting presented in~\\eqref{eq:CTs1}.
%\n Here, we have obtained the candidate match ${\\color{h1color}\\mathbf{n}}=(I_2{\\color{h1color}\\leftarrow M_{21}\\rightarrow} O_1)$ via pulling back the cospan $(I_2{\\color{h1color}\\rightarrow} X_1{\\color{h1color}\\leftarrow} O_1)$. %\n Next, we construct ${\\color{h1color}N_{21}}$ via taking the pushout of ${\\color{h1color}\\mathbf{n}}$, which induces a unique arrow ${\\color{h1color}N_{21}\\rightarrow}X_1$. Crucially, it follows from Assumption~\\ref{as:cats} that this arrow is in the class $\\cM$. %\n The diagram in~\\eqref{eq:CTs2} is obtained by taking the pullbacks of the cospans $\\overline{K}_i\\rightarrow X_1{\\color{h1color}\\leftarrow N_{21}}$ (obtaining the objects $K_i'$, for $i=1,2$), followed by letting ${\\color{h2color}O_{21}}:=\\pO{O_2\\leftarrow K_2\\rightarrow K_2'}$ and ${\\color{h2color}I_{21}}:=\\pO{I_1\\leftarrow K_1\\rightarrow K_1'}$. %\n By virtue of pushout-pullback decomposition (Lemma~\\ref{lem:convenient}) and pushout-pushout decomposition (Lemma~\\ref{lem:pushoutpullback}), respectively, the resulting squares are all pushouts. The final step as depicted in~\\eqref{eq:CTs3} consists in constructing ${\\color{h2color}K_{21}}=\\pB{K_2'\\rightarrow {\\color{h1color}N_{21}}\\leftarrow K_1'}$ and ${\\color{h2color}\\overline{K}_{21}}=\\pB{\\overline{K_2}\\rightarrow X_1\\leftarrow \\overline{K_1}}$, which by universality of pullbacks induces a unique arrow ${\\color{h2color}K_{21}\\rightarrow \\overline{K}_{21}}$. %\n By invoking pullback decomposition (Lemma~\\ref{lem:convenient}) and the $\\cM$-van Kampen property (cf.\\ Def.~\\ref{def:adhc}) twice, one may demonstrate that the squares $\\cSquare{{\\color{h2color}K_{21}},{\\color{h2color}\\overline{K}_{21}},\\overline{K}_i,K_i'}$ (for $i=1,2$) are pushouts.
Thus the claim follows by invoking pushout pasting according to Lemma~\\ref{lem:pushoutpullback} twice in order to obtain the pushout squares $\\cSquare{{\\color{h2color}K_{21}},{\\color{h2color}\\overline{K}_{21}},X_2,{\\color{h2color}O_{21}}}$ and $\\cSquare{{\\color{h2color}K_{21}},{\\color{h2color}\\overline{K}_{21}},X_0,{\\color{h2color}I_{21}}}$.\n\n --- \\textbf{Analysis:} Given the setting as depicted in~\\eqref{eq:CTa1}, we may obtain the configuration of~\\eqref{eq:CTa2} by letting $\\overline{K}_i=\\pO{K_i'\\leftarrow{\\color{h2color}K_{21}\\rightarrow \\overline{K}_{21}}}$ (for $i=1,2$). By virtue of pushout decomposition (Lemma~\\ref{lem:pushoutpullback}), the resulting new squares are all pushouts. Next, by constructing\\footnote{Since the construction is entirely symmetric in this step, we could have equivalently chosen to define $X_1=\\pO{\\overline{K}_2\\leftarrow K_2'\\rightarrow {\\color{h1color}N_{21}}}$.} $X_1=\\pO{\\overline{K}_1\\leftarrow K_1'\\rightarrow {\\color{h1color}N_{21}}}$, we obtain the diagram in~\\eqref{eq:CTa3}. Since $\\cSquare{{\\color{h2color}K_{21}},{\\color{h2color}\\overline{K}_{21}},X_1,{\\color{h1color}N_{21}}}$ and $\\cSquare{{\\color{h2color}K_{21}},{\\color{h2color}\\overline{K}_{21}},\\overline{K}_2,K_2'}$ are pushouts, by pushout decomposition so is $\\cSquare{K_2',\\overline{K}_2,X_1,{\\color{h1color}N_{21}}}$.
Thus we finally arrive at the configuration in~\\eqref{eq:CTa4} via compositions of pushout squares, which concludes the proof.\n\\end{proof}\n\\end{thm}\n\nThe details of the above proof permit us to easily derive the following technical corollary:\n\\begin{cor}\\label{lem:adm1}\n Under the assumptions of Theorem~\\ref{thm:concur}, every configuration such as in the lower part of the diagram in~\\eqref{eq:CTa1}, i.e.\\ the commutative sub-diagram formed by the two pushout squares below,\n \\begin{equation}\n \\begin{mycd}\n {\\color{h2color}O_{21}}\n \\ar[d,h2color] &\n {\\color{h2color}K_{21}}\n \\ar[l,h2color]\\ar[r,h2color]\\ar[d,h2color]\n \\ar[dl,h2color,phantom,\"\\mathsf{PO}\"]\n \\ar[dr,h2color,phantom,\"\\mathsf{PO}\"]&\n {\\color{h2color}I_{21}}\n \\ar[d,h2color] \\\\\n X_2 &\n {\\color{h2color}\\overline{K}_{21}}\n \\ar[l,h2color]\\ar[r,h2color]&\n X_0\n \\end{mycd}\\,,\n \\end{equation}\n uniquely induces the configuration of four adjacent pushout squares presented in the lower back part of~\\eqref{eq:CTa3}, i.e.\\\n \\begin{equation}\n \\begin{mycd}\n {\\color{h2color}O_{21}}\n \\ar[d,h2color] &\n K_2' \\ar[l]\\ar[r,h1color]\\ar[d]\n \\ar[dl,phantom,\"\\mathsf{PO}\"]\n \\ar[dr,phantom,\"\\mathsf{PO}\"]&\n {\\color{h1color}N_{21}} \\ar[d,h1color]\n &\n K_1' \\ar[r]\\ar[l,h1color]\\ar[d]\\ar[dl,phantom,\"\\mathsf{PO}\"]\n \\ar[dr,phantom,\"\\mathsf{PO}\"]&\n {\\color{h2color}I_{21}}\n \\ar[d,h2color] \\\\\n X_2 &\n \\overline{K}_2 \\ar[l]\\ar[r] &\n X_1 &\n \\overline{K}_1 \\ar[l]\\ar[r] &\n X_0\n \\end{mycd}\\,,\n \\end{equation}\n and vice versa.\n\\end{cor}\n\n\n\n\n\\begin{figure}[ht!]\n\\begin{subequations}\n\\begin{align}\n \\vcenter{\\hbox{\\includegraphics[scale=0.6,page=1]{images\/concurrency-proof.pdf}}}\\label{eq:CTs1}\\\\\n \\vcenter{\\hbox{\\includegraphics[scale=0.6,page=2]{images\/concurrency-proof.pdf}}}\\label{eq:CTs2}\\\\\n 
\\vcenter{\\hbox{\\includegraphics[scale=0.6,page=3]{images\/concurrency-proof.pdf}}}\\label{eq:CTs3}\n\\end{align}\n\\end{subequations}\n\\caption{\\label{fig:CTs} \\emph{Synthesis} part of the concurrency theorem.}\n\\end{figure}\n\n\\begin{figure}[ht!]\n\\begin{subequations}\n\\begin{align}\n \\vcenter{\\hbox{\\includegraphics[scale=0.6,page=1]{images\/concurrency-proof-Analysis.pdf}}}\\label{eq:CTa1}\\\\\n \\vcenter{\\hbox{\\includegraphics[scale=0.6,page=2]{images\/concurrency-proof-Analysis.pdf}}}\\label{eq:CTa2}\\\\\n \\vcenter{\\hbox{\\includegraphics[scale=0.6,page=3]{images\/concurrency-proof-Analysis.pdf}}}\\label{eq:CTa3}\\\\\n \\vcenter{\\hbox{\\includegraphics[scale=0.6,page=4]{images\/concurrency-proof-Analysis.pdf}}}\\label{eq:CTa4}\n\\end{align}\n\\end{subequations}\n\\caption{\\label{fig:CTa} \\emph{Analysis} part of the concurrency theorem.}\n\\end{figure}\n\n\n\\subsection{Concurrent composition and associativity}%\n\\label{sec:assoc}\n\n\nWhile the concurrency theorem (Theorem~\\ref{thm:concur}) for DPO rewriting is classical, to the best of our knowledge the following result is new. It states a certain form of associativity for compositions of linear productions.\n\n\\begin{thm}[DPO-type associativity theorem]\\label{thm:assocDPO}\n Let $\\bfC$ be an $\\cM$-adhesive category that satisfies Assumption~\\ref{as:cats}. 
%\n Then the composition operation $\\comp{.}{.}{.}$ on linear productions of $\\bfC$ is \\emph{associative} in the following sense: %\ngiven linear productions $p_1,p_2,p_3\\in \\Lin{\\bfC}$, there exists a bijective correspondence\nbetween pairs of admissible matches $(\\mathbf{m}_{21},\\mathbf{m}_{3(21)})$ and $(\\mathbf{m}_{32},\\mathbf{m}_{(32)1})$ such that\n\\begin{equation}\\label{eq:THMassoc}\n \\comp{p_3}{\\mathbf{m}_{3(21)}}{\\left(\\comp{p_2}{\\mathbf{m}_{21}}{p_1}\\right)}\\; \\cong \\;\n \\comp{\\left(\\comp{p_3}{\\mathbf{m}_{32}}{p_2}\\right)}{\\mathbf{m}_{(32)1}}{p_1}\\,.\n\\end{equation}\n\\begin{proof}\nSince DPO derivations are symmetric, it suffices to show one side of the correspondence. Our proof is constructive, demonstrating how, given a pair of admissible matches\n\\begin{equation}\n\\begin{aligned}\n {\\color{h1color}\\mathbf{m}_{21}}\n &=(I_2{\\color{h1color}\\leftarrow M_{21}\\rightarrow }O_1)\\in \\RMatch{p_2}{p_1}\\\\\n {\\color{h1color}\\mathbf{m}_{3(21)}}\n &=(I_3{\\color{h1color}\\leftarrow M_{3(21)}\\rightarrow }O_{21})\\in \\RMatch{p_3}{p_{21}}\\,,\\qquad p_{21}=\\comp{p_2}{{\\color{h1color}\\mathbf{m}_{21}}}{p_1}\\,,\n\\end{aligned}\n\\end{equation}\none may uniquely (up to isomorphisms) construct from this information a pair of admissible matches\n\\begin{equation}\n\\begin{aligned}\n {\\color{h1color}\\mathbf{m}_{32}}\n &=(I_3{\\color{h1color}\\leftarrow M_{32}\\rightarrow }O_2)\\in \\RMatch{p_3}{p_2}\\\\\n {\\color{h1color}\\mathbf{m}_{(32)1}}\n &=(I_{32}{\\color{h1color}\\leftarrow M_{(32)1}\\rightarrow }O_{1})\\in \\RMatch{p_{32}}{p_1}\\,,\\qquad p_{32}=\\comp{p_3}{{\\color{h1color}\\mathbf{m}_{32}}}{p_2}\\,,\n\\end{aligned}\n\\end{equation}\nsuch that the property described in~\\eqref{eq:THMassoc} holds. 
We begin by forming the composite rule $p_{3(21)}=\\comp{p_3}{\\mathbf{m}_{3(21)}}{p_{21}}$, which results in the diagram\n\\begin{equation}\n\\vcenter{\\hbox{\\includegraphics[scale=0.475,page=1]{images\/assocProof.pdf}}}\n\\end{equation}\nif we invoke Corollary~\\ref{lem:adm1} to construct the four rightmost squares on the bottom. Constructing the pullback ${\\color{h1color}M_{32}}=\\pB{{\\color{h1color}M_{3(21)}\\rightarrow}O_{21}\\leftarrow O_2}$ (which by universality of pullbacks also leads to an arrow ${\\color{h1color}M_{32}\\rightarrow}I_3$) and forming the three additional vertical squares on the far left in the evident fashion in the diagram below\n\\begin{equation}\n\\vcenter{\\hbox{\\includegraphics[scale=0.475,page=2]{images\/assocProof.pdf}}}\n\\end{equation}\nallows us to construct ${\\color{h1color}N_{32}}=\\pO{I_3{\\color{h1color}\\leftarrow M_{32}\\rightarrow}O_2}$, which in turn via universality of pushouts uniquely induces an arrow ${\\color{h1color}N_{32}\\rightarrow N_{3(21)}}$:\n\\begin{equation}\n\\vcenter{\\hbox{\\includegraphics[scale=0.475,page=3]{images\/assocProof.pdf}}}\n\\end{equation}\nHere, the rightmost three squares on the top are formed in the evident fashion (and are pushouts according to Lemma~\\ref{lem:auxPOPBcat}), while the other arrows of the above diagram are constructed as follows:\n\\begin{equation}\n\\begin{aligned}\n K_3'&=\\pB{\\overline{K}_3\\rightarrow{\\color{h1color}N_{3(21)}\\leftarrow N_{32}}}\\,,\\quad & &\n O_{32}&=\\pO{K_3'\\leftarrow K_3\\rightarrow O_3}\\\\\n K_2''&=\\pB{{\\color{h1color}N_{32}\\rightarrow N_{3(21)}\\leftarrow}\\overline{K}_2}\\,,\\quad & &\n I_{32}&=\\pO{K_2''\\leftarrow K_2\\rightarrow I_2}\n\\end{aligned}\n\\end{equation}\nInvoking pushout-pullback and pushout-pushout decomposition repeatedly, it may be verified that all squares thus created on the top and in the front are pushout squares. 
Defining the pullback object ${\\color{h1color}M_{(32)1}}=\\pB{I_{32}{\\color{h1color}\\rightarrow N_{3(21)}\\leftarrow}O_1}$, thus inducing an arrow ${\\color{h1color}M_{21}\\rightarrow M_{(32)1}}$,\n\\begin{equation}\\label{eq:AssocLMstep}\n\\vcenter{\\hbox{\\includegraphics[scale=0.475,page=4]{images\/assocProof.pdf}}}\n\\end{equation}\nit remains to verify that the square $\\cSquare{{\\color{h1color}M_{(32)1}},I_{32},{\\color{h1color}N_{3(21)}},O_1}$ is not only a pullback, but also a pushout square. To this end, construct the auxiliary diagram depicted in Figure~\\ref{fig:DPOrevAuxDiag}, where the top, back, bottom and front cubes that are formed via the newly added arrows compared to~\\eqref{eq:AssocLMstep} are also drawn separately for clarity, with suitable 3d rotations applied so as to facilitate the application of further steps in the proof based upon the $\\cM$-van Kampen property\\footnote{On a philosophical note, it might be worth observing that while sequential compositions of rules are essentially described by two-dimensional commutative diagrams, this final step of the associativity proof appears to have an inherently \\emph{three-dimensional} character, in that the properties of the commutative cubes in question delicately rely on each other as described in the proof.}. 
Note in particular that the four additional new arrows exist due to universality of pullbacks.\n\n\\afterpage{%\n \\begin{landscape}\n \\begin{figure}%\n \\centering\n\\includegraphics[scale=0.7,page=5]{images\/assocProof.pdf}\n\\caption{\\label{fig:DPOrevAuxDiag}Auxiliary diagram for the second part of the DPO associativity proof.}\n \\end{figure}\n \\end{landscape}%\n}\nInvoking pullback decomposition as well as the $\\cM$-van Kampen property repeatedly, the new commutative cube on the top (i.e.\\ the one sitting over the two pushout squares $\\cSquare{M_{32},I_3,N_{32},O_2}$ and $\\cSquare{K_2,O_2,N_{32},K_2''}$) and the new commutative cube on the bottom (i.e.\\ the one sitting under the two pushout squares $\\cSquare{M_{3(21)},I_3,N_{3(21)},O_{21}}$ and $\\cSquare{K_2',O_{21},N_{3(21)},\\overline{K}_2}$) have pushouts on all of their faces.\n\nAs for the new cubes in the front and back, note first that by virtue of Lemma~\\ref{lem:auxPOPBcat} the back left square $\\cSquare{I_3,I_3,N_{3(21)},N_{32}}$ of the front cube is a pullback, while the square $\\cSquare{M_{32},M_{3(21)},O_{21},O_2}$ had been constructed as a pullback in the main part of the proof. Thus invoking pullback decomposition twice, we may conclude that also the squares $\\cSquare{Q,S,\\overline{K}_2,K_2''}$ in the front and $\\cSquare{P,R,I_{32},K_2}$ in the back are pullbacks, whence invoking the $\\cM$-van Kampen property twice allows us to conclude that the squares $\\cSquare{Q,S,I_3,I_3}$ in the front left and $\\cSquare{P,R,M_{3(21)},M_{32}}$ in the back left are pushouts. Moreover, since isomorphisms are stable under pushouts by virtue of Lemma~\\ref{lem:auxPOPBcat}, we may conclude that $Q\\cong S$. 
We collect all of this information into the following diagram:\n\\begin{equation*}\n\\vcenter{\\hbox{\\includegraphics[scale=0.75]{images\/assocProof-Step2.pdf}}}\n\\end{equation*}\nTo prepare the final steps, let us perform the following ``splitting'' of the above diagram:\n\\begin{equation*}\n\\vcenter{\\hbox{\\includegraphics[scale=0.75]{images\/assocProof-Step3.pdf}}}\n\\end{equation*}\nWe start the construction from the very left: evidently $\\cSquare{I_3,M_{3(21)},M_{3(21)},I_3}$ is both a pullback and a pushout. %\nNext, construct the pullbacks $P'=\\pB{M_{3(21)}\\rightarrow I_3\\leftarrow Q}$ and $R'=\\pB{M_{3(21)}'\\rightarrow I_3\\leftarrow S}$; by pushout-pullback decomposition, they split the pushout square $\\cSquare{P,Q,I_3,M_{3(21)}}$ on the top and $\\cSquare{R,S,I_3,M_{3(21)}}$ on the bottom into two pushout squares each. The latter also implies that $R'\\cong R$.\n\nPasting pushouts, we have that $\\cSquare{M_{3(21)}',R',S,I_3}$ is a pushout, whence by pushout-pushout decomposition so is $\\cSquare{P',R',S,Q}$ (and thus $P'\\cong R'$).\n\nNext, construct the two pushouts $K_2'''=\\pO{P'\\leftarrow P\\rightarrow K_2}$ in the top and $\\overline{K}_2'=\\pO{R'\\leftarrow R\\rightarrow K_2'}$ on the bottom (which implies $\\overline{K}_2'\\cong K_2'$). Pushout-pushout decomposition then entails that $\\cSquare{P',K_2''',K_2'',Q}$ and $\\cSquare{R',\\overline{K}_2',\\overline{K}_2,S}$ are pushouts, and consequently so is the square $\\cSquare{K_2''',\\overline{K}_2',\\overline{K}_2,K_2''}$.\n\nWe repeat the construction of the previous step and construct the pushouts $I_2'=\\pO{K_2'''\\leftarrow K_2\\rightarrow I_2}$ and $N_{21}'=\\pO{\\overline{K}_2'\\leftarrow K_2'\\rightarrow N_{21}}$ (which implies that $N_{21}'\\cong N_{21}$). 
Pushout-pushout decomposition then yields the pushout squares $\\cSquare{K_2''',I_2',I_{32},K_2''}$ and $\\cSquare{\\overline{K}_2',N_{21}',N_{(32)1},\\overline{K}_2}$, and thus also $\\cSquare{I_2',N_{21}',N_{(32)1},I_{32}}$ is a pushout.\n\nNext, split the pullback squares $\\cSquare{M_{21},M_{(32)1},I_{32},I_2}$ in the top and \\[\\cSquare{O_1,O_1,N_{(32)1},N_{21}}\\] on the bottom into two pullback squares each via pullback-pullback decomposition, i.e.\\ via letting $M_{21}'=\\pB{I_2'\\rightarrow I_{32}\\leftarrow M_{(32)1}}$ and $O_1'=\\pB{N_{21}'\\rightarrow N_{(32)1}\\leftarrow O_1}$. By virtue of Lemma~\\ref{lem:auxPOPBcat} (i.e.\\ stability of isomorphisms under pullbacks), this entails that $O_1'\\cong O_1$, whence the square $\\cSquare{O_1,O_1',N_{21}',N_{21}}$ is a pushout.\n\nBy pullback-pullback decomposition, the square $\\cSquare{M_{21}',O_1',N_{21}',I_2'}$ is a pullback. By pushout-pullback decomposition, since by virtue of the previous step the square \\[\\cSquare{M_{21},O_1',N_{21}',I_2}\\] is a pushout and $\\cSquare{M_{21}',O_1',N_{21}',I_2'}$ a pullback, $\\cSquare{M_{21}',O_1',N_{21}',I_2'}$ is also a pushout.\n\nFinally, since by pushout pasting the square $\\cSquare{M_{21}',O_1',N_{(32)1},I_{32}}$ is a pushout, and since $\\cSquare{M_{(32)1},O_1,N_{(32)1},I_{32}}$ is by construction a pullback, pushout-pullback decomposition entails that $\\cSquare{M_{(32)1},O_1,N_{(32)1},I_{32}}$ is also a pushout, which concludes the proof.\n\\end{proof}\n\\end{thm}\n\n\nIn summary, the associativity property manifests itself in the following form, whereby the data provided along the path highlighted in orange below permits us to uniquely compute the data provided along the path highlighted in blue (with both sets of overlaps computing the same ``triple composite'' production):\n\\begin{equation}\n\\vcenter{\\hbox{\\includegraphics[scale=0.475,page=1]{images\/assocProof-final.pdf}}}\n\\end{equation}\n\n\n\n\\section{From associativity of concurrent 
derivations to rule algebras}%\n\\label{sec:ACDrd}\n\n\nIn DPO rewriting, each linear rewriting rule has a non-deterministic effect when acting on a given object, in the sense that there generically exist multiple possible choices of admissible match of the rule into the object. One interesting way of incorporating this non-determinism into a mathematical rewriting framework is motivated by the physics literature:\n\\begin{itemize}[label=$\\triangleright$]\n\\item Each linear rule is lifted to an element of an abstract \\emph{vector space}.\n\\item Concurrent composition of linear rules is lifted to a \\emph{bilinear multiplication operation} on this abstract vector space, endowing it with the structure of an \\emph{algebra}.\n\\item The action of rules on objects is implemented by mapping each linear rule (seen as an element of the abstract algebra) to an endomorphism on an abstract vector space whose basis vectors are in bijection with the objects of the adhesive category.\n\\end{itemize}\nWhile this recipe might seem somewhat ad hoc, we will demonstrate in Section~\\ref{sec:HW} that it in fact recovers one of the key constructions of quantum physics and enumerative combinatorics, namely the well-known Heisenberg-Weyl algebra and its canonical representation.\n\n\\medskip\nLet us first fix the precise type of categories for which our constructions are well-defined. A very general class of such categories is covered by the following set of assumptions:\n\n\\begin{asm}\\label{ass:RAdpo}\n We assume from here on that $\\bfC$ is a \\emph{finitary $\\cM$-adhesive category} with $\\cM$-effective unions.\n\\end{asm}\n\nLet us next recall some standard mathematical concepts of key relevance to the material presented in this section:\n\\begin{defi}[cf.\\ e.g.\\ \\cite{hazewinkel2004algebras}]\n Let $V\\equiv(V,+,\\cdot)$ be a \\emph{$\\bK$-vector space}, with $\\bK$ a field (e.g.\\ $\\bK=\\bR$). 
Then a \\emph{$\\bK$-algebra} $(V,*)$ is defined via equipping the $\\bK$-vector space $V$ with a \\emph{bilinear binary operation} $*:V\\times V\\rightarrow V$. Here, bilinearity entails that\n \\begin{equation}\n \\forall a,b\\in V\\,,\\;\\alpha,\\beta\\in \\bK:\\quad (\\alpha\\cdot a)*(\\beta\\cdot b)=(\\alpha\\beta)\\cdot(a*b)\\,.\n \\end{equation}\n The algebra $(V,*)$ is called \\emph{associative} if\n \\begin{equation}\n \\forall a,b,c\\in V:\\quad a*(b*c)=(a*b)*c\\,.\n \\end{equation}\n It is called \\emph{unital} if there exists an element $u\\in V$ (then referred to as a \\emph{unit element}) such that\n \\begin{equation}\n \\forall v\\in V:\\quad u*v=v*u=v\\,.\n \\end{equation}\n Let $W\\equiv(W,+,\\cdot)$ be another $\\bK$-vector space (over the same field $\\bK$ as the $\\bK$-vector space $V$), and denote by $End_{\\bK}(W)$ the \\emph{algebra of endomorphisms over $W$}. Then a \\emph{representation} \\[\n \\rho: (V,*)\\rightarrow End_{\\bK}(W)\n \\]\n of the associative $\\bK$-algebra $(V,*)$ is defined as an \\emph{algebra homomorphism}, such that\n \\begin{equation}\n \\forall a,b\\in V:\\quad \\rho(a*b)=\\rho(a)\\rho(b)\\,.\n \\end{equation}\n If $(V,*)$ is in addition unital, we also require that $\\rho$ maps the unit element $u\\in V$ to the identity element of $End_{\\bK}(W)$,\n \\begin{equation}\n \\rho(u)=\\mathbb{1}_{End_{\\bK}(W)}\\,.\n \\end{equation}\n\\end{defi}\n\n\\begin{defi}\nLet $\\cR_{\\bfC}\\equiv(\\cR_{\\bfC},+,\\cdot)$ be the free $\\bR$-vector space on $\\Lin{\\bfC}$, defined concretely via a bijection $\\delta:\\Lin{\\bfC}_{\\cong}\\rightarrow \\cR_{\\bfC}$ from \\emph{isomorphism classes of linear productions} in $\\Lin{\\bfC}$ to the set of basis vectors of $\\cR_{\\bfC}$. 
In order to distinguish between elements of $\\Lin{\\bfC}$ and of $\\cR_{\\bfC}$, we introduce the notation\n\\begin{equation}\n(\\grule{O}{r}{I}):=\\delta\\left(O\\xleftharpoonup{r}I\\right)\\,.\n\\end{equation}\nWe will later refer to $\\cR_{\\bfC}$ as the $\\bR$-vector space of \\emph{rule algebra elements}.\n\\end{defi}\n\n\\begin{defi}\\label{def:RADPO}\nDefine the \\emph{DPO rule algebra product} $*_{\\cR_{\\bfC}}$ on an $\\cM$-adhesive category $\\bfC$ satisfying Assumption~\\ref{ass:RAdpo} as the binary operation\n\\begin{equation}\n*_{\\cR_{\\bfC}}:\\cR_{\\bfC}\\times \\cR_{\\bfC}\\rightarrow \\cR_{\\bfC}:(R_1,R_2)\\mapsto R_1*_{\\cR_{\\bfC}} R_2\\,,\n\\end{equation}\nwhere for two basis vectors $R_i=\\delta(p_i)$ encoding the linear rules $p_i\\in \\Lin{\\bfC}$ ($i=1,2$),\n\\begin{equation}\\label{eq:defRcomp}\nR_2*_{\\cR_{\\bfC}}R_1\n:=\\sum_{\\mathbf{m}\\in \\RMatch{p_2}{p_1}}\\delta\\left(\\comp{p_2}{\\mathbf{m}}{p_1}\\right)\\,.\n\\end{equation}\nHere, we take the notational convention that $\\sum_{\\emptyset}\\dotsc=0_{\\cR_{\\bfC}}$ (i.e.\\ the summation over an empty set of admissible matches evaluates to the zero element of the vector space $\\cR_{\\bfC}$). 
The definition is extended to arbitrary (finite) linear combinations of basis vectors by bilinearity, i.e.\\ for $p_i,p_j\\in \\Lin{\\bfC}$ and $\\alpha_i,\\beta_j\\in \\bR$,\n\\begin{equation}\n\\left(\\sum_i \\alpha_i\\cdot\\delta(p_i)\\right)*_{\\cR_{\\bfC}}\\left(\\sum_j\\beta_j\\cdot \\delta(p_j)\\right):=\\sum_{i,j}(\\alpha_i\\cdot\\beta_j)\\cdot \\left(\\delta(p_i)*_{\\cR_{\\bfC}}\\delta(p_j)\\right)\\,.\n\\end{equation}\nWe refer to $\\cR_{\\bfC}\\equiv(\\cR_{\\bfC},*_{\\cR_{\\bfC}})$ as the \\textbf{\\emph{rule algebra}} (of linear DPO-type rewriting rules over the $\\cM$-adhesive category $\\bfC$).\n\\end{defi}\n\nIt is worthwhile noting that if the category $\\bfC$ possesses an $\\cM$-initial object, the ``trivial match'' of two linear productions $p_j=(O_j\\leftarrow K_j\\rightarrow I_j)$ (for $j=1,2$), i.e.\\ $\\mathbf{m}_{\\mathbb{I}}=(I_2\\leftarrow\\mathbb{I}\\rightarrow O_1)$, may be verified to always be an admissible match according to the definition of the DPO-type concurrent composition of productions (Definition~\\ref{def:DPOcc}) and by virtue of Lemma~\\ref{lem:binaryCoproducts}.\n\n\\begin{thm}\nFor every category $\\bfC$ satisfying Assumption~\\ref{ass:RAdpo}, the associated DPO-type rule algebra $\\cR_{\\bfC}\\equiv(\\cR_{\\bfC},*_{\\cR_{\\bfC}})$ is an \\emph{associative algebra}. If $\\bfC$ moreover possesses an \\emph{$\\cM$-initial object} $\\mathbb{I}\\in \\obj{\\bfC}$, then $\\cR_{\\bfC}$ is in addition a \\emph{unital algebra}, with unit element $R_{\\mathbb{I}}:=(\\grule{\\mathbb{I}}{id_{\\mathbb{I}}}{\\mathbb{I}})$.\n\\end{thm}\n\\begin{proof}\nAssociativity follows immediately from the associativity of the operation $\\comp{.}{.}{.}$ proved in Theorem~\\ref{thm:assocDPO}. 
The claim that $R_{\\mathbb{I}}$ is the unit element of the rule algebra $\\cR_{\\bfC}$ of an adhesive category $\\bfC$ with strict initial object follows directly from the definition of the rule algebra product for $R_{\\mathbb{I}}*_{\\cR_{\\bfC}}R$ and $R*_{\\cR_{\\bfC}}R_{\\mathbb{I}}$ for $R\\in \\cR_{\\bfC}$. For clarity, we present below the category-theoretic composition calculation that underlies the equation $R_{\\mathbb{I}}*_{\\cR_{\\bfC}}R=R$:\n\\begin{align}\\gdef\\mycdScale{0.85}\\tabularnewline\n &\\begin{mycd}\n \\mathbb{I}\\ar[d] &\n \\mathbb{I} \\ar[l,equal]\\ar[r,equal]\\ar[d]\n \\ar[dl,phantom,\"\\mathsf{PO}\"] &\n \\mathbb{I} \\ar[dr,h1color,bend right]\n \\ar[dl,phantom,\"\\mathsf{POC}\"] &\n {\\color{h1color}\\mathbb{I}}\n \\ar[l,h1color,equal]\n \\ar[r,h1color]\\ar[d,h1color,phantom,\"\\mathsf{PO}\"] &\n O \\ar[dl,h1color,bend left,equal]\\ar[dr,phantom,\"\\mathsf{POC}\"]&\n K \\ar[l,\"o\"']\\ar[r,\"i\"]\\ar[d,equal]\n \\ar[dr,phantom,\"\\mathsf{PO}\"] &\n I\\ar[d,equal]\\\\\n {\\color{h2color}O} &\n O\\ar[l,equal]\\ar[rr,equal] & &\n {\\color{h1color}O} & &\n K\\ar[ll,\"o\"]\\ar[r,\"i\"'] & {\\color{h2color}I}\\\\\n \\end{mycd}\\\\\n &({\\color{h2color}O}\\xleftarrow{id_O}O\\xrightarrow{id_O}{\\color{h1color}O})\\circ ({\\color{h1color}O}\\xleftarrow{o}K\\xrightarrow{i}{\\color{h2color}I})\n ={\\color{h2color} (O\\xleftarrow{o}K\\xrightarrow{i}I)}\n \\qedhere\n \\end{align}\n\\end{proof}\nThe property of a rule algebra being unital and associative has the important consequence that one can provide \\emph{representations} for it.\nThe following definition, given at the level of $\\cM$-adhesive categories with $\\cM$-initial objects, captures several of the concrete notions\nof canonical representations in the physics literature; in particular, it generalises the concept of canonical representation of the Heisenberg-Weyl algebra as explained in Section~\\ref{sec:HW}. 
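As a small computational illustration of the constructions so far, the following Python sketch implements the rule algebra product of Definition~\ref{def:RADPO} in the simplest conceivable setting, namely for discrete-graph rules with empty context (the setting of Section~\ref{sec:HW}). The encoding of such a rule as a pair $(|O|,|I|)$ of vertex counts and the closed-form count of admissible matches (partial injections between the input of the second rule and the output of the first) are illustrative assumptions specific to this restricted setting, not part of the general categorical construction.

```python
from collections import Counter
from math import comb, factorial

# Illustrative assumption: an empty-context rule over discrete graphs is
# (up to isomorphism) a pair (o, i) of output/input vertex counts; overlaps
# of size k between the second rule's input and the first rule's output
# are counted by C(i2, k) * C(o1, k) * k!.

def product(r2, r1):
    """Concurrent composition R2 * R1, as a Counter {rule: multiplicity}."""
    (o2, i2), (o1, i1) = r2, r1
    out = Counter()
    for k in range(min(i2, o1) + 1):        # k = size of the overlap M_21
        mult = comb(i2, k) * comb(o1, k) * factorial(k)
        out[(o2 + o1 - k, i1 + i2 - k)] += mult
    return out

def mul(a, b):
    """Bilinear extension of `product` to formal linear combinations."""
    out = Counter()
    for r2, c2 in a.items():
        for r1, c1 in b.items():
            for r, c in product(r2, r1).items():
                out[r] += c2 * c1 * c
    return out

unit = Counter({(0, 0): 1})                 # R_I: the empty rule
x_dag = Counter({(1, 0): 1})                # create one vertex
x = Counter({(0, 1): 1})                    # delete one vertex

# unitality and an associativity spot-check in this toy encoding
assert mul(unit, x_dag) == x_dag and mul(x, unit) == x
assert mul(x, mul(x_dag, x)) == mul(mul(x, x_dag), x)
```

In this encoding one finds, for instance, that `mul(x, x_dag)` contains the empty rule with coefficient one while `mul(x_dag, x)` does not, anticipating the commutation relation derived in Section~\ref{sec:HW}.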
Intuitively, since a given linear rewriting rule $r\\in \\Lin{\\bfC}$ may in general be applied in multiple different ways to a given object $X\\in \\obj{\\bfC}$, i.e.\\ via the different choices of admissible matches $m\\in \\Match{r}{X}$, one might wish to encode this non-determinism in some form or other. The variant of encoding chosen via the rule algebra and canonical representation (see below) consists heuristically in constructing a \\emph{``sum over all outcomes $r_m(X)$ of applying rule $r$ to object $X$ via admissible matches $m$''}. Since a particular DPO derivation from $X$ to $r_m(X)$ according to Definition~\\ref{def:DPOr} defines the object $r_m(X)$ only up to universal isomorphism (of the relevant pushout complement and pushout in~\\eqref{eq:DPOr}), it is a priori clear that we must make precise the concept as a form of \\emph{``sum over all outcomes $r_m(X)$ up to isomorphism''}. At this point, one might in principle envision multiple possible design choices, yet the construction of the canonical representation introduced below will have certain practical advantages:\n\\begin{itemize}[label=$\\triangleright$]\n \\item Rather than computing the sum over all outcomes obtained by first applying rule $r_1$ along all admissible matches to an object $X$, followed by summing over all possible ways to apply rule $r_2$ to the outcomes of the first step, one may alternatively, via the Concurrency Theorem (Theorem~\\ref{thm:concur}), first pre-compute the sum over all ways to compose $r_2$ with $r_1$ (computed concretely via the rule algebra product operation $\\delta(r_2)*_{\\cR_{\\bfC}}\\delta(r_1)$), followed by taking the sum over applying the rules encoded in $\\delta(r_2)*_{\\cR_{\\bfC}}\\delta(r_1)$ to $X$ along all admissible matches. 
The canonical representation $\\rho_{\\bfC}$ defined below is precisely the ``canonical choice'' to make this concept concrete.\n \\item One of the key motivations for this particular method of construction was to recover the important example of the Heisenberg-Weyl algebra as a special case of the general DPO algebra and canonical representation construction (see Section~\\ref{sec:HW} and further details therein).\n\\end{itemize}\n\n\\begin{defi}\\label{def:canRep}\nLet $\\bfC$ be a category satisfying Assumption~\\ref{ass:RAdpo} that in addition possesses an $\\cM$-initial object $\\mathbb{I}\\in \\obj{\\bfC}$, and let $\\cR_{\\bfC}$ be its associated rule algebra of DPO type. Denote by $\\hat{\\bfC}\\equiv(\\hat{\\bfC},+,\\cdot)$ the \\emph{$\\bR$-vector space of objects of $\\bfC$}, defined via a bijection $\\ket{.}:\\obj{\\bfC}_{\\cong}\\rightarrow \\hat{\\bfC}$ from isomorphism classes of objects of $\\bfC$ to the set of basis vectors of $\\hat{\\bfC}$. Then the \\emph{canonical representation} $\\rho_{\\bfC}$ of $\\cR_{\\bfC}$ is defined as the algebra homomorphism\n$\\rho_{\\bfC}:\\cR_{\\bfC}\\rightarrow End(\\hat{\\bfC})$, with\n\\begin{equation}\\label{eq:canRep}\n\\rho_{\\bfC}(\\delta(p))\\ket{C}:=\\begin{cases}\n\\sum_{m\\in \\Match{p}{C}}\\ket{p_m(C)}\\quad &\\text{if }\\Match{p}{C}\\neq \\emptyset\\\\\n0_{\\hat{\\bfC}}&\\text{otherwise,}\n\\end{cases}\n\\end{equation}\nextended to generic elements of $\\cR_{\\bfC}$ and of $\\hat{\\bfC}$ by linearity.\n\\end{defi}\nThe fact that $\\rho_{\\bfC}$ as given in Definition~\\ref{def:canRep} is an algebra homomorphism is shown below.\n\\begin{thm}[Canonical Representation]\\label{thm:canRep}\nFor $\\bfC$ a category satisfying Assumption~\\ref{ass:RAdpo} and with $\\cM$-initial object, $\\rho_{\\bfC}: \\cR_{\\bfC} \\rightarrow End(\\hat{\\bfC})$ of Definition~\\ref{def:canRep} is a homomorphism of unital associative algebras.\n\\end{thm}\n\\begin{proof}\nIn order for $\\rho_{\\bfC}$ to qualify as an algebra 
homomorphism (of unital associative algebras $\\cR_{\\bfC}$ and $End(\\hat{\\bfC})$), we must have (with $R_{\\mathbb{I}}=\\delta(r_{\\mathbb{I}})$, $r_{\\mathbb{I}}=\\GRule{\\mathbb{I}}{id_{\\mathbb{I}}}{\\mathbb{I}}$)\n\\begin{equation*}\n(i)\\; \\rho_{\\bfC}(R_{\\mathbb{I}})=\\mathbb{1}_{End(\\hat{\\bfC})}\\quad \\text{and}\\quad (ii)\\;\\forall R_1,R_2\\in \\cR_{\\bfC}:\\; \\rho_{\\bfC}(R_1*_{\\cR_{\\bfC}}R_2)=\\rho_{\\bfC}(R_1)\\rho_{\\bfC}(R_2)\\,.\n\\end{equation*}\nDue to linearity, it suffices to prove the two properties on basis elements $\\delta(p),\\delta(q)$ of $\\cR_{\\bfC}$ and on basis elements $\\ket{C}$ of $\\hat{\\bfC}$. Property $(i)$ follows directly from the definition,\n\\begin{equation*}\n\\forall C\\in \\obj{\\bfC}_{\\cong}:\\quad \\rho_{\\bfC}(R_{\\mathbb{I}})\\ket{C}\\overset{\\eqref{eq:canRep}}{=}\\sum_{m\\in\\Match{r_{\\mathbb{I}}}{C}}\\ket{{(r_{\\mathbb{I}})}_m(C)}=\\ket{C}\\,.\n\\end{equation*}\nProperty $(ii)$ follows from Theorem~\\ref{thm:concur} (the Concurrency Theorem): for all basis elements $\\delta(p),\\delta(q)\\in \\cR_{\\bfC}$ (with $p,q\\in \\Lin{\\bfC}$) and for all $C\\in \\obj{\\bfC}_{\\cong}$,\n\\begin{align*}\n\\rho_{\\bfC}\\left(\\delta(q)*_{\\cR_{\\bfC}}\\delta(p)\\right)\\ket{C}\n&\\overset{\\eqref{eq:defRcomp}}{=}\n \\sum_{\\mathbf{d}\\in \\RMatch{q}{p}}\n \\rho_{\\bfC}\\left(\\delta\\left(\\comp{q}{\\mathbf{d}}{p}\\right)\\right)\\ket{C}\\\\\n&\\overset{\\eqref{eq:canRep}}{=}\n \\sum_{\\mathbf{d}\\in \\RMatch{q}{p}}\\;\n \\sum_{e\\in\\Match{r_{\\mathbf{d}}}{C}}\\ket{{(r_{\\mathbf{d}})}_e(C)}\\quad \\tag{$r_{\\mathbf{d}}=\\comp{q}{\\mathbf{d}}{p}$}\\\\\n&=\n\\sum_{m\\in \\Match{p}{C}}\\sum_{n\\in\\Match{q}{p_{m}(C)}}\n\\ket{q_n(p_m(C))} \\tag{via Thm.~\\ref{thm:concur}}\\\\\n&\\overset{\\eqref{eq:canRep}}{=}\n\\sum_{m\\in 
\\Match{p}{C}}\\rho_{\\bfC}\\left(\\delta(q)\\right)\\ket{p_m(C)}\\\\\n&\\overset{\\eqref{eq:canRep}}{=}\n\\rho_{\\bfC}\\left(\\delta(q)\\right)\\rho_{\\bfC}\\left(\\delta(p)\\right)\\ket{C}\\,.\n\\tag*{\\qedhere}\n\\end{align*}%\n\\end{proof}\n\n\n\n\\subsection{Recovering the blueprint: the Heisenberg-Weyl algebra}%\n\\label{sec:HW}\n\n\nAs a first consistency check and interesting special (and arguably simplest) case of rule algebras, consider the adhesive category $\\mathbb{F}$ of equivalence classes of finite sets and functions. This category might alternatively be interpreted as the category $\\mathbf{G}_0$ of isomorphism classes of \\emph{finite discrete graphs}, whose linear rules are precisely the injective partial morphisms of discrete graphs. Specialising to a subclass of linear rules, namely to those with a trivial context object,\n\\begin{equation*}\n\\GRule{O}{\\emptyset}{I}\\equiv (O\\leftarrow \\emptyset\\rightarrow I)\\,,\n\\end{equation*}\nwe recover the famous Heisenberg-Weyl algebra and its canonical representation.\n\n\\begin{defi}[cf.\\ e.g.\\ \\cite{blasiak2011combinatorial}]\nThe associative unital \\emph{Heisenberg-Weyl algebra} over $\\bR$ is defined as\n\\begin{equation}\n A_{HW}:=\\frac{\\bR[\\pi^{\\dag},\\pi]}{\\langle [\\pi,\\pi^{\\dag}]-id\\rangle}\\,,\n\\end{equation}\nwhere $\\bR[\\pi^{\\dag},\\pi]$ denotes the polynomial ring over $\\bR$ with two generators $\\pi^{\\dag}$ and $\\pi$ (the ``creator'' and the ``annihilator'') quotiented by the ideal generated by the \\emph{canonical commutation relation}\n\\begin{equation}\n[\\pi,\\pi^{\\dag}]\\equiv \\pi\\pi^{\\dag}-\\pi^{\\dag}\\pi=id\\,.\n\\end{equation}\nTwo well-known constructions of representations of $A_{HW}$ illustrate the important role of this algebra in applications of combinatorics and physics:\n\\begin{itemize}[label=$\\triangleright$]\n \\item The \\emph{Bargmann-Fock (BF) representation} is defined as the algebra homomorphism from $A_{HW}$ to the space of 
endomorphisms over $\\bR[z]$ (the vector space of polynomials in the formal variable $z$) that maps $\\pi^{\\dag}$ to the linear operator $\\hat{z}$, and $\\pi$ to $\\tfrac{\\partial}{\\partial z}$, with\n \\begin{equation}\n \\forall n\\in \\bZ_{\\geq0}:\\quad \\hat{z} z^n:=z^{n+1}\\,,\\quad\n \\tfrac{\\partial}{\\partial z}z^0:=0\\,,\\quad \\tfrac{\\partial}{\\partial z}z^n:=nz^{n-1}\\; (n>0)\\,.\n \\end{equation}\n The canonical commutation relation may then be explicitly verified to hold on each basis vector $z^n$:\n \\begin{equation}\n \\begin{aligned}\n [\\tfrac{\\partial}{\\partial z},\\hat{z}]z^0\n &=\\tfrac{\\partial}{\\partial z}\\hat{z}z^0-\\hat{z}\\tfrac{\\partial}{\\partial z}z^0=\\tfrac{\\partial}{\\partial z}z^1-0=z^0\\\\\n [\\tfrac{\\partial}{\\partial z},\\hat{z}]z^n\n &=\\tfrac{\\partial}{\\partial z}\\hat{z}z^n-\\hat{z}\\tfrac{\\partial}{\\partial z}z^n=\\tfrac{\\partial}{\\partial z}z^{n+1}-n\\hat{z}z^{n-1}=z^n\\; (n>0)\n \\end{aligned}\n \\end{equation}\n \\item The so-called \\emph{canonical representation} of the HW algebra (see (IV) in Theorem~\\ref{thm:HW}), which is manifestly isomorphic to the BF representation via $\\hat{z}\\leftrightarrow a^{\\dag}$, $\\tfrac{\\partial}{\\partial z}\\leftrightarrow a$ and $z^n\\leftrightarrow \\ket{n}$, is the formulation typically encountered in the combinatorics and physics literature (the latter especially in the context of quantum mechanics).\n\\end{itemize}\n\\end{defi}\n\n\\noindent\nWe will now provide a realisation of the Heisenberg-Weyl algebra directly within the DPO rule algebra formalism. 
Reserving the symbol $\\cH$ for this realisation of the algebra to avoid confusion, note in particular that we give a very intuitive and \\emph{intrinsic} meaning to the ``creator'' and the ``annihilator'' (as simply the rule algebra elements associated to the linear rules of creating and of deleting a vertex, respectively).\n\n\\begin{defi}\\label{def:HW}\nLet $\\cR_0\\equiv \\cR_{\\mathbf{G}_0}$ denote the rule algebra of DPO type rewriting for finite discrete graphs. Then the subalgebra $\\cH$ of $\\cR_0$ is defined as the algebra whose elementary generators are\n\\begin{equation}\nx^{\\dag}:=(\\grule{\\bullet}{\\emptyset}{\\emptyset})\\equiv\\delta(\\bullet\\leftarrow \\emptyset\\rightarrow \\emptyset)\\,,\\quad\nx:=(\\grule{\\emptyset}{\\emptyset}{\\bullet})\\equiv\\delta(\\emptyset\\leftarrow \\emptyset\\rightarrow \\bullet)\\,,\n\\end{equation}\nand whose elements are (finite) linear combinations of words in $x^{\\dag}$ and $x$ (with concatenation given by the rule algebra multiplication $*_{\\cR_0}$) and of the unit element $R_{\\emptyset}=(\\grule{\\emptyset}{\\emptyset}{\\emptyset})$. The canonical representation of $\\cH$ is the restriction of the canonical representation of $\\cR_0$ to $\\cH$.\n\\end{defi}\nThe following theorem demonstrates how well-known properties of the Heisenberg-Weyl algebra (see e.g.\\ \\cite{blasiak2011combinatorial,bdgh2016,bdp2017} and references therein) follow directly from the previously introduced constructions of the rule algebra and its canonical representation. 
This justifies our claim that the Heisenberg-Weyl construction is a special case of our general framework.\n\\begin{thm}[Heisenberg-Weyl algebra from discrete graph rewriting rule algebra]\\label{thm:HW}\\hfill\n\\begin{enumerate}[(I)]\n\\item For integers $m,n>0$,\n\\begin{equation}\n\\underbrace{x^{\\dag}*_{\\cR_0}\\dotsc*_{\\cR_0}x^{\\dag}}_{\\text{$m$ times}}=\\underbrace{x^{\\dag}\\uplus \\dotsc\\uplus x^{\\dag}}_{\\text{$m$ times}}\\,,\\quad \\underbrace{x*_{\\cR_0}\\dotsc*_{\\cR_0}x}_{\\text{$n$ times}}=\\underbrace{x\\uplus \\dotsc\\uplus x}_{\\text{$n$ times}}\\,,\n\\end{equation}\nwhere we define for linear rules $p_1,p_2\\in Lin(\\bfC)$\n\\begin{equation}\n\\delta(p_1)\\uplus\\delta(p_2):=\\delta(\\comp{p_1}{\\emptyset}{p_2})\\,.\n\\end{equation}\n\\item The generators $x,x^{\\dag}\\in \\cH$ fulfil the \\emph{canonical commutation relation}\n\\begin{equation}\n[x,x^{\\dag}]\\equiv x*_{\\cR_0}x^{\\dag}-x^{\\dag}*_{\\cR_0}x=R_{\\emptyset}\\,,\\quad R_{\\emptyset}=(\\grule{\\emptyset}{\\emptyset}{\\emptyset})\\,.\n\\end{equation}\n\\item Every element of $\\cH$ may be expressed as a (finite) linear combination of so-called \\emph{normal-ordered} expressions $x^{\\dag\\:*r}*x^{*s}$ (with $r,s\\in \\bZ_{\\geq0}$ and $*\\equiv *_{\\cR_0}$). 
By convention, $x^{\\dag\\:*r}:=x^{\\dag}*\\dotsc *x^{\\dag}$ (a product of $r$ generators $x^{\\dag}$), and analogously for $x^{*\\:s}$.\n\\item Denoting by $\\ket{n}\\equiv\\ket{\\bullet^{\\uplus\\:n}}$ ($n\\in \\bZ_{\\geq 0}$) the basis vector associated to the discrete graph with $n$ vertices in the vector space $\\hat{\\bfG}_0$ of isomorphism classes of discrete graphs, the canonical representation of $\\cH$ according to Definition~\\ref{def:canRep} reads explicitly\n\\begin{equation}\na^{\\dag}\\ket{n}=\\ket{n+1}\\,,\\quad a\\ket{n}=\\begin{cases}\nn\\cdot\\ket{n-1}\\quad &\\text{if } n>0\\\\\n0_{\\hat{G}_0}&\\text{else}\n\\end{cases}\\,,\n\\end{equation}\nwith $a^{\\dag}:=\\rho_{\\mathbf{G}_0}(x^{\\dag})$ (the \\emph{creation operator}) and $a:=\\rho_{\\mathbf{G}_0}(x)$ (the \\emph{annihilation operator}).\n\\end{enumerate}\n\\end{thm}\n\\newpage\n\\begin{proof}~\\begin{enumerate}[(I)]\n\\item Since there is no partial injection possible between the input of one copy and the output of another copy of $x^{\\dag}$ other than the trivial match, and similarly for two copies of $x$, the claim follows by induction.\n\\item Computing the commutator $[x,x^{\\dag}]=x*x^{\\dag}-x^{\\dag}*x$ (with $*\\equiv*_{\\cR_0}$) explicitly, we find that\n\\begin{equation}\nx*x^{\\dag}= x\\uplus x^{\\dag}+R_{\\emptyset}\\,,\\quad\nx^{\\dag}*x=x^{\\dag}\\uplus x\\,,\n\\end{equation}\nfrom which the claim follows due to commutativity of the operation $\\uplus$ on $\\cR_0$, $x\\uplus x^{\\dag}=x^{\\dag}\\uplus x$. 
Here, the contribution $R_{\\emptyset}$ arises from the following sequential composition:\n\\begin{equation}\\label{eq:HWproofAux}\\gdef\\mycdScale{0.85}\n\\begin{aligned}\n&\\begin{mycd}\n \\emptyset\\ar[d] &\n \\emptyset \\ar[l]\\ar[r]\\ar[d]\n \\ar[dl,phantom,\"\\mathsf{PO}\"] &\n \\OneVertG[] \\ar[dr,h1color,bend right]\n \\ar[dl,phantom,\"\\mathsf{POC}\"] &\n {\\OneVertG[h1color]}\n \\ar[l,h1color]\n \\ar[r,h1color]\\ar[d,h1color,phantom,\"\\mathsf{PO}\"] &\n \\OneVertG[] \\ar[dl,h1color,bend left]\n \\ar[dr,phantom,\"\\mathsf{POC}\"]&\n \\emptyset \\ar[l]\\ar[r]\\ar[d]\n \\ar[dr,phantom,\"\\mathsf{PO}\"] &\n \\emptyset\\ar[d]\\\\\n {\\color{h2color}\\emptyset} &\n \\emptyset\\ar[l]\\ar[rr] & &\n {\\OneVertG[h1color]} & &\n \\emptyset\\ar[ll]\\ar[r] & {\\color{h2color}\\emptyset}\\\\\n \\end{mycd}\\\\\n &({\\color{h2color}\\emptyset}\\leftarrow\\emptyset\\rightarrow \\OneVertG[h1color])\\circ (\\OneVertG[h1color]\\leftarrow \\emptyset \\rightarrow{\\color{h2color}\\emptyset})\n ={\\color{h2color} (\\emptyset\\leftarrow\\emptyset\\rightarrow\\emptyset)}\n \\end{aligned}\n \\end{equation}\n\\item It suffices to prove the statement for basis elements of $\\cH$. Consider thus an arbitrary composition of a finite number of copies of the generators $x$ and $x^{\\dag}$. Then by repeated application of the commutation relation $[x,x^{\\dag}]=R_{\\emptyset}$, and since $R_{\\emptyset}$ is the unit element for $*_{\\cR_0}$, we can convert the arbitrary basis element of $\\cH$ into a linear combination of normal-ordered elements. 
More explicitly, from the viewpoint of the composition operation on linear rules, an expression of the form $x^{\\dag\\:*r}*x^{*\\:s}$ is easily verified to evaluate to\n\\begin{equation}\nx^{\\dag\\:*r}*x^{*\\:s}=\\delta\\left(\\bullet^{\\uplus\\:r}\\leftarrow \\emptyset\\rightarrow \\bullet^{\\uplus\\:s}\\right)\\,.\n\\end{equation}\nIn full analogy to the computation presented in~\\eqref{eq:HWproofAux}, composing $x^{\\dag\\:*r}*x^{*\\:s}$ with $x^{\\dag\\:*k}*x^{*\\:\\ell}$ will evaluate to\n\\begin{equation}\nx^{\\dag\\:*r}*x^{*\\:s}*x^{\\dag\\:*k}*x^{*\\:\\ell}=\\sum_{n=0}^{\\min(s,k)}\\tfrac{s! k!}{(s-n)!n!(k-n)!}\\, x^{\\dag\\:*(r+k-n)}*x^{*\\:(s+\\ell-n)}\\,.\n\\end{equation}\nNote in particular that the coefficient of a term $x^{\\dag\\:*(r+k-n)}*x^{*\\:(s+\\ell-n)}$ in the above sum coincides with the number of ways to match $n$ of the vertices between a discrete graph with $s$ vertices (i.e.\\ the input interface of the rule encoded in $x^{\\dag\\:*r}*x^{*\\:s}$) and a discrete graph with $k$ vertices (i.e.\\ the output interface of the rule encoded in $x^{\\dag\\:*k}*x^{*\\:\\ell}$). Thus the DPO algebra implementation precisely explains the full combinatorics involved in the problem of ``normal-ordering'' expressions in the HW algebra.\n\\item Note first that by definition $\\ket{0}=\\ket{\\emptyset}$. 
To prove the claim that for all $n\\geq 0$\n\\begin{equation*}\na^{\\dag}\\ket{n}=\\ket{n+1}\\,,\n\\end{equation*}\nwe apply Definitions~\\ref{def:DPOr} and~\\ref{def:canRep} by computing the following diagram (compare~\\eqref{eq:DPOr}): there exists precisely one admissible match of the empty graph $\\emptyset\\in \\obj{\\mathbf{G}_0}$ into the $n$-vertex discrete graph $\\OneVertG[]^{\\uplus\\:n}$, whence constructing the pushout complement marked with dashed arrows and the pushout marked with dotted arrows we verify the claim:\n\\begin{equation*}\\gdef\\mycdScale{0.85}\n\\begin{mycd}\n\\OneVertG[] \\ar[d,dotted]\n\\ar[dr,phantom,\"{\\mathsf{PO}}\"] &\n\\emptyset \\ar[l]\\ar[r]\\ar[d,dotted]\n\\ar[dr,phantom,\"{\\mathsf{POC}}\"] &\n\\emptyset \\ar[d,\"\\exists!\"]\\\\\n\\OneVertG[]^{\\uplus\\:(n+1)} &\n\\OneVertG[]^{\\uplus\\:n}\\ar[l,dashed]\\ar[r,dashed] &\n\\OneVertG[]^{\\uplus\\:n}\\\\\n\\end{mycd}\n\\end{equation*}\nProceeding analogously in order to prove the formula for the representation $a=\\rho_{\\mathbf{G_0}}(x)$,\n\\begin{equation*}\n a\\ket{n}:=\\begin{cases}\n n\\cdot\\ket{n-1}\\quad &\\text{if } n>0\\\\\n 0_{\\hat{G}_0} &\\text{else,}\n \\end{cases}\n\\end{equation*}\nwe find that for $n>0$ there exist $n$ admissible matches of the $1$-vertex graph $\\OneVertG[]$ into the $n$-vertex graph $\\OneVertG[]^{\\uplus\\:n}$, for each of which the application of the rule $\\GRule{\\OneVertG[]}{}{\\emptyset}$ along the match results in the graph $\\OneVertG[]^{\\uplus\\:(n-1)}$:\n\\begin{equation*}\\gdef\\mycdScale{0.85}\n\\begin{mycd}\n\\emptyset\n\\ar[d,dotted]\\ar[dr,phantom,\"{\\mathsf{PO}}\"] &\n\\emptyset \\ar[l]\\ar[r]\\ar[d,dotted]\\ar[dr,phantom,\"{\\mathsf{POC}}\"] &\n\\OneVertG[] \\ar[d,\"\\text{$n$ different matches}\"]\\\\\n\\OneVertG[]^{\\uplus\\:(n-1)} &\n\\OneVertG[]^{\\uplus\\:(n-1)}\\ar[l,dashed]\\ar[r,dashed] &\n\\OneVertG[]^{\\uplus\\:n}\\\\\n\\end{mycd}\n\\; \\Rightarrow\\;\n\\forall n>0:a\\ket{\\OneVertG[]^{\\uplus\\: n}}=n\\cdot 
\\ket{\\OneVertG[]^{\\uplus\\:(n-1)}}\n\\end{equation*}\nFinally, for $n=0$, by definition there exists no admissible match from the $1$-vertex graph $\\OneVertG$ into the empty graph $\\emptyset$, whence indeed\n\\begin{equation*}\na\\ket{\\emptyset}=\\rho_{\\mathbf{G}_0}\\left(\\grule{\\emptyset}{\\emptyset}{\\OneVertG[]}\\right)\\ket{\\emptyset}=0_{\\hat{\\mathbf{G}}_0}\\,.\n\\qedhere\n\\end{equation*}\n\\end{enumerate}\n\\end{proof}\n\n\n\\subsection{Applications of rule algebras to combinatorics}%\n\\label{sec:RT}\n\nHere we consider an example application, working with undirected multigraphs.\n\\begin{defi}[Compare e.g.\\ \\cite{padberg2017towards}]\n Let $Id_{\\mathbf{Set}}:\\mathbf{Set}\\rightarrow\\mathbf{Set}$ be the identity functor on $\\mathbf{Set}$, and let $\\cP^{(1,2)}:\\mathbf{Set}\\rightarrow\\mathbf{Set}$ be the restricted covariant power set functor (which maps a set $S$ to its subsets of cardinality $1$ or $2$). Then the \\emph{category of finite undirected multigraphs} $\\mathbf{uGraph}$ is defined as the finitary restriction of a comma category~\\cite{ehrig:2006aa},\n \\begin{equation}\n \\mathbf{uGraph}:={(Id_{\\mathbf{Set}},\\cP^{(1,2)})}_{{\\rm fin}}\\,.\n \\end{equation}\n An object $U$ of $\\mathbf{uGraph}$ is specified via the data $U\\equiv(V_U,E_U,inc_U)$, where $V_U$ is a set of vertices, $E_U$ a set of edges, and $inc_U:E_U\\rightarrow \\cP^{(1,2)}(V_U)$ is the incidence function.\n\\end{defi}\n\n\\begin{lem}\n $\\mathbf{uGraph}$ is a weak adhesive HLR category, for $\\cM_{\\mathbf{uGraph}}$ the class of pairs of monomorphisms in $\\mathbf{Set}_{{\\rm fin}}$. 
It has an $\\cM_{\\mathbf{uGraph}}$-initial object (the empty graph $\\emptyset\\in \\obj{\\mathbf{uGraph}}$) as well as $\\cM_{\\mathbf{uGraph}}$-effective unions.\n\\end{lem}\n\\begin{proof}\n The identity functor $Id_{\\mathbf{Set}}$ trivially preserves pushouts in $\\mathbf{Set}$, while $\\cP^{(1,2)}$ preserves pullbacks along monomorphisms in $\\mathbf{Set}$ (cf.\\ e.g.\\ Appendix~A, Cor.~7 of~\\cite{padberg2017towards}). Therefore, by~\\cite[Thm.~4.15]{ehrig:2006aa}, the comma category $(Id_{\\mathbf{Set}},\\cP^{(1,2)})$ is a weak adhesive HLR category, and then by Theorem~\\ref{thm:finRes}, so is its finitary restriction $\\mathbf{uGraph}$. $\\cM_{\\mathbf{uGraph}}$-initiality is trivial, while the property of $\\cM_{\\mathbf{uGraph}}$-effective unions follows by application of Theorem~\\ref{thm:euAux}.\n\\end{proof}\n\nIn order to illustrate the intimate interplay of rule algebraic and combinatorial structures, we will now provide an example where the integer coefficients arising in applications of rules on undirected multigraphs yield a known combinatorial integer sequence.\n\\begin{defi}\nWe define the algebra $\\cA$ as the one generated\\footnote{As in the case of the Heisenberg-Weyl algebra, by ``generated'' we understand that a generic element of $\\cA$ is a finite linear combination of (finite) words in the generators and of the identity element $R_{\\emptyset}$, with concatenation given by the rule algebra composition.} by the rule algebra elements\n\\begin{equation}\\label{eq:Adef}\ne_{+}:=\\tfrac{1}{2}\\cdot\\left(\\tP{%\n \\vI{1}{1}{black}{}{black}{}\n \\vI{1}{2}{black}{}{black}{}\n \\eC{1}{1}{=}{black}{}}\\right)\\,,\\quad\ne_{-}:=\\tfrac{1}{2}\\cdot\\left(\\tP{%\n \\vI{1}{1}{black}{}{black}{}\n \\vI{1}{2}{black}{}{black}{}\n \\eA{2}{1}{=}{black}{}}\\right)\\,, \\quad\nd:=\\tfrac{1}{2}\\cdot\\left(\\tP{%\n \\vI{1}{1}{black}{}{black}{}\n \\vI{1}{2}{black}{}{black}{}}\\right)\\quad (e_{+},e_{-},d\\in \\cR_{\\mathbf{uGraph}})\\,.\n\\end{equation}\nFor convenience, we 
adopt here a graphical notation (so-called ``rule diagrams''~\\cite{bdg2016}) in which we depict a rule algebra basis element $(\\grule{O}{f}{I})\\in \\cR_{\\mathbf{uGraph}}$ as the graph of its induced injective partial morphism $(I\\xrightharpoonup{f}O)\\in Inj(I,O)$ of graphs $I$ and $O$, with the input graph $I$ drawn at the \\emph{bottom}, the output graph $O$ at the \\emph{top}, and where the structure of the morphism $f$ is indicated with \\emph{dotted lines}. In the example above, $e_{+}$ encodes (up to the factor of $\\tfrac{1}{2}$ chosen purely for convenience) a linear rule with input interface given by two vertices; in applying the rule, the vertices are to be kept (indicated by the vertical dotted lines), while a new edge between them is created (indicated by the $\\times$ and dotted line to the edge, symbolising ``creation''). Dually, $e_{-}$ encodes the deletion of an edge, while $d$ encodes an identity rule on the pattern of two disjoint vertices. In all three rules, the factor $\\tfrac{1}{2}$ is chosen such as to compensate for the symmetry of the three linear rules evident from the depictions (i.e.\\ along the horizontal mirror axis of the ``rule diagrams'').\n\\end{defi}\n\nThe algebra thus defined may be characterised\\footnote{``Characterised'' here refers to the observation that by utilising the commutation relations given in~\\eqref{eq:UAcomm}, it is possible to express an arbitrary element of the algebra as a linear combination in ``normal-ordered terms'' $e_{+}^{*\\:p}*e_{-}^{*\\:m}*d^{*\\:n}$ (for $p,m,n\\in \\bZ_{\\geq0}$).} via its \\emph{commutation relations}, which read (with $[x,y]:=x*y-y*x$ for $*\\equiv*_{\\cR_{\\mathbf{uGraph}}}$)\n\\begin{equation}\\label{eq:UAcomm}\n[e_{-},e_{+}]=d\\,,\\quad [e_{+},d]=[e_{-},d]=0\\,.\n\\end{equation}\nHere, the only nontrivial contribution (i.e.\\ the one that renders the first commutator non-zero) may be computed from the DPO-type composition diagram\\footnote{Note that the number indices are 
used solely to specify the precise structure of the match, and are not to be understood as actual vertex labels or types.}\nbelow and its variant for the admissible match $\\TwoVertEdgeGLb{}{1}{2}\\leftarrow\n\\TwoVertEdgeGLb{}{12'}{21'}\\rightarrow \\TwoVertEdgeGLb{}{1'}{2'} $:\n\\begin{equation}\\gdef\\mycdScale{0.85}\n\\begin{aligned}\n&\\begin{mycd}\n \\TwoVertG[]\\ar[d] &\n \\TwoVertG[]\n \\ar[l]\\ar[r]\\ar[d]\n \\ar[dl,phantom,\"\\mathsf{PO}\"] &\n \\TwoVertEdgeGLb{}{1}{2}\n \\ar[dr,h1color,bend right]\n \\ar[dl,phantom,\"\\mathsf{POC}\"] &\n {\\TwoVertEdgeGLb{h1color}{11'}{22'}}\n \\ar[l,h1color]\n \\ar[r,h1color]\n \\ar[d,h1color,phantom,\"\\mathsf{PO}\"] &\n \\TwoVertEdgeGLb{}{1'}{2'}\n \\ar[dl,h1color,bend left]\n \\ar[dr,phantom,\"\\mathsf{POC}\"]&\n \\TwoVertG[]\n \\ar[l]\\ar[r]\\ar[d]\n \\ar[dr,phantom,\"\\mathsf{PO}\"] &\n \\TwoVertG[]\\ar[d]\\\\\n \\TwoVertG[h2color] &\n \\TwoVertG[]\\ar[l]\\ar[rr] & &\n \\TwoVertEdgeG[h1color] & &\n \\TwoVertG[]\\ar[ll]\\ar[r] &\n \\TwoVertG[h2color]\\\\\n \\end{mycd}\\\\\n &(\\TwoVertG[h2color]\\leftarrow\\TwoVertG[]\\rightarrow \\TwoVertEdgeG[h1color])\\circ (\\TwoVertEdgeG[h1color]\\leftarrow \\TwoVertG[] \\rightarrow\\TwoVertG[h2color])\n ={\\color{h2color} (\\TwoVertG[h2color]\\leftarrow\\TwoVertG[h2color]\\rightarrow\\TwoVertG[h2color])}\n \\end{aligned}\n\\end{equation}\n\nWe find an interesting structure for the representation of $\\cA$:\n\\begin{lem}\nLet $E_{\\pm}:=\\rho(e_{\\pm})$ and $D:=\\rho(d)$ (for $\\rho\\equiv\\rho_{\\mathbf{uGraph}}$), and let $\\hat{\\bfG}:=\\widehat{\\mathbf{uGraph}}$. Denote for each non-negative integer $n\\in \\bZ_{\\geq0}$ by $\\hat{\\bfG}_n\\subset \\hat{\\bfG}$ the linear subspace of $\\hat{\\bfG}$ spanned by basis vectors indexed by isomorphism classes of undirected graphs with $n$ vertices. 
Then the linear endomorphisms $\\rho(X)$ for $X\\in\\{e_{+},e_{-},d\\}$ possess the vector spaces $\\hat{\\bfG}_n$ as \\emph{invariant subspaces}, resulting in the \\emph{decompositions}\n\\begin{equation}\n\\rho(X)=\\bigoplus_{n\\geq 0} \\left(\\rho(X)\\vert_{\\hat{\\bfG}_n}\\right)\\,.\n\\end{equation}\n\\begin{proof}\nThe three rules that define the algebra $\\cA$ do not modify the number of vertices when applied to a given graph (via the canonical representation). In other words, for each $X\\in \\{e_{+},e_{-},d\\}$ and for a basis vector $\\ket{G}$ of the invariant subspace $\\hat{\\mathbf{G}}_n$ of graphs with $n$ vertices, the image of $\\ket{G}$ under application of $\\rho(X)$ is a linear combination of basis vectors again in $\\hat{\\bfG}_n$, i.e.\\ $\\rho(X)\\ket{G}\\in \\hat{\\mathbf{G}}_n$.\n\\end{proof}\n\\end{lem}\n\n\\begin{rem}While at first sight a rather technical observation, the decomposition of the linear operators $E_{\\pm}=\\rho(e_{\\pm})$ and $D=\\rho(d)$ via restriction to their invariant subspaces gives rise to rich combinatorial structures, even though the operators $E_{\\pm}$ and $D$ originate from representations of very simple rule algebra elements. Since a subspace $\\hat{\\bfG}_n$ is characterised by isomorphism classes of finite undirected multigraphs with a fixed number $n$ of vertices, but with an arbitrary (finite) number of edges, each space $\\hat{\\bfG}_n$ is countably infinite-dimensional. 
We will exemplify the combinatorial structures that arise for the cases $n=2$ and $n=3$ in the following.\n\\end{rem}\n\nOne may easily verify that the operator $D$ may be equivalently expressed as\n\\begin{equation}\\label{eq:defOv}\nD=\\tfrac{1}{2}\\cdot\\rho\\left(\\tP{%\n \\vI{1}{1}{black}{}{black}{}\n \\vI{1}{2}{black}{}{black}{}}\\right)\n =\\tfrac{1}{2}\\left(O_{\\bullet}O_{\\bullet}-O_{\\bullet}\\right)\\,,\\quad\n O_{\\bullet}:=\\rho\\left(\\tP{\\vI{1}{1}{black}{}{black}{}}\\right)\\,.\n\\end{equation}\nSince the diagonal operator $O_{\\bullet}$ when applied to an arbitrary graph state $\\ket{G}$ for $G\\in \\bfG$ effectively counts the number $n_V(G)$ of vertices of $G$,\n\\begin{equation}\nO_{\\bullet}\\ket{G}=n_V(G)\\ket{G}\\,,\n\\end{equation}\none finds that\n\\begin{equation}\nD\\ket{G}=\\tfrac{1}{2}O_{\\bullet}(O_{\\bullet}-1)\\ket{G}\n=\\tfrac{1}{2}n_V(G)(n_V(G)-1)\\ket{G}\\,.\n\\end{equation}\nOne may thus alternatively analyse the canonical representation of $\\cA$ split into invariant subspaces of $D$. The lowest non-trivial such subspace is the space $\\hat{\\bfG}_2$ of undirected multigraphs on two vertices. 
It in fact contains a representation of the Heisenberg-Weyl algebra, with $E_{+}$ and $E_{-}$ taking the roles of the creation and of the annihilation operator, respectively, and with the number vector $\\ket{n}$ realised as the two-vertex multigraph with $n$ parallel edges, as follows (with ${(m)}_n:=\\Theta(m-n)m!\/(m-n)!$, where $\\Theta$ denotes the Heaviside step function):\n\\begin{equation}\n\\begin{aligned}\nE^n_{+}\\ket{\\tP{\\node[vertices] (a) at (1,1) {};\\node[vertices] (a) at (1.5,1) {};}}\n&=\\ket{\\tP{%\n \\node[vertices] (a) at (1,1) {};\n \\node[vertices] (b) at (3,1) {};\n \\draw (a) edge[bend left=50] (b);\n \\draw (a) edge[bend left=40] (b);\n \\draw (a) edge[bend left=30] (b);\n \\draw (b) edge[bend left=50] (a);\n \\node at ($(a.center)!0.5!(b.center)$) {$\\vcenter{\\hbox{\\vdots}} \\text{\\tiny{$n$ times}}$};}}\\,,\\quad\n E_{-}\\ket{\\tP{%\n \\node[vertices] (a) at (1,1) {};\n \\node[vertices] (b) at (3,1) {};\n \\draw (a) edge[bend left=50] (b);\n \\draw (a) edge[bend left=40] (b);\n \\draw (a) edge[bend left=30] (b);\n \\draw (b) edge[bend left=50] (a);\n \\node at ($(a.center)!0.5!(b.center)$) {$\\vcenter{\\hbox{\\vdots}} \\text{\\tiny{$n$ times}}$};}}=\n {(n)}_1\\ket{\\tP{%\n \\node[vertices] (a) at (1,1) {};\n \\node[vertices] (b) at (3,1) {};\n \\draw (a) edge[bend left=50] (b);\n \\draw (a) edge[bend left=40] (b);\n \\draw (b) edge[bend left=50] (a);\n \\node at ($(a.center)!0.5!(b.center)$) {$\\vcenter{\\hbox{\\vdots}} \\text{\\tiny{$(n-1)$ times}}$};}}\\,.\n\\end{aligned}\n\\end{equation}\nBut already the invariant subspace based on the initial vector $\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};}}\\in \\hat{\\bfG}_3$ has a very interesting combinatorial structure:\n\\begin{equation}\n\\begin{aligned}\nE_{+}\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};}}\n&=\n3\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) 
{};\n\\node[vertices] (c) at (2,1) {};\n\\draw (a) edge (b);}}\\equiv 3\\ket{\\{1,0,0\\}}\\\\\nE_{+}^2\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};}}\n&=\n3\\left(\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};\n\\draw (a) edge[bend left] (b);\n\\draw (b) edge[bend left] (a);}}\n+2\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};\n\\draw (a) edge (b);\n\\draw (b) edge (c);}}\n\\right)\n\\equiv3\\left( \\ket{\\{2,0,0\\}}+2\\ket{\\{1,1,0\\}}\\right)\\\\\nE_{+}^3\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};}}\n&=\n3\\left(\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};\n\\draw (a) edge[bend left] (b);\n\\draw (a) edge (b);\n\\draw (b) edge[bend left] (a);}}\n+6\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};\n\\draw (a) edge[bend left] (b);\n\\draw (a) edge[bend right] (b);\n\\draw (b) edge (c);}}\n+2\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};\n\\draw (a) edge[bend left] (c);\n\\draw (a) edge (b);\n\\draw (b) edge (c);}}\n\\right)\\\\\n&\\equiv3\\left(\\ket{\\{3,0,0\\}}+6\\ket{\\{2,1,0\\}}+2\\ket{\\{1,1,1\\}}\\right)\\\\\n&\\;\\vdots\\\\\nE^n_{+}\\ket{\\tP{%\n\\node[vertices] (a) at (1,1) {};\n\\node[vertices] (b) at (1.5,1) {};\n\\node[vertices] (c) at (2,1) {};}}&\\equiv E^n_{+}\\ket{\\{0,0,0\\}}=3\\sum_{k=0}^n T(n,k)\\ket{S(n,k)}\\,\n\\end{aligned}\n\\end{equation}\nHere, the state $\\ket{\\{f,g,h\\}}$ with $f\\geq g\\geq h\\geq 0$ and $f+g+h=n$ is the graph state on three vertices with (in one of the possible presentations of the isomorphism class) $f$ edges 
between the first two, $g$ edges between the second two and $h$ edges between the third and the first vertex. Furthermore, $T(n,k)$ and $S(n,k)$ are given by the entry \\href{https:\/\/oeis.org\/A286030}{\\emph{A286030}} of the OEIS database~\\cite{OEISts}. The interpretation of $S(n,k)$ and $T(n,k)$ is that each triple $S(n,k)$ encodes the outcome of a game between three players, counting (without regarding the order of players) the number of wins per player for a total of $n$ games. Here, the second index $k$ in a given $S(n,k)=\\{f',g',h'\\}$ denotes the position of this triple of integers in the reverse lexicographic order over all triples of integers $\\{f,g,h\\}$ satisfying the constraints $f\\geq g\\geq h\\geq 0$ and $f+g+h=n$ (see~\\cite{OEISts} for further details). Then $T(n,k)\/3^{(n-1)}$ gives the probability that a particular pattern $S(n,k)$ occurs after $n$ games with uniformly random winners.\n\n\\medskip\nWhile of course the example presented must be seen as just a first proof of concept, many integer sequences as well as certain types of orthogonal polynomials possess an interpretation related to the counting of certain graphical structures (cf.\\ e.g.\\ \\cite{strehl2017lacunary} and references therein). It thus appears to be an interesting avenue of future research to investigate the apparently quite intricate interrelations between rule algebra representation theory and combinatorics; in particular, this suggests reinterpreting the graphical methods of importance in enumerative combinatorics via a direct encoding within the rule algebra framework.\n\n\n\\subsection{Applications of rule algebras to stochastic mechanics}%\n\\label{sec:SM}\n\n\nOne of the main motivations that underpinned the development of the rule algebra framework prior to this paper~\\cite{bdg2016,bdgh2016} has been the link between associative unital algebras of transitions and continuous-time Markov chains (CTMCs). 
Famous examples of such particular types of CTMCs include chemical reaction systems (see e.g.\\ \\cite{bdp2017} for a recent review) and stochastic graph rewriting systems (see~\\cite{bdg2016} for a rule-algebraic implementation). With our novel formulation of unital associative rule algebras and their canonical representation for $\\cM$-adhesive categories, it is possible to specify a \\emph{general stochastic mechanics framework}. While a detailed presentation of the far-reaching consequences of this result is relegated to~\\cite{bdg2018}, it suffices here to define the basic framework and to indicate the potential of the idea with a short worked example.\n\n\\begin{rem}\n For readers not familiar with the theory of \\emph{continuous-time Markov chains (CTMCs)}, the salient points of the mathematical construction are as follows:\n \\begin{itemize}[label=$\\triangleright$]\n \\item Fixing a suitable $\\cM$-adhesive category $\\bfC$, the CTMCs to be defined will evolve over a space of \\emph{(sub-)probability distributions} indexed by isomorphism classes of objects of $\\bfC$ (see~\\eqref{eq:defProbC}).\n \\item In general, these distributions may (and in many examples will) have \\emph{countably infinite support}. Therefore, one must utilise certain concepts introduced in the mathematical theory of CTMCs (notably \\emph{(sub-)stochastic operators} and \\emph{Fr\\'{e}chet spaces}) in order to explain the passage from linear operators (acting on elements of vector spaces, i.e.\\ on finite linear combinations of basis vectors) to the countably infinitely supported setting.\n \\item Specifically, a type of operator called \\emph{infinitesimal generator} must be constructed that will play a crucial role as the operator governing the evolution of the CTMC\\@. 
In the setting at hand, a central result of CTMC theory entails that this type of operator is fully characterised by its ``matrix element structure'' (see~\\eqref{eq:defHctmc}), i.e.\\ its off-diagonal elements must be non-negative, while the diagonal elements must evaluate to minus the sum of the off-diagonal elements in a given row (and this sum must be finite).\n \\item Finally, a key feature of CTMC theory for stochastic rewriting systems is its degree of freedom in choosing a set of \\emph{observed properties} (typically a choice of patterns to count) of the system at hand in its time-evolution. Unlike in the special setting of CTMCs arising from stochastic rewriting systems on \\emph{discrete graphs} with vertices of possibly different types (where naturally the counts of the numbers of vertices of the different types are the ``canonical'' choice of observed property), in generic rewriting systems it is not clear from the outset which properties are interesting to observe. We will first introduce the mathematical notion of \\emph{observables} as certain diagonal linear operators in Definition~\\ref{def:obs}, followed by an illustration in a concrete worked example at the end of this section.\n \\end{itemize}\n\\end{rem}\n\n\\noindent\nWe begin our construction by specialising the general definition of continuous-time Markov chains (see e.g.\\ \\cite{norris}) to the setting of rewriting systems (compare~\\cite{bdg2016,bdp2017}).\n\n\\begin{defi}\nConsider an $\\cM$-adhesive category $\\bfC$ with $\\cM$-initial object $\\mathbb{I}\\in \\obj{\\bfC}$ and satisfying Assumption~\\ref{ass:RAdpo}, and let $\\hat{\\bfC}$ denote the free $\\bR$-vector space indexed by isomorphism classes of objects of $\\bfC$ according to Definition~\\ref{def:canRep}. 
Then we define the space $\\Prob{\\bfC}$ as the \\emph{space of sub-probability distributions} in the following sense:\n\\begin{equation}\\label{eq:defProbC}\n\\Prob{\\bfC}:=\\left.\\left\\{\n\\ket{\\Psi}=\\sum_{o\\in \\obj{\\bfC}_{\\cong}}\\psi_o \\ket{o}\n\\right\\vert\n\\forall o\\in \\obj{\\bfC}_{\\cong}: \\psi_o\\in \\bR_{\\geq0}\n\\land\n\\sum_{o\\in \\obj{\\bfC}_{\\cong}}\\psi_o\\leq 1\n\\right\\}\\,.\n\\end{equation}\nLet $\\Stoch{\\bfC}:=End(\\Prob{\\bfC})$ be the space of \\emph{sub-stochastic operators}, and denote by $\\cS_{\\bfC}$ the space\\footnote{The space $\\cS_{\\bfC}$ is referred to as the \\emph{Fr\\'{e}chet space} of real-valued sequences $f\\equiv{(f_o)}_{o\\in \\obj{\\bfC}_{\\cong}}$ with semi-norms $\\|f\\|_{o}:=|f_o|$. Strictly speaking, we are thus tacitly assuming here that the category $\\bfC$ is \\emph{essentially small}, i.e.\\ that it possesses a countable set of isomorphism classes.} of real-valued sequences indexed by isomorphism classes of objects of $\\bfC$, where all coefficients are \\emph{finite}. 
Then a \\textbf{\\emph{continuous-time Markov chain (CTMC)}} is specified in terms of a tuple of data $(\\ket{\\Psi(0)},H)$, where\n\\begin{itemize}[label=$\\triangleright$]\n \\item $\\ket{\\Psi(0)}\\in \\Prob{\\bfC}$ is the \\emph{initial state}, and where\n \\item $H\\in End_{\\bR}(\\cS_{\\bfC})$ is the \\emph{infinitesimal generator} or \\emph{Hamiltonian} of the CTMC\\@.\n\\end{itemize}\nThe linear operator $H$ is required to be an infinitesimal (sub-)stochastic operator, whence to fulfil the following constraints on its ``matrix elements'' $h_{o,o'}$:\n\\begin{equation}\\label{eq:defHctmc}\n\\begin{aligned}\nH&\\equiv {(h_{o,o'})}_{o,o'\\in \\obj{\\bfC}_{\\cong}}\\quad \\forall o,o'\\in \\obj{\\bfC}_{\\cong}:\\\\\n&\\quad (i)\\; h_{o,o}\\leq 0\\,,\\quad\n (ii) \\forall o\\neq o':\\; h_{o,o'}\\geq 0\\,,\\quad\n(iii)\\; \\sum_{o'} h_{o,o'}=0\\,.\n\\end{aligned}\n\\end{equation}\nAccording to the mathematical theory of CTMCs~\\cite{norris}, under the above conditions this data encodes the \\emph{evolution semi-group} $\\cE:\\bR_{\\geq 0}\\rightarrow \\Stoch{\\bfC}$ as the (point-wise minimal non-negative) solution of the \\emph{Kolmogorov backwards} or \\emph{master equation}:\n\\begin{equation}\n\\begin{aligned}\n\\tfrac{d}{dt}\\cE(t)&=H\\cE(t)\\,,\\quad \\cE(0)=\\mathbb{1}_{End_{\\bR}(\\cS_{\\bfC})}\n\\Rightarrow\\quad &\\forall t,t'\\in \\bR_{\\geq 0}: \\cE(t)\\cE(t')=\\cE(t+t')\\,.\n\\end{aligned}\n\\end{equation}\nConsequently, the \\emph{time-dependent state} $\\ket{\\Psi(t)}$ of the system is given by\n\\begin{equation}\n\\forall t\\in \\bR_{\\geq 0}:\\quad \\ket{\\Psi(t)}=\\cE(t)\\ket{\\Psi(0)}\\,.\n\\end{equation}\n\\end{defi}\n\nTypically, our interest in analysing a given CTMC will consist in studying the dynamical statistical behaviour of so-called \\emph{observables}. 
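Before turning to observables, the tuple of data $(\\ket{\\Psi(0)},H)$ defined above may be illustrated by a minimal numerical sketch. The following Python snippet is illustrative only: the two-element state space and the rate value \texttt{kappa} are hypothetical choices, not taken from the text. For a symmetric infinitesimal generator the conditions in~\\eqref{eq:defHctmc} hold, and a truncated exponential series for $\\cE(t)=\\exp(tH)$ exhibits both the semi-group law and the conservation of total probability.

```python
# Hypothetical two-state CTMC: states {o1, o2} with symmetric jump rate kappa,
# so the infinitesimal generator H has non-negative off-diagonal entries and
# rows/columns summing to zero.
kappa = 0.7
H = [[-kappa, kappa],
     [kappa, -kappa]]

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def evolution(A, t, terms=40):
    """Evolution semi-group E(t) = exp(t*A) via a truncated power series."""
    tA = [[t * x for x in row] for row in A]
    E = [[1.0, 0.0], [0.0, 1.0]]      # E(0) = identity
    term = [[1.0, 0.0], [0.0, 1.0]]   # running term (tA)^n / n!
    for n in range(1, terms):
        term = [[x / n for x in row] for row in mat_mul(term, tA)]
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return E

E1 = evolution(H, 1.0)
E2 = evolution(H, 2.0)
E1E1 = mat_mul(E1, E1)  # semi-group law: E(1)E(1) should equal E(2)

psi0 = [1.0, 0.0]  # initial state |Psi(0)>: all probability mass on o1
psi1 = [sum(E1[i][j] * psi0[j] for j in range(2)) for i in range(2)]
```

The truncated series suffices here because the generator acts on a finite state space; for the countably infinitely supported distributions discussed above, the Fr\\'{e}chet-space formalism is needed instead.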
We will first provide the general definition of observables in CTMCs below, followed by a more specific characterisation in stochastic rewriting systems as part of Theorem~\\ref{thm:smf}.\n\\begin{defi}\\label{def:obs}\nLet $\\cO_{\\bfC}\\subset End_{\\bR}(\\cS_{\\bfC})$ denote the space of \\emph{observables}, defined as the space of \\emph{diagonal operators},\n\\begin{equation}\n\\cO_{\\bfC}:=\\{O\\in End_{\\bR}(\\cS_{\\bfC})\\mid \\forall o\\in \\obj{\\bfC}_{\\cong}:\\; O\\ket{o}=\\omega_O(o)\\ket{o}\\,,\\; \\omega_O(o)\\in \\bR\\}\\,.\n\\end{equation}\nWe furthermore define\\footnote{On distributions in $\\cS_{\\bfC}$ whose sum of coefficients is not finite, we consider $\\bra{}$ to be undefined; in practice, we will however only be interested in evaluating moments of observables on (sub-)probability distributions, i.e.\\ one may directly verify the finiteness properties in a given calculation.} the so-called \\emph{projection operation} $\\bra{}:\\cS_{\\bfC}\\rightarrow \\bR$ via extending by linearity the definition of $\\bra{}$ acting on basis vectors of $\\hat{\\bfC}$,\n\\begin{equation}\n\\forall o\\in \\obj{\\bfC}_{\\cong}:\\quad \\braket{}{o}:=1_{\\bR}\\,.\n\\end{equation}\nThese definitions induce a notion of \\emph{correlators} of observables, defined for $O_1,\\dotsc,O_n\\in \\cO_{\\bfC}$ and $\\ket{\\Psi}\\in \\Prob{\\bfC}$ as\n\\begin{equation}\n\\langle O_1,\\dotsc,O_n\\rangle_{\\ket{\\Psi}}:=\\bra{}O_1\\cdots O_n\\ket{\\Psi}\n=\\sum_{o\\in \\obj{\\bfC}_{\\cong}}\\psi_o\\cdot\\omega_{O_1}(o)\\cdots \\omega_{O_n}(o)\\,.\n\\end{equation}\n\\end{defi}\n\nThe precise relationship between the notions of CTMCs and DPO rewriting rules as encoded in the rule algebra formalism is established in the form of the following theorem (compare~\\cite{bdg2016}):\n\n\\begin{thm}[DPO-type stochastic mechanics framework]\\label{thm:smf}\nLet $\\bfC$ be an $\\cM$-adhesive category satisfying Assumption~\\ref{ass:RAdpo}, and which in addition possesses an $\\cM$-initial 
object. Let ${\\{(\\grule{O_j}{r_j}{I_j})\\in \\cR_{\\bfC}\\}}_{j\\in \\cJ}$ be a (finite) set of rule algebra elements, and ${\\{\\kappa_j\\in \\bR_{\\geq 0}\\}}_{j\\in \\cJ}$ a collection of non-negative parameters (called \\emph{base rates}). Then one may construct a Hamiltonian $H$ from this data according to\n\\begin{equation}\nH:=\\hat{H}+\\bar{H}\\,,\\quad\n\\hat{H}:=\\sum_{j\\in \\cJ}\\kappa_j\\cdot \\rho_{\\bfC}\\left(\\grule{O_j}{r_j}{I_j}\\right)\\,,\\quad\n\\bar{H}:=-\\sum_{j\\in \\cJ}\\kappa_j\\cdot \\rho_{\\bfC}\\left(\\grule{I_j}{id_{dom(r_j)}}{I_j}\\right)\\,.\n\\end{equation}\nHere, we define for arbitrary $(\\GRule{O}{r}{I})\\equiv(O\\xleftarrow{o}K\\xrightarrow{i}I)\\in \\Lin{\\bfC}$\n\\begin{equation}\n(\\GRule{I}{id_{dom(r)}}{I}):=(I\\xleftarrow{i}K\\xrightarrow{i}I)\\,.\n\\end{equation}\nThe \\emph{observables} for the resulting CTMC are operators of the form\n\\begin{equation}\nO_M^t=\\rho_{\\bfC}\\left(\\grule{M}{t}{M}\\right)\\,.\n\\end{equation}\nWe furthermore have the \\emph{jump-closure property}, whereby for all $(\\grule{O}{r}{I})\\in \\cR_{\\bfC}$\n\\begin{equation}\\label{eq:ojc}\n\\bra{}\\rho_{\\bfC}(\\grule{O}{r}{I})=\\bra{}O_I^{id_{dom(r)}}\\,.\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nBy definition, the DPO-type canonical representation of a generic rule algebra element $(\\grule{O}{r}{I})\\in \\cR_{\\bfC}$ is a row-finite linear operator, since for every $C\\in \\obj{\\bfC}_{\\cong}$ the set of admissible matches $\\Match{p}{C}$ of the associated linear rule $p\\equiv(\\GRule{I}{r}{O})$ is finite. Consequently, $\\rho_{\\bfC}(\\grule{O}{r}{I})$ lifts consistently from a linear operator in $End(\\hat{\\bfC})$ to a linear operator in $End(\\cS_{\\bfC})$. Let us prove next the claim on the precise structure of observables.
Recall that according to Definition~\\ref{def:obs}, an observable $O\\in \\cO_{\\bfC}$ must be a linear operator in $End(\\cS_{\\bfC})$ that acts \\emph{diagonally} on basis states $\\ket{C}$ (for $C\\in \\obj{\\bfC}_{\\cong}$), i.e.\\ one that satisfies for all $C\\in \\obj{\\bfC}_{\\cong}$\n\\begin{equation*}\nO\\ket{C}=\\omega_O(C)\\ket{C}\\quad (\\omega_O(C)\\in \\bR)\\,.\n\\end{equation*}\nComparing this equation to the definition of the DPO-type canonical representation (Definition~\\ref{def:canRep}) of a generic rule algebra basis element $\\delta(p)\\in \\cR_{\\bfC}$ (for $p\\equiv(I\\xleftarrow{i}K\\xrightarrow{o}O)\\in \\Lin{\\bfC}$),\n\\begin{equation*}\n\\rho_{\\bfC}(\\delta(p))\\ket{C}:=\\begin{cases}\n\\sum_{m\\in \\Match{p}{C}}\\ket{p_m(C)}\\quad &\\text{if }\\Match{p}{C}\\neq \\emptyset\\\\\n0_{\\hat{\\bfC}}&\\text{else,}\n\\end{cases}\n\\end{equation*}\nwe find that in order for $\\rho_{\\bfC}(\\delta(p))$ to be diagonal we must have\n\\begin{equation*}\n \\forall C\\in \\obj{\\bfC}:\\forall m\\in \\Match{p}{C}:\\quad p_m(C)\\cong C\\,.\n\\end{equation*}\nBut by definition of derivations of objects along admissible matches (Definition~\\ref{def:DPOr}), the only linear rules $p\\in \\Lin{\\bfC}$ that have this special property are precisely the rules of the form\n\\begin{equation*}\np^r_M= (M\\xleftarrow{r}K\\xrightarrow{r}M)\\,.\n\\end{equation*}\nIn particular, defining $O_M^r:=\\rho_{\\bfC}(\\delta(p^r_M))$, we find that the eigenvalue $\\omega_{O^r_M}(C)$ coincides with the cardinality of the set $\\Match{p^r_M}{C}$ of admissible matches,\n\\begin{equation*}\n\\forall C\\in \\obj{\\bfC}_{\\cong}:\\quad O_M^r\\ket{C}=|\\Match{p^r_M}{C}|\\cdot\\ket{C}\\,.\n\\end{equation*}\nThis proves that the operators $O^r_M$ form a basis of the diagonal operators in $End(\\hat{\\bfC})$ (and thus in $End(\\cS_{\\bfC})$) that arise from linear combinations of canonical representations of rule algebra elements.\n\nTo prove the jump-closure property, note that it follows from
Definition~\\ref{def:DPOr} that for an arbitrary linear rule $p\\equiv(I\\xleftarrow{i}K\\xrightarrow{o}O)\\in \\Lin{\\bfC}$, a generic object $C\\in \\obj{\\bfC}$ and an $\\cM$-morphism $m:I\\rightarrow C$, the admissibility of $m$ as a match is determined by whether or not the match fulfils the gluing condition (Definition~\\ref{def:gc}), i.e.\\ whether or not the following pushout complement exists,\n\\begin{equation*}\\gdef\\mycdScale{0.85}\n\\begin{mycd}\nI\\ar[r,leftarrow,\"i\"]\\ar[d,\"m\"'] & K\\ar[d,\"g\",dashed]\\ar[dl,phantom,\"\\mathsf{POC}\"]\\\\\nC\\ar[r,leftarrow,dashed,\"v\"'] & E\n\\end{mycd}\\,.\n\\end{equation*}\nThus we find that with $p'=(I\\xleftarrow{i}K\\xrightarrow{i}I)\\in \\Lin{\\bfC}$, the sets $\\Match{p}{C}$ of admissible matches of $p$ in $C$ and $\\Match{p'}{C}$ of $p'$ in $C$ have the same cardinality. Combining this with the definition of the projection operator $\\bra{}$ (Definition~\\ref{def:obs}),\n\\begin{equation*}\n\\forall C\\in \\obj{\\bfC}_{\\cong}:\\quad \\braket{}{C}:=1_{\\bR}\\,,\n\\end{equation*}\nwe may prove the claim of the jump-closure property by verifying it on arbitrary basis elements (with notations as above):\n\\begin{equation*}\n\\bra{}\\rho_{\\bfC}(\\delta(p))\\ket{C}=|\\Match{p}{C}|=|\\Match{p'}{C}|=\\bra{}\\rho_{\\bfC}(\\delta(p'))\\ket{C}\\,.\n\\end{equation*}\nSince $C\\in \\obj{\\bfC}_{\\cong}$ was chosen arbitrarily, we thus have indeed that\n\\begin{equation*}\n\\bra{}\\rho_{\\bfC}(\\delta(p))=\\bra{}\\rho_{\\bfC}(\\delta(p'))\\,.\n\\end{equation*}\nFinally, combining all of these findings, one may verify that $H$ as stated in the theorem fulfils all required properties in order to qualify as an infinitesimal generator of a continuous-time Markov chain.\n\\end{proof}\n\nWe illustrate the framework with an example of a stochastic rewriting system based on the category $\\mathbf{uGraph}$ of finite undirected multigraphs and morphisms thereof, where we pick the two rule algebra elements $e_{+}$ and
$e_{-}$ specified in~\\eqref{eq:Adef} to define the transitions of the system. Together with two non-negative real parameters $\\kappa_{+},\\kappa_{-}\\in \\bR_{\\geq0}$, the resulting Hamiltonian $H=\\hat{H}+\\bar{H}$ reads (with $E_{\\pm}:=\\rho(e_{\\pm})$ and $O_{\\bullet}$ as in~\\eqref{eq:defOv})\n\\begin{equation}\n\\hat{H}=\\kappa_{+}E_{+}+\\kappa_{-} E_{-}\\,,\\quad \\bar{H}=-\\tfrac{1}{2}\\kappa_{+}O_{\\bullet}(O_{\\bullet}-1)-\\kappa_{-}O_E\\,,\n\\quad O_E:=\\tfrac{1}{2}\\rho\\left(\\tP{%\n \\vI{1}{1}{black}{}{black}{}\n \\vI{1}{2}{black}{}{black}{}\n \\eI{1}{1}{=}{black}{}{black}{}}\\right)\\,.\n\\end{equation}\nLet us assume for simplicity that we start our evolution from an initial state $\\ket{\\Psi(0)}=\\ket{G_0}$, with $G_0$ (the isomorphism class of) some finite undirected graph. We denote by $N_V$ and $N_E$ the number of vertices and edges of $G_0$, respectively, which may be computed as\n\\begin{equation}\n\\bra{}O_{\\bullet}\\ket{G_0}=N_V\\,,\\quad \\bra{}O_E\\ket{G_0}=N_E\\,.\n\\end{equation}\nSince the two linear rules that define the system create and delete edges, but do not modify the number of vertices, the time-dependent probability distribution $\\ket{\\Psi(t)}$ (for $t\\geq 0$) with $\\ket{\\Psi(0)}=\\ket{G_0}$ is supported on graph states that all have the same number of vertices $N_V$ as the initial graph $G_0$, which entails that\n\\begin{equation}\\label{eq:OVstatic}\n \\forall\\; t\\geq 0:\\quad \\bra{}O_{\\bullet}\\ket{\\Psi(t)}=N_V\\,.\n\\end{equation}\nLet us thus focus on the dynamics of the edge-counting observable $O_E$. We follow the strategy put forward in~\\cite{bdg2016,bdg2018} and consider the \\emph{exponential moment generating function} $E(t;\\varepsilon)$ of $O_E$, defined as\n\\begin{equation}\nE(t;\\varepsilon):=\\bra{}e^{\\varepsilon O_E}\\ket{\\Psi(t)}\\,,\n\\end{equation}\nwhere $\\varepsilon$ is a formal variable.
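Before analysing $E(t;\varepsilon)$, the generator structure of $H$ can be illustrated concretely. The following sketch is ours, not part of the paper; the truncation size $M$ and the rates are arbitrary choices. For the special case $N_V=2$ a state is determined by its number of edges $n$, and $H$ acts as $H\ket{n}=\kappa_{+}\ket{n+1}+\kappa_{-}n\ket{n-1}-(\kappa_{+}+\kappa_{-}n)\ket{n}$, so that every column of the (truncated) matrix of $H$ sums to zero, the defining property of a CTMC infinitesimal generator.

```python
def hamiltonian(k_plus, k_minus, M):
    """Truncated generator for N_V = 2: H[m][n] = <m|H|n>, states n = 0..M edges."""
    H = [[0.0] * (M + 1) for _ in range(M + 1)]
    for n in range(M + 1):
        if n + 1 <= M:
            H[n + 1][n] += k_plus            # edge creation (rule e_+), part of Hhat
        if n - 1 >= 0:
            H[n - 1][n] += k_minus * n       # edge deletion (rule e_-), part of Hhat
        H[n][n] -= k_plus + k_minus * n      # diagonal part Hbar
    return H

H = hamiltonian(2.0, 0.5, 50)
col_sums = [sum(H[m][n] for m in range(51)) for n in range(51)]
print(col_sums[0], col_sums[25], col_sums[50])   # 0.0 0.0 -2.0
```

The nonzero sum $-\kappa_{+}$ in the last column is purely a truncation artifact: the creation rule would leave the truncated state space.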
More explicitly, $E(t;\\epsilon)$ encodes the \\emph{statistical moments} of $O_E$, in the sense that for all (finite) $n\\geq1$,\n\\begin{equation}\n \\left[\\tfrac{\\partial^n}{\\partial \\varepsilon^n}E(t;\\varepsilon)\\right]\\big\\vert_{\\varepsilon\\to0}=\\bra{}O_E^n\\ket{\\Psi(t)}\\,.\n\\end{equation}\nWe may calculate the \\emph{evolution equation} for $E(t;\\varepsilon)$ as follows (compare~\\cite{bdg2016,bdg2018}):\n\\begin{equation}\\label{eq:appExAux1}\n\\begin{aligned}\n \\tfrac{\\partial}{\\partial t}E(t;\\varepsilon)&=\n \\bra{}e^{\\varepsilon O_E}H\\ket{\\Psi(t)}\\\\\n &=\\bra{}\\left(e^{\\varepsilon O_E}He^{-\\varepsilon O_E} \\right)e^{\\varepsilon O_E}\\ket{\\Psi(t)}\\\\\n &=\\sum_{n\\geq 0}\\frac{1}{n!}\\bra{}\\left(ad_{\\varepsilon O_E}^{\\circ\\:n}(H)\\right)e^{\\varepsilon O_E}\\ket{\\Psi(t)}\\,.\n\\end{aligned}\n\\end{equation}\nHere, in the last step, we have taken advantage of a variant of the BCH formula (see e.g.~\\cite[Prop.~3.35]{hall2015lieGroups}), whereby for two composable linear operators $A$ and $B$ and for a formal variable $\\lambda$,\n\\begin{equation}\n e^{\\lambda A}Be^{-\\lambda A}=\\sum_{n\\geq 0}\\frac{1}{n!} ad_{\\lambda A}^{\\circ \\:n}(B)\\,,\n\\end{equation}\nwith the adjoint action defined via the so-called \\emph{commutator} $[.,.]$,\n\\begin{equation}\n ad_A(B):=[A,B]=AB-BA \\,.\n\\end{equation}\nMoreover, we let $ad_A^{0}(B):= B$, and for $n\\geq1$,\n\\begin{equation}\n ad_A^{\\circ \\:n}(B):=[A,[A,[\\dotsc,[A,B]\\dotsc]]]\n\\end{equation}\ndenotes the $n$-fold nested commutator. Taking advantage of the general fact that a Hamiltonian as constructed according to Theorem~\\ref{thm:smf} verifies\n\\begin{equation}\\label{eq:Haux}\n\\bra{}H=0\\,,\n\\end{equation}\nwe may conclude that the term in~\\eqref{eq:appExAux1} for $n=0$ vanishes identically. 
In order to compute the terms for $n\\geq 1$, it is straightforward to verify that\n\\begin{equation}\n ad_{\\varepsilon O_E}(\\kappa_{+}E_{+})=\\varepsilon\\kappa_{+}E_{+}\\,,\\quad\n ad_{\\varepsilon O_E}(\\kappa_{-}E_{-})=-\\varepsilon\\kappa_{-}E_{-}\\,,\\quad\n ad_{\\varepsilon O_E}(\\bar{H})=0\\,,\n\\end{equation}\nwhich entails that\\footnote{It may be worth emphasising that it is this particular type of calculation for which the rule algebra framework provides the technical prerequisites, as it would be otherwise impossible to reason about infinite series of causal interactions and rewriting steps.}\n\\begin{equation}\n \\sum_{n\\geq 1}\\frac{1}{n!}\\, ad_{\\varepsilon O_E}^{\\circ\\:n}(H)\n =\\kappa_{+}\\left(e^{\\varepsilon}-1\\right)E_{+}\n +\\kappa_{-}\\left(e^{-\\varepsilon}-1\\right)E_{-}\\,.\n\\end{equation}\nTo proceed, we invoke the \\emph{jump-closure property} as described in~\\eqref{eq:ojc} to conclude that\n\\begin{equation}\\label{eq:appExJC}\n \\bra{}E_{+}=\\tfrac{1}{2}\\bra{}O_{\\bullet}(O_{\\bullet}-1)\\,,\\quad\n \\bra{}E_{-}=\\bra{}O_E\\,.\n\\end{equation}\nRecalling our earlier result as presented in~\\eqref{eq:OVstatic}, we find that\n\\begin{equation}\n\\tfrac{1}{2}\\bra{}O_{\\bullet}(O_{\\bullet}-1)e^{\\varepsilon O_E}\\ket{\\Psi(t)}\n=\\tfrac{1}{2}\\bra{}e^{\\varepsilon O_E}O_{\\bullet}(O_{\\bullet}-1)\\ket{\\Psi(t)}\n\\overset{\\eqref{eq:OVstatic}}{=}\\binom{N_V}{2} E(t;\\varepsilon)\\,,\n\\end{equation}\nwhere in the first step we have made use of the fact that observables commute (i.e.\\ in particular $[O_{\\bullet},O_E]=0$). 
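The commutation relations invoked in this step can be checked on small truncated matrices. The sketch below uses toy matrices for the two-vertex case (the truncation size is an arbitrary choice): with $E_{+}\ket{n}=\ket{n+1}$, $E_{-}\ket{n}=n\ket{n-1}$ and $O_E\ket{n}=n\ket{n}$ one finds $[O_E,E_{\pm}]=\pm E_{\pm}$, which is exactly what yields $ad_{\varepsilon O_E}(E_{\pm})=\pm\varepsilon E_{\pm}$ above.

```python
# Toy matrices (not from the paper) realising E_plus, E_minus, O_E on
# states |n> = "n edges" for N_V = 2, truncated at n = M.
M = 8
E_plus  = [[1.0 if m == n + 1 else 0.0 for n in range(M + 1)] for m in range(M + 1)]
E_minus = [[float(n) if m == n - 1 else 0.0 for n in range(M + 1)] for m in range(M + 1)]
O_E     = [[float(n) if m == n else 0.0 for n in range(M + 1)] for m in range(M + 1)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(M + 1)) for j in range(M + 1)]
            for i in range(M + 1)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

comm_plus  = sub(mul(O_E, E_plus),  mul(E_plus, O_E))    # [O_E, E_plus]
comm_minus = sub(mul(O_E, E_minus), mul(E_minus, O_E))   # [O_E, E_minus]
print(comm_plus == E_plus,
      comm_minus == [[-x for x in row] for row in E_minus])   # True True
```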
As for the contribution due to $\\bra{}E_{-}$, note that\n\\begin{equation}\n \\bra{}E_{-}e^{\\varepsilon O_E}\\ket{\\Psi(t)}\n \\overset{\\eqref{eq:appExJC}}{=}\\bra{}O_E e^{\\varepsilon O_E}\\ket{\\Psi(t)}\n =\\tfrac{\\partial}{\\partial \\varepsilon} \\bra{}e^{\\varepsilon O_E}\\ket{\\Psi(t)}\n =\\tfrac{\\partial}{\\partial \\varepsilon} E(t;\\varepsilon)\\,.\n\\end{equation}\nAssembling these results into~\\eqref{eq:appExAux1}, and with $E(0;\\varepsilon)=e^{\\varepsilon N_E}$ (for $\\ket{\\Psi(0)}=\\ket{G_0}$, and with $N_E$ edges in $G_0$), we obtain the following refined form for the evolution equation of $E(t;\\varepsilon)$:\n\\begin{equation}\\label{eq:appExAux2}\n\\begin{aligned}\n \\tfrac{\\partial}{\\partial t}E(t;\\varepsilon)&=\n \\left[\n \\kappa_{+}\\binom{N_V}{2}\\left(e^{\\varepsilon}-1\\right)\n +\\kappa_{-}\\left(e^{-\\varepsilon}-1\\right)\\tfrac{\\partial}{\\partial\\varepsilon}\n \\right] E(t;\\varepsilon)\\,,\\qquad E(0;\\varepsilon)=e^{\\varepsilon N_E}\\,.\n\\end{aligned}\n\\end{equation}\nIn other words, we have thus transformed the problem of studying the dynamics of the edge-counting observable $O_E$ into the problem of studying the evolution-equation~\\eqref{eq:appExAux2}. We may employ a standard technique well-known from the combinatorics literature to solve this problem in closed form, namely the so-called \\emph{semi-linear normal-ordering technique} as introduced in~\\cite{Dattoli:1997iz,blasiak2005boson,blasiak2011combinatorial} (and recently applied in~\\cite{bdp2017} to semi-linear PDEs for chemical reaction systems). 
More concretely, we recognise that the differential operator in~\\eqref{eq:appExAux2} has the ``semi-linear'' structure,\n\\begin{equation}\n h=q(\\varepsilon)\\tfrac{\\partial}{\\partial\\varepsilon}+v(\\varepsilon)\\,,\n \\quad\n q(\\varepsilon)=\\kappa_{-}\\left(e^{-\\varepsilon}-1\\right)\\,,\\quad\n v(\\varepsilon)=\\kappa_{+}\\binom{N_V}{2}\\left(e^{\\varepsilon}-1\\right)\\,.\n\\end{equation}\nThe general semi-linear normal-ordering formula then implies that given such a semi-linear differential operator and an evolution equation such as~\\eqref{eq:appExAux2},\n\\begin{equation}\n \\tfrac{\\partial}{\\partial t}E(t;\\varepsilon)=\\left[q(\\varepsilon)\\tfrac{\\partial}{\\partial\\varepsilon}+v(\\varepsilon)\\right]E(t;\\varepsilon)\\,,\\quad E(0;\\varepsilon)=E_0(\\varepsilon)\\,,\n\\end{equation}\nthe solution of this equation reads\\footnote{To be fully precise, in a given problem one first has to compute the \\emph{formal} solution (i.e.\\ with $t$ a formal rather than a real-valued variable) using the normal-ordering formula, and then in a separate step verify that the solution thus obtained is convergent upon specialising $t$ to a real-valued variable.}\n\\begin{equation}\\label{eq:evoResult}\n\\begin{aligned}\n E(t;\\varepsilon)&=g(t;\\varepsilon)E_0(T(t;\\varepsilon))\\,,\\qquad\n \\left\\{\n \\begin{array}{rcl}\n \\tfrac{\\partial}{\\partial t}T(t;\\varepsilon)&=&q(T(t;\\varepsilon))\\,,\\quad T(0;\\varepsilon)=\\varepsilon\\\\\n \\ln(g(t;\\varepsilon))&=&\\int_0^t dw\\, v(T(w;\\varepsilon))\\,.\n \\end{array}\\right.\n\\end{aligned}\n\\end{equation}\nThus, solving the above characteristic ODE for $T(t;\\varepsilon)$ and performing the integration to obtain $\\ln(g(t;\\varepsilon))$, we finally arrive at the following closed-form solution of the evolution
equation~\\eqref{eq:appExAux2}:\n\\begin{equation}\nE(t;\\varepsilon)=e^{\\frac{\\kappa_{+}}{\\kappa_{-}}\\binom{N_V}{2}\\left(e^{\\varepsilon}-1\\right)\\left(1-e^{-\\kappa_{-}t}\\right)}{\\left(\\left(e^{\\varepsilon}-1\\right)e^{-\\kappa_{-}t}+1\\right)}^{N_E}\\,.\n\\end{equation}\nFor illustration, we present in Figure~\\ref{fig:timeEv} the time-evolution of $\\langle O_E\\rangle(t)$ (that is, of the first $\\varepsilon$-derivative of $E(t;\\varepsilon)$ evaluated at $\\varepsilon=0$) for three different choices of the parameters $\\kappa_{+}$ and $\\kappa_{-}$, and for four different choices of the initial number of edges $N_E$.\n\n\\medskip\nAs a further refinement, since $E(t;\\varepsilon)$ is the moment-generating function of a univariate probability distribution, we may take advantage of the well-known relationship (see e.g.\\ \\cite{bdp2017} for further details) between the moment-generating function $E(t;\\varepsilon)$ and the \\emph{probability generating function (PGF)} $P(t;\\lambda)$,\n\\begin{equation}\nP(t;\\lambda)=\\sum_{n\\geq 0}p_n(t)\\lambda^n=E(t;\\ln\\lambda)\\,,\n\\end{equation}\nwith $p_n(t)$ interpreted as the probability to count precisely $n$ edges at time $t$ (for $t\\geq0$). Thus we may transform the result~\\eqref{eq:evoResult} into the more easily interpretable form\n\\begin{equation}\n\\begin{aligned}\n P(t;\\lambda)&=Pois\\left(\\lambda;\\frac{\\kappa_{+}}{\\kappa_{-}}\\binom{N_V}{2}\\left(1-e^{-\\kappa_{-}t}\\right)\\right)Binom(\\lambda;e^{-\\kappa_{-}t},N_E)\\\\\n Pois(\\lambda;\\alpha)&=e^{\\alpha(\\lambda-1)}\\,,\\quad\n Binom(\\lambda;\\alpha,N)={\\left(\\alpha\\lambda +(1-\\alpha)\\right)}^N\\,,\n\\end{aligned}\n\\end{equation}\nwhere $Pois(\\lambda;\\alpha)$ denotes the PGF of a \\emph{Poisson distribution} (of parameter $0\\leq \\alpha <\\infty$), and where $Binom(\\lambda;\\alpha,N)$ denotes the PGF of a \\emph{Binomial distribution} (of parameters $0\\leq \\alpha\\leq 1$ and $N\\in\\mathbb{Z}_{\\geq 0}$).
Referring yet again to~\\cite{bdp2017} for further details, since the PGF of the \\emph{convolution} of two probability distributions is given by the product of their PGFs, we thus find that the dynamics of the edge-counting observable $O_E$ is described in terms of a \\emph{convolution} of a Poisson distribution with a binomial distribution. Moreover, in the limit $t\\to\\infty$ we simply find that the number of edges in the distribution over graph states is Poisson-distributed,\n\\begin{equation}\n\\lim\\limits_{t\\to\\infty}P(t;\\lambda)=Pois\\left(\\lambda;\\frac{\\kappa_{+}}{\\kappa_{-}}\\binom{N_V}{2}\\right)\\,.\n\\end{equation}\nInterestingly, the coefficient $\\binom{N_V}{2}$ in this equation is precisely the number of edges of a \\emph{complete graph} on $N_V$ vertices. Another interesting observation concerns a special choice of base rates $\\kappa_{\\pm}$ and initial state $\\ket{\\Psi(0)}$: if $\\kappa_{+}=\\kappa_{-}$ and $N_E=N_{E*}=\\binom{N_V}{2}$, one may compute from~\\eqref{eq:evoResult} that $\\langle O_E\\rangle(t)=N_{E*}=\\mathrm{const}$ for all $t\\geq 0$. All of these findings combined entail that the edge creation and deletion process described here is in fact nothing other than a so-called \\emph{birth-death process} of random deletion and creation of ``particles'', with the role of ``particles'' played in the present case by the edges of the graphs on which the system evolves. This result might be somewhat anticipated, in that for the special case $N_V=2$ we found in the previous section that $E_{+}$ and $E_{-}$ acting on the states with two vertices effectively yield a representation of the Heisenberg-Weyl algebra, whence in this case the process reduces trivially to a birth-death process on edges with rates $\\kappa_{+}$ and $\\kappa_{-}$ (see~\\cite{bdp2017} for further details on chemical reaction systems).
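The birth-death interpretation can be probed by direct stochastic simulation. The sketch below is ours (rates, horizon, seed and sample size are arbitrary choices): it runs a Gillespie simulation with edge-creation propensity $\kappa_{+}\binom{N_V}{2}$ and edge-deletion propensity $\kappa_{-}n$, and compares the empirical mean edge count at time $t$ against the mean implied by the closed form, $\langle O_E\rangle(t)=\frac{\kappa_{+}}{\kappa_{-}}\binom{N_V}{2}\left(1-e^{-\kappa_{-}t}\right)+N_E\,e^{-\kappa_{-}t}$.

```python
import random
from math import exp, comb

def simulate_edges(n_edges, k_p, k_m, pairs, t_end, rng):
    """Gillespie simulation of the edge birth-death process; returns n_edges at t_end."""
    t = 0.0
    while True:
        birth, death = k_p * pairs, k_m * n_edges
        t += rng.expovariate(birth + death)
        if t >= t_end:
            return n_edges
        n_edges += 1 if rng.random() < birth / (birth + death) else -1

k_p, k_m, N_V, N_E, t_end = 0.4, 1.0, 4, 0, 2.0
pairs = comb(N_V, 2)
rng = random.Random(7)
runs = 20000
avg = sum(simulate_edges(N_E, k_p, k_m, pairs, t_end, rng) for _ in range(runs)) / runs
exact = (k_p / k_m) * pairs * (1.0 - exp(-k_m * t_end)) + N_E * exp(-k_m * t_end)
print(f"simulated mean: {avg:.3f}, closed form: {exact:.3f}")
```

For the particular initial state $N_E=0$ the resulting edge count is exactly Poisson-distributed at every time, consistent with the PGF factorisation above.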
As an outlook, in~\\cite{bdg2018} we have conducted a full study of the interesting phenomenon of stochastic rewriting systems on state-spaces of graph-like structures exhibiting dynamics that is comparable in nature and mathematical structure to the dynamics of discrete transition systems such as chemical reaction systems and branching processes.\n\n\n\\begin{figure}\n \\caption{Time-evolution of $\\langle O_E\\rangle(t)$ for $\\ket{\\Psi(0)}=\\ket{G_0}$ with $N_V=100$.\\label{fig:timeEv}}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{.\/images\/meanEdgePlot.pdf}\n\\end{figure}\n\n\n\\section{Conclusion and Outlook}%\n\\label{sec:conclusion}\n\nThe elegance and effectiveness of traditional Double-Pushout (DPO) rewriting over $\\cM$-adhesive categories~\\cite{ehrig:2006aa} consists in describing in one uniform categorical framework a wide variety of rewriting systems (cf.\\ e.g.\\ Table~\\ref{tab:adh}) of relevance in many practical applications. It is therefore somewhat surprising that techniques of rewriting are not more ubiquitous in the applied mathematics, combinatorics and life sciences literature, given that in many of these fields, models and systems of computation formulated in terms of manipulations of graph-like structures are used very frequently. Originally motivated~\\cite{bdp2017,bdg2016} by studying problems in the setting of chemical reaction systems and of stochastic graph rewriting systems, we introduce in this paper what we believe constitutes a first essential stepping stone towards overcoming the aforementioned conceptual divide: the \\emph{DPO rule algebra} framework.
Intuitively, an individual DPO rewriting rule may typically be applied in a number of different ways (i.e.\\ mediated by different admissible matches) to a given object; the key idea of the present paper consists in encoding this particular form of \\emph{non-determinism} in a setting of vector spaces (with basis indexed by isomorphism classes of objects) and linear operators mapping a given basis vector to ``the sum over all possible applications'' of a given rule. Modulo certain technical implementation details, this approach immediately raises a conceptual question about the precise nature of \\emph{sequential applications} of rules to objects: it must be ensured that applying two particular rules in sequence to a given object ``in all possible ways'' is equivalent to applying the aforementioned linear operators (one after the other) to the basis vector indexed by the object. Inspired by earlier work~\\cite{bdg2016} on multigraph DPO rewriting systems implemented in a formulation based upon binary relations, we present here a general implementation of a consistent mathematical framework that reveals the notion of an \\emph{associative unital algebra} that may be constructed based upon sequential compositions of DPO rules. As illustrated in the special case of DPO rewriting systems on \\emph{discrete graphs} (Section~\\ref{sec:HW}), defining a \\emph{representation} for the aforementioned algebra amounts precisely to the construction of certain linear operators that encode faithfully the sequential applications of DPO rules to objects.
Our general construction hinges upon our novel theorem on the associativity of the operation of forming DPO-type concurrent compositions of linear rewriting rules (Theorem~\\ref{thm:assocDPO}), based upon which we introduce the concept of \\emph{rule algebras} in Definition~\\ref{def:RADPO}: each (isomorphism class of a) linear rule is mapped to an element of an abstract vector space, on which the sequential rule composition operation is implemented as a certain bilinear binary operation. For every $\\cM$-adhesive category $\\bfC$ that is finitary and that possesses $\\cM$-effective unions, the associated rule algebra is associative, and if the category possesses an $\\cM$-initial object, this algebra is in addition unital. Each rule algebra element in turn gives rise via the construction of the \\emph{canonical representation} (Definition~\\ref{def:canRep}) to a linear operator acting upon the vector space with basis indexed by isomorphism classes of objects, solving precisely the problem of encoding the non-determinism in rule applications in a principled and general fashion. We then hinted at the potential of our approach in the realm of combinatorics in Section~\\ref{sec:RT}, and, as a first major application of our framework, we presented in Section~\\ref{sec:SM} a \\emph{universal construction of continuous-time Markov chains} based upon linear DPO rules of $\\cM$-adhesive categories $\\bfC$.\n\nThe general motivation of this paper consisted in rendering techniques of rewriting more accessible to practitioners and theoreticians beyond the traditional rewriting theory community. 
In line with these efforts, we have recently continued the development of our rule algebra framework to include another key framework for categorical rewriting over adhesive categories (so-called \\emph{Sesqui-Pushout (SqPO) rewriting}) in~\\cite{Behr_2019_sqpo}, and a variant for both DPO- and SqPO-type rewriting in the settings of rewriting with \\emph{application conditions} and with \\emph{constraints} on objects in~\\cite{behr2019compositionality}. The latter contains a general associativity theorem for rules with conditions, thus providing the prerequisites for achieving a general rule algebra framework (work currently in progress) that can capture the graph-like data structures of relevance in many practical applications, which differ from idealised mathematical structures such as directed multigraphs by certain additional constraints on objects (as is the case e.g.\\ for planar binary trees). In terms of analysing stochastic rewriting systems via rule-algebraic methods, we have recently reported in~\\cite{bdg2018} a first set of custom techniques that make it possible to exploit the rule algebra structure in order to extract dynamical information from these stochastic systems. Finally, the associativity property of rule compositions itself has been demonstrated in~\\cite{behr2019tracelets} to give rise to a novel concept of so-called \\emph{tracelets}, which permit a minimal and complete characterisation of the compositional and causal properties underlying sequences of applications of rewriting rules (so-called derivation traces).
While currently still in its early stages of development, we envision that our novel viewpoint on the study of rewriting systems will ultimately yield rich and fruitful interactions of the specialist field of categorical rewriting theory with the broader applied mathematics, computer and life sciences research fields.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzimj b/data_all_eng_slimpj/shuffled/split2/finalzimj new file mode 100644 index 0000000000000000000000000000000000000000..0d401776ab2b35296d043eecf422793093e10eaa --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzimj @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe symmetric group on $N$ letters acts naturally on $\\mathbb{R}^{N}$ (for $N=2,3,\\ldots$) but not irreducibly, because the vector $\\left(\n1,1,\\ldots,1\\right) $ is f\\\/ixed. However the important basis consisting of\nnonsymmetric Jack polynomials is def\\\/ined for $N$ variables and does not behave\nwell under restriction to the orthogonal complement of $\\left( 1,1,\\ldots\n,1\\right) $, in general. In this paper we consider the one exception to this\nsituation, occurring when $N=4$. In this case there is a coordinate system,\nessentially the $4\\times4$ Hadamard matrix, which allows a dif\\\/ferent basis of\npolynomials, derived from the type-$B$ nonsymmetric Jack polynomials for the\nsubgroup $D_{3}$ of the octahedral group $B_{3}$. 
We will construct an\northogonal basis for the $L^{2}$-space of the measure%\n\\[\n\\prod_{1\\leq i<j\\leq4}\\left\\vert x_{i}-x_{j}\\right\\vert ^{2\\kappa}e^{-\\left\\vert x\\right\\vert ^{2}\/2}dx\n\\]\non $\\mathbb{R}^{4}$, for $\\kappa>0$.\n\nWe will use the following notations: $\\mathbb{N}_{0}$ denotes the set of nonnegative integers; $\\mathbb{N}_{0}^{N}$ is the set of compositions (or multi-indices), if $\\alpha=\\left(\n\\alpha_{1},\\ldots,\\alpha_{N}\\right) \\in \\mathbb{N}_{0}^{N}$ then $\\left\\vert \\alpha\\right\\vert :=\\sum_{i=1}^{N}\\alpha_{i}$ and\nthe length of $\\alpha$ is $\\ell\\left( \\alpha\\right) :=\\max\\left\\{\ni:\\alpha_{i}>0\\right\\} $. Let $\\mathbb{N}_{0}^{N,+}$ denote the subset of partitions, that is, $\\lambda\\in\n\\mathbb{N}\n_{0}^{N}$ and $\\lambda_{i}\\geq\\lambda_{i+1}$ for $1\\leq i<N$. The nonsymmetric Jack polynomials are the simultaneous eigenfunctions of the commuting Cherednik--Dunkl operators $\\left\\{ \\mathcal{U}_{i}:1\\leq i\\leq N\\right\\} $, and they are pairwise orthogonal for a bilinear pairing $\\left\\langle \\cdot,\\cdot\\right\\rangle _{\\kappa}$ on polynomials satisfying $\\left\\langle f,f\\right\\rangle _{\\kappa}>0$ when $f\\neq0$ and $\\kappa\\geq0$. The operators\n$\\mathcal{U}_{i}$ have the very useful property of acting as triangular\nmatrices on the monomial basis furnished with a certain partial order. However\nthe good properties depend completely on the use of $%\n\\mathbb{R}\n^{N}$ even though the group $\\mathcal{S}_{N}$ acts irreducibly on $\\left(\n1,1,\\ldots,1\\right) ^{\\bot}$. We suggest that an underlying necessity for the\nexistence of an analog of $\\left\\{ \\mathcal{U}_{i}\\right\\} $ for any\nref\\/lection group $W$ is the existence of a $W$-orbit in which any two points\nare orthogonal or antipodal (as in the analysis of the hyperoctahedral group\n$B_{N}$). This generally does not hold for the action of $\\mathcal{S}_{N}$ on\n$\\left( 1,\\ldots,1\\right) ^{\\bot}$. We consider the exceptional case $N=4$\nand exploit the isomorphism between $\\mathcal{S}_{4}$ and the group of type\n$D_{3}$, that is, the subgroup of $B_{3}$ whose simple roots are $\\left(\n1,-1,0\\right)$, $\\left( 0,1,-1\\right)$, $\\left( 0,1,1\\right) $. We map these\nroot vectors to the simple roots $\\left( 0,1,-1,0\\right)$, $\\left(\n0,0,1,-1\\right)$, $\\left( 1,-1,0,0\\right) $ of $\\mathcal{S}_{4}$, in the same\norder.
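This root correspondence can be checked mechanically. In the sketch below (ours, for illustration), the matrix $M$ collects the coordinate change it induces, one half times a $4\times4$ Hadamard matrix, with rows giving $y_{1},y_{2},y_{3},y_{0}$; we verify that $M$ is orthogonal and that it maps the stated simple roots of $\mathcal{S}_{4}$ to the stated simple roots of $D_{3}$, each with vanishing $y_{0}$-component.

```python
M = [[0.5, 0.5, -0.5, -0.5],   # y1
     [0.5, -0.5, 0.5, -0.5],   # y2
     [0.5, -0.5, -0.5, 0.5],   # y3
     [0.5, 0.5, 0.5, 0.5]]     # y0

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(4)) for i in range(4)]

# orthogonality: M^T M = I, so the coordinate change is an isometry
for i in range(4):
    for j in range(4):
        assert sum(M[k][i] * M[k][j] for k in range(4)) == (1.0 if i == j else 0.0)

roots_x = [(0, 1, -1, 0), (0, 0, 1, -1), (1, -1, 0, 0)]   # simple roots of S_4
images = [apply(M, r) for r in roots_x]
print(images)  # [[1.0, -1.0, 0.0, 0.0], [0.0, 1.0, -1.0, 0.0], [0.0, 1.0, 1.0, 0.0]]
```

The images are $(1,-1,0)$, $(0,1,-1)$, $(0,1,1)$ in the $(y_{1},y_{2},y_{3})$ coordinates, in the same order as claimed.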
This leads to the linear isometry%\n\\begin{gather}\ny_{1} =\\tfrac{1}{2}\\left( x_{1}+x_{2}-x_{3}-x_{4}\\right), \\nonumber\\\\\ny_{2} =\\tfrac{1}{2}\\left( x_{1}-x_{2}+x_{3}-x_{4}\\right), \\nonumber\\\\\ny_{3} =\\tfrac{1}{2}\\left( x_{1}-x_{2}-x_{3}+x_{4}\\right), \\nonumber\\\\\ny_{0} =\\tfrac{1}{2}\\left( x_{1}+x_{2}+x_{3}+x_{4}\\right) .\\label{y2x}\n\\end{gather}\nConsider the group $D_{3}$ acting on $\\left( y_{1},y_{2},y_{3}\\right) $ and\nuse the type-$B_{3}$ Dunkl operators with the parameter $\\kappa^{\\prime}=0$\n(associated with the class of sign-changes $y_{i}\\mapsto-y_{i}$ which are not\nin $D_{3}$). Let~$\\sigma_{ij}$, $\\tau_{ij}$ denote the ref\\/lections in\n$y_{i}-y_{j}=0$, $y_{i}+y_{j}=0$ respectively. Then for $i=1,2,3$ let%\n\\begin{gather*}\n\\mathcal{D}_{i}^{B}f\\left( y\\right) =\\frac{\\partial}{\\partial y_{i}%\n}f\\left( y\\right) +\\kappa\\sum_{j=1,j\\neq i}^{3}\\left( \\frac{f\\left(\ny\\right) -f\\left( y\\sigma_{ij}\\right) }{y_{i}-y_{j}}+\\frac{f\\left(\ny\\right) -f\\left( y\\tau_{ij}\\right) }{y_{i}+y_{j}}\\right) ,\\\\\n\\mathcal{U}_{i}^{B}f\\left( y\\right) =\\mathcal{D}_{i}^{B}\\left(\ny_{i}f\\left( y\\right) \\right) -\\kappa\\sum_{1\\leq j<i}\\left( f\\left(\ny\\sigma_{ij}\\right) +f\\left( y\\tau_{ij}\\right) \\right) .\n\\end{gather*}\n\n\\begin{definition}\nFor $\\alpha\\in\\mathbb{N}_{0}^{N}$ and $1\\leq i\\leq N$ let%\n\\begin{gather*}\nr\\left( \\alpha,i\\right) :=\\#\\left\\{ j:\\alpha_{j}>\\alpha_{i}\\right\\}\n+\\#\\left\\{ j:1\\leq j\\leq i,\\alpha_{j}=\\alpha_{i}\\right\\} ,\\\\\n\\xi_{i}\\left( \\alpha\\right) :=\\left( N-r\\left( \\alpha,i\\right)\n\\right) \\kappa+\\alpha_{i}+1.\n\\end{gather*}\n\\end{definition}\n\n\nClearly for a f\\/ixed $\\alpha\\in\\mathbb{N}_{0}^{N}$ the values $\\left\\{\nr\\left( \\alpha,i\\right) :1\\leq i\\leq N\\right\\} $ consist of all of\n$\\left\\{ 1,\\ldots,N\\right\\} $; let $w$ be the inverse function of $i\\mapsto\nr\\left( \\alpha,i\\right) $ so that $w\\in\\mathcal{S}_{N}$, $r\\left(\n\\alpha,w\\left( i\\right) \\right) =i$ and $\\alpha^{+}=w\\alpha\\ $(note that\n$\\alpha\\in\\mathbb{N}_{0}^{N,+}$ if and only if $r\\left( \\alpha,i\\right) =i$\nfor all $i$).
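The rank function and spectral values of the Definition above can be illustrated with a short sketch (ours; the composition $\alpha$ and the value of $\kappa$ below are arbitrary examples):

```python
def rank(alpha, i):
    """r(alpha, i) from the definition above (i is 1-based)."""
    a = alpha[i - 1]
    return (sum(1 for x in alpha if x > a)
            + sum(1 for x in alpha[:i] if x == a))

def xi(alpha, i, kappa):
    """Spectral value xi_i(alpha) = (N - r(alpha, i))*kappa + alpha_i + 1."""
    return (len(alpha) - rank(alpha, i)) * kappa + alpha[i - 1] + 1

alpha = [1, 3, 1, 0]
ranks = [rank(alpha, i) for i in range(1, 5)]
w = {r: i + 1 for i, r in enumerate(ranks)}        # w = inverse of i -> r(alpha, i)
alpha_plus = [alpha[w[i] - 1] for i in range(1, 5)]
print(ranks, alpha_plus, xi(alpha, 2, 0.5))        # [2, 1, 3, 4] [3, 1, 1, 0] 5.5
assert sorted(ranks) == [1, 2, 3, 4]               # the ranks exhaust {1, ..., N}
assert alpha_plus == sorted(alpha, reverse=True)   # alpha^+ is the decreasing sort
```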
Then%\n\\[\n\\mathcal{U}_{i}x^{\\alpha}=\\xi_{i}\\left( \\alpha\\right) x^{\\alpha}%\n+q_{\\alpha,i}\\left( x\\right)\n\\]\nwhere $q_{\\alpha,i}\\left( x\\right) $ is a sum of terms $\\pm\\kappa x^{\\beta}$\nwith $\\alpha\\vartriangleright\\beta$.\n\n\\begin{definition}\nFor $\\alpha\\in\\mathbb{N}_{0}^{N}$, let $\\zeta_{\\alpha}$ denote the $x$-monic\nsimultaneous eigenfunction (NSJP), that is, $\\mathcal{U}_{i}\\zeta_{\\alpha}%\n=\\xi_{i}\\left( \\alpha\\right) \\zeta_{\\alpha}$ for $1\\leq i\\leq N$ and\n\\[\n\\zeta_{\\alpha}=x^{\\alpha}+\\sum\\limits_{\\alpha\\vartriangleright\\beta}%\nA_{\\beta\\alpha}x^{\\beta},\n\\]\nwith coef\\/f\\/icients $A_{\\beta\\alpha}\\in\\mathbb{Q}\\left( \\kappa\\right) $,\nrational functions of $\\kappa$.\n\\end{definition}\n\n\nThere are norm formulae for the pairing $\\left\\langle \\cdot,\\cdot\\right\\rangle\n_{\\kappa}$.\nSuppose $\\alpha\\in\\mathbb{N}_{0}^{N}$ and $\\ell\\left( \\alpha\\right) =m$; the\n\\textit{Ferrers diagram} of $\\alpha$ is the set $\\left\\{ \\left(\ni,j\\right) :1\\leq i\\leq m,0\\leq j\\leq\\alpha_{i}\\right\\} .$ For each node\n$\\left( i,j\\right) $ with $1\\leq j\\leq\\alpha_{i}$ there are two special\nsubsets of the Ferrers diagram, the \\textit{arm} $\\left\\{ \\left( i,l\\right)\n:j<l\\leq\\alpha_{i}\\right\\} $ and the \\textit{leg} $\\left\\{ \\left( l,j\\right) :l>i,j\\leq\\alpha_{l}\\leq\\alpha_{i}\\right\\} \\cup\\left\\{ \\left(\nl,j-1\\right) :l<i,j\\leq\\alpha_{l}+1\\leq\\alpha_{i}\\right\\} $; the associated hook length is\n\\[\nh\\left( \\alpha;i,j\\right) :=\\alpha_{i}-j+1+\\#\\left\\{ l:l>i,j\\leq\\alpha_{l}\\leq\\alpha\n_{i}\\right\\} +\\#\\left\\{ l:l<i,j\\leq\\alpha_{l}+1\\leq\\alpha_{i}\\right\\} ,\n\\]\nand the norm formulae are expressed as products of such hook lengths. Below $L_{n}^{a}$ denotes the Laguerre polynomial of degree $n$ and index $a>-1$, and%\n\\[\nL_{n}^{a}\\left( t\\right) =\\frac{\\left( a+1\\right) _{n}}{n!}\\sum_{i=0}%\n^{n}\\frac{\\left( -n\\right) _{i}}{\\left( a+1\\right) _{i}}\\frac{t^{i}}{i!}.\n\\]\nThe result of applying $e^{-\\Delta_{B}\/2}$ to a polynomial $x_{E_{k}}%\n\\zeta_{\\alpha}\\left( y^{2}\\right) $ is a complicated expression involving\nsome generalized binomial coef\\/f\\/icients (see \\cite[Proposition~9.4.5]{DX}).
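As a numerical cross-check of the explicit Laguerre sum (a sketch of ours; the parameter values below are arbitrary), one can verify the standard three-term recurrence $(n+1)L_{n+1}^{a}(t)=(2n+1+a-t)L_{n}^{a}(t)-(n+a)L_{n-1}^{a}(t)$:

```python
from math import factorial

def poch(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    out = 1.0
    for k in range(n):
        out *= x + k
    return out

def laguerre(n, a, t):
    """L_n^a(t) via the explicit hypergeometric sum above."""
    s = sum(poch(-n, i) / poch(a + 1, i) * t ** i / factorial(i)
            for i in range(n + 1))
    return poch(a + 1, n) / factorial(n) * s

a, t = 0.5, 1.7
for n in range(1, 8):
    lhs = (n + 1) * laguerre(n + 1, a, t)
    rhs = (2 * n + 1 + a - t) * laguerre(n, a, t) - (n + a) * laguerre(n - 1, a, t)
    assert abs(lhs - rhs) < 1e-9
print("recurrence OK")   # prints "recurrence OK"
```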
For the\nsymmetric cases $j_{\\lambda}\\left( y^{2}\\right) $ and $y_{1}y_{2}%\ny_{3}j_{\\lambda}\\left( y^{2}\\right) ,\\lambda\\in\n\\mathbb{N}\n_{0}^{3,+}$ these coef\\/f\\/icients were investigated by Lassalle \\cite{L} and\nOkoun\\-kov and Olshanski \\cite[equation~(3.2)]{OO}; in the latter paper there is an\nexplicit formula.\n\nFinally we can use our orthogonal basis to analyze a modif\\/ication of the\ntype-$A$ quantum Calogero--Sutherland model with four particles on a line and\nharmonic conf\\/inement. By resca\\-ling, the Hamiltonian (with exchange terms) can\nbe written as:\n\\[\n\\mathcal{H}=-\\Delta+\\frac{\\left\\vert x\\right\\vert ^{2}}{4}+2\\kappa\\sum_{1\\leq i<j\\leq4}\\frac{\\kappa-\\sigma_{ij}}{\\left( x_{i}-x_{j}\\right) ^{2}}.\n\\]\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"A widely used example of a Mercer kernel on a set ${\\cal X}\\subseteq \\mbox{I}\\!\\mbox{R}^{n}$ is the radial basis function kernel $K(x,x^{\\prime})=e^{-\\|x-x^{\\prime}\\|^{2}\/(2\\sigma^{2})}$ with width parameter $\\sigma>0$ (Gaussian).\n\n\n\n\\begin{theorem} \\label{thm1}\nLet $K:{\\cal X} \\times {\\cal X} \\rightarrow \\mbox{I}\\!\\mbox{R}$ be a symmetric and positive definite function. Then there\nexists a Hilbert space of functions ${\\cal H}$ defined on ${\\cal X}$ admitting $K$ as a reproducing kernel.\nConversely, let ${\\cal H}$ be a Hilbert space of functions $f: {\\cal X} \\rightarrow \\mbox{I}\\!\\mbox{R}$ satisfying\n$\\forall x \\in {\\cal X}, \\exists \\kappa_x>0,$ such that $|f(x)| \\le \\kappa_x \\|f\\|_{\\cal H},\n\\quad \\forall f \\in {\\cal H}. $\nThen ${\\cal H}$ has a reproducing kernel $K$.\n\\end{theorem}\n\n\n\\begin{theorem}\\label{thm4}\n Let $K(x,y)$ be a positive definite kernel on a compact domain or a manifold $X$. Then there exists a Hilbert\nspace $\\mathcal{F}$ and a function $\\Phi: X \\rightarrow \\mathcal{F}$ such that\n$$K(x,y)= \\langle \\Phi(x), \\Phi(y) \\rangle_{\\mathcal{F}} \\quad \\mbox{for} \\quad x,y \\in X.$$\n $\\Phi$ is called a feature map, and $\\mathcal{F}$ a feature space\\footnote{The dimension of the feature space can be infinite, for example in the case of the Gaussian kernel.}.\n\\end{theorem}\n\nGiven Theorem~\\ref{thm4}, and property [iv.]
in Proposition~\ref{prop1}, note that we can take\n$\Phi(x):=K_x:=K(x,\cdot)$ in which case $\mathcal{F}=\mathcal{H}$ -- the ``feature space'' is the RKHS itself, as opposed to an isomorphic space.\nWe will make extensive use of this feature map. The fact that Mercer kernels are positive definite and\nsymmetric is also key; these properties ensure that kernels induce positive, symmetric matrices and\nintegral operators, reminiscent of similar properties enjoyed by gramians and covariance matrices.\nFinally, in practice one typically first chooses a Mercer kernel in order to choose an RKHS:\nTheorem~\ref{thm1} guarantees the existence of a Hilbert space admitting such a function as its\nreproducing kernel.\n\nA key observation, however, is that working in an RKHS allows one to immediately find nonlinear versions of\nalgorithms which can be expressed in terms of inner products. Consider an algorithm expressed in\nterms of the inner product $\langle x, x^{\prime} \rangle_{\mathcal{X}}$ with $x, x^{\prime} \in \mathcal{X}$. Now assume that\ninstead of looking at a state $x$, we look at its $\Phi$ image in $\mathcal{H}$,\n\[\label{eqn:phi-mapped-data}\n\begin{array}{rccl}\n\Phi&:& X & \rightarrow {\cal H} \\\n& & x &\mapsto \Phi(x) \;.\n\end{array}\]\nIn the RKHS, the inner product $\langle \Phi(x), \Phi(x^{\prime}) \rangle$ is\n\[\langle \Phi(x), \Phi(x^{\prime}) \rangle= K(x,x^{\prime}) \]\nby the reproducing property. Hence, a nonlinear variant of the original algorithm may be implemented\nusing kernels in place of inner products on $\mathcal{X}$.\n\n\n\section{Empirical Gramians in RKHS}\nIn this Section we recall empirical gramians for linear systems~\cite{moore}, as well as a notion of empirical gramians for nonlinear systems in RKHS introduced in~\cite{allerton}. The goal of the construction we describe here is to provide meaningful, data-based empirical controllability and observability gramians for nonlinear systems.
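Before turning to the gramian constructions, the kernel-for-inner-product substitution described above can be made concrete in a few lines (a pure-Python sketch; the function names and the Gaussian-kernel choice are ours): distances in feature space are computable from kernel evaluations alone, since $\|\Phi(x)-\Phi(x')\|^2=K(x,x)+K(x',x')-2K(x,x')$.

```python
from math import exp, isclose

def gaussian_kernel(x, y, sigma=1.0):
    """K(x,y) = exp(-||x - y||^2 / (2 sigma^2)), a Mercer kernel on R^n."""
    sq = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return exp(-sq / (2.0 * sigma ** 2))

def feature_space_dist_sq(x, y, k=gaussian_kernel):
    """||Phi(x) - Phi(y)||^2 expressed through kernel evaluations only."""
    return k(x, x) + k(y, y) - 2.0 * k(x, y)

x, y = (0.0, 1.0), (1.5, -0.5)
d2 = feature_space_dist_sq(x, y)
assert d2 >= 0.0                                   # a genuine squared distance
assert isclose(feature_space_dist_sq(x, x), 0.0)
assert d2 <= 2.0                                   # Gaussian: K(x,x)=1, so d2 = 2 - 2K(x,y)
```

The same substitution underlies all of the kernel quantities below: any formula written in terms of inner products of samples can be evaluated with kernel calls.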
In~\cite{allerton}, observability and controllability gramians were used for balanced model reduction; here, however, we use these quantities to analyze nonlinear control properties and random dynamical systems. We note that a related notion of gramians for nonlinear systems is briefly discussed in~\cite{gray}; however, no method for computing or estimating them was given.\n\n\subsection{Empirical Gramians for Linear Systems}\label{sec:linear_gramians}\nTo compute the gramians for the linear system (\ref{linsys}), one can attempt to solve the\nLyapunov equations (\ref{lyap_lin}) directly, although this can be computationally prohibitive. For linear systems, the gramians may be approximated by way of matrix multiplications implementing primal and adjoint systems (see the method of snapshots, e.g.~\cite{Rowley05}). Alternatively, for any system, linear or nonlinear, one may take the simulation-based approach introduced by B.C. Moore~\cite{moore} for reduction of linear systems, and subsequently extended to nonlinear systems in~\cite{lall}. The method\nproceeds by exciting each coordinate of the input with impulses from the zero initial state $(x_0=0)$. The system's responses are sampled, and the sample covariance is taken as an approximation to the controllability gramian. Denote the set of canonical orthonormal basis vectors in $\mbox{I}\!\mbox{R}^n$ by $\{e_i\}_{i}$. Let $u^i(t) = \delta(t)e_i$ be the input signal for the $i$-th simulation, and let $x^i(t)$ be the corresponding response of the system. Form the matrix $X(t) = \bigl[x^1(t)\n~\cdots~ x^q(t)\bigr] \in \mbox{I}\!\mbox{R}^{n\times q}$, so that $X(t)$ is seen as a data matrix with\ncolumn observations given by the respective responses $x^i(t)$.
Then the $(n\\times n)$ controllability gramian is given by\n\\[\nW_{c,\\text{lin}} = \\frac{1}{q}\\int_0^{\\infty}X(t)X(t)^{\\!\\top\\!} dt.\n\\]\nWe can approximate this integral by sampling the matrix function $X(t)$ within a finite time interval $[0,T]$\nassuming for instance the regular partition $\\{t_i\\}_{i=1}^N, t_i = (T\/N)i$. This leads to the {\\em empirical controllability gramian}\n\\[\\label{eqn:Wchat_lin}\n\\widehat{W}_{c,\\text{lin}} = \\frac{T}{Nq}\\sum_{i=1}^N X(t_i)X(t_i)^{\\!\\top\\!} .\n\\]\n\nThe observability gramian is estimated by\nfixing $u(t) = 0$, setting $x_0 = e_i$ for $i=1,\\ldots,n$, and measuring the corresponding system output\nresponses $y^i(t)$. Now assemble the output responses into a matrix $Y(t) = [y^1(t) ~\\cdots~ y^n(t)]\\in\n\\mbox{I}\\!\\mbox{R}^{p\\times n}$. The $(n\\times n)$ observability gramian $W_{o,\\text{lin}}$ and its empirical\ncounterpart $\\widehat{W}_{o,\\text{lin}}$ are respectively given by\n\\[\nW_{o,\\text{lin}} = \\frac{1}{p}\\int_0^{\\infty}Y(t)^{\\!\\top\\!}Y(t) dt\\]\nand\n\\[\\label{eqn:Wohat_lin}\n\\widehat{W}_{o,\\text{lin}} = \\frac{T}{Np}\\sum_{i=1}^N \\widetilde{Y}(t_i)\\widetilde{Y}(t_i)^{\\!\\top\\!}\n\\]\nwhere $\\widetilde{Y}(t) = Y(t)^{\\!\\top\\!}$.\nThe matrix $\\widetilde{Y}(t_i)\\in\\mbox{I}\\!\\mbox{R}^{n\\times p}$ can be thought of as a data matrix with column observations\n\\begin{equation}\\label{eqn:obs_data}\nd_j(t_i) = \\bigl(y_j^1(t_i), \\ldots, y_j^n(t_i)\\bigr)^{\\!\\!\\top\\!} \\in\\mbox{I}\\!\\mbox{R}^n,\n\\end{equation}\nfor $j=1,\\ldots,p, \\,\\,i=1,\\ldots, N$ so that $d_j(t_i)$ corresponds to the response at time $t_i$ of the\nsingle output coordinate $j$ to each of\nthe (separate) initial conditions $x_0=e_k, k=1,\\ldots,n$.\n\n\\subsection{Empirical Gramians in RKHS Characterizing Nonlinear Systems}\\label{sec:rkhs-gramians}\nConsider the generic nonlinear system\n\\[\\label{sigma}\n\\left\\{\\begin{array}{rcl}\\dot{x}&=&F(x,u)\\\\ y &=& h(x), 
\\end{array}\\right.\n\\]\nwith $x \\in \\mbox{I}\\!\\mbox{R}^n$, $u \\in \\mbox{I}\\!\\mbox{R}^q$, $y\\in \\mbox{I}\\!\\mbox{R}^p$, $F(0)=0$ and $h(0)=0$.\nAssume that the linearization of~\\eqref{sigma} around the origin is controllable, observable and\n$A=\\frac{\\partial F}{\\partial x}|_{x=0}$ is asymptotically stable.\n\nRKHS counterparts to the empirical quantities~\\eqref{eqn:Wchat_lin},\\eqref{eqn:Wohat_lin} defined above for the system~\\eqref{sigma} can be defined by considering feature-mapped lifts of the simulated samples in $\\mathcal{H}_K$. In the following, and without loss of generality,\n{\\em we assume the data are centered in feature space}, and that the observability\nsamples and controllability samples are centered separately. See~(\\cite{smola}, Ch. 14) for a\ndiscussion on implicit data centering in RKHS with kernels.\n\nFirst, observe that the gramians $\\widehat{W}_c, \\widehat{W}_o$ can be viewed as the sample covariance of a collection of $N\\cdot q, N\\cdot p$ vectors in $\\mbox{I}\\!\\mbox{R}^n$ scaled by $T$, respectively. Then applying $\\Phi$ to the samples as in~\\eqref{eqn:phi-mapped-data}, we obtain the corresponding gramians in the RKHS associated to $K$ as bounded linear operators on $\\mathcal{H}_K$:\n\\begin{align}\n\\widehat{W}_c &= \\frac{T}{Nq}\\sum_{i=1}^N\\sum_{j=1}^q \\Phi(x^j(t_i))\\otimes \\Phi(x^j(t_i)) \\label{eqn:emp_Wc_rkhs}\\\\\n\\widehat{W}_o &= \\frac{T}{Np}\\sum_{i=1}^N\\sum_{j=1}^p \\Phi(d_j(t_i))\\otimes\\Phi(d_j(t_i))\\nonumber\n\\end{align}\nwhere the samples $x_j,d_j$ are as defined in Section~\\ref{sec:linear_gramians}, and $a\\otimes b=a\\scal{b}{\\cdot}$ denotes the tensor product in $\\mathcal{H}$. 
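As a minimal illustration (our own toy numbers, not from the paper): with the linear kernel $K(x,y)=\langle x,y\rangle$ the feature map is the identity, and the RKHS gramian above reduces to the ordinary scaled sample covariance of the responses, which is symmetric and positive semidefinite.

```python
# Toy responses x^j(t_i) in R^2 from q = 1 input channel at N = 3 sample times
# (values made up for illustration).
samples = [(1.0, 0.2), (0.6, 0.5), (0.3, 0.4)]
T, N, q = 1.5, 3, 1

def outer(a, b):
    return [[ai * bj for bj in b] for ai in a]

# W_hat = (T / (N q)) * sum_i x_i x_i^T  -- the empirical controllability gramian.
W = [[0.0, 0.0], [0.0, 0.0]]
for x in samples:
    P = outer(x, x)
    for r in range(2):
        for c in range(2):
            W[r][c] += T / (N * q) * P[r][c]

assert W[0][1] == W[1][0]                      # symmetric
assert W[0][0] >= 0 and W[1][1] >= 0           # nonnegative diagonal
assert W[0][0] * W[1][1] - W[0][1] ** 2 >= 0   # PSD (2x2 criterion: det >= 0)
```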
From here on we will use the notation $W_c, W_o$ to refer to RKHS versions of\nthe true (integrated) gramians, and $\\widehat{W}_c, \\widehat{W}_o$ to refer to RKHS versions of the empirical gramians.\n\nLet $\\boldsymbol{\\Psi}$ denote the matrix whose columns are the (scaled) observability samples mapped into feature space by $\\Phi$, and let $\\boldsymbol{\\Phi}$ be the matrix similarly built from the feature space representation of the\ncontrollability samples. Then we may alternatively express the gramians above as\n$\\widehat{W}_c=\\boldsymbol{\\Phi}\\bPhi^{\\!\\top\\!}$ and $\\widehat{W}_o=\\boldsymbol{\\Psi}\\bPsi^{\\!\\top\\!}$, and define two other important quantities:\n\\begin{itemize}\\itemsep 0pt\n\\item The \\emph{controllability kernel matrix} $K_c\\in\\mbox{I}\\!\\mbox{R}^{Nq\\times Nq}$ of kernel\nproducts\n\\begin{align}\nK_c &= \\boldsymbol{\\Phi}^{\\!\\top\\!}\\boldsymbol{\\Phi} \\\\\n(K_c)_{\\mu\\nu} &= K(x_\\mu, x_\\nu) = \\scal{\\Phi(x_\\mu)}{\\Phi(x_\\nu)}_{\\mathcal{F}}\n\\end{align}\nfor $\\mu,\\nu=1,\\ldots,Nq$ where we have re-indexed the set of vectors $\\{x^{j}(t_i)\\}_{i,j} =\n\\{x_{\\mu}\\}_{\\mu}$ to use a single linear index.\n\\item The \\emph{observability kernel matrix} $K_o\\in\\mbox{I}\\!\\mbox{R}^{Np\\times Np}$,\n\\begin{align}\nK_o &= \\boldsymbol{\\Psi}^{\\!\\top\\!}\\boldsymbol{\\Psi}\\\\\n(K_o)_{\\mu\\nu} &= K(d_\\mu, d_\\nu) = \\scal{\\Phi(d_\\mu)}{\\Phi(d_\\nu)}_{\\mathcal{F}}\n\\end{align}\nfor $\\mu,\\nu=1,\\ldots,Np$, where we have again re-indexed the set $\\{d_j(t_i)\\}_{i,j}=\\{d_\\mu\\}_{\\mu}$ for\nsimplicity.\n\\end{itemize}\nNote that $K_c,K_o$ may be highly ill-conditioned. 
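Since $\widehat{W}_c$ and $K_c$ are the two products of the same factor $\boldsymbol{\Phi}$, they share their nonzero eigenvalues. A small pure-Python check on a toy factor (our own example): traces of powers are the power sums of the eigenvalues, and zero eigenvalues contribute nothing, so matching traces of powers reflect the shared nonzero spectrum.

```python
from math import isclose

Phi = [[1.0, 0.5, -0.2],
       [0.0, 1.0,  0.3]]   # a 2x3 "feature matrix" (columns = mapped samples)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

W = matmul(Phi, transpose(Phi))   # 2x2, plays the role of W_hat
K = matmul(transpose(Phi), Phi)   # 3x3, plays the role of the kernel matrix

Wp, Kp = W, K
for p in range(1, 4):
    assert isclose(trace(Wp), trace(Kp))   # tr(W^p) = tr(K^p) for p = 1, 2, 3
    Wp, Kp = matmul(Wp, W), matmul(Kp, K)
```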
The SVD\nmay be used to show that $\widehat{W}_c$ and $K_c$ ($\widehat{W}_o$ and $K_o$) have the same singular values (up to zeros).\n\n\n\section{Nonlinear Control Systems in RKHS}\nIn this section, we introduce empirical versions of the controllability and observability energies~\eqref{L_c}-\eqref{L_o} for stable\nnonlinear control systems of the form~\eqref{control_nonlin}, which can be estimated from observed data. Our underlying assumption is that a given nonlinear system may be treated as if it were linear in a suitable feature space. That reproducing kernel Hilbert spaces provide rich representations capable of capturing strong nonlinearities in the original input (data) space lends validity to this assumption.\n\nIn general, little is known about the energy functions in the nonlinear setting. However,\nScherpen~\cite{scherpen_thesis} has shown that the energy functions $L_c(x)$ and $L_o(x)$ defined in~\eqref{L_c} and~\eqref{L_o} satisfy a Hamilton-Jacobi and a Lyapunov equation, respectively.\n\begin{theorem}\label{thm:scherp1}\cite{scherpen_thesis} Consider the nonlinear control system (\ref{sigma}) with $F(x,u)=f(x)+g(x)u$. If the origin is an asymptotically stable equilibrium\nof $f(x)$ on a neighborhood $W$ of the origin, then for\nall $x \in W$, $L_o(x)$ is the unique smooth solution of\n\[\label{Lo_hjb} \frac{\partial L_o}{\partial x}(x)f(x)+\frac{1}{2}h^{\!\top\!}(x)h(x)=0,\quad L_o(0)=0 \]\nunder the assumption that (\ref{Lo_hjb}) has a smooth solution on $W$.
Furthermore for all $x \\in W$, $L_c(x)$\nis the unique smooth solution of\n\\[\\label{Lc_hjb} \\frac{\\partial L_c}{\\partial x}(x)f(x)+\\frac{1}{2} \\frac{\\partial L_c}{\\partial\nx}(x)g(x)g^{\\!\\top\\!}(x) \\frac{\\partial^{\\!\\top\\!}L_c}{\\partial x}(x)=0,\\; L_c(0)=0\\]\nunder the assumption that (\\ref{Lc_hjb}) has a smooth solution $\\bar{L}_c$ on $W$ and that the origin is an\nasymptotically stable equilibrium of $-(f(x)+g(x)g^{\\!\\top\\!}(x) \\frac{\\partial \\bar{L}_c}{\\partial x}(x))$ on $W$.\n\\end{theorem}\nWe would like to avoid solving explicitly the PDEs (\\ref{Lo_hjb})- (\\ref{Lc_hjb}) and instead find good\nestimates of their solutions directly from simulated or observed data.\n\n\\subsection{Energy Functions}\\label{sec:energy_fns}\nFollowing the linear theory developed in Section~\\ref{sec:linear-control}, we would like to\ndefine analogous controllability and observability energy functions paralleling~\\eqref{eqn:lin_Lc}-\\eqref{eqn:lin_Lo}, but adapted to the nonlinear setting. We first treat the controllability\nfunction. Let $\\mu_{\\infty}$ on the statespace $\\mathcal{X}$ denote the unknown invariant measure of the nonlinear system~\\eqref{sigma} when driven by white Gaussian noise. We will consider here the case where the controllability samples $\\{x_i\\}_{i=1}^m$ are i.i.d. random draws from $\\mu_{\\infty}$, and $\\mathcal{X}$ is a compact subset of $\\mbox{I}\\!\\mbox{R}^n$. The former assumption is implicitly made in much of the empirical balancing literature, and if a system is simulated for long time intervals, it should hold approximately in practice. 
If we take $\\Phi(x)=K_x$, the infinite-data limit of~\\eqref{eqn:emp_Wc_rkhs} is given by\n\\[\\label{eqn:covop_gramian}\nW_c = \\mathbb{E}_{\\mu_{\\infty}} [\\widehat{W}_{c}] = \\int_{\\mathcal{X}}\\scal{\\cdot}{K_x}K_xd\\mu_{\\infty}(x).\n\\]\n\nIn general neither $W_c$ nor its empirical approximation $\\widehat{W}_c$ are invertible, so to define a controllability energy similar to~\\eqref{eqn:lin_Lc} one is tempted to define $L_c$ on $\\mathcal{H}$ as\n\\mbox{$L_c(h)=\\scal{W_c^{\\dag}h}{h}$}, where $A^{\\dag}$ denotes the pseudoinverse of the operator $A$. However, the domain of $W_c^{\\dag}$ is equal to the range of $W_c$, and so in general $K_x$ may not be in the domain of $W_c^{\\dag}$. We will therefore introduce the orthogonal projection $W_c^{\\dag}W_c$ mapping $\\mathcal{H}\\mapsto\\text{range}(W_c)$ and define the nonlinear control energy on $\\mathcal{H}$ as\n\\begin{equation}\\label{eqn:best_lc}\nL_c(h) = \\scal{W_c^{\\dag}(W_c^{\\dag}W_c)h}{h}.\n\\end{equation}\nWe will consider finite sample approximations to~\\eqref{eqn:best_lc}, however a further complication is that $\\widehat{W}_c^{\\dag}\\widehat{W}_c$ may not converge to $W_c^{\\dag}W_c$ in the limit of infinite data (taking the pseudoinverse is not a continuous operation), and $\\widehat{W}_c^{\\dag}$ can easily be ill-conditioned in any event. Thus one needs to impose regularization, and we replace the pseudoinverse $A^{\\dag}$ with a regularized inverse $(A + \\lambda I)^{-1}, \\lambda > 0$ throughout. We note that the preceding observations were also made in~\\cite{RosascoDensity}. Intuitively,\nregularization prevents the estimator from overfitting to a bad or unrepresentative sample\nof data. 
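The effect of replacing the pseudoinverse by a regularized inverse can be seen on a rank-deficient diagonal example (pure-Python sketch, our own toy numbers): as $\lambda\to 0$, the eigenvalues of $(A+\lambda I)^{-1}A$ approach those of the orthogonal projection $A^{\dag}A$, while remaining well-conditioned for every $\lambda>0$.

```python
# A singular (rank-1) PSD operator in diagonal form: eigenvalues 1 and 0.
eigs = [1.0, 0.0]

def reg_smoother(eigs, lam):
    """Eigenvalues of (A + lam I)^{-1} A for a diagonal PSD A."""
    return [s / (s + lam) for s in eigs]

# The pseudoinverse-based projection A^+ A has eigenvalue 1 on range(A), 0 on ker(A).
proj = [1.0 if s > 0 else 0.0 for s in eigs]

for lam in (1e-2, 1e-4, 1e-6):
    smoothed = reg_smoother(eigs, lam)
    assert smoothed[1] == 0.0                  # the kernel direction stays annihilated
    assert abs(smoothed[0] - proj[0]) <= lam   # 1/(1 + lam) -> 1 at rate lam
```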
We thus define the estimator \\mbox{$\\hat{L}_c:\\mathcal{X}\\to\\mbox{I}\\!\\mbox{R}_+$} (that is, on the domain $\\{K_x~|~ x\\in\\mathcal{X}\\}\\subseteq\\mathcal{H}$) to be\n\\begin{equation}\\label{eqn:rkhs_lc_def}\n\\hat{L}_c(x)=\\tfrac{1}{2}\\bigl\\langle(\\widehat{W}_c + \\lambda I)^{-2}\\widehat{W}_c K_x,K_x\\bigr\\rangle, \\quad x\\in\\mathcal{X}\n\\end{equation}\nwith infinite-data limit\n\\[\nL_c^{\\lambda}(x) = \\tfrac{1}{2}\\scal{(W_c + \\lambda I)^{-2}W_c K_x}{K_x},\n\\]\nwhere $\\lambda > 0$ is the regularization parameter.\n\nTowards deriving an equivalent but computable expression for $\\hat{L}_c$ defined in terms of kernels, we\nrecall the sampling operator $S_{\\mathbf{x}}$ of~\\cite{SmaleIntegral} and its adjoint. Let $\\mathbf{x} = \\{x_i\\}_{i=1}^{m}$ denote a generic sample of $m$ data points. To $\\mathbf{x}$ we can associate the operators\n\\begin{alignat*}{4}\nS_{\\mathbf{x}} &: \\mathcal{H} &\\to &\\, \\mbox{I}\\!\\mbox{R}^{m}, &\\quad h &\\in\\mathcal{H} &\\mapsto&\\, \\bigl(h(x_1),\\ldots,h(x_{m})\\bigr)\\\\\nS_{\\mathbf{x}}^{\\ast} &:\\mbox{I}\\!\\mbox{R}^{m} &\\to &\\, \\mathcal{H}, &\\quad c &\\in\\mbox{I}\\!\\mbox{R}^{m} &\\mapsto&\\, \\textstyle\\sum_{i=1}^{m}c_iK_{x_i}\\,.\n\\end{alignat*}\nIf $\\mathbf{x}$ is the collection of $m=Nq$ controllability samples, one can check that\n$\\widehat{W}_c = \\tfrac{1}{m}\\SxsS_{\\mathbf{x}}$ and $K_c=S_{\\mathbf{x}}S_{\\mathbf{x}}^{\\ast}$. 
Consequently,\n\\begin{align*}\n\\hat{L}_c(x) &=\\tfrac{1}{2}\\scal{(\\tfrac{1}{m}\\SxsS_{\\mathbf{x}} + \\lambda I)^{-2}\\tfrac{1}{m}\\SxsS_{\\mathbf{x}} K_{x}}{K_{x}}\\\\\n&=\\tfrac{1}{2m}\\scal{S_{\\mathbf{x}}^{\\ast}(\\tfrac{1}{m}S_{\\mathbf{x}}S_{\\mathbf{x}}^{\\ast} + \\lambda I)^{-2}S_{\\mathbf{x}} K_{x}}{K_{x}}\\\\\n&= \\tfrac{1}{2m}{\\bf k_c}(x)^{\\!\\top\\!}(\\tfrac{1}{m}K_c + \\lambda I)^{-2}{\\bf k_c}(x),\n\\end{align*}\nwhere ${\\bf k_c}(x):=S_{\\mathbf{x}} K_x = \\bigl(K(x,x_{\\mu})\\bigr)_{\\mu=1}^{Nq}$ is the $Nq$-dimensional column vector\ncontaining the kernel products between $x$ and the controllability samples.\n\nSimilarly, letting $\\mathbf{x}$ now denote the collection of $m=Np$ observability samples, we can approximate the future output energy by\n\\begin{align}\n\\hat{L}_o(x) &= \\tfrac{1}{2}\\bigl\\langle\\widehat{W}_oK_x,K_x\\bigr\\rangle \\\\\\label{eqn:Lo_rkhs}\n&= \\tfrac{1}{2m}\\bigl\\langle\\SxsS_{\\mathbf{x}} K_x, K_x\\bigr\\rangle \\nonumber \\\\\n &= \\tfrac{1}{2m}{\\bf k_o}(x)^{\\!\\top\\!}{\\bf k_o}(x)\n = \\tfrac{1}{2m}\\nor{{\\bf k_o}(x)}_2^2\\nonumber\n\\end{align}\nwhere ${\\bf k_o}(x):=\\bigl(K(x,d_{\\mu})\\bigr)_{\\mu=1}^{Np}$ is the $Np$-dimensional column vector\ncontaining the kernel products between $x$ and the observability samples.\nWe collect the above results into the following definition:\n\\begin{definition}\\label{def:rkhs_energies} Given a nonlinear control system of the form~\\eqref{sigma}, we define the kernel\ncontrollability energy function and the kernel observability energy function as, respectively,\n\\begin{align}\n\\hat{L}_c(x) &= \\tfrac{1}{2Nq}{\\bf k_c}(x)^{\\!\\top\\!}(\\tfrac{1}{Nq}K_c + \\lambda I)^{-2}{\\bf k_c}(x) \\\\ \\label{eqn:lc_hat}\n\\hat{L}_o(x) &= \\tfrac{1}{2Np}\\nor{{\\bf k_o}(x)}_2^2 \\;.\n\\end{align}\n\\end{definition}\nNote that the kernels used to define $\\hat{L}_c$ and $\\hat{L}_o$ need not be the same.\n\n\\subsection{Consistency}\nWe'll now turn to showing that the estimator 
$\\hat{L}_c$ is consistent, but note that\n{\\em we do not address the approximation error} between the energy function estimates and the true\n but unknown underlying functions. Controlling the approximation error requires making specific assumptions\nabout the nonlinear system, and we leave this question open.\n\nIn the following we will make an important set of assumptions regarding the kernel $K$ and the\nRKHS $\\mathcal{H}$ it induces.\n\\begin{assumption}\\label{ass:rkhs}\nThe reproducing kernel $K$ defined on the compact statespace $\\mathcal{X}\\subset\\mbox{I}\\!\\mbox{R}^n$ is locally Lipschitz, measurable and defines a completely regular RKHS. Furthermore the diagonal of $K$ is uniformly bounded,\n\\[\\label{eqn:kappa}\n\\kappa^2 = \\sup_{x\\in\\mathcal{X}}K(x,x) <\\infty.\n\\]\n\\end{assumption}\nSeparable RKHSes are induced by continuous kernels on separable spaces $\\mathcal{X}$.\nSince $\\mathcal{X}\\subset\\mbox{I}\\!\\mbox{R}^n$ is separable and locally Lipschitz functions are also continuous, $\\mathcal{H}$ will always be separable. {\\em Completely regular} RKHSes are introduced in~\\cite{RosascoDensity} and\nthe reader is referred to this reference for details. Briefly, complete regularity ensures recovery of level sets of {\\em any} distribution, in the limit of infinite data. The Gaussian kernel does not define a completely regular RKHS, but the $L_1$ exponential and Laplacian kernels do~\\cite{RosascoDensity}.\n\nWe introduce some additional notation. 
Let $W_{c,m}$ denote the empirical RKHS gramian formed from a sample of size $m$ observations, and let the corresponding control energy estimate in Definition~\\ref{def:rkhs_energies} involving $W_{c,m}$ and regularization parameter $\\lambda$ be denoted by $L_{c,m}^{\\lambda}$.\n\nThe following preliminary lemma provides finite sample error bounds for Hilbert-Schmidt covariance matrices\non real, separable reproducing kernel Hilbert spaces.\n\\begin{lemma}[\\cite{RosascoIntegral} Theorem 7; Props. 8, 9]\\label{lem:cov_conc}\n\\mbox{}\n\\begin{itemize}\\itemsep 0pt\n\\item[(i)] The operators $W_c, W_{c,m}$ are Hilbert-Schmidt.\n\\item[(ii)] Let $\\delta\\in(0,1]$. With probability at least $1-\\delta$,\n\\[\n\\nor{W_c-W_{c,m}}_{HS} \\leq \\frac{2\\sqrt{2}\\kappa^2}{\\sqrt{m}}\\log^{1\/2}\\frac{2}{\\delta}.\n\\]\n\\end{itemize}\n\\end{lemma}\n\nThe following theorem establishes consistency of the estimator $L_{c,m}^{\\lambda}$, the proof of which follows the method of integral operators developed by~\\cite{SmaleIntegral,AndreaFastRates} and subsequently adopted in the context of density estimation by~(\\cite{RosascoDensity}, Theorem 1).\n\\begin{theorem}\n\\mbox{}\n\\begin{itemize}\\itemsep 0pt\n\\item[(i)] Fix $\\lambda > 0$. 
For each $x\\in\\mathcal{X}$, with probability at least $1-\\delta$,\n\\[\n\\bigl|L_{c,m}^{\\lambda}(x) - L_c^{\\lambda}(x)\\bigr| \\leq \\frac{2\\sqrt{2}\\kappa^4(\\lambda^2 + \\kappa^4)}{\\lambda^4\\sqrt{m}}\\log^{1\/2}\\frac{2}{\\delta} .\n\\]\n\\item[(ii)] If $(K,\\mathcal{X},\\mu_{\\infty})$ is such that\n \\[\\label{eqn:bounded_pinv}\n \\sup_{x\\in\\mathcal{X}}\\|W_c^{\\dag}(W_c^{\\dag}W_c)K_x\\|_{\\mathcal{H}}<\\infty,\n \\]\nthen for all $x\\in\\mathcal{X}$,\n$$\n\\displaystyle\\lim_{\\lambda\\to 0}|L_c^{\\lambda}(x) - L_c(x)| = 0.\n$$\n\\item[(iii)] If the condition~\\eqref{eqn:bounded_pinv} holds and the sequence $\\{\\lambda_m\\}_m$ satisfies $\\displaystyle\\lim_{m\\to\\infty}\\lambda_m=0$ with\n$\\displaystyle\\lim_{m\\to\\infty}\\tfrac{\\log^{1\/2}m}{\\lambda_m\\sqrt{m}} = 0$, then\n$$\n\\lim_{m\\to\\infty}\\bigl|L_{c,m}^{\\lambda}(x) - L_c(x)\\bigr| = 0,\\quad\\text{almost surely.}\n$$\n\\end{itemize}\n\\end{theorem}\n\\begin{proof}\nFor (i), the sample error, we have\n\\begin{align*}\n2\\bigl|L_{c,m}^{\\lambda}(x) - L_c^{\\lambda}(x)\\bigr|\n& \\leq \\nor{(W_{c,m} + \\lambda I)^{-2}W_{c,m} - (W_c+ \\lambda I)^{-2}W_c }\\nor{K_{x}}^2_{\\mathcal{H}} \\\\\n& \\leq \\bigl\\|(W_{c} + \\lambda I)^{-2}[\\lambda^2(W_{c,m} - W_c) + W_c(W_c - W_{c,m})W_{c,m}](W_{c,m} + \\lambda I)^{-2} \\bigr\\|\\kappa^2\\\\\n& \\leq \\frac{\\kappa^2(\\lambda^2 + \\kappa^4)}{\\lambda^4}\\nor{W_{c,m} - W_c}_{HS}\n\\end{align*}\nwhere $\\nor{\\cdot}$ refers to the operator norm. The second inequality follows from spectral calculus and~\\eqref{eqn:kappa}. 
The third line follows by using the estimates $\nor{(W_{c,m} + \lambda I)^{-2}}\leq \lambda^{-2}, \nor{(W_c + \lambda I)^{-2}}\leq \lambda^{-2}, \|W_c\|_{HS}\leq \kappa^2, \|W_{c,m}\|_{HS}\leq\kappa^2$ (and the fact that $\lambda > 0$, so that the relevant quantities are invertible).\nPart (i) then follows applying Lemma~\ref{lem:cov_conc} to the quantity\n$\nor{W_{c,m} - W_c}_{HS}$.\nFor (ii), the approximation error, note that the compact self-adjoint operator $W_c$ admits a spectral decomposition with eigenvalues $\sigma_i$ and orthonormal eigenfunctions $\phi_i$. We then have\n\begin{align*}\n 2\bigl|L_{c}^{\lambda}(x) - L_c(x)\bigr|\n& = \bigl|\bigl\langle[(W_{c} + \lambda I)^{-2}W_{c} - W_c^{\dag}(W_c^{\dag}W_c)]K_x,K_x\bigr\rangle\bigr| \\\n& = \left| \sum_i\frac{\sigma_i}{(\sigma_i + \lambda)^2}|\langle \phi_i,K_x\rangle|^2 -\n\sum_{i:\sigma_i>0}\frac{1}{\sigma_i}|\langle\phi_i,K_x\rangle|^2\right| \\\n& \leq \lambda\sum_{i:\sigma_i>0}\frac{2\sigma_i + \lambda}{(\sigma_i + \lambda)^2\sigma_i}|\langle \phi_i,K_x\rangle|^2 .\n\end{align*}\nThe last quantity above can be seen to converge to 0 as $\lambda\to 0$ since the sum converges for all $x$ under the condition~\eqref{eqn:bounded_pinv}.\nLastly, for part (iii), if $m\to\infty$ and $\lambda_m\to 0$ slowly enough that $\lambda_m^4\sqrt{m}\to\infty$, then the sample error in (i) goes to 0 while the approximation error in (ii) also vanishes.
For almost sure convergence in part (i), we additionally require that for any $\\varepsilon\\in(0,\\infty)$,\n$$\n\\sum_m\\mathbb{P}\\bigl(|L_{c,m}^{\\lambda}(x) - L_c^{\\lambda}(x)| > \\varepsilon\\bigr)\\leq\n\\sum_m e^{-\\mathcal{O}(m\\lambda^4_m\\varepsilon^2)} < \\infty.$$\nThe choice $\\lambda_m = \\log^{-1\/2}m$ satisfies this requirement, as can be seen from the fact that for large enough $M<\\infty$, $\\sum_{m>M} e^{-m\/\\log^2 m} \\leq \\sum_{m>M} e^{-\\sqrt{m}} < \\infty$.\n\\end{proof}\nWe note that the condition~\\eqref{eqn:bounded_pinv} required in part (ii) of the theorem has also been discussed in the context of support estimation in forthcoming work from the authors of~\\cite{RosascoDensity}.\n\n\\subsection{Observability and Controllability Ellipsoids}\nGiven the preceding, we can estimate the reachable and observable sets of a nonlinear control system as level sets of the RKHS energy functions $\\hat{L}_c, \\hat{L}_o$ from Definition~\\ref{def:rkhs_energies}:\n\\begin{definition}\nGiven a nonlinear control system~\\eqref{control_nonlin}, its reachable set can be estimated as\n\\[\n\\label{reachable_estimate}\n\\widehat{\\cal R}_{\\tau}=\\{x\\in\\mathcal{X}~|~\\hat{L}_c(x) \\leq \\tau \\}\n\\]\nand its observable set can be estimated as\n\\[\n\\label{observable_estimate}\n\\widehat{\\cal E}_{\\tau'}=\\{x\\in\\mathcal{X}~|~\\hat{L}_o(x) \\geq \\tau' \\}\n\\]\nfor suitable choices of the threshold parameters $\\tau, \\tau'$.\n\\end{definition}\nIf the energy function estimates above are replaced with the true energy functions, and\n$\\tau=\\tau'=1\/2$, one obtains a finite sample approximation to the controllability and observability ellipsoids defined in Section~\\ref{sec:linear-control} if the system is linear. In general, $\\tau$ may be chosen empirically based on the data, using for instance a cross-validation procedure. 
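Putting the pieces together, here is a minimal end-to-end sketch of the estimate $\widehat{\cal R}_{\tau}$ with two controllability samples and a Gaussian kernel (pure Python; all numbers are our own illustration, and a completely regular kernel would be preferred in practice, as discussed above):

```python
from math import exp

samples = [(0.0, 0.0), (1.0, 0.0)]        # controllability samples {x_mu}
lam, m = 0.1, len(samples)

def K(x, y, sigma=1.0):
    """Gaussian kernel K(x,y) = exp(-||x-y||^2 / (2 sigma^2))."""
    return exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

# Kernel matrix K_c, then M = K_c/m + lam I and M^{-2} via the closed-form 2x2 inverse.
Kc = [[K(a, b) for b in samples] for a in samples]
M = [[Kc[i][j] / m + (lam if i == j else 0.0) for j in range(m)] for i in range(m)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]
Minv2 = [[sum(Minv[i][k] * Minv[k][j] for k in range(m)) for j in range(m)]
         for i in range(m)]

def L_c_hat(x):
    """Kernel controllability energy (1/2m) k_c(x)^T (K_c/m + lam I)^{-2} k_c(x)."""
    k = [K(x, s) for s in samples]
    return sum(k[i] * Minv2[i][j] * k[j] for i in range(m) for j in range(m)) / (2 * m)

def in_reachable_estimate(x, tau):
    return L_c_hat(x) <= tau

assert L_c_hat((0.5, 0.0)) >= 0.0   # energies are nonnegative quadratic forms
assert in_reachable_estimate((0.5, 0.0), tau=L_c_hat((0.5, 0.0)) + 1e-9)
```

The threshold `tau` here plays the role of the level-set parameter in the definition above and would be tuned from data.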
Note that in the linear setting, the ellipsoid of strongly observable states is more commonly characterized as $\\{x~|~x^{\\!\\top\\!}W_o^{-1}x\\leq 1\\} = \\{W_o^{\\frac{1}{2}}x ~|~ \\nor{x}\\leq 1\\}$; hence the definition~\\eqref{observable_estimate}.\n\n\\section{Estimation of Invariant Measures for Ergodic Nonlinear SDEs}\\label{sec:nonlinear-sdes}\nIn this Section we consider {\\em ergodic} nonlinear SDEs of the form~\\eqref{sde_nonlin}, where the invariant (or ``stationary'') measure is a key quantity providing a great deal of insight. In the context of\ncontrol, the support of the stationary distribution corresponds to the reachable set of the\nnonlinear control system and may be estimated by~\\eqref{reachable_estimate}. Solving a Fokker-Planck equation\n of the form~\\eqref{ito} is one way to determine the probability distribution describing the solution to an\nSDE. However, for nonlinear systems finding an explicit solution to the Fokker-Planck equation --or even its\nsteady-state solution-- is a challenging problem. The study of existence of steady-state solutions can be traced back to the 1960s~\\cite{fuller,zakai}, however explicit formulas for steady-state solutions of the Fokker-Planck equation exist in only a few special cases (see~\\cite{butchart,da_prato1992,fuller, guinez,liberzon,risken}\nfor example). Such systems are often conservative or second order vector-fields. Hartmann~\\cite{hartmann:08} among others has studied balanced truncation in the context of linear SDEs, where empirical estimation of gramians plays a key role.\n\nWe propose here a data-based non-parametric estimate of the solution to the steady-state Fokker-Planck equation~\\eqref{steady} for a nonlinear SDE, by combining the relation~\\eqref{rho_approx} with the control energy estimate~\\eqref{eqn:lc_hat}. 
Following the general theme of this paper, we make use of the theory from the linear Gaussian setting described in Section~\ref{sec:linear-sdes}, but in a suitable reproducing kernel\nHilbert space. Other estimators have of course been proposed in the literature for approximating invariant measures and for density estimation from data more generally (see e.g.~\cite{biau,froyland,froyland1,kilminster,RosascoDensity}); however, to our knowledge no existing estimation technique combines RKHS theory and nonlinear dynamical control systems. An advantage of our approach over other non-parametric methods is that an invariant density is approximated by way of a regularized fitting process, giving the user an additional degree of freedom in the regularization parameter.\n\nOur setting adopts the perspective that the nonlinear stochastic system~\eqref{sde_nonlin} behaves\napproximately linearly when mapped via $\Phi$ into the RKHS $\mathcal{H}$, and as such\nmay be modeled by an infinite dimensional linear system in $\mathcal{H}$. Although this system is {\em unknown},\nwe know that it is linear and that we can estimate its gramians and control energies from observed data. Furthermore, we know that the invariant measure of the system in $\mathcal{H}$ is zero-mean Gaussian with covariance given by the controllability gramian.
Thus the original nonlinear system's invariant measure on $\\mathcal{X}$ should be reasonably approximated by the pullback along $\\Phi$ of the Gaussian invariant measure associated with the linear infinite dimensional SDE in $\\mathcal{H}$.\n\nWe summarize the setting in the following {\\em modeling Assumption}:\n\\begin{assumption}\\label{ass:ou_proc}\nLet $\\mathcal{H}$ be a real, possibly infinite dimensional RKHS satisfying Assumption~\\ref{ass:rkhs}.\n\\begin{itemize}\n\\item[(i)] Given a suitable choice of kernel $K$, if the $\\mbox{I}\\!\\mbox{R}^d$-valued stochastic process $x(t)$ is a solution to the (ergodic) stochastically excited nonlinear system~\\eqref{sde_nonlin}, the $\\mathcal{H}$-valued stochastic process \\mbox{$(\\Phi\\circ x)(t)=:X(t)$} can be reasonably modeled as an Ornstein-Uhlenbeck process\n\\[\\label{eqn:infdim-sde}\ndX(t) = AX(t)dt + \\sqrt{C}dW(t), \\quad X(0)=0\\in\\mathcal{H}\n\\]\nwhere $A$ is linear, negative and is the infinitesimal generator of a strongly continuous semigroup $e^{tA}$, $C$ is linear, continuous, positive and self-adjoint, and $W(t)$ is the cylindrical Wiener process.\n\\item[(ii)] The measure $P_{\\infty}$ is the invariant measure of the OU process~\\eqref{eqn:infdim-sde} and $P_{\\infty}$ is the pushforward along $\\Phi$ of the unknown invariant measure $\\mu_{\\infty}$ on the statespace $\\mathcal{X}$ we would like to approximate.\n\\item[(iii)] The measure $\\mu_{\\infty}$ is absolutely continuous with respect to Lebesgue measure, and so admits a density.\n\\end{itemize}\n\\end{assumption}\nWe will proceed in deriving an estimate of the invariant density under these assumptions, but note that\nthere are interesting systems for which the assumptions may not always hold in practice. For example, uncontrollable systems may not have a unique invariant measure. 
In these cases one must interpret the results discussed here as heuristic in nature.\n\nIt is known that a mild solution $X(t)$ to the SDE~\\eqref{eqn:infdim-sde} exists\nand is unique (\\cite{da_prato1992}, Thm. 5.4. pg. 121). Furthermore, the controllability gramian\nassociated to~\\eqref{eqn:infdim-sde}\n\\[\\label{eqn:infdim-gramian}\nW_ch = \\int_0^{\\infty}e^{tA}Ce^{tA^*}hdt,\\quad h\\in\\mathcal{H}\n\\]\nis trace class (\\cite{da_prato2006}, Lemma 8.19), and the unique measure $P_{\\infty}$ invariant with respect to the Markov semigroup associated to the OU process has characteristic function (\\cite{da_prato2006}, Theorem 8.20)\n\\[\\label{eqn:inv-meas}\n\\widetilde{P}_{\\infty}(h) = \\exp\\Bigl(-\\tfrac{1}{2}\\scal{W_ch}{h}\\Bigr),\\quad h\\in\\mathcal{H} \\;.\n\\]\nWe will use the notation $\\widetilde{P}$ to refer to the Fourier transform of the measure $P$.\nThe law of the solution $X(t)$ to problem~\\eqref{eqn:infdim-sde} given initial condition $X(0)=0$ is\nGaussian with zero mean and covariance operator $Q_t=\\int_{0}^t e^{sA}Ce^{sA^*}ds$. 
Thus\n\\begin{align*}\nW_c &= \\lim_{t\\to\\infty}\\mathbb{E}[X(t)\\otimes X(t)]\\\\\n & = \\int_{\\mathcal{H}}\\scal{\\cdot}{h}{h}dP_{\\infty}(h)\\\\\n&= \\int_{\\mathcal{X}}\\scal{\\cdot}{K_x}K_xd\\mu_{\\infty}(x)\n\\end{align*}\nwhere the last integral follows pulling $P_{\\infty}$ back to $\\mathcal{X}$ via $\\Phi$, establishing\nthe equivalence between~\\eqref{eqn:infdim-gramian} and ~\\eqref{eqn:covop_gramian}.\n\nGiven that the measure $P_{\\infty}$ has Fourier transform~\\eqref{eqn:inv-meas} and by Assumption~\\ref{ass:ou_proc} is interpreted as the pushforward of $\\mu_{\\infty}$ (that is, for Borel sets $B\\in\\mathcal{B}(\\mathcal{H})$, $P_{\\infty}(B)=(\\Phi_*\\mu_{\\infty})(B)=\\mu_{\\infty}(\\Phi^{-1}(B))$ formally), we\nhave that $\\widetilde{\\mu}_{\\infty}(x) = \\exp\\bigl(-\\tfrac{1}{2}\\scal{W_cK_x}{K_x}\\bigr)$.\n\n\nThe invariant measure $\\mu_{\\infty}$ is defined on a finite dimensional space, so together with\npart (iii) of Assumption~\\ref{ass:ou_proc}, we may consider the corresponding (Radon-Nikodym) density\n$$\\rho_{\\infty}(x) \\propto \\exp\\bigl(-\\tfrac{1}{2}\\scal{W_c^{\\dag}(W_c^{\\dag}W_c)K_x}{K_x}\\bigr)$$\nwhenever the condition~\\eqref{eqn:bounded_pinv} holds. If~\\eqref{eqn:bounded_pinv} does not hold or if we are considering a finite data sample, then we regularize to arrive at\n\\[\n\\rho_{\\infty}(x) \\propto \\exp\\bigl(-\\tfrac{1}{2}\\scal{(W_c + \\lambda I)^{-1}K_x}{K_x}\\bigr)\n\\]\nas discussed in Section~\\ref{sec:linear-sdes} (see Eq.~\\ref{rho_approx}) and Section~\\ref{sec:energy_fns}. 
This density may be\nestimated from data $\\{x_i\\}_{i=1}^N$ since the controllability energy may be estimated from data: at a new point $x$, we have\n\\[\n \\hat{\\rho}_{\\infty}(x) = Z^{-1}\\exp\\bigl(-\\hat{L}_c(x)\\bigr)\n\\]\n where $\\hat{L}_c$ is the empirical approximation computed according to Definition~\\ref{def:rkhs_energies},\nand the constant $Z$ may be either computed analytically in some cases or simply estimated from the data\nsample to enforce summation to unity.\nWe may also estimate, for example, level sets of $\\rho_{\\infty}$ (such as the support) by considering level\nsets of the regularized control energy function estimator, $\\{x\\in\\mathcal{X}~|~ L_{c,m}(x) \\leq \\tau\\}$.\n\n\n\\section{Conclusion}\nTo summarize our contributions, we have introduced estimators for the controllability\/observability energies and the reachable\/observable sets of nonlinear control systems. We showed that the controllability energy\nestimator may be used to approximate the stationary solution of the Fokker-Planck equation governing nonlinear SDEs (and its support).\n\nThe estimators we derived were based on applying linear methods for control and random\ndynamical systems to nonlinear control systems and SDEs, once mapped into an\ninfinite-dimensional RKHS acting as a ``linearizing space''. These results collectively argue that there is a reasonable passage from linear dynamical systems theory to a data-based nonlinear dynamical systems theory through reproducing kernel Hilbert spaces.\n\nWe leave for future work the formulation of data-based estimators for Lyapunov exponents and the controllability\/observability operators $\\Psi_c,\\Psi_o$ associated to nonlinear systems.\n\n\\section*{Acknowledgements}\nWe thank Lorenzo Rosasco and Jonathan Mattingly for helpful discussions. 
BH thanks the European Union for financial support received through an International Incoming Marie Curie Fellowship, and JB gratefully acknowledges support under NSF contracts NSF-IIS-08-03293 and NSF-CCF-08-08847 to M. Maggioni.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Missing Proofs from Section \\ref{sec:prelim}}\n\\subsection{Proof of Proposition~\\ref{prop:monotone}}\n\\label{sec:proof-prop:monotone}\n\\propmonotone*\n\\begin{proof}\nFor all practical purposes we may assume $\\bidi(\\typespacei)$ to be compact.\nFix the distributions $\\distsmi$ and strategies $\\bidsmi(\\cdot)$ of other bidders.\nTo simplify notation when $\\bidsmi(\\cdot)$ is fixed, let the interim allocation $\\alloci(\\bidi)$ be $\\Ex[\\valsmi \\sim \\distsmi]{\\alloci(\\bidi, \\bidsmi(\\valsmi))}$, \nthe interim payment\n$\\paymenti(\\bidi) \\coloneqq \\Ex[\\valsmi \\sim \\distsmi]{\\paymenti(\\bidi, \\bidsmi(\\valsmi))}$,\nand the interim utility\n$\\utili(\\vali, \\bidi) \\coloneqq \\utili(\\vali, \\bidi, \\bidsmi(\\cdot))$.\nWithout loss of generality, we may assume for each $\\vali$, $\\utili(\\vali, \\bidi(\\vali)) = \\max_{\\val \\in \\typespacei} \\utili(\\vali, \\bidi(\\val))$\n(Otherwise we can first readjust $\\bidi(\\cdot)$ this way, which only weakly improves the utility of all types.)\n\nSuppose $\\bidi(\\cdot)$ is non-monotone, i.e., there exist $\\vali' > \\vali$, such that $\\bidi(\\vali') < \\bidi(\\vali)$. \nBy the assumption that $\\utili(\\vali, \\bidi(\\vali)) = \\max_{\\val \\in \\typespacei} \\utili(\\vali, \\bidi(\\val))$ for each $\\vali$, we have \n\\begin{equation}\\label{eq:monotone-maximize-1}\n \\vali x_i(\\bidi(\\vali)) - p_i(\\bidi(\\vali)) \\ge \\vali x_i(\\bidi(\\vali')) - p_i(\\bidi(\\vali'));\n\\end{equation}\n\\begin{equation}\\label{eq:monotone-maximize-2}\n \\vali' x_i(\\bidi(\\vali')) - p_i(\\bidi(\\vali')) \\ge \\vali' x_i(\\bidi(\\vali)) - p_i(\\bidi(\\vali)). 
\n\\end{equation}\nAdding \\eqref{eq:monotone-maximize-1} and \\eqref{eq:monotone-maximize-2}, we obtain\n\\begin{equation}\n (\\vali' - \\vali)[x_i(\\bidi(\\vali')) - x_i(\\bidi(\\vali))] \\ge 0. \n\\end{equation}\nSince $\\vali' > \\vali$, we get \n\\begin{equation}\n x_i(\\bidi(\\vali')) \\ge x_i(\\bidi(\\vali)).\n\\end{equation}\nIn both the first price auction and the all pay auction we also have $x_i(\\bidi(\\vali')) \\le x_i(\\bidi(\\vali))$ because the probability that $i$ receives the item cannot decrease if her bid increases. \nTherefore, it must be that\n\\begin{equation}\\label{eq:monotone-equal-alloc}\n x_i(\\bidi(\\vali')) = x_i(\\bidi(\\vali)). \n\\end{equation}\n\nPlugging \\eqref{eq:monotone-equal-alloc} into \\eqref{eq:monotone-maximize-1} and \\eqref{eq:monotone-maximize-2}, we obtain\n\\begin{equation}\\label{eq:monotone-equal-pay}\n p_i(\\bidi(\\vali')) = p_i(\\bidi(\\vali)). \n\\end{equation}\n\n\nFor the all pay auction, since bidder $i$ pays her bid whether or not she wins the item, \\eqref{eq:monotone-equal-pay} implies $\\bidi(\\vali)=\\bidi(\\vali')$, a contradiction. \n\nFor the first price auction, for any bid~$\\bid$ made by bidder~$i$, $p_i(\\bid) = \\bid\\cdot x_i(\\bid)$. By \\eqref{eq:monotone-equal-pay}, $\\bidi(\\vali')x_i(\\bidi(\\vali')) = \\bidi(\\vali) x_i(\\bidi(\\vali))$. On the other hand, $x_i(\\bidi(\\vali')) = x_i(\\bidi(\\vali))$ and $\\bidi(\\vali')<\\bidi(\\vali)$, so we must have\n\\[x_i(\\bidi(\\vali')) = x_i(\\bidi(\\vali)) = 0. \\]\nIn other words, $\\bidi(\\cdot)$ must be monotone non-decreasing everywhere except maybe for values whose bids are so low that the bidder does not win and hence obtains zero utility.
\nLetting the bidder bid~$0$ for all values on which her allocation is~$0$ does not affect her utility and yields a monotone bidding strategy.\n\\end{proof}\n\n\n\\section{Missing Proofs from Section \\ref{sec:fpa}}\n\\subsection{Upper Bound}\n\\subsubsection{Proof of Lemma~\\ref{lem:pseudo-dimension-utility-class}}\n\\label{sec:proof-lem:pseudo-dimension-utility-class}\n\\pdimutil*\n\n\n\n\n\n\n\\begin{proof}[Proof]\nWe discussed the case with $n=2$ in Section~\\ref{sec:fpa-upper-bound}. Now we consider the general case with $n > 2$ bidders. \nWe give the proof for the random-allocation tie-breaking rule; \nthe proof for the no-allocation rule is similar (and in fact simpler).\nFor ease of notation, we use $\\Pinputs^k$ to denote $\\samplesmi^{k}$.\nRecall that each $\\Pinputs^k$ is a vector in $\\mathbb R^{n-1}$.\nWe write its $j^{\\text{-th}}$ component as $\\Pinputi[j]^k$.\nWe start with a simple observation: for any $\\vali$ and $\\bids(\\cdot)$, the output of $h^{\\vali, \\bids(\\cdot)}$ on any input \n$\\Pinputs^k$\nmust be one of the following $n+1$ values: \n$\\vali-\\bidi, \\frac{\\vali-\\bidi}{2}, \\ldots, \\frac{\\vali-\\bidi}{n}$, or~$0$; this value is fully determined by the $n-1$ comparisons $\\bidi \\lesseqqgtr \\bidi[j](\\Pinputi[j]^k)$ for each $j\\ne i$. \nWe argue that the hypothesis class $\\mathcal{H}_i$ can be divided into $O(m^{2n})$ sub-classes $\\{\\mathcal{H}_i^{\\mathbf k }\\}_{\\mathbf k \\in [m+1]^{2(n-1)}}$ \nsuch that each sub-class $\\mathcal{H}_i^{\\mathbf k }$ generates at most $O(m^n)$ different label vectors. \nThus $\\mathcal{H}_i$ generates at most $O(m^{3n})$ label vectors in total. \nTo pseudo-shatter $m$~samples, we need $O(m^{3n})\\ge 2^m$, which implies $m = O(n\\log n)$. \n\nWe now define sub-classes $\\{\\mathcal{H}_i^{\\mathbf k}\\}_{\\mathbf k}$, each indexed by $\\mathbf k \\in [m + 1]^{2(n-1)}$. 
\nFor each dimension $j \\ne i$, we sort the $m$ samples by their $j^{\\text{-th}}$ coordinates non-decreasingly, and use $\\pi(j, \\cdot)$ to denote the resulting permutation over $\\{1, 2, \\ldots, m\\}$; \nformally, let $\\Pinputi[j]^{\\pi(j, 1)} \\le \\Pinputi[j]^{\\pi(j, 2)}\\le \\cdots \\le \\Pinputi[j]^{\\pi(j, m)}$.\nFor each hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot)$, for each $j$, \nwe define two special positions, analogous to the position~$k$ in the two-bidder case; \nwe now need a pair in order to keep track of ties under the more complex random-allocation tie-breaking rule.\nLet $k_{j, 1}$ be $\\max\\left(\\{0\\} \\cup \\{k: \\bidi[j](\\Pinputi[j]^{\\pi(j, k)}) < \\bidi(\\vali) \\}\\right)$, and let $k_{j, 2}$ be\n$\\min\\left(\\{m + 1\\} \\cup \\{k: \\bidi[j](\\Pinputi[j]^{\\pi(j, k)}) > \\bidi(\\vali) \\}\\right)$.\nAs in the case for two bidders, this is well defined because of the monotonicity of~$\\bidi[j](\\cdot)$. It also follows that, if $k_{j, 1} < k_{j, 2} - 1$, then for any $k$ such that $k_{j, 1} < k < k_{j, 2}$, we must have $\\bidi[j](\\Pinputi[j]^{\\pi(j, k)}) = \\bidi(\\vali)$.\n\n\n\nA hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot)$ belongs to sub-class $\\mathcal{H}_i^{\\mathbf k }$ where the index $\\mathbf k$ is $(k_{j, 1}, k_{j, 2})_{j\\in[n]\\backslash\\{i\\}}$. \nThe number of sub-classes is clearly bounded by $(m+1)^{2(n-1)}$.\n\n\nWe now show that the hypotheses within each sub-class $\\mathcal{H}_i^{\\mathbf k}$ give rise to at most $(m+1)^n$ label vectors.\nLet us focus on one such class with index~$\\mathbf k$.
\nOn the $k^{\\text{-th}}$ sample~$\\Pinputs^k$, \na hypothesis's membership in \n$\\mathcal{H}_i^{\\mathbf k}$ suffices to specify whether bidder~$i$ is a winner on this sample, and, if so, the number of other winning bids at a tie.\nTherefore, the class index~$\\mathbf k$ determines a mapping $c: [m] \\to \\{0, 1, \\ldots, n\\}$, with $c(k) > 0$ meaning bidder~$i$ is a winner on sample~$\\Pinputs^k$ at a tie with $c(k)-1$ other bidders, and $c(k) = 0$ meaning bidder~$i$ is a loser on sample~$\\Pinputs^k$. \nThe output of a hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot) \\in \\mathcal{H}_i^{\\mathbf k}$ on sample~$\\Pinputs^k$ is then $(\\vali - \\bidi(\\vali)) \/ c(k)$ if $c(k) > 0$ and 0 otherwise. \nThe same utility is output on two samples $\\Pinputs^k$ and~$\\Pinputs^{k'}$ whenever $c(k) = c(k')$. \nTherefore, if we look at the labels assigned to a set~$S$ of samples that are mapped to the same nonzero integer by~$c$, there can be at most $|S| + 1 \\leq m + 1$ patterns of labels, because we compare the same utility with $|S|$ witnesses; the set of samples mapped to~$0$ by~$c$ has only one pattern of labels. \nThe vector of labels generated by a hypothesis in such a sub-class is a concatenation of these patterns. \nThe codomain of~$c$ contains $n$ nonzero integers, and so there are at most $(m+1)^n$ label vectors.\n\n\n\n\nTo conclude, the total number of label vectors generated by $\\mathcal{H}_i=\\bigcup_{\\mathbf k } \\mathcal{H}_i^{\\mathbf k }$ is at most \n\\[ (m+1)^{2(n-1)} (m+1)^{n} \\le (m+1)^{3n}. \\]\nTo pseudo-shatter $m$~samples, we need $(m+1)^{3n}\\ge 2^m$, which implies $m=O(n\\log n)$.\n\n\\end{proof}\n\n\\subsubsection{Proof of Lemma~\\ref{lem:relation-uniform-convergence}}\n\\label{sec:proof-lem:relation-uniform-convergence}\n\\uniformprod*\n\n\\begin{proof}\nThink of the samples $\\samples$ as an $m \\times n$ matrix $(\\samplei^j)$, where each row $j\\in[m]$ represents sample~$\\samples^j$, and each column~$i\\in[n]$ consists of the values sampled from~$\\disti$.
\nThen we draw $n$ permutations $\\pi_1, \\ldots, \\pi_n$ of $[m]=\\{1, \\ldots, m\\}$ independently and uniformly at random, and permute the $m$ elements in column~$i$ by~$\\pi_i$. \nRegard each new row $j$ as a new sample, denoted by $\\permSamples^j = (\\samplei[1]^{\\pi_1(j)}, \\samplei[2]^{\\pi_2(j)}, \\ldots, \\samplei[n]^{\\pi_n(j)})$. \nGiven $\\pi_1, \\ldots, \\pi_n$, the ``permuted samples'' $\\permSamples^j$, $j=1, \\ldots, m$ then have the same distribution as $m$ i.i.d.\\@ random draws from~$\\dists$. \n\nFor $h \\in \\mathcal{H}$, let $p_h$ be $\\Ex[\\vals\\sim\\dists]{h(\\vals)}$. \nThen by the definition of $(\\epsilon, \\delta)$-uniform convergence (applied to the permuted samples, which are distributed as i.i.d.\\@ draws from~$\\dists$),\n\\begin{equation}\\label{eq:samples_pi}\n\\Prx[\\samples, \\pi]{\\exists h \\in \\mathcal{H},\\ \\left| p_h - \\frac{1}{m }\\sum_{j=1}^{m} h(\\permSamples^j) \\right|\\ge\\epsilon} \\le \\delta.\n\\end{equation}\n\nFor a set of fixed samples $\\samples = (\\samples^1, \\ldots, \\samples^m)$, recall that $\\empDisti[i]$ is the uniform distribution over $\\{\\samplei[i]^{1}, \\ldots, \\samplei[i]^{m}\\}$, and $\\empDists = \\prod_{i=1}^n \\empDisti[i]$. \nWe show that the expected \nvalue of $h$ on $\\empDists$ satisfies $\\Ex[\\vals\\sim\\empDists]{h(\\vals)} = \\Ex[\\pi]{\\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j)}$.
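Before the formal derivation, this identity can be checked exactly on a toy instance by enumerating all tuples of column permutations (the matrix entries and the test function below are arbitrary illustrative choices, not the paper's notation):

```python
import itertools
import numpy as np

# Exact check of
#   E_pi[(1/m) sum_j h(permuted sample^j)] = E_{v ~ empirical product}[h(v)]
# on a toy m x n value matrix, for an arbitrary test function h.

def h(v):
    return max(v) + 0.5 * v[0] * v[1]

S = np.array([[0.2, 1.0],
              [0.7, 0.1],
              [1.5, 0.4]])   # m = 3 samples, n = 2 bidders
m, n = S.shape

# Left side: average over ALL (m!)^n tuples of column permutations.
perms = list(itertools.permutations(range(m)))
lhs = np.mean([
    np.mean([h([S[pi[i][j], i] for i in range(n)]) for j in range(m)])
    for pi in itertools.product(perms, repeat=n)
])

# Right side: expectation under the product of empirical marginals,
# i.e. the uniform distribution over all m^n mix-and-match rows.
rhs = np.mean([h([S[k[i], i] for i in range(n)])
               for k in itertools.product(range(m), repeat=n)])
```

Each grid point appears equally often among the permuted rows, which is exactly the symmetry the proof exploits.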
This is because\n\\begin{align*}\n \\Ex[\\pi]{\\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j)}\n ={}&\\frac{1}{m} \\sum_{j=1}^{m} \\Ex[\\pi]{h(\\permSamples^j)} \\\\\n ={}&\\frac{1}{m}\\sum_{j=1}^m \\sum_{(k_1, \\ldots, k_n)\\in[m]^n} h(\\samplei[1]^{k_1}, \\ldots, \\samplei[n]^{k_n})\\ \\cdot \\\\\n & \\hspace{8em}\n \t\\Prx[\\pi]{\\pi_1(j)=k_1, \\ldots, \\pi_n(j)=k_n} \\\\\n ={}&\\frac{1}{m}\\sum_{j=1}^m \\sum_{(k_1, \\ldots, k_n)\\in[m]^n} h(\\samplei[1]^{k_1}, \\ldots, \\samplei[n]^{k_n})\\cdot \\frac{1}{m^n}\\\\\n ={}&\\frac{1}{m^n} \\sum_{(k_1, \\ldots, k_n)\\in[m]^n} h(\\samplei[1]^{k_1}, \\ldots, \\samplei[n]^{k_n}) \\\\\n ={}&\\Ex[\\vals \\sim \\empDists]{h(\\vals)}.\n\\end{align*}\n\nThus, \n\\begin{align*}\n \\left| p_h - \\Ex[\\vals\\sim\\empDists]{h(\\vals)}\\right| \n ={}&\\left| p_h - \\Ex[\\pi]{\\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j)} \\right| \\\\\n \\le{}& \\Ex[\\pi]{ \\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|}\\\\\n \\le{}& \\Prx[\\pi]{ \\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|\\ge \\epsilon}\\cdot H \\\\\n & \\hspace{1em} + \\left(1-\\Prx[\\pi]{\\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|\\ge \\epsilon}\\right)\\cdot\\epsilon \\\\\n \\le{}& \\Prx[\\pi]{\\mathrm{Bad}(h, \\pi, \\samples)}\\cdot H + \\epsilon, \n\\end{align*}\nwhere in the last step we define the event\n\\[ \\mathrm{Bad}(h, \\pi, \\samples) = \\mathbb{I}\\left[\\left| p_h - \\frac{1}{m}\\sum_{j=1}^m h(\\permSamples^j) \\right|\\ge \\epsilon\\right].\\]\nBy simple calculation, whenever $\\left| p_h - \\Ex[\\vals\\sim\\empDists]{h(\\vals)}\\right| \\ge 2\\epsilon$, we have $\\Prx[\\pi]{\\mathrm{Bad}(h, \\pi, \\samples)}\\ge \\epsilon\/H$.\n\nFinally, consider the random draw $\\samples\\sim\\dists$: \n\\begin{align*}\n \\Prx[\\samples]{\\exists h\\in \\mathcal{H}, \\ \\left| p_h - \\Ex[\\vals \\sim \\empDists]{h(\\vals)} \\right|\\ge 2\\epsilon} \n \\le{}& \\Prx[\\samples]{\\exists h \\in \\mathcal{H}, \\ 
\\Prx[\\pi]{\\mathrm{Bad}(h, \\pi, \\samples)}\\ge \\frac{\\epsilon}{H} } \n \\\\\n \n \\le{}& \\Prx[\\samples]{\\Prx[\\pi]{\\exists h \\in \\mathcal{H}, \\ \\mathrm{Bad}(h, \\pi, \\samples)\\text{ holds} } \\ge \\frac{\\epsilon}{H}}.\n \\end{align*}\n \n By Markov's inequality, this is in turn upper bounded by\n \n \n \\begin{align*}\n \\frac{H}{\\epsilon}\\Ex[\\samples]{\\Prx[\\pi]{\\exists h \\in \\mathcal{H}, \\ \\mathrm{Bad}(h, \\pi, \\samples)\\text{ holds} } }\n \n ={}& \\frac{H}{\\epsilon}\\Prx[\\samples, \\pi]{\\exists h \\in \\mathcal{H}, \\ \\mathrm{Bad}(h, \\pi, \\samples)\\text{ holds} } \\\\\n \n \\le{}& \\frac{H\\delta}{\\epsilon} && \\text{ By \\eqref{eq:samples_pi}} \n\\end{align*}\n\\end{proof}\n\n\\subsection{Lower Bound: Proof of Theorem~\\ref{thm:lower-bound-learning-util}}\n\\label{sec:proof-thm:lower-bound-learning-utility}\n\n\\lowerbound*\n\nFixing $\\epsilon > 0$, fixing $c_1 = 2000$, we first define two value distributions.\nLet $\\dist^+$ be a distribution supported on $\\{0, 1\\}$, and for $\\val \\sim \\dist^+$, $\\Prx{\\val = 0} = 1 - \\frac{1 + c_1 \\epsilon}{n}$, and $\\Prx{\\val = 1} = \\frac{1 + c_1 \\epsilon}{n}$. \nSimilarly define $\\dist^-$: for $\\val \\sim \\dist^-$, $\\Prx{\\val = 0} = 1 - \\frac{1 - c_1 \\epsilon}{n}$, and $\\Prx{\\val = 1} = \\frac{1 - c_1 \\epsilon}{n}$. 
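As a quick numerical sanity check of the claim that follows (illustrative only; the loop ranges are arbitrary choices), the KL-divergence between these two-point distributions can be evaluated directly and compared against the explicit bound $10 c_1^2 \epsilon^2 / n$ obtained in the proof:

```python
import math

# KL divergence between the two-point distributions F+ and F-,
# with Pr[v = 1] = (1 +/- c1*eps)/n, checked against 10*(c1*eps)^2/n.

def kl_two_point(n, x):
    # x stands for c1 * eps, assumed < 1/2
    p = (1.0 + x) / n   # Pr[v = 1] under F+
    q = (1.0 - x) / n   # Pr[v = 1] under F-
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))

checks = [
    kl_two_point(n, x) <= 10.0 * x ** 2 / n
    for n in (10, 100, 1000)
    for x in (0.01, 0.1, 0.4)
]
```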
\n\nLet $\\kl(\\dist^+; \\dist^-)$ denote the KL-divergence between the two distributions.\n\\begin{claim}\n\\label{cl:lb-kl}\n$\\kl(\\dist^+; \\dist^-)= O(\\frac {\\epsilon^2}{n})$.\n\\end{claim}\n\\begin{proof}\nBy definition,\n\\begin{align*}\n \\kl(\\dist^+; \\dist^-) \n ={}& \\frac{1 + c_1\\epsilon}{n} \\ln \\left( \\frac{1 + c_1\\epsilon}{1 - c_1\\epsilon} \\right) \n + \\frac{n - 1 - c_1 \\epsilon}{n} \\ln \\left(\\frac{n - 1 - c_1 \\epsilon}{n - 1 +c_1 \\epsilon}\\right) \\\\\n ={}& \\frac 1n \\ln \\left( \\frac {1 + c_1\\epsilon}{1 - c_1 \\epsilon} \\cdot \\frac{(1 - \\frac{c_1 \\epsilon}{n - 1})^{n-1}}{(1 + \\frac{c_1 \\epsilon}{n-1})^{n-1}}\n \\right)\n + \\frac {c_1 \\epsilon}{n} \\ln \\left(\\frac{1 + c_1 \\epsilon}{1 - c_1 \\epsilon} \\cdot \n \\frac{1 + \\frac{c_1 \\epsilon}{n-1}}{1 - \\frac{c_1 \\epsilon}{n-1}}\\right) \\\\\n \\leq{}& \\frac 1n \\ln \\left( \\frac {1 + c_1\\epsilon}{1 - c_1 \\epsilon} \\cdot \\frac{\\left(1 - \\frac{c_1 \\epsilon}{n - 1}\\right)^{n-1}}{1 + c_1 \\epsilon} \\right)\n + \\frac {2c_1 \\epsilon}{n} \\ln \\left(1 + \\frac{2c_1 \\epsilon}{1 - c_1 \\epsilon} \\right)\n\\\\\n \\leq{}& \\frac 1 n \\ln \\left( \n \\frac{1 - c_1 \\epsilon + \\frac 1 2 (c_1 \\epsilon)^2}{1 - c_1 \\epsilon}\n \\right) + \\frac{8c_1^2 \\epsilon^2}{n} \\\\\n \\leq{}& \\frac{10c_1^2 \\epsilon^2}{n}.\n\\end{align*}\nIn the last two inequalities we used $c_1 \\epsilon < \\frac 1 2$ and $\\ln (1+x) \\leq 1+x$ for all $x > 0$.\n\\end{proof}\n\nIt is well known that upper bounds on KL-divergence implies information theoretic lower bound on the number of samples to distinguish distributions \\cite[e.g.][]{MansourNotes}.\n\\begin{corollary}\n\\label{cor:lb-kl-single}\nGiven $t$ i.i.d.\\@ samples from $\\dist^+$ or~$\\dist^-$, if $t \\leq \\frac{n}{80c_1^2 \\epsilon^2}$, no algorithm~$\\mathcal{H}$ that maps samples to $\\{\\dist^+, \\dist^-\\}$ can do the following: when the samples are from~$\\dist^+$, $\\mathcal{H}$ outputs~$\\dist^+$ with probability at 
least $\\frac 2 3$, and if the samples are from~$\\dist^-$, $\\mathcal{H}$ outputs~$\\dist^-$ with probability at least~$\\frac 2 3$. \n\\end{corollary}\n\nWe now construct product distributions using $\\dist^+$ and~$\\dist^-$. \nFor any $S \\subseteq [n - 1]$, define product distribution $\\dists_S$ to be $\\prod_i \\disti$ where $\\disti = \\dist^+$ if $i \\in S$, and $\\disti = \\dist^-$ if $i \\in [n-1] \\setminus S$, and $F_n$ is a point mass on value~$1$.\nFor any $j \\in [n - 1]$ and $S \\subseteq [n - 1]$, distinguishing $\\dists_{S \\cup \\{j\\}}$ and $\\dists_{S \\setminus \\{j\\}}$ by samples from the product distribution is no easier than distinguishing $\\dist^+$ and $\\dist^-$, because the coordinates of the samples not from $\\disti[j]$ contain no information about~$\\disti[j]$. \n\n\\begin{corollary}\n\\label{cor:lb-kl}\nFor any $j \\in [n - 1]$ and $S \\subseteq [n - 1]$, given $t$ i.i.d.\\@ samples from $\\dists_{S \\cup \\{j\\}}$ or $\\dists_{S \\setminus \\{j\\}}$, if $t \\leq \\frac n {80 c_1^2 \\epsilon^2}$, no algorithm~$\\mathcal{H}$ can do the following: when the samples are from $\\dists_{S \\cup \\{j\\}}$, $\\mathcal{H}$ outputs~$\\dists_{S \\cup \\{j\\}}$ with probability at least $\\frac 2 3$, and when the samples are from $\\dists_{S \\setminus \\{j\\}}$, $\\mathcal{H}$ outputs~$\\dists_{S \\setminus \\{j\\}}$ with probability at least~$\\frac 2 3$.\n\\end{corollary}\n\nWe now use Corollary~\\ref{cor:lb-kl} to derive an information-theoretic lower bound on learning utilities for monotone bidding strategies, for distributions in $\\{\\dists_S\\}_{S \\subseteq [n-1]}$.\n\n\\begin{proof}[Proof of Theorem~\\ref{thm:lower-bound-learning-util}]\nWithout loss of generality, assume $n$ is odd. \nLet $S$ be an arbitrary subset of~$[n - 1]$ of size either $\\lfloor n\/2 \\rfloor$ or $\\lceil n\/ 2 \\rceil$.\nWe focus on the interim utility of bidder~$n$ with value~$1$ and bidding $\\frac 1 2$.
\nDenote this bidding strategy by~$\\bidi[n](\\cdot)$.\nThe other bidders may adopt one of two bidding strategies.\nOne of them is $\\bid^+(\\cdot)$: $\\bid^+(0) = 0$ and $\\bid^+(1) = \\frac 1 2 + \\eta$ for sufficiently small $\\eta > 0$. \nThe other bidding strategy $\\bid^-(\\cdot)$ maps all values to~$0$.\nFor $T \\subseteq [n-1]$, let $\\bids_T(\\cdot)$ be the profile of bidding strategies where $\\bidi(\\cdot) = \\bid^+(\\cdot)$ for $i \\in T$, and $\\bidi(\\cdot) = \\bid^-(\\cdot)$ for $i \\notin T$. \n\n\nFor the distribution $\\dists_S$, \n\\begin{align*}\n \\utili[n]\\left(1, \\frac 1 2, \\bids_T(\\cdot)\\right) \n ={}& \\frac 1 2 \\Prx{\\max_{i \\in T} \\vali = 0} \\\\\n ={}& \\frac 1 2 \n \\left(1 - \n \\frac{1 + c_1 \\epsilon}{n}\n \\right)^{|S \\cap T|}\n \\left(1 - \n \\frac{1 - c_1 \\epsilon}{n}\n \\right)^{|T\\setminus S|}\n \\\\\n ={}& \\frac 1 2\n \\left(\n 1 - \\frac{1 + c_1 \\epsilon}{n}\n \\right)^{|T|}\n \\left(\n \\frac{n - 1 + c_1 \\epsilon}{n - 1 - c_1 \\epsilon}\n \\right)^{|T \\setminus S|}.\n\\end{align*}\nTherefore, for $T, T' \\subseteq [n-1]$ with $|T| = |T'| $,\n\\begin{align*}\n \\frac{\\utili[n](1, \\frac 1 2, \\bids_T(\\cdot))}{\\utili[n](1, \\frac 1 2, \\bids_{T'}(\\cdot))} ={}& \\left(\n 1 + \\frac{2c_1 \\epsilon \/ (n-1)}{1 - \\frac{c_1 \\epsilon}{n-1}} \n \\right)^{|T\\setminus S| - |T' \\setminus S|} \\\\\n \\geq{}& 1 + \\frac{2c_1 \\epsilon}{n-1} \\cdot (|T \\setminus S| - |T' \\setminus S|);\n\\end{align*}\nSuppose $|T \\setminus S| \\geq |T' \\setminus S|$ and $|T| = |T'| \\geq \\lfloor \\frac n 2 \\rfloor$, then\n\\begin{align}\n \\utili[n]\\left(1, \\frac 1 2, \\bids_T(\\cdot)\\right) - \\utili[n]\\left(1, \\frac 1 2, \\bids_{T'}(\\cdot)\\right) \\ge{}& (|T \\setminus S| - |T' \\setminus S|) \\cdot \\frac {2c_1 \\epsilon}{n-1} \\cdot \\utili[n]\\left(1, \\frac 1 2, \\bids_{T'}(\\cdot) \\right) \n\\notag \\\\\n\\geq{}& (|T \\setminus S| - |T' \\setminus S|) \\cdot \\frac {2c_1 \\epsilon}{n-1} \\cdot \\frac 1 {8 e^2}, 
\n\\label{eq:util-diff-eps}\n\\end{align}\nwhere the last inequality is because $\\utili[n](1, \\frac 1 2, \\bids_{T'}(\\cdot)) \\ge \\frac{1}{2} (1 - \\frac{2}{n})^n = \\frac{1}{2} [(1 - \\frac{2}{n})^\\frac{n}{2}]^2\\ge \\frac{1}{2} (\\frac{1}{2e})^2 = \\frac{1}{8e^2}$. \n\nNow suppose an algorithm~$\\mathcal{A}$ $(\\epsilon, \\delta)$-learns the utilities of all monotone bidding strategies with $t$ samples~$\\samples$ for $t \\leq \\frac{n}{80c_1^2 \\epsilon^2}$.\nDefine $\\mathcal{H}: \\mathbb R_+^{n \\times t} \\times \\mathbb N \\to 2^{[n-1]}$ to be a function that outputs, among all $T\\subseteq [n-1]$ of size~$k$, the one that maximizes the estimated utility of bidder~$n$ when the bidders bid according to $(\\bids_{T}(\\cdot), \\bidi[n](\\cdot))$. \nFormally, \n\\begin{align*}\n \\mathcal{H}(\\samples, k) = \\argmax_{T \\subseteq [n-1], |T| = k} \\mathcal{A}\\left(\\samples, n, 1, (\\bids_{T}(\\cdot), \\bidi[n](\\cdot)) \\right).\n\\end{align*}\n\nBy Definition~\\ref{def:util-learn-ensemble}, for any $S$ with $|S| = \\lfloor n \/ 2 \\rfloor$, for samples drawn from~$\\dists_S$, with probability at least $1 - \\delta$,\n\\begin{equation*}\n \\mathcal{A}\\left(\\samples, n, 1, (\\bids_{[n-1]\\setminus S}(\\cdot), \\bidi[n](\\cdot))\\right)\n \\geq \\utili[n]\\left(1, \\frac 1 2, \\bids_{[n-1] \\setminus S}(\\cdot) \\right) - \\epsilon;\n\\end{equation*}\nand for any $T \\subseteq[n-1]$ with $|T| = \\lceil n \/ 2 \\rceil$,\n\\begin{equation*}\n \\mathcal{A}\\left(\\samples, n, 1, (\\bids_T(\\cdot), \\bidi[n](\\cdot))\\right)\n \\leq \\utili[n]\\left(1, \\frac 1 2, \\bids_T(\\cdot) \\right) + \\epsilon.\n\\end{equation*}\nTherefore, for $W = \\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$, \n\\begin{align*}\n \\utili[n]\\left(1, \\frac 1 2, \\bids_W(\\cdot) \\right) \\geq \n \\utili[n] \\left(1, \\frac 1 2, \\bids_{[n-1]\\setminus S}(\\cdot) \\right) - 2\\epsilon.\n\\end{align*}\nSince $|W| = |[n-1]\\setminus S| = \\lceil n \/ 2 \\rceil$, by \\eqref{eq:util-diff-eps},\n\\begin{align*}\n \\left(\\lceil \\frac n 2 \\rceil -
|W \\setminus S| \\right) \\cdot \\frac{c_1 \\epsilon}{(n-1)4e^2} \\leq 2\\epsilon.\n\\end{align*}\nSo\n\\begin{align*}\n |W \\cap S| \\leq (n - 1) \\cdot \\frac{8e^2}{c_1}.\n\\end{align*}\nIn other words, with probability at least $ 1- \\delta$, $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ is the complement of~$S$ except for at most a $\\frac{8e^2}{c_1}$ fraction of the coordinates in $[n-1]$.\n\nSimilarly, for $S$ of cardinality $\\lceil n \/ 2 \\rceil$, \n\\begin{align*}\n |\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil) \\cap S| \\leq (n - 1) \\cdot \\frac{8e^2}{c_1} + 1.\n\\end{align*}\nTake $c_2$ to be $\\frac{8e^2}{c_1}$. We have $c_2<\\frac 1 {20}$. For all large enough $n$ and all $S$ of size~$\\lfloor n \/ 2 \\rfloor$ or $\\lceil n \/ 2 \\rceil$, with probability at least $1 - \\delta$, $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ correctly outputs the elements not in~$S$ with the exception of at most a $c_2$ fraction of the coordinates.\n\nLet $\\mathcal {S}$ be the set of all subsets of $[n-1]$ of size either $\\lceil n \/ 2 \\rceil$ or $\\lfloor n \/ 2 \\rfloor$.\nConsider any~$S \\in \\mathcal {S}$. \nLet $\\theta(S) \\subseteq [n-1]$ denote the set of coordinates whose memberships in~$S$ are correctly predicted by $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ with probability at least $2\/3$; that is, $i \\in \\theta(S)$ if{f} with probability at least $2\/3$, $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ is correct about whether $i \\in S$. \nLet the cardinality $|\\theta(S)|$ be $z(n-1)$.
Suppose we draw coordinate $i$ uniformly at random from $[n-1]$, and independently draw samples $\\samples$ from $\\dists_S$, then the probability that $\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)$ is correct about whether $i\\in S$ satisfies:\n\\begin{align*}\n \\Prx[i, \\samples]{\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)\\text{ is correct about whether }i\\in S}\n \\geq{}& (1 - c_2) (1 - \\delta) \\\\\n \\geq{}& 0.9,\n\\end{align*}\nand \n\\begin{align*}\n \\Prx[i, \\samples]{\\mathcal{H}(\\samples, \\lceil n \/ 2 \\rceil)\\text{ is correct about whether }i\\in S}\n \\le{}& \\Prx[i]{i\\in \\theta(S)}\\cdot 1 + \\Prx[i]{i\\notin \\theta(S)}\\cdot\\frac{2}{3} \\\\\n ={}& z\\cdot 1 + (1-z)\\cdot \\frac 2 3,\n\\end{align*} \nwhich implies $z > 0.6$. \nIf a pair of sets $S$ and~$S'$ differ in only one coordinate~$i$, and $i \\in \\theta(S) \\cap \\theta(S')$, then $\\mathcal{H}(\\cdot)$ serves as an algorithm that tells apart $\\dists_S$ and~$\\dists_{S'}$, contradicting Corollary~\\ref{cor:lb-kl}. \nWe now show, with a counting argument, that such a pair of $S$ and~$S'$ must exist.\n\nSince for each $S \\in \\mathcal {S}$, $|\\theta(S)| \\geq 0.6(n-1)$, there exists a coordinate $i \\in [n-1]$ and $\\mathcal T \\subseteq \\mathcal {S}$, with $|\\mathcal T| \\geq 0.6 |\\mathcal {S}|$, such that for each $S \\in \\mathcal T$, $i \\in \\theta(S)$. \nBut $\\mathcal {S}$ can be decomposed into $|\\mathcal {S}| \/ 2$ pairs of sets, such that within each pair, the two sets differ by one in size, and precisely one of them contains coordinate~$i$. 
\nTherefore among these pairs there must exist one $(S, S')$ with $S, S' \\in \\mathcal T$, i.e., $i \\in \\theta(S)$ and $i \\in \\theta(S')$.\nUsing $\\mathcal{H}$, which is induced by~$\\mathcal{A}$, we can tell apart $\\dists_S$ and~$\\dists_{S'}$ with probability at least $2\/3$, which is a contradiction to Corollary~\\ref{cor:lb-kl}.\nThis completes the proof of Theorem~\\ref{thm:lower-bound-learning-util}.\n\\end{proof}\n\n\n\\section{Missing Proofs from Section \\ref{sec:search}}\n\\subsection{Proof of Theorem \\ref{thm:epsNEpoa}}\n\\label{sec:proof-thm:epsNEpoa}\n\\epsNEpoa*\n\nRecall that ${\\mathbb{A}}_i(\\dstrats)$ indicates whether bidder~$i$ receives the item, and \n${\\mathbb{I}}_i(\\dstrats)$ indicates whether bidder~$i$ inspects her value.\nThe expected utility of bidder~$i$ can be decomposed into a welfare term and a payment term, as follows:\n\\[\\utili^{\\DA(\\dists, \\costs)}(\\dstrats) = \\Ex{{\\mathbb{A}}_i(\\dstrats)(\\vali - \\dbidi(\\vali)) - {\\mathbb{I}}_i(\\dstrats)\\costi} = \\Ex{{\\mathbb{A}}_i(\\dstrats)\\vali - {\\mathbb{I}}_i(\\dstrats)\\costi} - \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)},\\]\nwhere the randomness is over $\\vals\\sim\\dists$ and the randomness of mixed strategies.\nAnd the social welfare can be expressed as the sum of utilities and payments of bidders: \n\\begin{align*}\n \\SW^{\\DA(\\dists, \\costs)}(\\dstrats) = \\sum_{i=1}^n\\Ex{ {\\mathbb{A}}_i(\\dstrats)\\vali - {\\mathbb{I}}_i(\\dstrats)\\costi}\n = \\sum_{i=1}^n \\left[ \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) + \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)}\\right].\n\\end{align*}\n\nNow suppose $\\dstrats$ is an $\\epsilon$-NE, then \n\\begin{align}\n \\SW^{\\DA(\\dists, \\costs)}(\\dstrats) & \\ge \\sum_{i=1}^n \\left[\\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi)-\\epsilon + \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)}\\right] \\nonumber \\\\\n & = \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) + 
\\sum_{i=1}^n \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)} - n\\epsilon, \n\\label{eq:SW-eps-NE}\n\\end{align}\nfor any set of strategies $\\{\\dstrati'\\}_{i\\in[n]}$.\n\nFor each $i$, let $\\thresholdi$ be the index of $(\\disti, \\costi)$. \nRecall that we use $\\kappa_i$ to denote $\\min\\{\\vali, \\thresholdi\\}$. \nWe will construct strategies $\\dstrati'$ that satisfy the following inequality:\n\\begin{equation}\n\\label{eq:smoothness-goal}\n \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) \\ge (1-\\frac{1}{e})\\Ex{\\max_{i\\in[n]} \\kappa_i} - \\sum_{i=1}^n \\Ex{{\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)}. \n\\end{equation}\nBy \\eqref{eq:SW-eps-NE} and \\eqref{eq:smoothness-goal} we have\n$\\SW^{\\DA(\\dists, \\costs)}(\\dstrats) \\ge (1-\\frac{1}{e})\\Ex{\\max_{i\\in[n]} \\kappa_i} - n\\epsilon$. Since $\\Ex{\\max_{i\\in[n]} \\kappa_i} = \\OPT^{(\\dists, \\costs)}$ (Lemma~\\ref{lem:optimal-welfare}), the theorem is proved. \n\nNow we construct $\\dstrati'$. Each $\\dstrati'$ is a mixed strategy that does the following: sample a random variable $Z\\in [\\frac{1}{e}, 1]$ with probability density function $f_Z(z) = \\frac{1}{z}$; inspect the value at threshold price $\\dtimei'=(1-Z)\\thresholdi$; and claim the item at price $\\dbidi'(\\vali) = (1-Z)\\kappa_i$. Note that $\\dstrati'$ claims above $\\thresholdi$, and we will make use of the following property of strategies that claim above $\\thresholdi$: \n\\begin{claim}\\label{cl:utility-pandora}\nFor any strategy $\\dstrati'$ that claims above $\\thresholdi$, \n$\\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)\\vali - {\\mathbb{I}}_i(\\dstrati', \\dstratsmi)\\costi} = \\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)\\kappa_i}$.\n\\end{claim}\n\\begin{proof}\nFor convenience we write ${\\mathbb{A}}_i = {\\mathbb{A}}_i(\\dstrati', \\dstratsmi)$ and ${\\mathbb{I}}_i = {\\mathbb{I}}_i(\\dstrati', \\dstratsmi)$. 
By linearity of expectation and the definition of index,\n\\begin{align*}\n \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i\\costi}\n & = \\Ex{{\\mathbb{A}}_i\\vali} - \\Ex{{\\mathbb{I}}_i}\\costi \\\\\n & = \\Ex{{\\mathbb{A}}_i\\vali} - \\Ex{{\\mathbb{I}}_i}\\Ex[\\vali\\sim\\disti]{\\max\\{\\vali - \\thresholdi, 0\\}}. \n\\end{align*}\nNote that $\\vali$ and ${\\mathbb{I}}_i$ are independent because bidder~$i$ doesn't know her value before inspection. Thus\n\\begin{align*}\n \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i\\costi}\n & = \\Ex{{\\mathbb{A}}_i\\vali} - \\Ex{{\\mathbb{I}}_i\\max\\{\\vali - \\thresholdi, 0\\}} \\\\\n & = \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i \\max\\{\\vali - \\thresholdi, 0\\}} \\\\\n & = \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{A}}_i \\max\\{\\vali - \\thresholdi, 0\\} + ({\\mathbb{A}}_i - {\\mathbb{I}}_i) \\max\\{\\vali - \\thresholdi, 0\\}}. \n\\end{align*}\nBecause $\\dstrati'$ claims above $\\thresholdi$, we have ${\\mathbb{A}}_i = {\\mathbb{I}}_i$ whenever $\\vali>\\thresholdi$. This implies $({\\mathbb{A}}_i - {\\mathbb{I}}_i) \\max\\{\\vali - \\thresholdi, 0\\} = 0$ and \n\\begin{align*}\n \\Ex{{\\mathbb{A}}_i\\vali - {\\mathbb{I}}_i\\costi}\n = \\Ex{{\\mathbb{A}}_i(\\vali - \\max\\{\\vali - \\thresholdi, 0\\})}\n = \\Ex{{\\mathbb{A}}_i\\kappa_i}.\n\\end{align*}\n\\end{proof}\n\nNow we argue that the $\\{\\dstrati'\\}_{i\\in[n]}$ constructed above satisfy \\eqref{eq:smoothness-goal}. By Claim~\\ref{cl:utility-pandora}, we have $\\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) = \\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)(\\kappa_i - \\dbidi'(\\vali))}$. 
Summing over $i\\in[n]$, \n\\begin{align*}\n \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) & = \\sum_{i=1}^n \\Ex{{\\mathbb{A}}_i(\\dstrati', \\dstratsmi)(\\kappa_i - \\dbidi'(\\vali))} \\\\ \n & = \\Ex{ \\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrati', \\dstratsmi)(\\kappa_i - \\dbidi'(\\vali))} \\\\\n & = \\Ex{ \\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrati', \\dstratsmi) Z\\kappa_i}.\n\\end{align*}\nFor any fixed value profile $\\vals=(\\vali)$, let $i^*\\coloneqq \\argmax_{i\\in[n]}\\{\\kappa_i\\}$. Since ${\\mathbb{A}}_i(\\dstrati', \\dstratsmi) Z\\kappa_i \\ge 0$, we have \n\\begin{align}\\label{eq:utility-maxindex}\n \\sum_{i=1}^n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) \\ge \\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}}.\n\\end{align}\n\\begin{claim}\\label{cl:smoothness-step}\nFor any $\\vals$, $\\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}\\mid \\vals} \\ge (1-\\frac{1}{e}) \\kappa_{i^*} - \\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali)$. 
\n\\end{claim}\n\\begin{proof}\nLet $p\\coloneqq \\max_{j\\ne {i^*}} \\dbidi[j](\\vali[j])$.\nIf $p\\ge (1-\\frac{1}{e})\\kappa_{i^*}$, then $\\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}\\mid \\vals} \\ge 0 \\ge (1-\\frac{1}{e}) \\kappa_{i^*} - p$.\nOtherwise, note that whenever bidder~$i^*$'s bid $(1-Z)\\kappa_{i^*}$ is above $p$, she wins the item, thus\n\\begin{align*}\n \\Ex{ {\\mathbb{A}}_{i^*}(\\dstrati[i^*]', \\dstratsmi[i^*]) Z\\kappa_{i^*}\\mid \\vals} & = \\int_{1\/e}^{1-p\/\\kappa_{i^*}} z\\kappa_{i^*} f_Z(z) \\dd z = \\int_{1\/e}^{1-p\/\\kappa_{i^*}} z\\kappa_{i^*} \\frac{1}{z} \\dd z \\\\\n & = (1- \\frac{1}{e} - \\frac{p}{\\kappa_{i^*}}) \\kappa_{i^*} = (1- \\frac{1}{e})\\kappa_{i^*} - p.\n\\end{align*}\nThe proof is completed by observing that $\\sum_{i=1}^n {\\mathbb{A}}_i(\\dstrats)\\dbidi(\\vali) = \\max_{i\\in[n]} \\dbidi(\\vali) \\ge p$.\n\\end{proof}\nTaking expectation over $\\vals\\sim\\dists$, \\eqref{eq:utility-maxindex} and Claim~\\ref{cl:smoothness-step} immediately imply \\eqref{eq:smoothness-goal}. \n\n\n\\subsection{Pandora's Box Problem and Its Sample Complexity}\n\\label{sec:pandora}\n\\input{pandora.tex}\n\n\n\\subsection{Descending Auction with Search Costs}\n\\label{sec:fpa-pandora}\nIn this section, we briefly review the main results by \\citet{KWW16} in Section~\\ref{sec:KWW16}, and then in Section~\\ref{sec:sample-KWW16} present our learning results in auctions with search costs.\nRecall that in this setting, we consider a single-item auction, where each bidder~$i$ has a value~$\\vali \\in [0, H]$ drawn independently from distribution~$\\disti$, but \n$\\vali$ is not known to anyone at the beginning of the auction. \nIn order to observe the value, bidder~$i$ needs to pay a known search cost $\\costi\\in[0, H]$. 
\n\n\n\\subsubsection{Transformation with Distributional Knowledge}\n\\label{sec:KWW16}\n\n\\paragraph{Descending auction with search costs.} \nIn a \\emph{descending auction} (or Dutch auction), a publicly visible price descends continuously from~$H$. \nAt any point, any bidder may claim the item at the current price. \nWith search cost, a bidder's strategy $\\dstrati$ consists of two parts:\\footnote{Note that there is no private information at the beginning of the auction.} a threshold price $\\dtimei$ and a mapping $\\dbidi(\\cdot)$ from values to bids. \nConcretely, bidder~$i$ decides to inspect when the price descends to~$\\dtimei$, at which point she pays the search cost and immediately learns her value~$\\vali$. \nAfter seeing her value, the bidder chooses a purchase price $\\dbidi(\\vali) \\leq \\dtimei$ at which to claim the item.\nThe latter is equivalent to submitting a bid $\\dbidi(\\vali)\\le\\dtimei$. \n\nWe say a strategy $\\dstrati=(\\dtimei, \\dbidi(\\cdot))$ is \\emph{monotone} if $\\dbidi(\\cdot)$ is monotone non-decreasing. A strategy is \\emph{mixed} if it is a distribution over pure strategies. Mixed strategies allow bidders to randomize over the threshold price $\\dtimei$ and the purchase price $\\dbidi(\\vali)$. Abusing notation, we also use $\\dstrati$ to denote a mixed strategy.\nWe say a \\emph{mixed} strategy $\\dstrati$ is \\emph{monotone} if it is a distribution over monotone pure strategies. \n\nWe use $\\DA(\\dists, \\costs)$ to denote a descending auction on value distributions $\\dists$ with search costs $\\costs$, and let $\\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\dstratsmi)$ be the expected utility of bidder $i$ when bidders use strategies $\\dstrats=(\\dstrati, \\dstratsmi)$ and their values are drawn from $\\dists$. Note that this utility is ex ante, since the value is unknown until the bidder searches. 
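To make these dynamics concrete, the following Python sketch (our own illustration, not part of \citet{KWW16}; the event-processing order and the tie-breaking rule, which favors pending claims, are assumptions) simulates one run of a descending auction with search costs under pure strategies $(\dtimei, \dbidi(\cdot))$: the clock moves from high prices to low, an inspection event fires when the price reaches $\dtimei$, and the first claim ends the auction.

```python
def run_descending_auction(taus, bid_fns, values, costs):
    """Simulate one run of a descending (Dutch) auction with search costs.

    taus[i]    : threshold price at which bidder i inspects
    bid_fns[i] : maps bidder i's realized value to a claim price <= taus[i]
    values[i]  : bidder i's (initially unknown) value
    costs[i]   : bidder i's search cost (only informational here)
    Returns (winner, winning price, list of inspection indicators).
    """
    n = len(taus)
    inspected = [False] * n
    claims = {}  # bidder -> claim price, known only after she inspects
    while True:
        # Next events on the descending clock: pending inspections at tau_i
        # and pending claims at b_i(v_i); higher prices happen first.
        insp = max(((taus[i], i) for i in range(n) if not inspected[i]),
                   default=None)
        clm = max(((price, i) for i, price in claims.items()), default=None)
        if clm is not None and (insp is None or clm[0] >= insp[0]):
            return clm[1], clm[0], inspected  # first claim ends the auction
        if insp is None:
            return None, None, inspected      # nobody ever claims
        _, i = insp
        inspected[i] = True                   # pay cost c_i, learn v_i
        claims[i] = bid_fns[i](values[i])     # schedule her claim

# Example: bidder 0 inspects at 0.8 and claims at 0.7, before the clock
# reaches bidder 1's inspection threshold 0.6.
winner, price, inspected = run_descending_auction(
    taus=[0.8, 0.6],
    bid_fns=[lambda v: 0.7 if v > 0.5 else 0.3, lambda v: 0.9 * v],
    values=[0.9, 0.5],
    costs=[0.02, 0.08])
```

Utilities then follow the decomposition above: the winner earns her value minus the winning price, and every inspecting bidder additionally pays her search cost.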
\nThe solution concept we consider is therefore a Nash equilibrium instead of a Bayes Nash equilibrium. \n\n\\begin{defn} \nIn $\\DA(\\dists, \\costs)$, a (mixed) strategy profile $\\dstrats$ is an \\emph{$\\epsilon$-Nash equilibrium} ($\\epsilon$-NE) if for each bidder~$i$ and any strategy $\\dstrati'$, \n\\[\\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\dstratsmi) \\ge \\utili^{\\DA(\\dists, \\costs)}(\\dstrati', \\dstratsmi) - \\epsilon.\\]\nIf $\\epsilon = 0$, $\\dstrats$ is a \\emph{Nash equilibrium}.\n\\end{defn}\n\nWe use $\\FPA(\\dists)$ to denote the first price auction with value distributions~$\\dists$. \nDenote by $\\utili^{\\FPA(\\dists)}(\\fstrats)$ the (ex ante) expected utility of bidder~$i$ in $\\FPA(\\dists)$, when the bidders use strategy profile~$\\fstrats$. \nWe can similarly define the Nash equilibrium for a first price auction. \n\\begin{defn} \nIn $\\FPA(\\dists)$, a (mixed) strategy profile $\\fstrats$ is an \\emph{$\\epsilon$-Nash equilibrium} ($\\epsilon$-NE) if for each bidder~$i$ and any strategy $\\fstrati'$,\n\\[\\utili^{\\FPA(\\dists)}(\\fstrati, \\fstratsmi) \\ge \\utili^{\\FPA(\\dists)}(\\fstrati', \\fstratsmi) - \\epsilon.\\]\nIf $\\epsilon = 0$, $\\fstrats$ is a \\emph{Nash equilibrium}.\n\\end{defn}\n\nNote that Nash equilibrium is an ex ante notion, in contrast to BNE (Definition~\\ref{def:bne}), which is an interim notion and requires every type to best respond.\nIn a first price auction, an $\\epsilon$-BNE must be an $\\epsilon$-NE, but the reverse is not true.\n\nWith no search cost, the Dutch auction is well known to be equivalent to a first price auction. \nGiven a Dutch auction with search costs, \\citet{KWW16} constructed a first price auction with transformed value distributions and no search costs, and showed that an NE of this FPA corresponds to an NE of the Dutch auction with search costs. 
\n\n\n\\begin{defn}\\label{def:fpa-pandora-index}\nGiven a distribution $\\disti$ and a search cost $\\costi$, define the \\emph{index} $\\thresholdi$ of $(\\disti, \\costi)$ to be the unique solution to $\\Ex[\\vali\\sim\\disti]{ \\max\\{\\vali - \\thresholdi, 0\\} } = \\costi$. If $\\costi=0$, let $\\thresholdi=H$. \nWe always assume $\\Ex[\\vali\\sim\\disti]{\\vali}\\ge \\costi$, so that $\\thresholdi\\in[0, H]$. (Otherwise the search cost would be so high that the bidder should never search for the value.)\n\\end{defn}\n\nFor a distribution $\\dist$ and some $\\threshold\\in \\mathbb R$, we define \n$\\dist^{\\threshold}$ to be the distribution of $\\min\\{\\val, \\threshold\\}$ where $\\val\\sim \\dist$. \nFor a product distribution~$\\dists$ and a vector $\\thresholds$, we use $\\dists^{\\thresholds}$ to denote the product distribution whose $i$-th component is $\\disti^{\\thresholdi}$. \nA key insight of \\citet{KWW16} is a pair of utility-preserving mappings between strategies in DA$(\\dists, \\costs)$ and FPA$(\\dists^{\\thresholds})$, where $\\thresholds$ is the vector of indices for $(\\dists, \\costs)$.\n\n\n\\begin{defn}\\label{def:strategy-mappings}\nFor each bidder $i$, given distribution $\\disti$ and $\\thresholdi \\in [0, H]$, define two mappings:\\footnote{We describe mappings for pure strategies here. For mixed strategies, their images are naturally distributions over the images of pure strategies under $\\lambda$ and $\\mu$.}\n\\begin{enumerate} \n\\item $\\lambda^{\\thresholdi}$: a monotone strategy $\\fstrati:[0, \\thresholdi]\\to\\mathbb R_+$ in $\\FPA(\\dists^{\\thresholds})$ is mapped by $\\lambda^{\\thresholdi}$ to \nthe strategy in $\\DA(\\dists, \\costs)$ with threshold price $\\dtimei=\\fstrati(\\thresholdi)$ and bidding function $\\dbidi(\\vali) = \\fstrati(\\min \\{\\vali, \\thresholdi\\})$. \n(By the monotonicity of~$\\fstrati$, we have $\\dbidi(\\vali)\\le \\dtimei$.) 
\n\n\\item $\\mu^{(\\disti, \\thresholdi)}$: a strategy $\\dstrati = (\\dtimei, \\dbidi(\\cdot))$ in $\\DA(\\dists, \\costs)$ is mapped by $\\mu^{(\\disti, \\thresholdi)}$ to a strategy $\\fstrati=\\mu^{(\\disti, \\thresholdi)}(\\dstrati)$ in $\\FPA(\\dists^{\\thresholds})$, with $\\fstrati(\\vali) = \\dbidi(\\vali)$ for $\\vali<\\thresholdi$ and $\\fstrati(\\thresholdi)=\\dbidi(\\vali')$, where $\\vali'$ is a random variable drawn from~$\\disti$ conditioned on $\\vali' \\ge \\thresholdi$.\n\\end{enumerate}\n\\end{defn}\n\n\\noindent The superscripts $\\thresholdi$ and $(\\disti, \\thresholdi)$ should make it clear that the mapping~$\\lambda^{\\thresholdi}$ is determined solely by~$\\thresholdi$, while $\\mu^{(\\disti, \\thresholdi)}$ depends on both the distribution and~$\\thresholdi$. \n\nA strategy $\\dstrati = (\\dtimei, \\dbidi(\\cdot))$ in a descending auction is said to \\emph{claim above $\\thresholdi$} \nif $\\dbidi(\\vali) =\\dtimei$ for all $\\vali\\ge \\thresholdi$, i.e., the bidder claims the item immediately if she finds the value of the item greater than or equal to~$\\thresholdi$. \n\n\\begin{claim}[Claim 2 of \\citealp{KWW16}]\n\\label{claim:strategy-equivalence}\nGiven a distribution $\\disti$ whose index is $\\thresholdi$, \n\\begin{enumerate}\n \\item If $\\dstrati$ claims above $\\thresholdi$, then $\\dstrati = \\lambda^{\\thresholdi}(\\mu^{(\\disti, \\thresholdi)}(\\dstrati))$. \n \\item If $\\fstrati$ is monotone, then $\\fstrati = \\mu^{(\\disti, \\thresholdi)}(\\lambda^{\\thresholdi}(\\fstrati))$. \n\\end{enumerate}\n\\end{claim}\n\n\\begin{thm}[Claim 3 of \\citealp{KWW16}]\n\\label{thm:DA_FPA_transform}\nSuppose $\\thresholds$ is the vector of indices of $(\\dists, \\costs)$ (Definition~\\ref{def:fpa-pandora-index}). 
\n\\begin{enumerate}\n \\item For any monotone mixed strategy profile $\\fstrats = (\\fstrati, \\fstratsmi)$ for $\\FPA(\\dists^{\\thresholds})$, for each bidder~$i$, \n\\[ \\utili^{\\FPA(\\dists^{\\thresholds})}(\\fstrats) = \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\thresholds}(\\fstrats)).\\]\n\\item For any mixed (not necessarily monotone) strategy profile $\\dstrats = (\\dstrati, \\dstratsmi)$ for $\\DA(\\dists, \\costs)$, for each bidder~$i$,\n\\[ \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \\le \\utili^{\\FPA(\\dists^{\\thresholds})}(\\mu^{(\\dists, \\thresholds)}(\\dstrats)),\\]\nwhere ``$=$'' holds if $\\dstrati$ claims above $\\thresholdi$.\n\\end{enumerate}\n\\end{thm}\n\n\n\\begin{thm}[\\citealp{KWW16}]\\label{thm:fpa-pandora-NE-BNE}\nConsider $\\DA(\\dists, \\costs)$ and $\\FPA(\\dists^{\\thresholds})$, where $\\thresholds$ is the vector of indices of $(\\dists, \\costs)$.\nIf $\\fstrats$ is a BNE in $\\FPA(\\dists^{\\thresholds})$, then $\\lambda^{\\thresholds}(\\fstrats)$ is an NE in $\\DA(\\dists, \\costs)$. Conversely, if $\\dstrats$ is an NE in $\\DA(\\dists, \\costs)$, then $\\mu^{(\\dists, \\thresholds)}(\\dstrats)$ is an NE in $\\FPA(\\dists^{\\thresholds})$. \n\\end{thm}\n\n\n\nFinally, we review a welfare guarantee shown by \\citeauthor{KWW16}. \nCombining Theorem~\\ref{thm:fpa-pandora-NE-BNE} with a known bound on the Price of Anarchy for the first price auction \\citep{ST13}, \\citeauthor{KWW16} concluded that the welfare of an NE in a Dutch auction with search costs is at least a $(1-1\/e)$-fraction of the maximum expected welfare.\n\nFor our purposes, we generalize their conclusion to $\\epsilon$-NE. 
\nFormally, let ${\\mathbb{A}}_i(\\dstrats)$ be an indicator variable for whether bidder~$i$ receives the item, and let ${\\mathbb{I}}_i(\\dstrats)$ be an indicator variable for whether bidder~$i$ inspects her value.\nThe social welfare of a strategy profile $\\dstrats$ is \n\\begin{equation}\n \\SW^{\\DA(\\dists, \\costs)}(\\dstrats) = \\Ex{\\sum_{i=1}^n \\left({\\mathbb{A}}_i(\\dstrats)\\vali - {\\mathbb{I}}_i(\\dstrats)\\costi \\right)},\n\\end{equation}\nwhere the randomness is over $\\vals\\sim\\dists$ and the randomness of mixed strategies.\nLet $\\OPT^{(\\dists, \\costs)}$ be the maximum expected welfare, obtained by the Pandora's Box algorithm (Theorem~\\ref{thm:optimal-pandora}) on distributions $\\disti[1], \\ldots, \\disti[n]$ and costs $\\costi[1], \\ldots, \\costi[n]$.\n\\begin{restatable}[Corollary 1 and Theorem 1 of \\citealp{KWW16}]{lemma}{optimalwelfare} \\label{lem:optimal-welfare}\nLet $\\thresholdi$ be the index of $(\\disti, \\costi)$ and $\\kappa_i=\\min\\{\\vali, \\thresholdi\\}$; then \n$\\OPT^{(\\dists, \\costs)} = \\Ex{\\max_{i\\in[n]}\\kappa_i}$. \n\\end{restatable}\n\\begin{restatable}[A slight generalization of \\citealp{KWW16}]{thm}{epsNEpoa}\n\\label{thm:epsNEpoa}\nIf $\\dstrats$ is an $\\epsilon$-NE in $\\DA(\\dists, \\costs)$, then $\\SW^{\\DA(\\dists, \\costs)}(\\dstrats) \\ge (1-\\frac{1}{e})\\OPT^{(\\dists, \\costs)} - n\\epsilon$. \n\\end{restatable}\n\\noindent The proof of Theorem~\\ref{thm:epsNEpoa} follows the smoothness framework \\citep{ST13} and is given in Appendix~\\ref{sec:proof-thm:epsNEpoa}. 
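As a numeric sanity check of Lemma~\ref{lem:optimal-welfare}, the Python sketch below (our own illustration) runs the classic Weitzman index policy, which we take as the Pandora's Box algorithm: open boxes in decreasing order of the index $\thresholdi$, and stop once the best value seen reaches the next index. For $\disti = \mathrm{Uniform}[0,1]$ the index has the closed form $\thresholdi = 1 - \sqrt{2\costi}$, and the policy's average welfare should match a Monte Carlo estimate of $\Ex{\max_{i\in[n]}\kappa_i}$.

```python
import math, random

def weitzman_welfare(sigmas, costs, values):
    """Welfare of Weitzman's index policy on one realization of values:
    open boxes in decreasing index order; stop when the best value seen
    so far beats the next index; welfare = best value - costs paid."""
    order = sorted(range(len(sigmas)), key=lambda i: -sigmas[i])
    best, spent = 0.0, 0.0
    for i in order:
        if best >= sigmas[i]:
            break
        spent += costs[i]
        best = max(best, values[i])
    return best - spent

random.seed(0)
costs = [0.02, 0.08]                            # search costs, H = 1
sigmas = [1 - math.sqrt(2 * c) for c in costs]  # indices for Uniform[0,1]

n_runs = 200_000
alg, bound = 0.0, 0.0
for _ in range(n_runs):
    values = [random.random() for _ in costs]
    alg += weitzman_welfare(sigmas, costs, values)
    bound += max(min(v, s) for v, s in zip(values, sigmas))

print(abs(alg / n_runs - bound / n_runs))  # should be small
```

The two averages agree only in expectation, not realization by realization, which is exactly the content of the amortization argument behind the lemma.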
\n\n\\subsubsection{Transformation with Samples}\n\\label{sec:sample-KWW16}\n\nWe are now ready to present our learning results on auctions with search costs.\nIn \\citet{KWW16}, the utility- and equilibrium-preserving mappings $\\lambda^{\\thresholds}$ and $\\mu^{(\\dists, \\thresholds)}$ depend on the value distributions.\nWe examine the number of samples needed to compute approximations of these mappings, when the value distributions are unknown.\nWe find that, when the search costs are known, $\\tilde O(1 \/ \\epsilon^2)$ value samples \nsuffice to construct mappings between strategies that approximately preserve utility; with $\\tilde O(n \/ \\epsilon^2)$ samples, any equilibrium of the first price auction without search costs on a transformed empirical distribution can be mapped to an approximate equilibrium of the descending auction on the true distribution.\nBy Theorem~\\ref{thm:epsNEpoa}, such an approximate equilibrium in the descending auction must obtain a $(1 - 1\/e)$-approximation to the optimal welfare. \nTo make use of this result, a market designer could collect $\\tilde O(n \/ \\epsilon^2)$ value samples to compute an approximate Nash equilibrium in this FPA, which then maps to an approximate Nash equilibrium in the Dutch auction. The latter can serve as bidding guidance for the participants and guarantees approximate efficiency of the market.\n\n\n\nWhen the value distributions $\\disti$ are unknown (but the costs $\\costi$ are known), the mapping~$\\lambda^{\\thresholds}$ \nis also unknown, \nbecause each index~$\\thresholdi$ is determined by the distribution~$\\disti$. \nInstead, we estimate an index~$\\hat{\\threshold}_i$ from samples and use the corresponding mapping~$\\lambda^{\\hat{\\thresholds}}$. \n\n\\begin{defn}\\label{defn:fpa-pandora-empirical}\nPartition the samples $\\samples$ into two sets, $\\samples^A$ and $\\samples^B$, each of size $m\/2$. 
\nDenote the empirical product distributions on $\\samples^A$ and $\\samples^B$ as $\\empDists^A$ and $\\empDists$, respectively. \nThe \\emph{empirical indices} are the indices $\\hat{\\thresholds}$ for $(\\empDists^A, \\costs)$; namely, $\\hat{\\threshold}_i$ is the unique solution to\n$\\Ex[\\vali\\sim\\empDisti^A]{\\max\\{\\vali - \\hat{\\threshold}_i, 0\\}} = \\costi$.\nThe \\emph{empirical counterpart} of $\\DA(\\dists, \\costs)$ is $\\FPA(\\empDists^{\\hat{\\thresholds}})$. \nThe \\emph{empirical mappings} are $\\lambda^{\\hat{\\thresholds}}$ and $\\mu^{(\\dists, \\hat{\\thresholds})}$, computed as in Definition~\\ref{def:strategy-mappings}.\n\\end{defn}\n\nNote that $\\mu^{(\\dists, \\hat{\\thresholds})}$ depends on distributions while $\\lambda^{\\hat{\\thresholds}}$ does not. The following theorem, analogous to Theorem~\\ref{thm:DA_FPA_transform}, shows that the empirical mappings $\\lambda^{\\hat{\\thresholds}}$ and $\\mu^{(\\dists, \\hat{\\thresholds})}$ approximately preserve the utilities with high probability. 
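Computing an empirical index is straightforward: the map $\sigma \mapsto \frac{1}{m}\sum_j \max\{v_j - \sigma, 0\}$ is continuous and non-increasing, so $\hat{\threshold}_i$ can be found by bisection on $[0, H]$. A minimal Python sketch (our own illustration, not the paper's procedure):

```python
import random

def empirical_index(samples, cost, H, tol=1e-9):
    """Find sigma with (1/m) * sum_j max(v_j - sigma, 0) = cost by bisection.

    The left-hand side is continuous and non-increasing in sigma: it equals
    the empirical mean at sigma = 0 and equals 0 at sigma = H, so a root
    exists whenever 0 <= cost <= mean(samples).
    """
    def g(sigma):
        return sum(max(v - sigma, 0.0) for v in samples) / len(samples)

    lo, hi = 0.0, H  # invariant: g(lo) >= cost >= g(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Uniform[0,1] samples; the population index solves (1 - sigma)^2 / 2 = cost,
# i.e. sigma = 1 - sqrt(2 * cost) = 0.8 for cost = 0.02.
random.seed(1)
samples = [random.random() for _ in range(20_000)]
sigma_hat = empirical_index(samples, cost=0.02, H=1.0)
```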
\n\\begin{thm}\\label{thm:fpa-pandora-utility-intermediate}\nFor any $\\epsilon, \\delta > 0$, there is \n$M = O \\left(\\frac{H^2}{\\epsilon^2} \\left[\\log\\left(\\frac{H}{\\epsilon} \\right) + \\log\\left(\\frac{n}{\\delta}\\right)\\right]\\right)$, \nsuch that for all $m > M$, with probability at least $1-\\delta$ over the random draw of $\\samples^A$, \n\\begin{enumerate}\n\\item For any monotone mixed strategy profile $\\fstrats$ in $\\FPA(\\dists^{\\hat{\\thresholds}})$, \nfor each bidder~$i$, \n\\[ \\left| \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) - \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) \\right| \\le \\epsilon.\\]\n\\item For any mixed strategy profile $\\dstrats$ in $\\DA(\\dists, \\costs)$, for each bidder~$i$,\n\\[ \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)) + \\epsilon.\\]\nIf $\\dstrati$ claims above $\\hat{\\threshold}_i$, then we also have $\\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \\ge \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)) - \\epsilon$. \n\\end{enumerate}\n\n\\end{thm}\n\\noindent \nBefore proving Theorem~\\ref{thm:fpa-pandora-utility-intermediate}, \nwe first derive a few important consequences.\n\n\\begin{corollary}\\label{lem:fpa-pandora-NE-intermediate}\nFor any $\\epsilon, \\delta > 0$, and $m > M$ as in the condition of\nTheorem~\\ref{thm:fpa-pandora-utility-intermediate}, with probability at least $1-\\delta$, \n\\begin{enumerate}\n\\item For any monotone strategy profile $\\fstrats$,\nif $\\fstrats$ is an $\\epsilon'$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$, then $\\lambda^{\\hat{\\thresholds}}(\\fstrats)$ is an $(\\epsilon'+2\\epsilon)$-NE in $\\DA(\\dists, \\costs)$. 
\n\\item Conversely, for any strategy profile $\\dstrats$ that claims above $\\hat{\\thresholds}$, if $\\dstrats$ is an $\\epsilon'$-NE in $\\DA(\\dists, \\costs)$, then $\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)$ is an $(\\epsilon'+2\\epsilon)$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$. \n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nWe prove the two items in turn.\n\\begin{enumerate}\n\\item Let $\\fstrats=(\\fstrati, \\fstratsmi)$ be an $\\epsilon'$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$ satisfying the condition in the statement. For any strategy $\\dstrati$, by Theorem~\\ref{thm:fpa-pandora-utility-intermediate} item 2, \n\\begin{align*}\n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\lambda^{\\hat{\\thresholds}}(\\fstratsmi))\n \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrati), \\mu^{(\\dists, \\hat{\\thresholds})}(\\lambda^{\\hat{\\thresholds}}(\\fstratsmi))) + \\epsilon. & \n\\end{align*}\nSince $\\fstratsmi$ is monotone, by Claim~\\ref{claim:strategy-equivalence} item 2, we have $\\mu^{(\\dists, \\hat{\\thresholds})}(\\lambda^{\\hat{\\thresholds}}(\\fstratsmi)) = \\fstratsmi$. Thus, \n\\begin{align*}\n \\utili^{\\DA(\\dists, \\costs)}(\\dstrati, \\lambda^{\\hat{\\thresholds}}(\\fstratsmi))\n & \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrati), \\fstratsmi) + \\epsilon &\\\\\n \\shortintertext{\\hfill $\\fstrats$ is an $\\epsilon'$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$}\n & \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) + \\epsilon' + \\epsilon & \\\\\n \\shortintertext{\\hfill Theorem \\ref{thm:fpa-pandora-utility-intermediate} item 1}\n & \\le \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) + \\epsilon' + 2\\epsilon. 
& \n\\end{align*}\n\n\\item For any strategy $\\fstrati$, by Proposition~\\ref{prop:monotone}, there exists some monotone strategy $\\fstrati'$, such that \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati, \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati', \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)). \n\\end{align*}\nThen by Theorem~\\ref{thm:fpa-pandora-utility-intermediate} item 1, \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati', \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) & \\le \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrati'), \\lambda^{\\hat{\\thresholds}}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi))) + \\epsilon.\n\\end{align*}\nSince $\\dstratsmi$ claims above $\\hat{\\thresholds}_{-i}$, by Claim~\\ref{claim:strategy-equivalence} item 1, we have $\\lambda^{\\hat{\\thresholds}}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) = \\dstratsmi$. Thus \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrati, \\mu^{(\\dists, \\hat{\\thresholds})}(\\dstratsmi)) & \\le \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrati'), \\dstratsmi) + \\epsilon & \\\\\n \\shortintertext{\\hfill $\\dstrats$ is an $\\epsilon'$-NE in $\\DA(\\dists, \\costs)$}\n & \\le \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) + \\epsilon' + \\epsilon & \\\\\n \\shortintertext{\\hfill Theorem \\ref{thm:fpa-pandora-utility-intermediate} item 2}\n & \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats)) + \\epsilon' + 2\\epsilon. 
& \n\\end{align*}\n\\end{enumerate}\n\\end{proof}\n\n\nAs a consequence of Corollary~\\ref{lem:fpa-pandora-NE-intermediate}, Corollary~\\ref{cor:find-BNE} and Theorem~\\ref{thm:epsNEpoa}, any approximate BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ is transformed by $\\lambda^{\\hat{\\thresholds}}$ to a nearly efficient approximate NE in $\\DA(\\dists, \\costs)$, as formalized by the following theorem.\n\\begin{restatable}{thm}{fpapandorane}\n\\label{thm:fpa-pandora-NE-v2}\nFor any $\\epsilon, \\epsilon', \\delta > 0$, there is $M = O \\left(\\frac{H^2}{\\epsilon^2} \\left[n\\log n\\log \\left(\\frac{H}{\\epsilon} \\right) + \\log\\left(\\frac{n}{\\delta}\\right)\\right]\\right)$, such that for all $m > M$, with probability at least $1-\\delta$ over random draws of samples~$\\samples$, we have: for\nany monotone strategy profile $\\fstrats$ that is an $\\epsilon'$-BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$, $\\lambda^{\\hat{\\thresholds}}(\\fstrats)$\nis an $(\\epsilon'+4\\epsilon)$-NE in $\\DA(\\dists, \\costs)$; \nmoreover, $\\SW^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) \\ge (1-\\frac{1}{e})\\OPT^{(\\dists, \\costs)} - n(\\epsilon'+4\\epsilon)$.\n\\end{restatable}\n\\begin{proof}\nWe first apply Corollary~\\ref{cor:find-BNE} to the distributions $\\dists^{\\hat{\\thresholds}}$. \nNote that $\\empDists^{\\hat{\\thresholds}}$ is an empirical product distribution for $\\dists^{\\hat{\\thresholds}}$; this is because $\\empDists$ consists of the samples $\\samples^B$, whereas $\\hat{\\thresholds}$ is determined by the samples in $\\samples^A$, and $\\samples^A$ and~$\\samples^B$ are disjoint. Thus, with probability at least $1-\\delta\/2$ over the random draw of $\\samples^B$, any monotone strategy profile $\\fstrats$ that is an $\\epsilon'$-BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ is an $(\\epsilon'+2\\epsilon)$-BNE in $\\FPA(\\dists^{\\hat{\\thresholds}})$. 
An $(\\epsilon'+2\\epsilon)$-BNE must be an $(\\epsilon'+2\\epsilon)$-NE in $\\FPA(\\dists^{\\hat{\\thresholds}})$, so by Corollary~\\ref{lem:fpa-pandora-NE-intermediate}, with probability at least $1-\\delta\/2$ over the random draw of $\\samples^A$, $\\lambda^{\\hat{\\thresholds}}(\\fstrats)$ is an $(\\epsilon'+4\\epsilon)$-NE in $\\DA(\\dists, \\costs)$.\nThe welfare guarantee follows from Theorem~\\ref{thm:epsNEpoa}. \n\\end{proof}\nTheorem \\ref{thm:fpa-pandora-NE-v2} does not include the reverse direction, i.e., from an $\\epsilon'$-NE in $\\DA(\\dists, \\costs)$ to an $(\\epsilon'+4\\epsilon)$-BNE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$ \n(cf.\\@ Theorem~\\ref{thm:fpa-pandora-NE-BNE}). \nThis is for two reasons:\n(1) Such a transformation yields an $(\\epsilon'+4\\epsilon)$-NE in $\\FPA(\\empDists^{\\hat{\\thresholds}})$, which is not necessarily an $(\\epsilon'+4\\epsilon)$-BNE.\n(2) Unlike interim utility, ex ante utility cannot be learned from samples directly; in other words, $\\utili^{\\FPA(\\empDists^{\\hat{\\thresholds}})}(\\fstrats)$ does not necessarily approximate $\\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats)$ even if $\\fstrats$ is monotone. This is because computing the ex ante utility requires taking an expectation over bidder~$i$'s own value, whereas computing the interim utility does not. \n\n\\paragraph{Proof of Theorem~\\ref{thm:fpa-pandora-utility-intermediate}.}\nThe main idea is as follows: For item 1, we need to show that the utility of a strategy profile $\\fstrats$ in $\\FPA(\\dists^{\\hat{\\thresholds}})$ approximates the utility of its image $\\dstrats=\\lambda^{\\hat{\\thresholds}}(\\fstrats)$ in $\\DA(\\dists, \\costs)$. We wish to use Theorem~\\ref{thm:DA_FPA_transform} to do so, but it cannot be applied directly because $\\hat{\\thresholds}$ is not the vector of indices of $(\\dists, \\costs)$. 
Instead, we construct a set of ``empirical costs''~$\\hat{\\costs}$ such that $\\hat{\\thresholds}$ becomes the vector of indices of $(\\dists, \\hat{\\costs})$. Then Theorem~\\ref{thm:DA_FPA_transform} can be used to show that $\\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats)=\\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats)$. With an additional lemma (Lemma~\\ref{lem:costs_close}), which shows that $\\hat{\\costs}$ approximates $\\costs$ up to an $\\epsilon$-error, we are able to establish the following chain of approximate equalities:\n\\[\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) = \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats) \\stackrel{\\epsilon}{\\approx} \\utili^{\\DA(\\dists, \\costs)}(\\dstrats).\n\\]\nThe proof for item 2 is similar. \n\nFormally, define $\\hat{\\costs}=(\\hat{\\cost}_i)_{i\\in[n]}$, where\n\\begin{equation}\n\\hat{\\cost}_i \\coloneqq \\Ex[\\vali\\sim\\disti]{\\max\\{\\vali - \\hat{\\threshold}_i, 0\\}}.\n\\end{equation}\nNote that $\\hat{\\cost}_i$ is determined by the samples~$\\samples^A$, since the empirical index $\\hat{\\threshold}_i$ is computed from $\\samples^A$. 
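For intuition, the relation between $\hat{\cost}_i$ and $\costi$ is easy to test numerically: on an empirical distribution the map $\sigma\mapsto\frac{1}{m}\sum_j \max\{v_j-\sigma, 0\}$ is piecewise linear, so $\hat{\threshold}_i$ can be solved exactly by scanning the sorted samples, and for $\disti = \mathrm{Uniform}[0,1]$ the population quantity $\hat{\cost}_i$ has the closed form $(1-\hat{\threshold}_i)^2/2$. The Python sketch below is our own illustration, not the paper's procedure:

```python
import random

def empirical_index_exact(samples, cost):
    """Solve (1/m) * sum_j max(v_j - sigma, 0) = cost exactly.

    On the empirical distribution the left-hand side is piecewise linear in
    sigma; if exactly the k largest samples lie above sigma, the equation
    becomes (top_sum - k * sigma) / m = cost, so we scan k = 1, 2, ... and
    keep the root that falls inside its own segment.
    """
    vs = sorted(samples, reverse=True)
    m = len(vs)
    top_sum = 0.0
    for k in range(1, m + 1):
        top_sum += vs[k - 1]
        sigma = (top_sum - m * cost) / k
        lower = vs[k] if k < m else float("-inf")
        if lower <= sigma <= vs[k - 1]:
            return sigma
    raise ValueError("cost exceeds the empirical mean")

random.seed(2)
c = 0.02                            # true search cost; F = Uniform[0, 1]
samples = [random.random() for _ in range(10_000)]
sigma_hat = empirical_index_exact(samples, c)
c_hat = (1 - sigma_hat) ** 2 / 2    # closed form of E[max(v - sigma_hat, 0)]
print(abs(c_hat - c))               # small, as Lemma lem:costs_close predicts
```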
\n\\begin{restatable}{lemma}{costsclose}\n\\label{lem:costs_close}\nThere is $M = O\\left(\\frac{H^2}{\\epsilon^2}\\left[\\log\\frac{H}{\\epsilon} + \\log\\frac{n}{\\delta}\\right]\\right)$, such that if $m\/2 > M$, then with probability at least $1-\\delta$ over the random draw of $\\samples^A$, for each $i\\in[n]$, $|\\costi - \\hat{\\cost}_i|\\le\\epsilon$.\n\\end{restatable}\n\\begin{proof}\nThe main idea of the proof is to show that the class $\\mathcal{H}_i=\\{h^{\\threshold}\\;\\mid\\;\\threshold\\in[-H, H]\\}$, where $h^{\\threshold}(x)=\\max\\{x-\\threshold, 0\\}$, has pseudo-dimension $\\Pdim(\\mathcal{H}_i) = O(1)$ and thus uniformly converges with $O\\left(\\frac{H^2}{\\epsilon^2}\\left[\\log\\frac{H}{\\epsilon} + \\log\\frac{1}{\\delta}\\right]\\right)$ samples.\n\nFormally, consider the pseudo-dimension $d$ of the class $\\mathcal{H}_i=\\{h^{\\threshold}\\;\\mid\\;\\threshold\\in[-H, H]\\}$ where $h^\\threshold(x)\\coloneqq\\max\\{x-\\threshold, 0\\}$ for $x\\in[0, H]$ (thus $h^\\threshold(x)\\in[0, 2H]$). We claim that $d=O(1)$. To see this, fix any $d$ samples $(x_1, x_2, \\ldots, x_d)$ and any witnesses $(\\Pwitnessi[1], \\Pwitnessi[2], \\ldots, \\Pwitnessi[d])$; we bound the number of distinct labelings that can be given by $\\mathcal{H}_i$ to these samples. Each sample $x_j$ induces a partition of the parameter space (the space of $\\threshold$) $[-H, H]$ into two intervals $[-H, x_j]$ and $(x_j, H]$, such that for any $\\threshold\\le x_j$, $h^{\\threshold}(x_j) = x_j-\\threshold$, and for $\\threshold > x_j$, $h^{\\threshold}(x_j)=0$. All $d$ samples partition $[-H, H]$ into (at most) $d+1$ consecutive intervals, $I_1, \\ldots, I_{d+1}$, such that within each interval $I_k$, $h^{\\threshold}(x_j)$ is either $x_j-\\threshold$ for all $\\threshold\\in I_k$ or $0$ for all $\\threshold\\in I_k$, for each $j\\in[d]$. 
We further divide each $I_k$ using the witnesses $\\Pwitnessi[j]$: for each $j\\in[d]$, if $h^{\\threshold}(x_j) = x_j-\\threshold$ for $\\threshold\\in I_k$, then we cut $I_k$ at the point $\\threshold = x_j - \\Pwitnessi[j]$; in this way we cut each $I_k$ into at most $d+1$ sub-intervals. Within each sub-interval $I'\\subseteq I_k$, the labeling of the $d$ samples given by all $h^{\\threshold}$ ($\\threshold\\in I'$) is the same. Since there are at most $(d+1)^2$ sub-intervals in total, there are at most $(d+1)^2$ distinct labelings. \nTo pseudo-shatter $d$ samples, we must have $2^d \\leq (d+1)^2$, which gives $d=O(1)$. \n\nBy the definition of $\\hat{\\threshold}_i$, we have \n\\[\\costi=\\Ex[\\vali\\sim \\empDisti^A]{\\max\\{\\vali - \\hat{\\threshold}_i, 0\\}} = \\Ex[\\vali\\sim \\empDisti^A]{h^{\\hat{\\threshold}_i}(\\vali)}, \\]\nand $\\hat{\\threshold}_i\\in [-H, H]$. Also note that $\\hat{\\cost}_i = \\Ex[\\vali\\sim \\disti]{h^{\\hat{\\threshold}_i}(\\vali)}$. \nThus the conclusion $|\\costi-\\hat{\\cost}_i| \\le \\epsilon$ follows from Theorem~\\ref{thm:pseudo-dimension} and a union bound over $i\\in[n]$.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:DA_utility_close}\nSuppose $|\\costi-\\hat{\\cost}_i|\\le\\epsilon$; then for any strategies~$\\dstrats$, \n\\begin{equation*}\n\t\\left| \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) - \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats)\\right|\\le\\epsilon.\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nCouple the realizations of values (and threshold prices and bids if the strategies are randomized) in $\\DA(\\dists, \\costs)$ and $\\DA(\\dists, \\hat{\\costs})$. When bidders use the same strategies $\\dstrats$ in the two auctions $\\DA(\\dists, \\costs)$ and $\\DA(\\dists, \\hat{\\costs})$, bidder~$i$ receives the same allocation and pays the same price.
\nThe only difference between bidder~$i$'s utilities in these two auctions is the difference between the search costs she pays, which is upper-bounded by $|\\costi-\\hat{\\cost}_i|\\le \\epsilon$.\n\\end{proof}\n\nNow we finish the proof of Theorem~\\ref{thm:fpa-pandora-utility-intermediate}. \n\\begin{proof}[Proof of Theorem~\\ref{thm:fpa-pandora-utility-intermediate}]\nFirst consider item 1. We use $a\\stackrel{\\epsilon}{\\approx}b$ to denote $|a-b|\\le\\epsilon$. Given any monotone strategies $\\fstrats$ for $\\FPA(\\dists^{\\hat{\\thresholds}})$, \n\\begin{align*}\n \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\fstrats) \n ={}& \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) && \\text{Theorem \\ref{thm:DA_FPA_transform} item 1 } \\\\\n \\stackrel{\\epsilon}{\\approx}{}& \\utili^{\\DA(\\dists, \\costs)}(\\lambda^{\\hat{\\thresholds}}(\\fstrats)) &&\\text{Lemma \\ref{lem:DA_utility_close}}. \n\\end{align*}\n\nThen for item 2, given any strategies $\\dstrats$ for $\\DA(\\dists, \\costs)$, by Lemma \\ref{lem:DA_utility_close}, \n\\begin{align*}\n \\utili^{\\DA(\\dists, \\costs)}(\\dstrats) \n \\stackrel{\\epsilon}{\\approx} \\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats).\n\\end{align*}\nBy Theorem \\ref{thm:DA_FPA_transform} item 2, we have $\\utili^{\\DA(\\dists, \\hat{\\costs})}(\\dstrats) \\le \\utili^{\\FPA(\\dists^{\\hat{\\thresholds}})}(\\mu^{(\\dists, \\hat{\\thresholds})}(\\dstrats))$ where ``$=$'' holds if $\\dstrati$ claims above $\\hat{\\threshold}_i$, which concludes the proof. 
\n\\end{proof}\n\n\\section{Proof of Theorem~\\ref{thm:util-learn-upper-bound}}\n\n\\subsubsection{Pseudo-dimension and the Proof of Theorem \\ref{thm:util-learn-upper-bound}}\n\\label{sec:pseudodim}\n\nPseudo-dimension is a well-known tool for upper bounding sample complexity \\citep[see, e.g.][]{anthony2009neural}, and has been applied to learning in mechanism design \\citep{MR15, MR16, BSV18, BSV19}.\n\n\\begin{defn}\n\t\\label{def:pseudo-dimension}\n\tGiven a class $\\mathcal{H}$ of real-valued functions on input space $\\mathcal{X}$, a set of inputs $\\Pinputi[1], \\ldots, \\Pinputi[m]$ is said to be \\emph{pseudo-shattered} if there exist \\emph{witnesses} $\\Pwitnessi[1], \\ldots, \\Pwitnessi[m] \\in \\mathbb R$ such that for any label vector $\\Plabels\\in\\{1, -1\\}^m$, there exists $h_{\\Plabels}\\in \\mathcal{H}$ such that $\\sgn(h_{\\Plabels}(\\Pinputi) - \\Pwitnessi) = \\Plabeli$ for each $i=1, \\ldots, m$, where $\\sgn(y)=1$ if $y>0$ and $-1$ if $y<0$. The \\emph{pseudo-dimension} of $\\mathcal{H}$, $\\Pdim(\\mathcal{H})$, is the size of the largest set of inputs that can be pseudo-shattered by $\\mathcal{H}$. 
\n\\end{defn}\n\n\\begin{defn}\n\t\\label{def:uniform-convergence}\n\tFor $\\epsilon>0, \\delta \\in (0, 1)$, a class of functions $\\mathcal{H}: \\mathcal{X} \\to \\mathbb R$ is \\emph{$(\\epsilon, \\delta)$-uniformly convergent with sample complexity $M$} if\n\t\tfor any $m \\geq M$, \n\tfor any distribution $\\dist$ on~$\\mathcal{X}$, \n\tif $\\sample^1, \\ldots, \\sample^{m}$ are i.i.d.\\@ samples from~$\\dist$, \n\twith probability at least $1 - \\delta$, for every $h \\in \\mathcal{H}$,\n\t$\n\t\t\\left| \\Ex[\\Pinput \\sim \\dist]{h(\\Pinput)} - \\frac 1 {m} \\sum_{j = 1}^{m} h(\\sample^j) \\right| < \\epsilon.\n\t$\n\\end{defn}\n\n\n\\begin{thm}[See \\citealp{anthony2009neural}]\n\t\\label{thm:pseudo-dimension}\n\tLet $\\mathcal{H}$ be a class of functions with range $[0, H]$ and pseudo-dimension $d=\\Pdim(\\mathcal{H})$. \n\tFor any $\\epsilon>0$, $\\delta\\in(0, 1)$, \n\t$\\mathcal{H}$ is $(\\epsilon, \\delta)$-uniformly convergent with sample complexity $O\\left( (\\frac{H}{\\epsilon})^2 [d\\log(\\frac{H}{\\epsilon}) + \\log(\\frac{1}{\\delta})] \\right)$.\n\\end{thm}\n\nWe show Theorem~\\ref{thm:util-learn-upper-bound} by treating the utilities on monotone bidding strategies as a class of functions, whose uniform convergence implies that $\\emp$ learns the interim utilities.\n\n\nFor each bidder $i$, let $h^{\\vali, \\bids(\\cdot)}$ be the function that maps the opponents' values to bidder~$i$'s ex post utility, that is, \n\\[h^{\\vali, \\bids(\\cdot)}(\\valsmi) = \\expostUi(\\vali, \\bidi(\\vali), \\bidsmi(\\valsmi)).\\]\nLet $\\mathcal{H}_i$ be the set of all such functions corresponding to the set of monotone strategies, \n\\[\\mathcal{H}_i = \\left\\{h^{\\vali, \\bids(\\cdot) }(\\cdot) \\;\\mid\\; \\vali \\in \\typespacei,~~ \\bids(\\cdot) \\text{ is monotone} \\right\\}. 
\\]\n\nBy \\eqref{eq:interim-util}, the expectation of $h^{\\vali, \\bids(\\cdot)}(\\cdot)$ over~$\\distsmi$ is the interim utility of bidder $i$: \n\\[\\Ex[\\valsmi\\sim \\distsmi]{h^{\\vali, \\bids(\\cdot)}(\\valsmi)} = \\utili(\\vali, \\bidi(\\vali), \\bidsmi(\\cdot)). \\]\nBy Definition~\\ref{def:emp}, on samples $\\samples = (\\samples^1, \\ldots, \\samples^{m})$, \n\\[\\emp(\\samples, i, \\vali, \\bids(\\cdot)) = \\frac 1 {m} \\sum_{j = 1}^{m} h^{\\vali, \\bids(\\cdot)}(\\samples^j_{-i}).\n\\]\n\nThus, \n\\begin{align}\n \\left| \\emp(\\samples, i, \\vali, \\bids(\\cdot)) - \\utili(\\vali, \\bidi(\\vali), \\bidsmi(\\cdot)) \\right|\n =\n \\left| \\Ex[\\valsmi]{h^{\\vali, \\bids(\\cdot)}(\\valsmi)} - \\frac 1 {m} \\sum_{j = 1}^{m} h^{\\vali, \\bids(\\cdot)}(\\samples^j_{-i})\\right|.\n\t\\label{eq:emp-func}\n\\end{align}\n\nThe right hand side of~\\eqref{eq:emp-func} is the difference between the expectation of $h^{\\vali, \\bids(\\cdot)}$ on the distribution $\\distsmi$ and that on the empirical distribution with samples drawn from $\\distsmi$.\nNow by Theorem~\\ref{thm:pseudo-dimension},\nto bound the number of samples needed by $\\emp$ to $(\\epsilon, \\delta)$-learn the utilities over monotone strategies, \nit suffices to bound the pseudo-dimension of~$\\mathcal{H}_i$.\nWith the following key lemma, the proof is completed by observing that the range of each $h^{\\vali, \\bids(\\cdot)}$ is within $[-H, H]$ and by taking a union bound over $i \\in [n]$.\n\n\n\n\\begin{restatable}{lemma}{pdimutil}\n\\label{lem:pseudo-dimension-utility-class}\nIf tie breaking is random-allocation or no-allocation, then $\\Pdim(\\mathcal{H}_i) = O(n \\log n)$.\n\\end{restatable}\n\nThe proof of Lemma~\\ref{lem:pseudo-dimension-utility-class} follows a powerful framework introduced by \\citet{MR16} and \\citet{BSV18} for bounding the pseudo-dimension of a class $\\mathcal{H}$ of functions: given samples that are to be pseudo-shattered and for any (fixed) witnesses, one 
classifies the functions in~$\\mathcal{H}$ into categories, so that functions in the same category must output the same label on all the samples; by counting and bounding the number of such categories, one can bound the number of shattered samples.\nOur proof follows this strategy. To bound the number of categories, we make use of monotonicity of bidding functions, which is specific to our problem.\n\nWe give a proof below for the simplest case with two bidders and no-allocation tie-breaking rule, and relegate the full proof to Appendix~\\ref{sec:proof-lem:pseudo-dimension-utility-class}. \n\n\\begin{proof}[Proof of Lemma~\\ref{lem:pseudo-dimension-utility-class} for a special case.] \nConsider $n=2$ and no-allocation tie-breaking rule. Fix an arbitrary set of $m$ samples $\\samplesmi^1, \\ldots, \\samplesmi^{m}$. \nConsider any set of potential witnesses $(\\Pwitnessi[1], \\Pwitnessi[2], \\ldots, \\Pwitnessi[m])$. \nEach hypothesis in $\\mathcal{H}_i$ then gives every sample~$\\samplesmi^j$ a label according to the witness~$\\Pwitnessi[j]$, giving rise to a label vector in $\\{-1, +1\\}^m$.\nWe show that $\\mathcal{H}_i$ can be divided into $m+1$ sub-classes $\\mathcal{H}_i^0, \\ldots, \\mathcal{H}_i^{m}$, such that each sub-class $\\mathcal{H}_i^k$ generates at most $m+1$ different label vectors. \nThus $\\mathcal{H}_i$ generates at most $(m+1)^2$ label vectors in total. \nTo pseudo-shatter $\\samplesmi^1, \\ldots, \\samplesmi^m$, we need $2^m$ different label vectors; therefore $(m+1)^2\\ge 2^m$, which implies $m = O(1)$. \n\nWe now show how $\\mathcal{H}_i$ is thus divided.\nNote that, for $n = 2$, each $\\samplesmi^k$ is just a real number and we can sort them; for ease of notation let \n$\\Pinput^k$ denote $\\samplei[-i]^{k}$ for $k=1, \\ldots, m$ \nand suppose $\\Pinput^1\\le \\Pinput^2\\le \\cdots \\le \\Pinput^m$. 
\nWe put hypothesis $h^{\\vali, \\bids(\\cdot)}$ into the $k^{\\text{th}}$ sub-class, $\\mathcal{H}_i^k$, if\n\\[ \\bidi[-i](\\Pinput^k) < \\bidi(\\vali) \\ \\text{ and }\\ \\bidi[-i](\\Pinput^{k+1}) \\ge \\bidi(\\vali).\\]\nThis is well defined because, by assumption, $\\bidi[-i](\\Pinput)$ is monotone non-decreasing in $\\Pinput$.\n\nWe now show that each sub-class $\\mathcal{H}_i^k$ gives rise to at most $m+1$ label vectors.\nFor any $h^{\\vali, \\bids(\\cdot)}\\in \\mathcal{H}_i^k$, we have $h^{\\vali, \\bids(\\cdot)}(\\Pinput^{j}) = \\vali - \\bidi(\\vali)$ for any $j \\le k$ (because bidder~$i$'s bid $\\bidi(\\vali)$ is higher than the opponent's),\nand $h^{\\vali, \\bids(\\cdot)}(\\Pinput^{j}) = 0$ for any $j > k$.\nOn the first $k$ samples $\\Pinput^{1}, \\ldots, \\Pinput^{k}$, \nany fixed hypothesis $h^{\\vali, \\bids(\\cdot)}(\\cdot) \\in \n\\mathcal{H}_i^k$ \noutputs a constant $\\vali - \\bidi(\\vali)$;\nas one varies this constant and compares it with the $k$ witnesses $\\Pwitnessi[1], \\ldots, \\Pwitnessi[k]$, there are only $k+1$ possible results from the comparisons.\nOn the remaining $m - k$ samples, only one pattern is possible, since all hypotheses in $\\mathcal{H}_i^k$ output $0$ on these samples.\nTherefore, at most $k+1 \\le m+1$ label vectors can be generated by $\\mathcal{H}_i^k$. \n\\end{proof}\n\n\n\\subsubsection{Learning on Empirical Product Distributions and Equilibrium Preservation}\n\\label{sec:empp}\n\nThe empirical distribution estimator approximates interim utilities with high probability, but this does not immediately imply that one may take the first price auction on the empirical distribution as a close approximation to the auction on the original distribution. \nThis is because the empirical distribution over samples is \\emph{correlated} --- the values $\\samplei[1]^j, \\ldots, \\samplei[n]^j$ are drawn as a vector, instead of independently. 
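The correlation issue can be seen in a toy sketch (our own illustrative code, not part of the formal argument): a draw from the empirical distribution keeps the coordinates of one sampled vector together, while the empirical product distribution defined next resamples each bidder's value independently.

```python
import random

def draw_empirical(samples):
    """One draw from the empirical distribution over the m sample
    vectors: coordinates stay together, so correlations survive."""
    return random.choice(samples)

def draw_empirical_product(samples):
    """One draw in which bidder i's value is taken uniformly, and
    independently, from bidder i's own m sampled values."""
    n = len(samples[0])
    return tuple(random.choice(samples)[i] for i in range(n))

# Two perfectly correlated bidders: the empirical draw only ever
# produces (0, 0) or (1, 1), while the product draw also mixes them.
samples = [(0, 0), (1, 1)]
random.seed(0)
print(sorted({draw_empirical_product(samples) for _ in range(200)}))
```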
\nStandard notions, such as Bayes Nash equilibria, defined on product distributions become intricate on correlated distributions, and there is no reason to expect the latter to correspond to the equilibria in the original auction.\nTherefore, it is desirable that utilities can also be learned on a \\emph{product} distribution arising from the samples, where each bidder's value is independently drawn, uniformly from the $m$ samples of her value. \nWe show that this can indeed be done, without substantial increase in the number of samples.\nThe key technical step, Lemma~\\ref{lem:relation-uniform-convergence},\nis a reduction from learning on empirical distribution to learning on empirical product distribution. We believe this lemma is of independent interest.\nIn fact, in Section~\\ref{sec:search} we invoke Lemma~\\ref{lem:relation-uniform-convergence} in a different context, that of learning in Pandora's Box problem; the reduction is crucial there for obtaining a polynomial-time learning algorithm.\n\n\\begin{defn}\n\\label{def:empp}\nGiven samples $\\samples = (\\samples^1, \\ldots, \\samples^{m})$, \n$\\empDisti$ is defined to be the uniform distribution over $\\{\\samplei^1, \\ldots, \\samplei^m\\}$. 
The \\emph{empirical product distribution} $\\empDists$ is the product distribution\n $\\empDists\\coloneqq\\prod_{i=1}^n \\empDisti$.\n\\end{defn}\n\n\t\\begin{defn}\n\t\t\\label{def:uniform-convergence-product}\n\t\tFor $\\epsilon>0, \\delta \\in (0, 1)$, a class of functions $\\mathcal{H}: \\prod_{i=1}^n \\typespacei \\to \\mathbb R$ is \\emph{$(\\epsilon, \\delta)$-uniformly convergent on product distribution with sample complexity $M$} if\n\t\tfor any $m \\geq M$, \n\tfor any product distribution $\\dists$ on~$\\prod_{i=1}^n \\typespacei$, \n\tif $\\samples^1, \\ldots, \\samples^{m}$ are i.i.d.\\@ samples from~$\\dists$, \n\twith probability at least $1 - \\delta$, for every $h \\in \\mathcal{H}$,\n\t\\[\n\n\t\t\\left| \\Ex[\\types \\sim \\dists]{h(\\types)} - \\Ex[\\types \\sim \\empDists]{h(\\types)} \\right| < \\epsilon,\n\t\\]\n\n\twhere $\\empDists$ is the empirical product distribution.\n\t\\end{defn}\n\n\\begin{restatable}{lemma}{uniformprod}\n\t\\label{lem:relation-uniform-convergence}\nLet $\\mathcal{H}$ be a class of functions from a product space $\\typespaces$ to $[0, H]$. \nIf $\\mathcal{H}$ is $(\\epsilon, \\delta)$-uniformly convergent with sample complexity $m=m(\\epsilon, \\delta)$, then $\\mathcal{H}$ is $\\left(2\\epsilon, \\frac{H\\delta}{\\epsilon}\\right)$-uniformly convergent on product distribution with sample complexity $m$. 
\n\\end{restatable}\n\\noindent Lemma~\\ref{lem:relation-uniform-convergence} is closely related to a concentration inequality by \\citet{DHP16}.\n\\citeauthor{DHP16} show that for any single function $h:\\typespaces\\to[0, H]$, the expectation of $h$ on the empirical product distribution is close to its expectation on any product distribution with high probability.\nOur lemma generalizes this to show a simultaneous concentration for a family of functions, \nand seems more handy for applications such as ours.\n\nCombining Theorem~\\ref{thm:util-learn-upper-bound} with Lemma~\\ref{lem:relation-uniform-convergence}, we derive our learning results on the empirical product distribution.\n\n\\begin{defn}\nThe \\emph{empirical product distribution estimator} $\\empp$ estimates interim utilities of a bidding strategy on the empirical product distribution~$\\empDists$. Formally, \nfor bidder~$i$ with value~$\\vali$, for bidding strategy profile $\\bids(\\cdot)$,\n\\begin{equation}\n\\empp(\\samples, i, \\vali, \\bids(\\cdot)) \\coloneqq \n\\Ex[\\valsmi\\sim \\empDistsmi] { \\expostUi(\\vali, \\bidi(\\vali), \\bidsmi(\\valsmi)) }. 
\\label{eq:def_empp}\n\\end{equation}\n\\end{defn}\n\n\\begin{thm}\\label{thm:util-learn-upper-bound-product}\nSuppose $\\typespacei\\subseteq[0, H]$ for each $i\\in[n]$, and the tie-breaking rule is random-allocation or no-allocation.\nFor any $\\epsilon>0, \\delta \\in (0, 1)$, there is\n\\begin{equation}\nM = O \\left(\\frac{H^2}{\\epsilon^2} \\left[n\\log n\\log \\left(\\frac{H}{\\epsilon} \\right) + \\log\\left(\\frac{n}{\\delta}\\right)\\right]\\right),\n\\label{eq:util-learn-upper-bound-product}\n\\end{equation}\nsuch that for any $m \\geq M$, \nthe empirical product distribution estimator $\\empp$ $(\\epsilon, \\delta)$-learns with $m$ samples\nthe utilities over the set of all monotone bidding strategies.\n\\end{thm}\n\n\nBy Theorem~\\ref{thm:util-learn-upper-bound-product}, utilities in the FPA on the empirical product distribution approximate those in the FPA on the original distribution; therefore the two auctions share the same set of approximate equilibria:\n\n\\begin{corollary}\\label{cor:find-BNE}\nSuppose $\\typespacei\\subseteq[0, H]$ for each $i\\in[n]$ and the tie-breaking rule is random-allocation or no-allocation. \nFor any $\\epsilon, \\epsilon'>0, \\delta \\in (0, 1)$, for $m$ satisfying~\\eqref{eq:util-learn-upper-bound-product}, \n with probability at least $1-\\delta$ over random draws of $\\samples$, for any monotone bidding strategy profile $\\bids(\\cdot)$, if $\\bids(\\cdot)$ is an $\\epsilon'$-BNE in the first price auction on value distribution $\\empDists=\\prod_i\\empDisti$, then $\\bids(\\cdot)$ is an $(\\epsilon'+2\\epsilon)$-BNE in the first price auction on value distribution $\\dists = \\prod_i \\disti$. \n Conversely, if $\\bids(\\cdot)$ is an $\\epsilon'$-BNE in the first price auction on value distribution~$\\dists$, then $\\bids(\\cdot)$ is an $(\\epsilon'+2\\epsilon)$-BNE in the first price auction on value distribution~$\\empDists$. \n\\end{corollary}\n\nCorollary~\\ref{cor:find-BNE} has an interesting consequence. 
\n\\citet{SWZ20} gave a polynomial-time\nalgorithm for computing Bayes Nash equilibrium in first price auctions on discrete value distributions. \nThe empirical product distribution is discrete, so one can run \\citeauthor{SWZ20}'s algorithm on it.\nCorollary~\\ref{cor:find-BNE} immediately implies:\n\\begin{corollary}\n\\label{cor:polytime-equilibria}\nThere is a Monte Carlo randomized algorithm for computing an $\\epsilon$-BNE in a first price auction with $n$ bidders on arbitrary product value distributions. \nThe running time of the algorithm is polynomial in $n$ and~$\\frac 1 {\\epsilon}$. \n\\end{corollary}\n\nNote that the running time of the algorithm does not depend on the size of the distributions' support, and the algorithm works for continuous distributions as well.\n\nResults very similar to Theorem~\\ref{thm:util-learn-upper-bound}, Theorem~\\ref{thm:util-learn-upper-bound-product}, and Corollary~\\ref{cor:find-BNE} apply to the all pay auction, with the same bounds on the number of samples. \nThe proofs are almost identical and so are omitted.\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\input{intro.tex}\n\n\\paragraph{Additional Related Work.}\n\\label{sec:related}\n\\input{related.tex}\n\n\n\\section{Preliminaries on Auctions}\n\\label{sec:prelim}\n\\input{prelim.tex}\n\n\\section{Sample Complexity of Utility Estimation}\n\\label{sec:fpa}\n\\input{fpa.tex}\n\n\\subsection{Upper Bound on Sample Complexity}\n\\label{sec:fpa-upper-bound}\n\\input{fpa-upper-bound.tex}\n\n\\subsection{Lower Bound on Sample Complexity}\n\\label{sec:lower-bound}\n\\input{fpa-lower-bound.tex}\n\n\\section{Auctions with Costly Search}\n\\label{sec:search}\n\\input{auction-with-search.tex}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this work we obtained almost tight sample complexity bounds for learning utilities in first price auctions and all pay auctions. 
\nWhereas utilities for unconstrained bidding strategies are hard to learn, we show that learning is made possible by focusing on monotone bidding strategies, which is sufficient for all practical purposes.\nWe also extended the results to auctions where search costs are present.\n\nMonotonicity is a natural assumption on bidding strategies in a single item auction, but it does not generalize to multi-parameter settings, where characterization of equilibrium is notoriously difficult.\nIt is an interesting question whether our results can be generalized to multi-item auctions, such as simultaneous first-price auctions, via more general, lossless structural assumptions on the bidding strategies. \n\nOur results also depend on the values being drawn independently. \nWhen bidders' values are correlated, the conditional distribution of opponents' values changes with a bidder's value, and any na\\\"ive utility learning algorithm needs a number of samples that grows linearly with the size of a bidder's type space.\nIt is interesting whether there are meaningful tractable middle grounds for utility learning between product distributions and arbitrary correlated distributions.\n\n\\bibliographystyle{abbrvnat}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn their celebrated paper $\\cite{BrezisCoronLieb}$, Brezis, Coron and Lieb showed, in the context of harmonic maps and liquid crystals theory, the existence of a close relation between sphere-valued harmonic maps having prescribed topological singularities at given points in $\\R^3$ and {\\it minimal connections} between those points, i.e., optimal mass transportation networks (in the sense of Monge-Kantorovich) having those points as marginals. 
This relation was further enlightened by Almgren, Browder and Lieb in $\\cite{abh}$, who recovered the results in $\\cite{BrezisCoronLieb}$ by interpreting the (minimal connection) optimal transportation problem as a suitable Plateau problem for rectifiable currents having the given marginals as prescribed boundary.\n\nOur aim is to consider minimizing configurations for maps valued into manifolds and with prescribed topological singularities when the energy is possibly more general than the Dirichlet energy, and investigate the connection with Plateau problems for currents (or flat chains) with coefficients in suitable groups. The choice of these groups is linked to the topology of the involved target manifolds. \n\nIn this paper we will consider the particular case where the manifold is a product of spheres and the maps have assigned point singularities, and we will show, in Theorem \\ref{thm1} below, that energy minimizing configurations are related to Steiner-type optimal networks connecting the given points, i.e., solutions of the Steiner problem or solutions of the Gilbert-Steiner irrigation problem. The investigation of maps with values in products of spheres arises in several physical problems, such as the study of the structure of minimizers of two-component Ginzburg-Landau functionals, where the reference (ground state) manifold is a torus ($\\mathbb{S}^{1}\\times \\mathbb{S}^{1}$) (see \\cite{Stan}), or the case of Dipole-Free $^3$He-A, where the order parameter takes values in $(\\mathbb{S}^{2}\\times$ SO(3))$\/\\Z_{2}$, whose covering space is $\\mathbb{S}^{2}\\times \\mathbb{S}^{3}$ (see \\cite{Mermin, Christopher}).\nIn a companion paper in preparation $\\cite{CVO}$ we will discuss and state the results which correspond to more general situations. 
Let us also stress that the generalization of the results to a broader class of energies (and thus different norms) is not moot, this being the case, for instance, for dislocations in crystals (see \\cite{CoGarMas}). \n\n\nSteiner tree problems and Gilbert-Steiner (single sink) problems can be formulated as follows: given $n$ distinct points $P_{1},\\ldots, P_{n}$ in $\\R^{d}$, where $d, n \\geq 2$, we are looking for an optimal connected transportation network, $L = \\cup_{i=1}^{n-1}\\lambda_i$, along which the unit masses initially located at $P_{1},\\ldots, P_{n-1}$ are transported to the target point $P_n$ (single sink); here $\\lambda_i$ can be seen as the path of the $i^{\\rm th}$ mass flowing from $P_{i}$ to $P_{n}$, and the cost of moving a mass $m$ along a segment with length $l$ is proportional to $lm^{\\alpha}$, $\\alpha\\in[0,1]$. Therefore, we are led to consider the problem\n$$\n\\inf \\left\\{ I_\\alpha(L):\\,L = \\bigcup_{i=1}^{n-1}\\lambda_i\\text{ with }\\lbrace P_{i}, P_{n} \\rbrace \\subset \\lambda_{i}\\text{, for every }i=1,\\ldots, n-1 \\right\\}\n\\leqno{(I)}\n$$\nwhere the energy $I_\\alpha$ is computed as $I_\\alpha(L)=\\int_L |\\theta(x)|^\\alpha d{\\mathcal H}^1(x)$, with $\\theta(x) = \\sum_{i=1}^{n-1} \\mathbf{1}_{\\lambda_i}(x)$. Let us notice that $\\theta$ stands for the mass density along the network. 
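To fix ideas before distinguishing the regimes of $\alpha$, the cost $I_\alpha$ can be evaluated on a toy discretized network (our own illustrative encoding, not taken from the references: each segment carries the number $\theta$ of unit masses flowing through it):

```python
from math import dist

def cost_I_alpha(network, alpha):
    """I_alpha of a discretized network: a segment (p, q) carrying
    theta unit masses contributes |p - q| * theta**alpha."""
    return sum(dist(p, q) * theta ** alpha for (p, q), theta in network)

# Two sources at (-1, 1) and (1, 1) shipping unit mass to the sink (0, 0).
direct = [(((-1, 1), (0, 0)), 1), (((1, 1), (0, 0)), 1)]
branched = [(((-1, 1), (0, 0.5)), 1),   # path of the first unit mass
            (((1, 1), (0, 0.5)), 1),    # path of the second unit mass
            (((0, 0.5), (0, 0)), 2)]    # shared trunk, theta = 2
# alpha = 0 counts total length only, so the branched network wins;
# alpha = 1 is insensitive to clustering, and the direct one wins.
print(cost_I_alpha(direct, 0), cost_I_alpha(branched, 0))
print(cost_I_alpha(direct, 1), cost_I_alpha(branched, 1))
```

With $\alpha=0$ this gives $2\sqrt{1.25}+0.5\approx 2.74$ for the branched network against $2\sqrt{2}\approx 2.83$ for the direct connection, a small instance of the clustering behaviour discussed next.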
In particular, we consider the range $\\alpha\\in[0,1]$:\n\\begin{itemize}\n\\item when $\\alpha=0$ the problem is equivalent to optimizing the total length of the graph $L$, as in the Steiner Tree Problem (STP); \n\\item when $\\alpha=1$ the problem $(I)$ becomes the well-known Monge-Kantorovich problem;\n\\item when $0<\\alpha<1$ the problem is known as the Gilbert-Steiner problem, or, more generally, as a branched optimal transport problem, due to the fact that the cost is proportional to a concave function $\\theta^{\\alpha}$, which favours the clustering of the mass during the transportation, thus giving rise to the branched structures which characterize the solutions (we refer the reader to $\\cite{Bernot2009}$ for an overview on the topic). \n\\end{itemize}\n\n\nIn the last decade, the communities of Calculus of Variations and Geometric Measure Theory made some efforts to study (Gilbert-)Steiner problems in many aspects, such as existence, regularity, stability and numerical feasibility (see for example \\cite{Xia, PaSt, MaMa2, MaMa, MariaAnonioAndrea, MariaAnonioAndreaPegonProuff, OuSa, BoLeSa, MaOuVe, BoOrOu, BoOu, BoOrOu2} and references therein). Among all the significant results, we would like to mention recent works in $\\cite{MaMa2, MaMa}$ and $\\cite{BoOrOu, BoOrOu2}$, which are closely related to the present paper. To be more precise, in $\\cite{MaMa2, MaMa}$ the authors turn the problem $(I)$ into the problem of mass-minimization of integral currents with multiplicities in a suitable group. For the sake of readability we postpone proper definitions about currents to Section \\ref{section2}; in this introduction we only recall that a $1$-dimensional integral current with coefficients in a group can be thought of as a formal sum of finitely many curves and countably many loops with coefficients in a given normed abelian group. 
For instance, considering the group $\\Z^{n-1}$ and assigning to the boundary datum $P_{1}, P_{2},\\ldots, P_{n-1}, P_{n}$ the multiplicities $e_{1},e_{2},\\ldots,e_{n-1},-(e_{1}+\\ldots+ e_{n-1})$, respectively (where $\\lbrace e_{i} \\rbrace_{1 \\leq i \\leq n-1}$ is the standard basis of $\\R^{n-1}$), we recover the standard model in \\cite{MaMa2,MaMa}. \n\nIn fact we can interpret the network $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ as the superposition of $n-1$ paths $\\lambda_i$ connecting $P_{i}$ to $P_{n}$ labelled with multiplicity $e_{i}$. This point of view requires a density function with values in $\\Z^{n-1}$, which corresponds to the so-called $1$-dimensional current with coefficients in the group $\\Z^{n-1}$. Furthermore, by equipping $\\Z^{n-1}$ with a certain norm (depending on the cost of the problem), we may define the notion of mass of those currents, and problem $(I)$ turns out to be equivalent to the Plateau problem\n$$\n\\inf \\left\\{ \\mathbb{M}(T):\\,\\partial T = e_{1}\\delta_{P_1}+e_{2}\\delta_{P_2}+\\ldots+e_{n-1}\\delta_{P_{n-1}}-(e_{1}+e_{2}+\\ldots+e_{n-1})\\delta_{P_n} \\right\\}\n\\leqno{(M)}\n$$\nwhere $T$ is a 1-dimensional current with coefficients in the group $\\Z^{n-1}$ (again, we refer the reader to Section $\\ref{section2}$ for rigorous definitions). For mass minimization, there is the very useful notion of calibration (see Section \\ref{section3}), that is, a tool to prove minimality when dealing with concrete configurations (see Example $\\ref{examplecalib}$). To be precise, a calibration is a sufficient condition for minimality; see Definition \\ref{Calibration} and the following remarks.\n\nIn $\\cite{BoOrOu, BoOrOu2}$, by using $\\cite{MaMa2, MaMa}$, a variational approximation of the problem $(I)$ was provided through Modica-Mortola type energies in the planar case, and through Ginzburg-Landau type energies (see \\cite{ABO2}) in higher dimensional ambient spaces via $\\Gamma$-convergence. 
The corresponding numerical treatment is also shown there.\n\nFollowing $\\cite{MaMa2, MaMa}$, $\\cite{BoOrOu, BoOrOu2}$, and the strategy outlined in $\\cite{abh}$ (relating the energy of harmonic maps with prescribed point singularities to the mass of $1$-dimensional classical integral currents), we provide here a connection between energy functionals comparable with the $k$-harmonic map problem with prescribed point singularities and the (Gilbert-)Steiner problems $(I)$. More precisely, let $P_{1},\\ldots,P_{n-1}, P_{n}$ in $\\R^{d}$ be given, and consider the spaces $H_{i}$ defined as the subsets of $W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ where the functions are constant outside a neighbourhood of the segment joining $P_i,P_n$ and have distributional Jacobian $\\frac{\\alpha_{d-1}}{d}( \\delta_{P_i}-\\delta_{P_n})$, respectively. Here $\\alpha_{d-1}$ is the surface area of the unit ball in $\\R^{d}$.\n\nLet $\\psi$ be a norm on $\\R^{n-1}$ which will be specified in Section $\\ref{section3}$ (see $\\eqref{normeuclidean}$), and set \n\\begin{equation}\\label{def_h}\n\t\\mathbb{H}({\\bf u})=\\int_{\\R^{d}} \\psi(|\\nabla u_{1}|^{d-1}, |\\nabla u_{2}|^{d-1},\\ldots,|\\nabla u_{n-1}|^{d-1})\\,dx\n\\end{equation}\nwhere ${\\bf u}=(u_{1},\\ldots,u_{n-1})\\in H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1}$ is a $2$-tensor. The functional $\\mathbb{H}$ is the so-called $k$-harmonic energy; it is modeled on the $(d-1)$-Dirichlet energy. We will consider here a class of energies $\\mathbb E$ for maps in $H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1}$ which are suitably related to $\\mathbb M$ and $\\mathbb H$, according to Definition \\ref{def:suiten} below. 
In this case, we investigate the problem of characterizing\n$$\n\\inf \\left\\{ \\mathbb{E}({\\bf u}):\\,{\\bf u}\\in H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1} \\right\\}.\n\\leqno{(H)}\n$$\nThe main contribution of this paper is the following equivalence result relating the minimization problem for the mass $\\mathbb M$ to that for an energy $\\mathbb E$ which is suitably related to $\\mathbb M$ and $\\mathbb H$.\n\n\\begin{theorem}\\label{thm1}\n\nAssume that a minimizer of the problem $(M)$ admits a calibration (see Definition \\ref{Calibration}). Consider an energy functional $\\mathbb{E}$ which is suitably related to $\\mathbb M$ and $\\mathbb H$, in the sense of Definition \\ref{def:suiten}. Then, we have\n\t\\begin{equation}\\label{thmharmonic}\n\t\\inf{\\mathbb{E}}=\\alpha_{d-1} \\inf{\\mathbb{M}}\n\t\\end{equation}\n\tor equivalently, in view of the papers $\\cite{MaMa2, MaMa}$,\n\t\\begin{equation}\\label{thmharmonic2}\n\t\\inf{\\mathbb{E}}=\\alpha_{d-1} \\inf{I_\\alpha}\\,.\n\t\\end{equation}\n\\end{theorem}\n\nCurrently, we cannot evade the assumption on the existence of a calibration, because it is still not known whether a calibration, or even a weak version of it, is not only a sufficient but also a necessary condition for minimality (see Section \\ref{section2}). Nonetheless, dropping this assumption we can still state some partial results as follows. 
\n\n\\begin{remark}{\\rm\n\t\\begin{enumerate}\n\t\t\\item[(i)] If $\\alpha=1$, $\\psi=\\|\\cdot \\|_{1}$, $\\mathbb{E}=\\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}$, then we are able to prove that $\\eqref{thmharmonic}$ still holds true, as a variant of the main result in $\\cite{BrezisCoronLieb}$.\n\t\t\\item[(ii)] In case $0\\leq \\alpha <1$, we obtain the following inequality:\n\t\t\\begin{equation}\\label{compareintro}\n\t\t\\alpha_{d-1} \\inf{\\mathbb{M}}=\\alpha_{d-1}\\inf{I_\\alpha}\\geq \\inf{\\mathbb{E}}\\, \\geq \\alpha_{d-1} \\inf{\\mathbb{N}}\\,.\n\t\t\\end{equation}\n\t\tThe investigation of equality in $\\eqref{compareintro}$ when $0\\leq \\alpha <1$ is delicate and will be considered in forthcoming works.\n\t\n\t\\end{enumerate}\n\n}\\end{remark}\n\\begin{remark}\\label{conjecture}\n{\\rm\tWe believe that the assumption of the existence of a calibration is not too restrictive. We actually conjecture that minimizing configurations for the problem $(M)$ admit a calibration in case of uniqueness, which is somehow a generic property (see \\cite{Cal_Ma_Stein}). We carry out in Example $\\ref{examplecalib}$ the construction of configurations of $n$ points in $\\R^{n-1}$ with $n-2$ branching points which are generic in character, and we show that these configurations admit a calibration.\n}\\end{remark}\nThe organization of the paper is as follows: in Section $\\ref{section2}$ we briefly review some basic notions of Geometric Measure Theory which will be used in the paper; in Section $\\ref{section3}$ we recall (Gilbert-)Steiner problems and briefly describe their connection with Plateau's problem for currents with coefficients in a group. 
Finally, in Section \\ref{Prooftheorem1} we prove Theorem \\ref{thm1}.\n\\section{Preliminaries and notations}\\label{section2}\n\\subsection{Rectifiable currents with coefficients in a group G}\nIn this section, we present the notion of $1$-dimensional currents with coefficients in the group $\\R^{n-1}$ in the ambient space $\\R^{d}$ with $n, d\\geq 2$. We refer to \\cite{Ma} for a more detailed exposition of the subject. \n\nConsider $\\R^{n-1}$ equipped with a norm $\\psi$ and its dual norm $\\psi^{*}$. Denote by $\\Lambda_{1}(\\R^{d})$ the space of $1$-dimensional vectors and by $\\Lambda^{1}(\\R^{d})$ the space of $1$-dimensional covectors in $\\R^{d}$.\n\\begin{definition}{\\rm An $(\\R^{n-1})^{*}$-valued $1$-covector on $\\R^{d}$ is a bilinear map\n\t$$w : \\Lambda_{1}(\\R^{d})\\times \\R^{n-1}\\longrightarrow \\R\\,.\n\t$$\n\n\tLet $\\lbrace e_{1},e_{2},\\ldots,e_{n-1} \\rbrace$ be an orthonormal basis of $\\R^{n-1}$, and let $\\lbrace e^{*}_{1},e^{*}_{2},\\ldots,e^{*}_{n-1} \\rbrace$ be its dual. Then, each $(\\R^{n-1})^{*}$-valued $1$-covector on $\\R^{d}$ can be represented as \n\t$w=w_{1} e^{*}_{1}+\\ldots+w_{n-1}e^{*}_{n-1}\\,,$\n\twhere $w_{i}$ is a ``classical'' $1$-dimensional covector in $\\R^{d}$ for each $i=1,\\ldots,n-1$.
To be precise, the action of $w$ on a pair $(\\tau,\\theta)\\in\\Lambda_1(\\R^d)\\times\\R^{n-1}$ can be computed as\n\t\\[\n\\langle w;\\tau,\\theta\\rangle=\\sum_{i=1}^{n-1}\\theta_i\\langle w_i,\\tau\\rangle\\,,\t\n\t\\]\n\twhere the pairing on the right hand side is the standard duality pairing between $1$-covectors and $1$-vectors in $\\R^d$.\nWe denote by $\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})$ the space of $(\\R^{n-1})^{*}$-valued $1$-covectors in $\\R^{d}$, endowed with the (comass) norm:\n$$\n| w |_{c,\\psi}:=\\sup \\lbrace \\psi^{*} ( \\langle w ; \\tau, \\cdot \\rangle ) \\, : \\, \\vert \\tau \\vert \\leq 1\\rbrace\\,.$$\nSimilarly, we can define the space of $\\R^{n-1}$-valued $1$-vectors in $\\R^{d}$, $\\Lambda_{1, (\\R^{n-1},\\psi)}(\\R^{d})$, endowed with the pre-dual (mass) norm: for any $v\\in \\Lambda_{1, (\\R^{n-1},\\psi)}(\\R^{d})$ we define\n\\begin{equation}\\label{nuclearnorm}\n\\begin{aligned}\n| v |_{m,\\psi}:= & \\sup \\lbrace \\langle w, v \\rangle \\, : \\, \\vert w \\vert_{c,\\psi} \\leq 1, w\\in \\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d}) \\rbrace\\,\\\\\n= & \\inf \\left\\{ \\sum_{l=1}^L \\psi (z_l) |\\tau_l| \\, : \\, \\tau_{1},\\ldots,\\tau_{L} \\in \\Lambda_{1}(\\R^{d}), \\, z_1, \\ldots, z_L \\in \\R^{n-1} \\mbox{ s.t. }v=\\sum_{l=1}^{L}z_l\\otimes\\tau_{l} \\right\\}\\,.\n\\end{aligned}\n\\end{equation}\n}\n\\end{definition}\n\\begin{definition}{\\rm\nAn $(\\R^{n-1})^{*}$-valued $1$-dimensional differential form defined on $\\R^{d}$ is a map \n$$\\omega: \\R^{d} \\longrightarrow \\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})\\,.$$\nLet us remark that the regularity of $\\omega$ is inherited from the components $\\omega_{i}$, $i=1,\\ldots,n-1$. \n\tLet $\\phi=(\\phi_1,\\ldots,\\phi_{n-1})$ be a function of class $C^{1}(\\R^d;\\R^{n-1})$. We denote\n\t$${\\rm d}\\phi:={\\rm d}\\phi_{1}e^{*}_{1}+\\ldots+{\\rm d}\\phi_{n-1}e^{*}_{n-1},$$\n\twhere ${\\rm d}\\phi_{i}$ is the differential of $\\phi_{i}$.
Thus ${\\rm d}\\phi \\in C(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d}) )$.\n}\\end{definition}\n\\begin{definition}{\\rm\nA $1$-dimensional current $T$ with coefficients in $(\\R^{n-1},\\psi)$ is a linear and continuous map\n\t$$T: C^{\\infty}_{c}\\left(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})\\right) \\longrightarrow \\R\\,.$$\n\tHere the continuity is meant with respect to the (locally convex) topology on $C^\\infty_c(\\R^d;\\Lambda^1_{(\\R^{n-1},\\psi)}(\\R^d))$ defined in analogy with the topology on $C^\\infty_c(\\R^d;\\R)$ which allows the definition of distributions.\n\tThe mass of $T$ is defined as\n\t\\[\n\t\\mathbb{M}(T):=\\sup \\left\\{ T(\\omega):\\, \\sup_{x\\in \\R^{d}}|\\omega|_{c,\\psi} \\leq 1 \\right\\}\\,.\n\t\\]\n\tMoreover, if $T$ is a $1$-dimensional current with coefficients in $(\\R^{n-1}, \\psi)$, we define the boundary $\\partial T$ of $T$ as a distribution with coefficients in $(\\R^{n-1},\\psi)$, $\\partial T: C^{\\infty}_{c}(\\R^{d};(\\R^{n-1},\\psi) ) \\longrightarrow \\R $, such that $$\\partial T(\\phi):=T({\\rm d}\\phi)\\,.$$ \n\tThe mass of $\\partial T$ is defined analogously, via the supremum\n\t\\[\n\t\\mathbb{M}(\\partial T):=\\sup \\left\\{ T({\\rm d}\\varphi):\\, \\sup_{x\\in \\R^{d}} \\psi^*(\\varphi)\\leq 1 \\right\\}\\,.\n\t\\]\nA current $T$ is said to be normal if $\\mathbb{M}(T)+\\mathbb{M}(\\partial T)<\\infty$.\n}\\end{definition}\n\\begin{definition}{\\rm\n\tA $1$-dimensional rectifiable current with coefficients in the normed (abelian) group $(\\Z^{n-1}, \\psi)$ is a ($1$-dimensional) normal current (with coefficients in $(\\R^{n-1},\\psi)$)\tsuch that there exist a $1$-dimensional rectifiable set $\\Sigma\\subset\\R^d$, an approximate tangent vectorfield $\\tau \\, : \\, \\Sigma \\longrightarrow \\Lambda_{1}(\\R^{d})$, and a density function $\\theta : \\Sigma \\longrightarrow \\Z^{n-1}$ such that \n\t$$T(\\omega)=\\int_{\\Sigma}\\langle \\omega (x); \\tau (x), \\theta (x) \\rangle \\,d\\mathcal{H}^{1}(x)$$\n\tfor every
$\\omega \\in C^{\\infty}_{c}\\left(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d}) \\right)$. We denote such a current $T$ by the triple $\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$.\n}\\end{definition}\n\n\\begin{remark}\\label{rmk:mass}{\\rm\nThe mass of a rectifiable current $T=\\llbracket\\Sigma,\\tau,\\theta\\rrbracket$ with coefficients in $(\\Z^{n-1}, \\psi)$ can be computed as\n\t$$\\mathbb{M}(T):=\\sup \\left\\{ T(\\omega):\\, \\sup_{x\\in \\R^{d}}|\\omega|_{c,\\psi} \\leq 1 \\right\\}=\\int_{\\Sigma}\\psi (\\theta(x))\\,d\\mathcal{H}^{1}(x)\\,.$$\nMoreover, $\\partial T: C^{\\infty}_{c}(\\R^{d};(\\R^{n-1},\\psi) ) \\longrightarrow \\R $ is a measure and there exist $x_{1},\\ldots,x_{m} \\in \\R^{d}$, $p_{1},\\ldots,p_{m} \\in \\Z^{n-1}$ such that\n\t$$\\partial T(\\phi)=\\sum_{j=1}^{m}p_{j}\\cdot\\phi(x_{j}).$$\nFinally, the mass of the boundary $\\mathbb{M}(\\partial T)$ coincides with $\\sum_{j=1}^{m}\\psi(p_{j})$.\n\t}\\end{remark}\n\t\\begin{remark}{\\rm\n\t\tIn the trivial case $n=2$, we consider rectifiable currents with coefficients in the discrete group $\\Z$ and we recover the classical definition of integral currents (see, for instance, \\cite{FeBook}).\n\t}\\end{remark}\nFinally, it is useful to define the components of $T$ with respect to the index $i\\in\\{1,\\ldots,n-1\\}$: for every $1$-dimensional test form $\\tilde\\omega\\in C^\\infty_c(\\R^d;\\Lambda^1(\\R^d))$ we set\n\t$$T^{i}(\\tilde\\omega):=T(\\tilde\\omega e^{*}_{i})\\,.$$\n\tNotice that $T^{i}$ is a classical integral current (with coefficients in $\\Z$). Roughly speaking, in some situations we are allowed to see a current with coefficients in $\\R^{n-1}$ through its components $(T^{1},\\ldots,T^{n-1})$.\n\t\nWhen dealing with the Plateau problem in the setting of currents, it is important to point out a couple of critical features. For the sake of clarity, we recall them here for the particular case of $1$-dimensional currents, but the matter does not depend on the dimension.
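On polyhedral currents the mass formula of Remark \ref{rmk:mass} reduces to a finite sum: if $T$ is supported on finitely many segments, with constant multiplicity on each of them, then $\mathbb{M}(T)$ is the sum over the segments of $\psi(\theta)$ times the length. The following script is only an illustrative sketch (not part of the formalism of the paper): it evaluates this sum on a small branched network in $\R^3$ with coefficients in $\Z^3$, choosing for $\psi$ the norm $\|\cdot\|_{\alpha}$ with $\alpha=1/2$ that appears later in Section \ref{section3}, which coincides with the Euclidean norm on the coefficients.

```python
import math

def psi_alpha(theta, alpha):
    """The norm ||theta||_alpha = (sum_j |theta_j|^(1/alpha))^alpha for
    alpha in (0, 1]; for alpha = 0 it degenerates to max_j |theta_j|."""
    if alpha == 0:
        return max(abs(t) for t in theta)
    return sum(abs(t) ** (1.0 / alpha) for t in theta) ** alpha

def mass(segments, alpha):
    """Mass of a polyhedral current: sum over segments of psi(theta) * length.
    Each segment is (start point, end point, multiplicity theta)."""
    total = 0.0
    for p, q, theta in segments:
        total += psi_alpha(theta, alpha) * math.dist(p, q)
    return total

# A branched network in R^3 with three sources and one sink (alpha = 1/2,
# so psi is the Euclidean norm on the coefficient vectors).
segments = [
    ((-1, 0, 0), (0, 0, 0), (1, 0, 0)),   # theta = e1
    ((0, -1, 0), (0, 0, 0), (0, 1, 0)),   # theta = e2
    ((0, 0, 0), (1, 1, 0), (1, 1, 0)),    # theta = e1 + e2
    ((1, 1, -1), (1, 1, 0), (0, 0, 1)),   # theta = e3
    ((1, 1, 0), (2, 2, 1), (1, 1, 1)),    # theta = e1 + e2 + e3
]
print(mass(segments, alpha=0.5))  # lengths 1, 1, sqrt2, 1, sqrt3 -> total 8
```

The edges carrying the multiplicities $e_1$, $e_2$, $e_3$ contribute their lengths, while the shared edges are weighted by $\psi(e_1+e_2)=\sqrt2$ and $\psi(e_1+e_2+e_3)=\sqrt3$.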
\n\\begin{remark}\\label{lavrentiev}{\\rm\nIf a boundary $\\{P_1,\\ldots,P_n\\}\\subset\\R^d$ is given, then the problem of the minimization of mass is well posed in the framework of rectifiable currents and in the framework of normal currents as well. In both cases the existence of minimizers follows from the direct method and, in particular, from the closedness of both classes of currents. Obviously\n\\begin{align*}\n& \\min\\{\\mathbb{M}(T):\\,T\\text{ normal current with coefficients in }\\R^{n-1}\\text{ and boundary }\\{P_1,\\ldots,P_n\\}\\}\\\\\n\\le & \\min\\{\\mathbb{M}(T):\\,T\\text{ rectifiable current with coefficients in }\\Z^{n-1}\\text{ and boundary }\\{P_1,\\ldots,P_n\\}\\}\\,,\n\\end{align*}\nbut whether the inequality is actually an identity is not known for currents with coefficients in groups. The same question about the occurrence of a Lavrentiev gap between normal and integral currents holds for classical currents of dimension bigger than $1$, and it is closely related to the problem of the decomposition of a normal current into rectifiable ones (see \\cite{Ma} for a proper overview of this issue). }\\end{remark}\n\nA powerful tool for proving the minimality of a given current is to show the existence of a calibration.\n\n\\begin{definition}\\label{Calibration}{\\rm\n\tConsider a rectifiable current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$ with coefficients in $\\Z^n$, in the ambient space $\\R^{d}$.
A smooth $(\\R^{n})^{*}$-valued differential form $\\omega$ in $\\R^{d}$ is a calibration for $T$ if the following conditions hold:\n\t\\begin{enumerate}\n\t\t\\item[(i)]\\label{clr1} for a.e.\\ $x\\in \\Sigma$ we have that $\\langle \\omega(x); \\tau(x), \\theta (x)\\rangle=\\psi (\\theta(x));$ \n\t\t\\item[(ii)]\\label{clr2} the form is closed, i.e., ${\\rm d}\\omega=0;$\n\t\t\\item[(iii)]\\label{clr3} for every $x\\in \\R^{d}$, for every unit vector $t \\in \\R^{d}$ and for every $h\\in \\Z^{n}$, we have that\n\t\t$$\\langle \\omega(x); t, h \\rangle \\leq \\psi (h)\\,.$$\n\t\\end{enumerate}\n}\\end{definition}\n\nIt is straightforward to prove that the existence of a calibration associated to a current implies the minimality of the current itself. Indeed, with the notation in Definition \\ref{Calibration}, if $T'=\\llbracket\\Sigma',\\tau',\\theta'\\rrbracket$ is a competitor, i.e., $T'$ is a rectifiable current with coefficients in $\\Z^n$ and $\\partial T'=\\partial T$, then\n\\[\n{\\mathbb M}(T)=\\int_{\\Sigma}\\psi(\\theta)=\\int_{\\Sigma}\\langle\\omega;\\tau,\\theta\\rangle=\\int_{\\Sigma'}\\langle\\omega;\\tau',\\theta'\\rangle\\le\\int_{\\Sigma'}\\psi(\\theta')={\\mathbb M}(T')\\,.\n\\]\nHere the first equality is the mass formula of Remark \\ref{rmk:mass}, the second follows from condition (i), the third from the closedness of $\\omega$ together with $\\partial T'=\\partial T$ (via Stokes' theorem), and the inequality from condition (iii).\n\nWe stress the fact that the existence of a calibration is a sufficient condition for the minimality of a current, so exhibiting one is always a wise attempt when a current is a good candidate for mass minimization. Nonetheless, it is also natural to wonder whether every mass minimizing current has its own calibration, and this problem can be tackled in two ways: for specific currents or classes of currents (such as holomorphic subvarieties) one has to face an extension problem with the (competing) constraints (ii) and (iii), since condition (i) already prescribes the behaviour of the form on the support of the current.
In general, one may attempt to prove the existence of a calibration as a result of a functional argument, picking it in the dual space of normal currents, but this approach has two still unsolved problems:\n\\begin{itemize}\n\\item the calibration is merely an element of the dual space of normal currents, thus it is far from being smooth;\n\\item this argument works in the space of normal currents and it is not known whether a minimizer in this class is rectifiable as well (see Remark \\ref{lavrentiev}).\n\\end{itemize}\nHowever, in the specific case of currents with coefficients in $\\Z^n$ which match the energy minimizing networks of a branched optimal transport problem (with a subadditive cost), we think that the Lavrentiev phenomenon cannot occur, as explained in Remark \\ref{conjecture}. \n\n\\subsection{Distributional Jacobian}\nWe recall the definition of the distributional Jacobian of a function $u\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\R^{d})\\cap L^{\\infty}_{\\rm loc}(\\R^{d}; \\R^{d})$, see also \\cite{JeSo02, ABO1}.\n\n\\begin{definition} Let $u$ be in $W^{1,d-1}_{\\rm loc}(\\R^{d}; \\R^{d})\\cap L^{\\infty}_{\\rm loc}(\\R^{d}; \\R^{d})$; we define the pre-jacobian $ju \\in L^1_{\\rm loc}(\\R^d;\\R^d)$ as\n$$ju\n:=(\\det(u,u_{x_{2}},\\ldots,u_{x_{d}}), \\det(u_{x_{1}},u,\\ldots,u_{x_{d}}),\n\\ldots,\\det(u_{x_{1}},\\ldots,u_{x_{d-1}}, u))\\,,$$\nwhere $u_{x_j}$ is a $L^{d-1}_{\\rm loc}(\\R^d;\\R^d)$ representative of the partial derivative of $u$ with respect to the $j^{\\rm th}$ direction. Thus we define the Jacobian $Ju$ of $u$ as $\\frac{1}{d}{\\rm d}(ju)$ in the sense of distributions.
More explicitly, if $\\phi \\in C^{\\infty}_{c}(\\R^{d};\\R)$ is a test function, then one has\n\\begin{equation}\\label{distrib_jac}\n\\int_{\\R^{d}}\\phi\\, Ju\\,dx=-\\frac{1}{d}\\int_{\\R^{d}}\\nabla \\phi \\cdot ju\\,dx\\,.\n\\end{equation}\nThe identity required in \\eqref{distrib_jac} is clearer if one notices that $ju$ has been chosen in such a way that ${\\rm div}(\\varphi\\, j\\tilde u)=\\nabla\\varphi\\cdot j\\tilde u+d\\,\\varphi\\det D\\tilde u$ whenever $\\tilde u$ is smooth enough to allow the differential computation.\n\\end{definition}\n\nOnce the singularities of the problem $\\lbrace P_1,\\ldots,P_n\\rbrace$ have been prescribed, we can also introduce the energy spaces $H_{i}$, for each $i=1,\\ldots,n-1$. By definition a map $u\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ belongs to $H_i$ if $Ju=\\frac{\\alpha_{d-1}}{d}( \\delta_{P_i}-\\delta_{P_n})$, and there exists a radius $r=r(u)>0$ such that $u$ is constant outside $B(0, r(u))\\ni P_{i}, P_{n}$, where\n$B(0, r)$ is the open ball of radius $r$ centered at $0$. \n\nFor any $\\textbf{u}\\in H_1\\times \\ldots \\times H_{n-1}$, we define the (matrix-valued) pre-jacobian of $\\textbf{u}$ by\n\\begin{equation}\n\\textbf{ju}=(ju_1,\\ldots,ju_{n-1})\n\\end{equation}\nand its Jacobian by\n\\begin{equation}\n\\textbf{Ju}=(Ju_1,\\ldots,Ju_{n-1})\\,.\n\\end{equation}\nWe observe that $\\textbf{ju}$ is actually a $1$-dimensional normal current with coefficients in $\\R^{n-1}$.
Moreover\n\\begin{equation}\n\\frac{1}{d}\\partial \\, \\textbf{ju}=-\\textbf{Ju}\\,.\n\\end{equation}\n\n\\begin{definition}\\label{def:suiten} Given $P_1,\\ldots,P_n\\in \\R^d$ and a norm $\\psi$ on $\\R^{n-1}$, a functional $\\mathbb{E}$ defined on $H_{1}\\times \\ldots \\times H_{n-1}$ is said to be suitably related to $\\mathbb{M}$ and $\\mathbb{H}$ (see \\eqref{def_h} for its definition) if the following properties hold.\n\\begin{itemize}\n\t\\item[(i)] $\\mathbb{M}(\\textbf{\\rm\\bf ju})\\leq \\mathbb{E}({\\bf u})$, where $\\textbf{\\rm \\bf ju}$ is the normal current defined by the pre-jacobian.\n\t\\item[(ii)] If there exist an open set $U\\subset\\R^d$ and a subset $I$ of the set of labels $\\lbrace 1,\\ldots,n-1\\rbrace$ such that $u_i=u_l$ for every pair $i,l\\in I$ and $u_i=0$ otherwise, we have \n\t\\begin{equation}\n\t\t\\mathbb{E}({\\bf u}\\chi_U) \\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}} \\mathbb{H}({\\bf u}\\chi_U)\\,,\n\t\\end{equation}\n\twhere $\\chi_U$ is the characteristic function of $U$.\n\t\\item[(iii)] When $k=1$, i.e., when there is just one component, the functional $\\mathbb{E}$ coincides with the harmonic energy considered in \\cite{BrezisCoronLieb}.\n\\end{itemize}\n\\end{definition}\nLet us point out that requirement {\\it (ii)} is tailored to the dipole construction maps ${\\bf u}=(u_{1},\\ldots,u_{n-1})$ in Step $1$ of the proof of Theorem \\ref{thm1}.\n\nWe consider the following problem:\n$$\n\\inf \\left\\{\\mathbb{E}({\\bf u}), \\hspace{0.2cm} {\\bf u}=(u_{1},\\ldots,u_{n-1})\\in H_{1}\\times H_{2} \\times \\ldots \\times H_{n-1} \\right\\}.\n\\leqno{(H)}\n$$\nAs indicated in the introduction, the inspiration for considering the problem $(H)$ and comparing it with the irrigation problem $(I)$ comes from the works \\cite{MaMa2, MaMa} and \\cite{abh}. More precisely, \\cite{MaMa2, MaMa} provided a new framework for the problem $(I)$ by proving it to be equivalent to the problem of mass-minimizing currents with coefficients in the group $\\Z^{n-1}$ with a suitable norm.
The point is to look at each irrigation network $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ as encoded in the current $T=(T^{1},\\ldots, T^{n-1})$, where $T^{i}$ is a classical current supported by $\\lambda_{i}$, so that the irrigation cost of $L$ is the mass of the current $T$. Then, combining this point of view with \\cite{abh} (see also \\cite{BrezisCoronLieb}), where the energy of harmonic maps with prescribed point singularities was related to $1$-dimensional classical currents, we are led to investigate the problem $(H)$ in connection with problem $(I)$.\n\nBefore moving to the next section, we provide a candidate for the functional $\\mathbb{E}$ satisfying the properties in Definition \\ref{def:suiten}.\nLet $\\textbf{u}=(u_1,\\ldots,u_{n-1}) \\in H_1\\times \\ldots \\times H_{n-1}$. Let $e_1,\\ldots,e_{n-1}$ be the canonical basis of $\\R^{n-1}$, and let $I$ be a subset of $\\lbrace 1,\\ldots, n-1 \\rbrace$; then we denote by\n$e_{I}$ the sum $\\sum_{i\\in I}e_{i}$. We define the energy density $\\textbf{e}(\\textbf{u})$ at a point $x\\in\\R^d$ as\n\\begin{align}\n\\textbf{e}(\\textbf{u})(x)=(d-1)^{-\\frac{d-1}{2}}\\inf\\Bigg\\{ & \\sum_{I\\in{\\mathcal{I}}}\\|e_I \\|_{\\alpha}|\\nabla u_I(x)|^{d-1}:\\, \\mbox{where }\\textbf{ju}(x)=\\sum_{I\\in{\\mathcal I}}ju_{I}(x)\\otimes e_I\\,\\label{energydensityforE}\\\\\n & \\text{and } \\mathcal{I}\\text{ is a partition of }\\{1,\\ldots,n-1\\}\\Bigg\\}\\,.\\nonumber\n\\end{align}\nTo be precise, here the matrix $\\textbf{ju} (x)$ is decomposed according to a partition ${\\mathcal I}$ of the set $\\{1,\\ldots,n-1\\}$ in such a way that $ju_i(x)=ju_l(x)$ for every $I\\in{\\mathcal I}$ and every pair $i,l\\in I$. \n\nAs an example, take ${\\bf u}=(u_1,u_2)\\in H_1\\times H_2$ for some choice of the points $P_1,P_2,P_3\\in\\R^d$.
Then, at some point $x\\in \\R^d$, either $ju_1(x)\\neq ju_2(x)$ or $ju_1(x)=ju_2(x)$.\n\\begin{itemize}\n\\item If $ju_1(x)\\neq ju_2(x)$, then the unique decomposition that we are allowing is $\\textbf{ju}(x)=ju_1(x)\\otimes e_1+ju_2(x)\\otimes e_2$ and $\\textbf{e}(\\textbf{u})(x)=c_d(|\\nabla u_1(x)|^{d-1}+|\\nabla u_2(x)|^{d-1})$, where we abbreviated $c_d=(d-1)^{-\\frac{d-1}{2}}$.\n\\item If $ju_1(x)=ju_2(x)$, then, thanks to the subadditivity of $\\|\\cdot\\|_\\alpha$, the most convenient decomposition is $\\textbf{ju}(x)=ju_1(x)\\otimes(e_1+e_2)$ and $\\textbf{e}(\\textbf{u})(x)=c_d\\|e_1+e_2\\|_\\alpha|\\nabla u_1(x)|^{d-1}$.\n\\end{itemize}\nFinally, we consider the functional\n\\begin{equation}\\label{energyforE}\n\\mathbb{E}(\\textbf{u})=\\int_{\\R^{d}}{\\textbf e}(\\textbf{u})(x)\\, dx.\n\\end{equation}\n\\begin{prop}\\label{functionalE}Let $\\psi$ be the norm \ndefined as\n\\begin{equation}\n\\psi(h)=\\begin{cases} \n\\|h\\|_{\\alpha}=\\left(\\sum_{j=1}^{n-1}|h_{j}|^{\\frac{1}{\\alpha}}\\right)^{\\alpha} & \\mbox{in case } \\alpha \\in (0, 1], \\, h\\in \\Z^{n-1} \\\\ \n\\|h\\|_{0}=\\max \\lbrace |h_{1}|,\\ldots,|h_{n-1}| \\rbrace & \\mbox{in case } \\alpha=0, \\, h\\in \\Z^{n-1}\\,.\n\\end{cases}\n\\end{equation}\nLet $\\mathbb{E}$ be the functional defined above, in \\eqref{energyforE}. If $\\alpha=1$, i.e., $\\psi=\\|\\cdot \\|_{1}$, we choose $\\mathbb{E}=\\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}$. \nThen $\\mathbb{E}$ is suitably related to $\\mathbb{M}$ and $\\mathbb{H}$ in the sense of Definition \\ref{def:suiten}.\n\\end{prop}\n\\begin{proof}\nWe start with property {\\it (i)}. Let $\\omega \\in C^{\\infty}_{c}\\left(\\R^{d};\\Lambda^{1}_{(\\R^{n-1},\\psi)}(\\R^{d})\\right)$ be a test form with comass norm $\\sup_{x\\in \\R^d} |\\omega \\,|_{c,\\psi} \\leq 1$.
By using the very definition of $|\\cdot |_{m,\\psi}$, see \\eqref{nuclearnorm}, we obtain\n\\begin{equation}\\label{comparenuclearnorm}\n\\begin{aligned}\n|\\, \\textbf{ju} (\\omega)\\,|=\\left|\\int_{\\R^d}\\langle \\textbf{ju}(x), \\omega(x) \\rangle\\, dx\\right| \\leq \\int_{\\R^{d}} |\\textbf{ju} (x)|_{m,\\psi}\\,dx\\,.\n\\end{aligned}\n\\end{equation}\nOn the other hand, as already observed, for a.e.\\ $x\\in \\R^d$ we have\n\\begin{equation*}\n\\begin{aligned}\n|\\textbf{ju}(x)|_{m,\\psi}\\leq \\inf\\left\\{\\sum_{I\\in{\\mathcal{I}}}\\|e_I \\|_{\\alpha}|j u_I(x)|:\\, \\mbox{where }\\textbf{ju}(x)=\\sum_{I\\in{\\mathcal I}}ju_{I}(x)\\otimes e_I,\\,\\mathcal{I}\\text{ part. of }\\{1,\\ldots,n-1\\}\\right\\}\\,.\n\\end{aligned}\n\\end{equation*}\nObserve that for any $v\\in H_{l}$, $l=1,\\ldots,n-1$, one has for a.e.\\ $x\\in \\R^d$\n\\begin{equation}\n|jv(x)|\\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}}|\\nabla v(x)|^{d-1}\\,,\n\\end{equation}\nsee also \\cite[page 64]{BrezisCoronLieb} and \\cite[A.1.3]{abh}.\nTherefore, we obtain that for a.e.\\ $x\\in \\R^d$\n\\begin{equation}\n|\\textbf{ju}(x)|_{m,\\psi}\\leq {\\bf e}(\\textbf{u})(x)\\,.\n\\end{equation}\nThis in turn implies that\n\\begin{equation}\n\\begin{aligned}\n|\\, \\textbf{ju} (\\omega)\\,|\\leq \\mathbb{E} (\\textbf{u})\\,.\n\\end{aligned}\n\\end{equation}\nSo, by the arbitrariness of $\\omega$, we conclude that\n\\begin{equation}\n\\mathbb{M}(\\textbf{ju})\\leq \\mathbb{E} (\\textbf{u}).\n\\end{equation} \n\nConcerning property {\\it(ii)}, assume that, in some open set $U$, each $u_{i}$ is equal to either $0$ or a given function $v\\in W^{1,d-1}_{\\rm loc}(\\R^d,\\mathbb{S}^{d-1})$; thus in $U$ the pre-jacobian $\\textbf{ju}$ can be written as $\\textbf{ju}=jv\\otimes e_{I}$, for some $I\\subset \\lbrace 1,\\ldots,n-1 \\rbrace$.
This implies that\n\\begin{equation}\n{\\bf e}(\\textbf{u})(x)\\leq \\| e_{I} \\|_{\\alpha} \\frac{1}{(d-1)^{\\frac{d-1}{2}}} |\\nabla v (x)|^{d-1}\n\\end{equation}\nfor a.e.\\ $x\\in U$, so we can conclude that\n\\begin{equation}\n\\mathbb{E}(\\textbf{u}\\chi_U)\\leq \\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}(\\textbf{u}\\chi_U)\\,.\n\\end{equation}\n\n\nFinally, if $k=1$ (i.e., we have just one component $\\textbf{u}=u$), it is obvious that\n\\begin{equation}\n{\\bf e}(\\textbf{u})=\\frac{1}{(d-1)^{\\frac{d-1}{2}}} |\\nabla u|^{d-1}.\n\\end{equation}\nTo conclude the proof, we observe that, in case $\\alpha=1$, that is, $\\psi=\\| \\cdot \\|_{1}$, we have chosen $\\mathbb{E}=\\frac{1}{(d-1)^{\\frac{d-1}{2}}}\\mathbb{H}$, and this functional obviously satisfies the three properties.\n\\end{proof}\n\\section{(Gilbert-)Steiner problems and currents with coefficients in a group}\\label{section3}\nLet us briefly recall the Gilbert-Steiner problem and the Steiner tree problem and see how they can be turned into a mass-minimization problem for integral currents in a suitable group. \n\nLet $n$ distinct points $P_{1},\\ldots, P_{n}$ in $\\R^{d}$ be given. Denote by $G(A)$ the set of all acyclic graphs $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ along which the unit masses located at $P_{1},\\ldots, P_{n-1}$ are transported to the target point $P_n$ (single sink). Here $\\lambda_i$ is a simple rectifiable curve and represents the path of the mass at $P_{i}$ flowing from $P_{i}$ to $P_{n}$. In \\cite{MaMa2, MaMa}, the occurrence of cycles in minimizers is ruled out; thus the problem $(I)$ is proved to be equivalent to \n\n$$\n\\inf \\left\\{ \\int_L |\\theta(x)|^\\alpha d{\\mathcal H}^1(x), \\;\\; L\\in G(A), \\;\\;\\theta(x) = \\sum_{i=1}^{n-1} \\mathbf{1}_{\\lambda_i}(x) \\right\\}\n\\leqno{(I)}\n$$\nwhere $\\theta$ is the mass density along the network $L$.
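The concavity of the cost in problem $(I)$ is what makes mass sharing convenient: an edge traveled by two unit masses is charged $2^{\alpha}<2$ times its length when $\alpha<1$. The following numerical sketch is purely illustrative (the configuration of two unit sources and one sink, and the naive grid search over branching points, are our own choices, not taken from the literature): it compares a ``V''-shaped network, in which each mass travels straight to the sink, with a branched ``Y''-shaped one.

```python
import math

def gilbert_cost(edges, alpha):
    """Cost sum over edges of (mass on edge)^alpha * length, as in problem (I)."""
    return sum((m ** alpha) * math.dist(p, q) for p, q, m in edges)

sources = [(-1.0, 1.0), (-1.0, -1.0)]
sink = (1.0, 0.0)
alpha = 0.5

# "V" network: each unit mass travels straight to the sink.
direct = gilbert_cost([(s, sink, 1) for s in sources], alpha)

# "Y" network: the masses merge at a branching point (x, 0) and then share
# one edge carrying mass 2, charged 2^alpha < 2 times its length.
def y_cost(x):
    b = (x, 0.0)
    edges = [(s, b, 1) for s in sources] + [(b, sink, 2)]
    return gilbert_cost(edges, alpha)

best = min(y_cost(-1 + 0.001 * k) for k in range(2001))
print(direct, best)  # the branched network is strictly cheaper
```

Here `direct` equals $2\sqrt5\approx4.47$, while already the branching point at the origin gives $3\sqrt2\approx4.24$, so the infimum in $(I)$ is attained on a genuinely branched structure.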
Moreover, in \\cite{MaMa2, MaMa} the problem $(I)$ can be turned into a mass-minimization problem for integral currents with coefficients in the group $\\Z^{n-1}$: the idea is to label differently the masses located at $P_{1}, P_{2}, \\ldots, P_{n-1}$ (source points) and to associate the source points $P_{1},\\ldots, P_{n-1}$ to the single sink $P_{n}$. Formally, we produce a $0$-dimensional rectifiable current (a.k.a. a measure) with coefficients in $\\Z^{n-1}$, given by the difference between\n$$\\mu^{-}=e_{1}\\delta_{P_{1}}+e_{2}\\delta_{P_{2}}+ \\ldots +e_{n-1}\\delta_{P_{n-1}} \\mbox{ and }\\mu^{+}=(e_{1}+\\ldots+e_{n-1})\\delta_{P_{n}}\\,.$$\nWe recall that $\\lbrace e_{1},e_{2},\\ldots,e_{n-1} \\rbrace$ is the canonical basis of $\\R^{n-1}$. The measures $\\mu^{-}, \\mu^{+}$ are the marginals of the problem $(I)$. To any acyclic graph $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ we associate a current $T$ with coefficients in the group $\\Z^{n-1}$ as follows: to each $\\lambda_{i}$ associate the current $T_{i}=\\llbracket\\lambda_{i},\\tau_{i},e_{i}\\rrbracket$, where $\\tau_{i}$ is the tangent vector of $\\lambda_{i}$. We associate to the graph $L = \\bigcup_{i=1}^{n-1}\\lambda_i$ the current $T=(T_{1},\\ldots,T_{n-1})$ with coefficients in $\\Z^{n-1}$.
By construction we obtain $$\\partial T=\\mu^{+}-\\mu^{-}\\,.$$\nChoosing the norm $\\psi$ on $\\Z^{n-1}$ as\n\\begin{equation}\\label{normeuclidean}\n\\psi(h)=\\begin{cases} \n\\|h\\|_{\\alpha}=\\left(\\sum_{j=1}^{n-1}|h_{j}|^{\\frac{1}{\\alpha}}\\right)^{\\alpha} & \\mbox{in case } \\alpha \\in (0, 1], \\, h\\in \\Z^{n-1} \\\\ \n\\|h\\|_{0}=\\max \\lbrace |h_{1}|,\\ldots,|h_{n-1}| \\rbrace & \\mbox{in case } \\alpha=0, \\, h\\in \\Z^{n-1}\\,,\n\\end{cases}\n\\end{equation}\n\n\nin view of Remark \\ref{rmk:mass}, the problem $(I)$ is equivalent to \n$$\n\\inf \\left\\{ \\mathbb{M}(T), \\hspace{0.2cm} \\partial T =\\mu^{+}-\\mu^{-} \\right\\}\n\\,.\\leqno{(M)}\n$$\nWe refer the reader to \\cite{MaMa2, MaMa} for more details. From now on we restrict our attention to the coefficients group $(\\Z^{n-1}, \\|\\cdot\\|_{\\alpha})$, $0\\leq \\alpha \\leq 1$.\n\\begin{remark}\\label{prejacobiancurrent}\nLet $\\textbf{u}=(u_1,\\ldots,u_{n-1})\\in H_1\\times \\ldots \\times H_{n-1}$. One has\n\\begin{equation}\\label{boundaryofprejacobian}\n\\frac{1}{\\alpha_{d-1}} \\partial \\,\\textbf{ju}=\\mu^{+}-\\mu^{-}\\,.\n\\end{equation}\n\\end{remark}\n\nWe remark that turning the problem $(I)$ into a mass-minimization problem allows one to rely on the (dual) notion of calibration, which is a useful tool to prove minimality, especially when dealing with concrete configurations. We also recall that the existence of a calibration (see Definition \\ref{Calibration}) associated with a current $T$ implies that $T$ is a mass-minimizing current for the boundary $\\partial T$.\n\\begin{example}\\label{examplecalib}{\\rm\n\t\tLet us consider an irrigation problem with $\\alpha=\\frac{1}{2}$. We will consider a minimal network joining $n+1$ points in $\\R^{n}$; the construction of the network is explained below.
Let us stress that in this example the coincidence of the dimension of the ambient space with the dimension of the space of coefficients is needed.\n\t\t\nAdopting the point of view of \\cite{HarveyLawson}, we propose a calibration first, and only {\\it a posteriori} we construct a current which fulfills the requirement (i) in Definition \\ref{Calibration}. We briefly recall that the problem $(I)$ can be seen as the mass-minimization problem for currents with coefficients in $\\Z^{n}$ with the norm $\\Vert \\cdot \\Vert_{\\frac{1}{2}}$. \n\nLet $\\{{\\rm d}x_1,\\ldots,{\\rm d}x_n\\}$ be the (dual) basis of covectors of $\\R^n={\\rm span}(e_1,\\ldots,e_n)$. We now prove that the differential form \n\t\t\\[\n\t\t\\omega=\n\t\t\\begin{bmatrix}\n\t\t{\\rm d}x_{1}\\\\\n\t\t{\\rm d}x_{2}\\\\\n\t\t\\vdots \\\\\n\t\t{\\rm d}x_{n}\n\t\t\\end{bmatrix}\n\t\t\\]\n\t\tsatisfies conditions (ii) and (iii) in Definition \\ref{Calibration}. Obviously ${\\rm d}\\omega=0$. Moreover,\n\t\tlet $\\tau=(\\tau_{1}, \\tau_{2},\\ldots,\\tau_{n})\\in \\R^{n}$ be a unit vector (with respect to the Euclidean norm). Since the norm $\\psi=\\|\\cdot\\|_{\\frac 12}$ coincides with the Euclidean norm on $\\R^{n}$, and is therefore self-dual, we can compute $\\psi^{*} ( \\langle \\omega; \\tau, \\cdot \\rangle )=( \\tau_{1}^{2}+\\tau_{2}^{2}+\\tau_{3}^{2}+\\ldots+\\tau_{n}^{2})^{\\frac{1}{2}}=1$, so that condition (iii) is satisfied. \n\t\t\nWe will now build a configuration of $n+1$ points $P_{1}, P_{2}, \\ldots, P_{n+1}$ in $\\R^{n}$ calibrated by $\\omega$. Notice that the network has $n-1$ branching points and is somehow generic in character. More precisely, our strategy in building such a configuration is to choose the end points and the branching points following the directions parallel to $e_{1}, e_{2}, e_{3}, \\ldots, e_{n}, e_{1}+e_{2},e_{1}+e_{2}+e_{3}, \\ldots,e_{1}+e_{2}+\\ldots+e_{n-1}, e_{1}+e_{2}+\\ldots+e_{n}$. We illustrate the construction in $\\R^{3}, \\R^{4}$.
This process can be extended to any dimension.\n\n\\begin{itemize}\n\\item In $\\R^{3}$, let us consider $P_{1}=(-1, 0, 0)$, $P_{2}=(0, -1, 0)$, $P_{3}=(1, 1, -1)$, $P_{4}=(2, 2, 1)$. Take, as \n\t\t\tbranching points, $G_{1}=(0, 0, 0)$, $G_{2}=(1, 1, 0)$. Now consider the current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$ with support $\\Sigma$ obtained by the union of the segments $\\overline{P_1G_1},\\overline{P_2G_1},\\overline{G_1G_2},\\overline{P_3G_2},\\overline{G_2P_4}$. \n\t\t\t\\begin{figure}[tbh]\n\t\t\t\t\\centering\n\t\t\t\t\\begin{tabular}{cc}\n\t\t\t\t\t\\includegraphics[width=0.4\\linewidth]{Constructionpoints}\n\t\t\t\t\\end{tabular}\n\t\t\t\t\\caption{The picture illustrates the construction of $T$.}\n\t\t\t\t\\label{fig:1d_exe}\n\t\t\t\\end{figure}\n\t\t\t\n\t\t\tThe multiplicity $\\theta$ is set as\n\t\t\t$$\\theta(x)\n\t\t\t=\\begin{cases} \n\t\t\te_{1} & \\mbox{if } x\\in \\overline{P_{1}G_{1}} \\\\ \n\t\t\te_{2} & \\mbox{if } x\\in \\overline{P_{2}G_{1}} \\\\\n\t\t\te_{1}+e_{2} & \\mbox{if } x\\in \\overline{G_{1}G_{2}} \\\\\n\t\t\te_{3} & \\mbox{if } x\\in \\overline{P_{3}G_{2}} \\\\\n\t\t\te_{1}+e_{2}+e_{3} & \\mbox{if } x\\in \\overline{G_{2}P_{4}} \\\\\n\t\t\t0 & \\mbox{elsewhere}.\\\\\n\t\t\t\\end{cases}$$\t\t\t\nWe observe that $T$ is calibrated by $\\omega$, thus $T$ is a minimal network for the irrigation problem with sources $P_1,P_2$ and $P_3$ and sink $P_4$. Notice that the edges of the network meet at the branching points at $90$ degree angles, as is known for branched optimal structures with cost determined by $\\alpha=1/2$.\n\\item In $\\R^{4}$, we keep points $P_{1}=(-1, 0, 0, 0)$, $P_{2}=(0, -1, 0, 0)$, $P_{3}=(1, 1, -1, 0)$ and, in general, the whole network of the example above as embedded in $\\R^{4}$. We relabel the previous sink as $G_{3}:=(2,2,1,0)$. We now pick $P_{4}$ and $P_{5}$ in such a way that $\\overrightarrow{P_4G_{3}}=e_{4}$ and $\\overrightarrow{G_{3}P_{5}}=e_{1}+e_{2}+e_{3}+e_{4}$.
For instance, we choose\t$P_{4}=(2, 2, 1, -1)$ and $P_{5}=(3, 3, 2, 1)$. As before, the marginals of the irrigation problem are $P_1,P_2,P_3,P_4$ as sources and $P_5$ as sink, while $G_1,G_2,G_3$ are branching points.\n\nLet us now consider the current $T=\\llbracket\\Sigma, \\tau, \\theta\\rrbracket$ supported on the union of segments $\\overline{P_1G_1},\\overline{P_2G_1},\\overline{G_1G_2},\\overline{P_3G_2},\\overline{G_2G_3},\\overline{P_4G_3},\\overline{G_3P_5}$ and multiplicity $\\theta$ given by\n$$\n\t\t\t\\theta(x)\n\t\t\t=\\begin{cases} \n\t\t\te_{1} & \\mbox{if } x\\in \\overline{P_{1}G_{1}} \\\\ \n\t\t\te_{2} & \\mbox{if } x\\in \\overline{P_{2}G_{1}} \\\\\n\t\t\te_{1}+e_{2} & \\mbox{if } x\\in \\overline{G_{1}G_{2}} \\\\\n\t\t\te_{3} & \\mbox{if } x\\in \\overline{P_{3}G_{2}} \\\\\n\t\t\te_{1}+e_{2}+e_{3} & \\mbox{if } x\\in \\overline{G_{2}G_{3}} \\\\\n\t\t\te_{4} & \\mbox{if } x\\in \\overline{P_{4}G_{3}} \\\\\n\t\t\te_{1}+e_{2}+e_{3}+e_{4} & \\mbox{if } x\\in \\overline{G_{3}P_{5}} \\\\\n\t\t\t0 & \\mbox{elsewhere}.\\\\\n\t\t\t\\end{cases}$$\n\t\t\tIt is easy to check that the orientation of each segment is parallel to its multiplicity (viewed as a vector in $\\R^{4}$), \n\t\t\ttherefore $T$ is calibrated by $\\omega$.\t\t\n\t\t\t\\item This procedure can be replicated to construct a configuration of $n+1$ points $P_{1}, P_{2}, \\ldots, P_{n+1}$ in $\\R^{n}$ calibrated by $\\omega$, always in the case $\\alpha=1/2$.\t\n\t\t\\end{itemize}\n\t\t}\\end{example}\n\t\t\\begin{example}{\\rm\n\t\t\tWe now consider a Steiner tree problem. As in the previous example, we aim to construct calibrated configurations joining $n+1$ points in $\\R^{n}$ (with $n-1$ branching points).
Consider the following differential form:\n\t\t\t\\[\n\t\t\t\\omega=\n\t\t\t\\begin{bmatrix}\n\t\t\t\\frac{1}{2}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{2}{\\rm d}x_{2}\\\\\n\t\t\t\\frac{1}{2}{\\rm d}x_{1}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{2}\\\\\n\t\t\t\\frac{-1}{2}{\\rm d}x_{1}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{3}\\\\\t\n\t\t\t\\frac{-1}{4}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{4}{\\rm d}x_{3}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{4}\\\\\n\t\t\t\\frac{-1}{8}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{8}{\\rm d}x_{3}+\\frac{\\sqrt{3}}{4}{\\rm d}x_{4}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{5}\\\\\n\t\t\t\\vdots\\\\\n\t\t\t\\frac{-1}{2^{n-2}}{\\rm d}x_{1}+\\frac{\\sqrt{3}}{2^{n-2}}{\\rm d}x_{3}+\\frac{\\sqrt{3}}{2^{n-3}}{\\rm d}x_{4}+\\ldots+\\frac{\\sqrt{3}}{2^{n-k}}{\\rm d}x_{k+1}+\\ldots+\\frac{\\sqrt{3}}{4}{\\rm d}x_{n-1}-\\frac{\\sqrt{3}}{2}{\\rm d}x_{n}\n\t\t\t\\end{bmatrix}\\,.\n\t\t\t\\]\n\t\t\tIt is easy to check that the differential form $\\omega$ is a calibration only among those currents having multiplicities \n\t\t\t$e_{1}, e_{2}, e_{3}, \\ldots, e_{n}, e_{1}+e_{2},e_{1}+e_{2}+e_{3}, \\ldots,e_{1}+e_{2}+\\ldots+e_{n-1}, e_{1}+e_{2}+\\ldots+e_{n}$, and hence it allows one to prove the minimality of configurations within the class of currents with those multiplicities (cf.\\ \\cite{Marcello} for the notion of calibrations in families).
Nevertheless, this is enough to prove the minimality of global minimizers for some configurations.\n\t\t\t\n\t\t\begin{itemize}\n\t\t\t\item \t Consider $n=3$ and\n\t\t\t$P_{1}=\left(\frac{-1}{2},\frac{\sqrt{3}}{2},0\right)$, $P_{2}=\left(\frac{-1}{2},\frac{-\sqrt{3}}{2}, 0\right)$, $P_{3}=\left(\frac{\sqrt{6}}{2}-\frac{1}{2}, 0 ,\frac{\sqrt{3}}{2}\right)$, $P_{4}=\left(\frac{\sqrt{6}}{2}-\frac{1}{2}, 0 , -\frac{\sqrt{3}}{2}\right)$ (see also the example in \cite[Section $3$]{BoOrOu2}).\nIndeed, we observe that the lengths $|\overline{P_{1}P_{2}}|=|\overline{P_{1}P_{3}}|=|\overline{P_{1}P_{4}}|=|\overline{P_{2}P_{3}}|=|\overline{P_{2}P_{4}}|=|\overline{P_{3}P_{4}}|=\sqrt{3}$ are all equal, meaning that the convex envelope of the points $P_{1},P_{2},P_{3},P_{4}$ is a regular tetrahedron: this observation allows us to restrict our investigation to currents having multiplicities $e_{1}, e_{2}, e_{3}, e_{1}+e_{2}, e_{1}+e_{2}+e_{3}$. More precisely, given any $1$-dimensional integral current $T$ with $\partial T=(e_{1}+e_{2}+e_{3})\delta_{P_{4}}-e_{1}\delta_{P_{1}}-e_{2}\delta_{P_{2}}-e_{3}\delta_{P_{3}}$ whose support is an acyclic graph with two additional Steiner points, we can always construct a corresponding current $L$ with multiplicities $e_{1}, e_{2}, e_{3}$, $e_{1}+e_{2}$, $e_{1}+e_{2}+e_{3}$ having the same boundary as $T$ and such that $\mathbb{M}(T)=\mathbb{M}(L)$, thanks to the symmetric configuration $P_{1}, P_{2}, P_{3}, P_{4}$ combined with the fact that any minimal configuration cannot have fewer than two Steiner points.
Indeed, arguing by contradiction, if a minimal configuration for the vertices of a tetrahedron had a single Steiner point, then it would violate the well-known property that edges meet at $120$-degree angles at Steiner points.\nTherefore, $\omega$ calibrates the current $T=\llbracket\Sigma, \tau, \theta\rrbracket$, where $S_{1}=(0, 0, 0), S_{2}=\left(\frac{\sqrt{6}}{2}-1, 0, 0\right)$ are the Steiner points, $\Sigma=\overline{P_{1}S_{1}}\cup \overline{P_{2}S_{1}} \cup \overline{S_{1}S_{2}} \cup \overline{P_{3}S_{2}} \cup \overline{S_{2}P_{4}}$ and the multiplicity is given by\n\t\t\t$$\theta(x)\n\t\t\t=\begin{cases} \n\t\t\te_{1} & \mbox{if } x\in \overline{P_{1}S_{1}} \\ \n\t\t\te_{2} & \mbox{if } x\in \overline{P_{2}S_{1}} \\\n\t\t\te_{1}+e_{2} & \mbox{if } x\in \overline{S_{1}S_{2}} \\\n\t\t\te_{3} & \mbox{if } x\in \overline{P_{3}S_{2}} \\\n\t\t\te_{1}+e_{2}+e_{3} & \mbox{if } x\in \overline{S_{2}P_{4}} \\\n\t\t\t0 & \mbox{elsewhere}\,.\\\n\t\t\t\end{cases}$$\t\n\t\t\t\item Using the same strategy as in Example \ref{examplecalib}, we can build a configuration $P_{1}, P_{2}, P_{3}, P_{4}, P_{5}$ in $\R^{4}$ starting from the points $P_{1}, P_{2}, P_{3}, P_{4}$ above, in such a way that the new configuration is calibrated by $\omega$ among all currents with multiplicities $e_{1}, e_{2}, e_{3}, e_{4}, e_{1}+e_{2},e_{1}+e_{2}+e_{3}, e_{1}+e_{2}+e_{3}+e_{4}$. This construction can be extended to any dimension.\n\t\t\end{itemize}\n}\end{example}\n\n\n\section{Proof of the main result}\label{Prooftheorem1}\n\nThe proof of Theorem $\ref{thm1}$ is much in the spirit of the dipole construction of $\cite{BrezisCoronLieb, abh}$ (in the version of $\cite{ABO1}$); it relies on the properties of the functional $\mathbb{E}$ and makes use of the existence of a calibration.
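The geometric facts used in the three-dimensional Steiner example above — all pairwise distances equal to $\sqrt{3}$ and the $120$-degree condition at the Steiner points $S_1$ and $S_2$ — can be checked numerically. A sketch in Python (numpy; points hard-coded from the example):

```python
import itertools
import numpy as np

s3, s6 = np.sqrt(3.0), np.sqrt(6.0)
P = [np.array([-0.5,  s3 / 2, 0.0]),          # P1
     np.array([-0.5, -s3 / 2, 0.0]),          # P2
     np.array([s6 / 2 - 0.5, 0.0,  s3 / 2]),  # P3
     np.array([s6 / 2 - 0.5, 0.0, -s3 / 2])]  # P4
S1 = np.array([0.0, 0.0, 0.0])
S2 = np.array([s6 / 2 - 1.0, 0.0, 0.0])

# All pairwise distances equal sqrt(3): a regular tetrahedron.
dists = [np.linalg.norm(A - B) for A, B in itertools.combinations(P, 2)]

def angles_at(S, neighbours):
    """Pairwise angles (degrees) between the edges meeting at S."""
    u = [(Q - S) / np.linalg.norm(Q - S) for Q in neighbours]
    return [np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
            for a, b in itertools.combinations(u, 2)]

ang1 = angles_at(S1, [P[0], P[1], S2])  # angles at S1
ang2 = angles_at(S2, [P[2], P[3], S1])  # angles at S2
```

Both `ang1` and `ang2` consist of three $120$-degree angles, as required at Steiner points.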
\n\\begin{proof}\nLet $\\mathbb{E}$ be the functional which fulfills the requirements of Definition \\ref{def:suiten}.\n\tIn the first steps we prove the inequality \n\t$$\\inf{\\mathbb{E}}\\leq \\alpha_{d-1} \\inf{I_\\alpha}.$$\n\t\nWe briefly recall the dipole construction (see, for instance, \\cite[Theorem $3.1$, Theorem $8.1$]{BrezisCoronLieb}). Given a segment $\\overline{AB}\\subset\\R^d$ and a pair of parameters $\\beta,\\gamma>0$, we define \n\\begin{equation}\\label{neigh}\nU:=\\{x\\in\\R^d:\\,{\\rm dist}(x,\\overline{AB})<\\min\\{\\beta,\\gamma\\,{\\rm dist(x,\\{A,B\\})}\\}\\}\\subset\\R^d\n\\end{equation} \nto be a pencil-shaped neighbourhood with core $\\overline{AB}$ and parameters $\\beta,\\gamma$. For any fixed $\\varepsilon>0$, the dipole construction produces a function $u\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ with the following properties:\n\t\\begin{itemize}\n\t\\item $u\\equiv (0,\\ldots,0,1)$ in $\\R^d\\setminus U$;\n\t\\item $Ju=\\frac{\\alpha_{d-1}}{d}(\\delta_{A}-\\delta_{B})$;\n\t\\item moreover the map $u$ satisfies the following inequality\n\t\\begin{equation}\\label{estimateharmonic}\n\t\\frac{1}{(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1}}\\int_{\\R^{d}}|\\nabla u|^{d-1}dx \\leq |AB|+\\varepsilon\\,,\n\t\\end{equation}\n\t\\end{itemize}\t\n\\textbf{Step 1.} \t\nLet $L=\\bigcup_{i=1}^{n-1}\\lambda_i$ be an acyclic connected polyhedral graph, and $T$ be the associated current with coefficients in $\\Z^{n-1}$ corresponding to $L$. Since $L$ is polyhedral, it can also be written as $L=\\bigcup_{j=1}^{k} I_{j}$, where $I_j$ are weighted segments. For each segment $I_{j}$ we can find parameters $\\delta_j,\\gamma_j>0$ such that the pencil-shaped neighbourhood $U_j = \\left\\{ x \\in \\R^d:\\, \\text{dist}(x,I_{j}) \\leq \\min \\left\\{ \\beta_{j}, \\gamma_j\\text{dist}(x,\\partial I_{j}) \\right\\} \\right\\}$ (modelled after \\eqref{neigh}) is essentially disjoint from $U_\\ell$ for every $\\ell\\neq j$. 
Then, for every $i=1,\\ldots,n-1$, let \n\t$V_{i}=\\bigcup_{j\\in K_i} U_j$\n\tbe a sharp covering of the path $\\lambda_{i}$. To be precise, we choose $K_i\\subset\\{1,\\ldots,k\\}$ such that $V_i\\cap U_\\ell$ is at most an endpoint of the segment $I_\\ell$, if $\\ell\\notin K_i$.\n\t\t\\begin{figure}[tbh]\n\t\t\\centering\n\t\t\\begin{tabular}{cc}\n\t\t\t\\includegraphics[width=0.4\\linewidth]{Dipoleconstruction}\n\t\t\\end{tabular}\n\t\t\\caption{A dipole construction of a Y-shaped graph connecting $3$ points.}\n\t\t\\label{fig:1d_exe2}\n\t\t\\end{figure}\t\n\t\n\t\nFor each path $\\lambda_i$, $i=1,\\ldots,n-1$, we build the map $u_{i}\\in H_{i}$ in such a way that it coincides with a dipole associated to the segment $I_j$ in the neighbourhood $U_j$ for each $j\\in K_i$. We put $u_i\\equiv (0,\\ldots,0,1)$ in $\\R^d\\setminus V_i$.\n\t\nWe obtain that $u_{i}\\in W^{1,d-1}_{\\rm loc}(\\R^{d}; \\mathbb{S}^{d-1})$ and satisfies $Ju_{i}=\\frac{\\alpha_{d-1}}{d}(\\delta_{P_{i}}-\\delta_{P_{n}})$. Moreover, summing up inequality \\eqref{estimateharmonic} repeated for each segment $I_j$ with $j\\in K_i$, the following inequality holds\n\t\\begin{equation*}\n\t\\frac{1}{(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1}}\\int_{\\R^{d}}|\\nabla u_{i}|^{d-1}dx \\leq \\mathbb{M}(T_{i})+k\\varepsilon\\,,\n\t\\end{equation*}\n\twhere $T_{i}$ is the (classical) integral current corresponding to the $i^{\\rm th}$ component of $T$. \n\t\n\t\nIn particular, let us stress that the maps $u_1,\\ldots,u_{n-1}$ have the following further property: if some paths $\\lambda_{i_1},\\lambda_{i_2},\\ldots,\\lambda_{i_m}$ have a common segment $I_j$ for some $j\\in K_{i_1}\\cap K_{i_2}\\cap\\ldots\\cap K_{i_m}$, then $u_{i_1},\\ldots,u_{i_m}$ agree in $U_j$. 
Furthermore, setting $h_{i_{1}, i_{2},\ldots,i_{m}}=(0,\ldots,|\nabla u_{i_{1}}|^{d-1},\ldots, |\nabla u_{i_{m}}|^{d-1},\ldots,0)$, we obtain\n\begin{equation*}\n\t\t\frac{1}{(d-1)^{\frac{d-1}{2}}\alpha_{d-1}}\int_{U_j}||h_{i_{1}, i_{2},\ldots,i_{m}}||_{\alpha}dx \leq m^{\alpha}(|I_j|+k\varepsilon)\,,\n\t\t\end{equation*}\n\t\tfor every $\alpha\in[0,1]$.\n\t\n\tCombining all the previous observations, we can conclude that, given any $\tilde\epsilon>0$, there exist $u_{i}\in H_{i}$, $i=1,\ldots,n-1$, such that \n\t\begin{align*}\n\t\int_{\R^{d}} ||(|\nabla u_{1}|^{d-1}, |\nabla u_{2}|^{d-1},\ldots,|\nabla u_{n-1}|^{d-1})||_{\alpha}\,dx\n\t\leq & (d-1)^{\frac{d-1}{2}}\alpha_{d-1} \int_{L}|\theta (x)|^{\alpha}d\mathcal{H}^{1}(x)+\tilde\epsilon\n\t\\ = & (d-1)^{\frac{d-1}{2}}\alpha_{d-1}\mathbb{M}(T)+\tilde\epsilon\,,\n\t\end{align*}\n\twhere $\theta(x) = \sum_{i=1}^{n-1} \mathbf{1}_{\lambda_i}(x)$.\nThus, by the properties of $\mathbb{E}$, one obtains that\n\begin{equation}\n\inf \mathbb{E} \leq \mathbb{E}(\textbf{u})\leq \frac{1}{(d-1)^{\frac{d-1}{2}}} \mathbb{H}(\textbf{u}) \leq \alpha_{d-1}\mathbb{M}(T)+\tilde\epsilon.\n\end{equation}\n\t\n\t\noindent {\bf Step 2.} Considering an arbitrary acyclic graph $L=\bigcup_{i=1}^{n-1}\lambda_i$, there is a sequence of acyclic polyhedral graphs $\left(L_{m} \right)_{m\ge 1}$, $L_{m}=\bigcup_{i=1}^{n-1}\lambda^{m}_i$ such that\n\tthe Hausdorff distance $d_{H}(\lambda^m_i, \lambda_i) \leq \frac{1}{m}$; moreover (see \cite[Lemma $3.10$]{BoOrOu}), denoting by $T$ and $T_{m}$ the associated currents with coefficients in $\Z^{n-1}$, we also have that\n\t$$\mathbb{M}(T_{m})=\int_{L_{m}}|\theta_{m}(x)|^{\alpha}\,d\mathcal{H}^{1}(x) \leq \mathbb{M}(T)=\int_{L}|\theta(x)|^{\alpha}\,d\mathcal{H}^{1}(x)
+\frac{1}{m}.$$\n\twhere $\theta_{m}(x) = \sum_{i=1}^{n-1} \mathbf{1}_{\lambda_i^{m}}(x)$. \n\tOn the other hand, by the previous construction there exists a sequence $\lbrace {\bf u}_{m} \rbrace_{m}$, ${\bf u}_{m}=(u_{1,m},\ldots,u_{n-1,m})\in H_{1}\times \ldots \times H_{n-1}$ such that\n\t\begin{align*}\n\t\inf \mathbb{E} \leq \mathbb{E}(\textbf{u}_{m})\leq \frac{1}{(d-1)^{\frac{d-1}{2}}} \mathbb{H}(\textbf{u}_{m})\n\t&\leq \alpha_{d-1} \int_{L_{m}}|\theta_{m}(x)|^{\alpha}d\mathcal{H}^{1}(x)+\frac{1}{m}\\\n\t&=\alpha_{d-1}\mathbb{M}(T_{m})+\frac{1}{m}\\\n\t&\leq \alpha_{d-1}\mathbb{M}(T)+\frac{1+\alpha_{d-1}}{m}\\\n\t&=\alpha_{d-1} \int_{L}|\theta(x)|^{\alpha}d\mathcal{H}^{1}(x)+\frac{1+\alpha_{d-1}}{m}\,.\\\n\t\end{align*}This implies that\n\t\begin{equation}\n\t\inf{\mathbb{E}}\leq \alpha_{d-1} \inf{I_\alpha}=\alpha_{d-1} \inf \mathbb{M}.\n\t\end{equation}\nOn the other hand, by property $(i)$ of Definition \ref{def:suiten}, we also have that for any $\textbf{u}=(u_1,\ldots,u_{n-1})\in H_{1}\times \ldots \times H_{n-1}$\n\begin{equation}\n\alpha_{d-1} \inf \mathbb{N} \leq \mathbb{M}(\textbf{ju}) \leq \mathbb{E}(\textbf{u})\n\end{equation}\n(see Remark \ref{prejacobiancurrent} for why the constant $\alpha_{d-1}$ appears in front of $\inf \mathbb{N}$).\nThis allows us to conclude that\n\begin{equation}\n\alpha_{d-1} \inf \mathbb{N} \leq \inf \mathbb{E}.\n\end{equation}\nTherefore we obtain the following inequality:\n\begin{equation}\n\alpha_{d-1} \inf \mathbb{N} \leq \inf{\mathbb{E}}\leq \alpha_{d-1} \inf{I_\alpha}=\alpha_{d-1} \inf \mathbb{M}.\n\end{equation}\nSince, by assumption, a minimizer of the problem $(M)$ admits a calibration, we have \n\begin{equation}\n\inf \mathbb{N}=\inf \mathbb{M}=\inf{I_\alpha}.\n\end{equation}\nThis also means that \n\begin{equation}\n\alpha_{d-1}\inf \mathbb{N}=\alpha_{d-1}\inf
\\mathbb{M}=\\alpha_{d-1}\\inf{I_\\alpha}=\\inf \\mathbb{E}\n\\end{equation}\nwhich is sought the conclusion.\n\\end{proof}\n\\begin{remark}{\\rm\n\n\n\n\n\tIn the proof of Theorem $\\ref{thm1}$, step 3, we must assume the existence of a calibration $\\omega$. Observe that, without this assumption, we still can deduce from that\n\t\\begin{equation}\\label{compare6}\n\t\\alpha_{d-1} \\inf{\\mathbb{M}}=\\alpha_{d-1}\\inf{I_\\alpha} \\geq \\inf{\\mathbb{E}} \\geq \\alpha_{d-1}\\, \\inf{\\mathbb{N}}\n\t\\end{equation}\n\twhere $\\inf{\\mathbb{N}}$ is the infimum of the problem obtained measuring the mass among $1$-dimensional normal currents with coefficients in $\\R^{n-1}$ (compare with Remark \\ref{lavrentiev}). \n\t\n\tMoreover, in case $\\alpha=1$, $\\psi=\\| \\cdot \\|_{1}$, $\\mathbb{E}=\\mathbb{H}$. First, $(I)$ turns out to coincide with the Monge-Kantorovich problem. Then,\n\t$$\\inf{\\mathbb{H}}\\geq (d-1)^{\\frac{d-1}{2}}\\alpha_{d-1} \\inf{I_\\alpha}=(d-1)^{\\frac{d-1}{2}}\\alpha_{d-1} \\inf{\\mathbb{M}\\,.}$$\n\tTo see this is to use the results of Brezis-Coron-Lieb $\\cite{BrezisCoronLieb}$ separately for each map $u_{i}$, $i=1,\\ldots,n-1$, for the energy\n\t$$\\mathbb{H}({\\bf u})=\\int_{\\R^{d}}(|\\nabla u_{1}|^{d-1}+ |\\nabla u_{2}|^{d-1}+\\ldots+|\\nabla u_{n-1}|^{d-1})\\,dx\\,,$$\n\twhere, again, ${\\bf u}=(u_{1},\\ldots,u_{n-1})\\in H_{1}\\times \\ldots \\times H_{n-1}$.\t\t\n\tThe investigation of equality cases in $\\eqref{compare6}$, when $0\\leq \\alpha <1$, will be considered in forthcoming works.\n\n}\\end{remark}\n\n\n\\section*{Acknowledgements}\nThe authors are partially supported by GNAMPA-INdAM. 
The research of the third\nauthor has been supported by European Union's Horizon 2020 programme through project 752018.\n\nThe authors wish to warmly thank Giacomo Canevari for extremely fruitful and enlightening discussions.\n\n\section{Introduction}\n\label{sec:intro}\n\nRedistribution of angular momentum in astrophysical systems is a major driver of their dynamical and\nsecular evolution. Galactic bars facilitate this process by means of gravitational torques, triggered \ninternally (spontaneously) or externally (interactively). Important aspects of stellar bar evolution\nare still being debated --- their origin and evolutionary changes in \nmorphology, growth and decay, are not entirely clear. \nTheoretical studies of angular momentum redistribution in disk-halo systems\nhave been limited almost exclusively to {\it nonrotating} halos, following pioneering \nworks on linear perturbation theory by Lynden-Bell (1962), Lynden-Bell \& Kalnajs (1972), Tremaine \& \nWeinberg (1984) and Weinberg (1985), which underscored the dominant role of orbital resonances. Numerical \nsimulations have confirmed the angular momentum flow away from disks embedded in\naxisymmetric (e.g., Sellwood 1980; Debattista \& Sellwood 1998, 2000; Tremaine \& Ostriker 1999;\nVilla-Vargas et al. 2009, 2010; review by Shlosman 2013) and triaxial \n(e.g., El-Zant \& Shlosman 2002; El-Zant et al. 2003; Berentzen et al. 2006; Berentzen \n\& Shlosman 2006; Heller et al. 2007; Machado \& Athanassoula 2010; Athanassoula et al. \n2013) halos. Resonances have been confirmed to account for the lion's share of angular momentum \ntransfer (e.g., Athanassoula 2002, 2003; Martinez-Valpuesta et al.
2006; Weinberg \\& Katz 2007), \nIn this paradigm, the halo serves as the pure sink and the disk as the net source of angular momentum.\n\nHowever, realistic cosmological halos are expected to possess a net angular momentum, acquired \nduring the maximum expansion epoch (e.g., Hoyle 1949; White 1978) and possibly during\nthe subsequent evolution (Barnes \\& Efstathiou 1987; but see Porciani et al. 2002). Simulations \nhave quantified the distribution of spin values,\n$\\lambda\\equiv J_{\\rm h}\/\\sqrt{2} M_{\\rm vir}R_{\\rm vir}v_{\\rm c}$, for cosmological dark matter (DM) \nhalos to follow a lognormal distribution, where $J_{\\rm h}$ is the\nangular momentum, $M_{\\rm vir}$ and $R_{\\rm vir}$ --- the halo virial mass and radius, and $v_{\\rm c}$\n--- the circular velocity at $R_{\\rm vir}$, with the mean value $\\lambda = 0.035\\pm 0.005$ (e.g., \nBullock et\nal. 2001). Spinning halos can increase the rate of the angular momentum absorption --- \nan issue brought up by Weinberg (1985) but never fully addressed since. Only recently has it been\nconfirmed numerically that the bar instability timescale is indeed shortened for $\\lambda>0$\n(Saha \\& Naab 2013). But these models had been terminated immediately after the bar instability had \nreached its peak, and hence avoided completely the secular stage of bar evolution.\n\nThe $\\lambda=0$ halos consist of two populations of DM particles, prograde and retrograde\n(with respect to disk spin). The amount of angular momentum in {\\it each} of these populations can vary\nfrom zero for nearly radial orbits, to a maximal one for nearly circular orbits. (Both extremes are \nmentioned \nfor pedagogical reasons only.) These extremes in angular momentum correspond to extremes in velocity\nanisotropy. Various degrees of velocity anisotropy \nin the halo lie in between, and represent a rich variety of dynamical models. 
\nStellar bars mediate the angular momentum transfer in such disk-halo systems with a broad range\nof efficiencies. The current paradigm of stellar bar evolution assumes an idealized \nversion of a nonrotating DM halo, which cannot account for the whole bounty of associated processes. \nWe address these issues in a subsequent paper (in preparation).\n\nIn this Letter we demonstrate for the first time that secular growth of galactic bars in spinning DM \nhalos is damped more strongly with increasing $\lambda$, and this effect is the result of a modified \nangular momentum transfer. Section~2 describes our numerical methods.\nResults are given in Section~3. \n \n\section{Numerics and Initial Conditions}\n\label{sec:num}\n\nWe use the $N$-body part of the tree-particle-mesh Smoothed Particle Hydrodynamics code \nGADGET-3, originally described in Springel (2005). The units of mass and distance are taken as \n$10^{11}\,M_\odot$ and 1\,kpc, respectively. \nWe use $N_{\rm h} = 10^6$ particles for the DM halo, and $N_{\rm d} = 2\times 10^5$ for stars.\nConvergence models have been run with $N_{\rm h} = 4\times 10^6$ and $N_{\rm d} = 4\times 10^5$,\nin compliance with the Dubinski et al. (2009) study of discrete resonance interactions\nbetween the bar and halo orbits.\nThe gravitational softening is $\epsilon_{\rm grav}=50$\,pc for stars and DM.\nTo simplify the analysis we have ignored the stellar bulge. The opening angle $\theta$ of the\ntree code has been reduced from 0.5 used in cosmological simulations to 0.4, which increases\nthe quality of the force calculations. Our models\nhave been run for 10\,Gyr with an energy conservation of 0.08\% and angular momentum\nconservation of 0.05\% over this time. \n\nTo construct the initial conditions, we have used a novel method introduced by Rodionov \& Sotnikova\n(2006), see also Rodionov et al. (2009). We provide only minimal details for this method,\nwhich is elaborated elsewhere.
It is based on the constrained\nevolution of a dynamical system. The basic steps include (1) constructing the model using prescribed\npositions of the particles with some (non-equilibrium) velocities, (2) allowing the particles to evolve \nfor a short time, which leads to modified positions and velocities, (3) returning the particles\nto the old positions with the new velocities, and (4) iterating on the previous steps until\nvelocities converge to equilibrium values. This results in a near-equilibrium dynamical system,\nwhich is then evolved. \n\nThe initial disk has been constructed as exponential, with the volume density given by \n\n\begin{equation}\n\rho_{\rm d}(R,z) = \biggl(\frac{M_{\rm d}}{4\pi h^2 z_0}\biggr)\,{\rm exp}(-R\/h) \n \,{\rm sech}^2\biggl(\frac{z}{z_0}\biggr),\n\end{equation}\nwhere $M_{\rm d}=6.3\times 10^{10}\,M_\odot$ is the disk mass, $h=2.85$\,kpc is its radial \nscalelength, and $z_0=0.6$\,kpc is the scaleheight. $R$ and $z$ represent the cylindrical coordinates. \n\nThe halo density is a Navarro, Frenk \& White (1996, NFW) profile, modified by a central core and a Gaussian cutoff:\n\n\begin{equation}\n\rho_{\rm h}(r) = \frac{\rho_{\rm s}\,e^{-(r\/r_{\rm t})^2}}{[(r+r_{\rm c})\/r_{\rm s}](1+r\/r_{\rm s})^2}\n\end{equation}\nwhere $\rho_{\rm h}(r)$ is the DM density in spherical coordinates, $\rho_{\rm s}$\nis the (fitting) density parameter, and $r_{\rm s}=9$\,kpc is the characteristic radius, at which the power-law slope is approximately equal\nto $-2$, and $r_{\rm c}$ is the radius of a central density core. We used the Gaussian cutoffs at \n$r_{\rm t}=86$\,kpc for the halo and $R_{\rm t}=6h\sim 17$\,kpc\nfor the disk models, respectively. The halo mass is $M_{\rm h} = 6.3\times 10^{11}\,M_\odot$,\nand the halo-to-disk mass ratio within $R_{\rm t}$ is $\sim 2$. Other ratios have\nbeen explored as well. Oblate halos with various \npolar-to-equatorial axis ratios, $q=c\/a$, \nhave been analyzed, with $0.8\ltorder q\ltorder 1$.
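With the quoted parameters, the stated slope behaviour at $r_{\rm s}$ can be checked directly; a sketch in Python ($\rho_{\rm s}$ is set to an arbitrary value, since the logarithmic slope does not depend on it):

```python
import numpy as np

def rho_h(r, rho_s=1.0, r_s=9.0, r_c=1.4, r_t=86.0):
    """Cored NFW density with a Gaussian cutoff, as in eq. (2)."""
    return rho_s * np.exp(-(r / r_t) ** 2) / (
        ((r + r_c) / r_s) * (1.0 + r / r_s) ** 2
    )

# Logarithmic slope d ln(rho) / d ln(r) at r = r_s via a centred difference;
# the text states it is approximately -2 there.
r, eps = 9.0, 1e-5
slope = (np.log(rho_h(r * (1 + eps))) - np.log(rho_h(r * (1 - eps)))) / (2 * eps)
```

With the small core $r_{\rm c}=1.4$\,kpc the slope at $r_{\rm s}$ comes out slightly shallower than $-2$ (about $-1.9$), consistent with "approximately equal to $-2$".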
Here, we limit our discussion to cuspy halos \nwith $q\\sim 1$, and a small \ncore of $r_{\\rm c}=1.4$\\,kpc. Other profiles, such as the large core NFW and\nisothermal sphere density profiles, have been implemented as well, and\nresulted in qualitatively similar evolution. Dispersion velocity anisotropy, $\\beta$, has been \nconstrained initially to be mild, using the novel method of Constrained Evolution \ndiscussed above. Velocities have been taken to be isotropic in the central region and the\nanisotropy increased to $\\beta\\sim 0.3$ outside the disk.\n\nDisk radial dispersion velocities have been taken as $\\sigma_{\\rm R}(R)= \\sigma_{\\rm R,0}\\,{\\rm\nexp}(-R\/2h)$ with $\\sigma_{\\rm R,0}=143\\,{\\rm km\\,s^{-1}}$. This results in $Q=1.5$\nat $R\\sim 2.42\\,h$, and increasing values toward the center and outer disk. Vertical velocity\ndispersions are $\\sigma_{\\rm z}(R)=\\sigma_{\\rm z,0}\\,{\\rm exp}(-R\/2h)$, with $\\sigma_{\\rm\nz,0}=98\\,{\\rm km\\,s^{-1}}$.\n\nTo form spinning halos, we have flipped the angular momenta, $J_{\\rm z}$, \nof a prescribed fraction of DM particles which are on retrograde orbits with respect to the disk, \nby reversing their velocities, in line with Lynden-Bell's (1960) Maxwell demon. Only $\\lambda\\sim 0-0.09$ \nmodels are discussed here. \nThe $\\lambda < 0$ cases are simpler, due to a decreased fraction of prograde halo particles able to\nresonate with the bar\/disk particles (e.g., Christodoulou et al. 1995). \nThe implemented velocity reversals preserve the solution to the Boltzmann\nequation and do not alter the DM density profile or velocity magnitudes (e.g., Lynden-Bell 1960, 1962; \nWeinberg 1985). For spherical halos, the invariancy under velocity reversals is a direct corollary of \nthe Jeans (1919) theorem (see also Binney \\& Tremaine 2008). 
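The velocity-reversal trick can be illustrated on toy data: flipping the velocities of a subset of retrograde particles changes neither positions nor speeds (hence neither the density profile nor the velocity magnitudes), but raises the net $J_{\rm z}$. A sketch in Python with Gaussian mock particles, not the actual halo model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
pos = rng.normal(size=(N, 3))  # toy particle positions
vel = rng.normal(size=(N, 3))  # toy particle velocities
Jz = pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0]

# Reverse the velocities of half of the retrograde (Jz < 0) particles.
retro = np.flatnonzero(Jz < 0)
flip = rng.choice(retro, size=retro.size // 2, replace=False)
vel2 = vel.copy()
vel2[flip] *= -1.0
Jz2 = pos[:, 0] * vel2[:, 1] - pos[:, 1] * vel2[:, 0]

# Speeds are preserved, while the net z angular momentum increases.
speeds_unchanged = np.allclose(np.linalg.norm(vel, axis=1),
                               np.linalg.norm(vel2, axis=1))
spun_up = Jz2.sum() > Jz.sum()
```

Flipping all retrograde particles would give the maximal spin-up; intermediate fractions interpolate the $\lambda$ values quoted in the model names.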
The most general distribution function \nfor such systems is a sum of $f(E,J^2)$, where $E$ is the energy and $J$ --- the magnitude of the total angular \nmomentum (entering through $J^2$), and of an odd function of $J_{\rm z}$, \ni.e., $g(E,J,J_{\rm z})$ (Lynden-Bell 1960). If $g\neq 0$, the spherical system has \na net rotation around this axis.\n\nWe leave the disk parameters unchanged, while the halo models have varying spin\n$\lambda$. The value of $\lambda$ has been added to the model name using the last two significant \ndigits, e.g., P60 means $\lambda=0.060$ and ``P'' stands for prograde. \n\n\section{Results}\n\label{sec:res}\n\n\begin{figure}\n\begin{center}\n\includegraphics[angle=0,scale=0.47]{fig1.ps}\n\end{center}\n\caption{{\it Upper:} Evolution of the bar amplitudes, $A_2$ (normalized by the monopole term $A_0$), for \nspherical NFW halos with $q=1$.\nShown are the P00, P45, P60 and P90 models. {\it Lower:} Evolution of bar pattern speed, $\Omega_{\rm b}$, \nin the above models.\n}\n\end{figure}\n\n\begin{figure*}\n\begin{center}\n\includegraphics[angle=0,scale=0.9]{fig2.ps}\n\end{center}\n\caption{Disk-bar surface density contours (face-on, edge-on, and end-on) at $t=10$\,Gyr, for the NFW \nhalos with $q=1$, P00 (left column), P45 (center) and P90 (right) models. Note the different bulge\nshapes: X-shaped for P00, boxy\/X-shaped for P45, and boxy for P90, as well as decreasing strength of\n{\it ansae} with increasing $\lambda$ (see text).\n}\n\end{figure*}\n\nAll models presented here have an identical\nmass distribution, both in DM and stars. Hence, any differences in the\nevolution must follow from the initial distribution of angular momentum in DM halos and\nits redistribution in the bar-disk-halo system.\nFigure\,1 displays the evolution of the stellar bars through the amplitudes of the Fourier $m=2$ mode, \n$A_2$, \nand their pattern speeds, $\Omega_{\rm b}$, over 10\,Gyr.
This timescale is probably close to the maximum \nuninterrupted growth of galactic disks in the cosmological framework, and hence to the lifetime of\nthe bars. The normalized (by the monopole term $A_0$) bar amplitude has been defined here as \n\n\\begin{equation}\n\\frac{A_2}{A_0} = \\frac{1}{A_0}\\sum_{i=1}^{N_{\\rm d}} m_{\\rm i}\\,e^{2i\\phi_{\\rm i}},\n\\end{equation} \nfor $R\\leq 14$\\,kpc. The summation is performed over all disk particles with the mass $m=m_{\\rm i}$\nat angles $\\phi_{\\rm i}$ in the disk plane. $\\Omega_{\\rm b}$ is obtained\nfrom the phase angle $\\phi= 0.5\\,{\\rm tan^{-1}}[{\\rm Im}(A_2)\/{\\rm Re}(A_2)]$ evolution with time.\nWe divide the evolution into two phases: the dynamical phase, which consists of the initial \nbar instability and terminates with the vertical buckling instability of the bars and formation of \nboxy\/peanut-shaped bulges (e.g., Combes et al. 1990; Pfenniger \\& Friedli 1991; Raha et al. 1991; \nPatsis et al. 2002; \nAthanassoula 2005; Berentzen et al. 2007). Buckling weakens the bar but does not dissolve it \n(Martinez-Valpuesta \\& Shlosman 2004). Repeated bucklings increase the size of the bulge \n(Martinez-Valpuesta \\& Shlosman 2005; Martinez-Valpuesta et al. 2006). One buckling has been\nobserved in the models presented here --- following it,\nthe bar enters the second phase, that of secular evolution.\n\nThe most striking development observed in models of Figure\\,1 during the secular phase is an increased \ndamping of the bar amplitude and a slower or absent bar growth for $\\lambda\\gtorder 0.03$. \nThe P00 model ($\\lambda=0$) displays healthy growth after buckling.\nThe P30 and P45 bars have a slower growth rate than the P00 bar, and do not recover their pre-buckling \nstrength even after 10\\,Gyr. But models P60 and P90 show no growth in $A_2$ at all. \nThe corresponding pattern\nspeed evolution, $\\Omega_{\\rm b}(t)$, for these models differs substantially as well. 
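Equation (3) translates directly into code; a sketch in Python (taking the modulus of the complex sum for the plotted amplitude and $0.5\,{\rm arg}\,A_2$ for the phase angle; toy particle data for illustration):

```python
import numpy as np

def bar_amplitude(m, x, y, R_max=14.0):
    """Normalized m=2 Fourier amplitude A2/A0 of eq. (3), and the phase angle."""
    R = np.hypot(x, y)
    sel = R <= R_max                      # restrict to R <= R_max (kpc)
    phi = np.arctan2(y[sel], x[sel])      # in-plane angles of the particles
    A2 = np.sum(m[sel] * np.exp(2j * phi))
    A0 = np.sum(m[sel])
    return np.abs(A2) / A0, 0.5 * np.angle(A2)

# A perfectly bisymmetric (bar-like) toy distribution gives A2/A0 = 1,
# while an axisymmetric ring gives A2/A0 ~ 0.
ang = np.linspace(0.0, 2 * np.pi, 400, endpoint=False)
amp_bar, _ = bar_amplitude(np.ones(2), np.array([1.0, -1.0]), np.zeros(2))
amp_ring, _ = bar_amplitude(np.ones_like(ang), np.cos(ang), np.sin(ang))
```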
The P90 bar \ndisplays a perfectly flat $\Omega_{\rm b}(t)$, and does not lose its angular momentum to the\ndisk and\/or the halo. This includes both the internal angular momentum (i.e., circulation) and the\ntumbling. A similar trend between the final $\Omega_{\rm b}$ and $\lambda$ can also be\nobserved in Figure\,7 of Debattista \& Sellwood (2000), although low\nresolution apparently prevented any conclusion of this sort.\n \nFigure\,2 compares the end products of the secular evolution of barred disks in models P00, P45 and P90. \nThe differences appear to be profound. First, the bar size clearly anticorrelates with $\lambda$ --- \nthis is\na reflection of the inability of the bar potential to capture additional orbits and grow in length and\nmass. Second, the {\it ansae} (handles) feature is the strongest in the P00 bar, while it is smaller in \nsize for P45 \nand completely absent in the P90 bar. Ansae have been associated with captured disk orbits librating around\nthe bar (Martinez-Valpuesta 2006; Martinez-Valpuesta et al. 2006). This is another indication that\nthe bar in high-$\lambda$ models does not grow. Note that the surface density in the disk is clearly affected,\nas trapping of the disk orbits by the P00 bar creates low-density regions in the disk but not in P90. We\nanalyzed the properties of the halo `ghost' bar (Holley-Bockelmann et al. 2005; Athanassoula\n2007; Shlosman 2008), and found no growth there either. The offset angle between the ghost and stellar\nbars remains near zero (within the error margin). Third, the face-on morphology of the P00 bar is that of a \nrectangular shape, while that of P90 is elliptical. Fourth, bulges that formed as a result of the buckling\ninstability show the same size$-\lambda$ anticorrelation, as seen in edge-on (i.e., along the bar's \nminor axis) frames. Furthermore, they differ \nin shape as well: the P00 bulge has an X-shape, P45 is boxy\/X-shaped, and P90 is boxy.
Trapped 3-D orbits\nare responsible for the bulge shape (e.g., Patsis et al. 2002; Athanassoula 2005; Martinez-Valpuesta et al. \n2006). \n\nWhat is even more intriguing is the near or complete absence of secular braking in the P60 and P90 bars. Although the\nbars are weak, constancy of $\\Omega_{\\rm b}$ and $A_2$ over 6\\,Gyr in P90 points to no angular momentum \ntransfer away from the bar, or, alternatively, to an opposite flux from the halo which compensates for the \nloss of angular momentum by the bar. As we see below, it is the second possibility that takes place.\nWhile the P60 and P90 models exhibit extremes of this effect, it is visible at various levels in all models \nwith $\\lambda\\gtorder 0.02$. \n\nWhile most of the angular momentum transfer away from the bar is due to resonances, we \ndeal with this aspect of the problem elsewhere. However, we do quantify the {\\it rate} of the\noverall angular momentum transfer between the disk and the halo, i.e., accounting for the resonant and\nnon-resonant angular momentum redistribution. This is accomplished by dividing the disk and halo into\nnested cylindrical shells and constructing a two-dimensional map of the angular momentum change in each shell as\na function of $R$ and $t$ (e.g., Villa-Vargas et al. 2009, 2010). Such a color-coded diagram is shown in \nFigure\\,3 for disk stars (lower frames), $\\langle\\dot J_*\\rangle\\equiv (\\partial J_*\/\\partial t)_{\\rm R}$ \nand for halo particles (upper frames), \n$\\langle\\dot J_{\\rm DM}\\rangle\\equiv (\\partial J_{\\rm DM}\/\\partial t)_{\\rm R}$, \nwhere the brackets indicate time-averaging. \n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[angle=-90,scale=0.6]{fig3.ps}\n\\end{center}\n\\caption{\\underline{DM halos} ({\\it upper frames}): Rate of angular momentum flow $\\dot J$ as a function of \na cylindrical \nradius and time for the P00 (left), P30 (middle), and P60 (right) models with $q=1$ NFW DM halos. 
\nThe color palette corresponds to gain\/loss rates (i.e., red\/blue) using a logarithmic scale in color. \nThe cylindrical shells have $\\Delta R = 1$\\,kpc, extending to $z=\\pm \\infty$. \n\\underline{Stellar disks} ({\\it lower frames}): same for (identical) disk models embedded in the P00 (left), \nP30 (middle), and \nP60 (right) halos, except $\\Delta R = 0.5$\\,kpc, and $|\\Delta z| = 3$\\,kpc. Positions of major disk \nresonances, ILR, CR, and OLR, have been marked.\n}\n\\end{figure*}\n\nThe diagrams for P00 are the easiest to understand. The red (blue) colors correspond to the absorption\n(emission) of the angular momentum. The continuity of these colors for the P00 disk represents the emission\nand absorption of angular momentum by the disk prime resonances. For example, the dominant blue band drifting \nto larger\n$R$ with time is associated with the emission of angular momentum by the inner Lindblad resonance (ILR),\nand the additional blue band corresponds to the Ultra-Harmonic resonance (UHR). The dominant red band\nfollows the corotation resonance (CR) and the outer Lindblad resonance (OLR). \n\nThe number of DM particles on prograde orbits has steadily increased with $\\lambda$, raising the\npossibility of resonant coupling between them and the bar orbits. This is supported by linear theory\n(Weinberg 1985 and refs. therein) and by numerical simulations (Saha \\& Naab 2013). \nIndeed, we observe increased emission of angular momentum by the ILR and corresponding enhanced absorption\nby the halo. Halo particles are late to pick up the angular momentum\nfrom the bar (due to their higher velocity dispersion), but the exchange is visible already before buckling. \nEnhanced coupling between the orbits is the reason for the shorter timescale for bar instability.\n\nThe secular evolution of bars, however, proceeds under quite different conditions. 
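The shell bookkeeping behind Figure\,3 can be sketched as follows (Python; a hypothetical snapshot format, and without the $|\Delta z|$ cut applied to the disk shells in the paper):

```python
import numpy as np

def jdot_map(snapshots, times, R_edges):
    """Time-averaged dJ/dt per cylindrical shell, as in Fig. 3.

    snapshots: list of (mass, pos, vel) arrays, one triple per output time;
    returns an array of shape (len(times) - 1, len(R_edges) - 1).
    """
    J_R = []
    for m, pos, vel in snapshots:
        R = np.hypot(pos[:, 0], pos[:, 1])                     # cylindrical radius
        Jz = m * (pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0])
        # Total z angular momentum binned into the nested shells:
        J_R.append(np.histogram(R, bins=R_edges, weights=Jz)[0])
    J_R = np.array(J_R)
    # Finite-difference rate of change between consecutive outputs:
    return np.diff(J_R, axis=0) / np.diff(times)[:, None]
```

Positive entries correspond to shells absorbing angular momentum (red in the figure), negative ones to emission (blue).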
The bar cannot be \nconsidered a linear perturbation, and the halo orbits have already been heavily perturbed and some have \nbeen captured by the stellar bar. So, one expects the nearby halo orbits around the bar to be tightly\ncorrelated with the bar. The upper frames in Figure\,3 display the rate of angular momentum flow in the DM halo.\nWhile the P00 halo appears to be completely dominated by the absorption of angular momentum at all major\nresonances (ILR, CR, OLR), P30 shows a quite different behavior and emits it at the ILR. The loss\nof angular momentum in this region of the DM halo is even more intense in P90. \nAlready at the buckling, we can observe a weak blue band of emission in the P30 halo, alongside a strong\nabsorption, instead of pure absorption in P00. Note that linear resonances shown by continuous curves\nappear to be a bad approximation to the actual nonlinear resonances given by the color bands,\nbecause they are calculated under the assumption of circular orbits. In the P90 halo, a strong {\it emission} \nis visible at the position of the disk ILR, which continues as a band \nalongside weakened absorption. So the absorption gradually weakens and moves out with increased $\lambda$, \nwhile the emission strengthens and spreads. The disk emission and absorption by major resonances \nalso differ with changing $\lambda$ --- they gradually develop an intermittent behavior, especially\nat the ILR in P60, where the blue and red bands alternate. Such a cyclical behavior is not seen \nin the P00 disk, but becomes visible in the P30 disk and dominates the inner P60 disk. Hence, the spinning halo \nappears to emit and absorb angular momentum recurrently. The halo as a whole still absorbs\nangular momentum from the disk in P30, while the net flux is zero for P90.
As the angular momentum of the
prograde population in the halo is increased, its ability to absorb angular momentum should saturate and,
under certain conditions, even be reversed. After buckling, the bar weakens substantially, as seen in
Figure\,1. At later stages, as the bar is expected to resume its growth, the near (disk) halo orbits can possess
more angular momentum than the bar region, which has been losing it for some time. For this prograde
population, increasing $\lambda$ simply
raises the initial angular momentum, and the saturation comes earlier. {\it What emerges as a fundamental
property of a DM halo is the angular momentum and its distribution for the prograde population of
orbits, irrespective of the value of $\lambda$.}

Evolution of galactic bars is inseparable from the cosmological evolution of their host galaxies. We find
that the secular growth of bars is significantly anticorrelated with the halo spin for $\lambda\gtorder
0.03$. This means that the majority of halos will adversely affect the bar strength and, therefore, the
angular momentum transfer and the bar braking.
Beyond dynamical consequences, bars in spinning halos will be systematically smaller, which
will make their detection at larger redshifts more difficult. This trend can be further strengthened
because, during mergers, $\lambda$ has been shown to increase for a limited time period of $\sim 1-2$\,Gyr
(e.g., Hetznecker \& Burkert 2006; Romano-Diaz et al. 2007; Shlosman 2013). Weaker bars
are known to possess star formation along the offset shocks, unlike strong bars, and are less
efficient in moving the gas inward. Furthermore, a damped bar amplitude has implications for disk
morphology, stellar populations, and abundance gradients.

To summarize, we have investigated the dynamical and secular evolution of stellar bars in spinning DM halos.
\nIn a representative set of numerical models, we find that\nthe angular momentum flow in the disk-halo system is substantially affected by the momentum distribution\nin the prograde population of DM particles, and is not limited\nto the momentum flux from the disk to halo. The associated bar pattern speed slowdown is minimized\nand ceases for larger $\\lambda$. This means that the bar does not experience gravitational torques\nand its amplitude remains steady, while the angular momentum, both internal circulation and tumbling, is \npreserved. This trend becomes\nvisible for $\\lambda\\gtorder 0.02$ and dominates the bar evolution for halos with $\\lambda\\gtorder 0.03$.\nBecause of a lognormal distribution of $\\lambda$ with a mean value of $0.035\\pm 0.005$, a substantial \nfraction of DM halos will be affected. We analyze the\nrate of angular momentum change by subdividing the disk-halo system into nested cylindrical shells, and \nshow that the DM {\\it halo can both absorb and emit angular momentum}, resulting in a reduction of \nthe net transfer of angular momentum from the disk to the halo.\nThe ability of the halo material to both emit and absorb angular momentum has important corollaries.\n\n\\acknowledgements \nWe are grateful to Sergey Rodionov for guidance with the iterative method to construct initial \nconditions, and to Ingo Berentzen, Jun-Hwan Choi, Emilio Romano-Diaz, Raphael Sadoun and Jorge \nVilla-Vargas for help with numerical\nissues. We thank Volker Springel for providing us with the original version of GADGET-3. This work \nhas been partially supported by grants from the NSF and the STScI (to I.S.). 
Simulations have been
performed on the University of Kentucky DLX Cluster.

\section{ADIABATIC SPIKE AROUND THE CENTRAL BLACK HOLE}

We find the dark matter density profile in the region where the black hole
dominates the gravitational potential. From the data in \cite{ghe98,eck9697},
this is the region $r \lesssim R_M \simeq 0.2 \, {\rm pc}$.
Other masses (the
central star cluster, for example) also influence the dark matter distribution,
but since they make the gravitational potential deeper, their effect is to
increase the central dark matter density and the annihilation signals.

We work under the assumption that the growth of the black hole is
adiabatic. This assumption is supported by the collisionless behavior of
particle dark matter.
We can find the final
density after black hole formation from the final phase-space distribution
$f'(E',L')$ as
\begin{equation}
\label{rhof}
\rho'(r) = \int_{E'_{m}}^0 dE' \,
\int_{L'_{c}}^{L'_{m}} dL' \,
\frac{ 4 \pi L' } { r^2 v_r } \, f'(E',L') \, ,
\end{equation}
with
\begin{eqnarray}
v_r & = & \left[ 2 \left( E' + \frac{GM}{r} - \frac{L'^2}{2r^2} \right)
\right]^{1/2} , \\
E'_{m} & = & - \frac{GM}{r} \, \left( 1 - \frac{ 4 R_{\rm S}}{r} \right), \\
L'_{c} & = & 2cR_{\rm S}, \\
L'_{m} & = & \left[ 2 r^2 \left( E' + \frac{GM}{r} \right) \right]^{1/2} .
\end{eqnarray}
We have neglected the contribution from unbound orbits ($E'>0$).
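A quick numerical sanity check of these limits, in units $G=M=c=1$ so that $R_{\rm S}=2$ (helper names are illustrative): the window of bound energies closes for $r<4R_{\rm S}$, which is the origin of the capture hole in the spike discussed in the text.

```python
# Integration limits of the spike density integral, in units G = M = c = 1
# (so the Schwarzschild radius R_S = 2GM/c^2 = 2).  Names are illustrative,
# not from the paper's code.
R_S = 2.0

def E_max(r):
    """Upper edge of the bound-energy range, E'_m = -(GM/r)(1 - 4 R_S / r)."""
    return -(1.0 / r) * (1.0 - 4.0 * R_S / r)

def L_capture():
    """Orbits with L < 2 c R_S fall into the black hole."""
    return 2.0 * R_S

# For r < 4 R_S one finds E'_m > 0, i.e. the range (E'_m, 0) of bound
# energies is empty, so the density vanishes there.
```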
The lower\nlimit of integration $L'_{c}$, and the second factor in $E'_{m}$, are\nintroduced to eliminate the particles with $L < 2cR_{\\rm S}$ which are captured\nby the black hole. ($R_{\\rm S}=2GM\/c^2$ is the Schwarzschild radius.) We relate\n$f'(E',L')$ to the initial phase-space distribution $f(E,L)$ through the\nrelations, valid under adiabatic conditions,\n$f'(E',L') = f(E,L)$, $L' = L$, $I'(E',L') = I(E,L)$,\nwhere the last two equations are the conservation of the angular momentum $L$\nand of the radial action $I(E,L)$.\n\nThe density slope in the spike depends not only on the slope of the inner halo\nbut also on the behavior of the initial phase-space density $f(E,L)$ as $E$\napproaches the potential at the center $\\phi(0)$~\\cite{qui95}. If $f(E,L)$\napproaches a constant, as in models with finite cores, the spike slope is\n$\\gamma_{\\rm sp} = 3\/2$. If $f(E,L)$ diverges as $[E-\\phi(0)]^{-\\beta}$, as in\nmodels with an inner cusp, the spike slope is $\\gamma_{\\rm sp}>3\/2$. Models\nwith finite cores include~\\cite{models}: the non-singular and the modified\nisothermal sphere, the H\\'enon isochrone model, the Plummer model, the King\nmodels, the modified Hubble profile, the Evans power-law models with $R_c\\ne0$,\nand the Zhao $(\\alpha,\\beta,\\gamma)$ models, $\\rho \\sim r^{-\\gamma}\n(1+r^{1\/\\alpha})^{-\\beta\\alpha}$, with $\\gamma=0$ and $1\/(2\\alpha) = {\\rm\n integer}$. Models with an inner cusp include~\\cite{models}: the models of\nJaffe, Hernquist, Navarro-Frenk-White, the $\\gamma\/\\eta$ models of Dehnen and\nTremaine et al., and the other Zhao models.\n\nAs an example of models with finite cores, we consider \nthe isothermal sphere. It has $f(E,L) = \\rho_0 \n(2\\pi\\sigma_v^2)^{-3\/2} \\allowbreak \\,\n\\exp(-E\/\\sigma_v^2)$. Close to the black hole, we have $E \\ll \\sigma_v^2$,\nand $f(E,L) \\simeq \\rho_0 (2\\pi\\sigma_v^2)^{-3\/2}$, a constant. 
Then from\neq.~(\\ref{rhof}) we easily find\n\\begin{equation}\n\\label{rhof1}\n\\rho'_{\\rm iso}(r) = \n\\frac{4\\rho_0}{3\\sqrt{\\pi}} \\, \\left( \\frac{GM}{ r\\sigma_v^2} \n\\right)^{3\/2} \\,\n\\left( 1 - \\frac{4 R_{\\rm S}}{r} \\right)^{3\/2} ,\n\\end{equation}\nvalid for $r \\ll R_M \\simeq 0.2 $ pc.\n The last factor comes from the capture of particles by the black hole:\nthe density vanishes for $r<4R_{\\rm S}$. \n\nAs an example of models with an inner cusp, we consider \na single power law density profile, $\\rho(r) = \\rho_0 (r\/r_0)^{-\\gamma}$,\nwith $0 < \\gamma < 2$. Its\n phase-space distribution function is\n\\begin{equation}\nf(E,L) = \\frac{\\rho_0}{(2\\pi\\phi_0)^{3\/2}} \\,\n\\frac{\\Gamma(\\beta)}{\\Gamma(\\beta-\\case{3}{2})} \\,\n\\frac{\\phi_0^\\beta}{E^{\\beta}} \\, ,\n\\end{equation}\nwith $\\beta = (6-\\gamma)\/[2(2-\\gamma)] $ and $\\phi_0 = 4 \\pi G r_0^2\n\\rho_0\/[(3-\\gamma)(2-\\gamma)]$. To find $f'(E',L')$, we need to solve $\nI'(E',L') = I(E,L) $ for $E$ as a function of $E'$. In the field of\na point-like mass, we have \n$ \nI'(E',L') = 2 \\pi \\left[ - L' + GM\/\\sqrt{-2E'} \\right] \n$.\nIn the field of the power law profile, whose potential is proportional to\n$r^{2-\\gamma}$, the action integral cannot be performed exactly. We have\nfound an approximation good to better than 8\\% over all of phase space for\n$0 < \\gamma < 2$: \n\\begin{equation}\nI(E,L) = \\frac{2\\pi}{b} \\, \\left[ - \\frac{L}{\\lambda} + \n\\sqrt{2 r_0^2 \\phi_0} \\left( \\frac{E}{\\phi_0} \\right) ^ \n{ \\frac{4-\\gamma}{2(2-\\gamma)} } \\right],\n\\end{equation}\nwhere $\\lambda = [2\/(4-\\gamma)]^{1\/(2-\\gamma)} \\,\n[(2-\\gamma)\/(4-\\gamma)]^{1\/2}$ and $b = \\pi(2-\\gamma)\/{\\rm\n B}(\\frac{1}{2-\\gamma},\\frac{3}{2}) $. 
Expressing $E$ as a function of $E'$\nand integrating eq.~(\\ref{rhof}), we obtain\n\\begin{equation}\n\\label{rhoprime}\n\\rho'(r) = \\rho_R \\, g_{\\gamma}(r) \\,\n\\left( \\frac{R_{\\rm sp}}{r}\\right)^{\\gamma_{\\rm sp}} ,\n\\end{equation}\nwith \n$ \\rho_R = \\rho_0 \\left( {R_{\\rm sp}}\/{r_0} \\right)^{-\\gamma} $,\n$ \\gamma_{\\rm sp} = (9-2\\gamma)\/(4-\\gamma) $, and\n$ R_{\\rm sp} = \\alpha_{\\gamma} r_0 \\left( M\/\\rho_0 r_0^3\n\\right)^{1\/(3-\\gamma)} $.\nFor $0\\le \\gamma \\le 2$, the density slope in the spike, $\\gamma_{\\rm sp}$,\nvaries only between 2.25 and 2.5. \n\\begin{figure}[t]\n\\label{figprofile}\n\\epsfig{file=spikef1.eps,width=0.4\\textwidth}\n\\caption{\n Examples of spike density profiles.}\n\\end{figure}\n While the exponent $\\gamma_{\\rm sp}$ can\nalso be obtained by scaling arguments~\\cite{qui95}, the normalization\n$\\alpha_{\\gamma}$ and the factor $g_{\\gamma}(r)$ accounting for capture must be\nobtained numerically. We find that $g_{\\gamma}(r) \\simeq ( 1 -4 R_{\\rm S}\/r)^3$\nover our range of $\\gamma$, and that $\\alpha_{\\gamma} \\simeq 0.293\n\\gamma^{4\/9}$ for $\\gamma \\ll 1$, and is $\\alpha_{\\gamma}=$ 0.00733, 0.120,\n0.140, 0.142, 0.135, 0.122, 0.103, 0.0818, 0.0177 at\n$\\gamma=0.05,0.2,0.4,\\ldots,1.4,2$. The density falls rapidly to zero at $r\n\\lesssim 9.55 R_{\\rm S}$, vanishing for $r<4R_{\\rm S}$.\n\nAnnihilations in the inner regions of the spike set a maximal dark matter\ndensity $\\rho_{\\rm core} = m\/\\sigma v t_{\\rm bh}$, where $t_{\\rm bh}$ is the\nage of the black hole, conservatively $10^{10}$ yr, $m$ is the mass of the dark\nmatter particle, and $\\sigma v$ is its annihilation cross section times\nrelative velocity (notice that for non-relativistic particles $\\sigma v$ is\nindependent of $v$). 
Using $\\partial \\rho\/\\partial t = - \\sigma v \\rho^2\/m$,\nthe final spike profile is\n\\begin{equation}\n\\rho_{\\rm sp}(r) = \\frac{ \\rho'(r) \\rho_{\\rm core} } { \\rho'(r) + \\rho_{\\rm\n core} } ,\n\\end{equation}\nwhich has a core of radius $R_{\\rm core} = R_{\\rm sp} \\left( \\rho_R\/\\rho_{\\rm\n core} \\right) ^ {1\/\\gamma_{\\rm sp}} $. In the particle models we consider,\nnot more than the initial amount of dark matter within 300 pc is annihilated.\n\nExamples of spike density profiles are shown in Fig.~1.\n\nTo conclude this section, we derive a conservative estimate of the dark matter\ndensity near the galactic center. Assume that the halo density is constant on\nconcentric ellipsoids. Then the halo contribution $v_{h}(r)$ to the rotation\nspeed at distance $r$ fixes the halo mass within $r$. Assume further that the\nhalo density decreases monotonically with distance. Since at large radii it\ndecreases at least as fast as $r^{-2}$, and at small radii only as\n$r^{-\\gamma}$ with $\\gamma < 2$, the density profile becomes steeper with\ndistance. So to continue the $r^{-\\gamma}$ dependence to all radii keeping the\nsame halo mass interior to $r$ as given by the rotation speed, we must decrease\nthe density normalization. In this way we obtain an underestimate of the\ndensity near the center. 
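Collecting the pieces derived in this section --- the spike slope $\gamma_{\rm sp}=(9-2\gamma)/(4-\gamma)$ and the annihilation-limited core --- a schematic implementation (toy normalizations; the capture factor $g_\gamma(r)$ is omitted for brevity, and `rho_core` would come from $m/(\sigma v\, t_{\rm bh})$ for a concrete particle model):

```python
import numpy as np

def gamma_sp(gamma):
    """Spike slope grown adiabatically from an inner halo cusp r^-gamma."""
    return (9.0 - 2.0 * gamma) / (4.0 - gamma)

def rho_spike(r, rho_R, R_sp, gamma, rho_core):
    """Power-law spike capped by the annihilation core,
    rho_sp = rho' rho_core / (rho' + rho_core)."""
    rho_prime = rho_R * (R_sp / r) ** gamma_sp(gamma)
    return rho_prime * rho_core / (rho_prime + rho_core)

# toy profile: gamma = 1 cusp, normalizations in units of R_sp and rho_R
r = np.logspace(-8, 0, 200)            # radii in units of R_sp
rho = rho_spike(r, rho_R=1.0, R_sp=1.0, gamma=1.0, rho_core=1e6)
```

The saturation at `rho_core` produces the flat inner core of radius $R_{\rm core}$ quoted above, while outside the core the profile follows the pure $r^{-\gamma_{\rm sp}}$ spike.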
Letting $\rho_D = \rho_0 (D/r_0)^{-\gamma} $, we have
\begin{equation}
\frac{ \rho_D }{ 1 - \gamma/3} \sim
\frac{ 3 v^2_h(D) }{ 4 \pi G D^2 }
\simeq 0.0062 \, \frac{ M_{\odot} }{ {\rm pc}^{3} }
\simeq 0.24 \, \frac{ {\rm GeV} }{ {\rm cm}^{3} },
\end{equation}
where we have taken $v_{h}(D) = 90 {\rm \,km\,s^{-1}}$ at the Sun's distance $D
= 8.5$ kpc, as obtained after subtracting a (somewhat overestimated) luminous
matter contribution of 180 km s$^{-1}$ from the circular speed of $220 \pm 20
{\rm \,km\,s^{-1}}$~\cite{deh98}.

\section{SIGNALS FROM NEUTRALINO ANNIHILATIONS}

Our analysis applies in general to a self-annihilating dark matter particle. To
make it concrete, we examine a case of supersymmetric dark matter, the lightest
neutralino. The minimal supersymmetric standard model provides a well-defined
calculational framework, but contains at least 106 yet-unmeasured
parameters~\cite{dim95}. Most of them control details of the squark and slepton
sectors, and can safely be disregarded in dark matter studies. So we restrict
the number of parameters to 7, following Bergstr\"om and Gondolo~\cite{ber96}.
Out of the database of points in parameter space built in
refs.~\cite{ber96,eds97,ber98}, we use the 35121 points in which the neutralino
is a good cold dark matter candidate, in the sense that its relic density
satisfies $0.025 < \Omega_\chi h^2 < 1 $. The upper limit comes from the age
of the Universe, the lower one from requiring that neutralinos are a major
fraction of galactic dark halos.

Gravitational interactions bring the cold neutralinos into our galactic halo
and into the central spike, where neutralino pairs can annihilate and produce
photons, electrons, positrons, protons, antiprotons, and neutrinos. While most
products are subject to absorption and/or diffusion, the neutrinos escape
the spike and propagate to us undisturbed.
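The solar-position normalization quoted above, $3 v_h^2/(4\pi G D^2)$ with $v_h = 90$ km s$^{-1}$ and $D = 8.5$ kpc, is easy to verify numerically (standard constant values assumed):

```python
import math

# Quick arithmetic check of the conservative local density estimate,
# 3 v_h^2 / (4 pi G D^2), using G in astrophysical units.
G = 4.30091e-3          # pc (km/s)^2 / M_sun
v_h = 90.0              # km/s
D = 8.5e3               # pc

rho = 3.0 * v_h**2 / (4.0 * math.pi * G * D**2)   # M_sun / pc^3
# unit conversion: 1 M_sun/pc^3 corresponds to about 37.97 GeV/cm^3
rho_gev = rho * 37.97
```

This reproduces $0.0062\,M_{\odot}\,{\rm pc}^{-3} \simeq 0.24$ GeV cm$^{-3}$.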
We focus on the neutrinos and\npostpone the study of other signals.\n\nThe expected neutrino flux from neutralino annihilations in the direction\nof the galactic center can be divided into two components: emission from the\nhalo along the line of sight and emission from the central spike,\n\\begin{equation}\n\\Phi_{\\nu}^{\\rm neutralinos} =\n\\Phi_{\\nu}^{\\rm halo} + \\Phi_{\\nu}^{\\rm spike} .\n\\end{equation}\n\nThe halo flux from neutralino annihilations between us and the galactic center\ncan be estimated assuming that a single power law profile\n$\\rho(r) = \\rho_D (r\/D)^{-\\gamma} $ extends out to the Sun position $r=D$. \nThe integrated neutrino flux within an angle $\\Theta$ of the\ngalactic center is\n\\begin{equation}\n\\label{phihalo}\n\\Phi_{\\nu}^{\\rm halo} = \n\\frac{ \\rho_D^2 Y_{\\nu} \\sigma v D \\Omega(\\Theta) }{ m^2 } ,\n\\end{equation}\nwhere \n$m$ is the neutralino mass, $Y_{\\nu}$ is the number of neutrinos\nproduced per annihilation, either differential or integrated in energy, \n$\\sigma v $ is the\nneutralino--neutralino annihilation cross section times relative velocity,\nand\n\\begin{equation}\n\\Omega(\\Theta) = \n\\frac{\\Theta^2 }{2(1-2\\gamma)} - \\frac{2^{\\gamma-1\/2}\n\\Theta^{3-2\\gamma}}{(3-2\\gamma)(1-2\\gamma)} -\n\\frac{ \\Theta_{\\rm min}^{3-2\\gamma}}{3-2\\gamma} .\n\\end{equation}\nHere $\\Theta$ is in radians and $\\Theta_{\\rm min} = \\max \\bigl[ 10R_{\\rm S}\/D ,\n$ $ (2\\gamma\/3)^{1\/(3-2\\gamma)} (\\rho_D\/\\rho_{\\rm\n core})^{1\/\\gamma} \\bigr]$.\n\nWe evaluate the neutrino yield $Y_\\nu$ and the neutralino annihilation cross\nsections $\\sigma$ using the DarkSUSY package~\\cite{gon99}, which incorporates\nPythia simulations of the $\\nu$ continuum~\\cite{ber99}\nand the annihilation cross sections in~\\cite{eds97,annih}.\n\\begin{figure}[t]\n\\label{figenh}\n\\epsfig{file=spikef2.eps,width=0.4\\textwidth}\n\\caption{\n Enhancement of annihilation signals from the\n galactic 
center.}\n\\end{figure}\n\\begin{figure}[tbp]\n\\label{figphimu}\n\\epsfig{file=spikef3.eps,width=0.4\\textwidth}\n\\caption{For a Moore et al.\\ halo\n profile, flux of neutrino-induced muons in a neutrino telescope from\n neutralino dark matter annihilations in the direction of the galactic center,\n with (upper panel) and without (lower panel) the central spike. The\n horizontal line is the current upper limit. }\n\\end{figure}\n\nTo the halo flux we need to add the contribution from the spike around the\nblack hole at the galactic center. \nFor an isothermal distribution \nwe find\n\\begin{equation}\n\\label{phispike1}\n\\Phi_{\\nu}^{\\rm spike} =\n\\frac{ \\rho_D^2 Y_{\\nu} \\sigma v D} {m^2} \\,\n\\left( \\frac{ R_M }{ D } \\right)^{3} \\,\n\\, \n\\ln \\left( \\frac{ R_M }{ 25 R_{\\rm S} } \\right) ,\n\\end{equation}\nthe factor of 25 serving to match the exact integration. This flux is a factor\nof $\\sim 10^{-9}$ smaller than the flux from dark matter annihilations between\nus and the galactic center, and so the addition of the spike does not modify\nthe signal. The same conclusion is reached in general for halo models with\nfinite cores.\n\nA strong enhancement results instead \nfor power law profiles. 
We find\n\\begin{equation}\n\\label{phispike2}\n\\Phi_{\\nu}^{\\rm spike} = \n\\frac{ \\rho_D^2 Y_{\\nu} \\sigma v D} {m^2} \\,\n\\left( \\frac{ R_{\\rm sp} }{ D } \\right)^{3-2\\gamma} \\,\n\\left( \\frac{ R_{\\rm sp} }{ R_{\\rm in} } \\right)^{2\\gamma_{\\rm sp}-3} ,\n\\end{equation}\nwhere $R_{\\rm sp}$ is given after eq.~(\\ref{rhoprime}) with $\\rho_0$\nreplaced by $\\rho_D$ and $r_0$ by $D$.\nWe fix $R_{\\rm in}$ so as to match the\nintegration of the numerically-calculated density profile including capture and\nannihilation: we\nfind that $R_{\\rm in} = 1.5 \\left[ (20 R_{\\rm S})^2+R_{\\rm core}^2\n\\right]^{1\/2} $ gives a good approximation to the flux\n(within 6\\% for our values of $\\gamma$).\n\nContrary to the case of finite cores, for cusped halos there is a huge increase\nin flux from the galactic center when the spike is included, typically 5 orders\nof magnitude or more, unless the inner halo slope $\\gamma$ is very small (see\nFig.~2, where $\\Theta=1.\\!\\!^\\circ 5$). The enhancement is notable, for\nexample, for the profile of Moore et al.\\ which has $\\gamma=1.4$ (see\nFig.~3): including the spike around the black hole dramatically changes \nthe prospects of observing a neutrino flux.\n\nThe neutrino flux from the spike increases with the inner halo slope $\\gamma$.\nImposing that it does not exceed the observed upper limit of $\\sim 2 \\times\n10^4$ muons($>\\!1$GeV) km$^{-2}$ yr$^{-1}$ \\cite{imb-kgf-kamiokande} leads to\nan upper bound on $\\gamma$. There is a separate upper bound for each model.\nThey are plotted in Fig.~4a. (Plotted values of $\\gamma_{\\rm max}>2$ are\nunphysical extrapolations but are shown for completeness.) Present bounds are\nof the order of $\\gamma_{\\rm max} \\sim 1.5$.\n\nFuture neutrino telescopes situated in the Northern hemisphere may improve on\nthis bound or find a signal. 
For example, with a muon energy threshold of 25
GeV, the neutrino flux from the spike after imposing the current constraints
could still be over 3 orders of magnitude above the atmospheric background
(Fig.~5), allowing $\gamma$ as low as 0.05 to be probed (Fig.~4b).
\begin{figure}[tbp]
\label{figgmaxnu}
\epsfig{file=spikef4.eps,width=0.4\textwidth}
\caption{Maximum inner slope
 $\gamma$ of the dark matter halo compatible with the upper limit on the
 neutrino emission from the galactic center. (a) Current limit at 1 GeV;
 (b) future reach at 25 GeV.}
\end{figure}

In conclusion, we have shown that if the galactic dark halo is cusped, as
favored in recent N-body simulations of galaxy formation, a bright dark matter
spike would form around the black hole at the galactic center. A search for a
neutrino signal from the spike could either set upper bounds on the
density slope of the inner halo or clarify the nature of dark matter.

This research has been supported in part by grants from NASA and DOE.

\begin{figure}[tbp]
\label{figphimumax}
\epsfig{file=spikef5.eps,width=0.4\textwidth}
\caption{Maximal flux of neutrino-induced muons in a neutrino telescope from
 neutralino annihilations at the galactic center, after imposing the current
 constraints on the neutrino emission.}
\end{figure}

\section{Introduction}
The structure of globular clusters is shaped by a complex interaction between external (interaction with the host galaxy) and internal (for example, relaxation, core collapse, mass segregation) forces. \cite{King1966} successfully described the star density of globular clusters assuming no rotation.
\citet{Tiongco2017} have suggested that the internal relaxation processes dissipate all the angular momentum over the long term in every globular cluster.
However, a growing number of studies \citep{Lane2010, Bellazzini2012, Bianchini2013, Fabricius2014, Lardo2015, Kimmig2015, Boberg2017, Lee2017, Cordero2017, Kamann2018, Ferraro2018, Lanzoni2018, Bianchini2018} present evidence that a significant amount of internal rotation can still be observed in many globular clusters, although the observed rotational strengths are only a fraction of the initial ones \citep{Kamann2018, Bianchini2018}. These studies mostly utilize recently recorded, high-quality radial velocity and proper motion data.

The presence of internal rotation could introduce some problems for the theory of globular cluster formation and evolution. Some studies indicate that the remaining rotation accelerates the evolution and shapes the morphology of the cluster \citep{Einsel1999, Bianchini2013}. Others suggest that the present-day rotation could be a remnant of a strong rotation during the early history of globular clusters \citep{Vesperini2014, Mapelli2017}.

Orbital motions may be isotropic even during the formation of GCs; for example, \citet{Lahen2020} have argued that massive clusters are isotropic already during their first 100 Myr after formation.
Moreover, as \citet{Lane2010} and \citet{Tiongco2017} have shown, the interaction between the cluster and the galactic tidal field, combined with the internal dynamics, can produce complex kinematical features, e.g. radial variation in the orientation of the rotational axis, and anomalous velocity dispersions. The observational evidence of rotating globular clusters is based on high-precision radial velocity data \citep{Bellazzini2012, Bianchini2013, Lardo2015, Kimmig2015, Boberg2017, Lee2017, Cordero2017, Ferraro2018, Lanzoni2018, Cordoni2020a}, integral field unit (IFU) spectrographs \citep{Fabricius2014, Kamann2018}, and proper motion measurements \citep{Massari2013, Bellini2017, Bianchini2018, Mastrobuono2020}.

The other aspect of globular cluster rotation is the presence of multiple generations of stars and their rotational properties.
In the last decade, the existence of two or more distinct generations of stars in most globular clusters became well studied \citep{Carretta2009a, Carretta2009b, Piotto2015, Milone2017, Milone2018, Meszaros2020}; however, understanding the formation of multiple populations is still an astrophysical challenge. Multiple populations manifest in light-element abundance variations: second generation (SG) stars are enhanced in N, Na, Ca and Al and depleted in C, O and Mg, while first generation (FG) stars show the opposite pattern.
Most of the theories agree that the second generation stars formed out of the first generation's ejecta mixed with the original intracluster medium \citep{Decressin2007, Bekki2010, Denissenkov2014, Bekki2017}, but the exact process of this pollution is currently not known, and many observed properties cannot be explained with this theory yet.
There are other alternative explanations for this phenomenon, and they are discussed in detail in \citet{Henault2015, Bastian2018}.

Observational evidence has shown differences in spatial distribution between FG and SG stars. For dynamically younger clusters, the SG is more concentrated than the FG \citep{Sollima2007, Bellini2009, Lardo2011, Cordero2014, Boberg2016, Lee2017, Gerber2020}. On the other hand, the two generations are completely mixed in globular clusters at more advanced evolutionary stages \citep{Dalessandro2014, Nardiello2015, Cordero2015, Gerber2018, Gerber2020}.
In \citet{Dalessandro2019}, the link between concentration differences and evolutionary stage was explored in detail based on observations and models.
Other observational studies revealed differences in the kinematics between the multiple generations \citep{Richer2013, Bellini2015, Bellini2018, Cordero2017, Milone2018, Libralato2019, Cordoni2020a, Cordoni2020b}; in other cases, the multiple generations of stars share similar kinematic properties \citep{Pancino2007, Cordoni2020b}.
These literature sources used different indicators of cluster kinematics, including rotation, velocity dispersion, and anisotropy, to show the different kinematical properties of multiple populations.


Our main purpose is to investigate the rotational properties of the selected clusters using high-precision and homogeneous radial velocity data. Our secondary goal is to identify potential differences in the cluster rotation properties between the multiple populations.
For this study, we use state-of-the-art data from the high-resolution spectroscopic survey Apache Point Observatory Galactic Evolution Experiment (APOGEE) \citep{Majewski2016}. APOGEE started as one component of the $3^{\rm rd}$ Sloan Digital
Sky Survey (SDSS-III; \cite{Eisenstein2011}) and continues as part of SDSS$-$IV \citep{Blanton2017} as APOGEE-2\footnote{http://www.sdss.org/surveys/apogee-2/}. The goal of APOGEE-2 is to obtain high-resolution (R = 22500), high signal-to-noise, H-band spectra ($\lambda$ = 1.51$-$1.70$\mu$m) of more than 600,000 late-type stars in the Milky Way by the end of 2020, and to determine chemical abundances of $\sim$26 elements in all observed stars.
Most APOGEE targets are evolved red-giant branch (RGB) and asymptotic giant branch (AGB) stars from all major Galactic stellar populations.


\section{Data and reduction}
\subsection{Target Selection and Radial Velocities}

The data were gathered by the Sloan Foundation 2.5 m Ritchey-Chr\'etien altitude-azimuth telescope \citep{Gunn2006} at Apache Point Observatory. The spectra were obtained with the APOGEE spectrograph \citep{Wilson2019}, which has a resolving power of 22500. The stellar atmospheric parameters and chemical abundances are calculated from these spectra with the APOGEE Stellar Parameters and Chemical Abundances Pipeline (ASPCAP) \citep{Garcia2016}.
We use the list of stars compiled by \citet{Masseron2019}. The target selection is explained in detail in \citet{Meszaros2015}, \citet{Masseron2019}, and \citet{Meszaros2020}. In short, stars were selected based on their radial velocity, distance from the cluster center, and metallicity. In radial velocity, all studies required stars to be within three times the velocity dispersion of the mean cluster velocity, which were taken from \citet{Baumgardt2019}.

We used the DR14 data release of APOGEE \citep{Holtzman2018} for our study. Radial velocities are derived by the reduction pipeline \citep{Nidever2015}; while details can be found in that paper, we provide a brief description of the algorithm here. For almost all stars, observations are made in multiple visits to improve the S/N and to allow observations of faint objects, which are then combined to provide the final spectrum of the star. The radial velocity is measured in multiple steps. First, we make an initial measurement for each star from each individual spectrum by cross-correlating it with the best match in a template library. In the second step, all of the visits are combined, and the relative radial velocities of each visit are iteratively refined by cross-correlating each visit spectrum with the combined spectrum.
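The visit-level cross-correlation step can be illustrated with a toy spectrum; parabolic peak refinement stands in here for the pipeline's Gaussian fit, and all names are hypothetical:

```python
import numpy as np

def rv_shift_pixels(spectrum, template):
    """Estimate the shift (in pixels) between a visit spectrum and a
    template by cross-correlation, refining the discrete peak with a
    parabola fit (a stand-in for the pipeline's Gaussian peak fit)."""
    s = spectrum - spectrum.mean()
    t = template - template.mean()
    cc = np.correlate(s, t, mode="full")
    k = np.argmax(cc)
    # parabolic interpolation around the discrete maximum
    y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    return (k + frac) - (len(t) - 1)

# synthetic test case: a Gaussian absorption line shifted by 3 pixels
x = np.arange(200, dtype=float)
template = 1.0 - 0.5 * np.exp(-0.5 * ((x - 100.0) / 4.0) ** 2)
visit = 1.0 - 0.5 * np.exp(-0.5 * ((x - 103.0) / 4.0) ** 2)
shift = rv_shift_pixels(visit, template)
```

In the real pipeline the pixel shift maps to a radial velocity through the wavelength solution; the toy example recovers the 3-pixel input shift.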
The final absolute radial velocity is set by cross-correlating the combined spectrum with the best match in a template library. The peak of the cross-correlation function is fitted with a Gaussian in order to determine the accurate spectral shift.
Binary stars can distort the rotational velocity profile if included in the sample. We removed all the binaries from our sample using the database from \citet{PriceWhelan2020}. In \citet{PriceWhelan2020}, nearly 20000 binary candidates were identified with high confidence in the 16th data release of APOGEE; out of these we have found 65 stars in common with our targets, only $\approx~7\%$ of the total number of stars.

The uncertainties of the radial velocities depend on multiple factors, mainly the characteristics of the spectra, the resolution, and the S/N ratio. For example, a star with many deep and narrow lines in its spectrum has a more precise RV than a star with wide and shallow lines. The typical uncertainty of the final radial velocity for stars in this study is 0.1 km/s.


\subsection{Method}

\begin{table*}
\begin{tabular}{lccccccccc}
\toprule
GC name & N & [Fe/H] & V$_{\rm helio}$ & R$_{\rm h}$ & d$_{\rm avg}$ & PA & PA$_{\rm err}$ & A$_{\rm rot}$ & A$_{\rm rot err}$ \\
& & & [km/s] & [arcmin] & [arcmin] & & & [km/s] & [km/s] \\
\hline
M2 & 26 & -1.65 & -5.3 & 0.93 & 3.9 & 26 & 19 & 3.48 & 0.82 \\
M3 & 145 & -1.5 & -148.6 & 1.12 & 6.3 & 164 & 13 & 1.19 & 0.28 \\
M5 & 215 & -1.29 & 52.1 & 2.11 & 5.9 & 148 & 6 & 3.45 & 0.37 \\
M12 & 65 & -1.37 & -42.1 & 2.16 & 5.8 & 56 & 93 & 0.24 & 0.19 \\
M13 & 135 & -1.53 & -246.6 & 1.49 & 5.1 & 26 & 9 & 2.38 & 0.39 \\
M15 & 138 & -2.37 & -107.5 & 1.06 & 4.8 & 120 & 11 & 2.38 & 0.44 \\
M53 & 40 & -1.86 & -79.1 & 1.11 & 4.8 & 98 & 27 & 1.54 & 0.57 \\
M71 & 28 & -0.78 & -22.9 & 1.65 & 2.7 & ... & ... & ... & ...
\\
M92 & 72 & -2.31 & -121.6 & 1.09 & 4.2 & 154 & 14 & 2.06 & 0.58 \\
M107 & 67 & -1.02 & -33.8 & 2.70 & 4.3 & 168 & 30 & 0.72 & 0.27 \\
\bottomrule
\end{tabular}
\caption{Basic parameters of the targeted globular clusters and the results of this study. The second column gives the number of observed stars. The third and fourth columns are the metallicity and the cluster's heliocentric radial velocity \citep{Harris1996, Miocchi2013}. The fifth is the half-light radius \citep{Harris1996}. The sixth is the average distance of our sample stars from the cluster center. The seventh and eighth columns give the calculated position angle and its error, while the last two columns are the rotational velocity and its uncertainty. The position angles are measured from North through East in the anti-clockwise direction.}
\end{table*}

In order to investigate the rotational velocity and the position angle of the rotational axis of each cluster, we follow the same method as \cite{Cote1995, Bellazzini2012, Bianchini2013, Lardo2015, Kimmig2015, Boberg2017, Lee2017, Lanzoni2018}.
First, each cluster was split into two halves along the cluster center. The position angle of the separation was the independent variable of the analysis, varied between PA = 0 and 180 (PA = 90 is toward East) in 2 degree steps. We ran the simulations with multiple step sizes (e.g. 2, 5, 10, 20 degrees), all providing the same end results within our uncertainties. In the end we chose the 2 degree step for all of our clusters to appropriately sample the densest areas.
Next, the mean radial velocities of these sub-samples were calculated and the difference between the mean velocities of the two sub-samples was determined. If rotation is present in the system, the difference $\Delta V_{\rm mean}$ draws a sinusoidal variation as a function of the position angle.
The amplitude of the function is twice the rotational velocity (because the amplitude is the difference between the two hemispheres) and the position of the extrema (ideally separated by 180 degrees) gives the PA of the rotational axis. \nThus, we caution the reader that the rotational velocity values printed in Tables 2 and 3 and in all figures are twice as large as the real rotational velocity, in agreement with what has been used in the literature.\nOur results are listed in Table~1.\n\n\n\\subsection{Separating multiple populations in GCs}\n\nIn this study, first (FG) and second generation (SG) stars are separated from each other based on their [Al\/Fe] abundances, following the cuts suggested by \\citet{Meszaros2020} from APOGEE data. Previously, \\citet{Meszaros2015} used an extreme-deconvolution code to fit a distribution with multiple Gaussian components in order to identify FG and SG stars in a multi-dimensional abundance space; they showed that almost all of the SG stars have [Al\/Fe]$>$0.3~dex. For this reason we use this simplified criterion to set the multiple populations apart, using abundances from \\citet{Masseron2019}. For this analysis, we selected the 4 clusters with the most observed stars, in which we have enough stars to properly fit the rotational curve. \n\n\n\\subsection{Error estimation}\nWe checked the robustness of the method with a simple jackknife test. We randomly dropped more and more stars from the sample and recalculated the rotational curves. The results indicate that we can get a good signal if we have at least 20--30 stars to work with; fewer stars than this are insufficient for a robust measurement. \n\nIn order to calculate the final uncertainty, we randomly dropped 20 percent of the stars (dropping more than this may result in fewer than 20 stars for some clusters) and derived the position angle and the rotational velocity in the sub-samples. \nWe repeated this process 100 times, and the standard deviations of the position angles and rotational velocities from the sinusoidal fits were calculated over these sub-samples. The final uncertainties seen in Figure~1 are the average of the differences between the original fit and the sub-samples.\n\nWe were able to determine the errors for all the selected clusters; however, for M12, the combination of the small number of observed stars, the low rotational signal, and the error estimation method produced a high uncertainty in the PA. \nFor M71 we could not get a clear sinusoidal signal from the data at hand. \nTable~1 contains our derived results. \n\nWe tested the robustness and the stability of the derived rotational amplitudes with a bootstrap analysis. For both populations in all clusters, 100 bootstrap realisations were generated by redistributing the measured velocities randomly (with re-sampling allowed) among the stars. These samples lost all information on the rotation, and contained a ``null signal''. The distributions were then evaluated following exactly the same method as in the case of the observations, and we recorded the best-fit amplitude of the inferred rotating model. 
This amplitude was taken as the upper limit of the rotation amplitude that would arise if the clusters did not rotate and the measured amplitudes were just a product of numerical fluctuations in the data distribution.\nThe standard deviation of the amplitudes in the bootstrap samples was in the range of 0.5--0.8~km\/s, showing that the detection of rotation in all examined clusters is indeed significant.\n\n\n\n\\section{Discussion}\n\n\\subsection{Comparison with literature}\n\\begin{table*}\\centering\n\\begin{tabular}{@{}lcc@{}c@{}cc@{}c@{}cc@{}c@{}cc@{}c@{}cc@{}c@{}}\n\\toprule\n& \\multicolumn{2}{c}{M2} & \\phantom{abc}& \\multicolumn{2}{c}{M3} & \\phantom{abc} & \\multicolumn{2}{c}{M5} & \\phantom{abc} & \\multicolumn{2}{c}{M12} & \\phantom{abc} & \\multicolumn{2}{c}{M13} \\\\\n\\cmidrule{2-3} \\cmidrule{5-6} \\cmidrule{8-9} \\cmidrule{11-12} \\cmidrule{14-15} \n& PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ \\\\ \n\\hline\n\\cite{Lane2010} & ... & ... && ... & ... && ... & ... && 40 & 0.15$\\scriptsize\\pm0.8$ && ...& ... \\\\ \n\\cite{Bellazzini2012} & ... & ... && ... & ... && 157 & 2.6$\\scriptsize\\pm0.5$ && ... & ... && ... & ... \\\\\n\\cite{Fabricius2014} & ... & ... && 192 $\\scriptsize\\pm11.8$& ... && 149$\\scriptsize\\pm5.6$ & ... && 89$\\scriptsize\\pm19.3$& ... && 17$\\scriptsize\\pm7.8$& ... \\\\ \n\\citet{Kimmig2015} & 53 & 4.7$\\scriptsize\\pm1.0$ && ... & 0.6$\\scriptsize\\pm1.0$ && ... & 2.1$\\scriptsize\\pm0.7$ && ... & 0.2$\\scriptsize\\pm0.5$ && ... & ... \\\\ \n\\citet{Lee2017} & ... & ... && ... & ... && 128 & 3.36$\\scriptsize\\pm0.7$ && ... & ... && ... & ... \\\\ \n\\cite{Cordero2017} & ... & ... && ... & ... && ... & ... && ... & ... && 14$\\scriptsize\\pm19$ & 2.7$\\scriptsize\\pm0.9$ \t \\\\\n\\cite{Kamann2018} & 41.7$\\scriptsize\\pm2.7$ &... && ... & ... &&144$\\scriptsize\\pm20.3$ & ... && ... & ... && ... & ... \t \\\\ \n\\cite{Ferraro2018} & ... & ... && 151 & 1.0 && ... & ... && ... & ... && ... & ... \t \\\\ \n\\cite{Lanzoni2018} & ... & ... && ... & ... && 145 & 4.0 && ... & ... && ... & ... \t \\\\\n\\cite{Sollima2019} & 14$\\scriptsize\\pm12.1$ & 3.01$\\scriptsize\\pm0.7$ && ... & 1.75$\\scriptsize\\pm0.4$ && 132$\\scriptsize\\pm6$ & 4.11$\\scriptsize\\pm0.4$ && ... & 0.93$\\scriptsize\\pm0.4$ && 15$\\scriptsize\\pm14.2$ & 1.53$\\scriptsize\\pm0.6$ \t \\\\ \n\\hline\nthis work & $26\\scriptsize\\pm19$ & $3.48\\scriptsize\\pm0.8$ && $164\\scriptsize\\pm15$ & $1.19\\scriptsize\\pm0.3$ && $148\\scriptsize\\pm6 $ & $3.45\\scriptsize\\pm0.4$ && $56\\scriptsize\\pm93$ & $0.24\\scriptsize\\pm0.2$ && $26\\scriptsize\\pm9$ & $2.38\\scriptsize\\pm0.4$ \\\\ \n\\bottomrule\n\\end{tabular}\n\\caption{Comparison with literature. Part~1. \\\\\nPosition angles and rotational amplitudes from earlier studies. Since different conventions were followed, we converted the published results to the PA 90 = East, anti-clockwise system. Literature sources which did not use the doubled rotational velocity were converted to our system. The first sub-column gives the position angle of the rotational axis and the second the rotational velocity in [km\/s].} \n\\end{table*}\n\n\\begin{table*}\\centering\n\\begin{tabular}{@{}lcc@{}c@{}cc@{}ccc@{}c@{}cc@{}ccc@{}}\n\\toprule\n& \\multicolumn{2}{c}{M15} & \\phantom{abc}& \\multicolumn{2}{c}{M53} & \\phantom{abc} & \\multicolumn{2}{c}{M71} & \\phantom{abc} & \\multicolumn{2}{c}{M92} & \\phantom{abc} & \\multicolumn{2}{c}{M107} \\\\\n\\cmidrule{2-3} \\cmidrule{5-6} \\cmidrule{8-9} \\cmidrule{11-12} \\cmidrule{14-15} \n & PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ && PA & A$_{\\rm rot}$ \\\\ \n\\hline\n\\cite{Lane2009} & ... & ... && nf & nf && ... & ... && ... & ... && ...& ... \\\\ \n\\cite{Bellazzini2012} & 110 & 3.8$\\scriptsize\\pm0.5$ && ... & ... && 163 & 1.3$\\scriptsize\\pm0.5$ && ... & ... && 84 & 2.9$\\scriptsize\\pm1.0$ \\\\\n\\citet{Bianchini2013} & 106$\\scriptsize\\pm1$ & 2.84&& ... & ... && ... & ... && ... & ... && ... & ... \\\\\n\\cite{Fabricius2014} & ... & ... &&113$\\scriptsize\\pm19.2$& ... && ... & ... && 99$\\scriptsize\\pm12.0$& ... && ... & ... \\\\ \n\\citet{Lardo2015} & 120 & 3.63$\\scriptsize\\pm0.1$ && ... & ... && ... & ... && ... & ... && ... & ... \\\\\n\\citet{Kimmig2015} & ... & 2.5$\\scriptsize\\pm0.8$ && ... & 0.4$\\scriptsize\\pm0.7$ && ... & 0.4$\\scriptsize\\pm0.8$ && ... & 1.8$\\scriptsize\\pm0.8$ && ... & ... \\\\ \n\\citet{Boberg2017} & ... & ... && 74 & 2.8 && ... & ... && ... & ... && ... & ... \\\\ \n\\cite{Kamann2018} &151$\\scriptsize\\pm$10.4& ... && ... & ... && ... & ... && ... & ... && ... & ... \\\\ \n\\cite{Ferraro2018} & ... & ... && ... & ... && ... & ... && ... & ... && 167 & 1.2 \\\\ \n\\cite{Sollima2019} & 128$\\scriptsize\\pm28.8$ & 3.29$\\scriptsize\\pm0.5$ && ... & ... && ... & ... && ... & 1.46$\\scriptsize\\pm0.6$ && ... & ... \t \\\\ \n\n\\hline\nthis work & $120\\scriptsize\\pm11$ & $2.38\\scriptsize\\pm0.4$ && $98\\scriptsize\\pm27$ & $1.54\\scriptsize\\pm0.6$ && ... & ...&& $154\\scriptsize\\pm14$ & $2.06\\scriptsize\\pm0.6$ && $168\\scriptsize\\pm30$ & $0.72\\scriptsize\\pm0.3$ \n\\\\ \n\\bottomrule\n\\end{tabular}\n\\caption{Comparison with literature. Part~2. nf = no evidence of rotation found.}\n\\end{table*}\n\nThe latest results available in the literature are collected in Tables 2 and 3, which contain the calculated position angles of the rotational axis for clusters in common with our sample. These data are also shown in Figure 3. There are multiple conventions used in the literature for angle and direction notations. We converted all these various approaches to the PA 90 = East convention. The last row contains our values.\n\nWe detected systematic rotation in almost all of the targeted globular clusters. 
We could confidently derive rotational velocities and position angles for nine out of the ten selected clusters. All nine clusters have been studied in the literature before, thus we are able not only to compare our results with previous studies, but also to homogenize the rotational velocities, as our radial velocities come from one homogeneous survey. \nWe caution the reader that the assumption of a constant rotation velocity as a function of distance from the cluster centre is a significant simplification. \nThe observations of \\citet{Boberg2017, Bianchini2018, Sollima2019} have shown that the peak of the rotational curve is located approximately at the cluster half-light radius; however, this location is expected to change during the evolution of the cluster \\citep{Tiongco2017}.\nWe list the half-light radius and the average distance of our sample stars from the cluster centre in Table~1. In all cases, the average distance is at least 2--3 times larger than the half-light radius, suggesting that our assumption of a constant rotation velocity underestimates the magnitude of the rotational velocity.\n\nBefore such a comparison can be made, one must transform the results from the literature to a common coordinate system (PA 90 = East, anti-clockwise). After the transformation we can conclude that, while for some of the clusters we have good agreement within our uncertainties, other, less studied clusters with fewer stars show larger than expected discrepancies between studies. In the next few sections we examine these differences for each cluster. 
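As a concrete illustration, the splitting-and-fitting procedure of Section 2.2, which underlies all of the comparisons below, can be sketched as follows. This is a minimal sketch, not the pipeline actually used here; the array names and the linear least-squares sine fit are our own illustrative choices.

```python
import numpy as np

def rotation_curve(x, y, v, step=2.0):
    """Mean-velocity difference between the two halves of the cluster,
    as a function of the position angle of the dividing axis.

    x, y : projected offsets from the cluster centre (towards East, North)
    v    : radial velocities relative to the cluster mean
    """
    pas = np.arange(0.0, 180.0, step)
    # position angle of each star, measured from North towards East
    theta = np.degrees(np.arctan2(x, y)) % 360.0
    dv = []
    for pa in pas:
        side = ((theta - pa) % 360.0) < 180.0   # stars in [pa, pa+180)
        dv.append(v[side].mean() - v[~side].mean())
    return pas, np.array(dv)

def fit_rotation(pas, dv):
    """Least-squares fit of dv = a*sin(PA) + b*cos(PA); returns the
    amplitude (= twice the projected rotational velocity, see text)
    and the PA at which the fitted curve peaks."""
    th = np.radians(pas)
    M = np.column_stack([np.sin(th), np.cos(th)])
    (a, b), *_ = np.linalg.lstsq(M, dv, rcond=None)
    amp = float(np.hypot(a, b))
    pa_peak = float((90.0 - np.degrees(np.arctan2(b, a))) % 180.0)
    return amp, pa_peak
```

Writing the sinusoid as a linear combination of sin(PA) and cos(PA) makes the fit a plain linear least-squares problem, avoiding a nonlinear optimiser.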
\n\n\\subsubsection{\\rm M5}\nM5 is a well observed cluster, targeted by \\cite{Bellazzini2012, Fabricius2014, Kimmig2015, Lee2017, Kamann2018,Lanzoni2018}, and therefore an excellent object to use as a standard to compare our results to, especially because these literature sources used different measurement methods.\nThe position angle varies between 144.3 and 157 degrees in the literature; our result of 148 degrees fits nicely into this picture. \n\nAs mentioned before, our calculation technique is similar to that of many studies of M5 \\citep{Bellazzini2012, Lee2017, Lanzoni2018}, and comparing our results to these studies we find good agreement in all cases. \\citet{Fabricius2014} and \\citet{Kamann2018} used an IFU spectrograph for their analyses. The advantage of this method is that it is possible to measure crowded stellar fields, allowing a detailed analysis of dispersion fields and central rotation. \nIn \\cite{Bianchini2018} the rotational pattern was derived from proper motion data (GAIA). Despite the different observations and methodology, we feel confident in our approach, as it nicely reproduces the rotation velocity and angle reported by the mentioned studies for M5.\nIn \\citet{Sollima2019} the rotation of M5 (among other clusters) was investigated via the radial velocity component from VLT and Keck instruments and proper motion data from the GAIA 2nd data release. There, the rotational velocity was derived from all three velocity components (taking into consideration the inclination of the rotational axis), while we were able to use only the line-of-sight part. Their derived results show good agreement with ours in the case of the PA, and the difference in rotational velocity can be explained by the fact that we observed only the line-of-sight velocity, while they were able to determine the inclination. 
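The subsample-based uncertainty estimate of Section 2.4 (randomly drop 20 per cent of the stars, refit, and repeat) can be sketched generically. Here `fit_fn` stands for any routine that maps positions and velocities to an amplitude and a position angle; all names are illustrative, not those of the actual analysis code.

```python
import numpy as np

def subsample_scatter(x, y, v, fit_fn, n_trials=100, drop_frac=0.2, seed=0):
    """Refit random (1 - drop_frac) subsamples of the stars and return
    the scatter of the fitted amplitude and position angle.

    fit_fn : callable (x, y, v) -> (amplitude, position_angle)
    """
    rng = np.random.default_rng(seed)
    n = len(v)
    keep_n = int(round(n * (1.0 - drop_frac)))
    amps, angles = [], []
    for _ in range(n_trials):
        idx = rng.choice(n, size=keep_n, replace=False)  # drop ~20% of stars
        a, pa = fit_fn(x[idx], y[idx], v[idx])
        amps.append(a)
        angles.append(pa)
    return float(np.std(amps)), float(np.std(angles))
```

Passing the fitting routine in as a parameter keeps the uncertainty machinery independent of the particular rotation-curve model.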
\n\n\\subsubsection{\\rm M2}\n\\cite{Kimmig2015} derived a rotational velocity of A$_{\\rm rot} = 4.5 ~\\rm km\/s$, which is slightly larger than our value of $3.48\\pm0.8$. Considering our uncertainty, we conclude that these differences are not substantial. The position angle also differs from \\citet{Kimmig2015, Kamann2018, Sollima2019}, but our limited sample size causes a 19 degree uncertainty, which can explain the difference. \n\n\n\\subsubsection{\\rm M3, M12 and M13}\nWe have a good number of observed stars in M3 and the derived position angles are within errors of \\cite{Ferraro2018}. The difference from \\cite{Fabricius2014} can be explained by their relatively large errors and the different analysis method applied. \n\nOur derived results for the rotational properties of M12 have a high uncertainty, probably because either the amplitude of the rotation is too small, or the inclination is close to 90 degrees, so that we look almost along the direction of the rotational axis. We have good agreement with the results of \\cite{Lane2010} and, considering the large uncertainty, we agree well with \\cite{Fabricius2014} too. \n\nOur results show good agreement, at the level of our uncertainties, with \\cite{Fabricius2014}, \\cite{Cordero2017} and \\citet{Sollima2019} for M13. The rotational amplitude is also very similar to the one presented in \\cite{Cordero2017}. \n\n\\subsubsection{\\rm M15}\nIn the case of M15 we have a slight deviation in the PA from one of the previous results, namely \\cite{Kamann2018}; however, we are in good agreement with the other four \\citep{Bellazzini2012, Bianchini2013, Lardo2015, Sollima2019}. The 30 degree misalignment with \\cite{Kamann2018} might be due to two different reasons. One is that \\cite{Kamann2018} used data from an IFU spectrograph and used Voronoi-binned maps of the mean velocity and velocity dispersion across the observed region. \nThe second reason is that they focused on the cluster centre region, while our sample contains fewer stars from the cluster's centre and more from the outer region. There is some variation among the rotational amplitudes derived in the literature, but they are on the level of what is expected from the uncertainties of the measurements and methods. \n\n\\subsubsection{\\rm M53}\nFor M53, our PA value lies between those of \\cite{Fabricius2014} and \\cite{Boberg2017}, with a relatively high margin of error. The cause of this is probably the small sample size and the relatively small rotational signal originating from the cluster. The effect of using fewer observed stars in the calculation is visible in Figure~1: if the cluster contains fewer stars, the signal becomes more scattered; however, the rotation is still detectable in this case. In the study of \\cite{Lane2009}, a detectable rotational signal was not found for this cluster within the margin of error. \n\n\\subsubsection{\\rm M71}\nIn the case of M71, we did not find a conclusive fit for the available data, which is interesting considering that we selected 28 stars from M71, similar to M2, where such a small sample size was enough for a measurement. The lack of a measurement may suggest that this globular cluster does not rotate, that the inclination of the axis is close to 90 degrees, or that the rotational signal is simply too small to be detectable from our sample. \n\n\\subsubsection{\\rm M92}\nThe difference between our PA and that of \\citet{Fabricius2014} in M92 is significant. Since they used a different calculation method, a large difference such as this might be expected; however, in the cases of M5, M3 and M53 we have good agreement with this source. For this reason we do not know why our PA differs so much from \\citet{Fabricius2014}, especially as our sample is large enough for a reliable measurement. 
Our derived rotational amplitude is similar to the ones derived in \\cite{Kimmig2015} and \\cite{Sollima2019}. \n\n\\subsubsection{\\rm M107}\nOur derived position angle is very close to that of \\cite{Ferraro2018}; however, there is a 90 degree deviation from \\cite{Bellazzini2012}, and a significant difference between the rotational amplitudes too. We are not sure what causes this, because \\cite{Bellazzini2012} used the same method as we did, and for other clusters we found acceptable agreement. \n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=6.5in,angle=0]{all_gc_curve_5.png}\n\\caption{ Global rotation of the globular clusters, with the FG and SG samples unified. \\\\ \nPosition angle of the dividing axis vs.\\ the difference between the mean velocities of the two sub-samples, for all of the studied globular clusters except M71 (for which no conclusive fit was found). The line shows the best-fit sine function; the actual rotational velocity is half of the amplitude. } \n\\label{fig:m5radseb} \n\\end{figure*}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=6.5in,angle=0]{all_2gen_curve7.png}\n\\caption{Comparison of rotation curves of FG (red) and SG (blue) stars in the 4 selected globular clusters. \n} \n\\label{fig:2gen_rvcurve} \n\\end{figure*}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=6.5in,angle=0]{compare.png}\n\\caption{Comparison between values determined in other studies (Tables 2 and 3) and our results. \n} \n\\label{fig:compare} \n\\end{figure*}\n\n\\subsection{Rotation according to first and second generation stars}\n\nFrom theoretical studies \\citep{Decressin2007, Bekki2010, Denissenkov2014, Bekki2017} we can expect GCs to have a higher cluster rotational speed when measured from the SG stars than from the FG stars only. At the same time, the velocity dispersion should be higher among the FG stars. This is because the FG stars formed from massive molecular clouds, and their first supernovae expelled the remaining cold gas from the GC. In the next stage, the polluted gas from the FG stars accumulated in the cluster centre, and this was the origin of the SG stars. Numerical simulations based on this theory \\citep{Bekki2017} suggest a higher rotational speed for SG than for FG stars. We are able to test this idea in M3, M5, M13 and M15, clusters in which we have enough stars to sample both populations and are able to measure the cluster rotation based on the two generations of stars. Our results are listed in Table~4, and shown in Figure~2. \nHowever, we have to mention that in the case of other formation scenarios the opposite rotational behaviour might be observed for FG and SG stars, i.e.\\ the first generation rotating faster and the second slower, as suggested by \\citet{Henault2015}. \nMany literature studies have examined the rotation as a function of stellar populations, such as 47 Tuc \\citep{Milone2018}, Omega Centauri \\citep{Bellini2018}, NGC 6352 \\citep{Libralato2019}, M80 \\citep{Kamann2020} or M54 \\citep{Alfaro2020}, but none of these clusters are in our sample, thus no direct comparison is possible.\n\nFrom Figure~2 we can conclude that in the cases of M5, M13 and M15 the rotational velocities derived from the FG and SG stars do not differ significantly; the small discrepancies are all well within our derived uncertainties. We are not able to measure the predicted difference in these three clusters. The position angles of M5 and M13 are also very close for both generations of stars, but we observe a large deviation in the case of M15. The position angle is $144\\pm56$ from the FG stars, and $98\\pm18$ from the SG stars, but the extremely large uncertainty of the first value does not allow us to conclude that this difference has an astrophysical origin. A larger sample of observed stars, supplemented by proper motion data, may shed some light on this phenomenon. 
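The bootstrap null test of Section 2.4, used below to place an upper limit on a rotation signal hidden in the numerical fluctuations, can be sketched as follows. This is an illustrative sketch only; `fit_fn` stands for any routine returning a fitted amplitude, and the function name is ours.

```python
import numpy as np

def null_amplitude_limit(x, y, v, fit_fn, n_boot=100, seed=0):
    """Scramble the measured velocities among the stellar positions
    (re-sampling allowed), destroying any real rotation signal, and
    return a 3-sigma upper limit on the amplitude that such a null
    sample can still produce by chance.

    fit_fn : callable (x, y, v) -> (amplitude, position_angle)
    """
    rng = np.random.default_rng(seed)
    amps = np.array([
        fit_fn(x, y, rng.choice(v, size=len(v), replace=True))[0]
        for _ in range(n_boot)
    ])
    # e.g. a bootstrap scatter of 0.55 km/s gives a 1.65 km/s 3-sigma limit
    return float(3.0 * amps.std())
```

A measured amplitude above this limit is unlikely to be a pure fluctuation of the velocity distribution.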
\n\n\\citet{Cordoni2020a} examined the rotation of M5 according to FG and SG stars using GAIA proper motion data and line-of-sight velocities. The FG and SG stars present a significant difference in position angle in their study. We did not find these characteristics in our analysis, but it must be noted that our sample is much smaller than theirs.\n\nIf we compare our M13 results (see Table~4) to \\cite{Cordero2017}, in which the `extreme' population corresponds to SG with PA = 7 and the remaining stars (``normal'' + ``intermediate'') to FG with PA = 33, then we find very good agreement with our results (PA = 12 for SG and PA = 34 for FG). Although the uncertainties in both studies can be considered high, the observed differences are well within these errors. Considering our errors, we do not believe the discrepancy between the PAs of the FG and SG groups has an astrophysical origin. \n\nM3 is a peculiar object in our sample. M3 does not appear to show any detectable global rotation when examined through the FG group of stars only; however, in the case of the SG sample, we clearly see a strong rotation curve with an amplitude of $2.69\\pm0.6$ km\/s. The difference between the cluster rotational velocities of the two populations of stars is significant when compared to the uncertainties. \n\nIn order to estimate the upper limit for a possible rotation that could remain hidden in the numerical fluctuations, we used the bootstrap analysis described in Section 2.4. In M3, the standard deviation of the bootstrap amplitudes was 0.55~km\/s for the FG stars, therefore the rotational amplitude is $<1.65$~km\/s with a 3-$\\sigma$ confidence. \n\nOur results appear to follow the theoretical predictions of \\cite{Bekki2017}: the cluster rotational velocity based on the SG stars is significantly higher than that based on the FG stars. The behaviour of the FG stars is also interesting, because our observations suggest a rotational velocity very close to zero, without any hint of what the PA might be. The simplest explanation is that the rotational velocity is so small that it is not possible to detect it within our precision. \n\n\\begin{table*} \n\\begin{tabular}{lcccc} \n\\toprule\nGC & Gen & N & PA & A$_{\\rm rot}$ \\\\ \n\\hline\nM3 & (fg) & 95 & ... & ... \\\\\n & (sg) & 45 & 162 $\\pm$ 11 & 2.69 $\\pm$ 0.6 \\\\\n\nM5 & (fg) & 102 & 150 $\\pm$ 8 & 3.37 $\\pm$ 0.3 \t \\\\\n & (sg) & 92 & 150 $\\pm$ 6 & 4.36 $\\pm$ 0.5 \\\\\n \nM13 & (fg) & 36 & 34 $\\pm$ 36 & 2.62 $\\pm$ 0.8 \\\\ \n & (sg) & 70 & 12 $\\pm$ 36 & 3.12 $\\pm$ 0.5 \\\\\n\nM15 & (fg) & 33 & 144 $\\pm$ 56 & 2.82 $\\pm$ 0.8 \t\\\\ \n & (sg) & 49 & 98 $\\pm$ 18 & 2.67 $\\pm$ 0.7 \\\\\n\\bottomrule\n\\end{tabular} \n\\caption{Kinematic properties of the first and second generation stars in the four selected clusters.}\n\\end{table*}\n \n\n\\section{Conclusions}\n\nWe found evidence of rotation in M2, M3, M5, M12, M13, M15, M53, M92, and M107, but not in M71, supporting the theory that most globular clusters preserve a significant amount of rotation during their lifetimes. \nFor most clusters, these results show good agreement with other similar studies. With the precise radial velocity data of the APOGEE survey, we were able to provide homogeneous rotational velocities and PAs for several clusters for which such homogeneity did not exist among independent literature sources. \n\nWe successfully identified rotational signals originating from the two generations of stars in the 4 selected clusters. \nIn M3, we discovered a significant difference between the rotational velocities of the cluster when examined through the FG and the SG stars separately. This is very much in agreement with the prediction of numerical simulations by several independent groups \\citep{Decressin2007, Bekki2010, Denissenkov2014, Bekki2017}. The cluster does not show a detectable rotational signal when selecting the FG stars only, while in the case of the SG stars the cluster has a clear rotational signal at 2.69 km\/s. It is not clear what causes this phenomenon, but a detailed analysis with supplementary proper motion data might resolve this issue. \n\nIn M5 and M13 the FG rotational velocity is somewhat smaller than the SG velocity, as the theory predicts; however, the differences are well within the level of our uncertainties, thus we do not believe we see the prediction of the numerical simulations. In terms of PA, the deviations between the populations are very small or nonexistent, and well within the derived uncertainties. \nFor M15 one can see a difference in PAs, but the relatively high uncertainty of the PA for the FG stars prevents us from drawing a clear conclusion; therefore further analysis with a larger sample size is required. \n \n\n\n\\section*{Acknowledgements} \n\nL. Sz. and Sz. M. have been supported by the Hungarian \nNKFI Grants K-119517 and GINOP-2.3.2-15-2016-00003 of the Hungarian National Research, Development and Innovation Office. Sz. M. has been supported by the J{\\'a}nos Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the {\\'U}NKP-20-4 New National Excellence Program of the Ministry for Innovation and Technology.\n\nFunding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, \nthe U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges\nsupport and resources from the Center for High-Performance Computing at\nthe University of Utah. 
The SDSS web site is www.sdss.org.\n\nSDSS-IV is managed by the Astrophysical Research Consortium for the \nParticipating Institutions of the SDSS Collaboration including the \nBrazilian Participation Group, the Carnegie Institution for Science, \nCarnegie Mellon University, the Chilean Participation Group, the French Participation Group, \nHarvard-Smithsonian Center for Astrophysics, \nInstituto de Astrof\\'isica de Canarias, The Johns Hopkins University, \nKavli Institute for the Physics and Mathematics of the Universe (IPMU) \/ \nUniversity of Tokyo, Lawrence Berkeley National Laboratory, \nLeibniz Institut f\\\"ur Astrophysik Potsdam (AIP), \nMax-Planck-Institut f\\\"ur Astronomie (MPIA Heidelberg), \nMax-Planck-Institut f\\\"ur Astrophysik (MPA Garching), \nMax-Planck-Institut f\\\"ur Extraterrestrische Physik (MPE), \nNational Astronomical Observatories of China, New Mexico State University, \nNew York University, University of Notre Dame, \nObservat\\'ario Nacional \/ MCTI, The Ohio State University, \nPennsylvania State University, Shanghai Astronomical Observatory, \nUnited Kingdom Participation Group,\nUniversidad Nacional Aut\\'onoma de M\\'exico, University of Arizona, \nUniversity of Colorado Boulder, University of Oxford, University of Portsmouth, \nUniversity of Utah, University of Virginia, University of Washington, University of Wisconsin, \nVanderbilt University, and Yale University.\n\n\\section*{Data availability} \nThe data used in this article is part of the 14th Data Release from SDSS-IV \/ APOGEE survey, and it is publicly available at \\url{https:\/\/www.sdss.org\/dr14\/data_access\/}. 
\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nModern-day reasoning systems often have to react to real-time information about the real world\nprovided by e.g.~sensors.\nThis information is typically conceptualized as a data stream, which is accessed by the reasoning\nsystem.\nThe reasoning tasks associated to data streams -- usually called \\emph{continuous queries} -- are\nexpected to run continuously and produce results through another data stream in an online fashion,\nas new elements arrive.\n\nA data stream is a potentially unbounded sequence of data items generated by an active, uncontrolled\ndata source.\nElements arrive continuously at the system, potentially unordered, and at unpredictable rates.\nThus, reasoning over data streams requires dealing with incomplete or missing data, potentially\nstoring large amounts of data (in case it might be needed to answer future queries), and providing\nanswers in timely fashion -- among other problems, see\ne.g.~\\cite{Babcock2002,Stonebraker2005,DellAglio2017}.\n\nThe output stream is normally ordered by time, which implies that the system may have to\ndelay appending some answer because of uncertainty in possible answers relating to earlier\ntime points.\nThe length of this delay may be unpredictable (\\emph{unbound wait}) or infinite, for example if the\nquery uses operators that range over the whole input data stream (\\emph{blocking operations}).\nIn these cases, answers that have been computed may never be output.\nAn approach to avoid this problem is to restrict the language by forbidding blocking\noperations~\\cite{Zaniolo2012,Ronca2018}.\nAnother approach uses the concept of reasoning window~\\cite{Becketal2015,Ozcep2017}, which bounds\nthe size of the input that can be used for computing each output (either in time units or in number\nof events).\n\nIn several applications, it is useful to know that some answers are likely to be produced 
in the\nfuture, since there is already some information that might lead to their generation.\nThis is the case namely in prognosis systems (e.g., medical diagnosis, stock market prediction),\nwhere one can prepare for the possibility of something happening.\nTo this goal, we propose \\emph{hypothetical answers}: answers that are supported by information\nprovided by the input stream, but that still depend on other facts being true in the future.\nKnowledge about both the facts that support the answer and possible future facts that may make it\ntrue\ngives users the possibility to make timely, informed decisions\nin contexts where preemptive measures may have to be taken.\n\nMoreover, by giving such hypothetical answers to the user we cope with unbound wait in a\nconstructive way, since the system is no longer ``mute'' while waiting for an answer to become\ndefinitive.\n\nMany existing approaches to reasoning with data streams adapt and extend models, languages and\ntechniques used for querying databases and the semantic web~\\cite{Arasu2006,Barbieri2009}.\nWe develop our theory in line with the works\nof~\\cite{Zaniolo2012,Becketal2015,DaoTran2017,Ozcep2017,Motik}, where continuous queries are treated\nas rules of a logic program that reasons over facts arriving through a data stream.\n\n\\paragraph{Contribution.}\nWe present a declarative semantics for queries in Temporal Datalog, where we define the notions of\nhypothetical and supported answers.\nWe define an operational semantics based on SLD-resolution, and show that there is a natural\nconnection between the answers computed by this semantics and hypothetical and supported answers.\nBy refining SLD-resolution, we obtain an online algorithm for maintaining and updating the set\nof answers that are consistent with the currently available information.\nFinally, we show that our results extend to a language with negation.\n\n\\paragraph{Structure.}\nSection~\\ref{sec:backgr} revisits some fundamental background 
notions, namely the formalism\nfrom~\\cite{Motik}, which we extend in this paper, and introduces the running example that we use\nthroughout this article.\nSection~\\ref{sec:awp} introduces our declarative semantics for continuous queries, defining\nhypothetical and supported answers, and relates these concepts with the standard definitions of\nanswers.\nSection~\\ref{sec:sld} presents our operational semantics for continuous queries and relates it to\nthe declarative semantics. \nSection~\\ref{sec:algorithm} details our online algorithm to compute supported answers incrementally,\nas input facts arrive through the data stream, and proves it sound and complete.\nSection~\\ref{sec:neg} extends our framework to negation by failure.\nSection~\\ref{sec:rw} compares our proposal to similar ones in the literature, discussing its\nadvantages.\n\n\\section{Background}\n\\label{sec:backgr}\n\nIn this section, we review the most relevant concepts for our work.\n \n\\subsection{Continuous queries in Temporal Datalog}\n\\label{sec:motiks}\nWe use the framework from~\\cite{Motik} to write continuous queries over data streams, slightly\nadapting some definitions.\nWe work in \\emph{Temporal Datalog}, the fragment of negation-free Datalog extended with the special\ntemporal sort from~\\cite{Chomicki1988}, which is isomorphic to the set of natural numbers equipped\nwith addition with arbitrary constants.\nIn Section~\\ref{sec:neg} we extend this language with negation.\n\n\\paragraph{Syntax of Temporal Datalog.}\nA vocabulary consists of constants (numbers or identifiers in lowercase), variables (single\nuppercase letters) and predicate symbols (identifiers beginning with an uppercase letter).\nAll these may be indexed if necessary; occurrences of predicates and variables are distinguished by\ncontext.\nIn examples, we use words in sans serif for concrete constants and predicates.\n\nConstants and variables have one of two sorts: \\emph{object} or \\emph{temporal}.\nAn
\\emph{object term} is either an object (constant) or an object variable.\nA \\emph{time term} is either a natural number (called a \\emph{time point} or \\emph{temporal\n constant}), a time variable, or an expression of the form $T+\\m{k}$ where $T$ is a time variable\nand $\\m{k}$ is an integer.\n\nPredicates can take at most\none temporal parameter, which we assume to be the last one (if present).\nA predicate with no temporal parameters is called \\emph{rigid}, otherwise it is called\n\\emph{temporal}.\nAn atom is an expression $P(t_1,\\ldots,t_n)$ where $P$ is a predicate and each $t_i$ is a term of\nthe expected sort.\n\nA rule has the form $\\wedge_i\\alpha_i\\rightarrow\\alpha$, where $\\alpha$ and each $\\alpha_i$ are\nrigid or temporal atoms.\nAtom $\\alpha$ is called the \\emph{head} of the rule, and $\\wedge_i\\alpha_i$ the \\emph{body}.\nRules are assumed to be \\emph{safe}: each variable in the head must occur in the body.\nA \\emph{program} is a set of rules.\n\nA predicate symbol that occurs in an atom in the head of a rule with non-empty body is called\n\\emph{intensional} (IDB predicate).\nPredicates that are defined only through rules with empty body are called \\emph{extensional} (EDB\npredicates).\nAn atom is extensional (EDB atom) or intensional (IDB atom) according to whether its predicate is\nextensional or intensional.\n\nA term, atom, rule, or program is \\emph{ground} if it contains no variables.\nWe write $\\var\\alpha$ for the set of variables occurring in an atom $\\alpha$, and extend this function\nhomomorphically to rules and sets.\nA \\emph{fact} is a function-free ground atom; since Temporal Datalog does not allow function symbols\nexcept in temporal terms, every ground rigid atom is a fact.\n\nRules are instantiated by means of \\emph{substitutions}, which are functions mapping variables to\nterms of the expected sort.\nThe \\emph{support} of a substitution $\\theta$ is the set $\\supp\\theta=\\{X\\mid\\theta(X)\\neq X\\}$.\nWe consider only substitutions 
with finite support, and write\n$\\theta=\\subst{X_1:=t_1,\\ldots,X_n:=t_n}$ for the substitution mapping each\nvariable $X_i$ to the term $t_i$, and leaving all remaining variables unchanged.\nA substitution is \\emph{ground} if every variable in its support is mapped to a constant.\nAn \\emph{instance} $r'=r\\theta$ of a rule $r$ is obtained by simultaneously replacing every variable\n$X$ in $r$ by $\\theta(X)$ and computing any additions of temporal constants.\n\nA \\emph{query} is a pair $Q=\\tuple{P,\\Pi}$ where $\\Pi$ is a program and $P$ is an IDB atom in\nthe language underlying $\\Pi$.\nQuery $Q$ is \\emph{temporal} (respectively, \\emph{rigid}) if the predicate in $P$ is a temporal\n(resp.\\ rigid) predicate.\n(Note that we do not require $P$ to be ground.)\n\nA \\emph{dataset} is a set of EDB facts (\\emph{input facts}), intuitively produced by a data stream.\nFor each dataset $D$ and time point $\\tau$, we consider $D$'s \\emph{$\\tau$-history}: the dataset\n$D_\\tau$ of the facts produced by $D$ whose temporal argument is at most $\\tau$.\nBy convention, $D_{-1}=\\emptyset$.\n\n\\paragraph{Semantics.}\nThe semantics of Temporal Datalog is a variant of the standard semantics based on Herbrand models.\nA Herbrand interpretation $I$ for Temporal Datalog is a set of facts.\nIf $\\alpha$ is an atom with no variables, then we define $\\bar\\alpha$ as the fact obtained from\n$\\alpha$ by evaluating each temporal term.\nIn particular, if $\\alpha$ is rigid, then $\\bar\\alpha=\\alpha$.\nWe say that $I$ satisfies $\\alpha$, written $I\\models\\alpha$, if $\\bar\\alpha\\in I$.\nThe extension of the notion of satisfaction to the whole language follows the standard\nconstruction, and the definition of entailment is the standard one.\n\nAn \\emph{answer} to a query $Q=\\tuple{P,\\Pi}$ over a dataset $D$ is a ground substitution\n$\\theta$ whose support is the set of variables in $P$, satisfying $\\Pi\\cup D\\models P\\theta$.\nIn the context of continuous query answering, we 
are interested in the case where $D$ is a\n$\\tau$-history of some data stream, which changes with time.\nWe denote the set of all answers to $Q$ over $D_\\tau$ as $\\ans QD\\tau$.\n\nWe illustrate the extension we propose with a Temporal Datalog program, which is a small variant of\nExample 1 in~\\cite{Motik}.\nThis will be the running example throughout our paper.\n\n\\begin{example}\n \\label{ex:toy}\n A set of wind turbines are scattered throughout the North Sea.\n Each turbine has a sensor that sends temperature readings\n $\\m{Temp}(\\mathit{Device},\\mathit{Level},\\mathit{Time})$ to a data centre.\n The data centre tracks activation of cooling measures in each turbine, recording malfunctions and\n shutdowns by means of the following program $\\ensuremath{\\Pi_E}$.\n \\begin{align*}\n \\m{Temp}(X,\\m{high},T) &\\rightarrow \\m{Flag}(X,T) \\\\\n \\m{Flag}(X,T) \\land \\m{Flag}(X, T+1) &\\rightarrow \\m{Cool}(X,T+1) \\\\\n \\m{Cool}(X,T) \\land \\m{Flag}(X, T+1) &\\rightarrow \\m{Shdn}(X,T+1) \\\\\n \\m{Shdn}(X,T) &\\rightarrow \\m{Malf}(X,T-2)\n \\end{align*}\n\n Consider the query $\\ensuremath{Q_E}=\\tuple{\\m{Malf}(X,T),\\ensuremath{\\Pi_E}}$.\n If the history $D_0$ contains only the fact $\\m{Temp}(\\m{wt25},\\m{high},0)$, then at time\n instant $0$ there is no answer to $\\ensuremath{Q_E}$.\n If $\\m{Temp}(\\m{wt25},\\m{high},1)$ arrives in $D$, then\n $D_1=\\{\\m{Temp}(\\m{wt25},\\m{high},0),\\m{Temp}(\\m{wt25},\\m{high},1)\\}$,\n and there still is no answer to $\\ensuremath{Q_E}$.\n Finally, the arrival of $\\m{Temp}(\\m{wt25},\\m{high},2)$ in $D$ yields\n $D_2=\\{\\m{Temp}(\\m{wt25},\\m{high},0),\\m{Temp}(\\m{wt25},\\m{high},1), \\m{Temp}(\\m{wt25},\\m{high},2)\\}$,\n allowing us to infer $\\m{Malf}(\\m{wt25},0)$.\n Then $\\subst{X:=\\m{wt25},T:=0}\\in\\ans\\ensuremath{Q_E} D2$.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\nThroughout this work, we do not distinguish between the temporal argument in a fact (the 
timepoint\nwhere it is produced) and the instant when it arrives in $D$.\nIn other words, we assume that at each time point $\\tau$, the $\\tau$-history $D_\\tau$ contains all\nEDB facts about time instants $\\tau'\\leq\\tau$.\n\n\\subsection{SLD-resolution}\n\\label{sec:SLD}\n\nWe now revisit some concepts from SLD-resolution.\n\nA \\emph{literal} is an atom or its negation.\nAtoms are also called \\emph{positive} literals, and a negated atom is a \\emph{negative} literal.\nA \\emph{Horn clause} is a disjunction of literals containing at most one positive literal; if it\ncontains exactly one positive literal, it is a \\emph{definite clause}.\nIn the case where all literals are negative, the clause is a \\emph{goal}.\nWe use the standard rule notation for writing definite clauses.\n\n\\begin{definition}\n Given two substitutions $\\theta$ and $\\sigma$, with\n \\begin{align*}\n \\theta&=\\subst{X_1:=t_1,\\ldots,X_m:=t_m}\\\\\n \\mbox{and }\\sigma&=\\subst{Y_1:=u_1,\\ldots,Y_n:=u_n}\\,,\n \\end{align*}\n their \\emph{composition} $\\theta\\sigma$ is obtained from\n $$\\subst{X_1:=t_1\\sigma,\\ldots,X_m:=t_m\\sigma,Y_1:=u_1,\\ldots,Y_n:=u_n}$$ by (i) deleting any\n binding where $t_i\\sigma=X_i$ and (ii) deleting any binding $Y_j:=u_j$ where\n $Y_j\\in\\{X_1,\\ldots,X_m\\}$.\n\\end{definition}\nFor every atom $\\alpha$, $\\alpha(\\theta\\sigma)=(\\alpha\\theta)\\sigma$.\n\n\\begin{definition}\n Two atomic formulas $P(\\vec X)$ and $P(\\vec Y)$ are \\emph{unifiable} if there exists a\n substitution $\\theta$ such that $P(\\vec X)\\theta=P(\\vec Y)\\theta$.\n\n A unifier $\\theta$ of $P(\\vec X)$ and $P(\\vec Y)$ is called a \\emph{most general unifier (mgu)}\n if for each unifier $\\sigma$ of $P(\\vec X)$ and $P(\\vec Y)$ there exists a substitution $\\gamma$\n such that $\\sigma=\\theta\\gamma$.\n\\end{definition}\nIt is well known that any two unifiable atoms have an mgu, and that mgus are\nunique up to renaming of variables.\n\nRecall that a \\emph{goal} is a clause of the form $\\neg\\wedge_j\\beta_j$.\nIf $C$ is a 
rule $\\wedge_i\\alpha_i\\to\\alpha$, $G$ is a goal $\\neg\\wedge_j\\beta_j$\nwith $\\var G\\cap\\var C=\\emptyset$, and $\\theta$ is an mgu of $\\alpha$ and $\\beta_k$, then the\n\\emph{resolvent} of $G$ and $C$ is the goal\n$\\neg\\left(\\bigwedge_i\\alpha_i\\wedge\\bigwedge_{j\\neq k}\\beta_j\\right)\\theta$.\n\nIf $P$ is a program and $G$ is a goal, an \\emph{SLD-derivation} of $P\\cup\\{G\\}$ is a (finite or\ninfinite) sequence $G_0,G_1,\\ldots$ of goals with $G=G_0$, a sequence $C_1,C_2,\\ldots$ of\n$\\alpha$-renamings of program clauses of $P$ and a sequence $\\theta_1,\\theta_2,\\ldots$ of\nsubstitutions such that $G_{i+1}$ is the resolvent of $G_i$ and $C_{i+1}$ using $\\theta_{i+1}$.\nA finite SLD-derivation of $P\\cup\\{G\\}$ of length $n$ whose last goal is the empty clause ($\\Box$)\nis called an \\emph{SLD-refutation} of $P\\cup\\{G\\}$, and the substitution obtained by restricting\nthe composition of $\\theta_1,\\ldots,\\theta_n$ to the variables occurring in $G$ is called a\n\\emph{computed answer} of $P\\cup\\{G\\}$.\n\n\\section{Hypothetical answers}\n\\label{sec:awp}\n\nIn our running example, $\\m{Temp(wt25,high,0)}$ being produced at time instant $0$ yields some\nevidence that $\\m{Malf(wt25,0)}$ may turn out to be true.\nAt time instant $1$, we may receive further evidence as in the example (the arrival of\n$\\m{Temp(wt25,high,1)}$), or we might find out that this fact will not be true (if\n$\\m{Temp(wt25,high,1)}$ does not arrive).\n\nWe propose a theory where such \\emph{hypothetical answers} to a continuous query are output: if some\nsubstitution can become an answer as long as some facts in the future are true, then we output this\ninformation.\nIn this way we can lessen the negative effects of unbound wait.\nHypothetical answers can also refer to future time points: in our example, \\subst{X:=\\m{wt25},T:=2}\nwould also be output at time point 0 as a substitution that may prove to be an answer to the query\n\\tuple{\\m{Shdn}(X,T),\\ensuremath{\\Pi_E}} when further 
information arrives.\n\nOur formalism uses ideas from multi-valued logic, where some substitutions correspond to answers\n(true), others are known not to be answers (false), and others are consistent with the available\ndata, but cannot yet be shown to be true or false.\nIn our example, the fact $\\m{Malf(wt25,0)}$ is consistent with the data at time point $0$, and thus\n``possible''; it is also consistent with the data at time point $1$, and thus ``more possible''; and\nit finally becomes (known to be) true at time point $2$.\n\nAs already motivated, we want answers to give us not only the substitutions that make the query goal\ntrue, but also ones that make the query goal possible in the following sense: they depend both on\npast and future facts, and the past facts are already known.\n\nFor the remainder of the article, we fix a query $Q=\\tuple{P,\\Pi}$, a data stream $D$ and a\ntime instant $\\tau$.\n\\begin{definition}\n \\label{defn:hypothetical}\n A \\emph{hypothetical answer} to query $Q$ over $D_\\tau$ is a pair\n $\\tuple{\\theta,H}$, where $\\theta$ is a substitution and $H$ is a finite set of ground EDB\n temporal atoms (the hypotheses) such that:\n \\begin{itemize}\n \\item $\\supp\\theta=\\var P$;\n \\item $H$ only contains atoms with timestamp $\\tau'>\\tau$;\n \\item $\\Pi\\cup D_\\tau\\cup H\\models P\\theta$;\n \\item $H$ is minimal with respect to set inclusion.\n \\end{itemize}\n $\\hans QD\\tau$ is the set of hypothetical answers to $Q$ over $D_\\tau$.\n\\end{definition}\n\nIntuitively, a hypothetical answer \\tuple{\\theta,H} states that $P\\theta$ holds if all facts\nin $H$ are eventually produced by the data stream.\nThus, $P\\theta$ is currently backed up by the information available.\nIn particular, if $H=\\emptyset$ then $P\\theta$ is an answer in the standard sense (it is a known\nfact).\n\n\\begin{proposition}\n \\label{prop:HA-answer}\n Let $Q=\\tuple{P,\\Pi}$ be a query, $D$ be a data stream and $\\tau$ be a time instant.\n If 
$\\tuple{\\theta,\\emptyset}\\in\\hans QD\\tau$, then $\\theta\\in\\ans QD\\tau$.\n\\end{proposition}\n\\begin{proof}\n If $\\tuple{\\theta,H}\\in\\hans QD\\tau$, then $\\Pi\\cup D_\\tau\\cup H\\models P\\theta$.\n When $H=\\emptyset$, this reduces to $\\Pi\\cup D_\\tau\\models P\\theta$, which coincides with the\n definition of answer.\n\\end{proof}\n\nWe can generalize this proposition, formalizing the intuition we gave for the definition of\nhypothetical answer.\n\n\\begin{proposition}\n \\label{prop:HA-char}\n Let $Q=\\tuple{P,\\Pi}$ be a query, $D$ be a data stream and $\\tau$ be a time instant.\n If $\\tuple{\\theta,H}\\in\\hans QD\\tau$, then there exist a time point $\\tau'\\geq\\tau$ and\n a data stream $D'$ such that $D_\\tau=D'_\\tau$ and $\\theta\\in\\ans Q{D'}{\\tau'}$.\n\\end{proposition}\n\\begin{proof}\n Let $D'$ be the data stream $D\\cup H$ and $\\tau'$ be the highest timestamp occurring in $H$\n (taking $\\tau'=\\tau$ if $H=\\emptyset$).\n It is straightforward to verify that $D'$ satisfies the thesis.\n\\end{proof}\n\n\\begin{example}\n \\label{ex:HA}\n We illustrate these concepts in the context of Example~\\ref{ex:toy}.\n Consider $\\theta=\\subst{X:=\\m{wt25},T:=0}$.\n Then\n $\\tuple{\\theta,\\{\\m{Temp}(\\m{wt25},\\m{high},1),\\m{Temp}(\\m{wt25},\\m{high},2)\\}}\\in\\hans\\ensuremath{Q_E} D0$.\n Since $\\m{Temp}(\\m{wt25},\\m{high},1)\\in D_1$,\n we get $\\tuple{\\theta,\\{\\m{Temp}(\\m{wt25},\\m{high},2)\\}}\\in\\hans\\ensuremath{Q_E} D1$.\n Finally, $\\m{Temp}(\\m{wt25},\\m{high},2)\\in D_2$, so $\\tuple{\\theta,\\emptyset}\\in\\hans\\ensuremath{Q_E} D2$.\n This answer has no hypotheses, and indeed $\\theta\\in\\ans\\ensuremath{Q_E} D2$.\n\n Take $\\theta'=\\subst{X:=\\m{wt42},T:=1}$ for another constant $\\m{wt42}$.\n Then also e.g. 
$\\tuple{\\theta',H'_0}\\in\\hans\\ensuremath{Q_E} D0$ with\n $H'_0=\\{\\m{Temp}(\\m{wt42},\\m{high},1),\\m{Temp}(\\m{wt42},\\m{high},2),\\m{Temp}(\\m{wt42},\\m{high},3)\\}$,\n but since $\\m{Temp}(\\m{wt42},\\m{high},1)\\notin D_1$ there is no element\n $\\tuple{\\theta',H'}\\in\\hans\\ensuremath{Q_E} D\\tau$ for $\\tau\\geq1$.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\nHypothetical answers $\\tuple{\\theta,H}\\in\\hans QD\\tau$ where $H\\neq\\emptyset$ can be further split into\ntwo kinds: those that are supported by some present or past true fact(s), and those for which there\nis no evidence whatsoever -- they only depend on future, unknown facts.\nFor the former, $\\Pi\\cup H\\not\\models P\\theta$: they rely on some fact from $D_\\tau$.\nThis is the class of answers that interests us, as there is non-trivial information in\nsaying that they may become true.\n\n\\begin{definition}\n \\label{def:supported}\n Let $Q=\\tuple{P,\\Pi}$ be a query, $D$ be a data stream and $\\tau$ be a time instant.\n A non-empty set of facts $E\\subseteq D_\\tau$ is \\emph{evidence} supporting\n $\\tuple{\\theta,H}\\in\\hans QD\\tau$ if $E$ is a minimal set satisfying $\\Pi\\cup E\\cup H\\models P\\theta$.\n A \\emph{supported answer} to $Q$ over $D_\\tau$ is a triple \\tuple{\\theta,H,E} where\n $E$ is evidence supporting \\tuple{\\theta,H}.\n\n $\\sans QD\\tau$ is the set of supported answers to $Q$ over $D_\\tau$.\n\\end{definition}\n\nSince set inclusion is well-founded, if $\\tuple{\\theta,H}\\in\\hans QD\\tau$ and\n$\\Pi\\cup E\\cup H\\models P\\theta$, then there exists a set $E'$ such that \\tuple{\\theta,H,E'} is a\nsupported answer to $Q$ over $D_\\tau$.\nHowever, in general, several such sets $E'$ may exist.\nPropositions~\\ref{prop:HA-answer} and~\\ref{prop:HA-char} generalize to supported\nanswers in the obvious way.\n\n\\begin{example}\n \\label{ex:SA}\n 
Consider the hypothetical answers from Example~\\ref{ex:HA}, and write $H_0$ and $H_1$ for the\n sets of hypotheses computed there, i.e.\n $H_0=\\{\\m{Temp}(\\m{wt25},\\m{high},1),\\m{Temp}(\\m{wt25},\\m{high},2)\\}$ and\n $H_1=\\{\\m{Temp}(\\m{wt25},\\m{high},2)\\}$.\n The hypothetical answer \\tuple{\\theta,H_0} is supported by the evidence\n $$E_0=\\{\\m{Temp}(\\m{wt25},\\m{high},0)\\}\\,,$$\n while \\tuple{\\theta,H_1} is supported by\n $$E_1=\\{\\m{Temp}(\\m{wt25},\\m{high},0),\\m{Temp}(\\m{wt25},\\m{high},1)\\}\\,.$$\n Since there is no evidence for \\tuple{\\theta',H'_0}, this answer is not supported.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\nThis example illustrates that unsupported hypothetical answers are not very informative: it is the\nexistence of supporting evidence that distinguishes interesting hypothetical answers from\narbitrary future facts.\n\nHowever, it is useful to consider even unsupported hypothetical answers in order to develop\nincremental algorithms to compute supported answers: the sequence of sets\n$\\Theta^E_\\tau=\\{\\theta\\mid\\tuple{\\theta,H,E}\\in\\sans QD\\tau\\mbox{ for some $H,E$}\\}$\nis non-monotonic, as at every time point new unsupported hypothetical answers may get evidence and\nsupported hypothetical answers may get rejected.\nThe sequence\n$\\Theta^H_\\tau=\\{\\theta\\mid\\tuple{\\theta,H}\\in\\hans QD\\tau\\mbox{ for some $H$}\\}$, on\nthe other hand, is anti-monotonic, as the following results state.\n\n\\begin{proposition}\n \\label{prop:HA-split}\n If $\\tuple{\\theta,H}\\in\\hans QD\\tau$, then there exists $H^0$ such that\n $\\tuple{\\theta,H^0}\\in\\hans QD{-1}$ and $H=H^0\\setminus D_\\tau$.\n Furthermore, if $H\\neq H^0$, then $\\tuple{\\theta,H,H^0\\setminus H}\\in\\sans QD\\tau$.\n\\end{proposition}\n\\begin{proof}\n Recall that $D_{-1}=\\emptyset$ by convention.\n If $\\tuple{\\theta,H}\\in\\hans QD\\tau$, then $\\Pi\\cup D_\\tau\\cup H\\models P\\theta$.\n Since $D_\\tau$ is finite and set inclusion is well-founded, there is a minimal subset $H^0$ of\n $D_\\tau\\cup H$ with the property that $H\\subseteq H^0$ and $\\Pi\\cup H^0\\models P\\theta$.\n Clearly $H=H^0\\setminus 
D_\\tau$.\n\n Assume that $H^-\\subseteq H^0$ is also such that $\\Pi\\cup H^-\\models P\\theta$.\n Then $\\Pi\\cup D_\\tau\\cup(H^-\\setminus D_\\tau)\\models P\\theta$; but\n $\\tuple{\\theta,H}\\in\\hans QD\\tau$, so $H\\subseteq H^-\\setminus D_\\tau$ and therefore also\n $H\\subseteq H^-$.\n By definition of $H^0$, this implies that $H^0\\subseteq H^-$, hence\n $\\tuple{\\theta,H^0}\\in\\hans QD{-1}$.\n\n Finally, if $E\\subseteq H^0\\setminus H$ is evidence supporting \\tuple{\\theta,H}, then\n $\\Pi\\cup E\\cup H\\models P\\theta$, hence $H^0\\subseteq E\\cup H$, since\n $\\tuple{\\theta,E\\cup H}\\in\\hans QD{-1}$.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{proposition}\n \\label{prop:HA-split-general}\n If $\\tuple{\\theta,H}\\in\\hans QD\\tau$ and $\\tau'<\\tau$, then there exists\n $\\tuple{\\theta,H'}\\in\\hans QD{\\tau'}$ such that \n $H=H'\\setminus\\left(D_\\tau\\setminus D_{\\tau'}\\right)$.\n\\end{proposition}\n\\begin{proof}\n As in the proof of the previous proposition, but dividing $\\Pi\\cup D_\\tau$ into\n $\\Pi\\cup D_{\\tau'}$ and $D_\\tau\\setminus D_{\\tau'}$ instead of into $\\Pi$ and $D_\\tau$.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nExamples~\\ref{ex:HA} and~\\ref{ex:SA} also illustrate this property, with hypotheses turning into\nevidence as time progresses.\nSince $D_{-1}=\\emptyset$, the first claim of Proposition~\\ref{prop:HA-split} is the particular case\n$\\tau'=-1$ of Proposition~\\ref{prop:HA-split-general}.\n\nIn the next sections we show how to compute hypothetical answers and the corresponding sets of\nevidence for a given continuous query.\n\n\\section{Operational semantics via SLD-resolution}\n\\label{sec:sld}\n\nThe definitions of hypothetical and supported answers are declarative.\nWe now show how SLD-resolution can be adapted to algorithms that compute these answers.\nWe use standard results about SLD-resolution, see for example~\\cite{Lloyd1984}.\n\nWe begin with a simple observation: since the only function symbol in our language is 
addition of\ntemporal parameters (which is invertible), we can always choose mgus that do not replace variables\nin the goal with new ones.\n\n\\begin{lemma}\n \\label{lem:mgu-vars}\n Let $\\neg\\wedge_i\\alpha_i$ be a goal and $\\wedge_j\\beta_j\\to\\beta$ be a rule such that\n $\\beta$ is unifiable with $\\alpha_k$ for some $k$.\n Then there is an mgu $\\theta=\\subst{X_1:=t_1,\\ldots,X_n:=t_n}$ of $\\alpha_k$ and $\\beta$ such that\n all variables occurring in $t_1,\\ldots,t_n$ also occur in $\\alpha_k$.\n\\end{lemma}\n\\begin{proof}\n Let $\\rho=\\subst{X_1:=t_1,\\ldots,X_n:=t_n}$ be an mgu of $\\alpha_k$ and $\\beta$.\n For each $i$, $t_i$ can either be a variable $Y_i$ or a time expression $T_i+k_i$.\n First, iteratively build a substitution $\\sigma$ as follows: for $i\\in\\{1,\\ldots,n\\}$, if $X_i$\n occurs in $\\alpha_k$ but the variable in $t_i$ does not, and $\\sigma$ does not yet include a\n replacement for the variable in $t_i$, extend $\\sigma$ with $Y_i:=X_i$, if $t_i$ is $Y_i$, or\n $T_i:=X_i-k_i$, if $t_i$ is $T_i+k_i$.\n\n We now show that $\\theta=\\rho\\sigma$ is an mgu of $\\alpha_k$ and $\\beta$ with the desired\n property.\n If $X:=t\\in\\theta$, then either (i)~$X$ is $X_i$ and $t$ is $t_i\\sigma$ for some $i$ or\n (ii)~$X:=t\\in\\sigma$.\n In case~(i), by construction of $\\sigma$, if $X_i$ occurs in $\\alpha_k$ but $t_i$ includes a\n variable not in $\\alpha_k$, then $\\sigma$ replaces that variable with a term using only variables\n in $\\alpha_k$. 
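To make this construction concrete, the following sketch replays it in Python; the encoding of terms as (variable, offset) pairs and all names are our own illustration, not part of the formalism.

```python
# Sketch of the sigma-construction in case (i), under simplifying assumptions:
# every term is a pair (variable, offset), so Y stands for ("Y", 0) and
# S - 1 for ("S", -1).  Substitutions are dicts from variables to such pairs.

def adjust(rho, goal_vars):
    """Compose the mgu rho with the renaming sigma from the proof, so that
    all right-hand sides use only variables occurring in the goal atom."""
    sigma = {}
    for x, (y, k) in rho.items():
        # X_i occurs in the goal atom but the variable in t_i does not:
        if x in goal_vars and y not in goal_vars and y not in sigma:
            sigma[y] = (x, -k)                  # Y_i := X_i  or  T_i := X_i - k_i
    theta = {}
    for x, (y, k) in rho.items():               # bindings of rho, with sigma applied
        z, m = sigma.get(y, (y, 0))
        if (z, m + k) != (x, 0):                # drop identity bindings
            theta[x] = (z, m + k)
    for y, t in sigma.items():                  # bindings of sigma itself
        if y not in rho:
            theta[y] = t
    return theta

# Naive mgu rho of the goal atom Flag(X, T+1) with a renamed head Flag(Y, S):
rho = {"X": ("Y", 0), "T": ("S", -1)}           # X := Y,  T := S - 1
theta = adjust(rho, goal_vars={"X", "T"})
assert theta == {"Y": ("X", 0), "S": ("T", 1)}  # Y := X,  S := T + 1
```

Here the naive mgu binds the goal variables $X$ and $T$ to fresh variables; after composing with $\sigma$, every right-hand side mentions only variables of the goal atom, as the lemma requires.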
In case~(ii), by construction $X$ does not occur in $\\alpha_k$.\n\n To show that $\\theta$ is an mgu of $\\alpha_k$ and $\\beta$ it suffices to observe that $\\sigma$ is\n invertible, with\n $$\\sigma^{-1}=\\subst{X:=Y \\mid Y:=X\\in\\sigma}\\subst{X:=T-k\\mid T:=X+k\\in\\sigma}\\,.$$\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nWithout loss of generality, we assume all mgus in SLD-derivations to have the property in\nLemma~\\ref{lem:mgu-vars}.\n\nOur intuition is as follows: classical SLD-resolution computes substitutions that make a conjunction\nof atoms a logical consequence of a program by constructing SLD-derivations that end in the empty\nclause.\nWe relax this by allowing a derivation to end with a non-empty goal, provided that this goal only\nrefers to EDB predicates and all the temporal terms in it refer to future instants (possibly after\nfurther instantiation).\nThis makes the notion of derivation depend also on a time parameter.\n\n\\begin{definition}\n An atom $P(t_1,\\ldots,t_n)$ is a \\emph{future atom wrt $\\tau$} if $P$ is a temporal predicate and\n the time term $t_n$ either contains a temporal variable or is a time instant $t_n>\\tau$.\n\\end{definition}\n\n\\begin{definition}\n An \\emph{SLD-refutation with future premises} of $Q$ over $D_\\tau$ is a finite SLD-derivation of\n $\\Pi\\cup D_\\tau\\cup\\{\\neg P\\}$ whose last goal only contains future EDB atoms wrt $\\tau$.\n\n If $\\mathcal D$ is an SLD-refutation with future premises of $Q$ over $D_\\tau$ with last\n goal $G=\\neg\\wedge_i\\alpha_i$ and $\\theta$ is the substitution obtained by restricting the\n composition of the mgus in $\\mathcal D$ to $\\var P$, then \\tuple{\\theta,\\wedge_i\\alpha_i}\n is a \\emph{computed answer with premises} to $Q$ over $D_\\tau$, denoted \\cans QD\\tau\\theta{\\wedge_i\\alpha_i}.\n\\end{definition}\n\n\\begin{example}\n Consider the query $\\ensuremath{Q_E}$ from Example~\\ref{ex:toy} and let $\\tau=1$.\n There is an SLD-derivation of $\\ensuremath{\\Pi_E}\\cup 
D_1\\cup\\{\\neg\\m{Malf}(X,T)\\}$ ending with the goal\n $\\neg\\m{Temp}(\\m{wt25},\\m{high},2)$, whose only atom is a future EDB atom with respect to $1$.\n Thus, $\\cans\\ensuremath{Q_E} D1\\theta{\\m{Temp}(\\m{wt25},\\m{high},2)}$ with\n $\\theta={\\subst{X:=\\m{wt25},T:=0}}$.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\nAs this example illustrates,\ncomputed answers with premises are the operational counterpart to hypothetical answers, with two\ncaveats.\nFirst, a computed answer with premises need not be ground: there may be\nsome universally quantified variables in the last goal.\nSecond, $\\wedge_i\\alpha_i$ may contain redundant conjuncts, in the sense\nthat they might not be needed to establish the goal.\nWe briefly illustrate these two features.\n\n\\begin{example}\n \\label{ex:SLD-problems}\n In our running example, there is also an SLD-derivation of $\\ensuremath{\\Pi_E}\\cup D_1\\cup\\{\\neg\\m{Malf}(X,T)\\}$\n ending with the goal $\\neg\\bigwedge_{i=0}^2\\m{Temp}(X,\\m{high},T+i)$, which only contains future\n EDB atoms wrt $1$.\n Thus also\n $\\cans\\ensuremath{Q_E} D1\\emptyset{\\bigwedge_{i=0}^2\\m{Temp}(X,\\m{high},T+i)}$.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\n\\begin{example}\n Consider the following program $\\Pi'$:\n \\begin{align*}\n \\m{P}(\\m{a},T) &\\rightarrow \\m{R}(\\m{a},T) \\\\\n \\m{P}(\\m{a},T)\\wedge\\m{Q}(\\m{a},T) &\\rightarrow \\m{R}(\\m{a},T)\n \\end{align*}\n and the query $Q'=\\tuple{\\m{R}(X,T),\\Pi'}$.\n Let $D'_0=\\emptyset$.\n There is an SLD-derivation of $\\Pi'\\cup D'_0\\cup\\{\\neg\\m{R}(X,T)\\}$ ending with goal\n $\\neg\\left(\\m{P}(\\m{a},T)\\wedge\\m{Q}(\\m{a},T)\\right)$, which only contains future EDB atoms wrt\n $0$.\n Thus $\\cans{Q'}{D'}0{\\subst{X:=\\m{a}}}{\\m{P}(\\m{a},T)\\wedge\\m{Q}(\\m{a},T)}$.\n However, atom $\\m{Q}(\\m{a},T)$ is redundant, since $\\m{P}(\\m{a},T)$ alone suffices to make\n $\\subst{X:=\\m{a}}$ an answer to $Q'$ for any $T$.\n (Observe that also 
$\\cans{Q'}{D'}0{\\subst{X:=\\m{a}}}{\\m{P}(\\m{a},T)}$, but produced by a\n different SLD-derivation.)\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\nWe now look at the relationship between the operational definition of computed answer with premises\nand the notion of hypothetical answer.\nThe examples above show that these notions do not precisely correspond.\nHowever, we can show that computed answers with premises approximate hypothetical answers and,\nconversely, every hypothetical answer is a ground instance of a computed answer with\npremises.\n\n\\begin{proposition}[Soundness]\n \\label{prop:SLD-sound}\n Let $Q=\\tuple{P,\\Pi}$ be a query, $D$ be a data stream and $\\tau$ be a time instant.\n If $\\cans QD\\tau\\theta{\\wedge_i\\alpha_i}$ and $\\sigma$ is a ground substitution such that\n $\\supp\\sigma=\\var{\\wedge_i\\alpha_i}\\cup(\\var P\\setminus\\supp\\theta)$ and\n $t\\sigma>\\tau$ for every temporal term $t$ occurring in $\\wedge_i\\alpha_i$, then there is a set\n $H\\subseteq\\{\\alpha_i\\sigma\\}_i$ such that $\\tuple{(\\theta\\sigma)|_{\\var P},H}\\in\\hans QD\\tau$.\n\\end{proposition}\n\\begin{proof}\n By hypothesis, there is some SLD-refutation with future premises of $Q$ over $D_\\tau$, i.e.~an\n SLD-derivation whose last goal $G=\\vee_i\\neg\\alpha_i$ only contains future EDB\n atoms with respect to $\\tau$.\n Let $\\sigma$ be any substitution in the conditions of the hypothesis.\n Taking $H'=\\{\\alpha_i\\sigma\\}_i$, we can extend this SLD-derivation to a (standard) SLD-refutation\n for $\\Pi\\cup D_\\tau\\cup H'\\cup\\{\\neg P\\}$, by resolving $G$ with each of the facts $\\alpha_i\\sigma$ in\n turn.\n The computed answer is then the restriction of $\\theta\\sigma$ to $\\var P$.\n By soundness of SLD-resolution, $\\Pi\\cup D_\\tau\\cup H'\\models P(\\theta\\sigma)|_{\\var P}$.\n Since set inclusion is well-founded, we can find a minimal set $H\\subseteq H'$ with the latter\n 
property.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{proposition}[Completeness]\n \\label{prop:SLD-compl}\n Let $Q=\\tuple{P,\\Pi}$ be a query, $D$ be a data stream and $\\tau$ be a time instant.\n If $\\tuple{\\theta,H}\\in\\hans QD\\tau$, then there exist substitutions $\\rho$ and $\\sigma$ and a finite\n set of atoms $\\{\\alpha_i\\}_i$ such that $\\theta=\\rho\\sigma$, $H=\\{\\alpha_i\\sigma\\}_i$ and\n $\\cans QD\\tau\\rho{\\wedge_i\\alpha_i}$.\n\\end{proposition}\n\\begin{proof}\n Suppose $\\tuple{\\theta,H}\\in\\hans QD\\tau$.\n Then $\\Pi\\cup D_\\tau\\cup H\\models P\\theta$.\n By completeness of SLD-resolution, there exist substitutions $\\gamma$ and $\\delta$ and an\n SLD-refutation for $\\Pi\\cup D_\\tau\\cup H\\cup\\{\\neg P\\}$ with computed answer $\\gamma$ such that\n $\\theta=\\gamma\\delta$.\n\n By minimality of $H$, for each $\\alpha\\in H$ there must exist a step in this SLD-refutation where\n the current goal is resolved with $\\alpha$.\n Without loss of generality, we can assume that these are the last steps in the derivation (by\n independence of the computation rule).\n Let $\\mathcal D'$ be the derivation consisting only of these steps, and $\\mathcal D$ be the\n original derivation without $\\mathcal D'$.\n Let $\\rho$ be the answer computed by $\\mathcal D$ and $G=\\vee_i\\neg\\alpha_i$ be its last goal;\n let $\\rho'$ be the answer computed by $\\mathcal D'$, and define $\\sigma=\\rho'\\delta$.\n Then:\n \\begin{itemize}\n \\item Let $X\\in\\supp\\theta$; then $X$ occurs in $P$.\n If $\\rho(X)$ occurs in $G$, then by construction $\\gamma(X)=(\\rho\\rho')(X)$.\n If $\\rho(X)$ is a ground term or $\\rho(X)$ does not occur in $G$, then trivially\n $\\gamma(X)=\\rho(X)=(\\rho\\rho')(X)$ since $\\rho'$ does not change $\\rho(X)$.\n In either case, $\\theta=\\gamma\\delta=\\rho\\rho'\\delta=\\rho\\sigma$.\n \\item $H=\\{\\alpha_i\\sigma\\}_i$: by construction of $\\mathcal D'$, $H=\\{\\alpha_i\\rho'\\}_i$, and\n since 
$\\alpha_i\\rho'$ is ground for each $i$, it is also equal to\n $\\alpha_i\\rho'\\delta=\\alpha_i\\sigma$.\n \\end{itemize}\n The derivation $\\mathcal D$ shows that $\\cans QD\\tau\\rho{\\wedge_i\\alpha_i}$.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nAll notions introduced in this section depend on the time parameter $\\tau$, and in particular on the\nhistory dataset $D_\\tau$.\nIn the next section, we explore the idea of ``organizing'' the SLD-derivation adequately to\npre-process $\\Pi$ independently of $D_\\tau$, so that the computation of (hypothetical) answers can\nbe split into an offline part and a less expensive online part.\n\n\\section{Incremental computation of hypothetical answers}\n\\label{sec:algorithm}\n\nProposition~\\ref{prop:HA-split-general} states that the set of hypothetical answers evolves as time\npasses, with hypothetical answers either gaining evidence and becoming query answers or being put\naside due to their dependence on facts that turn out not to be true.\n\nIn this section, we show how we can use this temporal evolution to compute supported answers\nincrementally.\nWe start by revisiting SLD-derivations and showing how they can reflect this temporal structure.\n\n\\begin{proposition}\n \\label{prop:SLD-strat}\n Let $Q=\\tuple{P,\\Pi}$ be a query and $D$ be a data stream.\n For any time constant $\\tau$, if $\\cans QD\\tau\\theta{\\wedge_i\\alpha_i}$, then there exist an SLD-refutation with future premises\n of $Q$ over $D_\\tau$ computing \\tuple{\\theta,\\wedge_i\\alpha_i} and a sequence\n $k_{-1}\\leq k_0\\leq\\ldots\\leq k_\\tau$ such that:\n \\begin{itemize}\n \\item goals $G_1,\\ldots,G_{k_{-1}}$ are obtained by resolving with clauses from $\\Pi$;\n \\item for $0\\leq i\\leq\\tau$, goals $G_{k_{i-1}+1},\\ldots,G_{k_i}$ are obtained by resolving with\n clauses from $D_i\\setminus D_{i-1}$.\n \\end{itemize}\n\\end{proposition}\n\\begin{proof}\n Straightforward corollary of the independence of the computation 
rule.\n\\end{proof}\n\nAn SLD-refutation with future premises with the property guaranteed by\nProposition~\\ref{prop:SLD-strat} is called a \\emph{stratified} SLD-refutation with future\npremises.\nSince the data stream $D$ only contains EDB atoms, it also follows that in a stratified SLD-refutation\nall goals after $G_{k_{-1}}$ are always resolved with EDB atoms.\nFurthermore, each \\longpaper{goal }$G_{k_i}$ contains only future EDB atoms with respect to $i$.\nLet $\\theta_i$ be the restriction of the composition of all substitutions in the SLD-derivation up\nto step $k_i$ to $\\var P$.\nThen $G_{k_i}=\\neg\\wedge_j\\alpha_j$ represents all hypothetical answers to $Q$ over $D_i$ of the\nform \\tuple{(\\theta_i\\sigma)|_{\\var P},\\wedge_j\\alpha_j} for some ground substitution $\\sigma$\n(cf.~Proposition~\\ref{prop:SLD-sound}).\n\nThis yields an online procedure to compute supported answers to continuous queries over data\nstreams.\nIn a pre-processing step, we calculate all computed answers with premises to $Q$ over $D_{-1}$, and\nkeep the ones whose set of premises is minimal.\n(Note that Proposition~\\ref{prop:SLD-compl} guarantees that all minimal sets are generated by\nthis procedure, although some non-minimal sets may also appear, as in Example~\\ref{ex:SLD-problems}.)\nThe online part of the procedure then performs SLD-resolution between each of these sets and the\nfacts produced by the data stream, adding the resulting resolvents to a set of\nschemata of supported answers (i.e.~where variables may still occur).\nBy Proposition~\\ref{prop:SLD-strat}, if there is at least one resolution step at this stage, then\nthe hypothetical answers represented by these schemata all have evidence, so they are indeed\nsupported.\n\nIn general, the pre-processing step of this procedure may not terminate, as the following example\nillustrates.\n\n\\begin{example}\n \\label{ex:infinite}\n Consider the following program $\\Pi''$, where $R$ is an extensional predicate and $S$ is an\n 
intensional predicate.\n \\begin{align*}\n S(X,T) &\\rightarrow S(X,T+1) &\n R(X,T) &\\rightarrow S(X,T)\n \\end{align*}\n If $R(a,t_0)$ is produced by the data stream, then $S(a,t)$ is true for every $t\\geq t_0$.\n \n Thus, $\\tuple{[X:=a],\\{R(a,T-k)\\}}\\in\\hans{\\tuple{S(X,T),\\Pi''}}D0$ for all $k$.\n The pre-processing step needs to output this infinite set, so it cannot terminate.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\nWe establish termination of the pre-processing step for two different classes of queries.\nA query $Q=\\tuple{P,\\Pi}$ is \\emph{connected} if each rule in $P$ contains at most one temporal\nvariable, which occurs in the head whenever it occurs in the body; and it is \\emph{nonrecursive} if\nthe directed graph induced by its dependencies is acyclic, cf.~\\cite{Motik}.\n\\begin{proposition}\n \\label{prop:termination}\n Let $Q=\\tuple{P,\\Pi}$ be a nonrecursive and connected query.\n Then the set of all computed answers with premises to $Q$ over $D_{-1}$ can be computed in finite\n time.\n\\end{proposition}\n\\begin{proof}\n Let $T$ be the (only) temporal variable in $P$.\n Then all SLD-derivations for $\\Pi\\cup\\neg{P}$ have a maximum depth: if we associate to each\n predicate the length of the maximum path in the dependency graph for $\\Pi$ starting from it and to\n each goal the sorted sequence of such values for each of its atoms, then each resolution step\n decreases this sequence with respect to the lexicographic ordering.\n Since this ordering is well-founded, every SLD-derivation must terminate.\n\n Furthermore, since $\\Pi$ is finite, there is a finite number of possible descendants for each\n node.\n Therefore, the tree containing all possible SLD-derivations for $\\Pi\\cup\\neg P$ is a finitely\n branching tree of finite height, and by König's Lemma it is finite.\n\n Since each resolution step terminates (possibly with failure) in finite time, this tree can be\n built in finite 
time.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThe algorithm implicit in the proof of Proposition~\\ref{prop:termination} can be improved by\nstandard techniques (e.g.~by keeping track of generated nodes to avoid duplicates).\nHowever, since it is a pre-processing step that is done offline and only once, we do not discuss\nsuch optimizations.\n\nThe authors of~\\cite{Motik} also formally define delay and window size as follows; let $Q$ be a query with temporal variable $T$.\n\\begin{definition}\n \\label{defn:delay}\n A \\emph{delay} for $Q$ is a natural number $d$ such that: for every substitution $\\theta$ and\n every $\\tau\\geq T\\theta+d$, $\\theta\\in\\ans QD\\tau$ iff $\\theta\\in\\ans QD{T\\theta+d}$.\n\n \\label{defn:window}\n A natural number $w$ is a \\emph{window size} for $Q$ if: each $\\theta$ is an answer to $Q$ over\n $D_\\tau$ iff $\\theta$ is an answer to $Q$ over $D_\\tau\\setminus D_{\\tau-w}$.\n\\end{definition}\n\nTo show termination of the pre-processing step assuming the existence of a delay and a window size, we\napply a slightly modified version of SLD-resolution: we do not allow fresh non-temporal variables to\nbe added.\nSo if a rule includes new variables in its body, we generate a node for each of its possible\ninstances.\nWe start by proving an auxiliary lemma.\n\n\\begin{lemma}\n \\label{lem:termination}\n If $Q=\\tuple{P,\\Pi}$ has delay $d$ and window size $w$, then there exist two natural numbers $d'$\n and $w'$ such that: if an atom in a goal in an SLD-derivation for $\\Pi\\cup\\{\\neg P\\}$ contains a\n temporal argument $T+k$ with $k>d'$ or $k<{-w'}$, then the subtree with that goal as root does not\n contain a leaf including an extensional atom with a temporal parameter dependent on $T$.\n\\end{lemma}\n\\begin{proof}\n Assume that the claim does not hold, i.e.~for any $d'$ and $w'$ there exists an atom in an\n SLD-derivation for $\\Pi\\cup\\{\\neg P\\}$ whose subtree contains a leaf including an extensional atom\n with temporal parameter dependent 
on $T$.\n By the independence of the computation rule, this property must hold for any SLD-derivation for\n $\\Pi\\cup\\{\\neg P\\}$.\n\n Then there is a predicate symbol that occurs infinitely many times in the tree for $\\Pi\\cup\\{\\neg\n P\\}$ with distinct instantiations of the temporal argument.\n Since the number of distinct instantiations for the non-temporal arguments is finite (up to\n $\\alpha$-equivalence), there must be at least one instantiation that occurs infinitely often.\n If applying SLD-resolution to it generates an extensional literal whose temporal argument depends\n on $T$, then the tree would have infinitely many leaves containing distinct occurrences of its\n underlying predicate symbol.\n\n For any $t$, it is then straightforward to construct an instance of the data stream and a\n substitution $\\theta$ such that $\\theta\\not\\in\\ans QD{T\\theta}$ and $\\theta\\in\\ans QD{T\\theta+t}$,\n so no number smaller than $t$ can be a delay for $Q$.\n Similarly, we can construct an instance of the data stream and a substitution $\\theta$ such that $\\theta$\n is an answer to $Q$ over $D_{T\\theta+t}$ but not over $D_{T\\theta+t}\\setminus D_{T\\theta}$, so $t$\n cannot be a window size for $Q$.\n \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{proposition}\n \\label{thm:termination-char}\n If $Q$ has a delay $d$ and a window size $w$, then the set of all computed answers with premises to $Q$\n over $D_{-1}$ can be computed in finite time.\n\\end{proposition}\n\\begin{proof}\n Let $w'$ and $d'$ be the two natural numbers guaranteed to exist by the previous lemma.\n Then, if a goal contains a positive literal with temporal parameter $T+k$ with $k>d'$ or\n $k<{-w'}$, we know that the subtree generated from that literal cannot yield leaves with temporal\n parameter depending on $T$.\n Hence, either this tree contains only failed branches, or it contains leaves that do not depend on\n $T$, or it is infinite.\n\n Although we do not know the values of $d'$ and $w'$, we 
can still use this knowledge by performing\n the construction of the tree in a depth-first fashion.\n Furthermore, we choose a computation rule that delays atoms with temporal variable other than $T$.\n For each node, before expanding it we first check whether the atom we are processing has already\n appeared before (modulo temporal variables), and if so we skip the construction of the\n corresponding subtree by directly inserting the corresponding leaves.\n\n If an atom contains a temporal variable other than $T$, all leaves in its subtree contain no\n extensional predicates with non-constant temporal argument (otherwise there would not be a delay\n for $T$).\n Therefore these sub-goals can also be treated uniformly.\n\n Finally, suppose that a branch is infinite.\n Again by a finiteness argument, it must necessarily at some point include\n repeated literals.\n In this case we can immediately mark it as failed.\n \\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nThe (offline) pre-processing step, consisting of running the algorithm implicit in the proof of this\nproposition, allows us to compute a finite set $\\mathcal P_Q$ of \\emph{preconditions} for $Q$ that\nrepresents $\\hans QD{-1}$: for each computed answer \\tuple{\\theta,\\wedge_i\\alpha_i} with premises to\n$Q$ over $D_{-1}$ where $\\{\\alpha_i\\}_i$ is minimal, $\\mathcal P_Q$ contains an entry\n\\tuple{\\theta,M,\\{\\alpha_i\\}_i\\setminus M} where $M$ is the subset of the $\\alpha_i$ with minimal\ntimestamp\n(i.e.~those $\\alpha_i$ whose temporal argument is $T+k$ with minimal $k$).\n\nEach tuple $\\tuple{\\theta,M,F}\\in\\mathcal P_Q$ represents the set of all hypothetical answers\n\\tuple{\\theta\\sigma,(M\\cup F)\\sigma} as in Proposition~\\ref{prop:SLD-sound}.\n\nWe now show that computing and updating the set $\\sans QD\\tau$ can be done efficiently.\nThis set is again maintained as a set $\\mathcal S_\\tau$ of schematic supported answers.\nWe assume that $Q$ is a 
connected query; we discuss how to remove this restriction later.\n\n\\begin{proposition}\n \\label{prop:algorithm}\n The following algorithm computes $\\mathcal S_{\\tau+1}$ from $\\mathcal P_Q$ and $\\mathcal S_\\tau$\n in time polynomial in the size of $\\mathcal P_Q$, $\\mathcal S_\\tau$ and $D_{\\tau+1}\\setminus D_\\tau$.\n \\begin{enumerate}\n \\item For each $\\tuple{\\theta,M,F}\\in\\mathcal P_Q$ and each computed answer $\\sigma$ to\n $\\left(D_{\\tau+1}\\setminus D_\\tau\\right)\\cup\\{\\neg\\bigwedge M\\}$, add\n \\tuple{\\theta\\sigma,M\\sigma,F\\sigma} to $\\mathcal S_{\\tau+1}$.\n\n \\item For each $\\tuple{\\theta,E,H}\\in\\mathcal S_\\tau$, compute the set $M\\subseteq H$ of\n atoms with timestamp $\\tau+1$.\n For each computed answer $\\sigma$ to\n $\\left(D_{\\tau+1}\\setminus D_\\tau\\right)\\cup\\{\\neg\\bigwedge M\\}$, add\n \\tuple{\\theta\\sigma,(E\\cup M)\\sigma,(H\\setminus M)\\sigma} to $\\mathcal S_{\\tau+1}$.\n \\end{enumerate}\n\\end{proposition}\n\\begin{proof}\n By connectedness, all time variables in $M\\sigma\\cup F\\sigma$ are instantiated in $\\theta\\sigma$.\n\n To show that this algorithm runs in polynomial time in the size of $\\mathcal P_Q$,\n $\\mathcal S_\\tau$ and $D_{\\tau+1}\\setminus D_\\tau$, note that the size of every SLD-derivation\n that needs to be constructed is bounded by the number of atoms in the initial goal, since\n $D_{\\tau+1}\\setminus D_\\tau$ only contains facts.\n Furthermore, all unifiers can be constructed in time linear in the size of the formulas involved,\n since the only function symbol available is addition of temporal terms.\n Finally, the total number of SLD-derivations that need to be considered is bounded by the number of\n elements of $\\mathcal P_Q\\times\\left(D_{\\tau+1}\\setminus D_\\tau\\right)$.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{example}\n We illustrate this mechanism with our running example.\n The set $\\mathcal P_Q$ contains\n 
$$\\tuple{\\emptyset,\\{\\underbrace{\\m{Temp}(X,\\m{high},T)}_M\\},\n \\{\\underbrace{\\m{Temp}(X,\\m{high},T+i)\\mid i=1,2}_F\\}}\\,.$$\n\n From $\\m{Temp}(\\m{wt25},\\m{high},0)\\in D_0$, we obtain the substitution\n $\\theta_0=\\subst{X:=\\m{wt25},T:=0}$ from SLD-resolution between $M$ and $D_0$ (step 1).\n Therefore, $\\mathcal S_0$ contains\n $$\\tuple{\\theta_0,\\{\\underbrace{\\m{Temp}(\\m{wt25},\\m{high},0)}_{E_0}\\},\n \\{\\underbrace{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=1,2}_{H_0}\\}}\\,.$$\n\n Next, $D_1\\setminus D_0=\\{\\m{Temp}(\\m{wt25},\\m{high},1)\\}$.\n This is the only element of $H_0$ with timestamp $1$.\n By step~2, $\\mathcal S_1$ contains\n $$\\tuple{\\theta_0,\\{\\underbrace{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=0,1}_{E_1}\\},\n \\{\\underbrace{\\m{Temp}(\\m{wt25},\\m{high},2)}_{H_1}\\}}\\,.$$\n Furthermore, from $\\mathcal P_Q$ we also add (step 1)\n $$\\tuple{\\theta_1,\\{\\m{Temp}(\\m{wt25},\\m{high},1)\\},\n \\{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=2,3\\}}$$\n to $\\mathcal S_1$, with $\\theta_1=\\subst{X:=\\m{wt25},T:=1}$.\n\n Next, $D_2\\setminus D_1=\\{\\m{Temp}(\\m{wt25},\\m{high},2)\\}$.\n This is the only atom with timestamp $2$ in the premises of both elements of $\\mathcal S_1$,\n so $\\mathcal S_2$ contains (step 2)\n $$\\tuple{\\theta_0,\\{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=0,1,2\\},\\emptyset}$$\n and\n $$\\tuple{\\theta_1,\\{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=1,2\\},\\{\\m{Temp}(\\m{wt25},\\m{high},3)\\}}\\,.$$\n From $\\mathcal P_Q$ we also get (step 1)\n $$\\tuple{\\theta_2,\\{\\m{Temp}(\\m{wt25},\\m{high},2)\\},\\{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=3,4\\}}$$\n with $\\theta_2=\\subst{X:=\\m{wt25},T:=2}$.\n\n If $D_3\\setminus D_2=\\emptyset$, then the premises for $\\theta_1$ and $\\theta_2$ become\n unsatisfied, and no new supported answers are generated from $\\mathcal P_Q$.\n Thus\n $$\\mathcal S_3=\\{\\tuple{\\theta_0,\\{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid 
i=0,1,2\\},\\emptyset}\\}\\,.\\qquad\\hfill\\ensuremath{\\triangleleft}$$\n\\end{example}\n\nThe following example also illustrates that, by outputting hypothetical answers, we can answer\nqueries earlier than in other formalisms.\n\n\n\\begin{example}\n \\label{ex:good}\n Suppose that we extend the program $\\ensuremath{\\Pi_E}$ in our running example with the following rule (as in\n Example~2 from~\\cite{Motik}).\n \\[\n \\m{Temp}(X,\\m{n\/a},T) \\to \\m{Malf}(X,T)\n \\]\n If\n $D_1=\\{\\m{Temp}(\\m{wt25},\\m{high},0),\\m{Temp}(\\m{wt25},\\m{high},1),\\m{Temp}(\\m{wt42},\\m{n\/a},1)\\}$,\n \\begin{align*}\n \\mbox{then }\\mathcal S_1\n &=\\{\\langle\\subst{T:=0,X:=\\m{wt25}},\\\\\n &\\qquad\\{\\m{Temp}(\\m{wt25},\\m{high},i)\\mid i=0,1\\},\\\\\n &\\qquad\\{\\m{Temp}(\\m{wt25},\\m{high},2)\\}\\rangle,\\\\\n &\\quad\\langle\\subst{T:=1,X:=\\m{wt42}},\\{\\m{Temp}(\\m{wt42},\\m{n\/a},1)\\},\\emptyset\\rangle\\}\\,.\n \\end{align*}\n Thus, the answer \\subst{T:=1,X:=\\m{wt42}} is produced at timepoint 1, rather than being delayed\n until it is known whether \\subst{T:=0,X:=\\m{wt25}} is an answer.\\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\n\\begin{proposition}[Soundness]\n If $\\tuple{\\theta,E,H}\\in\\mathcal S_\\tau$ and $\\sigma$ instantiates all free variables in\n $E\\cup H$, then $\\tuple{\\theta\\sigma,H\\sigma,E\\sigma}\\in\\sans QD\\tau$.\n\\end{proposition}\n\\begin{proof}\n By induction on $\\tau$, we show that \\cans QD\\tau\\theta{\\wedge_i\\alpha_i}\\ with $H=\\{\\alpha_i\\}_i$.\n If \\tuple{\\theta,E,H} is obtained from an element in $\\mathcal P_Q$ and $D_{\\tau+1}\\setminus D_\\tau$,\n then this derivation is obtained by composing the derivation for generating the relevant element\n of $\\mathcal P_Q$ with the one for \\tuple{\\theta,E,H}.\n If \\tuple{\\theta,E,H} is obtained from an element of $\\mathcal S_\\tau$ and\n $D_{\\tau+1}\\setminus D_\\tau$, then this derivation is obtained by composing the derivation\n obtained by the induction hypothesis with the one used 
for deriving \\tuple{\\theta,E,H}.\n\n By applying Proposition~\\ref{prop:SLD-sound} to this SLD-derivation, we conclude that\n $\\tuple{\\theta,H}\\in\\hans QD\\tau$.\n Furthermore, $E\\neq\\emptyset$ and $E$ is evidence for this answer by construction.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\n\\begin{proposition}[Completeness]\n If $\\tuple{\\sigma,H,E}\\in\\sans QD\\tau$, then there exist a substitution $\\rho$ and a triple\n $\\tuple{\\theta,E',H'}\\in\\mathcal S_\\tau$ such that $\\sigma=\\theta\\rho$, $H=H'\\rho$ and\n $E=E'\\rho$.\n\\end{proposition}\n\\begin{proof}\n By Proposition~\\ref{prop:SLD-compl}, \\cans QD\\tau\\theta{\\wedge_i\\alpha_i}\\ for some substitution $\\theta$ and set of atoms\n $H'=\\{\\alpha_i\\}_i$ such that $H=\\{\\alpha_i\\rho\\}_i$ and $\\sigma=\\theta\\rho$ for some substitution $\\rho$.\n By Proposition~\\ref{prop:SLD-strat}, there is a stratified SLD-derivation computing this answer.\n The strata of this derivation correspond to an incremental proof that\n $\\tuple{\\theta,E',H'}\\in\\mathcal S_\\tau$, where $E'$ is the set of facts from $D_\\tau$ that were\n used in resolution steps in the derivation.\\hfill\\ensuremath{\\Box}\n\\end{proof}\n\nIt also follows from our construction that, if $\\mathcal S_\\tau$ contains a triple \\tuple{\\theta,E,H} with\n$\\theta(T)\\leq\\tau$ and $H\\neq\\emptyset$ and $d$ is a delay for $Q$, then the timestamp of each\nelement of $H$ is at most $\\tau+d$.\nLikewise, if $w$ is a window size for $Q$, then all elements in $E$ must have timestamp at least\n$\\tau-w$.\n\n\\paragraph{Generalization.}\nThe hypotheses in Propositions~\\ref{prop:termination} and~\\ref{thm:termination-char} are not\nnecessary to guarantee termination of the algorithm presented for the pre-processing step.\nIndeed, consider the following example.\n\n\\begin{example}\n \\label{ex:unbound}\n In the context of our running example, we say that a turbine has a manufacturing defect if it\n exhibits two specific failures during its lifetime: at some 
time it overheats, and at some\n (different) time it does not send a temperature reading.\n Since this is a manufacturing defect, it holds at timepoint $0$, regardless of when the failures\n actually occur.\n We can model this property by the rule\n \\[\\m{Temp}(X,\\m{high},T_1),\\m{Temp}(X,\\m{n\/a},T_2) \\to \\m{Defective}(X,0)\\,.\\]\n Let $\\Pi'_E$ be the program obtained from $\\Pi_E$ by adding this rule, and consider now the query\n $Q'=\\tuple{\\m{Defective}(X,T),\\Pi'_E}$.\n\n Performing SLD-resolution with $\\Pi'_E$ and $\\m{Defective}(X,0)$ yields (in one step) the goal\n $$\\neg\\left(\\m{Temp}(X,\\m{high},T_1)\\wedge\\m{Temp}(X,\\m{n\/a},T_2)\\right)\\,,$$\n which only contains future atoms with respect to $-1$.\n The set of computed answers with premises to $Q'$ over $D_{-1}$ is indeed\n $$\\{\\tuple{\\subst{T:=0},\\m{Temp}(X,\\m{high},T_1)\\wedge\\m{Temp}(X,\\m{n\/a},T_2)}\\}\\,.\\quad\\hfill\\ensuremath{\\triangleleft}$$\n\\end{example}\nAs this example shows, if a rule in the program includes different time variables, the query cannot\nhave a delay or window size (since no predicate can use both $T_1$ and $T_2$).\n\nWe can also adapt our algorithm to work in this situation, removing the hypothesis of connectedness\nin Proposition~\\ref{prop:algorithm} but sacrificing polynomial complexity.\nThe changes are the following.\n\\begin{itemize}\n\\item In the pre-processing step, set $\\mathcal P_Q$ now stores triples of the form\n \\tuple{\\theta,\\{M_T\\}_T,\\{\\alpha_i\\}_i\\setminus M}, where $M$ is computed as before and $M_T$ is\n the set of elements of $M$ whose temporal variable is $T$.\n Recall that each predicate can only contain one temporal variable.\n\\item Step~1 of the algorithm in Proposition~\\ref{prop:algorithm} now reads:\n for each $\\tuple{\\theta,\\{M_T\\}_T,F}\\in\\mathcal P_Q$ and each computed answer $\\sigma$ to\n $\\left(D_{\\tau+1}\\setminus D_\\tau\\right)\\cup\\{\\neg\\bigwedge M_T\\}$, add\n 
\\tuple{\\theta\\sigma,M_T\\sigma,(F\\cup(M\\setminus M_T))\\sigma} to $\\mathcal S_{\\tau+1}$.\n Note that this step is now performed as many times as there are sets $M_T$, but its running time\n is still polynomial in the size of $\\mathcal P_Q$.\n\\item After Step~2 of the algorithm in Proposition~\\ref{prop:algorithm}, we need to apply a fixpoint\n construction to $\\mathcal S_{\\tau+1}$: for each temporal variable $T$ occurring in\n $\\tuple{\\theta,E,H}\\in\\mathcal S_{\\tau+1}$, consider the set $M_T$ of the atoms in $H$ with time\n variable $T$ and minimal timestamp.\n For each computed answer $\\sigma$ to\n $\\left(D_{\\tau+1}\\setminus D_\\tau\\right)\\cup\\{\\neg\\bigwedge M_T\\}$, add\n \\tuple{\\theta\\sigma,(E\\cup M_T)\\sigma,(H\\setminus M_T)\\sigma} to $\\mathcal S_{\\tau+1}$.\n\\end{itemize}\nThe last step may add new elements to $\\mathcal S_{\\tau+1}$, but they will have fewer temporal\nvariables, so it always terminates.\nHowever, it may not run in polynomial time: each element $\\tuple{\\theta,E,H}\\in\\mathcal S_{\\tau+1}$\nbefore this construction can give rise to $2^v$ elements in the final $\\mathcal S_{\\tau+1}$, where\n$v$ is the number of temporal variables in $E$.\n\nThis modified algorithm allows us to deal with some situations of unbounded wait, as we illustrate in\nthe following example.\n\\begin{example}\n \\label{ex:unbound-cont}\n Continuing with Example~\\ref{ex:unbound}, since $D_0$ contains $\\m{Temp}(\\m{wt25},\\m{high},0)$,\n the set $\\mathcal S_0$ includes\n \\[\\langle\\theta'=\\subst{X:=\\m{wt25},T:=0},\\{\\m{Temp}(\\m{wt25},\\m{high},0)\\},\\{\\m{Temp}(\\m{wt25},\\m{n\/a},T_2)\\}\\rangle\\,.\\]\n Note that we do not know when (if ever) $\\theta'$ will become an answer to the original query, but\n relevant information is already output to the user.\n \\hfill\\ensuremath{\\triangleleft}\n\\end{example}\n\n\\section{Adding negation}\n\\label{sec:neg}\n\nWe now show how our framework extends naturally to programs including 
negated atoms in bodies of\nrules.\nWe make the usual assumption that negation is safe (each variable in a negated atom in the body of\na rule occurs non-negated elsewhere in the rule).\n\nIn the pre-processing step, we compute $\\mathcal P_Q$ as before.\nHowever, the leaves in the SLD-derivations constructed may now contain negated (intensional) atoms\nas well as extensional atoms.\nFor each such negated atom we generate a fresh query $Q'$, replacing the time parameter with a\nvariable, and repeat the pre-processing step to compute $\\mathcal P_{Q'}$.\nWe iterate this construction until no fresh queries are generated.\nSince the number of queries that can be generated is finite, Propositions~\\ref{prop:termination}\nand~\\ref{thm:termination-char} still hold.\n\nThe online step of the algorithm is now more complicated, as it needs to keep track of all answers\nto the auxiliary queries generated by negated atoms.\nThe following algorithm computes $\\mathcal S_{\\tau+1}$ from $\\mathcal P$ and $\\mathcal S_\\tau$.\n\\begin{enumerate}\n\\item For each $Q$ and each $\\tuple{\\theta,M,F}\\in\\mathcal P_Q$: if there is a negated atom in $M$\n with time parameter $t$, let $\\sigma$ be the substitution such that $t\\sigma=\\tau+1$; otherwise\n let $\\sigma=\\emptyset$.\n Let $M'$ be the set of positive literals in $M$.\n\n For each computed answer $\\sigma'$ to\n $\\left(D_{\\tau+1}\\setminus D_\\tau\\right)\\cup\\{\\neg\\bigwedge M'\\sigma\\}$,\n add \\tuple{\\theta\\sigma\\sigma',M'\\sigma\\sigma',((M\\setminus M')\\cup F)\\sigma\\sigma'} to\n $\\mathcal S_{\\tau+1}(Q)$.\n\n (Observe that all time variables in $M\\sigma\\sigma'\\cup F\\sigma\\sigma'$ are instantiated in\n $\\theta\\sigma\\sigma'$.)\n\n\\item For each query $Q$ and each $\\tuple{\\theta,E,H}\\in\\mathcal S_\\tau(Q)$, compute the set\n $M\\subseteq H$ of positive literals that have timestamp $\\tau+1$.\n For each computed answer $\\sigma$ to\n $\\left(D_{\\tau+1}\\setminus 
D_\\tau\\right)\\cup\\{\\neg\\bigwedge M\\}$, add the tuple\n \\tuple{\\theta\\sigma,(E\\cup M)\\sigma,(H\\setminus M)\\sigma} to $\\mathcal S_{\\tau+1}(Q)$.\n\n\\item For each query $Q$ and each $\\tuple{\\theta,E,H}\\in\\mathcal S_{\\tau+1}(Q)$, compute the set\n $M\\subseteq H$ of negative literals.\n For each literal $\\m{not}\\ \\ell\\in M$ with timestamp $t\\leq\\tau+1$, let $\\ell'$ be the query on the\n same predicate symbol as $\\ell$.\n \n If there is no tuple $\\tuple{\\theta',E',H'}\\in\\mathcal S_{\\tau+1}(\\ell')$ where $\\ell$\n and $\\ell'\\theta'$ are unifiable, remove $\\m{not}\\ \\ell$ from $H$ and add it to $E$.\n\n If there is a tuple $\\tuple{\\theta',E',\\emptyset}\\in\\mathcal S_{\\tau+1}(\\ell')$ such that\n $\\ell$ and $\\ell'\\theta'$ are unifiable, then: (i)~remove \\tuple{\\theta,E,H} from $S_{\\tau+1}(Q)$,\n (ii)~for each substitution $\\sigma$ ranging over the free variables in $\\ell$, if $\\ell\\sigma$\n does not unify with $\\ell'\\theta'$, then add \\tuple{\\theta\\sigma,E\\sigma,H\\sigma} to\n $S_{\\tau+1}(Q)$.\n\\end{enumerate}\n\nStep~3 makes the running time of this algorithm exponential, since it requires iterating over a set\nof substitutions.\nHowever, if negation is $T$-stratified (which we define below), we can still establish a polynomial\nbound.\n\n\\begin{definition}\n Let $\\Pi$ be a program.\n The \\emph{temporal closure} of a program $\\Pi$ is the program $\\ensuremath{\\Pi^\\downarrow}$ defined as follows.\n For each $(n+1)$-ary predicate symbol $p$ with a temporal argument in the signature underlying\n $\\Pi$, the signature for $\\Pi^\\downarrow$ contains a family of $n$-ary predicate symbols\n $\\{p_t\\}_{t\\in\\mathbb N}$.\n For each rule in $\\Pi$, $\\ensuremath{\\Pi^\\downarrow}$ contains all rules obtained by instantiating its temporal\n parameter in all possible ways and replacing $p(x_1,\\ldots,x_n,t)$ by $p_t(x_1,\\ldots,x_n)$.\n A program $\\Pi$ is $T$-stratified if $\\ensuremath{\\Pi^\\downarrow}$ is 
stratified in the usual sense.\n\\end{definition}\n\nSince the number of predicate symbols in $\\ensuremath{\\Pi^\\downarrow}$ is infinite, the usual procedure for deciding\nwhether a program is $T$-stratified does not necessarily terminate.\nHowever, this procedure can be adapted to our framework.\n\\begin{definition}\n Let $\\Pi$ be a program.\n The \\emph{temporal dependency graph} of $\\Pi$ is constructed as follows: for each predicate symbol\n $p$ with a temporal argument, we create a node with label $p(T)$ (where $T$ is a formal temporal\n variable); if $p$ does not have a temporal argument, we create a node with label $p$.\n\n For each node with label $p(T)$ and each rule $r$ in the definition of $p$, we first compute the\n substitution $\\theta$ that makes the temporal variable in the head of $r$ equal to $T$; for each\n positive literal $q(x_1,\\ldots,x_n,T+k)$ in $\\mathsf{body}(r\\theta)$, we create an edge between $p(T)$ and\n $q(T+k)$ (adding the node $q(T+k)$ if necessary) and label the edge with $+$.\n We proceed likewise for negative literals, but label the edge with $-$.\n\n We iterate this construction for any newly created nodes whose temporal variable is $T+k$ with\n $k\\geq 0$.\n\\end{definition}\n\n\\begin{proposition}\n \\label{prop:stratification-dec}\n There is an algorithm that decides whether a program $\\Pi$ is $T$-stratified, and in the\n affirmative case returns a finite representation of the strata.\n\\end{proposition}\n\\begin{proof}\n We show that: if the temporal dependency graph of $\\Pi$ does not contain a path that passes\n infinitely many times through (not necessarily distinct) edges labeled with $-$, then $\\Pi$ is\n $T$-stratified.\n\n Let $\\gr\\ensuremath{\\Pi^\\downarrow}$ be the graph of dependencies for $\\ensuremath{\\Pi^\\downarrow}$ and $\\gr\\Pi$ be the temporal dependency\n graph for $\\Pi$.\n For each predicate symbol $p_t$ in $\\ensuremath{\\Pi^\\downarrow}$, the graph $\\gr\\ensuremath{\\Pi^\\downarrow}$ 
contains a subgraph isomorphic to\n a subgraph\\footnote{It may be a proper subgraph in case $t$ is low enough that some temporal\n arguments would become negative.} of the connected component of $\\gr\\Pi$ containing $p(T)$.\n In particular, $\\gr\\ensuremath{\\Pi^\\downarrow}$ contains infinitely many copies of $\\gr\\Pi$.\n\n Suppose that $\\gr\\ensuremath{\\Pi^\\downarrow}$ contains a path that passes infinitely many times through edges labeled\n with $-$, and let $p_t$ be a node in it with minimum value of $t$ (if $t$ occurs multiple times in\n the path, choose any one of its occurrences).\n Consider the isomorphic copy of $\\gr\\Pi$ embedded in $\\gr\\ensuremath{\\Pi^\\downarrow}$ where $p(T)$ is mapped to $p_t$.\n The whole path starting from $p_t$ must be contained in this copy, since the timestamps of all\n corresponding nodes in $\\gr\\Pi$ are greater or equal to $T$.\n\n Using these two properties, we can construct a stratification of $\\ensuremath{\\Pi^\\downarrow}$ in the usual inductive\n way.\n \\begin{itemize}\n \\item $\\mathcal G_0=\\gr\\ensuremath{\\Pi^\\downarrow}$\n \\item For each $i\\geq 0$, $\\Pi_i$ contains all predicates labeling nodes $n$ in $\\mathcal G_i$\n such that: all paths from $n$ contain only edges labeled with $+$.\n \\item For each $i\\geq 0$, $\\mathcal G_{i+1}$ is obtained from $\\mathcal G_i$ by removing all nodes\n with labels in $\\Pi_i$ and all edges to those nodes.\n \\end{itemize}\n\n The only non-trivial part of showing that $\\Pi_0,\\ldots,\\Pi_n,\\ldots$ is a stratification of\n $\\ensuremath{\\Pi^\\downarrow}$ is guaranteeing that every predicate symbol $p_t$ is contained in $\\Pi_k$ for some $k$.\n This is established by induction: if any path from $p_t$ contains at most $n$ edges labeled with\n $-$, then $p_t\\in\\Pi_n$.\n We now show that, for any $p_t$, there is such a bound on the number of edges labeled with $-$.\n\n First observe that, in $\\gr\\Pi$, there is a bound $B$ on the number of edges labeled with 
$-$ in\n any path starting from $p(T)$: (i)~if there is a path from $q(T+k_1)$ to $q(T+k_2)$ for some $q$\n and $k_1\\leq k_2$ containing an edge labeled with $-$, then there would be a path with an infinite\n number of such edges, and (ii)~since there is a finite number of predicate symbols in $\\Pi$ we\n cannot build paths with an arbitrarily long number of edges labeled with $-$.\n\n Now consider a path starting at $p_t$.\n If this path has length greater than $B$, then it cannot be contained completely in the subgraph\n of $\\gr\\ensuremath{\\Pi^\\downarrow}$ isomorphic to $\\gr\\Pi$ containing $p(T)$.\n Therefore, it must at some point reach a node $q_{t'}$ with $t' m$ and $\\operatorname{girth}(L) \\geq n$, or $k \\geq m$ and $\\operatorname{girth}(L) > n$, for $(m,n) \\in \\{ (3,6), (4,4), (6,3) \\}$, then the faces of $X$ may be metrised as regular hyperbolic $k$-gons with vertex angles $2\\pi\/\\operatorname{girth}(L)$, the complex $X$ is locally $\\operatorname{CAT}(-1)$ and the universal cover of $X$ is thus $\\operatorname{CAT}(-1)$. We will henceforth be considering simply-connected $(k,L)$-complexes $X$ which satisfy the Gromov Link Condition, and so are $\\operatorname{CAT}(0)$. \n\n\n\\subsection{Relationship with incidence geometries}\\label{sec:incidence}\nA $(k,L)$-complex may be viewed combinatorially as a rank three incidence structure, namely, a geometry consisting of three types of objects, vertices, edges and faces, such that each edge is incident with exactly two vertices, each face is incident with exactly $k$ edges and $k$ vertices, and the graph with vertex set the edges incident with a given vertex and edges the faces is isomorphic to $L$. 
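As a concrete instance (an illustrative sketch of ours, not taken from the paper), the boundary complex of the $3$-cube is a $(4,L)$-complex with $L$ the $3$-cycle: every face is incident with exactly $4$ edges and $4$ vertices, and the link of every vertex is a triangle. This can be verified mechanically in a few lines of Python:

```python
from itertools import combinations

# Vertices of the 3-cube as bit-tuples; edges join vertices differing in one coordinate.
verts = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
edges = [frozenset({u, v}) for u, v in combinations(verts, 2)
         if sum(x != y for x, y in zip(u, v)) == 1]

# A face fixes one coordinate to one value; we encode it by the set of cube
# edges lying on it.
faces = []
for i in range(3):
    for val in (0, 1):
        fverts = {v for v in verts if v[i] == val}
        faces.append(frozenset(e for e in edges if e <= fverts))

# Each face is a 4-gon: it is incident with 4 edges, hence with 4 vertices.
assert all(len(f) == 4 for f in faces)
assert all(len(set().union(*f)) == 4 for f in faces)

def link(v):
    """Link of v: nodes are the edges at v, joined when they lie on a common face."""
    at_v = [e for e in edges if v in e]
    return {e: {e2 for e2 in at_v if e2 != e and any(e in f and e2 in f for f in faces)}
            for e in at_v}

# Every vertex link is the triangle C_3: three nodes, each of degree two.
for v in verts:
    adj = link(v)
    assert len(adj) == 3 and all(len(n) == 2 for n in adj.values())
```

The same encoding (faces as sets of edges, links computed from shared faces) works for any polygonal complex given by its vertex, edge and face lists.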
In the notation of Buekenhout \\cite{Bu}, the geometry has diagram\n\\[\\xymatrix@R=0mm{\n{\\bullet} \\ar@{-}[rr]^{(k)} & & {\\bullet} \\ar@{-}[rr]^{\\overline{L}} & & {\\bullet} \n\\\\\n{\\mbox{vertices}} & & {\\mbox{edges}} & & {\\mbox{faces}} \n}\\]\nwhere the label $(k)$ denotes the vertex-edge incidence graph of a $k$-cycle and $\\overline{L}$ denotes the vertex-edge incidence graph of $L$. \n\nWe call a $(k,L)$-complex \\emph{connected} if both the link graph and the vertex-edge incidence graph are connected. A \\emph{flag} in the geometry is an incident vertex-edge-face triple. All connected $(k,L)$-complexes that can be embedded in $\\mathbb{E}^3$ and have a group of automorphisms acting regularly on flags were classified by Pellicer and Schulte \\cite{PS1,PS2}. The only finite ones were seen to be the eighteen finite regular polyhedra. Polyhedra are precisely the finite $(k,L)$-complexes with $L$ a cycle.\n\n\\section{Graph- and group-theoretic definitions and notation}\n\\label{s:definitions}\n\nIn Section~\\ref{sec:graphs} we discuss the main definitions from graph theory that we will require, and then in Section~\\ref{sec:groups} we define the group-theoretic notation that we will use. For ease of comparison with the literature, we now switch to standard graph-theoretic notation. We also assume some basic definitions from algebraic graph theory \\cite{GR} and from the theory of permutation groups \\cite{DM}. All graphs considered in this paper are finite. \n\n\\subsection{Graph theory}\\label{sec:graphs}\n\nLet $\\Gamma$ be a graph with vertex set $V(\\G)$ and edge set $E(\\G)$. If $\\Gamma$ is {\\em simple}, that is, a graph without loops or multiple edges, and $e \\in E(\\G)$ connects the vertices $u$ and $v$, then we identify the (undirected) edge $e$ with the set $\\{u,v\\}$, and we denote the {\\em arc} (directed edge) from $u$ to $v$ by $uv$. 
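In machine-checkable form (a sketch of ours, not part of the paper), one convenient encoding stores each undirected edge as a $2$-element frozenset and each arc as an ordered pair, so that every edge $\{u,v\}$ carries exactly the two arcs $uv$ and $vu$:

```python
from itertools import combinations

# The 5-cycle as a simple graph: undirected edges are 2-element frozensets.
V = list(range(5))
E = {frozenset({v, (v + 1) % 5}) for v in V}

# Each undirected edge {u, v} carries the two arcs uv and vu.
arcs = {(u, v) for e in E for u in e for v in e if u != v}
assert len(arcs) == 2 * len(E)
```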
For each vertex $v$ we denote by $\\Gamma(v)$ the set of neighbours of $v$, that is, the set of vertices adjacent to $v$. The set of all vertices at distance $i$ from $v$ will be denoted by $\\Gamma_i(v)$. In particular, $\\Gamma_1(v)=\\Gamma(v)$. For $X \\subseteq V(\\G)$, the {\\em restriction of $\\Gamma$ to $X$} is the graph $\\Gamma\\mid_X$ with vertex set $X$ and edge set consisting of those edges of $\\Gamma$ that have both endpoints in $X$. The {\\em girth} $\\operatorname{girth}(\\Gamma)$ is the length of the shortest cycle in $\\Gamma$. The \\emph{valency} of a vertex is the number of neighbours that it has and the graph $\\Gamma$ is called {\\em $k$-regular} if each $v \\in V(\\G)$ has valency $k$. A $3$-regular graph is also called a \\emph{cubic graph}. If $\\Gamma$ is $k$-regular then we also say that $\\Gamma$ {\\em has valency $k$}. A graph is called \\emph{regular} if it is $k$-regular for some $k$. If $\\Gamma$ is bipartite and vertices in the two parts of the bipartition have valency $\\ell$ and $r$, respectively, then we say that $\\Gamma$ {\\em has bi-valency $\\{ \\ell,r \\}$}.\n\n\n\nIf $G$ is a group of automorphisms of $\\Gamma$ and $v \\in V(\\G)$ then $G_v$ denotes the stabiliser in $G$ of the vertex $v$. If $X \\subseteq V(\\G)$ is stabilised setwise by a subgroup $H \\le G$ then we denote by $H^X$ the permutation group induced by $H$ on $X$. In particular, $G_v^{\\Gamma(v)}$ is the group induced on $\\Gamma(v)$ by $G_v$. We say that $G$ is {\\em locally transitive}, {\\em locally primitive}, {\\em locally $2$-transitive}, or {\\em locally fully symmetric} if for each $v \\in V(\\G)$ the group $G_v^{\\Gamma(v)}$ is transitive, primitive, $2$-transitive, or the symmetric group on $\\Gamma(v)$, respectively. 
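For small graphs the invariants defined above (neighbour sets, valency, girth) are easily computed mechanically. The following Python sketch (ours, purely illustrative) computes neighbour sets and the girth by breadth-first search, using the Petersen graph, realised as the Kneser graph $K(5,2)$, which is cubic of girth $5$:

```python
from itertools import combinations

def neighbours(E, v):
    # Gamma(v): all vertices joined to v by an edge.
    return {u for e in E if v in e for u in e if u != v}

def girth(V, E):
    # BFS from every vertex; a non-tree edge seen at depths d1, d2 closes a
    # cycle of length at most d1 + d2 + 1, and the minimum of these estimates
    # over all roots is the length of a shortest cycle.
    best = float("inf")
    for s in V:
        dist, parent, frontier = {s: 0}, {s: None}, [s]
        while frontier:
            nxt = []
            for v in frontier:
                for u in neighbours(E, v):
                    if u not in dist:
                        dist[u], parent[u] = dist[v] + 1, v
                        nxt.append(u)
                    elif parent[v] != u:
                        best = min(best, dist[v] + dist[u] + 1)
            frontier = nxt
    return best

# Petersen graph as the Kneser graph K(5,2): vertices are the 2-subsets of
# {0,...,4}, adjacent when disjoint.  It is cubic with girth 5.
V = [frozenset(p) for p in combinations(range(5), 2)]
E = {frozenset({u, v}) for u, v in combinations(V, 2) if not (u & v)}
assert all(len(neighbours(E, v)) == 3 for v in V)
assert girth(V, E) == 5
```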
The graph $\\Gamma$ is locally transitive, locally primitive, locally $2$-transitive, or locally fully symmetric if there exists $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$ with the appropriate property (equivalently, as all four properties hold in overgroups, $\\ensuremath{\\operatorname{Aut}}(\\Gamma)$ has the appropriate property). \n\nFor an edge $\\{u,v\\}$ we define $G_{\\{u,v\\}}$ to be the setwise stabiliser of $\\{u,v\\}$, and for an arc $uv$, we define $G_{uv}:= G_u \\cap G_v$. Let $d$ be the usual distance function on $\\Gamma$, so that each edge has length $1$. Then for each natural number $n$ and each $v \\in V(\\G) $, we define \n\\[ G_v^{[n]} := \\{ g \\in G_v \\mid w^g = w \\, \\forall w \\in V(\\G) \\mbox{ such that } d(v,w) \\leq n \\} \\]\nas the pointwise stabiliser of the ball of radius $n$ around $v$. For $\\{ u,v \\} \\in E(\\G)$, $G_{uv}^{[1]}:= G_u^{[1]} \\cap G_v^{[1]}$. \n\nAn \\emph{$s$-arc} in a graph $\\Gamma$ is an $(s+1)$-tuple $(v_0,v_1,\\ldots,v_s)$ of vertices such that $\\{ v_i,v_{i+1}\\} \\in E(\\G)$ and $v_{i-1}\\neq v_{i+1}$, that is, it is a walk of length $s$ that does not immediately turn back on itself. Let $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$. We say that $\\Gamma$ is \\emph{locally $(G,s)$-arc transitive} if for each vertex $v$, the stabiliser $G_v$ acts transitively on the set of $s$-arcs of $\\Gamma$ starting at $v$. If $G$ is transitive on the set of all $s$-arcs in $\\Gamma$ then we say that $\\Gamma$ is \\emph{$(G,s)$-arc transitive}. If all vertices of $\\Gamma$ have valency at least two then locally $s$-arc transitive implies locally $(s-1)$-arc transitive. Moreover, $s$-arc transitive implies locally $s$-arc transitive. Conversely, if $G$ is transitive on $V(\\G)$ and $\\Gamma$ is locally $(G,s)$-arc transitive then $\\Gamma$ is $(G,s)$-arc transitive. 
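The $s$-arc condition is also easy to check computationally. As an illustration (ours, not taken from the paper), the following Python sketch enumerates all $s$-arcs by extending walks one step at a time and discarding any step that immediately turns back; in a $k$-regular graph on $n$ vertices the number of $s$-arcs is $nk(k-1)^{s-1}$.

```python
def s_arcs(adj, s):
    """All s-arcs of the graph adj: (s+1)-tuples (v_0, ..., v_s) with
    consecutive vertices adjacent and v_{i+1} != v_{i-1}."""
    arcs = [(v,) for v in adj]
    for _ in range(s):
        arcs = [walk + (w,)
                for walk in arcs
                for w in adj[walk[-1]]
                if len(walk) < 2 or w != walk[-2]]
    return arcs

# The 3-cube (8 vertices, 3-regular), stored as an adjacency dictionary.
cube = {v: [tuple(v[j] ^ (j == i) for j in range(3)) for i in range(3)]
        for v in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]}
```

Here `len(s_arcs(cube, s))` gives $24$, $48$, $96$ for $s=1,2,3$, in agreement with $nk(k-1)^{s-1}=8\\cdot 3\\cdot 2^{s-1}$.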
We observe that a graph with all vertices having valency at least 2 is locally $(G,2)$-arc transitive if and only if $G_v^{\\Gamma(v)}$ is 2-transitive for all vertices $v$ (see for example \\cite[Lemma 3.2]{GLP1}). Moreover, if $\\Gamma$ is locally $G$-transitive then $G$ acts transitively on $E(\\G)$ and either $G$ is transitive on $V(\\G)$ or $\\Gamma$ is bipartite and $G$ acts transitively on both sets of the bipartition. If $\\Gamma$ is $(G,s)$-arc-transitive but not $(G,s+1)$-arc-transitive then we say that $\\Gamma$ is {\\em $(G,s)$-transitive}. Finally, if $G=\\ensuremath{\\operatorname{Aut}}(\\Gamma)$ then we drop the name $G$ from all notation introduced in this paragraph and say that $\\Gamma$ is \\emph{locally $s$-arc-transitive}, \\emph{$s$-arc-transitive}, and \\emph{$s$-transitive}, respectively. \n\nThe study of $s$-arc transitive graphs goes back to the seminal work of Tutte \\cite{tutte47,tutte59} who showed that a cubic graph is at most 5-arc transitive. This was later extended by Weiss \\cite{W2} to show that any graph of valency at least three is at most 7-arc transitive. Weiss \\cite{Weiss78} also showed that a cubic graph is at most locally 7-arc transitive while Stellmacher \\cite{sleq9} has announced that a graph of valency at least 3 is at most locally 9-arc transitive. In each case the upper bound is met. Note that a cycle is $s$-arc transitive for all values of $s$.\n\n\n\nThe following definitions, using a slightly different language, were introduced by Lazarovich \\cite{L}.\n\\begin{definition} \n\\label{def:star and iso}\n{\\em Let $\\Gamma$ be a simple graph, let $v \\in V(\\G)$, and let $e=\\{ u,v\\} \\in E(\\G)$. \nThe \\emph{open star of $v$}, denoted $\\operatorname{st}(v)$, is the union of $\\{v \\}$ and the set $\\{ f \\in E(\\G) \\mid f \\mbox{ is incident to } v \\}$. 
Similarly, the \\emph{open edge-star of $e$}, denoted $\\operatorname{st}(e)$ or $\\operatorname{st}(\\{u,v\\})$, is the union of the sets $\\{ u \\}$, $\\{ v \\}$, and $\\{ f \\in E(\\G) \\mid f \\mbox{ is incident to at least one of } u,v \\}$.\n\nGiven two open stars $\\operatorname{st}(v_1)$ and $\\operatorname{st}(v_2)$, a {\\em star isomorphism} is a bijection $\\varphi: \\operatorname{st}(v_1) \\to \\operatorname{st}(v_2)$ such that $\\varphi(v_1)=v_2$. \n\nGiven two open edge-stars $\\operatorname{st}(\\{u_1,v_1\\})$ and $\\operatorname{st}(\\{u_2,v_2\\})$, an {\\em edge-star isomorphism} is a bijection $\\varphi: \\operatorname{st}(\\{u_1,v_1\\}) \\to \\operatorname{st}(\\{u_2,v_2\\})$ such that\n\\begin{itemize}\n\\item[(i)] $\\varphi(V(\\G) \\cap \\operatorname{st}(\\{u_1,v_1\\}))=V(\\G) \\cap \\operatorname{st}(\\{u_2,v_2\\})$, that is, the vertices $u_1,v_1$ are mapped (in some order) to the vertices $u_2,v_2$.\n\\item[(ii)] $\\varphi$ is incidence-preserving, that is, $f \\in E(\\G) \\cap \\operatorname{st}(\\{u_1,v_1\\})$ is incident to $u_1$ if and only if $\\varphi(f)$ is incident to $\\varphi(u_1)$ and \n$f \\in E(\\G) \\cap \\operatorname{st}(\\{u_1,v_1\\})$ is incident to $v_1$ if and only if $\\varphi(f)$ is incident to $\\varphi(v_1)$. In particular, $\\varphi(\\{ u_1,v_1 \\}) = \\{ u_2,v_2 \\}$. \n\\end{itemize}}\n\\end{definition}\n\n\\begin{definition}\n\\label{def:star trans}\n{\\em Let $\\Gamma$ be a simple graph, and let $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$. 
Then $\\Gamma$ is called \n\\begin{itemize}\n\\item[(i)] \\emph{$G$-star-transitive} if for all $v_1,v_2\\in V(\\G)$ and for all star isomorphisms $\\varphi: \\operatorname{st}(v_1) \\to \\operatorname{st}(v_2)$, there exists an automorphism $\\psi \\in G$\nsuch that $\\psi(v_1)=\\varphi(v_1)$ and for all $f \\in E(\\G) \\cap \\operatorname{st}(v_1)$ we have $\\psi(f)=\\varphi(f)$; and \n\\item[(ii)] \\emph{$G$-st(edge)-transitive} if for all $\\{u_1,v_1\\}, \\{u_2,v_2\\} \\in E(\\G)$ and edge-star isomorphisms $\\varphi: \\operatorname{st}(\\{u_1,v_1\\}) \\to \\operatorname{st}(\\{u_2,v_2\\})$,\nthere exists an automorphism $\\psi \\in G$\nsuch that $\\psi(u_1)=\\varphi(u_1)$, $\\psi(v_1)=\\varphi(v_1)$, and for all $f \\in E(\\G) \\cap \\operatorname{st}(\\{u_1,v_1\\})$ we have $\\psi(f)=\\varphi(f)$.\n\\end{itemize}}\nIf $G =\\ensuremath{\\operatorname{Aut}}(\\Gamma)$ then we simply say that $\\Gamma$ is star-transitive or st(edge)-transitive, respectively. \n\\end{definition}\n\n\nA subtlety of Definition \\ref{def:star trans} is that if there is no star-isomorphism $\\operatorname{st}(v_1) \\to \\operatorname{st}(v_2)$ or edge-star isomorphism $\\operatorname{st}(\\{u_1,v_1\\}) \\to \\operatorname{st}(\\{u_2,v_2\\})$, then the required property of extending to a graph automorphism holds trivially. \nAnother subtlety is the introduction of the notions of star-transitivity and st(edge)-transitivity relative to subgroups $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$. Considering subgroups of $\\ensuremath{\\operatorname{Aut}}(\\Gamma)$ with certain transitivity properties is quite common in algebraic graph theory; the main reason is that there are examples where some $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$ extends to covers of $\\Gamma$ but the full automorphism group does not. 
For example, the icosahedron is a cover of the complete graph $K_6$ but not all of $\\ensuremath{\\operatorname{Aut}}(K_6)=S_6$ extends.\n\n\nThe reason for the somewhat cumbersome formulation of the definitions above is that in the case when $\\operatorname{girth}(\\Gamma) \\le 4$, the definition of a star isomorphism or edge-star isomorphism $\\varphi$ does {\\em not} \nrequire that $\\varphi$ preserves the possible adjacency relations among the neighbours of the vertices occurring in the open stars and edge-stars. However, the graph automorphisms defined in Definition~\\ref{def:star trans}, extending the star isomorphisms and edge-star isomorphisms, must preserve such adjacency relations. The following result, whose proof is immediate, says that for large enough girth we can work with much simpler definitions. Given a vertex $v$ we let $X(v):=\\{ v \\} \\cup \\Gamma(v)$, and for an edge $\\{u,v\\}$ we let\n$X(\\{u,v\\}):=\\{ u \\} \\cup \\{ v \\} \\cup \\Gamma(u) \\cup \\Gamma(v)$.\n\n\\begin{prop}\n\\label{prop:equiv}\n$(i)$ If $\\operatorname{girth}(\\Gamma) \\ge 4$ then for $v \\in V(\\G)$, $\\operatorname{st}(v)$ can be identified with the restriction \n$\\Gamma \\mid_{X(v)}$. A star isomorphism is then \na graph isomorphism $\\varphi_1: \\Gamma \\mid_{X(v_1)} \\to \\Gamma \\mid_{X(v_2)} $ and \n$\\Gamma$ is star-transitive if and only if \nevery star isomorphism extends to an automorphism of $\\Gamma$. \n\n$(ii)$ If $\\operatorname{girth}(\\Gamma) \\ge 5$ then for $\\{u,v\\} \\in E(\\G)$, $\\operatorname{st}(\\{u,v\\})$ can be identified with the restriction $\\Gamma \\mid_{X(\\{u,v\\})}$. 
An edge-star isomorphism is then a graph isomorphism \n$\\varphi_2: \\Gamma \\mid_{X(\\{u_1,v_1\\})} \\to \\Gamma \\mid_{X(\\{u_2,v_2\\})}$ and the graph $\\Gamma$ is st(edge)-transitive if and only if \nevery edge-star isomorphism extends to an automorphism of $\\Gamma$.\n\\end{prop}\n\n\\subsection{Group-theoretic notation}\\label{sec:groups}\n\nFor a natural number $k$, we denote by $S_k$ the symmetric group on $k$ letters, by $A_k$ the alternating group on $k$ letters, by $C_k$ the cyclic group of order $k$, and by $D_{2k}$ the dihedral group of order $2k$. The projective special linear group, projective general linear group, and projective semilinear group of dimension $d$ over a field of size $q$ are denoted by $\\ensuremath{\\operatorname{PSL}}(d,q)$, $\\ensuremath{\\operatorname{PGL}}(d,q)$, and $\\operatorname{P\\Gamma L}(d,q)$, respectively. Given a group $A$ and a natural number $k$, we denote by $A \\Wr S_k$ the following wreath product: let $B$ be the direct product of $k$ copies of $A$. Then $S_k$ acts naturally on $B$ by permuting the $k$ copies of $A$, and $A \\Wr S_k$ is the semidirect product induced by this action. We denote the semidirect product of two groups $A$ and $B$ by $A{:}B$. A group that is a (not necessarily split) extension of a subgroup $A$ by a group $B$ will be denoted by $A.B$. Given a prime $p$, $p^m$ will be used to denote an elementary abelian group of order $p^m$, and we will use $[p^m]$ to denote a group of order $p^m$ when we do not wish to specify the isomorphism type. The \\emph{socle} of a group $G$ is the subgroup generated by all the minimal normal subgroups of $G$, and is denoted by $\\ensuremath{\\operatorname{soc}}(G)$. When $n\\geq 5$ or $n=3$, we have that $\\ensuremath{\\operatorname{soc}}(S_n)=A_n$.\n\nWe refer to a triple of groups $(A,B,A\\cap B)$ as an \\emph{amalgam}. 
A \\emph{completion} of the amalgam $(A,B,A\\cap B)$ is a group $G$ together with group homomorphisms $\\phi_1:A\\rightarrow G$ and $\\phi_2:B\\rightarrow G$ such that $\\phi_1$ and $\\phi_2$ are one-to-one, $G=\\langle \\phi_1(A),\\phi_2(B)\\rangle$ and $\\phi_1(A)\\cap\\phi_2(B)=\\phi_1(A\\cap B)=\\phi_2(A\\cap B)$.\n\n\n\\section{Graphs with small girth or with small minimal valency}\n\\label{s:small}\n\nIn this section, we characterise star-transitive and st(edge)-transitive graphs of girth at most four, so that in the rest of the paper we can concentrate on the case $\\operatorname{girth}(\\Gamma) \\ge 5$ and use the simplified description of stars and edge-stars, as given in Proposition~\\ref{prop:equiv}. We also characterise star-transitive and st(edge)-transitive graphs of minimal valency one or two. \n\n\\begin{lemma}\n\\label{lem:girth 3}\nThe only connected star-transitive graphs of girth $3$ are the complete graphs $K_n$, for some $n \\ge 3$. The only connected st(edge)-transitive graph of girth $3$ is the triangle $K_3$. The only connected st(edge)-transitive graphs of girth $4$ are the complete bipartite graphs $K_{m,n}$, for some $m,n \\ge 2$. \n\\end{lemma}\n\n\\begin{proof}\nLet $\\{ u,v,w \\}$ be a cycle of length $3$ in $\\Gamma$, and suppose that $\\Gamma$ is star-transitive. Considering the extensions of all star isomorphisms $\\varphi: \\operatorname{st}(v) \\to \\operatorname{st}(v)$ to \nautomorphisms of $\\Gamma$, we obtain that any two vertices in $\\Gamma(v)$ are adjacent. Hence, for any $x \\in \\Gamma(v)$, $x$ is contained in a cycle of length $3$, and by the same argument as above\nall neighbours of $x$ are adjacent. In particular, all $y \\in \\Gamma(x) \\setminus \\{ v\\}$ are adjacent to $v$, and so $\\{ v \\} \\cup \\Gamma(v) = \\{ x\\} \\cup \\Gamma(x)$. 
\nAs $\\Gamma$ is connected, we obtain that $V(\\G) = \\{ v \\} \\cup \\Gamma(v)$ and $\\Gamma$ is a complete graph.\n\nSuppose now that $\\{ u,v,w \\}$ is a cycle of length $3$ in $\\Gamma$, and that $\\Gamma$ is st(edge)-transitive. If the vertex $u$ has valency greater than $2$ then there exists an edge-star isomorphism \n$\\varphi: \\operatorname{st}(\\{u,v\\}) \\to \\operatorname{st}(\\{u,v\\}) $ such that $\\varphi(u)=u$, $\\varphi(v)=v$, $\\varphi(\\{u,w\\}) \\ne \\{ u,w\\}$, and $\\varphi(\\{v,w\\}) = \\{ v,w\\}$. However, $\\varphi$ cannot be\nextended to an automorphism of $\\Gamma$, a contradiction. Similarly, $v$ and $w$ also must be of valency $2$ and so, as $\\Gamma$ is connected, $V(\\G) = \\{ u,v,w \\}$.\n\nFinally, suppose that $\\operatorname{girth}(\\Gamma)=4$, $\\Gamma$ is st(edge)-transitive, and let $\\{ u,v,w,z \\}$ be a $4$-cycle. Considering the extensions of all edge-star isomorphisms $\\varphi: \\operatorname{st}(\\{u,v\\}) \\to \\operatorname{st}(\\{ u,v\\})$ that fix $u$ and $v$, we obtain\nthat, as an image of the edge $\\{ w,z\\}$, every pair $\\{ w_1,z_1 \\}$ with \n$w_1 \\in \\Gamma(v)$ and $z_1 \\in \\Gamma(u)$ is in $E(\\G)$. Therefore, for every \n$w_1 \\in \\Gamma(v)$ we have $\\Gamma(w_1) \\supseteq \\Gamma(u)$. Repeating the same argument\nwith edge-star isomorphisms $\\varphi: \\operatorname{st}(\\{w_1,v\\}) \\to \\operatorname{st}(\\{ w_1,v\\})$ and \na four-cycle containing $\\{ w_1,z \\}$, we obtain that for every \n$w_2 \\in \\Gamma(v)$ we have $\\Gamma(w_2) \\supseteq \\Gamma(w_1)$. In particular, for $w_2=u$,\n$\\Gamma(w_1)= \\Gamma(u)$. As $w_1$ was an arbitrary element of $\\Gamma(v)$, all vertices in $\\Gamma(v)$ have the same neighbours. Similarly, all vertices in $\\Gamma(u)$ have the same neighbours, $V(\\G)= \\Gamma(u) \\cup \\Gamma(v)$, and $\\Gamma$ is a complete bipartite graph. \n\\end{proof}\n\nNext, we characterise the star-transitive and st(edge)-transitive graphs with a vertex of valency one. 
For $n \\ge 3$, we define the {\\em spider graph $T_n$} as a graph with $2n+1$ vertices \n$V(T_n)=\\{ x,y_1,\\ldots,y_n,z_1,\\ldots,z_n \\}$ and $2n$ edges \n$E(T_n)= \\{ \\{ x,y_i \\}, \\{ y_i,z_i \\} \\mid 1 \\le i \\le n \\}$. \n\n\\begin{lemma}\n\\label{lem:valency one}\nThe only connected star-transitive graphs with a vertex of valency one are the\ncomplete bipartite graphs $K_{1,n}$, for some $n \\ge 1$. \nThe only connected st(edge)-transitive graphs with a vertex of valency one are: the complete bipartite graphs $K_{1,n}$, for some $n \\ge 1$; the path $P_4$ with four vertices; and the spider graphs $T_n$, for some $n \\ge 3$. \n\\end{lemma}\n\n\\begin{proof}\nLet $\\Gamma$ be a simple graph, let $v \\in V(\\G)$ have valency one, and let $u$ be the unique neighbour of $v$. If $\\Gamma$ is star-transitive then, considering the star-isomorphisms $\\varphi: \\operatorname{st}(u) \\to \\operatorname{st}(u)$ mapping $v$ to other neighbours of $u$, we obtain that all neighbours of $u$ have valency one. As $\\Gamma$ is connected, we obtain $\\Gamma \\cong K_{1,n}$, where $n$ is the valency of $u$. \n\nSuppose now that $\\Gamma$ is st(edge)-transitive. We distinguish three cases, according to the valency of $u$. If $u$ has valency at least three then let $w,z$ be neighbours of $u$ that are different from $v$. Considering edge-star isomorphisms $\\varphi: \\operatorname{st}(\\{u,w\\}) \\to \\operatorname{st}(\\{u,w\\})$ that fix $u$ and map $v$ to neighbours of $u$ different from $w$, we obtain that all neighbours of $u$ different from $w$ have valency one. Repeating the same process with edge-star isomorphisms \n$\\varphi: \\operatorname{st}(\\{u,z\\}) \\to \\operatorname{st}(\\{u,z\\})$, we deduce that $w$ also has valency one and $\\Gamma \\cong K_{1,n}$, where $n$ is the valency of $u$.\n\nIf $u$ has valency one then $\\Gamma \\cong K_{1,1}$. If $u$ has valency two then let $x$ be the neighbour of $u$ different from $v$. 
We distinguish three subcases, according to the valency of $x$. If $x$ has valency one then $\\Gamma \\cong K_{1,2}$. If $x$ has valency two then, from the edge-star isomorphism \n$\\varphi: \\operatorname{st}(\\{u,x\\}) \\to \\operatorname{st}(\\{u,x\\})$ that exchanges $u$ and $x$, we obtain that $\\Gamma \\cong P_4$. Finally, if the valency of $x$ is at least three then let \n$w,z$ be neighbours of $x$ that are different from $u$. Considering edge-star isomorphisms $\\varphi: \\operatorname{st}(\\{x,w\\}) \\to \\operatorname{st}(\\{x,w\\})$ that fix $x$ and map $u$ to neighbours of $x$ different from $w$, we obtain that all neighbours of $x$ different from $w$ have valency two and they are adjacent to a vertex of valency one. \nRepeating the argument with edge-star isomorphisms $\\varphi: \\operatorname{st}(\\{x,z\\}) \\to \\operatorname{st}(\\{x,z\\})$, we see that the neighbour $w$ also has this property and so $\\Gamma \\cong T_n$, where $n$ is the valency of $x$. \n\\end{proof}\n\nFinally, we handle the case of minimal valency two. For any $n \\ge 3$, the cycle $C_n$ is $2$-regular, star-transitive, and st(edge)-transitive. We obtain further examples by the following constructions. \n\nLet $\\Sigma$ be a simple graph of minimal valency at least three. We construct\nthe {\\em $1$-subdivision} $\\Gamma$ of $\\Sigma$ by replacing each edge by a path of length two. Formally, we define $V(\\G)=V(\\Sigma) \\cup E(\\Sigma)$. The sets \n$V(\\Sigma)$ and $ E(\\Sigma)$ are independent in $\\Gamma$, and $v \\in V(\\Sigma)$\nis connected to $e \\in E(\\Sigma)$ in $\\Gamma$ if and only if $v$ and $e$ are\nincident in $\\Sigma$.\nSimilarly, we construct the {\\em $2$-subdivision} of $\\Sigma$ by replacing each edge by a path of length three. The following proposition is easy to verify.\n\n\\begin{prop}\n\\label{min two examples}\nLet $\\Sigma$ be an arc-transitive graph of minimal valency at least three which is locally fully symmetric. 
Then the $1$-subdivision of $\\Sigma$ is both star-transitive and st(edge)-transitive. The $2$-subdivision of $\\Sigma$ is st(edge)-transitive, but not star-transitive. \n\\end{prop}\n\n\\begin{lemma}\n\\label{min two star}\nSuppose that $\\Gamma$ is star-transitive and the minimal valency in $\\Gamma$ is two, but $\\Gamma$ is not $2$-regular. Then there exists an arc-transitive graph $\\Sigma$ of valency at least three which is locally fully symmetric such that $\\Gamma$ is isomorphic to the $1$-subdivision of $\\Sigma$.\n\\end{lemma}\n\n\\begin{proof}\nSince $\\Gamma$ is not 2-regular, there exists $\\{ v,w \\} \\in E(\\G)$ with $v$ of valency $k>2$ and $w$ with valency $2$. Lazarovich~\\cite[Lemma 1.1]{L} proved that this implies $\\Gamma$ is edge-transitive; consequently, all edges of $\\Gamma$ connect valency $2$ vertices with vertices of valency $k$. Hence $\\Gamma$ is bipartite and $\\Gamma$ is a $1$-subdivision of a graph $\\Sigma$ with minimal valency at least $3$.\n\nAutomorphisms of $\\Gamma$, restricted to the vertices of valency $k$, naturally define automorphisms of $\\Sigma$. Star isomorphisms $\\operatorname{st}(v) \\to \\operatorname{st}(v)$, with $v \\in V(\\G)$ and $v$ of valency $k$, show that $\\Sigma$ is locally fully symmetric, and consequently $\\Sigma$ is edge-transitive. Finally, star isomorphisms $\\operatorname{st}(w) \\to \\operatorname{st}(w)$ of $\\Gamma$, with $w \\in V(\\G)$ of valency two, show that edges in $\\Sigma$ can be turned around by automorphisms, and so $\\Sigma$ is arc-transitive. \n\\end{proof}\n\n\\begin{lemma}\n\\label{min two edge}\nSuppose that $\\Gamma$ is st(edge)-transitive, the minimal valency in $\\Gamma$ is two, but $\\Gamma$ is not $2$-regular. Then there exists an edge-transitive graph $\\Sigma$ of minimal valency at least three which is locally fully symmetric such that one of the following holds. 
\n\\begin{itemize}\n\\item[$(1)$] $\\Sigma$ is non-regular and $\\Gamma$ is isomorphic to the $1$-subdivision of $\\Sigma$.\n\\item[$(2)$] $\\Sigma$ is arc-transitive, and $\\Gamma$ is isomorphic to the $1$- or $2$-subdivision of $\\Sigma$.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{proof}\nWe claim that there are no three vertices $u,v,w$, all of valency two, such that $\\{ u,v\\} \\in E(\\G)$ and $\\{ v,w\\} \\in E(\\G)$. Indeed, if $u$, $v$, $w$ are such vertices then there is a unique path in $\\Gamma$ starting with the edge $\\{ u,v\\}$, consisting of vertices of valency two, such that the endpoint $x$ of the path has a neighbour $z$ of valency greater than two. Then, for the last two vertices $x$ and $y$ of this path, the edge-star isomorphism $\\varphi: \\operatorname{st}(\\{ x,y \\}) \\to \\operatorname{st}(\\{ x,y \\})$ that exchanges $x$ and $y$ has no extension to an automorphism of $\\Gamma$, a contradiction.\n\nLet $\\{ v,w \\} \\in E(\\G)$ with $v$ of valency greater than $2$ and $w$ with valency $2$, let $x,y$ be two further neighbours of $v$, and let $m$ be the maximal number of vertices on a path starting at $w$ and consisting of vertices of valency $2$. By the claim in the previous paragraph, $m \\in \\{ 1,2\\}$. Considering the edge-star isomorphisms \n\\begin{equation}\n\\label{eq:xyv}\n\\operatorname{st}(\\{ x,v \\}) \\to \\operatorname{st}(\\{ x,v \\}) \\mbox{ and } \n\\operatorname{st}(\\{ y,v \\}) \\to \\operatorname{st}(\\{ y,v \\})\n\\end{equation}\nthat fix the vertex $v$, we obtain that\nall neighbours of $v$ have valency $2$ and for each neighbour $z$, the maximal length of a path starting at $z$ and consisting of vertices of valency $2$ is $m$. 
Then, by induction on the distance from $v$, we get that all vertices $v'$ of valency greater than $2$ have this property, and so $\\Gamma$ is the $m$-subdivision of a graph $\\Sigma$ of minimal valency at least $3$.\n\nAutomorphisms of $\\Gamma$, restricted to the vertices of valency greater than two, naturally define automorphisms of $\\Sigma$. \nThe edge-star isomorphisms in \\eqref{eq:xyv} show that $\\Sigma$ is locally fully symmetric, and consequently $\\Sigma$ is edge-transitive. If $m=2$ and $vwab$ is a path in $\\Gamma$ connecting the vertices $v,b$ of valency greater than $2$ then the \nedge-star isomorphism $\\operatorname{st}(\\{ w,a \\}) \\to \\operatorname{st}(\\{ w,a \\})$ that exchanges $w$ and $a$ shows that $\\Sigma$ is arc-transitive and we are in case $(2)$ of the lemma. If $m=1$ and $\\Sigma$ is regular then let $vwb$ be a path in $\\Gamma$ connecting \nthe vertices $v,b$ of valency greater than $2$. The edge-star isomorphism $\\operatorname{st}(\\{ w,v \\}) \\to \\operatorname{st}(\\{ w,b\\})$ shows that $\\Sigma$ is arc-transitive, and again we are in case $(2)$. 
Finally, if $\\Sigma$ is non-regular then we are in case $(1)$.\n\\end{proof}\n\nCombining the results of this section, we obtain Theorem~\\ref{thm:small valency}.\n\n\n\\section{Connections among star-transitivity, st(edge)-transitivity, and arc-transitivity}\\label{s:observations}\n\nThis section contains preliminary results used for the proofs of Theorems \\ref{v-star-st(edge)-t} and \\ref{vtx-intrans}.\nWe begin by recording the following result of Lazarovich \\cite[Lemma 1.1]{L}:\n\n\\begin{lemma}\\label{l:lazarovich} If $\\Gamma$ is a connected star-transitive graph then either:\n\\begin{enumerate}\n\\item\\label{c:vt} $\\Gamma$ is 2-arc-transitive; or\n\\item\\label{c:et} $\\Gamma$ is edge-transitive and bipartite, with $V(\\G) = A_1 \\sqcup A_2$, and there exist $d_1, d_2 \\in \\mathbb{N}$ so that for all $v \\in A_i$, the vertex $v$ has valency $d_i$ ($i = 1,2$).\n\\end{enumerate}\n\\end{lemma}\n\n\\noindent It is noted in the proof of \\cite[Lemma 1.1]{L} that in Case \\eqref{c:et}, $d_1 \\neq d_2$. We will discuss both cases of Lemma \\ref{l:lazarovich} further below.\nOur first observations are as follows. \n\n\\begin{lemma}\\label{l:vertex transitive} Let $\\Gamma$ be a $G$-star-transitive graph. If $\\Gamma$ is $k$-regular then $\\Gamma$ is $G$-vertex-transitive.\n\\end{lemma}\n\n\\begin{lemma}\\label{l:locally fully symmetric} Let $\\Gamma$ be a $G$-star-transitive graph. Then $G_v^{\\Gamma(v)} = S_{|\\Gamma(v)|}$ for all $v \\in V(\\G)$, that is, $\\Gamma$ is locally fully symmetric.\n\\end{lemma}\n\nThe converse of Lemma \\ref{l:locally fully symmetric} does not hold, as there are graphs which are locally fully symmetric but are not star-transitive. In fact, there are regular graphs which are locally fully symmetric but are not star-transitive. The following example was first described by Lipschutz and Xu \\cite{LX}. Let $G = \\ensuremath{\\operatorname{PGL}}(2,p)$ for $p$ prime, $p \\equiv \\pm1 \\pmod{24}$. 
Then $G$ is generated by subgroups $H \\cong D_{24}$ and $K \\cong S_4$, such that $H \\cap K \\cong D_8$. The graph $\\Gamma$ is defined to be the bipartite graph with vertex set $G\/H \\sqcup G\/K$ and edge set $G\/(H \\cap K)$, so that the edge $g(H \\cap K)$, for $g \\in G$, connects the vertices $gH$ and $gK$. Then $\\Gamma$ is cubic and locally fully symmetric, since the natural left-action of $G$ induces $S_3$ at each vertex. However, $\\Gamma$ is not vertex-transitive.\n\nThe following sufficient conditions for star-transitivity are easily verified.\n\n\\begin{lemma}\\label{l:suff star} If there exists a subgroup $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$ such that either:\n\\begin{enumerate}\n\\item\\label{suff1} $G$ is locally fully symmetric and vertex-transitive; or \n\\item\\label{suff2} $G$ is locally fully symmetric and edge-transitive, and there are \nnatural numbers $k \\neq \\ell$ such that each vertex has valency either $k$ or $\\ell$;\n\\end{enumerate}\nthen $\\Gamma$ is $G$-star-transitive.\n\\end{lemma}\n\nWe now consider st(edge)-transitivity. \nLazarovich's main results concern graphs which are both star-transitive and st(edge)-transitive. 
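Before turning to that, we note that the coset construction in the example above is easy to experiment with on a small scale. The following Python sketch is our own illustration; the toy instance below is not the example of \\cite{LX} (whose groups are too large to list here) but the analogous bi-coset graph for $G=S_4$ with $H$ a Sylow $2$-subgroup isomorphic to $D_8$ and $K=A_4$, so that $H \\cap K \\cong C_2 \\times C_2$; the resulting graph is $K_{3,2}$, with bi-valency $\\{2,3\\}$.

```python
from itertools import permutations

def compose(p, q):
    """Permutation composition: (p*q)(i) = p(q(i)); p, q are tuples."""
    return tuple(p[i] for i in q)

def generate(gens, n):
    """Subgroup of S_n generated by gens (in a finite group, closure
    under left multiplication by the generators suffices)."""
    elems = frontier = {tuple(range(n))}
    while frontier:
        frontier = {compose(g, x) for g in gens for x in frontier} - elems
        elems = elems | frontier
    return elems

def left_cosets(G, H):
    return {frozenset(compose(g, h) for h in H) for g in G}

def bicoset_graph(G, H, K):
    """Vertices are the cosets gH and gK; the edge g(H cap K) joins gH
    and gK, i.e. two cosets are adjacent iff they intersect."""
    VH, VK = left_cosets(G, H), left_cosets(G, K)
    edges = [(a, b) for a in VH for b in VK if a & b]
    return VH, VK, edges

G = set(permutations(range(4)))                 # S_4
H = generate({(1, 2, 3, 0), (2, 1, 0, 3)}, 4)   # D_8 = <(0 1 2 3), (0 2)>
K = generate({(1, 2, 0, 3), (0, 2, 3, 1)}, 4)   # A_4 = <(0 1 2), (1 2 3)>
```

One checks that $|H\\cap K|=4$, and that the three $H$-cosets have valency $2$ while the two $K$-cosets have valency $3$, so all $3\\cdot 2=6$ possible edges are present.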
We shall prove that, with the exception of the small-valency cases handled in Section~\\ref{s:small}, st(edge)-transitivity implies star-transitivity.\nRecall that for $v \\in V(\\G)$ and $\\{ u,v \\} \\in E(\\G)$, we defined \n$X(v) = \\{ v \\} \\cup \\Gamma(v)$ and $X(\\{ u,v\\})=\\{ u \\} \\cup \\{ v\\} \\cup \n\\Gamma(u) \\cup \\Gamma(v)$.\n\n\\begin{lemma}\\label{lem:stedge}\\label{lem:edgeaction} A connected graph $\\Gamma$ with $G = \\ensuremath{\\operatorname{Aut}}(\\Gamma)$, minimal valency at least three and girth at least four is st(edge)-transitive if and only if it is edge-transitive and either:\n\\begin{enumerate}\n\\item there is a $k \\in \\mathbb{N}$ so that for all edges $\\{ u,v\\}$, $G_{\\{u,v\\}}^{X(\\{u,v\\})} = S_{k-1} \\Wr S_2$, in which case $\\Gamma$ is $k$-regular; or \n\\item there are $k,\\ell \\in \\mathbb{N}$ with $k \\neq \\ell$ so that for all edges $\\{ u,v\\}$, $G_{\\{u,v\\}}^{X(\\{u,v\\})} = S_{k-1} \\times S_{\\ell-1}$, in which case $\\Gamma$ is $(k,\\ell)$-biregular.\n\\end{enumerate}\n\\end{lemma}\n\n\n\n\\begin{proof}\nObserve first that, since the minimum valency of $\\Gamma$ is at least three, $\\Gamma$ is not a tree. If $\\Gamma$ has girth four and is st(edge)-transitive then Lemma \\ref{lem:girth 3} implies that $\\Gamma$ is complete bipartite. Thus $\\Gamma$ is edge-transitive and (1) holds if $\\Gamma$ is regular, while (2) holds if $\\Gamma$ is biregular. Conversely, assume that $\\Gamma$ has girth four, is edge-transitive and either (1) or (2) holds. Let $\\{u,v,w,z\\}$ be a 4-cycle. Then $z^{G_{u,v,w}}=\\Gamma(u)\\backslash\\{v\\}$ and so $\\Gamma(u)=\\Gamma(w)$. Similarly we see that $\\Gamma(v)=\\Gamma(z)$ and so $\\Gamma$ is complete bipartite and hence st(edge)-transitive.\n\nIf $\\operatorname{girth}(\\Gamma) \\ge 5$ then clearly $\\ensuremath{\\operatorname{Aut}}(\\Gamma \\mid_{X(\\{u,v\\})}) \\cong S_{k-1} \\Wr S_2$ or\n$S_{k-1} \\times S_{\\ell-1}$ in the cases $k=\\ell$ and $k \\ne \\ell$, respectively. 
Moreover, by Proposition~\\ref{prop:equiv}(ii), every edge-star isomorphism is a graph isomorphism, and $\\Gamma$ is st(edge)-transitive if and only if $\\Gamma$ is edge-transitive and every $\\varphi \\in \\ensuremath{\\operatorname{Aut}}(\\Gamma \\mid_{X(\\{u,v\\})})$ extends to an automorphism in\n$G_{\\{u,v\\}}$. Since the restriction of any \n$\\psi \\in G_{\\{u,v\\}}$ to $X(\\{u,v\\})$ is in $\\ensuremath{\\operatorname{Aut}}(\\Gamma \\mid_{X(\\{u,v\\})})$, the result follows.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lem:stimpliesstar}\nLet $\\Gamma$ be a $G$-st(edge)-transitive graph of minimum valency at least three. Then $\\Gamma$ is $G$-star-transitive.\n\\end{lemma}\n\n\\begin{proof}\nIf $\\operatorname{girth}(\\Gamma) \\le 4$ then $\\Gamma \\cong K_{m,n}$ by Lemma~\\ref{lem:girth 3} and the statement of this lemma holds. Suppose that $\\operatorname{girth}(\\Gamma) \\ge 5$, let \n$\\{u,v\\}$ be an edge of $\\Gamma$, let $k$ be the valency of $v$ and let $G=\\ensuremath{\\operatorname{Aut}}(\\Gamma)$. By Lemma \\ref{lem:edgeaction}, $G_v^{\\Gamma(v)\\backslash \\{u\\}}=S_{k-1}$. Let $w\\in\\Gamma(v)\\backslash\\{u\\}$. Then again by Lemma \\ref{lem:edgeaction}, $G_v^{\\Gamma(v)\\backslash \\{w\\}}=S_{k-1}$. As $k \\ge 3$, it follows that $G_v^{\\Gamma(v)}=S_k$ and in particular $G_x^{\\Gamma(x)}=S_{|\\Gamma(x)|}$ for each vertex $x$. Thus $\\Gamma$ is locally fully symmetric, hence locally 2-arc transitive and edge-transitive. \n\nIf $u$ has valency $\\ell\\neq k$ then Lemma \\ref{l:suff star}(\\ref{suff2}) implies that $\\Gamma$ is star-transitive.\nIf $u$ also has valency $k$ then, since $\\Gamma$ is st(edge)-transitive, Lemma \\ref{lem:edgeaction}(1) implies that there is an element of $G$ interchanging $u$ and $v$. Hence $G$ is arc-transitive and in particular vertex-transitive. 
Thus by Lemma \\ref{l:suff star}(\\ref{suff1}), $\\Gamma$ is star-transitive.\n\\end{proof}\n\n\n\n\n\nWe now consider actions on arcs.\n\n\\begin{lemma}\\label{l:locally 2} If $\\Gamma$ is $G$-star-transitive then $G$ is locally 2-transitive on $\\Gamma$ and thus $\\Gamma$ is locally $(G,2)$-arc transitive. \n\\end{lemma}\n\n\\begin{proof} Since $G_v^{\\Gamma(v)} = S_{|\\Gamma(v)|}$ which is $2$-transitive, the graph $\\Gamma$ is locally $2$-transitive.\n\\end{proof}\n\nIt follows that if a connected star-transitive graph $\\Gamma$ is vertex-transitive then $\\Gamma$ is $2$-arc transitive, as was given in Case \\eqref{c:vt} of Lemma \\ref{l:lazarovich} above.\n\n\\begin{lemma} \n\\label{lem:3 arc}\nIf $\\Gamma$ has minimal valency at least three and $\\Gamma$ is $G$-st(edge)-transitive then $\\Gamma$ is locally $(G,3)$-arc transitive.\n\\end{lemma}\n\n\\begin{proof}\nBy Lemma \\ref{lem:stimpliesstar}, $\\Gamma$ is $G$-star-transitive and so by Lemma \\ref{l:locally 2}, $\\Gamma$ is locally $(G,2)$-arc transitive. Let $(u,v,w)$ be a 2-arc of $\\Gamma$. Since $\\Gamma$ is $G$-st(edge)-transitive, Lemma \\ref{lem:edgeaction} applied to the edge $\\{v,w\\}$ implies that $G_{uvw}^{\\Gamma(w)\\backslash\\{v\\}}=S_{|\\Gamma(w)|-1}$. Hence $G_{uvw}$ acts transitively on the set of $3$-arcs starting with $(u,v,w)$. Thus $\\Gamma$ is locally $(G,3)$-arc transitive.\n\\end{proof}\n\nBy Lemmas~\\ref{lem:3 arc}, \\ref{lem:stimpliesstar} and \\ref{l:vertex transitive}, we have the following corollary. \n\n\\begin{corollary}\\label{c:3-arc transitive} Suppose $\\Gamma$ is a connected $k$-regular graph. If $\\Gamma$ is $G$-star-transitive and $G$-st(edge)-transitive, then $\\Gamma$ is $(G,3)$-arc transitive.\n\\end{corollary}\n\n\\section{The vertex-transitive case}\n\\label{s:vertex trans}\n\nIn this section we prove Theorem \\ref{v-star-st(edge)-t}, that is, we conduct a local analysis of vertex-transitive and st(edge)-transitive graphs. 
\nWe recall that a group is called \\emph{$p$-local} for some prime $p$ if it contains a nontrivial normal $p$-subgroup. Part (1) of the following fundamental theorem was proven by \nGardiner~\\cite[Corollary 2.3]{G} and was also established by Weiss in \\cite{W1}. Part (2) of the theorem is due to Weiss~\\cite{W2}. With the hypotheses as in Theorem~\\ref{t:arc kernel}(2),\nWeiss~\\cite{W2} proved additional results on the structure of the point stabiliser $G_u$, which we shall recall as needed in the proofs of Lemmas~\\ref{val=4} and \\ref{val>4}.\n\n\\begin{theorem}\\label{t:arc kernel} Let $\\Gamma$ be a connected graph and let $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$ be vertex-transitive and locally primitive. Then there exists a prime $p$ such that for all arcs $uv$:\n\\begin{enumerate}\n\\item $G_{uv}^{[1]}$ is a $p$-group; and \n\\item if in addition $G_{uv}^{[1]} \\ne 1$ then \n$G_{uv}^{\\Gamma(u)}$ is $p$-local.\n\\end{enumerate}\n\\end{theorem}\n\nLet $\\Gamma$ be a connected graph, and let $G\\le\\ensuremath{\\operatorname{Aut}}(\\Gamma)$ be vertex-transitive.\nRecall that a $(G,s)$-arc-transitive graph is called {\\em $(G,s)$-transitive} if it is not $(G,s+1)$-arc-transitive.\nFor small valencies, the explicit structure of a vertex stabiliser is known. For example, in the cubic case we have the following result due to Tutte \\cite{tutte47}, and Djokovi\\'{c} and Miller \\cite{DM2}.\n\n\\begin{theorem}\nLet $\\Gamma$ be a cubic $(G,s)$-transitive graph. Then one of the following holds:\n\\begin{enumerate}\n \\item $s=1$ and $G_v=C_3$;\n\\item $s=2$ and $G_v=S_3$;\n\\item $s=3$ and $G_v=S_3\\times C_2$;\n\\item $s=4$ and $G_v=S_4$;\n\\item $s=5$ and $G_v=S_4\\times C_2$.\n\\end{enumerate}\n\\end{theorem}\n\\begin{corollary}\\label{cor:cubic}\nLet $\\Gamma$ be a cubic $(G,s)$-transitive graph. 
Then $\\Gamma$ is $G$-star-transitive if and only if $s\\geq 2$, while $\\Gamma$ is $G$-star-transitive and $G$-st(edge)-transitive if and only if $s\\geq 3$.\n\\end{corollary}\n\n\nIn the 4-regular case, a complete determination of the vertex stabilisers of 2-arc transitive graphs was given by Poto\\v{c}nik \\cite{P}, building on earlier work of Weiss \\cite{W4}.\n\n\n\\begin{lemma}\\label{val=4}\nLet $\\Gamma$ be $4$-regular, let $v \\in V(\\G)$, and let $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$. Then $\\Gamma$ is $G$-star-transitive if and only if\none of the following statements holds:\n\\begin{enumerate}\n\\item $\\Gamma$ is $(G,2)$-transitive, and $G_v=S_4$;\n\n\\item $\\Gamma$ is $(G,3)$-transitive, and $G_v=S_4\\times S_3$ or $G_v=(A_4\\times C_3).2$ with the element of order 2 inducing a nontrivial automorphism of both $C_3$ and $A_4$;\n\n\\item $\\Gamma$ is $(G,4)$-transitive, and $G_v=3^2{:}\\ensuremath{\\operatorname{GL}}(2,3)$;\n\n\\item $\\Gamma$ is $(G,7)$-transitive, and $G_v=[3^5]{:}\\ensuremath{\\operatorname{GL}}(2,3)$.\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nSuppose first that $G$ satisfies one of the conditions $(1)$--$(4)$. Then $G$ is locally $2$-transitive, so \n$G_v^{\\Gamma(v)}\\cong A_4$ or $S_4$. None of the listed point stabilisers have a quotient group isomorphic to \n$A_4$, so $G_v^{\\Gamma(v)}\\cong S_4$. Consequently, by Lemma~\\ref{l:suff star}, $\\Gamma$ is $G$-star-transitive.\n\nConversely, suppose that $\\Gamma$ is $G$-star-transitive. \nBy Lemma~\\ref{l:locally fully symmetric}, $G_v^{\\Gamma(v)}=S_4\\cong\\ensuremath{\\operatorname{PGL}}(2,3)$. If $G_v^{[1]}=1$, then $G_v\\cong G_v^{\\Gamma(v)}=S_4$ and so the stabiliser $G_{uvw}$ of the 2-arc $uvw$ is isomorphic to $S_2$. 
Hence $G_{uvw}$ is not transitive on the set of three 3-arcs beginning with $uvw$ and so $\\Gamma$ is $(G,2)$-transitive.\n\nSuppose that $G_v^{[1]}\\not=1$ and let $\\{ v,w\\} \\in E(\\G)$.\nIf $G_{vw}^{[1]}=1$, then \n\\[1\\not=G_v^{[1]}\\cong G_v^{[1]}\/G_{vw}^{[1]}\\cong (G_v^{[1]})^{\\Gamma(w)}\\lhd G_{vw}^{\\Gamma(w)}\\cong S_3.\\]\nThus, $G_v^{[1]}=C_3$ or $S_3$, and so for $u\\in\\Gamma(v)\\backslash\\{w\\}$ we have that $G_{uvw}$ induces either $C_3$ or $S_3$ on the set of three 3-arcs beginning with $uvw$. Hence $\\Gamma$ is $(G,3)$-arc-transitive. Moreover, since $G_{vw}^{[1]}=1$, it follows from \\cite{W2} that $\\Gamma$ is not $(G,4)$-arc-transitive. Since $S_3$ has no outer automorphisms, if $G_v^{[1]}=S_3$ then we must have $G_v=S_3\\times S_4$. If $G_v^{[1]}=C_3$ then $G_v=C_3 \\times S_4$ or $(C_3\\times A_4).2$ with the element of order 2 inducing a nontrivial automorphism of both $C_3$ and $A_4$. However, the first case does not occur (see for example \\cite[p.1330]{P}). Thus $G_v=S_3\\times S_4$ or $(C_3\\times A_4).2$ and in both cases $\\Gamma$ is $(G,3)$-transitive.\n\nFinally, assume that $G_{vw}^{[1]}\\not=1$.\nThen by \\cite{W2}, $G_{vw}^{[1]}$ is a 3-group,\n\\[G_v=3^2{:}\\ensuremath{\\operatorname{GL}}(2,3),\\ \\mbox{or}\\ [3^5]{:}\\ensuremath{\\operatorname{GL}}(2,3),\\]\nand $\\Gamma$ is $(G,4)$-transitive or $(G,7)$-transitive, respectively.\n\\end{proof}\n\n\\vskip0.1in\n\\noindent{\\bf Remarks:}\n\\begin{enumerate}\n \\item In case (2) of Lemma \\ref{val=4} we have $G_{vw}=S_3\\times S_3$ or $G_{vw}=(C_3\\times C_3).2$ with respectively $G_v^{[1]}=S_3$ or $C_3$.\n \\item The stabiliser $G_v=3^2{:}\\ensuremath{\\operatorname{GL}}(2,3)$ is a parabolic subgroup of $\\ensuremath{\\operatorname{PGL}}(3,3)$, while\nthe stabiliser $G_v=[3^5]{:}\\ensuremath{\\operatorname{GL}}(2,3)$ is a parabolic subgroup of the exceptional group $G_2(3)$ of Lie type. 
In both cases $G_v^{\\Gamma(v)}\\cong\\ensuremath{\\operatorname{PGL}}(2,3)\\cong S_4$ and $(G_v^{[1]})^{\\Gamma(w)}\\cong S_3$.\n\\end{enumerate}\n\n\n\n\\vskip0.1in\n\n\\begin{lemma}\\label{val=4-2}\nAssume that $\\Gamma$ is of valency $4$, let $v \\in V(\\G)$, and let $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$.\nThen $\\Gamma$ is $G$-star-transitive and $G$-st(edge)-transitive if and only if \none of the following is true.\n\\begin{itemize}\n\\item[(1)] $\\Gamma$ is $(G,3)$-transitive, and $G_v=S_4\\times S_3$;\n\n\\item[(2)] $\\Gamma$ is $(G,4)$-transitive, and $G_v=3^2{:}\\ensuremath{\\operatorname{GL}}(2,3)$;\n\n\\item[(3)] $\\Gamma$ is $(G,7)$-transitive, and $G_v=[3^5]{:}\\ensuremath{\\operatorname{GL}}(2,3)$.\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\nBy Corollary \\ref{c:3-arc transitive} and Lemma \\ref{lem:edgeaction}, if $\\Gamma$ is $G$-star-transitive and $G$-st(edge)-transitive then $\\Gamma$ is $(G,3)$-arc transitive and $G_{vw}$ induces $S_3\\times S_3$ on $(\\Gamma(v)\\cup\\Gamma(w))\\backslash \\{v,w\\}$. This rules out case (1) of Lemma \\ref{val=4} and case (2) where $G_v=(A_4\\times C_3).2$. By Lemma \\ref{lem:stedge} and the remarks following Lemma \\ref{val=4}, it follows that the case where $G_v=S_4\\times S_3$ is $G$-st(edge)-transitive. It remains to prove that the $(G,4)$-arc-transitive graphs given in cases $(3)$ and $(4)$ of Lemma~\\ref{val=4} are $G$-st(edge)-transitive.\n\nSuppose that $G_v=3^2{:}\\ensuremath{\\operatorname{GL}}(2,3)$. Since $G_v^{\\Gamma(v)}\\cong G_v\/G_v^{[1]}$ is a transitive subgroup of $S_4$, the normal subgroup structure of $G_v$ implies that $G_v^{[1]}=3^2{:}2$, and $G_{vw}=3^2{:}2.S_3$. Since $G_v^{[1]} \\neq 1$, it follows that there exists $u\\in\\Gamma(v)$ such that $G_v^{[1]}$ acts on $\\Gamma(u)$ non-trivially; otherwise $G_v^{[1]}=G_v^{[2]}$, and it follows from the connectivity of $\\Gamma$ that $G_v^{[1]}$ fixes all vertices, contradicting $G_v^{[1]}\\neq 1$. 
Since $G_v$ is transitive on $\\Gamma(v)$ we deduce that $G_v^{[1]}$ acts non-trivially on $\\Gamma(w)$. Hence $1 \\neq (G_v^{[1]})^{\\Gamma(w)} \\lhd G_{vw}^{\\Gamma(w)} = S_3$. Recall also that in this case $G_{vw}^{[1]}$ is a 3-group and so $(G_v^{[1]})^{\\Gamma(w)}$ has even order. Thus $G_v^{[1]}\/G_{vw}^{[1]}=S_3$ and so $G_{vw}^{[1]}=C_3$.\nThen we have\n\\[3.2.S_3\\cong G_{vw}\/G_{vw}^{[1]}\\cong G_{vw}^{\\Gamma(v)\\cup\\Gamma(w)}\\le\nG_{vw}^{\\Gamma(v)}\\times G_{vw}^{\\Gamma(w)}\\cong S_3\\times S_3.\\]\nTherefore, $G_{vw}^{\\Gamma(v)\\cup\\Gamma(w)}\\cong S_3\\times S_3$, and so by Lemma \\ref{lem:stedge},\n$\\Gamma$ is $G$-st(edge)-transitive.\n\nNext suppose that $G_v=[3^5]{:}\\ensuremath{\\operatorname{GL}}(2,3)$. Then the normal subgroup structure of $G_v$ implies that $G_v^{[1]}=[3^5].2$ and $G_{vw}=[3^5].2.S_3$. Arguing as in the previous case we deduce again that $G_v^{[1]}\/G_{vw}^{[1]}\\cong S_3$, and so $G_{vw}^{[1]}=[3^4]$. A similar argument to the previous case reveals that $G_{vw}^{\\Gamma(v)\\cup\\Gamma(w)}\\cong S_3\\times S_3$ and so by Lemma \\ref{lem:stedge}, $\\Gamma$ is st(edge)-transitive.\n\\end{proof}\n\nNext, we consider the case of valency at least 5.\n\n\\begin{lemma}\\label{val>4}\nSuppose that $\\Gamma$ is of valency $r\\geq 5$, let $v \\in V(\\G)$, and let $G \\le \\ensuremath{\\operatorname{Aut}}(\\Gamma)$. 
Then $\\Gamma$ is $G$-star-transitive if and only if one of the following holds.\n\n\\begin{enumerate}[$(1)$]\n\\item $\\Gamma$ is $(G,2)$-transitive and $G_v=S_r$;\n\n\\item $\\Gamma$ is $(G,3)$-transitive, $r=7$, $G_v=S_7$ and $G_{\\{u,v\\}}=\\ensuremath{\\operatorname{Aut}}(A_6)$;\n\n\\item $\\Gamma$ is $(G,3)$-transitive, and $G_v=S_r\\times S_{r-1}$ or $(A_r\\times A_{r-1}).2$ with the element of order 2 inducing a nontrivial outer automorphism of both $A_r$ and $A_{r-1}$;\n\n\\item $r=5$, $\\Gamma$ is $(G,4)$-transitive and $G_v=[4^2]{:}{\\rm \\Gamma L}(2,4)$;\n\n\\item $r=5$, $\\Gamma$ is $(G,5)$-transitive and $G_v=[4^3]{:}{\\rm \\Gamma L}(2,4)$.\n\\end{enumerate}\nMoreover, $\\Gamma$ is $G$-star-transitive and $G$-st(edge)-transitive if and only if\n$\\Gamma$ is $(G,3)$-transitive and $G_v=S_r\\times S_{r-1}$.\n\\end{lemma}\n\\begin{proof}\nSuppose that $\\Gamma$ and $G$ satisfy one of $(1)$ - $(5)$. Then $G_v^{\\Gamma(v)}$ is 2-transitive. Except when both $r=6$ and case (3) holds, the only 2-transitive quotient group of $G_v$ on $r$ points is $S_r$ and so $G$ is locally fully symmetric. Thus by Lemma \\ref{l:suff star}, $\\Gamma$ is $G$-star-transitive. It remains to consider case (3) when $r=6$. Here $G_v$ has $S_6$ and $S_5$ as 2-transitive factor groups of degree 6. If $G_v^{\\Gamma(v)}=S_5$ acting 2-transitively on 6 points it follows that for a 2-arc $uvw$ we have $G_{uvw}=S_6\\times C_4$, which does not have a transitive action of degree 5, contradicting $G$ being 3-arc transitive. Thus $G_v^{\\Gamma(v)}=S_6$ and so $\\Gamma$ is $G$-star-transitive in this case as well.\n\n\n\n\n\nConversely, suppose that $\\Gamma$ is $G$-star-transitive.\nIf $G_v^{[1]}=1$, then $G_v\\cong G_v^{\\Gamma(v)}\\cong S_r$ and so $\\Gamma$ is $(G,2)$-arc-transitive. For $u\\in\\Gamma(v)$ we have $G_{uv}=S_{r-1}$ which acts faithfully on both $\\Gamma(u)\\backslash\\{v\\}$ and $\\Gamma(v)\\backslash\\{u\\}$. 
For $r-1\\neq 6$ this implies that the actions are equivalent, that is, the stabiliser of a vertex in one action fixes a vertex in the other. So for a 2-arc $wvu$ we have that $G_{wvu}$ fixes an element of $\\Gamma(u)\\backslash\\{v\\}$ and hence $\\Gamma$ is not $(G,3)$-arc-transitive. For $r=7$, $S_{r-1}$ has two inequivalent actions of degree 6 that are interchanged by an outer automorphism. The stabiliser of a point in one action is transitive in the other action and so $\\Gamma$ will be $(G,3)$-arc-transitive if and only if $G_{\\{u,v\\}}=\\ensuremath{\\operatorname{Aut}}(S_6)$. Moreover, $\\Gamma$ is not $(G,4)$-arc-transitive in this case as the stabiliser of a 3-arc is $C_5\\rtimes C_4$, which does not have a transitive action of degree 6.\n\nIf $G_v^{[1]}\\not=1$ and $G_{vw}^{[1]}=1$, then we have\n\\[1\\not=G_v^{[1]}\\cong G_v^{[1]}\/G_{vw}^{[1]}\\cong(G_v^{[1]})^{\\Gamma(w)}\\lhd G_{vw}^{\\Gamma(w)}\\cong S_{r-1}.\\]\nThus either $G_v^{[1]}=A_{r-1}$ or $S_{r-1}$, or $r=5$ and $G_v^{[1]}=2^2$. The last case is eliminated by \\cite[Lemma 5.3]{M}. Thus for $u\\in\\Gamma(v)\\backslash\\{w\\}$ we have that $G_{uvw}$ induces either $A_{r-1}$ or $S_{r-1}$ on the set of $r-1$ 3-arcs beginning with $uvw$. Hence $\\Gamma$ is $(G,3)$-arc-transitive. Moreover, since $G_{vw}^{[1]}=1$, it follows from \\cite{W2} that $\\Gamma$ is not $(G,4)$-arc-transitive. If $G_v^{[1]}=S_{r-1}$, since $S_{r-1}$ has no outer automorphisms for $r\\neq 7$, it follows that either $G_v=S_r\\times S_{r-1}$ or $r=7$ and $G_v=(A_7\\times S_6).2$ with elements of $G_v\\backslash (A_7\\times S_6)$ inducing an outer automorphism of both $S_6$ and $A_7$. Suppose that we have the latter case. Then $G_{vw}=(A_6\\times S_6).2$. However, in such a group $G_v^{[1]}$ is a characteristic subgroup of $G_{vw}$ as it is the only normal subgroup isomorphic to $S_6$. Thus $G_v^{[1]}$ is normalised in $G_{\\{v,w\\}}$ and hence normal in $\\langle G_v,G_{\\{v,w\\}}\\rangle=G$. 
Hence $G_v$ contains a nontrivial normal subgroup of $G$, contradicting the action on $V\\Gamma$ being faithful. Thus if $G_v^{[1]}=S_{r-1}$ then $G_v=S_r\\times S_{r-1}$.\n\n If $G_v^{[1]}=A_{r-1}$ then $G_v=S_r\\times A_{r-1}$ or $(A_r\\times A_{r-1}).2$. However, in the first case we can argue as above to show that $G_v^{[1]}\\lhd G$, again yielding a contradiction. (In this case $G_v^{[1]}$ is characteristic in $G_{vw}$ as it is the only normal subgroup isomorphic to $A_{r-1}$ not contained in an $S_{r-1}$.) Similarly in the second case, elements of $G_{vw}\\backslash (A_{r-1}\\times A_{r-1})$ must induce nontrivial outer automorphisms of both normal subgroups isomorphic to $A_{r-1}$ and so $G_v= (A_r\\times A_{r-1}).2$ with $G_v\/A_r\\cong S_{r-1}$ and $G_v\/A_{r-1}\\cong S_r$.\n\nNext, assume that $G_{vw}^{[1]}\\not=1$.\nBy Theorem~\\ref{t:arc kernel}, $G_{vw}^{\\Gamma(v)} \\cong S_{r-1}$ is $p$-local.\nThus, $r=5$, and $p=2$.\nFurther, by \\cite{W2}, either $\\Gamma$ is $(G,4)$-transitive, and\n$G_v=[4^2]{:}\\ensuremath{\\operatorname{GL}}(2,4)$ or $[4^2]{:}{\\rm \\Gamma L}(2,4)$ (that is, the maximal parabolics in $\\ensuremath{\\operatorname{PGL}}(3,4)$ or $\\operatorname{P\\Gamma L}(3,4)$ respectively),\nor $\\Gamma$ is $(G,5)$-transitive, and \n$G_v=[4^3]{:}\\ensuremath{\\operatorname{GL}}(2,4)$ or $[4^3]{:}{\\rm \\Gamma L}(2,4)$ (that is, the maximal parabolics in $\\ensuremath{\\operatorname{PSp}}(4,4)$ or $\\operatorname{P\\Gamma Sp}(4,4)$ respectively). Since neither $[4^2]{:}\\ensuremath{\\operatorname{GL}}(2,4)$ nor $[4^3]{:}\\ensuremath{\\operatorname{GL}}(2,4)$ has $S_5$ as a quotient group, neither case can occur.\n\nWe also have to establish which graphs in the list are st(edge)-transitive. 
\nIf $G_v=S_r\\times S_{r-1}$ in case $(3)$ \nthen $G_{vw}=S_{r-1}\\times S_{r-1}$ acts faithfully on $(\\Gamma(v)\\cup\\Gamma(w))\\backslash\\{v,w\\}$ and so the arc-transitivity of $\\Gamma$ together with Lemma \\ref{lem:stedge} implies that $\\Gamma$ is $G$-st(edge)-transitive. In all other graphs in cases $(1)$, $(2)$ and $(3)$, $G_v$ is too small to satisfy the necessary condition in Lemma~\\ref{lem:edgeaction}. In cases $(4)$ and $(5)$, Lemma \\ref{lem:stedge} implies that to be $G$-st(edge)-transitive we must have that $G_v^{[1]}$ has $S_4$ as a quotient group. However, a simple \\textsf{GAP} \\cite{gap} computation shows that this is not the case.\n\\end{proof}\n\n\\vskip0.1in\n\\noindent{\\bf Remark:} \nThe stabiliser $G_v=[4^2]{:}{\\rm \\Gamma L}(2,4)$ is a parabolic subgroup in $\\operatorname{P\\Gamma L}(3,4)$ while the stabiliser $G_v=[4^3]{:}{\\rm \\Gamma L}(2,4)$ is a parabolic subgroup in $\\operatorname{P\\Gamma Sp}(4,4)$. In both cases $G_v^{\\Gamma(v)}\\cong \\operatorname{P\\Gamma L}(2,4)\\cong S_5$.\n\n\\vskip0.1in\n\\noindent{\\bf Remark:} The Hoffman--Singleton graph with automorphism group $\\ensuremath{\\operatorname{PSU}}(3,5).2$ on 50 vertices is an example of a graph in case (2).\n\n\\vskip0.1in\nCombining the lemmas in this section, we obtain a characterisation of graphs which are vertex-transitive,\nstar-transitive and st(edge)-transitive, as stated in Theorem~\\ref{v-star-st(edge)-t}.\n\n\n\\section{The vertex intransitive case}\n\\label{s:vertex intran}\n\nIn this section, we study connected star-transitive and st(edge)-transitive graphs $\\Gamma$ with vertex-intransitive automorphism groups. Such graphs must be bipartite, of bivalency $\\{\\ell,r\\}$ for some $\\ell \\ne r$. 
\nThe analogue of Theorem~\\ref{t:arc kernel} was proved by van Bon~\\cite{vB1}.\n\n\\begin{theorem}\n\\label{t:vB}\nLet $\\Gamma$ be a connected graph and $G\\leqslant \\ensuremath{\\operatorname{Aut}}(\\Gamma)$ such that $G_v^{\\Gamma(v)}$ is primitive for all vertices $v$. Then for an edge $\\{v,w\\}$ either:\n\\begin{itemize}\n\\item[(i)] $G_{vw}^{[1]}$ is a $p$-group; or\n\\item[(ii)] (possibly after interchanging $v$ and $w$), $G_{vw}^{[1]}=G_v^{[2]}$, and $G_w^{[2]}=G_w^{[3]}$ is a $p$-group.\n\\end{itemize}\n\\end{theorem}\n\nAssume that $G\\le\\ensuremath{\\operatorname{Aut}}(\\Gamma)$, and $\\Gamma$ is $G$-star-transitive and $G$-st(edge)-transitive.\nLet $uvwx$ be a $3$-arc of $\\Gamma$, and suppose that $|\\Gamma(v)|=r$ and $|\\Gamma(w)|=\\ell$. We note that $G_{vw}^{[1]}$ acts on both $\\Gamma(u)$ and $\\Gamma(x)$.\n\n\\begin{lemma}\\label{3,5}\nAssume that $G_{vw}^{[1]}$ acts non-trivially on both $\\Gamma(u)$ and $\\Gamma(x)$.\nThen $\\{\\ell,r\\}=\\{3,5\\}$.\n\\end{lemma}\n\\begin{proof}\nSince $G_{vw}^{[1]}$ acts non-trivially on $\\Gamma(u)$ it follows that $G_{vw}^{[1]}\\neq G_v^{[2]}$ and so Theorem~\\ref{t:vB} implies that $G_{vw}^{[1]}$ is a $p$-group. Since \n$G_{vw}^{[1]}\\lhd G_w^{[1]}\\lhd G_{wx}$ and $G_{vw}^{[1]}$ acts non-trivially on $\\Gamma(x)$, we have\n\\[1\\not=(G_{vw}^{[1]})^{\\Gamma(x)}\\lhd (G_w^{[1]})^{\\Gamma(x)}\\lhd G_{wx}^{\\Gamma(x)}.\\]\nThus, $G_{wx}^{\\Gamma(x)}\\cong S_{r-1}$ has a subnormal $p$-subgroup, and similarly, so does $G_{uv}^{\\Gamma(u)}\\cong S_{\\ell-1}$.\nHence $r,\\ell\\le 5$.\nIf $r=4$, then $G_{vw}^{\\Gamma(v)}\\cong S_3$ and $p=3$.\nAs $\\ell\\not=r$, we have $\\ell=3$ or 5, and thus $G_{vw}^{\\Gamma(w)}=S_2$ or $S_4$,\nwhich do not have subnormal $3$-subgroups. This is a contradiction.\nThus, $r\\not=4$, and similarly, $\\ell\\not=4$. 
Hence $\\{\\ell,r\\}=\\{3,5\\}$.\n\\end{proof}\n\n\n\n\\begin{example}\n\\label{eg:hermitian}\nLet $\\Gamma$ be the point-line incidence graph of the generalised quadrangle of order $(2,4)$ arising from a nondegenerate Hermitian form on a 4-dimensional vector space over $\\ensuremath{\\operatorname{GF}}(4)$. That is, the points are the totally isotropic 1-spaces, the lines are the totally isotropic 2-spaces, and a 1-space and 2-space are incident if one is contained in the other. Let $G=\\operatorname{P\\Gamma U}(4,2)=\\ensuremath{\\operatorname{PSU}}(4,2).2$, the full automorphism group of $\\Gamma$, and let $(w,v)$ be an incident point-line pair. Then $G_v=2^4{:}S_5$, and $G_w=2.(A_4\\times A_4).2^2$\nwith $G_v\\cap G_w=2^4.S_4$.\nThen $|G_v:G_v\\cap G_w|=5$ and $|G_w:G_v\\cap G_w|=3$, and $\\Gamma$ is locally $(G,4)$-arc-transitive\nof bivalency $\\{3,5\\}$. It follows from Lemma \\ref{l:suff star} that $\\Gamma$ is star-transitive.\nWe have $G_v^{[1]}=2^4$, and $G_{vw}^{[1]}=2^3$. Thus $G_{vw}\/G_{vw}^{[1]}=S_4\\times S_2$, and so by Lemma \\ref{lem:stedge}, $\\Gamma$ is st(edge)-transitive.\n\\end{example}\n\\vskip0.1in\n\n\\begin{lemma}\\label{[2]=1}\nAssume that $G_w^{[1]}\\not=1$ and $G_w^{[2]}=1$.\nThen one of the following holds:\n\\begin{itemize}\n\\item[(i)] $G_{vw}^{[1]}=1$, $G_v=S_r\\times S_{\\ell-1}$ and $G_w=S_{\\ell}\\times S_{r-1}$;\n\n\\item[(ii)] $(A_{r-1})^{\\ell-1}\\leqslant G_{vw}^{[1]}\\leqslant (S_{r-1})^{\\ell-1}$, \n\\[\\begin{array}{rllll}\n(A_r\\times (A_{r-1})^{\\ell-1}).2.S_{\\ell-1}&\\leqslant &G_v&\\leqslant&S_r\\times (S_{r-1}\\Wr S_{\\ell-1}), \\mbox{and}\\\\\n\n(A_{r-1})^{\\ell}.2.S_\\ell& \\leqslant& G_w &\\leqslant &S_{r-1}\\Wr S_\\ell; \\textrm{ or}\n\\end{array}\\]\n\n\\item[(iii)]\n$|\\Gamma(v)|\\le 5$.\n\n\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\nSince $G_w^{[2]}=1$, we have\n\\[\\begin{array}{l}\nG_w\\cong G_w\/G_w^{[2]}\\cong G_w^{\\Gamma_1(w)\\cup\\Gamma_2(w)}\\le \nG_{vw}^{\\Gamma(v)}\\Wr G_w^{\\Gamma(w)}\\cong 
S_{r-1}\\Wr S_{\\ell}, \\text{ and }\\\\\n\nG_{vw}\\cong G_{vw}\/G_w^{[2]}\\cong G_{vw}^{\\Gamma_1(w)\\cup\\Gamma_2(w)}\\le \n(S_{r-1}\\Wr S_{\\ell-1})\\times S_{r-1}\\\\\n\\end{array}\\]\nLet $\\Gamma(w)=\\{v_1,\\ldots,v_\\ell\\}$ with $v_1=v$. For each $i\\in\\{1,\\ldots, \\ell\\}$, let $B_i$ be the subgroup of $\\ensuremath{\\operatorname{Sym}}(\\Gamma_1(w)\\cup\\Gamma_2(w))$ consisting of all permutations that fix each vertex\nin $\\Gamma(w)$, act trivially on $\\Gamma(v_j)$ for $j\\neq i$ and induce an element of $S_{r-1}$ on $\\Gamma(v_i)\\backslash\\{w\\}$. Then $B_i\\cong S_{r-1}$. Also, let $C$ be a subgroup of $\\ensuremath{\\operatorname{Sym}}(\\Gamma_1(w)\\cup\\Gamma_2(w))$ isomorphic to $S_\\ell$ and such that $C$ acts faithfully on $\\Gamma(w)$. \nIn particular, if $g\\in C$ and $v_i^g=v_j$ then $B_i^g=B_j$.\nThen we can identify $G_w$ with a subgroup of $X:=(B_1\\times B_2\\times\\cdots\\times B_\\ell)\\rtimes C$.\nThus $G_{vw}=G_w\\cap ( B_1\\times ((B_2\\times \\cdots \\times B_\\ell)\\rtimes C_v))$, where $C_v\\cong S_{\\ell-1}$ and $G_w^{[1]}=G_w\\cap (B_1\\times\\cdots\\times B_\\ell)$. \n\nLet $\\rho$ be the projection of $X$ onto $C$ and for each $i=1,2,\\ldots, \\ell$, let $\\pi_i$ be the projection of $B_1\\times\\cdots\\times B_\\ell$ onto $B_i$. Since $G$ is st(edge)-transitive, by Lemma \\ref{lem:edgeaction} we have that $G_w^{\\Gamma(w)}=S_\\ell$ and so $\\rho(G_w)=C$. Moreover, \n$G_{vw}^{\\Gamma(v)\\cup\\Gamma(w)}\\cong S_{r-1}\\times S_{\\ell-1}$ and so $\\rho(G_{vw})=C_v\\cong S_{\\ell-1}$ and $\\pi_1( \\ker(\\rho)\\cap G_{vw})=B_1\\cong S_{r-1}$. Since $\\ker\\rho\\cap G_w=G_w^{[1]}$ it follows that $\\pi_1(G_w^{[1]})\\cong S_{r-1}$. Since $G_w$ acts transitively on the $\\ell$ factors $B_1,\\ldots,B_{\\ell}$ it follows that $\\pi_i(G_w^{[1]})\\cong\\pi_j(G_w^{[1]})\\cong S_{r-1}$ for distinct $i$ and $j$. \n\nSuppose that $r\\geq 6$. 
Then $A_{r-1}$ is a nonabelian simple group and since $G_w$ acts primitively on the $\\ell$ factors $B_1,\\ldots,B_{\\ell}$ it follows from \\cite[p. 328]{scott} that either $G_{w}^{[1]}\\cong \\pi_i(G_{w}^{[1]})$ for each $i$, or $(A_{r-1})^\\ell\\cong \\ensuremath{\\operatorname{soc}}(B_1)\\times\\cdots\\times\\ensuremath{\\operatorname{soc}}(B_\\ell)\\leqslant G_w^{[1]}$. Since $G_{vw}^{[1]}\\leqslant G_w^{[1]}$ and $\\pi_1(G_{vw}^{[1]})=1$ these two cases correspond to $G_{vw}^{[1]}=1$ and $G_{vw}^{[1]}\\neq 1$ respectively.\n\n\nSuppose first that $G_{vw}^{[1]}=1$. Then $G_{vw}\\cong S_{r-1}\\times S_{\\ell-1}$. As $\\ker(\\rho)\\cap G_w\\leqslant G_{vw}$ we have $\\ker(\\rho)\\cap G_w\\cong S_{r-1}$. Moreover, as $\\rho(G_w)=S_\\ell$ we have $G_w\/(\\ker(\\rho)\\cap G_w)\\cong S_{\\ell}$ with $\\rho(G_{vw})=S_{\\ell -1}$ and $G_{vw}\\cap \\ker\\rho\\cong S_{r-1}$. Thus either $G_w\\cong S_{r-1}\\times S_\\ell$, or $G_w\\cong (S_{r-1}\\times A_{\\ell}).2$ with each element of $G_w$ not in $S_{r-1}\\times A_{\\ell}$ inducing a nontrivial automorphism of both $S_{r-1}$ and $A_\\ell$. Since $G_w$ contains $G_{vw}\\cong S_{r-1}\\times S_{\\ell-1}$, which has an element of $S_{\\ell-1}$ not in $A_\\ell$ that centralises $S_{r-1}$, the second case is not possible. Thus $G_w\\cong S_{r-1}\\times S_\\ell$. Similarly, $G_v^{[1]}=S_{\\ell -1}$ and $G_v^{\\Gamma(v)}\\cong S_r$ and arguing as in the $G_w$ case we deduce that $G_v=S_r\\times S_{\\ell-1}$. Hence we are in case (i).\n\n\n\n\nAssume now that $G_{vw}^{[1]}\\neq 1$. Then, as we have already seen, $\\ensuremath{\\operatorname{soc}}(B_1)\\times\\cdots\\times\\ensuremath{\\operatorname{soc}}(B_\\ell)\\leqslant G_w^{[1]}$. Since $\\pi_1(G_w^{[1]})=S_{r-1}$ and $\\pi_i(G_w^{[1]})\\cong \\pi_j(G_w^{[1]})$ for all distinct $i$ and $j$, it follows that $G_w^{[1]}$ contains a subgroup $N$ isomorphic to $(A_{r-1})^\\ell.2$ such that $\\pi_i(N)\\cong S_{r-1}$ for all $i$. 
Thus\n$$(A_{r-1})^{\\ell}.2.S_\\ell \\leqslant G_w \\leqslant S_{r-1}\\Wr S_\\ell.$$\nMoreover, for the arc stabiliser we have\n$$(A_{r-1})^{\\ell}.2.S_{\\ell-1} \\leqslant G_{vw}\\leqslant S_{r-1}\\times (S_{r-1}\\Wr S_{\\ell-1})$$\nwith $(A_{r-1})^{\\ell-1}\\leqslant G_{vw}^{[1]}\\leqslant (S_{r-1})^{\\ell-1}$.\n\nConsider the group $\\overline{G}_w=G_w\/(\\ensuremath{\\operatorname{soc}}(B_1)\\times\\cdots\\times\\ensuremath{\\operatorname{soc}}(B_\\ell))$. Then $2.S_{\\ell}\\leqslant \\overline{G}_w\\leqslant 2^\\ell.S_\\ell$. Hence $\\overline{G}_w=2^m.S_\\ell$ where the $2^m$ is a submodule of the permutation module for $S_\\ell$. This implies that $G_{w}=(A_{r-1})^\\ell.2^m.S_\\ell$. Hence $G_{vw}=(A_{r-1})^\\ell.2^m.S_{\\ell-1}$ and $G_v^{[1]}=(A_{r-1})^{\\ell-1}.2^{m-1}.S_{\\ell-1}$, as $G_{vw}\/G_{v}^{[1]}\\cong G_{vw}^{\\Gamma(v)}\\cong S_{r-1}$. Observe that $G_v^{[1]}$ has a unique minimal normal subgroup $N$ and $N\\cong (A_{r-1})^{\\ell-1}$. Thus $N\\lhd G_v$. Since $G_v\/G_v^{[1]}\\cong S_r$ and $G_v^{[1]}$ already induces $S_{\\ell-1}$ on the $\\ell-1$ distinct simple factors of $N$, it follows that $G_v$ contains a normal subgroup isomorphic to $A_r\\times (A_{r-1})^{\\ell-1}$. We then deduce that $G_v=(A_r\\times (A_{r-1})^{\\ell-1}).2^m.S_{\\ell-1}$ and in particular part (ii) holds.\n\\end{proof}\n\nBy Lemma \\ref{[2]=1} we may thus assume that $G_v^{[2]}\\neq 1$ and $G_w^{[2]}\\neq 1$. Moreover, by Lemma \\ref{3,5} we may assume that $G_{vw}^{[1]}$ acts trivially on $\\Gamma(u)$ for some neighbour $u$ of $v$. Since $G_{vw}$ is transitive on $\\Gamma(v)\\setminus\\{w\\}$ and normalises $G_{vw}^{[1]}$,\nwe conclude that $G_{vw}^{[1]}$ fixes all vertices in $\\Gamma_2(v)$. In particular, $G_{vw}^{[1]}=G_v^{[2]}$. \n\n\n\\begin{lemma}\\label{[2]not=1}\nAssume that $G_v^{[2]}\\neq 1\\neq G_w^{[2]}$ and $G_{vw}^{[1]}=G_v^{[2]}$ fixes all vertices in $\\Gamma_2(v)$. 
Then $r\\leq 5$ and either $G_w^{[2]}$ or $G_v^{[2]}$ is a $p$-group with $p=2$ or $3$.\n\\end{lemma}\n\\begin{proof}\nBy Theorem~\\ref{t:vB}, either $G_v^{[2]}$ is a $p$-group, or $G_w^{[2]}=G_w^{[3]}$ is a $p$-group.\n\nSuppose first that $G_v^{[2]}$ is a $p$-group. Then \n$$1\\neq G_v^{[2]}\\lhd G_w^{[1]}\\lhd G_{wx}.$$\nSuppose that $G_v^{[2]}$ acts trivially on $\\Gamma(x)$.\nThen as $G_v^{[2]}\\lhd G_{vw}$ and $G_{vw}$ acts transitively on $\\Gamma(w)$ \nit follows that $G_v^{[2]}$ acts trivially on $\\Gamma_2(w)$. \nThus, $G_v^{[2]}\\le G_w^{[2]}\\le G_{vw}^{[1]}=G_v^{[2]}$, and so $G_v^{[2]}=G_w^{[2]}$.\nSince $G$ is edge-transitive on $\\Gamma$, we conclude that $G_v^{[2]}$ fixes all vertices of $\\Gamma$, which is a contradiction.\n\n Hence $G_v^{[2]}$ acts nontrivially on $\\Gamma(x)$ and so $G_{wx}^{\\Gamma(x)}\\cong S_{r-1}$ has a nontrivial subnormal $p$-subgroup. Thus $r\\leq 5$ and $p=2$ or $3$.\n\n\nNext suppose that $G_w^{[2]}=G_w^{[3]}$ is a $p$-group.\nPick $y\\in\\Gamma(u)\\setminus\\{v\\}$.\nThen \n\\[1\\not=G_w^{[2]}=G_w^{[3]}\\lhd G_{uv}^{[1]}\\lhd G_u^{[1]}\\lhd G_{uy}.\\]\nSuppose that $G_w^{[3]}$ acts trivially on $\\Gamma(y)$.\nSince $G_{wvu}$ normalises $G_w^{[3]}$ and is transitive on $\\Gamma(u)\\backslash\\{v\\}$,\nit follows that $G_w^{[3]}$ is trivial on $\\Gamma(t)$ for all $t\\in \\Gamma(u)$,\nand hence on $\\Gamma_2(u)$.\nFurther, $G_{wv}$ normalises $G_w^{[3]}$ and is transitive on $\\Gamma(v)\\setminus\\{w\\}$,\nand it follows that $G_w^{[3]}$ acts trivially on $\\Gamma_2(z)$ for all $z\\in\\Gamma(v)$.\nThus, $G_w^{[3]}$ fixes all vertices in $\\Gamma_3(v)$, and $G_w^{[3]}\\le G_v^{[3]}$.\nSince $G$ is edge-transitive on $\\Gamma$, we conclude that $G_w^{[3]}=G_w^{[4]}$.\nHence we have \n\\[1\\not=(G_w^{[3]})^{\\Gamma(y)}\\lhd (G_{uv}^{[1]})^{\\Gamma(y)}\\lhd\n(G_u^{[1]})^{\\Gamma(y)}\\lhd G_{uy}^{\\Gamma(y)}\\cong S_{r-1}.\\]\nThus, $S_{r-1}$ has a subnormal $p$-subgroup and so $r\\le 5$, and $p=2$ or 
3.\n\\end{proof}\n\nCombining the lemmas of this section, we obtain the characterisation of vertex-intransitive,\nstar-transitive and st(edge)-transitive graphs, as stated in Theorem~\\ref{vtx-intrans}.\n\n\n\\section{Examples}\\label{s:examples}\n\nLazarovich \\cite{L} provides a list of examples of graphs $L$ which are star-transitive and st(edge)-transitive, as well as a discussion of earlier work on uniqueness of $(k,L)$-complexes. In this section we use our results above to expand on the examples and discussion in \\cite{L}, and to give new infinite families of examples in both the vertex-transitive and vertex-intransitive cases.\n\nRecall that we denote by $\\mathcal{X}(k,L)$ the collection of all simply-connected $(k,L)$-complexes and that we assume $L$ is finite and connected.\n\n\n\n\n\n\n\n\\subsection{Special cases}\n\n\\subsubsection{Cycles}\n\nThe only finite, connected, $2$-regular graph is the cycle on $n$ vertices $C_n$. For $n \\geq 3$, as observed in \\cite{L}, the graph $C_n$ is star-transitive and st(edge)-transitive. If $(k,n) \\in \\{ (3,6), (4,4), (6,3)\\}$ and the polygons are metrised as regular Euclidean $k$-gons, then the unique simply-connected $(k,C_n)$-complex is the tessellation of the Euclidean plane by regular Euclidean $k$-gons. If $k \\geq k'$ and $n > n'$, or $k > k'$ and $n \\geq n'$, where $(k',n') \\in \\{ (3,6), (4,4), (6,3)\\}$, then the unique simply-connected $(k,C_n)$-complex is combinatorially isomorphic to the tessellation of the hyperbolic plane by regular hyperbolic $k$-gons with vertex angles $\\frac{2\\pi}{n}$. \n\n\\subsubsection{Cubic graphs}\n\nLet $k \\geq 3$ be an integer and let $L$ be a finite, connected, cubic graph, such that the pair $(k,L)$ satisfies the Gromov Link Condition. 
In \\cite{Sw}, \\'Swi{\\polhk{a}}tkowski\\ proved that if $k \\geq 4$ and $L$ is $3$-arc transitive, then there exists a unique simply-connected $(k,L)$-complex, while if $k = 3$ and $L$ is $3$-arc transitive, or $k \\geq 3$ and $L$ is not $3$-arc transitive, $|\\mathcal{X}(k,L)| > 1$. (In fact, \\'Swi{\\polhk{a}}tkowski\\ proved that in these latter cases $\\mathcal{X}(k,L)$ is uncountable; compare \\cite[Theorem B]{L}.)\n\nIt follows from results of Tutte \\cite{tutte47, tutte59} that, as observed in \\cite{L}, a connected cubic graph which is $3$-arc transitive is both star-transitive and st(edge)-transitive. Conversely, if a connected cubic graph $L$ is star-transitive and st(edge)-transitive then $L$ is $3$-arc transitive, by Corollary \\ref{c:3-arc transitive} above. Together with the main results of \\cite{L}, this recovers the results of \\'Swi{\\polhk{a}}tkowski\\ from \\cite{Sw} when $k \\geq 4$ is even.\n\nThe collection of $3$-arc transitive cubic graphs is too large to classify. For examples of such graphs, see for instance \\cite{GR}.\n\n\n\n\\subsubsection{Complete bipartite graphs}\n\nIt is observed in \\cite{L} that the complete bipartite graph $K_{m,n}$ is star-transitive and st(edge)-transitive. The unique simply-connected $(4,K_{m,n})$-complex is the product of an $m$-valent and an $n$-valent tree \\cite{Wise96}. If $k > 4$, $m,n \\geq 2$ and either $k$ is even or $m = n$ the unique simply-connected $(k,K_{m,n})$-complex is isomorphic to Bourdon's building, which is a $2$-dimensional hyperbolic building (see \\cite{B1}). If $k$ is odd and $m \\ne n$, there is no $(k,L)$-complex.\n\nBy Lemma \\ref{lem:valency one} above, the only finite, connected graphs with a vertex of valency one that are star-transitive and st(edge)-transitive are the complete bipartite graphs $K_{1,n}$. 
If $n \\geq 2$ and $k \\geq 4$ is even then the unique simply-connected $(k,K_{1,n})$-complex $X$ is a ``tree'' of $k$-gons, with alternating edges of each $k$-gon contained in either a unique $k$-gon or $n$ distinct $k$-gons.\n\n\n\\subsubsection{Odd graphs}\\label{s:odd}\n\nFor $n \\geq 2$ the \\emph{Odd graph} $O_{n+1}$ is defined to have vertex set the $n$-element subsets of $\\{ 1, 2, \\ldots, 2n+1\\}$, with two vertices being adjacent if the corresponding $n$-sets are disjoint. Thus the graph is $(n+1)$-valent. The Petersen graph is the case $n = 2$. \n\nAs noted in \\cite{L}, the Odd graphs are star-transitive and st(edge)-transitive. Their vertex stabilisers are $S_{n} \\times S_{n+1}$, and their edge-stabilisers are $S_n \\Wr S_2$. The girth of the Petersen graph $O_3$ is $5$ and for all $n \\geq 3$ the girth of $O_{n+1}$ is $6$. \n\nIt was proved by Praeger \\cite[Theorem 4]{CEP} that if $\\Gamma$ is a graph of valency $r$ with $A_r\\leqslant G_v^{\\Gamma(v)}$ and $G$ primitive on $V\\Gamma$ then either $|\\Gamma_3(v)|=r(r-1)^2$, that is, girth at least 7, or $\\Gamma$ is the Odd graph on $2r-1$ points. Thus by Lemma \\ref{l:locally fully symmetric}, if $\\Gamma$ is $G$-star-transitive with $G$ primitive on vertices then either $\\Gamma$ has girth at least 7 or is an Odd graph. Vertex-transitive graphs of girth 5 and with $G_v^{\\Gamma(v)}=S_r$ were investigated by Ivanov, who showed that such graphs are either the Petersen graph, the Hoffman--Singleton graph, the double cover of the Clebsch graph, or each 2-arc in $\\Gamma$ is contained in a unique cycle of length 5 \\cite[Lemma 5.6]{I}. Combining \\cite[Lemma 5.4]{I} and Theorem \\ref{v-star-st(edge)-t} we deduce that the only vertex-transitive graph of girth 5 that is both star-transitive and st(edge)-transitive is the Petersen graph.\n\nWe note that any finite cover of an Odd graph for which all automorphisms lift will also be star-transitive and st(edge)-transitive. 
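The combinatorial data quoted above for the Odd graphs are easy to check by direct computation. The following short Python sketch (an illustrative aside of ours, not part of the argument; the helper names are our own) builds $O_{n+1}$ from the subset definition and computes the girth exactly, by deleting each edge in turn and measuring the shortest path that remains between its endpoints.

```python
from itertools import combinations
from collections import deque

def odd_graph(n):
    """Odd graph O_{n+1}: vertices are the n-subsets of {1,...,2n+1},
    two of them adjacent exactly when they are disjoint."""
    verts = [frozenset(c) for c in combinations(range(1, 2 * n + 2), n)]
    return {v: {w for w in verts if v.isdisjoint(w)} for v in verts}

def girth(adj):
    """Exact girth: for each edge {v,w}, delete it and take a shortest
    remaining v-w path; re-adding the edge closes a shortest cycle."""
    best = float("inf")
    done = set()
    for v in adj:
        for w in adj[v]:
            e = frozenset((v, w))
            if e in done:
                continue
            done.add(e)
            dist, queue = {v: 0}, deque([v])   # BFS avoiding the edge {v,w}
            while queue:
                x = queue.popleft()
                for y in adj[x]:
                    if {x, y} == {v, w} or y in dist:
                        continue
                    dist[y] = dist[x] + 1
                    queue.append(y)
            if w in dist:
                best = min(best, dist[w] + 1)
    return best

petersen, o4 = odd_graph(2), odd_graph(3)
assert all(len(nbrs) == 3 for nbrs in petersen.values())
assert girth(petersen) == 5     # the Petersen graph O_3
assert all(len(nbrs) == 4 for nbrs in o4.values())
assert girth(o4) == 6           # girth 6 for all n >= 3
```

Running this confirms the values quoted above: $O_3$ is $3$-valent of girth $5$, and $O_4$ is $4$-valent of girth $6$.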
Thus for every valency $n+1 \\geq 3$ we obtain an infinite family of $(n+1)$-regular graphs which are star-transitive and st(edge)-transitive. \n\n\n\n\n\n\n\\subsubsection{Spherical buildings}\n\nLet $\\mathfrak{L}$ be a simple Lie algebra of rank $2$ over $\\mathbb{C}$. So\n$\\mathfrak{L}$ is of type $A_2$, $B_2 = C_2$ or $G_2$. We\ndenote by $\\mathfrak{L}(q)$ the (untwisted) Chevalley group of type\n$\\mathfrak{L}$ over the field $\\ensuremath{\\operatorname{GF}}(q)$. Denote by $L$ the spherical\nbuilding associated to the group $\\mathfrak{L}(q)$. Assume that $k\n\\geq 5$. Then by results\nof Haglund~\\cite{H}, in the following cases,\nthere is a unique simply-connected $(k,L)$-complex:\n\\begin{enumerate}\n\\item $\\mathfrak{L}$ is of type $A_2$ and $q \\in \\{2,3\\}$;\n\\item $\\mathfrak{L}$ is of type $B_2$ and $q = 2$;\n\\item $\\mathfrak{L}$ is of type $G_2$ and $q = 3$.\n\\end{enumerate}\nIn each of these cases, the $(k,L)$-complex obtained may be metrised\nas a $2$-dimensional hyperbolic building.\n(For larger values of $q$, Haglund also established the uniqueness of\n$(k,L)$-complexes which satisfy some additional local conditions,\ncalled\nlocal reflexivity and constant holonomy.)\n\n\n\n\n\n\nThe local actions for the rank 2 buildings arising from finite groups of Lie type are given in the first five columns of Table \\ref{tab:localbuild}. The full automorphism group $G$ in each case can be found in \\cite[Chapter 4]{vM} and the local actions of $\\ensuremath{\\operatorname{soc}}(G)$ are given in \\cite[Theorem 8.4.1 and Table 8.1]{vM}. The local actions for $G$ can then be deduced. By determining the values of $q$ for which the local action induces the full symmetric group we can deduce the information in the locally fully symmetric column. 
Moreover, since the rank 2 buildings for $\\ensuremath{\\operatorname{PSp}}(4,q)$ are vertex-transitive if and only if $q$ is even while the rank 2 buildings for $G_2(q)$ are vertex-transitive if and only if $q$ is a power of 3, the star-transitive column follows from Lemma \\ref{l:suff star}. The case where $G=\\operatorname{P\\Gamma U}(4,2)$ was analysed in Example \\ref{eg:hermitian}. The fact that the buildings for $\\ensuremath{\\operatorname{PSL}}(3,2)$ and $\\ensuremath{\\operatorname{PSp}}(4,2)$ are st(edge)-transitive follows from Corollary \\ref{cor:cubic} and the fact that they are 4-arc transitive and 5-arc transitive respectively. The fact that the building for $G_2(3)$ is st(edge)-transitive follows from part (3) of Lemma \\ref{val=4-2} and the preceding remark.\n\n Finally, the fact that the buildings for $\\ensuremath{\\operatorname{PSL}}(3,4)$ and $\\ensuremath{\\operatorname{PSp}}(4,4)$ are not st(edge)-transitive follows from the fact that for $g\\in G_v$ to induce an odd permutation on $\\Gamma(v)$ it must induce a nontrivial field automorphism of the simple group associated with $G$. Thus it is not possible for an element of $G_{vw}$ to induce an odd permutation of $\\Gamma(v)$ and an even permutation of $\\Gamma(w)$, and so the necessary condition of Lemma \\ref{lem:stedge} does not hold.
\n\n\n\n\\begin{center}\n \\begin{table}\n \\begin{tabular}{l|llllp{1.9cm}ll}\n$G$ & $|\\Gamma(v)|$ & $G_v^{\\Gamma(v)}$ & $|\\Gamma(w)|$ &$G_w^{\\Gamma(w)}$ & locally fully symmetric & star-transitive& st(edge)-transitive\\\\ \n\\hline \n$\\operatorname{P\\Gamma L}(3,q)$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q+1$ &$\\operatorname{P\\Gamma L}(2,q)$ & $q=2,3,4$ & $q=2,3,4$ &$q=2,3$ \\\\\n$\\operatorname{P\\Gamma Sp}(4,q)$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q=2,3,4$ & $q=2,4$ & $q=2$\\\\\n$\\operatorname{P\\Gamma U}(4,q)$ & $q^2+1$ & $\\operatorname{P\\Gamma L}(2,q^2)$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q=2$& $q=2$& $q=2$\\\\\n$\\operatorname{P\\Gamma U}(5,q)$ & $q^2+1$ & $\\operatorname{P\\Gamma L}(2,q^2)$ & $q^3+1$& $\\operatorname{P\\Gamma U}(3,q)$ & never\\\\\n$\\ensuremath{\\operatorname{Aut}}(G_2(q))$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q=2,3,4$ & $q=3$ & $q=3$ \\\\\n$\\ensuremath{\\operatorname{Aut}}({}^3D_4(q))$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q^3+1$ & $\\operatorname{P\\Gamma L}(2,q^3)$ & never \\\\\n$\\ensuremath{\\operatorname{Aut}}({}^2F_4(q))$ & $q+1$ & $\\operatorname{P\\Gamma L}(2,q)$ & $q^2+1$ & $\\ensuremath{\\operatorname{Aut}}(Sz(q))$ & never \\\\\n\n\\end{tabular}\n\n\\caption{Local actions of rank 2 buildings}\n\\label{tab:localbuild}\n \\end{table}\n\n\\end{center}\n\n\n\n\\subsection{New vertex-transitive examples}\nOne way to frame the search for examples is in the context of amalgams. Let $G$ be a group with subgroup $H$ and element $g\\in G$ that does not normalise $H$ such that $g^2\\in H$ and $\\langle H,g\\rangle=G$. Following Sabidussi \\cite{Sa}, we can construct a connected graph $\\mathrm{Cos}(G,H,g)$ with vertices the right cosets of $H$ in $G$ and $Hx$ being adjacent to $Hy$ if and only if $xy^{-1}\\in HgH$.
The group $G$ acts by right multiplication as an arc-transitive group of automorphisms of $\\mathrm{Cos}(G,H,g)$ with the stabiliser in $G$ of the vertex corresponding to $H$ being $H$. Conversely, if $\\Gamma$ is a connected graph with an arc-transitive group $G$ of automorphisms then for an edge $\\{v,w\\}$ and $g\\in G_{\\{v,w\\}}\\backslash G_{vw}$ we have that $\\Gamma$ is isomorphic to $\\mathrm{Cos}(G,G_v,g)$. Moreover, note that $G=\\langle G_v,G_{\\{v,w\\}}\\rangle$. Thus by knowing the possible $G_v$ and $G_{\\{v,w\\}}$ for a vertex-transitive, star-transitive and st(edge)-transitive graph, finding examples becomes a search for completions of the amalgam $(G_v,G_{\\{v,w\\}},G_{vw})$. \n\nTaking $(G_v,G_{\\{v,w\\}},G_{vw})=(S_r\\times S_{r-1},S_{r-1}\\Wr S_2,S_{r-1}\\times S_2)$ as in case (1) of Theorem \\ref{v-star-st(edge)-t}, the Odd graphs arise when we take $G=S_{2r-1}$ as a completion while the complete bipartite graphs arise when we take $G=S_r\\Wr S_2$ as a completion. \n\nAnother example can be constructed with $G=S_{(r-1)^2}$. Let $n=(r-1)^2$ and note that $n=\\binom{r}{2}+\\binom{r-1}{2}$. Thus letting $\\Omega=\\{1,\\ldots,n\\}$ we can identify $\\Omega$ with the disjoint union of the set $\\Omega_1$ of $2$-subsets of a set $A$ of size $r$ and the set $\\Omega_2$ of 2-subsets of a set $B$ of size $r-1$. Then let $H$ be the subgroup of $G$ isomorphic to $S_r\\times S_{r-1}$ that has $\\Omega_1$ and $\\Omega_2$ as its orbits on $\\Omega$. Choose $\\overline{B}$ to be a subset of $A$ of size $r-1$ and let $\\overline{\\Omega}_2$ be the subset of $\\Omega_1$ consisting of all 2-subsets of $\\overline{B}$. Note that $H_{\\overline{\\Omega}_2}\\cong S_{r-1}\\times S_{r-1}$ and choosing $g\\in G$ to be an element of order 2 that interchanges $\\overline{\\Omega}_2$ and $\\Omega_2$, we have that $\\langle H_{\\overline{\\Omega}_2},g\\rangle \\cong S_{r-1}\\Wr S_2$. 
Let $\\Gamma=\\mathrm{Cos}(G,H,g)$ and let $v$ be the vertex corresponding to the coset $H$ and $w$ the vertex corresponding to the coset $Hg$. Then $G_v=H\\cong S_r\\times S_{r-1}$, $G_{vw}=H_{\\overline{\\Omega}_2}\\cong S_{r-1}\\times S_{r-1}$ and $G_{\\{v,w\\}}=\\langle H_{\\overline{\\Omega}_2},g\\rangle \\cong S_{r-1}\\Wr S_2$. Thus $G_v^{\\Gamma(v)}\\cong S_r$ and since $G_{\\{v,w\\}}$ acts faithfully on $\\Gamma(v)\\cup\\Gamma(w)$ it follows from Lemmas \\ref{l:suff star} and \\ref{lem:stedge} that $\\Gamma$ is \nstar-transitive and st(edge)-transitive. It remains to show that $\\Gamma$ is connected, that is, we need to show that $\\langle H,g\\rangle=G$. Let $X=\\langle H,g\\rangle$. Then $X$ is transitive on $\\Omega$ and since $H$ is primitive on each of its orbits it follows that $X$ is primitive on $\\Omega$. A transposition on $B$ induces $r-3$ transpositions on $\\Omega_2$ and so $X$ contains an element $\\sigma$ that moves only $2(r-3)$ elements of $\\Omega$. Since $2(r-3) < 2(\\sqrt{n}-1)$, \\cite[Corollary 3]{LS} implies that $X=A_n$ or $S_n$. If $r$ is even then $\\sigma$ is an odd permutation and so $X=S_n=G$. If $r$ is odd then a transposition of $A$ induces $r-2$ transpositions of $\\Omega$, and so $H$ also contains an odd permutation in this case. Thus $X=S_n=G$ for all $r$. We conclude that $\\Gamma$ is connected.\n\n\n\n\n\\subsection{New vertex-intransitive examples}\n\nWe have already seen that the complete bipartite graphs $K_{n,m}$ with $n\\neq m$ are examples in the vertex-intransitive case, as is the generalised quadrangle associated with $\\operatorname{P\\Gamma U}(4,2)$.\n\nFor all natural numbers $m > n \\geq 3$, we will construct examples of connected star-transitive and st(edge)-transitive graphs which are $(m,n)$-biregular.
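The coset-graph construction is easy to test computationally. The following is our own small sanity check (not part of the paper): with $G=S_5$, $H$ the setwise stabiliser of $\{1,2\}$ and $g=(13)(24)$, the graph $\mathrm{Cos}(G,H,g)$ should be the Odd graph $O_3$, i.e.\ the Petersen graph, as discussed above.

```python
from itertools import permutations

# Sabidussi coset graph Cos(G, H, g): vertices are the right cosets Hx,
# with Hx ~ Hy iff x*y^{-1} lies in the double coset HgH.
def compose(p, q):                      # (p*q)(i) = p(q(i)); tuples on 0..n-1
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(5)))
H = [p for p in G if {p[0], p[1]} == {0, 1}]       # setwise stabiliser of {1,2}
g = (2, 3, 0, 1, 4)                                # the involution (13)(24)
assert compose(g, g) == tuple(range(5)) and g not in H   # g^2 in H, g not in H

HgH = {compose(h1, compose(g, h2)) for h1 in H for h2 in H}

cosets, seen = [], set()                           # enumerate right cosets Hx
for x in G:
    if x not in seen:
        coset = frozenset(compose(h, x) for h in H)
        seen |= coset
        cosets.append(coset)

def adjacent(Cx, Cy):                              # representative-independent
    x, y = next(iter(Cx)), next(iter(Cy))
    return compose(x, inverse(y)) in HgH

adj = {C: [D for D in cosets if D != C and adjacent(C, D)] for C in cosets}

def girth(adj):
    best = float('inf')
    for s in adj:                                  # BFS from every vertex
        dist, parent, queue = {s: 0}, {s: None}, [s]
        for u in queue:
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif parent[u] != w:               # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[w] + 1)
    return best

assert len(cosets) == 10                            # Petersen: 10 vertices,
assert all(len(nbrs) == 3 for nbrs in adj.values()) # 3-regular,
assert girth(adj) == 5                              # and girth 5
```

The same skeleton can be pointed at other completions of the amalgam, e.g.\ $G=S_{2r-1}$ for larger $r$, to reproduce the higher Odd graphs.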
(The restriction to $n \\geq 3$ is justified by the results of Section \\ref{s:small} above on graphs of minimal valency $2$.)\n\n\n\nAs in the vertex-transitive case, the search for examples can be framed in terms of amalgams. Given a group $G$ and subgroups $L$ and $R$ such that $G=\\langle L,R\\rangle$ we can construct the bipartite graph $\\mathrm{Cos}(G,L,R)$ with vertices the right cosets of $L$ in $G$ and the right cosets of $R$ in $G$ such that $Lx$ is adjacent to $Ry$ if and only if $Lx\\cap Ry\\neq \\varnothing$. Then $G$ acts by right multiplication as an edge-transitive group of automorphisms of $\\mathrm{Cos}(G,L,R)$. Conversely, if $\\Gamma$ is a connected graph with a group $G$ of automorphisms that is edge-transitive but not vertex-transitive then $\\Gamma$ is isomorphic to $\\mathrm{Cos}(G,G_v,G_w)$ for some edge $\\{v,w\\}$. Also $G=\\langle G_v,G_w\\rangle$. Thus examples of vertex-intransitive, star-transitive and st(edge)-transitive graphs can be found by finding completions of the amalgam $(G_v,G_w,G_v\\cap G_w)$.\n\n\n\n\n\\subsubsection{Construction from Johnson graphs}\\label{s:johnson}\n\nLet $\\Gamma=\\Gamma_{m,n}$ be the bipartite graph whose vertices are the $m$-subsets and $(m-1)$-subsets of an $n$-set, with two vertices being adjacent if one is contained in the other. Then $G=S_n\\leqslant \\ensuremath{\\operatorname{Aut}}(\\Gamma_{m,n})$. When $n=2m+1$, $\\Gamma$ is called the \\emph{doubled Odd graph}. Let $v$ be an $m$-subset. Then $\\Gamma(v)$ is the set of $(m-1)$-subsets contained in $v$ and $G_v=S_m\\times S_{n-m}$ with $G_v^{\\Gamma(v)}=S_m$. Given $w\\in\\Gamma(v)$ we have $G_w=S_{m-1}\\times S_{n-m+1}$ and $\\Gamma(w)$ is the set of $m$-subsets containing $w$. Thus $G_w^{\\Gamma(w)}=S_{n-m+1}$ and $G_{vw}=S_{m-1}\\times S_{n-m}$. Hence by Lemma \\ref{l:suff star}, $\\Gamma$ is $G$-star-transitive when $m\\neq n-m+1$. 
Moreover, $G_{vw}$ acts faithfully on $\\Gamma(v)\\cup \\Gamma(w)$ so Lemma \\ref{lem:edgeaction} implies that $\\Gamma$ is also $G$-st(edge)-transitive when $m\\neq n-m+1$.\n\nThe \\emph{Johnson graph} $J(n,m)$ is the graph with vertex set the set of $m$-subsets of an $n$-set such that two $m$-subsets are adjacent if and only if their intersection has size $m-1$. Note that an $(m-1)$-subset defines a maximal clique of $J(n,m)$, namely the set of all $m$-sets containing the given $(m-1)$-set. Let $\\mathcal{C}$ be the set of all such maximal cliques and define the graph whose vertices are the vertices of $J(n,m)$ and the maximal cliques in $\\mathcal{C}$, with adjacency being the natural inclusion. Then the new graph is isomorphic to $\\Gamma_{m,n}$.\n\n\n\n\\subsubsection{Construction from Hamming graphs}\\label{s:hamming}\n\nAnother new family of examples is as follows. This is from Example 4.3 of Giudici--Li--Praeger \\cite{GLP1}. Let $H(k,n)$ be the Hamming graph whose vertex set is the set of ordered $k$-tuples (possibly with repeats) from a set $\\Omega$ of size $n$, with two vertices being adjacent if and only if they differ in exactly one coordinate. Let $\\Gamma$ be the bipartite graph with vertex set $\\Delta_1\\cup\\Delta_2$, where $\\Delta_1$ is the set of vertices of $H(k,n)$ and $\\Delta_2$ is the set of maximal cliques of $H(k,n)$. Adjacency is given by inclusion. The group $G=S_n\\Wr S_k$ is a group of automorphisms of both $H(k,n)$ and $\\Gamma$, and acts transitively on the edges of $\\Gamma$ with orbits $\\Delta_1$ and $\\Delta_2$ on vertices.\n\nLet $\\omega\\in\\Omega$ and $w=(\\omega,\\ldots,\\omega)\\in\\Delta_1$.
Then $G_w=S_{n-1}\\Wr S_k$ and the maximal cliques of $H(k,n)$ containing $w$ are $$\\{(\\alpha,\\omega,\\ldots,\\omega)\\mid \\alpha\\in \\Omega\\}, \\{(\\omega,\\alpha,\\omega,\\ldots,\\omega)\\mid \\alpha\\in \\Omega\\},\\,\\, \\ldots, \\,\\, \\{(\\omega,\\ldots,\\omega,\\alpha)\\mid \\alpha\\in\\Omega\\}.$$ Hence $G_w^{\\Gamma(w)}=S_k$. Moreover, letting $v=\\{(\\alpha,\\omega,\\ldots,\\omega)\\mid \\alpha\\in \\Omega\\}$ we have that $G_v= S_n\\times (S_{n-1}\\Wr S_{k-1})$ and $G_v^{\\Gamma(v)}=S_n$. Thus by Lemma \\ref{l:suff star}, $\\Gamma$ is star-transitive when $k\\neq n$. Now $G_{vw}=S_{n-1}\\times (S_{n-1}\\Wr S_{k-1})$ and this induces $S_{k-1}$ on $\\Gamma(w)\\backslash\\{v\\}$ and independently induces $S_{n-1}$ on $\\Gamma(v)\\backslash\\{w\\}$. Hence by Lemma \\ref{lem:edgeaction}, $\\Gamma$ is st(edge)-transitive when $k\\neq n$. This provides an example for case (3) of Theorem \\ref{vtx-intrans} with $G_v$ and $G_w$ being as large as possible.\n\nLet $\\sigma\\in S_n$ such that $\\omega^\\sigma=\\omega$ and $S_n=\\langle A_n,\\sigma\\rangle$. Then for $H=\\langle A_n^k,(\\sigma,\\ldots,\\sigma)\\rangle\\rtimes S_k\\cong (A_n^k).2.S_k$, we have \n$$H_w=\\langle (A_{n-1})^k,(\\sigma,\\ldots,\\sigma)\\rangle \\rtimes S_k\\cong (A_{n-1})^k.2.S_k$$ \n$$H_v=\\langle A_n\\times (A_{n-1})^{k-1},(\\sigma,\\ldots,\\sigma)\\rangle\\rtimes S_{k-1}\\cong (A_n\\times (A_{n-1})^{k-1}).2.S_{k-1}$$ and \n$$H_{vw}= \\langle A_{n-1}^k,(\\sigma,\\ldots,\\sigma)\\rangle\\rtimes S_{k-1}.$$\n Moreover, we still have that $H_w^{\\Gamma(w)}\\cong S_{k}$, $H_v^{\\Gamma(v)}\\cong S_n$ and $H_{vw}$ induces $S_{k-1}$ on $\\Gamma(w)\\backslash\\{v\\}$ and independently induces $S_{n-1}$ on $\\Gamma(v)\\backslash\\{w\\}$. Thus $\\Gamma$ is also $H$-star-transitive and $H$-st(edge)-transitive.
This gives an example for case (3) of Theorem \\ref{vtx-intrans} with the vertex stabilisers being as small as possible.\n\n\\subsubsection{}\n\nThis example is a specialisation of \\cite[Example 4.6]{GLP2}. Let $n$ be a positive integer coprime to 3, let $V=\\ensuremath{\\operatorname{GF}}(3)^n$ and $W$ be the subspace of $V$ of all vectors $(v_1,\\ldots,v_n)$ such that $\\sum v_i=0$. Let $G_0=S_n\\times Z$, where $Z$ is the group of scalar transformations and $V$ is the natural permutation module for $S_n$. Then $G_0$ fixes the subspace $W$. Let $v=(1,\\ldots,1,-n+1)\\in W$ and let $\\Delta$ be the set of all translates of images of $\\langle v\\rangle$ under $G_0$, that is, $\\Delta=\\{\\langle v\\rangle^g +w\\mid w\\in W, g\\in G_0\\}$. Note that $|\\langle v\\rangle^{G_0}|=n$. Let $\\Gamma$ be the bipartite graph with vertex set $W\\cup \\Delta$ and adjacency given by inclusion. Then $\\Gamma$ is biregular with bivalency $\\{n,3\\}$ and $G=W\\rtimes G_0\\leqslant \\ensuremath{\\operatorname{Aut}}(\\Gamma)$.\n\nNote that the stabiliser in $G$ of the zero vector is $G_0$ and $\\Gamma(0)=\\langle v\\rangle^{G_0}$. Thus $G_0^{\\Gamma(0)}=S_n$. Moreover, $\\Gamma(\\langle v\\rangle)=\\{\\lambda v\\mid \\lambda\\in\\ensuremath{\\operatorname{GF}}(3)\\}$ and $G_{\\langle v\\rangle}=(\\langle v\\rangle\\rtimes Z)\\times S_{n-1}$. Thus $G_{\\langle v\\rangle}^{\\Gamma(\\langle v\\rangle)}=S_3$. Hence by Lemma \\ref{l:suff star}, $\\Gamma$ is $G$-star-transitive. Moreover, $G_{0,\\langle v\\rangle}=S_{n-1}\\times Z$ acting faithfully on $\\Gamma(0)\\cup \\Gamma(\\langle v\\rangle)$. Since $Z\\cong S_2$, Lemma \\ref{lem:edgeaction} implies that $\\Gamma$ is st(edge)-transitive. 
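The claimed bivalency $\{n,3\}$ of this last construction can be verified by brute force for a small admissible $n$. The following is our own sketch, with the hypothetical choice $n=4$ (coprime to 3, as required); it builds $W$, the $n$ lines $\langle v\rangle^{G_0}$ and their translates $\Delta$, and counts incidences.

```python
from itertools import product, permutations

n = 4                                   # must be coprime to 3; n = 4 chosen here
W = [w for w in product(range(3), repeat=n) if sum(w) % 3 == 0]

def add(u, w):  return tuple((a + b) % 3 for a, b in zip(u, w))
def smul(c, u): return tuple((c * a) % 3 for a in u)

v = tuple([1] * (n - 1) + [(-n + 1) % 3])          # v = (1,...,1,-n+1) in W
# the images of the line <v> under coordinate permutations (scalars fix lines)
lines0 = {frozenset(smul(c, tuple(v[i] for i in p)) for c in range(3))
          for p in permutations(range(n))}
# Delta: all translates of these lines by elements of W
Delta = {frozenset(add(u, w) for u in L) for L in lines0 for w in W}

assert len(W) == 3 ** (n - 1)
assert len(lines0) == n                  # |<v>^{G_0}| = n, as stated
assert all(len(L) == 3 for L in Delta)   # every line-vertex has valency 3
assert all(sum(1 for L in Delta if w in L) == n for w in W)  # points: valency n
```

So for $n=4$ the graph is indeed $(4,3)$-biregular, in line with the bivalency $\{n,3\}$ asserted above.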
We note that $\\Gamma$ belongs to case (i) of Lemma \\ref{[2]=1}.\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\n\nWhilst there is little doubt that Quantum Chromodynamics [QCD] is the theory \nof the strong interaction, despite four decades of intense effort the \ngenuine solution of the confinement puzzle and the hadron spectrum remains \nelusive. That is not to say that no progress has been made -- our \nunderstanding of QCD is being steadily augmented in many ways: for example, \nwith lattice Monte-Carlo techniques \\cite\n{Greensite:2003bk,Chandrasekharan:2004cn} and effective theories (\\cite\n{Bernard:2006gx} and references therein). One way to understand the problem \nof confinement and the hadron spectrum from \\emph{ab initio} principles is \nto study the Dyson--Schwinger equations. These equations are the central equations to \nthe Lagrange formulation of a field theory. They are in the continuum and \nembody all symmetries of the system at hand.\n\nDyson--Schwinger studies of QCD in Landau gauge have enjoyed a renaissance in the last \nten years. A consistent picture of how the important degrees of freedom in \nthe infrared (i.e., those responsible for confinement and the hadron \nspectrum) stem from the ghost sector of the theory has emerged \n\\cite{vonSmekal:1997isa,vonSmekal:1997is,Watson:2001yv} and this has led to \nincreasingly sophisticated calculations of QCD properties, culminating \nrecently in hadron observables \\cite{Fischer:2005en} and possible \nexplanations of confinement (see the recent review \\cite{Fischer:2006ub} for \na discussion of this topic). 
Landau gauge, in addition to having the \nappealing property of covariance, has a distinct advantage when searching \nfor practical approximation schemes that allow one to extract information \nabout the infrared behaviour of QCD Green's functions, namely, that the \nghost-gluon vertex remains UV finite to all orders in perturbation theory \n\\cite{Taylor:1971ff}. For this reason, it has been possible to extract \nunambiguous information about the system \\cite{Watson:2001yv}.\n\nDespite the calculational advantages enjoyed by Landau gauge, it is perhaps \nnot the best choice of gauge to study the infrared physics of QCD. In this \nrespect, Coulomb gauge is perhaps more advantageous. \nThere exists a natural picture (though not a proof) of \nconfinement in Coulomb gauge \\cite{Zwanziger:1998ez} and phenomenological \napplications guide the way to understanding the spectrum of hadrons, for \nexample in \\cite{Ligterink:2003hd,Szczepaniak:2006nx} (as indeed they have \ndone in Landau gauge \\cite{Fischer:2006ub,Maris:2003vk}). This is not to \nsay that one choice of gauge is better than another -- it is crucial to our \nunderstanding of the problem that more than one gauge is considered: firstly \nbecause the physical observables are gauge invariant and it is a test of our \napproximations that the results respect this and secondly because whilst \nconfinement is a gauge invariant reality, its mechanism may be manifested \ndifferently in different gauges such that we will learn far more by studying \nthe different gauges.\n\nRecently, progress has been made in studying Yang-Mills theory in Coulomb \ngauge within the Hamiltonian approach \n\\cite{Szczepaniak:2001rg,\nSzczepaniak:2003ve,Feuchter:2004mk,Reinhardt:2004mm}. \nHere, the advantage is that Gau\\ss' law can be explicitly resolved (such \nthat, in principle, gauge invariance is fully accounted for) and this \nresults in an explicit expression for the static potential between color-\ncharges. 
In \n\\cite{Szczepaniak:2001rg,\nSzczepaniak:2003ve,Feuchter:2004mk,Reinhardt:2004mm}, \nthe Yang-Mills Schr\\\"{o}dinger equation was solved variationally for the vacuum \nstate using Gaussian-type ans\\\"{a}tze for the wave-functional. Minimizing \nthe energy density results in a coupled set of Dyson--Schwinger equations which have been \nsolved analytically in the infrared \\cite{Schleifenbaum:2006bq} and \nnumerically in the entire momentum regime. If the geometric structure of \nthe space of gauge orbits reflected by the non-trivial Faddeev-Popov \ndeterminant is properly included \\cite{Feuchter:2004mk}, one finds an \ninfrared divergent gluon energy and a linearly rising static quark potential \n-- both signals of confinement. Furthermore, these confinement properties \nhave been shown not to depend on the specific ansatz for the vacuum \nwave-functional but to result from the geometric structure of the space of \ngauge orbits \\cite{Reinhardt:2004mm}. However, in spite of this success, \none should bear in mind that an ansatz for the wave-functional is always \nrequired and so the approach does not {\\it a priori} provide a systematic \nexpansion or truncation scheme as, for example, the loop expansion scheme \nused in the common Dyson--Schwinger approach. A study of the Dyson--Schwinger equations in Coulomb \ngauge will hopefully shed some light on the problem.
There is not yet a complete proof that the local \nformulation of Coulomb gauge Yang-Mills theory within the first order \nformalism is renormalisable, but significant progress has been made \n\\cite{Zwanziger:1998ez,Baulieu:1998kx}. There is one undesirable feature to \nthe first order formalism and this is that the number of fields \nproliferates. As will be seen in this paper, this does have serious \nimplications for the Dyson--Schwinger equations.\n\nThe purpose of the present work is to derive the Dyson--Schwinger equations for Coulomb \ngauge Yang-Mills theory within the first order formalism. These equations \nwill form the basis for an extended program studying QCD in Coulomb gauge. \nThe paper is organised as follows. We begin in Section~2 by introducing \nYang-Mills theory in Coulomb gauge and the first order formalism. In \nparticular, we consider the BRS invariance of the system. Having introduced \nthe first order formalism, we then motivate the reasons for considering it \n(the cancellation of the energy divergent sector and the reduction to \nphysical degrees of freedom) in Section~3. Section~4 is then concerned with \nthe derivation of the equations of motion and the equations that stem from \nthe BRS invariance. There exist certain relationships that give rise to \nexact statements about the Green's functions that enter the system and these \nare detailed in Section~5. The Feynman rules and the general decomposition \nof the two-point Green's functions are derived and discussed in Sections~6~\n\\&~7. In Section~8, the Dyson--Schwinger equations are derived in some detail and are \ndiscussed. Finally, we summarize and give an outlook of future work in \nSection~9.\n\n\n\\section{First Order Formalism and BRS Invariance}\n\\setcounter{equation}{0}\nThroughout this work, we work in Minkowski space with the following \nconventions. The metric is $g_{\\mu\\nu}=\\mathrm{diag}(1,-\\vec{1})$.
Greek \nletters ($\\mu$, $\\nu$, $\\ldots$) denote Lorentz indices, roman subscripts \n($i$, $j$, $\\ldots$) denote spatial indices and superscripts ($a$, $b$, \n$\\ldots$) denote color indices. We will sometimes also write configuration \nspace coordinates ($x$, $y$, $\\ldots$) as subscripts where no confusion \narises.\n\nThe Yang-Mills action is defined as\n\\begin{equation}\n{\\cal S}_{YM}=\\int\\dx{x}\\left[-\\frac{1}{4}F_{\\mu\\nu}^aF^{a\\mu\\nu}\\right]\n\\end{equation}\nwhere the (antisymmetric) field strength tensor $F$ is given in terms of the \ngauge field $A_{\\mu}^a$:\n\\begin{equation}\nF_{\\mu\\nu}^a=\\partial_{\\mu}A_{\\nu}^a-\\partial_{\\nu}A_{\\mu}^a\n+gf^{abc}A_{\\mu}^bA_{\\nu}^c.\n\\end{equation}\nIn the above, the $f^{abc}$ are the structure constants of the $SU(N_c)$ \ngroup whose generators obey $\\left[T^a,T^b\\right]=\\imath f^{abc}T^c$. The \nYang-Mills action is invariant under a local $SU(N_c)$ gauge transform \ncharacterised by the parameter $\\th_x^a$:\n\\begin{equation}\nU_x=\\exp{\\left\\{-\\imath\\th_x^aT^a\\right\\}}.\n\\end{equation}\nThe field strength tensor can be expressed in terms of the chromo-electric \nand -magnetic fields ($\\sigma=A^0$)\n\\begin{equation}\n\\vec{E}^a=-\\partial^0\\vec{A}^a-\\vec{\\nabla}\\sigma^a+gf^{abc}\\vec{A}^b\\sigma^c,\\;\\;\\;\\;\nB_i^a=\\epsilon_{ijk}\\left[\\nabla_jA_k^a-\\frac{1}{2} gf^{abc}A_j^bA_k^c\\right]\n\\end{equation}\nsuch that ${\\cal S}_{YM}=\\int(E^2-B^2)\/2$. 
The electric and magnetic terms in \nthe action do not mix under the gauge transform, which for the gauge fields \nis written\n\\begin{equation}\nA_\\mu\\rightarrow A'_\\mu=U_xA_\\mu U_x^\\dag\n-\\frac{\\imath}{g}(\\partial_\\mu U_x)U_x^\\dag.\n\\end{equation}\nGiven an infinitesimal transform $U_x=1-\\imath\\th_x^aT^a$, the variation of \nthe gauge field is\n\\begin{equation}\n\\delta A_{\\mu}^a=-\\frac{1}{g}\\hat{D}_{\\mu}^{ac}\\th^c\n\\end{equation}\nwhere the covariant derivative in the adjoint representation is given by\n\\begin{equation}\n\\hat{D}_{\\mu}^{ac}=\\delta^{ac}\\partial_{\\mu}+gf^{abc}A_{\\mu}^b.\n\\end{equation}\n\nLet us consider the functional integral\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\exp{\\left\\{\\imath{\\cal S}_{YM}\\right\\}}\n\\end{equation}\nwhere $\\Phi$ denotes the collection of all fields. Since the action is \ninvariant under gauge transformations, $Z$ is divergent by virtue of the \nzero mode. To overcome this problem we use the Faddeev-Popov technique and \nintroduce a gauge-fixing term along with an associated ghost term \n\\cite{IZ}. Using a Lagrange multiplier field to implement the gauge-fixing, \nin Coulomb gauge ($\\s{\\div}{\\vec{A}}=0$) we can then write\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\exp{\\left\\{\\imath{\\cal S}_{YM}+\\imath{\\cal S}_{fp}\\right\\}},\\;\\;\\;\\;\n{\\cal S}_{fp}=\\int d^4x\\left[-\\lambda^a\\s{\\vec{\\nabla}}{\\vec{A}^a}\n-\\ov{c}^a\\s{\\vec{\\nabla}}{\\vec{D}^{ab}}c^b\\right].\n\\end{equation}\nThe new term in the action is invariant under the standard BRS transform \nwhereby the infinitesimal parameter $\\th^a$ is factorised into two \nGrassmann-valued components $\\th^a=c^a\\delta\\lambda$ where $\\delta\\lambda$ is the \ninfinitesimal variation (not to be confused with the colored Lagrange \nmultiplier field $\\lambda^a$).
The BRS transform of the new fields reads\n\\begin{eqnarray}\n\\delta\\ov{c}^a&=&\\frac{1}{g}\\lambda^a\\delta\\lambda\\nonumber\\\\\n\\delta c^a&=&-\\frac{1}{2} f^{abc}c^bc^c\\delta\\lambda\\nonumber\\\\\n\\delta\\lambda^a&=&0.\n\\end{eqnarray}\nFor reasons that will become clear in the next section (expounded in \n\\cite{Zwanziger:1998ez}) we convert to the first order (or phase space) \nformalism by splitting the Yang-Mills action into chromo-electric and \n-magnetic terms and introducing an auxiliary field ($\\vec{\\pi}$) via the \nfollowing identity\n\\begin{equation}\n\\exp{\\left\\{\\imath\\int d^4x\\frac{1}{2}\\s{\\vec{E}^a}{\\vec{E}^a}\\right\\}}\n=\\int{\\cal D}\\vec{\\pi}\\exp{\\left\\{\\imath\\int d^4x\\left[-\\frac{1}{2}\n\\s{\\vec{\\pi}^a}{\\vec{\\pi}^a}-\\s{\\vec{\\pi}^a}{\\vec{E}^a}\\right]\\right\\}}.\n\\end{equation}\nClassically, the $\\vec{\\pi}$-field would be the momentum conjugate to \n$\\vec{A}$. In order to maintain BRS-invariance, we require that\n\\begin{equation}\n\\int d^4x\\left[\\s{\\delta\\vec{\\pi}^a}{\\left(\\vec{\\pi}^a+\\vec{E}^a\\right)}\n+\\s{\\vec{\\pi}^a}{\\delta\\vec{E}^a}\\right]=0.\n\\label{eq:inv0}\n\\end{equation}\nGiven that the variation of $\\vec{E}$ under the infinitesimal gauge \ntransformation is $\\delta\\vec{E}^a=f^{abc}\\vec{E}^c\\th^b$, the general \nsolution to \\eq{eq:inv0} is\n\\begin{equation}\n\\delta\\vec{\\pi}^a=f^{abc}\\th^b\\left[(1-\\alpha)\\vec{\\pi}^c-\\alpha\\vec{E}^c\\right]\n\\end{equation}\nwhere $\\alpha$ is some non-colored constant which, in general, could also be a \nfunction of the position $x$.
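For completeness, one can check directly that this $\\delta\\vec{\\pi}^a$ satisfies \\eq{eq:inv0} (a short verification not spelled out above): by the antisymmetry of $f^{abc}$, the terms quadratic in $\\vec{\\pi}$ or in $\\vec{E}$ vanish, and relabelling the color indices in the remaining terms gives\n\\begin{equation}\n\\s{\\delta\\vec{\\pi}^a}{\\left(\\vec{\\pi}^a+\\vec{E}^a\\right)}\n=f^{abc}\\th^b\\left[(1-\\alpha)+\\alpha\\right]\\s{\\vec{\\pi}^c}{\\vec{E}^a}\n=f^{abc}\\th^b\\s{\\vec{\\pi}^c}{\\vec{E}^a}\n=-\\s{\\vec{\\pi}^a}{\\delta\\vec{E}^a},\n\\end{equation}\nso that the integrand of \\eq{eq:inv0} vanishes for arbitrary $\\alpha$.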
The $\\vec{\\pi}$-field is split into transverse \nand longitudinal components using the identity\n\\begin{eqnarray}\n\\mathrm{const}&=&\\int{\\cal D}\\phi\\delta\\left(\\s{\\vec{\\nabla}}{\\vec{\\pi}}\n+\\nabla^2\\phi\\right)\\nonumber\\\\\n&=&\\int{\\cal D}\\left\\{\\phi,\\tau\\right\\}\\exp{\\left\\{-\\imath\\int \nd^4x\\tau^a\\left(\\s{\\vec{\\nabla}}{\\vec{\\pi}^a}+\\nabla^2\\phi^a\\right)\\right\\}}.\n\\end{eqnarray}\nThis constant is gauge invariant and this means that the new fields $\\phi$ \nand $\\tau$ must transform as\n\\begin{equation}\n\\delta\\phi^a=\\s{\\frac{\\vec{\\nabla}}{\\left(-\\nabla^2\\right)}}{\\delta\\vec{\\pi}^a},\n\\;\\;\\delta\\tau^a=0.\n\\end{equation}\nIf we make the change of variables \n$\\vec{\\pi}\\rightarrow\\vec{\\pi}-\\vec{\\nabla}\\phi$ then collecting together \nall the parts of $Z$ that contain $\\vec{\\pi}$, we can write\n\\begin{equation}\nZ_{\\pi}=\\int{\\cal D}\\left\\{\\vec{\\pi},\\phi,\\tau\\right\\}\\exp{\\left\\{\\imath\\int \nd^4x\\left[-\\tau^a\\s{\\vec{\\nabla}}{\\vec{\\pi}^a}\n-\\frac{1}{2}\\s{(\\vec{\\pi}^a-\\div\\phi^a)}{(\\vec{\\pi}^a-\\div\\phi^a)}\n-\\s{\\left(\\vec{\\pi}^a-\\div\\phi^a\\right)}{\\vec{E}^a}\\right]\\right\\}}\n\\end{equation}\nwhich is now invariant under\n\\begin{eqnarray}\n\\delta\\vec{E}^a&=&f^{abc}\\vec{E}^c\\th^b,\\nonumber\\\\\n\\delta\\vec{\\pi}^a&=&f^{abc}\\th^b\\left[(1-\\alpha)\\left(\\vec{\\pi}^c\n-\\vec{\\nabla}\\phi^c\\right)-\\alpha\\vec{E}^c\\right]+\\vec{\\nabla}\\delta\\phi^a,\n\\nonumber\\\\\n\\delta\\phi^a&=&f^{abc}\\left\\{\\s{\\frac{\\vec{\\nabla}}{\\left(-\\nabla^2\\right)}}\n{\\left[(1-\\alpha)\\left(\\vec{\\pi}^c-\\vec{\\nabla}\\phi^c\\right)\n-\\alpha\\vec{E}^c\\right]}\\th^b\\right\\},\\nonumber\\\\\n\\delta\\tau^a&=&0.\n\\end{eqnarray}\nWe notice that the parts of the transform that are proportional to $\\alpha$ are \nindependent of the rest of the BRS transform and can thus be regarded as a \nseparate invariance. 
In particular, since it is independent of the \nFaddeev--Popov components, we may regard it quite generally as a local \ntransform parameterised by $\\th^a$. This new invariance stems from the \narbitrariness in introducing the $\\vec{\\pi}$-field. If we expand the \nchromo-electric field into its component form then, in summary, we can write \nour full functional integral as\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\exp{\\left\\{\\imath{\\cal S}_B+\\imath{\\cal S}_{fp}+\\imath{\\cal S}_{\\pi}\\right\\}}\n\\label{eq:func}\n\\end{equation}\nwith\n\\begin{eqnarray}\n{\\cal S}_B&=&\\int d^4x\\left[-\\frac{1}{2}\\s{\\vec{B}^a}{\\vec{B}^a}\\right],\\nonumber\\\\\n{\\cal S}_{fp}&=&\\int d^4x\\left[-\\lambda^a\\s{\\vec{\\nabla}}{\\vec{A}^a}\n-\\ov{c}^a\\s{\\vec{\\nabla}}{\\vec{D}^{ab}}c^b\\right],\\nonumber\\\\\n{\\cal S}_{\\pi}&=&\\int d^4x\\left[-\\tau^a\\s{\\vec{\\nabla}}{\\vec{\\pi}^a}\n-\\frac{1}{2}\\s{(\\vec{\\pi}^a-\\div\\phi^a)}{(\\vec{\\pi}^a-\\div\\phi^a)}\n+\\s{(\\vec{\\pi}^a-\\div\\phi^a)}{\\left(\\partial^0\\vec{A}^a+\\vec{D}^{ab}\\sigma^b\n\\right)}\\right]\n\\label{eq:act}\n\\end{eqnarray}\nand which is invariant under \\emph{two} sets of transforms: the BRS\n\\begin{eqnarray}\n\\delta\\vec{A}^a&=&\\frac{1}{g}\\vec{D}^{ac}c^c\\delta\\lambda,\\nonumber\\\\\n\\delta\\sigma^a&=&-\\frac{1}{g}D^{0ac}c^c\\delta\\lambda,\\nonumber\\\\\n\\delta\\ov{c}^a&=&\\frac{1}{g}\\lambda^a\\delta\\lambda,\\nonumber\\\\\n\\delta c^a&=&-\\frac{1}{2} f^{abc}c^bc^c\\delta\\lambda,\\nonumber\\\\\n\\delta\\vec{\\pi}^a&=&f^{abc}c^b\\delta\\lambda\\left(\\vec{\\pi}^c\n-\\vec{\\nabla}\\phi^c\\right)+\\vec{\\nabla}\\delta\\phi^a,\\nonumber\\\\\n\\delta\\phi^a&=&f^{abc}\\left\\{\\s{\\frac{\\vec{\\nabla}}{\\left(-\\nabla^2\\right)}}\n{\\left(\\vec{\\pi}^c-\\vec{\\nabla}\\phi^c\\right)}c^b\\delta\\lambda\\right\\},\\nonumber\\\\\n\\delta\\lambda^a&=&0,\\nonumber\\\\\n\\delta\\tau^a&=&0,\n\\label{eq:brstrans}\n\\end{eqnarray}\nand the new transform, which we denote the 
$\\alpha$-transform,\n\\begin{eqnarray}\n\\delta\\vec{\\pi}^a&=&f^{abc}\\th^b\\left(\\vec{\\pi}^c-\\vec{\\nabla}\\phi^c\n-\\partial^0\\vec{A}^c-\\vec{D}^{cd}\\sigma^d\\right)+\\vec{\\nabla}\\delta\\phi^a,\\nonumber\\\\\n\\delta\\phi^a&=&f^{abc}\\left\\{\\s{\\frac{\\vec{\\nabla}}{\\left(-\\nabla^2\\right)}}\n{\\left(\\vec{\\pi}^c-\\vec{\\nabla}\\phi^c-\\partial^0\\vec{A}^c-\\vec{D}^{cd}\\sigma^d\n\\right)}\\th^b\\right\\}\n\\label{eq:altrans}\n\\end{eqnarray}\n(all other fields being unchanged). It is useful for later to denote the \ncombination of fields and differential operators occurring in \\eq{eq:altrans} \nas\n\\begin{equation}\n\\vec{X}^c=\\vec{\\pi}^c-\\vec{\\nabla}\\phi^c-\\partial^0\\vec{A}^c-\\vec{D}^{cd}\\sigma^d.\n\\label{eq:X}\n\\end{equation}\n\n\\section{Formal Reduction to ``Physical\" Degrees of Freedom}\n\\setcounter{equation}{0}\nThere are two factors that motivate our use of the first order formalism. \nThe first lies in the ability, albeit formally, to reduce the functional \nintegral previously considered (and hence the generating functional) to \n``physical\" degrees of freedom \\cite{Zwanziger:1998ez}. These are the \ntransverse gluon and transverse $\\vec{\\pi}$ fields which in classical terms \nwould be the configuration variables and their momentum conjugates. We keep \nthe term ``physical\" in quotation marks because it is realised that in \nYang-Mills theory, the true physical objects would be the color singlet \nglueballs, their observables being the mass spectrum and the decay widths. \nThe second factor concerns the well-known energy divergence problem of \nCoulomb gauge QCD \\cite{Heinrich:1999ak,Andrasi:2005xu,Doust:1987yd}. In \nCoulomb gauge, the Faddeev-Popov operator involves only spatial derivatives \nand the spatial components of the gauge fields, but these fields are \nthemselves dependent on the spacetime position.
This leads to the ghost \npropagator and ghost-gluon vertex being independent of the energy whereas \nloops involving pure ghost components are integrated over both 3-momentum \n{\\it and} energy which gives an ill-defined integration. In the usual, \nsecond order, formulation of the theory these energy divergences do in \nprinciple cancel order by order in perturbation theory (tested up to \ntwo loops \\cite{Heinrich:1999ak}) but this cancellation is difficult to \nisolate. Within the first order formalism the cancellation is made manifest \nsuch that the problem of ill-defined integrals can be circumvented.\n\nGiven the functional integral, \\eq{eq:func}, and the action, \\eq{eq:act}, we \nrewrite the Lagrange multiplier terms as $\\delta$-function constraints and the \nghost terms as the original Faddeev-Popov determinant. Since the \n$\\delta$-function constraints are now exact we can automatically eliminate any \n$\\s{\\div}{\\vec{A}}$ and $\\s{\\div}{\\vec{\\pi}}$ terms in the action. This is \nclearly at the expense of a local formulation and the BRS invariance of the \ntheory is no longer manifest. The functional integral is now\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\mathrm{Det}\\left[-\\s{\\vec{\\nabla}}{\\vec{D}}\\delta^4(x-y)\\right]\n\\delta\\left(\\s{\\div}{\\vec{A}}\\right)\\delta\\left(\\s{\\div}{\\vec{\\pi}}\\right)\n\\exp{\\left\\{\\imath{\\cal S}\\right\\}}\n\\end{equation}\nwith\n\\begin{equation}\n{\\cal S}=\\int d^4x\\left[-\\frac{1}{2}\\s{\\vec{B}^a}{\\vec{B}^a}\n-\\frac{1}{2}\\s{\\vec{\\pi}^a}{\\vec{\\pi}^a}+\\frac{1}{2}\\phi^a\\nabla^2\\phi^a\n+\\s{\\vec{\\pi}^a}{\\partial^0\\vec{A}^a}+\\sigma^a\\left(\\s{\\div}{\\vec{D}^{ab}}\\phi^b\n+g\\hat{\\rho}^a\\right)\\right]\n\\end{equation}\nwhere we have defined an effective charge \n$\\hat{\\rho}^a=f^{ade}\\s{\\vec{A}^d}{\\vec{\\pi}^e}$. 
The integral over $\\sigma$ \ncan also be written as a $\\delta$-function constraint and is the implementation \nof the chromo-dynamical equivalent of Gau\\ss' law giving\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\mathrm{Det}\\left[-\\s{\\vec{\\nabla}}{\\vec{D}}\\delta^4(x-y)\\right]\n\\delta\\left(\\s{\\div}{\\vec{A}}\\right)\\delta\\left(\\s{\\div}{\\vec{\\pi}}\\right)\n\\delta\\left(-\\s{\\div}{\\vec{D}^{ab}}\\phi^b-g\\hat{\\rho}^a\\right)\n\\exp{\\left\\{\\imath{\\cal S}\\right\\}}\n\\end{equation}\nwith\n\\begin{equation}\n{\\cal S}=\\int d^4x\\left[-\\frac{1}{2}\\s{\\vec{B}^a}{\\vec{B}^a}\n-\\frac{1}{2}\\s{\\vec{\\pi}^a}{\\vec{\\pi}^a}+\\frac{1}{2}\\phi^a\\nabla^2\\phi^a\n+\\s{\\vec{\\pi}^a}{\\partial^0\\vec{A}^a}\\right].\n\\end{equation}\nLet us define the inverse Faddeev-Popov operator $M$:\n\\begin{equation}\n\\left[-\\s{\\div}{\\vec{D}^{ab}}\\right]M^{bc}=\\delta^{ac}.\n\\end{equation}\nWith this definition we can factorise the Gau{\\ss} law $\\delta$-function \nconstraint as\n\\begin{equation}\n\\delta\\left(-\\s{\\div}{\\vec{D}^{ab}}\\phi^b-g\\hat{\\rho}^a\\right)\n=\\mathrm{Det}\\left[-\\s{\\vec{\\nabla}}{\\vec{D}}\\delta^4(x-y)\\right]^{-1}\n\\delta\\left(\\phi^a-M^{ab}g\\hat{\\rho}^b\\right).\n\\end{equation}\nCrucially, the inverse functional determinant cancels the original \nFaddeev-Popov determinant, leaving us with\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\delta\\left(\\s{\\div}{\\vec{A}}\\right)\n\\delta\\left(\\s{\\div}{\\vec{\\pi}}\\right)\\delta\\left(\\phi^a-M^{ab}g\\hat{\\rho}^b\\right)\n\\exp{\\left\\{\\imath{\\cal S}\\right\\}}.\n\\end{equation}\nWe now use the $\\delta$-function constraint to eliminate the $\\phi$-field. 
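The factorisation of the Gau\ss-law $\delta$-function above is the functional analogue of the elementary identity $\delta(ax-b)=|a|^{-1}\delta(x-b/a)$, with the determinant playing the role of $|a|$ (the absolute value is immaterial inside the Gribov region, where the Faddeev-Popov determinant is positive). A one-dimensional sanity check of that identity, with the $\delta$-function smeared to a narrow Gaussian (a toy illustration only, not part of the derivation):

```python
import numpy as np

# Finite-dimensional analogue of the Gauss-law delta-function factorisation:
#   delta(a*x - b) = |a|^{-1} * delta(x - b/a).
# The delta-function is approximated by a narrow normalised Gaussian and
# integrated against a smooth test function f on a fine grid.
a, b, eps = 2.5, 0.8, 1e-4
x = np.linspace(0.0, 1.0, 1_000_001)     # grid covering the root x0 = b/a = 0.32
dx = x[1] - x[0]

delta = lambda u: np.exp(-u**2 / (2.0 * eps**2)) / np.sqrt(2.0 * np.pi * eps**2)
f = lambda u: np.exp(-u**2) * (1.0 + u)

lhs = np.sum(f(x) * delta(a * x - b)) * dx   # int dx f(x) delta(a x - b)
rhs = f(b / a) / abs(a)                      # |a|^{-1} f(b/a)
assert abs(lhs - rhs) < 1e-6
```

The functional identity in the text is this computation repeated at every point of space-time, with the matrix-valued operator $-\s{\div}{\vec{D}}$ in place of $a$.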
\nRecognising the Hermitian nature of the inverse Faddeev-Popov operator $M$ \nwe can reorder the operators in the action to give us\n\\begin{equation}\nZ=\\int{\\cal D}\\Phi\\delta\\left(\\s{\\div}{\\vec{A}}\\right)\n\\delta\\left(\\s{\\div}{\\vec{\\pi}}\\right)\\exp{\\left\\{\\imath{\\cal S}\\right\\}}\n\\end{equation}\nwith\n\\begin{equation}\n{\\cal S}=\\int d^4x\\left[-\\frac{1}{2}\\s{\\vec{B}^a}{\\vec{B}^a}\n-\\frac{1}{2}\\s{\\vec{\\pi}^a}{\\vec{\\pi}^a}\n-\\frac{1}{2} g\\hat{\\rho}^bM^{ba}(-\\nabla^2)M^{ac}g\\hat{\\rho}^c\n+\\s{\\vec{\\pi}^a}{\\partial^0\\vec{A}^a}\\right].\n\\end{equation}\nThe above action is our desired form, with only transverse $\\vec{A}$ and \n$\\vec{\\pi}$ fields present. All other fields, especially those responsible \nfor the Faddeev-Popov determinant (i.e., the certainly unphysical ghosts) \nhave been formally eliminated. However, the appearance of the functional \n$\\delta$-functions and the inverse Faddeev-Popov operator $M$ has led to a \nnon-local formalism. It is not known how to use forms such as the above in \ncalculational schemes. The issue of renormalisability is certainly unclear \nand one does not have a Ward identity in the usual sense.\n\nThe non-local nature of the above result may not lend itself to \ncalculational devices but does serve as a guide to the local formulation. \nIn particular, it is evident that the decomposition of degrees of freedom, both \nphysical and unphysical, inherent to the first order formalism leads more \nnaturally to the cancellation of the unphysical components in the \ndescription of physical phenomena than other choices such as Landau \ngauge would. The task ahead is to identify, within the local formulation, how \nthese cancellations arise and to ensure that approximation schemes respect \nsuch cancellations. 
For example, the cancellation of the Faddeev-Popov \ndeterminant and the appearance of the inverse Faddeev-Popov operator should \nlead to the separation of the physical gluon dynamics contained within the \nghost sector and the unphysical ghosts themselves, i.e., the unphysical \nghost loop of the gluon polarisation should be cancelled whilst another loop \ncontaining only physical information will take its place. Also, it should \nbe evident that the energy divergences associated with ghost loops are \nexplicitly cancelled such that ill-defined integrals do not occur.\n\n\\section{Field Equations of Motion and Continuous Symmetries}\n\\setcounter{equation}{0}\nThe generating functional of the theory is given by our previously \nconsidered functional integral in the presence of sources. Explicitly, \ngiven the action, \\eq{eq:act}, we have\n\\begin{equation}\nZ[J]=\\int{\\cal D}\\Phi\\exp{\\left\\{\\imath{\\cal S}_B+\\imath{\\cal S}_{fp}\n+\\imath{\\cal S}_{\\pi}+\\imath{\\cal S}_s\\right\\}}\n\\label{eq:gen0}\n\\end{equation}\nwith sources defined by\n\\begin{equation}\n{\\cal S}_s=\\int\\dx{x}\\left[\\rho^a\\sigma^a+\\s{\\vec{J}^a}{\\vec{A}^a}+\\ov{c}^a\\eta^a\n+\\ov{\\eta}^ac^a+\\kappa^a\\phi^a+\\s{\\vec{K}^a}{\\vec{\\pi}^a}+\\xi_\\lambda^a\\lambda^a\n+\\xi_\\tau^a\\tau^a\\right].\n\\label{eq:source0}\n\\end{equation}\nIt is useful to introduce a compact notation for the sources and fields and \nwe denote a generic field $\\Phi_\\alpha$ with source $J_\\alpha$ such that the index \n$\\alpha$ stands for all attributes of the field in question (including its \ntype) such that, for instance, we could write\n\\begin{equation}\n{\\cal S}_s=J_\\alpha\\Phi_\\alpha\n\\end{equation}\nwhere summation over all discrete indices and integration over all \ncontinuous arguments is implicitly understood.\n\nThe field equations of motion are derived from the observation that the \nintegral of a total derivative vanishes up to boundary terms. 
The boundary \nterms vanish but this is not so trivial in the light of the Gribov problem. \nPerturbatively (expanding around the free field), there are certainly no \nboundary terms to be considered, however, nonperturbatively the presence of \nso-called Gribov copies does complicate the picture somewhat.\n\nIn \\cite{Gribov:1977wm}, Gribov showed that the Faddeev-Popov technique does \nnot uniquely fix the gauge and that even after gauge-fixing there are \n(physically equivalent) gauge configurations related by finite gauge \ntransforms still present. It was proposed to restrict the space of gauge \nfield configurations ($A$) to the so-called Gribov region $\\Omega$ defined by\n\\begin{equation}\n\\Omega\\equiv\\left\\{A:\\s{\\div}{\\vec{A}}=0;-\\s{\\div}{\\vec{D}}\\geq0\\right\\}.\n\\end{equation}\n$\\Omega$ is a region where the Coulomb gauge condition holds and \nfurthermore, the eigenvalues of the Faddeev-Popov operator are all \npositive. It contains the element $\\vec{A}=0$ and is bounded in every \ndirection \\cite{Zwanziger:1998yf}. However, as explained in \n\\cite{Zwanziger:2003cf} and references therein, the Gribov region is not \nentirely free of Gribov copies and one should more correctly consider the \nfundamental modular region $\\Lambda$ which is defined as the region free of \nGribov copies. It turns out though that the functional integral is \ndominated by configurations on the common boundary of $\\Lambda$ and $\\Omega$ \nso that, in practice, restriction to the Gribov region is sufficient.\n\nGiven that non-trivial boundary conditions are being imposed (i.e., \nrestricting to the Gribov region $\\Omega$), the question of the boundary \nterms in the derivation of the field equations of motion now becomes \nextremely relevant. However, by definition on the boundary of $\\Omega$, the \nFaddeev-Popov determinant {\\it vanishes} such that the boundary terms are \nidentically zero. 
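At the trivial configuration $\vec{A}=0$ the Faddeev-Popov operator reduces to $-\nabla^2$, whose spectrum is manifestly non-negative, consistent with $\vec{A}=0$ lying inside $\Omega$. A minimal lattice illustration (a one-dimensional periodic discretisation chosen purely for brevity, not the full three-dimensional operator):

```python
import numpy as np

# -nabla^2 on a periodic 1-d lattice with unit spacing: the eigenvalues are
# 2 - 2*cos(2*pi*n/N) >= 0, vanishing only on the constant mode -- a toy
# version of the Faddeev-Popov operator at A = 0 being positive semi-definite.
N = 64
lap = (np.diag(-2.0 * np.ones(N))
       + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0          # periodic boundary conditions
eigs = np.linalg.eigvalsh(-lap)        # spectrum of -nabla^2, sorted ascending

assert eigs.min() > -1e-8              # non-negative up to rounding
assert abs(eigs[0]) < 1e-8             # the constant zero mode
```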
The form of the field equations of motion is therefore the \nsame as if we had extended the integration region to the full \nconfiguration space \\cite{Zwanziger:2003cf}.\n\nWriting ${\\cal S}={\\cal S}_B+{\\cal S}_{fp}+{\\cal S}_{\\pi}$ we have that\n\\begin{equation}\n0=\\int{\\cal D}\\Phi\\frac{\\delta}{\\delta\\imath\\Phi_\\alpha}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}\n=\\int{\\cal D}\\Phi\\left\\{\\frac{\\delta{\\cal S}}{\\delta\\Phi_\\alpha}\n+\\frac{\\delta{\\cal S}_s}{\\delta\\Phi_\\alpha}\\right\\}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}\n\\end{equation}\nand so, taking advantage of the linearity in the fields of the source term \nof the action, we have\n\\begin{equation}\nJ_{\\alpha}Z=-\\int{\\cal D}\\Phi\\left\\{\\frac{\\delta{\\cal S}}{\\delta\\Phi_\\alpha}\\right\\}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}.\n\\label{eq:eom}\n\\end{equation}\nWe use the convention that all Grassmann-valued derivatives are \nleft-derivatives and so in the above there will be an additional minus sign \non the left-hand side when $\\alpha$ refers to derivatives with respect to \neither the $c$-field or the $\\eta$-source. The explicit forms of the various \nfield equations of motion are given in Appendix~\\ref{app:eom}.\n\nContinuous transforms, under which the action is invariant, can be regarded \nas changes of variable and, provided that the Jacobian is trivial, one is \nleft with an equation relating the variations of the source terms in the \naction. We consider the two invariances derived explicitly in the previous \nsection; that the Jacobian factors are trivial is shown in \nAppendix~\\ref{app:jac}. 
In the case of the BRS transform, \\eq{eq:brstrans}, \nwe have\n\\begin{eqnarray}\n0&=&\\int{\\cal D}\\Phi\\frac{\\delta}{\\delta\\imath\\delta\\lambda}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s+\\imath\\delta{\\cal S}_s\\right\\}}_{\\delta\\lambda=0}\n\\nonumber\\\\\n&=&\\int{\\cal D}\\Phi\\int\\dx{x}\\left\\{-\\frac{1}{g}\\rho^aD^{0ab}c^b\n+\\frac{1}{g}\\s{\\vec{J}^a}{\\vec{D}^{ab}}c^b-\\frac{1}{g}\\lambda^a\\eta^a\n-\\frac{1}{2} f^{abc}\\ov{\\eta}^ac^bc^c\n\\right.\\nonumber\\\\&&\\left.\n+f^{abc}c^b\\s{(\\vec{\\pi}^c-\\div\\phi^c)}{\\left[\\vec{K}^a-\\frac{\\div}{\n(-\\nabla^2)}(\\kappa^a-\\s{\\div}{\\vec{K}^a})\\right]}\\right\\}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}.\n\\end{eqnarray}\nNotice that the infinitesimal variation $\\delta\\lambda$ that parameterises the BRS \ntransform is a global quantity, leading to the overall integral over $x$. \nThe equation for the $\\alpha$-transform is:\n\\begin{eqnarray}\n0&=&\\int{\\cal D}\\Phi\\frac{\\delta}{\\delta\\imath\\th_x^a}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s+\\imath\\delta{\\cal S}_s\\right\\}}_{\\th=0}\\nonumber\\\\\n&=&\\int{\\cal D}\\Phi f^{abc}\\s{\\vec{X}_x^c}{\\left[\\vec{K}_x^a-\\frac{\\div_x}{\n(-\\nabla_x^2)}(\\kappa_x^a-\\s{\\div_x}{\\vec{K}_x^a})\\right]}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}\n\\label{eq:alinv}\n\\end{eqnarray}\nwhere $\\vec{X}$ is given by \\eq{eq:X}. Constraints imposed by the discrete \nsymmetries of time-reversal and parity will be discussed later.\n\nThe above equations of motion and symmetries refer to functional derivatives \nof the full generating functional. In practice we are concerned with \nconnected two-point (propagator) and one-particle irreducible $n$-point \n(proper) Green's functions since these comprise the least sophisticated \ncommon building blocks from which all other amplitudes may be constructed. 
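Since connected Green's functions play a central role in what follows, it may help to recall in a zero-dimensional toy model (an illustration only, unrelated to the gauge theory at hand) how the logarithm of the generating functional isolates the connected part of a correlator:

```python
import sympy as sp

# Zero-dimensional toy: Z(J) generates full correlators, W = log Z the
# connected ones.  For a Gaussian W the connected two-point function is the
# J-independent "propagator" b, while the full two-point function also
# contains the disconnected piece <phi>^2.
J, a, b = sp.symbols('J a b')
W = a*J + sp.Rational(1, 2)*b*J**2     # connected generating functional
Z = sp.exp(W)                          # Z = e^W, as in the text

full_2pt = sp.simplify(sp.diff(Z, J, 2) / Z)   # <phi phi>   = b + (a + b*J)**2
conn_2pt = sp.simplify(sp.diff(W, J, 2))       # <phi phi>_c = b
one_pt   = sp.simplify(sp.diff(Z, J) / Z)      # <phi>       = a + b*J

assert conn_2pt == b
assert sp.simplify(full_2pt - conn_2pt - one_pt**2) == 0   # full = conn + disc
```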
\nThe generating functional of connected Green's functions is $W[J]$ where\n\\begin{equation}\nZ[J]=e^{W[J]}.\n\\end{equation}\nWe introduce a bracket notation for functional derivatives of $W$ such that\n\\begin{equation}\n\\ev{\\imath J_1}=\\frac{\\delta W}{\\delta\\imath J_1}.\n\\end{equation}\nThe classical field $\\Phi_\\alpha$ is defined as\n\\begin{equation}\n\\Phi_\\alpha=\\frac{1}{Z}\\int{\\cal D}\\Phi\\Phi_\\alpha\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}\n=\\frac{1}{Z}\\frac{\\delta Z}{\\delta\\imath J_\\alpha}\n\\end{equation}\n(the classical field is distinct from the quantum fields which are \nfunctionally integrated over, but for convenience we use the same \nnotation). The generating functional of proper Green's functions is the \neffective action $\\Gamma$, which is a functional of the classical fields and is \ndefined via a Legendre transform of $W$:\n\\begin{equation}\n\\Gamma[\\Phi]=W[J]-\\imath J_\\alpha\\Phi_\\alpha.\n\\end{equation}\nWe use the same bracket notation to denote derivatives of $\\Gamma$ with respect \nto fields -- no confusion arises since we never mix derivatives with respect \nto sources and fields.\n\nLet us now present the equations of motion in terms of proper functions \n(from which we will derive the Dyson--Schwinger equations). Using the equations of \nmotion listed in Appendix~\\ref{app:eom} we have the following equations:\n\\begin{itemize}\n\\item\n$\\sigma$-based. 
This is the functional form of Gau\\ss' law.\n\\begin{eqnarray}\n\\ev{\\imath\\sigma_x^a}-\\ev{\\imath\\tau_x^a}&=&\\nabla_x^2\\phi_x^a\n+gf^{abc}A_{ix}^b\\pi_{ix}^c-gf^{abc}A_{ix}^b\\nabla_{ix}\\phi_x^c\n+gf^{abc}\\ev{\\imath J_{ix}^b\\imath K_{ix}^c}\n\\nonumber\\\\&&\n-gf^{abc}\\int d^4y\\delta(x-y)\\nabla_{ix}\\ev{\\imath J_{iy}^b\\imath\\kappa_x^c}.\n\\label{eq:sidse0}\n\\end{eqnarray}\nNote that we have implicitly used the $\\tau$ equation of motion, \n\\eq{eq:tadse1}, in order to eliminate terms involving $\\s{\\div}{\\vec{\\pi}}$ \nin favour of the source $\\xi_{\\tau}$.\n\\item\n$\\vec{A}$-based. We write this in such a way as to factorise the functional \nderivatives and the kinematical factors. The equation reads:\n\\begin{eqnarray}\n\\ev{\\imath A_{ix}^a}&=&\\nabla_{ix}\\lambda_x^a-\\partial_x^0\\pi_{ix}^a\n+\\partial_x^0\\nabla_{ix}\\phi_x^a+\\left[\\delta_{ij}\\nabla_x^2\n-\\nabla_{ix}\\nabla_{jx}\\right]A_{jx}^a\n\\nonumber\\\\&&\n+gf^{abc}\\int\\dx{y}\\dx{z}\\delta(y-x)\\delta(z-x)\\left[\\nabla_{iz}\\ov{c}_z^bc_y^c\n+\\pi_{iz}^b\\sigma_y^c-\\nabla_{iz}\\phi_z^b\\sigma_y^c\\right]\n\\nonumber\\\\&&\n+gf^{abc}\\int\\dx{y}\\dx{z}\\delta(y-x)\\delta(z-x)\n\\left[\\nabla_{iz}\\ev{\\imath\\ov{\\eta}_y^c\\imath\\eta_z^b}\n+\\ev{\\imath K_{iz}^b\\imath\\rho_y^c}\n-\\nabla_{iz}\\ev{\\imath\\kappa_z^b\\imath\\rho_y^c}\\right]\n\\nonumber\\\\&&\n+gf^{abc}\\int\\dx{y}\\dx{z}\\delta(y-x)\\delta(z-x)\\left\\{\\delta_{jk}\\nabla_{iz}\n+2\\delta_{ij}\\nabla_{ky}-\\delta_{ik}\\nabla_{jy}\\right\\}\n\\left[\\ev{\\imath J_{jy}^b\\imath J_{kz}^c}+A_{jy}^bA_{kz}^c\\right]\n\\nonumber\\\\&&\n-\\frac{1}{4}g^2f^{fbc}f^{fde}\\delta_{jk}\\delta_{li}\n\\left[\\delta^{cg}\\delta^{eh}(\\delta^{ab}\\delta^{di}+\\delta^{ad}\\delta^{bi})\n+\\delta^{bg}\\delta^{dh}(\\delta^{ac}\\delta^{ie}+\\delta^{ae}\\delta^{ic})\\right]\\times\n\\nonumber\\\\&&\n\\left[\\ev{\\imath J_{jx}^g\\imath J_{kx}^h\\imath J_{lx}^i}\n+A_{jx}^g\\ev{\\imath J_{kx}^h\\imath J_{lx}^i}\n+A_{kx}^h\\ev{\\imath 
J_{jx}^g\\imath J_{lx}^i}\n+A_{lx}^i\\ev{\\imath J_{jx}^g\\imath J_{kx}^h}+A_{jx}^gA_{kx}^hA_{lx}^i\\right].\n\\label{eq:adse0}\n\\end{eqnarray}\n\\item\nghost-based. The ghost and the antighost equations provide the same \ninformation. The two fields are complementary and derivatives must come in \npairs if the expression is to survive when sources are set to zero. The \nantighost equation is\n\\begin{equation}\n\\ev{\\imath\\ov{c}_x^a}=-\\nabla_x^2c_x^a\n-gf^{abc}\\nabla_{ix}\n\\left[\\ev{\\imath\\ov{\\eta}_x^b\\imath J_{ix}^c}+c_x^bA_{ix}^c\\right].\n\\label{eq:ghost0}\n\\end{equation}\n\\item\n$\\vec{\\pi}$-based.\n\\begin{equation}\n\\ev{\\imath\\pi_{ix}^a}=\\nabla_{ix}\\tau_x^a-\\pi_{ix}^a+\\nabla_{ix}\\phi_x^a\n+\\partial_x^0A_{ix}^a+\\nabla_{ix}\\sigma_x^a\n+gf^{abc}\\left[\\ev{\\imath\\rho_x^b\\imath J_{ix}^c}+\\sigma_x^bA_{ix}^c\\right].\n\\label{eq:pidse0}\n\\end{equation}\n\\item\n$\\phi$-based. We notice that the interaction terms in the equation of \nmotion for the $\\phi$-field, \\eq{eq:phdse1}, are, up to a derivative, \n\\emph{identical} to those of the $\\vec{\\pi}$-based equation, \\eq{eq:pidse1}. \nThis arises since $-\\div\\phi$ is nothing more than the longitudinal part of \n$\\vec{\\pi}$ and means that there is a redundancy in the formalism that can \nbe exploited to simplify proceedings. We can write\n\\begin{equation}\n\\left(\\s{\\div_x}{\\vec{K}_x^a}-\\kappa_x^a\\right)Z\n=-\\int{\\cal D}\\Phi\\nabla_x^2\\tau_x^a\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}},\n\\label{eq:phidse1}\n\\end{equation}\nfrom which it follows that\n\\begin{eqnarray}\n\\label{eq:phidse}\\s{\\div_x}{\\vec{K}_x^a}-\\kappa_x^a&=&\n-\\nabla_x^2\\ev{\\imath\\xi_{\\tau x}^a},\\\\\n\\label{eq:phidse0}\\ev{\\imath\\phi_x^a}\n-\\s{\\div_x}{\\ev{\\imath\\vec{\\pi}_x^a}}&=&-\\nabla_x^2\\tau_x^a.\n\\end{eqnarray}\n\\item\n$\\lambda$- and $\\tau$-based. 
It will be useful to have these equations written \nin terms of both connected and proper Green's functions.\n\\begin{eqnarray}\n\\label{eq:ladse}\\xi_{\\lambda x}^a=\\s{\\div_x}{\\ev{\\imath\\vec{J}_x^a}},&\n\\;\\;\\;\\;&\\ev{\\imath\\lambda_x^a}=-\\s{\\div_x}{\\vec{A}_x^a},\\\\\n\\label{eq:tadse}\\xi_{\\tau x}^a=\\s{\\div_x}{\\ev{\\imath\\vec{K}_x^a}},&\n\\;\\;\\;\\;&\\ev{\\imath\\tau_x^a}=-\\s{\\div_x}{\\vec{\\pi}_x^a}.\n\\end{eqnarray}\n\\end{itemize}\nThe BRS transform gives rise to the following equation (the Ward--Takahashi \nidentity):\n\\begin{eqnarray}\n0&=&\\int d^4x\\left\\{\\frac{1}{g}(\\partial_x^0\\rho_x^a)\\ev{\\imath\\ov{\\eta}_x^a}\n-f^{acb}\\rho_x^a\\left[\\ev{\\imath\\rho_x^c\\imath\\ov{\\eta}_x^b}\n+\\ev{\\imath\\rho_x^c}\\ev{\\imath\\ov{\\eta}_x^b}\\right]\n-\\frac{1}{g}(\\nabla_{ix}J_{ix}^a)\\ev{\\imath\\ov{\\eta}_x^a}\n\\right.\\nonumber\\\\&&\n-f^{acb}J_{ix}^a\\left[\\ev{\\imath J_{ix}^c\\imath\\ov{\\eta}_x^b}\n+\\ev{\\imath J_{ix}^c}\\ev{\\imath\\ov{\\eta}_x^b}\\right]\n-\\frac{1}{g}\\eta_x^a\\ev{\\imath\\xi_{\\lambda x}^a}\n-\\frac{1}{2} f^{abc}\\ov{\\eta}_x^a\\left[\\ev{\\imath\\ov{\\eta}_x^b\\imath\\ov{\\eta}_x^c}\n+\\ev{\\imath\\ov{\\eta}_x^b}\\ev{\\imath\\ov{\\eta}_x^c}\\right]\n\\nonumber\\\\&&\n+f^{abc}\\left[K_{ix}^a-\\frac{\\nabla_{ix}}{(-\\nabla_x^2)}\n(\\kappa_x^a-\\nabla_{jx}K_{jx}^a)\\right]\\times\n\\nonumber\\\\&&\\left.\n\\left[\\ev{\\imath K_{ix}^c\\imath\\ov{\\eta}_x^b}\n-\\int\\dx{y}\\delta(x-y)\\nabla_{ix}\\ev{\\imath\\kappa_x^c\\imath\\ov{\\eta}_y^b}\n+\\ev{\\imath K_{ix}^c}\\ev{\\imath\\ov{\\eta}_x^b}\n-\\ev{\\imath\\ov{\\eta}_x^b}\\nabla_{ix}\\ev{\\imath\\kappa_x^c}\\right]\\right\\}.\n\\label{eq:stid0}\n\\end{eqnarray}\nWe consider for now only the form of the equation relating connected Green's \nfunctions. 
functions.
As will be seen in the next section, it will not be necessary to \nconsider the equation generated by the invariance under the $\\alpha$-transform.\n\n\\section{Exact Relations for Green's Functions}\n\\setcounter{equation}{0}\nGiven the set of `master' field equations of motion and symmetries, it is \npertinent to find out if any of the constraints can be combined to give \nunambiguous information about the eventual Green's functions of the theory. \nWe find that such simplifications do in fact exist.\n\nLet us start by discussing the functional equation generated by \n$\\alpha$-invariance, \\eq{eq:alinv}. It is not necessary here to consider \nfunctional derivatives of either the generating functional of connected \nGreen's functions ($W$) or the effective action ($\\Gamma$) since the derivation \napplies to the functional integrals directly. From Appendix~\\ref{app:eom}, \nthe $\\vec{\\pi}$-based field equation of motion, \\eq{eq:pidse1} is\n\\begin{equation}\nK_{ix}^aZ[J]=-\\int{\\cal D}\\Phi\\left\\{\\nabla_{ix}\\tau_x^a-X_{ix}^a\\right\\}\n\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}},\\label{eq:pidse2}\n\\end{equation}\nUsing \\eq{eq:pidse2} we can rewrite \\eq{eq:alinv} as\n\\begin{equation}\n0=f^{abc}\\int{\\cal D}\\Phi\\s{\\left[\\vec{K}_x^a-\\frac{\\div_x}{(-\\nabla_x^2)}\n(\\kappa_x^a-\\s{\\div_x}{\\vec{K}_x^a})\\right]}{\\left[\\vec{K}_x^c\n+\\div_x\\tau_x^c\\right]}\\exp{\\left\\{\\imath{\\cal S}+\\imath{\\cal S}_s\\right\\}}.\n\\label{eq:alinv1}\n\\end{equation}\nSince $f^{abc}$ is antisymmetric and noting \\eq{eq:phidse1}, the above is \nnow an almost trivial identity. We have thus shown that the \n$\\vec{\\pi}$- and $\\phi$-based equations of motion, \\eq{eq:pidse2} and \n\\eq{eq:phidse1}, guarantee that $\\alpha$-invariance is respected. Conversely, \napproximations to the equations of motion will destroy the symmetry. 
This \nis a concrete example of a general feature of any physical field theory -- \nthe full solutions of the field equations of motion (and the subsequent \nfunctional derivatives which comprise the Dyson--Schwinger equations) contain all the \ninformation given by the symmetry considerations. In this case, we have the \nambiguity associated with introducing the $\\vec{\\pi}$-field and assigning \nits properties under the BRS transform encoded within the invariance under \nthe $\\alpha$-transform and the field equations of motion are `aware' of this. \nWhat is unusual about this, however, is that the equivalence of the field \nequations of motion and the equations generated by invariance under a \nsymmetry is invariably impossible to show (except order by order in \nperturbation theory) -- full gauge invariance being the archetypal example.\n\nLet us now continue the discussion by considering those equations of motion \nwhich do not contain interaction terms. In the absence of interactions, the \nsolutions to these equations can be written down without difficulty. In \nterms of connected Green's functions, the only non-zero functional derivative \nof the $\\lambda$-equation, \\eq{eq:ladse}, is\n\\begin{equation}\n\\s{\\div_x}{\\ev{\\imath\\xi_{\\lambda y}^b\\imath\\vec{J}_x^a}}\n=-\\imath\\delta^{ba}\\delta(y-x),\n\\label{eq:ladse2}\n\\end{equation}\nthe right-hand side vanishing for all other derivatives. 
Separating the \nconfiguration space arguments and setting our conventions for the Fourier \ntransform, we have for a general two-point function (connected or proper) \nwhich obeys translational invariance:\n\\begin{equation}\n\\ev{\\imath J_{\\alpha}(y)\\imath J_{\\beta}(x)}\n=\\ev{\\imath J_{\\alpha}(y-x)\\imath J_{\\beta}(0)}\n=\\int\\dk{k}W_{\\alpha\\beta}(k)e^{-\\imath k\\cdot(y-x)}\n\\end{equation}\nwhere $\\dk{k}=d^4k\/(2\\pi)^4$ and it is implicitly understood that the \nrelevant prescription to avoid integration over poles is present such that \nthe analytic continuation to Euclidean space may be performed. We can \nimmediately write down the functional derivatives of $\\ev{\\imath J_{ix}^q}$ \nusing \\eq{eq:ladse}:\n\\begin{eqnarray}\n\\ev{\\imath J_{jy}^b\\imath J_{ix}^a}&=&\n\\int\\dk{k}W_{AAji}^{ba}(k)t_{ji}(\\vec{k})e^{-\\imath k\\cdot(y-x)},\\nonumber\\\\\n\\ev{\\imath K_{jy}^b\\imath J_{ix}^a}&=&\n\\int\\dk{k}W_{\\pi Aji}^{ba}(k)t_{ji}(\\vec{k})e^{-\\imath k\\cdot(y-x)},\n\\nonumber\\\\\n\\ev{\\imath \\xi_{\\lambda y}^b\\imath J_{ix}^a}&=&\n\\int\\dk{k}\\delta^{ba}\\frac{k_i}{\\vec{k}^2}e^{-\\imath k\\cdot(y-x)},\\nonumber\\\\\n\\ev{\\imath \\xi_{\\tau y}^b\\imath J_{ix}^a}&=&\n\\ev{\\imath \\rho_y^b\\imath J_{ix}^a}=\\ev{\\imath \\kappa_y^b\\imath J_{ix}^a}=0.\n\\end{eqnarray}\nSimilarly, for the $\\tau$-equation, \\eq{eq:tadse}, we have\n\\begin{eqnarray}\n\\ev{\\imath J_{jy}^b\\imath K_{ix}^a}&=&\n\\int\\dk{k}W_{A\\pi ji}^{ba}(k)t_{ji}(\\vec{k})e^{-\\imath k\\cdot(y-x)}\n,\\nonumber\\\\\n\\ev{\\imath K_{jy}^b\\imath K_{ix}^a}&=&\n\\int\\dk{k}W_{\\pi\\pi ji}^{ba}(k)t_{ji}(\\vec{k})e^{-\\imath k\\cdot(y-x)},\n\\nonumber\\\\\n\\ev{\\imath \\xi_{\\tau y}^b\\imath K_{ix}^a}&=&\n\\int\\dk{k}\\delta^{ba}\\frac{k_i}{\\vec{k}^2}e^{-\\imath k\\cdot(y-x)},\\nonumber\\\\\n\\ev{\\imath \\rho_y^b\\imath K_{ix}^a}&=&\\ev{\\imath \\kappa_y^b\\imath K_{ix}^a}\n=\\ev{\\imath \\xi_{\\lambda y}^b\\imath K_{ix}^a}=0.\n\\end{eqnarray}\nWe see that, as expected, the 
propagators involving only the vector fields \nare transverse and the only other contributions relate to the Lagrange \nmultiplier fields and are purely kinematical in nature. There is one \nsubtlety to the above: whilst the equations for \n$\\ev{\\imath\\xi_{\\lambda}\\imath J}$ and $\\ev{\\imath\\xi_{\\tau}\\imath K}$ are exact \nin the presence of sources, all other equations refer implicitly to the case \nwhere the sources are set to zero and the discrete parity symmetry has been \napplied (see later for a more complete discussion).\n\nIn the same fashion, let us consider the $\\lambda$-equation, \\eq{eq:ladse}, (and \nsimilarly the $\\tau$-equation, \\eq{eq:tadse}) in terms of proper Green's \nfunctions. This equation does contain the same information as its \ncounterpart for connected Green's functions, but clearly has a different \ncharacter. The only non-zero functional derivative is\n\\begin{equation}\n\\ev{\\imath A_{iy}^b\\imath\\lambda_x^a}\n=\\int\\dk{k}\\delta^{ba}k_ie^{-\\imath k\\cdot(y-x)},\n\\end{equation}\nall others vanishing, \\emph{even in the presence of sources}. This applies \nto all proper $n$-point functions involving the $\\lambda$-field. 
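The transversality just noted is carried by the projector $t_{ij}(\vec{k})$ appearing in the propagator decompositions, with the usual definition $t_{ij}(\vec{k})=\delta_{ij}-k_ik_j/\vec{k}^2$. A quick numerical reminder of its defining properties (illustration only):

```python
import numpy as np

# Transverse projector t_ij(k) = delta_ij - k_i k_j / k^2:
# it annihilates k (transversality), squares to itself (projector),
# and has trace 2 (two transverse polarisations in three dimensions).
k = np.array([0.3, -1.2, 2.0])          # a generic 3-momentum
t = np.eye(3) - np.outer(k, k) / (k @ k)

assert np.allclose(k @ t, 0.0)          # k_i t_ij = 0
assert np.allclose(t @ t, t)            # t_ij t_jk = t_ik
assert np.isclose(np.trace(t), 2.0)     # tr t = 2
```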
Similarly, we \nhave the only non-vanishing proper function involving the $\\tau$-field:\n\\begin{equation}\n\\ev{\\imath \\pi_{iy}^b\\imath\\tau_x^a}\n=\\int\\dk{k}\\delta^{ba}k_ie^{-\\imath k\\cdot(y-x)}.\n\\end{equation}\nThat there are no proper $n$-point functions involving functional \nderivatives with respect to the Lagrange multiplier fields apart from the \ntwo special cases above leads to an important facet concerning the Dyson--Schwinger \nequations -- there will be no self-energy terms involving derivatives with \nrespect to the $\\lambda$- or $\\tau$-fields since they have no proper vertices, \ndespite the fact that the propagators associated with these fields may be \nnon-trivial.\n\nNext, let us turn to the $\\phi$-based equation of motion in terms of \nconnected Green's functions, \\eq{eq:phidse}. There are only two \nnon-vanishing functional derivatives and we can write down the solutions as \nbefore\n\\begin{eqnarray}\n\\ev{\\imath K_{iy}^b\\imath\\xi_{\\tau x}^a}&=&\n\\int\\dk{k}\\delta^{ba}\\frac{(-k_i)}{\\vec{k}^2}e^{-\\imath k\\cdot(y-x)}\n,\\nonumber\\\\\n\\ev{\\imath\\kappa_y^b\\imath\\xi_{\\tau x}^a}&=&\n\\int\\dk{k}\\delta^{ba}\\frac{\\imath}{\\vec{k}^2}e^{-\\imath k\\cdot(y-x)}\n,\\nonumber\\\\\n\\ev{\\imath J_{iy}^b\\imath\\xi_{\\tau x}^a}&=&\n\\ev{\\imath\\rho_y^b\\imath\\xi_{\\tau x}^a}=\n\\ev{\\imath\\xi_{\\lambda x}^b\\imath\\xi_{\\tau x}^a}=\n\\ev{\\imath\\xi_{\\tau x}^b\\imath\\xi_{\\tau x}^a}=0.\n\\end{eqnarray}\nNotice that \\emph{all} of the connected Green's functions involving the \n$\\tau$-field are now known and are purely kinematical in nature. In terms of \nproper Green's functions, we consider the $\\phi$-based equation of motion, \n\\eq{eq:phidse0}. Recognising that functional derivatives with respect to \nthe $\\lambda$- and $\\tau$-fields yield no more information, we can omit them from \nthe current discussion. 
The equation tells us that given a proper Green's \nfunction involving $\\pi$, we can immediately construct the corresponding \nfunctional derivative with respect to $\\phi$. We can thus conclude that as \nfar as the proper Green's functions are concerned, derivatives with respect \nto the $\\phi$-field are redundant.\n\nFinally, let us consider the equation derived from the BRS transform in \nterms of connected Green's functions, \\eq{eq:stid0}. Since the \nghost\/antighost fields must come in pairs, we may take the functional \nderivative of this with respect to $\\imath\\eta_z^d$ and subsequently set the \nghost sources to zero whilst considering only the rest. We get:\n\\begin{eqnarray}\n\\lefteqn{\\frac{\\imath}{g}\\ev{\\imath\\xi_{\\lambda z}^d}=\n\\int\\dx{x}\\left\\{\n\\frac{1}{g}(\\partial_x^0\\rho_x^a)\\ev{\\imath\\ov{\\eta}_x^a\\imath\\eta_z^d}\n-f^{acb}\\rho_x^a\\left[\\ev{\\imath\\rho_x^c\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\n+\\ev{\\imath\\rho_x^c}\\ev{\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\\right]\\right.}\n\\nonumber\\\\&&\n-\\frac{1}{g}(\\nabla_{ix}J_{ix}^a)\\ev{\\imath\\ov{\\eta}_x^a\\imath\\eta_z^d}\n-f^{acb}J_{ix}^a\\left[\\ev{\\imath J_{ix}^c\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\n+\\ev{\\imath J_{ix}^c}\\ev{\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\\right]\n\\nonumber\\\\&&\n+f^{abc}\\left[K_{ix}^a-\\frac{\\nabla_{ix}}{(-\\nabla_x^2)}\n(\\kappa_x^a-\\nabla_{jx}K_{jx}^a)\\right]\\times\n\\nonumber\\\\&&\\left.\n\\left[\\ev{\\imath K_{ix}^c\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\n-\\int\\dx{y}\\delta(x-y)\\nabla_{ix}\\ev{\\imath\\kappa_x^c\\imath\\ov{\\eta}_y^b\n\\imath\\eta_z^d}+\\ev{\\imath K_{ix}^c}\\ev{\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\n-\\ev{\\imath\\ov{\\eta}_x^b\\imath\\eta_z^d}\\nabla_{ix}\\ev{\\imath\\kappa_x^c}\\right]\n\\right\\}.\n\\label{eq:stid1}\n\\end{eqnarray}\nFor now, the pertinent information from this identity comes from taking the \nfunctional derivative with respect to the source $\\xi_{\\lambda}$ and setting all 
\nsources to zero. The result is\n\\begin{equation}\n\\ev{\\imath\\xi_{\\lambda\\omega}^e\\imath\\xi_{\\lambda z}^d}=0.\n\\end{equation}\nWe notice that all other functional derivatives lead to non-trivial \nrelations involving interaction terms. This includes both the \n$\\ev{\\imath\\rho\\imath\\xi_\\lambda}$ and the $\\ev{\\imath\\kappa\\imath\\xi_\\lambda}$ \nconnected Green's functions and we conclude that these functions are not \nmerely kinematical factors as one might expect from quantities involving \nLagrange multiplier fields. We shall return to this topic at a later stage.\n\n\\section{Feynman (and Other) Rules}\n\\setcounter{equation}{0}\nWhilst it is entirely possible to deduce the complete set of Feynman rules \ndirectly from the action, we shall follow a slightly less obvious path \nhere. We derive not only the basic Feynman rules but collect all the \ntree-level (and incidentally primitively divergent) quantities that will be of \ninterest. This means that in addition to the tree-level propagators (i.e., \nconnected two-point Green's functions) and proper vertices (i.e., proper \nthree- and four-point functions) we derive also the proper two-point \nfunctions. The reason for this is that (as will be discussed in some detail \nlater) the connected and proper two-point functions are not related in the \nusual way as inverses of one another. The tree-level quantities of interest \ncan be easily derived from the respective equations of motion. Indeed, \nrecalling the previous section, some are already known exactly and we will \nnot need to discuss them further.\n\nBefore beginning, let us highlight a basic feature of the Fourier transform \nto momentum space. We know the commutation or anti-commutation rules for \nour fields\/sources and this will lead to the simplification that we need only \nconsider combinations of fields\/sources and let the commutation rules take \ncare of the permutations. 
However, momentum assignments must be uniformly \napplied and this leads to some non-trivial relations. Consider firstly the \ngeneric proper two-point function $\\ev{\\imath\\Phi_\\alpha(x)\\imath\\Phi_\\beta(y)}$ \nwhere we have $\\Phi_\\beta(y)\\Phi_\\alpha(x)=\\eta\\Phi_\\alpha(x)\\Phi_\\beta(y)$ with \n$\\eta=\\pm1$. We then have\n\\begin{equation}\n\\ev{\\imath\\Phi_\\alpha(x)\\imath\\Phi_\\beta(y)}\n=\\eta\\ev{\\imath\\Phi_\\beta(y)\\imath\\Phi_\\alpha(x)}\n\\end{equation}\nsuch that in momentum space\n\\begin{equation}\n\\Gamma_{\\alpha\\beta}(k)=\\eta\\Gamma_{\\beta\\alpha}(-k).\n\\end{equation}\nA similar argument applies for connected two-point functions. The situation \nfor proper three-point functions is slightly less complicated since all \nmomenta are defined as incoming. Indeed, we have (the $\\delta$-function \nexpressing momentum conservation comes about because of translational \ninvariance)\n\\begin{equation}\n\\ev{\\imath\\Phi_\\alpha\\imath\\Phi_\\beta\\imath\\Phi_\\gamma}\n=\\int\\dk{k_\\alpha}\\dk{k_\\beta}\\dk{k_\\gamma}(2\\pi)^4\\delta(k_\\alpha+k_\\beta+k_\\gamma)\n\\Gamma_{\\alpha\\beta\\gamma}(k_\\alpha,k_\\beta,k_\\gamma)\ne^{-\\imath k_\\alpha\\cdot x_\\alpha-\\imath k_\\beta\\cdot x_\\beta-\\imath k_\\gamma\\cdot x_\\gamma}\n\\end{equation}\nsuch that, for example\n\\begin{equation}\n\\Gamma_{\\beta\\alpha\\gamma}(k_\\beta,k_\\alpha,k_\\gamma)\n=\\eta_{\\alpha\\beta}\\Gamma_{\\alpha\\beta\\gamma}(k_\\alpha,k_\\beta,k_\\gamma)\n\\end{equation}\nwhere $\\eta_{\\alpha\\beta}$ refers to the sign incurred when swapping $\\alpha$ and \n$\\beta$.\n\nLet us now consider the connected two-point functions. 
Setting the coupling \nto zero in the equations of motion (listed in Appendix~\\ref{app:eom}) that \ninvolve interaction terms gives us the following non-trivial relations (the \nsuperscript $\\ev{}^{(0)}$ denotes the tree-level quantity)\n\\begin{eqnarray}\n\\rho_x^a-\\xi_{\\tau x}^a&=&-\\nabla_x^2\\ev{\\imath\\kappa_x^a}^{(0)},\\nonumber\\\\\nJ_{ix}^a&=&-\\nabla_{ix}\\ev{\\imath\\xi_{\\lambda x}^a}^{(0)}\n+\\partial_x^0\\ev{\\imath K_{ix}^a}^{(0)}\n-\\partial_x^0\\nabla_{ix}\\ev{\\imath\\kappa_x^a}^{(0)}\n-\\left[\\delta_{ij}\\nabla_x^2+\\nabla_{ix}\\nabla_{jx}\\right]\n\\ev{\\imath J_{jx}^a}^{(0)},\\nonumber\\\\\n\\eta_x^a&=&\\nabla_x^2\\ev{\\imath\\ov{\\eta}_x^a}^{(0)},\\nonumber\\\\\nK_{ix}^a&=&-\\nabla_{ix}\\ev{\\imath\\xi_{\\tau x}^a}^{(0)}\n+\\ev{\\imath K_{ix}^a}^{(0)}-\\nabla_{ix}\\ev{\\imath\\kappa_x^a}^{(0)}\n-\\partial_x^0\\ev{\\imath J_{ix}^a}^{(0)}-\\nabla_{ix}\\ev{\\imath\\rho_x^a}^{(0)}.\n\\end{eqnarray}\nClearly, the ghost propagator is distinct from the rest, since the ghost \nfield must appear with its antighost counterpart. The tree-level ghost \npropagator is\n\\begin{equation}\nW_{\\ov{c}c}^{(0)ab}(k)=-\\delta^{ab}\\frac{\\imath}{\\vec{k}^2}.\n\\end{equation}\nThe remaining tree-level propagators in momentum space (without the common \ncolor factor $\\delta^{ab}$) are summarised in Table~\\ref{tab:w0}. 
Those \nentries that are underlined are the exact relations considered previously.\n\n\\begin{table}\n\\begin{tabular}{|c||c|c||c|c||c|c|}\\hline\n$W$&$A_j$&$\\pi_j$&$\\sigma$&$\\phi$&$\\lambda$&$\\tau$\\\\\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$A_i$&$t_{ij}(k)\\frac{\\imath}{(k_0^2-\\vec{k}^2)}$&\n$t_{ij}(k)\\frac{(-k^0)}{(k_0^2-\\vec{k}^2)}$&$\\underline{0}$&$\\underline{0}$&\n$\\underline{\\frac{(-k_i)}{\\vec{k}^2}}$&$\\underline{0}$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\pi_i$&$t_{ij}(k)\\frac{k^0}{(k_0^2-\\vec{k}^2)}$&\n$t_{ij}(k)\\frac{\\imath\\vec{k}^2}{(k_0^2-\\vec{k}^2)}$&$\\underline{0}$&\n$\\underline{0}$&$\\underline{0}$&$\\underline{\\frac{(-k_i)}{\\vec{k}^2}}$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\sigma$&$\\underline{0}$&$\\underline{0}$&$\\frac{\\imath}{\\vec{k}^2}$&\n$\\frac{(-\\imath)}{\\vec{k}^2}$&$\\frac{(-k^0)}{\\vec{k}^2}$&$\\underline{0}$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\phi$&$\\underline{0}$&$\\underline{0}$&$\\frac{(-\\imath)}{\\vec{k}^2}$&0&0&\n$\\underline{\\frac{\\imath}{\\vec{k}^2}}$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\lambda$&$\\underline{\\frac{k_j}{\\vec{k}^2}}$&$\\underline{0}$&\n$\\frac{k^0}{\\vec{k}^2}$&0&$\\underline{0}$&$\\underline{0}$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\tau$&$\\underline{0}$&$\\underline{\\frac{k_j}{\\vec{k}^2}}$&$\\underline{0}$&\n$\\underline{\\frac{\\imath}{\\vec{k}^2}}$&$\\underline{0}$&$\\underline{0}$\\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:w0}Tree-level propagators (without color factors) in \nmomentum space. Underlined entries denote exact results.}\n\\end{table}\n\nWe can repeat the analysis for the tree-level proper two-point functions. 
\nThe relevant equations are:\n\\begin{eqnarray}\n\\ev{\\imath\\sigma_x^a}^{(0)}-\\ev{\\imath\\tau_x^a}^{(0)}&=&\n\\nabla_x^2\\phi_x^a,\\nonumber\\\\\n\\ev{\\imath A_{ix}^a}^{(0)}&=&\\nabla_{ix}\\lambda_x^a-\\partial_x^0\\pi_{ix}^a\n+\\partial_x^0\\nabla_{ix}\\phi_x^a+\\left[\\delta_{ij}\\nabla_x^2\n-\\nabla_{ix}\\nabla_{jx}\\right]A_{jx}^a,\\nonumber\\\\\n\\ev{\\imath\\ov{c}_x^a}^{(0)}&=&-\\nabla_x^2c_x^a,\\nonumber\\\\\n\\ev{\\imath\\pi_{ix}^a}^{(0)}&=&\\nabla_{ix}\\tau_x^a-\\pi_{ix}^a\n+\\nabla_{ix}\\phi_x^a+\\partial_x^0A_{ix}^a+\\nabla_{ix}\\sigma_x^a.\n\\end{eqnarray}\nThe ghost proper two-point function is\n\\begin{equation}\n\\Gamma_{\\ov{c}c}^{(0)ab}(k)=\\delta^{ab}\\imath\\vec{k}^2.\n\\end{equation}\nThe remaining proper two-point functions are summarised in \nTable~\\ref{tab:g0} where the reader is reminded that all proper functions \ninvolving derivatives with respect to the $\\phi$-field can be constructed \nfrom the corresponding $\\pi$ derivative.\n\n\\begin{table}\n\\begin{tabular}{|c||c|c||c|c||c|c|}\\hline\n$\\Gamma$&$A_j$&$\\pi_j$&$\\sigma$&$\\phi$&$\\lambda$&$\\tau$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$A_i$&$t_{ij}(k)\\imath\\vec{k}^2$&$\\delta_{ij}k^0$&0&\n$\\left\\{-\\imath k^0k_i\\right\\}$&$\\underline{k_i}$&$\\underline{0}$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\pi_i$&$-k^0\\delta_{ij}$&$\\imath\\delta_{ij}$&$k_i$&$\\left\\{k_i\\right\\}$&\n$\\underline{0}$&$\\underline{k_i}$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\sigma$&0&$-k_j$&0&$\\left\\{\\imath\\vec{k}^2\\right\\}$&$\\underline{0}$&\n$\\underline{0}$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\phi$&$\\left\\{-\\imath 
k^0k_j\\right\\}$&$\\left\\{-k_j\\right\\}$&\n$\\left\\{\\imath\\vec{k}^2\\right\\}$&$\\left\\{\\imath\\vec{k}^2\\right\\}$&\n$\\underline{0}$&$\\underline{0}$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\lambda$&$\\underline{-k_j}$&$\\underline{0}$&$\\underline{0}$&$\\underline{0}$&\n$\\underline{0}$&$\\underline{0}$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\tau$&$\\underline{0}$&$\\underline{-k_j}$&$\\underline{0}$&$\\underline{0}$&\n$\\underline{0}$&$\\underline{0}$\\\\\\hline\n\\end{tabular}\n\\caption{\\label{tab:g0}Tree-level proper two-point functions (without color \nfactors) in momentum space. Underlined entries denote exact results. \nBracketed quantities refer to functions that are fully determined by others.}\n\\end{table}\n\nDetermining the tree-level vertices (three- and four-point proper Green's \nfunctions) follows the same pattern as for the two-point functions. They \nfollow by isolating the parts of the equations of motion that have explicit \nfactors of the coupling $g$ and functionally differentiating. In momentum \nspace (defining all momenta to be incoming), we have\n\\begin{eqnarray}\n\\Gamma_{\\pi\\sigma A ij}^{(0)abc}&=&-gf^{abc}\\delta_{ij},\\nonumber\\\\\n\\Gamma_{3A ijk}^{(0)abc}(p_a,p_b,p_c)&=&\n-\\imath gf^{abc}\\left[\\delta_{ij}(p_a-p_b)_k+\\delta_{jk}(p_b-p_c)_i\n+\\delta_{ki}(p_c-p_a)_j\\right],\\nonumber\\\\\n\\Gamma_{4A ijkl}^{(0)abcd}&=&-\\imath g^2\\left\\{\\delta_{ij}\\delta_{kl}\n\\left[f^{ace}f^{bde}-f^{ade}f^{cbe}\\right]\n+\\delta_{ik}\\delta_{jl}\\left[f^{abe}f^{cde}-f^{ade}f^{bce}\\right]\n+\\delta_{il}\\delta_{jk}\\left[f^{ace}f^{dbe}-f^{abe}f^{cde}\\right]\\right\\}\n,\\nonumber\\\\\n\\Gamma_{\\ov{c}cA i}^{(0)abc}(p_{\\ov{c}},p_c,p_A)&=&-\\imath gf^{abc}p_{\\ov{c}i}.\n\\end{eqnarray}\nWe notice that all the tree-level vertices are independent of the energy. 
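Since the colour brackets above are easy to mistranscribe, a quick numerical check of the Bose symmetry of the four-gluon vertex is worthwhile. The sketch below (Python; the choice of SU(2) structure constants $f^{abc}=\epsilon^{abc}$ is purely illustrative, and the $\delta_{il}\delta_{jk}$ bracket is taken with its conventional relative minus sign between the two $ff$ terms) verifies invariance under the simultaneous exchange of two legs:

```python
from itertools import product

def f(a, b, c):
    # SU(2) structure constants: f^{abc} = epsilon^{abc}, indices in {1,2,3}
    return (a - b) * (b - c) * (c - a) // 2

def kd(i, j):
    # Kronecker delta
    return 1 if i == j else 0

def gamma4(a, b, c, d, i, j, k, l):
    # Colour/spatial structure of the tree-level four-gluon vertex
    # (overall factor -i g^2 omitted); each square bracket carries a
    # relative minus sign, including the delta_il delta_jk one.
    s1 = sum(f(a, c, e) * f(b, d, e) - f(a, d, e) * f(c, b, e) for e in (1, 2, 3))
    s2 = sum(f(a, b, e) * f(c, d, e) - f(a, d, e) * f(b, c, e) for e in (1, 2, 3))
    s3 = sum(f(a, c, e) * f(d, b, e) - f(a, b, e) * f(c, d, e) for e in (1, 2, 3))
    return (kd(i, j) * kd(k, l) * s1
            + kd(i, k) * kd(j, l) * s2
            + kd(i, l) * kd(j, k) * s3)

# Bose symmetry: exchanging legs (a,i)<->(b,j) or (c,k)<->(d,l) is invariant
for a, b, c, d, i, j, k, l in product((1, 2, 3), repeat=8):
    assert gamma4(a, b, c, d, i, j, k, l) == gamma4(b, a, c, d, j, i, k, l)
    assert gamma4(a, b, c, d, i, j, k, l) == gamma4(a, b, d, c, i, j, l, k)
print("four-gluon vertex Bose symmetry: OK")
```

Dropping the minus sign in any one bracket makes the assertions fail, so this check pins down the relative signs.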
\nIn addition, there is a tree-level vertex involving $\\phi$ that can be \nconstructed from its counterpart involving $\\vec{\\pi}$ and that reads:\n\\begin{equation}\n\\Gamma_{\\phi\\sigma Ai}^{(0)abc}(p_\\phi,p_\\sigma,p_A)\n=\\imath p_{\\phi j}\\Gamma_{\\pi\\sigma Aji}^{(0)abc}=-\\imath gf^{abc}p_{\\phi i}.\n\\end{equation}\nThis vertex has exactly the same form as the ghost-gluon vertex with the \nincoming $\\phi$-momentum playing the same role as the incoming \n$\\ov{c}$-momentum. It is worth mentioning that the ghost-gluon, three- and \nfour-gluon vertices are identical to the Landau gauge forms except that only \nthe spatial components of the vectors are present.\n\nLet us now discuss the cancellation of the ghost (energy divergent) sector. \nIn any Feynman diagram containing a closed ghost loop, there will be an \nassociated energy divergence. It is a general result that associated with \nany closed loop involving Grassmann-valued fields (ghosts or fermions) there \nwill be a factor of $(-1)$. However, Green's functions are given by the sum \nof all possible contributing Feynman diagrams. Since the Feynman rules for \n$W_{\\sigma\\phi}$ and $\\Gamma_{\\phi\\sigma A}$ are identical to $W_{\\ov{c}c}$ and \n$\\Gamma_{\\ov{c}cA}$ we will have, for each closed ghost loop, another loop \ninvolving scalar fields without the factor $(-1)$. Even before performing \nthe loop integration (and regularisation) the integrands of the two diagrams \nwill cancel exactly. In this way we see that the energy divergences coming \nfrom the ghost sector will be eliminated, as expected given that the \nFaddeev-Popov determinant can formally be cancelled. There is one caveat to \nthis. 
Whilst we have shown that the energy divergences coming from the \nghost sector have been eliminated, we have not shown that the remaining \nloops involving scalar fields are free of energy divergences (although a \nquick glance at the form of the Dyson--Schwinger equations later will suffice to see that \nthis is the case at leading order). We propose to look further into this in \na future publication.\n\n\\section{Decomposition of Two-Point Functions}\n\\setcounter{equation}{0}\nIn order to constrain the possible form of the two-point functions under \ninvestigation we can utilise information about discrete symmetries. We \nconsider time-reversal and parity and we know that Yang-Mills theory \nrespects both. Under time-reversal the generic field \n$\\Phi_\\alpha(x^0,\\vec{x})$ is transformed as follows:\n\\begin{equation}\n\\Phi_\\alpha(x^0,\\vec{x})=\\eta_\\alpha\\Phi_\\alpha(-x^0,\\vec{x})\n\\end{equation}\nwhere $\\eta_\\alpha=\\pm1$. Since the action, \\eq{eq:act}, is invariant under \ntime-reversal (it is a pure number) then by considering each term in turn, \nwe deduce that\n\\begin{equation}\n\\eta_A=\\eta_\\lambda=1,\\;\\;\\;\\;\\eta_\\pi=\\eta_\\tau=\\eta_\\phi=\\eta_\\sigma=-1,\\;\\;\\;\\;\n\\eta_{\\ov{c}}=\\eta_c=\\pm1.\n\\end{equation}\nThe sources have the same transformation properties as the field. These \nproperties allow us to extract information about the energy dependence of \nGreen's functions. For instance, we have that\n\\begin{equation}\n\\Gamma_{A\\pi ij}^{ab}(k^0,\\vec{k})=-\\Gamma_{A\\pi ij}^{ab}(-k^0,\\vec{k}),\n\\end{equation}\nfrom which one can infer that\n\\begin{equation}\n\\Gamma_{A\\pi ij}^{ab}(k^0,\\vec{k})=\\delta^{ab}k^0\\Gamma_{A\\pi ij}(k_0^2,\\vec{k})\n\\end{equation}\n(the sign convention is chosen to match the perturbative results). 
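The sign bookkeeping behind these statements can be sketched in a few lines (Python; the string labels for the fields are illustrative only): a two-point function is odd in the energy, and hence carries an explicit factor $k^0$, precisely when the product of the time-reversal signs of its two fields is $-1$.

```python
# Time-reversal signs eta_alpha for each field, as deduced from the action
eta = {'A': 1, 'lambda': 1, 'pi': -1, 'tau': -1, 'phi': -1, 'sigma': -1}

def odd_in_energy(alpha, beta):
    # Gamma_{ab}(k0) = eta_a * eta_b * Gamma_{ab}(-k0), so the function is
    # odd in k0 (carries an explicit factor k0) iff the product is -1
    return eta[alpha] * eta[beta] == -1

assert odd_in_energy('A', 'pi')          # Gamma_{A pi} ~ k^0
assert odd_in_energy('A', 'sigma')       # Gamma_{A sigma} ~ k^0
assert odd_in_energy('sigma', 'lambda')  # W_{sigma lambda} ~ -k^0
assert odd_in_energy('phi', 'lambda')    # W_{phi lambda} ~ -k^0
assert not odd_in_energy('A', 'A')       # even functions depend on k0^2 only
assert not odd_in_energy('sigma', 'phi')
print("energy parity assignments: OK")
```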
Aside \nfrom $\\Gamma_{A\\phi}$ which is unambiguously related to $\\Gamma_{A\\pi}$, the only \nother proper two-point function that carries the external factor $k^0$ is\n\\begin{equation}\n\\Gamma_{A\\sigma i}^{ab}(k^0,\\vec{k})=\\delta^{ab}k^0\\Gamma_{A\\sigma i}(k_0^2,\\vec{k}).\n\\end{equation}\nHaving extracted the explicit factors of $k^0$ in the proper two-point \nfunctions, the (as yet) unknown functions that multiply them are functions \nof $k_0^2$. Turning to the propagators, we assign the factor $-k^0$ to \n$W_{A\\pi}$, $W_{\\sigma\\lambda}$ and $W_{\\phi\\lambda}$.\n\nThe second discrete symmetry of interest is parity whereby\n\\begin{equation}\n\\Phi_\\alpha(x^0,\\vec{x})=\\eta_\\alpha\\Phi_\\alpha(x^0,-\\vec{x})\n\\end{equation}\nwhere again, $\\eta_\\alpha=\\pm1$. Again the action is invariant and we deduce \nthat\n\\begin{equation}\n\\eta_A=\\eta_\\pi=-1,\\;\\;\\;\\;\\eta_\\sigma=\\eta_\\phi=\\eta_\\lambda=\\eta_\\tau=1,\\;\\;\\;\\;\n\\eta_{\\ov{c}}=\\eta_c=\\pm1\n\\end{equation}\nwith the sources transforming as the fields. This symmetry is rather more \nobvious than time-reversal. The physical sense is that for every vector \nfield (and with an associated spatial index) we have some explicit vector \nfactor (again with the associated spatial index). Where the vector fields \n$\\vec{A}$ and $\\vec{\\pi}$ occur in the propagators, we use the \ntransversality conditions from earlier to see that the vector-scalar \npropagators must vanish, except those involving the appropriate Lagrange \nmultiplier field.\n\nWhat the above tells us is how to construct the most general allowed forms \nof the two-point functions. The dressing functions are scalar functions of \nthe positive, scalar arguments $k_0^2$ and $\\vec{k}^2$. We summarise the \nresults in Tables~\\ref{tab:w1} and \\ref{tab:g1}. The ghost propagator is \nwritten $W_{\\ov{c}c}^{ab}(k)=-\\delta^{ab}\\imath D_c\/\\vec{k}^2$. We have nine \nunknown propagator dressing functions. 
Including the proper ghost two-point \nfunction $\\Gamma_{\\ov{c}c}^{ab}(k)=\\delta^{ab}\\imath\\vec{k}^2\\Gamma_c$ we see that \nthere are ten proper two-point dressing functions. The extra function \ncomes about because we have used only the propagator form of the identity \n\\eq{eq:stid1} to eliminate $W_{\\lambda\\la}$.\n\n\\begin{table}\n\\begin{tabular}{|c||c|c||c|c||c|c|}\\hline\n$W$&$A_j$&$\\pi_j$&$\\sigma$&$\\phi$&$\\lambda$&$\\tau$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$A_i$&$t_{ij}(k)\\frac{\\imath D_{AA}}{(k_0^2-\\vec{k}^2)}$&\n$t_{ij}(k)\\frac{(-k^0)D_{A\\pi}}{(k_0^2-\\vec{k}^2)}$&0&0&\n$\\frac{(-k_i)}{\\vec{k}^2}$&0\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\pi_i$&$t_{ij}(k)\\frac{k^0D_{A\\pi}}{(k_0^2-\\vec{k}^2)}$&\n$t_{ij}(k)\\frac{\\imath\\vec{k}^2D_{\\pi\\pi}}{(k_0^2-\\vec{k}^2)}$&0&0&0&\n$\\frac{(-k_i)}{\\vec{k}^2}$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\sigma$&0&0&$\\frac{\\imath D_{\\sigma\\si}}{\\vec{k}^2}$&\n$\\frac{-\\imath D_{\\sigma\\phi}}{\\vec{k}^2}$&\n$\\frac{(-k^0)D_{\\sigma\\lambda}}{\\vec{k}^2}$&0\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\phi$&0&0&$\\frac{-\\imath D_{\\sigma\\phi}}{\\vec{k}^2}$&\n$\\frac{-\\imath D_{\\phi\\phi}}{\\vec{k}^2}$&\n$\\frac{(-k^0)D_{\\phi\\lambda}}{\\vec{k}^2}$&$\\frac{\\imath}{\\vec{k}^2}$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\lambda$&$\\frac{k_j}{\\vec{k}^2}$&0&$\\frac{k^0D_{\\sigma\\lambda}}{\\vec{k}^2}$&\n$\\frac{k^0D_{\\phi\\lambda}}{\\vec{k}^2}$&0&0\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\tau$&0&$\\frac{k_j}{\\vec{k}^2}$&0&$\\frac{\\imath}{\\vec{k}^2}$&0&0\\\\\n\\hline\n\\end{tabular}\n\\caption{\\label{tab:w1}General form of propagators in momentum space. The \nglobal color factor $\\delta^{ab}$ has been extracted. 
All unknown functions \n$D_{\\alpha\\beta}$ are dimensionless, scalar functions of $k_0^2$ and $\\vec{k}^2$.}\n\\end{table}\n\n\\begin{table}\n\\begin{tabular}{|c||c|c||c|c||c|c|}\\hline\n$\\Gamma$&$A_j$&$\\pi_j$&$\\sigma$&$\\phi$&$\\lambda$&$\\tau$\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$A_i$&$t_{ij}(\\vec{k})\\imath\\vec{k}^2\\Gamma_{AA}+\\imath k_ik_j\\ov{\\Gamma}_{AA}$&\n$k^0\\left(\\delta_{ij}\\Gamma_{A\\pi}+l_{ij}(\\vec{k})\\ov{\\Gamma}_{A\\pi}\\right)$&\n$-\\imath k^0k_i\\Gamma_{A\\sigma}$&$\\left\\{-\\imath k^0k_i\\left(\\Gamma_{A\\pi}\n+\\ov{\\Gamma}_{A\\pi}\\right)\\right\\}$&$k_i$&0\\\\\n\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\pi_i$&$-k^0\\left(\\delta_{ij}\\Gamma_{A\\pi}+l_{ij}(\\vec{k})\\ov{\\Gamma}_{A\\pi}\\right)$&\n$\\imath\\delta_{ij}\\Gamma_{\\pi\\pi}+\\imath l_{ij}(\\vec{k})\\ov{\\Gamma}_{\\pi\\pi}$&\n$k_i\\Gamma_{\\pi\\sigma}$&\n$\\left\\{k_i\\left(\\Gamma_{\\pi\\pi}+\\ov{\\Gamma}_{\\pi\\pi}\\right)\\right\\}$&0&$k_i$\\\\\n\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\sigma$&$-\\imath k^0k_j\\Gamma_{A\\sigma}$&$-k_j\\Gamma_{\\pi\\sigma}$&\n$\\imath\\vec{k}^2\\Gamma_{\\sigma\\si}$&$\\left\\{\\imath\\vec{k}^2\\Gamma_{\\pi\\sigma}\\right\\}$&0&\n0\\\\\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\phi$&$\\left\\{-\\imath k^0k_j\\left(\\Gamma_{A\\pi}+\\ov{\\Gamma}_{A\\pi}\\right)\\right\\}$&\n$\\left\\{-k_j\\left(\\Gamma_{\\pi\\pi}+\\ov{\\Gamma}_{\\pi\\pi}\\right)\\right\\}$&\n$\\left\\{\\imath\\vec{k}^2\\Gamma_{\\pi\\sigma}\\right\\}$&\n$\\left\\{-\\imath\\vec{k}^2\\left(\\Gamma_{\\pi\\pi}+\\ov{\\Gamma}_{\\pi\\pi}\\right)\n\\right\\}$&0&0\\\\\\hline\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\lambda$&$-k_j$&0&0&0&0&0\\\\\\hline\\rule[-2.4ex]{0ex}{5.5ex}\n$\\tau$&0&$-k_j$&0&0&0&0\\\\\\hline\n\\end{tabular}\n\\caption{\\label{tab:g1}General form of the proper two-point functions in \nmomentum space. The global color factor $\\delta^{ab}$ has been extracted. \nAll unknown functions $\\Gamma_{\\alpha\\beta}$ and $\\ov{\\Gamma}_{\\alpha\\beta}$ are dimensionless, scalar functions of \n$k_0^2$ and $\\vec{k}^2$. 
Bracketed quantities refer to functions that are \nfully determined by others.}\n\\end{table}\n\nObviously the propagator and proper two-point dressing functions are related \nvia the Legendre transform. Whereas in covariant gauges this relationship \nis merely an inversion, in our case there is considerably more detail. The \nconnection between the connected and proper two-point functions stems from \nthe observation that\n\\begin{equation}\n\\frac{\\delta\\imath J_\\beta}{\\delta\\imath J_\\alpha}=\\delta_{\\alpha\\beta}\n=-\\imath\\frac{\\delta}{\\delta\\imath J_\\alpha}\\ev{\\imath\\Phi_\\beta}\n=\\frac{\\delta\\Phi_\\gamma}{\\delta\\imath J_\\alpha}\\ev{\\imath\\Phi_\\gamma\\imath\\Phi_\\beta}\n=\\ev{\\imath J_\\alpha\\imath J_\\gamma}\\ev{\\imath\\Phi_\\gamma\\imath\\Phi_\\beta}.\n\\label{eq:leg}\n\\end{equation}\n(Recall here that there is an implicit summation over all discrete indices \nand integration over continuous variables labelled by $\\gamma$.) The ghost \ntwo-point functions are somewhat special in that once sources are set to \nzero, only ghost-antighost pairs need be considered. The above relation \nbecomes\n\\begin{equation}\n\\int d^4z\\ev{\\imath\\ov{\\eta}_x^a\\imath\\eta_z^c}\n\\ev{\\imath\\ov{c}_z^c\\imath c_y^b}=\\delta^{ab}\\delta(x-y).\n\\label{eq:ghleg}\n\\end{equation}\nFourier transforming to momentum space and using the decomposition from \nabove, we get that\n\\begin{equation}\nD_c(k_0^2,\\vec{k}^2)\\Gamma_c(k_0^2,\\vec{k}^2)=1\n\\end{equation}\nshowing that the ghost propagator dressing function is simply the inverse of \nthe ghost proper two-point function. Turning to the rest, we are faced with \na problem akin to matrix inversion in order to see the connection since the \nsum over all the different possible sources\/fields labelled by $\\gamma$ is \nnon-trivial in the above general formula. The decompositions of the \ntwo-point function do however mitigate the complexity somewhat. 
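Before setting up the general case, it is instructive to check symbolically that the transverse $A$--$\pi$ block of the tree-level tables already satisfies the inversion relation \eq{eq:leg} (a sympy sketch; the symbol names are illustrative, and the longitudinal $\delta_{ij}$ pieces are supplied separately by the $\lambda$ and $\tau$ entries):

```python
import sympy as sp

k0, k2 = sp.symbols('k0 k2', positive=True)  # k2 plays the role of \vec{k}^2
I = sp.I
den = k0**2 - k2

# Transverse scalar parts of the tree-level propagators (cf. Table w0)
W = sp.Matrix([[I/den,  -k0/den],
               [k0/den, I*k2/den]])

# Transverse scalar parts of the tree-level proper functions (cf. Table g0)
G = sp.Matrix([[I*k2, k0],
               [-k0,  I]])

# In this sector the Legendre-transform relation reduces to W*Gamma = 1
P = sp.simplify(W * G)
assert P == sp.eye(2)
print("transverse A-pi block: W * Gamma = 1 at tree level")
```

Note that neither $W$ nor $\Gamma$ is diagonal here, so the relation is genuinely a matrix inversion rather than a simple reciprocal.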
We tabulate \nthe possible combinations of terms in Table~\\ref{tab:leg0}.\n\n\\begin{table}[t]\n\\begin{tabular}{|c||c|c||c|c||c|c|}\\hline\n$\\alpha$,$\\beta$&$\\vec{A}$&$\\vec{\\pi}$&$\\sigma$&$\\phi$&$\\lambda$&$\\tau$\\\\\n\\hline\\hline\n$\\vec{A}$&$\\vec{A}$,$\\vec{\\pi}$,$\\lambda$&$\\vec{A}$,$\\vec{\\pi}$&\n$\\vec{A}$,$\\vec{\\pi}$&$\\vec{A}$,$\\vec{\\pi}$&$\\vec{A}$&$\\vec{\\pi}$\\\\\n\\hline\n$\\vec{\\pi}$&$\\vec{A}$,$\\vec{\\pi}$&$\\vec{A}$,$\\vec{\\pi}$,$\\tau$&\n$\\vec{A}$,$\\vec{\\pi}$&$\\vec{A}$,$\\vec{\\pi}$&$\\vec{A}$&$\\vec{\\pi}$\\\\\n\\hline\\hline\n$\\sigma$&$\\sigma$,$\\phi$,$\\lambda$&$\\sigma$,$\\phi$&$\\sigma$,$\\phi$&$\\sigma$,$\\phi$&---&---\\\\\n\\hline\n$\\phi$&$\\sigma$,$\\phi$,$\\lambda$&$\\sigma$,$\\phi$,$\\tau$&$\\sigma$,$\\phi$&$\\sigma$,$\\phi$&\n---&---\\\\\\hline\\hline\n$\\lambda$&$\\vec{A}$,$\\sigma$,$\\phi$&$\\vec{A}$,$\\sigma$,$\\phi$&$\\vec{A}$,$\\sigma$,$\\phi$&\n$\\vec{A}$,$\\sigma$,$\\phi$&$\\vec{A}$&---\\\\\\hline\n$\\tau$&$\\vec{\\pi}$,$\\phi$&$\\vec{\\pi}$,$\\phi$&$\\vec{\\pi}$,$\\phi$&\n$\\vec{\\pi}$,$\\phi$&---&$\\vec{\\pi}$\\\\\\hline\n\\end{tabular}\n\\caption{\\label{tab:leg0}Possible terms for the equations relating \npropagator and proper two-point functions stemming from the Legendre \ntransform. Entries denote the allowed field types $\\gamma$ in \\eq{eq:leg}.}\n\\end{table}\n\nWe start by considering the top left components of Table~\\ref{tab:leg0} \ninvolving only $\\vec{A}$, $\\vec{\\pi}$ and the known functions with $\\lambda$ and \n$\\tau$. 
After decomposition, we have (suppressing the common argument $k$)\n\\begin{eqnarray}\nk_0^2D_{A\\pi}\\Gamma_{A\\pi}-\\vec{k}^2D_{AA}\\Gamma_{AA}&=&k_0^2-\\vec{k}^2,\\nonumber\\\\\nk_0^2D_{A\\pi}\\Gamma_{A\\pi}-\\vec{k}^2D_{\\pi\\pi}\\Gamma_{\\pi\\pi}&=&k_0^2-\\vec{k}^2\n,\\nonumber\\\\\nD_{AA}\\Gamma_{A\\pi}-D_{A\\pi}\\Gamma_{\\pi\\pi}&=&0,\\nonumber\\\\\nD_{A\\pi}\\Gamma_{AA}-D_{\\pi\\pi}\\Gamma_{A\\pi}&=&0.\n\\end{eqnarray}\nWe can thus express the propagator functions $D$ in terms of the proper \ntwo-point functions $\\Gamma$ and we have\n\\begin{eqnarray}\nD_{AA}&=&\\frac{\\left(k_0^2-\\vec{k}^2\\right)\\Gamma_{\\pi\\pi}}{\n\\left(k_0^2\\Gamma_{A\\pi}^2-\\vec{k}^2\\Gamma_{AA}\\Gamma_{\\pi\\pi}\\right)},\\nonumber\\\\\nD_{\\pi\\pi}&=&\\frac{\\left(k_0^2-\\vec{k}^2\\right)\\Gamma_{AA}}{\n\\left(k_0^2\\Gamma_{A\\pi}^2-\\vec{k}^2\\Gamma_{AA}\\Gamma_{\\pi\\pi}\\right)},\\nonumber\\\\\nD_{A\\pi}&=&\\frac{\\left(k_0^2-\\vec{k}^2\\right)\\Gamma_{A\\pi}}{\n\\left(k_0^2\\Gamma_{A\\pi}^2-\\vec{k}^2\\Gamma_{AA}\\Gamma_{\\pi\\pi}\\right)}.\n\\end{eqnarray}\nClearly, these expressions can be inverted to give the functions $\\Gamma$ in \nterms of the functions $D$. Next, let us consider the central components of \nTable~\\ref{tab:leg0} involving only $\\sigma$ and $\\phi$. 
We get the following \nequations:\n\\begin{eqnarray}\n-D_{\\sigma\\si}\\Gamma_{\\sigma\\si}+D_{\\sigma\\phi}\\Gamma_{\\pi\\sigma}&=&1,\\nonumber\\\\\nD_{\\sigma\\phi}\\Gamma_{\\pi\\sigma}+D_{\\phi\\phi}\\left(\\Gamma_{\\pi\\pi}\n+\\ov{\\Gamma}_{\\pi\\pi}\\right)&=&1,\\nonumber\\\\\n-D_{\\sigma\\si}\\Gamma_{\\pi\\sigma}+D_{\\sigma\\phi}\\left(\\Gamma_{\\pi\\pi}\n+\\ov{\\Gamma}_{\\pi\\pi}\\right)&=&0,\\nonumber\\\\\nD_{\\sigma\\phi}\\Gamma_{\\sigma\\si}+D_{\\phi\\phi}\\Gamma_{\\pi\\sigma}&=&0.\n\\end{eqnarray}\nThe propagator functions in terms of the proper two-point functions are then\n\\begin{eqnarray}\nD_{\\sigma\\si}&=&\\frac{\\left(\\Gamma_{\\pi\\pi}+\\ov{\\Gamma}_{\\pi\\pi}\\right)}{\n\\Gamma_{\\pi\\sigma}^2-\\Gamma_{\\sigma\\si}\\left(\\Gamma_{\\pi\\pi}+\\ov{\\Gamma}_{\\pi\\pi}\\right)}\n,\\nonumber\\\\\nD_{\\phi\\phi}&=&-\\frac{\\Gamma_{\\sigma\\si}}{\\Gamma_{\\pi\\sigma}^2-\\Gamma_{\\sigma\\si}\n\\left(\\Gamma_{\\pi\\pi}+\\ov{\\Gamma}_{\\pi\\pi}\\right)},\\nonumber\\\\\nD_{\\sigma\\phi}&=&\\frac{\\Gamma_{\\pi\\sigma}}{\\Gamma_{\\pi\\sigma}^2-\\Gamma_{\\sigma\\si}\\left(\\Gamma_{\\pi\\pi}\n+\\ov{\\Gamma}_{\\pi\\pi}\\right)}.\n\\end{eqnarray}\nThere are three more equations that are of interest. 
These are the $\\sigma$-$A$, \n$\\phi$-$A$ and $\\lambda$-$A$ entries of Table~\\ref{tab:leg0} and they read:\n\\begin{eqnarray}\nD_{\\sigma\\si}\\Gamma_{A\\sigma}-D_{\\sigma\\phi}\\left(\\Gamma_{A\\pi}+\\ov{\\Gamma}_{A\\pi}\\right)\n+D_{\\sigma\\lambda}&=&0,\\\\\n-D_{\\sigma\\phi}\\Gamma_{A\\sigma}-D_{\\phi\\phi}\\left(\\Gamma_{A\\pi}+\\ov{\\Gamma}_{A\\pi}\\right)\n+D_{\\phi\\lambda}&=&0,\\\\\n\\ov{\\Gamma}_{AA}-\\frac{k_0^2}{\\vec{k}^2}\\left[D_{\\sigma\\lambda}\\Gamma_{A\\sigma}\n+D_{\\phi\\lambda}\\left(\\Gamma_{A\\pi}+\\ov{\\Gamma}_{A\\pi}\\right)\\right]&=&0.\n\\end{eqnarray}\nWhat these equations tell us is that $D_{\\sigma\\lambda}$ and $D_{\\phi\\lambda}$ are \nrelated to $\\ov{\\Gamma}_{A\\pi}$ and $\\Gamma_{A\\sigma}$ with all other coefficients \nbeing determined. $\\ov{\\Gamma}_{AA}$ is then given as a specific combination \nand is the `extra' proper two-point function alluded to earlier. However, \nthese functions will not be of any real concern since $D_{\\sigma\\lambda}$ and \n$D_{\\phi\\lambda}$ do not enter any loop diagrams of the Dyson--Schwinger equations. 
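The elimination just described is simple enough to verify symbolically; a sympy sketch (the symbol names are illustrative shorthand, with GAp standing for the combination $\Gamma_{A\pi}+\ov{\Gamma}_{A\pi}$):

```python
import sympy as sp

k0, k2 = sp.symbols('k0 k2', positive=True)      # k2 = \vec{k}^2
Dss, Dsf, Dff, Dsl, Dfl = sp.symbols('Dss Dsf Dff Dsl Dfl')
GAs, GAp, GbarAA = sp.symbols('GAs GAp GbarAA')  # GAp = Gamma_Api + barGamma_Api

# The sigma-A, phi-A and lambda-A relations from the Legendre transform
eqs = [
    sp.Eq(Dss*GAs - Dsf*GAp + Dsl, 0),
    sp.Eq(-Dsf*GAs - Dff*GAp + Dfl, 0),
    sp.Eq(GbarAA - (k0**2/k2)*(Dsl*GAs + Dfl*GAp), 0),
]
sol = sp.solve(eqs, [Dsl, Dfl, GbarAA], dict=True)[0]

# Eliminating D_{sigma lambda} and D_{phi lambda} fixes barGamma_AA
# as a specific combination of the remaining dressing functions
expected = (k0**2/k2)*(-Dss*GAs**2 + 2*Dsf*GAs*GAp + Dff*GAp**2)
assert sp.simplify(sol[GbarAA] - expected) == 0
print("consistency relation for barGamma_AA verified")
```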
In \neffect, $\\ov{\\Gamma}_{A\\pi}$, $\\Gamma_{A\\sigma}$ and $\\ov{\\Gamma}_{AA}$ form a consistency \ncheck on the truncation of the Dyson--Schwinger equations since we have that\n\\begin{equation}\n\\ov{\\Gamma}_{AA}=\\frac{k_0^2}{\\vec{k}^2}\\left[-D_{\\sigma\\si}\\Gamma_{A\\sigma}^2\n+2D_{\\sigma\\phi}\\Gamma_{A\\sigma}\\left(\\Gamma_{A\\pi}+\\ov{\\Gamma}_{A\\pi}\\right)\n+D_{\\phi\\phi}\\left(\\Gamma_{A\\pi}+\\ov{\\Gamma}_{A\\pi}\\right)^2\\right].\n\\end{equation}\nIt is apparent that, unlike in covariant gauges, the proper two-point function \nfor the gluon is not necessarily transverse.\n\nIn summary, leaving the problem of the vertices aside, in order to solve the \ntwo-point Dyson--Schwinger equations we need to calculate seven proper two-point \nfunctions:\n\\begin{equation}\n\\Gamma_c,\\;\\;\\Gamma_{AA},\\;\\;\\Gamma_{A\\pi},\\;\\;\\Gamma_{\\pi\\pi},\\;\\;\\ov{\\Gamma}_{\\pi\\pi},\\;\\;\n\\Gamma_{\\sigma\\si},\\;\\;\\Gamma_{\\pi\\sigma},\n\\end{equation}\nwhich will give us the required propagator functions:\n\\begin{equation}\nD_c,\\;\\;D_{AA},\\;\\;D_{A\\pi},\\;\\;D_{\\pi\\pi},\\;\\;D_{\\sigma\\si},\\;\\;D_{\\sigma\\phi}\n,\\;\\;D_{\\phi\\phi}.\n\\end{equation}\nThe three proper two-point functions $\\ov{\\Gamma}_{AA}$, $\\ov{\\Gamma}_{A\\pi}$ and \n$\\Gamma_{A\\sigma}$ give a consistency check on any truncation scheme but do not \ndirectly contribute further.\n\n\\section{Derivation of the Propagator Dyson--Schwinger Equations}\n\\setcounter{equation}{0}\nIn this section, we present the explicit derivation of the relevant Dyson--Schwinger \nequations for proper two-point functions.\n\n\\subsection{Ghost Equations}\nAs will be shown in this subsection, the ghost sector of the theory plays a \nrather special role. We will begin by deriving the ghost Dyson--Schwinger equation (this \nwill serve as a template for the derivation of the other Dyson--Schwinger equations). 
\nWith this it is possible to point out two particular features of the ghost \nsector: that the ghost-gluon vertex is UV finite and that the energy ($k^0$ \ncomponent) argument of any ghost line is irrelevant, i.e., that any proper \nfunction involving ghost fields is independent of the ghost energy.\n\nThe derivation of the Dyson--Schwinger equation for the ghost proper two-point function \nbegins with \\eq{eq:ghost0}. Taking the functional derivative with respect \nto $\\imath c_w^d$, using the configuration space definition of the \ntree-level ghost-gluon vertex and omitting terms which will eventually \nvanish when sources are set to zero, we have\n\\begin{equation}\n\\ev{\\imath c_w^d\\imath\\ov{c}_x^a}=\\imath\\delta^{da}\\nabla_x^2\\delta(w-x)\n+\\int\\dx{y}\\dx{z}\\Gamma_{\\ov{c}cAi}^{(0)abc}(x,y,z)\\frac{\\delta}{\\delta\\imath c_w^d}\n\\ev{\\imath\\ov{\\eta}_y^b\\imath J_{iz}^c}.\n\\end{equation}\nUsing partial differentiation we see that\n\\begin{equation}\n\\frac{\\delta}{\\delta\\imath c_w^d}\\ev{\\imath\\ov{\\eta}_y^b\\imath J_{iz}^c}\n=-\\imath\\int\\dx{v}\\ev{\\imath J_{iz}^c\\imath\\ov{\\eta}_y^b\\imath\\eta_v^e}\n\\ev{\\imath\\ov{c}_v^e\\imath c_w^d}.\n\\end{equation}\nIn the above we have again used the fact that when sources are set to zero, \nthe only ghost functions that survive are those with pairs of \nghost-antighost fields. 
Since the ghost fields anticommute, we get that\n\\begin{equation}\n\\ev{\\imath\\ov{c}_x^a\\imath c_w^d}=-\\imath\\delta^{ad}\\nabla_x^2\\delta(x-w)\n+\\imath\\int\\dx{y}\\dx{z}\\dx{v}\\Gamma_{\\ov{c}cAi}^{(0)abc}(x,y,z)\n\\ev{\\imath J_{iz}^c\\imath\\ov{\\eta}_y^b\\imath\\eta_v^e}\n\\ev{\\imath\\ov{c}_v^e\\imath c_w^d}\n\\end{equation}\nTaking the partial derivative of \\eq{eq:ghleg} with respect to \n$\\imath J_{iz}^c$ we have (notice that when using partial derivatives here, \nwe must include all possible contributions which for clarity are included \nexplicitly here):\n\\begin{equation}\n\\int\\dx{v}\\ev{\\imath J_{iz}^c\\imath\\ov{\\eta}_y^b\\imath\\eta_v^e}\n\\ev{\\imath\\ov{c}_v^e\\imath c_w^d}=-\\imath\\int\\dx{v}\\dx{u}\n\\ev{\\imath\\ov{\\eta}_y^b\\imath\\eta_v^e}\\left\\{\n\\ev{\\imath J_{iz}^c\\imath J_{ju}^f}\n\\ev{\\imath A_{ju}^f\\imath\\ov{c}_v^e\\imath c_w^d}\n+\\ev{\\imath J_{iz}^c\\imath K_{ju}^f}\n\\ev{\\imath\\pi_{ju}^f\\imath\\ov{c}_v^e\\imath c_w^d}\\right\\}.\n\\end{equation}\nOur ghost Dyson--Schwinger equation in configuration space is thus\n\\begin{eqnarray}\n\\lefteqn{\\ev{\\imath\\ov{c}_x^a\\imath c_w^d}\n=-\\imath\\delta^{ad}\\nabla_x^2\\delta(x-w)}\\nonumber\\\\&&\n+\\int\\dx{y}\\dx{z}\\dx{v}\\dx{u}\\Gamma_{\\ov{c}cAi}^{(0)abc}(x,y,z)\n\\ev{\\imath\\ov{\\eta}_y^b\\imath\\eta_v^e}\\left\\{\n\\ev{\\imath J_{iz}^c\\imath J_{ju}^f}\n\\ev{\\imath A_{ju}^f\\imath\\ov{c}_v^e\\imath c_w^d}\n+\\ev{\\imath J_{iz}^c\\imath K_{ju}^f}\n\\ev{\\imath\\pi_{ju}^f\\imath\\ov{c}_v^e\\imath c_w^d}\\right\\}.\n\\end{eqnarray}\nWe Fourier transform this result to get the Dyson--Schwinger equation for the proper \ntwo-point ghost function in momentum space:\n\\begin{eqnarray}\n\\lefteqn{\\Gamma_c^{ad}(k)=\\delta^{ad}\\imath\\vec{k}^2}\\nonumber\\\\\n&&-\\int\\left(-\\dk{\\omega}\\right)\\Gamma_{\\ov{c}cAi}^{(0)abc}(k,-\\omega,\\omega-k)\nW_c^{be}(\\omega)\\left\\{W_{AAij}^{cf}(k-\\omega)\\Gamma_{\\ov{c}cAj}^{edf}(\\omega,-k,k-\\omega)\n+W_{A\\pi 
ij}^{cf}(k-\\omega)\\Gamma_{\\ov{c}c\\pi j}^{edf}(\\omega,-k,k-\\omega)\\right\\}\n.\\nonumber\\\\\n\\label{eq:ghdse0}\n\\end{eqnarray}\nWith the convention that the self-energy term on the right-hand side has an \noverall minus sign, we identify $\\left(-\\dk{\\omega}\\right)$ as the loop \nintegration measure in momentum space.\n\nWith any two-point Dyson--Schwinger equation, it is clear that there are two orderings \nfor the functional derivatives on the left-hand side. In the same way, \nthere are three orderings for three-point functions and so on. This means \nthat there are $n$ different equations for the $n$-point proper Green's \nfunctions, although obviously they all have the same solution and must be \nrelated in some way. It is therefore instructive to consider also the \nequation generated by the reverse ordering to see if this will have any \nconsequence. In the ghost case, this means repeating the above analysis but \nstarting with the second ghost equation of motion, \\eq{eq:ghdse2}. The \ncorresponding Dyson--Schwinger equation in momentum space is\n\\begin{eqnarray}\n\\lefteqn{\\Gamma_c^{ad}(k)=\\delta^{ad}\\imath\\vec{k}^2}\\nonumber\\\\&&\n-\\int\\left(-\\dk{\\omega}\\right)\\left\\{W_{AAij}^{fc}(\\omega)\n\\Gamma_{\\ov{c}cAj}^{abc}(k,-k-\\omega,\\omega)+W_{A\\pi ij}^{fc}(\\omega)\n\\Gamma_{\\ov{c}c\\pi j}^{abc}(k,-k-\\omega,\\omega)\\right\\}W_c^{be}(k+\\omega)\n\\Gamma_{\\ov{c}cAi}^{(0)edf}(k+\\omega,-k,-\\omega).\\nonumber\\\\\n\\label{eq:ghdse1}\n\\end{eqnarray}\nThis equation is formally equivalent to \\eq{eq:ghdse0} but we notice that \nthe ordering of the dressed vertices is different. (It is useful to check \nthat the two equations are the same by taking both vertices to be bare such \nthat the equivalence is manifest.)\n\nNotice that one of the vertices that form the loop term(s) must be bare. 
\nThis arises naturally through the derivation above and if one considers a \nperturbative expansion it is crucial to avoid overcounting of graphs. The \nchoice of which vertex is bare is arbitrary and related to the fact that \nthere are $n$ ways of writing the equation for an $n$-point function. Given \nthat for any loop term we can extract a single bare vertex, for any \nthree-point function involving a ghost-antighost pair we will have a loop \nterm with the following structure (see also Figure~\\ref{fig:ghost}):\n\\begin{equation}\n\\int\\dk{\\omega}\\Gamma_{\\alpha\\beta\\ov{c}cj}^{dgac}(\\omega,p,k-p,-k-\\omega)W_c^{ce}(k+\\omega)\nW_{A\\alpha ij}^{fd}(\\omega)\\Gamma_{\\ov{c}cAi}^{(0)ebf}(k+\\omega,-k,-\\omega).\n\\end{equation}\nNow, since the only propagators involving $A$ are transverse (the $W_{A\\lambda}$ \npropagator is disallowed since no proper vertex function with \n$\\lambda$-derivative exists) the loop term must vanish as $k\\rightarrow0$ for \nfinite $p$\\footnote{The possibility that \n$\\Gamma_{\\Phi_{\\alpha}\\Phi_{\\beta}\\ov{c}c}^{dgac}(\\omega,p,k-p,-k-\\omega)$ is also singular \nin this limit such that the loop remains finite is discounted since at \nfinite $\\omega$ and $p$ this would imply that some colored bound state \nexists.}. Since the loop term vanishes under some finite, kinematical \nconfiguration, an UV divergence (which is independent of the kinematical \nconfiguration) cannot occur and we can can say that this vertex is UV \nfinite. It is tempting to think that such an argument applies to the \ntwo-point ghost equation, however this is false since whilst the loop term \nvanishes, so does the $\\vec{k}^2$ factor that multiplies the rest of the \nequation.\n\n\\begin{figure}[t]\n\\includegraphics{ghostvert.ps}\n\\caption{\\label{fig:ghost} A diagrammatical representation of the \n$\\Gamma_{\\ov{c}c\\beta}(k-p,-k,p)$ proper vertex dressing. 
Because of the form of \nthe tree-level ghost-gluon vertex and the transversality of the vector \npropagator, the dressing function vanishes in the limit $k\\rightarrow0$. \nFilled blobs denote dressed propagators and empty circles denote dressed \nproper vertex functions. Wavy lines denote proper functions, springs denote \nconnected (propagator) functions and dashed lines denote the ghost \npropagator.}\n\\end{figure}\n\nLet us now show that any Green's function involving a ghost-antighost pair \nis independent of the ghost and antighost energies. The proof of this is \nperturbative in nature. We notice that both the tree-level ghost propagator \nand the ghost-gluon vertex are independent of the energy. This means that \nin any one-loop diagram which has at least one internal ghost propagator \n(and hence at least two ghost-gluon vertices) the energy scale associated \nwith the ghost propagator is absent. Using energy conservation, another \nenergy scale can be eliminated and we choose this to be the antighost \nenergy. At two loops, we now have the situation whereby the dressed \ninternal ghost propagator is again independent of the energy and the dressed \nghost-gluon vertex depends only on the gluon energy, and so the argument can \nbe repeated. This can be applied to all orders in the perturbative \nexpansion, which completes the proof. We thus have that in particular\n\\begin{eqnarray}\nD_c(k_0^2,\\vec{k}^2)&=&D_c(\\vec{k}^2)\\nonumber\\\\\n\\Gamma_{A\\ov{c}ci}(k_1,k_2,k_3)&=&\\Gamma_{A\\ov{c}ci}(k_1,\\vec{k}_2,\\vec{k}_3).\n\\end{eqnarray}\n\n\\subsection{$\\sigma$-based Equation}\nGiven the discussion in the previous section about which proper two-point \nfunctions are relevant, there are only two proper two-point functions \ninvolving derivatives with respect to $\\sigma$ to consider --- \n$\\ev{\\imath\\sigma\\imath\\sigma}$ and $\\ev{\\imath\\sigma\\imath\\pi}$.
Since the \n$\\sigma$-based equation of motion, \\eq{eq:sidse0}, involves two interaction terms \nwhereas the $\\pi$-based equation, \\eq{eq:pidse0}, has only one, we use the \nderivatives of \\eq{eq:pidse0} to derive the Dyson--Schwinger equation for \n$\\ev{\\imath\\sigma\\imath\\pi}$ (see next subsection). We therefore consider the \nfunctional derivative of \\eq{eq:sidse0} with respect to $\\imath\\sigma_w^d$, \nafter which the sources will be set to zero. We have, again identifying the \ntree-level vertices,\n\\begin{equation}\n\\ev{\\imath\\sigma_w^d\\imath\\sigma_x^a}=-\\int\\dx{y}\\dx{z}\n\\Gamma_{\\pi\\sigma Aij}^{(0)cab}(z,x,y)\\frac{\\delta}{\\delta\\imath\\sigma_w^d}\n\\ev{\\imath J_{jy}^b\\imath K_{iz}^c}-\\int\\dx{y}\\dx{z}\n\\Gamma_{\\phi\\sigma Ai}^{(0)cab}(z,x,y)\\frac{\\delta}{\\delta\\imath\\sigma_w^d}\n\\ev{\\imath J_{iy}^b\\imath\\kappa_z^c}.\n\\end{equation}\nUsing partial differentiation, and with compact notation,\n\\begin{equation}\n\\frac{\\delta}{\\delta\\imath\\sigma_w^d}\\ev{\\imath J_{jy}^b\\imath K_{iz}^c}\n=-\\ev{\\imath K_{iz}^c\\imath J_\\alpha}\\ev{\\imath J_{jy}^b\\imath J_\\beta}\n\\ev{\\imath\\Phi_\\beta\\imath\\Phi_\\alpha\\imath\\sigma_w^d}\n\\end{equation}\n(similarly for the second term). 
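\nSchematically, and suppressing the signs and factors of $\\imath$ that are fixed by the conventions above (this is a sketch of the standard result, not a new identity), the chain rule simply expresses the fact that a functional derivative of a connected two-point function inserts a proper three-point vertex dressed by two connected propagators:\n\\begin{equation}\n\\frac{\\delta}{\\delta\\imath\\Phi_\\gamma}\\ev{\\imath J_\\alpha\\imath J_\\beta}\n\\sim\\ev{\\imath J_\\alpha\\imath J_\\mu}\\ev{\\imath J_\\beta\\imath J_\\nu}\n\\ev{\\imath\\Phi_\\nu\\imath\\Phi_\\mu\\imath\\Phi_\\gamma},\n\\end{equation}\nwhich follows from differentiating the inverse relation between the connected and proper two-point functions supplied by the Legendre transform.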
This gives the Dyson--Schwinger equation in \nconfiguration space:\n\\begin{eqnarray}\n\\ev{\\imath\\sigma_w^d\\imath\\sigma_x^a}&=&\\int\\dx{y}\\dx{z}\n\\Gamma_{\\pi\\sigma Aij}^{(0)cab}(z,x,y)\\ev{\\imath K_{iz}^c\\imath J_\\alpha}\n\\ev{\\imath J_{jy}^b\\imath J_\\beta}\n\\ev{\\imath\\Phi_\\beta\\imath\\Phi_\\alpha\\imath\\sigma_w^d}\\nonumber\\\\&&\n+\\int\\dx{y}\\dx{z}\\Gamma_{\\phi\\sigma Ai}^{(0)cab}(z,x,y)\n\\ev{\\imath\\kappa_z^c\\imath J_\\alpha}\\ev{\\imath J_{iy}^b\\imath J_\\beta}\n\\ev{\\imath\\Phi_\\beta\\imath\\Phi_\\alpha\\imath\\sigma_w^d}.\n\\end{eqnarray}\nTaking the Fourier transform and tidying up indices, the Dyson--Schwinger equation in \nmomentum space is thus\n\\begin{eqnarray}\n\\Gamma_{\\sigma\\si}^{ad}(k)&=&-\\int\\left(-\\dk{\\omega}\\right)\n\\Gamma_{\\pi\\sigma Aij}^{(0)cab}(\\omega-k,k,-\\omega)W_{A\\beta jl}^{be}(\\omega)\n\\Gamma_{\\beta\\alpha\\sigma lk}^{efd}(\\omega,k-\\omega,-k)W_{\\alpha\\pi ki}^{fc}(\\omega-k)\\nonumber\\\\\n&&-\\int\\left(-\\dk{\\omega}\\right)\\Gamma_{\\phi\\sigma Ai}^{(0)cab}(\\omega-k,k,-\\omega)\nW_{A\\beta ij}^{be}(\\omega)\\Gamma_{\\beta\\alpha\\sigma j}^{efd}(\\omega,k-\\omega,-k)\nW_{\\alpha\\phi}^{fc}(\\omega-k).\n\\label{eq:sidse1}\n\\end{eqnarray}\nA couple of remarks are in order here. Firstly, there is no bare term on \nthe right-hand side because the action, under the first order formalism, is \nlinear in $\\sigma$. Secondly, the implicit summation over the terms labelled \nby $\\alpha$ and $\\beta$ means that in fact there are eight possible loop terms \ncomprising the self-energy. However, only two of these involve a \nprimitively divergent vertex.
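\nTo make this counting explicit (the breakdown here simply unpacks the sum; the allowed field types for each index are those for which the corresponding mixed propagators are non-vanishing): each of the two self-energy integrals in \\eq{eq:sidse1} carries an independent sum over $\\alpha$ and $\\beta$, with two allowed field types for each index, so that the total number of loop terms is\n\\begin{equation}\n\\underbrace{2\\times2}_{{\\mbox {\\scriptsize first integral}}}\n+\\underbrace{2\\times2}_{{\\mbox {\\scriptsize second integral}}}=8.\n\\end{equation}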
It is an uncomfortable truth that the formal, \nnon-local delta function constraint arising from the linearity of the action \nin $\\sigma$ blossoms into a large set of local self-energy integrals.\n\n\\subsection{$\\pi$-based equations}\nSince the $\\pi$-based equation of motion, \\eq{eq:pidse0}, contains only a \nsingle interaction term, we favor it for calculating the \n$\\ev{\\imath A\\imath\\pi}$ and $\\ev{\\imath\\pi\\imath\\sigma}$ proper two-point \nfunctions (as well as $\\ev{\\imath\\pi\\imath\\pi}$). As discussed previously, \nwe could in principle calculate these from the $A$-based and $\\sigma$-based \nequations as well, in order to check the validity of any truncations used; \nin fact, this connection may prove useful in elucidating constraints on the \nform of the truncated vertices used. Perturbatively, all equations will \nprovide the same result at any given order.\n\nUsing the same techniques as in the last subsection, we get the following \nDyson--Schwinger equations in momentum space:\n\\begin{eqnarray}\n\\Gamma_{\\pi\\sigma i}^{ad}(k)&=&\\delta^{ad}k_i-\\int(-\\dk{\\omega})\n\\Gamma_{\\pi\\sigma Aij}^{(0)abc}(k,-\\omega,\\omega-k)W_{\\sigma\\beta}^{be}(\\omega)\n\\Gamma_{\\beta\\alpha\\sigma l}^{efd}(\\omega,k-\\omega,-k)W_{\\alpha Alj}^{fc}(\\omega-k),\\\\\n\\Gamma_{\\pi Aik}^{ad}(k)&=&-\\delta^{ad}k^0\\delta_{ik}-\\int(-\\dk{\\omega})\n\\Gamma_{\\pi\\sigma Aij}^{(0)abc}(k,-\\omega,\\omega-k)W_{\\sigma\\beta}^{be}(\\omega)\n\\Gamma_{\\beta\\alpha Alk}^{efd}(\\omega,k-\\omega,-k)W_{\\alpha Alj}^{fc}(\\omega-k),\\\\\n\\Gamma_{\\pi\\pi ik}^{ad}(k)&=&\\imath\\delta^{ad}\\delta_{ik}-\\int(-\\dk{\\omega})\n\\Gamma_{\\pi\\sigma Aij}^{(0)abc}(k,-\\omega,\\omega-k)W_{\\sigma\\beta}^{be}(\\omega)\n\\Gamma_{\\beta\\alpha\\pi lk}^{efd}(\\omega,k-\\omega,-k)W_{\\alpha Alj}^{fc}(\\omega-k).\n\\end{eqnarray}\nAgain, notice that the summation over the allowed types of fields indicated \nby $\\alpha$ and $\\beta$ leads to multiple
possibilities.\n\n\\subsection{$A$-based equation}\nUsing the tree-level forms for the vertices and discarding those terms which \nwill eventually vanish when sources are set to zero, it is possible to \nrewrite \\eq{eq:adse0} as\n\\begin{eqnarray}\n\\ev{\\imath A_{ix}^a}&=&\n\\left[\\delta_{ij}\\nabla_x^2-\\nabla_{ix}\\nabla_{jx}\\right]A_{jx}^a\n+\\int\\dx{y}\\dx{z}\\Gamma_{\\ov{c}cAi}^{(0)bca}(z,y,x)\n\\ev{\\imath\\ov{\\eta}_y^c\\imath\\eta_z^b}-\\int\\dx{y}\\dx{z}\n\\Gamma_{\\phi\\sigma Ai}^{(0)bca}(z,y,x)\\ev{\\imath\\rho_y^c\\imath\\kappa_z^b}\n\\nonumber\\\\&&\n-\\int\\dx{y}\\dx{z}\\Gamma_{\\pi\\sigma Aji}^{(0)bca}(z,y,x)\n\\ev{\\imath\\rho_y^c\\imath K_{jz}^b}\n-\\int\\dx{y}\\dx{z}\\frac{1}{2}\\Gamma_{3Akji}^{(0)bca}(z,y,x)\n\\ev{\\imath J_{jy}^c\\imath J_{kz}^b}\n\\nonumber\\\\&&\n-\\int\\dx{y}\\dx{z}\\dx{w}\\frac{1}{6}\\Gamma_{4Alkji}^{(0)dcba}(w,z,y,x)\n\\left[3\\imath A_{jy}^b\\ev{\\imath J_{kz}^c\\imath J_{lw}^d}\n+\\imath\\ev{\\imath J_{jy}^b\\imath J_{kz}^c\\imath J_{lw}^d}\\right].\n\\end{eqnarray}\nFunctionally differentiating this with respect to $A$ and proceeding as \nbefore, noting the following for the four-gluon connected vertex\n\\begin{eqnarray}\n\\imath\\frac{\\delta}{\\delta\\imath A_{mv}^e}\n\\ev{\\imath J_{jy}^b\\imath J_{kz}^c\\imath J_{lw}^d}&=&\n-\\ev{\\imath J_{kz}^c\\imath J_\\nu}\n\\ev{\\imath A_{mv}^e\\imath\\Phi_\\nu\\imath\\Phi_\\mu}\n\\ev{\\imath J_\\mu\\imath J_\\gamma}\\ev{\\imath J_{jy}^b\\imath J_\\lambda}\n\\ev{\\imath\\Phi_\\lambda\\imath\\Phi_\\gamma\\imath\\Phi_\\delta}\n\\ev{\\imath J_\\delta\\imath J_{lw}^d}\n\\nonumber\\\\&&\n-\\ev{\\imath J_{kz}^c\\imath J_\\gamma}\\ev{\\imath J_{jy}^b\\imath J_\\nu}\n\\ev{\\imath A_{mv}^e\\imath\\Phi_\\nu\\imath\\Phi_\\mu}\n\\ev{\\imath J_\\mu\\imath J_\\lambda}\n\\ev{\\imath\\Phi_\\lambda\\imath\\Phi_\\gamma\\imath\\Phi_\\delta}\n\\ev{\\imath J_\\delta\\imath J_{lw}^d}\n\\nonumber\\\\&&\n-\\ev{\\imath J_{kz}^c\\imath J_\\gamma}\\ev{\\imath J_{jy}^b\\imath 
J_\\lambda}\n\\ev{\\imath\\Phi_\\lambda\\imath\\Phi_\\gamma\\imath\\Phi_\\delta}\n\\ev{\\imath J_\\delta\\imath J_\\mu}\n\\ev{\\imath A_{mv}^e\\imath\\Phi_\\mu\\imath\\Phi_\\nu}\n\\ev{\\imath J_\\nu\\imath J_{lw}^d}\n\\nonumber\\\\&&\n+\\ev{\\imath J_{kz}^c\\imath J_\\gamma}\\ev{\\imath J_{jy}^b\\imath J_\\lambda}\n\\ev{\\imath A_{mv}^e\\imath\\Phi_\\lambda\\imath\\Phi_\\gamma\\imath\\Phi_\\delta}\n\\ev{\\imath J_\\delta\\imath J_{lw}^d}\n\\end{eqnarray}\ngives the gluon Dyson--Schwinger equation which in momentum space reads:\n\\begin{eqnarray}\n\\Gamma_{AAim}^{ae}(k)&=&\n\\imath\\delta^{ae}\\left[\\vec{k}^2\\delta_{im}-k_ik_m\\right]\n+\\int(-\\dk{\\omega})\\Gamma_{\\ov{c}cA i}^{(0)bca}(\\omega-k,-\\omega,k)W_c^{cd}(\\omega)\n\\Gamma_{\\ov{c}cAm}^{dfe}(\\omega,k-\\omega,-k)W_c^{fb}(\\omega-k)\n\\nonumber\\\\&&\n-\\int(-\\dk{\\omega})\\Gamma_{\\phi\\sigma Ai}^{(0)bca}(\\omega-k,-\\omega,k)W_{\\sigma\\beta}^{cd}(\\omega)\n\\Gamma_{\\beta\\alpha Am}^{dfe}(\\omega,k-\\omega,-k)W_{\\alpha\\phi}^{fb}(\\omega-k)\n\\nonumber\\\\&&\n-\\int(-\\dk{\\omega})\\Gamma_{A\\pi\\sigma ij}^{(0)bca}(\\omega-k,-\\omega,k)W_{\\sigma\\beta}^{cd}(\\omega)\n\\Gamma_{\\beta\\alpha A km}^{dfe}(\\omega,k-\\omega,-k)W_{\\alpha\\pi kj}^{fb}(\\omega-k)\n\\nonumber\\\\&&\n-\\frac{1}{2}\\int(-\\dk{\\omega})\\Gamma_{3Akji}^{(0)bca}(\\omega-k,-\\omega,k)\nW_{A\\beta jl}^{cd}(\\omega)\\Gamma_{\\beta\\alpha Alnm}^{dfe}(\\omega,k-\\omega,-k)W_{\\alpha Ank}^{fb}(\\omega-k)\n\\nonumber\\\\&&\n-\\frac{1}{6}\\int(-\\dk{\\omega})(-\\dk{v})\n\\Gamma_{4Alkji}^{(0)dcba}(-v,-\\omega,v+\\omega-k,k)W_{A\\lambda jn}^{bf}(k-v-\\omega)\nW_{A\\gamma ko}^{cg}(\\omega)W_{A\\delta lp}^{dh}(v)\\times\n\\nonumber\\\\&&\n\\Gamma_{\\lambda\\gamma\\delta 
Anopm}^{fghe}(k-\\omega-v,\\omega,v,-k)\n\\nonumber\\\\&&\n+\\frac{1}{2}\\int(-\\dk{\\omega})\\Gamma_{4Aimlk}^{(0)aecd}(k,-k,\\omega,-\\omega)\nW_{AAkl}^{cd}(-\\omega)\n\\nonumber\\\\&&\n+\\frac{1}{2}\\int(-\\dk{\\omega})(-\\dk{v})\\Gamma_{4Alkji}^{(0)dcba}(-v,-\\omega,v+\\omega-k,k)\nW_{A\\delta ln}^{df}(v)W_{A\\gamma ko}^{cg}(\\omega)\\Gamma_{\\delta\\gamma\\lambda nop}^{fgh}(v,\\omega,-v-\\omega)\n\\times\n\\nonumber\\\\&&\nW_{\\lambda\\mu pq}^{hi}(v+\\omega)\\Gamma_{\\mu\\nu Aqrm}^{ije}(v+\\omega,k-v-\\omega,-k)\nW_{\\nu Arj}^{jd}(\\omega+v-k).\n\\end{eqnarray}\nAgain, the occurrence of the summation over $\\alpha,\\ldots,\\lambda$ leads to many \ndifferent possible loop terms.\n\n\\begin{figure*}[t]\n\\includegraphics[width=0.95\\linewidth]{dse.ps}\n\\caption{\\label{fig:dses} A diagrammatical representation of the coupled \nsystem of Dyson--Schwinger equations. Filled blobs denote dressed propagators and empty \ncircles denote dressed proper vertex functions. Wavy lines denote proper \nfunctions, springs denote connected (propagator) functions and dashed lines \ndenote the ghost propagator. Labels indicate the various possible \npropagator and vertex combinations that comprise the self-energy terms.}\n\\end{figure*}\n\nWe present the complete set of Dyson--Schwinger equations in Figure~\\ref{fig:dses}. \n\n\\section{Summary and Outlook}\n\nIn this work, we have derived the Dyson--Schwinger equations for Coulomb gauge Yang-Mills \ntheory within the first order formalism. In discussing the first order \nformalism, it was noted that the standard BRS transform is supplemented by a \nsecond transform which arises from the ambiguity in setting the gauge \ntransform \nproperties of the $\\pi$ and $\\phi$ fields.
The motivation behind the use of \nthe first order formalism is two-fold: the energy-divergent ghost sector can \nbe formally eliminated and the system can be formally reduced to physical \ndegrees of freedom, formal here meaning that the resulting expressions are \nnon-local and not useful for practical studies. The cancellation of the \nghost sector is seen within the context of the Dyson--Schwinger equations and the Green's \nfunctions stemming from the local action. It remains to be seen how the \nphysical degrees of freedom emerge.\n\nGiven the boundary conditions imposed by considering the Gribov problem, \nand given that the Jacobians of both the standard BRS transform and its \nsupplemental transform within the first order formalism remain trivial, the \nfield equations of motion and the Ward-Takahashi identity have been \nexplicitly derived. The supplemental part of the BRS transform has been \nshown to be equivalent to the equations of motion at the level of the \nfunctional integral and as such is more or less trivial. Certain exact \n(i.e., not containing interaction terms) relations for the Green's functions \nof the theory have been discussed and their solutions presented. These \nrelations serve to simplify the framework considerably. The propagators \npertaining to vector fields are shown to be transverse, the proper functions \ninvolving Lagrange multiplier fields reduce to kinematical factors or vanish, \nand the proper functions involving functional derivatives with respect to \nthe $\\phi$-field can be explicitly derived from those involving the \ncorresponding $\\pi$-field derivatives.\n\nThe full set of Feynman rules for the system has been derived along with the \ntree-level proper two-point functions, and the general form of the two-point \nfunctions (connected and proper) has been discussed. The relationship \nbetween the (connected) propagators and the proper two-point functions, \nstemming from the Legendre transform, has been studied.
The resulting \nequations show that within the first order formalism the dressing functions \nof the two types of two-point Green's functions are non-trivially related to \neach other. In addition, given that there are no vertices involving \nderivatives with respect to the Lagrange multiplier fields, the set of Dyson--Schwinger \nequations needed to study the two-point functions of the theory is reduced.\n\nThe relevant Dyson--Schwinger equations for the system have been derived in some detail. \nIt is shown how the number of self-energy terms is considerably amplified by \nthe introduction of the various fields inherent to the first order \nformalism. The Dyson--Schwinger equations arising from the ghost fields are shown to be \nindependent of the ghost energy and the vertices involving the ghost fields \nare UV finite.\n\nDespite the complexity of dealing with a non-covariant system with many \ndegrees of freedom, the outlook is positive and the rich structure of the \nDyson--Schwinger equations is not as intimidating as it might initially appear. The \nnon-covariance of the setting means that all the dressing functions that \nmust be calculated are generally functions of two variables. However, as \nseen from the Feynman rules, the energy dependence of the theory stems from \nthe tree-level propagators alone and not from vertices (a consequence of the \nfact that the only explicit time derivative in the action occurs within a \nkinetic term). The time-dependence of the integral kernels will therefore \nbe significantly less complicated than perhaps would otherwise occur. Given \nthe experience in Landau gauge, adapting the techniques, both analytical and \nnumerical, to solve the Dyson--Schwinger equations in Coulomb gauge seems eminently \npossible, though certainly challenging.
The results of such a study should \nprovide a better understanding of the issues of confinement, and with the \ninclusion of quarks, the hadron spectrum.\n\n\n\\begin{acknowledgments}\nIt is a pleasure to thank R.~Alkofer and D.~Zwanziger for useful and \ninspiring discussions. This work has been supported by the Deutsche \nForschungsgemeinschaft (DFG) under contracts no. Re856\/6-1 and Re856\/6-2.\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzahhd b/data_all_eng_slimpj/shuffled/split2/finalzzahhd new file mode 100644 index 0000000000000000000000000000000000000000..460e340500887d031166433feae2959efff2b0c0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzahhd @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{intro}\nInterest in non-Abelian Einstein-Yang-Mills theories was sparked by \nthe discovery of particle-like \\cite{bartnik} and \nnon-Reissner-Nordstr\\\"om (``hairy'') black hole \\cite{bizon}\nsolutions when the gauge group is ${\\mathfrak {su}}(2)$.\nSince then a plethora of hairy black holes, possessing non-trivial\ngeometry and field structure outside the event horizon, have been\nfound (see, for example, \\cite{hairy}), including \ncoloured black holes in ${\\mathfrak {su}}(N)$ \nEinstein-Yang-Mills theories \\cite{klei1,klei2}.\nMany of these objects have as a fundamental requirement for the\nexistence of hair, a non-Abelian gauge field, often coupled to \nother fields, such as a Higgs field \\cite{EM}.\nAs in \\cite{EM}, the existence of gauge hair is not surprising in\nitself, since the gauge field force is long-range.\nHowever, the non-Abelian nature of the field is important \nfor evading the no-hair theorem, for example, for scalar fields\ncoupled to the gauge field.\nThe vast majority of these solutions have been found only numerically,\nwith analytic work in this area at present being limited to \nan extensive study of the ${\\mathfrak 
{su}}(2)$ case\n\\cite{bfm,smoller,sw}, and some analysis of the field equations\nfor general $N$ \\cite{kunzle,kunz1}.\n\n\\bigskip\nIn this paper we continue this analysis of the coupled \nEinstein-Yang-Mills equations for an ${\\mathfrak {su}}(N)$\ngauge field, and prove analytically the existence of \n``genuine'' hairy black hole solutions for every $N$.\nBy a ``genuine'' ${\\mathfrak {su}}(N)$ black hole, we mean a \nsolution which is not simply the result of embedding a \nsmaller gauge group in ${\\mathfrak {su}}(N)$.\nAs in the ${\\mathfrak {su}}(2)$ case, the solutions are labeled\nby the number of nodes of each of the $N-1$ non-zero functions\nrequired to describe the gauge field.\nWe shall prove that for each integer $n_{1}$, there are an infinite\nnumber of sequences of integers \n$n_{N-1}\\ge n_{N-2}\\ge \\ldots \\ge n_{1}$\ncorresponding to black hole solutions.\nIt is to be expected that in fact {\\em {every}} such sequence\ncorresponds to a black hole solution, but unfortunately we are unable\nto prove this analytically, although we shall present a\nnumerically-based argument for ${\\mathfrak {su}}(3)$ and\nnumerical investigations (such as that done for ${\\mathfrak {su}}(5)$\nin \\cite{klei1}) for higher dimension groups \nwhich indicate that this\nis in fact the case.\nThe method used to prove the main theorem of this paper is \nremarkably simple, drawing only on elementary topological ideas.\n\n\\bigskip\nThe structure of the paper is as follows. 
\nIn section 2 we review briefly the ${\\mathfrak {su}}(N)$\nEinstein-Yang-Mills field equations and the ansatz and\nnotations we shall employ in the rest of the paper.\nNext we state some elementary properties of these equations,\nincluding results from \\cite{kunz1}.\nThe remainder of the paper follows a similar progression of ideas\nto \\cite{bfm}, and we continue by first analyzing the\nbehaviour of solutions to the field equations in the two asymptotic\nregimes, at infinity and close to the event horizon.\nAlthough these forms were discussed in \\cite{kunz1}, we present\nhere a shorter proof.\nSection 5 is devoted to a discussion of the flat space solutions,\nwhich will be important for later propositions. \nThe results here are somewhat weaker than in \\cite{bfm} due\nto the fact that for ${\\mathfrak {su}}(N)$, we have \n$N-1$ variables and therefore have a $2(N-1)$-dimensional phase\nspace rather than a phase plane as in the ${\\mathfrak {su}}(2)$\ncase, so the powerful Poincar\\'e-Bendixson theory no longer applies.\nWe then consider integrating the field equations outward from \nthe event horizon and examine the various possible behaviours\nof the resulting solutions.\nAs in \\cite{bfm}, there are three types of solution: the regular\nblack holes we are seeking, singular solutions (in which the\nlapse function vanishes outside the event horizon), and \noscillating solutions in which the geometry is not asymptotically \nflat.\nThe main results of this paper are in sections 7 and 8.\nAn inductive argument is used to prove the existence of solutions\nfor ${\\mathfrak {su}}(N)$ assuming existence for \n${\\mathfrak {su}}(N-1)$ (since we have rigorous theorems for\n${\\mathfrak {su}}(2)$ \\cite{bfm,smoller}).\nThe argument is presented in detail for ${\\mathfrak {su}}(3)$\nin section 7, and a brief outline of the extension to general\n$N$ is given in section 8.\nFinally, a summary and our conclusions are presented in section 9.\n\n\n\n\n\\section{Ansatz and field
equations}\n\\label{ansatz}\nIn this section we first describe the ansatz we are using and outline\nthe field equations.\n\n\\bigskip\nThe field equations for an ${\\mathfrak {su}}(N)$ \nYang-Mills gauge field coupled to\ngravity have been derived in \\cite{kunzle} for a spherically symmetric\ngeometry.\nWe take the line element, in the usual Schwarzschild co-ordinates,\nto be\n\\begin{equation}\nds^{2}=- S^{2}\\mu \\, dt^{2} + \\mu ^{-1}\\, dr^{2}+\nr^{2} \\, d\\theta ^{2} + r^{2} \\sin ^{2} \\theta \\, d\\phi ^{2},\n\\end{equation}\nwhere the metric functions $\\mu $ and $S$ are functions of $r$ alone\nand \n\\begin{equation}\n\\mu (r) = 1-\\frac {2m(r)}{r}.\n\\end{equation}\nA spherically symmetric ${\\mathfrak {su}}(N)$ \ngauge potential may be written in the\nform \\cite{kunzle}\n\\begin{equation}\n{\\cal {A}}=A\\,dt+B\\, dr + \n\\frac {1}{2} \\left( C-C^{H} \\right) \\, d\\theta \n-\\frac {i}{2} \\left[ \\left( C+C^{H} \\right) \\sin \\theta\n+D\\cos \\theta \\right] \\, d\\phi\n\\end{equation}\nwhere \n$D={\\mbox {Diag}} \\left\\{ k_{1}, k_{2}, \\ldots , k_{N} \\right\\} $\nwith $k_{1}\\ge k_{2} \\ge \\ldots \\ge k_{N}$ integers whose sum is zero.\nIn addition, $C$ is a strictly upper triangular complex matrix \nsuch that $C_{ij}\\neq 0$ only if $k_{i}=k_{j}+2$ and $C^{H}$ its\nHermitian conjugate, and $A$, $B$ are anti-Hermitian matrices that\ncommute with $D$.\nAn irreducible representation of ${\\mathfrak {su}}(N)$ \ncan be constructed by taking\n\\cite{kunzle}\n\\begin{equation}\nD={\\mbox {Diag}}\\left\\{ N-1, N-3, \\ldots , -N+3, -N+1 \\right\\} .\n\\end{equation}\nIn this case $A$, $B$ have trace zero and can be written as\n\\begin{equation}\nA_{jj}=i\\left\\{\n-\\frac {1}{N} \\sum _{k=1}^{j-1} k{\\cal {A}}_{k}\n+\\sum _{k=j}^{N-1} \\left( 1-\\frac {k}{N} \\right) {\\cal {A}}_{k}\n\\right\\}\n\\end{equation}\nfor real functions ${\\cal {A}}_{k}$, and similarly for $B$.\nThe only non-vanishing entries of $C$ are\n\\begin{equation}\nC_{j,j+1}= \\omega 
_{j} e^{i\\gamma _{j}}\n\\end{equation}\nwhere $\\omega _{j}$ and $\\gamma _{j}$ are real functions.\nAll the functions in this ansatz depend only on $r$.\nNow we make the following simplifying assumption \\cite{kunzle}:\n\\begin{equation}\n{\\cal {A}}_{j}=0, \\qquad\n{\\cal {B}}_{j}+\\gamma _{j}'=0,\n\\qquad\n\\forall j.\n\\end{equation}\nThe remaining gauge freedom can then be used to set $B=0$\n\\cite{brod}, which implies that the $\\gamma _{j}$ are constants, which\nwe choose to be zero for simplicity.\n\n\\bigskip\nThe gauge field equations for the $\\omega _{j}$ then take the form\n\\cite{kunzle,kunz1}(where we have set $\\kappa =2$ in \\cite{kunzle})\n\\begin{equation}\nr^{2} \\mu \\omega _{j}'' + \\left( 2m-2r^{3} p_{\\theta } \\right)\n\\omega _{j}' + \n\\left[ 1-\\omega _{j}^{2} + \\frac {1}{2} \\left(\n\\omega _{j-1}^{2} + \\omega _{j+1}^{2} \\right) \\right] \\omega _{j}\n=0,\n\\label{firsteqn}\n\\end{equation}\nwhere\n\\begin{equation}\np_{\\theta }=\\frac {1}{4r^{4}} \\sum _{j=1}^{N}\n\\left[ \\omega _{j}^{2} -\\omega _{j-1}^{2} -N -1 +2j \n\\right] ^{2}\n\\end{equation}\nand the Einstein equations can be simplified to\n\\begin{equation}\nm'= \\mu G + r^{2} p_{\\theta } ,\n\\qquad\n\\frac {S'}{S} =\\frac {2G}{r}\n\\label{Seqn}\n\\end{equation}\nwhere\n\\begin{equation}\nG= \\sum _{j=1}^{N-1} \\omega _{j}^{'2}.\n\\label{lasteqn}\n\\end{equation}\nNote that $\\omega _{0}\\equiv 0 \\equiv \\omega _{N}$, so that these\nare the usual equations \\cite{bizon} in the ${\\mathfrak {su}}(2)$ \ncase, and have a \nvery similar, but slightly coupled, structure for general $N$.\nThe equations (\\ref{firsteqn}--\\ref{lasteqn}) possess two symmetries\n\\cite{kunz1}.\nFirstly, the substitution $\\omega _{j}\\rightarrow -\\omega _{j}$\nfor any fixed $j$ leaves the equations invariant, exactly as in the\n${\\mathfrak {su}}(2)$ case. 
\nSecondly, there is an additional symmetry under the transformation\n$j\\rightarrow N-j$ for all $j$.\nWe will not assume that this symmetry is respected by the \nsolutions of the field equations, so that the ${\\mathfrak {su}}(3)$\ncase is not necessarily trivial.\n\n\\bigskip\nWe are concerned in this paper with black holes, which will have a\nregular event horizon at $r=r_{h}$, where $\\mu =0$.\nThe equations in their present form are singular at $r_{h}$, so in\norder to produce a set of equations which are regular as \n$\\mu \\rightarrow 0$, we define a new independent variable $\\tau $ \nby \\cite{bfm}\n\\begin{equation}\n\\frac {dr}{d\\tau }= r {\\sqrt {\\mu }},\n\\end{equation}\ndenote $d\/d\\tau $ by ${\\dot {}}$, and define new dependent variables\n$\\kappa $, $U_{j}$ and $\\Psi $ as follows \\cite{bfm}:\n\\begin{equation}\n\\Psi = {\\sqrt {\\mu }}, \\qquad\nU_{j}=\\Psi \\omega _{j}', \\qquad\n\\kappa = \\frac {1}{2\\Psi } \\left(\n1+\\Psi ^{2} + 2\\mu G - 2r^{2} p_{\\theta } \\right) .\n\\end{equation}\nThen the field equations take the form \\cite{bfm}:\n\\begin{eqnarray}\n{\\dot {r}} & = & r\\Psi \n\\label{fregeqn} \\\\\n{\\dot {\\omega }}_{j} & = & rU_{j} \\\\\n{\\dot {\\Psi }} & = & \n\t\\left( \\kappa - \\Psi \\right) \\Psi - 2\\mu G \\\\\n\\left( S\\Psi \\right) {\\dot {}} & = & \n\tS\\left( \\kappa - \\Psi \\right) \\Psi \\\\\n{\\dot {U}}_{j} & = & \n\t-\\left( \\kappa - \\Psi \\right) U_{j} \n\t-\\frac {1}{r} \\left[ \n\t1-\\omega _{j}^{2} +\\frac {1}{2} \\left(\n\t\\omega _{j+1}^{2} + \\omega _{j-1}^{2} \\right) \\right] \n\t\\omega _{j} \\\\\n{\\dot {\\kappa }} & = & \n\t-\\kappa ^{2} +1 +2\\mu G.\n\\label{lregeqn}\n\\end{eqnarray}\n\n\\bigskip\nThe main thrust of this article is to prove the existence of regular,\nblack hole, solutions of the field equations for general $N$ with \n$N-1$ degrees of freedom. 
\nIn other words, we begin with a regular event horizon at $r=r_{h}$,\nwhere $\\mu =0$, and integrate outwards with increasing $r$.\nLater we shall classify the possible behaviour of the solutions \nas $r$ increases.\nWe are interested primarily in those solutions which possess the \nphysical properties of a black hole geometry, namely for which\n$\\mu >0$ for all $r>r_{h}$, the $\\omega _{j}$ and their derivatives\nare finite for all $r>r_{h}$ and the spacetime becomes flat in the\nlimit $r\\rightarrow \\infty $.\nWe shall refer to such solutions as\n{\\em {regular black hole solutions}}.\n\n\\section{Elementary results}\nIn this section we state a few elementary results, which\nwill prove useful in later analysis.\nFirst, we give two lemmas which are proved in the\n${\\mathfrak {su}}(N)$ case exactly as in the ${\\mathfrak {su}}(2)$\ncase; see \\cite{bfm}.\n\n\\begin{lemma}\nIf $\\mu (r_{0})<1$ for some $r_{0}$ then $\\mu (r)<1$ for all \n$r\\ge r_{0}$.\n\\end{lemma}\n\n\\begin{lemma}\n\\label{easylem}\nAs long as $0<\\mu <1$ all field variables are regular functions\nof $r$.\n\\end{lemma}\n\nThese two lemmas show that if there is a regular event horizon at \n$r=r_{h}$, where $\\mu =0$, then $\\mu <1$ for all $r>r_{h}$\nand the field variables are regular functions as long as \n$\\mu >0$.\nThe black hole solutions we seek approach the Schwarzschild geometry\nas $r\\rightarrow \\infty $, so we may\nassume that all $\\omega _{j}$ are bounded for all $r$ (from lemma\n\\ref{easylem} each $\\omega _{j}$ is bounded on every closed interval\n$[r_{h},r_{1}]$, so we are simply assuming that $\\omega _{j}$\nremains bounded as $r\\rightarrow \\infty $).\nIt will be proved in section \\ref{global} that this \nassumption is in fact valid for solutions having $\\mu >0$\nfor all $r$.\nDefine a quantity $M_{j}$ for each $j$ to be the least upper bound\non $\\omega _{j}^{2}$, i.e.\n\\begin{equation}\n\\omega _{j}^{2} \\le M_{j}, \\qquad \\forall j=1,2,\\ldots, N-1.\n\\end{equation}\nIt
has been shown in \\cite{smoller} that the following result\nis true for $N=2$, and proved in general, under the assumption\nthat all $\\omega _{j}$ are bounded, via an elegant method in \\cite{kunz1}.\n\n\\begin{theorem}\n\\label{mtheorem}\n$M_{j}\\le j(N-j)$.\n\\end{theorem}\n\nPart of the proof of this theorem in \\cite{kunz1} involves a \nknowledge of where $\\omega _{j}$ may have maxima or minima.\nThe following result is less powerful than in the ${\\mathfrak {su}}(2)$\ncase due to the coupling in the gauge field equation (\\ref{firsteqn}).\n\n\\begin{prop}\n\\label{maxminprop}\nAs long as $\\mu (r)>0$, the function $\\omega _{j}(r)$ cannot have\nmaxima in the regions\n\\begin{equation}\n\\omega _{j}>{\\sqrt {1+\\frac {1}{2}\\left( \n\t\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) }}\n\\qquad {\\mbox {and}}\n\\qquad\n0>\\omega _{j}> -{\\sqrt {1+\\frac {1}{2}\\left( \n\t\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) }}\n\\end{equation}\nor minima in the regions\n\\begin{equation}\n\\omega _{j}<-{\\sqrt {1+\\frac {1}{2}\\left( \n\t\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) }}\n\\qquad {\\mbox {and}}\n\\qquad\n0<\\omega _{j}<{\\sqrt {1+\\frac {1}{2}\\left( \n\t\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) }}.\n\\end{equation}\n\\end{prop}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nWhen $\\omega _{j}'=0$, from (\\ref{firsteqn}),\n\\begin{equation}\n\\mu \\omega _{j}''=-\\frac {1}{r^{2}} \\left[\n1-\\omega _{j}^{2} + \\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) \\right]\n\\omega _{j},\n\\end{equation}\nso that $\\omega _{j}$ will have a maximum if\n\\begin{equation}\n\\omega _{j}>0 \\qquad {\\mbox {and}} \\qquad\n\\omega _{j}^{2}<1+\\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right)\n\\end{equation}\nor\n\\begin{equation}\n\\omega _{j}<0 \\qquad {\\mbox {and}} \\qquad\n\\omega _{j}^{2}>1+\\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) .\n\\end{equation}\nSimilarly, $\\omega _{j}$
will have a minimum if\n\\begin{equation}\n\\omega _{j}>0 \\qquad {\\mbox {and}} \\qquad\n\\omega _{j}^{2}>1+\\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) \n\\end{equation}\nor\n\\begin{equation}\n\\omega _{j}<0 \\qquad {\\mbox {and}} \\qquad\n\\omega _{j}^{2}<1+\\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) .\n\\end{equation}\n\\hfill\n$\\square $\n\n\\section{Asymptotic behaviour of solutions}\n\\label{local}\nWe seek solutions to the above field equations representing black\nholes with a regular event horizon at $r=r_{h}$ and finite total\nenergy density. \nIn order to prove the local existence of solutions of the field\nequations with the desired asymptotic behaviour, \nwe shall apply the following\ntheorem \\cite{bfm}.\n\n\\begin{theorem}\n\\label{bfmth}\nConsider a system of differential equations for $n+m$ functions\n${\\mbox {\\boldmath {$a$}}}=(a_{1}, a_{2}, \\ldots , a_{n})$\nand\n${\\mbox {\\boldmath {$b$}}}=(b_{1}, b_{2}, \\ldots , b_{m})$\nof the form\n\\begin{eqnarray}\nx \\frac {da_{i}}{dx} & = & \nx^{p_{i}} f_{i}(x,{\\mbox {\\boldmath {$a$}}},{\\mbox {\\boldmath {$b$}}}),\n\\nonumber \\\\\nx\\frac {db_{i}}{dx} & = &\n-\\lambda _{i} b_{i} + \nx^{q_{i}} g_{i}(x,{\\mbox {\\boldmath {$a$}}},{\\mbox {\\boldmath {$b$}}}),\n\\label{sysde}\n\\end{eqnarray}\nwith constants $\\lambda _{i}>0$ and integers $p_{i},q_{i}\\ge 1$\nand let ${\\cal {C}}$ be an open subset of ${\\mathbb {R}}^{n}$\nsuch that the functions $f_{i}$ and $g_{i}$ are analytic in a\nneighbourhood of $x=0$, \n${\\mbox {\\boldmath {$a$}}}={\\mbox {\\boldmath {$c$}}}$,\n${\\mbox {\\boldmath {$b$}}}={\\mbox {\\boldmath {$0$}}}$, for all\n${\\mbox {\\boldmath {$c$}}}\\in {\\cal {C}}$.\nThen there exists an $n$-parameter family of solutions of the\nsystem (\\ref{sysde}) such that\n\\begin{equation}\na_{i}(x) = c_{i} + O(x^{p_{i}}), \\qquad\nb_{i}(x) = O(x^{q_{i}}),\n\\end{equation}\nwhere $a_{i}(x)$ and $b_{i}(x)$ are defined for \n${\\mbox 
{\\boldmath {$c$}}}\\in {\\cal {C}}$, and \n$|x|<x_{0}$, where $x_{0}>0$ is finite.\n\n\\begin{prop}\n\\label{horprop}\nThere exists an $N$-parameter family of local solutions of\n(\\ref{firsteqn}--\\ref{lasteqn}) near $r=r_{h}$, analytic in $r_{h}$,\n$\\omega _{j,h}$ and $r$ such that\n\\begin{eqnarray}\n\\mu (r_{h}+\\rho ) & = & \\mu '(r_{h})\\rho +O(\\rho ^{2}), \n\\nonumber \\\\\n\\omega _{j} (r_{h}+\\rho ) & = & \n\t\\omega _{j,h} +\\omega _{j}'(r_{h})\\rho +O(\\rho ^{2})\n\\label{horrels}\n\\end{eqnarray}\nwhere\n$\\mu '(r_{h})$ and $\\omega _{j}'(r_{h})$ are functions of the\n$\\omega _{j,h}$.\n\\end{prop}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nLet $\\rho =r-r_{h}$ be the new independent variable, and define\n\\begin{equation}\nx=r, \\qquad\n\\lambda = \\frac {\\mu }{\\rho }, \\qquad\n\\psi _{j}=\\omega _{j}, \\qquad\n\\xi _{j}=\\frac {\\mu \\omega _{j}'}{\\rho }.\n\\end{equation}\nThen the field equations take the form\n\\begin{eqnarray}\n\\rho \\frac {dx}{d\\rho } & = & \\rho, \\\\\n\\rho \\frac {d\\lambda }{d\\rho } & = & \n\t-\\lambda + \\left[ \\frac {1}{x} - 2xp_{\\theta }\\right]\n\t+\\rho \\frac {\\lambda }{x}\\left[ 1-2G \\right] \n\\nonumber \\\\\n & = & \n\t-\\lambda +\\rho H_{\\lambda }+F_{\\lambda } \\\\\n\\rho \\frac {d\\psi _{j}}{d\\rho } & = & \n\t\\frac {\\rho \\xi _{j}}{\\lambda } \\\\\n\\rho \\frac {d\\xi _{j}}{d\\rho } & = & \n\t-\\xi _{j} +\\rho H_{j} + F_{j} ,\n\\end{eqnarray}\nwhere the $F$'s and $H$'s are polynomials in $x^{-1}$, \n$\\lambda ^{-1}$, and the other variables, and\nthe $F$'s depend only on $x$ and $\\psi _{j}$'s.\nNext define\n\\begin{equation}\n{\\tilde {\\xi }}_{j} = \\xi _{j}-F_{j}, \\qquad\n{\\tilde {\\lambda }} = \\lambda -F_{\\lambda },\n\\end{equation}\nwhose derivatives are given by\n\\begin{eqnarray}\n\\rho \\frac {d{\\tilde {\\xi }}_{j}}{d\\rho } & = & \n\t-{\\tilde {\\xi }}_{j}+\\rho G_{j} \\\\\n\\rho \\frac {d{\\tilde {\\lambda }}}{d\\rho } & = & \n\t-{\\tilde {\\lambda }} + \\rho G_{\\lambda }.\n\\end{eqnarray}\nHere the $G$'s are 
analytic in $x^{-1}$, $\\lambda ^{-1}$, $\\lambda $,\n$x$, ${\\tilde {\\lambda }}$, $\\psi _{j}$, ${\\tilde {\\xi }}_{j}$.\nApplying theorem \\ref{bfmth}, there exist solutions of the form\n\\begin{equation}\nx=r_{h}+\\rho , \\qquad\n\\psi _{j}=\\omega _{j,h}+O(\\rho ), \\qquad\n{\\tilde {\\lambda }},{\\tilde {\\xi }}_{j}=O(\\rho ),\n\\end{equation}\nwhich gives the behaviour (\\ref{horrels}), together with the\nrequired analyticity.\nFrom the field equations (\\ref{firsteqn}--\\ref{lasteqn}), setting\n$\\mu (r_{h})=0$ gives the following relations:\n\\begin{eqnarray}\n\\mu '(r_{h}) & = & \\frac {1}{2} r_{h}^{2} p_{\\theta }(r_{h}) \\\\\n\\omega _{j}'(r_{h}) & = & -\n\\frac {\\left[ 1-\\omega _{j,h}^{2}+\\frac {1}{2} \\left(\n\\omega _{j-1,h}^{2}+\\omega _{j+1,h}^{2} \\right)\n\\right] \\omega _{j,h}}{r_{h} -r_{h}^{3}p_{\\theta }(r_{h})} \n\\end{eqnarray}\nwhere\n\\begin{equation}\np_{\\theta }(r_{h})=\n\\frac {1}{4r_{h}^{4}} \\sum _{j=1}^{N} \\left[\n\\omega _{j,h}^{2} -\\omega _{j-1,h}^{2} -N-1+2j \\right] ^{2}.\n\\end{equation}\n\\hfill\n$\\square $\n\n\\section{Flat space solutions}\n\\label{flat}\nThe behaviour of the gauge field equations in the flat space limit\nwill be useful later in section \\ref{global} when we come to examine the \nproperties of regular solutions.\nWe are not concerned here with the global existence of flat space \nsolutions but only the local properties pertinent to the curved space\nproblem.\n\n\\bigskip\nThe analysis here is similar to that in \\cite{bfm}, but note that\nfor general $N$ there will be $N-1$ coupled gauge field equations.\nThe powerful Poincar\\'e-Bendixson theory of autonomous systems \nis not applicable when $N>2$, and so we will be able to derive \nonly correspondingly weaker results, and cannot draw a phase portrait.\nFortunately the theorems later require only knowledge of the \nnature of the critical points and the local behaviour close to these\npoints, which is the subject of this section.\n\n\\bigskip\nIn 
flat space the gauge field equations reduce to:\n\\begin{equation}\nr^{2} \\omega _{j}'' +\\left[ 1-\\omega _{j}^{2} +\\frac {1}{2}\n\\left( \\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) \\right] \n\\omega _{j} =0.\n\\end{equation}\nThese equations can be made autonomous by changing variables to\n$\\tau =\\log r$, and denoting $d\/d\\tau $ by ${\\dot {}}$:\n\\begin{equation}\n{\\ddot {\\omega }}_{j}-{\\dot {\\omega }}_{j}+\n\\left[ 1-\\omega _{j}^{2} +\\frac {1}{2}\n\\left( \\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) \\right] \n\\omega _{j} =0.\n\\end{equation}\nThis system of $N-1$ coupled equations has critical points when\n\\begin{equation}\n\\left[ 1-\\omega _{j}^{2} +\\frac {1}{2}\n\\left( \\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) \\right] \n\\omega _{j} =0\n\\qquad\n\\forall j=1,2,\\ldots ,N-1.\n\\end{equation}\nIf $\\omega _{j}\\neq 0$ for all $j$ then \n\\begin{equation}\n\\omega _{j}={\\sqrt {j(N-j)}}.\n\\end{equation}\nThere is also a critical point when $\\omega _{j}=0$ for all $j$.\n\n\\bigskip\nThe other critical points can be described as follows.\nSuppose that $\\omega _{i}=0$ and $\\omega _{i+k}=0$ but that\n$\\omega _{i+m}\\neq 0$ for $m=1,\\ldots ,k-1$ (where $k\\ge 2$).\nThe critical point is then described by the equations\n\\begin{eqnarray}\n0 & = & \\omega _{i} \\nonumber \\\\\n0 & = & 1-\\omega _{i+1}^{2}+\\frac {1}{2} \\left( \\omega _{i}^{2}\n+\\omega _{i+2}^{2} \\right) \n\\nonumber \n\\\\\n & \\vdots & \n\\nonumber\n\\\\\n0 & = & 1-\\omega _{i+k-1}^{2} +\\frac {1}{2} \\left( \\omega _{i+k-2}^{2}\n+\\omega _{i+k}^{2} \\right)\n\\nonumber\n\\\\\n0 & = & \\omega _{i+k}\n\\end{eqnarray}\nwhich have the solution\n\\begin{equation}\n\\omega _{i+m}={\\sqrt {m(k-m)}} \\qquad \n{\\mbox {for $m=1,\\ldots , k-1$}}.\n\\end{equation}\nIn other words, if we have a run of $k-1$ non-zero $\\omega $'s with\na zero $\\omega $ at each end, then the solution for the non-zero\n$\\omega $'s is the same as the $N=k$ case with no zero $\\omega 
$'s.\nWe can also put together a run of as many zero $\\omega $'s as we like\nin the critical point.\n\n\\bigskip\nIn order to classify the critical points, we linearize the field\nequations.\nLet\n\\begin{equation}\n\\omega _{j}(\\tau )=\\omega _{j}^{(0)}+\\epsilon _{j}(\\tau )\n\\end{equation}\nwhere $\\omega _{j}^{(0)}$ is the value of $\\omega _{j}$ at\nthe critical point and $\\epsilon _{j}$ is a small perturbation.\nThe equation for $\\epsilon _{j}$ is, to first order,\n\\begin{eqnarray}\n0 & = & \n{\\ddot {\\epsilon }}_{j} -{\\dot {\\epsilon }}_{j} \n+\\epsilon _{j}\\left[ 1-\\omega _{j}^{(0)2} +\\frac {1}{2}\n\\left( \\omega _{j+1}^{(0)2} +\\omega _{j-1}^{(0)2} \\right) \\right] \n\\nonumber\n\\\\ \n & & \n+\\omega _{j}^{(0)} \\left[ -2\\epsilon _{j}\\omega _{j}^{(0)}\n+\\epsilon _{j+1} \\omega _{j+1}^{(0)} +\\epsilon _{j-1}\n\\omega _{j-1}^{(0)} \\right] .\n\\end{eqnarray}\nWe shall take the two cases we need to consider in turn.\n\n\\bigskip\nFirstly, the case where $\\omega _{j}^{(0)}=0$, when the equation\nfor $\\epsilon _{j}$ reduces to\n\\begin{equation}\n{\\ddot {\\epsilon }}_{j}-{\\dot {\\epsilon }}_{j}+\n\\epsilon _{j} \\left[ 1+\\frac {1}{2} \\left(\n\\omega _{j+1}^{(0)2} +\\omega _{j-1}^{(0)2} \\right) \\right] =0.\n\\end{equation}\nIn order to find the nature of the critical point, let\n\\begin{equation}\n\\epsilon _{j}=e^{\\lambda \\tau },\n\\end{equation}\nthen $\\lambda $ satisfies the equation\n\\begin{equation}\n\\lambda ^{2}-\\lambda +\\left[ \n1+\\frac {1}{2} \\left(\n\\omega _{j+1}^{(0)2} +\\omega _{j-1}^{(0)2} \\right) \\right] =0.\n\\end{equation}\nThis implies that\n\\begin{equation}\n\\lambda = \\frac {1}{2} \\pm \\frac {1}{2} i\\alpha,\n\\end{equation}\nwhere $\\alpha $ is the positive real number given by\n\\begin{equation}\n\\alpha ^{2} = 3+2\\omega _{j+1}^{(0)2} +2\\omega _{j-1}^{(0)2}.\n\\end{equation}\nWe conclude that, in the $(\\epsilon _{j}, {\\dot {\\epsilon }}_{j})$\nplane, we have an unstable focal point, as found in the 
${\\mathfrak {su}}(2)$ \ncase in \\cite{bfm}.\n\n\\bigskip\nSecondly, we consider the situation in which there is at least\none $\\omega _{j}^{(0)}$ which is non-zero.\nSuppose that $\\omega _{i}^{(0)}=0$, and $\\omega _{i+k}^{(0)}=0$,\nbut $\\omega _{i+m}^{(0)}\\neq 0$ for $m=1,\\ldots ,k-1$, where we \ninclude the case that $i=0$ and $k=N$.\nThen we have a series of coupled perturbation equations:\n\\begin{eqnarray}\n0 & = & \n{\\ddot {\\epsilon }}_{i+1} - {\\dot {\\epsilon }}_{i+1}\n-2\\epsilon _{i+1} \\omega _{i+1}^{(0)2} \n+\\epsilon _{i+2} \\omega _{i+2}^{(0)} \\omega _{i+1}^{(0)} \n\\nonumber \\\\\n0 & = & \n{\\ddot {\\epsilon }}_{i+2} - {\\dot {\\epsilon }}_{i+2}\n-2\\epsilon _{i+2} \\omega _{i+2}^{(0)2} \n+\\epsilon _{i+3} \\omega _{i+3}^{(0)} \\omega _{i+2}^{(0)}\n+\\epsilon _{i+1} \\omega _{i+1}^{(0)} \\omega _{i+2}^{(0)}\n\\nonumber \\\\\n & \\vdots & \n\\nonumber \\\\\n0 & = & \n{\\ddot {\\epsilon }}_{i+k-1} - {\\dot {\\epsilon }}_{i+k-1}\n-2\\epsilon _{i+k-1} \\omega _{i+k-1}^{(0)2} \n+\\epsilon _{i+k-2} \\omega _{i+k-2}^{(0)} \\omega _{i+k-1}^{(0)} .\n\\end{eqnarray}\nDefine a vector ${\\mbox {\\boldmath {$\\epsilon $}}}$ by\n\\begin{equation}\n{\\mbox {\\boldmath {$\\epsilon $}}} = \\left( \n\\epsilon _{i+1}, \\ldots , \\epsilon _{i+k-1} \\right) ^{T}\n\\end{equation}\nand consider solutions of the form \n\\begin{equation}\n{\\mbox {\\boldmath {$\\epsilon $}}} = e^{\\lambda \\tau }\n{\\mbox {\\boldmath {$q$}}}\n\\end{equation}\nwhere ${\\mbox {\\boldmath {$q$}}}$ is a constant vector.\nThen $\\lambda ^{2}-\\lambda $ are eigenvalues of the matrix\n${\\cal {M}}_{k-1}$ given by (\\ref{matrix}).\nAs discussed in section \\ref{local}, the matrix ${\\cal {M}}_{k-1}$ \nhas positive real eigenvalues ${\\cal {E}}_{j}$.\nThen\n\\begin{equation}\n\\lambda = \\frac {1\\pm {\\sqrt {1+4{\\cal {E}}_{j}}}}{2}\n=j+1, -j\n\\end{equation}\nwill have positive and negative real values and there is a \nsaddle point.\nAgain this is in direct analogy with the 
\n${\\mathfrak {su}}(2)$ case \\cite{bfm}.\n\n\n\\section{Global behaviour of the solutions}\n\\label{global}\nIn this section we investigate the behaviour of solutions of\nthe regular field equations (\\ref{fregeqn}--\\ref{lregeqn}) as \nfunctions of $\\tau $.\nFrom section \\ref{local}, we know that, given any starting values\n$\\omega _{j,h}$ for the gauge field functions, then there is \na local solution of the field equations in a neighbourhood of \nthe regular event horizon at $r=r_{h}$, $\\tau =0$, which is analytic\nin $\\tau $ and the initial parameters.\nFurthermore, from lemma \\ref{easylem} as long as $0<\\Psi <1$ the\nsolutions remain regular.\nTherefore, as we integrate out in $\\tau $ from the event horizon there\nare only three possibilities:\n\\begin{enumerate}\n\\item\nThere is a $\\tau _{0}>0$ such that $\\Psi (\\tau _{0})=0$.\n\\item\nFor all $\\tau >0$, we have $\\Psi (\\tau )>0$ and $r(\\tau )$ remains\nbounded as $\\tau \\rightarrow \\infty $.\n\\item\nFor all $\\tau >0$, we have $\\Psi (\\tau )>0$ and \n$r(\\tau )\\rightarrow \\infty $ as $\\tau \\rightarrow \\infty $.\n\\end{enumerate}\nWe refer to solutions of the first type as {\\em {singular}} solutions.\nThose of type 2 are known as {\\em {oscillating}} solutions in\n\\cite{bfm} and we retain their terminology.\nIt will be shown below (proposition \\ref{rmaxprop}), that\nthere is a maximum value $r_{max}$ of $r$ for\noscillating solutions.\nFinally, we denote by $S_{\\infty }$ solutions of the\nfirst type for which $r(\\tau _{0})0$ for all $\\tau $ and \n$\\lim _{\\tau \\rightarrow \\infty }r(\\tau )=\\infty $ then\nall the $\\omega _{j}$ remain bounded as $r\\rightarrow \\infty $.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nFrom the local existence propositions \\ref{horprop} and \\ref{infprop}\nand lemma \\ref{easylem}, each $\\omega _{j}$ is an analytic\nfunction of $r$ as long as $\\mu >0$.\nIntroduce a new variable $x=r^{-1}$, then each $\\omega _{j}$\nis an 
analytic function of $x$ in a neighbourhood of $x=0$,\nexcept possibly at $x=0$, and can be written as a Laurent series:\n\\begin{equation}\n\\omega _{j}(x)=\\sum _{n=-\\infty }^{\\infty } a_{n}^{j}x^{n}.\n\\end{equation}\nIn order for $\\mu >0$ for all $r$, it must be the case that\n$2mr^{-1}=2mx<1$, although $m$ itself need not necessarily \nremain bounded.\nHence $2mx$ is bounded in a neighbourhood of $x=0$; since it is\nanalytic except possibly at $x=0$, any singularity there is\nremovable and $2mx$ is analytic at $x=0$.\nTherefore we can write\n\\begin{equation}\n2mx=\\sum _{n=0}^{\\infty }b_{n}x^{n}.\n\\end{equation}\nThus\n\\begin{equation}\nm=\\frac {1}{2} \\sum _{n=0}^{\\infty }b_{n}x^{n-1},\n\\qquad\n\\frac {dm}{dr}=-x^{2}\\frac {dm}{dx}\n=-\\frac {1}{2} \\sum _{n=0}^{\\infty }b_{n}(n-1)x^{n},\n\\end{equation}\nso that $dm\/dr$ is analytic in a neighbourhood of $x=0$.\nFrom the field equations,\n\\begin{equation}\n\\frac {dm}{dr}=\\mu G +r^{2}p_{\\theta }\n\\end{equation}\nwhere\n\\begin{equation}\nG=\\sum _{j=1}^{N-1} \\left( \\frac {d\\omega _{j}}{dr} \\right) ^{2},\n\\qquad\nr^{2}p_{\\theta }=\\frac {x^{2}}{4}\n\\sum _{j=1}^{N} \\left[ \n\\omega _{j}^{2}-\\omega _{j-1}^{2}-N-1+2j \\right] ^{2}.\n\\end{equation}\nSince both $G$ and $r^{2}p_{\\theta }$ are positive, they must\neach be analytic in a neighbourhood of $x=0$.\nConsider $G$ first. Using\n\\begin{equation}\n\\frac {d\\omega _{j}}{dr}=-x^{2}\\frac {d\\omega _{j}}{dx}\n=-\\sum _{n=-\\infty }^{\\infty }a_{n}^{j} nx^{n+1}\n\\end{equation}\nand the requirement that $\\left( \\frac {d\\omega _{j}}{dr}\\right) ^{2}$\nbe analytic near $x=0$, it must be the case that\n\\begin{equation}\na_{n}^{j}=0 \\qquad \\forall n<-1.\n\\end{equation}\nThen\n\\begin{equation}\n\\omega _{j}^{2}=\\frac {a_{-1}^{j2}}{x^{2}}+\n\\sum _{n=-1}^{\\infty }c_{n}^{j}x^{n}\n\\end{equation}\nfor constants $c_{n}^{j}$ and hence\n\\begin{equation}\n\\omega _{j}^{2}-\\omega _{j-1}^{2}-N-1+2j\n=\\frac {1}{x^{2}}\\left[ a_{-1}^{j2}-a_{-1}^{(j-1)2} \\right]\n+\\sum _{n=-1}^{\\infty 
}d_{n}^{j}x^{n}.\n\\end{equation}\nSince $r^{2}p_{\\theta }$ is analytic, \n\\begin{equation}\na_{-1}^{j}=\\pm a_{-1}^{j-1}\n\\qquad\n\\forall j.\n\\end{equation}\nBut $\\omega _{0}\\equiv 0$, so $a_{-1}^{0}=0$ and thus $a_{-1}^{j}=0$\nfor all $j$.\nWe conclude that $\\omega _{j}$ is analytic at $x=0$, and therefore\nfinite at $x=0$.\nTherefore $\\omega _{j}$ is bounded for all $r\\in [r_{h},\\infty )$.\n\\hfill\n$\\square $\n\n\\begin{prop}\n\\label{flatprop}\nIf $\\Psi (\\tau )>0$ for all $\\tau $ and \n$\\lim _{\\tau \\rightarrow \\infty }r(\\tau )=\\infty $ then\nthe solution tends to one of the flat space critical points, with \nthe exception of the origin.\n\\end{prop}\n\nThe proof of this proposition closely follows \nProposition 14 of Ref. 7, \nthe main\nthrust of which is the following lemma.\n\n\\begin{lemma}\n\\label{flatlem}\nIf $\\Psi (\\tau )>0$ for all $\\tau $ and \n$\\lim _{\\tau \\rightarrow \\infty }r(\\tau )=\\infty $ then\n\\begin{equation}\n\\lim _{\\tau \\rightarrow \\infty } \\Psi (\\tau )=1.\n\\end{equation}\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nThere are two cases to consider.\n\\begin{enumerate}\n\\item\nThere is at least one $\\omega _{j}$ which has zeros for arbitrarily\nlarge $r$.\n\\item\nAll $\\omega _{j}$ have only a finite number of zeros.\n\\end{enumerate}\n\n\\bigskip\n\\noindent\n{\\em {Case 1}}\n\n\\smallskip\n\\noindent\nThe proof in this situation follows exactly that of \nProposition 14 of Ref. 
7 (equations (74--76)) \nsince all the $\\omega _{j}$'s are \nbounded (lemma \\ref{boundlem}).\n\n\\bigskip\n\\noindent\n{\\em {Case 2}}\n\n\\smallskip\n\\noindent\nIn this case there is a $T_{1}>0$ such that for all $\\tau >T_{1}$\nno $\\omega _{j}$ has a zero.\nHowever, since the equations governing the $\\omega _{j}$ are\ncoupled, the $\\omega _{j}$ do not necessarily have to be\nmonotonic, provided proposition \\ref{maxminprop} is satisfied.\n\n\\bigskip\nSuppose, initially, that there is a $T_{2}>T_{1}$ such that for all\n$\\tau >T_{2}$ every $\\omega _{j}$ is monotonic.\nIn this situation each $\\omega _{j}$ has a limit and so \n$\\lim _{\\tau \\rightarrow \\infty }U_{j}=0$ for all $j$.\nThen the proof of Proposition 14 of Ref. 7 carries over directly\nto show that $m(\\tau )$ is bounded and \n$\\lim _{\\tau \\rightarrow \\infty } \\Psi (\\tau )=1$.\nThe proof proceeds as follows.\nFirstly, in analogy to equation (74) of Ref. 7, integrating the\nfield equations (\\ref{fregeqn}--\\ref{lregeqn}) gives,\nfor each $j$,\n\\begin{equation}\n|\\Psi U_{j}(\\tau _{2})-\\Psi U_{j}(\\tau _{1})|\n\\le \\frac {c_{j}}{r(\\tau _{1})}\n\\end{equation}\nfor all $\\tau _{2}>\\tau _{1}>T_{2}$, and some constant $c_{j}$,\nsince all the $\\omega _{j}$ are bounded.\nFix $\\tau _{1}$ for the moment, then for all $\\tau _{3}>\\tau _{1}$,\n\\begin{equation}\n\\int _{\\tau _{1}}^{\\tau _{3}} r\\Psi U_{j}^{2} \\, d\\tau '\n\\le c_{j}'\\int _{\\tau _{1}}^{\\tau _{3}} {\\dot {\\omega }}_{j} \\, \nd\\tau ' \\le\nc_{j}''\n\\end{equation}\nfor constants $c_{j}'$ and $c_{j}''$, since $\\omega _{j}$ has a limit.\nHence\n\\begin{equation}\nm(\\tau _{3})-m(\\tau _{1})\\le \\frac {1}{2} \\sum _{j=1}^{N-1}\nc_{j}''\n\\end{equation}\nwhich is bounded.\n\n\n\\bigskip\nNext turn to the other extreme situation, where $\\omega _{j}^{2}$\nhas minima for arbitrarily large $r$, for every $j$.\nIf $\\omega _{j}^{2}$ has a minimum at $r=r_{0}$, then by\nproposition 
\\ref{maxminprop},\n\\begin{equation}\n\\omega _{j}^{2}(r_{0}) \\ge 1+\\frac{1}{2} \\left(\n\\omega _{j+1}^{2} (r_{0}) +\\omega _{j-1}^{2} (r_{0}) \\right).\n\\end{equation}\nDefine $N_{j}$ to be the greatest lower bound of the set of\nminimum values of $\\omega _{j}^{2}$ for $\\tau >T_{1}$,\nwhere $N_{j}$ can be zero (although $\\omega _{j}^{2}$ cannot). \nThen we have\n\\begin{equation}\nN_{j}\\ge 1+\\frac {1}{2}\\left( N_{j+1}+N_{j-1} \\right) .\n\\label{nineq}\n\\end{equation} \nThese inequalities may be solved to give\n\\begin{equation}\nN_{j}\\ge j(N-j) \\qquad \\forall j.\n\\end{equation}\nHowever, since $\\omega _{j}^{2}\\le j(N-j)$ for all $j$ from \ntheorem \\ref{mtheorem}, it follows that \n$\\omega _{j}^{2}(\\tau )=j(N-j)$ for sufficiently large $\\tau $\nand all $j$. \nTherefore $U_{j}(\\tau )$ is zero for sufficiently large $\\tau $,\nhence $m(\\tau )$ is bounded as $\\tau \\rightarrow \\infty $ and\n$\\lim _{\\tau \\rightarrow \\infty } \\Psi (\\tau )=1$.\n\n\\bigskip\nThe remaining intermediate case is where $\\omega _{j}$ and\n$\\omega _{j+l}$, where $l>1$, are monotonic for all \nsufficiently large $\\tau $,\nwhilst $\\omega _{j+i}^{2}$, $i=1,\\ldots ,l-1$ have minima for\narbitrarily large $\\tau $.\nWe include the possibility that $j=0$, to cover the case where\nthere is only one $\\omega _{i}$ which has a limit.\nDefine $L_{1}$ and $L_{2}$ by\n\\begin{equation}\n\\omega _{j}^{2} \\rightarrow L_{1}, \\qquad\n\\omega _{j+l}^{2} \\rightarrow L_{2} \n\\end{equation}\nas $\\tau \\rightarrow \\infty $.\nGiven $\\epsilon >0$, there is a $T_{1}$ such that\n\\begin{equation}\n\\left| \\omega _{j}^{2}-L_{1} \\right| <\\epsilon ,\n\\qquad\n\\left| \\omega _{j+l}^{2}-L_{2} \\right| <\\epsilon\n\\qquad \n\\forall \\tau > T_{1}.\n\\end{equation} \nIf $\\omega _{j+1}^{2}$ has a minimum at $\\tau _{0}>T_{1}$, then\n\\begin{eqnarray}\n\\omega _{j+1}^{2}(\\tau _{0}) & \\ge & \n1+\\frac {1}{2} \\left( \\omega _{j+2}^{2}(\\tau _{0}) +\n\\omega _{j}^{2}(\\tau _{0}) 
\\right) \n\\nonumber\n\\\\\n & \\ge & 1+\\frac {1}{2} \\left(\n\\omega _{j+2}^{2}(\\tau _{0}) +L_{1} -\\epsilon \\right) .\n\\end{eqnarray}\nWith $N_{j+1}$ as before, we then have\n\\begin{equation}\nN_{j+1} \\ge 1+\\frac {1}{2} N_{j+2} +K_{1}\n\\end{equation}\nwhere $K_{1}=\\frac {1}{2} \\left( L_{1}-\\epsilon \\right)$.\nSimilarly,\n\\begin{equation}\nN_{j+l-1}\\ge 1+\\frac {1}{2} N_{j+l-2}+K_{2}\n\\end{equation}\nwhere $K_{2}=\\frac {1}{2} \\left( L_{2}-\\epsilon \\right) $.\nThe remaining inequalities for $N_{j+2},\\ldots ,N_{j+l-2}$\nare exactly as before (\\ref{nineq}).\nThe previous method can be repeated to give\n\\begin{equation}\nN_{j+i}\\ge i(l-i)+K_{1}+K_{2} \\qquad i=1,\\ldots ,l-1.\n\\end{equation}\n\n\\bigskip\nNow define ${\\tilde {N}}_{i}$ to be the least upper bound of the \nset of maximum values of $\\omega _{i}^{2}$ (which exists since\n$\\omega _{i}^{2}$ is bounded).\nIf $\\omega _{j+1}^{2}$ has a maximum at $\\tau _{0}>T_{1}$, then\n\\begin{eqnarray}\n\\omega _{j+1}^{2}(\\tau _{0}) & \\le & \n1+\\frac {1}{2} \\left( \\omega _{j+2}^{2}(\\tau _{0})+\n\\omega _{j}^{2}(\\tau _{0}) \\right)\n\\nonumber\n\\\\\n& \\le & 1+\\frac {1}{2} \\left(\n\\omega _{j+2}^{2} +L_{1}+\\epsilon \\right) .\n\\end{eqnarray}\nThe analysis now proceeds exactly as in the previous paragraph\nand yields\n\\begin{equation}\n{\\tilde {N}}_{j+i}\\le i(l-i)+{\\tilde {K}}_{1}+{\\tilde {K}}_{2}\n\\qquad\ni=1,\\ldots ,l-1\n\\end{equation}\nwhere ${\\tilde {K}}_{i}=\\frac {1}{2} \\left( L_{i}+\\epsilon \\right)$ for\n$i=1,2$.\nIn addition, by definition it must be the case that\n\\begin{equation}\n{\\tilde {N}}_{j+i}\\ge N_{j+i}\n\\end{equation}\nwhich means that\n\\begin{equation}\ni(l-i)+\\frac {1}{2} \\left( L_{1}+L_{2} \\right) -\\epsilon\n\\le N_{j+i} \\le {\\tilde {N}}_{j+i} \\le\ni(l-i)+\\frac {1}{2} \\left( L_{1}+L_{2} \\right) +\\epsilon\n\\end{equation}\nfor all $\\epsilon >0$, whence ${\\tilde {N}}_{j+i}=N_{j+i}$ for\nall $i$ and\n\\begin{equation}\n\\omega _{j+i}^{2} 
=i(l-i)+\\frac {1}{2} \\left( L_{1}+L_{2} \\right)\n\\end{equation}\nfor sufficiently large $\\tau $.\n\n\\bigskip\nIn conclusion, then, we have shown that the intermediate case has\nsome $\\omega _{j}$'s which are monotonic for sufficiently \nlarge $\\tau $, with the remaining $\\omega _{j}$'s being\nconstant for sufficiently large $\\tau $. \nHence in this case also $m$ is bounded as $\\tau \\rightarrow \\infty $\nand $\\lim _{\\tau \\rightarrow \\infty }\\Psi (\\tau )=1$.\n\\hfill\n$\\square $\n\n\\bigskip\n\\noindent\n{\\bf {Proof of Proposition \\ref{flatprop}}}\n\\smallskip\n\\newline\nIn order to show that the geometry becomes flat as \n$\\tau \\rightarrow \\infty $, it remains to show that $S\\rightarrow 1$\nas $\\tau \\rightarrow \\infty $.\nFrom the proof of lemma \\ref{flatlem}, for each $\\omega _{i}$\nhaving a limit as $\\tau \\rightarrow \\infty $, we have\n\\begin{equation}\nrU_{i}={\\dot {\\omega }}_{i}\\rightarrow 0 \n\\end{equation}\nas $\\tau \\rightarrow \\infty $.\nIn case 1 of the proof of lemma \\ref{flatlem}, the proof\nof \\cite{bfm} carries straight over to show that \n$U_{i}\\rightarrow 0$ as $\\tau \\rightarrow \\infty $ also in this\nsituation.\nNow\n\\begin{equation}\n\\frac {{\\dot {S}}}{S}=\\Psi G = \n\\frac {1}{\\Psi } \\sum _{i=1}^{N-1} U_{i}^{2} \\le\n\\frac {C}{\\Psi r^{2}}\n\\end{equation}\nfor some constant $C$ and sufficiently large $\\tau $.\nTherefore\n\\begin{equation}\n\\frac {S'}{S}=\\frac {{\\dot {S}}}{S} \\frac {1}{r\\Psi } \\le\n\\frac {C}{\\mu r^{3}}\n\\le \\frac {{\\tilde {C}}}{r^{3}}\n\\end{equation}\nfor some constant ${\\tilde {C}}$ and sufficiently large $\\tau $,\nas $\\mu \\rightarrow 1$ as $\\tau \\rightarrow \\infty $.\nThis means that $S$ has a finite limit as $\\tau \\rightarrow \\infty $.\n\n\\bigskip\nThe field equations only involve $\\frac {S'}{S}$ rather than just $S$\nitself. 
\nTherefore $S$ is defined only up to a multiplicative constant,\nand without loss of generality we may therefore take the\nfinite limit of $S$ as $\\tau \\rightarrow \\infty $ to be 1,\nso that the spacetime is asymptotically flat.\n\n\\bigskip\nLet $\\delta (\\tau )=2\\Psi - \\kappa -1$ be a small perturbation,\nthen the equation for $\\omega _{j}$ reads\n\\begin{equation}\n{\\ddot {\\omega }}_{j}+\\left[ 1 -\\omega _{j}^{2} \n+\\frac {1}{2} \\left( \\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right)\n\\right] \\omega _{j} = \\left( 1+ \\delta (\\tau ) \\right)\n{\\dot {\\omega }}_{j}\n\\end{equation}\nas $\\tau \\rightarrow \\infty $.\nSince $\\delta $ is small, it does not alter the position or nature\nof the critical points as compared with the exactly \nflat space case. \nLemma \\ref{flatlem} showed that each \n${\\dot {\\omega }}_{j},{\\ddot {\\omega }}_{j}\\rightarrow 0$ \nas $\\tau \\rightarrow \\infty $.\nHence $\\omega _{j}$ must approach one of the critical points \nwhose nature was elucidated in section \\ref{flat}.\nThe flat space analysis showed that, if $\\omega _{j}=0$,\nthen there is an unstable focal point in the \n$(\\omega _{j},{\\dot {\\omega }}_{j})$ plane.\nHence our solution cannot approach this value, unless \n$\\omega _{j}\\equiv 0$.\nThe solution has to tend to one of the saddle points along a\nstable direction, and, in direct analogy with the \n${\\mathfrak {su}}(2)$ case \\cite{ershov}, \nthere are no solutions where $\\omega _{j}\\rightarrow 0$\nas $\\tau \\rightarrow \\infty $.\nThe solution must therefore be a member of the family found in the\nlocal existence theorem \\ref{infprop}.\n\\hfill\n$\\square $\n\n\\bigskip\nThe power of proposition \\ref{flatprop} lies in that, if we can prove\nthe existence of solutions for which $\\Psi >0$ for all $\\tau >0$\nand $r\\rightarrow \\infty $, then these solutions are automatically \nthe regular black holes (and, if $r_{h}\\rightarrow 0$, \nsoliton solutions \\cite{bartnik})\nwe seek. 
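The flat-space critical-point structure invoked in this proof is easy to check numerically. The following sketch (plain Python, written for this presentation and not part of the original analysis) verifies that $\omega _{j}={\sqrt {j(N-j)}}$ annihilates the bracket in the gauge field equations, that linearization about a zero $\omega _{j}^{(0)}$ gives the complex pair $\lambda =\frac {1}{2}\pm \frac {1}{2}i\alpha $ with $\alpha ^{2}=3+2\omega _{j+1}^{(0)2}+2\omega _{j-1}^{(0)2}$ (an unstable focus), and that eigenvalues ${\cal {E}}_{j}=j(j+1)$ (the values implied by the saddle exponents quoted in section \ref{flat}) give $\lambda =j+1,-j$:

```python
import math

def bracket(omega, j):
    # the factor 1 - w_j^2 + (w_{j+1}^2 + w_{j-1}^2)/2 multiplying w_j
    return 1.0 - omega[j] ** 2 + 0.5 * (omega[j + 1] ** 2 + omega[j - 1] ** 2)

N = 5
# critical point with all omega_j nonzero: omega_j = sqrt(j(N-j)), omega_0 = omega_N = 0
omega = [math.sqrt(j * (N - j)) for j in range(N + 1)]
assert all(abs(bracket(omega, j)) < 1e-12 for j in range(1, N))

# linearization about omega_j^{(0)} = 0: lambda^2 - lambda + c = 0 with
# c = 1 + (w_{j+1}^2 + w_{j-1}^2)/2 >= 1, so lambda = 1/2 +- (i/2) alpha and
# alpha^2 = 4c - 1 = 3 + 2 w_{j+1}^2 + 2 w_{j-1}^2: an unstable focus
wp2, wm2 = omega[2] ** 2, omega[0] ** 2   # neighbours of omega_1 for N = 5
c = 1.0 + 0.5 * (wp2 + wm2)
alpha = math.sqrt(4.0 * c - 1.0)
assert abs(alpha ** 2 - (3.0 + 2.0 * wp2 + 2.0 * wm2)) < 1e-12

# saddle case: lambda^2 - lambda = E_j with E_j = j(j+1) gives lambda = j+1, -j
for j in range(1, 6):
    E = j * (j + 1)
    lam_plus = (1 + math.sqrt(1 + 4 * E)) / 2
    lam_minus = (1 - math.sqrt(1 + 4 * E)) / 2
    assert abs(lam_plus - (j + 1)) < 1e-12 and abs(lam_minus + j) < 1e-12
```

The positive and negative real exponents in the last block are what make the approach to a saddle point possible only along a stable direction, as used above.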
\nWe close this section by determining the asymptotic behaviour of\nsolutions of type 2.\n\n\\begin{prop}\n\\label{rmaxprop}\nIf $\\Psi (\\tau )>0$ for all $\\tau >0$ and $r(\\tau )$ remains \nbounded, then $\\Psi \\rightarrow 0$ and $\\kappa \\rightarrow 1$\nas $\\tau \\rightarrow \\infty $.\nIn addition, there is at least one $j$ for which\n$\\omega _{j} \\rightarrow 0$ as\n$\\tau \\rightarrow \\infty $, and this $\\omega _{j}$ has\ninfinitely many zeros.\n\\end{prop}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nSince $r$ is monotonic increasing and bounded, it has a limit $r_{0}$.\nIn addition, $m$ is monotonic increasing and bounded since\nthe positivity of $\\mu $ implies that\n\\begin{equation}\nm<\\frac {r}{2} \\le \\frac {r_{0}}{2}.\n\\end{equation}\nAs $\\tau \\rightarrow \\infty $, ${\\dot {r}}\\rightarrow 0$\nand hence $\\Psi \\rightarrow 0$ from (\\ref{fregeqn}).\nConsider the quantity\n\\begin{equation}\nE=-\\frac {r^{2}}{4}\\left( 1+\\Psi ^{2} -2\\kappa \\Psi \\right)\n=-\\frac {r^{2}}{2} \\left( -\\mu G+r^{2} p_{\\theta } \\right).\n\\end{equation}\nThen\n\\begin{eqnarray}\n{\\dot {E}} & = & 2r^{2}\\mu G \\Psi -r^{2}\\kappa \\mu G\n\\nonumber \n\\\\\n& < &\n r^{2}\\mu G (1-\\kappa) \n\\nonumber\n\\\\\n& < & 0,\n\\end{eqnarray}\nfor sufficiently large $\\tau $ for which $\\Psi < 1\/2$, and since\n$\\kappa >1$ \\cite[Lemma 10]{bfm}.\nHence \n\\begin{equation}\nE\\rightarrow -\\frac {r_{0}^{2}}{4}\n\\label{elim}\n\\end{equation}\nas $\\tau \\rightarrow \\infty $, since $E$ is monotonically decreasing\nfor sufficiently large $\\tau $.\nThen, following \\cite[Proposition 15]{bfm}, we have \n$\\kappa \\rightarrow 1$ as $\\tau \\rightarrow \\infty $.\n\n\\bigskip\nLet $\\delta (\\tau )=\\kappa -2\\Psi -1$ be small, then the\nequation for $\\omega _{j}$ reads\n\\begin{equation}\n{\\ddot {\\omega }}_{j} = {\\dot {\\omega }}_{j} \n\\left( -1-\\delta \\right) - \\omega _{j} \\left[\n1-\\omega _{j}^{2} +\\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega 
_{j-1}^{2} \\right) \\right] .\n\\end{equation}\nReplacing $\\tau $ by $-\\tau $ yields the equation\n\\begin{equation}\n{\\ddot {\\omega }}_{j} = {\\dot {\\omega }}_{j} \n\\left( 1+\\delta \\right) - \\omega _{j} \\left[\n1-\\omega _{j}^{2} +\\frac {1}{2} \\left(\n\\omega _{j+1}^{2} +\\omega _{j-1}^{2} \\right) \\right] ,\n\\end{equation}\nwhich is the equation for flat space solutions, so that $\\omega _{j}$\nmust approach one of the critical points.\nThe fact that $E\\neq 0$ means that $p_{\\theta }$ cannot tend to zero\nas $\\tau \\rightarrow \\infty $ because $\\mu G$ does vanish in the limit.\nHence at least one of the $\\omega _{j}$'s must be zero as\n$\\tau \\rightarrow \\infty $.\nSince $\\delta $ is small, it alters neither the position nor\nthe characteristics of the critical points, and hence this \n$\\omega _{j}$ will go into the focus at zero. \nTherefore it oscillates about zero infinitely many times as\n$\\tau \\rightarrow \\infty $, and so has infinitely many zeros.\n\\hfill\n$\\square $\n\n\\bigskip\nWe may determine the value of $r_{0}$ as follows.\nSince $\\kappa $ is finite and we have (\\ref{elim}), it follows\nthat $r^{2}p_{\\theta }\\rightarrow 1\/2$ as\n$\\tau \\rightarrow \\infty $.\nHence\n\\begin{equation}\nr_{0}^{2}=\\frac {1}{2} \\lim _{\\tau \\rightarrow \\infty }\n\\sum _{j=1}^{N}\n\\left( \\omega _{j}^{2}-\\omega _{j-1}^{2} -N-1+2j \\right) ^{2}.\n\\end{equation}\nThe maximum possible value of $r_{0}^{2}$ is when all\n$\\omega _{j}\\rightarrow 0$ as $\\tau \\rightarrow \\infty $.\nIn this case:\n\\begin{equation}\nr_{0,max}^{2} =\\frac {1}{2} \\sum _{j=1}^{N} \\left(\n-N-1+2j \\right) ^{2} = \\frac {1}{6} N(N-1)(N+1).\n\\end{equation}\nThe minimum possible value of $r_{0}^{2}$ is when only one\n$\\omega _{j}\\rightarrow 0$ as $\\tau \\rightarrow \\infty $, when\n\\begin{equation}\nr_{0}^{2} =j^{2}(N-j)^{2}\n\\end{equation}\nwhich has a minimum when $j=1$ or $N-1$, and\n\\begin{equation}\nr_{0}^{2} =(N-1)^{2}.\n\\end{equation}\nWith this value of $r_{0}$, we follow \\cite{bfm} and 
denote\nby $S_{\\infty }$ singular solutions for which $\\mu $ vanishes\noutside the event horizon when $r<r_{0}$.\n\nAny solution, for which $\\Psi (\\tau )>0$\nfor all $\\tau $ and $r(\\tau )$ remains bounded as \n$\\tau \\rightarrow \\infty $, has $r\\rightarrow r_{0}$ as\n$\\tau \\rightarrow \\infty $, where\n\\begin{equation}\n(N-1)^{2} \\le r_{0}^{2} \\le \n\\frac {1}{6} N(N+1)(N-1).\n\\end{equation}\nFor $N=3$, therefore, $r_{0}=2$. \nFixing $r_{h}>2$ will therefore rule out the possibility of such\nsolutions.\nThis is analogous to the value $r_{h}>1$ which rules out\nsuch solutions in the ${\\mathfrak {su}}(2)$ case.\nAt the end of this section we shall return to the existence proof\nfor $r_{h}\\le 2$.\n\n\\bigskip\nFor every pair of starting values $(\\omega _{1,h},\\omega _{2,h})$\nthere are then just two possibilities:\n\\begin{enumerate}\n\\item\n$\\mu (\\tau )>0$ for all $\\tau $ and we have a regular black hole\nsolution;\n\\item\nthere is a $\\tau _{0}>0$ such that $\\mu (\\tau _{0})=0$\nand we have a singular solution.\n\\end{enumerate}\nDefine a new variable $R$ by\n\\begin{equation}\nR=\\frac {r-r_{h}}{r}\n\\end{equation}\nand define $R_{m}$ to be the maximum value of $R$ for each solution, that is,\nin case 1 $R_{m}=1$ (corresponding to $r\\rightarrow \\infty $), and\nin case 2 $R_{m}=R(\\tau _{0})$.\nFor each solution we define $n_{i}$ to be the number of zeros of\nthe function $\\omega _{i}$ between $\\tau =0$ and $\\tau =\\infty $\nin case 1 and $\\tau =\\tau _{0}$ in case 2, where $n_{i}$ can be\ninfinite.\nDefining new variables $N_{i}$ by\n\\begin{equation}\nN_{i}=\\frac {n_{i}}{1+n_{i}}\n\\end{equation}\nwe allow the possibility that $N_{i}=1$.\n\n\\bigskip\nEach pair of non-zero starting values $(\\omega _{1,h},\\omega _{2,h})$\ncan then be mapped to the three quantities\n$(R_{m},N_{1},N_{2})$ for the corresponding solution, giving\nrise to a map from ${\\mathbb {R}}^{2}$ \n($(\\omega _{1,h},\\omega _{2,h})$ space) to \n${\\mathbb {B}}^{2}\\times [0,1]$ ($(R_{m},N_{1},N_{2})$ space),\nwhere 
${\\mathbb {B}}$ is the discrete set\n$\\{ 0, 1\/2, 2\/3, 3\/4, \\ldots , 1\\} $.\nCall this map $f$.\nFigures 1--3 sketch the nature of $(\\omega _{1,h},\\omega _{2,h})$\nspace, ${\\mathbb {B}}\\times [0,1]$ and ${\\mathbb {B}}^{2}$\nrespectively.\nNote that this map will not be 1-1 for $N=3$, although\nit was conjectured in \\cite{bfm} that for $N=2$ the map is 1-1.\n\n\\bigskip\nNote that since we know the nature of the solution space \nwhen one of the $\\omega _{i,h}$ vanishes (since in this case \n$\\omega _{i}\\equiv 0$), we do not need to extend the map $f$\nto the co-ordinate axes in $(\\omega _{1,h},\\omega _{2,h})$\nspace.\nWith this notation, we are ready to prove proposition\n\\ref{firstprop}, once we have the following lemma.\n\n\\begin{lemma}\n\\label{fcont}\nSuppose that for starting values\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nthere is a singular solution with $\\mu ({\\bar {\\tau }}_{0})=0$\nand the gauge field functions having node structure \n$({\\bar {n}}_{1},{\\bar {n}}_{2})$.\nThen the map $f$ is continuous at \n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nFrom proposition \\ref{horprop}, the field variables are \ncontinuous\nin $\\tau $ and the starting parameters.\nThus all starting values \n$(\\omega _{1,h},\\omega _{2,h})$ in a sufficiently small \nneighbourhood of \n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nwill give rise to a singular solution with $\\mu (\\tau _{0})=0$,\nwhere $\\tau _{0}$ is close to ${\\bar {\\tau }}_{0}$; the values of\nthe field variables at $\\tau _{0}$ will be close to those at\n${\\bar {\\tau }}_{0}$ in the original solution, and the node\nstructure will be $({\\bar {n}}_{1},{\\bar {n}}_{2})$, since the\ngauge field functions cannot have double zeros\n(proposition \\ref{maxminprop}).\nIn other words, $f$ is continuous at\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$.\n\\hfill\n$\\square 
$\n\n\\bigskip\n\\noindent\n{\\bf {Proof of Proposition \\ref{firstprop}}}\n\\smallskip\n\\newline\nConsider the open subset of the $(\\omega _{1,h},\\omega _{2,h})$\nplane given by\n\\begin{equation}\nD=\\{ (\\omega _{1,h},\\omega _{2,h}) : \n0<\\omega _{1,h}<\\omega _{2,h} \\} .\n\\end{equation}\nThe subset $D'=\\{ (\\omega _{1,h},\\omega _{2,h}):\n0<\\omega _{2,h}<\\omega _{1,h} \\} $\ncan be treated similarly.\nThe symmetries of the field equations (\\ref{firsteqn}--\\ref{lasteqn})\nmean that it is sufficient to consider only the first\nquadrant of the $(\\omega _{1,h},\\omega _{2,h})$ plane.\nFrom \\cite{bfm} we know that along the line\n$\\omega _{1,h}=\\omega _{2,h}$ there are singular solutions with\nnode structure $(n_{1},n_{1})$ for all $n_{1}=1,2,\\ldots $.\nTherefore there are neighbourhoods of $D$ corresponding\nto singular solutions with node structure $(n_{1}, n_{1})$\nfor all $n_{1}=1,2,\\ldots $.\nHence $f(D)$ contains points corresponding to $(R_{m},N_{1},N_{1})$\nfor all $N_{1}\\in {\\mathbb {B}}$.\nTherefore $f(D)$ is not a connected set.\nTherefore $f$ cannot be continuous everywhere on $D$ because $D$ is\nconnected.\nHence we conclude that there exists at least one \n$(\\omega _{1,h},\\omega _{2,h})\\in D$ corresponding to a black hole\nsolution because $f$ is continuous for all \n$(\\omega _{1,h},\\omega _{2,h})$ corresponding to singular solutions.\n\\hfill\n$\\square $\n\n\\bigskip\nThis approach allows us to see quite simply how the transversality\nproperty conjectured in \\cite{bfm} arises in the ${\\mathfrak {su}}(2)$\ncase.\nAlong the line of values of $\\omega _{h}$, singular solutions\nhaving different numbers of nodes must be separated by at least\none regular solution. 
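As an aside, the discreteness of the image of the node coordinates can be made explicit; the arithmetic below is ours, added purely for illustration. Consecutive node counts map to values of $N_{i}=n_{i}/(1+n_{i})$ separated by

```latex
\begin{equation}
N_{i}(n+1)-N_{i}(n)=\frac {n+1}{n+2}-\frac {n}{n+1}
=\frac {1}{(n+1)(n+2)} ,
\end{equation}
```

so that, for example, $n_{i}=1,2,3$ give $N_{i}=1/2,\,2/3,\,3/4$. Every point of ${\mathbb {B}}$ other than the accumulation point $1$ is isolated, which is why $f(D)$, containing points $(R_{m},N_{1},N_{1})$ with distinct values of $N_{1}$, cannot be connected.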
\nFor ${\\mathfrak {su}}(2)$, it is known that values of\n$\\omega _{h}$ sufficiently close to $1$ correspond to singular\nsolutions with $\\omega $ having one node,\nwhilst sufficiently small $\\omega _{h}$ correspond to solutions\nhaving as many nodes as we like.\nIn addition, if we have a regular solution with $n$ nodes,\nthen all singular solutions with $\\omega _{h}$\nsufficiently close have either $n$ or $n+1$ nodes.\nStarting with the singular solutions with one node, and decreasing\n$\\omega _{h}$, we first hit a regular solution with one node\n(there being only the trivial solution with no nodes).\nThen there may be more singular solutions with one node, or two nodes.\nIn the former case there must be another regular solution with\none node; in the latter case there will next be a regular solution\nwith two nodes.\nEither way, there must be a regular solution with two nodes before\nwe can move on to singular solutions with $n>2$ nodes.\nThe process continues for all $n$.\n\n\\bigskip\nHaving shown that there exist genuine ${\\mathfrak {su}}(3)$\nblack holes, we now come to the main result of this section, the following\ntheorem, which we prove for the moment for the case $r_{h}>2$,\nreturning to the case $r_{h}\\le 2$ at the end of the section.\n\n\\begin{theorem}\n\\label{exist}\nGiven ${\\bar {n}}_{1}=0,1,\\ldots $, there exist\nregular black hole solutions of the ${\\mathfrak {su}}(3)$\nEinstein-Yang-Mills equations with $\\omega _{1}(r)$ having\n${\\bar {n}}_{1}$ nodes and $\\omega _{2}(r)$ having $n_{2}$ nodes,\nfor infinitely many $n_{2}\\ge {\\bar {n}}_{1}$.\n\\end{theorem}\n\nA similar result holds with the roles of $\\omega _{1}$ and\n$\\omega _{2}$ reversed. 
\nNote that this result is slightly weaker than the corresponding\ntheorem for ${\\mathfrak {su}}(2)$, since we cannot guarantee that\nevery combination of $(n_{1},n_{2})$ is the node structure of\nthe gauge fields for some black hole solution, only that an infinite\nnumber of such combinations does occur for each $n_{1}$.\nAt the end of this section we shall give an argument, based on\na numerical analysis, that in fact black holes exist for all\n$(n_{1},n_{2})$, although we are not able to prove this\nanalytically.\nThe proof of theorem \\ref{exist} will proceed via a series of lemmas.\n\n\\begin{lemma}\n\\label{weaklem}\nSuppose that the starting parameters $(0,{\\bar {\\omega }}_{2,h})$\ncorrespond to a charged regular black hole solution in which\n$\\omega _{2}(r)$ has ${\\bar {n}}_{2}$ zeros.\nThen, given $n_{0}$, for all sufficiently small $\\omega _{1,h}$\nand $\\omega _{2,h}$ sufficiently close to ${\\bar {\\omega }}_{2,h}$,\nthe solutions are regular or singular solutions with \n$n_{1}\\ge n_{0}$ and $n_{2}\\ge {\\bar {n}}_{2}$.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nBy continuity, all field variables will remain close to the original\ncharged solution until $r\\gg 1$ and the geometry is approximately\nflat.\nAt this point $\\omega _{1}$ will be very small and $\\omega _{2}$\nwill be close to $1$.\nThe equations for $\\epsilon _{1}=\\omega _{1}$ and \n$\\epsilon _{2}=\\omega _{2}-1$ as functions of $\\tau $ are, in \nthis regime, to first order,\n\\begin{eqnarray}\n0 & = & {\\ddot {\\epsilon }}_{1}-{\\dot {\\epsilon }}_{1}\n+\\frac {3}{2}\\epsilon _{1} \n\\nonumber\n\\\\\n0 & = & {\\ddot {\\epsilon }}_{2} -{\\dot {\\epsilon }}_{2}\n-2\\epsilon _{2}.\n\\end{eqnarray}\nThis corresponds to a focus in the \n$(\\omega _{1},{\\dot {\\omega }}_{1})$ plane, and hence with \n$\\omega _{1,h}$ sufficiently small, $\\epsilon _{1}$ will have\nat least $n_{0}$ zeros.\n\\hfill\n$\\square $\n\n\\begin{lemma}\n\\label{reglem}\nIf $({\\bar 
{\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nleads to a regular black hole solution with $\\omega _{1}$ having\n${\\bar {n}}_{1}$ nodes and $\\omega _{2}$ having ${\\bar {n}}_{2}$\nnodes, then all $(\\omega _{1,h},\\omega _{2,h})$ sufficiently \nclose to\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nlead to regular or singular solutions with $\\omega _{1}$ having\nat least ${\\bar {n}}_{1}$ zeros and $\\omega _{2}$ having\nat least ${\\bar {n}}_{2}$ zeros.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nSince the solutions are continuous in the starting parameters and\n$r$, for $(\\omega _{1,h},\\omega _{2,h})$ sufficiently close to\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$,\nthe gauge function $\\omega _{1}$ will have ${\\bar {n}}_{1}$\nzeros and $\\omega _{2}$ will have ${\\bar {n}}_{2}$ zeros for \n$rn_{0}$.\nThen there is a neighbourhood \n${\\tilde {D}}_{0}\\subset D_{0}, {\\tilde {D}}_{0}\\neq D_{0}$\nfor which all solutions are regular or singular solutions with\n$n_{1}\\ge {\\tilde {n}}_{0}$ and $n_{2}\\ge {\\bar {n}}_{2}$.\nIn ${\\tilde {D}}_{0}$ there must be at least one singular solution\nhaving $n_{1}\\ge {\\tilde {n}}_{0}$ and $n_{2}\\ge {\\bar {n}}_{2}$,\nby lemma \\ref{reglem}, corresponding to starting values\n$({\\tilde {\\omega }}_{1,h},{\\tilde {\\omega }}_{2,h})$.\nConsider a curve joining \n$({\\tilde {\\omega }}_{1,h},{\\tilde {\\omega }}_{2,h})$\nand $({\\hat {\\omega }}_{1,h},{\\hat {\\omega }}_{2,h})$\nand lying in $D_{0}$.\nThen there must be at least one regular solution along this curve.\nLet $(\\omega _{1,h}^{R},\\omega _{2,h}^{R})$\nbe the regular solution closest to \n$({\\hat {\\omega }}_{1,h},{\\hat {\\omega }}_{2,h})$,\nhaving $n_{1}=n_{1}^{R}$ and $n_{2}=n_{2}^{R}$.\nSince $(\\omega _{1,h}^{R},\\omega _{2,h}^{R})\\in D_{0}$, it follows \nthat $n_{1}^{R}\\ge n_{0}$ and $n_{2}^{R}\\ge {\\bar {n}}_{2}$.\nAlso, from lemma \\ref{reglem}, \n$n_{1}^{R}\\le n_{0}$ and $n_{2}^{R}\\le {\\bar 
{n}}_{2}$,\nsince sufficiently close to \n$(\\omega _{1,h}^{R},\\omega _{2,h}^{R})$ there are singular solutions\nhaving $n_{1}=n_{0}$ and $n_{2}={\\bar {n}}_{2}$.\nTherefore $n_{1}^{R}=n_{0}$ and $n_{2}^{R}={\\bar {n}}_{2}$.\n\\hfill\n$\\square $\n\n\\begin{lemma}\n\\label{existlem}\nGiven ${\\bar {n}}_{2}$ and $n_{0}>{\\bar {n}}_{2}$, then there exist\nregular and singular solutions with $n_{2}={\\bar {n}}_{2}$ and\n$n_{1}\\ge n_{0}$.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nConsider the point $(0,{\\bar {\\omega }}_{2,h})$ which corresponds\nto the charged regular solution with $n_{2}={\\bar {n}}_{2}$.\nThen all $(\\omega _{1,h},\\omega _{2,h})$ sufficiently close to\n$(0,{\\bar {\\omega }}_{2,h})$ are regular or singular solutions\nwith $n_{1}\\ge n_{0}$ and $n_{2}\\ge {\\bar {n}}_{2}$.\nLet all such $(\\omega _{1,h},\\omega _{2,h})$ form a neighbourhood\n$D_{0}$ of $(0,{\\bar {\\omega }}_{2,h})$.\nFrom \\cite{bfm}, there exist $(0,{\\hat {\\omega }}_{2,h})\\in D_{0}$\ncorresponding to singular solutions having $\\omega _{1}\\equiv 0$\nand $n_{2}={\\bar {n}}_{2}$.\nBy continuity, all $(\\omega _{1,h},\\omega _{2,h})$ sufficiently close to\n$(0,{\\hat {\\omega }}_{2,h})$ correspond to singular solutions having\n$n_{1}\\ge n_{0}$ and $n_{2}={\\bar {n}}_{2}$.\nHence, by lemma \\ref{singlem}, there are also in this neighbourhood\nregular black hole solutions having $n_{1}\\ge n_{0}$ and\n$n_{2}={\\bar {n}}_{2}$.\n\\hfill\n$\\square $\n\n\\bigskip\n\\noindent \n{\\bf {Proof of Theorem \\ref{exist}}} \n\\smallskip \n\\newline \nFix ${\\bar {n}}_{2}$.\nThen we have regular solutions with node structure \n$({\\bar {n}}_{2}, {\\bar {n}}_{2})$.\nNow let $n_{0}={\\bar {n}}_{2}+1$, then from lemma \\ref{existlem}\nthere are regular solutions having $n_{2}={\\bar {n}}_{2}$ and\n$n_{1}\\ge n_{0}$.\nLet $n_{0}'$ be the smallest such $n_{1}$.\nNow set $n_{0}=n_{0}'+1$ and repeat the process.\n\\hfill\n$\\square $\n\n\\bigskip\nIn order to guarantee 
the existence of black hole solutions having\nnode structure $({\\bar {n}}_{1},{\\bar {n}}_{2})$ for every pair of\nintegers $({\\bar {n}}_{1},{\\bar {n}}_{2})$, we would require the\nfollowing lemma in addition to lemma \\ref{existlem}.\n\n\\begin{lemma}\n\\label{tightlem}\nSuppose $({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\ncorresponds to a regular black hole solution in which\n$\\omega _{1}$ has ${\\bar {n}}_{1}$ nodes and\n$\\omega _{2}$ has ${\\bar {n}}_{2}$ nodes.\nThen, for $(\\omega _{1,h},\\omega _{2,h})$ sufficiently close\nto $({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$,\nsolutions for which $\\omega _{2}$ still has ${\\bar {n}}_{2}$\nnodes are such that $\\omega _{1}$ has either ${\\bar {n}}_{1}$\nor ${\\bar {n}}_{1}+1$ nodes.\n\\end{lemma}\n\nThe argument we now give for lemma \\ref{tightlem} is not an analytic\nproof because we require a numerical analysis.\n\n\\bigskip\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nBy continuity, we know that for $(\\omega _{1,h},\\omega _{2,h})$\nsufficiently close to \n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$,\nthe gauge function $\\omega _{1}$ has ${\\bar {n}}_{1}$ zeros\nand $\\omega _{2}$ has ${\\bar {n}}_{2}$ zeros for $rr_{1}$, then, $\\omega _{2}$ will\nbe of one sign.\nTherefore we may consider the new dependent variable\n\\begin{equation}\n\\psi = \\frac {\\omega _{1}}{\\omega _{2}}\n\\end{equation}\nwhich will have the same number of zeros as $\\omega _{1} (r)$\nfor $r>r_{1}$.\nThen the equation satisfied by $\\psi $ is\n\\begin{equation}\nr^{2} \\mu \\psi '' +\\left( 2m-2r^{3}p_{\\theta } \\right)\n\\psi ' +2r^{2}\\mu \\frac {\\omega _{2}'}{\\omega _{2}} \n\\psi '\n+\\frac {3}{2} \\omega _{2}^{2} \\psi \\left( 1-\\psi ^{2} \\right) =0.\n\\label{psieqn}\n\\end{equation}\nOn some interval, $r_{1}1$.\nIn other words, for $\\epsilon _{1}>0$ initially, $\\epsilon _{1}>0$\nalways and $\\omega _{1}$ will have no additional zeros for $r>r_{1}$.\nHence we need only consider the 
case where $\\epsilon _{1}<0$\ninitially.\nUnfortunately the non-linear perturbation equations cannot be\nintegrated analytically and so we present a numerical argument.\nFor initial $\\epsilon _{1},\\epsilon _{2}$ sufficiently small,\nthe values of $\\epsilon _{1}$ and $\\epsilon _{2}$ will remain close\nto those on the unstable manifold for all $\\tau <\\tau _{1}$ for\nsome $\\tau _{1}$, where $\\tau _{1}$ can be taken to be as large\nas we like by making the initial perturbations sufficiently small.\nNumerical integration of the non-linear equations with the \ninitial point on the unstable manifold and $\\epsilon _{1}<0$,\nshows that $\\epsilon _{2}$ increases monotonically and \n$\\epsilon _{1}$ decreases monotonically with just one zero, and \ncuts through $\\epsilon _{1}=-2$ (corresponding to $\\psi =-1$),\nfrom whence $\\psi $ must be monotonically decreasing\n(see figures 4 and 5).\nHence in this case $\\omega _{1}$ has ${\\bar {n}}_{1}+1$ zeros.\n\\hfill\n$\\square $ \n\n\\bigskip\nLemma \\ref{tightlem} allows us to prove that every combination\nof integers $(n_{1},n_{2})$ must correspond to a black hole\nsolution.\nIn the proof of theorem \\ref{exist}, we begin with regular\nsolutions with node structure $({\\bar {n}}_{2},{\\bar {n}}_{2})$\nand then from lemma \\ref{tightlem} there are either regular \nor singular solutions with node structure\n$({\\bar {n}}_{2}+1,{\\bar {n}}_{2})$.\nUsing lemma \\ref{singlem}, there are then regular solutions\nhaving this node structure and the proof of theorem \\ref{exist}\nfollows as before. 
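As a consistency check (ours, not part of the original argument), the focus and saddle behaviour invoked in lemma \ref{weaklem}, and again in the numerical argument above, follow directly from the linearized flat-space equations: substituting $\epsilon _{i}\propto e^{\lambda \tau }$ gives

```latex
\begin{equation}
\lambda ^{2}-\lambda +\frac {3}{2}=0
\quad \Rightarrow \quad
\lambda =\frac {1\pm i{\sqrt {5}}}{2},
\qquad
\lambda ^{2}-\lambda -2=0
\quad \Rightarrow \quad
\lambda =2,\, -1 .
\end{equation}
```

The complex roots with positive real part give an unstable focus for $\epsilon _{1}$, which therefore oscillates and accumulates zeros as $\tau $ increases, while the real roots of opposite sign give a saddle for $\epsilon _{2}$, whose generic solutions are eventually monotonic.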
\n\n\\bigskip\nSo far we have proved the existence of infinitely many \n${\\mathfrak {su}}(3)$ black holes only for $r_{h}>2$.\nThe remainder of this section will be spent proving the result\nfor $r_{h}\\le 2$.\nIn this case, the points at which the map $f$ is not continuous\n(which must still exist by the argument used in proving proposition\n\\ref{firstprop}) do not necessarily correspond to regular black hole\nsolutions: they could also be oscillating solutions.\nA couple of lemmas concerning oscillating solutions are required\nbefore the proofs of proposition \\ref{firstprop} and theorem\n\\ref{exist} can be extended.\n\n\\begin{lemma}\n\\label{osclem}\nIf $({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nleads to an oscillating solution with \n$\\omega _{1}(\\tau )\\rightarrow 0$ and \n$\\omega _{2}(\\tau )\\rightarrow 1$ as \n$\\tau \\rightarrow \\infty $, and\n$\\omega _{2}$ has ${\\bar {n}}_{2}$ zeros, then all \n$(\\omega _{1,h},\\omega _{2,h})$ sufficiently close to\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nlead to one of the following types of solution:\n\\begin{enumerate}\n\\item\nan oscillating solution in which $\\omega _{1}(\\tau )\\rightarrow 0$ and \n$\\omega _{2}(\\tau )\\rightarrow 1$ as \n$\\tau \\rightarrow \\infty $, with $\\omega _{2}$ having at least\n${\\bar {n}}_{2}$ zeros;\n\\item\ngiven $n_{0}$, either a regular or a singular solution with\n$n_{1}\\ge n_{0}$ and $n_{2}\\ge {\\bar {n}}_{2}$.\n\\item\nan $S_{\\infty }$ solution.\n\\end{enumerate}\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nSince in the original solution $\\omega _{1}$ has infinitely many\nzeros and the solutions are analytic in $\\tau $ and the starting\nvalues, there is some $\\tau _{1}$ such that \nall solutions with $(\\omega _{1,h},\\omega _{2,h})$ \nsufficiently close to $({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})$\nmust have at least $n_{0}$ zeros of $\\omega _{1}$ and \n${\\bar {n}}_{2}$ zeros of $\\omega _{2}$ for 
$\\tau <\\tau _{1}$,\nand all variables will be close to their asymptotic values.\n\\hfill\n$\\square $\n\n\\begin{lemma}\nThere exists $({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{2,h})\\in D$\ncorresponding to an oscillating solution.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nFrom \\cite{bfm}, we know that there are points on the line\n$\\omega _{1,h}=\\omega _{2,h}$ corresponding to $S_{\\infty }$\nsolutions.\nBy continuity, there is a neighbourhood $D_{0}$ of the line\n$\\omega _{1,h}=\\omega _{2,h}$ corresponding to $S_{\\infty }$\nsolutions.\nHowever, lemma \\ref{reglem} still applies in this case, \nso that the regular solutions on the line \n$\\omega _{1,h}=\\omega _{2,h}$ will have singular solutions in $D$\nsufficiently close to them for which $r(\\tau _{0})>2$.\nTherefore, by lemma \\ref{osclem}, there must be points in $D$\nwhich correspond to oscillating solutions.\n\\hfill\n$\\square $\n\n\\bigskip\nThe proofs of proposition \\ref{firstprop} and theorem \\ref{exist}\nare now exactly the same as for $r_{h}>2$. \nSince oscillating solutions\nhave an infinite number of zeros for at least one of the gauge field\nfunctions, and using lemma \\ref{osclem},\nregular solutions are still needed to separate \nsingular solutions having different node structures.\n\n\\section{${\\mathfrak {su}}(N)$ black holes}\n\\label{sun}\nThe detailed discussion of the previous section now enables us to\nproceed quite swiftly to the analogues of proposition \\ref{firstprop}\nand theorem \\ref{exist} for general $N$. 
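Before setting up the induction, it is helpful to evaluate the bound on $r_{0}$ quoted in the previous section, $(N-1)^{2}\le r_{0}^{2}\le \frac {1}{6}N(N+1)(N-1)$, for the first few values of $N$; this tabulation is ours, for orientation only:

```latex
\begin{equation}
N=3:\quad 4\le r_{0}^{2}\le 4,
\qquad
N=4:\quad 9\le r_{0}^{2}\le 10,
\qquad
N=5:\quad 16\le r_{0}^{2}\le 20 .
\end{equation}
```

Only for $N=3$ do the two bounds coincide, fixing $r_{0}=2$ exactly; for larger $N$ the critical radius is confined only to a window, which is why the condition ruling out such solutions is phrased in terms of $r_{h}^{2}>\frac {1}{6}N(N+1)(N-1)$.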
\nThe method of proof will be by induction, which was the basic idea used \nin the previous section, where we proved existence for \n${\\mathfrak {su}}(3)$ exploiting known results about \n${\\mathfrak {su}}(2)$.\n\n\\bigskip\nThe first step is to illustrate how solutions which are from \n${\\mathfrak {su}}(n)$, where $n<N$, may be embedded in the\n${\\mathfrak {su}}(N)$ theory.\nFor $N>3$, there are additional embeddings which arise from setting\nat least one of $\\omega _{2}, \\ldots , \\omega _{N-2}\\equiv 0$,\nso that the gauge field equations decouple into two or more\nsets of coupled components, the sets being coupled to each other \nonly through the metric. Solutions of this form have been found\nnumerically in \\cite{klei1}.\nWe illustrate how the existence of black holes of this type may\nbe proved by considering the simplest case, which arises when\n$N=4$.\nIn this case, we have three non-zero gauge field functions, \n$\\omega _{1},\\omega _{2}, \\omega _{3}$.\nIf we set $\\omega _{2}\\equiv 0$, then the field equations take\nthe form\n\\begin{eqnarray}\nr^{2}\\mu \\omega _{1}'' & = & \n-\\left( 2m-2r^{3}p_{\\theta } \\right) \\omega _{1}'\n-\\left( 1-\\omega _{1}^{2} \\right) \\omega _{1} \n\\label{omone}\n\\\\\nr^{2}\\mu \\omega _{3}'' & = & \n-\\left( 2m-2r^{3}p_{\\theta } \\right) \\omega _{3}'\n-\\left( 1-\\omega _{3}^{2} \\right) \\omega _{3} \n\\label{omtwo}\n\\\\\nm' & = & \\left( \\mu G +r^{2} p_{\\theta }\\right)\n\\\\\n\\frac {S'}{S} & = & \\frac {2G}{r}\n\\end{eqnarray}\nwhere\n\\begin{equation}\nG=\\omega _{1}^{'2} + \\omega _{3}^{'2},\n\\qquad\np_{\\theta } = \\frac {1}{2r^{4}} \\left( \n\\left[ \\omega _{1}^{2}-1 \\right] ^{2}+ \n\\left[ \\omega _{3}^{2}-1 \\right] ^{2}+ 8 \\right) .\n\\end{equation}\nThese equations look very much like two uncoupled ${\\mathfrak {su}}(2)$\ndegrees of freedom; however, the two $\\omega $'s are (albeit weakly)\ncoupled.\nThe fact that we have two gauge field functions here means that we\ncan use the methods of section \\ref{su3} to prove the existence of\nregular black hole 
solutions.\nIn fact, all the results of that section carry directly over to this\nsituation on replacing $\\omega _{2}$ there by $\\omega _{3}$ here.\nThere is one exception. \nThe equations (\\ref{omone},\\ref{omtwo}) are slightly different from\nthose in section \\ref{su3} and this enables us to strengthen\nlemma \\ref{reglem}.\n\n\\begin{lemma}\nIf $({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{3,h})$\nleads to a regular black hole solution with $\\omega _{1}$ having\n${\\bar {n}}_{1}$ nodes and $\\omega _{3}$ having ${\\bar {n}}_{3}$\nnodes, then all $(\\omega _{1,h},\\omega _{3,h})$ sufficiently \nclose to\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{3,h})$\nlead to regular or singular solutions with $\\omega _{1}$ having\neither ${\\bar {n}}_{1}$ or ${\\bar {n}}_{1}+1$ zeros and \n$\\omega _{3}$ having either ${\\bar {n}}_{3}$ or ${\\bar {n}}_{3}+1$\nzeros.\n\\end{lemma}\n\n\\noindent {\\bf {Proof}} \\smallskip \\newline \nSince the solutions are continuous in the starting parameters and $r$,\nfor $(\\omega _{1,h},\\omega _{3,h})$ sufficiently close to\n$({\\bar {\\omega }}_{1,h},{\\bar {\\omega }}_{3,h})$,\nthere is some $r_{1}$ such that $\\omega _{1}$ will have ${\\bar {n}}_{1}$\nzeros and $\\omega _{3}$ will have ${\\bar {n}}_{3}$ zeros for\n$r<r_{1}$.\nThere is then some $r_{2}>r_{1}$ such that all field variables\nare close to their asymptotic values for $r\\in [r_{1},r_{2}]$.\nThe flat space field equations (see section \\ref{flat}) decouple\ncompletely since $\\omega _{2}\\equiv 0$, and so the phase plane\nanalysis of \\cite{bfm} is valid in this case.\nThe phase plane analysis enables us to be more precise about\nthe number of zeros of the perturbed gauge field functions, \nunlike the more general case where the phase space had more than\ntwo coupled dimensions.\nTherefore the conclusions of \\cite[proposition 22]{bfm} can \nbe applied directly here and each gauge field function\ncan have at most one zero for $r>r_{2}$.\n\\hfill\n$\\square $\n\n\\bigskip\nWith this more powerful 
lemma, the analysis of section \\ref{su3}\nreaches the conclusion that there are regular black hole\nsolutions of the required form for each $r_{h}$ and node structure\n$(n_{1},n_{3})$.\n\n\\bigskip\nFor general $N$, similar embeddings of the form of two or more\n${\\mathfrak {su}}(n)$ ($n<N$) solutions can be treated in the same\nway, and the existence proofs apply in both cases\n$r_{h}^{2}>\\frac {1}{6}N(N+1)(N-1)$ (in which\ncase no oscillating solutions exist), and \n$r_{h}^{2}\\le \\frac {1}{6} N(N+1)(N-1)$.\nNote that, as in the ${\\mathfrak {su}}(3)$ case, we are not able to\nprove analytically that every sequence of integers \n$(n_{1},n_{2},\\ldots ,n_{N-1})$ corresponds to a black hole solution\nwhose gauge functions have this node structure, although it might\nreasonably be expected that this is indeed the case.\n\n\n\n\n\\section{Results and conclusions}\nIn this paper we have proved the existence of a vast number of\nhairy black holes in ${\\mathfrak {su}}(N)$ Einstein-Yang-Mills\ntheories.\nThe solutions are described by $N-1$ parameters,\ncorresponding to the number of nodes of the gauge field\nfunctions.\nThe result of \\cite{brod} tells us that each of these solutions\nwill have a topological instability, similar to the flat-space\n``sphaleron'', as was the case for ${\\mathfrak {su}}(2)$\nblack holes.\nThis instability does\nnot necessarily diminish the physical importance of these objects,\nas they may have an important role in processes such as cosmological\nparticle creation \\cite{gibbons}.\n\n\\bigskip\nWe now briefly mention another important reason for studying\n${\\mathfrak {su}}(N)$ black holes, namely the behaviour\nof the solutions as $N\\rightarrow \\infty $.\nBlack holes in ${\\mathfrak {su}}(\\infty )$\nEinstein-Yang-Mills theory would possess an infinite amount\nof hair, requiring an infinite number of parameters to\ndescribe their geometry.\nThis might be analogous to the ``W-hair'' found in \nnon-critical string theory \\cite{ellis}, and have \ndrastic consequences for the Hawking radiation,\ninformation loss and quantum decoherence 
processes\nassociated with black holes.\nThe existence of infinite amounts of hair might also \nrender such objects stable.\nIt is already known that the ${\\mathfrak {su}}(\\infty )$\nLie algebra is simply that of the diffeomorphisms of the sphere\n\\cite{floratos}.\nThis means that the limit as $N\\rightarrow \\infty $ cannot\nbe taken smoothly, in particular the results of the present paper\nare only valid when $N$ is finite.\nWe hope to return to this matter in a subsequent publication.\n\n\n\n\\section*{Acknowledgments}\nThe work of N.E.M. is supported by a P.P.A.R.C. advanced fellowship\nand that of E.W. is supported by a fellowship at Oriel College, Oxford.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nExploding stars have been noted for millennia, and observed (in a scientific\nsense) for somewhat over a century. \nIt wasn't until the middle of the 20$^{th}$ century that a distinction\ncould be made\nbetween the supernovae, the novae, and the eruptive phenomena seen\nin cataclysmic variables (the dwarf novae).\n\\cite{K63} was the first to suggest that the\nnovae were the consequence of explosive hydrogen burning on the surface\nof a degenerate dwarf. It is now well accepted that\nthe novae are manifestations of runaway thermonuclear reactions\non the\nsurface of a white dwarf (WD) accreting hydrogen in a close binary system\n(e.g., \\citealt{S71}).\nThe novae are highly dynamic phenomena, with\ntimescales ranging from seconds to millennia, occurring in complex systems\ninvolving two stars and mass transfer.\n\nThe primary driver of the evolution of the observational characteristics\nof a nova is \nthe temporal decrease of the optical depth in an expanding atmosphere.\nThe novae are marked by an extraordinary spectral evolution \\citep{Wil91,Wil92}.\nIn the initial phases one often\nsees an optically thick, expanding pseudo-photosphere. 
In some cases\none sees the growth and then disappearance of inverse P~Cygni\nabsorption lines from the cool,\nhigh velocity ejecta. As the pseudo-photosphere becomes optically thin, emission\nlines of the Hydrogen Balmer series strengthen, accompanied by either a\nspectrum dominated by permitted lines of Fe~II, or of helium and nitrogen.\nThe emission line profiles and line ratios evolve as the optical depth\nof the ejecta decreases, and the nova transitions from the permitted to the \nnebular phase \\citep{Wil91}.\n\nBeyond this template, in detail the novae exhibit a panoply\nof individual behaviors.\n\\cite{PG57} and \\cite{McL60}\ndescribed the evolution of novae as they were known at the time. \n\\cite{Wil92} discussed the formation of the lines, and\ndivided novae into the \\ion{Fe}{2} and He-N classes, based on \nwhich emission lines dominated (aside from the ubiquitous\nBalmer lines of hydrogen).\nNovae are also categorized as recurrent and classical novae, with the former\nhaving more than one recorded outburst. Over a long enough baseline, it is\nlikely that all novae are recurrent (e.g., \\citealt{For78}).\n\n\\cite{BE08} present a recent set of reviews of the nova phenomenon.\n\nThere exist well-sampled photometric records for many novae, such as those \npresented by \\cite{SSH10}. 
They classify the photometric light curves, from \nplates amassed over the past century, into 7 distinct photometric classes.\nOn the other hand, spectroscopic observations of novae have rarely been \npursued far past maximum because most novae fade rapidly, and time on the\nlarge telescopes required for spectroscopy is precious.\nThe most comprehensive past work was the Tololo Nova atlas\n\\citep{Wil94} of 13 novae followed spectroscopically over a\n5 year interval.\nThe availability of the SMARTS\\footnote{The Small and Medium Aperture\nTelescope System, directed by Charles Bailyn, is an ever-evolving\npartnership that has overseen operations of 4 small telescopes at\nCerro Tololo Interamerican Observatory since 2003.}\ntelescope facilities \\citep{S10} makes\npossible routine synoptic monitoring programs, \nboth photometric and spectroscopic,\nof time-variable sources.\nIt is timely, therefore, to undertake a comprehensive, systematic,\nhigh-cadence study of the\nspectrophotometric evolution of the galactic novae.\n\nThis atlas collects photometry and spectra of the novae we have observed\nwith SMARTS.\nMost of the novae in the atlas are recent novae, discovered since 2003. Most\nare in the southern hemisphere. The observing cadences are irregular; we have\nconcentrated on He-N and recurrent novae, novae in the LMC, and\nnovae that otherwise show unusual characteristics.\n\nOur purpose here is to introduce this atlas.\nOur scientific aim is to facilitate a detailed comparison of the\nvarious characteristics of the novae. 
These data can be used by and of\nthemselves to study individual objects, \nfor systematic studies to further define the phenomenon, \nand for correlative studies with other comprehensive data sets,\nsuch as the {\\it Swift} Nova Working Group's survey of X-ray and UV\nobservations of recent novae \\citep{Swift11}.\nOur aim in making the atlas public is to make the data accessible to \nthe community.\nWe are focusing on certain novae, \nand on particular aspects of the nova phenomenon (\\S\\ref{sec-disc}),\nbut simply cannot do justice to the full dataset.\n\n\\section{Observations and Data Analysis}\n\n\\subsection{Low Dispersion Spectroscopy}\\label{sec-lds}\nThe spectra reported here have been obtained with the venerable RC \nspectrograph\\footnote{\\url{http:\/\/www.ctio.noao.edu\/spectrographs\/60spec\/60spec.html}}\non the SMARTS\/CTIO 1.5m telescope.\nObservations are queue-scheduled, and are taken by\ndedicated service observers.\n\nThe detector is a Loral 1K CCD. We use a variety of spectroscopic modes, with\nmost of the spectra having been obtained with one of the standard modes \nshown in Table~\\ref{tbl-spsetups}.\n\nWe use slit widths of 1 and 0.8 arcsec in the low and the higher resolution\n47\/II modes, respectively.\nThe slit is oriented E-W and is not rotated during the night.\n\nWe routinely obtain 3 spectra of each target in order to filter for cosmic \nrays. We combine the 3 images and extract the spectrum by fitting a Gaussian \nin the spatial direction at each pixel. Wavelength calibration is accomplished \nby fitting a 3$^{rd}$ to 6$^{th}$ order polynomial to the\nTh-Ar or Ne calibration lamp \nline positions. 
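The polynomial wavelength calibration described above amounts to a short least-squares fit. The sketch below is a minimal illustration of the idea; the pixel centroids and the quadratic "true" dispersion are invented for the example and are not taken from our actual lamp line lists.

```python
import numpy as np

def wavelength_solution(pixel_centroids, lab_wavelengths, order=3):
    """Fit a polynomial dispersion solution to identified lamp lines,
    returning a callable that maps pixel position to wavelength."""
    coeffs = np.polyfit(pixel_centroids, lab_wavelengths, order)
    return np.poly1d(coeffs)

# Illustrative (invented) line identifications: six lamp lines whose
# wavelengths follow a mildly quadratic dispersion relation.
pix = np.array([80.0, 250.0, 420.0, 600.0, 780.0, 950.0])
lam = 5600.0 + 1.2 * pix + 1.0e-5 * pix**2

# Fit and evaluate the solution at an arbitrary pixel.
solution = wavelength_solution(pix, lam, order=2)
```

In practice the fit order (3rd to 6th, as quoted above) is chosen according to how many lamp lines are securely identified; with few lines a lower order is safer.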
We observe a spectrophotometric standard star, generally \nLTT~4364 \\citep{Ham92,Ham94} or Feige~110 \\citep{Oke90,Ham92,Ham94},\non most nights to determine the counts-to-flux conversion.\nBecause of slit losses, possible changes in transparency and seeing \nduring the night, and parallactic losses due to the fixed slit orientation,\nthe flux calibration is imprecise. We generally recover the correct \nspectral shape, except at the shortest wavelengths ($<$3800\\AA)\nwhere apparent changes in the slope of the continuum \nare likely attributable to airmass-dependent parallactic slit losses.\nWe have the capability to use simultaneous or contemporaneous photometry\nto recalibrate the spectra.\n\nThere are some quality control issues that have not been fully dealt with,\nespecially when we are near the sensitivity\nlimits of the telescope. These include\nobservations of an incorrect star, obviously incorrect flux calibrations,\nor spectra indistinguishable from noise. We are going through the data\nas time permits to address these issues.\n\n\\subsection{High Dispersion Spectroscopy}\n\nWe have a small number of high resolution spectra of some of the\nbrighter novae near maximum.\nThese were obtained with the Bench-Mounted \nEchelle\\footnote{\\url{http:\/\/www.ctio.noao.edu\/noao\/content\/fiber-echelle-spectrograph}}, and\ncurrently with the Chiron echelle\nspectrograph\\footnote{\\url{http:\/\/www.ctio.noao.edu\/noao\/content\/chiron}}. \nThese data will be incorporated into the atlas at a later time.\n\n\\subsection{Photometry}\n\nMost of the photometry was obtained using the \nANDICAM\\footnote{\\url{http:\/\/www.astronomy.ohio-state.edu\/ANDICAM\/detectors.html}}\ndual-channel imager on the 1.3m telescope.\nObservations are queue-scheduled, with dedicated service observers.\n\nThe ANDICAM optical channel is a 2048$^2$ pixel Fairchild 447 CCD.\nIt is read out with 2x2 binning, which yields a 0.369 arcsec\/pixel\nplate scale. 
The field of view is roughly 6x6 arcmin, but until\nrecently there has been \nsignificant unusable area on the east and south sides on the chip.\nThe finding charts in the atlas\nshow examples of ANDICAM images. We normally obtain single\nimages, since the fraction of pixels marred by cosmic rays\nand other events is small.\nExposure times range from 1 second to about 2 minutes. We use the standard \nJohnson-Kron-Cousins $B$, $V$,\n$R_C$, and $I_C$ filters (the $U$ filter has been unavailable since 2005, but\nwe have extensive $U$~band for some of the earlier novae, particularly\nV475~Sct and V5114~Sgr).\n\nThe ANDICAM IR channel is a Rockwell 1024$^2$ HgCdTe ``Hawaii'' Array.\nIt is read out in 4 quadrants with 2x2 binning, which yields a\n0.274 arcsec\/pixel plate scale and a 2.4 arcmin field of view.\nThe observations are dithered using an internal mirror. In most cases\nwe use 3 dither positions, with integration times from 4 seconds (the minimum\nintegration time) to about 45 seconds. We use the CIT\/CTIO $J$, $H$, and $K_s$\nfilter set.\n\nThe optical and IR channels are observed simultaneously using a dichroic beam\nsplitter.\nThe observing cadence varies from nightly for new novae to $\\sim$annual\nmonitoring for the oldest novae in our list.\n\nWe perform aperture photometry on the target and between 1 and 25\ncomparison stars in the field. The aperture radius R is either 5 or 7\npixels, depending on field crowding and sky brightness.\nThe background is the median\nvalue in an annulus of inner radius 2R and outer radius 2R+20 pixels centered\non the extraction aperture.\nInstrumental magnitudes are recorded for each star. There are cases where the\nfading remnant becomes blended with nearby stars (within $\\sim$1.5~arcsec).\nTo date we have not accounted for such blending.\nEventually we plan to employ PSF-fitting techniques\nin these crowded regions.\n\nOn most photometric nights an observation of a \\cite{Landolt92} standard field\nis taken. 
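The aperture photometry recipe above (aperture radius R, median sky in an annulus running from 2R to 2R+20 pixels) can be sketched with plain numpy. This is an illustrative re-implementation under those stated parameters, not the code actually used for the atlas.

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap=5, ann_width=20):
    """Net counts in a circular aperture of radius r_ap centred on
    (x0, y0); the background is the median pixel value in an annulus
    of inner radius 2*r_ap and outer radius 2*r_ap + ann_width."""
    yy, xx = np.indices(image.shape)
    dist = np.hypot(xx - x0, yy - y0)
    in_aper = dist <= r_ap
    in_annulus = (dist > 2 * r_ap) & (dist <= 2 * r_ap + ann_width)
    sky = np.median(image[in_annulus])
    net_counts = image[in_aper].sum() - sky * in_aper.sum()
    return net_counts, sky

# Synthetic check: a flat sky of 10 counts plus a 500-count point source.
img = np.full((101, 101), 10.0)
img[50, 50] += 500.0
net, sky = aperture_photometry(img, 50, 50, r_ap=5)
```

An instrumental magnitude then follows as -2.5 log10(net counts) plus a zero point, which is how the differential photometry described below proceeds.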
On those nights \nwe determine the zero-point correction and determine the magnitudes of the\ncomparison stars. We adopt the mean magnitudes for each comparison star.\nThese are\ngenerally reproducible to better than 0.02~mag; variable stars are identified\nthrough their scatter around the mean, and are not used in the \ndifferential photometry.\nWith only a single observation of a standard star field each night,\nwe assume the nominal atmospheric extinction law and zero color correction. \nUsing differential photometry, we can\nrecover the apparent magnitude of a target with \na typical uncertainty of $<$0.03 mag at 20$^{th}$ magnitude.\n\nWhile we could do the same with the IR channel images, we find it simpler to\nuse the catalogued 2MASS magnitudes of the standard stars. We implicitly\nassume that the 2MASS comparison stars are non-variable, and that the color\nterms in the photometric solution are negligible.\n\nIn addition, some higher cadence data have been obtained with the SMARTS 0.9m\nand 1.0m telescopes.\nThe 0.9m detector is a 2048x2046 CCD with a 0.401~arcsec\/pixel plate scale.\nOn the 1.0m, we used the 512x512 Apogee camera that was employed prior to\ninstallation of the 4K camera.\nWe perform the differential photometry\nin a manner identical to that for the ANDICAM, and merge the data sets.\nThese data are not yet fully incorporated into the atlas.\n\n\\section{Setup of the Atlas}\n\nThe atlas is on-line at\n\\url{http:\/\/www.astro.sunysb.edu\/fwalter\/SMARTS\/NovaAtlas\/}.\n\nThe atlas consists of a main page for each nova, giving finding charts (in\nboth $V$ and $K$ bands), coordinates, and links to the spectra and photometry.\nThe spectra are available as images, and may be downloaded in ascii (text)\nformat. 
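As an example of working with the downloaded files (our sketch; we assume a plain two-column wavelength\/flux layout, which should be checked against the actual file headers):\n\n```python\nimport numpy as np\n\n# Stand-in for an ascii spectrum downloaded from the atlas; real files may\n# carry header lines (use the skiprows argument of np.loadtxt if so).\ndata_lines = ['3800.0 1.2e-14', '3801.5 1.3e-14', '3803.0 1.1e-14']\nwave, flux = np.loadtxt(data_lines, unpack=True)\nmedian_flux = np.median(flux)  # crude continuum level over the range read\n```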
The photometry page shows plots of the light curve and colors,\nand permits one to download the data in ascii format.\nNote that the plots on the photometry page only show data with formal\nuncertainties $<$0.5~mag, while all measured magnitudes and uncertainties\nare included in the ascii listings.\nThere is a link to a page of references for other observations of the novae.\n\n\\section{The Novae}\n\nAs of 1 July 2012 the atlas includes data on 64 novae. \nOf these, 29 are still bright enough (V$<$18) to reach spectroscopically\nwith the 1.5m\/RC spectrograph. Most are still detectable photometrically\nwith the 1.3m\/ANDICAM imager. Only 5 are no longer on our photometric\ntarget list because\nthey are too faint or too confused with brighter companions.\n\nThe spatial distribution of these novae is shown in\nFigure~\\ref{fig_spdist} in both celestial and galactic\ncoordinates.\n\nLists of our targets and particulars \non the number and observing date distribution of the \nobservations are in Tables~\\ref{tbl-obs} (novae from before 2012),\n\\ref{tbl-obs_new} (novae discovered in 2012), and \\ref{tbl-obs_LMC} (novae in\nthe LMC). The reference time is ideally the time of peak brightness, but this\nis often not well known. In general, T$_0$ is the time of discovery.\nIn the case of T Pyx, which rises very slowly, T$_0$ is the time of peak\nbrightness as estimated from our photometry. For novae that were discovered well\npast peak, including N Sgr 2012b and XMMU J115113.3-623730, T$_0$ is a guess.\nAll the dates in the Tables are referenced to T$_0$.\nThe tabulated $V$ is the last observed $V$ magnitude;\nin most cases this is\nthe brightness in June 2012.\nThe Tables are current as of 1 July 2012. \n\n\n\\normalsize\n\\subsection{Observing Statistics}\n\nAs of 1 July 2012 the full atlas contains 64 novae.\nWe have between 1 and 368 spectra for the novae, with a median of 28\nspectra per nova. 
The number of photometric points varies between 1 and 265,\nwith a median of 35,\nfor 53 novae. Since some of the observations were taken through thick clouds,\nnot all observations have the best possible S\/N.\n\nThe photometric and spectral coverage is generally non-uniform in time. \nIn addition to annual gaps due to the Sun, there is spotty coverage during the\naustral winter when the weather becomes worse. We do not have unlimited \nobserving time, so we concentrate on those novae that tickle our astronomical\nfancy: the He-N novae, and those showing unusual characteristics. We \ndo not attempt spectroscopy of targets fainter than $V\\sim$18, because\nthe 1.5m telescope has limited grasp. \n\nWe generally do not make great efforts to obtain photometry from day 0, because\namateur astronomers do such a good job. In many cases data available from the\nAAVSO can fill in the first few weeks, while the nova is bright (we do have\nbright limits near $V=8$ and $K=6$). Our forte is the ability\nto (a) follow the evolution to quiescence, and (b) do so in the 7 photometric\nbands from $B$ through $K_s$. \nIn one case we were on the nova 1.1 days after discovery,\nbut the median delay is 15 days.\n\nWe try to start the spectroscopic monitoring sooner, because this is a unique\ncapability of SMARTS. The first spectrum is obtained with a median delay of\n8.0 days from discovery, but we have observed 1 nova within 0.6~days of\ndiscovery, 9 within 2 days, and 14 within 3 days. \n\nWe have multi-epoch photometry of 52 novae over timespans of up to\n3173 days (8.7 years),\nand multi-epoch spectroscopy of 63 novae over timespans of up to\n3156 days (8.6 years). These durations will increase with time so long as\nSMARTS continues operating, and the targets are sufficiently bright. 
The median\nobservation durations of 1317 and 360 days, respectively, for the photometry\nand spectroscopy, are limited mostly by target brightness.\nThe median time between observations is skewed by the growing number of old,\nfaint targets that are now observed with a cadence of 1-2 observations per\nyear, so that the median time between spectra is 8.4 days, and is 27 days\nfor photometric observations.\n\n\\subsection{The Example of V574 Pup}\n\nWe illustrate possible uses of the atlas with the example of V574 Pup,\nan \\ion{Fe}{2} nova for which we have good coverage.\nAside from near-IR observations \\citep{N10}\nand analysis of the super-soft X-ray source \\citep{Swift11},\nthere has been little discussion of this bright nova.\n\nThe main atlas page (Figure~\\ref{fv574main})\npresents finding charts in $V$ and $K$, along with the coordinates, time of\ndiscovery, and links to the spectral and photometric data and references.\n\nThe photometry consists of observations taken on 100 days with the 1.3m\nANDICAM imager, starting on day 32 and\nrunning through day 2723 (5 May 2012). Most of these sets include all 7 ANDICAM\nbands, $BVRIJHK_s$. This light curve is shown in Figure~\\ref{v574_lc}.\nIt is possible to fill in the first 30 days with data from other sources,\nsuch as \\cite{S05}, or by using data from the AAVSO (www.aavso.org).\n\nWe supplemented these with data taken on 20 days using\na temporary small CCD on the SMARTS 1.0m telescope. These were opportunistic\nobservations enabled by the unavailability of the wide-field 4k camera.\nWe use the $\\sim$2 hour long sequences to search for short\nperiodicities. (Similar data exist for very few of the novae in the\natlas.) 
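Such sequences can be searched with a string-length statistic \\citep{D83}: fold the data on a trial period and find the period minimising the summed point-to-point distance of the folded curve. A minimal sketch on synthetic data (illustrative only, not the analysis code used here):\n\n```python\nimport numpy as np\n\n# String-length statistic (Dworetsky 1983): fold the light curve on a trial\n# period, sort by phase, and sum the point-to-point distances in the\n# (phase, magnitude) plane; the true period minimises this length.\ndef string_length(t, mag, period):\n    phase = np.mod(t / period, 1.0)\n    order = np.argsort(phase)\n    ph = phase[order]\n    mm = mag[order]\n    # rescale magnitudes to a 0-0.5 range so both axes carry similar weight\n    mm = 0.5 * (mm - mm.min()) / np.ptp(mm)\n    dph = np.diff(ph, append=ph[0] + 1.0)  # wrap around in phase\n    dmm = np.diff(mm, append=mm[0])\n    return np.hypot(dph, dmm).sum()\n\n# synthetic 2-hour sequence with a 0.0472 d (68 min) sinusoidal signal\nrng = np.random.default_rng(1)\nt = np.sort(rng.uniform(0.0, 0.085, 150))  # days\nmag = 12.0 + 0.02 * np.sin(2 * np.pi * t / 0.0472) + rng.normal(0.0, 0.002, 150)\ntrials = np.linspace(0.02, 0.09, 1000)\nbest = trials[np.argmin([string_length(t, mag, p) for p in trials])]\n```\n\nIn practice aliases must be checked by eye, as noted below for V574~Pup.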
Three long sequences in the $B$~band,\non days 87, 195, and 196, showed sinusoidal-like modulations.\nRemoving a linear trend from the data on day 87 and normalizing to the mean\nmagnitudes, we find a likely period of 0.0472 days (68 minutes; see\nFigure~\\ref{fv574_var}) from a shortest-string analysis \\citep{D83}.\nHowever, we cannot exclude some aliases. This is shorter than the minimum\norbital period for CVs, and may be half an orbital period (ellipsoidal\nvariability is a possible explanation).\nThe amplitude of the best-fit sinusoid decreased from 0.02 to 0.007~mag\nfrom day 87 to days 195-196.\n\nWe obtained 107 spectra before the target became too faint for the 1.5m\ntelescope.\nWe illustrate two types of investigations\nthat can be supported by high cadence spectral observations.\n\n\\begin{enumerate}\n\\item Figure~\\ref{fv574pc} shows the evolution of the P~Cygni line profiles\nas the wind evolves against the backdrop of the optically-thick\npseudo-photosphere. With daily spectra, it is clear that the absorption\nvelocities are not constant, but rather are accelerating. Through day 14\nthe velocities can be described as a quadratic function of time. Hence the\nacceleration is linear in time. It is hard to see how this can result\nfrom decreasing optical depth effects in an envelope with a monotonic\nvelocity law increasing outwards. \\cite{SNS11} show how similar structures\nseen in T~Pyx can be explained as an outward-moving recombination front in\nan envelope with a linear velocity law.\n\n\n\\item Figure~\\ref{fv574fe} shows the time-evolution of a series of lines of\ndiffering temperatures as the nova evolves through the nebular and coronal\nphases. The [\\ion{Fe}{10}] $\\lambda$6375\\AA\\ line requires high excitation, and \nits presence correlates well with the super-soft (SSS) X-ray emitting\nphase \\citep{Swift11}. V574~Pup was in its SSS phase from before\nday 180 through day 1118; it ended before day 1312. 
We can use data such as\nthese to explore how well optical lines are diagnostic of the SSS phase.\n\n\\end{enumerate}\n\n\n\\subsection{Notes on the Novae}\n\nNotes here are not meant to be complete or definitive in any sense. They are \nmeant to highlight past or ongoing work on select novae, or to note some\nparticularly interesting cases. We have made no attempt to provide complete\nreferences here; they are in the on-line atlas.\nFor the convenience of the reader, we have collected in\nTable~\\ref{tbl-meas} various basic measurements. These are:\n\\begin{itemize}\n\\item Spectroscopic class. This is a phenomenological classification\nbased on the appearance of the spectrum\nin the first few spectra after the emission lines appear.\nPhysically, this is likely an\nindicator of the optical depth of the envelope.\nOf the 63 classifiable targets, most (47\/63, or 75\\%) are \\ion{Fe}{2} type;\n15 (24\\%) are or may be He-N, and one is a possible symbiotic nova.\nIn one case we cannot tell because our first spectrum was obtained nearly \n2 years after peak.\n\nWe append a ``w'' in those cases where there is a clear P~Cygni absorption\nin the Balmer lines (and sometimes in \\ion{Fe}{2})\nindicative of an optically-thick wind.\nHalf (32 of the 64 novae) show such P~Cyg absorption.\nWe caution that the absence of P~Cyg absorption may\nbe caused by the cadence of the observations.\n\n\\item Photometric class. We examined the $V$ band light\ncurves for the first 500 days and categorized them by eye into one or more\nof the 7 classes defined by \\cite{SSH10}. In many cases we have very little data\nduring the first 3 months, and do not attempt to categorize these. In some cases\nwe had a hard time shoe-horning the lightcurve into one class, and have given\nmultiple classes. For example, N LMC 2005 maintained a fairly flat light\ncurve for about 50 days (class F), then exhibited a cusp (class C). It also\nformed dust (class D), though the dip is not particularly pronounced. 
The\npresence of dust is indicated by the increase in the $H$ and $K$ fluxes\nas the optical fades.\n\nIn many cases a significant brightening in $K$, suggestive of dust formation,\nis not accompanied by an optical dip, suggesting an asphericity in the dust.\n\nIn some cases there is significant color evolution between the optical\nand near-IR. We will quantify this later.\n\n\\item The FWHM of the H$\\alpha$ emission line. We measure the first grating 47\nspectrum (3.1\\AA\\ resolution) that does not show P~Cyg wind absorption,\nand report\nthe day on which that spectrum was obtained. Uncertainties are of order 2\\%.\nNote that the FWHM can change significantly with time in the \\ion{Fe}{2} novae.\nFor the He-N novae we measure the FWHM of the broad base, ignoring the narrow\ncentral emission component. In some cases there is a faint but broader\ncomponent visible early on. The measurement of the FWZI of this component\nwould be more representative of the maximum expansion velocity.\nWe do not tabulate this because of incompleteness, and because of the\ndifficulty defining the continuum level in some cases.\n\\end{itemize}\n\nWe have not estimated the times for\nthe light to decay by 2 and 3 magnitudes at $V$\n(t$_2$ and t$_3$, respectively) in\nany systematic manner, because it is only in rare cases that we have \nsufficiently dense photometric sampling early enough to make a good estimate.\nWe discuss these in the notes on individual novae. \n\nWe note that the estimates of t$_2$ and t$_3$ can be highly uncertain,\nespecially for fast novae. The reported discovery times are often past\nthe peak. 
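Where the sampling near peak is dense enough, estimating t$_2$ and t$_3$ amounts to interpolating the $V$ light curve to 2 and 3 magnitudes below peak; a minimal sketch on a synthetic decline (illustrative only, not the atlas pipeline):\n\n```python\nimport numpy as np\n\n# Time in days after peak for a light curve to fade by dm magnitudes,\n# linearly interpolating between observations; t and v must be time-ordered.\n# Returns None when the curve never reaches peak + dm.\ndef decline_time(t, v, dm):\n    ipk = np.argmin(v)  # brightest point = smallest magnitude\n    tt, vv = t[ipk:], v[ipk:]\n    target = vv[0] + dm\n    reached = np.nonzero(vv >= target)[0]\n    if reached.size == 0:\n        return None\n    j = reached[0]\n    frac = (target - vv[j - 1]) / (vv[j] - vv[j - 1])\n    return tt[j - 1] + frac * (tt[j] - tt[j - 1]) - tt[0]\n\n# synthetic smooth decline: V = 8 + 2.5 log10(1 + t/10)\nt = np.linspace(0.0, 200.0, 401)\nv = 8.0 + 2.5 * np.log10(1.0 + t / 10.0)\nt2 = decline_time(t, v, 2.0)  # about 53 days for this curve\nt3 = decline_time(t, v, 3.0)  # about 148 days\n```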
The discovery magnitudes are often visual estimates, or \nunfiltered CCD magnitudes, necessitating a color correction to $V$.\nA full analysis of the light curves,\nincorporating other published literature, AAVSO data (which are much denser\nnear peak), and data from other\nsources, is beyond the scope of this paper.\n\n\n\\subsubsection{N Aql 2005 = V1663 Aql}\nThis is a standard \\ion{Fe}{2} nova. On day 50 there was prominent\n$\\lambda$4640\\AA\\ Bowen blend emission.\nThe auroral [\\ion{O}{3}] lines were strong by day 85. \nOur last spectrum, on day 414, is dominated by\nH$\\alpha$, [\\ion{O}{3}] 4959\/5007, [\\ion{N}{2}] 5755, [\\ion{Fe}{7}] 6087,\n[\\ion{O}{1}] 6300, and [\\ion{Ar}{3}] 7136.\n\n\\subsubsection{N Car 2008 = V679 Car}\nThis \\ion{Fe}{2} nova never seemed to develop a coronal phase. \nWe have limited photometric coverage.\n\n\\subsubsection{N Car 2012 = V834 Car}\nThis recent \\ion{Fe}{2} nova exhibited a strong wind through day 36.\nEvolution of the light curve has been uneventful. There was some jitter of\n$\\pm$0.5 mag about a smooth trend over days 12-40. We estimate\nt$_2$ and t$_3$ to be 20 and 38 days, respectively, with uncertainties of order\n$\\pm$3 days for t$_2$ and $\\pm$1 day for t$_3$.\n\n\\subsubsection{N Cen 2005 = V1047 Cen}\nWe have no photometry, and only two spectra, of this \\ion{Fe}{2} nova.\n\n\\subsubsection{N Cen 2007 = V1065 Cen}\nThis dusty \\ion{Fe}{2} nova was analyzed by \\cite{Hel10}, using SMARTS spectra\nthrough day 719. The atlas includes additional photometry, from days\n944 through 1850.\n\n\\subsubsection{N Cen 2009 = V1213 Cen}\nThis \\ion{Fe}{2} nova became a bright super-soft X-ray source. 
The coronal phase\nextended from about days 300 to 1000, roughly coinciding with the SSS phase\n\\citep{Swift11}, with strong lines of\n[\\ion{Fe}{10}], [\\ion{Fe}{11}], and [\\ion{Fe}{14}].\nIn quiescence the remnant is blended with two other objects of\ncomparable brightness.\n\n\\subsubsection{PNV J13410800-5815470 = N Cen 2012}\nThis recent \\ion{Fe}{2} nova exhibited wind absorption through day 25. \nt$_2$ is about 16$\\pm$1 days; t$_3$ occurs about day 34.\nThe 2 mag brightening in $K$ starting about day 35,\nwith a contemporaneous drop in the $B$ and $V$~band brightness, \nsuggests dust formation.\nThe strong emission in the \\ion{Ca}{2} near-IR triplet on\nday 11 had disappeared by day 74.\n\n\\subsubsection{PNV J14250600-5845360 = N Cen 2012b}\nThe $K$~band brightness increased by 2 magnitudes between days 18 and 32,\nsuggesting dust formation, but no drop is seen at optical magnitudes.\nThe smooth $V$ light curve yields t$_2$ and t$_3$ of 12.3 and 19.8 days, with \nuncertainties $<$1 day.\nThe spectral development is similar to N Cen 2012.\nThe strong emission in the \\ion{Ca}{2} near-IR triplet on\nday 16 had disappeared by day 61.\n\n\\subsubsection{N Cir 2003 = DE Cir}\nThis fast nova was discovered by \\cite{L03} in the glare of the setting Sun.\nSpectra obtained on days 11 and 12, at high air mass, show this was a He-N\nnova. We did not obtain any photometry until after it reappeared from behind the\nSun. Since then it has been in quiescence at V$\\sim$17, with a variance of\n$\\pm$0.4~mag. 
The strongest line in the quiescent spectrum is\n\\ion{He}{2}~$\\lambda$4686\\AA.\n\n\\subsubsection{N Cru 2003 = DZ Cru}\nThis is another nova that was discovered in the west in the dusk twilight.\nDespite the discussion about the ``peculiar'' early spectrum \\citep{iauc8185},\nour spectra show this was an \\ion{Fe}{2} nova discovered before maximum, as\nconcluded by \\cite{Rus08}.\n\n\\subsubsection{N Dor 1937 = YY Dor}\nThis is the second recurrent nova discovered in the LMC \\citep{lil04}. \nIt is a fast (t$_2$,t$_3$=4.0, 10.9 days, respectively)\nHe-N nova with the broad tripartite Balmer lines seen in many\nfast recurrent novae.\nAn analysis is in preparation.\n\n\\subsubsection{N Eri 2009 = KT Eri}\nKT Eri is a fast He-N nova with a bright quiescent counterpart.\n\\cite{Hou10} reported a spectacular pre-maximum light curve from\nthe SMEI instrument. The light curve shows two plateaus (Figure~\\ref{kteri-lc}), \nmuch like those seen in U~Sco, prior to dropping to quiescence.\n\\cite{Jur12} find a 737 day period in the quiescent source\nfrom archival plate material. \\cite{Hun11} claim a 56.7 day period \nduring the second plateau. In quiescence, after day 650, \nthere are hints of a period near 55 days,\nand a possible 4.2~day spectroscopic period (Walter et al. in preparation). \nDue to its brightness and RA (there is less competition for time towards the\ngalactic anti-center), we have excellent spectral time coverage.\nFigure~\\ref{kteri-trsp} shows the time-evolution of KT~Eri in the blue, over\n790 days, from 83 low dispersion blue spectra.\n\nKT~Eri is located in a sparse field; there is only a single comparison star\navailable in the small IR channel field of view.\n\n\n\\subsubsection{N Lup 2011 = PR Lup}\nThe light curve of this slow \\ion{Fe}{2} nova showed\na second maximum about day 3.5. There was little appreciable decay\nduring the first 2 months.\nWind absorption was evident through day 58.\nAs of day 300 it has not entered the coronal phase. 
\n\n\\subsubsection{N Mus 2008 = QY Mus}\nWe picked this nova up fairly late, but the combination of the\nspectroscopy and photometry spans the time over which dust formed.\nIt seems to be a standard \\ion{Fe}{2} nova that bypassed the coronal phase \nand is now in the nebular phase.\n\n\\subsubsection{N Nor 2005 = V382 Nor}\nThis appears to be a standard \\ion{Fe}{2} nova.\n\n\\subsubsection{N Nor 2007 = V390 Nor}\nWe have good spectroscopic coverage of this \\ion{Fe}{2} nova for 4 months,\nbut no photometry. During this time it did not evolve any hot lines.\n\n\\subsubsection{RS Oph}\nThis is the prototypical long period, wind-driven recurrent nova. \nThe emission lines are narrow.\nWe are continuing observations to characterize it well into quiescence. \n\n\\subsubsection{N Oph 2003 = V2573 Oph}\nBased on two spectra, this appears to be a standard\n\\ion{Fe}{2} nova.\n\n\\subsubsection{N Oph 2004 = V2574 Oph}\nOur photometric coverage consists of two observations, 4 and 8 years\nafter the eruption. Neither was obtained on a photometric night, so the\ndata are not yet photometrically calibrated.\nWe have good spectroscopic coverage showing the transition\nfrom an optically thick pseudophotosphere to the permitted line spectrum in\nthis \\ion{Fe}{2} nova.\n\n\\subsubsection{N Oph 2006 = V2575 Oph}\nThis is a standard \\ion{Fe}{2} nova.\n\n\\subsubsection{N Oph 2006b = V2576 Oph}\nMost photometric observations are in $R$ only. It is a standard\n\\ion{Fe}{2} nova.\n\n\\subsubsection{N Oph 2007 = V2615 Oph}\nThis is a normal \\ion{Fe}{2} nova. Photometric coverage begins after 1.5 years.\n\n\\subsubsection{N Oph 2008 = V2670 Oph}\nThis is an \\ion{Fe}{2} nova.\n\n\\subsubsection{N Oph 2008b = V2671 Oph}\nThis \\ion{Fe}{2} nova faded rapidly, and was undetectable,\nexcept at $R$, after 1 year.\n\n\\subsubsection{N Oph 2009 = V2672 Oph}\n\\cite{Mun11} reported on this very fast nova. 
t$_2$ and t$_3$ passed before our\nfirst photometry; \\cite{Mun11} quote values of 2.3 and 4.2 days, respectively.\nThe spectra and spectral evolution are similar to those\nof U~Sco. We followed this nova spectroscopically through day 31.6, and did not\nsee the deceleration reported by \\cite{Mun11}.\nThis has the broadest H$\\alpha$ line, at FWZI$\\sim$11,000~km\/s, of all the novae\nin the atlas, exceeding that of U~Sco by about 20\\%.\n\n\\subsubsection{PNV J17260708-2551454 = N Oph 2012}\nThis recent slow \\ion{Fe}{2} nova showed no significant decline in\nbrightness from days 10 through 90. Then dust formed, with a drop in the\n$B$ and $V$ brightness by over 5 magnitudes.\nThe lines are narrow: FWHM(H$\\alpha$)$\\sim$950~km\/s.\nThere is little spectral evolution through day 90, with \npersistent P~Cygni line profiles.\n\n\\subsubsection{PNV J17395600-2447420 = N Oph 2012b}\nThis is an \\ion{Fe}{2} nova with an H$\\alpha$ FWHM about 3000~km\/s.\nThrough the first 45 days the brightness drops monotonically in $B$ through $K$.\n\n\\subsubsection{N Pup 2004 = V574 Pup}\nSee \\S4.2. Our first photometry was on day 30, suggesting t$_3<27$ days. \n\\cite{Swift11} quote t$_2$=13 days.\n\n\\subsubsection{N Pup 2007 = V597 Pup}\nThis slowly developing, broad-lined nova developed a coronal phase after\n3-4 months.\n\n\\subsubsection{N Pup 2007b = V598 Pup}\nThis X-ray-discovered nova was in its nebular phase when reported by\n\\cite{RSE07}. The quiescent counterpart appears fairly bright.\n\n\\subsubsection{T Pyx}\nThis well-known recurrent nova is, as has been pointed out by many authors,\nvery different from the fast He-N recurrent novae. Its light curve and spectral\ndevelopment mimic those of slow classical novae. 
\nThe rate of the photometric decay remained unchanged by the turn-on\/turn-off\nof the SSS X-ray emission (Figure~\\ref{tpyx-sss}).\nThe slope of the photometric decay decreased at about day 300.\nThe H$\\alpha$ line width shown in Table~\\ref{tbl-meas} refers to the\nwidth of the broad base after the line profile stabilized; it was about half\nthat during the first 40 days post-peak.\n\n\\cite{Eva12} include some of our near-IR photometry in an analysis of the\nheating of the dust already present in the system.\n\nAs T Pyx remains bright, we are continuing our monitoring.\n\n\\subsubsection{U Sco}\nThis is the prototypical short orbital period recurrent nova.\nSome spectral analysis is included in \\cite{Max12}.\n\n\\subsubsection{N Sco 2004b = V1187 Sco}\nThis well-observed \\ion{Fe}{2} nova became a super-soft X-ray source.\n\n\\subsubsection{N Sco 2005 = V1188 Sco}\nWe have only limited coverage of this \\ion{Fe}{2} nova.\n\n\\subsubsection{N Sco 2007a = V1280 Sco}\nThis extraordinarily slow nova has remained bright ($V$ generally\nbetween 10 and 11) for nearly 2000 days. \\cite{N12} present a lot of data\nfor this nova; our monitoring provides finer temporal coverage, which\nshows absorption events with durations $<$60 days that may be due to dust\nformation in small mass ejection events. The latest spectra, at an age of 5.3\nyears, still show evidence for wind absorption.\n\n\\subsubsection{N Sco 2008 = V1309 Sco}\nThis is a very narrow-lined system, and likely a symbiotic nova\nor a merger \\citep{Mas10}. The spectrum is dominated by narrow Balmer line\nemission.\n\n\\subsubsection{N Sco 2010 No.~2 = V1311 Sco}\nThis nova faded rapidly. It is likely a He-N class nova. 
On day 9\npossible [\\ion{Ne}{3}] $\\lambda$3869 is seen, and the $\\lambda$4640 Bowen blend\nis in emission.\n\n\\subsubsection{N Sco 2011 = V1312 Sco}\nThis \\ion{Fe}{2} nova developed coronal line emission.\n\n\\subsubsection{N Sco 2011 No.~2 = V1313 Sco}\nThis nova exhibited very strong \\ion{He}{1} and H-Paschen line emission.\nStrong \\ion{He}{2}\n$\\lambda$4686 emission appeared between days 16 and 19.\nThe Balmer lines have a narrow central core atop a broad base, as in the He-N\nand recurrent novae, but there is also likely \\ion{Fe}{2} multiplet 42\nemission at\n$\\lambda$5169\\AA\\ (other multiplet 42 lines are overwhelmed by \\ion{He}{1}\nemission lines), and wind absorption at least through day 3. This seems to be\na hybrid nova.\nt$_2$ lies between 5.8 and 8.5 days, and t$_3$ between 13 and 18 days,\ndepending on whether the peak was on\n2011 Sep 6.37 or 2011 Sep 7.51 \\citep{Sea11}.\nThe continuum is red. This may be a symbiotic nova.\n\n\\subsubsection{N Sct 2003 = V475 Sct}\nThis was the first nova that we concentrated on. Results appear in \n\\cite{Str06}. This fairly narrow-lined \\ion{Fe}{2} nova may have formed dust;\nno coronal phase was seen. We have $U$-band photometry for the first 100 days.\n\n\\subsubsection{N Sct 2005 = V476 Sct}\nWe have very limited observations of this \\ion{Fe}{2} nova.\n\n\\subsubsection{N Sct 2005b = V477 Sct}\nThis is probably an \\ion{Fe}{2} nova; we have very poor coverage.\n\n\\subsubsection{N Sct 2009 = V496 Sct}\nThis \\ion{Fe}{2} nova likely formed dust. As it is fairly bright, we have good\nspectral coverage for nearly 2 years.\n\n\\subsubsection{N Sgr 2002c = V4743 Sgr}\nThis is the first nova we started observing. We picked it up about 200 days\nafter discovery. 
There is currently no photometry in the atlas.\n\n\\subsubsection{N Sgr 2003 = V4745 Sgr}\nThis was the first nova to explode during the SMARTS era.\nThere are two prominent P~Cygni absorption line systems,\ninitially at -780 and -1740\nkm\/s, visible from days 12 through 66; they disappear by day 71.\nThere is currently no photometry in the atlas.\n\n\\subsubsection{N Sgr 2004 = V5114 Sgr}\nData for this \\ion{Fe}{2} nova have been analyzed and published by \\cite{E06}.\n\\cite{E06} quote t$_2$ and t$_3$ values of 11 and 21 days; for a peak\n$V$=8.38 we find a marginally slower nova, with t$_2$ and t$_3$ of 14 and\n25 days, respectively.\nWe have $U$ band photometry for the first 180 days.\n\n\\subsubsection{N Sgr 2006 = V5117 Sgr}\nThis is a standard \\ion{Fe}{2} nova. We estimate t$_2$ and t$_3$ are about\n16$\\pm$1 and 42 days, respectively, with uncertainties of perhaps a week in t$_3$.\n\n\\subsubsection{N Sgr 2007 = V5558 Sgr}\nThis very slow nova resembles V723~Cas. We started the photometry at about\nday 100; the nova remained at $7.00$~mag.\n\nThe functionals $\\phi_{[k],2,\\tau}(\\cdot)$, $1 \\leq k \\leq d$, can be used to define a family of criteria for optimal experimental design, concave for $\\tau\\leq 1\/k$, for which an equivalence theorem can be formulated.\n\n\n\n\\subsection{Quadratic entropy and learning}\\label{S:learning}\nIn a series of papers \\citep{rao1982a, rao1982b, rao1984convexity, rao2010quadratic} C.R. 
Rao and co-workers have introduced a quadratic entropy which is a generalised version of the $k=2$ functional of this section but with a general kernel $K(x_1,x_2)$ in $\\mathds{R}^d$:\n\\begin{equation}\\label{quadratic-entropy}\n Q_R = \\int \\int K (x_1,x_2) \\mu(\\dd x_1)\\mu(\\dd x_2)\\,.\n\\end{equation}\nFor the discrete version\n$$\nQ_R = \\sum_{i=1}^N \\sum_{j=1}^N K(x_{i},x_{j})\\,p_i\\,p_j,\n$$\nRao and co-workers developed a version of the Analysis of Variance (ANOVA), which they called Analysis of Quadratic Entropy (ANOQE), or Analysis of Diversity (ANODIV).\nThe Gini coefficient, used in both its continuous and discrete forms, is a special case with $d=1$ and $K(x_1,x_2) = |x_1-x_2|$.\n\nAs pointed out in \\citep[Chap.~3]{rao1984convexity}, a necessary and sufficient condition for the functional $Q_R$ to be concave is\n\\begin{equation}\n\\label{semi}\n\\int \\int K(x_1,x_2) \\nu(\\dd x_1)\\nu(\\dd x_2) \\leq 0\n\\end{equation}\nfor all measures $\\nu$ with $\\int \\nu(\\dd x)=0$. The discrete version of this is\n$$\n\\sum_{i=1}^N \\sum_{j=1}^N K (x_i,x_j)\\, q_i\\, q_j \\leq 0\n$$\nfor any choice of real numbers $q_1,\\ldots,q_N$ such that $\\sum_{i=1}^N q_i=0$.\n\\citet{schilling2012bernstein} discuss\nthe general problem of determining for which class of continuous functions $B(\\cdot)$ the kernel\n$\nK(x_1, x_2) = B\\left (\\|x_1 - x_2\\|^2\\right)\n$\nsatisfies \\eqref{semi}: the solution is that $B(\\cdot)$ must be a so-called Bernstein function. We do not develop these ideas here, but note that $B(\\lambda) = \\lambda^{\\alpha}$ is a Bernstein function for all $0 < \\alpha \\leq 1$.\nThis is the reason that, above,\nwe can claim concavity for $k=1$ and all $0 < \\delta \\leq 2$ in \\eqref{dV}.\n\n\\cite{hainy2014learning}\ndiscuss the link to embedding and review some basic results related to Bayesian learning. 
One asks\nfor the class of functionals $\\psi$ of a distribution $\\mu(\\theta)$ of a parameter in Bayesian statistical learning\nsuch that, for all $\\mu(\\theta)$ and all sampling distributions $\\pi(x | \\theta)$, one expects to learn in the preposterior sense:\n$\\psi(\\mu(\\theta)) \\leq \\Ex_\\nu \\psi(\\pi(\\theta|X))$, with $X\\sim \\nu$.\nThe condition is that $\\psi$ is convex, a result which has a long history but is usually attributed to\n\\cite{degroot1962uncertainty}.\nThis learning property is enough to justify calling such a functional a generalised information functional, or a general learning functional. Shannon information falls in this class, and earlier versions of the result were for Shannon information. It follows that whenever, in this paper, we have a concave functional, its negative is a learning functional.\n\n\\section{Functionals based on squared volume}\n\\label{S:squared volume}\n\n\nIn the rest of the paper we focus our attention on the functional\n$$\n\\mu\\in\\SM \\longrightarrow \\psi_k(\\mu)=\\phi_{[k],2,1}(\\mu)=\\Ex\\{ \\SV_k^2(x_1,\\ldots,x_{k+1}) \\}\\,,\n$$\nwhich corresponds to the mean squared volume of simplices of dimension $k$ formed by $k+1$ independent samples from $\\mu$. For instance,\n\\begin{equation}\\label{psi2}\n \\psi_2(\\mu) = \\int\\int\\int \\SV_2^2(x_1,x_2,x_3)\\, \\mu(\\dd x_1)\\, \\mu(\\dd x_2)\\, \\mu(\\dd x_3)\\,,\n\\end{equation}\nwith $\\SV_2(x_1,x_2,x_3)$ the area of the triangle formed by the three points with coordinates $x_1$, $x_2$ and $x_3$ in $\\mathds{R}^d$, $d\\geq 2$. 
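As a quick numerical illustration of \\eqref{psi2} (our sketch, not part of the original development): for $\\mu$ the standard normal measure on $\\mathds{R}^2$, a direct calculation gives $\\psi_2(\\mu)=3\/2$, which is easily confirmed by Monte Carlo:\n\n```python\nimport numpy as np\n\n# Monte-Carlo check of psi_2 for mu = N(0, I_2): mean squared area of the\n# triangle formed by three i.i.d. standard normal points in the plane.\nrng = np.random.default_rng(0)\nn = 200000\nx1, x2, x3 = (rng.standard_normal((n, 2)) for _ in range(3))\nd1, d2 = x2 - x1, x3 - x1\n# triangle area = 0.5 * |det[x2 - x1, x3 - x1]|\narea2 = 0.25 * (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) ** 2\npsi2_mc = area2.mean()  # close to 1.5\n```\n\nRepeating the experiment with the three points confined to a line returns $\\psi_2\\approx 0$, a first glimpse of the rank-detection property of $\\psi_k$ discussed in the next subsection.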
Functionals $\\phi_{[k],\\delta,\\tau}(\\mu)$ for $\\delta \\neq 2$ will be considered in another paper, including the case of negative $\\delta$ and $\\tau$ in connection with space-filling design for computer experiments.\n\nTheorem~\\ref{Prop:1} of Section~\\ref{S:main-theorem} indicates how $\\psi_k(\\mu)$ can be expressed as a function of $V_\\mu$, the covariance matrix of $\\mu$, and shows that $\\phi_{[k],2,1\/k}(\\cdot)$ satisfies properties (a), (b) and (c) of Section~\\ref{S:intro}. The special case of $k=d$\nwas known to Wilks (1932, 1960)\\nocite{wilks1932, wilks1960} in his introduction of generalised variance; see also \\cite{van1965note}. The connection with U-statistics is exploited in Section~\\ref{S:empirical}, where an unbiased minimum-variance estimator of $\\psi_k(\\mu)$ based on a sample $x_1,\\ldots,x_n$ is expressed in terms of the empirical covariance matrix of the sample.\n\n\\subsection{Expected squared $k$-simplex volume}\\label{S:main-theorem}\n\n\\begin{theorem}\\label{Prop:1} Let the $x_i$ be i.i.d.\\ with the probability measure $\\mu\\in\\SM$. Then, for any $k\\in\\{1,\\ldots,d\\}$, we have\n\\begin{eqnarray}\n\\psi_k(\\mu) &=& \\frac{k+1}{k!} \\, \\sum_{1\\leq i_1<\\cdots<i_k\\leq d} \\det[\\{V_\\mu\\}_{I\\times I}] \\,,\n\\end{eqnarray}\nwhere $I=\\{i_1,\\ldots,i_k\\}$ and $\\{V_\\mu\\}_{I\\times I}$ denotes the corresponding $k\\times k$ principal submatrix of $V_\\mu$, so that $\\psi_k(\\mu)$ is proportional to the $k$-th elementary symmetric function of the eigenvalues of $V_\\mu$.\n\\end{theorem}\n\nNote that when $V_\\mu$ has rank $q$, $\\varphi_p(V_\\mu)>0$ for $p>0$ and any $q>0$. The family of functionals \\eqref{Kiefer-phi_p} is therefore unable to detect the true dimensionality of the data. On the other hand, $\\psi_k(\\mu)=0$ for all $k>q$ when rank $V_\\mu=q$.\n\n\\subsection{Empirical version and unbiased estimates}\\label{S:empirical}\n\nLet $x_1,\\ldots,x_n$ be a sample of $n$ vectors of $\\mathds{R}^d$, i.i.d.\\ with the measure $\\mu$. This sample can be used to obtain an empirical estimate $({\\widehat\\psi}_k)_n$ of $\\psi_k(\\mu)$, through the consideration of the ${n \\choose k+1}$ $k$-dimensional simplices that can be constructed with the $x_i$. Below we show how a much simpler (and still unbiased) estimation of $\\psi_k(\\mu)$ can be obtained through the empirical variance-covariance matrix of the sample. 
See also \\cite{wilks1960, Wilks1962}.\n\nDenote\n\\begin{eqnarray*}\n\\widehat x_n &=& \\frac1n\\, \\sum_{i=1}^n x_i \\,, \\\\\n\\widehat V_n &=& \\frac{1}{n-1}\\, \\sum_{i=1}^n (x_i-\\widehat x_n)(x_i-\\widehat x_n)\\TT = \\frac{1}{n(n-1)}\\, \\sum_{i<j} (x_i-x_j)(x_i-x_j)\\TT \\,,\n\\end{eqnarray*}\nthe empirical mean and covariance matrix of the sample. The average $({\\widehat\\psi}_k)_n$ of $\\SV_k^2(\\cdot)$ over all ${n \\choose k+1}$ simplices is a U-statistic; it can be written as a function of $\\widehat V_n$ alone and satisfies $\\Ex\\{({\\widehat\\psi}_k)_n\\}=\\psi_k(\\mu)$. For $k>1$, this property would not hold if $\\widehat V_n$ were replaced by another unbiased estimator of $V_\\mu$.\n\n\\vsp\nThe value of $({\\widehat\\psi}_k)_n$ depends only on $\\widehat V_n$, with $\\Ex\\{({\\widehat\\psi}_k)_n\\}=\\psi_k(\\mu)$, but its variance depends on the distribution itself.\nAssume $\\Ex\\{\\SV_k^4(x_1,\\ldots,x_{k+1})\\}<\\infty$.\nFrom \\cite[Lemma A, p.~183]{Serfling80}, the variance of $({\\widehat\\psi}_k)_n$ satisfies\n$$\n\\var[({\\widehat\\psi}_k)_n]=\\frac{(k+1)^2}{n}\\, \\omega + O(n^{-2}) \\,,\n$$\nwhere $\\omega= \\var[h(x)]$, with $h(x)=\\Ex\\{\\SV_k^2(x_1,x_2,\\ldots,x_{k+1})|x_1=x\\}$. Obviously, $\\Ex[h(x)]=\\psi_k(\\mu)$ and calculations similar to those in the proof of Theorem~\\ref{Prop:1} give\n\\begin{eqnarray}\n&& \\hspace{0.5cm} \\omega = \\frac{1}{(k!)^2}\\, \\sum_{I,J} \\det[\\{V_\\mu\\}_{I\\times I}] \\, \\det[\\{V_\\mu\\}_{J\\times J}] \\label{zeta} \\\\\n&& \\times\\, \\left[\\Ex\\left\\{ (E_\\mu-x)_I\\TT \\{V_\\mu\\}_{I\\times I}^{-1} (E_\\mu-x)_I (E_\\mu-x)_J\\TT \\{V_\\mu\\}_{J\\times J}^{-1} (E_\\mu-x)_J \\right\\} - k^2\\right]\\,, \\nonumber\n\\end{eqnarray}\nwhere $I$ and $J$ respectively denote two sets of indices $i_1<\\cdots<i_k$ and $j_1<\\cdots<j_k$ in $\\{1,\\ldots,d\\}$.\n\nSince $\\psi_k(\\mu)$ depends on $\\mu$ only through $V_\\mu$, write $\\psi_k(\\mu)=\\Psi_k(V_\\mu)$ and let $\\nabla_{\\Psi_k}[V]$ denote the gradient of $\\Psi_k(\\cdot)$ at $V$, with $F_{\\psi_k}(\\mu;\\nu)$ the directional derivative of $\\psi_k(\\cdot)$ at $\\mu$ in the direction of $\\nu$. Since $\\phi_{[k],2,1\/k}(\\cdot)$ is concave, a measure $\\mu_k^*$ maximises $\\psi_k(\\cdot)$ over $\\SM$ if and only if $\\psi_k(\\mu_k^*)>0$ and $F_{\\psi_k}(\\mu_k^*;\\nu) \\leq 0$ for all $\\nu\\in\\SM$, that is\n\\begin{equation}\\label{CNS-nu}\n\\tr\\left\\{\\nabla_{\\Psi_k}[V_{\\mu_k^*}]\\,\\frac{\\dd V_{(1-\\ma)\\mu_k^*+\\ma\\nu}}{\\dd\\ma}\\bigg|_{\\ma=0} \\right\\} \\leq 0 \\,, \\ \\forall\\nu\\in\\SM\\,.\n\\end{equation}\nWe obtain the following.\n\n\\begin{theorem}\\label{th:equivTh} The probability measure $\\mu_k^*$ such that $\\psi_k(\\mu_k^*)>0$ is $\\psi_k$-optimal, that is, maximises $\\psi_k(\\mu)$ with respect to $\\mu\\in\\SM$, $k\\in\\{1,\\ldots,d\\}$, if and 
only if\n\\begin{equation}\\label{CNS}\n \\max_{x\\in\\SX} (x-E_{\\mu_k^*})\\TT \\frac{\\nabla_{\\Psi_k}[V_{\\mu_k^*}]}{\\Psi_k(V_{\\mu_k^*})}(x-E_{\\mu_k^*}) \\leq k \\,.\n\\end{equation}\nMoreover,\n\\begin{equation}\\label{support-mu*}\n(x-E_{\\mu_k^*})\\TT \\frac{\\nabla_{\\Psi_k}[V_{\\mu_k^*}]}{\\Psi_k(V_{\\mu_k^*})}(x-E_{\\mu_k^*}) = k\n\\end{equation}\nfor all $x$ in the support of $\\mu_k^*$.\n\\end{theorem}\n\n\n\\begin{proof}\nFirst note that the Newton equations \\eqref{Newton} and the recurrence \\eqref{nabla-E_k} for $\\nabla_{\\mathcal{E}_k}[\\cdot]$ imply that $\\tr(V\\nabla_{\\Psi_k}[V])=k\\Psi_k(V)$ for all $k=1,\\ldots,d$.\n\nThe condition \\eqref{CNS} is sufficient. Indeed, suppose that $\\mu_k^*$ such that $\\psi_k(\\mu_k^*)>0$ satisfies \\eqref{CNS}. We obtain\n$$\n\\int (x-E_{\\mu_k^*})\\TT \\nabla_{\\Psi_k}[V_{\\mu_k^*}](x-E_{\\mu_k^*})\\, \\nu(\\dd x) \\leq \\tr\\left\\{V_{\\mu_k^*}\\nabla_{\\Psi_k}[V_{\\mu_k^*}] \\right\\}\n$$\nfor any $\\nu\\in\\SM$, which gives \\eqref{CNS-nu} when we use \\eqref{derivativeV}. The condition is also necessary since \\eqref{CNS-nu} must be true in particular for $\\delta_x$, the delta measure at any $x\\in\\SX$, which gives \\eqref{CNS}. 
The property \\eqref{support-mu*} on the support of $\\mu_k^*$ follows from the observation that $\\int (x-E_{\\mu_k^*})\\TT \\nabla_{\\Psi_k}[V_{\\mu_k^*}](x-E_{\\mu_k^*})\\, \\mu_k^*(\\dd x) = \\tr\\left\\{V_{\\mu_k^*}\\nabla_{\\Psi_k}[V_{\\mu_k^*}] \\right\\}$.\n\\end{proof}\n\nNote that $\\psi_k(\\mu)>0$ implies that $\\psi_{k-1}(\\mu)>0$, $k=2,\\ldots,d$.\n\n\\begin{remark} As a natural extension of the concept of potential in the case of order-two interactions ($k=1$), we call\n$P_{k,\\mu}(x)= \\psi_k(\\mu,\\ldots,\\mu,\\delta_x)$ the potential of $\\mu$ at $x$, where\n$$\n\\psi_k(\\mu_1,\\ldots,\\mu_{k+1})= \\int \\ldots \\int \\SV_k^2(x_1,\\ldots,x_{k+1})\\, \\mu_1(\\dd x_1) \\ldots \\mu_{k+1}(\\dd x_{k+1}) \\,.\n$$\nThis yields $F_{\\psi_k}(\\mu;\\nu) = (k+1)\\, [\\psi_k(\\mu,\\ldots,\\mu,\\nu)-\\psi_k(\\mu)]$, where $\\mu$ appears $k$ times in $\\psi_k(\\mu,\\ldots,\\mu,\\nu)$. Therefore, Theorem~\\ref{th:equivTh} states that $\\mu_k^*$ with $\\psi_k(\\mu_k^*)>0$ is $\\psi_k$-optimal if and only if $\\psi_k(\\mu_k^*,\\ldots,\\mu_k^*,\\nu) \\leq \\psi_k(\\mu_k^*)$ for any $\\nu\\in\\SM$, or equivalently $P_{k,\\mu_k^*}(x) \\leq \\psi_k(\\mu_k^*)$ for all $x\\in\\SX$.\n\nIt can be shown that for any measure $\\mu\\in\\SM$, $\\min_{x\\in\\SX} P_{k,\\mu}(x)$ is reached for $x=E_\\mu$, which extends the result of \\citet{wilks1960} about the minimum property of the internal scatter.\n\\end{remark}\n\n\\begin{remark} \\label{R:Radoslav}\nConsider Kiefer's $\\Phi_p$-class of orthogonally invariant criteria and their associated functional $\\varphi_p(\\cdot)$, see \\eqref{Kiefer-phi_p}.\nFrom a result in \\citep{Harman2004-MODA}, if a measure $\\mu_p$ optimal for some $\\varphi_p(\\cdot)$ with $p\\in(-\\infty,1]$ is such that $V_{\\mu_p}$ is proportional to the identity matrix $I_d$, then $\\mu_p$ is simultaneously optimal for all orthogonally invariant criteria.
A measure $\\mu_p$ having this property is therefore $\\psi_k$-optimal for all $k=1,\\ldots,d$.\n\\end{remark}\n\n\\begin{remark}\\label{R:otherET} Using \\eqref{EkEd-k}, when $V$ is nonsingular we obtain the property\n$$\n\\Psi_k(V)=\\frac{(k+1)(d-k)!}{(d-k+1)k!}\\, \\det(V)\\, \\Psi_{d-k}(V^{-1})\n$$\nwhich implies that maximising $\\Psi_k(V)$ is equivalent to maximising $\\log\\det(V)+\\log\\Psi_{d-k}(V^{-1})$. Therefore, Theorem~\\ref{th:equivTh} implies that $\\mu_k^*$ with nonsingular covariance matrix $V_{\\mu_k^*}$ maximises $\\psi_k(\\mu)$ if and only if\n$$\n \\max_{x\\in\\SX} (x-E_{\\mu_k^*})\\TT \\left[V_{\\mu_k^*}^{-1}- V_{\\mu_k^*}^{-1}\\,\\frac{\\nabla_{\\Psi_{d-k}}[V_{\\mu_k^*}^{-1}]}{\\Psi_{d-k}(V_{\\mu_k^*}^{-1})}\\,V_{\\mu_k^*}^{-1} \\right](x-E_{\\mu_k^*}) \\leq d- k \\,,\n$$\nwith equality for $x$ in the support of $\\mu_k^*$. When $k$ is large (and $d-k$ is small), one may thus check the optimality of $\\mu_k^*$ without using the complicated expressions of $\\Psi_k(V)$ and $\\nabla_{\\Psi_k}[V]$.\n\\end{remark}\n\n\\subsubsection{A duality property}\n\nThe characterisation of maximum-diversity measures can also be approached from the point of view of duality theory.\n\nWhen $k=1$, the determination of a $\\psi_1$-optimal measure $\\mu_1^*$ is equivalent to the dual problem of constructing the minimum-volume ball $\\SB_d^*$ containing $\\SX$. If this ball has radius $\\rho$, then $\\psi_1(\\mu_1^*)=2\\rho^2$, and the support points of $\\mu_1^*$ are the points of contact between $\\SX$ and $\\SB_d^*$; see \\cite[Th.~6]{Bjorck56}. Moreover, there exists an optimal measure with no more than $d+1$ points.\n\nThe determination of an optimal measure $\\mu_d^*$ is also dual to a simple geometrical problem: it corresponds to the determination of the minimum-volume ellipsoid $\\SE_d^*$ containing $\\SX$. 
This is equivalent to a $D$-optimal design problem in $\\mathds{R}^{d+1}$ for the estimation of $\\beta=(\\beta_0,\\beta_1\\TT)\\TT$, $\\beta_1\\in\\mathds{R}^d$, in the linear regression model with intercept $\\beta_0+\\beta_1\\TT x$, $x\\in\\SX$, see \\cite{Titterington75}. Indeed, denote\n$$\nW_\\mu= \\int_\\SX (1\\ \\ x\\TT)\\TT (1\\ \\ x\\TT) \\, \\mu(\\dd x)\\,.\n$$\nThen $\\SE_{d+1}^*=\\{z\\in\\mathds{R}^{d+1}: z\\TT W^{-1}_{\\mu_d^*} z \\leq d+1\\}$, with $\\mu_d^*$ maximising $\\det(W_\\mu)$, is the minimum-volume ellipsoid centered at the origin and containing the set $\\{z\\in\\mathds{R}^{d+1}: z=(1\\ \\ x\\TT)\\TT,\\ x\\in\\SX \\}$. Moreover, $\\SE_d^*$ corresponds to the intersection between $\\SE_{d+1}^*$ and the hyperplane $\\{z\\}_1=1$; see, e.g., \\cite{ShorB92}. This gives $\\psi_d(\\mu_d^*)=(d+1)\/d!\\, \\det(W_{\\mu_d^*})$. The support points of $\\mu_d^*$ are the points of contact between $\\SX$ and $\\SE_d^*$, there exists an optimal measure with no more than $d(d+3)\/2+1$ points, see \\cite{Titterington75}.\n\nThe property below generalises this duality property to any $k\\in\\{1,\\ldots,d\\}$.\n\\begin{theorem}\\label{Th:duality}\n$$\n\\max_{\\mu\\in\\SM} \\Psi_k^{1\/k}(V_\\mu) = \\min_{M,c:\\ \\SX\\subset\\SE(M,c)} \\frac{1}{\\phi_k^\\infty(M)} \\,,\n$$\nwhere $\\SE(M,c)$ denotes the ellipsoid $\\SE(M,c) = \\{x\\in\\mathds{R}^d: (x-c)\\TT M(x-c)\\leq 1 \\}$ and $\\phi_k^\\infty(M)$ is the polar function\n\\begin{equation}\\label{polar-phi}\n \\phi_k^\\infty(M) = \\inf_{V\\succeq 0:\\ \\tr(MV)=1} \\frac{1}{\\Psi_k^{1\/k}(V)} \\,.\n\\end{equation}\n\\end{theorem}\n\nThe proof is given in Appendix. The polar function $\\phi_k^\\infty(\\cdot)$ possesses the properties of what is called an information function in \\citep[Chap.~5]{Pukelsheim93}; in particular, it is concave on the set of symmetric non-negative definite matrices. 
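For $k=1$ this duality is easy to check numerically: using $\Psi_1(V)=2\,\tr(V)$, consistent with the formulas above, the dual problem is the minimum-volume enclosing ball and the optimal value equals $2\rho^2$. A small numerical sketch, where the candidate set $\SX$ (the vertices of a square) is our own illustrative choice:

```python
import numpy as np

def psi_1(points, weights):
    """psi_1(mu) = 2 tr(V_mu): twice the trace of the covariance matrix of mu."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    mean = weights @ points
    centred = points - mean
    V = centred.T @ (weights[:, None] * centred)
    return 2.0 * np.trace(V)

# Candidate set X: vertices of a square, contained in a ball of radius sqrt(2).
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
rho = np.sqrt(2.0)   # radius of the minimum-volume ball containing X

# The uniform measure on the four vertices attains the duality bound 2 rho^2.
w_star = np.full(4, 0.25)
assert np.isclose(psi_1(X, w_star), 2.0 * rho**2)

# No other weighting of the support does better (duality upper bound).
rng = np.random.default_rng(0)
for _ in range(1000):
    w = rng.dirichlet(np.ones(4))
    assert psi_1(X, w) <= 2.0 * rho**2 + 1e-12
```

The support points of the optimal measure are exactly the contact points between $\SX$ and the minimum enclosing ball, as stated above.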
This duality property has the following consequence.\n\n\\begin{corollary} The determination of a covariance matrix $V_k^*$ that maximises $\\Psi_k(V_\\mu)$ with respect to $\\mu\\in\\SM$ is equivalent to the determination of an ellipsoid $\\SE(M_k^*,c_k^*)$ containing $\\SX$, minimum in the sense that $M_k^*$ maximizes $\\phi_k^\\infty(M)$. The points of contact between $\\SE(M_k^*,c_k^*)$ and $\\SX$ form the support of $\\mu_k^*$.\n\\end{corollary}\n\nFor any $V\\succeq 0$, denote by $M_*(V)$ the matrix\n\\begin{equation}\\label{M_*}\n M_*(V) = \\frac{\\nabla_{\\Psi_k}[V]}{k\\,\\Psi_k(V)} = \\frac1k\\, \\nabla_{\\log\\Psi_k}[V]\\,.\n\\end{equation}\nNote that $M_*(V)\\succeq 0$, see \\cite[Lemma~7.5]{Pukelsheim93}, and that\n$$\n\\tr[VM_*(V)]=1 \\,,\n$$\nsee the proof of Theorem~\\ref{th:equivTh}.\nThe matrix $V\\succeq 0$ maximises $\\Psi_k(V)$ under the constraint $\\tr(MV)=1$ for some $M\\succeq 0$ if and only if $V[M_*(V)-M]=0$. Therefore, if $M$ is such that there exists $V_*=V_*(M)\\succeq 0$ such that $M=M_*[V_*(M)]$, then $\\phi_k^\\infty(M)=\\Psi_k^{-1\/k}[V_*(M)]$. When $k>1$, such a $V_*(M)$ does not always exist, since one may have $\\psi_k(\\mu_1^*)=0$ for $k>1$.\n\n\\paragraph{Example 3}\n\nTake $\\SX=\\SB_d(\\0b,\\rho)$, the closed ball of $\\mathds{R}^d$ centered at the origin $\\0b$ with radius $\\rho$. Let $\\mu_0$ be the uniform measure on the sphere\n$\\SS_d(\\0b,\\rho)$ (the boundary of $\\SB_d(\\0b,\\rho)$). Then, $V_{\\mu_0}$ is proportional to the identity matrix $I_d$, and $\\tr[V_{\\mu_0}]=\\rho^2$ implies that\n$V_{\\mu_0}=\\rho^2 I_d\/d$. Take $k=d$.
We have $E_{\\mu_0}=0$ and\n$$\n\\max_{x\\in\\SX} (x-E_{\\mu_0})\\TT \\nabla_{\\Psi_d}[V_{\\mu_0}](x-E_{\\mu_0}) = \\frac{(d+1) \\rho^{2d}}{d^{d-1}d!} =\\tr\\{V_{\\mu_0}\\nabla_{\\Psi_d}[V_{\\mu_0}]\\} \\,,\n$$\nso that $\\mu_0$ is $\\psi_d$-optimal from \\eqref{CNS}.\n\nLet $\\mu_d$ be the measure that allocates mass $1\/(d+1)$ at each vertex of a $d$ regular simplex having its $d+1$ vertices on $\\SS_d(\\0b,\\rho)$, with squared volume $\\rho^{2d} (d+1)^{d+1}\/[d^d (d!)^2]$. We also have $V_{\\mu_d}=\\rho^2 I_d\/d$, so that $\\mu_d$ is $\\psi_d$-optimal too. In view of Remark~\\ref{R:Radoslav}, $\\mu_0$ and $\\mu_d$ are $\\psi_k$-optimal for all $k$ in $\\{1,\\ldots,d\\}$.\n\nLet now $\\mu_k$ be the measure that allocates mass $1\/(k+1)$ at each vertex of a $k$ regular simplex $\\SP_k$, centered at the origin, with its vertices on $\\SS_d(\\0b,\\rho)$. The squared volume of $\\SP_k$ equals $\\rho^{2k}\\, (k+1)^{k+1}\/[k^k (k!)^2]$.\nWithout any loss of generality, we can choose the orientation of the space so that $V_{\\mu_k}$ is diagonal, with its first $k$ diagonal elements equal to $\\rho^2\/k$ and the other elements equal to zero. Note that $\\psi_{k'}(\\mu_k)=0$ for $k'>k$. Direct calculations based on \\eqref{Psi_kE_k} give\n$$\n\\psi_k(\\mu_k)= \\frac{k+1}{k!}\\, \\frac{\\rho^{2k}}{k^k} \\leq \\psi_k(\\mu_0) = \\frac{k+1}{k!}\\, {d \\choose k}\\, \\frac{\\rho^{2k}}{d^k} \\,,\n$$\nwith equality for $k=1$ and $k=d$, the inequality being strict otherwise.\n\n\n\\subsection{Optimal design in regression models}\\label{S:design}\n\nIn this section we consider the case when $V=M^{-1}(\\xi)$, where $M(\\xi)$ is the information matrix\n$$\nM(\\xi) = \\int_{\\ST} f(t)f\\TT(t)\\, \\xi(\\dd t)\n$$\nin a regression model $Y_j=\\mt\\TT f(t_j)+\\mve_j$ with parameters $\\mt\\in\\mathds{R}^d$, for a design measure $\\xi\\in\\Xi$. 
Here $\\Xi$ denotes the set of probability measures on a set $\\ST$ such that $\\{f(t): t\\in\\ST\\}$ is compact, and $M^{-1}(\\xi)$ is the (asymptotic) covariance matrix of an estimator $\\mth$ of $\\mt$ when the design variables $t$ are distributed according to $\\xi$. The value $\\psi_k(\\mu)$ of Theorem~\\ref{Prop:1} defines a measure of dispersion for $\\mth$, that depends on $\\xi$ through $V_\\mu=M^{-1}(\\xi)$. The design problem we consider consists in choosing $\\xi$ that minimises this dispersion, as measured by $\\Psi_k[M^{-1}(\\xi)]$, or equivalently that maximises $\\Psi_k^{-1}[M^{-1}(\\xi)]$.\n\n\\subsubsection{Properties}\nIt is customary in optimal design theory to maximise a concave and Loewner-increasing function of $M(\\xi)$, see \\cite[Chap.~5]{Pukelsheim93} for desirable properties of optimal design criteria. Here we have the following.\n\n\\begin{theorem}\\label{Th:design} The functions $M \\longrightarrow \\Psi_k^{-1\/k}(M^{-1})$, $k=1,\\ldots,d$, are Loew\\-ner-increasing, concave and differentiable on the set $\\mathbb{M}^+$ of $d\\times d$ symmetric positive-definite matrices. The functions $\\Psi_k(\\cdot)$ are also orthogonally invariant.\n\\end{theorem}\n\n\\begin{proof}\nThe property \\eqref{EkEd-k} yields\n\\begin{equation}\\label{design-criterion}\n \\Psi_k^{-1\/k}(M^{-1}) = \\left(\\frac{k+1}{k!}\\right)^{-1\/k}\\, \\frac{\\det^{1\/k}(M)}{\\mathcal{E}_{d-k}^{1\/k}(M)}\n\\end{equation}\nwhich is a concave function of $M$, see Eq.~(10) of \\cite[p.~116]{MarcusM64}. Since $\\Psi_k(\\cdot)$ is Loewner-increasing, see \\cite{Lopez-FR-D98-MODA}, the function $M \\longrightarrow \\Psi_k^{-1\/k}(M^{-1})$ is Loewner-increasing too. 
Its orthogonal invariance follows from the fact that it is defined in terms of the eigenvalues of $M$.\n\\end{proof}\n\nNote that Theorems~\\ref{Prop:1} and \\ref{Th:design} imply that the functions\n$M \\longrightarrow - \\log \\Psi_k(M)$\nand $M \\longrightarrow \\log \\Psi_k(M^{-1})$ are convex for all $k=1,\\ldots,d$, a question which was left open in \\citep{Lopez-FR-D98-MODA}.\n\nAs a consequence of Theorem~\\ref{Th:design}, we can derive a necessary and sufficient condition for a design measure $\\xi_k^*$ to maximise $\\Psi_k^{-1\/k}[M^{-1}(\\xi)]$ with respect to $\\xi\\in\\Xi$, for $k=1,\\ldots,d$.\n\n\\begin{theorem}\\label{th:equivTh2} The design measure $\\xi_k^*$ such that $M(\\xi_k^*)\\in \\mathbb{M}^+$ maximises $\\tilde\\psi_k(\\xi)=\\Psi_k^{-1\/k}[M^{-1}(\\xi)]$ with respect to $\\xi\\in\\Xi$ if and only if\n\\begin{equation}\\label{CNS-design-a}\n \\max_{t\\in\\ST} f\\TT(t)M^{-1}(\\xi_k^*)\\, \\frac{\\nabla_{\\Psi_k}[M^{-1}(\\xi_k^*)]}{\\Psi_k[M^{-1}(\\xi_k^*)]}\\,M^{-1}(\\xi_k^*)f(t) \\leq k\n\\end{equation}\nor, equivalently,\n\\begin{equation}\\label{CNS-design}\n \\max_{t\\in\\ST} \\left\\{ f\\TT(t)M^{-1}(\\xi_k^*)f(t) - f\\TT(t)\\frac{\\nabla_{\\Psi_{d-k}}[M(\\xi_k^*)]}{\\Psi_{d-k}[M(\\xi_k^*)]}f(t) \\right\\} \\leq d - k \\,.\n\\end{equation}\nMoreover, there is equality in \\eqref{CNS-design-a} and \\eqref{CNS-design} for all $t$ in the support of $\\xi_k^*$.\n\\end{theorem}\n\n\\begin{proof}\nFrom \\eqref{design-criterion}, the maximisation of $\\tilde\\psi_k(\\xi)$ is equivalent to the maximisation of\n$\\tilde\\phi_k(\\xi)=\\log\\det[M(\\xi)]-\\log\\Psi_{d-k}[M(\\xi)]$.\nThe proof is similar to that of Theorem~\\ref{th:equivTh} and is based on the following expressions for the directional derivatives of these two functionals\nat $\\xi$ in the direction $\\nu\\in\\Xi$,\n$$\nF_{\\tilde\\psi_k}(\\xi;\\nu) = \\tr\\left( \\frac1k\\, M^{-1}(\\xi)\\, \\frac{\\nabla_{\\Psi_k}[M^{-1}(\\xi)]}{\\Psi_k[M^{-1}(\\xi)]}\\,M^{-1}(\\xi)\\, 
[M(\\nu)-M(\\xi)] \\right) $$\nand\n$$\nF_{\\tilde\\phi_k}(\\xi;\\nu) = \\tr\\left( \\left\\{M^{-1}(\\xi)-\\frac{\\nabla_{\\Psi_{d-k}}[M(\\xi)]}{\\Psi_{d-k}[M(\\xi)]} \\right\\}[M(\\nu)-M(\\xi)] \\right) \\,,\n$$\nand on the property $\\tr\\{M\\nabla_{\\Psi_{j}}[M]\\}=j\\, \\Psi_{j}(M)$.\n\\end{proof}\n\nIn particular, consider the following special cases for $k$ (note that $\\Psi_0(M)=\\mathcal{E}_0(M)=1$ for any $M$).\n\\begin{eqnarray*}\n&&k=d: \\hspace{1.2cm} \\tilde\\psi_d(\\xi) = \\log\\det[M(\\xi)] \\,, \\\\\n&&k=d-1: \\hspace{0.5cm} \\tilde\\psi_{d-1}(\\xi) = \\log\\det[M(\\xi)] - \\log\\tr[M(\\xi)] - \\log 2 \\,,\\\\\n&&k=d-2: \\hspace{0.5cm} \\tilde\\psi_{d-2}(\\xi) = \\log\\det[M(\\xi)] \\\\\n&& \\hspace{4cm} - \\log\\left\\{\\tr^2[M(\\xi)]-\\tr[M^2(\\xi)]\\right\\} - \\log(3\/4)\\,. \\\\\n\\end{eqnarray*}\nThe necessary and sufficient condition \\eqref{CNS-design} then takes the following form:\n\\begin{eqnarray*}\n&&\\hspace{-0.5cm} k=d: \\hspace{1cm} \\max_{t\\in\\ST} f\\TT(t)M^{-1}(\\xi_k^*)f(t) \\leq d \\,,\\\\\n&&\\hspace{-0.5cm} k=d-1: \\hspace{0.3cm} \\max_{t\\in\\ST} \\left\\{ f\\TT(t)M^{-1}(\\xi_k^*)f(t) - \\frac{f\\TT(t)f(t)}{\\tr[M(\\xi_{k}^*)]} \\right\\} \\leq d -1\\,, \\\\\n&&\\hspace{-0.5cm} k=d-2: \\hspace{0.3cm} \\max_{t\\in\\ST} \\left\\{ f\\TT(t)M^{-1}(\\xi_k^*)f(t) \\right. \\\\\n&& \\hspace{3cm} \\left. 
- 2\\,\\frac{\\tr[M(\\xi_{k}^*)]f\\TT(t)f(t)-f\\TT(t)M(\\xi_{k}^*)f(t)}{\\tr^2[M(\\xi_{k}^*)]-\\tr[M^2(\\xi_{k}^*)]} \\right\\} \\leq d -2 \\,.\\\\\n\\end{eqnarray*}\nAlso, for $k=1$ condition \\eqref{CNS-design-a} gives\n$$\n\\max_{t\\in\\ST} f\\TT(t) \\frac{M^{-2}(\\xi_1^*)}{\\tr[M^{-1}(\\xi_1^*)]} f(t) \\leq 1\n$$\n(which corresponds to $A$-optimal design), and for $k=2$\n$$\n\\max_{t\\in\\ST} \\frac{\\tr[M^{-1}(\\xi_2^*)] f\\TT(t)M^{-2}(\\xi_2^*)f(t) - f\\TT(t)M^{-3}(\\xi_2^*)f(t)}{\\tr^2[M^{-1}(\\xi_2^*)]-\\tr[M^{-2}(\\xi_2^*)]} \\leq 1 \\,.\n$$\n\nFinally, note that a duality theorem, in the spirit of Theorem~\\ref{Th:duality}, can be formulated for the maximisation of $\\Psi_k^{-1\/k}[M^{-1}(\\xi)]$; see \\cite[Th.~7.12]{Pukelsheim93} for the general form of such duality properties in optimal experimental design.\n\n\\subsubsection{Examples}\n\\paragraph{Example 4} For the linear regression model $\\mt_0+\\mt_1\\,x$ on $[-1,1]$, the optimal design for $\\tilde\\psi_k(\\cdot)$ with $k=d=2$ or $k=1$ is\n$$\n\\xi_k^*=\\left\\{\n\\begin{array}{cc}\n-1 & 1 \\\\\n1\/2 & 1\/2\n\\end{array} \\right\\}\\,,\n$$\nwhere the first line corresponds to support points and the second indicates their respective weights.\n\n\\paragraph{Example 5} For linear regression with the quadratic polynomial model $\\mt_0+\\mt_1\\,t+\\mt_2\\,t^2$ on $[-1,1]$, the optimal designs for $\\tilde\\psi_k(\\cdot)$ have the form\n$$\n\\xi_k^*=\\left\\{\n\\begin{array}{ccc}\n-1 & 0 & 1 \\\\\nw_k & 1-2w_k & w_k\n\\end{array} \\right\\}\\,,\n$$\nwith $w_3 = 1\/3$, $w_2 = (\\sqrt{33}-1)\/16 \\simeq 0.2965352$ and $w_1 = 1\/4$. Define the efficiency $\\mathrm{Eff}_k(\\xi)$ of a design $\\xi$ as\n$$\n\\mathrm{Eff}_k(\\xi) = \\frac{\\tilde\\psi_k(\\xi)}{\\tilde\\psi_k(\\xi_k^*)} \\,.\n$$\nTable~\\ref{Tab:Ex5} gives the efficiencies $\\mathrm{Eff}_k(\\xi_j^*)$ for $j,k=1,\\ldots,d=3$.
The design $\\xi_2^*$, optimal for $\\tilde\\psi_2(\\cdot)$, appears to make a good compromise between $A$-optimality (which corresponds to $\\tilde\\psi_1(\\cdot)$) and $D$-optimality (which corresponds to $\\tilde\\psi_3(\\cdot)$).\n\n\\begin{table}[t]\n\\centering\n\\caption{\\small Efficiencies $\\mathrm{Eff}_k(\\xi_j^*)$ for $j,k=1,\\ldots,d$ in Example 5.}\n{\\small \\begin{tabular}{lccc}\n\\hline\n & $\\mathrm{Eff}_1$ & $\\mathrm{Eff}_2$ & $\\mathrm{Eff}_3$ \\\\\n\\hline\n$\\xi_1^*$ & 1 & 0.9770 & 0.9449 \\\\\n$\\xi_2^*$ & 0.9654 & 1 & 0.9886 \\\\\n$\\xi_3^*$ & 0.8889 & 0.9848 & 1 \\\\\n \\hline\n\\end{tabular}}\n\n\\label{Tab:Ex5}\n\\end{table}\n\n\\paragraph{Example 6} For linear regression with the cubic polynomial model $\\mt_0+\\mt_1\\,t+\\mt_2\\,t^2+\\mt_3\\,t^3$ on $[-1,1]$, the optimal designs for $\\tilde\\psi_k(\\cdot)$ have the form\n$$\n\\xi_k^*=\\left\\{\n\\begin{array}{cccc}\n-1 & -z_k & z_k & 1 \\\\\nw_k & 1\/2-w_k & 1\/2-w_k & w_k\n\\end{array} \\right\\}\\,,\n$$\nwhere\n$$\n \\begin{array}{ll}\n z_4 = 1\/\\sqrt{5} \\simeq 0.4472136\\,, & w_4 =0.25\\,, \\\\\n z_3 \\simeq 0.4350486\\,, & w_3 \\simeq 0.2149859\\,, \\\\\n z_2 \\simeq 0.4240013\\,, & w_2 \\simeq 0.1730987\\,, \\\\\n z_1 = \\sqrt{3\\sqrt{7}-6}\/3 \\simeq 0.4639509\\,, & w_1 = (4-\\sqrt{7})\/9 \\simeq 0.1504721\\,,\n \\end{array}\n$$\nwith $z_3$ satisfying the equation $2z^6-3z^5-45z^4+6z^3-4z^2-15z+3=0$ and\n$$\nw_3= \\frac{5\\,{z}^{6}+5\\,{z}^{4}+5\\,{z}^{2}+1-\\sqrt{{z}^{12}+2\\,{z}^{10}+3\\,{z}^{8}+60\\,{z}^{6}+59\\,{z}^{4}+58\\,{z}^{2}+73}}{12({z}^{6}+{z}^{4}+{z}^{2}-3)}\\,,\n$$\nwith $z=z_3$. For $k=d-2=2$, the numbers $z_2$ and $w_2$ are too difficult to express analytically. Table~\\ref{Tab:Ex6} gives the efficiencies $\\mathrm{Eff}_k(\\xi_j^*)$ for $j,k=1,\\ldots,d$.
Here again the design $\\xi_2^*$ appears to make a good compromise: it maximises the minimum efficiency $\\min_k \\mathrm{Eff}_k(\\cdot)$ among the designs considered.\n\n\\begin{table}[t]\n\\centering\n\\caption{\\small Efficiencies $\\mathrm{Eff}_k(\\xi_j^*)$ for $j,k=1,\\ldots,d$ in Example 6.}\n{\\small \\begin{tabular}{lcccc}\n\\hline\n & $\\mathrm{Eff}_1$ & $\\mathrm{Eff}_2$ & $\\mathrm{Eff}_3$ & $\\mathrm{Eff}_4$ \\\\\n\\hline\n$\\xi_1^*$ & 1 & 0.9785 & 0.9478 & 0.9166 \\\\\n$\\xi_2^*$ & 0.9694 & 1 & 0.9804 & 0.9499 \\\\\n$\\xi_3^*$ & 0.9180 & 0.9753 & 1 & 0.9897 \\\\\n$\\xi_4^*$ & 0.8527 & 0.9213 & 0.9872 & 1 \\\\\n \\hline\n\\end{tabular}}\n\\label{Tab:Ex6}\n\\end{table}\n\n\\section*{Appendix}\n\n\\paragraph{Shift-invariance and positive homogeneity}\n\nDenote by $\\SM$ the set of probability measures defined on the Borel subsets of $\\SX$, a compact subset of $\\mathds{R}^d$. For any $\\mu\\in\\SM$, any $\\mt\\in \\mathds{R}^d$ and any $\\ml\\in\\mathds{R}^+$, respectively denote by $T_{-\\mt}[\\mu]$ and $H_{\\ml^{-1}}[\\mu]$ the measures defined by:\n$$\n\\mbox{ for any } \\mu\\mbox{-measurable } \\SA\\subseteq\\SX\\,, \\ T_{-\\mt}[\\mu](\\SA+\\mt)=\\mu(\\SA)\\,, \\ H_{\\ml^{-1}}[\\mu](\\ml\\SA)=\\mu(\\SA) \\,,\n$$\nwhere $\\SA+\\mt=\\{x+\\mt: x\\in\\SA\\}$ and $\\ml\\SA=\\{\\ml\\,x: x\\in\\SA\\}$.\nThe shift-invariance of $\\phi(\\cdot)$ then means that $\\phi(T_{-\\mt}[\\mu])=\\phi(\\mu)$ for any $\\mu\\in\\SM$ and any $\\mt\\in\\mathds{R}^d$, and positive homogeneity of degree $q$ means that $\\phi(H_{\\ml^{-1}}[\\mu])=\\ml^q\\, \\phi(\\mu)$ for any $\\mu\\in\\SM$ and any $\\ml\\in\\mathds{R}^+$.\n\\fin\n\n\\paragraph{The variance is the only concave central moment}\n\nFor $q\\neq 2$, the $q$-th central moment\n$\\Delta_q(\\mu)=\\int |x-E_\\mu|^q\\,\\mu(\\dd x)$\nis shift-invariant and homogeneous of degree $q$, but it is not concave on $\\SM$.
Indeed, consider for instance the two-point probability measures\n$$\n\\mu_1=\\left\\{ \\begin{array}{cc}\n0 & 1 \\\\\n1\/2 & 1\/2\n\\end{array}\\right\\}\n\\mbox{ and }\n\\mu_2=\\left\\{ \\begin{array}{cc}\n0 & 101 \\\\\nw & 1-w\n\\end{array}\\right\\}\\,,\n$$\nwhere the first line denotes the support points and the second one their respective weights. Then, for\n$$\nw=1-\\frac{1}{404}\\, \\frac{201^{q-1}-202q+405}{201^{q-1}-101q+102}\n$$\none has\n$\\mp^2 \\Delta_q[(1-\\ma)\\mu_1+\\ma\\mu_2]\/\\mp\\ma^2\\big|_{\\ma=0} \\geq 0$ for all $q \\geq 1.84$, the equality being obtained at $q=2$ only. Counterexamples are easily constructed for values of $q$ smaller than 1.84.\n\\fin\n\n\\paragraph{Proof of Lemma~\\ref{L:1}}\nWe have\n$$\n\\Ex \\left \\{ \\det\\left [\\sum_{i=1}^{k+1} z_i z_i\\TT \\right] \\right\\} = (k+1)!\\, \\det\\left[\n \\begin{array}{cc}\n \\Ex(x_1 x_1\\TT) & E_\\mu \\\\\n E_\\mu\\TT & 1 \\\\\n \\end{array}\n\\right] = (k+1)! \\det[V_\\mu]\\,,\n$$\nsee for instance \\cite[Theorem~1]{Pa98}.\n\\fin\n\n\\paragraph{Proof of Lemma~\\ref{L:2}}\nTake any vector $z$ of the same dimension as $x$. Then $z\\TT V_\\mu z=\\var_\\mu(z\\TT x)$, which is a concave functional of $\\mu$, see Section~\\ref{S:intro}. 
This implies that\n$z\\TT V_{(1-\\ma)\\mu_1+\\ma\\mu_2} z = \\var_{(1-\\ma)\\mu_1+\\ma\\mu_2}(z\\TT x) \\geq (1-\\ma)\\var_{\\mu_1}(z\\TT x)+\\ma \\var_{\\mu_2}(z\\TT x) = (1-\\ma) z\\TT V_{\\mu_1}z +\\ma z\\TT V_{\\mu_2}z$,\nfor any $\\mu_1$, $\\mu_2$ in $\\SM$ and any $\\ma\\in(0,1)$ (see Section~\\ref{S:intro} for the concavity of $\\var_\\mu$).\nSince $z$ is arbitrary, this implies \\eqref{conc0}.\n\\fin\n\n\n\\paragraph{Proof of Theorem~\\ref{Th:empirical}}\nThe estimate \\eqref{widehat-psi} is a U-statistic for the estimation of $\\psi_k(\\mu)$ and is thus unbiased and has minimum variance, see, e.g., \\cite[Chap.~5]{Serfling80}.\nWe only need to show that it can be written as \\eqref{widehat-psi-b}.\n\nWe can write\n\\begin{eqnarray*}\n({\\widehat\\psi}_k)_n &=& {n \\choose k+1}^{-1} \\\\\n&& \\times \\sum_{j_10,\\ \\ma \\geq 0} \\left\\{ - \\log \\Psi_k^{1\/k}\\left[\\sum_{x\\in\\SX} \\ma_x\\,(x-c^*)(x-c^*)\\TT\\right] + \\mg - \\log(\\mg) -1 \\right\\} \\,, \\\\\n&& = \\min_{\\ma \\geq 0} - \\log \\Psi_k^{1\/k}\\left[\\sum_{x\\in\\SX} \\ma_x\\,(x-c^*)(x-c^*)\\TT\\right] = - \\log\\Psi_k^{1\/k}(V_k^*) \\,,\n\\end{eqnarray*}\nwhere we have denoted $\\mg=\\sum_{x\\in\\SX} \\beta_x$ and $\\ma_x=\\beta_x\/\\mg$ for all $x$. Therefore\n$T^* \\leq - \\log\\Psi_k^{1\/k}(V_k^*)$, that is,\n$\\log\\left[\\min_{M,c:\\ \\SX\\subset\\SE(M,c)} 1\/\\phi_k^\\infty(M)\\right] \\geq \\log\\Psi_k^{1\/k}(V_k^*)$.\n\\fin\n\n\n\n\\section*{Acknowledgments}\nThe work of the first author was partly supported by the ANR project 2011-IS01-001-01 DESIRE (DESIgns for spatial Random fiElds).\n\n\\bibliographystyle{elsart-harv}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nAn in-coincidence experiment measures simultaneously the outgoing\nmomenta of multiple products of a microscopic reaction\n\\cite{schmidt2002coltrims}. It is an instrument that can study the\ncorrelations in reactions involving multiple particles.
In double\nionization, for example, a single photon ionizes, simultaneously, two\nelectrons and the outgoing momenta of both particles are captured\n\\cite{akoury2007simplest}. The reaction probes the correlation between\ntwo electrons in, for example, a chemical bond at the moment of\nphoton impact. The outgoing wave of the two electrons is described by\na 6D correlated wave and results in a cross section that depends on\nfour angles, the directions of the first and the second electron.\n\nFree-electron lasers, and similar experiments around the world, are\nexpected to generate a wealth of this high-dimensional scattering\ndata. This will result in high-dimensional forward and inverse wave\nproblems that need to be solved to interpret the data.\n\nThe experimental cross sections are often smooth functions of the angles. Similarly, some parts of the scattering\nsolution, such as single ionization, only probe a limited subspace\nof the possible full solution space. The scattering solution can then\nbe described by a low-rank wave function, a product of one-particle\nbound states with scattering waves in the other coordinates.\n\nThis paper introduces a low-rank representation for the\nscattering solutions, not only for single ionization but also for\ndouble and triple ionization waves that appear in breakup reactions.\n\nWe also propose and analyze an alternating direction algorithm that\ndirectly solves for the low-rank components that describe the\nsolution. This reduces a large-scale linear system to smaller,\nlow-dimensional scattering problems that are solved in an iterative\nsequence. The proposed method can be generalized to high-dimensional\nscattering problems where a low-rank tensor decomposition is used to\nrepresent the full scattering wave function.\n\nEfficient low-rank tensor representations have been used in quantum physics\nfor quite some time already \\cite{huckle2013computations,\n murg2010simulating}.
They are also used in the applied mathematics\nliterature to approximate high-dimensional problems; for a review see\n\\cite{grasedyck2013literature, hackbusch2012tensor, kolda2009tensor}.\nMethods such as ALS \\cite{holtz2012alternating}, DMRG\n\\cite{oseledets2011dmrg}, and AMEn \\cite{dolgov2014alternating} use, in\nalternating directions, a small linear system to determine the\nlow-rank components of a tensor decomposition. These innovations have\nnot found their application in computational scattering theory.\n\nTo calculate cross sections from first principles, we start\nfrom a multi-particle Schr\\\"odinger equation. The equation is\nreformulated into a driven Schr\\\"odinger equation with an unknown scattering\nwave function and a right hand side that describes the\nexcitation, for example, a dipole operator working on the initial\nstate.\n\nSince the asymptotic behaviour of a scattering function for multiple\ncharged particles is in many cases unknown, absorbing boundary\nconditions \\cite{ECS,PML} are used. Here, an artificial layer is\nadded to the numerical domain that dampens outgoing waves. The\noutgoing wave boundary conditions are then replaced with\nhomogeneous Dirichlet boundary conditions at the end of the artificial\nlayer. These boundary conditions do not require any knowledge about the\nasymptotic behaviour, which becomes very complicated for these\nmultiple charged particles.\n\nThe resulting equation is discretized on a grid and results in a\nlarge, sparse indefinite linear system. It is typically solved by a\npreconditioned Krylov subspace method \\cite{cools2016fast}. However,\nthe preconditioning techniques for indefinite systems are not as\nefficient as preconditioners for symmetric and positive definite\nsystems.
Solving the resulting equation is still a\ncomputationally expensive task, often requiring a distributed\ncalculation on a supercomputer.\n\nTo compare the resulting theoretical cross sections with experimental\ndata, a further postprocessing step is necessary. The cross section is\nthe farfield map, and this is calculated through integrals of the\nscattering wave function, which is the solution of the linear system,\nand a Green's function \\cite{mccurdy2004solving}.\n\nThe main result of the paper is that we show that scattering waves\nthat describe multiple ionization can be represented by a low-rank\ntensor. We first show this for a 2D wave and then generalize the\nresults to 3D waves. The methodology can be generalized to higher\ndimensions.\n\nThe outline of the paper is as follows. In section~\\ref{sec:stateofart} we\nreview the methodology that solves a forward scattering problem. It\nresults in a driven Schr\\\"odinger equation with absorbing boundary\nconditions. From the solution we can extract the cross section using\nan integral. In section~\\ref{sec:lowrank} we illustrate, in 2D, that a\nsolution can be approximated by a truncated low-rank approximation. We\nalso show that these low-rank components can be calculated directly\nwith an iterative method. In section~\\ref{sec:3dhelmholtz} we show that\nthis methodology generalizes to 3D and higher dimensional problems.\nWe use a truncated tensor decomposition and determine the components \nwith a similar iterative method.\nA discussion of some numerical results and a comparison of the different\npresented versions of the method is given in section~\\ref{sec:NumericalResults}.\nIn the final section, Sec.~\\ref{sec:discussion}, we summarize some\nconclusions and discuss some possible extensions of the presented method.\n\n\n\n\\section{State of the art} \\label{sec:stateofart}\nThis section summarizes the methodology that solves forward\nbreak-up problems with charged particles.
The methodology is developed\nin a series of papers \\cite{rescigno2000numerical,mccurdy2004solving}\nand applied to solve the impact-ionization problem\n\\cite{rescigno1999collisional} and double ionization of molecules\n\\cite{vanroose2005complete, vanroose2006double}. These methods are\nbeing extended to treat, for example, water\n\\cite{streeter2018dissociation}.\n\nThe helium atom, He, is the simplest system with double ionization\n\\cite{briggs2000differential}. It has two electrons with coordinates\n$\\mathbf{r}_1 \\in \\mathbb{R}^3$ and $\\mathbf{r}_2 \\in \\mathbb{R}^3$\nrelative to the nucleus positioned at $\\mathbf{0}$. The driven\nSchr\\\"odinger equation for $u(\\mathbf{r}_1,\\mathbf{r}_2) \\in C^2$ then\nreads\n\\begin{equation}\\label{DrivenSchrodinger}\n\\left( {- \\frac{1}{2} \\Delta_{\\mathbf{r}_1} - \\frac{1}{2} \\Delta_{\\mathbf{r}_2} } { - \\frac{1}{\\|\\mathbf{r}_1\\|}- \\frac{1}{\\|\\mathbf{r}_2\\|}} + {\\frac{1}{\\|\\mathbf{r}_1-\\mathbf{r}_2\\|} }- { E} \\right) u(\\mathbf{r}_1,\\mathbf{r}_2) = \\mathcal{\\mu} \\phi_0(\\mathbf{r}_1,\\mathbf{r}_2) \\quad \\forall \\mathbf{r}_1, \\mathbf{r}_2 \\in \\mathbb{R}^3,\n\\end{equation}\nwhere the right hand side is\nthe dipole operator $\\mathcal{\\mu}$ working on the ground state $\\phi_0$, the\neigenstate with the lowest energy $\\lambda_0$. The operators\n$-\\frac{1}{2} \\Delta_{\\mathbf{r}_1}$ and\n$-\\frac{1}{2}\\Delta_{\\mathbf{r}_2}$ are the Laplacian operators for\nthe first and second electron and model the kinetic energy. The\nnuclear attraction is $-1\/\\|\\mathbf{r}_1\\|$ and $-1\/\\|\\mathbf{r}_2\\|$\nand the electron-electron repulsion is\n$1\/\\|\\mathbf{r}_1-\\mathbf{r}_2\\|$.\n\nThe total energy $E = h\\nu + \\lambda_0$ is the energy\ndeposited in the system by the photon, $h\\nu$, and the energy $\\lambda_0$ of\nthe ground state. If $E>0$, both electrons can\nescape simultaneously from the system.
The solution\n$u(\\mathbf{r}_1,\\mathbf{r}_2)$ then represents a 6D wave emerging from\nthe nucleus.\n\nThe equation can be interpreted as a Helmholtz equation with a\nspace-dependent wave number, $k^2(\\mathbf{r}_1,\\mathbf{r}_2)$,\n\\begin{equation}\\label{eq:helmholtz}\n \\left( - \\Delta_{6D} - { k^2(\\mathbf{r}_1,\\mathbf{r}_2)} \\right)\n u(\\mathbf{r}_1,\\mathbf{r}_2) = { f(\\mathbf{r}_1,\\mathbf{r}_2)} \\quad \\forall (\\mathbf{r}_1,\n \\mathbf{r}_2) \\in \\mathbb{R}^6.\n\\end{equation}\n\nIn this paper we prefer to write this Helmholtz equation as\n\\begin{equation}\n \\left( - \\Delta_{6D} - k_0^2 (1 + \\chi(\\mathbf{r}_1,\\mathbf{r}_2) ) \\right) u(\\mathbf{r}_1,\\mathbf{r}_2) = f(\\mathbf{r}_1,\\mathbf{r}_2) \\quad \\forall (\\mathbf{r}_1, \\mathbf{r}_2) \\in \\mathbb{R}^6,\n\\end{equation}\nwhere $k_0^2$ is a constant wave number, in this case related to the\ntotal energy $E$, and $\\chi: \\mathbb{R}^6\n\\rightarrow \\mathbb{R}$ is a space-dependent function that goes to zero if $\\|\\mathbf{r}_1\\|\n\\rightarrow \\infty $ or $\\|\\mathbf{r}_2\\| \\rightarrow \\infty $ and that\nrepresents all the potentials.\n\n\\subsection{Expansion in spherical waves and absorbing boundary conditions}\nFor small atomic and molecular systems, where spherical symmetry is relevant, the\nsystem is typically written in spherical coordinates and expanded in\nspherical harmonics. With $\\mathbf{r}_1(\\rho_1, \\theta_1,\n\\varphi_1)$ and $\\mathbf{r}_2(\\rho_2, \\theta_2, \\varphi_2)$ we can\nwrite\n\\begin{equation}\\label{eq:partialwaves}\n u(\\mathbf{r}_1,\\mathbf{r}_2) = \\sum_{l_1=0}^\\infty \\sum_{m_1 = -l_1}^{l_1} \\sum_{l_2=0}^{\\infty} \\sum_{m_2=-l_2}^{l_2} u_{l_1m_1,l_2m_2}(\\rho_1,\\rho_2) Y_{l_1m_1}(\\theta_1,\\varphi_1) Y_{l_2m_2}(\\theta_2,\\varphi_2),\n\\end{equation}\nwhere $Y_{l_1m_1}(\\theta_1, \\varphi_1)$ and $Y_{l_2m_2}(\\theta_2,\n\\varphi_2)$ are spherical harmonics, the eigenfunctions of the angular\npart of a 3D Laplacian in spherical coordinates.
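The rapid convergence of such angular expansions for smooth functions is what makes a truncation of the sum viable. A minimal numerical illustration in the axially symmetric ($m=0$) case, where the spherical harmonics reduce to Legendre polynomials (the smooth test function below is our own choice):

```python
import numpy as np
from numpy.polynomial import legendre

# Smooth, axially symmetric test function of c = cos(theta) (our own choice).
f = lambda c: np.exp(c) / (2.0 - c)

# Project f onto Legendre polynomials P_0..P_L with Gauss-Legendre quadrature:
# a_l = (2l+1)/2 * int_{-1}^{1} f(c) P_l(c) dc.
nodes, wts = legendre.leggauss(64)
L = 16
coeffs = np.array([(2 * l + 1) / 2.0
                   * np.sum(wts * f(nodes) * legendre.legval(nodes, np.eye(L + 1)[l]))
                   for l in range(L + 1)])

# A modest number of partial waves already reproduces the smooth angular
# dependence to many digits of accuracy.
c_test = np.linspace(-1, 1, 201)
approx = legendre.legval(c_test, coeffs)
assert np.max(np.abs(approx - f(c_test))) < 1e-6
```

The coefficients of an analytic angular profile decay geometrically, which is the behaviour that justifies truncating \eqref{eq:partialwaves} after a limited number of terms.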
In practice the sum in equation~\\eqref{eq:partialwaves} is\ntruncated. The expansion is then a low-rank, truncated, tensor decomposition\nof a 6D tensor describing the solution.\n\nFor each $l_1$, $m_1$, $l_2$ and $m_2$ combination, the radial function\n$u_{l_1m_1,l_2m_2}(\\rho_1,\\rho_2)$ describes an outgoing wave that\ndepends on the distances $\\rho_1$ and $\\rho_2$ of the two electrons to\nthe nucleus.\n\nA coupled equation that simultaneously solves for all the\n$u_{l_1m_1l_2m_2}(\\rho_1,\\rho_2)$'s is found by inserting the\ntruncated sum in \\eqref{DrivenSchrodinger}, multiplying with $Y_{l_1m_1}^*(\\theta_1,\n\\varphi_1)$ and $Y_{l_2m_2}^*(\\theta_2,\\varphi_2)$ and integrating\nover all the angular coordinates,\n\\begin{equation}\\label{eq:coupledpartialwaves}\n \\begin{aligned}\\left(-\\frac{1}{2}\\frac{d^2}{d \\rho_1^2} + \\frac{l_1(l_1+1)}{2\\rho_1^2 }\n -\\frac{1}{2}\\frac{d^2}{d \\rho_2^2} + \\frac{l_2(l_2+1)}{2\\rho_2^2 } + V_{l_1m_1l_2m_2}\n(\\rho_1,\\rho_2) -E \\right) u_{l_1m_1l_2m_2}(\\rho_1,\\rho_2) \\\\ + \\sum_{l^\\prime_1m^\\prime_1l^\\prime_2m^\\prime_2} V_{l_1m_1l_2m_2, l^\\prime_1m^\\prime_1l^\\prime_2m^\\prime_2}(\\rho_1,\\rho_2) u_{l^\\prime_1m^\\prime_1l^\\prime_2m^\\prime_2}(\\rho_1,\\rho_2) = f_{l_1m_1,l_2m_2}(\\rho_1,\\rho_2) \\quad \\forall l_1 m_1 l_2 m_2, \\quad \\forall \\rho_1,\\rho_2 \\in [0,\\infty[,\n\\end{aligned}\n\\end{equation}\nwith boundary conditions $u(\\rho_1=0,\\rho_2)=0$ for all $\\rho_2 \\ge 0$ and $u(\\rho_1,\\rho_2=0)=0$ for all $\\rho_1 \\ge 0$.\n\nThe equation \\eqref{eq:coupledpartialwaves} is typically discretized\non a spectral-element quadrature grid \\cite{rescigno2000numerical}.\n\nTo reflect the physics, where electrons are emitted from the system,\noutgoing wave boundary conditions need to be applied at the outer\nboundaries. There are many ways to implement outgoing wave boundary\nconditions. 
Exterior complex scaling (ECS) \\cite{ECS}, for\nexample, is frequently used in the computational atomic and molecular\nphysics literature. In computational electromagnetic scattering, a\nperfectly matched layer (PML) \\cite{PML} is used, which can also be\ninterpreted as a complex scaled grid \\cite{chew19943d}.\n\n\\subsection{Calculation of the amplitudes}\nTo correctly predict the probabilities of the particles arriving at\nthe detector, we need the amplitudes of the solution far away from the\nmolecule. These are related to the asymptotic amplitudes of the\nwave functions.\n\nLet us go back to the Helmholtz formulation, as given in\n\\eqref{eq:helmholtz}. Suppose that we have solved the following\nHelmholtz equation with absorbing boundary conditions, in any\nrepresentation,\n\\begin{equation} \\label{Helmholtz}\n \\left(- \\Delta -k_0^2 \\left( 1+ \\chi(\\mathbf{x}) \\right) \\right) u_{\\text{sc}}(\\mathbf{x}) = f(\\mathbf{x}), \\quad \\forall \\mathbf{x} \\in [-L,L]^d,\n\\end{equation}\nwhere $f$ is only non-zero on the real part of the grid $[-L,L]^d\n\\subset \\mathbb{R}^d$. Similarly, $\\chi(\\mathbf{x})$ is only non-zero\non the box $[-L,L]^d$.\n\nThe calculation of the asymptotic amplitudes requires the solution\n$u_{\\text{sc}}(\\mathbf{x})$ for an $\\mathbf{x}$ outside of the box\n$[-L, L]^d$. To that end, we reorganize equation~\\eqref{Helmholtz},\nafter we have solved it, as follows\n\\begin{equation} \\label{eq:reorganized}\n \\left( - \\Delta -k_0^2 \\right) u_{\\text{sc}} = f + k_0^2 \\, \\chi \\, u_{\\text{sc}}.\n\\end{equation}\nThe right hand side of \\eqref{eq:reorganized} is now only non-zero on\n$[-L,L]^d$, since both $f$ and $\\chi$ are only non-zero\nthere. Furthermore, since we have solved \\eqref{Helmholtz} we also\nknow $u_\\text{sc}$ on $[-L,L]^d$. So the full right hand side of\n\\eqref{eq:reorganized} is known. 
The remaining left hand side of\n\\eqref{eq:reorganized} is now a Helmholtz equation with a constant\nwave number $k_0^2$. For this equation the Green's function is known\nanalytically:\n\\begin{equation}\n \\begin{aligned}\n u_{\\text{sc}}(\\mathbf{x}) &= \\int G(\\mathbf{x},\\mathbf{y}) \\left( f(\\mathbf{y}) + k_0^2 \\,\\chi(\\mathbf{y}) u_\\text{sc}(\\mathbf{y})\\right) d\\mathbf{y} \\\\\n &= \\int\\limits_{[-L,L]^d} G(\\mathbf{x},\\mathbf{y}) \\left( f(\\mathbf{y}) + k_0^2\\, \\chi(\\mathbf{y}) u_\\text{sc} (\\mathbf{y}) \\right) d\\mathbf{y} \\quad \\forall \\mathbf{x} \\in \\mathbb{R}^d,\n\\end{aligned}\n\\end{equation}\nwhere, because $f$ and $\\chi$ are limited to $[-L,L]^d$, we can truncate the\nintegral to the box $[-L,L]^d$.\n\nThis methodology was successfully applied to calculate challenging\nbreak-up problems, see for example \\cite{rescigno1999collisional}.\n\n\\subsection{Single ionization versus double ionization}\nLet us discuss the qualitative behaviour of the solution for single\nand double ionization. To illustrate the behaviour, we truncate the\npartial wave expansion, \\eqref{eq:partialwaves}, to the first\nterm. This is known as an $s$-wave expansion. 
The 6D wave function is\nthen approximated as\n\\begin{equation}\n u(\\mathbf{r}_1,\\mathbf{r}_2) \\approx u(\\rho_1,\\rho_2) Y_{00}(\\theta_1,\\varphi_1) Y_{00}(\\theta_2,\\varphi_2).\n\\end{equation}\nThe radial wave, $u(\\rho_1,\\rho_2)$, then satisfies a 2D Helmholtz-type equation\n\\begin{equation}\\label{eq:radialequation} \n \\left(- \\frac{1}{2} \\frac{d^2}{d\\rho_1^2} - \\frac{1}{2} \\frac{d^2}{d\\rho_2^2} + V_1(\\rho_1) + V_2(\\rho_2) + V_{12}(\\rho_1,\\rho_2) - E \\right) u(\\rho_1,\\rho_2) = f(\\rho_1,\\rho_2), \\quad \\forall \\rho_1,\\rho_2 \\in [0,\\infty[,\n\\end{equation}\nwhere $V_1(\\rho_1)$ and $V_2(\\rho_2)$ represent the one-particle\npotentials and $V_{12}(\\rho_1,\\rho_2)$ the two-particle repulsion.\nThis model is known as an $s$-wave or Temkin-Poet model\n\\cite{temkin1962nonadiabatic, poet1978exact}.\n\nBefore the photo-ionization, the atom is in a two-particle ground\nstate. In this $s$-wave model, it is the eigenstate of\n\\begin{equation}\n \\left(- \\frac{1}{2} \\frac{d^2}{d\\rho_1^2} - \\frac{1}{2} \\frac{d^2}{d\\rho_2^2} +\n V_1(\\rho_1) + V_2(\\rho_2) + V_{12}(\\rho_1,\\rho_2)\n \\right)\\phi_0(\\rho_1,\\rho_2) = \\lambda_0 \\phi_0(\\rho_1,\\rho_2)\n\\end{equation}\n with the lowest energy. Simultaneously,\nthere are one-particle states that are eigenstates of\n\\begin{equation}\\label{eq:eigenstate_H1}\n \\left(-\\frac{1}{2} \\frac{d^2}{d\\rho_1^2} + V_1(\\rho_1) \\right) \\phi_i(\\rho_1) = \\mu_i \\phi_i(\\rho_1)\n\\end{equation}\nand\n\\begin{equation}\n \\label{eq:eigenstate_H2}\n \\left(- \\frac{1}{2} \\frac{d^2}{d\\rho_2^2} + V_2(\\rho_2) \\right) \\varphi_i(\\rho_2) = \\nu_i \\varphi_i(\\rho_2).\n\\end{equation}\n\nWhen \\eqref{eq:radialequation} is solved with the energy $E = h\\nu + \\lambda_0<0$, there is only single ionization. Only one of the two\ncoordinates $\\rho_1$ or $\\rho_2$ can become large and the solution,\nas can be seen in Figure~\\ref{fig:singleionization}, is\nlocalized along both axes. 
The solution is a product of an outgoing\nwave in one coordinate and a bound state in the other\ncoordinate. For example, along the $\\rho_2$-axis, the solution is\ndescribed by $A_i(\\rho_2)\\phi_i(\\rho_1)$, where $A_i(\\rho_2)$ is a\none-dimensional outgoing wave, with an energy $E-\\mu_i$, and\n$\\phi_i(\\rho_1)$ is a bound state of \\eqref{eq:eigenstate_H1} in the\nfirst coordinate with energy $\\mu_i$. Similarly, there is a wave,\nalong the $\\rho_1$ axis, that is an outgoing wave of the form\n$B_l(\\rho_1) \\varphi_l(\\rho_2)$, with a scattering wave in the first coordinate, $\\rho_1$,\nand a bound state in the coordinate $\\rho_2$, a solution of\n\\eqref{eq:eigenstate_H2}.\n\nWhen \\eqref{eq:radialequation} is solved with energy $E = h \\nu +\n\\lambda_0 \\ge 0$ there is also double ionization and both coordinates\n$\\rho_1$ and $\\rho_2$ can become large. We see, in\nFigure~\\ref{fig:doubleionization}, a (spherical) wave in the middle of\nthe domain, where both coordinates can become large. To describe\nthis solution the full coordinate space is necessary. Note that\nthese solutions still show single ionization along the axes. Even for\n$E> 0$, one particle can take away all the energy and leave the other\nparticle as a bound state.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{singleionization2d-contourf-fullrank.tikz}\n\t\t\\caption{Single ionization}\n\t\t\\label{fig:singleionization}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{doubleionization2d-contourf-fullrank.tikz}\n\t\t\\caption{Double ionization}\n\t\t\\label{fig:doubleionization}\n\t\\end{subfigure} \\caption{Left: When the energy $E = h \\nu + \\lambda_0 <0$, there is\n only single ionization. The solution is then localized along the\n edges, where the solution is a combination of an outgoing wave in\n the $\\rho_1$ coordinate and a bound state in $\\rho_2$, or vice-versa. 
Right: For\n energy $E>0$, there is, in addition to single ionization with\n solution localized along the edges, a double ionization wave where\n both coordinates can become large.\n \\label{fig:single_vs_double}}\n\\end{figure}\n\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{.3\\textwidth}\n\t\t\\centering\n\t\t\\input{wave2d-contourf-rank=1.tikz}\n\t\t\\caption{Rank-1 approximation}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.3\\textwidth}\n\t\t\\centering\n\t\t\\input{wave2d-contourf-rank=2.tikz}\n\t\t\\caption{Rank-2 approximation}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.3\\textwidth}\n\t\t\\centering\n\t\t\\input{wave2d-contourf-rank=3.tikz}\n\t\t\\caption{Rank-3 approximation}\n\t\\end{subfigure}\n\t\\\\~\\\\\n\t\\begin{subfigure}{.3\\textwidth}\n\t\t\\centering\n\t\t\\input{wave2d-contourf-rank=4.tikz}\n\t\t\\caption{Rank-4 approximation}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.3\\textwidth}\n\t\t\\centering\n\t\t\\input{wave2d-contourf-rank=5.tikz}\n\t\t\\caption{Rank-5 approximation}\n\t\\end{subfigure}\n\t\\begin{subfigure}{.3\\textwidth}\n\t\t\\centering\n\t\t\\input{wave2d-contourf-fullrank.tikz}\n\t\t\\caption{Full rank wave}\n\t\\end{subfigure}\n\t\\caption{Contour plots of the double ionization wave function (bottom, right) and low-rank approximations for increasing rank.}\n\\end{figure}\n\n\n\\subsection{Coupled channel model for single ionization waves}\n\\label{sec:singleionization}\nIn this section, we write the single ionization solution as a low-rank\ndecomposition and derive the equations for the low-rank\ncomponents. When there is only single ionization, the total wave can\nbe written as\n\\begin{equation}\\label{expansion}\n u(\\rho_1,\\rho_2) = \\sum_{m=1}^{M} \\phi_m(\\rho_1) A_m(\\rho_2) + \\sum_{l=1}^L B_l(\\rho_1)\\varphi_l(\\rho_2),\n\\end{equation}\nwhere $\\phi_m(\\rho_1)$ and $\\varphi_l(\\rho_2)$ are the bound state eigenstates,\ndefined in \\eqref{eq:eigenstate_H1} and \\eqref{eq:eigenstate_H2}. 
The\nfirst term is localized along the $\\rho_2$-axis and the second term is\nlocalized along the $\\rho_1$-axis, with $\\mu_m < 0$ and $\\nu_l<0$.\n\nAs discussed in \\cite{cools2016fast}, this expansion is not unique. We\ncan add a multiple $\\gamma_m \\varphi_m(\\rho_2)$ to $A_m(\\rho_2)$ and\nsimultaneously subtract $\\gamma_m \\phi_m(\\rho_1)$ from $B_m(\\rho_1)$\nwithout changing the result. Indeed, for any choice of $\\gamma_m\n\\in \\mathbb{C}$ and $L=M$ it holds that\n\\begin{equation}\nu(\\rho_1,\\rho_2) = \\sum_{m=1}^{M} \\phi_m(\\rho_1) \\left(A_m(\\rho_2) + \\gamma_m \\varphi_m(\\rho_2) \\right) + \\sum_{l=1}^L \\left(B_l(\\rho_1)-\\gamma_l \\phi_l(\\rho_1)\\right)\\varphi_l(\\rho_2).\n\\end{equation}\nTo make the expansion unique, \\cite{cools2016fast} chooses to select $A_i\n\\perp \\varphi_j$ when $j \\ge i$ and $B_j\\perp \\phi_i$ when $i \\ge j$.\n\nIn this paper, we choose to make the functions in the set $\\{\\phi_{i\n \\in\\{1,\\ldots,M\\}}, B_{l \\in \\{1,\\ldots, L \\}}\\}$ orthogonal. We\nalso assume that $V_{12}(\\rho_1,\\rho_2) \\approx \\sum_{i=1}^M\\sum_{l=1}^L\n\\phi_i(\\rho_1) \\varphi_l(\\rho_2) \\int\n\\phi_i^*(\\rho_1^\\prime)\\varphi^*_l(\\rho_2^\\prime) V_{12}(\\rho_1^\\prime,\\rho_2^\\prime) d\\rho_1^\\prime\nd\\rho_2^\\prime.$\n\nGiven a right hand side $f(\\rho_1,\\rho_2)$, we can now\nderive the equations for $A_m$ and $B_l$. 
When we insert the low-rank\ndecomposition \\eqref{expansion} in the 2D equation\n\\eqref{eq:radialequation}, multiply with $\\phi^*_j$ and integrate over\n$\\rho_1$, we find\n\\begin{equation}\n (H_2 + \\mu_j -E) A_j(\\rho_2) + \\sum_{l=1}^M V_{jl}(\\rho_2) A_l(\\rho_2) = \\int_0^\\infty \\phi^*_j(\\rho_1) f(\\rho_1,\\rho_2) d\\rho_1, \\quad \\text{for} \\quad j=1, \\ldots, M \\quad \\text{and}\\quad \\forall \\rho_2 \\in [0,\\infty[,\n\\end{equation}\nwith\n\\begin{equation}\n V_{jl}(\\rho_2)= \\int_0^\\infty \\phi^*_j(\\rho_1) V_{12}(\\rho_1,\\rho_2) \\phi_l(\\rho_1) d\\rho_1 .\n\\end{equation}\nWe have used that $\\phi_j \\perp B_l$ to eliminate the second term in\nthe expansion \\eqref{expansion}.\n\nSimilarly, for $B_l$, we find\n\\begin{equation}\n \\begin{aligned}\n (H_1 + \\nu_l -E) B_l(\\rho_1) + \\sum_{k=1}^L W_{lk}(\\rho_1) B_k(\\rho_1) = \\int_0^\\infty\\!\\!\\!\\! \\varphi^*_l(\\rho_2) \\left(f(\\rho_1,\\rho_2)-\\sum_{i=1}^M \\phi_i(\\rho_1) \\int_0^\\infty \\phi^*_i(\\rho_1^\\prime) f(\\rho_1^\\prime,\\rho_2) d\\rho_1^\\prime \\right) d\\rho_2 \\quad\\\\\n \\text{for} \\quad l=1,\\ldots, L, \\quad \\forall \\rho_1 \\in [0,\\infty[,\n \\end{aligned}\n\\end{equation}\nwhere\n\\begin{equation}\n W_{lk}(\\rho_1):=\\left(\\int_0^\\infty \\!\\!\\!\\! \\varphi^*_l(\\rho_2) V_{12}(\\rho_1,\\rho_2) \\varphi_k(\\rho_2) d\\rho_2 \\right).\n\\end{equation}\nHere $H_1$ and $H_2$ denote the one-particle Hamiltonians of \\eqref{eq:eigenstate_H1} and \\eqref{eq:eigenstate_H2}.\n\n\\section{Low-rank matrix representation of a 2D wave function that includes both single and double ionization}\n\\label{sec:lowrank}\n\\subsection{Low rank of the double ionization solution}\nWe now discuss the main result of the paper. We will derive a\ncoupled channel equation that gives a low-rank approximation for the\ndouble ionization wave function, as shown in Figure~\\ref{fig:doubleionization}.\n \nIn section \\ref{sec:singleionization} we have shown\nthat the single ionization wave can be represented by a low-rank\ndecomposition. 
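This channel construction can be mimicked numerically. The numpy sketch below uses illustrative stand-in potentials (not the physical helium potentials): it diagonalizes a discretized 1D Hamiltonian to obtain the channel functions $\phi_j$ and then forms the channel-coupling potentials by quadrature over $\rho_1$.

```python
import numpy as np

# Galerkin reduction onto bound-state channels: diagonalize a 1D model
# Hamiltonian H1 = -1/2 d^2/drho1^2 + V1 on a grid, keep M states phi_j,
# and build channel potentials V_jl(rho2) as quadratures over rho1.
# V1 and V12 below are illustrative stand-ins, not the helium potentials.
n, h = 400, 0.05
rho = h * np.arange(1, n + 1)                       # radial grid on ]0, 20]
D2 = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n)
      + np.diag(np.ones(n - 1), 1)) / h**2          # Dirichlet Laplacian
H1 = -0.5 * D2 + np.diag(-1.0 / rho)                # Coulomb-like attraction
mu, phi = np.linalg.eigh(H1)                        # eigenvalues ascending
phi = phi / np.sqrt(h)                              # grid normalization

M = 2                                               # retained channels
V12 = 1.0 / (rho[:, None] + rho[None, :])           # smooth two-body stand-in
# V_jl(rho2): one 1D function per channel pair, via quadrature over rho1
V_ch = h * np.einsum('aj,ab,al->bjl', phi[:, :M], V12, phi[:, :M])

print(V_ch.shape)        # (400, 2, 2), symmetric in the channel indices
```

The resulting array stores, for every grid point of $\rho_2$, the small $M \times M$ coupling matrix that appears in the coupled-channel equations.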
In this section, we show that also the double\nionization wave can be written as a similar low-rank decomposition.\n\nWe first illustrate that the solution of a 2D driven Schr\\\"odinger equation\nthat contains both single and double ionization, i.e.\\ a solution of\n\\eqref{eq:radialequation} with $E>0$, can be represented by a similar low-rank\ndecomposition.\n\nIn Figure~\\ref{fig:wavefunction} we solve the Helmholtz equation with a\nspace-dependent wave number, $k(x,y)$, in the first quadrant where $x\n\\ge 0$ and $y \\ge 0$. The equation is\n\\begin{equation}\\label{driven}\n\\left( - \\Delta_2 - k^2(x,y) \\right) u_{sc}(x,y) = f(x,y),\n\\end{equation}\nwhere $\\Delta_2$ is the 2D Laplacian and the solution $u_{sc}$\nsatisfies homogeneous boundary conditions $u_{sc}(x,0)=0$ for all\n$x\\ge 0$ and $u_{sc}(0,y)=0$ for all $y \\ge 0$. On the other\nboundaries we have outgoing boundary conditions.\n\nThe right hand side $f(x,y)$ has a support that is limited to $[0,b]^2\n\\subset [0,L]^2 \\subset \\mathbb{R}_+^2$, i.e. $f(x,y)=0$ for all\n$x\\ge b$ or $y\\ge b$.\n\nThe wave number $k(x,y)$ can be split into a constant part, $k_0^2$, and a\nvariable part, $\\chi(x,y)$. 
The variable part is also only non-zero on\n $[0,b[^2$:\n\\begin{equation}\n k^2(x,y) = \\begin{cases}\n k_0^2 \\left(1 + \\chi(x,y) \\right) \\quad &\\text{if} \\quad x < b \\text{ and } y < b,\\\\\n k_0^2 \\quad &\\text{otherwise}.\n \\end{cases}\n\\end{equation}\nIn a similar way, equations for $\\mv{U}_2$ and $\\mv{U}_3$ are\nderived by multiplying \\eqref{eq:start3d} with the other factor\nmatrices in the appropriate directions:\n\\begin{equation}\\label{eq:eqnU2}\n\t\\left\\{\n\t\\mv{I} \\otimes \\conj{\\left(-\\mv{D}_{yy} - k^2 \\mv{I} \\right)} +\n\t\\left[ \n\t\\conj{-\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I} \\right)}\n\t\\right] \\otimes \\mv{I}\n\t\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_2}\\mv{G}_{(2)}}_{\\mv{X}_2}} \n\t= \\mvec{\\mv{F}_{(2)} \\left(\\mv{U}_3 \\otimes \\mv{U}_1\\right)},\n\\end{equation}\nand\n\\begin{equation}\\label{eq:eqnU3}\n\t\\left\\{\n\t\\mv{I} \\otimes \\conj{\\left(-\\mv{D}_{zz} - k^2 \\mv{I} \\right)} +\n\t\\left[- \n\t\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2 \\otimes \\mv{I} \\right)}\n\t\\right] \\otimes \\mv{I}\n\t\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_3}\\mv{G}_{(3)}}_{\\mv{X}_3}} \n\t= \\mvec{\\mv{F}_{(3)} \\left(\\mv{U}_2 \\otimes \\mv{U}_1\\right)}.\n\\end{equation}\nAlternating between solving for $\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$\nusing \\eqref{eq:eqnU1}, \\eqref{eq:eqnU2} or \\eqref{eq:eqnU3} results\nin an algorithm that approximates the low-rank solutions for three\ndimensional problems as given in \\eqref{eq:driven3d}. 
This algorithm\nis summarized in Algorithm~\\ref{alg:const3dv1}.\nAlso in the three dimensional case the orthogonality of the columns \nof $\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$ is maintained by \nadditional QR factorizations.\n\\begin{algorithm}[t]\n \\SetAlgoLined\n\t[$\\mt{G}, \\mv{U}_1, \\mv{U}_2, \\mv{U}_3$] = hosvd(initial guess)\\;\n\t\\While{not converged}{\n\t\t\\For{i = 1, 2, 3}{\n\t\t\tSolve for $\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{G}_{(i)} \\in \\mathbb{C}^{n \\times r^{d-1}}$ using \\eqref{eq:eqnU1}, \\eqref{eq:eqnU2} or \\eqref{eq:eqnU3}\\;\n\t\t\t$\\conj{\\mv{U}_{i}} \\mv{G}_{(i)} = \\qr{\\mv{X}_i(:,~ 1:r_i), 0}$;\n\t\t}\n\t}\n\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{G}_{(i)}, ~i\\right]$\\;\n\t$\\mt{M} = \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$\\;\n \\caption{Solve for the low-rank tensor decomposition of the solution $\\mt{M}$ of a 3D Helmholtz problem with constant wave number (version 1).}\n \\label{alg:const3dv1}\n\\end{algorithm}\nObserve that we solve for a large matrix $\\mv{X}_i \\in \\mathbb{C}^{n_i \\times\n r_1r_2r_3\/r_i}$. So, in general, the rank of this matrix could be\n$\\min\\left(n_i,~ r_1r_2r_3\/r_i\\right)$. But it is also known that\n$\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{G}_{(i)}$, which implies that\nthe rank of $\\mv{X}_i$ is at most $r_i$. Selecting the\nfirst $r_i$ columns of $\\mv{X}_i$ and computing its QR decomposition\nis sufficient to derive a new orthonormal basis as factor matrix\n$\\conj{\\mv{U}}_i$.\n\n\nFinally, observe that solving for $\\mv{X}_i$ using \\eqref{eq:eqnU1},\n\\eqref{eq:eqnU2} or \\eqref{eq:eqnU3} is not computationally\nefficient. In all iterations, we solve for a total of $d nr^{d-1}$\nunknowns, while there are only $r^d + dnr$ unknowns in the Tucker\ntensor factorization. Furthermore, solving equations \\eqref{eq:eqnU1},\n\\eqref{eq:eqnU2} and \\eqref{eq:eqnU3} is also expensive. 
Indeed,\nafter computing a symmetric reverse Cuthill-McKee permutation of the system\nmatrix, one observes a matrix with a bandwidth $\\Oh{r^{d-1}}$. For\nexample, when $d = 3, n_i = n = 168, r_i = r = 18$, one obtains the\nsparsity pattern on the diagonal of the matrix as shown in Figure\n\\ref{fig:spytop_symrcm_UGsystem_alg1}. So solving a system as given in\n\\eqref{eq:eqnU1}, \\eqref{eq:eqnU2} or \\eqref{eq:eqnU3} has a\ncomputational cost of $\\Oh{nr^{2(d-1)}}$.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{spytop_symrcm_UGsystem_alg1.png}\n\t\t\\caption{Sparsity pattern of the top of the symmetric reverse Cuthill-McKee permutation of the system matrix to solve for $\\mv{X}_i$ using \\eqref{eq:eqnU1}. Note: only the first 4.5 of the 168 blocks are shown.}\n\t\t\\label{fig:spytop_symrcm_UGsystem_alg1}\n \t\\end{subfigure}\n \\quad\n\t\\begin{subfigure}{0.4\\textwidth}\n\t\t\\includegraphics[width=\\textwidth]{spytop_symrcm_Gsystem_alg2.png}\n\t\t\\caption{Sparsity pattern of the symmetric reverse Cuthill-McKee permutation of the system matrix to solve for $\\mv{G}_{(1)}$ using \\eqref{eq:eqnGv2}.}\n\t\t\\label{fig:spytop_symrcm_Gsystem_alg2}\n\t\\end{subfigure}\n\t\\caption{Sparsity patterns of the symmetric reverse Cuthill-McKee permutation of certain system matrices ($d = 3, n = 168, r = 18$).}\n\\end{figure}\n\n\n\\subsubsection{Solving for the basis functions and the core tensor separately (version 2)}\\label{sec:version2}\nTo circumvent solving the large systems in \\eqref{eq:eqnU1},\n\\eqref{eq:eqnU2} and \\eqref{eq:eqnU3}, we can pre-compute the\nQR factorization of the unfolding of the core tensor, $\\mv{G}_{(i)}$,\nand project the equations onto the obtained $\\mv{Q}_i$. 
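This pre-computation is a mode-wise unfolding followed by a reduced QR factorization. The numpy sketch below, on a random core tensor of illustrative size, checks that the factorization $\mv{G}_{(1)}^\H = \mv{Q}_1 \mv{R}_1^\H$ indeed gives $\mv{G}_{(1)} = \mv{R}_1\mv{Q}_1^\H$ with orthonormal columns in $\mv{Q}_1$:

```python
import numpy as np

# Unfold the core tensor G along mode 1 and factor the conjugate transpose,
# G_(1)^H = Q_1 R_1^H, so that G_(1) = R_1 Q_1^H with Q_1 having
# orthonormal columns. Sizes are illustrative and G is random.
rng = np.random.default_rng(0)
r1, r2, r3 = 3, 4, 5
G = rng.standard_normal((r1, r2, r3)) + 1j * rng.standard_normal((r1, r2, r3))

# mode-1 unfolding: r1 x (r2*r3), with the second index running fastest
G1 = G.reshape(r1, r2 * r3, order='F')
Q1, R1H = np.linalg.qr(G1.conj().T)        # reduced QR of G_(1)^H
R1 = R1H.conj().T

assert np.allclose(G1, R1 @ Q1.conj().T)   # G_(1) = R_1 Q_1^H
assert np.allclose(Q1.conj().T @ Q1, np.eye(r1))
```

Projecting onto the $r_1$ orthonormal columns of $\mv{Q}_1$ is what shrinks the linear system for the factor matrix in version 2.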
Indeed, this\nwill further reduce the number of unknowns in these linear systems to exactly\nthe number of unknowns that are needed for the factor matrices\n$\\conj{\\mv{U}}_i$, for $i = 1,2,3$.\n\nLet us discuss the details. We start again from equation \\eqref{eq:firstUnfolding} and use the QR\nfactorization of $\\mv{G}_{(1)}^\\H$, $\\mv{Q}_1 \\mv{R}_1^\\H = \\qr{\\mv{G}^\\H_{(1)}}$. This yields\n\\begin{equation*}\n\t\\conj{\\left(-\\mv{D}_{xx} - k^2 \\mv{I} \\right) \\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(I \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)^\\H\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes I\\right)^\\H\n\t= \\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right).\n\\end{equation*}\nPost-multiplication of this equation by $\\mv{Q}_1$ yields\n\\begin{equation*}\n\t\\conj{\\left(-\\mv{D}_{xx} - k^2 \\mv{I} \\right) \\mv{U}_1} \\mv{R}_1\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(I \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)^\\H \\mv{Q}_1\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes I\\right)^\\H \\mv{Q}_1\n\t= \\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right)\\mv{Q}_1.\n\\end{equation*}\nTo solve this equation for $\\mv{U}_1$, it is written in vectorized form\nas\n\\begin{equation}\\label{eq:eqnU1v2}\n\\left\\{\n\\mv{I} \\otimes \\conj{\\left(-\\mv{D}_{xx} - k^2 \\mv{I} \\right)} +\n\\mv{Q}_1^{\\mkern-1.5mu\\mathsf{T}}\\left[ -\\conj{\\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I} \\right)}\n\\right]\\conj{\\mv{Q}_1} \\otimes \\mv{I}\n\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_1}\\mv{R}_1}_{\\mv{X}_1}} = \\mvec{\\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right) \\mv{Q}_1}.\n\\end{equation}\nIn a similar way, the update equations for $\\mv{U}_2$ and $\\mv{U}_3$ 
are\nderived by multiplying \\eqref{eq:start3d} with the other factor\nmatrices in the appropriate dimensions and using the QR factorizations\nof $\\mv{G}_{(i)}^\\H$:\n\\begin{equation}\\label{eq:eqnU2v2}\n\\left\\{\n\\mv{I} \\otimes \\conj{\\left(-\\mv{D}_{yy} - k^2 \\mv{I} \\right)} +\n\\mv{Q}_2^{\\mkern-1.5mu\\mathsf{T}}\\left[- \n\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I} \\right)}\n\\right]\\conj{\\mv{Q}_2} \\otimes \\mv{I}\n\\right\\} \\mvec{\\underbrace{{\\conj{\\mv{U}_2}\\mv{R}_2}}_{\\mv{X}_2}} \n= \\mvec{\\mv{F}_{(2)} \\left(\\mv{U}_3 \\otimes \\mv{U}_1\\right) \\mv{Q}_2},\n\\end{equation}\nand\n\\begin{equation}\\label{eq:eqnU3v2}\n\\left\\{\n\\mv{I} \\otimes \\conj{\\left(-\\mv{D}_{zz} - k^2 \\mv{I} \\right)} +\n\\mv{Q}_3^{\\mkern-1.5mu\\mathsf{T}}\\left[ -\n\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2 \\otimes \\mv{I} \\right)}\n\\right]\\conj{\\mv{Q}_3} \\otimes \\mv{I}\n\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_3}\\mv{R}_3}_{\\mv{X}_3}} \n= \\mvec{\\mv{F}_{(3)} \\left(\\mv{U}_2 \\otimes \\mv{U}_1\\right) \\mv{Q}_3}.\n\\end{equation}\nAll these equations are cheap to solve. Indeed,\n$\\mvec{\\conj{\\mv{U}_i}\\mv{R}_i}$ has length $n_i r_i$. Computing a\nsymmetric reverse Cuthill-McKee permutation of these system matrices, one\nobserves a matrix with a bandwidth $\\Oh{r}$, so solving these\nequations has a computational cost $\\Oh{nr^2}$.\n\nOf course, this only updates the factor matrices as basis vectors in each\ndirection. As a single final step, we still have to compute the core\ntensor $\\mt{G}$. This will be the most computationally expensive\npart.\n\nThe core tensor $\\mt{G}$ can be obtained by multiplying \\eqref{eq:start3d}\nwith all the $d$ factor matrices in the matching directions.\nUnfolding this equation in a certain direction (e.g.\\ 
the first unfolding)\nleads again to a matrix equation. In vectorized form, it is given by\n\\begin{equation}\\label{eq:eqnGv2}\n\t\\left\\{\n\t\\mv{I} \\otimes \\conj{\\mv{U}_1^\\H\\left(-\\mv{D}_{xx} - k^2 \\mv{I} \\right)\\mv{U}_1} +\n\t\\left[ \n\t-\\conj{\\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I} \\right)}\n\t\\right] \\otimes \\mv{I}\n\t\\right\\} \\mvec{\\mv{G}_{(1)}} = \\mvec{\\mv{U}_1^{\\mkern-1.5mu\\mathsf{T}} \\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right)}.\n\\end{equation}\nIndeed, considering again an example where $d = 3, n_i = n = 168, r_i\n= r = 18$, one obtains a matrix with a sparsity pattern that is shown\nin Figure~\\ref{fig:spytop_symrcm_Gsystem_alg2}. Hence, this matrix no longer has\na limited bandwidth: it couples all functions to all other\nfunctions. Although this equation has to be solved only once in the \nalgorithm, when the rank increases, it will rapidly dominate the computational \ncost of the algorithm.\n\n\n\\begin{algorithm}[t]\n\t[$\\mt{G}, \\mv{U}_1, \\mv{U}_2, \\mv{U}_3$] = hosvd(initial guess)\\;\n\t\\While{not converged}{\n\t\t\\For{i = 1, 2, 3}{\n\t\t\t$\\mv{Q}_i \\widetilde{\\mv{R}} = \\qr{\\mv{G}_{(i)}^\\H, 0}$\\;\n\t\t\tSolve for $\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{R}_i \\in \\mathbb{C}^{n_i \\times r_i}$ using \\eqref{eq:eqnU1v2}, \\eqref{eq:eqnU2v2} or \\eqref{eq:eqnU3v2}\\;\n\t\t\t$\\conj{\\mv{U}_{i}} \\mv{R}_i = \\qr{\\mv{X}_i, 0}$\\;\n\t\t\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{R}_i\\mv{Q}_i^\\H, ~i\\right]$;\n\t\t}\n\t}\n\tSolve for $\\mv{G}_{(1)} \\in \\mathbb{C}^{r_1 \\times r_2r_3}$ using \\eqref{eq:eqnGv2}\\;\n\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{G}_{(1)}, ~1\\right]$\\;\n\t$\\mt{M} = \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$\\;\n \\caption{Solve for the low-rank tensor decomposition of the solution $\\mt{M}$ of a 3D Helmholtz problem with constant wave number 
(version 2).}\n \\label{alg:const3dv2}\n\\end{algorithm}\n\n\n\n\\subsubsection{Efficient combination of version 1 and version 2 into new algorithm (version 3)}\\label{sec:version3}\nIn the first version of the algorithm (see section~\\ref{sec:version1}),\nan update for $\\mv{G}_{(i)}$ is computed for each direction in each\niteration. This makes the algorithm too expensive.\nThen we changed the algorithm such that the costs for the updates in each\ndirection are reduced (see section~\\ref{sec:version2}). But, in that version,\nalmost all information for a full update of the core tensor $\\mt{G}$ is\nlost. Therefore a final, but potentially too expensive, equation needs to be solved.\n\nObserve that the expensive computation for the full core tensor, in \nversion 2, can now be replaced by a single solve per iteration as done in\nversion 1. This leads to a third version of the algorithm.\nIt avoids repeatedly solving the large systems in every direction (as in\nversion 1) and it avoids the too expensive final solve for the core tensor\n(as in version 2). 
The computational\ncomplexity of this algorithm is equal to the complexity of version 1,\nso $\\Oh{nr^{2(d-1)}}$.\nFurthermore, the systems that need to be solved, each iteration, have \nexactly the same number of unknowns as the representation of the \ntensor in low-rank Tucker tensor format.\nIn summary, this final version of the algorithm is given by Algorithm~\\ref{alg:const3dv3}.\n\n\\begin{algorithm}[t]\n\t[$\\mt{G}, \\mv{U}_1, \\mv{U}_2, \\mv{U}_3$] = hosvd(initial guess)\\;\n\t\\While{not converged}{\n\t\t\\For{i = 1, 2}{\n\t\t\t$\\mv{Q}_i \\widetilde{\\mv{R}} = \\qr{\\mv{G}_{(i)}^\\H, 0}$\\;\n\t\t\tSolve for $\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{R}_i \\in \\mathbb{C}^{n_i \\times r_i}$ using \\eqref{eq:eqnU1v2} or \\eqref{eq:eqnU2v2}\\;\n\t\t\t$\\conj{\\mv{U}_{i}} \\mv{R}_i = \\qr{\\mv{X}_i, 0}$\\;\n\t\t\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{R}_i\\mv{Q}_i^\\H, ~i\\right]$;\n\t\t}\n\t\tSolve for $\\mv{X}_3 = \\conj{\\mv{U}_3}\\mv{G}_{(3)} \\in \\mathbb{C}^{n_3 \\times r^{d-1}}$ using \\eqref{eq:eqnU3}\\;\n\t\t$\\conj{\\mv{U}_{3}} \\mv{G}_{(3)} = \\qr{\\mv{X}_3, 0}$\\;\n\t\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{G}_{(3)},~3\\right]$\\;\n\t}\n\t$\\mt{M} = \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$\\;\n \\caption{Solve for the low-rank tensor decomposition of the solution $\\mt{M}$ of a 3D Helmholtz problem with constant wave number (version 3).}\n \\label{alg:const3dv3}\n\\end{algorithm}\n\n\\subsubsection{Numerical comparison of three versions for 3D Helmholtz equation}\nConsider a three dimensional domain $\\Omega = [-10, ~10]^3$ that is\ndiscretized with $M=100$ equidistant mesh points per direction in the\ninterior of the domain. The domain is extended with exterior complex\nscaling to implement the absorbing boundary conditions. Hence, in total\nthere are $n = n_x = n_y = n_z = 168$ unknowns per direction. 
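The grid extension by exterior complex scaling can be sketched as follows; the rotation angle and the exterior length in this numpy snippet are illustrative choices, not the exact values used in the experiments:

```python
import numpy as np

# Exterior complex scaling (ECS): keep the grid real on the physical
# interval and rotate it into the complex plane beyond it,
# x -> L + s * exp(i*theta), so that outgoing waves exp(i*k0*x) decay
# in the exterior. Angle and exterior length are illustrative.
L, h, theta = 10.0, 0.2, np.pi / 6
x_int = np.arange(-L, L + h / 2, h)       # physical grid on [-10, 10]
s = np.arange(h, 6.0, h)                  # distance into the exterior
x_ext = L + s * np.exp(1j * theta)        # complex-rotated tail
x = np.concatenate([x_int, x_ext])

k0 = 2.0
wave = np.exp(1j * k0 * x)                # outgoing wave
# |wave| = 1 on the real grid; it decays like exp(-k0*s*sin(theta)) outside
print(abs(wave[len(x_int) - 1]), abs(wave[-1]))
```

On the rotated part of the grid the outgoing wave becomes exponentially small, which is what makes the truncated exterior act as an absorbing boundary.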
As\nconstant wave number we use $\\omega = 2$ and a right hand side $f(x,y,z)\n= -e^{-x^2-y^2-z^2}$.\nBy symmetry, we expect a low-rank factorization with an equal rank \nin each direction, so we fix $r = r_x = r_y = r_z$.\n\nThe convergence of the residuals of the three versions is shown in\nthe left column of Figure~\\ref{fig:residual+runtime-const3d}.\nIt is clear that all three versions converge to a good low-rank \napproximation of the full solution. By increasing the maximal\nattainable rank $r$, a better low-rank solution is obtained, as \nexpected. Remarkably, for $r=30$, in version 2, the final residual\nis larger than the residuals obtained by both other algorithms, while\nthe compute-time for version 2 is also larger than that of the other algorithms.\n\n\\begin{figure}\n\t\\centering\n\t\\input{residual-v1-M=100.tikz} ~ \\input{runtime-v1-3d-M=100.tikz} \\par\n\t\\input{residual-v2-M=100.tikz} ~ \\input{runtime-v2-3d-M=100.tikz} \\par\n\t\\input{residual-v3-M=100.tikz} ~ \\input{runtime-v3-3d-M=100.tikz}\n \\caption{Left: Plot of residual per iteration for constant wave number in 3D Helmholtz problem.\n \tRight: Plot of runtime of most time consuming parts for constant wave number in 3D Helmholtz problem. Both problems have $M=100$.\n \tTop: Algorithm~\\ref{alg:const3dv1} (version 1),\n \tmiddle: Algorithm~\\ref{alg:const3dv2} (version 2),\n \tbottom: Algorithm~\\ref{alg:const3dv3} (version 3).}\n\t\\label{fig:residual+runtime-const3d}\n\\end{figure}\n\nThe compute-time for the most time-consuming parts in the different\nversions of the algorithm can be measured as a function of the maximal\nattainable rank $r$. For the three versions of the algorithm the\nruntimes are shown in the right column of\nFigure~\\ref{fig:residual+runtime-const3d}. For all parts the expected\nand measured dependence on the rank $r$ are given. 
For all versions of\nthe algorithm 10 iterations are applied.\n\n\\begin{figure}\n\t\\centering\n \t\\input{runtimes-v123-M=100-iter=4.tikz}\n \\caption{Plot of runtime for 4 iterations with constant wave number in 3D using the three different algorithms ($M=100$)\n \t\\label{fig:runtime-const3d}}\n\\end{figure}\n\n\nComparing the total runtime of the three different versions, one obtains\nthe results shown in Figure~\\ref{fig:runtime-const3d}. Indeed, as\nexpected, version 3 is approximately 3 times faster than version 1 and\nits runtime scales similarly with the rank $r$. Further, for small rank $r$,\nversion 2 is faster than both other versions. But when the rank\nincreases, the expensive solve for the core tensor $\\mt{G}$ starts to\ndominate and the total runtime increases dramatically.\n\n\n\\subsection{Projection operator for constant wave number}\nAlso in three dimensions we can write the linear systems\n\\eqref{eq:eqnU1} for $\\mv{U}_1$, \\eqref{eq:eqnU2} for \n$\\mv{U}_2$ and \\eqref{eq:eqnU3} for $\\mv{U}_3$ as\nprojection operators applied to the residual of the tensor equation\n\\eqref{eq:start3d}.\n\nConsider a tensor $\\mt{M}$ in Tucker format and factorized as $\\mt{M}\n= \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$, with\nunknowns $\\mt{G}, \\mv{U}_1, \\mv{U}_2$ and $\\mv{U}_3$. Discretization\nof \\eqref{eq:driven3d} leads to a linear operator $\\mathcal{L}$\napplied on tensors. Its matrix representation $\\mv{L}$ has a sum\nof Kronecker products structure, as given in\n\\eqref{eq:operatorLasSumKroneckerProducts}.\n\nSolving for an unknown factor $\\mv{U}_1$, $\\mv{U}_2$ or $\\mv{U}_3$\n(and the core-tensor $\\mt{G}$) using \\eqref{eq:eqnU1},\n\\eqref{eq:eqnU2} or \\eqref{eq:eqnU3} can be interpreted as a\nprojection operator applied on the residual. 
For example,\n\\eqref{eq:eqnU1} can be interpreted as\n\\begin{equation}\n\t\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}} = \\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\mvec{\\mv{F}_{(1)}}.\n\\end{equation}\nThe residual, in tensor format, is given by\n\\begin{equation}\n\t\\begin{split}\n\t\t\\mt{R} &= \\mt{F} - \\mathcal{L}\\mt{M}, \\\\\n\t\t&= \\mt{F} - \\mt{G} \\times_1 \\left(-\\mv{D}_{xx} - k^2 \\mv{I}\\right) \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3 + \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{D}_{yy}\\mv{U}_2 \\times_3 \\mv{U}_3 + \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{D}_{zz}\\mv{U}_3.\n\t\\end{split}\n\\end{equation}\nWriting this tensor equation in the first unfolding leads to the following matrix equation\n\\begin{equation}\n\t\t\\mv{R}_{(1)} = \\mv{F}_{(1)} - \\conj{\\left(-\\mv{D}_{xx} - k^2 \\mv{I}\\right)\\mv{U}_1}\\mv{G}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H + \\conj{\\mv{U}_1}\\mv{G}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{D}_{yy}\\mv{U}_2 \\right)^\\H + \\conj{\\mv{U}_1}\\mv{G}_{(1)} \\left(\\mv{D}_{zz}\\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H,\n\\end{equation}\nwhich can be vectorized as\n\\begin{equation}\n\t\\begin{split}\n\t\t\\mvec{\\mv{R}_{(1)}} &= \\mvec{\\mv{F}_{(1)}} - \\left(\\conj{\\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\left(-\\mv{D}_{xx}-k^2\\mv{I}\\right)\\right)} - \\conj{\\left(\\mv{U}_3 \\otimes \\mv{D}_{yy}\\mv{U}_2 \\otimes \\mv{I}\\right)} - \\conj{\\left(\\mv{D}_{zz}\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right)\\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}} \\\\\n\t\t&= \\mvec{\\mv{F}_{(1)}} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}} \\\\\n\t\t&= \\mvec{\\mv{F}_{(1)}} - 
\\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\mvec{\\mv{F}_{(1)}} \\\\\n\t\t&= P_{23}\\mvec{\\mv{F}_{(1)}},\n\t\\end{split}\n\\end{equation}\nwhere operator $P_{23}$ is given by\n\\begin{equation}\\label{eq:projector3dconst}\n\t\\begin{split}\n\t\tP_{23} &= \\mv{I} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\\\\n\t\t&= \\mv{I} - \\mv{X}.\n\t\\end{split}\n\\end{equation}\nThis operator $P_{23}$ is indeed a projection operator. 
Observe that the terms between the two inverses cancel against one of the inverse factors:\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\mv{X}^2 &= \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right)\\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\\\\n\t\t&= \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\\\\n\t\t&= \\mv{X}.\n\t\\end{split}\n\\end{equation*}\nThis operator is a natural extension to higher dimensions of the two\ndimensional operators as derived in section \\ref{sec:projection2d}.\n\nA similar derivation results in projection operators $P_{13}$ and $P_{12}$ for the updates in $\\mv{U}_2$ and $\\mv{U}_3$, respectively.\n\\begin{equation}\\label{eq:projectors3dconst}\n\t\\begin{split}\n\t\tP_{23} &= \\mv{I} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} 
\\otimes \\mv{I} \\right), \\\\\n\t\tP_{13} &= \\mv{I} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{I} \\otimes \\mv{U}_1\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{I} \\otimes \\mv{U}_1^\\H \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{I} \\otimes \\mv{U}_1\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\otimes \\mv{U}_1^{\\mkern-1.5mu\\mathsf{T}} \\right), \\\\\n\t\tP_{12} &= \\mv{I} - \\conj{\\mv{L} \\left(\\mv{I} \\otimes \\mv{U}_2 \\otimes \\mv{U}_1\\right)} \\left[\\conj{\\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\otimes \\mv{U}_1^\\H \\right) \\mv{L} \\left(\\mv{I} \\otimes \\mv{U}_2 \\otimes \\mv{U}_1\\right)}\\right]^{-1}\\left( \\mv{I} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_1^{\\mkern-1.5mu\\mathsf{T}} \\right).\n\t\\end{split}\n\\end{equation}\nThe successive application of these projection operators on the\nresidual results in an updated residual that lies in the intersection\nof all subspaces.\n\n\n\\subsection{Helmholtz equation with space-dependent wave number}\nThe presented algorithms with constant wave number can be\nextended to space-dependent wave numbers. So, let us\nconsider a 3D Helmholtz problem where $\\mt{K} = k^2(x,y,z)$ represents the space-dependent\nwave number on the discretized mesh.\n\nFurther, we assume that a Canonical Polyadic decomposition of the\nspace-dependent wave number tensor \n$\\mt{K}$ is known, i.e.,\n\\begin{equation}\\label{eq:CP4K}\n\t\\mt{K} = \\sum_{i=1}^s \\sigma_i \\left(\\mv{v}_i^{(1)} \\circ \\mv{v}_i^{(2)} \\circ \\cdots \\circ \\mv{v}_i^{(d)}\\right),\n\\end{equation}\nwhere $s \\in \\mathbb{N}_+$ is the CP-rank of $\\mt{K}$ and $\\mv{v}_i^{(j)} \\in \\mathbb{C}^{n_j}$ for $i = 1,2,\\ldots, s; j = 1,2,\\ldots, d$ are vectors. 
Further, $\\sigma_i$ is a tensor generalization of a singular value and $\\circ$ denotes the vector outer product.\n\nThe application of the space-dependent Helmholtz operator $\\mathcal{L}$ on a tensor $\\mt{M}$ is given by\n\\begin{equation}\\label{eq:start3dvar}\n\t\\begin{split}\n\t\t\\mathcal{L} \\mt{M} &= -\\mt{G} \\times_1 \\mv{D}_{xx} \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3\\\\\n\t\t&- \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{D}_{yy} \\mv{U}_2 \\times_3 \\mv{U}_3\\\\\n\t\t&- \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{D}_{zz} \\mv{U}_3\\\\\n\t\t&- \\mt{K} \\circ \\left(\\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3 \\right)\\\\\n\t\t&= \\mt{F},\n\t\\end{split}\n\\end{equation}\nwhere $\\mv{U}_i^\\H \\mv{U}_i = \\mv{I}$ for $i = 1,2,3$ and $\\mt{F}$\nis a tensor representation of the right hand side function $f$ \ndiscretized on the used grid.\nNote that here $\\circ$ denotes the Hadamard product for tensors.\n\nIn a similar way as in the three dimensional constant wave number\ncase, we can derive equations to iteratively solve for the factors\n$\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$. We start from \\eqref{eq:start3dvar} and multiply with\n$\\mv{U}_2^\\H$ and $\\mv{U}_3^\\H$ in the second and third direction,\nrespectively. 
Using that the columns of $\\mv{U}_i$ are orthonormal,\nthe following expression is derived:\n\\begin{equation*}\n\t\\mathcal{L} \\mt{M} \\times_2 \\mv{U}_2^\\H \\times_3 \\mv{U}_3^\\H = -\\mt{G} \\times_1 \\mv{D}_{xx}\\mv{U}_1\n\t- \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2^\\H\\mv{D}_{yy} \\mv{U}_2\n\t- \\mt{G} \\times_1 \\mv{U}_1 \\times_3 \\mv{U}_3^\\H\\mv{D}_{zz} \\mv{U}_3\n\t- \\left[\\mt{K} \\circ \\left(\\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3 \\right) \\right] \\times_2 \\mv{U}_2^\\H \\times_3 \\mv{U}_3^\\H.\n\\end{equation*}\nWritten in the first unfolding, the multiplication with $\\mv{U}_2^\\H$ and $\\mv{U}_3^\\H$ in, respectively, the second and third direction is equivalent to post-multiplication with the matrix\n$\\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\right)^\\H \\left(\\mv{U}_3^\\H \\otimes \\mv{I} \\right)^\\H = \\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\right)^\\H = \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)$.\n\nMost of the terms are the same as in the constant wave\nnumber case, see also \\eqref{eq:start3d}. 
Let us focus on the last term,\nwhich contains the Hadamard product with the space-dependent wave number, i.e.:\n\\begin{equation}\\label{eq:hadamardProduct}\n\t\\begin{split}\n\t\t\\mt{K} &\\circ \\left(\\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3 \\right).\n\t\\end{split}\n\\end{equation}\nFor Hadamard products of tensors, $\\mt{Z} = \\mt{X} \\circ \\mt{Y}$, the following property holds for the $k$-th unfolding: $\\mv{Z}_{(k)} = \\mv{X}_{(k)} \\circ \\mv{Y}_{(k)}$.\nThus, written in the first unfolding, \\eqref{eq:hadamardProduct} is given by\n\\begin{equation}\\label{eq:hadamardProduct1stUnfolding}\n\t\\begin{split}\n\t\t\\mv{K}_{(1)} \\circ \\mv{M}_{(1)} &= \\mv{K}_{(1)} \\circ \\left( \\conj{\\mv{U}_1} \\mv{G}_{(1)} \\left( \\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\right).\n\t\\end{split}\n\\end{equation}\n\nThe Hadamard product term \\eqref{eq:hadamardProduct1stUnfolding} is written in the first unfolding; multiplication with $\\mv{U}_2^\\H$ and $\\mv{U}_3^\\H$ in, respectively, the second and third dimension then results in\n\\begin{equation}\\label{eq:lastTerm}\n\t\\left[\\underbrace{\\mv{K}_{(1)}}_{\\mv{K}} \\circ \\underbrace{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}_{\\mv{U}} \\underbrace{\\left( \\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H}_{\\mv{V}^\\H} \\right] \\underbrace{\\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)}_{\\mv{V}}.\n\\end{equation}\n\nThe derivation of the other terms of \\eqref{eq:start3dvar} is identical to the constant wave number case.\nThe equation in the first unfolding leads to a matrix equation:\n\\begin{equation}\\label{eq:firstUnfoldingVar}\n\t-\\conj{\\mv{D}_{xx}\\mv{U}_1} \\mv{G}_{(1)}\n\t- \\conj{\\mv{U}_1} \\mv{G}_{(1)} \\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)^\\H\n\t- \\conj{\\mv{U}_1} \\mv{G}_{(1)} \\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I}\\right)^\\H\n\t- \\left[\\mv{K}_{(1)} \\circ \\conj{\\mv{U}_1}\\mv{G}_{(1)} \\left( \\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\right] 
\\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)\n\t= \\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right).\n\\end{equation}\n\nVectorization of the last term, i.e.\\ \\eqref{eq:lastTerm}, results again in an expression for the space-dependent wave number of the form $\\left(\\mv{K} \\circ \\mv{U}\\mv{V}^\\H\\right)\\mv{V}$, similar to the two-dimensional case given in \\eqref{eq:2dvarwavenr}.\nUsing again \\eqref{eq:vecThirdTerm}, the vectorization of this expression is given by\n\\begin{equation}\\label{eq:tmp1}\n\t\\left(\\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\diag{\\mvec{\\mv{K}_{(1)}}} \\conj{\\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I} \\right)} \\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}.\n\\end{equation}\n\nBecause $\\mt{K}$ is known in a Canonical Polyadic tensor (CP tensor) decomposition\\footnote{Otherwise a Canonical Polyadic tensor decomposition can be computed using, for example, a CP-ALS algorithm \\cite{kolda2009tensor}.}, as given in \\eqref{eq:CP4K}, we have\n\\begin{equation}\n\t\\diag{\\mvec{\\mv{K}_{(1)}}} = \\sum_{i=1}^{s} \\sigma_i \\diag{\\mv{v}_{i}^{(3)}} \\otimes \\diag{\\mv{v}_{i}^{(2)}} \\otimes \\diag{\\mv{v}_{i}^{(1)}}.\n\\end{equation}\n\nSo, using the CP tensor representation of the space-dependent wave number, the vectorization in \\eqref{eq:tmp1} simplifies even further:\n\\begin{equation*}\n\t\\sum_{i=1}^{s} \\sigma_i \\left(\\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\left[ \\diag{\\mv{v}_{i}^{(3)}} \\otimes \\diag{\\mv{v}_{i}^{(2)}} \\otimes \\diag{\\mv{v}_{i}^{(1)}} \\right] \\conj{\\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}\n\\end{equation*}\nwhich reduces to\n\\begin{equation*}\n\t\\underbrace{\\sum_{i=1}^{s} \\sigma_i\n\t\t\\left(\\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(3)}} \\conj{\\mv{U}_3} \\right) 
\\otimes\n\t\t\\left(\\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(2)}} \\conj{\\mv{U}_2} \\right) \\otimes\n\t\t\\left(\\diag{\\mv{v}_{i}^{(1)}}\\right)}_{K_1} \\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}.\n\\end{equation*}\nIn this way the $K_1$ operator is defined and can be applied to $\\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}$. Observe that this expansion is only advantageous if the space-dependent wave number has low rank, which is typically the case for our applications.\n\nIn a similar way, the $K_2$ and $K_3$ operators can be derived:\n\\begin{equation}\\label{eq:Koperators}\n\t\\begin{split}\n\t\tK_1 &= \\sum_{i=1}^{s}\\sigma_i \\left(\\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(3)}} \\conj{\\mv{U}_3} \\right) \\otimes \\left(\\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(2)}} \\conj{\\mv{U}_2} \\right) \\otimes \\left(\\diag{\\mv{v}_{i}^{(1)}}\\right), \\\\ \n\t\tK_2 &= \\sum_{i=1}^{s}\\sigma_i \\left(\\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(3)}} \\conj{\\mv{U}_3} \\right) \\otimes \\left(\\mv{U}_1^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(1)}} \\conj{\\mv{U}_1} \\right) \\otimes \\left(\\diag{\\mv{v}_{i}^{(2)}}\\right), \\\\\n\t\tK_3 &= \\sum_{i=1}^{s}\\sigma_i \\left(\\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(2)}} \\conj{\\mv{U}_2} \\right) \\otimes \\left(\\mv{U}_1^{\\mkern-1.5mu\\mathsf{T}} \\diag{\\mv{v}_{i}^{(1)}} \\conj{\\mv{U}_1} \\right) \\otimes \\left(\\diag{\\mv{v}_{i}^{(3)}}\\right).\n\t\\end{split}\n\\end{equation}\n\nSo, we find the following linear system to solve for $\\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}$:\n\\begin{equation}\\label{eq:eqnU1var}\n\t\\left\\{\n\t-\\mv{I} \\otimes \\conj{\\mv{D}_{xx}} +\n\t\\left[ -\n\t\\conj{\\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I}\\right)}\n\t\\right] \\otimes \\mv{I}\n\t- 
K_1\n\t\\right\\}\n\t\\mvec{\\underbrace{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}_{\\mv{X}_1}} = \\mvec{\\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right)}.\n\\end{equation}\nObserve that this is a square system with $n_1 r_2 r_3$ unknowns\n(even though the solution in matrix form $\\mv{X}_1$ has rank at most\n$r_1$). In a similar way, update equations for $\\mv{U}_2$ and\n$\\mv{U}_3$ are derived by multiplying \\eqref{eq:start3dvar} with the\nother factor matrices in the appropriate directions:\n\\begin{equation}\\label{eq:eqnU2var}\n\t\\left\\{\n\t-\\mv{I} \\otimes \\conj{\\mv{D}_{yy}} +\n\t\\left[ -\n\t\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I}\\right)}\n\t\\right] \\otimes \\mv{I}\n\t- K_2\n\t\\right\\}\n\t\\mvec{\\underbrace{\\conj{\\mv{U}_2}\\mv{G}_{(2)}}_{\\mv{X}_2}} = \\mvec{\\mv{F}_{(2)} \\left(\\mv{U}_3 \\otimes \\mv{U}_1\\right)},\n\\end{equation}\nand\n\\begin{equation}\\label{eq:eqnU3var}\n\t\\left\\{-\\mv{I} \\otimes \\conj{\\mv{D}_{zz}} +\n\t\\left[-\n\t\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2 \\otimes \\mv{I}\\right)}\n\t\\right] \\otimes \\mv{I}\n\t- K_3\n\t\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_3}\\mv{G}_{(3)}}_{\\mv{X}_3}} = \\mvec{\\mv{F}_{(3)} \\left(\\mv{U}_2 \\otimes \\mv{U}_1\\right)}.\n\\end{equation}\nAlternating between solving for $\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$ using \\eqref{eq:eqnU1var}, \\eqref{eq:eqnU2var} or \\eqref{eq:eqnU3var} results in an algorithm to approximate low-rank tensor solutions for three-dimensional problems as given in \\eqref{eq:driven3d}. Also in this case the orthogonality of the columns of $\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$ is maintained by additional QR factorizations.\nSo, we derive the algorithm as formulated in Algorithm \\ref{alg:var3dv1}. 
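As a sanity check on \\eqref{eq:Koperators} and on the Kronecker expansion of $\\diag{\\mvec{\\mv{K}_{(1)}}}$, the following sketch uses real-valued random CP factors (so all conjugations drop) and illustrative sizes; it confirms that the small operator $K_1$ assembled from projected blocks coincides with the projected full diagonal operator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, s = 5, 3, 2   # grid size, Tucker rank, CP-rank (illustrative only)

# Random stand-ins for the CP factors of the wave-number tensor K
sigma = rng.standard_normal(s)
V = [rng.standard_normal((s, n)) for _ in range(3)]
K = sum(sigma[i] * np.einsum('a,b,c->abc', V[0][i], V[1][i], V[2][i])
        for i in range(s))

# diag(vec K_(1)) equals the sum of Kronecker products of diagonal CP factors
D = sum(sigma[i] * np.kron(np.diag(V[2][i]),
                           np.kron(np.diag(V[1][i]), np.diag(V[0][i])))
        for i in range(s))
assert np.allclose(K.reshape(-1, order='F'), np.diag(D))

# K_1 built from small projected blocks matches the projected full operator
U2, U3 = (np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(2))
K1 = sum(sigma[i] * np.kron(U3.T @ np.diag(V[2][i]) @ U3,
                            np.kron(U2.T @ np.diag(V[1][i]) @ U2,
                                    np.diag(V[0][i])))
         for i in range(s))
P = np.kron(U3, np.kron(U2, np.eye(n)))   # U3 kron U2 kron I
assert np.allclose(K1, P.T @ D @ P)
```

The second assertion is the mixed-product property of the Kronecker product at work: projecting the diagonal operator factor by factor yields the small $r \\times r$ blocks, which is what makes the expansion cheap for low CP-rank $s$.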
The generalization for dimensions $d > 3$ is straightforward.\n\n\\begin{algorithm}[t]\n\t\\SetAlgoLined\n\t[$\\mt{G}, \\mv{U}_1, \\mv{U}_2, \\mv{U}_3$] = hosvd(initial guess)\\;\n\t[$\\mv{\\Sigma}, \\mv{V}_1, \\mv{V}_2, \\mv{V}_3$] = cp\\_als($\\mt{K}$)\\;\n\t\\While{not converged}{\n\t\t\\For{i = 1, 2, 3}{\n\t\t\tCompute $K_i$ using \\eqref{eq:Koperators}\\;\n\t\t\tSolve for $\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{G}_{(i)} \\in \\mathbb{C}^{n_i \\times r^{d-1}}$ using \\eqref{eq:eqnU1var}, \\eqref{eq:eqnU2var} or \\eqref{eq:eqnU3var}\\;\n\t\t\t$\\conj{\\mv{U}_{i}} \\mv{G}_{(i)} = \\qr{\\mv{X}_i(:,~ 1:r_i), 0}$;\n\t\t}\n\t}\n\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{G}_{(i)}, ~i\\right]$\\;\n\t$\\mt{M} = \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$\\;\n\t\\caption{Solve for the low-rank tensor decomposition of the solution $\\mt{M}$ of a 3D Helmholtz problem with space-dependent wave number (version 1).}\n\t\\label{alg:var3dv1}\n\\end{algorithm}\n\nSimilar to the discussion for the constant wave number algorithms,\nobserve that we solve again for a large matrix $\\mv{X}_i \\in \\mathbb{C}^{n_i \\times r_1r_2r_3\/r_i}$.\nSo, in general the rank of this matrix could be as large as\n$\\min\\left(n_i,~ r_1r_2r_3\/r_i\\right)$. However, since\n$\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{G}_{(i)}$, the rank of $\\mv{X}_i$ is\nat most $r_i$. Hence, selecting the\nfirst $r_i$ columns of $\\mv{X}_i$ and computing their QR decomposition\nis sufficient to derive a new orthonormal basis for the factor matrix\n$\\conj{\\mv{U}}_i$.\n\nAlgorithm \\ref{alg:var3dv1} is exactly the space-dependent wave number\nequivalent of Algorithm \\ref{alg:const3dv1}. 
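The rank argument above can be illustrated numerically: for generic data, the first $r_i$ columns of $\\mv{X}_i$ already span its full column space, so a QR factorization of those columns alone suffices. A minimal real-valued sketch (illustrative sizes; the genericity assumption means the leading columns are themselves of full rank):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r1, cols = 8, 3, 6          # X_1 is n x cols, but its rank is at most r1

U = np.linalg.qr(rng.standard_normal((n, r1)))[0]   # orthonormal factor
G1 = rng.standard_normal((r1, cols))                # first unfolding of the core
X = U @ G1                                          # X_1 = U_1 G_(1)

Q, _ = np.linalg.qr(X[:, :r1])   # QR of only the first r1 columns
# Projecting X onto span(Q) reproduces X: the remaining columns add nothing
assert np.allclose(Q @ (Q.T @ X), X)
assert np.linalg.matrix_rank(X) == r1
```

If the leading $r_i$ columns happened to be rank deficient (a measure-zero event for random data), a column-pivoted QR of the full matrix would be the robust alternative.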
The same ideas can be applied\nto derive space-dependent wave number alternatives of the algorithms\ncorresponding to versions 2 and 3.\nAgain, to circumvent solving large systems, we can pre-compute the \nQR factorization of $\\mv{G}_{(i)}$ and project these equations onto \nthe obtained $\\mv{Q}_i$. Indeed, this will reduce the number of \nunknowns in these linear systems to exactly the number of unknowns \nneeded for the factor matrices $\\conj{\\mv{U}}_1$ and\n$\\conj{\\mv{U}}_2$.\n\nLet us discuss the details. We start again from equation \\eqref{eq:firstUnfoldingVar} and use the QR\nfactorization of $\\mv{G}_{(1)}^\\H$, $\\mv{Q}_1 \\mv{R}_1^\\H = \\qr{\\mv{G}^\\H_{(1)}}$. This yields\n\\begin{equation*}\n\t-\\conj{\\mv{D}_{xx}\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)^\\H\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I}\\right)^\\H\n\t- \\left[\\mv{K}_{(1)} \\circ \\conj{\\mv{U}_1}\\mv{R}_1\\mv{Q}_1^\\H \\left( \\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\right] \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)\n\t= \\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right).\n\\end{equation*}\nPost-multiplication of the left hand side of this equation by $\\mv{Q}_1$ yields\n\\begin{equation*}\n\t-\\conj{\\mv{D}_{xx}\\mv{U}_1} \\mv{R}_1\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)^\\H\\mv{Q}_1\n\t- \\conj{\\mv{U}_1} \\mv{R}_1\\mv{Q}_1^\\H \\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I}\\right)^\\H\\mv{Q}_1\n\t- \\left[\\mv{K}_{(1)} \\circ \\conj{\\mv{U}_1}\\mv{R}_1\\mv{Q}_1^\\H \\left( \\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\right] \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)\\mv{Q}_1.\n\\end{equation*}\nTo solve this equation for $\\mv{U}_1$, it is written in vectorized form 
as\n\\begin{equation}\\label{eq:eqnU1v2forvar}\n\t\\left\\{\n\t-\\mv{I} \\otimes \\conj{\\mv{D}_{xx}} +\n\t\\mv{Q}_1^{\\mkern-1.5mu\\mathsf{T}}\\left[- \n\t\\conj{\\left(\\mv{I} \\otimes \\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I} \\right)}\n\t\\right]\\conj{\\mv{Q}_1} \\otimes \\mv{I}\n\t-\\mv{Q}_1^{\\mkern-1.5mu\\mathsf{T}} K_1 \\conj{\\mv{Q}_1}\n\t\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_1}\\mv{R}_1}_{\\mv{X}_1}} = \\mvec{\\mv{F}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2\\right) \\mv{Q}_1}.\n\\end{equation}\nIn a similar way, the update equations for $\\mv{U}_2$ and $\\mv{U}_3$ are\nderived by multiplying \\eqref{eq:start3dvar} with the other factor\nmatrices in the appropriate dimensions and using the QR factorizations\nof $\\mv{G}_{(i)}^\\H$:\n\\begin{equation}\\label{eq:eqnU2v2forvar}\n\t\\left\\{\n\t-\\mv{I} \\otimes \\conj{\\mv{D}_{yy}} +\n\t\\mv{Q}_2^{\\mkern-1.5mu\\mathsf{T}}\\left[ \n\t-\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_3^\\H \\mv{D}_{zz} \\mv{U}_3 \\otimes \\mv{I} \\right)}\n\t\\right]\\conj{\\mv{Q}_2} \\otimes \\mv{I}\n\t-\\mv{Q}_2^{\\mkern-1.5mu\\mathsf{T}} K_2 \\conj{\\mv{Q}_2}\n\t\\right\\} \\mvec{\\underbrace{{\\conj{\\mv{U}_2}\\mv{R}_2}}_{\\mv{X}_2}} \n\t= \\mvec{\\mv{F}_{(2)} \\left(\\mv{U}_3 \\otimes \\mv{U}_1\\right) \\mv{Q}_2},\n\\end{equation}\nand\n\\begin{equation}\\label{eq:eqnU3v2forvar}\n\t\\left\\{\n\t-\\mv{I} \\otimes \\conj{\\mv{D}_{zz}} +\n\t\\mv{Q}_3^{\\mkern-1.5mu\\mathsf{T}}\\left[ \n\t-\\conj{\\left(\\mv{I} \\otimes \\mv{U}_1^\\H \\mv{D}_{xx} \\mv{U}_1\\right)} - \\conj{\\left(\\mv{U}_2^\\H \\mv{D}_{yy} \\mv{U}_2 \\otimes \\mv{I} \\right)}\n\t\\right]\\conj{\\mv{Q}_3} \\otimes \\mv{I}\n\t-\\mv{Q}_3^{\\mkern-1.5mu\\mathsf{T}} K_3 \\conj{\\mv{Q}_3}\n\t\\right\\} \\mvec{\\underbrace{\\conj{\\mv{U}_3}\\mv{R}_3}_{\\mv{X}_3}} \n\t= \\mvec{\\mv{F}_{(3)} \\left(\\mv{U}_2 \\otimes 
\\mv{U}_1\\right) \\mv{Q}_3}.\n\\end{equation}\n\nAll these equations are cheap to solve. Indeed,\n$\\mvec{\\conj{\\mv{U}_i}\\mv{R}_i}$ has length $n_i r_i$. After computing a\nsymmetric reverse Cuthill--McKee permutation of the system matrix, one\nobtains a matrix with bandwidth $\\Oh{r}$, so solving these\nequations has a computational cost of $\\Oh{nr^2}$.\n\nAlternating between solving for $\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$ using \\eqref{eq:eqnU1v2forvar}, \\eqref{eq:eqnU2v2forvar} or \\eqref{eq:eqnU3var} results again in an algorithm to approximate low-rank solutions for three-dimensional space-dependent Helmholtz problems. Also in this case the orthogonality of the columns of $\\mv{U}_1$, $\\mv{U}_2$ and $\\mv{U}_3$ is maintained by additional QR factorizations.\nSo, we derive the algorithm as formulated in Algorithm \\ref{alg:var3dv3}.\nAlgorithm \\ref{alg:var3dv3} is exactly the space-dependent wave number equivalent of Algorithm \\ref{alg:const3dv3}.\n\n\\begin{algorithm}[t]\n\t\\SetAlgoLined\n\t[$\\mt{G}, \\mv{U}_1, \\mv{U}_2, \\mv{U}_3$] = hosvd(initial guess)\\;\n\t[$\\mv{\\Sigma}, \\mv{V}_1, \\mv{V}_2, \\mv{V}_3$] = cp\\_als($\\mt{K}$)\\;\n\t\\While{not converged}{\n\t\t\\For{i = 1, 2}{\n\t\t\tCompute $K_i$ using \\eqref{eq:Koperators}\\;\n\t\t\t$\\mv{Q}_i \\widetilde{\\mv{R}} = \\qr{\\mv{G}_{(i)}^\\H, 0}$\\;\n\t\t\tSolve for $\\mv{X}_i = \\conj{\\mv{U}_i}\\mv{R}_i \\in \\mathbb{C}^{n_i \\times r_i}$ using \\eqref{eq:eqnU1v2forvar} or \\eqref{eq:eqnU2v2forvar}\\;\n\t\t\t$\\conj{\\mv{U}_{i}} \\mv{R}_i = \\qr{\\mv{X}_i, 0}$\\;\n\t\t\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{R}_i\\mv{Q}_i^\\H, ~i\\right]$;\n\t\t}\n\t\tCompute $K_3$ using \\eqref{eq:Koperators}\\;\n\t\tSolve for $\\mv{X}_3 = \\conj{\\mv{U}_3}\\mv{G}_{(3)} \\in \\mathbb{C}^{n_3 \\times r^{d-1}}$ using \\eqref{eq:eqnU3var}\\;\n\t\t$\\conj{\\mv{U}_{3}} \\mv{G}_{(3)} = \\qr{\\mv{X}_3, 0}$\\;\n\t\t$\\mt{G} = \\texttt{reconstruct} \\left[\\mv{G}_{(3)},~3\\right]$\\;\n\t}\n\t$\\mt{M} = \\mt{G} 
\\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$\\;\n\t\\caption{Solve for the low-rank tensor decomposition of the solution $\\mt{M}$ of a 3D Helmholtz problem with space-dependent wave number (version 3).}\n\t\\label{alg:var3dv3}\n\\end{algorithm}\n\n\n\\subsection{Projection operator for space-dependent wave number}\nConsider a tensor $\\mt{M}$ in Tucker tensor format and factorized as\n$\\mt{M} = \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3$,\nwith unknowns $\\mt{G}, \\mv{U}_1, \\mv{U}_2$ and $\\mv{U}_3$.\nDiscretization of \\eqref{eq:driven3d} with a space-dependent wave number \nleads to a linear operator $\\mathcal{L}$ applied on tensors. Its matrix\nrepresentation $\\mv{L}$ has again a structure as given in\n\\eqref{eq:operatorLasSumKroneckerProducts}.\n\nSolving for the unknown factors $\\mv{U}_1$, $\\mv{U}_2$ or $\\mv{U}_3$\n(and the core-tensor $\\mt{G}$) using \\eqref{eq:eqnU1var},\n\\eqref{eq:eqnU2var} or \\eqref{eq:eqnU3var} can, again, be interpreted as a\nprojection operator applied on the residual. 
For example,\n\\eqref{eq:eqnU1var} can be interpreted as\n\\begin{equation}\n\t\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}} = \\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\mvec{\\mv{F}_{(1)}}.\n\\end{equation}\nThe residual in tensor format is given by\n\\begin{equation}\n\t\\begin{split}\n\t\t\\mt{R} = \\mt{F} &- \\mathcal{L}\\mt{M}, \\\\\n\t\t= \\mt{F} \n\t\t&+ \\mt{G} \\times_1 \\mv{D}_{xx} \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3 \\\\\n\t\t&+ \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{D}_{yy}\\mv{U}_2 \\times_3 \\mv{U}_3 \\\\\n\t\t&+ \\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{D}_{zz}\\mv{U}_3 \\\\\n\t\t&+ \\mt{K} \\circ \\left(\\mt{G} \\times_1 \\mv{U}_1 \\times_2 \\mv{U}_2 \\times_3 \\mv{U}_3 \\right).\n\t\\end{split}\n\\end{equation}\n\nWriting this tensor equation in the first unfolding leads to the following matrix equation\n\\begin{equation}\n\t\\begin{split}\n\t\\mv{R}_{(1)} = \\mv{F}_{(1)} \n\t&+ \\conj{\\mv{D}_{xx}\\mv{U}_1}\\mv{G}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\\\\n\t&+ \\conj{\\mv{U}_1}\\mv{G}_{(1)} \\left(\\mv{U}_3 \\otimes \\mv{D}_{yy}\\mv{U}_2 \\right)^\\H \\\\\n\t&+ \\conj{\\mv{U}_1}\\mv{G}_{(1)} \\left(\\mv{D}_{zz}\\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\\\\n\t&+ \\mv{K}_{(1)} \\circ \\left( \\conj{\\mv{U}_1} \\mv{G}_{(1)} \\left( \\mv{U}_3 \\otimes \\mv{U}_2 \\right)^\\H \\right)\n\t\\end{split}\n\\end{equation}\nwhich can be vectorized as\n\\begin{equation*}\n\t\\begin{split}\n\t\t\\mvec{\\mv{R}_{(1)}} &= \\mvec{\\mv{F}_{(1)}} \\\\\n\t\t&- \\left(-\\conj{\\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{D}_{xx}\\right)} \n\t\t- \\conj{\\left(\\mv{U}_3 \\otimes \\mv{D}_{yy}\\mv{U}_2 \\otimes \\mv{I}\\right)} \n\t\t- \\conj{\\left(\\mv{D}_{zz}\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes 
\\mv{I}\\right)} \n\t\t- \\diag{\\mvec{\\mv{K}_{(1)}}} \\conj{\\left(\\mv{U}_3 \\otimes \t\\mv{U}_2 \\otimes \\mv{I} \\right)}\\right)\\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}}.\n\t\\end{split}\n\\end{equation*}\nRewriting this results in exactly the same structure and projection operator as in the constant wave number case:\n\\begin{equation}\n\t\\begin{split}\n\t\t\\mvec{\\mv{R}_{(1)}} &= \\ldots \\\\\n\t\t&= \\mvec{\\mv{F}_{(1)}} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\mvec{\\conj{\\mv{U}_1}\\mv{G}_{(1)}} \\\\\n\t\t&= \\mvec{\\mv{F}_{(1)}} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\mvec{\\mv{F}_{(1)}} \\\\\n\t\t&= P_{23}\\mvec{\\mv{F}_{(1)}}\n\t\\end{split}\n\\end{equation}\nwhere projection operator $P_{23}$ is similar to the projector in the constant wave number case, see \\eqref{eq:projector3dconst}, and now given by\n\\begin{equation}\n\t\\begin{split}\n\t\tP_{23} &= \\mv{I} - \\conj{\\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)} \\left[\\conj{\\left(\\mv{U}_3^\\H \\otimes \\mv{U}_2^\\H \\otimes \\mv{I} \\right) \\mv{L} \\left(\\mv{U}_3 \\otimes \\mv{U}_2 \\otimes \\mv{I}\\right)}\\right]^{-1}\\left( \\mv{U}_3^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{U}_2^{\\mkern-1.5mu\\mathsf{T}} \\otimes \\mv{I} \\right) \\\\\n\t\t&= \\mv{I} - \\mv{X}.\n\t\\end{split}\n\\end{equation}\n\nA similar derivation results in projection operators $P_{13}$ and $P_{12}$ for the updates in $\\mv{U}_2$ and $\\mv{U}_3$, respectively. 
Both are also the same as in the constant wave number case, as given in \\eqref{eq:projectors3dconst}.\n\n\n\\section{Numerical results} \\label{sec:NumericalResults}\nIn this section, we demonstrate the promising results of the derived\nalgorithms with some numerical experiments in two and three\ndimensions. Furthermore, we consider discretizations of the Helmholtz\nequation with constant and space-dependent wave numbers.\n\n\\subsection{2D Helmholtz problem with space-dependent wave number}\nFirst, we consider a 2D Helmholtz problem with a space-dependent wave \nnumber given by $k^2(x,y) = 2 + e^{-x^2 -y^2}$.\n\nFor this example the two-dimensional domain $\\Omega = [-10, ~10]^2$ is discretized with $M=1000$ equidistant mesh points per direction in the interior of the domain. Further, it is extended with exterior complex scaling to implement the absorbing boundary conditions. In total, the number of discretization points per direction equals $n = n_1 = n_2 = 1668$. The external force $f(x, y) = -e^{-x^2-y^2}$ is applied.\n\nIn this space-dependent wave number example it is known that the matrix representation of the semi-exact solution of the Helmholtz equation on the full grid has a low rank. Indeed, approximating the semi-exact solution with a low-rank matrix of rank $r=17$ is in this case sufficient to obtain an error below the threshold $\\tau = 10^{-6}$.\n\nStarting with a random (orthonormalized) initial guess for $\\mv{V}^{(0)} \\in \\mathbb{C}^{n \\times r}$, only a small number of iterations of Algorithm \\ref{alg:2d} is needed to obtain an error close to the specified threshold $\\tau$. As shown in Figure~\\ref{fig:error+singularvalues-var2d-r=17}, both the residual and the error with respect to the semi-exact solution decay in only a few iterations (i.e. 
in this example 4--8 iterations) to a level close to the specified tolerance.\n\nThe singular values of the approximation $\\mv{A}^{(k)} =\n\\mv{U}^{(k)}{\\mv{R}^{(k)}}^\\H {\\mv{V}^{(k)}}^\\H$ in iteration $k$ can\nbe computed and are shown for increasing iterations in\nFigure~\\ref{fig:error+singularvalues-var2d-r=17}. As expected, the\nlow-rank approximations recover the singular values of the full grid\nsemi-exact solution. In fact, $\\mv{R}^{(k)}$ converges towards\n$\\text{diag}(\\sigma_i)$.\n\n\\begin{figure}\n \\centering\n \\begin{tabular}{cc}\n\t\\input{error-variable-2d-r=17.tikz} & \n\t\\input{singularvalues-variable-2d.tikz}\n \\end{tabular}\n \\caption{Plot of error and residual (left) and singular values (right) per iteration for space-dependent wave number in 2D Helmholtz problem ($M=1000, r=17$).}\n \\label{fig:error+singularvalues-var2d-r=17}\n\\end{figure}\n\nThe numerical rank of the matrix representation of the solution of a\nHelmholtz problem with a space-dependent wave number is unknown in\nadvance. However, the presented algorithm is stable with respect to over-\nand underestimation of the numerical rank of the solution. 
Figure\n\\ref{fig:error+residuals-var2d-M=1000} shows both the error and\nresidual per iteration and illustrates this statement by approximating\nthe same semi-exact solution with increasing ranks $r \\in \\{ 12, 18, 24, 36 \\}$.\n\n\\begin{figure}\n \\centering\n \\begin{tabular}{cc}\n\t\\input{error-variable-2d-M=1000.tikz} &\n\t\\input{residuals-variable-2d-M=1000.tikz}\n \\end{tabular}\n \\caption{Plot of errors (left) and residuals (right) per iteration for space-dependent wave number in 2D Helmholtz problem with increasing ranks ($M=1000$).}\n \\label{fig:error+residuals-var2d-M=1000}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n \\begin{tabular}{cc}\n\t\\input{runtime-variable-2d-M=1000.tikz} &\n\t\\input{runtime-variable-2d-M=1000+loglog.tikz}\n \\end{tabular}\n \\caption{Plot of runtime for 10 iterations for space-dependent wave number in 2D Helmholtz problem with increasing ranks ($M=1000$). Left: linlin-scale, right: loglog-scale.}\n \\label{fig:runtime-variable-M=1000}\n\\end{figure}\n\nIn contrast to the constant wave number case, the\nconvergence with a space-dependent wave number also depends\non the maximal attainable rank. For increasing maximal attainable\nranks the number of iterations needed decreases. This is most clearly\nobserved in the error, but it can also be seen in the\nresiduals shown in Fig.~\\ref{fig:error+residuals-var2d-M=1000}.\n\n\n\n\\subsection{3D Helmholtz problem with space-dependent wave number}\nIn this example we solve a 3D Helmholtz problem with a space-dependent\nwave number discretized on a DVR-grid \\cite{rescigno2000numerical}. 
All\nthree versions of the 3D algorithm for space-dependent wave numbers\ncan successfully be applied.\n\nFirst, to reduce the computational cost of constructing the operators $K_1$, $K_2$ and $K_3$, see \\eqref{eq:Koperators}, a CP-decomposition of the space-dependent wave number is constructed.\nAs shown in Figure \\ref{fig:cprank_wavenumber}, the space-dependent wave number can be well-approximated by a small number of rank-1 tensors. For the examples discussed in this section we used a CP-rank $s = 32$ to approximate this space-dependent wave number. Hence, the error in approximating the wave number is approximately $\\Oh{10^{-4}}$.\n\nFor all three versions we use 10 iterations of the algorithm to converge to the low-rank solution. For example, if we compute the low-rank solution (with $r = r_x = r_y = r_z = 16$), the residual after each iteration for all algorithms is shown in Figure \\ref{fig:residual-all-orderDvr=7-solrank=16-waverank=32}.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{cprank-orderDvr=7-wavenumber.tikz} \\\\\n\t\t\\caption{CP-rank of space-dependent wave number for 3D Helmholtz problem.}\n\t\t\\label{fig:cprank_wavenumber}\n\t\\end{subfigure}\n\t\\quad\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{residual-all-orderDvr=7-solrank=16-waverank=32.tikz} \\\\\n\t\t\\caption{Residual per iteration ($r = 16, s = 32$).}\n\t\t\\label{fig:residual-all-orderDvr=7-solrank=16-waverank=32}\n\t\\end{subfigure}\n\t\\caption{Low rank approximation to space-dependent wave number and residuals for version 1, version 2 and version 3 of 3D Helmholtz problem with space-dependent wave number (orderDvr = 7).}\n\t\\label{fig:residual-all-orderDvr=7}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{residual-all-orderDvr=7-waverank=32.tikz} \\\\\n\t\t\\caption{orderDvr = 
7}\n\t\t\\label{fig:residual-all-orderDvr=7-waverank=32}\n\t\\end{subfigure}\n\t\\quad\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{residual-all-orderDvr=14-waverank=32.tikz} \\\\\n\t\t\\caption{orderDvr = 14}\n\t\t\\label{fig:residual-all-orderDvr=14-waverank=32}\n\t\\end{subfigure}\n\t\\caption{Residual after 10 iterations for all three versions of the algorithm with $s = 32$.}\n\t\\label{fig:residual-all-waverank=32}\n\\end{figure}\n\n\nIf we increase the maximal attainable rank $r$ of the low-rank approximation, the residual indeed decreases, as shown in Figure \\ref{fig:residual-all-orderDvr=7-waverank=32}. The residuals for versions 1 and 3 are good, while version 2 cannot reduce the residual as far as the other two versions. Therefore version 1 or version 3, as given in Algorithm~\\ref{alg:var3dv1} and Algorithm~\\ref{alg:var3dv3}, is preferred.\n\nConsidering the runtimes of the three versions, results similar to before are observed. In this experiment with \\mbox{orderDvr=7} the number of gridpoints equals $n = 41$. For versions 1 and 3, again a runtime of $\\Oh{nr^4}$ is observed. The runtime for version 2 splits into two parts: $\\Oh{nr^2 + r^9}$. Due to the small rank $r$ and the large number of iterations in these examples, version 2 is the fastest. The runtimes for version 1 and version 3 indeed differ by approximately a factor $d$, which makes version 3 faster than version 1. The runtimes with orderDvr = 7 (i.e. $n=41$) are shown in Figure \\ref{fig:runtimes-all-orderDvr=7-waverank=32+loglog} and with orderDvr = 14 (i.e. $n=90$) are shown in Figure \\ref{fig:runtimes-all-orderDvr=14-waverank=32+loglog}.\n\nComparing the runtimes for orderDvr=7 and orderDvr=14, we see that version 2 (when the rank gets larger) indeed has approximately the same runtime, independent of orderDvr. 
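Returning to the CP structure of the wave number: the idea can be made concrete with a small example. The 3D wave number below is our own illustration, a direct analogue of the 2D example $k^2 = 2 + e^{-x^2-y^2}$, and it separates exactly into two rank-1 terms; the wave number used in the experiments above instead required $s = 32$ terms:

```python
import numpy as np

# Sketch of a CP (sum of rank-1 terms) representation of a space-dependent
# wave number. Hypothetical 3D analogue of the paper's 2D example:
# k^2 = 2 + exp(-x^2 - y^2 - z^2), which has exact CP-rank 2.
n = 41
x = np.linspace(-10, 10, n)
g = np.exp(-x**2)
ones = np.ones(n)

# Full tensor on the grid (only feasible for small n).
K = 2.0 + np.exp(-(x[:, None, None]**2 + x[None, :, None]**2
                   + x[None, None, :]**2))

# CP representation: K = 2 * (1 x 1 x 1) + (g x g x g), i.e. two rank-1 terms.
K_cp = (2.0 * np.einsum('i,j,k->ijk', ones, ones, ones)
        + np.einsum('i,j,k->ijk', g, g, g))

print(np.allclose(K, K_cp))  # prints True
```

Storing the $s$ factor vectors ($3sn$ numbers) instead of the full $n^3$ tensor is what makes the construction of $K_1$, $K_2$ and $K_3$ affordable.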
Versions 1 and 3, in contrast, consume approximately twice as much time, as expected from the linear dependence on $n$ of both algorithms.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{runtimes-all-orderDvr=7-waverank=32+loglog.tikz} \\\\\n\t\t\\caption{orderDvr = 7}\n\t\t\\label{fig:runtimes-all-orderDvr=7-waverank=32+loglog}\n\t\\end{subfigure}\n\t\\quad\n\t\\begin{subfigure}{.45\\textwidth}\n\t\t\\centering\n\t\t\\input{runtimes-all-orderDvr=14-waverank=32+loglog.tikz} \\\\\n\t\t\\caption{orderDvr = 14}\n\t\t\\label{fig:runtimes-all-orderDvr=14-waverank=32+loglog}\n\t\\end{subfigure}\n\t\\caption{Runtime of 10 iterations for all three versions of the algorithm for 3D Helmholtz with space-dependent wave number of low-rank $s = 32$.}\n\t\\label{fig:runtimes-all-waverank=32}\n\\end{figure}\n\nAn impression of the low-rank approximation to the wave function is shown in Figures \\ref{fig:impression3d} and \\ref{fig:impression3dv2}. In these impressions single, double and triple ionization are visible and can be represented by a low-rank wave function.\n\n\\begin{figure}\n\t\\centering\n\t\\begin{subfigure}{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.85\\textwidth]{wave2d-LRimpression.png}\n\t\t\\caption{Impression of a low-rank matrix approximation in 2D.}\n\t\t\\label{fig:impression2d}\n\t\\end{subfigure}\n\t\\par\n\t\\begin{subfigure}{\\textwidth}\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\textwidth]{wave3d-LRimpression.png}\n\t\t\\caption{Impression of a low-rank tensor approximation in 3D.}\n\t\t\\label{fig:impression3d}\n\t\\end{subfigure}\n\t\\caption{Impressions of a low-rank approximation of a matrix and a Tucker tensor representing the wave function as solution to a 2D and 3D Helmholtz problem with a space-dependent wave number.}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.9\\textwidth]{wave3d+nocube+c1+c2-v2.png}\n\t\\caption{Visualization of a 3D 
wave as low-rank approximation\n to a 3D Helmholtz problem with space-dependent wave number\n with single, double and triple ionization.}\n\t\\label{fig:impression3dv2}\n\\end{figure}\n\n\n\\section{Discussion and conclusions}\\label{sec:discussion}\nIn this paper we have analyzed the scattering solutions of a driven\nSchr\\\"odinger equation. These describe a break-up reaction where a\nquantum system breaks up into multiple fragments. These problems\nare equivalent to solving a Helmholtz equation with space-dependent wave\nnumbers.\n\nWe have shown, first in 2D and then in 3D, that the wave function of\nmultiple ionization can be well approximated by a low-rank solution.\nIn 2D, the waves can be represented as a product of two low-rank \nmatrices and in 3D as a low-rank Tucker tensor decomposition.\n\nWe propose a method that determines these low-rank components\nof the solution directly. We write the solution as a product of low-rank\ncomponents and assume that a guess for all but one component is given.\nWe then write a linear system for the remaining unknown component.\nThis is repeated until each of the components is updated.\n\nThis procedure can be interpreted as a series of projections of the\nresidual onto a subspace and a correction within that subspace.\n\nIn theory, the generalization for dimensions $d > 3$ is straightforward. But\nfor dimensions $d>3$ it becomes beneficial to change to a Tensor\nTrain factorization \\cite{oseledets2011tensor}. It is expected that a similar \nstrategy can also be applied to tensors in Tensor Train format.\n\nAs demonstrated by the numerical experiments, the presented algorithms\nare able to exploit the low-rank structure of the solutions. 
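The update described above (fix all components but one, solve a linear system for it, repeat) can be sketched in its simplest setting: fitting a known rank-$r$ matrix rather than solving the actual Helmholtz operator equations. The problem and all names below are our own illustration:

```python
import numpy as np

# Minimal alternating-update sketch: write the unknown as U @ V.T with one
# factor fixed, solve a least-squares system for the other, and repeat.
# This mimics the structure of the method on a plain matrix-approximation
# problem, not on the actual (complex, shifted) Helmholtz operator.
rng = np.random.default_rng(1)
n, r = 100, 5
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r target

V = np.linalg.qr(rng.standard_normal((n, r)))[0]  # random orthonormal guess
for _ in range(5):
    # Fix V, solve (V.T V) U.T = V.T B.T for U  ->  U = B V (V.T V)^(-1)
    U = np.linalg.lstsq(V.T @ V, V.T @ B.T, rcond=None)[0].T
    # Fix U, solve (U.T U) V.T = U.T B for V    ->  V = B.T U (U.T U)^(-1)
    V = np.linalg.lstsq(U.T @ U, U.T @ B, rcond=None)[0].T

res = np.linalg.norm(B - U @ V.T) / np.linalg.norm(B)
print(res < 1e-10)
```

For an exactly rank-$r$ target, one full sweep already recovers the column space, which is why so few iterations are needed; in the Helmholtz setting the same alternation is driven by the residual equations instead of a known target.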
This\nreduces the number of unknowns and shortens the\ncomputational time to solve the Helmholtz equation.\n\nIn two dimensions, the solution can be\nrepresented by only $2nr$ unknowns instead of the full grid of\n$n^2$ unknowns. Also the linear systems to solve per iteration have\nonly $nr$ unknowns.\n\nIn high-dimensional Helmholtz equations, the low-rank Tucker tensor\ndecomposition represents the solution with $\\Oh{r^d + dnr}$\nunknowns. So, the total number of unknowns is reduced, but it is still\nexponential in the dimension $d$. For increasing dimensions this leads,\nagain, to systems with a number of unknowns exponential in $d$. Perhaps\nother Tucker-like tensor decompositions with a number of unknowns only\npolynomial in $d$ can resolve this problem and make the presented\nalgorithm also applicable for higher dimensions.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
One has to assume a gas-to-dust mass ratio ($\\ensuremath{\\gamma}$) to derive the mass and column density of molecular hydrogen. \n\tA direct, local determination shows that the hydrogen-to-dust mass ratio is $\\sim100$, corresponding to a gas-to-dust mass ratio $\\ensuremath{\\gamma}\\approx136$, when accounting for helium \\citep{Draine+07_apj663_866}. \n\tCurrent research uses a constant value of the gas-to-dust ratio irrespective of the galactocentric distance of the cloud \\citep[typically $100-150$, e.g.][]{Elia+13_apj772_45,Elia+17_mnras471_100,Koenig+17_aap599_139}, and while these values are reasonable within the solar circle they are not likely to be reliable for the outer parts of the disk, where the metallicity and average disk surface density might be substantially lower.\n\t\n\tHeavy elements are the main constituents of dust grains, and therefore when their abundance with respect to hydrogen changes, dust may be influenced too. Models combining chemical evolution of the Galaxy with dust evolution indeed suggest that $\\ensuremath{\\gamma}$ increases with decreasing metallicity $Z$ \\citep[][]{Dwek98_apj501_643,Mattsson+12_mnras423_38,HirashitaHarada17_mnras467_699}. This is also supported by observations in nearby galaxies \\citep[e.g.][]{Sandstrom+13_apj777_5}. \n\t\n\tExcept for a few cases, the data for external galaxies are averaged over the entire galaxy, and in all cases optically thick CO lines are used to obtain the mass of molecular gas. Moreover, in studies in which the gradient in $\\ensuremath{\\gamma}$ with $Z$ can be spatially resolved, the resolution is of the order of a kpc, introducing large uncertainties, for example, by assuming a uniform single temperature for dust or a specific calibration in deriving the metallicity \\citep[e.g.][]{Sandstrom+13_apj777_5}. 
As \\citet{Mattsson+12_mnras423_38} discuss, this could lead to a dust content which, in the central regions, often is larger than the amount of available metals in the interstellar medium (ISM).\n\t\n\tThe study of the metallicity-$\\ensuremath{\\gamma}$ relation in the Milky Way not only opens the possibility to have, for the first time, more accurate estimates of the amount of molecular gas in clouds,\n\tbut also provides the possibility to explore it on spatial scales and sensitivities that are extremely challenging to obtain, if not inaccessible, in galaxies other than our own. \\citet{Issa+90_aap236_237} studied how the gradient in gas-to-dust ratio depends on the galactocentric radius, but for a limited range of $D_{GC}$ ($9-11\\usk\\mathrm{kpc}$) and using optically thick CO lines to estimate the amount of molecular gas, via the integrated intensity of the CO (1--0) line-to-molecular mass conversion factor $X_\\mathrm{CO}$. \n\t\n\tIn this work we use a sample of 23 sources in the far outer Galaxy, complemented by 57 sources from the ATLASGAL TOP100 in the inner Galaxy (Fig.~\\ref{fig:distribution_clumps}) to expand this pioneering work, exploring the variation of $\\ensuremath{\\gamma}$ across the entire disk of the Milky Way. \n\tThis opens up the possibility of using the appropriate value of the gas-to-dust ratio to obtain more precise estimates of the very basic properties of molecular clouds throughout the Milky Way from publicly available surveys, such as the total mass and H$_{2}$ column density. From these quantities it is possible to derive molecular abundances and, in combination with complete surveys of the galactic disk, a reliable distribution of mass of molecular gas in the Milky Way.\n\t\n\n\\section{Observations and sample selection}\\label{sec:obs_and_sample}\n\n\tFrom the \\citet{WouterlootBrand89_aaps80_149} IRAS\/CO catalogue and that compiled by K\\\"onig et al. 
(in prep.)\\ using $^{12}$CO(2--1) and $^{13}$CO (2--1)\\footnote{Observed with the APEX-1 receiver at the Atacama Pathfinder Experiment (APEX) 12-m telescope.}, we selected a sample of 23 sources in the far outer Galaxy ($D_{GC} > 14\\usk\\mathrm{kpc}$; FOG) with the following criteria: i) the source must be associated with IR emission in WISE images; ii) Herschel data must be available to estimate the dust content; and iii) the surface density of dust ($\\ensuremath{\\Sigma_\\mathrm{dust}}$) at the emission peak must exceed $3\\times10^{-5}\\usk\\gram\\usk\\centi \\metre^{-2}$, or $N_\\mathrm{H_{2}}=8.75\\times10^{20}\\usk\\centi \\metre^{-2}$ (i.e. $\\Sigma_{gas}\\sim19\\msun\\usk\\mathrm{pc}^{-2}$), assuming $\\ensuremath{\\gamma}=136$. \n\tAccording to the model of \\citet{HirashitaHarada17_mnras467_699}, the latter condition is sufficient to ensure that the vast majority of gas is in molecular form for $Z \\gtrsim 0.2 \\usk Z_{\\odot}$. \n\tIn the FOG, in fact, the metallicity ranges from $\\sim0.5\\,Z_{\\odot}$ at $D_{GC}\\sim14\\usk\\mathrm{kpc}$ to $\\sim0.2\\,Z_{\\odot}$ at $D_{GC}\\sim21\\usk\\mathrm{kpc}$ \\citep[using the results in][ see Eq.~\\ref{eq:Z_grad_MW}]{LuckLambert11_aj142_136}.\n\tObservations of the Magellanic Clouds also provide support for this statement. \n\tThe metallicities in the Large and Small Magellanic Clouds (LMC, SMC) are $Z = 0.5 \\, Z_{\\odot}$ and $Z = 0.2\\, Z_{\\odot}$, respectively \\citep{RussellDopita92_apj384_508}, encompassing the range of the far outer Galaxy. \n\tObservations of the atomic and molecular gas in these galaxies by \\citet{RomanDuval+14_apj797_86} demonstrate that the H\\textsc{i}--H$_{2}$ transition occurs at $\\approx30\\msun\\usk\\mathrm{pc}^{-2}$ in the LMC and $\\approx80\\msun\\usk\\mathrm{pc}^{-2}$ in the SMC. 
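The quoted selection thresholds can be cross-checked with a few lines of arithmetic; this is our own sketch with standard constants, using the local hydrogen-to-dust mass ratio of $\sim100$ cited in the introduction:

```python
# Consistency check (our own arithmetic, standard constants) of the quoted
# selection thresholds, starting from Sigma_dust = 3e-5 g cm^-2.
m_H = 1.6735e-24        # hydrogen atom mass [g]
pc_cm = 3.0857e18       # parsec [cm]
M_sun = 1.989e33        # solar mass [g]

sigma_dust = 3e-5                      # dust surface density [g cm^-2]
sigma_gas = 136 * sigma_dust           # gas-to-dust ratio gamma = 136
sigma_H2 = 100 * sigma_dust            # hydrogen-to-dust mass ratio ~ 100
N_H2 = sigma_H2 / (2 * m_H)            # H2 column density [cm^-2]
sigma_gas_msun = sigma_gas * pc_cm**2 / M_sun  # [M_sun pc^-2]

print(f"N_H2 = {N_H2:.2e} cm^-2")                     # ~8.9e20, quoted 8.75e20
print(f"Sigma_gas = {sigma_gas_msun:.1f} Msun/pc^2")  # ~19.5, quoted ~19
```

The small residual difference in $N_\mathrm{H_{2}}$ presumably reflects slightly different adopted constants.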
Our criterion on the surface density of dust, when using the gas-to-dust ratios estimated by \\citet{RomanDuval+14_apj797_86} in the Magellanic Clouds, exceeds these observed thresholds: $\\Sigma_{gas}\\approx70\\msun\\usk\\mathrm{pc}^{-2}$ for $Z=0.5\\, Z_{\\odot}$ and $\\sim230\\msun\\usk\\mathrm{pc}^{-2}$ for $Z=0.2\\, Z_{\\odot}$.\n\n\tThe selection of only IR-bright sources, \n\tstill associated with substantial molecular material, \n\timplies that we are dealing with clumps in a relatively advanced stage of the star formation process, when CO is not significantly affected by depletion \\citep{Giannetti+14_aap570_65}.\n\tThe sources were followed up with single-pointing observations centred on the dust emission peak, as identified in Herschel images, carried out using the APEX-1 receiver at APEX tuned to $218.09\\ensuremath{\\usk\\giga\\hertz}$, a setup which includes C$^{18}$O(2--1). Here, we have used this transition to estimate the total amount of H$_{2}$ at the position of the dust emission peak. The angular resolution of APEX at this frequency is $\\sim28\\arcsec$.\n\tObservations were performed between September 29 and October 15 2015, and between December 3 and 11 2015. The typical rms noise on the $T_\\mathrm{{A}}^{*}$-scale ranges between $10\\usk\\milli\\kelvin$ and $20\\usk\\milli\\kelvin$ at a spectral resolution of $0.4\\usk\\kilo\\metre\\usk\\second^{-1}$. \n\tWe converted the antenna temperature $T_\\mathrm{{A}}^{*}$ to main beam brightness temperature, $T_\\mathrm{{MB}}$, using $\\eta_{\\mathrm{MB}} = 0.75$.\n\t\n \\begin{figure}[t]\n\t\\centering\n\t\\includegraphics[width=\\columnwidth]{figures\/source_distribution.pdf}\n\t\\caption{Distribution of the sources considered in this work. In white we show the sources in the FOG, and in red we show clumps from the TOP100 sample. The background image is an artist's impression of the Milky Way as seen from the northern galactic pole (courtesy of NASA\/JPL-Caltech\/R. Hurt -- SSC\/Caltech). 
The Sun is at (0, 8.34) kpc.}\\label{fig:distribution_clumps}\n \n \\end{figure}\n\n\\section{Results}\\label{sec:results}\n\n\tAs a first step we constructed the SED for each of the sources in the FOG to obtain the peak mass surface density of the dust. \n\tFor the SED construction and fitting, we follow the procedure described in \\citet{Koenig+17_aap599_139} and adopted in \\citet{Giannetti+17_aap603_33} and \\citet{Urquhart+17_ArXiv}, with minor changes due to the absence of ATLASGAL images for the outer Galaxy. We considered the five Hi-GAL \\citep{Molinari+10_pasp122_314} bands (500, 350, 250, 160 and $70\\usk\\micro\\metre$) from the SPIRE \\citep{Griffin+10_aap518_3} and PACS \\citep{Poglitsch+10_aap518_2} instruments, to reconstruct the cold dust component of the SED. The contribution from a hot embedded component is estimated from mid-IR continuum measurements, using MSX \\citep{Egan+03_AFRL} and WISE \\citep{Wright+10_aj140_1868} images at 21, 14, 12 and $8\\usk\\micro\\metre$, and 24 and $12\\usk\\micro\\metre$, respectively.\n\t\n\tThe flux for each of the bands was calculated using an aperture-and-annulus scheme. The aperture is centred on the emission peak at $250\\usk\\micro\\metre$ and its size was set to three times the FWHM of a Gaussian fitted to the $250\\usk\\micro\\metre$ image. The background was calculated as the median flux over an annulus with inner and outer radii of 1.5 and 2.5 times the aperture size, respectively. After being normalised to the area of the aperture, the background was subtracted from the flux within the aperture.\n\tThe uncertainties on the background-corrected fluxes were calculated by summing in quadrature the pixel noise of the images and a flux calibration uncertainty. \n\tWe adopted a calibration uncertainty of $20\\%$ for the 350, 250, 160 and $70\\usk\\micro\\metre$ fluxes, and of $30\\%$ for the mid-IR bands. 
An uncertainty of $50\\%$ is assumed for the $500\\usk\\micro\\metre$ flux, due to the\n\tlarge pixel size, and for the $8\\usk\\micro\\metre$ flux, due to contamination from PAHs.\n\tThe grey-body plus black-body model was optimised via a $\\chi^{2}$ minimisation, and the uncertainties on the parameters were estimated by numerically propagating the errors on the observables.\n\tIn contrast to \\citet{Koenig+17_aap599_139} and \\citet{Urquhart+17_ArXiv}, we use the $350\\usk\\micro\\metre$ Herschel flux measurement to calculate the peak dust surface density of the clump, because the sources were not covered in ATLASGAL and because this image has a comparable resolution to our molecular-line observations. The method is discussed in more detail in \\citet{Koenig+17_aap599_139} and \\citet{Urquhart+17_ArXiv}, and we refer the interested reader to these publications.\n\n\tThe dust opacity and emissivity used are the same as in \\citet{Koenig+17_aap599_139}, that is, $\\kappa_{870\\usk\\micro\\metre}=1.85\\usk\\centi \\metre^{2}\\usk\\gram^{-1}$ and $\\beta=1.75$, respectively \\citep[see e.g.][]{Kauffmann+08_aap487_993}. The SEDs for the entire sample can be found in Fig.~\\ref{fig:seds}; an example is shown in Fig.~\\ref{fig:sed_example}. In addition to the mass surface density of dust at the far-IR peak, we derived the bolometric luminosity, the dust temperature and mass of the sources, as measured within the apertures listed in Table~\\ref{tab:sed_fit}, which contains the complete results of the SED fit.\n\t\n\tWe fitted the C$^{18}$O (2--1) line using MCWeeds \\citep{Giannetti+17_aap603_33} with the algorithm that makes use of the Normal approximation \\citep{gelman2003bayesian} to obtain the column density of carbon monoxide, under the assumption of LTE; the adopted partition function is reported in Table~\\ref{tab:part_funct}. 
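The aperture-and-annulus extraction described above can be sketched on a synthetic image; the image, sizes and the reading of "size" as the aperture diameter are our own assumptions, purely for illustration:

```python
import numpy as np

# Schematic of the aperture-and-annulus photometry: sum inside the aperture,
# estimate the per-pixel background as the median over an annulus of
# 1.5-2.5 times the aperture size, and subtract.
n = 201
yy, xx = np.mgrid[:n, :n] - n // 2
r = np.hypot(xx, yy)
image = 50.0 * np.exp(-r**2 / (2 * 8.0**2)) + 3.0   # source + flat background

fwhm = 8.0 * 2.355          # FWHM of the Gaussian source [pixels]
ap_radius = 3 * fwhm / 2    # "three times the FWHM" read as the diameter

aperture = r <= ap_radius
annulus = (r > 1.5 * ap_radius) & (r <= 2.5 * ap_radius)

background = np.median(image[annulus])               # per-pixel background
flux = image[aperture].sum() - background * aperture.sum()
print(flux, background)
```

On this synthetic image the recovered flux is close to the analytic Gaussian total $50 \cdot 2\pi\sigma^2$, and the background estimate recovers the injected constant level.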
Using the relation between the dust temperature and the excitation temperature of CO isotopologues found in \\citet[][see their Fig.~10]{Giannetti+17_aap603_33}, we estimated the excitation conditions for the sources in the FOG. We used this value of $T_\\mathrm{{ex}}$ as the most probable one in the prior, with a value of $\\sigma$ equal to the measured intrinsic scatter; all priors are fully described in Table~\\ref{tab:priors} and the results are listed in Table~\\ref{tab:mcweeds_fit_fog}. To exclude biases connected to the $T_\\mathrm{{d}}\\ensuremath{\\,vs.\\,}T_\\mathrm{{ex}}$ relation, we compared the column densities with those computed using the unmodified values of the dust temperature from the SED; this has only a minor impact on the derived quantities. In Appendix~\\ref{app:spectra} we show the fit results, superimposed on the observed spectra; an example is given in Fig.~\\ref{fig:fit_example}.\n\t\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=0.5\\textwidth]{figures\/SEDs\/{WB89_789_SED}.pdf}\n\t\t\\caption{Top: example of the SED fit for WB$\\_$789, hosting one of the furthest clusters from the galactic centre yet detected \\citep{BrandWouterloot07_aa464_909}. Extracted fluxes are indicated by the red crosses, and upper and lower limits are indicated by triangles pointing downwards and upwards, respectively. The best fit curve is indicated in blue, and the separate contributions of the grey and black bodies are shown by the green dashed lines. Bottom: Residuals calculated as $(S_{\\mathrm{obs}} - S_{\\mathrm{mod}}) \/ S_{\\mathrm{obs}}$. The SEDs and their residuals for all the other sources are included in Fig.~\\ref{fig:seds}.\\label{fig:sed_example}}\n\t\\end{figure}\n\n\t\\begin{figure}\n\t\t\\centering\n\t\t\\includegraphics[width=0.8\\columnwidth]{figures\/spectra\/post_fit_WB89_789_slice_0.pdf}\n\t\t\\caption{Example of the C$^{18}$O(2--1) observations for WB$\\_$789. We indicate in red the best fit from MCWeeds. 
The spectra and their fits for the entire sample are shown in Fig.~\\ref{fig:spectra}.\\label{fig:fit_example}}\n\t\\end{figure}\n\t\n\tIn order to study how the gas-to-dust ratio varies across the galactic disk, we complemented the FOG sample with sources selected from the TOP100 \\citep{Giannetti+14_aap570_65, Koenig+17_aap599_139}, a representative and statistically significant sample of high-mass star-forming clumps covering a wide range of evolutionary phases \\citep{Koenig+17_aap599_139, Giannetti+17_aap603_33}. These sources are among the brightest in their evolutionary class in the inner Galaxy.\n\tFor the 57 sources classified as H\\textsc{ii}\\ and IRb in the TOP100, we used the column density determinations from \\citet{Giannetti+14_aap570_65} to derive the H$_{2}$ column density. Among the isotopologues analysed in that work, we elected to use C$^{17}$O, because in these extreme sources C$^{18}$O can have a non-negligible optical depth, and because we have FLASH$^{+}$ \\citep{Klein+14_TransThzSciTech4_588} observations of C$^{17}$O (3--2) for the entire subsample. \n\t\n\tWe have $\\mathrm{C^{17}O(1-0)}$ observations for 17 of the selected sources in the TOP100, for which we were able to estimate the optical depth: $30\\%$ of the sample has an optical depth $\\approx0.1$; the remaining clumps have optical depths below this value.\n\tAssuming LTE, $T=30\\usk\\kelvin$, and $\\tau_\\mathrm{C^{17}O(1-0)}=0.1$, the optical depth of $\\mathrm{C^{17}O(3-2)}$ is about a factor of four to five higher than that of the (1--0) transition, leading to an underestimate of the carbon monoxide column density of less than $\\sim30\\%$. Therefore the correction for opacity can be considered negligible (see Sect.~\\ref{sec:discussion}). The column density of C$^{17}$O is then converted to C$^{18}$O using $^{18}\\mathrm{O}\/^{17}\\mathrm{O} = 4$, according to \\citet{Giannetti+14_aap570_65}, as determined from the same sample. 
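The opacity argument can be made explicit. Under LTE, an optically thin analysis underestimates the column density by the factor $\tau/(1-e^{-\tau})$ (a standard correction we assume here), so $\tau_{3-2}\approx0.4-0.5$ indeed stays below a $30\%$ underestimate:

```python
import numpy as np

# Quick check of the opacity argument: the optically thin column density
# underestimates the true one by a factor tau / (1 - exp(-tau)).
def thin_underestimate(tau):
    """Fractional underestimate of N when an optical depth tau is neglected."""
    correction = tau / (1.0 - np.exp(-tau))
    return 1.0 - 1.0 / correction

# tau(3-2) is a factor of ~4-5 larger than tau(1-0) ~ 0.1:
for tau in (0.4, 0.5):
    print(tau, thin_underestimate(tau))   # ~0.18 and ~0.21, below 30%
```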
The peak surface densities (and total masses) of dust for the clumps in the TOP100 were taken from the results of \\citet{Koenig+17_aap599_139}. The properties of the sources extracted from the TOP100 are listed in Table~\\ref{tab:mcweeds_fit}.\n\t\n\tWe computed the mass surface density of the gas from the column density of C$^{18}$O, obtained via MCWeeds for the FOG sample, and of C$^{17}$O for the TOP100. When more than one velocity component was observed in the spectra of the CO isotopologues, the column densities were summed to obtain the total surface density of carbon monoxide along the line of sight, because all sources contribute to the observed continuum. This has the effect of introducing scatter in the value of $\\ensuremath{\\gamma}$ at a particular $D_{GC}$, as the clumps have different distances, but it only happens in a minor fraction of the sources (e.g. two in the FOG sample), and depends on the $D_{GC}$ of the sources. From the C$^{18}$O surface density, we derived the total mass surface density of the cloud ($\\Sigma$), accounting for helium, and assuming that the expected abundance of the C$^{18}$O is described by:\n\t\\begin{equation}\n\t\t\\chi_\\mathrm{C^{18}O}^\\mathrm{E} = \\frac{9.5 \\times \\pot{-5} \\times 10^{\\alpha(D_{GC} - R_\\mathrm{GC,\\odot})}}{^{16}\\mathrm{O}\/^{18}\\mathrm{O}} , \\label{eq:expected_ab}\n\t\\end{equation}\n\twhere $D_{GC}$ is expressed in $\\mathrm{kpc}$, $R_\\mathrm{GC,\\odot}=8.34 \\usk\\mathrm{kpc}$ \\citep{Reid+14_apj783_130}, and $\\alpha$ describes the C\/H gradient, taken to be $-0.08\\usk \\mathrm{dex}\\usk\\mathrm{kpc}^{-1}$ from \\citet{LuckLambert11_aj142_136}. 
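The expected-abundance relation above can be evaluated directly; here we fix the $^{16}\mathrm{O}/^{18}\mathrm{O}$ denominator to its local value from the \citet{WilsonRood94_araa32_191} relation (an assumption for illustration, consistent with our choice to ignore the isotopic gradient):

```python
# Evaluating the expected C18O abundance as a function of galactocentric
# distance, with alpha = -0.08 dex/kpc and R_sun = 8.34 kpc.
R_sun = 8.34
alpha = -0.08

def chi_c18o(D_gc, ratio_16_18):
    """Expected C18O/H2 abundance at galactocentric distance D_gc [kpc]."""
    return 9.5e-5 * 10**(alpha * (D_gc - R_sun)) / ratio_16_18

# Local 16O/18O ratio from the Wilson & Rood (1994) relation at the Sun:
local_ratio = 58.8 * R_sun + 37.1   # ~527
for D in (2.0, R_sun, 14.0, 20.0):
    print(D, chi_c18o(D, local_ratio))
```

At the solar circle this gives $\chi_\mathrm{C^{18}O} \approx 1.8\times10^{-7}$, falling by roughly an order of magnitude towards the edge of the sampled disk.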
We assumed that the CO abundance is controlled by the carbon abundance, because it is always less abundant than oxygen, and becomes progressively more so in the outer Galaxy.\n\tA smaller abundance of CO at lower $Z$ \n\tis consistent with observations of low-metallicity galaxies, where the detectable CO-emitting region is significantly smaller than the H$_{2}$ envelope \\citep{Elmegreen+13_nat495_487,Rubio+15_nat525_218}.\n\tThe oxygen isotopic ratio is commonly described by the relation $^{16}\\mathrm{O}\/^{18}\\mathrm{O} = 58.8 D_{GC} + 37.1$ \\citep{WilsonRood94_araa32_191}. On the one hand, independent measurements of $^{16}\\mathrm{OH}\/^{18}\\mathrm{OH}$ by \\citet{Polehampton+05_aap437_957}, despite finding consistent results with the previous works, do not strongly support such a gradient. On the other hand, \\citet{Wouterloot+08_aa487_237} find an even steeper gradient considering sources in the FOG, where C$^{18}$O is likely to be less abundant, if not for the oxygen isotopic ratio, then for selective photodissociation due to lower shielding of the dust, and self-shielding. \n\n\tFor the moment we ignore the effect of a gradient, and adopt the local CO\/C$^{18}$O ratio. We used $\\Sigma$, together with $\\ensuremath{\\Sigma_\\mathrm{dust}}$, as observed data for a JAGS\\footnote{\\url{http:\/\/mcmc-jags.sourceforge.net\/}} \\citep[Just Another Gibbs Sampler,][]{Plummer2003_IWDSC} model which derives the gas-to-dust ratios as $\\Sigma\/\\Sigma_{dust}$, and fits the points in a log-linear space, considering an intrinsic scatter. 
\n\tFigure~\\ref{fig:gtd_fit}a shows that the gas-to-dust ratio increases with galactocentric distance, with a gradient for $\\ensuremath{\\gamma} \\ensuremath{\\,vs.\\,}\\ D_{GC}$ described by:\n\t\\begin{equation}\n\t\tlog(\\ensuremath{\\gamma}) = \\left( 0.087\\left[ \\asymErr{+0.045}{-0.025} \\right]\\pm0.007\\right) \\, D_{GC} + \\left( 1.44 \\left[ \\asymErr{-0.45}{+0.21} \\right]\\pm0.03 \\right), \\label{eq:gtd_gradient}\n\t\\end{equation}\n\twhere $D_{GC}$ is expressed in $\\mathrm{kpc}$; we first indicate the systematic uncertainty in square brackets (discussed in the next section), and the statistical uncertainties afterwards. \n\tThis equation gives values for $\\ensuremath{\\gamma}$ at the solar distance between $\\approx130$ and $\\approx 145$, in good agreement with the local value of 136, considering the intrinsic scatter of the observed points (cf. Fig.~\\ref{fig:gtd_fit}) and the uncertainties in the derived relation.\n\tAs indicated in Fig.~\\ref{fig:gtd_fit}a, our results are, in general, valid only between $\\sim2\\usk\\mathrm{kpc}$ and $\\sim20\\usk\\mathrm{kpc}$ from the galactic centre, the range spanned by the sources in our sample. \n\t\n\tThe slope of the gradient is very close to that used in Eq.~\\ref{eq:expected_ab}, showing that C$^{18}$O behaves in a way comparable to the dust, with respect to metallicity. This implies that the results are closely linked to the assumed galactocentric carbon gradient. In the next section we discuss as limiting cases how the $\\ensuremath{\\gamma}$ gradient would change if the CO abundance follows the oxygen variation instead, and if C$^{18}$O is less abundant with respect to CO in the outer Galaxy, as a consequence of the $^{16}\\mathrm{O}\/^{18}\\mathrm{O}$ gradient, or of selective photodissociation (see also Fig.~\\ref{fig:gtd_fit}b, c). 
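For orientation, the central fit can be evaluated at $R_\mathrm{GC,\odot}$, varying slope and intercept by the statistical uncertainties only (a rough bracket of our own that treats them as independent):

```python
# Evaluating the fitted relation log10(gamma) = 0.087 * D_gc + 1.44 at the
# solar distance, with the statistical uncertainties only.
R_sun = 8.34

def gamma(D_gc, slope=0.087, intercept=1.44):
    return 10**(slope * D_gc + intercept)

central = gamma(R_sun)
low = gamma(R_sun, 0.087 - 0.007, 1.44 - 0.03)
high = gamma(R_sun, 0.087 + 0.007, 1.44 + 0.03)
print(f"gamma(R_sun) = {central:.0f}  (range {low:.0f} - {high:.0f})")
```

The local value of 136 falls comfortably inside this statistical bracket.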
The effect of such systematic uncertainties causes the variations in the slope and intercept of Eq.~\\ref{eq:gtd_gradient} indicated in the square brackets.\n\n \\begin{figure*}[t]\n\t\\centering\n\t\\includegraphics[width=\\textwidth]{figures\/gtd_gradients.pdf}\n\t\\caption{Variation of the gas-to-dust ratio with galactocentric radius, for our fiducial case (Panel a), considering a CO\/C$^{18}$O galactocentric gradient (Panel b), and assuming that the abundance of CO follows the radial oxygen gradient, rather than the C\/H (Panel c). The thick blue lines indicate the best fit, reported in the bottom right corner; the $68\\%$ and $95\\%$ highest probability density intervals of the fit parameters are indicated by the light and dark yellow-shaded regions, respectively. The intrinsic scatter is indicated by the dashed lines. \n\tFor comparison with external galaxies $log$(O\/H) + 12 is shown on the top axis.}\\label{fig:gtd_fit}\n \\end{figure*}\n\n\\section{Discussion}\\label{sec:discussion}\n\n\tIn this section, we discuss the uncertainties in the gas-to-dust ratio estimates and in its galactocentric gradient, why $\\ensuremath{\\gamma}$ has to be higher at large $D_{GC}$, and how it depends on metallicity. Estimates of the gas-to-dust ratio are difficult, resting on the derivation of surface densities of a tracer of H$_{2}$ (C$^{18}$O and C$^{17}$O in our case) and of the dust. Several assumptions introduce a systematic uncertainty in $\\ensuremath{\\gamma}$. For the surface density of molecular gas, the main sources of uncertainties are the canonical CO abundance, that can vary by a factor of two, the CO--C$^{18}$O conversion, discussed in Sect.~\\ref{sec:results}, and the assumption of LTE, which is likely less important, given the results of the comparison between temperatures derived from CH$_{3}$CCH and CO isotopologues in the TOP100 sample in \\citet{Giannetti+17_aap603_33}. Dust is more problematic, especially because its properties are poorly constrained. 
Opacity and emissivity are sensitive to the grain composition and size distribution:\n\tdistinct models can induce discrepancies in the estimated mass surface density of dust up to a factor of approximately three \\citep[see e.g.][]{OssenkopfHenning94_aap291_943, LiDraine01_apj554_778, Gordon+14_apj797_85, Gordon+17_apj837_98}.\n\tThe simple SED model adopted is a crude approximation as well: \n\tthe temperature varies along the line of sight, and in the extreme case where the representative grey-body temperature changes from $\\approx20\\usk\\kelvin$ (the median in our FOG sample is $\\approx23\\usk\\kelvin$) to $\\approx 50\\usk\\kelvin$, the dust surface density changes by a factor of approximately five.\n\t\n\tPropagating these uncertainties to the gas-to-dust ratio implies a global uncertainty of nearly a factor of six on $\\ensuremath{\\gamma}$ for each target. It is therefore relevant to test whether a simpler model with a constant $\\ensuremath{\\gamma}$ is to be favoured over the proposed gradient. A Bayesian model comparison, which automatically takes into account Ockham's razor \\citep[e.g.][]{BolstadIBS}, shows that, in the unfavourable case that the CO abundance follows the oxygen gradient, the odds ratio is approximately eight in favour of the gradient model\\footnote{Considering a flat prior on the slope and intercept of the $\\log(\\ensuremath{\\gamma}) \\ensuremath{\\,vs.\\,} D_{GC}$ relation in the ranges $0-1$ and $0-4$, respectively.}, which is then to be preferred over a constant value of $\\ensuremath{\\gamma}$ across the entire disk.\n\t\n\tFactors that can change the slope of the $\\ensuremath{\\gamma} \\ensuremath{\\,vs.\\,} D_{GC}$ relation are the molecular gas and CO-dark gas fractions, the CO abundance gradient, and the dust model. Larger quantities of gas in atomic form, as well as more CO-dark gas at lower metallicities (due to reduced shielding by dust and reduced self-shielding), would cause the relation to be steeper. 
However, because we target exclusively dense molecular clouds, the vast majority of the gas should be in molecular form (see Sect.~\\ref{sec:obs_and_sample}).\n\tA larger fraction of CO-dark gas is evident for the low-metallicity galaxy WLM \\citep{Elmegreen+13_nat495_487,Rubio+15_nat525_218}; a less extreme, but analogous situation is possible for clouds at the edge of the Milky Way disk \\citep[in the FOG, $Z$ is larger than in WLM by a factor of between approximately two and five, see][]{Leaman+12_apj750_33}. \n\n\tOn the other hand, the variation of dust composition and of the grain size distribution tends to make the measured relation flatter. \n\tSilicates are likely to be more common in the outer Galaxy \\citep[e.g.][]{Carigi+05_apj623_213}; in this case the opacity would be lower, leading to an underestimate of the dust surface density. The models in \\citet{OssenkopfHenning94_aap291_943} show that a variation in the silicate-to-carbon fraction has a much smaller impact than the change in size distribution due to coagulation. \n\tIn the extreme case in which no coagulation takes place in the FOG, while it is efficient in the inner Galaxy, the dust opacity changes by a factor of approximately two in the far-IR and submm regimes, reducing the mass surface densities by the same factor. This effect has an impact similar in magnitude, but opposite in sign, to that of the CO\/C$^{18}$O gradient. \n\n\tIn the outer Galaxy, where extinction is lower, dust grains can be more efficiently reprocessed. As a consequence, more carbon is present in the gas phase \\citep[e.g.][]{Parvathi+12_apj760_36}, effectively making the gradient in CO abundance shallower than the C\/H one, if it follows the gas-phase abundance of carbon. A limiting case is obtained by using the slope of the oxygen gradient in Eq.~\\ref{eq:expected_ab}, in which case the slope in Eq.~\\ref{eq:gtd_gradient} can be as shallow as $0.062 \\usk\\mathrm{dex}\\usk\\mathrm{kpc}^{-1}$. 
It is, however, unlikely that the increased abundance of C in the gas phase and the less efficient coagulation have such an important effect. Conversely, if we neglect these effects, but consider the CO\/C$^{18}$O gradient, we can obtain an upper limit for the gradient slope. Under these conditions, $\\ensuremath{\\gamma}$ varies by $0.132 \\usk\\mathrm{dex}\\usk\\mathrm{kpc}^{-1}$.\n\n\tThe CO\/C$^{18}$O abundance gradient, and the larger fractions of CO-dark and atomic gas at lower metallicity \\citep[see e.g.][]{Elmegreen+13_nat495_487,Rubio+15_nat525_218}, counteract the effects of the increased fraction of C in the gas phase and of the dust size distribution and composition. \n\tFor simplicity, as a fiducial value, we have therefore considered the relation for which the variation in grain size distribution and the increased fraction of CO-dark gas cancel out the impact of the CO\/C$^{18}$O abundance gradient (Fig.~\\ref{fig:gtd_fit}a).\n\t\n\t\\medskip\n\t\n\tTheoretical considerations also indicate that the gas-to-dust ratio has to be higher in the FOG.\n\tIn the following, we show that at a distance of $\\sim15\\usk\\mathrm{kpc}$ the fraction of heavy elements locked into dust grains would have to reach $80\\%$ to maintain $\\ensuremath{\\gamma}$ at the local value of $136$.\n\tFollowing \\citet{Mattsson+12_mnras423_38} we can conservatively use the O\/H gradient to obtain an approximation of the galactocentric metallicity behaviour. \n\tThe radial $Z$ gradient for our Galaxy can be reliably obtained via measurements of the abundance of heavy elements in Cepheids, which are young enough to represent the present-day composition. \n\tWe use the results from \\citet{LuckLambert11_aj142_136}, who consider a large number of Cepheids with $5\\usk\\mathrm{kpc}\\lesssim D_{GC}\\lesssim17\\usk\\mathrm{kpc}$, deriving for oxygen the gradient $d\\mathrm{[O\/H]}\/dD_{GC} = -0.056 \\,\\mathrm{dex}\\usk\\mathrm{kpc}^{-1}$. 
We obtain for $Z$:\n\t\\begin{equation}\n\t\t\\log(Z) = -0.056 D_{GC} - 1.176\\label{eq:Z_grad_MW},\n\t\\end{equation} \n\twhich gives, at the location of the Sun, an H-to-metal mass ratio of $\\sim44$. If approximately $40\\%$ of the heavy elements are locked into dust grains \\citep{Dwek98_apj501_643}, this implies $\\ensuremath{\\gamma}=110$, in very good agreement with the locally-estimated value of $136$.\n\t\t\n\tThe dust-to-gas mass ratio $Z_{d}$ is the inverse of $\\ensuremath{\\gamma}$, and the fraction of mass in heavy elements locked in dust grains, the dust-to-metal ratio, can be expressed as the ratio of $Z_{d}$ and the gas metallicity, that is, $Z_{d}\/Z$. A dust-to-metal ratio of one would imply that dust grains contain all elements heavier than helium.\n\tIf we were to assume that the gas-to-dust ratio remains constant at $\\ensuremath{\\gamma} \\equiv Z_{d}^{-1}=136$ (implying that progressively more heavy elements end up in dust grains), using Eq.~\\ref{eq:Z_grad_MW} we would find that the dust-to-metal ratio $Z_{d}\/Z$ reaches $80\\%$ at $D_{GC}\\approx15\\usk\\mathrm{kpc}$. In addition, the metallicity gradient is most likely steeper than the oxygen gradient \\citep[e.g.][]{Mattsson+12_mnras423_38}, moving this limit inwards, thus indicating that in the FOG the gas-to-dust ratio is bound to be higher.\n\t\n\tUsing our results for the increase of the gas-to-dust ratio with $D_{GC}$, the dust-to-metal ratio can be derived from Eqs.~\\ref{eq:gtd_gradient} and \\ref{eq:Z_grad_MW}:\n\t\\begin{equation}\n\t\t\\log\\left( \\frac{Z_{d}}{Z} \\right) = \\left( -0.031 \\left[\\asymErr{+0.025}{-0.047} \\right] \\right) D_{GC} - \\left( 0.26 \\left[ \\asymErr{-0.21}{+0.45} \\right] \\right), \\label{eq:Zdust}\n\t\\end{equation}\n\twhich shows that the dust-to-metal ratio decreases with galactocentric radius. 
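For concreteness, the numbers quoted above can be checked directly against Eq.~\\ref{eq:Z_grad_MW}; a solar galactocentric distance of $8.34\\usk\\mathrm{kpc}$ is assumed here, a value the text does not state explicitly:\n\t\\begin{equation*}\n\t\tZ_{\\odot} = 10^{-0.056\\times 8.34 - 1.176} \\approx 0.023, \\qquad Z_{\\odot}^{-1} \\approx 44, \\qquad \\ensuremath{\\gamma}_{\\odot} = \\left(0.4\\,Z_{\\odot}\\right)^{-1} \\approx 110,\n\t\\end{equation*}\n\tin line with the H-to-metal mass ratio of $\\sim44$ and $\\ensuremath{\\gamma}=110$ quoted above.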
\n\tA decrease of the dust-to-metal ratio is the most common situation in late-type galaxies and indicates that grain growth in the dense ISM dominates over dust destruction \\citep[e.g.][]{Mattsson+12_mnras423_38,Mattsson+14_mnras444_797}. This strongly reinforces the previous argument that a constant $\\ensuremath{\\gamma}=136$ cannot be sustained in the far outer Galaxy, because the dust-to-metal ratio virtually always decreases moving outwards in the disk for Milky-Way-type galaxies.\n\n\t\n\tA good test bench for Eq.~\\ref{eq:gtd_gradient} is represented by the Magellanic Clouds. Combining Eqs.~\\ref{eq:gtd_gradient} and \\ref{eq:Z_grad_MW}, and using the appropriate metallicity \\citep[$Z = 0.5 \\, Z_{\\odot}$ and $Z = 0.2\\, Z_{\\odot}$ for the Large and Small Magellanic Clouds, respectively;][]{RussellDopita92_apj384_508}, we obtain $\\ensuremath{\\gamma} \\sim 420\\asymErr{+250}{-110}$ and $\\ensuremath{\\gamma} \\sim 1750\\asymErr{+4100}{-900}$, in excellent agreement with the results of \\citet{RomanDuval+14_apj797_86}.\n\t \n\\section{Summary and conclusions}\n\n\tWe combined our molecular-line surveys towards dense and massive molecular clouds in the inner and far outer disk of the Milky Way to study how the gas-to-dust ratio $\\ensuremath{\\gamma}$ varies with galactocentric distance and metallicity.\n\tWe estimated conservative limits for the galactocentric gradient of the gas-to-dust mass ratio, by considering multiple factors that influence its slope (see Sect.~\\ref{sec:discussion}), and defined, for simplicity, the fiducial value as the case where dust coagulation and the larger fraction of carbon in the gas phase in the FOG balance the CO\/C$^{18}$O abundance gradient.\n\tThe gas-to-dust mass ratio is shown to increase with $D_{GC}$ according to Eq.~\\ref{eq:gtd_gradient}, and this gradient is compared with that of metallicity, as obtained from Cepheids by \\citet{LuckLambert11_aj142_136}.\n\tThe variation in gas-to-dust ratio is steeper than that 
of $Z$ ($\\ensuremath{\\gamma} \\propto Z^{-1.4\\asymErr{+0.3}{-1.0}}$),\n\timplying that the dust-to-metal ratio decreases with distance from the Galactic centre. This indicates that dust condensation in the dense ISM dominates over dust destruction, which is typical of late-type galaxies like ours \\citep{Mattsson+12_mnras423_38,Mattsson+14_mnras444_797}. \n\tThe predictions obtained combining Eqs.~\\ref{eq:gtd_gradient} and \\ref{eq:Z_grad_MW} for the metallicities of the Magellanic Clouds are in excellent agreement with the results on $\\ensuremath{\\gamma}$ in these galaxies by \\citet{RomanDuval+14_apj797_86}.\n\n\tThe use of Eq.~\\ref{eq:gtd_gradient} to calculate the appropriate value of $\\ensuremath{\\gamma}$ at each galactocentric radius is fundamental for the study of individual objects, allowing us to derive accurate H$_{2}$ column densities and total masses from dust continuum observations, as well as for any study that compares the properties of molecular clouds in the inner and outer Galaxy. This opens the way for a complete view of the Galactic disk and of the influence of $Z$ on the physics and chemistry of molecular clouds. \n\n\\begin{acknowledgements}\nWe are thankful to Frank Israel for a discussion on the uncertainties involved in deriving the gas-to-dust ratio, and to the anonymous referee, who both helped to improve the quality and clarity of this paper. This work was partly carried out within the Collaborative Research Centre 956, sub-project A6, funded by the Deut\\-sche For\\-schungs\\-ge\\-mein\\-schaft (DFG). This paper is based on data acquired with the Atacama Pathfinder EXperiment (APEX). APEX is a collaboration between the Max Planck Institute for Radioastronomy, the European Southern Observatory, and the Onsala Space Observatory. 
\nThis research made use of Astropy, a community-developed core Python package for Astronomy \\citep[][\\url{http:\/\/www.astropy.org}]{astropy_2013}, of NASA's Astrophysics Data System, and of Matplotlib \\citep{Hunter_2007_matplotlib}. MCWeeds makes use of the PyMC package \\citep{Patil+10_jstatsoft35_1}.\n\\end{acknowledgements}\n\n \n\\bibliographystyle{bibtex\/aa}\n\n\\section{Introduction}\nAcademic publication analysis has always been of interest to the research community. Earlier work focused on citation analysis and journal impact factor analysis, to help evaluate research impact. In recent years, there has been increasing interest in the social aspects of this research: for example, studies of patterns of collaboration, of automatically inferring advisor-advisee relationships, and of finding or predicting leaders and rising stars in research areas. A common challenge to such research is how to deal with the lack of data or, when data is available, its incorrectness and incompleteness. However, since the data volume is large, and there exist all kinds of relationships between data items, it is often possible to recover certain missing (or correct erroneous) data items from the data we have. In this paper, we study a particular problem of this sort - estimating the missing year information associated with publications (and hence the authors' years of active publication).\n\nRecently, data cleaning on academic social networks has received much attention. 
In KDD Cup 2013, the two challenges are the Author-Paper Identification Challenge and the Author Disambiguation Challenge. For both challenges, the publishing year of each paper is important background knowledge for the design of algorithms. However, the given data set~\\cite{kddcup2013} has a high \\emph{Missing Year Ratio}, $\\frac{155784}{2257249}\\approx 6.90\\%$ (there are in total 2,257,249 papers, of which 155,784 are missing year papers). This is an important motivation for developing algorithms to recover the missing year attribute of publications; we call this the Missing Year Estimation (MYE) problem.\n\nMissing data in bibliographic data sets can occur for a variety of reasons. We believe one reason is that cited papers are also included in the data set, even if the original source is unavailable. References are sometimes incomplete, leading to missing and erroneous data. It is also possible that some papers are recovered from scanned sources, which makes it hard to extract all attributes.\n\nWe first propose a simple algorithm that only makes use of the ``direct'' information, such as paper citation\/reference relationships or paper-author relationships. The result of this simple algorithm is used as a benchmark for comparison. Our goal is to develop sophisticated algorithms that increase both the coverage (measured by the percentage of missing year papers recovered) and the accuracy (measured by the mean absolute error, or MAE, between the estimated and the real year).\nThe more advanced algorithms we propose and study involve information propagation rules, so that information that is multiple hops away can also be utilized.\nFor each algorithm, we propose three versions according to the given academic social network type: a) Homogeneous (only paper citation links), b) Bipartite (only paper-author relations), and c) Heterogeneous (both paper citation and paper-author relations). 
We carry out experiments on three public data sets (MSR Libra, DBLP and APS), applying the K-fold cross validation method.\n\nOur contributions are: we formulate the problem and introduce a basic (benchmark) algorithm that can already recover most of the missing years if both citation and author information are available. We then systematically develop improved algorithms based on methods in machine learning. These advanced algorithms further improve both coverage and accuracy (by around $20\\%$ in the paper citation network, and $8\\%$ in the paper-author bipartite network and the heterogeneous network) over the benchmark algorithm. In addition, the coverage achieved by the advanced algorithms closely matches the results derived from the analytical model.\n\nThe remainder of the paper is organized as follows: we first introduce the estimation methodology in section~\\ref{Sec:method}, then we describe the data sets used and the experimental results in section~\\ref{Sec:exp}. In section~\\ref{Sec:related}, we discuss related work, and we conclude in section~\\ref{Sec:conclusion}.\n\n\\section{Methodology}\\label{Sec:method}\nIn this section, we first introduce the notations and the three types of academic social networks we are dealing with. For each network type, we propose three corresponding missing year estimation (MYE) algorithms, with different complexity levels.\n\n\\subsection{Notations and the three types of networks}\nIn a general academic social network, there are many types of nodes and edges. 
For example, node types can be papers, authors, publishing venues, etc.; and edge types can be citations (linking papers to the papers they cite), authorships (connecting authors to the papers they have written), and so on.\n\nIn the MYE problem, we are mainly interested in two node types: papers and authors; and two edge types: paper citations and paper authorships, which induce three academic social networks:\n\\begin{enumerate}\n\\item [a)] Paper citation network, denoted by a directed graph $G_P = (V_P, E_P)$, where $V_P$ is the set of papers and $E_P$ is the set of citation links. Since citation links have directions, each citation link can be represented by an ordered paper pair\\footnote{Throughout the paper, we will adopt this special order of the paper pair for representing citation links. The reason is that we try to keep this order consistent with the increasing time line, e.g. a paper with an earlier publishing time (left position on the time line) is cited by a later one (right position).}, i.e., $\\forall e = (t, f) \\in E_P$, where $t, f \\in V_P$, meaning this citation link points to paper $t$ and originates from paper $f$.\n\n\\item[b)] Paper authorship network, denoted by $G_{AP} = (V_A \\cup V_P, E_{AP})$, where $V_A$ is the set of authors, $V_P$ is the set of papers, and the edges in $E_{AP}$ connect authors to the papers they have produced (authorship). Hence $G_{AP}$ is a bipartite graph and we have $\\forall e = (a, p) \\in E_{AP}$, where $a \\in V_A$ and $p \\in V_P$.\n\n\\item[c)] Heterogeneous network, consisting of both the paper citation network and the paper authorship network, denoted by $G = (V_A \\cup V_P, E_P \\cup E_{AP})$.\n\\end{enumerate}\n\nPapers are further categorized into two exclusive sets: those with known year information, $V_P^K$, and those with unknown (missing) year information, $V_P^U$. Hence we have $V_P = V_P^K \\cup V_P^U$ and $V_P^K \\cap V_P^U = \\emptyset$. 
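As a concrete illustration, the two edge types and the known\/unknown split of $V_P$ might be represented as follows; this Python encoding (tuples for edges, a dictionary of years) is our own assumption, not part of the paper's formalism.

```python
from collections import defaultdict


def build_citation_index(edges):
    """Build T(p) (papers citing p) and F(p) (papers cited by p)
    from citation edges (t, f), where paper f cites paper t."""
    T, F = defaultdict(set), defaultdict(set)
    for t, f in edges:
        T[t].add(f)   # f cites t, so f belongs to T(t)
        F[f].add(t)   # t is cited by f, so t belongs to F(f)
    return T, F


def split_papers(papers, year):
    """Partition V_P into known-year V_P^K and missing-year V_P^U."""
    known = {p for p in papers if year.get(p) is not None}
    return known, set(papers) - known
```

With this layout, the sets used throughout the paper ($T(p)$, $F(p)$, $V_P^K$, $V_P^U$) are all available in O(1) per lookup after a single pass over the edge list.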
The remaining notations are listed in\nTable~\\ref{Tab:Notation}:\n\\begin{table}[htb]\n\\centering\n\\caption{List of Notations}\n\\begin{tabular}{|c|l|}\n\\hline\n$Y(p),\\;\\forall\\;p \\in V_P$ & the real publishing year of paper\n$p$, note:\\\\\n& $\\forall\\;p^U \\in V_P^U, Y(p^U)$ is only used for validation purpose.\\\\\n\\hline\n$T(p),\\;\\forall\\;p \\in V_P$ & the set of papers that cite paper $p$,\\\\\n& i.e., $T(p) = \\{f| \\forall f \\in V_P, s.t., (p, f)\n\\in E_P\\}$.\\\\\n\\hline\n$F(p),\\;\\forall\\;p \\in V_P$ &the set of papers that are cited\nby paper $p$,\\\\\n& i.e., $F(p) = \\{t| \\forall t \\in V_P, s.t., (t, p) \\in E_P\\}$.\\\\\n\\hline\n$\\hat{Y}(p^U),\\;\\forall\\;p^U \\in V_P^U$ & the estimation result for the missing year paper $p^U$.\\\\\n\\hline\n$P(a),\\;\\forall\\;a\\in V_A$ &the paper set that are written by author $a$.\\\\\n\\hline\n$A(p),\\;\\forall\\;p\\in V_P$ &the author set that have written paper $p$.\\\\\n\\hline\n$w(p, q), \\;\\forall p,q\\;\\in V_P$ & the Consistent-Coauthor-Count between two papers, \\\\\n & $w(p,q)=w(q,p)=|A(p)\\cap A(q)|$\\\\\n\\hline\n$\\Omega(p),\\;\\forall\\;p \\in V_P$ & the Consistent-Coauthor-Pair set\nof a paper $p \\in V_P$,\\\\\n& $\\Omega(p) = \\{q|q \\in V_P\\;\\textrm{and}\\;w(p,q) > 1\\}$\\\\\n\\hline\n$AW\\_Min(a),\\;AW\\_Max(a),$ & the lower and upper bounds of the active publishing\\\\\n$\\forall\\;a\\in V_A$ & time window of author $a$.\\\\\n\\hline\n$\\hat{Y}_{CMin}(p^U), \\hat{Y}_{CMax}(p^U),$ & the lower and upper\nbounds of the year estimation \\\\\n$\\forall\\;p^U \\in V_P^U$ & window, derived in the paper citation network $G_P$.\\\\\n\\hline\n$\\hat{Y}_{AMin}(p^U),\\;\\hat{Y}_{AMax}(p^U)$,\n& the lower and upper bounds of the year estimation\\\\\n$\\forall\\;p^U \\in V_P^U$ & window, derived in the paper authorship network $G_{AP}$\\\\\n\\hline\n$\\hat{Y}_{GMin}(p^U),\\;\\hat{Y}_{GMax}(p^U)$,\n& the lower and upper bounds of the year estimation\\\\\n$\\forall\\;p^U \\in V_P^U$ & 
window, derived in the heterogeneous network $G$\\\\\n\\hline\n\\end{tabular}\n\\label{Tab:Notation}\n\\end{table}\n\n\\subsection{MYE for the citation network $G_P$}\nWe first look at a simple example of the missing year estimation problem in the paper citation network, shown in Fig.~\\ref{Fig:CWExample}. In this example, there are 12 papers ($a - l$) and 10 citation edges. Five papers ($a, b, e, i, j$) have no year information (i.e. $\\in V_P^U$) and the other seven papers ($c, d, f, g, h, k, l$) have publishing years (i.e. $\\in V_P^K$). Later on, we will use this example to demonstrate the three MYE algorithms designed for the citation network $G_P$.\n\\begin{figure}[hbt]\n \\centering\n \\includegraphics[width=2.4in]{CW_example.eps}\n \\caption{A simple example of a citation network with 12 papers ($a - l$),\n where papers ($a, b, e, i, j$) are $\\in V_P^U$ and the remaining\n ($c, d, f, g, h, k, l$) are $\\in V_P^K$.}\n \\label{Fig:CWExample}\n\\end{figure}\n\nThe main idea of estimating the missing years in the citation network $G_P$ is to make use of paper citing activities, stated as Assumption~\\ref{assumption1}, together with the available information: a) the year information of the known year papers; b) the citation relationships (the edges of $G_P$).\n\\begin{assumption}\nNormally\\footnote{Since the exceptions are rare, we believe that ignoring such exceptions is reasonable and does not harm our algorithm design.}, a paper can only cite papers published before it, i.e., Eq.~(\\ref{assumption1}) is satisfied:\n\\begin{equation} \\label{assumption1}\nY(t) \\leq Y(f),\\;\\;\\forall\\;\\;e = (t, f) \\in E_P,\\;\\;t, f \\in V_P.\n\\end{equation}\n\\end{assumption}\nAssumption~\\ref{assumption1} provides a way to determine either a possible upper bound of the target paper's missing year, when it is cited by a known year paper (i.e., $t\\in V_P^U$ and $f\\in V_P^K$); or a possible lower bound, when it cites a 
known year paper (i.e., $t\\in V_P^K$ and $f\\in V_P^U$). For\nexample, using Fig.~\\ref{Fig:CWExample}, we can look at paper $a$\n(missing year) and $d$ (published in 1999) with a citation link from\n$d$ to $a$, we take 1999 as one possible upper bound of $a$'s\npublishing year, i.e., $Y(a) \\leq 1999$. Similarly, when we look at\npaper $d$ and $e$, we get a lower bound of the real publishing year\nof $e$, i.e., $1999 \\leq Y(e)$.\n\nFollowing this logic, the missing year estimation task can be\nseparated into two steps: (1) deriving the possible year estimation\nwindow (two bounds); (2) calculating the missing year value based on\nthe derived window.\n\nFor each step, we propose two methods with different complexity, the\nsimple (``Sim'') version and the advanced (``Adv'') version. In the\nnext three subsections, we will introduce the three algorithms\ndesigned for MYE in paper citation network $G_P$. The three\nalgorithms are different combinations of the two methods in each\nstep, listed in Table~\\ref{Tab:GPCombination}.\n\\begin{table}[htb]\n\\centering\n\\caption{Combination of the two proposed methods in each step, for\nthe three algorithms for MYE in $G_P$.}\n\\begin{tabular}{c|c|c}\n\\hline\nAlgorithm & Window derivation method & Year value calculation method\\\\\n\\hline\n$G_P$-SS & Simple & Simple\\\\\n\\hline\n$G_P$-AS & Advanced & Simple\\\\\n\\hline\n$G_P$-AA & Advanced & Advanced\\\\\n\\hline\n\\end{tabular}\n\\label{Tab:GPCombination}\n\\end{table}\n\\subsubsection{Algorithm for MYE in $G_P$: $G_P$-SS}\nWe will first introduce the simple method for each of the two steps,\nthen we will show how $G_P$-SS works by demonstrating the results on\nthe example shown in Fig.~\\ref{Fig:CWExample}.\n\n{\\bf Simple Window Derivation Method:} The simple version of the\nwindow (bounds) derivation method only involves ``one round'' (or in\na ``direct'' manner), which means: (1) spatially, we only consider\nthose papers that are one-hop to the target missing year 
paper; (2)\ntemporally, we only consider immediate (given) information.\n\nPutting together (1) and (2), mathematically, we are deriving the\nbounds of the missing year paper $p^U \\in V_P^U$ through the subset\nof the papers: $F(p^U) \\cap V_P^K$ (for the lower bound) and $T(p^U)\n\\cap V_P^K$ (for the upper bound) as long as they are not empty. For\nexample, if we look at paper $i$ in Fig.~\\ref{Fig:CWExample}, then\nonly $f$ and $g$ (one-hop away from $i$ and with year information)\nare used for deriving the lower bound, while only $k$ and $l$ for\nthe upper bound. Intuitively, when there are multiple bounds, we\nwill take the tightest one by applying Eq.~(\\ref{eq:y_cmin}) and\n(\\ref{eq:y_cmax}):\n\\begin{eqnarray}\n\\hat{Y}_{CMin}(p^U) &=& \\max_{f\\;\\in\\;F(p^U) \\cap V_P^K} Y(f),\\;\n\\textrm{if}\\;F(p^U) \\cap V_P^K\\neq\\emptyset;\\nonumber\\\\\n&=&\\;\\; -\\;\\infty,\\quad\\textrm{otherwise}; \\label{eq:y_cmin}\\\\\n\\hat{Y}_{CMax}(p^U) &=& \\min_{t\\;\\in\\;T(p^U) \\cap V_P^K}\nY(t),\\;\\textrm{if}\\;T(p^U) \\cap V_P^K\\neq\\emptyset;\\nonumber\\\\\n&=&\\;\\; +\\;\\infty,\\quad\\textrm{otherwise}, \\label{eq:y_cmax}\n\\end{eqnarray}\nwhere $\\hat{Y}_{CMin}(p^U)$ denotes the largest possible lower bound\nof paper $p^U$ and $\\hat{Y}_{CMax}(p^U)$ denotes the smallest\npossible upper bound. 
Here the $-\\infty$ and $+\\infty$ have no\npractical meaning but are used just to represent the non-existent bounds.\nIn the real implementation, they can be assigned to some pre-defined\nconstant variables such as ``Default\\_Win\\_Min'' and\n``Default\\_Win\\_Max''.\n\nTogether with the conditions of non-existent bounds, we thus have\nfour types of possible year estimation windows:\n\\begin{eqnarray}\n&\\textrm{Type-1: }& [\\hat{Y}_{CMin}(p^U), \\hat{Y}_{CMax}(p^U)]; \\nonumber\\\\\n&\\textrm{Type-2: }& [\\;\\;\\hat{Y}_{CMin}(p^U), \\quad+\\;\\infty\\quad);\\nonumber\\\\\n&\\textrm{Type-3: }& (\\quad-\\;\\infty,\\quad \\hat{Y}_{CMax}(p^U)\\;\\;];\\nonumber\\\\\n&\\textrm{Type-4: }&\n(\\quad-\\;\\infty\\quad,\\quad+\\;\\infty\\quad).\\nonumber\n\\end{eqnarray}\nThe Type-4 window contains no information for estimation,\nhence we define \\emph{Uncovered Paper} to be those missing year\npapers with a Type-4 estimation window. On the other hand, it is\npossible to make a proper estimation on the year value for the\nmissing year papers with Type-1, Type-2 or Type-3 estimation window.\n\n{\\bf Simple Year Value Calculation Method:}\nBased on the derived possible year estimation window for each\nmissing year paper $p^U$, the next step is to make a guess on its\nreal publishing year. 
The simple calculation method works in a straightforward way, Eqs.~(\\ref{eq:CWestType1})-(\\ref{eq:CWestType4}):\n\\begin{eqnarray}\n&\\textrm{Type-1: }& \\hat{Y}(p^U) = \\frac{\\hat{Y}_{CMin}(p^U) + \\hat{Y}_{CMax}(p^U)}{2}, \\label{eq:CWestType1}\\\\\n&\\textrm{Type-2: }& \\hat{Y}(p^U) = \\hat{Y}_{CMin}(p^U),\\label{eq:CWestType2}\\\\\n&\\textrm{Type-3: }& \\hat{Y}(p^U) = \\hat{Y}_{CMax}(p^U),\\label{eq:CWestType3}\\\\\n&\\textrm{Type-4: }& \\emph{Uncovered}\\label{eq:CWestType4}.\n\\end{eqnarray}\nIn summary, if both bounds exist (Type-1), we take the average of the two bounds, Eq.~(\\ref{eq:CWestType1}) (assuming that $Y(p^U)$ follows a symmetric discrete distribution centered at the middle point of the possible estimation window). If only one bound exists (Type-2 or Type-3), we take the bound value as the calculation result. Otherwise (Type-4), instead of making a random guess, we label the paper \\emph{Uncovered}, which means that its year cannot be estimated properly. 
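To make the two simple steps concrete, here is a minimal Python sketch of $G_P$-SS; the encoding of the network as a list of $(t, f)$ citation pairs plus a dictionary of known years is our own illustrative choice, and integer division stands in for the midpoint of Eq.~(\ref{eq:CWestType1}).

```python
def estimate_years(edges, known_year):
    """Simple MYE (G_P-SS sketch) on a citation network.

    edges: list of (t, f) pairs, meaning paper f cites paper t.
    known_year: dict mapping papers in V_P^K to their publishing year.
    Returns a dict of estimated years; papers absent from the result
    (Type-4 window, or no known-year neighbour at all) are 'Uncovered'.
    """
    INF = float("inf")
    lo, hi = {}, {}  # tightest lower / upper bound per missing-year paper
    for t, f in edges:
        if f in known_year and t not in known_year:   # t is cited by a known paper
            hi[t] = min(hi.get(t, INF), known_year[f])
        if t in known_year and f not in known_year:   # f cites a known paper
            lo[f] = max(lo.get(f, -INF), known_year[t])
    est = {}
    for p in set(lo) | set(hi):
        l, h = lo.get(p, -INF), hi.get(p, INF)
        if l > -INF and h < INF:       # Type-1: both bounds -> midpoint
            est[p] = (l + h) // 2
        elif l > -INF:                 # Type-2: lower bound only
            est[p] = l
        else:                          # Type-3: upper bound only
            est[p] = h
    return est
```

On the running example of Fig.~\ref{Fig:CWExample}, this reproduces the last row of Table~\ref{Tab:GpSS_example}; the years of $k$ and $l$ are not all stated in the text, but any values with $\min\{Y(k), Y(l)\} = 2005$ give the same bounds.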
Later on, in the performance evaluation section, we shall consider the uncovered ratio ($=\n\\frac{\\textrm{Total \\# Uncovered}}{|V_P^U|}$) of all the proposed algorithms as one of the performance metrics.\n\nConsidering the example in Fig.~\\ref{Fig:CWExample}, we list both the intermediate and final estimation results obtained by applying $G_P$-SS in Table~\\ref{Tab:GpSS_example}.\n\\begin{table}[htb]\n\\centering\n\\begin{tabular}{|r|c|c|c|c|c|}\n \\hline\n$p^U$ in Fig.\\ref{Fig:CWExample}& $a$ & $b$ & $e$ & $i$ & $j$\\\\\n \\hline\n $F(p^U)$ & $\\emptyset$ & $\\emptyset$ & $d$ & $e, f, g$ & $i$\\\\\n\\hline\n $F(p^U) \\cap V_P^K$ & $\\emptyset$ & $\\emptyset$ & $d$ & $f, g$ & $\\emptyset$\\\\\n\\hline\n $T(p^U)$ & $d$ & $\\emptyset$ & $h, i$ & $j, k, l$ & $\\emptyset$\\\\\n\\hline\n $T(p^U) \\cap V_P^K$ & $d$ & $\\emptyset$ & $h$ & $k, l$ & $\\emptyset$\\\\\n\\hline\n $\\hat{Y}_{CMin}(p^U)$ & $-\\;\\infty$ & $-\\;\\infty$ & 1999 & 2003 & $-\\;\\infty$\\\\\n\\hline\n $\\hat{Y}_{CMax}(p^U)$ & 1999 & $+\\;\\infty$ & 2007 & 2005 & $+\\;\\infty$\\\\\n\\hline\n $\\hat{Y}(p^U)$ & 1999 & \\emph{Uncovered}& 2003 & 2004 & \\emph{Uncovered}\\\\\n\\hline\n\\end{tabular}\n\\caption{The intermediate and final estimation results obtained by running the\n$G_P$-SS algorithm on the example of\nFig.~\\ref{Fig:CWExample}}\n\\label{Tab:GpSS_example}\n\\end{table}\n\nIn Table~\\ref{Tab:GpSS_example}, the first row lists the five papers belonging to $V_P^U$. The second and third rows list the papers cited by each of the five papers, where the third row only contains papers with year information; e.g., paper $i$ cites three papers, $F(i) = \\{e, f, g\\}$, and only two of them have year information, $F(i) \\cap V_P^K = \\{f, g\\}$. The fourth and fifth rows list the papers that cite each of the five papers, where the fifth row only contains papers belonging to $V_P^K$. 
The next two rows are the two bounds of the possible estimation window obtained by applying Eqs.~(\\ref{eq:y_cmin}) and (\\ref{eq:y_cmax}), e.g., $\\hat{Y}_{CMin}(i) = \\max\\{Y(f), Y(g)\\} = \\max\\{2003, 2001\\} = 2003$. The last row shows the results derived by the simple year value calculation scheme, Eqs.~(\\ref{eq:CWestType1})-(\\ref{eq:CWestType4}).\n\n$G_P$-SS is simple, quick, and easy both to implement and to understand, but its limitation is also obvious: it does not fully utilize the available information, which results in a high uncovered ratio ($= 2\/5$ in Table~\\ref{Tab:GpSS_example}) and looser bounds. This raises a question: can the information of paper $i$ (the bounds or the estimate derived by running $G_P$-SS) be useful for its missing year neighbour papers $j$ and $e$? The answer is yes, and the next algorithm is designed to exploit this.\n\n\\subsubsection{Algorithm for MYE in $G_P$: $G_P$-AS}\nCompared to $G_P$-SS, $G_P$-AS applies the same simple year value calculation method, Eqs.~(\\ref{eq:CWestType1})-(\\ref{eq:CWestType4}), but an advanced window derivation method with information propagation.\n\nA quick way of extending $G_P$-SS would be to simply run it repeatedly. In this way, the estimated result for a missing year paper (e.g. $i$ in Fig.~\\ref{Fig:CWExample}) in the previous rounds can be used to derive bounds for its missing year neighbour papers (e.g. $j$ and $e$ in Fig.~\\ref{Fig:CWExample}) in the subsequent rounds. 
However, since the estimated year for $i$ can be inaccurate, such naive repetition would propagate and even amplify the inaccuracy.\n\n{\\bf Advanced Window Derivation Method:}\nGenerally in $G_P$, for each citation edge linking two papers, there are three possible conditions: (a) both papers have year information ($\\in V_P^K$); (b) both papers are missing year ($\\in V_P^U$); or (c) one has year information while the other does not. The limitation of the simple window derivation method is that it only works under condition (c). By rephrasing Eq.~(\\ref{assumption1}) as Eq.~(\\ref{assumption1New}), the advanced window derivation method removes this limitation without introducing any inaccuracy into the propagation.\n\\begin{equation}\\label{assumption1New}\n\\hat{Y}_{CMin}(t) \\;\\leq\\; Y(t) \\;\\leq\\; Y(f) \\;\\leq\\;\n\\hat{Y}_{CMax}(f).\n\\end{equation}\n\nThe rationale behind Eq.~(\\ref{assumption1New}) is to extend the bound transmission rules to pairs of missing year papers: (a) if $\\hat{Y}_{CMin}(t)$ exists, it is also a lower bound of $f$; (b) if $\\hat{Y}_{CMax}(f)$ exists, it is also an upper bound of $t$. 
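These two transmission rules amount to an iterative fixed-point computation over the citation edges. As a rough illustration (the edge-list and dictionary encoding is our own choice, not part of the algorithm's specification), the propagation can be sketched as:

```python
def propagate_bounds(edges, known_year):
    """Iterate the bound transmission rules over citation links until no
    bound changes. An edge (t, f) means paper f cites paper t, so Y(t) <= Y(f).
    Known papers carry fixed bounds lo = hi = Y; unknown papers start unbounded."""
    NEG, POS = float("-inf"), float("inf")
    papers = {p for edge in edges for p in edge}
    lo = {p: known_year.get(p, NEG) for p in papers}
    hi = {p: known_year.get(p, POS) for p in papers}
    changed = True
    while changed:                  # stop when a full pass makes no update
        changed = False
        for t, f in edges:
            if f not in known_year and lo[t] > lo[f]:
                lo[f], changed = lo[t], True   # lower bounds flow to the citing paper
            if t not in known_year and hi[f] < hi[t]:
                hi[t], changed = hi[f], True   # upper bounds flow to the cited paper
    return lo, hi
```

Because known papers have lo = hi = Y, the same two update rules cover both the known/unknown and the unknown/unknown edge conditions; on the running example this yields, e.g., the window $(2003, +\infty)$ for paper $j$, which the one-round method leaves uncovered.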
The\npseudo code of the advanced window derivation method is included\nbelow.\n\\begin{algorithm}\n\\caption{The pseudo code of advanced window derivation method}\n\\label{Alg:GPAS}\n\\begin{algorithmic}[1]\n\\REPEAT\n \\STATE UpCnt $\\leftarrow 0$;\n \\FORALL{$e = (t, f) \\in E_P, t,f\\;\\in V_P$}\n \\STATE f\\_CMin\\_Before$\\leftarrow \\hat{Y}_{CMin}(f)$;\n \\STATE t\\_CMax\\_Before$\\leftarrow \\hat{Y}_{CMax}(t)$;\n \\IF{$t, f \\in V_P^U$}\n \\STATE $\\hat{Y}_{CMin}(f) \\leftarrow \\max\\{\\hat{Y}_{CMin}(f), \\hat{Y}_{CMin}(t)\\}$;\n \\STATE $\\hat{Y}_{CMax}(t) \\leftarrow \\min\\{\\hat{Y}_{CMax}(t), \\hat{Y}_{CMax}(f)\\}$;\n \\ELSIF{$t \\in V_P^K$, $f \\in V_P^U$}\n \\STATE $\\hat{Y}_{CMin}(f) \\leftarrow \\max\\{\\hat{Y}_{CMin}(f), Y(t)\\}$;\n \\ELSIF{$t \\in V_P^U$, $f \\in V_P^K$}\n \\STATE $\\hat{Y}_{CMax}(t) \\leftarrow \\min\\{\\hat{Y}_{CMax}(t), Y(f)\\}$;\n \\ENDIF\n \\STATE \/* Check update counts. *\/;\n \\IF{$\\hat{Y}_{CMin}(f) \\neq\\;\\;$f\\_CMin\\_Before}\n \\STATE UpCnt $\\leftarrow$ UpCnt $ + 1$;\n \\ENDIF\n \\IF{$\\hat{Y}_{CMax}(t) \\neq\\;\\;$t\\_CMax\\_Before}\n \\STATE UpCnt $\\leftarrow$ UpCnt $ + 1$;\n \\ENDIF\n \\ENDFOR\n\\UNTIL{UpCnt$\\;=\\;0$; \/* When no update happens, loop ends. *\/}\n\\end{algorithmic}\n\\end{algorithm}\n\nIn Algorithm~\\ref{Alg:GPAS}, we first initialize a local variable\n``UpCnt'' which records the total number of bound updates in each\nloop (Line 2). 
Lines 3-21 form the loop that processes each citation link of $G_P$. Lines 9-13 are the same as the simple window derivation method, Eq.~(\ref{eq:y_cmin}) and Eq.~(\ref{eq:y_cmax}), while Lines 6-8 are the essential part that differs from the simple version (the implementation of the two bound transmission rules of Eq.~(\ref{assumption1New})).

In Table~\ref{Tab:GpAS_example}, we list both the intermediate and final estimation results of applying $G_P$-AS to the example of Fig.~\ref{Fig:CWExample}.
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
 \hline
 $p^U$ in Fig.~\ref{Fig:CWExample} & Round 1 & Round 2 & Round 3 & $\hat{Y}(p^U)$\\
 \hline
 $a$ & $(-\infty, 1999)$ & $(-\infty, 1999)$ & $(-\infty, 1999)$ & 1999\\
\hline
 $b$ & $(-\infty, +\infty)$ & $(-\infty, +\infty)$ & $(-\infty, +\infty)$ & \emph{Uncovered}\\
\hline
 $e$ & $(1999, 2007)$ & $(1999, 2005)$ & $(1999, 2005)$ & 2002\\
\hline
 $i$ & $(2003, 2005)$ & $(2003, 2005)$ & $(2003, 2005)$ & 2004\\
\hline
 $j$ & $(-\infty, +\infty)$ & $(2003, +\infty)$ & $(2003, +\infty)$ & 2003 \\
\hline
 & UpCnt = 5 & UpCnt = 2 & UpCnt = 0 & \\
\hline
\end{tabular}
\caption{The intermediate and final estimation results of applying $G_P$-AS on the example shown in Fig.~\ref{Fig:CWExample}}
\label{Tab:GpAS_example}
\end{table}

From Table~\ref{Tab:GpAS_example}, we can see that the advanced window estimation takes two rounds (no updates happen in round 3); the last column lists the year estimation results obtained by applying the simple year value calculation method to the derived bounds. Compared with Table~\ref{Tab:GpSS_example}, the improvement is obvious even for this simple example: (1) paper $j$ is no longer labeled as \emph{Uncovered}; hence, the uncovered ratio decreases to 1\/5; (2) paper $e$ gets a tighter possible estimation window.

So far, we have concentrated on the window derivation problem (apparently, paper $b$ in Fig.~\ref{Fig:CWExample} has no chance of getting a good estimate; we will discuss the relationship between the uncovered ratio and the structure of the given citation graph $G_P$ mathematically in Section~\ref{Sec:exp}). In the next algorithm, we investigate how the year value calculation method can be further improved.

\subsubsection{Algorithm for MYE in $G_P$: $G_P$-AA}
Given the derived estimation window $[\hat{Y}_{CMin}(p^U), \hat{Y}_{CMax}(p^U)]$ for a missing year paper $p^U$, recall how the simple year value calculation method, Eqs.~(\ref{eq:CWestType1})-(\ref{eq:CWestType4}), works: (1) if both bounds exist (Type-1), the calculation result is the mean of the two bounds; (2) if only one bound exists (Type-2 or Type-3), the calculation result equals the value of the existing bound; (3) if neither bound exists, the paper is labeled as \emph{Uncovered}, representing no proper estimation result.

The year estimation results for cases (1) and (2) affect the accuracy metrics, such as Mean Absolute Error (MAE), while case (3) only affects the uncovered ratio and is irrelevant to the other metrics. For case (1), it is rational to take the average of the two bounds, since the citing-to activity and cited-by activity can be considered symmetric. However, for case (2), more investigation is needed. The physical interpretation of case (2) is based on the assumption that the missing year paper has the same publishing time as the earliest paper that cites it (the upper bound exists), or the latest paper cited by it (the lower bound exists). In reality, this seldom happens. The best guess for a (Type-2 or Type-3) window case may be correlated to the bound value, not just a fixed distance to the bound (e.g. the simple calculation method takes a fixed zero distance).
Therefore, the solution for this problem is to find a proper function $\hat{y}(p^U) = d(WinType(p^U), BoundVal(p^U))$ to calculate $\hat{y}(p^U)$ for each missing year paper $p^U$, based on its derived estimation window type, denoted by $WinType(p^U)$ (which takes the value Type-2 or Type-3), and the value of the existing bound, denoted by $BoundVal(p^U)$.

To achieve this, we need a separate data set, denoted by $\mathcal{T}$, containing a series of 3-tuples $t = \{y_{t}, WinType_{t}, BoundVal_{t}\} \in \mathcal{T}$ for training purposes. Each 3-tuple corresponds to a missing year paper $t$ in this training set, where $y_t$ is the validated real publishing year, $WinType_t$ is the derived estimation window type and $BoundVal_t$ is the bound value. If we denote by $\mathcal{T}_{p^U}$ the subset of $\mathcal{T}$ with respect to $p^U$, $\mathcal{T}_{p^U} = \{t|t\in{\mathcal{T}}, WinType_{t} = WinType(p^U), BoundVal_{t} = BoundVal(p^U)\}$, then we get the following form of $d(\cdot)$ associated with $\mathcal{T}$:
\begin{equation}\label{eq:d_func}
\hat{y}(p^U) = d_{\mathcal{T}}\big(WinType(p^U),
BoundVal(p^U)\big) =
\frac{\sum_{t\in\mathcal{T}_{p^U}}y_t}{|\mathcal{T}_{p^U}|},
\end{equation}
where $|\mathcal{T}_{p^U}|$ is the element count of the set $\mathcal{T}_{p^U}$.

The idea of Eq.~(\ref{eq:d_func}) is to take the expectation of the real publishing years of those papers in the training set $\mathcal{T}$ that have the same window type and bound value as $p^U$. However, it is not trivial to find a proper training set satisfying two requirements: (1) its citation graph has properties and structure similar to the given $G_P$; (2) its $BoundVal$ values cover a wider range than $BoundVal(p^U), \forall p^U \in V_P^U$.

{\bf Advanced Year Value Calculation Method:}
We first propose a way to find a suitable training set $\mathcal{T}$ which can satisfy both (1) and (2) mentioned above.
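Eq.~(\ref{eq:d_func}) is simply a conditional mean over the training tuples. A minimal sketch, assuming a toy training set whose tuples are hypothetical:

```python
def d_func(training, win_type, bound_val):
    """Mean real year over training tuples (y, WinType, BoundVal) that match
    the window type and bound value of the paper being estimated."""
    years = [y for y, w, b in training if w == win_type and b == bound_val]
    return sum(years) / len(years) if years else None   # None: no matching tuple

training = [(2005, 'Type-2', 2003), (2006, 'Type-2', 2003), (1993, 'Type-3', 1999)]
print(d_func(training, 'Type-2', 2003))   # -> 2005.5
print(d_func(training, 'Type-3', 1999))   # -> 1993.0
```

When no tuple matches, a fallback (e.g. the simple calculation of Eqs.~(\ref{eq:CWestType1})-(\ref{eq:CWestType4})) would be needed; the `None` return only marks that situation.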
After that, the estimation results can be calculated through Eq.~(\ref{eq:d_func}).

One of the most suitable training sets lies just inside the given citation network $G_P$. In fact, each paper with known year ($\forall p^K \in V_P^K$) can also be used to derive a possible estimation window (by pretending it is a missing year paper). Consider the example in Fig.~\ref{Fig:CWExample}: for paper $d(1999)$, the simple window derivation method generates $[1993, +\infty)$. Since this is independent of deriving windows for missing year papers, the two procedures can be merged to save running time. The modified advanced window derivation method for $G_P$-AA is shown in Algorithm~\ref{Alg:GPAA}.
\begin{algorithm}
\caption{The modified advanced window derivation method for
$G_P$-AA} \label{Alg:GPAA}
\begin{algorithmic}[1]
\REPEAT
 \STATE UpCnt $\leftarrow 0$;
 \FORALL{$e = (t, f) \in E_P, t,f\;\in V_P$}
 \STATE f\_CMin\_Before$\leftarrow \hat{Y}_{CMin}(f)$;
 \STATE t\_CMax\_Before$\leftarrow \hat{Y}_{CMax}(t)$;
 \IF{$t, f \in V_P^U$}
 \STATE $\hat{Y}_{CMin}(f) \leftarrow \max\{\hat{Y}_{CMin}(f), \hat{Y}_{CMin}(t)\}$;
 \STATE $\hat{Y}_{CMax}(t) \leftarrow \min\{\hat{Y}_{CMax}(t), \hat{Y}_{CMax}(f)\}$;
 \ELSIF{$t \in V_P^K$, $f \in V_P^U$}
 \STATE $\hat{Y}_{CMin}(f) \leftarrow \max\{\hat{Y}_{CMin}(f), Y(t)\}$;
 \STATE $\hat{Y}_{CMax}(t) \leftarrow \min\{\hat{Y}_{CMax}(t), \hat{Y}_{CMax}(f)\}$;
 \/* for training set $\mathcal{T}$. *\/
 \ELSIF{$t \in V_P^U$, $f \in V_P^K$}
 \STATE $\hat{Y}_{CMin}(f) \leftarrow \max\{\hat{Y}_{CMin}(f), \hat{Y}_{CMin}(t)\}$;
 \/* for training set $\mathcal{T}$. *\/
 \STATE $\hat{Y}_{CMax}(t) \leftarrow \min\{\hat{Y}_{CMax}(t), Y(f)\}$;
 \ELSE[$t, f \in V_P^K$]
 \STATE $\hat{Y}_{CMin}(f) \leftarrow \max\{\hat{Y}_{CMin}(f), Y(t)\}$;
 \/* for training set $\mathcal{T}$. 
*\/
 \STATE $\hat{Y}_{CMax}(t) \leftarrow \min\{\hat{Y}_{CMax}(t), Y(f)\}$;
 \/* for training set $\mathcal{T}$. *\/
 \ENDIF
 \STATE \/* Check update counts. *\/;
 \IF{$\hat{Y}_{CMin}(f) \neq\;\;$f\_CMin\_Before}
 \STATE UpCnt $\leftarrow$ UpCnt $ + 1$;
 \ENDIF
 \IF{$\hat{Y}_{CMax}(t) \neq\;\;$t\_CMax\_Before}
 \STATE UpCnt $\leftarrow$ UpCnt $ + 1$;
 \ENDIF
 \ENDFOR
\UNTIL{UpCnt$\;=\;0$; \/* When no update happens, loop ends. *\/}
\end{algorithmic}
\end{algorithm}

Compared with Algorithm~\ref{Alg:GPAS}, the pseudo code in Algorithm~\ref{Alg:GPAA} adds four lines (Lines 11, 13, 16 and 17) for preparing the training set. These four lines still satisfy Eq.~(\ref{assumption1New}), so no inaccuracy is induced, but the information is now also propagated towards the papers in set $V^K_P$. Table~\ref{Tab:GPAA_Example} lists the intermediate and final results of the example training set $\mathcal{T}$ in Fig.~\ref{Fig:CWExample}.

\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
 \hline
 $p^K$ in Fig.~\ref{Fig:CWExample} & Round 1 & Round 2 & Round 3 & Round 4 & $WinType$\\
 \hline
 $c(1993)$ & $(-\infty, 1999)$ & $(-\infty, 1999)$ & $(-\infty, 1999)$ & $(-\infty, 1999)$ & Type-3\\
\hline
 $d(1999)$ & $(1993, +\infty)$ & $(1993, 2007)$ & $(1993, 2005)$ & $(1993, 2005)$ & Type-1 \\
\hline
 $f(2003)$ & $(-\infty, +\infty)$ & $(-\infty, 2005)$ & $(-\infty, 2005)$ & $(-\infty, 2005)$ & Type-3\\
\hline
 $g(2001)$ & $(-\infty, +\infty)$ & $(-\infty, 2005)$ & $(-\infty, 2005)$ & $(-\infty, 2005)$ & Type-3\\
\hline
 $h(2007)$ & $(-\infty, +\infty)$ & $(1999, +\infty)$ & $(1999, +\infty)$ & $(1999, +\infty)$ & Type-2\\
\hline
 $k(2005)$ & $(-\infty, +\infty)$ & $(2003, +\infty)$ & $(2003, +\infty)$ & $(2003, +\infty)$ & Type-2\\
\hline
 $l(2006)$ & $(-\infty, +\infty)$ & $(2003, +\infty)$ & $(2003, +\infty)$ & $(2003, +\infty)$ &
Type-2\\
\hline
 & UpCnt = 2 & UpCnt = 6 & UpCnt = 1 & UpCnt = 0 & \\
\hline
\end{tabular}
\caption{The intermediate and final results of the example training
set $\mathcal{T}$ in Fig.~\ref{Fig:CWExample}.}
\label{Tab:GPAA_Example}
\begin{tabular}{|c|c|c|}
 \hline
 $p^U$ in Fig.~\ref{Fig:CWExample} & $a$ & $j$ \\
\hline
 Derived Window & $(-\infty, 1999)$ & $(2003, +\infty)$\\
\hline
$WinType\/BoundVal$ & Type-3\/1999 & Type-2\/2003\\
\hline
$\hat{Y}(p^U)$ by $G_P$-AS & 1999 & 2003\\
\hline
$\mathcal{T}_{p^U}$ & c(1993, Type-3, 1999) & k(2005, Type-2, 2003)\\
 & & l(2006, Type-2, 2003)\\
\hline
$\hat{Y}(p^U)$ by $G_P$-AA & 1993 & 2006 (2005.5)\\
\hline
\end{tabular}
\caption{Comparison of the estimation results for papers $a$ and $j$
of the example in Fig.~\ref{Fig:CWExample} by $G_P$-AS versus
$G_P$-AA.} \label{Tab:GPAA_Example2}
\end{table}

Recalling Table~\ref{Tab:GpAS_example}, we notice that the estimation results of papers $a$ and $j$ are affected by the advanced year value calculation method, according to the derived training set in Table~\ref{Tab:GPAA_Example} and Eq.~(\ref{eq:d_func}). The comparison of the estimation results of $G_P$-AS and $G_P$-AA is listed in Table~\ref{Tab:GPAA_Example2}.

So far, we have only illustrated how the three algorithms work and how their estimation results differ. In the experiment section (Section~\ref{Sec:exp}), we will see their performance evaluated on the real data sets.

\newpage
\subsection{MYE for paper authorship network $G_{AP}$}
In this section, we move to the paper-author bipartite graph $G_{AP}$. An artificially created example of the MYE problem in $G_{AP}$ is shown in Fig.~\ref{Fig:AWExample}.
In this example, there are 8 papers ($a - h$) and 4 authors ($i - l$), where papers $a, b, d, e$ have year information ($\in V_P^K$) while $c, f, g, h$ are missing year ($\in V_P^U$).
\begin{figure}[hbt]
 \centering
 \includegraphics[width=2.6in]{AW_example.eps}
 \caption{An example of a paper authorship network with 8 papers ($a - h$) and 4 authors ($i - l$), where papers $a, b, d, e$ are in $V_P^K$ and $c, f, g, h$ are in $V_P^U$.}
 \label{Fig:AWExample}
\end{figure}

For $G_{AP}$, we will also introduce three algorithms, namely $G_{AP}$-Ba, $G_{AP}$-Iter and $G_{AP}$-AdvIter, in order of increasing complexity.

\subsubsection{Algorithm for MYE in $G_{AP}$:
$G_{AP}$-Ba and $G_{AP}$-Iter}
$G_{AP}$-Ba is the basic algorithm and $G_{AP}$-Iter simply repeats $G_{AP}$-Ba until convergence, so we introduce them together. The basic algorithm, $G_{AP}$-Ba, consists of three steps:
\begin{enumerate}
\item[i)] Derive each author's active publishing window.

For each author, based on the graph topology and paper year information, we can derive an active paper publishing window. Eqs.~(\ref{eq:awmin}) and (\ref{eq:awmax}) define the two bounds of this window:
\begin{eqnarray}
AW\_Min(a) = \min_{p\in P(a) \cap V_P^K} Y(p),\label{eq:awmin}\\
AW\_Max(a) = \max_{p\in P(a) \cap V_P^K} Y(p),\label{eq:awmax}
\end{eqnarray}
where $P(a),\forall\;a \;\in V_A$ is the set of papers written by author $a$. It is possible that $P(a) \cap V_P^K = \emptyset$, in which case we consider the bounds non-existent.
According to the above definition, the two bounds are either co-existent or non-existent.\\

\item[ii)] Derive the paper's possible year estimation window.

Based on the derived author active windows, we can further define the paper's possible year window:
\begin{eqnarray}
\hat{Y}_{AMin}(p^U) = \min\{\max_{a\in A(p^U)} AW\_Min(a), \min_{a\in A(p^U)} AW\_Max(a)\},\label{eq:acwmin}\\
\hat{Y}_{AMax}(p^U) = \max\{\max_{a\in A(p^U)} AW\_Min(a),
\min_{a\in A(p^U)} AW\_Max(a)\},\label{eq:acwmax}
\end{eqnarray}
where $A(p^U), \forall p^U \in V_P^U$ is the author set of paper $p^U$.\\

In most cases, $\hat{Y}_{AMin}(p^U) = \max_{a\in A(p^U)} AW\_Min(a)$ and $\hat{Y}_{AMax}(p^U) = \min_{a\in A(p^U)} AW\_Max(a)$. However, the authors' active windows may have no intersection (this is possible because an author's active window does not take the missing year papers into account), which is why the bounds are written as in Eqs.~(\ref{eq:acwmin})-(\ref{eq:acwmax}). For example, consider paper $c$ in Fig.~\ref{Fig:AWExample}. The author set of paper $c$ is $A(c) = \{i(1996,1999), j(2002, 2003), k(2002, 2002)\}$, with the author active windows inside parentheses. Then by definition we get $\max_{a\in A(c)} AW\_Min(a) = \max\{1996, 2002, 2002\} = 2002$, while $\min_{a\in A(c)} AW\_Max(a) = \min\{1999, 2003, 2002\} = 1999$. Therefore, according to Eqs.~(\ref{eq:acwmin})-(\ref{eq:acwmax}), we derive the possible year estimation window of paper $c$: $[1999, 2002]$.\\

\item[iii)] Calculate year value.

In this algorithm, we apply the simple year value calculation method, the same one as in the $G_P$-SS algorithm.
The only difference is that in $G_P$-SS there are four types of year estimation window, whereas in $G_{AP}$ there are only two possible types: both bounds exist (Type-1) or neither exists (Type-4). Therefore, the estimated year value is either $\frac{\hat{Y}_{AMin}(p^U) + \hat{Y}_{AMax}(p^U)}{2}$ or labeled as \emph{Uncovered}.
\end{enumerate}

Note that the design of the basic algorithm rests on the observation that most authors are continuously active in publishing papers. Hence, the publishing years of an author's papers usually fall within a continuous window. If we obtain the windows of all the coauthors of a missing year paper, the intersection of these windows will be an interval that most probably contains the real publishing year.

\begin{algorithm}
\caption{The pseudo code of $G_{AP}$-Iter} \label{Alg:AWProp}
\begin{algorithmic}[1]
 \REPEAT
 \FORALL{$e = (a, p) \in E_{AP}, a \in V_A, p \in V_P$}
 \IF{$p \in V_P^K$}
 \STATE $AW\_Min(a) \leftarrow \min\{Y(p), AW\_Min(a)\}$
 \STATE $AW\_Max(a) \leftarrow \max\{Y(p), AW\_Max(a)\}$
 \ELSIF[$p \in V_P^U$]{$\hat{Y}(p)$ exists}
 \STATE $AW\_Min(a) \leftarrow \min\{\hat{Y}(p), AW\_Min(a)\}$
 \STATE $AW\_Max(a) \leftarrow \max\{\hat{Y}(p), AW\_Max(a)\}$
 \ENDIF
 \ENDFOR
 \FORALL{$p^U \in V_P^U$}
 \FORALL{$a \in A(p^U)$}
 \STATE $maxMin \leftarrow \max\{AW\_Min(a), maxMin\}$
 \STATE $minMax \leftarrow \min\{AW\_Max(a), minMax\}$
 \ENDFOR
 \STATE $\hat{Y}_{AMin}(p^U) \leftarrow \min\{maxMin, minMax\}$
 \STATE $\hat{Y}_{AMax}(p^U) \leftarrow \max\{maxMin, minMax\}$
 \STATE $\hat{Y}(p^U) \leftarrow \frac{\hat{Y}_{AMin}(p^U) + \hat{Y}_{AMax}(p^U)}{2}$
 \ENDFOR
\UNTIL{No update happens}
\end{algorithmic}
\end{algorithm}
The pseudo code for $G_{AP}$-Iter (including $G_{AP}$-Ba) is shown in Algorithm~\ref{Alg:AWProp}.
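Steps i) and ii) can also be sketched compactly. The author-to-paper maps below are hypothetical, chosen so that paper $c$ reproduces the window $[1999, 2002]$ derived above.

```python
INF = float('inf')

def author_window(a, papers_of, year):
    """Eqs. (awmin)-(awmax): span of the known years of author a's papers."""
    known = [year[p] for p in papers_of[a] if p in year]
    return (min(known), max(known)) if known else (-INF, INF)

def paper_window(p, authors_of, papers_of, year):
    """Eqs. (acwmin)-(acwmax): intersect the coauthors' active windows,
    swapping the bounds when the windows fail to intersect."""
    wins = [author_window(a, papers_of, year) for a in authors_of[p]]
    lo = max(w[0] for w in wins)   # maxMin
    hi = min(w[1] for w in wins)   # minMax
    return (min(lo, hi), max(lo, hi))

papers_of = {'i': ['a', 'b'], 'j': ['d', 'e'], 'k': ['d']}   # hypothetical
year = {'a': 1996, 'b': 1999, 'd': 2002, 'e': 2003}
authors_of = {'c': ['i', 'j', 'k']}
print(paper_window('c', authors_of, papers_of, year))   # -> (1999, 2002)
```

Here author $i$'s window is $(1996, 1999)$, $j$'s is $(2002, 2003)$ and $k$'s is $(2002, 2002)$, so the windows do not intersect and the final `min`/`max` swap is what produces a valid interval.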
Lines 2-19 are the steps of $G_{AP}$-Ba; $G_{AP}$-Iter simply repeats them (Line 1). The estimation results of the previous rounds affect the subsequent rounds because each author's active publishing window is re-calculated according to all the paper year information (given, or estimated in the last round; Lines 7-8). Lines 13-17 implement Eqs.~(\ref{eq:acwmin}) and (\ref{eq:acwmax}). The intermediate and final estimation results of running $G_{AP}$-Iter on the example in Fig.~\ref{Fig:AWExample} are listed in Table~\ref{Tab:AWPropExample}.

\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
 \hline
 Node & Type & Round 1 & Round 2 & Round 3\\
 \hline
 $i$ & Author & $(1996, 1999)$ & $(1996, 2001)$ & $(1996, 2001)$\\
\hline
 $j$ & Author & $(2002, 2003)$ & $(2001, 2003)$ & $(2001, 2003)$\\
\hline
 $k$ & Author & $(2002, 2002)$ & $(2001, 2002)$ & $(2001, 2002)$\\
\hline
 $l$ & Author & $(-\infty, +\infty)$ & $(2002, 2002)$ & $(2002, 2002)$\\
\hline \hline
 $c$ & Paper & $(1999, 2002)$ & $(2001, 2001)$ & $(2001, 2001)$\\
 $\hat{Y}(c)$ & & 2001 (2000.5) & 2001 & 2001\\
\hline
 $f$ & Paper & $(2002, 2002)$ & $(2002, 2002)$ & $(2002, 2002)$\\
 $\hat{Y}(f)$ & & 2002 & 2002 & 2002\\
\hline
 $g$ & Paper & $(-\infty, +\infty)$ & $(2002, 2002)$ & $(2002, 2002)$\\
 $\hat{Y}(g)$ & & \emph{Uncovered} & 2002 & 2002\\
\hline
 $h$ & Paper & $(-\infty, +\infty)$ & $(-\infty, +\infty)$ & $(-\infty, +\infty)$\\
 $\hat{Y}(h)$ & & \emph{Uncovered} & \emph{Uncovered} & \emph{Uncovered}\\
\hline
\end{tabular}
\caption{The intermediate and final estimation results obtained by
running $G_{AP}$-Ba and $G_{AP}$-Iter on the example shown in
Fig.~\ref{Fig:AWExample}} \label{Tab:AWPropExample}
\end{table}

As shown in Table~\ref{Tab:AWPropExample}, $G_{AP}$-Iter takes 3 rounds to converge.
We show the intermediate results of the author active windows (nodes $i, j, k, l$), the possible paper publishing windows for the missing year papers (nodes $c, f, g, h$), and their estimation results ($\hat{Y}(p^U), p^U \in \{c, f, g, h\}$) in each round. The column labeled ``Round 1'' shows the results generated by algorithm $G_{AP}$-Ba. Compared with $G_{AP}$-Ba, $G_{AP}$-Iter helps to share information through the co-author relationships, as for author $l$ in Table~\ref{Tab:AWPropExample}. Therefore, $G_{AP}$-Iter obtains a lower uncovered ratio (1\/4) than $G_{AP}$-Ba (2\/4).

We need to note that $G_{AP}$-Iter may add inaccuracy during the information propagation, i.e. the estimation results of the previous rounds affect the derivation of both the author active windows and the estimation results of the subsequent rounds. For example, $\hat{Y}(c)$ after Round 1 is 2001. In Round 2, the active windows of all the coauthors of paper $c$, $A(c) = \{i, j, k\}$, are updated, and hence the related paper year estimation windows get updated as well. Although $G_{AP}$-Iter helps to decrease the uncovered ratio, it may not improve accuracy metrics such as MAE (in certain situations, $G_{AP}$-Iter can even be worse than $G_{AP}$-Ba).

In order to compensate for this weakness of $G_{AP}$-Iter, so that both the uncovered ratio and the estimation accuracy are improved, we propose $G_{AP}$-AdvIter, which uses an advanced iteration procedure to reduce the propagation of inaccurate information.

\subsubsection{Algorithm for MYE in $G_{AP}$: $G_{AP}$-AdvIter}
According to the previous discussion, the key to improving the estimation accuracy in $G_{AP}$ is to propagate as much ``good'' information as possible. Hence, we propose a heuristic algorithm, $G_{AP}$-AdvIter, to achieve this. Here are some definitions:
\begin{enumerate}
\item[1.]
Consistent-Coauthor-Count between two papers: the number of common coauthors of the two papers. We denote it by the function $w(\cdot)$. Given any two papers, we can calculate their Consistent-Coauthor-Count by the following expression:
\begin{equation}
\forall\;p,\;q\;\in\;V_P,\; w(p, q) = w(q, p) = |A(p) \cap A(q)|,
\end{equation}
where $w(\cdot)$ is a non-negative integer and equals zero only when the two papers have no common coauthors.

\item[2.] $w$-Consistent-Coauthor-Pair relationship:
if any two papers, $\forall p,q\in V_P$, satisfy $w(p, q) = w(q, p) > 1$, then we call them a $w$-Consistent-Coauthor-Pair.

\item[3.] Consistent-Coauthor-Pair set of a paper $p\in V_P$, denoted by $\Omega(p)$:
\begin{equation}
\Omega(p) = \{q| q \in V_P\;\textrm{and}\;w(p,q) > 1 \}
\end{equation}
\end{enumerate}
We illustrate these definitions using the example in Fig.~\ref{Fig:AWExample}: $w(a,g) = |\emptyset| = 0$ and $w(c, d) = |\{j, k\}| = 2$; thus papers $c$ and $d$ form a 2-Consistent-Coauthor-Pair. Apart from this pair, there are no other Consistent-Coauthor-Pairs in Fig.~\ref{Fig:AWExample}. Therefore, we obtain $\Omega(c) = \{d\}$, $\Omega(d) = \{c\}$ and $\Omega(p) = \emptyset, \forall p \in \{a,b,e,f,g,h\}$.

It is reasonable to assume that if several authors work together and publish papers, these papers are likely to be published within a small time window; for example, students work together with their supervisors\/group members and publish several papers during their Master\/PhD study. Note this is only a sufficient condition; the reverse may not be true.

The above assumption implies that if two papers have a $w$-Consistent-Coauthor-Pair relationship, then there is a high probability that their publishing years are close. In addition, this probability is positively correlated with the value of $w$.
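The counting definitions above can be sketched directly. The author sets below are hypothetical, chosen to be consistent with the example ($A(c)$ and $A(d)$ share coauthors $j$ and $k$).

```python
def w(p, q, A):
    """Consistent-Coauthor-Count: number of common coauthors of p and q."""
    return len(A[p] & A[q])

def omega(p, A):
    """Consistent-Coauthor-Pair set of p: papers sharing more than one coauthor."""
    return {q for q in A if q != p and w(p, q, A) > 1}

# Hypothetical author sets, consistent with the worked example.
A = {'c': {'i', 'j', 'k'}, 'd': {'j', 'k'}, 'a': {'i'}, 'g': {'l'}}
print(w('a', 'g', A), w('c', 'd', A))   # -> 0 2
print(omega('c', A))                    # -> {'d'}
```

A single shared coauthor ($w = 1$) deliberately does not qualify, matching the strict inequality $w(p, q) > 1$ in definition 3.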
We conjecture that the year values estimated by utilizing the $w$-Consistent-Coauthor-Pair relationship provide ``better'' information for propagation.

The pseudo code of $G_{AP}$-AdvIter is listed in Algorithm~\ref{Alg:AWLearn}, which shows how we make use of the more reliable information for propagation.
\begin{algorithm}
\caption{The pseudo code of $G_{AP}$-AdvIter} \label{Alg:AWLearn}
\begin{algorithmic}[1]
 \FORALL{$p^U \in V_P^U$}
 \STATE Derive the Consistent-Coauthor-Pair set, $\Omega(p^U)$.
 \ENDFOR
 \REPEAT
 \FORALL{$e = (a, p) \in E_{AP}, a\;\in V_A, p \in V_P$}
 \IF{$p \in V_P^K$}
 \STATE $AW\_Min(a) \leftarrow \min\{Y(p), AW\_Min(a)\}$
 \STATE $AW\_Max(a) \leftarrow \max\{Y(p), AW\_Max(a)\}$
 \ELSIF[$p \in V_P^U$]{$\hat{Y}(p)$ exists}
 \STATE $AW\_Min(a) \leftarrow \min\{\hat{Y}(p), AW\_Min(a)\}$
 \STATE $AW\_Max(a) \leftarrow \max\{\hat{Y}(p), AW\_Max(a)\}$
 \ENDIF
 \ENDFOR
 \FORALL{$p^U \in V_P^U$}
 \IF[for AdvIter]{$\Omega(p^U) \cap V_P^K \neq \emptyset$}
 \STATE $\hat{Y}(p^U) \leftarrow W(p^U, \gamma)$
 \ELSE
 \FORALL{$a \in A(p^U)$}
 \STATE $maxMin \leftarrow \max\{AW\_Min(a), maxMin\}$
 \STATE $minMax \leftarrow \min\{AW\_Max(a), minMax\}$
 \ENDFOR
 \STATE $\hat{Y}_{AMin}(p^U) \leftarrow \min\{maxMin, minMax\}$
 \STATE $\hat{Y}_{AMax}(p^U) \leftarrow \max\{maxMin, minMax\}$
 \STATE $\hat{Y}(p^U) \leftarrow \frac{\hat{Y}_{AMin}(p^U) + \hat{Y}_{AMax}(p^U)}{2}$
 \ENDIF
 \ENDFOR
\UNTIL{No update happens}
\end{algorithmic}
\end{algorithm}

Compared with Algorithm~\ref{Alg:AWProp}, Algorithm~\ref{Alg:AWLearn} only adds Lines 1-3 and Lines 15-16. Lines 1-3 find $\Omega(p^U)$ for each missing year paper; this is done during initialization.
Lines 15-16 show that, whenever the $w$-Consistent-Coauthor-Pair relationship can help, we give it higher priority than the basic procedure (Lines 17-25). The function $W$ is defined in Eq.~(\ref{eq:wa}):
\begin{equation}
W(p^U,\gamma) = \frac{\sum_{q\in \Omega(p^U)\cap V_P^K}\;\;w(p^U,
q)^\gamma \times Y(q)}{\sum_{q\in \Omega(p^U)\cap V_P^K} w(p^U,
q)^\gamma}, \quad\textrm{if}\;\Omega(p^U)\cap V_P^K \neq
\emptyset\label{eq:wa}
\end{equation}
The meaning of Eq.~(\ref{eq:wa}) is to take a $\gamma$-weighted average of the given year information of the papers in the set $\Omega(p^U)\cap V_P^K$. For example, if $\Omega(p^U)\cap V_P^K = \{q, r\}, w(p^U, q) = 2, w(p^U, r) = 3, Y(q) = 2000, Y(r) = 2002$, then $W(p^U,\gamma) = \frac{2^\gamma \times 2000 + 3^\gamma \times 2002}{2^\gamma + 3^\gamma}$. The parameter $\gamma$ tunes the importance we put on the values of $w$: if we set $\gamma = 0$, no weight is considered and the result is simply the average; when $\gamma = 1$, it is an ordinary weighted average; and when $\gamma\rightarrow\infty$, only the papers in the set $\Omega(p^U)\cap V_P^K$ with the largest $w$ are involved in the calculation.
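Using the worked numbers from the text ($w(p^U, q) = 2$, $w(p^U, r) = 3$, $Y(q) = 2000$, $Y(r) = 2002$), the effect of $\gamma$ in Eq.~(\ref{eq:wa}) can be sketched as:

```python
def W(neighbours, gamma):
    """Gamma-weighted average of Eq. (eq:wa); `neighbours` lists the pairs
    (w(p, q), Y(q)) for papers q in Omega(p) intersected with V_P^K."""
    num = sum(wt ** gamma * y for wt, y in neighbours)
    den = sum(wt ** gamma for wt, y in neighbours)
    return num / den

nb = [(2, 2000), (3, 2002)]
print(W(nb, 0))    # -> 2001.0  (plain average)
print(W(nb, 1))    # -> 2001.2  (ordinary weighted average)
print(W(nb, 50))   # close to 2002: the largest w dominates
```

Larger $\gamma$ pushes the estimate towards the years of the most strongly connected papers, as described above for $\gamma\rightarrow\infty$.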
In addition, since the function $W$ is undefined when $\Omega(p^U)\cap V_P^K = \emptyset$, this condition needs to be checked beforehand (Line 15).
\begin{table}[htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
 \hline
 Node & Type & Round 1 & Round 2 & Round 3\\
 \hline
 $i$ & Author & $(1996, 1999)$ & $(1996, 2002)$ & $(1996, 2002)$\\
\hline
 $j$ & Author & $(2002, 2003)$ & $(2002, 2003)$ & $(2002, 2003)$\\
\hline
 $k$ & Author & $(2002, 2002)$ & $(2002, 2002)$ & $(2002, 2002)$\\
\hline
 $l$ & Author & $(-\infty, +\infty)$ & $(2002, 2002)$ & $(2002, 2002)$\\
\hline \hline
 $c$ & Paper & $(2002, 1999)$ & $(2001, 2001)$ & $(2001, 2001)$\\
 $\hat{Y}(c)$ & $W(c, 0)$ & 2002 & 2002 & 2002\\
\hline
 $f$ & Paper & $(2002, 2002)$ & $(2002, 2002)$ & $(2002, 2002)$\\
 $\hat{Y}(f)$ & & 2002 & 2002 & 2002\\
\hline
 $g$ & Paper & $(-\infty, +\infty)$ & $(2002, 2002)$ & $(2002, 2002)$\\
 $\hat{Y}(g)$ & & \emph{Uncovered} & 2002 & 2002\\
\hline
 $h$ & Paper & $(-\infty, +\infty)$ & $(-\infty, +\infty)$ & $(-\infty, +\infty)$\\
 $\hat{Y}(h)$ & & \emph{Uncovered} & \emph{Uncovered} & \emph{Uncovered}\\
\hline
\end{tabular}
\caption{The intermediate and final estimation results obtained by
running $G_{AP}$-AdvIter on the example shown in
Fig.~\ref{Fig:AWExample}} \label{Tab:AWLearnExample}
\end{table}

In Table~\ref{Tab:AWLearnExample}, we list the intermediate and final estimation results obtained by running $G_{AP}$-AdvIter on the example shown in Fig.~\ref{Fig:AWExample}.
As analyzed previously, $\Omega(c) = \{d\}$, $\Omega(d) = \{c\}$ and $\Omega(p) = \emptyset, \forall p \in \{a,b,e,f,g,h\}$; hence only $\hat{Y}(c) = Y(d) = 2002$ is affected by $G_{AP}$-AdvIter, together with the related author active windows: $i:(1996, 2002)$, $j:(2002, 2003)$ and $k:(2002, 2002)$.

\subsection{MYE for heterogeneous network $G$}
For a heterogeneous network $G = (G_P \cup G_{AP})$, which consists of both $G_P$ and $G_{AP}$, we make use of the methods and results discussed in the previous two sections. Since we proposed three algorithms of different complexity for each of $G_P$ and $G_{AP}$, there are altogether 9 different combinations. With careful consideration, we pick out 3 typical combinations as MYE algorithms for $G$:
\begin{enumerate}
\item[1)] $G$-SSBa: combination of $G_P$-SS and $G_{AP}$-Ba
\item[2)] $G$-ASIter: combination of $G_P$-AS and $G_{AP}$-Iter
\item[3)] $G$-AdvIter: combination of $G_P$-AA and $G_{AP}$-AdvIter
\end{enumerate}

In fact, selecting the combination is not trivial; this will be explained in more detail next.
The common part of the two component algorithms (the $G_P$ part and the $G_{AP}$ part) consists of two steps: (a) deriving the possible year estimation window and (b) calculating the estimated year value based on the derived window.

No matter which combined algorithm for $G$ is applied, for each missing year paper two possible year estimation windows will be derived, one by the $G_P$ part, $[\hat{Y}_{CMin}(p^U), \hat{Y}_{CMax}(p^U)]$, and the other by the $G_{AP}$ part, $[\hat{Y}_{AMin}(p^U), \hat{Y}_{AMax}(p^U)]$, due to the independence of the two procedures.

Considering the four types of derived estimation window from $G_P$ and the two types from $G_{AP}$, each missing year paper can end up in one of the following four cases, of which case (d) is the most likely:
\begin{enumerate}
\item[(a)] $(\hat{Y}_{CMin}(p^U),
\hat{Y}_{CMax}(p^U)) = (\hat{Y}_{AMin}(p^U), \hat{Y}_{AMax}(p^U)) =
(-\infty, +\infty)$: this can only lead to the \emph{Uncovered}
estimation result;
\item[(b)] $(\hat{Y}_{CMin}(p^U),
\hat{Y}_{CMax}(p^U)) = (-\infty, +\infty)$ but
$[\hat{Y}_{AMin}(p^U), \hat{Y}_{AMax}(p^U)]$ is not: it is as
if only the $G_{AP}$ part of the algorithm is in action;
\item[(c)] $(\hat{Y}_{AMin}(p^U), \hat{Y}_{AMax}(p^U)) = (-\infty, +\infty)$ but
$[\hat{Y}_{CMin}(p^U), \hat{Y}_{CMax}(p^U)]$ is not: it is as
if only the $G_{P}$ part of the algorithm is in action;
\item[(d)] Neither window is $(-\infty, +\infty)$: this case is discussed in detail for the three algorithms $G$-SSBa, $G$-ASIter and $G$-AdvIter below.
\end{enumerate}

A general criterion is shared by $G$-SSBa, $G$-ASIter and $G$-AdvIter: we always give higher priority to the window derived from $G_P$ than to the one derived from $G_{AP}$.
This\nis because the former is more reliable than the latter, as the\nlatter may involve inaccuracy in information propagation.\n\n\\subsubsection{Algorithm for MYE in $G$: $G$-SSBa and $G$-ASIter}\nSince the structures of $G$-SSBa and $G$-ASIter are similar, we try\nto merge their pseudo codes together for space saving and ease of\ndescription\\footnote{In real implementation, they are separated.}.\nThe pseudo code of $G$-SSBa and $G$-ASIter for case (d) is listed in\nAlgorithm~\\ref{Alg:GSSBaASIter}.\n\\begin{algorithm}\n\\caption{The pseudo code of $G$-SSBa and $G$-ASIter for case (d)}\n\\label{Alg:GSSBaASIter}\n\\begin{algorithmic}[1]\n\\IF{$G$-SSBa}\n\\STATE $[\\hat{Y}_{CMin}, \\hat{Y}_{CMax}] \\leftarrow$ {\\bf Simple\nWindow Derivation Method} in Eq.~(\\ref{eq:y_cmin}) and\n(\\ref{eq:y_cmax});\n\\ELSIF{$G$-ASIter}\n\\STATE $[\\hat{Y}_{CMin}, \\hat{Y}_{CMax}] \\leftarrow$ {\\bf Advanced\nWindow Derivation Method} in Algorithm~\\ref{Alg:GPAS};\n\\ENDIF\n\\REPEAT\n \\STATE $[\\hat{Y}_{AMin}, \\hat{Y}_{AMax}] \\leftarrow$ by\n $G_{AP}$-Ba, Eqs.~(\\ref{eq:awmin}), (\\ref{eq:awmax}), (\\ref{eq:acwmin}),\n (\\ref{eq:acwmax});\n \\FORALL{$p^U \\in V_P^U$}\n \\STATE \/* Init *\/\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow -\\infty$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow +\\infty$;\n \\IF{$\\hat{Y}_{CMin}(p^U) > -\\infty$ and $\\hat{Y}_{CMax}(p^U) < +\\infty$}\n \\STATE \/* Type-1 Window in $G_P$ *\/\n \\IF {$\\hat{Y}_{AMin}(p^U) < \\hat{Y}_{CMin}(p^U)$ or $\\hat{Y}_{AMax}(p^U) > \\hat{Y}_{CMax}(p^U)$}\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow \\hat{Y}_{CMin}(p^U)$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow \\hat{Y}_{CMax}(p^U)$;\n \\ELSE\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow \\max\\{\\hat{Y}_{CMin}(p^U), \\hat{Y}_{AMin}(p^U)\\}$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow \\min\\{\\hat{Y}_{CMax}(p^U), \\hat{Y}_{AMax}(p^U)\\}$;\n \\ENDIF\n \\ELSIF{$\\hat{Y}_{CMin}(p^U) > -\\infty$ and $\\hat{Y}_{CMax}(p^U) = +\\infty$}\n \\STATE \/* Type-2 Window in 
$G_P$ *\/\n \\IF {$\\hat{Y}_{AMax}(p^U) < \\hat{Y}_{CMin}(p^U)$}\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow \\hat{Y}_{CMin}(p^U)$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow \\hat{Y}_{CMin}(p^U)$;\n \\ELSE\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow \\max\\{\\hat{Y}_{CMin}(p^U), \\hat{Y}_{AMin}(p^U)\\}$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow \\hat{Y}_{AMax}(p^U)$;\n \\ENDIF\n \\ELSIF{$\\hat{Y}_{CMin}(p^U) = -\\infty$ and $\\hat{Y}_{CMax}(p^U) < +\\infty$}\n \\STATE \/* Type-3 Window in $G_P$ *\/\n \\IF {$\\hat{Y}_{AMin}(p^U) > \\hat{Y}_{CMax}(p^U)$}\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow \\hat{Y}_{CMax}(p^U)$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow \\hat{Y}_{CMax}(p^U)$;\n \\ELSE\n \\STATE $\\hat{Y}_{GMin}(p^U) \\leftarrow \\hat{Y}_{AMin}(p^U)$;\n \\STATE $\\hat{Y}_{GMax}(p^U) \\leftarrow \\min\\{\\hat{Y}_{CMax}(p^U), \\hat{Y}_{AMax}(p^U)\\}$;\n \\ENDIF\n \\ELSE\n \\STATE \/* Type-4 Window in $G_P$ *\/\n \\STATE Case (b);\n \\ENDIF\n \\STATE \/* Simple Year Value Calculation *\/\n \\STATE $\\hat{Y}(p^U) \\leftarrow \\frac{\\hat{Y}_{GMin}(p^U) +\n \\hat{Y}_{GMax}(p^U)}{2}$;\n \\ENDFOR\n \\IF{$G$-SSBa}\n \\STATE Break;\n \\ENDIF\n\\UNTIL{No update happens}\n\\end{algorithmic}\n\\end{algorithm}\n\nIn Algorithm~\\ref{Alg:GSSBaASIter}, we denote $\\hat{Y}_{GMin}(p^U),\n\\hat{Y}_{GMax}(p^U)$ to be the two bounds of the derived year\nestimation window in $G$. In the beginning, we derive\n$[\\hat{Y}_{CMin}, \\hat{Y}_{CMax}]$ by a simple window derivation\nmethod for algorithm $G$-SSBa, or an advanced window derivation method\nfor algorithm $G$-ASIter (Lines 1-5).\n\nNext, we derive $[\\hat{Y}_{GMin}, \\hat{Y}_{GMax}]$ depending on the\ntype of the window in $G_P$, e.g., Lines 12-20 for Type-1, Lines\n21-29 for Type-2 and Lines 30-38 for Type-3. 
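As a concrete illustration of the priority rule, the Type-1 branch of Algorithm~\ref{Alg:GSSBaASIter} (Lines 12-20) can be sketched in Python. This is a simplified sketch of ours, not part of the algorithms themselves; the function name and the plain tuples are assumptions made for illustration:

```python
def combine_type1(c_min, c_max, a_min, a_max):
    """Combine a finite citation window [c_min, c_max] with an
    authorship window [a_min, a_max], giving priority to the former.

    If the authorship window sticks out of the citation window on
    either side, the citation window wins outright; otherwise the
    intersection of the two windows is taken.
    """
    if a_min < c_min or a_max > c_max:
        return c_min, c_max
    return max(c_min, a_min), min(c_max, a_max)


# A conflicting authorship bound is simply overridden:
print(combine_type1(2000, 2010, 1995, 2008))  # (2000, 2010)
# A nested authorship window tightens the result:
print(combine_type1(2000, 2010, 2002, 2008))  # (2002, 2008)
```

The Type-2 and Type-3 branches follow the same priority idea with one of the citation bounds infinite.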
The derivation follows\nthe general criterion that if the intersection of\n$[\hat{Y}_{CMin}(p^U), \hat{Y}_{CMax}(p^U)]$ and\n$[\hat{Y}_{AMin}(p^U), \hat{Y}_{AMax}(p^U)]$ is not empty, we take\nthis intersection window as $[\hat{Y}_{GMin}(p^U),\n\hat{Y}_{GMax}(p^U)]$; otherwise, we take $[\hat{Y}_{CMin}(p^U),\n\hat{Y}_{CMax}(p^U)]$. Line 44 is the same simple year value\ncalculation method as in $G_P$-SS, $G_P$-AS, $G_{AP}$-Ba and\n$G_{AP}$-Iter. In fact, if the conditions in Line 23 or Line 32 hold\n(i.e., the two windows do not intersect), the\noperations (Lines 24-25 and Lines 33-34, together with Line 44) are\nequivalent to Eqs.~(\ref{eq:CWestType2})--(\ref{eq:CWestType3}),\ntaking the bound values. For $G$-SSBa, whose combination\nincludes $G_{AP}$-Ba, the basic procedure is carried out only once\n(Lines 46-48), while for $G$-ASIter, whose combination\nincludes $G_{AP}$-Iter, the $[\hat{Y}_{GMin}, \hat{Y}_{GMax}]$\nwindow is propagated until convergence (Line 6 together with\nLine 49).\n\n\subsubsection{Algorithm for MYE in $G$: $G$-AdvIter}\n$G$-AdvIter is the combination of $G_P$-AA and $G_{AP}$-AdvIter;\ntherefore, the concepts of the training set $\mathcal{T}$ and the\nConsistent-Coauthor-Pair relationship are involved.\nAlgorithm~\ref{Alg:GAdvIter} lists the pseudo code of $G$-AdvIter for\ncase (d):\n\begin{algorithm}\n\caption{The pseudo code of $G$-AdvIter for case (d)}\n\label{Alg:GAdvIter}\n\begin{algorithmic}[1]\n\STATE Run Algorithm~\ref{Alg:GPAA}, derive $[\hat{Y}_{CMin},\n\hat{Y}_{CMax}]$ and the training set $\mathcal{T}$.\n \FORALL{$p^U \in V_P^U$}\n \STATE Derive the Consistent-Coauthor-Pair set, $\Omega(p^U)$.\n \ENDFOR\n\REPEAT\n \STATE $[\hat{Y}_{AMin}, \hat{Y}_{AMax}] \leftarrow$ by\n $G_{AP}$-Ba, Eqs.~(\ref{eq:awmin}), (\ref{eq:awmax}), (\ref{eq:acwmin}),\n (\ref{eq:acwmax});\n \n \FORALL{$p^U \in V_P^U$}\n \STATE $\hat{Y}(p^U) \leftarrow Null$;\n 
\\IF{$\\hat{Y}_{CMin}(p^U) > -\\infty$ and $\\hat{Y}_{CMax}(p^U) < +\\infty$}\n \\STATE \/* Type-1 Window in $G_P$ *\/\n \\STATE Derivation of $\\hat{Y}_{GMin}(p^U),\n \\hat{Y}_{GMax}(p^U)$; \/* Same as\n Algorithm~\\ref{Alg:GSSBaASIter}, Lines 12-20 *\/\n \\STATE \/* Year Value Calculate *\/\n \\IF{$\\Omega(p^U) \\cap V_P^K \\neq \\emptyset$}\n \\STATE $\\hat{Y}(p^U) \\leftarrow W_G(p^U, \\gamma, \\hat{Y}_{CMin}(p^U), \\hat{Y}_{CMax}(p^U))$\n \\ENDIF\n \\IF[In case $W_G$ does not work]{$\\hat{Y}(p^U) = Null$}\n \\STATE $\\hat{Y}(p^U) \\leftarrow \\frac{\\hat{Y}_{GMin}(p^U) +\n \\hat{Y}_{GMax}(p^U)}{2}$;\n \\ENDIF\n \\ELSIF{$\\hat{Y}_{CMin}(p^U) > -\\infty$ and $\\hat{Y}_{CMax}(p^U) = +\\infty$}\n \\STATE \/* Type-2 Window in $G_P$ *\/\n \\STATE Derivation of $\\hat{Y}_{GMin}(p^U),\n \\hat{Y}_{GMax}(p^U)$; \/* Same as\n Algorithm~\\ref{Alg:GSSBaASIter}, Lines 21-29 *\/\n \\STATE \/* Year Value Calculate *\/\n \\STATE $dResult \\leftarrow d(\\textrm{Type-2},\n \\hat{Y}_{CMin}(p^U))$;\/*call $d(WinType(p^U),\n BoundVal(p^U)$*\/\n \\STATE $\\delta \\leftarrow dResult-\\hat{Y}_{CMin}(p^U)$;\n \\IF{$\\Omega(p^U) \\cap V_P^K \\neq \\emptyset$}\n \\STATE $\\hat{Y}(p^U) \\leftarrow W_G(p^U, \\gamma, \\hat{Y}_{CMin}(p^U), \\hat{Y}_{CMin}(p^U)+2\\delta)$\n \\ENDIF\n \\IF[In case $W_G$ does not work]{$\\hat{Y}(p^U) = Null$}\n \\IF{$\\hat{Y}_{AMax}(p^U) < \\hat{Y}_{CMin}(p^U)$ or $dResult \\in (\\hat{Y}_{GMin}(p^U), \\hat{Y}_{GMax}(p^U))$}\n \\STATE $\\hat{Y}(p^U) \\leftarrow dResult$\n \\ELSE\n \\STATE $\\hat{Y}(p^U) \\leftarrow \\frac{\\hat{Y}_{GMin}(p^U) + \\hat{Y}_{GMax}(p^U)}{2}$;\n \\ENDIF\n \\ENDIF\n \\ELSIF{$\\hat{Y}_{CMin}(p^U) = -\\infty$ and $\\hat{Y}_{CMax}(p^U) < +\\infty$}\n \\STATE \/* Type-3 Window in $G_P$ *\/\n \\STATE Derivation of $\\hat{Y}_{GMin}(p^U),\n \\hat{Y}_{GMax}(p^U)$; \/* Same as\n Algorithm~\\ref{Alg:GSSBaASIter}, Lines 30-38 *\/\n \\STATE \/* Year Value Calculate *\/\n \\STATE $dResult \\leftarrow d(\\textrm{Type-3},\n 
\hat{Y}_{CMax}(p^U))$;\/*call $d(WinType(p^U),\n BoundVal(p^U))$*\/\n \STATE $\delta \leftarrow \hat{Y}_{CMax}(p^U)-dResult$;\n \IF{$\Omega(p^U) \cap V_P^K \neq \emptyset$}\n \STATE $\hat{Y}(p^U) \leftarrow W_G(p^U, \gamma, \hat{Y}_{CMax}(p^U)-2\delta, \hat{Y}_{CMax}(p^U))$\n \ENDIF\n \IF[In case $W_G$ does not work]{$\hat{Y}(p^U) = Null$}\n \IF{$\hat{Y}_{AMin}(p^U) > \hat{Y}_{CMax}(p^U)$ or $dResult \in (\hat{Y}_{GMin}(p^U), \hat{Y}_{GMax}(p^U))$}\n \STATE $\hat{Y}(p^U) \leftarrow dResult$\n \ELSE\n \STATE $\hat{Y}(p^U) \leftarrow \frac{\hat{Y}_{GMin}(p^U) + \hat{Y}_{GMax}(p^U)}{2}$;\n \ENDIF\n \ENDIF\n \ELSE\n \STATE \/* Type-4 Window in $G_P$: Case (b); *\/\n \ENDIF\n \ENDFOR\n\UNTIL{No update happens}\n\end{algorithmic}\n\end{algorithm}\n\nIn Algorithm~\ref{Alg:GAdvIter}, we omit the code for deriving\n$\hat{Y}_{GMin}(p^U), \hat{Y}_{GMax}(p^U)$, which is the same as in\nAlgorithm~\ref{Alg:GSSBaASIter} (Lines 11, 21, 37). At the beginning\n(Line 1), we call the function $G_P$-AA (Algorithm~\ref{Alg:GPAA})\nto derive $\hat{Y}_{CMin}(p^U), \hat{Y}_{CMax}(p^U)$ and the\ntraining set $\mathcal{T}$, a series of 3-tuples\n$\{y_{t}, WinType_{t}, BoundVal_{t}\}$ obtained from the papers with known\nyear information. The Consistent-Coauthor-Pair set $\Omega(p^U)$ of\neach missing year paper is also prepared, as in\n$G_{AP}$-AdvIter (Lines 2-4). The main difference\nbetween $G$-AdvIter and $G$-ASIter is the method of calculating the year\nvalue. 
For all three types of window in $G_P$, we apply the\n$W_G(p^U, \gamma, y_l, y_r)$ function to calculate the year value:\n\begin{eqnarray}\n\textrm{When}\;\Omega_G(p^U) &=& \{q|q\in\Omega(p^U), Y(q)\in[y_l,\ny_r]\},\textrm{and}\;\Omega_G(p^U)\cap V_P^K\n\neq\emptyset,\nonumber\\\nW_G(p^U,\gamma, y_l, y_r) &=& \frac{\sum_{q\in \Omega_G(p^U)\cap\nV_P^K}\;\;w(p^U, q)^\gamma \times Y(q)}{\sum_{q\in \Omega_G(p^U)\cap\nV_P^K} w(p^U, q)^\gamma};\nonumber\\\n\textrm{Otherwise,}\;W_G(p^U,\gamma, y_l, y_r) &=& Null. \label{eq:wG}\n\end{eqnarray}\n\nIn Eq.~(\ref{eq:wG}), what differs in $W_G$ is that we pick\nout a subset of papers from $\Omega(p^U)$, denoted by\n$\Omega_G(p^U)$, satisfying the condition that the paper publishing\nyears lie within an input window $[y_l, y_r]$, i.e., $\Omega_G(p^U)\n=\{q|q\in\Omega(p^U), Y(q)\in[y_l, y_r]\}$. For a Type-1 window of\n$G_P$, we choose the subset $\Omega_G(p^U)$ by setting the input\nwindow to $[y_l=\hat{Y}_{CMin}(p^U), y_r=\hat{Y}_{CMax}(p^U)]$\nfor calculating $\hat{Y}(p^U)$ (Line 14). But if $\Omega_G(p^U)\cap\nV_P^K = \emptyset$, we revert to the default calculation (Lines 16-18).\n\nThe process for a Type-2 or Type-3 window is a little more\ncomplicated. For a Type-2 window, both $\Omega(p^U)$ and $\mathcal{T}$\nare available tools. We therefore proceed as follows: we first derive\nthe estimated year value, denoted by $dResult$, through the $d(\cdot)$\nfunction expressed in Eq.~(\ref{eq:d_func}). We use this $dResult$\nand the input parameter $\hat{Y}_{CMin}(p^U)$ to define a window\n[$y_l=\hat{Y}_{CMin}(p^U)$, $y_r=\hat{Y}_{CMin}(p^U) + 2\delta]$,\nwhose length equals twice the distance from $dResult$ to\n$\hat{Y}_{CMin}(p^U)$, $\delta = dResult - \hat{Y}_{CMin}(p^U)$.\nThis window is then used to derive $\Omega_G(p^U)$ and calculate\n$\hat{Y}(p^U)$ (Lines 23-27). 
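As an illustration, the weighted calculation of Eq.~(\ref{eq:wG}) can be sketched in Python. The sketch and its names are ours; `pairs` stands in for the weight/year data of the consistent-coauthor papers with known years, and `None` plays the role of $Null$:

```python
def weighted_year(pairs, gamma, y_l, y_r):
    """Sketch of the W_G function: a weighted average of the known
    publication years, keeping only papers whose year falls inside
    the input window [y_l, y_r].

    `pairs` is a list of (weight, year) tuples, one per paper in
    Omega(p^U) with a known year. Returns None if no paper survives
    the window filter (the `Null` outcome of Eq. (wG)).
    """
    kept = [(w, y) for w, y in pairs if y_l <= y <= y_r]
    if not kept:
        return None
    numerator = sum(w ** gamma * y for w, y in kept)
    denominator = sum(w ** gamma for w, y in kept)
    return numerator / denominator


print(weighted_year([(1, 2000), (1, 2004)], 1, 1999, 2005))  # 2002.0
print(weighted_year([(3, 2000), (1, 2004)], 1, 1999, 2005))  # 2001.0
print(weighted_year([(1, 1990)], 1, 2000, 2005))             # None
```

Raising the weights to the power $\gamma$ lets stronger coauthor ties pull the estimate harder, as the second call shows.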
If $\Omega_G(p^U)\cap V_P^K =\n\emptyset$, we have a second choice, $dResult$, which is used if one of\nthe following two conditions is met: (a) the two windows\n$[\hat{Y}_{CMin}(p^U), \hat{Y}_{CMax}(p^U)]$ and\n$[\hat{Y}_{AMin}(p^U), \hat{Y}_{AMax}(p^U)]$ have no intersection;\nor (b) $dResult \in [\hat{Y}_{GMin}(p^U), \hat{Y}_{GMax}(p^U)]$\n(Lines 28-31). Otherwise, we revert to the default calculation (Line\n32).\n\nThe process for a Type-3 window is symmetric to Type-2. The only\ndifference is that the call for deriving $\Omega_G(p^U)$ and\n$\hat{Y}(p^U)$ becomes $W_G(p^U, \gamma,\ny_l=\hat{Y}_{CMax}(p^U)-2\delta, y_r=\hat{Y}_{CMax}(p^U))$ (Lines\n39-43).\n\n\n\section{Experiment Results}\label{Sec:exp}\nIn this section we present the experiment settings and the evaluation\nresults. We test the MYE algorithms proposed in\nthe last section by applying them to all three types of\nacademic social networks: the paper citation network $G_P$, the\npaper authorship network $G_{AP}$ and the heterogeneous network $G$.\n\subsection{Data Sets}\nWe use three data sets: the Microsoft academic data\nset~\cite{libra}; DBLP~\cite{dblp} with additional citation\ninformation, i.e., the DBLP-Cit data set~\cite{Tang:07ICDM,Tang:08KDD}; and the\nAmerican Physical Society (APS) data set~\cite{apsdataset}. 
The raw data\nsets are not perfect in that (a) a proportion of papers have\nmissing years, and (b) some citation links point from earlier\npublished papers to later ones, which\nbreaks Assumption~\ref{assumption1}.\n\nSince the performance evaluation needs ground truth knowledge, we\nhave to do some preprocessing on the original data sets:\na) remove the missing year papers and their relationships\n(citation links and paper-authorship links); b) remove the\ncitation links that break Assumption~\ref{assumption1}.\n\nTable \ref{Tab:datasets} lists the general information about the\nthree data sets after preprocessing:\n\begin{table}[htb]\n\centering\n\begin{tabular}{|c|c|c|c|}\n\hline Data set & Microsoft Libra & DBLP-Cit & APS \\\n\hline Input Window & (1900 - 2013) & (1900 - 2013) & (1900 - 2013)\\\n\hline \#papers & 2323235 & 1558503 & 463347\\\n\hline \#authors & 1278407 & 914053 & 320964\\\n\hline \#total citation links & 10003121 & 2062896 & 4689142\\\n\hline\n\end{tabular}\n\caption{General information of the three data sets used after\npreprocessing.} \label{Tab:datasets}\n\end{table}\n\nAs we can see in Table~\ref{Tab:datasets}, the average numbers of\ncitation links per paper in the three data sets are 4.31 for Libra,\n1.33 for DBLP-Cit and 10.34 for APS, which are quite disparate. This\nprobably reflects how well these three data sets are collected and\nmanaged. The APS data set is the most complete in terms of paper\ncitation information, and DBLP-Cit is probably the\nleast\footnote{DBLP~\cite{dblp} is a popular and well-managed data\nset, with complete and accurate meta information, but it does not\nprovide paper citation information. DBLP-Cit is created from the\noriginal DBLP paper set by adding paper citation relationships\nthrough proper mining methods~\cite{Tang:07ICDM,Tang:08KDD}.}. For\nDBLP-Cit, finding citation links for an existing paper set\nis a big challenge. 
The small average number of paper citation links\nsuggests that likely only a small proportion of the complete paper\ncitation links have been found.\n\nThe completeness and accuracy of the citation links will only affect\nthose MYE algorithms that rely on citation information, e.g., the\nthree algorithms for $G_P$.\n\n\subsection{Evaluation methodology}\nWe apply an approach similar to the K-fold cross validation\nmethod~\cite{mosteller1968data,kohavi1995study} to evaluate the MYE\nalgorithms. For each data set after pre-processing, we randomly\nsplit the paper set into $K$ mutually exclusive groups, i.e., $V_P =\n\bigcup_{k=1}^K V_{P_k}, \textrm{and}\;\forall i\neq j, V_{P_i}\cap\nV_{P_j} = \emptyset$. In addition, each group has approximately the\nsame size, $|V_{P_k}| \approx \frac{|V_P|}{K}, k = 1, 2, \dots, K$.\n\nFor a given parameter $K$, the experiment is repeated $K$ times. In the\n$j$th run, the year information of the papers in group $V_{P_j}$ is\nartificially hidden, so this group is taken as the missing year paper set,\n$V_P^U = V_{P_j}$, and the remaining groups become the paper set\nwith known year information, i.e., $V_P^K = V_P\setminus V_{P_j}$.\nThe overall performance metrics take the average of the results\nobtained in each of the $K$ runs.\n\nIndirectly, the value of $K$ controls the severity of the missing\nyear phenomenon. For convenience, we define $\eta =\n\frac{|V_P^U|}{|V_P|} \approx \frac{1}{K}$ to be the \emph{Missing\nYear Ratio} of the data set. Throughout the experiment, we have\ntried 5 different values, $\eta = \frac{1}{8}, \frac{1}{5}, \frac{1}{4},\n\frac{1}{3}, \frac{1}{2}$.\n\n\subsection{Performance metrics}\nThree metrics are used to evaluate the performance of the MYE\nalgorithms.\n\begin{enumerate}\n\item[1)] Coverage\n\nWe have defined the uncovered ratio in Section~\ref{Sec:method}. 
It\nequals the number of missing year papers finally labeled as\n\emph{Uncovered} by the MYE algorithms, divided by the total number of\nmissing year papers $|V_P^U|$. We use $N^U = |V_P^U| -\n\textrm{Total\#Uncovered}$ to denote the size of the covered part.\nIn one experiment, the coverage metric equals\n$\frac{N^U}{|V_P^U|}$. With K-fold cross validation, the overall\ncoverage becomes:\n\begin{equation}\nCoverage = \frac{1}{K}\sum_{k=1}^K \frac{N^U_k}{|V_{P_k}|},\n\end{equation}\nwhere the subscript $k$ indicates the $k$th iteration and $V_P^U =\nV_{P_k}$.\n\n\item[2)] Mean absolute error (MAE)\n\begin{equation}\nMAE = \frac{1}{K}\sum_{k=1}^K \bigg(\n\frac{1}{N^U_k}\sum_{i=1}^{N^U_k} |Y(p^U_i) - \hat{Y}(p^U_i)|\bigg),\n\end{equation}\nwhere in the $k$th iteration $V_P^U = V_{P_k}$, and $\hat{Y}(p^U_i)$ is\nthe estimated year. $Y(p^U_i)$ is the real year of $p^U_i$, which is\nassumed to be unknown when running the MYE algorithms and is used only\nfor validation purposes.\n\n\item[3)] Root mean square error (RMSE)\n\begin{equation}\nRMSE = \frac{1}{K}\sum_{k=1}^K\n\bigg(\sqrt{\frac{1}{N^U_k}\sum_{i=1}^{N^U_k} \big[Y(p^U_i) -\n\hat{Y}(p^U_i)\big]^2}\;\;\bigg).\n\end{equation}\n\end{enumerate}\n\nIn order to have a better understanding of the coverage metric, we\npropose an analytical model to calculate the expected coverage for\nan undirected graph $G = (V, E)$. 
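For concreteness, the per-fold terms of the MAE and RMSE definitions above can be sketched in Python. This is an illustration of ours; the function name and the plain lists of years are assumptions:

```python
import math


def mae_rmse(true_years, est_years):
    """Per-fold MAE and RMSE over the covered papers of one fold,
    i.e., the terms inside the outer 1/K sums of the equations above.

    `true_years` holds the real years Y(p^U_i), `est_years` the
    estimates Y_hat(p^U_i); both lists have length N^U_k.
    """
    n = len(true_years)
    mae = sum(abs(t - e) for t, e in zip(true_years, est_years)) / n
    rmse = math.sqrt(
        sum((t - e) ** 2 for t, e in zip(true_years, est_years)) / n
    )
    return mae, rmse


print(mae_rmse([2000, 2002], [2001, 2000]))  # (1.5, 1.5811...)
```

Averaging these per-fold values over the $K$ folds gives the overall MAE and RMSE.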
According to basic graph\ntheory~\cite{kleinburgNetwork}, $G$ can be partitioned into $S$\nconnected components $G = \bigcup_{i=1}^S G_i$, where $\forall i \neq j,\nG_i \cap G_j = \emptyset$.\n\nThe iteration mechanism of the MYE algorithms (e.g., $G_{AP}$-Iter\nor $G_{AP}$-AdvIter) ensures that there can be only two possible\noutcomes for any connected component $G_i = (V_i, E_i)$ when\npropagation stops\footnote{The outcome of $G_P$-AS and $G_P$-AA is a\nlittle more complicated; we discuss it later.}:\n\n(I) All the missing year papers in this component have feasible\nestimated values (hence, $\neq$ \emph{Uncovered}) if and only if\nthere exists at least one paper with known year information in this\ncomponent, i.e., $V_i \cap V_P^K \neq \emptyset$;\n\n(II) Otherwise, all the missing year papers in this component are\nlabeled as \emph{Uncovered}.\n\nIf we assume the missing year papers are uniformly distributed among\nthe whole paper set, then the expected coverage value can be\ncalculated by Eq.~(\ref{Eq:ExpCoverage}):\n\begin{equation}\label{Eq:ExpCoverage}\n Coverage\big(\eta, \bigcup_{i=1}^S V_i\big) = 1 -\n\frac{\sum_{i=1}^S\; \eta^{|V_i|}\cdot|V_i|}{\eta|V|}\;.\n\end{equation}\n\nThere are two inputs to the calculation in Eq.~(\ref{Eq:ExpCoverage}):\nthe year missing ratio $\eta$ and the vertex partition\n$V = \bigcup_{i=1}^S V_i$. Under the uniform distribution\nassumption, each paper is selected to be a missing year paper with\nequal probability $\eta$. Thus the denominator equals the\nexpected number of missing year papers, $|V_P^U| = \eta|V|$. For each\ncomponent $G_i$, $\eta^{|V_i|}$ is the probability that all the\npapers in it are missing year papers, and $\eta^{|V_i|}\cdot|V_i|$ is\nhence the expected number of papers that will be labeled as\n\emph{Uncovered}.\n\nThe above model cannot be applied directly to the three\ntypes of academic social networks. 
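Eq.~(\ref{Eq:ExpCoverage}) is straightforward to evaluate once the component sizes are known; a Python sketch (the function name is ours):

```python
def expected_coverage(eta, component_sizes):
    """Expected coverage of Eq. (ExpCoverage).

    A component of size s has all of its papers missing their year
    with probability eta**s, in which case all s of them end up
    Uncovered; eta * |V| is the expected number of missing year
    papers overall.
    """
    total = sum(component_sizes)  # |V|
    uncovered = sum(eta ** s * s for s in component_sizes)
    return 1 - uncovered / (eta * total)


# Two isolated papers with eta = 0.5: a hidden paper is always alone
# in its component, so every hidden paper is Uncovered.
print(expected_coverage(0.5, [1, 1]))  # 0.0
# One component of size 2: coverage improves to 0.5.
print(expected_coverage(0.5, [2]))    # 0.5
```

The two toy calls illustrate why larger connected components raise the expected coverage.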
To apply it, we make\nproper modifications: (1) based on the citation network $G_P = (V_P,\nE_P)$, we construct $G_P'= (V_P, E_P')$ by treating all the\ncitation edges as undirected, where $E_P'$ is the\nundirected edge set; (2) based on the paper authorship network\n$G_{AP} = (V_A \cup V_P, E_{AP})$, we build a coauthor indicator\ngraph $G_{AP}' = (V_P, E_{PP})$, where an edge\nbetween two papers in $G_{AP}'$ indicates that they have at least\none common author, i.e., for $i, j \in V_P$, $e_{i,j} \in E_{PP}\n\Leftrightarrow A(i)\cap A(j) \neq \emptyset$, where $A(i)$ is the\nauthor set of paper $i$; (3) for the heterogeneous network $G$, by\nsimply combining $G_P'$ and $G_{AP}'$, we obtain $G' = (V_P,\nE_P'\cup E_{PP})$. Now the analytical model can be applied to\n$G_P'$, $G_{AP}'$ and $G'$ to calculate the expected coverage.\n\n\subsection{Experiment results in the citation network $G_P$}\nThe first set of experiments is conducted on the citation network\n$G_P = (V_P, E_P)$. 
The coverage, MAE and RMSE results of algorithms\n$G_P$-SS, $G_P$-AS and $G_P$-AA are plotted in\nFigure~\\ref{Fig:GPLibraDBLPAPS}.\n\\begin{figure*}[hbt]\n \\centering\n \\subfigure[Coverage-Libra]{\n \\label{Fig:GPCovLibra}\n \\includegraphics[width=1.4in]{Gp_Coverage_libra.eps}}\n \n \\subfigure[Coverage-DBLP]{\n \\label{Fig:GPCovDBLP}\n \\includegraphics[width=1.4in]{Gp_Coverage_dblp.eps}}\n \n \\subfigure[Coverage-APS]{\n \\label{Fig:GPCovAPS}\n \\includegraphics[width=1.4in]{Gp_Coverage_aps.eps}}\n \n \\subfigure[MAE-Libra]{\n \\label{Fig:GPMaeLibra}\n \\includegraphics[width=1.4in]{Gp_MAE_libra.eps}}\n \n \\subfigure[MAE-DBLP]{\n \\label{Fig:GPMaeDBLP}\n \\includegraphics[width=1.4in]{Gp_MAE_dblp.eps}}\n \n \\subfigure[MAE-APS]{\n \\label{Fig:GPMaeAPS}\n \\includegraphics[width=1.4in]{Gp_MAE_aps.eps}}\n \n \\subfigure[RMSE-Libra]{\n \\label{Fig:GPRmseLibra}\n \\includegraphics[width=1.4in]{Gp_RMSE_libra.eps}}\n \n \\subfigure[RMSE-DBLP]{\n \\label{Fig:GPRmseDBLP}\n \\includegraphics[width=1.4in]{Gp_RMSE_dblp.eps}}\n \n \\subfigure[RMSE-APS]{\n \\label{Fig:GPRmseAPS}\n \\includegraphics[width=1.4in]{Gp_RMSE_aps.eps}}\n \n \\caption{The Coverage, MAE and RMSE of algorithms $G_P$-SS (Simple Window Derivation and Simple Value Calculation),\n $G_P$-AS (Advanced Window Derivation and Simple Value Calculation) and\n $G_P$-AA (Advanced Window Derivation and Advanced Value Calculation) in a paper citation network $G_P$ of three data sets}\n \\label{Fig:GPLibraDBLPAPS}\n\\end{figure*}\n\nAs shown in Figure~\\ref{Fig:GPLibraDBLPAPS}, we have the following\nobservations:\n\\begin{enumerate}\n\\item[1)] For all the three algorithms, when $\\eta$\nincreases, coverage decreases while both MAE and RMSE increase. 
This\nimplies that more available information helps to obtain better\nestimation results: higher coverage and smaller estimation error.\n\n\item[2)] In Fig.~\ref{Fig:GPCovLibra}-\ref{Fig:GPCovAPS},\nthe curve of $G_P$-AS overlaps with that of $G_P$-AA, and both have\nbetter coverage than $G_P$-SS. This is consistent with what we have\ndiscussed in Section~\ref{Sec:method} ($G_P$-AS and $G_P$-AA use the\nsame advanced window derivation method). However, all\nthree coverage curves show a certain deviation from the curve\n(with nodes of red ``X'' in\nFig.~\ref{Fig:GPCovLibra}-\ref{Fig:GPCovAPS}) obtained by the\nanalytical model in\nEq.~(\ref{Eq:ExpCoverage}).\\\n\\\nThe reason is that the analytical model overestimates the number of\ncovered papers for $G_P$-AS and $G_P$-AA. Recall from\nSection~\ref{Sec:method} that the window propagation method in $G_P$\ndiffers from the iteration scheme of $G_{AP}$-Iter and\n$G_{AP}$-AdvIter in that it follows the bound transmission rules in\nEq.~(\ref{assumption1New}) and does not utilize the estimation results\nof previous rounds. As a result, outcome (I) discussed above\nmay not always be true, while (II) remains true. We use a simple,\ntypical example to illustrate this. As shown in\nFig.~\ref{Eq:ExpException}, there are three papers ($a$, $b$, $c$)\nand two citation links, where only one paper, $b$, has year\ninformation while the other two are missing year papers.\nFig.~\ref{Eq:ExpException} plots all 7 possible topologies.\n\begin{figure}[hbt]\n \centering\n \includegraphics[width=2.4in]{CW3Top.eps}\n \caption{An example of a paper citation network with three papers\n ($a, b, c$) and two citation links. When two papers ($a$ and $c$)\n are missing year information, there are in total 7 possible topologies.}\n \label{Eq:ExpException}\n\end{figure}\n\nAccording to outcome (I) of the analytical model, neither $a$ nor\n$c$ will be labeled as \emph{Uncovered}. 
However, in\nFig.~\ref{Eq:ExpException} paper $a$ in case (6) and paper $c$ in\ncase (7) get an \emph{Uncovered} result when the advanced window\nderivation method in Eq.~(\ref{assumption1New}) is applied. Accounting\nfor such cases would yield a more precise analytical model for the\ncitation network; however, it would be too complicated. Therefore, we\nstick to the current method in Eq.~(\ref{Eq:ExpCoverage}) as an upper\nbound for the coverage achieved in the citation network $G_P$.\n\n\item[3)] $G_P$-AA outperforms the other two for all network types\nand data sets in terms of both coverage and estimation accuracy (MAE\nand RMSE).\n\n\item[4)] Comparing the three data sets, we find that the coverage\non the APS data is much higher than on the other two, and DBLP-Cit is\nthe lowest. This is mainly caused by the different completeness of the\ncitation information of the three data sets, mentioned at the beginning of\nthis section. Since APS maintains very complete and accurate\ncitation information, both coverage and accuracy benefit for\nMYE in the paper citation network (Fig.~\ref{Fig:GPCovAPS},\nFig.~\ref{Fig:GPMaeAPS}, Fig.~\ref{Fig:GPRmseAPS}).\n\n\item[5)] In Fig.~\ref{Fig:GPCovLibra} and Fig.~\ref{Fig:GPCovDBLP},\nthe coverage in the Libra case is higher than for DBLP-Cit; however, its MAE\nand RMSE are at similar levels (or worse, e.g., $G_P$-AA in\nFig.~\ref{Fig:GPMaeLibra} and Fig.~\ref{Fig:GPMaeDBLP}, and all the\ncurves in Fig.~\ref{Fig:GPRmseLibra} versus\nFig.~\ref{Fig:GPRmseDBLP}). One possible reason is that,\nquantitatively, Libra has more complete paper citation information\nthan DBLP-Cit, but, qualitatively, the correctness of the Libra data may\nbe worse. 
We summarise this in Table~\ref{Tab:DataQuaGP}.\n\begin{table}[htb]\n\centering\n\begin{tabular}{|c|c|}\n\hline\n\multicolumn{2}{|c|}{MYE performance in $G_P$}\\\n\hline Coverage & APS $>$ Libra $>$ DBLP\\\n\hline MAE\/RMSE & APS $<$ DBLP $<$ Libra\\\n\hline\n\multicolumn{2}{|c|}{Inferred data quality of paper citation information}\\\n\hline Completeness & APS $>$ Libra $>$ DBLP\\\n\hline Correctness & APS $>$ DBLP $>$ Libra\\\n\hline\n\end{tabular}\n\caption{Summary of the data quality of paper citation information of the\nthree data sets, inferred from MYE performance in $G_P$.}\n\label{Tab:DataQuaGP}\n\end{table}\n\end{enumerate}\n\n\subsection{Experiment results for the paper authorship network $G_{AP}$}\nThe second set of experiments is conducted on the paper author\nbipartite network $G_{AP} = (V_A \cup V_P, E_{AP})$. The coverage,\nMAE and RMSE results of the algorithms $G_{AP}$-Ba (the basic scheme),\n$G_{AP}$-Iter (simple iteration of the basic scheme) and\n$G_{AP}$-AdvIter (iteration considering\nConsistent-Coauthor-Pair information) are plotted in\nFigure~\ref{Fig:GAPLibraDBLPAPS}. 
Our observations are:\n\begin{figure*}[hbt]\n \centering\n \subfigure[Coverage-Libra]{\n \label{Fig:GAPCovLibra}\n \includegraphics[width=1.4in]{Gap_Coverage_libra.eps}}\n \n \subfigure[Coverage-DBLP]{\n \label{Fig:GAPCovDBLP}\n \includegraphics[width=1.4in]{Gap_Coverage_dblp.eps}}\n \n \subfigure[Coverage-APS]{\n \label{Fig:GAPCovAPS}\n \includegraphics[width=1.4in]{Gap_Coverage_aps.eps}}\n \n \subfigure[MAE-Libra]{\n \label{Fig:GAPMaeLibra}\n \includegraphics[width=1.4in]{Gap_MAE_libra.eps}}\n \n \subfigure[MAE-DBLP]{\n \label{Fig:GAPMaeDBLP}\n \includegraphics[width=1.4in]{Gap_MAE_dblp.eps}}\n \n \subfigure[MAE-APS]{\n \label{Fig:GAPMaeAPS}\n \includegraphics[width=1.4in]{Gap_MAE_aps.eps}}\n \n \subfigure[RMSE-Libra]{\n \label{Fig:GAPRmseLibra}\n \includegraphics[width=1.4in]{Gap_RMSE_libra.eps}}\n \n \subfigure[RMSE-DBLP]{\n \label{Fig:GAPRmseDBLP}\n \includegraphics[width=1.4in]{Gap_RMSE_dblp.eps}}\n \n \subfigure[RMSE-APS]{\n \label{Fig:GAPRmseAPS}\n \includegraphics[width=1.4in]{Gap_RMSE_aps.eps}}\n \caption{The coverage, MAE and RMSE of algorithms $G_{AP}$-Ba,\n $G_{AP}$-Iter and $G_{AP}$-AdvIter in the paper author bipartite network $G_{AP}$ of the three data sets}\n \label{Fig:GAPLibraDBLPAPS}\n\end{figure*}\n\n\begin{enumerate}\n\item[1)] In Fig.~\ref{Fig:GAPCovLibra}-\ref{Fig:GAPCovAPS},\nthe curve of $G_{AP}$-Iter overlaps with that of $G_{AP}$-AdvIter\nand has better coverage than $G_{AP}$-Ba. As discussed\nbefore (Section~\ref{Sec:method}), $G_{AP}$-Iter and\n$G_{AP}$-AdvIter utilize the estimation results of previous\nrounds in later iterations (information propagation), which\nleads to the higher coverage results. 
In addition, the curves of\n$G_{AP}$-Iter and $G_{AP}$-AdvIter match quite well with the expected\nvalues generated by the analytical model.\n\n\item[2)] In Fig.~\ref{Fig:GAPMaeLibra}-\ref{Fig:GAPRmseAPS}, which\nconcern estimation accuracy, we find that $G_{AP}$-Iter obtains a\nworse MAE than $G_{AP}$-Ba. This meets our expectation\n(Section~\ref{Sec:method}) that the simple iteration scheme of\n$G_{AP}$-Iter spreads inaccuracy during the information\npropagation.\n\n\item[3)] $G_{AP}$-AdvIter performs much better than\nthe other two in both coverage and accuracy. For all different\n$\eta$, $G_{AP}$-AdvIter consistently achieves around a $10\%$\nimprovement in the MAE measures and $6\%$ in the RMSE measures.\n\n\item[4)] If we compare the MAE curves of the three data sets in\nFig.~\ref{Fig:GAPMaeLibra}-\ref{Fig:GAPMaeAPS}, the same algorithm\ngenerates the best MAE on the DBLP-Cit data set, the worst on the APS data\nset and intermediate results on the Libra data set. This result indirectly\nreflects the data quality (of the paper-author relationship) of the\nthree data sets, summarized in Table~\ref{Tab:DataQuaGAP}. As is\nwidely known, the original DBLP data set (with no citation\ninformation) is well managed and hence maintains the most complete\nand accurate paper-author\/paper-venue relationships~\cite{dblp}.\nLibra is an object-level data set; the text-to-object\nconversion had been done before we obtained it. Unlike the paper\ncitation links, the APS data set only provides pure text information on\npaper-author relationships; therefore, the text-to-object task was\ndone by ourselves with a simple text-based matching scheme, which\ninevitably introduces errors into $G_{AP}$. 
In fact, this\ninvolves several difficult and actively studied research problems in the\ncommunity, for example the Author-Paper Identification Challenge and\nthe Author Disambiguation Challenge in~\cite{kddcup2013}.\n\n\begin{table}[htb]\n\centering\n\begin{tabular}{|c|c|}\n\hline\n\multicolumn{2}{|c|}{MYE performance in $G_{AP}$}\\\n\hline Coverage & DBLP $\approx$ Libra $\approx$ APS\\\n\hline MAE\/RMSE & DBLP $<$ Libra $<$ APS\\\n\hline\n\multicolumn{2}{|c|}{Inferred data quality of paper-author relationship}\\\n\hline Completeness & DBLP $\approx$ Libra $\approx$ APS\\\n\hline Correctness & DBLP $>$ Libra $>$ APS\\\n\hline\n\end{tabular}\n\caption{Summary of the data quality of the paper-author relationship of the\nthree data sets, inferred from MYE performance in $G_{AP}$.}\n\label{Tab:DataQuaGAP}\n\end{table}\n\end{enumerate}\n\n\subsection{Experiment results for the heterogeneous network $G$}\nThe last set of experiments is conducted on the heterogeneous\nnetwork $G = (G_P \cup G_{AP})$, which consists of both the paper\ncitation network and the paper author bipartite network. 
The\ncoverage, MAE and RMSE results of algorithms $G$-SSBa (combination\nof $G_P$-SS and $G_{AP}$-Ba), $G$-ASIter (combination of $G_P$-AS\nand $G_{AP}$-Iter) and $G$-AdvIter (combination of $G_P$-AA and\n$G_{AP}$-AdvIter) are plotted in Figure~\ref{Fig:GLibraDBLPAPS}.\n\begin{figure*}[hbt]\n \centering\n \subfigure[Coverage-Libra]{\n \label{Fig:GCovLibra}\n \includegraphics[width=1.4in]{G_Coverage_libra.eps}}\n \n \subfigure[Coverage-DBLP]{\n \label{Fig:GCovDBLP}\n \includegraphics[width=1.4in]{G_Coverage_dblp.eps}}\n \n \subfigure[Coverage-APS]{\n \label{Fig:GCovAPS}\n \includegraphics[width=1.4in]{G_Coverage_aps.eps}}\n \n \subfigure[MAE-Libra]{\n \label{Fig:GMaeLibra}\n \includegraphics[width=1.4in]{G_MAE_libra.eps}}\n \n \subfigure[MAE-DBLP]{\n \label{Fig:GMaeDBLP}\n \includegraphics[width=1.4in]{G_MAE_dblp.eps}}\n \n \subfigure[MAE-APS]{\n \label{Fig:GMaeAPS}\n \includegraphics[width=1.4in]{G_MAE_aps.eps}}\n \n \subfigure[RMSE-Libra]{\n \label{Fig:GRmseLibra}\n \includegraphics[width=1.4in]{G_RMSE_libra.eps}}\n \n \subfigure[RMSE-DBLP]{\n \label{Fig:GRmseDBLP}\n \includegraphics[width=1.4in]{G_RMSE_dblp.eps}}\n \n \subfigure[RMSE-APS]{\n \label{Fig:GRmseAPS}\n \includegraphics[width=1.4in]{G_RMSE_aps.eps}}\n \n \caption{The Coverage, MAE and RMSE of algorithms $G$-SSBa,\n $G$-ASIter and $G$-AdvIter in the heterogeneous network $G$ of the three data sets.}\n \label{Fig:GLibraDBLPAPS}\n\end{figure*}\n\nWe make three observations according to the results shown in\nFig.~\ref{Fig:GLibraDBLPAPS}:\n\begin{enumerate}\n\item[1)] All the curves have shapes similar to those in\nFig.~\ref{Fig:GPLibraDBLPAPS} and Fig.~\ref{Fig:GAPLibraDBLPAPS}, but\nthe results in Fig.~\ref{Fig:GLibraDBLPAPS} have the highest coverage\nand the smallest MAE and RMSE. 
This shows the advantage of using\nheterogeneous information (both paper citation and paper-author\nrelationships) and of properly combining the MYE algorithms in\n$G_P$ and $G_{AP}$.\n\n\item[2)] In Fig.~\ref{Fig:GCovLibra}-\ref{Fig:GCovAPS},\nthere appear certain deviations (although milder than those in\nFig.~\ref{Fig:GPCovLibra}-\ref{Fig:GPCovAPS}) between the coverage\ncurves of $G$-ASIter and $G$-AdvIter and the curve generated by the\nanalytical model. This is again due to the overestimation of the\nexpected number of papers covered by the citation network\ninformation, since $G$-ASIter and $G$-AdvIter are the combinations\ninvolving $G_P$-AS and $G_{P}$-AA, respectively.\n\n\item[3)] $G$-AdvIter outperforms the other two in both\ncoverage and accuracy (with around an $8\%$ improvement in MAE and\n$5\%$ in RMSE for the different $\eta$).\n\end{enumerate}\n\n\n\n\section{Related Works}\label{Sec:related}\nIn network analysis, early studies focused on the structural\ncharacteristics of missing data, e.g.,~\cite{kossinets2006effects}.\n\cite{borgatti2006robustness} studied the impact of measurement\nerrors on random Erd\H{o}s-R\'{e}nyi networks. A more recent work\nby~\cite{wang2012measurement} reclassifies measurement errors,\nseparating missing data from false data, and then analyzes their effects on\ndifferent topological properties of an online social network and a\npublication citation network. However, only a few works study techniques to\ncorrect measurement errors.\n\nVariants of the well-known PageRank~\cite{brin1998anatomy} and\nHITS~\cite{kleinberg1999authoritative} algorithms are often used in\nsocial network analysis. \cite{nachenberg2010polonium} use an\niterative Belief Propagation algorithm to identify malware from a\nlarge collection of files and machines. 
\cite{zhu2005semi} study the
propagation of two or more competing labels on a graph, using
semi-supervised learning methods.

Temporal information is frequently used in studies of academic
networks, e.g.,~\cite{chiu2010publish,fu2013asn}. In research on academic
rankings,~\cite{stringer2008effectiveness} found that nearly all journals
reach a steady state of citation distribution within a
journal-specific time scale, and thus proposed a model for ranking
paper impact using citation counts. To solve the tricky problem of
name disambiguation in digital libraries,~\cite{tang2012unified}
utilized the multi-hop co-author relationship and its special
property of time-dependence. \cite{wang2010mining} proposed a
time-constrained probabilistic factor graph model to mine the
highly time-dependent advisor-advisee relationship on the
collaboration network.

The topic of community evolution also attracts much attention.
\cite{blei2006dynamic} used state space models on the natural
parameters of the multinomial distributions to represent the dynamic
evolution of topics. \cite{iwata2010online} developed a continuous
time dynamic model to mine latent topics from a sequential
collection of documents. \cite{gupta2011evolutionary} proposed an
algorithm that integrates clustering and evolution diagnosis of
heterogeneous bibliographic information networks.
\cite{lin2011joint} track the evolution of an arbitrary topic and
reveal the latent diffusion paths of that topic in a social
community. \cite{li2012adding} addressed the community detection
problem by integrating dynamics and communities into topic
modeling algorithms, and experimented on the scholarly publication
data set ArnetMiner~\cite{Tang:08KDD}; more recently,
\cite{wu2013PAaging} generalized the preferential attachment model to
take the aging factor into account.

Recently, data cleaning on academic social networks has received much
attention.
In KDD Cup 2013, the two challenges were the Author-Paper
Identification Challenge and the Author Disambiguation Challenge. For
both challenges, the publishing year of each paper is
important background knowledge and affects the design of the
algorithms. However, the given data set~\cite{kddcup2013} has a high
\emph{Missing Year Ratio}, $ \eta = \frac{155784}{2257249}\approx
6.90\%$. This practical example illustrates the importance of the
MYE problem and provides good motivation for this work.


\section{Conclusions}\label{Sec:conclusion}
In this paper, we address the problem of recovering papers' missing
publication years in the Academic Social Network (ASN). We have
considered three possible networks for estimating missing
years: the paper citation network, the paper-author bipartite
network and the heterogeneous network (the combination of the
previous two). In each network, we first propose a simple algorithm
that serves as a benchmark. Next, another algorithm involving an
information propagation mechanism is proposed; the propagation
mechanism helps to increase the estimation coverage ratio. Finally,
an advanced propagation-based algorithm is proposed, and in each of
the three networks the advanced algorithm outperforms the other
algorithms, achieving at least an $8\%$ improvement in MAE and $5\%$
in RMSE. In addition, the coverage achieved by the advanced
algorithms matches well the results derived from the analytical model.


\bibliographystyle{spbasic}

\section{Introduction}

Recent results in precision cosmology have created a number of puzzles. Among the
unexplained phenomena are several apparent coincidences in which the energy densities
of two components are comparable despite having different redshift properties.
Another
is the existence of a negative-pressure cosmological fluid that accelerates the rate
of expansion of the universe. The model of mass-varying neutrinos (MaVaNs),
introduced by Fardon et al. in~\cite{Fardon:2003eh}, suggests that the neutrino and
dark energy densities have tracked each other throughout the lifetime of the
universe through a new scalar field called the acceleron. The energy density of
the scalar potential of this new field contributes to the dark energy. The authors
showed that this new field is capable of explaining the present cosmological expansion.

Since the original MaVaN proposal, several authors have investigated the stability
of the \emph{dark sector}, composed of neutrinos and the scalar field, under perturbations
to the neutrino density~\cite{Afshordi:2005ym,Takahashi:2006jt}. Both groups subject the
model to a hydrodynamic analysis. In this approximate picture, the speed of sound squared
in the cosmological fluid, given by~\cite{Mukhanov:1990me},
\begin{equation}
c_s^2=\frac{\dot{P}}{\dot{\rho}}=w+\frac{\dot{w}\rho}{\dot{\rho}}=w-\frac{\dot{w}}{3H(1+w)}
\end{equation}
(where the last equality uses the continuity equation $\dot{\rho}=-3H(1+w)\rho$),
is positive only if
\begin{equation}
\frac{\partial w}{\partial z} \geq -\frac{3w(1+w)}{1+z}
\end{equation}
where $z$ is the redshift. However, for nonrelativistic neutrinos,
$\frac{\partial w}{\partial z}$ is
a negative quantity, while the right-hand side is positive for $w$ close to $-1$. Since at
least one neutrino must be nonrelativistic today, the dark sector appears unstable.

Additionally, Afshordi et al.\ perform a stability analysis using kinetic theory to
account for neutrino streaming~\cite{Afshordi:2005ym}. This analysis also
shows that perturbations in the neutrino field become unstable when the mass of the
neutrino is of order the neutrino temperature.
The ratio of mass to temperature at which
the instability occurs is a function of the acceleron potential, but is approximately
7 for the potentials considered in this paper. Afshordi et al.\ also examine the result
of a phase transition in a neutrino component, and suggest that the
unstable neutrino field may rapidly form nonlinear structures termed
\emph{neutrino nuggets}, which redshift as dark matter and cannot drive the cosmic
expansion.

In this paper, we will assume that the neutrinos are initially relativistic. As
the universe expands and cools, the neutrinos become less relativistic until
their mass and temperature are approximately equal.
At this point, we assume
that they decouple from the scalar field into some structure such as the one described
by Afshordi et al. The resulting dark matter will not provide a large contribution
to the energy density, and we will not include this contribution in the dark sector
energy. Note that in this paper the dark sector will always refer to the energy density
contributions of the stable neutrinos and the scalar field.

In a model with multiple neutrinos, the dark energy may still be driven by relativistic
species even after the heavier components have become unstable.
However, the coupling between the neutrinos and the acceleron creates a feedback mechanism.
The shift in the acceleron expectation value when a neutrino becomes unstable can
be sufficient to change the mass of another species so that it goes from
relativistic to nonrelativistic, causing it to become unstable as well.
This
\emph{cascaded instability} can make all neutrinos unstable at about the same time
that the heaviest neutrino becomes unstable.
This paper will examine a particular class of models, and will show that see-saw MaVaN
models with flat scalar potentials suffer from precisely this problem.
The timing between the instabilities in successive neutrinos is strongly dependent on the
flatness of the scalar potential, but a flat potential is also required to generate
the observed dark energy. This result may point toward models that do not suffer
from a cascaded instability, and this paper concludes with a simple example.

\section{See-Saw Models}

Consider a model with $n$ active neutrinos in which each neutrino is paired with
a sterile counterpart, which is coupled to a new scalar field,
\begin{equation}
-L \supset \sum_{i=1}^n (M^\prime_i \nu_i N_i + \lambda A N_i N_i)
\end{equation}
where $\nu_i$ are the active neutrinos, $N_i$ are the sterile neutrinos, and $A$ is
the acceleron scalar field. $M^\prime_i$ and $\lambda$ describe coupling strengths. If we
assume that the normal see-saw limit holds, $\langle \lambda A\rangle \gg M^\prime_i$, the
system reduces to an effective Lagrangian describing active neutrinos with a
Majorana mass term,
\begin{equation}
-L_{eff} \supset \sum_{i=1}^n \frac{M_i^2}{A} \nu_i \nu_i \equiv \sum_{i=1}^n m_i \nu_i \nu_i
\end{equation}
where $M_i^2=M^{\prime 2}_i/\lambda$, and $m_i$ is the effective mass.

The energy density of the dark sector is
\begin{equation}
\rho_d=\rho_\nu+V(A)
\label{eq_rho}
\end{equation}
where $V$ is the potential of the acceleron field.
Assuming that the neutrino
distribution function is a stretched thermal distribution, the value of the equation of
state for this dark sector is (one derivation is given in~\cite{Peccei:2004sz})
\begin{equation}
w=\frac{p_d}{\rho_d}=
\frac{T^4 \sum_i \left[ 4
 F\left(\frac{m_{i}}{T}\right)-
 J\left(\frac{m_{i}}{T}\right)\right]}{3 \left[T^4 \sum_i
 F\left(\frac{m_{i}}{T}\right)
 +V(A)\right]}-1
\label{eq_w}
\end{equation}
where $i$ runs over the active neutrino species, and we have defined the distribution
function and its derivative,
\begin{eqnarray}
F(x)&=&\frac{1}{\pi^2}\int_0^\infty \frac{y^2\sqrt{y^2+x^2}\,dy}{e^y+1}
\\
J(x)&=&\frac{x^2}{\pi^2}\int_0^\infty \frac{y^2\,dy}{\sqrt{y^2+x^2}(e^y+1)}
\end{eqnarray}
Fardon et al.\ show
in~\cite{Fardon:2003eh} that the system remains very close to the minimum of the
effective potential and evolves adiabatically. For the model above, this minimization
condition becomes
\begin{equation}
\frac{\partial V}{\partial A}=\sum_{i=1}^n\int_0^{\infty}\frac{dy}{\pi^2}\frac{m_{i} T^2 y^2}{(y^2+\frac{m_{i}^2}{T^2})^\frac{1}{2}}\frac{1}{1+e^y}\left(-\frac{\partial m_{i}}{\partial A}\right)
\label{eq_min}
\end{equation}
where $y=\frac{p_\nu}{T}$.

Commonly employed forms for the potential include a small power law ($V=BA^k$, $k\ll 1$),
a logarithmic form ($V=B\log(A/A_0)$) and a quadratic form ($V=BA^2$). After starting with the
small power law case, it will be easy to generalize to all cases by assuming
$\partial V/\partial A=BkA^{k-1}$ with $B$ and $k$ unrestricted.

\section{Approximations}

The expectation value of the acceleron at a particular value of $z$ is determined by
the minimization equation~(\ref{eq_min}). Unfortunately, this equation in general
does not have a closed-form solution.
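It can, however, be solved numerically. The sketch below is a toy illustration for a single neutrino species with the power-law potential $V=BA^k$, bisecting on the difference between the two sides of equation~(\ref{eq_min}); the parameter values are arbitrary and are not taken from the text.

```python
import numpy as np

# Illustrative toy parameters (arbitrary units; NOT values from the paper)
M, T, B, k = 1.0, 0.5, 1e-3, 0.1
y = np.linspace(1e-6, 50.0, 100001)            # grid for the y = p_nu/T integral

def neutrino_term(A):
    """Right-hand side of the minimization equation for m = M^2/A."""
    m = M**2 / A
    integrand = m * T**2 * y**2 / (np.sqrt(y**2 + (m / T)**2) * (1.0 + np.exp(y)))
    # trapezoidal rule, written out explicitly
    integral = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y)))
    return integral / np.pi**2 * (M**2 / A**2)  # (-dm/dA) = M^2/A^2

def dV_dA(A):
    """Left-hand side for the power-law potential V = B A^k."""
    return B * k * A**(k - 1.0)

# The neutrino term dominates at small A and dV/dA at large A,
# so bisect geometrically in A until the two sides balance.
lo, hi = 1e-3, 1e3
for _ in range(200):
    mid = np.sqrt(lo * hi)
    if dV_dA(mid) > neutrino_term(mid):
        hi = mid
    else:
        lo = mid
A_star = np.sqrt(lo * hi)                       # self-consistent <A>
```

With these toy numbers the solver lands in the relativistic regime ($m/T$ of order $0.1$), where the closed-form approximations developed next apply.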
To examine the behavior of this equation it is
useful to approximate the result in three ranges: relativistic ($m_\nu \ll T$),
quasirelativistic ($m_\nu \sim T$), and nonrelativistic ($m_\nu \gg T$). In these
approximations, the minimization equation becomes
\begin{eqnarray}
\label{eq_approxr}
R: & \frac{\partial V}{\partial A} \simeq \frac{1}{A^3}\frac{M_i^4 T^2}{\pi^2}I_1\\
\label{eq_approxnr}
NR: & \frac{\partial V}{\partial A} \simeq \frac{1}{A^2}\frac{M_i^2 T^3}{\pi^2}I_2\\
\label{eq_approxqr}
QR: & \frac{\partial V}{\partial A} \simeq \frac{1}{A^3}\frac{M_i^4 T^2}{\pi^2}I_3
\end{eqnarray}
with the unitless $O(1)$ integrals
\begin{eqnarray}
I_1&=&\int_0^\infty dy\, \frac{y}{1+e^y} \simeq 0.822 \\
I_2&=&\int_0^\infty dy\, \frac{y^2}{1+e^y} \simeq 1.803 \\
I_3&=&\int_0^\infty dy\, \frac{y^2}{(1+e^y)\sqrt{1+y^2}} \simeq 0.670
\end{eqnarray}

A comparison of these approximations to the unapproximated numerical solution for the
two-neutrino model discussed in the next section is shown in Figure~\ref{fig_approx1}.
This figure shows the contribution of the derivative of the neutrino term to
the minimization equation, i.e., the right-hand sides of
equations~(\ref{eq_approxr})--(\ref{eq_approxqr}).
The relativistic and nonrelativistic approximations (dashed curves) are a good
match to the numerically calculated value in their respective regions of validity
at high and low redshift.
\begin{figure}[tbh]
\includegraphics[width=8cm]{mav7-k0.1-approx.eps}
\caption{The derivative of the neutrino energy density with respect to $A$.
The solid line is a numerically determined result, and the dashed lines are approximations in the nonrelativistic (low $z$) and relativistic (high $z$) regimes.}
\label{fig_approx1}
\end{figure}

\section{Two Active Neutrinos, Small Power Law}

As a warm-up, consider the case of two active neutrinos, with $m_1 \gg m_2$, and
a power law potential with $0<k\ll 1$.

[\ldots]

If $2\alpha h^2>g^2A^2$, then there are two minima at
$\pm\sqrt{\frac{2\alpha h^2-g^2A^2}{2\alpha^2}}$. Otherwise, there is a single
``false minimum'' at $0$. Forcing the field into this false minimum by requiring
\begin{equation}
A>\sqrt{2\alpha}h/g
\end{equation}
the potential becomes
\begin{equation}
V \rightarrow b^2A^2+h^4
\label{eq_hybridv}
\end{equation}
which includes a cosmological-constant type term $h^4$. This term dominates the
acceleron contribution to the potential if $h\gg \sqrt{2\alpha}b/g$,
and from equations~(\ref{eq_rho}) and~(\ref{eq_w}) may also dominate over the neutrino
contribution to the energy density. If we also assume the
see-saw condition, $A\gg m_i/\lambda$, then the model described in the previous sections
can be used without any change other than using the form of the potential in
equation~(\ref{eq_hybridv}).

The quadratic dependence on $A$ in equation~(\ref{eq_hybridv}) means that the stability
condition for the lighter neutrinos in equation~(\ref{eq_kcond}) is easily satisfied.
The allowed parameter range is quite large, and it is straightforward to find coefficients
that produce observationally allowed values of the neutrino masses and equation of state.
The numerical simulation of the evolution of one such model, with a hierarchy of masses
and $h=0.06$, is shown in Figure~\ref{fig_hybridw1}.
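The quoted values of the dimensionless integrals $I_1$--$I_3$ above are straightforward to verify numerically (in closed form, $I_1=\pi^2/12$ and $I_2=\tfrac{3}{2}\zeta(3)$); a quick check:

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal rule, written out to avoid NumPy version differences."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# The integrands decay like exp(-y), so truncating at y = 60 is ample.
y = np.linspace(1e-8, 60.0, 400001)
fermi = 1.0 / (1.0 + np.exp(y))

I1 = trapz(y * fermi, y)                           # pi^2/12      ~ 0.822
I2 = trapz(y**2 * fermi, y)                        # 3*zeta(3)/2  ~ 1.803
I3 = trapz(y**2 * fermi / np.sqrt(1.0 + y**2), y)  #              ~ 0.670
```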
The lighter two neutrinos stay
relativistic until $z=0$, despite the instability in the massive neutrino.
Note that this model both keeps the lighter neutrinos stable and has a dark
sector equation of state of $w=-1$ at $z$ near $0$.
\begin{figure}[tbh]
\includegraphics[width=8cm]{mavh2-w.eps}
\caption{Simulation of the equation of state of a three-neutrino hybrid potential system. The kink at $z=200$ occurs when the heaviest neutrino becomes unstable. The remaining two neutrinos continue to be relativistic and stable until $z=0$.}
\label{fig_hybridw1}
\end{figure}

\section{Conclusion and Discussion}


Several authors have found that a nonrelativistic MaVaN neutrino field is unstable
to inhomogeneous fluctuations. By considering the evolution of the acceleron expectation
value
when a neutrino field becomes unstable, we have argued above that all neutrino fields
in the theory are susceptible to a cascaded instability in which they all become
unstable at nearly the same time. This occurs as long as the scalar potential has
a nearly flat dependence on the acceleron and the neutrino masses vary inversely with
the acceleron. Including a very light neutrino is not sufficient to avoid this problem.
Since there are at least three neutrino species, and the atmospheric neutrino
deficit requires at least one mass scale above 1 eV, the instability poses a constraint
on all physical MaVaN models.

There are a number of possible resolutions. One is to increase the curvature of
the scalar potential. In models with a single scalar field, this makes it difficult
for the scalar field potential to form dark energy. However, this is easily remedied
by including a second scalar field. A simple example is illustrated above in the
hybrid MaVaN model.
A similar potential arises naturally in supersymmetric models,
such as those in~\cite{Fardon:2005wc} and~\cite{Takahashi:2005kw}.

Another solution is to modify the theory so that the dark sector never reaches a state
where the adiabatic condition applies. Such models do not suffer from the instability
described above. One such theory is presented in~\cite{Brookfield:2005td}.

Reducing the dependence of the acceleron on the heavy neutrino components also
forms a class of possible solutions. If the acceleron is decoupled from each heavy
component before it becomes unstable, the acceleron expectation value is not
quickly driven to a new scale. The mass of the lighter neutrinos would be largely
unaffected, avoiding the instability.


\section{Acknowledgments}

I would like to thank Ann Nelson for useful conversations and guidance while
working on this project. This work was supported in part by the Department of
Energy.

\section{Introduction}
\label{intro}
Automated facial age estimation is the application of non-manual processes to measure the age of a person by analysing specific facial features with the use of artificial intelligence.
Facial-age-related products are becoming increasingly popular in our daily life. Teenagers and adults use ageing filters, such as those available in Snapchat, for entertainment purposes; these have gone viral over the past years. Such face-ageing techniques could boost searches for wanted criminals or missing people. The usage of facial recognition has increased exponentially. Furthermore, biometric systems are expanding their robustness with the addition of facial-based authentication factors that prevent impersonation attacks, e.g., Apple's Face ID and Android's face recognition technologies.

Facial recognition is a widely-used technology that maps facial features from images to detect faces and recognise the associated identity.
Applications are commonly found in airports, mobile devices and certain web pages~\cite{5406526}. Entertainment venues, alcohol, tobacco and certain social media services require an age verification process. Facial recognition is shaping the future of several security innovations: facial security checks could be used to prevent credit card cloning, unauthorised smartphone access, fraudulent exam takers, fake social media accounts, etc. Facial age detection could also be used to prevent the unauthorised consumption or purchase of certain goods or services. Undocumented criminals may deceive authorities about their age to avoid the judicial system; however, an automated age detector could impede their attempts to bypass the system.

Accurate facial age estimation has long been a difficult task for both human experts and specialised machine learning algorithms. Moreover, the influence of factors such as environment, health habits, lifestyle, makeup, emotions, and uncontrolled lighting hinders the age estimation process~\cite{han2013age}. We have studied the possibility of including artificial intelligence as a means to detect and analyse evidence that may be presented in court. Specifically, we have focused on the improvement of facial age estimation algorithms for the identification of victims/suspects and their application to child sexual exploitation material (CSEM) and child sexual abuse material (CSAM) investigations\footnote{These are the terms recommended by the Luxembourg Guidelines\\ (\url{http://luxembourgguidelines.org/})}. Challenges arise due to the factors previously mentioned, thus hampering age classification accuracy, especially for borderline cases between underage and adult subjects. Due to the nature of courtroom practice, and the necessity of expert testimony, it is neither intended nor anticipated that these AI techniques will fully replace trained investigators.
Rather, this type of investigative aid has the potential to greatly expedite digital forensic analysts in their work, and potentially lower the psychological load of dealing with CSEM material on an ongoing basis.

The usage of deep learning in several fields has become the latest trend: at the end of 2018, Google introduced an AI tool (freely available to non-governmental organisations and industry partners) to assist organisations in detecting and reporting child sexual abuse material online~\cite{Google2018}. With the emergence of AI and its state-of-the-art branches, including computer vision, machine learning and deep learning, age determination has improved significantly. Neural networks learn by processing thousands of images so that they can predict the age of future unseen images at an accuracy that surpasses human facial age perception capacities.

Given the quantity of digital content being created daily, the previous approach of manual evidence analysis is unfeasible~\cite{le2018deeplearningmalware}. However, machines require training to deliver accurate estimations. The training process demands a large volume of labelled data, extensive time, and substantial computational resources to understand the traits present in digital portraits. In a previous study, several age estimation services were evaluated over an age range of 0 to 77 years old~\cite{anda2018evaluating}. That study found that the real culprit behind inaccurate age predictions for minors is the lack of appropriate datasets with adequate age labels. Nonetheless, data collection of underage images is surrounded by ethical and moral concerns. Personally identifiable information, such as name, gender and age, must be handled with care, and the exposure of sensitive information by uploading underage images to the unencrypted Internet can be detrimental.
Conversely, data collection with the appropriate safeguards could assist in finding missing children and detecting previously unknown child abuse material.

Child exploitation investigations are one of the more common investigation types in digital forensic laboratories throughout the world~\cite{ANDA2019S142}. These investigations have become an arduous task due to the increasing usage of anonymisation tools, private P2P networks and cloud-based KVM systems~\cite{farina2015overviewcloudforensics}. Worldwide, the law enforcement and child protection communities have been fighting to diminish CSEM and human trafficking. Automated age detection techniques can be used to reduce work exposure to incriminating archives of indecent images, therefore reducing the psychological ramifications. Such techniques have also been applied to image classification and categorisation according to age, gender, the objects contained therein, and the location in which each image was taken, all of which are useful to CSEM investigators.

\subsection{Contribution of this Work}
\label{contribution}
The contribution of this work can be summarised as:

\begin{itemize}

\item Comprehensive performance evaluation of offline and cloud-based facial recognition models.

\item The development and evaluation of a novel deep learning based underage subject classification model, \texttt{DS13K}, with $N=12792$ images, 80\% for training and 20\% for testing.

\item Significant improvement over individual cloud-based age estimators through the use of ensemble-based approaches for subjects under the age of 18, comparable with expert human estimators.

\end{itemize}

\section{Literature Review/State of the Art}
\label{ageing}

\subsection{Automated Age Estimation}
\label{age_recognition}

The human face can reveal important information, such as gender, approximate age, skin tone, eye colour, hair colour, presence/absence of makeup, presence/absence of beard, presence/absence of
moustache, etc. All these elements are known as soft biometric traits. \citet{Dantcheva2011} defines soft biometric traits as ``physical, behavioural or adhered human characteristics, classifiable in predefined human compliant categories''.

Accurately determining the age of a victim can prove crucial in a CSEM possession and/or distribution case, especially for borderline age ranges between underage teenagers and young adults. The prediction of age as a soft biometric trait has proven to be difficult due to the absence of strong cues that reveal a subject's age. \citet{kloess2017challenges} suggest that discrepancies between the face and body, natural variation between different ethnicities, and the environment that the person is exposed to are factors that affect the age prediction process. The aforementioned research takes into account multiple factors that can lead to the classification of an image both as indecent material and into the respective age group.

The mean absolute error (MAE) and the mean absolute error per age (MAE/A) are the performance metrics used throughout this paper. The former is the average difference between the predicted age and the ground truth; the latter is the MAE grouped by age.

In the past two decades, error rates have decreased remarkably. A MAE of 1.47 was achieved by \citet{ratnayake2014juvenile} in 2014 through an AdaBoost\footnote{AdaBoost is a machine learning boosting algorithm that iteratively builds an ensemble of models~\cite{seiffert2008rusboost}.} fusion of several state-of-the-art classifiers (including Fisher's LDA, Neural Networks, and Support Vector Machines). Nevertheless, this study was executed over a limited private dataset of 50 female images with an age range from 10 to 19, which is indicative of the scarcity of suitable images of this type.
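For concreteness, the MAE and MAE/A metrics defined above can be computed as follows (the ground truth and predictions shown are hypothetical, purely to illustrate the calculation):

```python
from collections import defaultdict

def mae(true_ages, pred_ages):
    """Mean absolute error between ground-truth and predicted ages."""
    return sum(abs(t - p) for t, p in zip(true_ages, pred_ages)) / len(true_ages)

def mae_per_age(true_ages, pred_ages):
    """MAE/A: the MAE computed separately for each ground-truth age."""
    groups = defaultdict(list)
    for t, p in zip(true_ages, pred_ages):
        groups[t].append(abs(t - p))
    return {age: sum(errs) / len(errs) for age, errs in groups.items()}

# Hypothetical ground truth and predictions for six subjects
truth = [15, 15, 17, 17, 21, 21]
preds = [14, 18, 17, 20, 19, 24]

overall = mae(truth, preds)          # 2.0
per_age = mae_per_age(truth, preds)  # {15: 2.0, 17: 1.5, 21: 2.5}
```

MAE/A makes the borderline ages visible individually, which is precisely what an overall MAE averages away.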
In 2011, \citet{luu2011contourlet} were able to obtain a MAE of 4.1 (which has been typical of techniques utilising the FG-NET database). The \textit{contourlet} appearance model used was more accurate and faster at localising facial landmarks than Active Appearance Models. \citet{ferguson2017juvenile} acknowledged poor accuracy for age estimation of juvenile faces by human observation; the influence of the observers' age, sex and occupation was nullified in the outcome. Moreover, female age estimation was more accurate in younger age groups, and male age predictions were more precise after 11 years of age.

\subsection{Transfer Learning}
\label{transfer_learning}

Knowledge transfer, inductive transfer or transfer learning makes use of existing available data to aid learning on the new target data, which is composed of training and testing sets~\cite{Dai:2009:EUF:1553374.1553399}.

The use of transfer learning has been increasing throughout the years, and several researchers have published pretrained models to assist others and spare them the tedious task of training from scratch to solve a specific problem.

Inductive transfer can be beneficial when there is a lack of labelled data, when there are copyright issues, or when data could easily become outdated. In our study, we are attempting to obtain a sufficient quantity of labelled facial age images; however, issues arise due to copyright restrictions, GDPR, and ethical concerns. Therefore, a transfer learning solution is required. In further studies, \citet{dong2016automatic} exploited the transfer learning strategy to train deep convolutional neural networks from pretrained models due to the scarcity of age-labelled face images. Transfer learning is usually expressed through the use of pretrained models, which are simply models created to solve a specific problem and are suitable for re-use.
Less training data is required when successfully transferring a pretrained model to another task.

\subsection{Face Ageing Datasets}
\label{database}

High-quality, large-sample-size facial image datasets annotated with both age and gender are needed to train models capable of accurate age prediction. Several age-annotated datasets have been released, but with certain limitations, such as a lack of images in certain age groups, the presence of noise that reduces the quality of the dataset, and inaccurate age labelling.

IMDB-WIKI is the largest public facial dataset with computer-annotated age and gender labels~\cite{Rothe-IJCV-2016} and has been the subject of hundreds of facial recognition studies. The images were scraped from thousands of celebrities on IMDB\footnote{\url{https://www.imdb.com/}} and correlated with Wikipedia\footnote{\url{https://www.wikipedia.org/}}. The collection is quite considerable, as the figures reach over half a million images; nevertheless, the authors acknowledge that the calculated ages are not entirely accurate. We have corroborated that there is inaccurate age labelling and noise present. Furthermore, we have taken extra care in using these images due to copyright restrictions.

The FG-NET~\cite{fgnet} dataset contains 82 subjects with photographs of each at varying ages, ranging from newborn to 69 years old. Although over 50\% of the images in the FG-NET dataset are child images, the demand for underage training and test data has led to the creation of alternative databases. \citet{grd2016creating} produced a private database in 2016, called ageCFBP, with a wider age range. In the same year, Boys2Men was released as another private database focused on male child images~\cite{castrillon2016boys2men}.

MEDS~\citep{founds2011nist} is a mugshot dataset of deceased male and female subjects with age annotated, but it does not contain images of underage individuals.
The FERET dataset contains around 14,000 images and is pertinent to face detection~\cite{phillips1998feret}. Its age labelling is based solely on human observation.

The OUI-Adience set is a public collection of labelled images obtained from online facial images on Flickr ``in the wild''. Although \citet{eidinger2014age} state that they use a Creative Commons license for their images, we have detected, from a sample of 10,842 images, that 89.55\% are subject to copyright; therefore, we have avoided the use of this dataset. Another dataset that uses Flickr as a source is the Yahoo Flickr Creative Commons 100M (YFCC100M), released in 2014~\cite{thomee2016yfcc100m}. This is the biggest dataset of images and videos publicly available to researchers. Due to the size of the collection and the dataset being distributed solely as metadata (i.e., the photographs need to be downloaded individually from Flickr), the database is constantly evolving.

For our studies, a hybrid dataset was created from a variety of those available (IMDB, WIKI, FG-NET, MEDS) using the dataset generator software published by \citet{anda2018evaluating}.

\section{Existing Tools and Models}
\label{existing}

In this section, the current tools for age estimation are classified into two categories: offline and online. For the former, the tools are associated with pretrained models, where the architecture is known and the training dataset may or may not be shared.
For the latter, the tools are hosted as cloud services, and the architecture of the neural network and the training dataset are generally unknown.

The main advantage of offline pretrained models is that they are usually shared by researchers in frameworks such as Caffe\footnote{\url{https://caffe.berkeleyvision.org/}}, Caffe2\footnote{\url{https://caffe2.ai/}}, Keras\footnote{\url{https://keras.io/}} or Pytorch\footnote{\url{https://pytorch.org/}}, and thus have no cost. Online tools, on the other hand, are associated with machine learning as a service and require a payment per transaction, but they are much easier to invoke; no installation is required and less local computational power is used.

The age and gender classification model using Convolutional Neural Networks (CNNs) is an offline model that was built on the Adience dataset and released in 2015~\cite{levi2015age}. This pretrained model consists of a CNN architecture that was adapted to work even though the amount of learning data was scarce. Similarly, the ranking CNN for age estimation model was released in 2017 and is also an offline model available in the Model Zoo\footnote{\url{https://modelzoo.co/model/using-ranking-cnn-for-age-estimation}}. This model contains a series of basic CNNs that were fine-tuned from the base network trained on the Adience dataset. Each basic CNN produces a binary output, and these outputs are aggregated into the final age prediction~\cite{Chen_2017_CVPR}.


According to Economy Watch in 2010 \cite{watch2010us}, Amazon acquired ``Rekognition'' from an artificial intelligence start-up company from California called Orbeus.
The company had developed facial recognition software that detected traits in images using a library based on Artificial Neural Networks (ANNs). ANNs are computing systems that learn to accomplish tasks by observing examples rather than executing a specific algorithm; they are structured as an initial input layer of neurons, one or more hidden layers, and a final layer of output neurons \\cite{wang2003artificial}.\n\nThe Kairos service has been used for age prediction and face detection; however, according to~\\citet{anda2018evaluating}, its age estimation performance lagged behind the rest of the classifiers included in that study. In contrast, Microsoft Azure Machine Learning is a fully managed cloud service powered by a considerable number of machine learning algorithms, aimed at scientists, data analysts and developers~\\cite{mund2015microsoft}. According to \\citet{Weber2016}, Microsoft Azure Cognitive Services uses multi-layered deep learning technology and is among the top performers for age estimation. Finally, DEX has featured in hundreds of studies in fields such as computer vision, deep learning, face recognition and age estimation. Its huge dataset of over half a million subjects has been used by several researchers, and the model has been trained in multiple frameworks, such as Caffe and Keras\\footnote{\\url{https:\/\/github.com\/yu4u\/age-gender-estimation}}.\n\nGoogle has not yet released a fully-fledged age estimation service based on image analysis to the public. The Google Vision Cloud API includes facial recognition and facial landmark features, but these only allow subjects to be categorised as minors or non-minors, alongside safe-search capabilities such as the recognition of adult content. 
It could be suggested that the Google tool to assist organisations in detecting and reporting child sexual abuse material online, previously mentioned in Section~\\ref{intro}, is a combination of both the minor\/non-minor detector and the adult content detector.\n\nFinally, How-old.net is an application linked to the Microsoft cognitive services and part of Microsoft's \\textit{Project Oxford}. In recent years, the tool went viral on social media and was used mainly for entertainment. Today, it can be used to identify underage subjects in images with fairly high accuracy, as shown in our study.\n\n\\section{Dataset Curation for Performance Evaluation}\n\\label{curation}\n\nIn order to perform unbiased experimentation with the four services identified, it was necessary to construct a balanced dataset. Thus, we ensured that an equal number of images was collected for each age. The dataset generator proposed in~\\cite{anda2018evaluating} was used, and additional modules for the datasets discussed in this section were implemented\\footnote{\\url{https:\/\/bitbucket.org\/4nd4\/image_database}}.\n\nBecause the focus of this paper is on the boundary between minority and adulthood, older ages were not considered. Thus, the dataset was limited to an age range of 0 to 25 inclusive. For this dataset, 492 images per age were collected. For younger ages, this quantity of images was not available in existing public datasets, requiring the incorporation of additional manually discovered images. This was achieved by collecting images from Flickr\\footnote{Appropriate ethical approval was awarded for this data gathering process from our research institution (University College Dublin)}. Only photos that were available under an appropriate Creative Commons or Public Domain license, and for which accurate age and gender information were available, were considered. The latter information was taken from metadata, such as photo titles, descriptions, or tags. 
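The per-age balancing step described above can be sketched as follows. This is a minimal illustration, assuming the images are held as simple (path, age) pairs; the actual dataset generator's record structure and sampling strategy may differ.

```python
import random
from collections import defaultdict

def balance_by_age(records, per_age=492, ages=range(0, 26), seed=0):
    """Keep at most `per_age` randomly chosen images for each age.

    `records` is an iterable of (image_path, age) pairs; this field
    layout is illustrative, not that of the actual generator.
    """
    by_age = defaultdict(list)
    for path, age in records:
        if age in ages:
            by_age[age].append(path)
    rng = random.Random(seed)
    balanced = {}
    for age in ages:
        pool = list(by_age[age])
        rng.shuffle(pool)
        balanced[age] = pool[:per_age]
    return balanced
```

With 26 age classes of 492 images each, such a procedure yields the 12,792-image dataset size used in this study.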
Other images were included from the UTKFace Dataset~\\cite{zhifei2017cvpr}. IMDB and WIKI photos were largely avoided, though a small proportion was still used. This dataset was used for the experiment described in Section~\\ref{sec:exp1}. Each image is a single frontal face that was cropped and aligned with DLIB\\footnote{C++ toolkit containing machine learning algorithms \\url{http:\/\/dlib.net\/}.} to a dimension of 200~x~200 pixels. Each image was processed by a face detector: either the DLIB library (using Histograms of Oriented Gradients or Convolutional Neural Networks) or the face detection provided by each service discussed in Section~\\ref{existing}. Initially, we had a collection of 15,000 images, but this figure decreased because faces could not be detected in some images. In order to maintain a balanced dataset, the images were reduced to 492 per class; hence, we limited the dataset to a total size of 12,792.\n\n\\section{Experiments and Results}\n\\label{results}\n\nThree experiments were conducted and the MAE was calculated with the formula depicted in Equation~\\ref{eq:1}; the results are presented in the subsections that follow. The first experiment, discussed in Section~\\ref{sec:exp1}, focused on the wider age range from 0 to 25 years old, to evaluate and compare the four individual services: How-Old.net, AWS, DEX, and Azure. In addition to these services, our own deep learning model, \\texttt{DS13K}, was created. The second experiment involves the evaluation of DS13K. The model performance reached an accuracy of 55.38\\%, placing it among the top three performers, after the Bagging Regressor and the Gradient Boosting Regressor. The model is described in Section~\\ref{sec:exp2}. The final experiment introduces ensemble machine learning techniques to establish whether these can improve upon the performance of the four systems. 
This is presented in Section~\\ref{sec:exp3}.\n\n\\begin{equation} \\label{eq:1}\nMAE = \\frac{1}{n} \\sum_{i=1}^{n} \\left| \\mathit{predicted}_i - \\mathit{real}_i \\right|\n\\end{equation}\n\n\\subsection{Underage Range Estimation}\n\\label{sec:exp1}\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.82\\textwidth,trim={0 0 0 100 cm},clip]{line_plot}\n \\caption{Average Estimated Age from each Service Compared with Actual Age.}\n\\label{fig:underage_service}\n\\end{figure*}\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.82\\textwidth,trim={0 0 0 100 cm},clip]{bar_plot}\n \\caption{Mean Absolute Error per Age by Service.}\n\\label{fig:bar_plot}\n\\end{figure*}\n\nThe evaluation for the first experiment focused on subjects aged 0 to 25. The results of the evaluation are shown in Figure~\\ref{fig:underage_service}, with the average predicted age for each service plotted against the subjects' actual ages. The MAE for each service can be seen in Figure~\\ref{fig:bar_plot} and the average MAE for underage subjects is presented in Table~\\ref{table:mae_underage}.\n\nFrom these figures, it can be seen that Amazon Rekognition performs best overall. Although it has a slight tendency towards underestimation up to the age of 12, it maintains its accuracy in older age groups better than Azure and How-Old.net, whose predictions gradually deviate from the real age between the ages of 10 and 22. These three services show similar accuracy for the youngest subjects below the age of 12.\n\nIn contrast, DEX's pretrained model fails to accurately classify the younger samples. However, from 17 to 21 years old (in the crucial underage\/adulthood boundary zone), it performs better than the rest of the models. 
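The MAE of Equation~\\ref{eq:1}, along with the per-age breakdown used for Figure~\\ref{fig:bar_plot}, can be computed as in the following sketch (illustrative helper functions, not the actual evaluation code):

```python
from collections import defaultdict

def mean_absolute_error(predicted, real):
    """MAE as in Equation (1): mean absolute deviation in years."""
    return sum(abs(p - r) for p, r in zip(predicted, real)) / len(real)

def mae_per_age(predicted, real):
    """Average the absolute errors separately for each true age,
    as in the per-age bar plot."""
    errors = defaultdict(list)
    for p, r in zip(predicted, real):
        errors[r].append(abs(p - r))
    return {age: sum(errs) / len(errs) for age, errs in errors.items()}
```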
DEX's failure on the younger samples is likely due to a lack of sufficient images of very young subjects in the data used to train the Deep Expectation model, and is the primary reason why DEX's overall MAE is higher than the others'.\n\nIn terms of overall MAE for underage subjects, the AWS biometric detector service performed better than the rest of the services, with a MAE of 3.349, as shown in Table~\\ref{table:mae_underage}. Although AWS outputs each prediction as a range with a high and a low value, we found that the low value was the closest to the real age. AWS performs strongly across the majority of age ranges; in fact, it is among the two best performers for every age. It is also observed that only DEX and AWS underestimated the subjects' ages at any point, while the remaining services overestimated the values almost throughout the entire age range.\n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{|c | c|} \n \\hline\n \\textbf{Service} & \\textbf{MAE} \\\\ [0.5ex] \n \\hline\\hline\\hline\nAmazon Rekognition\t&\t3.349\t\t\\\\\nHow-Old.net\t\t\t&\t5.281\t\t\\\\\nMicrosoft Azure\t\t&\t5.347\t\t\\\\\n(D)eep (EX)pectation&\t6.936\t\t\\\\\t[1ex]\n \\hline\n\\end{tabular}\n\\caption{Mean Absolute Error for Underage Images per Service.}\n\\label{table:mae_underage}\n\\end{table}\n\n\\subsection{Development of a Deep Learning Model for Age Estimation (DS13K)}\n\\label{sec:exp2}\n\nThe DEX model, previously mentioned in Section~\\ref{existing}, was built on a VGG-16 architecture. For the development of our model, transfer learning was used; our DS13K model was fine-tuned on DEX in order to take advantage of the preexisting layer weights. Furthermore, the 12,792 images used for training and testing (80\\% and 20\\% respectively) came from sources described in Section~\\ref{curation}. 
Each input image was resized to 224 x 224 pixels, and the output layer had a size of 5 (a multi-class classifier), with each output mapped to one of the following age range classes: [0-5], [6-10], [11-15], [16-17] and [18-25]. The ranges were adapted from the ``Criminal networks involved in the trafficking and exploitation of underage victims in the European Union'' 2018 report\\footnote{\\url{https:\/\/www.europol.europa.eu\/publications-documents\/criminal-networks-involved-in-trafficking-and-exploitation-of-underage-victims-in-eu}}, which indicates that the classification of subjects into one of these age ranges is sufficient, and that precise age estimation is not crucial for investigators. \n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.35\\textwidth]{averageFaces}\n \\caption{Average Faces of DS13K Subjects between 16 to 17 Years Old.}\n\\label{fig:average_face}\n\\end{figure}\n\nTo visually inspect the input to the model, each age class was split into two halves and the average faces were calculated, as depicted in Figure~\\ref{fig:average_face}. The accuracy per age group, as well as the average accuracy per service, is shown in Table~\\ref{table:approach1b}, where the best-performing figure for each age range is highlighted in bold. DS13K has the best average performance, followed closely by AWS. In the key [16-17] age range, the accuracy of DS13K was substantially higher than the other services, with 68\\% of subjects in this range being successfully classified. The second-highest accuracy for this range was AWS with 15\\%. As illustrated previously in Figure~\\ref{fig:underage_service}, all the other services tend to overestimate age for subjects in this range, which would lead to underage victims being classified as adults. 
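The transfer-learning setup described above can be sketched as follows, assuming a Keras implementation. The weight file name, the number of frozen layers and the optimiser settings are illustrative assumptions, not the actual DS13K configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

AGE_CLASSES = ["0-5", "6-10", "11-15", "16-17", "18-25"]

# VGG-16 backbone; in practice the weights would be initialised from
# the pretrained DEX model rather than left uninitialised.
base = keras.applications.VGG16(include_top=False,
                                input_shape=(224, 224, 3), weights=None)
# base.load_weights("dex_vgg16_weights.h5")  # hypothetical DEX weight file
for layer in base.layers[:-4]:  # freeze all but the last convolutional block
    layer.trainable = False

x = layers.Flatten()(base.output)
x = layers.Dense(512, activation="relu")(x)
outputs = layers.Dense(len(AGE_CLASSES), activation="softmax")(x)

model = keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the early layers preserves the generic facial features learned by DEX, while the new 5-way softmax head is trained for the age-range classes.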
The tendency of these services to overestimate age is also the primary reason why their accuracy in the top age range [18-25] is higher.\n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{|c|c|c|c|c|c|} \n \\hline\n\\textbf{Range} & \\textbf{AWS} & \\textbf{Azure} & \\textbf{DEX} & \\textbf{DS13K} & \\textbf{How Old} \\\\\n&&&& (our approach) & \\\\\n \\hline\\hline\n0-5\t\t&\t\\textbf{0.88} & 0.69 &\t0.00 & 0.77\t&\t0.78\t\\\\\n6-10\t&\t0.43 & \\textbf{0.66} &\t0.13 & 0.44\t&\t0.49\t\\\\\n11-15\t&\t\\textbf{0.40} & 0.15 &\t0.25 & 0.16\t&\t0.24\t\\\\\n16-17\t&\t0.15 & 0.00 &\t0.17 & \\textbf{0.68}\t&\t0.03\t\\\\\n18-25\t&\t0.87 & \\textbf{0.97} &\t0.89 & 0.70\t&\t0.95\t\\\\\n\\hline\n\\hline\nAVG\t&\t0.550\t&\t0.496\t&\t0.293\t&\t\\textbf{0.553}\t&\t0.503\\\\\t[1ex]\n\\hline\n\\end{tabular}\n\\caption{Accuracy per Group per Service.}\n\\label{table:approach1b}\n\\end{table}\n\n\nDue to these results, and the promising figures for the [16-17] age range, which is of particular interest to us because of its proximity to the borderline of adulthood, we decided to include the model in the ensemble approach experiment discussed in the next section.\n\n\\subsection{Comparison with Ensemble Learning Techniques} \\label{sec:exp3}\n\nThe third experiment was intended to investigate whether Machine Learning (ML) ensemble techniques can be used to improve on the performance of the existing systems beyond that of each individually. Ensemble techniques are generally defined as those that combine the results of several individual ML algorithms. Given that the existing systems all rely on ML technology, any combination of their results constitutes an ensemble approach. Because the aim of the activity is to compute a predicted age for each subject, regression techniques were considered for this task.\n\nThree standard regression techniques were chosen, namely a logistic regression, gradient boosting and a bagging regressor. 
These were chosen after observing the results of a number of other regression techniques on this problem. To calculate predicted ages for all of the subjects in the dataset, 10-fold cross validation was used. Here, 90\\% of the dataset is used for training, with the regressors tasked with predicting ages for the remaining 10\\%. The training data consisted of the predicted ages for each subject image provided by five systems: AWS, How-Old.net, Azure, DEX and DS13K. This process is repeated 10 times so that the predictions are computed for the entire dataset.\n\nTo evaluate this experiment, the results of the regression output were compared to each of the five input systems. This comparison was conducted in two ways: firstly the overall MAE was calculated for each technique, and following this the classification accuracy was calculated for the same age ranges used in the previous section. The MAE for each technique across the entire age range [0-25] is shown in Table~\\ref{tab:0-25_MAE}.\n\n\\begin{table}[!htb]\n\\centering\n\\begin{tabular}{|c|c|}\n\\hline\n\\textbf{Method} & \\textbf{MAE} \\\\\n\\hline\\hline\\hline\n\n\\textbf{GradientBoostingRegressor} & \\textbf{2.425} \\\\\n\\textbf{BaggingRegressor} &\t\\textbf{2.623} \\\\\n\\textbf{LogisticRegression} &\t\\textbf{3.120} \\\\\nAWS \t\t&\t3.349 \\\\\nDS13K & 3.964 \\\\\nHow-Old.net & 5.281 \\\\\nAzure \t\t&\t5.347 \\\\\nDEX \t\t&\t6.936 \\\\\n\n\\hline\n\\end{tabular}\n\\caption{Mean Absolute Error Rates for the 0-25 Age Range.}\n\\label{tab:0-25_MAE}\n\\end{table}\n\nThis table indicates that the three regression algorithms employed achieve a lower MAE than the individual systems. This is an interesting result in that it demonstrates that the off-the-shelf regression models that were used reduce the age estimation error when compared with the individual systems. 
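The stacking procedure just described can be sketched with scikit-learn as follows. The feature matrix holds each system's per-image age prediction; the names and default hyperparameters are illustrative (note that, strictly speaking, logistic regression is a classifier over integer ages rather than a regressor):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def out_of_fold_ages(X, y, n_folds=10):
    """X: (n_images, 5) matrix of ages predicted by AWS, How-Old.net,
    Azure, DEX and DS13K; y: true ages. Returns out-of-fold
    predictions, so each image's age is always predicted by a model
    that was not trained on that image."""
    models = {
        "gradient_boosting": GradientBoostingRegressor(random_state=0),
        "bagging": BaggingRegressor(random_state=0),
        "logistic": LogisticRegression(max_iter=1000),
    }
    return {name: cross_val_predict(m, X, y, cv=n_folds)
            for name, m in models.items()}
```

`cross_val_predict` with `cv=10` implements exactly the train-on-90\%, predict-on-10\% rotation described above.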
This strongly motivates further research into regression techniques as a promising method of reducing error rates for the facial age estimation problem. Given that the various systems have different performance characteristics across the age range (as evidenced by the results from Section~\\ref{sec:exp1} in particular), these regression models can learn the characteristics of each in order to reduce this effect when combining their outputs.\n\nGiven that regression techniques do have a lower error rate than the other approaches within this age range, it is subsequently of interest to determine whether their use is also motivated by their performance on the age-range classification task. The accuracy of the regression techniques was therefore also calculated with the images divided into age ranges. This did not require a separate experiment to be run; rather, an alternative evaluation of the same predictions was conducted. For this evaluation, the important consideration was whether the specific age predicted by the regressor was within the correct age range for each subject. 
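This range-based evaluation can be sketched as follows (illustrative helpers; here the regressor's fractional prediction is rounded to the nearest integer age before binning, which is an assumption about the exact procedure):

```python
AGE_RANGES = [(0, 5), (6, 10), (11, 15), (16, 17), (18, 25)]

def age_to_range(age):
    """Map a (possibly fractional) age to its range, or None if outside 0-25."""
    age = int(round(age))
    for low, high in AGE_RANGES:
        if low <= age <= high:
            return (low, high)
    return None

def range_accuracy(predicted_ages, real_ages):
    """Fraction of subjects whose predicted age lands in the correct range."""
    hits = sum(age_to_range(p) == age_to_range(r)
               for p, r in zip(predicted_ages, real_ages))
    return hits / len(real_ages)
```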
The accuracy of each regressor for each age range is presented in Table~\\ref{table:ensemble_approach}, and compared with the underlying input systems in Figure~\\ref{fig:bar_age_range}.\n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{|c|c|c|c|} \n \\hline\n\\textbf{Range} & \\textbf{Logistic} & \\textbf{Gradient} & \\textbf{Bagging} \\\\\n& \\textbf{Regression} & \\textbf{Boosting} & \\textbf{Regressor}\\\\ [0.5ex] \n \\hline\\hline\n0-5\t&\\textbf{0.734}\t\t&\t0.703 & 0.707\t\\\\\n6-10 & 0.575\t&\t\\textbf{0.665} &\t0.553\t\\\\\n11-15 & 0.432\t&\t0.391 &\t\\textbf{0.441}\t\\\\\n16-17 & 0.006\t&\t\\textbf{0.609} &\t0.428\t\\\\\n18-25 & \\textbf{0.867}\t&\t0.684 &\t0.713\t\\\\\t\n\\hline\n\\hline\nAVG &0.523\t& \\textbf{0.611}\t&0.569\\\\[1ex]\n \\hline\n\\end{tabular}\n\\caption{Ensemble Approach Accuracy for Underage Subjects.}\n\\label{table:ensemble_approach}\n\\end{table}\n\n\n\\begin{figure*}[h!]\n \\centering\n \\includegraphics[width=0.98\\textwidth,trim={0.2cm 0.4cm 0.2cm 53},clip]{Age_range}\n \n \\caption{Performance vs Age Group.}\n\\label{fig:bar_age_range}\n\\end{figure*}\n\nFrom these, it can be seen that the logistic regression, while achieving an overall MAE better than the underlying systems, does not exhibit a promising pattern in terms of the age ranges. Its accuracy in the key 16-17 age range is below that of almost all other approaches. In contrast, the Gradient Boosting and Bagging approaches both show positive results in this range, with both achieving higher accuracy than the four third-party services that were used.\n\nFor underage subjects, the accuracy rates of AWS, How-Old.net and Azure decrease through successive age ranges, in contrast to the adult range [18-25]. 
It can be observed in Figure~\\ref{fig:bar_age_range} that most online services have trouble classifying images in the core [16-17] bracket, but that both the Gradient Boosting and Bagging ensemble approaches and the DS13K model have much better accuracy in this range.\n\nGiven the results in the previous sections, it is unsurprising that AWS, How-Old.net and Azure have the poorest performance for underage subjects near the borderline. In Section~\\ref{sec:exp1}, they were shown to generally overestimate a subject's age in this range, thus frequently misclassifying them as adults. Furthermore, the results in Section~\\ref{sec:exp1}, specifically Figure~\\ref{fig:bar_plot}, indicate that their MAE\/Year is greater in the region from 13 to 19 years of age in the dataset. Unsurprisingly, the classification accuracy decreases as underage subjects' ages approach the cut-off point of 18. For 17-year-old subjects, DEX's MAE\/Year is the lowest, meaning that its performance is better for that particular age than the rest of the services, whereas Azure has the worst performance among them. Their tendency to overestimate ages results in higher accuracy figures for overage subjects. An 18-year-old is very rarely (less than 10\\% of the time) misclassified as being underage. \n\nOn the other hand, the accuracy of the regression models is much higher than for the underlying systems when averaged over the age ranges. Overall, the Gradient Boosting approach shows the best results. Even for 17-year-old subjects, it performs better than the rest of the ensembles, though it fails to beat the DS13K model. \n\nOne notable finding is that the ensemble approaches have lower accuracy for subjects aged 18 and over. This is partially due to the tendency of the underlying systems to overestimate ages, which will naturally lead to high accuracy for overage subjects in the highest age bracket. 
However, the accuracy of the regression models for overage subjects is far in excess of the accuracy figures that the underlying systems achieve for underage subjects. This is closely related to their overall lower error rates within this age range.\n\nWhen evaluating this result, it is also important to keep in mind the use cases for these technologies. Arguably, the consequences of misclassifying a younger subject as being overage are much more serious than the opposite scenario. If these systems are to be used in a forensic scenario to automatically identify potential victims of child abuse, it is important that such victims are not missed by these systems. Wrongly classifying a youngster as being older may result in a case not coming to the attention of investigators. In contrast, erroneously classifying an older subject as being younger may result in wasted investigator effort examining a situation that is ultimately non-criminal. There is a strong argument to be made that the latter event is much less serious. Even in this scenario, a false positive classification of an adult subject as being underage would trigger a manual evaluation, thus placing investigators in the same position as if the technology were not used.\n\nHowever, given the multi-year backlog in conducting digital forensic investigations in many jurisdictions~\\cite{lillis2016challenges}, clearly an approach that improves accuracy overall is desirable. While the results presented in this section show great promise, it is clear that further work is required to improve the performance of facial age identification if it is to be adopted on a wide scale as part of digital forensic investigators' toolkits.\n\n\\section{Concluding Remarks}\n\\label{conclusion}\n\nThe four services evaluated in this study were Amazon Rekognition (AWS), Microsoft Azure, Deep Expectation (DEX), and How-Old.net. 
Initial evaluation results on the age range 0 to 25 years indicated that AWS had the overall lowest error rate, followed by How-Old.net; however, the ages that surround the borderline between minority and adulthood (considered to be 18 for this study) were found to follow a different pattern, where DEX surpassed the performance of AWS and Azure. Furthermore, an additional model named DS13K, based on VGG-16, was trained for this task. This achieved the highest accuracy for the borderline age range (16-17) when compared to the four other systems. Experiments on this dataset indicated that ensemble approaches based on regression substantially outperformed the four systems used for this test, both in terms of mean absolute error and the task of classifying subjects into appropriate age ranges. Gradient Boosting and Bagging Regressor approaches outperformed the best individual system (DEX) for the key borderline range (16-17) by over 40\\%. This result offers a strong argument in favour of the proposition that ensemble learning has great potential in improving the precision of facial age determination.\n\nOverall, even off-the-shelf regression techniques have been demonstrated to improve upon the performance of commercial offerings, by combining their outputs effectively. This offers a motivation for further work on bringing AI-based techniques to bear on this and other digital forensic challenges.\n\n\\subsection{Future Work}\n\\label{future}\n\nOur aim is to investigate how to aid digital forensic cases with automated machine learning based techniques. Our objective is to expand this study further through comparative analysis of additional services. 
We have identified a need for higher-volume datasets for child face recognition to improve our models; once we have collected a sufficiently large dataset with the relevant tags, we will re-train a model specifically for underage images, which could help enhance not only age prediction services but also other tools that require the identification of child exploitation material.\n\n\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nAs the LHC programme gets underway, it is timely to examine the status\nof our QCD tools.\nOne may ask whether they have reached the degree of sophistication\nthat we expected of them at this stage.\nIt is quite an achievement for the community to be able to say that\nthe answer to this question is ``yes''.\nA second question is whether our QCD tools have achieved the degree of\nsophistication that is necessary for fully exploiting the LHC over the\nyears to come.\nHere the answer will be more nuanced: there is certainly still ample\nscope for progress.\n\nTo provide a context for our discussion of QCD, it is perhaps worth\nbriefly recalling some of the roles played by QCD at a hadron collider\nwhose key design aims are not to study QCD, but rather to discover the\nHiggs boson and search for physics beyond the standard model.\n\nThere are essentially two ways of making discoveries at the LHC.\nOn one hand, an experiment may measure some kinematic distribution and\nsee a discrepancy relative to the standard-model expectation.\nIt can only be labelled a discovery if one has sufficient confidence\nin the standard model prediction, inevitably involving many aspects of\nQCD, such as parton distribution functions, matrix elements, parton\nshowers, etc.\\footnote{This is true even with many ``data-driven''\n  methods for estimating backgrounds, since they often rely on\n  QCD-based extrapolations from the region where one has a good\n  measurement of the background to 
that where one suspects the\n  presence of the signal.}\nAlternatively, discovery may come through the observation of a\ndistinct kinematic structure, such as an invariant mass peak (or edge,\nin the presence of unmeasured particles).\nAt first sight, QCD might have less of a role to play here; however,\nan understanding of how QCD works can make it possible to reduce the\nbackgrounds, and sharpen the kinematic structure of the signal, allowing\nit to emerge more convincingly.\nFurthermore, as and when discoveries are made, QCD will also be crucial in\nextracting information about the new objects that have been found:\ntheir couplings, masses, spins, etc.\n\nIn these proceedings we will examine several areas of perturbative QCD\nthat have seen major milestones in the past year or two. The first\nsuch area is that of Monte Carlo event generators.\n\n\\section{Parton-shower event generators (Monte Carlos)}\n\nIt is almost inconceivable to think of the LHC experiments working\nwithout Monte Carlo (MC) programs such as\nPythia~\\cite{Sjostrand:2006za}, Herwig~\\cite{Corcella:2000bw} and\nSherpa~\\cite{Gleisberg:2003xi}, which output detailed simulated pp\ncollision events.\nThe immense preparatory effort for the LHC would not have been\npossible without these tools, be it for the investigation of physics\npotentialities, or the simulations of detector response;\nnearly all of the data shown by LHC experiments at ICHEP 2010 (and\nsince) have been accompanied by comparisons to MC simulations, most\noften in amazingly good agreement;\nand as the experiments move towards producing results at ``particle\n(hadron) level'', i.e.\\ corrected for detector effects, MC simulations\nwill always be central in determining those corrections.\n\n\nThe core of the code base for two of the main MC tools, Pythia (v6)\nand Herwig, dates to the 1980's (early versions of Sherpa also used\nportions of Pythia code) and is written in Fortran~77, a language that\nstrains to adapt to the sophistication 
that these programs have\nreached today.\nThis prompted an effort across the Monte Carlo community, initiated\nalmost a decade ago, to rewrite the programs in \\texttt{C++}\\xspace.\nAside from the magnitude of the task of rewriting $60-80,000$ lines of\ncode, many questions of \\texttt{C++}\\xspace design needed to be thought through\ncarefully, to ensure that the new code remained maintainable over the\nlifetime of the LHC.\nOne of the milestones of the past couple of years is that, coinciding\nwith the start of the LHC, the new versions of these programs,\nPythia~8.1~\\cite{Sjostrand:2007gs}, Herwig++~2.4~\\cite{Bahr:2008pv}\nand Sherpa~1.2~\\cite{Gleisberg:2008ta}, are now available and mature\nenough for production use, including all core features needed for\ncomplete hadron-collider analyses, such as simulation of the multiple\ninteractions.\n\nThe work towards the \\texttt{C++}\\xspace generators has not simply been a question of\nrewriting old code in \\texttt{C++}\\xspace (for a comprehensive review, see\nref.~\\cite{Buckley:2011ms}).\nFor example, Pythia has acquired a new $p_t$ ordered shower as its\ndefault~\\cite{Sjostrand:2004ef} (the old Fortran virtuality-ordered\nshower is no longer\navailable in the \\texttt{C++}\\xspace version); it also includes numerous developments\nrelated to multiple interactions, e.g.~\\cite{Corke:2010yf} and its\nmodularity has already been exploited to allow the inclusion of an\nalternative shower~\\cite{Giele:2011cb}.\nHerwig has updated its angular-ordered shower~\\cite{Gieseke:2003rz},\nincluding better treatment of massive particles, and it now\nincorporates a native multiple interaction model~\\cite{Bahr:2009ek}.\nSherpa did not have a corresponding Fortran version, however it has\nseen a number of significant developments in the past couple of years,\nmost notably the switch to a dipole shower, and efficient multi-leg\nmatrix elements (COMIX~\\cite{Gleisberg:2008fv}, used together with\nCKKW \\cite{Catani:2001cc} 
matching to the parton shower).\nOther progress in the generators includes more extended BSM support\nand inclusion of NLO corrections for a broad variety of processes (as\ndiscussed below).\n\nOverall, it is time for this new generation of codes to undergo\nextensive stress testing by the experiments, the last major step on\nthe way to their becoming the Monte Carlo workhorses for the duration\nof the LHC.\n\n\n\n\\section{The NLO revolution}\n\nWhile Monte Carlo event generators give fine-grained predictions\nabout QCD final states, it is not always simple to systematically\nimprove their accuracy.\nThe most straightforward systematically improvable calculational\napproach of QCD at high energies is to use a perturbative\napproximation, involving a series expansion in the strong coupling\n$\\alpha_{\\text{\\sc s}}$, i.e.\\ cross sections are written $\\sigma = c_0 + c_1 \\alpha_{\\text{\\sc s}} + c_2\n\\alpha_{\\text{\\sc s}}^2 + \\ldots$, so that an improvement in accuracy is obtained\n``just'' by calculating one further coefficient in the series.\n\nAt the momentum scales of relevance for LHC, $\\alpha_{\\text{\\sc s}} \\simeq 0.1$ and one\nwould expect a leading order (LO) calculation, one that includes just\nthe first non-zero term of the series, to be accurate to within about\n$10\\%$.\nYet widespread experience shows that this is seldom the case, with\nnext-to-leading order (NLO) corrections often modifying cross sections\nby a NLO\/LO ``$K$ factor'' ratio of two (for example for Higgs\nproduction~\\cite{Dawson:1990zj,Djouadi:1991tka,Harlander:2002wh} or\n$Wb\\bar b$~\\cite{Ellis:1998fv,FebresCordero:2006sj}).\nIn some situations, in which a new channel opens up at NLO,\n$K$-factors can be much larger, even\n$\\order{100}$~\\cite{Bauer:2009km,Denner:2009gj,Rubin:2010xp}.\nThese NLO enhancements are potentially important because, for example,\nin searches for supersymmetry the ``signal'' of supersymmetry is often\njust a factor $\\order{5}$ excess of the 
data over the expected\nbackground~(e.g.~\\cite{ATLAS:2011hh}), the latter nearly always being\ncalculated at LO.\nHow, then, do we determine whether an excess of data compared to LO is\nan actual signal or simply a background with an unexpectedly large\n$K$-factor that has yet to be calculated?\n\nPart of the answer is that experiments attempt to constrain the\n$K$-factor in regions of phase-space expected to be signal-depleted. \nHowever, extrapolations to possible signal regions still usually\ninvolve LO tools,\\footnote{An interesting distinction here is between\n  simple LO, and matched matrix-element plus Monte Carlo samples\n  involving multiplicities that go beyond the strictly LO\n  process~\\cite{deAquino:2011ix}, which can account for the appearance\n  of new higher-order channels and reproduce some NLO $K$-factors.}\nand the known cases with the largest $K$-factors usually also lead to\nstrong kinematic dependence of the NLO correction.\nIn such situations, therefore, it would be reassuring to have an\nactual NLO calculation.\nThe difficulty is that many new physics signals involve quite complex\nbackgrounds.\nFor example, in pair production of gluinos, each gluino might decay to\nan anti-quark and a squark, with each squark decaying to a quark and an\n(invisible) neutralino, which gives missing energy.\nOne of the backgrounds in this case is then four-jet production in\nassociation with a $Z$-boson that decays to neutrinos, which is too\ncomplex a process for there to be any NLO calculations of it yet.\n\nOne way to quantify the difficulty of a NLO calculation is in terms of\nthe total number of outgoing ``legs'' (partons and electroweak bosons\nall count as legs).\nThe first NLO calculation was for a $2\\to1$ process, Drell-Yan\nproduction, in 1979~\\cite{Altarelli:1979ub}.\nIt took almost ten years before any $2\\to2$ processes were calculated,\nwith several results appearing in the late 1980's and early 90's\n(e.g.\\ 
heavy-quark\npairs~\cite{Nason:1987xz,Altarelli:1988qr,Beenakker:1988bq}, dijet\nproduction~\cite{Aurenche:1986ff,Aversa:1988vb}, and vector-boson plus\njet~\cite{Arnold:1989ub,Giele:1993dj}).\nAnother ten years passed before a $2\to3$ process was calculated, with\n$Wb\bar b$ in 1998 \cite{Ellis:1998fv} and 3-jet and $W$+2-jets\ncalculated a couple of years later~\cite{Nagy:2001fj,Campbell:2002tg}.\n\nGiven the motivation from the expected startup of LHC, at this point\nan almost industrial effort got underway to calculate all $2\to3$\nprocesses of interest for LHC and to open the frontier towards $2\to\n4$ processes (bearing in mind that the background we mentioned above\nwas a $2\to5$ process), guided by a document known as the Les Houches\nwishlist~\cite{Bern:2008ef}.\nRoughly in line with the rule-of-thumb of a 10-year interval for\ncalculating an extra leg,\nthe first $2\to4$ calculations have appeared in the past couple of\nyears: $W$+3\,jets~\cite{Ellis:2009zw,Berger:2009ep}, $Z$+3-jets\n\cite{Berger:2010vm}, $t\bar t b\bar b$\n\cite{Bredenstein:2010rs,Bevilacqua:2009zn}, $t\bar\nt$+2-jets~\cite{Bevilacqua:2010ve},\n$W^\pm W^\pm$+2-jets~\cite{Melia:2010bm}, $WWb\bar\nb$~\cite{Denner:2010jp,Bevilacqua:2010qb}, with progress also on $b\bar b b\bar\nb$~\cite{Binoth:2009rv} (and a result for\n$e^+e^-\to$5-jets~\cite{Frederix:2010ne}).\n\nWhile some of these results were obtained with traditional Feynman\ndiagrammatic\nmethods~\cite{Bredenstein:2010rs,Denner:2010jp,Binoth:2009rv}, the\nremaining ones have taken advantage of major developments in\n``unitarity-based'' methods for calculating one-loop amplitudes (which\nhad been the main bottleneck for new NLO results).\nOriginally pioneered in the mid 1990's~\cite{Bern:1994zx}, the idea\nbehind these methods is to sew tree-level amplitudes together to\nproduce loop amplitudes, equivalent to considering loop momenta\nsuch that specific loop propagators are on-shell.\nThis idea was 
revitalised in 2004 through the use of momenta with two\ntimelike components~\\cite{Britto:2004nc} to broaden the set of\ntree-level configurations that could be usefully\nassembled.\\footnote{Specifically, with two timelike components (or, in\n subsequent work, with complex Minkowski momenta), it is possible to\n have a sensible 3-particle vertex with all momenta on shell and use\n this as an ingredient in building up the loop amplitude.}\nTo go from this result to collider predictions has been a huge\nundertaking, with many important steps along the way (most have been\nreviewed in~\\cite{Bern:2008ef}).\nIf one is to highlight a single one of them, it might be the\nobservation that it is possible to deduce the integrated 1-loop\ndiagram simply by inspection of the integrand for specific\nloop-momentum configurations~\\cite{Ossola:2006us}.\n\nThese developments represent a revolution in NLO calculations. Not\njust because of the number of $2\\to 4$ predictions that they have led\nto --- a corresponding effort devoted to Feynman-diagrammatic\ncalculations would probably have led to a similar number of results\n--- but more importantly because of the prospects that they offer for\n``low-cost'' automation of NLO calculations and the extension beyond\n$2\\to4$ processes.\nIndeed, just around the time of ICHEP, the first NLO results for a\n$2\\to5$ process were announced, the unitarity-based (leading colour)\ncalculation of $W$+4-jets~\\cite{Berger:2010zx}, nearly ten years ahead\nof expectations from the timeline discussed above.\n\nOne caveat to be mentioned in the context of these impressive results\nis that so far most of the $2\\to4$ or $2\\to 5$ NLO calculations are\nnot yet available as public codes (with the exception\nof~\\cite{Melia:2011gk}).\nThis is perhaps a consequence of the significant complexity of the\ncodes, which often bring together many different tools\\footnote{For\n example, on one hand the 1-loop corrections, on the other hand tools\n for 
handling real radiation such\n as~\\cite{Gleisberg:2007md,Czakon:2009ss,Frederix:2009yq,Hasegawa:2009tx}.}\nand then require enormous computing time if one is to obtain a\nnumerically stable result.\nNevertheless, it is only once they are public, in a form that is\nrelatively straightforward to use, that these calculations will be\nable to deliver their full value.\n\n\n\\subsection{NLO and Monte Carlo event generators}\n\nWhile NLO calculations have the benefit of quantifiable accuracy (at\nleast in regions of phase-space that don't probe disparate momentum\nscales), they only ever involve a handful of partons, a far cry from\nthe level of detail of MC parton-shower event generators, which\npredict distributions at the level of hadrons.\n\nTwo main techniques have been developed over the past decade to\ncombine NLO accuracy with parton shower ``detail'', the\nMC@NLO~\\cite{Frixione:2002ik} and POWHEG~\\cite{Nason:2004rx} methods.\nGenerally speaking, only relatively simple processes are available: at\nthe time of ICHEP, not even $Z$+jet or dijet production had been\npublicly implemented.\nThat is gradually changing thanks to progress on the systematisation\nand automation of both the MC@NLO~\\cite{Frixione:2010ra} and\nPOWHEG~\\cite{Alioli:2010xd,Hoche:2010pf} methods. 
In the POWHEG case\nthis helped the implementation of $Z$+jet~\\cite{Alioli:2010qp},\ndijet~\\cite{Alioli:2010xa} and $t\\bar t$+jet~\\cite{Kardos:2011qa} and\neven the $2\\to4$ process $W^\\pm W^\\pm$+2-jets~\\cite{Melia:2011gk},\nwhile in MC@NLO it has been of benefit for example in extending the\nrange of processes available with Herwig to work also with\nHerwig++~\\cite{Frixione:2010ra}.\n\n\nA point to be aware of is that while NLO MC implementations of,\nsay, $Z$ production necessarily include a correct LO (tree-level)\n$Z$+jet matrix element, they had not generally been matched with\nhigher-order tree-level matrix elements, e.g.\\ $Z$+2-jet, etc.\nIn contrast, it has for some time now been standard procedure to\ncombine LO tree-level $Z$, $Z$+jet, $Z$+2-jet, etc. matrix elements\ntogether (CKKW and MLM methods~\\cite{Catani:2001cc,Alwall:2007fs}).\nTherefore users have been forced to choose between, on one hand NLO\naccuracy for simple processes but with a poor description of multi-jet\nevents, and on the other hand low, LO, accuracy but simultaneously for\nmany different multiplicities.\nUltimately one would hope to have a method that provides NLO accuracy\nsimultaneously for a range of different multiplicities (for example,\nas implemented for $e^+e^-$ in \\cite{Lavesson:2008ah}, or for\nhadron-collider processes without showering in~\\cite{Rubin:2010xp}).\nHowever, in the meantime, an interesting\ndevelopment~\\cite{Hamilton:2010wh,Hoche:2010kg} is the merging of\nPOWHEG and CKKW\/MLM type methods to provide NLO accuracy for the\nlowest multiplicity process with LO accuracy for multijet processes.\n\nOverall, even if it is still early days, it is clear that automation\nof loop calculations, automation of methods to combine NLO and parton\nshowers and the development of methods to merge different\nmultiplicities of NLO-improved parton showers, taken together would\nhave the potential to radically improve the quality of MC 
predictions.\n\n\n\n\\section{NNLO}\n\nFor the foreseeable future the ultimate perturbative accuracy that one\ncan hope to achieve is NNLO, i.e.\\ corrections up to $\\order{\\alpha_{\\text{\\sc s}}^2}$\nrelative to the dominant process.\nThere are two broad reasons for being interested in NNLO\ncorrections. One may, for example, wish to extract precision\ninformation about standard-model couplings (as for the Higgs boson) or\nparton-distribution functions from measured cross-sections.\nAlternatively one may be faced with quantities where NLO corrections\nare large, and NNLO is then the first order at which one can hope to\nmake quantitatively reliable predictions.\n\nNNLO hadron-collider results have been available for some time now for\nHiggs and vector-boson production (state-of-the-art codes are\ndescribed\nin~\\cite{Anastasiou:2005qj,Grazzini:2008tf,Gavin:2010az,Catani:2009sm}),\nand the current frontier is NNLO accuracy for processes with coloured\nfinal-state particles, be they heavy (top) or light (jets).\n\nOne significant recent result is the calculation of the NNLO cross\nsection for Higgs production in vector-boson\nfusion~\\cite{Bolzoni:2010xr}, making use of the ``structure function''\napproach~\\cite{Han:1992hr} in which one views each proton's emission\nof a vector-boson as a DIS type reaction, and then separately\nconsiders the fusion of the two vector bosons. \nThis provides a NNLO result that is inclusive over the hadronic jets,\nbut still exclusive with respect to the vector-boson momenta.\nNumerically it indicates perturbative stability relative to the NLO\nprediction, with a reduction of scale uncertainties from the $5-10\\%$\nrange at NLO, down to $2-3\\%$.\n\nThe most likely candidate for the next process to be calculated at\nNNLO is $t\\bar t$ production. 
\nAmong the physics motivations, one can mention the importance of the\nforward-backward asymmetry: given that it is non-zero starting only\nat NLO, only from NNLO will there be some quantifiable control of the\ntheoretical uncertainties on its prediction.\nAlso of interest is the potential for an extraction of the top-quark\nmass by comparing the predicted cross-section (with its relatively\nstrong mass dependence) to the actual measured cross\nsection.\footnote{It seems this method was originally proposed during\n  an extensive discussion at Moriond QCD 2008. It has since been\n  analysed in detail for example in~\cite{Langenfeld:2009wd}.}\n\nAs things stood a few years ago, the ingredients that were still\nmissing for a NNLO calculation of $t\bar t$ production were the\nfollowing: \nthe two-loop diagrams for $q\bar q\to t\bar t$ and $gg\to t\bar t$;\nthe squared one-loop terms for $t\bar t$ production in association\nwith an extra parton;\nand a way of performing the phase-space integration for (tree-level)\n$t\bar t$+2-parton production while keeping track of the divergences,\nwhich need to cancel with those from the 1- and 2-loop terms.\n\nProgress (reviewed in~\cite{Bonciani:2010ue}) started with the\ncalculation of the high-energy limit of the two-loop $q\bar q$ and\n$gg\to t\bar t$ diagrams \cite{Czakon:2007ej}.\nThis was followed by a numerical evaluation of the full 2-loop $q\bar\nq \to t\bar t$ amplitude \cite{Czakon:2008zk} (a corresponding\napproach to $gg \to t\bar t$ seems close to completion\n\cite{CzakonZurichTalk}) and by various analytical results for parts\nof the two\namplitudes~\cite{Ferroglia:2009ii,Bonciani:2008az}. 
\nThe squared one-loop terms were determined in \\cite{Korner:2008bn}.\nFinally, the problem of integrating the (divergent) phase-space for\nproduction of $t\\bar t$+2-partons has been solved in\n\\cite{Czakon:2010td}.\nThus there is hope that in the reasonably near future, first full NNLO\nresults for top production will become available.\\footnote{%\n In the meantime there has been significant work towards estimating\n the NNLO (and yet higher-order) corrections using\n threshold-resummation\n techniques~\\cite{Cacciari:2008zb,Czakon:2009zw,Ahrens:2010zv,Aliev:2010zk,Kidonakis:2010dk}.\n %\n While it is beyond the scope of these proceedings to discuss the\n detailed differences between them, it is probably fair to say that\n they do not yet provide a consensus as to the likely impact of the\n full NNLO corrections.\n}\n\nThe next frontier for NNLO calculation will probably be that of\nprocesses with one or more light jets in both the initial and final\nstates, e.g. vector-boson plus jet or dijet\nproduction.\\footnote{Techniques that merge NLO calculations for\n different jet multiplicities~\\cite{Rubin:2010xp} can, meanwhile,\n provide a good approximation to NNLO for those observables in such\n processes that are subject to giant $K$-factors.}\nThe case with final-state jets only, specifically $e^+e^- \\to $3-jets,\nhas been solved in~\\cite{GehrmannDeRidder:2007bj,Weinzierl:2008iv}.\nA compilation of extractions of $\\alpha_{\\text{\\sc s}}$ based on the comparison of these\nNNLO results (supplemented with resummations and non-perturbative\ncorrections) to event-shape data has been given in\n\\cite{Gehrmann:2010rj}.\nInterestingly, there is a noticeable spread in the results,\nhighlighting the fact that at levels of precision of a few percent,\nhadronic final-state observables are subject to many different effects\nthat can contribute at the same few-percent level as $\\order{\\alpha_{\\text{\\sc s}}^2}$\ncorrections.\nStill, NNLO corrections are a class that 
can be controlled, helping\nprovide a far more constrained discussion of the overall precision of\nQCD predictions.\nIt is therefore highly valuable that work progresses on general NNLO\nmethods and their extension to processes with initial-state coloured\nparticles (see\n\\cite{Glover:2010im,Boughezal:2010mc,Anastasiou:2010pw,Bolzoni:2010bt}\nand references therein).\n\n\\section{Jets}\n\nThe majority of measurements that involve hadronic energy-flow at the\nLHC will make use of jets.\nJets are measured with the help of a jet algorithm, which takes the\nhundreds of particles measured in an experiment and combines them into\na handful of jets.\nThe same procedure can be applied to theoretical parton-level\ncalculations, with the idea that the jets obtained from parton-level\nand experiment can be directly compared.\n\nA problem that had plagued hadron-collider jet measurements for nearly\n20 years was that the vast majority used jet algorithms that were not\ninfrared and collinear (IRC) safe (despite widespread discussion of\nthe problem, e.g.\\ \\cite{Huth:1990mi,RunII-jet-physics}).\nIRC safety is the property that the final hard jets should be insensitive\nto the additional low-energy emissions and small-angle branchings that\noccur with high probability in QCD.\nWithout this property, the higher-order calculations discussed above\noften lead to divergent answers, compromising the huge investment that\nhas been made in them over the past decade.\n\nIt was therefore a welcome development to see that all of the jet\nmeasurements presented by ATLAS and CMS at ICHEP 2010 (and the\nsubsequent publications, e.g.~\\cite{ATLAS:2010pg,CMS:2011tk}) have\nused an infrared and collinear safe jet algorithm,\nanti-$k_t$~\\cite{Cacciari:2008gp}\n(which has \nalso been used by the H1 and ZEUS\ncollaborations~\\cite{Aaron:2009vs,Abramowicz:2010ke}).\nThe anti-$k_t$ algorithm repeatedly recombines the pair of particles\n$i$ and $j$ that has the smallest $d_{ij} 
=\n\\min(p_{ti}^{-2},p_{tj}^{-2}) \\Delta R_{ij}^2\/R^2$ unless a $d_{iB} =\np_{ti}^{-2}$ is smaller, in which case $i$ is labelled a jet\n($\\Delta R_{ij}$ is the rapidity-azimuth separation of $i$ and $j$ and\nthe parameter $R$ sets the minimum interjet distance).\nClosely related to the much earlier $k_t$\nalgorithm~\\cite{KtHH,Kt-EllisSoper}, it uses a different weighting of\nmomentum and angle to grow jets outwards from a central core, giving\n``cone-like'' jets\\footnote{Jets that are nearly always circular in\n the rapidity--azimuth plane; this relates to the algorithm producing\n jets whose momentum depends linearly on the distribution of soft\n particles in the jet and its vicinity~\\cite{Cacciari:2008gn}, a\n property that helps make it easier to account for detector effects.}\nwhile remaining IRC safe.\nThese properties, together with earlier developments that ensure good\ncomputational efficiency in the presence of high particle\nmultiplicities~\\cite{Cacciari:2005hq}, help make it particularly\nsuitable from both the experimental and theoretical points of view.\n\nJet finding is not simply about comparing theory and experiment, but\nalso about organising the huge amount of information in an event so as\nto best pull out signals of particles such as the Higgs boson and\nextensions of the standard model.\nOne kinematic regime of particular interest turns out to be that where\nparticles with electroweak-scale masses are produced with transverse\nmomenta somewhat (or far) above the electroweak scale.\nThere had been a handful of early investigations of this\nregime~\\cite{Seymour:1993mx, Butterworth:2002tt}, and in recent years\nit has become clear to what extent the hierarchy of scales present at\nLHC ($\\sqrt{s} \\gg M_{\\text{EW}}$) can usefully be exploited with\nsuitably targeted jet methods.\nExamples include: the search for new TeV-scale particles that decay to\nelectroweak bosons (W, Z, H) or top-quarks, which then go on to decay\nhadronically 
(e.g.\\n\cite{Butterworth:2007ke,Thaler:2008ju,Kaplan:2008ie}; more standard\njet methods were used for example in~\cite{Baur:2008uv});\nthe observation that in searching for hadronic decays of the\nHiggs-boson (in association e.g.\ with a\n$W\/Z$~\cite{Butterworth:2008iy} or a $t\bar t$\npair~\cite{Plehn:2009rk}) it may be advantageous to concentrate on the\nsubset of events in which the Higgs boson has $p_t \gg M_H$, or indeed\nthat the Higgs might be discoverable first in SUSY cascade\ndecays~\cite{Kribs:2010hp};\nand the proposal that hadronically-decaying new particles\n(e.g. neutralinos and gluinos in $R$-parity violating\nsupersymmetry~\cite{Butterworth:2009qa,Brooijmans:2010tn} or new\nscalars that appear in buried Higgs scenarios~\cite{Chen:2010wk}) may\nhave sufficiently distinct jet-substructure signals to be picked out\nsometimes even in purely hadronic events.\footnote{There is even a\n  tantalising claim of a hint of an excess in such a channel at the\n  Tevatron~\cite{Eshel:2011vs}.}\nIt is beyond the scope of these proceedings to discuss in detail the\nmany different jet techniques that have been developed for these\npurposes (among those not already cited above,\nalso~\cite{Almeida:2008yp,Ellis:2009su,Soper:2010xk,Cui:2010km,Kim:2010uj,Thaler:2010tr,Hook:2011cq,Barger:2011wf,Soper:2011cr}),\nand the reader is referred instead to recent\nreviews~\cite{Salam:2009jx,Abdesselam:2010pt}.\n\n\n\section{Conclusions}\n\nSeveral major long-term LHC-QCD related projects are now approaching\nmaturity.\nAmong them we looked at the \texttt{C++}\xspace event generators Herwig\texttt{++},\nPythia~8 and Sherpa which are all now ready for mainstream use, and\nare also evolving in their physics content, be it in terms of\nnon-perturbative ingredients such as the underlying event or more\nwidespread matching with NLO calculations through automation of the\nMC@NLO and POWHEG methods.\footnote{\n  Though space limitations 
prevented us from discussing parton\n distribution functions, it is worth mentioning that the NNPDF\n project~\\cite{Ball:2011mu} has likewise reached maturity in the past\n year, joining CTEQ and MSTW as a global PDF fit, involving a quite\n complementary approach to the estimation of uncertainties. The\n discussion around PDFs remains very vibrant (even controversial,\n especially in the context of Higgs exclusion\n limits~\\cite{Baglio:2011wn}) and for other recent progress and\n comparisons between results, the reader is referred\n to~\\cite{Guzzi:2011sv,Martin:2010db,Alekhin:2011ey,CooperSarkar:2010ik,JimenezDelgado:2010nk,Alekhin:2011sk}.}\n\nWe also looked at some breakthroughs of the past couple of years.\nNLO calculations, with the first $2\\to 5$ result published almost 10\nyears ahead of ``schedule'' (i.e.\\ extrapolations of past progress)\nundoubtedly belong to this category.\nIt is probably also fair to say that jet finding has undergone a\nbreakthrough, on two fronts:\\footnote{Though the author is perhaps too\n close to the subject to provide an unbiased view.}\non one hand, the LHC is the first hadron collider to systematically\nuse infrared and collinear safe jet finding, more than 30 years after\nthe original proposal for jet-finding by Sterman and\nWeinberg~\\cite{StermanWeinberg}; on the other, it has become clear\nthat flexibility with jet-finding methods has the potential to help\ndiscover Higgs-boson decay channels and new physics scenarios that had\npreviously been thought beyond the scope of the LHC.\n\nOne of the other areas of extensive ongoing work in QCD is the quest\nfor high accuracy, where we discussed the progress in NNLO\ncalculations (space constraints prevented a discussion of\nresummations).\nThe most imminent development will probably be the NNLO calculation of\n$t\\bar t$ production, with an impact not just on predictions of the\ncross section, but also, possibly, on the highly topical question of\nthe $t\\bar t$ 
asymmetry.\n\nAt what point might we say that QCD is ready for the LHC? \nThere has been enormous progress in the past 5 to 10 years and the\ngoals that were set at the turn of the century have generally been met\n(with one or two good surprises along the way).\nStill, in many ways, the use of QCD at colliders remains a somewhat\ndelicate craft, one that relies on a combination of technical skill,\nphysical insight and extensive experience.\nThis is true whether one aims for the reliable prediction of complex\nbackgrounds, the high-precision extraction of fundamental parameters\nfrom data or the design of analyses that make the most of QCD to help\ndistinguish signal from background.\nWe can but look forward to breakthroughs of the coming years that will\nmake it more straightforward to use QCD on the path to discovery.\n\n\n\section*{Acknowledgements}\n\nI am grateful to numerous colleagues for helpful exchanges and\ncomments, both while preparing the talk and this writeup. Among them,\nMatteo Cacciari, Aude Gehrmann de Ridder, Gudrun Heinrich, Nikolaos\nKidonakis and Giulia Zanderighi.\nFinancial support is acknowledged from grant ANR-09-BLAN-0060.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\section{Justification from Generalization Error Bound}\label{sec:justification}\nWe further analyze the generalization error bound of our \texttt{ProGrad}. We define the expected risk $R(\cdot)$ and empirical risk $\hat{R}(\cdot)$ of a classifier $f$ on domain $\mathcal{D}$ as \n\begin{equation}\nR(f)=\mathbb{E}_{(X,Y)\sim \mathcal{D}}[\ell(f(X),Y)], \quad \hat{R}(f)=\frac{1}{N} \sum_{i=1}^{N} \ell(f(X_i),Y_i)\n\end{equation}\nwhere $\ell(f(X), Y)$ denotes the cross-entropy loss and $N$ is the number of training samples. We are interested in the downstream domain $\mathcal{D}_{d}$ and the pre-trained domain $\mathcal{D}_{p}$. \footnote{The pre-trained dataset includes samples from diverse classes. 
Here, we only consider the pre-trained data belonging to the classes of the downstream task.}\n\n Let $\mathcal{F}$ be a function class. \nThe conventional fine-tuned model $\hat{f}_{\text{ce}}$ is trained on $\mathcal{D}_{d}$ by \n\begin{equation}\n    \hat{f}_{\text{ce}} = \argmin_{f\in \mathcal{F}} \hat{R}_{d}(f).\n\end{equation} The zero-shot CLIP model $\hat{f}_{\text{p}}$ is considered to be trained on $\mathcal{D}_{p}$ by \n\begin{equation}\n    \hat{f}_{\text{p}} = \argmin_{f\in \mathcal{F}} \hat{R}_{p}(f).\n\end{equation}\nFor the implementation of \texttt{ProGrad},\nwe initialize the model $\hat{f}_{\text{prograd}}$ using the pre-trained model $\hat{f}_{\text{p}}$. We regularize each training step not to increase the KL divergence between the predictions of $\hat{f}_{\text{prograd}}$ and $\hat{f}_{\text{p}}$. In this way, $\hat{f}_{\text{prograd}}$ can keep the optimal value of the pre-trained domain loss $\mathcal{L}_\text{kl}$ when optimizing the empirical risk on the downstream domain.\nThe model $\hat{f}_{\text{prograd}}$ learned by our \texttt{ProGrad} can be viewed as optimizing the empirical risk on both domains:\n\begin{equation}\n    \hat{f}_{\text{prograd}} = \argmin_{f\in \mathcal{F}} \hat{R}_{(d+p)}(f)=\argmin_{f\in \mathcal{F}} \hat{R}_{d}(f)+ \hat{R}_{p} (f).\n\end{equation}\n\nBased on Theorem 4.1 of \cite{yang2021bridging}, assume that the neural network has $L$ layers with parameter matrices $W_1, ..., W_L$ whose Frobenius norms are at most $M_1,...,M_L$, and that the activation functions are 1-Lipschitz continuous, positive-homogeneous, and applied element-wise. The output of the neural network is the softmax function that predicts $c$ classes. Let $\mathcal{F}$ be a function class with the range $[a,b]$. The input distribution is such that $\|\mathbf{x}\|\leq B$. 
Let $\mathbf{X}_1^{N_d}=\{\mathbf{x}_n^{(d)}\}_{n=1}^{N_d}$ and $\mathbf{X}_1^{N_p}=\{\mathbf{x}_n^{(p)}\}_{n=1}^{N_p}$ be two sets of i.i.d. samples drawn from the downstream domain $\mathcal{D}_{d}$ and the pre-trained domain $\mathcal{D}_{p}$. \nThen for any $\epsilon>0$, we have with probability at least $1-\epsilon$,\n\n\begin{equation}\label{eq:bound_complex}\n\begin{aligned}\n    R_{d}(\hat{f}_{\text{prograd}}) &\leq \hat{R}_{(d+p)}(\hat{f}_{\text{prograd}})+ \frac{1}{2}\gamma_\mathcal{F}(D,P)+\frac{cB\left(\sqrt{2\log(2)L}+1\right)\prod_{j=1}^LM_j}{\sqrt{N_p}}\\\n    &+\frac{cB\left(\sqrt{2\log(2)L}+1\right)\prod_{j=1}^LM_j}{\sqrt{N_d}}+\frac{3}{2}\sqrt{\frac{(b-a)\ln(4\/\epsilon)}{2N_d}}\\&+\frac{3}{2} \sqrt{\frac{(b-a)\ln(4\/\epsilon)}{2N_p}}\n    +\frac{1}{2}\sqrt{\frac{(b-a)^2\ln(4\/\epsilon)}{2}\left(\frac{1}{N_{d}}+\frac{1}{N_p}\right)},\n\end{aligned}\n\end{equation}\nwhere $\gamma_\mathcal{F}(D,P)$ is the integral probability metric~\cite{muller1997integral} that measures the difference between the distributions of the pre-trained domain and the downstream domain. Eq.~\eqref{eq:bound_complex} shows that the generalization error $R_{d}(\hat{f}_{\text{prograd}})$ is bounded by the empirical training risk $\hat{R}_{(d+p)}(\hat{f}_{\text{prograd}})$, the domain gap $\gamma_\mathcal{F}(D,P)$ and the estimation error, which is inversely proportional to the number of training samples, \textit{i.e.}, $N_d$ and $N_p$. \nThe empirical training risk can be minimized to an arbitrarily small value, and the estimation error related to $N_p$ asymptotically tends to 0 as the sample size $N_p$ tends to infinity. 
Thanks to the large number of pre-trained samples $N_p$, we can approximate the generalization error bound for the model learned by $\texttt{ProGrad}$ as\n\begin{equation}\n\begin{aligned}\n    R_d(\hat{f}_{\text{prograd}}) & \leq \frac{1}{2}\gamma_\mathcal{F}(D,P) + \frac{cB\left(\sqrt{2\log(2)L}+1\right)\prod_{j=1}^LM_j}{\sqrt{N_d}} \\\n    &+ \frac{3}{2}\sqrt{\frac{(b-a)\ln(4\/\epsilon)}{2N_d}} +\frac{1}{2}\sqrt{\frac{(b-a)^2\ln(4\/\epsilon)}{2}\frac{1}{N_d}}.\n    \end{aligned}\n\end{equation}\nSimilarly, we have the generalization error for $\hat{f}_{\text{ce}}$ as\n\begin{equation}\n    R_d(\hat{f}_{\text{ce}}) \leq 2\frac{cB\left(\sqrt{2\log(2)L}+1\right)\prod_{j=1}^LM_j}{\sqrt{N_d}} \n    +3\sqrt{\frac{(b-a)\ln(4\/\epsilon)}{2N_d}} +\sqrt{\frac{(b-a)^2\ln(4\/\epsilon)}{2}\frac{1}{N_d}}.\n\end{equation}\nIf the gap between the pre-trained domain $\mathcal{D}_{p}$ and the downstream domain $\mathcal{D}_{d}$ is very small, $\gamma_\mathcal{F}(D,P)$ will tend to 0. Under this assumption, the estimation error bound of $R_d(\hat{f}_{\text{ce}})$ is at least 2 times greater than that of $R_d(\hat{f}_{\text{prograd}})$. In the few-shot setting, $N_d$ is typically very small, which gives our \texttt{ProGrad} model $\hat{f}_{\text{prograd}}$ a much lower error bound than the conventional fine-tuning model $\hat{f}_{\text{ce}}$.\n\n\section{Additional Experiments}\label{sec:experiments}\n\subsection{Effect of Hyper-parameter}\label{sec:hyper-parameter}\nWe further analyze the effect of the hyper-parameter $\lambda$ described in Eq.~(4) in the main paper. Results are shown in Table~\ref{tab:lambda}. 
As discussed in Section~3.2 in the main paper, a smaller $\lambda$ weakens the general knowledge regularization, which results in an inferior performance under the low-shot setting for most datasets.\nHowever, for DTD in Table~\ref{tab:lambda}, using a smaller $\lambda=0.9$ to reduce the general knowledge regularization can improve the 16-shot results. One possible reason is that the texture images of DTD have a large gap with the CLIP pre-training images collected from the Internet, so stronger regularization from pre-trained knowledge might be detrimental to the fine-tuning performance if the downstream data are sufficient. \n\n\input{A-tables\/lambda}\n\n\subsection{Additional Few-shot Classification Results}\label{sec:additional_FS}\n\input{A-tables\/fewshot}\n\nIn this section, we further provide the detailed few-shot classification results of other learning-based fine-tuning methods with confidence intervals at 95\% in Table~\ref{tab:fewshot_part1} and Table~\ref{tab:fewshot_part2}.\n\n\noindent\textbf{Cosine.} As described in Section 4.5 of the main paper, we plug in an additional cosine classifier on top of the visual backbone and train it on the downstream dataset.\n\n\noindent\textbf{CoOp} learns the context prompt from data rather than using a hand-crafted design. \n\n\noindent\textbf{CLIP-Adapter} learns an additional feature adapter to boost conventional fine-tuning results.\n\n\noindent\textbf{Cosine + ProGrad} applies \texttt{ProGrad} to the training process of the cosine classifier. \n\n\noindent\textbf{CoOp + $l_2$ prompt reg}. 
We further investigate whether simply using the $l_2$ distance between the learned prompt vector $\bm{v}$ and the word embedding vector of the hand-crafted prompt $\bm{v}_\text{zs}$ as the regularization can improve few-shot performance, \textit{i.e.}, $\mathcal{L}_\text{total}(\bm{v})=\mathcal{L}_\text{ce}(\bm{v})+\alpha\|\bm{v}-\bm{v}_\text{zs}\|_2$, where we select $\alpha=0.01$.\n\n\noindent\textbf{CoOp + GM} applies the gradient matching method~\cite{yu2020gradient} to CoOp, \textit{i.e.}, we not only project $\bm{G}_\text{ce}$ onto the perpendicular direction of $\bm{G}_\text{kl}$ as the updated gradient, but also project $\bm{G}_\text{kl}$ onto the perpendicular direction of $\bm{G}_\text{ce}$ as the updated gradient to fine-tune the model alternately.\n\n\noindent\textbf{CoOp + KD.} As described in Section 4.5 of the main paper, we apply the knowledge distillation loss to CoOp, \textit{i.e.}, $\mathcal{L}_\text{total}=\mathcal{L}_\text{ce}+\mathcal{L}_\text{kl}$.\n\n\noindent\textbf{CoOp + ProGrad} applies \texttt{ProGrad} to CoOp.\n\nFor all prompt-based methods, we set the context length $M$ to $16$ except for CoOp + $l_2$ prompt reg. The learned prompt length for CoOp + $l_2$ prompt reg needs to be equal to the hand-crafted prompt length to compute the $l_2$ norm, \textit{e.g.}, $M$ has to be 4 if the hand-crafted prompt is ``\texttt{a photo of a }''.\nAccording to the average results in Table~\ref{tab:fewshot_part1}, we observe that our CoOp + ProGrad still achieves the best average performance.\nBy comparing the results of 1) Cosine and Cosine + ProGrad; and 2) CoOp and CoOp + ProGrad, we demonstrate that both the conventional ``pre-train then fine-tune'' paradigm and the prompt tuning paradigm can benefit from our \texttt{ProGrad}. The gap between CoOp and CoOp + $l_2$ prompt reg demonstrates that directly regularizing the learned prompt to stay close to the hand-crafted prompt brings limited improvement. 
By digging into CoOp + KD and CoOp + GM, we find a performance improvement from introducing the general knowledge. However, they still under-perform our CoOp + ProGrad. This is because 1) CoOp + KD learns the average knowledge from the two domains, which still allows the fine-tuned model to learn from the downstream knowledge that conflicts with the general knowledge; 2) CoOp + GM additionally requires the fine-tuned model to discard the general knowledge that is not aligned with the downstream knowledge; since the downstream data are limited, the inaccurate estimation of $\bm{G}_\text{ce}$ will lead the model to focus on biased general knowledge.\n\n\input{A-tables\/base2new}\n\subsection{Confidence Interval Results for Base-to-New Generalization}\label{sec:additional_base2new}\nTable~\ref{tab:base2new} further presents the confidence intervals at 95\% for base-to-new generalization on each of the 11 datasets. The results on base classes and new classes both show that our \texttt{ProGrad} has lower average confidence intervals than CoOp and CoCoOp, \textit{i.e.}, 0.69 \textit{vs.} 1.00 and 1.26 on base classes; 1.93 \textit{vs.} 2.68 and 3.00 on new classes. \n\n\section{Additional Implementation Details}\label{sec:implementation_details}\nWe follow the training settings of CoOp~\cite{COOP}: \nall prompt-based models are trained by SGD with an initial learning rate of 0.002, which is decayed by the cosine annealing rule. During the first epoch, we use the warm-up trick by fixing the learning rate to $1\times 10^{-5}$ to alleviate gradient explosion. The number of training epochs is set to 50 for all shot settings on the ImageNet dataset. For the remaining 10 datasets, the number of training epochs is set to 50 for 1 shot, 100 for 2\/4 shots and 200 for 8\/16 shots. We train all prompt-based models with a batch size of 32 except for CoCoOp. As described in \cite{cocoop}, CoCoOp consumes a significant amount of GPU memory if the batch size is set larger than one. 
We set the batch size to 1, following their original setting. Our experiments are conducted on one 2080Ti GPU for all datasets except ImageNet where we train the models on one A100 GPU. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Justification from Generalization Error Bound}\nWe further \nanalyze the generalization error bound of our \\texttt{ProGrad}. We define the expected risk $R(\\cdot)$ and empirical risk $\\hat{R}(\\cdot)$ of a classifier $f$ on domain $\\mathcal{D}$ as $R(f)=\\mathbb{E}_{(X,Y)\\sim \\mathcal{D}}[\\ell(f(X),Y)]$ and $\\hat{R}(f)=\\frac{1}{N} \\sum_{i=1}^{N} \\ell(f(X_i),Y_i)$,\nwhere $\\ell(f(X), Y)$ denotes the cross-entropy and $N$ is the volume of training data. We are interested in aligning the pre-trained domain $\\mathcal{D}_p$ and the downstream domain $\\mathcal{D}_d$. \nThe conventional prompt-based learning model $\\hat{f}_{\\text{ce}}$ is trained on $\\mathcal{D}_d$ by $\\hat{f}_{\\text{ce}} = \\argmin_{f\\in \\mathcal{F}} \\hat{R}_{d}(f)$, while the zero-shot CLIP model $\\hat{f}_{\\text{p}}$ is trained on $\\mathcal{D}_p$ by $\\hat{f}_{\\text{p}} = \\argmin_{f\\in \\mathcal{F}} \\hat{R}_{p}(f)$. For the implementation of \\texttt{ProGrad},\nwe initialize the model $\\hat{f}_{\\text{prograd}}$ using the pre-trained model $\\hat{f}_{\\text{p}}$. We regularize each training step not to increase the KL divergence between the predictions of $\\hat{f}_{\\text{prograd}}$ and $\\hat{f}_{\\text{p}}$. 
In this way, $\\hat{f}_{\\text{prograd}}$ can keep the KL loss $\\mathcal{L}_\\text{kl}$ near its optimal value on the pre-trained domain while optimizing the empirical risk on the downstream domain.\nAs proven in \\cite{phuong2019towards}, \nthe model $\\hat{f}_{\\text{prograd}}$ learned by our \\texttt{ProGrad} can be viewed as optimizing the empirical risk on both domains:\n\\begin{equation}\n \\hat{f}_{\\text{prograd}} = \\argmin_{f\\in \\mathcal{F}} {\\hat{R}_{(d+p)}}(f)=\\argmin_{f\\in \\mathcal{F}} \\hat{R}_d(f)+ \\hat{R}_p (f).\n\\end{equation}\n\nThe comparison between $\\hat{f}_{\\text{prograd}}$ and $\\hat{f}_{\\text{ce}}$ on the downstream task domain in terms of the generalization error is given by the following theorem.\n\\begin{theorem}\\label{theorem:bound}\nLet $\\mathbf{X}^{(d)}=\\{\\mathbf{x}_n^{(d)}\\}_{n=1}^{N_d}$ and $\\mathbf{X}^{(p)}=\\{\\mathbf{x}_n^{(p)}\\}_{n=1}^{N_p}$ be two sets of i.i.d. samples drawn from the downstream domain $\\mathcal{D}_d$ and the pre-trained domain $\\mathcal{D}_p$, respectively. \nGiven $f\\in \\mathcal{F}$ and any $\\epsilon>0$, we have \n\\begin{equation}\n R_d(f) \\leq \\hat{R}_{(d+p)}(f) + \\frac{1}{2}\\gamma_\\mathcal{F}(\\mathcal{D}_p,\\mathcal{D}_d) + \\frac{1}{2}\\beta(N_p)+\\frac{1}{2}\\beta(N_d)\n\\end{equation}\nwith probability at least $1-\\epsilon$, where $\\gamma_\\mathcal{F}(\\mathcal{D}_p,\\mathcal{D}_d)$ is the integral probability metric \\cite{IPM} measuring the difference between the distributions $\\mathcal{D}_p$ and $\\mathcal{D}_d$, and $\\beta(N) \\propto \\frac{1}{N}$ is the generalization error term that is inversely proportional to the training sample size $N$. \n\\end{theorem}\nWe refer readers to the Appendix for more details of Theorem~\\ref{theorem:bound}. Intuitively, Theorem~\\ref{theorem:bound} tells us that the generalization error bound is dominated by $\\gamma_\\mathcal{F}(\\mathcal{D}_p,\\mathcal{D}_d)$ and $\\beta(N_d)$.
As the empirical risk \n$\\hat{R}_{d+p}(f)$ of $\\hat{f}_{\\text{prograd}}$ can be minimized to an arbitrarily small value and $\\beta(N_p) \\propto \\frac{1}{N_p}$ is close to zero with a large number of pre-training samples, we can approximate the generalization error bound for the model learned by $\\texttt{ProGrad}$ by \n\\begin{equation}\n R_d(\\hat{f}_{\\text{prograd}}) \\leq \\frac{1}{2}\\gamma_\\mathcal{F}(\\mathcal{D}_p,\\mathcal{D}_d) +\\frac{1}{2}\\beta(N_d).\n\\end{equation}\nAlso, the generalization error bound of the model $\\hat{f}_{\\text{ce}}$ trained by conventional prompt tuning is:\n\\begin{equation}\n R_d(\\hat{f}_{\\text{ce}}) \\leq \\beta(N_d).\n\\end{equation}\nWe give the proof in the Appendix. In the few-shot setting, $N_d$ is typically small, which makes $\\beta(N_d)$ very large. In such a case, introducing the pre-training knowledge helps narrow the generalization error bound from $\\beta(N_d)$ to $\\frac{1}{2}\\beta(N_d)$. In addition, we assume the domain gap between the pre-training and downstream domains, $\\gamma_\\mathcal{F}(\\mathcal{D}_d,\\mathcal{D}_p)$, is negligible compared to $\\beta(N_d)$, \\textit{i.e.}, $\\gamma_\\mathcal{F}(\\mathcal{D}_d,\\mathcal{D}_p) \\ll \\beta(N_d)$. Therefore, the generalization error bound for our $\\hat{f}_{\\text{prograd}}$ is approximately $\\frac{1}{2}\\beta(N_d)$ smaller than that of $\\hat{f}_{\\text{ce}}$.\n\n\\section{Introduction}\\label{sec:intro}\nAfter seeing and reading countless image-text association pairs, large and deep vision-language models (VLMs)~\\cite{CLIP,ALIGN} can memorize the \\textbf{general knowledge} (a.k.a. encyclopedic knowledge) about what visual patterns correspond to what textual sequence and vice versa. Thanks to the powerful language modeling of VLMs, we can establish a communication channel in human-readable natural language, \\textit{i.e.}, \\textbf{prompt}~\\cite{prompt_survey,yao2021cpt,good_prompt}, to query the general knowledge stored in human-unreadable model parameters.
Prompting bridges the interface gap between the pre-trained and downstream tasks (\\textit{e.g.}, regression \\textit{vs.} classification) without the need for additional fine-tuning adaptation. For example, \nwe can craft a prompt---``\\texttt{a photo of a [CLASS]}''---to achieve zero-shot image classification: using the popular vision-language model CLIP~\\cite{CLIP}, we input the image to the vision end and the prompt to the language end, then obtain a vision-language similarity measure as the confidence score of classifying the image as ``\\texttt{[CLASS]}''. \n\nIn practice, prompt-based zero-shot classification is not accurate because the hand-crafted prompt may not be the most machine-favorable (\\textit{e.g.}, ``\\texttt{this is a picture of}'' may be grammatically more prevalent in VLM training data), or not domain-specific to the downstream tasks (\\textit{e.g.}, ``\\texttt{a photo of a person doing}'' is better in action recognition)~\\cite{CLIP}. \nRecently, prompt tuning or prefix tuning~\\cite{lester2021power,liu2021gpt,COOP,cocoop} has been proposed to replace the hand-crafted prompt with a set of tunable word embedding vectors, which do not have to be translatable back to human-readable words. \n\nYet, prompt tuning is still as tricky as conventional fine-tuning: as the training continues, the generalization ability may decrease and even fall below that of the zero-shot baseline, which is not fine-tuned. As shown in Figure~\\ref{fig:Fig1}(a\\&b), the prompt tuning method CoOp~\\cite{COOP} achieves its best results via early stopping, and its accuracy drops heavily, by up to 4\\%, when the training continues.
Besides, Figure~\\ref{fig:Fig1}(c\\&d) shows that CoOp underperforms zero-shot CLIP without data augmentation or more samples from downstream tasks.\nTo the best of our knowledge, existing methods still adopt conventional anti-overfitting techniques such as early stopping and data augmentation~\\cite{COOP,cocoop,gao2020making,qin2021lfpt5}, which lack a principled solution to the root cause of improper prompt tuning. Furthermore, the Grad-CAM visualization results indicate that the fine-tuned prompt starts to mislead the VLM into forgetting the general knowledge that classification should at least focus on the foreground object rather than the background. Comparing CoOp (Figure~\\ref{fig:Fig2}(b)) with zero-shot CLIP (Figure~\\ref{fig:Fig2}(c)), we find that the CoOp model shifts its attention to the background, while CLIP mainly focuses on the foreground object.\nThese results demonstrate the over-fitting risk of existing prompt tuning strategies.\n\n\n\\input{images\/Fig1}\n\\input{images\/Fig2}\nTo this end, we present a novel prompt tuning method called Prompt-aligned Gradient (\\texttt{ProGrad}) to overcome the improperly biased tuning.
The principle of \\texttt{ProGrad} is to regularize each tuning step not to conflict with the general knowledge already offered by the original prompt, \\textit{e.g.}, the zero-shot CLIP predictions.\nSpecifically, we measure the general knowledge direction $\\bm{G}_\\text{kl}$ using the gradient of the Kullback\u2013Leibler (KL) divergence between the predictions of the zero-shot prompted CLIP and the few-shot fine-tuned model, which we call the \\textbf{general direction}.\nSimilarly, we compute the domain-specific knowledge direction $\\bm{G}_\\text{ce}$ using the gradient of the cross-entropy between the ground-truth and the predictions of the few-shot fine-tuned model, dubbed the domain-specific direction.\nWe decompose the domain-specific direction $\\bm{G}_\\text{ce}$ into: 1) a vector $\\bm{G}_{\\perp}$ orthogonal to the general direction, which denotes the non-conflicting domain-specific knowledge; and 2) a vector parallel to the general direction, which denotes the general knowledge. \nNote that the first gradient component does NOT override the general direction, as any two orthogonal vectors can be transformed into two non-conflicting base vectors. The second component must point in one of two directions: 1) the same as the general direction, which indicates that the update is aligned with the general knowledge, and 2) the opposite of the general direction, indicating a conflicting update that should be discarded to avoid forgetting. Overall, in each iteration, \\texttt{ProGrad} only updates the parameters in the prompt-aligned direction that forms an acute angle with the general direction.
\nCompared to CoOp and CLIP, both $\\bm{G}_\\text{kl}$ and $\\bm{G}_{\\perp}$ (Figure~\\ref{fig:Fig2}(d\\&e)) help to regularize the model to focus on the foreground, and our \\texttt{ProGrad} (Figure~\\ref{fig:Fig2}(f)) can further improve the visual response.\n\nFollowing CLIP~\\cite{CLIP}, CoOp~\\cite{COOP} and CoCoOp~\\cite{cocoop}, we evaluate our ProGrad under the few-shot learning and base-to-new generalization settings over 11 image classification benchmark datasets, covering generic object classification, fine-grained image recognition, action classification, and domain generalization. In summary, our ProGrad achieves: 1) clear improvement compared to CoOp over all of the 11 datasets, especially for the 1-shot learning setting; 2) clear improvement on the harmonic mean of base-class and new-class accuracies on all 11 datasets compared to CoOp and 9 out of 11 datasets compared to CoCoOp; and 3) efficient training, \\textit{i.e.}, 32.9$\\times$ faster than CoCoOp while only 1.2$\\times$ slower than CoOp.\n\n\n\\section*{Acknowledgments}\nWe thank Chenwei Qin and Jingkang Yang for valuable discussions.\nThis research is supported by the National Research Foundation, Singapore\nunder its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-01-002).\n\\section{Conclusion}\nIn this paper, we pointed out the over-fitting issues of existing prompt tuning methods for few-shot generalization, which heavily rely on early stopping and data augmentation to outperform zero-shot inference. We proposed a prompt tuning method \\texttt{ProGrad} that regularizes each tuning step not to conflict with the general knowledge of the hand-crafted prompt. \nExperiments on few-shot classification, base-to-new generalization and domain generalization over 11 datasets demonstrate the effectiveness and efficiency of our \\texttt{ProGrad}.
In the future, we will explore how to apply \\texttt{ProGrad} to other tasks like object detection and segmentation.\n\n\\section{Related Work}\n\\noindent\\textbf{Vision-Language Pre-training.}\nThe pretext tasks for pre-training Vision-Language Models (VLMs) mainly fall into 4 categories: 1) Masked language modeling~\\cite{vilt,vilbert}, 2) Masked region prediction~\\cite{lxmert,su2019vl}, 3) Image-text matching~\\cite{lxmert,vilt}, and 4) Contrastive learning~\\cite{CLIP,ALIGN,li2021align,wenlan}. In our work, we focus on VLMs pre-trained by contrastive learning to align visual and textual embeddings. CLIP~\\cite{CLIP} and ALIGN~\\cite{ALIGN} are typical models that leverage large-scale image-text pairs to learn transferable visual representations and exhibit impressive prompt-based zero-shot performance on various image classification tasks.\n\n\\noindent\\textbf{Fine-tuning for VLMs.} Fine-tuning adapts VLMs to various downstream tasks, \\textit{e.g.}, visual question answering~\\cite{vilt,lxmert}, visual grounding~\\cite{yao2021cpt,vilbert}, image retrieval~\\cite{vilt,vilbert} and image classification~\\cite{COOP,cocoop}. We focus on the image classification task. The conventional ``pre-train then fine-tune'' paradigm, which plugs an additional classifier on top of the visual backbone and trains it on downstream data, has been widely adopted, \\textit{e.g.}, Linear Probe~\\cite{CLIP}.\nCLIP-Adapter~\\cite{clipadapter} proposes an additional feature adapter to boost conventional fine-tuning results. \nRecently, the NLP community has presented a novel fine-tuning paradigm named ``prompt-based learning'', which is formulated as a ``fill-in-the-blank'' cloze test and fine-tunes the prompt to maximize the likelihood of the ground-truth token~\\cite{lester2021power,liu2021gpt}. In the CV community, CoOp~\\cite{COOP} learns a continuous prompt from downstream data instead of relying on hand-crafted design.
CoCoOp~\\cite{cocoop} further extends CoOp by learning an image-conditional prompt rather than a static one to improve generalization to unseen classes. DenseCLIP~\\cite{rao2021denseclip} applies a context-aware prompting strategy to dense prediction tasks, \\textit{e.g.}, segmentation and object detection. Our proposed \\texttt{ProGrad} follows the line of \\textit{prompt-based learning} to improve both few-shot classification performance and generalization ability by aligning the gradient to the general direction, without extra model structure modification or tuning of the pre-trained model parameters.\n\n\\noindent\\textbf{Knowledge Transfer.} Forgetting mitigation by knowledge distillation or memory replay is widely deployed in incremental learning~\\cite{liu2020mnemonics,rebuffi2017icarl,qin22continual,riemer2018learning,Hu_20121_CVPR}.\nHowever, prompt-based fine-tuning is fundamentally different from incremental learning: the former assumes that VLMs have already captured all the knowledge needed in downstream tasks and the goal is to compose a domain-specific query, whereas the latter assumes that the knowledge is yet to be sufficient. \nIn addition, incremental learning requires old data from memory storage, while our prompt-based learning method has no access to the pre-training data. Other related fields that leverage gradient matching to transfer knowledge are domain generalization~\\cite{shi2022gradient,rame2021ishr} and multi-task learning~\\cite{sener2018multi,yu2020gradient}. However, their methods are not directly applicable to prompt tuning, whose transfer direction is only from general to downstream.
In the Appendix, we show how their methods fail in several ablative studies.\n\n\n\\section{Experiments}\\label{sec:exp}\nWe validate our \\texttt{ProGrad} on three problem settings: (1) few-shot classification (Section~\\ref{sec:fsl}), \n(2) base-to-new generalization (Section~\\ref{sec:base2new}), and \n(3) domain generalization (Section~\\ref{sec:domain_generalization}).\n\n\\subsection{Datasets and Implementation Details}\\label{sec:implementation}\n\n\\noindent\\textbf{Datasets.} For few-shot learning and base-to-new generalization, we follow CLIP~\\cite{CLIP} and CoOp~\\cite{COOP} to use 11 image classification datasets, \\textit{i.e.}, ImageNet~\\cite{imagenet} and Caltech101~\\cite{caltech101} for generic object classification, OxfordPets~\\cite{oxford_pets}, StanfordCars~\\cite{stanford_cars}, Flowers102~\\cite{flowers102}, Food101~\\cite{food101} and FGVCAircraft~\\cite{aircraft} for fine-grained image recognition, EuroSAT~\\cite{eurosat} for satellite image classification, UCF101~\\cite{ucf101} for action classification, DTD~\\cite{dtd} for texture classification, and SUN397~\\cite{sun397} for scene recognition. \nFor domain generalization, we use ImageNet as the source dataset and select ImageNetV2~\\cite{imagenetV2}, ImageNet-Sketch~\\cite{imagenetSketch}, ImageNet-A~\\cite{imagenetA} and ImageNet-R~\\cite{imagenetR} as the target datasets.\n\n\n\\noindent\\textbf{Training Details.} For few-shot learning, following CoOp~\\cite{COOP} and CLIP~\\cite{CLIP}, we train our \\texttt{ProGrad} with 1, 2, 4, 8 and 16 shots respectively and then evaluate the model on the full test split. \nFor domain generalization and base-to-new generalization, we evaluate 4-shot performance, which assesses the robustness under the low-shot condition.\nAll results of learning-based models are averaged over three random seeds for fair comparison.
\nFor all three settings, following CoOp~\\cite{COOP}, we use ResNet-50~\\cite{resnet} as the image encoder backbone and set the context length $M$ to 16. \nWe follow the same training epochs, training schedule and data augmentation settings as in CoOp~\\cite{COOP}.\nThe hyper-parameter $\\lambda$ is set to 1 by default, except that $\\lambda$ is set to 0.8 for 8 and 16 shots of Flowers102~\\cite{flowers102}, DTD~\\cite{dtd} and EuroSAT~\\cite{eurosat}.\nPlease refer to the Appendix for more details.\n\n\\noindent\\textbf{Baselines.} We compare \\texttt{ProGrad} with 4 baselines. (1) Zero-shot CLIP~\\cite{CLIP}, which is based on hand-crafted prompts. We follow the prompt design in CoOp. (2) Linear probe CLIP~\\cite{CLIP}, which trains a linear classifier on top of the CLIP image features. (3) CoOp~\\cite{COOP}, which learns the context prompt from data rather than hand-crafted design. (4) CoCoOp~\\cite{cocoop}, which extends CoOp by learning an image-conditional prompt instead of a static one to improve generalization. Although our method can also beat other fine-tuning methods like CLIP-Adapter~\\cite{clipadapter}, we mainly focus on comparing with \\textit{prompt-based learning} methods; the results of other fine-tuning methods are included in the Appendix. \n\\input{images\/fs_result}\n\n\\subsection{Few-Shot Classification Results}\\label{sec:fsl}\nFigure \\ref{fig:fs_result} illustrates the comparisons over 11 datasets. Overall, our \\texttt{ProGrad} achieves clear advantages over baseline models for all few-shot settings on average performance. Specifically, \\texttt{ProGrad} outperforms CoOp by $9.5\\%$, $6.9\\%$ and $5.1\\%$ on FGVCAircraft, EuroSAT and Flowers102 given 1 shot, \nand the average improvement over 11 datasets is $3.2\\%$.
These results demonstrate the anti-overfitting ability of our \\texttt{ProGrad} when the samples from downstream tasks are extremely limited.\nWhen it comes to 16-shot training, the average improvement induced by \\texttt{ProGrad} is less pronounced, at around $0.5\\%$. \nThe reason is that a sufficient number of samples from downstream tasks can effectively avoid overfitting.\nNonetheless, the average performance gains in all shot settings\nvalidate the capability of our \\texttt{ProGrad} to improve prompt learning in a data-efficient way.\n\n\\subsection{Base-to-New Generalization}\\label{sec:base2new}\n\\input{tables\/base2new}\n\nZhou \\textit{et al.}~\\cite{cocoop} claim that the static context learned by CoOp is not able to generalize to unseen classes. They further propose CoCoOp, which uses an image-conditioned prompt to tackle this challenge. \nCompared to CoOp and CoCoOp, our \\texttt{ProGrad} can also generalize well to the new classes with the same architecture design as CoOp.\nTo evaluate the generalization performance from seen classes to unseen classes, \nwe equally divide the classes into two groups, \\textit{i.e.}, base classes and new classes. All the methods are trained only on base classes and tested on both base classes and novel classes. We also report the harmonic mean of base-class and new-class accuracies to evaluate the trade-off. \n\nFrom the results shown in Table~\\ref{tab:results_generalization} (a), we observe that our \\texttt{ProGrad} achieves the best average performance in terms of all metrics. Specifically, we find that \\texttt{ProGrad} achieves a clear harmonic mean improvement on all datasets compared to CoOp and outperforms CoCoOp on 9 out of 11 datasets. The exception is EuroSAT, where our \\texttt{ProGrad} underperforms CoCoOp by $4.98\\%$. The possible reason is the imprecise mean estimation of the new-class accuracy.
We check the confidence interval at $95\\%$ for the EuroSAT dataset and find the results are highly unstable, \\textit{i.e.}, $44.67 \\pm 8.17\\%$ for \\texttt{ProGrad} and $50.85 \\pm 7.65\\%$ for CoCoOp. The variances are much larger than those on other datasets, \\textit{e.g.}, $89.01 \\pm 1.05\\%$ for \\texttt{ProGrad} and $86.90 \\pm 1.56\\%$ for CoCoOp on Caltech101. We refer readers to the Appendix for the confidence intervals of all the datasets.\nBesides, all prompt-based methods perform worse than zero-shot CLIP on Food101.\nOne possible reason is that a Food101 image may include multiple types of foods, side dishes or beverages, which leads to overfitting when training on few samples.\nNonetheless, our \\texttt{ProGrad} still outperforms CoOp and CoCoOp on Food101.\n\n\\subsection{Domain Generalization}\\label{sec:domain_generalization}\n\\input{tables\/domain_generalization}\nThe domain generalization setting evaluates the generalization ability of models on a target domain that is different from the source domain. Conventional fine-tuning on limited data from a specific domain may mislead the model into learning spurious correlations or in-distribution patterns, resulting in a biased model that under-performs in unseen domains~\\cite{bahng2020learning,nam2020learning,CLIP}.\nIn contrast, zero-shot CLIP does not exploit such spurious correlations or patterns, since it is not fine-tuned on that distribution~\\cite{CLIP}. \nSince our \\texttt{ProGrad} uses the general knowledge from the pre-trained domain to regularize the fine-tuning on a specific distribution, it is robust to the distribution shift.\nAs shown in Table~\\ref{tab:robustness}, our \\texttt{ProGrad} clearly outperforms CoOp on all target datasets and surpasses CoCoOp on 3 out of 4 target datasets. \nNote that CoCoOp achieves competitive performance for domain generalization with dynamic instance-conditional prompts.
However, the calculation of instance-conditional prompts heavily increases the training time. In contrast, our \\texttt{ProGrad} simply uses a static prompt to reduce the training cost, and still outperforms CoCoOp (see Table~\\ref{tab:time}).\n\n\\subsection{Further Analysis}\\label{sec:ablation}\n\\input{images\/failurecase}\n\\noindent\\textbf{Failure cases}. \nWe further analyze the failure cases where \\texttt{ProGrad} models predict incorrectly but CoOp gives the right predictions.\nSpecifically, we count the percentage of failure cases on which the zero-shot CLIP model also fails in Figure~\\ref{fig:failurecase}. \nWe find that a high proportion of the failure cases are also mis-classified by the zero-shot CLIP model (red bar in Figure~\\ref{fig:failurecase}). This observation indicates that the general direction $\\bm{G}_\\text{kl}$ generated by imprecise zero-shot general knowledge is detrimental to model generalization.\nAs the number of samples increases, \nthe downstream knowledge represented by $\\bm{G}_\\text{ce}$ becomes more accurate and unbiased. As expected, we observe that the red bar becomes larger. \n\n\\input{images\/angle}\n\\noindent \\textbf{Conflict of knowledge.} \n\\texttt{ProGrad} requires the updated gradient direction to form an acute angle with the general knowledge gradient direction. \nWe explore how this constraint helps defuse the conflict between domain-specific and general knowledge by visualizing the angle between their representative gradients during training (the angle between $\\bm{G}_\\text{ce}$ and $\\bm{G}_\\text{kl}$).
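The monitored quantity is simply the angle between the two flattened gradient vectors; a minimal NumPy sketch (the function name is ours, not from the paper's code) is:

```python
import numpy as np

def grad_angle_deg(g1, g2):
    """Angle in degrees between two flattened gradient vectors."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    cos = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    # Clip guards against floating-point drift slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Values near 90 degrees indicate independent directions, while values above 90 degrees indicate conflicting knowledge.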
\nAs depicted in Figure~\\ref{fig:ablation-2}, for normal training without $\\bm{G}_\\text{prograd}$, the angle between $\\bm{G}_\\text{ce}$ and $\\bm{G}_\\text{kl}$ converges to 90 degrees due to the fact that ``all high-dimensional random vectors are almost always orthogonal to each other''~\\cite{JMLR:cai13a}.\nIntuitively, without any constraint, the optimization direction $\\bm{G}_\\text{ce}$ is independent of the general direction, and the average angle would be around 90 degrees (\\textit{i.e.}, orthogonal). \nIn contrast, utilizing $\\bm{G}_\\text{prograd}$ during training leads the angle to finally converge to an obtuse value. \nThe reason is that $\\bm{G}_\\text{prograd}$ steers the model to learn the downstream knowledge aligned with the general knowledge, which leads to insufficient learning of the downstream knowledge that is incompatible with the general knowledge. As training stabilizes, $\\bm{G}_\\text{ce}$ struggles to learn the conflicting knowledge, reflected by an obtuse angle with $\\bm{G}_\\text{kl}$. Thanks to \\texttt{ProGrad}, we discard such conflicting knowledge to avoid forgetting. \n\n\\noindent\\textbf{Comparison with conventional knowledge distillation.} Since our \\texttt{ProGrad} utilizes the gradient direction of the knowledge distillation loss as regularization, one may wonder whether our \\texttt{ProGrad} is simply conventional knowledge distillation.\nWe answer this question by investigating whether simple knowledge distillation (\\textit{i.e.}, $\\mathcal{L}_\\text{total}=\\mathcal{L}_\\text{ce}+\\mathcal{L}_\\text{kd}$) can achieve similar performance to our \\texttt{ProGrad}. We repeat the few-shot experiments on 11 datasets and report the average results in Table~\\ref{tab:compare_kd}. Overall, \\texttt{ProGrad} outperforms KD for various few-shot settings. \nAlthough KD promotes CoOp in low-shot settings (\\textit{e.g.}, 1, 2 and 4 shots), the performance drops when the number of shots is large (see 8 and 16 shots).
These results indicate that our \\texttt{ProGrad} works differently from KD and is more robust to the number of training samples. Please see the Appendix for detailed results on each dataset. \n\n\\input{tables\/kd_cosine}\n\\noindent\\textbf{Applying \\texttt{ProGrad} to the conventional fine-tuning paradigm.} In this work, we focus on \\textit{prompt-based learning}. We are also interested in whether \\texttt{ProGrad} can be applied to the conventional ``pre-train then fine-tune'' paradigm. Specifically, we plug in an additional cosine classifier on top of the visual backbone and compare the performance by conducting the few-shot experiments. Table~\\ref{tab:compare_ProGrad} shows that conventional fine-tuning can benefit from our \\texttt{ProGrad}. The implementation details and the results on each dataset are provided in the Appendix.\n\n\\input{tables\/training_time}\n\\textbf{Training efficiency}. For prompt-based methods, we compare the average 1-shot training time over all 11 datasets in Table~\\ref{tab:time}. We find that \\texttt{ProGrad} requires much less training time than CoCoOp (1.2 vs 39.5). This is because CoCoOp is based on instance-wise tunable parameters, which cannot be optimized in parallel. Also, \\texttt{ProGrad} is slightly more time-consuming than CoOp (1.2 vs 1) due to the gradient decomposition overhead.\n\\section{Methodology}\\label{sec:method}\n\nIn this section, we introduce the preliminary concepts of hand-crafted prompt-based zero-shot inference and prompt-based learning, and then present our proposed Prompt-aligned Gradient solution to align the domain knowledge with the general knowledge for few-shot generalization.\n\n\\subsection{Preliminaries}\n\\noindent\\textbf{Contrastive language-image pre-training (CLIP)}~\\cite{CLIP} adopts a contrastive language-image pre-training paradigm on a tremendous number of pairs of images and natural language descriptions.
For contrastive learning, associated image-text pairs are taken as positive samples, while non-associated pairs are regarded as negative samples. The contrastive objective maximizes the similarity of positive pairs while minimizing the similarity of negative pairs.\n\n\\noindent\\textbf{Zero-shot transfer inference} adapts the pre-trained CLIP model to downstream tasks without fine-tuning the model. Taking image classification as an example, zero-shot transfer is enabled by formulating the classification task as an image-text matching problem, where the text is obtained by extending the ``\\texttt{[CLASS]}'' name using a template like ``\\texttt{a photo of a [CLASS].}''.\nCLIP~\\cite{CLIP} finds that such a simple template narrows the distribution gap to pre-training text inputs. The image-class matching score is measured based on the cosine similarity $<\\bm{w}_i,\\bm{f}>$ between the image feature $\\bm{f}$ and the class-extended text feature $\\bm{w}_i$ for the $i$-th class. The image feature $\\bm{f}$ for image $\\bm{x}$ is extracted by the image encoder, while the text feature $\\bm{w}_i$ for the $i$-th class is obtained by feeding the prompt description into the text encoder. The probability for the $i$-th class is obtained as\n\\begin{equation}\n p_\\text{zs}(\\bm{w}_i|\\bm{x})=\\frac{\\exp(<\\bm{w}_i,\\bm{f}>\/\\tau)}{\\sum_{j=1}^K\\exp(<\\bm{w}_j,\\bm{f}>\/\\tau)},\n\\end{equation}\nwhere \n$K$ denotes the number of classes, and $\\tau$ is a temperature learned by CLIP~\\cite{CLIP}.\n\n\\noindent\\textbf{Prompt-based learning} further strengthens the transfer ability of the CLIP model and avoids prompt engineering by automatically learning the prompt given a few samples from the downstream task. Different from zero-shot transfer, which uses a fixed hand-crafted prompt, CoOp~\\cite{COOP} \nconstructs and fine-tunes a set of $M$ continuous context vectors $\\bm{v}=\\{\\bm{v}_1,\\bm{v}_2,...,\\bm{v}_M\\}$ as the tunable prompt.
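The zero-shot probability above is just a temperature-scaled softmax over cosine similarities; the following NumPy sketch illustrates it (names and shapes are our own illustrative assumptions, not CLIP's API).

```python
import numpy as np

def zero_shot_probs(f, W, tau=0.01):
    """Zero-shot class probabilities for one image (sketch).

    f: (D,) image feature and W: (K, D) class text features, both assumed
    L2-normalized so that W @ f gives the cosine similarities <w_i, f>;
    tau is the softmax temperature.
    """
    f, W = np.asarray(f, float), np.asarray(W, float)
    logits = W @ f / tau
    logits = logits - logits.max()   # stabilize the softmax numerically
    exp = np.exp(logits)
    return exp / exp.sum()
```

The class whose prompt feature is most similar to the image feature receives the highest probability.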
Specifically, the prompt $\\bm{t}_i=\\{\\bm{v}_1,\\bm{v}_2,...,\\bm{v}_M, \\bm{c}_i\\}$ combines the learnable context vectors $\\bm{v}$ and the class token embedding $\\bm{c}_i$, and is fed to the text encoder $g(\\cdot)$. CoOp optimizes the static context vectors $\\bm{v}$ by minimizing the negative log-likelihood of the ground-truth token:\n\\begin{equation}\\label{eq:ce}\n \\mathcal{L}_\\text{ce}(\\bm{v})=-\\sum_i \\bm{y}_i \\log p(\\bm{t}_i|\\bm{x}),\\quad p(\\bm{t}_i|\\bm{x})=\\frac{\\exp(<g(\\bm{t}_i),\\bm{f}>\/\\tau)}{\\sum_{j=1}^K\\exp(<g(\\bm{t}_j),\\bm{f}>\/\\tau)},\n\\end{equation}\nwhere $\\bm{y}$ denotes the one-hot ground-truth annotation and $K$ denotes the number of classes.\n\n\\subsection{Prompt-aligned Gradient}\\label{sec:prograd_method}\n\\input{images\/pipeline}\nAs we introduced in Section~\\ref{sec:intro}, CoOp faces the challenge that the transfer performance drops when the number of annotations is very limited (\\textit{e.g.}, one per class), and even underperforms the zero-shot transfer. Also, CoOp heavily relies on anti-overfitting techniques such as early stopping and data augmentation.
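As a sketch of how the CoOp-style cross-entropy can be computed for a single image, the following NumPy snippet builds one prompt per class from the shared context vectors and the class token, encodes it, and scores it against the image feature; the function names and the toy text-encoder interface are our own illustrative assumptions, not the released CoOp code.

```python
import numpy as np

def coop_loss(v, c, f, g, y, tau=1.0):
    """Negative log-likelihood of the ground-truth class (sketch).

    v: (M, E) learnable context vectors; c: (K, E) class token embeddings;
    f: (D,) image feature; g: text encoder mapping an (M+1, E) prompt to a
    (D,) text feature; y: ground-truth class index.
    """
    v, c, f = np.asarray(v, float), np.asarray(c, float), np.asarray(f, float)
    # Build one prompt per class: context vectors followed by the class token.
    w = np.stack([g(np.vstack([v, c[i:i + 1]])) for i in range(c.shape[0])])
    logits = w @ f / tau                 # <g(t_i), f> / tau for each class
    logits = logits - logits.max()       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[y])
```

In practice the encoder $g$ is the frozen CLIP text transformer and only the context vectors receive gradients.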
To overcome the over-fitting challenge, we propose an effective and efficient fine-tuning paradigm \\texttt{ProGrad} to align the few-shot downstream knowledge with the large-scale general knowledge.\n\nMotivated by the success of knowledge distillation~\\cite{phuong2019towards,hinton2015distilling} and cross-domain knowledge~\\cite{niu2021counterfactual,niu2021introspective,zhu2021cross} in knowledge transfer, \nwe leverage \nthe zero-shot CLIP predictions as the general knowledge, and compare the fine-tuned predictions with the general knowledge to regularize the gradient direction.\nSpecifically, we obtain the domain-specific direction by calculating the cross-entropy $\\mathcal{L}_\\text{ce}(\\bm{v})$ between the model prediction $p(\\bm{t}_i|\\bm{x})$ and the ground-truth $\\bm{y}$ according to Eq.~\\eqref{eq:ce}, and the general knowledge direction based on the Kullback-Leibler (KL) divergence between $p(\\bm{t}_i|\\bm{x})$ and the zero-shot CLIP prediction $p_\\text{zs}(\\bm{w}_i|\\bm{x})$:\n\\begin{equation}\n \\mathcal{L}_\\text{kl}(\\bm{v})=-\\sum_i p_\\text{zs}(\\bm{w}_i|\\bm{x})\\log \\frac{p(\\bm{t}_i|\\bm{x})}{p_\\text{zs}(\\bm{w}_i|\\bm{x})}.\n\\end{equation}\nWe denote the gradients of $\\mathcal{L}_\\text{kl}(\\bm{v})$ and $\\mathcal{L}_\\text{ce}(\\bm{v})$ as $\\bm{G}_\\text{kl}=\\nabla_{\\bm{v}}\\mathcal{L}_\\text{kl}(\\bm{v})$ and \n$\\bm{G}_\\text{ce}=\\nabla_{\\bm{v}}\\mathcal{L}_\\text{ce}(\\bm{v})$, respectively. \nThe relations between $\\bm{G}_\\text{kl}$ and $\\bm{G}_\\text{ce}$ are two-fold. (1) Their angle is smaller than 90\\degree (Figure~\\ref{fig:pipeline}(a)), which indicates that the optimization direction of few-shot downstream knowledge does not conflict with general knowledge. In this case, we safely set the updated gradient direction $\\bm{G}_\\text{prograd}$ as $\\bm{G}_\\text{ce}$. 
(2) Their angle is larger than 90\\degree (Figure~\\ref{fig:pipeline}(b)), which indicates that the few-shot downstream knowledge conflicts with the general knowledge. In other words, optimizing the context vectors following $\\bm{G}_\\text{ce}$ will lead to the forgetting of the pre-trained general knowledge.\nIn this case, \nwe project $\\bm{G}_\\text{ce}$ onto the direction orthogonal to $\\bm{G}_\\text{kl}$ to optimize the model for classification, which avoids increasing the KL loss. \nOur \\texttt{ProGrad} strategy is mathematically formulated as:\n\\begin{equation}\\label{eq:projgrad}\n\\bm{G}_\\text{prograd}=\n\\left\\{\n \\begin{array}{ll}\n \\bm{G}_\\text{ce}, & \\text{if}\\ \\bm{G}_\\text{ce} \\cdot\\bm{G}_\\text{kl} \\geq 0 \\\\\n \\bm{G}_\\text{ce} - \\lambda \\cdot \\frac{\\bm{G}_\\text{ce} \\cdot \\bm{G}_\\text{kl}}{\\|\\bm{G}_\\text{kl}\\|^2}\\bm{G}_\\text{kl},& \\text{otherwise}.\n \\end{array}\n\\right.\n\\end{equation}\nFigure~\\ref{fig:pipeline}(c) illustrates the pipeline of our \\texttt{ProGrad}.\nInstead of updating the context vectors using $\\bm{G}_\\text{ce}$ as in CoOp~\\cite{COOP}, we optimize the context vectors using $\\bm{G}_\\text{prograd}$, which regularizes the gradient direction to prevent overfitting to the few-shot downstream samples. $\\lambda \\in [0,1]$ in Eq.~\\eqref{eq:projgrad} is a hyper-parameter controlling the impact of the general knowledge, \\textit{i.e.}, a smaller $\\lambda$ weakens the general knowledge regularization. When the pre-trained knowledge has a large gap from the domain-specific knowledge, or the downstream domain has a large number of training samples, the general direction might not be an accurate guide for the learning process. From our empirical studies, we find that $\\lambda = 1$ works well for most of our experiments (see Section~\\ref{sec:implementation} for the $\\lambda$ values). 
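The update rule in Eq.~\eqref{eq:projgrad} can be sketched in a few lines of NumPy; `prograd_update`, `g_ce`, and `g_kl` are illustrative names for flattened gradient vectors.

```python
import numpy as np

# Sketch of the ProGrad rule: keep G_ce when it agrees with the general-
# knowledge direction G_kl; otherwise subtract the (lambda-scaled) component
# of G_ce along G_kl, so the update no longer increases the KL loss.
def prograd_update(g_ce, g_kl, lam=1.0):
    dot = float(g_ce @ g_kl)
    if dot >= 0:                        # no conflict: use G_ce as-is
        return g_ce
    return g_ce - lam * dot / float(g_kl @ g_kl) * g_kl

# Conflicting toy case: with lam=1 the result is orthogonal to g_kl.
g = prograd_update(np.array([1.0, -1.0]), np.array([0.0, 1.0]))
```

With $\lambda=1$ and a conflicting pair, the projected gradient has zero component along $\bm{G}_\text{kl}$, matching the geometric picture in Figure~\ref{fig:pipeline}(b).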
Further analysis of $\\lambda$ is included in Appendix.\n\nFor implementation, we first initialize the learnable context vector $\\bm{v}$ with the word embeddings of the zero-shot hand-crafted prompt. \nConcretely, if the context length $M$ is 16 and the hand-crafted prompt is ``\\texttt{a photo of a}'', which only has 4 tokens, we initialize the former 12 context vectors with zeros and the last 4 context vectors with the word embedding of ``\\texttt{a photo of a}''. We also provide the theoretical generalization error bound analysis for \\texttt{ProGrad} in Appendix, \\textit{i.e.}, under the assumption that the pre-trained domain and the downstream domain gap is small, we demonstrate that our \\texttt{ProGrad} can narrow the generalization error bound compared to conventional fine-tuning.\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzausp b/data_all_eng_slimpj/shuffled/split2/finalzzausp new file mode 100644 index 0000000000000000000000000000000000000000..2ae5aa7c76b780d2b188ffc6a8a1b592b3b25dd1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzausp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nMany machine learning and information retrieval systems rely on distance metrics that can accurately estimate the semantic similarity between two objects of interest. Functions such as Euclidean distance and cosine similarity when applied directly to complex input domains, like images or audio, are poor measures of semantic relatedness. Hence, machine learning has been applied to the problem of constructing metrics suited for specific domains.\n\nRecent advances in this field have taken advantage of the superior modelling capacity of deep neural networks to learn complex relations between data objects. While methods based on deep learning are able to represent more intricate metrics compared to linear Mahalanobis methods, they also take considerably longer to train. 
The objective of this work is to find a method to accelerate the training of deep learning approaches to similarity metric learning.\n\nCurrent metric learning algorithms based on deep neural networks rely on propagating multiple training examples through the network simultaneously in order to compute the gradient of an objective function defined over the network output. For example, one could attempt to maximise the Euclidean distance between the network outputs for dissimilar instances, while also trying to minimise the Euclidean distance between the computed representations for instances that are similar. This forces one to train the network on pairs of examples---something that greatly increases the effective training set size, and therefore the time taken for the network to converge towards a good solution.\n\nMany techniques are evaluated on classification benchmark datasets, where each instance is labelled with a class and two instances are considered similar if and only if they share the same class. By evaluating metrics in this way there is an implicit assumption that transitivity is desired---which for some tasks is justified---but this is not always the case. For example, consider three images: A, B, and C. It could be the case that A and B are considered similar because they have similar backgrounds. It could also be the case that B and C are similar because the same object appears in the foreground of both, despite the rest of the scene being vastly different. If transitivity were to be enforced then A and C would be considered similar, even though there is no reason for that to be the case.\n\nWe are interested in both transitive and non-transitive relations, such as those found in multi-label classification datasets. Elaborating on the previous example: images A and B would be given a label indicating what type of background they have, while B and C would be given a label indicating what foreground object appears in the image. 
In this case B has two labels, and hence the task fits into the multi-label classification paradigm. However, in this work we take an even more general view of the problem definition. We assume only that the information provided consists of pairwise similarity or dissimilarity constraints. This generalised view enables the use of more diverse data collection strategies, such as the relevance feedback methods commonly used in information retrieval systems.\n\nIn our approach to similarity metric learning we acknowledge that there are latent classes within the data; however, no explicit knowledge of these classes is required. By taking advantage of the existence of these latent classes, we first learn the structure of a target vector space for an embedding function, and subsequently learn a model that performs the embedding. The compute-intensive part of our algorithm does not operate on pairs of feature vectors, and hence results in a computationally cheaper approach to learning instance similarity.\n\nWe first review some related methods in Section~\\ref{sec:related-work}, and then in Section~\\ref{sec:definition} we describe how the problem can be defined in terms of binary relations. Section~\\ref{sec:method} describes our similarity metric learning method in detail. In Section~\\ref{sec:experiments} we empirically demonstrate that our method converges considerably faster than other conventional methods and that the final accuracy is higher on both intrinsic and extrinsic evaluation tasks.\n\n\\section{Related Work}\n\\label{sec:related-work}\nMetric learning is a well-established area, and much research has been put into developing sophisticated algorithms that can learn instance similarity. Metric learning techniques relevant to our work can be roughly divided into two categories: Mahalanobis based methods and neural network based methods. 
Notable work in both of these areas are described, as the neural network approaches are often generalisations of the linear Mahalanobis methods to nonlinear models.\n\n\\subsection{Mahalanobis Based Methods}\nThe general form of a Mahalanobis distance metric parameterised by the matrix $\\textbf{M}$ and defined over the set $X$ is given in Equation~\\ref{eq:mahalanobis}, where $\\vec x_i, \\vec x_j \\in X$. The algorithms based on this model primarily differ on how the linear transform is optimised.\n\n\\begin{equation}\n\tD(\\vec x_i, \\vec x_j) = \\sqrt{(\\vec x_i - \\vec x_j)^{\\top} \\textbf{M} (\\vec x_i - \\vec x_j)}\n\t\\label{eq:mahalanobis}\n\\end{equation}\n\nThe large margin nearest neighbours (LMNN)~\\cite{weinberger2005} algorithm employs semidefinite programming to optimise a loss function composed of two terms. Specifically, there are two evaluations of Equation~\\ref{eq:mahalanobis}. One term draws very similar instances together, and the other encourages a margin to be formed between dissimilar instances. As the name suggests, the motivation for the development of this algorithm was to improve the accuracy of $k$-Nearest Neighbours ($k$-NN) classifiers.\n\nInformation-theoretic metric learning is another Mahalanobis technique, and was introduced by~\\cite{davis2007}. This criterion aims to minimise the Kullback-Leibler divergence between two multivariate Gaussians, of which the inverse covariance matrices are used to define Mahalanobis distance metrics. One of these Gaussians is defined in advance and acts as a regulariser, while the other is treated as a free parameter and optimised subject to constraints derived from similarity information.\n\nNeighbourhood Components Analysis (NCA)~\\cite{goldberger2004} is another method developed to be used in conjunction with $k$-NN classifiers. 
This technique attempts to find the matrix, $\\textbf{M}$, by minimising a differentiable loss function that approximates the behaviour of $k$-NN in the transformed feature space.\n\n\\subsection{Neural Network Based Methods}\nThe first application of neural networks to metric learning was with the introduction of Siamese networks~\\cite{bromley1993}. The original authors initially applied these models to signature verification; however, others have since used this technique for many other domains such as face recognition and verification~\\cite{taigman2014}. Siamese networks are composed of two standard feed-forward neural networks that have the same topology and share the same parameters. The output of these subnetworks is then compared with the cosine similarity function to produce the final output of the network, indicating whether the instances propagated through the two subnetworks are similar or not. During training the network is presented with pairs of instances labelled either positive or negative. For positive pairs the cosine similarity is maximised, whereas for negative pairs it is minimised.\n\nFollowing on from this, \\cite{chopra2005} developed a variant of Siamese networks that compares the output of the subnetworks using Euclidean distance. This method was then further improved by~\\cite{hadsell2006}, resulting in the contrastive loss function for Siamese networks, as given in Equation~\\ref{eq:contrastive}. This function is then averaged for all training pairs to create an objective function for training the network.\n\n\\begin{equation}\n\\label{eq:contrastive}\nL(\\vec x_i, \\vec x_j, y) = y \\|f(\\vec x_i) - f(\\vec x_j)\\|_2^2 + (1 - y) \\max(0, m - \\|f(\\vec x_i) - f(\\vec x_j)\\|_2)^2\n\\end{equation}\nwhere $\\vec x_i$ and $\\vec x_j$ are two instances, $y$ is the label indicating whether they are similar or dissimilar, $f$ performs a forward propagation through one of the subnetworks, and $m$ is a margin. 
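As a concrete reference, the contrastive loss can be sketched in NumPy as follows; this is a toy illustration operating on precomputed embeddings, and `contrastive_loss` is a hypothetical name.

```python
import numpy as np

# Toy sketch of the contrastive loss of Eq. (eq:contrastive) for a single
# pair of precomputed embeddings f(x_i) and f(x_j).
def contrastive_loss(e_i, e_j, y, m=1.0):
    d = np.linalg.norm(e_i - e_j)   # Euclidean distance between embeddings
    return y * d ** 2 + (1 - y) * max(0.0, m - d) ** 2

# A similar pair is penalised by its squared distance; a dissimilar pair is
# penalised only if it falls inside the margin m.
sim = contrastive_loss(np.array([0.0, 0.0]), np.array([0.3, 0.4]), y=1)  # d = 0.5
dis = contrastive_loss(np.array([0.0, 0.0]), np.array([3.0, 4.0]), y=0)  # d = 5
```

Note how the hinge term vanishes for dissimilar pairs already separated by more than the margin, so only violating negatives contribute gradient.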
The value of $m$ simply sets the scale of the resulting vector space and is typically set to 1.\n\nA further generalisation of Siamese networks is the class of methods that consider three instances per loss function evaluation, the so-called ``triplet'' loss functions~\\cite{gomez2015,schroff2015}. These methods attempt to minimise the distance between a target instance and another positive example, while simultaneously maximising the distance between the same target instance and a negative example. Some variants of this approach allow one to define the ground truth labels in terms of relative similarity. That is, rather than having hard constraints specifying similarity or dissimilarity, similarity is defined as an instance being more similar to one instance than another.\n\nThere is also an extension of NCA to nonlinear transformations of the input data~\\cite{salakhutdinov2007}. This method can be viewed as a probabilistic variant of Siamese networks. The nonlinear transformation models used in the original exposition of this method were stacked Restricted Boltzmann Machines, initialised using unsupervised pretraining and subsequently fine-tuned using the NCA loss function.\n\nA common theme that unifies all the approaches described thus far is the need to train on pairs (or triples) of instances. This increases the effective size of the training set quadratically, greatly slowing down the training time. Our proposal to decouple the process of learning the embeddings from learning the function that performs the embedding is not entirely new. The work of~\\cite{lin2014} utilises a similar two-step process for learning a hashing function. In their work the embeddings are in fact bit strings, and the function used to generate the hash codes takes the form of boosted decision trees. 
They also use a greedy discrete optimisation procedure to find the target hash codes, rather than a numerical optimisation method to find real-valued vectors.\n\n\\section{Problem Definition}\n\\label{sec:definition}\nSimilarity metric learning algorithms are typically trained on pairs of objects labelled as being either positive, for pairs of similar objects, or negative, for pairs of dissimilar objects. More formally, we have a set of objects that we call $X$. We also have a set, $Z \\subset X \\times X \\times \\{1, 0\\}$, that contains pairs of objects from $X$ coupled with labels indicating whether they are similar or not. One can say that $Z$ represents a binary relation where some entries of the relation matrix are unknown.\n\nThis is a convenient problem formulation that naturally gives rise to models that are trained on pairs of objects at a time. The problem with this approach lies in the efficiency of pairwise training. The training set size is effectively squared, resulting in a representation of the training data that does not efficiently encode the useful information required to construct an accurate model. Even though all the information is available to the model, this inefficient encoding significantly slows down the training procedure.\n\nOne must inevitably train on pairs at some point in the metric learning process; however, the goal of our work is to demonstrate a method for modifying pre-existing loss functions in such a way that the pairwise training comprises a negligible fraction of the runtime.\n\nA key advantage of posing similarity learning as the task of inferring a binary relation is the ability to decompose the training data into classes. This idea is most often studied in conjunction with equivalence relations, where the equivalence classes form a disjoint partition over the original training set. In this context the relation must exhibit reflexivity, symmetry, and transitivity---the last of which is somewhat limiting. 
One can drop the requirement for transitivity by instead considering binary tolerance relations, which are only reflexive and symmetric. As with equivalence relations, it is possible to decompose the training data into a set of classes (termed tolerance classes), however these classes need not be disjoint.\n\nRather than partitioning the training set into, potentially overlapping, subsets that each correspond to a tolerance class, we find a target vector for each instance. These vectors are constrained such that the targets of instances that are related will be close in the target vector space and unrelated instances will be far apart. The rationale behind this is that each tolerance class will be mapped to a cluster in the target vector space, however we can employ a technique that does not have to explicitly determine to which tolerance classes each instance belongs.\n\n\\section{Method}\n\\label{sec:method}\nInstead of training a large Siamese network on pairs of---potentially quite large---feature vectors in the training set, we compute a target vector for each training instance. To do this, we completely disregard any information provided by the features associated with each instance and instead try to solve an optimisation problem that encourages the target vectors of similar instances to be clustered together. After the target vectors have been computed, a multi-target regression model can be trained to embed instances into the target vector space. Provided that a suitable loss function has been chosen for learning the target vectors, some predefined distance metric applied to the target vector space will result in a system capable of determining instance similarity. 
The assumption that underlies this method is that the confusability between the latent classes in the dataset does not provide information that is useful for constructing a metric.\n\n\\subsection{Learning Target Vectors}\nWe now describe the method more formally, but still with enough abstraction that the generality is obvious. Consider $L(\\vec x_i, \\vec x_j, y_{ij})$, a differentiable loss function over a pair of instances. Similarity metric learning algorithms that rely on embedding data into a space where the semantic similarity is more salient usually rely on several components: the objective function, the model, and a fixed distance such as Euclidean or Manhattan distance. The use of a particular objective function generally implies that a certain fixed distance metric should be used on the embedded data. For example, when using the contrastive loss given in Equation~\\ref{eq:contrastive} it is fairly obvious that Euclidean distance (or squared Euclidean distance) is the intended metric. This leaves two components; the objective function and the embedding model. For this class of metric learning algorithms, where an embedding function is a primary component, one can write the loss function as $L(f_\\Theta(\\vec x_i), f_\\Theta(\\vec x_j), y_{ij})$, where $f_\\Theta$ is an embedding function parameterised by $\\Theta$.\n\nConsider the scenario where $f_\\Theta$ is modelled as a lookup table, so each feature vector, $\\vec x_i$, is simply mapped to a response vector, $t_i$, contained within $\\Theta$. 
One can then solve the following optimisation problem:\n\n\\begin{equation}\n\t\\Theta^\\ast = \\underset{\\Theta}{\\operatorname{arg}\\,\\operatorname{min}} \\frac{1}{|Z|} \\sum_{(\\vec x_i, \\vec x_j, y_{ij}) \\in Z} L(f_\\Theta(\\vec x_i), f_\\Theta(\\vec x_j), y_{ij})\n\\end{equation}\n\nBecause we only consider scenarios where $L$ is differentiable, this problem could be solved with any of a large variety of numerical optimisation algorithms.\n\nIt is easy to see why systems such as this are not used in practice: as soon as a novel feature vector is encountered, one for which no entry exists in the lookup table, the model is not capable of making a prediction. However, we can take each of the learned target vectors and train a second model, $g_\\Omega$, that implements a more generalisable mapping. The new model can be created with any multi-target regression algorithm, but we focus on the use of deep neural networks in this paper. Although the original model for $f_\\Theta$ must be trained on pairs of instances, due to the composition with $L$, this new model does not require pairwise training.\n\nWe investigate two options for $L$: the contrastive loss given in Equation~\\ref{eq:contrastive}, and the loss function we define in Equation~\\ref{eq:dotloss}:\n\n\\begin{equation}\n\t\\label{eq:dotloss}\n\tL(f_\\Theta(\\vec x_i), f_\\Theta(\\vec x_j), y_{ij}) = \\frac{1}{2} (y_{ij} - f_\\Theta(\\vec x_i) \\cdot f_\\Theta(\\vec x_j))^2\n\\end{equation}\nwhere $\\cdot$ represents the dot product between two vectors. Using this loss function is equivalent to factoring the adjacency matrix of the underlying binary relation defined on the training set. 
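To make the first phase concrete, here is a minimal NumPy sketch that learns target vectors for a toy relation by plain gradient descent on the dot-product loss; the names (`learn_targets`, `pair_loss`) and the choice of plain gradient descent are purely illustrative.

```python
import numpy as np

# Minimal sketch of the first phase: f_Theta is a lookup table T with one
# target vector per instance, optimised on the dot-product loss of
# Eq. (eq:dotloss). Plain gradient descent is used here for illustration.
def pair_loss(T, pairs):
    return 0.5 * sum((y - T[i] @ T[j]) ** 2 for i, j, y in pairs)

def learn_targets(n, dim, pairs, steps=3000, lr=0.05, seed=0):
    T = np.random.default_rng(seed).normal(scale=0.5, size=(n, dim))
    for _ in range(steps):
        G = np.zeros_like(T)
        for i, j, y in pairs:
            r = T[i] @ T[j] - y   # residual of 0.5 * (y - t_i . t_j)^2
            G[i] += r * T[j]
            G[j] += r * T[i]
        T -= lr * G
    return T

# Toy relation: instances {0,1} are similar, {2,3} are similar, cross pairs not.
Z = [(0, 1, 1), (2, 3, 1), (0, 2, 0), (1, 3, 0)]
T0 = np.random.default_rng(0).normal(scale=0.5, size=(4, 2))
T = learn_targets(4, 2, Z)
```

The targets of related instances are drawn towards mutual dot products of one, and unrelated targets towards orthogonality, so each latent class ends up occupying its own region of the target space.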
We use the Adam optimiser~\\cite{kingma2014} to minimise both of these loss functions in a matter of seconds for each dataset we consider in our empirical evaluation.\n\n\n\\subsection{Learning an Embedding Function}\nOnce the target vectors have been found, a regression model can be trained to minimise the squared error between the embedding of each instance and the corresponding target vector, as shown in Equation~\\ref{eq:squared-error}, where $g_\\Omega$ is the multi-target regression model. Our technique does not rely on a specific regression algorithm, but is instead very general. It is possible to use any multi-target regression method to learn a mapping from features to the learned target vectors; however, our focus is on the performance of neural networks for metric learning and hence that is our regression algorithm of choice.\n\n\\begin{equation}\n\t\\label{eq:squared-error}\n\t\\Omega^\\ast = \\underset{\\Omega}{\\operatorname{arg}\\,\\operatorname{min}} \\frac{1}{2|X|} \\sum_{\\vec x_i \\in X} (f_{\\Theta^\\ast}(\\vec x_i) - g_{\\Omega}(\\vec x_i))^2\n\\end{equation}\n\nThe original Siamese network introduced by~\\cite{bromley1993} applied the cosine similarity function to embeddings of instances, as computed by the branches of the network, in order to determine whether instances are related. In this case a value of one means the instances are very similar, and a value of negative one indicates the instances are very dissimilar. Networks trained with the contrastive loss replace the cosine similarity function with Euclidean distance, and the interpretation of the resulting real value is also changed. A value of zero indicates a high degree of similarity, and as the value becomes larger the instances are considered increasingly dissimilar.\n\nThe target vectors learned using the technique presented herein are found by optimising a loss function based on either the squared Euclidean distance or the dot product. 
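The second phase admits any multi-target regressor. Purely as an illustration of Equation~\ref{eq:squared-error} (the regressor used in this paper is a deep network), an ordinary least-squares fit of synthetic features to target vectors looks like this:

```python
import numpy as np

# Illustrative second phase: fit a linear multi-target regressor g_Omega
# mapping features to the learned target vectors. The data are synthetic
# and the linear model is a stand-in for a deep network.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))      # hypothetical instance features
T = X @ rng.normal(size=(5, 3))    # stand-in for learned target vectors

# Least-squares solution of min_W ||X W - T||^2
W, *_ = np.linalg.lstsq(X, T, rcond=None)

def embed(x):
    # g_Omega: maps feature vectors into the target vector space
    return x @ W
```

Because the regression targets are fixed vectors rather than pairwise constraints, this fit touches each training instance once per pass, which is where the speed advantage over pairwise training comes from.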
When using Euclidean distance, small values indicate similar instances and large values indicate dissimilar instances. It is important that the correct fixed metric is used on the resulting embeddings to ensure the optimal performance.\n\nUltimately, for all of these methods a threshold must be chosen if the problem is to be reduced to answering the question of whether two instances should be considered similar.\n\n\\section{Experiments}\n\\label{sec:experiments}\nThe motivation behind introducing this method was to accelerate the training process of neural network based metric learning algorithms. Firstly, we show that the optimisation problem used to compute the target vectors can be solved in a matter of seconds, thus comprising a negligible fraction of the overall training time. Then we demonstrate the time taken for our method and Siamese networks to converge when trained on the same datasets. Finally, we perform an extrinsic evaluation to show that the learned metrics perform well on $k$-NN classification tasks.\n\n\\subsection{Datasets and Network Architectures}\nStandard image classification datasets, summarised in Table~\\ref{tbl:datasets}, are used to demonstrate the capabilities of our method. The similarity metric learning methods we consider all involve pairwise training at some point, which necessitates datasets that contain pairs of instances with binary similarity constraints. In other words, for each dataset we must define a binary relation represented in the same manner as described in Section~\\ref{sec:definition}. For each element in each of the datasets, 10 positive and 10 negative pairs are generated. The pairs are labelled positive if the two instances have at least one class in common. This process is performed separately with the training and testing instances to prevent overlap between the train and test subsets. \n\nThe two most basic datasets considered are MNIST~\\cite{lecun1998} and CIFAR-10~\\cite{krizhevsky2009}. 
Both consist of 10 balanced classes and a similar number of total instances. The primary difference is that MNIST contains very easily discriminated hand written digits, and CIFAR-10 contains downsampled photographs of natural objects.\n\nThe Public Figures dataset~\\cite{kumar2009} is a large collection of photos spanning 200 different identities. Unfortunately the originators of this dataset only supply URLs for the images, and because the dataset is now several years old many of these links are dead. Fortunately there is a subset called PubFig83 that has been scraped and made available for download by~\\cite{pinto2011}. We use a version of this PubFig83 dataset created by~\\cite{chiachia2014} that has had the faces aligned such that the eyes in each image are always in the same position. In this version of the dataset there are 13,838 colour images that are all $100\\times100$ pixels. The networks trained on this dataset are only supplied with the central $60\\times60$ pixels of each image in order to reduce overfitting caused by the background clutter surrounding the faces.\n\nNUS-WIDE~\\cite{chua2009}, the multi-label dataset, is included in order to simulate a tolerance relation. Although there are only 81 classes, this dataset includes 16,458 unique label vectors consisting of different combinations of these classes. This results in a highly complex metric learning problem. The difficulty of this dataset is further compounded by the presence of label noise, due to the labels being determined using a method that relies on user specified tags. The original dataset consists of a set of image URLs and associated labels, however some of these URLs are now unavailable. We managed to collect 222,654 of the original 269,648 instances. 
Each image was resized such that the smallest dimension was 100, and then the central $100 \\times 100$ pixels were used for training and testing. \n\n\\begin{table}\n\t\\center\n\t\\caption{A summary of the datasets used for the experiments in this paper.}\n\t\\label{tbl:datasets}\n\t\\begin{tabular*}{0.9\\columnwidth}{@{\\extracolsep{\\fill}}ccccc}\n\t\t\\hline\\noalign{\\smallskip}\n\t\tDataset & Train Instances & Test Instances & Features & Labels \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\t\\noalign{\\smallskip}\n\t\tMNIST & 60,000 & 10,000 & 784 & 10 \\\\\n\t\tCIFAR-10 & 50,000 & 10,000 & 3,072 & 10 \\\\\n\t\tPubFig83 & 11,000 & 2,838 & 30,000 & 83 \\\\\n\t\tNUS-WIDE & 150,000 & 72,654 & 30,000 & 81 \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table}\n\nEach dataset requires a different network due to the varying complexity of the associated task and the different number of features contained within each instance. Table~\\ref{tbl:architectures} provides an overview of these architectures. In the case of a Siamese network the architectures given in Table~\\ref{tbl:architectures} describe only a single branch of the network. Additionally, when training networks on CIFAR-10, PubFig83, and NUS-WIDE, we also train on horizontal flips of images in the training set. The size of the output layer determines the length of the learned target vectors for each dataset.\n\n\\begin{table}\n\t\\center\n\t\\caption{The different network architectures used for each dataset throughout all experiments. In this table Dense $x$ indicates a fully connected layer with $x$ hidden units, Convolutional $x\\times y\\times z$ means a convolutional layer with $x$ feature maps and filters of size $y\\times z$. 
Lastly, Max Pool $x \\times y$, $z \\times w$ represents a max pooling layer with a pool size of $x \\times y$ and a stride of $z \\times w$.}\n\t\\label{tbl:architectures}\n\t\\begin{tabular*}{0.7\\columnwidth}{@{\\extracolsep{\\fill}}cc}\n\t\t\\hline\\noalign{\\smallskip}\n\t\tMNIST & PubFig83 \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\t\\noalign{\\smallskip}\n\t\tDense 500 & Convolutional $64 \\times 9 \\times 9$ \\\\\n\t\tDense 500 & Max Pool $2 \\times 2$, $2 \\times 2$ \\\\\n\t\tDense 16 & Convolutional $96 \\times 7 \\times 7$ \\\\\n\t\t& Max Pool $2 \\times 2$, $2 \\times 2$ \\\\\n\t\t& Convolutional $128 \\times 7 \\times 7$ \\\\\n\t\t& Dense 1,024 \\\\\n\t\t& Dense 1,024 \\\\\n\t\t& Dense 64 \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\\noalign{\\smallskip}\n\t\tCIFAR-10 & NUS-WIDE \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\t\\noalign{\\smallskip}\n\t\tConvolutional $64 \\times 3 \\times 3$ & Convolutional $64 \\times 3 \\times 3$ \\\\\n\t\tConvolutional $64 \\times 3 \\times 3$ & Convolutional $64 \\times 3 \\times 3$ \\\\\n\t\tMax Pool $3 \\times 3$, $2 \\times 2$ & Convolutional $64 \\times 3 \\times 3$ \\\\\n\t\tConvolutional $96 \\times 3 \\times 3$ & Max Pool $2 \\times 2$, $2 \\times 2$ \\\\\n\t\tConvolutional $96 \\times 3 \\times 3$ & Convolutional $96 \\times 3 \\times 3$ \\\\\n\t\tMax Pool $3 \\times 3$, $2 \\times 2$ & Convolutional $96 \\times 3 \\times 3$ \\\\\n\t\tDense 128 & Convolutional $96 \\times 3 \\times 3$ \\\\\n\t\tDense 128 & Max Pool $2 \\times 2$, $2 \\times 2$ \\\\\n\t\tDense 16 & Convolutional $128 \\times 3 \\times 3$ \\\\\n\t\t& Convolutional $128 \\times 3 \\times 3$ \\\\\n\t\t& Convolutional $128 \\times 3 \\times 3$ \\\\\n\t\t& Max Pool $2 \\times 2$, $2 \\times 2$ \\\\\n\t\t& Dense 4,096 \\\\\n\t\t& Dense 4,096 \\\\\n\t\t& Dense 32 \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table}\n\nFor each hidden layer the ReLU activation function is used, and the output layers do not use 
any nonlinearity. Dropout~\\cite{srivastava2014} (with $p=0.5$) was applied before all hidden layers consisting of fully connected units. The weight initialisation procedure of~\\cite{glorot2010} was used for setting the starting values for the weights in all networks. We applied a slightly modified standardisation procedure to the target vectors when training the regression networks. Mean subtraction is performed; however, rather than dividing by the individual standard deviation for each component in the target vectors, we scale all vectors such that the mean standard deviation of all the components is one. This prevents distortion of the target vector space, which would effectively change the objective function to a weighted squared error variant.\n\n\\subsection{Implementation}\nThe target vector optimisation was performed using a single-threaded program written in the D programming language\\footnote{http:\/\/www.dlang.org} and run on an Intel i7 4770. The deep networks are trained using an implementation that takes advantage of functions provided by cuDNN 4~\\cite{chetlur2014} and is executed on an NVIDIA TITAN X GPU.\n\n\\subsection{First Phase Optimisation}\nThe first point of order is to show that the first optimisation problem consumes a negligible fraction of the overall training time when constructing a learned similarity metric. Table~\\ref{tbl:first-phase} contains empirical estimates of the expected runtime for both loss functions when applied to each dataset. Although the dot product based loss function takes longer to finish than the contrastive loss, both still finish in a relatively short amount of time.\n\n\\begin{table}\n\t\\center\n\t\\caption{95\\% confidence intervals for the expected runtime of the first phase optimisation problem for each loss function and dataset combination. Values are in seconds. 
FML-C denotes targets trained with the contrastive loss function and FML-DP denotes targets trained with the loss function given in Equation~\\ref{eq:dotloss}.}\n\t\\label{tbl:first-phase}\n\t\\begin{tabular*}{0.9\\columnwidth}{@{\\extracolsep{\\fill}}ccccc}\n\t\\hline\\noalign{\\smallskip}\n\t& MNIST & CIFAR-10 & PubFig83 & NUS-WIDE \\\\\n\t\\noalign{\\smallskip}\n\t\\hline\n\t\\noalign{\\smallskip}\n\tFML-C & 6.90 ($\\pm$0.01) & 5.19 ($\\pm$0.21) & 1.94 ($\\pm$0.01) & 46.43 ($\\pm$1.91) \\\\\n\tFML-DP & 21.77 ($\\pm$1.38) & 15.89 ($\\pm$1.70) & 6.86 ($\\pm$0.18) & 152.30 ($\\pm$4.89) \\\\\n\t\\noalign{\\smallskip}\n\t\\hline\n\t\\end{tabular*}\n\\end{table}\n\n\\subsection{Time to Converge}\nWe now demonstrate that our method converges significantly faster than a conventional Siamese network trained with the contrastive loss function, while still achieving competitive accuracy on an intrinsic evaluation task. Because the ranges of the various loss functions used to train the different models are quite different, we must select a single metric to use as a proxy for model performance. We have chosen the Area Under the Receiver Operating Characteristic curve (AUROC), where the binary classification task is to determine whether two instances are similar or not according to the previously defined binary relation. Figure~\\ref{fig:convergence} shows how fast the three methods converge for each dataset. 
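This evaluation protocol can be sketched as follows; `auroc` and `pair_score` are illustrative names, and the AUROC is computed here directly as the probability that a random positive pair outscores a random negative pair.

```python
import numpy as np

# Sketch of the intrinsic evaluation: score each test pair by the negative
# Euclidean distance between embeddings, then compute the AUROC as the
# probability that a positive pair outscores a negative one (ties count 1/2).
def pair_score(e_i, e_j):
    return -np.linalg.norm(e_i - e_j)   # higher score = more similar

def auroc(pos_scores, neg_scores):
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

# 5 of the 6 positive/negative score comparisons are correctly ordered.
a = auroc([0.9, 0.8, 0.4], [0.7, 0.2])
```

Because the AUROC depends only on the ranking of pair scores, it is insensitive to the differing scales of the loss functions, which is why it serves as a common proxy across the three methods.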
\n\n\\begin{figure}\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\center\n\t\t\\begin{tikzpicture}[scale=0.7,trim axis right,trim axis left]\n\t\t\t\\begin{axis}[yticklabel style={\/pgf\/number format\/fixed,\n\t\t\t \/pgf\/number format\/precision=3}, legend pos=south east, ylabel=AUROC, xlabel={Time (Seconds)}]\n\t\t\t\t\\addplot +[mark=none] table[col sep=comma] {results\/mnist-fml-mf.csv};\n\t\t\t\t\\addlegendentry{FML-DP}\n\t\t\t\t\\addplot +[mark=none] table[col sep=comma] {results\/mnist-fml-contrastive.csv};\n\t\t\t\t\\addlegendentry{FML-C}\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/mnist-siamese-contrastive.csv};\n\t\t\t\t\\addlegendentry{Siamese}\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{MNIST}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\center\n\t\t\\begin{tikzpicture}[scale=0.7,trim axis right,trim axis left]\n\t\t\t\\begin{axis}[yticklabel style={\/pgf\/number format\/fixed,\n\t\t\t \/pgf\/number format\/precision=3}, legend pos=south east, ylabel=AUROC, xlabel={Time (Seconds)}]\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/cifar10-fml-mf.csv};\n\t\t\t\t\\addlegendentry{FML-DP}\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/cifar10-fml-contrastive.csv};\n\t\t\t\t\\addlegendentry{FML-C}\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/cifar10-siamese-contrastive.csv};\n\t\t\t\t\\addlegendentry{Siamese}\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{CIFAR-10}\n\t\\end{subfigure} \\\\[1cm]\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\center\n\t\t\\begin{tikzpicture}[scale=0.7,trim axis right,trim axis left]\n\t\t\t\\begin{axis}[yticklabel style={\/pgf\/number format\/fixed,\n\t\t\t \/pgf\/number format\/precision=3}, legend pos=south east, ylabel=AUROC, xlabel={Time (Seconds)}]\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/pubfig83-fml-mf.csv};\n\t\t\t\t\\addlegendentry{FML-DP}\n\t\t\t\t\\addplot +[mark=none] table [col 
sep=comma] {results\/pubfig83-fml-contrastive.csv};\n\t\t\t\t\\addlegendentry{FML-C}\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/pubfig83-siamese-contrastive.csv};\n\t\t\t\t\\addlegendentry{Siamese}\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{PubFig83}\n\t\\end{subfigure}\n\t\\begin{subfigure}{0.5\\textwidth}\n\t\t\\center\n\t\t\\begin{tikzpicture}[scale=0.7,trim axis right,trim axis left]\n\t\t\t\\begin{axis}[yticklabel style={\/pgf\/number format\/fixed,\n\t\t\t \/pgf\/number format\/precision=3},\n\t\t\t\t\t\t\t\t scaled ticks=false, tick label style={\/pgf\/number format\/fixed}, legend pos=south east, ylabel=AUROC, xlabel={Time (Seconds)}]\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/nuswide-fml-mf.csv};\n\t\t\t\t\\addlegendentry{FML-DP}\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/nuswide-fml-contrastive.csv};\n\t\t\t\t\\addlegendentry{FML-C}\n\t\t\t\t\\addplot +[mark=none] table [col sep=comma] {results\/nuswide-siamese-contrastive.csv};\n\t\t\t\t\\addlegendentry{Siamese}\n\t\t\t\\end{axis}\n\t\t\\end{tikzpicture}\n\t\t\\caption{NUS-WIDE}\n\t\\end{subfigure}\n\t\\caption{Plots of the AUROC on the test set vs training time for each dataset. In these plots FML denotes one of the fast metric learning methods presented in this work. The suffix DP means the targets were found with the loss function given in Equation~\\ref{eq:dotloss}, and C means they were found with the contrastive loss. Each Siamese network was run until convergence, and then the other methods were run for double the number of epochs taken to train the Siamese network. Because the Siamese network considers pairs of images, each epoch takes twice as long as a regular regression network. 
On the NUS-WIDE experiment we stopped the FML methods earlier due to time constraints and the lack of extra information that would be obtained from running them for the same duration as the Siamese network.}\n\t\\label{fig:convergence}\n\\end{figure}\n\nWe can see that both variants of the proposed fast metric learning method converge significantly faster than the conventional Siamese network. However, on one dataset the Siamese network does outperform our method. \n\n\\subsection{$k$-NN Classifier Performance}\nTo perform an extrinsic evaluation that shows how useful this method can be in practice, $k$-NN classifiers are created that utilise Euclidean distance applied to the embeddings to make predictions. A validation set was used to determine how many epochs each network should be trained for. It should be noted that the models trained with the dot-product-based loss function given in Equation~\\ref{eq:dotloss} are at a disadvantage in this case. The multiclass classifiers were trained using WEKA~\\cite{hall2009}, and the multi-label classifiers were created using the binary relevance problem transformation scheme as implemented in MEKA.\\footnote{http:\/\/meka.sourceforge.net} Table~\\ref{tbl:knn} shows the performance of the $k$-NN classifiers, with $k = 5$ for all classifiers.\n\n\\begin{table}\n\t\\center\n\t\\caption{Performance of $k$-NN classifiers applied to each dataset and model combination. 
Accuracy is reported for the three multiclass datasets (MNIST, CIFAR-10, and PubFig83), and the Jaccard index (higher is better) is reported for the multi-label dataset (NUS-WIDE).}\n\t\\label{tbl:knn}\n\t\\begin{tabular*}{0.9\\columnwidth}{@{\\extracolsep{\\fill}}ccccc}\n\t\t\\hline\\noalign{\\smallskip}\n\t\tAlgorithm & MNIST & CIFAR-10 & PubFig83 & NUS-WIDE \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\t\\noalign{\\smallskip}\n\t\tFML-C & 0.976 & 0.806 & 0.777 & 0.427 \\\\\n\t\tFML-DP & 0.978 & 0.806 & 0.768 & 0.229 \\\\\n\t\tSiamese & 0.972 & 0.734 & 0.335 & 0.393 \\\\\n\t\t\\noalign{\\smallskip}\n\t\t\\hline\n\t\\end{tabular*}\n\\end{table}\n\nIt can be seen that both the fast metric learning methods presented in this work outperform the conventional Siamese network trained with the contrastive loss function when evaluating on multiclass classification tasks. Particularly surprising is the performance on PubFig83, where the accuracy of the Siamese network is significantly worse than the other methods. Also interesting is the performance on the NUS-WIDE dataset. It is quite surprising that FML-C achieves a higher Jaccard index than the Siamese network, despite the intrinsic evaluation showing the Siamese network converges towards a more accurate solution.\n\n\\section{Conclusion}\nIn this paper, we have presented a fast method for learning similarity metrics backed by deep neural networks. It has been shown that the techniques presented in this work converge significantly faster and, in the majority of cases, result in superior performance on both intrinsic and extrinsic evaluation tasks. Our method, coupled with the contrastive loss function, appears to be a very good choice for learning specialised distance metrics for $k$-NN.\n\nIt would be interesting to investigate how well these methods work on information retrieval tasks. 
The formalisation given in Section~\\ref{sec:definition} is well suited to scenarios where one does not wish to explicitly assign a ground-truth class during data collection, even though there is likely to be a large number of latent classes or topics. It would also be interesting to investigate how effective the two-step training procedure is for speeding up networks trained with triplet loss functions, especially since these loss functions are more popular for information retrieval tasks~\\cite{chechik2010}.\n\n\\subsubsection*{Acknowledgements.} We thank NVIDIA for donating the TITAN X GPU that was used for training all of the deep networks in this paper.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Examining Patch Diversity in Vision Transformers}\n\\label{sec:study}\n\nTo understand the origin of the training instability, we study the patch representations learned after each self-attention layer. Ideally, we would like the patch representations to be diverse and to capture different information from the inputs. We study the diversity of the patch representations by computing the patch-wise absolute cosine similarity. Consider a sequence of patch representations $\\bm{h} = [h_{class}, h_1, \\cdots, h_n]$. We define the patch-wise absolute cosine similarity as\n\\begin{equation*}\n\\mathcal{P}(\\bm h) = \\frac{1}{n(n-1)} \\sum_{i\\neq j} \\frac{|h_i^\\top h_j|}{\\parallel h_i \\parallel_2 \\parallel h_j \\parallel_2}.\n\\end{equation*}\nHere we ignore the \\emph{class} patch. Larger values of $\\mathcal{P}(\\bm h)$ indicate a higher correlation among patches and vice versa.\n\nWe test two variants of vision transformers pre-trained on the ImageNet dataset~\\citep{deng2009imagenet}, including a 24-layer DeiT-Base model~\\citep{touvron2020training} (denoted as DeiT-Base24) and a SWIN-Base~\\citep{liu2021swin} model. We also evaluate a ResNet-50~\\citep{he2016deep} model for comparison. 
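The diversity metric $\mathcal{P}(\bm h)$ defined above can be sketched as follows (a minimal NumPy version; the array layout and function name are our own, and the class patch is assumed to be excluded before the call):

```python
import numpy as np

def patchwise_abs_cos(h):
    """Mean absolute cosine similarity over all ordered pairs of
    distinct patches. h has shape (n_patches, dim)."""
    hn = h / np.linalg.norm(h, axis=1, keepdims=True)   # unit-normalise rows
    sim = np.abs(hn @ hn.T)                             # |cos| for every pair
    n = h.shape[0]
    return (sim.sum() - np.trace(sim)) / (n * (n - 1))  # drop the i == j terms
```

The value lies in $[0, 1]$: identical (up to scale) patches give 1, mutually orthogonal patches give 0.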
For the DeiT-Base24 model, we uniformly select 5 layers to compute the patch-wise cosine similarity. For the SWIN-Base model and ResNet-50 model, we select the input representation to each down-sampling layer. Specifically for the ResNet-50 model, we regard the representations at each spatial location as an individual patch.\n\n\\begin{figure}[t]\n\\centering\n\\setlength{\\tabcolsep}{1pt}\n\\begin{tabular}{cc}\n\\includegraphics[height=0.225\\textwidth]{figures\/vit_baseline.pdf} &\n\\raisebox{1.2em}{\\rotatebox{90}{\\small Cosine similarity}}\n\\includegraphics[height=0.225\\textwidth]{figures\/cos_baseline.pdf}\\\\ \n& {\\small Block index} \\\\\n(a) An illustration of vision transformers \n& (b) Pairwise absolute cosine similarity \\\\% & (b) Layer-wise \\emph{s.t.d.} of Attention \\\\\n\\end{tabular}\n\\caption{(a) An overview of vision transformers, following \\citep{dosovitskiy2020vit}. Each image patch is first transformed to a latent representation using a linear patch projection layer. The \\emph{dog} image is from ImageNet~\\citep{deng2009imagenet}. \n(b) Comparisons of patch-wise absolute cosine similarities.\nAll similarities are computed with 10,000 sub-sampled images from the ImageNet training set without data augmentation.}\n\\label{fig:oversmooth}\n\\end{figure}\n\nAs shown in Figure~\\ref{fig:oversmooth} (b), the patch-wise cosine similarity $\\mathcal{P}(\\cdot)$ of the patch representations learned by DeiT-Base24 and SWIN-Base gradually increases with the depth of the layers. For DeiT-Base24, the average cosine similarity becomes larger than 0.7 for the representations after the last layer. In contrast, ResNet-50 learns relatively diversified features across the network without an obvious increase of the patch-wise cosine similarity. Such high similarity across the learned patch representations is undesired as it degrades the learning capacity of the vision transformer models and limits the actual information extracted from the input images. 
There is also a risk for the patch representations to degenerate with increasing model depth, which may partially explain the high training instability of deep models.\n\n\\section{DiversePatch: Promoting Patch Diversification for Vision Transformers}\n\\label{sec:method}\n\nTo alleviate the observed problem, we propose three regularization techniques to encourage diversity across the learned patch representations.\n\n\\paragraph{Patch-wise cosine loss}\nAs a straightforward solution, we propose to directly minimize the patch-wise absolute value of cosine similarity between different patch representations. \nGiven the final-layer patch representations $\\bm{h}^{[L]}$ of an input $\\bm x$,\nwe add a patch-wise cosine loss to the training objective:\n\\begin{equation*}\n\\mathcal{L}_{\\textit{cos}}(\\bm x) = \\mathcal{P}(\\bm{h}^{[L]}).\n\\end{equation*}\nThis regularization loss explicitly minimizes the pairwise cosine similarity between different patches.\nSimilar approaches have also been adopted for training improved representations in graph neural networks \\citep{chen2020measuring} and diversified word embeddings in language models \\citep{gao2019representation}.\nMeanwhile, this regularization loss can be viewed as minimizing an upper bound on the largest eigenvalue of $\\bm{h}$ \\citep{merikoski1984trace}, hence improving the expressiveness of the representations. 
See Figure~\\ref{fig:demo} (a) for an illustration.\n\n\\paragraph{Patch-wise contrastive loss}\nSecondly, as shown in Figure~\\ref{fig:oversmooth} (b), representations learned in early layers are more diverse than those in deeper layers.\nHence, we propose a contrastive loss that uses the representations in early layers to regularize the patches in deeper layers, reducing the similarity among patch representations. \nSpecifically, given an image $\\bm{x}$, let $\\bm{h}^{[1]}=\\{h_i^{[1]}\\}_i$ and $\\bm{h}^{[L]}=\\{h_i^{[L]}\\}_i$ denote its patches at the first and the last layer, respectively.\nWe constrain each $h^{[L]}_i$ to be similar to $h^{[1]}_i$ and to differ from all other patches $h^{[1]}_{j\\neq i}$ as follows (see Figure~\\ref{fig:demo} (b)):\n\\begin{equation*}\n\\mathcal{L}_{\\textit{contrastive}}(\\bm x)= - \\frac{1}{n} \\sum_{i=1}^n \\log \\frac{\\exp({h^{[1]}_i}^\\top h^{[L]}_i)}{\\exp({h^{[1]}_i}^\\top h^{[L]}_i) + \\exp({h^{[1]}_i}^\\top (\\frac{1}{n-1}\\sum_{j \\neq i} h^{[L]}_j))}. \n\\end{equation*}\nIn practice, we stop the gradient on $\\bm{h}^{[1]}$.\n\n\\paragraph{Patch-wise mixing loss}\nThirdly, instead of just using the class patch for the final prediction, we propose to train each patch to predict the class label as well. This can be combined with Cutmix~\\citep{yun2019cutmix} data augmentation to provide additional training signals for the vision transformer. As shown in Figure~\\ref{fig:demo} (c), we mix the input patches from two different images and attach an additional shared linear classification head to each output patch representation for classification. The mixing loss forces each patch to attend only to a subset of patches from the same input image and to ignore unrelated patches. Hence, it effectively prevents simple averaging across different patches, yielding more informative and useful patch representations. 
This patch mixing loss can be formulated as\n\\begin{equation*}\n\\mathcal{L}_\\textit{mixing}(\\bm x) = \\frac{1}{n} \\sum_{i=1}^n \\mathcal{L}_\\textit{ce}(g(h^{[L]}_i), y_i),\n\\end{equation*}\nwhere $h^{[L]}_i$ denotes the representation of the $i$-th patch in the last layer, $g$ denotes the additional linear classification head, $y_i$ stands for the patch-wise class label and $\\mathcal{L}_\\textit{ce}$ denotes the cross-entropy loss.\n\n\\paragraph{Algorithm}\nWe improve the training of vision transformers by jointly minimizing the weighted combination $\\alpha_1 \\mathcal{L}_{\\textit{cos}} + \\alpha_2 \\mathcal{L}_{\\textit{contrastive}} + \\alpha_3 \\mathcal{L}_{\\textit{mixing}}$. 
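The three regularizers and their weighted combination can be sketched as follows (a minimal NumPy version operating on a single image; the function and variable names are our own, and in practice these terms are computed on autodiff tensors, with a stop-gradient on the first-layer patches, and added to the usual classification loss):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cos_loss(hL):
    """Patch-wise cosine loss: mean |cos| over pairs of distinct patches."""
    hn = hL / np.linalg.norm(hL, axis=1, keepdims=True)
    sim = np.abs(hn @ hn.T)
    n = hL.shape[0]
    return (sim.sum() - np.trace(sim)) / (n * (n - 1))

def contrastive_loss(h1, hL):
    """Each last-layer patch should match its own first-layer patch and
    differ from the mean of the other last-layer patches."""
    n = h1.shape[0]
    loss = 0.0
    for i in range(n):
        pos = h1[i] @ hL[i]
        neg = h1[i] @ ((hL.sum(0) - hL[i]) / (n - 1))
        loss -= np.log(np.exp(pos) / (np.exp(pos) + np.exp(neg)))
    return loss / n

def mixing_loss(hL, labels, W):
    """Shared linear head W applied to every patch; mean cross-entropy
    against the patch-wise labels (from the Cutmix source images)."""
    probs = softmax(hL @ W)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def diverse_patch_loss(h1, hL, labels, W, alphas=(1.0, 1.0, 1.0)):
    a1, a2, a3 = alphas  # the paper simply sets all three weights to 1
    return (a1 * cos_loss(hL) + a2 * contrastive_loss(h1, hL)
            + a3 * mixing_loss(hL, labels, W))
```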
\nOur method does not require any network modifications and is not restricted to any specific architecture.\nIn our experiments, we simply set $\\alpha_1=\\alpha_2=\\alpha_3=1$ without any particular hyper-parameter tuning.\n\n\n\\begin{figure}[ht]\n\\centering\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{ccc}\n\\includegraphics[height=0.19\\textwidth]{figures\/patch_cos_reg.pdf} &\n\\includegraphics[height=0.19\\textwidth]{figures\/patch_contrastive.pdf} &\n\\includegraphics[height=0.19\\textwidth]{figures\/patch_mixing.pdf} \\\\\n(a) Patch-wise Cosine Loss & (b) Patch-wise Contrastive Loss & \n(c) Patch-wise Mixing Loss \\\\\n\\end{tabular}\n\\caption{An illustration of our patch-diversification-promoting losses. (a) Patch-wise cosine loss. (b) Patch-wise contrastive loss. (c) Patch-wise mixing loss.\n}\n\\label{fig:demo}\n\\end{figure}\n\n\\section{Introduction}\n\nRecently, vision transformers have demonstrated promising performance on various challenging computer vision tasks, including image classification~\\citep{dosovitskiy2020vit}, object detection~\\citep{zhu2020deformable, carion2020end}, multi-object tracking~\\citep{meinhardt2021trackformer}, image generation~\\citep{jiang2021transgan} and video understanding~\\citep{bertasius2021space}. \nCompared with highly optimized convolutional neural networks (CNNs), e.g., ResNet~\\citep{he2016deep} and EfficientNet~\\citep{tan2019efficientnet}, transformers encourage non-local computation and achieve comparable, or even better, performance when pre-trained on large-scale datasets.\n\nFor a vision transformer, an image is usually split into patches and the sequence of linear embeddings of these patches is provided as the input to the stacked transformer blocks~\\citep{dosovitskiy2020vit}. A vision transformer can learn the patch representations effectively using the self-attention block, which aggregates the information across the patches~\\citep{vaswani2017attention}. 
The learned patch representations are then used for various vision tasks, such as image classification, image segmentation, and object detection. Hence, learning high-quality and informative patch representations becomes key to the success of a vision transformer.\n\nThough promising results on vision tasks have been demonstrated for vision transformers, it is found that the training of vision transformers is not very stable, especially when the model becomes wider and deeper~\\citep{touvron2021going}. To understand the origin of the training instability, we use two popular vision transformer variants, i.e., DeiT~\\citep{touvron2020training} and SWIN-Transformer~\\citep{liu2021swin}, and study the extracted patch representations of each self-attention layer. We find the average patch-wise absolute cosine similarity between patch representations increases significantly in the late layers of both models. For a 24-layer DeiT-Base model, the cosine similarity can reach more than 0.7 after the last layer. This indicates a high correlation and duplication among the learned patch representations. Such behavior is undesired as it degrades the overall representation power of patch representations and reduces the learning capacity of powerful vision transformers. Our findings shed light on the empirical results reported in \\citep{touvron2021going}, and may partially explain why simply increasing the depth of standard vision transformers cannot boost the model performance.\n\nTo alleviate the problem in vision transformers, we propose three different techniques. First, we propose to directly promote the diversity among different patch representations by penalizing the patch-wise cosine similarity. Meanwhile, we observe the input patch representations to the first self-attention layer are often more diversified as they rely solely on the input pixels. 
Based on this observation, we propose a patch-wise contrastive loss to encourage the learned representations of the same patch to be similar between the first and last layers, while forcing the representations of different patches within the same image to be dissimilar.\n\n\nThirdly, we propose a patch-wise mixing loss. Similar to Cutmix~\\citep{yun2019cutmix}, we mix the input patches from two different images and use the learned patch representations from each image to predict its corresponding class label. With the patch-wise mixing loss, we force the self-attention layers to attend only to patches that are most relevant to each patch's own category, hence learning more discriminative features.\n\n\n\nEmpirically, leveraging the proposed diversity-encouraging training techniques, \nwe significantly improve the image classification performance of standard vision transformers on ImageNet without any architecture modifications or any extra data. \nSpecifically, we achieve 83.3\\% top-1 accuracy on ImageNet with an input resolution of 224$\\times$224 for DeiT.\nWe also finetune the checkpoint trained on ImageNet-22K from SWIN-Transformer \\citep{liu2021swin} and achieve 87.4\\% top-1 accuracy on ImageNet.\nBy transferring the backbone model to semantic segmentation, we enhance the SOTA results on the Cityscapes and ADE20K validation sets to 83.6 and 54.5 mIoU, respectively.\n\n\n\n\n\n\\section{Primal Experiment}\n\\subsection{Re-Design Self-Attention}\nThe SoftMax attention in the self-attention layer is defined as follows: given input $X$, let $K = X W^k$, $Q = X W^q$ and $V = X W^v$; the output is $y = \\mathrm{SoftMax}(Q K^T \/ \\mathrm{scale}) V$.\nThe time complexity and FLOPs of the attention are $\\mathcal{O}(T^2 N)$, where $T$ is the number of tokens and $N$ is the dimension size.\n\n$X \\in R^{[N, D]} \\rightarrow X^{\\mathrm{weight}}_i = X W^{\\mathrm{reduce}}_i \\in R^{[N, 1]} \\rightarrow Y = \\sum_i X \\odot f(X^{\\mathrm{weight}}_i)$ where $f$ is a stacked MLP with a final sigmoid function and 
$f(X^{\\mathrm{weight}}_i) \\in R^{[N, 1]}$.\n\n$x_{i, d} \\hookleftarrow f(x_{i, 1:d}) x_{i, d} + x_{i, d}$\n\n$x_{i, d} \\hookleftarrow f(x_{1:n, d}) x_{i, d} + x_{i, d}$\n\nOriginal FLOPs of the attention layer in tiny: 350M; now: ~20M.\n\nUsing attention is not a new idea in computer vision; \\citet{hu2018senet, li2019selective} use channel-wise attention to re-weight the importance of each channel.\nInspired by these works, we decompose the SoftMax attention into channel-wise and token\/pixel\/patch-wise attention. \nThe pytorch-style code is:\n\\begin{lstlisting}\n# Linear_1, Linear_2 : linear transforms\n# Transform_1, Transform_2: \n# stacked fully-connected layers with activation functions\n\n# reduce all tokens for each dimension\nx = Linear_1(x) * torch.sigmoid(Transform_1(x.mean(dim=1))) \n# reduce all hidden dimensions for each token\nx = Linear_2(x) * torch.sigmoid(Transform_2(x.mean(dim=2))) \n\\end{lstlisting}\nIn this way, we re-weight each token and each dimension separately.\nThe decomposed channel-wise attention is similar to the channel-wise attentions \\citep{} which are widely used in many computer vision tasks. \nWe build two baselines: 1) MLP layers without any attention; 2) SoftMax-style self-attention.\n\n\\paragraph{Multi-head Patch-wise Attention} \nInformation between tokens is only exchanged in the attention layer. Once we first apply an average pooling operation, the expressiveness of the token-level attention is limited. Thus, we create a multi-branch patch-wise attention by summing over several patch-wise attentions with different parameters.\n\n\\paragraph{Compare to Efficient Attentions} \nSparse attention usually has $\\mathcal{O}(T \\log(T) N)$ time complexity, and linear attention has $\\mathcal{O}(T N^2)$ complexity. \n\nFirst, we would like to see the performance of the decomposed attention.\nWhat is the next step? Further reducing the parameters in the MLP layers, or discussing the value of different kinds of attention (e.g. 
If we remove all the attentions, will the performance of the baseline drop a lot? If we only use token-wise attention, will it be much worse than SoftMax-style attention?)\n\n\\subsection{LN}\nThe original VIT \\citep{} and DEIT \\citep{} use pre-LayerNorm \\citep{} as:\n\n\\begin{lstlisting}\nx = x + self.drop_path(self.attn(self.norm1(x)))\nx = x + self.drop_path(self.mlp(self.norm2(x)))\n\\end{lstlisting}\n\nWe use the open-source code of DEIT \\footnote{\\url{https:\/\/github.com\/rwightman\/pytorch-image-models\/blob\/master\/timm\/models\/vision_transformer.py}}. \n\n\\subsection{Relative Position Information}\nRelative position information can be added inside the attention or outside of it. \n\nWe test three modules:\n\n1) Relative Positional Embedding \\citep{}:\n\\begin{lstlisting}\nself.remb = nn.Parameter(torch.randn([197, embed_dim\/\/num_heads,197])*0.02)\ntrunc_normal_(self.remb,std=.02)\n \nrpos=q.permute(2,0,1,3).reshape(N,B*self.num_heads,C\/\/self.num_heads)@self.remb\nrpos=rpos.reshape(N,B,-1,N)\nattn=(q@k.transpose(-2,-1)+ rpos.permute(1,2,0,3))*self.scale\n\\end{lstlisting}\n2) MLP as an expansion block: change the MLP layer from Linear(192, 768)-Linear(768, 192) to MLP(192, 768)-Conv(input=192,output=192,groups=192)-MLP(192, 768). 
\n\n3) Adding a channel-wise conv layer before the MLP layer to force local attention for the 14x14 patches.\nWe change the attention to \n\\begin{lstlisting}\nself.conv = torch.nn.Conv2d(in_channels=dim, \\ \nout_channels=dim, kernel_size=3, groups=dim, padding=1)\n\nx = x + self.drop_path(self.attn(self.norm1(x)))\nbatch_size, hidden, patch_length = x.shape[0], x.shape[-1], 224 \/\/ 16\nx[:, 1:, :] = x[:, 1:, :] + self.conv(x[:, 1:, :].transpose(1, 2) \\\n.reshape(batch_size, hidden, patch_length, patch_length))\\\n.reshape(batch_size, hidden, patch_length*patch_length).transpose(1, 2)\nx = x + self.drop_path(self.mlp(self.norm2(x)))\n\\end{lstlisting}\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n Model & Params & Additional FLOPs & Accuracy \\\\\n \\hline\n Tiny DEIT & 5.7M & 0M & 72.2\\% \\\\\n \\hline\n + Relative Embedding & 8.2M & 178M & 72.3\\% \\\\\n + Conv in MLP & 5.7M & 16M & 72.5\\% \\\\\n + Conv outside MLP & 5.7M & 4M & \\bf{73.6\\%} \\\\\n \\hline\n \\end{tabular}\n \\caption{ImageNet.}\n \\label{tab:none}\n\\end{table}\n\n\n\\begin{figure}[ht]\n \\centering\n \\begin{tabular}{c}\n \\includegraphics[width=0.4\\textwidth]{figures\/vit_pos_emb.pdf} \n \\end{tabular}\n \\label{fig:vit_base}\n \\caption{Introducing learnable positional embeddings.}\n\\end{figure}\n\n\n\n\\subsection{LayerNorm}\nIn NLP, pre-LayerNorm \\citep{xiong2020layer} can accelerate the training. However, post-norm (which is used in the original transformer) usually achieves better results. 
We test both variants.\n\n1) Post LN:\n\\begin{lstlisting}\nx = self.norm1(x + self.drop_path(self.attn(x)))\nx = self.norm2(x + self.drop_path(self.mlp(x)))\n\\end{lstlisting}\n\n2) Additional LN normalization:\n\\begin{lstlisting}\nattn_x = x + self.drop_path(self.attn(self.norm1(x)))\nx = attn_x + self.drop_path(self.mlp(self.norm2(attn_x)))\nx = self.norm3(x + attn_x)\n\\end{lstlisting}\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n Model & Params & Additional FLOPs & Accuracy \\\\\n \\hline\n Tiny DEIT & 5.7M & 0M & 72.2\\% \\\\\n \\hline\n Post LN & 5.7M & 0M & 70.8\\% \\\\\n + Additional LN & 5.7M & ~0M & \\bf{72.4\\%} \\\\\n \\hline\n \\end{tabular}\n \\caption{ImageNet.}\n \\label{tab:none}\n\\end{table}\n\n\\subsection{MLP}\nDoes it matter where the MLP layers are placed and how many are used?\nWe use a weight-shared MLP:\n\n\\begin{lstlisting}\nx = x + self.drop_path(self.attn(self.norm1(x)))\nfor mlp_index in range(num_mlp):\n x = x + self.drop_path(self.mlp(self.norm2(x))) \/ n\n\\end{lstlisting}\n\nand a weight-unshared MLP:\n\\begin{lstlisting}\nx = x + self.drop_path(self.attn(self.norm1(x)))\nfor mlp_index in range(num_mlp):\n x = x + self.drop_path(self.mlp[mlp_index](self.norm2[mlp_index](x))) \/ n\n\\end{lstlisting}\nto see whether there is a difference. 
Notice that the original MLP (dim-hidden-dim) increases the hidden size to $4\\times$ the dimension size; we decrease the MLP hidden size to keep the number of parameters the same.\n\nWe set the number of layers to 2 for this test.\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n Model & Params & Additional FLOPs & Accuracy \\\\\n \\hline\n Tiny DEIT & 5.7M & 0M & 72.2\\% \\\\\n \\hline\n Shared MLP & 5.7M & 500M & 72.4\\% \\\\\n Unshared MLP & 5.7M & 0M & \\bf{72.5\\%} \\\\\n \\hline\n \\end{tabular}\n \\caption{ImageNet.}\n \\label{tab:none}\n\\end{table}\n\n\\subsection{Initialization and Optimization}\nWe test Fixup and its follow-up \\citep{huang2020improving}; neither has an impact on the final result. \n\n\\subsection{Hand-Crafted and NAS}\nWe combine these features to design a new network structure:\n\\begin{lstlisting}\nbatch_size, hidden, patch_length = x.shape[0], x.shape[-1], 224 \/\/ 16\nx[:, 1:, :] += Conv(x[:, 1:, :])\nx = self.norm1(x)\nx = x + 1. \/ 2 * self.drop_path(self.mlp1(x))\n\nx = self.norm2(x)\nx = x + self.drop_path(self.attn(x)) \n\nx[:, 1:, :] += Conv(x[:, 1:, :])\nx = self.norm3(x)\nx = x + 1. \/ 2 * self.drop_path(self.mlp(x))\nx = self.norm4(x)\n\\end{lstlisting}\n\nWe also use a DARTS-like NAS method to do a search. Note that in the search space, when we decide whether to use a module (e.g. LayerNorm, Conv) or Identity, we add a penalty on the Identity operator to avoid degeneration during searching.\n\n\\textbf{We not only achieve much better results, but also accelerate the training by a large margin (e.g. 
training with 20 epochs on Tiny-DEIT, we improve the 25\\% top-1 accuracy baseline to 54\\%).}\n\n\\begin{table}[h]\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n Model & Params & Additional FLOPs & Accuracy \\\\\n \\hline\n Tiny DEIT & 5.7M & 0M & 72.2\\% \\\\\n Human-Design & 5.7M & 8M & \\bf{74.2\\%} (not finished) \\\\\n NAS & 5.7M & 0M & \\bf{-} \\\\\n \\hline\n Small DEIT & 16.8M & 0M & 79.9\\% \\\\\n Human-Design & 16.8M & 16M & \\bf{81.1\\%} (not finished) \\\\\n NAS & 5.7M & 0M & \\bf{-} \\\\\n \\hline\n \\end{tabular}\n \\caption{ImageNet. \\textbf{Our result is even better than DEIT with a teacher model.}}\n \\label{tab:nas}\n\\end{table}\n\n\n\\subsection{More...}\nThe next step is either designing a mobile version or redesigning the attention module.\nLet us prepare a short version as a workshop paper first.\n\\section{Preliminaries: Vision Transformers}\n\nIn this section, we give a brief introduction to vision transformers~\\citep{dosovitskiy2020vit}.\nGiven a training example $(\\bm{x}, y)$, where $\\bm x$ and $y$ denote the input image and the label, respectively, a vision transformer first splits $\\bm x$ into a set of non-overlapping patches, i.e., $\\bm{x} = (x_1, \\cdots, x_n)$,\nwhere $x_i$ denotes a patch and $n$ is the total number of patches.\nEach patch $x_i$ is then transformed into a latent representation via a projection layer, and augmented with a position embedding.\nA learnable \\emph{class} patch is introduced to capture the label information.\nDuring training, a vision transformer gradually transforms patch representations $\\bm{h}^{[\\ell]} = (h_{class}^{[\\ell]}, h_1^{[\\ell]}, \\cdots, h_n^{[\\ell]})$ with a stack of self-attention layers~\\citep{vaswani2017attention}. 
\nHere, $\\ell$ denotes the layer index and $h_{class}^{[\\ell]}$ and $h_i^{[\\ell]}$ denote the learned \\emph{class} patch and the image patch of $x_i$ at the $\\ell$-th layer, respectively.\nLet $L$ denote the total number of layers and $g$ denote a classification head. The vision transformer is trained by minimizing a classification loss $\\mathcal{L}(g(h_{class}^{[L]}), y)$. See Figure~\\ref{fig:oversmooth} (a) for an overview of vision transformers. It has been reported that the training of vision transformers can suffer from instability issues, especially for deep models \\citep{touvron2020training}.
\\section{Conclusion}\nIn this paper, we encourage diversified patch representations when training image transformers. 
\nWe address the problem by proposing three losses.\nEmpirically, without changing the transformer model structure, by making patch representations diverse, we are able to train larger, deeper models and obtain better performance on image classification tasks.\nApplying our pretrained models to semantic segmentation tasks, we obtain SOTA results on two popular datasets, ADE20K and Cityscapes.\n\nFor future work, we plan to study how to encourage diversified patch representations for different tasks.\nWe will also incorporate the proposed losses into self-supervised learning settings, and study whether the transformer model can serve as a better self-supervised learner for computer vision tasks when the patch representations are more diverse.\n\n\n\\section{Related Works and Discussions}\n\n\\paragraph{Transformers for image classification}\nTransformers~\\citep{vaswani2017attention} have achieved great success in natural language understanding~\\citep{devlin2018bert, radford2019language}, which motivates recent works adapting transformers to computer vision.\nFor example, iGPT~\\citep{chen2020generative} views an image as a long sequence of pixels and successfully trains transformers to generate realistic images;\n\\citet{dosovitskiy2020vit} split each image into a sequence of patches and achieve competitive performance on the challenging ImageNet benchmark when pretraining on a large amount of data;\n\\citet{touvron2020training} leverage knowledge distillation to improve the training efficacy of vision transformers and achieve a better speed--accuracy trade-off on ImageNet compared to EfficientNets. 
\n\nRecently, a variety of works focus on improving vision transformers by \nintroducing additional convolutional layers to take advantage of the benefits of inductive bias of CNNs, e.g., \n\\citep{han2021transformer, liu2021swin, wu2021cvt, zhou2021deepvit, jiang2021token, touvron2021going, valanarasu2021medical, arnab2021vivit, xie2021sovit, yuan2021tokens}.\nAmong them, \nLV-ViT~\\citep{jiang2021token} is most related to our work, \nLV-ViT~\\citet{jiang2021token} also introduces patch-wise auxiliary loss for training, which is equivalent to our \\emph{patch-wise mixing loss}; \nLV-ViT is a concurrent work of our submission.\nCompared to LV-ViT, our method focuses on improving the patch diversity of vision transformers, which is motivated from a very different perspective. \n\n\\paragraph{Diverse representations in CNNs}\nIn CNNs, the diverse representation often refers to feature diversity across different channels \\citep[e.g.][]{lee2018diverse, liu2018learning}.\nIn vision transformers, patches could be viewed as feature vectors from different spatial locations in CNNs. \nCNNs' feature maps at different spatial locations are naturally diversified as only local operators are used. \nTransformers, on the other hand, use global attention to fuse features across the different locations, and tends to learn similar presentations without regularization.\n\n\n\\paragraph{Limitations and negative societal impacts}\nFor the limitation part, \nin this work, we mainly focus on vision transformer. Therefore, our method can only be applied to transformer models with images as input. \nCurrently, we only focus on supervised learning problems, and do not study unsupervised or semi-supervised learning. 
\nAlthough our work has a positive impact on the research community, it may also have negative societal impacts.\nOnce we open-source our model checkpoints and code, we will have no control over anyone who gains access to them.\n\n\n\\section{Vision Transformers}\n\n\\begin{figure}[ht]\n \\centering\n \\begin{tabular}{c}\n \\includegraphics[width=0.4\\textwidth]{figures\/vit_base.pdf} \n \\end{tabular}\n \\caption{Overview of the basic vision transformer.}\n \\label{fig:vit_base}\n\\end{figure}\n\n\n\n\\section{Experiments}\n\\label{sec:experiment}\n\nIn this section, we apply our method to improve the training of a variety of vision transformers, including DeiT \\citep{touvron2020training} and SWIN-Transformer \\citep{liu2021swin}, and evaluate on a number of image classification and semantic segmentation benchmarks.\nWe show that our training strategy promotes patch diversification and learns transformers with significantly better transfer learning performance on downstream semantic segmentation tasks.\n\n\n\n\n\\begin{table}[t]\n \\centering\n \\setlength{\\tabcolsep}{2pt}\n \n \\begin{tabular}{l|l|c|c|c|c}\n \\hline \n & Method & \\# Params (M) & Input Size & $+$ Conv & Top-1 (\\%) \\\\ \n \\hline\n \\hline\n \\multirow{6}{*}{CNNs} \n & ResNet-152 \\scriptsize{\\citep{he2016deep}} & 230 & 224 & $\\checkmark$ & 78.1 \\\\\n & DenseNet-201 \\scriptsize{\\citep{huang2017densely}} & 77 & 224 & $\\checkmark$ & 77.6 \\\\\n & EffNet-B8 \\scriptsize{\\citep{gong2020maxup}} & 87 & 672 & $\\checkmark$ & 85.8 \\\\\n & EffNetV2-L \\scriptsize{\\citep{tan2021efficientnetv2}} & 121 & 384 & $\\checkmark$ & 85.7 \\\\\n & NFNet \\scriptsize{\\citep{brock2021high}} & 438 & 576 & $\\checkmark$ & 86.5 \\\\\n & LambdaNet-420 \\scriptsize{\\citep{bello2021lambdanetworks}} & 87 & 320 & $\\checkmark$ & 84.9 \\\\\n \\hline\n \\hline\n \\multirow{2}{*}{CNNs + } \n & CVT-21 \\scriptsize{\\citep{wu2021cvt}} & 32 & 224 & $\\checkmark$ & 82.5 \\\\\n 
\\multirow{2}{*}{Transformers} \n & CVT-21 & 32 & 384 & $\\checkmark$ & 83.3 \\\\\n \\cline{2-6}\n & LV-ViT-M \\scriptsize{\\citep{jiang2021token}} & 56 & 224 & $\\checkmark$ & 84.0 \\\\\n & LV-ViT-L & 150 & 448 & $\\checkmark$ & 85.3 \\\\\n \\hline\n \\hline\n \\multirow{6}{*}{DeiT {(scratch)}} & DeiT-Small12 \\scriptsize{\\citep{touvron2020training}} & \\multirow{ 2}{*}{22} & \\multirow{ 2}{*}{224} & \\multirow{ 2}{*}{$\\times$} & 80.4 \\\\\n & + {\\bf DiversePatch (ours)} & & & & \\bf{81.2} \\\\\n \\cline{2-6}\n & DeiT-Small24 & \\multirow{ 2}{*}{44} & \\multirow{ 2}{*}{224} & \\multirow{ 2}{*}{$\\times$} & 80.3 \\\\\n & + {\\bf DiversePatch (ours)} & & & & \\bf{82.2} \\\\\n \\cline{2-6}\n & DeiT-Base12 & \\multirow{ 2}{*}{86} & \\multirow{ 2}{*}{224} & \\multirow{ 2}{*}{$\\times$} & 82.1 \\\\\n & + {\\bf DiversePatch (ours)} & & & & \\bf{82.9} \\\\\n \\cline{2-6}\n & DeiT-Base24 & \\multirow{2}{*}{172} & \\multirow{ 2}{*}{224} & \\multirow{2}{*}{$\\times$} & 82.1 \\\\\n & + {\\bf DiversePatch (ours)}& & & & \\bf{83.3} \\\\\n \\cline{2-6}\n & DeiT-Base12 & \\multirow{2}{*}{86} & \\multirow{ 2}{*}{384} & \\multirow{2}{*}{$\\times$} & 83.6 \\\\\n & + {\\bf DiversePatch (ours)} & & & & \\bf{84.2} \\\\\n \\hline\n \\hline\n \\multirow{4}{*}{SWIN {(finetune)}} & SWIN-Base \\scriptsize{\\citep{liu2021swin}} & \\multirow{2}{*}{88} & \\multirow{ 2}{*}{224} & \\multirow{2}{*}{$\\times$} & 83.4 \\\\\n & + {\\bf DiversePatch (ours)} & & & & \\bf{83.7} \\\\\n \\cline{2-6}\n & SWIN-Base & \\multirow{2}{*}{88} & \\multirow{ 2}{*}{384} & \\multirow{2}{*}{$\\times$} & 84.5 \\\\ \n & + {\\bf DiversePatch (ours)} & & & & \\bf{84.7} \\\\ \n \\hline\n \\hline\n \\end{tabular}\n \n \\caption{Top-1 accuracy results on ImageNet. We train all DeiT-based models from scratch for 400 epochs. 
For SWIN-Transformer based models, we finetune from existing checkpoints for 30 epochs.\n Results without any extra data are reported.\n \n }\n \\label{tab:main_result}\n\\end{table}\n\n\\subsection{Main Results on ImageNet}\n\\label{sec:exp_imagenet}\n\nWe use DeiT~\\citep{touvron2020training} and SWIN transformers~\\citep{liu2021swin} as our baseline models, and improve these models by incorporating our patch-diversification-promoting losses into the training procedure.\n\n\\paragraph{Settings}\nFor DeiT-based models, we closely follow the training settings provided in \\citet{touvron2020training}\\footnote{\\url{https:\/\/github.com\/facebookresearch\/deit}}.\nWe train all DeiT baselines and our models for 400 epochs.\nWe use stochastic depth dropout and linearly increase the depth dropout ratio from 0 to 0.5 following~\\citet{huang2016deep}.\nAdditionally, we use a stochastic depth dropout ratio of 0.5 for DeiT-Base24 and DeiT-Small24, which allows us to train deeper DeiT models without diverging.\nFor our method, we remove \\emph{MixUp} \\citep{zhang2017mixup} and repeated data augmentation \\citep{hoffer2020augment}, as they are not compatible with our \\emph{patch-wise mixing loss}. A detailed ablation study is in Section~\\ref{sec:ablation}. 
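The linear stochastic-depth schedule described above can be sketched as follows (a minimal plain-Python helper; the function name is ours, not from the DeiT codebase):

```python
def drop_path_rates(num_layers, max_rate=0.5):
    # Linearly increase the stochastic depth (drop-path) rate from 0 at the
    # first block to max_rate at the last block, following Huang et al. (2016).
    if num_layers == 1:
        return [0.0]
    return [max_rate * i / (num_layers - 1) for i in range(num_layers)]

rates = drop_path_rates(12)  # per-block rates for a 12-layer model
```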
\n\n\nFor SWIN transformers, we use the official code for training, evaluation and finetuning\\footnote{\\url{https:\/\/github.com\/microsoft\/Swin-Transformer}}.\nSpecifically, we download the official SWIN-Base models pretrained with resolutions of $224\\times 224$ and $384\\times 384$, respectively.\nWe further finetune them for another 30 epochs with or without our patch diversification losses.\nIn particular, we use a batch size of 1024 (128 $\\times$ 8 GPUs), a constant learning rate of $10^{-5}$ and a weight decay of $10^{-8}$ for finetuning.\n\n\n\n\\paragraph{Results on ImageNet}\n\nAs demonstrated in Table \\ref{tab:main_result}, for all the model architectures we evaluated, our method leads to consistent improvements upon the corresponding baseline models. For example, for DeiT-based models, we improve the top-1 accuracy of DeiT-Small12 from 80.4\\% to 81.2\\%, and improve the top-1 accuracy of DeiT-Base12 from 82.1\\% to 82.9\\%.\nA similar trend can also be seen for SWIN-based models.\n\nAdditionally, our method allows us to train more accurate vision transformers by simply stacking more layers.\nNaive training of DeiT models cannot benefit from larger and deeper models due to rapid patch representation degradation throughout the network.\nAs we can see from Table~\\ref{tab:main_result}, with standard training, the results from DeiT-Small24 and DeiT-Base24 are not better than those from DeiT-Small12 and DeiT-Base12;\non the other hand, our method alleviates over-smoothing and learns more diversified patch features, enabling further improvements in the large-model regime. 
Specifically, our method achieves 82.2\\% and 83.3\\% top-1 accuracy for DeiT-Small24 and DeiT-Base24, respectively, which are $1.9\\%$ and $1.2\\%$ higher than their corresponding DeiT baselines.\n\nTo further verify the reproducibility of our results, we run DeiT-Base12 for three trials, achieving top-1 accuracies of 82.86\\%, 82.82\\% and 82.89\\%; the run-to-run performance variance is negligible and the standard deviation is smaller than 0.1.\n\n\\begin{wrapfigure}{r}{0.5\\textwidth}\n\\centering\n\\vspace{-1.2em}\n\\begin{tabular}{c}\n\\raisebox{1.1em}{\\rotatebox{90}{{Avg. Cosine similarity}}} \n\\includegraphics[width=0.45\\textwidth]{figures\/cos_combined_res.pdf} \\\\\n{{Block Index~~~~~~~~~~~~~~~~~~~~~}} \\\\ \n\\end{tabular}\n\\vspace{-1.2em}\n\\caption{Comparison of average patch-wise absolute cosine similarity.\n}\n\\vspace{-1.2em}\n\\label{fig:exp_ours_cosine_sim}\n\\end{wrapfigure}\nFollowing the studies in Section~\\ref{sec:study}, we plot the patch-wise absolute cosine similarity for the patch features learned by our method in Figure~\\ref{fig:exp_ours_cosine_sim}. As shown in Figure~\\ref{fig:exp_ours_cosine_sim}, the patch-wise absolute cosine similarity is reduced significantly for both DeiT-Base24 and SWIN-Base;\nthe cosine similarity among the learned patches in the last layer is similar to that of the ResNet-50 baseline.\n\n\n\nWe also compare with recently proposed CNN--transformer hybrids in Table~\\ref{tab:main_result}, including CVT \\citep{wu2021cvt} and LV-ViT \\citep{jiang2021token}.\nThese hybrid models introduce additional convolution layers to both the patch projection layers and self-attention layers, and achieve better classification accuracy on ImageNet compared to pure transformers like DeiT and SWIN transformers. 
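The diversity measure plotted above, the average patch-wise absolute cosine similarity, can be written down directly; a minimal pure-Python sketch (the actual evaluation runs on batched feature tensors):

```python
import math

def avg_abs_cosine_similarity(patches):
    # patches: list of per-patch feature vectors from one block.
    # Returns the mean absolute cosine similarity over all distinct pairs;
    # lower values indicate more diversified patch representations.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm
    n = len(patches)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(cos(patches[i], patches[j])) for i, j in pairs) / len(pairs)
```

By construction the measure lies in [0, 1]: identical (or anti-parallel) patches give 1, mutually orthogonal patches give 0.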
\nThese hybrid approaches are complementary to our method; we will further study diverse patch representations in these architectures and apply our method to them.\n\n\\iffalse\n\\subsection{Ablation Studies}\n\\label{subsec:ablation}\n\\paragraph{Training Technique Analysis}\nWe represent a detailed analysis and ablation study of our training techniques in Table \\ref{tab:deit_base_ablation}.\nWe first observe that, \ncompared to standard DeiT-Base model, \nboth patch contrastive loss and patch mixing loss can boost the performance, while combining these two losses achieve better performance (82.6\\% top-1 accuracy).\nInspired by \\citet{touvron2021going}, \nwe further introduce the talking-head attention into our models and training the model for a longer time, and further improve the results to 82.9\\% top-1 accuracy.\nUsing a deeper model and scale up the image resolution, the performance can be further improved.\n\\fi\n\n\n\\paragraph{Results with ImageNet-22K Pre-training}\n\nWe also finetune ViT-Large models \\citep{dosovitskiy2020vit} and SWIN-Large models pretrained on ImageNet-22K \\citep{russakovsky2015imagenet} to further push the limits of accuracy on ImageNet.\nImageNet-22K contains 22k classes and 14M images.\nSpecifically, we directly download the ImageNet-22K pre-trained models provided by ViT and SWIN-Transformer and finetune these checkpoints on the ImageNet training set for 30 epochs, with a batch size of 1024, a constant learning rate of $10^{-5}$ and a weight decay of $10^{-8}$.\n\nTable~\\ref{tab:imagenet22k} shows the finetuning accuracy on ImageNet.\nOur method again leads to consistent improvements across all evaluated settings. Specifically, we improve the ViT-Large top-1 accuracy from 85.1\\% to 85.3\\% and achieve 87.4\\% top-1 accuracy with SWIN-Large. 
\nAs future work, we will also pretrain models on ImageNet-22K with our method to see whether the improvement becomes larger.\n\n\n\\begin{table}[ht]\n \\centering\n \\setlength{\\tabcolsep}{6pt}\n \\begin{tabular}{l|c|c}\n \\hline \n Model & Input Size & Top-1 Acc (\\%)\\\\ \n \\hline \n VIT-Large \\scriptsize{\\citep{dosovitskiy2020vit}} & 224 & 83.6 \\\\\n + {\\bf DiversePatch (ours)} & 224 & \\bf{83.9}\\\\\n \\hline\n VIT-Large & 384 & 85.1\\\\\n + {\\bf DiversePatch (ours)} & 384 & \\bf{85.3}\\\\\n \\hline\n SWIN-Large + ImageNet22k & 384 & 87.3 \\\\ \n + {\\bf DiversePatch (ours)} & 384 & \\bf{87.4} \\\\\n \\hline\n \\end{tabular}\n \\caption{\n Results on ImageNet by finetuning from ImageNet-22K pretrained vision transformers. \n \n \n }\n \\label{tab:imagenet22k}\n\\end{table}\n\n\n\\subsection{Transfer Learning on Semantic Segmentation}\n\\label{sec:exp_segment}\nSemantic segmentation requires a backbone network to extract representative and diversified features from the inputs; the downstream semantic segmentation performance critically relies on the quality of the extracted features. \nIn this section, we use SWIN-Base pretrained on ImageNet (see Table~\\ref{tab:main_result}) and SWIN-Large pretrained on both ImageNet-22K and ImageNet (see Table~\\ref{tab:imagenet22k}) as our backbone models and finetune them on two widely-used semantic segmentation datasets, ADE20K \\citep{zhou2017scene} and Cityscapes \\citep{cordts2016cityscapes}, to verify the transferability of our pretrained models.\n\nIn particular, we show that backbone models trained with our diversity-promoting losses are especially helpful for downstream segmentation tasks. By using our pretrained SWIN-Large ($87.4\\%$ top-1 accuracy), our results outperform all existing methods and establish a new state-of-the-art on ADE20K and the Cityscapes validation set, achieving {54.5\\%} and {83.6\\%} mIoU, respectively. 
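Since mIoU is the metric reported throughout this subsection, a minimal sketch of its computation may be useful for reference; the flat label lists, the function name, and the choice to skip classes absent from both maps are illustrative assumptions, not the mmsegmentation implementation.

```python
def mean_iou(pred, target, num_classes):
    # Per-class intersection-over-union from flat label sequences,
    # averaged over the classes present in prediction or ground truth.
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both maps
            ious.append(inter / union)
    return sum(ious) / len(ious)
```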
\n\n\\paragraph{Datasets}\nADE20K \\citep{zhou2017scene} is a large scene parsing dataset, covering 150 object and stuff categories. ADE20K contains 20K training images, 2K validation images, and 3K test images.\nThe Cityscapes \\citep{cordts2016cityscapes} dataset labels 19 categories (with an additional unknown class) and consists of 2975 training images, 500 validation images and 1525 testing images. \nWe evaluate on the ADE20K and Cityscapes validation sets in this paper.\n\n\\paragraph{Baselines}\nWe include both state-of-the-art CNNs \\citep[e.g.][]{xiao2018unified, zhang2020resnest, wang2020hr, bai2020multiscale} and recently proposed transformer models \\citep[e.g.][]{liu2021swin, zheng2020rethinking, ranftl2021vision} as our baselines.\n\n\\paragraph{Settings} \nWe closely follow the finetuning settings proposed in SWIN transformers~\\citep{liu2021swin} \\footnote{\\url{https:\/\/github.com\/SwinTransformer\/Swin-Transformer-Semantic-Segmentation}}. \nSpecifically, \nwe use UperNet \\citep{xiao2018unified} in \\emph{mmsegmentation} \\citep{mmseg2020} as our testbed.\nDuring training, we use the AdamW \\citep{loshchilov2017decoupled} optimizer with a learning rate of $6\\times 10^{-5}$ and a weight decay of 0.01.\nWe use a cosine learning rate decay and a linear learning rate warmup of 1,500 iterations. 
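The schedule just described (linear warmup for 1,500 iterations, then cosine decay) can be sketched as follows; this is a simplified stand-in for the mmsegmentation scheduler, with an illustrative function name and an assumed decay floor of zero.

```python
import math

def lr_at_step(step, total_steps, base_lr=6e-5, warmup_steps=1500):
    # Linear warmup from 0 to base_lr over `warmup_steps` iterations,
    # then cosine decay from base_lr down to 0 (assumed floor).
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```

For the 160K-iteration ADE20K run, `lr_at_step(step, 160000)` rises linearly until iteration 1,500 and then decays smoothly to zero.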
\nWe finetune our models for 160K iterations and 80K iterations on the ADE20K and Cityscapes training sets, respectively.\nWe adopt the default data augmentation scheme in \\emph{mmsegmentation} \\citep{mmseg2020} \\footnote{\\url{https:\/\/github.com\/open-mmlab\/mmsegmentation}} and train with \n512$\\times$512 and 769$\\times$769 crop sizes for ADE20K and Cityscapes, respectively, following the default setting in \\emph{mmsegmentation}.\nAdditionally, \nfollowing SWIN transformers \\citep{liu2021swin}, we use a stochastic depth dropout of 0.3 for the first 80\\% of training iterations, and increase the dropout ratio to 0.5 for the last 20\\% of training iterations.\n\nFollowing SWIN transformers~\\citep{liu2021swin},\nwe report both the mIoUs from the single-scale evaluation and the mIoUs from the multi-scale flipping evaluation \\citep{zhang2018context}. \nSpecifically, for multi-scale flipping testing, we enumerate testing scales of \\{0.5, 0.75, 1.0, 1.25, 1.5, 1.75\\} with random horizontal flipping, following common practice in the literature \\citep[e.g.][]{zhang2020resnest, liu2021swin, zheng2020rethinking}. \nThe images are evaluated at 2048$\\times$1024 and 2049$\\times$1025 for ADE20K and Cityscapes, respectively.\n\n\n\\paragraph{Results}\n\nWe summarize our results in Table \\ref{tab:segmentation_ade20k} and Table \\ref{tab:segmentation_cityscape}.\nOur method improves the training of SWIN-Transformers and learns backbones capable of extracting more diversified features from the inputs, \nleading to new state-of-the-art segmentation performance on both ADE20K and Cityscapes. 
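As an aside on the evaluation protocol above, the horizontal-flip half of the multi-scale flipping test amounts to averaging class scores over an image and its mirror; a minimal sketch, where `predict`, the 2-D list image, and the omission of the six resizing scales are all illustrative assumptions:

```python
def flip_tta_average(image, predict):
    # `image`: 2-D list of pixel values; `predict`: maps an image to a
    # list of per-class scores. Average the scores of the image and its
    # horizontally mirrored copy (resizing across scales is omitted).
    flipped = [list(reversed(row)) for row in image]
    scores = predict(image)
    flipped_scores = predict(flipped)
    return [(a + b) / 2 for a, b in zip(scores, flipped_scores)]
```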
We achieve 54.5\\% mIoU on ADE20K and 83.6\\% mIoU on the Cityscapes validation set, outperforming all existing approaches.\n\n\n\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{l|c|cc}\n \\hline \n Model & \\#Params (M) & mIoU (\\%) & mIoU (ms+flip) (\\%)\\\\ \n \\hline\n OCNet \\scriptsize{\\citep{yuan2019object}} & 56 & 45.5 & \\texttt{N\/A} \\\\\n UperNet \\scriptsize{\\citep{xiao2018unified}} & 86 & 46.9 & \\texttt{N\/A} \\\\\n ResNeSt-200 + DeepLab V3 & 88 & \\texttt{N\/A} & 48.4 \\\\\n SETR-Base \\scriptsize{\\citep{zheng2020rethinking}} & 98 & \\texttt{N\/A} & 48.3 \\\\\n SETR-Large \\scriptsize{\\citep{zheng2020rethinking}} & 308 & \\texttt{N\/A} & 50.3 \\\\\n DPT-ViT-Hybrid \\scriptsize{\\citep{ranftl2021vision}} & 90 & \\texttt{N\/A} & 49.0 \\\\\n DPT-ViT-Large \\scriptsize{\\citep{ranftl2021vision}} & 307 & \\texttt{N\/A} & 47.6 \\\\\n \\hline \n \\hline\n Swin-Base & 121 & 48.1 & 49.7 \\\\\n + {\\bf DiversePatch (ours)} & 121 & \\bf{48.4}\\scriptsize{$\\pm$0.2} & \\bf{50.1}\\scriptsize{$\\pm$0.2} \\\\\n \\hline\n Swin-Large & 234 & 52.0 & 53.5 \\\\\n + {\\bf DiversePatch (ours)} & 234 & \\bf{53.1}\\scriptsize{$\\pm$0.1} & \\bf{54.5}\\scriptsize{$\\pm$0.1} \\\\\n \\hline\n \\end{tabular}\n \\caption{State-of-the-art on ADE20K. 
\n \n \n `ms+flip' refers to multi-scale testing with flipping \\citep{zhang2018context}, and `\\#Params' denotes the number of parameters.\n }\n \\label{tab:segmentation_ade20k}\n\\end{table}\n\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{l|c|cc}\n \\hline \n Model & \\#Params (M) & mIoU (\\%) & mIoU (ms+flip) (\\%)\\\\ \n \\hline \n OCNet \\scriptsize{\\citep{yuan2019object}} & 56 & 80.1 & \\texttt{N\/A} \\\\\n HRNetV2 + OCR \\scriptsize{\\citep{wang2020hr}} & 70 & 81.6 & \\texttt{N\/A} \\\\\n Panoptic-DeepLab \\scriptsize{\\citep{cheng2019panoptic}} & 44 & 80.5 & 81.5 \\\\\n Multiscale DEQ \\scriptsize{\\citep{bai2020multiscale}} & 71 & 80.3 & \\texttt{N\/A} \\\\\n ResNeSt-200 + DeepLab V3 & 121 & \\texttt{N\/A} & 82.7 \\\\\n SETR-Base \\scriptsize{\\citep{zheng2020rethinking}} & 98 & \\texttt{N\/A} & 78.1 \\\\\n SETR-Large \\scriptsize{\\citep{zheng2020rethinking}} & 308 & \\texttt{N\/A} & 82.1 \\\\\n \\hline\n \\hline\n Swin-Base & 121 & 80.4 & 81.5 \\\\\n + {\\bf DiversePatch (ours)} & 121 & \\bf{80.8}\\scriptsize{$\\pm$0.1} & \\bf{81.8}\\scriptsize{$\\pm$0.1} \\\\\n \\hline\n Swin-Large & 234 & 82.3 & 83.1 \\\\\n + {\\bf DiversePatch (ours)} & 234 & \\bf{82.7}\\scriptsize{$\\pm$0.2} & \\bf{83.6}\\scriptsize{$\\pm$0.1} \\\\\n \\hline\n \\end{tabular}\n \\caption{ State-of-the-art on the Cityscapes validation set. 
\n \n \n }\n \\label{tab:segmentation_cityscape}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\\newcolumntype{C}{>{\\centering\\arraybackslash}m{2.3cm}}\n\\begin{table}[ht]\n \\centering\n \\setlength{\\tabcolsep}{4pt}\n \\begin{tabular}{CCC|C|C}\n \\hline \n Patch-wise cosine loss & Patch-wise contrastive loss & Patch-wise mixing loss & DeiT-Base24 (\\%) & Swin-Base ~~ (\\%) \\\\ \n \\hline \n \n $\\times$ & $\\times$ & $\\times$ & 82.1 & 83.4 \\\\\n \\hline \n $\\checkmark$ & $\\times$ & $\\times$ & 82.5 & 83.5\\\\\n $\\times$ & $\\checkmark$ & $\\times$ & 82.6 & \\texttt{N\/A} \\\\\n $\\times$ & $\\times$ & $\\checkmark$& 82.8 & 83.4\\\\\n \\hline\n $\\checkmark$ & $\\times$ & $\\checkmark$& 83.1 & \\bf{83.7} \\\\\n $\\checkmark$ & $\\checkmark$ & $\\times$ & 83.1 & \\texttt{N\/A} \\\\\n $\\times$ & $\\checkmark$ & $\\checkmark$ & \\bf{83.3} & \\texttt{N\/A} \\\\\n \n $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & \\bf{83.3} & \\texttt{N\/A} \\\\\n \\hline\n \\end{tabular}\n \\caption{Ablating the impact of different combinations of our proposed patch diversification losses.\n }\n \\label{tab:loss_ablation}\n\\end{table}\n\n\n\n\n\\subsection{Ablation Studies}\n\\label{sec:ablation}\n\n\\paragraph{On the efficacy of our regularization strategies}\nOur method introduces three regularization terms to promote patch diversification. In this part, \nwe use DeiT-Base24 and SWIN-Base as our baseline models and ablate the effectiveness of our \\emph{patch-wise cosine loss}, \\emph{patch-wise contrastive loss} and \\emph{patch-wise mixing loss} by enumerating all combinations of these losses. 
\nWe exactly follow the training settings in Section~\\ref{sec:exp_imagenet}.\nWe summarize our results in Table~\\ref{tab:loss_ablation}.\nAs we can see from Table~\\ref{tab:loss_ablation}, \nall diversification-promoting losses are helpful and \nall combinations lead to improved top-1 accuracy on ImageNet.\nSpecifically, \nwe improve the DeiT-Base24 top-1 accuracy from 82.1\\% to 83.3\\% by combining all three losses;\nfor SWIN-Base, we did not ablate the \\emph{patch-wise contrastive loss}, because the number of patches is reduced throughout the network due to down-sampling. In this case, we boost the top-1 accuracy from 83.4\\% to 83.7\\% by incorporating the \\emph{patch-wise cosine loss} and \\emph{patch-wise mixing loss} into the training procedure.\nThe patch representations learned by our method are particularly useful in down-stream semantic segmentation tasks, as demonstrated in Section~\\ref{sec:exp_segment}.\n\n\n\n\\paragraph{On the stabilization of training}\nVision transformers are prone to overfitting, and training successful vision transformers often requires careful hyper-parameter tuning. \nFor example, DeiT uses a bag of tricks for stabilized training, including RandAugment~\\citep{cubuk2020randaugment}, MixUp~\\citep{zhang2017mixup}, CutMix~\\citep{yun2019cutmix}, random erasing~\\citep{zhong2020random},\nstochastic depth~\\citep{huang2016deep}, repeated augmentation~\\citep{hoffer2020augment}, etc.\nAs shown in Table~\\ref{tab:ablate_deit_training}, \nremoving some of these training tricks leads to significant performance degradation for DeiT. In contrast, our patch-wise diversification losses offer natural regularization against overfitting, and may therefore lead to a more stable training process. \nThe models trained via our method yield consistently competitive results\nacross different training settings. 
\nAdditionally, we show that our method can further benefit from talking-heads attention~\\citep{shazeer2020talking}, longer training, and deeper architectures, achieving consistent improvements over our DeiT baselines. \n\n\n\n\n\\begin{table}[ht]\n \\centering\n \\setlength{\\tabcolsep}{12pt}\n \\begin{tabular}{l|cc}\n \\hline \n Model & DeiT-Base12 & DeiT-Base12 +{\\bf DiversePatch (ours)} \\\\ \n \\hline \n Standard (300 epochs) & 81.8 & 82.6 \\\\ \n + Talking Head & 81.8 & 82.7 \\\\\n \\hline \n - Repeat Augmentation & 78.4 & 82.7 \\\\\n - Random Erasing & 77.5 & 82.7 \\\\\n - Mixup & 80.3 & 82.7 \\\\\n - Drop Path & 78.8 & 80.2 \\\\ \n \\hline\n + 400 Epochs & 82.1 & 82.9 \\\\ \n + Depth (24 Layer) & 82.1 & 83.3 \\\\\n \\hline\n \\end{tabular}\n \\caption{\n More stable training of DeiT models with our patch-diversification losses.\n \n }\n \\label{tab:ablate_deit_training}\n\\end{table}\n\\section{Method}\n\nSome recent works (e.g. Transformer in Transformer, Token to Token Transformer, Centroid Transformer) focus on injecting prior knowledge into the transformer model by re-designing its structure.\nIn this work, however, we argue that the vanilla Transformer structure shown in Figure 1 is powerful enough, and we instead focus on making the model converge faster and generalize better by introducing an additional proxy loss (task).\n\n\n\\subsection{Predicting Relative Location for Patches}\nWe add an additional head at the final layer that classifies whether or not two patches are nearby.\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{l|c}\n \\hline \n Model & Top-1 accuracy \\\\ \\hline \n DEIT Tiny & 72.2 \\\\\n w\/ predicting location & 72.5 \\\\\n \\hline \n \\end{tabular}\n \\caption{Accuracy on ImageNet validation set. 
The improvement is marginal; we cannot draw conclusions from it yet.}\n \\label{tab:deit_reproduce}\n\\end{table}\n\n\\subsection{Predicting the Label for Each Patch (Learning to Segment Mixed Images)}\n\nSuppose we have two image-label pairs $(\\{x_{0, i}\\}, y_0)$ and $(\\{x_{1, i}\\}, y_1)$. \nWe mix the input patches as $\\{x_{\\text{mix}, i}\\}$ where $x_{\\text{mix}, i} = \\mathbb{I}(\\rho < \\lambda) x_{0, i} + \\mathbb{I}(\\rho \\geq \\lambda) x_{1, i}$ ($\\rho$ is sampled from a uniform distribution for each patch)\nand mix the labels as $y_{\\text{mix}} = \\lambda y_0 + (1 - \\lambda) y_1$.\nFor the last-layer patches $\\{h_{\\text{mix}, i}\\}$ we add an additional patch-level classification loss on the pairs $(h_{\\text{mix}, i}, y_{\\text{mix}, i})$. \nTo save computation, we average-pool the patches coming from the same image and then pass the pooled feature through the additional patch-level classification head. \n\nDuring evaluation, we do not use the additional proxy loss.\nWhen we mix two images and do not use the proxy loss, this reduces to CutMix.\n\n\\begin{lstlisting}\n# Patch-level loss: average-pool the patches belonging to each source\n# image, classify the pooled feature, and accumulate a KL divergence\n# against that image's label.\npatch_loss = 0\nfor mask, target in zip(mask_lst, target_lst):\n    avgpool_patches = (patches * mask).mean(dim=(2, 3))\n    avgpool_logits = F.log_softmax(self.patch_head(avgpool_patches), dim=-1)\n    patch_loss += nn.KLDivLoss()(avgpool_logits, target)\n\\end{lstlisting}\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{l|c|c}\n \\hline \n Model & Top-1 accuracy & Test Loss\\\\ \\hline \n Cutmix + Mixup (Default) & 81.8 @ epoch300 & 0.838\\\\\n Cutmix (reported by DEIT) & 78.7 @ epoch300 & - \\\\\n Ours (mix two) & 80.8 @ epoch240 & - \\\\\n \\hline \n \\end{tabular}\n \\caption{DEIT-Base accuracy on ImageNet validation set. 
{\\color{red} {will be finished tomorrow, together with experiments on DEIT-small.}} }\n \\label{tab:deit_reproduce}\n\\end{table}\n\n\\paragraph{How to mix}\nWe compare randomly selecting patches with selecting nearby patches, as in Cutmix.\n\n\\begin{table}[ht]\n \\centering\n \\begin{tabular}{l|c|c}\n \\hline \n Model & Top-1 accuracy & Test Loss\\\\ \\hline \n Cutmix + Mixup (Default) & 79.9 @ epoch300 & 0.882 \\\\\n Randomly Select Patches (mix two) & 80.5 @ epoch300 & 0.827 \\\\\n Select nearby Patches (mix two) & 80.9 @ epoch300 & 0.809\\\\\n \\hline\n Refine & 81.0 @ epoch300 & - \\\\\n \\hline \n \\end{tabular}\n \\caption{DEIT-Small accuracy on ImageNet validation set. `Refine' denotes removing repeated augmentation and adding stronger RandAugment.}\n \\label{tab:deit_reproduce}\n\\end{table}\n\n\\paragraph{Mixing Rate}\nWe sample the mixing rate from $\\mathrm{Beta}(1, 1)$, and find that this regularization is too strong for smaller models but too weak for larger models. \nAn ablation study is needed here.\n\n\\paragraph{Which Layer}\nWe also consider adding the loss at an intermediate layer. \n\n\\paragraph{Over-estimation}\nWe notice that with our loss, we achieve a lower validation loss while the validation accuracy stays the same. \nThis indicates that some predictions are over-confident.\nThus, we add an additional reverse KL term to the loss as a regularizer. {\\color{red} {to do...}}\n\n\\paragraph{Self-supervision}\nThe proxy loss can easily be extended to a self-supervised version, in which we mix a bag of images and predict whether two patches come from the same image.\nIn this way, \nthe analysis of the loss is similar to supervised LDA (say each image is a topic, each patch is a word, and learning the classification loss amounts to learning the allocation). 
{\\color{red} {future work...}}\n\n\\paragraph{Next}\nStudy whether mixing more images is useful {\\color{red} {to do...}}\n\nWithout extra data, without extra teacher, we need to beat all the strong CNN models.\nThus, we can try two approaches:\n1) increasing the number of patches, 2) using larger models (e.g. VIT-Large).\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix \\thesection\\protect\\indent #1}\n \\addcontentsline{toc}{section}{Appendix \\thesection\\ \\ \\ #1}\n}\n\\newcommand\\encadremath[1]{\\vbox{\\hrule\\hbox{\\vrule\\kern8pt\n\\vbox{\\kern8pt \\hbox{$\\displaystyle #1$}\\kern8pt}\n\\kern8pt\\vrule}\\hrule}}\n\\def\\enca#1{\\vbox{\\hrule\\hbox{\n\\vrule\\kern8pt\\vbox{\\kern8pt \\hbox{$\\displaystyle #1$}\n\\kern8pt} \\kern8pt\\vrule}\\hrule}}\n\n\\newcommand\\figureframex[3]{\n\\begin{figure}[bth]\n\\hrule\\hbox{\\vrule\\kern8pt\n\\vbox{\\kern8pt \\vbox{\n\\begin{center}\n{\\mbox{\\epsfxsize=#1.truecm\\epsfbox{#2}}}\n\\end{center}\n\\caption{#3}\n}\\kern8pt}\n\\kern8pt\\vrule}\\hrule\n\\end{figure}\n}\n\\newcommand\\figureframey[3]{\n\\begin{figure}[bth]\n\\hrule\\hbox{\\vrule\\kern8pt\n\\vbox{\\kern8pt 
\\vbox{\n\\begin{center}\n{\\mbox{\\epsfysize=#1.truecm\\epsfbox{#2}}}\n\\end{center}\n\\caption{#3}\n}\\kern8pt}\n\\kern8pt\\vrule}\\hrule\\end{figure}\n}\n\n\\renewcommand{\\thesection}{\\arabic{section}}\n\\renewcommand{\\theequation}{\\arabic{section}-\\arabic{equation}}\n\\makeatletter\n\\@addtoreset{equation}{section}\n\\makeatother\n\\newtheorem{theorem}{Theorem}[section]\n\\newtheorem{conjecture}{Conjecture}[section]\n\\newtheorem{remark}{Remark}[section]\n\\newtheorem{proposition}{Proposition}[section]\n\\newtheorem{lemma}{Lemma}[section]\n\\newtheorem{corollary}{Corollary}[section]\n\\newtheorem{definition}{Definition}[section]\n\\newcommand{\\eq}[1]{eq.~(\\ref{#1})}\n\\def\\begin{remark}\\rm\\small{\\begin{remark}\\rm\\small}\n\\def\\end{remark}{\\end{remark}}\n\\def\\begin{theorem}{\\begin{theorem}}\n\\def\\end{theorem}{\\end{theorem}}\n\\def\\begin{definition}{\\begin{definition}}\n\\def\\end{definition}{\\end{definition}}\n\\def\\begin{proposition}{\\begin{proposition}}\n\\def\\end{proposition}{\\end{proposition}}\n\\def\\begin{lemma}{\\begin{lemma}}\n\\def\\end{lemma}{\\end{lemma}}\n\\def\\begin{corollary}{\\begin{corollary}}\n\\def\\end{corollary}{\\end{corollary}}\n\\def\\begin{eqnarray}{\\begin{eqnarray}}\n\\def\\end{eqnarray}{\\end{eqnarray}}\n\\newcommand{\\proof}[1]{{\\noindent \\bf proof:}\\par\n{#1} $\\bullet$}\n\n\\newcommand{{\\mathbb{R}}}{{\\mathbb{R}}}\n\\newcommand{{\\mathbb{C}}}{{\\mathbb{C}}}\n\\newcommand{{\\mathbb{Z}}}{{\\mathbb{Z}}}\n\\newcommand{\\bigskip \\noindent{\\bf Remarque: }}{\\bigskip \\noindent{\\bf Remarque: 
}}\n\\newcommand{\\refeq}[1]{eq.(\\ref{#1})}\n\n\\newcommand{\\rf}[1]{(\\ref{#1})}\n\\newcommand{\\rfig}[1]{fig.~\\ref{#1}}\n\n\\newcommand{\\equ}[2]{\\begin{equation}{\\label{#1}}{#2}\\end{equation}}\n\n\\newcommand{\\begin{equation}}{\\begin{equation}}\n\\newcommand{\\end{equation}}{\\end{equation}}\n\\newcommand{\\begin{eqnarray}}{\\begin{eqnarray}}\n\\newcommand{\\end{eqnarray}}{\\end{eqnarray}}\n\n\\newcommand\\eol{\\hspace*{\\fill}\\linebreak}\n\\newcommand\\eop{\\vspace*{\\fill}\\pagebreak}\n\n\\newcommand{\\hspace{0.7cm}}{\\hspace{0.7cm}}\n\\newcommand{\\vspace{0.7cm}}{\\vspace{0.7cm}}\n\n\\renewcommand{\\and}{{\\qquad {\\rm and} \\qquad}}\n\\newcommand{{\\qquad {\\rm where} \\qquad}}{{\\qquad {\\rm where} \\qquad}}\n\\newcommand{{\\qquad {\\rm with} \\qquad}}{{\\qquad {\\rm with} \\qquad}}\n\\newcommand{{\\qquad {\\rm for} \\qquad}}{{\\qquad {\\rm for} \\qquad}}\n\\newcommand{{\\qquad , \\qquad}}{{\\qquad , \\qquad}}\n\n\\newcommand{{\\it i.e.}\\ }{{\\it i.e.}\\ }\n\n\n\n\\newcommand{{\\,\\rm Det}} \\newcommand{{\\,\\rm Tr}\\:}{{\\,\\rm Tr}\\:}{{\\,\\rm Det}} \\newcommand{{\\,\\rm Tr}\\:}{{\\,\\rm Tr}\\:}\n\\newcommand{{\\,\\rm tr}\\:}{{\\,\\rm tr}\\:}\n\\newcommand{{\\,\\rm cte}\\,}{{\\,\\rm cte}\\,}\n\\newcommand{\\mathop{\\,\\rm Res\\,}}{\\mathop{\\,\\rm Res\\,}}\n\n\n\\newcommand{\\td}[1]{{\\tilde{#1}}}\n\n\\renewcommand{\\l}{\\lambda}\n\\newcommand{\\omega}{\\omega}\n\\newcommand{{\\cal P}}{{\\cal P}}\n\n\\newcommand{{\\mathrm{i}}}{{\\mathrm{i}}}\n\\newcommand{{\\,\\rm e}\\,}{{\\,\\rm e}\\,}\n\\newcommand{\\ee}[1]{{{\\rm e}^{#1}}}\n\n\\renewcommand{\\d}{{{\\partial}}}\n\\newcommand{{{\\hbox{d}}}}{{{\\hbox{d}}}}\n\\newcommand{\\dmat}[2]{\\mathrm{d}_{\\scriptscriptstyle{#1}}[#2]}\n\n\\newcommand{{\\int\\kern -1.em -\\kern-.25em}}{{\\int\\kern -1.em 
-\\kern-.25em}}\n\\newcommand{\\mathrm{Vol}}{\\mathrm{Vol}}\n\\newcommand{\\mathop{\\mathrm{Pol}}}{\\mathop{\\mathrm{Pol}}}\n\n\\newcommand{\\moy}[1]{\\left<{#1}\\right>}\n\n\\renewcommand{\\Re}{{\\mathrm{Re}}}\n\\renewcommand{\\Im}{{\\mathrm{Im}}}\n\n\\newcommand{{\\rm sn}}{{\\rm sn}}\n\\newcommand{{\\rm cn}}{{\\rm cn}}\n\\newcommand{{\\rm dn}}{{\\rm dn}}\n\\newcommand{\\ssq}[1]{{\\sqrt{\\sigma({#1})}}}\n\n\n\\newcommand{\\Sigma}{\\Sigma}\n\\newcommand{{\\overline\\Sigma}}{{\\overline\\Sigma}}\n\n\\renewcommand{\\l}{\\lambda}\n\\renewcommand{\\L}{\\Lambda}\n\\renewcommand{\\ssq}[1]{{\\sqrt{\\sigma({#1})}}}\n\\newcommand{\\overline}{\\overline}\n\\newcommand{{\\rm diag}}{{\\rm diag}}\n\\newcommand{{\\rm Cat}\\,}{{\\rm Cat}\\,}\n\n\n\\preprint{SPhT-T05\/037, hep-th\/0504029}\n\n\\title{Mixed correlation functions in the 2-matrix model, and the Bethe Ansatz}\n\n\\author{B.\\ Eynard, N. \\ Orantin \\\\\nService de Physique Th\\'eorique de Saclay, CEA\/DSM\/SPhT,\\\\\nUnit\\'e de Recherche associ\\'ee au CNRS (URA D2306), CEA Saclay,\\\\\nF-91191 Gif-sur-Yvette Cedex, France.\\\\\n E-mail: eynard@spht.saclay.cea.fr, orantin@spht.saclay.cea.fr}\n\n\\abstract{Using loop equation technics, we compute all mixed traces correlation functions of the 2-matrix model to large N leading order.\nThe solution turns out to be a sort of Bethe Ansatz, i.e. 
all correlation functions can be decomposed into products of 2-point functions.\nWe also find that, when the correlation functions are written collectively as a matrix, the loop equations are equivalent to commutation relations.}\n\n\\keywords{Matrix Models, Differential and Algebraic Geometry, Bethe Ansatz}\n\n\\begin{document}\n\n\\newsection{Introduction}\n\nFormal random matrix models have been used for their interpretation as combinatorial generating functions\nfor discretized surfaces \\cite{Mehta, BIPZ, ZJDFG}.\nThe hermitean one-matrix model counts surfaces made of polygons of only one color, whereas the hermitean\ntwo-matrix model counts surfaces made of polygons of two colors.\nIn that respect, the 2-matrix model is more appropriate for the purpose of studying surfaces with non-uniform\nboundary conditions.\nIn the continuum limit, the 2-matrix model gives access to ``boundary operators'' in conformal field theory \\cite{kostov}.\n\nGenerating functions for surfaces with boundaries are obtained as random matrix expectation values.\nThe expectation value of a product of $l$ traces is the generating function for surfaces with $l$ boundaries,\nthe total power of matrices in each trace being the length of the corresponding boundary.\nIf each trace contains only one type of matrix (different traces may contain different types of matrices),\nthe expectation value is the generating function counting surfaces with uniform boundary conditions.\nThose non-mixed expectation values have been computed for finite $N$ since the work of \\cite{eynardmehta, Mehta2}, refined by \\cite{Bergere1}.\n\nMixed correlation functions have long been considered a difficult problem, and progress has been made only recently \\cite{BEmixed, eynprats}.\nIndeed, non-mixed expectation values can easily be written in terms of eigenvalues only (since the trace of a matrix is clearly related to its eigenvalues),\nwhereas mixed 
correlation functions cannot (${\\,\\rm Tr}\\: M_1^k M_2^{k'}$ cannot be written in terms of the eigenvalues of $M_1$ and $M_2$).\n\nThe large $N$ limit of the generating function of the bicolored disc (i.e. one boundary, two colors) has\nbeen known since \\cite{kaz, eynchain, eynchaint}.\nThe large $N$ limit of the generating function of the 4-colored disc (i.e. one boundary, 4 colors) has\nbeen known since \\cite{eynm2m}.\nThe all-order expansion of correlation functions for the 1-matrix model has been obtained through a Feynman-graph representation in \\cite{eynloop1mat},\nand the generalization to non-mixed correlation functions of the 2-matrix model has been obtained in \\cite{eoloop2mat}.\n\nRecently, the method of integration over the unitary group of \\cite{eynprats} has made it possible to compute, for finite $N$, all mixed correlation functions of the 2-matrix model\nin terms of orthogonal polynomials.\n\n\\smallskip\n\nThe question of computing mixed correlation functions in the large $N$ limit is addressed in the present article.\n\nThe answer is (not so) surprisingly related to classical results in integrable statistical models, i.e. the Bethe Ansatz.\nIt has been known for a long time that random matrix models are integrable in some sense (Toda, KP, KdV, isomonodromic systems, ...),\nbut the relationship with the Yang-Baxter equations and the Bethe Ansatz was rather indirect.\nThe result presented in this article should give some new insight in that direction. 
We find that the $k$-point functions can be expressed\nas products of 2-point functions, which is the underlying idea of the Bethe Ansatz.\n\n\\bigskip\n\n{\\noindent \\bf Outline of the article:}\n\n- section 1 is an introduction,\n\n- in section 2, we define the model and the correlation functions, and we write the relevant loop equations,\n\n- in section 3, we introduce a Bethe Ansatz-like formula, which we prove in section~4,\n\n- in section 5, we solve the problem in matrix form,\n\n- section 6 is dedicated to the special Gaussian case.\n\n\\newsection{The 2-matrix model, definitions and loop equations}\n\n\\subsection{Partition function}\n\nWe are interested in the formal matrix integral:\n\\begin{equation}\\label{Zdef}\nZ:=\\int_{H_N^2} dM_1\\, dM_2\\, \\ee{-N{\\,\\rm Tr}\\:[V_1(M_1)+V_2(M_2)+M_1 M_2]}\n\\end{equation}\nwhere $M_1$ and $M_2$ are $N\\times N$ hermitean matrices and $dM_1$ (resp. $dM_2$) is the product of Lebesgue\nmeasures of all independent real components of $M_1$ (resp. 
$M_2$).\n$V_1(x)$ and $V_2(y)$ are complex polynomials of degree $d_1+1$ and $d_2+1$, called ``potentials''.\nThe formal matrix integral is defined as a formal power series in the coefficients of the potentials (see \\cite{ZJDFG}),\ncomputed by the usual Feynman method:\nconsider a local extremum of $\\ee{-N{\\,\\rm Tr}\\:[V_1(M_1)+V_2(M_2)+M_1 M_2]}$, and expand the non quadratic part\nas a power series and, for each term of the series, perform the Gaussian integration with the quadratic part.\nThis method does not care about the convergence of the integral, or of the series, it makes sense only order by order\nand it is in that sense that it can be interpreted as the generating function of discrete surfaces.\nAll quantities in that model have a well defined $1\/N^2$ expansion \\cite{thoft}.\n\nThe extrema of $V_1(x)+V_2(y)+xy$ are such that:\n\\begin{equation}\nV'_1(x)=-y {\\qquad , \\qquad} V'_2(y)=-x\n\\end{equation}\nthere are $d_1 d_2$ solutions (indeed $V'_2(-V'_1(x))=-x$), which we note $(\\overline{x}_I,\\overline{y}_I)$, \\eol\n$I=1,\\dots, d_1 d_2$.\nThe extrema of ${\\,\\rm Tr}\\:[V_1(M_1)+V_2(M_2)+M_1 M_2]$ can be chosen diagonal (up to a unitary transformation), with $\\overline{x}_I$'s and $\\overline{y}_I$'s on the diagonal:\n\\begin{eqnarray}\nM_1&=&{\\rm diag}(\n{\\stackrel{n_1\\,{\\rm times}}{\\overbrace{\\overline{x}_1,\\dots,\\overline{x}_1}}},\n{\\stackrel{n_2\\,{\\rm times}}{\\overbrace{\\overline{x}_2,\\dots,\\overline{x}_2}}},\n\\dots\n,{\\stackrel{n_{d_1 d_2}\\,{\\rm times}}{\\overbrace{\\overline{x}_{d_1 d_2},\\dots,\\overline{x}_{d_1 d_2}}}})\\cr\nM_2&=&{\\rm diag}(\n{\\stackrel{n_1\\,{\\rm times}}{\\overbrace{\\overline{y}_1,\\dots,\\overline{y}_1}}},\n{\\stackrel{n_2\\,{\\rm times}}{\\overbrace{\\overline{y}_2,\\dots,\\overline{y}_2}}},\n\\dots\n,{\\stackrel{n_{d_1 d_2}\\,{\\rm times}}{\\overbrace{\\overline{y}_{d_1 d_2},\\dots,\\overline{y}_{d_1 d_2}}}})\n\\end{eqnarray}\nThe extremum around which we perform the expansion is thus 
characterized by a set of filling fractions:\n\\begin{equation}\n\\epsilon_I = {n_I\\over N}\n{\\qquad , \\qquad}\n\\sum_{I=1}^{d_1 d_2} \\epsilon_I=1\n\\end{equation}\n\nTo summarize, let us say that the formal matrix integral is defined for given potentials and filling fractions.\n\nThe ``one-cut'' case is the one where one of the filling fractions is $1$, and all the others vanish.\nThis is the case where the Feynman expansion is performed in the vicinity of only one extremum.\n\n\\subsection{Enumeration of discrete surfaces}\n\nIt is well known that formal matrix integrals are generating functions for the enumeration of discrete surfaces \\cite{ZJDFG, BIPZ, Kazakov, courseynard}.\n\n\\smallskip\n\nFor instance, in the one-cut case (expansion near an extremum $\\overline{x},\\overline{y}$), one has:\n\\begin{equation}\n\\begin{array}{l}\n-\\ln{Z} =\\cr\n\\sum_{G} {1\\over \\#{\\rm Aut}(G)} N^{\\chi(G)} \\left({g_2\\over \\delta}\\right)^{n_{--}(G)}\\left({\\td{g}_2\\over \\delta}\\right)^{n_{++}(G)}\\left({-1\\over \\delta}\\right)^{n_{+-}(G)}\n\\prod_{i=3}^{d_1+1} g_i^{n_i(G)} \\prod_{i=3}^{d_2+1} \\td{g}_i^{\\td{n}_i(G)}\\cr\n\\end{array}\n\\end{equation}\nwhere the summation is over all finite connected closed discrete surfaces made of polygons of two signs (+ and -).\nFor such a surface (or graph) $G$, $\\chi(G)$ is its Euler characteristic, $n_i(G)$ is the number of $i$-gons carrying a $+$ sign, $\\td{n}_i(G)$ is the number of $i$-gons carrying a $-$ sign,\n$n_{++}(G)$ is the number of edges separating two $+$ polygons,\n$n_{--}(G)$ is the number of edges separating two $-$ polygons and $n_{+-}(G)$ is the number of edges separating two polygons of different signs.\n$\\#{\\rm Aut}(G)$ is the number of automorphisms of $G$.\n\nThe $g_i$'s, $\\td{g}_i$'s and $\\delta$ are defined as follows:\n\\begin{equation}\ng_k := \\left.{\\partial^k V_1(x)\\over \\partial x^k}\\right|_{x=\\overline{x}}\n{\\qquad , \\qquad}\n\\td{g}_k := \\left.{\\partial^k V_2(y)\\over 
\\partial y^k}\\right|_{y=\\overline{y}}\n{\\qquad , \\qquad}\n\\delta := g_2 \\td{g}_2-1\n\\end{equation}\n\nExample of a discrete surface:\n\\begin{equation}\n\\begin{array}{r}\n{\\epsfysize 6cm\\epsffile{surfdiscr.eps}}\n\\end{array}\n\\end{equation}\n\nIn the multicut case, i.e. with arbitrary filling fractions, matrix integrals can still be interpreted in terms of ``foams'' of surfaces,\nand we refer the reader to the appendix of \\cite{BDE} or to \\cite{eynhabilit} for more details.\n\n\\subsection{Enumeration of discrete surfaces with boundaries}\n\nSimilarly, given a sequence of signs $s_1,s_2,\\dots, s_k$, $s_i\\in\\{1,2\\}$, it is well known that the following quantity:\n\\begin{equation}\\label{Trdisc}\n\\left<{\\,\\rm Tr}\\:(\\prod_{i=1}^{k} M_{s_i})\\right>\n\\end{equation}\nis the generating function of discrete surfaces with one boundary of length $k$, the signs of the polygons along the boundary being given by the sequence $(s_1,\\dots,s_k)$.\n\n\\medskip\nExample of a discrete surface with boundary $(++++++-----+++++------)$:\n\\begin{equation}\n\\begin{array}{lll}\n\\left<{\\,\\rm Tr}\\:(M_1^{6}M_2^{5}M_1^{5}M_2^{6})\\right> \\,\\,& =& \\,\\,\\, \\sum_G\\,\\,\\, {\\epsfysize 5cm\\epsffile{surfdiscr.eps}}\n\\end{array}\n\\end{equation}\n\nMore generally, the expectation value of a product of $n$ traces is the generating function for discrete surfaces with $n$ boundaries.\n\nIn this article, we are interested only in one boundary and in the leading order in $N$, i.e. in surfaces with the topology of a disc.\n\n\n\\subsection{Master loop equation and algebraic curve}\n\nLet us define:\n\\begin{equation}\nW(x):={1\\over N}\\left<{\\,\\rm Tr}\\:{1\\over x-M_1}\\right>\n{\\qquad , \\qquad}\n\\td{W}(y):={1\\over N}\\left<{\\,\\rm Tr}\\:{1\\over y-M_2}\\right>\n\\end{equation}\nwhere the expectation values are formally computed as explained in the previous section, with the weight\n$\\ee{-N{\\,\\rm Tr}\\:[V_1(M_1)+V_2(M_2)+M_1 M_2]}$.\n$W(x)$ (resp. 
$\\td{W}(y)$) is defined as a formal power series in its large $x$ (resp. large $y$) expansion,\nas well as in the expansion in the coefficients of the potentials.\n$W(x)$ (resp. $\\td{W}(y)$) is a generating function for surfaces with one uniform boundary, i.e. with only sign $+$ (resp. sign $-$)\npolygons touching the boundary by an edge:\n\\begin{equation}\n\\begin{array}{r}\nW(x)={\\epsfysize 2cm\\epsffile{disc+.eps}}\n{\\qquad , \\qquad}\n\\td{W}(y)={\\epsfysize 2cm\\epsffile{disc-.eps}}\n\\end{array}\n\\end{equation}\n\n\n\nWe also define the following formal series:\n\\begin{equation}\nY(x):=W(x)-V'_1(x)\n{\\qquad , \\qquad}\nX(y):=\\td{W}(y)-V'_2(y)\n\\end{equation}\n\nIn addition, we define:\n\\begin{equation}\nP(x,y):={1\\over N}\\left<{\\,\\rm Tr}\\:{V'_1(x)-V'_1(M_1)\\over x-M_1}{V'_2(y)-V'_2(M_2)\\over y-M_2}\\right>\n\\end{equation}\n\\begin{equation}\nU(x,y):={1\\over N}\\left<{\\,\\rm Tr}\\:{1\\over x-M_1}{V'_2(y)-V'_2(M_2)\\over y-M_2}\\right>+x+V'_2(y)\n\\end{equation}\n\\begin{eqnarray}\nU(x,y;x')&:=&\\left<{\\,\\rm Tr}\\:{1\\over x-M_1}{V'_2(y)-V'_2(M_2)\\over y-M_2}{\\,\\rm Tr}\\:{1\\over x'-M_1}\\right>\\cr\n&& - N^2 W(x') (U(x,y)-x-V'_2(y))\\cr\n\\end{eqnarray}\n\\begin{equation}\nE(x,y):=(V'_1(x)+y)(V'_2(y)+x)+P(x,y)-1\n\\end{equation}\nNotice that $U(x,y)$ and $U(x,y;x')$ are polynomials of $y$ (with degree at most $d_2-1$),\n $P(x,y)$ is a polynomial of both variables of degree ($d_1-1,d_2-1$) and $E(x,y)$ is a polynomial\nof both $x$ and $y$ of degree $(d_1+1,d_2+1)$.\n\nIt has been obtained in many articles \\cite{eynmultimat, staudacher, eynchain, eynchaint}, that:\n\\begin{equation}\nE(x,Y(x))={1\\over N^2} U(x,Y(x),x)\n\\end{equation}\nTo large $N$ leading order that equation reduces to an algebraic equation for $Y(x)$, called the ``Master loop equation'' \\cite{staudacher}:\n\\begin{equation}\nE(x,Y(x))=0\n\\end{equation}\n(similarly, one also has $E(X(y),y)=0$, which implies $Y\\circ X={\\rm Id}$, known as Matytsin's equation 
\\cite{matytsin}).\nThe coefficients of $E(x,y)$, i.e. of $P(x,y)$, are entirely determined by the conditions $\\oint_{{\\cal A}_i} ydx = 2i\\pi \\epsilon_i$\nfor a choice of irreducible cycles on the algebraic curve.\n\n\nThe properties of that algebraic equation have been studied in many works \\cite{eynmultimat, KazMar}. Here we assume that it is known.\n\n\n\n\n\\subsection{Correlation functions, definitions}\n\n\nWe define:\n\\begin{equation}\n \\overline{W}_k(x_1,y_1,x_2,\\dots,x_k,y_k)\n:={1\\over N}\\left<{\\,\\rm Tr}\\:\\prod_{j=1}^k {1\\over x_j-M_1}{1\\over y_j-M_2}\\right>\n\\end{equation}\n\\begin{eqnarray}\n&& \\overline{U}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n&:=& \\mathop{\\mathrm{Pol}}_{y_k} V'_2(y_k)\\, \\overline{W}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n&=&{1\\over N}\\left<{\\,\\rm Tr}\\: {1\\over x_1-M_1}{1\\over y_1-M_2}\\,\\dots \\,{1\\over x_{k}-M_1}{V'_2(y_k)-V'_2(M_2)\\over y_k-M_2}\\right> \\cr\n\\end{eqnarray}\n\\begin{eqnarray}\n&& \\overline{P}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n&:=& \\mathop{\\mathrm{Pol}}_{x_1} \\mathop{\\mathrm{Pol}}_{y_k} V'_1(x_1)\\, V'_2(y_k)\\, \\overline{W}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n&=&{1\\over N}\\left<{\\,\\rm Tr}\\: {V'_1(x_1)-V'_1(M_1)\\over x_1-M_1}\\,{1\\over y_1-M_2}\\dots{1\\over x_k-M_1}\\,\\,{V'_2(y_k)-V'_2(M_2)\\over y_k-M_2}\\right> \\cr\n\\end{eqnarray}\n\\begin{equation}\nA_k(x_1,y_1,x_2,\\dots,x_k):={1\\over N}\\left<{\\,\\rm Tr}\\: {1\\over x_1-M_1}{1\\over y_1-M_2}\\dots{1\\over x_k-M_1}V'_2(M_2)\\right>\n\\end{equation}\n\nwhere $\\mathop{\\mathrm{Pol}}_x f(x)$ denotes the polynomial part at infinity of $f(x)$ (i.e. 
the positive part in the Laurent series for $x$ near infinity).\n\nThe functions $\\overline{W}_k$ are generating functions for discrete discs with all possible boundary conditions.\nOne can recover any generating function of type \\eq{Trdisc} by expanding into powers of the $x_i$'s and $y_i$'s.\n\n\\medskip\n\nFor convenience, we prefer to consider the following functions:\n\\begin{equation}\nW_k(x_1,y_1,x_2,\\dots,x_k,y_k):=\\overline{W}_k(x_1,y_1,x_2,\\dots,x_k,y_k)+\\delta_{k,1}\n\\end{equation}\n\\begin{equation}\nU_k(x_1,y_1,x_2,\\dots,x_k,y_k):=\\overline{U}_k(x_1,y_1,x_2,\\dots,x_k,y_k)+\\delta_{k,1}(V'_2(y_k)+x_k)\n\\end{equation}\nand for $k>1$:\n\\begin{equation}\nP_k(x_1,y_1,x_2,\\dots,x_k,y_k):=\\overline{P}_k(x_1,y_1,x_2,\\dots,x_k,y_k)+{W}_{k-1}(x_{2},\\dots,x_k,y_1)\n\\end{equation}\n\nFor the smallest values of $k$, those expectation values can be found in the literature to large $N$ leading order:\n\n$\\bullet$ it was found in \\cite{eynchain,eynchaint, eynmultimat}:\n\\begin{equation}\\label{W1U1}\nW_1(x,y) = {E(x,y)\\over (x-X(y))(y-Y(x))}\n{\\qquad , \\qquad}\nU_1(x,y) = {E(x,y)\\over (y-Y(x))}\n\\end{equation}\n\n$\\bullet$ it was found in the appendix C of \\cite{eynm2m} (there is a change of sign, because the action in \\cite{eynm2m} was ${\\,\\rm e}\\,^{-N{\\,\\rm tr}\\:(V_1(M_1)+V_2(M_2)-M_1M_2)}$):\n\\begin{equation}\nW_2(x_1,y_1,x_2,y_2) = {W_1(x_1,y_1)W_1(x_2,y_2)-W_1(x_1,y_2)W_1(x_2,y_1)\\over (x_1-x_2)(y_1-y_2)}\n\\end{equation}\n\n$\\bullet$ For finite $N$, it was found in \\cite{BEmixed}, and with notations explained in \\cite{BEmixed}:\n\\begin{equation}\\label{BEmixedW1}\nW_1(x,y) = \\det{\\left(1_N+\\Pi_{N-1}{1\\over x-Q}{1\\over y-P^t}\\Pi_{N-1}\\right)}\n\\end{equation}\n\n$\\bullet$ For finite $N$, it was found in \\cite{eynprats} how to compute any mixed correlation function in terms of determinants involving biorthogonal polynomials, with a formula very similar to \\eq{BEmixedW1}.\n\n\\medskip\n\nHere, we shall find a 
formula for all $W_k$'s in the large $N$ limit.\n\n\n\\subsection{Loop equations}\n\nLoop equations are nothing but Schwinger--Dyson equations.\nThey are obtained by writing that an integral is invariant under a change of variable,\nor alternatively by writing that the integral of a total derivative vanishes.\n\nThe loop equation method is well known and explained in many works \\cite{eynmultimat, staudacher}.\nHere, we write for each change of variable the corresponding loop equation (we use a presentation similar to that of \\cite{eynmultimat}).\n\nIn all that follows we consider $k>1$.\n\n\\bigskip\n\n$\\bullet$ the change of variable: $\\delta M_2={1\\over x_1-M_1}{1\\over y_1-M_2}\\,\\dots \\,{1\\over x_{k}-M_1}$ implies:\n\\begin{eqnarray}\\label{loopeqA}\nA_k(x_1,\\dots,x_k)\n&=& \\sum_{j=1}^{k-1} \\overline{W}_j(x_1,\\dots,y_j)\\,\\overline{W}_{k-j}(x_k,y_j,\\dots,y_{k-1}) \\cr\n&& + { x_1\\overline{W}_{k-1}(x_{1},y_1,\\dots,y_{k-1})- x_k\\overline{W}_{k-1}(x_{k},y_1,\\dots,y_{k-1})\\over x_1-x_k} \\cr\n&=& \\sum_{j=1}^{k-1} {W}_j(x_1,\\dots,y_j)\\,{W}_{k-j}(x_k,y_j,\\dots,y_{k-1}) \\cr\n&& - {W}_{k-1}(x_k,y_1,\\dots,y_{k-1}) \\cr\n&& + x_k\\,{ {W}_{k-1}(x_{1},y_1,\\dots,y_{k-1})- {W}_{k-1}(x_{k},y_1,\\dots,y_{k-1})\\over x_1-x_k} \\cr\n\\end{eqnarray}\n\n$\\bullet$ the change of variable: $\\delta M_1={1\\over x_1-M_1}{1\\over y_1-M_2}\\,\\dots \\,{1\\over x_{k}-M_1}{V'_2(y_k)-V'_2(M_2)\\over y_{k}-M_2}$ implies:\n\\begin{equation}\\label{loopeqU}\n\\begin{array}{l}\n(Y(x_1)-y_k)\\, \\overline{U}_k(x_1,\\dots,y_k) \\cr\n= \\sum_{j=2}^k {W_{j-1}(x_1,y_1,\\dots,y_{j-1})-W_{j-1}(x_j,y_1,\\dots,y_{j-1})\\over x_1-x_j}\\,\\overline{U}_{k-j+1}(x_j,y_j,\\dots,x_k,y_k) \\cr\n+ V'_2(y_k) {W_{k-1}(x_1,y_1,\\dots,y_{k-1})-W_{k-1}(x_k,y_1,x_2,\\dots,y_{k-1})\\over x_1-x_k} \\cr\n+ A_k(x_1,\\dots,x_k) - \\overline{P}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n= \\sum_{j=2}^k {W_{j-1}(x_1,y_1,\\dots,y_{j-1})-W_{j-1}(x_j,y_1,\\dots,y_{j-1})\\over 
x_1-x_j}\\,{U}_{k-j+1}(x_j,y_j,\\dots,x_k,y_k) \\cr\n+\\sum_{j=1}^{k-1} {W}_j(x_1,\\dots,y_j)\\,{W}_{k-j}(x_k,y_j,\\dots,y_{k-1}) \\cr\n- {P}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n\\end{array}\n\\end{equation}\nwhere we have used the loop equation \\eq{loopeqA} for $A_k(x_1,\\dots,x_k)$.\n\n\n$\\bullet$ the change of variable: $\\delta M_2={1\\over x_1-M_1}{1\\over y_1-M_2}\\,\\dots \\,{1\\over x_{k}-M_1}{1\\over y_{k}-M_2}$ implies:\n\\begin{eqnarray}\\label{loopeqW}\n&& (X(y_k)-x_1)\\,\\overline{W}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n&=& \\sum_{j=1}^{k-1} {{W}_{k-j}(x_{j+1},\\dots,y_k)-{W}_{k-j}(x_{j+1},\\dots,x_k,y_j)\\over y_k-y_j}\\, {W}_j(x_1,\\dots,y_j) \\cr\n&& - U_k(x_1,\\dots,y_k) \\cr\n\\end{eqnarray}\n\n\\subsection{Recursive determination of the correlation functions}\n\n\\begin{theorem}\\label{thloopdetermineWk}\nThe system of equations \\eq{loopeqU} and \\eq{loopeqW} for all $k$ has a unique solution.\n\\end{theorem}\n\nIn other words, if we can find some functions $W_k$, $U_k$, $P_k$ which obey \\eq{loopeqU} and \\eq{loopeqW}\nfor all $k$, then they are the correlation functions we are seeking.\n\n\\medskip\n\\proof{\n$W_1$, $U_1$ and $P_1$ have already been computed in the literature.\n\nAssume that we have computed $W_j$, $U_j$, $P_j$ for all $j<k$; the loop equations \\eq{loopeqU} and \\eq{loopeqW} then determine $U_k$, $P_k$ and $W_k$ uniquely in terms of them, which proves the theorem by induction on $k$.\n}\n\n\\begin{lemma}\n\\begin{equation}\n\\forall k>1\\, , \\qquad\n\\sum_{\\sigma\\in {\\overline\\Sigma}_k} C^{(k)}_\\sigma(x_1,y_1,x_2,y_2,\\dots, x_k,y_k) =0\n\\end{equation}\n\\end{lemma}\n\\proof{\nThis expression is a rational function of all its variables.\nConsider the poles at $y_k=y_j$, and write $\\tau=(k,j)$.\nOne can split the set ${\\overline\\Sigma}_k$ into its two conjugacy classes wrt the subgroup generated by $\\tau$: ${\\overline\\Sigma}_k=[Id]\\oplus[\\tau]$.\nIn other words:\n\\begin{eqnarray}\n&& \\sum_{\\sigma\\in {\\overline\\Sigma}_k} C_\\sigma(x_1,y_1,x_2,y_2,\\dots, x_k,y_k) \\cr\n&=& \\sum_{\\sigma\\in {\\overline\\Sigma}_k\/\\tau} C_\\sigma(x_1,y_1,x_2,y_2,\\dots, x_k,y_k)+C_{\\tau\\sigma}(x_1,y_1,x_2,y_2,\\dots, 
x_k,y_k) \\cr\n\\end{eqnarray}\nFrom Lemma \\ref{lemcancelpoleC}, the terms in the RHS have no pole at $y_k=y_j$.\nSimilarly, using cyclicity and doing the same for the $x$'s, we prove the lemma.\n}\n\n\n\\medskip\n\nThe lemmas we have just proven are sufficient to prove the main theorem \\ref{mainth}.\nThis is done in section \\ref{proofcorrel}.\n\n\n\n\n\n\\subsection{Computation of the rational functions $F^{(k)}(x_1,y_1,\\dots,x_k,y_k)$}\n\nAlthough the exact computation of the $F^{(k)}$'s is not necessary for proving theorem \\ref{mainth},\nwe do it for completeness.\nIn this section we give an explicit (and non-recursive) formula for the $F^{(k)}$'s.\n\nA practical way of computing these formulas is described in Appendix A.\n\n\n\n\n\n\\begin{definition}\\label{deff}\nTo every permutation $\\sigma\\in{\\overline\\Sigma}_{k-1}$, we associate a weight $f_\\sigma$ computed as follows:\n \\begin{equation} f_\\sigma = \\prod_{n=1}^l\n\\prod_{j=2}^{l_n}g_{i_{n,1},i_{n,j},i_{n,j+1}}\\prod_{n=2}^{\\td{l}}\n\\prod_{j=2}^{\\td{l_n}}g_{\\td{i}_{n,j},\\td{i}_{n,1},\\sigma (\\td{i}_{n,j})}\n\\prod_{j=1}^{\\td{l_1}}g_{\\td{i}_{1,j},k,\\sigma (\\td{i}_{1,j})} \\end{equation}\nwhere\n$g_{i,h,j}$ is defined as follows:\n\\begin{equation}\ng_{i,h,j}:=\\frac{1}{x_h-x_i} \\,\\frac{1}{y_h-y_j}\n\\end{equation}\nand $\\sigma$ and $S\\sigma$ are decomposed into their products of cycles as in Definition \\ref{defC}.\n\\end{definition}\n\n\\begin{theorem}\n$F^{(k)}(x_1,y_1, \\dots , x_k,y_k)$ is obtained as the sum of the\nweights $f_\\sigma$ over all $\\sigma \\in {\\overline\\Sigma}_{k-1}$:\n\\begin{equation}\nF^{(k)}(x_1,y_1, \\dots , x_k,y_k) = \\sum_{\\sigma \\in {\\overline\\Sigma}_{k-1} } f_\\sigma\n\\end{equation}\n\\end{theorem}\n\n\n\\proof{\nFirst of all, let us interpret diagrammatically the recursion relation \\eq{recF} defining the $F^{(k)}$'s:\n\\begin{equation}\n\\begin{array}{r}\n{\\epsfxsize 14cm\\epsffile{recF.eps}}\n\\end{array}\n\\end{equation}\n\nActually, this recursion relation is nothing 
else but a rule for cutting a graph along the dashed line into two smaller ones.\nThe weight of a graph is then obtained as the sum over all the possible ways of cutting it in two.\n\nNotice that $F^{(k)}$ is the sum of ${\\rm Cat}\\,(k-1)$ different terms.\n\nLet us now make explicit the bijection with the graphs with $k-1$ arches. In order to compute one of the terms composing\n$F^{(k)}$, one has to cut it with the help of the recursion relation until one obtains only graphs with one arch.\nThat is to say that one cuts it $k-1$ times along non-intersecting lines (corresponding to the dashed one\nin the recursion relation). If one draws these cutting lines on the circle, one obtains a graph with $k-1$ arches dual to the original one.\nThus every way of cutting a graph with $k-1$ arches is associated to a planar permutation $\\sigma \\in {\\overline\\Sigma}_{k-1}$. Let us now prove\nthat the term obtained by this cutting is equal to $f_\\sigma$.\n\nFor the sake of simplicity, one denotes the identity graph of $(x_j,y_j, \\dots , x_k,y_k)$\nby the circle $(j,j+1, \\dots, k)$. With this notation, the recursion relation reads:\n\\begin{equation}\n(1,2, \\dots, k) = \\sum_{j=1}^{k-1} g_{1,k,j} (1, \\dots , j) (j+1,\\dots ,k)\n\\end{equation}\n\nLet $\\sigma$ be a permutation of $(1, \\dots , k-1)$. Cut it along the line going from the boundary $(x_1,y_{k})$ to\n$(y_{\\sigma(1)},x_{S \\sigma(1)})$. 
This operation produces the factor $g_{1,k,\\sigma(1)}$ and the circles\n$(1, \\dots , \\sigma(1))$ and $(S \\sigma(1), \\dots, k)$:\n\\begin{equation}\n(1,\\dots ,k) \\rightarrow^{\\sigma} g_{1,k,\\sigma(1)} (1, \\dots , \\sigma(1)) (S \\sigma(1), \\dots, k)\n\\end{equation}\n\n\nThen cutting the circle $(S \\sigma(1), \\dots, k)$ along $(y_k,x_{S \\sigma(1)}) \\rightarrow (y_{\\sigma S \\sigma(1)}, x_{S \\sigma S \\sigma(1)})$\ngives:\n\\begin{equation}\n(S \\sigma(1), \\dots, k) \\rightarrow^{\\sigma} g_{S \\sigma(1),k, \\sigma S \\sigma(1)} (S \\sigma(1), \\dots , \\sigma S \\sigma(1)) (S \\sigma S \\sigma(1), \\dots , k)\n\\end{equation}\n\nOne pursues this procedure step by step, always cutting the circle containing $k$. With the above notations, this reads:\n\\begin{equation}\n(1,\\dots ,k) \\rightarrow^{\\sigma} \\prod_{j=1}^{\\td{l}_1} g_{\\td{i}_{1,j}, k, \\sigma (\\td{i}_{1,j})}\n(\\td{i}_{1,j}, \\dots , \\sigma(\\td{i}_{1,j}))\n\\end{equation}\n\nWe have thus computed the weight associated with the first $S\\sigma$-cycle. The remaining circles correspond to\n$\\sigma$-cycles. Let us compute their weight by considering for example $(\\td{i}_{1,1},\\dots , \\sigma(\\td{i}_{1,1})) = (i_{1,1}, \\dots , i_{1,2})$.\n\nThe cut along the line $(x_{i_{1,1}},y_{i_{1,2}}) \\rightarrow (y_{i_{1,3}},x_{S(i_{1,1})})$ gives:\n\\begin{equation}\n(i_{1,1}, \\dots , i_{1,2}) \\rightarrow^{\\sigma} g_{i_{1,1},i_{1,2},i_{1,3}} (i_{1,1}, \\dots , i_{1,3}) (S(i_{1,3}), \\dots , i_{1,2})\n\\end{equation}\n\nCutting the circle containing $i_{1,1}$ at every step then gives:\n\\begin{equation}\n(i_{1,1}, \\dots , i_{1,2}) \\rightarrow^{\\sigma} \\prod_{j=2}^{l_1} g_{i_{1,1},i_{1,j},i_{1,j+1}} (S(i_{1,j+1}), \\dots , i_{1,j})\n\\end{equation}\n\nOne can notice that the remaining circles in the RHS correspond to cycles of $S\\sigma$ whose contribution has not been taken\ninto account yet. 
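As an aside from this proof, the count of ${\rm Cat}\,(k-1)$ terms in $F^{(k)}$, i.e. the number of planar permutations, is easy to check by brute force. The sketch below (in Python; the function names `n_cycles`, `is_planar` and `catalan` are ours, not from the text) enumerates the permutations $\sigma$ of $n$ elements satisfying the planarity condition $n_{\rm cycles}(\sigma)+n_{\rm cycles}(S\sigma)=n+1$, with $S$ the cyclic shift, and verifies that their number is the Catalan number ${\rm Cat}\,(n)$; for $n=k-1$ this is the number of terms in $F^{(k)}$.

```python
from itertools import permutations
from math import comb

def n_cycles(perm):
    """Number of cycles of a permutation of {0,...,n-1}, given as a tuple of images."""
    seen, cycles = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            cycles += 1
            while i not in seen:
                seen.add(i)
                i = perm[i]
    return cycles

def is_planar(sigma):
    """Planarity wrt (Id, S^{-1}): n_cycles(sigma) + n_cycles(S sigma) = n + 1,
    where S is the cyclic shift i -> i+1 (mod n)."""
    n = len(sigma)
    s_sigma = tuple((sigma[i] + 1) % n for i in range(n))
    return n_cycles(sigma) + n_cycles(s_sigma) == n + 1

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# Planar (non-crossing) permutations of n elements are counted by Cat(n).
for n in range(1, 7):
    count = sum(is_planar(s) for s in permutations(range(n)))
    assert count == catalan(n), (n, count)
```

The genus-zero cycle-counting condition is invariant under the usual changes of convention (e.g. $S\sigma$ versus $\sigma^{-1}S$), since both are conjugate to the standard non-crossing characterization; the brute-force count is therefore convention-independent.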
One can then compute their values by\nfollowing a procedure similar to the one used for the first $S\\sigma$-cycle.\n\nOne then recursively cuts the circles until one finally obtains only circles containing a single element.\nThis recursion is performed by alternately processing $\\sigma$-cycles and $S\\sigma$-cycles.\n\nThus, one straightforwardly finds:\n\\begin{equation}\n(1,\\dots ,k) \\rightarrow^{\\sigma} \\prod_{n=1}^l\n\\prod_{j=2}^{l_n}g_{i_{n,1},i_{n,j},i_{n,j+1}}\\prod_{n=2}^{\\td{l}}\n\\prod_{j=2}^{\\td{l_n}}g_{\\td{i}_{n,j},\\td{i}_{n,1},\\sigma (\\td{i}_{n,j})}\n\\prod_{j=1}^{\\td{l_1}}g_{\\td{i}_{1,j},k,\\sigma (\\td{i}_{1,j})} = f_\\sigma\n\\end{equation}\n\n\nHence:\n\\begin{equation}\nF^{(k)} = \\sum_{\\sigma\\in {\\overline\\Sigma}_{k-1}} f_\\sigma\n\\end{equation}\n}\n\n{\\bf Example:}\nLet us compute the weight associated with the permutation $\\sigma \\in {\\overline\\Sigma}_{12}$ introduced earlier. Starting from the circle $(1, \\dots , 13)$, one performs\nstep by step the following cuts:\n\\begin{equation}\n\\begin{array}{r}\n{\\epsfxsize 6cm\\epsffile{cut.eps}}\n\\end{array}\n\\end{equation}\n\nThe first step consists in cutting along the $\\td{\\sigma}_1$ cycle. The dashed lines show where one cuts the circles.\n Note that circles of unit length are not represented. The associated weight is $g_{1,13,3} \\,\\, g_{4,13,7} \\,\\, g_{8,13,8} \\,\\, g_{9,13,12}$.\n\nThe second step consists in cutting along the remaining $\\sigma$-cycles. 
One associates the weight $g_{1,3,2} \\,\\, g_{1,2,1} \\times g_{4,7,4} \\times g_{9,12,11} \\,\\, g_{9,11,9}$ to this step.\n\nThe weights associated to the two last cuttings are $g_{5,7,6} \\times g_{10,11,10}$ and $g_{5,6,5}$.\n\n\\begin{equation}\n\\begin{array}{r}\n{\\epsfxsize 13.4cm\\epsffile{cut3.eps}}\n\\end{array}\n\\end{equation}\n\nFinally, the weight of this planar permutation is then:\n\\begin{equation}\nf_{\\sigma} = g_{1,13,3} \\,\\, g_{4,13,7} \\,\\, g_{8,13,8} \\,\\, g_{9,13,12} \\,\\, g_{1,3,2} \\,\\, g_{1,2,1} \\,\\, g_{4,7,4} \\,\\, g_{9,12,11} \\,\\, g_{9,11,9} \\,\\, g_{5,7,6} \\,\\, g_{10,11,10} \\,\\, g_{5,6,5}\n\\end{equation}\n\n\n\\subsection{Proof of the main theorem}\n\\label{proofcorrel}\nWe now prove that the function $\\widehat{W}$ defined by the RHS of \\eq{Ansatz} and the functions $\\widehat{U}$ and $\\widehat{P}$ defined in\ntheorem \\ref{thansatzloop} satisfy the system of equations \\eq{loopeqU} and \\eq{loopeqW}.\n\n{\\noindent \\bf Proof of theorem \\ref{thansatzloop}:}\n\nUsing \\eq{CRessimple}, one has:\n\\begin{eqnarray}\n&&\\widehat{U}_k(x_1,\\dots,y_k)\n= \\mathop{\\mathrm{Pol}}_{y_k} V'_2(y_k)\\,\\widehat{W}_k(x_1,y_1,\\dots,x_k,y_k) \\cr\n&&= \\mathop{\\mathrm{Pol}}_{y_k} V'_2(y_k)\\,\\sum_{\\sigma\\in \\Sigma_k}\\, C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y_k)\\,\\, \\prod_{i=1}^k W_1(x_i,y_{\\sigma(i)}) \\cr\n&&= \\sum_{\\sigma\\in \\Sigma_k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y'_k) dy'_k \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad \\mathop{\\mathrm{Pol}}_{y_k}{V'_2(y_k)W_1(x_{\\sigma^{-1}(k)},y_k)\\over y_k-y_j} \\cr\n&&= \\sum_{\\sigma\\in \\Sigma_k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y'_k) dy'_k \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad \\mathop{\\mathrm{Pol}}_{y_k}{(\\td{W}(y_k)-X(y_k))W_1(x_{\\sigma^{-1}(k)},y_k)\\over y_k-y_j} \\cr\n&&= - 
\\sum_{\\sigma\\in \\Sigma_k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y'_k) dy'_k \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad \\mathop{\\mathrm{Pol}}_{y_k}{X(y_k)W_1(x_{\\sigma^{-1}(k)},y_k)\\over y_k-y_j} \\cr\n&&= \\sum_{\\sigma\\in \\Sigma_k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y'_k) dy'_k \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\mathop{\\mathrm{Pol}}_{y_k} {(x_{\\sigma^{-1}(k)}-X(y_k))W_1(x_{\\sigma^{-1}(k)},y_k)-(x_{\\sigma^{-1}(k)}-X(y_j))W_1(x_{\\sigma^{-1}(k)},y_j)\\over y_k-y_j} \\cr\n&&= \\sum_{\\sigma\\in \\Sigma_k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y'_k) dy'_k \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& {(x_{\\sigma^{-1}(k)}-X(y_k))W_1(x_{\\sigma^{-1}(k)},y_k)-(x_{\\sigma^{-1}(k)}-X(y_j))W_1(x_{\\sigma^{-1}(k)},y_j)\\over y_k-y_j} \\cr\n\\end{eqnarray}\n\nIndeed, using \\eq{W1U1}, one sees that the last expression is a polynomial in $y_k$:\n\\begin{eqnarray}\n&& {(x_{\\sigma^{-1}(k)}-X(y_k))W_1(x_{\\sigma^{-1}(k)},y_k)-(x_{\\sigma^{-1}(k)}-X(y_j))W_1(x_{\\sigma^{-1}(k)},y_j)\\over y_k-y_j} \\cr\n&=& {{E(x_{\\sigma^{-1}(k)},y_k)\\over y_k-Y(x_{\\sigma^{-1}(k)})}-{E(x_{\\sigma^{-1}(k)},y_j)\\over y_j-Y(x_{\\sigma^{-1}(k)})}\\over y_k-y_j} \\cr\n\\end{eqnarray}\n\nWe have to check \\eq{loopeqW}, i.e. 
that $A=0$ with:\n\\begin{eqnarray}\\label{defA}\nA&:=& \\sum_{j} {\\widehat{W}_{k-j}(x_{j+1},\\dots,y_k)-\\widehat{W}_{k-j}(x_{j+1},\\dots,x_k,y_j)\\over y_k-y_j}\\, \\widehat{W}_j(x_1,\\dots,y_j) \\cr\n&& - \\widehat{U}_k(x_1,\\dots,y_k) \\cr\n&& + (x_1-X(y_k))\\,\\widehat{W}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n\\end{eqnarray}\nWe have:\n\\begin{eqnarray}\nA&=& \\sum_{j\\neq k} \\sum_\\pi\\sum_\\tau { C^{(j)}_\\tau(x_1,\\dots,y_j)\\,C^{(k-j)}_\\pi(x_{j+1},\\dots,y_k) \\over y_k-y_j} \\cr\n&& \\quad \\times \\prod_{i=1}^{j} W_1(x_{\\tau^{-1}(i)},y_i)\\,\\prod_{i=j+1}^{k} W_1(x_{\\pi^{-1}(i)},y_i) \\cr\n&& - \\sum_{j\\neq k} \\sum_\\pi\\sum_\\tau { C^{(j)}_\\tau(x_1,\\dots,y_j)\\,C^{(k-j)}_\\pi(x_{j+1},\\dots,y_j) \\over y_k-y_j}\\, W_1(x_{\\pi^{-1}(k)},y_j) \\,\\cr\n&& \\quad \\times \\prod_{i=1}^{j} W_1(x_{\\tau^{-1}(i)},y_i)\\,\\prod_{i=j+1}^{k-1} W_1(x_{\\pi^{-1}(i)},y_i) \\cr\n&& - \\sum_\\sigma \\sum_{j\\neq k} {(x_{\\sigma^{-1}(k)}-X(y_k))\\over y_k-y_j}\\,W_1(x_{\\sigma^{-1}(k)},y_k) \\cr\n&& \\quad \\times \\left(\\mathop{\\,\\rm Res\\,}_{y_k\\to y_j} C^{(k)}_\\sigma\\right) \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& + \\sum_\\sigma \\sum_{j\\neq k} {(x_{\\sigma^{-1}(k)}-X(y_j))\\over y_k-y_j} \\,W_1(x_{\\sigma^{-1}(k)},y_j) \\cr\n&& \\quad \\times \\left(\\mathop{\\,\\rm Res\\,}_{y_k\\to y_j} C^{(k)}_\\sigma\\right) \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& + (x_1-X(y_k)) \\, \\sum_\\sigma C^{(k)}_\\sigma(x_1,\\dots,y_k) W_1(x_{\\sigma^{-1}(k)},y_k) \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n\\end{eqnarray}\nUsing \\eq{CRessimple} in the last line, adding it to the 4th line, and using \\eq{CRessimple} again, we get:\n\\begin{eqnarray}\nA&=& \\sum_{j\\neq k} \\sum_\\pi\\sum_\\tau { C^{(j)}_\\tau(x_1,\\dots,y_j)\\,C^{(k-j)}_\\pi(x_{j+1},\\dots,y_k) \\over y_k-y_j} \\cr\n&& \\quad \\times \\prod_{i=1}^{j} W_1(x_{\\tau^{-1}(i)},y_i)\\,\\prod_{i=j+1}^{k} W_1(x_{\\pi^{-1}(i)},y_i) \\cr\n&& + (x_1-x_{\\sigma^{-1}(k)}) \\, 
\\sum_\\sigma C^{(k)}_\\sigma(x_1,\\dots,y_k) W_1(x_{\\sigma^{-1}(k)},y_k) \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& - \\sum_{j\\neq k} \\sum_\\pi\\sum_\\tau { C^{(j)}_\\tau(x_1,\\dots,y_j)\\,C^{(k-j)}_\\pi(x_{j+1},\\dots,y_j) \\over y_k-y_j}\\, W_1(x_{\\pi^{-1}(k)},y_j) \\,\\cr\n&& \\qquad \\prod_{i=1}^{j} W_1(x_{\\tau^{-1}(i)},y_i)\\,\\prod_{i=j+1}^{k-1} W_1(x_{\\pi^{-1}(i)},y_i) \\cr\n&& + \\sum_\\sigma \\sum_{j\\neq k} {(x_{\\sigma^{-1}(k)}-X(y_j))\\over y_k-y_j} \\,W_1(x_{\\sigma^{-1}(k)},y_j) \\cr\n&& \\quad \\times \\left(\\mathop{\\,\\rm Res\\,}_{y_k\\to y_j} C^{(k)}_\\sigma\\right) \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n\\end{eqnarray}\nUsing \\eq{Crec}, the second line exactly cancels the first one, and thus we get:\n\\begin{eqnarray}\nA&=& - \\sum_{j\\neq k} \\sum_\\pi\\sum_\\tau { C^{(j)}_\\tau(x_1,\\dots,y_j)\\,C^{(k-j)}_\\pi(x_{j+1},\\dots,y_j) \\over y_k-y_j}\\, W_1(x_{\\pi^{-1}(k)},y_j) \\,\\cr\n&& \\qquad \\prod_{i=1}^{j} W_1(x_{\\tau^{-1}(i)},y_i)\\,\\prod_{i=j+1}^{k-1} W_1(x_{\\pi^{-1}(i)},y_i) \\cr\n&& + \\sum_\\sigma \\sum_{j\\neq k} {(x_{\\sigma^{-1}(k)}-X(y_j))\\over y_k-y_j} \\,W_1(x_{\\sigma^{-1}(k)},y_j) \\cr\n&& \\quad \\times \\left(\\mathop{\\,\\rm Res\\,}_{y_k\\to y_j} C^{(k)}_\\sigma\\right) \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n\\end{eqnarray}\nwhich is a rational fraction in $y_k$ with poles at $y_k=y_j$ for some $j$.\nFrom Lemma \\ref{lemcancelpoleC}, $A$ as defined in \\eq{defA} cannot have poles at $y_k=y_j$, thus $A=0$.\n\n\\bigskip\nNow, we have to check \\eq{loopeqU}.\nUsing \\eq{CRessimple}, one has:\n\\begin{eqnarray}\n&& \\widehat{P}_k(x_1,\\dots,y_k) - \\widehat{W}_{k-1}(x_2,\\dots,x_{k},y_1) \\cr\n&=& \\mathop{\\mathrm{Pol}}_{x_1} V'_1(x_1)\\,\\widehat{U}_k(x_1,y_1,\\dots,x_k,y_k) \\cr\n&=& \\mathop{\\mathrm{Pol}}_{x_1} Y(x_1) \\sum_{\\sigma\\in \\Sigma_k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma \\,\\, \\prod_{i=1}^{k-1} 
W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad {U_1(x_{\\sigma^{-1}(k)},y_k)-U_1(x_{\\sigma^{-1}(k)},y_j)\\over y_k-y_j} \\cr\n&=& \\mathop{\\mathrm{Pol}}_{x_1} Y(x_1) \\sum_{\\sigma\\in \\Sigma_k,\\,\\sigma(1)=k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad {U_1(x_1,y_k)-U_1(x_1,y_j)\\over y_k-y_j} \\cr\n&& + \\mathop{\\mathrm{Pol}}_{x_1} Y(x_1) \\sum_{\\sigma\\in \\Sigma_k,\\,\\sigma(1)\\neq k}\\,\\sum_{j\\neq k}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma \\,\\, W_1(x_1,y_{\\sigma(1)})\\prod_{i\\neq k,\\sigma(1)} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad {U_1(x_{\\sigma^{-1}(k)},y_k)-U_1(x_{\\sigma^{-1}(k)},y_j)\\over y_k-y_j} \\cr\n&=& - \\sum_{\\sigma\\in \\Sigma_k,\\,\\sigma(1)=k}\\,\\sum_{j\\neq k}\\,\\sum_{l\\neq 1}\\,\\mathop{\\,\\rm Res\\,}_{x_1\\to x_l}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma \\,\\, \\prod_{i=1}^{k-1} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad {E(x_1,y_k)-E(x_1,y_j)-E(x_l,y_k)+E(x_l,y_j)\\over (x_1-x_l)(y_k-y_j)} \\cr\n&& + \\sum_{\\sigma\\in \\Sigma_k,\\,\\sigma(1)\\neq k}\\,\\sum_{j\\neq k}\\,\\sum_{l\\neq 1}\\,\\mathop{\\,\\rm Res\\,}_{x_1\\to x_l}\\,\\mathop{\\,\\rm Res\\,}_{y'_k\\to y_j} C^{(k)}_\\sigma \\,\\,\\prod_{i\\neq k,\\sigma(1)} W_1(x_{\\sigma^{-1}(i)},y_i) \\cr\n&& \\quad {(y_{\\sigma(1)}-Y(x_1))W_1(x_1,y_{\\sigma(1)})-(y_{\\sigma(1)}-Y(x_l))W_1(x_l,y_{\\sigma(1)})\\over x_1-x_l} \\cr\n&& \\quad {U_1(x_{\\sigma^{-1}(k)},y_k)-U_1(x_{\\sigma^{-1}(k)},y_j)\\over y_k-y_j} \\cr\n\\end{eqnarray}\n\nIn order to satisfy \\eq{loopeqU}, we must prove that $B=0$, where:\n\\begin{eqnarray}\nB&:=& \\sum_{l=2}^k {\\widehat{W}_{l-1}(x_1,y_1,\\dots,y_{l-1})-\\widehat{W}_{l-1}(x_l,y_1,x_2,\\dots,y_{l-1})\\over x_1-x_l}\\cr\n&& \\quad \\times \\widehat{U}_{k-l+1}(x_l,y_l,\\dots,x_k,y_k) \\cr\n&& +\\sum_{l=1}^{k-1} \\widehat{W}_l(x_1,\\dots,y_l)\\,\\widehat{W}_{k-l}(x_k,y_l,\\dots,y_{k-1}) \\cr\n&& - 
\\widehat{P}_k(x_1,y_1,x_2,\\dots,x_k,y_k) \\cr\n&& - (Y(x_1)-y_k)\\, \\widehat{U}_k(x_1,\\dots,y_k) \\cr\n\\end{eqnarray}\n\nThis is done in a way very similar to the previous one: one first proves, using \\eq{Crec}, that $B$ is a rational fraction of $x_1$, with poles at $x_1=x_l$.\nBut $B$ can have no pole at $x_1=x_l$, so $B=0$.\n\n\n\n{$\\bullet$}\n\n\n\n\n\n\n\\newsection{Matrix form of correlation functions}\n\nSo far, we have computed mixed correlation functions with only one trace, i.e. the generating function of connected discrete surfaces with\none boundary.\nIn this section, we generalize this theory to the computation of generating functions of non-connected discrete surfaces with any number of\nboundaries.\nIn order to derive those correlation functions, a matrix approach to the problem, similar to the one developed in\n\\cite{eynprats}, is used.\n\n\n\\begin{definition}\\label{defWmatr}\nLet $k$ be a positive integer. Let $\\pi$ and $\\pi'$ be two permutations of $\\Sigma_k$ and decompose $\\pi'^{-1} \\pi$ into the product of its irreducible cycles:\n\\begin{equation}\n\\pi'^{-1} \\pi = P_1 P_2 \\dots P_n\n\\end{equation}\nEach cycle $P_m$ of $\\pi'^{-1} \\pi$, of length $p_m$, is denoted:\n\\begin{equation}\nP_m = (i_{m,1} \\rightarrow^{\\pi} j_{m,1} \\leadsto^{\\pi'^{-1}} i_{m,2} \\rightarrow^{\\pi} j_{m,2}\\leadsto^{\\pi'^{-1}} \\dots \\leadsto^{\\pi'^{-1}} i_{m,p_m} \\rightarrow^{\\pi} j_{m,p_m} \\leadsto^{\\pi'^{-1}} i_{m,1})\n\\end{equation}\n\nFor any $(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\in \\mathbf{C}^{2k}$, we define:\n\\begin{equation}\n{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1, \\dots , x_k,y_k):= \\left\\langle\\prod_{m=1}^{n} \\left( \\delta_{p_m,1} + {1\\over N}{\\,\\rm Tr}\\: \\prod_{j=1}^{p_m} {1 \\over (M_1-x_{i_{m,j}})(M_2-y_{j_{m,j}})}\\right) \\right\\rangle\n\\end{equation}\nwhich is a $k!\\times k!$ matrix.\n\\end{definition}\n\nLet us now generalize the notion of planarity of a 
permutation.\n\n\\begin{definition}\\label{defcoplanarity}\nLet $k$ be a positive integer. Let $\\pi$ and $\\pi'$ be two permutations of $\\Sigma_k$.\n\nA permutation $\\sigma \\in \\Sigma_k$ is said to be planar wrt $(\\pi , \\pi')$ if\n\\begin{equation}\nn_{\\rm cycles}(\\pi^{-1} \\sigma)+n_{\\rm cycles}(\\pi'^{-1} \\sigma)=k+n_{\\rm cycles}(\\pi'^{-1} \\pi)\n\\end{equation}\n\nLet $\\Sigma_k^{(\\pi,\\pi')} \\subset \\Sigma_k$ be the set of permutations planar wrt $(\\pi , \\pi')$.\n\nGraphically, if one draws the sets of points $(x_{i_{1,1}},y_{j_{1,1}},x_{i_{1,2}},y_{j_{1,2}}, \\dots , x_{i_{1,p_1}},y_{j_{1,p_1}})$,\n$(x_{i_{2,1}},y_{j_{2,1}},x_{i_{2,2}},y_{j_{2,2}}, \\dots , x_{i_{2,p_2}},y_{j_{2,p_2}})$, $\\dots$ , $(x_{i_{n,1}},y_{j_{n,1}},x_{i_{n,2}},y_{j_{n,2}}, \\dots , x_{i_{n,p_n}},y_{j_{n,p_n}})$\non $n$ circles and links each pair $(x_j, y_{\\sigma(j)})$ by a line, these lines neither intersect nor go from one circle to another.\n\n\\end{definition}\n\n\n\\begin{remark}\\rm\\small One can straightforwardly see two properties of these sets:\n\\begin{itemize}\n\\item This relation of planarity wrt $(\\pi , \\pi')$ is symmetric in $\\pi$ and $\\pi'$, that is to say:\n\\begin{equation}\n\\Sigma_k^{(\\pi,\\pi')} = \\Sigma_k^{(\\pi',\\pi)}\n\\end{equation}\n\\item The planarity defined in \\eq{Defplanar} corresponds to $\\pi = Id$ and $\\pi'= S^{-1}$:\n\\begin{equation}\n{\\overline\\Sigma}_k = \\Sigma_k^{(Id,S^{-1})}\n\\end{equation}\n\n\\end{itemize}\n\\end{remark}\n\nDirectly from these definitions and the preceding results comes the following theorem, computing any generating function of discrete\nsurfaces with boundaries.\n\n\n\\begin{theorem}\\label{thWCsigmamat}\n\\begin{equation}\\encadremath{\n{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) = \\sum_\\sigma {\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(x_1,y_1, \\dots , x_k,y_k) \\prod_{i=1}^k W_1(x_i,y_{\\sigma(i)})\n}\\end{equation}\n\nwhere ${\\cal{C}}_{\\sigma}^k$ is the $k! 
\\times k!$-matrix defined by:\n\\begin{itemize}\n\\item ${\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) := 0$ if $\\sigma$ is not planar wrt $(\\pi,\\pi')$;\n\n\\item if $\\sigma$ is planar wrt $(\\pi,\\pi')$:\n\\begin{eqnarray}\\label{defCm}\n&&{\\cal{C}}_{\\sigma,\\pi,\\pi'}^k (x_1,y_1, \\dots , x_k,y_k) :=\\cr\n&&\\prod_{m=1}^a F^{(a_m)}(x_{r_{m,1}},y_{\\sigma(r_{m,1})},x_{r_{m,2}},y_{\\sigma(r_{m,2})},\\dots,x_{r_{m,a_m}},y_{\\sigma(r_{m,a_m})}) \\cr\n&&\\times \\prod_{m=1}^{\\td{a}} F^{(\\td{a}_m)}(x_{\\td{r}_{m,1}},y_{\\sigma(\\td{r}_{m,1})},x_{\\td{r}_{m,2}},y_{\\sigma(\\td{r}_{m,2})},\\dots,x_{\\td{r}_{m,\\td{a}_m}},y_{\\sigma(\\td{r}_{m,\\td{a}_m})}) \\cr\n\\end{eqnarray}\n\nwith the decompositions of $\\pi^{-1} \\sigma$ and $\\pi'^{-1} \\sigma$ into their products of cycles:\n\\begin{equation}\n\\pi^{-1} \\sigma = \\pi_1 \\pi_2 \\dots \\pi_a {\\qquad , \\qquad} \\pi'^{-1} \\sigma = \\td{\\pi}_1 \\td{\\pi}_2 \\dots \\td{\\pi}_{\\td{a}}\n\\end{equation}\n\nsuch that:\n\\begin{equation}\n\\pi_m = (r_{m,1},r_{m,2}, \\dots , r_{m,a_m}) {\\qquad , \\qquad} \\td{\\pi}_m = (\\td{r}_{m,1},\\td{r}_{m,2}, \\dots , \\td{r}_{m,\\td{a}_m})\n\\end{equation}\n\\end{itemize}\n\n\\end{theorem}\n\n\\begin{remark}\\rm\\small\nFrom the definition, one can see that $\\sigma(r_{m,a_m}) = \\pi(r_{m,1})$ and $\\sigma(\\td{r}_{m,\\td{a}_m}) = \\pi'(\\td{r}_{m,1})$ for any $m$.\n\\end{remark}\n\n\n\\subsection{Properties of the ${\\cal C}_\\sigma^k$'s.}\n\n\\begin{lemma}\nThe matrices ${\\cal{C}}_\\sigma^k$ are symmetric.\n\\end{lemma}\n\n\\proof{This follows directly from the definition.}\n\n\n\\begin{lemma}\n\\begin{equation}\n\\sum_{\\sigma} {\\cal{C}}_\\sigma^k = Id\n\\end{equation}\n\\end{lemma}\n\n\\proof{\nOne has:\n\\begin{equation}\n{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) = \\sum_\\sigma {\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k} \\prod_{i=1}^k W_1(x_i,y_{\\sigma(i)})\n\\end{equation}\nLet us shift all the $x$'s by a translation $a$ and send $a\\to\\infty$, i.e. 
replace all the $x_i$'s by $x_i+a$.\nIn the LHS, only the $\\delta$-terms of Definition \\ref{defWmatr} survive in the limit $a\\to\\infty$, and thus the LHS tends towards the identity matrix.\nIn the RHS, notice that $W_1(x_i+a,y_{\\sigma(i)}) \\rightarrow 1$.\nAnd ${\\cal{C}}_{\\sigma}^{k}$, which depends only on the differences between $x_i$'s, is independent of $a$.\n}\n\n\n\\subsection{Some commutation properties}\n\n\\begin{definition}\nLet ${\\cal M}^k(\\vec{x},\\vec{y},\\xi,\\eta)$ be the $k! \\times k!$ matrix defined by:\n\\begin{equation}\n{\\cal M}^k(\\vec{x},\\vec{y},\\xi,\\eta)_{\\pi,\\pi'}:= \\prod_{i} ( \\delta_{\\pi(i),\\pi'(i)} + {1\\over (\\xi-x_i)(\\eta-y_{\\pi(i)})})\n\\end{equation}\n\nLet ${\\cal{A}}^{(k)}(x_1,y_1, \\dots, x_k,y_k)$ be the $k! \\times k!$ matrix defined by:\n\\begin{equation}\n\\left\\{\n\\begin{array}{l}\n{\\cal{A}}^{(k)}_{\\pi,\\pi}(x_1,y_1, \\dots, x_k,y_k) := \\sum_i x_i y_{\\pi(i)} \\cr\n{\\cal{A}}^{(k)}_{\\pi,\\pi'}(x_1,y_1, \\dots, x_k,y_k) := 1 \\; \\rm{if} \\; \\pi \\pi'^{-1} = \\rm{transposition} \\cr\n{\\cal{A}}^{(k)}_{\\pi,\\pi'}(x_1,y_1, \\dots, x_k,y_k) := 0 \\; \\rm{otherwise}\n\\end{array}\n\\right.\n\\end{equation}\n\n\\end{definition}\n\n\n\n\n\n\\begin{theorem}\\label{thcommut}\n\\begin{equation}\\encadremath{\n\\forall \\sigma,\\xi,\\eta\\, , \\qquad [{\\cal M}^k(\\vec{x},\\vec{y},\\xi,\\eta),{\\cal C}_{\\sigma}^{k}(\\vec{x},\\vec{y})]= 0\n}\\end{equation}\nand\n\\begin{equation}\\encadremath{\n\\forall \\xi,\\eta\\, , \\qquad [{\\cal M}^k(\\vec{x},\\vec{y},\\xi,\\eta),{\\cal W}^{k}(\\vec{x},\\vec{y})]= 0\n}\\end{equation}\n\\end{theorem}\n\n\n\n\\proof{\nLet us define:\n\\begin{equation}\n\\widetilde{\\cal{M}}(\\vec{x},\\vec{y},\\xi,\\eta) := {\\cal{M}}(N \\vec{x}, \\vec{y}, N \\xi, \\eta)\n\\end{equation}\nand\n\\begin{equation}\n\\widetilde{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1, \\dots , x_k,y_k):= \\left\\langle\\prod_{m=1}^{n} \\left( \\delta_{p_m,1} + {\\,\\rm Tr}\\: \\prod_{j=1}^{p_m} {1\\over N}{1 
\\over (M_1-x_{i_{m,j}})(M_2-y_{j_{m,j}})}\\right) \\right\\rangle\n\\end{equation}\n\nIt was proven in \\cite{eynprats} that:\n\\begin{equation}\n [\\widetilde{\\cal M}^k(\\vec{x},\\vec{y},\\xi,\\eta),\\widetilde{\\cal W}^{k}(\\vec{x},\\vec{y})]= 0\n\\end{equation}\n\nNow, in the large $N$ limit, the factorization property \\cite{ZJDFG}, $<{\\,\\rm Tr}\\: {\\,\\rm Tr}\\:>\\sim <{\\,\\rm Tr}\\:><{\\,\\rm Tr}\\:>$, implies:\n\\begin{eqnarray}\n&& \\widetilde{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\cr\n&\\sim& N^{n_{\\rm cycles}(\\pi'^{-1} \\pi)-k}\n\\prod_{m=1}^{n} W_{p_m}(x_{i_{m,1}},y_{\\pi(i_{m,1})},\\dots,x_{i_{m,p_m}},y_{\\pi(i_{m,p_m})}) \\cr\n&\\sim& N^{n_{\\rm cycles}(\\pi'^{-1} \\pi)-k} {\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\cr\n\\end{eqnarray}\nand using theorem \\ref{thWCsigmamat}, we have:\n\\begin{eqnarray}\n&& \\widetilde{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\cr\n&\\sim& N^{n_{\\rm cycles}(\\pi'^{-1} \\pi)-k}\n \\sum_\\sigma {\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\prod_{i=1}^k W_1(x_i,y_{\\sigma(i)})\n\\end{eqnarray}\n\nNotice that\n\\begin{eqnarray}\n&& {\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k)\\cr\n&=& N^{k-n_{\\rm cycles}(\\pi^{-1} \\sigma)+k-n_{\\rm cycles}(\\pi'^{-1} \\sigma)}\n{\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(Nx_1,y_1,Nx_2,y_2, \\dots , Nx_k,y_k)\\cr\n&=& N^{k-n_{\\rm cycles}(\\pi'^{-1} \\pi)}\n{\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(Nx_1,y_1,Nx_2,y_2, \\dots , Nx_k,y_k)\\cr\n\\end{eqnarray}\n\nThus:\n\\begin{eqnarray}\n&& \\widetilde{\\cal{W}}^{k}_{\\pi,\\pi'}(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\cr\n&\\sim& \\sum_\\sigma {\\cal{C}}_{\\sigma,\\pi,\\pi'}^{k}(Nx_1,y_1,Nx_2,y_2, \\dots , Nx_k,y_k) \\prod_{i=1}^k W_1(x_i,y_{\\sigma(i)})\n\\end{eqnarray}\nThen, from \\cite{eynprats}, we have:\n\\begin{equation}\n0=\\sum_\\sigma \\left[{\\cal{M}}^{k}(N \\vec{x}, \\vec{y}, N \\xi, \\eta), 
{\\cal{C}}_{\\sigma}^{k}(Nx_1,y_1, \\dots , Nx_k,y_k) \\right]\\prod_{i=1}^k W_1(x_i,y_{\\sigma(i)})\n\\end{equation}\nIn particular, choose a permutation $\\sigma$ and take the limit where $y_i\\to Y(x_{\\sigma^{-1}(i)})$; in that limit one gets:\n\\begin{equation}\n0= \\left[{\\cal{M}}^{k}(N \\vec{x}, \\vec{Y(x_{\\sigma^{-1}})}, N \\xi, \\eta), {\\cal{C}}_{\\sigma}^{k}(Nx_1,Y(x_{\\sigma^{-1}(1)}), \\dots , Nx_k,Y(x_{\\sigma^{-1}(k)})) \\right]\n\\end{equation}\nSince this equation holds for any potentials $V_1$ and $V_2$, it holds for any function $Y(x)$, so that the $Y(x_i)$'s can be chosen independently of the $x_i$'s; thus, for any $y_1,\\dots, y_k$,\nwe have:\n\\begin{equation}\n0= \\left[{\\cal{M}}^{k}(N \\vec{x}, \\vec{y}, N \\xi, \\eta), {\\cal{C}}_{\\sigma}^{k}(Nx_1,y_1, \\dots , Nx_k,y_k) \\right]\n\\end{equation}\nSince it holds for any $x_i$'s and $\\xi$, it also holds for $x_i\/N$ and $\\xi\/N$.}\n\n\\begin{corollary}\\label{corcommut}\n\\begin{equation}\n\\forall \\sigma\\, , \\qquad [{\\cal A}^{(k)}(\\vec{x},\\vec{y}),{\\cal C}_{\\sigma}^{k}(\\vec{x},\\vec{y})]= 0\n\\end{equation}\n\\end{corollary}\n\\proof{\nThe corollary is obtained by taking the large $\\xi$ and $\\eta$ limit of theorem \\ref{thcommut} (see Appendix of \\cite{eynprats}).\n}\n\n\n\n\n\\subsection{Examples: $k=2$.}\n\n\\begin{eqnarray}\n{\\cal{W}}^{2} = \\pmatrix{W_{11}W_{22} & {W_{11}W_{22}-W_{12}W_{21}\\over (x_1-x_2)(y_1-y_2)}\\cr\n{W_{11}W_{22}-W_{12}W_{21}\\over (x_1-x_2)(y_1-y_2)} & W_{12}W_{21} }\n\\end{eqnarray}\n\nwhere $W_{ij} = W_{1}(x_i,y_j)$.\n\n\\begin{eqnarray}\n{\\cal C}^2_{Id} = \\pmatrix{1 & {1\\over (x_1-x_2)(y_1-y_2)}\\cr {1\\over (x_1-x_2)(y_1-y_2)} & 0 }\n\\end{eqnarray}\n\\begin{eqnarray}\n{\\cal C}^2_{(12)} = \\pmatrix{0 & {1\\over (x_1-x_2)(y_2-y_1)}\\cr {1\\over (x_1-x_2)(y_2-y_1)} & 1 } = 1- {\\cal C}^2_{Id} \\cr\n\\end{eqnarray}\n\n\n\n\n\n\\newsection{Application: Gaussian case}\n\nThere is an example of special interest, in particular for its 
applications to string theory in the BMN limit \\cite{BMN}: the Gaussian complex matrix model, $V_1=V_2=0$.\nIn that case one has $E(x,y)=xy-1$, and thus:\n\\begin{equation}\nW_1(x,y)={xy\\over xy-1}\n\\end{equation}\n\n\nThe loop equation defining the $W_k$'s recursively can be written as:\n\\begin{eqnarray}\n&& (x_1 y_k-1)W_k(x_1,y_1,\\dots,x_k,y_k) = \\cr \n&& x_1 \\sum_{j} {W_{j-1}(x_j,y_1,\\dots,y_{j-1})-W_{j-1}(x_1,y_1,\\dots,y_{j-1})\\over x_1-x_j}\\cr\n&& \\quad \\times W_{k-j+1}(x_j,y_j,\\dots,x_k,y_k) \\cr\n\\end{eqnarray}\n\nIts solution is then:\n\\begin{equation}\nW_k(x_1,y_1,\\dots,x_k,y_k) = \\sum_{\\sigma\\in \\Sigma_k}\\, C^{(k)}_\\sigma(x_1,y_1,\\dots,x_k,y_k)\\,\\, \\prod_{i=1}^k {x_i y_{\\sigma(i)} \\over x_i y_{\\sigma(i)} -1 }\n\\end{equation}\n\nFrom the loop equation, one can see that $W_k(x_1,y_1,\\dots,x_k,y_k)$ may have poles only when $x_i \\rightarrow y_j^{-1}$ for some $i$ and $j$.\nBecause the $C_\\sigma$'s are rational functions of all their variables and because $W_k$ has no singularity when $x_i=x_j$ or $y_i=y_j$, one can write:\n\\begin{equation}\nW_k(x_1,y_1,\\dots,x_k,y_k) = {N_k(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\over \\prod_{i,j}(x_i y_j-1)}\n\\end{equation}\n\nwhere $N_k(x_1,y_1,x_2,y_2, \\dots , x_k,y_k)$ is a polynomial in all its variables.\n\nMoreover, the loop equation taken for the values $x_k=0$ or $y_k=0$ shows that $W_k(x_1,y_1,\\dots,0,y_k) = W_k(x_1,y_1,\\dots,x_k,0)=0$.\nUsing the cyclicity property of $W_k(x_1,y_1,\\dots,x_k,y_k)$, one concludes that it vanishes whenever one of its arguments is equal to 0.\nOne can thus factorize the polynomial $N_k(x_1,y_1,x_2,y_2, \\dots , x_k,y_k)$ as follows:\n\\begin{equation}\nW_k(x_1,y_1,\\dots,x_k,y_k) = {Q_k(x_1,y_1,x_2,y_2, \\dots , x_k,y_k) \\prod_{i} x_i y_i \\over \\prod_{i,j}(x_i y_j-1)}\n\\end{equation}\nwhere $Q_k(x_1,y_1,\\dots,x_k,y_k)$ is a polynomial of degree $k-2$ with integer coefficients in all its variables.\n\n\nNotice that $Q_k(x_1,y_1,\\dots, 
y_{\\sigma(i)}^{-1},y_i, \\dots ,x_k,y_k)=0$ if $\\sigma$ is not planar.\n\nAs examples, we have:\n\\begin{itemize}\n\\item for $k=2$:\n\\begin{equation}\nW_2(x_1,y_1,x_2,y_2) = {x_1x_2 y_1 y_2\\over \\prod_{i,j}(x_i y_j-1)} {\\qquad , \\qquad} Q_2(x_1,y_1,x_2,y_2) = 1\n\\end{equation}\n\n\\item for $k=3$:\n\\begin{equation}\nW_3(x_1,y_1,x_2,y_2,x_3,y_3) = (2-\\sum_i x_i y_{i+1} + x_1 x_2 x_3 y_1 y_2 y_3)\\, {x_1 x_2 x_3 y_1 y_2 y_3\\over \\prod_{i,j}(x_i y_j-1)}\n\\end{equation}\nand\n\\begin{equation}\nQ_3(x_1,y_1,x_2,y_2,x_3,y_3) = (2- x_1y_2 - x_2 y_3 - x_3 y_1 + x_1 x_2 x_3 y_1 y_2 y_3)\n\\end{equation}\n\n\\end{itemize}\n\n\n\n\\newsection{Conclusion}\n\nIn this article, we have computed the generating functions of discs with all possible boundary conditions,\ni.e. the large $N$ limit of all correlation functions of the formal 2-matrix model.\nWe have found that the $2k$ point correlation function can be written like the Bethe Ansatz for the $\\delta$-interacting bosons,\ni.e. a sum over permutations of products of 2-point functions.\nThat formula is universal: it is independent of the potentials.\n\nAn even more powerful approach consists in gathering all possible $2k$ point correlation functions in a $k!\\times k!$ matrix ${\\cal W}^k$.\nWe have found that this matrix ${\\cal W}^k$ satisfies commutation relations with a family of matrices ${\\cal M}^k$ which depend on two spectral parameters,\nand are related to the representations of $U(n)$ \\cite{eynprats}.\nWe claim that theorem \\ref{thcommut} is almost equivalent to the loop equations and allows one to determine ${\\cal W}^k$.\n\n\\medskip\n\nIt remains to understand how all these matrices and coefficients $C_\\sigma$ are related to usual formulations of integrability,\ni.e. how to write these in terms of Yang-Baxter equations. 
For instance, the similarity with equations found\nin the proof of the Razumov-Stroganov conjecture \\cite{RazStro} remains to be understood.\n\n\\medskip\n\nOne could also hope to find a direct proof of theorem \\ref{mainth}, without having to solve the loop equations.\nIndeed, we have found that the $2k$-point function can be written in terms of $W_1$ alone, while the derivation uses the one-point functions $Y(x)$ and $X(y)$, which do not appear in the final result.\n\n\\medskip\n\nThe next step is to compute the $1\/N^2$ expansion of those correlation functions, as well as the large $N$ limit of\nconnected correlation functions.\nWe are currently working this out by combining the approach presented in this article with the Feynman graph approach of \\cite{eynloop1mat}\ngeneralized to the 2-matrix model in \\cite{eoloop2mat}.\n\n\\medskip\n\nAnother prospect is to go to the critical limit, where we describe generating functions of continuous surfaces with conformal invariance, and to interpret this as boundary conformal field theory \\cite{kostov}.\n\n\n\n\n\n\\subsection*{Acknowledgements}\nThe authors wish to thank M. Bauer, M. Berg\\`ere, F. David, P. Di Francesco, and J.B. Zuber for stimulating discussions.\nThis work was partly supported by the European network Enigma (MRTN-CT-2004-5652).\n\n\\eop\n\n\\setcounter{section}{0}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nFeature selection is commonly used to improve model interpretability, parsimony, and generalization. In the linear setting, methods such as the lasso \\citep{tibshirani1996regression}, group lasso \\citep{friedman2010note}, and elastic net \\citep{zou2005regularization} are frequently used to obtain sparse models. These techniques are valued for their ease of use and computational efficiency. 
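As a brief aside on the mechanism (not taken from any package discussed here), the $\ell_1$ penalty produces exact zeros via soft-thresholding: for an orthonormal design, the lasso solution is simply the soft-thresholded least-squares fit. The design and numbers below are toy inventions for illustration.

```python
# Minimal sketch of lasso sparsity via soft-thresholding (toy data).
# For an orthonormal design X (X^T X = I), the minimizer of
#   (1/2)*||y - X b||^2 + lam*||b||_1
# is obtained by soft-thresholding the least-squares coefficients X^T y.

def soft_threshold(z, lam):
    """Proximal operator of lam*|.|: shrinks z toward 0, zeroing small values exactly."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Orthonormal columns e1 and e2 in R^3 (rows of X are samples).
X = [(1.0, 0.0), (0.0, 1.0), (0.0, 0.0)]
y = [3.0, 0.5, 0.0]

# Least-squares coefficients X^T y.
ols = [sum(X[i][j] * y[i] for i in range(3)) for j in range(2)]
lam = 1.0
beta = [soft_threshold(c, lam) for c in ols]
print(beta)  # [2.0, 0.0] -- the weak second feature is selected out
```

The weak coefficient is set exactly to zero rather than merely shrunk, which is the selection behavior that the methods below carry over to the nonlinear setting.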
\nTo fit a sparse nonlinear model with the lasso, modelers often resort to ad-hoc feature engineering strategies like binning features \\citep{wu2016revisiting} or adding pairwise feature interactions \\citep{nelder1972generalized}, which can explode the dimension of the problem and still underperform compared to tree-based models like XGBoost \\citep{chen2016xgboost} or Random Forest \\citep{breiman2001random}.\n\nFeature selection is more challenging in nonlinear models. Wrapper-based feature selection algorithms, such as recursive feature elimination (RFE), are computationally expensive as the model must be retrained to evaluate each subset of features \\citep{darst2018using}. Feature importance metrics derived from nonlinear models, such as mean decrease in impurity (MDI) importance for tree ensembles, can be biased \\citep{zhou2021unbiased} and can fail when some features are correlated \\citep{liu2021controlburn}: a group of correlated features splits the MDI score between them, and so the score for an important group of features may be suppressed below the importance threshold for every individual feature in the group.\n\n\nIn this paper, we present \\pkg{ControlBurn}, an efficient algorithm for feature selection in nonlinear models that works well even with many correlated features. \\pkg{ControlBurn} first builds a large tree ensemble out of simple trees that isolate the effects of important single features and small subsets of features.\nIt then chooses a subset of the trees that jointly use a small number of features\nby solving a group lasso problem. \nThe algorithm is fast for large-scale data and yields an interpretable model that identifies the effects of important individual features and pairs of features. \nAn implementation of \\pkg{ControlBurn} is available as an open-source package in the \\proglang{Python} programming language. \n\n\nThe paper is organized as follows. 
Section \\ref{Methodology} presents the \\pkg{ControlBurn} algorithm and section \\ref{software} provides a tutorial of the \\proglang{Python} implementation. Additional capabilities of \\pkg{ControlBurn} are presented in section \\ref{capabilities} and an advanced application of \\pkg{ControlBurn} to emergency room triage appears in section \\ref{application}.\n\n\\section{Nonlinear feature selection with trees} \\label{Methodology}\n\n\\pkg{ControlBurn} first builds a tree ensemble (forest), say, by bagging or by gradient boosting.\nPerformance is sensitive to the quality and diversity of the tree ensemble\nand \\pkg{ControlBurn} works best when each tree uses only a few features. \nWe discuss detailed strategies for building good ensembles in section~\\ref{treebuild}.\n\\pkg{ControlBurn} then seeks a subset of the trees (subforest) that jointly use a small number of features and that predict the response well. \nIt finds this subforest by solving a weighted lasso optimization problem. \n\\pkg{ControlBurn} can find models with different sparsity levels by varying the regularization parameter in the lasso problem\nand can choose the optimal sparsity level to minimize cross-validated error.\nGiven the selected features, \\pkg{ControlBurn} can fit a final (``polished'') tree ensemble on the selected features to debias the model compared to the results after the lasso fit. 
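As a toy sketch of the selection step (the package itself relies on standard lasso solvers; the two "trees", their predictions, and the per-tree feature counts below are invented), projected gradient descent on the nonnegative objective that charges each tree for the number of features it uses drives the weight of the feature-heavier tree to zero:

```python
# Toy weighted-lasso tree selection (illustrative numbers, not ControlBurn's API).
# Tree 1 and tree 2 produce identical predictions, but tree 2 splits on two
# features, so its penalty weight u_t = 2 and the optimizer prefers tree 1.

N = 3
y = [1.0, 2.0, 3.0]
A = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]  # column t = predictions of tree t
u = [1, 2]                                # number of features used by each tree
alpha = 0.5                               # regularization strength

w = [0.0, 0.0]
eta = 0.02  # step size
for _ in range(5000):
    resid = [y[i] - sum(A[i][t] * w[t] for t in range(2)) for i in range(N)]
    for t in range(2):
        # gradient of (1/N)||y - Aw||^2 + alpha * u^T w, then project onto w >= 0
        grad = -(2.0 / N) * sum(A[i][t] * resid[i] for i in range(N)) + alpha * u[t]
        w[t] = max(0.0, w[t] - eta * grad)

print([round(x, 3) for x in w])  # [0.946, 0.0] -- the two-feature tree is dropped
```

Tree 2 fits the response exactly as well as tree 1, but because it touches two features its penalty is doubled, and the nonnegativity projection drives its weight to exactly zero.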
\n\nWe now describe each step in greater mathematical detail.\nWe begin by discussing methods for sparsifying a forest and \nrevisit methods for building appropriate forests in section~\\ref{treebuild}.\n\n\n\\subsection{General framework}\\label{generalframework}\nGiven $N$ input-response pairs $\\{(x^{(i)}, y^{(i)})\\}_{i=1}^N$, \nsuppose we have constructed $T$ trees: predictors that map each sample $x^{(i)}$\nto a prediction,\nwhich can be continuous (for regression) or binary (for classification).\nEach tree $t=1,\\ldots,T$ is associated with a vector of predictions $a^{(t)} \\in \\mathbb{R}^{N}$ and a binary vector $g^{(t)} \\in \\{0,1\\}^{P}$ that indicates which features are used as splits in tree $t$.\n\\pkg{ControlBurn} works best when the \\emph{ensemble} (set of trees) is reasonably diverse, i.e. each tree is split on a different subset of features. We discuss methods for building trees in \\S\\ref{treebuild}. \n\nOur goal is to choose a sparse weight vector $w \\in \\mathbb{R}^T$ so that \na weighted sum of the predictions of the trees $\\hat y = \\sum_{t=1}^T w_t a^{(t)}$ matches the response $y$ as well as possible.\nWe measure prediction error according to the loss function $\\ell: \\mathbb{R}^N \\times \\mathbb{R}^N \\to \\mathbb{R}$, for example:\n\\begin{itemize}\n \\item (for regression) squared error: $\\ell(\\hat y, y) = \\|y - \\hat y\\|^2$,\n \\item (for classification) logistic loss: $\\ell(\\hat y, y) =\\sum_{n=1}^N \\log(1+\\exp(-y_n \\hat y_n))$,\n \\item (for classification) hinge loss: $\\ell(\\hat y, y) =\\sum_{n=1}^N (1 - y_n \\hat y_n)_{+} $.\n\\end{itemize}\n\nFor convenience, denote $A = [a^{(1)}, \\ldots, a^{(T)}] \\in \\mathbb{R}^{N\\times T} $ \nand $G = [g^{(1)}, \\ldots, g^{(T)}] \\in \\{0,1\\}^{P\\times T}$.\nWith this notation, $Aw$ gives the predictions of the weighted tree ensemble and \nfor each feature $p \\in \\{1,\\ldots,P\\}$, $(Gw)_p$ is nonzero if and only if feature $p$ is used by the ensemble.\nSince $w 
\\geq 0$ and $G$ is binary, $\\|Gw\\|_1 = \\sum_{t=1}^T u_t w_t = u^T w$, where $u_t$ counts the number of features used by tree $t$, \\textit{i}.\\textit{e}., the number of nonzeros in $g^{(t)}$.\n\n\\pkg{ControlBurn} chooses weights $w$ to minimize the average loss $\\frac{1}{N}\\, \\ell(Aw, y)$ of the prediction $Aw$ compared to the response $y$,\ntogether with regularization that controls the number of features selected according to \nregularization parameter $\\alpha$.\nIn the case of squared loss, the optimal weight vector $w^\\star$ solves the regularized maximum likelihood estimation problem\n\n\\begin{mini!}\n{w}{\\frac{1}{N}\\left\\|y-\\mathrm{~A} w\\right\\|_{2}^{2}+\\alpha u^T w \\label{opt1obj}}{\\label{optimizationproblem1}}{}\n\\addConstraint{w\\geq 0. \\label{opt1c1}}\n\\end{mini!}\nWe say feature $p$ is selected if $(Gw^\\star)_p$ is non-zero.\n\n\\cite{friedman2003importance} also proposed using the lasso to select trees from an ensemble.\nOur problem differs by weighting each tree $t$ by the number of features $u_t$ used by the tree, in order to reduce the number of \\emph{features} used by the ensemble. \nThis model will tend to choose trees that use only a few features while preserving predictive power.\n\nFinally, \\pkg{ControlBurn} optionally fits another ensemble model of choice, \nsay random forest, on the subset of selected features.\nThis step, which we call \\emph{polishing}, debiases the predictions compared to the lasso predictions \\citep{meinshausen2007relaxed}.\n\n\\subsection{Building trees} \\label{treebuild}\n\nThe success of \\pkg{ControlBurn} depends on the original tree ensemble $\\{1 \\ldots T \\}$. The method works best when the original tree ensemble contains many trees that use only a small fraction of the total features so that the lasso problem can find feature-sparse subsets of the trees. 
For example, if $\\{1 \\ldots T \\}$ is built via random forests \\citep{breiman2001random} and each tree in the ensemble is grown to full depth, each tree $t \\in \\{1 \\ldots T \\}$ uses nearly every feature. As a result, \\pkg{ControlBurn} will select either all or none of the features; there is no feature-sparse subforest (except the empty forest). Figure \\ref{goodvbad} presents visualizations of \\textit{good} vs. \\textit{bad} ensembles to use in \\pkg{ControlBurn}. \\textit{Good} ensembles are diverse enough so that a feature-sparse subset of trees can be selected. In \\textit{bad} ensembles, selecting a single tree selects almost all of the features. \n\n\n\\begin{figure}[h]%\n \\centering\n\\subfloat[\\centering \\textit{Good} ensemble]{{\\includegraphics[width=0.485\\textwidth]{figures\/sparseforest1.png} }}%\n\\qquad\n\\subfloat[\\centering \\textit{Bad} ensemble]{{\\includegraphics[width=0.445\\textwidth]{figures\/badforest.jpg} }}%\n \\caption{\\textit{Good} vs. \\textit{Bad} ensembles for \\pkg{ControlBurn}. The colors represent which features are used per split. In the \\textit{good} ensemble, the red dashed line selects a feature sparse subforest; features $x_3$ and $x_4$ are excluded. In the \\textit{bad} ensemble, selecting even a single tree selects all the features.}%\n \\label{goodvbad}%\n\\end{figure}\n\n\nVarious algorithms to build ensembles with diverse trees are detailed in \\cite{liu2021controlburn}. We summarize several such algorithms below.\n\n\\begin{itemize}\n \\item \\textbf{Incremental Depth Bagging:} Follow the bagging procedure proposed in \\cite{breiman1996bagging} and grow trees of depth 1. When the training error of the ensemble converges, increment depth; repeat this procedure until the maximum depth is reached.\n \n \\item \\textbf{Incremental Depth Bag-Boosting:} Follow the incremental depth bagging procedure proposed above, but at each depth level, fit trees to the residuals of the model formed by the current ensemble. 
Across depth levels (boosting iterations), compute the difference in out-of-bag (OOB) error as a proxy for test error. Stop when the OOB error no longer improves.\n \n \\item \\textbf{Incremental Depth Double Bag-Boosting:} Follow the bag-boosting procedure detailed above, but when the training error of the ensemble converges, conduct a boosting iteration \\emph{without} incrementing depth. When the OOB error between boosting iterations no longer improves, increase depth and repeat the procedure until the OOB error of the model no longer improves.\n\\end{itemize}\n\n\n\n\\subsection{Solving optimization problem 1}\n\nLet $D \\in \\mathbb{R}^{T \\times T}$ be the diagonal matrix whose main diagonal is $u$. Each entry $u_t > 0$ of $u$ counts the number of features tree $t$ uses, so $D$ is positive definite and invertible. We can rewrite Problem \\ref{optimizationproblem1} as:\n\n\\begin{mini!}\n{w}{\\frac{1}{N}\\left\\|y-\\mathrm{~AD^{-1}D} w\\right\\|_{2}^{2}+\\alpha \\norm{Dw}_1 \\label{opt2obj}}{\\label{optimizationproblem2}}{}\n\\addConstraint{w\\geq 0. \\label{opt2c1}}\n\\end{mini!}\n\nLet $x=Dw$; the above formulation is equivalent to:\n\n\n\\begin{mini!}\n{x}{\\frac{1}{N}\\left\\|y-\\mathrm{~AD^{-1}} x\\right\\|_{2}^{2}+\\alpha \\norm{x}_1}{\\label{optimizationproblem3}}{}\n\\addConstraint{x \\geq 0. \\label{opt3c1}}\n\\end{mini!}\n\nProblem \\ref{optimizationproblem3} is equivalent to the non-negative garrote proposed in \\cite{breiman1995better} and can be solved by existing lasso solvers found in \\pkg{Scikit-learn} in \\proglang{Python} or \\pkg{glmnet} in \\proglang{R}. The solution vector $x$ can be mapped back to the weights by a backsolve:\n\\[w = D^{-1}x. \\]\n\nFinally, the entire regularization path for $w$ can be computed efficiently by varying $\\alpha$ and solving problem \\ref{optimizationproblem3} with warm-start continuation \\citep{friedman2010regularization}. 
This allows a practitioner to rapidly evaluate models with different feature sparsities. \n\n\\section{Software} \\label{software}\n\n\\subsection{Installation}\n\n\n\\pkg{ControlBurn} can be installed via the \\proglang{Python} Package Index \\href{https:\/\/pypi.org\/project\/ControlBurn\/}{PyPI} and is available for Python 3.7 and above. The following dependencies are required.\n\\begin{itemize}\n \\item \\pkg{Numpy} \\citep{harris2020array}\n \\item \\pkg{Pandas} \\citep{jeff_reback_2022_6053272}\n \\item \\pkg{Scikit-learn} \\citep{scikit-learn}\n\\end{itemize}\n\nThe source code for \\pkg{ControlBurn} can be found in the following \\href{https:\/\/github.com\/udellgroup\/controlburn}{repository}. To install \\pkg{ControlBurn}, run the following command in a terminal: \\code{pip install ControlBurn}.\n\n\n\\subsection{Quick start}\nBelow, we present a quick example of \\pkg{ControlBurn} on the classic Wisconsin breast cancer binary classification dataset \\citep{wolberg1994machine}. Load \\pkg{ControlBurn} into your \\proglang{Python} environment via the following.\n\n\\begin{pythoncode}\nimport ControlBurn\nfrom ControlBurn.ControlBurnModel import ControlBurnClassifier\n\\end{pythoncode}\n\nThe code below initializes a \\code{ControlBurnClassifier}, grows a tree ensemble using the default method of incremental depth bag-boosting (\\code{build_forest_method = 'bagboost'}), and solves problem \\ref{optimizationproblem1} with regularization penalty $\\alpha = 0.1$. \n\n\n\\begin{pythoncode}\ncb = ControlBurnClassifier(alpha = 0.1)\ncb.fit(xTrain, yTrain)\nfeatures = cb.features_selected_\n\nfeatures\n>>> ['mean concave points', 'worst perimeter', 'worst concave points']\n\\end{pythoncode}\n\nDuring the ensemble-building stage, 40 trees are grown; after the lasso step, 11 trees are selected. The selected ensemble is feature-sparse; the 11 trees only use the features \\code{mean concave points, worst perimeter,} and \\code{worst concave points}. 
Only 3 of the 30 features in the full dataset are selected.\n\nDuring the \\code{fit} call, \\pkg{ControlBurn} fits a polished model on the selected features. In this example, the default \\code{polish_method = RandomForestClassifier()} is used. The predictions of the polished model on the selected features can be obtained by the following.\n\n\\begin{pythoncode}\npredicted_classes = cb.predict(xTest)\npredicted_probabilities = cb.predict_proba(xTest)\n\\end{pythoncode}\n\nThe prediction accuracy\/AUC of the full 30 feature model is 0.96\/0.99 and the prediction accuracy\/AUC of the polished sparse model of 3 features is 0.96\/0.99. In two lines of code, we obtain a feature-sparse model that performs the same as the full model. \\pkg{ControlBurn} closely follows \\pkg{Scikit-learn} API conventions for easy integration into existing data science workflows. \n\n\\subsection{Tutorial with interpretability tools} \\label{cahousingtutorial}\nIn the following section, we use \\pkg{ControlBurn} on the California Housing Prices regression dataset from UCI MLR \\citep{Dua:2019}. Our goal is to select a sparse subset of features from \\code{MedInc, HouseAge, AveRooms, AveBedrms, Population, AveOccup, Latitude,} and \\code{Longitude} that jointly predict housing price well. We highlight the interpretability tools and features in the package.\n\nTo get started, run the following code.\n\n\\begin{pythoncode}\nfrom ControlBurn.ControlBurnModel import ControlBurnRegressor\ncb = ControlBurnRegressor(build_forest_method = 'doublebagboost', alpha = 0.02)\ncb.fit(xTrain,yTrain)\nprediction = cb.predict(xTest)\nfeatures = cb.features_selected_\n\nfeatures\n>>> ['MedInc', 'HouseAge', 'Latitude', 'Longitude']\n\\end{pythoncode}\n\nWe fit a \\code{ControlBurnRegressor} that uses incremental depth double bag-boosting to build the ensemble (and lasso to sparsify it). 
Double bag-boosting ensures that the effects due to single features are adequately represented before trees with two features are introduced, and similarly for each higher-order interaction.\n\nOut of the 79 trees grown, only 16 are selected. This subforest only uses the features \\code{MedInc, HouseAge, Latitude, Longitude}; only half of the features are selected. The feature \\code{MedInc} is the median income of households in the neighborhood surrounding a house, and \\code{Latitude} and \\code{Longitude} specify the location of the house. These features are important for predicting housing prices. The sparse polished model has a test mean-squared error (MSE) of 0.32 and the full model has a test MSE of 0.33. \\pkg{ControlBurn} quickly eliminates four redundant features in this example.\n\n\n\\pkg{ControlBurn} provides interpretability tools to analyze the selected subforest. Import the interpretability module and initialize an interpreter using the fitted \\code{ControlBurnRegressor} object.\n\n\\begin{pythoncode}\nfrom ControlBurn.ControlBurnInterpret import InterpretRegressor\ninterpreter = InterpretRegressor(cb,xTrain,yTrain)\n\\end{pythoncode}\n\nTo plot the feature importance scores of the selected subforest, run the following line of code.\n\n\\begin{pythoncode}\nimportance_score = interpreter.plot_feature_importances()\n\\end{pythoncode}\n\n\n\\begin{figure}[h!]\n \\centering \n\\includegraphics[width = 0.75\\textwidth]{figures\/feature_importance_scores.pdf}\n \\caption{Weighted feature importance scores for the subforest selected by \\pkg{ControlBurn}. 
}\n \\label{featureimportances}\n\\end{figure}\n\nThe weighted feature importance scores in Figure \\ref{featureimportances} are computed by multiplying the impurity-based feature importance score of each tree in the subforest by the weight the tree is assigned during the lasso step (Problem \\ref{optimizationproblem1}).\n\n\nThe interpreter can also list the features used in each tree of the selected subforest.\n\n\\begin{pythoncode}\nfeatures_per_tree = interpreter.list_subforest(verbose = True)\n\\end{pythoncode}\n\nThis command outputs the following subforest structure:\n\n\\begin{pythoncode}\n>>> ['MedInc'], ['MedInc'], ['MedInc'], ['MedInc'], ['MedInc'], ['MedInc'],\n ['MedInc'], ['MedInc'], ['MedInc'], ['MedInc'], ['MedInc'], \n ['Latitude' 'Longitude'], ['Latitude' 'Longitude'], ['MedInc' 'HouseAge'],\n ['Latitude' 'Longitude'], ['Latitude' 'Longitude']\n\\end{pythoncode}\n\nEach array shows the features used by a decision tree. In this example, the feature \\code{MedInc} appears in several single-feature trees. We can use our interpreter to plot a shape function that shows how changes in the feature contribute to the response by aggregating these single feature trees.\n\n\\begin{pythoncode}\nplot_single = interpreter.plot_single_feature_shape('MedInc')\n\\end{pythoncode}\n\n\\begin{figure}[h]\n \\centering \n\\includegraphics[width = 0.85\\textwidth]{figures\/shapefunctionsinglefeat.pdf}\n \\caption{Shape function showing the contribution of feature \\code{MedInc} towards the prediction.}\n \\label{shapefunctionfigure}\n\\end{figure}\n\n\n\nFrom the plot in Figure \\ref{shapefunctionfigure}, we see that house prices rise with the median income of the neighborhood. We can also see the nonlinearity of the dependence: for example, the very steep rise in house prices as we move to the highest-income neighborhoods.\n\nThe features Latitude and Longitude also appear in many trees together. 
We can use the interpret module to examine how this pairwise feature interaction impacts the predictions. \n\\begin{pythoncode}\nplot_pairwise = interpreter.plot_pairwise_interactions('Latitude','Longitude')\n\\end{pythoncode}\n\n\n\n\\begin{figure}[p]\n\\centering\n\\includegraphics{figures\/pairwiseheatmap.pdf}\n\\caption{Pairwise interaction between \\code{Latitude} and \\code{Longitude}.}\n\\label{pairwiseheatmap}\n\\includegraphics{figures\/CaliforniaHousing.pdf}\n\\caption{\\code{Latitude} and \\code{Longitude} pairwise interaction effect on housing price, overlaid on a map of California.}\n\\label{CAmap}\n\\end{figure}\n\nWe observe in Figure \\ref{pairwiseheatmap} that house prices are highest in the northwest of California, and lowest in the southeast. We can overlay this heatmap on a map of California to understand this effect better. In Figure \\ref{CAmap} we observe that our model identifies houses located in the San Francisco Bay Area as the most expensive; houses along the coast to Los Angeles and San Diego, in yellow, are next most expensive; and houses further inland, in green, are cheaper. These results are consistent with historical house price trends in the state.\n\n\n\n\\subsection{Regularization path}\nBy varying the regularization parameter $\\alpha$, we can compute the entire regularization path and observe how features enter the support. The cost of computing the entire path is comparable to solving the lasso problem once. Execute the code below to compute the entire regularization path and plot how the feature importance of each feature changes as the regularization parameter $\\alpha$ varies.\n\n\\begin{pythoncode}\nalphas,coef = cb.solve_lasso_path(xTrain,yTrain)\nregularization_importances = interpreter.plot_regularization_path()\n\\end{pythoncode}\n\n\\begin{figure}[h]\n \\centering \n\\includegraphics[width = \\textwidth]{figures\/lassopath.pdf}\n \\caption{Regularization path obtained by varying $\\alpha$. 
}\n \\label{regularizationpath}\n\\end{figure}\n\n\nThe vertical axis of the plot in Figure \\ref{regularizationpath} shows, for each feature, the MDI feature importance from each tree weighted by the lasso solution coefficient. Unlike the linear lasso coefficient regularization path, our feature importance paths are not necessarily monotonic. For example, in Figure \\ref{regularizationpath} when \\code{Latitude} and \\code{Longitude} drop out of the subforest, the remaining feature \\code{MedInc} is assigned a higher weight and therefore a higher weighted feature importance score.\n\n\n\\subsection{Selecting the best regularization parameter}\n\n\\pkg{ControlBurn} can automatically select a good regularization parameter by searching the regularization path for the parameter that minimizes the k-fold cross-validation error (default k = 5) of the model, using the \\code{fit_cv} method. \n\\begin{pythoncode}\nbest_alpha, support_size, best_features = cb.fit_cv(xTrain,yTrain, \n show_plot = True, kwargs = {'tol':0.001})\n\nbest_alpha\n>>> 0.012354087681630486\nsupport_size\n>>> 4\nbest feature sets\n>>> ('AveOccup', 'Latitude', 'Longitude', 'MedInc'), \n ('HouseAge', 'Latitude', 'Longitude', 'MedInc')\n\\end{pythoncode}\n\n\\begin{figure}[h]\n \\centering \n\\includegraphics[width = 1\\textwidth]{figures\/fitcvplot.pdf}\n \\caption{Left plot shows validation error vs. $\\alpha$. Right plot shows validation error vs. number of features selected. The best support size contains four features.}\n \\label{fitcvplot}\n\\end{figure}\n\nIn this example, \\pkg{ControlBurn} selects a regularization parameter of 0.012, which selects four features. \\code{Latitude}, \\code{Longitude}, and \\code{MedInc} are consistently selected as important features while \\code{AveOccup} and \\code{HouseAge} are variably included in the support in different folds. 
The \\code{fit_cv} method automatically selects the optimal value of alpha and refits a tuned ControlBurnRegressor using that parameter. The features selected by the tuned model can be accessed with the command below.\n\n\\begin{pythoncode}\ncb.features_selected_\n>>> ['MedInc', 'HouseAge', 'Latitude', 'Longitude']\n\\end{pythoncode}\n\n\\section{Advanced capabilities}\n\\label{capabilities}\n\nIn the following section, we present some advanced capabilities of the \\pkg{ControlBurn} package.\n\n\\subsection{Non-homogeneous feature costs}\nIn certain modeling applications, some features may be more expensive to obtain than others. \nIn the \\pkg{ControlBurn} framework, the user can assign each feature a cost and minimize the total cost of the selected features.\n\nLet $c_p$ represent the relative cost of selecting feature $p$ and consider the framework presented in \\S\\ref{generalframework}. Let $\\delta_t$ represent the set of features used by tree $t$. Assign each tree $t$ the following weight:\n\n\\[u_t= \\sum_{p \\in \\delta_t} c_p.\\]\n\nThe original \\pkg{ControlBurn} framework with no feature costs corresponds to the case where $c_p = 1, \\mkern9mu \\forall \\ p \\in \\{ 1 \\ldots P \\}.$\n\nIn the following example, we demonstrate how \\pkg{ControlBurn} incorporates feature costs using the body fat regression dataset from \\cite{penrose1985generalized}. The goal is to predict body fat percentage using the features: \\code{Age,\n Weight,\n Height,\n Neck,\n Chest,\n Abdomen,\n Hip,\n Thigh,\n Knee,\n Ankle, Biceps, Forearm} and \\code{Wrist}.\n \n Consider a hypothetical scenario where it is twenty times more time-consuming for an observer to measure a subject's torso, because the subject may need to remove bulky outerwear. We assign feature costs of 20 to \\code{Chest, Abdomen, Hip} and feature costs of 1 to everything else. 
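For intuition, the tree weights implied by such per-feature costs follow the rule $u_t = \sum_{p \in \delta_t} c_p$ above. A minimal sketch (a hypothetical standalone helper, not the package's internal code):

```python
# Sketch: each tree's weight u_t is the summed cost of the features it uses.
# `trees_features` holds, for each tree t, the set delta_t of feature indices
# it splits on; `costs[p]` is the cost c_p of feature p.
def tree_weights(trees_features, costs):
    return [sum(costs[p] for p in delta_t) for delta_t in trees_features]

costs = [1, 1, 20]          # feature 2 (say, Abdomen) is expensive
trees = [{0}, {0, 2}, {2}]  # features used by three hypothetical trees
print(tree_weights(trees, costs))  # -> [1, 21, 20]
```

Trees that touch an expensive feature receive a large weight, so the weighted $\ell_1$-penalty pushes them out of the subforest first.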
Assigning these feature costs and computing the entire regularization path with \pkg{ControlBurn} can be done via the following.
 
 \begin{pythoncode}
costs = [1,1,1,1,20,20,20,1,1,1,1,1,1]
cb = ControlBurnRegressor()
cb.solve_lasso_path(xTrain,yTrain,costs = costs)
\end{pythoncode}

Figure \ref{bodyfatfeaturecosts} compares the regularization paths computed by \pkg{ControlBurn} with and without feature costs. Without feature costs, \code{Abdomen} circumference is the best predictor of body fat percentage. When feature costs are included, \code{Age, Height,} and \code{Neck} replace \code{Abdomen} as the most important features.



\subsection{Feature groupings}

In some settings, users choose whether or not to acquire costly \emph{groups} of features.
Once one feature in the group has been acquired, the rest are free.
Examples might include measurements taken as part of a complete blood count (CBC): white blood cell counts are obtained at the same time as red blood cell counts.
As another example, in remote sensing, temperature, humidity, and pressure readings at a given location can be obtained by placing a single sensor. To create a model that is sensitive to these feature groupings, \pkg{ControlBurn} can penalize each tree for the \emph{groups}, not individual features, that it uses.

Consider the modeling framework presented in \S\ref{generalframework} and let $T_t$ represent the set of feature groups used by tree $t$. Let $c_g$ represent the cost of selecting group $g$ and assign each tree $t$ the weight
\[u_t= \sum_{g \in T_t} c_g.\]
The standard \pkg{ControlBurn} framework corresponds to the case where all groups are singletons and all weights $c_g$ are set to $1$.

In the section below, we return to the body fat regression example to demonstrate how \pkg{ControlBurn} can guide users' decisions on whether to acquire different feature groups.
Consider the scenario where the features \\code{Age, Weight, Height} can be obtained from the patient's medical history, and the remaining features are partitioned by their location on the patient's body. The features \\code{Neck, Chest, Abdomen} can be obtained by measuring the patient's core, the features \\code{Hip, Thigh, Knee, Ankle} can be obtained by measuring the patient's legs, and the features \\code{Biceps, Forearm, Wrist} can be obtained by measuring the patient's arm. \n\nWe can penalize selecting features over the four groups, History, Core, Legs, and Arms, each group having a cost of $1$, via the following.\n\n \\begin{pythoncode}\ngroups = [1,1,1,2,2,2,2,3,3,3,4,4,4]\ncb = ControlBurnRegressor()\ncb.solve_lasso_path(xTrain,yTrain,groups = groups)\n\\end{pythoncode}\n\nThe list \\code{groups} assigns each feature an integer group id, and in this setting \\pkg{ControlBurn} minimizes the total number of groups selected.\n\nThe bottom plot in Figure \\ref{bodyfatfeaturecosts} shows the output of this code chunk. Note that compared to the original \\pkg{ControlBurn} regularization path, the regularization path with feature grouping utilizes the feature \\code{Neck} more frequently. This is due to the fact that the feature \\code{Neck} is obtained at no additional cost when the feature \\code{Abdomen} is selected since they both belong to the feature group Core. The feature group History is introduced shortly after the feature group Core.\n\n\n\\begin{figure}[h!] 
\n \\centering\n \\includegraphics[width=\\textwidth]{figures\/bodyfatexample.pdf}\n \\caption{Effect of feature costs and feature groupings on \\pkg{ControlBurn} regularization paths.}\n \\label{bodyfatfeaturecosts}\n\\end{figure}\n\nTo obtain the weighted feature importance scores of each group, run this command.\n\n\\begin{pythoncode}\ngroup_names = ['History', 'Core', 'Legs', 'Arm']\ngroup_importances = interpreter.plot_feature_importances(groups, \n group_names, show_plot = True)\n\\end{pythoncode}\n\nThis outputs a bar plot of feature importance scores by group (Figure \\ref{groupfeatureimportanceplot.fig}).\n\n\\begin{figure}[h!] \n \\centering\n \\includegraphics[width=.75\\textwidth]{figures\/groupfeatureimportances.pdf}\n \\caption{Plot of weighted feature importance scores by group.}\n \\label{groupfeatureimportanceplot.fig}\n\\end{figure}\n\n\n\n\\subsection{Custom ensembles}\n\nThe ensemble building algorithms in \\pkg{ControlBurn} can be easily replaced with custom pre-trained ensembles. Pre-trained ensembles are restricted to collections of \\pkg{Scikit-learn} trees. For example, to train and parse in a \\code{GradientBoostingRegressor}, run the following.\n\n\\begin{pythoncode}\nfrom sklearn.ensemble import GradientBoostingRegressor\nStochasticGB = GradientBoostingRegressor(max_depth = 2, max_features = 'log2', \n subsample = 0.05)\nStochasticGB.fit(xTrain.values,yTrain)\ntree_list = np.ndarray.flatten(StochasticGB.estimators_)\n\ncb_custom = ControlBurnRegressor(build_forest_method = 'custom', \n custom_forest = tree_list)\ncb_custom.fit(xTrain,yTrain)\n\\end{pythoncode}\n\nNote that \\pkg{ControlBurn} works best with diverse ensembles that use very few features per split and shallow trees. Caution should be taken that custom ensembles are adequately diverse. Otherwise \\pkg{ControlBurn} may select only the null or the full model. 


For example, given a Random Forest with deep trees, \pkg{ControlBurn} can only select a null or full model (see the left plot in Figure \ref{rfvsebm}), whereas the trees grown by an Explainable Boosting Machine (EBM) \citep{lou2012intelligible} each use at most two features, so \pkg{ControlBurn} works well.

\begin{figure}[h!] 
 \centering
 \includegraphics[width=\textwidth]{figures/customensembles.pdf}
 \caption{Regularization path of \pkg{ControlBurn} on custom ensembles. On a random forest, where each tree uses every feature, \pkg{ControlBurn} can select either the full or null model. The package works much better on EBMs and can select subforests of varying sparsities.}
 \label{rfvsebm}
\end{figure}




\subsection{Sketching}

The optimization framework in \pkg{ControlBurn} scales linearly with the number of training observations, $N$. To reduce computation time, we can subsample (sketch) the matrix $A$. Define $S \in \mathbb{R}^{\eta \times N}$ as a sketching matrix, where $\eta$ is the number of rows subsampled uniformly from $A$. We rewrite problem (\ref{optimizationproblem1}) as

\begin{mini!}
{w}{\frac{1}{N}\left\|Sy-S\mathrm{~A} w\right\|_{2}^{2}+\alpha u^T w \label{sketchedobj}}{\label{sketchedproblem}}{}
\addConstraint{w\geq 0. \label{skectchedc1}}
\end{mini!}

To use sketching in the \pkg{ControlBurn} package, choose a proportion $\rho \in (0,1)$ of the training data to sample, which corresponds to $\eta = \lfloor N \rho \rfloor$ samples. For example, we may choose the sketching parameter $\rho = 0.1$ and run the following code:
\begin{pythoncode}
cb = ControlBurnRegressor()
cb.fit(xTrain,yTrain,sketching = 0.1)
\end{pythoncode}

Figure \ref{sketchingcomparison} compares the runtime and performance of sketching versus no sketching for \pkg{ControlBurn} on the California housing dataset.
With a sketching parameter of $\rho = 0.1$, the optimization step of \pkg{ControlBurn} runs about 3x faster (left) and the selected model performs about equally well (right).

\begin{figure}[h!] 
 \centering
 \includegraphics[width=.96\textwidth]{figures/sketchcomparison.pdf} 
 \caption{Comparison of \pkg{ControlBurn} lasso solve with and without sketching. Sketching reduces computation time with negligible performance loss.}
 \label{sketchingcomparison}
\end{figure}

\subsection{Best subset selection}


\pkg{ControlBurn} can also use best-subset feature selection to select a feature-sparse subforest. In the linear setting, advancements in combinatorial optimization solvers have made best-subset selection feasible for medium-sized datasets (with thousands of samples and tens of features) \citep{bertsimas2016best}. 
On these datasets, best-subset selection has been shown to outperform lasso on regression problems 
in the high signal-to-noise ratio regime \citep{hastie2017extended,mazumder2020discussion}. 
One major advantage of best-subset selection is that the desired number of features can be directly specified. 
Let $K$ represent the desired number of features in the selected subforest. 
Best-subset selection over a tree ensemble finds weights $w_t$ for each tree $t \in \{ 1,\ldots,T \}$ to solve
\begin{mini!}
{w}{\frac{1}{N}\left\|y-\mathrm{~A} w\right\|_{2}^{2}\label{bestobj}}{\label{bestsubsetproblem}}{}
\addConstraint{\|Gw\|_0 = K \label{bestc1}}
\addConstraint{w\geq 0, \label{bestc2}}
\end{mini!}
where $\|\cdot\|_0$ counts the number of non-zero entries in its vector argument.
Constraint (\ref{bestc1}) ensures that exactly $K$ features are selected.
As in \S\ref{generalframework}, matrix $G$ captures which features are used by each tree. If an entry of $Gw \in \mathbb{R}^P$ is zero, the corresponding feature is excluded from the subforest.
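To make the role of $G$ concrete, here is a small numerical sketch (hypothetical data, not the package's API) of how the support of $Gw$ determines which features the subforest uses:

```python
import numpy as np

# Rows of G index features, columns index trees: G[p, t] > 0 iff tree t
# uses feature p. w holds the (non-negative) tree weights.
G = np.array([[1, 0, 0],
              [0, 1, 1],
              [0, 0, 1]])        # 3 features x 3 trees
w = np.array([0.4, 0.7, 0.0])    # tree 2 receives weight zero
Gw = G @ w                       # per-feature usage in the subforest
selected = np.flatnonzero(Gw)    # features with non-zero entries
print(selected)                  # -> [0 1]; feature 2 is excluded
```

Dropping tree 2 from the subforest zeroes out the only entry of $Gw$ involving feature 2, so that feature never needs to be acquired.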


\pkg{ControlBurn} can choose features by best subset selection with the \code{solve_l0} function:
\begin{pythoncode}
cb = ControlBurnRegressor()
cb.bagboost_forest(xTrain,yTrain)
cb.solve_l0(xTrain, yTrain, K = 5)
cb.features_selected_
\end{pythoncode} 

\pkg{ControlBurn} uses \pkg{Gurobi} to efficiently solve the $\ell_0$-constrained mixed-integer quadratic program (MIQP) presented above \citep{gurobi}. 

To demonstrate some advantages of \pkg{ControlBurn} with best-subset selection, we use best-subset selection and lasso to obtain the best 3-feature model on the US Crime dataset \citep{redmond2002data}. The goal is to predict neighborhood crime rates using 127 demographic and economic features, many of which are highly correlated. Figure \ref{bestsubsetcomparison} compares the performance of the best model selected by lasso with that of the best model selected by best-subset selection. The best model selected by best-subset selection improves test MSE slightly compared to the best model selected via lasso, as is common in other problems \citep{mazumder2020discussion}.

\begin{figure}[h!] 
 \centering
 \includegraphics[width=.65\textwidth]{figures/bestsubset.pdf} 
 \caption{\pkg{ControlBurn} with best-subset selection vs. lasso. Sparse models selected by best-subset selection improve test MSE slightly compared to those selected by lasso.}
 \label{bestsubsetcomparison}
\end{figure}


\section{Real-world application: Emergency room triage}
\label{application}
We conclude by applying \pkg{ControlBurn} to a more extensive real-world example. The Yale Emergency Room Triage dataset \citep{hong2018predicting} consists of adult emergency room records from the Yale New Haven Health System from 2014 to 2017. In \cite{hong2018predicting}, the authors build binary classification models to predict hospital admissions using triage information and patient medical history.
Tree ensemble methods such as XGBoost achieve remarkable performance, with test AUC scores of 0.9. The feature importance scores of such models reveal that the emergency severity index (ESI) score and the medications the patients were taking were the most influential predictors of hospital admission. ESI scores are triage scores assigned by the admitting medical staff and rank a patient on a scale from 1 to 5. Score 1 and 2 patients are in critical condition and require immediate life-saving care. Score 3 and 4 patients are non-critical but require care to stabilize. Score 5 patients require no emergency room resources to stabilize.

We use \pkg{ControlBurn} to select the most important predictors of hospital admission among non-critical patients that require care (ESI levels 3 and 4). The real-world implications of this classification task are interesting; patients with ESI scores 3 and 4 do not obviously need to be hospitalized but have the potential to take a turn for the worse. Determining which features predict hospital admission among this sub-population may provide useful clues to medical staff conducting triage.

During the ensemble-building stage, \pkg{ControlBurn} uses bag-boosting to build 112 trees that use 896 of the 969 features in the dataset. With a regularization parameter of $\alpha = 0.0015$, the lasso step of \pkg{ControlBurn} selects a 10-tree subforest that uses just 10 features. A random forest classifier fit on just the 10 selected features achieves a test AUC score of 0.81. The same classifier fit on the full feature space performs marginally better, with an AUC score of 0.88. For around a 10 percent decrease in performance, the number of features used in the model can be reduced by a factor of 90. Table \ref{tab:tablecompare} compares the performance of the model using features selected by \pkg{ControlBurn} against the full model, on various learning algorithms described in \citep{hong2018predicting}.
Across all algorithms, the sparse model selected by \pkg{ControlBurn} performs within 10 percent of the full model.


\begin{table}[]
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{Method} & \textbf{Full Model AUC} & \textbf{Sparse Model AUC} \\ \hline
\textbf{Logistic Regression} & 0.88 & 0.82 \\ \hline
\textbf{Random Forest} & 0.88 & 0.81 \\ \hline
\textbf{XGBoost} & 0.90 & 0.85 \\ \hline
\end{tabular}
\caption{Comparison of AUC scores of full model vs. sparse model using features selected by \pkg{ControlBurn}. Methods from \cite{hong2018predicting} applied to our modified dataset: emergency room records filtered on ESI levels 3 and 4.}
\label{tab:tablecompare}
\end{table}




The subforest selected by \pkg{ControlBurn} consists of seven single-feature trees and three multi-feature trees:

\begin{pythoncode}
>>> ['meds_gastrointestinal'], ['meds_cardiovascular'], ['meds_cardiovascular'],
    ['meds_cardiovascular'], ['meds_cardiovascular'], ['meds_cardiovascular'],
    ['meds_gastrointestinal']

    ['previousdispo' 'meds_cardiovascular' 'meds_gastrointestinal'],
    ['previousdispo' 'n_admissions' 'meds_cardiovascular' 'meds_gastrointestinal'],
    ['dep_name' 'age' 'insurance_status' 'n_admissions' 'triage_vital_sbp'
     'meds_analgesics' 'meds_cardiovascular' 'meds_vitamins']
\end{pythoncode}

The most important features, \code{meds_cardiovascular} and \code{meds_gastrointestinal}, are contained in single-feature, single-split trees. These numerical features indicate the amount of cardiovascular or gastrointestinal medication a patient is taking before they present at the ER.
From these single-feature trees, it is apparent that patients currently taking these types of medications are more likely to be admitted; these medications are often used to treat chronic conditions in major organ systems.

\begin{figure}[h!]

 \centering
 \includegraphics[width=\textwidth]{figures/yale3feattree.pdf} 
 \caption{Three-feature tree selected by \pkg{ControlBurn} on the Yale emergency room triage dataset. Darker shades indicate the node increases the log-likelihood of hospital admission.}
 \label{yale3feat}
\end{figure}

In addition, the multi-feature trees selected reveal interesting interactions in the data. The three-feature tree \code{[previousdispo, meds_cardiovascular, meds_gastrointestinal]} is presented in Figure \ref{yale3feat}. The third layer of the tree splits on the ordinal feature \code{previousdispo} at the value $-0.79$. This feature represents the disposition of the patient on their last emergency room visit; patients with \code{previousdispo} values less than $-0.79$ were either admitted or left against medical advice. In both cases, the recommendation of the ER staff was to admit the patient, and as a result, patients with \code{previousdispo} values in these categories are more likely to be admitted upon subsequent ER visits. Consider the left-most branch of the tree in Figure \ref{yale3feat}. Patients along this branch present in a non-critical condition and are not currently taking cardiovascular or gastrointestinal medications, but are more likely to be admitted since they were admitted in the past. This may be due to an abundance of caution on the part of the ER staff.

The features \code{n_admissions, dep_name, age, insurance_status, meds_vitamins,\\ meds_analgesics}, and \code{triage_vital_sbp} are introduced in the deeper trees selected. The structures of these trees are complex and more difficult to interpret but can reveal interesting relationships. For example, the feature \code{n_admissions} represents the number of prior hospital admissions for a patient and behaves similarly to the feature \code{previousdispo}; patients previously hospitalized are more likely to be admitted when visiting the ER.
Patients with \code{insurance_status} equal to self-pay are less likely to be admitted, since they may need to cover the cost of hospitalization themselves. Finally, older patients are more likely to be admitted than younger ones.
 
By selecting a feature-sparse subforest, \pkg{ControlBurn} allows practitioners to identify important features and examine individual decision trees to determine how these features interact with each other and the response. The resulting subforest is much more interpretable than standard tree ensembles such as random forests, which contain hundreds to thousands of deep trees and may yield biased feature importance scores. In addition, a polished model fit on the features selected by the subforest often performs nearly as well as an ensemble fit on the entire feature space. \pkg{ControlBurn} allows practitioners to extract insights from real-world data while preserving model performance.




\section{Concluding remarks}

The package \pkg{ControlBurn} extends linear feature selection algorithms to the nonlinear setting. The algorithm behind \pkg{ControlBurn} uses trees as basis functions and penalizes the number of features used per tree via a weighted $\ell_1$-penalty. By selecting a feature-sparse subforest, \pkg{ControlBurn} can quickly isolate a subset of important features for further analysis. \pkg{ControlBurn} also contains various built-in interpretability and visualization tools that can assist data analysis. By examining the structure and decision boundaries of the selected subforest, a practitioner can discover interesting insights in the data. In addition, \pkg{ControlBurn} can automatically evaluate the best support size by rapidly computing the entire regularization path. Finally, \pkg{ControlBurn} is flexible and can accommodate various frameworks such as feature groupings and non-homogeneous feature costs.
The package can also accept custom ensembles and provides an $\ell_0$-based solver for best-subset selection over trees. 
The source code for \pkg{ControlBurn}, as well as the code and data to reproduce the experiments in this paper, can be found at \url{https://github.com/udellgroup/controlburn/}.

\section{Introduction} \label{section1}
The task of storing and exchanging information lies at the heart of our society. In many cases, information has to be transmitted in a secure fashion. Numerous examples can be found where a secure information exchange is crucial, ranging from sensitive private information such as one's own genome, to financial information, military equipment, or critical infrastructure. While various encryption schemes, as well as the strategies of breaking them, have been employed over time (see Singh for a historic overview \cite{Singh1999}), most current security standards rely on the computational complexity of so-called one-way functions \cite{Impagliazzo1989}. However, considering the steady increase in computational power, today's secrets might not stay secret forever. Additionally, breakthroughs in the field of quantum computing might endanger security schemes such as the Rivest--Shamir--Adleman (RSA) scheme, which is based on the prime factorization of large numbers, a problem that could be solved efficiently on future quantum computers using Shor's algorithm \cite{Shor1997}. Classical encryption protocols that claim to be post-quantum secure do exist, but none of them offers unconditional security \cite{Bernstein2017}. \\
This is where quantum key distribution (QKD) comes into play: a method of establishing a random bit string, securely shared between authenticated parties.
This key can then be used to encrypt messages in a way that is provably secure, protected not by computational complexity but by the laws of quantum mechanics \cite{Shor2000,Gisin2002}. In brief, the communicating parties exchange quantum two-level systems (also known as quantum bits or qubits), prepared in one of two incompatible bases, so that any eavesdropping attempt requires a measurement on the qubits, which causes detectable errors whenever the eavesdropper chooses the wrong basis. The eavesdropper must commit to a single basis, since quantum states cannot be copied \cite{Wootters1982}. In this way, any eavesdropping attempt can be detected, and the affected bit string is not used as a key to encrypt the message. Also, in contrast to classical encryption schemes, a postponed measurement of the transmitted qubit is not possible. Hence, securely transmitted data will remain secure in the future as well.
\\
Equally important, photonic quantum channels not only provide a means of perfectly secure communication, but they also enable other quantum technologies. "Flying" qubits, i.e., photons, are vital for distributed quantum computing schemes and for interfacing different parts of a future quantum computer, as famously formulated in the DiVincenzo criteria \cite{DiVincenzo2000}.
\\
Once several nodes are securely interconnected, one speaks of a quantum network, and a future vision would be to connect parties all over the world via perfectly secure quantum channels, a vision often referred to as the quantum internet \cite{Kimble2008}. This review intends to summarize and discuss some of the progress made towards achieving that vision, with a focus on implementations of quantum communication using quantum light sources based on epitaxial semiconductor quantum dots (QDs).
\\
Let us begin by introducing quantum key distribution in more detail.
Historically, the ideas of quantum cryptography date back to the late 1960s, when Wiesner first formulated his ideas on conjugate coding, a work published only more than a decade later, in 1983 \cite{Wiesner}. We refer to Bennett et al. \cite{Bennett1992a} for a first-hand historical review. Motivated by the ideas of Wiesner, Bennett and Brassard proposed the first actual key distribution protocol in 1984, later referred to as the BB84 protocol, using the quantum mechanical properties of single photons to detect eavesdropping attempts \cite{Bennett1984}. In their seminal work, the authors proposed to use the polarization of single photons to encode the bits in different, randomly chosen bases (see \textbf{Figure \ref{fig:fig1}a}). Once the photons are transmitted from Alice to Bob (typical names for the communicating parties), Bob measures them in randomly selected bases. After Alice and Bob are authenticated, they share the selected bases over a public channel and keep only the measurement results from the rounds in which preparation and measurement bases coincide (a process called "key sifting"), without communicating the actual measurement results. By comparing a subset of the results, potential eavesdropping can then be detected, since by the laws of quantum mechanics every measurement by an adversary in the wrong basis perturbs the quantum state, which leads to detectable errors. After post-processing steps, which reduce errors and enhance the secrecy, the remaining bits form the secure (or secret) key. This scheme is also referred to as a prepare-and-measure setting, since Alice prepares the photon states and Bob measures them. \par
A few years later, in 1991, Artur Ekert proposed the first QKD protocol using entangled photon pairs \cite{Ekert1991} (Figure \ref{fig:fig1}b).
The Ekert91 protocol relies on detecting an eavesdropper by monitoring the amount of violation of a Bell-like inequality \cite{Clauser1969,Bell1964}, which quantifies the degree of remaining entanglement. In addition, one can also use the distributed entangled photons directly for measurements in the BB84 bases, compare some of the results and deduce the security from the identified error rates just as in the BB84 protocol, a scheme known as BBM92 \cite{Bennett1992}.
\\
Note that sending single photons from Alice to Bob (prepare-and-measure) and letting Alice and Bob both receive photons from a central source (entanglement based) are equivalent. One can imagine QKD with entangled photons also in the way that Alice measures a photon, which then travels backwards in time to the source and then forward in time to Bob.
\\
Note also that an initial misconception was that, since the source can be placed halfway between Alice and Bob, the achievable distance is doubled in the Ekert protocol. This is not the case: although each photon needs to travel only half the distance, two photons need to reach their respective receivers, which cancels the effect of the reduced individual distance. The arrival probability is proportional to the transmittance of the quantum channels for the respective photons, $T_{1(2)}$ for a channel of length $L_{1(2)}$, which is described by an exponential absorption loss inside the optical fiber given by an absorption constant $\alpha$ (about 0.2 dB/km in ultralow-loss fibers).
The probability that both photons arrive is $P \sim T_1 \cdot T_2$, which is the same as for one photon passing through a channel of the summed lengths:
\begin{equation}
 P \sim T_1 \cdot T_2 = e^{-\alpha L_1} \cdot e^{-\alpha L_2} = e^{-\alpha (L_1+L_2)}.
\end{equation}
Nowadays, numerous other QKD protocols are known beyond the simple BB84 and Ekert protocols, ranging from decoy-state protocols for attenuated lasers, to asymmetric versions of BB84, protocols with more states (6-state) or fewer states (2-state), multi-dimensional protocols, fully or partly device-independent protocols, two-way and coherent-one-way protocols, reference-frame-independent protocols, twin-field protocols, continuous-variable protocols, and many more. For a recent overview we refer to Pirandola et al. \cite{Pirandola2019}.
\\
Having established a secure key by some QKD protocol, how can the two parties now exchange a message and be sure that nobody else knows its content? The only encryption method known to date that is provably secure in an information-theoretic sense is the one-time pad (OTP) \cite{shannon1949} (see Figure \ref{fig:fig1}c). This symmetric encryption method, first postulated by G. Vernam in 1926 \cite{Vernam1926}, is based on four requirements:
\begin{itemize}
    \item The key of Alice and Bob is secret.
    \item It is randomly generated.
    \item It has at least the same length as the message to be encrypted.
    \item It is used only once (hence the name one-time pad).
\end{itemize}
If these requirements are met, the secret key can be applied to the message using an XOR (eXclusive OR) logical operation (a modulo-2 addition). The resulting encrypted message appears to third parties as random as the key itself and therefore does not disclose any useful information. Then, the encrypted message can be sent over a public channel.
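The XOR-based encryption just described is easy to state in code. A toy sketch with byte strings (illustrative only; real OTP security rests on the four conditions above):

```python
import secrets

# One-time pad: encryption and decryption are the same XOR operation
# (bitwise modulo-2 addition) with a random key as long as the message.
def xor_otp(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data), "key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # random, secret, used only once
ciphertext = xor_otp(message, key)        # looks as random as the key
assert xor_otp(ciphertext, key) == message  # same key recovers the message
```

Because XOR is its own inverse, the identical operation serves for both encryption and decryption, as the text describes next.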
The other party can now apply the same key with an XOR operation to the received, encrypted bit string, by which the original message is recovered. The problem for any classical key distribution process, however, is to guarantee the secrecy of the distributed key. Exactly this secrecy can be guaranteed by means of QKD.
\\
Quantum cryptography is physically secure in the sense that an eavesdropping attempt can always be detected. The most basic eavesdropping scenario is the intercept-resend strategy, where an adversary just measures Alice's photons in a random basis and then sends new photons to Bob, as indicated in the upper panel of Figure \ref{fig:fig1}d. In BB84, half of these basis choices are wrong and thus disturb the quantum states. When an eavesdropping attempt is detected, it can either be compensated (if the error is low enough) by distilling a shorter key that is again secure, or the key is discarded and a new key exchange is initiated, until the final key is provably secure. In all cases, this requires an authenticated classical channel to communicate information such as the measurement bases used in the BB84 protocol. To realize that authentication, some shared, secret bits must already be present to begin with, which is why QKD is technically a key expansion scheme. Note furthermore that most QKD protocols need random numbers in one way or another, either to encode a random bit or to pick measurement bases randomly. Thus, true random number generators are essential for QKD schemes to be absolutely secure (see section \ref{section4.5}).
It is worth mentioning that quantum cryptography in general is the art of using quantum mechanical effects to perform cryptographic tasks, which includes much more than only exchanging a secure key (see e.g. Broadbent et al. for an overview \cite{Broadbent2016}).
Nevertheless, in this review we focus on the task of QKD, since the building blocks that are necessary for successful QKD with single photons (sources, channels, memories, repeaters, random numbers, etc.) will also enable many other cryptographic applications. \\
\begin{figure}[h]
 \includegraphics[width=\linewidth]{figure1.png}
 \caption{Principle of QKD: Prepare-and-measure (a) and entanglement-based (b) protocols can be used to generate a secret key that can be used with the one-time pad (c) for absolutely secure communications. Common eavesdropping strategies (d).}
 \label{fig:fig1}
\end{figure}
When implementing a QKD protocol, one of the first choices is the technology platform and the respective qubits used. To realize a "flying" qubit, photons are the obvious choice. They can be transmitted over long distances in optical fibers or through free-space links and hardly interact with their environment, which leads to unmatched coherence properties \cite{OBrien2009}. Note that in principle other types of exchangeable qubit realizations could be imagined, such as electrons in wires, but compared to photons they suffer more loss and decoherence, which strongly limits the maximum distances \cite{Hermelin2011,McNeil2011}. Due to the photon's coherence properties, speed, and the possibility of encoding information in various degrees of freedom (e.g., polarization, time bin, energy, path, phase, or orbital angular momentum; see \cite{flamini2018} for an overview), photons are a promising candidate for many quantum information tasks such as quantum computing, sensing, metrology, and of course quantum cryptography.
For QKD, different encoding schemes have been developed, which will be briefly summarized in the following \cite{Gisin2002}.
\n\\\\\nIt is straightforward to encode qubits in the polarization states of photons, which is why the very first demonstration of QKD in 1992 \cite{Bennett1992a}, as well as the first field experiment \cite{Muller1993}, used polarization encoding. With polarization optics, arbitrary polarization states can be prepared, both in optical fibers and in free space. Typically, the rectilinear and the circular bases are used for the two non-commuting BB84 bases, as they can be prepared actively in a random fashion with fast electro-optical modulators on the sender side. On the receiver side, a beam splitter followed by polarization optics that project in two different bases realizes a passive random basis choice. It is crucial to maintain the polarization state in this encoding scheme, which is why effects such as polarization mode dispersion in optical fibers must be compensated and the quantum channel must be stabilized against polarization drifts over time. \n\\\\\nA coding scheme that is better suited for fiber-based communication is phase encoding. In the simplest case, a single Mach-Zehnder-Interferometer (MZI) is spanned between Alice and Bob via two optical fibers and two couplers, at which each of the parties can apply phase shifts to their half of the MZI. In this configuration, the condition for constructive or destructive interference at the receiving detector depends only on the applied phase shifts (provided the MZI path length is stable to a fraction of the wavelength) corresponding to the encoded states. As the required path length stability in the single MZI phase encoding scheme is difficult to achieve in practice, QKD is typically done with a double MZI phase encoding, in which two unbalanced MZIs are put in series, with one being controlled by Alice and one by Bob.
If the MZI imbalance is the same at both MZIs, photon paths including the short and long arm exactly once are indistinguishable, leading to interference, which again depends on the applied phase shifts at Alice and Bob \cite{Yuan2008}. As the photons are in a superposition of different time bins, this scheme is also a form of time-bin encoding. An interesting variation is to let the photons travel back and forth, which improves the stability, as photons experience the same quantum channel twice in opposite directions, thus compensating phase distortions, albeit at the cost of reduced communication distances \cite{Martinelli1992}. \n\\\\\nSeveral other encoding schemes exist, such as encoding in the relative phase between sidebands of a central optical frequency \cite{Sun1995}, the photon path \cite{Monteiro2015}, or using correlations between time bins and photon energy, enabling high-dimensional encoding \cite{Zhong2015}. Also, several ways exist to convert the different encoding types into each other \cite{Kupchak2017}. The probability that a photon is detected in a wrong basis in a given encoding scheme is called the quantum bit error ratio (QBER). \n\\\\\nIdeally, all these encoding schemes require practical, high-quality sources of single photons (SPSs) or entangled photon pairs (EPSs). Here, high quality means that all emitted photons are in the single-photon Fock state with low multi-photon emission probability (purity), the majority of excitation pulses lead to collected photons (brightness), and photons emitted from the same source are similar (indistinguishability). Being practical implies that they are easy to operate, durable, spectrally tunable, affordable, and scalable. While many non-classical light sources have been developed to date (see e.g. \cite{Lounis2005,shields2010,Eisaman2011} for an overview), ideal SPSs or EPSs are challenging to realize.
Semiconductor QDs, however, are considered one of the most promising candidates to solve this challenge \cite{Aharonovich2016}. \\\\\n\begin{figure}\n \includegraphics[width=\linewidth]{figure2.png}\n \caption{Comparison of Photon Statistics for laser sources (a) and single photon sources (b) for different mean photon numbers $\mu$.}\n \label{fig:fig2}\n\end{figure}\nDue to the lack of close-to-ideal quantum light sources, the majority of QKD implementations, and all commercial realizations, made use of phase-randomized attenuated laser pulses, also known as weak coherent pulses (WCP) \cite{Chen2009,Zhao2006,Jennewein2000a,Poppe2004,Boaron2018a}. These can be realized more easily, but can only approximate a single photon state, since the photon number still obeys Poisson statistics. The probability to find $n$ photons in a state with mean photon number $\mu$ is $P(n,\mu) = e^{-\mu} \mu^n \/n!$. Therefore, the conditional probability to find more than one photon, given the pulse is not empty, is \cite{Gisin2002}\n\begin{equation}\nP(n>1, \mu \mid n>0, \mu)=\frac{P(n>1, \mu)}{P(n>0, \mu)}=\frac{1-P(0, \mu)-P(1, \mu)}{1-P(0, \mu)}=\frac{1-(1+\mu) e^{-\mu}}{1-e^{-\mu}} \approx \frac{\mu}{2}\n\end{equation}\nThis means that the probability that a non-empty pulse contains more than 1 photon can be made arbitrarily small, albeit at the cost of an extremely low number of photons in the quantum channel: $P(n=0, \mu) \approx 1-\mu$. QKD experiments with WCPs typically use $\mu = 0.1$, a mean photon number at which about $5\%$ of the non-empty pulses still contain more than 1 photon. This leads to possible attacks such as photon number splitting attacks (Figure \ref{fig:fig1}d) or the more realistic beam splitting attacks \cite{Lutkenhaus2002}.
\n\\\\\nIn these attacks, an adversary replaces the channel with one of lower loss, measures the photon number non-perturbatively in each pulse, blocks pulses with 1 photon and steals all but 1 photon from the pulses with more than 1 photon. In this way the eavesdropper possesses a copy of all photons that Bob receives and can measure them according to the basis information disclosed over the public channel. Since an ideal SPS would emit at maximum 1 photon per pulse, the photon number splitting attack would not be possible. In \textbf{Figure \ref{fig:fig2}}, the photon number distributions of WCPs (Poissonian) and single photons are compared for different mean photon numbers. While for a typical average number of 0.1 photons per pulse both sources have a similar number of single photon events, the attenuated laser still has a considerable amount of multi-photon contributions. One might argue that a solution is to reduce the mean photon number in WCP setups strongly and, in turn, increase the repetition rate. In this case, however, the photon detectors receive many empty pulses, significantly increasing the influence of dark counts at the detectors. \n\\\\\nUsing so-called decoy state protocols \cite{Wang2005}, these attacks on WCP implementations can be mitigated by estimating the amount of multi-photon pulses in-situ with the help of decoy states. However, a significant shrinking of the sifted key is required, since the decoy pulses cannot be used to generate a key. Provided a single-photon source (in contrast to attenuated laser pulses) with high efficiency, i.e. with an average number of photons per pulse $\mu_\text{SPS}$ comparable to average intensities typically used for signal states in decoy protocols (e.g. $\mu_\text{WCP} = 0.4$), the reduced protocol overhead gives a clear benefit for the SPS.
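The Poisson statistics quoted above are easy to check numerically (a minimal sketch; the helper names are our own):

```python
import math

def p(n, mu):
    """Poisson probability of finding n photons in a pulse with mean photon number mu."""
    return math.exp(-mu) * mu**n / math.factorial(n)

def multi_photon_given_nonempty(mu):
    """Conditional probability that a non-empty pulse carries more than one photon."""
    return (1 - p(0, mu) - p(1, mu)) / (1 - p(0, mu))

for mu in (0.5, 0.1, 0.01):
    print(mu, multi_photon_given_nonempty(mu))  # tends towards mu/2 for small mu
```

At $\mu = 0.1$ the conditional probability evaluates to about $4.9\%$, reproducing the roughly $5\%$ of non-empty pulses with more than 1 photon quoted above.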
For an overview of the state-of-the-art in attenuated laser QKD covering decoy and other types of protocols, we refer the reader to \cite{Xu2020}.\n\\\\\nMore generally, Poisson statistics predict a maximum probability of $37\%$ (reached at $\mu = 1$) for a WCP source to emit exactly 1 photon, which means that a QD SPS with an average photon number in the quantum channel above that value outperforms the WCP source when driven at the same rate. The same argument holds true for entangled photon sources. By sending laser pulses to nonlinear crystals, spontaneous parametric down conversion (SPDC) can be used to generate entangled photon pairs \cite{Kwiat1995,Couteau2018}. This process, however, is not only inefficient (conversion rates of $10^{-6}$ are typical) but also does not provide entangled photon pairs with single-photon statistics. The number of photon pairs follows a thermal distribution on timescales below the coherence time and a Poisson distribution for larger times \cite{Tapster1998,Avenhaus2008}. As a consequence, the maximum number of created photon pairs in SPDC sources is limited, since the fidelity to the maximally entangled states reduces with increasing multi-pair contributions. Hence, the higher probability to emit more than one entangled photon pair in down-conversion processes ultimately limits the key rate, no matter how fast the clock rate of the system is, as was recently shown by Hosak et al. \cite{Hosak2021}. QD SPSs, on the other hand, can provide entangled photon pairs with single photon pair statistics via the biexciton-exciton emission cascade (see section \ref{section2.1}), and have already exceeded the fundamental limits of SPDC sources \cite{Chen2018,Liu2019,Wang2019}. \n\\\\\nEven more importantly, when considering realistic finite key lengths, the number of secure bits is reduced significantly due to finite-size corrections, which arise from statistical errors due to the reduced size of the parameter estimation set.
These corrections, which are necessary to still claim a secure key exchange for finite-size keys, have a much greater effect on attenuated laser pulses than on SPSs \cite{Scarani2009,cai2009}, which makes SPSs superior even at lower average photon numbers. Such finite-size effects are especially severe in scenarios where the maximum key length is limited, for example in satellite links or mobile systems \cite{Nauerth2013,Liao2017}.\n\\\\\nMoreover, in many advanced protocols such as measurement-device-independent (MDI), fully device independent (DI) QKD (see section \ref{section3.4}) or in essential building blocks of quantum networks such as quantum repeaters (see section \ref{section4.2}), successful Bell state measurements are essential. These, however, suffer strongly from multi-photon input states, which occur more often in WCP systems due to the Poisson statistics. These measurements also require a high two-photon interference visibility, which can in principle reach up to $100\%$ for quantum light sources, while it is fundamentally limited to $50\%$ in classical WCPs \cite{Mandel1983}. Also in this case, SPSs would therefore provide a clear advantage.\n\\\\\nAnother aspect is that even today's state-of-the-art single-photon detectors can only operate at detection rates below 100 MHz \cite{SingleQuantum}. The repetition rates of modern QKD systems, however, have already reached the GHz regime, hence many QKD system clock-rates are already limited by the photon detector \cite{Gordon2005,Dixon2009}. In such a case and following the arguments presented above, using SPSs with relatively low brightness can already increase the secret key rate. Semiconductor QDs are a promising candidate for such an SPS and are therefore at the core of this review.\n\\\\\nWhile this review article will focus mainly on the experimental advances in the field, theoretical work has also advanced significantly since the proposal of the BB84 protocol.
The first rigorous security proofs were soon extended to more realistic scenarios, also considering imperfections on Alice's and Bob's side, which are unavoidable in physical implementations. A common way of calculating asymptotic secure key rates for imperfect photon sources was introduced in the so-called GLLP paper by D. Gottesman, H.-K. Lo, N. Lütkenhaus, and J. Preskill \cite{Gottesman2004}. Typically, secure key rates are calculated in the asymptotic regime, i.e. assuming that an infinite number of exchanged qubits is available so that protocol parameters can be estimated without errors from an infinite set. In that case the secure key rate $K_{\text{sec}}$ is simply the clock rate of the QKD experiment $R_0$ multiplied by the mean photon number that is coupled into the channel $\mu$, the channel transmissivity $t$, the detection setup transmission $\eta_{\text{Bob}}$, the detector efficiency $\eta_\text{det}$, the sifting factor $q$ and the secure bit fraction $r$ as\n\begin{equation} \label{eq:GLLP}\n K_\text{sec} = R_0 \cdot \mu \cdot q \cdot t \cdot \eta_\text{Bob} \cdot \eta_\text{det} \cdot r\n\end{equation}\nThe secure bit fraction $r$, following the GLLP security proof, is the amount of uncertainty an adversary has over the key, quantified by the binary Shannon entropy $h_2(e) = -e \log_2(e) - (1-e) \log_2(1-e)$ as a function of the QBER $e$, reduced by the information leakage due to the error correcting code with an efficiency of $f_\text{EC}$, and corrected for the amount of multi-photon emission events that may enable photon number splitting attacks.
This leads to a secure bit fraction of\n\begin{equation} \label{eq:GLLP_correction}\n r=A\left[1-h_{2}\left(\frac{e}{A}\right)\right]-f_\text{EC} \cdot h_{2}(e)\n\end{equation}\nHere, $A$ is the correction factor used to incorporate multi-photon emission events as $A = P_{1}\/P_\text{click}$, which describes the ratio of detector clicks resulting from single-photon events with probability $P_1$, to all detector clicks $P_\text{click}$. The quantum bit error ratio $e$ is the number of erroneous detection events divided by the total number of detection events, which can be estimated from a detection system intrinsic error $e_\text{det}$ (probability that a photon encoded in one basis is detected in the other one) and the number of dark counts $P_\text{dc}$ as\n\begin{equation} \label{eq:GLLP_QBER}\n e = e_\text{false clicks} + e_\text{dark counts} = \frac{P_\text{click} \cdot e_\text{det} + \frac{1}{2} \cdot P_\text{dc}}{P_\text{click}+P_\text{dc}} .\n\end{equation}\nNote that more refined channel models also exist, which take into account detector imperfections such as dead-time and after-pulsing, and allow for channel multiplexing \cite{eraerds2010quantum}. The GLLP equations (\ref{eq:GLLP}-\ref{eq:GLLP_QBER}) discussed above will be used at the end of section \ref{section3.2} to put the QKD experiments reported to date into perspective. An important development in the field of QKD rates over the last decade was to consider even more realistic settings beyond the asymptotic limit, in which only a finite number of qubits can be exchanged. This is an important practical scenario, e.g. for communication links to and between moving platforms such as aircraft or satellites. Several important adaptations have to be made to the estimation of the secure key rate, which are well-described in the works of Cai, Scarani and Renner \cite{Scarani2008,Renner2008,Scarani2009,cai2009}.
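The GLLP equations (\ref{eq:GLLP}-\ref{eq:GLLP_QBER}) are straightforward to evaluate numerically. The following sketch does so for an illustrative, hypothetical set of parameters of our own choosing (they are not taken from any cited experiment, and $f_\text{EC} = 1.1$ is an assumed error-correction efficiency):

```python
import math

def h2(e):
    """Binary Shannon entropy h2(e)."""
    if e <= 0.0 or e >= 1.0:
        return 0.0
    return -e * math.log2(e) - (1 - e) * math.log2(1 - e)

def qber(p_click, e_det, p_dc):
    """QBER: erroneous clicks plus half of the dark counts, over all clicks."""
    return (p_click * e_det + 0.5 * p_dc) / (p_click + p_dc)

def secure_fraction(e, A, f_ec=1.1):
    """Secure bit fraction r with multi-photon correction factor A."""
    return A * (1 - h2(e / A)) - f_ec * h2(e)

def secure_key_rate(R0, mu, q, t, eta_bob, eta_det, r):
    """Asymptotic secure key rate as the product of all factors."""
    return R0 * mu * q * t * eta_bob * eta_det * r

# illustrative numbers, not from any specific experiment
e = qber(p_click=0.1, e_det=0.01, p_dc=1e-5)
r = secure_fraction(e, A=0.9)
K = secure_key_rate(R0=1e9, mu=0.1, q=0.5, t=0.1, eta_bob=0.8, eta_det=0.8, r=r)
print(e, r, K)
```

With these assumed values the QBER stays near $1\%$ and the secure bit fraction remains well above zero; raising $e$ or lowering $A$ quickly drives $r$ negative, i.e. no secure key can be distilled.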
Importantly, incorporating finite-size effects is not just a theoretical consideration to certify security of the scheme more precisely. On the contrary, not incorporating these effects entails actual security risks, as shown by Chaiwongkhot et al. by hacking a commercial QKD system exploiting finite-size effects \cite{Chaiwongkhot2017}. Notably, secure key rates can also be calculated numerically by maximizing Eve's information over all attacks that are allowed by the laws of physics \cite{Coles2016}, which resulted in key rates that agreed with the analytically known ones \cite{Winick2018,George2021}.\n\n\section{Quantum Dot Based Quantum Light Sources}\nWhile the notion of \"single photons\" has been around for some 100 years, dating back to the early 20th century \cite{Einstein1905,LEWIS1926}, the concepts and practical realizations of light sources emitting single photons one by one took their fair time to be developed. Theoretically, a driven two-level system is an ideal single-photon emitter. While driving transitions in single atoms fulfills that goal \cite{kimble1977,ripka2018}, this route is not always practical. This was one reason for researchers to consider low-dimensional solid-state systems, which mimic atoms but provide advantages for device integration. Important steps in this endeavor were made in the 1990s, when research groups succeeded in encapsulating small islands of semiconductor material with a lower bandgap by another semiconductor with a larger bandgap, effectively forming a quasi zero-dimensional structure known as a QD \cite{Leonard1993,Grundmann1995}. Technically, this is possible by exploiting self-organized processes, e.g. in the Stranski--Krastanov growth mode \cite{Stranski1939}, during heteroepitaxy of one material onto another \cite{Lay1978}. After their discovery, QDs quickly gained interest with applications ranging from laser physics \cite{Arakawa1982} to quantum information \cite{Imamoglu1999}.
Many excellent review articles and books on epitaxial semiconductor QDs and QD-based quantum light sources have been published over the years, e.g. \cite{Michler2009a,Shields2007,Buckley2012,Senellart2017,Trivedi2020,Rodt2020}. This section therefore intends to briefly summarize the most important properties of QDs and engineered QD-based quantum light sources used for implementations of quantum communication to date.\n\n\subsection{Semiconductor Quantum Dots} \label{section2.1}\nIn QDs, the three-dimensional confinement of charge carriers in a small volume of typically a few nanometers leads to quantized energy levels in the valence and conduction band, of which the lowest ones approximate a two-level system (cf. \textbf{Figure \ref{fig:fig3} a)-c)}). The energy levels can be occupied by filling the QD potential, via optical or electrical pumping, with one electron-hole pair forming a bound exciton (X). After a characteristic period of time, i.e. the spontaneous emission lifetime (typically $\sim1\,$ns), the exciton recombines radiatively under emission of a single photon, leaving the empty QD in its ground state. As long as the exciton has not recombined, the electronic state is occupied, thus providing the basic mechanism required for an SPS emitting photons one after another, as first demonstrated by Michler et al. in 2000 \cite{Michler2000}. Note that QDs typically require cryogenic cooling to temperatures around $4-70\,$K for optimal performance. There are, however, material combinations, such as CdSe and nitride materials, as well as organic structures, which lead to higher exciton binding energies, resulting in higher possible operating temperatures up to room temperature \cite{Michler2000a,Kako2006,Arians2008,Deshpande2013,Cui2019}. In addition, cryotechnology has also advanced significantly in recent years, nowadays enabling compact and benchtop cryogenic quantum light sources as recently demonstrated by Schlehahn et al.
\cite{Schlehahn2018} (see section \ref{section4.1}).\\\\\nDue to the semiconductor environment, QDs can host multiple charge carriers in different combinations of excited states, resulting in various distinct emission lines (cf. Figure \ref{fig:fig3}d and e). Spectrally selecting either one of these emission lines results in single-photon emission, which can be experimentally verified via second-order photon autocorrelation experiments in a Hanbury Brown and Twiss (HBT) setup (see Figure \ref{fig:fig3} f and g). Advantages of QDs for quantum light generation are their excellent single photon properties, recently demonstrated to be superior to any other known type of SPS \cite{Schweickert2018}, in combination with the possibility to realize engineered devices with designed emission wavelength (see also section \ref{section2.2}). This is highly beneficial for applications in quantum information, as the generated quantum light states can be spectrally matched to the Telecom O- and C-band (at wavelengths around 1310 nm and 1550 nm) for long distance communication in optical fibers, or to atomic transitions in alkali vapor cells for quantum memory applications (see section \ref{section4.3}). As the growth of high-quality QDs at Telecom wavelengths is still a technological challenge, an alternative approach is the wavelength conversion of single photons emitted from a QD at shorter wavelengths to the Telecom range exploiting nonlinear optical effects \cite{Rakher2010}. The state-of-the-art in this field enables conversion efficiencies of $35\%$ \cite{Morrison2021}. Another important development in the field of QD-device engineering refers to the availability of several deterministic fabrication technologies, where the QDs are either grown directly at a pre-determined location (site-controlled growth) or QDs are pre-selected post-growth (according to the optical properties) and subsequently integrated in photonic devices (see Refs. \cite{Rodt2020,Liu2021} for recent review articles).
These deterministic technologies turned out to become a game changer enabling high device yields for applications in photonic quantum technologies.\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure3.png}\n \\caption{Two-level-scheme of a single photon emitter (a), Scanning Tunneling Microscope image of a single InAs\/GaAs QD (b), scanning electron microscope image of epitaxially grown InGaAs QD islands (c), overview of different excited states in a QD (d) and their respective emission lines (e), different emission lines when exciting a QD off resonantly (f left) and only one emission line in quasi-resonant excitation (f right), low correlation at zero temporal delay in a HBT experiment indicates single photon emission (g). (b) reprinted from \\href{https:\/\/aip.scitation.org\/doi\/full\/10.1063\/1.4770371}{\\textit{Keizer et al. 2012}} \\cite{keizer2012} with the permission of AIP publishing, (c) reprinted by permission from \\href{https:\/\/www.nature.com\/articles\/nature02969}{\\textit{\\textit{Reithmaier et al. 2004}}} \\cite{reithmaier2004} Copyright 2004 Springer Nature, (f,g) reprinted with permission from \\href{https:\/\/journals.aps.org\/prl\/abstract\/10.1103\/PhysRevLett.86.1502}{\\textit{Santori et al. 2001}} \\cite{santori2001} Copyright 2001 by the American Physical Society.}\n \\label{fig:fig3}\n\\end{figure}\nImportantly, again thanks to the semiconductor environment, different schemes are possible for the excitation of excitonic states in QDs. The highest purity of single-photons can be achieved in resonant excitation, where the excitation laser matches the QD emission wavelength \\cite{Muller2007}. This scheme, however, requires a suppression of the scattered laser light of typically 6 orders of magnitude using e.g. 
cross-polarized excitation and collection paths \cite{Kuhlmann2013}, excitation from the side of the sample \cite{Ates2009a}, via coupling to photonic waveguides \cite{Makhonin2014} or by using dichroic excitation profiles \cite{He2019}. Alternatively, one can excite quasi-resonantly (also known as p-shell excitation), where the closest energy level above the emission state is resonantly excited to limit the excitation of excess charge carriers and non-radiative relaxation induced dephasing \cite{holewa2017}. The third possibility is non-resonant excitation with photon energies exceeding the band gap of the matrix material, which results in an excited excitonic reservoir, from which single excitons relax to the QD states and subsequently recombine radiatively, emitting single photons. Non-resonant excitation is, from a technical viewpoint, the easiest excitation scheme, which, however, limits the single-photon quantum optical properties due to large dephasing, fluctuating charge traps and timing jitter \cite{Vural2020}.\n\\\\\nThe physics of QDs, however, is by no means limited to single-photon emission. When exciting two (or more) excitons, cascaded emissions are possible, where the individual photons have slightly different energies (\textbf{Figure \ref{fig:fig4}}c) due to the Coulomb interaction of the confined excitons and can thus be separated \cite{Hu1990,Kulakovskii1999,Moreau2001,Rodt2003,Seguin2006,Sarkar2006}. Under certain conditions, photon pairs emitted by the biexciton-exciton radiative cascade reveal polarization-entanglement, as proposed theoretically in Ref. \cite{Benson2000}, followed by several experimental demonstrations, e.g. \cite{Akopian2006,Young2006} (Figure \ref{fig:fig4} d-g).
Notably, other peculiar configurations of the energy level scheme can also be realized in this cascade, enabling the generation of polarization-entanglement via time-reordering \cite{Avron2008} or so-called twin photon states \cite{Heindel2017,Moroni2019}.\n\\\\\nNote that in order to obtain entangled photons from a QD, the so-called fine-structure-splitting (FSS), which quantifies the difference in energy of the photons emitted with different polarizations and which originates from intrinsic anisotropies in the QD, has to vanish (Figure \ref{fig:fig4} a, b). As has been shown, this FSS can be made to vanish via careful high-quality growth, external strain, or applied magnetic fields \cite{Stevenson2006}. The possibility to generate polarization entanglement, one crucial ingredient for many schemes of quantum communication, is a major advantage of using engineered QD devices. We refer the interested reader to Refs. \cite{huber2018semiconductor} and \cite{Schimpf2021} for a recent review and perspectives article on this topic, respectively.\n\\\\\n\begin{figure}\n \includegraphics[width=\linewidth]{figure4.png}\n \caption{Generation of polarization entangled photon pairs from single QDs: By reducing the polarization splitting between the different decay paths of the biexciton-exciton emission cascade (from a to b), the 'which-path' information is erased, resulting in entangled photons. The emitted photons have different energies (c), which allows separating them. Looking at the polarization correlations in different bases confirms the entanglement (d-g). Figures reproduced from \href{https:\/\/iopscience.iop.org\/article\/10.1088\/1367-2630\/8\/2\/029}{\textit{Young et al. 2006}} \cite{Young2006} under Creative Commons BY license.}\n \label{fig:fig4}\n\end{figure}\nHow QDs compare to other SPSs such as defects, carbon nanotubes, atomic-vacancy centers in crystals, or two-dimensional materials was reviewed by Aharonovich et al.
\cite{Aharonovich2016}, with the result that QDs have the best overall performance due to the short lifetimes and the high purity and indistinguishability that are possible. There are also solid-state emitters such as crystal defects that emit directly in the Telecom range, even at room temperature \cite{Zhou2018}, but the photon purity is worse than what is possible in QD sources. Overall, semiconductor QDs are currently the closest approximation to an ideal quantum light source. \n\n\subsection{Engineered Quantum Light Sources for Quantum Communication} \label{section2.2}\nAs mentioned in the previous section, QDs provide all assets to achieve fast emission of single, indistinguishable and entangled photons at high rates. To further boost their performance in terms of out-coupling efficiency and emission rate, solid-state QDs are commonly incorporated into photonic structures \cite{Barnes2002}. This section will provide an overview of the photonic structures used for QD-based SPSs employed in QKD experiments to be discussed in detail in section \ref{section3}.\n\\\\\n\begin{figure}\n \includegraphics[width= \linewidth]{figure5.png}\n \caption{(a) SEM image of early QD micropillar cavities for optical excitation. (b,c) Cross-sectional SEM image and illustration of a single-photon light emitting diode (LED) based on electrically contacted p-i-n doped micropillar cavities with self-organized InAs\/GaAs-QDs. (d) Schematic of a single-photon LED with embedded InP-based QDs. (e) Schematic fabrication process and design layout of an entangled light emitting diode. (f,g) Schematic representation and SEM image of a QD embedded in an optical horn structure, whose emission is excited and collected through the backside of the substrate-wafer using an AR-coating. (h) Schematic representation of a GaAs quantum dot grown by the droplet etching method in an AlGaAs layer.
(i) 3D cross-sectional atomic force microscopy measurement of a nanohole inside an AlGaAs layer after droplet etching. (j) Top view of the atomic force microscopy measurement in (i). (a) reprinted by permission from Springer Nature: \href{http:\/\/www.nature.com\/articles\/nature01086}{\textit{ Santori et al.}} \cite{Santori2002} Copyright 2002, (b,c) adapted from \href{http:\/\/aip.scitation.org\/doi\/10.1063\/1.3284514}{\textit{Heindel et al. 2010}} \cite{Heindel2010} with the permission of AIP Publishing. (d) reprinted from \href{http:\/\/aip.scitation.org\/doi\/10.1063\/1.3497016}{\textit{Reischle et al. 2010}} \cite{Reischle2010} with permission from AIP Publishing, (e) reprinted from \href{http:\/\/dx.doi.org\/10.1038\/s41467-018-03251-7}{\textit{Müller et al. 2018}} \cite{Muller2018} under Creative Commons CC BY license. (f,g) reprinted from \href{http:\/\/aip.scitation.org\/doi\/10.1063\/1.2723177}{\textit{Takemoto et al. 2007}} \cite{Takemoto2007} Copyright 2007 by American Institute of Physics. (h,i,j) reprinted from \href{https:\/\/www.nature.com\/articles\/ncomms15506}{\textit{Huber et al. 2017}} \cite{huber2017highly} under Creative Commons CC BY license.}\n \label{fig:fig5}\n\end{figure}\nMicropillar (or micropost) cavities, since their first application for the enhancement of QD-emission at the turn of the century \cite{Gerard1998,Solomon2001}, have been widely used, and were also employed for the QKD experiments with optically-excited QD-based single-photon sources discussed in the following section \cite{Waks2002a,Intallura2009}. A scanning electron microscope (SEM) image of micropillar cavities is shown in \textbf{Figure \ref{fig:fig5}a}. The etched pillars consist of a QD layer sandwiched between two distributed Bragg reflector (DBR) mirror sections. The top DBR mirror is usually designed to have a lower reflectivity than the bottom DBR mirror, to promote emission into the upper hemisphere towards the collection optics.
Purcell enhancements of the emission rate by a factor of 5-6 and extraction efficiencies above $60\%$ can typically be achieved with micropillar structures \cite{Santori2002,Gazzano2013,Somaschi2016,Ding2016}.\n\\\\\nMicropillar structures were also used to realize electrically triggered QD-based single-photon sources emitting in the near-infrared (900 nm) \cite{Heindel2010} (cf. Figure \ref{fig:fig5} b,c) or visible (650 nm) \cite{Reischle2010} (cf. Figure \ref{fig:fig5}d) spectral range. These highly engineered devices have in turn been employed for the first QKD experiments with electrically injected QD-devices \cite{Heindel2012}. In both device approaches, a layer of self-organized QDs is embedded in an undoped (intrinsic) section of a micropillar structure sandwiched between a top, p-doped, and a bottom, n-doped, DBR mirror, forming a p-i-n diode structure electrically contacted via gold bars \cite{Boeckler2008}. The operation wavelength can thereby be determined by the choice of the QD-material, being InAs\/GaAs and InP\/GaAs in case of the device emitting in the near-infrared and visible spectral range, respectively, highlighting the flexibility QDs offer for application in quantum technologies. Electrically driven micropillar-based single-photon sources were reported to reach overall efficiencies (including electrical losses) exceeding $60\%$ \cite{Schlehahn2016a} and Purcell enhancement of close to 5 \cite{Heindel2010}, values which are similar to their non-electrical counterparts; in contrast, however, excitation pulse-repetition rates in the GHz-range are straightforward to achieve \cite{Hargart2013,Schlehahn2016a}.\n\\\\\nIn another approach, QDs were embedded in diode structures to electrically generate polarization entangled photon pairs via the biexciton-exciton radiative cascade (cf. section \ref{section2.1}) \cite{Salter2010,Muller2018}.
This so-called entangled light emitting diode (ELED) has later been employed for the first entanglement-based QKD-experiments using QD-devices \\cite{Dzurnak2015}. The fabrication scheme to realize InP-based ELEDs for entangled photon emission in the telecom C-band is shown in Figure \\ref{fig:fig5}e.\n\\\\\nA different type of photonic structure that was used for QKD experiments in the telecom C-band \\cite{Takemoto2010} is a so-called optical horn (cf. Figure \\ref{fig:fig5}f). It consists of a QD embedded in a fabricated cone on a substrate, with the horn acting as a reflector that directs photons upwards through the antireflection-coated substrate towards the collection optics. A SEM image of a fabricated optical horn structure is shown in Figure \\ref{fig:fig5}g. The horn structure does not show Purcell enhancement, but photon collection efficiencies of close to $11\\%$ were achieved \\cite{Takemoto2007}.\n\\\\\nThe last type of QD-device that shall be mentioned here was used for recent implementations of entanglement-based QKD experiments \\cite{Basset2021,Schimpf2021a} and utilized optically-pumped symmetric GaAs QDs grown by the droplet-etching method, which show short radiative lifetimes and small fine-structure splittings, enabling high entanglement fidelities \\cite{huber2017highly}. A schematic representation of such a symmetric GaAs QD situated in a hole in an AlGaAs-matrix is shown in Figure \\ref{fig:fig5}h. A three-dimensional atomic force microscopy cross-section image is shown in Figure \\ref{fig:fig5}i, with the top view of the measurement in Figure \\ref{fig:fig5}j. 
Here, the QDs were additionally combined with solid immersion lenses to increase their out-coupling efficiency to about $8\\%$.\n\\\\\nOther photonic structures to enhance the performance of solid-state QDs include nanowires \\cite{Claudon2010}, photonic crystal cavities \\cite{Madsen2014,Kim2016}, circular Bragg gratings \\cite{Davanco2011,Liu2019,Wang2019}, open cavity systems \\cite{Tomm2020}, on-chip waveguide based structures \\cite{Uppu2020}, and monolithic microlenses \\cite{Gschrey2015}. The latter proved to be useful for the development of practical plug\\&play single-photon sources \\cite{Schlehahn2018,Musial2020} as well as tools for the performance optimization of single-photon QKD \\cite{Kupko2020}, both evaluated very recently in a QKD-testbed operating at O-band wavelengths \\cite{Kupko2021} (see section \\ref{section4} for details).\n\n\\section{Realizations of Quantum Key Distribution using Quantum Dots}\\label{section3}\nIn the previous section, we introduced QD-based quantum light sources as promising candidates for applications in quantum information. In this section, we review the implementations of QKD based on respective QD-devices reported to date. Notably, other types of quantum emitters have also been used successfully for QKD experiments with sub-Poissonian light states. One of the two very first demonstrations of the BB84 protocol with single photons used Nitrogen vacancy (NV) centers in diamond to create the single photons \\cite{Beveratos2002}. Also later, NV and Silicon vacancy (SiV) centers were used to implement QKD protocols \\cite{Alleaume2004,Leifgen2014}, but their secure key rates were smaller, since excited states in NV centers have longer radiative lifetimes. In this review, however, we restrict ourselves to QD-based implementations. \n\\\\\nConcerning the QKD scheme, there are two major groups of protocols for which QD sources are used. 
As discussed in section \\ref{section1}, QDs can either be used for prepare-and-measure type QKD (like in the BB84 protocol) or the QD source can be placed in between Alice and Bob, creating polarization entangled photon pairs distributed to Alice and Bob (like in the Ekert protocol). Let us begin by discussing BB84-like implementations of sub-Poissonian QKD.\n\n\\subsection{Quantum Key Distribution using Single Photons} \\label{section3.1}\nThe very first implementation of single-photon QKD by Waks et al. dates back to 2002 \\cite{Waks2002a}. The authors implemented the BB84 protocol with the bits being polarization-encoded in single-photon states from a triggered QD source \\cite{Santori2002}. Here, InAs QDs were encapsulated in micropillar cavities made from distributed Bragg reflectors (such as the one shown in Figure \\ref{fig:fig5}a). In their setup (\\textbf{Figure \\ref{fig:fig6}a}), the QD sample was kept in a liquid Helium cryostat and was optically excited with a pulsed Ti:Sapphire laser in a reflection configuration. The average single-photon rate was measured by sending a part of the photon stream to a detector, while the polarization of the rest was modulated using an electro-optic modulator (EOM), after selecting a fixed polarization with the polarizing beam splitter (PBS). The non-resonant optical excitation at a rate of 76 MHz led to an average photon number of $\\mu = 0.007$ per pulse injected into the quantum channel. The photons were then sent to Bob via a free-space link with variable attenuation, where the polarization state was measured. The beam splitter (BS) realized a random basis choice and a time interval analyzer (TIA) was used for synchronization.\n\\\\\nIn this way, they measured bit rates and QBERs for different attenuations, from which they could calculate the asymptotic key rate values shown as the red data points (Figure \\ref{fig:fig6}b), with the green line being the calculated values. 
In their work, they also introduced an upper bound on the probability for two-photon emission of SPSs of $P_{2, S P S} \\leq \\frac{1}{2} \\mu^{2} g^{(2)}(0)$, where $g^{(2)}(0)$ is the auto-correlation value at zero delay measured in a Hanbury Brown and Twiss (HBT) experiment \\cite{Brown1956}, which is a measure for the probability of two photons being emitted into the same pulse. By calculating a simple, asymptotic secure key-rate \\cite{Waks2002} with a measured QBER of $2.5\\%$, they obtained a maximum communication rate without channel losses of 25 kbit/s. By introducing an additional loss, they found a maximum tolerable channel loss of 28 dB up to which communication, i.e. a positive key rate, is possible.\n\\\\\nRepeating the same experiment with attenuated laser pulses (without applying decoy states at that time), with an average photon number small enough to have the same multi-photon probability as the QD source, experimental and calculated key rates were obtained. While for small losses the laser gave a higher communication rate, the QD single photons outperformed the laser at larger losses, since compensating potential photon number splitting attacks used up more bits for the attenuated laser pulses \\cite{Lutkenhaus2000}. Ultimately, the QD could tolerate about 4 dB higher losses than the laser without decoy states. Using their secure bits, Waks et al. encoded an image of Stanford University's Memorial Church and decoded it again using the transmitted secure key (Figure \\ref{fig:fig6}c). \\\\\nNote, however, that nowadays decoy-state protocols allow for the in-situ estimation of the multi-photon contribution to mitigate photon-number-splitting attacks and hence much higher average photon numbers in the laser pulses \\cite{Wang2005}, which is why the asymptotic rate of Waks et al. would not beat a decoy-state implementation with WCPs. On the other hand, the upper bound on the probability of multi-photon events used by Waks et al. 
for their QD-source is not tight; following recent discussions, the measured value is an overly pessimistic measure for the purity of the photon source \\cite{Grunwald2019}. Hence, the true probability for a multi-photon emission event of a QD-SPS is likely to be even lower. This further increases the advantage sub-Poissonian SPSs can have over attenuated laser pulses in implementations of QKD.\n\\\\\n\\begin{figure}[h]\n \\includegraphics[width=0.5 \\linewidth]{figure6.png}\n \\caption{Sketch of the setup of the first QD single-photon QKD experiment by Waks et al. (a), the measured and calculated asymptotic secure key rates for the QD single photons (red crosses) and attenuated laser pulses (blue stars) (b) and the image that was securely transmitted (c). Reprinted from \\href{https://www.nature.com/articles/420762a}{\\textit{Waks et al. 2002}} \\cite{Waks2002a} with permission of Springer Nature. Copyright 2002 Springer Nature.}\n \\label{fig:fig6}\n\\end{figure}\nAn appealing scheme to improve secure key rates in BB84-QKD for a given QD-source was presented by Aichele et al. in 2004 \\cite{Aichele2004}. Here, the cascaded emission of the biexciton and exciton states was used to generate two single photons at different energies from one excitation pulse, which effectively doubled the achievable key rate. Using a Michelson interferometer to introduce a temporal delay between both photons, each photon's polarization could be modulated individually (time-multiplexing) on Alice's side before coupling them to the same free-space quantum channel. On Bob's side, a second Michelson interferometer was used to separate the photons again and measure their polarization separately. Note that the two photons are separated and recombined in the time domain, instead of by energy, to make the setup less vulnerable to emission-energy fluctuations. Using this approach, Aichele et al. 
demonstrated a rate of secure bits per pulse of $5 \\cdot 10^{-4}$, which results in a communication rate of 38 kbit/s at the given laser repetition rate (76 MHz).\n\\\\\nWhile these first implementations used free-space optical (FSO) links as the quantum channel, which is ideal for achieving low losses at large distances in air-ground \\cite{Nauerth2013} or space-ground \\cite{Bedington2017,Yin2020,Sidhu2021} link scenarios, the use of optical fibers as quantum channels has many practical advantages for ground-based communication scenarios. Besides the fact that no direct line of sight is required, in contrast to FSO links, which are susceptible to weather conditions and atmospheric turbulence, the technology can be made compatible with the world-wide fiber-based communication infrastructure. Several QD-based QKD experiments have been reported employing single photons coupled to an optical fiber acting as a quantum channel. Collins et al. were the first to report on a fiber-based QKD experiment using single photons generated by a QD emitting at a wavelength of 900 nm, sent through 2 km of optical fiber \\cite{Collins2010}.\n\\\\\nSince optical fibers provide the lowest transmission losses at wavelengths in the second and third telecom window, i.e. O- and C-band, it is beneficial to use QDs operating at these wavelengths, such as the ones first fabricated by Ward et al. \\cite{Ward2005}. Incorporating such QDs into micropillar cavities, Intallura et al. demonstrated single-photon QKD at 1300 nm in 2009, optically exciting the QD-device above bandgap \\cite{Intallura2009}. The quantum channel was represented by a standard SMF-28 optical fiber of 35 km length. As mentioned in section 1, it can be difficult to maintain a certain polarization state over a long fiber transmission. For this reason, Intallura et al. used a phase-encoding scheme, which also employed a multiplexed reference laser to match the path differences in Alice's and Bob's MZIs (\\textbf{Figure \\ref{fig:fig7}a}). 
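The impact of the operation wavelength on the channel transmission can be illustrated with a short calculation. The attenuation coefficients below are approximate textbook values for standard single-mode fiber, not figures from the cited works:

```python
def transmission(alpha_db_per_km: float, length_km: float) -> float:
    """Fraction of photons surviving a fiber with attenuation alpha [dB/km]."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

# Approximate attenuation of standard single-mode fiber (illustrative values):
ALPHA = {"900 nm": 2.5, "O-band (1310 nm)": 0.35, "C-band (1550 nm)": 0.2}

for band, alpha in ALPHA.items():
    # Transmission over the 35 km link length used by Intallura et al.
    print(f"{band}: {transmission(alpha, 35):.2e}")
```

At 900 nm essentially no photons survive 35 km of fiber, while at C-band wavelengths roughly a fifth of them do, which is why telecom-wavelength QDs are so attractive for fiber-based QKD.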
For the QKD demonstration, the entire system ran at a clock rate of 1 MHz, limited by the single-photon detectors' response and dead time. Using the GLLP rate equation \\cite{Gottesman2004}, which is asymptotic but incorporates multi-photon events (see also section 3.3), a measured QBER of $5.9\\%$, and an error correction efficiency of 1.17, the authors calculated a maximum secure key-rate of about 160 bit/s and managed to achieve positive key rates at a distance of 35 km, overcoming the maximum distance achieved by WCPs (without decoy states) in their setup (Figure \\ref{fig:fig7}b). \\\\\n\\begin{figure}[h]\n \\includegraphics[width=0.5 \\linewidth]{figure7.png}\n \\caption{First QKD demonstration at telecom wavelengths in 2009 by Intallura et al. Using a phase-encoding scheme (a), they achieved QKD over a 35 km distance (b) via optical fiber transmission. ©IOP Publishing. Figure reproduced with permission from \\href{https://iopscience.iop.org/article/10.1088/1464-4258/11/5/054005}{\\textit{Intallura et al. 2009}} \\cite{Intallura2009}. All rights reserved.}\n \\label{fig:fig7}\n\\end{figure}\nOnly one year later, Takemoto et al. presented the first implementation of single-photon QKD at C-band wavelength (1560 nm), benefiting from even lower losses in optical fibers \\cite{Takemoto2010}. Using a QD incorporated into a horn-like structure (depicted in Figure \\ref{fig:fig5}f) \\cite{Takemoto2007} and a QKD setup for phase encoding (see \\textbf{Figure \\ref{fig:fig8}}a for the setup), the authors achieved a maximum secure communication distance of 50 km, calculated with the asymptotic GLLP rate equation, setting a new record for single-photon QKD at that time.\n\\\\\nA few years later, the same group improved their QKD-implementation further by using better detectors and QDs of higher quality. At close to the maximum distance, i.e. 
strong channel losses, the number of detected signal photons is on the order of the dark counts of the detectors, ultimately limiting the achievable communication distance. Using single-photon detectors based on superconducting nanowires \\cite{Hadfield2005}, the dark count contribution could be significantly reduced and hence the maximum achievable distance increased. Moreover, the authors also improved the single-photon purity of their QD source, achieving an auto-correlation value at zero time delay of only $g^{(2)}(0) = 0.005$, so that the necessary corrections for multi-photon emission events, which would allow photon number splitting, became much smaller. With these improvements, Takemoto et al. achieved in 2015 a transmission distance of 120 km, the longest reported so far for fiber-based BB84 QKD using a sub-Poissonian SPS \\cite{Takemoto2015} (cf. Figure \\ref{fig:fig8}b).\n\\\\\n\\begin{figure}[h]\n \\includegraphics[width=0.5 \\linewidth]{figure8.png}\n \\caption{Implementation of the BB84 QKD protocol using single photons at 1550 nm created by InAs QDs in horn-like structures. Using a phase-encoding scheme (a), 50 km QKD was achieved (grey line in b) \\cite{Takemoto2010} and later, in an improved version of the experiment that used superconducting nanowire single-photon detectors, more than 120 km were possible (red line in b), while improved structures promise even longer distances (other lines in b) \\cite{Takemoto2015}. Figures reproduced from \\href{http://www.nature.com/articles/srep14383}{\\textit{Takemoto et al. 2015}} \\cite{Takemoto2015} under Creative Commons CC BY license.}\n \\label{fig:fig8}\n\\end{figure}\nIn all QKD experiments presented so far, the QD-devices were excited optically using pulsed laser systems. 
A major advantage of semiconductor-based quantum light sources, however, is the possibility to realize complex engineered devices including diode structures for the injection of electrical charge carriers, which in turn allows for electrical triggering of the QD emission. This is highly beneficial for applications, not only because higher degrees of device integration become possible (bulky laser systems become obsolete), but also because the clock rate of implementations of quantum cryptographic protocols is easily adjustable (see also section 4.1). While the first electrically injected QD-based SPS was reported already in 2002 by Yuan et al. \\cite{Yuan2002}, it took one decade until the first QKD experiments could be realized. In the work by Heindel et al. in 2012 \\cite{Heindel2012}, three research groups joined forces to demonstrate lab-scale BB84-QKD experiments with two different types of single-photon light emitting diodes emitting in the near infrared (NIR) and visible (VIS) spectral range, at 897 nm and 653 nm, respectively (see \\textbf{Figure \\ref{fig:fig9}}). Using engineered QD-devices based on different material systems and growth techniques, their work highlighted the flexibility semiconductor-based quantum light sources offer for implementations of quantum information. While the NIR-SPS was based on an electrically contacted micropillar cavity exploiting the Purcell effect to enhance the photon extraction efficiency \\cite{Heindel2010}, the QDs were integrated in a quasi-planar DBR cavity structure in case of the VIS-SPS \\cite{Reischle2010} (cf. Figure \\ref{fig:fig5}d). Using the NIR-SPS, the authors achieved sifted key rates of 27.2 kbit/s (35.4 kbit/s) at a QBER of $3.9\\%$ ($3.8\\%$) and a $g^{(2)}(0)$ value of 0.35 (0.49) at moderate (high) excitation under pulsed current injection at a clock-rate of 182.6 MHz. The VIS-SPS was triggered at 200 MHz, delivering sifted keys at a rate of 95.0 kbit/s at a QBER of $4.1\\%$ and a $g^{(2)}(0)$ value of 0.49. 
While both the achieved suppression of multi-photon events and the key rates left room for future improvements, these first proof-of-principle QKD experiments using electrically operated semiconductor single-photon sources can be considered a major step forward in photonic quantum technologies. Shortly after the lab-scale QKD experiments reported in 2012, the authors integrated the NIR-emitting SPS into a compact (for that time) quantum transmitter setup to be employed for QKD field experiments in downtown Munich (see Figure \\ref{fig:fig9}d). As reported by Rau et al. \\cite{Rau2014}, the QKD experiments comprised a 500 m FSO link between two buildings of the Ludwig-Maximilians-Universität Munich, with the transmitter and receiver units synchronized via GPS-disciplined oscillators. Using their single-photon light emitting diode modulated at a clock-rate of 125 MHz, the authors achieved sifted key rates of 7.4 kbit/s (11.6 kbit/s) at a quantum bit error ratio of $7.2\\%$ ($6.3\\%$) and a $g^{(2)}(0)$ value of 0.39 (0.46) at low (moderate) excitation.\n\\\\\n\\begin{figure}\n\\centering\n \\includegraphics[width= 0.75 \\linewidth]{figure9.png}\n \\caption{First QKD experiments with two electrically triggered QD single-photon sources emitting at 900 nm (InAs) and 650 nm (InP). The setup in (a) was used to measure sifted key rates as well as photon purity under different excitation conditions for both QD structures (b,c) \\cite{Heindel2012}. (d) In a second experiment, the single-photon emitting diode emitting at 900 nm was employed for field experiments in downtown Munich using a 500 m free-space optical (FSO) link connecting two buildings of the Ludwig-Maximilians-University Munich \\cite{Rau2014}, (a-c) reproduced with permission from \\href{https://iopscience.iop.org/article/10.1088/1367-2630/14/8/083001}{\\textit{Heindel et al. 2012}} \\cite{Heindel2012} © IOP Publishing and Deutsche Physikalische Gesellschaft. 
Reproduced by permission of IOP Publishing.}\n \\label{fig:fig9}\n\\end{figure}\nIn order to become competitive with attenuated laser systems using decoy-state protocols, the efficiency and clock-rate of electrically contacted QD SPSs still have to be significantly increased. Promising steps in this direction were reported by Schlehahn et al., who achieved photon extraction efficiencies of single-photon light emitting diodes of up to $61\\%$ (into the first lens) and trigger rates in the GHz range \\cite{Schlehahn2016a}. A promising route to also improve the mean photon number $\\mu$ inside the quantum channel is a tighter integration of the SPSs, e.g. by the direct coupling to optical fibers allowing for practical plug-and-play quantum light sources (see section \\ref{section4.1}). Very recently, Kupko et al. evaluated for the first time the performance of a state-of-the-art plug-and-play SPS operating at O-band wavelengths for BB84-QKD \\cite{Kupko2021}.\n\n\\subsection{Quantum Key Distribution using Entangled Photon Pairs} \\label{section3.2}\nThe implementations discussed in the previous section were all based on the BB84 protocol in a prepare-and-measure configuration. In this section, we review QKD experiments using entangled photon states reported to date. By Alice and Bob each measuring one of the two photons in a random basis and then keeping only the results for which they used the same basis, they obtain perfectly correlated bit strings; by quantifying the remaining degree of entanglement, for instance via violations of the Bell-type CHSH inequality \\cite{Clauser1969}, they can uncover eavesdropping attempts as described in the Ekert protocol \\cite{Ekert1991} explained in section 1. Entanglement monogamy guarantees that if Alice's and Bob's photons are maximally entangled, the system cannot be entangled with any other system; hence an adversary's state is separable from the state of Alice and Bob, and the adversary cannot have any information. 
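The CHSH test underlying the Ekert protocol can be sketched numerically. For a maximally entangled polarization pair, the correlation between analyzer angles $a$ and $b$ is $E(a,b)=\cos 2(a-b)$, and the standard choice of angles yields $S=2\sqrt{2}$, beyond the classical bound of 2; this is an illustrative calculation, not data from the cited experiments:

```python
import math

def E(a_deg: float, b_deg: float) -> float:
    """Polarization correlation E(a, b) = cos(2(a - b)) for a maximally
    entangled photon pair, with analyzer angles given in degrees."""
    return math.cos(2 * math.radians(a_deg - b_deg))

def chsh_S(a, a_prime, b, b_prime) -> float:
    # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
    return E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

# Standard optimal angles give S = 2*sqrt(2) ~ 2.83; any eavesdropping
# degrades the correlations and pushes |S| towards (or below) 2.
print(chsh_S(0, 45, 22.5, 67.5))
```

In the experiments reviewed below, the measured S-parameter exceeding 2 plays exactly this role of certifying the distributed entanglement.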
From the degree of deviation from a maximum violation, the amount of necessary privacy amplification can be deduced.\n\\\\\nTwo important questions must be answered in entanglement-based implementations: how is the entanglement created and how is it distributed? QDs provide an excellent source of entangled photon pairs via the biexciton-exciton emission cascade, as explained in section 2. Since the photons obey single-photon statistics, higher generation rates of entangled photons are possible than with SPDC sources \\cite{Chen2018,Wang2019,Liu2019}. In the case of SPDC sources, the fidelity decreases significantly above pair emission efficiencies of 0.1, due to the emission of multiple pairs, which reduces the purity of the emitted states. The distribution of the entangled photons works like the distribution of single photons: they are either distributed via free-space links or via optical fibers, which should maintain their polarization states.\n\\\\\nThe first proof-of-concept demonstration of QD-based entanglement QKD was reported by Dzurnak et al. \\cite{Dzurnak2015} in 2015. The authors performed an in-lab experiment using entangled photons generated via an electrically driven QD-device, referred to as an entangled-light emitting diode (ELED) as introduced first by Salter et al. \\cite{Salter2010} (similar to Figure \\ref{fig:fig5}e). The photon pairs were distributed in optical fibers to two detectors with random basis choices (\\textbf{Figure \\ref{fig:fig10}a}). Since the entangled photons emitted from a QD via the biexciton-exciton cascade have slightly different energies, they could be spectrally separated to send them to different receivers (Figure \\ref{fig:fig10}b). In this way, they were able to transfer a total of 2000 secure bits. To prove that photons emitted during the same excitation pulse were entangled, the violation of the CHSH inequality was tested, as indicated by the S-parameter exceeding 2 for vanishing time delays in Figure \\ref{fig:fig10}c. 
The authors obtained 1 MHz of photon counts on each detector, which, due to tight temporal filters, resulted in 10 sifted bits of key per minute. \\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure10.png}\n \\caption{First realization of entanglement-based QKD with entangled photons from a single, electrically excited QD presented by Dzurnak et al. Since the photons have different wavelengths (b), they could be spectrally separated and distributed to the receivers (a), where the preserved entanglement was measured via the violation of the CHSH inequality, as shown by the S-parameter $>$ 2 (c). Reprinted from \\href{http://aip.scitation.org/doi/10.1063/1.4938502}{\\textit{Dzurnak et al. 2015}} \\cite{Dzurnak2015}, with the permission of AIP Publishing.}\n \\label{fig:fig10}\n\\end{figure}\nRecently, two other successful implementations of entanglement-based QKD were reported back-to-back, which both used the same type of optically excited QD source emitting at 785 nm (embedded between a bottom DBR and a top solid immersion lens). Importantly, and in contrast to all previous QKD-implementations, both groups used a coherent excitation scheme, i.e. they optically excited the biexciton state of the quantum emitter via two-photon resonant laser pulses. While this is not required per se for the implementations of QKD summarized in the following, it is nevertheless an important step towards quantum repeaters and other advanced schemes of quantum information relying on high photon indistinguishability. \n\\\\\nIn the implementation reported by Schimpf et al., the photons were transmitted via a 350 m optical fiber, resulting in an asymptotic secure bit rate of 86 bits/s \\cite{Schimpf2021a}. Basset et al. realized both a fiber link and a free-space link, allowing for a direct comparison of both channel types operated with the same source \\cite{Basset2021} (\\textbf{Figure \\ref{fig:fig11}a}). 
The authors found that for their communication distance of 250 m, the fiber link enabled more stable conditions for entanglement distribution, resulting in the observation of a larger Bell parameter showing fewer fluctuations (Figure \\ref{fig:fig11}b). This led to a higher communication rate of about 500 secure bits/s, compared to about 100 secure bits/s in the free-space link (Figure \\ref{fig:fig11}c), the main reason being the difficulty of correcting instabilities and drift in the free-space optics, as manifested in the higher QBER in the free-space channel (Figure \\ref{fig:fig11}d). The authors counteracted distortions of the polarization state propagating in the fiber by actively monitoring and compensating for the change in polarization during the experiment.\n\\\\\nInterestingly, the two groups used slightly different protocols for entanglement-based QKD. Basset et al. \\cite{Basset2021} implemented an asymmetric version of the original Ekert protocol. For this, the authors used a subset of the transmitted bits to evaluate the violation of the CHSH inequality and quantified the amount of entanglement left after the transmission of the photons. In this way, they determined the amount of eavesdropping that could have occurred and the amount of privacy amplification that was necessary. To do so, the photons were measured in the basis set known to maximally violate the CHSH inequality on Alice's side only, while Bob measured them in the conventional BB84 bases. This asymmetric approach reduced the number of necessary detectors.\n\\\\\nSchimpf et al. \\cite{Schimpf2021a}, on the other hand, implemented an entanglement-based version of the BB84 protocol known as BBM92 \\cite{Bennett1992}. Here, Alice and Bob measure their respective halves of the entangled state in two conjugate bases, and the amount of necessary privacy amplification is determined solely from evaluating the deviations of a subset of results measured and compared by Alice and Bob (as in BB84). 
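The classical post-processing of BBM92 (random conjugate bases, sifting, QBER estimation on a disclosed subset) can be sketched in a toy simulation; all numbers, including the assumed $2\%$ channel error probability, are illustrative and not taken from the cited experiments:

```python
import random

random.seed(1)
N = 20000     # number of distributed entangled pairs
FLIP = 0.02   # assumed channel/entanglement error probability (illustrative)

alice_bases = [random.randint(0, 1) for _ in range(N)]
bob_bases = [random.randint(0, 1) for _ in range(N)]
alice_bits, bob_bits = [], []
for a_basis, b_basis in zip(alice_bases, bob_bases):
    bit = random.randint(0, 1)  # ideal correlated outcome of the pair
    if a_basis == b_basis:      # sifting: keep only matching-basis rounds
        err = random.random() < FLIP  # noise models imperfect entanglement
        alice_bits.append(bit)
        bob_bits.append(bit ^ err)
# A disclosed subset of the sifted key estimates the QBER, which then
# determines the amount of error correction and privacy amplification.
sample = len(alice_bits) // 10
qber = sum(a != b for a, b in zip(alice_bits[:sample], bob_bits[:sample])) / sample
print(len(alice_bits), f"QBER estimate ~ {qber:.3f}")
```

About half of the rounds survive sifting, and the estimated QBER converges to the assumed error probability, without any Bell-type measurement.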
Thus, the amount of entanglement is not actively monitored and no Bell-like inequality violation is measured.\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure11.png}\n \\caption{Entanglement-based QKD experiments: Schimpf et al. realized the BBM92 QKD protocol \\cite{Schimpf2021a}. Their setup (a) allowed them to achieve secure key transmission with the rate shown in (c) and a QBER (d) far below the maximum allowed value for secure transmission. Basset et al. used an asymmetric Ekert protocol \\cite{Basset2021} to perform QKD between two buildings (b); the photons were transmitted both through an optical fiber (top panel in e) and a free-space link (bottom panel in e), and the degree of entanglement (second column in e) and the key rate (third column in e) were compared. (a,c,d) reproduced from \\href{https://advances.sciencemag.org/lookup/doi/10.1126/sciadv.abe8905}{\\textit{Schimpf et al. 2021}} \\cite{Schimpf2021a} under Creative Commons Attribution License 4.0, (b,e) reprinted from \\href{https://advances.sciencemag.org/lookup/doi/10.1126/sciadv.abe6379}{\\textit{Basset et al. 2021}} \\cite{Basset2021} © The Authors, some rights reserved; exclusive licensee AAAS. Distributed under a \\href{http://creativecommons.org/licenses/by-nc/4.0/}{CC BY-NC 4.0 license.}}\n \\label{fig:fig11}\n\\end{figure}\nInterestingly, QKD has so far not been implemented with entangled photons generated by QDs at telecom wavelengths. The generation of entangled photons at a wavelength of 1550 nm from a QD source has already been shown in experiments by Olbrich et al. \\cite{Olbrich2017}. While in their experiment the quantum emitter was optically excited with a continuous-wave laser, Shooter et al. recently realized a pulsed, electrically excited QD-source generating entangled photon pairs at GHz clock rates in the telecom C-band \\cite{Shooter2020}. 
The entangled photon pairs showed a fidelity to the maximally entangled state of $89\\%$ and were distributed over a fiber link of 4.6 km length at record rates, representing an important step towards high-performance QKD systems exploiting sub-Poissonian entanglement sources. Additionally, using electrically excited InAs-QDs, the same group already reported entanglement generation via the biexciton-exciton cascade up to a temperature of 93 K \\cite{Muller2018}, a temperature no longer requiring liquid Helium and allowing for the integration into compact Stirling cryocoolers \\cite{Schlehahn2015} (see also section 4.1). \\\\\nIn another experiment, Xiang et al. achieved $91\\%$ fidelity of polarization entangled photons from electrically excited QDs, which had been transmitted over an 18 km long metropolitan optical fiber \\cite{Xiang2019}. Sufficiently maintaining the photon polarization over the fiber transmission for over a week was made possible by multiplexing a polarization reference signal through the fiber and actively stabilizing it. Note that the current record for the longest distance of entanglement distribution via fiber-links is almost 100 km and was achieved by sending polarization entangled photons through a submarine optical fiber between the islands Malta and Sicily \\cite{Wengerowsky2019}. Although the entangled photons were created via a nonlinear downconversion process here, and are thus not ideal for QKD applications due to the Poisson statistics, it is remarkable how the entanglement was preserved over such a long distance within the optical fiber in a 'real-world scenario'. The success of the entanglement preservation was indicated by the clear violation of the CHSH inequality, with a value for the S-parameter of 2.5 after the entanglement distribution. 
The experiment achieving the longest distance of free-space entanglement distribution to date sent photons from a SPDC-based source on the quantum satellite 'Micius' to earth \\cite{Yin2017}. Here, the photons propagated over a distance of 500-1000 km and the polarization entanglement was preserved with a fidelity of $86\\%$.\n\\\\\nThe status quo of QD-based QKD experiments is summarized in \\textbf{Table \\ref{tab:table1}}, listing all implementations reported to date. For a comparison of different implementations, the reader should keep in mind the following comments. While some experiments were proof-of-concept demonstrations with a fixed communication distance and/or lab-based experiments, others were performed at varying distance or attenuation to simulate different channel losses. In these cases, we give a range of parameters. Moreover, some references quote an average value for the QBER in their system, while this value in general depends on the channel loss. Furthermore, some of the quoted parameters rely on assumptions made in the key rate calculation (which imperfections to include and which attacks to consider), while others depend highly on the equipment used (such as the number of detector dark counts), making a direct comparison difficult. Finally, key rates that we calculated according to the asymptotic GLLP equation (cf. 
equation \\ref{eq:GLLP}, section \\ref{section1}) from parameters given in the respective publication are indicated by footnotes.\n\\\\\n\\newgeometry{left=1cm,right=1cm}\n\\begin{table}\n\\centering\n\\caption{Summary of QKD implementations employing QD-based quantum light sources (abbreviations: polarization (Pol), single-photon source (SPS), entangled photon-pair source (EPS), free space optical (FSO), fiber-coupled (FC))}\n\\label{tab:table1}\n\\begin{threeparttable}\n\\begin{tabular}{ccccccccccl}\n\\hline\nProtocol &\n Coding &\n \\begin{tabular}[c]{@{}c@{}}Source/ \\\\ Pump\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}} Clock-rate \\\\ {[}MHz{]} \\end{tabular} &\n $\\lambda$ {[}nm{]} &\n \\begin{tabular}[c]{@{}c@{}}Max Sifted \\\\ Key Rate\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}Max Secure \\\\ Key Rate\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}QBER \\\\ {[}\\%{]}\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}FSO/ \\\\ FC \\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}Max\\\\ distance\\end{tabular} &\n Ref. \\\\ \\hline\nBB84 & Pol & SPS / optic. & 76 & 880 & - & 25 kbps & 2.5 & FSO & In-Lab & \\cite{Waks2002a} \\\\\nBB84 & Pol & SPS / optic. & 0.01 & 635 & 15 bps & 5 bps & 6.8 & FSO & In-Lab & \\cite{Aichele2004} \\\\\nBB84 & Phase & SPS / optic. & 1 & 1300 & 10 bps & 1 bps & 5.9 & FC & 35 km & \\cite{Intallura2009} \\\\\nBB84 & Pol & SPS / optic. & 40 & 895 & - & 8-600 bps & 1.2-21.9 & FC & 2 km & \\cite{Collins2010}\\\\\nBB84 & Phase & SPS / optic. & 20 & 1580 & 15-386 bps & 3-9 bps & 3.4-6 & FC & 50 km & \\cite{Takemoto2010} \\\\\nBB84 & Pol & SPS / elect. & 182.6 & 898 & 8-35 kbps & - & 3.8-6.7 & FSO & In-Lab & \\cite{Heindel2012} \\\\\nBB84\\tnote{ a)} & Pol & SPS / elect. & 200 & 653 & 9-117 kbps & - & 4.1-6.0 & FSO & In-Lab & \\cite{Heindel2012} \\\\\nBB84 & Pol & SPS / elect. & 125 & 910 & 5-17 kbps & - & 6-9 & FSO & 500 m & \\cite{Rau2014} \\\\\nBB84 & Phase & SPS / optic. 
& 62.5 & 1500 & 34 bps & 0.307 bps & 2-9 & FC & 120 km & \\cite{Takemoto2015} \\\\\nE91 & Pol & EPS \/ elect. & 50\\tnote{ b)} & 885 & 0.2 bps & 0.1 bps & - & FC & In-Lab & \\cite{Dzurnak2015} \\\\\nE91\\tnote{ c)} & Pol & EPS \/ optic. & 320 & 785 & 243 bps & 69 bps\\tnote{ d)} & 3.4 & FC & 250 m & \\cite{Basset2021}\\\\\nE91 & Pol & EPS \/ optic. & 320 & 785 & 30 bps & 9 bps & 4.0 & FSO & 270 m & \\cite{Basset2021} \\\\\nBBM92 & Pol & EPS \/ optic. & 80 & 785 & 135 bps & 86 bps & 1.9 & FC & 350 m & \\cite{Schimpf2021a} \\\\ \\hline\n\\end{tabular}\na) Same publication as above but QKD experiment performed by a different group; b) Time-multiplexed detector effectively reduced clock rate to below 1 MHz; c) Modified asymmetric Ekert91 protocol; d) Calculated from stated parameters using the asymptotic GLLP equation;\n\\end{threeparttable}\n\\end{table}\n\\restoregeometry\nIn order to put the QKD implementations reviewed above into perspective, we now address the question of which performance level, in terms of secure key rate and communication distance, can be expected with current state-of-the-art QD-based quantum light sources. For this purpose, we use equations (\\ref{eq:GLLP}-\\ref{eq:GLLP_QBER}) to extrapolate the achievable asymptotic key rate from parameters reported in the literature, as described in the following, assuming an operation wavelength of 1550 nm. Note that, while the values stated in the following have not yet been realized at C-band wavelengths, the advances in the development of QD-based telecom-wavelength quantum light sources make us optimistic that this can be achieved in the not too distant future.\n\\\\\nAs stated earlier, the multi-photon emission probability is upper-bounded by $P_\\text{M} \\leq \\frac{1}{2} \\mu^{2} g^{(2)}(0)$ \\cite{Waks2002}.
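As a quick numerical illustration of this bound, the following sketch (our addition; the parameter values are the illustrative ones used in the extrapolation below) evaluates it directly:

```python
# Waks-type upper bound on the per-pulse multi-photon emission probability,
# P_M <= 0.5 * mu^2 * g2(0), for a sub-Poissonian source.
def multiphoton_bound(mu, g2):
    """Upper bound on the probability of emitting more than one photon per pulse."""
    return 0.5 * mu**2 * g2

mu = 0.3      # mean photon number per pulse (illustrative value, see text)
g2 = 7.5e-5   # record antibunching value g^(2)(0) (see text)
p_m = multiphoton_bound(mu, g2)
print(f"P_M <= {p_m:.2e}")  # about 3.4e-06
```

For comparison, setting $g^{(2)}(0)=1$ in the same expression reproduces the Poissonian case, $P_\text{M} \approx \mu^2/2 = 4.5\cdot 10^{-2}$ at the same mean photon number, four orders of magnitude larger.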
Assuming a mean photon number of $\\mu=0.3$ and the record antibunching value of $g^{(2)}(0) = 7.5 \\cdot 10^{-5}$ \\cite{Schweickert2018}, an upper bound for the achievable multi-photon contribution can be calculated. Note that even larger values of $\\mu$ were recently demonstrated \\cite{Tomm2020}. In this regime, however, the increase of multi-photon contributions might effectively reduce the key rate at large loss on Bob's side. We further assume a setup transmission of $\\eta_\\text{Bob} = 0.3$ \\cite{gobby2004}, alignment errors of $e_{\\text{det}} = 1\\%$ \\cite{rusca2018finite}, and superconducting nanowire detectors with a few dark counts per second, resulting in $P_\\text{dc} = 10^{-8}$ dark counts per pulse with a detection efficiency of $\\eta_\\text{det} = 0.9$ \\cite{SingleQuantum}. Using ultralow-loss fibers, losses in the quantum channel of $\\alpha = 0.17\\,$dB\/km are possible at 1550 nm \\cite{rusca2018finite}. We further assume a value of $f_\\text{EC} = 1.1$ for the efficiency of the error correction protocol \\cite{elkouss2009} (with $f_\\text{EC} = 1$ being the ideal Shannon limit), $q=0.5$ for the sifting factor of the symmetric BB84 protocol, and a clock rate (for excitation and encoding) of $R_0=100\\,$MHz \\cite{Boaron2018a}.\n\\\\\nUsing these parameters, we deduce an asymptotic secure key rate exceeding 3$\\,$Mbit$\\,\\text{s}^{-1}$ at short distance, a key rate larger than 1$\\,$kbit$\\,\\text{s}^{-1}$ after a distance of 200 km in optical fiber, and a maximum achievable communication distance beyond 250$\\,$km. \nThese realistic extrapolations highlight the substantial advances possible in the field of sub-Poissonian QKD, which are in turn also beneficial for schemes of quantum communication beyond direct point-to-point links. Note also that this performance can be further improved significantly, e.g.
by using higher clock rates \\cite{Schlehahn2016}, asymmetric basis choices allowing for larger sifting factors, or temporal filtering \\cite{Kupko2020}, as will be detailed in section \\ref{section4.6}. Keep in mind, however, that such high asymptotic key rates can only be achieved when sufficiently large block sizes of qubits are transmitted (about $10^{15}$), as otherwise finite-size effects drastically reduce the secure key rate \\cite{Scarani2009,cai2009}.\n\\\\\n\\subsection{Advanced QKD protocols -- Towards Device Independence} \\label{section3.4}\nIn recent years, advanced types of protocols have also been proposed with a special emphasis on device independence. Knowing that the bulk of quantum hacking attacks target either the qubit source or the detector module, it would be a great advancement if the communication protocol did not have to rely on the integrity of either of these. While fully device-independent QKD (DI-QKD) is still only a theoretical proposal at this point \\cite{Acin2007,Masanes2011,Zapatero2019,Vazirani2019,Schwonnek2021}, intermediate steps such as source-independent \\cite{Koashi2003} or measurement-device-independent QKD (MDI-QKD) might be feasible \\cite{Lo2012}. MDI-QKD guarantees independence from the detection setup, which could in principle be controlled by an adversary. It is essentially a time-reversed version of the Ekert protocol, requiring Alice and Bob to each send single photons to a central detector, which projects them into an entangled two-photon state via a Bell measurement. By learning the outcome of the Bell measurement and by communicating the preparation bases, Alice and Bob can establish a secret key. Knowing only the outcome of the Bell measurement does not reveal anything about the key, which is why the detector does not need to be trusted.
\n\\\\\nThis protocol has so far only been implemented using attenuated laser pulses \\cite{Rubenok2013,Liu2013,DaSilva2013,Cao2020,Semenenko2020,Wei2020} and also, as a proof-of-concept, with stored and released photons from down-conversion sources \\cite{Kaneda2017}. Implementing it with single photons is more difficult but will ultimately pay off. To allow for a successful and efficient Bell measurement, the photons must be indistinguishable. To obtain sufficiently indistinguishable single photons from remote sources, QDs are a promising candidate. Although high indistinguishability is challenging to achieve in practice (due to local dephasing effects and spectral diffusion \\cite{Vural2020}), several demonstrations have already surpassed the classical limit of $50\\%$ two-photon interference visibility of laser light (see section \\ref{section4.2}) \\cite{Zhai2021}. The main advantage of QD single photons over the attenuated-laser implementations is that, due to the limited visibility and the Poissonian photon statistics of coherent light, a Bell measurement using attenuated laser pulses is less precise than one using true single photons \\cite{Lee2021}. \n\\\\\nThe achievable secure key rate has already been analytically estimated for MDI-QKD based on attenuated laser pulses in the asymptotic \\cite{Lo2012} as well as in the finite-length regime \\cite{Curty2014}, but not yet for true single photons, incorporating their better indistinguishability. MDI-QKD protocols are also a possible way of realizing multi-party metropolitan QKD networks, with all members surrounding a central detection node in a star-like topology \\cite{Tang2016}.
An experimental demonstration of this type of quantum network with sub-Poissonian quantum light sources would be a major step towards the quantum internet.\n\n\\section{Recent Progress on Building Blocks for Quantum Networks}\\label{section4}\nHaving the ultimate aim of a world-wide quantum internet in mind, the establishment of a QKD-secured communication link, as discussed in the previous section, is only the first step. In this section, we discuss the building blocks necessary for the extension toward networks (see \\textbf{Figure \\ref{fig:fig12}}). Notably, many of these building blocks are equally important for other types of quantum technologies, such as distributed quantum computing, optical quantum computing, quantum sensing \/ metrology, and many more. For this reason, extensive reviews on the building blocks for future quantum technologies already exist \\cite{OBrien2009,Barnett2017,Uppu2021}. In this section, we therefore highlight recent advances in developing building blocks with a special focus on QD-based communication networks. This includes practical QD SPSs (4.1), quantum memories compatible with QD single photons (4.3), teleportation of QD single photons (4.4) and quantum repeater schemes suitable for entangled photons from QDs (4.2). For completeness, we also highlight recent advances in random number generation (4.5) and quantum network optimization tools (4.6).\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure12_TH.png}\n \\caption{Building blocks for future quantum networks. The numbers in brackets point to the sections in which recent advances concerning each building block are discussed. In this review we focus on building blocks that are built on (or compatible with) QD light sources.}\n \\label{fig:fig12}\n\\end{figure}\n\\subsection{Towards practical quantum light sources} \\label{section4.1}\nA practical QD-based quantum light source is ideally ``plug-and-play''.
This means that it is operable independently of special laboratory infrastructure (e.g. without the need for liquid coolants or large laser systems) and provides the generated quantum light ``ready-to-use'' via an optical fiber output. Furthermore, it needs to be robust and durable in use, as compact as possible, and should ideally only require standard mains voltage as power supply. Not least, a practical source needs to be benchmarked regarding long-term stability.\n\\\\\nA key requirement for integrating QD-SPSs into practical modules as described above is a technology for the precise alignment and permanent coupling of the QD-device to an optical fiber. Such a direct connection has been a subject of research for more than a decade, starting with the probabilistic coupling of QDs using bundles of hundreds of single-mode fibers \\cite{Xu2007}, then exploring deterministic coupling possibilities using open cavity systems \\cite{Muller2009} and the deterministic selection of nanowire SPSs \\cite{Cadeddu2016}. \n\\\\\nRecently, fiber-scanning techniques in combination with epoxy resists or optical (UV-) adhesives have been shown to be suitable for the permanent fiber-coupling of micropillar cavities \\cite{Haupt2010,Snijders2018,Rickert2021}, microlens and micromesa SPSs \\cite{Schlehahn2018,Zonacz2019,Musial2020}, as well as nanowires \\cite{Northeast2021}. A schematic of an electrically controllable, fiber-coupled micropillar cavity with separate excitation and detection fibers, capable of resonant excitation schemes and published by Snijders and coworkers in 2018 \\cite{Snijders2018}, is shown in \\textbf{Figure \\ref{fig:fig13}a}. While the permanently coupled devices reported to date reached overall efficiencies of only a few percent \\cite{Snijders2018}, microcavities based on hybrid circular Bragg gratings (hCBGs) have recently been evaluated as a promising strategy for achieving fiber coupling efficiencies approaching unity. In their study, Rickert et al.
presented numerically optimized designs for devices operating at O-band wavelengths \\cite{Rickert2019}, indicating that overall efficiencies exceeding $80\\%$ are possible using off-the-shelf single-mode fibers. A schematic depiction of a single-mode fiber-coupled hCBG-cavity is shown together with simulation data in Figure \\ref{fig:fig13}b. This device approach appears particularly promising, considering the potential for state-of-the-art SPS performance \\cite{Wang2019}.\n\\\\\nThe QD-based SPSs used for QKD experiments so far entirely relied on laboratory infrastructure, including bulky cryogenics in particular. A more practical way to realize the required low temperatures is to use off-the-shelf Stirling cryocoolers \\cite{sunpower}. Such cryocoolers, operable with standard 230 V mains voltage, allow operation of a suitable quantum emitter at temperatures below 30 K, an approach first introduced by Schlehahn et al. in 2015 using free-space optics \\cite{Schlehahn2015}. Subsequently, the integration of fiber-coupled QD-based SPSs into state-of-the-art Stirling cryocoolers proved to be a promising route for realizing plug-and-play QD-based quantum light sources, as demonstrated for a fiber-pigtailed QD-device emitting around 925 nm in 2018 \\cite{Schlehahn2018}. The cryocoolers were compact enough to be integrated into a standard 19\" rack module (Figure \\ref{fig:fig13}c), and the base temperature of 40 K was easily reached within 30 min. In the work by Musia\u0142 et al., the source module additionally housed a fiber-based pulsed laser as well as a fiber-based spectral filter, providing single-photon pulses in the telecom O-band at the SMF-28 fiber output \\cite{Musial2020}.\n\\\\\nAs a further aspect of practical single-photon sources, it is worth considering alternative ways of source excitation.
While in principle an electrically-contacted SPS allows efficient, fast and practical excitation, optical excitation, as required for resonant pumping of the source, can also be desirable. In a pioneering work in 2013, Stock et al. proposed to use electrically driven microlaser sources in close vicinity to excite QDs in micropillar cavities, and showed Purcell-enhanced emission for a QD in a micropillar excited in this way \\cite{stock2013}. In 2017, Munnelly et al. used this on-chip excitation concept to show single-photon emission with an emission rate above 100 MHz and the possibility of wavelength tuning via the quantum-confined Stark effect \\cite{Munnelly2017}. In a work from 2017 following a similar concept, Lee and coworkers realized an on-chip excited quantum dot light emitting diode (LED) and used the quantum-confined Stark effect to also tune the fine-structure of the emitting QD \\cite{Lee2017}. The deployment of such an on-chip driven QD-based entangled-photon LED in an urban fiber network was reported recently \\cite{Xiang2020}. Although not yet demonstrated for (quasi-) resonant excitation, the concept of an on-chip pumped QD SPS shows the potential for very compact optical excitation compatible with the discussed Stirling technology.\n\\\\\nIn summary, the integration of directly fiber-coupled QD-based SPSs into compact cryocoolers offers a promising approach for high-performance plug-and-play quantum light sources. While Stirling-type refrigerators are the most compact solution to date, their achievable base temperatures are presently limited to about 27 K. For applications which rely on the excellent coherence properties of QDs, e.g. for the generation of highly indistinguishable photons, small-footprint Gifford-McMahon (GM) cryocoolers in combination with compact compressors are an alternative.
Beyond the promising proof-of-concept experiments with fiber-coupled QD-SPSs in Stirling cryocoolers discussed above, an important next step is to show that this concept can also exploit the full potential QDs offer in terms of efficiency and single-photon purity.\n\\begin{figure}\n \\includegraphics[width=0.5 \\linewidth]{figure13.png}\n \\caption{(a) Schematic of a fiber-coupled micropillar cavity with electrical control and two fiber connections for excitation and collection. (b) (left) Schematic representation of a hybrid circular Bragg grating (hCBG) cavity coupled to an optical single-mode fiber. (right) Simulated fiber coupling efficiencies of a hCBG cavity with two different single-mode fibers as a function of the grating gap width W (see Ref. \\cite{Rickert2019} for details). (c) QD-based SPS source module based on a Stirling cryocooler with active vibration cancellation (AVC) employing a fiber-coupled QD-based microlens SPS (from Ref. \\cite{Schlehahn2018}). (a) Reprinted with permission from \\href{https:\/\/link.aps.org\/doi\/10.1103\/PhysRevApplied.9.031002}{\\textit{Snijders et al. 2018}} \\cite{Snijders2018} Copyright 2018 by the American Physical Society, (b) Copyright 2021 by the authors of this work. (c) reprinted from \\href{http:\/\/www.nature.com\/articles\/s41598-017-19049-4}{\\textit{Schlehahn et al. 2018}} \\cite{Schlehahn2018} under Creative Commons CC BY license.}\n \\label{fig:fig13}\n\\end{figure}\n\\subsection{Towards Quantum Repeaters} \\label{section4.2}\nThe maximum distance over which a QKD scheme can exchange a provably secure key is limited by photon loss, both in free-space and in fiber links. To cover larger distances, one has to rely on trusted intermediate nodes, which reduces the overall security. However, both the maximum distance and the rate can be extended without trusted nodes by using so-called quantum repeaters.
These allow an encoded qubit to be transferred, without travelling the entire distance, with the help of entangled photon pairs and entanglement swapping. In this subsection we will briefly introduce the quantum repeater protocol and entanglement swapping, before we highlight recent advances in the key ingredient: two-photon interference from remote QD-based SPSs.\n\\\\\nThe original repeater protocol was proposed by Briegel, D\u00fcr, Cirac and Zoller in 1998 (now known as the BDCZ protocol) as a way of reducing the errors in quantum channels, which grow exponentially with the transmission loss \\cite{Briegel1998,Dur1999}. This protocol enables the distribution of a maximally entangled photon pair, such as the well-known EPR state, named in reference to the Einstein-Podolsky-Rosen paradox \\cite{Einstein1935}, over arbitrary distances. A version which uses atomic ensembles both as the memory and as the photon source is known as the DLCZ protocol \\cite{Duan2001}. The entangled photon pair can then be used to realize QKD protocols based on entangled photon pairs or to teleport any quantum state from one end to the other. To distribute the entanglement, the authors proposed to use multiple EPR states. Each entangled pair is split up, with the two photons sent in opposite directions, thus covering a small part of the quantum channel length. At intermediate nodes, the photons are stored and then, via a joint Bell state measurement (BSM) between two photons of different EPR pairs, the entanglement is swapped, entangling the two remaining photons. By repeating this many times in a nested fashion, arbitrary distances can in principle be covered (\\textbf{Figure \\ref{fig:fig14}a}).\n\\\\\nNote that, as all photons together travel the total distance in the case of a successful run, an improvement in photon-loss sensitivity is only achieved if each intermediate node has access to a quantum memory.
In this case, redundant EPR pairs can be sent until the memory is filled and the BSM can be made, thus preventing any loss between two such nodes. In other words, the quantum channel is cut into shorter segments, and over each segment entanglement purification can create maximally entangled, distributed photon pairs in a memory, before entanglement swapping connects all these segments.\n\\\\\nThis protocol can be implemented with different quantum emitter platforms \\cite{Loock2020}, but an implementation with entangled photon pairs from QDs requires a quantum memory compatible with QD single photons, which will be discussed in \\textbf{section 4.3}. It also requires the ability to successfully project two single photons into a joint two-photon Bell state to realize the entanglement swapping. This was demonstrated for the first time by Pan et al. using an SPDC source \\cite{Pan1998}. The authors created two pairs of entangled photon states, projected one photon of each pair onto a joint two-photon Bell state via two-photon interference (TPI) and finally proved entanglement of the two remaining photons, which had never interacted before. \n\\\\\nA quantum repeater node implementing the BDCZ protocol was first experimentally realized by Yuan et al. \\cite{Yuan2008} using two atomic ensembles as quantum memories. By projecting photons, each entangled with the state of its respective memory, onto a joint Bell state, the entanglement was swapped, entangling the two atomic ensembles; this was confirmed by measuring the entanglement between the photons emitted from the memories. \n\\\\\nIn addition to memory-based quantum repeaters, all-photonic, measurement-based repeater schemes have been proposed which do not need a quantum memory to operate \\cite{Zwerger2012,Azuma2015,Zwerger2016}. The necessary resource states for such a repeater protocol are photonic cluster states, which were recently used in a proof-of-principle quantum repeater experiment \\cite{Li2019}.
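The entanglement-swapping step at the heart of these protocols can be verified with a few lines of linear algebra. The following sketch (our illustration, assuming ideal qubits and an ideal BSM) prepares two $\ket{\Phi^+}$ pairs on qubits (1,2) and (3,4), projects the two middle qubits onto $\ket{\Phi^+}$, and checks that the two outer qubits, which never interacted, end up in a Bell state:

```python
import numpy as np

# |Phi+> = (|00> + |11>)/sqrt(2), written as a 2x2 amplitude matrix phi[i, j]
phi_plus = np.eye(2) / np.sqrt(2)

# Two independent EPR pairs on qubits (1,2) and (3,4): psi[a, b, c, d]
psi = np.einsum('ab,cd->abcd', phi_plus, phi_plus)

# Ideal BSM on qubits 2 and 3: project onto <Phi+|_{23}
projected = np.einsum('bc,abcd->ad', phi_plus.conj(), psi)

prob = np.sum(np.abs(projected) ** 2)    # probability of this BSM outcome
remaining = projected / np.sqrt(prob)    # post-measurement state of qubits 1 and 4
fidelity = np.abs(np.sum(phi_plus.conj() * remaining)) ** 2

print(f"P(Phi+ outcome) = {prob:.2f}")          # 0.25: one of four equally likely outcomes
print(f"F(qubits 1 & 4, Phi+) = {fidelity:.2f}")  # 1.00: outer qubits are now entangled
```

Each of the four Bell outcomes occurs with probability 1/4 and heralds a known Bell state of the outer qubits, so an ideal BSM swaps the entanglement deterministically up to local corrections.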
Photonic cluster states are also relevant for photonic quantum computing \\cite{raussendorf2001} and have already been generated using QD SPSs \\cite{Schwartz2016,Istrati2020}.\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure14.png}\n \\caption{The concept of a quantum repeater to distribute entanglement over long distances (a). The two types of BSMs using linear optics: (b) with only one BS, identifying one of the four Bell states, and (c) with additional PBSs, identifying two of the four Bell states from coincidence detections.}\n \\label{fig:fig14}\n\\end{figure}\nThe crucial ingredient for all repeater protocols and many other schemes in quantum information is the Bell state measurement, which is typically implemented via two-photon interference at a beam splitter. Before reviewing recent advances in QD-based TPI, we make a few general remarks concerning joint BSMs. The Bell basis is the natural basis of the 4-dimensional Hilbert space in which a joint two-qubit state is described. It consists of the four maximally entangled states, the Bell states, which are non-separable superpositions of the tensor products of the respective one-particle Hilbert space bases. For the case of polarization as the only relevant quantum number (the two photons are ideally indistinguishable in all other quantum numbers) and using the rectilinear $\\{H,V\\}$-basis, they can be written as $\\ket{\\Psi^{\\pm}} = \\frac{1}{\\sqrt{2}} \\left( \\ket{HV} \\pm \\ket{VH} \\right)$ and $\\ket{\\Phi^{\\pm}} = \\frac{1}{\\sqrt{2}} \\left(\\ket{HH} \\pm \\ket{VV} \\right)$.\n\\\\\nA fundamental limit to BSMs is that, using linear optics, only two of the four Bell states can be discriminated \\cite{Braunstein1995a}; thus, without photon-photon interaction, only a partial BSM with a success rate of $50\\%$ is possible. Braunstein et al. were the first to point out that two-photon interference at a BS can be seen as a BSM, since the Bell states are eigenstates of the unitary that describes the BS.
If the photons leave through opposite outputs of the BS, one can be sure that a projection onto the $\\ket{\\Psi^{-}}$ Bell state took place (Figure \\ref{fig:fig14}b). The reason is that only the $\\ket{\\Psi^{-}}$ Bell state is antisymmetric under particle exchange and thus displays fermionic anti-bunching statistics, causing the particles to leave from opposite output ports of the BS. This, however, leads to a successful BSM in only $25\\%$ of the cases. Furthermore, the photons need to be indistinguishable in all other quantum numbers, so that only their projected polarization state determines the coincidences. Otherwise, if the photons are too distinguishable, a projection onto the other Bell states would also show up as an erroneous $\\ket{\\Psi^{-}}$ coincidence event between the two outputs. If the photons are indeed indistinguishable and not projected onto the $\\ket{\\Psi^{-}}$ state, they will leave through the same output, as observed in the Hong-Ou-Mandel (HOM) effect, where it was first shown that completely indistinguishable single photons always leave the BS together \\cite{Hong1987}.\n\\\\\nOne can increase the BSM efficiency to $50\\%$ for polarization-encoded photons by adding a PBS at each output of the interference BS (Figure \\ref{fig:fig14}c) to further distinguish the polarization of the photons that leave the BS from the same output. Photons which arrive at the same output with the same polarization will go to the same detector and thus do not lead to coincidences (the $\\ket{\\Phi}$ states), while the $\\ket{\\Psi^{+}}$ state photons leave from the same output with different polarizations and thus do lead to coincidences. Hence, one can now identify both $\\ket{\\Psi}$ states, because they are anti-correlated in polarization.
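These statements can be checked with a small mode-operator calculation. The sketch below (our illustration; the 50:50 beamsplitter convention and the port labels are assumptions) propagates each polarization Bell state through the BS and reports which output-mode pairs carry amplitude; amplitudes of doubly occupied modes are left unnormalized, since only the mode support matters here:

```python
from collections import defaultdict

S = 2 ** -0.5  # 1/sqrt(2)

def beamsplitter(port, pol):
    # 50:50 BS convention (an assumption): in-port 1 -> (out 3 + out 4)/sqrt(2),
    #                                      in-port 2 -> (out 3 - out 4)/sqrt(2)
    return {(3, pol): S, (4, pol): S if port == 1 else -S}

def output_modes(terms):
    """terms: list of (amplitude, (port, pol), (port, pol)) two-photon input terms.
    Returns the nonzero amplitudes over unordered pairs of output modes."""
    out = defaultdict(float)
    for amp, m1, m2 in terms:
        for o1, a1 in beamsplitter(*m1).items():
            for o2, a2 in beamsplitter(*m2).items():
                out[tuple(sorted((o1, o2)))] += amp * a1 * a2
    return {k: v for k, v in out.items() if abs(v) > 1e-12}

bell_states = {  # input Bell states on ports 1 and 2
    "Psi-": [( S, (1, 'H'), (2, 'V')), (-S, (1, 'V'), (2, 'H'))],
    "Psi+": [( S, (1, 'H'), (2, 'V')), ( S, (1, 'V'), (2, 'H'))],
    "Phi+": [( S, (1, 'H'), (2, 'H')), ( S, (1, 'V'), (2, 'V'))],
    "Phi-": [( S, (1, 'H'), (2, 'H')), (-S, (1, 'V'), (2, 'V'))],
}
for name, state in bell_states.items():
    modes = output_modes(state)
    opposite_ports = all(p1 != p2 for (p1, _), (p2, _) in modes)
    same_port_orth_pol = all(p1 == p2 and s1 != s2 for (p1, s1), (p2, s2) in modes)
    print(name, "-> opposite ports:", opposite_ports,
          "| same port, orthogonal pols:", same_port_orth_pol)
```

Only $\ket{\Psi^-}$ produces cross-port coincidences (the 25% BSM), $\ket{\Psi^+}$ produces same-port photons with orthogonal polarizations (resolvable with the extra PBSs), and the $\ket{\Phi^\pm}$ states bunch into a single detector, in line with the discussion above.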
Thus, at most half of the Bell basis can be identified.\n\\\\\nNote that if the incoming photons are not sufficiently indistinguishable, erroneous coincidences in the PBS basis can be detected, while states prepared in any other basis (which are randomly projected at the PBS) lead to false coincidences, as was shown by Basset et al. by comparing the teleportation fidelities achieved with the $25\\%$ and the $50\\%$ BSM (without and with PBSs), as well as with high and low indistinguishability \\cite{Basset2020a}. Since SPSs allow higher indistinguishability, they cause fewer erroneous coincidence events than photons from SPDC sources. Note furthermore that, also due to more multi-photon events, a Bell measurement is less precise with attenuated laser pulses than with true single photons \\cite{Lee2020}. The reason is that false coincidences due to two photons in one BS input cannot always be distinguished from true coincidences from one photon in each BS input. Therefore, it is of critical importance for many applications to achieve TPI with indistinguishable single photons from remote quantum emitters.\n\\\\\nThe indistinguishability is quantified by the coalescence probability (for pure states, the maximum wave packet overlap of the two photons, $P_{Coal} = \\abs{\\bra{\\phi_1}\\ket{\\phi_2}}$), which is estimated by measuring the TPI visibility V in a Hong-Ou-Mandel interference experiment \\cite{Hong1987}. Here, one typically measures the suppression of coincidences between the two BS outputs due to indistinguishable paths at vanishing temporal delay, and normalizes it by comparison to the case of maximum distinguishability (selecting a cross-polarized configuration, detuned wavelengths or detuned arrival times).
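The connection between wave-packet overlap and measured coincidences can be made explicit. In the sketch below (our illustration, assuming pure single-photon wave packets with Gaussian spectra and an ideal 50:50 BS), the zero-delay coincidence probability is $P_c = \frac{1}{2}\left(1 - \abs{\bra{\phi_1}\ket{\phi_2}}^2\right)$, evaluated numerically for two emitters with a variable frequency detuning:

```python
import numpy as np

def gaussian_mode(omega, center, sigma):
    """Normalized Gaussian spectral amplitude (pure wave packet, an assumption)."""
    return (2 * np.pi * sigma**2) ** -0.25 * np.exp(-(omega - center)**2 / (4 * sigma**2))

sigma = 1.0                             # spectral width (arbitrary units)
omega = np.linspace(-20, 20, 40001)     # frequency grid
d_omega = omega[1] - omega[0]
for detuning in [0.0, 1.0, 2.0, 4.0]:
    phi1 = gaussian_mode(omega, -detuning / 2, sigma)
    phi2 = gaussian_mode(omega, +detuning / 2, sigma)
    overlap2 = (np.sum(phi1 * phi2) * d_omega) ** 2  # |<phi1|phi2>|^2
    p_coinc = 0.5 * (1 - overlap2)                   # zero-delay coincidence probability
    # For Gaussian spectra, overlap2 = exp(-detuning^2 / (4 sigma^2)): V = 1 at zero detuning
    print(f"detuning = {detuning:.1f} sigma: V = {overlap2:.3f}, P_c = {p_coinc:.3f}")
```

Dephasing and spectral diffusion in real emitters further reduce the measured visibility below this pure-state value.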
While phase-randomized Poissonian light is limited to a visibility of 0.5 \\cite{Mandel1983}, single photons can reach visibilities up to unity, enabling more efficient BSMs.\n\\\\\nIn order to measure a high TPI visibility between photons from remote emitters, the photons must be as indistinguishable as possible, i.e. they must agree in all quantum numbers. So far, TPI between photons from remote emitters has been demonstrated with many different types of photon sources, such as parametric down-conversion sources \\cite{DeRiedmatten2003,Llewellyn2020}, trapped atoms \\cite{Beugnon2006}, atoms in combination with QDs \\cite{Vural2018}, trapped ions \\cite{Maunz2007}, silicon- and nitrogen-vacancy centers in diamond \\cite{Sipahigil2014,Bernien2012,Humphreys2018}, and also molecules \\cite{Lettow2010}. In the following, we will discuss recent progress on TPI experiments with photons emitted by remote QD-based quantum light sources.\n\\\\\nIn the case of QDs, due to their self-organized nature and the semiconductor environment, the spectral properties of the quantum emitters are of particular importance. While photons emitted successively by the same QD have shown almost ideal TPI visibilities $>99\\%$ \\cite{Somaschi2016}, such high values have not yet been achieved for TPI between photons from remote QDs. When photons are emitted by QDs located at different positions, each interacting with its unique environment, the wavelengths of the emitted photons are likely to differ. Coarse spectral matching of quantum emitters can be achieved by pre-selection of suitable QDs, even in engineered devices, if deterministic fabrication technologies are used \\cite{Rodt2020,Liu2021}. Subsequently, a spectral fine-tuning is typically required, e.g.
via temperature control (\\textbf{Figure \\ref{fig:fig15}}f) \\cite{Giesz2015,Thoma2016}, strain tuning (Figure \\ref{fig:fig15} c-e) \\cite{Flagg2010,Beetz2013,Reindl2017,Moczaa-Dusanowska2020,zhai2020a}, or electrical tuning via the quantum-confined Stark effect in diode-like structures (Figure \\ref{fig:fig15} a, b) \\cite{Patel2010}. These methods are summarized in Figure \\ref{fig:fig15} and the obtained visibilities of the respective TPI experiments are collected in Table 2.\n\\\\\n\\begin{figure}[h]\n \\includegraphics[width= \\linewidth]{figure15.png}\n \\caption{Different ways to tune the emission energies of remote QDs in TPI experiments. Patel et al. used electrical tuning of QDs in a diode-like structure (a) which allowed them to match the energies (b) \\cite{Patel2010}. Zhai et al. employed piezo actuators to apply strain to the QD (c) which enabled tuning of the emission energies (d) \\cite{zhai2020}, as did Reindl et al. to match the QD energies (e) \\cite{Reindl2017}. Giesz et al. used temperature to tune the QD energy into the cavity resonance (f) \\cite{Giesz2015}. (a,b) reprinted by permission from Springer Nature: \\href{http:\/\/dx.doi.org\/10.1038\/nphoton.2010.161}{\\textit{Patel et al. 2010}} \\cite{Patel2010} Copyright 2010, (c,d) reprinted from \\href{https:\/\/aip.scitation.org\/doi\/full\/10.1063\/5.0017995}{\\textit{Zhai et al. 2020}} \\cite{zhai2020}, with the permission of AIP Publishing, (e) reprinted from \\href{https:\/\/pubs.acs.org\/doi\/10.1021\/acs.nanolett.7b00777}{\\textit{Reindl et al. 2017}} \\cite{Reindl2017} under Creative Commons license CC-BY, (f) reprinted Figure with permission from \\href{https:\/\/link.aps.org\/doi\/10.1103\/PhysRevB.92.161302}{\\textit{Giesz et al. 
2015}} \\cite{Giesz2015} Copyright 2015 by the American Physical Society.}\n \\label{fig:fig15}\n\\end{figure}\nHowever, even if the nominal emission energies of two QDs are matched perfectly, the achievable TPI visibility can be limited by several effects. Phonon-induced dephasing, fluctuations of electron spins and fluctuations of surrounding charges can lead to emission energy fluctuations (also known as spectral diffusion) as well as emission line broadening \\cite{Vural2020}. Recently, an analytical model including dephasing as well as spectral diffusion has been developed, which can be used to predict the maximum expected TPI visibilities from experimentally accessible parameters of each individual QD \\cite{Kambs2018}; previous models numerically investigated achievable visibilities for non-ideal SPSs \\cite{Fischer2016}. To reduce the inhomogeneous line broadening of QDs, different countermeasures can be taken, such as working at ultra-low temperatures to freeze out the majority of phonon contributions to the dephasing, resonant excitation schemes combined with small amounts of off-resonant light, as well as external control via electric and magnetic fields to saturate charge traps and align the nuclear spins inside the QD to reduce fluctuations. The necessary requirements can be summarized as follows: for maximum indistinguishability, one needs Fourier-limited SPSs, which show no inhomogeneous line broadening. Such QD-SPSs were reported by Wang et al. \\cite{Wang2016} and Kuhlmann et al. \\cite{Kuhlmann2013}, but they are far from being the norm.\n\\\\\nAdditionally, to reduce relative spectral drifts between the two QD emission energies, active feedback can be used \\cite{Schmidt2020}. Such a stabilization was applied to remote TPI experiments by Zopf et al., who applied piezo strain to compensate for spectral drifts and thus stabilize the emission energies of the QDs \\cite{Zopf2018}.
In their setup, both QDs were tuned via piezo elements glued to the QD samples. A fraction of the emitted photons was sent through a Faraday filter made from a rubidium vapor cell to identify small shifts in emission wavelength, which a feedback loop then corrected. By doing so, they achieved a remote QD TPI visibility of $41\\%$, as predicted by the model of Kambs et al. \\cite{Kambs2018} for their QD parameters. The active stabilization resulted in a higher average TPI visibility compared to the unstabilized case (\\textbf{Figure \\ref{fig:fig16}a}).\n\\\\\nAnother technique, employed by Weber et al., is to use frequency conversion of the photons from both QDs (Figure \\ref{fig:fig16}b) to perform the TPI at 1550 nm, in the telecom window \\cite{Weber2019}. Although the two QDs themselves were spectrally distinguishable, laser-tuning in the upconversion process allowed the authors to match the photons' wavelengths (Figure \\ref{fig:fig16}c), resulting in an observed TPI visibility of $29\\%$. By now, TPI visibilities exceeding the classical limit of $50\\%$ have been achieved in several experiments using QDs. Reindl et al. in 2017, for instance, used a phonon-assisted two-photon excitation scheme in combination with piezo strain tuning to match the emission energies \\cite{Reindl2017}, resulting in a TPI visibility of $51\\%$.\n\\\\\nVery recently, a new record has been set by Zhai et al. \\cite{Zhai2021}, reaching visibilities of up to $93\\%$ between QDs located in different cryostats. The high visibility was achieved without Purcell enhancement, tight spectral filtering, post-selection or any active stabilization, simply by using high-quality, low-noise, electrically tunable QD samples, leaving room for further improvement \\cite{zhai2020a,zhai2020}. In another recent experiment, You et al.
observed interference of remote QD single photons that were converted to 1583 nm via quantum frequency conversion and separated by a 300 km optical fiber \\cite{You2021}, setting a remarkable record for the distance achieved between interfering QD-sources.\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure16.png}\n \\caption{Active stabilization of QDs in remote TPI (red curve in a) improves visibility compared to no stabilization (blue curve in a) \\cite{Zopf2018}. Matching of remote QD wavelengths (c) via Quantum Frequency Conversion (b) \\cite{Weber2019}. (a) reprinted with permission from \\href{https:\/\/doi.org\/10.1103\/PhysRevB.98.161302}{\\textit{Zopf et al. 2018}} \\cite{Zopf2018} Copyright 2018 by the American Physical Society, (b,c) reprinted with permission from Springer Nature: Nature Nanotechnology \\href{http:\/\/dx.doi.org\/10.1038\/s41565-018-0279-8}{\\textit{Weber et al. 2019}} \\cite{Weber2019} Copyright 2019.}\n \\label{fig:fig16}\n\\end{figure}\nNotably, indistinguishable photons are not only important for entanglement swapping and quantum repeaters, but also to erase the which-path information in schemes that generate entanglement between remote solid-state qubits \\cite{Cabrillo1999}. This entanglement scheme was realized experimentally with remote QDs using hole spins \\cite{Delteil2015} as well as electron spins \\cite{Stockill2017}. An overview of achieved TPI visibilities with remote QD single photons is given in \\textbf{Table \\ref{tab:table2}}.\n\\\\\n\\begin{table}\n\\centering\n\\caption{Chronological overview of achieved visibilities in remote QD TPI experiments}\n\\label{tab:table2}\n\\begin{threeparttable}\n\\begin{tabular}{cccc}\n\\hline\nWavelength in nm & Tuning mechanism & TPI visibility [$\\%$] & Reference \\\\ \\hline\n 940 & Electrical & 33 $\\pm$ 1\\tnote{(a)} & Patel et al. 2010 \\cite{Patel2010} \\\\\n920 & Piezo strain & 18 $\\pm$ 1 & Flagg et al. 
2010 \\cite{Flagg2010} \\\\\n930 & Temperature & 39 $\\pm$ 2 & Gold et al. 2014 \\cite{gold2014} \\\\\n945 & Temperature & 40 $\\pm$ 4 & Giesz et al. 2015 \\cite{Giesz2015} \\\\\n 955 & Electrical & 91 $\\pm$ 6 & Delteil et al. 2015 \\cite{Delteil2015} \\\\\n933 & Temperature & 29 $\\pm$ 6 & Thoma et al. 2016 \\cite{Thoma2016} \\\\\n1250 & Laser-induced Evaporation & 33 $\\pm$ 1\\tnote{(a)} & Kim et al. 2016 \\cite{Kim2016a}\\\\\n750 & Piezo strain & 51 $\\pm$ 5 & Reindl et al. 2017 \\cite{Reindl2017} \\\\\n 968 & Electrical & 93 $\\pm$ 1 & Stockill et al. 2017 \\cite{Stockill2017} \\\\\n795 & Piezo strain & 41 $\\pm$ 5\\tnote{(b)} & Zopf et al. 2018 \\cite{Zopf2018} \\\\\n1550 & Frequency Conversion & 29 $\\pm$ 3 & Weber et al. 2019 \\cite{Weber2019} \\\\\n780 & Electrical & 93 $\\pm$ 1 & Zhai et al. 2021 \\cite{Zhai2021} \\\\\n1583 & Frequency Conversion & 67 $\\pm$ 2\\tnote{(c)} & You et al. 2021 \\cite{You2021} \\\\ \\hline\n\\end{tabular}\n a) used temporal post-selection (CW experiment); b) with active feedback; c) (93 $\\pm$ 4)$\\,\\%$ with temporal filtering\n\\end{threeparttable}\n\\end{table}\nA proof-of-concept entanglement swapping (\\textbf{Figure \\ref{fig:fig17}a}) experiment with polarization entangled photons from QDs was reported by Zopf et al. \\cite{Zopf2019} as well as by Basset et al. \\cite{Basset2019}. Here, the entangled photon pairs did not yet originate from two different QDs, since the remote TPI visibility was not sufficient. Instead, two entangled photon pairs that were subsequently emitted from the same QD were each split up spectrally, with one photon of each pair being sent to a receiver module, where a partial BSM is performed on the joint state of one photon from each entangled photon pair (Figure \\ref{fig:fig17}b). Note that it is important to pick photons with the same energy for them to be indistinguishable; here, the XX-photons from both entangled photon pairs were used. 
\n\\\\\nIn at best $25\\%$ of the TPI events, a coincidence indicates a projection into the $\\ket{\\Psi^{+}}$ Bell state (note that despite the photons leaving from different output ports of the BS here, a phase shift of $\\pi$ rotates them into the $\\ket{\\Psi^{+}}$ state). For these cases, the two remaining photons violated the CHSH inequality, i.e. they were entangled, which means that entanglement swapping took place. State tomography revealed that after the swapping, the two X photons were in the entangled $\\ket{\\Psi^{+}}$ state (Figure \\ref{fig:fig17}d), but without the BSM, their density matrix was maximally mixed (Figure \\ref{fig:fig17}c). Using the determined density matrices, Zopf et al. obtained a fidelity of $81\\%$ between the joint state of the remaining photons and the Bell state \\cite{Zopf2019}. \n\\\\\nIn addition to being the first proof-of-principle experiment of entanglement swapping with entangled photons from QDs, the emission wavelength was 780 nm, which is close to the D2 optical-transition line in Rubidium, making it compatible with quantum memories (see \\textbf{section 4.3}). Note also that entanglement swapping of QD entangled photon pairs creates two entangled photons at the same energy, which is different from the typical XX-X emission cascade in QDs.\n\\\\\nBasset et al. performed a similar experiment but triggered their coincidences on projections into the $\\ket{\\Psi^{-}}$ Bell state, using coincidences from two different outputs of the BS \\cite{Basset2019}. As in the work of Zopf et al., the obtained fidelity of the entangled state after the swapping is mainly limited by the indistinguishability of the used photon pairs and their initial degree of entanglement. To quantify this, the authors introduced a model which incorporates the initial entanglement fidelity, mainly determined by the amount of FSS, and the HOM visibility, and which reproduces their measured results (Figure \\ref{fig:fig17}e). 
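The ideal limit of this partial BSM can be sketched with plain state vectors. The minimal example below assumes two perfect $\\ket{\\Psi^{+}}$ source pairs and an ideal projection (unlike the imperfect experimental states) and verifies that projecting the two inner photons onto $\\ket{\\Psi^{+}}$ leaves the two outer photons, which never interacted, in a Bell state, with a success probability of $25\\%$ for this outcome.

```python
import numpy as np

ket0, ket1 = np.eye(2)   # computational basis states

def kron(*vs):
    out = np.array([1.0])
    for v in vs:
        out = np.kron(out, v)
    return out

# Bell state |Psi+> = (|01> + |10>)/sqrt(2), taken here for both source pairs
psi_plus = (kron(ket0, ket1) + kron(ket1, ket0)) / np.sqrt(2)

# Two independent pairs: photons (1,2) from source A, photons (3,4) from source B
state = kron(psi_plus, psi_plus)

# Partial BSM: project the inner photons 2 and 3 onto |Psi+>
proj23 = np.kron(np.kron(np.eye(2), np.outer(psi_plus, psi_plus)), np.eye(2))
projected = proj23 @ state
p_success = np.vdot(projected, projected).real   # probability of this Bell outcome
projected /= np.sqrt(p_success)

# Photons 1 and 4 should now be in |Psi+> themselves: build the comparison
# state |Psi+>_{14} (x) |Psi+>_{23}, reordered to the (1,2,3,4) convention
target = np.einsum('ad,bc->abcd',
                   psi_plus.reshape(2, 2),    # photons (1,4)
                   psi_plus.reshape(2, 2))    # photons (2,3)
fidelity = abs(np.vdot(target.reshape(-1), projected)) ** 2

print(round(p_success, 2))  # 0.25 -> one of the four Bell outcomes
print(round(fidelity, 2))   # 1.0  -> entanglement has been swapped to photons 1 and 4
```

In the experiments, the measured post-swapping fidelity is reduced from this ideal value of 1 by the finite photon indistinguishability and the FSS-limited initial entanglement, exactly the two parameters entering the model of Basset et al.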
\n\\\\\nUsing the parameters of state-of-the-art QD entangled photon sources \\cite{Huber2018,Liu2019}, they predict a maximum achievable fidelity of the swapped entanglement of $83\\%$ with current technology. Even higher values can be expected with further improvements in the foreseeable future, opening up the route for entanglement swapping using remote QD-sources and ultimately quantum repeaters based on sub-Poissonian quantum light sources.\n\\\\\n\\begin{figure}[h]\n \\includegraphics[width= \\linewidth]{figure17.png}\n \\caption{Realization of entanglement swapping (a) with entangled photons emitted by the same QD (b). State tomography revealed that the two X photons were afterwards in the entangled $\\ket{\\Psi^{+}}$ state (d), but without the BSM, their density matrix is maximally mixed (c) \\cite{Zopf2019}. Achievable fidelity of the state after entanglement swapping as a function of TPI visibility and FSS (e) \\cite{Basset2019}. (a-d) Reprinted with permission from \\href{https:\/\/doi.org\/10.1103\/PhysRevLett.123.160502}{\\textit{Zopf et al. 2019}} \\cite{Zopf2019} Copyright 2019 by the American Physical Society, (e) reprinted from \\href{https:\/\/doi.org\/10.1103\/PhysRevLett.123.160501}{\\textit{Basset et al. 2019}} \\cite{Basset2019} under Creative Commons License CC BY 4.0.}\n \\label{fig:fig17}\n\\end{figure}\nTo summarize, while making remote single photons from different QDs indistinguishable enough to reliably project them onto a joint Bell state in a BSM is a challenge that requires high-quality samples, resonant excitation schemes and very good experimental control, it has already been demonstrated several times with TPI visibilities surpassing the classical limit of $50\\%$ for WCPs. This promises further advances in QKD schemes and quantum repeaters relying on remote sources.\n\n\\subsection{Quantum Memory} \\label{section4.3}\nAnother crucial building block for many applications in quantum information is the quantum memory. 
An ideal quantum memory stores a quantum state with zero decoherence for an infinite amount of time and allows on-demand retrieval of the same quantum state for further use. It does so at a high bandwidth for fast operation and without introducing additional photon noise. In a quantum computation scenario, a quantum memory is necessary to delay computational steps and temporarily store quantum states, thereby allowing for more complex algorithms or new computation schemes such as linear optical quantum computing (LOQC) \\cite{Kok2005}. Other applications of quantum memories include quantum metrology \\cite{Giovannetti2011}, quantum machine learning \\cite{Biamonte2017}, single-photon detectors \\cite{Imamoglu2002} and more (see \\cite{Bussieres2013} for an in-depth review).\n\\\\\nIn the communication scenarios we consider in this review, quantum memories are required for the implementation of quantum repeater protocols (cf. previous subsection), which allow entanglement distribution and QKD over, in principle, arbitrary distances \\cite{Briegel1998}. Note that, although all-photonic repeater schemes exist which do not rely on quantum memories, they are complex in other ways, e.g., demanding multiple multiplexed quantum channels or cluster states \\cite{Li2019,Borregaard2020}. Quantum communication also profits indirectly from a quantum memory, e.g. from memory-assisted two-photon interference and facilitated teleportation, as discussed in \\cite{Ma2019}.\n\\\\\nDifferent types of memories have been proposed and demonstrated in the past, such as solid-state systems \\cite{DeRiedmatten2008}, trapped atoms \\cite{Bao2012} and alkali vapor cells \\cite{Eisaman2005}. For an extensive overview of memory protocols and platforms we refer to \\cite{Heshami2016}. 
Here, we want to focus on memories that are suitable for QD-SPSs, as well as practical enough for scalable, optical quantum networks.\n\\\\\nAs a natural way of creating nodes for an optical quantum network is to combine QD-SPSs with compatible quantum memories, one has to ensure that the memories can keep up with the quantum emitters in terms of efficiency and bandwidth. QDs allow high emission rates; thus, the memory should be able to operate at a similar rate. Since not every pulse emitted by a real QD contains a photon, memories also require low noise backgrounds, so that it is possible to distinguish a retrieved photon from the noise floor. In the following, we will review recent advances in quantum memories compatible with QDs (see also Neuwirth et al. \\cite{Neuwirth2021} for an in-depth review).\n\\\\\nAlkali vapor cells are promising candidates to realize QD-compatible memories (\\textbf{Figure \\ref{fig:fig18}a}). Here, the group velocity of light pulses propagating through the atomic ensemble can be reduced, i.e. light pulses are slowed down, which allows for photon storage \\cite{hau1999}. Using warm alkali vapor cells allows one to omit the complex cooling infrastructure necessary for many solid-state memories and the laser cooling necessary for memories based on cold atoms. They are also practical for larger networks, because they can offer sufficiently long coherence times \\cite{Borregaard2016} and profit from a large set of existing memory protocols \\cite{Lvovsky2009}. 
In comparison, solid-state memories and ultracold atoms offer much lower noise and longer storage times, making them more desirable for long-term memories, but their smaller bandwidths render them incompatible with QD single photons in quantum networks.\n\\\\\nTo store photons in the atomic ensemble inside an alkali vapor, one uses transitions between three energy levels (a so-called $\\Lambda$-system), with two spectrally close lower states and one excited state at a higher energy, which enables an effect called electromagnetically induced transparency (EIT) \\cite{Fleischhauer2000,Ma2017} (Figure \\ref{fig:fig18}b). While the to-be-stored quantum state is resonant with one transition, a control pulse enables the excitation by driving the other transition. On the atomic level, all the atoms are prepared in a joint ground state, before the excitation by the signal photon, in combination with a control pulse, leads to the creation of an atomic spin-wave, in which the coherence is stored. With a second control pulse, the spin-wave is transformed back into an optical excitation and the photon can be retrieved again (Figure \\ref{fig:fig18}c).\n\\\\\nTypically, signal and control pulses are detuned from the resonant atomic transition to reduce noise due to immediate fluorescence. One way of detuning them while still driving the transitions is to use Raman scattering, where the anti-Stokes shift helps match the transition energy, as shown by Reim et al. \\cite{Reim2011}. While detuning reduces the noise floor, higher detuning reduces the memory efficiency and requires higher control pulse intensities and consequently a better suppression of the control pulse in the memory output; thus, a compromise between low noise and efficient excitation must be found.\n\\\\\nInitial theoretical comparisons of different schemes of using Rubidium vapor cells combined with QD single photons were performed by Rakher et al. 
\\cite{Rakher2013} and highlighted the necessary steps towards storing QD single photons in a high-bandwidth memory. Such a memory was demonstrated with an end-to-end efficiency of $3.4\\%$ by Wolters et al., who used a warm Rubidium vapor cell to store photons (emitted by a laser) for up to 50 ns, exploiting EIT and achieving a bandwidth of 0.66 GHz, compatible with typical QD-SPSs \\cite{Wolters2017}.\n\\\\\nThe quantum memory demonstrations presented so far only proved that a photon could be stored and retrieved with finite efficiency, but not that the retrieved photon was actually the same, i.e. indistinguishable from the initial photon. This was first shown by Hosseini et al., who achieved $98\\%$ process fidelity by performing quantum state tomography on the retrieved photons \\cite{Hosseini2011}. In this context, another insightful experiment towards quantum memories was performed by Vural et al., in which the authors performed TPI measurements between two photons of which only one went through the alkali vapor cell, proving that the two photons were still indistinguishable after interacting with the atomic ensemble \\cite{Vural2018}. Moreover, the state-of-the-art for hot vapor cell quantum memories in terms of efficiency was recently demonstrated by Guo et al., who achieved an efficiency exceeding $82\\%$, using the Raman-detuned quantum memory scheme and attenuated laser pulses as the signal photons \\cite{Guo2019}.\n\\\\\nHowever, standard EIT in vapor cells does not allow storing qubits in which the information is encoded in the polarization, since the EIT memory only preserves the phase, but not the polarization. This was solved by England et al., achieving $98\\%$ fidelity of polarized photons using a dual-rail memory \\cite{England2012}, as well as by Namazi et al. 
demonstrating storage of single polarized photons and their retrieval with a fidelity to the original polarization state exceeding $90\\%$ \\cite{Namazi2017}, which they also employed in QKD experiments \\cite{namazi2017free}. For this purpose, both groups used polarization-dependent displacement optics (Glan-laser polarizer) to transform the polarization-encoding into a path-encoding, defining two spatially separate paths through the memory, and then converting the qubits back to polarization states after leaving the memory (Figure \\ref{fig:fig18}d). Hereby, each polarization component is individually stored by taking a different path through the memory medium. Since the entangled photons generated from the XX-X radiative cascade in QDs are polarization entangled, the demonstration of a quantum memory for polarization qubits was a major step towards memory-based quantum networks using QD quantum light sources. Recently, the storage and retrieval of a pair of polarization-entangled photons inside a quantum memory was also demonstrated \\cite{Ding2018}.\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure18.png}\n \\caption{Storing a QD single photon in a Rb vapor cell (a) works via Electromagnetically Induced Transparency in a $\\Lambda$-system (b), so that a sequence of control pulses can define the light storage in the memory (c). To store polarization qubits, the polarization encoding can be translated into path encoding via beam displacer (BD) and Glan-laser polarizer (GLP in d) and transformed back after the memory \\cite{Namazi2017}. (d) reprinted with permission from \\href{https:\/\/link.aps.org\/doi\/10.1103\/PhysRevApplied.8.034023}{\\textit{Namazi et al. 2017}} \\cite{Namazi2017} Copyright 2017 by the American Physical Society.}\n \\label{fig:fig18}\n\\end{figure}\nEven with existing quantum memories whose bandwidths are compatible with QDs, there are still a couple of issues to be addressed. 
The first one is the question of how one can, in practice, precisely match the QD emission energies to the transition energies in the atomic vapor ensemble. Additionally, the memory used in a future quantum network must also be able to deal with imperfect QDs, which are for example subject to spectral diffusion and dephasing, both broadening the emission and reducing the single-photon indistinguishability. These effects were theoretically treated by Rakher et al., who found that fluctuations in both phase and wavelength of the QD photons significantly reduce the memory efficiency \\cite{Rakher2013}.\n\\\\\nThe challenge of spectrally matching artificial atoms with their natural counterparts, i.e. matching photons emitted by QDs to transitions of alkali atoms, was first addressed by Akopian et al. in 2011, by slowing down single photons from QDs in an atomic ensemble \\cite{Akopian2011}. To do so, the authors shifted the QD emission energies to the memory transition energy via magnetic Zeeman tuning. In addition, they also showed that, after filtering the QD signal by a spectral window smaller than the linewidth of the atomic ensemble, the remaining photons, with fluctuating emission energies due to spectral diffusion, were all slowed down by the same amount (experienced the same storage times in the memory). Other ways of matching the QD emission energies with the atomic vapor transitions are piezo-strain tuning, as done by Jahn et al. \\cite{Jahn2015}, temperature variation \\cite{Bremer2020} and, as demonstrated by Zhai et al., tuning over a large range covering all the relevant alkali vapor transition energies \\cite{zhai2020}. Additionally, tuning can be achieved by using \"dressed-state\" resonance fluorescence \\cite{Vamivakas2009}, which does not require any electric or magnetic field tuning, as shown by Ulrich et al. \\cite{Ulrich2014}. 
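As a rough order-of-magnitude illustration of the magnetic tuning approach, the Zeeman shift of an exciton line scales as $\Delta E = g \mu_B B$. The exciton g-factor in the sketch below is an assumed, purely illustrative value, since it varies strongly between samples.

```python
# Order-of-magnitude sketch of Zeeman tuning of a QD exciton line,
# Delta_E = g * mu_B * B.  The g-factor g = 3 is an assumed, illustrative
# value; real exciton g-factors are strongly sample dependent.
MU_B_UEV_PER_T = 57.88   # Bohr magneton in micro-eV per Tesla
GHZ_PER_UEV = 0.2418     # 1 micro-eV expressed as a frequency (E = h*f)

def zeeman_shift_ueV(g, B_tesla):
    return g * MU_B_UEV_PER_T * B_tesla

shift = zeeman_shift_ueV(3.0, 5.0)   # assumed g = 3 at B = 5 T
print(round(shift))                  # 868 micro-eV
print(round(shift * GHZ_PER_UEV))    # ~210 GHz of tuning range
```

Shifts of this order (hundreds of micro-eV, i.e. hundreds of GHz) are what makes it feasible to pull a QD line onto a nearby alkali transition.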
Finally, we would like to stress that although single photons from QDs have been successfully stored and retrieved with high fidelity from quantum memories, these types of quantum memories are not yet sufficient, since they do not allow on-demand retrieval. They can only release the photon upon a control pulse which is defined by the pulsed control laser, but the storage time cannot be adapted in a feedforward manner yet. Here, further progress will be necessary to obtain a true quantum memory.\n\\\\\nApart from storing a photon in an atomic ensemble, one can also store it in a solid-state system. Such a system could, for example, be a rare-earth-ion system using an atomic frequency comb \\cite{Usmani2010}. However, such systems typically have a strong dependence on the polarization, which is why storing polarization qubits was difficult initially. By using two rare-earth crystals of Nd$^{3+}$:YVO$_4$, however, Zhou et al. also demonstrated the storage of polarization qubits \\cite{Zhou2012}. Later, the same group showed how a sequence of single photons emitted by a QD can be stored in their memory for 40 ns \\cite{Tang2016}. Additionally, they found that even for imperfect QD sources (with significant multi-photon contributions), only one photon is stored, thus increasing the purity of the qubit through the quantum memory.\n\\\\\nInterestingly, spins inside a QD can themselves be considered a solid-state quantum memory, as demonstrated by Kroutvar et al. using electron spins \\cite{Kroutvar2004}, and also the so-called dark-exciton state in a QD can serve as a memory \\cite{McFarlane2009}. The advantage of nuclear spin memories is that they are shielded much better from the environment, due to the small magnetic moments of the nuclei. 
Thus, long-term storage of quantum bits (a quantum hard-drive) is more likely to be possible in such solid-state memories, limited only by dipole-dipole interactions among the nuclear spins \\cite{Taylor2003}, which can be suppressed as demonstrated by Kurucz et al. \\cite{Kurucz2009}. Other examples of nuclear spin memories include nuclear spins of carbon atoms coupled to NV centers in diamond \\cite{Shim2013} or dopants in silicon \\cite{Morton2008}. Although solid-state memories show excellent properties for photon storage, their reduced coupling to the environment can also be a disadvantage, making the read and write processes more difficult.\n\\\\\nAn alternative way of transferring the qubit state from a flying qubit, such as a QD single photon, onto a solid-state memory is by teleporting the state, using an entangled photon pair. In this scheme, one photon of the entangled state would be stored in a local solid-state quantum memory, while the other is sent to the to-be-stored flying qubit. By projecting the two onto a joint Bell state, the flying qubit gets teleported into the quantum memory, as was demonstrated by Bussi\u00e8res et al. using entangled photons at Telecom wavelengths, created via parametric down-conversion, and a rare-earth memory \\cite{Bussieres2014}. They achieved a fidelity above $80\\%$ over a distance of 12 km.\n\\\\\nIn summary, significant progress has been made during the last decade in optical quantum memories, especially in increasing the compatibility with QD-SPSs. Challenging tasks that remain are to further enhance the efficiency, reduce the noise (i.e. increase the signal-to-noise ratio to allow for non-ideal QDs), and implement on-demand retrieval mechanisms. 
On the QD side, research must focus on creating tunable, Fourier-limited single photons which are subject to less line broadening and can thus be better integrated with quantum memories.\n\\subsection{Teleportation} \\label{section4.4}\nTeleportation consists of sending a previously unknown quantum state from Alice to Bob without physically sending the qubit itself, but by 'sacrificing' an entangled state shared between Alice and Bob, and using classical communication (\\textbf{Figure \\ref{fig:fig19}a}). Hereby, the state to be teleported is projected into the Bell basis in a joint measurement together with one half of the entangled state at Alice. The result of that Bell measurement then determines the unitary that must be applied by Bob to the other half of the entangled state to retrieve the teleported state \\cite{Bennett1993}.\n\\\\\nTeleportation is a vital building block in future quantum networks, as it provides a way of realizing non-local quantum computations and can be used to transfer qubits into a solid-state quantum memory \\cite{Bussieres2014}. Teleportation can also be directly used for QKD (given a shared entangled state) by teleporting an encoded state from Alice to Bob, which is known as a quantum relay. Quantum relays using entangled photons from QDs have been demonstrated by Varnava et al. over a distance of 1 km \\cite{Varnava2016} and later by Huwer et al. using photons at Telecom wavelengths \\cite{Huwer2017}. In the following, we discuss advances in quantum teleportation enabled by QD-based quantum light sources.\n\\\\\nAs discussed earlier, the efficiency of BSMs is reduced by multi-photon events unavoidable in implementations using WCP or SPDC sources. Therefore, teleportation can also benefit from the use of entangled photons created by deterministic QD-SPSs. The first QD-based proof-of-concept teleportation experiment was reported by Nilsson et al. using a QD entangled-light-emitting diode \\cite{Nilsson2013}. 
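The protocol outlined at the beginning of this subsection can be sketched with plain state vectors. The example below assumes an ideal shared $\\ket{\\Phi^{+}}$ pair and perfect, lossless photons; it verifies that each of the four BSM outcomes occurs with probability $25\\%$ and that the corresponding Pauli correction always restores the input state at Bob's side.

```python
import numpy as np

rng = np.random.default_rng(7)

# Unknown input state |chi> = a|0> + b|1> (random, normalized)
chi = rng.normal(size=2) + 1j * rng.normal(size=2)
chi /= np.linalg.norm(chi)

s2 = np.sqrt(2.0)
phi_p = np.array([1, 0, 0, 1]) / s2   # |Phi+>, the shared resource pair
phi_m = np.array([1, 0, 0, -1]) / s2  # |Phi->
psi_p = np.array([0, 1, 1, 0]) / s2   # |Psi+>
psi_m = np.array([0, 1, -1, 0]) / s2  # |Psi->

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# BSM outcome at Alice -> Pauli correction Bob applies (classical feed-forward)
outcomes = [(phi_p, np.eye(2, dtype=complex)), (phi_m, Z), (psi_p, X), (psi_m, Z @ X)]

# Qubit order: (input, Alice's half of the pair, Bob's half of the pair)
state = np.kron(chi, phi_p)

probs, fids = [], []
for bell, U in outcomes:
    # Project the first two qubits onto this Bell state; Bob's qubit remains
    bob = np.tensordot(bell.conj().reshape(2, 2), state.reshape(2, 2, 2),
                       axes=([0, 1], [0, 1]))
    p = np.vdot(bob, bob).real
    bob = U @ (bob / np.sqrt(p))
    probs.append(p)
    fids.append(abs(np.vdot(chi, bob)) ** 2)

print([round(p, 3) for p in probs])  # [0.25, 0.25, 0.25, 0.25]
print([round(f, 3) for f in fids])   # [1.0, 1.0, 1.0, 1.0]
```

A linear-optical partial BSM can only identify a subset of these four outcomes, which is why the experiments discussed below distinguish between one-state and two-state BSMs.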
The authors used the entangled photon pair emitted in one pulse to teleport a photon from the subsequent photon pair (Figure \\ref{fig:fig19}b) and obtained a maximum teleportation fidelity above the classical threshold of 2\/3. Although the teleported photon and the entangled pair were created by the same device, the authors emphasized that the teleported photon can in principle also stem from an external source. In fact, their QD emission energy was tunable to match the wavelength of an incoming external photon, facilitating TPI and enabling teleportation. However, the operation wavelength of 890 nm was not yet compatible with standard fiber-optical communication networks.\n\\\\\nTeleporting external photons with a QD entangled-photon source emitting in the Telecom C-band was recently demonstrated by the same group using an attenuated laser pulse \\cite{Anderson2020} (Figure \\ref{fig:fig19}c). In an initial characterization measurement, they found a HOM visibility of up to $70\\%$ for the TPI between the attenuated laser pulse and single photons emitted from their non-resonantly excited QD, significantly above the maximum value of $50\\%$ for TPI between two attenuated laser pulses. The fidelity observed for teleporting a laser photon exceeded $85\\%$ (Figure \\ref{fig:fig19}d).\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure19.png}\n \\caption{The quantum teleportation scheme (a) was implemented with target photon and entangled photons both emitted by the same QD by Nilsson et al. (b) \\cite{Nilsson2013} and with a laser photon at telecom wavelengths as the target photon by Anderson et al. (c) achieving teleportation fidelities above the classical limit (d) \\cite{Anderson2020}. (b) reprinted by permission from Springer Nature: Nature Photonics \\href{http:\/\/dx.doi.org\/10.1038\/nphoton.2013.10}{\\textit{Nilsson et al. 
2013}} \\cite{Nilsson2013} Copyright 2013, (c,d) reprinted from \\href{https:\/\/doi.org\/10.1038\/s41534-020-0249-5}{\\textit{Anderson et al. 2020}} \\cite{Anderson2020} under Creative Commons license CC BY.}\n \\label{fig:fig19}\n\\end{figure}\nIn recent years, research efforts focused on improving QD-based quantum light sources, reducing the FSS, and making the emitted photons or photon pairs more indistinguishable. However, relevant applications are already possible even with imperfect QDs. Basset et al. recently demonstrated teleportation experiments with photons from QDs, for which they deliberately chose a QD of below-average quality \\cite{Basset2020a}. The authors used one photon from every second entangled photon pair as the to-be-teleported input state. They showed that projecting into two Bell states, instead of only one (cf. \\cite{Reindl2018}), in the partial BSM not only increases the efficiency, but also reduces the impact of an imperfect indistinguishability (cf. \\textbf{Figure \\ref{fig:fig20}} a and b for one-state and two-state BSM, respectively). Part of the photons that previously led to false coincidence counts, since they were not indistinguishable enough, can now be identified from their polarization. The authors further enhanced the teleportation fidelities via spectral filtering, which improved the photon indistinguishability (Figure \\ref{fig:fig20}c), and also proposed a model from which the process fidelity of the teleportation protocol can be predicted for given QD characteristics (FSS and photon indistinguishability) (Figure \\ref{fig:fig20}d).\n\\\\\n\\begin{figure}\n \\includegraphics[width= \\linewidth]{figure20.png}\n \\caption{Teleportation with entangled photons from QDs: Comparison of teleportation fidelities with a $25\\%$ BSM (a) and a $50\\%$ BSM (b), both with low visibility, and one with higher visibility due to spectral filtering (c). Dependence of teleportation fidelity on visibility, FSS S and BSM type (d). 
Figures reprinted from \\href{http:\/\/dx.doi.org\/10.1038\/s41534-020-00356-0}{\\textit{Basset et al. 2020}} \\cite{Basset2020a} under Creative Commons CC BY license.}\n \\label{fig:fig20}\n\\end{figure}\nNotably, other teleportation protocols exist as well, such as single-mode teleportation, which does not require entangled photon pairs and is related to the proposal of linear optics quantum computation \\cite{Knill2001}. This protocol was experimentally demonstrated with QD single photons by Fattal et al. \\cite{Fattal2004}.\n\\\\\nAs teleportation relies on the same ingredients as quantum repeaters for quantum networks, namely large TPI visibilities and entangled photon pairs, it will also profit from further advances in the development of QD-based quantum light sources. Due to the previously limited TPI visibilities, teleportation of a photon from a QD using an entangled photon pair from another QD has not been demonstrated so far. However, the recent advances discussed in \\textbf{section 4.2} suggest that this can be achieved in the near future.\n\\subsection{Quantum Random Number Generation} \\label{section4.5}\nAmong the building blocks required for quantum networks, quantum random number generators (QRNG) were the first to be realized, for instance using radioactive decay \\cite{Schmidt1970}; for a review of random number generation see \\cite{Herrero-Collantes2017}. As a result, the technology readiness level is the highest among all building blocks, as confirmed by the widespread commercial availability (recently even in consumer smartphones \\cite{idquantique}), and standardization efforts are also underway \\cite{Hart2017}.\n\\\\\nAll existing QKD protocols require reliable sources of randomness. In addition, true random numbers are of course also vital in many other domains, ranging from simulations and computing to online casinos. 
It is not enough to have a sequence of uniformly distributed random bits; they must also fulfill the requirements for forward and backward security. The final key will only be as secure as the initial random numbers are actually random. Be it in the basis choices in the BB84 scheme or in the choice of 2-universal hash functions during the privacy amplification post-processing step, an initial random seed is needed to make quantum communication work.\n\\\\\nIt has been shown that ideal QRNGs are the only perfect source of random numbers, as opposed to pseudo-random number generators \\cite{VonNeumann1963,Peres1992}. While computers typically rely on pseudo randomness, where a random seed and an algorithm are used to generate the numbers, or on classical physical randomness, based on the complexity of classical systems such as thermodynamic systems, only quantum processes can provide true sources of randomness \\cite{Ma2016}. This boils down to proving the absence of hidden variables, so that the outcome of a single projective measurement cannot be predicted by any means, which is why randomness can be certified via the violation of Bell-like inequalities \\cite{Gallego2013}. The simplest optical QRNG can be described as a photon impinging on a BS and its wave function collapsing on a single-photon detector. These were in fact the first optical QRNGs that were implemented \\cite{Jennewein2000,Stefanov2000}. In such a setup, it was also demonstrated that a true SPS can provide more randomness than a bright laser \\cite{Oberreiter2016}. \n\\\\\nBy now, several other QRNG schemes have emerged, which provide even faster random bit rates using for instance photon arrival times \\cite{Furst2010,Wayne2010,Wahl2011}. The quantum phase fluctuation and vacuum state schemes achieve Gbps bandwidths \\cite{Haylock2019,Lei2020,Bai2021} and do not rely on SPSs. 
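In practice, the raw clicks of such a beamsplitter QRNG are biased whenever the splitting ratio or the detector efficiencies are unbalanced. The classic debiasing procedure of von Neumann \\cite{VonNeumann1963} removes any static bias at the cost of bit rate, as the following sketch illustrates (the $70{:}30$ splitting ratio is an arbitrary illustrative choice):

```python
import random

def von_neumann_extract(raw_bits):
    """Von Neumann debiasing: consume raw bits in pairs, emit 0 for the
    pair (0,1), 1 for (1,0), and discard (0,0) and (1,1).  The output is
    unbiased whenever the raw bits are independent and identically (even
    if unevenly) distributed."""
    out = []
    for b1, b2 in zip(raw_bits[0::2], raw_bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

# Simulated biased detector clicks, e.g. from a 70:30 beamsplitter
rng = random.Random(42)
raw = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]
clean = von_neumann_extract(raw)
print(round(sum(raw) / len(raw), 1))      # 0.7 (biased raw bits)
print(round(sum(clean) / len(clean), 1))  # 0.5 (debiased output)
```

The price is throughput: for a bias $p$, only a fraction $2p(1-p)$ of the bit pairs yields an output bit, which is one reason practical QRNGs combine fast physical entropy sources with more efficient randomness extractors.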
Even faster QRNGs are available \\cite{Liu2017}, but these lack the electronics to handle this amount of data in real time, a problem similar to the classical key reconciliation in practical QKD applications. In parallel to the developments in QKD, QRNG researchers are now shifting their focus from trusted devices to the development of device-independent (DI) approaches \\cite{Gomez2019}. As in QKD, full DI has not been achieved yet, and the intermediate approaches like Semi-Device-Independent \\cite{Avesani2021} or self-certifying approaches suffer from lower bit generation rates. For an overview of different QRNG platforms and the state-of-the-art we refer to \\cite{Herrero-Collantes2017} and \\cite{Hart2017}.\n\\\\\nConcerning QRNGs based on semiconductor QDs, one must note that QKD protocols are agnostic towards where the random numbers come from, as long as their rate is sufficiently high to keep pace with QD emission rates. Nevertheless, it is interesting to note that specific ways of creating QRNGs with QDs exist, which could in the future be combined with their property as a SPS to generate random numbers and true single photons on the same platform \\cite{McCabe2020,Purkayastha2016}. While QDs can improve many ingredients of future quantum networks, they do not appear to be the optimal choice for the generation of random numbers in terms of practicality and possible advantages.\n\n\\subsection{Towards Quantum Networks - Practical Challenges} \\label{section4.6}\nFinally, there are a few hands-on issues to be addressed when attempting to build a stable and functional quantum network that is immune to environmental fluctuations. 
Here, we review progress in maintaining the stability of potential quantum networks via active stabilization, monitoring, and optimization of the building blocks making up the network.\n\\\\\nOne important issue is how one can guarantee the stability of a QKD channel over long times, so that the security is not compromised by an increase in QBER \\cite{Zhu2002}. To this end, active stabilisation schemes will be necessary, e.g. using bright beacon lasers that are multiplexed into the quantum FSO channel, as demonstrated in the seminal work by Ursin et al. in 2007 using a 144 km link between the Canary Islands of La Palma and Tenerife and entangled photon pairs generated via an SPDC source \\cite{Ursin2007}. Such stabilization approaches have been used to improve QKD experiments employing local free-space channels \\cite{carrasco2014} and also to transmit quantum states from satellite to ground \\cite{Yin2017}. Using QD-generated entangled photon pairs at telecom wavelengths, an advanced stabilization scheme has recently been demonstrated by Xiang et al. \\cite{Xiang2019}, who continuously exchanged entangled photon pairs over 18 km of optical fiber for more than a week (\\textbf{Figure \\ref{fig:fig21}a}). The current state of the art for mechanical stabilization of free-space links was recently achieved by Liu et al., who exchanged SPDC-generated qubits between flying drones \\cite{Liu2021entanglement}. Moreover, it is important to define a joint reference frame for the measurement of the qubits (unless reference-frame-independent schemes are employed \\cite{Laing2010,Wabnig2013}), which can also be done via auxiliary, multiplexed lasers \\cite{Nauerth2013}. In addition to stabilizing the quantum channel, it should also be well characterized, in order to optimize the QKD scheme used for maximum performance.
This can be done experimentally via optimization routines, numerically via key rate predictions, or even using machine learning approaches \\cite{Ismail2019}.\n\\\\\nAnother practical issue refers to the synchronisation between senders and receivers, which can be achieved via synchronisation pulse sequences, a multiplexed synchronisation signal, or GPS clocks \\cite{Bienfang2004,Pljonkin2017}. For fast stabilization and high key rates it will also be important to achieve fast modulation of the phase and polarization of photons \\cite{Grunenfelder2018,Li2019}, for which integrated photonic systems promise the best modulation rates \\cite{Sibson2017,Bunandar2018}, as well as high-speed detectors with low dead times, especially at telecom wavelengths (for an overview see \\cite{Zadeh2021}).\n\\\\\nNot only the stability but also the security has to be continuously monitored. As mentioned before, multi-photon pulses enable the photon number splitting attack, which is why sources have to be well characterized in order to compensate for their non-ideal photon statistics. The amount of multi-photon contributions could, however, change (or be changed by an adversary), which is why it is necessary to monitor the multi-photon contributions during key exchange by using a subset of the photons for purity checks, as demonstrated by Kupko et al. in a real-time security monitoring approach (Figure \\ref{fig:fig21}b) \\cite{Kupko2020}. In their work, the authors also demonstrated how temporal filtering on the receiver side can enhance the signal-to-noise ratio to optimize the secure key rate for a given channel loss. When implementing such approaches, special care must be taken not to open the door for other side-channel attacks, as an adversary could otherwise steal photons outside the temporal acceptance window without being noticed.
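The benefit of receiver-side temporal filtering can be illustrated with a minimal sketch (all numbers are illustrative and not taken from \cite{Kupko2020}): signal photons arrive in a short pulse, while dark counts are uniform in time, so narrowing the acceptance window improves the signal-to-noise ratio at the cost of discarding part of the signal.

```python
import math

def window_counts(window_ns, signal_total=1000.0, pulse_sigma_ns=0.5,
                  dark_rate_per_ns=2.0):
    """Signal and noise counts inside an acceptance window centred on
    the pulse.  The signal is modelled as a Gaussian in arrival time,
    dark counts as uniform in time; all parameters are illustrative."""
    half = window_ns / 2.0
    signal = signal_total * math.erf(half / (pulse_sigma_ns * math.sqrt(2.0)))
    noise = dark_rate_per_ns * window_ns
    return signal, noise

def snr(window_ns):
    """Signal-to-noise ratio for a given acceptance window width."""
    signal, noise = window_counts(window_ns)
    return signal / noise
```

With these toy numbers, shrinking the window from 10 ns to 1 ns raises the SNR substantially while sacrificing roughly a third of the signal counts, which is the trade-off a secure-key-rate optimization has to balance.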
Hence, if an implementation is meant to benefit not only from the improved signal-to-noise ratio but also from a lower estimated $g^{(2)}(0)$, temporal filtering has to be applied on the sender side as well (or $g^{(2)}(0)$ at least needs to be monitored inside Alice, too). Another issue is that, because of finite-key-size effects, a minimum block size must be reached before the post-processing step, as implemented nowadays in commercial systems \\cite{Chaiwongkhot2017}.\n\\\\\n\\begin{figure}[h]\n \\includegraphics[width= \\linewidth]{figure21.png}\n \\caption{Stable exchange of entangled photon pairs via optical fiber for an entire week (a) \\cite{Xiang2019}. Continuous measurements of relevant QKD parameters such as the QBER (c) and photon autocorrelation values (d) enable real-time security monitoring, as long as a minimum integration time for conclusive $g^{(2)}$ results is set (b) \\cite{Kupko2020}. (a) reprinted from \\href{http:\/\/www.nature.com\/articles\/s41598-019-40912-z}{\\textit{Xiang et al. 2019}} \\cite{Xiang2019} under Creative Commons Attribution 4.0 International License, (b) reprinted from \\href{http:\/\/dx.doi.org\/10.1038\/s41534-020-0262-8}{\\textit{Kupko et al. 2020}} \\cite{Kupko2020} under Creative Commons Attribution 4.0 International License.}\n \\label{fig:fig21}\n\\end{figure}\nThe availability of a classical authenticated channel is always assumed implicitly in QKD protocols. Even if an initial secret is shared, authentication has to be repeated regularly, which uses up a fraction of the key; protocols for efficient authentication are therefore necessary \\cite{Ljunggren2000,kiktenko2020}. Alternatively, QKD can be combined with post-quantum cryptography schemes to support the authentication \\cite{Wang2021}.\n\\\\\nA challenge that increases in importance with the growing maturity of the developed QKD systems concerns security certification.
One could either rely on device-independent schemes or use special protocols to confirm that a QKD system is working properly \\cite{Tomita2019}. This includes the certification of the initial randomness, the integrity of the quantum channel, the purity of the source, as well as the correctness of the post-processing steps.\n\\\\\nFinally, in order to benchmark different QKD protocols and technology platforms, a general framework beyond stating only the maximum secure key rate or the maximum achieved distance must be developed, since these parameters depend strongly on the specific laboratory setup, the specific source used, and the measurement conditions. Therefore, testing standards are envisioned to reliably compare different approaches \\cite{Alleaume2004,Langer2009,alleaume2014}. This could be done, for instance, by ensuring that different implementations certify the same amount of overall $\\epsilon$-security (see the discussion of security definitions in \\cite{Renner2008}). Another useful figure of merit could be the ``security per dollar spent'', considering the fact that different QKD architectures, which in principle promise different levels of security, also come with different levels of implementation difficulty and hence cost. Last but not least, while there is of course the ultimate aim to achieve unconditional security, ruling out even the most unlikely attacks (attacks that are practically impossible but allowed by the laws of quantum mechanics), in practice one might be content with a more relaxed, applied form of security. This could either be a deliberate trade-off between security gain and implementation cost, or an intermediate step towards ultimate security. \n\\\\\nThe approach of assuming realistic restrictions on an adversary is known and even required in the field of cryptographic primitives beyond QKD in untrusted settings, e.g.
quantum oblivious transfer in the so-called noisy storage model \\cite{Wehner2008}, representing crucial building blocks for modern communication networks \\cite{Broadbent2016}.\n\n\\section{Conclusion}\nIn this review we summarized the progress made in recent years in the field of quantum communication using quantum light sources based on semiconductor quantum dots. After revisiting the foundations of QKD and introducing semiconductor QDs as one of the most promising candidates for photonic implementations of quantum information, we comparatively discussed implementations of QKD using single photons as well as entangled photon pairs generated via engineered QD-devices. Next, we discussed recent progress in the development of key building blocks of future quantum networks and how they can benefit from, or become compatible with, such semiconductor QDs. Considering the tremendous progress achieved in the field, functional quantum networks and real-world applications appear to be within reach in the not too distant future. However, some important ingredients are still missing or require additional research efforts. For example, it is necessary to combine the superior properties QD sources have proven to deliver, in terms of high efficiency, high brightness, high single-photon purity, large photon indistinguishability, and large entanglement fidelities, with practical and durable source modules operable outside shielded lab environments. Another important challenge concerns the development of efficient quantum memories with on-demand retrieval. Furthermore, protocols for multi-node architectures and schemes, as well as standards for security certification, are still to be developed. A major challenge that has not yet been tackled at all using QD-devices, and which was not discussed here, concerns the implementation of cryptographic primitives beyond QKD in untrusted settings.
Such primitives, however, represent important building blocks for sensitive tasks in modern communication networks, such as secure authentication at a bank's ATM. This highlights the richness of the field of quantum cryptography and the new areas of research still to be explored.\n\\\\\nFinally, we want to emphasize that future quantum networks will certainly not be built on a single technology or a specific protocol. On the contrary, many different platforms and schemes will most probably be combined and coexist, including deterministic quantum light sources as well as WCP- and SPDC-based sources, two-party quantum cryptographic primitives like QKD and beyond, multi-party primitives, classical and post-quantum cryptography, different encoding schemes and various network architectures and topologies, each being used and optimized for its specific purpose. Reviewing the achievements and successes since the advent of the field of quantum cryptography, driven by ideas of S. Wiesner in the late 1960s, it seems reasonable to expect major steps towards the quantum internet within this decade.\n\\\\\n\\end{justify}\n\\medskip\n\n\\medskip\n\\textbf{Acknowledgements} \\par \nWe gratefully acknowledge financial support from the German Federal Ministry of Education and Research (BMBF) via the project 'QuSecure' (Grant No. 13N14876) within the funding program Photonic Research Germany.\n\n\\medskip\n\n\\bibliographystyle{MSP}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbpmu b/data_all_eng_slimpj/shuffled/split2/finalzzbpmu new file mode 100644 index 0000000000000000000000000000000000000000..f79c2e233b298e2fd2d54e7a1b2018a62683a1be --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbpmu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nOne of the most intriguing questions in modern astronomy is how the universe\nassembled into the structures we see today.
A major \npart of this question is understanding how galaxies formed over\ncosmic time. A popular and rapidly developing method of tracing the \nevolution of galaxies is through examining galaxy morphologies and\nstructures over a range of redshifts (e.g. Conselice 2003; Conselice, \nRajgor \\& Myers 2008; Conselice et al. 2009; Lotz {et~al.$\,$} 2008; Cassata\net al. 2010; Ricciardelli\net al. 2010; Weinzirl et al. 2011). While understanding how the morphologies of \ngalaxies evolve, and how matter in galaxies is structured, is fundamental, \nstructural studies of galaxies have thus far focused on measuring properties \nin one or more photometric bands, \nand using this as a tracer of\nevolution. These types of structural analyses have always been measured in\nterms of relative luminosities, such that brighter parts of galaxies\ncontribute more towards their structure and morphology. To understand \ngalaxies more fully, however, it is important to study the evolution of \ngalaxy structure in terms of their stellar mass distribution, as this better\nreflects the underlying distribution of stars in a galaxy.\n\nMorphological studies have evolved from initial attempts to describe the \nrange of galaxy forms during the early-mid 20th century, towards modern \nefforts of linking the spatial distribution of a galaxy's stars to its \nformation history. In the move away from visual classifications, the goal \nhas been to quantify galaxy structure\/morphology such that \nstructures can be measured automatically and reliably. There are two broad approaches for the automated classification of\ngalaxies -- the parametric and non-parametric methods.\n\nIn the parametric approach, the light profiles of galaxies, as seen in the\nplane of the sky, are compared with predetermined analytic functions. For\nexample, single Sersic profiles are fit to an entire galaxy's light\nprofile.
Similarly, \nbulge-to-disc ($B\/D$) light ratios are computed by fitting\ntwo-component profiles to a galaxy. These \nmethods are, however, unable to directly parameterise any star formation or\nmerging activity which produces random or asymmetric structures within\ngalaxies.\nThe ``non-parametric'' structural approaches have no such presumed \nlight profiles implicit in their analyses (e.g., Schade et al. 1995; Abraham et al.\n1996; Conselice 1997, 2003; Lotz et al. 2004). This is a more \nnatural approach towards measuring galaxy structures, as the majority of \ngalaxies in the distant Universe have irregular structures that are not\nwell fit by parameterised forms\n(e.g. Conselice {et~al.$\,$} 2005; Conselice, Rajgor \\& Myers 2008). Perhaps the \nmost successful and straightforward of the non-parametric\nsystems is the $CAS$ method, which uses a combination of concentration ($C$),\nasymmetry ($A$) and clumpiness ($S$) values to separate galaxy types \n(Conselice {et~al.$\,$} 2000; Bershady {et~al.$\,$} 2000; Conselice 2003).\n\nBesides overall structure, the measurement of the size evolution of galaxies, as\nmeasured through half-light radii,\nalso has important implications for their formation histories. For example, \nmany studies such as Trujillo {et~al.$\,$} (2007) and Buitrago et al. (2008) have\n measured galaxy sizes for systems at $z > 1$ and have found\na population of compact spheroid galaxies with number densities two orders of \nmagnitude higher than what we find in the local universe. The absence of a significant \nnumber of these small-sized (as measured by half-light radii), high-mass galaxies \nin the local Universe suggests that this population has evolved and has perhaps merged with \nother galaxies.
However, these measurements are based on the light originating from \nthese galaxies, and it is not clear whether sizes would change when measured \nusing the distribution of stellar mass rather than light.\n\nAll of these methods for measuring the resolved structures \nof high redshift galaxies depend on measurements made in one or more \nphotometric bands. As such, quantitative structures are \ninfluenced by a combination of effects, including: regions of enhanced \nstar formation, irregular dust distributions, differing ages and \nmetallicities of stellar populations and minor\/major mergers.\nThe morphology and structure of\na galaxy also depends strongly upon the rest-frame wavelength probed\n(e.g., Windhorst et al. 2002; Taylor-Mager et al. 2007).\nYoung stars, such as OB stars, can dominate the appearances of galaxies\nat blue wavelengths, thereby giving a biased view of the underlying\nmass within the galaxy. Longer wavelength observations improve the\nsituation, but at every wavelength the light emitted comes from a mixture\nof stars at a variety of ages, modified by the dust content. \n More fundamental is the structure \nof the stellar mass distribution within a galaxy, as it is a more direct tracer\nof the underlying potential. \nAlthough galaxy structure has been measured on rest-frame near-infrared and $I$-band \nimaging, which traces stellar mass to first order, we directly investigate the\nstellar mass images within this paper. \n\nWe use the method outlined in Lanyon-Foster, Conselice\n\\& Merrifield (2007; LCM07) to\nreconstruct stellar mass maps of galaxies within the GOODS fields at $z < 1$. \nWe then use this to investigate the evolution of the distribution of\ngalaxy stellar mass during the last half of the universe's history.\nWe use these stellar mass maps to directly measure \n$CAS$ parameters, and the sizes of these galaxies in stellar mass\nover this epoch.
We test the assumption that the $CAS$\nparameters and sizes \ncan be reliably measured in optical light by comparing the parameters \nfor these galaxies measured in the \n$B_{435}$, $V_{606}$, $i_{775}$, and $z_{850}$ bands and in stellar mass. \nWe finally investigate how these various wavelengths and stellar mass\nmaps can be used to classify galaxies by their formation modes, revealing \nhow these systems are assembling.\n\n\n\nThis paper is organised as follows: in \\S~\\ref{sec:data} and\n\\S~\\ref{sec:method} we describe the data, sample selection and method,\nincluding explanations of the K-correction code we use and the $CAS$ \nanalysis. In \\S~\\ref{sec:results} we present our results, discussing the \nstellar mass maps themselves, and the comparison between galaxy size and the \n$CAS$ parameters in $z_{850}$ and mass. We then explore the \nrelations between the structural parameters in $B_{435}$, $V_{606}$, \n$i_{775}$, $z_{850}$ and stellar mass. Finally, in \\S~\\ref{sec:conclusions}, \nwe discuss our conclusions and comment on future applications. Throughout \nwe assume a standard cosmology of {\\it H}$_{0} = 70$ km s$^{-1}$ Mpc$^{-1}$, \nand $\\Omega_{\\rm m} = 1 - \\Omega_{\\Lambda}$ = 0.3.\n\n\\section{Data and Sample}\\label{sec:data}\n\n\\subsection{Data}\n\nThe primary source of our data consists of HST\/ACS imaging from the GOODS ACS \nimaging Treasury Program\\footnote{http:\/\/archive.stsci.edu\/prepds\/goods\/}. \nThe observations consist of imaging in the $B_{435}$ (F435W), $V_{606}$ \n(F606W), $i_{775}$ (F775W) and $z_{850}$ (F850LP) pass-bands, covering the \nHubble Deep Field-North (HDF-N) area. The central wavelengths of these\nfilters, and their full-widths at half-maximum,\nare: F435W (4297, 1038 \\AA), F606W (5907, 2342 \\AA), \nF775W (7764, 1528 \\AA), F850LP (9445, 1229 \\AA).\n The images have been reduced (using the ACS CALACS pipeline),\ncalibrated, stacked and mosaiced by the GOODS team (Giavalisco {et~al.$\,$}\n2004).
The field is provided in many individual image sections, \nwith the HDF-N data divided into 17 sections, each of\n$8192\\times8192$ pixels. The total area of the GOODS survey is roughly\n315 arcmin$^{2}$, and we utilise imaging which was drizzled with\na pixel scale of 0.03 arcsec pixel$^{-1}$, giving an effective PSF\nsize of $\\sim 0.1$\\arcsec in the $z$-band.\n\n\nThe sample consists of galaxies selected in the GOODS-N field with $z \\leq\n1$ and an F814W magnitude of $< 24$. The sample was also\nrestricted to those galaxies with \ndata in all four ACS photometric bands such that we can \nsubsequently obtain the most \naccurate K-corrections and stellar masses possible. The final sample \nconsists of 560 objects, which were individually cut out of the GOODS-N \nACS imaging and handled as separate entities. This is described in detail \nin the following section.\n\nPhotometric redshifts are available for the whole sample (Mobasher {et~al.$\,$}\n2004), and spectroscopic redshifts are available for 404 out of 560 galaxies,\nas found by the Team Keck Redshift Survey (TKRS; Wirth {et~al.$\,$}, 2004). When\nspectroscopic redshifts are available they are used; otherwise the photometric\nvalues are substituted. \n\n\n\\section{Method}\\label{sec:method}\n\n\\subsection{Stellar Mass Maps}\n\nTo create stellar mass maps of our galaxies, each pixel in each\ngalaxy image is treated individually\nthroughout the analysis. We convert fluxes from counts per pixel to apparent\nmagnitudes per square arcsecond for each pixel in every image in all four\nphotometric bands, as well as calculating their associated errors. \nPixels with negative fluxes\nwere assigned values four orders of magnitude smaller than the typical flux\nso that even pixels with low signal-to-noise values can be mapped.
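The per-pixel conversion just described, including the floor applied to negative-flux pixels, can be sketched as follows (the zeropoint and "typical" flux are illustrative placeholders, not the actual ACS calibration values; the 0.03 arcsec pixel$^{-1}$ scale matches the drizzled GOODS images used here):

```python
import math

def pixel_surface_brightness(counts, typical_counts=100.0,
                             zeropoint=25.0, pixel_scale=0.03):
    """Convert the counts in one pixel to an apparent magnitude per
    square arcsecond.  The zeropoint is an illustrative placeholder
    (the real value is band- and instrument-specific).  Non-positive
    pixels are floored four orders of magnitude below a typical flux,
    mirroring the treatment described in the text."""
    if counts <= 0:
        counts = 1e-4 * typical_counts
    # flux per arcsec^2, then the standard magnitude relation
    return zeropoint - 2.5 * math.log10(counts / pixel_scale**2)
```

The same conversion is applied independently in each of the four bands, so every pixel ends up with a four-point SED plus errors ready for the K-correction fit.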
This, \nhowever, means that our resulting values for the CAS parameters, especially\nthe asymmetry, which uses the background light for a correction, are potentially\ndifferent from those measured using optical light.\n\nBecause our galaxies are at a variety of redshifts, we have to carry out\nSED fitting for each pixel to calculate the stellar mass within\neach pixel. \nK-corrections and stellar masses are calculated for each pixel of each image\nusing the ``K-Correct'' code of Blanton \\& Roweis (2007). This is\nsimilar to previous methods outlined by Lanyon-Foster et al. (2007),\nBothun (1986), Abraham et al. (1999), and Welikala et al. (2008, 2011).\n\nThe code, K-Correct\\footnote{distributed at\nhttp:\/\/cosmo.nyu.edu\/blanton\/kcorrect\/},\nnaturally handles large datasets, and through its interpretation of the data\nin terms of physical stellar population models, outputs results in terms of\nstellar mass and star formation histories (SFHs) of each galaxy. The code\nallows the input of GOODS data and outputs a stellar mass for each object. The\ncode can also be modified so as to treat each pixel as a separate input object,\nwhich we do in this work.\n\nThe K-correction ($K_{QR}(z)$) between bandpass $R$, used to observe a galaxy\nwith apparent magnitude ($m_{R}$), at redshift $z$, and the desired \nbandpass $Q$ is defined as (e.g., Oke \\& Sandage 1968):\n\n\\begin{equation} \\label{eq:kcdef}\nm_R = M_Q + DM(z) + K_{QR}(z) - 5\\log(h)\n\\end{equation}\n\n\\noindent where\n\n\\begin{equation} \\label{eq:dmod}\nDM(z) = 25 + 5\\log \\left[ \\frac{d_{L}}{h^{-1}\\,{\\rm Mpc}} \\right]\n\\end{equation}\n\n\\noindent is the bolometric distance modulus calculated from the luminosity\ndistance, $d_L$, $M_Q$ is the absolute magnitude and $h$ = H$_{0}$\/100\nkm\\,s$^{-1}$\\,Mpc$^{-1}$. \n\nThe K-correct software contains model templates in electronic form\nand an implementation of the method to fit data to models.
The code uses\nthe stellar population synthesis models of Bruzual \\& Charlot (2003) and\ncontains training sets of data from GALEX, SDSS, 2MASS, DEEP2 and GOODS.\nThe code finds the nonnegative linear combination of\n$N$ template star formation histories that best match the observations\nusing a minimum $\\chi^2$ comparison. With the entire set of galaxy\nobservations available, K-correct also fits for the $N$ template SFHs using\na nonnegative matrix factorisation algorithm. K-correct\nnaturally handles data uncertainties, missing data, \nand deals with the complications of observing galaxy spectra\nphotometrically, using broadband filters, for galaxies at varying redshifts.\n\nWe use a set of 485 spectral templates to fit to our galaxy pixel\nSEDs using Bruzual \\& Charlot (2003) models with \nthe Chabrier (2003) IMF and Padova (1994) isochrones. All six of the metallicities \navailable (mass fractions of elements heavier than He of $Z = 0.0001, 0.0004, 0.004, 0.008,\n0.02$ and $0.05$) are used.\n\nSome of the known areas of uncertainty in the stellar population models are in\nthe UV and IR regions. In the UV, light from young or intermediate-age\nstellar populations can dominate the flux at\n$\\sim 1500$~\\AA.\nIn the near-IR, thermally pulsating asymptotic giant branch (TP-AGB) stars\ndominate the flux in some intermediate-age populations (Maraston 2005). We\ndiscuss this issue in terms of stellar masses in Conselice et al. (2007), who\nshow that the effects of TP-AGB stars do not become important until\nredshifts higher than the scope of this study ($z > 2$), and they are thus not a\nconcern in the present work.\n\nResults are output from K-Correct for each\ninput pixel, giving rest-frame $NUV$, $U$, $B$, $V$, $R$ and $I$ \nmagnitudes\nplus their associated errors, mass-to-light ratios in all of these bands, and\nfinally the stellar mass in that pixel. We then reconstruct the image of\nthe galaxy in stellar mass based on these calculations across the galaxy.
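The distance modulus that enters these per-pixel K-corrections can be evaluated numerically; a minimal sketch for the flat cosmology adopted in this paper ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m} = 0.3$), using the standard form $DM(z) = 5\log_{10}(d_L/10\,{\rm pc})$ rather than the $h$-scaled version quoted above, is:

```python
import math

C_KM_S = 299792.458          # speed of light [km/s]
H0, OMEGA_M = 70.0, 0.3      # cosmology adopted in this paper
OMEGA_L = 1.0 - OMEGA_M      # flat universe

def luminosity_distance(z, steps=10000):
    """Luminosity distance in Mpc for a flat LCDM cosmology:
    d_L = (1+z) * (c/H0) * int_0^z dz'/E(z'), with
    E(z) = sqrt(Om*(1+z)^3 + OL), via the trapezoidal rule."""
    E = lambda zp: math.sqrt(OMEGA_M * (1.0 + zp)**3 + OMEGA_L)
    dz = z / steps
    integral = sum((1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * 0.5 * dz
                   for i in range(steps))
    return (1.0 + z) * (C_KM_S / H0) * integral

def distance_modulus(z):
    """Bolometric distance modulus DM(z) = 5 log10(d_L / 10 pc)."""
    return 5.0 * math.log10(luminosity_distance(z) * 1e6 / 10.0)
```

At the upper redshift limit of the sample, $z = 1$, this gives $d_L \approx 6600$ Mpc and $DM \approx 44.1$ mag for the adopted cosmology.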
\nWe use a signal-to-noise limit of S\/N $= 3$ per pixel in stellar mass for all\noptical bands, for a reliable calculation. We also investigate how the\nresults would change if pixels are summed together, finding similar results\nwith the exception of the asymmetry which tends to decrease at lower \nresolutions.\n\n\n\n\\begin{figure}\n\\includegraphics[angle=0, width=85mm, height=85mm]{lanyon.fig1.ps}\n\\caption{The number of galaxies in each of our sample classifications\nas described in \\S 3.2. Galaxies were visually classified by\n inspection of both $z_{850}$ images and segmentation maps.}\n\\label{fig:class}\n\\end{figure}\n\n\n\\subsection{Visual Classifications}\n\nWe have classified our entire sample visually, using the $z_{850}$ galaxy\nimages and the object segmentation maps. We have derived the classification\nsystem by applying the most appropriate criteria to this particular\ndataset. The classification scheme is divided into nine categories, ranging\nfrom compact objects (spherical with little or no apparent envelopes) to\nobviously merging systems (with evidence of tidal streams and other merger\nattributes). Six objects in the catalogue were found to be unresolved and\nthese were removed from the sample. We give a description of these types\nand how they were selected below. We often refer to these galaxy types\nthroughout this paper, and Appendix B gives a description of these\ntypes in terms of our measured indices. Figures 2-10 show examples of\nthese galaxy types.\n\n\\begin{figure}\n\\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig4.eps}\n\\caption{Visual displays of the light (in $z_{850}$, left), stellar mass (middle) and the distribution of $M\/L$ ratios (right) for the compact elliptical (cE) galaxies. The $M\/L$ map is scaled such that white corresponds to higher $M\/L$ values, and black to lower $M\/L$ values. 
This convention is used in the following Figures ~\\ref{fig:massmap_pE} to ~\\ref{fig:massmap_M}.}\n\\label{fig:massmap_cE}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig5.eps}\n\\caption{Comparing the light (in $z_{850}$), stellar mass maps, and the \ndistribution of $M\/L$ ratios for examples of peculiar elliptical (pE) \ngalaxies. The scale and convention is the same as that used in \nFig. ~\\ref{fig:massmap_cE}.}\n\\label{fig:massmap_pE}\n\\end{figure}\n\n\n\\subsubsection{Compact Ellipticals}\n\nThe {\\em compact ellipticals}, denoted as ``cE'', were identified as compact \nobjects that varied smoothly across their radii when viewed morphologically. \nThese objects had little or no sign of the envelope usually \nassociated with early-type galaxies.\nThe compact Es are generally found at lower redshifts, at $z < 0.4$,\nwith a few between $0.6 < z < 0.8$. The distinguishing feature\nof these compact Es is their small size, with half-light radii\nof $< 1$ kpc.\n\n\n\\subsubsection{Ellipticals}\n\nThe elliptical galaxies are amongst the most popular and important\ngalaxy types in our study, as they are typically the most massive galaxies\nin the local universe. One reason for this is that elliptical\ngalaxies, and massive galaxies in general, are the test-bed\nfor understanding theories of galaxy formation, which often\nhave strong predictions for how the most massive galaxies\nshould form (e.g., Bertone \\& Conselice 2009). The ellipticals we see at high\nredshift are likely the progenitors of the massive\nellipticals found today.\n\nGalaxies with the familiar features of nearby {\\em elliptical galaxies} were \nclassed as such, and given the label ``E\". 28 E galaxies were identified. \nFurther objects at first glance have the appearance of early\ntypes, but on closer inspection, and by varying the image contrast, show some\npeculiar features, such as multiple nuclei or minor disturbances in the\ninternal structure.
These objects were classified as \n{\\em peculiar ellipticals}, and are labelled as pE (Conselice {et~al.$\,$} 2007).\n\n\n\\subsubsection{Spirals\/Disks}\n\nObjects with a disc-plus-bulge structure\nwere classified as {\\em early-type spirals} (``eS\") if the bulge appears\nlarger than the disc component, and {\\em late-type spirals} (``lS\") if the \ndisc is larger than the central bulge. Edge-on disc galaxies are denoted \nby ``EO\".\n\nThe spiral galaxies are amongst the most interesting for\nthis study, given that they often contain two major stellar\npopulation types segregated spatially. Traditionally \nthis is seen as an older stellar population making up\nthe bulge, with the spiral arms consisting of\nyounger stellar populations. \n\n\\subsubsection{Peculiars and Mergers}\n\nObjects that did not fit easily into any of the previously defined categories,\nbut whose segmentation map showed them to be singular, or unconnected to any\napparently nearby object in the image, were classified as {\\em peculiar},\n``Pec\". These are galaxies that could possibly have merged in the recent past.\n\nGalaxies whose morphology also ruled them out from any previously defined\nclass, but whose segmentation map showed them to be connected to, or associated\nwith, at least one other object on the image, but without obvious merger\nsignatures, were classified as {\\em possible mergers} (``pM\"). Galaxies \nmeeting the pM\nrequirements which also showed obvious signs of merging, such as tidal tails,\nwere classified as {\\em mergers} (``M\").\n\nThis information is summarised for quick reference in Table\n\\ref{tab_classfcn}. The sample was independently classified four times. \nThe mode of the results\nwas taken to be the true classification and, where an indecisive split was\nproduced, the lowest numerically valued type was chosen.
Figure~\\ref{fig:class} shows a histogram of the sample classifications.\n\nWe note that there is a significant number of peculiar galaxies in our \nsample (e.g., Figure~1), much higher than what has been found at lower \nredshifts in previous work investigating galaxy morphology (e.g., Conselice \net al. 2005). \nThese are systems that at lower resolution would in many cases be classified as disk-like \ngalaxies. They are not necessarily merging systems, but are \nlikely normal galaxies in some stage of formation, the modes of which are among\nthe features we investigate later in this paper.\n\nWe show examples of each of the above types in Figures 2-10, with\nfive examples shown for each. These figures show the image of the galaxy\nin the $z_{850}$-band, the stellar mass maps, and the mass-to-light ratio for each\ngalaxy.\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{|l|l|}\n\\hline\nType & Description \\\\\n\\hline\nE & Elliptical \\\\\ncE & Compact E \\\\\npE & Peculiar E \\\\\neS & Early-Type Spiral \\\\\nlS & Late-Type Spiral \\\\\nEO & Edge-on disc \\\\\nPec & Peculiar \\\\\npM & Possible merger \\\\\nM & Obvious merger \\\\\n\n\\hline\n\n\\end{tabular}\n\\caption{Descriptions of the classification scheme.}\n\\label{tab_classfcn}\n\\end{table}\n\n\n\\begin{figure}\n\\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig6.eps}\n\\caption{Comparing light, stellar mass maps and $M\/L$ for the elliptical (E) galaxies. The scale and convention is the same as that used in Fig. ~\\ref{fig:massmap_cE}.}\n\\label{fig:massmap_E}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig7.eps}\n\\caption{Comparing light, stellar mass maps and $M\/L$ for the early-type spiral galaxies (eS). The scale and convention is the same as that used in Fig.
~\ref{fig:massmap_cE}.}\n\label{fig:massmap_eS}\n\end{figure}\n\n\n\begin{figure}\n\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig8.eps}\n\caption{Comparing light, stellar mass maps and $M\/L$ for the late-type spiral galaxies (lS). The scale and convention is the same as that used in Fig. ~\ref{fig:massmap_cE}.}\n\label{fig:massmap_lS}\n\end{figure}\n\n\begin{figure}\n\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig9.eps}\n\caption{Comparing light, stellar mass maps and $M\/L$ for the edge-on disk (EO) galaxies. The scale and convention is the same as that used in Fig. ~\ref{fig:massmap_cE}.}\n\label{fig:massmap_EO}\n\end{figure}\n\n\n\begin{figure}\n\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig10.eps}\n\caption{Comparing light, stellar mass maps and $M\/L$ for the peculiar (Pec) galaxies. The scale and convention is the same as that used in Fig. ~\ref{fig:massmap_cE}.}\n\label{fig:massmap_Pec}\n\end{figure}\n\n\begin{figure}\n\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig11.eps}\n\caption{Comparing light, stellar mass maps and $M\/L$ for the possible-merging (pM) galaxies. The scale and convention is the same as that used in Fig. ~\ref{fig:massmap_cE}.}\n\label{fig:massmap_pM}\n\end{figure}\n\n\begin{figure}\n\includegraphics[angle=0, width=85mm, height=130mm]{lanyon.fig12.eps}\n\caption{Comparing light, stellar mass maps and $M\/L$ for the merging (M) galaxies. The scale and convention is the same as that used in Fig. ~\ref{fig:massmap_cE}.}\n\label{fig:massmap_M}\n\end{figure}\n\n\subsection{CAS Analysis}\label{sec:analysis}\n\nWe use the concentration, asymmetry, clumpiness ($CAS$) parameters to\nquantitatively measure the structures of our sample, in all available bands,\n$BViz$, and on the stellar mass maps.
We also measure the Gini and M$_{20}$ \nparameters, forming an extensive non-parametric method for measuring the\nstructures and morphologies of galaxies in resolved CCD images (e.g.,\nConselice et al. 2000a; Bershady et al. 2000; Conselice et al. 2002; Lotz\net al. 2004;\nConselice 2003; Lotz et al. 2008). The premise for using these parameters is\nto tap into the light distributions of galaxies, which reveal their past and\npresent formation modes (Conselice 2003). The regions into which the\ntraditional Hubble types fall in CAS parameter space are well understood\nfrom local galaxy comparisons. For\nexample, selecting objects with $A > 0.35$ finds systems that are\nhighly disturbed, and nearly all are major galaxy mergers (e.g., Conselice et\nal. 2000b; Conselice 2003; Hernandez-Toledo et al. 2005; Conselice 2006a).\nA more detailed\nanalysis of this problem is provided in Appendix A for optical light.\n\n\n\begin{figure*}\n\includegraphics[angle=0, width=148mm, height=138mm]{masscas.eps}\n\caption{The change in the asymmetries and concentrations measured on our\nstellar mass maps as a function of\nredshift due simply to distance effects. Shown at the top is the change in the asymmetry\nparameters for a mixture of nearby galaxies of various types: ellipticals, spirals and\nirregulars, while the bottom two panels are for the concentration index. The left hand\nside for both shows the change for the entire sample we simulate, with the red dots and\nerrorbars showing the average change and 1 $\sigma$ variation of that change. The right\nhand side shows these simulations divided up between ellipticals (black solid line), \nspiral galaxies (blue dotted line) and irregulars (red dashed line).}\n\label{fig:sim}\n\end{figure*}\n\n\n\begin{figure}\n\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig2.ps}\n\caption{The colour-magnitude relation for our sample of galaxies.\nDisplayed is the ($V_{606} - z_{850}$) colour vs. 
the magnitude of\nthe galaxy as observed in the $z_{850}$ band. Different galaxy\ntypes are displayed, including: compact ellipticals (cE), peculiar\nellipticals (pE), ellipticals (E), early-type spirals (eS), late-type\nspirals (lS), edge-on disk galaxies (EO), peculiar galaxies (Pec),\npossible-merger galaxies (pM) and merging systems (M). This key is used\n throughout the remainder of this paper.}\n\label{fig:CMD}\n\end{figure}\n\n\nWe measure the structural parameters for the GOODS sample using the method of\nConselice {et~al.$\,$} (2008), with slight adjustments made for the stellar mass \nmaps, to enable the code to handle the large values of stellar masses per \npixel, rather than flux. The radius of each individual galaxy within the \npostage stamp image\/mass map is measured on the stellar mass map, and we \ndefine all our indices within the Petrosian radii (e.g., Petrosian 1976; \nBershady {et~al.$\,$} 2000; Conselice 2003). The Petrosian radius has been \nfound to be a better, more reliable (and reproducible) radius than the\nisophotal radius (Petrosian 1976). \nThe limits and relationships to other radii are described in detail in\nterms of total light in a galaxy by Graham et al. (2005).\n\nCircular apertures are used for measuring our \nPetrosian radii and quantitative parameter estimation. \nThe Petrosian radius used to measure our parameters is defined by,\n\n$$R_{\rm Petr} = 1.5 \times r(\eta = 0.2),$$\n\n\noindent where $r(\eta = 0.2)$ is the radius where the surface brightness \n(or stellar mass per unit area) is\n20 percent of the surface brightness (or stellar mass per unit area)\nwithin that radius (Bershady et al. 2000). Note that this is a distance\nindependent measurement, given that surface brightness dimming affects\nboth measurements of surface brightness in the same way.
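The definition above can be sketched directly in code: step outward in circular apertures until the annular surface brightness (or stellar mass surface density) drops to $\eta = 0.2$ of the mean within that radius. This is a minimal sketch; the radial step and annulus width are assumptions, and the function name is ours.

```python
import numpy as np

def petrosian_radius(image, cx, cy, eta0=0.2, k=1.5, dr=1.0):
    """Sketch of R_Petr = k * r(eta = eta0): find the radius where the
    mean value in an annulus falls to eta0 times the mean value within
    that radius, then scale by k.  Circular apertures, as in the text."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - cx, yy - cy)
    for ri in np.arange(2 * dr, r.max(), dr):
        annulus = (r >= ri - dr) & (r < ri + dr)
        inside = r < ri
        eta = image[annulus].mean() / image[inside].mean()
        if eta <= eta0:
            return k * ri
    return None  # eta never dropped below eta0 inside the image
```

For a circular Gaussian profile of width $\sigma$, $r(\eta=0.2) \approx 2.3\sigma$, so the sketch returns roughly $3.5\sigma$.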
\n\nAccounting for background light and noise is extremely important when\nmeasuring structural parameters, especially for faint galaxies, and this \nmust also be dealt with for the stellar mass maps. The measured \nparameters in the $B_{435}$, $V_{606}$, $i_{775}$ and $z_{850}$ bands and \nstellar mass images are corrected as described\nin Conselice {et~al.$\,$} (2008), by considering a background area close to the\nobject, and the segmentation map of the object. Using a background close to\nthe galaxy itself, any problems introduced by objects imaged on a large mosaic,\nwith a non-uniform weight map, and the object itself being faint compared to\nthe background, are alleviated. We review below how the\n$CAS$ parameters are measured. These are described in more detail\nin Bershady et al. (2000), Conselice et al. (2000), Conselice (2003) and Lotz\net al. (2008).\n\n\n\n\n\subsubsection{Asymmetry}\n\nWe measure the asymmetry of a galaxy by taking the original galaxy image (or\nstellar mass map), rotating it by 180 degrees about its centre, and then \nsubtracting the two images (Conselice 1997), with corrections for background \nand radius (see Conselice {et~al.$\,$} 2000a for details). The centre of rotation \nis found using an iterative process, which locates the minimum asymmetry. \nA correction is made within the stellar mass maps for this parameter, \nas it was found that uncertainties in the mass-to-light ratio adversely \naffected the asymmetry measurement. We found that approximately 15 percent \nof the stellar mass in the asymmetry calculation is left\nover from random fluctuations in $M\/L$. This was compensated for by including\na correction for this uncertainty through a slight additional\nsubtraction to the asymmetry signal.
This was done by scaling down the \nasymmetric residuals of the rotated and subtracted image by\n15\%, that is, only 85\% of the residual is considered.\n\nWe measure the background\nlight in the same way as for the light measures of asymmetry, although because\nwe are dealing with a conversion to stellar mass, which is not trivially\ndone for the background, we have some background values which are lower\nthan zero. To deal with this, we set all the negative pixels in the\nstellar mass map background to zero. \n\nThe equation for calculating asymmetry is:\n\n\begin{equation}\nA = {\rm min} \left(\frac{\Sigma|I_{0}-I_{180}|}{\Sigma|I_{0}|}\right) - {\rm\n min} \left(\frac{\Sigma|B_{0}-B_{180}|}{\Sigma|I_{0}|}\right)\n\label{eq:asym}\n\end{equation}\n\n\noindent where $I_{0}$ denotes the original image pixels, $I_{180}$ is the\nimage after rotating by 180$^{\circ}\,$. The background subtraction is made using\nlight (or mass) from a blank sky area, $B_{0}$, and is minimised using the same process\nas for the object itself. The higher the\nvalues of $A$, the higher the degree of asymmetry the galaxy possesses, the\nmost extreme cases usually corresponding to merger candidates (Conselice 2003).\n\n\n\n\subsubsection{Concentration}\n\nThe concentration parameter measures the intensity of light (or stellar \nmass) contained within a pre-defined central region, compared to a larger \nregion towards the edge of the visible galaxy. Concentration is most often \ndefined as the ratio of the flux contained within circular radii possessing \n20 percent and 80 percent ($r_{20}$, $r_{80}$) of the total galaxy flux,\n\n\begin{equation}\nC = 5 \times \log_{10} \left(\frac{r_{80}}{r_{20}}\right).\n\end{equation}\n\n\noindent A higher value of $C$ corresponds to an object where more light is\ncontained within the central region.
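The two definitions above can be sketched compactly. This sketch omits the iterative centring minimisation used in the real measurements (it rotates about the array centre) and builds the curve of growth with simple circular apertures; function names are ours.

```python
import numpy as np

def asymmetry(image, background):
    """Eq. (1) without the centring minimisation: rotate by 180 degrees,
    subtract, normalise by the total flux, and subtract the same
    statistic measured on a blank-sky patch of equal area."""
    norm = np.abs(image).sum()
    a_gal = np.abs(image - np.rot90(image, 2)).sum() / norm
    a_bkg = np.abs(background - np.rot90(background, 2)).sum() / norm
    return a_gal - a_bkg

def concentration(image, cx, cy):
    """C = 5 log10(r80 / r20) from the curve of growth in circular apertures."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - cx, yy - cy).ravel()
    order = np.argsort(r)
    growth = np.cumsum(image.ravel()[order])
    r_sorted = r[order]
    r20 = r_sorted[np.searchsorted(growth, 0.2 * growth[-1])]
    r80 = r_sorted[np.searchsorted(growth, 0.8 * growth[-1])]
    return 5.0 * np.log10(r80 / r20)
```

A perfectly symmetric profile gives $A \approx 0$, and a circular Gaussian gives $C \approx 2.1$, well below the values typical of concentrated early types.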
This measurement has been shown to\ncorrelate with the halo, and total stellar masses, of nearby \ngalaxies (e.g. Bershady {et~al.$\,$} 2000; Conselice 2003).\n\n\begin{figure}\n\centering\n\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig3.ps}\n\caption{Colour in observed ($V_{606} - z_{850}$) vs. redshift ($z$) \nfor our sample. The points displayed are the same as in Fig. ~\ref{fig:CMD}.}\n\label{fig:colz}\n\end{figure}\n\n\subsubsection{Clumpiness}\n\nClumpiness ($S$) is related to asymmetry, in that it is used to measure the\namount of light (or mass) in a galaxy that exists in discrete, clumpy\ndistributions. In this definition, a smooth galaxy, such as an elliptical,\ncontains light at low spatial frequencies, whereas clumpy systems have\nmost of their light contained in high spatial frequencies. Star forming\ngalaxies tend to be clumpy in structure, with high $S$ values. We measure\nclumpiness via:\n\n\begin{equation} \nS = 10 \times \left[\left(\frac{\Sigma\n(I_{x,y}-I^{\sigma}_{x,y})}{\Sigma I_{x,y} }\right) - \left(\frac{\Sigma\n(B_{x,y}-B^{\sigma}_{x,y})}{\Sigma I_{x,y}}\right) \right],\n\end{equation}\n\n\noindent where the original image $I_{x,y}$ is blurred to produce a\nsecondary image, $I^{\sigma}_{x,y}$. The secondary image is subtracted from\nthe original image to create a residual map, showing only the high frequency\nstructures contained within the galaxy (Conselice 2003). The residuals are\nquantified by normalising, using the total light in the original galaxy image,\nand then subtracting the normalised residual sky. The smoothing kernel,\n$\sigma$, used is determined from the radius of the galaxy, and has the value\n$\sigma = 0.2 \cdot 1.5 \times r(\eta = 0.2)$ (Conselice 2003). The centres of\nthe galaxies (roughly the inner 1\/10th of the Petrosian radius) are removed \nduring this procedure.
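The blur-subtract-normalise procedure above can be sketched as follows. This is a rough sketch only: a box kernel stands in for the smoothing filter, only the positive residual is summed, and the kernel width and function names are our assumptions.

```python
import numpy as np

def box_blur(image, w):
    """Blur with a (2w+1) x (2w+1) box kernel (numpy only); a stand-in
    for the smoothing filter used in the text."""
    pad = np.pad(image, w, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    n0, n1 = image.shape
    for dy in range(2 * w + 1):
        for dx in range(2 * w + 1):
            out += pad[dy:dy + n0, dx:dx + n1]
    return out / (2 * w + 1) ** 2

def clumpiness(image, background, r_petr, cx, cy):
    """Sketch of S: smooth with a kernel of scale ~0.2 * R_Petr,
    subtract, keep the positive high-frequency residual, normalise by
    the total flux, correct with the blank-sky residual, and exclude
    the inner R_Petr/10 core, as described above."""
    w = max(1, int(round(0.2 * r_petr)))
    resid = image - box_blur(image, w)
    resid_bkg = background - box_blur(background, w)
    yy, xx = np.indices(image.shape)
    resid[np.hypot(xx - cx, yy - cy) < r_petr / 10.0] = 0.0
    pos = resid[resid > 0].sum()
    pos_bkg = resid_bkg[resid_bkg > 0].sum()
    return 10.0 * (pos - pos_bkg) / image.sum()
```

Adding compact clumps to an otherwise smooth profile raises $S$, as the definition intends.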
This parameter is furthermore the most difficult\nto measure and is best measured on galaxies imaged with a high S\/N ratio, and\nis only of limited use in this paper. Also, the clumpiness is highly\nsensitive to resolution or seeing, although the ACS imaging we use at these\nredshifts is within the range where values can be properly compared to \nnearby calibration data sets (Conselice 2003).\n\n\n\n\n\n\subsubsection{Gini}\n\nThe Gini coefficient ($G$) is defined through the Lorenz curve of \nthe light distribution and does\nnot depend on a predefined central position, thus distinguishing it from the\nconcentration parameter. If $G$ is zero, the galaxy has a uniform surface\nbrightness, whereas if a single pixel contains all the flux, $G$ is unity. To\ncalculate $G$ efficiently, the pixel flux (or stellar mass) values, \n$f_{i}$, are\nsorted into increasing order and the following formula is used:\n\n\begin{equation}\nG = \frac{1}{\vert \bar{f} \vert n\left(n-1\right)} \sum_{i}^n \left(2i - n -\n1\right) \vert f_{i} \vert\n\label{eq:gini}\n\end{equation}\n\n\noindent where $n$ is the total number of pixels in the galaxy (Lotz {et~al.$\,$}\n2004, 2008).\n\n\subsubsection{M$_{20}$}\n\nM$_{20}$ is anti-correlated with concentration such that galaxies with high\nconcentration have low M$_{20}$ values. M$_{20}$ is a direct tracer of the\nbrightest regions in a galaxy, but does not require a pre-determined central\ncoordinate; rather, the centre is calculated to be the location that minimises\nM$_{20}$. M$_{20}$ is more sensitive to merger signatures than concentration\nand is normalised by the total moment of the galaxy.
It is defined as\n\n\begin{equation}\nM_{20} \equiv \log_{10} \left(\frac{\sum_{i} M_{i}}{M_{tot}}\right), {\rm while} \sum_{i} f_{i} < 0.2f_{tot}\n\label{Eq:M20}\n\end{equation}\n\n\begin{equation}\nM_{tot} = \sum_{i}^n M_{i} = \sum_{i}^n f_{i}[(x_{i} - x_{c})^2 + (y_{i} - y_{c})^2]\n\label{Eq:Mtot}\n\end{equation}\n\n\noindent where ($x_{c}$, $y_{c}$) is the galaxy centre (Lotz {et~al.$\,$} 2004).\n\n\n\n\subsection{Simulations of CAS parameters}\n\nOne issue that we must address within this paper is how well we are measuring structure on\nthe stellar mass\nmaps for our sample of galaxies, and importantly how the measurement of stellar mass structure\nwould change with redshift due only to cosmological effects. This is critical if we are\nto make any inferences of the evolution of galaxies in terms of their stellar mass distribution \nover time. This issue\nhas been addressed before for visual images of galaxies in Conselice (2003) and in Beauvais \n\& Bothun (1999) for velocity fields in spiral galaxies. Here we address it for the stellar\nmass maps.\n\nWe simulate how our stellar mass map creation process and CAS measurements would vary as a function of\nredshift due to a decrease in the signal to noise and resolution. We carry out this process\nin the same way we have for the actual galaxies in the GOODS fields that we analyse in\nthis paper. We take a sample of 82\nnearby galaxies of various types - ellipticals, spirals, and irregulars - and convert their\nflux values to stellar mass. We also go through the entire process we do on GOODS, including\nsetting equal to zero the background values with negative stellar masses. \n\nThis stellar mass conversion is done after the galaxy is redshifted, so that we are mimicking as\nmuch as possible how the real observations and measurements are done.
The simulation itself\nis done by simulating each image at some redshift z$_1$ to how it would appear at z$_2$, with z$_2 >$ z$_1$.\nIn our case z$_{1} \sim 0$. When carrying out these simulations of placing lower redshift galaxies\nat high redshifts we calculate first the rebinning factor, b, which is the reduction in apparent size of \na galaxy's image when viewed at higher redshift. The other major factor is the relative amounts of \nflux from the sky and galaxy, and the noise produced from the galaxy, sky, dark current, and imaging instrument\n(e.g., read noise) from ACS. The process and details for how these simulations are done can be found\nin Conselice (2003). \n \nIn summary, the surface brightness of the simulated image must be reduced such that the equation,\n\n\begin{equation}\n4\pi \alpha_{z_1} N_{z_1} p_{z_1} (1+z_{1}) = 4\pi \alpha_{z_2} N_{z_2} p_{z_2} \n(1+z_{2}) \frac{\Delta \lambda_{z_1}}{\Delta \lambda_{z_2}},\n\end{equation}\n\n\noindent holds, whereby the galaxy is observed in one filter and simulated in\nanother, with central rest-frame\nwavelengths of $\lambda_{z_1}$ and $\lambda_{z_2}$ and widths \n$\Delta \lambda_{z_1}$ and $\Delta \lambda_{z_2}$. In the above equation\n$N_{z}$ is the total number of pixels within the galaxy at $z$ and p$_{z}$ is the average ADU counts \nper pixel. The calibration constant\n$\alpha_{z}$ is in units of erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ ADU$^{-1}$.\n\nA sky background, and noise from this background, is added to these images as \n$B_{z} \times t_{z}$, where $B_{z}$ is the background flux in units of ADU s$^{-1}$. For ACS simulations we take these values from the measured background\nbased on GOODS imaging (Giavalisco et al. 2004) checked to be consistent with\nthe values from the ACS handbook. Other noise effects are then added, including \nread-noise scaled for the number of read-outs, dark current and photon noise from the background.
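The flux-conservation relation above can be rearranged directly for the simulated mean counts per pixel at $z_2$. The function and argument names are ours; they map one-to-one onto $\alpha_z$, $N_z$, $p_z$ and the filter widths in the equation.

```python
def simulated_counts(p_z1, alpha_z1, alpha_z2, n_z1, n_z2, z1, z2,
                     dlam_z1, dlam_z2):
    """Mean ADU counts per pixel p_{z2} from rearranging
    alpha_1 N_1 p_1 (1+z1) = alpha_2 N_2 p_2 (1+z2) dlam_1/dlam_2
    (the common 4*pi factor cancels)."""
    return (alpha_z1 * n_z1 * p_z1 * (1.0 + z1) * dlam_z2) / \
           (alpha_z2 * n_z2 * (1.0 + z2) * dlam_z1)
```

Substituting the result back into the left and right sides of the relation reproduces equality, and the identity case ($z_1 = z_2$ with matching filters and calibrations) returns the input counts unchanged.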
\nOur resulting images are then smoothed by the ACS PSF as generated by Tiny-Tim (Kriss et al. 2001),\nalthough using PSFs measured from stars gives the same results. \n\nFrom these simulated images at redshifts z = 0.5, 1 and 2, we then reconstruct the stellar mass\nimages in the same way as we do for our original galaxies, and then measure the CAS parameters on\nthese stellar mass images. The results of this are shown in Figure~\ref{fig:sim} for the concentration\nand asymmetry parameters. The M$_{20}$ and Gini parameters have similar behaviours,\nas do the galaxy half-light radii.\nWe find that the clumpiness indices have a similar average difference, but with a much larger\nscatter. \n \nWhat we find overall is that the trend with redshift is such that the concentration and asymmetry \nparameters on average do not change significantly when measured in stellar mass (Figure ~\ref{fig:sim}). \nWe show in the right panel of Figure ~\ref{fig:sim} the change in C and A when divided into\nearly\/late\/peculiar types. Again, on average there is not a significant change with redshift,\nalthough individual galaxies clearly can have significant differences at higher redshift. \n\nWe also investigate how our parameters change at each redshift between the stellar mass\nand the visual images after the simulation. We find very little difference except at the highest redshifts\nof our simulations, at $z = 2$, where the average change in the asymmetry parameter\nis $\delta A = (A_{\rm opt} - A_{\rm mass}) \sim -0.1$, such that the stellar mass image\nis more asymmetric than the optical light one. \nWe also investigate the same quantities that we use in later figures,\nfinding that twice the difference in the asymmetries of the optical and stellar mass\nimage, divided by the sum of these, grows steadily in magnitude until it reaches a value of\n$-1.1$ at $z = 2$, similar to what we see later in this paper for the actual values.
However\nat $z = 1$ we find that this change is less and closer to $-0.1$. \n\n\n\n\n\section{Analysis}\label{sec:results}\n\nThe following sections describe the analysis of our sample, \nand what we can learn from examining their\nstructures in stellar mass maps, and how this evolves\nover time. We first examine the\ndistribution of visual classifications for our sample and\ntheir global colours. We later describe the stellar mass maps we\nconstruct for these galaxies, and then finally present a \nstructural analysis of these systems based on the distribution\nof their stellar mass.\n\n\n\subsection{Global Properties of the Sample}\n\nIn Figure ~\ref{fig:CMD} we show the observed colour-magnitude diagram for our\nsample. Each morphological class is plotted as a different symbol; solid red \ncircles represent the cE's, open red circles show the pE galaxies and the \nE galaxies are represented by red stars. Solid blue squares show the eS \nand open blue squares the lS systems. Edge-on disk galaxies are represented by black \ndiagonal crosses and solid green triangles show the Pec systems. \npM and M galaxies are represented by open green triangles and open \ngreen stars respectively. This key is used throughout the rest of the \npaper in the various figures. \n\n\nBased on this, we find that our classified early-type galaxies are on average \nredder and brighter than the peculiars and pM\/M systems, while the spirals \ntend to be bluer across a range of brightnesses, but the differences are not \nso clear as would be expected for a sample at low-$z$ only.\n This colour-magnitude diagram does not show such an obvious morphological \nsequence as is seen for purely Hubble type galaxies in the nearby universe\n(e.g., Conselice 2006b). \n\nThe relation between $(V_{606} - z_{850})$ colour and redshift is shown in\nFigure ~\ref{fig:colz}. The symbols are the same as those in Fig.\n~\ref{fig:CMD}.
As can be seen, there is a general trend for all types to \nbecome redder with increasing redshift, which is an expected consequence of redshifting. \nThe early types are reddest across the whole redshift range, followed by the \neS and lS galaxies, with the Pec\/pM\/M systems the bluest across all \nredshifts. The compact ellipticals are blue and \nfaint. Note however that some pM and eS galaxies are quite red. \nWhat this reveals is that the relation between the colour and morphologies \nof these galaxies is decoupled, unlike in nearby galaxies where\nthis correlation is strong. A specific morphological type as measured\nvisually at high-z cannot be used to predict the colour of the galaxy.\n\n\subsection{Stellar Mass and Mass to Light Ratio Maps}\n\nBy calculating stellar mass to light ratios and the stellar mass within\neach pixel of each galaxy image in our sample, we reconstruct the image of\neach galaxy in terms of stellar mass and mass-to-light\nratio. We do this for each galaxy, using the method described in LCM07, \nand present five representative examples for each classification in Figures \n~\ref{fig:massmap_cE} to ~\ref{fig:massmap_M}. These figures present galaxy\nimages in $z_{850}$ (left), stellar mass (centre) and $(M\/L)_B$ (right), which\nwe discuss below. The images are grey-scaled such that the darkest pixels\nrepresent those brightest in $z_{850}$, and those that are most\nmassive. In the mass-to-light image, the whitest pixels are those with the\nhighest ($M\/L$), or redder in colour. We explain below how these maps\nappear for our various types.\n\n\subsubsection{Compact Ellipticals}\n\nFigure ~\ref{fig:massmap_cE} shows five examples of the compact elliptical\n(cE) population in the sample. These galaxies appear similar in their \nstellar mass maps to their $z_{850}$ images at first\nglance. The \nmass-to-light maps are not as uniform, and there are important quantitative\ndifferences between the stellar mass and light images.
\nFor example, the galaxy 18246 appears to have a relatively \nhigh ($M\/L$) in its core compared to its outer parts, whilst galaxy\n25035 appears to have a low ($M\/L$) in its centre, suggesting that it has a \nbluer core, \nlikely due to star formation. Galaxy 36503 contains a high ($M\/L$) \nring surrounding a blue core.\n\n\subsubsection{Peculiar Ellipticals}\n\nFive examples of the peculiar elliptical (pE) galaxies are displayed in\nFig. ~\ref{fig:massmap_pE}. These galaxies appear more diffuse in stellar \nmass than in $z_{850}$, in part due to the loss in contrast in the stellar \nmass maps, but also due to their blue inner colours \n(Fig. ~\ref{fig:massmap_pE}). As can be seen in the ($M\/L$) ratio maps of \nthese galaxies, they have blue cores, and the effect is most pronounced in \n22932 and 27429. These galaxies also have blue overall \ncolours compared to pure E galaxies (Fig. ~\ref{fig:colz}). These peculiar \nellipticals have been seen and studied before in papers such as Conselice \net al. (2007). They generally have a high asymmetry, and are found amongst \nthe most massive galaxies in the universe at $z \sim 1$. As can be seen in \nthe M\/L maps for these systems (Fig. ~\ref{fig:massmap_pE}), these galaxies \nalso show a diversity in how young and old stellar populations, as well as\ndust, are distributed compared to normal elliptical galaxies.\n\n\subsubsection{Ellipticals}\n\nFigure ~\ref{fig:massmap_E} presents five examples of early-types from our\nsample. These galaxies appear more diffuse and more distributed spatially\nin their stellar \nmass than in the $z_{850}$ band, especially at large radii. These galaxies \nhave larger ($M\/L$) ratios in their centres, except for 36419, which is blue, \nand has a more complicated structure in ($M\/L$) than the others.
Overall, \nthe E galaxies morphologically have very similar structures in stellar mass \nand $z_{850}$.\n\n\subsubsection{Spirals}\n\nFigures ~\ref{fig:massmap_eS} and ~\ref{fig:massmap_lS} show the $z_{850}$,\nstellar mass and ($M\/L$) maps for the early-type and late-type spirals. \nAs found in LCM07, structures within discs, including prominent spiral \narms, are often (but not always) smoothed out in stellar mass. This \nis true for early types as well as late-type spirals, as can be seen for\nexample in\ngalaxy 25465 (Fig. ~\ref{fig:massmap_eS}). Inspection of the ($M\/L$) maps \nreveals that the discs of many spirals are bluer than their bulges, as expected.\n\nThe mass-to-light ratio maps of the edge-on galaxies \n(Fig. ~\ref{fig:massmap_EO}) are mostly homogeneous in structure. However, \nthe maps of 19280, 39312 and 49722 all show some patches of low $M\/L$. \nIndeed, 19280 appears to be blue across the whole image, which would \nimply that the dust content of this galaxy is low, which is unusual for \nedge-on galaxies.\n\n\subsubsection{Peculiars \& Mergers}\n\nFigure ~\ref{fig:massmap_Pec} shows the F814W images, stellar mass maps and M\/L \nratio maps for the Peculiar galaxies within our sample. The Peculiar \ngalaxies 22690 and 25584 (Fig. ~\ref{fig:massmap_Pec}) appear to contain\n objects near the primary galaxy, but in both cases it is the larger galaxy on the\nright that is the target. Although 22690 looks disturbed in $z_{850}$, \nthe effect of smoothing in the stellar mass map is also seen here. \nThis indicates that disturbed regions of the galaxy are due to star \nforming regions. The $z_{850}$ image of\n22690 does not have the regular morphology of a nearby spiral galaxy, although\nthe stellar mass map has a structure one would expect of such a \ngalaxy, having a central bulge surrounded by a smooth disc.
The ($M\/L$) \nmap also shows a red central region surrounded by bluer pixels.\n\nThe mass-to-light maps of the Peculiars generally show that they are blue and\nalso difficult to see in stellar mass, due to their low mass to light ratio. \nThis suggests that star formation\nitself might be difficult to trace in stellar mass. However, the stellar mass \nin the pM galaxy\n18917 (Fig. ~\ref{fig:massmap_pM}) traces the light in the galaxy, despite\nhaving a low ($M\/L$). Note also that the peculiars are often peculiar spirals,\nin the sense that they look like nearly normal spirals, but with some\npeculiar features. Often these peculiars are quite small as well, and many of \nthem are very likely spirals in some type of formation.\n\n\nThe merging galaxy system 21839 (Fig. ~\ref{fig:massmap_M}) is similar in \nstellar mass to its $z_{850}$ image, but possesses a more intricate structure \nin ($M\/L$). The mass-to-light map shows the smaller merging object to be blue,\nwhereas the brighter galaxy appears redder but with blue regions in its core.\n \nThe system 29800 (Fig. ~\ref{fig:massmap_M}), whilst clearly appearing to\nbe a\nmerger between two galaxies of approximately equal brightness in $z_{850}$,\nwould likely be classified as a disc galaxy in stellar mass. The lower of \nthe two bright nuclei has disappeared completely in the stellar mass image, \nalthough it is visible as a low ($M\/L$) patch in the mass-to-light map.\n\n\n\subsection{Comparison Between Galaxy Properties in Mass and Light}\n\nIn this section we compare the distribution of stellar mass in a galaxy\nto the distribution of light as seen in the $z_{850}$ ACS imaging.\nOne of our goals is to determine how appropriate studies in $z_{850}$ and \nsimilar red bands are for measuring the mass content and structure of a galaxy, \nand how stellar mass quantitative morphologies differ from those measured in light.
\n\nIn this section we investigate the relationship between galaxy properties \nin stellar mass and $z_{850}$-band light. For galaxy size (as \nmeasured by the half-light radius) and the $CAS$ parameters we \nplot the normalised difference between $z_{850}$ and stellar \nmass versus the mean value of $z_{850}$ and stellar mass as a representative \nfigure to determine how these various quantities change.\n\n\n\subsubsection{Galaxy Half-light and Half-mass Radii}\n\nOne of the basic features of a galaxy is its size, by which in this paper we\nquantitatively mean the half-light radius. Galaxy half-light radii are \nfound to strongly evolve with time, such that galaxies at higher redshifts have\na more compact structure (e.g., Ferguson et al. 2004; Trujillo et al. 2007; \nBuitrago et al. 2008; Carrasco et al. 2010).\nHowever, every size measurement has been carried out through measurements\nof light, and it is desirable to determine how the half-mass radii of galaxies measured\nin stellar mass maps compare with half-light radii measured using light.\n\nAs such, we have calculated half-light (or half-mass) radii ($Re$), in kpc, \nfor our sample in both stellar mass and $z_{850}$ light, and investigate whether \nhalf-light\/mass radii are comparable. \nWe define the normalised size difference as \n\n$$2[R_{\rm e}(z_{850})-R_{\rm e}(M_{*})]\/[R_{\rm e}(z_{850})+R_{\rm e}(M_{*})],$$ \n\n\noindent and the average size as \n\n$$\frac{1}{2}[R_{\rm e}(z_{850})+R_{\rm e}(M_{*})].$$ \n\n\noindent We use these two calculations so as to avoid\nbiasing the analysis when the values become very\nsmall in either the stellar mass image, or in the $z_{850}$ band.\nWe plot the relation between these values in \nFigure ~\ref{fig:size_cf}. The horizontal dashed line marks the position\nof equal size ($Re(z_{850}) = Re(M_{*})$).
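The two statistics defined above are simple to compute; this sketch (function names are ours) applies equally to sizes and to the $CAS$ parameters compared in later sections.

```python
def normalised_difference(x_light, x_mass):
    """The comparison statistic used here: the z850-minus-mass
    difference normalised by the mean, which stays bounded and symmetric
    even when either value becomes very small."""
    return 2.0 * (x_light - x_mass) / (x_light + x_mass)

def mean_value(x_light, x_mass):
    """The average of the z850 and stellar mass measurements."""
    return 0.5 * (x_light + x_mass)
```

Equal measurements give a normalised difference of zero (the dashed line of equality), and the statistic is bounded between $-2$ and $+2$ for positive quantities.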
The symbols are the same as in \nFigure~\ref{fig:CMD}, and the average dispersion for the sample is\nrepresented by the black square points on the right of the plot. The larger\nblack square points in the figure show the average values and the\nmeasured dispersion for\nthe ranges of $\frac{1}{2}\left( Re(z_{850}) + Re(M_{*}) \right)$ between \nzero and one kpc, one and two kpc, up to six kpc. This convention is \nused for each of the parameter comparisons in \nSections ~\ref{sec:conc_cf} to ~\ref{sec:M20_cf}.\n\nWe note that there is a steady progression of increasing average galaxy size\nwith morphology, from the compact ellipticals, early types\nand early spirals to the late-type spirals, with the peculiars and edge-on\ngalaxies being more randomly distributed. The points are scattered about the\nline of equality such that nearly half (42 percent) of the galaxies have \n$Re(z_{850}) > Re(M_{*})$. There is, therefore, a slight tendency for \nsizes in stellar mass to be higher, but this does not vary significantly with \naverage galaxy size and type (see Table~2).\n\nThere is no clear tendency for any particular morphological type to be larger\nin either the $z_{850}$ or stellar mass image, which suggests that there \nis no bias introduced \nby using measurements in $z_{850}$ band data for size measurements \n(Trujillo {et~al.$\,$} 2007; Buitrago et al. 2008). Table ~\ref{tab:size_cf} \nshows that the average difference between the sizes in the \nstellar mass maps and the $z_{850}$-band image is essentially zero, \ndemonstrating that the measurements of half-radii in light do not differ \nsignificantly from the measurements in the distribution of stellar mass.\n\nWe also examine the normalised difference in size\nagainst ($V_{606}$ - $z_{850}$) colour to test whether there is a tendency for\nbluer galaxies to have larger radii in stellar mass than in light.
This\nis what we might expect to find if star formation dominated the light near the\ncentre of the galaxy. While, on average, colour does not change with size\ndifference, we find that extreme half-radii differences between stellar\nmass and $z_{850}$ are mostly in blue systems.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig13.ps}\n\\caption{Normalised effective-radius difference vs. mean \neffective radius of the sample in $z_{850}$\n and stellar mass. The error bar to the right of the plot shows the average\n dispersion for the whole sample. The large black squares denote the average\n values in equally spaced bins of mean size, with error bars showing the\ndispersions of these values. This convention\n is used in all of the parameter comparison plots. The symbols plotted\nhere are the same as in Figure~12.}\n\\label{fig:size_cf}\n\\end{figure}\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{c r r}\n\\hline \\hline\n\\\\\nType & Mean Size & Normalised Size Difference \\\\\n & (kpc) & \\\\ \n\\\\\n\\hline\ncE & $0.6 \\pm(0.3)$ & $0.14 \\pm(0.24)$ \\\\\npE & $1.8 \\pm(0.2)$ & $-0.11 \\pm(0.11)$ \\\\\nE & $1.8 \\pm(0.3)$ & $0.05 \\pm(0.11)$ \\\\\neS & $2.6 \\pm(0.4)$ & $0.00 \\pm(0.12)$ \\\\\nlS & $3.7 \\pm(0.6)$ & $-0.10 \\pm(0.12)$ \\\\\nEO & $2.7 \\pm(0.5)$ & $0.02 \\pm(0.13)$ \\\\\nPec & $2.6 \\pm(0.5)$ & $-0.09 \\pm(0.13)$ \\\\\npM & $2.7 \\pm(0.6)$ & $-0.07 \\pm(0.13)$ \\\\\nM & $3.0 \\pm(0.6)$ & $-0.08 \\pm(0.03)$ \\\\ \n\\hline\n\\end{tabular}\n\\caption{Comparisons between mean $z_{850}$-stellar mass half-mass\nradius ($R_{\\rm e}$) and normalised size difference in $z_{850}$ and stellar \nmass, organised by morphological type.} \n\\label{tab:size_cf}\n\\end{table}\n\n\n\\subsubsection{Concentration}\\label{sec:conc_cf}\n\nAnalogous to the galaxy size comparison in the previous section, we compare\nthe concentration parameter in $z_{850}$ and stellar mass images, as plotted \nin 
Figure~\ref{fig:conc_cf}. The\ndashed line shows $C(z_{850}) = C(M_{*})$ and the black square points\nrepresent the average values in equally spaced bins of average concentration.\n\nThe average concentration, in both $z_{850}$ and stellar mass, is generally \nhigh for the early types, and lower in the late-types and peculiars, \nwith the early-type spirals being spread between low and high values. \nThe compact elliptical galaxies have a low average concentration, due \nto the light being spread evenly across a galaxy's pixels.\n\nThe average trend shows that the ratio $\frac{R_{80}}{R_{20}}$ is slightly\nlower in the stellar mass maps for the early types compared to the $z_{850}$\nband concentration. We find that in general R$_{80}$ is lower\nand R$_{20}$ is higher, compared to those in the $z_{850}$-band\nimage; the effect is mostly due to the smaller R$_{80}$, and is quite small. \nIt is possible that \nthe stellar mass is less centrally concentrated \nin the early-types (raising\nR$_{20}$), and\/or the stellar mass is more diffuse at larger radii than\nthe distribution of light (lowering\nR$_{80}$). It is also the case that for many galaxies\nmore star formation is seen in the outer regions \nin the $z_{850}$-band; these pixels raise the value of R$_{80}$, and hence $C$, \nin the light, but are less dominant in stellar mass, consistent with the \nlower $C(M_{*})$ values seen.\n\n\n\n\begin{figure}\n\centering\n\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig15.ps}\n\caption{Comparison between the concentration parameter in the $z_{850}$\nband and within\n stellar mass maps. The error bar to the right of the plot shows the average\n dispersion for the whole sample. The large black squares denote the average\n values in equally spaced bins of mean concentration, plus dispersions. 
The\nsymbols used here are the same as in Figure~12.}\n\label{fig:conc_cf}\n\end{figure}\n\n\n\subsubsection{Asymmetry}\label{sec:asym_cf}\n\n\nBefore we discuss the comparison of our stellar mass map asymmetry values\nto the $z_{850}$-band asymmetries, we note a few things. \nFirst, the lowest asymmetry values for the sample are negative \nin all bands, mainly due to the background correction, which for very\nsymmetric galaxies will sometimes be larger than the asymmetry in the galaxy\nitself. This has the effect, \nas seen in Figure ~\ref{fig:asym_cf}, of skewing the average asymmetry \ndifferences for the smallest asymmetries measured in stellar mass. \nAs can be seen in Figure~16, except for these galaxies with low asymmetry \nvalues, there is a clear tendency for galaxies to be more asymmetric in \nstellar mass than in $z_{850}$, this being the case for a large\nfraction of the non-early type sample. The stellar mass maps also have a \ngreater spread in asymmetry values than $z_{850}$, and this can be more \nclearly seen in \S ~\ref{sec:AC}.\n\n However, we also note that the blue regions of galaxies are \nnot so well traced by the stellar mass\nmaps, which often leads to difficulties with the image contrast, making the galaxy\nfeatures harder to see, as in the case of the late-spiral 19535\n(Fig. ~\ref{fig:massmap_lS}). Further galaxies, such as 38722 (Figure~6), \nare clearly more asymmetric and lopsided when viewed in the stellar mass \nmaps. In this case, it appears that one spiral arm remains while the other \ndisappears.\n\nThere are several reasons why the asymmetry value in the \nstellar mass maps is higher than in the images. First, \nwhen calculating $A$, noise tends to be magnified due to the process of\nsubtracting images. The simulations discussed in \S 3.4 show that\nthere is a tendency for galaxies on average to become more asymmetric\nin stellar mass measurements than in light. 
This can explain part of\nthe difference, but we find that the average relative difference of around $-1$\nis too high to be accounted for solely by these\nredshift effects. We discuss some of the reasons why the\nasymmetries will be higher in the stellar mass maps than in light.\n\nWhen calculating stellar masses for each pixel we assume\nthat the $M\/L$ ratio is approximately constant in the surrounding \npixels. This should be the case especially for early type galaxies,\ndue to their uniform stellar populations. To test how much this\nvariance in $M\/L$ could be affecting the $A$ values, we have measured\nthe $M\/L$ pixel variance in a typical early type galaxy (30976,\nFig. ~\ref{fig:massmap_E}). The mean $M\/L$ for a pixel in this galaxy is\n$(M\/L)_{B} = 1.29$ with a typical standard deviation in the surrounding pixels\nof $\sigma_{M\/L} = 0.27$. Although a correction has been applied to minimise\nthis effect (see \S 3.3.1), it is not enough to account for\nthe high asymmetry signal in the spiral galaxy stellar mass maps, and thus \nthis asymmetry is likely either a real effect or due to the nature of the \ncalculation of this parameter, and especially the background, as described \nin \S 3.3.1.\n\n\begin{figure}\n\centering\n\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig16.ps}\n\caption{Comparison between the asymmetry parameter in $z_{850}$ and stellar mass. The \npoints and conventions are the same as in Figure~12. Note that the asymmetry\nin the stellar mass is lower than the $z$-band for only the early types. This\nis due to the method of measuring the asymmetry, where the sky background\nis handled in a different way than for imaging, resulting in higher values\nfor later type galaxies (\S 4.3.3). 
}\n\label{fig:asym_cf}\n\end{figure}\n\n\nTo further understand why late-type spiral galaxies are more asymmetric \nin stellar mass than $z_{850}$, we have examined these galaxies in detail \nand present four examples of the late-type sample in both \n$z_{850}$ (Fig. ~\ref{fig:spirals_z}), and stellar mass maps. \nTwo of these galaxies have low asymmetries\nin both stellar mass and $z_{850}$ (25751 and 21448); galaxy 21448 in \nparticular demonstrates the smoothing of spiral arms, which we observed \npreviously in nearby galaxies (LCM08). However, two of these galaxies have \nhigher asymmetries in stellar mass than in $z_{850}$ (19535 and 34946). \nObject 19535, especially, shows little relation in stellar mass to the \n$z_{850}$ image. The star forming knots and central bulge, which can be \nseen clearly in $z_{850}$, translate to large scale asymmetries in stellar \nmass. There are several possible explanations for this effect. The star \nforming regions in 19535 could truly be more massive per pixel than the \nsurrounding disk, due to the high masses and densities of gas and dust \nrequired to form such massive star forming regions. This would lead to \nsuch regions not being smoothed out in stellar mass, causing higher $A(M_{*})$\nvalues. It is also possible that what we have interpreted as star forming\nknots in the disk in some cases could be minor merger signatures, that is, \nlow-mass galaxies falling into the object itself.\n\n\nThere is also a trend of asymmetry with morphological type, \nsuch that early types have low asymmetry and late-types higher asymmetry. \nOur findings confirm previous studies that found similar trends \n(e.g. Conselice {et~al.$\,$}\, 2000). We have also investigated the relation \nbetween asymmetry in stellar mass and $(V_{606}-z_{850})$\ncolour, and examine this after splitting into two\nredshift bands: $z < 0.6$ and $0.6 \leq z < 1$. 
There is a clear trend for\ngalaxies with a higher degree of asymmetry to be blue, with\n$\\langle(V_{606}-z_{850})\\vert_{A(M_{*})<0.35}\\rangle = 1.01(\\pm0.20)$ and\n$\\langle(V_{606}-z_{850})\\vert_{A(M_{*})\\geq0.35}\\rangle = 1.22(\\pm0.27)$. \nThis trend has also been \nfound for asymmetries measured in light (Conselice {et~al.$\\,$} 2003). \n\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[angle=0, width=188mm, height=95mm]{newfig17.eps}\n\\caption{Figure showing the $z_{850}$-band imaging (left) and\nstellar mass maps (right) for four late-type spiral \ngalaxies. Each galaxy is \nlabelled with its ID number and $A(z_{850})$ value. Each image is \n4.5 arcseconds on each side. \nShown at the top of each image is the ID and the asymmetry parameter\nmeasured on the stellar mass maps to be compared with the same\nvalue computed in the $z_{850}$ band.}\n\\label{fig:spirals_z}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig21.ps}\n\\caption{Comparison between the Gini parameter in $z_{850}$ and stellar mass. \nThe points and conventions are the same as in Figure~12.}\n\\label{fig:gini_cf}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig22.ps}\n\\caption{Comparison between the M$_{20}$ parameter in $z_{850}$ and stellar mass. The points and conventions are the same as in Figure~12. Because of the inverse nature of M$_{20}$, galaxies with positive differences on\nthe y-axis are those with more `concentrated' light profiles than in the\nstellar mass maps.}\n\\label{fig:m20_cf}\n\\end{figure}\n\n\n\n\\subsubsection{Clumpiness}\n\nThe clumpiness values in stellar mass maps have a higher degree of \nuncertainty, as discussed in Section 3.3.3.\nOverall, we find a\ntrend for clumpiness to be greater in stellar mass than $z_{850}$, and this is the \ncase for 91 percent of the sample. 
However, for larger values of average $S$, \nthere is a trend such that $S(M_{*}) \rightarrow S(z_{850})$. \n\n\begin{figure*}\n\centering\n\includegraphics[angle=0, width=148mm, height=148mm]{lanyon.fig23.eps}\n\caption{Asymmetry vs. concentration in the $B_{435}$, $V_{606}$, $i_{775}$\n and $z_{850}$ bands. Solid lines show the classification\ncriteria of Conselice (2003). \nThe point\n in the bottom left segment of the plot shows the average error bar for the\n whole sample. }\n\label{fig:AC_bviz}\n\end{figure*}\n\n\n\begin{figure}\n\centering\n\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig24.ps}\n\caption{The asymmetry-concentration plane for the stellar mass maps. The \nplotting convention is the same as for Figure~12.}\n\label{fig:AC_mass}\n\end{figure}\n\n\nThe clumpiness parameter is similar to asymmetry, but picks out small \nscale features, such as compact star clusters, as opposed to large scale \nasymmetries. It is, therefore, not surprising that uncertainties in $M\/L$ \ncause a large number of the sample to have $S(M_{*}) > S(z_{850})$, as \nvariations in $M\/L$ from pixel to pixel filter through in the \ncalculation of $S(M_{*})$. Such variations affect $A(M_{*})$ and $S(M_{*})$ \nmore than \nthe other parameters, as these calculations involve the subtraction of \nimages, which amplifies the $M\/L$ variations and large\nvariations in the background level.\n\n\n\n\n\subsubsection{Gini Index}\n\nFigure ~\ref{fig:gini_cf} shows the comparison between the values of the Gini\nparameter in stellar mass and the $z_{850}$ band. \nAlthough the early-types are more strongly clustered around equality, \n$G(M_{*}) = G(z_{850})$, there is a clear trend for the sample to have \nhigher $G$ in stellar mass, with $G(M_{*}) > G(z_{850})$ in non-early types\nwithin our sample. This comparison of $G(z_{850})$ and $G(M_{*})$ \ndisplays a greater separation between types. 
The late-type spirals \nshow an approximately even \nspread in Gini values, from $\sim$0.45 to $\sim$0.93, but all of these have a higher \ndifference between $z_{850}$ and stellar mass than the early-types\nand compact ellipticals. \n\nFigure ~\ref{fig:gini_cf} reveals that the Gini index is higher within the stellar \nmass maps than in \nthe $z_{850}$ band, except for early-type galaxies, indicating \nthat most of the \nstellar mass in later morphological types is contained within fewer pixels\nwithin the stellar mass image. \nThis is a significant difference between the early and late-types in our \nsample. The reason for this is that the bright blue\nregions of these galaxies vanish in stellar mass because their $M\/L$ ratio\nis low. This effectively creates a mass distribution with\na larger fraction of the mass contained within fewer pixels.\nFor example, the bulges of the late-type disks become more prominent while the\narms vanish, and this\nwill raise the value of the Gini index.\n\nWhat can often be seen in stellar mass maps of late-types and \nmergers\/peculiars is that these bright outer regions vanish, \nleaving only the central part of the galaxy; most objects in \nFigure~6 lose their outer parts\nand appear almost as early-types. Figure~18 also shows that the \nGini indices for the early-types in \nstellar mass and light are similar, likely because of the \nsmall variation in the $M\/L$ ratios of the various pixels. \n\n\subsubsection{M$_{20}$}\label{sec:M20_cf}\n\nWe show the differences between the M$_{20}$ index in the stellar mass\nmaps and the $z_{850}$ band in Figure ~\ref{fig:m20_cf}.\nDue to the reversed parity in the M$_{20}$ values, points which lie above the\ndashed line in Figure ~\ref{fig:m20_cf} have M$_{20}(M_{*}) >$\nM$_{20}(z_{850})$, that is, the stellar mass is more diffuse than the\nlight for most systems. 
This would change, however, if we used\na smaller radius than the standard definition. Overall, M$_{20}(M_{*})$ is higher\n(less negative) than M$_{20}(z_{850})$ for most of the sample.\n\nThe M$_{20}$ parameter traces the brightest 20 percent of the flux in the \ngalaxy, or the greatest 20 percent of the stellar mass. This parameter is \nheavily weighted by the spatial distribution of the most luminous, or most \nmassive, pixels, but is normalised to remove any dependence on galaxy size \nand total flux\/stellar mass. M$_{20}$, like $G$, is also not dependent on a \nfixed, pre-determined centre.\n\nTherefore, if stellar mass exactly follows light within a galaxy we would \nexpect that\nM$_{20}(M_{*})$ = M$_{20}(z_{850})$. Figure ~\ref{fig:m20_cf} shows that, while\nthis is approximately the case for early-type galaxies and early spirals, the\nsame does not apply for peculiars, late-type spirals and edge-on discs. The\ncompact ellipticals especially differ from this trend, \nhaving M$_{20}(M_{*})$ $<$ M$_{20}$($z_{850}$). This is likely due to \nthe fact that these compact ellipticals have blue cores which render their \nstellar mass maps more compact in the centre than for galaxies with \nredder cores.\n\n\n\subsection{Stellar Mass Structure}\n\n\subsubsection{Concentration vs. Asymmetry} \label{sec:AC}\n\nWe examine how galaxies fall in the classic \nconcentration-asymmetry plane (e.g., Conselice et al. 2000) to determine \nif different galaxy types, as determined visually, can be better separated \nat different wavelengths in this parameter space.\n\nWe plot the relation between asymmetry and concentration in light for the\nsample in Fig.~\ref{fig:AC_bviz} and in stellar mass in \nFig. ~\ref{fig:AC_mass}. Over-plotted are the classification criteria \nof Conselice (2003) and Bershady et al. (2000). Galaxies with \n$A>0.35$ are classified as mergers, \nwhile those which lie above the top line are early-types. 
\nGalaxies to the left of the middle line in Figures~20 \& 21 \nare labelled \nmid-types and those to the right of this line are classified as late-types. \n\n\nThese criteria for identifying mergers and \nlate-types appear more appropriate for our sample in the $i_{775}$ and \n$z_{850}$ bands, while galaxies are more mixed in these regions in stellar \nmass, having many more\ngalaxies with high asymmetries, and spirals are not easily distinguished from\nmerging systems. However, the uniqueness of the early-type region is more\nobvious in stellar mass\nmaps, with most E galaxies in the $i_{775}$ and $z_{850}$ band plots \nfalling into the mid-type region (Fig. ~\ref{fig:AC_mass}).\n\n\n\n\begin{figure*}\n\centering\n\includegraphics[angle=0, width=148mm, height=148mm]{lanyon.fig25.eps}\n\caption{Gini-M$_{20}$ for the $B_{435}$, $V_{606}$, $i_{775}$ and $z_{850}$\nbands. The solid lines mark the classification criteria of Lotz {et~al.$\,$} (2004;\nEquations ~\ref{eq:lotz_mergerline} and ~\ref{eq:lotz_normalline}).}\n\label{fig:gm20_bviz}\n\end{figure*}\n\nWe find that for our sample, $z_{850}$ is the best band in which to measure\nthese parameters for separating morphological types from each other. \nIt can also be seen in Fig. ~\ref{fig:AC_bviz} that the $A$-$C$ \nrelation breaks down when viewed in bluer bands, as also shown for nearby \ngalaxies in Taylor-Mager et al. (2007). We find that the scatter in $A$-$C$ increases for all\ngalaxy types towards bluer wavelengths; the effect is more pronounced for \nlate-type morphologies.\n\n\n\begin{figure}\n\centering\n\includegraphics[angle=0, width=90mm, height=160mm]{lanyon.fig26.eps}\n\caption{Gini-M$_{20}$ for $z_{850}$ (top) and the stellar mass maps \n(bottom). 
In $z_{850}$ (top) the dashed lines are the classification \ncriteria of Lotz {et~al.$\\,$} (2004; Equations ~\\ref{eq:lotz_mergerline} and\n ~\\ref{eq:lotz_normalline}) while the solid lines mark our revised criteria\n in $z_{850}$ (Equations ~\\ref{eq:me_mergerline} and\n ~\\ref{eq:me_normalline}). In stellar mass (bottom) the Lotz {et~al.$\\,$}\\, \ncriteria are\n marked by the dashed lines, our $z_{850}$ criteria are shown by the\n dot-dashed lines and the solid line shows our separation of early and late\n types\/mergers, in stellar mass.}\n\\label{fig:gm20_mass}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[angle=0, width=90mm, height=90mm]{lanyon.fig20.ps}\n\\caption{Asymmetry in stellar mass maps vs. redshift. The solid line shows the \nfit to the E galaxies (Eq. ~\\ref{eqn:E_Am_z}), which is approximately \nconstant across redshifts ($z$). The dashed line illustrates the fit to \nthe late-type spiral galaxies (Eq. ~\\ref{eqn:lS_Am_z}). The late-type \nspiral galaxies show a larger $A(M_{*})$ with increasingly higher redshifts.}\n\\label{fig:Am_z}\n\\end{figure}\n\n\nFigure ~\\ref{fig:AC_mass} shows the $A$-$C$ relation for the stellar mass \nmaps. There is a greater range in asymmetry values in stellar mass and the \ncriterion suggested by Conselice (2003) for merging systems, of $A > 0.35$, \neven includes a few E and some eS galaxies. There is, however, still a clear \nseparation in types between early and peculiar type systems, but the spirals \n(especially lS galaxies) significantly overlap between the two. This makes \nit difficult to distinguish between morphological types in stellar \nmass within $A-C$. However, given the fact that many of these spirals appear\nto have some kind of merger or formation mode, the stellar mass structure\nis superior for finding galaxies in active evolution.\n\n\n\\subsubsection{Gini vs. 
M$_{20}$}\n\nWe plot the relation between $G$ and M$_{20}$ for light and stellar mass in \nFigures ~\ref{fig:gm20_bviz} and ~\ref{fig:gm20_mass}. The solid lines in Fig.\n~\ref{fig:gm20_bviz} are taken from Lotz {et~al.$\,$} (2008), who used these\nparameters to\nclassify their sample of normal galaxies and ULIRGs. Equation\n~\ref{eq:lotz_mergerline} describes the line above which Lotz {et~al.$\,$} classify\ntheir sample as mergers, and Eq. ~\ref{eq:lotz_normalline} describes the line\nseparating early and late-type galaxies. \n\n\begin{equation}\nG = -0.14 \cdot M_{20} + 0.33\n\label{eq:lotz_mergerline}\n\end{equation}\n\n\begin{equation}\nG = 0.14 \cdot M_{20} + 0.80\n\label{eq:lotz_normalline}\n\end{equation}\n\nWe again see that the optical relation that is cleanest in terms of separating\nmorphological types is in $z_{850}$, although our sample is not best represented by the\nLotz {et~al.$\,$} classifications in Gini-M$_{20}$ space, with many early-type\ngalaxies falling into the merging region of the plot. This is partially due to\nthe fact that $G$ and M$_{20}$ are calculated differently here than in Lotz\n{et~al.$\,$}, as described in \S3.3 and Lisker {et~al.$\,$} (2008). However, we cannot\nrule out that the classifications for this sample may be intrinsically\ndifferent, and it is worrying that all of the edge-on galaxies fall into the\nmerging region of $G$-M$_{20}$. We also follow a similar\nprocedure to Lotz et al. by examining the same rest-frame wavelength.\n\nWe modify the Lotz et al. (2008) relations for $z_{850}$ (top) and \nstellar mass (bottom) and show this in Fig. ~\ref{fig:gm20_mass}. 
The \nsolid lines in the $z_{850}$\nplot are our revisions to the Lotz {et~al.$\,$} relations, based on our sample\n(Equations ~\ref{eq:me_mergerline} for mergers and ~\ref{eq:me_normalline}\nfor normal systems), \n\n\n\begin{equation}\nG(z_{850}) = -0.14 \cdot M_{20}(z_{850}) + 0.38\n\label{eq:me_mergerline}\n\end{equation}\n\n\begin{equation}\nG(z_{850}) = 0.14 \cdot M_{20}(z_{850}) + 0.74\n\label{eq:me_normalline}\n\end{equation}\n\n\noindent while the dashed lines are the original Lotz {et~al.$\,$} relations. In\nthe bottom panel of Fig. ~\ref{fig:gm20_mass}, the solid line shows our\nrevision of the merger defining line for the stellar mass maps\n(Eq. ~\ref{eq:mass_mergerline}), \n\n\begin{equation}\nG(M_{*}) = -0.14 \cdot M_{20}(M_{*}) + 0.45,\n\label{eq:mass_mergerline}\n\end{equation}\n\n\noindent while here our relations for $z_{850}$ and\nthose of Lotz {et~al.$\,$} are shown as dot-dashed and dashed, respectively.\n\nAs with the relation for $A$($M_{*}$)-$C$($M_{*}$), the \n$G$($M_{*}$)-M$_{20}(M_{*})$ relation shows\nthe same general trend as its $z_{850}$ counterpart, but with a larger spread\nin values. The spirals, again, are not as neatly defined as in $z_{850}$, with\nthe eS galaxies separated from the early-types and the late-type\nspirals and edge-on disk galaxies\noccupying the merging region. However, there is a clear separation in\nthe stellar mass Gini vs. $M_{20}$ which does not exist for the optical\nlight. Several of the pE galaxies lie in the merger\nregion, even as defined by the higher stellar mass line\n(Eq. ~\ref{eq:mass_mergerline}). This has also been noted by Conselice {et~al.$\,$}\n(2007), and is likely due to these objects having multiple nuclei.\n\nThe $G$($M_{*}$)-M$_{20}(M_{*})$ relation shows a clear early\/late-type \nsplit, defined by Eq. ~\ref{eq:mass_mergerline}, since the Sb\/Sc\/Irr region \ndescribed by\nLotz {et~al.$\,$} is not appropriate to apply to our sample in\nstellar mass. 
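The classification cuts above are simple linear relations in the $G$-M$_{20}$ plane and can be applied directly to measured values. The following sketch is our own illustration, not code from this work; the function name and the choice of which side of the early\/late line counts as early-type are assumptions, and the default intercepts are the revised $z_{850}$ values quoted above:

```python
def gm20_class(g, m20, merger_icpt=0.38, normal_icpt=0.74):
    """Classify a galaxy in the Gini-M20 plane using linear cuts of the
    form G = -0.14*M20 + b (merger line) and G = 0.14*M20 + b
    (early/late separator)."""
    if g > -0.14 * m20 + merger_icpt:
        return "merger"          # above the merger line
    if g > 0.14 * m20 + normal_icpt:
        return "early-type"      # above the early/late separator
    return "late-type"

# The original Lotz et al. cuts correspond to intercepts of 0.33 and 0.80,
# and the revised stellar-mass merger line to a merger intercept of 0.45.
```

Passing `merger_icpt=0.45` would reproduce the stellar-mass merger cut of Eq. (8) under the same assumptions.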
In Appendix~A we discuss the other correlations between these\nnon-parametric quantities: asymmetry vs. clumpiness,\nconcentration vs. M$_{20}$, size vs. concentration, and asymmetry\nvs. M$_{20}$.\n\n\n\subsection{The Evolution of Galaxy Stellar Mass Structure with Redshift}\n\n\nIn this section we investigate the changes in our galaxy stellar mass\nmaps with redshift. We take a quantitative approach here and find\nthat at $z < 1$ there is little change in the stellar mass\nstructure with redshift, with the exception of the asymmetries of\nthese galaxies, which for some galaxy types increase with redshift at $z < 1$.\n\nWe plot $A(M_{*})$ against redshift (Figure\n~\ref{fig:Am_z}) where we can see a general trend for\ngalaxies to become more asymmetric in stellar mass at\nhigher redshifts. \nWe find that not all galaxy types follow this pattern\nhowever, with the most notable exception being the\nellipticals. As shown in Fig. ~\ref{fig:Am_z}\nthere is no trend for the ellipticals \nto become more asymmetric in stellar mass at higher\nredshifts.\n\nWe note that while the Es remain at approximately the same\nlow asymmetries across the range in redshift, there is a trend for\nspiral galaxies to become increasingly asymmetric at higher redshifts. 
We\nmeasure this trend by fitting both the ellipticals (solid line) and lS (dashed line)\ngalaxies, plotted in Figure ~\ref{fig:Am_z} and quantified in Equations\n~\ref{eqn:E_Am_z} and ~\ref{eqn:lS_Am_z}, below.\n\n\begin{equation}\nA_{E}(M_{*}) = 0.049(\pm0.079) \cdot z + 0.021(\pm0.056)\n\label{eqn:E_Am_z}\n\end{equation}\n\n\noindent is the best fit for the ellipticals, while for the late-type spirals\nthe best fit is\n\n\begin{equation}\nA_{lS}(M_{*}) = 0.327(\pm0.093) \cdot z + 0.175(\pm0.064).\n\label{eqn:lS_Am_z}\n\end{equation}\n\n\noindent Equation ~\ref{eqn:E_Am_z} shows that the early-type \ngalaxies retain\napproximately constant asymmetries across $0 < z < 1$. Equation\n~\ref{eqn:lS_Am_z} indicates that there is a relation for late-type spirals,\nsuch that $A(M_{*})$ increases with redshift.\n\n\n\n\nWe have already discussed in \S 4.3.3 how the late-type spiral galaxies\nhave stellar mass asymmetries larger than their asymmetries in\noptical light. Since the late-type spirals have\narms\/disks that are larger\/brighter than\nthe bulge component, these galaxies are\ndominated in terms of their light by their spiral\nstructures and arms. These galaxies show the\nmost diversity and contrast between spatial distributions\nwithin their M\/L and stellar mass maps \n(e.g., Fig. ~\ref{fig:massmap_lS}). \n\n\n\n\nWe note that not all of the late-type spirals have high asymmetries\nin the stellar mass maps; some are as asymmetric as, or less asymmetric than, in the\n$z_{850}$-band (Fig. ~\ref{fig:asym_cf}). Furthermore, we find that\nin general the disk galaxies with higher stellar mass map \nasymmetries have a bluer\ncolour, which is the case at both high\nand low redshifts. 
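The two linear best fits can be compared numerically. The sketch below is our own and uses only the central values of the quoted coefficients, ignoring the quoted uncertainties:

```python
def asymmetry_fit(z, slope, intercept):
    """Evaluate a linear best fit A(M*) = slope * z + intercept."""
    return slope * z + intercept

# Central values of the best-fit coefficients quoted in Eqs. (10) and (11)
A_E = lambda z: asymmetry_fit(z, 0.049, 0.021)   # ellipticals: nearly flat
A_lS = lambda z: asymmetry_fit(z, 0.327, 0.175)  # late-type spirals: rising
```

At $z = 1$ the late-type spiral fit gives $A(M_{*}) \approx 0.5$, above the $A > 0.35$ merger criterion, while the elliptical fit stays below 0.1 over the whole range.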
In\nmany of these cases, the structures are lopsided and\/or contain\nouter features that remain after normalising by the M\/L map.\nThis interpretation of stellar mass being more localised than the light\nis consistent with a higher Gini index in the stellar mass\nmaps than in the light.\n\nSome of these asymmetric spirals, as well as many of the peculiars,\nresemble the so-called ``clump\nclusters'' and clumpy spirals found by Elmegreen {et~al.$\,$} (2005, 2007), \nespecially in stellar mass maps. Their low concentration in mass\nis also similar to the Luminous Diffuse Objects (LDOs) found\nat similar redshifts (Conselice et al. 2004). \nIndeed, there are some late-type spiral galaxies that we would\nhave classified differently, perhaps as Pec or pM systems, had we performed\nthe classification in stellar mass, rather than in $z_{850}$. \nThe clumps in galaxies\nfound by Elmegreen {et~al.$\,$} are also massive, with typical values in the region\n$\sim10^{8} - 10^{9}M_{\odot}$ (Elmegreen \& Elmegreen 2005), and reside in\nyoung disks at high redshift. It is likely that some of our highly asymmetric\nlate-type spiral galaxies are similar to, or in the same class as, these clumpy \nspirals. These features may also be the result of minor merger events. \n\n\section{Conclusions}\label{sec:conclusions}\n\n\nWe conduct a pixel-by-pixel study of the structures of 560 \ngalaxies found in the \nGOODS-N field at $z < 1$ within stellar mass maps and Hubble\nSpace Telescope ACS $BViz$ wavebands.\nWe measure stellar masses for each pixel of each galaxy image from our\n$BViz$ images by fitting to stellar population models. We use these \nvalues to construct stellar mass maps for our sample and compare morphologies \nin $BViz$ and stellar mass. Our major findings and results include:\n\nI. 
We construct stellar mass maps and mass-to-light ratio maps for each \ngalaxy and we present examples of each morphological type, in the \n$z_{850}$ band, stellar mass maps, and $(M\/L)_B$ in \nFigures ~\ref{fig:massmap_cE} to ~\ref{fig:massmap_M}. We\nfind that some compact elliptical (cE) galaxies have blue cores, and more\ncomplicated internal structures than either the $z_{850}$ or stellar mass images would \nreveal on their own. Many of the peculiar\nellipticals (pE) also have blue cores and we assert that these objects may\nhave merged in their recent histories.\nThe early and late-type spirals display more varied patterns. We find that for\nsome of these galaxies, structures seen in light (e.g. spiral arms) are\nsmoothed out in stellar mass. However, this effect does not hold true\nfor all galaxies, nor for all features seen in light.\nStellar mass and ($M\/L$) maps of the `peculiar' galaxies are complicated and \nvary across the sample. Although it is more difficult to make out features in \nstellar mass for galaxies that have very blue colours, \nstructures not seen in light are revealed. \n\n\nII. We compare the half-light\nradius, $R_{\rm e}$, in $z_{850}$ with half-mass radii for our sample\nas a function of morphological type. We find no systematic tendency \nfor any particular morphological type to have larger $R_{\rm e}$ in stellar mass \nthan in the $z_{850}$ band, and\nthus conclude that there is no bias introduced by measuring galaxy sizes in\n$z_{850}$.\n\nIII. We find a clear tendency for many galaxies to be more asymmetric\nin stellar mass than in $z_{850}$. We also find that morphology correlates\nwith asymmetry, with early-types having low $A(M_{*})$, and late-types \nhigher values of $A(M_{*})$. We find a relation between colour and \n$A(M_{*})$ such that bluer galaxies have larger asymmetry\ndifferences between optical light and stellar mass maps. 
\nThe late-type \nspiral galaxies in the sample have higher $A(M_{*})$ than \nwould be expected from their asymmetries in $z_{850}$. We discuss possible \ncauses of this effect, including regions of enhanced star formation also \npossessing higher stellar masses, and the evolution of spirals. We note that these highly asymmetric\nspirals resemble the clumpy disks of Elmegreen {et~al.$\,$} (2007) and Conselice et al. (2004), \nand are experiencing either minor merging activity, or bulge formation\nthrough accretion of disk material.\n\nIV. We find that the Gini index in stellar mass is higher than in the\nz-band ($G(M_{*}) > G(z_{850})$) for all galaxies except the early-types,\nindicating that most of the stellar mass in later morphological types is \ncontained within fewer pixels. Late-type and peculiar morphologies show a \ntrend for M$_{20}(M_{*})$ $>$ M$_{20}(z_{850})$, suggesting that the \nbrightest $20$ percent of the light is not necessarily where \nthe greatest $20$ percent of the stellar mass is located.\n\nV. We investigate the relations between several combinations of the\n$CAS$ parameters in $BViz$ and stellar mass, and compare our \nresults to previous morphological studies of this type (e.g. Conselice, \nRajgor \& Myers 2008; Lotz {et~al.$\,$} 2004; 2008). We find that $z_{850}$ is \nthe most appropriate photometric band to utilise for a $z < 1$ sample of \ngalaxies with all morphologies, although stellar mass maps are better at\ndistinguishing active galaxies from passive ones and provide a more physical\nmeasure of structure.\n\n\nWe furthermore compare our sample classifications in $G$-M$_{20}$ to those of Lotz {et~al.$\,$}\n(2008) and find that the Lotz {et~al.$\,$} criteria do not best describe our\nsample. We revise the Lotz {et~al.$\,$} (2008) criteria to best fit our sample\nin the $z_{850}$-band and within the stellar mass maps, and find that \nearly-types, late-types and mergers can be separated. 
However, the edge-on\ndisk galaxies remain problematic and cannot be distinguished from mergers in\n$G(z_{850})$-M$_{20}(z_{850})$. We find that $G(M_{*})$-M$_{20}(M_{*})$ \ncan be used to broadly separate early from late-type galaxies, but the \ncriteria cannot distinguish late-type\/edge-on disks \nfrom peculiar\/merger systems.\n\nVI. We find a relationship between $R_{\rm e}$ and $C$ (see appendix A) \nfor early-type galaxies in both\n$z_{850}$ and stellar mass. In each case we find that galaxies with higher\nconcentrations have larger radii, and this relation is steeper in $z_{850}$\nthan stellar mass. We also investigate asymmetry versus M$_{20}$ and find that\nthese parameters display a similar relation to $A-C$ in $i_{775}$ and\n$z_{850}$. In stellar mass, $A(M_{*})$-M$_{20}(M_{*})$ shows a tighter relation \nthan $A(M_{*})-C(M_{*})$, with a clear separation between early and late-type\nsystems. Thus, we conclude that, for all parameters, late-type\nspiral galaxies overlap with Pec\/pM\/M systems and may\nultimately be drawn from the same subset of galaxies. Structural studies in \nstellar mass do, however, track both minor and major merging events, and\ncan be used to find galaxies in active modes of galaxy evolution.\n\nWe thank the GOODS team for making their data public, STFC for a studentship \nsupporting this work, and the Leverhulme Trust for support.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nHighly frustrated magnets are a fascinating area of research with\nmany challenges and surprises \cite{SRFB04,Diep05}. One exotic\ncase is the spin-1\/2 Heisenberg antiferromagnet on the kagom\'e\nlattice, where in particular an unusually large number of low-lying\nsinglets was observed numerically \cite{lech97,waldt98}. 
An\neffective Hamiltonian approach to the strongly trimerized kagom\\'e lattice\n\\cite{sub95} was then successful in explaining\nthe unusual properties of the homogeneous lattice \\cite{mila98}.\n\nInterest in this situation has been renewed recently due to the\nsuggestion that strongly trimerized kagom\\'e lattices can be\nrealized by fermionic quantum gases in optical lattices\n\\cite{SBCEFL04,FehrmannThesis}. For two fermions per triangle,\none obtains the aforementioned effective Hamiltonian on a\ntriangular lattice \\cite{sub95}, but without the original\nmagnetic degrees of freedom. This effective Hamiltonian\nalso describes the spin-1\/2 Heisenberg antiferromagnet on the\ntrimerized kagom\\'e lattice at one third of the saturation magnetization\n\\cite{CGHHPRSS}. Furthermore, this model shares some features\nwith pure orbital models on the square lattice \\cite{NuFr05,DBM05},\nbut it is substantially more frustrated than these models.\n\nNumerical studies of the effective quantum Hamiltonian\n\\cite{DEHFSL05,DFEBSL05,CEHPSprep}\nprovide evidence for an ordered groundstate and a possible\nfinite-temperature ordering transition. Since the Hamiltonian only has\ndiscrete symmetries, one expects indeed a finite-temperature\nphase transition if the groundstate is ordered.\nFurthermore, quantum fluctuations should be unimportant for the\ngeneric properties of such a phase transition, which motivates\nus to study the classical counterpart of the model at finite temperatures.\n\n\\section{Model and symmetries}\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.35\\columnwidth]{lat-ham.eps}\n\\end{center}\n\\caption{Part of the triangular lattice. 
Arrows indicate the direction\nof the unit vectors $\\vec{e}_{i;\\langle i,j\\rangle}$ entering the\nHamiltonian (\\ref{eqH}).\n\\label{fig:LatHam}\n}\n\\end{figure}\n\nIn this paper we study a Hamiltonian which is given in terms\nof spins $\\vec{S}_i$ by\n\\begin{equation}\nH = J \\, \\sum_{\\langle i,j\\rangle}\n \\left(2 \\, \\vec{e}_{i;\\langle i,j\\rangle} \\cdot \\vec{S}_i\\right) \\,\n \\left(2 \\, \\vec{e}_{j;\\langle i,j\\rangle} \\cdot \\vec{S}_j\\right) \\, .\n\\label{eqH}\n\\end{equation}\nThe sum runs over the bonds of nearest neighbours $\\langle i,j\\rangle$\nof a triangular lattice with $N$ sites.\nFor each bond, only certain projections of the spins $\\vec{S}_i$\non unit vectors $\\vec{e}_{i;\\langle i,j\\rangle}$ enter the interaction.\nThese directions are\nsketched in Fig.\\ \\ref{fig:LatHam}. Note that these directions depend\nboth on the bond $\\langle i,j\\rangle$ and the corresponding end $i$ or $j$\nsuch that three different projections of\neach spin $\\vec{S}_i$ enter the interaction with its six nearest neighbours.\n\nIn the derivation from the trimerized kagom\\'e lattice \\cite{sub95,SBCEFL04,Z05},\nthe $\\vec{S}_i$ are pseudo-spin operators acting on the two\nchiralities on each triangle and should therefore be considered as quantum\nspin-1\/2 operators. Here we will treat the $\\vec{S}_i$ as classical\nunit vectors. Since only the $x$- and $y$-components enter the\nHamiltonian (\\ref{eqH}), one may take the $\\vec{S}_i$ as `planar' two-component\nvectors. On the other hand, in the quantum case commutation relations\ndictate the presence of the $z$-component as well, such that taking the\n$\\vec{S}_i$ as `spherical' three-component vectors is another natural choice\n\\cite{CEHPSprep}. 
Here we will compare both choices and thus assess qualitatively\nthe effect of omitting the $z$-components.\n\nThe internal symmetries of the Hamiltonian (\\ref{eqH}) constitute the\ndihedral group $D_6$, {\\it i.e.}, the symmetry group of a regular hexagon.\nSome of its elements consist of a simultaneous transformation of the spins\n$\\vec{S}_i$ and the lattice \\cite{CEHPSprep}. Since these symmetries are\nonly discrete, a finite-temperature phase transition is allowed above an\nordered groundstate.\n\nThe derivation from a spin model \\cite{sub95,Z05} yields a positive $J>0$,\nwhile it may also be possible to realize $J<0$ \\cite{FehrmannThesis} in\na Fermi gas in an optical lattice \\cite{SBCEFL04}. In the following we will\nfirst discuss the case of a negative exchange constant $J<0$ and then\nturn to the case of a positive exchange constant $J>0$. The second case\nis more interesting, but also turns out to be more difficult to handle.\n\n\\section{Negative exchange constant}\n\nFor $J<0$ there is a one-parameter family of ordered groundstates,\n$\\vec{S}_i = (\\cos(\\phi_i),\\,\\sin(\\phi_i),\\,0)$ with $\\phi_a = \\theta$,\n$\\phi_b = \\theta+2\\,\\pi\/3$ and $\\phi_c = \\theta-2\\,\\pi\/3$ on the three\nsublattices $a$, $b$, and $c$, respectively ($120^{\\circ}$ N\\'eel order).\nThe energy of these states, $E^{J<0} = 6\\,J\\,N$, is independent of\n$\\theta$. Computing the free energy $\\mathcal{F}^{J<0}(\\theta)$ by\nincluding the effect of Gaussian fluctuations we find that\n$\\mathcal{F}^{J<0}(\\theta)$ has minima at $\\theta = (2\\,n+1)\\,\\pi\/6$,\n$n = 0,\\,1,\\, \\dots,\\,5$ \\cite{CEHPSprep}. This implies that the above\n$120^{\\circ}$ N\\'eel structures lock in at these angles.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{LTneg.eps}\n\\end{center}\n\\caption{MC results for $J<0$, $T = 10^{-3}\\,\\abs{J}$,\nand planar spins. 
(a) Snapshot of a configuration\non a $12\\times 12$ lattice.\nPeriodic boundary conditions are imposed at the edges.\n(b) Histogram of angles $\\phi_i$, averaged over 1000\nconfigurations.\n\\label{fig:CFGneg}\n}\n\\end{figure}\n\nWe have performed Monte-Carlo (MC) simulations for $J<0$ using a standard\nsingle-spin flip Metropolis algorithm \\cite{LB00}.\nA snapshot of a low-temperature configuration on an\n$N=12\\times 12$ lattice is shown in Fig.\\ \\ref{fig:CFGneg}(a).\nThe $120^{\\circ}$ ordering is clearly seen in such snapshots.\nFig.\\ \\ref{fig:CFGneg}(b) shows histograms of the angles $\\phi_i$.\nOne observes that with increasing lattice size $N$, pronounced maxima\nemerge in the probability $P(\\phi_i)$ to observe an angle $\\phi_i$\nat the predicted lock-in values $\\theta = (2\\,n+1)\\,\\pi\/6$.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.65 \\columnwidth]{CnegN.eps}\n\\end{center}\n\\caption{MC results for the specific heat $C$ for $J<0$.\nFull lines are for planar spins; dashed lines for spherical spins.\nIncreasing line widths denote increasing system sizes $N$.\nError bars are negligible on the scale of the figure.\n\\label{fig:Cneg}\n}\n\\end{figure}\n\nThermodynamic quantities have been computed by averaging over at least 100\nindependent MC simulations. Each simulation was started at high\ntemperatures and slowly cooled to lower temperatures in order to minimize\nequilibration times. A first quantity, namely the specific heat $C$,\nis shown in Fig.\\ \\ref{fig:Cneg}. There is a maximum in $C$ at\n$T \\approx 1.8\\,\\abs{J}$ for spherical spins, and for planar spins at a\nhigher temperature $T\\approx 2.2\\,\\abs{J}$. The fact that the value of\n$C$ around the maximum increases with $N$ indicates a phase transition.\nFor $T \\to 0$, the equipartition theorem predicts a contribution $1\/2$\nto the specific heat per transverse degree of freedom. 
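As an illustration of the single-spin-flip Metropolis step used here, the following sketch works for planar spins parametrised by angles. The `bond_energy` callback and the neighbour table are placeholders of our own (assumptions, not the actual simulation code); for the Hamiltonian (\ref{eqH}) one would plug in the projected interaction with the unit vectors $\vec{e}_{i;\langle i,j\rangle}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(phi, bond_energy, neighbours, T):
    """One Metropolis sweep over planar spins given by angles phi[i].

    bond_energy(i, j, phi_i, phi_j) returns the energy of bond (i, j);
    neighbours[i] lists the nearest neighbours of site i.
    """
    N = len(phi)
    for _ in range(N):
        i = rng.integers(N)                       # pick a random site
        trial = rng.uniform(0.0, 2.0 * np.pi)     # propose a new angle
        dE = sum(bond_energy(i, j, trial, phi[j]) -
                 bond_energy(i, j, phi[i], phi[j]) for j in neighbours[i])
        # Metropolis acceptance: always accept downhill, uphill with exp(-dE/T)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            phi[i] = trial
    return phi
```

For instance, with a simple XY bond energy $-J\cos(\phi_i-\phi_j)$ on a chain, repeated sweeps at low $T$ drive the total energy towards its minimum.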
Indeed, the\nlow-temperature results are very close to $C\/N = 1\/2$ and $1$ for planar\nand spherical spins, respectively.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.65 \\columnwidth]{Mneg.eps}\n\\end{center}\n\\caption{MC results for the square of the sublattice\nmagnetization $m_s^2$ for $J<0$.\nFull lines are for planar spins; dashed lines for spherical spins.\nIncreasing line widths denote increasing $N$.\nError bars are negligible on the scale of the figure.\n\\label{fig:Mneg}\n}\n\\end{figure}\n\nIn order to quantify the expected\norder, we introduce the sublattice order parameter\n\\begin{equation}\n\\vec{M}_s = {3 \\over N} \\sum_{i \\in {\\cal L}} \\vec{S}_i \\, ,\n\\label{defMsubl}\n\\end{equation}\nwhere the sum runs over one of the three sublattices ${\\cal L}$\nof the triangular lattice. Fig.\\ \\ref{fig:Mneg} plots the square\nof this sublattice order parameter\n\\begin{equation}\nm_s^2 = \\left\\langle \\vec{M}_s^2 \\right\\rangle \\, ,\n\\label{defMs}\n\\end{equation}\nwhich is a scalar quantity and expected to be non-zero in a\nthree-sublattice ordered state. Note, however, that the order\nparameter (\\ref{defMsubl}) is insensitive to the mutual orientation\nof spins on the three sublattices.\nIn Fig.\\ \\ref{fig:Mneg} we observe that $m_s^2$ converges to\nzero with $N \\to \\infty$ at high temperatures, while a non-zero\nvalue persists at lower temperatures, consistent with a\nphase transition around $T\\approx 2\\,\\abs{J}$, as already\nindicated by the specific heat. Furthermore, the sublattice order\nagain points to a higher transition temperature for planar spins than for\nspherical spins. 
It should also be noted that there are noticeable\nquantitative differences between the values of $m_s^2$ for\nplanar and spherical spins over the entire temperature range.\nJust for $T \\to 0$ both planar and spherical spins yield $m_s^2 \\to 1$,\nas expected for a perfectly ordered groundstate.\n\nIn order to determine the transition temperature $T_c$ more accurately,\nwe use the `Binder cumulant' \\cite{Binder81,LB00} associated to\nthe order parameter (\\ref{defMsubl}) via\n\\begin{equation}\nU_4 = 1+ A - A \\,\n{\\left\\langle \\vec{M}_s^4 \\right\\rangle \\over\n\\left\\langle \\vec{M}_s^2 \\right\\rangle^2} \\quad\n{\\rm with} \\quad\nA = \\cases{1 & for planar spins, \\cr\n{3 \\over 2} & for spherical spins. }\n\\label{defBinder}\n\\end{equation}\nThe constants in (\\ref{defBinder}) are chosen such that $U_4=0$ for a\nGaussian distribution of the order parameter $P(\\vec{M}_s) \\propto\n\\exp\\left(-c\\,\\vec{M}_s^2\\right)$. Such a distribution is expected at high\ntemperatures, leading to $U_4 \\to 0$ for $T \\gg \\abs{J}$. Conversely, a\nperfectly ordered state yields $\\left\\langle \\vec{M}_s^4 \\right\\rangle =\n\\left\\langle \\vec{M}_s^2 \\right\\rangle^2$ such that with the prefactors as\nin (\\ref{defBinder}) we find $U_4 = 1$. Hence, for an ordered state we\nexpect $U_4 \\approx 1$ for $T < T_c$.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.93 \\columnwidth]{UnegN.eps}\n\\end{center}\n\\caption{Main panel:\nMC results for the Binder cumulant for $J<0$.\nFull lines are for planar spins; dashed lines for spherical spins.\nIncreasing line widths denote increasing $N$.\nError bars are negligible on the scale of the figure.\nInset: Binder cumulant for planar spins close to the\ncritical temperature.\n\\label{fig:Uneg}\n}\n\\end{figure}\n\nFig.\\ \\ref{fig:Uneg} shows our results for the Binder cumulant, as defined\nin (\\ref{defBinder}). 
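In practice, the cumulant defined in (\ref{defBinder}) is estimated from the sampled order-parameter vectors. A minimal sketch of such an estimator (our own illustration, not the original analysis code):

```python
import numpy as np

def binder_cumulant(M, A=1.0):
    """Binder cumulant U4 = 1 + A - A * <Ms^4> / <Ms^2>^2.

    M: array of shape (samples, dim) of order-parameter vectors;
    A = 1 for planar (two-component) and 3/2 for spherical
    (three-component) spins, as in the text.
    """
    m2 = np.sum(M * M, axis=1)                # |Ms|^2 for each sample
    return 1.0 + A - A * np.mean(m2 ** 2) / np.mean(m2) ** 2
```

With these prefactors a perfectly ordered ensemble (all samples identical) gives $U_4 = 1$, while Gaussian-distributed two-component samples give $U_4 \to 0$, matching the limits quoted above.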
The transition temperature $T_c$ can be estimated\nfrom the crossings of the Binder cumulants at different sizes $N$\n\\cite{Binder81,LB00}, which are shown for planar spins in the inset of\nFig.\\ \\ref{fig:Uneg}. Since we have not been aiming at high precision, we\ncannot perform a finite-size extrapolation. Nevertheless, we obtain a\nrough estimate\n\\begin{equation}\n{T_c^{\\rm planar}} \\approx 2 \\, {\\abs{J}}\\, ,\n\\label{TcNegPlanar}\n\\end{equation}\nwith an error on the order of a few percent. The corresponding\nvalue for spherical spins is estimated as\n${T_c^{\\rm spherical}} \\approx 1.57 \\, {\\abs{J}}$ \\cite{CEHPSprep}.\nWe therefore conclude that out-of-plane fluctuations reduce $T_c$\nby about $20\\%$.\n\n\\section{Positive exchange constant}\n\nAs in the case $J<0$, there is a one-parameter family of $120^{\\circ}$\nN\\'eel ordered groundstates with energy $-3\\,J\\,N$. They differ from the\nstates found for $J<0$ by an interchange of the spin directions on the\n$b$- and $c$-sublattices. The contribution of Gaussian fluctuations around\nthese states yields a free energy $\\mathcal{F}^{J>0}(\\theta)$ which has\nminima at $\\theta = \\pi \\,n\/3$, $n = 0,\\,1,\\, \\dots,\\,5$. Hence the N\\'eel\nstructure locks in at these angles for $J>0$. Surprisingly, an inspection\nof all states of finite cells of the lattice, in which the mutual angles\nbetween pairs of spins are multiples of $2\\,\\pi\/3$ \\cite{CEHPSprep},\nreveals that there is a macroscopic number of groundstates in addition to\nthe $120^{\\circ}$ N\\'eel states, {\\it i.e.}, the number of ground states\ngrows exponentially with $N$. 
The $120^{\\circ}$ N\\'{e}el state has $N\/3$\nsoft modes, all other groundstates have fewer soft modes \\cite{CEHPSprep}.\nAt low but finite temperatures the $120^{\\circ}$ N\\'eel state will\ntherefore be selected by a thermal order-by-disorder mechanism \\cite{OBD}.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.5 \\columnwidth]{planarcfg_10_0.eps}\n\\hspace*{-0.12 \\columnwidth}\n\\includegraphics[width=0.5 \\columnwidth]{planarcfg_10_3.eps}\n\\end{center}\n\\caption{Snapshots of configurations generated by the exchange\nmethod at $T \\approx 10^{-2} \\, J$\non a $12\\times 12$ lattice for planar spins and $J>0$.\nPeriodic boundary conditions are imposed at the edges.\n\\label{fig:CFGpos}\n}\n\\end{figure}\n\nThe large number of groundstates which are separated by an energy barrier\nrenders it extremely difficult to thermalize a simple MC simulation at\nsufficiently low temperatures. We have therefore performed exchange MC\nsimulations (also known as `parallel tempering') \\cite{HN96,eM98} for\n$J>0$, using a parallel implementation based on MPI. 96 replicas were\ndistributed over the temperature range $T\/J = 0.01, \\ldots, 0.7$ in a\nmanner to ensure an acceptance rate for exchange moves of at least 70\\%.\nStatistical analysis was performed by binning the time series at each\ntemperature. Fig.~\\ref{fig:CFGpos} shows snapshots of two low-temperature\nconfigurations with $N=12\\times 12$ generated by the exchange method with\nplanar spins. In contrast to the case $J<0$, it is difficult to decide on\nthe basis of such snapshots if any order arises for $J>0$: there are\ndefinitely large fluctuations including domain walls in the system. These\nfluctuations are in fact a necessary ingredient of the order-by-disorder\nmechanism. 
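The elementary move of the exchange method can be sketched as a swap of configurations between adjacent temperatures, accepted with probability $\min\{1, \exp[(\beta_{k+1}-\beta_k)(E_{k+1}-E_k)]\}$. The sketch below is a generic serial illustration (assumed interface: lists of energies, inverse temperatures and configurations per replica), not the MPI implementation used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

def exchange_sweep(energies, betas, configs):
    """Attempt replica swaps between all adjacent temperature slots.

    Replicas k and k+1 are swapped with probability
    min(1, exp((beta_{k+1} - beta_k) * (E_{k+1} - E_k))),
    which satisfies detailed balance for the joint distribution.
    """
    for k in range(len(betas) - 1):
        delta = (betas[k + 1] - betas[k]) * (energies[k + 1] - energies[k])
        if delta >= 0.0 or rng.random() < np.exp(delta):
            configs[k], configs[k + 1] = configs[k + 1], configs[k]
            energies[k], energies[k + 1] = energies[k + 1], energies[k]
    return energies, configs
```

A swap is always accepted when the colder replica currently holds the higher energy, which is what lets low-temperature replicas escape the barriers between the degenerate groundstates.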
A careful quantitative analysis is therefore clearly needed.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.65 \\columnwidth]{Cpos.eps}\n\\end{center}\n\\caption{Exchange MC results for the specific heat $C$ for $J>0$.\n(a) Full lines are for planar spins; dashed lines for spherical spins.\nIncreasing line widths denote increasing system sizes $N$.\nError bars are on the order of the width of the lines in this panel.\nThe other panels show the\nlow-temperature parts of the results for spherical spins (b),\nand planar spins (c).\nThe system sizes in panels (b) and (c) range from $N=6\\times6$\n(bottom) to $N=18\\times18$ (panel (b), top),\nand $N=12\\times12$ (panel (c), top), respectively.\n\\label{fig:Cpos}\n}\n\\end{figure}\n\nWe start with the specific heat $C$, shown in Fig.\\ \\ref{fig:Cpos}. There\nis a broad maximum at $T \\approx 0.25\\,J$ for planar spins and $T \\approx\n0.3\\,J$ for spherical spins (see Fig.\\ \\ref{fig:Cpos}(a)). However, this\nmaximum does not correspond to a phase transition, as one can infer from\nthe small finite-size effects. There is a second small peak in $C$ at a\nlower temperature $T \\approx 0.02 \\, J$, see panels (b) and (c) of Fig.\\\n\\ref{fig:Cpos}. Since this peak increases with $N$, it is consistent with\na phase transition. Note that at the lowest temperatures $C\/N$ is clearly\nsmaller than $1\/2$ and $1$ for planar and spherical spins, respectively.\nIndeed, in-plane fluctuations around the $120^\\circ$ state yield one\nbranch of soft modes, which we expect to contribute only $N\/12$ to $C$\nrather than $N\/6$, as the other two branches. Thus, we expect $C\/N = 5\/12\n= 0.41666\\dots$ for planar spins and $C\/N = 11\/12 = 0.91666\\dots$ for\nspherical spins in the limit $T \\to 0$. Our MC results tend in this\ndirection, but we have not reached sufficiently low temperatures to fully\nverify this prediction. 
Lastly, we note that the difference between $C\/N$\nfor planar and spherical spins is consistent with $1\/2$ within error bars\nonly for $T \\lesssim 0.02\\, J$.\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.65 \\columnwidth]{Mpos.eps}\n\\end{center}\n\\caption{Exchange MC results for the square of the sublattice\nmagnetization $m_s^2$ for $J>0$.\nFull lines are for planar spins; dashed lines for spherical spins.\nIncreasing line widths denote increasing $N$.\nError bars are negligible on the scale of the figure.\n\\label{fig:Mpos}\n}\n\\end{figure}\n\nNext, in Fig.\\ \\ref{fig:Mpos} we show the square of the sublattice\nmagnetization (\\ref{defMs}) for $J>0$. First, we observe that it remains\nsmall for almost all temperatures and shoots up just at the left side of\nFig.\\ \\ref{fig:Mpos} where we measure a maximal value of $m_s^2 \\approx\n0.4$ for $T=J\/100$ on the $N=6\\times 6$ lattice. This is consistent with a\nphase transition into a three-sublattice ordered state in the vicinity of\nthe low-temperature peak in the specific heat $C$. In marked difference\nwith the case $J<0$ (see Fig.\\ \\ref{fig:Mneg}), here we observe only small\ndifferences between planar and spherical spins (at least for the sizes\nwhere we have data for planar spins, {\\it i.e.}, $N=6\\times6$, $9\\times9$\nand $12\\times12$).\n\n\\begin{figure}[tb]\n\\begin{center}\n\\includegraphics[width=0.98 \\columnwidth]{UposN.eps}\n\\end{center}\n\\caption{Exchange MC results for the Binder cumulant for $J>0$.\n(a) Full lines are for planar spins; dashed lines for spherical spins.\nIncreasing line widths denote increasing $N$.\nError bars are at most on the order of the width of the lines.\n(b) Binder cumulant for planar spins close to the\ncritical temperature (\\ref{TcPosPlanar}). 
Lines of increasing slope\nare for increasing system sizes $N=6\\times6$, $9\\times 9$,\nand $12\\times12$.\n\\label{fig:Upos}\n}\n\\end{figure}\n\nFinally, Fig.\\ \\ref{fig:Upos} shows our results for the Binder cumulant,\nas defined in (\\ref{defBinder}). In contrast to the sublattice\nmagnetization $m_s^2$, the Binder cumulants for planar and spherical spins\nare close to each other only for $T \\lesssim 0.1\\,J$, provided we scale to\nthe same prefactor $A$ in the definition (\\ref{defBinder}). Again, our\nbest estimate for the transition temperature $T_c$ is obtained from the\ncrossings of the Binder cumulants at different sizes $N$ shown for planar\nspins in the inset of Fig.\\ \\ref{fig:Upos}. The only meaningful crossings\nare those of the $N=6\\times6$ with the $N=9\\times 9$ and $12\\times12$\ncurves, respectively. This leads to a rough estimate\n\\begin{equation}\n{T_c^{\\rm planar}} \\approx 0.0125 \\, {J}\\, .\n\\label{TcPosPlanar}\n\\end{equation}\nThis estimate is indistinguishable from the corresponding value for\nspherical spins \\cite{CEHPSprep}, as is expected since the spins lie\nessentially all in the $x$-$y$-plane for such low temperatures.\n\n\n\\section{Conclusions and outlook}\n\nIn this paper,\nwe have investigated finite-temperature properties of the classical\nversion of the Hamiltonian (\\ref{eqH}) on a triangular lattice.\n\nFor $J<0$, we have clear evidence for a low-temperature phase\nwith $120^\\circ$ N\\'eel order. The transition temperature is of\norder $\\abs{J}$ and can be determined with reasonable accuracy.\nPlanar and spherical spins yield qualitatively similar results,\nbut there are quantitative differences, in particular the\ntransition temperature (\\ref{TcNegPlanar}) is higher for planar spins.\nWe have performed further simulations for spherical spins \\cite{CEHPSprep}\nin order to determine critical properties. 
On the one hand, so far\nwe have no evidence for a discontinuity at $T_c$, on the other hand the\nassumption of a continuous phase transition yields very unusual\ncritical exponents \\cite{CEHPSprep} which violate the hyperscaling\nrelation. Thus, the most plausible scenario may be a\nweakly first-order transition.\n\nFor $J>0$, the groundstates are macroscopically degenerate \\cite{CEHPSprep}.\nA thermal order-by-disorder mechanism \\cite{OBD} predicts the\nselection of another $120^\\circ$ ordered state. The corresponding\nphase transition is just at the limits of detectability even with\nthe exchange MC method \\cite{HN96,eM98}: we find an extremely\nlow transition temperature (\\ref{TcPosPlanar})\nwhich is two orders of magnitude smaller than the overall energy scale $J$.\nIn this case, the spins lie essentially in the $x$-$y$-plane in the\nrelevant temperature region such that we obtain quantitatively extremely\nclose results for planar and spherical spins in the vicinity of $T_c$,\napart from a constant offset in the specific heat.\n\nThe features observed e.g.\\ in the specific heat for the classical variant\nresemble those found in the original quantum model\n\\cite{DEHFSL05,DFEBSL05,CEHPSprep}. Since only much smaller systems are\nnumerically accessible for the quantum model, the results for the classical\nvariant are an important tool for understanding the quantum case.\nIn particular, the finite-temperature phase transitions should be universal\nand may therefore be characterized in the classical model. However,\nhighly accurate data is needed for that purpose. Our result that spherical\nand planar spins exhibit the same qualitative features for $J<0$, and that\nfor $J>0$ the $z$-component is even quantitatively very small in the\nrelevant temperature range, may be useful in this context. 
Namely,\none may choose planar spins for further simulations, thus\nreducing the number of degrees of freedom to be updated.\n\n\\ack\n\nUseful discussions with F.\\ Mila are gratefully acknowledged. We are\nindebted to the CECPV, ULP Strasbourg for allocation of CPU time on an\nItanium 2 cluster. This work has been supported in part by the European\nScience Foundation through the Highly Frustrated Magnetism network.\nPresentation at HFM2006 is supported by the Deutsche\nForschungsgemeinschaft through SFB 602.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Vortex state}\n \n The typical vortex magnetization configurations for an oblate hemispheroidal shell and a prolate hemispheroidal shell are depicted in Figures $\\ref{oblate_vortex}$ and $\\ref{prolate_vortex}$, respectively. While for the oblate hemispheroidal shell we can clearly see a core in the middle of the vortex, in the prolate hemispheroidal shell the core is spread all over the vortex. \n \n \\begin{figure}\n\\begin{center}\n\\includegraphics [scale=0.16]{Aout45Bou25_Ain40Bin20-Vortex_Axes.eps}\n\\end{center}\n\\caption{The magnetization distribution in the vortex state of an oblate hemispheroidal shell with $\\rm A_{in}=40$ nm, $\\rm C_{in}=20$ nm and thickness 5 nm. The color code represents the normalized z component of the magnetization, emphasizing the core in the center of the vortex.} \\label{oblate_vortex}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics [scale=0.16]{Aout20Bou25_Ain15Bin20-Vortex.eps}\n\\end{center}\n\\caption{The magnetization distribution in the vortex state of a prolate hemispheroidal shell with $\\rm A_{in}=15$ nm, $\\rm C_{in}=20$ nm and thickness 5 nm.
The color code represents the normalized z component of the magnetization, shows that the core is not only in the center of the vortex.} \\label{prolate_vortex}\n\\end{figure}\n\n\n\n\n\n Using the angular parametrisation for the normalised magnetisation \n \\begin{equation}\n\\label{m_eq}\n\\bold{m}=\\dfrac{\\bold{M}}{M}=(\\sin\\theta \\cos\\phi, \\sin\\theta\\sin\\phi, \\cos\\theta), \n\\end{equation}\n one can describe the vortex solution as follows:\n \n\\begin{equation}\n\\label{vortex_eq}\n\\cos\\theta=pf(r),\\qquad \\phi=\\epsilon\\dfrac{\\pi}{2}+\\chi .\n\\end{equation}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics [scale=0.5]{Vortex_core_thickness5.eps}\n\\end{center}\n\\caption{The normalized z component of the magnetization as a function of $r$ for hemispheroidal shells (Thickness=5 nm). (a) Symbols correspond to the micromagnetic simulations for semi-principal axis $\\rm A_{in}=40$ nm and different $\\rm C_{in}$. (b) Symbols indicate the micromagnetic simulations results for semi-principal axis $\\rm A_{in}=15$ nm and three different $\\rm C_{in}$ values. Solid lines are the best ansatzes which fit to the simulations' results.} \\label{profile_vortex5}\n\\end{figure}\n\n\n\n Here, $(r,\\chi,z)$ are the cylinder coordinates, $p=\\pm1$ is the vortex polarity which describes the vortex core magnetization (up or down) and $\\epsilon=\\pm1$ is the vortex chirality (clockwise or counterclockwise). The function $f(r)$ describes the out of surface structure of the vortex. In order to find a good fit to the oblate hemispheroidal shell, it is instructive to use an analogy with a vortex profile of the planar disk. There are known several models for describing the vortex in a disk shaped particles. 
For example, the ansatz by Usov and Peschany \cite{N.a1992,Guslienko2004} and Kravchuk's ansatz \cite{Kravchuk2007}, which describes the vortex structure in disks and rings,\n\begin{equation}\n\label{Kravchuk_eq}\nf(r)=e^{-(\dfrac{r}{\xi})^2},\n\end{equation}\nwhere the parameter $\xi$ determines the radius of the vortex core. For hemispherical shells, Sheka et al. used a modified version of Usov's and Peschany's model \cite{SHEKA2013} \n \begin{equation}\n\label{Usov_eq}\nf(r) = \begin{cases} \dfrac{r_c^2-r^2}{r_c^2+r^2} & \text{if } r<r_c, \\ 0 & \text{if } r\geq r_c, \end{cases}\n\end{equation}\nwhere $r_c$ is the radius of the vortex core. The simulated profiles for $\rm \frac{C_{in}}{A_{in}}>2$ do not fit at all to Kravchuk's ansatz or to Eq. \ref{like_Lor_eq} and do not correspond to Usov and Peschany's ansatz given by Eq. \ref{Usov_eq}. Therefore, to describe $m_z$ vs $r$ for $\rm \frac{C_{in}}{A_{in}}>2$ we suggest the two-parameter ansatz:\n\n \begin{equation}\n\label{Schultz_eq}\nf(r)=1-(r\/R_c)^{\alpha}.\n\end{equation}\nThe fit is not perfect, but it is the best simple ansatz.\n\nThe profile of the normalized z component of the magnetization as a function of $r$, for $\rm A_{in}=40$ nm and a shell thickness of 10 nm, is presented in Figure $\ref{profile_vortex10}$(a). Kravchuk's ansatz is the best fit throughout the range ($\rm C_{in}\/\rm A_{in} \leq$1.25).\n \nFigure $\ref{profile_vortex10}$(b) depicts the profile of the normalized z component of the magnetization as a function of $r$, for $\rm A_{in}=15$ nm and a shell thickness of 10 nm. The best fit for $\rm \frac{C_{in}}{A_{in}} \geq2.66$ is Eq. \ref{Schultz_eq} and for $0.33<\rm \frac{C_{in}}{A_{in}}<0.66$ is Eq. \ref{like_Lor_eq}. None of the above ansatzes was good enough to describe $m_z$ vs $r$ for the intermediate aspect ratios. \n\n \n\section{Conclusions}\n\nWe present a detailed study of the ground state of magnetic nano hemispheroidal shells. In addition to the three magnetic ground states which exist in hemispherical magnetic shells, we find another homogeneous ground state, the easy axis.
This additional magnetic structure is the ground state only for small and elongated hemispheroidal shells. Like hemispherical shells \\citep{SHEKA2013}, as the dimensions increase there is more preference for vortex state than the onion state. The vortex profile cannot be described by only one ansatz. We need to choose the right ansatz for each hemispheroidal shell. Start with Kravchuk's ansatz for the wide hemispheroidal shells with low aspect ratio ($\\rm C_{in}\/\\rm A_{in}$), through Eq. \\ref{like_Lor_eq} and end with the ansatz which is given by Eq. \\ref{Schultz_eq} for the narrow hemispheroidal shells with large aspect ratio ($\\rm C_{in}\/\\rm A_{in}$). \n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{PRELIMINARIES}\n\\subsection{Hamilton-Jacobi Reachability Analysis}\nLet $s \\le 0$ be time and $z \\in \\mathbb{R}^n$ be the state of an autonomous system. The evolution of the system state over time is described by a system of ordinary differential equations (ODE) below.\n\\begin{equation}\n\\label{eq:main_ode}\n \\dot{z} = \\dv{z(s)}{s} = f(z(s), u(s), d(s))\n\\end{equation}\nwhere $u(\\cdot)$ and $d(\\cdot)$ denote the control and disturbance function respectively. The system dynamics $f$ are assumed to be uniformly continuous, bounded and Lipschitz continuous in $z$ for fixed $u$ and $d$. Given $u(\\cdot)$ and $d(\\cdot)$, there exists a unique trajectory that solves equation \\eqref{eq:main_ode}.\n\nThe trajectory or solution to equation \\eqref{eq:main_ode} is denoted as $\\zeta(s;z, t, u(\\cdot), d(\\cdot)): [t, 0] \\rightarrow \\mathbb{R}^n$ , which starts from state $z$ at time $t$ under control $u(\\cdot)$ and disturbances $d(\\cdot)$. $\\zeta$ satisfies \\eqref{eq:main_ode} almost everywhere with initial condition $\\zeta(t;z, t, u(\\cdot), d(\\cdot)) = z$.\n\nIn reachability analysis, we begin with a system dynamics described by an ODE and a target set that represents unsafe states\/obstacles \\cite{HJOverview}. 
We then solve a HJ PDE to obtain Backward Reachable Tube (BRT), defined as follows:\n\n\\begin{equation}\n\\begin{aligned}\n \\bar{\\mathcal{A}} = \\{z: \\exists \\gamma \\in \\Gamma, \\forall u(\\cdot) \\in \\mathbb{U}, \\exists s \\in [t, 0], \\\\ \t\\zeta(s;z, t, u(\\cdot), d(\\cdot)) \\in \\mathcal{T} \\}\n\\end{aligned}\n\\end{equation}\n\nIn HJ reachability analysis, a target set $\\mathcal{T} \\subseteq \\mathcal{R}^n$ is represented by the implicit surface function $V_{0}(z)$ as $\\mathcal{T} = \\{z: V_{0}(z) \\leq 0\\}$. The BRT is then the sub-level set of a value function $V(z, s)$ defined as below.\n\\begin{align}\n\\label{eq:eq3}\n V(z, s) = \\min_{d(\\cdot)}\\max_{u(\\cdot) \\in \\mathbb{U}}\\min_{s\\in[t,0]}V_0(\\zeta(0;z, t, u(\\cdot), d(\\cdot)))\n\\end{align}\nWe assume disturbance is applied with non-anticipative strategies\\cite{bansal2017hamilton}. In a zero-sum differential game, the control input and disturbances have opposite objectives.\n\nThe value function $V(z, s)$ can be obtained as the viscosity solution of this HJ PDE: \n\\begin{equation}\n\\label{eq:HJ_variational_inequality}\n\\begin{gathered}\n \\min\\{D_{s}V(z,s) + H(z, \\nabla V(z,s)), V(z,0) - V(z,s) \\} = 0\\\\ \n V(z,0) = l(z), s \\in [t, 0] \\\\\n H(z, \\nabla V(z, s)) =\\min_{\\gamma[u](\\cdot)}\\max_{u(\\cdot)} \\nabla V(z, s)^{T}f(z,u) \\end{gathered}\n\\end{equation}\n\nWe compute the HJ PDE until it converges. Numerical toolboxes based on level set methods such as \\cite{LsetToolbox1} are used to obtain a solution on a multi-dimensional grid for the above equation.\n\n\\subsection{Basic Numerical Solution}\nLet us store the value function on a multi-dimensional grid, with the numerical solution of the value function denoted as $V$. Let $N_{d}$ be the grid size on the $d$th axis ($1 \\le d \\le 4$). 
We also let $x_{d,i}$ denote the state of grid $i$ in dimension $d$.\nIn our approach throughout this paper, we will adopt the central differencing scheme for approximating derivatives in dimension $d$, which is defined as follows:\n\\begin{equation}\n\\label{eq:central_diff}\n\\begin{gathered}\n D_{d}^{-}V(x_{d, i}) = \\dfrac{V(x_{d, i}) - V(x_{d, i-1})}{\\Delta x_{d}}, \\\\\n D_{d}^{+}V(x_{d, i}) = \\dfrac{V(x_{d, i+1}) - V(x_{d, i})}{\\Delta x_{d}}, \\\\\n D_{d}V(x_{d, i}) = \\dfrac{D_{d}^{+}V(x_{d, i}) + D_{d}^{-}V(x_{d, i})}{2} \\end{gathered}\n\\end{equation}\n\nThe two terms $D_{d}^{-}$ and $D_{d}^{+}$ are the left and right approximations respectively. Note that for grid points at each end of each dimension (i.e $i = N_d-1$, $i = 0$), \\eqref{eq:central_diff} is computed with extrapolated points.\nThe basic algorithm for solving \\eqref{eq:HJ_variational_inequality} on-grid for 4D systems is then described as follows: \n\n\\begin{algorithm}[h]\n\\caption{Value function solving procedures}\n\\label{HJalgorithm1}\n\\begin{algorithmic}[1]\n \\State $V_{0}[N_1][N_2][N_3][N_4] \\leftarrow l(z)$\n \\State \\texttt{\/\/Compute Hamiltonian term, and max, min deriv}\n \\For{\\texttt{$i = 0 :N_1 - 1$; $j= 0:N_2 -1$; $k= 0:N_3 -1$; $l= 0:N_4 -1$}} \n \n \n \n \\State \\texttt{Compute \\eqref{eq:central_diff} for } $1 \\leq d \\leq 4 $ \n \n \\State $ minDeriv \\leftarrow min(minDeriv, D_{d}V(x))$\n \\State $ maxDeriv \\leftarrow max(maxDeriv, D_{d}V(x))$\n \\State $\\displaystyle u_{opt} \\leftarrow \\arg \\max_{u \\in \\mathbb{U}} \\nabla V(z, s)^{\\top}f(z,u)$ \n \\State $\\dot{x} \\leftarrow f(z, u_{opt})$\n \n \\State $H_{i, j, k, l} \\leftarrow \\nabla V(z, s)^{\\top}\\dot{x}$\n \n \n \n \n \\EndFor\n \\State \\texttt{\/\/ Compute dissipation and add to H}\n \\For{\\texttt{$i = 0 :N_1 - 1$; $j= 0:N_2 -1$; $k= 0:N_3 -1$; $l= 0:N_4 -1$}}\n \\State $\\alpha_d(x) \\leftarrow \\max_{p \\in [minDeriv, maxDeriv]} \\abs{\\dfrac{\\partial H(x,p)}{\\partial p_d}}$\n 
\\State $H_{i, j, k, l} \\leftarrow H_{i, j, k, l} - \\Sigma_{d=1}^{4} \\alpha_{d}(x)\\dfrac{D_{d}^{+}V(x) - D_{d}^{-}V(x) }{2}$\n \\State $\\alpha_{d}^{max} \\leftarrow max(\\alpha_{d}^{max}, \\alpha_{d})$\n \\EndFor\n\\State \\texttt{\/\/Compute stable integration time step}\n\\State $\\Delta t \\leftarrow (\\Sigma_{d=1}^{4}\\dfrac{\\abs{\\alpha_{d}^{max}}}{\\Delta x_{d}})^{-1}$\n \\State $V_{t+1} \\leftarrow H\\Delta{t} + V_{t}$\n \\State \\texttt{$V_{t+1} \\leftarrow min(V_0, V_{t+1})$}\n \\State \\texttt{$\\epsilon \\leftarrow \\abs{V_{t+1} - V_{t}}$}\n \\If{$\\epsilon \\geq threshold$}\n \\State \\texttt{$V_{t} \\leftarrow V_{t+1}$}\n \\State \\texttt{Go to line 3} \n \\EndIf\n\\end{algorithmic}\n\\end{algorithm}\nThe above algorithm loops through the 4D array three times. In the first grid iteration, the Hamiltonian terms and the maximum and minimum derivatives are determined (lines 3-9). In the second grid iteration, the dissipation is computed and added to the Hamiltonian to keep the computation stable. At the same time, the maximum $\\alpha$ in each dimension, defined in line 13, is computed. These $\\alpha_d^{max}$ are used to determine the step bound $\\Delta t$. In the third grid iteration (line 19), each grid point is integrated forward by $\\Delta t$.\n\\begin{figure}[H]\n \\centering\n \\includegraphics[width=.30\\textwidth]{mem_4dims.png}\n \\caption{9 memory accesses (yellow + green colored grid) within each iteration for computing derivatives in all 4 directions as in line 3 (algorithm \\ref{HJalgorithm2})}\n \\label{fig:my_label}\n\\end{figure}\nIn certain cases, computing $\\alpha_{d}(x)$ in line 13 amounts to taking the absolute value of $\\dot{x}$, which has already been computed in line 8. In addition, in many cases, $\\alpha_{d}^{max}$ stays the same across time iterations. We also observed that $\\Delta t$ depends only on the grid configuration and $\\alpha_{d}^{max}$.
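For concreteness, the dissipation and stable-time-step computation above can be sketched in one dimension. This is an illustrative Python sketch with placeholder values, taking $\alpha_d = |\dot{x}|$ as noted above and handling the boundary by constant extrapolation; it is not the on-grid 4D implementation.

```python
import numpy as np

# 1D sketch of the Lax-Friedrichs dissipation and stable time step.
dx = 0.1
V = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))   # toy value function
x_dot = 1.5 * np.ones_like(V)                   # placeholder dynamics speed

Vp = np.pad(V, 1, mode="edge")        # constant-extrapolated boundary points
D_minus = (Vp[1:-1] - Vp[:-2]) / dx   # left approximation D^-
D_plus = (Vp[2:] - Vp[1:-1]) / dx     # right approximation D^+
D_central = 0.5 * (D_plus + D_minus)  # central difference

H = D_central * x_dot                 # Hamiltonian term
alpha = np.abs(x_dot)                 # here alpha_d reduces to |x_dot|
H_dissipated = H - alpha * (D_plus - D_minus) / 2.0

alpha_max = alpha.max()
dt = 1.0 / (alpha_max / dx)           # stable step; in 4D, sum over all dims
```

The same structure carries over to the 4D case by summing the $\alpha_d^{max}/\Delta x_d$ terms over all four dimensions before inverting.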
So instead of re-computing $\\Delta t$ every time and then looping through the 4D grid array again, we can pre-compute $\\Delta t$ and re-use it for all the time iterations. Combining these ideas, throughout this paper we use the following algorithm with a single grid loop, which is more computationally efficient:\n\\begin{algorithm}[h]\n\\caption{Value function solving procedures}\n\\label{HJalgorithm2}\n\\begin{algorithmic}[1]\n \\State $V_{0}[N_1][N_2][N_3][N_4] \\leftarrow l(z)$\n \\For{\\texttt{$i = 0 :N_1 - 1$; $j= 0:N_2 -1$; $k= 0:N_3 -1$; $l= 0:N_4 -1$}}\n \\State \\texttt{Compute \\eqref{eq:central_diff} for } $1 \\leq d \\leq 4 $ \n \\State $\\displaystyle u_{opt} \\leftarrow \\arg \\max_{u \\in \\mathbb{U}} \\nabla V(z, s)^{\\top}f(z,u)$ \n \\State $\\dot{x} \\leftarrow f(z, u_{opt})$\n \\State $H_{i, j, k, l} \\leftarrow \\nabla V(z, s)^{\\top}\\dot{x}$\n \\State $H_{i, j, k, l} \\leftarrow H_{i, j, k, l} - \\Sigma_{d=1}^{4} \\abs{\\dot{x_{d}}}\\dfrac{D_{d}^{+}V(x) - D_{d}^{-}V(x) }{2}$\n \\State $V_{t+1,(i, j, k, l)} \\leftarrow H_{i, j, k ,l}\\Delta{t}_{precomputed} + V_{t, (i, j, k, l)}$\n \\State \\texttt{$V_{t+1, (i, j, k, l)} \\leftarrow min(V_{0, (i, j, k, l)}, V_{t+1, (i, j, k, l)})$}\n \\EndFor\n \\State \\texttt{$\\epsilon \\leftarrow \\abs{V_{t+1} - V_{t}}$}\n \\If{$\\epsilon \\geq threshold$}\n \\State \\texttt{$V_{t} \\leftarrow V_{t+1}$}\n \\State \\texttt{Go to line 2} \n \\EndIf\n\\end{algorithmic}\n\\end{algorithm}\n\\subsection{Field Programmable Gate Arrays (FPGA)}\nFPGAs are configurable integrated circuits that are programmed for specific applications using a hardware description language (HDL). \n\n\\begin{figure*}[h]\n \\includegraphics[width=\\textwidth]{figures\/Overall-System.pdf}\n \\caption{System overview on FPGA (Right). The initial value array is first transferred from DRAM to the FPGA's on-chip memory. 
The memory buffer then distributes data to the 4 PEs to concurrently compute the new value function at 4 consecutive grid points. The output from each PE is then written back to DRAM. Each fully pipelined PE outputs one grid point every clock cycle (Left). Inside the PE, there are hardware components that sequentially solve algorithm 2}\n \\label{fig:overall_system}\n\\end{figure*}\n\nComputing platforms such as CPUs, GPUs, and FPGAs have a memory component and computing cores. Compute cores must request and receive all the necessary data from the memory component before proceeding with the computation. If the memory component cannot provide at once all the data accesses the application requires, cores have to stall and wait, slowing down the computation.\n\\begin{figure*}[h]\n \\includegraphics[width=\\textwidth, height=155.5pt]{figures\/PE_pipeline2.pdf}\n \\caption{Pipelining schedule of a single PE. The PE's operation is an assembly line where multiple grid points can be processed at the same time at different stages. Each stage is physical hardware that computes specific parts of algorithm 2. At a particular stage and a particular cycle, the PE is busy computing a certain part of algorithm 2 for the grid point at the indices shown. Note that for simplicity, the indices shown here are for a single PE only.}\n \\label{fig: pipelining_schedule}\n\\end{figure*}\nEfficient systems need both fast computing cores and fast data distribution from memory. Depending on the application, the memory access and computing patterns vary. General-purpose CPUs\/GPUs are architected towards reasonable performance for a wide variety of applications, but are unoptimized for any particular one. An FPGA chip, on the other hand, provides programmable digital circuits to design customized computing cores and memory blocks. Thus, one can leverage knowledge about the details of the computing workload to design an efficient system accordingly with an FPGA.
With an FPGA, one can control and achieve a higher degree of parallelism at the digital hardware level, at the cost of programmability.\n\n\n\\subsection{Problem Description}\n\nA key observation about algorithm 2 is that each new grid point of $V_{t+1}$ can be computed independently of the others within one time iteration, and therefore in parallel. We can then leverage a high degree of parallelism on FPGA by having many cores update as many grid points concurrently as possible. \n\nHowever, two challenges must first be addressed. Firstly, memory blocks need to efficiently distribute data to compute cores: in order for a loop computation to proceed, each of these cores needs up to 9 data inputs (Fig.~\\ref{fig:my_label}), and the memory design needs to satisfy this. \nSecondly, a four-dimensional grid takes up tens of megabytes in memory and therefore cannot fully fit in the FPGA's on-chip memory for fast access.\n\n\nIn this paper, our goal is twofold. First, we discuss our hardware design, which solves the above challenges and maximizes parallel computation of algorithm 2 while efficiently making use of the FPGA's on-chip memory. Next, we show that this enables low-latency computation on the FPGA, which can be deployed in real-time systems.\n\n\\input{sections\/03-FPGA}\n\\input{sections\/04-Experiment}\n\n\\addtolength{\\textheight}{-12cm} \n\n\\bibliographystyle{ieeetr}\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\nAutonomous systems are becoming more prevalent in our lives. Examples of these systems include self-driving cars, unmanned aerial vehicles, and rescue robots. One key factor that will allow wider adoption of autonomous systems is the guaranteed safety of these systems.
Despite tremendous progress in autonomous system research in areas such as motion planning, perception, and machine learning, deployment of these systems in environments that involve interactions with humans and other robots remains limited due to the potential danger these robotic systems can cause. Formal verification methods can help autonomous robots reach their untapped potential.\n \nHamilton-Jacobi (HJ) reachability analysis is a formal verification approach that provides guaranteed safety and goal satisfaction to autonomous systems under adversarial disturbances. Among the many ways to perform reachability analysis, solving the HJ PDE is one way to characterize sets of safe states and synthesize optimal controllers; it involves calculating the Backward Reachable Tube (BRT), the set of states the system must stay out of in order to avoid obstacles. HJ reachability analysis has been successfully applied in practical applications such as aircraft safe landing \\cite{SafeLanding}, multi-vehicle path planning, and multi-player reach-avoid games \\cite{SafePlatoon}. \nThe appeal of this particular method is its power in handling control and disturbances, nonlinear system dynamics, and flexible set representations. \n\nThe main downside of HJ reachability is that it is solved on a multi-dimensional grid with the same number of dimensions as the number of state variables, so the computation scales exponentially with the number of dimensions. This prevents the HJ formulation from being applied to real-time systems, where safety is increasingly demanded. While systems with 3 or fewer dimensions can be computed quickly with multi-core CPUs, practical systems, which usually involve 4 to 5 state components, can take several minutes to hours to compute. Prior research has proposed decomposing high-dimensional systems into smaller tractable sub-systems whose BRTs can be computed exactly \\cite{ExacDec} or over-approximated \\cite{OverDec} in certain cases. 
However, the challenge of applying the HJ formulation to real-time systems remains, as some systems cannot be decomposed below four dimensions, and over-approximation is introduced if projection methods are used. \n\nIn this paper, we expand the limit on the number of dimensions for which the BRT can be directly computed in \\emph{real time} through the use of FPGA. We argue that customized hardware accelerators can complement those decomposition methods well in making higher-dimensional systems provably safe in real time. As general-purpose computers no longer double their performance every two years due to the end of Moore's law, we have seen examples of successful hardware acceleration in other areas such as machine learning training\/inference \\cite{TPU,Eyeriss, EIE} and robot motion planning \\cite{DukePaper}.\n\n\nIn this paper, our contributions are as follows: \n\\begin{itemize}\n \\item We prototype a customized hardware design on FPGA that accelerates HJ reachability analysis by 16x compared to a state-of-the-art implementation and by 142x compared to \\cite{LsetToolbox1}, both running on a 16-thread CPU, for a 4D system\n \\item We demonstrate that the system can meet the real-time requirement of guaranteeing safety in changing environments by re-computing the BRT at 5Hz\n \\item We demonstrate obstacle avoidance with a robot car driving in an environment in which new obstacles are introduced during run time, with the BRT re-computed at 5Hz.\n\\end{itemize}\n\n\n\\section{EXPERIMENT \\& RESULT}\n\\label{sec:exp}\n\\subsection{Experiment setup}\nIn this section, we demonstrate that our system can meet the real-time requirement through an obstacle avoidance demonstration in a changing environment.\n\nWe used a Tamiya TT02 model RC car\\cite{nvidiajetracer} controlled by an on-board Nvidia Jetson Nano microcontroller inside a $4$m $\\times$ $6$m room. 
We use the following extended Dubins car model for its dynamics:\n\\begin{equation}\n\\begin{split}\n\\label{eq:car_dynamics}\n \\dot{x} &= v \\cos(\\theta)\\\\\n \\dot{y} &= v \\sin(\\theta)\\\\\n \\dot{v} &=a\\\\\n \\dot{\\theta} &= v\\frac{\\tan(\\delta)}{L}\n\\end{split}\n\\end{equation}\n\n\\noindent where $a \\in \\left[-1.5, 1.5\\right]$,\n$\\delta \\in \\left[-\\frac{\\pi}{18}, \\frac{\\pi}{18}\\right]$,\nand $L = 0.3$m. The control inputs are the acceleration $a$ and the steering angle $\\delta$. We use a grid size of $60\\times60\\times20\\times36$ with resolutions of $0.1$m, $0.067$m, $0.2$m\/s and $0.17$ rad for $x$-position, $y$-position, speed and angle, respectively.\n\nInside the room, we use orange cones as obstacles, and a motion capture system is used to accurately track the car's state and the positions of the obstacles. We initialize the value function as follows:\n\\begin{equation}\n\\label{eq:init_V}\n V_{0}(x, y, v, \\theta) = \\sqrt{(x-x_o)^2 + (y-y_o)^2} - R\n\\end{equation}\nwhere $x_o$ and $y_o$ are the obstacle's position and $R$ is the radius of the cone.\nThe obstacles' positions are obtained from the motion capture system. Each of the cones has a physical radius of $0.08$m, but $R$ is set to $0.75$m to account for the model mismatch between the car and the dynamics used.\n\nFor the experiment, we considered three different environments, with different cone placements, set up inside the room as shown in Fig. \\ref{fig:cones}. For each environment, a user manually controls the car and tries to steer into the cones. We monitor the condition\n\\begin{equation}\n\\label{eq:closetoboundary}\n V(x,y,v, \\theta) < 0.15\n\\end{equation}\nGiven the car's state, when \\eqref{eq:closetoboundary} is satisfied, the car is near the boundary of a BRT, so the optimal control computed from the value function is applied to safely avoid the cone. 
The optimal control is obtained from the value function as follows:\n\\begin{equation}\n\\label{eq:uopt_max}\n u_{opt} = \\arg \\max_{u \\in \\mathbb{U}} \\nabla V(x, y, v, \\theta , s)^{\\top}f(x, y, v, \\theta, u)\n\\end{equation}\n\n\\begin{figure}[h]\n\\centering\n\\begin{subfigure}[t]{.4\\textwidth}\n\\centering\n\\includegraphics[width=\\linewidth]{images\/car_center.png}\n\\caption{Environment 1}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{.4\\textwidth}\n\\centering\n\\includegraphics[width=\\linewidth]{images\/car_line_final.png}\n\\caption{Environment 2}\n\\end{subfigure}\n\\hfill\n\\begin{subfigure}[t]{.4\\textwidth}\n\\centering\n\\includegraphics[width=\\linewidth]{images\/car_apart_cross_final.png}\n\\caption{Environment 3}\n\\end{subfigure}\n\\caption{Different BRTs are used as the placement of cones changes over time, which limits where the RC car can be as it drives around the room.}\n\\label{fig:cones}\n\\end{figure}\n\n\nWe pre-compute the BRTs with a time horizon of $0.5$s for the three environments using optimized\\_dp\\cite{optimizedp} and demonstrate safety by loading the correct value function as the environment changes. \nWe choose to pre-compute the BRTs in order to emulate having an FPGA on-board without the extra latency resulting from communication with a remote AWS instance.\nFor all environments, the maximum time step that keeps the integration stable is $0.007497$s. Initially, the room contained a single cone; the cone placement then changed over time. \nThe BRT of a new environment was not used until 200ms after the environment had changed, which is longer than the time taken to compute the same BRT on an FPGA. A video of these experiments can be found at \\url{https:\/\/www.youtube.com\/playlist?list=PLUBop1d3Zm2vgPL4Hxtz8JufnIPmvrwlC}\n\n\\subsection{Hardware Correctness}\nWe use fixed-point data representations for hardware computation. 
In particular, we use 32 bits, with 5 bits to represent the integer part (including sign) and 27 bits for the fractional part. With this choice, the precision of our computation is $2^{-27}=7.45\\times10^{-9}$ and the range of our computation is from $-16$ to $16$. The area we use for the experiment is $4$m $\\times$ $6$m, hence the largest absolute distance is the diagonal of $7.2$m. Therefore, the number of integer bits is enough to represent all possible values in the solution $V$, which has the physical interpretation of minimum distance to collision over time, given \\eqref{eq:eq3} and the choice of $V_0$ in \\eqref{eq:init_V}.\n\nWe choose to synthesize and implement our design on an AWS F1 FPGA because of its flexibility and availability. To correctly input data to the FPGA, we first generate an initial value array based on the obstacles' positions and radius as described by \\eqref{eq:init_V}. This value array is then converted from floating-point to fixed-point numbers based on the bit choice discussed above. Afterward, the value array is passed to the FPGA for the HJ PDE solving procedure to start.\n\nFor all three experiments, we verified the correctness of the BRT generated by our hardware against the toolbox of \\cite{optimizedp} by comparing the maximum error between corresponding grid points. The toolbox uses 32-bit floating-point numbers. The numerical error resulting from the different representations is shown in Table \\ref{table:Error} for the three environments.\n\\begin{center}\n\\captionof{table}{ERROR COMPARISON} \n \\label{table:Error}\n \\begin{tabular}{||c c c c||} \n \\hline\n & Env. 1 & Env. 2 & Env. 3 \\\\ [0.5ex] \n \\hline\\hline\n Error & $1.68\\times10^{-6}$ & $1.78\\times10^{-6}$& $1.37\\times10^{-6}$ \\\\ \n \\hline\n\\end{tabular}\n\\end{center}\nThese negligible errors are due to the precision difference between fixed-point and floating-point numbers. 
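The 5.27 fixed-point format can be emulated in software to see why the quantization error stays at this scale. The helper below is an illustrative sketch of such a representation, not the hardware implementation.

```python
FRAC_BITS = 27   # 27 fractional bits -> precision 2**-27 (about 7.45e-9)
INT_BITS = 5     # 5 integer bits including sign -> range [-16, 16)

def to_fixed(value: float) -> int:
    """Quantize to the nearest representable 5.27 fixed-point value,
    saturating at the ends of the representable range."""
    scaled = round(value * (1 << FRAC_BITS))
    lo = -(1 << (INT_BITS - 1 + FRAC_BITS))
    hi = (1 << (INT_BITS - 1 + FRAC_BITS)) - 1
    return max(lo, min(hi, scaled))

def to_float(q: int) -> float:
    return q / (1 << FRAC_BITS)

# Round-trip error for a representative distance value (the room
# diagonal of 7.2 m) stays below half a unit in the last place.
err = abs(to_float(to_fixed(7.2)) - 7.2)
```

A single round trip loses at most half an ulp; the table above shows the accumulated error after many iterations remains several orders of magnitude larger but still negligible.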
Even though the computation is repeated for many iterations, the maximum error does not grow dramatically over time. We believe this is due to the convergence property of the BRT: as time grows, the rate of change of the grid values slows down, leading to a stable discrepancy between the floating-point and fixed-point values.\n\n\\subsection{Computational Speed and Resource Usage}\nTo measure the speed-up for all three environments, we compare the computation time on the AWS FPGA running at 250MHz against \\cite{optimizedp} and \\cite{LsetToolbox1} running on a 16-thread Intel(R) Core(TM) i9-9900K CPU at 3.60GHz. The latency here is the time it takes to compute the BRT. For the FPGA, latency can be computed by multiplying the number of clock cycles by the clock period. The results are summarized in the tables below.\n\n\n\\begin{center}\n\\captionof{table}{FPGA}\\label{table:fpga} \n \\begin{tabular}{||c c c c c||} \n \\hline\n & \\ \\textbf{Clock cycles} & \\ \\textbf{Period} & \\textbf{Iterations} & \\textbf{Latency} \\\\ [0.4ex] \n \\hline\\hline\n Env. 1 & 44155209 & 4 ns & 67 & 0.176s\\\\ \n \\hline\n Env. 2 & 44155209 & 4 ns & 67 & 0.176s\\\\\n \\hline\n Env. 3 & 44155209 & 4 ns & 67 & 0.176s\\\\ [0.5ex]\n \\hline\n\\end{tabular}\n\\end{center}\n\n\\begin{center}\n\\captionof{table}{optimized\\_dp\\cite{optimizedp}} \\label{table:op_dp} \n \\begin{tabular}{||c c c c||} \n \\hline\n & \\ \\textbf{Latency} & \\textbf{Iterations} & \\textbf{FPGA speed up} \\\\ [0.1ex] \n \\hline\\hline\n Env. 1 & 3.35 s & 67 & \\textbf{$\\times$18.9} \\\\ \n \\hline\n Env. 2 & 2.99 s & 67 & \\textbf{$\\times$17} \\\\\n \\hline\n Env. 3 & 3.42 s & 67 & \\textbf{$\\times$19.4}\\\\ [0.5ex]\n \\hline\n\\end{tabular}\n\\end{center}\n\n\\begin{center}\n \\captionof{table}{ToolboxLS\\cite{LsetToolbox1}} \\label{table:toolboxls} \n \\begin{tabular}{||c c c c||} \n \\hline\n & \\ \\textbf{Latency} & \\textbf{Iterations} & \\textbf{FPGA speed up} \\\\ [0.1ex] \n \\hline\\hline\n Env. 
1 & 25.11 s & 70 & \\textbf{$\\times$142} \\\\ \n \\hline\n Env. 2 & 25.14 s & 70 & \\textbf{$\\times$142} \\\\\n \\hline\n Env. 3 & 25.18 s & 70 & \\textbf{$\\times$142}\\\\ [0.5ex]\n \\hline\n\\end{tabular}\n\\end{center}\n \nIt can be observed that the latency of computation on the FPGA is fixed and deterministic for all three environments, while the latency on CPUs varies even though the computation remains the same. With the low latency of $0.176$s, we are able to update the value function at a frequency of $5.68$Hz. The resource usage of our design with 4 PEs is shown in the table below. \n\\begin{center}\n \\begin{tabular}{||c c c c||} \n \\hline\n & \\ \\textbf{LUT} & \\textbf{BRAM} & \\textbf{DSP} \\\\ [0.1ex] \n \\hline\\hline\n \\textbf{Used} & 26319 & 519 & 598\\\\ \n \\hline\n \\hline\n \\textbf{Available} & 111900 & 1680 & 5640\\\\ \n \\hline\n \\textbf{Utilization} & \\textbf{14.03\\%} & \\textbf{30.89\\%} & \\textbf{10.6\\%} \\\\[0.5ex]\n \\hline\n\\end{tabular}\n\\end{center}\nOn an FPGA, arithmetic operations on numbers are implemented using Digital Signal Processing (DSP) hardware or Look-Up Tables (LUTs) that perform logical functions. Our design consumes only a modest fraction of the available resources and could be scaled up to a larger grid size.\n\n\\section{CONCLUSION}\nThis paper introduces a novel customized hardware design on FPGA that allows HJ reachability analysis to be computed $16$x faster than a state-of-the-art implementation on a 16-thread CPU. Because of that, we are able to solve the HJ PDE at a frequency of 5Hz. The latency of our computation on the FPGA is deterministic for all computation iterations, which is crucial for safety-critical systems. The design approach presented here can be applied to other system dynamics and potentially to higher-dimensional systems. 
\nFinally, we demonstrate that, with the BRT re-computed at 5Hz, a robot car can safely avoid obstacles.\n\n\\section{Solving the HJ PDE Using FPGAs}\n\\label{sec:fpga}\n\nBefore going into the details of the design, we introduce some terminology that will be relevant throughout this section.\n\nIn digital systems, time is discretized into units of a \\emph{clock cycle}, which is the amount of time it takes for an operation such as computing, loading, or storing to proceed. Each clock cycle is typically a few nanoseconds.\nDynamic Random Access Memory (DRAM) is a type of memory that sits outside of the FPGA; it has higher memory capacity but takes many more clock cycles to access.\n\nOur custom hardware comprises two main components: an on-chip memory buffer and processing elements (PEs), or computing cores (shown in Fig \\ref{fig:overall_system}). The memory buffer is on-chip storage, providing access to all the grid points a PE needs to compute a new value function. Each PE is a digital circuit that takes 9 grid points from the memory buffer to compute a new value function at a particular grid point according to algorithm 2 (line 3-10). In the following subsections, we go into the details of each component.\n\\subsection{Indexed Processing Element (PE)}\n\nThe PE has the following target design objectives: \n(1) increase compute throughput (defined as the number of outputs generated per second) through pipelining, (2) reduce the computation time of each PE, and (3) ensure the correctness of the result while minimizing data transfer between DRAM and the FPGA.\n\nIn our design, we use 4 PEs (as shown in Fig. \\ref{fig:overall_system}). Each PE has an associated index $idx$ with $0 \\le idx \\le 3$ and computes the grid point $V_{t+1}(i, j, k, l + idx)$.
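The indexed work assignment can be illustrated with a short sketch: the innermost grid dimension is split round-robin among the 4 PEs, so the PE with index $idx$ handles every fourth $l$-index. This is illustrative Python, not the hardware schedule.

```python
# Round-robin split of the innermost dimension among 4 PEs:
# the PE with index idx computes V_{t+1}(i, j, k, l + idx).
NUM_PES = 4
N4 = 36   # innermost grid dimension (the theta axis in our grid)

assignment = {idx: [] for idx in range(NUM_PES)}
for l_base in range(0, N4, NUM_PES):
    for idx in range(NUM_PES):
        if l_base + idx < N4:
            assignment[idx].append(l_base + idx)
```

Because consecutive $l$-indices go to different PEs, the four PEs consume four consecutive grid points from the memory buffer every clock cycle, which matches the buffer design described below.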
At the beginning of the computation of algorithm 2, each PE takes as input a grid index $(i,j,k,l)$ and its 8 neighbours to start computing $V_{t+1}(i,j,k,l)$ according to algorithm 2 (line 2-10).\n\nTo increase computation throughput, each PE is fully pipelined. Similar to an assembly line, the PE operation is divided into multiple stages, each taking a few clock cycles to complete (Fig. \\ref{fig: pipelining_schedule}). Each stage within the pipeline is physical hardware whose computation corresponds to one of the lines in algorithm 2 (line 3-10) for a particular index $i, j, k, l$. Every clock cycle, the results from a stage are passed to the next stage, following the sequential order of algorithm 2. At any time during operation, the processing element is computing different intermediate components for multiple indices concurrently (explained in Fig.\\ref{fig: pipelining_schedule}).\n\nTo ensure that the computation is correct, inside each PE there are index counters that keep track of the loop variables $i, j, k, l$, with the inner loop variable incrementing by one every clock cycle. These indices are used to correctly address the state vectors during the system dynamics computation. To avoid accessing external DRAM, we store the 4 state\/position vectors $x$, as well as any fixed non-linear functions of these states such as $\\cos(\\cdot)$ and $\\sin(\\cdot)$, as lookup tables in on-chip memory, since the state vectors depend only on the grid configuration and do not change with the environment. Each PE has its own look-up table to avoid communication between PEs. Keeping this data on-chip requires only a few kilobytes of memory and removes the need to access DRAM throughout the computation.\n\\begin{figure*}[h]\n \\centering\n \\includegraphics[width=0.85\\textwidth, height=250pt]{figures\/Mem_Buf.pdf}\n \\caption{Four lines of memory buffer supply all the grid data to the four PEs. 
Each of the rectangular blocks is a FIFO queue synthesized as Block RAM (BRAM). The notation above each block gives the size of the FIFO queue, with $N_1, N_2, N_3, N_4$ as the four grid dimensions. Note that the queue's size depends only on three dimensions. Every clock cycle, new grid points streamed from DRAM start entering each buffer line (left-hand side) and grid points at the end of the lines are discarded (right-hand side). }\n \\label{fig: parallel_line}\n\\end{figure*}\n\\subsection{On-Chip Memory Buffer}\nThe memory buffer has the following key design objectives: (1) minimize the amount of on-chip memory usage and external DRAM accesses while (2) concurrently providing 9 grid points to each PE every clock cycle.\n\nOne problem of working with a high-dimensional grid is that the whole grid can take up tens of megabytes and therefore cannot fully fit in a state-of-the-art FPGA's on-chip memory. Instead of storing everything on-chip, in our design grid points are streamed continuously from DRAM into an on-chip memory buffer (shown in Fig.\\ref{fig:overall_system}) and can be re-used many times for spatial derivative computations in 4 dimensions before being discarded. From the grid dimensions, we can compute the maximum reuse distance beyond which a grid point can be safely discarded as no longer needed. This maximum reuse distance equals the minimum size of the on-chip memory buffer, which depends only on $N-1$ dimensions \\cite{soda} and fits in an FPGA's on-chip memory. Our memory buffer is implemented as a First-In-First-Out (FIFO) queue. Every clock cycle, a new grid point supplied from DRAM enters the FIFO queue while the grid point reaching the end of the queue is discarded (shown in Fig. \\ref{fig: parallel_line}).\n\nFPGA on-chip memory buffers are composed of standard Blocks of Random Access Memory (BRAM). 
Each BRAM has two ports, so at most two reads can be requested concurrently in the same clock cycle. If all 9 grid points (shown in Fig. \\ref{fig:my_label}) are stored in the same BRAM at the same time, a PE would have to wait for 5 clock cycles before performing the computation. One way to increase the number of accesses per clock cycle is to duplicate the data in multiple BRAMs, but this would not work well for multidimensional arrays since the array copies easily exceed FPGA on-chip memory. A different technique is \\emph{memory banking}, which partitions the on-chip memory into multiple BRAMs that can concurrently deliver data to the PE, allowing the PE to start computing the new value function for a grid point in one clock cycle. \n\nTo allow concurrent access for multiple PEs, we adopted the parallel memory buffer microarchitecture from \\cite{soda}. Corresponding to the number of PEs, our on-chip storage structure is made of 4 line buffers. Each of these line buffers is a sequence of connected BRAMs acting as a queue: a grid point moves towards the end of the line every clock cycle. The two endpoints of each BRAM (shown in Fig.~\\ref{fig: parallel_line}) provide tapping points that are connected as inputs to the PEs.\nThe number of PEs, therefore, is mainly limited by the DRAM bandwidth. \n\nWe also made modifications to the execution flow in \\cite{OptimalMicro} to accommodate computing the value function at the boundary. Once each of the buffer lines is half full, all the processing elements can start computing a new value function. \n\n\\subsection{Fixed-Point Representation}\nComputing a new value function based on algorithm 2 involves multiple addition operations on floating-point numbers. At the hardware level, the addition of floating-point numbers is as computationally expensive as fixed-point multiplication, which would take up significant resources and chip area. 
Instead, we use fixed-point representations for our data to reduce the burden on the hardware. We will show in the next section that this has little impact on the correctness of the computation if the radix point is chosen carefully for the grid configuration.\n\n\\section{Introduction}\n\\label{intro}\n\nThe renormalization group procedure for effective particles (RGPEP) has been developed during the last years~\\cite{Glazek:2012qj,GlazekTrawinskiAdS,Gomez-Rocha:2015esa} as a non-perturbative tool for constructing bound-states in quantum chromodynamics (QCD)~\\citep{Wilsonetalweakcoupling}.\nIt introduces the concept of effective particles, which differ from the bare or canonical ones, by having size $s$, corresponding to the momentum scale $\\lambda=1\/s$. \nCreation and annihilation of effective particles in the Fock space are described by the action of effective particle operators, $a^\\dagger_s$ and $a_s$, on states built from the vacuum state $|0\\rangle$ using $a_s^\\dagger$; bare particle operators, $a^\\dagger_0$ and $a_0$, appearing in the canonical Hamiltonian, create and annihilate pointlike particles (with size $s=0$).\n\nWe are interested in calculating the evolution of quark and gluon quantum states, describing their dynamics and studying their binding. In a single formulation, the sought effective Hamiltonian must provide a means for the constituent-like behavior of quarks and gluons in hadrons with the measured quantum numbers, and also an explanation for the short-distance phenomena of weakly interacting pointlike partons.\n\nIn this work, we apply the RGPEP to interacting gluons in the absence of quarks. 
We will demonstrate that the RGPEP passes the test of describing asymptotic freedom, which is a precondition for any approach aiming at using QCD, especially for tackling nonperturbative issues, such as the ones that emerge when one allows effective gluons to have masses~\\cite{Wilsonetalweakcoupling}.\n\nWe start from the regularized canonical Hamiltonian for quantum Yang-Mills field in the Fock space, obtained from the corresponding Lagrangian density. We use the RGPEP to introduce effective particles and calculate a family of effective Hamiltonians characterized by a scale or size parameter $s$. These Fock-space Hamiltonians depend on the effective-particle size parameter in an asymptotically free way: the coupling constant in a three-gluon interaction term vanishes with the inverse of $\\ln(1\/s)$.\n\nIn the following, we summarize the procedure, which is general and can be applied to any other quantum field theory.\nFor a more extended and detailed explanation, we refer the reader to Ref.~\\cite{Gomez-Rocha:2015esa}.\n\n\\section{Renormalization group procedure for effective particles }\n\\label{sec:1}\n\n\\subsection{Initial Hamiltonian}\n\\label{subsec:2:1}\n\nWe derive the canonical Hamiltonian for Yang-Mills theories from the Lagrangian density:\n\\begin{eqnarray}\n {\\cal L} = - {1 \\over 2} \\text{tr} \\, F^{\\mu \\nu}\nF_{\\mu \\nu} \\ ,\n\\end{eqnarray}\nwhere \n$F^{\\mu \\nu} = \\partial^\\mu A^\\nu \n- \\partial^\\nu A^\\mu + i g [A^\\mu, A^\\nu]$, \n$A^\\mu = A^{a \\mu} t^a$, \n$ [t^a,t^b] = i f^{abc} t^c$, which leads to the energy-momentum tensor, \n\\begin{eqnarray}\n {\\cal T} ^{\\mu \\nu} & = &\n-F^{a \\mu \\alpha} \\partial^\\nu A^a_\\alpha + g^{\\mu \\nu} F^{a \\alpha\n\\beta} F^a_{\\alpha \\beta}\/4 \\ .\n\\end{eqnarray}\nWe choose the front-form (FF) of dynamics~\\citep{Dirac1949} which consists of setting the quantization surface on the hyperplane $x^+=x^0+x^3=0$. 
Using the gauge $A^+=0$, the Lagrange equations lead to the condition\n\\begin{eqnarray}\nA^- & = & \n{ 1 \\over \\partial^+ } \\, 2 \\, \\partial^\\perp A^\\perp \n- { 2 \\over \\partial^{ + \\, 2} } \\ \nig \\, [ \\partial^+ A^\\perp, A^\\perp] \\ ,\n\\end{eqnarray}\nso that the only degrees of freedom are the fields $A^\\perp$.\n\nIntegration of $ {\\cal T} ^{+-}$ over the front $x^+=0$ leads to the FF energy of the constrained gluon field:\n\\begin{equation}\nP^- = {1 \\over 2}\\int dx^- d^2 x^\\perp {\\cal T} ^{+-}\\, |_{x^+=0} \\quad .\n\\label{Hg}\n\\end{equation}\nThis operator contains a series of products of 2nd, 3rd or 4th powers of the field $A^\\mu$ or their derivatives. The energy momentum tensor $ {\\cal T} ^{+-}$ can be written as\n\\begin{eqnarray}\n {\\cal T} ^{+ -} = {\\cal H}_{A^2} + {\\cal H}_{A^3} + {\\cal H}_{A^4} + {\\cal\nH}_{[\\partial A A]^2} \\ ,\n\\end{eqnarray}\nwhere\n\\cite{Casher:1976ae,Thorn:1979gv,BrodskyLepage}\n\\begin{eqnarray}\n\\label{HA2}\n{\\cal H}_{A^2} & = & - {1\\over 2} A^{\\perp a } (\\partial^\\perp)^2 A^{\\perp a} \\ , \\\\\n\\label{HA3}\n{\\cal H}_{A^3} & = & g \\, i\\partial_\\alpha A_\\beta^a [A^\\alpha,A^\\beta]^a \\ , \\\\\n\\label{HA4}\n{\\cal H}_{A^4} & = & - {1\\over 4} g^2 \\, [A_\\alpha,A_\\beta]^a[A^\\alpha,A^\\beta]^a \\ , \\\\\n\\label{HA2A2}\n{\\cal H}_{[\\partial A A]^2} & = & {1\\over 2}g^2 \\,\n[i\\partial^+A^\\perp,A^\\perp]^a {1 \\over (i\\partial^+)^2 }\n[i\\partial^+A^\\perp,A^\\perp]^a \\ .\n\\end{eqnarray}\nThe quantum canonical Hamiltonian is obtained by replacing the field $A^\\mu$ by\nthe quantum field operator\n\\begin{eqnarray}\n\\hat A^\\mu & = & \\sum_{\\sigma c} \\int [k] \\left[ t^c \\varepsilon^\\mu_{k\\sigma}\na_{k\\sigma c} e^{-ikx} + t^c \\varepsilon^{\\mu *}_{k\\sigma}\na^\\dagger_{k\\sigma c} e^{ikx}\\right]_{x^+=0} \\ ,\n\\end{eqnarray}\nwhere $[k] = \\theta(k^+)\ndk^+ d^2 k^\\perp\/(16\\pi^3 k^+)$,\nthe polarization four-vector is defined as $\\varepsilon^\\mu_{k\\sigma} \n= 
(\\varepsilon^+_{k\\sigma}=0, \\varepsilon^-_{k\\sigma} \n= 2k^\\perp \\varepsilon^\\perp_\\sigma\/k^+, \n\\varepsilon^\\perp_\\sigma)$ and the indices $\\sigma$ and $c$ denote spin and color quantum numbers, respectively. The creation and annihilation operators satisfy the commutation relations\n\\begin{eqnarray}\n\\left[ a_{k\\sigma c}, a^\\dagger_{k'\\sigma' c'} \\right] \n& = & \nk^+\n\\tilde \\delta(k - k') \\,\\, \\delta^{\\sigma \\sigma'}\n\\, \\delta^{c c'} \\ , \n\\quad\n\\left[ a_{k\\sigma c}, a_{k'\\sigma' c'} \\right] \n\\ = \\\n\\left[ a^\\dagger_{k\\sigma c}, a^\\dagger_{k'\\sigma' c'} \\right] \n\\ = \\\n0 \\ ,\n\\end{eqnarray}\nwith $\\tilde \\delta(p) = 16 \\pi^3 \\delta(p^+) \\delta(p^1)\n\\delta(p^2)$.\n\n\nThe canonical Hamiltonian is divergent and needs regularization. At every interaction term, every creation and annihilation operator in the canonical Hamiltonian is multiplied by a regulating factor\\footnote{Other regulating functions are available~\\cite{Gomez-Rocha:2015esa}. 
Finite dependence of the effective Hamiltonian on the small-x regularization may be thought to be related to the vacuum state problem, the phenomena of symmetry breaking and confinement~\\cite{Wilsonetalweakcoupling}.}\n\\begin{eqnarray}\nr_{\\Delta \\delta}(\\kappa^\\perp, x) \n& = & \n\\exp(-\\kappa^\\perp \/ \\Delta)\\, x^\\delta \\theta(x-\\epsilon) \\ ,\n\\end{eqnarray} \nwhere $x$ is the relative momentum fraction $x_{p\/P}=p^+\/P^+$, $\\kappa$ is the relative transverse momentum, $\\kappa_{p\/P}=p^\\perp-xP^\\perp$, and $P$ is the total momentum in the term one considers.\nThe regulating function prevents the interaction terms from acting if the change of transverse momentum between gluons were to exceed $\\Delta$, or if the change of longitudinal momentum fraction $x$ were to be smaller than $\\delta$.\n\n\\subsection{Derivation of the effective Hamiltonian}\nThe RGPEP transforms bare, or pointlike, creation and annihilation operators into effective ones~\\citep{Glazek:2012qj}. Effective particle operators of size $s=t^{1\/4}$ are related to bare ones by a certain unitary transformation\n\\begin{eqnarray}\n\\label{at}\na_t & = & {\\cal U} _t \\, a_0 \\, {\\cal U} _t^\\dagger \\ .\n\\end{eqnarray}\nSince the Hamiltonian operator cannot be affected by this change, we require\n\\begin{eqnarray}\n {\\cal H} _t(a_t) & = & {\\cal H} _0(a_0) \\ , \n\\end{eqnarray}\nwhich is equivalent to writing:\n\\begin{eqnarray}\n\\label{cHt}\n {\\cal H} _t(a_0) = {\\cal U} _t^\\dagger {\\cal H} _0(a_0) {\\cal U} _t \\ .\n\\end{eqnarray}\nDifferentiating both sides of~(\\ref{cHt}) leads to the RGPEP equation:\n\\begin{eqnarray} \n\\label{ht1}\n {\\cal H} '_t(a_0) & = &\n[ {\\cal G} _t(a_0) , {\\cal H} _t(a_0) ] \\ ,\n\\end{eqnarray} \nwhere $ {\\cal G} _t = - {\\cal U} _t^\\dagger {\\cal U} '_t$,\nand therefore, \n$\n {\\cal U} _t \n= \nT \\exp{ \\left( - \\int_0^t d\\tau \\, {\\cal G} _\\tau\n\\right) }$. 
$T$ denotes ordering in $\\tau$.\nThe RGPEP equation~(\\ref{ht1}) is the engine of this procedure. It governs the evolution of effective particles with the scale parameter $t$. It encodes the relation between pointlike quantum gluons appearing in the canonical Hamiltonian and the effective,\nor constituent ones referred to by effective phenomenological models describing bound states.\n\n We choose the generator to be the commutator $ {\\cal G} _t = [ {\\cal H} _f, {\\cal H} _{Pt} ]$,\\footnote{Other generators are also allowed but may lead to more complicated expressions~\\cite{Glazek:2000dc}. Our choice is similar to Wegner's~\\cite{Wegner}.} where $ {\\cal H} _f$ is the non-interacting term of the Hamiltonian and $ {\\cal H} _{Pt}$ is defined in terms of $ {\\cal H} _t$ . \n\n$ {\\cal H} _t$ is a series of normal-ordered products of creation and annihilation operators,\n\\begin{eqnarray}\n\\label{Hstructure} \n {\\cal H} _t(a_0) =\n\\sum_{n=2}^\\infty \\, \n\\sum_{i_1, i_2, ..., i_n} \\, c_t(i_1,...,i_n) \\, \\, a^\\dagger_{0i_1}\n\\cdot \\cdot \\cdot a_{0i_n} \\, .\n\\end{eqnarray} \n$ {\\cal H} _{Pt}$ differs from $ {\\cal H} _t$ by \n the vertex total $+$-momentum factor,\n\\begin{eqnarray}\n\\label{HPstructure} \n {\\cal H} _{Pt}(a_0) & = &\n\\sum_{n=2}^\\infty \\, \n\\sum_{i_1, i_2, ..., i_n} \\, c_t(i_1,...,i_n) \\, \n\\left( {1 \\over\n2}\\sum_{k=1}^n p_{i_k}^+ \\right)^2 \\, \\, a^\\dagger_{0i_1}\n\\cdot \\cdot \\cdot a_{0i_n} \\, .\n\\end{eqnarray} \nThe initial condition for the differential equation~(\\ref{ht1}) is given by the regularized canonical Hamiltonian given in Section~\\ref{subsec:2:1} plus counterterms.\nMore precisely, the initial condition is given by the physical fact that at very small distances or very high energies, the regularized canonical Hamiltonian must be recovered and any regularization dependence must be removed. 
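The band-diagonalizing tendency of this Wegner-like generator can be illustrated numerically. The following sketch is our own toy model, not the RGPEP calculation itself: it integrates $ {\cal H} '_t = [ {\cal G} _t, {\cal H} _t]$ with the simplified generator choice $ {\cal G} _t = [ {\cal H} _f, {\cal H} _t]$ for a hypothetical $2\times 2$ Hamiltonian matrix whose diagonal part plays the role of $ {\cal H} _f$:

```python
import numpy as np

def commutator(a, b):
    return a @ b - b @ a

# Hypothetical 2x2 "Hamiltonian": a diagonal free part plus an
# off-diagonal coupling (a made-up matrix, for illustration only).
H0 = np.array([[1.0, 0.5],
               [0.5, 3.0]])
H = H0.copy()

dt = 1e-3
for _ in range(5000):
    Hf = np.diag(np.diag(H))       # the role of the free part H_f
    G = commutator(Hf, H)          # simplified Wegner-like generator G_t = [H_f, H_t]
    H = H + dt * commutator(G, H)  # Euler step of H'_t = [G_t, H_t]

# The off-diagonal coupling decays with t, while the spectrum is
# (up to the Euler discretization error) preserved, since the exact
# flow is a unitary change of basis.
print(abs(H[0, 1]))
print(np.sort(np.linalg.eigvalsh(H)))
print(np.sort(np.linalg.eigvalsh(H0)))
```

For a diagonal gap $a-b$ the off-diagonal entry decays roughly like $e^{-(a-b)^2 t}$, which is the mechanism behind the vertex form factors generated by the RGPEP.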
\n\nWe solve the RGPEP equation~(\\ref{ht1}) for the effective Hamiltonians using an expansion in powers of the coupling constant $g$ up to third order and we focus our studies on the structure of the three-gluon term~\\citep{Glazek:2012qj,Gomez-Rocha:2015esa}.\n\n\n\n\n\n\\section{The three-gluon vertex}\n\\label{sec:2}\n\nThe third-order effective Hamiltonian expansion has the following structure:\n\\begin{eqnarray}\n\\label{Hpert}\nH_t & = & \nH_{11,0,t} + H_{11,g^2,t} + H_{21,g,t} + \nH_{12,g,t} + H_{31,g^2,t} + H_{13,g^2,t} + \nH_{22,g^2,t} + H_{21,g^3,t} + H_{12,g^3,t} \\ .\n\\end{eqnarray}\nThe first and second subscripts indicate the number of creation and annihilation operators, respectively. The third subscript labels the order in powers of $g$. Finally, the last label indicates the dependence on the scale parameter $t$.\n\nThe initial condition at $t=0$ has the form:\n\\begin{eqnarray}\n\\label{Hper0}\nH_0 & = & \nH_{11,0,0} + H_{11,g^2,0} + H_{21,g,0} + \nH_{12,g,0} + H_{31,g^2,0} + H_{13,g^2,0} + \nH_{22,g^2,0} + H_{21,g^3,0} + H_{12,g^3,0} \\ ,\n\\end{eqnarray} \nand consists of the regularized canonical Hamiltonian plus counterterms. The latter are calculated in such a way that $H_t$ remains finite when $\\Delta\\to\\infty$. It is not possible to remove the small-$x$ cutoff $\\delta$ at this point. However, this dependence will be of interest in higher-order calculations, since the small-$x$ phenomena are thought to be related to the vacuum-state behavior. The last step in the RGPEP is to replace bare creation and annihilation operators by effective ones.\n\nThe three-gluon vertex and the running of the Hamiltonian coupling are encoded in the sum of first- and third-order terms:\n\\begin{eqnarray}\nH_{(1+3),t} & = & (H_{21,g,t} + \nH_{12,g,t}) + ( H_{21,g^3,t} + H_{12,g^3,t} ) \\ .\n\\end{eqnarray}\nThe third-order solution requires the knowledge of the first- and second-order solutions. 
The sum of all these contributions has the form\\footnote{The subscripts 1,2,3 refer to the gluon lines indicated in Fig.~\\ref{Fig-runingg}. So, e.g. $\\kappa_{12}^\\perp= x_{2\/3}\\kappa_{1\/3}^\\perp - x_{1\/3}\\kappa_{2\/3}^\\perp$ . }~\\citep{Gomez-Rocha:2015esa} (see Fig.~\\ref{Fig-runingg}):\n\\begin{eqnarray}\nH_{(1+3),t} & = &\n\\sum_{123}\\int[123] \\ \\tilde \\delta(k_1+k_2-k_3) \n\\ f_{12t} \\, \\left[ \\tilde Y_{21\\,t}(x_1,\\kappa_{12}^\\perp, \\sigma)a^\\dagger_{1t} a^\\dagger_{2t} a_{3t} \\ + \\tilde Y_{12\\,t}(x_1,\\kappa_{12}^\\perp, \\sigma)\\ a_{3t}^\\dagger a_{2t} a_{1t} \\right]\\nonumber \\\\ \n\\end{eqnarray}\nwhere $f_{12 t}=e^{-(k_1+k_2)^4t}$ is a form factor and $\\tilde Y_{21\\,t}(x_1,\\kappa_{12}^\\perp, \\sigma)$ is the object of our study. \nWe define the Hamiltonian coupling constant $g_t$ as the coefficient in front of the canonical color, spin and momentum dependent factor $Y_{123}(x_1,\\kappa_{12}^\\perp,\\sigma)= i f^{c_1 c_2 c_3} [ \\varepsilon_1^*\\varepsilon_2^*\n\\cdot \\varepsilon_3\\kappa_{12}^\\perp - \\varepsilon_1^*\\varepsilon_3 \\cdot\n\\varepsilon_2^*\\kappa_{12}^\\perp {1\\over x_{2\/3}} - \\varepsilon_2^*\\varepsilon_3\n\\cdot \\varepsilon_1^*\\kappa_{12}^\\perp {1\\over x_{1\/3}} ]$ in the limit $\\kappa_{12}^\\perp\\to 0$, for some value of $x_1$ denoted by $x_0$.\nSo,\n\\begin{eqnarray}\n\\label{consider}\n\\lim_{\\kappa_{12}^\\perp \\to 0}\n\\tilde Y_t(x_1,\\kappa_{12}^\\perp, \\sigma) \n& = &\n\\lim_{\\kappa_{12}^\\perp \\to 0}\n\\left[\nc_t(x_1,\\kappa_{12}^\\perp) \nY_{123}(x_1,\\kappa_{12}^\\perp, \\sigma) \n+ \ng^3 \\tilde T_{3 \\,\\text{finite}}(x_1,\\kappa_{12}^\\perp, \\sigma) \n\\right] .\n\\end{eqnarray}\nwhere $\\tilde T_{3 \\,\\text{finite}}(x_1,\\kappa_{12}^\\perp, \\sigma)$ is a finite part contained in the counterterm and does not contribute to the running coupling,\n\\begin{eqnarray}\n\\lim_{\\kappa_{12}^\\perp \\to 0}\nc_t(x_1,\\kappa_{12}^\\perp) \n& = &\ng + g^3 \\lim_{\\kappa_{12}^\\perp \\to 
0}\n\\left[\n c_{3t }(x_1,\\kappa_{12}^\\perp) \n- \n c_{3t_0}(x_1,\\kappa_{12}^\\perp) \n\\right] \\ . \n \\label{limitc3}\n\\end{eqnarray}\nAssuming some value $g_0$ at some small $t_0$, $g_{t_0}=g_0$, we have\n\\begin{eqnarray}\n\\label{gl1}\ng_t\n& \\equiv & c_t(x_1) \n\\ = \\\n\\label{gl2}\ng_0 + g_0^3 \n\\left[\n c_{3t }(x_1) \n- \n c_{3t_0}(x_1) \n\\right] \\ .\n\\end{eqnarray}\nWe now introduce the momentum scale parameter $\\lambda=t^{-1\/4}$. This yields\n\\begin{eqnarray}\n\\label{gl}\ng_\\lambda & = &\ng_0 - { g_0^3 \\over 48 \\pi^2 } N_c \\, 11 \\,\\ln\n{ \\lambda \\over \\lambda_0} \\ .\n\\end{eqnarray} \nDifferentiation of the latter with respect to $\\lambda$ leads to\n\\begin{eqnarray}\n\\lambda {d \\over d\\lambda} \\, g_\\lambda\n& = & \\beta_0 g_\\lambda^3 \\ , \\quad \\text{with}\\quad \\beta_0 \\, = \\, - { 11 N_c \\over 48\\pi^2 } \\ .\n\\end{eqnarray}\nThis result equals the asymptotic freedom result \nin Refs.~\\cite{Gross:1973id,Politzer:1973fx},\nwhen one identifies $\\lambda$ with the momentum \nscale of external gluon lines in Feynman diagrams.\nOur result also coincides with the expression obtained in~\\citep{Glazek:2000dc}, where an analogous calculation was performed using a different generator. \n\n\n\\begin{figure}[h]\n \\includegraphics[width=0.9\\textwidth]{runningg3rdorderNEW2}\n \\caption{Graphical representation of terms contributing to the effective three-gluon vertex (third-order expansion)~\\citep{Gomez-Rocha:2015esa}. Thin internal lines correspond to intermediate bare gluons and thick external lines correspond to the creation and\nannihilation operators that appear in the three-gluon FF Hamiltonian interaction\nterm for effective gluons of size $s$.\nDashed lines with transverse bars represent the combined contributions of terms~(\\ref{HA4}) and~(\\ref{HA2A2}). 
The black dots indicate counterterms.}\n \\label{Fig-runingg}\n\\end{figure}\n\n\\section{Summary and conclusion}\n\nWe have applied the RGPEP to the quantum $SU(3)$ Yang-Mills theory and extracted the running coupling from the three-gluon-vertex term in the third-order effective Hamiltonian.\nThe result turns out to be independent of the choice of the generator, as it coincides with the one obtained in an analogous calculation performed in~\\cite{Glazek:2000dc}, using a different generator. The present generator, however, leads to simpler equations than the older one, which is desired and needed for our forthcoming fourth-order calculations, required for any attempt at a description of physical systems using QCD~\\citep{Wilsonetalweakcoupling}. \nThe obtained running coupling is of the form that is familiar from other formalisms and renormalization schemes and passes the test of producing asymptotic freedom, which any method aiming at solving QCD must pass.\n\n\n\\begin{acknowledgements}\nPart of this work was supported by the Austrian Science Fund (FWF) under project No. P25121-N27. Fig.~\\ref{Fig-runingg} was produced with JaxoDraw~\\citep{Binosi:2003yf}. \n\\end{acknowledgements}\n\n\n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhdns b/data_all_eng_slimpj/shuffled/split2/finalzzhdns new file mode 100644 index 0000000000000000000000000000000000000000..a97ab0a88742e46dbd9c75e9a8b2842d29e1d8ac --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhdns @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nAssume we are given a vector field $X$ on an $m$-dimensional manifold $M$, and an equilibrium $e$ of $X$. 
Once a local chart $(x_1,..,x_m): U \\subset M \\to {\\mathbb R}^m$ is chosen, the linearization of $X$ at $e$ is the linear system\n\\begin{equation}\\label{linear system}\n\\dot x = JX_{e} \\, x, \\qquad \\text{where} \\quad JX_{e} = \\begin{pmatrix}\n \\frac{\\partial X_1}{\\partial x_1} (e) & \\cdots & \\frac{\\partial X_1}{\\partial x_m} (e) \\\\\n \\vdots & \\ddots & \\vdots \\\\\n \\frac{\\partial X_m}{\\partial x_1} (e) & \\cdots & \\frac{\\partial X_m}{\\partial x_m} (e) \\\\\n\\end{pmatrix}.\n\\end{equation}\nAmong equilibria, the \\emph{hyperbolic} equilibria (those for which the spectrum of $JX_{e}$ has no eigenvalues with zero real part) are particularly important: they are generic, and the linear system \\eqref{linear system} is topologically conjugate to the original one in a neighbourhood of the equilibrium \\cite{1960.PAMS.Hartman, 1961.D.Grobman}. In these cases the Jordan blocks of the matrix $JX_{e}$ have a very mild effect on the quantitative form of solutions (secular terms), and no effect on the qualitative structure of solutions. It follows that hyperbolic equilibria can be classified using only the spectral decomposition of the matrix $JX_{e}$. In particular every equilibrium can be given an inertia-type decomposition using the names \\emph{stable} and \\emph{unstable} to indicate the sign of the eigenvalues and \\emph{node} and \\emph{focus} to indicate whether the eigenvalues are real or complex. \n\nIn this article we classify hyperbolic equilibria using the symbols\n\\[\nf_\\beta^\\alpha n_\\delta^\\gamma, \\qquad \\text{where} \\quad \\alpha,\\beta,\\gamma,\\delta \\in \\mathbb N.\n\\]\nWith $f^\\alpha_\\beta$ we indicate the direct sum of $\\alpha$ unstable foci and $\\beta$ stable foci; with the symbol $n^\\gamma_\\delta$ we indicate the direct sum of $\\gamma$ unstable nodes and $\\delta$ stable nodes. Of course $2 \\alpha + 2\\beta + \\gamma +\\delta = m$, the dimension of the phase space. 
For the sake of clarity, in classical treatises the name stable node typically refers to what we call stable double node $n_2$, the name unstable node to what we call unstable double node $n^2$, and the name saddle to what we call $n^1_1$, the direct product of a 1-dimensional stable and a 1-dimensional unstable node. We give the following definition.\n\\begin{defi}\nGiven a vector field $X$ and a hyperbolic equilibrium $e$, let\n\\begin{itemize}\n\\item $\\alpha$ be the number of couples of complex conjugate eigenvalues of $JX_{e}$ with positive real part; \n\\item $\\beta$ be the number of couples of complex conjugate eigenvalues of $JX_{e}$ with negative real part; \n\\item $\\gamma$ be the number of positive real eigenvalues of $JX_{e}$;\n\\item $\\delta$ be the number of negative real eigenvalues of $JX_{e}$.\n\\end{itemize}\nWe call the numbers $\\alpha, \\beta,\\gamma,\\delta$ \\emph{spectral indices}, and we call the symbol $f_\\beta^\\alpha n_\\delta^\\gamma$ the \\emph{spectral type} of $e$.\n\\end{defi}\n\nThe investigation of the spectral type of an equilibrium in dimension 2 is trivial. In fact the linearisation of $X$ at $e$ yields a $2\\times 2$ matrix whose characteristic polynomial is $p(\\lambda) = \\lambda^2 - d_1 \\lambda + d_2$, where $d_1,d_2$ are the principal invariants of $JX_e$, that is $d_1 = \\tr JX_{e}$ and $d_2 = \\det JX_{e}$. The spectral type can be classified in the space of invariants in the well-known diagram in Figure~\\ref{Marginal2}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm]{Marginal2.pdf}\n\\end{center}\n\\caption{The decomposition of the space of invariants $d_1$ (the trace) and $d_2$ (the determinant) associated to the equilibrium of a 2-dimensional dynamical system. 
In each open set the spectral type of the equilibrium is specified.}\\label{Marginal2}\n\\end{figure}\nIn this diagram the red half-line is the semi-algebraic set $\\mathcal{R} = \\{(d_1,d_2) \\,|\\, d_1 = 0, d_2 > 0\\}$, and it corresponds to the case in which $JX_{e}$ has a conjugate pair of non-zero purely imaginary eigenvalues. If we slightly move the principal invariants of $JX_{e}$ from right to left across such a line, the sign of the real part of the two complex conjugate eigenvalues changes from positive to negative, meaning that the spectral type of the equilibrium changes from $f^1$ (unstable focus) to $f_1$ (stable focus). This event is called \\emph{Hopf bifurcation}.\n\nThe blue line is the algebraic set $\\mathcal{Z} = \\{(d_1,d_2) \\,|\\, d_2 = 0\\}$, and it corresponds to the event of zero being an eigenvalue of $JX_{e}$. If the principal invariants of $JX_{e}$ slightly move across such a line, the sign of one real eigenvalue of $JX_{e}$ changes from positive to negative or vice versa, meaning that the spectral type of the equilibrium changes from $n^2$ (unstable node) or $n_2$ (stable node) to $n_1^1$ (saddle). This event is called \\emph{saddle-node bifurcation}.\n\nThe green curve is the algebraic set $\\mathcal{D} = \\{(d_1,d_2) \\,|\\, d_1^2 -4 d_2 = 0\\}$, where $d_1^2 -4 d_2$ is the discriminant of the polynomial $p$, and it corresponds to the event of $p$ having a double root in the real line. If the principal invariants of $JX_{e}$ slightly move across such a curve, two real eigenvalues with the same sign become a double real eigenvalue and then transform into a pair of complex conjugate eigenvalues with real part having the same sign as the double root (and hence as the two original eigenvalues). This means that the spectral type of the equilibrium changes from $f^1$ to $n^2$ or from $f_1$ to $n_2$. 
This event is called \\emph{focus-node bifurcation}.\n\nThe same type of analysis does not seem to have been carried out for higher dimensional systems, and in the literature it is possible to find only partial results, focussing on stability, which go under the name of Routh-Hurwitz conditions \\cite{1895.MA.Hurwitz, 1877.Routh} (see also \\cite{1971.JIMA.Barnett, 1991.CSSP.Anagnost.Desoer, 1992.CSM.Clark}).\n\nIn this article we set the analytical framework for a spectral analysis (Section~\\ref{formal conditions}) and we perform a thorough investigation of the 3 and 4 dimensional cases, where the expressions are simple enough to be written and the results can be pictorially represented (Sections~\\ref{3-dimensional case} and \\ref{4-dimensional case}). The general expressions in higher dimensional cases become cumbersome (in Section~\\ref{5 and 6 dimensional case} we show their form in dimensions 5 and 6), and the geometric representation impossible. In Section~\\ref{non-genericity} we discuss typical non-generic situations and their effect on our approach. In Section~\\ref{7} we use Sturm sequences and the residue theorem to analytically compute the spectral indices. The general formulas, or even better the procedure to compute them, can be used in particular systems depending on few significant parameters, and give a representation of the bifurcations of the system. The last section is devoted to illustrating this fact.\n\n\\section{The formal conditions}\\label{formal conditions}\n\nConsider an equilibrium $e$ of a vector field $X$, and consider the characteristic polynomial of $JX_{e}$, which is the polynomial\n\\begin{equation}\\label{car pol}\np(\\lambda) = (-1)^m \\lambda^m + (-1)^{m-1} d_1 \\lambda^{m-1} + \\cdots - d_{m-1} \\lambda + d_m,\n\\end{equation}\nwhere $d_1 = \\tr JX_e,...,d_m = \\det JX_e$ are the principal invariants of the matrix $JX_e$. 
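In practice the spectral indices can be read off directly from a numerical spectrum of the linearization. A minimal sketch (the matrix $J$ below is a made-up example, not taken from the text):

```python
import numpy as np

def spectral_indices(J, tol=1e-9):
    """Return the spectral indices (alpha, beta, gamma, delta) of a real
    matrix J: pairs of complex conjugate eigenvalues with positive and
    negative real part, and positive and negative real eigenvalues."""
    ev = np.linalg.eigvals(J)
    complex_ev = [z for z in ev if abs(z.imag) > tol]
    real_ev = [z.real for z in ev if abs(z.imag) <= tol]
    alpha = sum(1 for z in complex_ev if z.real > 0) // 2  # counted as conjugate pairs
    beta = sum(1 for z in complex_ev if z.real < 0) // 2
    gamma = sum(1 for x in real_ev if x > 0)
    delta = sum(1 for x in real_ev if x < 0)
    return alpha, beta, gamma, delta

# Block-diagonal example: a rotation-plus-expansion block with eigenvalues
# 0.3 +- i (an unstable focus) and a stable 1-dimensional node, so the
# spectral type is f^1 n_1.
J = np.array([[0.3, -1.0,  0.0],
              [1.0,  0.3,  0.0],
              [0.0,  0.0, -2.0]])
print(spectral_indices(J))  # (1, 0, 0, 1)
```

The tolerance `tol` deciding when an eigenvalue counts as real is an assumption of this sketch; near the marginal loci discussed below any such numerical classification becomes unreliable, which is precisely why the symbolic conditions of the next section are useful.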
To every choice of principal invariants $(d_1,...,d_m)$ there corresponds a polynomial $p$ (the characteristic polynomial) and a family of roots of $p$ (the eigenvalues), that is a definite spectrum of $JX_{e}$, that is a definite spectral type of the equilibrium $e$. We will abuse the terminology and refer to \\emph{the spectral type of the point $(d_1,...,d_m)$} in the space of invariants.\n\nA change in spectral type of $e$ can take place only when very specific events take place. These events define stratified algebraic varieties. Such algebraic varieties decompose the space of invariants into domains. We call \\emph{marginal} all the points $(d_1,...,d_m)$ of the space of invariants at which a change of spectral type is taking place or, in other words, any point at the boundary of domains whose points have a given spectral type. The generic elementary changes in the spectral type for the equilibrium $e$ are precisely the following:\n\\begin{itemize}\n\\item[(z)] a single real root changes sign, which corresponds to the change of spectral type\n\\[\nf^\\alpha_\\beta n^{\\gamma+1}_\\delta \\leftrightarrow f^\\alpha_\\beta n^\\gamma_{\\delta+1};\n\\]\n\\item[(r)] the real part of two complex conjugate roots changes sign, which corresponds to the change of spectral type\n\\[\nf^{\\alpha+1}_\\beta n^\\gamma_\\delta \\leftrightarrow f^\\alpha_{\\beta+1} n^\\gamma_\\delta;\n\\]\n\n\\item[(d)] two complex conjugate roots collide on the real axis at a non-zero real number, and then separate into two real roots or vice versa, which corresponds to one of the two possible changes of spectral type, depending on the sign of the real part of the roots \n\\[\nf^{\\alpha+1}_\\beta n^\\gamma_\\delta \\leftrightarrow f^\\alpha_\\beta n^{\\gamma+2}_\\delta \\qquad \\text{or} \\qquad f^\\alpha_{\\beta+1} n^\\gamma_\\delta \\leftrightarrow f^\\alpha_\\beta n^\\gamma_{\\delta+2}.\n\\]\n\\end{itemize}\n\nCondition (d) should be divided into two conditions: 
(d$^+$), corresponding to bifurcation $f^{\\alpha+1}_\\beta n^\\gamma_\\delta \\leftrightarrow f^\\alpha_\\beta n^{\\gamma+2}_\\delta$ and (d$^-$), corresponding to bifurcation $f^\\alpha_{\\beta+1} n^\\gamma_\\delta \\leftrightarrow f^\\alpha_\\beta n^\\gamma_{\\delta+2}$. We will briefly address this issue at the end of this section. We prefer to keep this analysis out of the picture for clarity.\n\nIn each of these situations a very specific event must take place. The case (z) is particularly simple to treat. At the marginality corresponding to a bifurcation in which a root changes sign, zero must be a root of the characteristic polynomial $p$, and hence the function $\\zeta(d_1,...,d_m) = d_m$ must vanish. We denote\n\\[\n\\mathcal Z = \\{(d_1,...,d_m) \\in {\\mathbb R}^m \\,|\\, \\zeta(d_1,...,d_m)= 0\\}.\n\\]\nThis condition gives a hyperplane in invariant space which separates domains in which the spectral type of the equilibrium changes according to (z).\n\nCase (d) is more involved. This type of bifurcation takes place when the characteristic polynomial has a double real root, which can happen only if the function $\\delta(d_1,...,d_m) = \\dsc(p)$, the discriminant of the polynomial $p$, vanishes. In this case the relevant algebraic variety $\\mathcal D$ is a subvariety of\n\\[\n\\widetilde{\\mathcal D} = \\{(d_1,...,d_m) \\in \\mathbb R^m \\,|\\, \\delta(d_1,...,d_m) = 0\\}.\n\\]\nThe reason it is a subvariety is that multiple roots of $p$ could lie outside the real axis. Such spurious solutions are strata of the variety $\\widetilde{\\mathcal D}$ which have higher codimension, and can be easily distinguished from the true marginal points (they can be so easily distinguished that they are often overlooked). We will see in the case $m= 4$ that the marginal variety $\\mathcal D$ differs from $\\widetilde{\\mathcal D}$ by a 1-dimensional curve which is the analogue of the thread emanating from the swallowtail singularity. 
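A concrete instance of such a spurious stratum (our own toy polynomials, checked with sympy): the discriminant also vanishes for a repeated pair of complex conjugate roots, which is not a marginality of type (d), while a genuine marginal point carries a repeated real root.

```python
import sympy as sp

lam = sp.symbols('lam')

# Spurious stratum: a double pair of complex conjugate roots +-i.
p_spurious = (lam**2 + 1)**2
# True marginality of type (d): a double real root at 1.
p_marginal = (lam - 1)**2 * (lam + 2)

print(sp.discriminant(p_spurious, lam))  # 0, yet there is no real double root
print(sp.discriminant(p_marginal, lam))  # 0, with a genuine double real root
print(sp.roots(p_marginal))              # the root 1 appears with multiplicity 2
```

Both polynomials land on the discriminant hypersurface, but only the second one sits on the stratum relevant for a focus-node bifurcation.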
One can formally define the marginal region $\\mathcal D$ as the closure of $\\widetilde{\\mathcal D}^{m-1}$, where $\\widetilde{\\mathcal D}^{m-1}$ is the union of all $m-1$ dimensional strata of $\\widetilde{\\mathcal D}$ (see \\cite{2012.Arnold.Gusein-Zade.Varchenko.1} for a discussion on stratifications).\n\nThe bifurcation of type (r) is the most complicated to treat. At such a marginality the characteristic polynomial $p$ has two conjugate, purely imaginary roots, which we denote $i \\mu$, $-i \\mu$, with $\\mu$ real and non-zero. Let us denote by\n\\begin{equation}\\label{pr pi}\n\\begin{cases}\np^r(\\mu) = d_m - d_{m-2} \\mu^2 + d_{m-4} \\mu^4 + \\cdots = \\sum_{j = 0}^{\\lfloor m\/2 \\rfloor} (-1)^j d_{m-2j } \\mu^{2j} \\\\[5pt]\np^i(\\mu) = d_{m-1} - d_{m-3} \\mu^2 + d_{m-5} \\mu^4 + \\cdots = \\sum_{j = 0}^{\\lfloor m\/2 \\rfloor} (-1)^j d_{m-1-2j} \\mu^{2j}\n\\end{cases}\n\\end{equation}\nthe two polynomials such that $p(i \\mu) = p^r(\\mu) - i \\mu p^i(\\mu)$ (in these expressions we agree that $d_0 = 1$, and $\\lfloor m\/2 \\rfloor$ indicates the integer part of $m\/2$). These two polynomials have degrees\n\\[\n\\deg (p^r) = \n\\left[\\begin{matrix}\nm &\\text{ if } m \\text{ is even}\\\\\nm-1 &\\text{ if } m \\text{ is odd},\n\\end{matrix}\\right.\n\\qquad 
\\deg(p^i) = \\left[\\begin{matrix}\nm-2 & \\text{ if } m \\text{ is even}\\\\\nm-1 & \\text{ if } m \\text{ is odd}.\n\\end{matrix}\\right.\n\\]\n\nAt marginal points of type (r) the two polynomials $p^r$ and $p^i$ must have two common real roots $\\pm \\mu$. Unfortunately, both polynomials are in the variable $\\mu^2$, and hence a codimension one condition is that these two polynomials have two common real solutions or that they have two complex conjugate purely imaginary solutions. The polynomials $p^r$, $p^i$ have common roots precisely when the function $\\widetilde\\rho(d_1,...,d_m) = \\res(p^r,p^i)$, the resultant of the two polynomials, vanishes. 
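To see the resultant condition at work, take $m=4$ (our own worked instance): the definitions in \eqref{pr pi} give $p^r(\mu) = \mu^4 - d_2\mu^2 + d_4$ and $p^i(\mu) = d_3 - d_1\mu^2$, and the resultant can be computed symbolically; it comes out as the perfect square of the Routh-Hurwitz-type quantity $d_3^2 - d_1 d_2 d_3 + d_1^2 d_4$.

```python
import sympy as sp

mu, d1, d2, d3, d4 = sp.symbols('mu d1 d2 d3 d4')

# The m = 4 instance of the polynomials p^r and p^i.
pr = mu**4 - d2*mu**2 + d4
pi_ = d3 - d1*mu**2

rho_tilde = sp.resultant(pr, pi_, mu)
print(sp.factor(rho_tilde))  # the square of d3**2 - d1*d2*d3 + d1**2*d4
```

The square arises because the roots come in pairs $\pm\mu$, as the text explains.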
The above-mentioned fact implies that, generically, it is not the entire variety\n\\[\n\\widetilde{\\mathcal R} = \\{ (d_1,...,d_m) \\in \\mathbb R^m \\,|\\, \\widetilde\\rho(d_1,...,d_m) = 0\\}\n\\]\nwhich corresponds to the marginality under examination, but only the semialgebraic variety that corresponds to a pair of common \\emph{real} roots $\\pm \\mu$ of the polynomials $p^r$, $p^i$. \n\nWe can rephrase the considerations above using the two polynomials $q^r(\\nu)$ and $q^i(\\nu)$ such that $p^r(\\mu) = q^r(\\mu^2)$ and $p^i(\\mu) = q^i(\\mu^2)$. The two polynomials are\n\\begin{equation}\\label{qr qi}\nq^r(\\nu) = \\sum_{j = 0}^{\\lfloor m\/2 \\rfloor} (-1)^j d_{m-2j} \\nu^j, \\qquad q^i(\\nu) = \\sum_{j = 0}^{\\lfloor m\/2 \\rfloor} (-1)^j d_{m-1-2j} \\nu^j\n\\end{equation}\nand their degrees are\n\\[\n\\deg (q^r) = \n\\left[\\begin{matrix}\n\\frac m2 &\\text{ if } m \\text{ is even}\\\\[5pt]\n\\frac{m-1}2 &\\text{ if } m \\text{ is odd}\n\\end{matrix}\\right.\n\\qquad \n\\deg(q^i) = \\left[\\begin{matrix}\n\\frac m2 - 1 & \\text{ if } m \\text{ is even}\\\\[5pt]\n\\frac{m-1}2 & \\text{ if } m \\text{ is odd.}\n\\end{matrix}\\right.\n\\]\nThe bifurcation of type (r) takes place when these two polynomials have a common \\emph{positive} real root. \n\nOnce again, the two polynomials have common roots when the function $\\rho(d_1,...,d_m) = \\res(q^r,q^i)$ vanishes (observe that $\\widetilde\\rho = \\rho^2$). But the vanishing of $\\rho$ corresponds to the condition that the two polynomials have a common real root, while the marginal locus we are looking for is the subvariety of $\\widetilde{\\mathcal R}$ that corresponds to points $(d_1,...,d_m)$ whose associated polynomials $q^r$, $q^i$ have a \\emph{common positive real root}. We hence need an extra condition to ensure that the common root is positive.\n\nThis condition can be obtained using Euclid's division algorithm. 
In fact the ultimate remainder of Euclid's division algorithm applied to $q^r$ and $q^i$ is a degree zero polynomial (a real number) which vanishes exactly when the resultant $\\rho$ does. When the resultant is zero, the penultimate remainder of Euclid's division algorithm applied to $q^r$ and $q^i$ is a degree 1 polynomial whose only root $\\sigma(d_1,...,d_m)$ is the common root that must exist given that the resultant is zero. This fact, which holds under generic assumptions, is what solves our dilemma, and we can state that\n\\[\n\\mathcal R = \\{ (d_1,...,d_m) \\in \\mathbb R^m \\,|\\, \\rho(d_1,...,d_m) = 0, \\sigma(d_1,...,d_m) > 0\\}.\n\\]\n\nThe remainders of Euclid's division of two polynomials have a fundamental role in the investigation of roots of polynomials, and they generate the so-called \\emph{Sylvester sequence} \\cite{2013.Gondim.deMoralesMelo.Russo}. We summarise the discussion above in a definition and a main theorem.\n\n\\begin{defi}\nWe call \\emph{determinant locus} the set $\\mathcal Z$, \\emph{discriminant locus} the set $\\mathcal D$ and \\emph{resultant locus} the set $\\mathcal R$. We call \\emph{marginal locus} the union of the three loci. We call \\emph{marginal points} the points of the marginal locus.\n\\end{defi}\n\n\\begin{thm}\\label{main}\nLet $X$ be a vector field and $e$ an equilibrium of $X$. The marginal locus in invariant space is the union of three algebraic varieties: $\\mathcal Z$, $\\mathcal R$, and $\\mathcal D$. These varieties decompose the space of invariants into domains with a specific spectral type. Across the marginal locus the spectral type of $e$ varies according to a precise rule depending on the three possibilities (z), (r), (d) listed above.\n\nThe variety $\\mathcal Z$ is the hyperplane $\\{\\zeta = 0\\}$ with $\\zeta$ the determinant of $JX_e$. 
With possible lower-dimensional artefacts, the algebraic variety $\\mathcal D$ is the variety $\\{\\delta = 0\\}$ with $\\delta$ the discriminant of $p$, the characteristic polynomial of $JX_e$; the semialgebraic variety $\\mathcal R$ is $\\{\\rho = 0, \\sigma > 0\\}$ with $\\rho$ the resultant and $\\sigma$ the unique root of the penultimate Euclid's remainder of the two polynomials $q^r,q^i$ defined in \\eqref{qr qi}.\n\\end{thm}\n\nWe conclude by observing that the same argument, applied to the penultimate Euclid's remainder of $p$ and $p'$, gives information on which type of bifurcation is taking place between $f^{\\alpha+1}_\\beta n^\\gamma_\\delta \\leftrightarrow f^\\alpha_\\beta n^{\\gamma+2}_\\delta$ and $f^\\alpha_{\\beta+1} n^\\gamma_\\delta \\leftrightarrow f^\\alpha_\\beta n^\\gamma_{\\delta+2}$. In fact the last remainder of Euclid's division algorithm applied to $p$, $p'$ vanishes exactly when the discriminant does. When the discriminant is zero, the root $\\tau(d_1,...,d_m)$ of the penultimate remainder (a degree one polynomial) is the real double root of $p$, and hence its sign will discriminate between the two possible bifurcations: the unstable focus$\\leftrightarrow$unstable node (d$^+$) or the stable focus$\\leftrightarrow$stable node (d$^-$).\n\n\\section{The 3-dimensional case}\\label{3-dimensional case}\n\nLet us use Theorem~\\ref{main} to classify the spectral type of hyperbolic equilibria of a 3-dimensional system. Consider $p(\\lambda) = - \\lambda^3 + d_1 \\lambda^2 - d_2 \\lambda + d_3$, the characteristic polynomial of a $3\\times 3$ matrix, where $d_i$ are the invariants of the matrix, i.e.\\ $d_3$ is the determinant, $d_1$ is the trace, $d_2$ is the sum of the determinants of the three principal $2\\times 2$ minors.\n\nThe hyperplane $\\mathcal Z = \\{(d_1,d_2,d_3) \\,|\\, d_3 = 0\\}$ is easily drawn. 
The discriminant of the characteristic polynomial is also easy to compute, and is\n\\[\n\\delta = -4 d_3 d_1^3+d_2^2 d_1^2+18 d_2 d_3 d_1-4 d_2^3-27 d_3^2.\n\\]\nThe corresponding discriminant locus $\\mathcal D = \\{(d_1,d_2,d_3) \\in \\mathbb R^3 \\,|\\, \\delta(d_1,d_2,d_3) = 0\\}$ is drawn in the center pane of Figure~\\ref{n=3}. In this low-dimensional case the polynomial $p$ cannot have a double complex root, since this event can take place only when the polynomial has degree at least four, hence in this case $\\widetilde{\\mathcal D} = \\mathcal D$. The interesting feature of this algebraic set is that $\\mathcal D$ displays a line of cusp points corresponding to a triple root in the real axis.\n\nFor the variety $\\mathcal R$ we must consider the two polynomials $q^r = -d_1 \\nu + d_3$ and $q^i = - \\nu + d_2$. They have a common positive real root only if $d_3\\/d_1 = d_2$ and $d_2 > 0$. The resultant of the two polynomials is in fact $\\rho = d_3 -d_1 d_2$. In this case the penultimate remainder is $q^i$ itself (the system is very low-dimensional), and it follows that $\\sigma = d_2$. The left pane of Figure~\\ref{n=3} shows a picture of the resultant locus $\\mathcal R$ (in transparent red is represented $\\widetilde{\\mathcal R} \\setminus \\mathcal R$); the right pane shows a cumulative picture of the three loci $\\mathcal Z \\cup \\mathcal D \\cup \\mathcal R$.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=5cm]{R3.pdf}\n\\includegraphics[width=5cm]{D3.pdf}\n\\includegraphics[width=5cm]{Marginal3.pdf}\n\\end{center}\n\\caption{In the left pane the resultant locus $\\mathcal R$ (solid red), a semialgebraic subvariety of the hyperbolic paraboloid $\\widetilde{\\mathcal R}$ (union of solid and transparent red). In the center pane the discriminant locus $\\mathcal D$. In the right pane the two varieties together with the determinant locus $\\mathcal Z$.
The complement of the marginal varieties corresponds to domains in which the spectral type does not change.}\n\\label{n=3}\n\\end{figure}\n\nAlthough complete, the above pictures lack the indication of the spectral type in each connected component of the complement of the marginal loci. For this reason we use a simple consideration to choose a representative slice of Figure~\\ref{n=3}. A positive rescaling of the vector field $X$ amounts to a rescaling of the spectral parameter $\\lambda$ (if the rescaling is negative, there will be an exchange of the stable indices $\\beta$, $\\delta$ with the unstable ones $\\alpha$, $\\gamma$). Assume therefore that $d_3 \\not = 0$. We can positively rescale the variable $\\lambda$ by setting $\\lambda = k \\widetilde\\lambda$ with $k \\in \\mathbb R^+$ and obtain the polynomial\n\\[\n\\widetilde p(\\widetilde \\lambda) = k^3 \\left(\\widetilde \\lambda^3 - \\frac{d_1}k \\widetilde \\lambda^2 + \\frac{d_2}{k^2} \\widetilde \\lambda - \\frac{d_3}{k^3}\\right)\n\\]\nwhose roots have the same spectral type as the roots of $p$. We shall hence choose $k = \\sqrt[3]{|d_3|}$ and investigate two possibilities:\n\\[\n\\left[\n\\begin{matrix}\np^- = \\widetilde \\lambda^3 - b_1 \\widetilde \\lambda^2 + b_2 \\widetilde \\lambda + 1 & \\text{ if } d_3 < 0\\\\[3pt]\np^+ = \\widetilde \\lambda^3 - b_1 \\widetilde \\lambda^2 + b_2 \\widetilde \\lambda - 1 & \\text{ if } d_3 > 0\n\\end{matrix}\n\\right.\n\\]\nwhere $b_1 = d_1 \\/\\sqrt[3]{|d_3|}$ and $b_2 = d_2 \\/\\sqrt[3]{|d_3|}$ (note that the constant term is $-d_3\\/|d_3|$, so that the normalised invariant substituting $d_3$ is $\\pm 1$ according to the sign of $d_3$). The discriminants of these two polynomials are the already computed discriminant $\\delta$ with the substitutions $d_3 \\to \\pm 1$, $d_2 \\to b_2$, $d_1 \\to b_1$, that is\n\\[\n\\delta^\\pm = b_1^2 b_2^2 \\mp 4 b_1^3 - 4 b_2^3 \\pm 18 b_1 b_2 - 27.\n\\]\nThe vanishing of $\\delta^\\pm$ always corresponds to a double real root of $p$ except at the codimension 2 cusp point, which corresponds to a triple real root (see the green curves of Figure~\\ref{n=3 tomography}).
This variety is the marginal set separating the case in which $p$ possesses three real roots from the case in which $p$ possesses two complex conjugate roots and one real root.\n\nAfter substitution the resultant is $\\rho^\\pm = \\pm 1 - b_1b_2$, and of course its relevant submanifold is the one in which the polynomials $q^r = - b_1 \\nu \\pm 1 $ and $q^i = - \\nu + b_2$ have a common positive real root. It follows that $b_2 > 0$. In Figure~\\ref{n=3 tomography} we show how the marginal loci separate the space of invariants into regions of homogeneous spectral type, and how across each locus the change in spectral type is determined by the locus being crossed.\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=5cm]{Margin2projp.pdf}\n\\includegraphics[width=5cm]{Margin2projm.pdf}\n\\end{center}\n\\caption{Tomographic sections of Figure~\\ref{n=3} with specification of the spectral type of the equilibrium in the complement of the marginal loci.}\n\\label{n=3 tomography}\n\\end{figure}\n\nThe sign of $\\tau(d_1,d_2,d_3) = (d_1 d_2-9 d_3)\\/(2 \\left(d_1^2-3 d_2\\right))$ separates $\\mathcal D$ into two marginal loci: $\\mathcal D^+$, across which two positive real roots become a couple of complex conjugate roots with positive real part, and $\\mathcal D^-$, across which two negative real roots become a couple of complex conjugate roots with negative real part.\n\n\\section{The 4-dimensional case}\\label{4-dimensional case}\n\nIn the 4-dimensional case the characteristic polynomial of a $4\\times 4$ matrix is $p(\\lambda) = \\lambda^4 - d_1 \\lambda^3 + d_2 \\lambda^2 - d_3 \\lambda + d_4$. As before, the coefficients $d_1,d_2,d_3,d_4$ are the principal invariants of the matrix, and zero is an eigenvalue if and only if $d_4 = 0$, so that the determinant locus $\\mathcal Z$ is a hyperplane whose points separate regions in which one eigenvalue changes sign.
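Before developing the 4-dimensional case further, we note that the 3-dimensional formulas of the previous section (the discriminant $\delta$ and the double-root location $\tau$) are easy to validate numerically. The sketch below is our own illustration, not part of the original development: it builds the invariants from chosen roots and checks the two formulas.

```python
def invariants3(r1, r2, r3):
    """Invariants of a 3x3 matrix with eigenvalues r1, r2, r3:
    d1 = trace, d2 = sum of principal 2x2 minors, d3 = determinant."""
    return r1 + r2 + r3, r1*r2 + r1*r3 + r2*r3, r1*r2*r3

def delta3(d1, d2, d3):
    """Discriminant of p(l) = -l^3 + d1 l^2 - d2 l + d3."""
    return -4*d3*d1**3 + d2**2*d1**2 + 18*d2*d3*d1 - 4*d2**3 - 27*d3**2

def tau3(d1, d2, d3):
    """Root of the penultimate Euclid remainder of p, p':
    the double real root of p when delta3 = 0."""
    return (d1*d2 - 9*d3) / (2*(d1**2 - 3*d2))

# simple roots 1, 2, 3: delta3 equals the product of squared root differences
print(delta3(*invariants3(1, 2, 3)))   # (1-2)^2 (1-3)^2 (2-3)^2 = 4

# double root at 1 (roots 1, 1, 2): delta3 vanishes and tau3 locates the root
print(delta3(*invariants3(1, 1, 2)), tau3(*invariants3(1, 1, 2)))
```

The first print returns 4, matching the product of squared differences of the roots; the second returns a vanishing discriminant together with the double root 1.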
The discriminant of $p$ is\n\\begin{multline*}\n\\delta = -27 d_4^2 d_1^4-4 d_3^3 d_1^3+18 d_2 d_3 d_4 d_1^3+d_2^2 d_3^2 d_1^2+144 d_2 d_4^2 d_1^2 - 4 d_2^3 d_4 d_1^2-6 d_3^2 d_4 d_1^2 + \\\\\n+ 18 d_2 d_3^3 d_1 -192 d_3 d_4^2 d_1 - 80 d_2^2 d_3 d_4 d_1-27 d_3^4+256 d_4^3-4 d_2^3 d_3^2-128 d_2^2 d_4^2+16 d_2^4 d_4+144 d_2 d_3^2 d_4,\n\\end{multline*}\nwhile the two polynomials needed to define $\\mathcal R$ are $q^r =\\nu ^2 -d_2 \\nu + d_4 $, $q^i = d_1 \\nu - d_3$, from which it follows that\n\\[\n\\rho = d_4 d_1^2-d_2 d_3 d_1+d_3^2, \\qquad \\sigma = d_1 d_3.\n\\] \n\nThese functions are what is needed to investigate bifurcations of 4-dimensional systems, but in this case the space of invariants is 4-dimensional. \n\nWe proceed as in the 3-dimensional case and, assuming $d_4 \\not = 0$, we apply a positive rescaling of the vector field, which amounts to a positive rescaling of the variable $\\lambda$, hence reducing the parameters down to three. Setting $\\lambda = k \\widetilde \\lambda$ with $k \\in \\mathbb R^+$, the polynomial $p$ becomes\n\\[\n\\widetilde p(\\widetilde \\lambda) = k^4 \\left(\\widetilde \\lambda^4 - \\frac{d_1}k \\widetilde \\lambda^3 + \\frac{d_2}{k^2} \\widetilde \\lambda^2 - \\frac{d_3}{k^3} \\widetilde \\lambda+ \\frac{d_4}{k^4}\\right),\n\\]\nwhose roots have the same spectral type as the roots of $p$. We shall hence choose $k = \\sqrt[4]{|d_4|}$ and investigate two possibilities:\n\\[\n\\left[\n\\begin{matrix}\np^- = \\widetilde \\lambda^4 - b_1 \\widetilde \\lambda^3 + b_2 \\widetilde \\lambda^2 - b_3 \\widetilde \\lambda - 1 & \\text{ if } d_4 < 0\\\\[3pt]\np^+ = \\widetilde \\lambda^4 - b_1 \\widetilde \\lambda^3 + b_2 \\widetilde \\lambda^2 - b_3 \\widetilde \\lambda+ 1& \\text{ if } d_4 > 0.\n\\end{matrix}\n\\right.\n\\]\n(Also in this case $b_j = d_j\\/\\sqrt[4]{|d_4|}$ for $j = 1,2,3$.)
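As a quick sanity check of $\rho$ and $\sigma$ (our own illustration, not part of the paper): a spectrum containing a purely imaginary pair $\pm i\omega$ together with real eigenvalues must give $\rho = 0$ with $\sigma > 0$, and the common root of $q^r$ and $q^i$ is $\nu = \omega^2 = d_3/d_1$.

```python
from itertools import combinations
from functools import reduce
from operator import mul

def principal_invariants(spectrum):
    """d1..dn: elementary symmetric functions of the eigenvalues."""
    n = len(spectrum)
    return [sum(reduce(mul, c, 1) for c in combinations(spectrum, k))
            for k in range(1, n + 1)]

# spectrum with the purely imaginary pair +-i and real eigenvalues 1 and 2
d1, d2, d3, d4 = [z.real for z in principal_invariants([1j, -1j, 1.0, 2.0])]

rho   = d4*d1**2 - d2*d3*d1 + d3**2    # resultant of q^r and q^i
sigma = d1*d3                          # sign surrogate for the common root
nu    = d3/d1                          # common root: nu = omega^2 = 1

print(rho, sigma, nu)
```

With this spectrum the invariants are $(d_1,\dots,d_4) = (3,3,3,2)$, giving $\rho = 0$, $\sigma = 9 > 0$, and $\nu = 1$, as expected.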
The discriminant of these two polynomials is\n\\begin{multline}\n\\delta^\\pm = -27 b_1^4 \\pm 18 b_1^3 b_2 b_3 - 4 b_1^3 b_3^3 \\mp 4 b_1^2 b_2^3 + b_1^2 b_2^2 b_3^2 + 144 b_1^2 b_2 \\mp 6 b_1^2 b_3^2 \\mp 80b_1b_2^2 b_3 + 18 b_1 b_2 b_3^3-192 b_1 b_3+ \\\\\n\\pm 16 b_2^4 - 4 b_2^3 b_3^2 - 128 b_2^2 \\pm 144 b_2 b_3^2-27 b_3^4 \\pm 256.\n\\end{multline}\nIn this case the variety $\\delta^+ = 0$ does possess a thread, corresponding to the codimension 2 degeneracy associated with the coincidence of two couples of complex conjugate solutions. These degeneracies do not correspond to marginal points, but they are of fundamental interest, being the origin of interesting monodromic effects. The variety $\\mathcal D$ also has codimension 2 strata which are cusp singularities, corresponding to triple real roots, which are edges of regular strata of codimension 1 corresponding to double real roots, and it also has two point singularities that are swallowtails, and correspond to a quadruple real root \\cite{2012.Arnold.Gusein-Zade.Varchenko.1}.\n\nWith this reduction from the space of invariants $(d_1,d_2,d_3,d_4)$ to the space $(b_1,b_2,b_3)$, the resultant becomes $\\rho^\\pm = \\pm b_1^2 - b_1 b_2 b_3 + b_3^2$ and, when the resultant is zero, the common root of $q^r$ and $q^i$ is $\\sigma^\\pm = b_1\\/b_3$. This fact can be deduced from the abstract approach of Section~\\ref{formal conditions}, but it can also be obtained by direct computation from the two polynomials $q^r = \\nu^2 - b_2 \\nu \\pm 1$ and $q^i = b_3 \\nu - b_1$.\n\nRepresentations of the loci when $d_4 > 0$ are given in Figure~\\ref{n=4 d4>0}, while representations of the loci when $d_4 < 0$ are given in Figure~\\ref{n=4 d4<0}.\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5cm]{D4p.pdf}\n\\includegraphics[width=5cm]{R4p.pdf}\n\\includegraphics[width=5cm]{Marginal4p.pdf}\n\\end{center}\n\\caption{Case in which $d_4 >0$.
In the left pane the marginal locus $\\widetilde{\\mathcal D}$, which differs from $\\mathcal D$ only by the 1-dimensional thread that connects the two 0-dimensional strata (slightly visible at the center of the image). In the center pane the locus $\\widetilde{\\mathcal R}$ in red (solid and transparent) and $\\mathcal R$ in solid red. In the right pane $\\mathcal D$ and $\\mathcal R$ together.}\\label{n=4 d4>0}\n\\end{figure}\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=5cm]{D4m.pdf}\n\\includegraphics[width=5cm]{R4m.pdf}\n\\includegraphics[width=5cm]{Marginal4m.pdf}\n\\end{center}\n\\caption{Case in which $d_4 < 0$. In the left pane the marginal locus $\\mathcal D$. In the center pane the locus $\\widetilde{\\mathcal R}$ in red (solid and transparent) and $\\mathcal R$ in solid red. In the right pane $\\mathcal D$ and $\\mathcal R$ together.}\\label{n=4 d4<0}\n\\end{figure}\n\nWe complete this investigation by taking tomographic sections at a few representative level sets and indicating the spectral type of the connected components of the complement of the loci. In Figure~\\ref{tomography n=4} we use the same convention adopted for the 3-dimensional case, that is, a red line for the resultant locus (across which a change from stable focus to unstable focus takes place) and a green line for the discriminant locus (across which a change from focus to node takes place, preserving the type of stability).\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5cm]{case4at1.pdf}\n\\includegraphics[width=5cm]{case4at2.pdf}\n\\includegraphics[width=5cm]{case4at3.pdf}\n\\end{center}\n\\caption{In the left and center panes, tomographic sections of the 3-dimensional Figure~\\ref{n=4 d4>0} at two different $b_2$ levels. The labels of the different domains give an exhaustive description of the spectral types that can be met. In the center pane are visible two isolated points (drawn as infinitesimal circles for technical reasons).
These points are the intersection of the tomographic plane with the thread in which a focus-focus degeneracy takes place. In the right pane, a section of the 3-dimensional Figure~\\ref{n=4 d4<0} at a chosen $b_3$ level. The labels in the different domains give an exhaustive description of the spectral types that can be met.}\n\\label{tomography n=4}\n\\end{figure}\n\nWhile in the odd-dimensional case ($m$ odd) the regions with $d_m >0$ are equivariantly symmetric with those having $d_m < 0$, which means that the loci are the same and the spectral types change by switching the upper indices with the lower ones, in the even-dimensional case such equivariance is lost, because the vector field $-X$ has linearisation with the same determinant as the linearisation of $X$. This means that in the even-dimensional case the figures with $d_m$ positive or negative have a $\\mathbb Z_2$ symmetry that the figures in the odd-dimensional case do not possess. In invariant space such $\\mathbb Z_2$ symmetry can be explicitly written and is \n\\[\n\\left[\n\\begin{matrix}\n(d_1,d_2,d_3,...,d_{m-1}, d_m) \\to (-d_1,d_2,-d_3,...-d_{m-1},d_m) &\\text{ if } m \\text{ is even}\\\\[5pt]\n(d_1,d_2,d_3,...,d_{m-1}, d_m) \\to (-d_1,d_2,-d_3,...d_{m-1},- d_m) &\\text{ if } m \\text{ is odd}.\n\\end{matrix}\n\\right.\n\\]\nThis symmetry becomes an equivariance for the odd-dimensional case between points with positive and negative determinant, clearly visible in Figure~\\ref{n=3} and in Figure~\\ref{n=3 tomography} between the left and right panes, while in the even-dimensional case it becomes the invariance $(b_1,b_2,b_3,...,b_{m-1}) \\to (-b_1,b_2,-b_3,...-b_{m-1})$, clearly visible in Figures~\\ref{n=4 d4>0}, \\ref{n=4 d4<0}, and in the first two panes of Figure~\\ref{tomography n=4}, where it corresponds to the transformation $(b_1,b_2,b_3) \\to (-b_1,b_2,-b_3)$.\n\nAnother $\\mathbb Z_2$ symmetry for the reduced case, that is, $(b_1,...,b_{m-1})$-space, is visible both in the odd and even
dimensional case. It is related to the substitution of the spectral parameter $\\lambda = 1\/\\widetilde\\lambda$. This symmetry corresponds to an alternating sign inversion of the parameters \n\\[\n\\left[\n\\begin{matrix}\n(b_1,...,b_{n-1}) \\to (\\mp b_{n-1}, \\pm b_{n-2}, ..., \\pm b_2, \\mp b_1) &\\text{ in the even case}\\\\[5pt]\n(b_1,...,b_{n-1}) \\to (\\mp b_{n-1}, \\pm b_{n-2}, ..., \\mp b_2, \\pm b_1) &\\text{ in the odd case.}\n\\end{matrix}\n\\right.\n\\]\n\n\\section{The 5 and 6-dimensional cases}\\label{5 and 6 dimensional case}\n\nIn the 5-dimensional case the polynomial $p$ and the two polynomials $q^r,q^i$ are\n\\[\np = -\\lambda ^5 + d_1 \\lambda ^4 - d_2 \\lambda ^3 + d_3 \\lambda ^2 - d_4 \\lambda + d_5, \\qquad q^r = d_1 \\nu ^2-d_3 \\nu +d_5, \\qquad q^i = -\\nu ^2 + d_2 \\nu - d_4.\n\\]\nIt follows that\n\\begin{multline*}\n\\delta = 256 d_5^3 d_1^5-27 d_4^4 d_1^4-128 d_3^2 d_5^2 d_1^4-192 d_2 d_4 d_5^2 d_1^4+144 d_3 d_4^2 d_5 d_1^4+18 d_2 d_3 d_4^3 d_1^3-1600 d_2 d_5^3 d_1^3 +\\\\\n-4 d_3^3 d_4^2 d_1^3+144 d_2^2 d_3 d_5^2 d_1^3+160 d_3 d_4 d_5^2 d_1^3+16 d_3^4 d_5 d_1^3-36 d_4^3 d_5 d_1^3-6 d_2^2 d_4^2 d_5 d_1^3-80 d_2 d_3^2 d_4 d_5 d_1^3+144 d_2 d_4^4 d_1^2 + \\\\\n-4 d_2^3 d_4^3 d_1^2-6 d_3^2 d_4^3 d_1^2+2000 d_3 d_5^3 d_1^2+d_2^2 d_3^2 d_4^2 d_1^2-27 d_2^4 d_5^2 d_1^2+560 d_2 d_3^2 d_5^2 d_1^2-50 d_4^2 d_5^2 d_1^2+1020 d_2^2 d_4 d_5^2 d_1^2-4 d_2^2 d_3^3 d_5 d_1^2+ \\\\\n-746 d_2 d_3 d_4^2 d_5 d_1^2+24 d_3^3 d_4 d_5 d_1^2+18 d_2^3 d_3 d_4 d_5 d_1^2-192 d_3 d_4^4 d_1-80 d_2^2 d_3 d_4^3 d_1+2250 d_2^2 d_5^3 d_1-2500 d_4 d_5^3 d_1+18 d_2 d_3^3 d_4^2 d_1+ \\\\\n -900 d_3^3 d_5^2 d_1-630 d_2^3 d_3 d_5^2 d_1-2050 d_2 d_3 d_4 d_5^2 d_1-72 d_2 d_3^4 d_5 d_1+160 d_2 d_4^3 d_5 d_1+24 d_2^3 d_4^2 d_5 d_1+1020 d_3^2 d_4^2 d_5 d_1+356 d_2^2 d_3^2 d_4 d_5 d_1+\\\\\n + 256 d_4^5-128 d_2^2 d_4^4+3125 d_5^4+16 d_2^4 d_4^3+144 d_2 d_3^2 d_4^3-3750 d_2 d_3 d_5^3-27 d_3^4 d_4^2-4 d_2^3 d_3^2 d_4^2+108 d_2^5 d_5^2+825 d_2^2 d_3^2 d_5^2+\\\\\n + 2000 d_2 
d_4^2 d_5^2-900 d_2^3 d_4 d_5^2+2250 d_3^2 d_4 d_5^2+108 d_3^5 d_5+16 d_2^3 d_3^3 d_5-1600 d_3 d_4^3 d_5+560 d_2^2 d_3 d_4^2 d_5-630 d_2 d_3^3 d_4 d_5-72 d_2^4 d_3 d_4 d_5,\n \\end{multline*}\nwhile\n\\[\n\\rho = d_1 d_5 d_2^2-d_1 d_3 d_4 d_2-d_3 d_5 d_2+d_1^2 d_4^2+d_5^2+d_3^2 d_4-2 d_1 d_4 d_5.\n\\]\nThe penultimate remainder of Euclid's division algorithm applied to $q^r$, $q^i$ is $\\left(d_1 d_2-d_3\\right) \\nu -d_1 d_4+d_5$, and hence its root is $\\sigma = (d_1 d_4-d_5)\/(d_1 d_2-d_3)$. The positivity of this function is equivalent to the positivity of the polynomial function\n\\[\n\\sigma = d_2 d_4 d_1^2-d_3 d_4 d_1-d_2 d_5 d_1+d_3 d_5,\n\\]\nwhich is preferable to use being a polynomial expression. In the 6-dimensional case the polynomial $p$ and the two polynomials $q^r,q^i$ are\n\\[\np = \\lambda^6 - d_1 \\lambda ^5 + d_2 \\lambda ^4 - d_3 \\lambda^3 + d_4 \\lambda^2 - d_5 \\lambda + d_6, \\qquad q^r = -\\nu ^3 + d_2 \\nu ^2-d_4 \\nu + d_6, \\qquad q^i = -d_1 \\nu ^2+d_3 \\nu -d_5.\n\\]\nWe spare the reader from seeing the expression of $\\delta$, while \n\\begin{multline*}\n\\rho = -d_6^2 d_1^3-d_4^2 d_5 d_1^2+d_3 d_4 d_6 d_1^2+2 d_2 d_5 d_6 d_1^2-d_2^2 d_5^2 d_1+2 d_4 d_5^2 d_1+\\\\\n+ d_2 d_3 d_4 d_5 d_1-d_2 d_3^2 d_6 d_1-3 d_3 d_5 d_6 d_1-d_5^3+d_2 d_3 d_5^2-d_3^2 d_4 d_5+d_3^3 d_6.\n \\end{multline*}\nIn this case the penultimate remainder of Euclid's division algorithm applied to $q^r$, $q^i$ is the polynomial\n\\[\n\\left(-\\frac{d_3^2}{d_1^2}+\\frac{d_2 d_3}{d_1}-d_4+\\frac{d_5}{d_1}\\right) \\nu -\\frac{d_2 d_5}{d_1}+\\frac{d_3 d_5}{d_1^2}+d_6.\n\\]\nIt follows that $\\sigma$ can be chosen as\n\\[\n\\sigma = \\left(d_4 d_1^2 - d_1 d_2 d_3 - d_1 d_5+d_3^2\\right) \\left(d_6 d_1^2-d_2 d_5 d_1+d_3 d_5\\right).\n\\]\n\n\\section{Non-genericity}\\label{non-genericity}\nSome words must be spent on the hypothesis we made that the bifurcations are generic. 
In many cases the equilibrium of a parameter-dependent dynamical system undergoes bifurcation at some parameters, but the bifurcation is not accompanied by a change of spectral type. The three typical cases that can take place are:\n\\begin{itemize}\n\\item[(z$_{deg}$)] one real eigenvalue becomes zero and then moves back to the same real semi-axis from which it came;\n\\item[(r$_{deg}$)]\ntwo complex conjugate eigenvalues touch the purely imaginary axis and then move back to the same half-plane from which they came;\n\\item[(d$_{deg}$)]\ntwo real eigenvalues collide in the real axis but then separate once again in the real axis instead of separating into a couple of complex-conjugate eigenvalues (or the analogous event in which two complex conjugate eigenvalues collide and separate back into two complex conjugate eigenvalues).\n\\end{itemize}\nThese three events are non-generic. Non-generic events such as these (and subtler ones) often take place because the vector field $X$ has some symmetry, and the principal invariants of the matrix $JX_e$ are not free to move in an open domain of invariant space. From the point of view of the present treatment this fact can be fully understood by applying a small generic perturbation to the vector field $X$. In singularity theory this is called \\emph{morsification}.
An example of what this would mean is represented in Figure~\\ref{morsification}.\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[width=5cm]{before_morse.pdf} \\qquad \\includegraphics[width=5cm]{after_morse.pdf}\n\\end{center}\n\\caption{These two panes represent the effect of morsification on the higher-dimensional component of the marginal loci ($\\mathcal Z$, $\\mathcal D$, $\\mathcal R$ respectively) when their defining function ($\\zeta$, $\\delta$, $\\rho$ respectively) does not change sign across them.}\\label{morsification}\n\\end{figure}\n\nIt is possible that in a chosen vector field our approach will display some of these degenerate features (and even more dramatic ones). The non-generic events described above can be easily detected: in all such cases the algebraic function ($\\zeta$, $\\delta$, or $\\rho$) whose zeroes form the marginal locus being crossed does not change sign across the marginal variety in parameter space. Observe that software for the numerical representation of level sets will typically not draw such varieties, since a contour hypersurface is typically detected by tracing the changes in sign of the defining function. We will see this fact in the application of Section~\\ref{application}.\n\nOne last word must be spent on a remarkably interesting family of dynamical systems: the Hamiltonian vector fields. Hamiltonian vector fields are non-generic, and the characteristic polynomial of their linearisation at the equilibria is always a polynomial in $\\lambda^2$. This implies that the degree of the characteristic polynomial is always even, and that $p^i$ is always zero. Much can be said about such vector fields, but this is outside the scope of this article. For such vector fields our approach can give results only after non-Hamiltonian morsification.
We will dedicate an ad-hoc investigation to Hamiltonian vector fields.\n\n\\section{Determination of the indices}\\label{7}\n\nIn the previous section we made clear a fact that was not pointed out explicitly in the first part of the manuscript: at each marginal locus one type of bifurcation must take place, but there is uncertainty about the direction in which this bifurcation takes place (from positive to negative or vice-versa, from two real to two complex conjugate or vice-versa). This is never unclear in low dimensions, and the best way to settle the question in applications is by checking at appropriate points of the connected components of the complement of the marginal loci $\\mathcal Z, \\mathcal D, \\mathcal R$. Nonetheless, a theoretical discussion requires a non-numerical determination of the precise spectral type, meaning that we would like to express the spectral type only as a function of the invariants $d_1,...,d_m$. This can be done almost explicitly using Sturm sequences and the residue theorem.\n\nLet us begin by considering the indices $\\gamma$ and $\\delta$. They indicate the number of positive real zeroes and of negative real zeroes of the characteristic polynomial $p$. This information can be easily extracted from Sturm's theorem \\cite{1835.Sturm}. Consider $p_0 = p$ and $p_1 = p'$, and define for $j= 2,...,m$ the polynomial $p_j = - \\rem(p_{j-2},p_{j-1})$, where $\\rem(p_{j-2},p_{j-1})$ is the remainder of the Euclidean division of $p_{j-2}$ by $p_{j-1}$. Let $a_1,a_2,...,a_m$ be the list of constant terms of the polynomials $p_j$ and let $b_1,...,b_m$ be the coefficients of the top power of the polynomials $p_j$.
We have that\n\\[\n\\gamma = \\var(a_1,a_2,...,a_m) - \\var(b_1,...,b_m), \\qquad \\delta = \\var((-1)^{1-m}b_1,...,(-1)^{j-m}b_j,...,(-1)^{m-m}b_m) - \\var(a_1,a_2,...,a_m),\n\\]\nwhere by $\\var$ we mean the number of changes in sign of consecutive elements of the list (we should say ``once the zeroes have been removed'', but such an event is not generic). This formula may appear mysterious, but it can be expressed explicitly in terms of the invariants.\n\nThe integers $2\\alpha+\\gamma$ and $2\\beta+\\delta$ can be computed using the residue theorem. In fact, if the polynomial $p$ has simple zeroes, the function $p'\\/p$ has simple poles with residue 1 at every zero of $p$ \\cite{1974.Henrici}. It follows that the number of zeros that $p$ possesses in a region $\\Omega$ is $(2\\pi i)^{-1} \\int_{\\partial \\Omega} p'(z)\\/p(z) dz$ (with the usual convention on orientation of boundaries). By the lemma of the big circle, one has that\n\\[\n2 \\alpha + \\gamma = \\frac 1{2 \\pi i} \\lim_{R \\to + \\infty} \\left(\\int_R^{-R} \\frac{p'(i s)}{p(i s)} i ds + \\int_{\\Gamma_R} \\frac{p'(z)}{p(z)} dz\\right) \\quad 2 \\beta + \\delta = \\frac 1{2 \\pi i} \\lim_{R \\to + \\infty} \\left(\\int_{-R}^R \\frac{p'(i s)}{p(i s)} i ds + \\int_{\\Delta_R} \\frac{p'(z)}{p(z)} dz\\right),\n\\]\nwhere $\\Gamma_R$ is the big half-circle parametrised by $\\Gamma_R(\\vartheta) = R(\\cos\\vartheta + i \\sin\\vartheta)$, $\\vartheta \\in [-\\pi\\/2,\\pi\\/2]$, and $\\Delta_R$ is the other big half-circle parametrised by $\\Delta_R(\\vartheta) = R(\\cos\\vartheta + i \\sin\\vartheta)$, $\\vartheta \\in [\\pi\\/2,3\\pi\\/2]$.
Concentrating on $2\\alpha+\\gamma$, we have that the integral along the big half-circle $\\Gamma_R$ gives $n \\pi i$, from which it follows that\n\\begin{multline*}\n2 \\alpha + \\gamma - \\frac n2 = \\frac 1{2 \\pi} \\lim_{R \\to + \\infty} \\int_R^{-R} \\frac{p'(i s)}{p(i s)} ds = \\frac 1{2 \\pi} \\lim_{R \\to + \\infty} \\int_R^{-R} \\frac{p'_i(s) - i p'_r(s)}{p_r(s) + i p_i(s)} ds = \\\\\n= \\frac 1{2 \\pi} \\lim_{R \\to + \\infty} \\int_R^{-R} \\frac{(p'_i(s)p_r(s) - p'_r(s)p_i(s)) - i (p'_r(s) p_r(s) + p'_i(s) p_i(s))}{p^2_r(s)+ p^2_i(s)} ds = \\\\\n= \\frac 1{2 \\pi} \\lim_{R \\to + \\infty} \\int_R^{-R} \\left[\\arg(p_r(s) + i p_i(s)) - \\frac 12 i\\log(p^2_r(s) + p^2_i(s))\\right]'ds = - \\wind(p_r + i p_i),\n\\end{multline*}\nwhere $p(i s) = p_r(s) + i p_i(s)$, the two polynomials $p_r, p_i$ are related to the $p^r,p^i$ introduced in \\eqref{pr pi} by the relation $p_r = p^r$, $p_i = - s p^i$, and $\\wind(p_r + i p_i)$ is the winding number around zero of the curve parameterised by\n\\[\np_r+ i p_i : \\mathbb R \\to \\mathbb C, \\qquad s \\mapsto p_r(s) + i p_i(s) = p(i s).\n\\]\nA similar argument can be used in the other half-plane, and it gives\n\\[\n2 \\beta + \\delta - \\frac n2 = \\wind(p_r + i p_i).\n\\]\nObserve that, by simple considerations on the asymptotics of the curve $s \\mapsto p_r(s) + i p_i(s)$, the winding number is an integer if $m$ is even and a half-integer if $m$ is odd. Moreover, the derivative of $\\arg(p_r + i p_i)$ is a rational function with numerator a polynomial in $s^2$ of degree at most $2m-2$ and denominator a polynomial in $s^2$ of degree $2 m$. The integral can in turn be computed using the residue theorem once again.
It follows that\n\\begin{thm}\\label{indices}\nWith the notations of the previous sections $\\gamma$ and $\\delta$ can be computed using Sturm's theorem, and\n\\[\n\\alpha = \\frac12 \\left( \\frac n2 - \\wind(p_r + i p_i) - \\gamma \\right), \\qquad \\beta = \\frac 12 \\left( \\frac n2 + \\wind(p_r + i p_i) - \\delta \\right).\n\\]\n\\end{thm}\n\nLet us explicitly write the formulas of Theorem~\\ref{indices} in the low-degree cases. We only compute $\\gamma$ and $\\alpha$, since the other two indices $\\delta$ and $\\beta$ lead to similar expressions. When $n=2$ we have\n\\[\n\\gamma = \\var\\left(d_2,-d_1, d_1^2-4 d_2\\right) - \\var\\left(1,d_1^2-4 d_2\\right)\n\\]\nand\n\\[\n\\wind = \\frac{1}{2\\pi} \\int_{-\\infty}^{+ \\infty} \\frac{- d_1 \\mu ^2- d_1 d_2}{\\mu ^4 + (d_1^2 -2 d_2) \\mu ^2 + d_2^2} d\\mu.\n\\]\n\nWhen $n=3$ we have\n\\begin{multline*}\n\\gamma = \\var\\left(d_3,-d_2,d_1 d_2- 9 d_3,4 d_3 d_1^3-d_2^2 d_1^2-18 d_2 d_3 d_1+4 d_2^3+27 d_3^2\\right) + \\\\\n- \\var\\left(-1,3 d_2- d_1^2,4 d_3 d_1^3-d_2^2 d_1^2-18 d_2 d_3 d_1+4 d_2^3+27 d_3^2\\right),\n\\end{multline*}\n\\[\n\\wind = \\frac 1{2\\pi} \\int_{-\\infty}^{+ \\infty} \\frac{-d_1 \\mu ^4 + (3 d_3 - d_1 d_2) \\mu ^2 - d_2 d_3}{\\mu ^6 + (d_1^2 -2 d_2) \\mu ^4+ (d_2^2 -2 d_1 d_3)\\mu ^2 + d_3^2} d\\mu.\n\\]\nWhen $n=4$ the expressions for $\\gamma$ become cumbersome, while\n\\[\n\\wind = \\frac 1{2\\pi} \\int_{-\\infty}^{+ \\infty} \\frac{-d_1 \\mu ^6 + (3 d_3 - d_1 d_2) \\mu ^4 + (3 d_1 d_4 - d_2 d_3) \\mu ^2 - d_3 d_4}{\\mu ^8 + (d_1^2 - 2 d_2) \\mu ^6 + (d_2^2 -2 d_1 d_3 + 2 d_4) \\mu ^4 + (d_3^2 - 2 d_2 d_4) \\mu ^2+d_4^2} d\\mu.\n\\]\n\n\n\\section{Application}\\label{application}\n\nTo conclude, we want to show how the algebraic equations that define $\\mathcal Z$, $\\mathcal D$, and $\\mathcal R$ become useful in an application. 
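Before turning to the example, we note that the recipe of Theorem~\ref{indices} is straightforward to script. The sketch below is our own illustration (not code from the paper): the Sturm part uses exact rational arithmetic, while the winding number is approximated by sampling the argument of $p(is)$ along a large interval.

```python
from fractions import Fraction
from math import atan2, pi

def polyrem(f, g):
    # remainder of the Euclidean division of f by g (coefficients, highest first)
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    while len(f) >= len(g):
        q = f[0] / g[0]
        f = [a - q * b for a, b in zip(f, g + [0] * (len(f) - len(g)))][1:]
        while f and f[0] == 0:
            f = f[1:]
    return f

def var(values):
    s = [v for v in values if v != 0]
    return sum(1 for u, v in zip(s, s[1:]) if u * v < 0)

def sturm_counts(p):
    """gamma, delta: numbers of positive and negative real roots of a
    squarefree polynomial p (coefficient list, highest degree first)."""
    seq = [[Fraction(c) for c in p],
           [Fraction(c) * (len(p) - 1 - i) for i, c in enumerate(p[:-1])]]
    while len(seq[-1]) > 1:
        r = polyrem(seq[-2], seq[-1])
        if not r:
            break
        seq.append([-c for c in r])
    at_zero = [q[-1] for q in seq]                      # constant terms
    at_pinf = [q[0] for q in seq]                       # leading coefficients
    at_minf = [q[0] * (-1) ** (len(q) - 1) for q in seq]
    return var(at_zero) - var(at_pinf), var(at_minf) - var(at_zero)

def winding(p, R=1000.0, N=200000):
    """Approximate winding number of s -> p(i s) for s from -R to R."""
    def value(s):
        acc = 0j
        for c in p:
            acc = acc * complex(0.0, s) + c
        return acc
    total, prev = 0.0, None
    for k in range(N + 1):
        z = value(-R + 2 * R * k / N)
        ang = atan2(z.imag, z.real)
        if prev is not None:
            d = ang - prev
            while d > pi:
                d -= 2 * pi
            while d < -pi:
                d += 2 * pi
            total += d
        prev = ang
    return total / (2 * pi)

# p = l^3 + l^2 - 2, roots 1 and -1 +- i: alpha=0, beta=1, gamma=1, delta=0
p = [1, 1, 0, -2]
gamma, delta = sturm_counts(p)
w = winding(p)                                   # close to 1/2 here
alpha = round(((len(p) - 1) / 2 - w - gamma) / 2)
beta  = round(((len(p) - 1) / 2 + w - delta) / 2)
print(alpha, beta, gamma, delta)
```

The crude argument-sampling estimate of the winding number is adequate here because the curve $s \mapsto p(is)$ turns slowly between consecutive samples; a residue-based evaluation, as in the text, would be exact.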
Let us consider the celebrated Lorenz system, that is, the vector field\n\\[\nX = \\begin{pmatrix} a (y - x) \\\\ b x - y - x z \\\\ x y - c z \\end{pmatrix}.\n\\]\nThe equilibria of $X$ are $e = (0,0,0)$ and $e_\\pm = (\\pm \\sqrt{c(b-1)}, \\pm \\sqrt{c(b-1)},b-1)$. Let us consider the equilibrium $e$, which always exists. The linearisation, the characteristic polynomial $p$, and the associated polynomials $q^r,q^i$ at such equilibrium are\n\\[\nJX_e = \\begin{pmatrix} -a & b & 0 \\\\ a & -1 & 0 \\\\ 0 & 0 & -c \\end{pmatrix}, \\qquad \\begin{cases} p = -\\lambda ^3 - \\lambda ^2 (1+a+c) + \\lambda (a b-a c-a-c)+ a c (b -1) \\\\\nq^r = (a+c+1)\\nu + a c (b-1)\\\\\nq^i = \\nu + a b-a c-a-c.\n\\end{cases}\n\\]\n\nIn parameter space $(a,b,c)$ we have that\n\\[\n\\zeta = \\zeta_1 \\, \\zeta_2 \\, \\zeta_3, \\text{ with } \\quad \\zeta_1 = a, \\quad \\zeta_2 = b-1, \\quad \\zeta_3 = c,\n\\]\n\\[\n\\rho = \\rho_1 \\rho_2, \\text{ with } \\quad \\rho_1 = 1+a, \\quad \\rho_2 = a - a b + c + a c + c^2, \\quad \\text{and } \\quad \\sigma = a - a b + c + a c,\n\\]\n\\[\n\\delta = \\delta_1 \\delta_2^2, \\text{ with } \\quad \\delta_1 = (-1 + a)^2 + 4 a b, \\quad \\delta_2 = (-1 + c) c - a (-1 + b + c).\n\\]\nThis allows us to draw the marginal loci in Figure~\\ref{Lorentz e0}. The cumulative picture of the marginal locus and a section at $c=2$ with the indication of the spectral types are shown in Figure~\\ref{Lorentz e0 bis}.\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=4cm]{Lorenz0a.pdf}\\includegraphics[width=4cm]{Lorenz0b.pdf}\n\\includegraphics[width=4cm]{Lorenz0c.pdf}\n\\end{center}\n\\caption{In the first pane the determinant locus $\\mathcal Z$; in the second pane the discriminant locus $\\mathcal D$, which is the union of two varieties, one across which $\\delta$ changes sign (solid green) and another across which $\\delta$ does not change sign (transparent green); in the third pane the resultant locus $\\widetilde{\\mathcal R}$ in solid and transparent red.
In solid red its semialgebraic subvariety $\\mathcal R$.}\\label{Lorentz e0}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=7cm]{Lorenz0d.pdf} \\qquad \\includegraphics[width=7cm]{Lorenz0e.pdf}\n\\end{center}\n\\caption{In the first pane all the relevant marginal loci together; in the second pane a section at $c = 2$ of the three-dimensional plot with the indication of the spectral types of the equilibrium $e_0$. The transparent green curve is a discriminant locus in which two negative real eigenvalues touch but on both sides of the curve they separate in the real axis.}\\label{Lorentz e0 bis}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nIn this article we used a few fundamental facts:\n\\begin{itemize}\n\\item the marginal loci are stratified, and their regular stratum corresponds to a simple degeneracy. By a simple degeneracy we mean that either zero is a simple root of $p$ ($\\mathcal Z$), or $p$ has a double root in the real line ($\\mathcal D$), or $p$ has two roots in the imaginary axis ($\\mathcal R$). The strata with higher codimension correspond to higher degeneracies (e.g.
three roots that coincide in the real axis, two couples of complex conjugate roots that coincide);\n\\item generically, across the regular stratum of the marginal loci, the corresponding vanishing function ($\\zeta$, $\\delta$, or $\\rho$) changes sign, and correspondingly one root of $p$ changes sign crossing $\\mathcal Z$, two real roots of $p$ turn into a couple of complex conjugate numbers crossing $\\mathcal D$, and two complex conjugate roots of $p$ have their real part change sign crossing $\\mathcal R$;\n\\item if, crossing a marginal locus, the corresponding vanishing function does not change sign, then the transverse generic change of spectral type of the previous point does not take place;\n\\item the penultimate Euclid's remainder of two polynomials $p,q$ is generically a polynomial of degree one whose zero is the unique common zero of the two polynomials when the ultimate Euclid's remainder, which is the resultant of $p,q$, vanishes (if $q = p'$ the same is true with the double root of $p$ instead of the unique common zero and the discriminant instead of the resultant).\n\\end{itemize}\n\nThese facts are basic notions of singularity theory and algebraic geometry, and we think that giving formal justifications in this article would obscure the relevant information given here.
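As an illustration, these facts can be verified on the Lorenz computation of Section~\ref{application}: at a point of parameter space where $\rho$ vanishes with $\sigma > 0$, the characteristic polynomial at $e = (0,0,0)$ must possess the purely imaginary pair $\pm i\sqrt{\sigma}$. The following sketch is our own; the chosen marginal point $a = -1$ is algebraically admissible, though not a physically standard Lorenz regime.

```python
a, b, c = -1.0, 2.0, 1.0   # chosen so that rho_1 = 1 + a = 0 and sigma > 0

rho   = (1 + a) * (a - a*b + c + a*c + c**2)   # factorisation of the resultant
sigma = a - a*b + c + a*c                      # here sigma = 1

def p(lam):
    """Characteristic polynomial of JX_e at e = (0, 0, 0)."""
    return (-lam**3 - lam**2 * (1 + a + c)
            + lam * (a*b - a*c - a - c) + a*c*(b - 1))

omega = sigma ** 0.5
print(rho, p(1j * omega), p(-1j * omega))   # all of these vanish
```

At this point $p(\lambda) = -(\lambda+1)(\lambda^2+1)$, so the equilibrium indeed carries the marginal pair $\pm i$ together with the real eigenvalue $-1$.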
The applications of these ideas to relevant parameter-dependent systems will prove extremely useful.\n\nWe found it enlightening to numerically draw the loci in parameter space, numerically compute the location of the zeroes of the characteristic polynomial in the complex plane, numerically compute the indices $\\alpha,\\beta,\\gamma,\\delta$ through the variations of Sturm's sequences and the winding number given in Section~\\ref{7}, and visualise the changes of spectral type with a Mathematica manipulation.\n\n\n\\section*{Acknowledgments}\nThe author acknowledges the financial support of Universit\u00e0 degli Studi di Catania, progetto PIACERI \\emph{Analisi qualitativa e quantitativa per sistemi dinamici finito e infinito dimensionali con applicazioni a biomatematica, meccanica, e termodinamica estesa classica e quantistica}, PRIN 2017YBKNCE \\emph{Multiscale phenomena in Continuum Mechanics: singular limits, off-equilibrium and transitions}, and GNFM (INdAM). I also thank Francesco Russo for fruitful conversations.\n\n\\printbibliography[heading=bibliography]\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Properties of the secret $\\pi$-calculus}\n\\label{sec:examples} \nIn this section we discuss some algebraic\nproperties of the secret $\\pi$-calculus, and we\nshow how we can implement the name matching operator.
Lastly we provide an example\nof the deployment of a mandatory access control policy that is inspired by the D-Bus\ntechnology~\\cite{dbus}.\nIn the following, we write $P\\not\\stackrel{\\bullet}{\\cong} Q$ to indicate that $(P,Q)\\not\\in\\;\\stackrel{\\bullet}{\\cong}$.\nWe also write $\\SENDn{x}{}$, omitting the output message whenever\nit is irrelevant, and use the notation $\\SR B P$ to indicate the process\n$\\SR {b_1} \\cdots\\SR {b_n} P$ whenever $B=\\{b_1,\\dots,b_n\\}$.\n\n\\paragraph{\\bf Algebraic equalities and inequalities} \nThe first inequality illustrates the mechanism of blocked names.\n\\begin{align}\nx(y\\forbids B).P \\not\\stackrel{\\bullet}{\\cong} x(y\\forbids B').P &&\nB\\ne B'\\label{eq:forbids}\n\\end{align}\nTo prove (\\ref{eq:forbids}) let $z\\in B'$, $z\\not\\in B$ and consider the context \n $C[-]\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR {B,B'}[\\SEND xz\\SENDn\\omega{ }\\mid - ]$ with\n$\\omega$ free, $\\omega\\not\\in\\fv(P)$. \nBy applying \\reductionrulename{Com} followed by applications of \\reductionrulename{Hide} we \nhave that $C[x(y\\forbids B).P ]\\osred \\SR {B,B'}[\\SENDn\\omega{ }\\mid P\\subs zy] $, that\nis \n$C[x(y\\forbids B).P]\\mathrel{\\!\\Downarrow}_{\\bar\\omega}$. In contrast, we have that \n\\mbox{$C[x(y\\forbids B').P]\\!\\not\\Downarrow_{\\bar\\omega}$}, because of $z\\in B'$. \nThe case $B'\\subseteq B$ is analogous.\n\nWe have a similar result for accepted names.\n\\begin{align}\nx[y\\accept A].P \\not\\stackrel{\\bullet}{\\cong} x[y\\accept A'].P &&\nA\\ne A'\\label{eq:accepts}\n\\end{align}\nA distinguishing context is $C[-]\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SEND xa\\SENDn\\omega{ }\\mid - $ \nwhere $\\omega$ is fresh and $a\\in A, a\\not\\in A'$ if $A\\not\\subset A'$, and \n$a\\in A', a\\not\\in A$ otherwise.\n\nThe next inequality illustrates the discriminating power of the\n{\\it spy}.
\n\\begin{align} \n\\NR x (\\SENDn xz \\mid x(y) )&\\not\\stackrel{\\bullet}{\\cong} \\INACT\n\\label{eq:weak}\n\\end{align}\nTo prove (\\ref{eq:weak}), consider the context $C[-]=\\keyword{spy} .\n{\\SENDn \\omega{}}\\mid -$.\nBy applying \\sreductionrulename{Com} and \\reductionrulename{New} followed by \\rstruct we infer \n$C[\\NR x (\\SENDn xz \\mid x(y))]\\osred \\SENDn\\omega{}$: that is,\n$C[\\NR x (\\SENDn xz \\mid x(y))]\\mathrel{\\!\\Downarrow}_{\\bar{\\omega}}$,\nwhile $C[\\INACT]\\!\\not\\Downarrow_{\\bar \\omega}$. \n\nThe invisibility of communications protected \nby the \\emph{hide} operator is established by means of the equation below, which is \nproved\nby co-induction.\n\\begin{align} \n\\SR x [\\SENDn xz \\mid x(y).Q] &\\stackrel{\\bullet}{\\cong}\\SR x[Q\\subs zy] \\label{eq:hide-weak}\n\\end{align}\n\n\n\nThe last equation states the impossibility of the extrusion of hidden channels. \n\\begin{align}\n\\SR x [ \\SENDn zx ]\\stackrel{\\bullet}{\\cong} \\INACT \\label{eq:strong-fs}\n\\end{align}\n \n\n\\paragraph{\\bf Implementing name matching} \nName matching is not needed as an operator in our calculus\n(cf.~\\cite{CarboneM03}). We show this \nby providing a semantics-preserving translation of the if-then-else\nconstruct~\\cite{Hen07}. \nConsider the process\n$\\IF {x=y}PQ$ which\nreduces to $P$ whenever $x=y$, and reduces to $Q$ otherwise.\nLet $Z\\;\\stackrel{\\text{\\scriptsize def}}{=}\\;\\fv(\\IF\n{x=y}PQ)$; therefore there are names $z_1,\\dots, z_n$, $n\\geq 0$, s.t. \n$Z=\\{x,z_1,\\dots,z_n\\}$. Let $I=\\{1,\\dots,n\\}$ and assume~$k$ fresh.
We define:\n\\begin{align*}\n\\encSQA{\\IF {x=y}PQ} &\\;\\stackrel{\\text{\\scriptsize def}}{=}\\;\n \\SR k[y[w\\accept k] \\mid \\SEND xk(P\\uplus k)\\mid_{I}\\SEND {z_i}k(Q\\uplus\nk) ]\n\\end{align*} \nWhenever $x=y$, the only possible reduction arises between\nthe trusted input $y[w\\accept k]$ and $\\SEND xk(P\\uplus k)$, leading to\n$P'\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR k[P\\uplus k \\mid_I \\SEND\n{z_i}k(Q\\uplus k)]$. Note that $P$ and $P'$ have the same interactions with\nthe context, because $k$ is blocked in all threads of $P'$: therefore $Q$ cannot be\nunblocked. \nThis result can be formalized by relying on the behavioural theory~\\footnote{Note that\nobservational equivalence is not\npreserved by input-prefixing; the outlined translation could indeed be sensitive to\nname aliasing.} of the secret\n$\\pi$-calculus.\n\n\\noindent\nWe infer the following equation:\n\\begin{equation}\n\\encSQA{\\IF {x=x}PQ}\\cong P \\label{eq:match}\n\\end{equation}\n \n\\medskip\\noindent\nConsider now the case $x\\ne y$ and let $y=z_1$.\nThe matching process reduces to the rearranged process \n$\\SR k[\\SEND xk(P\\uplus k) \\mid Q\\uplus k \\mid_{ \n\\{2,\\dots,n\\}}\\SENDn{z_i}k(Q\\uplus k)]$, which has the same behaviour\nas~$Q$:\n\\begin{equation}\n\\encSQA{\\IF {x=y}PQ}\\cong Q \\qquad x\\ne y \\label{eq:mismatch}\n\\end{equation} \n\n\\paragraph{\\bf Modeling dedicated channels}\nSecurity mechanisms based on dedicated\nchannels can be naturally modeled in the secret $\\pi$-calculus. \nD-Bus~\\cite{dbus} is an IPC system for software applications \nthat is used in many desktop environments. The applications of each user\nshare a private bus for asynchronous message-passing\ncommunication;\na system bus\npermits messages to be broadcast among applications of different users.
\nVersions earlier than $0.36$ contain an erroneous access policy for\nchannels which allows users to send and listen to messages on another user's channel\nif the address of the socket is known. We model this vulnerability \nby means of an {\\it internal } attacker that leaks the user's\nchannel. In the specification below, two applications of a user~$U_1$ utilize\na private bus to exchange a password; in fact, the password can be intercepted by the\nuser~$U_2$ through the malicious\ncode~$!\\SENDn {\\sys}{c}$ of $U_1$, which publishes~$c$ on the system bus. \n \\begin{align}\n U_1&\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\NR{c}(! \\SENDn {\\sys}{c}\\mid \n\\NR{\\pwd}\\SENDn {c}{\\pwd} \\mid \\RECEIVE {c}x{P} ) \n& U_2&\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\sys(x).x(y_{pwd}).Q \n\\end{align} \n\nThe patch released by Fedora restricts the access to the\nuser's bus: only applications with the same user-id can\nhave access. We stress that this policy is mandatory: that is, the user cannot\nchange it. By using the secret \\mbox{$\\pi$-calculus} we can easily patch $U_1$ by hiding\nthe\nbus: \n$U' \\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR{c}[! \\SENDn {\\sys}{c}\\mid \n\\NR{\\pwd}(\\SENDn {c}{\\pwd} ) \\mid \\RECEIVE {c}x{P}]$. \nThe following equation, which can be proved co-inductively, states that the\npolicy is fulfilled even in the presence\nof internal attacks: \n\\begin{align} \nU' &\\stackrel{\\bullet}{\\cong} \\SR{c}[\\NR{\\pwd}({P}\\subs{\\pwd} x)] \\label{eq:dbus}\n\\end{align}\n \n\n\\shrink{\nWhile we do not have a formal separation result, we believe that enforcing such\nmandatory policies in the (untyped) $\\pi$-calculus is difficult. In contrast, our\ncalculus permits a natural modeling of security mechanisms based on dedicated channels\nthat cannot be disclosed. \n}\n\n\n\n\\section{Observational equivalence}\n\\label{sec:observational}\n \nIn this section we define a notion of behavioral equivalence based on observables, or\nbarbs.
As the reader will notice, a distinctive feature of our observational theory is\nthat trusted inputs are visible only under certain conditions, namely that the context\nknows at least one name that is declared as accepted. Conversely, processes trying to send\na name protected by a hide declaration are not visible at all.\nThe choice to work in a synchronous setting\npermits us to emphasize the differences between our theory and that of the\n$\\pi$-calculus. However, the same results would hold for a secret asynchronous\n$\\pi$-calculus, although the contrast would be less explicit as input barbs would not be\nobservable.\n\nWe say that a name $x$ is bound in $P$ if $x\\in\\bv(P)$.\nAn occurrence of $y$ is hidden in $P$ if such occurrence of $y$ appears\nin the scope of a {\\sf hide} operator in~$P$.\n \n\\begin{definition}[Barbs] \nWe define: \n\\begin{itemize}\n \\item $P\\mathrel{\\!\\downarrow}_{x}$ whenever $P\\equiv C[\\RECEIVET x{y\\accept A} Q]$ with $x$ not\nbound in $P$ and $A\\cap\\bv(P)\\ne A$, or whenever \n $P\\equiv C[\\RECEIVE x{y\\forbids B} Q]$ with $x$ not bound in $P$. \n \\item $P\\mathrel{\\!\\downarrow}_{\\overline x}$ whenever \n $P\\equiv C[\\SEND xyQ]$ with $x$ not bound in $P$ and~$y$ not hidden in~$P$.\n\\end{itemize}\n\\end{definition} \n\n\\noindent\nBased on this definition, we have that \n$P_1\\;\\stackrel{\\text{\\scriptsize def}}{=}\\;\\SR x \\RECEIVET z{y\\accept x}Q$, \n$P_2\\;\\stackrel{\\text{\\scriptsize def}}{=}\\;\\NR x \\RECEIVE x{y\\forbids B}Q$, and\n$P_3\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\RECEIVET z{y\\accept \\emptyset}Q$ do not exhibit a barb $z$, written\n$P_i \\mathrel{\\!\\not\\downarrow}_z$ for $i=1,2,3$. \nIn contrast, when $x\\ne z$ and $A\\cap \\{x\\}\\ne\\emptyset$ \nwe have that $\\NR x \\RECEIVET\nz{y\\accept\nA}P \\mathrel{\\!\\downarrow}_z$, and when $x\\ne z$ we have $\\SR x \\RECEIVE z{y\\forbids B}P \\mathrel{\\!\\downarrow}_z$.
\nWhenever $P\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR y \\SEND xvQ$ with $y\\ne x$, we have $P\\mathrel{\\!\\downarrow}_{\\overline x}$ if\n$y\\ne v$, and $P\\mathrel{\\!\\not\\downarrow}_{\\overline x}$ otherwise.\nWeak barbs are defined up to reductions: \nwe let $P\\mathrel{\\!\\Downarrow}_{x}$\nwhenever $P\\Rightarrow P'$ and $P'\\mathrel{\\!\\downarrow}_{x}$; similarly $ P\\mathrel{\\!\\Downarrow}_{\\overline x}$ whenever $\nP\\Rightarrow P'$ and $P'\\mathrel{\\!\\downarrow}_{\\overline x }$. \n\nFollowing the standard definition of observational equivalence, we aim at an \nequivalence relation that is sensitive to the barbs, is closed under reduction, and is\npreserved by certain contexts. \n\n\\begin{definition}[Barb preservation]\nA relation $\\,{\\cal R}\\,$ over processes is barb preserving if \n$P\\,{\\cal R}\\, Q$ and $P\\mathrel{\\!\\downarrow}_{x}$ imply $Q\\mathrel{\\!\\Downarrow}_{x}$, and \n$P\\,{\\cal R}\\, Q$ and $P\\mathrel{\\!\\downarrow}_{\\overline x}$ imply $Q \\mathrel{\\!\\Downarrow}_{\\overline x}$.\n\\end{definition}\n\nReduction closure ensures that the processes maintain their\ncorrespondence throughout the computation. \n\n\\begin{definition}[Reduction closure]\nA relation $\\,{\\cal R}\\,$ over processes is reduction-closed if $P\\,{\\cal R}\\, Q$\nand $P\\rightarrow P'$ imply that $Q\\Rightarrow Q'$ and \n$P'\\,{\\cal R}\\, Q'$ for some $Q'$.\n\\end{definition}\n\nWe require contextuality with respect to parallel composition and the new and the\nhide operators (cf. Section~\\ref{sec:pi-calculus}). \n\n\\begin{definition}[Contextuality]\nA relation $\\,{\\cal R}\\,$ over processes is contextual if $P\\,{\\cal R}\\, Q$ implies\n$C[P] \\,{\\cal R}\\, C[Q]$. \n\\end{definition}\n\n\\begin{definition}[Observational equivalence]\\label{def:obs-equivalence}\nObservational equivalence, noted $\\cong$, is the largest symmetric \nrelation over processes which is barb preserving, reduction closed\nand contextual.
\n\\end{definition} \n\nObservational equivalence is difficult to establish since it requires\nquantification over contexts. In the next section we will introduce a labelled transition\nsemantics for the secret $\\pi$-calculus, and show that the induced bisimulation coincides\nwith observational equivalence. Besides the theoretical interest, this will also be\nof help in proving that two processes are observationally equivalent. \n \n\\subsection{Characterization} \n\\label{sec:bisimulation}\n\\input{fig-lts-rev}\nThe characterization relies on labelled transitions of the form $P\\lts\\alpha P'$, \nwhere $\\alpha$ is one of\nthe following actions:\n\\[\n\\alpha = x(z)\\mid \\SENDn xz \\mid (z)\\SENDn xz\n\\mid \\tau \n\\]\nWe let $\\fv(x(z))=\\{x\\}$, $\\fv(\\SENDn xz)=\\{x,z\\}$, and \n$\\fv((z)\\SENDn xz)=\\{x\\}$. \nWe define $\\bv(x(z))=\\{z\\}$, $\\bv(\\SENDn xz)=\\emptyset$ and\n$\\bv( (z)\\SENDn xz)=\\{z\\}$. We let $\\fv(\\tau)=\\emptyset=\\bv(\\tau)$.\n\nThe transitions are defined by the rules in Figure~\\ref{fig:lts}. \nAction $ x(z)$ represents the receiving of a name $z$ on a\nchannel $x$. In rule \\ltsrulename{In}, a process\nof the form $x(y\\forbids B).P$ can receive a name $z$ over $x$, provided that $z$ is\nnot blocked ($z\\not\\in B$). The received name will\nreplace the formal parameter in the body of the continuation. \nRule \\linpt describes a trusted input, that is, a process\nof the form $\\RECEIVET x{y\\accept A}P$ that receives a name $z$ over $x$ whenever \n$z$ is accepted ($z\\in A$); the name~$z$ will replace all occurrences of~$y$ in $P$.\nThe action $ \\SENDn xy$ represents the output of a name $y$ over\n$x$. This move is performed in \\lout by the process $\\SEND xyP$ and leads to the\ncontinuation $P\\rhd B$.
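The side conditions of the two input rules amount to a simple boolean check on the received name. The following Python sketch is purely illustrative (the function name and the set-based representation of $A$ and $B$ are ours, not part of the calculus): a plain input $x(y\forbids B)$ accepts any name outside the blocked set, while a trusted input $x[y\accept A]$ accepts only names in the accepted set.

```python
def may_receive(z, blocked=None, accepted=None):
    """Side condition of the input rules, as a boolean check.

    Plain input   x(y ▷ B): accept any name z not in the blocked set B.
    Trusted input x[y ◁ A]: accept a name z only if it is in the accepted set A.
    """
    if accepted is not None:          # trusted input x[y ◁ A]
        return z in accepted
    blocked = blocked or set()        # plain input x(y ▷ B); empty B is ordinary input
    return z not in blocked
```

With an empty blocked set the check always succeeds, matching the observation that $x(y\forbids\emptyset).P$ is the standard $\pi$-calculus input.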
\nCommunication arises in rule \\lcom by means of a $\\tau$ action obtained by a\nsynchronization of an $ x(y)$ action with a $ \\SENDn xy$ action.\nAction $ (y)\\SENDn xy$ is fired when the name $y$ sent over $x$ is bound\nby the {\\sf new } operator and its scope is opened by using rule \\ltsrulename{Open}. \nThe scope of the {\\sf new } is closed by using rule \\lclose.\nIn this rule the scope of a name $y$ sent over $x$ is \nenlarged to include a process which executes a dual action $ x(y)$, \ngiving rise to a synchronization of the two threads depicted by an action $\\tau$. \nRule \\lres is standard for restriction. \nRule \\ltsrulename{Hide} says that process $\\SR x P $ performs an action $\\alpha$ inferred\nfrom $P$, provided that $\\alpha$ does not contain $x$.\nTherefore extrusion of hidden channels is not possible, as previously discussed; note\nindeed that this is the only rule applicable for \\emph{hide}. \nRule \\ltsrulename{Repl} performs a replication. \n \nWe have a standard notion of bisimilarity; in the following, we let $\\Lts{\\tau}$ be\nthe reflexive and transitive closure of $\\lts{\\tau}$.\n\n\\begin{definition}[Bisimilarity]\\label{def:bisimulation}\nA symmetric relation $\\,{\\cal R}\\,$ over processes is a bisimulation if\nwhenever $P\\,{\\cal R}\\, Q$ and $P\\lts{\\alpha} P'$ then there exists \na process $Q'$ such that \n$Q \\Lts{\\tau}\\lts{\\hat\\alpha}\\Lts{\\tau}Q'$ and \n$P'\\,{\\cal R}\\, Q'$, where $\\hat\\alpha$ is the empty string if $\\alpha=\\tau$, and\n$\\hat\\alpha=\\alpha$ otherwise. \nBisimilarity, noted $\\approx$, is the largest bisimulation.\n\\end{definition}\n\nThe following result establishes that bisimilarity can be used as a proof technique for\nobservational equivalence; the proof is by coinduction and relies on the closure of\nbisimilarity under the {\\sf new}, {\\sf hide} and parallel composition operators. \n \n\\begin{proposition}[Soundness]\\label{prop:soundness}\nIf $P\\approx Q$ then $P\\cong Q$.
\n\\end{proposition}\n\n\nTo prove the reverse direction, namely that behaviourally equivalent processes are\nbisimilar, we follow the approach of Hennessy~\\cite{Hen07} and proceed by co-induction,\nrelying on contexts $C_\\alpha$ which emit the desired barbs whenever they interact\nwith a process $P$ such that $P\\Lts{\\alpha} P'$, and vice versa. \nPerhaps interestingly, we can program a context to check if\na given name is fresh even if our syntax does not include a matching construct \n(cf.~\\cite{Hen07,BorealeS98}).\nIn Section~\\ref{sec:examples} we will show that in the secret $\\pi$-calculus the process \n$\\IF{x=y}PQ$ can be derived. \n\n\n\\begin{proposition}[Completeness]\n\\label{prop:completeness}\nIf $P\\cong Q$ then $P\\approx Q$. \n\\end{proposition} \n\\begin{proof}\nLet $P\\,{\\cal R}\\, Q$ whenever $P\\cong Q$ and assume that\n$P\\lts\\alpha P'$. We show that there is $Q'$ such that $Q\\Lts{\\hat\\alpha} Q'$\nand $P'\\equiv \\,{\\cal R}\\, \\equiv Q'$; this suffices to prove that $\\,{\\cal R}\\,$ is a bisimulation\nup to structural congruence (cf.~\\cite{SangiorgiM92}).\nWhenever $\\alpha=\\tau$, we use reduction-closure of $\\cong$ to find $Q'$ such that\n$Q\\Lts{} Q'$ with $P'\\cong Q'$. By relying on a lemma that establishes that\nreductions correspond to $\\tau$ actions, we infer that $Q\\Lts{\\tau} Q'$, which is the\ndesired result since $P'\\,{\\cal R}\\, Q'$. Otherwise assume $\\alpha\\ne\\tau$.\nWe exploit contextuality of $\\cong$ and infer that\n$C^A_\\alpha[P]\\cong C^A_\\alpha[Q]$ where we let $A=\\fv(P)\\cup\\fv(Q)$ and $C^A_\\alpha$ be\ndefined below. We let $A=\\{a_1,\\dots,a_n\\}$, $I=\\{1,\\dots,n\\}$, with $n\\geq 1$, and\nassume names $\\omega,\\psi_1,\\dots,\\psi_n$ such that\n$\\{\\omega,\\psi_1,\\dots,\\psi_n\\}\\cap A=\\emptyset$.
\n\\begin{align*}\nC^A_{x(y)}[-] &\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; - \\mid \\SEND xy\\SENDn \\omega{} \\\\\nC^A_\\alpha[-] &\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \n- \\mid \\SR k[x(z).( z[w\\accept k] \\mid \\SENDn \\omega{}) \n\\mid_{i\\in I} \\SEND {a_i}k\\SENDn{\\psi_i}{}]\n&& \\alpha=x\\oput y,(y)x\\oput y\\\\\nC'_y&\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR k [y[w\\accept k]\\mid \\SENDn \\omega{}\\mid_{i\\in I} \\SEND \n{a_i}k\\SENDn{\\psi_i}{}] &&\\forall i\\in I\\,.\\, y\\ne a_i \n\\\\\nC''_y &\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR k [\\SENDn \\omega{} \\mid\n\\SENDn{\\psi_l}{}\\mid_{i\\in I\\less l}\\SEND{a_i}k\\SENDn{\\psi_i}{} ]\n&&a_l=y\n\\end{align*} \nAssume $\\alpha=\\SENDn xy$. We have that there is $a_l\\in A, a_l=y$ such that\n$C^A_\\alpha[P]\\Lts{}\\equiv C_P\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; P'\\mid C''_y$. We find\na process $C_Q$ such that\n$C^A_\\alpha[Q]\\Lts{}C_Q\\cong C_P $.\nSince $C_P\\mathrel{\\!\\downarrow}_{\\bar \\omega}, \\mathrel{\\!\\downarrow}_{\\bar \\psi_l}$, this implies that \n$C_Q\\mathrel{\\!\\Downarrow}_{\\bar \\omega}, \\mathrel{\\!\\Downarrow}_{\\bar \\psi_l}$. Therefore the weak barb \n$\\bar \\omega$ of $C_Q$ has been unblocked because $Q$ emits a weak action $\\alpha'$ with\n$x$ as subject. Moreover, the object of $\\alpha'$ is~$y$, that is $\\alpha'=\\alpha$,\nbecause of the weak barb $\\bar \\psi_l$. Indeed the thread $\\SEND {a_l}k\\SENDn\n{\\psi_l}{}$ with $a_l=y$ can be unblocked only by $y[w\\accept k]$, because $k$ is protected by\nthe {\\sf hide} declaration. \nTherefore there is $Q'$ such that $Q\\Lts{\\alpha} Q'$ and $C_Q\\cong Q'\\mid C''_y$.\nWe conclude by showing that this implies $P'\\cong Q'$, and in turn $P'\\,{\\cal R}\\, Q'$, as\nrequired. \nAssume $\\alpha=(y)x\\oput y$. We have that $C^A_\\alpha[P]\\Lts{}\\equiv C_P\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; P'\\mid\nC'_y$. Since $y$ is fresh we have that $a_i\\ne y$ for all $a_i\\in A$.
Therefore\n$C_P\\!\\not\\Downarrow_{\\bar \\psi_i}$ for all $i\\in I$, because $k$ is protected by {\\sf hide}.\nWe easily obtain that there is $C_Q$ such that $C^A_\\alpha[Q]\\Lts{} C_Q\\cong C_P$ with\n$C_Q\\mathrel{\\!\\Downarrow}_{\\bar\\omega},\\!\\not\\Downarrow_{\\bar \\psi_i}$, for all $i\\in I$. This lets us infer that \nthere is $Q'$ such that $Q\\Lts{\\alpha} Q'$ and $C_Q\\cong Q'\\mid C'_y$, and the result then\nfollows by showing that $P'\\mid C'_y\\cong Q'\\mid C'_y$ implies $P'\\cong Q'$.\n\\end{proof} \n\nFull abstraction is obtained by combining\nPropositions~\\ref{prop:soundness} and~\\ref{prop:completeness}.\n\n\n\\begin{theorem}[Full Abstraction]\n\\label{theor:character}\n$\\cong\\ =\\ \\approx$. \n\\end{theorem}\n\n\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\nThe restriction operator is present in most process calculi. Its behaviour is crucial for\n\\emph{expressiveness} (e.g., for specifying unbounded linked structures, nonce generation\nand locality). \nIn the $\\pi$-calculus~\\cite{MPW92a,MPW92b}, it plays a prominent role: it provides \nfor the generation and extrusion of unique names. In CCS~\\cite{Milner80}, it is also\nfundamental but it does not provide for name extrusion: it limits the interface of a given\nprocess with its external world. In this paper we shall extend the $\\pi$-calculus with a\nhiding operator, called {\\sf hide}, that behaves similarly to the CCS restriction. The\nmotivation for our work comes from the realm of \\emph{secrecy} and \\emph{confidentiality}: we shall\nargue that {\\sf hide} allows us to express and guarantee secret communications.\n\n\\paragraph{Motivation.} Secrecy and confidentiality are major concerns in most systems of communicating agents. \nEither because some of the agents are untrusted, or because the communication uses insecure channels, there \nmay be the risk of sensitive information being leaked to potentially malicious entities.
\nThe price to pay for such security breaches may also be very high. \nIt is not surprising, therefore, that secrecy and confidentiality have become central issues in the \nformal specification and verification of communicating systems. \n\nThe $\\pi$-calculus and especially its variants enriched with mechanisms to\nexpress \ncryptographic operations, the spi calculus~\\cite{AbadiG99} and the applied\n$\\pi$-calculus~\\cite{AbadiF01}, have become \npopular formalisms for security applications. They all feature the operator {\\sf new} (restriction) \nand make crucial use of it in the definition of security protocols. \nThe prominent aspects of {\\sf new} are the capability of creating a new channel name, \nwhose use is restricted within a certain scope, and the possibility of enlarging its scope by communicating it to other processes. \nThe latter property is central to the most interesting feature of the $\\pi$-calculus: the \\emph{mobility} of the communication structure. \n\nAlthough in principle the restriction aspect of {\\sf new} should guarantee that the channel is used for communication \nwithin a secure environment only, the capability of extruding the scope leads to security problems. \nIn particular, it makes it unnatural to implement the communication using dedicated channels, and\nnon-dedicated channels are not secure by default. \nThe spi calculus and the applied $\\pi$-calculus do not assume, indeed, any \nsecurity guarantee on the channel, and implement security by using cryptographic encryption. \n\nLet us illustrate the problem with an example. 
\nThe following $\\pi$-calculus process describes a\nprotocol for the exchange of confidential information:\n\\[\nP= \\SENDn s{\\textrm{CreditCard}} \\mid\n\\RECEIVE sx\\IFs {x=\\textrm{OwnerCard}} (\\SENDn p{\\textrm{Ok}} \\mid \\SENDn ps) \n\\qquad p\\ne s\n\\]\nIn this specification, the thread on the left sends a\ncredit card number over the channel~$s$ to the thread on the right, which is waiting\nfor an input on the same channel. If the received card number is the expected one,\nthen the latter both sends an ack and forwards the communication channel~$s$ on a public\nchannel~$p$. \nThe problem is that, while the confidentiality of the information would require the context to be\nunable to interfere with the protocol and to steal the credit card number, \nthis is in fact not guaranteed in the $\\pi$-calculus, where \ninteraction with a parallel process waiting for input on channel~$s$ is allowed. \n \n\nTo amend this problem, the idea is to make the channel for the\nexchange of the secret information available only to the process $P$, \nrestricting its scope to $P$ with the declaration $\\NR s P$. \nThe $\\pi$-calculus semantics makes the exchange invisible to the context. This is formalized\nby the following observational equation stating that no $\\pi$-calculus context \ncan tell apart $P$ from its continuation:\n\\begin{align}\n\\NR s P\\cong^{\\textrm{obs}}_\\pi \\NR s \\IFs\n{\\,\\textrm{CreditCard}=\\textrm{OwnerCard}}(\\SENDn p{\\textrm{Ok}} \\mid \\SENDn ps) \n\\label{eq:inact} \n\\end{align}\n\nUnfortunately, preserving such behavioral\nequations when processes are deployed in untrusted\nenvironments is difficult since, as explained above, we cannot rely on dedicated\nchannels for\ncommunication on names created by the {\\sf new} operator. \nOne natural approach to cope with this problem is to map the private communication \nwithin the scope of the {\\sf new} into open communications protected by cryptography.
\n\n\nFor instance, the process $\\NR s P$ could be implemented as the spi calculus protocol\n$\\encSQ {\\NR sP}$ below, by using a public-key crypto-scheme.\nIn this implementation the creation of a $\\pi$-calculus channel $s$ is mapped into the \ncreation of a pair\nof spi calculus keys: a public key~$s^+$ and a private key~$s^-$.\nThe receiver performs decryption of the crypto-packet ${\\pack{\\textrm{CC}}{s^+}}$\nwith the private key $s^-$; the operation assigns the card number to the variable\nin the conditional test. \n\\begin{align*}\n\\encSQ {\\NR s P} \\;\\stackrel{\\text{\\scriptsize def}}{=}\\;& \\NR{s^+,s^-}\n(\\SEND {net}{\\pack {\\textrm{CC}}{s^+}}\\INACT \\mid {net}(y).\\DECRYPT y{\\pack\nx{s^-}}Q)\n\\\\\nQ\\;\\stackrel{\\text{\\scriptsize def}}{=}\\;& \\IFs {x=\\textrm{OC}}{ \\SENDn {net}{\\pack{\\textrm{Ok}}{p^+}} \\mid \\SENDn\n{net}{\\pack{s^+,s^-}{p^+} }} \n\\end{align*} \nUnfortunately, the naive protocol above suffers from a number of problems,\nthe most serious of which is the lack of forward secrecy~\\cite{abadi98}: this property \nwould guarantee that if keys are corrupted at some time~$t$ then the protocol steps\nthat occurred before~$t$ still preserve secrecy. \nIn particular, forward secrecy requires that the content of the packet~$\\pack\n{\\textrm{CC}}{s^+}$, which is the credit card number, is not disclosed if at some step of\nthe computation the context gains the decryption key~$s^-$. Stated differently, the\nimplementation $\\encSQ \\cdot$ should preserve the semantics of equation (\\ref{eq:inact}):\nthat is, it should be fully abstract.
\nIt is easy to see that this is not the case, since \na spi calculus context can first buffer the encrypted packet and subsequently, whenever it\ncomes into possession of the decryption key, retrieve the confidential information; this\nbreaks equation (\\ref{eq:inact}).\nWhile a solution to recover the behavioral theory of the $\\pi$-calculus is\navailable~\\cite{popl07}, the price to pay is a complex cryptographic\nprotocol that relies on a set of trusted authorities acting as proxies. \n\nBased on these considerations, in this paper we argue that the\nrestriction operator of the $\\pi$-calculus does not adequately ensure confidentiality. \nTo tackle this problem, we introduce an operator to\nprogram secret communications explicitly, called {\\sf hide}. \nFrom a programming language point of view, the envisaged use of the\noperator is to declare as secret a medium used for {\\it local} inter-process\ncommunication; examples include pipelines, message queues and the IPC\nmechanisms of microkernels. \nThe operator is static: that is, we assume that the scope of hidden\nchannels cannot be extruded. The motivation is that all processes using a private channel\nshall be included in the scope of its {\\sf hide} declaration; processes outside\nthe scope represent another location, and must not interfere with the protocol. \nSince {\\sf hide} does not allow the scope of secret channels to be extruded, we can use it to directly build specifications\nthat preserve forward secrecy.\nIn contrast, we regard the\nrestriction operator of the $\\pi$-calculus, {\\sf new}, as useful for creating a new \nchannel for message passing with scope extrusion, but as providing no secrecy\nguarantees.
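The operational contrast between the two binders can be summarised in one check: a {\sf new}-bound name may leave its scope when sent over a distinct channel (scope extrusion), whereas a {\sf hide}-bound name never may. A minimal Python sketch of this distinction, with invented names and a string encoding of binders that are ours and purely illustrative:

```python
def may_extrude(binder, channel, payload):
    """May the bound name `payload`, sent over `channel`, leave its scope?

    'new'-bound names can be extruded, provided the carrying channel
    differs from the extruded name; 'hide' is static, so hidden names
    can never leave their scope.
    """
    if binder == "hide":
        return False                          # hide forbids extrusion outright
    return binder == "new" and channel != payload
```

Under this reading, the forward-secrecy guarantee of {\sf hide} is a scoping property rather than a cryptographic one: there is simply no execution in which a hidden name escapes.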
\n\nTo emphasize the difference between {\\sf hide} and {\\sf new}, we introduce a\n \\emph{spy} context that represents a side-channel attack on the non-dedicated channels.\nIn practice, \n \\emph{spy} is able to detect whether there has been a communication on one of the\nchannels not protected by a {\\sf hide}, but \n is not able to retrieve its content.\n \n\n \n\n \\medskip\\noindent{\\bf {Contributions.}}\n We introduce the {\\it secret} $\\pi$-calculus as an \\ignore{orthogonal} extension of the\n$\\pi$-calculus with an operator representing confidentiality ({\\sf hide}). We \ndevelop its structural operational semantics and its observational theory. In particular,\nwe provide a reduction semantics, a labelled\ntransition semantics and an observational equivalence.\nWe show that the observational equivalence induced by the reduction semantics coincides\nwith the bisimilarity induced by the labelled transition semantics. \nTo illustrate the difference between {\\sf hide} and {\\sf new}, we shall also consider a\ndistinguished process context, called \\emph{spy}, representing a side-channel attack.\n \n \\medskip\\noindent{\\bf Plan of the paper}\n In the next section we introduce the syntax and the reduction semantics of\n the secret $\\pi$-calculus. \n In Section~\\ref{sec:observational} we present the observational equivalence, and a\ncharacterization based on labelled transition semantics, which we show to be sound\n and complete. \n In Section~\\ref{sec:spy} we introduce the {\\it spy} process, and we extend\nthe reduction semantics and the bisimulation method accordingly. \n In Section~\\ref{sec:examples} we discuss some algebraic \n equalities and inequalities of the secret $\\pi$-calculus, and we analyze some\ninteresting examples, notably an\nimplementation of name matching and a deployment of mandatory access control. \n Finally, Section~\\ref{sec:discussion} presents related work and concludes.
\n An extended version of the paper containing all proofs is available\nonline~\\cite{tech-report}.\n \n\\section{Secret $\\pi$-calculus}\n\\label{sec:pi-calculus}\n\nThis section introduces the syntax and the semantics of our calculus, \n the {\\it secret $\\pi$-calculus}. \nThe syntax of the processes in Figure~\\ref{fig:syntax} extends \nthat of the $\\pi$-calculus~\\cite{MPW92a,MPW92b} as follows:\n(1) We consider two binding operators: \n{\\sf new }, which -- as we will argue -- does not offer enough security \nguarantees, and {\\sf hide}, which serves to program secrecy.\n(2) We use two forms of restricted pattern matching in input, so\nthat we can prevent a process from receiving a (possibly empty) set of channels, or\nforce a process to receive only trusted channels. When in the first form the set of\nchannels is empty, we have the standard input of the $\\pi$-calculus. \n\\begin{figure}\n \\begin{align*} \n P,Q \\; ::= \\; & & \\text{Processes:} \n \\\\\n & \\RECEIVE x{y\\forbids{ B}}P & \\text{input} && \\NR{x}(P) &&\n\\text{restriction}\n \\\\\n & \\RECEIVET x{y\\accept{ A}}P& \\text{trusted input}\n && \\SR{x}[P] && \\text{secrecy} \n \\\\\n & \\SEND xyP & \\text{output} && \\INACT && \\text{inaction} \n \\\\\n &P\\mid Q & \\text{composition} && !P &&\n\\text{replication} \n \\end{align*}\n \\caption{Syntax of the secret $\\pi$-calculus}\\label{fig:syntax}\n \\end{figure}\nWe use an infinite set of names ${\\cal N}$, ranged over by $a,b,\\dots,x,y,z$, to represent\nchannel names and parameters, i.e. the subjects and the objects of communication, \nrespectively. \nWe let $A,B$ range over subsets of ${\\cal N}$.
\n \nA process of the form $\\RECEIVE x{y\\forbids B }P$ represents an input\nwhere the name $x$ is the input channel name, \n$y$ is a formal parameter which can appear in the continuation $P$, \nand~$B$ is the set of {\\it blocked} names that the process cannot receive.\nIn contrast, an input process of the form $\\RECEIVET x{y\\accept A }P$ \ndeclares the object names that the process can {\\it accept}: that is, \nthe process accepts in input a name $z$ only if $z\\in A$. \nThis makes it possible to program security protocols where only trusted names can be received. \nThe free and the bound names of such processes are defined as follows: \n$\\fv(x(y\\forbids B).P)=(\\fv(P)\\setminus \\{y\\})\\cup\\{x\\}\\cup B$ and\n$\\bv(x(y\\forbids B).P)=\\{y\\}\\cup \\bv(P)$, \n$\\fv(x[y\\accept A].P)=(\\fv(P)\\setminus \\{y\\})\\cup\\{x\\}\\cup A $ and\n$\\bv(x[y\\accept A].P)=\\{y\\}\\cup \\bv(P)$. \n\n\nProcesses $\\SEND xyP$, $\\NR x(P)$, $P\\mid Q$, $!P$, and $\\INACT$ are the $\\pi$-calculus\noperators respectively describing an output of a name~$y$ over channel~$x$,\nrestriction of~$x$ in $P$, parallel composition, replication and inaction;\nsee~\\cite{SanWalk01} for more details. \n\nThe process $\\SR x[P]$ represents a process $P$ in which the name $x$ is regarded as\n\\emph{secret}, and should not be accessible to any \nprocess external to $P$. $\\SR x[P]$ binds the occurrences of $x$ in $P$: \n$\\fv(\\SR x[P])=\\fv(P)\\setminus \\{x\\}$, and $\\bv(\\SR\nx[P])=\\{x\\}\\cup \\bv(P)$. \n \n\nContexts are processes containing a hole~$-$. We write $C[P]$ for\nthe process obtained by replacing~$-$ with $P$ in $C[-]$.\n\\begin{align*} \nC[-] &\\; ::= \\; - \\;\\mid\\; C[-]\\mid P \\;\\mid\\; P\\mid C[-] \\;\\mid\\; \\NR x[-]\\;\\mid\\; \\SR\nx[-]\n&& \\text{contexts}\n\\end{align*} \n\nWe write $x(y).P$ as shorthand for $x(y\\forbids \\emptyset).P$, and omit curly brackets in\n$x(y\\forbids\\{b\\}).P$ and $x[y\\accept\\{a\\}].P$.
When no ambiguity is possible, we will\nomit scope parentheses in $\NR x(P)$ and $\SR x[P]$.\nWe will often omit trailing~$\INACT$s.\n\nThe combination of the accept and the block construct makes it possible to design\nprocesses which are not \nsubject to interference attacks from the context. \nWe note that their role is dual: the accept operator prevents the reception (intrusion)\nof untrusted names from the environment, and its use is specified by the programmer. \nThe block mechanism prevents another process from sending (extruding) a secret name, \nand it is inserted automatically by the system to ensure the protection of such \nnames. \nOne may wonder whether we could have used just one form of (trusted) input, and declare\nthe names\nto be blocked by accepting all names in $\cal N$ except the intended ones. The main reason\nthat guided our choice is that we believe that our form of input with blocked names can be\neffectively implemented, for instance by using blacklists. Also, we think that there is a\nnice symmetry between processes $\RECEIVE x{y\forbids B}P $ and $\NR xP$, and between\nprocesses $\RECEIVET x{y\accept A}P $ and $\SR xP$.\n \n\nWe embed the block mechanism in the rules for structural congruence through the \noperation~$\uplus$ defined in Figure~\ref{fig:reduction}. {\it Blocked} names can indeed\nbe introduced both statically and dynamically, i.e. when structural congruence is\nperformed during the computation. We leave the exact time at which the system explicitly blocks the\nname in components as an implementation detail.
\nNote that in the second rule of the first line \nthe name $b$ is guaranteed to be different from all the names in $A$, because in the\ncongruence rule for \emph{hide} (cf. the same figure) the free names of $Q$ are\nrequired to be different from the name we want to hide, so alpha conversion must be\napplied.\n\n\n\n\n\input{fig-reduction-rev}\nFollowing standard lines, we define the semantics of our calculus\nvia a reduction\nrelation, also specified in Figure~\ref{fig:reduction}.\nWe assume a capture-free substitution operation $\subs zy$: \nthe process $P \subs zy$ is obtained from $P $ by substituting all the free occurrences of $y$ \n by $z$.\nAs usual, we use a structural congruence $\equiv$ to rearrange processes.\nSuch congruence includes the equivalence induced by alpha-conversion, and the relations defined in Figure~\ref{fig:reduction}.\nThe rules for the $\pi$-calculus operators (first line) are the standard ones.\nThe rules for inaction under a binder follow (second line).\nWe recall that the scope extrusion rule for \keyword{new} (third line) makes it possible to\nenlarge the scope of a name and let a process receive it. \nIn contrast, the scope extrusion rule for \keyword{hide} (fourth line) also\nenlarges the scope of a name, but at the same time it sets the name to \emph{blocked} for\nthe processes which are being included in the scope, thus preventing them from receiving the\nname. The last rule (fifth line) permits swapping the two binders. \n \nThe first rule for reduction, \reductionrulename{Com}, says that an input process of the form\n $\RECEIVE x{y\forbids B}P$ is allowed to synchronize with an output process $\SEND\nxzQ$ and receive the name~$z$ provided that~$z$ is not \emph{blocked}\n($z\not\in B$). \nThe result of the synchronization is the progression of both the receiver and the sender,\nwhere the formal parameter in the input's continuation is replaced by the name~$z$.
Note\nthat whenever $B=\emptyset$ we have the standard communication rule of the $\pi$-calculus.\nThe main novelty is represented by the rule for trusted communication \n \rtcom. This rule says that an output process can send a\nname $z$ over $x$ to a parallel process waiting for input on $x$, provided that $z$ is\nexplicitly declared as accepted ($z\in A$) by the receiver. If this is the case, the name\nwill replace the occurrences of the formal parameter in the input's continuation. \nRules \reductionrulename{New} and \reductionrulename{Hide} are for {\sf new} and for \n{\sf hide} respectively, and follow the same schema. \nThe rules for parallel composition, for replication, and for incorporating structural congruence\nare standard. \n\nWe let $P \Rightarrow P'$ whenever either (a) $P \rightarrow \cdots \rightarrow P'$, or \n(b) $P'=P$.\n\n\begin{example}\label{example:secrecy}\nWe show how {\sf hide} can be used to prevent the extrusion of a secret.\nConsider the \nprocess:\vspace{-.5em}\n\[ \nP \;\stackrel{\text{\scriptsize def}}{=}\; \SR {z}[ \SENDn xv ] \qquad x\ne z\n\]\nThe process $\SENDn xv$ can be interpreted as an internal attacker trying to\nleak the name $v$ to a context $C[-]\;\stackrel{\text{\scriptsize def}}{=}\; -\mid x(y).\SENDn{leak}y $.
\nBy using the structural rule for enlarging the scope of hide in\nFigure~\\ref{fig:reduction} we infer that $C[P] \\equiv \\SR {z}[ \\SENDn xv \\mid\nx(y\\forbids\nz).\\SENDn{leak}y]$.\nWhenever the name $v$ is not declared secret, that is whenever $v\\ne z$, the leak\ncannot be prevented: by applying \\reductionrulename{Com},\\reductionrulename{Hide}, and \\rstruct we have $C[P]\\osred\n\\SENDn{leak}v $.\nConversely, when the name $v$ is protected by {\\sf hide}, that is $v=z$, we do\nnot have any interaction and secrecy is preserved.\n\\end{example} \n \n\n\\begin{example}\\label{example:accepted}\nThe combined use of the accept and block sets permits to avoid\ninterference with the context. \nConsider the process below, where $n>0$:\\vspace{-.6em}\n\\begin{align*}\nP &\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\SR {z_1} \\cdots \\SR {z_n}[\\cdots[x[y\\accept Z].P \\mid \\SENDn\nx{z_i} ]\\cdots] \n&&\nZ \\subseteq \\{z_1,\\cdots z_n\\},i\\in \\{1,\\dots,n\\} \n\\end{align*}\nTake a context $C[-] \\;\\stackrel{\\text{\\scriptsize def}}{=}\\; -\\mid \\NR y !\\SENDn xy \\mid \n!x(w)$. Such context is unable to send the fresh name $y$ to $P$,\nbecause the input process in $P$ is programmed to accept only trusted names protected\nby {\\sf hide}. Dually, the context cannot\nreceive the protected name~$z_i$. \nTherefore $C$ and~$P$ cannot interact: $C[P]\\osred Q$ implies that a) \n$Q\\equiv C[\\SR {z_1} \\cdots \\SR\n{z_n}[\\cdots[P\\subs {z_i}y ]\\cdots]]$ or b) $Q\\equiv C[P]$.\n\\end{example}\n\n\n \n\\section{Related work}\n\\label{sec:discussion}\n\n\\ignore{ \nThe idea that the restriction operator of the $\\pi$-calculus ({\\sf new}) and its scope extrusion mechanism\ncan model the possession and communication of a secret goes back to the spi\ncalculus~\\cite{AbadiG99}. 
In this approach, one can devise specifications of protocols\nthat rely on channel-based communication protected by {\sf new}, and establish the\ndesired security properties by using equivalences representing indistinguishability even in the presence of a Dolev-Yao attacker.\nHowever, while the secrecy properties of some protocols can be enforced by relying on\ndedicated channels, this is not obvious when protocols describe\ninter-site communication over the Transport\/Internet layer. \nIt is also not obvious how to implement scope extrusion over dedicated channels. \nIn the presence of open, untrusted networks the secrecy of the protocol needs to be\nrecovered by means of other mechanisms, for instance by using asymmetric\ncryptography. However, preserving secrecy in the presence of scope\nextrusion is problematic~\cite{abadi98}, and eventually leads to complex cryptographic\nprotocols which rely on a set of certified authorities~\cite{popl07}. \nMoreover, the attacker can detect a communication, even if she cannot retrieve the content\nof the message. \n\nBased on these considerations, in this paper we argue that the\n{\sf new} operator of the $\pi$-calculus does not adequately represent confidentiality.\nWe enrich the $\pi$-calculus with another operator for security ({\em hide}), which\ndiffers from {\sf new} in that it forbids the extrusion of a name and hence has a static\nscope. To emphasize the\ndifference, we introduce a spy process that represents a side-channel attack and\nbreaks some of the standard security equations for {\sf new}.\n}\n\nMany analysis and programming techniques for security have been developed for process\ncalculi.
Among these, \nwe would mention the security analysis enforced by means of static and dynamic\ntype-checking (e.g.~\cite{CardelliGG05,Hennessy05,tgc05}), \nthe verification of secure implementations and protocols that are \nprotected by cryptographic encryption\n(e.g.~\cite{BorealeNP01,AbadiFG02,AbadiBF07,popl07}),\nand programming models that consider a notion of location \n(e.g.~\cite{Hen07,SewellV03,CastagnaVN05}).\n\n\nThe paper~\cite{CardelliGG05} introduces a type system for a $\pi$-calculus with groups\nthat makes it possible to control the distribution of resources: names can be received only by\nprocesses in the scope of the group. The intent is, as in our paper, to prevent the\naccidental or malicious leakage of secrets, even in the presence of untyped opponents. \nA limitation of~\cite{CardelliGG05} is that processes\nthat are not statically type-checked are interpreted as opponents trying to leak secrets. \nIn contrast, our aim is to consider systems where processes can dynamically join\nthe system at run-time; this permits us to analyze the secrecy of protocols composed of\ntrusted sub-systems that can grow in the number of participants.\nWhile devising an algorithm for type checking groups can be non-trivial \n(cf.~\cite{VasconcelosH93}), \nwe note that actual systems often do not rely on types, even for local\ncommunications. For instance D-Bus (cf. Section~\ref{sec:examples}) relies on a mandatory\naccess control policy enforced at the kernel level through process IDs. Our\nsemantics-based approach appears adequate to describe such low-level mechanisms. \n\n\nAs discussed in the introduction, concrete implementations of $\pi$-calculi models\ndo protect communications by means of cryptography. The problem of devising a secure,\nfully abstract implementation was first introduced\nin~\cite{abadi98} and subsequently tackled for the join calculus in~\cite{AbadiFG02}.
\nThe paper~\cite{BorealeNP01} introduces a bisimulation-based technique to prove\nequivalences of processes using cryptographic primitives; this can be used to show that\na protocol does preserve secrecy. We follow a similar\napproach and devise bisimulation semantics for establishing the secrecy of processes\nrunning in an environment where the distribution of channels is controlled. \nThe presence of a spy in our model is reminiscent of the network abstraction\nof~\cite{mik-mscs10}. In that paper, the network provides the low-level counterpart of\nthe model where attacks based on bit-string representations, interception, and\nforward\/reply\ncan be formalized. \n\nFrom the language design point of view, we share some similarity with the ideas\nbehind the boxed $\pi$-calculus~\cite{SewellV03}. \nA box in~\cite{SewellV03} acts as a wrapper in which we can confine untrusted processes;\ncommunication between the box and the context is subject to a fine-grained control that\nprevents the untrusted process from harming the protocol.\nOur {\sf hide} operator is based on the symmetric principle: processes\nwithin the scope of a {\sf hide} can run their protocol without being disturbed by the\ncontext outside it. \n \n\nAn interesting approach related to ours in spirit -- but not in conception or details --\nis D-fusion~\cite{BorealeBM04}. \nThe calculus has two forms of restriction: a\n\"$\nu$\" operator for name generation, and a \"$\lambda$\" operator that behaves like an\nexistential quantifier and can be seen as a generalization of an input binder. Both\noperators allow extrusion of the entities they declare but only the former guarantees\nuniqueness. In contrast, our {\sf hide} operator is meant neither as an existential\nquantifier nor as an input binder, and it prevents the extrusion of the name it declares.\n\n\medskip\noindent{\bf Acknowledgements }\n We wholeheartedly thank the extremely competent, anonymous reviewers of EXPRESS 2012.
\n They went beyond the call of duty in providing excellent reports which have been very helpful to improve our paper. \n\section{Distrusting communications protected by restriction}\n\label{sec:spy} \n\nIn this section we introduce a {\em spy} process that \nrepresents a side-channel attack against communications that occur on untrusted\nchannels, that is, channels that are not protected by {\sf hide}. \nWe assume that the spy is not able to retrieve the content of an exchange.\nThe spy abstraction models the ability of the context to detect interactions\nwhen the processes are implemented by means of network protocols which do not rely on \ndedicated channels, and therefore require some mechanism to enforce the secrecy of the\nmessage (e.g. cryptography). \nThis ability breaks some of the standard security\nequations for \nthe {\sf new} operator, which can be recovered by re-programming the protocol and making \nuse of the {\sf hide} operator.\nWe add to the syntax of the secret $\pi$-calculus the following process, where we\nlet \keyword{spy} be a reserved keyword. We let $P,Q,R$ range over \emph{spied\nprocesses}. \n\n \begin{align*} \n P,Q,R &\; ::= \; \cdots \;\mid\; \spyb P && \text{spied processes}\\\n S &\; ::= \; \{x\} \;\mid\; \emptyset && \text{spied set}\n \end{align*} \n \n\n\input{fig-spy-rev}\nWhen in $\spyb P$ the spied set $S$ is equal to $\{x\}$, noted\n$\keyword{spy}\accept x . P$, this makes explicit which (free) reductions\nthe spy shall observe. Note that listening on multiple names can be easily programmed by\nputting several spies in parallel. \nThe spy process $\keyword{spy}\accept\emptyset. P$, noted $\spy P$, will be used to\ndetect reductions protected by restriction.
\nWe let the free and bound names of\nthe\n\emph{spy} be defined as follows:\n$\fv(\spyb R)\;\stackrel{\text{\scriptsize def}}{=}\; S\cup\fv(R)$ and \n$\bv(\spyb R)\;\stackrel{\text{\scriptsize def}}{=}\; \bv(R)$. \n\nThe semantics of spied processes is\ndescribed by adding the communication rules in Figure~\ref{fig:spy} to those in\nFigure~\ref{fig:reduction}: \nthe rules describe a form of synchronization among three processes: a sender on\nchannel~$x$, a receiver on channel~$x$, and a {\it spy} on channel~$x$. In more detail,\nrule \sreductionrulename{Com} depicts a synchronization among an input\nof the form $\RECEIVE x{y\forbids B}P$, a sender and a spy, while rule \rstcom describes\na similar three-way synchronization but for a trusted input of the form $\RECEIVET x{y\accept \nA}P$. \n\nThe definition of observational equivalence \nfor spied processes is obtained by extending Definition~\ref{def:obs-equivalence}\nto the semantics in Figure~\ref{fig:spy}; we indicate the resulting equivalence with\n$\stackrel{\bullet}{\cong}$. This permits us to study the security of processes in the presence of the\n\emph{spy}. \n\n\n \n\input{fig-lts-spy-rev}\nTo make the picture clear, in Figure~\ref{fig:lts-spy} we introduce a labelled\ntransition semantics for spied processes. \nWe consider two new actions $?x$ and $!x$ corresponding\nrespectively to the presence of a \emph{spy} and to a signal of communication.\n\begin{align*}\n\alpha &\; ::= \; \cdots \mid\, ?x \mid !x \n\end{align*} \n\nWe assume the existence of a variable~$\nu\in\cal N$\nthat cannot occur in the process syntax,\nand we use it to signal restricted communications. \nIt is convenient to define the notion of (free) subject and object of an action.\nWe let $\operatorname{subj}(\alpha)\;\stackrel{\text{\scriptsize def}}{=}\; \{x\}$ whenever $\alpha=\SENDn xy,(y)\SENDn xy, x(y)$, and\ntake it to be empty otherwise.
We define $\operatorname{obj}(\alpha)\;\stackrel{\text{\scriptsize def}}{=}\; \{ y\}$ whenever $\alpha=\SENDn\nxy,x(y)$, and $\operatorname{obj}(\alpha)=\emptyset$ otherwise.\n\nThe lts in Figure~\ref{fig:lts-spy} introduces three new rules for the\n\emph{spy}, \sltsrulename{Spy}, \sltsrulename{Spy-Res} and \sltsrulename{Spy-Com}, and re-defines the rules for restriction, for\n{\sf hide} and for communication of Figure~\ref{fig:lts}. \nIn rule \sltsrulename{Spy} the process $\keyword{spy}:x. P$ can fire an action $?x$\nand progress to~$P$. %\nThe dual action, $!x$, is fired in rules \sltsrulename{Com} and \slclose whenever a communication\noccurs on a free channel~$x$. \nRule \sltsrulename{Spy-Com} describes the eavesdropping of a communication. \nA process of the form $\keyword{spy}.P$ can only fire an action $?\nu$ through\nrule \sltsrulename{Spy-Res}.\nIn rule \lres\t we use a partial function $\enc{\cdot}_x$ to relabel the action fired\nunderneath a restriction: we let\n$\enc{\alpha}_x\;\stackrel{\text{\scriptsize def}}{=}\;\alpha$ whenever $x\not\in\fv(\alpha)$, \n$\enc{!x}_x\;\stackrel{\text{\scriptsize def}}{=}\; !\nu$, $\enc{?x}_x\;\stackrel{\text{\scriptsize def}}{=}\; ?\nu$. This will be\nused to signal restricted communications, as explained above. \nIn contrast, in rule \ltsrulename{Hide} we use a relabeling partial function $\encH{\cdot}_x$ that\nmakes communications that occur under \emph{hide} invisible. We let\n$\encH{\alpha}_x\;\stackrel{\text{\scriptsize def}}{=}\;\alpha$ whenever $x\not\in\fv(\alpha)$, \n$\encH{!x}_x\;\stackrel{\text{\scriptsize def}}{=}\; \tau$ and $\encH{?x}_x\;\stackrel{\text{\scriptsize def}}{=}\; \tau$.
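The two relabelings admit a direct operational reading. The Python sketch below is a minimal illustration under our own string encoding of actions (`'!x'`, `'?x'`, `'tau'`, with `'nu'` standing for the reserved variable $\nu$); it covers only the $!x$/$?x$ actions, not the full action syntax.

```python
# Relabeling for restriction (rule Res): communications on the restricted
# name x stay observable, but are renamed to the reserved variable nu.
def enc_res(action, x):
    if action == "!" + x:
        return "!nu"
    if action == "?" + x:
        return "?nu"
    return action  # x does not occur in the action: unchanged

# Relabeling for hide (rule Hide): communications on the hidden name x
# become internal (tau), i.e. invisible to the spy.
def enc_hide(action, x):
    if action in ("!" + x, "?" + x):
        return "tau"
    return action
```

Under this encoding, a restriction on `a` turns `'!a'` into `'!nu'` (still detectable by the unrestricted spy), while a hide on `a` turns it into `'tau'`.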
\n \n\n\\begin{definition}[Bisimilarity]\nA symmetric relation $\\,{\\cal R}\\,$ over spied processes is a bisimulation if\nwhenever $R_1\\,{\\cal R}\\, R_2$ and $R_1\\lts{\\alpha}R'$ then there exists \na spied process $R''$ such that $R_2 \\Lts{\\tau}\\lts{\\hat\\alpha}\\Lts{\\tau}R''$ and \n$R'\\,{\\cal R}\\, R''$ where \n$\\hat\\tau$ is the empty string, and\n$\\hat\\alpha=\\alpha$ otherwise. \nBisimilarity, noted $\\stackrel{\\bullet}{\\approx}$, is the largest bisimulation.\n\\end{definition} \n\nBy using the same construction of Section~\\ref{sec:bisimulation}, we obtain the\nmain result of this section: observational equivalence for spied processes and \nbisimilarity coincide. As a by-product, we can also use bisimulation as a technique to\nprove that two processes cannot be distinguished by the {\\it spy}. \n\n\\begin{theorem}[Full Abstraction]\n\\label{theor:spy-fa}\n$\\stackrel{\\bullet}{\\cong} \\, =\\,\\stackrel{\\bullet}{\\approx}$. \n\\end{theorem}\n\\begin{proof}[Sketch of the proof]\nTo see that behavioural equivalence is included in bisimilarity, we proceed by\nco-induction as in the proof of Proposition~\\ref{prop:completeness} by \nrelying on contexts $C^A_\\alpha$ that detect whenever a process does emit a weak\naction $\\alpha$. Given a set of names $A$ such that $\\fv(\\alpha)\\subseteq A$\nand $\\omega\\not\\in A$ we define the following contexts to account for the new actions\n$!x$ and $?x$.\n\\begin{align*}\nC^A_{!x}[-]&\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\keyword{spy}\\accept x. {\\SENDn\\omega{}} &&x\\ne \\nu\n\\\\\nC^A_{?x}[-]&\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; x(y).\\SENDn\\omega{} \\mid \\SENDn x{} &&x\\ne \\nu\n\\\\\nC^A_{!\\nu}[-]&\\;\\stackrel{\\text{\\scriptsize def}}{=}\\; \\keyword{spy} . 
{\SENDn\omega{}} \n\\\nC^A_{?\nu}[-]&\;\stackrel{\text{\scriptsize def}}{=}\; \NR{x}(x(y).\SENDn\omega{} \mid \SENDn x{}) \n\end{align*}\nThe proof then proceeds routinely by following a schema similar to the one of\nProposition~\ref{prop:completeness}.\nThe reverse direction, namely that bisimilarity is contained in behavioural equivalence,\nis shown by proving that $\stackrel{\bullet}{\approx}$ is closed under the {\sf new}, {\sf hide}, and\nparallel composition operators. See~\cite{tech-report} for all the details.\n\end{proof}\n\n\section{Introduction}\label{sec:introduction}\n\n\IEEEPARstart{R}ecently, intelligent spectrum management has received more attention than in past years since it shows the ability to allocate the scarce frequency resource efficiently. It is one of the key techniques for next-generation communication systems to liberate the spectrum\cite{pahlavan2021understanding}. However, wireless service providers and airlines initiated a discussion about whether 5G can affect the safety of aircraft\cite{news2022}. The cornerstone of intelligent spectrum management, interference monitoring and modeling, can be a solution to forecast the interference and dynamically allocate the spectrum without affecting the safety of other systems. \nThe CRN (cognitive radio network) has been well researched since it plays an essential role in spectrum sharing\cite{okegbile2021stochastic}. It requires accurate interference modeling to protect the user's communication. Related work in\cite{yun2021intelligent} proposed an intelligent dynamic spectrum resource management mechanism that can coexist with other CRNs. Stochastic geometry-based models avoid extensive Monte Carlo simulations and provide methods for handling random spatial patterns.
This kind of model treats the locations of the base stations as points distributed\\cite{wang2016stochastic}, and they are widely used in analyzing the performance of CRN. Using stochastic geometry tools, authors in\\cite{lu2021stochastic} presented a comprehensive spatial-temporal analysis of large-scale communications systems.\n\n\n\\begin{figure*}[t!]\n\\centering\n\\hfill\n\\subfigure[]{\n \\begin{minipage}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth,scale=0.3]{New_Figures\/Picture1b.png}\n \\label{fig:1_1}\n \\end{minipage}\n}\n\\hfill\n\\subfigure[]{\n \\begin{minipage}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth,scale=0.3]{New_Figures\/Picture1a.png}\n \\label{fig:1_2}\n \\end{minipage}\n}\n\n\\caption{Measurement scenarios in our dataset, (a) Outdoor (b) Indoor}\n\\end{figure*}\n\n\\begin{figure*}[t!]\n\\begin{minipage}[b]{\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{New_Figures\/Picture2.png}\n \\end{minipage}\n \\caption{Trajectory and Measurement \\label{fig:2}}\n\\end{figure*}\n\nRelated works successfully establish models for Wi-Fi bands to determine whether the channel is busy. \\cite{hou2021modeling} measured the 2.4 GHz and 5GHz Wi-Fi bands and proves the current channel allocation is not efficient enough. 7 different distributions are evaluated using Kolmogorov-Smirnov (KS) distance, Kullback-Leibler (KL) divergence, and Bhattacharyya distance to compare the predictability. Authors in \\cite{hou2021modeling} monitored the 2.4 GHz and 5 GHz bands in a railway station with fixed location receivers. Spectrogram plays a vital role in the analysis of temporal features. To exploit the spectrogram using Deep Learning algorithms, \\cite{bhatti2021shared} proposes the Q-spectrogram and shows it is better for CNN than the traditional spectrogram using experimental data. 
The Deep Learning model trained on Q-spectrogram offers 99\\% accuracy in estimating the Wi-Fi traffic load (five density levels). Instead of using the locations of base stations, \\cite{al2018free} monitors 915-928 MHz ISM-band in Melbourne, Australia. The authors stated that the normalized histogram follows a log-normal distribution. The duty cycles are calculated across all the frequency bins as the key parameters for evaluating occupancy. The definition of whether a band is busy varies in the literature, But for Wi-Fi bands, the binarization of data can simplify the modeling. \\cite{al2018free} and \\cite{hou2021modeling} converted the data into binary patterns and counted the length of the continuous busy and idle durations with certain resolutions. The threshold for binarization also varies in the literature since it depends on the design purpose of the application. In this work, we follow a similar approach to analyze the interference behavior in 1.9-2.5 GHz. Most of the related works focus only on the unlicensed bands without considering the licensed band. The problem is if we can use the same method to model the interference in licensed bands and if we can monitor and predict the occupancy in all types of frequency bands. \nTo explore the possibility of allocating both licensed and unlicensed bands, we collected interference measurements in the spectrum in an urban area in Worcester, MA, and in a laboratory building of WPI. We proposed an interference model based on both temporal and spatial information. During the data collection, we added the GPS tag to each measurement to prove interference in the licensed band is highly related to location information. The rest of this paper is organized as follows; We first introduce the importance of spectrum monitoring and interference modeling. In part \\ref{sec:2}, we present the scenario and equipment in the data collection phase and show the result of spectrum monitoring. 
In Section~\ref{sec:3}, we construct the interference model and analyze the interference behavior based on the real data. In Section 4, we state the importance of interference modeling in intelligent spectrum management and present the contribution of this study.\n\n\n\n\section{The Measurement Scenarios and Data Analysis}\n\label{sec:2}\nIn this section, we will introduce the measurement scenarios and the dataset size and structure. The equipment used in the study is an Agilent E4407B ESA-E Spectrum Analyzer, which can measure 9 kHz to 26.5 GHz with 0.4 dB overall amplitude accuracy. The frequency of interest is 1.9 GHz to 2.5 GHz, including the LTE band and the 2.4 GHz ISM band. Therefore, we are able to compare the interference for different types of bands. Fig.\ref{fig:1_1} shows the outdoor measurement scenario. We put the spectrum analyzer on the back seat of the car and connected it with a laptop using a GPIB cable. Benefiting from PyVISA, we can easily control the equipment and retrieve data from it. The GPS receiver is also connected to the laptop to label the data with GPS coordinates. For the indoor environment shown in Fig.\ref{fig:1_2}, we put the spectrum analyzer on a cart and move it around the third floor of the Atwater Kent laboratory of WPI. We do not use GPS in this scenario since the GPS receiver performs poorly in the indoor environment, where it can have an error of up to tens of meters.\n\n\nFig.\ref{fig:2} shows a test drive on the selected route. We select an area near Worcester Common, Worcester, MA, an urban region with a large population. There are at least 5 cellular towers around. The left part of Fig.\ref{fig:2} is the map of the selected area, and the trajectory is marked with solid red dots. The relative location is calculated by GPS coordinates. The right part of Fig.\ref{fig:2} is the spectrum measurement corresponding to this location. The frequency range is 1.9 GHz to 2.5 GHz, and the amplitude is from -20 dBm to -70 dBm.
Each test drive is about 10--15 minutes, depending on the traffic situation. We can collect about 700--1000 measurements during the drive. Each measurement consists of 401 amplitude readings representing the 600 MHz frequency span and the 2.2 GHz center frequency. An omnidirectional antenna is used in this study. Since the noise level changes over time for different measurements, we normalized the received power to make fair comparisons.\n\nWithout processing the collected data, we look into the raw data for an overview of the temporal and spatial behavior of the interference. To analyze the difference between licensed and unlicensed bands, we plot the spectrogram of the data. Fig.~3 shows one set of data, covering a complete test drive. Fig.\ref{fig:3_1} is the spectrogram of the measurement starting at 0 sec and ending at 950 sec, and the power is between -20 dBm and -70 dBm. In this figure, different times also correspond to different locations. For licensed bands, we observe higher interference at locations closer to the cell tower (around 504 sec) than on the open road (0--216 sec). 1.9--2.0 GHz has a similar spatial pattern to 2.1--2.2 GHz since the variations in power levels are almost the same. For the 2.4 GHz unlicensed band, the interference is discontinuous, and the power level is significantly lower than in the licensed bands. Fig.\ref{fig:3_2} is the congested plot of the data showing the interference between 1.9 GHz and 2.5 GHz. 2.0--2.1 GHz and 2.2--2.3 GHz seem to be inactive bands during the experiment, which shows the potential of spectrum sharing applications.
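A minimal sketch of this acquisition-and-normalization pipeline is shown below. The GPIB address and the SCPI query string are illustrative assumptions for an ESA-series analyzer (not taken from the paper); the parsing and Min-Max scaling are the steps actually used in this work.

```python
def parse_trace(raw):
    """Parse a comma-separated ASCII trace of dBm readings
    (the format assumed for the analyzer's trace query)."""
    return [float(v) for v in raw.strip().split(",")]

def minmax_normalize(power_dbm):
    """Min-Max scale a trace to [0, 1] so that traces recorded at
    different noise levels can be compared fairly."""
    lo, hi = min(power_dbm), max(power_dbm)
    return [(p - lo) / (hi - lo) for p in power_dbm]

# Acquisition sketch (requires the instrument; the resource address and
# SCPI command below are assumptions, shown only for illustration):
# import pyvisa
# sa = pyvisa.ResourceManager().open_resource("GPIB0::18::INSTR")
# trace = minmax_normalize(parse_trace(sa.query(":TRACE:DATA? TRACE1")))
```

After this step every 401-point sweep lies in [0, 1], regardless of the absolute noise floor during that drive.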
\n\n\n\begin{figure*}[t!]\n\centering\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.48\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/Picture3.png}\n \label{fig:3_1}\n \end{minipage}\n}\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.48\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/Picture3b.png}\n \label{fig:3_2}\n \end{minipage}\n}\n\n\caption{(a) Spectrogram of One Test Drive (b) Congested Plot}\n\end{figure*}\n\n\begin{figure*}[t!]\n\centering\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/licensed.png}\n \label{fig:new_1}\n \end{minipage}\n}\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/unlicensed.png}\n \label{fig:new_2}\n \end{minipage}\n}\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/inactive.png}\n \label{fig:new_3}\n \end{minipage}\n}\n\n\caption{Histograms of 4 Datasets in Different Bands: (a) Licensed (b) Unlicensed (c) Inactive}\n\end{figure*}\n\n\begin{figure}[t!]\n\centering\n\hfill\n \begin{minipage}[b]{0.48\textwidth}\n \centering\n \includegraphics[width=0.75\textwidth,scale=0.3]{New_Figures\/histo_bands.png}\n \end{minipage}\n \caption{Comparison of Different Bands in the Same Scenario}\label{fig:new2}\n\end{figure}\n\n\begin{figure*}[t!]\n\centering\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/Picture4a.png}\n \label{fig:4_1}\n \end{minipage}\n}\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/Picture4b.png}\n \label{fig:4_2}\n \end{minipage}\n}\n\hfill\n\subfigure[]{\n \begin{minipage}[b]{0.3\textwidth}\n \centering\n \includegraphics[width=\textwidth]{New_Figures\/Picture4c.png}\n \label{fig:4_3}\n \end{minipage}\n}\n\n\caption{(a) Data Binarization (b) Empirical CDF (c) Fitted CDF}\n\end{figure*}\n\n\n\n\section{Interference Modeling and Comparison}\n\label{sec:3}\nIn this section, we begin with the comparison using histograms. Fig. 4 reveals the difference in amplitude distribution of 4 datasets when we plot the histogram of the same frequency bin. Fig.\ref{fig:new_1} is the histogram of the licensed band. The indoor environment tends to have lower interference and is more centered. Fig.\ref{fig:new_2} and Fig.\ref{fig:new_3} are the plots for the unlicensed band and one of the unused bands, respectively. The main difference is the average interference level. We also compare the interference in each frequency band type by using only one dataset. Fig.\ref{fig:new2} shows histograms of 3 frequency bands in the same dataset.
The variance of interference in the licensed band seems much more significant than in the other two bands. \nTo obtain numerical comparisons, we can analyze the short-term variations of the interference in an IoT environment where many stationary and moving wireless devices interfere with each other based on circular scattering principles \\cite{clarke1968statistical} \\cite{pahlavan2005wireless}. This modeling enables a realistic performance evaluation of the RF cloud interference with any wireless device. According to Clarke's Model, the probability density function of the amplitude fluctuations follows a Rayleigh Distribution: \n\\[f(r) = \\frac{r}{{{\\sigma ^2}}}{e^{ - \\frac{{{r^2}}}{{2{\\sigma ^2}}}}},r \\ge 0\\]\nAs a device moves along a path with velocity $v_m$, the Doppler shift from each interfering source depends on the spatial angle, $\\alpha$, between the direction of movement and the direction of the source,\n\\[{f_d} = \\frac{{{v_m}}}{\\lambda }\\cos \\alpha = {f_m}\\cos \\alpha \\Rightarrow \\alpha = {\\cos ^{ - 1}}\\left( {\\frac{{{f_d}}}{{{f_m}}}} \\right)\\]\nFor a uniform distribution of the interference angle, $\\alpha$, the PDF ${f_A}(\\alpha )$ is given as: \n\\[{f_A}(\\alpha ) = \\frac{1}{{2\\pi }};\\alpha \\in ( - \\pi ,\\pi ]\\]\nThe Doppler spectrum of the interference is \\cite{pahlavan2005wireless}: \n \\begin{equation}\n{\\rm{D}}(f) = \\frac{1}{{2\\pi {f_m}}}{\\left[ {1 - {{(f\/{f_m})}^2}} \\right]^{ - \\frac{1}{2}}};\\left| f \\right| < {f_m}\n\\label{eq:doppler}\n\\end{equation}\nEq.\\ref{eq:doppler} allows one to simulate the interference for a mobile user for performance analysis by running a complex Gaussian noise through a filter reflecting the Doppler spectrum characteristics \\cite{pahlavan2005wireless}. Then we can design software and hardware interference simulators to examine the effects of IoT interference on a communication link, a GPS device, or a radar. 
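The filtered-noise recipe just described can be sketched in a few lines. This is a minimal illustration rather than the simulator used in the study; the function name `clarke_interference` and the sample rate, maximum Doppler shift, and trace length are assumed placeholder values:

```python
import numpy as np

def clarke_interference(n=2**16, fs=1000.0, fm=50.0, sigma=1.0, seed=0):
    """Simulate Rayleigh-faded interference by filtering complex white
    Gaussian noise with the square root of the Clarke Doppler spectrum,
    D(f) ~ [1 - (f/fm)^2]^(-1/2) for |f| < fm."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    f = np.fft.fftfreq(n, d=1.0 / fs)
    h = np.zeros(n)
    inside = np.abs(f) < fm
    h[inside] = (1.0 - (f[inside] / fm) ** 2) ** -0.25   # sqrt of D(f)
    h[inside] = np.minimum(h[inside], 10.0)              # tame the edge singularity
    faded = np.fft.ifft(np.fft.fft(noise) * h)
    # normalize so that E[r^2] = 2 sigma^2, the Rayleigh second moment
    r = sigma * np.abs(faded) / np.sqrt(np.mean(np.abs(faded) ** 2) / 2)
    return r
```

Since filtering keeps the noise Gaussian, the returned envelope is Rayleigh distributed by construction and its histogram can be compared directly against the fitted distributions discussed below.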
\n\n\\begin{figure*}[t!]\n\\centering\n\\hfill\n\\subfigure[]{\n \\begin{minipage}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{New_Figures\/Picture5a.png}\n \\label{fig:5_1}\n \\end{minipage}\n}\n\\hfill\n\\subfigure[]{\n \\begin{minipage}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{New_Figures\/Picture5b.png}\n \\label{fig:5_2}\n \\end{minipage}\n}\n\\hfill\n\\subfigure[]{\n \\begin{minipage}[b]{0.3\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{New_Figures\/Picture5c.png}\n \\label{fig:5_3}\n \\end{minipage}\n}\n\\caption{Doppler Spectrum (a) Inactive Band (2.2 GHz) (b) Unlicensed Band (2.4 GHz) (c) Licensed Band (1.95 GHz)}\n\\end{figure*}\n\n\n\nTo obtain a model from empirical data, we first describe the data processing. The received power is first normalized using Min-Max scaling. The resulting normalized power is in the range [0, 1]. Then we convert the normalized power to a dummy variable using Eq.\\ref{eq:1}, where $I$ is the interference reading, and $I_{new}$ denotes the resulting dummy variable. The data binarization makes computing the spectrum occupancy much easier. \n\n\\begin{equation}\n{I_{new}} = \\left\\{ \\begin{array}{l}\n1,I \\ge 0.1\\\\\n0,I < 0.1\n\\end{array} \\right.\n\\label{eq:1}\n\\end{equation}\n\nWe define the spectrum occupancy by counting the number of consecutive \"1\"s, which means if the current interference is greater than 10\\% of the maximum interference in this channel, we consider the band as occupied and busy, and it is unavailable for spectrum sharing.\nFig. 4a shows the result of the binarization of one of the licensed frequency bands. We can only observe about ten \"0\"s, so this frequency band is highly occupied. If we count the number of \"1\"s between every two \"0\"s, we can calculate the probability of the occupancy time for a specific frequency band. 
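The binarization and run-length counting described above can be sketched as follows; `occupancy_times` is a hypothetical helper name, and the 10% threshold follows the definition in the text (1 marks a sample at or above 10% of the channel maximum, i.e. occupied):

```python
import numpy as np

def occupancy_times(power, threshold=0.1):
    """Min-Max normalize a power trace, binarize it with a 10% threshold
    (1 = occupied, 0 = idle), and return the lengths of the runs of
    consecutive 1s, i.e. the occupancy times in samples."""
    p = np.asarray(power, dtype=float)
    norm = (p - p.min()) / (p.max() - p.min())   # Min-Max scaling -> [0, 1]
    busy = (norm >= threshold).astype(int)       # binarization
    # run-length encode the busy indicator: pad with 0s and find the edges
    edges = np.flatnonzero(np.diff(np.r_[0, busy, 0]))
    starts, ends = edges[::2], edges[1::2]
    return ends - starts
```

The empirical CDF of the returned run lengths is exactly the occupancy-time distribution compared across datasets in the figures.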
\nUsing the approach described above, we calculate the Cumulative Distribution Function (CDF) using real data. Fig 4b shows the empirical CDF of 3 different times of the day for the outdoor environment and one indoor dataset. All three outdoor datasets almost reach a probability of 0.9 for occupancy $\\le 200$. Since the sample rate is 1 per second, it means we have a 90\\% probability that the frequency occupancy is less than 200 seconds. For the indoor environment, the empirical CDF shows a probability of 0.22 for occupancy $\\le 200$. The outdoor environment tends to have less interference than the indoor environment because the indoor environment has more interference sources and a more severe radio frequency (RF) environment. \nTo numerically compare the effect of interference on spectrum occupancy, we fit the empirical data to a distribution and then evaluate its parameters. Since the Rayleigh distribution is widely used for Wi-Fi channel occupancy modeling, we also apply it to the licensed band to have a fair evaluation. Fig 4c is the CDF of the fitted Rayleigh distributions for the four datasets mentioned before. Outdoor measurements approach 1 much faster than indoor measurements. The detailed parameters are shown in Table 1. \n\n\\begin{table}[t!]\n\\caption{Rayleigh Distribution Parameters}\n\\label{table:1}\n\\centering\n\\begin{tabular}{|l|l|l|}\n\\hline\n{Dataset} & {Loc} & {Scale} \\\\ \\hline\n8:00 Outdoor & -26.8908 & 65.8010 \\\\ \\hline\n12:00 Outdoor & -3.5453 & 9.0201 \\\\ \\hline\n16:00 Outdoor & -16.3339 & 40.9457 \\\\ \\hline\nAK 320 Indoor & 21.4714 & 182.7133 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nWe further investigate the frequency domain of interference measurements for indoor and outdoor scenarios. We first subtract the average value from the data using Eq.\\ref{eq:2} to remove the DC component.\n\n \\begin{equation}\n{I_{new - i}} = {I_i} - \\frac{1}{N}\\sum\\limits_{j = 1}^N {{I_j}} \n\\label{eq:2}\n\\end{equation}\n\nThen we apply FFT to the processed data. 
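The DC-removal-plus-FFT step just described can be sketched as below; this is a minimal illustration, with `spectrum_after_dc_removal` a hypothetical helper and the test tone a placeholder rather than a measured trace:

```python
import numpy as np

def spectrum_after_dc_removal(trace, fs=1.0):
    """Subtract the mean to remove the DC component (Eq. (2)), then FFT
    the zero-mean trace and return (frequencies, magnitude spectrum)."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()                          # remove DC component
    mag = np.abs(np.fft.rfft(x))              # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return freqs, mag
```

Without the mean subtraction, the large DC bin would dominate the plots and mask the U-shaped structure discussed next.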
Using one of the outdoor datasets as an example, Fig.\\ref{fig:5_1} shows the frequency-domain features of an inactive band (2.2 GHz). Fig.\\ref{fig:5_2} is the FFT result of the unlicensed band (2.4 GHz). Fig.\\ref{fig:5_3} illustrates one of the licensed bands (1.95 GHz). Compared with the two more occupied bands, the inactive band shows less variance and tends to be \"flat\". Both licensed and unlicensed bands have a U-shape spectrum. The only difference is that the unlicensed band has smaller variations. Frequency-domain patterns can also contribute to estimating whether the spectrum resource is available to be allocated to secondary users. This section analyzes the interference in two environments using statistical approaches and the interference in different frequency bands using frequency-domain approaches. \n\n\n\n\n\\section{CONCLUSION}\nSpectrum monitoring using the spectrum analyzer provides an approach to read and log the interference in real-time. The collected dataset shows its potential for intelligent spectrum management with retrieved temporal and spatial information. In this study, we demonstrate that the interference is highly related to the location, time, and band type. By analyzing the probability distribution of the occupancy time, we can estimate how long we can allocate a currently unused frequency band to secondary users. The result in the frequency domain demonstrates that different mechanisms should be considered for each type of frequency band while assigning frequency resources. The information obtained from spectrum monitoring is critical for intelligent spectrum management since it provides parameters for managing the interference. 
We believe more datasets and Deep Learning algorithms can make accurate dynamic spectrum allocation achievable.\n\n\n\\ifCLASSOPTIONcompsoc\n \n \\section*{Acknowledgments}\nI'm extremely grateful to my friends Jianan Li for driving me to collect the data and Fei Li for his help on pyVisa.\nI'd also like to express my thanks to my classmates in ECE 538 for their patience in peer-reviewing this work.\n\n\\else\n \n \\section*{Acknowledgment}\n\\fi\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\\nocite{*}\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIt is by now well established that particles confined to a two\ndimensional space can have fractional statistics. Interest in such\nparticles, which are called anyons, is partially motivated by their\nphysical effects such as their conjectured role in the fractionally\nquantized Hall effect \\cite{girvin} or high temperature\nsuperconductivity \\cite{hfl}, and partially by the fact that the\ndescription of anyons uses interesting mathematical structures.\nAnyons are a generalization of ordinary bosons or fermions where the\nwave functions of many identical particles, instead of being symmetric\nor antisymmetric, carry a representation of the braid group on their\ntwo dimensional configuration space \\cite{fracstat}. The braid group\non a two-dimensional space is an infinite, discrete, non-Abelian group\nand has many potentially interesting representations (see, for example\n\\cite{4}).\n\nAnyons are sometimes described mathematically by coupling the currents\nof particles to the gauge field of a Chern-Simons theory. This\ncoupling has been argued to produce fractional statistics both for the\ncase where particles are excitations of a dynamical quantum field\n\\cite{dynany} and when the matter is non-dynamical classical point particles\n\\cite{nondynany}. 
The representation of the braid group which arises\n(and therefore also the fractional statistics) can be either Abelian\nor non-Abelian. The former case arises from the quantization of\nAbelian Chern-Simons theory. It is also known that non-Abelian\nstatistics can arise from either non-Abelian Chern-Simons theory or\nelse Abelian Chern-Simons theory on a manifold whose fundamental group\nis non-trivial.\n\nFor an Abelian Chern-Simons theory, the action is\n\\BE \\label{csa}\nS=-{k \\over 4 \\pi}\\int A{\\rm d} A+\\int A_\\mu j^\\mu {\\rm d}^3 x\n\\EE\nwhere\n\\BE\nj^\\mu(x)=\\sum_{i=1}^n q_i \\int d\\tau\n\\frac{dr_i^\\mu}{d\\tau}\\delta^3(x-r_i(\\tau))\n\\label{curr}\n\\EE\nand $r_i^\\mu(\\tau)$ is the trajectory and $q_i$ the charge of the $i$-th\nparticle. (If particles are identical, then their charges should be\nequal.)\n\nIn this paper we shall examine the question of which representations\nof the braid group on a given Riemann surface are obtained from the\nwave functions of an Abelian Chern-Simons theory in the most general\ncase where the constant $k$ is a rational number, the Riemann surface\nhas arbitrary genus $g$ and the total charge of the particles is\nnon-zero. We shall construct the wave functions of the quantum theory\nwith action (\\ref{csa}) explicitly and find that, depending on the\ncoefficient $k$ and the genus of the configuration space, the wave\nfunction carries certain multi-dimensional, in general non-Abelian\nrepresentations of the braid group.\n\nThe wave function of Abelian Chern-Simons theory coupled to classical\npoint particles on the plane was found by Dunne, Jackiw and\nTrugenberger \\cite{8}. 
In this case the Chern-Simons theory has no\nphysical degrees of freedom, the Hilbert space is one-dimensional and\nthe only quantum state is given by a single unimodular complex number.\nFor a trajectory of $n$ particles with positions $z_i(t),~\ni=1,\\ldots,n$, $t\\in[0,1]$ which is periodic up to a permutation,\n$z_i(1)=z_{P(i)}(0)$, the phase of the wave function changes by the\nwell-known factor\n\\begin{displaymath}\n\\exp\\Big[\\frac{i}{k}\\sum_{i<j}q_i q_j\\,\\Delta\\theta_{ij}\\Big]\n\\end{displaymath}\nwhere $\\Delta\\theta_{ij}$ is the total change of the winding angle\nbetween particles $i$ and $j$ along the trajectory. On a Riemann\nsurface ${\\cal M}$ of genus $g$, the braid group is generated by the\nexchanges $\\sigma_i$ of neighboring particles $i$ and $i+1$,\n$1\\le i\\le n-1$, and by the elements $\\alpha_l$ and $\\beta_l$,\n$1\\le l\\le g$, which transport the first particle around the homology\ncycles $a_l$ and $b_l$. These generators satisfy the standard braid\nrelations\n\\BE \\label{braidra}\n\\sigma_i\\sigma_j=\\sigma_j\\sigma_i\\qquad |i-j|>1\n\\EE\n\\BE \\label{braidrb}\n\\sigma_i\\sigma_{i+1}\\sigma_i=\\sigma_{i+1}\\sigma_i\\sigma_{i+1}\\qquad\n1\\le i\\le n-2\n\\EE\nas well as mixed relations among the handle generators,\n\\begin{displaymath}\n[\\alpha_l,\\sigma_i]=[\\beta_l,\\sigma_i]=0\\qquad i>1;\\ 1\\le l\\le g\n\\end{displaymath}\n\\begin{displaymath}\n[\\sigma_1^{-1}\\alpha_l\\sigma_1,\\alpha_p]=[\\sigma_1^{-1}\\beta_l\\sigma_1,\\beta_p]=0\\qquad p>l;\\ 1\\le l,p\\le g\n\\end{displaymath}\n\\BE \\label{braidrg}\n\\sigma_1^{-1}\\alpha_l\\sigma_1\\beta_l=\n\\beta_l\\sigma_1\\alpha_l\\sigma_1\\qquad\n1\\le l\\le g\n\\EE\nThese relations are necessary so that topologically equivalent braids\nare represented by identical elements of the braid group. There is\none additional relation which follows from the fact that there always\nexists a trajectory of a particle which encircles all other particles\nand traces all homology generators of ${\\cal M}$ and which is\nequivalent to a trivial loop on $Q_n({\\cal M})$. This leads to the\nrelation\n\\BE \\label{braidrt}\n\\sigma_1\\cdots\\sigma_{n-1}^2 \\cdots \\sigma_1\\beta_g\\beta_{g-1} \\cdots\n\\beta_1(\\alpha_1^{-1}\\beta_1^{-1}\\alpha_1) \\cdots\n(\\alpha_g^{-1}\\beta_g^{-1}\\alpha_g)=1\n\\EE\nThe above generators and relations constitute a presentation of the\ngeneral abstract braid group. In most cases, we are interested in\nrepresentations of this group, even finite dimensional ones.\n\nThe representations which follow from Abelian Chern-Simons theory are\nthe so-called pure $\\theta$-statistics representations where the\ngenerator of an interchange of neighboring particles is represented by\na phase, times a unit matrix, as the $\\sigma$ in (\\ref{jp}). In this\nparticular type of representation, the generators for particle\nexchanges $\\sigma_i$ and those for transport around handles satisfy a\nfar less restrictive set of relations due to the Abelian structure of\nthese $\\sigma_i$. 
They satisfy the relations (\\ref{braidra}) trivially\nwhile the relations (\\ref{braidrb}) tell us that the $\\sigma_i$ are\nequal, which we will call $\\sigma$. The remaining relations\n(\\ref{braidrg}) become\n\\begin{displaymath}\n[\\sigma,\\alpha_l]=[\\sigma,\\beta^l]=[\\alpha_l,\\alpha_m]=[\\beta^l,\\beta^m]=0\n\\end{displaymath}\n\\begin{displaymath}\n[\\alpha_l,\\beta^m]=0\\qquad{\\rm for}\\qquad l\\ne m\n\\end{displaymath}\n\\BE \\label{bgrel}\n\\alpha_l\\cdot\\beta^l=\\sigma^2\\beta^l\\cdot\\alpha_l\n\\EE\nand the global constraint (\\ref{braidrt}) for a closed manifold is\n\\BE \\label{bggrel}\n\\sigma^{2(n+g-1)}=1\n\\EE\n\nWe will show that the wave functions of a Chern-Simons action coupled to\ncharges give pure $\\theta$-statistics representations of the braid\ngroup on a Riemann surface, as these charges form braids in space-time.\n\n\\section{The decomposition of the gauge field}\n\nOur space will be an orientable 2-dimensional Riemann surface, ${\\cal\nM}$, of genus $g$, while our space-time will be a 3-dimensional manifold\nformed as the Riemann surface ${\\cal M}$ times a real line for the\ntime direction. In other words, the space-time metric is $g_{00}=1,\\\ng_{01}=g_{02}=0$ and the remaining components form the metric on\n${\\cal M}$. Since we have to consider the case of a non-zero total\nflux on ${\\cal M}$, the representation of this type of gauge field can\nbe done only on a set of patches covering ${\\cal M}$. Let us consider\nthe set of patches $U^i$ as a good cover of the manifold ${\\cal M}$.\nWe have a field $A^{(i)}$ on each patch $U^i$, with the transition\nfunctions defined on the intersection of any two patches $U^i\\cap U^j$\ngiven by\n\\BE \\label{trans}\nA^{(i)}-A^{(j)}=d\\chi^{(ij)}\n\\EE\nwhere $\\chi^{(ij)}=-\\chi^{(ji)}$ by definition. 
On a triple intersection\n$U^i\\cap U^j\\cap U^k$ we can use (\\ref{trans}) to find the\nrelation \\BE\n\\chi^{(ij)}+\\chi^{(jk)}+\\chi^{(ki)}=c^{(ijk)}=\\ {\\rm constant}\n\\EE\nThe set of constants $c^{(ijk)}$ is related to the total flux by \\cite{9}\n\\BE \\label{F0}\nF_0=\\int_{\\cal M}{\\rm d} A=\\sum_i\\int_{V^i}{\\rm d} A^{(i)}=\\sum_{ij}\\int_{V^{ij}}\n{\\rm d}\\chi^{(ij)}=\\sum_{P^{ijk}}c^{(ijk)}\n\\EE\nwhere $V^i\\subset U^i$ and it is bounded by a line, $V^{ij}$, dividing the\nintersection $U^i\\cap U^j$. On the triple intersection, we\nlet the three lines $V^{ij}$, $V^{jk}$ and $V^{ki}$ meet at one point\n$P^{ijk}$.\n\nBefore quantizing, we will decompose the degrees of freedom of $A$ into\nits various components. To separate the effect of the non-zero total\nflux (\\ref{F0}) we will break it into two parts. First, a fixed field\n$A_p$ with a total flux $F_0$ on ${\\cal M}$ localized at a reference\npoint $z_0$. This is an \"imaginary\" field without a direct physical\nmeaning; its purpose is to take care of the total flux. This will be\nthe field that has to be defined on patches, as explained above. The\nsecond field, $A_r$, is the remaining degree of freedom of $A$ on\n${\\cal M}$, a globally well defined 1-form. So we have\n\\BE \\label{decA}\nA=A_p+A_r\n\\EE\n\nWe decompose $A_r$ (without the $A_0{\\rm d} t$ part) into its exact, coexact\nand harmonic parts. More precisely, the Hodge decomposition of $A_r$,\non ${\\cal M}$, is given by (${\\rm d}$ and ${}^*$ act on ${\\cal M}$ in this\npaper)\n\\BE \\label{hda}\nA_r={\\rm d}(\\frac{1}{\\Box'}{}^*{\\rm d}^*A_r)+{}^*{\\rm d}(\\frac{1}{\\Box'}{}^*{\\rm d}\nA_r)+\\frac{2\\pi i}{k}\\sum_{l=1}^g(\\bar\\gamma_l\n\\omega^l-\\gamma_l\\bar\\omega^l)\n\\EE\nwhere $1\/\\Box'$ is the inverse of the Laplacian $(\\Box)$ acting on\n0-forms, where the prime means that the zero modes are removed. 
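For intuition, a Hodge-type decomposition of the kind in (\ref{hda}) can be checked numerically in the simplest setting, a flat square torus discretized on a periodic grid, where the harmonic 1-forms reduce to the constant forms. This is only an illustrative sketch (the paper's ${\cal M}$ carries a general metric and genus), with `hodge_decompose` a hypothetical helper:

```python
import numpy as np

def hodge_decompose(ax, ay):
    """Split a 1-form a = ax dx + ay dy on a flat N x N torus into
    exact (longitudinal) + coexact (transverse) + harmonic pieces,
    mode by mode in the Fourier basis.  On the flat torus the harmonic
    part is just the constant (zero-momentum) mode."""
    fx, fy = np.fft.fft2(ax), np.fft.fft2(ay)
    n = ax.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                            # avoid 0/0; zero mode handled below
    div = (kx * fx + ky * fy) / k2            # longitudinal coefficient
    lx, ly = kx * div, ky * div               # exact part (a gradient)
    lx[0, 0] = ly[0, 0] = 0.0
    hx, hy = np.zeros_like(fx), np.zeros_like(fy)
    hx[0, 0], hy[0, 0] = fx[0, 0], fy[0, 0]   # harmonic = constant mode
    tx, ty = fx - lx - hx, fy - ly - hy       # coexact (divergence-free) remainder
    ifft = lambda g: np.real(np.fft.ifft2(g))
    return (ifft(lx), ifft(ly)), (ifft(tx), ifft(ty)), (ifft(hx), ifft(hy))
```

By construction the three pieces sum back to the original 1-form, mirroring the exact/coexact/harmonic split used for $A_r$.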
With\nour decomposition (\\ref{decA}), ${\\rm d} A_r$ does not have a zero mode.\nAlso, we will set the zero mode of ${}^*{\\rm d}^* A_r=\\vec\\nabla\\cdot\\vec\nA_r$ to zero, using a time-independent gauge transformation.\n\nThe first homology and cohomology groups of ${\\cal M}$ tell us how many\nadditional degrees of freedom $A$ has on ${\\cal M}$ compared to\nthe plane, and how to take them into account. The zero modes for both ${\\rm d}$\nand\n${}^*{\\rm d}^*$ (or for $\\Box$) acting on a one-form are spanned by the set of\nAbelian differentials, $\\omega^l$, on ${\\cal M}$, called the holomorphic\n(function of $z$) harmonic forms (solutions of $\\Box\\omega=0$). We can\nrepresent the homology of ${\\cal\nM}$ in terms of generators $a_l$ and their conjugate generators $b_l$,\n$l=1,\\ldots,g$. The intersection numbers of these generators are\ngiven by\n\\BE \\label{inter}\n\\nu(a_l,a_m)=\\nu(b^l,b^m)=0,\\ \\nu(a_l,b^m)=-\\nu(b^m,a_l)=\\delta_l^m\n\\EE\nwhere $\\nu(C_1,C_2)$ is the signed intersection number (number of\nright-handed minus number of left-handed crossings) of the oriented\ncurves $C_1$ and $C_2$. The holomorphic harmonic one-forms $\\omega^l$\nhave the standard normalization\n\\cite{1}\n\\begin{displaymath}\n\\oint_{a_l}\\omega^m=\\delta_l^m,\\ \\oint_{b^l}\\omega^m=\\Omega^{lm}\n\\end{displaymath}\nThe matrix $\\Omega^{lm}$ is symmetric and its imaginary part is\npositive definite. This actually defines a metric in the space of\nholomorphic harmonic forms\n\\BE \\label{hfn}\n i\\int_{\\cal M}\\omega^l\\wedge\\bar\\omega^m=2{\\rm\nIm}(\\Omega^{lm})=G^{lm},\\ G_{lm}G^{mn}=\\delta_l^n\n\\EE\nWe will use $G_{lm}$ and $G^{lm}$ to lower or raise indices when needed and use\nthe Einstein summation convention over repeated indices.\n\nAny linear combination, with integer coefficients, of $a_l$ and $b_l$\nthat satisfies (\\ref{inter}) gives another valid basis for the homology\ngenerators. 
These transformations form a symmetry of the Chern-Simons\ntheory and comprise the modular group, $Sp(2g,Z)$:\n\\BE \\label{mt}\n\\dc{a}{b}\\rightarrow S\\dc{a}{b}\\qquad{\\rm where}\\qquad S=\\dc{D\\ C}{B\\ A}\n\\EE\nwith $SES^\\top=E$ and $E=\\dc{\\ 0\\ 1}{-1\\ 0}$. The $g\\times g$\nmatrices $A,B,C,D$ have integer entries.\n\nWe can define\n\\begin{displaymath}\n\\xi=-\\frac{k}{2\\pi}\\frac{1}{\\Box'}{}^*{\\rm d}^* A_r~,~~\\ F_r={}^*{\\rm d} A_r\n\\end{displaymath}\nSo we then have the complete decomposition of the gauge field, with the\n$A_0{\\rm d} t$ part,\n\\BE \\label{ta}\nA_r=A_0{\\rm d} t-\\frac{2\\pi}{k}{\\rm d}\\xi+{}^*{\\rm d}(\\frac{1}{\\Box'}F_r)+2\\pi i\n(\\bar\\gamma_l \\omega^l-\\gamma_l\\bar\\omega^l)\n\\EE\nSimilarly, we can write the current ${\\bf\nj}=j^\\mu\\frac{\\partial}{\\partial x^\\mu}$ as a one-form $j=j_\\mu{\\rm d}\nx^\\mu=j_0{\\rm d} t+\\tilde j$, using the metric. We can again use the\nHodge decomposition of ${}^*\\tilde j$ on ${\\cal M}$\n\\BE \\label{tj}\n{}^*\\tilde j=-{\\rm d}\\chi+{}^*{\\rm d}\\psi+i(j_l\\bar\\omega^l-\\bar j_l\\omega^l)\n\\EE\nThe continuity equation, using the 3-dimensional star operator ${}^{\\star}$,\n\\begin{displaymath}\n{\\rm d}^{\\star}j= (\\vec\\nabla\\cdot\\vec j){\\rm d}^3x=\\frac{\\partial\nj_0}{\\partial t}{\\rm d}^3x+{\\rm d}^*\\tilde j \\wedge{\\rm d} t=0\n\\end{displaymath}\ncan be used to solve for $\\psi$\n\\begin{displaymath}\n\\psi=-\\frac{1}{\\Box'}\\frac{\\partial j_0}{\\partial t}\n\\end{displaymath}\n\nWe shall consider a set of point charges moving on ${\\cal M}$, with\ntrajectories $z_i(t)$ and charge $q_i$, where $z_i(t)\\ne z_j(t)$ for\n$i\\ne j$. 
The current is represented by\n\\BE \\label{pcc}\nj_0(z,t)=\\sum_i q_i\\delta(z-z_i(t)),\\ \\tilde j(z,t)=\\sum_i\nq_i\\delta(z-z_i(t))\n\\frac{1}{2}(\\dot z_i(t){\\rm d}\\bar z+\\dot{\\bar z}_i(t){\\rm d} z)\n\\EE\nIntegrating (\\ref{pcc}) with the harmonic forms $\\omega^l$ , we find\nthe topological components of the current in (\\ref{tj})\n\\BE \\label{tc}\nj^l(t)=\\sum_i q_i\\dot z_i(t)\\omega^l(z_i(t))\\ ,\\qquad\\bar j^l(t)=\\sum_i q_i\n\\dot{\\bar z}_i(t)\\bar \\omega^l(\\bar z_i(t))\n\\EE\nThis is just telling us that integrating the topological currents $j^l(t)$\nover time is equivalent to a sum of the integral of the harmonic forms\n$\\omega^l$ over each charge trajectory.\n\nTo solve for $\\chi$, it is best to use complex notation\n\\begin{displaymath}\nR=\\psi+i\\chi=R(z,\\bar z)\n\\end{displaymath}\nwhere we find ${}^*{\\rm d}\\chi+{\\rm d}\\psi=\\partial_z\\bar R{\\rm d} z+\\partial_{\\bar z}\nR{\\rm d}\\bar z$. From (\\ref{tj}), (\\ref{pcc}) and using (\\ref{tc}) we find\n\\begin{displaymath}\n\\partial_z\\bar R+\\bar j_l\\omega^l(z)=\\frac{1}{2}\\sum_i q_i\\dot{\\bar z}_i\n\\delta(z-z_i(t))\n\\end{displaymath}\n\\BE \\label{R}\n\\partial_{\\bar z} R+j_l\\bar\\omega^l(\\bar z)=\\frac{1}{2}\\sum_i q_i\\dot{z}_i\n\\delta(z-z_i(t))\n\\EE\nTo solve (\\ref{R}) for $R$, we will need the prime form\n\\begin{displaymath}\nE(z,w)=(h(z)h(w))^{-{\\frac{1}{2}}}\\cdot\\Theta\\dc{1\/2}{1\/2}(\\int_z^w\\omega|\n\\Omega)\n\\end{displaymath}\nwhere $h(z)=\\frac{\\partial}{\\partial u^l}\\Theta\\dc{1\/2}{1\/2}\n(u|\\Omega)|_{u=0}\\cdot\\omega^l(z)$. The prime form is antisymmetric in\nthe variables $z$ and $w$ and behaves like $z-w$ when $z\\approx w$\n(the $h(z)$ which appear in the denominator are for\nnormalization).\\footnote{This formalism can also be extended to\ninclude the sphere, where there are no harmonic 1-forms at all (the\nspace of cohomology generators has dimension zero) by properly\ndefining the prime form. 
We use stereographic projection to map the\nsphere into the complex plane and use $E(z,w)=z-w$ as the definition\nof the prime form. }\n\nThe theta functions \\cite{7} are defined by\n\\BE \\label{thetaf}\n\\Theta\\dc{\\alpha}{\\beta}(z|\\Omega)=\\sum_{n_l}e^{i\\pi(n_l+\\alpha_l)\\Omega^{lm}\n(n_m+\\alpha_m)+2\\pi i(n_l+\\alpha_l)(z^l+\\beta^l)}\n\\EE\nwhere $\\alpha,\\ \\beta\\in [0,1]$, and have the following property\n\\begin{displaymath}\n\\Theta\\dc{\\alpha}{\\beta}(z^m+\ns^m+\\Omega^{ml}t_l|\\Omega)=e^{2\\pi i\\alpha_l s^l- i\\pi\nt_m\\Omega^{ml}t_l-2\\pi i t_m(z^m+\\beta^m)}\\Theta\\dc{\\alpha}{\\beta}(z|\n\\Omega)\n\\end{displaymath}\nfor integer--valued vectors $s^m$ and $t_l$. For a non-integer\nconstant $c$\n\\begin{displaymath}\n\\Theta\\dc{\\alpha}{\\beta}(z^m+c\n\\Omega^{ml}t_l|\\Omega)=e^{-i\\pi c^2 t_m\\Omega^{ml}\nt_l-2\\pi i c t_m(z^m+\\beta^m)}\\Theta\\dc{\\alpha-ct}{\\beta}(z|\\Omega)\n\\end{displaymath}\n\nThe solution of (\\ref{R}) is\n\\BE\nR=\\frac{\\partial}{\\partial t}[-\\frac{1}{2\\pi}\\sum_i\nq_i\\log(\\frac{E(z,z_i(t))}\n{E(z_0,z_i(t))})]-j_l(t)\\int_{z_0}^z(\\bar\\omega^l-\\omega^l)\n\\EE\nwhere we have chosen $R$ such that $R(z_0,\\bar z_0)=0$ for an\narbitrary point $z_0$\n, which we choose to be the same as the $z_0$ in the definition of $A_p$\n(We can choose $z_0=\\infty$ for genus zero).\nThe important fact about $R$ is that it is a single-valued function.\nIf we move $z$ around any of the homology cycles, $R$ returns to its\noriginal value. In fact, this is also true for windings of $z_0$, an important\nrelation since it is only a reference point. So\n\\BE \\label{chi}\n\\chi=\\frac{\\partial}{\\partial t}[-\\frac{1}{2\\pi}\\sum_i q_i\n{\\rm Im}\\log(\\frac{E(z,z_i(t))}{E(z_0,z_i(t))})]+\n\\frac{i}{2}[(j_l(t)+\\bar j_l(t))\\int_{z_0}^z(\\bar\\omega^l-\\omega^l)]\n\\EE\n\nThe action (\\ref{csa}) is written for a trivial U(1) bundle over ${\n\\cal M}$, that is for zero total flux. 
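The theta-function quasi-periodicity quoted above is easy to verify numerically at genus one. The following sketch assumes $\Omega=i$, characteristics $\alpha=\beta=1/2$, and a truncated sum, which is ample since the terms decay like $e^{-\pi n^2}$:

```python
import cmath

def theta(z, alpha=0.5, beta=0.5, omega=1j, nmax=30):
    """Genus-1 theta function with characteristics (truncated series)."""
    return sum(
        cmath.exp(1j * cmath.pi * (n + alpha) ** 2 * omega
                  + 2j * cmath.pi * (n + alpha) * (z + beta))
        for n in range(-nmax, nmax + 1)
    )

def quasi_period_factor(z, s, t, alpha=0.5, beta=0.5, omega=1j):
    """Prefactor picked up under z -> z + s + omega*t, with s, t integers:
    exp(2 pi i alpha s - i pi t^2 omega - 2 pi i t (z + beta))."""
    return cmath.exp(2j * cmath.pi * alpha * s
                     - 1j * cmath.pi * t * t * omega
                     - 2j * cmath.pi * t * (z + beta))
```

Shifting the series index by $t$ and completing the square reproduces the prefactor exactly, which is what the numerical check confirms.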
Every integral of the gauge field,\nwhich is invariant under a gauge transformation of $A$, can be extended\nuniquely into an integral using the $A^{(i)}$, defined on the set of\npatches, that is patch independent by adding appropriate terms.\nWe will represent our set of 3-dimensional patches as $V^i$. Then $V^i$\nand $V^j$ will share a common boundary, a\n2-dimensional surface $V^{ij}$.\nFinally, three surfaces $V^{ij}$, $V^{jk}$ and $V^{ki}$ will intersect along a\nline $L^{ijk}$, and four of these lines will terminate at a point\n$P^{ijkl}$. This might be best visualized as a triangulation of ${\\cal\nM}\\times[0,1]$ in terms of 3-simplexes (or tetrahedra), the $V^i$, with\n2-simplex boundaries (or triangles), the $V^{ij}$, which in turn have\n1-simplex boundaries (or lines), the $L^{ijk}$, and finally those have\n0-simplexes (or points) as boundaries, the $P^{ijkl}$.\nFor our case, we will be using the proper extension of (\\ref{csa}), see\n\\cite{9,10}, that is\n\\begin{displaymath}\nS=-\\frac{k}{4\\pi}\\sum_i\\int_{V^i}A^{(i)}{\\rm d} A+\\frac{k}{4\\pi}\\sum_{ij}\\int\n_{V^{ij}}\\chi^{(ij)}{\\rm d} A-\\frac{k}{4\\pi}\\sum_{ijk}\\int_{L^{ijk}}c^{<(ijk)}\nA^{(k)>}+\\frac{k}{4\\pi}\\sum_{P^{ijkl}}c^{<(ijk)}\\chi^{(kl)>}(P)\n\\end{displaymath}\n\\BE \\label{pcsa}\n+\\sum_i\\int_{V^i}A^{(i)}{}^{\\star}j-\\sum_{ij}\\int_{V^{ij}}\\chi^{(ij)}{}^\n{\\star}j+\\sum_{ijk}\\int_{L^{ijk}}c^{<(ijk)}W^{(k)>}-\\sum_{P^{ijkl}}c^{<(ijk)}\n\\tilde\\chi^{(kl)>}(P)\n\\EE\nwhere the one-form $W$ is defined by ${}^{\\star}j={\\rm d} W$ locally, but since\n$\\int_{\\cal M}{}^{\\star}j=Q$,\nthis can be done only on patches\nwhere $W^{(i)}-W^{(j)}={\\rm d}\\tilde\\chi^{(ij)}$, in\nthe same way as we did for the gauge field $A$. 
The bracket $<...>$ means: put\nthe\nindices in increasing order (with the appropriate sign) and set the repeated\nindex according to position; see \\cite{10}.\nIt will be useful to do the same decomposition of\n$j$ as we did for $A$, by having $j=j_p+j_r$, where $j_p$ is a\nterm corresponding to a single particle of charge $Q$ at the reference\npoint $z_0$.\n\nThe complicated expression (\\ref{pcsa}) for the action ensures that the\ntotal expression is independent of the triangulation of the manifold used\nfor the evaluation of each integral. For example, if we change the patches\n$V^i$, the integrand in the first term will change by a total derivative,\nleading to a correction term integrated over the boundaries of the $V^i$,\nthat is the $V^{ij}$. But the second term, in turn, will change in such a\nway as to cancel the change generated by this first term, leaving the total\naction invariant. A similar effect can be found for the other terms in\n(\\ref{pcsa}).\n\nUsing (\\ref{decA}), the decomposition of $j$ and performing several\nintegrations by parts gives\n\\begin{displaymath}\nS=-\\frac{k}{4\\pi}\\int_{{\\cal M}\\otimes{\\cal\nR}}A_r{\\rm d} A_r+\\frac{k}{2\\pi}\\int_{{\\cal M}\n\\otimes{\\cal R}}A_r{\\rm d} A_p+\\int_{{\\cal M}\\otimes{\\cal\nR}}A_r({}^\\star j_r+{}^\\star j_p)+\\int_{{\\cal M}\\otimes{\\cal R}}W_r{\\rm d} A_p\n\\end{displaymath}\n\\BE \\label{topcsa}\n+[-\\frac{k}{4\\pi}\\int A_p{\\rm d} A_p+\\int A_p{}^\\star j_p]+{\\rm Surface\\ terms}\n\\EE\nThe terms in brackets, involving $A_p$ and $j_p$, have to be evaluated using\nthe extended decomposition (\\ref{pcsa}), by replacing $A$ with $A_p$.\nFor our case, we extend the\ntriangulation of ${\\cal M}$ trivially through the time direction.\nThe surface terms, appearing at the time boundaries ($t=0$ and $t=t_f$), are\nnot important for the quantum theory or\nthe braid group representation that we will find later on. 
They can't be\navoided since the action is not invariant under gauge transformations at\nthe time boundaries. Thus there are no terms to cancel the\ntriangulation-dependent terms. This will not be a problem since for a\nperiodic configuration we are\neffectively working on ${\\cal M}\\times S^1$, so there is no surface term,\nor alternatively the surface terms are equal and cancel each other.\nAlso, surface terms do not affect the dynamics or quantization of the system.\nWe also represent $A_p$ such that ${\\rm d} A_p=F_0\\delta(z-z_0){\\rm d}^2 x$, which\nimplies that $z_0$ must stay within one patch at all times. And similarly\nfor $j_p$ since it is equal to $Q\\delta(z-z_0){\\rm d} t$. After a quick\ncalculation, we find that the terms inside the brackets are all zero, except\nfor the integral, $\\oint c^{<(ijk)}W^{k>}$, which is defined\nmodulo $c^{(ijk)}Q$ (for periodic motion). This is because $W$ is defined\non patches also, due to the total charge $Q$. At the quantum level, we are\nleft with a phase $e^{ic^{(ijk)}Q}$, but since the $c^{(ijk)}$ are\narbitrary except for the constraint (\\ref{F0}), the real ambiguity is\n$e^{iQ F_0}$. Actually,\nthe integral $\\int A{}^\\star j$ is equal to $\\sum_i q_i\\int_{C_i}A$,\nthe Wilson line integral for a\nset of charges $q_i$ following the curves $C_i$. In this case for each\nof these Wilson line integrals, corresponding to the charge $q_i$, we find\na phase $e^{i q_i F_0}$ instead. To resolve these ambiguities we require\nthese phases to be equal to unity, as constraints on our system.\n\nOn the other hand, if in addition to the gauge field $A$, we had a second\nindependent Abelian gauge field, say $\\Gamma$, then a similar phase\nambiguity, $e^{i h_i \\chi_E}$, would arise. Here $h_i$ will be the charge\nattached to particle $i$ corresponding to this new field, and\n$\\chi_E=\\int_{\\cal M}{\\rm d}\\Gamma$ is the total flux. 
The important fact,\nnow, is that the phase ambiguities from both gauge fields would appear at\nthe same time, so we would have to impose the constraint\n\\BE \\label{fconst}\ne^{iq_i F_0-i h_i \\chi_E}=1\n\\EE\nto obtain a consistent quantum theory\n(the minus sign has been added to simplify the notation later on).\nAt this stage, the new field $\\Gamma$ seems artificial, but it turns out\nthat it is necessary to introduce such a field for Chern-Simons theory. In\nfact, it corresponds to a connection on the tangent space of ${\\cal M}$.\nWe will need it because to each charge trajectory we will attach a\nframing (a unit vector on ${\\cal M}$). Such a framing has to be defined\nin relation to the basis of the tangent space, so $\\Gamma$ does not\nhave to be the associated metric connection, but it will enjoy the same\nglobal properties. It is well known that $\\chi_E=4\\pi(1-g)$, the\nEuler class of ${\\cal M}$. Note that we will assume that the field\n$\\Gamma$ does not have any flux around the particles (an effect similar\nto a cosmic string); this would lead to a gravitational change in the\nstatistics of these particles. The charges $h_i$\nwill be equal to $q_i^2\/2k$; this will appear quite naturally in the next\nsection. As we did for the field $A$, we will concentrate all the\nflux, $\\chi_E$, of $\\Gamma$ around the point $z_0$. This will allow us to\nassume a constant framing on ${\\cal M}$, except when we cross the point\n$z_0$, in which case the constraint (\\ref{fconst}) will be used to fix any\nphase ambiguity.
If\nwe had not used our freedom in the definition of $\\chi$ to set it up this\nway, we would have to take care of its effects on the hamiltonian and\nultimately the wave function.\n\n\\section{Quantization}\n\nNow we are ready to solve for the action. By\nputting (\\ref{ta}) and (\\ref{tj}) back into (\\ref{topcsa}) we find\n\\begin{displaymath}\nS=\\frac{1}{2}\\int(\\xi\\dot F_r-\\dot\\xi F_r){\\rm d}^3x+i\\pi k\\int(\\gamma^l\n\\dot{\\bar\\gamma}_l-\\dot\\gamma^l\\bar\\gamma_l){\\rm d} t+\\int A_0(j_0\n-\\frac{k}{2\\pi}F){\\rm d}^3x\n\\end{displaymath}\n\\BE \\label{tcsa}\n-\\int(\\frac{2\\pi}{k}\\xi\\frac{\\partial j_0}{\\partial t}+F_r\\chi){\\rm d}^3x\n+2\\pi i\\int(j_l\\bar\\gamma^l -\\bar j^l\\gamma_l){\\rm d} t+{\\rm Surface\\ terms}\n\\EE\n\n{}From this we obtain the equal-time commutation relations of the quantum\ntheory\n\\BE \\label{qv}\n[\\xi(z),F_r(w)]=-iP\\delta(z-w)\\qquad {\\rm or}\\qquad\nF_r(z)=iP\\frac{\\delta}{\\delta\n\\xi(z)}\n\\EE\nand\n\\BE \\label{tqv}\n[\\gamma_l,\\bar\\gamma_m]=-\\frac{1}{2\\pi k}G_{lm}\\qquad {\\rm or}\\qquad\n\\bar\\gamma_l=\\frac{1}{2\\pi k}G_{lm}\\frac{\\partial}{\\partial\\gamma_m}=\n\\frac{1}{2\\pi k}\\frac{\\partial}{\\partial\\gamma^l}\n\\EE\nThe projection operator, $P$, in (\\ref{qv}) changes the delta function to\n$\\delta(z-w)-1\/{\\rm Area} ({\\cal M})$, this is needed since $F_r$ does not\nhave a zero mode ($\\int_{\\cal M} F_r{\\rm d}^2 x=0$). The functional derivative\nmust\nalso be defined using this projection operator. With this holomorphic\npolarization \\cite{3} it is convenient to use the following measure in\n$\\gamma$ space\n\\begin{displaymath}\n(\\Psi_1|\\Psi_2)=\\int e^{-2\\pi k\\gamma^m G_{ml}\\bar\\gamma^l}\n\\Psi_1^*(\\bar\\gamma)\\Psi_2(\\gamma)|G|^{-1}\\prod_m {\\rm d}\\gamma^m{\\rm d}\\bar\\gamma^m\n\\end{displaymath}\nwhere $|G|=\\det(G_{mn})$. 
With this measure, we find that $\gamma^\dagger=\bar\n\gamma$ as it should be.\n\n$A_0$ is a Lagrange multiplier which enforces the Gauss' law\nconstraint\n\begin{displaymath}\nF(z)-\frac{2\pi}{k}j_0(z)=iP\frac{\delta}{\delta\xi(z)}+F_0\delta(z-z_0)-\frac\n{2\pi}{k}j_{r0}(z)+\frac{2\pi}{k}Q\delta(z-z_0)\approx 0\n\end{displaymath}\nfrom which we extract $F_0=\frac{2\pi}{k}Q$. Since $F_0$ and $Q$ are not\nquantum variables, this is a strong equality. We are thus left with\n\BE \label{glc}\nF_r(z)-\frac{2\pi}{k}j_{r0}(z)=iP\frac{\delta}{\delta\xi(z)}-\frac\n{2\pi}{k}j_{r0}(z)\approx 0\n\EE\n\nUnder a modular transformation, the basis $\gamma^l$, ${\bar\gamma}^l$\nwill be transformed accordingly. This will not change the choice of\npolarization, since the modular transformations do not mix $\gamma$\nand $\bar\gamma$.\n\n{}From (\ref{tcsa}), (\ref{qv}) and (\ref{tqv}), we find that the hamiltonian,\nin the $A_0=0$ gauge, can be separated into two commuting parts (where we\nused $\frac{\partial j_0}{\partial t}=\frac{\partial j_{r0}}{\partial t}$)\n\begin{displaymath}\nH=-\int_{\cal M}A\wedge^*\tilde j= H_0+H_T\n\end{displaymath}\nwhere\n\BE\nH_0=\int_{\cal M}(\frac{2\pi}{k}\xi\frac{\partial j_{r0}}{\partial t}+\ni\chi P\frac{\delta}{\delta\xi}){\rm d}^2 x\n\EE\nwhile the\nadditional part that takes care of the topology is\n\BE \label{th}\nH_T=i(2\pi\bar j_l\gamma^l-\frac{1}{k}j^l\frac{\partial}\n{\partial\gamma^l})\n\EE\n\nTo solve the Schr\"odinger equation, we will use the fact that the\nhamiltonian separates, thus writing the wave function as\n\begin{displaymath}\n\Psi(\xi,\gamma,t)=\Psi_0(\xi,t)\Psi_T(\gamma,t)\n\end{displaymath}\nwith the Gauss' law constraint (\ref{glc})\n\begin{displaymath}\n(iP\frac{\delta}{\delta\xi}-\frac{2\pi}{k}j_{r0})\Psi_0(\xi,t)=0\n\end{displaymath}\nwhich is solved by\n\BE \label{gwf}\n\Psi_0(\xi,t)=\exp[-\frac{2\pi i}{k}(\int_{\cal 
M}\\xi(z) j_{r0}(z,t)\n{\\rm d}^2 x)]\\Psi_c(t)\n\\EE\nNote that in (\\ref{gwf}) there is a term $-Q\\xi(z_0)$ out of the integral,\nthis shows the presence of an \"imaginary\" charge at $z_0$, wiht a flux\n$F_0$.\n\nThe first Schr\\\"odinger equation is \\begin{displaymath}\ni\\frac{\\partial\\Psi_0(\\xi,t)}{\\partial t}=H_0 \\Psi_0(\\xi,t)=\n\\left[ \\int_{\\cal M} \\left( \\frac{2\\pi}{k}\\xi\\frac{\\partial j_{r0}}{\\partial\nt}+ i\\chi P\\frac{\\delta}{\\delta\\xi} \\right) {\\rm d}^2 x\\right] \\Psi_0(\\xi,t)\n\\end{displaymath}\nwhich has the solution \\cite{2}\n\\BE \\label{psij}\n\\Psi_c(t)=\\exp\\left[ -\\frac{2\\pi i}{k} \\int_0^t \\int_{\\cal M}\n\\chi(z,t^\\prime) j_{r0}(z,t^\\prime) {\\rm d}^2 x {\\rm d} t^\\prime \\right]\n\\EE\nFor a system of point charges, the use of (\\ref{chi}) with\n(\\ref{gwf}) and (\\ref{psij}), allows us to write $\\Psi_0$ as\n\n\\BE \\label{bgp}\n\\Psi_0(\\xi,t)=\\exp\\left[-\\frac{2\\pi i}{k}(\\sum_i\nq_i\\xi(z_i(t))-Q\\xi(z_0))+\\frac\n{i}{2k}\\sum_{ij} q_i q_j \\int_0^t {\\rm d} t\\dot\\theta_{ij}(t)+\\Phi(t) \\right]\n\\EE\nwhere\n\\begin{displaymath}\n\\Phi(t)=\\frac{\\pi}{k}\\left[\\int_0^t j_l(t^\\prime){\\rm d} t^\\prime \\int_0^\n{t^\\prime} \\bar j^l(t^{\\prime\\prime}){\\rm d} t^{\\prime\\prime}-\\int_0^t\\bar\nj_l( t^\\prime){\\rm d} t^\\prime \\int_0^{t^\\prime} j^l(t^{\\prime\\prime}){\\rm d}\nt^{\\prime\n\\prime}\\right]\n\\end{displaymath}\n\\BE \\label{phase}\n+\\frac{\\pi}{2k}\\left[\\int_0^t \\bar j_l(t^\\prime){\\rm d} t^\\prime \\int_0^ t\n\\bar j^l(t^\\prime){\\rm d} t^\\prime-\\int_0^t j_l( t^\\prime){\\rm d} t^\\prime\n\\int_0^t j^l(t^\\prime){\\rm d} t^\\prime\\right]\n\\EE\nand\n\\begin{displaymath}\n\\theta_{ij}(t)={\\rm Im}\\log\\left[\\frac{E(z_i(t),z_j(t))}{E(z_i(t),z_0)E(z_0\n,z_j(t))}\\right]\n\\end{displaymath}\n\\BE \\label{angfunc}\n+{\\rm 
Im}\\left[\\int_{z_0}^{z_i(0)}\\omega^l\\int_{z_j(0)}^{z_j(t)}\n(\\omega_l+\\bar\\omega_l)+\\int_{z_0}^{z_j(0)}\\omega^l\\int_{z_i(0)}^{z_i(t)}\n(\\omega_l+\\bar\\omega_l)\\right]\n\\EE\nis a multi-valued function defined using the prime form. We will need\nthe phase (\\ref{phase}) for the topological part of the wave function.\nThe function $\\theta_{ij}(t)$ is the angle function for particle $i$\nand $j$.\n\nFor $i=j$, we find a self-linking term of the form ${\\rm\nIm}\\log(z_i-z_i)={\\rm\nIm}\\log(0)$ which is an undetermined expression, although not a\ndivergent one. One way to solve the problem is to choose a framing\n\\BE \\label{frame}\nz_i(t)=z_j(t)+\\epsilon f_i(t)\n\\EE\nwhich leads to the replacement of $E(z_i(t),z_i(t))$ by $f_i(t)$. This\ncorrespond to a small shift in the position of the charges in $j_{r0}$, but\nnot in $\\chi$. In effect, this leads to a small violation of the continuity\nequation. Alternatively, we can view this term as the additional gauge\nfield $\\Gamma$ introduced in the last section. With the framing\n(\\ref{frame}), we find that\n\\begin{displaymath}\n\\int\\frac{q_i^2}{2k}\\dot\\theta_{ii}{\\rm d} t=h_i\\int_{C_i}\\Gamma\n\\end{displaymath}\nwhere $C_i$ is the trajectory of $q_i$ on ${\\cal\nM}$, representing a coupling of the particles, of charges $h_i$, to an\nAbelian gauge field $\\Gamma$, as claimed in the last section. We also recover\nthese charges as $h_i=q_i^2\/2k$, which actually are\nthe conformal weights of the underlying two dimensional conformal field\ntheory \\cite{2}.\n\nThe angle function (\\ref{angfunc}) depends on $z_0$, but it\nshould be invariant if we move $z_0$ either infinitesimally or around\nan homology cycle. For a small displacement there is no change unless\none of the charge trajectories, $z_i(t)$, passing by $z_0$ from one side\nis now going from the other side. 
Looking at the denominator of\n$\theta_{ij}$, we see that this will change $\Psi_0$ by\n$e^{i\frac{2\pi}{k}q_iQ}$, while looking at the numerator, we find a phase\n$e^{i\frac{2\pi}{k}q_i^2(g-1)}$ due to the flux $\chi_E$ of $\Gamma$.\nAlternatively, the framing of $z_i$ is subject to a rotation of\n$\chi_E\/2\pi=2(1-g)$ turns as we go around ${\cal M}$, an effect that we\nconcentrated around $z_0$ here. The total phase shift is\n\BE \label{fconsf}\ne^{i\frac{2\pi}{k}q_i(Q+q_i(g-1))}=1\n\EE\nThis is equal to one by imposing the constraint (\ref{fconst}),\nwith the use of the Gauss' law constraint $F_0=\frac{2\pi}{k}Q$ and our\nchoice of $\chi_E$. The equation\n(\ref{fconsf}) will represent a fundamental constraint that has to be\nsatisfied by all charges if we want a consistent solution to Chern-Simons\ntheory.\n\nLooking at (\ref{angfunc}) shows that we can write\n$\sum_{ij}q_iq_j{\rm Im}\log[E(z_i(t),z_0)E(z_0,z_j(t))]^{-1}$ as\n$\sum_iq_i(-Q){\rm Im}\log[E(z_i(t),z_0)]+\sum_j(-Q)q_j{\rm Im}\log\n[E(z_0,z_j(t))]$, thus representing an additional charge $-Q$ at $z_0$.\nThe constraint (\ref{fconsf}) indicates that this is indeed an \"imaginary\"\ncharge and that it should not be seen by any real charge.\nFor the displacement of\n$z_0$ around a homology cycle, we find that the angle function changes only\nby a constant, thanks to the second term in (\ref{angfunc}), which will\ncancel out when we take the difference in (\ref{bgp}). This point is\nactually more complicated; we will come back to it later on.\nSo the wave function (\ref{bgp}), with the angle function (\ref{angfunc}),\naccurately forms a representation of the braid group on the plane\n\cite{2,8}; in other words, $\sigma$ is one of the generators of the full braid group\n(\ref{bgrel})-(\ref{bggrel}). 
We will cover the full braid group in more\ndetail later on.\n\nNow, the topological part of the hamiltonian is used to find the part of the\nwave function affected by the currents going around the non-trivial loops of\n${\\cal M}$. The Schr\\\"odinger equation for (\\ref{th}) is\n\\begin{displaymath}\ni\\frac{\\partial\\Psi_T(\\gamma,t)}{\\partial t}=\nH_T\\Psi_T(\\gamma,t)=\ni\\left(2\\pi\\bar j_l\\gamma^l -\\frac{1}{k} j^l\n\\frac{\\partial}{\\partial\\gamma^l} \\right)\n\\Psi_T(\\gamma,t)\n\\end{displaymath}\nwhich has the solution\n\\BE\n\\Psi_T(\\gamma,t)\n=\\exp \\left[ 2\\pi\\gamma^l \\int_0^t\\bar j_l(t^\\prime)\n{\\rm d} t^\\prime - \\frac{2\\pi}{k} \\int_0^t j_l(t^\\prime)\n{\\rm d} t^\\prime \\int_0^{t^\\prime} \\bar j^l(t^{\\prime\\prime}){\\rm d} t^{\\prime\\prime}\n\\right] \\tilde\\Psi_T(\\gamma,t)\n\\EE\nNote that with the phase (\\ref{phase}), the double integral above will turn\ninto $\\int_0^t j_l(t^\\prime){\\rm d} t^\\prime \\cdot\\int_0^t \\bar j^l(t^{\\prime})\n{\\rm d} t^{\\prime}$, a topological expression.\n\nThe remaining equation for $\\tilde\\Psi_T(\\gamma,t)$\n\\BE\n\\frac{\\partial\\tilde\\Psi_T(\\gamma,t)}{\\partial t}\n=-\\frac{1}{k}\\ j^l \\frac{\\partial\\tilde\\Psi(\\gamma,t)}\n{\\partial\\gamma^l}\n\\EE\nis easily solved in the form\n\\BE \\label{tpsit}\n\\tilde\\Psi_T(\\gamma^l,t)\n=\\tilde\\Psi_T(\\gamma^l -\\frac{1}{k} \\int_0^t j^l(t^\\prime){\\rm d} t^\\prime)\n\\EE\n\n\\section{Large gauge transformations}\n\nThe wave function (\\ref{tpsit}) is not arbitrary, but must satisfy the\ninvariance of the action (\\ref{csa}) under large gauge transformations, when\nthere is no current around. 
So let us set $j^\\mu=0$ for this section and find\nthe condition on $\\tilde\\Psi_T$.\n\nIn general, the large $U(1)$ gauge transformations are given by the set of\nsingle-valued gauge functions, with $s^m$ and $t_m$ integer-valued vectors,\n\\begin{displaymath}\nU_{s,t}(z)=\\exp\\left(2\\pi i(t_m\\eta^m(z)-s^m\\tilde\\eta_m(z)\\right)\n\\end{displaymath}\nwhere\n\\begin{displaymath}\n\\eta^m(z)=i\\int_{z_0}^z(\\bar\\Omega^{ml}\\omega_l-\\Omega^{ml}\\bar\\omega_l)\\\n,\\qquad \\tilde\\eta_m(z)=-i\\int_{z_0}^z(\\omega_m-\\bar\\omega_m)\n\\end{displaymath}\n\nIf we change the endpoint of integration by $z\\rightarrow z+a_l u^l+b^m v_m$\nwith $u,v$ integer and $a,b$ defined in\n(\\ref{inter}), we find $\\eta^m\\rightarrow\\eta^m+u^m,\\\n\\tilde\\eta_m\\rightarrow\\tilde\\eta_m+v_m$ and $U_{s,t}\\rightarrow U_{s,t}e^\n{2\\pi i(t_m u^m-\ns^m v_m)}=U_{s,t}$. The transformation of the gauge field (\\ref{hda}) under\n$U_{s,t}$ is given by\n\\BE \\label{lgt}\n\\gamma^m\\rightarrow\\gamma^m+s^m+\\Omega^{ml}t_l\\ ,\\qquad\\bar\\gamma^m\\rightarrow\n\\bar\\gamma^m+s^m+\\bar\\Omega^{ml}t_l\n\\EE\nThe classical operator that produces the transformation (\\ref{lgt})\n\\begin{displaymath}\nc_{s,t}(\\gamma,\\bar\\gamma)=\\exp\\left[(s^m+\\Omega^{ml}t_l)\\frac{\\partial}\n{\\partial\\gamma^{m}}+(s^m+\\bar\\Omega^{ml}t_l)\\frac{\\partial}{\\partial\\bar\n\\gamma^{m}}\\right]\n\\end{displaymath}\nmust be transformed into the proper quantum operator acting on the wave\nfunction\n$\\tilde\\Psi_T$. 
By using the commutation (\\ref{tqv}) to replace $\\frac\n{\\partial}{\\partial\\bar\\gamma^{m}}$ by $-2\\pi k\\gamma_m$ we find the\noperators $C_{s,t}$ which implement the large gauge transformations \\cite{5}\n\\BE \\label{glgt}\nC_{s,t}(\\gamma)=\\exp\\left[-2\\pi k(s^m+\\bar\\Omega^{ml}t_l)\\gamma_m-\\pi\nk(s^m+\\bar\n\\Omega^{ml}t_l)G_{mn}(s^n+\\Omega^{nl}t_l)\\right]e^{(s^m+\\Omega^{ml}t_l)\\frac\n{\\partial}{\\partial\\gamma^{m}}}\n\\EE\n\nThe quantum operators $C_{s,t}$ do not commute among themselves for\nnon-integer $k$. From now on we will set $k=\\frac{k_1}{k_2}$ for integer\n$k_1$ and $k_2$. Now, in contrast with their classical counterparts, the\noperators $C_{s,t}$ satisfy the clock algebra\n\\BE \\label{clka}\nC_{s_1,t_1}C_{s_2,t_2}=e^{-2\\pi ik(s^m_1{t_m}_2-s^m_2{t_m}_1)}C_{s_2,t_2}\nC_{s_1,t_1}\n\\EE\nTheir action on the wave function is\n\\begin{displaymath}\nC_{s,t}(\\gamma)\\tilde\\Psi_T(\\gamma^m)=\\exp\\left[-2\\pi k(s^m+\\bar\\Omega^{ml}t_l)\n\\gamma_m\\right.\n\\end{displaymath}\n\\BE \\label{ca}\n\\left.-\\pi k(s^m+\\bar\\Omega^{ml}t_l)G_{mn}(s^n+\\Omega^{nl}t_l)\\right]\n\\tilde\\Psi_T(\\gamma^m+s^m+\\Omega^{ml}t_l)\n\\EE\nOn the other hand $C_{k_2s,k_2t}$ commutes with everything and must be\nrepresented only by phases $e^{i\\phi_{s,t}}$. 
This implies, using (\\ref{ca}),\n\\begin{displaymath}\n\\tilde\\Psi_T(\\gamma^m+k_2(s^m+\\Omega^{ml}t_l))=\\exp\\left[-i\\phi_{s,t}+2\\pi\nk_1(s^m+\\bar\\Omega^{ml}t_l)\\gamma_m\\right.\n\\end{displaymath}\n\\BE \\label{algc}\n\\left.+\\pi k_1k_2(s^m+\\bar\\Omega^{ml}t_l)G_{mn}\n(s^n+\\Omega^{nl}t_l)\\right]\\tilde\\Psi_T(\\gamma^m)\n\\EE\nThe only functions that are doubly (semi-)periodic are combinations of the\ntheta functions (\\ref{thetaf}).\nAfter some algebra, we find that the set of functions\n\\BE \\label{tqf}\n\\Psi_{p,r}\\dc{\\alpha}{\\beta}(\\gamma|\\Omega)=e^{\\pi k\\gamma^m\\gamma_m}\\Theta\n\\dc{\\frac{\\alpha+k_1 p+k_2 r}{k_1 k_2}}{\\beta}(k_1\\gamma|k_1 k_2\\Omega)\n\\EE\nwhere $p=1,2,\\dots,k_2$ and $r=1,2,\\dots,k_1$ with $\\alpha,\\ \\beta\\in[0,1]$\nsolve the above algebraic conditions (\\ref{algc}).\nTheir inner product is given by\n\\BE\n(\\Psi_{p_1,r_1}|\\Psi_{p_2,r_2})=\\int_P e^{-2\\pi k\\gamma^mG_{ml}\\bar\\gamma^l}\n\\overline{\\Psi_{p_1,r_1}(\\gamma)}\\Psi_{p_2,r_2}(\\gamma)\n|G|^{-1}\\prod_m {\\rm d}\\gamma^m{\\rm d}\\bar\\gamma^m\n\\EE\n\\begin{displaymath}\n=|G|^{-\\frac{1}{2}}\\delta_{p_1,p_2}\\delta_{r_1,r_2}\n\\end{displaymath}\nThe integrand is completely invariant under the translation (\\ref{lgt}), thus\nwe restrict the integration to one of the\nplaquettes $P$ ($\\gamma^m=u^m+\\Omega^{ml}v_l$ with $u,v\\in[0,1]$), the phase\nspace\nof the $\\gamma$'s.\n\nUnder a large gauge transformation\n\\begin{displaymath}\nC_{s,t}\\Psi_{p,r}\\dc{\\alpha}{\\beta}(\\gamma)=\\exp\\left [2\\pi ikp_m s^m+i\\pi\nks^m t_m+\\frac{2\\pi i}{k_2}(\\alpha_m s^m-\\beta^m\nt_m)\\right ]\\Psi_{p+t,r}\\dc{\\alpha}{\\beta}(\\gamma)\n\\end{displaymath}\n\\BE\n=\\sum_{p^\\prime}[C_{s,t}]_{p,p^\\prime}\\Psi_{p^\\prime,r}\\dc{\\alpha}{\\beta}\n(\\gamma)\n\\EE\nThe matrix $[C_{s,t}]_{p,p^\\prime}$ forms a $(k_2)^g$ dimensional\nrepresentation\nof the algebra (\\ref{clka}) of large gauge transformations.\n\nThe parameters $\\alpha$ and $\\beta$ appear as free 
parameters, but in fact\nthey may be fixed such that we obtain a modular invariant wave function.\nThe modular transformation (\ref{mt}) on our set of functions (\ref{tqf}) is\n\begin{displaymath}\n\Psi_{p,r}\dc{\alpha}{\beta}(\gamma|\Omega)\rightarrow|C\Omega+D|^{-\frac{1}\n{2}}e^{-i\pi\phi}\Psi_{p,r}\dc{\alpha^\prime}{\beta^\prime}(\gamma\n^\prime|\Omega^\prime)\n\end{displaymath}\nwhere $\gamma^\prime={(C\Omega+D)^{-1}}^\top \gamma,\ \Omega^\prime=(A\Omega+B)\n(C\Omega+D)^{-1}$ and $\phi$ is a phase that will not concern us here (and\n$G^\prime_{lm}=[(C\Omega+D)^{-1}]_{lr}G_{rs}[(C\bar\Omega+D)^{-1}]_{sm}$).\nMost important are the new variables\n\begin{displaymath}\n\alpha^\prime=D\alpha-C\beta-\frac{k_1 k_2}{2}(CD^\top)_d\qquad\beta^\prime=\n-B\alpha+A\beta-\frac{k_1 k_2}{2}(AB^\top)_d\n\end{displaymath}\nwhere $(M)_d$ means $[M]_{dd}$, the diagonal elements.\n\nA set of modular invariant wave functions \cite{4,5,6} can exist only when\n$k_1 k_2$ is even, in which case we set $\alpha=\beta=0$ (and also $\phi=0$).\nIn the case of odd $k_1 k_2$, we can set $\alpha,\ \beta$ to either $0$ or\n$\frac{1}{2}$, which amounts to the addition of a spin structure on the wave\nfunctions. 
This will increase the number of functions by a factor of $4^g$; these will now\ntransform non-trivially under modular transformations.\n\n\section{The braid group on a Riemann surface and Chern-Simons statistics}\n\nConsidering a set of point charges leads to the set of wave functions\n\begin{displaymath}\n\Psi_{p,r}\dc{\alpha}{\beta}(\xi,\gamma,t|\Omega)=\exp\left[\pi k\gamma^m\n\gamma_m+2\pi\gamma^m\int_0^t(\bar j_m-j_m){\rm d} t^\prime-\frac{2\pi i}{k}(\sum_i\nq_i\xi(z_i(t))-Q\xi(z_0))\right.\n\end{displaymath}\n\begin{displaymath}\n\left.+\frac{i}{2k}\sum_{ij}q_iq_j(\theta_{ij}(t)-\theta_{ij}(0))+\n\frac{\pi}{2k}\int_0^t(j_m-\bar j_m){\rm d} t^\prime\cdot\int_0^t(j^m-\n\bar j^m){\rm d} t^\prime\right]\n\end{displaymath}\n\BE \label{wvfn}\n\cdot\Theta\dc{\frac{\alpha+k_1 p+k_2 r}{k_1 k_2}}{\beta}(k_1\gamma^m-k_2\n\int_0^tj^m{\rm d} t^\prime|k_1 k_2\Omega)\n\EE\n\nThe wave function depends on charge\npositions through the integrals over the topological components of the\ncurrent $j^m, \bar j^m$, and through the function $\theta_{ij}(t) -\n\theta_{ij}(0)$. Consider for a moment motions of the particles\nwhich are closed curves and are homologically trivial. We focus\nfirst on the integrals over $j^m, \bar j^m$. If, for example, a single\nparticle moves in a circle,\nthe integral of these topological currents vanishes; we\nconclude that these currents contribute nothing additional to the phase of\nthe wave function under these kinds of motions. The function $\theta_{ij}\n(t) - \theta_{ij}(0)$ must be treated differently here, because it has\nsingularities when particles coincide. Thus, while motions that\nencircle no other particles may be easily integrated to give zero, this\nis not true when other particles are enclosed by one of the particle\npaths: in that case the result is non-zero, in fact it is $2\pi$ (with\nthe appropriate sign depending on the loop orientation). 
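This $2\pi$ jump for enclosing paths is easy to verify numerically in the planar limit, where the angle function reduces to ${\rm Im}\log(z_1-z_2)$. A toy check (independent of the prime form):

```python
import numpy as np

# Planar toy model of the angle function: theta = Im log(z1 - z2).
# Move z1 once around a circle and accumulate the change of theta;
# the total is 2*pi if z2 is enclosed and 0 otherwise.
def winding(center, radius, z2, steps=2000):
    t = np.linspace(0.0, 2.0 * np.pi, steps + 1)
    z1 = center + radius * np.exp(1j * t)
    theta = np.angle(z1 - z2)              # Im log(z1 - z2)
    # np.unwrap removes the artificial jumps at the branch cut
    theta = np.unwrap(theta)
    return theta[-1] - theta[0]

z2 = 0.3 + 0.1j
enclosing = winding(center=0.2 + 0.1j, radius=0.5, z2=z2)
distant = winding(center=2.0 + 0.0j, radius=0.5, z2=z2)
print(round(enclosing / (2 * np.pi)), round(distant / (2 * np.pi)))  # 1 0
```

The accumulated angle is independent of the shape of the loop, only of whether $z_2$ is enclosed, mirroring the statement above.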
Nevertheless, this\nfunction is still independent of the particular\nshape of the particle path. Actually the definition of $\theta_{ij}$ in terms\nof the prime form $E(z,w)$ is just the generalization to an arbitrary Riemann\nsurface of the well-known angle function on the plane, that is, the angle of\nthe line joining particles $i$ and $j$ compared to a fixed axis of\nreference, determined by $z_0$ here.\nThus, we may conclude that under the permutation of two identical particles of\ncharge $q$, the wave functions defined here acquire the phase\n\BE \label{perph}\n\sigma =e^{i\frac{\pi}{k}q^2}\n\EE\n\nFor homologically non-trivial motions of a single particle on ${\cal M}$,\nthe current integral $\int_0^t\nj^l(t^\prime){\rm d} t^\prime$ will in general change as $\int_0^t\nj^l(t^\prime){\rm d} t^\prime \longrightarrow \int_0^t\nj^l(t^\prime){\rm d} t^\prime + s^l+\Omega^{lm}t_m$, where $s^l$ and $t_m$\nare integer-valued vectors whose entries denote the number of windings\nof the particle around each homological cycle.\nHowever, now, for multi-particle non-braiding paths, there is no\ncontribution coming from $\theta_{ij}$.\nThus, for closed paths on ${\cal M}$, the wave functions become\n\begin{displaymath}\n\Psi_{p,r}(t)=\exp\left[-\frac{2\pi i}{k}r_m s^m\n-\frac{2\pi i}{k_1}\sum_i q_i((\alpha -k_2\alpha_0)\n_m s_i^m-(\beta-k_2\beta_0)^m t_{im})-iJ\right]\n\end{displaymath}\n\BE \label{braid}\n\cdot\Psi_{p,r+t}(0)\n=\sum_{r^\prime}[B_{s,t}]_{r,r^\prime}\Psi_{p,r^\prime}(0)\n\EE\nwith\n$\sum_iq_i\int_{z_0}^{z_i(0)}\omega^l=\alpha_0^l+\Omega^{lm}\beta_{0m}$\nand where $J=\sum_i \frac{q_i^2}{2k}(f_i(t)-f_i(0))$ is the self-linking\nterm. 
The matrices (\\ref{braid}) satisfy the cocycle relation\n\\BE \\label{bgalg}\nB_{s_1,t_1}B_{s_2,t_2}=e^{-\\frac{2\\pi i}{k}(s_1^m{t_m}_2-s^m_2{t_m}_1)}\nB_{s_2,t_2}B_{s_1,t_1}\n\\EE\nThis cocycle has to be contrasted with the large gauge transformations cocycle\n(\\ref{clka}). They are very similar except that $k$ is now $\\frac{1}{k}$\nand the operator act on the wave function $\\Psi_{p,r}$ on the other index.\nIn this sense, these two cocycles play a dual role on the wave function.\n\nThe self-linking contribution, $J$, in (\\ref{braid}), plays an\nimportant role here. For homologically trivial closed particle\ntrajectories, we find $J=0$ if\nthe path does not enclose $z_0$ in the patch that we are working on, since we\nchoose to put all the flux of $\\Gamma$ around $z_0$. Otherwise, we find a\ncontribution $\\frac{q_i^2}{2k}\\chi_E=2\\pi\\frac{q_i^2}{k}(1-g)$ to $J$, for\nthe particle $i$. This can be illustrated by\nchecking for independence of the braiding (\\ref{braid}) on $z_0$.\nIn the definition of the angle function $\\theta_{ij}$ in (\\ref{angfunc}), we\nargue that by moving $z_0$ along an homology cycle, the angle function\nis changed by a constant that should cancel out in (\\ref{braid}).\nNow the function (\\ref{braid}) changes by $e^{i\\frac\n{2\\pi}{k}Qq_i}e^{i\\frac{2\\pi}{k}q_i^2(g-1)}$ to an integer power. Fortunately\nthis is one, being our\nfundamental consistency condition (\\ref{fconsf}). The first phase come\nfrom the shift in $\\alpha_0$ and $\\beta_0$, while the second phase come\nfrom the fact that each charge trajectory is being crossed by $z_0$, which\nproduces a shift in $J$.\n\nTo study the permuted (identical particles) braid group, we will consider $n$\nparticles of charge $q$, so $Q=nq$. 
The representation of the braid\ngroup is characterized by its generators, the permutation phase\n$\\sigma$ in (\\ref{perph}), and the braid matrices $B_{s,t}$ in (\\ref{braid}).\nThese generators are the result of the action of elements of the permuted\nbraid group on the particles which form the external sources in our\ntheory.\nIn fact, let the integer vectors $\\hat s^l$, $\\hat t_m$\ndenote vectors that are 0 in all entries except for the $l$th and\n$m$th, respectively, and 1 at the remaining position. Then with the\nidentifications $\\alpha_l = B_{\\hat s^l,0},\\\n\\beta_m = B_{0,\\hat t_m}$, it is easy to check that we recover all of\nthe necessary relations of the braid group on the Riemann surface,\ngiven in (\\ref{bgrel}). In particular, we recover the global constraint\n(\\ref{bggrel}), this is just our fundamental constraint (\\ref{fconsf}),\nusing (\\ref{perph}), applied to this case.\n\n\n\\section{Conclusion}\n\nWe have quantized Abelian Chern-Simons theory coupled to arbitrary\nexternal sources on an arbitrary Riemann surface, and solved the\ntheory. We find that the presence of non-trivial spatial topology\nintroduces extra dimensionality to the Hilbert space separately for\nthe large gauge transformations and the braid group.\nWe find a fundamental constraint (\\ref{fconsf}), relating the\ncharges, $k$, and $g$ such that we recover a consistent topological\nfield theory representing a general (with some identical and\nnon-identical particles) braid group on ${\\cal M}$. In particular, we\nrecover the permuted braid group on ${\\cal M}$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec.intro}\n\nThere are many profound relations between quantum field theory, string theory\nand knot theory. 
This paper focuses on two aspects of the polynomial knot invariants -- in particular HOMFLY polynomials, superpolynomials, and their specializations -- colored by symmetric powers of the fundamental \nrepresentation:\n\\begin{itemize}\n\\item\nthe special geometry of algebraic curves of a knot,\n\\item\nthe integrality of the BPS invariants of a knot.\n\\end{itemize}\nBy (affine) algebraic curves we mean curves obtained as classical limits of recursions\nof knot polynomials. Such recursions are known to exist (for colored Jones\npolynomials \\cite{GL,Hikami_AJ}, or colored HOMFLY and superpolynomials of twist or torus knots) and conjectured to exist for all colored HOMFLY or superpolynomials \\cite{AVqdef,FGS,superA,Nawata}. For the colored Jones polynomial, the above algebraic curve agrees with the A-polynomial \\cite{GL,Hikami_AJ,Apol}. For the colored \nHOMFLY polynomial, the above algebraic curve is conjectured \\cite{AVqdef,superA} to agree with the augmentation polynomial of a knot \\cite{NgFramed}. The curve arising as the classical limit of recursions for colored superpolynomials is called the super-A-polynomial \\cite{superA,Fuji:2013rra}.\n\nOn the other hand, by BPS invariants of a knot we mean the Labastida-Mari{\\~n}o-Ooguri-Vafa (LMOV) invariants of a knot \\cite{OoguriV,Labastida:2000zp,Labastida:2000yw,Labastida:2001ts}, or certain combinations thereof. \n\nOur aim is to give exact formulas for a certain class of BPS invariants, as well\nas their asymptotic expansions to all orders, using the corresponding\nalgebraic curve. One motivation of our work is as follows. On one hand, as stated above, various algebraic curves associated to knots arise as classical limits of recursion relations for knot polynomials (Jones, HOMFLY, or superpolynomials) colored by symmetric representations \\cite{GL,Hikami_AJ,AVqdef,FGS,superA,FGSS,Nawata}. 
\nOn the other hand, as we will show, one can restrict the defining relations between HOMFLY polynomials and LMOV invariants to the case of symmetric representations only. This implies that recursion relations for knots should encode information about LMOV invariants labeled by symmetric representations, and classical limits of these recursions should still capture some of this information. In this paper we make these statements precise. Our main results are the following. \n\n\\begin{proposition}\n\\label{prop.main}\n\\rm{(a)} Fix a knot $K$, a natural number $r$ and an integer $i$. Then the BPS invariants $b_{r,i}$ are given by\n\\begin{equation}\nx\\frac{\\partial_x y(x,a)}{y(x,a)} \n= -\\frac{1}{2} \\sum_{r,i} r^2 b_{r,i} \\frac{x^r a^i}{1-x^r a^i}, \n\\label{xdyy}\n\\end{equation}\nwhere $y=y(x,a) \\in 1 + \\mathbb Q[[x,a]]$ is an algebraic function of $(x,a)$\nthat satisfies a polynomial equation\n\\begin{equation}\n\\mathcal{A}(x,y,a)=0 \\,. \\label{Axya}\n\\end{equation}\n\\rm{(b)} Explicitly, $x \\partial_x y\/y$ is an algebraic function of $(x,a)$,\nand if $x \\partial_x y\/y=\\sum_{n,m \\geq 0} a_{n,m} x^n a^m$, and $\\mu(d)$ denotes the M\\\"obius function, then\n\\begin{equation}\nb_{r,i} = \\frac{2}{r^2} \\sum_{d|r,i} \\mu(d) a_{\\frac{r}{d},\\frac{i}{d}} \n\\,. \n\\label{b-ri}\n\\end{equation}\n\\end{proposition}\nThe BPS invariants $b_{r,i}$ introduced above are certain combinations of LMOV invariants. In the string theory interpretation they are encoded in holomorphic disk amplitudes or superpotentials \\cite{OoguriV,AV-discs,AKV-framing} for D-branes conjecturally associated to knots, and more accurately they could be referred to as classical BPS invariants; however in this paper, unless otherwise stated, we simply call them BPS invariants or degeneracies. The definition of $b_{r,i}$ is given in Section~\\ref{sub-total}. The polynomial $\\mathcal{A}(x,y,a)$, sometimes referred to as the dual A-polynomial, is defined in Section~\\ref{ssec-AJ}. 
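Part (b) is ordinary M\"obius inversion. A small self-contained round trip (with made-up invariants $b_{r,i}$, for illustration only) builds the coefficients $a_{n,m}$ from the expansion of the right-hand side of (\ref{xdyy}) and then recovers $b_{r,i}$; the overall sign of the inversion below is fixed by consistency with the minus sign in (\ref{xdyy}):

```python
from fractions import Fraction
from math import gcd

def mobius(n):
    # Moebius function mu(n)
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

# Forward direction: expand x^r a^i / (1 - x^r a^i) = sum_{k>=1} x^{rk} a^{ik}
b = {(1, 1): 3, (2, 3): -2, (3, 2): 5}     # hypothetical BPS data
N = 12
a = {}
for (r, i), val in b.items():
    for k in range(1, N // r + 1):
        key = (k * r, k * i)
        a[key] = a.get(key, Fraction(0)) - Fraction(r * r * val, 2)

# Inverse direction: Moebius inversion over common divisors of (r, i)
def invert(r, i):
    s = sum(mobius(d) * a.get((r // d, i // d), Fraction(0))
            for d in range(1, gcd(r, i) + 1) if r % d == 0 and i % d == 0)
    return -2 * s / (r * r)

print(all(invert(r, i) == val for (r, i), val in b.items()))   # True
```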
Equation \\eqref{xdyy} gives exact formulas for the BPS invariants $b_{r,i}$ for arbitrary $r,i$, as well as asymptotic expansions to all orders of $r$ when $(r,i)$ are along a ray. In particular, one obtains exact formulas\nand asymptotic expansions for the BPS invariants $b^\\pm_r=b_{r,r \\cdot c_\\pm}$ along the two extreme rays, that we call extremal BPS invariants. The extremal BPS invariants $b^{\\pm}_r$ can also be expressed in terms of coefficients of, respectively, maximal and minimal powers of $a$ in HOMFLY polynomials. The coefficients of these extremal powers are also referred to as top-row and bottom-row HOMFLY polynomials (\\emph{rows} refer to the components of the diagram representing HOMFLY homology, see e.g. \\cite{Gorsky:2013jxa}).\n\nWe call the algebraic curves that encode extremal BPS degeneracies the extremal A-polynomials, and for a knot $K$ we denote them $\\mathcal{A}^{\\pm}_{K}(x,y)$. We also refer to $\\mathcal{A}^{+}_{K}(x,y)$ and $\\mathcal{A}^{-}_{K}(x,y)$ respectively as top and bottom A-polynomials. In this work, among the others, we determine extremal A-polynomials for various knots. We also discuss their properties; we note here that in particular they are tempered, i.e. the roots of their face polynomials are roots of unity, which is also referred to as the quantizability condition \\cite{superA,Fuji:2013rra} and is the manifestation of the so-called $K_2$ condition \\cite{abmodel}. The extremal A-polynomials for twist knots, for a family of $(2,2p+1)$ torus knots, and for various knots with up to 10 crossings are given in Appendix \\ref{sec-extremeA}. The extremal A-polynomials for twist knots are also given below. \n\nFix an integer $p \\neq 0,1$ and consider the family of \nhyperbolic twist knots $K_p$. For $p=-1,-2,-3,\\ldots$ these are $4_1,6_1,8_1,\\ldots$ knots; for $p=2,3,4\\ldots$ these are $5_2,7_2,9_2,\\ldots$ knots. 
Much more can be said for this family of knots.\n\n\begin{proposition}\n\label{prop.twist}\nThe extremal BPS invariants of twist knots are given by\n\begin{equation}\nb^-_{K_p,r} = -\frac{1}{r^2}\sum_{d|r} \mu\big(\frac{r}{d}\big) {3d-1 \choose d-1},\qquad b^+_{K_p,r} = \frac{1}{r^2}\sum_{d|r} \mu\big(\frac{r}{d}\big) {(2|p|+1)d - 1 \choose d-1} \n\label{br-neg-intro}\n\end{equation}\nfor $p \leq -1$ and \n\begin{equation}\nb^-_{K_p,r} = \frac{1}{r^2}\sum_{d|r} \mu\big(\frac{r}{d}\big) (-1)^{d+1} {2d-1 \choose d-1},\qquad b^+_{K_p,r} = \frac{1}{r^2}\sum_{d|r} \mu\big(\frac{r}{d}\big) (-1)^d {(2p+2)d - 1 \choose d-1} \label{br-pos-intro}\n\end{equation}\nfor $p \geq 2$. \n\end{proposition}\nOur formulas and the integrality of the BPS invariants\nlead to nontrivial integrality statements among sequences of rational numbers.\nIn particular, they imply that $b^\pm_{K_p,r}$ are integers for all natural numbers $r$ and all integers $p \neq 0,1$.\nNote that this implies a nontrivial statement, that for fixed $r$ the sums in expressions (\ref{br-neg-intro}) and (\ref{br-pos-intro}) are divisible by $r^2$. Also note that the above BPS degeneracies are determined explicitly for an infinite range of $r$; this is in contrast to other LMOV invariants in the literature, which were determined explicitly only for some finite range of labeling representations (Young diagrams consisting of up to several boxes)\n\cite{Labastida:2000zp,Labastida:2000yw,Labastida:2001ts,Ramadevi_Sarkar,Zodinmawia:2011oya,Zodinmawia:2012kn}.\n\n\nWhat's more, we experimentally discover an Improved Integrality for the BPS invariants, observed by Kontsevich for algebraic curves satisfying the $K_2$ condition \cite{Kontsevich-K2} (which is the same condition as already mentioned above \cite{abmodel}). 
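The divisibility by $r^2$ noted above is easy to probe numerically; a short sketch for the bottom-row invariants $b^-_{K_p,r}$ in (\ref{br-neg-intro}), valid for any $p\leq -1$ (e.g. the figure-eight knot $4_1$):

```python
from math import comb

def mobius(n):
    # Moebius function mu(n)
    mu, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

# Bottom-row BPS invariants of twist knots K_p, p <= -1:
#   b_r = -(1/r^2) sum_{d|r} mu(r/d) C(3d-1, d-1)
def b_minus(r):
    s = sum(mobius(r // d) * comb(3 * d - 1, d - 1)
            for d in range(1, r + 1) if r % d == 0)
    assert s % (r * r) == 0      # the nontrivial divisibility by r^2
    return -s // (r * r)

print([b_minus(r) for r in range(1, 7)])   # [-1, -1, -3, -10, -40, -171]
```

The assertion inside `b_minus` is precisely the divisibility by $r^2$ claimed in the proposition.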
\n\n\\begin{conjecture}(Improved Integrality)\n\\rm{} Given a knot there exist nonzero integers $\\gamma^\\pm$ such that for any $r\n\\in \\mathbb N$\n\\begin{equation}\n \\frac{1}{r} \\gamma^\\pm b^\\pm_r \\in \\mathbb Z .\n\\label{eq.improved}\n\\end{equation}\n\\end{conjecture}\nWe checked the above conjecture for twist knots, torus knots and several knots\nwith up to 10 crossings. The values of $\\gamma^{\\pm}$ for various knots are given in table \\ref{c-minmax-aug}. Note that this integrality conjecture is more general than the integrality\nof LMOV invariants \\cite{OoguriV,Labastida:2000zp,Labastida:2000yw,Labastida:2001ts}, which only implies integrality of $b_r$. It would be interesting to give a physical interpretation of\nthis property and of the conjectured integer knot invariants $\\gamma^\\pm$.\n\nOur next proposition illustrates the special geometry of the extremal \n$A$-polynomials of the twist knots.\n\n\\begin{proposition}\n\\rm{(a)} The extremal A-polynomials of twist knots are are given by\n\\begin{align}\n \\mathcal{A}^-_{K_p}(x,y) &= x-y^4+y^6, & \n\\mathcal{A}^+_{K_p}(x,y) &= 1-y^2+x y^{4|p|+2}, & p \\leq -1, \n\\label{Ap-neg-intro} \\\\\n\\mathcal{A}^-_{K_p}(x,y) &=1-y^2- x y^4, &\n\\mathcal{A}^+_{K_p}(x,y) &= 1-y^2+x y^{4p+4}, & p \\geq 2. \n\\label{Ap-pos-intro}\n\\end{align}\n\\rm{(b)}\nThe algebraic curves $\\mathcal{A}^\\pm_{K_p}(x,y^{1\/2})$ have a distinguished\nsolution $y=y(x) \\in 1 + x \\mathbb Z[[x]]$ such that\n\\begin{equation}\n\\boxed{\\text{$y$ and $x y'\/y$ are algebraic hypergeometric functions}}\n\\label{eq.fortunate}\n\\end{equation}\n\\end{proposition}\nExplicit formulas for those solutions can be found in Section \\ref{sec-results}.\nNote that if $y$ is algebraic, so is $x y'\/y$. The converse nearly holds. \nThe next theorem was communicated to us by Kontsevich in the 2011 \nArbeitstagung talk. 
A proof, using the solution to the Grothendieck-Katz \nconjecture, was written in Kassel-Reutenauer~\cite{KS}.\n\n\begin{theorem}\n\label{thm.KS}\nIf $f \in \mathbb Z[[x]]$ is a formal power series with integer coefficients such\nthat $g=x f'\/f$ is algebraic, then $f$ is algebraic. \n\end{theorem}\nWith the above notation, the converse holds trivially: if $f$ is algebraic,\nso is $g$, without any integrality assumption on the coefficients of $f$.\nThe integrality hypothesis is required in Theorem~\ref{thm.KS}: if $f=e^x$\nthen $g=x$. Note finally that the class of algebraic hypergeometric functions \nhas been completely classified in \cite{beukers:monodromy, beukers:algebraic}. \nUsing this, one can also determine the functions that satisfy \n\eqref{eq.fortunate}. We will not pursue this here.\n\nThe special geometry property \eqref{eq.fortunate} does not seem to hold \nfor non-twist hyperbolic knots. \n\nNext, we point out more information encoded in extremal A-polynomials.\n\n\begin{lemma}\n\label{prop.disc}\nThe exponential growth rates of $b^{\pm}_r$ are given by the zeros of the $y$-discriminant of the corresponding extremal A-polynomials $\mathcal{A}^{\pm}_{K}(x,y)$\n\begin{align}\nx_0=\lim_{r\to\infty} \frac{b^{\pm}_r}{b^{\pm}_{r+1}}\quad \Rightarrow\quad \n\mathrm{Disc}_y \mathcal{A}_K^{\pm}(x_0,y)=0\n\end{align}\n\end{lemma}\n\nFor example, for twist knots $K_p$ with $p\leq -1$, the $y$-discriminant of $\mathcal{A}^-_{K_p}(x,y)$ in (\ref{Ap-neg-intro}) is\ngiven by \n$$\n\mathrm{Disc}_y \mathcal{A}^-_{K_p}(x,y)= -64 x^3 (-4 + 27 x)^2\n$$\nand its zero $x_0=\frac{4}{27}$ matches the exponential growth rate \nof $b^-_{K_p,r}$ in (\ref{br-neg-intro}), which can be obtained from the Stirling formula.\n\nFurthermore, we generalize the above analysis of extremal A-polynomials and extremal BPS states by introducing the dependence on the variables $a$ and $t$. 
As conjectured in \\cite{AVqdef,superA}, the augmentation polynomials \\cite{Ng,NgFramed}, that depend on variable $a$, should agree with Q-deformed polynomials obtained as classical limits of recursions for colored HOMFLY polynomials \\cite{AVqdef}. We verify that this is indeed the case by computing corresponding BPS invariants from augmentation polynomials, and verifying that they are consistent with LMOV invariants determined from known colored HOMFLY polynomials for various knots with up to 10 crossings. While using our method we can determine BPS invariants for arbitrarily large representations $S^r$ from augmentation polynomials, for more complicated knots the colored HOMFLY polynomials are known explicitly only for several values of $r$, see e.g. \\cite{Nawata:2013qpa,Wedrich:2014zua}. Nonetheless this is already quite a non-trivial check.\n\nFinally we introduce the dependence on the parameter $t$, consider refined BPS degeneracies arising from appropriate redefinitions of super-A-polynomials,\nand show their integrality.\n\nWe note that apart from the relation to the LMOV invariants our results have an interpretation also from other physical perspectives. In particular, recently a lot of attention has been devoted to the so-called 3d-3d duality, which relates knot invariants to 3-dimensional $\\mathcal{N}=2$ theories \\cite{DGG,superA,Chung:2014qpa}. In this context the A-polynomial curves, such as (\\ref{Axya}) or extremal A-polynomials, represent the moduli space of vacua of the corresponding $\\mathcal{N}=2$ theories. Furthermore, the equation (\\ref{xdyy}) can be interpreted as imposing (order by order) relations between BPS degeneracies, which should arise from relations in the corresponding chiral rings. 
We also note that in the corresponding brane system extremal invariants arise from the $\\mathbb{C}^3$ limit of the underlying resolved conifold geometry, and the precise way of taking this limit is encoded in the integers $c_{\\pm}$ mentioned below (\\ref{b-ri}) that specify the extreme rays. We comment on these and other physical interpretations in section \\ref{sec-conclude} and plan to analyze them further in future work.\n\n\n\n\n\\section{BPS invariants of knots from algebraic curves} \\label{sec-BPS}\n\n\n\n\n\n\\subsection{BPS invariants for knots} \\label{sub-total}\n\nIn this section we recall the formulation of Labastida-Mari{\\~n}o-Ooguri-Vafa (LMOV) invariants and discuss their form in the case of $S^r$-colored HOMFLY polynomials. The starting point is to consider the Ooguri-Vafa generating function \\cite{OoguriV,Labastida:2000zp,Labastida:2000yw,Labastida:2001ts}\n\\begin{equation}\nZ(U,V) = \\sum_R \\textrm{Tr}_R U \\, \\textrm{Tr}_R V = \\exp\\Big( \\sum_{n=1}^{\\infty} \\frac{1}{n} {\\rm Tr \\,} U^n {\\rm Tr \\,} V^n \\Big),\n\\end{equation}\nwhere $U=P\\,\\exp\\oint_K A$ is the holonomy of the $U(N)$ Chern-Simons gauge field along a knot $K$, $V$ can be interpreted as a source, and the sum runs over all representations $R$, i.e. all two-dimensional partitions. 
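In the degenerate rank-one case (scalar $U=u$ and $V=v$) only single-row representations survive, with $\textrm{Tr}_{S^r} u = u^r$, and the identity above collapses to a geometric series equal to the exponential of a logarithm. A minimal numerical sanity check of this degenerate case (a sketch, not a computation from the paper):

```python
import math

u, v = 0.3, 0.2
K = 60  # truncation order; the tails are negligibly small for |uv| < 1

# left side: sum over representations reduces to sum_r (uv)^r = 1/(1 - uv)
lhs = sum((u * v) ** r for r in range(K))
# right side: exp( sum_n (1/n) u^n v^n ) = exp(-log(1 - uv))
rhs = math.exp(sum((u * v) ** n / n for n in range(1, K)))

assert abs(lhs - rhs) < 1e-12
assert abs(lhs - 1 / (1 - u * v)) < 1e-12
```

For matrix-valued $U$, $V$ the identity is the Cauchy identity for Schur functions; the scalar case shown here is its simplest instance.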
The LMOV conjecture states that the expectation value of the above expression takes the following form\n\\begin{equation}\n\\big\\langle Z(U,V) \\big\\rangle = \\sum_R P_{K,R}(a,q) \\textrm{Tr}_R V = \\exp \\Big( \\sum_{n=1}^\\infty \\sum_R \\frac{1}{n} f_{K,R}(a^n,q^n) \\textrm{Tr}_R V^n \\Big), \\label{ZUV}\n\\end{equation}\nwhere the expectation value of the holonomy is identified with the unnormalized HOMFLY polynomial of a knot $K$, $\\langle \\textrm{Tr}_R U \\rangle = P_{K,R}(a,q)$, and the functions $f_{K,R}(a,q)$ take the form\n\\begin{equation}\nf_{K,R}(a,q) = \\sum_{i,j} \\frac{N_{R,i,j} a^i q^j}{q-q^{-1}}, \\label{fR}\n\\end{equation}\nwhere $N_{R,i,j}$ are the famous BPS degeneracies, or LMOV invariants, a term that we will use interchangeably; in particular they are conjectured to be integers. In the string theory interpretation they count D2-branes ending on D4-branes that wrap a Lagrangian submanifold associated to a given knot $K$. From the two-dimensional space-time perspective D2-branes are interpreted as particles with charge $i$, spin $j$, and magnetic charge $R$. For a fixed $R$ there is a finite range of $i$ and $j$ for which $N_{R,i,j}$ are non-zero. \n\nIn what follows we are interested in the case of a one-dimensional source $V=x$. In this case ${\\rm Tr \\,}_R V \\neq 0$ only for symmetric representations $R=S^r$ (labeled by partitions with a single row\nwith $r$ boxes \\cite{AM}), so that $\\textrm{Tr}_{S^r}(x) = x^r$. For a knot $K$, let us denote $P_K(x,a,q)=\\langle Z(U,x) \\rangle$, and let $P_{K,r}(a,q) \\in \\mathbb Q(a,q)$ denote the $S^r$-colored HOMFLY polynomial of $K$. For a detailed \ndefinition of the latter, see for example \\cite{AM}. 
In this setting (\\ref{ZUV}) reduces to the following expression\n\\begin{equation}\nP_K(x,a,q) = \\sum_{r=0}^\\infty P_{K,r}(a,q) x^r = \n\\exp\\Big( \\sum_{r,n\\geq 1} \\frac{1}{n} f_{K,r}(a^n,q^n)x^{n r}\\Big).\n\\label{Pz2}\n\\end{equation}\nNote that we use the unnormalized (or unreduced) HOMFLY polynomials, so that the unknot is normalized as\n$P_{{\\bf 0_1},1}(a,q)=(a-a^{-1})\/(q-q^{-1})$.\nOften, we will drop the knot $K$ from the notation.\nNote that $f_r(a,q)$ is a universal polynomial (with rational coefficients) of $P_{r\/d}(a^d,q^d)$ for all divisors $d$ of $r$. For instance, we have:\n\\begin{eqnarray}\nf_1(a,q) &=& P_1(a,q), \\nonumber\\\\\nf_2(a,q) &=& P_2(a,q) - \\frac{1}{2}P_1(a,q)^2 -\\frac{1}{2} P_1(a^2,q^2), \\nonumber\\\\\nf_3(a,q) &=& P_3(a,q) - P_1(a,q)P_2(a,q) + \\frac{1}{3}P_1(a,q)^3 - \\frac{1}{3} P_1(a^3,q^3), \\label{f-P} \\\\\nf_4(a,q) &=& P_4(a,q) - P_1(a,q)P_3(a,q) - \\frac{1}{2}P_2(a,q)^2 + P_1(a,q)^2P_2(a,q) + \\nonumber\\\\\n& & -\\frac{1}{4}P_1(a,q)^4 -\\frac{1}{2}P_2(a^2,q^2)+\\frac{1}{4}P_1(a^2,q^2)^2. \\nonumber\n\\end{eqnarray}\nIt follows that $f_r(a,q) \\in \\mathbb Q(a,q)$. The LMOV conjecture asserts that \n$f_r(a,q)$ can be expressed as a finite sum\n$$\nf_{r}(a,q) = \\sum_{i,j} \\frac{N_{r,i,j} a^i q^j}{q-q^{-1}} ,\n\\qquad\nN_{r,i,j} \\in \\mathbb Z \\,,\n$$ \nand in this case the BPS degeneracies $N_{r,i,j}$ are labeled by a natural number $r$.\n\nWe now explain how to extract BPS degeneracies from the \ngenerating function \\eqref{Pz2}. 
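The triangular relations above can be inverted mechanically: the coefficient $c_m$ of $x^m$ in $\log P(x,a,q)$ equals $\sum_{n|m}\frac{1}{n} f_{m\/n}(a^n,q^n)$, which determines each $f_r$ recursively. A sketch in exact rational arithmetic, using a hypothetical toy choice of $P_r$ (any values work, since the relations are universal):

```python
from fractions import Fraction

def P(r, a, q):
    # hypothetical placeholder data standing in for colored polynomials
    return a ** r + q ** r + Fraction(1, r + 1)

def log_coeffs(a, q, N):
    # c_m = coefficient of x^m in log(sum_{r>=0} P_r x^r), with P_0 = 1;
    # from (log P)' * P = P':  m c_m = m p_m - sum_{k<m} k c_k p_{m-k}
    p = [Fraction(1)] + [P(r, a, q) for r in range(1, N + 1)]
    c = [Fraction(0)] * (N + 1)
    for m in range(1, N + 1):
        c[m] = p[m] - sum(Fraction(k, m) * c[k] * p[m - k] for k in range(1, m))
    return c

def f(r, a, q):
    # invert the defining relation c_m = sum_{n | m} (1/n) f_{m/n}(a^n, q^n)
    c = log_coeffs(a, q, r)
    return c[r] - sum(Fraction(1, n) * f(r // n, a ** n, q ** n)
                      for n in range(2, r + 1) if r % n == 0)

a, q = Fraction(2), Fraction(1, 3)
f2 = P(2, a, q) - P(1, a, q) ** 2 / 2 - P(1, a ** 2, q ** 2) / 2
f3 = (P(3, a, q) - P(1, a, q) * P(2, a, q)
      + P(1, a, q) ** 3 / 3 - P(1, a ** 3, q ** 3) / 3)
assert f(2, a, q) == f2 and f(3, a, q) == f3
```

The recursion reproduces the closed formulas for $f_2$ and $f_3$ exactly, and extends to any $r$.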
First we write it in product form\n\\begin{eqnarray}\nP(x,a,q) &=& \\sum_r P_r(a,q) x^r = \n\\exp\\Big( \\sum_{r,n\\geq 1; i,j} \\frac{1}{n} \n\\frac{N_{r,i,j}(x^r a^i q^j)^n}{q^{n}-q^{-n}}\\Big) \\nonumber\\\\\n&=& \\exp\\Big( \\sum_{r\\geq 1;i,j;k\\geq 0} N_{r,i,j}\n\\log(1-x^r a^i q^{j+2k+1} )\\Big) \\nonumber\\\\\n&=& \\prod_{r\\geq 1;i,j;k\\geq 0} \\Big(1 - x^r a^i q^{j+2k+1} \\Big)^{N_{r,i,j}} \n\\label{Pr-LMOV} \\\\\n&=& \\prod_{r\\geq 1;i,j} \\Big( x^r a^i q^{j+1};q^2 \\Big)_\\infty^{N_{r,i,j}} \n\\nonumber\n\\end{eqnarray}\nwhere the $q$-Pochhammer symbol (or quantum dilogarithm) notation is used\n\\begin{equation}\n(x;q)_\\infty =\\prod_{k=0}^\\infty (1-x q^k) \\,. \\label{qdilog}\n\\end{equation}\nThen, in the limit $q=e^{\\hbar} \\to 1$, using the well-known asymptotic expansion of the quantum dilogarithm (see e.g. \\cite{abmodel}), we get the following asymptotic expansion of $P(x,a,e^{\\hbar})$\n\\begin{eqnarray}\nP(x,a,e^{\\hbar}) &=& \\exp\\Big( \\sum_{r,i,j} N_{r,i,j} \n\\big( \\frac{1}{2\\hbar}\\textrm{Li}_2(x^r a^i) -\\frac{j}{2}\\log(1-x^r a^i) \n+O(\\hbar) \\big) \\Big)= \\nonumber \\\\\n&=&\\exp\\Big(\\frac{1}{2\\hbar}\\sum_{r,i} b_{r,i} \\textrm{Li}_2(x^r a^i) \n-\\sum_{r,i,j} \\frac{j}{2}N_{r,i,j}\\log(1-x^r a^i) +O(\\hbar) \\Big)\n\\label{Px-asympt} \\\\\n&=& \\exp\\Big(\\frac{1}{\\hbar} S_0(x,a)+S_1(x,a) + O(\\hbar)\\Big), \\nonumber\n\\end{eqnarray}\nwhere \n\\begin{align}\nS_0(x,a) &= \\frac{1}{2} \\sum_{r,i} b_{r,i} \\textrm{Li}_2(x^r a^i), \\\\\nS_1(x,a) &= -\\frac{1}{2}\\sum_{r,i,j} j\\, N_{r,i,j} \\log(1-x^r a^i). \n\\label{Px-dilog}\n\\end{align}\nAbove we introduced \n\\begin{equation}\nb_{r,i} = \\sum_j N_{r,i,j} \\,\n\\label{eq.btotal}\n\\end{equation}\nthat appear at the lowest order of the $\\hbar$-expansion in the exponent of (\\ref{Px-asympt}) and can be interpreted as the classical BPS degeneracies. These degeneracies are our main concern. 
In the string theory interpretation they determine holomorphic disk amplitudes or superpotentials \\cite{OoguriV,AV-discs,AKV-framing} for D-branes conjecturally associated to knots. In what follows, unless otherwise stated, by BPS degeneracies we mean these numbers.\n\nOur next task is to compute $S_0(x,a)$. To do so, we use a linear $q$-difference\nequation for $P(x,a,q)$ reviewed in the next section.\n\n\n\n\\subsection{Difference equations and algebraic curves} \\label{ssec-AJ}\n\nIn this section we introduce various algebraic curves associated to knots.\nFirst, recall that the colored Jones polynomial $J_{K,r}(q) \\in \\mathbb Z[q^{\\pm 1}]$ of a knot $K$ can \nbe defined as a specialization of the colored HOMFLY polynomial:\n$$\nJ_{K,r}(q)=P_{K,r}(q^2,q) \\,.\n$$\nIt is known that the colored Jones polynomial satisfies a linear $q$-difference\nequation of the form\n\\begin{equation}\n\\widehat{A}_K(\\hat M, \\hat L,q) J_{K,r}(q) = 0, \n\\end{equation}\nwhere $\\widehat{A}_K$ is a polynomial in all its arguments, and $\\hat M$ and $\\hat L$ are operators that satisfy the relation $\\hat L \\hat M = q \\hat M \\hat L$ and act on colored Jones polynomials by\n\\begin{equation}\n\\hat M J_{K,r}(q) = q^r J_{K,r}(q),\\qquad \\quad \n\\hat L J_{K,r}(q) = J_{K,r+1}(q). \\label{MhatLhat}\n\\end{equation}\nThe AJ Conjecture states that\n\\begin{equation}\n\\widehat{A}_K(\\hat M, \\hat L,1) = A_K(M,L)\n\\end{equation}\nwhere $A_K(M,L)$ is the A-polynomial of $K$ \\cite{CCGLS}. Likewise, we will\nassume that the colored HOMFLY polynomial of a knot satisfies a linear\n$q$-difference equation of the form\n\\begin{equation}\n\\widehat{A}(\\hat M, \\hat L,a,q) P_r(a,q) = 0 \\,. \\label{Ahat-a}\n\\end{equation}\nThe corresponding 3-variable polynomial $A(M,L,a)=A(M,L,a,1)$ defines a family\nof algebraic curves parametrized by $a$. 
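The $q$-commutation relation $\hat L \hat M = q \hat M \hat L$ is immediate from the definitions (\ref{MhatLhat}): acting on any sequence, $(\hat L\hat M J)_r = q^{r+1}J_{r+1}$ while $(\hat M\hat L J)_r = q^{r}J_{r+1}$. A two-line check on an arbitrary stand-in sequence (toy data, not an actual colored Jones polynomial):

```python
from fractions import Fraction

q = Fraction(5, 7)                    # generic numeric value of q
J = lambda r: Fraction(r * r + 1)     # hypothetical stand-in for J_{K,r}(q)

Mhat = lambda s: (lambda r: q ** r * s(r))   # (M J)_r = q^r J_r
Lhat = lambda s: (lambda r: s(r + 1))        # (L J)_r = J_{r+1}

# verify L M = q M L on the sequence J, for several values of r
for r in range(8):
    assert Lhat(Mhat(J))(r) == q * Mhat(Lhat(J))(r)
```

The same check works for any sequence, since the relation is a property of the operators alone.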
A further conjecture \\cite{AVqdef,superA} identifies\nthe 3-variable polynomial $A(M,L,a)$ with the augmentation polynomial of\nknot contact homology \\cite{Ng,NgFramed}.\n\nWe further assume the existence of the super-A-polynomial, i.e. the refined colored HOMFLY polynomial $P_r(a,q,t)$ of a knot, that specializes at $t=-1$ to the usual colored HOMFLY\npolynomial, and that also satisfies a linear $q$-difference equation\n\\cite{superA,FGSS}\n\\begin{equation}\n\\widehat{A}^{\\textrm{super}}(\\hat M,\\hat L, a,q,t) P_r(a,q,t) = 0. \n\\end{equation}\nThe specialization $\\widehat{A}^{\\textrm{super}}(M,L, a,1,t)$ can be thought\nof as an $(a,t)$-family of A-polynomials of a knot.\n\nIn the remainder of this section we discuss a dual version $\\mathcal{A}(x,y,a)$\nof the algebraic curve $A(M,L,a)$. \n\n\\begin{lemma}\nFix a sequence $P_r(a,q)$ which is annihilated by an operator \n$\\widehat A (\\hat M,\\hat L,a,q)$ and consider the\ngenerating function $P(x,a,q)=\\sum_{r=0}^{\\infty} P_r(a,q) x^r$. Then, \n\\begin{equation}\n\\widehat{A}(\\hat{y},\\hat{x}^{-1},a,q) P(x,a,q) = const, \\label{AyxP}\n\\end{equation}\nwhere\n\\begin{equation}\n\\hat{x} P(x,a,q) = x P(x,a,q),\\qquad \\quad \\hat{y} P(x,a,q) = P(qx,a,q) \\label{lemma-Px}\n\\end{equation}\nsatisfy $\\hat{x} \\hat{y} = q \\hat{y} \\hat{x}$, and $const$ is a $q$-dependent term that vanishes in the limit $q\\to 1$. \n\\end{lemma}\n\n\\noindent\n\\emph{Proof.} \nWe have\n\\begin{equation}\n\\widehat{A}(\\hat M,\\hat L) P(x,a,q) = \n\\sum_r x^r \\widehat{A}(\\hat M,\\hat L) P_r(a,q) = 0. 
\\label{AhatMLP}\n\\end{equation}\nOn the other hand, acting with $\\hat M$ and $\\hat L$ on this generating function (and taking care of the boundary terms) we get\n\\begin{eqnarray}\n\\hat M P(x,a,q) &=& \\sum_{r=0}^{\\infty} P_r(a,q) (qx)^r = P(qx,a,q), \\\\\n\\hat L P(x,a,q) &=& \\sum_{r=0}^{\\infty} P_{r+1}(a,q)x^r = \\frac{1}{x} \\Big(P(x,a,q) - P_0(a,q) \\Big)\\nonumber.\n\\end{eqnarray}\nTherefore the action of $\\hat M$ and $\\hat L$ on $P(x)$ can be identified, respectively, with the action of operators $\\hat{y}$ and $\\hat{x}^{-1}$, up to the subtlety in the boundary term arising from $r=0$. From the property of the recursion relations for the HOMFLY polynomial the result follows.\n\\qed\n\nApplying the above lemma to the colored HOMFLY polynomial $P_r(a,q)$, this\nmotivates us to introduce the operator\n\\begin{equation}\n\\widehat{\\mathcal{A}}(\\hat{x},\\hat{y},a,q) = \n\\widehat{A}(\\hat{y},\\hat{x}^{-1},a,q),\n\\end{equation}\nso that (\\ref{AyxP}) can be simply written as\n\\begin{equation}\n\\widehat{\\mathcal{A}}(\\hat{x},\\hat{y},a,q) P(x,a,q) = const. \\label{calAxyP}\n\\end{equation}\nIn the limit $q\\to 1$ the right hand side vanishes and we can consider the algebraic curve\n\\begin{equation}\n\\mathcal{A}(x,y,a)= A(y,x^{-1},a). \\label{calAxy}\n\\end{equation}\n\n\n\\subsection{The Lambert transform}\n\nIn this section we recall the Lambert transform of two sequences $(a_n)$\nand $(b_n)$ which is useful in the proof of Proposition \\ref{prop.main}.\n\n\\begin{lemma}\n\\label{lem.1}\n\\rm{(a)} Consider two sequences $(a_n)$ and $(b_n)$ for $n=1,2,3,\\dots$ that\nsatisfy the relation\n\\begin{equation}\n\\label{eq.ab}\na_n = \\sum_{d | n} b_d\n\\end{equation}\nfor all positive natural numbers $n$. Then we have:\n\\begin{equation}\n\\label{eq.ba}\nb_n = \\sum_{d | n} \\mu\\left(\\frac{n}{d}\\right) a_d\n\\end{equation}\nwhere $\\mu$ is the M\\\"obius function. 
Moreover, we have the Lambert \ntransformation property\n\\begin{equation}\n\\label{eq.gf}\n\\sum_{n=1}^\\infty a_n q^n = \n\\sum_{n=1}^\\infty b_n \\frac{q^n}{1-q^n} \n\\end{equation} \nand the Dirichlet series property\n\\begin{equation}\n\\label{eq.gf2}\n\\sum_{n=1}^\\infty \\frac{a_n}{n^s} = \\zeta(s)\n\\sum_{n=1}^\\infty \\frac{b_n}{n^s} \\,.\n\\end{equation} \n\\rm{(b)} If $(a_n)$ has an asymptotic expansion\n\\begin{equation}\n\\label{eq.asa}\na_n \\sim \\sum_{(\\lambda,\\alpha)} \\lambda^n n^{\\alpha}\n\\left( c_0 + \\frac{c_1}{n} + \\frac{c_2}{n^2}\n+ \\dots \\right)\n\\end{equation} \nwhere the first sum is a finite sum of pairs $(\\lambda,\\alpha)$ such that\n$|\\lambda|$ is fixed, then so does $(b_n)$ and vice-versa.\n\\end{lemma}\n\n\\noindent\n\\emph{Proof.}\nPart (b) follows from Equation~\\eqref{eq.gf2} easily. For a detailed discussion,\nsee the appendix to \\cite{zeidler} by D. Zagier.\n\\qed\n\n\\begin{lemma}\n\\label{lem.2}\nSuppose $y \\in \\mathbb Z[[x]]$ is algebraic with constant term $y(0)=1$. Write\n$$\ny=1+\\sum_{n=1}^\\infty c_n x^n = \\prod_{n=1}^\\infty (1-x^n)^{b_n}.\n$$\nIf $y$ has a singularity in the interior of the unit circle then $(b_n)$\nhas an asymptotic expansion of the form~\\eqref{eq.asa}. Moreover, the\nsingularities of the multivalued function $y=y(x)$ are the complex roots of the\ndiscriminant of $p(x,y)$ with respect to $y$, where $p(x,y)=0$ is a \npolynomial equation.\n\\end{lemma}\n\n\\noindent\n\\emph{Proof.}\nWe have:\n$$\n\\log y =\\sum_{n=1}^\\infty b_n \\log(1-x^n)\n$$\nthus if $z=x d\\log y=xy'\/y$, then we have\n$$\nz = \\sum_{n=1}^\\infty n b_n \\frac{x^n}{1-x^n} .\n$$\nNow $z$ is algebraic by the easy converse to Theorem~\\ref{thm.KS}. It follows \nthat the coefficients $(a_n)$ of its Taylor series\n$$\nz=\\sum_{n=1}^\\infty a_n x^n\n$$\nis a sequence of Nilsson type~\\cite{Ga:nilsson}. Since $z$ is algebraic, \nthere are no $\\log n$ terms in the asymptotic expansion. 
\nMoreover, the exponential growth rate is bigger than 1, in absolute value. \nPart (b) of Lemma~\\ref{lem.1} concludes the proof.\n\\qed\n\n\n\n\\subsection{Proof of Proposition 1.1.}\n\nLet us define\n\\begin{equation}\ny(x,a) = \n\\lim_{q\\to 1} \\frac{P(qx,a,q)}{P(x,a,q)} = \n\\lim_{q\\to 1} \\prod_{r\\geq 1;i,j;k\\geq 0} \n\\Big(\\frac{1 - x^r a^i q^{r+j+2k+1} }{1 - x^r a^i q^{j+2k+1}}\\Big)^{N_{r,i,j}} \n= \\prod_{r\\geq 1;i} (1 - x^r a^i)^{-r b_{r,i} \/ 2} .\n\\label{sfx-prod}\n\\end{equation}\nIf $P(x,a,q)$ is annihilated by $\\widehat{\\mathcal{A}}(\\hat{x},\\hat{y},a,q)$,\nit follows that $y=y(x,a)$ is a solution of the polynomial equation\n\\begin{equation}\n\\mathcal{A}(x,y,a) = 0.\n\\end{equation}\nIndeed, divide the recursion \n$$\n\\widehat{\\mathcal{A}}(\\hat{x},\\hat{y},a,q) P(x,a,q)=0\n$$\nby $P(x,a,q)$, and observe that\n$$\n\\lim_{q \\to 1} \\frac{P(q^jx,a,q)}{P(x,a,q)} =\n\\prod_{l=1}^j \\lim_{q \\to 1} \\frac{P(q^l x,a,q)}{P(q^{l-1}x,a,q)} =\ny(x,a)^j.\n$$\nTaking the logarithm and then differentiating \\eqref{sfx-prod} yields\nEquation \\eqref{xdyy}. Part (b) of Proposition \\ref{prop.main} follows from\nLemma \\ref{lem.1}.\n\\qed\n\n\n\n\n\n\n\\subsection{Refined BPS invariants}\n\nIn this section we discuss refined BPS invariants $N_{r,i,j,k}$. In full generality, we can consider the generating function of superpolynomials $P_r(a,q,t)$; suppose that it has a product structure analogous to (\\ref{Pr-LMOV}), however with an additional $t$-dependence\n\\begin{equation}\nP(x,a,q,t) = \\sum_{r=0}^\\infty P_r(a,q,t) x^r \n= \\prod_{r\\geq 1;i,j,k;n\\geq 0} \n\\Big(1 - x^r a^i t^j q^{k+2n+1} \\Big)^{N_{r,i,j,k}} .\n\\label{Pr-LMOVref} \n\\end{equation}\nWe conjecture that the refined BPS numbers $N_{r,i,j,k}$ encoded in this expression are integers. 
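Product forms such as the one above are repeatedly converted into divisor-sum expansions of the log-derivative $x\,\partial_x y\/y$. That mechanism can be verified directly on truncated power series with hypothetical toy degeneracies (chosen so that all exponents $-r b\/2$ are integers; these values are not invariants of any knot):

```python
from fractions import Fraction

N = 13
a, t = Fraction(2), Fraction(-1, 3)
b = {(1, 1, 0): 2, (2, -1, 1): -4}   # toy b_{r,i,j} with r*b even

def mul(p, q):
    r = [Fraction(0)] * N
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j < N:
                    r[i + j] += pi * qj
    return r

def inv(p):
    r = [Fraction(0)] * N
    r[0] = 1 / p[0]
    for n in range(1, N):
        r[n] = -sum(p[k] * r[n - k] for k in range(1, n + 1)) / p[0]
    return r

# build y = prod (1 - c x^r)^(-r b/2) with c = a^i t^j, as a truncated series
y = [Fraction(0)] * N; y[0] = Fraction(1)
for (r, i, j), bij in b.items():
    c = a ** i * t ** j
    factor = [Fraction(0)] * N
    factor[0] = Fraction(1); factor[r] = -c      # 1 - c x^r
    e = -r * bij // 2                            # integer exponent
    base = factor if e > 0 else inv(factor)
    for _ in range(abs(e)):
        y = mul(y, base)

# left side: x y'(x)/y(x), computed genuinely from the series for y
xdy = [n * y[n] for n in range(N)]
lhs = mul(xdy, inv(y))

# right side: (1/2) sum r^2 b_{r,i,j} x^r c / (1 - x^r c), expanded
rhs = [Fraction(0)] * N
for (r, i, j), bij in b.items():
    c = a ** i * t ** j
    for m in range(1, N):
        if r * m < N:
            rhs[r * m] += Fraction(r * r * bij, 2) * c ** m
assert lhs == rhs
```

The agreement order by order is the elementary identity behind extracting $b_{r,i,j}$ from an algebraic solution $y(x,a,t)$.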
As in this work we are mainly interested in invariants in the $q\\to 1$ limit, encoded in (classical) algebraic curves, let us denote them as\n\\begin{equation}\nb_{r,i,j} = \\sum_k N_{r,i,j,k}.\n\\end{equation} \nIn this case the curves in question are of course the dual versions of the super-A-polynomial $A^{\\rm super}(M,L,a,t)$, with arguments transformed as in (\\ref{calAxy}), i.e.\n\\begin{equation}\n\\mathcal{A}(x,y,a,t) = A^{\\rm super}(y,x^{-1},a,t)= 0.\n\\end{equation}\nSolving this equation for $y=y(x,a,t)$ and following steps that led to (\\ref{xdyy}), we find now\n\\begin{equation}\nx\\frac{\\partial_x y(x,a,t)}{y(x,a,t)} = \\frac{1}{2} \\sum_{r,i,j} r^2 b_{r,i,j} \\frac{x^r a^i t^j}{1-x^r a^i t^j}, \\label{xdyy-ref}\n\\end{equation}\nand from such an expansion $b_{r,i,j}$ can be determined. Conjecturally these are integers; as we will see in several examples, this indeed turns out to be true. \n\n\n\n\n\n\\section{Extremal invariants} \\label{sec-extreme}\n\n\\subsection{Extremal BPS invariants}\n\nIn this section we define extremal BPS invariants of knots. If $P_r(a,q)$ is\na $q$-holonomic sequence, it follows that the minimal and maximal exponents with respect to\n$a$ are quasi-linear functions of $r$, for large enough $r$. This follows \neasily from the Lech-Mahler-Skolem theorem as used in \\cite{Ga:degree} and is \nalso discussed in detail in \\cite{vanderveen}. We now restrict our attention to \nknots that satisfy\n\\begin{equation}\nP_r(a,q) = \\sum_{i=r \\cdot c_-}^{r\\cdot c_+} a^i p_{r,i}(q) \n\\label{Pr-minmax}\n\\end{equation}\nfor some integers $c_\\pm$ and for every natural number $r$, where\n$p_{r,r \\cdot c_\\pm}(q) \\neq 0$.\nThis is a large class of knots -- in particular two-bridge knots and torus knots have this property.\nFor such knots, we can consider the extremal parts \nof the colored HOMFLY polynomials (i.e. 
their top and bottom rows, that is the coefficients of maximal and minimal powers of $a$), defined as one-variable polynomials\n\\begin{equation}\nP^\\pm_r(q) = p_{r,r \\cdot c_\\pm}(q). \\label{Pminmax}\n\\end{equation}\nLikewise, we define the extremal LMOV invariants by \n\\begin{equation}\nf^\\pm_r(q) = \n\\sum_j \\frac{N_{r, r \\cdot c_\\pm,j} q^j}{q - q^{-1}} \\,, \\label{f-minmax}\n\\end{equation}\nand the extremal BPS invariants by\n\\begin{equation}\nb^\\pm_r = b_{r,r \\cdot c_\\pm} = \\sum_j N_{r,r \\cdot c_\\pm,j}. \\label{br-minmax}\n\\end{equation}\nWe also refer to $b^+_r$ and $b^-_r$ as, respectively, top and bottom BPS invariants. \nFinally, we define the extremal part of the generating series $P(x,a,q)$\nby\n\\begin{equation}\nP^\\pm(x,q) = \\sum_{r=0}^\\infty P^\\pm_r(q) x^r \n= \\prod_{r\\geq 1;j;k\\geq 0} \n\\Big(1 - x^r q^{j+2k+1} \\Big)^{N_{r,r \\cdot c_\\pm,j}}.\n\\label{Pr-LMOV-minmax} \n\\end{equation}\nThe analogue of Equation \\eqref{Px-asympt} is\n\\begin{equation}\nP^\\pm(x,e^{\\hbar}) = \n\\exp\\Big(\\frac{1}{2\\hbar}\\sum_{r} b^\\pm_r \\textrm{Li}_2(x^r) \n-\\sum_{r,j} \\frac{j}{2}N_{r,r \\cdot c_\\pm,j}\\log(1-x^r) +O(\\hbar) \\Big). \n\\label{Px-asympt-minmax}\n\\end{equation}\n\nIt follows from the LMOV conjecture that $b^{\\pm}_r$, as combinations of LMOV invariants $N_{r,r\\cdot c_{\\pm},j}$, are integers. Moreover, according to the Improved Integrality conjecture \\eqref{eq.improved}, for each knot one can find integers $\\gamma^{\\pm}$ such that \n\\begin{equation}\n \\frac{1}{r} \\gamma^\\pm b^\\pm_r \\in \\mathbb Z .\n\\end{equation}\nThe numbers $\\gamma^{\\pm}$ can be regarded as new invariants of a knot. 
We compute these numbers for various knots in section \\ref{sec-results}, with the results summarized in table \\ref{c-minmax-aug}.\n\n\\begin{table}\n\\begin{equation}\n\\begin{array}{|c|c|c||c|c|c||c|c|c|}\n\\hline \n\\textrm{\\bf Knot} & \\ \\gamma^- \\ & \\ \\gamma^+ \\ & \\textrm{\\bf Knot} & \\ \\gamma^- \\ & \\ \\gamma^+ \\ & \\textrm{\\bf Knot} & \\ \\gamma^- \\ & \\ \\gamma^+ \\ \\nonumber \\\\\n\\hline \n\\hline\n\\ K_{-1-6k} \\ & 2 & 2 & \\ K_{2+6k}\\ & 6 & 2 & 6_2 & 3 & 30 \\\\\n\\ K_{-2-6k} \\ & 2 & 3 & \\ K_{3+6k}\\ & 6 & 3 & 6_3 & 6 & 6 \\\\\n\\ K_{-3-6k} \\ & 2 & 2 & \\ K_{4+6k}\\ & 6 & 2 & 7_3 & 1 & 10 \\\\\n\\ K_{-4-6k} \\ & 2 & 1 & \\ K_{5+6k}\\ & 6 & 1 & 7_5 & 3 & 30 \\\\\n\\ K_{-5-6k} \\ & 2 & 6 & \\ K_{6+6k}\\ & 6 & 6 & 8_{19} & 7 & 1 \\\\\n\\ K_{-6-6k} \\ & 2 & 1 & \\ K_{7+6k}\\ & 6 & 1 & 10_{124} & 2 & 8 \\\\\n\\hline \n\\hline\n\\ T_{2,2p+1} \\ & \\ 2p+3 \\ &\\ 2p-1 & \\multicolumn{6}{c|}{ } \\\\\n\\hline \n\\end{array}\n\\end{equation} \n\\caption{Improved Integrality: values of $\\gamma^-$ and $\\gamma^+$ for various knots. For twist knots $K_p$ the range of the subscript is labeled by $k=0,1,2,\\ldots$, and $T_{2,2p+1}$ denotes $(2,2p+1)$ torus knot.} \\label{c-minmax-aug}\n\\end{table}\n\n\n\\subsection{Extremal A-polynomials}\n\nIt is easy to see that if $P_r(a,q)$ is annihilated by\n$\\widehat{A}(\\hat M, \\hat L,a,q)$, then its extremal part $P^\\pm_r(q)$ is\nannihilated by the operator $\\widehat{A}^\\pm(\\hat M, \\hat L,q)$\nobtained by multiplying \n$\\widehat{A}(\\hat M, \\hat L,a^{\\mp 1},q)$ by $a^{\\pm r c_\\pm}$ (to make every \npower of $a$ nonnegative), and then setting $a=0$. This allows us to introduce\nthe extremal analogues of the curve \\eqref{calAxy} defined as distinguished, irreducible factors in \n\\begin{equation}\n\\mathcal{A}(x a^{- c_\\pm},y,a)|_{a^{\\mp 1} \\to 0} \n\\label{Amin} \n\\end{equation}\nthat determine the extremal BPS degeneracies. 
We call these curves extremal A-polynomials and denote them $\\mathcal{A}^\\pm(x,y)$. We also refer to $\\mathcal{A}^+(x,y)$ and $\\mathcal{A}^-(x,y)$ as top and bottom A-polynomials respectively. The extremal A-polynomials that we determine in this work are listed in Appendix \\ref{sec-extremeA}.\n\nAmong various interesting properties of extremal A-polynomials we note that they are tempered, i.e. the roots of their face polynomials are roots of unity. This is a manifestation of their quantizability and the so-called $K_2$ condition \\cite{abmodel, superA,Kontsevich-K2}, and presumably is related to the Improved Integrality of the corresponding extremal BPS states. \n\n\n\\subsection{Extremal BPS invariants from extremal A-polynomials}\n\nIn this section we give the analogue of Proposition \\ref{prop.main} for\nextremal BPS invariants.\n\n\\begin{proposition}\n\\label{prop.main.extreme}\n\\rm{(a)} Fix a knot $K$ and a natural number $r$. Then the extremal \nBPS invariants $b^\\pm_{r}$ are given by\n\\begin{equation}\nx\\frac{(y^\\pm)'(x)}{y^\\pm(x)} \n= \\frac{1}{2} \\sum_{r\\geq 1} r^2 b^\\pm_{r} \\frac{x^r}{1-x^r}\\,, \n\\label{xdyy-minmax}\n\\end{equation}\nwhere $y^\\pm=y^\\pm(x) \\in 1 + \\mathbb Q[[x]]$ is an algebraic function of $x$\nthat satisfies a polynomial equation\n$$\n\\mathcal{A}^\\pm(x,y^\\pm)=0 \\,.\n$$\n\\rm{(b)} Explicitly, $x \\partial_x y^\\pm\/y^\\pm$ is an algebraic function of \n$x$ and if \n\\begin{equation}\n\\label{xdyy-an}\nx (y^\\pm)'(x)\/y^\\pm(x)=\\sum_{n \\geq 0} a^\\pm_{n} x^n,\n\\end{equation} \nthen\n\\begin{equation}\nb^\\pm_{r} = \\frac{2}{r^2} \\sum_{d|r} \\mu(d) a^\\pm_{\\frac{r}{d}} \n\\,. 
\n\\label{b-r}\n\\end{equation}\n\\end{proposition}\n\n\\noindent\n\\emph{Proof.}\nWe define\n\\begin{equation}\ny^\\pm(x) = \n\\lim_{q\\to 1} \\frac{P^\\pm(qx,q)}{P^\\pm(x,q)} = \n\\lim_{q\\to 1} \\prod_{r\\geq 1;j;k\\geq 0} \n\\Big(\\frac{1 - x^r q^{r+j+2k+1} }{1 - x^r q^{j+2k+1}}\\Big)^{N_{r,r \\cdot\nc_\\pm,j}} \n= \\prod_{r\\geq 1} (1 - x^r )^{-r b^\\pm_r \/ 2} .\n\\label{sfx-prod-minmax}\n\\end{equation}\nAs in the proof of Proposition \\ref{prop.main}, it follows that $y^\\pm(x)$\nsatisfies the polynomial equation\n$$\n\\mathcal{A}^\\pm(x,y)=0 \\,.\n$$\nThis concludes the first part. The second part follows just as in \nProposition \\ref{prop.main}.\n\\qed\n\n\n\n\n\\section{Examples and computations} \n\\label{sec-results}\n\nIn this section we illustrate the claims and ideas presented earlier in many examples. First of all, LMOV invariants arise from a redefinition of unnormalized knot polynomials. Therefore we recall that the unnormalized superpolynomial for the unknot reads \\cite{superA}\n\\begin{equation}\nP_{{\\bf 0_1},r}(a,q,t) = (-1)^{\\frac{r}{2}}a^{-r}q^{r}t^{-\\frac{3r}{2}} \\frac{(-a^2t^3;q^2)_{r}}{(q^2;q^2)_{r}} \\, ,\n\\label{Punknot} \n\\end{equation}\nand the unnormalized HOMFLY polynomial arises from the $t=-1$ specialization of this expression. In what follows we often take advantage of the results for normalized superpolynomials $P^{norm}_r(K,a,q,t)$ for various knots $K$, derived in \\cite{superA,FGSS,Nawata}. Then the unnormalized superpolynomials that we need from the present perspective differ simply by the unknot contribution\n\\begin{equation}\nP_{K,r}(a,q,t) = P_{{\\bf 0_1},r}(a,q,t) P^{norm}_{K,r}(a,q,t). \\label{Punnorm}\n\\end{equation}\nIn general, to get colored HOMFLY polynomials one would have to consider the action of a certain differential \\cite{DGR,GS}; however, for the knots considered in this paper, for which superpolynomials are known, HOMFLY polynomials arise from a simple substitution $t=-1$ in the above formulas. 
\n\n\n\\begin{wrapfigure}{l}{0.4\\textwidth}\n\\begin{center}\n\\includegraphics[scale=0.3]{draws\/4_1.pdf}\n\\end{center}\n\\caption{The $4_1$ knot.}\n\\label{fig-41}\n\\end{wrapfigure}\n\nLet us stress some subtleties related to various variable redefinitions. Super-A-polynomials for various knots, corresponding to the normalized superpolynomials $P^{norm}_r(K,a,q,t)$, were determined in \\cite{superA,FGSS,Nawata}; in the current notation we would write those polynomials using variables $M$ and $L$, as $A^{\\textrm{super}}(M,L,a,t)$. Super-A-polynomials in the unnormalized case (i.e. encoding asymptotics of unnormalized superpolynomials), which are relevant for our considerations, arise from $A^{\\textrm{super}}(M,L,a,t)$ by the substitution\n\\begin{equation}\nM\\mapsto M, \\qquad L \\mapsto (-a^2t^3)^{-1\/2} \\frac{1+ a^2 t^3 M}{1-M} L. \\label{MLnorm}\n\\end{equation}\nIn what follows we often consider $a$-deformed polynomials, which for the knots considered in this paper are again simply obtained by setting $t=-1$ in $A^{\\textrm{super}}(M,L,a,t)$. These $a$-deformed polynomials are not identical, but closely related (by a simple change of variables), to augmentation polynomials or Q-deformed polynomials; we will present these relations in detail in some examples.\n\n\n\\subsection{The unknot}\n\nLet us first illustrate how our formalism works for the unknot. From the analysis of asymptotics, or recursion relations satisfied by (\\ref{Punknot}), the following super-A-polynomial is determined\\footnote{More precisely, due to present conventions, one has to substitute $M^2\\mapsto x, L\\mapsto y, a^2\\mapsto a$ to obtain the curve from \\cite{superA} on the nose.} \\cite{superA}\n\\begin{equation}\nA(M,L,a,t) = (-a^{-2}t^{-3})^{1\/2}(1+a^2 t^3 M^2)-(1-M^2)L.\n\\end{equation}\nThe analysis of the refined case is essentially the same as that of the unrefined one, since the dependence on $t$ can be absorbed by a redefinition of $a$. 
Therefore let us focus on the unrefined case. From (\\ref{calAxy}) we find that, up to an irrelevant overall factor, the dual A-polynomial reads\n\\begin{equation}\n\\mathcal{A}(x,y,a) = x-a^2x y^2 -a+ay^2. \\label{Aconi}\n\\end{equation}\nFrom this expression we can immediately determine\n\\begin{equation}\ny^2=\\frac{1-x a^{-1}}{1-x a},\n\\end{equation}\nand comparing with (\\ref{sfx-prod}) we find only two non-zero BPS invariants $b_{1,\\pm 1}=\\pm 1$ (and from the LMOV formulas (\\ref{f-P}) one can check that there are also only two non-zero LMOV invariants $N_{1,\\pm 1,j}$). These invariants of course represent two open M2-branes wrapping $\\mathbb{P}^1$ in the conifold geometry \\cite{OoguriV,AV-discs} (for the refined case we also find just two refined BPS invariants). Furthermore, from (\\ref{Punknot}) we find $c_{\\pm}=\\pm 1$, and therefore from (\\ref{Amin}) we determine the following extremal A-polynomials\n\\begin{equation}\n\\mathcal{A}^-(x,y) = 1-x-y^2,\\qquad \\qquad \\mathcal{A}^+(x,y) = 1+xy^2-y^2,\n\\end{equation}\nwhich represent two $\\mathbb{C}^3$ limits of the resolved conifold geometry (in terms of $Y=y^2$, the curve (\\ref{Aconi}) and the above extremal A-polynomials are the usual B-model curves for the conifold and $\\mathbb{C}^3$ respectively).\n\n\n\\subsection{The $4_1$ knot}\n\nAs the second example we consider the $4_1$ (figure-8) knot, see figure \\ref{fig-41}.\nThe (normalized) superpolynomial for this knot reads \\cite{superA} \n\\begin{equation}\nP^{norm}_{r} (a,q,t) = \\sum_{k=0}^{\\infty} (-1)^k a^{-2k} t^{-2k} q^{-k(k-3)} \\frac{(-a^2 t q^{-2},q^2)_k}{(q^2,q^2)_k} (q^{-2r},q^2)_k (-a^2 t^3 q^{2r}, q^2)_k \\,.\n\\label{Paqt41}\n\\end{equation}\n\nFrom this formula, after setting $t=-1$, it is immediate to determine the LMOV invariants $N_{r,i,j}$ using the explicit relations (\\ref{f-P}), up to some particular value of $r$. 
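For the unknot, the consistency between the exact solution $y^2=(1-xa^{-1})\/(1-xa)$ and the invariants $b_{1,\pm1}=\pm1$ can be confirmed on truncated series. A sketch with a generic rational value of $a$ (not from the paper; the expected right-hand side follows from expanding the divisor sum with only $b_{1,1}=1$ and $b_{1,-1}=-1$ non-zero):

```python
from fractions import Fraction

N = 10
a = Fraction(3, 2)   # generic numeric value of a

def mul(p, q):
    r = [Fraction(0)] * N
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if i + j < N:
                    r[i + j] += pi * qj
    return r

def inv(p):
    r = [Fraction(0)] * N
    r[0] = 1 / p[0]
    for n in range(1, N):
        r[n] = -sum(p[k] * r[n - k] for k in range(1, n + 1)) / p[0]
    return r

num = [Fraction(0)] * N; num[0] = Fraction(1); num[1] = -1 / a   # 1 - x/a
den = [Fraction(0)] * N; den[0] = Fraction(1); den[1] = -a       # 1 - x a
Y2 = mul(num, inv(den))                        # the series for y^2
xdY2 = [n * Y2[n] for n in range(N)]
lhs = [c / 2 for c in mul(xdY2, inv(Y2))]      # x y'/y = (1/2) x (y^2)'/y^2

# expansion of (1/2)[ x a/(1-x a) - x a^{-1}/(1-x a^{-1}) ]
rhs = [Fraction(0)] * N
for r in range(1, N):
    rhs[r] = (a ** r - a ** (-r)) / 2
assert lhs == rhs
```

The two coefficient lists agree to all computed orders, as expected for the conifold geometry.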
Instead, using the knowledge of associated algebraic curves, we will explicitly determine the whole family of these invariants, labeled by arbitrary $r$.\n\nFirst of all, by considering recursion relations satisfied by $P^{norm}_r$, or from the analysis of its asymptotic behavior for large $r$, the following (normalized) super-A-polynomial is determined\\footnote{Again, one has to substitute $M^2\\mapsto x, L\\mapsto y, a^2\\mapsto a$ to obtain the curve from \\cite{superA} on the nose.} in \\cite{superA}\n\\begin{eqnarray}\nA^{\\text{super}} (M,L , a,t) \\, &=& \\,\na^4 t^5 (M^2-1)^2 M^4 + a^2 t^2 M^4 (1 + a^2 t^3 M^2)^2 L^3 + \\label{Asuper41} \\\\\n& & \\quad + a^2 t (M^2-1) (1 + t(1-t) M^2 + 2 a^2 t^3(t+1) M^4\n -2 a^2 t^4(t+1) M^6 + \\nonumber\\\\\n & & + a^4 t^6(1-t) M^8 - a^4 t^8 M^{10}) L - (1 + a^2 t^3 M^2) (1 + a^2 t(1-t) M^2 + \\nonumber \\\\\n & & \\quad + 2 a^2 t^2(t+1) M^4 + 2 a^4 t^4(t+1) M^6 + a^4 t^5(t-1) M^8 + a^6 t^7 M^{10}) L^2 . \\nonumber\n\\end{eqnarray}\nWe will use this formula when we consider refined BPS states; however at this moment let us consider its unrefined (i.e. $t=-1$) version. With the notation\nof Section \\ref{ssec-AJ} and Equation \\eqref{calAxy} we find the dual A-polynomial\n\\begin{eqnarray}\n\\mathcal{A}(x,y,a) &=& a^3 \\left(y^6-y^4\\right) +x \\left(-a^6 y^{10}+2 a^4 y^8-2 a^2 y^2+1\\right)+ \\label{calAa-41} \\\\\n& & + a x^2 \\left(a^4 y^{10}-2 a^4 y^8+2 y^2-1\\right) +x^3 \\left(a^2 y^4-a^4 y^6\\right). 
\\nonumber\n\\end{eqnarray}\nFrom \\eqref{Punnorm} and \\eqref{Paqt41} we find that the HOMFLY polynomial in the fundamental ($r=1$) representation is given by\n\\begin{equation}\nP_{1}(a,q) =a^{-3}\\frac{q}{\\left(1-q^2\\right)} + a^{-1}\\frac{q^4+1}{q \\left(q^2-1\\right)} \n+a \\frac{ \\left(q^4+1\\right)}{q \\left(1-q^2\\right)} + a^3 \\frac{ q}{q^2-1}.\n\\end{equation}\nComparing this with (\\ref{Pr-minmax}) we determine the value of $c_\\pm$\n\\begin{equation}\nc_- = -3, \\qquad \\quad c_+ =3, \\label{minmax-41}\n\\end{equation}\nso the extremal A-polynomials following from the definition \\eqref{Amin} and the result \\eqref{calAa-41} are given by\n\\begin{equation}\n\\mathcal{A}^-(x,y) = x - y^4 + y^6,\\qquad \\quad \n\\mathcal{A}^+(x,y) = 1 - y^2 + x y^6. \\label{Aminmax-41}\n\\end{equation}\nNote that $\\mathcal{A}^+(x,y^{-1}) = y^{-6} \\mathcal{A}^-(x,y)$, i.e. these curves agree up to $y\\mapsto y^{-1}$ (and multiplication by an overall monomial \nfactor) -- this reflects the fact that the $4_1$ knot is amphicheiral.\n\nWe can now extract the extremal BPS invariants from the curves (\\ref{Aminmax-41}). As these curves are cubic in terms of $Y=y^2$ variable, we can determine explicit solutions of the corresponding cubic equations. We will use two fortunate coincidences. \n\nThe first coincidence is that the unique solution $Y(x)=1+O(x)$ to the equation\n$\\mathcal{A}^-(x,Y) = x - Y^2 + Y^3 = 0$ is an algebraic hypergeometric \nfunction. Explicitly, we have \n\\begin{eqnarray}\nY(x) = Y^-(x) &=& \\frac{1}{3}+\\frac{2}{3}\\cos\\left[\\frac{2}{3}\\arcsin\\left(\\sqrt{\\frac{3^{3}x}{2^{2}}}\\right)\\right] \\nonumber\\\\\n&=& \\frac{1}{3}\\left[1-\\sum_{n=0}^{\\infty}\\frac{2}{3n-1}{3n \\choose n}x^{n}\\right] \\label{Y-41} \\\\\n&=& \\frac{1}{3}+\\frac{2}{3}\\ _{2}F_{1}\\left(-\\frac{1}{3},\\frac{1}{3};\\frac{1}{2};\\ \\frac{3^{3}x}{2^{2}}\\right). 
\\nonumber\n\\end{eqnarray}\nThe second coincidence is that $x \\partial_x Y^-\/Y^-$ is not only algebraic,\nbut also hypergeometric. Explicitly, we have:\n\\begin{eqnarray}\nx\\frac{\\partial_x Y^-(x)}{Y^-(x)} &=& \\frac{1}{3}-\\frac{2}{3}\\frac{\\cos\\left[\\frac{1}{6}\\arccos\\left(1-\\frac{27x}{2}\\right)\\right]}{\\sqrt{4-27x}} \\nonumber \\\\\n&=& - \\sum_{n=1}^{\\infty} {3n-1 \\choose n-1} x^n \\\\\n&=& \\frac{1}{3}-\\frac{1}{3}\\ _{2}F_{1}\\left(\\frac{1}{3},\\frac{2}{3};\\frac{1}{2};\\ \\frac{3^{3}x}{2^{2}}\\right).\n\\end{eqnarray}\nRecalling \\eqref{b-r}, we find that \n\\begin{equation}\nx\\frac{\\partial_x Y^-(x)}{Y^-(x)} =\\sum_{n=1}^\\infty a^-_n x^n, \\qquad\na^-_n = -\\frac{1}{2} {3n-1 \\choose n-1} \\label{an-41}\n\\end{equation}\nso that the extremal bottom BPS degeneracies \\eqref{b-r} are given by\n\\begin{equation}\nb^-_r = -\\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) {3d-1 \\choose d-1}. \\label{br-41}\n\\end{equation}\nSeveral values of $b^-_r$ are given in table \\ref{br-41-tab}. Note that the integrality of $b^-_r$ implies a nontrivial statement, that for each $r$ the sum in (\\ref{br-41}) must be divisible by $r^2$.\n\n\\begin{table}\n\\begin{equation}\n\\begin{array}{|c|c|c|}\n\\hline \nr & b^-_r = -b^+_r & 2\\frac{b^-_r}{r} \\nonumber \\\\\n\\hline \n1 & -1 & -2\\\\\n2 & -1 & -1 \\\\\n3 & -3 & -2 \\\\\n4 & -10 & -5 \\\\\n5 & -40 & -16 \\\\\n6 & -171 & -57 \\\\\n7 & -791 & -226 \\\\\n8 & -3\\, 828 & - 957 \\\\\n9 & -19\\, 287 & -4286 \\\\\n10 & -100\\, 140 & - 20\\, 028\\\\\n11 & -533\\, 159 & -96\\, 938 \\\\\n12 & -2\\, 897\\, 358 & -482\\, 893 \\\\\n13 & -16\\, 020\\, 563 & -2\\, 464\\, 702 \\\\\n14 & -89\\, 898\\, 151 & -12\\, 842\\, 593 \\\\\n\\ 15 \\ &\\ -510\\, 914\\, 700 \\ &\\ -68\\, 121\\, 960 \\ \\\\\n\\hline \n\\end{array}\n\\end{equation} \n\\caption{Extremal BPS invariants and their Improved Integrality for \nthe $4_1$ knot. 
\n} \\label{br-41-tab}\n\\end{table}\n\nIn an analogous way we determine the top BPS invariants. \nThe two coincidences mentioned above persist. The solution \n$Y^+(x)=1\/Y^-(x)=1+O(x)$ of the equation $\\mathcal{A}^+(x,Y)=1-Y+xY^3=0$ is\ngiven by\n\\begin{equation}\nY^+(x) = \\frac{2}{\\sqrt{3x}}\\sin\\left[\\frac{1}{3}\\arcsin\\left(\\sqrt{\\frac{3^{3}x}{2^{2}}}\\right)\\right] = \\sum_{n=0}^{\\infty} \\frac{x^n}{2n+1} {3n \\choose n} = \\, _{2}F_{1}\\left(\\frac{1}{3},\\frac{2}{3};\\frac{3}{2};\\ \\frac{3^{3}x}{2^{2}}\\right)\n\\end{equation}\nso that\n\\begin{equation}\nx\\frac{\\partial_x Y^+(x)}{Y^+(x)} = - x\\frac{\\partial_x Y^-(x)}{Y^-(x)}.\n\\end{equation}\nTherefore $a_n^+=-a^-_n$ and $b^+_r = -b^-_r$, which is a manifestation of the amphicheirality of the $4_1$ knot. The above results illustrate Proposition \\ref{prop.twist} for the $4_1=K_{-1}$\ntwist knot. \n\nExperimentally, it also appears that the Improved Integrality \\eqref{eq.improved} holds with $\\gamma^\\pm =2$; see table \\ref{br-41-tab}.\n\nNext, we discuss the asymptotics of $b^\\pm_r$ for large $r$. Stirling's \nformula gives the asymptotics of $a^-_r$, and part (b) of Lemma \\ref{lem.1}\nthen implies that the asymptotics of $b^-_r$ are given by\n\\begin{align*}\nb^-_r &= -\\frac{1}{2 \\sqrt{3 \\pi}} \n\\left(\\frac{27}{4}\\right)^r r^{-5\/2} \\Big( \n1\n-\\frac{7}{72 r}\n+\\frac{49}{10368 r^2}\n+\\frac{6425}{2239488 r^3}\n-\\frac{187103}{644972544 r^4}\n+O\\left(\\frac{1}{r^5}\\right) \\Big).\n\\end{align*}\nNote that the $Y$-discriminant of $\\mathcal{A}^-(x,Y)$ is\ngiven by \n$$\n\\mathrm{Disc}_Y \\mathcal{A}^-(x,Y)= -x (-4 + 27 x)\n$$\nand its root $x=\\frac{4}{27}$ matches the exponential growth rate \nof $b^-_r$, as asserted in Lemma \\ref{lem.2}.\n\nFinally, we discuss all BPS invariants $b_{r,i}$, not just the extremal ones, i.e. we turn on the\n$a$-deformation.
To this end it is useful to rescale the variable $x$ in \n\\eqref{calAa-41} by $c_-=-3$, so that\n\\begin{eqnarray}\n\\mathcal{A}(a^3 x,y,a) &=& (x-y^4+y^6) -2 a x y^2 +a^2 \\left(2 x^2 y^2-x^2+2 x y^8\\right) + \\\\\n&& -a^3 x y^{10} +a^4 \\left(x^3 y^4+x^2 y^{10}-2 x^2 y^8\\right) -a^5 x^3 y^6 \\nonumber\n\\end{eqnarray}\ncontains $\\mathcal{A}^-(x,y)$ at its lowest order in $a$. Then, from (\\ref{xdyy}) and \\eqref{b-ri} we can determine the invariants $b_{r,i}$ for this curve; we list some of them in table \\ref{aBPS-41-tab}, whose first column of course agrees with the extremal BPS invariants $b^-_r$\ngiven by \\eqref{br-41}.\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|ccccccccc|} \\hline\nr \\setminus i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\\ \\hline\n1 & -1 & 1 & 1 & -2 & 1 & 0 & 0 & 0 & 0 \\\\\n2 & -1 & 2 & 1 & -4 & 3 & -6 & 11 & -8 & 2 \\\\\n3 & -3 & 7 & 2 & -18 & 21 & -23 & 34 & -48 & 82 \\\\\n4 & -10 & 30 & -2 & -88 & 134 & -122 & 150 & -234 & 384 \\\\\n5 & -40 & 143 & -55 & -451 & 889 & -797 & 664 & -978 & 1716 \\\\\n6 & -171 & 728 & -525 & -2346 & 5944 & -5822 & 3134 & -2862 & 6196 \\\\\n7 & -791 & 3876 & -4080 & -12172 & 39751 & -44657 & 17210 & 4958 & 4071 \\\\\n8 & -3828 & 21318 & -29562 & -62016 & 264684 & -347256 & 121276 & 191744 & -263282 \\\\\n9 & -19287 & 120175 & -206701 & -303910 & 1751401 & -2692471 & 1053774 & 2299115 & -4105859 \\\\\n10 & -100140 & 690690 & -1418417 & -1381380 & 11503987 & -20672858 & 10012945 & 21567000 & -46462399 \\\\ \\hline\n\\end{array}\n\\nonumber\n\\end{equation}\n\\caption{BPS invariants $b_{r,i}$ for the $4_1$ knot.} \n\\label{aBPS-41-tab}\n\\end{small}\n\\end{table}\n\n\n\n\\subsection{$5_2$ knot and Catalan numbers}\n\nWe can analyze the $K_2=5_2$ knot, see figure \\ref{fig-52}, in the same way as the figure-8 knot.
Starting from the super-A-polynomial derived in \\cite{FGSS} and performing redefinitions discussed above, we get the following $a$-deformed algebraic curve (dual A-polynomial)\n\\begin{eqnarray}\n\\mathcal{A}(x,y,a) &=& a^{10} x^2 y^{16}-2 a^9 x^3 y^{16}+a^8 x^2 y^{12} \\left(x^2 y^4-3 y^2-4\\right)+a^7 x y^{12} \\left(x^2 \\left(4 y^2+3\\right)-1\\right)+ \\nonumber \\\\\n&& + a^6 x^2 y^8 \\left(-x^2 y^6+5 y^4+3 y^2+6\\right)+a^5 x y^{10} \\left(2-x^2 \\left(3 y^2+2\\right)\\right) + \\label{calAa-52} \\\\\n& & -a^4 x^2 \\left(3 y^6+4 y^4-3 y^2+4\\right) y^4-a^3 x \\left(y^2-1\\right) y^4 \\left(x^2 \\left(y^2-1\\right)+y^2+3\\right)+\\nonumber \\\\\n& & + a^2 x^2 \\left(-3 y^6+5 y^4-3 y^2+1\\right)+a x \\left(-3 y^4+4 y^2-2\\right)-y^2+1. \\nonumber\n\\end{eqnarray}\nThe unnormalized HOMFLY polynomial is given by\n\\begin{equation}\nP_{1}(a,q) = a \\frac{\\left(q^4-q^2+1\\right)}{q \\left(1-q^2\\right) } + a^5\\frac{ \\left(q^4+1\\right)}{q \\left(q^2-1\\right)}+a^7\\frac{q}{1-q^2},\n\\end{equation}\nso that \n\\begin{equation}\nc^- =1,\\qquad\\quad c^+=7.\n\\end{equation}\nIt follows that the extremal A-polynomials of \\eqref{Amin} are given by\n\\begin{equation}\n\\mathcal{A}^-(x,y) =\n1 - y^2 - x y^4 ,\\qquad \\quad \\mathcal{A}^+(x,y) = 1 - y^2 - x y^{12}. \\label{Aminmax-52}\n\\end{equation}\n\nThe two fortunate coincidences of the $4_1$ knot persist for the $5_2$ knot\nas well. It is again convenient to use a rescaled variable $Y=y^2$. \nIn particular note that the curve $\\mathcal{A}^-(x,y)$, \npresented as $1-y^2-xy^4 = 1-Y-x Y^2$, is the curve that encodes \nthe Catalan numbers. The latter are the coefficients in the series expansion\n\\begin{equation}\nY(x)=Y^-(x) = \\frac{-1+\\sqrt{1+4x}}{2x} = \\sum_{n=0}^{\\infty} \\frac{1}{n+1}{2n \\choose n} (-x)^n. 
\\label{Ymin-52}\n\\end{equation}\nTherefore we have found a new role for the Catalan numbers -- they encode BPS numbers for the $5_2$ knot (and, as we will see, also for the other twist knots $K_p$ with $p>1$).\n\nNow we get\n\\begin{equation}\nx\\frac{\\partial_x Y^-(x)}{Y^-(x)} = -\\frac{1}{2}+\\frac{1}{2}\\frac{1}{\\sqrt{1+4x}} = \\sum_{n=1}^{\\infty} {2n-1 \\choose n-1} (-x)^n = -\\frac{1}{2} + \\frac{1}{2}\\, _1 F_0\\left(\\frac{1}{2};\\ -\\frac{2^{2}x}{1^{1}}\\right),\n\\end{equation}\nso that\n\\begin{equation}\nb^-_r = \\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) (-1)^{d} {2d-1 \\choose d-1}. \\label{br-52min}\n\\end{equation}\nSeveral values of $b^-_r$ are given in table \\ref{br-52-tab}.\n\n\\begin{table}\n\\begin{equation}\n\\begin{array}{|c|c|c|c|c|}\n\\hline \nr & b^-_r & b^+_r & 6\\frac{b^-_r}{r} & 2\\frac{b^+_r}{r} \\nonumber \\\\\n\\hline \n1 & -1 & -1 & -6 & -2 \\\\\n2 & 1 & 3& 3 & 3 \\\\\n3 & -1 & -15 & -2 & -10 \\\\\n4 & 2 & 110 & 3 & 55 \\\\\n5 & -5 & -950 & -6 & -380 \\\\\n6 & 13 & 9021 & 13 & 3007 \\\\\n7 & -35 & -91\\, 763 & -30 & -26\\, 218\\\\\n8 & 100 & 982\\, 652 & 75 & 245\\, 663 \\\\\n9 & -300 & -10\\, 942\\, 254 & -200 & -2\\, 431\\, 612\\\\\n10 & 925 & 125\\, 656\\, 950 & 555 &25\\, 131\\, 390\\\\\n11 & -2915 & -1\\, 479\\, 452\\, 887 & -1590 & -268\\, 991\\, 434\\\\\n12 & 9386 & 17\\, 781\\, 576\\, 786 & 4693 &2\\, 963\\, 596\\, 131\\\\\n13 & -30\\, 771& -217\\, 451\\, 355\\, 316 & -14\\, 202 & -33\\, 454\\, 054\\, 664\\\\\n14 & 102\\, 347 & 2\\, 698\\, 753\\, 797\\, 201& 43\\, 863 & 385\\, 536\\, 256\\, 743\\\\\n\\ 15 \\ &\\ -344\\, 705 \\ &\\ -33\\, 922\\, 721\\, 455\\, 050\\ &\\ -137\\, 882 \\ &\\ -4\\, 523\\, 029\\, 527\\,340\\ \\\\\n\\hline \n\\end{array}\n\\end{equation} \n\\caption{Extremal BPS invariants and their Improved Integrality for the $5_2$ knot.} \\label{br-52-tab}\n\\end{table}\n\n\nIn an analogous way, for $\\mathcal{A}^+(x,Y) = 1 - Y - x Y^{6}$ we obtain the more involved solution\n\\begin{equation}\nY^+(x) = 1 + 
\\sum_{n=1}^{\\infty} \\frac{1}{n} {6n \\choose n-1} (-x)^n = \n_{5}F_{4}\\left(\\frac{1}{6},\\frac{2}{6},\\frac{3}{6},\\frac{4}{6},\\frac{5}{6};\\ \\frac{6}{5},\\frac{4}{5},\\frac{3}{5},\\frac{2}{5};\\ -\\frac{6^{6}x}{5^{5}}\\right), \\label{Ymax-52}\n\\end{equation}\nso that\n\\begin{equation}\nx\\frac{\\partial_x Y^+(x)}{Y^+(x)} = \\sum_{n=1}^{\\infty} {6n-1 \\choose n-1} (-x)^n = -\\frac{1}{6}+\\frac{1}{6} \\, _{5}F_{4}\\left(\\frac{1}{6},\\frac{2}{6},\\frac{3}{6},\\frac{4}{6},\\frac{5}{6};\\ \\frac{4}{5},\\frac{3}{5},\\frac{2}{5},\\frac{1}{5};\\ -\\frac{6^{6}x}{5^{5}}\\right),\n\\end{equation}\nand in consequence\n\\begin{equation}\nb_r^+ = \\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) (-1)^{d} {6d-1 \\choose d-1}. \\label{br-52max}\n\\end{equation}\nSeveral values of $b^+_r$ are also given in table \\ref{br-52-tab}.\n\n\nOur discussion illustrates Proposition \\ref{prop.twist} for the $5_2=K_2$\ntwist knot. Experimentally -- see table \\ref{br-52-tab} -- it appears that the Improved Integrality \\eqref{eq.improved}\nholds with\n\\begin{equation}\n\\gamma^- = 6,\\qquad\\quad \\gamma^+=2.\n\\end{equation}\n\nNext, we consider the asymptotics of the extremal BPS numbers. Using Lemma \\ref{lem.1} and Stirling's formula for the binomials in \\eqref{br-52min} and \\eqref{br-52max} we find, respectively,\n\\begin{wrapfigure}{l}{0.4\\textwidth}\n\\begin{center}\n\\includegraphics[scale=0.3]{draws\/5_2.pdf}\n\\end{center}\n\\caption{The $5_2$ knot.}\n\\label{fig-52}\n\\end{wrapfigure}\n\\begin{equation}\n\\lim_{r\\to\\infty} \\frac{b^-_r}{b^-_{r+1}} = -\\frac{1}{4},\\qquad\\quad \\lim_{r\\to\\infty} \\frac{b^+_r}{b^+_{r+1}} = -\\frac{5^5}{6^6}. 
\\label{br-asympt-52}\n\\end{equation}\nThis matches Lemma \\ref{lem.2}, since the $Y$-discriminants of \n$\\mathcal{A}^\\pm(x,Y)$ are given by \n\\begin{eqnarray}\n\\mathrm{Disc}_Y \\mathcal{A}^-(x,Y) &=& 1 + 4 x,\\nonumber \\\\\n\\mathrm{Disc}_Y \\mathcal{A}^+(x,Y) &=& x^4(5^5 + 6^6x).\\nonumber\n\\end{eqnarray}\n\n\nFinally, we also consider the $a$-deformation of the extremal curves.\nRescaling $x\\mapsto a^{-c_-} x$ in \\eqref{calAa-52} with $c_-=1$ we get a curve \nthat contains $\\mathcal{A}^-(x,y)$ at its lowest order in $a$. Then, from \n\\eqref{xdyy} and \\eqref{b-ri}, we find the integer invariants $b_{r,i}$ given in \ntable \\ref{aBPS-52-tab}. The first column of table \\ref{aBPS-52-tab} agrees \nwith the values of \\eqref{br-52min}.\n\n\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|ccccccccc|} \\hline\nr \\setminus i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\\ \\hline\n1 & -1 & 0 & 2 & -1 & 0 & 0 & 0 & 0 & 0 \\\\\n2 & 1 & -2 & 1 & -4 & 11 & -10 & 3 & 0 & 0 \\\\\n3 & -1 & 4 & 0 & -12 & 23 & -71 & 154 & -162 & 80 \\\\\n4 & 2 & -12 & 14 & 8 & 40 & -226 & 594 & -1542 & 2944 \\\\\n5 & -5 & 36 & -66 & -30 & 132 & -184 & 1550 & -6108 & 16525 \\\\\n6 & 13 & -114 & 302 & -94 & -419 & -660 & 3387 & -12042 & 56209 \\\\\n7 & -35 & 372 & -1296 & 1168 & 1843 & -1400 & -1372 & -27398 & 135350 \\\\\n8 & 100 & -1244 & 5382 & -8014 & -4222 & 12390 & 23462 & -42626 & 163928 \\\\\n9 & -300 & 4240 & -21932 & 45383 & -6044 & -79628 & -31246 & 116368 & 589812 \\\\\n10 & 925 & -14676 & 88457 & -234124 & 160251 & 359514 & -192014 & -1022096 & -86854 \\\\ \\hline\n\\end{array}\n\\nonumber\n\\end{equation}\n\\caption{BPS invariants $b_{r,i}$ for the $5_2$ knot.} \n\\label{aBPS-52-tab}\n\\end{small}\n\\end{table}\n\n\n\n\\subsection{Twist knots}\n\nThe $4_1$ and $5_2$ knots are special cases, corresponding respectively to $p=-1$ and $p=2$, of a series of twist knots $K_p$ labeled by an integer $p$. 
Apart from the special cases $p=0$, which is the unknot, and $p=1$, which is the\ntrefoil knot $3_1$, all twist knots are hyperbolic. In this section we analyze their BPS invariants. The two coincidences of the $4_1$\nknot persist for all twist knots. The formulas for $p>1$ are somewhat\ndifferent from those for $p<0$, so we analyze the two cases separately. \n\nWe start with $p<0$. In this case the bottom A-polynomial turns out to be the same for all $p$, while the top A-polynomial depends on $p$, \n\\begin{equation}\n\\boxed{ \\mathcal{A}^-_{K_p}(x,y)=x-y^4+y^6,\\qquad\\quad \\mathcal{A}^+_{K_p}(x,y) = 1-y^2+x y^{4|p|+2} }\n\\end{equation}\nFor $p=-1$ these curves of course reduce to those for the $4_1$ knot (\\ref{Aminmax-41}). For all $p<0$ the bottom BPS invariants are the same as for the $4_1$ knot \\eqref{br-41}\n\\begin{equation}\nb^-_{K_p,r} = -\\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) {3d-1 \\choose d-1}, \\qquad p <0. \\label{br-min-Kp<0}\n\\end{equation}\nTo get the top invariants it is convenient to introduce $Y=y^2$ and consider the equation $\\mathcal{A}^+_{K_p}(x,Y) = 1-Y+x Y^{2|p|+1} = 0$, whose solution of interest reads\n\\begin{eqnarray}\nY(x) &=& \\sum_{n=0}^{\\infty} \\frac{1}{2|p|n+1} {(2|p|+1)n \\choose n} x^n = \\\\\n&=& _{2|p|}F_{2|p|-1}\\left(\\frac{1}{2|p|+1},...,\\frac{2|p|}{2|p|+1};\\frac{2|p|+1}{2|p|},\\frac{2|p|-1}{2|p|},\\frac{2|p|-2}{2|p|},...,\\frac{2}{2|p|};\\ \\frac{\\left(2|p|+1\\right)^{2|p|+1}x}{\\left(2|p|\\right)^{2|p|}}\\right) \\nonumber\n\\end{eqnarray}\nwhere $_{2|p|}F_{2|p|-1}$ is the generalized hypergeometric function. 
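As a quick numerical cross-check (ours, and not needed for the derivation), one can verify that the truncated Fuss--Catalan series above indeed satisfies $1-Y+xY^{2|p|+1}=0$ at sample points inside the radius of convergence; a minimal sketch, assuming Python (truncation order and sample points are our own choices):

```python
from math import comb

def Y_top(x, p_abs, N=120):
    # Truncated Fuss-Catalan series: sum_n C((2|p|+1)n, n) x^n / (2|p|n + 1),
    # the claimed series solution of 1 - Y + x Y^(2|p|+1) = 0.
    k = 2 * p_abs + 1
    return sum(comb(k * n, n) / ((k - 1) * n + 1) * x**n for n in range(N))

for p_abs, xv in [(1, 0.05), (2, 0.02)]:   # p = -1 and p = -2, small x
    Yv = Y_top(xv, p_abs)
    assert abs(1 - Yv + xv * Yv**(2 * p_abs + 1)) < 1e-10
```

The same check applies to the other extremal curves in this section, with the appropriate exponent.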
Then\n\\begin{eqnarray}\nx\\frac{Y'(x)}{Y(x)} &=& \\sum_{n=1}^{\\infty} {(2|p|+1)n - 1 \\choose n-1} x^n = \\\\\n&=& \\frac{1}{2|p|+1}\\left( _{2|p|}F_{2|p|-1}\\left(\\frac{1}{2|p|+1},...,\\frac{2|p|}{2|p|+1};\\ \\frac{2|p|-1}{2|p|},...,\\frac{1}{2|p|};\\ \\frac{(2|p|+1)^{2|p|+1} x}{\\left(2|p|\\right)^{2|p|}}\\right) -1\\right)\\nonumber\n\\end{eqnarray}\nFrom this expression we find the top BPS invariants\n\\begin{equation}\nb^+_{K_p,r}= \\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) {(2|p|+1)d - 1 \\choose d-1}, \\qquad p<0. \\label{br-max-Kp<0}\n\\end{equation}\n\nNext, we consider the case $p>1$. The bottom A-polynomial is again the same for all $p$, while the top A-polynomial depends on $p$, \n\\begin{equation}\n\\boxed{ \\mathcal{A}^-_{K_p}(x,y)=1-y^2- x y^4,\\qquad\\quad \\mathcal{A}^+_{K_p}(x,y) = 1-y^2-x y^{4p+4} }\n\\end{equation}\nFor $p=2$ these curves reduce to the results for the $5_2$ knot (\\ref{Aminmax-52}). For all $p>1$ the bottom BPS invariants are the same as for the $5_2$ knot (\\ref{br-52min})\n\\begin{equation}\nb^-_{K_p,r} = \\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) (-1)^{d} {2d-1 \\choose d-1}, \\qquad p>1 \\label{br-min-Kp>0}\n\\end{equation}\nwhich means that the Catalan numbers encode these BPS invariants for all twist knots $K_{p>1}$.\n\nTo get the top invariants we again introduce $Y=y^2$ and consider the equation $\\mathcal{A}^+_{K_p}(x,Y) = 1-Y-x Y^{2p+2} = 0$, whose solution of interest reads\n\\begin{eqnarray}\nY(x) &=& 1 + \\sum_{n=1}^{\\infty} \\frac{1}{n} {(2p+2)n \\choose n-1} (-x)^n = \\\\\n&=& _{2p+1}F_{2p}\\left(\\frac{1}{2p+2},...,\\frac{2p+1}{2p+2};\\frac{2p+2}{2p+1},\\frac{2p}{2p+1},\\frac{2p-1}{2p+1},...,\\frac{2}{2p+1};-\\frac{\\left(2p+2\\right)^{2p+2}x}{\\left(2p+1\\right)^{2p+1}}\\right) \\nonumber\n\\end{eqnarray}\nThen\n\\begin{eqnarray}\nx\\frac{Y'(x)}{Y(x)} &=& \\sum_{n=1}^{\\infty} {(2p+2)n - 1 \\choose n-1} (-x)^n = \\\\\n&=& \\frac{1}{2p+2}\\left(
_{2p+1}F_{2p}\\left(\\frac{1}{2p+2},...,\\frac{2p+1}{2p+2};\\frac{2p}{2p+1},...,\\frac{1}{2p+1};-\\frac{\\left(2p+2\\right)^{2p+2}x}{\\left(2p+1\\right)^{2p+1}}\\right) -1\\right) \\nonumber\n\\end{eqnarray}\nFrom this expression we find the top BPS invariants\n\\begin{equation}\nb^+_{K_p,r} = \\frac{1}{r^2}\\sum_{d|r} \\mu\\big(\\frac{r}{d}\\big) (-1)^d {(2p+2)d - 1 \\choose d-1}, \\qquad p>1. \n\\label{br-max-Kp>0}\n\\end{equation}\n\nThe Improved Integrality holds for the twist knots $K_p$, with the values of $\\gamma^\\pm$ given in table \\ref{c-minmax-aug}. Note that these values repeat periodically, with period 6, for both positive and negative $p$.\n\nFinally, turning on the $a$-deformation also leads to integer invariants $b_{r,i}$; as an example, see the results for the $6_1= K_{-2}$ knot in table \\ref{aBPS-61-tab}.\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|ccccccccc|} \\hline\nr \\setminus i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\\\ \\hline\n1 & -1 & 1 & 1 & -2 & 1 & 0 & 0 & 0 & 0 \\\\\n2 & -1 & 2 & 1 & -4 & 3 & -6 & 11 & -8 & 2 \\\\\n3 & -3 & 7 & 2 & -18 & 21 & -23 & 34 & -48 & 82 \\\\\n4 & -10 & 30 & -2 & -88 & 134 & -122 & 150 & -234 & 384 \\\\\n5 & -40 & 143 & -55 & -451 & 889 & -797 & 664 & -978 & 1716 \\\\\n6 & -171 & 728 & -525 & -2346 & 5944 & -5822 & 3134 & -2862 & 6196 \\\\\n7 & -791 & 3876 & -4080 & -12172 & 39751 & -44657 & 17210 & 4958 & 4071 \\\\\n8 & -3828 & 21318 & -29562 & -62016 & 264684 & -347256 & 121276 & 191744 & -263282 \\\\\n9 & -19287 & 120175 & -206701 & -303910 & 1751401 & -2692471 & 1053774 & 2299115 & -4105859 \\\\\n10 & -100140 & 690690 & -1418417 & -1381380 & 11503987 & -20672858 & 10012945 & 21567000 & -46462399 \\\\ \\hline\n\\end{array}\n\\nonumber\n\\end{equation}\n\\caption{BPS invariants $b_{r,i}$ for the $6_1$ knot.} \n\\label{aBPS-61-tab}\n\\end{small}\n\\end{table}\n\n\n\n\\subsection{Torus knots}\n\nWe can analyze BPS degeneracies for torus knots in the same way as we did for twist knots. 
Let us focus on the series of $(2,2p+1)\\equiv (2p+1)_1$ knots and present several examples. For the trefoil $3_1$ (see figure \\ref{fig-31}), the ($a$-deformed) dual A-polynomial is given by\n\\begin{equation}\n\\mathcal{A}(x,y,a) = -1 + y^2 + a^2 x^2 y^6 + a^5 x y^8 + \n a x(1 - y^2 + 2 y^4) - a^3 x y^4(2 + y^2) - a^4 x^2 y^8.\n\\end{equation}\nFrom the form of the HOMFLY polynomial we find $c_-=1$, so after an appropriate rescaling of the above result we get\n\\begin{equation}\n\\mathcal{A}(a^{-1}x,y,a) = (-1 + x + y^2 - x y^2 + 2 x y^4 + x^2 y^6) - a^2 x y^4 (2 + y^2 + x y^4) +a^4 x y^8 .\n\\end{equation}\nThe lowest term in $a$ in this expression represents the extremal A-polynomial\n\\begin{equation}\n\\mathcal{A}^-(x,y) = -1 + x + y^2 - x y^2 + 2 x y^4 + x^2 y^6.\n\\end{equation}\nThe corresponding BPS invariants are given in table \\ref{aBPS-31-tab}.\n\nFor the $5_1$ knot we find that $c_-=3$ and the rescaled dual A-polynomial takes the form\n\\begin{eqnarray}\n\\mathcal{A}(a^{-3}x,y,a) &=& -1 + x + y^2 + 2 x^2 y^{10} + x^3 y^{20} + \\\\\n&& + x y^2 (-1 + 2 y^2) + x y^6 (-2 + 3 y^2) + x^2 y^{12} (-1 + 3 y^2) \\nonumber \\\\\n&& + a^2 \\left(-x^3 y^{22}-2 x^2 y^{16}-x^2 \\left(4 y^2+1\\right) y^{12}-x y^{10}-2 x y^4+2 x \\left(1-2 y^2\\right) y^6\\right) + \\nonumber\\\\\n&& + a^4 \\left(x^2 y^{14}+2 x^2 \\left(y^2+1\\right) y^{16}+x y^8+x \\left(2 y^2-1\\right) y^{10}\\right) +\\nonumber \\\\\n&& + a^6 (-2 x^2 y^{18} - x^2 y^{20}) + a^8 x^2 y^{22} . \\nonumber\n\\end{eqnarray}\n\\begin{wrapfigure}{l}{0.4\\textwidth}\n\\begin{center}\n\\includegraphics[scale=0.3]{draws\/3_1.pdf}\n\\end{center}\n\\caption{The $3_1$ knot.}\n\\label{fig-31}\n\\end{wrapfigure}\nThe terms in the first two lines above constitute the extremal A-polynomial $\\mathcal{A}^-(x,y)$. 
Several corresponding BPS invariants are given in table \\ref{aBPS-51-tab}.\n\nFor the $7_1$ knot the $a$-deformed A-polynomial takes the form of a rather lengthy expression, so we list just some BPS invariants -- with the $a$-deformation taken into account -- in table \\ref{aBPS-71-tab}.\n\nSimilarly, we can determine the top A-polynomials. We present the list of extremal A-polynomials for several torus knots in table \\ref{tab-A-torus}. Note that for various knots these polynomials share the same lower-degree terms, while at higher degrees additional terms appear for more complicated knots. It would be interesting to understand this pattern and its manifestation at the level of BPS numbers.\n\nFurthermore, we observe that the Improved Integrality also holds for this family of torus knots, with the values of $\\gamma^\\pm$ given in table \\ref{c-minmax-aug}. Note that the values of $\\gamma^\\pm$ grow linearly with $p$.\n\n\n\n\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|cccccccc|} \\hline\nr \\setminus i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \\hline\n1 & -2 & 3 & -1 & 0 & 0 & 0 & 0 & 0 \\\\\n2 & 2 & -8 & 12 & -8 & 2 & 0 & 0 & 0 \\\\\n3 & -3 & 27 & -84 & 126 & -99 & 39 & -6 & 0 \\\\\n4 & 8 & -102 & 488 & -1214 & 1764 & -1554 & 816 & -234 \\\\\n5 & -26 & 413 & -2682 & 9559 & -20969 & 29871 & -28203 & 17537 \\\\\n6 & 90 & -1752 & 14484 & -67788 & 201810 & -405888 & 569322 & -564192 \\\\\n7 & -329 & 7686 & -77473 & 451308 & -1711497 & 4499696 & -8504476 & 11792571 \\\\\n8 & 1272 & -34584 & 411948 & -2882152 & 13350352 & -43658370 & 104759240 & -188904738 \\\\\n9 & -5130 & 158730 & -2183805 & 17877558 & -98157150 & 385713186 & -1128850632 & 2524827921 \\\\\n10 & 21312 & -740220 & 11560150 & -108550256 & 690760044 & -3179915704 & 11028120884 & -29597042376 \\\\ \\hline\n\\end{array}\n\\nonumber\n\\end{equation}\n\\caption{BPS invariants $b_{r,i}$ for the $3_1$ knot.} 
\n\\label{aBPS-31-tab}\n\\end{small}\n\\end{table}\n\n\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|cccc|} \\hline\nr \\setminus i & 0 & 1 & 2 & 3 \\\\ \\hline\n1 & -3 & 5 & -2 & 0 \\\\\n2 & 10 & -40 & 60 & -40 \\\\\n3 & -66 & 451 & -1235 & 1750 \\\\\n4 & 628 & -5890 & 23440 & -51978 \\\\\n5 & -7040 & 83725 & -438045 & 1330465 \\\\\n6 & 87066 & -1257460 & 8165806 & -31571080 \\\\\n7 & -1154696 & 19630040 & -152346325 & 716238720 \\\\\n8 & 16124704 & -315349528 & 2847909900 & -15779484560 \\\\\n9 & -234198693 & 5179144365 & -53361940365 & 340607862518 \\\\\n10 & 3508592570 & -86566211200 & 1002184712130 & -7243117544640 \\\\ \\hline\n\\end{array}\n\\nonumber\n\\end{equation}\n\\caption{BPS invariants $b_{r,i}$ for the $5_1$ knot.} \n\\label{aBPS-51-tab}\n\\end{small}\n\\end{table}\n\n\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|cccccccc|} \\hline\nr \\setminus i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\\\ \\hline\n1 & -4 & 7 & -3 & 0 & 0 & 0 & 0 & 0 \\\\\n2 & 28 & -112 & 168 & -112 & 28 & 0 & 0 & 0 \\\\\n3 & -406 & 2618 & -6916 & 9604 & -7406 & 3010 & -504 & 0 \\\\\n4 & 8168 & -71588 & 270928 & -579124 & 765576 & -641452 & 332864 & -97852 \\\\\n5 & -193170 & 2139333 & -10554173 & 30562838 & -57563814 & 73721676 & -65048368 & 39063778 \\\\ \\hline\n\\end{array}\n\\nonumber\n\\end{equation}\n\\caption{BPS invariants $b_{r,i}$ for the $7_1$ knot.} \n\\label{aBPS-71-tab}\n\\end{small}\n\\end{table}\n\n\n\n\\subsection{BPS invariants from augmentation polynomials: $6_2$, $6_3$, $7_3$, $7_5$, $8_{19}$, $8_{20}$, $8_{21}$, $10_{124}$, $10_{132}$, and $10_{139}$ knots} \\label{ssec-aug}\n\nUsing the methods presented above we can easily compute BPS invariants for knots with known augmentation polynomials. 
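The pipeline used throughout (series-solve the curve for the branch $Y(x)=1+O(x)$, expand $x\\,\\partial_x Y\/Y$, then apply a Moebius-type sum such as \\eqref{br-41}) is easy to sketch. The following snippet is ours and purely illustrative; it reproduces the first entries of table \\ref{br-41-tab} from the $4_1$ bottom curve $x-Y^2+Y^3=0$, assuming Python with SymPy:

```python
import sympy as sp
from math import comb

x = sp.symbols('x')
N = 8  # truncation order of all power series below

# Step 1: solve x - Y^2 + Y^3 = 0 for the branch Y(x) = 1 + O(x),
# by Newton iteration on truncated power series.
Y = sp.Integer(1)
for _ in range(5):                       # 5 iterations: accurate far beyond O(x^8)
    F = x - Y**2 + Y**3
    dF = -2*Y + 3*Y**2
    Y = sp.expand(sp.series(Y - F/dF, x, 0, N).removeO())

# Step 2: coefficients of x Y'(x)/Y(x); they equal -C(3n-1, n-1)
a_n = sp.Poly(sp.series(x*sp.diff(Y, x)/Y, x, 0, N).removeO(), x).all_coeffs()[::-1]
assert a_n[1:4] == [-1, -5, -28]

# Step 3: Moebius-type sum (br-41), including the divisibility by r^2
def mobius(n):
    mu, p = 1, 2
    while p*p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0                 # n is not squarefree
            mu = -mu
        p += 1
    return -mu if n > 1 else mu

def b_minus(r):
    s = sum(mobius(r//d)*comb(3*d - 1, d - 1)
            for d in range(1, r + 1) if r % d == 0)
    assert s % (r*r) == 0                # the nontrivial integrality statement
    return -s // (r*r)

assert [b_minus(r) for r in range(1, 7)] == [-1, -1, -3, -10, -40, -171]
```

For a knot with a known augmentation polynomial one only swaps in the corresponding extremal curve in Step 1.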
Moreover, in many nontrivial cases we can confirm the conjecture that augmentation polynomials agree with Q-deformed polynomials (defined as the classical limit of recursion relations satisfied by colored HOMFLY polynomials) and with the $t=-1$ limit of super-A-polynomials. This conjecture has been explicitly verified for several torus and twist knots in \\cite{AVqdef,superA,FGSS}, where it was shown that an appropriate change of variables relates the two algebraic curves. For example, starting with the super-A-polynomial for the figure-8 knot (\\ref{Asuper41}), setting $t=-1$ and changing variables (note that this change is not simply (\\ref{MLnorm})) \\cite{superA}\n\\begin{equation}\nQ = a,\\qquad \\beta = M, \\qquad \\alpha = L\\frac{1 - \\beta Q}{Q(1-\\beta)},\n\\end{equation}\nwe obtain (up to an irrelevant simple factor) the Q-deformed polynomial in the same form as in \\cite{AVqdef}\n\\begin{eqnarray}\nA^{\\textrm{Q-def}}(\\alpha,\\beta,Q) & = & (\\beta^2 - Q \\beta^3) + (2 \\beta - 2 Q^2 \\beta^4 + Q^2 \\beta^5 - 1) \\alpha + \\nonumber \\\\\n& & + (1 - 2 Q \\beta + 2 Q^2 \\beta^4 - Q^3 \\beta^5) \\alpha^2 + Q^2 (\\beta-1) \\beta^2 \\alpha^3 \\,, \\nonumber\n\\end{eqnarray}\nwhere it was shown to match the augmentation polynomial.\n\n\\begin{figure}[ht]\n\\begin{center}\n\\includegraphics[scale=0.25]{draws\/6_2.pdf}$\\qquad$\n\\includegraphics[scale=0.25]{draws\/6_3.pdf}$\\qquad$\n\\includegraphics[scale=0.25]{draws\/7_3.pdf}$\\qquad$\n\\includegraphics[scale=0.25]{draws\/7_5.pdf}\n\\end{center}\n\\caption{The $6_2$, $6_3$, $7_3$ and $7_5$ knots.}\n\\label{fig-6-7-knots}\n\\end{figure}\n\nNow we show that this conjecture can be verified for many non-trivial knots, even when an explicit form of the Q-deformed polynomial or super-A-polynomial is not known. Let us consider the following knots: $6_2$, $6_3$, $7_3$, $7_5$, $8_{19}$, $8_{20}$, $8_{21}$, $10_{124}$, $10_{132}$, and $10_{139}$, for which augmentation polynomials are determined in \\cite{Ng,NgFramed}. 
Changing the variables to $M$ and $L$ relevant for A-polynomials, then to $x$ and $y$ relevant for our considerations, and performing the appropriate rescalings (\\ref{Amin}), we obtain the extremal A-polynomials. They are presented in table \\ref{Aminmax-aug}, and constitute one of our main results.\n\nFrom the extremal polynomials in table \\ref{Aminmax-aug} we can determine the extremal BPS invariants $b^{\\pm}_r$ for arbitrary $r$; some results are presented in tables in appendix \\ref{app-bps}. We also experimentally confirm the Improved Integrality -- the constants $\\gamma^\\pm$ for some knots are shown in table \\ref{c-minmax-aug}. However, the corresponding functions $Y^\\pm(x)$ unfortunately no longer satisfy the fortunate condition \\eqref{eq.fortunate}.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.3]{draws\/8_19.pdf}$\\qquad$\n\\includegraphics[scale=0.3]{draws\/8_20.pdf}$\\qquad$\n\\includegraphics[scale=0.3]{draws\/8_21.pdf}\n\\caption{The $8_{19}$, $8_{20}$ and $8_{21}$ knots.}\n\\label{fig-8-knots}\n\\end{figure}\n\nEven though the Q-deformed or super-A-polynomials are not known for the knots listed in table \\ref{Aminmax-aug}, colored HOMFLY polynomials for those knots have been explicitly determined for several values of $r$ in \\cite{Nawata:2013qpa,Wedrich:2014zua}. We can therefore determine the corresponding invariants $N_{r,i,j}$ using the LMOV formulas (\\ref{f-P}) -- we list some of these invariants in tables in appendix \\ref{app-lmov}. On the other hand, from the known augmentation polynomials we can compute some BPS invariants $b_{r,i}$ using our techniques. In all cases we find agreement between these two computations, as the reader can verify by comparing the tables in appendices \\ref{app-bps} and \\ref{app-lmov}. 
This is quite a nontrivial test of the (still conjectural) relation between augmentation polynomials and colored HOMFLY polynomials.\n\n\nFor example, consider the LMOV invariants $N_{r,i,j}$ of the $6_2$ knot for $r=1,2,3$, given in tables \\ref{62-N1ij}, \\ref{62-N2ij}, \\ref{62-N3ij}. We determined these invariants from the colored HOMFLY polynomials, computed up to $r=4$ in \\cite{Nawata:2013qpa}, by applying formulas (\\ref{f-P}). To obtain $b^-_r$ and $b^+_r$ we need to resum, respectively, the first and the last row in those tables (corresponding to the minimal and maximal power of $a$). For the minimal case (first rows in the tables) the resummation yields the numbers $-1,-2,-10$, and for the maximal case (last rows in the tables) we find the numbers $2,2,7$. These results indeed agree with the values of $b^\\pm_r$ for the $6_2$ knot given in table \\ref{6-7-br-aug}, which are determined from its augmentation polynomial (more precisely, from the corresponding extremal A-polynomials given in table \\ref{Aminmax-aug}). We verified such agreement for the other knots discussed in this section. \n\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[scale=0.3]{draws\/10_124.pdf}$\\qquad$\n\\includegraphics[scale=0.3]{draws\/10_132.pdf}$\\qquad$\n\\includegraphics[scale=0.3]{draws\/10_139.pdf}\n\\caption{The $10_{124}$, $10_{132}$ and $10_{139}$ knots.}\n\\label{fig-10-knots}\n\\end{figure}\n\n\n\n\n\n\n\n\\subsection{Refined BPS invariants from super-A-polynomials}\n\nBeyond the $a$-dependence we can consider a further deformation of A-polynomials, in the parameter $t$, which leads to super-A-polynomials. It is natural to ask whether super-A-polynomials encode refined BPS degeneracies $b_{r,i,j}$. Such degeneracies should be identified with generalized LMOV invariants, defined by relations (\\ref{Pr-LMOVref}), and could be determined from the knowledge of super-A-polynomials, using the relation (\\ref{xdyy-ref}). 
This conjecture would be confirmed if the $b_{r,i,j}$ turned out to be integers. In this section we extract such invariants from the known super-A-polynomials. To this end it is convenient to consider super-A-polynomials as a $T$-deformation and $a$-deformation of the bottom A-polynomials (which arise for $a=0$ and $T=1$), where $t=-T^2$. This results in slightly redefined degeneracies $\\tilde{b}_{r,i,j}$, which can be combined into the generating functions\n\\begin{equation}\n\\sum_{r,i,j} \\tilde{b}_{r,i,j} x^r a^i T^j.\n\\end{equation}\nWe present such generating functions in table \\ref{tab-refBPS}. Clearly all coefficients in these generating functions are integers, and therefore they capture putative refined BPS degeneracies. We plan to analyze these refined BPS invariants for knots in more detail in future work. \n\n\n\\begin{table}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{|c|l|}\n\\hline \n\\textrm{\\bf Knot} & \\qquad \\qquad\\qquad \\sum_{r,i,j} \\tilde{b}_{r,i,j} x^r a^i T^j \\nonumber \\\\\n\\hline \n\\hline\n{\\bf 4_1} & (-1 + a T - a T^2 + 2 a T^3 - 2 a^2 T^4 + a^2 T^5) x + (-1 + 2 a T + (-a - a^2) T^2 + (3 a + a^2) T^3 + \\\\\n & - 4 a^2 T^4 + (a^2 + a^3) T^5) x^2 + (-3 + 7 a T + (-3 a - 5 a^2) T^2 + (10 a + 5 a^2 + a^3) T^3 + \\\\\n & + (-21 a^2 - 2 a^3) T^4 + (6 a^2 + 13 a^3) T^5) x^3 + (-10 + 30 a T + (-12 a - 32 a^2) T^2 + \\\\\n & + (42 a + 28 a^2 + 14 a^3) T^3 + (-117 a^2 - 21 a^3 - 2 a^4) T^4 + (35 a^2 + 114 a^3 + 5 a^4) T^5) x^4 +\\\\\n & + (-40 + 143 a T + (-55 a - 198 a^2) T^2 + (198 a + 165 a^2 + 132 a^3) T^3 + \\\\\n & + (-690 a^2 - 180 a^3 - 42 a^4) T^4 + (210 a^2 + 912 a^3 + 84 a^4 + 5 a^5) T^5) x^5 +\\dots \\\\\n\\hline\n{\\bf 6_1} & (-1 + a T - 2 a T^2) x + (-1 + 2 a T + (-3 a - a^2) T^2) x^2 + (-3 + 7 a T + (-10 a - 5 a^2) T^2) x^3 +\\\\ \n & + (-10 + 30 a T + (-42 a - 32 a^2) T^2) x^4 + (-40 + 143 a T + (-198 a - 198 a^2) T^2) x^5 + \\dots \\\\\n\\hline\n{\\bf 5_2} & (-1 + T + (-1 - a) T^2 + 2 a T^3 - 2 a T^4 + (a + 2 a^2) T^5) x + 
(T^2 + (-1 - 2 a) T^3 + \\\\\n & + (1 + 5 a + a^2) T^4 + (-8 a - 4 a^2) T^5) x^2 + (T^3 + (-3 - 4 a) T^4 + (2 + 16 a + 4 a^2) T^5) x^3 + \\\\\n & + (2 T^4 + (-6 - 10 a) T^5) x^4 + 4 T^5 x^5 + \\dots \\\\\n\\hline\n{\\bf 3_1} & (-1 - T^2 + 2 a T^3 + a T^5) x + (T^2 - a T^3 + T^4 - 5 a T^5) x^2 +\\\\\n & + (-2 T^4 + 7 a T^5) x^3 + (T^4 - 3 a T^5) x^4 + \\dots \\\\\n\\hline\n{\\bf 5_1} & (-1 - T^2 + 2 a T^3 - T^4) x + (T^2 - a T^3 + 4 T^4) x^2 - 5 T^4 x^3 + 2 T^4 x^4 +\\dots \\\\\n\\hline \n\\end{array}\n\\end{equation} \n\\end{small}\n\\caption{Generating functions of refined degeneracies $\\tilde{b}_{r,i,j}$ for several knots. Integrality of the coefficients confirms the refined version of the LMOV conjecture.} \\label{tab-refBPS}\n\\end{table}\n\n\n\n\n\\section{Conclusions and discussion} \\label{sec-conclude}\n\nThe results of this work deserve further study, which we plan to undertake. On the one hand, they should inspire mathematical research. We have formulated and tested various conjectures, in particular the divisibility by $r^2$ following from the conjectured LMOV integrality, and the Improved Integrality of extremal BPS degeneracies. These statements should hold for all knots and proving them is an important task (even proofs of divisibility by $r^2$ in various specific cases, in particular (\\ref{br-neg-intro}) and (\\ref{br-pos-intro}), are challenging); note that proofs of integrality of Gopakumar-Vafa invariants in certain cases were given in \\cite{Kontsevich:2006an,Vologodsky:2007ef,Schwarz:2008ti}, and presumably these techniques could be generalized to the case of knots. We also associated new integer invariants $\\gamma^{\\pm}$, related to the Improved Integrality, to all knots -- it is important to understand their mathematical meaning more deeply and, possibly, their relation to other characteristics of knots. Furthermore, for some knots we observed that the solutions of extremal A-polynomial equations are given by hypergeometric functions. 
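For instance, the closed-form series \\eqref{Y-41} for the $4_1$ knot can be checked numerically to solve its curve to machine precision; a small sketch (ours; the truncation order and sample point are arbitrary choices), assuming Python:

```python
from math import comb

def Y_minus_41(x, N=60):
    # truncated series (Y-41): Y^-(x) = (1/3) [1 - sum_{n>=0} 2 C(3n,n) x^n / (3n-1)]
    return (1 - sum(2 * comb(3*n, n) / (3*n - 1) * x**n for n in range(N))) / 3

xv = 0.01                    # well inside the radius of convergence 4/27
Yv = Y_minus_41(xv)
assert abs(xv - Yv**2 + Yv**3) < 1e-12   # satisfies the curve x - Y^2 + Y^3 = 0
```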
It is important to understand for which knots such algebraic hypergeometric functions arise and whether they have any further meaning; this could also lead to a new method for determining many other algebraic hypergeometric functions (associated to various knots). \n\nIt would also be interesting to understand our results from the perspective of knot homologies. Currently the most powerful method to determine -- at least conjecturally -- colored HOMFLY homologies is the formalism of (colored) differentials \\cite{DGR,GS}. These differentials reveal intricate structure not only of colored homologies, but also of ordinary HOMFLY invariants. Therefore they should capture some essential information about the unrefined and refined BPS degeneracies that we consider. Note that the ``bottom row'' structure of HOMFLY homologies (i.e. corresponding to the minimal power of $a$) was analyzed e.g. in \\cite{Gorsky:2013jxa}; it would be interesting to relate it to the BPS degeneracies determined here.\n\nOur results also raise further interesting questions on the physics side. First, in the introduction we already mentioned their intimate connection to 3d-3d duality, which relates knot invariants to 3-dimensional $\\mathcal{N}=2$ theories \\cite{DGG,superA,Chung:2014qpa}. Various objects in our analysis, such as colored knot polynomials, super-A-polynomials, etc., play an important role in this duality. Therefore all the new objects and statements that we consider should also find their interpretation on the $\\mathcal{N}=2$ side of this duality. \n\nSecond, results such as Improved Integrality or the formulation of refined BPS invariants for knots generalize the statements of the original LMOV conjectures \\cite{OoguriV,Labastida:2000zp,Labastida:2000yw}. It is desirable to understand in more detail the M-theory interpretation of these results. 
In particular, we obtain the BPS degeneracies in a way analogous to what has been done for D-branes in \\cite{AV-discs,AKV-framing}. Furthermore, it has been conjectured in \\cite{OoguriV,AVqdef} that all knots should be mirror to Lagrangian branes in the conifold geometry, and for some knots such Lagrangian branes have been constructed \\cite{Diaconescu:2011xr,Jockers:2012pz}. It would be amusing to construct such Lagrangian branes for the other knots that we consider, and compare the degeneracies they encode with our computations.\n\nThird, our results concern primarily the classical algebraic curves, i.e. the $q\\to 1$ limit of recursion relations for knot polynomials. It is desirable to introduce the dependence on the parameter $q$ and determine the corresponding BPS degeneracies directly from the knowledge of those recursion relations. Our results can also be further generalized to higher-dimensional varieties generalizing algebraic curves, and correspondingly to links or knots labeled by more general (multi-row) representations.\n\nFourth, an important challenge is to understand the refined open BPS states that we compute for several knots, based on the known super-A-polynomials. Recently various formulations of closed refined BPS states have been considered, see e.g. \\cite{Choi:2012jz,Huang:2013yta}. It would be nice to make contact between these various approaches involving open and closed BPS states.\n\nYet another intriguing direction of research relating algebraic curves and knot invariants has to do with the topological recursion. It has been conjectured in \\cite{DijkgraafFuji-1} and further analyzed in \\cite{DijkgraafFuji-2,Borot:2012cw,abmodel,BEM,Gu:2014yba} that the asymptotic expansion of colored Jones or HOMFLY polynomials can be reconstructed from the topological recursion for the A-polynomial curve. This conjecture has been tested in only a very limited number of cases and still seems poorly understood. 
The new algebraic curves that we consider in this paper, in particular extremal A-polynomials, should provide a simpler setup in which this conjecture can be analyzed. \n\n\n\n\n\\acknowledgments{We thank Estelle Basor, Brian Conrey, Sergei Gukov, Maxim Kontsevich, Satoshi Nawata, and Marko Sto$\\check{\\text{s}}$i$\\acute{\\text{c}}$ for insightful discussions. We greatly appreciate the hospitality of the American Institute of Mathematics, the Banff International Research Station, the International Institute of Physics in Natal, and the Simons Center for Geometry and Physics, where parts of this work were done. This work is supported by the ERC Starting Grant no. 335739 \\emph{``Quantum fields and knot homologies''}, funded by the European Research Council under the European Union's Seventh Framework Programme, and by the Foundation for Polish Science.}\n\n\n\n\n\n\\newpage\n\n\\section{Introduction}\\label{sec:introduction}\n\n\n\n\nIn recent years, machine learning approaches, in particular convolutional neural networks (CNNs), have achieved state-of-the-art results for both discriminative \\cite{Krizhevsky2012,Girshick2014,Girshick2015,Ren2015} and generative tasks \\cite{Mao2016,Cai2017,Pathak2016,Aharon2006, Mairal2008, Mairal2009,Dabov2007,Dabov2009,Lebrun2012,Radford2015,Ledig2016}.\nHowever, applying ideas from these powerful learning techniques, such as dictionary learning and CNNs, to 3D shapes is not straightforward, as a common parameterization of the 3D mesh has to be decided before the application of the learning algorithm. A simple such parameterization is the voxel representation of the shape. 
For discriminative tasks, this generic representation of voxels performs very well \\cite{Maturana2015, Wu2015, Su2015, Brock2016}. However, when this representation is used for global generative tasks, the results are often blotchy, with spurious points floating as noise \\cite{Wu2016, Dai2016, Brock2016}. The aforementioned methods reconstruct the global outline of the shape impressively, but smaller sharp features are lost - due more to the lossy voxel based representation and the nature of the problem being solved than to the performance of the CNN.\n\nIn this paper we intend to reconstruct fine-scale surface details in a 3D shape using ideas taken from the powerful learning methods used in the 2D domain. This problem is different from voxel based shape generation, where the entire global shape is generated with the loss of fine-scale accuracy. Instead, we intend to restore and inpaint surfaces, when a global outline of the noisy mesh being reconstructed is already available. \nInstead of the lossy voxel based global representation, we propose local patches computed with the help of mesh quadrangulation. These local patches provide a collection of \\textit{fixed-length} and \\textit{regular} local units for 3D shapes which can then be used for fine-scale shape processing and analysis. \n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_3dv17\/patch_framework_quads.jpg} \n\\end{subfigure}%\n\\caption{Our patch computation framework - Local patches are computed on reference frames from quad orientations of the quad mesh obtained from the low resolution version of the input mesh.}\n\\label{fig:patchframework}\n\\end{figure}\n\n\nOur local patch computation procedure makes it possible to have a large number of overlapping patches of intermediate length from a single mesh. 
These patches cover the surface variations of the mesh and are numerous enough to train popular machine learning algorithms such as deep CNNs. At the same time, owing to the stable orientation of our patch computation procedure (with the help of quadrangulations), they are sufficiently large to capture meaningful surface details. This makes these patches suitable for the application of repairing a damaged part in the same mesh, while learning from its undamaged parts or some other clean meshes. Because of the locality and the density of the computed patches, we do not need a large database of shapes to correct a damaged part or fill a moderately sized hole in a mesh. We explore ideas from 2D images and use methods such as dictionary learning and deep generative CNNs for surface analysis.\n\nWe compute local patches of moderate length by applying an automatic mesh quadrangulation algorithm~\\cite{Ebke2013} to the low-resolution representation of an input 3D mesh and taking the stable quad orientations for patch computation. The low-resolution mesh is obtained by applying mesh smoothing to the input 3D scan, which captures the broad outline of the shape over which local patches can be placed. We then set the average quad size and thereby choose the required scale for computing local patches. The mesh quadrangulation is a quasi-global method, which determines the local orientations of the quads based on the distribution of corners and edges on the overall shape. At the same time, the scanlines of the quads retain some robustness towards surface noise and partial scans - these orientations can be fine-tuned further by the user if needed. The patch computation approach is summarized in Figure \\ref{fig:patchframework}.
Such small patch sizes would prevent learning based methods from capturing shape detail at larger scales, as required for applications like surface inpainting. \n\nThe contributions of our paper are as follows. \n\\begin{enumerate}\n\\item We propose a novel shape encoding by local patches oriented by mesh quadrangulation. Unlike previous works, we do not require\nthe patches to be exceedingly small \\cite{Digne2014}.\n\n\\item Using our quadrangulated patches, we propose a method for learning a 3D patch dictionary. Using the self-similarity among the 3D patches, we solve surface analysis problems such as inpainting and compression.\n\n\\item We extend the insights for designing CNN architectures for 2D image inpainting to surface inpainting of 3D shapes using our 3D patches. We provide analysis of their applicability to shape denoising and inpainting.\n\n\\item We validate the applicability of our models (patch-dictionary and CNN) learned from multiple 3D scans pooled into a common data set, towards repairing an individual 3D scan. \n\\end{enumerate} \n\n\nThe related work is discussed in the following section. We first explain our encoding of quadrangulated patches in Section \\ref{sec:3Dpatches}. We then present both linear and CNN based generative models in Section \\ref{sec:generativemodels}. We follow with the experiments section, where both generative models are evaluated.
However, even small changes to the 3D mesh structure and topology can create large variations in the global spectral parameterization - something which cannot be avoided when dealing with real world 3D scans. Another problem is with learning partial scans and shape variations, where the shape detail is preserved only locally at certain places. Sumner and Popovic \\cite{Sumner2004} proposed the {\\em deformation gradient} encoding of a deforming surface through the individual geometric transformations of the mesh facets. This encoding can be used for statistical modeling of pre-registered 3D scans~\\cite{Neumann2013}, and describes a Riemannian manifold structure with a Lie algebra~\\cite{Freifeld2012}. All these methods assume that the shapes are pre-registered globally to a common mesh template, which is a significant challenge for shapes with arbitrary topologies. Another alternative is to embed a shape of arbitrary topology in a set of 3D cubes in the extrinsic space, known as {\\em PolyCube-Maps}~\\cite{Tarini2004}. Unfortunately, this encoding is not robust to intrinsic deformations of the shape, such as bending and articulated deformations that can typically occur with real world shapes. So we choose an intrinsic quadrangular parameterization on the shape itself~\\cite{Ebke2013}(see also Jakob et al.~\\cite{Jakob2015}).\n\n\\subsection{Statistical learning of 3D shapes} For reconstructing specific classes of shapes, such as human bodies or faces, fine-scale surface detail can be learned~{\\em e.g.,}\\cite{Garrido2016,Bermano2014,Bogo2015}, from high-resolution scans registered to a common mesh template model. This presumes a common shape topology or registration to a common template model, which is not possible for arbitrary shapes as presented in our work. 
\nFor shapes of arbitrary topology, existing learning architectures for deep neural networks on 2D images can be harnessed by using the projection of the model into different perspectives~\\cite{Su2015, Sarkar2017}, or by using its depth images~\\cite{Wei2016}. 3D shapes are also converted into common global descriptors by voxel sampling. The availability of large databases of 3D shapes like ShapeNet \\cite{Chang2015} has made it possible to learn deep CNNs on such voxelized spaces for the purpose of both discrimination \\cite{Maturana2015, Wu2015, Su2015, Brock2016} and shape generation \\cite{Wu2016, Dai2016, Brock2016}. Unfortunately, these methods cannot preserve fine-scale surface detail, though they are good for identifying the global shape outline. More recently, there have been serious efforts towards alternative ways of applying CNNs to 3D data, such as OctNet \\cite{Riegler2017} and PointNet \\cite{Qi2016}. The OctNet system uses a compact version of the voxel based representation, where only occupied cells are stored in an octree instead of the entire voxel grid, and has computational power similar to the voxel based CNNs.\nPointNet, on the other hand, takes unstructured 3D points as input and obtains a global feature by using max pooling as a symmetric function on the output of an MLP (multi-layer perceptron) applied to individual points. \nNeither of these networks has yet been fully explored for its generative properties (e.g., OctNetFusion \\cite{Riegler2017a}). They are still, at their core, systems for global representation and are not targeted specifically at surfaces. In contrast, we encode a 3D shape by fixed-length and regular local patches and learn generative models (patch dictionary and generative CNNs) for reproducing fine-scale surface details.\n\n\\subsection{CNN based generative models in images} One of the earliest approaches to unsupervised feature learning is the autoencoder \\cite{Hinton2006}, which can also be seen as a generative network. 
A slight variation, denoising autoencoders \\cite{Vincent2008,Xie2012}, reconstruct the image from local corruptions, and are used as a tool both for unsupervised feature learning and for the application of noise removal. Our generative CNN model is, in principle, a variant of the denoising autoencoder, where we use convolutional layers following the modern advances in the field of CNNs. Similar networks with convolutional layers are used for image inpainting in \\cite{Mao2016,Cai2017,Pathak2016}. Generating natural images with a neural network has also been studied extensively - mostly after the introduction of the Generative Adversarial Network (GAN) by Goodfellow et al.~\\cite{Goodfellow2014} and its successful implementation using convolutional layers in DCGAN (Deep Convolutional GAN) \\cite{Radford2015}. As discussed in Section \\ref{sec:networkdesign}, our networks for patch inpainting are inspired by all the aforementioned ideas and are used to inpaint height map based 3D patches instead of images.\n\n\\subsection{Dense patch based generative models in images} 2D patch based methods have been very popular for image denoising. These non-local algorithms can be categorised into dictionary based \\cite{Aharon2006, Mairal2008, Mairal2009} and BM3D (Block-matching and 3D filtering) based \\cite{Dabov2007, Dabov2009, Lebrun2012} methods. \nBecause of the presence of a block matching step in BM3D (patches are matched and kept in a block if they are similar), it is not simple to extend it to the task of inpainting, though the algorithm can be applied indirectly in a different domain \\cite{Li2014}. In contrast, dictionary based methods can be extended to the problem of inpainting by introducing missing data masks in the matrix factorization step - making them the most popular methods for comparison on inpainting tasks. \nIn 3D meshes, due to the lack of a common patch parameterization, this task becomes difficult. 
\nIn this work, we use our novel encoding to compute moderate-length dense 3D patches, and process them with the generative models of patch dictionary and non-linear deep CNNs.\n\n \n\\subsection{3D patch dictionaries} A lossy encoding of local shape detail can be obtained by 3D feature descriptors~\\cite{Kim2013}. However, they typically do not provide a complete local surface parameterization. Recently, Digne et al.~\\cite{Digne2014} used a 3D patch dictionary for point cloud compression. They encode local surface patches as 2D height maps over a circular disc and learn a sparse linear dictionary of patch variations~\\cite{Aharon2006}. They assume that the local patches are sufficiently small (where the shape is parameterizable to a unit disc). In contrast to\nthis work, (i) we use mesh quadrangulation for getting the patch location and orientation (in comparison to uniform sampling and PCA in \\cite{Digne2014}), enabling us to get large patches at good locations, (ii) \nwe address the problem of inpainting by generative models (a masked version of matrix factorization and a blind method for CNN models) instead of compression, and (iii) as a result of the aforementioned differences, our patch size is much larger in order to have a meaningful patch description in the presence of missing regions.\n\n\n\\subsection{General 3D surface inpainting} Earlier methods for 3D surface inpainting regularized from the geometric neighborhood~\\cite{Liepa2003,Bendels2006}. More recently, Sahay et al.~\\cite{Sahay2015} inpaint the holes in a shape by pre-registering it to a {\\em self-similar} proxy model in a dataset that broadly resembles the shape. The holes are inpainted using a patch-dictionary. In this paper, we use a similar approach, but avoid the assumption of finding and pre-registering to a proxy model. The term {\\em self-similarity} in our paper refers to finding similar patches in other areas of the shape. 
Our method automatically detects the suitable patches, either from within the shape, or from a diverse data set of 3D models. Zhong et al.~\\cite{Zhong2016} propose an alternative learning approach by applying sparsity on the Laplacian Eigenbasis of the shape. We show that our method (both patch dictionary and generative CNN models) is better than this approach on publicly available meshes.\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{0.6\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/images_3dv17\/heightmap-eps-converted-to.pdf} \n\\end{subfigure}%\n\\begin{subfigure}{0.35\\linewidth}\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/images_3dv17\/multipatches-eps-converted-to.pdf} \n\\end{subfigure}\n\\caption{(Left) Patch representation - Points are sampled as a height map over the planar grid of a reference frame at the seed point. (Right) Patches computed at multiple offsets from the quad centres to simulate dense sampling of patches while keeping the stable quad orientation. The black connected square represents the quad in a quad mesh and the dotted squares represent the patches that are computed at different offsets.}\n\\label{fig:heightmap}\n\\end{figure}\n\n\n\n\n\n\n\\section{3D Patch Encoding}\n\\label{sec:3Dpatches}\n\n\nGiven a mesh $\\model{M} = \\{F, V\\}$ depicting a 3D shape and the input parameters - patch radius $r$ and grid resolution $N$ - our aim is to decompose it into a set of fixed-length local patches $\\{P_s\\}$, along with the settings $\\model{S} = \\{(s, T_{s})\\}, Conn$ having information on the location (by $s$) and orientation (by the transformation $T_{s}$) of each patch and the vertex connectivity (by $Conn$) for reconstructing back the original shape.\n\nTo compute uniform-length patches, a point cloud $C$ is computed by dense uniform sampling of points in $\\model{M}$. 
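The dense uniform sampling of points on a triangle mesh mentioned above is commonly implemented by picking faces with probability proportional to their area and then drawing uniform barycentric coordinates. A minimal NumPy sketch (function and variable names are ours, for illustration only, not necessarily the paper's implementation):

```python
import numpy as np

def sample_mesh(V, F, n_points, rng=None):
    """Uniformly sample n_points on the surface of the triangle mesh (V, F)."""
    rng = np.random.default_rng(rng)
    tris = V[F]                                              # (num_faces, 3, 3)
    # Face areas from the cross product of two edge vectors.
    areas = 0.5 * np.linalg.norm(
        np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]), axis=1)
    face = rng.choice(len(F), size=n_points, p=areas / areas.sum())
    # The square-root trick yields uniform barycentric coordinates per triangle.
    r1, r2 = rng.random(n_points), rng.random(n_points)
    u = 1.0 - np.sqrt(r1)
    v = np.sqrt(r1) * (1.0 - r2)
    w = 1.0 - u - v
    t = tris[face]
    return u[:, None] * t[:, 0] + v[:, None] * t[:, 1] + w[:, None] * t[:, 2]
```

Sampling densely enough relative to the grid resolution $N$ helps ensure that most height-map bins later receive at least one point.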
Given a seed point $s$ on the model surface $C$, a reference frame $\\model{F}_s$ corresponding to a transformation matrix $T_{s}$ at $s$, and an input patch-radius $r$, we consider all the points in the $r$-neighbourhood, $\\model{P}_s$. \nEach point in $\\model{P}_s$ is represented w.r.t. $\\model{F}_s$ as $P_{\\model{F}_s}$. That is, if the rotation between global coordinates and $\\model{F}_s$ is given by the rotation matrix $R_s$, a point $\\bm{p}$ represented in the local coordinate system of $\\model{F}_s$ is given by $\\bm{p}_{s}= T_s \\bm{p}$, where $T_s = \\begin{pmatrix}R_s & -R_s s\\\\ 0 & 1\\end{pmatrix}$ is the transformation matrix between the two coordinate systems.\n\n\n\\subsection{Local parameterisation and patch representation}\nAn $N\\times N$ square grid of side length $\\sqrt{2}r$ is placed on the X-Y plane of $\\model{F}_s$, and points in $P_{\\model{F}_s}$ are sampled over the grid w.r.t. their X-Y coordinates. Each sampled point is then represented by its `height' from the square grid, which is its Z coordinate, finally yielding a height-map representation of dimension $(N \\times N)$ (Figure \\ref{fig:heightmap}). Thus, each patch around a point $s$ is defined by a \\textit{fixed size} vector $\\operatorname{vec}(P_s)$ of size $N^2$ and a transformation $T_s$. \n\n\\subsection{Mesh reconstruction}\n\\label{sec:connectedmeshrec}\n To reconstruct a connected mesh from the patch set we need to store the connectivity information $Conn$. This can be achieved by keeping track of the exact patch-bin $(P_s, i)$ to which a vertex $v_j \\in V$ in the input mesh corresponds (i.e., where it would get sampled during the patch computation) via the mapping $\\{(j, \\{(P_s, i)\\})\\}$.\n\nTherefore, given the patch set $\\{P_s\\}$ along with the settings $\\model{S} = \\{(s, T_{s})\\}, Conn$ with $Conn = \\{(j, \\{(P_s, i)\\})\\}, F$, it is possible to reconstruct back the original shape with accuracy up to the sampling length. 
For each patch $P_{s}$, for each bin $i$, the height map representation $P_s[i]$ is first converted to the XYZ coordinates in its reference frame, $\\bm{p}_s$, and then to the global coordinates $\\bm{p}'$, by $\\bm{p}'= T_s^{-1} \\bm{p}_s$. Then, for each vertex index $j$, the estimates of $v_j \\in V$ are given by the set of reconstructed points $\\{v_e\\}$ from all patch-bins mapped to $j$. The final value of the vertex, $v_j'$, is taken as the mean of $\\{v_e\\}$. The reconstructed mesh is then given by $\\{\\{v_j'\\}, F\\}$. If the estimate of a vertex $v_j$ is empty, we take the average of the vertices in its 1-ring neighbourhood.\n\n\n \\begin{algorithm}[t]\n\\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n\\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n\\floatname{algorithm}{Steps}\n \\caption{3D Patch computation based on quad mesh}\n \\begin{algorithmic}[1]\n \\REQUIRE Mesh - $M$, Patch radius - $r$, resolution - $N$\n \\STATE Compute the quad mesh of the smoothed $M$ using \\cite{Jakob2015}.\n\t\t\t\\STATE Densely sample points in $M$ to get the cloud $C$.\n\\STATE At each quad center, compute the $r$-neighborhood in $C$ and orient it using the quad orientation to get local patches.\n\\STATE Sample the local patches in a ($N \\times N$) square grid in a height-map based representation.\n\\STATE Store the vertex connections (details in the text).\n \\ENSURE Patch set $\\{P_s\\}$ of ($N \\times N$) dimension, orientations, vertex connections.\n \\end{algorithmic}\n \\label{algorithm}\n \n\\end{algorithm}\n\n\n\\subsection{Reference frames from quad mesh}\n\\label{sec:rfcomputation} \\label{sec:globalproperties}\nThe height-map based representation accurately encodes a surface only when the patch radius is below the distance between the surface points and the shape's medial axis. In other words, the $r$-neighbourhood $\\model{P}_s$ should delimit a topological disk on the underlying surface to enable parameterization over the grid defined by the reference frame. 
In real world shapes, either this assumption breaks, or the patch radius becomes too small for a meaningful sampling of the shape description. A good choice of seed points enables the computation of the patches in well-behaved areas, such that, even with moderately sized patches in arbitrary real world shapes, the $r$-neighbourhood $\\model{P}_s$ of a given point $s$ delimits a topological disk on the grid of parameterisation. It should also provide an orientation consistent with the global shape. \n\n\nGiven a mesh $\\model{M}$, we obtain a low-resolution representation by Laplacian smoothing \\cite{Sorkine2004}. The low resolution mesh captures the broad outline of the shape over which local patches can be placed. In our experiments, for all the meshes, we performed $30$ Laplacian smoothing iterations (normal smoothing + vertex fitting).\n\nGiven the smooth coarse mesh, the quad mesh $\\model{M}^Q$ is extracted following Jakob et al.~\\cite{Jakob2015}. At this step, the quad length is specified in proportion to the final patch length and hence the scale of the patch computation. For each quad $q$ in the quad mesh, its center and $4k$ offsets are considered as seed points, where $k$ is the overlap level (Figure \\ref{fig:heightmap} (Right)). These offsets capture more patch variations for the learning algorithm. For all these seed points, the reference frames are taken from the orientation of the quad $q$, denoted by its transformation $T_{s}$. In this reference frame, the $Z$ axis, on which the height map is computed, is taken to be in the direction normal to the quad. The other two orthogonal axes, $X$ and $Y$, are computed from the two consistent sides of the quads. To keep the orientation of the $X$ and $Y$ axes consistent, we do a breadth-first traversal starting from a specific quad location in the quad mesh and reorient all the axes to the initial axes. 
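The per-seed patch extraction described above can be sketched as follows: build the reference frame from the quad normal (local $Z$) and one quad edge (local $X$), transform the $r$-neighbourhood into that frame, and bin heights over the $N \times N$ grid. A NumPy sketch with hypothetical inputs; the NaN convention for empty bins and the last-write-wins binning are our own illustrative choices:

```python
import numpy as np

def patch_height_map(points, seed, normal, edge_dir, r, N):
    """Sample the r-neighbourhood of `seed` into an N x N height map.
    `normal` is the quad normal (local Z axis), `edge_dir` a quad edge (local X)."""
    z = normal / np.linalg.norm(normal)
    x = edge_dir - np.dot(edge_dir, z) * z   # project the edge onto the quad plane
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R_s = np.stack([x, y, z])                # rows are the local axes
    local = (points - seed) @ R_s.T          # p_s = R_s (p - s)
    local = local[np.linalg.norm(local, axis=1) <= r]
    side = np.sqrt(2) * r                    # side length of the square grid
    H = np.full((N, N), np.nan)              # NaN marks empty bins
    ij = np.floor((local[:, :2] + side / 2) / side * N).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < N).all(axis=1)
    for (i, j), h in zip(ij[ok], local[ok, 2]):
        H[i, j] = h                          # keep the last height per bin
    return H
```

With dense enough sampling, averaging the heights per bin (instead of keeping the last one) would be the more robust choice.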
\n\n\n\n\n\n\n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/patchaa_summary_gen} \n\\end{subfigure}%\n\\caption{Summary of the inpainting framework. Generative models are trained on 3D the patches computed from 3D shapes for the purpose of inpainting. During testing (dashed line) the generative model is used to reconstruct noisy patches computed in the noisy mesh.}\n\\label{fig:patchaa_summary}\n\\end{figure}\n\n\\section{Learning on 3D patches}\n\\label{sec:generativemodels}\nGiven a set of 3D meshes, we first decompose them into local rectangular patches. Using this large database of 3D patches, we learn a generative model to reconstruct denoised version of input 3D patches. We use both Matrix Factorization and CNN based generative models for inpainting whose details are explained in this section. The overall approach for training is presented in Figure \\ref{fig:patchaa_summary}.\n\nLet $\\bm{x_i} := \\operatorname{vec}(P_i) \\in \\mathbb{R}^{N^2}$ be the vectorization of the patch $P_i$ in the patch set $\\{P_i\\}$. And let $X$ be the set of the domain of the vectorization of the patches generated from a mesh (or a pool of meshes). Given such patch set, we learn a generative model $\\model{M}: X \\mapsto X$, such that $\\model{M}(\\bm{x}) = \\bm{x'}$ produces a cleaned version of the noisy input $\\bm{x}$. Following sections describe two such popular methods of generative models used in the context of patch inpainting, namely Dictionary Learning and Denoising Autoencoders. These methods, inspired from their popularity in the 2D domain as generative models, are designed to meet the needs of the patch encoding. 
They are described in detail in the following paragraphs.\n\n\\subsection{Dictionary Learning and Sparse Models}\nGiven a matrix ${D}$ in $\\mathbb{R}^{m \\times p}$ with $p$ column vectors, sparse models in signal processing aim at representing a signal $\\bm{x}$ in $\\mathbb{R}^{m}$ as a sparse linear combination of the column vectors of $D$. The matrix $D$ is called the \\textit{dictionary} and its columns \\textit{atoms}. In terms of optimization, approximating $\\bm{x}$ by a sparse linear combination of atoms can be formulated as finding a sparse vector $\\bm{y}$ in $\\mathbb{R}^p$, with $k$ non-zero coefficients, that minimizes\n\\vspace{-0.2cm}\n\\begin{equation} \\label{eq:sparsity}\n \\quad \\min \\limits _{\\bm{y}} \\frac{1}{2}\\|\\bm{x} - D\\bm{y}\\|^2_2 \\qquad \\text{s.t. } \\|\\bm{y}\\|_0 \\le k\n \\vspace{-0.2cm}\n\\end{equation}\n\nThe dictionary $D$ can be learned from the signal dataset itself, which gives better performance than off-the-shelf dictionaries on natural images. In this work we learn the dictionary from the 3D patches for the purpose of mesh processing. 
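The constrained problem above is combinatorial in general; Orthogonal Matching Pursuit (OMP) greedily adds one atom at a time and re-fits the coefficients by least squares. A compact NumPy sketch (the random dictionary and the 2-sparse test signal are ours, for illustration only):

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily find a k-sparse y
    approximately minimizing ||x - D y||_2."""
    residual, support, coef = x.copy(), [], np.zeros(0)
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected coefficients jointly by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    y = np.zeros(D.shape[1])
    y[support] = coef
    return y

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
x = D[:, 3] - 2.0 * D[:, 40]        # a signal that is exactly 2-sparse in D
y = omp(D, x, k=2)
```

For an exactly $k$-sparse signal in a well-conditioned random dictionary, the greedy selection typically recovers the true support, so `D @ y` reproduces `x` up to numerical precision.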
Given a dataset of $n$ training signals $\\bm{X} = [\\bm{x}_1, ..., \\bm{x}_n]$, dictionary learning can be formulated as the following minimization problem\n\\vspace{-0.2cm}\n\\begin{equation} \\label{eq:dlearning}\n \\quad \\min \\limits _{D, \\bm{Y}} \\sum_{i=1}^n \\frac{1}{2}\\|\\bm{x}_i - D\\bm{y}_i\\|^2_2 + \\lambda \\psi(\\bm{y}_i),\n \\vspace{-0.2cm}\n\\end{equation}\n\nwhere $\\bm{Y} = [\\bm{y}_1, ..., \\bm{y}_n] \\in \\mathbb{R}^{p \\times n}$ is the set of sparse decomposition coefficients of the input signals $\\bm{X}$, and $\\psi$ is a sparsity-inducing regularization function, which is often the $l_1$ or $l_0$ norm.\n\n\nBoth optimization problems described by Equations \\ref{eq:sparsity} and \\ref{eq:dlearning} are solved by approximate or greedy algorithms; for example, Orthogonal Matching Pursuit (OMP) \\cite{Pati1993} and Least Angle Regression (LARS) \\cite{Efron2004} for sparse encoding (optimization of Equation \\ref{eq:sparsity}), and KSVD \\cite{Aharon2006} for dictionary learning (optimization of Equation \\ref{eq:dlearning}). \n\n\n\\textbf{Missing Data:} Missing information in the original signal can be well handled by the sparse encoding. To deal with unobserved information, the sparse encoding formulation of Equation \\ref{eq:sparsity} can be modified by introducing a binary mask $M$ for each signal $\\bm{x}$. Formally, $M$ is defined as a diagonal matrix in $\\mathbb{R}^{m \\times m}$ whose $j$-th diagonal entry is 1 if the $j$-th entry of $\\bm{x}$ is observed and 0 otherwise. Then the sparse encoding formulation becomes \n\n\\begin{equation}\\label{eq:maskedsparsity}\n \\quad \\min \\limits _{\\bm{y}} \\frac{1}{2}\\|M(\\bm{x} - D\\bm{y})\\|^2_2 \\qquad \\text{s.t. } \\|\\bm{y}\\|_0 \\le k\n \\vspace{-0.2cm}\n\\end{equation}\n\nHere $M\\bm{x}$ represents the observed data of the signal $\\bm{x}$ and $\\bm{x'} = D\\bm{y}$ is the estimate of the full signal. 
The binary mask does not drastically change the optimization procedure, and one can still use the classical optimization techniques for sparse encoding. \n\n\n\\subsubsection{3D Patch Dictionary}\n\\label{sec:shapeencoding}\nWe learn a patch dictionary $D$ with the generated patch set $\\{P_s\\}$ as training signals ($m = N^2$). This patch set may come from a single mesh (providing a \\textit{local dictionary}), or be accumulated globally using patches coming from different shapes (providing a \\textit{global dictionary} of the dataset). In the case of the application of hole-filling, a dictionary can also be learnt on the patches from the clean part of the mesh; we call this a \\textit{self-similar} dictionary, which is powerful for meshes with repetitive structures. For example, a tiled floor or the side of a shoe has many repetitive elements that can be learned automatically. We computed patches at the resolution of (24 $\\times$ 24) following the mesh resolution. More details on the 3D dataset, patch sizes, and resolutions for different types of meshes are provided in the Evaluation section. Note that we also computed patches at the resolution of (100 $\\times$ 100) for the CNN based generative models, as they are more complex than the linear dictionary based models; details are given in the next section.\n\n\\textbf{Reconstruction}\nUsing a given patch dictionary $D$, we can reconstruct the original shape, with accuracy depending on the number of atoms chosen for the dictionary. For each 3D patch $\\bm{x_i} = \\operatorname{vec}(\\bm{P}_i)$ from the generated patches and the learnt dictionary $D$ of a shape, its sparse representation $\\bm{y}$ is found following the optimization in Equation \\ref{eq:sparsity} using the algorithm of Orthogonal Matching Pursuit (OMP). Its approximate representation, the locally reconstructed patch $\\bm{x}_i'$, is found as $\\bm{x}_i' \\approx D\\bm{y}$. 
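In practice, the masked formulation reduces to running the same sparse solver on only the observed rows of $D$ and the signal, and then multiplying the full dictionary by the recovered coefficients to fill the gaps. A sketch using scikit-learn's `orthogonal_mp` (random data for illustration; the actual patch data and solver settings in this work may differ):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def inpaint_patch(x, mask, D, k):
    """Masked sparse coding: fit y on the observed rows only,
    then estimate the full patch (including missing bins) as D @ y."""
    y = orthogonal_mp(D[mask], x[mask], n_nonzero_coefs=k)
    return D @ y

rng = np.random.default_rng(1)
m, p, k = 64, 128, 3
D = rng.standard_normal((m, p))
D /= np.linalg.norm(D, axis=0)                        # unit-norm atoms
x = D[:, [5, 17, 90]] @ np.array([1.0, -0.5, 2.0])    # exactly 3-sparse patch
mask = rng.random(m) > 0.25                           # ~25% of the bins missing
x_hat = inpaint_patch(x, mask, D, k)
```

Because the solver never sees the missing rows, the estimate in those bins comes purely from the structure of the learned atoms.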
The final reconstruction is performed using the altered patch set $\\{P_i'\\}$ and $\\model{S}$ following the procedure in Section \\ref{sec:connectedmeshrec}. \n\n\\textbf{Reconstruction with missing data}\n\\label{sec:missingdatarec}\nIn case of 3D mesh with missing data, for each 3D patch $\\bm{x_i}$ computed from the noisy data having missing values, we find the sparse encoding $\\bm{y_i}$ following Equation \\ref{eq:maskedsparsity}. The estimate of the full reconstructed patch is then $\\bm{x}' = D\\bm{y}$. \n\nResults of inpainting using Dictionary Learning is provided in the Evaluation section (Section \\ref{sec:results}). We now present the second generative model in the next section.\n\n\n\n\n\n\n\\begin{figure*}\n\\small\n\\centering\n\\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/AAsummary2}\n \\end{subfigure}\n\\resizebox{0.8\\textwidth}{!}{\n\\begin{tabular}{|l|l|l|l|l|l|l|}\n\\hline\n & \\textbf{small\\_4x} & \\textbf{multi\\_6x} & \\textbf{6x\\_128} & \\textbf{6x\\_128\\_FC} & \\textbf{long\\_12x} & \\textbf{long\\_12x\\_SC} \\\\ \\hline\nInput & (24x24) & (100x100x1) & (100x100x1) & (100x100x1) & (100x100x1) & (100x100x1) \\\\ \\hline\n & 3x3, 32 & 3x3, 32 & & & & \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{2-7}\n \\multirow{3}{*}{\\begin{turn}{90} Convolution blocks\\end{turn}} & 3x3, 32 & 3x3, 32 & & & & 5x5, 64, (2, 2) \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & Out (1) \\\\ \\cline{3-7}\n & & 3x3, 32 & & & & \\\\ \n & & 3x3, 32, (2, 2) & 5x5, 128, (2, 2) & 5x5, 128, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{5-6} \\cline{6-7}\n\n & & & & & & 5x5, 64, (2, 2) \\\\ \n & & & & & 5x5, 64, (2, 2) & Out (2) \\\\ \\cline{6-7}\n & & & & & & \\\\ \n & & & & & 5x5, 64 & 5x5, 64 \\\\ \\cline{6-7}\n & & & & & & \\\\ \n & & & & FC 4096 & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) 
\\\\ \\hline \\hline \n\n\n & & & & & & \\\\ \n & & & & & 5x5, 64 & 5x5, 64 \\\\ \\cline{6-7}\n \\multirow{3}{*}{\\begin{turn}{90}Transposed conv blocks\\end{turn}} & & & & & & 5x5, 64, (2, 2) \\\\ \n & & & & & 5x5, 64, (2, 2) & Relu + (2) \\\\ \\cline{6-7}\n & & 3x3, 32 & & & & \\\\ \n & & 3x3, 32, (2, 2) & 5x5, 128, (2, 2) & 5x5, 128, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{3-7}\n & 3x3, 32 & 3x3, 32 & & & & 5x5, 64, (2, 2) \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & 5x5, 64, (2, 2) & Relu + (1) \\\\ \\cline{2-7}\n & 3x3, 32 & 3x3, 32 & & & & \\\\ \n & 3x3, 32, (2, 2) & 3x3, 32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 32, (2, 2) & 5x5, 64 & 5x5, 64 \\\\ \\cline{2-7}\n \n & 3x3, 1 & 3x3, 1 & 5x5, 1 & 5x5, 1 & 5x5, 1, (2, 2) & 5x5, 1, (2, 2) \\\\ \\hline\n\\end{tabular}\n\n}\n\\caption{(Left) - Summary of our network architecture showing the building blocks. Dashed lines and blocks are optional parts depending on the network as described in the table on the right. Conv, FCs and TConv denote Convolution, Fully Connected and Transposed Convolution layers respectively. (Right) - The detailed description of the different networks used. Each column represents a network where the input is processed from top to bottom. The block represents the kernel size, number of filters or output channels and optional strides when it differs from (1, 1). The network complexity in terms of computation and parameters increases from left to right except for \\textit{6x\\_128\\_FC}, which has the maximum number of parameters because of the presence of the FC layer. Other details are provided in Section \\ref{sec:networkdesign}.}\n\\label{table:networks}\n\\end{figure*}\n \n\\subsection{Denoising Autoencoders for 3D patches}\n\\label{sec:cnnintro}\nIn this section we present the generative model $\\model{M}: X \\mapsto X$ as Convolutional Denoising Autoencoder. Autoencoders are generative networks which try to reconstruct the input. 
A Denoising Autoencoder reconstructs the de-noised version of a noisy input, and is one of the most well known methods for image restoration and unsupervised feature learning \\cite{Xie2012}. We use a denoising autoencoder architecture with convolutional layers, following the success of deep convolutional neural networks (CNNs) in image classification and generation. Instead of images, we use the 3D patches generated from different shapes as input, and show that this height map based representation can be successfully used in CNNs for geometry restoration and surface inpainting. \n\nFollowing typical denoising autoencoders, our network has two parts - an encoder and a decoder. The encoder takes a 3D patch with missing data as input and produces a latent feature representation of that patch. The decoder takes this feature representation and reconstructs the original patch including the missing content. The encoder contains a sequence of convolutional layers which reduce the spatial dimension of the output as we go deeper into the network; this part can therefore also be called the \\textit{downsampling} part. It is followed by an optional fully connected layer, completing the encoding part of the network. The decoding part consists of fractionally strided convolution (or transposed convolution) layers which increase the spatial dimension back to the original patch size, and can hence be called the \\textit{upsampling} part. The general design is shown in Figure \\ref{table:networks} (Left). \n\n\n\n\\subsubsection{Network design choices}\n\\label{sec:networkdesign}\nOur denoising autoencoder should be designed to meet the needs of patch encoding.
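As a quick sanity check on the spatial bookkeeping of this encoder-decoder design, the layer output sizes follow the standard convolution-arithmetic formulas. A small sketch, assuming (5 x 5) kernels with stride 2 and padding 2 (the exact padding in our networks may differ):

```python
def conv_out(n, k=5, s=2, p=2):
    # Output size of a strided convolution layer.
    return (n + 2 * p - k) // s + 1

def tconv_out(n, k=5, s=2, p=2, op=0):
    # Output size of a transposed (fractionally strided) convolution;
    # op is the output padding that resolves the rounding ambiguity.
    return (n - 1) * s - 2 * p + k + op

# A 100x100 patch through three stride-2 downsampling layers...
sizes = [100]
for _ in range(3):
    sizes.append(conv_out(sizes[-1]))
# ...and back up through three transposed-convolution layers.
up = [sizes[-1]]
for op in (0, 1, 1):
    up.append(tconv_out(up[-1], op=op))

print(sizes)  # [100, 50, 25, 13]
print(up)     # [13, 25, 50, 100]
```

The output padding is needed because two different input sizes (here 24 and 25) can produce the same downsampled size under integer division.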
The common design choices are presented in Figure \\ref{table:networks} and are discussed in detail in the following paragraphs.\n\n\\textbf{Pooling vs strides} \nFollowing the approach of powerful generative models like the Deep Convolutional Generative Adversarial Network (DCGAN) \\cite{Radford2015}, we use strided convolutions for downsampling and strided transposed convolutions for upsampling, and do not use any pooling layers. For small networks the effect is insignificant, but for large networks the strided version performs better. \n\n\n\\textbf{Patch dimension} We computed patches at the resolutions of 16 $\\times$ 16, 24 $\\times$ 24 and 100 $\\times$ 100 with the same patch radius (providing patches at the same scale) in our 3D models. High-resolution patches capture more details than their low-resolution counterparts, but reconstructing higher-dimensional images is also harder for a neural network, which creates a trade-off that needs to be considered. Higher resolution also requires a bigger network to capture intricate details, as discussed in the following paragraphs. For lower dimensions (24 $\\times$ 24 input), we used two down-sampling blocks followed by two up-sampling blocks. We call this network \\textbf{small\\_4x} as described in Figure \\ref{table:networks}. \nOther than this, all the considered networks take an input of 100 $\\times$ 100 dimensions. The simplest ones, corresponding to 3 encoder and decoder blocks, are \\textbf{multi\\_6x} and \\textbf{6x\\_128}.\n\n\\textbf{Kernel size} Large convolutional kernels tend to perform better than small ones for image inpainting. \\cite{Mao2016} found a filter size of (5 $\\times$ 5) to (7 $\\times$ 7) to be optimal, with larger sizes degrading the quality.
Following this intuition and the general network of DCGAN \\cite{Radford2015}, we use a filter size of (5 $\\times$ 5) in all the experiments.\n\n\\textbf{FC latent layer} A fully connected (FC) layer can be placed at the end of the encoder part. Without it, information cannot propagate from one corner of the feature map to the other. However, adding an FC layer where the latent feature dimension from the convolutional layers is already high causes an explosion in the number of parameters. Note that for inpainting we want to retain as much information as possible, unlike in simple Autoencoders, where the latent layer is often small for compact feature representation and dimension reduction.\nWe use a network with an FC layer, \\textbf{6x\\_128\\_FC}, with 4096 units for the 100 $\\times$ 100 feature input. Note that although the number of output neurons in this FC layer can be considered large (in comparison to classical CNNs for classification), the output dimension is smaller than the input dimension, which causes some loss of information for generative tasks such as inpainting.\n\n\\textbf{Symmetrical skip connections}\nFor deep networks, symmetrical skip connections have been shown to perform better for the task of image inpainting \\cite{Mao2016}. The idea is to provide short-cuts (addition followed by ReLU activation) from the convolutional feature maps to their mirrored transposed-convolution layers in a symmetrical encoding-decoding network. This is particularly helpful for networks of large depth. In our experiments, we consider a deep network of 12 layers with skip connections, \\textbf{long\\_12x\\_SC}, and compare it with its counterpart without skip connections, \\textbf{long\\_12x}. All the networks are summarized in Figure \\ref{table:networks}.\n\n\n\\subsubsection{Training details}\n\\label{sec:training_details}\n3D patches can be straightforwardly treated as images with 1 channel.
Instead of a pixel value we have the height at a particular 2D bin, which can be negative. Depending on the scale at which the patches are computed, this height can also depend on the 3D shape it is computed from. Therefore, we need to perform dataset normalization before training and testing. \n\n\\textbf{Patch normalization}\nWe normalize the patch set between 0 and 0.83 (= 1\/1.2) before training and set the missing regions or hole-masks to 1. This lets the network easily identify the holes during training - as the training procedure is technically a blind inpainting method. We found empirically that the network has difficulty in reconstructing fine-scale details when this threshold is lowered further (e.g. 1\/1.5). The main idea here is to let the network easily identify the missing regions without sacrificing a big part of the input spectrum.\n\n\n\\textbf{Training} We train on the densely overlapped clean patches computed on a set of clean meshes. Square and circular hole-masks of length 0 to 0.8 times the patch length are created randomly on the fly, at random locations on the patches with uniform probability, and are passed through the denoising network during the training. The output of the network is matched against the original patches without holes using a soft binary cross entropy loss between 0 and 1. Note that this training scheme is aimed at reconstructing holes smaller than 0.8 times the patch length. The use of patches of moderate length computed on quad orientations enables this method to inpaint holes of small to moderate size.\n\n\\subsection{Inpainting pipeline}\n\\label{sec:testing_inpainting}\nTesting consists of inpainting holes in a given 3D mesh. This involves patch computation on the noisy mesh, patch inpainting through a generative model, and the reconstruction of the final mesh. \nFor a 3D mesh with holes, the regions to be inpainted are completely empty and have no edge connectivity or vertex information.
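The patch normalization and on-the-fly hole masking used for training can be sketched as follows (numpy; the 1/1.2 ceiling and the 0.8 mask-size bound follow the text above, while the function and variable names are ours):

```python
import numpy as np

CEIL = 1.0 / 1.2   # ~0.83: top of the signal range; hole pixels get value 1

def normalize_patches(patches):
    # Map all height values of the patch set into [0, CEIL].
    lo, hi = patches.min(), patches.max()
    return (patches - lo) / (hi - lo) * CEIL

def punch_square_hole(patch, rng, max_frac=0.8):
    # Random square hole of side 0..max_frac * patch length, marked with 1.
    n = patch.shape[0]
    side = rng.integers(0, int(max_frac * n) + 1)
    r, c = rng.integers(0, n - side + 1, size=2)
    noisy = patch.copy()
    noisy[r:r + side, c:c + side] = 1.0
    return noisy

rng = np.random.default_rng(0)
patches = normalize_patches(rng.standard_normal((32, 24, 24)))
noisy = punch_square_hole(patches[0], rng)
```

Since all valid heights stay below the ceiling and holes are set to exactly 1, the network can separate hole pixels from signal by value alone.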
Thus, to establish the final interior mesh connectivity after CNN based patch reconstruction, there has to be a way of inserting vertices and performing triangulation. We use an existing popular method \\cite{Liepa2003} for this purpose of hole triangulation, to get a connected hole-filled mesh based on local geometry. This hole-triangulated mesh is also used for quad mesh computation on the mesh with holes. This is important as quad mesh computation is affected by the presence of holes. \n\n\\begin{figure}[t]\n\\centering\n\n\\begin{subfigure}{0.45\\linewidth}\n \\centering\n \\includegraphics[width=0.9\\linewidth]{figures\/images_3dv17\/dict_Totem} \n\\end{subfigure}%\n\\hfill\n\\begin{subfigure}{0.5\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_3dv17\/shape_accuracy2} \n\\end{subfigure}%\n\\caption{(Left) Visualization of dictionary atoms learnt from the shape \\textit{Totem} ($m = 16 \\times 16$). (Right) Reconstruction of the shape \\textit{Totem} using local dictionaries of 5 atoms and 100 atoms.}\n\\label{fig:dictionariesvis}\n\\end{figure}\n\\section{Experimental Results}\n\\label{sec:results}\nIn this section we present the different experiments performed to evaluate our design choices for both the dictionary and CNN based generative models for mesh processing. We first provide the details of the meshes used in our experiments by introducing our dataset in Section \\ref{sec:dataset_patches}. We then provide the different parameters used for patch computation, followed by the mesh restoration results with dictionary learning (Section \\ref{sec:results_patchdict}). We then provide our results of inpainting with the CNN based approach and its comparison with our dictionary based approach (Section \\ref{sec:conv_results}). As seen both quantitatively and qualitatively, the CNN based approach provides better results than the dictionary based approach.
We end with a section on the generalization capability of our local patches through global generative models (both a global dictionary and a global denoising autoencoder) and discuss the possibility of having a global universal generative model for local 3D patches.\n\n\\subsection{Dataset}\n\\label{sec:dataset_patches}\nWe consider a dataset having 3D shapes of two different types. The first type (\\textbf{Type 1}) consists of meshes that are, in general, simple in nature without much surface texture. In descending order of complexity, the 5 such objects considered are \\emph{Totem, Bunny, Milk-bottle, Fandisk} and \\emph{Baseball}. \\emph{Totem} is of very high complexity, containing a large amount of fine-level detail, whereas \\emph{Bunny} and \\emph{Fandisk} are standard graphics models with moderate complexity. In addition, we considered 5 models with high surface texture and details (\\textbf{Type 2}), consisting of shoe soles and a human brain, specifically to evaluate our hole-filling algorithm - \\emph{Supernova, Terrex, Wander, LeatherShoe} and \\emph{Brain}. This subset of meshes is also referred to as the \\textit{high texture} dataset in subsequent sections. In total, we therefore consider 10 different meshes for our evaluation (all meshes are shown in the supplementary material). \n\n\nOther than the models \\textit{Baseball, Fandisk} and \\textit{Brain}, all models considered for the experimentation are reconstructed using a vision based reconstruction system - 3Digify \\cite{3Digify}. Since this system uses structured light, the output models are quite accurate, but do have inherent noise coming from structured light reconstruction and alignment. Nonetheless, because of its high accuracy, we consider these meshes to be `clean' for computing the global patch database. These models were also reconstructed with varying accuracy, by changing the reconstruction environment, before being considered for the inpainting experiments.
In an extreme case, some of these models are reconstructed using Structure from Motion for the purpose of denoising, using their `clean' counterparts as described in Section \\ref{sec:denoisingres}.\n\n\n\n\n\\textbf{Dataset normalization and scale selection}\nFor normalization, we put each mesh into a unit cube, followed by upsampling (by subdivision) or downsampling (by edge collapse) to bring it to a common resolution. After normalization, we obtained the low resolution mesh by applying Laplacian smoothing with 30 iterations. We then performed the automatic quadrangulation procedure of \\cite{Ebke2013} on the low resolution mesh, with the target number of faces chosen such that the average quad length is 0.03 for the Type 1 dataset and 0.06 for the Type 2 dataset (for larger holes); this in turn becomes the average patch length of our dataset. The procedure of smoothing and generating the quad mesh can be supervised manually in order to get a better quad mesh for reference frame computation. But, on our varied dataset, the automatic procedure gave us the desired results. \n\nWe then generated 3D patches from each of the clean meshes using the procedure provided in Section \\ref{sec:3Dpatches}. We chose the number of bins $N$ to be 16 for the Type 1 dataset and 24 for the Type 2 dataset, to match the resolution of the input mesh.
To perform experiments in a common space (global dictionary), we also generated patches with a patch dimension of 16 for the Type 2 dataset, at the loss of some output resolution.\n\n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{figures\/images_3dv17\/error_shapecomplexity} \n\\end{subfigure}%\n\\vspace{-0.4cm}\n\\caption{Reconstruction error of different shapes with dictionaries of increasing numbers of atoms.}\n\\label{fig:reconstructioncomplexity_quantitative}\n\\end{figure}\n\n\n\\begin{table}\n\\small\n\\centering\n\\begin{tabular}{cccccc}\n\\toprule\n{} & Mesh & & Patch & Compr& \\\\\n{Meshes} & entities & \\#patches & entities & factor & PSNR \\\\\n\\midrule\nTotem & 450006 & 658 & 12484 & 36.0 & 56.6 \\\\\nMilkbottle & 441591 & 758 & 14420 & 30.6 & 72.3 \\\\\nBaseball & 415446 & 787 & 14974 & 27.7 & 75.6 \\\\\nBunny & 501144 & 844 & 16030 & 31.3 & 60.6 \\\\\nFandisk & 65049 & 874 & 16642 & 3.9 & 62.1 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Results for compression in terms of the number of entities, using a representation with a global dictionary of 100 atoms. Mesh entities is the number of entities for representing the mesh, which is 3 $\\times$ \\#Faces plus \\#Vertices. Patch entities is the total number of sparse dictionary coefficients (20 per patch) used to represent the mesh plus the entities in the quad mesh. Compr factor is the compression factor between the two representations.
PSNR is the Peak Signal to Noise Ratio, where the bounding box diameter of the mesh is considered as the peak signal, following \\cite{Praun2003}.}\n\n\\label{table:compression}\n\\end{table}\n\\subsection{Evaluating 3D Patch Dictionaries}\n\\label{sec:results_patchdict}\n\\subsubsection{Dictionary Learning and Mesh Reconstruction}\n\n\\textbf{Dictionary Learning}\nWe learn the local dictionary for each shape with varying numbers of dictionary atoms, with the aim of reconstructing the shape at varying levels of detail. Atoms of one such learned dictionary are shown in Figure \\ref{fig:dictionariesvis} (Left). Observe the `stripe like' structures in the dictionary of \\textit{Totem}, in accordance with the fact that \\textit{Totem} has more line-like geometric textures. \n\n\\textbf{Reconstruction of shapes}\nWe then perform reconstruction of the original shape using the local dictionaries with different numbers of atoms (Section \\ref{sec:shapeencoding}). \nFigure \\ref{fig:dictionariesvis} (Right) shows qualitatively the difference in output shape when reconstructed with dictionaries of 5 and 100 atoms. \nFigure \\ref{fig:reconstructioncomplexity_quantitative} plots the \\textit{Global Reconstruction Error} - the mean Point to Mesh distance between the vertices of the reconstructed mesh and the reference mesh - against the number of atoms in the learned dictionary for our Type 1 dataset. We note that the reconstruction error saturates after a certain number of atoms (50 for all).\n\n\\textbf{Potential for Compression} The reconstruction error is low after a certain number of atoms in the learned dictionary, even when a global dictionary is used for reconstructing all the shapes (more on shape independence in Section \\ref{sec:generalization}). Thus, only the sparse coefficients and the connectivity information need to be stored to represent a mesh using a common global dictionary, which can be used as a means of mesh compression.
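For reference, a PSNR with the bounding box diameter as the peak signal reduces to a one-liner once an RMSE between corresponding points is available (a sketch; the RMSE computation itself is assumed):

```python
import math

def mesh_psnr(rmse, bbox_diag):
    # PSNR in dB, using the bounding-box diagonal as the peak signal.
    return 20.0 * math.log10(bbox_diag / rmse)

# E.g. a unit-cube mesh (diagonal sqrt(3)) with an RMSE of 1e-3:
print(round(mesh_psnr(1e-3, math.sqrt(3)), 1))  # 64.8
```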
Table \\ref{table:compression} shows the results of information compression on Type 1 dataset. \n\n\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_3dv17\/missingvqualitative1-compressed} \n\\end{subfigure}\n\\caption{Inpainting of the models with 50\\% missing vertices (Left - noisy mesh, Middle - inpainted mesh, Right - ground truth) of \\textit{Terrex} and \\textit{Bunny}, using the local dictionary. Here we use the quad mesh provided at the testing time.}\n\\label{fig:missingvertices}\n\n\\end{figure}\n\n\n\n\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{l|cc|cc}\n\\hline\n{Missing Ratio} & \\multicolumn{2}{c|}{0.2} & \\multicolumn{2}{c}{0.5} \\\\\n{} & ours & \\cite{Zhong2016} & ours & \\cite{Zhong2016} \\\\\n\\hline\n\nbunny & \\textbf{1.11e-3} &1.90e-2 &\\textbf{1.62e-3} & 2.20e-2 \\\\\nfandisk & \\textbf{1.32e-3} &8.30e-3 & \\textbf{1.34e-3} &1.20e-2\\\\\n\\hline\n\\end{tabular}\n\\caption{RMS Inpainting error of missing vertices from our method using local dictionary and its comparison to \\cite{Zhong2016}}.\n\\label{table:zhongcomp}\n\n\\end{table}\n\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=.75\\linewidth]{figures\/images_3dv17\/qualitative_shoe} \n\\end{subfigure}%\n\n\\begin{subfigure}{1\\linewidth}\n \\centering\n \\includegraphics[width=0.75\\linewidth]{figures\/images_3dv17\/qualitative_milkbottle} \n\\end{subfigure}%\n\\caption{Qualitative analysis of the inpainting algorithm of \\textit{Supernova} and \\textit{Milk-bottle}. From left to right - mesh with holes, hole filling with \\cite{Liepa2003}, our results from global dictionary and ground truth mesh. 
Detailed visualizations of the results on other meshes are presented in the supplementary material.}\n\\label{fig:inpaintqualitative}\n\n\\end{figure}\n\n\n\n\\subsubsection{Surface Inpainting}\n\n\\textbf{Recovering missing geometry}\nTo evaluate our algorithm for geometry recovery, we randomly label a certain percentage of vertices in the mesh as missing. The reconstructed vertices are then compared with the original ones. A visualization of our result is given in Figure \\ref{fig:missingvertices}. Zoomed views highlighting the details captured, as well as the results on other objects, are provided in the supplementary material. We compare our results with \\cite{Zhong2016}, which performs the similar task of estimating missing vertices, on the publicly available meshes \\textit{Bunny} and \\textit{Fandisk}, and provide the recovery error measured as the Root Mean Square Error (RMSE) of the missing coordinates in Table \\ref{table:zhongcomp}. Because of the unavailability of the other two meshes used in \\cite{Zhong2016}, we limited the comparison to these meshes. As seen in the table, we improve over them by a large margin.\n\nThis experiment also covers the case when the coarse mesh of the noisy data is provided to us, which we can directly use for computing the quad mesh and inferring the final mesh connectivity (Section \\ref{sec:connectedmeshrec}). This is true for the application of recovering damaged parts. If the coarse mesh is not provided, we can easily perform Poisson surface reconstruction using the non-missing vertices, followed by Laplacian smoothing, to get our low resolution mesh for quadrangulation.
Since the low resolution mesh is needed just for the shape outline without any details, Poisson surface reconstruction does sufficiently well even when 70\\% of the vertices are missing in our meshes.\n\n\n\\textbf{Hole filling}\nWe systematically punched holes of different sizes (limited to the patch length) a uniform distance apart in the models of our dataset to create a noisy test dataset. We follow the procedure in Section \\ref{sec:testing_inpainting} on this noisy dataset and report our inpainting results in Table \\ref{table:inpaintingall}. Here we use the mean of the Cloud-to-Mesh error of the inpainted vertices as our error metric. Please note that the noisy patches are independently generated on their own quad mesh. No information about the reference frames from the training data is used for the patch computation of the noisy data. Also, note that this logically covers the inpainting of the missing geometry of a scan due to occlusions. We use both local and global dictionaries for filling in the missing information and found the results to be quite similar to each other. \n\nFor baseline comparison we computed the error from the popular filling algorithm of \\cite{Liepa2003} available in MeshLab \\cite{Cignoni2008}. Here the comparison is meant to point out the improvement achieved using a data driven approach over a purely geometric one. We could not compare our results with \\cite{Zhong2016} because of the lack of a systematic evaluation of hole-filling in their paper. As can be seen, our method is clearly better than \\cite{Liepa2003} both quantitatively and qualitatively (Figure \\ref{fig:inpaintqualitative}). The focus of our evaluation here is on the Type 2 dataset - which captures complex textures. On this particular dataset we also performed the hole filling procedure using self-similarity, where we learn a dictionary from the patches computed on the noisy mesh having holes, and use it to reconstruct the missing data.
The results obtained are very similar to those using the local or global dictionary (Table \\ref{table:inpaintselfsimilar}).\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{0.65\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_3dv17\/denoising_totemrec-compressed} \n\\end{subfigure}\\begin{subfigure}{0.35\\linewidth}\\centering\\includegraphics[width=0.6\\linewidth]{figures\/images_3dv17\/keyboard_denoised} \n\\end{subfigure}%\n\\caption{Denoising meshes using a clean patch dictionary of a similar object. (Left) Results on \\textit{Totem} (from left to right) - noisy reconstruction from SFM, our denoising using the patch dictionary from a clean reconstruction, denoising by Laplacian smoothing \\cite{Sorkine2004}, the high quality clean mesh in a different global configuration. (Right) Result for the mesh \\textit{Keyboard} with the same experiment. Zoomed versions of similar results are provided in the supplementary material.}\n\\label{fig:denoising_qualitative}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering\n\\small\n\\begin{tabular}{lrrr}\n\\toprule\n{} & \\cite{Liepa2003} & Our - Local & Our - Global \\\\\n\\midrule\nSupernova & 0.001646 & \\textbf{0.000499} & 0.000524 \\\\\nTerrex & 0.001258 & 0.000595 & \\textbf{0.000575} \\\\\nWander & 0.002214 & 0.000948 & \\textbf{0.000901} \\\\\nLeatherShoe & 0.000854 & 0.000569 & \\textbf{0.000532} \\\\\nBrain & 0.002273 & 0.000646 & \\textbf{0.000587} \\\\ \\hline\nMilk-bottle & 0.000327 & 0.000126 & \\textbf{0.000123} \\\\\nBaseball & 0.000158 & \\textbf{0.000138} & 0.000168 \\\\\nTotem & 0.001065 & 0.001065 & \\textbf{0.001052} \\\\\nBunny & \\textbf{0.000551} & 0.000576 & 0.000569 \\\\\nFandisk & 0.001667 & 0.000654 & \\textbf{0.000634} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error for our dataset, with hole sizes 0.015, 0.025 and 0.035 for the Type 2 dataset (top block of the table) and 0.01 and 0.02 for the Type 1 dataset (bottom block of the table).
\\textit{Local} uses the local dictionary learned from the clean mesh of the corresponding shape and \\textit{Global} uses a global dictionary learned from the entire dataset.}\n\\label{table:inpaintingall}\n\n\\end{table}\n\n\\begin{table}[th]\n\\centering\n\\small\n\\begin{tabular}{lrr}\n\\toprule\n{} & \\cite{Liepa2003} & Self-Similar \\\\\n\\midrule\nSupernova & 0.001162 & \\textbf{0.000401} \\\\\nTerrex & 0.000900 & \\textbf{0.000585} \\\\\nWander & 0.001373 & \\textbf{0.000959} \\\\\nLeatherShoe & 0.000596 & \\textbf{0.000544} \\\\\nBrain & 0.001704 & \\textbf{0.000614} \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error comparison with the self-similar dictionary with 100 atoms. The hole size considered is 0.035.}\n\\label{table:inpaintselfsimilar}\n\\end{table}\n\n\n\nWith smaller holes, the method of \\cite{Liepa2003} performs as well as our algorithm, as shape information is not present at such a small scale. The performance of our algorithm becomes noticeably better as the hole size increases, as shown in Figure \\ref{fig:brain_holewise_algo}. This demonstrates the advantage of our method for moderately sized holes. \n\n\\textbf{Improving quality of noisy reconstruction}\n\\label{sec:denoisingres}\nOur algorithm for inpainting can easily be extended for the purpose of denoising. We can use the dictionary learned on the patches from a clean or high quality reconstruction of an object to improve the quality of its low quality reconstruction. Here we approximate the noisy patch with its closest linear combination in the dictionary following Equation \\ref{eq:sparsity}. Because our patches are local, the low quality reconstruction need not be globally similar to the clean shape.
This is depicted in Figure \\ref{fig:denoising_qualitative} (Left), where a different configuration of the model \\textit{Totem} (with the wings turned compared to the horizontal position in its clean counterpart), reconstructed with structure-from-motion and containing noisy bumps, has been denoised using the patch dictionary learnt on its clean version reconstructed by structured light. A similar result on the mesh \\textit{Keyboard} is shown in Figure \\ref{fig:denoising_qualitative} (Right).\n\\begin{figure*}[th]\n\\centering\n\\begin{subfigure}[b]{0.33\\linewidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/images_3dv17\/brain_holewise_algo} \n \\vspace{-0.25cm}\n \\caption{} \n \\label{fig:brain_holewise_algo}\n\\end{subfigure}\\begin{subfigure}[b]{0.33\\linewidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/images_3dv17\/recerror_global_local} \n \\vspace{-0.25cm}\n \\caption{} \n \\label{fig:recerror_global_local}\n\\end{subfigure}\\begin{subfigure}[b]{0.33\\linewidth}\n \\centering\n \\includegraphics[width=0.99\\linewidth]{figures\/images_3dv17\/recerror_noobject} \n \\vspace{-0.25cm}\n \\caption{} \n \\label{fig:reconstructioncomplexity}\n\\end{subfigure}%\n\\vspace{-0.4cm}\n\\caption{(a) Inpainting error vs hole-size to patch-size ratio for \\textit{Brain}, inpainted using the global dictionary. The patch size here is 0.062 (patch radius $\\approx$ 0.044). Plots for other shapes are provided in the supplementary material. (b) Comparison of the reconstruction error of \\textit{Totem} using local and global dictionaries with different numbers of atoms. For better visualization, the X axis is shown in logarithmic scale.
(c) Reconstruction error of \\textit{Totem} with global dictionaries (with 500 atoms) having patches from different numbers of shapes.}\n\\end{figure*}\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Evaluating Denoising Autoencoders}\n\\label{sec:conv_results}\n\nWe use the same dataset mentioned at the beginning of this section for evaluating the Convolutional Denoising Autoencoders, and put more emphasis on the \\textit{high texture dataset}. We compute another set of patches with resolution 100 $\\times$ 100 (in addition to the patches with resolution 24 $\\times$ 24 presented in Section \\ref{sec:dataset_patches}) for performing a fine level analysis of patch reconstruction w.r.t. the network complexity.\n\n\n\n\n\\begin{figure*}\n\\centering\n\\begin{subfigure}{\\linewidth}\n \\centering\n \\includegraphics[width=0.85\\linewidth]{figures\/images_wacv18\/plot_exp_comp} \n\\end{subfigure}%\n\\caption{Qualitative results of our inpainting method on different patches of dimension 100 $\\times$ 100 (24 $\\times$ 24 for \\textit{small\\_4x}) with global networks. Patches are taken at random from the test set of meshes of shoe soles and brain, and random masks of variable size, shown in cyan (light blue), are chosen for the task of inpainting. Results of the inpainted patches with different network architectures are shown in the bottom rows.
}\n\\label{fig:qualitative_test}\n\\end{figure*}\n\n\\begin{table*}\n\\centering\n\\small\n\\begin{tabular}{l|rr|rrrrrrr}\n\\toprule\nMeshes & \\cite{Liepa2003} & Global Dict & \\textbf{small\\_4x} & \\textbf{multi\\_6x} & \\textbf{6x\\_128} & \\textbf{6x\\_128\\_FC} & \\textbf{l\\_12x} & \\textbf{l\\_12x\\_SC} \\\\\n\\midrule\nSupernova & 0.001646 & 0.000524 & 0.000427 & 0.000175 & 0.000173 & 0.000291 & 0.000185 & 0.000162 \\\\\nTerrex & 0.001258 & 0.000575 & 0.000591 & 0.000373 & 0.000371 & 0.000488 & 0.000395 & 0.000369\\\\\nWander & 0.002214 & 0.000901 & 0.000894 & 0.000631 & 0.000628 & 0.001033 & 0.000694 & 0.000616 \\\\\nLeatherShoe & 0.000854 & 0.000532 & 0.000570 & 0.000421 & 0.000412 & 0.000525 & 0.000451 & 0.000407 \\\\\nBrain & 0.002273 & 0.000587 & 0.000436 & 0.000166 & 0.000171 & 0.000756 & 0.000299 & 0.000165 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error on our dataset of shoe soles for hole sizes 0.015, 0.025 and 0.035 with single CNNs of different architectures, and its comparison to the global dictionary based method. As expected, the error decreases as the network complexity increases (network length, skip connections, etc.).}\n\n\\label{table:inpaintingall}\n\\end{table*}\n\n\n\\textbf{Training and Testing} We train different CNNs from the clean meshes as described in the following sections. For testing or hole filling, we systematically punched holes of different sizes (limited by the patch length) a uniform distance apart in the models of our dataset to create a noisy test dataset. The holes are triangulated to obtain connectivity as described in Section \\ref{sec:testing_inpainting}. Finally, noisy patches are generated on a different set of quad-meshes (reference frames) computed on the hole-triangulated mesh, so that a different set of patches is used during testing. 
More on the generalising capability of the CNNs is discussed in Section \\ref{sec:generalization}.\n\n\n\n\\begin{figure*}\n\\centering\n\\small\n\\begin{tabular}{|cccccc|}\n\nHoles & GT & \\cite{Liepa2003} & Global Dict & small\\_4x & long\\_12x\\_SC \\\\\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot00.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot01.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot02.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot03.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot04.jpg}&\n\\includegraphics[width=0.09\\linewidth]{figures\/images_wacv18\/snaps\/supernova2\/snapshot05.jpg}\\\\\n\\end{tabular}\\begin{subfigure}{0.12\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_wacv18\/snaps\/totem_quads\/bh\/snapshot00} \n\\end{subfigure}%\n\\begin{subfigure}{0.12\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_wacv18\/snaps\/totem_quads\/bh\/snapshot01} \n\\end{subfigure}\n\\caption{(Left) Qualitative results of hole filling on the mesh Supernova with a hole radius of 0.025 using global generative methods. (Right) Example of the quad mesh used in training (Left) and testing (Right) for the mesh Totem. Best viewed when zoomed digitally. An enlarged version and more results are provided in the supplementary material. 
}\n\\label{fig:inpaint_mesh_qual}\n\\end{figure*}\n\n\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{lccc}\n\\toprule\nMeshes & \\cite{Liepa2003} & Local Dictionary & Local CNN - small\\_4x \\\\\n\\midrule\nSupernova & 0.001646 & 0.000499 & 0.000415 \\\\\nTerrex & 0.001258 & 0.000595 & 0.000509 \\\\\nWander & 0.002214 & 0.000948 & 0.000766 \\\\\nLeatherShoe & 0.000854 & 0.000569 & 0.000512 \\\\\nBrain & 0.002273 & 0.000646 & 0.000457 \\\\\n\n\\bottomrule\n\\end{tabular}\n\\caption{Mean inpainting error for hole sizes 0.015, 0.025 and 0.035 on the high texture dataset, using local patches generated on the clean mesh of the corresponding shape.}\n\\label{table:inpaint_local}\n\\end{table}\n\n\n\\subsubsection{Hole filling on a single mesh}\n\\label{sec:singlemesh}\nAs explained before, our 3D patches from a single mesh are sufficient in number to train a generative model for that mesh.\nNote that we still need to provide an approximately correct scale for the quad mesh computation of the noisy mesh, so that the training and testing patches are not too different in size. \nTable \\ref{table:inpaint_local} shows the result of hole filling using our smallest network - \\textit{small\\_4x} - in terms of the mean Cloud-to-Mesh error of the inpainted vertices, and compares it with our linear dictionary based inpainting results. We also provide the results from \\cite{Liepa2003} in the table for a more complete comparison. In this experiment, we learn one CNN per mesh on the patches of the clean input mesh (similar to the local dictionary model) and test it on hole data as explained in the above section. As seen, our smallest network beats the linear approach to surface inpainting. 
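The Cloud-to-Mesh error used throughout this comparison can be sketched in a few lines. The snippet below is an illustrative approximation rather than our evaluation code: the exact point-to-triangle distance is replaced by a nearest-neighbour distance to a dense point sampling of the ground-truth surface, and all names are hypothetical.

```python
import numpy as np

def cloud_to_mesh_error(inpainted_vertices, reference_samples):
    """Mean distance from each inpainted vertex to its nearest point in a
    dense sampling of the ground-truth surface. This nearest-neighbour
    distance approximates the exact point-to-triangle distance."""
    diffs = inpainted_vertices[:, None, :] - reference_samples[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)   # shape (n_vertices, n_samples)
    return dists.min(axis=1).mean()

# Toy check: the reference surface is the plane z = 0; the "inpainted"
# vertices sit 0.01 above the sample positions, so the error is 0.01.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
reference = np.stack([xs.ravel(), ys.ravel(), np.zeros(100)], axis=1)
inpainted = reference + np.array([0.0, 0.0, 0.01])
print(cloud_to_mesh_error(inpainted, reference))  # ≈ 0.01
```

In practice one would sample the reference mesh densely (or use an exact point-to-surface distance) and restrict the average to the inpainted vertices, as done for the tables here.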
\n\nWe also train a long network \\textit{long\\_12x\\_SC} (our best performing global network) with an offset factor of $k = 7$, giving us a total of 28 overlapping patches per quad location for the model \\textit{Supernova}; the qualitative result is shown in Figure \\ref{fig:cnn_length} (Left). The figure verifies qualitatively that, with a sufficient number of dense overlapping patches and a more complete CNN architecture, our method is able to inpaint surfaces with very high accuracy.\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Global Denoising Autoencoder}\n\\label{sec:result_globalcnn}\nEven though the inputs to the CNN are local patches, we can still create a single CNN designed to repair a set of meshes, provided the meshes are pooled from a similar domain. This is analogous to the \\textit{global dictionary}, where the dictionary was learnt from the patches of a pool of meshes. But to accommodate more variation between the meshes in the set, the network needs to be well designed. \nThis is shown in the column \\textit{global CNN} of Table \\ref{table:inpaint_global}, where our inpainting result with a single CNN (\\textit{small\\_4x}) for common meshes (Type 1 dataset) is comparable to our linear global-dictionary based method (column \\textit{global dictionary}), but not better. With the premise that CNNs are more powerful than linear dictionary based methods, we perform additional experiments incorporating all the careful design choices discussed in Section \\ref{sec:networkdesign} to create global CNNs for inpainting different meshes in a similar domain. 
The objectives of these experiments are 1) to evaluate different denoising autoencoder ideas for inpainting in the context of height map based 3D patches, 2) to verify that carefully designed generative CNNs perform better than linear dictionary based methods, and 3) to show how to design a single denoising autoencoder for inpainting meshes from a similar domain, or across varied domains, when the number of meshes is not too high. We, however, do not claim that this procedure makes it possible to have a single model (be it global dictionary or global CNN) capable of learning and inpainting across a large number of meshes (say all meshes in ShapeNet); nor is this our intention. \n\nFigure \\ref{fig:qualitative_test} provides qualitative results for the different networks, showing the patches reconstructed from the masked incomplete patches. The results show that the quality of the reconstruction increases with the network complexity. In terms of capturing overall details, the network with the FC layer seems to reconstruct patches close to the original, but with a lack of contrast. This is reflected in the quantitative results, where the network with FC performs worse than most of the networks. The quantitative results are shown in Table \\ref{table:inpaintingall}. The best result, qualitatively and quantitatively, is obtained by \\textbf{long\\_12x\\_SC} - the longest network with symmetrical skip connections. Figure \\ref{fig:cnn_length} (Right) provides more insight into the importance of the skip connections. 
Visualizations of the reconstructed hole-filled mesh are provided in Figure \\ref{fig:inpaint_mesh_qual} (Left).\n\n\n\n\n\n\\subsection{Generalisation capability}\n\\label{sec:generalization}\n\\textbf{Patches from common pool}\nWe perform reconstruction of \\textit{Totem} using both the local dictionary and the global dictionary with different numbers of atoms, to determine whether the reconstruction error, or the shape information encoded by the dictionary, depends on where the patches come from at training time. We observed that when the number of dictionary atoms is sufficiently large (200 - 500), the global dictionary performs as well as the local dictionary (Figure \\ref{fig:recerror_global_local}). This is also supported by the superior performance of the global dictionary in terms of hole filling. \n\nKeeping the number of atoms fixed at the value at which the performances of the local and global dictionaries become indistinguishable (500 in our combined dataset), we learned global dictionaries using the patches from different shapes, adding one shape at a time. The reconstruction error of \\textit{Totem} using these global dictionaries varied very little. We do notice a steady increase in the reconstruction error with the number of objects used for learning, which levels off after a certain number of objects. After that point (6 objects), adding more shapes for learning makes no difference in the final reconstruction error (Figure \\ref{fig:reconstructioncomplexity}). This verifies our hypothesis that the reconstruction quality does not deteriorate significantly as the size of the dataset of common meshes used for learning increases.\n\n\\textbf{Different test meshes}\nWe perform experiments to see how the inpainting method generalizes among different shapes and use the Type 1 dataset of \\cite{Sarkar2017a}, consisting of general shapes like Bunny, Fandisk, Totem, etc. These meshes do not have a high amount of specific surface patterns. 
Column \\textit{global CNN ex} of Table \\ref{table:inpaint_global} shows the quantitative result for the network \\textit{small\\_4x} when inpainting meshes with a network trained on the patches of the other meshes. It is seen that if the shape being inpainted does not have too much characteristic surface texture, the inpainting method generalizes well. Note that this result is still better than the geometry based inpainting result of \\cite{Liepa2003}. Thus, it can be concluded that our system is a valid system for inpainting simple and standard surface meshes (e.g. \\textit{Bunny}, \\textit{Milk-bottle}, \\textit{Fandisk}, etc.). \n\nHowever, for complicated and characteristic surfaces (e.g. the shoe dataset), we need to learn on the surface itself, because of the inherent nature of the input to our CNN - \\textit{local patches} (instead of global features which take an entire mesh as input) that are supposed to capture the surface details of their own mesh. Evaluating the generalizing capability of such a system requires computing patches at different locations between the training and testing sets, instead of on a different mesh altogether. As explained before, in all our inpainting experiments, we explicitly made sure that the patches used during testing do not belong to the training set by manually computing a different set of quad meshes (reference frames) for the hole-triangulated mesh. To make absolutely sure that testing is done on a different set of patches, we manually tuned different parameters of \\cite{Ebke2013} for quadrangulation. One example of such a pair of quad meshes for the mesh Totem is shown in Figure \\ref{fig:inpaint_mesh_qual} (Right). \n\n\n\n\n\n\n\n\n\nThe generalization capability can also be tested across surfaces that are similar in nature but come from a different sample. \nThe mesh Stone Wall from \\cite{Zhou2013}, which has two sides of similar nature, provides a good example of such data. 
We fill holes on one side by training the CNN on the other side and show the qualitative result in Figure \\ref{fig:wall}. This verifies that the CNN generalizes well to reconstructing unseen patches.\n\n\\textbf{Discussion on texture synthesis} We add a small discussion on the topic of texture synthesis, as a good part of our evaluation focuses on a dataset of meshes rich in texture.\nAs stated in the related work, both dictionary based \\cite{Aharon2006} and BM3D based \\cite{Dabov2007} algorithms are well known to work with textures in terms of denoising 2D images. Both approaches have been extended to denoising 3D surfaces. Because of the patch matching step in BM3D (patches are matched and kept in a block if they are similar), it is not simple to extend it to the task of 3D inpainting with moderate sized holes, as a good matching technique has to be proposed for incomplete patches. Iterative Closest Point (ICP) is a promising means of such grouping, as used by \\cite{Rosman2013} for extending BM3D to 3D point cloud denoising. Since the contribution in \\cite{Rosman2013} is limited to denoising surfaces, we could not compare our results with it - further extending \\cite{Rosman2013} to inpainting is not trivial and requires further investigation. Instead, we compared our results with the dictionary based inpainting algorithm proposed in \\cite{Sarkar2017a}.\n\nInpainting of repeating structures is well studied in \\cite{Pauly2008}. Because their code is not available and no results on standard meshes have been published, we could not compare our results to theirs. We also do not claim our method to be superior to theirs in the high texture scenario, though we show a high quality result with an indistinguishable inpainted region for one of the meshes in Figure \\ref{fig:cnn_length} (Left) using a deep network. However, we do claim our method to be more general, and to work for shapes with no explicit repeating patterns (e.g. 
Type 1 dataset), which is not possible with \\cite{Pauly2008}.\n\n\\begin{figure}[t]\n\\centering\n\\begin{subfigure}{0.25\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/supernovaqual.pdf} \n\\end{subfigure} \n\\begin{subfigure}{0.6\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/plot_cnnlength.pdf} \n\\end{subfigure}\n\\caption{(Left) Qualitative result of inpainting on a single mesh with an overlap factor of $k = 7$. (Right) Mean inpainting error for high texture meshes w.r.t. the number of parameters in the CNN. The inpainting error decreases with increasing network depth, saturates at some point, and worsens if the depth is increased further. The presence of symmetrical skip connections decreases the error further, showing their importance for training longer networks.}\n\\label{fig:cnn_length}\n\\end{figure}\n\\begin{table}\n\\centering\n\\small\n\\begin{tabular}{lcc|c}\n\\toprule\n{} & global & global CNN & global CNN ex\\\\\n& dictionary& small\\_4x & small\\_4x \\\\\n\\midrule\nMilk-bottle & 0.000123 & 0.000172 & 0.000187 \\\\\nBaseball & 0.000168 & 0.000113 & 0.000138\\\\\nTotem & 0.001052 & 0.001038 & 0.001406\\\\\nBunny & 0.000569 & 0.000780 & 0.000644\\\\\nFandisk & 0.000634 & 0.000916 & 0.000855\\\\\n\\bottomrule\n\\end{tabular}\n\\caption{(Left) Mean inpainting error for hole sizes 0.01, 0.02 and 0.03 on the common mesh dataset using global models. For the column \\textit{global CNN} we use a single global CNN (small\\_4x) trained on the local patches of all the meshes. The result of this small network is comparable to that of the linear global dictionary, but not better. This shows that there is more scope for improvement with a better network design for CNNs. \n(Right) In the column \\textit{global CNN ex}, for each mesh, we use a global CNN (small\\_4x) trained on the local patches of all the meshes except itself. 
More discussion is in Section \\ref{sec:generalization}.\n}\n\\label{table:inpaint_global}\n\\end{table}\n\n\\subsection{Limitations and failure cases}\n\\label{sec:failurecases}\n\n\\noindent\n\\textbf{General limitations} - The quad mesh on the low resolution mesh provides a good way of achieving stable orientations for computing moderate-length patches on 3D surfaces. However, in highly complicated areas such as joints, and for a large patch length, the height map based patch description becomes invalid due to multiple overlapping surfaces over the reference quad, as shown in Figure \\ref{fig:failurecase} (left). Also, the method in general does not work for full shape completion, where the entire global outline has to be predicted.\n\n\\noindent\n\\textbf{Generative network failure cases} - It is observed that small missing regions are reconstructed accurately by our long generative networks. Failure cases arise when the missing region is large. In the first case, the network reconstructs the region according to the patch context slightly differently from the ground truth (Figure \\ref{fig:failurecase}-A). The second case is similar, where the network misses fine details in the missing region but still reconstructs well according to the other dominant features. The third case, often seen in the network with FC, is a lack of contrast in the final reconstruction (Figure \\ref{fig:failurecase}-C). 
Failure cases for smaller networks can be seen in Figure \\ref{fig:qualitative_test}.\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}[b]{0.55\\linewidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figures\/images_wacv18\/stonewall} \n \\caption{Experiment on Stone Wall} \n \\label{fig:wall}\n\\end{subfigure}%\n\\begin{subfigure}[b]{0.45\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/images_wacv18\/failurecase_badpatch} \n \\caption{Failure cases.}\n \\label{fig:failurecase}\n\\end{subfigure}\n\\caption{(a) Scanned mesh of Stone Wall \\cite{Zhou2013}, which has two sides of similar nature, shown at the top. The CNN \\textbf{6x\\_128} was trained on the patches generated on one side (Top Left) to recover the missing details on the other side (Top Right); the result is shown at the bottom. \n(b) Failure cases - (Left) bad or invalid patches (point cloud with RF at the top, and its corresponding broken and invalid surface representation at the bottom) in complicated areas of a mesh. (Right) Three failure case scenarios of the CNN.\n}\n\\end{figure}\n\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nIn this paper, we presented a first attempt at using generative models on 3D shapes with a representation and parameterization other than voxel grids or 2D projections. To that end, we proposed a new method for encoding the 3D surface of arbitrary shapes using rectangular local patches.\nWith these local patches, we designed generative models, inspired by those for 2D images, for inpainting moderate-sized holes, and showed our results to be better than those of geometry based methods. \nWith this, we identified an important direction for future work - the exploration of CNNs applied to 3D shapes in a parameterization different from the generic voxel representation. 
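The local height-map encoding recapped above can be sketched as follows. This is a minimal illustration, not our implementation: the function name, the per-cell averaging, and the handling of empty cells are assumptions, and the reference frame (origin, normal, tangent) would in our setting come from a quad of the coarse quad-mesh.

```python
import numpy as np

def height_map_patch(points, origin, normal, u_axis, size, res=24):
    """Rasterise a square local patch of a point cloud into a res x res
    height map over the reference frame (origin, normal, u_axis)."""
    v_axis = np.cross(normal, u_axis)            # completes the local frame
    local = points - origin
    u, v, h = local @ u_axis, local @ v_axis, local @ normal
    iu = np.floor((u / size + 0.5) * res).astype(int)
    iv = np.floor((v / size + 0.5) * res).astype(int)
    ok = (iu >= 0) & (iu < res) & (iv >= 0) & (iv < res)
    heights = np.zeros((res, res))
    counts = np.zeros((res, res))
    np.add.at(heights, (iu[ok], iv[ok]), h[ok])  # accumulate heights per cell
    np.add.at(counts, (iu[ok], iv[ok]), 1)
    filled = counts > 0
    heights[filled] /= counts[filled]            # average height per cell
    return heights, filled

# Toy check: points on the plane z = 0.1 give a constant height map.
g = np.meshgrid(np.linspace(-0.49, 0.49, 48), np.linspace(-0.49, 0.49, 48))
pts = np.stack([g[0].ravel(), g[1].ravel(), np.full(48 * 48, 0.1)], axis=1)
hm, filled = height_map_patch(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]),
                              np.array([1.0, 0.0, 0.0]), size=1.0)
print(filled.all(), np.allclose(hm, 0.1))  # True True
```

The grid structure produced this way is what makes standard 2D convolutional layers applicable to the surface data.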
\nIn continuation of this particular work, we would like to extend the local quad based representation to a global shape representation that uses mesh quadrangulation, as it inherently provides the grid-like structure required for the application of convolutional layers. This, we hope, will provide an alternative way of 3D shape processing in the future.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nReproducing kernel Hilbert spaces were introduced by Zaremba \\cite{za1907} and Mercer \\cite{me1909} and were first studied in a systematic fashion by Aronszajn \\cite{aro50} in 1950. Ever since, these spaces have played an important role in many branches of mathematics such as complex analysis \\cite{duschu04}, approximation theory \\cite{wah90} and, only recently, in learning theory and classification due to the celebrated representer theorem \\cite{schhesm01}. Another field with manifold connections to reproducing kernels is frame theory and its relatives.\n\nDiscrete frames were introduced in the 1950s in the context of nonharmonic Fourier analysis \\cite{duscha52} and were later generalized to continuous frames on arbitrary positive measure spaces in the early 1990s \\cite{alanga93,ka90}. Reproducing kernel theory can be employed to construct continuous frames and, conversely, frame theory can be used to study reproducing kernels \\cite{jo06}.\n\nAlthough frames are convenient objects to work with, there exists a large reservoir of interesting systems that are complete and do not satisfy both frame conditions. Therefore, semi-frames \\cite{jpaxxl09,jpaxxl12} and reproducing pairs \\cite{ansptr15,spexxl14,spexxl16} have been introduced. An upper (resp. lower) semi-frame is a complete system that only satisfies the upper (resp. lower) frame inequality. 
A reproducing pair is a pair of mappings that generates a bounded and boundedly invertible analysis\/synthesis process without assuming any frame inequality.\n\nThis paper is divided into three major parts portraying different connections between frames, reproducing pairs and reproducing kernel Hilbert spaces. In the first part we investigate systems taking values in a reproducing kernel Hilbert space. We present an explicit expression for the reproducing kernel in terms of a reproducing pair. This is an extension of the results from \\cite{paul09,raca05}. Moreover, we introduce a novel necessary condition for a vector family to form a frame.\n\n\nThe second part is devoted to studying the redundancy of (semi-)frames. In the discrete case, the redundancy of a frame measures how much the Hilbert space is oversampled by the frame, see for example \\cite{bocaku11,cacahe11}. It is, however, impossible to directly generalize the notion of redundancy to continuous (semi-)frames. The approach chosen in \\cite{hedera00} thus takes a detour via the concept of Riesz bases, i.e., non-redundant discrete frames. A discrete frame $\\Psi$ is a Riesz basis if its analysis operator $C_\\Psi$ is surjective. Following \\cite{hedera00}, the redundancy of a (semi-)frame is defined by\n\\begin{equation*}\nR(\\Psi):=\\dim({\\sf Ran}\\, C_\\Psi {}^\\bot).\n\\end{equation*}\nIt has been observed in several articles \\cite{hedera00,hogira13,jale15} that $R(\\Psi)$ depends on the underlying measure space $(X,\\mu)$. In particular, if a (lower semi-)frame has finite redundancy, then it follows that $(X,\\mu)$ is atomic. The proofs in the aforementioned papers all rely in one way or another on the following argument: if the redundancy of a frame is zero (finite), then\n$$\n\\inf\\big\\{\\mu(A):\\ A\\mbox{ measurable and }\\mu(A)>0\\big\\}=C>0,\n$$\nwhich implies that $(X,\\mu)$ is atomic. 
We will give a new proof here using the reproducing kernel Hilbert space structure of the range of $C_\\Psi$. It is interesting to note that upper semi-frames behave essentially differently in this regard. We show that there exist upper semi-frames on non-atomic measure spaces with redundancy zero. \n\nAs a by-product, we conclude that efforts to generalize Riesz bases to the continuous setting \\cite{arkatota12,gaha03} cannot succeed. This is because the underlying measure space of a frame with redundancy zero is atomic (and therefore discrete).\nMoreover, we show that every frame can be split into a discrete and a strictly continuous Bessel system.\n\n\nThe final part of this paper is concerned with characterizing the ranges of the analysis operators of a reproducing pair. The omission of the frame inequalities causes the problem that ${\\sf Ran}\\, C_\\Psi$ need no longer be contained in $L^2(X,\\mu)$. We will demonstrate how a reproducing pair intrinsically generates a pair of reproducing kernel Hilbert spaces and calculate the reproducing kernel.\n\n\nThis paper is organized as follows. After introducing the main concepts in Section \\ref{sec:prel-rkhs} we first consider systems on reproducing kernel Hilbert spaces in Section \\ref{sec:char-RKHS}. 
Then, in Section \\ref{redundancy-section} we investigate the redundancy of continuous (semi-)frames.\nFinally, we show how a reproducing pair intrinsically generates a pair of RKHSs in Section \\ref{sec:rep-pair-rkhs} and characterize the reproducing kernels.\n \n\\section{Preliminaries}\\label{sec:prel-rkhs}\n\n\\subsection{Atomic and non-atomic measures}\nThroughout this paper we will assume that $(X,\\mu)$ is a nontrivial measure space with $\\mu$ being $\\sigma$-finite and positive.\nA measurable set $A \\subset X$ is called an atom if $\\mu(A)>0$ and, for any measurable subset $B\\subset A$ \nwith $\\mu(B)<\\mu(A)$,\nit holds that $\\mu(B)=0$.\nA measure space is called atomic if there exists a partition $\\{A_n\\}_{n\\in \\mathbb{N}}$ of $X$\nconsisting of atoms and null sets.\n$(X,\\mu)$ is called non-atomic if there are no atoms in $(X,\\mu)$. To our knowledge there is no term to denote a\nmeasure space which is \nnot atomic. In order to avoid any confusion with non-atomic spaces, we will therefore call a measure space an-atomic if it is not atomic.\n\n\nA well-known result by Sierpi{\\'n}ski states that non-atomic measures attain a continuum of values.\n\\begin{theorem}[Sierpi{\\'n}ski \\cite{sie22}]\\label{sierpinski-thm}\n Let $(X,\\mu)$ be a non-atomic measure space and let $A\\subset X$ be measurable with positive measure, then,\n for every $0\\leq b\\leq \\mu(A)$,\n there exists $B\\subset A$ such that $\\mu(B)=b$.\n\\end{theorem}\nWe will later separate the purely continuous part of a frame from the discrete part. For the construction, we need the following auxiliary result. 
Since we could not find any reference for the second part, we will provide a proof in the appendix.\n\\begin{lemma}\\label{not-atomic-non-atomic}\nLet $(X,\\mu)$ be a $\\sigma$-finite measure space.\\begin{enumerate}[(i)]\\item There exists $\\mu_a$ atomic and $\\mu_c$ non-atomic such that \n\\begin{equation}\\label{measure-partition}\n\\mu=\\mu_a+\\mu_c.\n\\end{equation}\n\\item If $(X,\\mu)$ is an-atomic, then there exists $A\\subset X$ with $\\mu(A)>0$ and $(A,\\mu)$ non-atomic.\n\\end{enumerate}\n\\end{lemma}\n\n\n\n\\subsection{Continuous frames, semi-frames and reproducing pairs}\nFrames were first introduced by Duffin and Schaeffer \\cite{duscha52} in the context of non-harmonic Fourier analysis. \nIn the early 1990's, Ali et al. \\cite{alanga93} and Kaiser \\cite{ka90} independently extended frames to\nmappings acting on a measure space $(X,\\mu)$. \n\nDenote by $GL(\\H)$ the space of all bounded linear operators on $\\H$ with bounded inverse and let\n$\\mathcal{H}$ be a separable Hilbert space.\n\\begin{definition}\\label{def-cont-frame}\nA mapping $\\Psi:X\\rightarrow \\mathcal{H}$ is called a continuous frame if\n\\begin{enumerate}[(i)]\n \\item $\\Psi$ is weakly measurable, that is, $x\\mapsto\\langle f,\\Psi(x)\\rangle$ is a measurable function for every\n $f\\in\\mathcal{H}$, \n \\item there exist positive constants $m,M>0$ such that\n \\begin{equation}\\label{frame-condition}\n m\\left\\|f\\right\\|^2\\leq\\int_{X}\\left|\\langle f,\\Psi(x)\\rangle\\right|^2d\\mu(x)\\leq M\\left\\|f\\right\\|^2,\\ \\\n \\forall f\\in\\mathcal{H}.\n \\end{equation}\n\\end{enumerate}\n\\end{definition}\nThe constants $m,M$ are called the frame bounds and $\\Psi$ is called Bessel if at least the second inequality in \n(\\ref{frame-condition}) is satisfied.\nIf $(X,\\mu)$ is a countable set equipped with\nthe counting measure then one recovers the classical definition of a discrete frame, see for example \\cite{christ1}. 
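As a quick numerical illustration of the frame condition above (an aside, not part of the formal development), take $X=\\{1,2,3\\}$ with the counting measure and $\\mathcal{H}=\\mathbb{R}^2$: the three ``Mercedes-Benz'' vectors at $120^\\circ$ spacing form a tight frame with bounds $m=M=3\/2$.

```python
import numpy as np

# Mercedes-Benz frame: three unit vectors in R^2 at 120-degree spacing.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
psi = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # rows: Psi(1..3)

rng = np.random.default_rng(0)
for f in rng.standard_normal((100, 2)):
    energy = np.sum((psi @ f) ** 2)          # sum_x |<f, Psi(x)>|^2
    assert np.isclose(energy, 1.5 * f @ f)   # tight frame: m = M = 3/2
print("frame inequalities hold with m = M = 3/2")
```

Removing one of the three vectors leaves a basis of $\\mathbb{R}^2$, so the system is still a frame, but with distinct bounds $m=1\/2$ and $M=3\/2$; it is the failure of one of the two inequalities on infinite index sets that semi-frames relax.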
For a short and self-contained introduction to continuous \nframes, we refer the reader to \\cite{ranade06}.\n\nThe fundamental operators in frame theory are the analysis operator\n$\n C_\\Psi:\\mathcal{H}\\rightarrow L^2(X,\\mu)$, $C_\\Psi f(x):=\\langle f,\\Psi(x)\\rangle,\n$\nand the synthesis operator\n\\begin{equation*}\n D_\\Psi:L^2(X,\\mu)\\rightarrow \\mathcal{H},\\ \\ \\ D_\\Psi F:=\\int_X F(x)\\Psi(x)d\\mu(x),\n\\end{equation*}\nwhere the integral is defined weakly. Observe that $C_\\Psi^\\ast =D_\\Psi$ whenever $\\Psi$ is Bessel. The \nframe operator $S_\\Psi\\in GL(\\H)$ is defined as the composition of $C_\\Psi$ and $D_\\Psi$\n\\begin{equation*}\n S_\\Psi:\\mathcal{H}\\rightarrow \\mathcal{H},\\ \\ \\ \n S_\\Psi f:=D_\\Psi C_\\Psi f=\\int_X\\langle f,\\Psi(x)\\rangle \\Psi(x)d\\mu(x).\n\\end{equation*}\nEvery frame $\\Psi^d$ satisfying\n\\begin{equation*}\nf=D_\\Psi C_{\\Psi^d} f=D_{\\Psi^d}C_\\Psi f,\\ \\ \\forall f\\in \\mathcal{H},\n\\end{equation*}\nis called a dual frame for $\\Psi$. For every frame there exists at least one dual frame, $S_\\Psi^{-1}\\Psi$, called the canonical \ndual frame. As the analysis operator is in general not onto $L^2(X,\\mu)$, there may exist several dual frames for $\\Psi$. \n\nFrames have proven to be a useful tool in many different fields of mathematics such as signal processing \\cite{nsdgt10} or mathematical physics \\cite{alanga00,xxlbayasg11}. There is however a great variety of examples of complete systems that do not meet both frame conditions. Several concepts to generalize the frame property have thus been proposed. An upper (resp. lower) semi-frame is a complete system that only satisfies the upper (resp. lower) frame inequality, see \\cite{jpaxxl09,jpaxxl11,jpaxxl12}.\n\nAnother generalization is the concept of reproducing pairs, defined in \\cite{spexxl14} \nand further investigated in \\cite{ansptr15,antra16,spexxl16}. 
Here, one considers a pair of mappings instead of a single one and no frame inequality is assumed to hold.\n\\begin{definition}\\label{rep-pair-definition}\n Let $\\Psi,\\Phi:X\\rightarrow\\mathcal{H}$ weakly measurable.\n The pair of mappings $(\\Psi,\\Phi)$ is called a reproducing pair for $\\mathcal{H}$ if the resolution operator \n $S_{\\Psi,\\Phi}:\\mathcal{H}\\rightarrow \\mathcal{H}$, weakly defined by\n \\begin{equation}\\label{rep-pair-def}\n \\langle S_{\\Psi,\\Phi} f,g\\rangle:=\\int_X \\langle f,\\Psi(x)\\rangle \\langle\\Phi(x),g\\rangle d\\mu(x),\n \\end{equation}\nis an element of $GL(\\mathcal{H})$.\n\\end{definition}\nObserve that Definition \\ref{rep-pair-definition} is indeed a generalization of continuous frames. On the one hand, neither \n$\\Psi$ nor $\\Phi$ are required to meet the frame conditions and,\non the other hand, a weakly measurable mapping $\\Psi$ is a continuous frame if, and only if, $(\\Psi,\\Psi)$ is a reproducing pair.\nNote that reproducing pairs are conceptually similar to the concept of weak duality \\cite{fezi98} where one considers expansions in terms of a Gelfand triplet.\n\n\\subsection{Reproducing kernel Hilbert spaces (RKHS)}\nLet $\\mathcal{F}(X,\\mathbb{C})$ denote the vector space of all functions $f:X\\rightarrow\\mathbb{C}$. 
\nReproducing kernel Hilbert spaces are particularly convenient subspaces of $\\mathcal{F}(X,\\mathbb{C})$ since they allow for a pointwise interpretation of their elements, unlike, for example, Lebesgue spaces.\n\n\\begin{definition}\nLet $\\H_{K}\\subset \\mathcal{F}(X,\\mathbb{C})$ be a Hilbert space. $\\H_K$ is called a reproducing kernel Hilbert space (RKHS) if \nthe point evaluation functional $\\delta_x:\\H_{K}\\rightarrow\\mathbb{C}$, $\\delta_x(f):=f(x)$, is bounded for \n every $x\\in X$, that is, if there exists $C_x>0$ such that $|\\delta_x(f)|\\leq C_x\\|f\\|$, for all $f\\in\\H_K$.\n\\end{definition}\nAs $\\delta_x$ is bounded, there exists a unique vector $k_x\\in \\H_{K}$ such that $f(x)=\\langle f,k_x\\rangle,$ for all $f\\in\\H_{K}$.\nThe function $K(x,y)=k_y(x)=\\langle k_y,k_x\\rangle$ is called the reproducing kernel for $\\H_{K}$. The \nreproducing kernel is unique, satisfies $K(x,y)=\\overline{K(y,x)}$, and \nits diagonal is of the following form\n$$K(x,x)=\\langle k_x,k_x\\rangle=\\|k_x\\|^2=\\sup\\big\\{|f(x)|^2:\\ f\\in\\H_K,\\ \\|f\\|=1\\big\\}.$$\nThe following result can be found in \\cite[Theorem 3.1]{alanga93}.\n\\begin{theorem}\\label{charact-of-RKHS}\n Let $\\H_{K}$ be a RKHS and $\\{\\phi_i\\}_{i\\in\\mathcal{I}}\\subset \\H_{K}$ an orthonormal basis. Then \n \\begin{equation}K(x,y)=\\sum_{i\\in\\mathcal{I}}\\phi_i(x)\\overline{\\phi_i(y)}, \\end{equation}\n with pointwise convergence. 
In particular, \n \\begin{equation}\\label{pointwise-l2-onb}\n0<\\sum_{i\\in\\mathcal{I}}|\\phi_i(x)|^2=K(x,x)<\\infty,\\ \\forall x\\in X.\n\\end{equation}\nConversely, if there exists an orthonormal basis for a Hilbert space $\\H_K\\subset \\mathcal{F}(X,\\mathbb{C})$ that satisfies \\eqref{pointwise-l2-onb},\nthen $\\H_{K}$ can be identified with a RKHS consisting of functions $f:X\\rightarrow \\mathbb{C}$.\n\nIf $X$ is equipped with a measure $\\mu$ and $\\H_K\\subset L^2(X,\\mu)$, then $\\Psi(x):=K(x,\\cdot)$ is a continuous Parseval frame.\n\\end{theorem}\nFor a thorough introduction to RKHS we refer the reader to \\cite{aro50,paul09}. We will investigate the connection between RKHS and frames (resp. reproducing pairs) in two different ways. In Section 3 we consider frames (resp. reproducing pairs) taking values in a RKHS, whereas in Sections 4 and 5 we investigate the RKHS generated by the range of the analysis operator of a frame (resp. reproducing pair).\n\n\\section{Frames and reproducing pairs taking values in a RKHS}\\label{sec:char-RKHS}\nIn this section we will mainly investigate two questions. First, given a RKHS, what can be said about the pointwise behavior of frames, and how can the reproducing kernel be characterized? Second, which conditions on a frame ensure that the space possesses a reproducing kernel?\\\\\nThe following result adapts the \narguments of the proof of \\cite[Theorem 3.12]{paul09} to the case of reproducing pairs.\n\\begin{theorem}\\label{kernel-and-rep-pair}\n Let $\\H_{K}$ be a RKHS and $\\Psi=\\{\\phi_i\\}_{i\\in\\mathcal{I}},\\ \\Phi=\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset\\H_K$. The pair $(\\Psi,\\Phi)$\n is a reproducing pair for $\\H_K$ if, and only if, there exists $A\\in GL(\\H_K)$ such that\n\\begin{equation}\\label{rep-prod-char-ker}\nK(x,y)=\\sum_{i\\in\\mathcal{I}} (A\\phi_i)(x)\\overline{\\psi_i(y)}=\\sum_{i\\in\\mathcal{I}} (A^\\ast\\psi_i)(x)\\overline{\\phi_i(y)},\n\\end{equation}\n where the series converges pointwise. In particular, $A$ is unique and given by $S_{\\Psi,\\Phi}^{-1}$.\n \\end{theorem}\n\\textbf{Proof:\\ } Let $(\\Psi,\\Phi)$ be a reproducing pair. Then it holds\n$$\nK(x,y)=\\ip{k_y}{k_x}=\\sum_{i\\in\\mathcal{I}} \\ip{k_y}{\\psi_i}\\ip{S_{\\Psi,\\Phi}^{-1}\\phi_i}{k_x}=\\sum_{i\\in\\mathcal{I}} \\overline{\\psi_i(y)}(S_{\\Psi,\\Phi}^{-1}\\phi_i)(x).\n$$\nConversely, assume that $K$ is given by \\eqref{rep-prod-char-ker}. 
Let $f,g\\in \\mbox{span}\\{k_x\\hspace{-0.07cm}:\\ x\\in X\\}$, that is, there exist $N\\in\\mathbb{N}$ and \n$\\alpha_n,\\beta_m\\in \\mathbb{C}$\nsuch that $f=\\sum_{n=1}^N \\alpha_n k_{x_n}$ and $g=\\sum_{m=1}^N \\beta_m k_{y_m}$. Then\n$$\n\\ip{f}{g}=\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}\\ip{k_{x_n}}{k_{y_m}}=\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}K(y_m,x_n)\n$$\n$$\n=\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}\\sum_{i\\in\\mathcal{I}}(A\\phi_i)(y_m)\\overline{\\psi_i(x_n)}=\n\\sum_{n,m=1}^N\\alpha_n\\overline{\\beta_m}\\sum_{i\\in\\mathcal{I}}\\ip{k_{x_n}}{\\psi_i}\\ip{A\\phi_i}{k_{y_m}}\n$$\n$$\n=\\sum_{i\\in\\mathcal{I}}\\Big\\langle\\sum_{n=1}^N\\alpha_n k_{x_n},\\psi_i\\Big\\rangle\\Big\\langle A\\phi_i,\\sum_{m=1}^N\\beta_mk_{y_m}\\Big\\rangle$$\n$$\n=\\sum_{i\\in\\mathcal{I}}\\ip{f}{\\psi_i}\\ip{A\\phi_i}{g}=\\ip{AS_{\\Psi,\\Phi}f}{g}.\n$$\nIn \\cite[Proposition 3.1]{paul09} it is shown that $\\mbox{span}\\{k_x\\hspace{-0.07cm}:\\ x\\in X\\}$ is dense in $\\H_K$. Therefore, it follows\nthat $AS_{\\Psi,\\Phi}=I$. As $A\\in GL(\\H_K)$ we may conclude that $S_{\\Psi,\\Phi}\\in GL(\\H_K)$, that is,\n$(\\Psi,\\Phi)$ is a reproducing pair.\n\\hfill$\\Box$\\\\\n\\begin{remark}\nAnalogous results have been proven if $\\Psi$ and $\\Phi$ are dual frames \n \\cite[Theorem 7]{raca05} or if $\\Psi=\\Phi$ is a Parseval frame \\cite[Theorem 3.12]{paul09}; these are just particular cases of Theorem \\ref{kernel-and-rep-pair}. In both cases one has\n$A=I$.\n\\end{remark}\n\n\\begin{proposition}\\label{bessel-rkhs}\nLet $\\H_K$ be a RKHS and $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset \\H_K$ Bessel. Then it holds \n\\begin{equation}\\label{eq-lower-rkhs}\n\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\nIf $\\{\\psi_i\\}_{i\\in\\mathcal{I}}\\subset \\H_K$ is a frame, then \n\\begin{equation}\\label{eq-lower-rkhs2}\n0<\\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ be Bessel. Then, for every $x\\in X$, it holds\n$$ \\sum_{i\\in\\mathcal{I}}|\\psi_i(x)|^2=\\sum_{i\\in\\mathcal{I}}|\\ip{k_x}{\\psi_i}|^2\\leq M \\|k_x\\|^2<\\infty.$$ \nAn analogous argument shows the lower bound in \\eqref{eq-lower-rkhs2} if $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ is a 
frame.\\hfill$\\Box$\\\\\n\\begin{remark}\\label{discrete-subspace-rkhs}\n \\begin{enumerate}[(i)]\n \\item\n Observe that \\eqref{eq-lower-rkhs2} is not a direct consequence of Theorem \\ref{kernel-and-rep-pair} as \n\\eqref{rep-prod-char-ker} only ensures\n$$\n0<\\sum_{i\\in\\mathcal{I}}(A\\phi_i)(x)\\overline{\\psi_i(x)}=K(x,x)<\\infty,\\ \\forall x\\in X.\n$$\n\\end{enumerate}\n\\end{remark}\n\n\\begin{definition}\nA weakly measurable mapping $\\Psi:X\\rightarrow\\H$ is called strictly continuous if there exists no $A\\subset X$ with $\\mu(A)>0$ such that $C_\\Psi f$ is constant on $A$, for all $f\\in\\H$.\n\\end{definition}\nSquare-integrable group representations \\cite{grmopa86} like the short-time Fourier system or the continuous wavelet system, see \\cite{groe1}, are just one class out of a large reservoir of strictly continuous mappings.\nIn the rest of this section we show that continuous frames can be decomposed into a discrete and a strictly continuous system.\nTo this end, we will need two auxiliary lemmata.\n\\begin{lemma}[\\cite{si96}, Theorem 3.8.1]\\label{constant-on-atoms}\nLet $A\\subset X$ be an atom. Every measurable function $F:X\\rightarrow \\mathbb{C}$ is constant\nalmost everywhere on $A$.\n\\end{lemma}\n\n\\begin{lemma}\\label{atomic-means-discrete}\nLet $\\Psi$ be Bessel and $A\\subset X$ such that $\\mu(A)>0$ and $\\ip{f}{\\Psi(\\cdot)}$ is constant on $A$ for every $f\\in \\H$. Then there exists a unique $\\psi\\in \\H$ such that $$\n\\|C_\\Psi f\\|_2^2=\\|C_\\Psi f|_{X\\backslash A}\\|_2^2+|\\ip{f}{\\psi}|^2,\\ \\forall\\ f\\in\\H.\n$$\nIn particular, $\\psi$ is weakly given by\n\\begin{equation}\\label{defin-of-psi-const}\n\\ip{f}{\\psi}:=\\mu(A)^{-1\/2}\\int_{A}\\ip{f}{\\Psi(x)}d\\mu(x),\\ \\forall\\ f\\in\\H.\n\\end{equation}\n\\end{lemma}\n\\textbf{Proof:\\ } First, observe that $\\psi$ defined by \\eqref{defin-of-psi-const} is well-defined and unique by the Riesz representation theorem, since\n$$\n|\\ip{f}{\\psi}|\\leq\\frac{1}{\\sqrt{\\mu(A)}}\\int_{A}|\\ip{f}{\\Psi(x)}|d\\mu(x)\n\\leq \\left(\\int_{A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)\\right)^{\\frac{1}{2}}\\leq M\\|f\\|,\n$$\nwhere $M$ is the Bessel bound of $\\Psi$. 
Moreover, \n$$\n\\int_X|\\ip{f}{\\Psi(x)}|^2d\\mu(x)=\\int_{X\\backslash A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)+\\int_{A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)\n$$\n$$\n=\\int_{X\\backslash A}|\\ip{f}{\\Psi(x)}|^2d\\mu(x)+|\\ip{f}{\\psi}|^2,\n$$\nwhere we have used that $\\ip{f}{\\Psi(\\cdot)}$ is almost everywhere constant on $A$ and \\eqref{defin-of-psi-const}.\\hfill$\\Box$\\\\\n\n\\begin{theorem}\nEvery frame $\\Psi$ can be written as $\\Psi=\\Psi_d\\cup \\Psi_c$, where $\\Psi_d$ is a discrete Bessel system and $\\Psi_c$ is a strictly continuous Bessel mapping.\n\\end{theorem}\n\\textbf{Proof:\\ } By Lemma \\ref{not-atomic-non-atomic} $(i)$, any measure $\\mu$ can be written as $\\mu=\\mu_a+\\mu_c$, where $\\mu_a$ is atomic and $\\mu_c$ is non-atomic. By Lemma \\ref{constant-on-atoms} and \\ref{atomic-means-discrete} we deduce that $\\Psi$ defined on $(X,\\mu_a)$ can be identified with a discrete Bessel system $\\Psi_d^a$. Let $X_d\\subset X$ be the disjoint union of all sets of positive measure with respect to $\\mu_c$ on which $C_\\Psi f$ is constant for all $f\\in\\H$, and let $\\{\\psi_i\\}_{i\\in\\mathcal{I}}$ be the corresponding collection of vectors. By definition, $\\Psi_c:=\\Psi|_{X\\backslash X_d}$ is strictly continuous. It therefore remains to show that $\\mathcal{I}$ is countable.\nThis, however, is a direct consequence of the fact that $\\sigma$-finite measure spaces can only be partitioned into countably many sets of positive measure. 
Hence, setting $\\Psi_d:=\\Psi_d^a\\cup \\{\\psi_i\\}_{i\\in\\mathcal{I}}$ yields the result.\\hfill$\\Box$\\\\\n\n\\noindent In an attempt to generalize the concept of Riesz bases, continuous Riesz bases \\cite{arkatota12} and Riesz-type mappings \\cite{gaha03} have been introduced.\nIt turns out that \nthese notions are equivalent and can be characterized as frames with redundancy zero \\cite[Proposition 2.5 \\& Theorem 2.6]{arkatota12}.\n\n\\begin{corollary}\nEvery continuous Riesz basis (Riesz-type mapping) can be written as a discrete Riesz basis.\n\\end{corollary}\n\\textbf{Proof:\\ } Let $\\Psi$ be a continuous Riesz basis, then $R(\\Psi)=0$. By Theorem \\ref{reproduced-result}, $(X,\\mu)$ is \natomic. Consequently, $\\Psi$ corresponds to a discrete Riesz basis by Lemma \\ref{constant-on-atoms} and \\ref{atomic-means-discrete}.\\hfill$\\Box$\\\\\n\n\\noindent With the results of this section in mind, we suggest using the term continuous frame only in the case of a strictly continuous frame, and semi-continuous frame if there is both a strictly continuous and a discrete part. Moreover, the notion of a continuous Riesz basis\/Riesz-type mapping should be discarded, as there are no such systems on \nnon-atomic measure spaces and continuous Riesz bases on atomic spaces reduce to discrete Riesz bases.\n\n\\subsection{Upper semi-frames}\\label{sec:upper-semi}\n\nIn this section we want to illustrate that upper semi-frames behave essentially differently from (lower semi-)frames with respect to the problems of Section \\ref{sec:frames-and-redund}. In particular, the closure of the range of the analysis operator is not necessarily a reproducing kernel Hilbert space, and there exist upper semi-frames on non-atomic measure spaces with redundancy zero (compare to Proposition \\ref{exist-frame-for-RKHS} and Theorem \\ref{reproduced-result}). 
Throughout this section we will assume that any upper semi-frame violates the lower frame inequality.\\\\\n\n\\noindent \\textbf{Example 1}\nIn \\cite{jpaxxl09,ansptr15} the following upper \nsemi-frame has been studied. \nSet $\\H_n:=L^2(\\mathbb{R}^+,r^{n-1}d r)$, where $n\\in\\mathbb{N}$, and $(X,\\mu)=(\\mathbb{R},d x)$.\n We use the following convention to denote the Fourier transform\n $$\n \\widehat f(\\omega)=\\int_\\mathbb{R} f(x)e^{-2\\pi i x\\omega}dx.\n $$\n Let $\\psi\\in \\H_n$ and define the affine coherent state by\n$$\n\\Psi(x)(r):=e^{-2\\pi ixr}\\psi(r),\\ \\ r\\in\\mathbb{R}^+,\\ x\\in\\mathbb{R}.\n$$\nThe mapping $\\Psi$ forms an upper semi-frame if \n$\\esssup_{r \\in {\\mathbb{R}}^{+}}{\\mathfrak s}(r)<\\infty,$ where ${\\mathfrak s}(r):=r^{n-1}|\\psi (r)|^{2}$,\nand $|\\psi(r)|\\neq 0$, for a.e. $r\\in\\mathbb{R}^+$.\nThe frame operator is then given by a multiplication operator on $\\H_n$, that is,\n$$\n(S_\\Psi f)(r)= {\\mathfrak s}(r) f(r).\n$$\nIt is thus easy to see that $\\Psi$ cannot form a frame since\n$\\essinf_{r \\in {\\mathbb{R}}^{+}} {\\mathfrak s}(r)=0$ for every $\\psi\\in \\H_n$.\nIn \\cite[Section 5.2]{ansptr15} it is shown that ${\\sf Ker}\\, D_\\Psi=\\mathcal F_+$, where \n$$\\mathcal{F}_+ :=\\{f\\in L^2(\\mathbb{R}):\\ \\widehat f(\\omega)=0,\\ \\text{for a.e. }\\omega\\geq0\\}.$$\nClearly, $\\overline{{\\sf Ran}\\, C_\\Psi}=(\\ker D_\\Psi)^\\bot=\\mathcal{F}_+{}^\\bot=\\mathcal{F}_-$, where\n$$\\mathcal{F}_- :=\\{f\\in L^2(\\mathbb{R}):\\ \\widehat f(\\omega)=0,\\ \\text{for a.e. }\\omega\\leq0\\}.$$\nTherefore, $\\Psi$ has infinite redundancy and a short argument shows that $\\mathcal F_-$ is not a RKHS: \n\n The dilation operator $D_a $, defined by $D_a f(x):=a^{-1\/2}f(x\/a)$, $a\\in \\mathbb{R}^+$, acts isometrically on $\\mathcal{F}_-$. Take $f\\in\\mathcal{F}_-$ with $\\|f\\|=1$ and $f(0)\\neq0$, then $|D_af(0)|=|a^{-1\/2}f(0)|\\rightarrow\\infty$, as $a\\rightarrow 0$. 
Consequently, point evaluation cannot be continuous and $\\mathcal{F}_-$ is not a RKHS.\n\nThe mapping $\\Psi$ possesses several other interesting properties, see \\cite{ansptr15}. For instance, it forms\na total Bessel system with no dual. In other words, there is no mapping $\\Phi$ such that \n$(\\Psi,\\Phi)$ generates a reproducing pair.\n\n\nNext, we will show the existence of upper semi-frames with ${\\sf Ran}\\, C_\\Psi$ dense in $L^2(X,\\mu)$ \nwhenever there exists an\n orthonormal basis of $L^2(X,\\mu)$ which is uniformly bounded. In particular, there exist upper semi-frames on non-atomic measure spaces with redundancy zero.\n\\begin{proposition}\nLet $(X,\\mu)$ be a measure space such that there exists an orthonormal basis $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ of $L^2(X,\\mu)$ satisfying\n\\begin{equation}\\label{assumpt-on-onb}\n\\sup_{n\\in\\mathbb{N}}\\sup_{x\\in X}|\\psi_n(x)|=C< \\infty.\n\\end{equation}\nThen there exists an upper semi-frame $\\Psi$ for $\\H$ such that $\\overline{{\\sf Ran}\\, C_\\Psi}=L^2(X,\\mu)$.\nIn particular, $R(\\Psi)=0$.\n\\end{proposition}\n\\textbf{Proof:\\ } Take an arbitrary orthonormal basis $\\{e_n\\}_{n\\in\\mathbb{N}}$ of $\\H$, and define\n$$\\Psi(x):=\\sum_{n\\in\\mathbb{N}}n^{-1} e_n \\psi_n(x),$$\nwith the sum converging unconditionally in $\\H$ for every $x\\in X$. 
Then, $\\Psi$ is an upper semi-frame with the desired properties.\nTo see this, we first observe that $\\Psi:X\\rightarrow\\H$ is well-defined \nas, for $x\\in X$ fixed,\n$$\n|\\ip{f}{\\Psi(x)}|\\leq \\sum_{n\\in\\mathbb{N}}|\\ip{f}{e_n}n^{-1}\\psi_n(x)|\\leq \\norm{}{f}\\Big(\\sum_{n\\in\\mathbb{N}}n^{-2}| \\psi_n(x)|^2\\Big)^{1\/2}\n$$\n$$\n\\leq C\\norm{}{f}\\Big(\\sum_{n\\in\\mathbb{N}}n^{-2}\\Big)^{1\/2}= \\frac{\\pi}{\\sqrt{6}} C\\norm{}{f},\n$$\nwhere we used \\eqref{assumpt-on-onb} and Cauchy-Schwarz inequality.\nMoreover,\n$$\n\\int_X|\\ip{f}{\\Psi(x)}|^2d\\mu(x)\\leq\\int_X\\norm{}{f}^2\\sum_{n\\in\\mathbb{N}}n^{-2}| \\psi_n(x)|^2d\\mu(x)\n$$\n$$\n=\\norm{}{f}^2\\sum_{n\\in\\mathbb{N}}n^{-2}\\int_X| \\psi_n(x)|^2 d\\mu(x)=\\norm{}{f}^2\\sum_{n\\in\\mathbb{N}}n^{-2}=\n\\frac{\\pi^2}{6}\\norm{}{f}^2.\n$$\n Since $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ is an orthonormal basis of $L^2(X,\\mu)$ it follows that $\\Psi$ is total in $\\H$ as, for $f\\neq0$,\n$$\n\\int_X|\\ip{f}{\\Psi(x)}|^2d\\mu(x)= \\int_X \\sum_{n,k\\in\\mathbb{N}}\\ip{f}{e_n}\\ip{e_k}{f}(nk)^{-1}\\psi_n(x)\\overline{\\psi_k(x)}d\\mu(x)\n$$\n$$\n=\\sum_{n,k\\in\\mathbb{N}}\\ip{f}{e_n}\\ip{e_k}{f}(nk)^{-1}\\delta_{n,k}=\\sum_{n\\in\\mathbb{N}}|\\ip{f}{e_n}|^2n^{-2}>0.\n$$\nFinally, the range of the analysis operator of the system $\\{n^{-1} e_n \\}_{n\\in\\mathbb{N}}$ is dense in $l^2(\\mathbb{N})$, which implies that ${\\sf Ran}\\, C_\\Psi$ is dense \nin $L^2(X,\\mu)$.\\hfill$\\Box$\\\\\n\n\n\\noindent \\textbf{Example 2} Let $(X,\\mu)=(\\mathbb{T},dx)$ be the torus with Lebesgue measure, and $\\psi_n(x)=e^{2\\pi i xn},$ $n\\in\\mathbb{Z}$.\nThen, $\\{\\psi_n\\}_{n\\in\\mathbb{Z}}$ is an orthonormal basis and\n$$\\sup_{n\\in\\mathbb{Z}}\\sup_{x\\in \\mathbb{T}}|\\psi_n(x)|=1.$$\nHence, there exists an upper semi-frame $\\Psi$ with the closure of ${\\sf Ran}\\, C_\\Psi$ being $L^2(\\mathbb T,dx)$.\n\n\n\\subsection{Correction of the proof of a result on the existence of duals for lower 
semi-frames}\nIn this section we present a corrected version of the proof of \\cite[Proposition 2.6]{jpaxxl09}, which states that for every lower semi-frame $\\Psi$ there exists a dual mapping $\\Phi$ such that $S_{\\Psi,\\Phi}=I$ on ${\\sf Dom}\\, C_\\Psi$. While the result itself is correct,\nthe construction of the dual system $\\Phi$ in \\cite{jpaxxl09} is in general not well-defined. In particular, $\\Phi$ is \ndefined as\n$$\n\\Phi(x):=\\sum_{n\\in\\mathbb{N}}\\phi_n(x)V\\phi_n=V\\Big(\\sum_{n\\in\\mathbb{N}}\\phi_n(x)\\phi_n\\Big),\n$$\nwhere $V:L^2(X,\\mu)\\rightarrow \\H$ is a bounded operator depending on $\\Psi$ and $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ is an orthonormal basis\nfor $L^2(X,\\mu)$. However, if $(X,\\mu)$ is non-atomic, then there exists a set of positive measure $A$ such that\n$\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2=\\infty,$ for all $x\\in A$, by Corollary \\ref{noonbpointwise1}.\nThus, $\\Phi$ is not well-defined on a set of positive measure.\n\n\n\\begin{proposition}[\\cite{jpaxxl09}, Proposition 2.6]\\label{corrected-result}\n Let $\\Psi$ be a lower semi-frame. Then there exists an upper semi-frame $\\Phi$ such that \n $$\n f=\\int_X\\ip{f}{\\Psi(x)}\\Phi(x)d\\mu(x),\\ \\ \\ \\forall \\ f\\in {\\sf Dom}\\, C_\\Psi.\n $$\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\Psi$ be a lower semi-frame. Then ${\\sf Ran}\\, C_\\Psi$ is a RKHS in $L^2(X,\\mu)$ by Proposition \\ref{exist-frame-for-RKHS}.\nMoreover, let $P$ denote the orthogonal projection from $L^2(X,\\mu)$ onto ${\\sf Ran}\\, C_\\Psi$, and $\\{e_n\\}_{n\\in\\mathbb{N}}$ be an orthonormal\nbasis for $\\H$.\nDefine the linear operator $V:L^2(X,\\mu)\\rightarrow \\H$ by $V:=C_\\Psi^{-1}$ on \n${\\sf Ran}\\, C_\\Psi$ and $V:=0$ on $({\\sf Ran}\\, C_\\Psi)^\\bot$. 
Then $V$ is bounded and for all $f\\in {\\sf Dom}\\, C_\\Psi,\\ g\\in\\H$,\nit holds\n$$\n\\ip{f}{g}=\\ip{VC_\\Psi f}{g}=\\ip{C_\\Psi f}{V^\\ast g}_2=\\ip{C_\\Psi f}{V^\\ast (\\sum_{n\\in\\mathbb{N}}\\ip{ g}{e_n}e_n)}_2\n$$\n$$\n=\\ip{C_\\Psi f}{\\sum_{n\\in\\mathbb{N}}\\ip{g}{ e_n}V^\\ast e_n}_2=\\ip{C_\\Psi f}{\\sum_{n\\in\\mathbb{N}}\\ip{g}{ e_n}P V^\\ast e_n}_2\n=\\ip{C_\\Psi f}{C_\\Phi g}_2,\n$$\nwhere $\\Phi(x):=\\sum_{n\\in\\mathbb{N}}\\overline{(PV^\\ast e_n)}(x)e_n$. It remains to show that $\\Phi(x)$ is well-defined for every $x\\in X$. Since $\\{e_n\\}_{n\\in\\mathbb{N}}$ is an orthonormal basis, one has that $\\Phi(x)$ is well-defined\nif, and only if, \n$$\n\\sum_{n\\in\\mathbb{N}}|(PV^\\ast e_n)(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n$$ \nBy Proposition \\ref{bessel-rkhs}, it is sufficient to show that $\\Theta:=\\{P V^\\ast e_n\\}_{n\\in\\mathbb{N}}$ is a Bessel sequence in ${\\sf Ran}\\, C_\\Psi$.\nLet $F\\in {\\sf Ran}\\, C_\\Psi$, then\n$$\n\\sum_{n\\in\\mathbb{N}}|\\ip{F}{ \\Theta_n}_2|^2=\\sum_{n\\in\\mathbb{N}}|\\ip{VPF}{e_n}|^2=\\|VF\\|^2\\leq C\\|F\\|_2^2,\n$$\nas $PF=F$ and $V$ is bounded. It hence remains to show that $\\Phi$ is Bessel. Let $f\\in \\H$, then\n$$\n\\int_X|\\ip{f}{\\Phi(x)}|^2d\\mu(x)=\\int_X\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{e_n}\\Theta_n(x)\\Big|^2d\\mu(x)\n$$\n$$\n=\\|D_\\Theta \\{\\ip{f}{e_n}\\}_{n\\in\\mathbb{N}}\\|_2^2\n\\leq C\\sum_{n\\in\\mathbb{N}}|\\ip{f}{e_n}|^2=C\\|f\\|^2,\n$$\nas $\\Theta$ is Bessel.\n\\hfill$\\Box$\\\\\n\\begin{remark}\nThere is no analogous result to Proposition \\ref{corrected-result} if $\\Psi$ is an upper semi-frame. 
In \\cite{ansptr15} it is shown that the affine coherent state system presented in Section \\ref{sec:upper-semi} is a complete Bessel mapping with no dual.\n\\end{remark}\n\n\\section{Reproducing pairs and RKHSs}\\label{sec:rep-pair-rkhs}\n\n\nThe absence of frame bounds causes problems in the analysis of the ranges of $C_\\Psi$ and $C_\\Phi$ of a reproducing pair $(\\Psi,\\Phi)$. On the one hand, without the upper frame bound it is no longer guaranteed that ${\\sf Ran}\\, C_\\Psi$ is a subspace of $L^2(X,\\mu)$. The lower frame inequality, on the other hand, ensured that ${\\sf Ran}\\, C_\\Psi$ is a RKHS.\nA construction of two mutually dual Hilbert spaces intrinsically generated by the pair $(\\Psi,\\Phi)$ is presented in \\cite{ansptr15}.\nLet us first recall some of these results before we explain how reproducing kernel Hilbert spaces come into play.\n\nLet ${\\mathcal V}_\\Phi(X, \\mu)$ be the space of all measurable functions $F : X \\to \\mathbb{C}$ for which\n there exists $M>0$ such that\n$$\\label{eq-Vphi}\n\\left| \\int_X F(x) \\ip{\\Phi(x)}{g} d\\mu(x) \\right| \\leq M \\norm{}{g}, \\; \\forall\\, g \\in \\H.\n$$\nNote that in general neither ${\\mathcal V}_\\Phi(X, \\mu)\\subset L^2(X, \\mu)$ nor $ L^2(X, \\mu)\\subset {\\mathcal V}_\\Phi(X, \\mu)$.\nThe linear map $T_\\Phi :{\\mathcal V}_\\Phi(X, \\mu) \\rightarrow \\H$ given weakly by\n\\begin{equation}\\label{def-T-phi}\n\\ip{T_\\Phi F}{g} =\\int_X F(x) \\ip{\\Phi(x)}{g} d\\mu(x) , \\; g\\in\\H,\n\\end{equation}\nis thus well-defined by the Riesz representation theorem.\nThe operator $T_\\Phi$ can be seen as the natural extension of the synthesis operator $D_\\Phi$ (defined on ${\\sf Dom}\\, D_\\Phi\\subseteq L^2(X,\\mu)$) to ${\\mathcal V}_\\Phi(X, \\mu)$. 
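To make the pathologies of ${\\mathcal V}_\\Phi(X, \\mu)$ concrete, consider the following simple example on a discrete measure space; the weight sequence $c_n$ is chosen purely for illustration. Let $X=\\mathbb{N}$ with counting measure $\\mu$, let $\\{e_n\\}_{n\\in\\mathbb{N}}$ be an orthonormal basis of $\\H$, and set $\\Phi(n):=c_ne_n$ for some sequence $c_n>0$. Then\n$$\n\\int_X F(x)\\ip{\\Phi(x)}{g}d\\mu(x)=\\sum_{n\\in\\mathbb{N}}F(n)c_n\\ip{e_n}{g},\n$$\nwhich defines a bounded functional of $g$ if, and only if, $(c_nF(n))_{n\\in\\mathbb{N}}\\in l^2(\\mathbb{N})$. Hence,\n$$\n{\\mathcal V}_\\Phi(\\mathbb{N},\\mu)=\\big\\{F:\\mathbb{N}\\rightarrow\\mathbb{C}:\\ (c_nF(n))_{n\\in\\mathbb{N}}\\in l^2(\\mathbb{N})\\big\\},\\ \\ \\ T_\\Phi F=\\sum_{n\\in\\mathbb{N}}c_nF(n)e_n.\n$$\nFor $c_n=n$, the function $F(n)=n^{-1}$ belongs to $L^2(\\mathbb{N},\\mu)\\backslash{\\mathcal V}_\\Phi(\\mathbb{N},\\mu)$, whereas for $c_n=n^{-1}$, the constant function $F(n)=1$ belongs to ${\\mathcal V}_\\Phi(\\mathbb{N},\\mu)\\backslash L^2(\\mathbb{N},\\mu)$, showing that indeed neither inclusion holds in general. 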
\n\nLet $(\\Psi,\\Phi)$ be a reproducing pair. According to \\cite{ansptr15},\nit then holds\n\\begin{equation}\\label{dir-sum-rp}\n{\\mathcal V}_\\Phi(X,\\mu)={\\sf Ran}\\, C_\\Psi\\oplus \\ker T_\\Phi.\n\\end{equation}\nThis observation, together with the fact that $T_\\Phi$ is in general not injective, motivates defining the redundancy for arbitrary complete mappings via \n\\begin{equation}\\label{redund-rep-pair}\nR(\\Phi):=\\dim(\\ker T_\\Phi).\n\\end{equation}\nWe expect that results on $R(\\Phi)$ similar to those in Section \\ref{sec:frames-and-redund} hold.\n\\begin{conjecture} If $R(\\Phi)<\\infty$, then $(X,\\mu)$ is atomic.\n\\end{conjecture}\nThe main difficulty is that there is no characterization of ${\\mathcal V}_\\Phi(X,\\mu)$ which would allow us to treat the problem in a manner similar to Section \\ref{sec:frames-and-redund} using \\eqref{dir-sum-rp}. It is in particular not even clear if ${\\mathcal V}_\\Phi(X,\\mu)$ is normable.\n\nLet us introduce the following vector space\n$$ \nV_\\Phi(X, \\mu)= {\\mathcal V}_\\Phi(X, \\mu)\/{{\\sf Ker}\\,}\\,T_\\Phi,\n$$\nequipped with the inner product\n$$\n\\ip{F}{G}_{\\Phi}: =\\ip{T_\\Phi F}{T_\\Phi G}, \\mbox{ where } F,\nG\\ \\in V_\\Phi(X,\\mu).\n$$\nThis is indeed an inner product as $\\ip{F}{F}_{\\Phi}=0$ if, and only if, $F\\in{\\sf Ker}\\, T_\\Phi$. Hence, $V_\\Phi(X,\\mu)$ forms a pre-Hilbert space and $T_\\Phi:V_\\Phi(X,\\mu)\\rightarrow \\H$ is an isometry.\nBy \\eqref{def-T-phi}, $ \\ip{\\cdot}{\\cdot}_{\\Phi}$ can be written explicitly as\n\\begin{equation}\\label{phi-inner-expl}\n \\ip{F}{G}_{\\Phi}=\\int_X\\int_X F(x) \\ip{\\Phi(x)}{\\Phi(y)}\\overline{G(y)} d\\mu(x)d\\mu(y).\n\\end{equation}\\\\\n\n\nWith the basic definitions at hand, we are now able to give an interpretation of \\cite[Theorem 4.1]{ansptr15} in terms of RKHSs.\nIn particular, this result answers the question whether, given a mapping $\\Phi$, \nthere exists another mapping $\\Psi$ such that $(\\Psi,\\Phi)$ forms a reproducing pair. 
\n \\begin{theorem}[\\cite{ansptr15}, Theorem 4.1]\\label{theo-partner}\nLet $\\Phi:X\\rightarrow\\H$ be a weakly measurable mapping and $\\{e_i\\}_{i\\in\\mathcal{I}}$ an orthonormal \nbasis of $\\H$. There exists another family $\\Psi$, such that $(\\Psi,\\Phi)$ is a reproducing pair if, and only if,\n\\begin{enumerate}[(i)]\\item ${\\sf Ran}\\, T_\\Phi =\\H$, \n\\item there exists $\\{\\mathcal{E}_i\\}_{i\\in\\mathcal{I}}\\subset {\\mathcal V}_\\Phi(X,\\mu)$ satisfying $T_\\Phi \\mathcal{E}_i= e_i,\\ \\forall\\ i\\in\\mathcal{I},$ and\n\\begin{equation}\\label{second-assumption}\n\\sum_{i\\in\\mathcal{I}}|\\mathcal{E}_i(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\n\\end{enumerate}\n A reproducing partner $\\Psi$ is then given by\n\\begin{equation}\\label{def-repr-partn}\n \\Psi(x):=\\sum_{i\\in\\mathcal{I}}\\overline{\\mathcal{E}_i(x)}e_i.\n\\end{equation}\n \\end{theorem}\n Theorem \\ref{theo-partner} is a powerful tool for the study of complete systems. It has, for example, been used to construct a reproducing partner for the Gabor system of integer time-frequency shifts of the Gaussian window \\cite{spexxl16}.\n\nLet us briefly discuss the conditions $(i)$ and $(ii)$. For a complete system one can show that (under very mild conditions \\cite[Lemma 2.2]{jpaxxl09}) $\\overline{{\\sf Ran}\\, D_\\Phi}=\\H$ holds. It might therefore seem that $(i)$ is mainly a formality since $T_\\Phi$ extends $D_\\Phi$ to its domain ${\\mathcal V}_\\Phi(X,\\mu)$. The upper semi-frame from Section \\ref{sec:upper-semi}, however, does not satisfy $(i)$, see \\cite[Section 6.2.3]{ansptr15}. In addition, there are intuitive interpretations of $(i)$ and $(ii)$ in different contexts.\n\n{\\bf Coefficient map interpretation:} Property $(i)$ ensures the existence of a linear coefficient\nmap $A:\\H\\rightarrow {\\mathcal V}_\\Phi(X,\\mu)$ satisfying $f=T_\\Phi A(f)$ for every $f\\in\\H$.
\nProperty $(ii)$ then guarantees that $A(f)$ can be calculated by taking inner products of $f$ with a second mapping\n$\\Psi:X\\rightarrow\\H$.\n\n{\\bf RKHS interpretation:} Let us assume that $(i)$ and $(ii)$ are satisfied. The family\n$\\{\\mathcal{E}_i\\}_{i\\in\\mathcal{I}}$ forms an orthonormal system with respect to the inner product $\\ip{\\cdot}{\\cdot}_\\Phi$, since by $(ii)$ it holds that\n$$\n\\langle \\mathcal{E}_i,\\mathcal{E}_k\\rangle_\\Phi=\\langle T_\\Phi \\mathcal{E}_i,T_\\Phi \\mathcal{E}_k\\rangle=\\langle e_i,e_k\\rangle=\\delta_{i,k}.\n$$\nHence, $\\{\\mathcal{E}_i\\}_{i\\in\\mathcal{I}}$ forms an orthonormal basis for\n$$ \n\\H_K^\\Phi:=\\overline{\\mbox{span}\\{\\mathcal{E}_i:\\ i\\in\\mathcal{I}\\}}^{\\|\\cdot\\|_\\Phi}.\n$$\nTheorem \\ref{charact-of-RKHS}, together with \\eqref{second-assumption}, ensures that $\\H_K^\\Phi$ is a RKHS.\nMoreover, the definition of the reproducing partner $\\Psi$ in \\eqref{def-repr-partn} yields that\n\\begin{equation}\\label{rkhs-ran-anal}\\H_K^\\Phi\\simeq V_\\Phi(X,\\mu)\\simeq({\\sf Ran}\\, C_\\Psi,\\|\\cdot\\|_\\Phi).\n\\end{equation}\nTo put it another way, $(i)$ and $(ii)$ guarantee that there exists a RKHS $\\H_K^\\Phi\\subset {\\mathcal V}_\\Phi(X,\\mu)$\nwhich reproduces $\\H$ in the sense that $T_\\Phi(\\H_K^\\Phi)=\\H$.\n\n\nLet us assume that $(\\Psi,\\Phi)$ is a reproducing pair. There is a natural way to generate frames on $\\H$ and $\\H_K^\\Phi$ using the analysis and synthesis operators.\n\\begin{proposition}\\label{rep-pair-frame-dec}\nLet $(\\Psi,\\Phi)$ be a reproducing pair for $\\H$, $\\{g_i\\}_{i\\in\\mathcal{I}}$ a frame for $\\H$ and $\\{G_i\\}_{i\\in\\mathcal{I}}$ a frame for \n$\\H_K^\\Phi$.
\nDefine $H_i(x):=\\ip{g_i}{\\Psi(x)}$ and $h_i:=T_\\Phi G_i$, then\n$\\{H_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H_K^\\Phi$ and $\\{h_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H$.\n\\end{proposition}\n\\textbf{Proof:\\ } Let $F\\in \\H_K^\\Phi$, then \n$$\n\\sum_{i\\in\\mathcal{I}}|\\langle F,H_i\\rangle_\\Phi|^2=\\sum_{i\\in\\mathcal{I}}|\\langle T_\\Phi F,T_\\Phi H_i\\rangle|^2\n=\n\\sum_{i\\in\\mathcal{I}}|\\langle T_\\Phi F,S_{\\Psi,\\Phi} g_i\\rangle|^2\n$$\n$$\n=\\sum_{i\\in\\mathcal{I}}|\\langle (S_{\\Psi,\\Phi})^\\ast T_\\Phi F, g_i\\rangle|^2\\leq M\\|(S_{\\Psi,\\Phi})^\\ast T_\\Phi F\\|^2\n$$\n$$\n\\leq M\\|S_{\\Psi,\\Phi}\\|^2\\| T_\\Phi F\\|^2 =\\widetilde M \\|F\\|_\\Phi^2.\n$$\nThe lower bound follows from the same argument as $(S_{\\Psi,\\Phi})^\\ast$ is boundedly invertible.\nHence, $\\{H_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H_K^\\Phi$. \\\\\nLet $f\\in \\H$, then \n$$\n\\|f\\|=\\|T_\\Phi C_\\Psi S_{\\Psi,\\Phi}^{-1}f\\|=\\|C_\\Psi S_{\\Psi,\\Phi}^{-1}f\\|_\\Phi, \n$$\ntogether with \n$$\n\\sum_{i\\in\\mathcal{I}}|\\langle f,h_i\\rangle|^2=\\sum_{i\\in\\mathcal{I}}|\\langle T_\\Phi C_\\Psi S_{\\Psi,\\Phi}^{-1}f,T_\\Phi G_i\\rangle|^2\n=\n\\sum_{i\\in\\mathcal{I}}|\\langle C_\\Psi S_{\\Psi,\\Phi}^{-1}f,G_i\\rangle_\\Phi|^2,\n$$\nyields that $\\{h_i\\}_{i\\in\\mathcal{I}}$ is a frame for $\\H$.\n\\hfill$\\Box$\\\\\n\n\\noindent The rest of this section is concerned with the explicit calculation of the reproducing kernel for $\\H_K^\\Phi$.\n Let $(\\Psi,\\Phi)$ be a reproducing pair, then there exists a similar characterization of the range of the \n analysis operators as in \\eqref{frame-rep-kernel-op}.\n Let $F\\in {\\mathcal V}_\\Phi(X,\\mu)$ and define $R_{\\Psi,\\Phi}(x,y):=\\ip{S_{\\Psi,\\Phi}^{-1}\\Phi(y)}{\\Psi(x)}$ and its associated\n integral operator\n$$\n\\mathcal{R}_{\\Psi,\\Phi}(F)(x):=\\int_X F(y)R_{\\Psi,\\Phi}(x,y)d\\mu(y).\n$$\nBy\n\\cite[Proposition 2]{spexxl14} it follows that $\\mathcal{R}_{\\Psi,\\Phi}(F)(x)=F(x)$ 
if, and only if, there \nexists $f\\in \\H$ such that $F(x)=\\ip{f}{\\Psi(x)}$, for all $x\\in X$.\nHowever, $R_{\\Psi,\\Phi}$ is not the reproducing kernel for $\\H_K^\\Phi$ since the reproducing formula is based\non the inner product of $L^2(X,\\mu)$ and not on $\\langle \\cdot,\\cdot\\rangle_\\Phi$. \n\nBy \\eqref{rkhs-ran-anal}, the reproducing kernel is given by a function $k_x\\in {\\sf Ran}\\, C_\\Psi$ such that $F(x)=\\ip{F}{k_x}_\\Phi$. \nLet $F\\in{\\sf Ran}\\, C_\\Psi$; applying \\eqref{phi-inner-expl} and the identity $f=T_\\Phi C_\\Psi S_{\\Psi,\\Phi}^{-1} f$ yields \n\\begin{align*}\n F(x)&=\\mathcal{R}_{\\Psi,\\Phi}(F)(x)=\\int_X F(y)\\ip{\\Phi(y)}{(S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(x)}d\\mu(y)\\\\\n &=\\int_X \\int_X F(y)\\ip{\\Phi(y)}{\\Phi(z)}\\ip{\\Psi(z)}{S_{\\Psi,\\Phi}^{-1}(S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(x)}d\\mu(z)d\\mu(y)\\\\\n&=\\Big\\langle F,\\big\\langle (S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(x),(S_{\\Psi,\\Phi}^{-1})^\\ast\\Psi(\\cdot)\\big\\rangle\\Big\\rangle_\\Phi.\n \\end{align*}\nHence, using $(S_{\\Psi,\\Phi}^{-1})^\\ast=S_{\\Phi,\\Psi}^{-1}$, we finally obtain\n$$\nK_{\\Psi,\\Phi}(x,y)=k_x(y)=\\big\\langle S_{\\Phi,\\Psi}^{-1}\\Psi(x),S_{\\Phi,\\Psi}^{-1}\\Psi(y)\\big\\rangle.\n$$\n\n\n\n\\begin{comment}\nIt therefore remains\nto show that point evaluation is continuous.\nLet $F\\in V_\\Phi(X,\\mu)$, by Lemma one obtains\n$$\n|F(x)|=|\\ip{f}{\\Psi(x)}|\\leq \\norm{}{f}\\norm{}{\\Psi(x)}\\leq C\\norm{\\Phi}{C_\\Psi f}\\norm{}{\\Psi(x)}=C_x\\norm{\\Phi}{F}\n$$\n\n \\begin{theorem}\\label{theo-partner}\nLet $\\phi$ be a weakly measurable function and $e=\\{e_n\\}_{n\\in\\mathbb{N}}$ an orthonormal basis of $\\H$.
There exists another measurable\nfunction $\\psi$, such that $(\\psi,\\phi)$ is a reproducing pair if, and only if, \n there exist $m,M>0$ such that \n\\begin{equation}\\label{norm_equiv_f_Cf}\nm\\norm{}{f}\\leq \\norm{\\phi^\\ast}{C_\\phi f} \\leq M\\norm{}{f},\\ \\forall f\\in\\mathcal{H}\n\\end{equation}\nand there exists a family $\\{\\xi_n\\}_{n\\in\\mathbb{N}}\\subset \\mathcal{V}_\\phi(X,\\mu)$ such that \n\\begin{equation}\\label{second-assumption}\n[\\xi_n]_\\phi=[\\widehat T_\\phi^{-1} e_n]_\\phi,\\ \\forall n\\in\\mathbb{N},\\hspace{0.5cm} \\text{and} \\hspace{0.5cm}\n\\sum_{n\\in\\mathbb{N}}|\\xi_n(x)|^2<\\infty,\\ \\forall\\ x\\in X.\n\\end{equation}\n\\end{theorem}\n\\end{comment}\n\n\n\n\n\\begin{comment}\n\\begin{definition}\n Let $(B,\\norm{B}{\\cdot})$ be a Banach space. A family $\\{g_n\\}_{n\\in\\mathbb{N}}\\subset B$ is called an \\textbf{atomic decomposition} if there exists a \n Banach space of sequences $(B^\\natural,\\norm{B^\\natural}{\\cdot})$ and bounded linear functionals $\\{\\lambda_n\\}_{n\\in\\mathbb{N}}$ such that\n \\begin{enumerate}[(i)]\n \\item $\\{\\lambda_n(f)\\}_{n\\in\\mathbb{N}}\\in B^\\natural$ and there exists $C_1>0$ such that\n $$\n \\norm{B^\\natural}{\\{\\lambda_n(f)\\}_{n\\in\\mathbb{N}}}\\leq C_1\\norm{B}{f},\\ \\forall\\ f\\in B\n $$\n \\item If $\\{\\lambda_n\\}_{n\\in\\mathbb{N}}\\in B^\\natural$ then $f=\\sum_{n\\in\\mathbb{N}}\\lambda_n g_n\\in B$ (with unconditional convergence in some\n suitable topology) and there exists $C_2>0$ such that \n $$\n \\norm{B}{f}\\leq C_2\\norm{B^\\natural}{\\{\\lambda_n\\}_{n\\in\\mathbb{N}}}\n $$\n \\item $f=\\sum_{n\\in\\mathbb{N}}\\lambda_n(f)g_n,\\ \\forall\\ f\\in B$\n \\end{enumerate}\n\n \\end{definition}\n \n\\begin{definition}\n Let $(B,\\norm{B}{\\cdot})$ be a Banach space. 
A family $\\{h_n\\}_{n\\in\\mathbb{N}}\\subset B^\\ast$ is called a \\textbf{Banach frame} if there exists a \n Banach space of sequences $(B^\\natural,\\norm{B^\\natural}{\\cdot})$ and a bounded linear reconstruction operator $\\Omega$ such that\n \\begin{enumerate}[(i)]\n \\item If $f\\in B$ then $\\{h_n(f)\\}_{n\\in\\mathbb{N}}\\in B^\\natural$ and there exists $C_1,C_2>0$ such that \n $$\n C_1\\norm{B}{f}\\leq\\norm{B^\\natural}{\\{h_n(f)\\}_{n\\in\\mathbb{N}}}\\leq C_2\\norm{B}{f},\\ \\forall\\ f\\in B\n $$\n \\item $f=\\Omega(\\{\\lambda_n(f)\\}_{n\\in\\mathbb{N}}),\\ \\forall\\ f\\in B$\n \\end{enumerate}\n\n \\end{definition}\n\n\n\n\\begin{proposition}\\label{rep-pair-atomic-dec}\nLet $(\\Psi,\\Phi)$ be a reproducing pair, $\\{f_n\\}_{n\\in\\mathbb{N}}$ a frame for $\\H$, $B:=({\\sf Ran}\\, C_\\Psi,\\norm{\\Phi}{\\cdot})$ and $B^\\natural:=l^2(\\mathbb{N})$.\nDefine $\\psi_n(x):=\\ip{f_n}{\\Psi(x)}$, then $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ is an atomic decomposition for $B$.\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\{\\widetilde f_n\\}_{n\\in\\mathbb{N}}$ be the canonical dual frame of $\\{f_n\\}_{n\\in\\mathbb{N}}$ and define\n$$\n\\lambda_n(F):=\\ip{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}{\\widetilde f_n}\n$$\nBy a combination of \\cite[Lemma 2.5 \\& Proposition 2.10]{ansptr15} we get\n$$\n\\norm{l^2}{\\lambda(F)}^2=\\sum_{n\\in\\mathbb{N}}\\big|\\lambda_n(F)\\big|^2=\\sum_{n\\in\\mathbb{N}}\\big|\\ip{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}{\\widetilde f_n}\\big|^2\n$$\n$$\n\\leq C \\norm{}{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}^2\n\\leq C\\norm{}{T_\\Phi F}^2\n\\leq C\\norm{\\Phi}{F}^2\n$$\nNow let $\\lambda\\in l^2(\\mathbb{N})$ and $F=\\sum_{n\\in\\mathbb{N}}\\lambda_n\\phi_n$, 
then\n$$\n\\norm{\\Phi}{F}=\\sup_{\\norm{}{g}=1}\\Big|\\int_X\\sum_{n\\in\\mathbb{N}}\\lambda_n\\psi_n(x)\\ip{\\Phi(x)}{g}d\\mu(x)\\Big|\n$$\n$$\n=\\sup_{\\norm{}{g}=1}\\Big|\\sum_{n\\in\\mathbb{N}}\\lambda_n\\int_X\\ip{f_n}{\\Psi(x)}\\ip{\\Phi(x)}{g}d\\mu(x)\\Big|=\\sup_{\\norm{}{g}=1}\\Big|\n\\sum_{n\\in\\mathbb{N}}\\lambda_n\\ip{S_{\\Psi,\\Phi}f_n}{g}\\Big|\n$$\n$$\n=\\sup_{\\norm{}{g}=1}\\Big|\\ip{S_{\\Psi,\\Phi}D_f\\lambda}{g}\\Big|=\\Big\\|S_{\\Psi,\\Phi}D_f\\lambda\\Big\\|\\leq C\\norm{2}{\\lambda}\n$$\nFinally, for every $F=\\ip{f}{\\Psi(\\cdot)}\\in {\\sf Ran}\\, C_\\Psi$, we have\n$$\n\\sum_{n\\in\\mathbb{N}}\\lambda_n(F)\\psi_n(x)=\\sum_{n\\in\\mathbb{N}}\\ip{S_{\\Psi,\\Phi}^{-1}D_\\Phi F}{\\widetilde f_n}\\ip{f_n}{\\Psi(x)}\n=\\ip{S_{\\Psi,\\Phi}^{-1}T_\\Phi F}{\\Psi(x)}\n$$\n$$\n=\\ip{S_{\\Psi,\\Phi}^{-1}S_{\\Psi,\\Phi} f}{\\Psi(x)}=\\ip{ f}{\\Psi(x)}=F(x)\n$$\n\\hfill$\\Box$\\\\\n\n\\begin{proposition}\nLet $(\\Psi,\\Phi)$ be a reproducing pair, $\\{f_n\\}_{n\\in\\mathbb{N}}$ a frame for $\\H$, $B:=({\\sf Ran}\\, C_\\Psi,\\norm{\\Phi}{\\cdot})$ and $B^\\flat:=l^2(\\mathbb{N})$.\nDefine $\\phi_n(x):=\\ip{f_n}{\\Phi(x)}$ and $\\{h_n\\}_{n\\in\\mathbb{N}}\\subset B^\\ast$ defined by\n$$\nh_n(F):=\\int_X F(x)\\overline{\\phi_n(x)}d\\mu(x),\n$$ then $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ is an Banach frame for $B$.\n\\end{proposition}\n\\textbf{Proof:\\ } Let $F\\in B$ and $C_\\Psi f=F$, then\n$$\nC_1\\norm{\\phi}{F}\\leq \\norm{}{S_{\\Psi,\\Phi}^{-1}}^{-1}\\norm{}{f}\\leq \\norm{}{S_{\\Psi,\\Phi}f}\\leq \\norm{}{S_{\\Psi,\\Phi}}\\norm{}{f}\\leq C_2\\norm{\\Phi}{F}\n$$\nMoreover, we have\n$$\n\\sum_{n\\in\\mathbb{N}}|h_n(F)|^2=\\sum_{n\\in\\mathbb{N}}\\Big|\\int_X \\ip{f}{\\Psi(x)}\\ip{\\Phi(x)}{f_n}d\\mu(x)\\Big|^2\n$$\n$$\n=\\sum_{n\\in\\mathbb{N}}|\\ip{S_{\\Psi,\\Phi}f}{f_n}|^2\\asymp\\norm{}{S_{\\Psi,\\Phi}f}^2\n$$\nIt remains to show that there exists a bounded reconstruction operator $\\Omega:l^2(\\mathbb{N})\\rightarrow B$, such that 
$\\Omega(h(F))=F$.\nDefine $\\Omega(c):=C_\\Psi S_{\\Psi,\\Phi}^{-1}D_{\\widetilde f} c$. $\\Omega$ is bounded as \n$$\n\\norm{\\phi}{\\Omega(c)}=\\norm{\\phi}{C_\\Psi S_{\\Psi,\\Phi}^{-1}D_{\\widetilde f} c}\\leq C\\norm{}{S_{\\Psi,\\Phi}^{-1}D_{\\widetilde f} c}\\leq C\\norm{2}{c}\n$$\nMoreover, $\\Omega$ reproduces $F=C_\\Psi f$ as\n$$\n\\Omega(h(F))=C_\\Psi S_{\\Psi,\\Phi}^{-1}\\Big(\\sum_{n\\in\\mathbb{N}}\\int_X F(x)\\overline{\\phi_n(x)}d\\mu(x)\\widetilde f_n\\Big)\n$$\n$$\nC_\\Psi S_{\\Psi,\\Phi}^{-1}\\Big(\\sum_{n\\in\\mathbb{N}}\\int_X C_\\Psi f(x)\\ip{\\Phi(x)}{f_n}d\\mu(x){\\widetilde f}_n\\Big)\n=C_\\Psi S_{\\Psi,\\Phi}^{-1}\\Big(\\sum_{n\\in\\mathbb{N}}\\ip{S_{\\Psi,\\Phi}f}{f_n}{\\widetilde f}_n\\Big)\n$$\n$$\nC_\\Psi S_{\\Psi,\\Phi}^{-1}S_{\\Psi,\\Phi}f=C_\\Psi f=F\n$$\n\\hfill$\\Box$\\\\\n\\xxl{\n\\begin{itemize}\n\\item Can we use this for a Gelfand triplet \n$$ B \\subseteq \\H \\subseteq B'$$\nto construct Banach frames?\n\\item In this setting a reproducing pair $\\Phi \\subseteq B$ and $ \\phi \\subseteq B'$ makes sense, and gives a reproducing pair for $\\H$, right?\n\\end{itemize}\n}\n\n\n\nFinally, we give a characterization of the reproducing kernel of a reproducing pair via atomic decompositions (in analogy to Theorem \\ref{charact-of-RKHS})\nand show their pointwise square summability.\n\\begin{theorem}\\label{rep-kernel-repres}\nLet $(\\Psi,\\Phi)$ be a reproducing pair.\nThe reproducing kernel can be written as \n\\begin{equation}\\label{kernel-characterization}\nK_{\\Psi,\\Phi}(x,y)=\\sum_{n\\in\\mathbb{N}}\\psi_n(x)\\overline{\\phi_n(y)}\n\\end{equation}\nwhere $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ and $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ are atomic decompositions of $({\\sf Ran}\\, C_\\Psi,\\norm{\\Phi}{\\cdot})$ and \n$({\\sf Ran}\\, C_\\Phi,\\norm{\\Psi}{\\cdot})$ respectively.\n\\end{theorem}\n\\textbf{Proof:\\ } Let $\\{e_n\\}_{n\\in\\mathbb{N}}$ be an ONB of $\\H$ and define $g_n=S_{\\Psi,\\Phi}e_n$, then 
$\\{g_n\\}_{n\\in\\mathbb{N}}$ is a Riesz basis for $\\H$. \nIt holds\n$$\nK_{\\Psi,\\Phi}(x,y)=\\ip{S_{\\Psi,\\Phi}^{-1}\\Phi(y)}{\\Psi(x)}=\n\\Big\\langle S_{\\Psi,\\Phi}^{-1}\\sum_{n\\in\\mathbb{N}}\\ip{\\Phi(y)}{\\widetilde g_n}g_n,\\sum_{n\\in\\mathbb{N}}\\ip{\\Psi(x)}{e_k}e_k\\Big\\rangle\n$$\n$$\n=\\sum_{k,n\\in\\mathbb{N}}\\ip{\\Phi(y)}{\\widetilde g_n}\\ip{e_k}{\\Psi(x)}\\langle S_{\\Psi,\\Phi}^{-1}g_n,e_k\\rangle\n=\\sum_{k,n\\in\\mathbb{N}}\\ip{\\Phi(y)}{\\widetilde g_n}\\ip{e_k}{\\Psi(x)}\\langle e_n,e_k\\rangle\n$$\n$$\n=\\sum_{k,n\\in\\mathbb{N}}\\ip{e_n}{\\Psi(x)}\\overline{\\ip{\\widetilde g_n}{\\Phi(y)}}=\\sum_{k,n\\in\\mathbb{N}}\\psi_n(x)\\overline{\\phi_n(y)}\n$$\nwith $\\psi_n:=\\ip{e_n}{\\Psi(\\cdot)}$ and $\\phi_n:=\\ip{\\widetilde g_n}{\\Phi(\\cdot)}$. The result then follows by Proposition \n\\ref{rep-pair-atomic-dec}.\\hfill$\\Box$\\\\\n\n\\begin{corollary}\nUnder the same assumptions as in Theorem \\ref{rep-kernel-repres}, \nwe have that $\\sum_{n\\in\\mathbb{N}}|\\psi_n(x)|^2<\\infty$ and $\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2<\\infty$ \n\\end{corollary}\n\\textbf{Proof:\\ } Let $x\\in X$\n$$\n\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2=\\sum_{n\\in\\mathbb{N}}|\\ip{\\widetilde g_n}{\\Phi(x)}|^2\\leq B\\norm{}{\\Phi(x)}^2<\\infty\n$$\nThe argument for $\\psi_n$ is the same.\n\\hfill$\\Box$\\\\ \n\n\n\\textcolor{red}{conjectures:\n\\begin{proposition}\nLet $(B,B')$ be a pair of mutually dual, separable RKBS, then there exists an atomic decomposition of $B$ such that\n$$\n\\sum_{n\\in\\mathbb{N}}|\\psi_n(x)|^2<\\infty\n$$\n\\end{proposition}\n\\begin{proposition}\nLet $(B,B')$ be a pair of mutually dual, separable RKBS, and the reproducing kernel is given by\n$$\nK(x,y)=\\sum_{n\\in\\mathbb{N}}\\psi_n(x)\\overline{\\phi_n(y)}\n$$\nsatisfying\n$$\n\\sum_{n\\in\\mathbb{N}}|\\psi_n(x)|^2<\\infty\n$$\n$$\n\\sum_{n\\in\\mathbb{N}}|\\phi_n(x)|^2<\\infty\n$$\nThen there exist a reproducing pair $(\\Psi,\\Phi)$ with 
....\n\\end{proposition}\n}\n\n\n\\begin{proposition}\nLet $(B,B')$ be a pair of separable RKBS defined on $(X,\\mu)$ with $B'$ being the conjugate dual space of $B$ w.r.t. the $L^2$ duality pairing.\nAssume that $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ is an atomic decompositions of $B$ and $\\{\\phi_n\\}_{n\\in\\mathbb{N}}$ a Banach frame for $B$ with\n$B^\\natural=B^\\flat=l^2$. \nThen there exists a reproducing pair $(\\Psi,\\Phi)$ such that ${\\sf Ran}\\, C_\\Psi=B$ and ${\\sf Ran}\\, C_\\Phi= B'$ as sets with equivalent \nnorms\n\\end{proposition}\n\\textbf{Proof:\\ } Let $\\{g_n\\}_{n\\in\\mathbb{N}}$, $\\{h_n\\}_{n\\in\\mathbb{N}}$ be frames of $\\H$, such that ${\\sf Ran}\\, C_g={\\sf Ran}\\, \\lambda$ and ${\\sf Ran}\\, C_h={\\sf Ran}\\, \\gamma$. As \n${\\sf Ran}\\, \\lambda$ and ${\\sf Ran}\\, \\gamma$ are closed by the atomic decomposition and Banach frame assumptions, it follows by Corollary \\ref{exist-frame-for-RKHS}\nand Remark \\ref{discrete-subspace-rkhs} that such \nframes always exist. Define $\\Psi$ via \n$\\ip{\\widetilde g_n}{\\Psi(x)}=\\psi_n(x)$ and $\\Phi$ via $\\ip{\\widetilde h_n}{\\Psi(x)}=\\phi_n(x)$. First we have to show that $\\Psi(x)$ is \nwell-defined. Let $f\\in\\H$, it holds\n$$\n|\\ip{f}{\\Psi(x)}|=\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{g_n}\\psi_n(x)\\Big|\\leq M_x\\Big\\|\\sum_{n\\in\\mathbb{N}}\\ip{f}{g_n}\\psi_n\\Big\\|_B\n$$\n$$\n\\leq M_x' \\|C_g f\\|_{2}=M_x''\\|f\\|\n$$\nwhere we have used that point evaluation is continuous and $\\{\\psi_n\\}_{n\\in\\mathbb{N}}$ defines an atomic decomposition. 
\n$$\n|\\ip{f}{\\Phi(x)}|=\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{h_n}\\phi_n(x)\\Big|\\leq M_x\\Big\\|\\sum_{n\\in\\mathbb{N}}\\ip{f}{h_n}\\phi_n\\Big\\|_{B'}\n$$\n$$\n= M_x\\sup_{\\norm{B}{F}=1}\\Big|\\sum_{n\\in\\mathbb{N}}\\ip{f}{h_n}\\phi_n(F)\\Big|\\leq M_x\\sup_{\\norm{B}{F}=1}\\norm{l^2}{C_h f}\\norm{2}{\\{\\phi_n(F)\\}_{n\\in\\mathbb{N}}}\n$$\n$$\n\\leq M_x'\\norm{}{f}\\sup_{\\norm{B}{F}=1}\\norm{B}{F}=M_x'\\norm{}{f}\n$$\nwhere we used Cauchy-Schwarz inequality, the upper frame inequality and the upper inequality from the definition of a Banach frame.\nHence, Riesz representation theorem assures the existence and uniqueness of $\\Psi(x)$ for every $x\\in X$. The same construction can \nbe made to define $\\Phi$.\nIt remains to show that $S_{\\Psi,\\Phi}\\in GL(\\H)$\n$$\n\\ip{S_{\\Psi,\\Phi}f}{g}=\\int_X \\ip{f}{\\Psi(x)}\\ip{\\Phi(x)}{g}d\\mu(x)\n$$\n$$\n=\\int_X \\sum_{n\\in\\mathbb{N}}\\ip{f}{g_n}\\psi_n(x)\\sum_{k\\in\\mathbb{N}}\\overline{\\phi_k(x)}\\ip{h_k}{g}d\\mu(x)\n$$\n$$\n=\\sum_{k\\in\\mathbb{N}}\\int_X \\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n(x)\\Big)\\overline{\\phi_k(x)}d\\mu(x)\\ip{h_k}{g}\n$$\n \\textcolor{red}{why are we allowed to interchange summation and integration???}\n \n We will now show that $O:{\\sf Ran}\\,\\Lambda\\rightarrow{\\sf Ran}\\, \\Gamma$ defined by \n$$\n(O c)_k:=\\int_X \\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n(x)\\Big)\\overline{\\phi_k(x)}d\\mu(x)= \\gamma_k\\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big)\n$$\nis bounded and bijective. 
Therefore let $c\\in {\\sf Ran}\\,\\Lambda$, then $O$ is bounded as\n$$\n\\sum_{k\\in\\mathbb{N}}|(O c)_k|^2=\\sum_{k\\in\\mathbb{N}}\\Big|\\gamma_k\\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big)\\Big|^2\\leq M \\Big\\|\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big\\|_B\\leq M'\\norm{2}{c}\n$$\nLet $F\\in B$ such that $F=\\sum_{n\\in\\mathbb{N}}c_n \\psi_n$, then\n$$\n\\sum_{k\\in\\mathbb{N}}|(O c)_k|^2=\\sum_{k\\in\\mathbb{N}}\\Big|\\gamma_k\\Big(\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big)\\Big|^2\\geq m \\Big\\|\\sum_{n\\in\\mathbb{N}}c_n \\psi_n\\Big\\|_B\n$$\n$$\n= m \\Big\\|\\sum_{n\\in\\mathbb{N}}\\lambda_n(F) \\psi_n\\Big\\|_B \\geq m'\\norm{2}{\\Lambda(F)}\n$$\n\nAs $B=\\big\\{\\sum_{n\\in\\mathbb{N}}c_n\\psi_n:\\ c\\in{\\sf Ran}\\,\\Lambda\\big\\}$ it follows that $O$ is surjective.\n\n\nFinally, $S_{\\Psi,\\Phi}=D_hO C_g\\in GL(\\H)$ as $C_g:\\H\\rightarrow {\\sf Ran}\\, \\Lambda={\\sf Ran}\\, C_g$ and $D_h:{\\sf Ran}\\, \\Gamma={\\sf Ran}\\, C_h\\rightarrow \\H$ \nare bijective.\n\\end{comment}\n\n\\section{Conclusion}\nThe results of this paper suggest changing the usage of some notions in frame theory. We have shown that any frame can be decomposed into a discrete and a strictly continuous part. \nIn this light, it is reasonable to use the term continuous (semi-)frame only if it is actually strictly continuous, and the term semi-continuous (resp. discrete) frame otherwise.\nMoreover, since the underlying measure space of a frame with finite redundancy is atomic, all efforts to generalize Riesz bases to general measure spaces are condemned to failure from the beginning.\n\nWe have investigated the redundancy of (semi-)frames in detail and shown that, in this regard, upper semi-frames may behave essentially differently from systems satisfying the lower frame bound.
It is an open question to us whether a result similar to Theorem \\ref{reproduced-result} can be proven for the redundancy of a reproducing pair defined in \\eqref{redund-rep-pair}.\n\nAnother interesting topic for future research is to find and study alternative notions of redundancy for continuous frames. A promising approach that may be adapted can be found in \\cite{bacahela06}.\nStudying the dependence on the measure space should thereby remain a key objective.\n\nTo sum up, we hope to have emphasized the fundamental importance of RKHSs for analysis\/synthesis processes like frames or reproducing pairs.\n\n\\section*{Appendix}\n\n\\textbf{Proof of Lemma \\ref{not-atomic-non-atomic}:} Ad $(i)$: See \\cite{fi72}.\n\nAd $(ii)$: Let $(X,\\mu)$ be non-atomic. Let us assume on the contrary that \nfor every measurable set $A\\subset X$ with \n$\\mu(A)>0$ there exists an atom $B\\subset A$ and let $\\{A_n\\}_{n\\in\\mathcal{I}}\\subset X$ be a countable partition of $X$ by sets of finite measure. We will show that each $A_n$ can be partitioned into atoms and null sets, a contradiction. Assume without loss of generality that $\\mu(A_{1})>0$. By assumption, there exists an atom\n$B_1\\subset A_{1}$. \nIf $\\mu(B_1)=\\mu(A_{1})$, then $A_{1}$ is an atom. If $0<\\mu(B_1)<\\mu(A_{1})$, then\n$\\mu(A_{1}\\backslash B_1)>0$. Hence, there exists an atom $B_2\\subset A_{1}\\backslash B_1$ and the preceding\nargument can be repeated. If one has\n$\\mu\\big(A_{1}\\backslash \\big(\\bigcup_{k=1}^KB_k\\big)\\big)>0$ for all iteration steps $K$, then $\\mu_K:=\\mu\\big(\\bigcup_{k=1}^K B_k\\big)$\ndefines a strictly increasing sequence, bounded by $\\mu(A_{1})$.\nHence, $\\mu_K$ converges to some $\\mu^\\ast$, and the limit equals $\\mu(A_{1})$.
\nIndeed, if $\\mu^\\ast<\\mu(A_1)$ then, by assumption,\nthere exists an atom $B^\\ast\\subset A_{1}\\backslash \\bigcup_{k\\in\\mathbb{N}} B_k$ and\n$$\\mu\\Big(\\bigcup_{k\\in\\mathbb{N}} B_k\\cup B^\\ast\\Big)>\\mu^\\ast,$$ \na contradiction.\nConsequently,\n$\nA_{1}=\\bigcup_{k\\in\\mathbb{N}} B_k\\cup N,$\nwhere $N=A_{1}\\backslash \\bigcup_{k\\in\\mathbb{N}} B_k$ is of measure zero. In particular, we have constructed a \npartition of $A_{1}$ consisting of atoms and null sets.\nRepeating this argument for all $A_n$, $n\\in\\mathcal{I}$, with $\\mu(A_n)>0$ shows that $(X,\\mu)$ is atomic, a\ncontradiction. \\hfill$\\Box$\\\\\n\n\n\n \n\n\\section*{Acknowledgement}\nThis work was funded by the Austrian Science Fund (FWF) START-project FLAME ('Frames and\nLinear Operators for Acoustical Modeling and Parameter Estimation'; Y 551-N13).\n\nThe authors would like to thank Jean-Pierre Antoine for fruitful discussions on the physical interpretation\nof the result on the redundancy of continuous frames.\n\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \n The emergence of atomically thin, single-layer graphene spawned a new class of materials, known as two-dimensional (2D) materials~\\cite{Xu2013, Novoselov2011}. These extraordinary 2D materials have attracted significant attention within the scientific community due to their wide range of properties - from large band-gap insulators to the very best conductors, the mechanically tough to soft and malleable, and semi-metals to topologically insulating~\\cite{Singh2015, Paul2017,Blonsky2015,Akiyama2021}. The diverse pool of properties that 2D materials possess promise many novel next-generation device applications in nanoelectronics, quantum computing, field-effect transistors, microwave and terahertz photonics, and catalysis~\\cite{Rode2017, Xu2015, Yu2014, Kang2013, Amani2014, Li2019, Luo2016, Yu2014}. 
Despite the excitement surrounding these promising materials, surprisingly few 2D materials are used in industry. Roughly 55 of the $>$5,000 theoretically predicted 2D materials have been experimentally synthesized~\\cite{Mounet2018, Ashton2017, c2db, Singh2014, Zhou2019}.\n \n Of the various methods used to synthesize 2D materials, substrate-assisted methods such as chemical vapor deposition result in large-area and low-defect flakes at a reasonable cost per mass~\\cite{Novoselov2012}. Substrate-assisted methods have the added benefit of being able to synthesize 2D materials that have non-van der Waals (vdW) bonded bulk counterparts. On the other hand, exfoliation techniques, like mechanical exfoliation~\\cite{Singh2015}, can only be used to generate 2D flakes from vdW-bonded bulk counterparts. Currently, substrate-assisted synthesis of 2D materials relies on expensive trial-and-error processes requiring significant experimental effort and intuition for choosing the substrate, precursors, and the growth conditions (substrate temperatures, growth rate, etc.), resulting in slow progress in realizing and utilizing these materials. Furthermore, the properties of 2D materials can be dramatically altered by placing them on substrates. For example, the mobility of carriers in 2D-MoS$_2$ is reduced by more than an order of magnitude by placing it on a sapphire substrate~\\cite{singh2015al2o3}. To enable the functionalization and to assist in the selection of substrates for synthesis, a detailed understanding of the substrate-assisted modification of energetic, physical, and electronic properties of 2D materials is required. \n \n In this work, we present the $Hetero2d$ workflow package inspired by existing community workflow packages. $Hetero2d$ is tailored to address scientific questions regarding the stability and properties of 2D-substrate heterostructured materials.
$Hetero2d$ provides automated routines for the generation of low-lattice mismatched heterostructures for arbitrary 2D materials and substrate surfaces, the creation of vdW-corrected density-functional theory (DFT) input files, the submission and monitoring of simulations on computing resources, and the post-processing of the key parameters to compute, namely, (a) the interface interaction energy of 2D-substrate heterostructures, (b) the identification of substrate-induced changes in the interfacial structure, and (c) charge doping of the 2D material. The 2D-substrate information generated by our routines is stored in a MongoDB database tailored for 2D-substrate heterostructures.\n \n As an example, we demonstrate the use of $Hetero2d$ in screening for substrate surfaces that stabilize the following four 2D materials - $2H$-MoS$_2$, $1T$- and $2H$-NbO$_2$, and hexagonal-ZnTe. We considered the low-index planes of a total of 50 cubic metallic materials as potential substrates. Using the $Hetero2d$ workflow, we determine that Cu, Hf, Mn, Nd, Ni, Pd, Re, Rh, Sc, Ta, Ti, V, W, Y, and Zr substrates sufficiently stabilize the formation energies of these 2D materials, with binding energies in the range of $\\sim$0.1 -- 0.6 eV\/atom. Upon examining the $z$-separation, the charge transfer, and the electronic density of states at the 2D-substrate interface using post-processing tools of $Hetero2d$, we find a covalent type bonding at the interface, which suggests that these substrates can be used as contact materials. \\href{https:\/\/github.com\/cmdlab\/Hetero2d}{Hetero2d} is shared on GitHub as an open-source package under the GNU license. \n \n\\section{DFT Approach to Identifying Stable 2D-Substrate Heterostructures}\n \n 2D materials are inherently meta-stable materials and are often created by peeling 2D films from layered, vdW bonded bulk counterparts. Their meta-stability arises from the removal of the vdW bonds between the individual flakes. 
However, the vdW bonds are an order of magnitude weaker than the in-plane covalent or ionic bonds of 2D materials; thus, many 2D materials can remain stable at room temperature or above. A quantitative measure of the stability of 2D materials to remain as a free-standing 2D film is given by the formation energy, $\\Delta E_{\\mathrm{vac}}^f$, with respect to the bulk phase\n \n \\begin{equation}\n \t\\label{eq:Eform}\n \t\\begin{aligned}[t]\n \t\t\\hspace*{-1.5cm} \\Delta E_{\\mathrm{vac}}^f &= \\dfrac{ E_{\\mathrm{2D}}}{ N_{\\mathrm{2D}} } - \\dfrac{E_{\\mathrm{3D}}}{N_{\\mathrm{3D}}},\\\\\n \t\\end{aligned}\n \\end{equation} where $E_{\\mathrm{2D}}$\\ is the energy of a 2D material in vacuum, $E_{\\mathrm{3D}}$\\ is the energy of the bulk counterpart of the 2D material, and $N_{\\mathrm{2D}}$\\ and $N_{\\mathrm{3D}}$\\ are the number of atoms in the unit cells of the 2D material and its bulk counterpart, respectively. \n \n The $\\Delta E_{\\mathrm{vac}}^f$\\ of a 2D material indicates the stability of a 2D flake to retain the 2D form over its bulk counterpart, where the higher the $\\Delta E_{\\mathrm{vac}}^f$, the larger the driving force to lower the free energy. Singh et al. and others have shown that when the $\\Delta E_{\\mathrm{vac}}^f$\\ < 0.2 eV\/atom, the 2D materials are stable as a free-standing film, but for larger $\\Delta E_{\\mathrm{vac}}^f$'s they are highly unstable and may only be synthesized using substrate-assisted methods~\\cite{Singh2015, c2db}. \n \n \n For substrate surfaces to stabilize a 2D material during the growth process, the 2D-substrate heterostructure should be energetically stable. Thus the interactions between the 2D material and the substrate surface have to be attractive in nature.
This interaction energy, known as the binding energy, can be estimated as $\\Delta E_{\\mathrm{b}} = (E_{\\mathrm{2D}} + E_{\\mathrm{S}} - E_{\\mathrm{2D+S}} )\/N_{\\mathrm{2D}}$, where $E_{\\mathrm{2D+S}}$\\ is the energy of the 2D material adsorbed on the surface of a substrate, $E_{\\mathrm{S}}$\\ is the energy of the substrate slab, $E_{\\mathrm{2D}}$\\ is the energy of the free-standing 2D material, and $N_{\\mathrm{2D}}$\\ is the number of atoms in the unit cell of the 2D material. Note that strain is applied to the 2D material to place it on the substrate surface due to the lattice mismatch between the two lattices. For the 2D-substrate heterostructure interaction to be attractive, the $\\Delta E_{\\mathrm{b}}$\\ > 0. In addition, this $\\Delta E_{\\mathrm{b}}$\\ should be greater than the $\\Delta E_{\\mathrm{vac}}^f$\\ of 2D materials to ensure that the 2D materials remain in their 2D form on the substrate. Singh et al. have shown previously that the successful synthesis of a 2D material on a particular substrate surface is feasible when the adsorption formation energy, $\\Delta E_{\\mathrm{ads}}^f$\\ = $\\Delta E_{\\mathrm{vac}}^f$\\ - $\\Delta E_{\\mathrm{b}}$\\ < 0.\n\n\\section{Hetero2d: The High-Throughput Implementation of the DFT Approach}\n \\subsection{Introduction}\n \n The $Hetero2d$ package is an all-in-one workflow approach to model the heterostructures formed by arbitrary combinations of 2D materials and substrate surfaces. $Hetero2d$ can calculate the $\\Delta E_{\\mathrm{vac}}^f$, $\\Delta E_{\\mathrm{b}}$, and $\\Delta E_{\\mathrm{ads}}^f$\\ for each 2D-substrate heterostructure and store the relevant simulation parameters and post-processing results in a queryable MongoDB database that can be interfaced to and accessed by an application programming interface (API) or a web-portal. $Hetero2d$ is written in Python 3.6, a high-level programming language widely used on modern scientific computing resources.
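The three energies above combine into a simple screening criterion. The following is a minimal plain-Python sketch of that bookkeeping; the function names and the example numbers are illustrative only and are not part of the $Hetero2d$ API.

```python
# Minimal sketch of the stability criteria described above (illustrative only;
# these helpers are not part of the Hetero2d API). All energies in eV.

def formation_energy(E_2d, N_2d, E_3d, N_3d):
    """dE_vac^f = E_2D/N_2D - E_3D/N_3D: per-atom cost of the 2D form (Eq. 1)."""
    return E_2d / N_2d - E_3d / N_3d

def binding_energy(E_2d, E_slab, E_hetero, N_2d):
    """dE_b = (E_2D + E_S - E_2D+S)/N_2D; > 0 means an attractive interaction."""
    return (E_2d + E_slab - E_hetero) / N_2d

def adsorption_formation_energy(dE_f, dE_b):
    """dE_ads^f = dE_vac^f - dE_b; < 0 suggests substrate-assisted synthesis is feasible."""
    return dE_f - dE_b

# Hypothetical numbers for a 3-atom 2D cell on a slab:
dE_f = formation_energy(E_2d=-19.0, N_2d=3, E_3d=-40.0, N_3d=6)            # ~0.33 eV/atom
dE_b = binding_energy(E_2d=-19.0, E_slab=-100.0, E_hetero=-120.2, N_2d=3)  # ~0.40 eV/atom
print(adsorption_formation_energy(dE_f, dE_b) < 0)  # True: substrate stabilizes the film
```

With these (made-up) energies the binding energy exceeds the formation energy, so the adsorption formation energy is negative, which is the feasibility condition used for the substrate screening below.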
$Hetero2d$ utilizes \\textit{MPInterfaces}~\\cite{Mathew2016} routines and the robust high-throughput computational tools developed by the Materials Project~\\cite{atomate,Jain2013,Jain2015,Ong2013} (MP), namely \\textit{atomate}, \\textit{FireWorks}, \\textit{pymatgen}, and \\textit{custodian}.\n \n \n $Hetero2d$'s framework is inspired by \\textit{atomate}'s straightforward statement-based workflow design to perform complex materials science computations with pre-built workflows that automate various types of DFT calculations. Figure \\ref{fig:Figure1} illustrates the framework of our workflow within the $Hetero2d$ package. $Hetero2d$ extends some powerful high-throughput techniques available in existing community packages and combines them with new routines created for this work to generate 2D-substrate heterostructures, perform vdW-corrected DFT calculations, store the stability related data within a queryable database, and analyze key properties of the heterostructure. In the following sections, we discuss each step outlined in Figure \\ref{fig:Figure1} underscoring the new computational tools developed for $Hetero2d$.\n \n \n \\begin{figure}[!th]\n \\centering\n \\includegraphics[width=\\textwidth]{img\/WorkflowFlowChart.pdf}\n \\caption{Outline for our computational workflow used in our study to investigate the properties of the 2D-substrate heterostructures as coded in the $Hetero2d$ package. All structures imported from an external database are relaxed using vdW-corrected DFT with our parameters (discussed below) to maintain consistency. Boxes in gold denote a DFT simulation step and boxes in silver denote a pre-processing or post-processing step.}\n \\vspace{-0.25\\intextsep}\n \\label{fig:Figure1}\n \\end{figure}\n \n \\subsection{Workflow Framework}\n $Hetero2d$'s \\textit{atomate}-inspired framework utilizes the \\textit{FireWorks} package to break down and organize each task within a workflow. 
Workflows within the \\textit{FireWorks} package are organized into three task levels -- (1) workflow, (2) firework, and (3) firetask. A workflow is a set of fireworks with dependencies and information shared between them through the use of a unique specification file that determines the order of execution of each firework (FW) and firetask. Each FW is composed of one or more related firetasks designed to accomplish a specific task such as DFT structure relaxation. Firetasks are the lowest-level tasks in the workflow. Firetasks can be simple tasks such as writing files, copying files from a previous directory, or more complex tasks such as calling script-based functions to generate 2D-substrate heterostructures, starting and monitoring a DFT calculation, or post-processing a DFT calculation and updating the database. \n \n $Hetero2d$'s workflow, \\textit{get\\_heterostructures\\_stabilityWF}, shown in Figure \\ref{fig:Figure1}, has a total of five firework steps: (1) FW$_1$: the DFT structural optimization of the 2D material, (2) FW$_2$: the DFT structural optimization of the bulk counterpart of the 2D material, (3) FW$_3$: the DFT structural optimization of the substrate, (4) FW$_4$: the creation and DFT structural optimization of the substrate slab, and (5) FW$_5$: the generation and DFT structural optimization of the 2D-substrate heterostructure configurations. Each firework can be composed of a single firetask or many related firetasks. The tasks are gathered from the specification file that controls the execution of each firetask. For example, FW$_1$ is used to perform a vdW-corrected DFT structure optimization of the 2D material. Note that the DFT simulations are performed using the Vienna \\textit{ab initio} simulation package~\\cite{Kresse5, Kresse4, Kresse1, Kresse2, Kresse3}. 
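The execution order implied by these five fireworks can be pictured as a small dependency graph. The sketch below is a plain-Python illustration (using the standard-library graphlib), not Hetero2d's internal representation; FireWorks stores the equivalent parent-child links in the workflow's specification file.

```python
# Dependency structure of the five fireworks described above, expressed
# as {firework: set of fireworks that must finish first}.
from graphlib import TopologicalSorter

dependencies = {
    "FW1_relax_2d": set(),
    "FW2_relax_bulk": set(),
    "FW3_relax_substrate": set(),
    "FW4_substrate_slab": {"FW3_relax_substrate"},   # slab built from relaxed substrate
    "FW5_heterostructures": {"FW1_relax_2d", "FW2_relax_bulk", "FW4_substrate_slab"},
}

# One valid execution order; FW5 always runs last, FW4 always after FW3.
order = list(TopologicalSorter(dependencies).static_order())
```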
FW$_1$ is composed of firetasks which (1) write VASP input files to the job's launch directory, (2) write the structure file, (3) run VASP using \\textit{custodian}~\\cite{Ong2013} to perform just-in-time job management, error checking, and error recovery, (4) collect information regarding the location of the calculation and update the specification file, and (5) perform analysis and convergence checks for the calculation and store all pre-defined information about the calculation in our MongoDB database. A more detailed explanation of each firework in the workflow is discussed in section 3.6, \\textit{Workflow Steps}. \n \n \\subsection{Package Functionalities}\n As mentioned earlier, $Hetero2d$ adapts and extends existing community packages to assess the stability of 2D-substrate heterostructures. Table \\ref{tab:Table1} lists the functionalities of $Hetero2d$ compared with two other workflow-based packages, \\textit{MPInterfaces}~\\cite{Mathew2016} and \\textit{atomate}~\\cite{atomate}, highlighting new and common features within the three packages. \n \n \\begin{table}\n \\centering\n \\caption{A list of functionalities present in the $Hetero2d$ package compared with two other workflow-based packages \\textit{MPInterfaces} and \\textit{atomate}. $Hetero2d$ is the only workflow package with all the specific features needed to create 2D-substrate heterostructures using high-throughput computational methods.}\n \\begin{adjustbox}{width=0.5\\textwidth}\n \\begin{tabular}{|c|c|c|c|}\n \\hline\n & $Hetero2d$ & \\textit{MPInterfaces} & \\textit{Atomate} \\\\\n \\hline\n Structure processing & \\checked & \\checked & \\checked \\\\\n \\hline\n Error recovery & \\checked & \\checked & \\checked \\\\\n \\hline\n Database integration & \\checked & \\checked & \\checked \\\\\n \\hline\n \\textit{FireWorks} compatible & \\checked & & \\checked \\\\\n \\hline\n 2D hetero. routines & \\checked & \\checked & \\\\\n \\hline\n 2D hetero. 
workflow & \\checked & & \\\\\n \\hline\n 2D post-processing & \\checked & & \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:Table1}\n \\end{table}\n \n All three packages utilize the \\textit{pymatgen} package to perform structure processing tasks. \\textit{Pymatgen} is used to perform structure-manipulation processes such as reducing\/increasing the simulation cell size, creating a vacuum, or creating a slab during the execution of the workflow. Throughout $Hetero2d$, we utilize \\textit{pymatgen} to handle structure manipulation for (a) the bulk materials and (b) basic pre-\/post-processing of structures and generation of files for the DFT calculations. Within $Hetero2d$, \\textit{pymatgen}'s structure-manipulation tools are used to create conventional unit cells for the substrate and create the substrate slab surface. Additionally, we have integrated \\textit{pymatgen}'s structure analysis modules to decorate the fireworks in the workflow with structural information for each input structure to populate our database. The pre-processing enables one to differentiate crystal phases with similar compound formulas, easily reference and sort data within the database, and perform analysis in later fireworks. \n \n All three packages use the \\textit{custodian} package~\\cite{Ong2013} to perform error recovery. Error recovery routines are pivotal for any workflow package to reduce the need for human intervention and correct simple run-time errors with pre-defined functions. Additionally, \\textit{custodian} alerts the user if an unrecoverable error has occurred.\n \n Database integration, another functionality present in all three packages, stores and organizes the vast amount of information generated by each calculation. \n \n Only $Hetero2d$ and \\textit{atomate} are \\textit{FireWorks} compatible, whereas \\textit{MPInterfaces} uses the Python package \\textit{fabric} to launch jobs remotely over SSH. 
\\textit{FireWorks} is a single package used to define, manage, and execute scientific workflows, with built-in failure-detection routines capable of concurrent job execution and remote job tracking over an arbitrary number of computing resources, accessible from a clean and flexible Python API. \n \n Routines used to automate the generation of 2D-substrate heterostructures given user constraints are available in $Hetero2d$ and \\textit{MPInterfaces}. \\textit{MPInterfaces} implements a mathematical algorithm developed by Zur et al.~\\cite{Zur1984} for generating supercells of lattice-matched heterostructures given two arbitrary lattices and user-specified tolerances for the lattice-mismatch and heterostructure surface area. $Hetero2d$ incorporates functions from \\textit{MPInterfaces} to create 2D-substrate heterostructures and enable our package to utilize \\textit{FireWorks}, with which \\textit{MPInterfaces} is currently incompatible. Additionally, by incorporating these routines in $Hetero2d$, we can modify the function to return critical information regarding the 2D-substrate heterostructures that is not returned by the \\textit{MPInterfaces} function. Our 2D-substrate heterostructure function returns the strain of the 2D material along the \\textbf{a} and \\textbf{b} lattice vectors, the angle mismatch between the \\textbf{ab} lattice vectors of the substrate and the 2D material, and the scaling matrix used to generate the aligned 2D-substrate heterostructures. \n \n \n \n The 2D-substrate heterostructure workflow and post-processing routines are uniquely available in $Hetero2d$. The workflow automates all steps needed to study 2D-substrate heterostructure stability and properties via the DFT method. The post-processing routines produce a curated database from which all calculation results can be viewed and additional analysis or calculations performed. 
\n \n \\subsection{Default Computational Parameters}\n \\textit{CMDLInterfaceSet} is based on \\textit{pymatgen}'s \\textit{VaspInputSet} class, which creates custom input files for DFT calculations. Our new class \\textit{CMDLInterfaceSet} has all the functionality of the parent \\textit{pymatgen} class but is tailored to perform structural optimizations of 2D-substrate heterostructures, and implements vdW-corrections, on-the-fly dipole corrections for slabs, generation of a custom $k$-point mesh grid density, and addition of selective dynamics tags for the 2D-substrate structures. All DFT calculations are performed using the projector-augmented wave method as implemented in the plane-wave code VASP~\\cite{Kresse5, Kresse4, Kresse1, Kresse2, Kresse3}. The vdW interactions between the 2D material and substrate are modeled using the vdW\u2013DF~\\cite{Rydber2003} functional with the optB88 exchange functional~\\cite{Klimes2011}. \n \n The \\textit{CMDLInterfaceSet} has a default energy cutoff of 520 eV used for all calculations to ensure consistency between structures that have the cell shape and volume relaxed and those that only have ionic positions relaxed. The default $k$-point grid density is automated using \\textit{pymatgen}~\\cite{Ong2013} routines to 20 $k$-points\/unit length, obtained by multiplying $\\frac{1}{\\textbf{a}}$ and $\\frac{1}{\\textbf{b}}$ by 20 and taking the nearest integer value. These settings were sufficient to converge all calculations to a total force per atom of less than 0.02 eV\/\\AA. Additional information regarding the default settings in the \\textit{CMDLInterfaceSet} and the convergence tests performed to benchmark our calculations is in sections 1 and 2 of the SI.\n \n \\subsection{Workflow Initialization and Customization}\n \n To use $Hetero2d$'s workflow, \\textit{get\\_heterostructures\\_stabilityWF}, we import the 2D structure, its bulk counterpart, and the substrate structure from existing databases through their APIs. 
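The automatic $k$-point rule described under the default parameters above amounts to a one-line calculation; the lattice constants below are illustrative values, not settings from this work.

```python
# 20 k-points/unit-length rule from above: multiply 1/a and 1/b by 20
# and take the nearest integer to get the in-plane grid divisions.
def kpoint_divisions(a, b, density=20):
    """In-plane k-point divisions for lattice constants a, b in Angstrom."""
    return round(density / a), round(density / b)

kpoint_divisions(3.16, 3.16)  # hexagonal cell with a = b = 3.16 Angstrom -> (6, 6)
```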
When initialized, the workflow can accept up to three structures: (1) the 2D structure, (2) the bulk counterpart of the 2D structure, and (3) the substrate structure in bulk or slab form. \n \n To perform the structure transformations that generate the substrate slabs or the 2D-substrate heterostructures, our workflow requires two dictionaries during initialization -- (1) the \\textit{h\\_params} dictionary and (2) the \\textit{slab\\_params} dictionary. Figure \\ref{fig:Figure2} is a code excerpt demonstrating the parameters one can supply to generate a 2D-substrate heterostructure on a (111) substrate slab surface. In Figure \\ref{fig:Figure2}, the \\textit{slab\\_params} dictionary generates a substrate slab with a vacuum spacing of 19 \\AA\\ and a substrate slab thickness of at least 12 \\AA. The \\textit{h\\_params} dictionary creates the lattice-matched, symmetry-matched 2D-substrate heterostructures with a 3.0 \\AA\\ $z$-separation distance between the 2D material and the substrate surface. The \\textit{h\\_params} dictionary also sets the maximum allowed lattice-mismatch along \\textbf{ab} to less than 5\\%, limits the surface area to less than 130 \\AA$^2$, and sets the selective dynamics tags in the DFT input file to relax all layers of the 2D material and the top two layers of the substrate slab. \n \n \n \\begin{wrapfigure}[11]{r}{0.55\\textwidth}\n \\vspace{-1.4\\intextsep}\n \\hspace*{-0.4\\columnsep}\\includegraphics[width=0.55\\textwidth]{img\/CodeExcerpt.pdf}\n \\vspace{-0.55\\intextsep}\n \\caption{Simplified workflow illustrating the setup necessary to create the 2D-substrate heterostructure workflows using \\textit{get\\_heterostructures\\_stabilityWF}, as used throughout this work. A full example Jupyter notebook is located in the SI.}\n \\vspace{10\\intextsep}\n \\label{fig:Figure2}\n \\end{wrapfigure}\n \n \n The workflow has commands for two VASP executables compiled with vdW-corrections for performing DFT calculations on (1) 2D materials and (2) 3D materials. 
The first executable is a custom executable that relaxes 2D materials with a large vacuum and prevents the vacuum from shrinking by fixing the cell length along the vacuum direction. The second executable allows the cell volume to change in all directions. Other optional arguments used to initialize the workflow include dipole correction for substrate slabs, tags for database entries, and avenues to modify the INCAR of each firework in the workflow. The parameters \\textit{vis} and \\textit{vis\\_i}, where $i$ = 2d, 3d2d, bulk, trans, or iface, are used to override the default \\textit{VaspInputSet} with one provided by the user. This can be done for all fireworks using \\textit{vis} or for a specific firework using \\textit{vis\\_i}. The parameters \\textit{uis} and \\textit{uis\\_i} can be set to change the default settings in the INCAR. The parameter \\textit{uis} will set the specified parameters for all INCARs in the workflow, while \\textit{uis\\_i} will set the INCAR parameters for the corresponding firework. Additional details regarding workflow customization options and the current functionality available in \\textit{Hetero2d} are discussed in SI section 3, along with an example Jupyter notebook.\n \n \\subsection{Workflow Steps}\n As mentioned previously, our workflow has five firework steps. Here, we discuss the pre-processing steps that occur when initializing the workflow, each firework, and the firetasks composing each firework for the 2D-substrate heterostructure workflow introduced in section 3.2, \\textit{Workflow Framework}.\n \n \n The first firework, FW$_1$, in the workflow optimizes the 2D material structure. During initialization of the workflow, the 2D material is centered within the simulation cell, crystallographic information regarding the structure is obtained, the \\textit{CMDLInterfaceSet} is initialized to create VASP input files, and a list of user-defined\/default tags is created for the 2D material. 
The structure, tags, and \\textit{CMDLInterfaceSet} are used to initialize the firework \\textit{HeteroOptimizeFW}, which performs the structure optimization. The default tags appended to the firework are the unique identification tags (provided to the workflow by the user), the crystallographic information, the workflow and firework names, and the structure's composition. In FW$_1$, \\textit{HeteroOptimizeFW} executes firetasks that -- (a) create directories for the firework, (b) write all input files initialized using \\textit{CMDLInterfaceSet}, (c) submit the VASP calculation to supercomputing resources to perform full structure optimization and monitor the calculation to correct errors, (d) run our \\textit{HeteroAnalysisToDb} class to store all information necessary for data analysis within the database, and (e) lastly pass the information to the next firework. Details regarding \\textit{HeteroAnalysisToDb} can be found in the next section.\n \n \n Similar to FW$_1$, FW$_2$ and FW$_3$ perform a full structural optimization for the bulk counterpart of the 2D material and the substrate, respectively. FW$_2$ and FW$_3$ differ from FW$_1$ only in the pre-processing steps. The step to center the 2D material is not performed; in addition, for FW$_3$ the conventional standard structure is utilized during pre-processing.\n \n \n FW$_3$ spawns a child firework, passing the optimized substrate structure to FW$_4$, which transforms the conventional unit cell of the substrate into a substrate slab using the \\textit{slab\\_params} dictionary and performs the structure optimization. When the workflow is initialized, FW$_4$ undergoes similar pre-processing steps, which are used to initialize the firework \\textit{SubstrateSlabFW} that creates a substrate slab from the substrate. 
\\textit{SubstrateSlabFW} is the firework that transforms the conventional unit cell of the substrate into a slab, sets the selective dynamics tags on the surface layers, and sets the number of compute nodes necessary to relax the substrate slab. The \\textit{slab\\_params} variable is the input dictionary that initializes \\textit{pymatgen}'s \\textit{SlabTransformation} module, which creates the substrate slab. All required and optional input arguments used in the \\textit{SlabTransformation} module must be supplied using this dictionary (key: value) format. This dictionary format is implemented to enable $Hetero2d$ to be flexible and extendable in future updates. Additionally, the \\textit{slab\\_params} dictionary is only required when creating a new substrate slab from a substrate. \n \n \n After the first four fireworks have been completed and successfully stored in the database, the fifth firework (FW$_5$) obtains the optimized structures and information from the previous fireworks and the specification file. FW$_5$ calls the \\textit{GenHeteroStructuresFW} firework to generate the 2D-substrate heterostructure configurations using \\textit{h\\_params} and spawns a firework to perform structure optimization for each configuration. The inputs required for the \\textit{h\\_params} dictionary are those required by $Hetero2d$'s \\textit{hetero\\_interfaces} function. This function attempts to find a matching lattice between the substrate surface and the 2D material. The parameters used to initialize \\textit{hetero\\_interfaces} are listed in the \\textit{h\\_params} dictionary shown in Figure \\ref{fig:Figure2} and the Jupyter notebook in the SI. \n \n \n Our function \\textit{hetero\\_interfaces} generates the 2D-substrate heterostructure configurations utilizing \\textit{MPInterfaces}'s interface matching algorithm. We developed \\textit{hetero\\_interfaces} to ensure functions within the workflow are compatible with \\textit{FireWorks}. 
Additionally, we can return key variables regarding the interface matching algorithm, such as the strain or angle mismatch, and store these values in our database. \\textit{MPInterfaces} is used to (a) generate heterostructures within an allowed lattice-mismatch and surface area of the supercell at any rotation between the 2D material and bulk material surface and (b) create distinct configurations in which the 2D material can be placed on the bulk material surface based on the Wyckoff positions of the near-interface atoms.\n \n \n FW$_5$ calls \\textit{GenHeteroStructuresFW}, which generates the 2D-substrate heterostructure configurations; the total number of configurations is computed, and each unique configuration is labeled from 0 to $n$-1, where $n$ is the total number of configurations, and stored under the \\textit{Interface Config} tag. For each configuration, a new firework is spawned to optimize the 2D-substrate heterostructure. The data generated within FW$_5$ are stored in the database.\n \n \n After all previous FWs have successfully converged, \\textit{HeteroAnalysisToDb} is called one final time to compute the $\\Delta E_{\\mathrm{vac}}^f$, $\\Delta E_{\\mathrm{b}}$, and $\\Delta E_{\\mathrm{ads}}^f$\\ for each heterostructure configuration generated by the workflow. The calculation of the $\\Delta E_{\\mathrm{vac}}^f$\\ references the simulations for the 2D material and its bulk counterpart. The bulk counterpart is simulated using a standard periodic simulation cell. The calculation of $\\Delta E_{\\mathrm{b}}$\\ references the 2D material, substrate slab, and 2D-substrate heterostructure simulations, which all employ a standard supercell slab model. The calculation of the $\\Delta E_{\\mathrm{ads}}^f$\\ references both $\\Delta E_{\\mathrm{b}}$\\ and $\\Delta E_{\\mathrm{vac}}^f$. 
Once each value is computed, all the information is curated and stored in the MongoDB database.\n\n \\subsection{Post-Processing Throughout Our Workflow} \n \n After each VASP simulation is complete, post-processing is performed within the calculation directory using our \\textit{HeteroAnalysisToDb} class, an adaptation of \\textit{atomate}'s \\textit{VaspToDb} module. It is used to parse the calculation directory, perform error checks, and curate a wide range of quantities -- calculation parameters and output, energetic parameters, and structural information -- for storage in our MongoDB database. \\textit{HeteroAnalysisToDb} detects the type of calculation performed within the workflow and parses the calculation accordingly. \\textit{HeteroAnalysisToDb} has the same functionality as \\textit{VaspToDb} with additional analyzers developed for 2D-substrate heterostructures that -- (a) identify layer-by-layer interface atom IDs for the substrate and 2D material, (b) store the initial and final configuration of all structures, (c) compute the $\\Delta E_{\\mathrm{vac}}^f$, $\\Delta E_{\\mathrm{b}}$, and $\\Delta E_{\\mathrm{ads}}^f$, (d) store the results obtained from the interface matching, and (e) ensure each database entry has any custom tags added to the database, such as those appended by the user. The workflow design ensures that the DFT simulations for each 2D-substrate surface pair are performed independently of each other; as soon as all simulations for a given pair are completed, the data are analyzed and curated in the MongoDB database.\n\n\\section{An Example of Substrate Screening via Hetero2d}\n \\subsection{Materials Selection}\n \n To demonstrate the functionalities of the $Hetero2d$ package, we screened for suitable substrates for four 2D materials, namely $2H$-MoS$_2$, $1T$-NbO$_2$, $2H$-NbO$_2$~\\cite{c2db}, and hexagonal-ZnTe~\\cite{Torrisi2020}. 
The four 2D materials under consideration possess hexagonal symmetry, as illustrated in Figure \\ref{fig:2ds}. \n \n MoS$_2$ was selected because there is a large amount of experimental and computational~\\cite{Chen2013, Zhuang2013b, Yun2012, singh2015al2o3} data available in the literature, which we can use to validate the computed properties from our $Hetero2d$ workflow. The hexagonal-ZnTe~\\cite{Torrisi2020}, $1T$-NbO$_2$, and $2H$-NbO$_2$~\\cite{c2db} are yet to be synthesized. In addition, these particular 2D materials have diverse predicted properties; see Table \\ref{tab:2dProp}. It is noteworthy that hexagonal-ZnTe has been predicted to be an excellent CO$_2$ reduction photocatalyst~\\cite{Torrisi2020}. \n \n \n \\begin{table}[!htbp]\n \\centering\n \\caption{The electronic properties and band gap of the four selected 2D materials used in this work. FM represents ferromagnetic.}\n \\begin{adjustbox}{width=\\textwidth}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n 2D Mat. & MoS$_2$ & $1T$-NbO$_2$ & $2H$-NbO$_2$ & ZnTe \\\\\n \\hline\n Classification & Semiconductor & FM~\\cite{c2db} & FM~\\cite{c2db} & Semiconductor\\\\\n \\hline\n Band Gap (eV) & 1.88~\\cite{Gusakova2017} & 0.0~\\cite{c2db} & 0.0~\\cite{c2db} & 2.88~\\cite{Torrisi2020} \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:2dProp}\n \\end{table}\n \n \\begin{table}[!htbp]\n \\centering\n \\caption{A list of matching substrate surfaces for the four 2D materials, given our heterostructure search criteria discussed in the next section.}\n \\begin{adjustbox}{width=\\textwidth}\n \\begin{tabular}{|l|c|c|}\n \\hline\n 2D Mat. 
& (111) Substrate & (110) Substrate \\\\\n \\hline\n MoS$_2$ & Hf, Ir, Pd, Zr, Re, Rh & Ta, Rh, Sc, Pb, W, Y \\\\\n \\hline\n $1T$-NbO$_2$ & Ni, Mn, V, Nd, Pd, Ir, Hf, Zr, Cu & Rh, Ta, Sc, W \\\\\n \\hline\n $2H$-NbO$_2$ & Ni, Mn, Nd, Ir, Hf, Al, Te, Ag, Ti, Cu, Au & Ta, Sc, W, Y, Rh \\\\\n \\hline\n ZnTe & Sr, Ni, Mn, V, Al, Ti, Cu & W\\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:iface}\n \\end{table}\n \n \n The properties of a 2D material can differ when placed on different Miller-index planes of the same substrate. Thus, we investigated all unique low-index substrate surfaces (with $h$, $k$, $l$ equal to 1 or 0) for these 2D materials. A material available in the Materials Project (MP)~\\cite{Ong2013} database was considered a potential substrate if it satisfied all of the following criteria: (a) it is metallic, (b) it is a cubic phase, (c) it has a single-element composition, (d) it has a valid ICSD ID~\\cite{ICSD} (and thus has been experimentally synthesized), and (e) it has an $E_{above\\ hull}<0.1$ eV\/atom. In total, 50 substrates queried from the MP database satisfy these criteria. \n \n \n \\begin{wrapfigure}[19]{r}{0.5\\textwidth}\n \\centering\n \\vspace{-1.25\\intextsep}\n \\includegraphics[width=0.5\\textwidth]{img\/StructureModels.pdf}\n \\vspace{-2\\columnsep}\n \\caption{Structure models illustrating the crystal structures of the 2D films. The top view demonstrates the hexagonal symmetry of each 2D material. The $1T$ and $2H$ phases of NbO$_2$ are labeled to distinguish the two phases.}\n \\label{fig:2ds}\n \\end{wrapfigure} \n \n The bulk counterpart of each 2D material is also obtained from the MP database. We query the database for bulk materials that have the same composition as the 2D material and select the structure with the lowest $E_{above\\ hull}$. SI Tables 1--3 contain additional reference information regarding all the optimized substrate slabs, 2D materials, and their bulk counterparts. 
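The five substrate-selection criteria above can be expressed as a simple filter. The sketch below applies them to hypothetical candidate records in plain Python; in practice the equivalent screening is performed by querying the MP database through its API, and the field names and IDs here are illustrative.

```python
# The five substrate-selection criteria above as a predicate over
# hypothetical candidate records (field names and IDs are illustrative).
def is_candidate_substrate(m):
    return (
        m["band_gap"] == 0.0                  # (a) metallic
        and m["crystal_system"] == "cubic"    # (b) cubic phase
        and m["nelements"] == 1               # (c) single-element composition
        and len(m["icsd_ids"]) > 0            # (d) experimentally synthesized
        and m["e_above_hull"] < 0.1           # (e) eV/atom, near the convex hull
    )

candidates = [
    {"formula": "Cu", "band_gap": 0.0, "crystal_system": "cubic",
     "nelements": 1, "icsd_ids": [1], "e_above_hull": 0.0},
    {"formula": "Si", "band_gap": 1.1, "crystal_system": "cubic",
     "nelements": 1, "icsd_ids": [2], "e_above_hull": 0.0},
]
substrates = [m["formula"] for m in candidates if is_candidate_substrate(m)]
```

Here Si is rejected by criterion (a), leaving only Cu as a candidate substrate.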
SI Table 1 contains information about the Materials Project material\\_id, $E_{above\\ hull}$, ICSD ID, crystal system, and Miller plane for the substrate surface. SI Table 2 contains information about the reference database ID, $\\Delta E^{f}_{vac}$ (eV\/atom), and crystal system for each 2D material, and SI Table 3 contains information about the reference database ID, $E_{above\\ hull}$, E$_{gap}$, and the crystal system for the bulk counterpart of the 2D material. \n \n \\subsection{Symmetry-Matched, Lattice-Matched 2D-Substrate Heterostructures}\n \n In this study, we focus our search for 2D-substrate heterostructures on substrate planes with indices $h$, $k$, $l$ equal to 0 or 1. The following studies focus on the heterostructures with the (111) and (110) substrate surfaces because we find that only these two Miller planes have an appreciable number of heterostructures. The (001) substrate plane resulted in only one heterostructure. \n \n Restricting our search for 2D-substrate matches to only the (111) and (110) surfaces yields a total of 4 (\\# of 2D materials) $\\times$ 2 (\\# of planes) $\\times$ 50 (\\# of substrates) = 400 potential 2D-substrate heterostructure combinations. As illustrated in Figure \\ref{fig:Workflow}, after introducing our constraints for the surface area to be $< 130$ \\AA$^2$ and the applied strain on the 2D material to be $< 5\\%$, a total of 49 2D-substrate heterostructure workflows are found. Table \\ref{tab:iface} lists all metallic substrates matching each of the 2D materials given our heterostructure criteria.\n \n \n \\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.5\\textwidth]{img\/WorkflowDataPipeline.pdf}\n \\caption{Schematic representing the materials selection process identifying stable 2D-substrate heterostructures using the $Hetero2d$ workflow. Tier 1 represents choosing 2D materials, substrates, and their surfaces. Tier 2 applies constraints on the surface area and lattice strain. 
Tier 3 shows the energetic stability of the heterostructures stored in the database.}\n \\vspace{-0.25\\intextsep}\n \\label{fig:Workflow}\n \\end{figure}\n \n \n Of the total 49 workflows, 33 correspond to the (111) substrate surfaces and 16 correspond to the (110) substrate surfaces. Generally, the (111) surface has more substrate matches than the (110) surface due to the intrinsic hexagonal symmetry of the (111) surface, which matches the hexagonal symmetry of the selected 2D materials. Each workflow generates between 2--4 2D-substrate heterostructure configurations for a given 2D-substrate surface pair, resulting in a total of 123 2D-substrate heterostructure configurations. Of those 2D-substrate heterostructures, 78 configurations, spanning a total of 29 workflows, stabilize the meta-stable 2D materials when placed upon the substrate slab. Additional details regarding these simulations can be found in section 4 of the SI.\n \n \\subsection{Stability of Free-Standing 2D Films and Adsorbed 2D-Substrate Heterostructures}\n \n \\begin{wrapfigure}[13]{r}{0.56\\textwidth}\n \\vspace{-1\\intextsep}\n \\hspace*{-0.75\\columnsep}\\includegraphics[width=0.56\\textwidth]{img\/FormationEnergy.pdf}\n \\vspace{-0.1\\intextsep}\n \\caption{The $\\Delta E_{\\mathrm{vac}}^f$\\ for 2D MoS$_2$ (\\tikzcircle[gray, fill=orange]{2pt}), \n $1T$-NbO$_2$ (\\tikzcircle[gray, fill=red]{2pt}), \n $2H$-NbO$_2$ (\\tikzcircle[gray, fill=green]{2pt}), and ZnTe (\\tikzcircle[gray, fill=blue]{2pt}). The $\\Delta E_{\\mathrm{vac}}^f$\\ is used to assess the thermodynamic stability of the free-standing 2D film with respect to its bulk counterpart. 
MoS$_2$ and ZnTe have relatively low $\\Delta E_{\\mathrm{vac}}^f$\\ while the $1T$ and $2H$ phases of NbO$_2$ have high $\\Delta E_{\\mathrm{vac}}^f$.}\n \\vspace{10\\intextsep}\n \\label{fig:form}\n \\end{wrapfigure}\n \n Figure \\ref{fig:form} shows the $\\Delta E_{\\mathrm{vac}}^f$\\ of the isolated, unstrained 2D materials with respect to their bulk counterparts. We find that the $\\Delta E_{\\mathrm{vac}}^f$\\ values for both MoS$_2$ and ZnTe are low, less than 0.2 eV\/atom. Both the $1T$ and $2H$ phases of NbO$_2$ possess high $\\Delta E_{\\mathrm{vac}}^f$, as shown by the red shaded region in Figure \\ref{fig:form}, making substrate-assisted synthesis the most feasible route to these 2D films. The $\\Delta E_{\\mathrm{vac}}^f$'s in Figure \\ref{fig:form} are consistent with prior computational~\\cite{c2db, Torrisi2020} and experimental work~\\cite{Lee2013}.\n \n \n Figures \\ref{fig:Eads}a and \\ref{fig:Eads}b show the $\\Delta E_{\\mathrm{ads}}^f$\\ for the four 2D materials on the (110) and (111) substrate surfaces, respectively. The black lines in Figure \\ref{fig:Eads} separate the 2D materials, while the shaded regions indicate stabilization of the 2D material on the substrate surface. When generating 2D-substrate heterostructures, the first challenge is finding a matching lattice between the 2D material and the substrate surface. The next challenge is identifying \"ideal\" or likely locations to place the 2D material on the substrate surface to generate stable low-energy heterostructures. To reduce the large number of in-plane shifts possible for a given 2D-substrate heterostructure, we selectively placed the 2D material on the substrate slab by enumerating combinations of high-symmetry points (Wyckoff sites) between the 2D material and the substrate slab, stacking the 2D material on top of these sites $z$ \\AA\\ away from the substrate surface. 
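The enumeration of stackings described above can be sketched as a product over high-symmetry sites; the site labels below are illustrative, not the actual Wyckoff sets of any particular 2D-substrate pair.

```python
# Enumerate candidate stackings by pairing high-symmetry (Wyckoff) sites
# of the 2D material with those of the substrate surface; each unique
# pairing becomes a labeled configuration 0..n-1. Site labels are illustrative.
from itertools import product

sites_2d = ["2h"]                      # high-symmetry sites of the 2D material
sites_substrate = ["1a", "2a", "3b"]   # high-symmetry sites of the substrate surface

configs = {
    i: {"2d_site": s2d, "substrate_site": ssub}
    for i, (s2d, ssub) in enumerate(product(sites_2d, sites_substrate))
}
# Three unique configurations, labeled 0, 1, and 2.
```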
Each unique 2D-substrate heterostructure configuration is represented by 0=$\\triangle$, 1=\\textbf{x}, 2=$\\circ$, and 3=$\\square$ in Figure \\ref{fig:Eads}.\n \n\t \n \t\\begin{figure}[t!]\n \t \\centering\n \\vspace{-1\\intextsep}\n \t \\includegraphics[width=\\textwidth]{img\/AdsEnergy.pdf}\n \\vspace{-2\\intextsep}\n \t \\caption{Adsorption formation energy, $\\Delta E_{\\mathrm{ads}}^f$, for the symmetry-matched, low lattice-mismatched (a) (110) and (b) (111) substrate surfaces. The rectangular symmetry of the (110) surface results in fewer matches, while the hexagonal symmetry of the (111) substrate surface results in numerous matches within the given constraints on the surface area and lattice strain. Negative $\\Delta E_{\\mathrm{ads}}^f$\\ values indicate stabilization of the 2D material. Each set of symbols (up to 4 points per substrate) represents the unique 2D-substrate configurations. }\n \\vspace{-1\\intextsep} \n \t \\label{fig:Eads}\n \t\\end{figure}\n \n \n The $\\Delta E_{\\mathrm{ads}}^f$\\ on the (110) surface is shown in Figure \\ref{fig:Eads}a. In the figure, 9 substrates yield a negative $\\Delta E_{\\mathrm{ads}}^f$, stabilizing the 2D materials. The $\\Delta E_{\\mathrm{ads}}^f$\\ appears to be correlated with the substrate on which the 2D material is placed; however, there are not enough data points in Figure \\ref{fig:Eads}a to distinguish the origin of this trend. Interestingly, when MoS$_2$ is placed on the (110) Ta substrate surface, the 2D material buckles, which likely increases the $\\Delta E_{\\mathrm{ads}}^f$\\ significantly above that on the other substrates. SI Figure 6 shows both configurations for MoS$_2$ on the (110) Ta substrate surface. An additional 5 2D-(110) substrate pairs were studied but are not shown in Figure \\ref{fig:Eads}a because the 2D material\/substrate interface becomes highly distorted or completely disintegrates. These cases are shown in SI Figure 4a and discussed in section 5 of the SI.
\n \n \n The (111) substrate surface matches for each 2D material are shown in Figure \\ref{fig:Eads}b, where 15 substrates result in $\\Delta E_{\\mathrm{ads}}^f$\\ $<$ 0. An additional 8 2D-substrate pairs, shown in SI Figure 4b, have 2D material\/substrate interfaces that disintegrate and are discussed in section 5 of the SI.\n \n \n A correlation between the substrate surface and the $\\Delta E_{\\mathrm{ads}}^f$\\ is more apparent for the (111) surface in Figure \\ref{fig:Eads}b due to the increased number of 2D-substrate pairs. For MoS$_2$ on Zr and Hf, the triangle configurations have $\\Delta E_{\\mathrm{ads}}^f$\\ significantly lower than the other configurations; see SI Figure 6 for structures of the three configurations. The lower $\\Delta E_{\\mathrm{ads}}^f$\\ is correlated with smaller bond distances between the substrate surface and the 2D material. In these lower-energy structures, we find that the $2h$ Wyckoff site of the 2D material is stacked on top of the $2a$ Wyckoff site of the substrate surface. The location of a 2D material on a substrate surface has previously been shown to influence the type of bonding present between the 2D material and substrate surface~\\cite{Singh2014a,Zhuang2017}.\n \n The $1T$ phase of NbO$_2$ on the Hf, Zr, and Ir substrates has larger differences in $\\Delta E_{\\mathrm{ads}}^f$\\ between configurations than the other 2D-substrate pairs. The differences in $\\Delta E_{\\mathrm{ads}}^f$\\ for $1T$-NbO$_2$ on Ir are partly due to some structural disorder of the 2D material caused by the O atoms bonding strongly with the substrate surface, shown in SI Figure 7. For both Hf and Zr, the differences in $\\Delta E_{\\mathrm{ads}}^f$\\ do not arise from structural disorder.
The $\\Delta E_{\\mathrm{ads}}^f$\\ of $1T$-NbO$_2$ on Hf and Zr is instead more strongly affected by the location of the 2D material on the substrate surface.\n \n $2H$-NbO$_2$ has two substrate surfaces, Ti and Au, where the $\\Delta E_{\\mathrm{ads}}^f$\\ varies strongly with the configuration of the 2D material on the substrate, unlike other 2D-substrate pairs for $2H$-NbO$_2$. $2H$-NbO$_2$ on Ti and Au has no structural distortions that explain the difference in $\\Delta E_{\\mathrm{ads}}^f$. For $2H$-NbO$_2$ on Ti, each configuration possesses a different $\\Delta E_{\\mathrm{ads}}^f$\\ arising from the unique placement of the 2D material on the substrate surface. The strong bonding between the 2D material and substrate surface may be due to the affinity of Ti for forming a metal oxide. SI Figure 8 shows each configuration for $2H$-NbO$_2$ on the (111) Ti substrate surface. For $2H$-NbO$_2$ on Au, the circle configuration has a lower $\\Delta E_{\\mathrm{ads}}^f$\\ due to the bottom layer of the $2H$-NbO$_2$ being stacked directly on the top layer of the Au substrate surface. \n \n \n The properties of MoS$_2$ have been studied both computationally and experimentally, and previous computational works~\\cite{Zhuang2013b, Singh2015} have found similar values for the $\\Delta E_{\\mathrm{vac}}^f$\\ of MoS$_2$. Chen et al. found that MoS$_2$ bonds more strongly to Ir than to Pd~\\cite{Chen2013}. This may explain the small structural modulations observed in our study for MoS$_2$ on the Ir (111) substrate surface, while no such modulation is observed for MoS$_2$ on the Pd (111) substrate surface. Additionally, the $z$-separation distance between the 2D material and substrate surface found in this work agrees well with the values of Chen et al., despite the use of a different functional: our $z$-separation distances are within 0.05 \\AA\\ for Ir and 0.16 \\AA\\ for Pd~\\cite{Chen2013}.
\n\n \\subsection{Separation Distance of Adsorbed 2D Films on Substrate Slab Surfaces}\n\t \n The change in the thickness of the adsorbed 2D material may provide insight into the nature of bonding within the 2D-substrate heterostructures. For instance, vdW bonds are weak and thus typically result in minimal structural and electronic changes in the 2D material. Using our database, we determine the change in the thickness of post-adsorbed 2D materials from that of the free-standing 2D material. The thickness of the free-standing\/adsorbed 2D material is computed by finding the average $z$ coordinate of the top and bottom layer of the 2D material, given by $\\bar{d}_z = \\sum\\limits_{i=1}^n d^{top}_{i,z}\/n - \\sum\\limits_{i=1}^m d^{bottom}_{i,z}\/m$, where $d_{i,z}$ is the $z$ coordinate of the $i^{th}$ atom, and $n$ and $m$ are the total number of atoms in the top and bottom layers, respectively. The change in thickness is then obtained by taking the difference between the average thickness of the adsorbed 2D material and that of the free-standing 2D material, $\\delta d$=$\\bar{d}^{adsorbed}_z-\\bar{d}^{free}_z$, with positive (negative) values corresponding to an increase (decrease) in the thickness of the adsorbed 2D material. \n \n Figure \\ref{fig:Zdiff} illustrates the change in the thickness of the 2D material upon adsorption for each 2D-substrate heterostructure. For vdW-type bonding, each atom should show minimal deviations from the free-standing 2D film due to the weak interaction between the adsorbed 2D material and substrate surface that characterizes vdW bonding. Figure \\ref{fig:Zdiff} shows that many of the 2D-substrate pairs have a significant change in the thickness of the 2D material, which may indicate more covalent\/ionic-type bonding.
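As a concrete illustration of the layer-averaging formula for $\bar{d}_z$ and the thickness change $\delta d$, consider the following sketch; the $z$ coordinates are made up for illustration and are not data from this study.

```python
def mean_thickness(z_top, z_bottom):
    """Average film thickness: mean z of the top layer minus mean z of
    the bottom layer (both lists of per-atom z coordinates, in Angstrom)."""
    return sum(z_top) / len(z_top) - sum(z_bottom) / len(z_bottom)

# Hypothetical z coordinates for a film with 3 atoms per layer.
free_top, free_bot = [3.10, 3.12, 3.11], [0.00, 0.01, -0.01]
ads_top, ads_bot = [8.30, 8.34, 8.32], [5.10, 5.12, 5.11]

d_free = mean_thickness(free_top, free_bot)  # thickness of free-standing film
d_ads = mean_thickness(ads_top, ads_bot)     # thickness of adsorbed film
delta_d = d_ads - d_free                     # positive: film thickens on adsorption
```

In this toy case $\delta d$ is positive, the situation the violin plots show for the majority of adsorbed films.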
In contrast, the change in the thickness of the 2D material for the majority of the MoS$_2$-substrate configurations is minimal ($\\textless$0.1 \\AA), which may indicate weak interactions between the 2D material and substrate surface. Figure \\ref{fig:Zdiff} indicates that for the majority of the adsorbed 2D materials, the substrates tend to induce an increase in the thickness of the adsorbed 2D material.\n \n \\begin{figure}[!h]\n \\centering\n \\includegraphics[width=0.50\\textwidth]{img\/2dThickness.pdf}\n \\caption{Each 2D material is separated spatially along the $x$-axis using a violin plot. The change in the 2D material's thickness, $\\delta d$, for all substrates is plotted along the $y$-axis. A positive $y$-value indicates the 2D material's thickness has increased during adsorption onto the substrate slab. The width of the violin plot is not quantitative across violins, as each density curve is scaled by the number of counts per violin; however, within one violin plot, the relative $x$-width does represent the frequency with which a 2D material's thickness changes by a given $y$ amount, relative to the total number of data points in the plot.}\n \\label{fig:Zdiff}\n \\end{figure}\n\n \\subsection{Charge Layer Doping of Adsorbed 2D Films}\n The \\textit{Hetero2d} workflow package has an infrastructure similar to that of \\textit{atomate}, which allows our package to integrate seamlessly with the workflows developed within \\textit{atomate}. These workflows enable us to expand our database by performing additional calculations, such as Bader~\\cite{Tang2009,Henkelman2006} charge analysis and high-quality density of states (DOS) calculations, to assess the charge transfer that occurs between the adsorbed 2D material and the substrate surface, changes in the DOS between the adsorbed and pristine 2D material, and changes in the charged state of the 2D-substrate pairs.
\n \n \n \\begin{table}[h!]\n \\centering\n \\caption{Q$_x$ is obtained with Bader analysis and represents the average number of electrons transferred to\/from (positive\/negative) specific atomic layers, with the initial number of electrons taken from the POTCAR. The first four columns are the electrons transferred to\/from the Hf substrate atoms, Q$_{sub}$, the bottom layer of S atoms, Q$_{S_b}$, the Mo atoms, Q$_{Mo}$, and the top layer of S atoms, Q$_{S_t}$, for the adsorbed 2D-substrate heterostructure. The last three columns denote the charge transfer in the pristine MoS$_2$ structure. MoS$_2$ shows an increased charge accumulation on the bottom layer of the 2D material due to the substrate slab.}\n \\begin{adjustbox}{width=3in}\n \\begin{tabular}{|c|c|c|c|c|c|c|c|}\n \\hline\n electrons & Q$_{sub}$ & Q$_{S_b}$ & Q$_{Mo}$ & Q$_{S_t}$ & Q$^{prist}_{S_b}$ & Q$^{prist}_{Mo}$ & Q$^{prist}_{S_t}$ \\\\\n \\hline\n Q$_x$ & -0.11 & 1.10 & -1.03 & 0.57 & 0.60 & -1.20 & 0.60 \\\\\n \\hline\n \\end{tabular}\n \\end{adjustbox}\n \\label{tab:bader}\n \\end{table}\n \n \\begin{figure}[hb!]\n \\centering\n \\includegraphics[width=\\textwidth]{img\/BaderCharges_DOS.pdf}\n \\caption{(a) The element-projected density of states (DOS), where red and blue lines correspond to S and Mo states, respectively, for the isolated strained 2D material (dashed lines), the adsorbed 2D material (solid lines), and the pristine MoS$_2$ material (dashed-dotted lines). The Hf (111) substrate influences the DOS of MoS$_2$, causing a semiconductor-to-metal transition. (b) The $z$ plane-averaged electron density difference ($\\Delta\\rho$) for MoS$_2$ on Hf. The electron density difference is computed by summing the charge density of the isolated MoS$_2$ and isolated Hf and subtracting that from the charge density of the interacting MoS$_2$ on Hf system. The charge densities were computed with fixed geometries.
The red and blue colors indicate electron accumulation and depletion in the combined MoS$_2$ on Hf system, respectively, compared to the isolated MoS$_2$ and isolated Hf atoms. (c) The charge density distribution for MoS$_2$ on the (111) Hf substrate. The cross section is taken along the (110) plane passing through Mo, S, and Hf atoms. The charge density is in units of electrons\/\\AA$^3$.}\n \\label{fig:DosChg}\n \\end{figure}\n Most 2D materials are desirable due to their unique electronic properties. We selected MoS$_2$ on the Hf (111) surface to demonstrate the capability of \\textit{Hetero2d} in providing detailed electronic and structural information. Our Bader analysis, illustrated in Table \\ref{tab:bader}, shows that there is charge transfer from the substrate to the bottom layer of the 2D material, which is consistent with the findings presented by Zhuang et al.~\\cite{Zhuang2017} In Figure \\ref{fig:DosChg}a, the DOS for the isolated un-strained, isolated strained, and adsorbed MoS$_2$ is shown, where the black dashed line represents the Fermi level. There is a small shift in the DOS when comparing the un-strained and strained MoS$_2$. Comparing the DOS of the adsorbed MoS$_2$ with the other two, however, reveals a significant change: the Hf (111) substrate influences the DOS of MoS$_2$, causing a semiconductor-to-metal transition. This change in the DOS is consistent with the Bader analysis, which indicates electron doping of the MoS$_2$ material that would result in changes in the DOS. Figure \\ref{fig:DosChg}b shows the redistribution of charge due to the interaction of the 2D material and substrate surface, where red and blue regions indicate charge accumulation (gaining electrons) and depletion (losing electrons) of the combined system due to the interaction between MoS$_2$ and Hf.
The charge density difference is computed by subtracting the sum of the isolated MoS$_2$ and isolated Hf substrate slab charge densities from that of the combined MoS$_2$ on Hf system. Figure \\ref{fig:DosChg}c is the charge density of the combined MoS$_2$ on Hf system along the (110) plane. Thus, the electronic properties of MoS$_2$ are dramatically affected by the substrate. \\textit{Hetero2d} can analyze such substrate-induced changes in the electronic structure of 2D materials, which will lead to a fundamental understanding and engineering of complex interfaces.\n\n\\section{Conclusions} \n \n In summary, we have developed an open-source workflow package, \\textit{Hetero2d}, that automates the generation of 2D-substrate heterostructures, the creation of DFT input files, the submission and monitoring of computational jobs on supercomputing facilities, and the storage of relevant parameters alongside the post-processed results in a MongoDB database. Using the example of four candidate 2D materials and low-index planes of 50 potential substrates, we demonstrate that our open-source package can address the immense number of 2D material-substrate surface pairs to guide the experimental realization of novel 2D materials. Among the 123 configurations studied, we find that only 78 configurations (29 workflows) result in stable 2D-substrate heterostructures. We exemplify the use of \\textit{Hetero2d} in examining the changes in thickness of the adsorbed 2D materials, the Bader charges, and the electronic density of states of the heterostructures to study the fundamental changes in the properties of the 2D material post adsorption on the substrate. \\textit{Hetero2d} is freely available on our GitHub website under the GNU license, along with example Jupyter notebooks. \n \n\\section{Acknowledgements}\n The authors acknowledge start-up funds from Arizona State University and the National Science Foundation grant number DMR-1906030.
This work used the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number TG-DMR150006. The authors acknowledge Research Computing at Arizona State University for providing HPC resources that have contributed to the research results reported within this paper. This research also used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The authors acknowledge Akash Patel for his dedicated work maintaining our database and API. We thank Peter A. Crozier for valuable discussions and suggestions. \n\n\\section{Supporting Information}\n Supporting information provides additional descriptions, figures, and tables supporting the results described in the main text.\n\n\\section{Data Availability}\n The results reported in this article and the workflow package can be found on our GitHub website \\href{https:\/\/github.com\/cmdlab\/Hetero2d}{Hetero2d}. \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
The results for the computation with $\\alpha = 10^{-5},$ $\\beta = 10^{-3},$ and without additional box constraints, are also presented in Figure \\ref{fig::res3d}, with the discretization involving $35937$ degrees of freedom.\n\\begin{figure}[htb!]\n\\begin{center}\n \\setlength\\figureheight{0.33\\linewidth} \t\n\t\\subfloat[Computed control $\\rm u$]{\n \t\t\\includegraphics[width=0.33\\textwidth]{figures\/control3d.png}\n\t}\n\t\\subfloat[Computed state $\\rm y$]{\n\t\t\\includegraphics[width=0.33\\textwidth]{figures\/state3d.png}\n\t}\n\t\\subfloat[Desired state $\\rm y_d$]{\n\t\t\\includegraphics[width=0.33\\textwidth]{figures\/destate3D.png}\n\t}\n\\end{center}\n\n\t\\caption{Three-dimensional Poisson problem with partial observations: computed solutions for the control, state, and desired state.}\\label{fig::res3d}\n\\end{figure}\nTo illustrate the performance of the proposed preconditioner $\\mathcal{P}_\\Pi$ with respect to changes in the parameter regimes, in Table \\ref{tab::resultspoisson2} we provide results for a computation involving sparsity constraints applied to the control, as well as partial observation of the state, and set $\\rm u_a=-2$, $\\rm u_b=1.5.$ \nAgain, the results are very promising and a large degree of robustness is achieved.\n \n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\nIn this paper we address the challenge of solving large-scale problems arising from PDE-constrained\noptimization \\cite{book::hpuu09,book::IK08,book::FT2010}. Such formulations arise in a multitude of applications, ranging\nfrom the control of fluid flows \\cite{Hinze2000} to image processing contexts \\cite{de2013image}. \nThe particular question considered in this paper is how to efficiently handle sparsity-promoting cost\nterms within the objective function, as well as additional constraints imposed on the control variable and even the state variable. 
\nIn fact, seeking optimal control functions that are both contained within a range of function\nvalues, and zero on large parts of the domain, has become extremely relevant in practical applications \\cite{Sta09}.\n\nIn detail, we commence by studying the problem of finding $(\\rm y,\\rm u) \\in H^1(\\Omega) \\times L^2(\\Omega)$\nsuch that the functional\n\\begin{align}\n\\ \\mathcal{F}(\\rm y,\\rm u)&=\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega)}+ \\frac{\\alpha}{2}\\|\\rm u\\|^ 2_{L^2(\\Omega)} + \n\\beta\\|u\\|_{L^1(\\Omega)} \\label{pb}\n\\end{align}\nis minimized subject to the PDE constraint\n\\begin{align}\n-\\Delta \\rm y &= \\rm u + \\rm f ~~\\mbox{ in } \\Omega, \\label{eq:lap} \\\\ \n\\rm y &= \\rm g \\hspace{2.3em}\\mbox{ on } \\Gamma,\n\\end{align}\nwhere we assume that the equation \\eqref{eq:lap} is understood in the weak sense \\cite{book::FT2010}. Here, $\\Omega\\subset\\mathbb{R}^2$ or $\\mathbb{R}^3$ denotes a spatial domain with boundary $\\Gamma$.\nAdditionally, we allow for box constraints on the control\n\\begin{equation}\\label{box}\n\\rm u_a \\le \\rm u \\le \\rm u_b \\quad\\mbox{ a.e. in } \\Omega,\n\\end{equation}\nand,\nfor the sake of generality, consider the possibility that there are also box constraints on the state\n\\begin{equation}\\label{boxs}\n\\rm y_a \\le \\rm y \\le \\rm y_b \\quad\\mbox{ a.e. in } \\Omega.\n\\end{equation}\nWe follow the convention of recent numerical studies (see \\cite{SongYu3,SongYu2,SongYu1,wwL1}, for instance) and investigate the case where the lower (upper) bounds of the box constraints are non-positive (non-negative). Here, the functions $\\rm y_d,f, g,\\rm u_a,u_b, y_a,y_b \\in \\ensuremath{L^{2}(\\Omega)}$ are provided in the problem statement, with $\\alpha,\\beta >0$ given problem-specific \\emph{regularization parameters}. 
The functions $\\rm y,\\rm y_d,\\rm u$ denote the state, the desired state, and the control, respectively.\nThe state $\\rm y$ and the control $\\rm u$ are then linked via a state equation (the PDE). In this work we examine a broad class of state equations, including Poisson's equation (\\ref{eq:lap}) as well as the convection--diffusion equation and the heat equation.\nFurthermore, we consider the case where the difference between state $\\rm y$ and desired state \n$\\rm y_d$ is only observed on a certain part of the domain,\ni.e. over $\\Omega_1\\subset\\Omega$, with the first quadratic term in (\\ref{pb}) \nthen having the form $\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega_1)}$. We refer to this case\nas the ``partial observation'' case.\n\nThere are many difficulties associated with the problem (\\ref{pb})--(\\ref{boxs}), such as selecting a suitable discretization, and choosing an efficient approach for handling the box constraints and the sparsity term. In particular, the state constrained problem itself, not even including the $\\rm L^1$-norm term, leads to a problem formulation where the regularity of the Lagrange multiplier is reduced, see \\cite{Cas86} for details. Additionally, the simultaneous treatment of control and state constraints is a complex task.\nFor this, G\\\"unther and co-authors in \\cite{gunther2012posteriori} propose the use of Moreau--Yosida regularization in order to add the state constraints as a penalty to the objective function. Other approaches are based on a semismooth Newton method, see e.g. \\cite{HS10,pss17}.\nIn fact, the inclusion of control\/state constraints leads to a semismooth nonlinear formulation of the first-order optimality \nconditions \\cite{BIK99,HIK02,pst15}. Interestingly, the structure of the arising nonlinear system is preserved if the $\\rm L^1$-norm \npenalization is added \\cite{HS10,pss17,Sta09}. 
Therefore its solution also generally relies on semismooth Newton approaches, and\nan infinite dimensional formulation is commonly utilized to derive the first-order optimality system. \nStadler in \\cite{Sta09} was the first to study PDE-constrained optimization with the $\\rm L^1$ term included,\nutilizing a semismooth approach, and many contributions have been made to the study of these problems in recent years \n(cf. \\cite{HerOW15,HSW11_DS} among others).\nOur objective is to tackle the coupled problem of both box constraints combined with the sparsity-promoting term, using the Interior Point method.\n\nThe paper \\cite{pss17} provides a complete analysis of a globally convergent\nsemismooth Newton method proposed for the problem (\\ref{pb})--(\\ref{box}).\nTheoretical and practical aspects are investigated \nfor both the linear algebra phase and the convergence behavior of the nonlinear method.\nThe numerical experiments carried out revealed a drawback of the method, as it exhibited \npoor convergence behavior for limiting values of the regularization parameter $\\alpha$. \n\n\nThe aim of this paper is to propose a new framework for the solution of\n(\\ref{pb})--(\\ref{boxs}) for a wider class of state equations and boundary conditions\nand, at the same time, attempt to overcome the numerical limitations of the global semismooth approach.\n\nTo pursue this issue we utilize Interior Point methods (IPMs), which \nhave shown great applicability for nonlinear programming problems \\cite{NocW06,IPMWright}, \nand have also found effective use within the PDE-constrained optimization framework \\cite{PGIP17,ulbrich2009primal}.\nIn particular, IPMs for linear and (convex) quadratic programming problems display \nseveral features which make them particularly attractive for very large-scale optimization, see e.g. \nthe recent survey paper \\cite{gondzio12}. 
Their main advantages are undoubtedly \ntheir low-degree polynomial worst-case complexity, and \ntheir ability to deliver optimal solutions in an almost constant number of iterations which\ndepends very little, if at all, on the problem dimension.\nThis feature makes IPMs perfect candidates for huge-scale discretized PDE-constrained\noptimal control problems.\n\nRecently, in \\cite{PGIP17}, an Interior Point approach has been successfully applied to the solution\nof problem (\\ref{pb})--(\\ref{boxs}), with $\\beta=0$. In this case the discretization\nof the optimization problem leads to a convex quadratic programming problem,\nand IPMs may naturally be applied. Furthermore, the rich structure of the linear systems\narising in this framework allows one to design efficient and robust preconditioners, based on those originally developed for the Poisson control problem without box constraints \\cite{PW10}.\n\nIn this work we extend the approach proposed in \\cite{PGIP17} to the more difficult \nand general case with $\\beta > 0$, and apply it to a broad class of PDE-constrained optimal control problems.\nTo achieve this goal we utilize two key ingredients that will be described in detail\nin Section \\ref{sec::ipa}: an appropriate discretization of the\n$\\rm L^1$-norm that allows us to write the discretized problem in a matrix-vector form, and\na suitable smoothing of the arising vector $\\ell_1$-norm that yields a final quadratic programming\nform of the discretized problem. The first ingredient is based on the discretization described in \\cite{wwL1},\nand recently applied to problem (\\ref{pb})--(\\ref{box}) in \\cite{SongYu3,SongYu2,SongYu1},\nwhere block-coordinate like methods are then introduced.\nThe second ingredient has been widely used for solving the ubiquitous\n$\\rm L^1$-norm regularized quadratic problem as, for example, when computing \nsparse solutions in wavelet-based deconvolution problems and compressed sensing \\cite{GPSR}. 
\nOn the other hand, its use is completely new within the PDE-constrained optimization context.\nFinally, we propose new preconditioners for the sequence of saddle-point systems\ngenerated by the IPM, based on approximations of the $(1,1)$-block and the Schur complement.\nIn particular, the case where the $(1,1)$-block is singular is taken into account\nwhen examining the partial observation case.\nWe may then analyse the spectral properties of the preconditioned $(1,1)$-block and Schur complement, to guide us as to the effectiveness of our overall preconditioning strategies.\n\nWe structure the paper as follows. The discretization of the continuous problem is discussed \nin Section \\ref{sec::dis}, while an Interior Point scheme is\nintroduced in Section \\ref{sec::ipa} together with the description of the \nlinear algebra considerations. Hence, Section \\ref{sec::prec} is devoted \nto introducing preconditioning strategies to improve the convergence behavior of the linear iterative solver. \nWe highlight a ``matching approach'' that introduces robust approximations to the Schur complement of the linear system.\nAdditionally, we propose a preconditioning strategy for partial observations in \nSection \\ref{subsec::po}, and time-dependent problems in Section \\ref{subsec::td}.\nSection \\ref{exp} illustrates the performance of our scheme for a variety of different parameter regimes, \ndiscretization levels, and PDE constraints.\n\n\n\\subsection*{Notation}\nThe $\\rm L^1$-norm of a function $\\rm u$ is denoted by $\\|\\rm u\\|_{L^1}$,\nwhile the $\\ell_1$-norm of a vector $u$ is denoted by $\\| u\\|_1$. \nComponents of a vector $x$ are denoted by $x_j$, or by $x_{a,j}$\nfor a vector $x_a$. 
The matrix $I_n$ denotes the $n\\times n$ identity matrix,\nand $1_n$ is the column vector of ones of dimension $n$.\n\n\\section{Problem Discretization and Quadratic Programming Formulation}\n\\label{sec::dis}\nWe here apply a discretize-then-optimize approach to (\\ref{pb})--(\\ref{boxs}), and \nuse a finite element discretization that retains a favorable property of the vector $\\ell\n_1$-norm, specifically that it is separable with respect to the vector components.\nThis key step allows us to state the discretized problem as a convex quadratic program\nthat may be tackled using an IPM.\n\nLet $n$ denote the dimension of the discretized space, for both state and control variables. \nLet the matrix $L$ represent a discretization of the Laplacian \noperator (the \\textit{stiffness matrix}) when Poisson's equation is considered or, more generally, the discretization of a non-selfadjoint elliptic differential operator, \nand let the matrix $M$ be the finite element Gram matrix, or \\textit{mass matrix}.\nFinally, we denote by $y,u,y_d,f,u_a,u_b,y_a,y_b$ the discrete counterparts of the functions\n$\\rm y,u,y_d,f,u_a,u_b,y_a,y_b$, respectively.\n\nThe discretization without the additional sparsity term follows a standard Galerkin approach \\cite{HS10,RSW09,book::FT2010}.\nFor the discretization of the $\\rm L^1$ term, we here follow \\cite{SongYu3,SongYu2,SongYu1,wwL1}\nand apply the nodal quadrature rule:\n$$\\|{\\rm u}\\|_{{\\rm L}^1(\\Omega)} \\approx \\sum^n_{i=1} |u_i| \\int_{ \\Omega} \\phi_i(x)~{\\rm d}x,$$ \nwhere $\\{\\phi_i\\}$ are the finite element basis functions used\nand $u_i$ are the components of $u$. It is shown in \\cite{wwL1} that first-order convergence may be achieved using this approximation with piecewise linear discretizations of the control. 
We define a lumped mass matrix $D$ as\n$$\nD := \\text{diag}\\left ( \\int_{ \\Omega} \\phi_i(x)~{\\rm d}x\\right )_{i=1}^{n},\n$$\nso that the discretized $\\rm L^1$-norm can be written in matrix-vector form as $\\|D u\\|_1$.\nAs a result, the overall finite element discretization of problem (\\ref{pb})--(\\ref{boxs}) may be stated as\n\\begin{equation}\n\\begin{array}{cl}\\label{pb_fe}\n\\displaystyle\\min_{y\\in \\IR^n,u\\in \\IR^{n}} & \\frac 1 2 (y-y_d)^TM (y-y_d) + \n \\frac{\\alpha}{2} u^TMu + \\beta \\|D u\\|_1\\\\\n\\mbox{ s.t. } & L y - Mu = f,\n\\end{array}\n\\end{equation}\nwhile additionally being in the presence of control constraints and state constraints: \n\\begin{equation}\\label{boxvector}\nu_a \\le u \\le u_b,\\quad\\quad y_a \\le y \\le y_b.\n\\end{equation}\nThe problems we consider will always have control constraints present, and will sometimes also involve state constraints.\n\nProblem (\\ref{pb_fe})--(\\ref{boxvector}) is a linearly constrained quadratic problem with bound \nconstraints on the state and control variables $(y,u)$, and with an additional nonsmooth weighted \n$\\ell_1$-norm term of the variable $u$. \nA possible approach to handle the nonsmoothness in the problem \nconsists of using smoothing techniques for the $\\ell_1$-norm term, see e.g. \\cite{GPSR,FG-pseudo16,FG-IPM14}.\nWe here consider a classical strategy proposed in \\cite{GPSR} that linearizes\nthe $\\ell_1$-norm by splitting the variable $u$ as follows.\nLet $w, v \\in \\IR^n$ be such that\n$$|u_i | = w_i + v_i, \\ \\ i = 1, \\dots, n,\n$$\nwhere $w_i = \\max(u_i,0)$ and $v_i = \\max(-u_i,0)$. 
Therefore\n$$\n\\|u\\|_1 = 1_n^Tw + 1_n^Tv, \n$$\nwith $w,v\\ge 0$.\nIn the weighted case, which we are interested in when approximating the discretized version of $\\|\\rm u\\|_{\\rm L^1(\\Omega)}$ by $\\|Du\\|_1$, we obtain \n$$\n\\|D u\\|_1 = 1_n^T Dw + 1_n^T Dv.\n$$\n\nBy using the relationship \n\\begin{equation}\\label{split}\n u = w - v,\n\\end{equation}\none may now rewrite problem (\\ref{pb_fe}) in terms of variables $(y,z)$, with \n$$z= \\begin{bmatrix}\n w \\\\\n v\n \\end{bmatrix}.\n $$\nNote that the bounds for $u$\n$$u_a \\le u \\le u_b$$\nnow have to be replaced by the following bounds for $z$:\n$$z_a \\le z \\le z_b,$$\nwith\n\\begin{equation*}\nz_a = \\left[\\begin{array}{c}\n\\max\\{u_a,0\\} \\\\ -\\min\\{u_b,0\\} \\\\\n\\end{array}\\right], \\qquad z_b = \\left[\\begin{array}{c}\n\\max\\{u_b, 0\\} \\\\ -\\min\\{u_a, 0\\} \\\\\n\\end{array}\\right].\n\\end{equation*}\nWe note that these bounds automatically satisfy the constraint $z\\ge 0$. Overall, we have the desired quadratic programming formulation:\n\\begin{equation}\n\\begin{array}{cl}\\label{pb_fe_lin}\n\\displaystyle\\min_{y\\in \\IR^n,z\\in \\IR^{2n}} & Q(y,z):= \\frac 1 2 (y-y_d)^TM (y-y_d) + \n \\frac{\\alpha}{2} z^T \\widetilde M z + \n \\beta\\, 1_{n}^T \\bar D z \\\\\n\\mbox{ s.t. } & L y - \\bar M z = f, \\\\\n & z_a \\le z \\le z_b, \\\\\n & y_a \\le y \\le y_b,\n\\end{array}\n\\end{equation}\nwhere\n$$\n \\widetilde M = \\begin{bmatrix}\n M & -M \\\\ \n -M & M\n \\end{bmatrix}, \\quad\\quad \\bar D = \\begin{bmatrix}\n D & D 
 \\end{bmatrix} ,\\quad\\quad\n \\bar M = \\begin{bmatrix}\n M & -M 
 \\end{bmatrix}.\n$$\nIn the next section we derive an Interior Point scheme for the solution of the above problem. \nClearly, once optimal values of the variables $z$, and therefore of\n$w$ and $v$, are found, the control $u$ of the initial problem is retrieved by\n(\\ref{split}).
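A minimal numerical illustration of this splitting, using NumPy with an arbitrary control vector and a diagonal lumped mass matrix $D$ for a uniform 1D mesh (both stand-ins, not part of the method itself): the positive and negative parts $w$ and $v$ reproduce $u$, and the weighted $\ell_1$-norm becomes a linear function of $(w,v)$.

```python
import numpy as np

n, h = 5, 0.2                # toy uniform 1D mesh: n nodes, element width h
D = np.diag(np.full(n, h))   # lumped mass matrix: D_ii = integral of phi_i ~ h

u = np.array([0.7, -1.2, 0.0, 2.5, -0.3])  # arbitrary control vector
w = np.maximum(u, 0.0)                     # positive part of u
v = np.maximum(-u, 0.0)                    # negative part of u

lhs = np.linalg.norm(D @ u, 1)                  # ||D u||_1
rhs = np.ones(n) @ D @ w + np.ones(n) @ D @ v   # 1_n^T D w + 1_n^T D v
```

Here the two quantities agree (both equal $h\sum_i |u_i|$), so minimizing the nonsmooth term reduces to minimizing a linear function of the nonnegative variables $w$ and $v$.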
We observe that we gain smoothness in the problem at the expense of\nincreasing the number of variables in the problem statement by 50\\%. \nFortunately, this increase will not\nhave a significant impact on the linear algebra phase of our method, as we only require additional sparse matrix--vector multiplications, and the storage of the additional control vectors.\n\n\n\\section{Interior Point Framework and Newton Equations}\n\\label{sec::ipa}\n\nThe three key steps to set up an IPM are the following. First, the\nbound constraints are ``eliminated'' by using a logarithmic barrier function.\nFor problem (\\ref{pb_fe_lin}), \nthe barrier function takes the form:\n\\begin{align*}\nL_{\\mu}(y,z,p) = Q(y,z) + p^T ( L y - \\bar M z - f)&{}- \\mu \\sum \\log(y_j - y_{a,j}) - \\mu \\sum \\log(y_{b,j} -y_j)\\\\\n &{}-\\mu \\sum \\log(z_j - z_{a,j}) - \\mu \\sum \\log(z_{b,j} -z_j),\n\\end{align*}\nwhere $p\\in\\IR^n$ is the Lagrange multiplier (or adjoint variable) associated with the state equation,\nwhile $\\mu > 0$ is the barrier parameter that controls the relation between\nthe barrier term and the original objective $Q(y,z)$.
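The effect of the logarithmic barrier can be illustrated with a one-variable sketch (our own hypothetical numbers, not from the paper): differentiating $-\mu\log(y-y_a)-\mu\log(y_b-y)$ yields $-\mu/(y-y_a)+\mu/(y_b-y)$, i.e. exactly the multiplier combination that appears in the stationarity conditions below. A finite-difference check confirms this.

```python
import numpy as np

# One-variable illustration (hypothetical numbers): the gradient of the
# logarithmic barrier reproduces -lambda_a + lambda_b, where
# lambda_a = mu/(y - ya) and lambda_b = mu/(yb - y).
mu, ya, yb, y = 0.1, -1.0, 2.0, 1.0

def barrier(t):
    return -mu * np.log(t - ya) - mu * np.log(yb - t)

h = 1e-6
grad_fd = (barrier(y + h) - barrier(y - h)) / (2 * h)   # central difference
lam_a, lam_b = mu / (y - ya), mu / (yb - y)

assert abs(grad_fd - (-lam_a + lam_b)) < 1e-7
```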
As the IPM progresses, $\\mu$ is decreased towards zero.\n\nThe second step involves applying duality theory, and\nderiving the first-order optimality conditions to obtain a nonlinear system\nparameterized by $\\mu$.\nDifferentiating $L_\\mu$ with respect to $(y,z,p)$ gives the nonlinear system\n\\begin{eqnarray*}\n M y - M y_d +L^T p - \\lambda_{y,a} + \\lambda_{y,b} & = & 0, \\\\\n \\alpha \\widetilde M z + \\beta \\bar D ^T 1_{n} - \\bar M^T p \n - \\lambda_{z,a} + \\lambda_{z,b} & = & 0, \\\\\n L y - \\bar M z - f & = & 0,\n\\end{eqnarray*}\nwhere the $j$th entries of the Lagrange multipliers $\\lambda_{y,a},\\lambda_{y,b},\n\\lambda_{z,a},\\lambda_{z,b}$ are defined as follows:\n$$\n(\\lambda_{y,a})_j = \\frac{\\mu}{y_j - y_{a,j}}, \\quad\\quad\n(\\lambda_{y,b})_j = \\frac{\\mu}{y_{b,j} - y_j}, \\quad\\quad\n(\\lambda_{z,a})_j = \\frac{\\mu}{z_j - z_{a,j}}, \\quad\\quad\n(\\lambda_{z,b})_j = \\frac{\\mu}{z_{b,j} - z_j}.\n$$\nIn addition, the Lagrange multipliers must satisfy the nonnegativity conditions\n$$\\lambda_{y,a} \\ge 0 , \\quad\\quad \\lambda_{y,b} \\ge 0, \\quad\\quad \\lambda_{z,a} \\ge0, \\quad\\quad \\lambda_{z,b} \\ge 0,$$ \nwhich enforce the bound constraints on $y$ and $z$.\n\nThe third crucial step of the IPM is the application of Newton's method\nto the nonlinear system.
\nWe now derive the Newton equations, following the description in \\cite{PGIP17}.\nLetting \n$y,z,p, \\lambda_{y,a}, \\lambda_{y,b}, \\lambda_{z,a}, \\lambda_{z,b}$\ndenote the most recent Newton iterates, these quantities \nare updated at each iteration by computing the corresponding Newton steps\n$ \\Delta y, \\Delta z, \\Delta p, \\Delta \\lambda_{y,a}, \\Delta \\lambda_{y,b}, \\Delta \\lambda_{z,a},$ $\\Delta \\lambda_{z,b}$,\nthrough the solution of the following Newton system:\n\\begin{align}\n\\ \\label{7by7} &\\begin{bmatrix}\n M & 0 & L^T & - I_n & I_n & 0 & 0 \\\\\n 0 & \\alpha \\widetilde M & -\\bar M^T & 0 & 0 & -I_{2n} & I_{2n} \\\\\n L & -\\bar M & 0 & 0 & 0 & 0 & 0 \\\\\n \\Lambda_{y,a} & 0 & 0 & Y - Y_a& 0 & 0 & 0 \\\\\n-\\Lambda_{y,b} & 0 & 0 & 0 &Y_b-Y & 0 & 0 \\\\\n0 & \\Lambda_{z,a} & 0 & 0 &0 & Z - Z_a & 0 \\\\\n0 &-\\Lambda_{z,b} & 0 & 0 &0 & 0 & Z_b - Z\n\\end{bmatrix}\n\\begin{bmatrix}\n \\Delta y\\\\\n \\Delta z \\\\\n \\Delta p \\\\\n \\Delta \\lambda_{y,a} \\\\\n \\Delta \\lambda_{y,b} \\\\\n \\Delta \\lambda_{z,a} \\\\\n \\Delta \\lambda_{z,b} \n\\end{bmatrix} \\\\\n\\ \\nonumber &\\hspace{17.5em}=-\\begin{bmatrix}\n M y - M y_d +L^T p - \\lambda_{y,a} + \\lambda_{y,b} \\\\\n \\alpha \\widetilde M z + \\beta \\bar D ^T 1_{n} - \\bar M^T p \n - \\lambda_{z,a} + \\lambda_{z,b} \\\\\n L y - \\bar M z - f \\\\\n (y-y_a).*\\lambda_{y,a} - \\mu 1_n \\\\\n (y_b-y).*\\lambda_{y,b} - \\mu 1_n \\\\\n (z-z_a).*\\lambda_{z,a} - \\mu 1_{2n} \\\\\n (z_b - z).*\\lambda_{z,b} - \\mu 1_{2n} \n\\end{bmatrix},\n\\end{align}\nwhere $Y, Z, \\Lambda_{y,a}, \\Lambda_{y,b}, \\Lambda_{z,a}, \\Lambda_{z,b}$ are diagonal matrices, \nwith the most recent iterates $y,z, \\lambda_{y,a},$ $\\lambda_{y,b}, \\lambda_{z,a}, \\lambda_{z,b}$\nappearing on their diagonal entries. Similarly, the matrices $Y_a , Y_b , Z_a, Z_b$ are diagonal matrices\ncorresponding to the bounds $y_a, y_b, z_a, z_b$.
\nHere we utilize the {\\scshape matlab} notation `$.*$' to denote the componentwise product.\nWe observe that the contribution of the $\\ell_1$-norm term only arises in the right-hand side; that is,\n$\\beta$ does not appear in the coefficient matrix of the system.\n\nEliminating $\\Delta \\lambda_{y,a}, \\Delta \\lambda_{y,b}, \\Delta \\lambda_{z,a}, \\Delta \\lambda_{z,b}$ from \\eqref{7by7},\nwe obtain the following reduced linear system:\n\\begin{align}\\label{NewtonSystem}\n&\\begin{bmatrix}\n M + \\Theta_y & 0 & L^T \\\\\n 0 & \\alpha \\widetilde M + \\Theta_z & -\\bar M^T \\\\\n L & -\\bar M & 0 \\\\ \n\\end{bmatrix}\n\\begin{bmatrix}\n \\Delta y\\\\\n \\Delta z \\\\\n \\Delta p \\\\\n \\end{bmatrix} \\\\\n\\nonumber &\\hspace{5em}=-\\begin{bmatrix}\n M y - M y_d +L^T p -\\mu (Y-Y_a)^{-1}1_n + \\mu (Y_b-Y)^{-1}1_n \\\\\n \\alpha \\widetilde M z + \\beta \\bar D ^T 1_{n} - \\bar M^T p \n -\\mu (Z-Z_a)^{-1}1_{2n} + \\mu (Z_b-Z)^{-1}1_{2n} \\\\\n L y - \\bar M z - f \\\\\n\\end{bmatrix},\n\\end{align}\nwith\n$$\\Theta_y = (Y - Y_a )^{-1} \\Lambda_{y,a} + (Y_b - Y )^{-1} \\Lambda_{y,b},\n\\quad\\quad\\Theta_z = (Z - Z_a )^{-1} \\Lambda_{z,a} + (Z_b - Z )^{-1} \\Lambda_{z,b}\n$$\nboth diagonal and positive definite matrices, which are typically very ill-conditioned. \nOnce the above system is solved, one can compute the steps for the\nLagrange multipliers:\n\\begin{eqnarray}\n \\Delta \\lambda_{y,a} & = & - (Y-Y_a)^{-1} \\Lambda_{y,a} \\Delta y - \\lambda_{y,a} + \\mu (Y-Y_a)^{-1}1_n, \\label{zupdate1}\\\\\n \\Delta \\lambda_{y,b} & = & (Y_b-Y)^{-1} \\Lambda_{y,b} \\Delta y - \\lambda_{y,b} + \\mu (Y_b-Y)^{-1}1_n, \\label{zupdate2}\\\\\n \\Delta \\lambda_{z,a} & = & - (Z-Z_a)^{-1} \\Lambda_{z,a} \\Delta z - \\lambda_{z,a} + \\mu (Z-Z_a)^{-1}1_{2n}, \\label{zupdate3}\\\\\n \\Delta \\lambda_{z,b} & = & (Z_b-Z)^{-1} \\Lambda_{z,b} \\Delta z - \\lambda_{z,b} + \\mu (Z_b-Z)^{-1}1_{2n}.
\\label{zupdate4}\n \\end{eqnarray}\nAfter updating the iterates, and ensuring that they remain feasible, the barrier parameter $\\mu$ is reduced and \na new Newton step is performed.\n\nFor the sake of completeness, the structure of the overall Interior Point algorithm is reported in the Appendix,\nand follows the standard infeasible Interior Point path-following scheme outlined in \\cite{gondzio12}.\nWe report the formulas for the primal and dual feasibilities, given by \n\\begin{equation}\\label{prdu}\n \\xi_p^k = L y^k - \\bar{M} z^k - f, \\quad \\quad \n \\xi_d^k = \\begin{bmatrix}\n M y^k - M y_d + L^T p^k - \\lambda^k_{y,a} + \\lambda^k_{y,b} \\\\\n \\alpha \\widetilde M z^k + \\beta \\bar D ^T 1_{n} - \\bar {M}^T p^k \n - \\lambda^k_{z,a} + \\lambda^k_{z,b} \n \\end{bmatrix},\n\\end{equation}\nrespectively, and the complementarity gap\n\\begin{equation}\\label{gap}\n \\xi_c^k = \\begin{bmatrix}\n (y^k-y_a).* \\lambda^k_{y,a} - \\mu^k 1_n \\\\\n (y_b-y^k).* \\lambda^k_{y,b} - \\mu^k 1_n \\\\\n (z^k-z_a).* \\lambda^k_{z,a} - \\mu^k 1_{2n} \\\\\n (z_b - z^k).* \\lambda^k_{z,b} - \\mu^k 1_{2n} \n \\end{bmatrix},\n \\end{equation}\nfor problem (\\ref{pb_fe_lin}).
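The block elimination leading from \eqref{7by7} to the reduced system \eqref{NewtonSystem} can be checked numerically. The sketch below (NumPy; all data is synthetic and hypothetical, with $D = I_n$ for simplicity) assembles both systems at a random strictly feasible point and verifies that they yield the same steps $(\Delta y, \Delta z, \Delta p)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
I = np.eye(n)
dg, Z0 = np.diag, np.zeros

# Hypothetical small data: SPD mass matrix, nonsingular stiffness-like L
M = I + 0.1 * np.ones((n, n))
L = 2 * I + 0.1 * rng.standard_normal((n, n))
alpha, beta, mu = 1e-2, 1e-3, 0.5
Mt = np.block([[M, -M], [-M, M]])            # \tilde M
Mb = np.hstack([M, -M])                      # \bar M
Db = np.hstack([I, I])                       # \bar D with D = I

ya, yb = -np.ones(n), np.ones(n)
za, zb = np.zeros(2 * n), 2 * np.ones(2 * n)
y = rng.uniform(-0.5, 0.5, n)                # strictly feasible iterates
z = rng.uniform(0.5, 1.5, 2 * n)
p = rng.standard_normal(n)
la_ya, la_yb = rng.uniform(0.5, 1, n), rng.uniform(0.5, 1, n)
la_za, la_zb = rng.uniform(0.5, 1, 2 * n), rng.uniform(0.5, 1, 2 * n)
yd, f = rng.standard_normal(n), rng.standard_normal(n)

# Full 7x7 block system
K = np.block([
 [M,            Z0((n, 2*n)),  L.T,        -I,           I,           Z0((n, 2*n)),   Z0((n, 2*n))],
 [Z0((2*n, n)), alpha*Mt,      -Mb.T,      Z0((2*n, n)), Z0((2*n, n)), -np.eye(2*n),  np.eye(2*n)],
 [L,            -Mb,           Z0((n, n)), Z0((n, n)),   Z0((n, n)),  Z0((n, 2*n)),   Z0((n, 2*n))],
 [dg(la_ya),    Z0((n, 2*n)),  Z0((n, n)), dg(y - ya),   Z0((n, n)),  Z0((n, 2*n)),   Z0((n, 2*n))],
 [-dg(la_yb),   Z0((n, 2*n)),  Z0((n, n)), Z0((n, n)),   dg(yb - y),  Z0((n, 2*n)),   Z0((n, 2*n))],
 [Z0((2*n, n)), dg(la_za),     Z0((2*n, n)), Z0((2*n, n)), Z0((2*n, n)), dg(z - za),  Z0((2*n, 2*n))],
 [Z0((2*n, n)), -dg(la_zb),    Z0((2*n, n)), Z0((2*n, n)), Z0((2*n, n)), Z0((2*n, 2*n)), dg(zb - z)],
])
rhs = -np.concatenate([
    M@y - M@yd + L.T@p - la_ya + la_yb,
    alpha*Mt@z + beta*Db.T@np.ones(n) - Mb.T@p - la_za + la_zb,
    L@y - Mb@z - f,
    (y - ya)*la_ya - mu, (yb - y)*la_yb - mu,
    (z - za)*la_za - mu, (zb - z)*la_zb - mu,
])
sol = np.linalg.solve(K, rhs)
dy_full, dz_full, dp_full = sol[:n], sol[n:3*n], sol[3*n:4*n]

# Reduced system after eliminating the multiplier steps
Ty = dg(la_ya/(y - ya) + la_yb/(yb - y))     # Theta_y
Tz = dg(la_za/(z - za) + la_zb/(zb - z))     # Theta_z
Kr = np.block([
    [M + Ty,       Z0((n, 2*n)),   L.T],
    [Z0((2*n, n)), alpha*Mt + Tz,  -Mb.T],
    [L,            -Mb,            Z0((n, n))],
])
rr = -np.concatenate([
    M@y - M@yd + L.T@p - mu/(y - ya) + mu/(yb - y),
    alpha*Mt@z + beta*Db.T@np.ones(n) - Mb.T@p - mu/(z - za) + mu/(zb - z),
    L@y - Mb@z - f,
])
solr = np.linalg.solve(Kr, rr)
assert np.allclose(solr[:n], dy_full)
assert np.allclose(solr[n:3*n], dz_full)
assert np.allclose(solr[3*n:], dp_full)
```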
Here $k$ denotes the iteration counter for the Interior Point method, with $y^k,z^k,p^k,\\lambda^k_{y,a},\\lambda^k_{y,b},\\lambda^k_{z,a},\\lambda^k_{z,b},\\mu^k$ the values of $y,z,p,\\lambda_{y,a},\\lambda_{y,b},\\lambda_{z,a},\\lambda_{z,b},\\mu$ at the $k$th iteration.\n\nMonitoring the norms of $\\xi_p^k, \\xi_d^k, \\xi_c^k$ allows us to assess the convergence of the entire process.\nComputationally, the main bottleneck of the algorithm is the linear algebra phase,\nthat is the efficient solution of the Newton system (\\ref{NewtonSystem}).\nThis is the focus of the forthcoming section.\n\n\n\n\n\\section{Preconditioning}\n\\label{sec::prec}\n\nHaving arrived at the Newton system \\eqref{NewtonSystem}, the main task at this stage is to construct fast and effective methods for the solution of such systems. In this work, we elect to apply iterative (Krylov subspace) solvers, namely the {\\scshape minres} method \\cite{minres} for symmetric matrix systems, and the {\\scshape gmres} algorithm \\cite{gmres}, which may also be applied to non-symmetric matrices.
We wish to accelerate these methods using carefully chosen preconditioners.\n\nTo develop these preconditioners, we observe that \\eqref{NewtonSystem} is a \\emph{saddle-point system} (see \\cite{BenGolLie05} for a review of such systems), of the form\n\\begin{equation*}\n\\ \\mathcal{A}=\\left[\\begin{array}{cc}\nA & B^T \\\\\nB & C \\\\\n\\end{array}\\right],\n\\end{equation*}\nwith\n\\begin{equation*}\n\\ A=\\left[\\begin{array}{cc}\nM+\\Theta_y & 0 \\\\\n0 & \\alpha\\widetilde{M}+\\Theta_z \\\\\n\\end{array}\\right],\\quad\\quad{}B=\\left[\\begin{array}{cc}\nL & -\\bar{M} \\\\\n\\end{array}\\right],\\quad\\quad{}C=0.\n\\end{equation*}\nProvided $A$ is nonsingular, it is well known that two \\emph{ideal preconditioners} for the saddle-point matrix $\\mathcal{A}$ are given by\n\\begin{equation*}\n\\ \\mathcal{P}_1=\\left[\\begin{array}{cc}\nA & 0\\\\\n0 & S \\\\\n\\end{array}\\right],\\quad\\quad\\mathcal{P}_2=\\left[\\begin{array}{cc}\nA & 0\\\\\nB & -S \\\\\n\\end{array}\\right],\n\\end{equation*}\nwhere the (negative) \\emph{Schur complement} $S:=-C+BA^{-1}B^T$. In particular, provided the preconditioned system is nonsingular, it can be shown that \\cite{Ipsen,Ku95,preconMGW}\n\\begin{equation*}\n\\ \\lambda(\\mathcal{P}_1^{-1}\\mathcal{A})\\in\\left\\{1,\\frac{1}{2}(1\\pm\\sqrt{5})\\right\\},\\quad\\quad\\lambda(\\mathcal{P}_2^{-1}\\mathcal{A})\\in\\{1\\},\n\\end{equation*}\nand hence that a suitable Krylov method preconditioned by $\\mathcal{P}_1$ or $\\mathcal{P}_2$ will converge in $3$ or $2$ iterations, respectively.\n\nOf course, we would not wish to work with the preconditioners $\\mathcal{P}_1$ or $\\mathcal{P}_2$ in practice, as they would be prohibitively expensive to invert.
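The eigenvalue statements for the ideal preconditioners are easy to confirm numerically. The sketch below (NumPy; a small random saddle-point matrix of our own, with $C=0$) checks that $\mathcal{P}_1^{-1}\mathcal{A}$ has eigenvalues only in $\{1,\frac{1}{2}(1\pm\sqrt{5})\}$ and that $\mathcal{P}_2^{-1}\mathcal{A}$ has eigenvalue $1$ only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 5
# Hypothetical data: A symmetric positive definite, B full row rank, C = 0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((m, n))
S = B @ np.linalg.solve(A, B.T)          # (negative) Schur complement for C = 0

Amat = np.block([[A, B.T], [B, np.zeros((m, m))]])
P1 = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])
P2 = np.block([[A, np.zeros((n, m))], [B, -S]])

ev1 = np.linalg.eigvals(np.linalg.solve(P1, Amat))
ev2 = np.linalg.eigvals(np.linalg.solve(P2, Amat))
targets = np.array([1.0, (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2])

# every eigenvalue of P1^{-1}A lies (numerically) in {1, (1 +/- sqrt(5))/2}
assert all(np.abs(ev - targets).min() < 1e-6 for ev in ev1)
# P2 built with the exact Schur complement yields eigenvalue 1 only
assert np.allclose(ev2, 1.0, atol=1e-5)
```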
We therefore wish to develop analogous preconditioners of the form\n\\begin{equation*}\n\\ \\mathcal{P}_D=\\left[\\begin{array}{cc}\n\\widehat{A} & 0\\\\\n0 & \\widehat{S} \\\\\n\\end{array}\\right],\\quad\\quad\\mathcal{P}_T=\\left[\\begin{array}{cc}\n\\widehat{A} & 0\\\\\nB & -\\widehat{S} \\\\\n\\end{array}\\right],\n\\end{equation*}\nwhere $\\widehat{A}$ and $\\widehat{S}$ are suitable and computationally cheap approximations of the $(1,1)$-block $A$ and the Schur complement $S$. Provided $\\widehat{A}$ and $\\widehat{S}$ are symmetric positive definite, the preconditioner $\\mathcal{P}_D$ may be applied within the {\\scshape minres} algorithm, and $\\mathcal{P}_T$ is applied within a non-symmetric solver such as {\\scshape gmres}.\n\nOur focus is therefore to develop such approximations for the corresponding matrices for the Newton system \\eqref{NewtonSystem}:\n\\begin{equation*}\n\\ A=\\left[\\begin{array}{cc}\nM+\\Theta_y & 0 \\\\\n0 & \\alpha\\widetilde{M}+\\Theta_z \\\\\n\\end{array}\\right],\\quad\\quad{}S=\\left[\\begin{array}{cc}\nL & -\\bar{M} \\\\\n\\end{array}\\right]\\left[\\begin{array}{cc}\nM+\\Theta_y & 0 \\\\\n0 & \\alpha\\widetilde{M}+\\Theta_z \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{c}\nL^T \\\\\n-\\bar{M}^T \\\\\n\\end{array}\\right].\n\\end{equation*}\n\n\n\\subsection{Approximation of \\boldmath{$(1,1)$}-block}\n\nAn effective approximation of the $(1,1)$-block $A$ will require cheap and accurate approximations of the matrices $M+\\Theta_y$ and $\\alpha\\widetilde{M}+\\Theta_z$.\n\nWhen considering the matrix $M+\\Theta_y$, our first observation is that the mass matrix $M$ may be effectively approximated by its diagonal \\cite{wathen87} within a preconditioner. This can be exploited and enhanced by applying the \\emph{Chebyshev semi-iteration} method \\cite{VGI61,VGII61,RW08}, which utilizes the effectiveness of the diagonal approximation and accelerates it. 
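The quality of the diagonal approximation can be illustrated on a standard example (not taken from the paper): for the 1D piecewise-linear finite element mass matrix $M = \frac{h}{6}\,\mathrm{tridiag}(1,4,1)$, the eigenvalues of $D_M^{-1}M$ are known to lie in the tight interval $[\frac{1}{2},\frac{3}{2}]$, which the following sketch confirms.

```python
import numpy as np

# 1D P1 finite element mass matrix on a uniform mesh (standard example):
# M = (h/6) * tridiag(1, 4, 1), with D_M = diag(M).
n, h = 50, 1.0 / 51
M = (h / 6) * (4 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
DM = np.diag(np.diag(M))

ev = np.linalg.eigvals(np.linalg.solve(DM, M)).real
# eigenvalues of D_M^{-1} M lie in [1/2, 3/2] for this element
assert ev.min() > 0.5 - 1e-10 and ev.max() < 1.5 + 1e-10
```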
Now, it may be easily shown that\n\\begin{align*}\n\\ &\\Big[\\lambda_{\\min}\\big((D_M+\\Theta_y)^{-1}(M+\\Theta_y)\\big),\\lambda_{\\max}\\big((D_M+\\Theta_y)^{-1}(M+\\Theta_y)\\big)\\Big] \\\\\n\\ &\\hspace{10em}\\subset\\Big[\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\},\\max\\left\\{\\lambda_{\\max}(D_M^{-1}M),1\\right\\}\\Big],\n\\end{align*}\nwhere $D_M:=\\text{diag}(M)$, due to the positivity of the diagonal matrix $\\Theta_y$. Here, $\\lambda_{\\min}(\\cdot)$, $\\lambda_{\\max}(\\cdot)$ denote the smallest and largest eigenvalues of a matrix, respectively. In other words, the diagonal of $M+\\Theta_y$ also clusters the eigenvalues within a preconditioner. The same argument may therefore be used to apply Chebyshev semi-iteration to $M+\\Theta_y$ within a preconditioner, and so we elect to use this approach.\n\nWe now turn our attention to the matrix $\\alpha\\widetilde{M}+\\Theta_z$, first decomposing $\\Theta_z=\\text{blkdiag}(\\Theta_w,\\Theta_v)$, where $\\Theta_w$, $\\Theta_v$ denote the components of $\\Theta_z$ corresponding to $w$, $v$. 
Therefore, in this notation,\n\\begin{equation*}\n\\ \\alpha\\widetilde{M}+\\Theta_z=\\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right].\n\\end{equation*}\nNote that $\\widetilde{M}$ is positive semidefinite but $\\alpha\\widetilde{M}+\\Theta_z$ is positive definite since the diagonal $\\Theta_z$ is positive definite (the control and state bounds are enforced as strict inequalities at each Newton step).\n\nA result which we apply is that of \\cite[Theorems 2.1(i) and 2.2(i)]{LSinverses02}, which gives us the following statements about the inverse of $2\\times2$ block matrices:\n\\begin{teo}\nConsider the inverse of the block matrix\n\\begin{equation}\n\\ \\label{ABCD} \\left[\\begin{array}{cc}\nA & B_1 \\\\\nB_2 & C \\\\\n\\end{array}\\right].\n\\end{equation}\nIf $A$ is nonsingular and $C-B_{2}A^{-1}B_1$ is invertible, then \\eqref{ABCD} is invertible, with\n\\begin{equation}\n\\ \\label{ABCDinv1} \\left[\\begin{array}{cc}\nA & B_1 \\\\\nB_2 & C \\\\\n\\end{array}\\right]^{-1}=\\left[\\begin{array}{cc}\nA^{-1}+A^{-1}B_1(C-B_{2}A^{-1}B_1)^{-1}B_{2}A^{-1} & -A^{-1}B_1(C-B_{2}A^{-1}B_1)^{-1} \\\\\n-(C-B_{2}A^{-1}B_1)^{-1}B_{2}A^{-1} & (C-B_{2}A^{-1}B_1)^{-1} \\\\\n\\end{array}\\right].\n\\end{equation}\nAlternatively, if $B_1$ is nonsingular and $B_2-CB_1^{-1}A$ is invertible, then \\eqref{ABCD} is invertible, with\n\\begin{equation}\n\\ \\label{ABCDinv2} \\left[\\begin{array}{cc}\nA & B_1 \\\\\nB_2 & C \\\\\n\\end{array}\\right]^{-1}=\\left[\\begin{array}{cc}\n-(B_2-CB_1^{-1}A)^{-1}CB_1^{-1} & (B_2-CB_1^{-1}A)^{-1} \\\\\nB_1^{-1}+B_1^{-1}A(B_2-CB_1^{-1}A)^{-1}CB_1^{-1} & -B_1^{-1}A(B_2-CB_1^{-1}A)^{-1} \\\\\n\\end{array}\\right].\n\\end{equation}\n\\end{teo}\n\nFor the purposes of this working, we may therefore consider the matrix $\\alpha\\widetilde{M}+\\Theta_z$ itself \nas a block matrix \\eqref{ABCD}, \nwith $A=\\alpha{}M+\\Theta_w$, $B_1=B_2=-\\alpha{}M$, $C=\\alpha{}M+\\Theta_v$. 
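Both block-inverse expressions \eqref{ABCDinv1} and \eqref{ABCDinv2} can be checked numerically. The sketch below (NumPy; hypothetical random blocks shifted to be safely invertible) verifies that each formula indeed produces the inverse of the block matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4
# Hypothetical invertible blocks for checking the 2x2 block-inverse formulas
A  = rng.standard_normal((k, k)) + 5 * np.eye(k)
B1 = rng.standard_normal((k, k)) + 5 * np.eye(k)
B2 = rng.standard_normal((k, k))
C  = rng.standard_normal((k, k))

T = np.block([[A, B1], [B2, C]])
Ai = np.linalg.inv(A)
S1 = np.linalg.inv(C - B2 @ Ai @ B1)          # (C - B2 A^{-1} B1)^{-1}
inv1 = np.block([
    [Ai + Ai @ B1 @ S1 @ B2 @ Ai, -Ai @ B1 @ S1],
    [-S1 @ B2 @ Ai,               S1],
])
assert np.allclose(inv1 @ T, np.eye(2 * k))   # first formula

B1i = np.linalg.inv(B1)
S2 = np.linalg.inv(B2 - C @ B1i @ A)          # (B2 - C B1^{-1} A)^{-1}
inv2 = np.block([
    [-S2 @ C @ B1i,                 S2],
    [B1i + B1i @ A @ S2 @ C @ B1i, -B1i @ A @ S2],
])
assert np.allclose(inv2 @ T, np.eye(2 * k))   # second formula
```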
It may easily be verified that $A$, $C-B_{2}A^{-1}B_1$, $B_1$, $B_2-CB_1^{-1}A$ are then invertible matrices, and so the results \\eqref{ABCDinv1} and \\eqref{ABCDinv2} both hold in this setting.\n\nWe now consider approximating $\\alpha\\widetilde{M}+\\Theta_z$ within a preconditioner by replacing all mass matrices with their diagonals, i.e. writing\n\\begin{equation*}\n\\ \\alpha\\widetilde{D}_M+\\Theta_z:=\\left[\\begin{array}{cc}\n\\alpha{}D_M+\\Theta_w & -\\alpha{}D_M \\\\\n-\\alpha{}D_M & \\alpha{}D_M+\\Theta_v \\\\\n\\end{array}\\right].\n\\end{equation*}\nThis would give us a practical approximation, by using the expression \\eqref{ABCDinv1} to apply $(\\alpha\\widetilde{D}_M+\\Theta_z)^{-1}$, provided it can be demonstrated that $\\alpha\\widetilde{D}_M+\\Theta_z$ well approximates $\\alpha\\widetilde{M}+\\Theta_z$. This is indeed the case, as demonstrated using the result below:\n\\begin{teo}\n\t\\label{theorem1}\nThe eigenvalues $\\lambda$ of the matrix\n\\begin{equation}\n\\ \\label{PrecDiag} \\left[\\begin{array}{cc}\n\\alpha{}D_M+\\Theta_w & -\\alpha{}D_M \\\\\n-\\alpha{}D_M & \\alpha{}D_M+\\Theta_v \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right]\n\\end{equation}\nare all contained within the interval:\n\\begin{equation*}\n\\ \\lambda\\in\\Big[\\min\\{\\lambda_{\\min}(D_M^{-1}M),1\\},\\max\\{\\lambda_{\\max}(D_M^{-1}M),1\\}\\Big].\n\\end{equation*}\n\\end{teo}\n\\emph{Proof.}~~The eigenvalues of \\eqref{PrecDiag} satisfy\n\\begin{equation*}\n\\ \\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right]\\left[\\begin{array}{c}\n\\mathbf{x}_1 \\\\\n\\mathbf{x}_2 \\\\\n\\end{array}\\right]=\\lambda\\left[\\begin{array}{cc}\n\\alpha{}D_M+\\Theta_w & -\\alpha{}D_M \\\\\n-\\alpha{}D_M & \\alpha{}D_M+\\Theta_v \\\\\n\\end{array}\\right]\\left[\\begin{array}{c}\n\\mathbf{x}_1 
\\\\\n\\mathbf{x}_2 \\\\\n\\end{array}\\right],\n\\end{equation*}\nwith $\\mathbf{x}_1$, $\\mathbf{x}_2$ not both equal to $\\mathbf{0}$, which may be decomposed to write\n\\begin{align}\n\\ \\label{EigEqn1} (\\alpha{}M+\\Theta_w)\\mathbf{x}_1-\\alpha{}M\\mathbf{x}_2={}&\\lambda(\\alpha{}D_M+\\Theta_w)\\mathbf{x}_1-\\lambda\\alpha{}D_M\\mathbf{x}_2, \\\\\n\\ \\label{EigEqn2} -\\alpha{}M\\mathbf{x}_1+(\\alpha{}M+\\Theta_v)\\mathbf{x}_2={}&-\\lambda\\alpha{}D_M\\mathbf{x}_1+\\lambda(\\alpha{}D_M+\\Theta_v)\\mathbf{x}_2.\n\\end{align}\nSumming \\eqref{EigEqn1} and \\eqref{EigEqn2} gives that\n\\begin{equation*}\n\\ \\Theta_w\\mathbf{x}_1+\\Theta_v\\mathbf{x}_2=\\lambda{}\\Theta_w\\mathbf{x}_1+\\lambda{}\\Theta_v\\mathbf{x}_2=\\lambda(\\Theta_w\\mathbf{x}_1+\\Theta_v\\mathbf{x}_2),\n\\end{equation*}\nwhich tells us that either $\\lambda=1$ or $\\Theta_w\\mathbf{x}_1+\\Theta_v\\mathbf{x}_2=\\mathbf{0}$. In the latter case, we substitute $\\mathbf{x}_1=-\\Theta_w^{-1}\\Theta_v\\mathbf{x}_2$ into \\eqref{EigEqn1} to give that\n\\begin{align*}\n\\ -(\\alpha{}M+\\Theta_w)\\Theta_w^{-1}\\Theta_v\\mathbf{x}_2-\\alpha{}M\\mathbf{x}_2={}&-\\lambda(\\alpha{}D_M+\\Theta_w)\\Theta_w^{-1}\\Theta_v\\mathbf{x}_2-\\lambda\\alpha{}D_M\\mathbf{x}_2 \\\\\n\\ \\Rightarrow\\quad\\quad~~\\Big[\\alpha{}M(\\Theta_w^{-1}\\Theta_v+I)+\\Theta_v\\Big]\\mathbf{x}_2={}&\\lambda\\Big[\\alpha{}D_M(\\Theta_w^{-1}\\Theta_v+I)+\\Theta_v\\Big]\\mathbf{x}_2,\n\\end{align*}\nwhich in turn tells us that\n\\begin{equation*}\n\\ \\Big[\\alpha{}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v(\\Theta_w^{-1}\\Theta_v+I)^{-1\/2}\\Big]\\mathbf{x}_3=\\lambda\\Big[\\alpha{}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v(\\Theta_w^{-1}\\Theta_v+I)^{-1\/2}\\Big]\\mathbf{x}_3,\n\\end{equation*}\nwhere $\\mathbf{x}_3=(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\mathbf{x}_2\\neq\\mathbf{0}$. 
Premultiplying both sides of the equation by $(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}$ then gives that\n\\begin{equation*}\n\\ \\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3=\\lambda\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3,\n\\end{equation*}\nand therefore that the eigenvalues may be described by the Rayleigh quotient\n\\begin{equation*}\n\\ \\frac{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3}{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}+\\Theta_v\\Big]\\mathbf{x}_3}.\n\\end{equation*}\nNow, as $\\mathbf{x}_3^T\\Theta_v\\mathbf{x}_3$ is a positive number, $\\lambda$ may be bounded within the range of the following Rayleigh quotient:\n\\begin{align*}\n\\ \\lambda\\in{}&\\left[\\min\\left\\{\\min_{\\mathbf{x}_3}\\frac{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3}{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3},1\\right\\},\\right. 
\\\\\n\\ &\\quad\\quad\\quad\\quad\\left.\\max\\left\\{\\max_{\\mathbf{x}_3}\\frac{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3}{\\mathbf{x}_3^T\\Big[\\alpha(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}D_M(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\Big]\\mathbf{x}_3},1\\right\\}\\right] \\\\\n\\ ={}&\\left[\\min\\left\\{\\min_{\\mathbf{x}_4}\\frac{\\mathbf{x}_4^{T}M\\mathbf{x}_4}{\\mathbf{x}_4^{T}D_M\\mathbf{x}_4},1\\right\\},\\max\\left\\{\\max_{\\mathbf{x}_4}\\frac{\\mathbf{x}_4^{T}M\\mathbf{x}_4}{\\mathbf{x}_4^{T}D_M\\mathbf{x}_4},1\\right\\}\\right] \\\\\n\\ \\subset{}&\\Big[\\min\\{\\lambda_{\\min}(D_M^{-1}M),1\\},\\max\\{\\lambda_{\\max}(D_M^{-1}M),1\\}\\Big],\n\\end{align*}\nwhere in the above derivation $\\mathbf{x}_4=(\\Theta_w^{-1}\\Theta_v+I)^{1\/2}\\mathbf{x}_3\\neq\\mathbf{0}$. This gives the stated result.~~$\\Box$\n\n\n\\vspace{1em}\n\n\\begin{remark} Theorem \\ref{theorem1} is indeed a positive result. We utilize the fact that a mass matrix preconditioned by its diagonal gives tight eigenvalue bounds \\cite{wathen87}. \nWe have now obtained a cheap approximation of the $(1,1)$-block of our saddle-point system, with eigenvalues of the preconditioned matrix provably contained within a tight interval. 
\nWe wish to emphasize the fact that the interval boundaries, and thus the region of interest where the eigenvalues will lie, are independent of all system parameters, such as penalization-, regularization-, mesh-, and time-step parameters.\n\\end{remark}\n\n\n\\subsection{Approximation of Schur Complement}\\label{sec:Schur}\n\nThe Schur complement of the Newton system \\eqref{NewtonSystem} under consideration is given by\n\\begin{equation*}\n\\ S=L(M+\\Theta_y)^{-1}L^{T}+\\left[\\begin{array}{cc}\n-M & M \\\\\n\\end{array}\\right]\\left[\\begin{array}{cc}\n\\alpha{}M+\\Theta_w & -\\alpha{}M \\\\\n-\\alpha{}M & \\alpha{}M+\\Theta_v \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{c}\n-M \\\\\nM \\\\\n\\end{array}\\right].\n\\end{equation*}\nFor the matrix inverse in the above expression, we again consider the matrix $\\alpha\\widetilde{M}+\\Theta_z$ as\na block matrix of the form \\eqref{ABCD}, with $A=\\alpha{}M+\\Theta_w$, $B_1=B_2=B=-\\alpha{}M$, $C=\\alpha{}M+\\Theta_v$. Using \\eqref{ABCDinv2} then gives that\n\\begin{align*}\n\\ &\\left[\\begin{array}{cc}\n-M & M \\\\\n\\end{array}\\right]\\left[\\begin{array}{cc}\nA & B \\\\\nB & C \\\\\n\\end{array}\\right]^{-1}\\left[\\begin{array}{c}\n-M \\\\\nM \\\\\n\\end{array}\\right] \\\\\n\\ ={}&\\left[\\begin{array}{cc}\n-M & M \\\\\n\\end{array}\\right]\\left[\\begin{array}{c}\n(B-CB^{-1}A)^{-1}CB^{-1}M+(B-CB^{-1}A)^{-1}M \\\\\n-B^{-1}M-B^{-1}A(B-CB^{-1}A)^{-1}CB^{-1}M-B^{-1}A(B-CB^{-1}A)^{-1}M \\\\\n\\end{array}\\right] \\\\\n\\ \\ ={}&-M\\Big[B^{-1}+(B^{-1}A+I)(B-CB^{-1}A)^{-1}(CB^{-1}+I)\\Big]M,\n\\end{align*}\nwhereupon substituting in the relevant $A$, $B$, $C$ gives that this expression can be written as follows:\n\\begin{align*}\n\\ &\\frac{1}{\\alpha}M-\\left(-\\frac{1}{\\alpha}A+M\\right)\\left(-\\alpha{}M+\\frac{1}{\\alpha}CM^{-1}A\\right)^{-1}\\left(-\\frac{1}{\\alpha}C+M\\right) \\\\\n\\ 
={}&\\frac{1}{\\alpha}M+\\left(\\frac{1}{\\alpha}\\Theta_w\\right)\\left(\\alpha{}M-\\left(\\alpha{}M+\\Theta_w+\\Theta_v+\\frac{1}{\\alpha}\\Theta_v{}M^{-1}\\Theta_w\\right)\\right)^{-1}\\left(\\frac{1}{\\alpha}\\Theta_v\\right) \\\\\n\\ ={}&\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}.\n\\end{align*}\nTherefore, $S$ may be written as\n\\begin{equation}\n\\ \\label{Schur} S=L(M+\\Theta_y)^{-1}L^{T}+\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}.\n\\end{equation}\nIt can be shown that $S$ consists of a sum of two symmetric positive semidefinite matrices. The matrix $L(M+\\Theta_y)^{-1}L^{T}$ clearly satisfies this property due to the positive definiteness of $M+\\Theta_y$, and $\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}$ is in fact positive definite by the following argument:\n\\begin{align*}\n\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\frac{1}{\\alpha}M^{-1}+\\Theta_w^{-1}+\\Theta_v^{-1}\\right)^{-1}\\succ0\\quad\\Leftrightarrow\\quad&\\frac{1}{\\alpha^2}\\left(\\frac{1}{\\alpha}M^{-1}+\\Theta_w^{-1}+\\Theta_v^{-1}\\right)^{-1}\\prec\\frac{1}{\\alpha}M \\\\\n\\ \\Leftrightarrow\\quad&\\alpha^2\\left(\\frac{1}{\\alpha}M^{-1}+\\Theta_w^{-1}+\\Theta_v^{-1}\\right)\\succ\\alpha{}M^{-1} \\\\\n\\ \\Leftrightarrow\\quad&\\ M^{-1}+\\alpha\\Theta_w^{-1}+\\alpha\\Theta_v^{-1}\\succ{}M^{-1}.\n\\end{align*}\nBased on this observation, we apply a ``\\emph{matching strategy}'' previously derived in \\cite{PSW11,PW10} for simpler PDE-constrained optimization problems, which relies on a Schur complement being written in this form. 
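The closed form \eqref{Schur} can be verified numerically against a direct computation of the Schur complement. The following sketch (NumPy; synthetic SPD data of our own, not from the paper's discretization) confirms the identity.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
alpha = 1e-2
# Hypothetical data: SPD mass matrix M, stiffness-like L, positive diagonal Thetas
M = rng.standard_normal((n, n)); M = M @ M.T + n * np.eye(n)
L = 2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
Ty = np.diag(rng.uniform(0.1, 1.0, n))        # Theta_y
Tw = np.diag(rng.uniform(0.1, 1.0, n))        # Theta_w
Tv = np.diag(rng.uniform(0.1, 1.0, n))        # Theta_v

inv = np.linalg.inv
A11 = inv(M + Ty)
Azz = np.block([[alpha * M + Tw, -alpha * M], [-alpha * M, alpha * M + Tv]])
Bz = np.hstack([-M, M])                        # the [-M  M] block

# Direct Schur complement vs the closed form derived in the text
S_direct = L @ A11 @ L.T + Bz @ np.linalg.solve(Azz, Bz.T)
S_formula = (L @ A11 @ L.T + M / alpha
             - inv(inv(Tw) + inv(Tv) + inv(M) / alpha) / alpha**2)
assert np.allclose(S_direct, S_formula)
```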
In more detail, we approximate the Schur complement $S$ by\n\\begin{equation}\n\\ \\label{SchurApprox} \\widehat{S}=\\left(L+\\widehat{M}\\right)(M+\\Theta_y)^{-1}\\left(L+\\widehat{M}\\right)^T,\n\\end{equation}\nwhere $\\widehat{M}$ is chosen such that the `outer' term of $\\widehat{S}$ in \\eqref{SchurApprox} approximates the second and third terms of $S$ in \\eqref{Schur}, that is\n\\begin{equation*}\n\\ \\widehat{M}(M+\\Theta_y)^{-1}\\widehat{M}^{T}\\approx\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}.\n\\end{equation*}\nThis may be achieved if\n\\begin{equation*}\n\\ \\widehat{M}\\approx\\left[\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}\\right]^{1\/2}(M+\\Theta_y)^{1\/2}.\n\\end{equation*}\nA natural choice, which may be readily worked with on a computer, therefore involves replacing the mass matrices with their diagonals, so that the required matrix square roots become practical to compute, and setting\n\\begin{equation*}\n\\ \\widehat{M}=\\left[\\frac{1}{\\alpha}D_M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}D_M^{-1}\\right)^{-1}\\right]^{1\/2}(D_M+\\Theta_y)^{1\/2}.\n\\end{equation*}\nWe therefore have a Schur complement approximation $\\widehat{S}$ which may be approximately inverted by applying a multigrid method to the matrix $L+\\widehat{M}$ and its transpose, along with a matrix-vector multiplication for $M+\\Theta_y$.\n\nBelow we present a result establishing a lower bound on the eigenvalues of the preconditioned Schur complement.\n\\begin{teo}\nIn the case of lumped (diagonal) mass matrices, the eigenvalues of the preconditioned Schur complement all satisfy:\n\\begin{equation*}\n\\ \\lambda(\\widehat{S}^{-1}S)\\geq\\frac{1}{2}.\n\\end{equation*}\n\\end{teo}\n\\emph{Proof.}~~Bounds for the eigenvalues of $\\widehat{S}^{-1}S$ are determined by the extrema of the Rayleigh
quotient\n\\begin{equation*}\n\\ R:=\\frac{\\mathbf{v}^{T}S\\mathbf{v}}{\\mathbf{v}^{T}\\widehat{S}\\mathbf{v}}=\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\omega^T\\boldsymbol\\omega}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)},\n\\end{equation*}\nwhere\n\\begin{align*}\n\\ \\boldsymbol\\chi={}&(M+\\Theta_y)^{-1\/2}L^T\\mathbf{v}, \\\\\n\\ \\boldsymbol\\omega={}&\\left[\\frac{1}{\\alpha}M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}M^{-1}\\right)^{-1}\\right]^{1\/2}\\mathbf{v}, \\\\\n\\ \\boldsymbol\\gamma={}&(M+\\Theta_y)^{-1\/2}(D_M+\\Theta_y)^{1\/2}\\left[\\frac{1}{\\alpha}D_M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}D_M^{-1}\\right)^{-1}\\right]^{1\/2}\\mathbf{v}.\n\\end{align*}\nFollowing the argument used in \\cite[Lemma 2]{PGIP17}, we may bound $R$ as follows:\n\\begin{equation}\\label{Rbound}\n\\ R=\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\displaystyle{\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}}\\hspace{0.25em}\\boldsymbol\\gamma^T\\boldsymbol\\gamma}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)}\\geq\\min\\left\\{\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma},1\\right\\}\\cdot\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\gamma^T\\boldsymbol\\gamma}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)}\\geq\\frac{1}{2}\\cdot\\min\\left\\{\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma},1\\right\\},\n\\end{equation}\nusing the argument\n\\begin{align*}\n\\ 
\\frac{1}{2}(\\boldsymbol\\chi-\\boldsymbol\\gamma)^T(\\boldsymbol\\chi-\\boldsymbol\\gamma)\\geq0\\quad\\Leftrightarrow&\\quad\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\gamma^T\\boldsymbol\\gamma\\geq\\frac{1}{2}(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma) \\\\\n\\ \\Leftrightarrow&\\quad\\frac{\\boldsymbol\\chi^T\\boldsymbol\\chi+\\boldsymbol\\gamma^T\\boldsymbol\\gamma}{(\\boldsymbol\\chi+\\boldsymbol\\gamma)^T(\\boldsymbol\\chi+\\boldsymbol\\gamma)}\\geq\\frac{1}{2}.\n\\end{align*}\n\nWe now turn our attention to the ratio $\\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}$. Straightforward calculation tells us that\n\\begin{equation*}\n\\ \\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}=\\underbrace{\\frac{\\mathbf{v}^T[M-(\\Theta+M^{-1})^{-1}]\\mathbf{v}}{\\mathbf{v}^T[D_M-(\\Theta+D_M^{-1})^{-1}]\\mathbf{v}}}_{=:R_{\\Theta}}\\cdot\\frac{\\mathbf{w}^T(D_M+\\Theta_y)^{-1}\\mathbf{w}}{\\mathbf{w}^T(M+\\Theta_y)^{-1}\\mathbf{w}},\n\\end{equation*}\nwhere $\\Theta:=\\alpha\\Theta_w^{-1}+\\alpha\\Theta_v^{-1}$ and $\\mathbf{w}:=(D_M+\\Theta_y)^{1\/2}\\big[\\frac{1}{\\alpha}D_M-\\frac{1}{\\alpha^2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha}D_M^{-1}\\right)^{-1}\\big]^{1\/2}\\mathbf{v}$. It may be observed that\n\\begin{equation*}\n\\ \\frac{\\mathbf{w}^T(D_M+\\Theta_y)^{-1}\\mathbf{w}}{\\mathbf{w}^T(M+\\Theta_y)^{-1}\\mathbf{w}}\\geq\\lambda_{\\min}\\Big((D_M+\\Theta_y)^{-1}(M+\\Theta_y)\\Big)\\geq\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\},\n\\end{equation*}\nand hence that\n\\begin{equation}\\label{omegabound}\n\\ \\frac{\\boldsymbol\\omega^T\\boldsymbol\\omega}{\\boldsymbol\\gamma^T\\boldsymbol\\gamma}\\geq{}R_{\\Theta}\\cdot\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\}.\n\\end{equation}\n\nFinally, we observe that $R_{\\Theta}=\\lambda_{\\min}(D_M^{-1}M)=1$ for lumped mass matrices, as $D_M=M$.
Inserting \\eqref{omegabound} into \\eqref{Rbound} then gives the required result.~~$\\Box$\n\n\\vspace{1em}\n\n\\begin{remark} For consistent mass matrices, the working above still holds, except $R_{\\Theta}$ and $\\lambda_{\\min}(D_M^{-1}M)$ are not equal to $1$. Therefore, the bound reads\n\\begin{equation*}\n\\ \\lambda(\\widehat{S}^{-1}S)\\geq\\frac{1}{2}\\cdot\\min\\Big\\{\\min\\hspace{0.1em}R_{\\Theta}\\cdot\\min\\left\\{\\lambda_{\\min}(D_M^{-1}M),1\\right\\},1\\Big\\},\n\\end{equation*}\nand depends on the matrix $[D_M-(\\Theta+D_M^{-1})^{-1}]^{-1}[M-(\\Theta+M^{-1})^{-1}]$, which does not have uniformly bounded eigenvalues. This is, however, a weak bound, and in practice we find that the (smallest and largest) eigenvalues of the preconditioned Schur complement are moderate in size.\n\n\nFurthermore, in numerical experiments, we find the vast majority of the eigenvalues of $\\widehat{S}^{-1}S$ \nto be clustered in the interval $\\left [\\frac{1}{2},1 \\right]$, particularly as the Interior Point method approaches convergence, for the following reasons. In \\cite[Theorem 4.1]{PW11}, it is shown that\n\\begin{eqnarray}\\label{half1}\n\\ \\lambda\\left(\\left[\\left(L+\\frac{1}{\\sqrt{\\alpha}}M\\right)M^{-1}\\left(L+\\frac{1}{\\sqrt{\\alpha}}M\\right)^T\\right]^{-1}\\left[LM^{-1}L^T+\\frac{1}{\\alpha}M\\right]\\right)\\in\\left[\\frac{1}{2},1\\right],\n\\end{eqnarray}\nfor any (positive) value of $\\alpha$, and any mesh-size, provided $L+L^T$ is positive semidefinite, which is the case for Poisson and convection--diffusion problems for instance. 
For the Schur complement \\eqref{Schur} and Schur complement approximation \\eqref{SchurApprox}, as the Interior Point method approaches convergence, two cases will arise: (i) some entries of $\\Theta_w^{-1}+\\Theta_v^{-1}$ will approach zero, whereupon substituting these values into \\eqref{Schur} and \\eqref{SchurApprox} gives that $S$ and $\\widehat{S}$ are both approximately $L(M+\\Theta_y^{-1})^{-1}L^T$, so the eigenvalues of $\\widehat{S}^{-1}S$ should be roughly $1$; (ii) some entries of $\\Theta_w^{-1}+\\Theta_v^{-1}$ approach infinity (with many entries of $\\Theta_y$ correspondingly approaching zero), so $S$ is approximately $LM^{-1}L^T+\\frac{1}{\\alpha}M$, with $\\widehat{S}$ an approximation of $(L+\\frac{1}{\\sqrt{\\alpha}}M)M^{-1}(L+\\frac{1}{\\sqrt{\\alpha}}M)^T$, giving clustered eigenvalues as predicted by \\eqref{half1}.\nThe numerical evidence of the described behavior, for consistent mass matrices, is shown in Figure \\ref{eig}.\n\n\\vspace{1em}\n\n\\tikzexternaldisable\n \\begin{figure}[htb]\n\\begin{center}\n\t\\setlength\\figureheight{0.3\\linewidth} \n\t\\setlength\\figurewidth{0.4\\linewidth}\n\t\\subfloat[Poisson eigenvalues]{\n\t\\input{figures\/eigplot1.tikz}\n\t}\n\t\\subfloat[Convection--diffusion eigenvalues]{\n\t\\input{figures\/eigplot2.tikz}\n\t}\n \\end{center}\n\\caption{Eigenvalue distribution of $\\widehat{S}^{-1}S $ at later Interior Point iterations for test problems involving Poisson's equation (left)\nand the convection--diffusion equation (right) (with mesh-size $h=2^{-4}$). 
\n} \n\\label{eig}\n\\end{figure}\n\\tikzexternalenable\n\\end{remark}\n\n\nWe note that the $(1,1)$-block and Schur complement approximations that we have derived are both symmetric positive definite, so we may apply the {\\scshape minres} algorithm with a block diagonal preconditioner\nof the form\n\\begin{equation*}\n\\ \\mathcal{P}_D=\\left[\\begin{array}{cccc}\nM+\\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha{}D_M+\\Theta_w & -\\alpha{}D_M & 0 \\\\\n0 & -\\alpha{}D_M & \\alpha{}D_M+\\Theta_v & 0 \\\\\n0 & 0 & 0 & \\widehat{S} \\\\\n\\end{array}\\right],\n\\end{equation*}\nwith $\\widehat{S}$ defined as above.\n\n\nIt is also possible to exploit the often faster convergence achieved by block triangular preconditioners within {\\scshape gmres}, and utilize the block triangular preconditioner:\n\\begin{equation*}\n\\ \\mathcal{P}_T=\\left[\\begin{array}{cccc}\nM+\\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha{}D_M+\\Theta_w & -\\alpha{}D_M & 0 \\\\\n0 & -\\alpha{}D_M & \\alpha{}D_M+\\Theta_v & 0 \\\\\nL & -M & M & -\\widehat{S} \\\\\n\\end{array}\\right].\n\\end{equation*}\n\n\n\n\\subsection{Preconditioner for Partial Observations}\n\\label{subsec::po}\nIn many practical applications, the quantity of importance is the difference between the state variable and the desired state on a certain region\n$\\Omega_1\\subset\\Omega$ of the domain, in which case one would instead consider the term $\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega_1)}$ within the cost functional \\eqref{pb}.\nThis results in a mass matrix where many of the eigenvalues are equal to zero. In more detail, the matrix $M+\\Theta_y$ is in practice $M_s+\\Theta_y$, where $M_s$ is a (singular) mass matrix acting on a subdomain, although for the purposes of our working we retain the existing notation. Hence, the standard saddle-point preconditioning\napproach cannot be straightforwardly applied, due to the $(1,1)$-block being singular. 
One strategy is to replace the singular mass matrix with a slightly perturbed \nversion in the preconditioning step. However, it is not straightforward to estimate the strength of this perturbation and its effect on the preconditioner.\n\nAn alternative approach is presented in \\cite{BenDOS15,herzog2018fast}, and we follow this strategy here. \nThis method is tailored to the case where\nthe leading block of the saddle-point system is highly singular (meaning a large proportion of its eigenvalues are zero), due to the fact that the observations \nare placed only on parts of the domain.\nIn more detail, we consider the matrix system\n\\begin{equation}\\label{MatrixPartial}\n\\left [\\begin{array}{cc c}\n M+\\Theta_y&0&L^T\\\\\n 0&\\alpha \\widetilde M + \\Theta_z &-\\bar M^T \\\\\n L&-\\bar M&0\\\\ \n\\end{array} \\right],\n\\end{equation}\nwith $M+\\Theta_y$ often a highly singular matrix, as $\\Theta_y=0$ when no state constraints are present. \nThe mass matrix used to construct $\\widetilde M$ is then defined on the control domain, which can be the whole domain or part of it.\nWe start by considering the following permutation of the matrix to be solved:\n\\begin{equation}\\label{Permuted}\n\t\\Pi\n\\left [\\begin{array}{ccc}\n M+\\Theta_y&0&L^T\\\\\n 0&\\alpha \\widetilde M + \\Theta_z &-\\bar M^T \\\\\n L&-\\bar M&0\\\\ \n\\end{array} \\right]\n=\n\\left [\\begin{array}{ccc}\nL&-\\bar M&0\\\\ \n0&\\alpha \\widetilde M + \\Theta_z &-\\bar M^T \\\\\nM+\\Theta_y&0&L^T\\\\ \n\\end{array} \\right]\n,\n\\end{equation}\nwhere \n\\begin{equation*}\n\t\\Pi:=\n\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t\t0&0&I\\\\\n\t\t\t0&I&0\\\\\n\t\t\tI&0&0\\\\\n\t\t\\end{array}\n\t\\right].\n\\end{equation*}\nThe matrix \\eqref{Permuted} is a block matrix of the form \\eqref{ABCD} with\n\\begin{equation*}\n\t\\ A=\\left[\n\t\t\\begin{array}{cc}\nL&-\\bar M\\\\ \n0&\\alpha \\widetilde M + 
\\Theta_z\\\\\n\t\t\\end{array}\n\t\\right],\\quad\\quad{}B_1=\\left[\n\t\t\\begin{array}{cc}\n\t\t\t0\\\\\n\t\t\t-\\bar M^T \\\\\n\t\t\\end{array}\n\t\\right],\\quad\\quad{}B_2=\\left[\n\t\t\\begin{array}{cc}\n\t\t\tM+\\Theta_y&0\\\\\n\t\t\\end{array}\n\t\\right],\\quad\\quad{}C=\\left[\n\t\t\\begin{array}{c}\n\t\t\tL^T \\\\\n\t\t\\end{array}\n\t\\right],\n\\end{equation*}\nwhich is a modification to a general saddle-point system, with non-symmetric extra-diagonal blocks and a non-zero $(2,2)$-block given by $L^T$. \nBased on this we propose the following preconditioner of block-triangular type for the permuted system:\n\\begin{equation*}\n\t\\widetilde{\\mathcal{P}}=\n\t\\left[\n\t\t\\begin{array}{ccc}\nL&-\\bar M&0\\\\ \n0&\\alpha \\widetilde M + \\Theta_z &0\\\\\nM+\\Theta_y&0&-\\widehat{S}_{\\Pi}\\\\\n\t\t\\end{array}\n\t\\right],\n\\end{equation*}\nwith the inverse then given by\n\\begin{equation*}\n\t\\widetilde{\\mathcal{P}}^{-1}=\n\t\\left[\n\t\t\\begin{array}{ccc}\n\t\t\tL^{-1}&L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}&0\\\\\n\t\t\t0&(\\alpha \\widetilde M + \\Theta_z)^{-1} &0\\\\\n\t\t\t\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}&\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}&-\\widehat{S}_{\\Pi}^{-1}\\\\\n\t\t\\end{array}\n\t\\right].\n\\end{equation*}\nThe matrix $\\widehat{S}_{\\Pi}$ is designed to approximate the Schur complement $S_{\\Pi}$ of the \\emph{permuted matrix system}, that is\n\\begin{equation*}\n\t\\widehat{S}_{\\Pi}\\approx\n\tS_{\\Pi}\n\t=L^{T}+(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T.\n\\end{equation*}\nWe now propose a preconditioner $\\mathcal{P}_{\\Pi}$ for the original matrix \\eqref{MatrixPartial}, such that $\\mathcal{P}_{\\Pi}^{-1}=\\widetilde{\\mathcal{P}}^{-1}\\Pi$, and we therefore obtain\n\\begin{equation}\n\t\\label{eq:prec1}\n\t\\mathcal{P}_{\\Pi}^{-1}=\n\\left[\n\\begin{array}{ccc}\n0&L^{-1}\\bar M(\\alpha \\widetilde M + 
\\Theta_z )^{-1}&L^{-1}\\\\\n0&(\\alpha \\widetilde M + \\Theta_z)^{-1} &0\\\\\n-\\widehat{S}_{\\Pi}^{-1}&\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}&\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\\\\n\\end{array}\n\\right].\n\\end{equation}\nApplying the preconditioner is in fact more straightforward than it currently appears. To compute a vector $\\mathbf{v}=\\mathcal{P}_{\\Pi}^{-1}\\mathbf{w}$, where $\\mathbf{v}:=\\left[\\mathbf{v}_{1}^T,~\\mathbf{v}_{2}^T,~\\mathbf{v}_{3}^T\\right]^T$, $\\mathbf{w}:=\\left[\\mathbf{w}_{1}^T,~\\mathbf{w}_{2}^T,~\\mathbf{w}_{3}^T\\right]^T$, we first observe from the second block of $\\mathcal{P}_{\\Pi}^{-1}$ that\n\\begin{equation*}\n\t(\\alpha \\widetilde M + \\Theta_z)^{-1}\\mathbf{w}_2=\\mathbf{v}_2.\n\\end{equation*}\nThe first equation derived from \\eqref{eq:prec1} then gives that\n\\begin{align*}\nL^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\mathbf{w}_2+L^{-1}\\mathbf{w}_3&=\\mathbf{v}_1\\\\\n\\Rightarrow\\hspace{7.2em}L^{-1}(\\bar M\\mathbf{v}_2+\\mathbf{w}_3)&=\\mathbf{v}_1,\n\\end{align*}\nand applying this within the last equation in \\eqref{eq:prec1} that\n\\begin{align*}\n-\\widehat{S}_{\\Pi}^{-1}\\mathbf{w}_1+\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\mathbf{w}_2+\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)L^{-1}\\mathbf{w}_3&=\\mathbf{v}_3\\\\\n\\Rightarrow\\hspace{5.1em}-\\widehat{S}_{\\Pi}^{-1}\\mathbf{w}_1+\\widehat{S}_{\\Pi}^{-1}(M+\\Theta_y)\\big(L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\mathbf{w}_2+L^{-1}\\mathbf{w}_3\\big)&=\\mathbf{v}_3\\\\\n\\Rightarrow\\hspace{20.95em}\\widehat{S}_{\\Pi}^{-1}\\big((M+\\Theta_y)\\mathbf{v}_1-\\mathbf{w}_1\\big)&=\\mathbf{v}_3.\n\\end{align*}\n\nThus we need to approximately solve with $\\widehat{S}_{\\Pi}$, $L$, and $\\alpha \\widetilde M + \\Theta_z$, which are all invertible matrices, to apply the preconditioner. 
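As a sanity check of the three-step application just derived, the following sketch compares it against the explicit block form \eqref{eq:prec1} on small random matrices. The blocks are random invertible stand-ins (not finite element matrices), and we take $\widehat{S}_{\Pi}=S_{\Pi}$ exactly, so the two applications agree to machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # illustrative block size

def spd(rng, n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

# Random invertible stand-ins for the blocks (assumptions, not FE matrices)
L = spd(rng, n)          # plays the role of L
Mbar = spd(rng, n)       # \bar M
Az = spd(rng, n)         # \alpha \tilde M + \Theta_z
Ay = spd(rng, n)         # M + \Theta_y

# Exact Schur complement of the permuted system, used here as \hat S_\Pi
S_Pi = L.T + Ay @ np.linalg.solve(L, Mbar @ np.linalg.solve(Az, Mbar.T))

w1, w2, w3 = (rng.standard_normal(n) for _ in range(3))

# The three-step application derived in the text
v2 = np.linalg.solve(Az, w2)
v1 = np.linalg.solve(L, Mbar @ v2 + w3)
v3 = np.linalg.solve(S_Pi, Ay @ v1 - w1)

# Explicit block form of the inverse preconditioner, for comparison
Li, Si, Azi = (np.linalg.inv(X) for X in (L, S_Pi, Az))
Z = np.zeros((n, n))
P_inv = np.block([[Z, Li @ Mbar @ Azi, Li],
                  [Z, Azi, Z],
                  [-Si, Si @ Ay @ Li @ Mbar @ Azi, Si @ Ay @ Li]])
v_direct = P_inv @ np.concatenate([w1, w2, w3])
assert np.allclose(np.concatenate([v1, v2, v3]), v_direct)
```

In practice the three solves would of course be performed approximately (multigrid for $L$, Chebyshev semi-iteration for $\alpha\widetilde M+\Theta_z$), rather than with dense factorizations as in this illustration.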
We now briefly discuss our choice of $\\widehat{S}_{\\Pi}.$ We suggest a matching strategy as above, to write\n\\begin{align*}\nS_{\\Pi}=L^{T}+(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T\\approx\\big(L^{T}+{M}_l\\big)L^{-1}\\big(L+{M}_r\\big)=\\widehat{S}_{\\Pi},\n\\end{align*}\nwhere \n\\begin{equation*}\n\t{M}_lL^{-1}{M}_r\\approx(M+\\Theta_y)L^{-1}\\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T.\n\\end{equation*}\nSuch an approximation may be achieved if, for example, \n\\begin{equation*}\n\t{M}_l=M+\\Theta_y,\\quad\\quad{M}_r\\approx \\bar M(\\alpha \\widetilde M + \\Theta_z )^{-1}\\bar M^T.\n\\end{equation*}\nAlternatively, we can use a matrix based on the approximation $\\widehat{M}$ from the previous section to approximate ${M}_r.$\nWe thus build such approximations into our preconditioner $\\mathcal{P}_{\\Pi}$, although further tailoring of such preconditioners is a subject of future investigation.\n\n\n\n\\subsection{Time-Dependent Problems}\n\\label{subsec::td}\nTo demonstrate the applicability of our preconditioners to time-dependent PDE-constrained optimization problems, we now consider the minimization of the cost functional\n\\begin{equation*}\n\\ \\mathcal{F}(\\rm y,\\rm u)=\\frac{1}{2}\\|\\rm y-\\rm y_d\\|^ 2_{L^2(\\Omega\\times(0,T))}+ \\frac{\\alpha}{2}\\|\\rm u\\|^ 2_{L^2(\\Omega\\times(0,T))} + \\beta\\|u\\|_{L^1(\\Omega\\times(0,T))},\n\\end{equation*}\nsubject to the PDE $\\rm y_{t}-\\Delta\\rm y=\\rm u+\\rm f$ on the space-time interval $\\Omega\\times(0,T)$, along with suitable boundary and initial conditions.\n\n\nWith the backward Euler method used to handle the time derivative, the matrix within the system to be solved is of the form\n\\begin{equation}\\label{TimeDeptSystem}\n \\mathcal{A} = \\left [\\begin{array}{c c c }\n\n \\tau \\mathcal{M}_c + \\Theta_y & 0 & \\mathcal{L}^T \\\\\n 0 & \\alpha\\tau\\widetilde{\\mathcal{M}}_c + \\Theta_z & -\\tau\\bar{\\mathcal{M}}^T \\\\\n \\mathcal{L} & 
-\\tau\\bar{\\mathcal{M}} & 0 \\\\ \n \\end{array} \\right],\n\\end{equation}\nwith $\\tau$ the time-step used.\n\nThe matrix $\\mathcal{M}_c$ is a block diagonal matrix consisting of multiples of mass matrices on each block diagonal corresponding to each time-step, depending on the quadrature rule used to approximate the cost functional in the time domain. For example, if a trapezoidal rule is used, then $\\mathcal{M}_c=\\text{blkdiag}(\\frac{1}{2}M,M,...,M,\\frac{1}{2}M)$, and if a rectangle rule is used, then $\\mathcal{M}_c=\\mathcal{M}:=\\text{blkdiag}(M,M,...,M,M)$. Further,\n\\begin{equation*}\n\\ \\widetilde{\\mathcal{M}}_c=\\left[\\begin{array}{cc}\n\\mathcal{M}_c & -\\mathcal{M}_c \\\\\n-\\mathcal{M}_c & \\mathcal{M}_c \\\\\n\\end{array}\\right],\\quad\\quad\\bar{\\mathcal{M}}=\\left[\\begin{array}{cc}\n\\mathcal{M} & -\\mathcal{M} \\\\\n\\end{array}\\right],\n\\end{equation*}\nand $\\mathcal{L}$ is defined as follows (with its dimension equal to that of $L$, multiplied by the number of time-steps):\n\\begin{equation*}\n\\ \\mathcal{L}=\\left[\\begin{array}{cccc}\nM+\\tau{}L & & & \\\\\n-M & M+\\tau{}L & & \\\\\n & \\ddots & \\ddots & \\\\\n & & -M & M+\\tau{}L \\\\\n\\end{array}\\right].\n\\end{equation*}\n\nWe now consider saddle-point preconditioners for the matrix \\eqref{TimeDeptSystem}. 
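For concreteness, the block-bidiagonal matrix $\mathcal{L}$ above can be assembled from two Kronecker products: one with the identity in time and one with the subdiagonal shift. The sketch below uses illustrative stand-ins for $M$ and $L$ (not the actual finite element matrices):

```python
import numpy as np

# Illustrative stand-ins: M = identity, L = 1D Laplacian stencil.
n, n_t, tau = 3, 4, 0.1
M = np.eye(n)
Lmat = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# calL = I_{n_t} x (M + tau*L) - E x M, with E the subdiagonal shift
# acting in the time direction (x denotes the Kronecker product)
E = np.eye(n_t, k=-1)
calL = np.kron(np.eye(n_t), M + tau * Lmat) - np.kron(E, M)

assert calL.shape == (n * n_t, n * n_t)
assert np.allclose(calL[:n, :n], M + tau * Lmat)   # diagonal block
assert np.allclose(calL[n:2*n, :n], -M)            # subdiagonal block
```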
We may apply preconditioners of the form\n\\begin{align*}\n\\ \\mathcal{P}_{D}={}&\\left [\\begin{array}{c c c c}\n\\tau \\mathcal{M}_c + \\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_w & -\\alpha\\tau\\mathcal{D}_{M_c} & 0 \\\\\n0 & -\\alpha\\tau\\mathcal{D}_{M_c} & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_v & 0 \\\\\n0 & 0 & 0 & \\widehat{\\mathcal{S}} \\\\ \n\\end{array} \\right] \\\\\n\\ \\text{or}\\quad\\mathcal{P}_{T}={}&\\left [\\begin{array}{c c c c}\n\\tau \\mathcal{M}_c + \\Theta_y & 0 & 0 & 0 \\\\\n0 & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_w & -\\alpha\\tau\\mathcal{D}_{M_c} & 0 \\\\\n0 & -\\alpha\\tau\\mathcal{D}_{M_c} & \\alpha\\tau\\mathcal{D}_{M_c} + \\Theta_v & 0 \\\\\n\\mathcal{L} & -\\tau\\mathcal{M} & \\tau\\mathcal{M} & -\\widehat{\\mathcal{S}} \\\\ \n\\end{array} \\right],\n\\end{align*}\nwhere $\\mathcal{D}_{M_c}:=\\text{diag}(\\mathcal{M}_c)$, the matrix $\\tau\\mathcal{M}_{c}+\\Theta_y$ can be approximately inverted by applying Chebyshev semi-iteration to the matrices arising at each time-step, and $\\widehat{\\mathcal{S}}$ is an approximation of the Schur complement:\n\\begin{equation*}\n\\ \\mathcal{S}=\\mathcal{L}(\\tau\\mathcal{M}_{c}+\\Theta_y)^{-1}\\mathcal{L}^{T}+\\frac{\\tau}{\\alpha}\\mathcal{M}\\mathcal{M}_c^{-1}\\mathcal{M}-\\frac{1}{\\alpha^2}\\mathcal{M}\\mathcal{M}_c^{-1}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha\\tau}\\mathcal{M}_c^{-1}\\right)\\mathcal{M}_c^{-1}\\mathcal{M}.\n\\end{equation*}\nWe select the approximation\n\\begin{equation*}\n\\ \\widehat{\\mathcal{S}}=\\left(\\mathcal{L}+\\widehat{\\mathcal{M}}\\right)(\\tau\\mathcal{M}_{c}+\\Theta_y)^{-1}\\left(\\mathcal{L}+\\widehat{\\mathcal{M}}\\right)^{T},\n\\end{equation*}\nusing the same reasoning as in Section \\ref{sec:Schur}, where\n\\begin{equation*}\n\\ 
\\widehat{\\mathcal{M}}=\\left[\\frac{\\tau}{\\alpha}\\mathcal{D}_{M}^2\\mathcal{D}_{M_c}^{-1}-\\frac{1}{\\alpha^2}\\mathcal{D}_{M}^2\\mathcal{D}_{M_c}^{-2}\\left(\\Theta_w^{-1}+\\Theta_v^{-1}+\\frac{1}{\\alpha\\tau}\\mathcal{D}_{M_c}^{-1}\\right)\\right]^{1\/2}(\\tau\\mathcal{D}_{M_c}+\\Theta_y)^{1\/2},\n\\end{equation*}\nwith $\\mathcal{D}_{M}:=\\text{diag}(\\mathcal{M})$. Within the numerical experiments of the forthcoming section, we apply the preconditioning strategy that arises from the working above.\n\n\n\\section{Numerical Experiments}\\label{exp}\n\n\nWe now implement the Interior Point algorithm described in the Appendix, using {\\scshape matlab}\\textsuperscript{\\textregistered} R2017b\non an Intel\\textsuperscript{\\textregistered} Xeon\\textsuperscript{\\textregistered} computer with a 2.40GHz processor, and 250GB of RAM.\nWithin the algorithm we employ the preconditioned {\\scshape minres}\\ \\cite{minres} and {\\scshape gmres} \\cite{gmres} methods with the following preconditioners:\n\\begin{itemize}\n\\item \\ipmbt: {\\sc gmres} and block triangular preconditioner $\\mathcal{P}_T,$ \n\\item {\\sc ipm-minres-${\\cal P}_D$} : {\\sc minres} with block diagonal preconditioner $\\mathcal{P}_D,$\n\\item {\\sc ipm-gmres-${\\cal P}_\\Pi$} : {\\sc gmres} and block triangular preconditioner $\\mathcal{P}_\\Pi.$\n\\end{itemize}\nRegarding the parameters listed in the Appendix, we use\n$\\alpha_0 = 0.995$ and $\\epsilon_p=\\epsilon_d=\\epsilon_c = 10^{-6}$.\nFor the barrier reduction parameter $\\sigma$, we consider for each class of\nproblems tested a value that ensures a smooth decrease in the complementarity measure\n$\\xi^k_c$ in (\\ref{gap}), that is to say $\\|\\xi^k_c\\| = \\mathcal{O}(\\mu^k)$. 
This way, the number of \nnonlinear (Interior Point) iterations typically depends only on $\\sigma$.\nWe solve the linear matrix systems to a (relative unpreconditioned residual norm) tolerance of $10^{-10}$.\n\n\n\\begin{figure}\n\\begin{center}\n\t\\setlength\\figureheight{0.225\\linewidth} \n\t\\setlength\\figurewidth{0.225\\linewidth} \n \\subfloat[Control $\\rm u$, $\\beta=5\\times10^{-2}$]{\n\t\\input{figures\/controlPoissonbeta5e_2.tikz}\n\t}\n\t\\subfloat[Control $\\rm u$, $\\beta=5\\times10^{-3}$]{\n\t\\input{figures\/controlPoissonbeta5e_3.tikz}\n\t}\n \\end{center}\n\\caption{Poisson problem: computed solutions of the control $\\rm u$, for two values of $\\beta$.} \\label{fig::poissonu}\n\\end{figure}\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\\toprule\n & \\multicolumn{ 2}{c}{$\\beta = 10^{-1}$} & \\multicolumn{ 2}{c}{$\\beta = 10^{-2}$} & \\multicolumn{ 2}{c}{$\\beta = 10^{-3}$} \\\\\n\\midrule\n & {\\sc sparsity} & $\\|u\\|_1$ & {\\sc sparsity} & $\\|u\\|_1$ & {\\sc sparsity} & $\\|u\\|_1$ \\\\\n\\midrule\n$\\alpha = 10^{-2}$ & 99\\% & 3 & 15\\% & $7\\times 10^2$ & 12\\% & $1\\times 10^3$ \\\\\n\n$\\alpha = 10^{-4}$ & 100\\% & 2 & 38\\% & $9\\times 10^2$ & 12\\% & $1\\times 10^3$ \\\\\n\n$\\alpha = 10^{-6}$ & 100\\% & 2 & 39\\% & $9\\times 10^2$ & 12\\% & $1\\times 10^3$ \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Poisson problem: sparsity features of the computed optimal control, for a range of $\\alpha$ and $\\beta$, and mesh-size $h = 2^{-5}$. \n\\label{tab::sparsity}}\n\\end{table}\n\n\nWe apply the {\\scshape ifiss} software package \\cite{ifissmatlab,ifisslink} to build\nthe relevant finite element matrices for the 2D examples shown in this section, and use the\n{\\scshape deal.II} library \\cite{dealii} in the 3D case. 
In each case we utilize $Q1$ finite elements\nfor the state, control, and adjoint variables.\n\n\nWe apply $20$ steps of Chebyshev semi-iteration to approximate the inverse of mass matrices, as well as mass matrices plus positive diagonal matrices, whenever they arise within the preconditioners.\nApplying the approximate inverses of the Schur complement approximations derived for each of our preconditioners\nrequires solving for matrices of the form $L + \\widehat M$ and its transpose.\nFor this we utilize $3$ V-cycles of the algebraic multigrid routine {\\sc hsl-mi20} \\cite{Boyle2007},\nwith a Gauss--Seidel coarse solver, and apply $5$ steps of pre- and post-smoothing.\nFor time-dependent problems, we also use Chebyshev semi-iteration and algebraic multigrid within the preconditioner, \nbut are required to apply the methods to matrices arising from each time-step.\nIn all the forthcoming tables of results, we report the average number of linear ({\\scshape minres} or {\\scshape gmres}) iterations {\\sc av-li},\nand the average CPU time {\\sc av-cpu}. The overall number of nonlinear (Interior Point) iterations {\\sc nli} is specified in the table captions. 
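For reference, the following is a minimal sketch of a Chebyshev iteration of the kind used to approximate the action of mass matrix inverses, assuming bounds $[\lambda_{\min},\lambda_{\max}]$ on the spectrum are available. Here they are computed directly for a small mass-like test matrix, whereas in practice a priori bounds for diagonally scaled mass matrices would be used:

```python
import numpy as np

def chebyshev(A, b, lmin, lmax, steps):
    """Chebyshev iteration for A x = b, given eigenvalue bounds [lmin, lmax]."""
    theta, delta = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b.copy()
    d = r / theta                      # first correction direction
    for _ in range(steps):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Small 1D consistent-mass-like tridiagonal test matrix (illustrative)
n = 20
A = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0
lam = np.linalg.eigvalsh(A)

rng = np.random.default_rng(0)
b = rng.standard_normal(n)
x = chebyshev(A, b, lam[0], lam[-1], steps=20)
assert np.linalg.norm(A @ x - b) <= 1e-8 * np.linalg.norm(b)
```

A fixed number of steps makes the operation linear in the right-hand side, which is what allows its use inside {\scshape minres} and {\scshape gmres} preconditioners.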
\nWe believe these demonstrate the effectiveness of our proposed Interior Point and preconditioning approaches, as well as the robustness of the\noverall method, for a range of PDEs, matrix dimensions, and parameters involved in the problem set-up.\n\n\n\\subsection{A Poisson Problem}\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrrrr}\n\\toprule\n & & \\multicolumn{ 2}{c}{\\ipmbt } & \\multicolumn{ 2}{|c }{{\\sc ipm-minres-${\\cal P}_D$} } \\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\n\\multicolumn{ 1}{c}{6} & $-2$ & 8.9 & 0.2 & 19.4 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.2 & 0.2 & 16.3 & 0.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.1 & 0.2 & 14.6 & 0.3 \\\\\n\n\\multicolumn{ 1}{c}{7} & $-2$ & 9.0 & 0.8 & 19.5 & 1.6 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.1 & 0.7 & 15.8 & 1.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 6.8 & 0.6 & 14.4 & 1.4 \\\\\n\n\\multicolumn{ 1}{c}{8} & $-2$ & 6.9 & 2.5 & 14.3 & 5.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 6.5 & 2.4 & 13.4 & 4.7 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 6.5 & 2.4 & 12.8 & 4.5 \\\\\n\n\\multicolumn{ 1}{c}{9} & $-2$ & 7.9 & 12.4 & 13.8 & 21.8 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.6 & 12.0 & 12.7 & 20.2 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.5 & 11.9 & 12.3 & 20.0 \\\\\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Poisson problem: average Krylov iterations and CPU times for problem with control constraints, for a range of $h$ and $\\alpha$, $\\beta = 10^{-2}$, $\\sigma = 0.2$, $\\textsc{nli} = 9$.\n\\label{tab::resultspoisson1}}\n\\end{table}\n\nWe first examine an optimization problem involving Poisson's equation, investigating the behavior of the IPM and our proposed preconditioners. 
\n\n\n\\subsection*{Two-Dimensional Case}\nWe focus initially on the performance of our solvers for the two-dimensional Poisson problem, employing both \\ipmbt and {\\sc ipm-minres-${\\cal P}_D$} methods, as well as considering some sparsity issues.\nWe set the box constraints for the control to be $\\rm u_a=-2$, $\\rm u_b=1.5,$ and the desired state\n$\\rm y_d=\\sin(\\pi {\\rm x_1})\\sin(\\pi {\\rm x_2}) $, with ${\\rm x}_i$ denoting the $i$th spatial variable. Figure \\ref{fig::poissonu} displays the computed optimal controls for a particular set-up on the domain $\\Omega=(0,1)^2$, for both $\\beta=5\\times10^{-2}$ and $\\beta=5\\times10^{-3}$,\nwith $\\alpha = 10^{-2}$. Table \\ref{tab::sparsity} reports the level of sparsity in the computed solution, as well as its \n$\\ell_1$-norm, when varying the regularization parameters $\\alpha$ and $\\beta$. The value of {\\sc sparsity} in the table is computed by\nmeasuring the percentage of components of $u$ which are below a certain threshold ($10^{-2}$ in our case);\nsee e.g. \\cite{fpcas}. We observe that our algorithm reliably computes sparse\ncontrols, and, as expected, the sparsity of the solution increases as $\\beta$ is increased.\n\nIn Table \\ref{tab::resultspoisson1} we compare the performance of the preconditioners $\\mathcal{P}_T$ and $\\mathcal{P}_D$ within the IPM, varying the \nspatial mesh-size $h = 2^{-\\ell},\\ \\ell = 6, \\dots, 9$, as well as the regularization parameter $\\alpha$, while fixing the value $\\beta = 10^{-2}$ (Table \\ref{tab::sparsity} indicates that this value of $\\beta$ gives rise to the most computationally interesting case). We set $\\sigma = 0.2$, and\ntake $9$ Interior Point iterations with a final value $\\mu^k = 5 \\times 10^{-7}$. 
Figure \\ref{fig::convh} illustrates the typical convergence behavior of the feasibilities $\\xi^k_p, \\xi^k_d$ and the complementarity $\\xi^k_c$, together with \nthe decrease of $\\mu^k$ with this value of $\\sigma$.\nThe reported results demonstrate good robustness of both preconditioners with respect to both $h$ and $\\alpha$ in terms of linear iterations and\nCPU time, with \\ipmbt outperforming {\\sc ipm-minres-${\\cal P}_D$} in each measure.\nWhile the value of {\\sc av-li} is roughly constant in both implementations, we observe that when using {\\sc ipm-minres-${\\cal P}_D$} the number of\npreconditioned {\\scshape minres} iterations slightly increases as $\\mu^k \\rightarrow 0$, as many entries of $\\Theta_{z}$ tend to zero. \nBy contrast, the number of preconditioned {\\scshape gmres} iterations hardly varies with $k$.\n\n\n\\tikzexternaldisable\n \\begin{figure}[htb]\n \\centering\n\t\\setlength\\figureheight{0.35\\linewidth} \n\t\\setlength\\figurewidth{0.45\\linewidth}\n\\input{figures\/convhist.tikz}\n \\caption{Typical convergence history of the relevant quantities $\\mu^k, \\xi^k_p, \\xi^k_d, \\xi^k_c$. \n\\label{fig::convh}}\n\\end{figure}\n\\tikzexternalenable\n\nAs a final validation of the general framework outlined, we report in Table \\ref{tab::resultspoisson2}\nresults obtained when imposing both control and state constraints within the Poisson setting described above.\nIn particular, we set $\\rm y_a=-0.1$, $\\rm y_b=0.8$, $\\rm u_a=-1$, $\\rm u_b=15$ and test the most promising implementation\nof the IPM, that is, the \\ipmbt routine, while varying $h$ and $\\alpha$. 
The reported values of {\\sc av-li} confirm the robustness of\nthe preconditioning strategy proposed.\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrr}\n\\toprule\n & & \\multicolumn{ 2}{c}{\\ipmbt} \\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} \\\\% & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\\multicolumn{ 1}{c}{6} & $-2$ & 15.8 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.4 & 0.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 10.6 & 0.2 \\\\\n\n\\multicolumn{ 1}{c}{7} & $-2$ & 14.8 & 1.5 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.4 & 1.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 10.3 & 0.9 \\\\\n\n\\multicolumn{ 1}{c}{8} & $-2$ & 14.6 & 5.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 10.8 & 3.9 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 10.1 & 3.5 \\\\\n\n\\multicolumn{ 1}{c}{9} & $-2$ & 14.5 & 22.1 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 10.8 & 16.6 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 9.0 & 15.4 \\\\\n\n\n\\bottomrule\n\\end{tabular} \\hfill \\begin{tabular}{clrrrr}\n\\toprule\n&&\\multicolumn{ 2}{c}{{\\sc ipm-gmres-${\\cal P}_\\Pi$} }\\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\n\\multicolumn{ 1}{c}{3} & $-2$ & 10.2 & 0.04 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.3 & 0.05 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 11.3 & 0.05 \\\\\n\n\\multicolumn{ 1}{c}{4} & $-2$ & 11.2 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 11.3 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 11.3 & 0.4 \\\\\n\n\\multicolumn{ 1}{c}{5} & $-2$ & 15.0 & 7.2 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 15.1 & 7.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 15.1 & 7.3 \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{\\emph{(Left)} Poisson problem: average Krylov iterations and CPU times for problem with both control and state constraints, for a range of $h$ and $\\alpha$, $\\beta = 10^{-2}$, $\\sigma = 0.2$ ($\\textsc{nli} = 14$).\\\\\\emph{(Right)} 
Three-dimensional Poisson problem with partial observations: average Krylov iterations and CPU times, for a range of $h$ and $\\alpha$, $\\beta = 10^{-3}$, $\\sigma = 0.25$ ($\\textsc{nli} = 11$).\n\\label{tab::resultspoisson2}}\n\\end{table}\n\n\n\\input{partial_rev.tex}\n\n\\subsection{A Convection--Diffusion Problem}\nWe next consider the optimal control of the convection--diffusion equation given by\n$- \\varepsilon \\Delta {\\rm y} + \\vec{\\rm w} \\cdot \\nabla {\\rm y} = {\\rm u}$\non the domain $\\Omega=(0,1)^2$, with the wind vector $\\vec{\\rm w}$ given by $\\vec{\\rm w} = \\big[{\\rm 2x_2(1-x_1^2)}, {\\rm -2x_1(1-x_2^2)}\\big]^T$, and the bounds on the control given by $\\rm u_a=-2$ and $\\rm u_b = 1.5$.\nThe desired state is here defined by\n$\\rm y_d = \\exp(\\rm -64((x_1-0.5)^2+(x_2-0.5)^2))$.\nThe discretization is again performed using $Q1$ finite elements, while also employing the Streamline Upwind Petrov--Galerkin (SUPG) \\cite{BroH82} upwinding scheme as implemented in {\\scshape ifiss}. 
The results of our scheme are given in Table \\ref{tab::resultscd1}, which again exhibit robustness with respect to $h$ and $\\alpha$, while also performing well for both values of $\\varepsilon$ tested.\n\\begin{figure}\n\\begin{center}\n\t\\setlength\\figureheight{0.225\\linewidth} \n\t\\setlength\\figurewidth{0.225\\linewidth} \n\n\t\t\\subfloat[Control $\\rm u$, $\\beta=10^{-2}$]{\n\t\\input{figures\/controlCDbeta1e_2.tikz}\n\t}\n\t\\subfloat[Control $\\rm u$, $\\beta=10^{-3}$]{\n\t\\input{figures\/controlCDbeta1e_3.tikz}\n\t}\n \\end{center}\n\\caption{Convection--diffusion problem: computed solutions of the control $\\rm u$, for two values of $\\beta$.} \\label{fig::CDu}\n\\end{figure}\n\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrr|rrrr}\n\\toprule\n & & \\multicolumn{ 4}{c|}{$\\varepsilon = 10^{-1}$ } & \\multicolumn{ 4}{c}{$\\varepsilon = 10^{-2}$} \\\\\n\t\t\t\t\t\\midrule\n & & \\multicolumn{ 2}{c}{\\ipmbt} & \\multicolumn{ 2}{|c }{{\\sc ipm-minres-${\\cal P}_D$} }\n\t\t\t\t\t\t\t\t\t\t\t\t & \\multicolumn{ 2}{|c}{\\ipmbt} & \\multicolumn{ 2}{|c }{{\\sc ipm-minres-${\\cal P}_D$} }\\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\\multicolumn{ 1}{c}{6} & $-2$ & 9.4 & 0.2 & 21.1 & 0.5 & 11.2 & 0.5 & 25.8 & 1.1 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 8.3 & 0.2 & 18.2 & 0.4 & 10.5 & 0.5 & 23.2 & 1.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 8.2 & 0.2 & 17.8 & 0.4 & 10.5 & 0.5 & 23.5 & 1.0 \\\\\n\n\\multicolumn{ 1}{c}{7} & $-2$ & 8.2 & 0.8 & 18.0 & 1.7 & 9.2 & 1.6 & 20.6 & 3.4 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.5 & 0.7 & 16.3 & 1.5 & 8.7 & 1.5 & 19.0 & 3.1 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.5 & 0.7 & 16.1 & 1.5 & 8.7 & 1.5 & 19.4 & 3.1 \\\\\n\n\\multicolumn{ 1}{c}{8} & $-2$ & 7.5 & 2.7 & 16.3 & 5.6 & 8.0 & 3.8 & 17.1 & 7.9 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 7.0 & 2.5 & 
15.1 & 5.2 & 7.7 & 3.7 & 16.4 & 7.5 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 7.0 & 2.5 & 14.8 & 5.1 & 7.7 & 3.7 & 16.4 & 7.5 \\\\\n\n\\multicolumn{ 1}{c}{9} & $-2$ & 7.0 & 11.2 & 14.9 & 23.0 & 7.3 & 13.1 & 15.1 & 26.3 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 6.7 & 11.0 & 14.2 & 22.4 & 6.8 & 12.5 & 14.4 & 25.5 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 6.7 & 11.0 & 13.9 & 21.7 & 6.8 & 12.5 & 14.5 & 25.5 \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Convection--diffusion problem: average Krylov iterations and CPU times for problem with control constraints, for a range of $h$ and $\\alpha$, $\\beta = 10^{-3}$, $\\sigma=0.25$ ($\\textsc{nli} = 11$) with $\\varepsilon = 10^{-1}$, and $\\sigma=0.4$ ($\\textsc{nli} = 16$) with $\\varepsilon = 10^{-2}$.\\label{tab::resultscd1}}\n\\end{table}\n\nWe now provide numerical insight into the comparison between the proposed IPM approach\nand the commonly used semismooth Newton approach \\cite{HIK02}.\nWe therefore compare \\ipmbt and the implementation \\ssnip of the global semismooth Newton method proposed for PDE-constrained optimization problems with sparsity-promoting terms in \\cite{pss17}. When using the \\ssnip approach, global convergence is attained using a nonsmooth line-search strategy\nand the linear systems arising in the linear algebra phase are solved\nusing preconditioned {\\scshape gmres}. We consider the $2\\times2$ block formulation and \nan indefinite preconditioner available in a factorized form \\cite{pss17,pst15}. 
\nSince the semismooth approach requires a diagonal mass matrix in the discretization of the complementarity\nconditions, in the experiments with \\ssnip we use a lumped mass matrix.\nTable \\ref{tab::resultscd1_new} collects results concerning the nonlinear behaviour of\nthe two methods: the number of nonlinear iterations ({\\sc nli}) and the total CPU time ({\\sc tcpu}).\n\n\nIt is interesting to note that the number of nonlinear Interior Point iterations does not vary with $\\alpha$.\nIn fact, the mildly aggressive choice of barrier reduction factor $\\sigma$ yields a low number of nonlinear iterations,\n even for limiting values of $\\alpha$.\nBy contrast, \\ssnip struggles as $\\alpha \\rightarrow 0$. Furthermore, overall the \nInterior Point strategy outperforms the semismooth method in terms of total CPU time.\\\\\n\n\n\n\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrr}\n\\toprule\n & & \\multicolumn{ 2}{c}{\\ipmbt} & \\multicolumn{ 2}{|c }{\\ssnip}\\\\\n\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc nli} & {\\sc tcpu} & {\\sc nli} & {\\sc tcpu} \\\\\n\\midrule\n \n\\multicolumn{ 1}{c}{6} & -2 & 11 & 2.8 & 5 & 4.2 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 2.5 & 19 & 27.9 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 2.4 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 2.4 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{7} & -2 & 11 & 9.4 & 5 & 14.0 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 8.7 & 18 & 101.9 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 8.7 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 9.1 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{8} & -2 & 11 & 36.6 & 5 & 43.4 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 34.4 & 20 & 345.3 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 33.9 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 33.8 & $>100$ & \\\\\n\n\\multicolumn{ 1}{c}{9} & -2 & 11 & 155.9 & 5 & 147.3 \\\\\n\n\\multicolumn{ 1}{c}{} & -4 & 11 & 149.8 & 21 & 1265.4 \\\\\n\n\\multicolumn{ 1}{c}{} & -6 & 11 & 148.9 & 
$>100$ & \\\\\n\n\\multicolumn{ 1}{c}{} & -8 & 11 & 149.6 & $>100$ & \\\\\n\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Convection--diffusion problem: comparison between \\ipmbt and \\ssnip in terms of nonlinear iterations and total CPU times for problem with control constraints, for a range of $h$ and $\\alpha$,\n $\\beta = 10^{-3}$, $\\epsilon = 10^{-1}$. \\label{tab::resultscd1_new}}\n\\end{table}\n\n\n\n\n\\subsection{A Heat Equation Problem}\nTo demonstrate the applicability of our methodology to time-dependent problems, we now perform experiments on an optimization problem with the heat equation acting as a constraint. We utilize the implicit Euler scheme on a time interval up to $T=1$, for varying values of time-step $\\tau$, and set a time-independent desired state to be $\\rm y_d=\\sin(\\pi {\\rm x_1})\\sin(\\pi {\\rm x_2}) $. We consider a control problem with full observations, with Table \\ref{tab::resultsheat1} illustrating the performance of the Interior Point method and preconditioner $\\mathcal{P}_T$ for varying mesh-sizes and values of $\\alpha$, with fixed $\\beta=10^{-2}$. 
Considerable robustness is again achieved, in particular with respect to changes in the time-step.\n\\begin{table}[htb!]\n\\begin{center}\n\\begin{tabular}{llrrrrrr}\n\\toprule\n & & \\multicolumn{ 6}{c}{\\ipmbt} \\\\\n\t\t\t\t\t\\midrule\n & & \\multicolumn{ 2}{c}{$\\tau = 0.04$ } & \\multicolumn{ 2}{c}{$\\tau = 0.02$ } & \\multicolumn{ 2}{c}{$\\tau = 0.01$ } \\\\\n\t\t\t\t\t\\midrule\n $h=2^{-\\ell}$ & $\\mathrm{log}_{10}\\alpha$ & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} & {\\sc av-li} & {\\sc av-cpu} \\\\\n\\midrule\n\\multicolumn{ 1}{c}{4} & $-2$ & 13.9 & 0.6 & 13.1 & 1.0 & 13.1 & 2.2 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 13.3 & 0.5 & 12.2 & 1.0 & 12.3 & 2.0 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 12.8 & 0.5 & 12.0 & 1.0 & 12.0 & 2.0 \\\\\n\n\\multicolumn{ 1}{c}{5} & $-2$ & 14.6 & 1.6 & 14.0 & 3.1 & 14.7 & 6.6 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 13.9 & 1.5 & 13.3 & 2.9 & 13.3 & 5.8 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 13.6 & 1.5 & 12.8 & 2.8 & 13.0 & 5.7 \\\\\n\n\\multicolumn{ 1}{c}{6} & $-2$ & 15.5 & 5.9 & 14.6 & 11.4 & 15.4 & 23.7 \\\\\n\n\\multicolumn{ 1}{c}{} & $-4$ & 14.8 & 5.8 & 14.0 & 10.6 & 14.0 & 21.7 \\\\\n\n\\multicolumn{ 1}{c}{} & $-6$ & 14.6 & 5.5 & 13.8 & 10.6 & 13.9 & 21.5 \\\\\n\\bottomrule\n\\end{tabular} \n\\end{center}\n\\caption{Heat equation problem: average Krylov iterations and CPU times for problem with control constraints, \nfor a range of $h$, $\\alpha$, and $\\tau$, $\\beta = 10^{-2}$, $\\sigma=0.25$ ($\\textsc{nli} = 13$). \n\\label{tab::resultsheat1}}\n\\end{table}\n\n\n\\vspace{1em}\n\n\\begin{remark} We highlight that the number of nonlinear Interior Point iterations almost does not vary with $\\alpha$, due\nto the suitable choices made for the barrier reduction factor $\\sigma$. 
In particular, in all the test cases\ndiscussed, the choice of $\\sigma$ is mildly aggressive (from $0.2$ to $0.4$ in the most difficult cases),\nyielding a low number of nonlinear iterations, even for limiting values of $\\alpha$.\nBy contrast, a semismooth Newton approach globalized with a line-search\nstrategy may perform poorly as $\\alpha \\rightarrow 0$.\n\\end{remark}\n\n\\section{Conclusions}\n\nWe have presented a new Interior Point method for PDE-constrained optimization problems that include additional box constraints on the control variable, as well as possibly the state variable, and a sparsity-promoting $\\rm L^1$-norm term for the control within the cost functional. We incorporated a splitting of the control into positive and negative parts, as well as a suitable nodal quadrature rule, to linearize the $\\rm L^1$-norm, and considered preconditioned iterative solvers for the Newton systems arising at each Interior Point iteration. Through theoretical justification for our approximations of the $(1,1)$-block and Schur complement of the Newton systems, as well as numerical experiments, we have demonstrated the effectiveness and robustness of our approach, which may be applied within symmetric and non-symmetric Krylov methods, for a range of steady and time-dependent PDE-constrained optimization problems.\n\n\\Appendix\n\\section{Interior Point Algorithm for Quadratic Programming}\\label{IPalgo}\nIn the Algorithm below, we present the structure of the Interior Point method that we apply within our numerical experiments, following the Interior Point path-following scheme described in \\cite{gondzio12}. 
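Before the formal listing, the path-following structure can be illustrated on the simplest bound-constrained quadratic program (a toy sketch, not the paper's PDE-constrained formulation; the function name, parameter values and test problem are illustrative only):

```python
import numpy as np

# Toy path-following interior point loop for  min 0.5 x'Qx + c'x  s.t.  x >= 0.
# It mirrors the structure of the algorithm below: reduce the barrier
# parameter by sigma, solve a condensed Newton system, and damp the steps
# by the step-size factor alpha0 so that x and lam stay strictly positive.
def ipm_qp(Q, c, sigma=0.25, alpha0=0.995, tol=1e-9, max_it=200):
    n = len(c)
    x, lam = np.ones(n), np.ones(n)     # strictly positive starting point
    mu = x @ lam / n                    # initial barrier parameter

    def boundary_step(v, dv):           # largest step keeping v + a*dv > 0
        neg = dv < 0
        return alpha0 * min(1.0, (-v[neg] / dv[neg]).min()) if neg.any() else alpha0

    for _ in range(max_it):
        if np.linalg.norm(Q @ x + c - lam) < tol and x @ lam / n < tol:
            break                       # dual infeasibility and gap small enough
        mu *= sigma                     # reduce barrier parameter
        r_d = Q @ x + c - lam           # dual infeasibility
        # Newton system condensed onto dx: (Q + diag(lam/x)) dx = -r_d - (x*lam - mu)/x
        dx = np.linalg.solve(Q + np.diag(lam / x), -(r_d + (x * lam - mu) / x))
        dlam = (mu - x * lam - lam * dx) / x
        x, lam = x + boundary_step(x, dx) * dx, lam + boundary_step(lam, dlam) * dlam
    return x

# minimizer of 0.5*||x - a||^2 over x >= 0 is the projection max(a, 0)
a = np.array([1.0, -2.0, 3.0])
x_star = ipm_qp(np.eye(3), -a)
```

In the full method the dense solve is replaced by a preconditioned Krylov solver, which is exactly where the preconditioners discussed earlier enter.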
It is clear that the main computational effort arises from solving the Newton system \\eqref{NewtonSystem} at each iteration.\n\n\\algo{ipm_algo}{Interior Point Algorithm for Quadratic Programming}{\\vspace{-2em}\n\\begin{align*}\n\\ &\\textbf{Parameters} \\\\\n\\ &\\quad\\quad\\alpha_{0} \\in(0,1),~~\\text{step-size factor to boundary} \\\\\n\\ &\\quad\\quad\\sigma\\in(0,1),~~\\text{barrier reduction parameter} \\\\\n\\ &\\quad\\quad\\epsilon_{p},~\\epsilon_{d},~\\epsilon_{c},~~\\text{stopping tolerances} \\\\\n\\ &\\quad\\quad\\text{Interior point method stops when }\\big\\|{\\xi}_{p}^{k}\\big\\|\\leq\\epsilon_{p},~\\big\\|{\\xi}_{d}^{k}\\big\\|\\leq\\epsilon_{d},~\\big\\|{\\xi}_{c}^{k}\\big\\|\\leq\\epsilon_{c} \\\\\n\\ &\\textbf{Initialize IPM} \\\\\n\\ &\\quad\\quad\\text{Set the initial guesses for }{y}^{0},~{z}^{0},~{p}^{0},~{\\lambda}_{y,a}^{0},~{\\lambda}_{y,b}^{0},~{\\lambda}_{z,a}^{0},~{\\lambda}_{z,b}^{0} \\\\\n\\ &\\quad\\quad\\text{Set the initial barrier parameter }\\mu^{0} \\\\\n\\ &\\quad\\quad\\text{Compute primal infeasibility } {\\xi}_{p}^{0}, \\text{ dual infeasibility } {\\xi}_{d}^{0}, \\text{ and} \n\\text{ complementarity gap }{\\xi}_{c}^{0}, \\\\\n\\ &\\quad\\quad\\quad\\quad \\text{as in \\eqref{prdu}--\\eqref{gap} with }k=0 \\\\ \n\\ &\\textbf{Interior Point Method} \\\\\n\\ &\\quad\\quad\\text{while}~~\\left(\\big\\|{\\xi}_{p}^{k}\\big\\|>\\epsilon_{p}~~\\text{or}~~\\big\\|{\\xi}_{d}^{k}\\big\\|>\\epsilon_{d}~~\\text{or}~~\\big\\|{\\xi}_{c}^{k}\\big\\|>\\epsilon_{c}\\right) \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Reduce barrier parameter}~\\mu^{k+1}=\\sigma\\mu^{k} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Solve Newton system }\\eqref{NewtonSystem}\\text{ for primal-dual Newton direction}~{\\Delta}{y},~{\\Delta}{z},~{\\Delta p} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Use }\\text{\\eqref{zupdate1}--\\eqref{zupdate4}}\\text{ to find }{\\Delta}{\\lambda}_{y,a},~{\\Delta}{\\lambda}_{y,b},~{\\Delta}{\\lambda}_{z,a},~{\\Delta}{\\lambda}_{z,b} 
\\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Find }\\alpha_{P},~\\alpha_{D}~\\text{s.t. bound constraints on primal and dual variables hold} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Set }\\alpha_{P}=\\alpha_{0}\\alpha_{P},~\\alpha_{D}=\\alpha_{0}\\alpha_{D} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Make step: }{y}^{k+1}={y}^{k}+\\alpha_{P}{\\Delta}{y},~{z}^{k+1}={z}^{k}+\\alpha_{P}{\\Delta}{z},~{p}^{k+1}={p}^{k}+\\alpha_{D}{\\Delta p} \\\\\n\\ &\\quad\\quad\\quad\\quad\\quad\\quad{\\lambda}_{y,a}^{k+1}={\\lambda}_{y,a}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{y,a}, \\ \n {\\lambda}_{y,b}^{k+1}={\\lambda}_{y,b}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{y,b} \\\\\n\\ &\\quad\\quad\\quad\\quad\\quad\\quad{\\lambda}_{z,a}^{k+1}={\\lambda}_{z,a}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{z,a}, \\ \n {\\lambda}_{z,b}^{k+1}={\\lambda}_{z,b}^{k}+\\alpha_{D}{\\Delta}{\\lambda}_{z,b} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Update infeasibilities } {\\xi}_{p}^{k+1},~{\\xi}_{d}^{k+1}, \\text{ and compute the complementarity gap } {\\xi}_{c}^{k+1} \\\\\n\\ &\\quad\\quad\\quad\\quad\\quad\\quad\\text{as in \\eqref{prdu}--\\eqref{gap}} \\\\\n\\ &\\quad\\quad\\quad\\quad\\text{Set iteration number }k=k+1 \\\\\n\\ &\\quad\\quad\\text{end}\n\\end{align*}\\vspace{-1.5em}\n}\n\n\\textbf{Acknowledgments.}\nJ. W. Pearson gratefully acknowledges support from the Engineering and Physical Sciences Research Council (EPSRC) Fellowship EP\/M018857\/2, and a Fellowship from The Alan Turing Institute in London.\nM. Porcelli and M. Stoll were partially supported by the {\\em DAAD-MIUR Joint Mobility Program} 2018--2020 (Grant 57396654).\nThe work of M. 
Porcelli was also partially supported by the {\\em National Group of Computing Science (GNCS-INDAM)}.\n\n\\bibliographystyle{siam}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn most papers dealing with the statistical analysis of\nmeteorological data available to the authors, the suggested\nanalytical models for the observed statistical regularities in\nprecipitation are rather ideal and inadequate. For example, it is\ntraditionally assumed that the duration of a wet period (the number\nof subsequent wet days) follows the geometric distribution (for\nexample, see~\\cite{Zolina2013}) although the goodness-of-fit of this\nmodel is far from being admissible. Perhaps, this prejudice is based\non the conventional interpretation of the geometric distribution in\nterms of the Bernoulli trials as the distribution of the number of\nsubsequent wet days (``successes'') till the first dry day\n(``failure''). But the framework of Bernoulli trials assumes that\nthe trials are independent whereas a thorough statistical analysis\nof precipitation data registered in different points demonstrates\nthat the sequence of dry and wet days is not only independent, but\nit is also devoid of the Markov property so that the framework of\nBernoulli trials is absolutely inadequate for analyzing\nmeteorological data.\n\nIt turned out that the statistical regularities of the number of\nsubsequent wet days can be very reliably modeled by the negative\nbinomial distribution with the shape parameter less than one. For\nexample, in~\\cite{Gulev} we analyzed meteorological data registered\nat two geographic points with very different climate: Potsdam\n(Brandenburg, Germany) with mild climate influenced by the closeness\nto the ocean with warm Gulfstream flow and Elista (Kalmykia, Russia)\nwith radically continental climate. The initial data of daily\nprecipitation in Elista and Potsdam are presented on Figures~1a and\n1b, respectively. 
On these figures the horizontal axis is discrete\ntime measured in days. The vertical axis is the daily precipitation\nvolume measured in centimeters. In other words, the height of each\n``pin'' on these figures is the precipitation volume registered at\nthe corresponding day (at the corresponding point on the horizontal\naxis).\n\n\\renewcommand{\\figurename}{\\rm{Fig.}}\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{DataElista_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth]{DataPotsdam_en.png} \\\\\nb)}\n\\end{minipage}\n\\label{Data} \\caption{The initial data of daily precipitation in\nElista (a) and Potsdam (b).}\n\\end{figure}\n\nIn order to analyze the statistical regularities of the duration of\nwet periods this data was rearranged as shown on Figures~2a and 2b.\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{ElistaDataWet_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth]{PotsdamDataWet_en.png} \\\\\nb)}\n\\end{minipage}\n\\label{WetPeriod} \\caption{The durations of wet periods in Elista\n(a) and Potsdam (b).}\n\\end{figure}\n\nOn these figures the horizontal axis is the number of successive wet\nperiods. It should be mentioned that directly before and after each\nwet period there is at least one dry day, that is, successive wet\nperiods are separated by dry periods. On the vertical axis there lie\nthe durations of wet periods. In other words, the height of each\n``pin'' on these figures is the length of the corresponding wet\nperiod measured in days and the corresponding point on the\nhorizontal axis is the number of the wet period.\n\nThe samples of durations in both Elista and Potsdam were assumed\nhomogeneous and independent. 
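The rearrangement of a daily series into wet-period durations described above can be reproduced with a few lines of code (a minimal sketch; treating a day as ``wet'' whenever its precipitation volume is positive is our assumption):

```python
# Sketch of the rearrangement described above: a daily precipitation
# series is converted into the list of wet-period durations, where
# successive wet periods are separated by at least one dry day.
def wet_period_durations(daily, wet_threshold=0.0):
    durations, run = [], 0
    for volume in daily:
        if volume > wet_threshold:
            run += 1                  # extend the current wet period
        elif run > 0:
            durations.append(run)     # a dry day closes the wet period
            run = 0
    if run > 0:                       # the series may end inside a wet period
        durations.append(run)
    return durations

# Example: two wet periods, of lengths 3 and 1
print(wet_period_durations([0.2, 1.1, 0.4, 0.0, 0.0, 2.5]))  # [3, 1]
```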
It was demonstrated that the\nfluctuations of the numbers of successive wet days with very high\nconfidence fit the negative binomial distribution with shape\nparameter less than one (also see~\\cite{Gorshenin2017}). Figures~3a\nand~3b show the histograms constructed from the corresponding\nsamples of duration periods and the fitted negative binomial\ndistribution. In both cases the shape parameter $r$ turned out to be\nless than one. For Elista $r=0.876$, $p=0.489$, for Potsdam\n$r=0.847$, $p=0.322$.\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.5\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{ElistaWetPeriod_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.5\\textwidth}\n\\center{\n\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{PotsdamWetPeriod_en.png} \\\\ b)}\n\\end{minipage}\n\\label{WetHist} \\caption{The histogram of durations of wet periods\nin Elista (a) and Potsdam (b) and the fitted negative binomial\ndistribution.}\n\\end{figure}\n\nIt is worth noting that at the same time the statistical analysis\nconvincingly suggests the Pareto-type model for the distribution of\ndaily precipitation volumes, see Figures~4a and 4b. 
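The negative binomial probabilities entering such a fit are easy to evaluate for non-integer shape $r<1$ via the log-gamma function (a sketch; the Elista parameter values from the fit above are reused purely for illustration):

```python
import math

# Negative binomial pmf P(N=k) = Gamma(r+k)/(k! Gamma(r)) * p^r * (1-p)^k,
# computed on the log scale so that a non-integer shape r < 1 poses no problem.
def nbinom_pmf(k, r, p):
    log_pmf = (math.lgamma(r + k) - math.lgamma(k + 1) - math.lgamma(r)
               + r * math.log(p) + k * math.log(1.0 - p))
    return math.exp(log_pmf)

# Elista-type parameters from the fit above
r, p = 0.876, 0.489
total = sum(nbinom_pmf(k, r, p) for k in range(200))
# the probabilities sum to one (up to truncation of the tail)
```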
For comparison,\non these figures there are also presented the graphs of the best\ngamma-densities which, nevertheless, fit the histograms in a\nnoticeably worse way than the Pareto distributions.\n\n\\begin{figure}[h]\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{PrecipElista_en.png}\n\\\\a)}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[h]{0.49\\textwidth}\n\\center{\n\\includegraphics[width=\\textwidth,\nheight=0.6\\textwidth]{PrecipPotsdam_en.png} \\\\ b)}\n\\end{minipage}\n\\label{WetHist} \\caption{The histogram of daily precipitation\nvolumes in Elista (a) and Potsdam (b) and the fitted Pareto and\ngamma distributions.}\n\\end{figure}\n\nIn the same paper a schematic attempt was undertaken to explain this\nphenomenon by the fact that negative binomial distributions can be\nrepresented as mixed Poisson laws with mixing gamma-distributions.\nAs is known, the Poisson distribution is the best model for the\ndiscrete stochastic chaos~\\cite{Kingman1993} by virtue of the\nuniversal principle of non-decrease of entropy in closed systems\n(see, e. g., \\cite{GnedenkoKorolev1996, KorolevBeningShorgin2011})\nand the mixing distribution accumulates the statistical regularities\nin the influence of stochastic factors that can be assumed exogenous\nwith respect to the local system under consideration.\n\nIn the paper \\cite{Korolev2017} this explanation of the adequacy of\nthe negative binomial model was concretized. For this purpose, the\nconcept of a mixed geometric distribution introduced\nin~\\cite{Korolev2016TVP} (also see~\\cite{KorolevPoisson,\nKorolev2016}) was used. In~\\cite{Korolev2017} it was demonstrated\nthat any negative binomial distribution with shape parameter no\ngreater than one is a mixed geometric distribution (this result is\nreproduced below as Theorem 1). Thereby, a ``discrete'' analog of a\ntheorem due to L.~Gleser~\\cite{Gleser1989} was proved. 
Gleser's\ntheorem establishes that a gamma distribution with shape parameter\nno greater than one can be represented as a mixed exponential\ndistribution.\n\nThe representation of a negative binomial distribution as a mixed\ngeometric law can be interpreted in terms of the Bernoulli trials as\nfollows. First, as a result of some ``preliminary'' experiment the\nvalue of some random variables (r.v:s) taking values in $[0,1]$ is\ndetermined which is then used as the probability of success in the\nsequence of Bernoulli trials in which the original ``unconditional''\nr.v. with the negative binomial distribution is nothing else than\nthe ``conditionally'' geometrically distributed r.v. having the\nsense of the number of trials up to the first failure. This makes it\npossible to assume that the sequence of wet\/dry days is not\nindependent, but is conditionally independent and the random\nprobability of success is determined by some outer stochastic\nfactors. As such, we can consider the seasonality or the type of the\ncause of a rainy period.\n\nThe negative binomial model for the distribution of the duration of\nwet periods makes it possible to obtain asymptotic approximations\nfor important characteristics of precipitation such as the\ndistribution of the total precipitation volume per wet period and\nthe distribution of the maximum daily precipitation volume within a\nwet period. The first of these approximations was proposed\nin~\\cite{Korolev2017}, where an analog of the law of large numbers\nfor negative binomial random sums was presented stating that the\nlimit distribution for these sums is the gamma distribution.\n\nThe construction of the second approximation is the target of the\npresent paper.\n\nThe paper is organized as follows. Definitions and notation are\nintroduced in Section~2 which also contains some preliminary results\nproviding some theoretical grounds for the negative binomial model\nof the probability distribution of the duration of wet periods. 
Main\nresults are presented and proved in Section 3 where the asymptotic\napproximation is proposed for the distribution of the maximum daily\nprecipitation volume within a wet period. Some analytic properties\nof the obtained limit distribution are described. In particular, it\nis demonstrated that under certain conditions the limit distribution\nis mixed exponential and hence, is infinitely divisible. It is shown\nthat under the same conditions the limit distribution can be\nrepresented as a scale mixture of stable or Weibull or Pareto or\nfolded normal laws. The corresponding product representations for\nthe limit random variable can be used for its computer simulation.\nSeveral methods for the statistical estimation of the parameters of\nthis distribution are proposed in Section 4. Section 5 contains the\nresults of fitting the distribution proposed in Section 3 to real\ndata by the methods described in Section 4.\n\n\\section{Preliminaries}\n\nAlthough the main objects of our interest are the probability\ndistributions, for convenience and brevity in what follows we will\nexpound our results in terms of r.v:s with the corresponding\ndistributions assuming that all the r.v:s under consideration are\ndefined on one and the same probability space\n$(\\Omega,\\,\\mathfrak{F},\\,{\\sf P})$.\n\nIn the paper, conventional notation is used. The symbols $\\stackrel{d}{=}$ and\n$\\Longrightarrow$ denote the coincidence of distributions and\nconvergence in distribution, respectively. The integer and\nfractional parts of a number $z$ will be respectively denoted $[z]$\nand $\\{z\\}$.\n\nA r.v. having the gamma distribution with shape parameter $r>0$ and\nscale parameter $\\lambda>0$ will be denoted $G_{r,\\lambda}$,\n$$\n{\\sf P}(G_{r,\\lambda}<x)=\\int_{0}^{x}g(z;r,\\lambda)dz,\\ \\ \\ \\ g(x;r,\\lambda)=\\frac{\\lambda^{r}x^{r-1}}{\\Gamma(r)}e^{-\\lambda x}\\mathbf{1}(x\\ge0),\n$$\nwhere $\\Gamma(\\cdot)$ is the gamma function and $\\lambda>0$.\n\nIn this notation, obviously, $G_{1,1}$ is a r.v. with the standard\nexponential distribution: ${\\sf P}(G_{1,1}<x)=\\big[1-e^{-x}\\big]\\mathbf{1}(x\\ge0)$. The generalized\ngamma (GG) distribution is given by the density\n$$\ng^*(x;r,\\gamma,\\lambda)=\\frac{|\\gamma|\\lambda^{r}}{\\Gamma(r)}x^{\\gamma r-1}e^{-\\lambda x^{\\gamma}}\\mathbf{1}(x\\ge0),\n$$\nwith $\\gamma\\neq0$, $\\lambda>0$, $r>0$.\n\nThe properties of GG-distributions are described in \\cite{Stacy1962,\nKorolevZaks2013}. A r.v.
with the density $g^*(x;r,\\gamma,\\lambda)$\nwill be denoted $G^*_{r,\\gamma,\\lambda}$. It can be easily made sure\nthat\n\\begin{equation}\\label{GG}\nG^*_{r,\\gamma,\\lambda}\\stackrel{d}{=} G_{r,\\lambda}^{1\/\\gamma}.\n\\end{equation}\nFor a r.v. with the Weibull distribution, a particular case of\nGG-distributions corresponding to the density $g^*(x;1,\\gamma,1)$\nand the distribution function (d.f.)\n$\\big[1-e^{-x^{\\gamma}}\\big]{\\bf 1}(x\\ge0)$, we will use a special\nnotation $W_{\\gamma}$. Thus, $G_{1,1}\\stackrel{d}{=} W_1$. It is easy to see\nthat\n\\begin{equation}\\label{Weibull}\nW_1^{1\/\\gamma}\\stackrel{d}{=} W_{\\gamma}.\n\\end{equation}\nA r.v. with the standard normal d.f. $\\Phi(x)$ will be denoted $X$,\n$$\n{\\sf\nP}(X<x)=\\Phi(x)=\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^{x}e^{-z^2\/2}dz.\n$$\nA r.v. $S_{\\gamma,1}$ having the one-sided strictly stable\ndistribution with characteristic exponent $\\gamma\\in(0,1]$ is\ndefined by its Laplace transform ${\\sf E}e^{-sS_{\\gamma,1}}=e^{-s^{\\gamma}}$,\n$s\\ge0$. The r.v. $R_{\\gamma}$ is defined by its density\n\\begin{equation}\np(x;\\gamma)=\\frac{\\sin(\\pi\\gamma)}{\\pi}\\cdot\\frac{x^{\\gamma-1}}{x^{2\\gamma}+2x^{\\gamma}\\cos(\\pi\\gamma)+1},\\ \\ \\ x>0.\n\\label{Rdensity}\n\\end{equation}\n\nA r.v. $N_{r,p}$ is said to have the {\\it negative binomial\ndistribution} with parameters $r>0$ (``shape'') and $p\\in(0,1)$\n(``success probability''), if\n$$\n{\\sf P}(N_{r,p}=k)=\\frac{\\Gamma(r+k)}{k!\\Gamma(r)}\\cdot p^r(1-p)^k,\\\n\\ \\ \\ k=0,1,2,...\n$$\n\nA particular case of the negative binomial distribution\ncorresponding to the value $r=1$ is the {\\it geometric\ndistribution}. Let $p\\in(0,1)$ and let $N_{1,p}$ be the r.v. having\nthe geometric distribution with parameter $p\\,$:\n$$\n{\\sf P}(N_{1,p}=k)=p(1-p)^{k},\\ \\ \\ \\ k=0,1,2,...\n$$\nThis means that for any $m\\in\\mathbb{N}$\n$$\n{\\sf P}(N_{1,p}\\ge\nm)=\\sum\\nolimits_{k=m}^{\\infty}p(1-p)^{k}=(1-p)^{m}.\n$$\n\nLet $Y$ be a r.v. taking values in the interval $(0,1)$. Moreover,\nlet for all $p\\in(0,1)$ the r.v. $Y$ and the geometrically\ndistributed r.v. $N_{1,p}$ be independent. Let $V=N_{1,Y}$, that is,\n$V(\\omega)=N_{1,Y(\\omega)}(\\omega)$ for any $\\omega\\in\\Omega$.
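This two-stage construction can be simulated directly (a sketch; the Beta(2,2) mixing law for $Y$ is an arbitrary illustrative choice, not one suggested by the data):

```python
import random

# Two-stage simulation of V = N_{1,Y}: the "preliminary experiment" draws
# the random success probability Y, then V counts the trials before the
# first "failure", which occurs with probability Y on each trial.
def sample_mixed_geometric(rng, a=2.0, b=2.0):
    y = rng.betavariate(a, b)       # Y takes values in (0, 1)
    k = 0
    while rng.random() >= y:        # the trial survives with probability 1 - y
        k += 1
    return k

rng = random.Random(12345)
draws = [sample_mixed_geometric(rng) for _ in range(20000)]
frac_zero = sum(d == 0 for d in draws) / len(draws)
# P(V = 0) = E[Y] = 1/2 for the Beta(2, 2) mixing law
```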
The\ndistribution\n$$\n{\\sf P}(V\\ge m)=\\int_{0}^{1}(1-y)^{m}d{\\sf P}(Y<y),\\ \\ \\ m\\in\\mathbb{N},\n$$\nis called {\\it mixed geometric} (see \\cite{Korolev2016TVP}). It is\nwell known that for any $r>0$, $p\\in(0,1)$ and $k\\in\\{0\\}\\bigcup\\mathbb{N}$ we have\n\\begin{equation}\n\\frac{\\Gamma(r+k)}{k!\\Gamma(r)}\\cdot\np^r(1-p)^k=\\frac{1}{k!}\\int_{0}^{\\infty}e^{-z}z^kg(z;r,\\mu)dz,\\label{NBMixt}\n\\end{equation}\nwhere $\\mu=p\/(1-p)$.\n\nBased on representation \\eqref{NBMixt}, in \\cite{Korolev2017} it was\nproved that any negative binomial distribution with the shape\nparameter no greater than one is a mixed geometric distribution.\nNamely, the following statement was proved that gives an analytic\nexplanation of the validity of the negative binomial model for the\nduration of wet periods measured in days (see the Introduction).\n\n\\smallskip\n\n{\\sc Theorem 1} \\cite{Korolev2017}. {\\it The negative binomial\ndistribution with parameters $r\\in(0,1)$ and $p\\in(0,1)$ is a mixed\ngeometric distribution$:$ for any $k\\in\\{0\\}\\bigcup\\mathbb{N}$}\n$$\n\\frac{\\Gamma(r+k)}{k!\\Gamma(r)}\\cdot\np^r(1-p)^k=\\int_{\\mu}^{\\infty}\\Big(\\frac{z}{z+1}\\Big)\\Big(1-\\frac{z}{z+1}\\Big)^kp(z;r,\\mu)dz=\\int_{p}^{1}y(1-y)^kh(y;r,p)dy,\n$$\n{\\it where $\\mu=p\/(1-p)$ and the probability densities $p(z;r,\\mu)$\nand $h(y;r,p)$ have the forms\n$$\np(z;r,\\mu)=\\frac{\\mu^r}{\\Gamma(1-r)\\Gamma(r)}\\cdot\\frac{\\mathbf{1}(z\\ge\\mu)}{(z-\\mu)^rz},\n$$\n$$\nh(y;r,p)=\\frac{p^r}{\\Gamma(1-r)\\Gamma(r)}\\cdot\\frac{(1-y)^{r-1}\\mathbf{1}(p<y<1)}{(y-p)^{r}y}.\n$$\nIf $r\\in(0,1)$, $\\mu>0$, $p\\in(0,1)$, then the density\n$p(z;r,\\mu)$ corresponds to the r.v.\n\\begin{equation}\nZ_{r,\\mu}=\\frac{\\mu(G_{r,\\,1}+G_{1-r,\\,1})}{G_{r,\\,1}}\n\\label{Zdef}\n\\end{equation}\nand the density $h(y;r,p)$ corresponds to the r.v.\n$$\nY_{r,p}=\\frac{p(G_{r,\\,1}+G_{1-r,\\,1})}{G_{r,\\,1}+pG_{1-r,\\,1}}.\n$$\n}\n\n\\smallskip\n\nLet $P(t)$, $t\\ge0$, be the standard Poisson process (homogeneous\nPoisson process with unit intensity). Then\ndistribution~\\eqref{NBMixt} corresponds to the r.v.\n$N_{r,p}=P(G_{r,p\/(1-p)})$, where the r.v.
$G_{r,p\/(1-p)}$ is\nindependent of the process $P(t)$.\n\n\\section{The probability distribution of extremal precipitation}\n\nIn this section we will deduce the probability distribution of\nextremal daily precipitation within a wet period.\n\nLet $r>0$, $\\lambda>0$, $q\\in(0,1)$, $n\\in\\mathbb{N}$,\n$p_n=\\min\\{q,\\,\\lambda\/n\\}$. It is easy to make sure that\n\\begin{equation}\nn^{-1}G_{r,p_n\/(1-p_n)}\\Longrightarrow G_{r,\\lambda}\\label{2}\n\\end{equation}\nas $n\\to\\infty$.\n\n\\smallskip\n\n{\\sc Lemma 1.} {\\it Let $\\Lambda_1,\\Lambda_2,\\ldots$ be a sequence\nof positive r.v$:$s such that for any $n\\in\\mathbb{N}$ the r.v.\n$\\Lambda_n$ is independent of the Poisson process $P(t)$, $t\\ge0$.\nThe convergence\n$$\nn^{-1}P(\\Lambda_n)\\Longrightarrow \\Lambda\n$$\nas $n\\to\\infty$ to some nonnegative r.v. $\\Lambda$ takes place if\nand only if\n\\begin{equation}\nn^{-1}\\Lambda_n\\Longrightarrow \\Lambda \\label{3}\n\\end{equation}\nas $n\\to\\infty$.}\n\n\\smallskip\n\n{\\sc Proof}. This statement is a particular case of Lemma 2\nin~\\cite{Korolev1998} (also see Theorem 7.9.1 in\n\\cite{KorolevBeningShorgin2011}).\n\n\\smallskip\n\nConsider a sequence of independent identically distributed (i.i.d.)\nr.v:s $X_1,X_2,\\ldots$. Let $N_1,N_2,\\ldots$ be a sequence of\nnatural-valued r.v:s such that for each $n\\in\\mathbb{N}$ the r.v.\n$N_n$ is independent of the sequence $X_1,X_2,\\ldots$. Denote\n$M_n=\\max\\{X_1,\\ldots,X_{N_n}\\}$.\n\nLet $F(x)$ be a d.f., $a\\in\\mathbb{R}$. Denote\n$\\mathrm{rext}(F)=\\sup\\{x:\\,F(x)<1\\}$, $F^{-1}(a)=\\inf\\{x:\\,F(x)\\ge\na\\}$.\n\n\\smallskip\n\n{\\sc Lemma 2.} {\\it Let $\\Lambda_1,\\Lambda_2,\\ldots$ be a sequence\nof positive r.v$:$s such that for each $n\\in\\mathbb{N}$ the r.v.\n$\\Lambda_n$ is independent of the Poisson process $P(t)$, $t\\ge0$.\nLet $N_n=P(\\Lambda_n)$. Assume that there exists a nonnegative r.v.\n$\\Lambda$ such that convergence~{\\rm \\eqref{3}} takes place. 
Let\n$X_1,X_2,\\ldots$ be i.i.d. r.v$:$s with a common d.f. $F(x)$. Assume\nalso that $\\mathrm{rext}(F)=\\infty$ and there exists a number\n$\\gamma>0$ such that for each $x>0$\n\\begin{equation}\n\\lim_{y\\to\\infty}\\frac{1-F(xy)}{1-F(y)}=x^{-\\gamma}.\\label{4}\n\\end{equation}\nThen}\n$$\n\\lim_{n\\to\\infty}\\sup_{x\\ge 0}\\bigg|{\\sf\nP}\\bigg(\\frac{M_n}{F^{-1}(1-\\frac{1}{n})}<x\\bigg)-{\\sf E}\\exp\\big\\{-\\Lambda x^{-\\gamma}\\big\\}\\bigg|=0.\n$$\n\n\\smallskip\n\n{\\sc Theorem 2.} {\\it Let $\\lambda>0$, $q\\in(0,1)$\nand let $N_{r,p_n}$ be a r.v. with the negative binomial\ndistribution with parameters $r>0$ and $p_n=\\min\\{q,\\lambda\/n\\}$.\nLet $X_1,X_2,\\ldots$ be i.i.d. r.v$:$s with a common d.f. $F(x)$.\nAssume that $\\mathrm{rext}(F)=\\infty$ and there exists a number\n$\\gamma>0$ such that relation~{\\rm \\eqref{4}} holds for any $x>0$.\nThen\n$$\n\\lim_{n\\to\\infty}\\sup_{x\\ge 0}\\bigg|{\\sf\nP}\\bigg(\\frac{\\max\\{X_1,\\ldots,X_{N_{r,p_n}}\\}}{F^{-1}(1-\\frac{1}{n})}<x\\bigg)-F(x;r,\\gamma,\\lambda)\\bigg|=0,\n$$\n{\\it where}\n$$\nF(x;r,\\gamma,\\lambda)=\\Big(\\frac{\\lambda x^{\\gamma}}{1+\\lambda x^{\\gamma}}\\Big)^{r}\\mathbf{1}(x\\ge0).\n$$\n}\n\n\\smallskip\n\n{\\sc Proof}. It suffices to apply Lemma 2 with\n$\\Lambda_n=G_{r,p_n\/(1-p_n)}$ and $\\Lambda=G_{r,\\lambda}$ (see\n\\eqref{2}) and to note that ${\\sf E}\\exp\\{-G_{r,\\lambda}x^{-\\gamma}\\}=F(x;r,\\gamma,\\lambda)$.\n\n\\smallskip\n\nSince the d.f. $e^{-x^{-\\gamma}}$, $x>0$,\ncorresponds to the r.v. $W_{\\gamma}^{-1}$, it is easy to make sure\nthat the d.f. $F(x; r,\\lambda,\\gamma)$ corresponds to the r.v.\n$M_{r,\\gamma,\\lambda}\\equiv\nG_{r,\\lambda}^{1\/\\gamma}W_{\\gamma}^{-1}$, where the multipliers on\nthe right-hand side are independent. From~\\eqref{GG}\nand~\\eqref{Weibull} it follows that\n\\begin{equation}\\label{M}\nM_{r,\\gamma,\\lambda}\\stackrel{d}{=}\\Big(\\frac{G_{r,\\lambda}}{W_1}\\Big)^{1\/\\gamma}\n\\stackrel{d}{=}\\frac{G^*_{r,\\gamma,\\lambda}}{W_{\\gamma}}\n\\end{equation}\nwhere in each term the multipliers are independent. Consider the\nr.v. $G_{r,\\lambda}\/W_1$ in \\eqref{M} in more detail. We have\n$$\n\\frac{G_{r,\\lambda}}{W_1}\\stackrel{d}{=}\\frac{G_{r,\\lambda}}{G_{1,1}}\\stackrel{d}{=}\\frac{G_{r,1}}{\\lambda\nG_{1,1}}\\stackrel{d}{=}\\frac{Q_{r,1}}{\\lambda r},\n$$\nwhere $Q_{r,1}$ is the r.v. having the Snedecor--Fisher distribution\nwith parameters $r,\\,1$ (`degrees of freedom') defined by the\nLebesgue density\n$$\nf_{r,1}(x)=\\frac{r^{r+1}x^{r-1}}{(1+rx)^{r+1}},\\ \\ \\ x\\ge0,\n$$\n(see, e.
g., \\cite{Bolshev}, Section 27).\n\nSo,\n\\begin{equation}\\label{MQ}\nM_{r,\\gamma,\\lambda}\\stackrel{d}{=}\\Big(\\frac{Q_{r,1}}{\\lambda\nr}\\Big)^{1\/\\gamma},\n\\end{equation}\nand the statement of theorem 2 can be re-formulated as\n\\begin{equation}\n\\label{Mdef}\n\\frac{\\max\\{X_1,\\ldots,X_{N_{r,p_n}}\\}}{F^{-1}(1-\\frac{1}{n})}\\Longrightarrow\nM_{r,\\gamma,\\lambda}\\equiv\n\\frac{G_{r,\\lambda}^{1\/\\gamma}}{W_{\\gamma}}\\stackrel{d}{=}\n\\Big(\\frac{Q_{r,1}}{\\lambda r}\\Big)^{1\/\\gamma}\\ \\ \\ \\ (n\\to\\infty).\n\\end{equation}\n\nThe density of the limit distribution $F(x;r,\\gamma,\\lambda)$ of the\nextreme daily precipitation within a wet period has the form\n\\begin{equation}\np(x;r,\\gamma,\\lambda)=\\frac{r\\gamma\\lambda^rx^{\\gamma\nr-1}}{(1+\\lambda x^{\\gamma})^{r+1}}=\\frac{\\gamma\nr\\lambda^r}{x^{1+\\gamma}(\\lambda+x^{-\\gamma})^{r+1}},\\ \\ \\\nx>0.\\label{ExtrPDF}\n\\end{equation}\n\nIt is easy to see that $p(x;r,\\gamma,\\lambda)=O(x^{-1-\\gamma})$ as\n$x\\to\\infty$. Therefore ${\\sf\nE}M_{r,\\gamma,\\lambda}^{\\delta}<\\infty$ only if $\\delta<\\gamma$.\nMoreover, from~\\eqref{Mdef} it is possible to deduce explicit\nexpressions for the moments of the r.v. $M_{r,\\gamma,\\lambda}$.\n\n\\smallskip\n\n{\\sc Theorem 3.} {\\it Let $0<\\delta<\\gamma<\\infty$. Then}\n$$\n{\\sf\nE}M_{r,\\gamma,\\lambda}^{\\delta}=\n\\frac{\\Gamma\\big(r+\\frac{\\delta}{\\gamma}\\big)\\Gamma\\big(1-\\frac{\\delta}{\\gamma}\\big)}{\\lambda^{\\delta\/\\gamma}\\Gamma(r)}.\n$$\n\n\\smallskip\n\n{\\sc Proof}. 
From \\eqref{Mdef} it follows that\n\\begin{equation}\n\\label{Mmoments} {\\sf E}M_{r,\\gamma,\\lambda}^{\\delta}={\\sf\nE}G_{r,\\lambda}^{\\delta\/\\gamma}\\cdot{\\sf E}W_1^{-\\delta\/\\gamma}.\n\\end{equation}\nIt is easy to verify that\n\\begin{equation}\n\\label{moments} {\\sf\nE}G_{r,\\lambda}^{\\delta\/\\gamma}=\\frac{\\Gamma\\big(r+\\frac{\\delta}{\\gamma}\\big)}{\\lambda^{\\delta\/\\gamma}\\Gamma(r)},\\\n\\ \\ {\\sf\nE}W_1^{-\\delta\/\\gamma}=\\Gamma\\big(1-{\\textstyle\\frac{\\delta}{\\gamma}}\\big).\n\\end{equation}\nHence follows the desired result.\n\n\\smallskip\n\nTo analyze the properties of the limit distribution in theorem 2\nmore thoroughly we will require some additional auxiliary results.\n\n\\smallskip\n\n{\\sc Lemma 3} \\cite{KorolevWeibull2016}. {\\it Let $\\gamma\\in(0,1]$.\nThen\n$$\nW_{\\gamma}\\stackrel{d}{=} \\frac{W_1}{S_{\\gamma,1}}\n$$\nwith the r.v:s on the right-hand side being independent.}\n\n\\smallskip\n\n{\\sc Lemma 4} \\cite{Korolev2017}. {\\it Let $r\\in(0,1]$,\n$\\gamma\\in(0,1]$, $\\lambda>0$. Then\n$$\nG_{r,\\lambda}^{1\/\\gamma}\\stackrel{d}{=}\nG^*_{r,\\gamma,\\lambda}\\stackrel{d}{=}\\frac{W_{\\gamma}}{Z_{r,\\lambda}^{1\/\\gamma}}\\stackrel{d}{=}\n\\frac{W_1}{S_{\\gamma,1}Z_{r,\\lambda}^{1\/\\gamma}},\n$$\nwhere the r.v. $Z_{r,\\lambda}$ was defined in \\eqref{Zdef} and all\nthe involved r.v$:$s are independent.}\n\n\\smallskip\n\n{\\sc Theorem 4}. {\\it Let $r\\in(0,1]$, $\\gamma\\in(0,1]$,\n$\\lambda>0$. 
Then the following product representations are valid$:$\n\\begin{equation}\\label{T3_1}\nM_{r,\\gamma,\\lambda}\\stackrel{d}{=}\n\\frac{G_{r,\\lambda}^{1\/\\gamma}S_{\\gamma,1}}{W_1},\n\\end{equation}\n\\begin{equation}\\label{T3_2}\nM_{r,\\gamma,\\lambda}\\stackrel{d}{=}\n\\frac{W_{\\gamma}}{W'_{\\gamma}}\\cdot\\frac{1}{Z_{r,\\lambda}^{1\/\\gamma}}\\stackrel{d}{=}\nW_1\\cdot\\frac{R_{\\gamma}}{W'_1Z_{r,\\lambda}^{1\/\\gamma}}\\stackrel{d}{=}\n\\frac{\\Pi R_{\\gamma}}{Z_{r,\\lambda}^{1\/\\gamma}}\\stackrel{d}{=}\n\\frac{|X|\\sqrt{2W_1}R_{\\gamma}}{W'_1Z_{r,\\lambda}^{1\/\\gamma}},\n\\end{equation}\nwhere $W_{\\gamma}\\stackrel{d}{=} W'_{\\gamma}$, $W_1\\stackrel{d}{=} W'_1$, the r.v.\n$R_{\\gamma}$ has the density {\\rm\\eqref{Rdensity}}, the r.v. $\\Pi$\nhas the Pareto distribution$:$ ${\\sf P}(\\Pi>x)=(x+1)^{-1}$, $x\\ge0$,\nand in each term the involved r.v$:$s are independent.}\n\n\\smallskip\n\n{\\sc Proof}. Relation \\eqref{T3_1} follows from \\eqref{Mdef} and\nLemma 3, relation \\eqref{T3_2} follows from \\eqref{Mdef} and Lemma 4\nwith the account of the representation $W_1\\stackrel{d}{=} |X|\\sqrt{2W_1}$, the\nproof of which can be found in, say, \\cite{KorolevWeibull2016}.\n\n\\smallskip\n\nWith the account of the relation $R_{\\gamma}\\stackrel{d}{=} R_{\\gamma}^{-1}$,\nfrom~\\eqref{T3_2} we obtain the following statement.\n\n\\smallskip\n\n{\\sc Corollary 1.} {\\it Let $r\\in(0,1]$, $\\gamma\\in(0,1]$,\n$\\lambda>0$. Then the d.f. $F(x;r,\\gamma,\\lambda)$ is mixed\nexponential$:$\n$$\n1-F(x;r,\\gamma,\\lambda)=\\int_{0}^{\\infty}e^{-ux}dA(u),\\ \\ \\ x\\ge0,\n$$\nwhere\n$$\nA(u)={\\sf P}\\big(W_1R_{\\gamma}Z_{r,\\lambda}^{1\/\\gamma}<u\\big),\\ \\ \\ u\\ge0.\n$$\n}\n\n\\smallskip\n\n{\\sc Corollary 2.} {\\it Let $r\\in(0,1]$, $\\gamma\\in(0,1]$,\n$\\lambda>0$. Then the d.f.
$F(x;r,\\gamma,\\lambda)$ is infinitely\ndivisible.}\n\n\\smallskip\n\n{\\sc Proof.} This statement immediately follows from Corollary 1 and\nthe result of Goldie \\cite{Goldie1967} stating that the product of\ntwo independent non-negative random variables is infinitely\ndivisible, if one of the two is exponentially distributed.\n\n\\smallskip\n\nTheorem 4 states that the limit distribution in Theorem 2 can be\nrepresented as a scale mixture of exponential or stable or Weibull\nor Pareto or folded normal laws. The corresponding product\nrepresentations for the r.v. $M_{r,\\gamma,\\lambda}$ can be used for\nits computer simulation.\n\nIn practice, the asymptotic approximation $F(x; r,\\lambda,\\gamma)$\nfor the distribution of the extreme daily precipitation within a wet\nperiod proposed by Theorem~2 is adequate if the ``success\nprobability'' is small enough, that is, if on average the wet\nperiods are long enough.\n\n\\section{Estimation of the parameters $r$, $\\lambda$ and $\\gamma$}\n\nFrom~\\eqref{ExtrPDF} it can be seen that the realization of the\nmaximum likelihood method for the estimation of the parameters $r$,\n$\\lambda$ and $\\gamma$ inevitably requires the numerical\nsolution of a system of transcendental equations by iterative\nprocedures, without any guarantee that the resulting maximum is\nglobal. The closeness of the initial approximation to the true\nmaximum likelihood point in the three-dimensional parameter set\nmay give hope that the extreme point found by the\nnumerical algorithm is global.\n\nFor rough estimation of the parameters, the following considerably\nsimpler method can be used. The resulting rough estimates can be\nused as a starting point for the `full' maximum likelihood algorithm\nmentioned above in order to ensure the closeness of the initial\napproximation to the true solution. The rough method is based on the\nfact that the quantiles of the d.f. $F(x; r,\\lambda,\\gamma)$ can be\nwritten out explicitly.
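Integrating the density \eqref{ExtrPDF} gives the closed-form d.f. $F(x;r,\gamma,\lambda)=\big(\lambda x^{\gamma}/(1+\lambda x^{\gamma})\big)^{r}$, so the quantile inversion underlying the rough method can be checked directly (a sketch; the parameter values are illustrative only):

```python
# D.f. obtained by integrating the density (ExtrPDF), and its quantile
# function, found by solving F(x) = eps in closed form.
def cdf(x, r, g, lam):
    t = lam * x ** g
    return (t / (1.0 + t)) ** r

def quantile(eps, r, g, lam):
    s = eps ** (1.0 / r)
    return (s / (lam * (1.0 - s))) ** (1.0 / g)

# round-trip check: F(x(eps)) == eps for illustrative parameter values
r, g, lam = 0.847, 1.5, 2.0
eps = 0.75
x = quantile(eps, r, g, lam)
```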
Namely, the quantile\n$x(\\epsilon;r,\\lambda,\\gamma)$ of the d.f. $F(x; r,\\lambda,\\gamma)$\nof order $\\epsilon\\in(0,1)$, that is, the solution of the equation\n$F(x; r,\\lambda,\\gamma)=\\epsilon$ with respect to $x$, obviously has\nthe form\n$$\nx(\\epsilon;r,\\lambda,\\gamma)=\\bigg(\\frac{\\epsilon^{1\/r}}{\\lambda-\\lambda\\epsilon^{1\/r}}\\bigg)^{1\/\\gamma}.\n$$\nAssume that we have at our disposal observations $\\{X_{i,j}\\}$,\n$i=1,\\ldots,m$, $j=1,\\ldots,m_i$, where $i$ is the number of a wet\nperiod (the number of a sequence of rainy days), $j$ is the number\nof a day in the wet sequence, $m_i$ is the length of the $i$th wet\nsequence (the number of rainy days in the $i$th wet period), $m$ is\nthe total number of wet sequences, and $X_{i,j}$ is the precipitation\nvolume on the $j$th day of the $i$th wet sequence. Construct the\nsample $X^*_1,\\ldots,X^*_m$ as\n\\begin{equation}\nX^*_k=\\max\\{X_{k,1},\\ldots,X_{k,m_k}\\},\\ \\ \\ k=1,\\ldots,m.\\label{VarSample}\n\\end{equation}\nLet $X^*_{(1)},\\ldots,X^*_{(m)}$ be the order statistics constructed\nfrom the sample $X^*_1,\\ldots,X^*_m$. Since we have three unknown\nparameters $r$, $\\lambda$ and $\\gamma$, fix three numbers\n$0<\\epsilon_1<\\epsilon_2<\\epsilon_3<1$.\n\nThe implicit scheme allows us to take a larger time step $\\Delta t > t_{ab}, t_{sc}$ than that of the explicit scheme.\n\nThis paper is organized as follows: In \\S~\\ref{Method}, we introduce\nthe basic equations for RRHD, and the numerical scheme is shown in\n\\S~\\ref{Numerical}. Numerical results of one- and two-dimensional tests\nare shown in \\S~\\ref{test}. Discussion and summary appear in\n\\S~\\ref{discussion} and \\S~\\ref{summary}.\n\n\\section{Basic Equations}\\label{Method}\nIn the following, we take the light speed as unity. 
\nThe special relativistic radiation magnetohydrodynamic equations of\nideal gas consist of the conservation of mass,\n\\begin{equation}\n (\\rho u^\\nu)_{,\\nu} = 0,\\label{geq:mcons}\n\\end{equation}\nthe conservation of energy-momentum,\n\\begin{equation}\n\\left(T^{\\mu\\nu}_\\mathrm{HD} \n+T^{\\mu\\nu}_\\mathrm{rad}\\right)_{,\\nu} =0,\\label{geq:Tcons}\n\\end{equation}\nand the equations of radiation energy-momentum,\n\\begin{equation}\n T^{\\mu\\nu}_{\\mathrm{rad},\\nu} = -G^{\\mu},\\label{geq:Tradcons}\n\\end{equation}\nwhere $\\rho$ is the proper mass density, $u^\\mu = \\gamma(1 , v^i)$\nis the fluid four velocity, and $T^{\\mu\\nu}_\\mathrm{HD}$ and\n$T^{\\mu\\nu}_\\mathrm{rad}$ are the energy momentum tensors of the fluid\nand the radiation. Here $\\gamma = \\sqrt{1+u_i u^i}$ is the bulk Lorentz\nfactor and $v^i$ is the fluid three velocity. Greek indices range over $0, 1, 2, 3$ and Latin indices over\n$1, 2, 3$, where $0$ denotes the time component and $1, 2, 3$ the space\ncomponents. \n\nThe energy momentum tensor of the fluid is written as \n\\begin{equation}\n T^{\\mu\\nu}_\\mathrm{HD} = \\rho \\xi u^\\mu u^\\nu + p_g \\eta^{\\mu \\nu},\\label{geq:THD}\n\\end{equation}\nwhere $p_g$ is the gas pressure and\n$\\eta^{\\mu\\nu}=\\mathrm{diag}(-1,1,1,1)$ is the Minkowski metric. The\nspecific enthalpy of the relativistic ideal gas $\\xi$ is given by\n\\begin{equation}\n \\xi = 1 + \\frac{\\Gamma}{\\Gamma-1}\\frac{p_g}{\\rho},\\label{geq:xi}\n\\end{equation}\nwhere $\\Gamma$ is the specific heat ratio. \n\nThe energy momentum tensor of the radiation is written as\n\\begin{equation}\n T^{\\mu\\nu}_\\mathrm{rad}\n =\\left(\\begin{array}{cc}\n E_r & F_r^j \\\\\n\t F_r^{i} & P^{ij}_r\\label{geq:Trad}\n\t \\end{array}\\right),\n\\end{equation}\nwhere $E_r$, $F_r^i$, and $P^{ij}_r$ are the radiation energy\ndensity, flux, and stress measured in the laboratory frame.\n\n\nThe radiation exchanges its energy and momentum with the fluid through\nabsorption\/emission and scattering processes. 
The radiation four force\n$G^\\mu$ is explicitly given by\n\\begin{eqnarray}\n G^0 &=& -\\rho \\kappa \n \\left(4\\pi \\mathrm{B} \\gamma - \\gamma E_r + u_i F_r^i\\right)\n \\nonumber \\\\\n &-& \\rho \\sigma_s\\left[\\gamma u^2 E_r + \\gamma u_i u_j\n\t P^{ij}_r-\\left(\\gamma^2+u^2\\right)\n\t u_i F_r^i\\right],\\label{geq:G0}\n\\end{eqnarray}\nand \n\\begin{eqnarray}\n G^{i} &=&- 4\\pi \\rho \\kappa \\mathrm{B} u^i \n + \\rho (\\kappa + \\sigma_s)(\\gamma F_r^i-u_jP^{ij}_r) \\nonumber \\\\\n &-&\\rho \\sigma_s u^i\n\\left(\\gamma^2 E_r - 2\\gamma u_j F_r^j + u_j u_k P_r^{jk}\\right),\\label{geq:Gi}\n\\end{eqnarray}\nwhere $u = \\sqrt{u_i u^i}$. \nHere $\\kappa$ and $\\sigma_s$ are the absorption and scattering\ncoefficients measured in the comoving frame. Thus, equation\n(\\ref{geq:Tradcons}) is a mixed-frame radiation energy-momentum\nequation, in which the radiation field is defined in the observer frame,\nwhile the absorption and scattering coefficients are defined in the\ncomoving frame. \n\nIn equations (\\ref{geq:G0}) and (\\ref{geq:Gi}), we assume the\nKirchhoff-Planck relation, so that the emissivity\n$\\eta$ is replaced by the blackbody intensity $\\mathrm{B}$ as\n$\\eta=\\kappa \\mathrm{B}$ in the comoving frame.\n\nThe blackbody intensity $\\mathrm{B}$ is a function of the gas temperature\n$T$ as $\\mathrm{B}=a_R T^4\/(4\\pi)$, where $a_R$ is related to the\nStefan-Boltzmann constant by\n$\\sigma_\\mathrm{SB}=a_R\/4$. The plasma temperature is determined by\n\\begin{equation}\np_g = \\frac{\\rho k_B T}{\\mu m_p},\\label{geq:eos}\n\\end{equation}\nwhere $k_B$ and $m_p$ are the Boltzmann constant and the proton mass,\nand $\\mu$ is the mean molecular weight.\n\n\nSince we consider the moment equations of the radiation field, we need\nanother relation between $E_r$, $F_r^i$ and $P_r^{ij}$ to close\nthe system, i.e., the closure relation. 
Here we leave it as a general\nform described by\n\\begin{equation}\n P'^{ij}_r = P'^{ij}(E_r', F_r'^k),\\label{geq:closure}\n\\end{equation}\nwhere a prime denotes quantities defined in the comoving frame. \nThe explicit form of the closure relation is introduced in \\S~\\ref{closure}\nafter the formulation.\n\n\n\nIn the following, we deal with the one-dimensional conservation law in\nthe $x$-direction:\n\\begin{equation}\n \\frac{\\partial \\mathcal{U}}{\\partial t} + \\frac{\\partial\n \\mathcal{F}}{\\partial x} = \\mathcal{S},\\label{geq:1dform}\n\\end{equation}\nwhich follows from equations (\\ref{geq:mcons}), (\\ref{geq:Tcons}), and \n(\\ref{geq:Tradcons}).\nPrimitive variables $\\mathcal{P}$ are defined as\n\\begin{equation}\n \\mathcal{P} = \n \\left(\\begin{array}{c}\n \\rho \\\\\n u^k \\\\\n p_g \\\\\n E_r \\\\\n F_r^k\\\\\n\\end{array}\\right).\n\\end{equation}\nConserved variables\n$\\mathcal{U}$, fluxes $\\mathcal{F}$, and source terms $\\mathcal{S}$ take\nthe forms\n\\begin{equation}\n \\mathcal{U} \n = \\left(\\begin{array}{c}\n D \\\\\n\t m_t^k\\\\\n E_t\\\\\n E_r\\\\\n F_r^k\\\\\n \\end{array}\\right),\n\\end{equation}\n\\begin{equation}\n \\mathcal{F}\n = \\left(\\begin{array}{c} \n D v^x\\\\\n \\rho \\xi u^x u^k + p_g\\delta^{xk} + P_r^{xk}\\\\\n m_t^x \\\\\n F_r^x \\\\\n P_r^{xk}\n \\end{array}\\right),\\label{geq:Flux}\n\\end{equation}\nand \n\\begin{equation}\n \\mathcal{S} \n \\equiv \\left(\\begin{array}{c}\n 0 \\\\\n 0\\\\\n 0\\\\\n S_E\\\\\n S_F^k\\\\\n \\end{array}\\right) \n=\n \\left(\\begin{array}{c}\n 0 \\\\\n 0\\\\\n 0\\\\\n -G^0\\\\\n -G^k\\\\\n \\end{array}\\right),\\label{def:source}\n\\end{equation}\nwhere $D=\\rho \\gamma$ is the mass density measured in the laboratory\nframe.\nThe total energy density $E_t$ and momentum density $m_t^k$ are given by\n\\begin{equation}\n E_t = E_\\mathrm{HD} + E_r = \\rho \\xi \\gamma^2 - p_g + E_r,\\label{geq:Et}\n\\end{equation}\nand \n\\begin{equation}\n m^k_t = m^k_\\mathrm{HD} + F_r^k = \\rho \\xi 
\\gamma u^k + F_r^k,\\label{geq:mt}\n\\end{equation}\nwhere $E_\\mathrm{HD}$ and $m^k_\\mathrm{HD}$ denote the energy and momentum\ndensity of the fluid, respectively. \n\nIt should be noted that $E_r$ and $F_r^k$ are not only primitive\nvariables but also conserved variables. Thus, it is straightforward\nfor the radiation fields to recover the primitive variables from the\nconserved variables, as we will see in \\S~\\ref{sc:source}. \n\n\n\\section{Numerical Scheme for RRHD}\\label{Numerical}\nIn this section, we propose a new numerical scheme for solving the RRHD\nequations. \nA conservative discretization of the one-dimensional equation\n(\\ref{geq:1dform}) over a time step $\\Delta t$ from $t=n \\Delta t$ to $t\n= (n+1)\\Delta t$ with grid spacing $\\Delta x$ is written as\n\\begin{equation}\n \\mathcal{U}^{n+1}_i= \\mathcal{U}^n_i - \\frac{\\Delta t}{\\Delta x}\n \\left(f_{i+\\frac{1}{2}} - f_{i-\\frac{1}{2}}\\right) +\n \\mathcal{S}_i\\Delta t,\n \\label{eq:1ddif}\n\\end{equation}\nwhere $\\mathcal{U}^n_i$ is the conservative variable at $x=x_i$ and\n$t=n\\Delta t$, and $f_{i\\pm \\frac{1}{2}}$ is the upwind numerical flux at the cell\nsurfaces $x=x_{i\\pm \\frac{1}{2}}$. In the numerical procedure, we divide\nequation (\\ref{eq:1ddif}) into two parts, the {\\it hyperbolic term}\n\\begin{equation}\n \\mathcal{U}^{*}_i = \\mathcal{U}^{n}_i \n - \\frac{\\Delta t}{\\Delta x}\n \\left(f_{i+\\frac{1}{2}} - f_{i-\\frac{1}{2}}\\right),\\label{geq:fluxterm}\n\\end{equation}\nand the {\\it source term}\n\\begin{equation}\n \\mathcal{U}^{n+1}_i = \\mathcal{U}^{*}_i + \\mathcal{S}_i \\Delta t, \\label{geq:sourceterm}\n\\end{equation}\nwhere $\\mathcal{U}^{*}$ is the conservative variable at the auxiliary step. \nIn the following subsections, we describe how to solve these two\nequations. 
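In code, the split update of equations (\ref{geq:fluxterm}) and (\ref{geq:sourceterm}) takes the following shape. This is a minimal sketch for a generic one-dimensional conservation law; the first-order upwind flux and the (zero) source term in the example are placeholders for illustration, not the RRHD expressions.

```python
import numpy as np

def split_step(U, dt, dx, numerical_flux, source):
    """One operator-split step of dU/dt + dF/dx = S.

    Hyperbolic term: U* = U^n - (dt/dx)(f_{i+1/2} - f_{i-1/2}),
    then source term: U^{n+1} = U* + S(U*) dt.
    numerical_flux(U) must return the N+1 interface fluxes."""
    f = numerical_flux(U)                    # interface fluxes, shape (N+1,)
    U_star = U - dt / dx * (f[1:] - f[:-1])  # hyperbolic update
    return U_star + source(U_star) * dt      # source update

# Toy example: linear advection (F = U, speed +1) with a first-order
# upwind flux and no source term.
N, dx, dt = 100, 1.0, 0.5
U = np.zeros(N)
U[40:60] = 1.0
upwind = lambda U: np.concatenate(([U[0]], U))
U1 = split_step(U, dt, dx, upwind, lambda U: 0.0 * U)
```

The same two-stage structure carries over to the RRHD system; only the flux and source evaluations change.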
\n\n\\subsection{hyperbolic term}\nFor the hyperbolic term, equation (\\ref{geq:fluxterm}) is solved as \nan initial value problem with the initial condition\n\\begin{equation}\n \\mathcal{U}(t^n, x) = \\left\\{\n\\begin{array}{lll}\n \\mathcal{U}_L & \\mathrm{for} & x<x_{i+\\frac{1}{2}},\\\\\n \\mathcal{U}_R & \\mathrm{for} & x>x_{i+\\frac{1}{2}},\n\\end{array}\n\\right.\n\\end{equation}\nwhere the subscripts $L$ and $R$ denote the left and right constant\nstates on the cell interface. \nWhen we take $\\mathcal{U}_L = \\mathcal{U}_i$ and $\\mathcal{U}_R =\n\\mathcal{U}_{i+1}$, the numerical solution is first-order\naccurate in space. \n\nIn order to improve the spatial accuracy, the primitive variables\nat the zone surface $\\mathcal{P}_{i\\pm 1\/2}$ are calculated by\ninterpolating them from the cell center to the cell surface (the so-called\nreconstruction step). The primitive\nvariables at the left and right states are written as\n\\begin{eqnarray}\n \\mathcal{P}_{i\\pm \\frac{1}{2},S} = \\mathcal{P}_i \n \\pm \\frac{\\delta_x \\mathcal{P}}{2},\n\\end{eqnarray}\nwhere we take $S=L (R)$ with the plus (minus) sign. \nThe slope $\\delta_x \\mathcal{P}$ should be determined so as to preserve\nmonotonicity. 
In our numerical code, we\nadopt the harmonic mean proposed by \n\\cite{1977JCoPh..23..263V}, which is second-order accurate in space,\n\\begin{equation}\n \\delta_x \\mathcal{P}= \n\\frac{2 \\max(0,\\Delta \\mathcal{P}_+ \\Delta \\mathcal{P}_-)}\n{\\Delta \\mathcal{P}_+ + \\Delta \\mathcal{P}_-},\n\\end{equation}\nwhere\n\\begin{equation}\n \\Delta \\mathcal{P}_\\pm = \\pm (\\mathcal{P}_{i\\pm1} - \\mathcal{P}_i).\n\\end{equation}\nHere we adopt a second-order accurate scheme, but other reconstruction\nschemes such as the Piecewise Constant Method \\citep{1999MNRAS.308.1069K}\nor the Piecewise Parabolic Method \\citep{1984JCoPh..54..174C,\n1996JCoPh.123....1M} are also applicable to both the hydrodynamic part\nand the radiation part.\n\n\n\nBy computing the primitive variables at the cell surface, the flux\n$\\mathcal{F}_{i \\pm 1\/2,S}$ is calculated directly from\n$\\mathcal{P}_{i\\pm\\frac{1}{2},S}$ \n(but see \\S~\\ref{closure} for the procedure of calculating $P_r^{xk}$,\nwhich appears in equation \\ref{geq:Flux}, from $P'^{ij}_r$).\nThen, numerical fluxes are calculated using an approximate Riemann\nsolver. \nWe adopt the HLL method \\citep{1983siamRev...25..35..61}, which\ncan capture the propagation of the fastest waves. The numerical flux is then\ncomputed as\n\\begin{eqnarray}\nf_{i+\\frac{1}{2}}=\n \\frac{\\lambda_+\\mathcal{F}_{i+\\frac{1}{2},L} \n- \\lambda_-\\mathcal{F}_{i+\\frac{1}{2},R} \n+\\lambda_+\\lambda_-\n (\\mathcal{U}_{i+\\frac{1}{2},R} - \\mathcal{U}_{i+\\frac{1}{2},L})}\n{\\lambda_+ - \\lambda_-}.\\nonumber \\\\\n\\label{eq:hllf}\n\\end{eqnarray}\nHere $\\lambda_-$ and $\\lambda_+$ are\n\\begin{equation}\n \\lambda_-= \\mathrm{min}(0, \\lambda_{L-}, \\lambda_{R-}),\n\\end{equation}\nand\n\\begin{equation}\n \\lambda_+= \\mathrm{max}(0, \\lambda_{L+}, \\lambda_{R+}),\n\\end{equation}\nwhere $\\lambda_{S+}$ and $\\lambda_{S-}$ are the right- and left-going\nwave speeds of the fastest mode. 
For example, they correspond to the\nsound speeds in relativistic hydrodynamics. When the radiation\nfield is included, the light mode determines the fastest wave speed,\nwhich depends on the closure relation. The fastest wave speed is\ndiscussed after specifying the closure relation in \\S~\\ref{closure}.\nWe note that although we adopt the HLL scheme for simplicity, more\naccurate approximate Riemann solvers such as HLLC\n\\citep{2005MNRAS.364..126M, 2006MNRAS.368.1040M, 2007JCoPh.223..643H}\nand HLLD \\citep{2009MNRAS.393.1141M} can be\nimplemented in relativistic radiation (magneto)hydrodynamics\n\\citep{2009camcs4.135}.\n\nUsing the numerical flux $f_{i\\pm \\frac{1}{2}}$, the conserved variables\n$\\mathcal{U}^*$ are obtained from equation (\\ref{geq:fluxterm}).\nIt should be noted that the total energy density $E_t^{n+1}$ and\nmomentum density $m_t^{n+1,k}$ at $t=(n+1)\\Delta t$ are already\ndetermined although we have integrated only the hyperbolic term. \nThus, when the radiation energy density and flux are obtained by solving\nequation (\\ref{geq:sourceterm}), the fluid energy density and momentum\ndensity are immediately computed (see equations \\ref{eq:Emhd}-\\ref{eq:Fmhd}). \n\n\n\\subsection{Source term}\\label{sc:source}\nNext, we show how to solve equation (\\ref{geq:sourceterm}). \nThe source term appears in equation (\\ref{geq:Tradcons}),\nwhich treats the interaction between the radiation and the matter.\nAs discussed in \\S~\\ref{intro}, the heating\/cooling and scattering timescales\ncan be shorter than the dynamical timescale in optically thick media. \nThis prevents us from following the long time evolution ($\\sim t_{dyn}$)\nwhen the equation is integrated explicitly. 
\nTo overcome this difficulty, we want to construct an implicit scheme as\n\\begin{eqnarray} \n E_r^{n+1} = E_r^{*} + \\Delta t \\mathcal{S}_E(E_r^{n+1}, \n {F}_r^{n+1,j}, P_r^{n+1,jk}, \\mathcal{P}_h^{n+1}),\\nonumber \\\\\n{\\label{eq:gimp1-1}}\\\\\n F_r^{n+1,i} = F_r^{*,i} + \\Delta t \\mathcal{S}_F^i(E_r^{n+1}, \n {F}_r^{n+1,j}, P_r^{n+1,jk},\n \\mathcal{P}_h^{n+1}),{\\label{eq:gimp1-2}}\\nonumber \\\\\n\\end{eqnarray}\nwhere $\\mathcal{P}_h$ denotes the primitive variables of the fluid\n(i.e., $\\rho, u^i, p_g$). \nWe confront two problems in solving equations ({\\ref{eq:gimp1-1}}) -\n({\\ref{eq:gimp1-2}}).\nThe first problem comes from the appearance of $\\mathcal{P}_h^{n+1}$.\nSince $\\mathcal{P}_h^{n+1}$ should be obtained after computing\n$\\mathcal{U}^{n+1}$,\nequations (\\ref{eq:gimp1-1}) - (\\ref{eq:gimp1-2}) become non-linear\nequations for $\\mathcal{U}^{n+1}$.\nAnother difficulty comes from the closure relation $P^{ij}_r = D^{ij}\nE_r$. The Eddington tensor\n$D^{ij}=D^{ij}(E_r, F_r^i)$ is generally a non-linear function of\n$E_r$ and $F_r^i$, so we cannot adopt the simple implicit method of\nreplacing $E_r\\rightarrow E_r^{n+1}$ and $F_r^i\\rightarrow\nF_r^{n+1,i}$ in the Eddington tensor. \n\nIn this paper, we propose an iterative method, which solves the following\nequations\n\\begin{eqnarray}\n {E}_r^{(m+1)} = E_r^{*} \n + \\Delta t \\mathcal{S}_E(E_r^{(m+1)},\n {F}_r^{(m+1),j}, P_r^{(m+1),jk}, \\mathcal{P}_h^{(m)}),\\nonumber \\\\\n {\\label{eq:gimp2-1}}\\\\\n {F}_r^{(m+1),i} = F_r^{*,i} \n + \\Delta t \\mathcal{S}_F^i(E_r^{(m+1)},\n {F}_r^{(m+1),j}, P_r^{(m+1),jk}, \\mathcal{P}_h^{(m)}),\\nonumber \\\\\n {\\label{eq:gimp2-2}}\n\\end{eqnarray}\nwhere $m=0,1,2,\\ldots$ indicates the iteration step. \nWe take $E_r^{(0)}=E_r^n$, $F_r^{(0),j}=F_r^{n,j}$, and\n$\\mathcal{P}_h^{(0)}=\\mathcal{P}_h^n$ for the initial guess\n(also $E_r^*$ and $F_r^{*,j}$ can be\ncandidates for the initial guess). 
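The structure of this iteration can be sketched as follows. Each pass solves the linearized system of the form $\bmath{C}\,\delta = \Delta t\,S(q_m) + q^* - q_m$ and updates $q_{m+1} = q_m + \delta$. The source term and Jacobian below are a simple linear relaxation toward an equilibrium state, a placeholder standing in for the four-force, so the sketch illustrates only the loop and the $4\times4$ linear solve, not the physics.

```python
import numpy as np

def implicit_source_update(q_star, source, jac, dt, tol=1e-10, max_iter=50):
    """Iteratively solve q = q* + dt S(q) for q = (E_r, F_r^x, F_r^y, F_r^z).

    Each pass solves C delta = dt S(q_m) + q* - q_m with C = 1 - dt dS/dq,
    then sets q_{m+1} = q_m + delta."""
    q = q_star.copy()                          # initial guess
    for _ in range(max_iter):
        C = np.eye(len(q)) - dt * jac(q)
        delta = np.linalg.solve(C, dt * source(q) + q_star - q)
        q = q + delta
        if np.linalg.norm(delta) < tol * (1.0 + np.linalg.norm(q)):
            break
    return q

# Placeholder stiff source: linear relaxation toward q_eq with rate
# matrix K (NOT the radiation four-force; for illustration only).
K = np.diag([50.0, 80.0, 80.0, 80.0])
q_eq = np.array([1.0, 0.1, 0.0, 0.0])
source = lambda q: -K @ (q - q_eq)
jac = lambda q: -K
q_new = implicit_source_update(np.array([10.0, 2.0, 0.0, 0.0]),
                               source, jac, dt=1.0)
```

Even with a time step much longer than the relaxation time $1/50$, the update relaxes stably toward the equilibrium state, whereas a forward-Euler update with the same step would diverge.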
\n We note that the hydrodynamic quantity is\nexplicitly evaluated at $m$-th step\ndue to the complexity discussed above. \n\nNext we introduce the following two variables\n\\begin{eqnarray}\n\\delta E_r^{(m+1)}&\\equiv& E_r^{(m+1)} - E_r^{(m)},\\label{eq:defdE}\\\\\n\\delta F_r^{(m+1),i}&\\equiv& F_r^{(m+1),i} - F_r^{(m),i}.\\label{eq:defdF}\n\\end{eqnarray}\nBy substituting equations (\\ref{eq:defdE}) and (\\ref{eq:defdF}) \ninto equations (\\ref{eq:gimp2-1}) - (\\ref{eq:gimp2-2}), \nand taking the first order Taylor series\nin $\\delta E_r^{(m+1)}$ and $\\delta \\bmath F_r^{(m+1)}$, we\nobtain\n\\begin{eqnarray}\n \\bmath C^{(m)}\n \\left(\\begin{array}{c}\n \\delta E_r^{(m+1)}\\\\\n \\delta F_r^{(m+1),j}\\\\\n \\end{array}\\right)=\n \\left(\\begin{array}{c}\n \\Delta t \n S_E^{(m)}\n +E_r^* - E_r^{(m)} \\\\\n\t\\Delta t S_F^{(m),i}\n\t +F_r^{*,i} - F_r^{(m),i}\n\t \\end{array}\\right),\\label{eq:imp2_mat}\n\\end{eqnarray}\nwhere $S_E^{(m)}=S_E[E_r^{(m)}, F_r^{(m),j}, P_r^{(m),jk},\n \\mathcal{P}^{(m)}]$ and $S_F^{(m),i}=\n S_F^{i}[E_r^{(m)}, F_r^{(m),j}, P_r^{(m),jk},\n\t \\mathcal{P}^{(m)}]$.\nHere $\\bmath{C}$ is the $4\\times 4$ matrix given by\n\\begin{eqnarray}\n\\bmath C \\equiv \\bmath 1 - \\Delta t\n \\left(\\begin{array}{cc}\n \\frac{\\partial S_E}{\\partial E_r}\n +\\frac{\\partial P^{kl}}{\\partial E_r}\n \\frac{\\partial S_E}{\\partial P^{kl}}, \n \\frac{\\partial S_E}{\\partial F_r^j}\n +\\frac{\\partial P^{kl}}{\\partial F_r^j}\n \\frac{\\partial S_E}{\\partial P^{kl}} \\\\\n\t \\frac{\\partial S_F^i}{\\partial E_r}\n\t +\\frac{\\partial P^{kl}}{\\partial E_r}\n\t \\frac{\\partial S_F^i}{\\partial P^{kl}}, \n\t \\frac{\\partial S_F^i}{\\partial F_r^j}\n\t +\\frac{\\partial P^{kl}}{\\partial F_r^j}\n\t \\frac{\\partial S_F^i}{\\partial P^{kl}}\n\t\\end{array}\\right),\\label{eq:imp2_matC}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n \\frac{\\partial S_E}{\\partial E_r}&=&\n -\\kappa \\rho \\gamma +\n \\sigma_s \\rho \\gamma 
u^2,\\label{imp2:dSEdE}\\\\\n \\frac{\\partial S_E}{\\partial F^j_r}&=&\n \\kappa \\rho u^j\n -\\sigma_s \\rho (\\gamma^2 +u^2) u^j,\\\\\n \\frac{\\partial S_E}{\\partial P_r^{kl}}&=&\n \\sigma_s \\rho \\gamma u_k u_l,\\\\\n \\frac{\\partial S_F^i}{\\partial E_r}&=&\n \\sigma_s \\rho \\gamma^2 u^i,\\\\\n \\frac{\\partial S_F^i}{\\partial F_r^j}&=&\n -\\kappa \\rho \\gamma \\delta^i_j\n -\\sigma_s \\rho \\gamma \\left(\\delta^i_j + 2 u^i u_j\\right),\\\\\n \\frac{\\partial S_F^i}{\\partial P_r^{kl}}&=&\n \\kappa \\rho \\delta^i_k u_l\n +\\sigma_s\\rho \\left(\\delta^i_k u_l+u^i u_k u_l\\right).\\label{imp2:dSFdP}\n\\end{eqnarray}\n\n\nHere we use equations (\\ref{geq:G0}), (\\ref{geq:Gi}), and (\\ref{def:source}).\nAlso $(\\partial P_r^{ij})\/(\\partial E_r)$ and $(\\partial\nP_r^{ij})\/(\\partial F_r^k)$ are required to complete matrix elements. \nThese quantities depend on the closure relation given in\nequation (\\ref{geq:closure}). \nWe do not specify the closure relation here, but the explicit form of\nthese quantities is shown in \\S~\\ref{closure} and appendix \\ref{apPr}.\n\nWhen $P^{ij}_r$ is the linear function of $E_r$ and $F_r^{i}$ (e.g., the\nEddington approximation), equations (\\ref{eq:gimp2-1}) - (\\ref{eq:gimp2-2})\nreduce to\n\\begin{eqnarray}\n \\bmath C^{(m)}\n \\left(\\begin{array}{c}\n E_r^{(m+1)}\\\\\n F_r^{(m+1),i}\\\\\n \\end{array}\\right)=\n \\left[\\begin{array}{c}\n E^n_r + S_E^{(m)}(E_r=0, F_r^i=0)\\Delta t\\\\\n \tF^{n,i}_r +S_F^{(m),i}(E_r=0, F_r^i=0)\\Delta t\n \\end{array}\\right].\\nonumber \\\\\n \\label{eq:imp2_matlin}\n\\end{eqnarray}\n\nBy inverting the $4\\times 4$ matrix $\\bmath C$ in equation\n(\\ref{eq:imp2_mat}) or (\\ref{eq:imp2_matlin}), we obtain the radiation\nenergy and flux at $(m+1)$-th iteration step from equations\n(\\ref{eq:defdE}) and (\\ref{eq:defdF}). \nIn general, the matrix inversion is time-consuming and sometimes unstable. 
The\nmatrix $\\bmath C$ is, however, only a $4 \\times 4$ matrix, so that we can invert it\nanalytically. We also tried inverting it using the LU-decomposition method.\nWe obtain the inverse matrix $\\bmath C^{-1}$ stably with both schemes, but the \nanalytical method is faster than the LU-decomposition method. Thus, we decided to use\nthe analytical expression of $\\bmath C^{-1}$.\n\n\nNext, we calculate the primitive variables $\\mathcal{P}^{(m+1)}$ from the updated\nconservative variables $\\mathcal{U}^{(m+1)}$. \nAs pointed out in \\S~\\ref{Method}, $E_r$ and $F_r^{k}$ are both\nconserved and primitive variables. \nThus, the recovery step is unnecessary for the radiation\nfield. We only need to compute $\\mathcal{P}_h^{(m+1)}$ from\n$\\mathcal{U}^{(m+1)}$.\n\nSince the total energy $E_t^{n+1}$ and the momentum $m_t^{n+1,k}$ are\nalready determined, the energy and momentum of the fluid\n($E_\\mathrm{HD}^{(m+1)}$ and $m^{(m+1),k}_\\mathrm{HD}$) can be calculated as\n\\begin{eqnarray}\n E_\\mathrm{HD}^{(m+1)} = E_t^{n+1} - E_r^{(m+1)}, \\label{eq:Emhd}\\\\\n m^{(m+1),k}_\\mathrm{HD} = m_t^{n+1,k} - F^{(m+1),k}_r. \\label{eq:Fmhd}\n\\end{eqnarray}\nThen, the three unknown variables $\\rho^{(m+1)}, u^{(m+1),k}, p_g^{(m+1)}$ are\ncomputed from \n$D^{n+1}$, $m^{(m+1),k}_\\mathrm{HD}$, and $E^{(m+1)}_\\mathrm{HD}$.\nThus, the recovery step in RRHD is the same\nas that in relativistic HD. \nWe adopt the recovery method developed by \\cite{2009ApJ...696.1385Z},\nwhich solves a quartic equation. We\nbriefly describe the method. In the following discussion, we drop the\nsuperscripts $n+1$ and $(m+1)$ for simplicity. \n\nThe gas density $D$, momentum $m^i_\\mathrm{HD}$, and energy\n$E_\\mathrm{HD}$ are related to the primitive variables $\\rho, u^i, p_g$\nas \n\\begin{eqnarray}\n D=\\rho \\gamma,\\label{eq:Ddef}\\\\\n m^i_\\mathrm{HD} = (\\rho + \\Gamma_1p_g)\\gamma u^i,\\label{eq:mdef}\\\\\n E_\\mathrm{HD} = (\\rho + \\Gamma_1 p_g)\\gamma^2 - p_g,\\label{eq:Edef}\n\\end{eqnarray}\nwhere $\\Gamma_1 \\equiv \\Gamma\/(\\Gamma-1)$. 
\nFrom equation (\\ref{eq:mdef}), we obtain\n\\begin{equation}\n p_g = \\frac{1}{\\gamma\\Gamma_1}\\left(\\frac{m_\\mathrm{HD}}{\\sqrt{\\gamma^2-1}} - D\\right),\\label{eq:pgsolv}\n\\end{equation}\nwhere $m_\\mathrm{HD}=\\sqrt{\\eta_{ij} m_\\mathrm{HD}^i m_\\mathrm{HD}^j}$. \nBy substituting equation (\\ref{eq:pgsolv}) into (\\ref{eq:Edef}), we obtain\na quartic equation for $u=\\sqrt {u_i u^i}$,\n\\begin{eqnarray}\n f(u)\\equiv \\Gamma_1^2\\left(E_\\mathrm{HD}^2 - m_\\mathrm{HD}^2 \\right) u^4\n- 2 \\Gamma_1 m_\\mathrm{HD} D u^3\\nonumber\\\\\n+ \\left[\\Gamma_1^2 E_\\mathrm{HD}^2 - D^2 - 2 \\Gamma_1(\\Gamma_1-1)\n m_\\mathrm{HD}^2\\right]u^2\\nonumber\\\\\n -2(\\Gamma_1-1)D m_\\mathrm{HD} u - (\\Gamma_1-1)^2 m_\\mathrm{HD}^2=0.\n \\label{eq:primquart}\n\\end{eqnarray}\nBy solving the above quartic equation, $p_g$, $\\rho$ and $u^i$ are computed\nfrom equations (\\ref{eq:pgsolv}), (\\ref{eq:Ddef}), and (\\ref{eq:mdef}).\nWhen we solve equation (\\ref{eq:primquart}), we first adopt the Brown\nmethod \\citep{2003TJSIAM}, which gives \nanalytical solutions of a quartic equation. If reasonable solutions are\nnot obtained, we solve equation (\\ref{eq:primquart}) using the\nNewton-Raphson method with an accuracy of $|f(u)| \\le 10^{-8}$. \nIt is, however, noted that the numerical solution converges without switching to the\nNewton-Raphson method in the test problems shown in \\S~\\ref{test}.\nAlso we note that \\cite{2009ApJ...696.1385Z} showed that the Brown\nmethod is applicable to obtain solutions over a wide range of\nparameters. \n\n\nWhen all the primitive variables are recovered, we obtain the solutions\n$\\mathcal{P}^{(m+1)}$. These solutions should satisfy equation\n(\\ref{eq:imp2_matlin}). But we note that while $E_r, F_r^i$ and $P_r^{ij}$\nare evaluated at the $(m+1)$-th iteration step, we evaluate\n$\\mathcal{P}_h$ at the $m$-th step when we solve equation\n(\\ref{eq:imp2_matlin}). 
The evaluation of $\\mathcal{P}_h$\nat the $m$-th iteration step might be problematic when the density or\ntemperature jump is large (e.g., at a shock front).\nThus, we again solve equation (\\ref{eq:imp2_matlin}) using the updated\nprimitive variables $\\mathcal{P}_h^{(m+1)}$ and check the differences\nbetween two successive iterates,\n$|\\delta E_r^{(m+1)}|\/E_r^{(m+1)}$, \n$|\\delta F_r^{(m+1),i}\/F_r^{(m+1),i}|$, and \n$|\\mathcal{P}_h^{(m+1)}-\\mathcal{P}_h^{(m)}|\/|\\mathcal{P}^{(m+1)}_h|$. \nWhen these quantities are larger than a specified value (typically $\\sim\n10^{-6}$), the source term is integrated again using the updated primitive\nvariables. This process is continued until the difference between two successive\niterates falls below the specified tolerance \\citep{2009MNRAS.394.1727P}. \nA similar iteration method is also adopted in relativistic resistive\nmagnetohydrodynamics \\citep{2009MNRAS.394.1727P}. Since $\\mathcal{P}_h$\nis evaluated at the current iteration step, solutions obtained by solving\nequation (\\ref{eq:imp2_matlin}) might not converge when a shock\nappears. But as we will see in \\S~\\ref{test}, we obtain solutions correctly even when the\ncooling time is much shorter than $\\Delta t$ (see\n\\S~\\ref{radheatcool}) or when a shock appears (see \\S~\\ref{RHD}).\n\n\nNow we summarize our newly developed scheme for solving the RRHD equations.\n\\begin{enumerate}\n \\item Calculate the primitive variables at the cell surface\n $\\mathcal{P}_{i\\pm 1\/2, S}$ from $\\mathcal{P}_i$. \n \\item Compute the numerical flux $f_{i\\pm 1\/2}$ using an approximate\n Riemann solver (e.g., the HLL scheme). \n \\item Integrate the hyperbolic term using the numerical flux $f_{i\\pm 1\/2}$ (equation \\ref{geq:fluxterm}). 
Then we obtain the intermediate states of the\n conservative variables $\\mathcal{U}^*$.\n \\item Compute the matrix elements of $\\bmath C$ given in equation\n (\\ref{eq:imp2_matC}).\n Then, calculate $E_r^{(m+1)}$ and $F_r^{(m+1),i}$ by inverting the\n $4\\times 4$ matrix.\n \\item Calculate the primitive variables $\\mathcal{P}^{(m+1)}$ from\n $E_\\mathrm{HD}^{(m+1)}$, $m_\\mathrm{HD}^{(m+1),k}$ and $D^{n+1}$. \n \\item When the difference between two successive values does not fall\n below a specified tolerance, repeat from step 4. \n The updated primitive variables of the fluid $\\mathcal{P}_h^{(m+1)}$ are\n used to evaluate the matrix $\\bmath C$.\n\\end{enumerate}\n\n\\subsection{Closure Relation}\\label{closure}\nIn the above discussion, we did not specify the closure relation, but \ntook the general form given in equation (\\ref{geq:closure}).\nWhen we compute the numerical flux using the HLL scheme, the wave speeds of\nthe fastest mode $\\lambda_\\pm$ are needed, which depend on the closure\nrelation.\nAnother term depending on the closure relation is $P_r^{ij}$, which\nappears in the numerical flux (\\ref{geq:Flux}).\nAlso, we need to evaluate its derivatives, $\\partial P^{ij}\/\\partial E_r$\nand $\\partial P^{ij}\/\\partial F_r^{k}$, to compute the matrix $\\bmath C$.\nIn this subsection, we show how to obtain these quantities by specifying\nthe closure relation.\n\nMany kinds of closure relations have been proposed, as discussed in\n\\S~\\ref{intro}. As the first step of developing the RRHD code,\nwe hereafter restrict our discussion to the closure relation provided by\nthe Eddington approximation. \nThen, the closure relation is described by\n\\begin{equation}\n P'^{ij}_r = \\frac{\\delta^{ij}}{3}E'_r, \\label{eq:Eddington}\n\\end{equation}\nwhich is valid when the radiation is well coupled with the matter so\nthat the radiation field is isotropic in the comoving frame. 
This\ncorresponds to the Eddington factor being $1\/3$.\n\nFirst, we show the wave speed of the fastest mode.\nBy assuming the relation given in equation (\\ref{eq:Eddington}), the\ncharacteristic wave velocity of the fastest mode in the comoving frame is\n$1\/\\sqrt{3}$, which is equivalent to the sound speed in the relativistic\nregime. Thus the maximum wave velocity is always $1\/\\sqrt{3}$ in\nradiation hydrodynamics with the Eddington approximation.\nThe wave velocities in the observer frame, $\\lambda_-$\nand $\\lambda_+$, are obtained\nby boosting $\\pm 1\/\\sqrt{3}$ with the fluid velocity. \n\n\nNext, we show how to obtain the radiation pressure $P_r^{ij}$ measured in\nthe observer frame. By performing a Lorentz transformation on the radiation\nenergy momentum tensor and combining it with equation\n(\\ref{eq:Eddington}), the radiation stress tensor in the observer frame\nobeys the following equation\n\\begin{eqnarray}\n P_r^{ij}\n&+&\\left[-\\frac{\\delta^{ij}}{3}+\\frac{u^i u^j }{(1+\\gamma)^2} \\right] \nu_k u_mP_r^{km} \\nonumber \\\\\n &+& \\frac{1}{1+\\gamma} \\left(u^i u_k P_r^{jk} + u^j u_k P_r^{ik}\\right)\n = R^{ij},\\label{eq:Edclos}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n R^{ij}&=&\n \\frac{\\delta^{ij}}{3}\\left(\\gamma^2 E_r - 2 \\gamma u_k F_r^k \\right)\n -u^i u^j E_r \\nonumber \\\\\n &+& (u^i F_r^j + u^j F_r^i)\n + \\frac{2}{1+\\gamma}u^i u^j u_k F_r^k,\\label{eq:EdclosR}\n \\end{eqnarray}\n\\citep[e.g.,][]{2008bhad.book.....K}. \nSince the radiation stress $P_r^{ij}$ is a $3\\times 3$ symmetric matrix, we\nneed to solve a system of six linear equations to compute $P_r^{ij}$,\n\\begin{equation}\n \\bmath A(u)p = r,\\label{eq:Plinear}\n\\end{equation}\nwhere $p^T=(P_r^{11}, P_r^{22}, P_r^{33},P_r^{12},P_r^{13},P_r^{23})$,\n$r^T=(R^{11}, R^{22}, R^{33},R^{12},R^{13},R^{23})$ and $\\bmath A=\\bmath\nA(u)$ is a\n$6\\times 6$ matrix. 
Since $\\bmath A$ is a function of the velocity, the derivatives\nof $P_r^{ij}$ are described by\n\\begin{equation}\n \\bmath A(u)\\frac{\\partial p}{\\partial E_r} = \\frac{\\partial r}{\\partial E_r},~\n \\bmath A(u)\\frac{\\partial p}{\\partial F^k_r} = \\frac{\\partial\n r}{\\partial F^k_r}. \n\\end{equation}\nThus, $P_r^{ij}$, $\\partial P_r^{ij}\/\\partial E_r$\nand $\\partial P_r^{ij}\/\\partial F_r^{k}$ are computed by inverting the matrix\n$\\bmath A$. \nSince $\\bmath A$ is a $6 \\times 6$ matrix, it is difficult to obtain\n$\\bmath A^{-1}$ analytically. Thus, we use the LU-decomposition\nmethod to invert the matrix $\\bmath A$. \nExplicit forms of\n$\\bmath A, (\\partial r)\/(\\partial E_r)$, and $(\\partial r)\/(\\partial F_r^k)$\nare shown in appendix \\ref{apPr}.\n\n\\begin{figure}\n \\begin{center}\n \\includegraphics[width=8cm]{f1.eps}\n \\caption{Thermal evolution of the radiation energy $E_r$. Crosses and\n squares denote the results of the explicit and implicit schemes, while solid\n curves show the analytical solutions. The filled circles with dotted curves show\n the results of the implicit scheme with the time step of $\\Delta t =\n 10t_{ab}$.}\n \\label{fig:radheatcool}\n \\end{center}\n\\end{figure}\n\n\\section{Test Problems}\\label{test}\nIn this section, we show the results of some numerical tests for our RRHD\ncode. This section consists of numerical tests for the radiation field\n(\\S~\\ref{RD}) and relativistic radiation hydrodynamics (\\S~\\ref{RHD}).\nWe assume that the closure \nrelation is given by equation (\\ref{eq:Eddington}) so that the wave\nspeed of the radiation field is $1\/\\sqrt{3}$.\n\n\\begin{figure*}\n \\begin{tabular}{cc}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[width=8cm]{f2.eps}\n \\caption{Time evolution of $E_r$ (thick solid) and $F_r$ (solid). The\n optical depth of the system size is $0.01$. 
Dashed curves denote \n the light head $l_c = c t\/\\sqrt{3}$ at $t=0.5, 1.0, 1.5\\tau_c$,\n where $\\tau_c = L\/c$. Dotted curves show analytical solutions assuming\n steady state given in equation (\\ref{eq:radflowst}).}\n \\label{fig:radtrans1}\n \\end{minipage}\n \\hspace{1mm}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[width=8cm]{f3.eps}\n \\caption{Time evolution of $E_r$ (thick solid) and $F_r$ (solid). The\n optical depth of the system size is unity. Dashed curves denote \n the light head $l_c = c t\/\\sqrt{3}$ at $t=0.5, 1.0, 1.5\\tau_c$,\n where $\\tau_c = L\/c$. Dotted curves show analytical solutions assuming\n steady state given in equation (\\ref{eq:radflowst}).}\n \\label{fig:radtrans2}\n \\end{minipage}\n \\end{tabular}\n\\end{figure*}\n\\begin{figure*}\n \\begin{tabular}{cc}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[width=8cm]{f4.eps}\n \\caption{Snapshot of $E_r$ at $t=1.5\\tau_c$. The solid, dashed and dotted\n curves denote contours at $10\\%, 0.1\\%, 10^{-3}\\%$ of its maximum.}\n \\label{fig:radtrans3}\n \\end{minipage}\n \\hspace{1mm}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[width=8cm]{f5.eps}\n \\caption{Spatial profiles of $E_r$ at $t=1.5 \\tau_c$. Asterisks,\n squares, and filled circles indicate results along $y=0$, $x=0$,\n and $y=x$, respectively.}\n \\label{fig:radtrans4}\n \\end{minipage}\n \\end{tabular}\n\\end{figure*}\n\\subsection{Numerical tests for Radiation Field}\\label{RD}\nIn this section, we show results of numerical tests for solving\nradiation fields given in equations (\\ref{geq:Tradcons}).\nWe assume that the fluid is static and uniform for simplicity. We\nrecover the light speed as $c$ in this subsection. 
\nThe equations are solved with second-order accuracy in space.\n\n\\subsubsection{Radiative heating and cooling}\\label{radheatcool}\nThis test was proposed by \\cite{2001ApJS..135...95T}.\nWe evaluate the validity of the integration of the source\nterm appearing in equation (\\ref{geq:sourceterm}), which is implicitly and\niteratively integrated in our numerical scheme. For this purpose, we start\nfrom a static, one-zone fluid (i.e., the number of grid points is $N_x = 1$) that\nis initially not in thermal equilibrium with the radiation. \nUnder these assumptions, the radiation field obeys the equation\n\\begin{equation}\n \\frac{d E_r}{d t} = \\rho \\kappa \n \\left(4\\pi \\mathrm{B}-c E_r\\right),\n\\end{equation}\nwhich can be integrated analytically by assuming that $\\rho, \\kappa$ and\n$\\mathrm{B}$ are constant, giving\n\\begin{equation}\n E_r(t) = \\frac{4\\pi}{c} \\mathrm{B}\n - \\left(\\frac{4\\pi}{c} \\mathrm{B}-E_0\\right)e^{-\\rho \\kappa c t},\n\\label{eq:radheatcool}\n\\end{equation}\nwhere $E_0$ is the radiation energy density at $t=0$.\nThe radiation field approaches the Local Thermal Equilibrium value\n(LTE, $E_r = 4\\pi \\mathrm{B}\/c$). \n\nThe mass density is set to $\\rho = 0.025~\\mathrm{g~cm^{-3}}$ and the\nopacity is $\\kappa = 0.04~\\mathrm{cm^{2}~g^{-1}}$, so that the\ncorresponding absorption time scale is $t_{ab}\\equiv 1\/(\\rho\\kappa\nc)=3.3\\times 10^{-8}~\\mathrm{s}$.\nWe examine two models of the initial radiation energy density,\n$E_r=10^{2} E_\\mathrm{LTE}$ and $10^{-2} E_\\mathrm{LTE}$, where\n$E_\\mathrm{LTE}\\equiv a_R T^4=10^{10}~\\mathrm{erg~cm^{-3}}$. The\nspecific heat ratio and the mean molecular weight are $\\Gamma=5\/3$ and $\\mu=1.0$. 
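The stability contrast between explicit and implicit treatments of this linear source term can be illustrated with a short numerical sketch. This is a minimal stand-alone model of equation (\ref{eq:radheatcool}) only, with the gas quantities held fixed as assumed above; the backward-Euler update below is our own simplified stand-in, not the paper's full iterative gas-radiation solver.

```python
import math

# Toy model of the radiative heating/cooling test (our stand-in, not the
# paper's scheme): dE_r/dt = rho*kappa*(4*pi*B - c*E_r) with fixed gas state.
c = 3.0e10                        # light speed [cm/s]
rho, kappa = 0.025, 0.04          # density [g/cm^3], opacity [cm^2/g]
t_ab = 1.0 / (rho * kappa * c)    # absorption timescale, ~3.3e-8 s
E_lte = 1.0e10                    # LTE value 4*pi*B/c [erg/cm^3]
four_pi_B = c * E_lte

def analytic(E0, t):              # equation (eq:radheatcool)
    return E_lte - (E_lte - E0) * math.exp(-t / t_ab)

def explicit_step(E, dt):         # forward Euler: unstable for dt >> t_ab
    return E + dt * rho * kappa * (four_pi_B - c * E)

def implicit_step(E, dt):         # backward Euler: source is linear in E_r,
    # so the implicit update has a closed form
    return (E + dt * rho * kappa * four_pi_B) / (1.0 + dt * rho * kappa * c)

E0 = 1.0e-2 * E_lte
dt = 0.1 * t_ab                   # small step: both schemes track the analytic curve
E_exp = E_imp = E0
for _ in range(100):
    E_exp = explicit_step(E_exp, dt)
    E_imp = implicit_step(E_imp, dt)

E_big = E0                        # dt = 10 t_ab: only the implicit update is stable
for _ in range(100):
    E_big = implicit_step(E_big, 10.0 * t_ab)
```

With the large time step the implicit iteration still relaxes monotonically to $E_\mathrm{LTE}$, which is the behavior reported for the filled circles in Figure~\ref{fig:radheatcool}.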
\n\\begin{deluxetable*}{lccccccccccccc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{List of Simulation Runs}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{model} & \\colhead{$\\kappa$} &\\colhead{$\\Gamma$}\n &\\colhead{state} & \\colhead{$\\rho$} &\n \\colhead{$p_g$} & \\colhead{$u^x$} & \\colhead{$u^y$}\n & \\colhead{$u^z$} & \\colhead{$E_r'$}\n}\n\\startdata\n RHDST1 &0.4 &$\\frac{5}{3}$ & \n L & 1.0 & $3.0\\times 10^{-5}$ & 0.015 & 0 & 0 & $1.0\\times 10^{-8}$\\\\\n \\ &\\ &\\ & \n R & 2.4 & $1.61\\times 10^{-4}$ & $6.25\\times 10^{-3}$ & 0 & 0 & $2.50\\times 10^{-7}$\\\\\n RHDST2 &0.2 &$\\frac{5}{3}$ & \n L & 1.0 & $4.0\\times 10^{-3}$ & 0.25 & 0 & 0 & $2.0\\times 10^{-5}$\\\\\n \\ &\\ &\\ & \n R & 3.11 & $0.04512$ & $0.0804$ & 0 & 0& $3.46\\times 10^{-3}$\\\\\n RHDST3 &0.3 &2& \n L & 1.0 & 60 & 10 & 0 & 0 & 2\\\\\n \\ &\\ &\\ & \n R & 8 & $2.34\\times 10^{3}$ & 1.25 & 0 & 0& $1.13\\times 10^{3}$\\\\\n RHDST4 &0.08 &$\\frac{5}{3}$& \n L & 1.0 & $6.0\\times 10^{-3}$ & 0.69 & 0 & 0 & 0.18\\\\\n \\ &\\ &\\ & \n R & 3.65 & $3.59\\times 10^{-2}$ & 0.189 & 0 & 0& $1.30$\\\\\n\\enddata\n\\tablecomments{Parameter sets of the numerical tests. The scattering coefficient\n $\\sigma_s$ is taken to be zero in all models.}\n\\label{tab:testparam}\n\\end{deluxetable*}\n\nFigure~\\ref{fig:radheatcool} shows the time evolution of $E_r$. Squares\nand crosses denote the results of the explicit and implicit schemes, respectively,\nfor integrating the source term with a time step of\n$\\Delta t = 0.1 t_{ab}$, while\nsolid curves show the analytical solutions obtained from equation\n(\\ref{eq:radheatcool}). \nFor clarity, only a subset of the data points of the $\\Delta t=0.1 t_{ab}$ models is plotted.\nThe lower and upper plots\ncorrespond to $E_r(t=0)=10^{-2}E_\\mathrm{LTE}$ and $10^{2}E_\\mathrm{LTE}$,\nrespectively. \nWe can see that the results of both schemes agree excellently\nwith the analytical solutions when the time step is smaller than the absorption\ntime scale ($\\Delta t < t_{ab}$). 
\nWhen $\\Delta t = 10 t_{ab}$, \nsolutions with the implicit scheme (filled circles) stably reach\nthe thermal equilibrium state, although they deviate from the\nanalytical solutions before reaching LTE. \nWe note that the explicit scheme does not converge to the analytical\nsolution when we take such a large time step. \nWe also performed simulations with a much larger time step, $\\Delta t = 10^4 t_{ab}$,\nand found that the solutions converge to the analytical solution (not shown in the figure).\nWe also note that the number of iteration steps used in the implicit scheme\nis less than or equal to 2, even when $\\Delta t = 10^4 t_{ab}$.\nThus, the implicit scheme has a great advantage when the cooling timescale\nis much smaller than the dynamical timescale, since we can take a numerical\ntime step $t_{ab}<\\Delta t < t_{dyn}$.\nThe computational times of the explicit and implicit schemes with an equal number\nof time steps are in the ratio $t_\\mathrm{exp} : t_\\mathrm{imp}=1 : 1.04$ with $\\Delta\nt=0.1 t_{ab}$.\n\n\\subsubsection{Radiation transport}\\label{radtransport}\nThis test was performed by \\cite{2001ApJS..135...95T}.\nWhen radiation is injected into uniform matter, it propagates\nwith the characteristic wave velocity while it exchanges energy with the\nplasma. As a test of this effect, we assume that the fluid is static and\nuniform in a simulation box bounded by $x=[0,L]$, where\n$L=1~\\mathrm{cm}$. \nThe radiation is injected from the boundary at $x=0$. \nIn the uniform medium, the opacity is set to $\\kappa =\n0.04~\\mathrm{cm^{2}~g^{-1}}$. \nThe radiation energy is $E_\\mathrm{LTE}=10^{10}~\\mathrm{erg~cm^{-3}}$\nand the thermal energy is determined from the LTE condition.\nWe examine two models of the mass density, $\\rho = 0.25$ and\n$25~\\mathrm{g~cm^{-3}}$.\nThe corresponding optical depth is\n$\\tau = \\rho \\kappa L=0.01$ and $1$, respectively.\nAt the boundary $x=0$, the radiation is injected with an energy density\nof $E_r = 10^{10}E_\\mathrm{LTE}$. 
The free boundary condition is applied\nat $x=L$. \nOther parameters are $\\Gamma = 5\/3$, and $\\mu = 1.0$. \nThe Courant-Friedrichs-Lewy (CFL) number is taken to be $0.5$. \n\n\nFigure~\\ref{fig:radtrans1} shows results with $\\tau = 0.01$. \nThick solid curves denote the radiation energy density, and thin solid curves\nshow the radiation flux at\n$t=0.5, 1.0, 1.5\\tau_c$ from left to right, where $\\tau_c = L\/c$. Since the radiation energy\nis injected from $x=0$, the wave\nfront propagates from left to right as time goes on. The dashed curves\ndenote the position of the light head $l_c=ct\/\\sqrt{3}$. Since we assume\nthe Eddington approximation on the closure relation, the wave front\npropagates with the velocity $c\/\\sqrt{3}$. We can see that the wave\nfront is sharply captured in our simulation code. In FLD approximations,\nthe radiation field evolves obeying the diffusion equation, so\nthat the wave front has a smooth profile \\cite[see, Fig.~7 in\n][]{2001ApJS..135...95T}. In our code, although we apply the Eddington\napproximation, the 1st order moment equation is solved. \nThen equations of the radiation field have hyperbolic form, so that the wave\nfront can be captured using the HLL scheme.\n\nFigure~\\ref{fig:radtrans2} shows results of the model of larger density\n($\\tau=1$). The\nradiation propagates with the velocity $c\/\\sqrt{3}$.\nBehind the wave front, the radiation field becomes steady and\nits energy exponentially decreases with $x$ (equivalently, $\\tau =\n\\kappa \\rho x$) due to the absorption. \nWhen we assume the steady state and the\nradiation energy is much larger than that in LTE (i.e., $E_r \\gg 4\\pi\n\\mathrm{B}\/c$), equation (\\ref{geq:Tradcons}) can be solved and we obtain\n\\begin{equation}\n E_r = E_0 \\exp(-\\sqrt{3}\\rho \\kappa x), ~\n F_r = \\frac{cE_0}{\\sqrt{3}} \\exp(-\\sqrt{3}\\rho \\kappa x),\\label{eq:radflowst}\n\\end{equation}\n\\citep{1984oup..book.....M}.\nDotted curves in Fig. 
\n\\ref{fig:radtrans1} and \\ref{fig:radtrans2} show solutions obtained from\nequation (\\ref{eq:radflowst}). We can see that\nthe numerical results excellently recover the analytical ones.\n\\begin{figure*}\n \\begin{tabular}{cc}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[height=9cm]{f6.eps}\n \\caption{Profiles of $\\rho$, $p_g$, $v^x$, $E'_r$ and $F'^x_r$ from\n top to bottom at $t=5000$ for the model RHDST1.\nDots and solid curves denote the numerical and semi-analytical solutions, respectively.\n}\n \\label{fig:farris1}\n \\end{minipage}\n \\hspace{1mm}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[height=9cm]{f7.eps}\n \\caption{Profiles of $\\rho$, $p_g$, $v^x$, $E'_r$ and $F'^x_r$ from\n top to bottom at $t=5000$ for the model RHDST2. \nDots and solid curves denote the numerical and semi-analytical solutions, respectively.}\n \\label{fig:farris2}\n \\end{minipage}\n \\end{tabular}\n\\end{figure*}\n\n\\subsubsection{Radiation transport in two dimensions}\\label{radtransport2}\nIn this test, shown in \\cite{2001ApJS..135...95T}, the radiation\nfield propagates in an optically thin medium in the $x$-$y$ plane. \nThe simulation domain is $x=[-L,L]$ and $y=[-L, L]$, where\n$L=1~\\mathrm{cm}$. The number of numerical grid points is $(N_x, N_y)=(400, 400)$. \nThe mass density of the uniform fluid is $\\rho =\n0.25~\\mathrm{g~cm^{-3}}$. The other quantities are the same as those\nin \\S~\\ref{radtransport}. In the initial state, a larger radiation energy\nis given inside $r=\\sqrt{x^2+y^2}<0.1\\mathrm{~cm}$, with energy density\n$E_r=10^{10}E_\\mathrm{LTE}$. The free (Neumann) boundary condition is\napplied on each side of the box. The CFL number is taken to be $0.5$. \n\nFigure \\ref{fig:radtrans3} shows contours of the radiation energy density\n$E_r$ at $t=1.5\\tau_c$. \nThe solid, dashed, and dotted curves denote contours at $10\\%, 0.1\\%,\n10^{-3}\\%$ of its maximum. 
\nDue to the enhancement of the radiation energy around the origin in the\ninitial state, the radiation field propagates outward in a circle, forming a\ncaldera-shaped profile of the radiation energy density. The wave\nfront propagates with the speed $c\/\\sqrt{3}$. \nFigure \\ref{fig:radtrans4} shows one-dimensional cuts of the results at\n$t=1.5\\tau_c$. Asterisks, squares and\nfilled circles show results along $y=0$, $x=0$, and $y=x$,\nrespectively. \nWe can see that the wave packet has a sharp structure. The radiation energy is\nmainly accumulated around the wave front, while it is very small\ninside the wave front (the caldera floor). Such a caldera structure\nis not reproduced when the FLD approximation is applied\n\\citep{2001ApJS..135...95T}, since FLD is formulated on the basis of the\ndiffusion approximation.\nAlthough the wave speed reduces to $c\/\\sqrt{3}$, solving the first-order\nmoment of the radiation field (the radiation flux) with the Eddington approximation\nis advantageous for studying the propagation of radiation pulses\nin optically thin media.\n\nIn the problems of \\S~\\ref{radtransport} and \\S~\\ref{radtransport2},\nthe radiation field passes the boundary $x=L$ at\n$t=\\sqrt{3}\\tau_c$. When we adopt the free (Neumann) boundary conditions,\nmost of the radiation energy passes through the boundary, while some part of\nit is reflected and stays in the simulation box. The amplitude\nof the reflected wave $E_\\mathrm{ref}$ is smaller than that of the transmitted one,\n$E_\\mathrm{pass}$: $\\max[E_\\mathrm{ref}\/E_\\mathrm{pass}]\\sim\n0.8\\%$ for the one-dimensional test and $\\sim 5\\%$ for the two-dimensional\ntest. The waves are mainly reflected at the corners of the simulation box\nin the two-dimensional simulation (i.e., around $[x,y]=[\\pm L, \\pm L]$). \nThe simple free boundary condition can thus be applied to the\nradiation field with this accuracy. 
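As a quick consistency check on equation (\ref{eq:radflowst}), one can verify symbolically that the exponential profile solves the steady, pure-absorption limit of the moment equations with the Eddington closure $P_r = E_r/3$ (assuming $E_r \gg 4\pi\mathrm{B}/c$, as stated above). The 1D moment equations written below are our own schematic reduction, not a transcription of equations (\ref{geq:Tradcons}).

```python
import sympy as sp

x, rho, kappa, c, E0 = sp.symbols('x rho kappa c E_0', positive=True)

# Steady-state profile of equation (eq:radflowst)
E = E0 * sp.exp(-sp.sqrt(3) * rho * kappa * x)                 # energy density
F = c * E0 / sp.sqrt(3) * sp.exp(-sp.sqrt(3) * rho * kappa * x)  # flux

# Schematic 1D steady moment equations in the pure-absorption limit,
# with Eddington closure P = E/3:
#   dF/dx          = -rho*kappa*c*E    (0th moment)
#   c^2 d(E/3)/dx  = -rho*kappa*c*F    (1st moment)
residual_0 = sp.simplify(sp.diff(F, x) + rho * kappa * c * E)
residual_1 = sp.simplify(c**2 * sp.diff(E / 3, x) + rho * kappa * c * F)
```

Both residuals vanish identically, and eliminating $F$ reproduces the $e^{-\sqrt{3}\rho\kappa x}$ decay rate quoted above.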
\n\n\n\n\\begin{figure*}\n \\begin{tabular}{cc}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[height=9cm]{f8.eps}\n \\caption{Profiles of $\\rho$, $p_g$, $v^x$, $E'_r$ and $F'^x_r$ from\n top to bottom at $t=5000$ for the model RHDST3. Dots and solid curves\n denote for numerical and semi-analytical solutions.}\n \\label{fig:farris3}\n \\end{minipage}\n \\hspace{1mm}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[height=9cm]{f9.eps}\n \\caption{$L$-1 norms of $\\rho$, $p_g$, $u^x$, $E'_r$ and $F'^x_r$ for\n the model RHDST3.}\n \\label{fig:farris3-L1}\n \\end{minipage}\n \\end{tabular}\n\\end{figure*}\n\n\\subsection{Numerical Test for Relativistic Radiation Hydrodynamics}\\label{RHD}\nNext, we consider the coupling between the radiation and\nmatter fields. We solve RRHD equations with a second order accurate\nscheme. \n\nTest problems for radiative shock shown in this subsection are developed\nby \\cite{2008PhRvD..78b4023F} who proposed and solved four shock tube\ntest problems. \nInitial conditions of these problems are listed in\nTable~\\ref{tab:testparam}. The initial state has a discontinuity at\n$x=0$ and the system is in LTE. \nSuch an initial discontinuity breaks up generating waves. \nWhen the waves pass away from the simulation box, the system approaches\nthe steady state. Then we compare our numerical results with\nsemi-analytical solutions\nafter the system reaches a steady state.\nIn \\cite{2008PhRvD..78b4023F}, the initial condition is constructed by\nboosting semi-analytical solutions, so that the shock moves with an\nappropriate velocity. In our test, the shock is assumed to rest\naround $x=0$ (shock rest frame).\nSuch the problem can be more stringent tests for\nour code to maintain the stationarity \\citep{2011MNRAS.tmp.1386Z}.\n\n\n\nThe simulation box is bounded by $x=[-L, L]$, where $L=20$ in the\nnormalized unit, and a number of numerical\ngrid points is $N_x=3200$. The free boundary condition is applied at\n$x=\\pm L$. 
\nWe specify the physical quantities $\\rho, p, u$ and $E_r'$ in the left and\nright states, and the flux is assumed to be zero. \nFollowing \\cite{2008PhRvD..78b4023F}, the Stefan-Boltzmann constant is\nnormalized such that $4\\pi a_R=E'_{r,L}\/T_L^4 =\nE'_{r,R}\/T_R^4$, where the subscripts $L$ and $R$ denote the quantities\nin the left ($x\\leq 0$) and right ($x>0$) states. \n\nWe note that although we adopt an iteration method to integrate the\n stiff source term (equation (\\ref{eq:imp2_mat})), the solutions in the\n following tests converge within a relative error of $10^{-6}$ without iterations. \n\n\\begin{figure*}\n \\begin{tabular}{cc}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[height=9cm]{f10.eps}\n \\caption{Profiles of $\\rho$, $p_g$, $v^x$, $E'_r$ and $F'^x_r$ from\n top to bottom at $t=5000$ for the model RHDST4. \nDots and solid curves denote the numerical and semi-analytical solutions, respectively.}\n \\label{fig:farris4}\n \\end{minipage}\n \\hspace{1mm}\n \\begin{minipage}{0.49\\hsize}\n \\includegraphics[height=9cm]{f11.eps}\n \\caption{$L$-1 norms of $\\rho$, $p_g$, $u^x$, $E'_r$ and\n $F'^x_r$ for the model RHDST4. }\n \\label{fig:farris4-L1}\n \\end{minipage}\n \\end{tabular}\n\\end{figure*}\n\n\\subsubsection{Non-relativistic shock}\\label{RHDST1}\nIn this test, a non-relativistic strong shock exists at $x=0$\n(model RHDST1). The ratio of the radiation to thermal energy in the upstream\n($x<0$) is $2.2\\times 10^{-4}$. We take the CFL\nnumber to be $0.9$. \nFig.~\\ref{fig:farris1} shows profiles at $t=5000$. The physical\nquantities shown in this figure are $\\rho$, $p_g$, $v^x$, $E_r', F'^x_r$\nfrom top to bottom. 
(We note that $F'^{x}_r$, which is the\nradiation flux measured in the comoving frame, is different from the $F^x$\ndefined in \\citealt{2008PhRvD..78b4023F} by a factor of $\\gamma$.)\nDots and curves denote the numerical and semi-analytical solutions, respectively.\n\nSince the initial condition is given by a step function at $x=0$, waves\narising at the discontinuity propagate in the $\\pm\nx$-direction (mainly in the $+x$-direction, since the upstream is\nsupersonic). After the waves pass the boundary at $x=\\pm L$, the\nsystem reaches the steady state. \n\nSince the radiation energy is negligible, the fluid quantities\nhave jumps around $x=0$, similar to a pure hydrodynamical shock. The\nradiation field, on the other hand, has a\nsmooth profile in which the radiation energy is transported in front of the\nshock. The radiation energy and flux have no discontinuities. \nThe simulation results agree well with the analytical solutions. \n\n\\subsubsection{Mildly relativistic shock}\\label{RHDST2}\nIn this test, a mildly relativistic strong shock exists at $x=0$\n(model RHDST2). The ratio of the radiation to thermal energy in the upstream\nis $3.3\\times 10^{-3}$. We take the CFL\nnumber to be $0.9$. \nFig.~\\ref{fig:farris2} shows profiles at $t=5000$. The physical\nquantities shown in this figure are $\\rho$, $p_g$, $v^x$, $E_r', F'^x_r$ from top to bottom. \nWe can see that the radiation energy density is no longer continuous but\njumps at $x=0$. In the non-relativistic limit, the radiation energy\ndensity and its flux are continuous, but this is not the case in\ngeneral: they are no longer conserved variables at the shock in a\nrelativistic flow \\citep{2008PhRvD..78b4023F}. \nThe solutions obtained in our numerical simulation (dots) \nare qualitatively and quantitatively consistent with the semi-analytical\nsolutions (solid curves).\n\n\\subsubsection{Relativistic shock}\\label{RHDST3}\nIn this test, a highly relativistic strong shock exists at $x \\simeq 0$\n(model RHDST3). 
The upstream Lorentz factor is $\\sim 10$ and the ratio\nof the radiation to thermal energy is $3.3\\times 10^{-2}$. We take the CFL\nnumber to be $0.9$. \nFig.~\\ref{fig:farris3} shows profiles at $t=5000$. The physical\nquantities shown in this figure are $\\rho$, $p_g$, $v^x$, $E_r', F'^x_r$ from top to bottom. \n\nAfter the waves generated at the jump $(x=0)$ pass the boundary, the\nsystem approaches the steady state.\nIn this case, all the physical quantities are continuous.\n\nThe solutions obtained in our numerical simulation excellently recover the\nanalytical solutions even when the flow speed is highly relativistic. \nTo validate our numerical code, we compute the $L$-1 norms from\n\\begin{equation}\n L_1[f] = \\Delta x \\sum_{i=1}^{N_x}|f_i - f_\\mathrm{a}(x_i)|,\n \\label{eq:L1}\n\\end{equation}\nwhere $f_i$ is the physical quantity at the $i$-th grid point and\n$f_\\mathrm{a}$ is the semi-analytic solution. \n\nWe perform simulations with the number of grid points $N_x=400, 800, 1600,\n3200$. In these tests, we use the semi-analytic solution as the initial\ncondition. Figure~\\ref{fig:farris3-L1} shows the $L$-1 norms of the errors in\n$\\rho, u^x, p_g, E_r'$, and $F'^x_r$. We can see that all errors\nconverge at second order in $\\Delta x$.\n\n\\subsubsection{Radiation dominated shock}\\label{RHDST4}\nIn this test, a mildly relativistic, radiation dominated shock exists\nat $x\\simeq 0$ (model RHDST4). The ratio\nof the radiation to thermal energy in the upstream is $20$. We take\nthe CFL number to be $0.3$. \nFig.~\\ref{fig:farris4} shows profiles at $t=5000$. The physical\nquantities shown in this figure are $\\rho$, $p_g$, $v^x$, $E_r', F'^x_r$ from top to bottom. 
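The convergence measurement based on equation (\ref{eq:L1}) can be sketched as follows. The "solver" here is a toy second-order centered difference standing in for the RRHD code; it is used only to show how the $L$-1 norms at two successive resolutions yield the convergence order.

```python
import numpy as np

def l1_norm(f_num, f_ana, dx):
    # Equation (eq:L1): L_1[f] = dx * sum_i |f_i - f_a(x_i)|
    return dx * np.sum(np.abs(f_num - f_ana))

def toy_solver_error(N, L=20.0):
    """Toy stand-in for the RRHD code: a second-order centered
    difference of a smooth periodic profile on N grid points."""
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    dx = x[1] - x[0]
    f = np.sin(2 * np.pi * x / L)
    d_num = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)   # periodic stencil
    d_ana = (2 * np.pi / L) * np.cos(2 * np.pi * x / L)   # exact derivative
    return l1_norm(d_num, d_ana, dx)

e_coarse, e_fine = toy_solver_error(400), toy_solver_error(800)
order = np.log2(e_coarse / e_fine)   # ~2 for a second-order scheme
```

Applying the same doubling of $N_x$ from 400 to 3200 and fitting the slope of $L_1$ versus $\Delta x$ is what produces the second-order convergence seen in Figures~\ref{fig:farris3-L1} and \ref{fig:farris4-L1}.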
\n\nAfter the waves generated at the jump $(x=0)$ pass the boundary, the\nsystem approaches the steady state.\nIn this case, all the physical quantities are continuous and their\nprofiles are very smooth, since a precursor generated by the shock\nstrongly affects the plasma.\n\nWe again find very good agreement with the semi-analytical solutions even when\nthe radiation energy dominates the thermal energy. \nFigure~\\ref{fig:farris4-L1} shows the $L$-1 norms of the errors in $\\rho, u^x,\np_g, E_r'$, and $F'^x_r$. In this test, we adopt the analytical solution\nas the initial condition, following \\cite{2008PhRvD..78b4023F}.\nWe can see that all errors converge at second order in $\\Delta x$.\n\n\\section{Discussion}\\label{discussion}\nIn the current study, we have constructed a relativistic hydrodynamic\nsimulation code including the radiation field. The magnetic field, which\nwould play a crucial role in relativistic phenomena, is neglected for simplicity. \nIt would be quite simple to include the magnetic field using a well-developed\nnumerical scheme. When we extend our radiation hydrodynamic code to a\nradiation magnetohydrodynamic one, the energy momentum tensor\n$T_\\mathrm{HD}^{\\mu\\nu}$ is replaced by that of the magnetofluid, \n\\begin{equation}\n T^{\\mu\\nu}_\\mathrm{MHD} = (\\rho \\xi+b^2)u^\\mu u^\\nu - b^\\mu b^\\nu\n +\\left(p_g + \\frac{b^2}{2}\\right)\\eta^{\\mu \\nu},\\label{geq:TMHD}\n\\end{equation}\nwhere $b^\\nu = \\left\\{\\bmath u \\cdot \\bmath B, \n\\left[\\bmath B + (\\bmath u \\cdot\\bmath B)\\bmath u\\right]\/\\gamma\\right\\}$ \nis the covariant form of the magnetic field. 
\nThe magnetic field evolves according to the induction equation of\nideal MHD,\n\\begin{equation}\n \\partial_\\nu (u^\\nu b^\\mu - u^\\mu b^\\nu)=0.\\label{geq:induction}\n\\end{equation}\nThen the energy momentum conservation equation and the induction\nequation are explicitly integrated using the HLL or higher-order\napproximate Riemann solvers \\citep{2005MNRAS.364..126M,\n2006MNRAS.368.1040M, 2007JCoPh.223..643H,2009MNRAS.393.1141M}.\nThe source term describing the interaction between the matter and the\nradiation is integrated by applying our proposed scheme. \n\nAnother simplification made in this paper is the adoption of the Eddington\napproximation, in which the radiation field is assumed to be isotropic in the\ncomoving frame. The radiation field then has a wave\npropagation speed of $c\/\\sqrt{3}$. This would\nbecome an issue when the flow speed is relativistic ($v\\simeq c$): the\nflow speed can, in principle, exceed the phase speed of the light mode, so that\nthe radiation energy might accumulate in front of the plasma flow. \nThe same problem appears when we consider magnetofluids, since\nthe fast magnetosonic wave speed can exceed the reduced light\nspeed $c\/\\sqrt{3}$.\nAnother problem is that the radiation does not propagate in a straight\nline, since we assume that the radiation field is isotropic. When an optically\nthick medium with a finite volume is irradiated from one side, there\nshould be a\nshadow on the other side \\citep{2003ApJS..147..197H}. Such a shadow is\nno longer formed when we\nutilize the Eddington approximation \\citep{2007A&A...464..429G}. \nTo overcome these problems, we should allow for an anisotropy of the radiation flux in\nthe comoving frame \\citep{1976UCRL...78378L.CA,1978JQSRT..20..541M,\n1984JQSRT..31..149L}. \nIn our formulation, we do not specify the closure relation until\n\\S~\\ref{closure}. 
Thus we can employ the M-1 closure by replacing the matrix\n$A(u)$, without any other modification.\nThe explicit-implicit scheme for relativistic radiation\nmagnetohydrodynamics with the M-1 closure will be reported in the near\nfuture. \n\n\\section{Summary}\\label{summary}\nWe have developed a numerical scheme for solving special relativistic\nradiation hydrodynamics which ensures the conservation of total\nenergy and flux. The hyperbolic term is explicitly solved using an\napproximate Riemann solver, while the source term describing the\ninteraction between the matter and the radiation is implicitly\nintegrated using an iteration method. \nThe advantage of the implicit scheme is that \nwe can take a numerical time step larger than the absorption and\nscattering time scales.\nThis allows us to study the long-term evolution of the system\n(typically, over the dynamical timescale). \n\nWhen integrating the source term, we need to invert the matrices $\\bmath C$\nand $\\bmath A$ in our proposed scheme. \nWe note that the size of these matrices used in the implicit\nscheme is very small ($4\\times 4$ for $\\bmath C$ and $6 \\times 6$\nfor $\\bmath A$). This is because the interaction between\nthe radiation and the gas is local in nature (a source term)\nwhen we solve the 0th and 1st moments of the radiation. \nThis means that the numerical code can be easily parallelized, and a high\nparallelization efficiency is expected. \n\nWe note that since $\\bmath C$ is only a $4\\times 4$ matrix, we can invert it\nanalytically. The matrix $\\bmath A$ is also relatively small, but it\nis difficult to invert analytically. Thus we decided to use the LU-decomposition\nmethod. Since the LU-decomposition method inverts the \nmatrix directly without iterations, we obtain $\\bmath A^{-1}$ stably. 
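The direct, iteration-free inversion of a small matrix via LU decomposition can be sketched as follows. The matrix below is a random well-conditioned stand-in, not the actual $\bmath A(u)$ of Appendix \ref{apPr}; the point is that one factorization plus a forward and a back substitution per unit vector yields the inverse column by column, with no iteration.

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU factorization without pivoting (adequate for
    well-conditioned matrices with nonzero leading minors)."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_invert(A):
    """Assemble A^{-1} column by column from one LU factorization."""
    n = A.shape[0]
    L, U = lu_decompose(A)
    inv = np.empty((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = 1.0
        y = np.zeros(n)
        for i in range(n):                 # forward substitution: L y = e_k
            y[i] = e[i] - L[i, :i] @ y[:i]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):     # back substitution: U x = y
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        inv[:, k] = x
    return inv

rng = np.random.default_rng(0)
A = rng.random((6, 6)) + 6.0 * np.eye(6)   # well-conditioned 6x6 stand-in
A_inv = lu_invert(A)
```

In a production code one would of course reuse the $L$ and $U$ factors for all six right-hand sides, as done here, so the cost per cell stays that of a single factorization.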
\n\nWe find that the wave front of the radiation field can be sharply\ncaptured using the HLL scheme even when we adopt the simple closure relation\n$P'^{ij}_r=\\delta^{ij}\/3$ (i.e., the Eddington approximation).\nWhen we adopt the FLD approximation, such a sharp wave front cannot be\ncaptured, due to the diffusion approximation. Thus, solving the first-order\nmoment of the radiation has an advantage when we consider optically thin\nmedia, although the wave speed reduces to $c\/\\sqrt{3}$ and the\nradiation field is isotropic. \n\nWe adopted an iteration scheme to integrate the stiff source\nterms. If many iteration steps were needed, a load\nimbalance would arise among the cores when the code is parallelized. \nWe stress, however, that the solutions converged within only two steps\neven for $\\Delta t = 10^4 t_{ab}$ in the radiative heating\nand cooling problem of \\S~\\ref{radheatcool}. For the other tests, the\nsolution converges without iteration. Thus we expect that the load\nimbalance due to the iteration scheme is not severe even in more \nrealistic problems.\n\nIn previous papers \\citep{2008PhRvD..78b4023F, 2011MNRAS.tmp.1386Z},\nthe radiation fields are defined in different frames for the primitive and \nconservative variables (mixed frame). \nIn such a method, the radiation moment equations take a simple\nform. However, this method is not suitable for implicit integration, \nbecause a Lorentz transformation between the two frames is\nneeded. Under this transformation, the expressions for the radiation\nfields become quite complex even if the Eddington approximation is adopted. \nMoreover, the radiation stress tensor $P_r$ is in general a non-linear function \nof the radiation energy density and the radiation flux through the \nEddington tensor. Thus, the extension of an explicit method to an implicit one \nis not straightforward for relativistic radiation hydrodynamics.\n\nIn our method, we treat the radiation fields in the laboratory frame only. 
\nSuch a treatment simplifies the implicit integration. Although we\nneed to invert a $6\\times 6$ matrix and a $4\\times 4$ matrix, our new method \nis simpler than the mixed-frame method. The two matrices $\\bmath C$ \n(which is needed for the implicit integration) and $\\bmath A$ \n(which is needed to compute $P_r$) are directly inverted using an analytical \nsolution and the LU-decomposition method, respectively. Since we do not need\niterative methods to invert them, our method is stable and comparatively\nsimple. Our method would be quite useful for numerical simulations of\nrelativistic astrophysical phenomena, e.g., black hole\naccretion disks, relativistic jets, gamma-ray bursts, and so on, since\nboth high-density and high-velocity regions exist, and\nsince the radiation processes play an important role for the dynamics as\nwell as the energetics.\n\n\\acknowledgments\nWe thank Tomoya Takiwaki for fruitful discussions.\nNumerical computations were carried out on Cray XT4 at the Center for\nComputational Astrophysics, CfCA, at the National Astronomical Observatory\nof Japan, on Fujitsu FX-1 at the JAXA Supercomputer System (JSS) at the Japan\nAerospace Exploration Agency (JAXA), and on T2K at the University of\nTokyo. This work is supported in part by Ministry of Education, Culture,\nSports, Science, and Technology (MEXT) for Research Activity Start-up (HRT)\n23840045, and for Young Scientist (KO) 20740115\n(YS) 21018008, 21105511, 23740160 (TI) 22$\\cdot$3369, 23740154.\nKT is supported by the Research Fellowship from the Japan Society for\nthe Promotion of Science (JSPS) for Young Scientists.\nA part of this research has been funded by the MEXT HPCI STRATEGIC PROGRAM.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sec:sec1}\n\nSome of the prime targets of the currently operating network of ground-based detectors of \ngravitational waves (GWs) are the signals emitted by inspiralling and coalescing compact binaries. 
\nHere, ``compact binary'' refers to a binary system made either of two black holes, a black hole and a \nneutron star, or two neutron stars. The GW signal emitted by binary black hole (BBH) systems has \nbeen the subject of intense theoretical studies, based either on analytical methods or on numerical \nones. In particular, recent progress in the application of the effective one body (EOB) approach to \nBBH systems has led to a remarkable agreement between the (analytical) EOB predictions and the \nbest current numerical relativity results~\\cite{Damour:2009kr,Buonanno:2009qa}\n(see also~\\cite{Yunes:2009ef}).\nBy contrast, much less work has been devoted to the study of \nthe GW signal emitted by compact binaries comprising neutron stars: either black-hole-neutron-star \n(BHNS) systems or binary neutron-star (BNS) ones. During the inspiral phase (before contact), these \nsystems differ from the BBH ones by the presence of tidal interactions which affect both the dynamics \nof the inspiral and the emitted waveform. During the merger and coalescence phase, the presence of \nneutron stars drastically modifies the GW signal~\\cite{Baiotti:2009gk,Giacomazzo:2009mp,Baiotti:2008ra}. \nThe coalescence signal involves (especially in the \nBNS case) a lot of complicated physics and astrophysics, and is, probably, not amenable to the type \nof accurate analytical description which worked in the BBH case. Early works\non this problem have tried to approximately relate some qualitative features\nof the merger GW\nsignal linked, e.g., to ``tidal disruption'', to analytically describable\ninputs~\\cite{Bildsten:1992my,Kochanek:1992wk,Vallisneri:1999nq}.\n \nRecently, Flanagan and Hinderer~\\cite{Flanagan:2007ix,Hinderer:2007mb,Hinderer:2009} have\ninitiated the program of studying the quantitative influence of\ntidal effects~\\cite{Hinderer:2007mb,Damour:2009vw,Binnington:2009bb} \nin inspiralling BNS systems. 
\nHowever, they only considered the early (lower frequency) portion of \nthe GW inspiral signal, mainly because they were using a post-Newtonian-based\ndescription of the binary dynamics whose validity is restricted to\nlow enough frequencies.\nIn particular, one of the results of the recent work of \nHinderer et al.~\\cite{Hinderer:2009} is to show that the accumulated\nGW phase due to tidal interactions is, for most realistic NS models of\nmass $M\\sim 1.4M_\\odot$, {\\it smaller} than the ``uncertainty'' in\nthe PN-based description of GW phasing (see the central panel of\ntheir Fig.~4, where the thin-dashed and thin-dotted lines are two\nmeasures of the PN ``uncertainty''). [These measures are larger than\nthe inspiral tidal signal except in the extreme case where the \nradius of the $1.4M_\\odot$ NS is taken to be $\\geq 16$ km.]\n\nBy contrast, our aim in this work will be to propose a way of describing\nthe binary dynamics (including tidal effects) whose validity does not\nsuffer from the limitations of PN-based descriptions and is therefore\nnot a priori limited\nto the low-frequency part, but extends to significantly higher \nfrequencies. This might be crucial to increase the detectability of the\nGW signal and thereby to gain a handle on the nuclear equation of state (EOS).\nIndeed, our proposal consists in extending the EOB method\nby incorporating tidal effects in it. \nOur hope is that such a tidally-extended EOB framework will be able to describe with \nsufficient approximation not only the early inspiral phase, but also the late inspiral up to the moment \n(that we shall consistently determine within our scheme) of ``contact''. \nWe think that the present EOB description of tidal effects is likely to be\nmore accurate than any of the possible \n``post-Newtonian-based'' descriptions involving supplementary tidal terms \n(such as~\\cite{Flanagan:2007ix} or~\\cite{Hinderer:2009}). 
\nThis should be especially true in BHNS systems, which, in the limiting case \n$m_{\\rm NS} \\ll m_{\\rm BH}$, are known to be well described by the EOB\napproach (and rather badly described by post-Newtonian-based approaches).\nWe will give some evidence of the validity of the EOB description of close\nneutron star systems by comparing our analytical predictions to recently\ncalculated quasi-equilibrium neutron star (NS) sequences of circular\norbits~\\cite{Uryu:2009ye} (see also~\\cite{Uryu:2005vv}).\n\n\\section{Effective-action description of tidal effects in two-body systems}\n\\label{sec:sec2}\n\n\\subsection{General formalism}\n\\label{sec:general}\n\nThe general relativistic tidal properties of neutron stars have been recently\nstudied in Refs.~\\cite{Hinderer:2007mb,Damour:2009vw,Binnington:2009bb,Hinderer:2009}. \nAs emphasized in~\\cite{Damour:2009vw}, there are (at least) three different \ntypes of tidal responses of a neutron star \nto an external tidal solicitation, which are measured by three different tidal\ncoefficients: (i) a gravito-electric-type coefficient \n$G\\mu_\\ell = [{\\rm length}]^{2\\ell +1}$ measuring the ${\\ell }^{\\rm th}$-order \nmass multipolar moment $G M_{a_1\\dots a_{\\ell }}$ induced in a star by an external\n${\\ell }^{\\rm th}$-order gravito-electric tidal field $G_{a_1,\\dots,a_{\\ell }}$; (ii)\na gravito-magnetic-type coefficient $G\\sigma_{\\ell }=[{\\rm length}]^{2{\\ell }+1}$ \nmeasuring the ${\\ell }^{\\rm th}$ spin multipole moment $G S_{a_1 \\dots a_{\\ell }}$\ninduced in a star by an external ${\\ell }^{\\rm th}$-order gravito-magnetic tidal\nfield $H_{a_1\\dots a_{\\ell }}$; and (iii) a dimensionless ``shape'' Love number\n$h_\\ell$ measuring the distortion of the shape of the surface of a star by an\nexternal ${\\ell }^{\\rm th}$-order gravito-electric tidal field. 
It was found in\n\\cite{Damour:2009vw,Binnington:2009bb} that all those coefficients have a strong sensitivity to the\nvalue of the star's ``compactness'' $c\\equiv GM\/c_0^2 R$ (where we denote by\n$c_0$ the velocity of light, to be distinguished from the compactness $c$).\nThis means, in particular, that the numerical values of the tidal coefficients\nof NS's should not be evaluated by using Newtonian estimates. Indeed, the\ndimensionless version of $\\mu_\\ell$, traditionally denoted as $k_\\ell$\n(``second Love number'') and defined as\n\\be\n\\label{eq:kl}\n2 k_{\\ell } \\equiv (2{\\ell } -1)!! \\dfrac{G\\mu_{\\ell }}{R^{2{\\ell } +1}} ,\n\\end{equation}\nwhere $R$ denotes the areal radius of the NS, is typically three times smaller\nthan its Newtonian counterpart (computed from the same equation of state). A\nsimilar, though less drastic, ``quenching'' also occurs for the ``first Love\nnumber'' $h_{\\ell }$. In particular, though Newtonian $h_{\\ell }$'s are larger than $1$\n(and equal to $1+2k_{\\ell }$, see Eq.~(81) of \\cite{Damour:2009vw}), the typical relativistic\nvalues of $h_{\\ell }$ are smaller than $1$. This will play a useful role in our\nanalysis below of the moment where the tidal distortion of the NS becomes too\nlarge for continuing to use an analytical approach. \n\nIt was shown in \\cite{Damour_cras80, Damour1983} that the \nmotion and radiation of two black holes can be described, up \nto the fifth post-Newtonian (5PN) approximation, by an \neffective action of the form\n\\be\n\\label{eq:2.1n}\nS_0 = \\int d^D x \\, \\dfrac{\\sqrt{g} \\, R(g)}{16\\pi \\, G} + S_{\\rm point\\mbox{-}mass},\n\\end{equation}\nwhere\n\\be\n\\label{eq:2.2}\nS_{\\rm point\\mbox{-}mass} = -\\sum_A \\int M_A \\, ds_A ,\n\\end{equation}\nis a ``skeletonized'' description of black holes, as ``point masses''. \nTo give meaning to the addition of point-mass sources to the nonlinear \nEinstein equations, one needs to use a covariant regularization\nmethod. 
Refs.~\\cite{Damour_cras80, Damour1983} mainly used \nRiesz' analytic regularization, but it was already mentioned at the time that \none could equivalently use dimensional regularization. The efficiency \nand consistency of the latter method was shown by the calculations of the dynamics, \nand radiation, of BBH systems at the 3PN\nlevel~\\cite{Damour:2001bu,Blanchet:2003gy,Blanchet:2004ek}. Let us also \nrecall that the limitation to the 5PN \nlevel in Ref.~\\cite{Damour1983} is precisely linked to the possible appearance of ambiguities in the BBH \ndynamics at the level where tidal effects start entering the picture. \nIndeed, it is well-known in effective field theory that finite-size effects correspond\nto augmenting the point-mass action~(\\ref{eq:2.1n}) by non-minimal (worldline) \ncouplings involving higher-order derivatives of the field \n[see~\\cite{Damour:1995kt,Goldberger:2004jt} and Appendix~A of Ref.~\\cite{Damour:1998jk}].\nMore precisely, \nthe two tidal effects parametrized by $\\mu_{\\ell }$ and $\\sigma_{\\ell }$ correspond to augmenting the leading \npoint-particle effective action, (\\ref{eq:2.1n}), (\\ref{eq:2.2}), by the following nonminimal worldline\ncouplings\n\\begin{eqnarray}\n\\label{eq:2.3}\n\\Delta S_{\\rm nonminimal} &= &\\sum_A \\biggl\\{ \\frac{1}{2} \\, \\frac{1}{{\\ell }!} \\, \\mu_{\\ell }^A \\int ds_A (G_L^A)^2 \n\\nonumber \\\\\n&+ &\\frac{1}{2} \\, \\frac{{\\ell }}{{\\ell } + 1} \\, \\frac{1}{{\\ell }!} \\, \\frac{1}{c_0^2} \\, \\sigma_{\\ell }^A \\int ds_A (H_L^A)^2 \\biggr\\} .\n\\end{eqnarray}\nHere\\footnote{We use here the notation of~\\cite{Damour:1990pi}, notably for multi-indices $L \\equiv a_1 , \\ldots , a_{\\ell }$.} \n$G_L^A \\equiv G_{a_1 \\ldots a_{\\ell }}^A$ and $H_L^A \\equiv H_{a_1 \\ldots a_{\\ell }}^A$ are the \ngravito-electric and gravito-magnetic ``external'' tidal gradients evaluated along the worldline of the \nconsidered star (labelled by $A$), in the local frames (attached to body $A$) defined 
in~\\cite{Damour:1990pi}. \nIf needed, they can be reexpressed in terms of covariant derivatives of the Riemann (or Weyl) \ntensor. For instance, using Eq.~(3.40) of~\\cite{Damour:1990pi}, the leading, \nquadrupolar terms in Eq.~(\\ref{eq:2.3}) read\n\\begin{eqnarray}\n\\label{eq:2.4} \n\\Delta S_{\\rm nonminimal} &= &\\sum_A \\biggl\\{ \\frac{1}{4} \\, \\mu_2^A \\int ds_A \\, {\\mathcal E}_{\\alpha\\beta}^A \n\\, {\\mathcal E}_A^{\\alpha\\beta} \\nonumber \\\\\n&+ &\\frac{1}{6} \\, \\sigma_2^A \\int ds_A \\, {\\mathcal B}_{\\alpha\\beta}^A \\, \n{\\mathcal B}_A^{\\alpha\\beta} + \\cdots \\biggr\\}\n\\end{eqnarray}\nwhere ${\\cal E}_{\\alpha\\beta}^A \\equiv [u^{\\mu} \\, u^{\\nu} \\, C_{\\mu\\alpha\\nu\\beta}]^A$, ${\\cal B}_{\\alpha\\beta}^A \\equiv [u^{\\mu} \\, u^{\\nu} \\, C_{\\mu\\alpha\\nu\\beta}^*]^A$, with $C_{\\mu\\nu\\alpha\\beta}^* \\equiv \\frac{1}{2} \\, \\epsilon_{\\mu\\nu\\rho\\sigma} \\, C^{\\rho\\sigma}_{\\alpha\\beta}$ being the dual of the Weyl tensor $C$, and $u^{\\mu} = dz^{\\mu} \/ ds$ \nbeing the four-velocity along the considered worldline. As explained in Appendix A of \nRef.~\\cite{Damour:1998jk}, one can, modulo some suitable ``field redefinitions'' that do \nnot affect the leading result, interchangeably use the Weyl tensor \n$C_{\\alpha\\beta\\mu\\nu}$ or the Riemann tensor $R_{\\alpha\\beta\\mu\\nu}$ in evaluating the ${\\cal E}_{\\alpha\\beta}$ and \n${\\cal B}_{\\alpha\\beta}$ entering Eq.~(\\ref{eq:2.4}).\n\n\nThe effective-action terms (\\ref{eq:2.3}), (\\ref{eq:2.4}) can be used to compute the various observable \n effects linked to the relativistic tidal coefficients $\\mu_{\\ell }$ and $\\sigma_{\\ell }$~\\footnote{More precisely, \n Eq.~(\\ref{eq:2.3}) describes only the effects that are {\\it linear} in tidal\n deformations (and which preserve {\\it parity}). 
If one wished to also \n consider {\\it nonlinear} tidal effects one should augment the {\\it quadratic-only} terms (\\ref{eq:2.4}) by \n higher-order nonminimal worldline couplings which are cubic, quartic, etc$\\ldots$ in $C_{\\mu\\alpha\\nu\\beta}$ and \n its gradients. The coefficients of such terms would then parametrize some {\\it nonlinear} tidal effects, \n which have not been considered in the linear treatments of Refs.~\\cite{Damour:2009vw,Binnington:2009bb}.}. \n In particular, they imply both: (i) additional terms in the dynamics of the considered binary system, and \n (ii) additional terms in the gravitational radiation emitted by the considered binary system. \n Both types of additional terms can, in principle, be evaluated with any needed relativistic accuracy \n from Eq.~(\\ref{eq:2.3}), i.e. computed either in a ``post-Minkowskian'' (PM) expansion in powers of \n $G\/c_0^2$, or (after a further re-expansion in powers of $1\/c_0$), in a ``post-Newtonian'' (PN) \n expansion in powers of $1\/c_0^2$. Let us remark in passing that the PM expansion can be \n conveniently expressed in terms of Feynman-like diagrams, as was explicitly discussed (for \n tensor-scalar gravity) at the 2PN level in~\\cite{Damour:1995kt}.\n \n \n Here we shall use the extra terms \n (\\ref{eq:2.3}), (\\ref{eq:2.4}) as a way to {\\it add} to the description of binary black hole systems \n the effects linked to the replacement of one or two of the black holes by a neutron star. From this \n point of view, we shall conventionally consider that the tidal coefficients of a black hole vanish: \n $\\mu_{\\ell }^{\\rm BH} = 0 = \\sigma_{\\ell }^{\\rm BH}$~\\cite{Damour:2009vw,Binnington:2009bb}.\n However, as emphasized in~\\cite{Damour:2009vw}, \n more work is needed to clarify whether this is exact, i.e. 
whether the description of BBH's \n by an effective action does or does not require the presence of additional couplings of the \n type of Eqs.~(\\ref{eq:2.3}), (\\ref{eq:2.4}), as ``counter terms'' to absorb\ndimensional regularization poles $\\propto (D-4)^{-1}$ (such poles are indeed linked\nto the possible ambiguities expected to arise at 5PN in the point-mass dynamics; \nsee the discussion in Sec.~5\n of~\\cite{Damour1983}; see also Sec.~7 of~\\cite{Damour:2009sm}). \n We leave to future work a clarification of this subtle issue. \n\n \n\\subsection{Leading-order tidal effects in the two-body interaction Lagrangian}\n \nLet us first consider the {\\it dynamical} effects implied by (\\ref{eq:2.3}),\ni.e. the tidal contribution to the ``Fokker'' Lagrangian describing \nthe dynamics of two compact bodies after having integrated out the gravitational field, say \n\\be\n\\label{eq:2.5}\nL ({\\bm q}^A , {\\bm v}^A) = L^{\\rm point\\mbox{-}mass} + L^{\\rm tidal} \\, .\n\\end{equation}\nHere, $L^{\\rm point\\mbox{-}mass} (q,v)$ denotes the (time-symmetric) interaction Lagrangian \nfollowing from the point-mass action (\\ref{eq:2.1n}) (say after a suitable redefinition of position variables \nto eliminate higher derivatives). It is currently known at the 3PN level.
The supplementary term \n$L^{\\rm tidal}$ in Eq.~(\\ref{eq:2.5}) is of the symbolic form (keeping only powers of $G$ and $1\/c_0$) \n\\begin{eqnarray}\n\\label{eq:2.5bis}\nL^{\\rm tidal} &\\sim &G^2 \\, \\mu_2 \\left( 1 + \\frac{1}{c_0^2} + G + \\cdots \\right) \\nonumber \\\\\n&+ &\\frac{G^2 \\, \\sigma_2}{c_0^2} \\, \\left( 1 + \\frac{1}{c_0^2} + G + \\cdots \\right) \\nonumber \\\\\n&+ &G^2 \\, \\mu_3 \\left( 1 + \\frac{1}{c_0^2} + G + \\cdots \\right) + \\cdots\n\\end{eqnarray}\nLet us start by discussing the {\\it leading order} contributions associated to each\ntidal coefficient $\\mu_\\ell$ or $\\sigma_\\ell$.\nThe leading term in the contribution linked to $\\mu_{\\ell }$ is simply obtained from (\\ref{eq:2.3}) by \ninserting the leading-order value of $G_L^A$, i.e. $(L \\equiv a_1 \\ldots a_{\\ell })$\n\\be\n\\label{eq:2.6}\nG_L^A = \\left[ \\partial_L U^{\\rm ext }( {\\bf x} ) \\right]^A = \\partial_L^A \n\\left( \\frac{GM^B}{\\vert {\\bm z}_A - {\\bm z}_B \\vert} \\right)\n\\end{equation}\nwhere $B \\ne A$ denotes the companion of body $A$ in the considered binary system ($A,B=1,2$), \nand $\\vert {\\bm z}_A - {\\bm z}_B \\vert$ the distance between the two bodies. In addition \n$\\partial_L^A \\equiv \\partial_{a_1 \\ldots a_{\\ell }}^A$, with $\\partial_a^A \\equiv \\partial \/ \\partial z_A^a$, \ndenotes the differentiation with respect to ${\\bm z}_A$ that appear after taking the limit where \nthe field point ${\\bm x}$ tends to ${\\bm z}_A$ on the worldline of body $A$. Using\n\\be\n\\label{eq:2.7}\n\\partial_L^A \\, \\frac{1}{r_{AB}} = (-)^{\\ell } \\, (2{\\ell } - 1)!! 
\\, \\frac{\\hat n_{AB}^L}{r_{AB}^{{\\ell } + 1}}\n\\end{equation}\nwhere $n_{AB}^a \\equiv (z_A^a - z_B^a) \/ r_{AB}$, $r_{AB} \\equiv \\vert {\\bm z}_A - {\\bm z}_B \\vert$, \nand where the hat denotes a symmetric trace-free (STF) projection, and the fact that \n(see, e.g., Eq.~(A25) of~\\cite{BD86})\n\\be\n\\label{eq:2.8}\n\\hat n^L_{AB}\\, \\hat n^L_{AB} = \\hat n^L_{AB} \\, n^L_{AB} = \\frac{{\\ell } !}{(2{\\ell } - 1)!!} \\, ,\n\\end{equation}\none easily finds that the leading Lagrangian contribution proportional to $\\mu_{\\ell }$ reads\n\\begin{eqnarray}\n\\label{eq:2.9}\nL_{\\mu_{\\ell }^A} &= &\\frac{(2{\\ell }-1)!!}{2} \\, \\mu_{\\ell }^A \\, \\frac{(GM^B)^2}{r_{AB}^{2{\\ell }+2}} \\nonumber \\\\\n&= &k_{\\ell }^A \\, G(M^B)^2 \\, \\frac{R_A^{2{\\ell } + 1}}{r_{AB}^{2{\\ell }+2}} \\, .\n\\end{eqnarray}\nHere we have used (\\ref{eq:kl}) to replace $G\\mu_{\\ell }^A$ in terms of the dimensionless Love number \n$k_{\\ell }^A$, and of the areal radius $R_A$ of the NS. Note that, in a BNS system, one has to add two \ndifferent contributions: $L_{\\mu_{\\ell }^A} + L_{\\mu_{\\ell }^B}$. By contrast, in a BHNS system one has only \n$L_{\\mu_{\\ell }^A}$ if $A$ denotes the NS.\n\nLet us also evaluate the leading ``magnetic-type'' contribution, i.e. the term $\\propto \\sigma_2$ in \n(\\ref{eq:2.5}). It is obtained by inserting in (\\ref{eq:2.3}) the ``Newtonian''-level value of the \ngravito-magnetic quadrupolar field $H_{ab}^{B\/A}$ exerted by body $B$ on body $A$. This is given \nby Eq.~(6.27a) of \\cite{Damour:1991yw}, namely\n\\begin{align}\n\\label{eq:2.10}\nH_{ab}^{B\/A} =& -2G \\, \\partial_{ac}^A \\left( \\frac{\\epsilon_{bcd} \\, M^B \\, v_{BA}^d}{r_{AB}} \\right) \\nonumber \\\\\n&-2G \\, \\partial_{bc}^A \\left( \\frac{\\epsilon_{acd} \\, M^B \\, v_{BA}^d}{r_{AB}} \\right) \n\\end{align}\nwhere $v_{BA}^d \\equiv v_B^d - v_A^d$ is the relative velocity between $B$ and $A$. 
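As a quick cross-check of the algebra (an illustrative aside, not part of the derivation), the STF identity (\ref{eq:2.8}) and the resulting $(2\ell-1)!!/2$ prefactor of Eq.~(\ref{eq:2.9}) can be verified numerically for $\ell=2$; the direction of the unit vector below is arbitrary.

```python
import math

# Unit separation vector n (arbitrary direction).
n = (0.6, 0.8, 0.0)
delta = lambda a, b: 1.0 if a == b else 0.0

# STF quadrupole n_hat_ab = n_a n_b - delta_ab/3, and its self-contraction,
# which Eq. (2.8) predicts to equal l!/(2l-1)!! = 2!/3!! = 2/3 for l = 2.
nhat = [[n[a] * n[b] - delta(a, b) / 3.0 for b in range(3)] for a in range(3)]
contraction = sum(nhat[a][b] ** 2 for a in range(3) for b in range(3))

ell = 2
double_fact = lambda k: math.prod(range(k, 0, -2))  # (2l-1)!!
expected = math.factorial(ell) / double_fact(2 * ell - 1)

print(contraction, expected)            # both equal 2/3
print(double_fact(2 * ell - 1) / 2.0)   # prefactor of Eq. (2.9): 3/2 for l = 2
```

The same check goes through for higher $\ell$ after replacing the explicit quadrupole by the general STF projection.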
\nA straightforward calculation then yields\n\\be\n\\label{eq:2.11}\nL_{\\sigma_2^A} = 12 \\, \\sigma_2^A \\, \\frac{(GM^B)^2}{r_{AB}^6} \\left[ \\left( \\frac{{\\bm v}_{AB}}{c_0} \n\\right)^2 - \\left( \\frac{{\\bm n}_{AB} \\cdot {\\bm v}_{AB}}{c_0} \\right)^2 \\right] \\, .\n\\end{equation}\n\nNote that the leading quadrupolar gravito-magnetic contribution (\\ref{eq:2.11}) is \nsmaller than the corresponding quadrupolar gravito-electric contribution\n\\be\n\\label{eq:2.12}\nL_{\\mu_2^A} = \\frac{3}{2} \\, \\mu_2^A \\, \\frac{(GM^B)^2}{r_{AB}^6}\n\\end{equation}\nby a factor\n\\be\n\\label{eq:2.13}\n8 \\, \\frac{\\sigma_2^A}{\\mu_2^A} \\left[ \\left( \\frac{{\\bm v}_{AB}}{c_0} \\right)^2 - \\left( \\frac{{\\bm n}_{AB} \n\\cdot {\\bm v}_{AB}}{c_0} \\right)^2 \\right] \\, .\n\\end{equation}\nIn terms of the corresponding dimensionless Love numbers $j_2$ (defined in \\cite{Damour:2009vw}) \nand $k_2$, the prefactor $8 \\, \\sigma_2^A \/ \\mu_2^A$ is equal to the dimensionless ratio $j_2 \/ (4k_2)$. \nHowever, it was found in \\cite{Damour:2009vw,Binnington:2009bb} that the magnetic Love number $j_2$ was much smaller \nthan $k_2$. Typically, for a $\\gamma = 2$ polytrope and a compactness $c^A \\sim 0.15$, \none has $j_2 \\simeq - 0.02$, while $k_2 \\sim 0.1$, so that $8 \\, \\sigma_2 \/ \\mu_2 = j_2 \/ (4k_2) \n\\simeq -0.05$. In other words, the leading gravito-magnetic interaction (\\ref{eq:2.11}) is equivalent \n(say for circular orbits) to a 1PN fractional correction factor, $1+\\alpha \\, (v_{AB} \/ c_0)^2$, modifying \nthe leading gravito-electric contribution (\\ref{eq:2.12}), with $\\alpha = 8 \\, \\sigma_2 \/ \\mu_2 = j_2 \/ (4k_2) \n\\sim -0.05$. 
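The smallness of this gravito-magnetic term is easy to quantify with the representative numbers just quoted ($j_2 \simeq -0.02$, $k_2 \sim 0.1$); in the illustrative sketch below, the orbital velocity is an arbitrary choice, not taken from the text.

```python
# Fractional 1PN-like correction factor alpha = 8*sigma_2/mu_2 = j_2/(4*k_2),
# using the representative values quoted in the text.
j2, k2 = -0.02, 0.1
alpha = j2 / (4.0 * k2)
print(alpha)  # approximately -0.05

# For a circular orbit (n.v = 0) the suppression factor of Eq. (2.13)
# is alpha*(v/c0)^2; v/c0 = 0.3 is an illustrative late-inspiral value.
v_over_c = 0.3
print(alpha * v_over_c**2)  # approximately -0.0045
```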
As we shall discuss below, the 1PN correction to (\\ref{eq:2.12}), \nimplied by \\eqref{eq:2.3}, involves coefficients $\\alpha^{\\rm 1PN}$ of order unity.\nWe will therefore, in the following, neglect the \ncontribution (\\ref{eq:2.11}) which represents only a small fractional\nmodification to the 1PN correction to (\\ref{eq:2.12}).\nOn the other hand, we shall retain some of the higher-degree gravito-electric contributions.\nIndeed, though, for instance, $L_{\\mu_3^A}\\propto 1\/r_{AB}^8$ formally\ncorresponds to a 2PN correction to $L_{\\mu_2^A}\\propto 1\/r_{AB}^6$, its\ncoefficient is much larger than that corresponding to an order-unity 2PN\ncorrection to Eq.~\\eqref{eq:2.12} [see Table~\\ref{tab:table1} below].\n\nSummarizing: the leading-order tidal contributions to the two-body interaction Lagrangian are \n(from Eq.~(\\ref{eq:2.9}))\n\\be\n\\label{eq:2.14}\nL^{\\rm tidal} = + G \\sum_{{\\ell } \\geq 2} \\left\\{ k_{\\ell }^A (M^B)^2 \\, \\frac{R_A^{2{\\ell }+1}}{r_{AB}^{2{\\ell }+2}} + k_{\\ell }^B \n(M^A)^2 \\, \\frac{R_B^{2{\\ell }+1}}{r_{AB}^{2{\\ell }+2}} \\right\\} \\, ,\n\\end{equation}\nwhere $k_{\\ell }^A$ denotes the ${\\ell }^{\\rm th}$ dimensionless Love number of a NS \\cite{Hinderer:2007mb,Damour:2009vw,Binnington:2009bb}. \nNote that the plus sign in Eq.~(\\ref{eq:2.14}) expresses the fact that the tidal interactions are {\\it attractive}.\n\n\n\n\\subsection{Structure of subleading (post-Newtonian) dynamical tidal effects}\n\nLeaving to future work~\\cite{DEF09} a detailed computation of higher-order \nrelativistic tidal effects, let us indicate their general structure. Here, we\nshall neglect the effects which are {\\it nonlinear} in the worldline\ncouplings $\\mu^A_\\ell$ of Eq.~\\eqref{eq:2.3} (e.g. effects $\\propto \\mu_2^A \\mu_2^A$)\nfor two reasons. On the one hand, such effects are numerically quite small,\neven for close neutron stars (as we shall check below). 
On the other hand, a \nfully consistent discussion of such effects requires one to consider\na more general version of nonminimal worldline couplings, involving terms which\nare cubic (or more nonlinear) in the curvature tensor and its covariant derivatives.\nIndeed, it is easily seen that a nonminimal coupling which is {\\it cubic} in\n$G_{ab}\\sim {\\cal E}_{\\alpha\\beta}$ contributes to the dynamics at the same\nlevel as a 1PN correction to the coupling quadratic in $G_{abc}$. \n\n\nIn the ``quadratic-in-curvature'' approximation of Eq.~\\eqref{eq:2.3}, the part of the tidal\ninteraction which is proportional to $\\mu_\\ell^A$ will have the symbolic structure\n\\begin{align}\n\\label{symb}\nS_{\\mu^A}\\sim \\mu^A(G M^B)^2\\bigg[1&+GM^A + GM^B \\nonumber\\\\\n &+ \\left(GM^A+GM^B\\right)^2+\\dots \\bigg]\n\\end{align}\nwhere we only indicate the dependence on $GM^A$ and $GM^B$, leaving \nout all the coefficients (symbolically replaced by 1), which depend on positions and\nvelocities. The presence of an overall factor $(GM^B)^2$ comes from the fact that\n$G^A_\\ell (z^\\mu)$ in Eq.~\\eqref{eq:2.3} (which denotes the {\\it regularized} value of \nsome gradient of the curvature tensor as the field point $x$ tends to $z_A^\\mu(s_A)$ \non the worldline of $M^A$) is proportional to $GM^B$, so that it vanishes \nwhen $M^B\\to 0$, i.e. in the limit of a one-body system.\n[We are considering here a two-body system; in the more general case of an\n $N$-body system we would have $G^A(z_A)\\propto \\sum_{B\\neq A} G M^B$.] \nIn a diagrammatic language (see e.g.~\\cite{Damour:1995kt}) the higher-order\nterms on the right hand side (r.h.s.) 
of Eq.~\\eqref{symb}\ncorrespond to diagrams where, besides having the basic (quadratic in\n$h_{\\mu\\nu}$) vertex $\\mu_A$ on the $A$ worldline being connected by two\ngravity propagators to two $GM_B$ ``sources'' on the $B$ worldline, we also\nhave some further gravity propagators connecting one of the worldlines either\nto one of the worldline vertices, or to some intermediate ``field'' vertex.\nNote that the information about the 1PN corrections to both gravito-electric \n($\\mu_\\ell$) and gravito-magnetic ($\\sigma_\\ell$) multipolar interactions \n(of any degree $\\ell$) is contained in the work of \nDamour, Soffel and Xu~\\cite{Damour:1991yw,Damour:1992qi,Damour:1993zn}.\nWe shall discuss below the effect of the subleading (post-Newtonian) terms \nin~\\eqref{symb} on the EOB description of the dynamics of tidally interacting\nbinary systems.\n\n\n\\section{Incorporating dynamical tidal effects in the \nEffective One-Body (EOB) formalism}\n\\label{sec:sec3}\n\n\\subsection{General proposal}\n\nThe EOB formalism~\\cite{Buonanno:1998gg,Buonanno:2000ef,Damour:2001tu}\n replaces the two-body interaction Lagrangian (or\nHamiltonian) by a Hamiltonian of a specific form, which depends \nonly on the relative position and momentum of the binary system, \nsay $({\\bm q},{\\bm p})$. For a non-spinning BBH system, it has been \nshown that its dynamics, up to the 3PN level, can be described by \nthe following EOB Hamiltonian (in polar coordinates, within the \nplane of the motion):\n\\be\n\\label{eq:Heob}\nH_{\\rm EOB}(r,p_{r_*},p_\\varphi) = M\\sqrt{1+2\\nu (\\hat{H}_{\\rm eff}-1)}\n\\end{equation}\nwhere\n\\be\n\\label{eq:Heff}\n\\hat{H}_{\\rm eff} = \\sqrt{p_{r_*}^2 + A(r) \\left( 1 + \\frac{p_{\\varphi}^2}{r^2} + z_3 \\, \\frac{p_{r_*}^4}{r^2} \\right)} \\, .\n\\end{equation}\nHere $M=M_A + M_B$ is the total mass, $\\nu \\equiv M_A \\, M_B \/ (M_A + M_B)^2$ is the symmetric \nmass ratio and $z_3 \\equiv 2\\nu (4-3\\nu)$. 
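For illustration only (this fragment is not part of the original formalism), the EOB Hamiltonian just written can be transcribed directly into code. We use units $G=c_0=1$, the rescaled dimensionless variables defined below, and the 3PN Taylor potential $A^{\rm 3PN}(r)=1-2u+2\nu u^3+a_4\nu u^4$ in place of the resummed $A(r)$; the mass values are illustrative.

```python
import math

def A_3pn(r, nu):
    # 3PN radial potential: A(u) = 1 - 2u + 2 nu u^3 + a4 nu u^4, u = 1/r.
    u = 1.0 / r
    a4 = 94.0 / 3.0 - (41.0 / 32.0) * math.pi**2
    return 1.0 - 2.0 * u + 2.0 * nu * u**3 + a4 * nu * u**4

def H_eob(r, pr_star, pphi, mA, mB):
    # Real EOB Hamiltonian built from the effective one (rescaled variables).
    M = mA + mB
    nu = mA * mB / M**2
    z3 = 2.0 * nu * (4.0 - 3.0 * nu)
    A = A_3pn(r, nu)
    H_eff = math.sqrt(pr_star**2
                      + A * (1.0 + pphi**2 / r**2 + z3 * pr_star**4 / r**2))
    return M * math.sqrt(1.0 + 2.0 * nu * (H_eff - 1.0))

# Sanity check: at large separation and zero momenta the Hamiltonian
# reduces to the total rest mass M = mA + mB.
mA = mB = 1.35  # solar masses (illustrative)
print(H_eob(1.0e8, 0.0, 0.0, mA, mB))  # close to 2.70
```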
In addition we are using rescaled dimensionless (effective) \nvariables, notably $r = r_{AB} \/ GM$ and $p_{\\varphi} = P_{\\varphi} \/ (GM_A M_B)$, and $p_{r_*}$ is \ncanonically conjugated to a ``tortoise'' modification of $r$~\\cite{Damour:2009ic}.\n\nA remarkable feature of the EOB formalism is that the complicated, \noriginal 3PN Hamiltonian (which contains many corrections to the basic \nNewtonian Hamiltonian $\\frac{1}{2} \\, {\\bm p}^2 + 1\/r$) can be replaced \nby the simple structure (\\ref{eq:Heob}), (\\ref{eq:Heff}) whose two crucial \ningredients are: (i) a ``double square-root'' structure \n$H_{\\rm EOB} \\sim \\sqrt{1+\\sqrt{{\\bm p}^2 + \\cdots}}$, and (ii) the \n``condensation'' of most of the nonlinear relativistic gravitational \ninteractions in one function of the (EOB) radial variable: \nthe basic ``radial potential'' $A(r)$. In addition, the structure of the function \n$A(r)$ is quite simple. At the 3PN level it is simply equal to\n\\be\n\\label{eq:3.3}\nA^{\\rm 3PN} (r) = 1-2u+2 \\, \\nu \\, u^3 + a_4 \\, \\nu \\, u^4 \\, ,\n\\end{equation}\nwhere $a_4 = 94\/3 - (41\/32)\\pi^2$, and $u \\equiv 1\/r = GM\/r_{AB}$. \nIt was recently found~\\cite{Damour:2009kr} \nthat an excellent description of the dynamics of BBH systems \nis obtained by: (i) augmenting the presently computed terms \nin the PN expansion (\\ref{eq:3.3}) by additional \n4PN and 5PN terms, and by (ii) Pad\\'e-resumming the corresponding \n5PN ``Taylor'' expansion of the $A$ function. 
In other words, \nBBH (or ``point mass'') dynamics is well described by a function of the form\n\\be\n\\label{eq:3.4}\nA^0(r) = P^1_5\\left[1-2u+2\\nu u^3 + a_4 \\nu u^4 + a_5\\nu u^5 + a_6\\nu u^6\\right] ,\n\\end{equation}\nwhere $P^n_m$ denotes an $(n,m)$ Pad\\'e approximant.\nIt was found in Ref.~\\cite{Damour:2009kr} that a good agreement between\nEOB and numerical relativity binary black hole waveforms is obtained \nin an extended ``banana-like'' region in the $(a_5,a_6)$ plane approximately \nextending between the points $(a_5,a_6)=(0,-20)$ and $(a_5,a_6)=(-36,+520)$.\nIn this work we shall select the values $a_5=-6.37$, $a_6=+50$ which lie\nwithin this good region.\n\nOur proposal for incorporating dynamical tidal effects in the EOB formalism\nconsists in preserving the simple general structure\n\\eqref{eq:Heob}, \\eqref{eq:Heff} of the EOB Hamiltonian, but to modify the BBH\nradial potential~\\eqref{eq:3.4} (which corresponds to the point-mass action~\\eqref{eq:2.1n})\nby augmenting it by some ``tidal contribution''. 
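For concreteness, the Pad\'e construction entering Eq.~(\ref{eq:3.4}) can be sketched numerically. The fragment below (an illustrative aside, not a production implementation) builds the $P^1_5$ approximant, interpreted as a degree-1 numerator over a degree-5 denominator, by matching the Taylor coefficients through $u^6$; it uses $\nu=0.25$ and the values $a_5=-6.37$, $a_6=+50$ selected above.

```python
import math

nu = 0.25
a4 = 94.0 / 3.0 - (41.0 / 32.0) * math.pi**2
a5, a6 = -6.37, 50.0  # values selected in the text

# Taylor coefficients c_k of 1 - 2u + 2 nu u^3 + a4 nu u^4 + a5 nu u^5 + a6 nu u^6.
c = [1.0, -2.0, 0.0, 2.0 * nu, a4 * nu, a5 * nu, a6 * nu]

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for k in range(i, n + 1):
                A[r][k] -= f * A[i][k]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# P^1_5 Pade approximant N(u)/D(u) (deg N = 1, deg D = 5, D(0) = 1):
# imposing D(u)*T(u) - N(u) = O(u^7) yields 5 linear equations for d_1..d_5.
M = [[c[k - j] if k - j >= 0 else 0.0 for j in range(1, 6)] for k in range(2, 7)]
d = solve(M, [-c[k] for k in range(2, 7)])  # denominator coefficients d_1..d_5
n0, n1 = c[0], c[1] + d[0] * c[0]           # numerator coefficients

def A0(u):
    return (n0 + n1 * u) / (1.0 + sum(dj * u**(j + 1) for j, dj in enumerate(d)))

def A_taylor(u):
    return sum(ck * u**k for k, ck in enumerate(c))

# The approximant reproduces the Taylor series through O(u^6),
# so the two agree closely at small u.
print(A0(0.02), A_taylor(0.02))
```

The resummed $A^0(u)$ and its Taylor expansion then differ only at large $u$, which is precisely the strong-field region where the resummation matters.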
In other words the proposal\nis to use Eqs.~\\eqref{eq:Heob}, \\eqref{eq:Heff} with\n\\be\n\\label{eq:3.5}\nA(r) = A^0(r) + A^{\\rm tidal} (r) \\, .\n\\end{equation}\n\n\n\\subsection{Incorporating leading order (LO) dynamical tidal interactions}\n\nLet us show that, at the leading order (LO), one can use a tidal contribution of\nthe form\n\\be\n\\label{eq:3.6}\nA^{\\rm tidal}_{LO} (r) = -\\sum_{{\\ell }\\geq 2} \\kappa_\\ell^{\\rm T} u^{2\\ell + 2} , \n\\end{equation}\nwith some dimensionless coefficient $\\kappa_\\ell^{\\rm T}$.\n\nIndeed, if we keep only the Newtonian approximation of the full EOB Hamiltonian \\eqref{eq:Heob}, \n\\eqref{eq:Heff} (using $A(r) \\equiv 1 + \\bar A (r)$ with $\\bar A (r) = -2 \\, GM \/ (c_0^2 \\, r_{AB}) + \\cdots$ \nbeing 1PN small as $1\/c_0^2 \\to 0$) one finds (with $\\mu \\equiv M^A M^B \/ M$)\n\\be\nH_{\\rm EOB} \\simeq M \\, c_0^2 + \\frac{1}{2} \\mu\\, {\\bm p}^2 + \\frac{1}{2} \\mu \\, \\bar A (r) + \n{\\mathcal O} \\left( \\frac{1}{c_0^2} \\right) \\, ,\n\\end{equation}\nwhich exhibits the role of $\\frac{1}{2} \\, \\mu \\, \\bar A (r)$ as being the interaction energy. 
\nDecomposing $\\bar A (r) = \\bar A^0 (r) + A^{\\rm tidal} (r)$, and remembering that there is a sign \nreversal between the interaction energy and the interaction Lagrangian, \nwe see that the terms \\eqref{eq:2.14} can be converted in a contribution to the $A(r)$ potential \nof the form \\eqref{eq:3.6}, if the coefficients $\\kappa_{{\\ell }}^{\\rm T}$ take the values\n\\begin{eqnarray}\n\\label{eq:3.7}\n\\kappa_{{\\ell }}^{\\rm T} &= &2 \\, k_{\\ell }^A \\, \\frac{M_B}{M_A} \\left( \\frac{R_A \\, c_0^2}{G (M_A + M_B)} \n\\right)^{2{\\ell } + 1} \\nonumber \\\\\n&&+ \\, 2 \\, k_{\\ell }^B \\, \\frac{M_A}{M_B} \\left( \\frac{R_B \\, c_0^2}{G(M_A + M_B)} \\right)^{2{\\ell } + 1} \n\\nonumber \\\\\n&= &2 \\, \\frac{M_B \\, M_A^{2{\\ell }}}{(M_A + M_B)^{2{\\ell } + 1}} \\, \\frac{k_{\\ell }^A}{c_A^{2{\\ell } + 1}} \\nonumber \\\\\n&&+ \\, 2 \\, \\frac{M_A \\, M_B^{2{\\ell }}}{(M_A + M_B)^{2{\\ell } + 1}} \\, \\frac{k_{\\ell }^B}{c_B^{2{\\ell } + 1}} \\, .\n\\end{eqnarray}\nIn the second form, we have introduced the compactness parameters of the stars: \n$c_A \\equiv GM_A \/ (R_A \\, c_0^2)$. It is interesting to note that the dimensionless tidal parameters \nthat enter the EOB dynamics are (when $M_A \\sim M_B$) the ratios $k_{\\ell }^A \/ c_A^{2{\\ell } + 1}$, rather \nthan the Love numbers $k_{\\ell }^A$. Let us also note that the velocity of light $c_0$ formally appears in \nthe numerator of $\\kappa_{\\ell }^{\\rm T}$. This is related to the fact that, contrary to the coefficients of the \nsuccessive powers of $u$ that enter the BBH EOB potential $A^0 (r)$ which are (roughly speaking) \npure numbers of order unity, the coefficients $\\kappa_{\\ell }^{\\rm T}$ entering the tidal contribution \n$A^{\\rm tidal} (r)$ will tend to be much larger than unity (and to increase with ${\\ell }$). For instance, we \nshall typically find that $\\kappa_2^{\\rm T} = {\\mathcal O} (100)$. 
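The stated order of magnitude is easily checked. The sketch below evaluates Eq.~(\ref{eq:3.7}) with the representative values $k_2 \sim 0.1$ and compactness $c \sim 0.17$ used elsewhere in the text; these inputs are illustrative and do not correspond to a specific EOS calculation.

```python
def kappa_T(ell, kA, kB, mA, mB, cA, cB):
    """Eq. (3.7): dimensionless EOB tidal coefficient kappa_ell^T from the
    Love numbers k_ell and the compactnesses c = G M / (R c_0^2)."""
    Mtot = mA + mB
    term_A = 2.0 * kA * (mB * mA**(2 * ell)) / Mtot**(2 * ell + 1) / cA**(2 * ell + 1)
    term_B = 2.0 * kB * (mA * mB**(2 * ell)) / Mtot**(2 * ell + 1) / cB**(2 * ell + 1)
    return term_A + term_B

# Equal-mass BNS with representative k_2 ~ 0.1 and compactness c ~ 0.17:
kappa2 = kappa_T(2, 0.1, 0.1, 1.35, 1.35, 0.17, 0.17)
print(kappa2)  # O(100), as stated in the text
```

Note that the expression is symmetric under the simultaneous exchange $(k^A_\ell, M_A, c_A) \leftrightarrow (k^B_\ell, M_B, c_B)$, as it must be.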
This numerical difference makes it \nconsistent to add to $A^0 (r)$ (which is known for sure only up to $u^4$ terms, i.e. \nthe 3PN level) additional terms $\\propto u^6 + u^8 + \\cdots$ that would formally correspond to \n5PN $+$ 7PN $+ \\, \\cdots$ contributions if their coefficients were ``of order unity'' \n(at least in the parametric sense).\n\n Finally, to illustrate the typical numerical values of the EOB tidal parameters we give \nin Table~\\ref{tab:table1} the values of $\\kappa_{\\ell }^{\\rm T}$ (for ${\\ell }=2,3,4$) for three\nparadigmatic systems, one equal-mass BNS and two BHNS of mass ratios $q\\equiv M_{BH}\/M_{NS}=4$ and $q=10$.\nThe neutron star model is described with a ``realistic'' EOS SLy (with a\npiece-wise polytropic representation, see below) and has the following\ncharacteristics: mass $M=1.35M_\\odot$, compactness $c=0.17385$, radius $R=11.466$ km.\nNote that the main dependence on the equation of state in $\\kappa^{\\rm T}_\\ell$ (say for the equal-mass\nBNS case) comes from $\\kappa^{\\rm T}_\\ell\\propto R_A^{2\\ell+1}$. Therefore, if one were\nconsidering a NS of different radius (because of the use of a different EOS)\nwith the same mass, $\\kappa^{\\rm T}_2$ would be approximately given by \n$\\kappa^{\\rm T}_2\\sim 73\\,(R_A\/11.466\\,{\\rm km})^5$.\n\n\n\\begin{table}[t]\n\\caption{\\label{tab:table1}Tidal properties of BNS and BHNS systems. The NS\n model is obtained using the piece-wise polytropic representation of EOS \n SLy and has compactness $c = 0.17385$. 
Other properties of the model \n can be found in Table~\\ref{tab:table2}.}\n \\begin{center}\n \\begin{ruledtabular}\n \\begin{tabular}{lcccc}\n Model & $q$ & $\\kappa^{\\rm T}_2$ & $\\kappa^{\\rm T}_3$ & $\\kappa^{\\rm T}_4$\\\\\n \\hline \\hline\n BNS & 1 & 73.0426& 165.2966 & 509.6131 \\\\\n BHNS & 4 & 1.4959& 0.5416 & 0.2672 \\\\\n BHNS & 10 & 0.0726& 0.0054 & 0.0005 \n \\end{tabular}\n\\end{ruledtabular}\n\\end{center}\n\\end{table}%\n\nOne sees in Table~\\ref{tab:table1} that the dimensionless tidal\nparameter $\\kappa^{\\rm T}_2$ is a strongly decreasing function of the mass ratio.\nThis is analytically understood by looking at Eq.~\\eqref{eq:3.7}. If the label $B$\nrefers to a black hole (so that $k_\\ell^B=0$),\ndenoting $q\\equiv M_{BH}\/M_{NS}=M_B\/M_A$, we have $\\kappa^{\\rm T}_\\ell=(\\kappa^{\\rm T}_\\ell)^A$ \nwhere \n\\be\n(\\kappa^{\\rm T}_\\ell)^A=2\\dfrac{k^A_\\ell}{c_A^{2\\ell+1}}\\dfrac{q}{(1+q)^{2\\ell+1}}.\n\\end{equation}\nHere $c_A$ denotes as above the compactness of the NS.\nTherefore, as soon as the mass ratio $q$ is significantly larger than one,\nwe see that $(\\kappa^{\\rm T}_\\ell)^A$ contains a small factor\n$q^{-2\\ell}$ that suppresses the tidal contribution.\nAs a consequence, GW-observable tidal effects will be strongly suppressed \nin realistic BHNS systems.\nNote, however, that it might be quite useful to compare numerical\nrelativity simulations of ``artificial'' BHNS systems of mass ratio $q\\sim 1$\nto their EOB description to probe the analytical understanding of the \nlate inspiral and plunge phase. In particular, we note that, as a function of $q$, \n$\\kappa_2^{\\rm T}\\propto q\/(1+q)^5$ vanishes both when $q\\to 0$ and $q\\to\\infty$\nand reaches a maximum value when $q=M_{BH}\/M_{NS}=1\/4$. 
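The mass-ratio dependence just discussed can be made explicit numerically (an illustrative aside): starting from the equal-mass BNS value $\kappa^{\rm T}_2 = 73.0426$ of Table~\ref{tab:table1} and using, for the same NS model, the scaling $\kappa^{\rm T}_2 \propto q/(1+q)^5$ implied by the equation above, one recovers the BHNS entries of the table.

```python
# For a fixed NS model, Eq. (3.7) gives
#   BNS (equal masses):  kappa_2^T = (k_2/c^5)/8
#   BHNS (mass ratio q): kappa_2^T = (2 k_2/c^5) * q/(1+q)^5,
# so the BHNS value follows from the BNS one via a pure mass-ratio factor.
kappa_bns = 73.0426  # equal-mass BNS value from Table I

def kappa_bhns(q):
    return 16.0 * kappa_bns * q / (1.0 + q) ** 5

print(kappa_bhns(4))    # close to 1.4959 (Table I)
print(kappa_bhns(10))   # close to 0.0726 (Table I)

# Maximum of q/(1+q)^5 at q = 1/4; ratio to the equal-mass BNS value:
print(kappa_bhns(0.25) / kappa_bns)  # 4^6/5^5, about 1.311
```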
Moreover the maximum\nvalue of $\\kappa_2^{\\rm T}$ is larger than the value of $\\kappa_2^{\\rm T}$ for a\ncorresponding {\\it equal-mass} BNS system by a factor $4^6\/5^5\\simeq 1.311$.\nWe suggest that the numerical study of such astrophysically irrelevant\nBHNS systems (with $M_{BH}\/M_{NS}\\sim 1\/4$) can be quite useful for improving\nour understanding of tidal interactions in strongly-interacting (near contact) \nregimes.\n\n\\subsection{Parametrizing higher-order dynamical tidal corrections}\n\\label{sec:1pn_tides}\n\nAbove we discussed the {\\it leading order} (LO) contribution of\ntidal interactions to the EOB ``radial potential'' $A(r)$.\nWe also discussed the structure of sub-leading (post-Newtonian) contributions\nto tidal interactions, Eq.~\\eqref{symb}. \nComparing the structure~\\eqref{symb} to the part of the EOB action linear in\n$A^{\\rm tidal}$, which is proportional to the product of $A^{\\rm tidal}$ by the\nreduced mass $\\mu=M^A M^B\/(M^A+M^B)$, we see that the general\nstructure of the tidal contributions to the $A(r)$ potential is\n\\begin{align}\n\\label{eq:T2}\n&A_{\\mu_A}^{\\rm tidal}\\sim \\dfrac{M^A + M^B}{M^A M^B} \\mu^A\n\\dfrac{(GM^B)^2}{r^{2\\ell +2}}\\nonumber\\\\\n&\\times\\left[1 + \\dfrac{GM^A}{r} + \\dfrac{GM^B}{r} \n+ \\left(\\dfrac{GM^A}{r}+\\dfrac{GM^B}{r}\\right)^2 + \\dots\\right] \n\\end{align}\nwhere we invoked dimensional analysis to insert appropriate powers\nof the (EOB) radial separation $r$.\n[Contrary to the action~\\eqref{symb}, which also depends on velocities (and higher derivatives),\nthe EOB radial potential depends only on the radius $r$.]\n\n\nIn other words, if we separate, for each multipolar order, the $\\mu_A$ and\n$\\mu_B$ contributions to $A^{\\rm tidal}$,\n\\be\n\\label{eq:T3}\nA^{\\rm tidal} = \\sum_{\\ell \\geq 2} A^{\\mu^A_\\ell} + \\sum_{\\ell \\geq 2}\nA^{\\mu_\\ell^B} ,\n\\end{equation}\nwe can write\n\\begin{align}\n\\label{eq:T4}\nA^{\\mu^A_\\ell} = A^{\\mu_\\ell^A}_{\\rm LO}\\left[1 + \\alpha_1^{A(\\ell)} u +\n 
\\alpha_2^{A(\\ell)}u^2+\\alpha_3^{A(\\ell)}u^3 + \\dots\\right],\n\\end{align}\nwhere\n\\be\n\\label{eq:T5}\nA^{\\mu_\\ell^A}_{\\rm LO} \\equiv - \\kappa_\\ell^A u^{2\\ell +2}\n\\end{equation}\nis the part of $A^{\\rm tidal}_{\\rm LO}$, Eq.~\\eqref{eq:3.6},\nwhich is linear in $\\mu^A_\\ell$, or $k_\\ell ^A$, i.e.\n\\be\n\\kappa^A_\\ell = 2 \\, k_{\\ell }^A \\, \\frac{M_B}{M_A} \\left( \\frac{R_A \\, c_0^2}{G (M_A + M_B)} \n\\right)^{2{\\ell } + 1}.\n\\end{equation}\nSimilarly, one will have \n\\be\n\\label{eq:T7}\nA^{\\mu_\\ell^B} = A_{\\rm LO}^{\\mu_\\ell^B} \\left[1 + \\alpha_1^{B(\\ell)} u \n+\\alpha_2^{B(\\ell)} u^2 + \\alpha_3^{B(\\ell)} u^3 + \\dots \\right] .\n\\end{equation}\nThe coefficient $\\alpha_1^{A(\\ell)}$ represents the next-to-leading order (NLO)\nfractional correction to the leading order $A^{\\mu_\\ell^A}_{\\rm LO}$ (i.e. a\n1PN fractional correction), while $\\alpha_2^{A(\\ell)}$ represents the\nnext-to-next-to-leading order (NNLO) correction (i.e. a 2PN fractional\ncorrection), etc.\nThese coefficients are not pure numbers, but rather functions of the two\ndimensionless mass ratios\n\\begin{align}\n\\label{eq:T8}\nX_A &\\equiv \\dfrac{M_A}{M_A+M_B},\\\\\nX_B &\\equiv \\dfrac{M_B}{M_A+M_B}\\equiv 1 - X_A .\n\\end{align}\nThe coefficients entering Eq.~\\eqref{eq:T7} are obtained from those\nentering~\\eqref{eq:T4} by the interchange of $X_A$ and $X_B$, i.e.\n$\\alpha^{A(\\ell)}_n (X_A,X_B)=\\alpha^{B(\\ell)}_n (X_B,X_A)$. The\nsymbolic structure~\\eqref{eq:T2} would naively suggest that\n$\\alpha_1^{A(\\ell)}$ is a linear combination of $X_A$ and $X_B$ and\nthat $\\alpha_2^{A(\\ell)}$ is a combination of $X_A^2$, $X_AX_B$ and $X_B^2$.\nHowever, as the reformulation of~\\eqref{symb} in terms of an EOB potential\n\\eqref{eq:T2} involves a ``contact transformation'' that depends on the\nsymmetric mass ratio $\\nu\\equiv X_AX_B$ (see Ref.~\\cite{Buonanno:1998gg}), the mass-ratio\ndependence of $\\alpha_n^{A(\\ell)}$ might be more complicated. 
\nNote that, by using the identity $X_A+X_B\\equiv 1$, one can, e.g., express\n$\\alpha_n^{A(\\ell)}$ in terms of $X_A$ only.\n[Then $\\alpha_n^{B(\\ell)}$ will be the same function of $X_B$ as\n$\\alpha_n^{A(\\ell)}$ of $X_A$.]\nNote also that, if one wishes, one can, for each value of $\\ell$, factorize\nthe total LO terms $-\\kappa_\\ell^{\\rm T} u^{2\\ell +2}$, and write\n\\begin{align}\n\\label{eq:T9}\nA^{\\rm tidal}=\\sum_{\\ell\\geq 2} - \\kappa_\\ell^{\\rm T}u^{2\\ell+2}\\hat{A}^{\\rm tidal}_\\ell,\n\\end{align}\nwhere\n\\be\n\\label{eq:T10}\n\\hat{A}^{\\rm tidal}_\\ell\\equiv 1\n +\\bar{\\alpha}_1^{(\\ell)}u + \\bar{\\alpha}_2^{(\\ell)}u^2 + \\dots ,\n\\end{equation}\nwith \n\\be\n\\label{eq:T11}\n \\bar{\\alpha}_n^{(\\ell)}\\equiv \\dfrac{\\kappa_\\ell^A \\alpha_n^{A(\\ell)} + \\kappa_\\ell^B\\alpha_n^{B(\\ell)} }{\\kappa^A_\\ell + \\kappa_\\ell^B}.\n\\end{equation}\nUsing Eqs.~(4.27) and (4.29) of~\\cite{Damour:1992qi}, or Eq.~(3.33) of~\\cite{Damour:1993zn},\ntogether with effective action techniques, a recent calculation~\\cite{DEF09}\ngave the following result for the 1PN coefficient of multipolar order\n$\\ell=2$, $\\alpha_1^{A(2)}$, namely\n\\be\n\\label{eq:alpha1}\n\\alpha_1^{A(2)}=\\dfrac{5}{2}X_A.\n\\end{equation} \nMore work is needed to determine the higher degree and\/or higher order \n coefficients $\\alpha_n^{A(\\ell)}(X_A,X_B)$,\nand thereby the coefficients $\\bar{\\alpha}_n^{(\\ell)}$ entering Eq.~\\eqref{eq:T11}.\nBelow, we shall focus on the equal-mass case, where the coefficients $\\alpha_n^{A(\\ell)}$\nbecome pure numbers.\n\nHere we shall explore three possible proposals for including higher-order\nPN corrections in tidal effects. The first proposal consists in truncating\nEq.~\\eqref{eq:T10} at 1PN order in a straightforward ``Taylor'' way, i.e.,\nin considering a PN correcting factor to the EOB radial potential of the form\n\\begin{equation}\n\\label{eq:linear}\n\\hat{A}^{\\rm tidal}_\\ell = 1+\\bar{\\alpha}_1^{(\\ell)} u.
\n\\end{equation}\nThe second proposal consists in considering a PN correcting factor which\nhas a ``Pad\\'e-resummed'' structure, i.e.\n\\begin{equation}\n\\label{eq:pade}\n\\hat{A}^{\\rm tidal}_\\ell = \\left( 1- \\bar{\\alpha}_1^{(\\ell)} u\\right)^{-1} .\n\\end{equation}\nOur third proposal consists in considering a PN correcting factor which would\nresult from having a ``shift'' between the EOB radial coordinate and the radial \ncoordinate appearing most naturally in a Newtonian-like tidal interaction\n($\\propto 1\/r^{2\\ell+2}$):\n\\begin{equation}\n\\label{eq:harmonic}\n\\hat{A}^{\\rm tidal}_\\ell = \\left( 1- \\widetilde{\\alpha}_1^{(\\ell)} u\\right)^{-(2\\ell +2)}.\n\\end{equation}\nWe use here a different notation for the 1PN coefficient, \n$\\widetilde{\\alpha}_1^{(\\ell)}$, as a reminder that, for instance, \nwhen $\\ell=2$, the parametrization~\\eqref{eq:harmonic} corresponds\nto a 1PN coefficient in the parametrization~\\eqref{eq:linear} given\nby \n\\be\n\\label{alpha_tilde}\n\\bar{\\alpha}_1^{(2)}=6\\,\\widetilde{\\alpha}_1^{(2)}.\n\\end{equation}\n\n\\section{Comparing EOB to numerical relativity results on ``waveless'' circular binaries}\n\\label{sec:nr}\nThe aim of this section is to compare stationary quasi-circular\nconfigurations of neutron star binaries computed, on the one hand, \nin the analytical framework outlined above and, on the other hand,\nin the numerical framework recently implemented by \nUry${\\rm\\bar{u}}$ et al.~\\cite{Uryu:2009ye} (see also~\\cite{Uryu:2005vv}).\nThe quantity from both frameworks that we shall compare is the\nbinding energy $E_b$ as a function of the orbital frequency $\\Omega$.\n\n\n\\subsection{Tidally interacting BNS circular configurations in the EOB framework }\n\\label{ar:circular}\n\n\\subsubsection{BNS binding energy in the EOB framework}\n\\label{sbsc:eob_circular}\nAs an application of the formalism discussed so far, \nwe consider in this section binaries in exactly circular orbits,\nin the absence of
 radiative effects (these will be discussed in the following section).\n\nAs the EOB formalism is based on a Hamiltonian \ndescription of the conservative dynamics, the stable circular orbits correspond to minima, with respect \nto $r$, of the radial potential $H_{\\rm EOB}^{\\rm radial} (r,p_{\\varphi}) \\equiv H_{\\rm EOB} (r , \np_{r_*} = 0, p_{\\varphi})$. Minimizing $H_{\\rm EOB}^{\\rm radial} (r,p_{\\varphi})$ is equivalent to \nminimizing the corresponding effective Hamiltonian $\\hat H_{\\rm eff}$, or, equivalently, its square, i.e.\n\\begin{eqnarray}\n\\label{eq:5.1}\n(\\hat H_{\\rm eff}^{\\rm radial})^2 \\, (r,p_{\\varphi}) &= &A(r) \\left( 1 + \\frac{p_{\\varphi}^2}{r^2} \\right) \n\\nonumber \\\\\n&\\equiv &A(u) + p_{\\varphi}^2 \\, B(u) \\, .\n\\end{eqnarray}\nHere, we have used the short-hand notation $u \\equiv 1\/r = GM\/R$ and $B(u) \\equiv u^2 \\, A(u)$. \nMinimizing (\\ref{eq:5.1}) with respect to $r$ (or, equivalently, $u$), for a given (scaled) total angular \nmomentum $p_{\\varphi} \\equiv J^{\\rm tot} \/ GM\\mu$, yields the equation\n\\be\n\\label{eq:5.2}\nA'(u) + p_{\\varphi}^2 \\, B'(u) = 0,\n\\end{equation}\nwhere the prime denotes a $u$-derivative.
This leads to the following\nparametric representation of the squared angular momentum:\n\\begin{equation}\nj^2(u)=-\\dfrac{A'(u)}{(u^2 A(u))'}\\quad\\text{(circular orbits)},\n\\end{equation}\nwhere we use the letter $j$ to denote the value of $p_\\varphi$ along\nthe sequence of circular orbits.\nInserting this $u$-parametric representation of \n$j^2$ in Eq.~\\eqref{eq:Heff} defines the $u$-parametric representation of the\neffective Hamiltonian $\\hat{H}_{\\rm eff}(u)$.\nWe can then obtain (at least numerically) $\\hat{H}_{\\rm eff}$ as a function of $x$\nby eliminating $u$ between $\\hat{H}_{\\rm eff}(u)$ and the corresponding\n$u$-parametric representation of the frequency parameter $x=(GM\\Omega\/c^3)^{2\/3}$\nobtained from the angular Hamilton equation of motion in the circular case\n\\begin{equation}\n\\label{eq:Omega}\nM\\Omega(u) = \\dfrac{1}{\\mu}\\dfrac{\\partial H_{\\rm EOB}}{\\partial j}=\\dfrac{M A(u)j(u) u^2}{H_{\\rm EOB}\\hat{H}_{\\rm eff}},\n\\end{equation}\nwhere $H_{\\rm EOB}$ denotes the real EOB Hamiltonian\n\\begin{equation}\n\\label{eq:real_hamiltonian}\nH_{\\rm EOB} = M \\sqrt{ 1 + 2\\nu\\left( \\hat{H}_{\\rm eff} - 1\\right)}.\n\\end{equation}\nIn this situation, the binding energy $E_b$ of the system is simply given\nby\n\\be\n\\label{eq:Eb}\nE_b(\\Omega) = H_{\\rm EOB}-M = M\\left\\{ \\sqrt{ 1 + 2\\nu\\left( \\hat{H}_{\\rm eff} - 1\\right)}-1 \\right\\},\n\\end{equation}\nwhere $M$ denotes, as above, the total mass $M=M_A+M_B$ of the system,\nand where one must eliminate $u$ between Eq.~\\eqref{eq:Omega} and\nEq.~\\eqref{eq:Eb} to express the r.h.s. in terms of $\\Omega$.
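The parametric construction just described is straightforward to implement numerically. The following sketch (our illustration, not the authors' code; scaled units $G=c=1$, with $u=1/r$) builds $j^2(u)$, $\hat H_{\rm eff}(u)$, $M\Omega(u)$ and $E_b/M$ from a given $A(u)$; in the test-mass limit $\nu\to0$ with $A(u)=1-2u$ it reproduces the Schwarzschild circular-orbit relations $j^2=1/[u(1-3u)]$ and $M\Omega=u^{3/2}$:

```python
import math

# Minimal sketch (our illustration; scaled units G = c = 1, u = 1/r = GM/R):
# given A(u) and A'(u), build the circular-orbit quantities at inverse radius u.
def circular_orbit(A, dA, u, nu):
    dB = 2.0 * u * A(u) + u**2 * dA(u)               # B(u) = u^2 A(u), so B'
    j2 = -dA(u) / dB                                 # j^2 = -A'/(u^2 A)'
    Heff = math.sqrt(A(u) * (1.0 + j2 * u**2))       # hat H_eff (per mu)
    Heob = math.sqrt(1.0 + 2.0 * nu * (Heff - 1.0))  # H_EOB / M
    MOmega = A(u) * math.sqrt(j2) * u**2 / (Heob * Heff)
    Eb_over_M = Heob - 1.0                           # binding energy per M
    return j2, MOmega, Eb_over_M

# Test-mass check: A(u) = 1 - 2u (Schwarzschild), nu -> 0, at u = 0.1.
j2, MOm, _ = circular_orbit(lambda u: 1 - 2*u, lambda u: -2.0, u=0.1, nu=0.0)
print(j2, MOm)   # -> 14.2857... (= 1/(u(1-3u))) and 0.03162... (= u^{3/2})
```

For a tidally interacting binary one would simply feed in the full $A(u)$, point-mass plus $A^{\rm tidal}$, and sweep $u$ to trace out $E_b(\Omega)$.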
Note that\nthe function $E_b(\\Omega)$ depends also on the choice of the following\nparameters: $\\kappa_\\ell^T$, $\\alpha_1^{A(\\ell)}$ and $\\alpha_1^{B(\\ell)}$.\nHere we shall focus on the equal-mass case, and consider the dependence\nof $E_b(\\Omega)$ only on $(\\kappa_2^T,\\kappa_3^T,\\kappa_4^T)$ and restrict\nthe parametrization of 1PN tidal effects to the consideration of a \n{\\it single} 1PN tidal parameter $\\bar{\\alpha}_1$ that is taken to\nbe the same for the three values of $\\ell$ that we consider.\nIn addition, we will incorporate 1PN corrections to tidal effects in\nthe three aforementioned functional forms, \nEqs.~\\eqref{eq:linear}--\\eqref{eq:harmonic}, and contrast their performances.\n\n\n\\subsubsection{BNS binding energy in the PN framework}\n\\label{sbsc:pn_circular}\nWe also want to contrast the performance of the EOB approach \n(which represents a resummation of the dynamics of the binary system) \nwith the ``standard'' nonresummed PN-based description of the \nbinding energy of tidally interacting BNS, as used for instance in\nRef.~\\cite{Mora:2003wt}. \nThe PN-expanded binding energy is written in the form\n\\be\nE_b(\\Omega) = E_{\\rm point-mass}(\\Omega) + E^{\\rm tidal}(\\Omega),\n\\end{equation}\nwhere\n\\begin{align}\n&E_{\\rm point-mass}(\\Omega) =-\\dfrac{\\mu}{2} x\\bigg\\{1-\\left(\\dfrac{3}{4}+\\dfrac{1}{12}\\nu\\right)x \\nonumber\\\\\n&-\\left(\\dfrac{27}{8}-\\dfrac{19}{8}\\nu+\\dfrac{1}{24}\\nu^2\\right)x^2\\nonumber\\\\\n&-\\left(\\dfrac{675}{64}-\\left[\\dfrac{34445}{576}-\\dfrac{205}{96}\\pi^2\\right]\\nu+\\dfrac{155}{96}\\nu^2 \n+ \\dfrac{35}{5184}\\nu^3\\right)x^3\\bigg\\},\n\\end{align}\nis the 3PN accurate post-Newtonian binding energy of two point-masses as a\nfunction of the orbital frequency parameter $x=(GM\\Omega\/c^3)^{2\/3}$~\\cite{Damour:1999cr,Damour:2001bu}.
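As a sanity check of the 3PN expression above (our illustration, not part of this work), one can verify that in the test-mass limit $\nu\to0$ its coefficients reproduce the small-$x$ expansion of the exact Schwarzschild circular-orbit energy, $E_b/\mu=(1-2x)/\sqrt{1-3x}-1$:

```python
import math

# 3PN point-mass binding energy per reduced mass, E_b/mu, as a function of
# x = (G M Omega / c^3)^(2/3); coefficients copied from the equation above.
def e3pn(x, nu):
    c1 = 3/4 + nu/12
    c2 = 27/8 - 19*nu/8 + nu**2/24
    c3 = (675/64 - (34445/576 - 205*math.pi**2/96)*nu
          + 155*nu**2/96 + 35*nu**3/5184)
    return -0.5 * x * (1 - c1*x - c2*x**2 - c3*x**3)

# Test-mass limit: compare with the exact Schwarzschild energy at small x;
# the residual is only the O(x^5) truncation error of the PN series.
x = 1e-3
exact = (1 - 2*x) / math.sqrt(1 - 3*x) - 1
print(abs(e3pn(x, 0.0) - exact))   # tiny (below 1e-12 at this x)
```

The same check fails, as it should, if any of the $\nu$-independent coefficients $3/4$, $27/8$, $675/64$ is altered.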
The expression of the tidal contribution\n$E^{\\rm tidal}(\\Omega)$ can be obtained for all values of the multipolar \nindex $\\ell$ by noting the following. Any (perturbative) power-law radial contribution \nto the interaction Hamiltonian of the form\n\\be\n\\delta H(r) = -\\dfrac{c_n}{r^n}\n\\end{equation}\nis easily shown to contribute a corresponding term\n\\be\n\\delta E_b(\\Omega) = +\\left(\\dfrac{2}{3}n-1\\right)\\dfrac{c_n}{r_\\Omega^n},\n\\end{equation} \nwhere it should be noted that the sign of the tidal contribution flips\nbetween the Hamiltonian and the binding energy expressed as a function\nof the orbital frequency ($r_\\Omega$ denoting the Newtonian value of $r$\ncorresponding to a given circular orbit of frequency $\\Omega$). As\na result, the leading-order (LO) PN tidal contribution reads\n\\be\n\\label{eq:dEtidal}\nE^{\\rm tidal}_{\\rm LO}(\\Omega)= + \\dfrac{\\mu}{2}\\sum_{\\ell\\geq 2} \\left[\\dfrac{2}{3}(2\\ell+2)-1\\right]\\kappa_\\ell^T\\, x^{2\\ell +2}.\n\\end{equation}\nWe shall also explore the effect of correcting $E^{\\rm tidal}_{\\rm LO}$\nby a fractional 1PN contribution, i.e. to employ a PN tidal contribution \nof the form\n\\be\n\\label{eq:NLO_pn}\nE^{\\rm tidal}(x)= (1+ \\bar{\\alpha}_1'x)E^{\\rm tidal}_{\\rm LO}(x),\n\\end{equation}\nwhere the (approximate) link with the previously defined $\\bar{\\alpha}_1$ is\n\\be\n\\bar{\\alpha}_1'=\\dfrac{11}{9} \\bar{\\alpha}_1.\n\\end{equation}\nHere the numerical coefficient $11\/9$ arises as a consequence of the factor $2n\/3-1$ in\nthe result above (considered for $n=6$ and $n=7$).\n\n\\subsection{BNS circular configurations in numerical relativity}\n\\label{nr:circular}\n\\begin{table*}[t]\n\\caption{\\label{tab:table2} Properties of NS models considered in the numerical analysis of Ref.~\\cite{Uryu:2009ye}. The EOS are\nrepresented as piece-wise-polytropic functions (on four intervals) as proposed in~\\cite{Read:2008iy,Read:2009yp}.
For the models considered, the present\ntable is compatible with Table III of ~\\cite{Uryu:2009ye}. From left to right, the columns report: the dividing density between the low-density\npart (the crust) and the higher density part of the EOS; the four adiabatic indices for each polytropic interval, $\\{\\Gamma_0,\\Gamma_1,\\Gamma_2,\\Gamma_3\\}$; \nthe compactness $c=M\/R$; the NS mass $M$ and the NS radius $R$; the Love numbers $k_2$, $k_3$ and $k_4$.}\n\\begin{center}\n \\begin{ruledtabular}\n \\begin{tabular}{lccccccccccc}\n Model & $\\log(\\rho_0)$\n & $\\Gamma_0$\n & $\\Gamma_1$ \n & $\\Gamma_2$\n & $\\Gamma_3$\n & $M\/R$ \n & $M$ \n & $R$\n & $k_2$ \n & $k_3$\n & $k_4$\\\\\n \\hline \\hline\n 2H &13.847 &1.35692&3 &3 & 3 &0.13097&1.3507& 15.229 & 0.1342& 0.0407& 0.0168 \\\\\n HB &14.151 &1.35692&3 &3 & 3 &0.17181&1.3507& 11.608 & 0.0946& 0.0260&0.0097 \\\\ \n 2B &14.334 &1.35692&3 &3 & 3 &0.20500&1.3505& 9.728 & 0.0686& 0.0174& 0.0059 \\\\\n SLy &14.165 &1.35692&3.005&2.988& 2.851&0.17385&1.3499& 11.466 & 0.0928& 0.0254& 0.0095\\\\ \n FPS &14.220 &1.35692&2.985&2.863& 2.600&0.18631&1.3511& 10.709 & 0.0805&0.0214&0.0077 \\\\\nBGN1H1 &14.110 &1.35692&3.258&1.472& 2.464&0.15792&1.3490& 12.614 & 0.1059& 0.0307&0.0120 \\\\\n \\end{tabular}\n\\end{ruledtabular}\n\\end{center}\n\\end{table*}\n\n\\subsubsection{Numerical framework of Ury${\\bar{u}}$ et al.}\n\\label{framework}\n\nIn a recent paper, Ury${\\rm\\bar{u}}$ et al.~\\cite{Uryu:2009ye} constructed BNS\nsystems in quasi-circular orbits by solving numerically the full\nset of Einstein's equations. The important advance of this work\nwith respect to previous analyses is the fact that Einstein equations\nare solved for all metric components, including the nonconformally\nflat part of the spatial metric. 
This goes beyond the common\n{\\it conformally flat} \napproximation that is usually employed for the spatial geometry.\nThe conformally flat approximation introduces systematic\nerrors which enter the PN expansion already at the 2PN level \n[see the detailed calculation in Appendix B of Ref.~\\cite{Damour:2000we}].\nConsistently with this analytical argument, it was found\nin Ref.~\\cite{Uryu:2009ye} that the difference between conformally\nflat and nonconformally flat calculations is so large that it can\nmask the effect of tidal interaction for close systems. See, in\nthis respect, the location of the conformally flat (IWM) binding energy\ncurves in the two upper panels of Fig.~3 in Ref.~\\cite{Uryu:2009ye}.\nBelow we shall however emphasize that the nonconformally flat calculations\nof~\\cite{Uryu:2009ye} still introduce significant systematic errors which enter\nthe PN expansion {\\it at the 3PN level}. \n\nSince the new nonconformally flat results of Ury${\\rm\\bar{u}}$ et al. represent\na definite improvement with respect to previous calculations, \nit is appealing to see to what extent these new results agree \nwith existing analytical descriptions.\nWe extracted from Ref.~\\cite{Uryu:2009ye} the six models which\npresent the highest computational accuracy. These models were obtained\nby using the EOS labelled 2H, HB, 2B, SLy, FPS and BGN1H1. These labels refer\nto piecewise polytropic EOS.
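To fix ideas, a piecewise polytropic EOS of this kind can be sketched in a few lines (our illustration, not the production implementation; the crust values $\Gamma_0=1.35692$, $K_0=3.59389\times10^{13}$ cgs and $\log\rho_0=13.847$ are those quoted in the text for the 2H model, with core index $\Gamma_1=3$; the constants $K_i$ follow from pressure continuity at each dividing density):

```python
import numpy as np

# Sketch of a piecewise polytropic EOS, p(rho) = K_i rho^Gamma_i on each
# density interval; continuity of p at each dividing density fixes K_{i+1}
# from K_i via K_{i+1} = K_i * rho_i^(Gamma_i - Gamma_{i+1}).
def piecewise_polytrope(rho_div, gammas, K0):
    """rho_div: dividing densities; gammas: one Gamma per interval (len+1)."""
    Ks = [K0]
    for i, rho_i in enumerate(rho_div):
        Ks.append(Ks[-1] * rho_i ** (gammas[i] - gammas[i + 1]))
    def pressure(rho):
        i = int(np.searchsorted(rho_div, rho))  # which interval rho falls in
        return Ks[i] * rho ** gammas[i]
    return pressure

# Two-interval example (cgs units): crust values from the text, Gamma_1 = 3.
r0 = 10**13.847
p = piecewise_polytrope([r0], [1.35692, 3.0], K0=3.59389e13)
print(p(r0 * (1 - 1e-9)) / p(r0 * (1 + 1e-9)))   # ~1: p continuous at rho_0
```

The four-interval EOS (SLy, FPS, BGN1H1) work the same way, with the extra dividing densities $\rho_1$, $\rho_2$ and indices $\Gamma_2$, $\Gamma_3$.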
Note that in the case of SLy, FPS and BGN1H1\nthe corresponding piecewise polytropic EOS were proposed in\nRef.~\\cite{Read:2008iy} as approximations to original tabulated EOS.\nIn the case of FPS and SLy, this implies that\nthe tidal coefficients $k_\\ell$ that we have computed for this work \ndiffer (by $\\sim 20\\%$) from the ones that we had previously\ncomputed in Ref.~\\cite{Damour:2009vw}, which used the original tabulated EOS.\nFor example, in the case of a neutron star model described by the \nSLy EOS and having a compactness $c=0.176$ (which corresponds to\na mass of $1.4M_\\odot$), we obtain a dimensionless Love number\n$k_2^{(\\rm tab)}=0.07699$ (which is consistent with the first line of Table~I\nof Ref.~\\cite{Hinderer:2009}) if we use the tabulated EOS, while\nwe obtain $k_2^{(\\rm ppoly)}=0.09123$ if we use the piece-wise\npolytropic EOS. Note that the piece-wise polytropic result is $18.5\\%$\nlarger than the tabulated one. This suggests that one should\nrefine the piece-wise polytropic approximation to realistic\ntabulated EOS by incorporating $k_2$ within the set of observables\nthat are fitted.\n\nAmong the six EOS that we retain, three, i.e. 2H, HB and 2B, \nuse two polytropic intervals, \nwhile the other three, i.e. SLy, FPS and BGN1H1, \nuse four polytropic intervals.\nWe will thus have one dividing density\\footnote{Here, following the notation\nof~\\cite{Read:2008iy}, we use the letter $\\rho$ to denote the rest-mass\n(baryon) density which was denoted by $\\mu$ in our previous\nwork~\\cite{Damour:2009vw}.}, denoted by $\\rho_0$, for 2H, HB and 2B,\nand three dividing densities, $(\\rho_0,\\rho_1,\\rho_2)$, for SLy, FPS and BGN1H1.\nHere, $\\rho_0$ indicates the dividing density between the lower density \ninterval that approximates the subnuclear density part of the EOS \n(the crust) and the supernuclear density part.\nThe values of (the base-ten logarithm of) $\\rho_0$ are displayed in \nthe first column of Table~\\ref{tab:table2}.
\nFor all EOS, the lower density interval (``crust'') is \napproximated by setting $(\\Gamma_0,K_0)=(1.35692,3.59389\\times 10^{13})$,\nwhere $K_0$ (here in cgs units) gives the pressure $p$ in dyn\/cm$^2$.\nThe other dividing densities (for the four-parameter EOS) are fixed \nas $\\rho_1=10^{14.7}$ and $\\rho_2=10^{15}$.\nThe corresponding adiabatic indices, $\\{\\Gamma_1,\\Gamma_2,\\Gamma_3\\}$,\ntaken from~\\cite{Read:2008iy,Uryu:2009ye}, are also given in Table~\\ref{tab:table2}.\nFor the implementation of the piecewise polytropic EOS we follow\nthe procedure explained in Sec.~III of~\\cite{Read:2008iy} and in \nSec.~IID of~\\cite{Uryu:2009ye}.\n\nFor each selected EOS, we computed the sequence of equilibrium models with the\n related Love numbers $k_\\ell$ up to $\\ell=4$. For the compactnesses corresponding\n to those used in~\\cite{Uryu:2009ye} we display in Table~\\ref{tab:table2} the $k_\\ell$'s\n together with the values of mass and radius that we obtained from our calculation,\n to check consistency with the corresponding values of Table~III of~\\cite{Uryu:2009ye}.\n The small differences (at the $10^{-3}$ level) are probably due to the fact that we \n use the finite-digit value of the dividing density $\\rho_0$ that they published.\n \n \n\\subsubsection{Subtracting tidal effects from NR data}\n\nLet us start by noting two facts, which can be checked from\nthe analytical expressions above, about the dependence of the binding\nenergy on the tidal parameters $\\kappa_\\ell^T$: i) this dependence is\nto a very good approximation {\\it linear}, and ii) the numerical effect\nof $\\kappa_2^T$ strongly dominates over that of the higher degree\n$\\kappa_\\ell^T$'s.
For example, if we take the tidal coefficients listed\nin Table~\\ref{tab:table1} (which correspond to the SLy EOS, yielding\na radius $\\sim 11.5$ km for $1.35M_\\odot$, in the middle of the\nrealistic range of NS radii) we find that the tidal contributions to\nthe binding energy would reach, if they were extended to the maximum\nfrequency that we shall explore here, namely $M\\Omega_{\\rm max}=0.060$,\nthe following values: the $\\kappa_2^T$ contribution to $E_b\/M$ \nis $\\sim 3.6\\times 10^{-4}$; the $\\kappa_3^T$ contribution \nis smaller than the $\\kappa_2^T$ one by a factor 0.053, and the\n$\\kappa_4^T$ one is smaller than the \n$\\kappa_2^T$ one by a factor $\\sim3.85\\times 10^{-3}$.\n\nThese two facts allow us to {\\it approximately subtract tidal effects \nfrom NR data}. Indeed, if we assume that the binding energy computed\nwith a certain equation of state $({\\rm EOS})_I$ is approximately given by\n\\be\nE_b(\\Omega;\\, I) \\approx h_0(\\Omega) + (\\kappa_2^T)_I h_2(\\Omega)\n\\end{equation} \nwe can use the NR data for two different EOS, labelled by $(I,J)$, \nto compute, {\\it separately},\n\\begin{align}\n\\label{eq:h0}\nh_0(\\Omega) &\\approx \\dfrac{(\\kappa_2^T)_I E_b(J)- (\\kappa_2^T)_J E_b(I) }\n {(\\kappa_2^T)_I - (\\kappa_2^T)_J},\\\\\n\\label{eq:h2}\nh_2(\\Omega) &\\approx \\dfrac{E_b(I)- E_b(J) }\n {(\\kappa_2^T)_I - (\\kappa_2^T)_J} .\n\\end{align}\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=85 mm, height=70mm]{fig1a.eps}\\hspace{5 mm}\n\\includegraphics[width=85 mm, height=70mm]{fig1b.eps}\\\\\n\\vspace{5mm}\n\\includegraphics[width=85 mm, height=70mm]{fig1c.eps}\\hspace{5 mm}\n\\includegraphics[width=85 mm, height=70mm]{fig1d.eps}\\\\\n\\caption{\\label{fig:fig1} Comparison between various $\\delta$-corrected $h_0$'s (defined in Eq.~\\eqref{eq:h0}) \nand the EOB (resummed, solid line) and 3PN (nonresummed, dashed line) point-mass representations of the binding energy.
}\n \\end{center}\n\\end{figure*}\n\nMost importantly we see that Eq.~\\eqref{eq:h0} allows us to compute\nfrom the binding energies of two BNS sequences a third binding energy\nfunction, $h_0$, which approximately represents the binding energy\nof {\\it non tidally interacting} neutron stars, i.e. the binding \nenergy curve of two point-masses. \nThe result of computing the r.h.s. of Eq.~\\eqref{eq:h0} for five pairs $(I,J)$\nof EOS having sufficiently different $\\kappa^T_2$'s is displayed in\nthe left panel of Fig.~\\ref{fig:fig1}. Two important lessons can be\ndrawn from this figure: i) The subtraction procedure defined by\nEq.~\\eqref{eq:h0} is remarkably able to define ``tidal-free'' energy\ncurves that are essentially on top of each other; this confirms \nthat our procedure succeeds in subtracting out the EOS-dependence\nof the binding energy curves; ii) However, the resulting ``universal''\n$h_0$ curve still differs significantly both from the EOB point-mass\ncurve (black solid line) and the PN point-mass one (black dashed line).\nThis second issue will be addressed in the next subsection.\n\nWe shall not display here the result of computing the $h_2$ part\nof the binding energy curve, Eq.~\\eqref{eq:h2}, because it is more\nsensitive than $h_0$ both to numerical noise (in the original NR data)\nand to the presence of higher-order tidal PN contributions.\nBelow, we shall address the issue of determining the tidal contributions to\n$E_b$ with a different approach.\n\n\n\\subsubsection{Detecting and subtracting systematic errors in NR data}\nHere we address the issue ii) mentioned in the previous subsection.\nIndeed, our subtraction procedure has given us access to the ``universal'',\nEOS-independent part of the energy curve $h_0$. \nHowever, we have seen that $h_0$ still significantly differs from the\nanalytical point-mass models. 
We think that the origin of this discrepancy\nis the presence of remaining ``systematic errors'' in the current\nnonconformally flat approach to BNS systems. Though the nonconformally flat\nintegration scheme of Ury${\\rm\\bar{u}}$ et al. is an improvement over previous work,\nit is still only an approximation to the exact solution describing\ntwo neutron stars interacting in a (conservative) ``time-symmetric'' \nmanner (half-retarded-half-advanced). Here we shall only use the data obtained\nby Ref.~\\cite{Uryu:2009ye} within the so-called ``waveless'' approximation.\nIn their approach, ``waveless'' means setting to zero the time-derivative\nof the conformal spatial metric (in a certain gauge): $\\partial_t\\tilde{\\gamma}_{ab}=0$.\nAs the NR gauge is rather similar to the ADM-TT gauge used in the 3PN\ncalculation of the interaction Hamiltonian of a two point-mass \nsystem in Refs.~\\cite{Jaranowski:1997ky,Damour:2001bu}, we can see,\nby looking at the analytical expression of the 3PN-accurate ADM Hamiltonian,\nthat neglecting the terms containing $\\pi_{ab}^{\\rm TT}\\sim \\partial_t\\tilde{\\gamma}_{ab}$\nmeans neglecting some of the terms that contribute at the {\\it 3PN level}.\n[The simplest of these terms is the ``kinetic energy'' term proportional\nto $\\int d^3 x\\, (\\pi_{ab}^{\\rm TT})^2$.]\nThis analytical argument suggests that the current NR data miss some 3PN\ncontributions, i.e. they miss some terms proportional to $x^4$ in the binding\nenergy curve.
We are therefore entitled to assume that the discrepancy\ndisplayed in the left panel of Fig.~\\ref{fig:fig1} between the NR $h_0$\nand the point-mass analytical curves is, to leading order, given by\nan expression of the type $\\Delta E_b(\\Omega) = \\delta \\, x^4$ with an EOS-independent \nnumerical coefficient $\\delta$ that we expect to be of order unity.\nIndeed, the right panel of Fig.~\\ref{fig:fig1} exhibits the fact that,\nby subtracting $\\Delta E_b(\\Omega) = \\delta \\, x^4$, with $\\delta= 0.8$ \n(see below), from all the individual $h_0$ curves, we can reach a good \nvisual agreement with both analytical point-mass models.\n[Note that the approximate ``best-fit'' value of $\\delta$ is mainly determined \nby the NR\/AR discrepancy \non the lower frequency part of the panel, say for $M\\Omega< 0.035$,\nwhere the contribution of tidal effects is relatively negligible.] \n\nThe remaining differences in this right panel are \ncompatible with the known level of numerical errors in the NR \ndata (see Fig.~4 of Ref.~\\cite{Uryu:2009ye}).\nIndeed, Ref.~\\cite{Uryu:2009ye} used the virial theorem to gauge \nsome of the systematic errors in their calculation by comparing \ntwo measures of the total mass of the system (Komar and ADM). \nThe resulting (absolute value) differences\nin binding energy, say $\\delta^v E_b$, are in general at the \nlevel $10^{-4}M$. We used these differences to estimate formal\n``error-bars'' on the various energy curves that we use in this\nwork.\nMore precisely, in $E_b$ energy curves we add error bars of one-sided amplitude\n$\\pm \\frac{1}{2}\\delta^v E_b$, so that the length of the two-sided error\nbars corresponds to the ``virial'' error.\nAs Fig.~\\ref{fig:fig1} concerns a quantity, $h_0$, defined as a \nlinear combination of NR data (see Eq.~\\eqref{eq:h0}), we conservatively\nestimated error bars on the $h_0$ curve corresponding to the pair 2B-FPS\nby linearly combining\nin absolute values the corresponding individual errors.
We use this error\nbar to gauge the quality of the other $h_0$ curves (which do not extend as\nfar in the high frequency range).\nThis conservative\nestimate of the total error seems appropriate to the present situation\nwhere the errors are not random, but rather systematic.\n[Note, however, that these ``error-bars'' seem to be too conservative in\nthe lower frequency part of the panels because they exceed the ``distance''\nbetween the $h_0$ curves and the point-mass models.]\nUsing these error bars, we can now roughly estimate a range of acceptable \nvalues of the NR correcting parameter $\\delta$. As illustrated in the\nfour panels of Fig.~\\ref{fig:fig1}, the range $0.4\\leq \\delta\\leq 1.2$\nis such that the $\\delta$-corrected NR-deduced $h_0$ curves are within \n``one formal sigma'' from both point-mass analytical models.\nWe shall use this range below to estimate a corresponding range of \nprobable values of the 1PN tidal parameter $\\bar{\\alpha}_1^{(2)}$.\n\n\n\\subsubsection{Least-square analysis: constraining next-to-leading order (1PN) \ntidal effects from numerical relativity data}\n\nIn this subsection we shall firm up the previous analysis and make it\nmore quantitative by using a $\\chi^2$ procedure.\nFor each EOS, labelled by the index $I$, we have 20 NR data points\nfrom Ref.~\\cite{Uryu:2009ye}, $E_b^{\\rm Ury{\\rm\\bar{u}}}(x_{n_I}; \\,I)$, where the index\n$n_I$ varies from one to twenty.
We retain in our analysis six EOS: \n$I$ = (2H, HB, 2B, FPS, SLy, BGN1H1).\nLet us then define the following formal $\\chi^2$ function, measuring\nthe (squared) ``distance'' between NR and EOB: \n\\be\n\\label{eq:chi2}\n\\chi^2{\\left(\\bar{\\alpha}_1,\\delta\\right)}=\\sum_{I,n}\\left[ \\dfrac{E^{\\rm Ury{\\rm\\bar{u}}}_b(x_n; \\,I)}{M}\n-\\delta\\, x^4_n - \\dfrac{E^{\\rm EOB}_b(x_n;\\,\\bar{\\alpha}_1,I)}{M} \\right]^2.\n\\end{equation}\nHere, $x=(GM\\Omega\/c^3)^{2\/3}$ and the index $n$ \nruns (for each EOS label $I$) over the sample of numerical data from one to twenty,\nso that $\\chi^2$ contains 120 terms in all.\nWe are interested in studying the dependence of $\\chi^2$ on the two\nvariables $(\\delta,\\bar{\\alpha}_1)$. Here $\\delta$ denotes the\ncoefficient of a 3PN subtraction to NR data of the type that we discussed\nin the previous subsection (as motivated by the neglect of some 3PN\nterms in the ``waveless'' approximation). \nAs explained above, we shall restrict the variation of $\\delta$ to the\nrange $0.4\\leq \\delta\\leq 1.2$. For simplicity, we shall actually sample\nthis interval through the three values $\\delta=(0.4,0.8,1.2)$.\nOn the other hand, the coefficient $\\bar{\\alpha}_1$ parametrizes possible \nnext-to-leading order (NLO) 1PN corrections to the tidal effects.\nWe will use the three different descriptions of NLO tidal effects delineated \nin Eqs.~\\eqref{eq:linear}--\\eqref{eq:harmonic} above.\n\n\nWe wish to use the least-square method, i.e., minimizing the EOB-NR \n``distance'' function $\\chi^2{\\left(\\bar{\\alpha}_1,\\delta\\right)}$,\nto constrain the values of $\\left(\\bar{\\alpha}_1,\\delta\\right)$.\nHowever, we find that $\\chi^2{\\left(\\bar{\\alpha}_1,\\delta\\right)}$ \nremains close (on the scale of the NR error bars) to its global\nminimum in a ``valley'' which extends over a significant region of \nthe $\\left(\\bar{\\alpha}_1,\\delta\\right)$ plane.
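To make the structure of this fit concrete, the following toy sketch (entirely synthetic data and a made-up model; only the form of the $\chi^2$ of Eq.~\eqref{eq:chi2} is taken from the text) shows how a grid scan of $\chi^2(\bar\alpha_1,\delta)$ recovers injected parameters when the data are clean; with realistic noise the flat valley just described appears instead of a sharp minimum:

```python
import numpy as np

# Toy chi^2 in the form of Eq. (eq:chi2): for each EOS label I, the residual
# is E_b^NR/M - delta*x^4 - E_b^model/M, summed in quadrature over all points.
def chi2(alpha1, delta, x, e_nr, model):
    return sum(np.sum((e - delta * x**4 - model(x, alpha1, I))**2)
               for I, e in enumerate(e_nr))

# Hypothetical "EOB-like" model: a point-mass part plus an NLO tidal term
# whose strength varies from EOS to EOS (the 1 + 0.1*I factor mimics kappa_I).
model = lambda x, a1, I: -0.5 * x - (1 + 0.1 * I) * (1 + a1 * x) * x**6

x = np.linspace(0.05, 0.14, 20)
e_nr = [model(x, 3.0, I) + 0.8 * x**4 for I in range(6)]  # inject (3.0, 0.8)

# Grid scan over the (alpha1, delta) plane.
a1_grid = np.linspace(0.0, 6.0, 61)
d_grid = np.linspace(0.0, 1.6, 33)
vals = np.array([[chi2(a, d, x, e_nr, model) for a in a1_grid] for d in d_grid])
i, j = np.unravel_index(vals.argmin(), vals.shape)
print(a1_grid[j], d_grid[i])   # recovers ~3.0 and ~0.8
```

Adding noise of the size of the virial error ($\sim10^{-4}$) to `e_nr` flattens the minimum into an extended valley, which is the situation described in the text.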
\nThis means that, given the present error level in numerical data, we\ncannot meaningfully and simultaneously select preferred values for \n$\\left(\\bar{\\alpha}_1,\\delta\\right)$. As a substitute, we shall\nexhibit the sections of the $\\chi^2$ valley that correspond to the \nthree values of $\\delta$ selected visually above in Fig.~\\ref{fig:fig1}.\nIn other words, we now fix $\\delta$ (to one of its three values) in\nEq.~\\eqref{eq:chi2} and consider the dependence of $\\chi^2$ on $\\bar{\\alpha}_1$.\nThe resulting one-dimensional plots are exhibited in Fig.~\\ref{fig:fig2}.\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=75 mm, height=58mm]{fig2a.eps}\\\\\n\\vspace{2.0mm}\n\\includegraphics[width=75 mm, height=58mm]{fig2b.eps}\\\\\n\\vspace{2.0mm}\n\\includegraphics[width=75 mm, height=58mm]{fig2c.eps}\n\\caption{\\label{fig:fig2} Sections of the function $\\chi^2(\\bar{\\alpha}_1,\\delta)$ \nfor three values of $\\delta$. The figure displays the corresponding ranges of\nallowed values of $\\bar{\\alpha}_1$. Note that, for all models, the minima are\nrather shallow.}\n \\end{center}\n\\end{figure}\n\nEach panel of Fig.~\\ref{fig:fig2} corresponds to a different modelization\nof NLO tidal effects: ``Taylor'' (upper panel, Eq.~\\eqref{eq:linear}), \n``Pad\\'e'' (middle panel, Eq.~\\eqref{eq:pade}) and \n``radial-shift'' (lower panel, Eq.~\\eqref{eq:harmonic}). In addition,\neach panel contains three curves corresponding to the three \nabove-selected values of $\\delta$: $\\delta=0.4$ \n(dash-dot line, right-most curve), $\\delta=0.8$ (solid line, middle curve),\nand $\\delta=1.2$ (dashed line, left-most curve).\n\nLet us start by focussing on the (solid) curves corresponding to \nthe ``central'' value of $\\delta$, $\\delta=0.8$.
We see that the\npreferred values of $\\bar{\\alpha}_1$ that they select (minimum of\nthe curves) are $\\bar{\\alpha}_1\\approx 7$ for the Taylor model, \n$\\bar{\\alpha}_1\\approx 3.5$ for the Pad\\'e model and\n$\\bar{\\alpha}_1\\approx 4.5$ for the ``radial-shift'' model.\nThis shows that higher order PN terms (differently included in\nthe different models) have a significant effect on the determination \nof $\\bar{\\alpha}_1$.\nNote also that when $\\delta=1.2$ all the models tend to favour a lower\nvalue: $\\bar{\\alpha}_1\\sim 1$. The value of $\\chi^2$ at $\\bar{\\alpha}_1=0$\nand $\\delta=1.2$ is $\\chi^2{\\left(0,1.2\\right)}=5.665\\times 10^{-7}$.\nThis formally corresponds to an average (squared) ``error level'' on the individual\nNR-EOB energy differences summed in $\\chi^2$ equal to \n$\\sqrt{\\chi^2{\\left(0,1.2\\right)}\/120}=0.687\\times 10^{-4}$. This \nlevel is comparable to the ``virial error'' on each individual NR\ndata point $\\delta^v E_b\/M\\sim 10^{-4}$. It is therefore reasonable to\nuse this level to select a range of values of $\\bar{\\alpha}_1$.\nCombining this range with the range of values of $\\delta$'s means\nthat, at this stage, the range of values of $\\bar{\\alpha}_1$ that\nis compatible with the NR data is obtained by taking the level \nsurface $\\chi^2{\\left(\\bar{\\alpha}_1,\\delta\\right)}=\\chi^2{\\left(0,1.2\\right)}$\nas the admissible bottom of the ``valley'' in the \n$\\left(\\bar{\\alpha}_1,\\delta\\right)$ plane.\nThis leads to the following admissible ranges: \n$0\\lesssim\\bar{\\alpha}_1\\lesssim 15.7$ for the Taylor model;\n$0\\lesssim\\bar{\\alpha}_1\\lesssim 4.8$ for the Pad\\'e model;\n$0\\lesssim\\bar{\\alpha}_1\\lesssim 7.5$ for the ``radial shift'' model.\nIt is clear that at this stage the fact that (as we have argued above)\nthe NR data are ``polluted'' by some systematic errors (notably linked\nto unaccounted 3PN effects) prevents us from giving very significant\nconstraints on the value of 
$\\bar{\\alpha}_1$. Note in particular \nthat the value $\\bar{\\alpha}_1=5\/4=1.25$ which follows ( in the equal-mass case) \nfrom Eq.~\\eqref{eq:alpha1} is compatible with the present NR data\n(if we allow $\\delta=1.2$). In this respect, it is interesting to \nnote that if we consider a model of the form\n\\be\n\\hat{A}^{\\rm tidal} = 1 + \\bar{\\alpha}_1 u + \\bar{\\alpha}_2 u^2,\n\\end{equation}\nwith $\\bar{\\alpha}_1=1.25$ and compute the corresponding $\\chi^2$\nfor the central value $\\delta=0.8$, we find that \n$\\chi^2{\\left(\\bar{\\alpha}_2,0.8 \\right)}$ reaches a minimum\naround $\\bar{\\alpha}_2\\approx 40$. In addition the value of the\nminimum of the $\\chi^2$ is $3.20\\times 10^{-7}$ which is \nslightly better than the performance of the 1PN Taylor model in\nthe upper panel of Fig.~\\ref{fig:fig2}. This shows again that\nhigher PN tidal effects can play an important role and that\nthe minimima exhibited (for the central value $\\delta=0.8$) \nin the three panels of Fig.~\\ref{fig:fig2} should be viewed \nas ``effective'' values of $\\bar{\\alpha}_1$.\nWe note in this respect that a situation where higher-PN corrections dominate\nover the 1PN one is not at all exceptional. For instance, the 1PN contribution\nto the EOB radial potential $A(r)$ {\\it vanishes}, its 2PN contribution has a\nrather small coefficient, $2\\nu$, while the numerical coefficient of the 3PN\ncontribution $\\nu a_4$ is quite large and significantly modifies the\nconclusions that one might draw from the first two PN contributions. \n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=75 mm, height=60mm]{fig3a.eps}\\\\\n\\vspace{2 mm}\n\\includegraphics[width=75 mm, height=60mm]{fig3b.eps}\n\\caption{\\label{fig:fig3}2B EOS: Explicit comparison between various analytical representations\nof the binary binding energy and (corrected) numerical relativity data. \nThe correction parameter is chosen to be $\\delta=0.8$. The upper panel refers to EOB (resummed) \nmodels. 
The lower panel to PN (nonresummed) models. For ${\\rm EOB^{NLO}}$ effects, we \nuse their Pad\\'e representation, Eq.~\\eqref{eq:pade} with $\\bar{\\alpha}_1=3.5$. \nFor the 3PN$^{\\rm NLO}$ model, we use $\\bar{\\alpha}_1'=30$. }\n \\end{center}\n\\end{figure}\n\n\\begin{figure*}[t]\n\\begin{center}\n\\includegraphics[width=75 mm, height=60mm]{hb.eps}\n\\hspace{2.5 mm}\n\\includegraphics[width=75 mm, height=62mm]{2h.eps}\\\\\n\\vspace{3.0 mm}\n\\includegraphics[width=75 mm, height=60mm]{2b.eps}\n\\hspace{2.5 mm}\n\\includegraphics[width=75 mm, height=60mm]{sly.eps}\\\\\n\\vspace{3.0 mm}\n\\includegraphics[width=75 mm, height=60mm]{fps.eps}\n\\hspace{2.5 mm}\n\\includegraphics[width=75 mm, height=60mm]{bgn1h1.eps}\\\\\n\\caption{\\label{fig:fig4}Global comparison between EOB$^{\\rm NLO}$ and NR binding energies. \nWe use the values $(\\bar{\\alpha}_1,\\delta)=(1.25,1.2)$. The 3PN point-mass curve\nis added to guide the eye.}\n \\end{center}\n\\end{figure*}\nFigs.~\\ref{fig:fig3} and \\ref{fig:fig4} illustrate the complementary effects \nof $\\bar{\\alpha}_1$ and $\\delta$ at the level of the binding energy $E_b$.\n\nFig.~\\ref{fig:fig3} focuses on the 2B EOS model and contrasts (resummed) EOB\n(top panel) and (nonresummed) PN analytical representations of the binding energy.\nIn both cases, the NR binding energy is corrected by the same amount, that is\nwe assume that $\\delta$ takes its central value $\\delta=0.8$. 
\nWe see in this figure that the effect of the $\\delta$-correction is comparable\nto that of the added 1PN tidal contribution.\nNote that the value of the $\\bar{\\alpha}'_1$ parameter needed in the PN-expanded \ncase (bottom panel) is significantly larger than the ones needed\nin the EOB case (for all ways of modeling 1PN tidal contributions).\n\nFig.~\\ref{fig:fig4} illustrates the excellent agreement between the EOB\npredictions (here considered for $\\bar{\\alpha}_1=1.25$)\nand the ($\\delta$-corrected, with $\\delta=1.2$) numerical data for\nall EOS. The fact that the $\\chi^2$ minima exhibited in Fig.~\\ref{fig:fig2}\nare at a comparable level (for all $\\delta$'s in the range we considered) \nindicates that a similarly excellent NR\/EOB agreement would have been obtained \nall along an extended valley in the $(\\bar{\\alpha}_1,\\delta)$ plane. In view \nof Fig.~\\ref{fig:fig3}, the same would hold for the NR\/PN agreement, at the\ncost, however, of using, on average, significantly larger values of $\\bar{\\alpha}_1$.\n\nSummarizing: the recent numerical data of Ury${\\rm\\bar{u}}$ et al. do exhibit\nthe influence of tidal interactions in close BNS systems. However,\nthe presence of systematic errors in the data (due to an imperfect satisfaction \nof the helical-Killing-vector condition) partially masks the tidal interactions\nand does not allow for a clean determination of the coefficients\nparametrizing tidal effects (and notably their 1PN contributions).\n\nWe recommend that new non-conformally flat simulations be performed for\nseveral values of the radius $r_0$ at which the helical-Killing-vector condition\nis cut off. 
By studying the dependence of the results on $r_0$, it might\nbe possible to extrapolate the results to an infinite value of $r_0$\n(as used in analytical calculations), and thereby eliminate the \n3PN-level systematic error $\\delta \\, x^4$.\n\n\n\\section{Incorporating radiative tidal effects in the EOB formalism}\n\\label{sec:sec4}\n\nBesides the specific Hamiltonian (\\ref{eq:Heob}), (\\ref{eq:Heff}), the other key ingredients of the \nEOB formalism are: (i) a specific, ``factorized'' representation of the multipolar waveforms \n$h_{{\\ell } m}$, and (ii) a resummed estimate of the radiation reaction force ${\\mathcal F}$, \nwhich must be added to the conservative Hamiltonian dynamics (\\ref{eq:Heob}), (\\ref{eq:Heff}). \nIn the most recent, and seemingly most accurate, version of the EOB formalism, the radiation reaction \nis analytically computed in terms of the multipolar waveforms. Therefore, it will be enough to \nestimate here the ``tidal correction'' to the multipolar waveforms $h_{\\ell m}$. Following the \n``factorization'' philosophy of Refs.~\\cite{Damour:2009vw,Damour:2008gu}, we shall look \nfor tidal-correction factors $f_{{\\ell } m}^{\\rm tidal} = 1 + {\\mathcal O} (\\mu , \\sigma)$, such that \nthe EOB waveform would read\n\\be\n\\label{eq:4.1}\nh_{{\\ell } m} = f_{{\\ell } m}^{\\rm tidal} \\, h_{{\\ell } m}^0.\n\\end{equation}\nHere $h_{{\\ell } m}^0$ is the factorized BBH EOB waveform, \nintroduced in~\\cite{Damour:2008gu}, and augmented by\ntwo next-to-quasi-circular parameters $(a_1,a_2)$ in\nRef.~\\cite{Damour:2009kr}. 
\n[Note, however, that, in view of the smallness of the tidal effects on the waveform,\n $f_{{\\ell } m}^{\\rm tidal} - 1 \\ll 1$, it would be equivalent to use (as done for the $A(r)$\npotential) an {\\it additive} ansatz: $h_{{\\ell } m} = h_{{\\ell } m}^0 + h_{{\\ell } m}^{\\rm tidal}$.]\n\nIn principle, one can use the effective action (\\ref{eq:2.3}), (\\ref{eq:2.4}) to compute the tidal \ncontributions to the waveform with any required relativistic accuracy (post-Minkowskian and\/or \npost-Newtonian).\n\nHere, we shall focus on the leading PN-order tidal correction to the leading PN waveform, i.e. \nthe ${\\ell } = 2$, $m=2$ partial wave $h_{22}$. This will provide the leading tidal correction to the \nradiation reaction (which is predominantly given by a contribution $\\propto \\vert 2 \\, \\Omega \\, h_{22} \n\\vert^2$).\n\nIn that case, a shortcut for computing the tidal correction $f_{22}^{\\rm tidal}$ consists in noting that the \nquadrupolar gravito-electric contribution in the action (\\ref{eq:2.3}) corresponds to adding to the \nenergy-momentum tensor of the point masses an extra contribution $\\Delta \\, T_{(x)}^{\\mu\\nu} \\equiv \n2 \\, g^{-1\/2} \\, \\delta \\Delta S_{\\rm nonminimal} \/ \\delta g_{\\mu\\nu}$, which describes the tidally \ninduced quadrupole moment in each body $A$. At the leading ``Newtonian'' order, this means that \nthe quadrupole mass moment $M_{ij}$ of the system will be\n\\be\n\\label{eq:4.2}\nM_{ij} = \\sum_A {\\rm STF}_{ij} [M_A z_A^i \\, z_A^j + \\mu_2^A \\, G_{ij}^A] \\, ,\n\\end{equation}\nwhere STF denotes a symmetric trace-free projection, and where the second term is the tidally \ninduced quadrupole moment. 
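As a quick numerical sanity check (ours, purely illustrative, and not part of the original derivation), one can verify the STF reduction used in Eq.~(4.3): in the center-of-mass frame, the point-mass part of $M_{ij}$ reduces to $\mu \, r_{AB}^2 \, \hat n_{AB}^{ij}$, with $\hat n^{ij}={\rm STF}(n^i n^j)$. The masses and separation below are arbitrary illustrative numbers.

```python
# Check that sum_A M_A STF[z_A^i z_A^j] = mu * r_AB^2 * nhat^{ij}
# in the center-of-mass frame (point-mass part of Eq. (4.3)).
# Masses MA, MB and separation d are arbitrary illustrative numbers.

def outer(a, b):
    return [[a[i]*b[j] for j in range(3)] for i in range(3)]

def stf(q):
    """Symmetric trace-free part of a 3x3 matrix (nested lists)."""
    tr = (q[0][0] + q[1][1] + q[2][2]) / 3.0
    return [[0.5*(q[i][j] + q[j][i]) - (tr if i == j else 0.0)
             for j in range(3)] for i in range(3)]

MA, MB, d = 1.4, 1.0, 10.0                  # illustrative masses, separation
M, mu = MA + MB, MA*MB/(MA + MB)            # total and reduced mass
zA = [ MB/M*d, 0.0, 0.0]                    # center-of-mass positions
zB = [-MA/M*d, 0.0, 0.0]                    # along the x axis

QA, QB = outer(zA, zA), outer(zB, zB)
lhs = stf([[MA*QA[i][j] + MB*QB[i][j] for j in range(3)] for i in range(3)])

nhat = stf(outer([1.0, 0.0, 0.0], [1.0, 0.0, 0.0]))
rhs = [[mu*d**2*x for x in row] for row in nhat]
```

Both sides agree component by component, confirming that the two-body mass quadrupole collapses to the single reduced-mass expression used in the second line of Eq.~(4.3).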
Replacing the Newtonian value (\\ref{eq:2.6}) of $G_{ij}^A$ \n(computed using Eq.~(\\ref{eq:2.7})) yields\n\\begin{eqnarray}\n\\label{eq:4.3}\nM_{ij} &= &\\sum_A {\\rm STF}_{ij} \\left[ M_A \\, z_A^i \\, z_A^j + 3\\mu_2^A \\, GM_B \\, \\frac{z_{AB}^i \\, \nz_{AB}^j}{r_{AB}^5} \\right] \\nonumber \\\\\n&= &\\left( \\mu + \\sum_A 3 \\, \\mu_2^A \\, \\frac{GM_B}{r_{AB}^5} \\right) r_{AB}^2 \\, \\hat n_{AB}^{ij} \\, ,\n\\end{eqnarray}\nwhere $\\mu \\equiv M_A \\, M_B \/ (M_A + M_B)$ is the reduced mass of the binary system, and where \nwe reduced the first expression to the center-of-mass\nframe. Eq.~\\eqref{eq:4.3} agrees with \nEq.~(7) of~\\cite{Flanagan:2007ix} (in the limit where one neglects the\nexcitation of the internal radial modes: $x_n \\to 0$). In \naddition to the explicit tidal modification $\\propto \\mu_2^A$ that appears in the first factor of \nEq.~(\\ref{eq:4.3}), there is an implicit tidal effect coming from the fact that the EOB waveform is \nconventionally expressed in terms of the (instantaneous) orbital frequency $\\Omega$ of the binary \nsystem. We must then eliminate the relative distance $r_{AB}$ in Eq.~\\eqref{eq:4.3} in favor of \n$\\Omega$. This is done by using the adiabatic (quasi-circular) Kepler law. 
The latter is modified by \ntidal forces:\n\\begin{eqnarray}\n\\label{eq:4.4}\n\\Omega^2 z_{AB}^i &= &- \\frac{d^2 z_{AB}^i}{dt^2} = -\\frac{1}{\\mu} \\, \\frac{\\partial L}{\\partial z_{AB}^i} \n\\nonumber \\\\\n&= &\\frac{GM}{r_{AB}^3} \\, z_{AB}^i - \\frac{1}{\\mu} \\, \\frac{\\partial L^{\\rm tidal}}{\\partial z_{AB}^i} \\, .\n\\end{eqnarray}\n\nDifferentiating the leading $(\\ell = 2)$ tidal Lagrangian \\eqref{eq:2.14}, and keeping only the leading \n$(\\ell = 2)$ term yields a modified Kepler law of the form\n\\be\n\\label{eq:4.5}\n\\Omega^2 r_{AB}^3 = GM \\left[ 1+9 \\, \\frac{M_B}{M_A} \\, \\frac{G\\mu_2^A}{r_{AB}^5} + 9 \\, \\frac{M_A}\n{M_B} \\, \\frac{G\\mu_2^B}{r_{AB}^5} \\right] .\n\\end{equation}\nUsing (\\ref{eq:4.5}) to solve $r_{AB}$ in terms of $\\Omega$, \nand replacing the (tidally-corrected) answer in \\eqref{eq:4.3} \nfinally leads to a quadrupole moment of the form\n\\be\n\\label{eq:4.6}\nM_{ij} = f_{22}^{\\rm tidal} \\mu \\, r_{AB}^2 \\, \\hat n_{AB}^{ij}\n\\end{equation}\nwith a tidal-correction factor\n\\begin{eqnarray}\n\\label{eq:4.7}\nf_{22}^{\\rm tidal} &= &1 + \\sum_A \\, 3 \\, \\frac{G\\mu_2^A}{r_{AB}^5} \\left( \\frac{M_B}{\\mu} + \n2 \\, \\frac{M_B}{M_A} \\right) \\nonumber \\\\\n&= &1 + \\sum_A \\, 3 \\, \\frac{G\\mu_2^A}{r_{AB}^5} \\left( 1 + 3 \\, \\frac{M_B}{M_A} \\right) \\nonumber \\\\\n&= &1 + \\sum_A \\, 2 \\, k_2^A \\left( \\frac{R_A}{r_{AB}} \\right)^5 \\left( 1 + 3 \\, \\frac{M_B}{M_A} \\right) \\, .\n\\nonumber \\\\\n\\end{eqnarray}\n\nThe factor $f_{22}^{\\rm tidal}$ is the ${\\ell } = 2$, $m=2$ tidal-correction factor which was introduced in \nEq.~(\\ref{eq:4.1}). It remains, however, to eliminate $r_{AB}$ in terms of $\\Omega$, or, as used in \nthe waveform of Ref.~\\cite{Damour:2008gu}, in terms of the EOB variable \n$v_{\\Omega} \\equiv r_{\\Omega} \\, \\Omega$ introduced in~\\cite{Damour:2006tr}: \nat the leading order it is enough to use \n$GM \/ c_0^2 \\, r_{AB} = v_{\\Omega}^2 (1+{\\mathcal O} (1\/c_0^2))$. 
This yields\n\\begin{align}\n\\label{eq:4.8}\n&f_{22}^{\\rm tidal} = 1 \\nonumber\\\\\n& + \\left( \\sum_A \\, 2 \\, k_2^A \\left( \\frac{R_A \\, c_0^2}{G(M_A + M_B)} \\right)^5 \n\\left( 1+3 \\, \\frac{M_B}{M_A} \\right)\\right) v_{\\Omega}^{10} \\, . \n\\end{align}\nThe result (\\ref{eq:4.8}) agrees (after squaring it) with Eq.~(8c) of \nRef.~\\cite{Flanagan:2007ix} (in the limit $x_n \\to 0$).\n\nSummarizing: we propose to incorporate radiative tidal effects in the EOB formalism by \ninserting in the dominant ${\\ell } = 2$, $m=2$\nwaveform a factor of the form \n\\begin{align}\n\\label{eq:4.8pn}\nf_{22}^{\\rm tidal} &= 1 \\nonumber\\\\\n&+ \\left( \\sum_A \\, 2 \\, k_2^A \\left( \\frac{R_A \\, c_0^2}{G(M_A + M_B)} \\right)^5 \n\\left( 1+3 \\, \\frac{M_B}{M_A} \\right)\\right) v_{\\Omega}^{10}\\nonumber\\\\\n&\\times\\left(1+\\beta_1 v_{\\Omega}^2\\right) \\, ,\n\\end{align}\nwhere we included a possible 1PN correction to radiative tidal effects.\nOne then computes a tidal-corrected radiation reaction by using \nthis corrected waveform in the definition of ${\\mathcal F}$ \ngiven in~\\cite{Damour:2008gu} and~\\cite{Damour:2009vw}. In principle, the\n(mass-ratio-dependent) coefficient $\\beta_1$ can be computed analytically. It can\nalso be ``calibrated'' by comparing NR data of inspiralling BNS systems to\nthe EOB predictions.\n\n\\section{EOB predictions for the motion and radiation of inspiralling compact binaries}\n\\label{sec:sec5}\n\nHaving defined a specific EOB way of incorporating tidal effects in the motion and radiation of \ninspiralling compact binaries (BNS or BHNS), let us study the predictions made by the resulting \ntidally-extended EOB formalism.\n\n\\subsection{Adiabatic inspiral, ``last stable orbit'', and ``contact''}\n\nLet us start by considering the {\\it adiabatic} approximation to the inspiral, i.e. the approximation in \nwhich the inspiral is described as a sequence of circular orbits. 
In this approximation, a key concept \nis that of the Last Stable (circular) Orbit (LSO).\nWe saw above the equation determining, in the EOB formalism, the \nsequence of circular orbits, Eq.~\\eqref{eq:5.2}.\nFor large values of $p_{\\varphi}$, and large values of $r$ (i.e. small values of $u = 1\/r$), \nEq.~(\\ref{eq:5.2}) has a unique solution $r = 1\/u \\simeq p_{\\varphi}^2$, corresponding to Newtonian \ncircular orbits. However, when $p_{\\varphi}^2$ decreases (as it does along the sequence of \ninspiralling orbits driven by radiation reaction), the sequence of stable circular orbits will \nterminate at certain values $r_{\\rm LSO} \\equiv 1\/u_{\\rm LSO}$, $p_{\\varphi_{\\rm LSO}}^2$ where \nthere exists a double root of Eq.~(\\ref{eq:5.2}), i.e. a common root of Eq.~(\\ref{eq:5.2}) and\n\\be\n\\label{eq:5.3}\nA''(u) + p_{\\varphi}^2 \\, B''(u) = 0 \\, .\n\\end{equation}\nThe condition determining the radial location of the Last Stable Orbit (LSO)\nis the vanishing of the determinant\n\\begin{eqnarray}\n\\label{eq:5.4}\n\\left\\vert \\begin{matrix} A' &B' \\\\ A'' &B'' \\end{matrix} \\right\\vert_{\\rm LSO} &= &A' (u_{\\rm LSO}) \\, \nB'' (u_{\\rm LSO}) \\nonumber \\\\\n&- &A'' (u_{\\rm LSO}) \\, B' (u_{\\rm LSO}) = 0 .\n\\end{eqnarray}\nFor instance, in the test-mass limit, and in absence of tidal corrections, i.e. for $A(u) = 1-2u$, \n$B(u) = u^2 \\, A(u) = u^2 - 2 \\, u^3$, Eq.~(\\ref{eq:5.4}) reads $-4 \\, (1-6 \\, u_{\\rm LSO}) = 0$, so that \nwe recover the classic result $r_{\\rm LSO} = 1\/u_{\\rm LSO} = 6$ (i.e. $r_{\\rm LSO}^{\\rm phys} = \n6 \\, GM$) for the LSO around a Schwarzschild black hole. On the other hand, when inserting in \nEq.~(\\ref{eq:5.4}) \nthe complete value of the $A$ function, i.e. 
the sum (\\ref{eq:3.5}), where $A^0 (r;\\nu)$ is given \nby Eq.~(\\ref{eq:3.4}), and $A^{\\rm tidal} (r)$ by Eq.~(\\ref{eq:3.6}), we see that the LSO predicted by \nthe EOB formalism will depend both on the symmetric mass ratio $\\nu$, and on the EOB tidal \nconstants $\\kappa_{\\ell }^{\\rm T}$, Eq.~(\\ref{eq:3.7}). More precisely, these two types of effects (the \n$\\nu$-dependent ones which exist already in BBH systems, and the tidal-dependent ones which exist \nonly in BHNS and BNS systems) act in opposite directions. Indeed, the $\\nu$-dependent \ncontributions tend to make the radial potential $A(r)$ less attractive (see Eq.~(\\ref{eq:3.3})), while \nthe tidal ones make $A(r)$ more attractive. As a consequence, $\\nu$-effects tend to move the radial \nlocation of the LSO towards smaller values ($r_{\\rm LSO} (\\nu) < 6 \\, GM$), while tidal effects tend to \nmove $r_{\\rm LSO}$ towards larger values. To avoid gauge effects, it is convenient to measure the \nlocation of the (adiabatic) LSO in terms of the corresponding (real) orbital frequency\n\\be\n\\label{eq:5.5}\n\\Omega = \\frac{\\partial H_{\\rm EOB}}{\\partial \\, p_{\\varphi}^{\\rm phys}} = \\frac{1}{GM\\mu} \\, \\frac{\\partial \nH_{\\rm EOB}}{\\partial \\, p_{\\varphi}} \\, .\n\\end{equation}\nFinally, we conclude that the dimensionless orbital frequency $GM\\Omega$ at the LSO is a \nfunction of the dimensionless parameters $\\nu$, $\\kappa_{\\ell }^{\\rm T}$ which tends to \n{\\it increase} as $\\nu$ increases, and to {\\it decrease} as $\\kappa_{\\ell }^{\\rm T}$ increases. \nWe have seen above that the tidal coefficients $\\kappa_{\\ell }^{\\rm T}$ generically take rather large \nnumerical values, of order $\\kappa_2^{\\rm T} = {\\mathcal O} (100)$, when\n$\\ell=2$, see Table~\\ref{tab:table1}. \nHowever, they enter the $A$ function at a higher order in $u$ than the\n$\\nu$-dependent effects. 
As a consequence, \nthe combination of the influences of $\\nu$ and $\\kappa_2^{\\rm T}, \\kappa_3^{\\rm T} , \\ldots$ leads to orbital LSO \nfrequencies which are sometimes larger and sometimes smaller than the ``Schwarzschild value'' \n$GM\\Omega_{\\rm Schw} = 6^{-3\/2} = 0.06804$. This is illustrated in\nTable~\\ref{tab:table3}, \nwhich lists the values of {\\it twice} the orbital frequency \n(corresponding to the adiabatic gravitational wave \nfrequency $\\omega_{{\\ell } m}$ for the dominant mode ${\\ell } = m = 2$) for several compactnesses \n($0.13$, $0.17$, $0.17385$, $0.5$) and for two paradigmatic systems: an equal-mass BNS system \nand a binary black hole system (labelled by its formal compactness $c=0.5$).\nHere we took the piece-wise polytropic SLy EOS. Note that one NS mass is\nsmaller than the ``canonical'' $1.35M_\\odot$ so as to explore a smaller compactness.\nIf needed, one can convert the dimensionless frequency $2 \\, GM \\Omega$ into Hz\nby using $GM_{\\odot} = 4.925490947 \\, \\mu$s ($= 1.476625038$~km), so that \nthe conversion factor between $\\hat\\omega = GM \\omega$ and $f = \\omega \/ 2\\pi$ is\n\\be\n\\label{eq:5.6}\nf = \\frac{\\hat \\omega}{2\\pi \\, GM} = 32.3125 \\, \\hat\\omega \\left( \\frac{M_{\\odot}}{M} \\right) {\\rm kHz} \\, .\n\\end{equation}\nWe see that, in a BNS system, the LSO frequency is smaller than the\n``Schwarzschild value'' $2 \\, GM \\Omega_{\\rm Schw} =1\/(3\\sqrt{6})= 0.136083$\nfor compactnesses smaller than about 0.1704. For such systems the radius of \nthe LSO is larger than the canonical Schwarzschild value $6\\,GM$.\nNote, by comparing BNS systems to BBH ones, how tidal effects can significantly change\nthe LSO frequency, by more than a factor of two! The results shown in Table~\\ref{tab:table3}\nhave been computed using the leading-order, non-PN-corrected EOB description\nof tidal effects. 
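The test-mass entries of Table~\ref{tab:table3} and the conversion factor in Eq.~(5.6) can be reproduced in a few lines. The sketch below (plain Python; ours, purely illustrative, with an arbitrarily chosen bisection bracket) solves the LSO condition Eq.~(5.4) for the Schwarzschild case $A(u)=1-2u$, $B(u)=u^2 A(u)$, then converts the resulting dimensionless frequency to kHz.

```python
import math

# Test-mass (Schwarzschild) LSO from the determinant condition Eq. (5.4):
#   A'(u) B''(u) - A''(u) B'(u) = 0,  with A = 1 - 2u, B = u^2 - 2u^3.
def det_lso(u):
    Ap, App = -2.0, 0.0                    # A'(u), A''(u)
    Bp, Bpp = 2*u - 6*u**2, 2 - 12*u       # B'(u), B''(u)
    return Ap*Bpp - App*Bp                 # reduces to -4*(1 - 6u)

lo, hi = 0.1, 0.3                          # illustrative bisection bracket
for _ in range(60):
    mid = 0.5*(lo + hi)
    if det_lso(lo)*det_lso(mid) <= 0:
        hi = mid
    else:
        lo = mid
u_lso = 0.5*(lo + hi)                      # -> 1/6, i.e. r_LSO = 6 GM

two_GM_Omega = 2.0 * u_lso**1.5            # Kepler: GM*Omega = u^(3/2)

# Eq. (5.6): f = 32.3125 * omega_hat * (Msun/M) kHz, omega_hat = GM*omega
GMsun_s = 4.925490947e-6                   # GMsun/c^3 in seconds
prefactor_kHz = 1e-3 / (2*math.pi*GMsun_s) # recovers the 32.3125 kHz factor
f_kHz = prefactor_kHz * two_GM_Omega / 2.7 # for M = 2.7 Msun
```

The bisection recovers $u_{\rm LSO}=1/6$ and $2GM\Omega = 0.136083$, the BBH$_{\nu=0}$ entry of Table~\ref{tab:table3}; for an equal-mass $2\times 1.35M_\odot$ system, Eq.~(5.6) then gives $f\approx 1.63$~kHz.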
The inclusion of next-to-leading order effects, notably with\n$\\bar{\\alpha}_{1}\\sim 6$, would double the effect of tidal interactions at the\nLSO and would therefore significantly affect the numbers listed in the table.\n\n\\begin{table*}[t]\n\\caption{\\label{tab:table3} Adiabatic LSO information for BNS and BBH systems.\nThe NS models are built using the piece-wise polytropic SLy EOS. From left to \nright, the columns report: the composition of the binary, the compactness $c$\nof the objects, the NS mass $M$, the NS radius $R$, twice the orbital\nfrequency at the adiabatic (EOB) LSO $2GM\\Omega_{\\rm LSO}^{\\rm adiab}$,\nthe corresponding LSO radius $r_{\\rm LSO}\/GM$, the ``contact'' frequency \n$2GM\\Omega^{\\rm contact}$ and the corresponding radial distance $r^{\\rm contact}\/GM$. }\n\\begin{center}\n \\begin{ruledtabular}\n \\begin{tabular}{lccccccc}\n System & $c$ & $M$ [$M_\\odot$] & $R$ [km] & $2GM\\Omega_{\\rm\n LSO}^{\\rm adiab}$ & $r_{\\rm LSO}\/GM$ & $2GM\\Omega^{\\rm contact}$\n & $r^{\\rm contact}\/GM$ \\\\\n \\hline \\hline\n BNS$^{\\rm LO}$ & $0.13$ & 1.0050 & 11.417 & 0.10208 & 7.3991 & 0.09590 & 7.6923 \\\\\n BNS$^{\\rm LO}$ & $0.17$ & 1.3205 & 11.470 & 0.13605 & 6.0111 & 0.14060 & 5.8824 \\\\\n BNS$^{\\rm LO}$ & $0.17385$ & 1.35 & 11.466 & 0.13902 & 5.9163 & 0.145061 & 5.7521 \\\\\n\\hline\n BNS$_{\\bar{\\alpha}_1=7.0}^{\\rm NLO}$ & $0.13$ & 1.0050 & 11.417 & 0.09056 & 8.1698 & 0.09834 & 7.6923 \\\\\n BNS$_{\\bar{\\alpha}_1=7.0}^{\\rm NLO}$ & $0.17385$ & 1.35 & 11.466 & 0.12185 & 6.5120 & 0.148750 & 5.7521 \\\\\n\\hline\n BBH$_{\\nu=1\/4}$ & $0.5$ & $\\dots$ & $\\dots$ & 0.19285 & 4.6186 & $\\dots$ & $\\dots$ \\\\\n BBH$_{\\nu=0}$ & $0.5$ & $\\dots$ & $\\dots$ & 0.13608 & 6.0000 & $\\dots$ & $\\dots$ \n \\end{tabular}\n\\end{ruledtabular}\n\\end{center}\n\\end{table*}%\n\nIn some BNS systems the concept of LSO and LSO frequency has only a formal meaning \nbecause the two NS's enter in contact (slightly) before reaching the LSO. 
This is\nillustrated in Table~\\ref{tab:table3}, \nwhich also lists the value of (twice) the orbital frequency at the moment of ``contact'', i.e. when \nthe EOB radial separation $R$ becomes equal to the sum of the two (areal) radii $R_A + R_B$. [We \nuse $R_B = 2 \\, GM_B$ when the companion is a BH.] \nNote that it is approximately given by the simple analytical formula\n\\be\n2GM\\Omega_{\\rm contact}\\approx 2\\left(\\dfrac{X_A}{c_A} + \\dfrac{X_B}{c_B}\\right)^{-3\/2}.\n\\end{equation}\nThis definition of ``contact'' relies on the use of \nthe EOB radial coordinate. As this coordinate is a smooth deformation of the usual areal coordinate, \nwe think that it is a reasonable definition, and we propose here to use the EOB description up to the \nmoment when either the two objects enter in contact or (if it happens earlier) the orbital \nfrequency $\\Omega$ reaches a maximum. Note also that Table~\\ref{tab:table3}\nillustrates the possible effect (for $c=0.13$) of NLO (1PN) tidal\ncontributions. \nThis effect is very significant. The table indicates that, when using a\n``Taylor'' model with $\\bar{\\alpha}_1=7.0$ (which was the minimum of $\\chi^2$\nfor the central value of $\\delta$), the ordering of the LSO and contact radii\nchanges. In the absence of the 1PN correction, contact was reached {\\it before} the LSO, \nwhile with $\\bar{\\alpha}_1=7.0$ contact is reached {\\it after}\nthe LSO, which means that the system undergoes a short ``plunge phase'' \nbefore entering in contact.\n\n\n\nIn addition to the discussion of the frequency at the moment of contact\n(i.e. 
when $R = R_A + R_B$) let us also consider the \ndimensionless parameter measuring the \ntidal deformation of the NS labelled $A$ by its companion $B$\n\\be\n\\label{eq:5.7}\n\\epsilon_A = \\frac{M_B}{R^3} \\, \\frac{R_A^3}{M_A} \\, .\n\\end{equation}\nAt contact, $(R = R_A + R_B)$, this parameter can be expressed in terms of the two compactnesses \n$c_A = GM_A \/ R_A$ and $c_B = GM_B \/ R_B$ as\n\\be\n\\label{eq:5.8}\n\\epsilon_A^{\\rm contact} = \\frac{c_B}{c_A} \\, \\frac{R_A^2 \\, R_B}{(R_A + R_B)^3} \\, .\n\\end{equation}\nFor a symmetric, equal-mass BNS system, we see that, upon contact, $\\epsilon_A^{\\rm contact} = \n\\epsilon_B^{\\rm contact} = 1\/8$. It was found in~\\cite{Damour:2009vw}, and briefly recalled above, \nthat the fractional deformation of the NS $A$ is given by the product $h_2^A \\, \\epsilon_A$, where \nthe ``shape'' Love number $h_2^A$ is of order $0.8$ for a typical NS compactness. \nThis means that, in a symmetric (or near symmetric) BNS system each NS is only deformed by \nabout $10\\%$ at the moment of contact. This motivates our proposal of using the EOB description \nup to the moment of contact.\n\nIn the case of asymmetric BHNS systems (with $A$ labelling the NS and $B$ the BH) we can reach a \nsimilar general conclusion by noticing that the dimensionless function $R_A^2 \\, R_B \/ (R_A + \nR_B)^3$ (which depends only on the ratio $R_A \/ R_B$) reaches a maximum value of $2^2 \/ 3^3 = \n4\/27$ when $R_A = 2 \\, R_B$. As a consequence, we have the general inequality\n\\be\n\\label{eq:5.9}\n\\epsilon_A^{\\rm contact} \\leq \\frac{4}{27} \\, \\frac{c_B}{c_A} \\, .\n\\end{equation}\n\nIn the present case, $B$ denotes a BH (with $c_B = \\frac{1}{2}$) so that $\\epsilon_A^{\\rm contact} \\leq \n2\/(27 \\, c_A) = 0.074074 \/ c_A$. Upon multiplication by $h^A_2 \\sim 0.8$ this yields \n$h^A_2 \\, \\epsilon_A^{\\rm contact} \\lesssim 0.06 \/ c_A$. 
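The closed-form contact estimates above are easy to verify numerically. The sketch below (ours, purely illustrative; the scan grid is an arbitrary choice) checks the approximate contact-frequency formula against the equal-mass $c=0.17$ entry of Table~\ref{tab:table3}, the equal-mass value $\epsilon^{\rm contact}=1/8$ of Eq.~(5.8), and the maximum $4/27$ of $R_A^2 R_B/(R_A+R_B)^3$.

```python
# Contact frequency estimate 2GM*Omega_contact ~ 2 (X_A/c_A + X_B/c_B)^(-3/2)
# and the contact deformation parameter eps^contact of Eq. (5.8).

def contact_freq(XA, XB, cA, cB):
    return 2.0 * (XA/cA + XB/cB)**-1.5

def eps_contact(cA, cB, RA, RB):
    return (cB/cA) * RA**2 * RB / (RA + RB)**3

# Equal-mass BNS with c = 0.17: X_A = X_B = 1/2.
f_est = contact_freq(0.5, 0.5, 0.17, 0.17)   # close to 0.14060 from Table 3

# Equal-mass, equal-radius system: eps^contact = 1/8.
eps_eq = eps_contact(0.17, 0.17, 11.47, 11.47)

# Maximum of R_A^2 R_B / (R_A + R_B)^3 over the radius ratio r = R_A/R_B,
# attained at r = 2 (illustrative scan grid).
ratios = [0.01*k for k in range(1, 1001)]
g_max = max(r**2 / (r + 1.0)**3 for r in ratios)  # -> 4/27
```

The analytic contact-frequency estimate lands within a few parts in a thousand of the tabulated EOB value, and the scan confirms the $4/27$ bound used in Eq.~(5.9).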
As NS compactnesses are expected to be \nlarger than about $0.13$, we find that the NS in a BHNS system is expected always to be deformed \nby less than $50\\%$ up to the moment of ``contact'' with its BH companion. Actually, the reasoning \nabove shows that such large deformations are only attained when $R_A = 2 \\, R_B$, i.e. \nwhen the mass ratio is equal to\n\\be\n\\label{eq:5.10}\n\\frac{M_B}{M_A} = \\frac{c_B}{c_A} \\, \\frac{R_B}{R_A} = \\frac{1}{2} \\, \\frac{c_B}{c_A} = \\frac{1}{4 \\, c_A} \\, .\n\\end{equation}\nFor typical NS compactnesses $c_A \\sim 0.15$, such a mass ratio $M_B \/ M_A \\sim 1.67$ would \ncorrespond to a BH of a small mass ($M_B \\sim 2.3 \\, M_{\\odot}$). Larger BH masses will lead to \nsmaller deformations of the NS. \n\n\nSummarizing, the main conclusions of this subsection are that: (i) the EOB\nformalism predicts that the ``quasi point mass'' description can be applied \nup to contact, without the possibility of a \ndisruption of the NS's in a well detached state, and (ii) the divide between\nthe systems that undergo a plunge before contact and those that don't depends\nstrongly both on the compactness and on currently unknown higher PN corrections\nto tidal effects. \n\nTo end this subsection, let us mention that our results are {\\it robust} under the choice of the EOB \nparameters $a_5$ and $a_6$ entering the BBH radial $A^0 (r)$ potential, Eq.~(\\ref{eq:3.4}). The \ncomparison between the currently most sophisticated version of the EOB formalism and the most \naccurate numerical relativity simulations has constrained the pair of parameters $(a_5 , a_6)$ to \nlie within a rather thin banana-like region in the $(a_5 , a_6)$ plane. We have checked that the \nresults that we present in this paper are quite insensitive to the choice of $a_5$ and $a_6$ within this \n``good'' region. The default values that we use in the present paper are $a_5 = -6.37$, $a_6 = +50$, \nwhich lie in the ``good'' region. 
To illustrate the insensitivity of our\nresults to this choice, let us mention that the value of twice\nthe orbital frequency at the LSO, $2M\\Omega_{\\rm LSO}^{\\rm EOB} (a_5 , a_6)$\n(for an equal-mass BNS system and for $c=0.17$),\nchanges from the value $0.13605$, quoted in Table~\\ref{tab:table3}, to the\nnew value $0.13603$ for $a_5=-4$ and $a_6=24$, which lie near the upper boundary\nof the good region of parameters discussed in Ref.~\\cite{Damour:2009kr}. \n\n\\subsection{Phasing and waveform from the non-adiabatic inspiral of \ntidally interacting compact binaries}\n\\label{sec:phasing}\n\nLet us now consider the motion and radiation of tidally interacting binaries predicted by the full EOB \nformalism, i.e. beyond the adiabatic approximation. This is obtained by integrating the EOB equations \nof motion\n$$\n\\frac{dr}{dt} = a(r) \\, \\frac{\\partial \\hat H_{\\rm EOB}}{\\partial \\, p_{r_*}} \\, ,\n$$\n$$\n\\frac{dp_{r_*}}{dt} = - a(r) \\, \\frac{\\partial \\hat H_{\\rm EOB}}{\\partial \\, r} \\, ,\n$$\n$$\n\\frac{d\\varphi}{dt} = \\frac{\\partial \\hat H_{\\rm EOB}}{\\partial \\, p_\\varphi} \\, ,\n$$\n\\be\n\\label{eq:5.11}\n\\frac{dp_\\varphi}{dt} = \\hat{\\mathcal F}_{\\varphi} \\, , \n\\end{equation}\nwhere $a(r) \\equiv AD^{-1\/2}$, $\\hat H_{\\rm EOB} (r,p_{r_*} , p_\\varphi) \\equiv H_{\\rm EOB} \/ \\mu$, \nwith $H_{\\rm EOB}$ defined by Eq.~(\\ref{eq:Heob}) above, and where the (scaled) radiation reaction \n$\\hat{\\mathcal F}_{\\varphi} = {\\mathcal F}_{\\varphi} \/ \\mu$ is defined in the\nimproved way introduced in~\\cite{Damour:2009vw} (see Eq.~(3) there), i.e. by\nsumming over $\\ell$ and $m$ the adiabatic multipolar partial \nfluxes corresponding to the newly resummed multipolar waves $h_{{\\ell } m}$ (including the tidal \ncorrection (\\ref{eq:4.8}) in $h_{22}$). 
In addition, we recall that $r \\equiv R\/GM$, $t \\equiv T\/GM$, \n$p_{\\varphi} \\equiv P_{\\varphi} \/ GM\\mu$, and that the function $A(r)$ is here defined as the sum \n(\\ref{eq:3.5}). Concerning the other metric coefficient $D^{-1} (r)$ (entering the auxiliary function \n$a \\equiv (A\/B)^{1\/2} \\equiv AD^{-1\/2}$) we replace it by its standard resummation $(u \\equiv 1\/r)$\n\\be\n\\label{eq:5.12}\nD^{-1} (r) = 1+6 \\, \\nu \\, u^2 + 2 \\, (26 - 3 \\, \\nu) \\, \\nu \\, u^3 \\, .\n\\end{equation}\n\nThe solution of the ODE's (\\ref{eq:5.11}) is then inserted in the newly resummed (and tidally \ncompleted) multipolar waves $h_{{\\ell } m}$ to compute the waveform emitted by the inspiralling \ncompact binary. Here, we shall focus on the ${\\ell } = 2$, $m=2$ dominant asymptotic waveform \n$\\lim_{R \\to \\infty} (R \\, h_{22})$. \nScaling it by $G\\mu \\equiv GM\\nu$ and decomposing it in amplitude and phase,\n\\be\n\\label{eq:5.13}\n\\frac{R}{GM} \\, \\frac{h_{22}}{\\nu} = A_{22} (t) \\, e^{-{\\rm i} \\phi_{22}(t)} \\, ,\n\\end{equation}\nwe can then consider the dominant ``metric'' gravitational wave frequency $\\omega_{22}(t) \n\\equiv d \\, \\phi_{22} (t) \/ dt$. [Note that all these quantities are\n dimensionless. In particular $\\omega_{22} \\equiv GM\\omega_{22}^{\\rm phys}$.]\n\nUp to now we have discussed an extension of the EOB formalism which \nincorporates tidal effects in both the motion and the radiation of \ncompact binaries. However, it has been advocated~\\cite{Flanagan:2007ix,Read:2009yp,Hinderer:2009}\nto incorporate tidal effects as a modification of one of the non-resummed \n``post-Newtonian''-based ways of describing the dynamics of inspiralling\nbinaries. 
\nIn particular, the recent Ref.~\\cite{Hinderer:2009} uses as baseline a\ntime-domain T4-type incorporation of tidal effects.\nTo be precise, let us recall that the phasing of the T4 approximant is defined\nby the following ODEs:\n\\begin{align}\n\\frac{d\\phi_{22}^{\\rm T4}}{dt} &= 2 \\, x^{3\/2}, \\nonumber\\\\\n\\label{eq:T4bis}\n\\frac{dx}{dt} &= \\frac{64}{5} \\, \\nu \\, x^5 \\,\\left\\{ a_{3.5}^{\\rm Taylor} (x)\n+ a^{\\rm tidal}(x)\\right\\}\n\\end{align}\nwhere $a_{3.5}^{\\rm Taylor}$ is the PN-expanded expression describing\npoint-mass contributions, and where $a^{\\rm tidal}$ \nis given in the equal-mass case by~\\cite{Flanagan:2007ix}\n\\be\n\\label{t4:lo}\na^{\\rm tidal}(x)=26 \\, \\kappa_2^{\\rm T} x^5.\n\\end{equation}\nHere we shall analyze the (metric) GW phase $\\phi_{22}$ as a function of\nthe corresponding dimensionless frequency $\\omega_{22}$ and study the\ninfluence on it of tidal effects. More precisely, we give here two different \ncomparisons between the EOB predictions and the T4\nones. In these two comparisons, we keep T4 unchanged and defined by\nEq.~\\eqref{eq:T4bis}, with a tidal contribution of the {\\it leading-order} (LO)\ntype~\\eqref{t4:lo}. On the other hand, we compare this tidal-T4 model to two\ndifferent tidal-EOB models; both models use a tidally modified $A$ function, \nEq.~\\eqref{eq:3.5}. One model ($\\rm EOB^{LO}$) uses the LO $A^{\\rm tidal}$,\nEq.~\\eqref{eq:3.6}, while the other one ($\\rm EOB^{NLO}$) uses \nthe {\\it Taylor} NLO $A^{\\rm tidal}$, Eq.~\\eqref{eq:linear}, with $\\bar{\\alpha}_1=7$. 
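To illustrate how strongly the LO tidal term~\eqref{t4:lo} accelerates the T4 frequency evolution, one can integrate Eq.~\eqref{eq:T4bis} directly. The sketch below is a toy model of ours, not the configuration used in the figures: it truncates $a_{3.5}^{\rm Taylor}\to 1$ (Newtonian point-mass piece only) and adopts an illustrative $\kappa_2^{\rm T}=400$.

```python
# Minimal RK4 integration of the T4 phasing ODEs, Eq. (T4bis):
#   dphi/dt = 2 x^(3/2),   dx/dt = (64/5) nu x^5 [a_Taylor(x) + a_tidal(x)].
# Toy simplifications (ours): a_Taylor -> 1, a_tidal = 26 kappa2T x^5 with
# an illustrative kappa2T = 400 (not a fitted or tabulated value).

NU = 0.25            # symmetric mass ratio, equal masses
KAPPA2T = 400.0      # illustrative tidal coupling

def rhs(state, kappa):
    phi, x = state
    dphi = 2.0 * x**1.5
    dx = (64.0/5.0) * NU * x**5 * (1.0 + 26.0*kappa*x**5)
    return (dphi, dx)

def evolve(kappa, x0=0.1, t_end=400.0, h=0.05):
    """Classical RK4 in the dimensionless time t = T/(GM)."""
    n = int(round(t_end / h))
    s = (0.0, x0)
    for _ in range(n):
        k1 = rhs(s, kappa)
        k2 = rhs((s[0]+0.5*h*k1[0], s[1]+0.5*h*k1[1]), kappa)
        k3 = rhs((s[0]+0.5*h*k2[0], s[1]+0.5*h*k2[1]), kappa)
        k4 = rhs((s[0]+h*k3[0], s[1]+h*k3[1]), kappa)
        s = (s[0] + h*(k1[0]+2*k2[0]+2*k3[0]+k4[0])/6.0,
             s[1] + h*(k1[1]+2*k2[1]+2*k3[1]+k4[1])/6.0)
    return s

phi_pm, x_pm = evolve(0.0)        # point-mass run
phi_tid, x_tid = evolve(KAPPA2T)  # tidal run
```

Over the same evolution time, the tidal run reaches a higher $x$ (i.e. a higher GW frequency) and accumulates more phase than the point-mass run, which is the frequency-domain signature of the dephasing discussed next. In the point-mass case the toy ODE even has the closed-form solution $x(t)=(x_0^{-4}-\tfrac{256}{5}\nu t)^{-1/4}$, against which the integrator can be checked.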
\nHere we consider an equal-mass BNS system modelled using the 2H EOS, with\ncompactness $c=0.13097$, mass $M=1.35M_\\odot$ and radius $R=15.23$~km.\n\nThe quantity which is plotted in Fig.~\\ref{fig:fig5} is the difference \n$\\Delta\\phi_{22}^{\\rm EOBT4}(\\omega_{22})\\equiv\\phi_{22}^{\\rm EOB^X}(\\omega_{22})-\\phi_{22}^{\\rm T4}(\\omega_{22})$,\nwhere the label X on EOB takes two values, X=LO for the leading-order model\nand X=NLO for the next-to-leading-order model.\nTo compute this quantity we took into account possible shifts in both \n$t \\, (t^{\\rm T4} = t^{\\rm EOB} + \\tau)$ and \n$\\phi \\, (\\phi^{\\rm T4} = \\phi^{\\rm EOB} + \\alpha)$. We use here \nthe ``two-frequency pinching'' technique of Ref.~\\cite{Damour:2007vq} \nto fix suitable values of the shifts\n$\\tau$ and $\\alpha$, with two pinching frequencies close to\n$450$~Hz. In other words, the phase differences displayed in our figure show\nthe phase differences accumulated for frequencies between 450~Hz and the \ncontact. Though the figure does not display the phase differences below 450~Hz,\nwe have checked that they remain much smaller than what they become for\nfrequencies higher than 450~Hz.\n\nIn Fig.~\\ref{fig:fig5}, the solid line (black online) displays \n$\\Delta\\phi_{22}^{\\rm EOBT4}(\\omega_{22})=\\phi_{22}^{\\rm EOB^{LO}}(\\omega_{22})-\\phi_{22}^{\\rm T4}(\\omega_{22})$,\nand the dashed line (red online) \n$\\Delta\\phi_{22}^{\\rm EOBT4}(\\omega_{22})=\\phi_{22}^{\\rm EOB^{NLO}}(\\omega_{22})-\\phi_{22}^{\\rm T4}(\\omega_{22})$.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=75 mm, height=60mm]{fig5.eps}\n\\caption{\\label{fig:fig5}Accumulated GW phase difference (versus GW frequency $\\omega_{22}$) \nbetween tidal-EOB (quadrupolar) waveforms and a Taylor-T4-based PN \nwaveform with (leading-order) tidal corrections, Eq.~\\eqref{eq:T4bis}.\nWaveforms have been suitably aligned (subtracting a relative time and phase shift)\n at low frequencies. 
The circles on the plot indicate, for each curve, the dephasing accumulated \n up to the ``contact'' frequencies.}\n \\end{center}\n\\end{figure}\nThe two circles on the curves indicate the final moments of\n``contact''. We added two vertical (dashed) lines corresponding \nto 500~Hz and 1~kHz.\n\nThe main messages that one can draw from this figure are: i) the relative\ndephasing between EOB and T4 (using the same tidal model) grows by more \nthan two radians up to contact; ii) the inclusion\nof higher-order PN tidal contributions further increases\nthe relative dephasing by nearly two radians more.\nNote that even if one stops the evolution around 1~kHz (which is within the\nsensitivity of some possible configurations of Advanced LIGO) the previously\ndiscussed accumulated dephasings are still larger than one radian. \nThis indicates that the GW phasing of the ultimate part of the BNS inspiral\nis very sensitive to tidal effects and also very sensitive to their precise\nanalytical modelling, including higher-order PN corrections.\nThis makes it urgent to do high-accuracy comparisons between accurate NR\nsimulations of BNS inspiral and EOB models, so as to accurately ``calibrate''\nthe EOB description of higher-order PN tidal contributions. \n\n\\section{Conclusions}\n\\label{conclusions}\n\nWe discussed an extension of the EOB formalism which includes tidal effects.\nThe hope is that such a ``tidal-EOB'' formalism will be able to go beyond\nthe present PN-based proposals whose validity is limited to the early\n(lower-frequency) portion of the GW inspiral signal emitted by BNS systems.\nThis formalism allows naturally for the presence of higher-order PN\ncorrections to the leading (Newtonian) effects. We compared tidal-EOB \npredictions to recently computed numerical relativity data of\nquasi-equilibrium circular BNS sequences~\\cite{Uryu:2009ye}. \nWe showed how to subtract tidal effects from NR data. 
\nEven after this subtraction, there remains a systematic\ndifference between the ``point-mass'' NR binding energy and its EOB (and PN)\nanalytical counterpart. We argue that this difference is due to unaccounted \n3PN-level effects linked to the imperfect satisfaction of the \nhelical Killing vector condition (which should be satisfied for physically \nwaveless solutions). We advocate that new nonconformally flat simulations \nbe performed for\nsequences of helical-Killing-vector cut-off radii so as to allow\nextrapolation to infinite radius.\nWe also suggested studying BHNS circular binaries for mass ratios\n$M_{\rm BH}\/M_{\rm NS}$ of order unity.\n\nIn the absence of such physically waveless NR data, we propose to subtract from\nthe current data a term $\delta\,x^4$ representing a 3PN correction in the\nbinding energy. We could then do a least-squares analysis to try to minimize\nthe (squared) ``distance'' $\chi^2$ between NR data and tidal-EOB predictions.\nOur analysis allowed for 1PN-corrections to tidal effects parametrized\nby $\bar{\alpha}_1$. We found that $\chi^2$ remains close to its global\nminimum in a flat valley that extends over a significant region of the\n$\left(\bar{\alpha}_1,\delta\right)$ plane. 
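The flat-valley degeneracy admits a simple numerical illustration. The sketch below is a toy least-squares problem with synthetic data, not the NR data or the tidal-EOB model of this paper; the stand-in basis functions $u^5$ and $u^4$ and all variable names are our own assumptions. It shows the generic mechanism: when two fit parameters multiply nearly proportional basis functions over a short interval, the design matrix is ill-conditioned and $\chi^2$ barely changes along one direction of the parameter plane, so the two parameters cannot be selected simultaneously.

```python
import numpy as np

# Synthetic, illustrative data only: two nearly proportional basis
# functions over a short interval mimic the (alpha1, delta)-type degeneracy.
rng = np.random.default_rng(0)
u = np.linspace(0.05, 0.12, 30)        # stand-in for the PN expansion variable
f1 = u**5                              # term multiplied by an alpha1-like parameter
f2 = u**4                              # term multiplied by a delta-like parameter
A = np.column_stack([f1, f2])
data = 2.0 * f1 + 1.0 * f2 + 1e-9 * rng.standard_normal(u.size)

# Least-squares fit and conditioning of the two-parameter problem.
best, residuals, rank, sing = np.linalg.lstsq(A, data, rcond=None)
cond = sing[0] / sing[-1]              # large => nearly degenerate directions

# The "flat valley" direction is the right singular vector of the
# smallest singular value: chi^2 grows very slowly along it.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
valley = Vt[-1]
def chi2(p):
    return float(np.sum((A @ p - data) ** 2))
```

Moving the fitted parameters by the same step along `valley` and along the steep direction `Vt[0]` changes `chi2` by a factor of roughly `cond**2` less, which is the toy analogue of the valley in the $(\bar{\alpha}_1,\delta)$ plane described above.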
\nThis means that, given the present error level in numerical data, we\ncannot meaningfully and simultaneously select preferred values for \n$\bar{\alpha}_1$ and $\delta$.\nThough this analysis is not fully conclusive, it does suggest the need\nof including higher-order PN corrections to tidal effects that {\it significantly}\nincrease their dynamical effect.\n[In other words, the ``effective'' value, say \n$\kappa_2^{\rm eff}(u) = \kappa_2^{\rm T}\left(1+\bar{\alpha}_1 u +\bar{\alpha}_2 u^2 + \dots\right)$, \nwhich is relevant for the late inspiral, is significantly larger, by a\nfactor~$\sim 2$, than $\kappa_2^{\rm T}$].\nThese higher-order PN corrections might\ncome not from the 1PN level, but from higher PN levels (see in particular\nthe end of Sec.~\ref{sec:nr}, where a 2PN completion of a recently computed \n1PN correction of order unity was shown to be fully compatible with \ncurrent NR data).\n\nThis emphasizes the need both of higher-order analytical calculations\nof tidal effects and of high-accuracy numerical relativity simulations\nof inspiralling BNS systems. [We note in this respect that it would be\nuseful to refine the piece-wise polytropic approximation to realistic\ntabulated EOS (used in this paper) by incorporating the relativistic\nLove numbers, notably $k_2$, within the set of observables\nthat are fitted].\nWe argued that such a suitably tidally completed EOB formalism will\nbe able to describe the dynamics (and GW emission) of inspiralling\nBNS systems essentially up to the contact of the two neutron stars.\nWe emphasized that, though below the dimensionless (quadrupolar) \nGW frequency $GM\omega_{22}\sim 0.04$ (which corresponds to a \nfrequency of 480~Hz for a $1.35M_\odot +1.35M_\odot$ system) the present\nanalytical knowledge is possibly sufficient for accurately describing the\nsystem, the GW phasing becomes uncertain by a large amount ($\sim 4 $~radians)\nduring the late part of the inspiral,\nbecause of our current
lack of secure knowledge of higher-order PN\ncorrections to tidal effects.\nThis makes it urgent to do high-accuracy comparisons between accurate NR\nsimulations of BNS inspiral and EOB models.\nWhen the EOB description of higher-PN tidal effects is ``calibrated'' with sufficient\naccuracy by using such EOB\/NR comparisons, we think it will be possible\nto use the EOB formalism to extract from Advanced-LIGO data some accurate \nknowledge of the nuclear EOS (via the measurement of the crucial parameter\n$\kappa_2^{\rm T}$). \n\n\acknowledgments\nWe are grateful to Lo\\\"ic Villain for collaboration, at an early stage, on\nthe NR-EOB comparison. We thank Koji Ury${\rm\bar{u}}$ for making available to us the\nnumerical data behind the published tables of Ref.~\cite{Uryu:2009ye}.\nWe are also grateful to Luca Baiotti, Bruno Giacomazzo and \nLuciano Rezzolla for sharing with us, before publication, \ntheir data on inspiralling and coalescing binary neutron\nstars, which prompted our interest in relativistic tidal \nproperties of neutron stars.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nFramed bundles (also called vector bundles with a level structure) are pairs \n$(E,\,\alpha)$ consisting of a vector bundle $E$ of rank $r$ and a nonzero linear \nmap $\alpha:E_x\longrightarrow \mathbb{C}^r$ from a fiber over a fixed point $x\in X$ to \n$\mathbb{C}^r$; this $\alpha$ is called a framing. Framed bundles were first introduced by \nDonaldson as a tool to study the moduli space of instantons on $\mathbb{R}^4$ \cite{Don84}. \nLater on, Huybrechts and Lehn \cite{HL95modules, HL95pairs} defined framed modules \nas a common generalization of several notions of decorated sheaves including framed \nbundles and Bradlow pairs. 
They described a general stability condition for framed \nmodules and provided a GIT construction for the moduli space of framed modules.\n\nA moduli space of framed bundles of rank $r$ carries a canonical $\\operatorname{PGL}_r(\\mathbb{C})$-action\nthat sends each $[G]\\,\\in \\,\\operatorname{PGL}_r(\\mathbb{C})$ and each framed bundle $(E,\\,\\alpha)$ to\n$$[G]\\cdot (E,\\,\\alpha)=(E,\\,G\\circ \\alpha)\\, .$$\nIn \\cite{BGM10}, a Torelli type theorem was proved for the moduli space of framed\nbundles by studying this $\\operatorname{PGL}_r(\\mathbb{C})$-action. It was proved there that\nthis action is\nessentially the only nontrivial $\\operatorname{PGL}_r(\\mathbb{C})$-action on the moduli space;\nthe corresponding GIT-quotient was shown to be isomorphic to the moduli space of vector bundles.\n\nOur aim here is to compute the automorphism group of the moduli space of framed \nbundles with fixed determinant; towards this the following is proved (see Theorem \n\\ref{thm:mainthm}):\n\n\\begin{theorem}\\label{thm:thmIntro}\nLet $X$ be a smooth complex projective curve of genus $g>2$\nwith a base point $x$. 
If $\\tau$ is a small stability\nparameter, then the automorphism group of the moduli space of $\\tau$-semistable framed\nbundles with fixed determinant $\\xi$ and framing over $x$ is generated by the following transformations\n\\begin{itemize}\n\\item pullback with respect to the automorphisms $\\sigma:X\\longrightarrow X$\nthat fix the point $x\\in X$,\n\\item tensorization with a line bundle $L\\in \\operatorname{Pic}(X)$, and\n\\item action of $\\operatorname{PGL}_r(\\mathbb{C})$ defined by $[G]\\cdot (E,\\,\\alpha)\\,=\\,(E,\\,G\\circ \\alpha)$,\n\\end{itemize}\nwhere $\\sigma$ and $L$ satisfy the relation $\\sigma^*\\xi \\otimes L^{\\otimes r} \\cong \\xi$.\n\\end{theorem}\n\nIn particular, this allows us to compute explicitly the structure of the automorphism \ngroup of the moduli space of framed bundles ${\\mathcal{F}}$ (Corollary \\ref{cor:maincor}):\n\n\\begin{corollary}\n\\label{cor:corIntro}\nThe automorphism group of ${\\mathcal{F}}$ is\n$$\\operatorname{Aut}({\\mathcal{F}})\\cong \\operatorname{PGL}_r(\\mathbb{C})\\times {\\mathcal{T}} $$\nfor a group ${\\mathcal{T}}$ fitting in the short exact sequence\n$$1\\longrightarrow J(X)[r] \\longrightarrow {\\mathcal{T}} \\longrightarrow \\operatorname{Aut}(X,x)\n\\longrightarrow 1\\, ,$$\nwhere $J(X)[r]$ is the $r$-torsion part of the Jacobian of $X$ and\n$$\\operatorname{Aut}(X,x)\\,=\\,\\{\\sigma\\,\\in\\, \\operatorname{Aut}(X) \\,\\mid\\, \\sigma(x)\\,=\\,x\\}\\, .$$\n\\end{corollary}\n\nThe classification of the automorphisms of the moduli space of vector bundles carried\nout in \\cite{KP} plays an important role in the computations done in\nTheorem \\ref{thm:thmIntro} and Corollary \\ref{cor:corIntro}.\n\n\\section{Moduli space of framed bundles}\n\nLet $X$ be a smooth complex projective curve. Fix a point $x\\,\\in\\, X$. 
A framed bundle\non $(X,\,x)$ is a pair $(E,\,\alpha)$ consisting of a vector bundle $E$ over $X$ and a\nnonzero $\mathbb{C}$-linear homomorphism\n$$\alpha\,:\,E_x \,\longrightarrow\, \mathbb{C}^r\, .$$\n\nGiven a real number $\tau>0$, we say that a framed bundle $(E,\,\alpha)$ is $\tau$-stable (respectively $\tau$-semistable) if for all proper subbundles $0\,\subsetneq\,\nE'\,\subsetneq\, E$\n$$\frac{\text{degree}(E')-\epsilon(E',\alpha)\tau}{{\rm rank}(E')} < \frac{\text{degree}(E)-\tau}{{\rm rank}(E)} \quad \n(\text{respectively, }\le)$$\nwhere\n$$\epsilon(E',\alpha)=\left\{ \begin{array}{ll}\n1 & \text{if } E_x'\,\not\subseteq\, \ker(\alpha)\\\n0 & \text{if }E_x'\,\subseteq\, \ker(\alpha).\n\end{array} \right.$$\n\nIn the general framework of framed modules introduced in \cite{HL95modules}, a framed \nbundle is a framed module with respect to the reference sheaf ${\mathcal{O}}_x^{\oplus r}$. The \nstability condition for framed bundles described here coincides with the stability condition \ndefined by Huybrechts and Lehn for framed modules. Fix a line bundle $\xi$\non $X$. Let ${\mathcal{F}}\,=\,{\mathcal{F}}(X,x,r,\xi,\tau)$ be the moduli space of $\tau$-semistable framed \nbundles $(E,\,\alpha)$ on $(X,\,x)$ with ${\rm rank}(E)\,=\,r$ and $\det(E)\n\,=\, \bigwedge^r E\,\cong\, \xi$. By \n\cite{HL95modules}, it is a complex projective variety.\n\nOn the other hand, a vector bundle $E$ is called stable (respectively semistable) if for all proper subbundles $0\,\subsetneq\, E'\,\subsetneq\, E$\n$$\frac{\text{degree}(E')}{{\rm rank}(E')} < \frac{\text{degree}(E)}{{\rm rank}(E)} \quad (\text{respectively, }\le)$$\n\nLet ${\mathcal{M}}={\mathcal{M}}(X,r,\xi)$ denote the moduli space of semistable vector bundles over $X$ of rank $r$ and determinant $\xi$. 
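Once the numerical data $(\text{degree},\,\text{rank},\,\epsilon)$ of a subbundle are known, the $\tau$-stability test above is elementary arithmetic. The following is an illustrative sketch only (the function names and the toy inputs are ours, not part of the moduli theory), transcribing the inequality directly:

```python
# Toy transcription of the tau-(semi)stability inequality for framed bundles.
# A subbundle E' is encoded as (degree, rank, meets_framing), where
# meets_framing is True exactly when E'_x is NOT contained in ker(alpha),
# i.e. epsilon(E', alpha) = 1.
def framed_slope(deg, rk, weight):
    return (deg - weight) / rk

def is_tau_stable(deg_E, rk_E, tau, subbundles, strict=True):
    """Check the framed (semi)stability inequality against each proper subbundle."""
    rhs = framed_slope(deg_E, rk_E, tau)
    for deg_s, rk_s, meets in subbundles:
        eps = 1 if meets else 0
        lhs = framed_slope(deg_s, rk_s, eps * tau)
        ok = (lhs < rhs) if strict else (lhs <= rhs)
        if not ok:
            return False
    return True

# A degree-0 line subbundle of a degree-0 rank-2 bundle violates the strict
# inequality precisely when its fiber lies inside ker(alpha) (epsilon = 0):
print(is_tau_stable(0, 2, 0.1, [(0, 1, True)]))   # True
print(is_tau_stable(0, 2, 0.1, [(0, 1, False)]))  # False
```

The two sample calls show the role of $\epsilon$: the same strictly semistable underlying bundle becomes $\tau$-stable when the framing is nonzero on the fiber of the would-be destabilizing subbundle, and fails the test otherwise.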
\n\nBy \\cite[Lemma 1.1]{BGM10}, there exists some constant $\\tau_0(r)$ depending only on the\nrank $r$ such that if $0<\\tau<\\tau_0(r)$ then the following implications hold\n$$E \\text{ stable} \\implies (E,\\alpha) \\text{ }\\tau\\text{-stable} \\iff (E,\\alpha) \\text{ }\\tau\\text{-semistable} \\implies E \\text{ semistable}.$$\nFrom now on, we assume that $0<\\tau<\\tau_0(r)$. Then there is a forgetful map\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\nf&:&{\\mathcal{F}} \\ar[rrrr] &&&& {\\mathcal{M}}\\\\\n&& (E,\\alpha) \\ar@{|->}[rrrr] &&&& E\n}\n\\end{eqnarray*}\n\nWe can make $\\operatorname{PGL}_r(\\mathbb{C})$ act on ${\\mathcal{F}}$ by composition with the framing $\\alpha$. Given a matrix $[G]\\in \\operatorname{PGL}_r(\\mathbb{C})$, where $G\\in \\operatorname{GL}_r(\\mathbb{C})$ is any representative of the\nprojective class, the automorphism $G\\,:\\,\\mathbb{C}^r \\,\\longrightarrow\\, \\mathbb{C}^r$ produces the\nself-map \n$$(E,\\,\\alpha)\\,\\longmapsto\\, (E,\\,G\\circ \\alpha)$$\nof framed bundles. Since for every subbundle $E'\\,\\subset\\, E$ we have\n$$\\epsilon(E',\\,\\alpha)\\,=\\,\\epsilon(E',\\,G\\circ \\alpha)$$\nthis transformation preserves the (semi)stability condition and it gives a well defined\nmap $\\varphi_G\\,:\\,{\\mathcal{F}}\\,\\longrightarrow \\,{\\mathcal{F}}$.\n\nOn the other hand, we can perform the following transformations on (families of) framed\nbundles $(E,\\,\\alpha)$ which preserve the stability condition:\n\\begin{enumerate}\n\\item Given an automorphism $\\sigma:X\\longrightarrow X$ that fixes $x\\in X$,\n$$(E,\\,\\alpha)\\,\\longmapsto \\,\\left (\\sigma^*E, \\,\\alpha \\right)\\, .$$\n\\item Given a line bundle $L$ over $X$, fix a trivialization $\\alpha_L:L_x \\stackrel{\\sim}{\\longrightarrow} \\mathbb{C}$. 
Then send\n$$(E,\,\alpha)\,\longmapsto \,\left (E\otimes L,\,\alpha\cdot \alpha_L \right)\, .$$\nSince two trivializations $\alpha_L$ and $\alpha_L'$ differ only by a scalar constant, this\nmap is well defined and furthermore it is independent of the choice of the trivialization $\alpha_L$.\n\end{enumerate}\nNote that taking the pullback by $\sigma$ and tensoring with $L$ both change the\ndeterminant of the resulting framed bundle. Therefore, these transformations do not\nin general induce automorphisms of the moduli space ${\mathcal{F}}$, but rather an isomorphism\nbetween ${\mathcal{F}}(X,x,r,\xi,\tau)$ and another moduli space of framed bundles with a different\ndeterminant ${\mathcal{F}}(X,x,r,\sigma^*\xi \otimes L^{\otimes r},\tau)$. Nevertheless, if $\sigma$\nand $L$ satisfy the relation $\sigma^*\xi \otimes L^{\otimes r}\cong \xi$, it is clear that the map $\overline{{\mathcal{T}}_{\sigma,L,+}}:{\mathcal{F}}\longrightarrow {\mathcal{F}}$ sending\n\begin{equation}\label{e1}\n(E,\alpha)\longmapsto \overline{{\mathcal{T}}_{\sigma,L,+}}(E,\alpha)=\n\left (\sigma^*E\otimes L,\alpha\cdot \alpha_L \right)\n\end{equation}\nis an automorphism of the moduli space.\n\n\section{Framed bundles with invertible framing}\n\nLet ${\mathcal{F}}^{ss}$ denote the subset of ${\mathcal{F}}$ corresponding to framed bundles $(E,\,\alpha)$\nsuch that $\alpha$ is an isomorphism; it is evidently Zariski open. Analogously, let\n${\mathcal{F}}^0$ be the subset of ${\mathcal{F}}^{ss}$ consisting of framed bundles $(E,\,\alpha)$ such that\n$\alpha$ is an isomorphism and $E$ is a stable vector bundle; from the\nopenness of the stability condition, \cite[p.~635, Theorem 2.8(B)]{Ma}, it follows that ${\mathcal{F}}^0$ is also Zariski\nopen. 
As the action of $\operatorname{PGL}_r(\mathbb{C})$ on framed bundles\npreserves stability, and acts freely and transitively on the space of isomorphisms\n$E_x\stackrel{\sim}{\longrightarrow} \mathbb{C}^r$, the fiber of the restricted forgetful map\n$$f^{0}:{\mathcal{F}}^{0} \longrightarrow {\mathcal{M}}^s$$\nover a stable vector bundle $E\, \in\, f^0({\mathcal{F}}^{0})$ is \n$$(f^{0})^{-1}(E)\,=\,\mathbb{P}(\operatorname{Isom}(E_x,\mathbb{C}^r)) \,\cong\, \operatorname{PGL}_r(\mathbb{C})\, .$$\nMoreover, the map $f^{0}\,:\,{\mathcal{F}}^0\,\longrightarrow\, {\mathcal{M}}^s$ is surjective \nas a consequence of the following proposition.\n\n\begin{proposition}\n\label{prop:forgetfulStable}\nIf $\alpha:E|_x\longrightarrow \mathbb{C}^r$ is an isomorphism, then $(E,\,\alpha)$ is $\tau$-stable if and only if $E$ is semistable.\n\end{proposition}\n\n\begin{proof}\nBy \cite[Lemma 1.1]{BGM10}, if $(E,\,\alpha)$ is $\tau$-stable, then $E$ is semistable. On\nthe other hand, if $\alpha$ is an isomorphism, then for every subbundle $E'\,\subsetneq\,\nE$, we have $\alpha|_{E'_x}\,\ne\, 0$, so $\epsilon(E',\alpha)=1$. Now as\n${\rm rank}(E')<{\rm rank}(E)$ and $\tau>0$, we have\n$$\frac{-\epsilon(E',\alpha)\tau}{{\rm rank}(E')} = \frac{-\tau}{{\rm rank}(E')} < \frac{-\tau}{{\rm rank}(E)}$$\nso if $E$ is semistable, then for every $E'\,\subsetneq\, E$, we have\n$$\frac{\text{degree}(E')-\epsilon(E',\alpha)\tau}{{\rm rank}(E')} <\n\frac{\text{degree}(E)-\tau}{{\rm rank}(E)}\, .$$\n\end{proof}\n\nOn ${\mathcal{F}}^{0}$ we can define an additional transformation inducing an isomorphism\n$$\n{\mathcal{D}}:{\mathcal{F}}^{0}(X,x,r,\xi,\tau)\stackrel{\sim}{\longrightarrow} {\mathcal{F}}^{0}(X,x,r,\xi^{-1},\tau)\n$$\nin the following way. 
An isomorphism $\\alpha:E_x\\longrightarrow \\mathbb{C}^r$ induces an\nisomorphism $\\alpha^{-1}:\\mathbb{C}^r\\longrightarrow E_x$. Identifying $(\\mathbb{C}^r)^\\vee \\,\\cong\\,\\mathbb{C}^r$\nand taking duals, we obtain an isomorphism $(\\alpha^{-1})^t\\,:\\, E_x^\\vee\\,\\longrightarrow\n\\,\\mathbb{C}^r$. Now take\n$${\\mathcal{D}}(E,\\,\\alpha)\\,=\\,\\left (E^\\vee,\\, (\\alpha^{-1})^t \\right )\\, .$$\nSince the transformation is evidently well defined for families, to show that ${\\mathcal{D}}$ induces an isomorphism between the moduli spaces it is enough to prove that it preserves $\\tau$-semistability.\n\n\\begin{proposition}\nThe framed bundle\n${\\mathcal{D}}(E,\\alpha)$ is $\\tau$-semistable if and only if $(E,\\,\\alpha)$ is $\\tau$-semistable.\n\\end{proposition}\n\n\\begin{proof}\nRecall that the choice of $\\tau$ implies that $\\tau$-semistability is equivalent to\n$\\tau$-stability. Therefore, by Proposition \\ref{prop:forgetfulStable},\nthe framed bundle ${\\mathcal{D}}(E,\\alpha)$ is $\\tau$-semistable if and only if $E^\\vee$ is\nsemistable, while $(E,\\,\\alpha)$ is $\\tau$-semistable if and only if $E$ is semistable. As $E$ is semistable if and only if $E^\\vee$ is semistable, the result follows.\n\\end{proof}\n\nLet $L$ be a line bundle over $X$, and let $\\sigma:X\\longrightarrow X$ be an automorphism\nof the curve; take any $s\\,\\in\\, \\{1,\\,-1\\}$. 
We define the map ${\\mathcal{T}}_{\\sigma,L,s}:{\\mathcal{M}}(X,r,\\xi) \\longrightarrow {\\mathcal{M}}(X,r,\\sigma^*\\xi^s\\otimes L^{\\otimes r})$ as\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\n{\\mathcal{T}}_{\\sigma,L,+}&:&{\\mathcal{M}}(X,r,\\xi) \\ar[rrrr] &&&& {\\mathcal{M}}(X,r,\\sigma^*\\xi\\otimes L^{\\otimes r})\\\\\n&& E \\ar@{|->}[rrrr] &&&& \\sigma^*E\\otimes L\n}\n\\end{eqnarray*}\nfor $s=1$, and\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\n{\\mathcal{T}}_{\\sigma,L,-}&:&{\\mathcal{M}}(X,r,\\xi) \\ar[rrrr] &&&& {\\mathcal{M}}(X,r,\\sigma^*\\xi^{-1}\\otimes L^{\\otimes r})\\\\\n&& E \\ar@{|->}[rrrr] &&&& \\sigma^*E^\\vee\\otimes L\n}\n\\end{eqnarray*}\nfor $s=-1$. If $\\sigma^*\\xi^s \\otimes L^{\\otimes r}\\cong \\xi$, then the above defined\nmap \n${\\mathcal{T}}_{\\sigma,L,s}:{\\mathcal{M}}\\longrightarrow {\\mathcal{M}}$ is an automorphism of the moduli space of\nvector bundles such that ${\\mathcal{T}}_{\\sigma,L,s}({\\mathcal{M}}^s)={\\mathcal{M}}^s$. In fact, by \\cite{KP}\nand \\cite{BGM13}, every automorphism of ${\\mathcal{M}}$ is given by a transformation of type\n${\\mathcal{T}}_{\\sigma,L,s}$. 
Analogously, if $L$ is a line bundle over $X$ with $\\sigma:X\n\\longrightarrow X$ an automorphism fixing $x\\in X$, and $s\\in \\{1,-1\\}$, then define\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\n{\\mathcal{T}}_{\\sigma,L,+}^{0}&:&{\\mathcal{F}}^{0}(X,x,r,\\xi,\\tau) \\ar[rrrr] &&&& {\\mathcal{F}}^{0}(X,x,r,\\sigma^*\\xi\\otimes L^{\\otimes r},\\tau)\\\\\n&& (E,\\alpha) \\ar@{|->}[rrrr] &&&& \\left (\\sigma^*E\\otimes L,\\alpha\\cdot \\alpha_L \\right)\n}\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\n{\\mathcal{T}}_{\\sigma,L,-}^{0}&:&{\\mathcal{F}}^{0}(X,x,r,\\xi,\\tau) \\ar[rrrr] &&&& {\\mathcal{F}}^{0}(X,x,r,\\sigma^*\\xi^{-1}\\otimes L^{\\otimes r},\\tau)\\\\\n&& (E,\\alpha) \\ar@{|->}[rrrr] &&&& \\left (\\sigma^*E^\\vee\\otimes L, (\\alpha^{-1})^t\\cdot \\alpha_L\\right)\n}\n\\end{eqnarray*}\nBy the previous discussion, if $\\sigma^*\\xi^s\\otimes L^{\\otimes r}\\cong \\xi$, then ${\\mathcal{T}}_{\\sigma,L,s}^{0}$ is an automorphism of ${\\mathcal{F}}^{0}$. By construction,\nthe map $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$ in \\eqref{e1} is an extension of\n${\\mathcal{T}}_{\\sigma,L,+}^{0}$ to the whole moduli space ${\\mathcal{F}}$. 
However it will now\nbe shown that a similar extension is not possible for ${\\mathcal{T}}_{\\sigma,L,-}^{0}$\nif $r>2$.\n\n\\begin{lemma}\n\\label{lemma:extendinversetranspose}\nTake $r>2$, and consider the algebraic automorphism\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\n{\\mathcal{D}}&:&\\operatorname{PGL}_r(\\mathbb{C}) \\ar[rrrr] &&&& \\operatorname{PGL}_r(\\mathbb{C})\\\\\n&& [G] \\ar@{|->}[rrrr] &&&& [(G^{-1})^t].\n}\n\\end{eqnarray*}\nThen there does not exist any algebraic automorphism\n$$\\overline{{\\mathcal{D}}}\\,:\\,\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))\\,\\longrightarrow \\,\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))$$ extending ${\\mathcal{D}}$.\n\\end{lemma}\n\n\\begin{proof}\nAs $\\operatorname{PGL}_r(\\mathbb{C})$ is dense in $\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))$ and the latter is irreducible, there exists at most one extension of ${\\mathcal{D}}$ to $\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))$. Let ${\\mathcal{U}}\\,\\subsetneq\\,\n\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))$ be the open subset corresponding to matrices with at least an $(r-1)\\times (r-1)$ minor with nonzero determinant. 
Let $\\operatorname{cof}$ be the morphism that sends each matrix $[G]\\in {\\mathcal{U}}$ to its cofactor matrix\n$$\\operatorname{cof}(G)\\,=\\,\\wedge^{r-1}(G)\\, .$$\nThe entries of the cofactor matrix are determinants of minors of $G$, so they are given\nby homogeneous polynomials of degree $r-1$ in the entries of $G$ and, therefore, $\\operatorname{cof}$ induces an algebraic map\n$$\\operatorname{cof}:{\\mathcal{U}} \\longrightarrow \\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))\\, .$$\nGiven an invertible matrix $[G]\\in \\operatorname{PGL}_r(\\mathbb{C})$, we have that\n$$(G^{-1})^t \\,=\\, \\frac{1}{\\det(G)}\\operatorname{cof}(G)\\, .$$\nTherefore, $[(G^{-1})^t]=[\\operatorname{cof}(G)]$ for every $[G]\\in \\operatorname{PGL}_r(\\mathbb{C})$ and $\\operatorname{cof}$ is the unique possible extension of ${\\mathcal{D}}$ to ${\\mathcal{U}}$. Nevertheless, for $r>2$ this map is not injective. For example, for every $\\lambda\\in \\mathbb{C}$, let\n$$G_\\lambda=\\left(\\begin{array}{cc|c|c}\n1 & 0 & 0 & 0\\\\\n\\lambda & 1 & 0 & 0\\\\\n\\hline\n0 & 0 & \\operatorname{Id}_{r-3} & 0\\\\\n\\hline\n0 & 0 & 0 & 0\n\\end{array}\\right)$$\nClearly, if $\\lambda_1\\ne \\lambda_2$ then $[G_{\\lambda_1}]\\ne [G_{\\lambda_2}]$ in $\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))$. Nevertheless, for every $\\lambda\\in \\mathbb{C}$\n$$\\operatorname{cof}(G_\\lambda)=\\left( \\begin{array}{c|c}\n0_{r-1} & 0\\\\\n\\hline\n0 & 1\n\\end{array} \\right)$$\nSo, in particular, $[G_\\lambda]\\in {\\mathcal{U}}$ for every $\\lambda\\in \\mathbb{C}$, which proves that ${\\mathcal{D}}$ cannot be extended to an injective map on ${\\mathcal{U}}$.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:extendDual}\nIf $r>2$, then the map ${\\mathcal{T}}_{\\sigma,L,s}^{0}\\,:\\,{\\mathcal{F}}^{0}\n\\,\\longrightarrow\\, {\\mathcal{F}}^{0}$ extends to an automorphism of ${\\mathcal{F}}$ if and only if $s=1$. 
\n\\end{lemma}\n\n\\begin{proof}\nAssume that ${\\mathcal{T}}_{\\sigma,L,-}^{0}$ extends to a map $\\overline{{\\mathcal{T}}_{\\sigma,L,-}}$. For every $E\\in {\\mathcal{M}}^s$ and every $(E,\\,\\alpha)\\,\\in\\, f^{-1}(E)\\cap {\\mathcal{F}}^{0}$,\nwe have\n$$f\\circ {\\mathcal{T}}_{\\sigma,L,-}^{0}(E,\\alpha)\\,=\\,{\\mathcal{T}}_{\\sigma,L,-}(E)\\, .$$\nSince $f^{-1}(E)\\cap {\\mathcal{F}}^{0}$ is dense in $f^{-1}(E)$, for every $(E,\\,\\alpha)\n\\,\\in\\, f^{-1}(E)$ we have\n$$f\\circ \\overline{{\\mathcal{T}}_{\\sigma,L,-}}(E,\\,\\alpha)\\,=\\,{\\mathcal{T}}_{\\sigma,L,-}(E)\\, .$$\nTherefore, it is enough to show that there exists some $E\\in {\\mathcal{M}}$ such that the map\n$${\\mathcal{T}}_{\\sigma,L,-}^{0}|_{f^{-1}(E)\\cap {\\mathcal{F}}^{0}}: f^{-1}(E)\\cap {\\mathcal{F}}^{0} \\longrightarrow f^{-1}({\\mathcal{T}}_{\\sigma,L,-}(E))$$\ncannot be extended to an isomorphism from $f^{-1}(E)$ to $f^{-1}({\\mathcal{T}}_{\\sigma,L,+}(E))$. Let $E\\,\\in\\, {\\mathcal{M}}^s$ be any stable bundle. 
By \\cite[Lemma 1.1]{BGM10}, the\nframed bundle $(E,\\,\\alpha)$ is\n$\\tau$-stable for every nonzero homomorphism $\\alpha:E|_x\\longrightarrow \\mathbb{C}^r$, so\n$$f^{-1}(E) = \\mathbb{P}(\\operatorname{Hom}(E|_x,\\mathbb{C}^r))\\, .$$\nThen the problem reduces to proving that there does not exist any isomorphism\n$\\mathbb{P}(\\operatorname{Hom}(E|_x,\\mathbb{C}^r))\\longrightarrow \\mathbb{P}(\\operatorname{Hom}(E|_x^\\vee,\\mathbb{C}^r))$ extending the map\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrix{\n\\mathbb{P}(\\operatorname{Isom}(E|_x,\\mathbb{C}^r))\\ar[r] & \\mathbb{P}(\\operatorname{Hom}(E|_x^\\vee,\\mathbb{C}^r))\\\\\n\\alpha \\ar@{|->}[r] & (\\alpha^{-1})^t\n}\n\\end{eqnarray*}\nwhich, fixing a basis of $E|_x$, is equivalent to proving that there exists no algebraic\nautomorphism $\\overline{{\\mathcal{D}}}:\\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C})) \\longrightarrow \\mathbb{P}(\\operatorname{Mat}_r(\\mathbb{C}))$ extending the transpose of the inverse map\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrix{\n\\operatorname{PGL}_r(\\mathbb{C}) \\ar[r] & \\operatorname{PGL}_r(\\mathbb{C})\\\\\n\\alpha \\ar@{|->}[r] & (\\alpha^{-1})^t\n}\n\\end{eqnarray*}\nTherefore, the result follows from Lemma \\ref{lemma:extendinversetranspose}.\n\\end{proof}\n\nThe previous results deal with the extension of the maps ${\\mathcal{T}}_{\\sigma,L,-}^{0}$ if $r>2$. Before proving the main theorem let us address the remaining $r=2$ case.\n\n\\begin{lemma}\n\\label{lemma:trivialrk2}\nLet $r=2$. 
Then for every automorphism ${\\mathcal{T}}_{\\sigma,L,-}:{\\mathcal{M}}\\longrightarrow {\\mathcal{M}}$ there exists a\nline bundle $L'$ on $X$ such that\n$${\\mathcal{T}}_{\\sigma,L,-}={\\mathcal{T}}_{\\sigma,L',+}\\, .$$\n\\end{lemma}\n\n\\begin{proof}\nSince $\\bigwedge^2E\\cong \\xi$ for every $E\\in {\\mathcal{M}}$, there is an isomorphism\n$$E^\\vee \\cong E\\otimes \\xi^{-1}\\, .$$\nConsequently, for every $\\sigma$ and $L$ we have\n$$\\sigma^* E^\\vee \\otimes L \\cong \\sigma^*(E\\otimes \\xi^{-1}) \\otimes L \\cong \\sigma^*E \\otimes \\sigma^*\\xi^{-1} \\otimes L\\, .$$\nThen taking $L'=\\sigma^*\\xi^{-1}\\otimes L$ yields\n$${\\mathcal{T}}_{\\sigma,L,-}={\\mathcal{T}}_{\\sigma,L',+}$$\nproving the lemma.\n\\end{proof}\n\n\\section{Automorphism group of the moduli space}\n\nIn this section, we combine the results on the $\\operatorname{PGL}_r(\\mathbb{C})$-action on ${\\mathcal{F}}$ proved in \n\\cite{BGM10} with the analysis on the transformations on ${\\mathcal{F}}$, ${\\mathcal{F}}^0$ and ${\\mathcal{F}}^{ss}$ given \nbefore to prove Theorem \\ref{thm:thmIntro} and compute the structure of the \nautomorphism group of ${\\mathcal{F}}$.\n\n\\begin{lemma}\n\\label{lemma:autofibers}\nLet $\\varphi:{\\mathcal{F}} \\longrightarrow {\\mathcal{F}}$ be an automorphism. 
Then there exists an\nautomorphism $\sigma:X\longrightarrow X$ with $\sigma(x)=x$, a line bundle $L$ over $X$\nand $s\in \{1,-1\}$ satisfying $\sigma^* \xi^s \otimes L^{\otimes r} \cong \xi$, such that\nthe following diagram is commutative\n\begin{eqnarray*}\n\xymatrix{\n{\mathcal{F}} \ar[r]^{\varphi} \ar[d]_f & {\mathcal{F}} \ar[d]^f\\\n{\mathcal{M}} \ar[r]^{{\mathcal{T}}_{\sigma,L,s}} & {\mathcal{M}} \n}\n\end{eqnarray*}\nMoreover, $\varphi$ preserves both ${\mathcal{F}}^{ss}$ and ${\mathcal{F}}^0$.\n\end{lemma}\n\n\begin{proof}\nBy \cite[Proposition 3.3]{BGM10}, there exists an automorphism $\psi:{\mathcal{M}}\longrightarrow{\mathcal{M}}$\nsuch that the following diagram is commutative\n\begin{eqnarray*}\n\xymatrix{\n{\mathcal{F}} \ar[r]^{\varphi} \ar[d]_f & {\mathcal{F}} \ar[d]^f\\\n{\mathcal{M}} \ar[r]^{\psi} & {\mathcal{M}} \n}\n\end{eqnarray*}\nThe results by \cite{KP} and \cite{BGM13} on the structure of the automorphism group of ${\mathcal{M}}$\nimply that there exist an automorphism $\sigma:X\longrightarrow X$, a line bundle $L$ over $X$\nand $s\in \{1,-1\}$ satisfying $\sigma^*\xi^s \otimes L^{\otimes r} \cong \xi$ such\nthat $\psi={\mathcal{T}}_{\sigma,L,s}$. Moreover, following the argument\nin \cite[Corollary 4.2]{BGM10}, as $\psi$ comes from an automorphism of ${\mathcal{F}}$,\nthe induced automorphism $\sigma\,:\,X\,\longrightarrow\, X$ must fix the point $x\,\in\n\, X$.\n\nBy \cite[Proposition 2.5]{BGM10}, there exists a unique action of $\operatorname{PGL}_r(\mathbb{C})$ on ${\mathcal{F}}$ up to \na group automorphism of $\operatorname{PGL}_r(\mathbb{C})$. 
Moreover, by \\cite[Lemma 3.2]{BGM10} the set of \nGIT-semistable points for the action of $\\operatorname{PGL}_r(\\mathbb{C})$ coincides with ${\\mathcal{F}}^{ss}$ for any \npolarization, so $\\varphi$ restricts to a map $\\varphi^{ss}:{\\mathcal{F}}^{ss}\\longrightarrow {\\mathcal{F}}^{ss}$.\n\nFinally, as ${\\mathcal{T}}_{\\sigma,L,s}$ preserves the stable locus ${\\mathcal{M}}^s \\,\\subset\\, {\\mathcal{M}}$ for every \n$\\sigma$, $L$ and $s$, it follows that $\\varphi$ preserves ${\\mathcal{F}}^0\\,=\\,{\\mathcal{F}}^{ss}\\cap f^{-1}({\\mathcal{M}}^s)$. \n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:autoEquivariant}\nLet $\\varphi\\,:\\,{\\mathcal{F}}\\,\\longrightarrow\\, {\\mathcal{F}}$ be an automorphism. Then there exists $[G]\n\\,\\in\\, \\operatorname{PGL}_r(\\mathbb{C})$ such that $\\varphi_{[G]}\\circ \\varphi$ is a $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant automorphism.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\gamma:\\operatorname{PGL}_r(\\mathbb{C}) \\times {\\mathcal{F}}\\longrightarrow {\\mathcal{F}}$ be the natural action of $\\operatorname{PGL}_r(\\mathbb{C})$ on\n${\\mathcal{F}}$ described before. If $\\varphi$ is an automorphism of ${\\mathcal{F}}$, it induces another action\n$$\\gamma':\\operatorname{PGL}_r(\\mathbb{C})\\times {\\mathcal{F}}\\longrightarrow {\\mathcal{F}}$$ given by\n$$\\gamma'([X],(E,\\alpha))\\,=\\,\\varphi(\\gamma([X],\\varphi^{-1}(E,\\alpha)))\\, .$$\nBy \\cite[Proposition 2.5]{BGM10}, there exists a unique action of $\\operatorname{PGL}_r(\\mathbb{C})$ on ${\\mathcal{F}}$ up to a group automorphism of $\\operatorname{PGL}_r(\\mathbb{C})$. For $r=2$, all the automorphisms of $\\operatorname{PGL}_2(\\mathbb{C})$ are\ninner and for $r>2$, the only outer automorphism of $\\operatorname{PGL}_r(\\mathbb{C})$ is the inverse-transpose, i.e., the map sending $[X] \\mapsto [(X^{-1})^t]$. 
Therefore, there exists a matrix $[G]\\in\\operatorname{PGL}_r(\\mathbb{C})$ such that either\n$$\\gamma'([X],(E,\\alpha))\\,=\\,\\gamma([GXG^{-1}],\\,(E,\\,\\alpha))\n\\,=\\,\\varphi_{[G]}(\\gamma([X],\\,\\varphi_{[G]}^{-1}(E,\\,\\alpha)))$$\nor \n$$\\gamma'([X],\\,(E,\\,\\alpha))\\,=\\,\\gamma([G(X^{-1})^tG^{-1}],\\,(E,\\,\\alpha))\n\\,=\\,\\varphi_{[G]}(\\gamma([(X^{-1})^t],\\,\\varphi_{[G]}^{-1}(E,\\,\\alpha)))$$\nand it is only necessary to consider the latter when $r>2$. In the first case, as\n$\\varphi_{[G^{-1}]}$ is an automorphism of ${\\mathcal{F}}$, it\nfollows that $\\varphi_{[G^{-1}]}\\circ \\varphi$ is a $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant automorphism. Let\nus prove that the second case is impossible if $r > 2$. Let ${\\mathcal{T}}_{\\sigma,L,s}:{\\mathcal{M}} \\longrightarrow {\\mathcal{M}}$ be the automorphism of ${\\mathcal{M}}$ induced by $\\varphi$. Let $E$ be a stable vector bundle, and let $E'={\\mathcal{T}}_{\\sigma,L,s}(E)$. Then $\\varphi_{[G^{-1}]}\\circ \\varphi$ induces an algebraic isomorphism\n$$(\\varphi_{[G^{-1}]}\\circ \\varphi)|_{f^{-1}(E)} \\,:\\, \\mathbb{P}(\\operatorname{Hom}(E|_x,\\mathbb{C}^r))\n\\,\\longrightarrow\\, \\mathbb{P}(\\operatorname{Hom}(E'|_x,\\mathbb{C}^r))\\, .$$\nFix any trivialization $\\alpha:E_x \\stackrel{\\sim}{\\longrightarrow} \\mathbb{C}^r$\nof $E_x$. Let $\\alpha'\\,=\\,(\\varphi_{[G^{-1}]}\\circ \\varphi)|_{f^{-1}(E)}(\\alpha)$. By Lemma\n\\ref{lemma:autofibers}, the composition $\\varphi_{[G^{-1}]}\\circ \\varphi$ preserves ${\\mathcal{F}}^0$, so\n$\\alpha'$ is an isomorphism. 
Using the trivializations $\alpha$ and $\alpha'$, we get isomorphisms\n$$\mathbb{P}(\operatorname{Hom}(E|_x,\mathbb{C}^r)) \,\stackrel{\alpha}{\cong}\, \mathbb{P}(\operatorname{Mat}_r(\mathbb{C}))\n\,\stackrel{\alpha'}{\cong}\, \mathbb{P}(\operatorname{Hom}(E'|_x,\mathbb{C}^r))\, ;$$\nthus $(\varphi_{[G^{-1}]}\circ \varphi)|_{f^{-1}(E)}$ induces an algebraic isomorphism\n$$\widetilde{\varphi}\,:\,\mathbb{P}(\operatorname{Mat}_r(\mathbb{C})) \,\longrightarrow\, \mathbb{P}(\operatorname{Mat}_r(\mathbb{C}))\, .$$\nMoreover, for every $[X]\,\in\, \operatorname{PGL}_r(\mathbb{C})$ we have\n$(\varphi_{[G^{-1}]}\circ \varphi)|_{f^{-1}(E)}(X\circ \alpha)\n\,=\, (X^{-1})^t\circ \alpha'$, so\nfor every $[X]\in \operatorname{PGL}_r(\mathbb{C})$, $\widetilde{\varphi}([X])\,=\,[(X^{-1})^t]$ and, therefore, $\widetilde{\varphi}$ extends the inverse-transpose map to an automorphism of $\mathbb{P}(\operatorname{Mat}_r(\mathbb{C}))$, thus contradicting Lemma \ref{lemma:extendinversetranspose}.\n\end{proof}\n\nLet $\mathbb{P}$ be the projective bundle over ${\mathcal{M}}^s$ whose fiber over a stable vector bundle \n$E$ is $\mathbb{P}(\operatorname{Hom}(E_x,\mathbb{C}^r))$. Even if ${\mathcal{M}}^s$ does not admit a universal vector \nbundle, the existence of the bundle $\mathbb{P}$ is guaranteed by \cite[Lemma 2.2]{BGM13}. The fiber \nof its dual bundle $\mathbb{P}^\vee$ over a bundle $E$ is canonically isomorphic to \n$\mathbb{P}(\operatorname{Hom}(\mathbb{C}^r,E_x))$.\n\n\begin{lemma}\label{lemlast}\nIf $r>2$, then the two projective bundles $\mathbb{P}$ and $\mathbb{P}^\vee$ are not isomorphic.\n\end{lemma}\n\n\begin{proof}\nWe break the proof into several cases, since the result can be seen from different points\nof view.\n\nFirst assume that $r$ and $\text{degree}(\xi)$ are\ncoprime. Then there is a Poincar\'e vector bundle over $X\times{\mathcal{M}}^s$. 
Let\n$$\nW\\, \\longrightarrow\\, \\{x\\}\\times {\\mathcal{M}}^s \\,=\\, {\\mathcal{M}}^s\n$$\nbe the restriction of such a\nPoincar\\'e bundle to $\\{x\\}\\times {\\mathcal{M}}^s\\, \\subset\\, X\\times{\\mathcal{M}}^s$. Note that\n\\begin{equation}\\label{e2}\n\\mathbb{P}^\\vee\\,=\\, {\\mathbb P}(W^{\\oplus r})\\ \\ \\text{ and }\\ \\\n\\mathbb{P}\\,=\\, {\\mathbb P}((W^\\vee)^{\\oplus r})\\, .\n\\end{equation}\nAssume that the projective bundles $\\mathbb{P}^\\vee$ and $\\mathbb{P}$ are isomorphic. Consequently,\nfrom \\eqref{e2} it follows that there\nis a line bundle $L_0$ on ${\\mathcal{M}}^s$ such that\n\\begin{equation}\\label{e3}\n(W^\\vee)^{\\oplus r}\\,=\\, W^{\\oplus r}\\otimes L_0\\, .\n\\end{equation}\n\nIf $A$ and $B$ are two vector bundles on ${\\mathcal{M}}^s$ such that $A^{\\oplus r}$ is isomorphic\nto $B^{\\oplus r}$, then $A$ is isomorphic to $B$ \\cite[p.~315, Theorem~2]{At}. Therefore,\nfrom \\eqref{e3} it follows that $W^\\vee$ is isomorphic to $W\\otimes L_0$. Hence\nthe line bundle $\\bigwedge^r W^\\vee$ is isomorphic to $\\bigwedge^r (W\\otimes L_0)\\,=\\,\nL^{\\otimes r}_0\\otimes \\bigwedge^r W$. The Picard group of ${\\mathcal{M}}^s$ is identified with $\\mathbb Z$\nby sending its ample generator to $1$ \\cite{Ra}; let $\\ell\\, \\in\\mathbb Z$ be the image of\n$\\bigwedge^r W$ by this identification of $\\text{Pic}({\\mathcal{M}}^s)$ with\n$\\mathbb Z$. We have\n\\begin{equation}\\label{f1}\n\\text{degree}(\\xi)\\cdot \\ell\\,=\\, 1 + ar\n\\end{equation}\nfor some integer $a$ \\cite[p.~75, Remark~2.9]{Ra} (see also \\cite[p.~75, Definition~2.10]{Ra}).\nSince $\\bigwedge^r W^\\vee\\,=\\,L^{\\otimes r}_0\\otimes \\bigwedge^r W$, we also have\n\\begin{equation}\\label{f2}\n-\\ell\\,=\\, br+\\ell\\, ,\n\\end{equation}\nwhere $b\\,\\in\\, \\mathbb Z$ is the image of $L_0$. 
From \\eqref{f1} and \\eqref{f2} it\nfollows that\n$$\n2\\text{degree}(\\xi)\\cdot \\ell\\,=\\, -\\text{degree}(\\xi) br\\,=\\, 2+2ar\\, .\n$$\nThis implies that $r\\,=\\,2$.\n\nNow assume that $r$ and $\\text{degree}(\\xi)$ have a common factor. Let $$\\delta\\,=\\, \\text{g.c.d.}(r,\n\\, \\text{degree}(\\xi))\\, >\\, 1$$ be the greatest common divisor. The Brauer group \n$\\text{Br}({\\mathcal{M}}^s)$ of ${\\mathcal{M}}^s$ is the cyclic group ${\\mathbb Z}\/\\delta\\mathbb Z$, and\nit is generated by the class of the restriction to\n$\\{x\\}\\times {\\mathcal{M}}^s$ of the projectivized Poincar\\'e bundle \\cite[p.~267, Theorem~1.8]{BBGN}; we will\ndenote this generator of $\\text{Br}({\\mathcal{M}}^s)$ by $\\varphi_0$. Now, the class of\n$\\mathbb{P}^\\vee$ is $\\varphi_0$ (tensoring by a vector bundle does not change the Brauer class),\nand hence the class of $\\mathbb{P}$ is $-\\varphi_0$. If $\\mathbb{P}^\\vee$ is isomorphic to $\\mathbb{P}$, then\nwe have $\\varphi_0\\,=\\, -\\varphi_0$, hence $\\delta\\,=\\, 2$ (as it is the order of\n$\\varphi_0$).\n\nWe now assume that $\\delta\\,=\\, 2$. 
For a suitable ${\\mathbb P}^{r-1}_{\\mathbb C}$ embedded in ${\\mathcal{M}}^s$, the restriction of\n$\\mathbb{P}^\\vee$ to it is the projectivization of the vector\nbundle ${\\mathcal O}_{{\\mathbb P}^{r-1}_{\\mathbb C}}\\oplus\n\\Omega^1_{{\\mathbb P}^{r-1}_{\\mathbb C}}$ \\cite[p.~464, Lemma~3.1]{BBN09},\n\\cite[p.~464, (3.4)]{BBN09}; note that any extension of $\\Omega^1_{{\\mathbb P}^{r-1}_{\\mathbb C}}$\nby ${\\mathcal O}_{{\\mathbb P}^{r-1}_{\\mathbb C}}$ splits because\n$H^1({\\mathbb P}^{r-1}_{\\mathbb C},\\, T{\\mathbb P}^{r-1}_{\\mathbb C})\\,=\\, 0$.\nTherefore, if $\\mathbb{P}$ and $\\mathbb{P}^\\vee$ are isomorphic,\nrestricting an isomorphism to this embedded ${\\mathbb P}^{r-1}_{\\mathbb C}$ it follows\nthat ${\\mathcal O}_{{\\mathbb P}^{r-1}_{\\mathbb C}}\\oplus\n\\Omega^1_{{\\mathbb P}^{r-1}_{\\mathbb C}}$ is isomorphic\nto $({\\mathcal O}_{{\\mathbb P}^{r-1}_{\\mathbb C}}\\oplus T{\\mathbb P}^{r-1}_{\\mathbb C})\n\\otimes L'$ for some line bundle $L'$ on ${\\mathbb P}^{r-1}_{\\mathbb C}$. Since\n$T{\\mathbb P}^{r-1}_{\\mathbb C}$ is indecomposable (it is in fact stable),\nit follows from \\cite[p.~315, Theorem~2]{At} that $T{\\mathbb P}^{r-1}_{\\mathbb C}\n\\otimes L'$ is isomorphic to either ${\\mathcal O}_{{\\mathbb P}^{r-1}_{\\mathbb C}}$ or\n$\\Omega^1_{{\\mathbb P}^{r-1}_{\\mathbb C}}$. If\n$T{\\mathbb P}^{r-1}_{\\mathbb C}\n\\otimes L'$ is isomorphic to ${\\mathcal O}_{{\\mathbb P}^{r-1}_{\\mathbb C}}$,\nthen we have $r\\,=\\, 2$. If $T{\\mathbb P}^{r-1}_{\\mathbb C}\n\\otimes L'$ is isomorphic to $\\Omega^1_{{\\mathbb P}^{r-1}_{\\mathbb C}}$, we have\n$$\nr+ (r-1)\\cdot \\text{degree}(L')\\,=\\, -r \\, ,\n$$\nso we obtain\n$$\n-(r-1)\\cdot \\text{degree}(L')\\,=\\, 2r \\, .\n$$\nSince $r-1$ is coprime to $r$, we conclude that $r-1$ divides $2$, which implies that either $r\\,=\\, 2$ or $r\\,=\\, 3$. 
However, $r$ is even because $\\delta \\,=\\, 2$, so $r\\,=\\, 2$.\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:noDual}\nLet $\\varphi: {\\mathcal{F}}\\longrightarrow {\\mathcal{F}}$ be an automorphism. Then there exist an automorphism\n$\\sigma:X\\longrightarrow X$ with $\\sigma(x)= x$, and a line bundle $L$ over $X$, such that the induced automorphism on ${\\mathcal{M}}$ is ${\\mathcal{T}}_{\\sigma,L,+}$.\n\\end{lemma}\n\n\\begin{proof}\nFor $r=2$, this is a direct consequence of Lemma \\ref{lemma:trivialrk2}.\n\nAssume that $r>2$ and suppose that there exist $\\sigma$ and $L$ such that the induced automorphism on ${\\mathcal{M}}$ is ${\\mathcal{T}}_{\\sigma,L,-}$. Let $L'=(\\sigma^{-1})^* L$. Then clearly ${\\mathcal{T}}_{\\sigma,L,-}^{-1}={\\mathcal{T}}_{\\sigma^{-1},L',-}$. Fix a trivialization $\\alpha_{L}:L_x \\stackrel{\\sim}{\\longrightarrow} \\mathbb{C}$ and consider the map\n\\begin{eqnarray*}\n\\xymatrixrowsep{0.05pc}\n\\xymatrixcolsep{0.3pc}\n\\xymatrix{\n{\\widetilde{{\\mathcal{T}}_{\\sigma^{-1},L',-}}}&:&\\operatorname{Tot}(\\mathbb{P}) \\ar[rrrr]^{\\sim} &&&& \\operatorname{Tot}(\\mathbb{P}^\\vee)\\\\\n&& (E,\\alpha) \\ar@{|->}[rrrr] &&&& \\left( (\\sigma^{-1})^*E^\\vee\\otimes L', \\alpha^t \\otimes \\alpha_L^t \\right).\n}\n\\end{eqnarray*}\nThe following diagram is commutative by construction\n\\begin{eqnarray*}\n\\xymatrixcolsep{4pc}\n\\xymatrix{\n\\operatorname{Tot}(\\mathbb{P}) \\ar[r]^{\\widetilde{{\\mathcal{T}}_{\\sigma^{-1},L',-}}} \\ar[d] & \\operatorname{Tot}(\\mathbb{P}^\\vee) \\ar[d] \\\\\n{\\mathcal{M}}^s \\ar[r]^{{\\mathcal{T}}_{\\sigma^{-1},L',-}} & {\\mathcal{M}}^s\n}\n\\end{eqnarray*}\nTherefore, composing with $\\varphi|_{f^{-1}({\\mathcal{M}}^s)}:\\operatorname{Tot}(\\mathbb{P})\n\\stackrel{\\sim}\\longrightarrow \\operatorname{Tot}(\\mathbb{P})$, we obtain an isomorphism\n${\\widetilde{{\\mathcal{T}}_{\\sigma^{-1},L',-}}}\\circ \\varphi|_{f^{-1}({\\mathcal{M}}^s)}:\n\\operatorname{Tot}(\\mathbb{P}) 
\\stackrel{\\sim}{\\longrightarrow} \\operatorname{Tot}(\\mathbb{P}^\\vee)$ commuting with the\nrespective projections to ${\\mathcal{M}}^s$, thus contradicting Lemma \\ref{lemlast}.\n\\end{proof}\n\n\\begin{lemma}\\label{lel}\nLet $\\varphi^0\\,:\\,{\\mathcal{F}}^0\\,\\longrightarrow\\, {\\mathcal{F}}^0$ be a $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant\nautomorphism of ${\\mathcal{F}}^0$ commuting with the forgetful map $f^0\\,:\\,{\\mathcal{F}}^0 \\,\\longrightarrow\n\\,{\\mathcal{M}}^s$. Then $\\varphi^0$ is the identity map.\n\\end{lemma}\n\n\\begin{proof}\nIf $\\varphi^0$ is $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant then it is an automorphism of ${\\mathcal{F}}^0$\nconsidered as a $\\operatorname{PGL}_r(\\mathbb{C})$-principal bundle. Let ${\\mathcal{P}}$ be the universal projective bundle\nover ${\\mathcal{M}}^s$, i.e., the unique projective bundle over $X\\times {\\mathcal{M}}^s$ whose fiber over each\nstable vector bundle $E$ is $\\mathbb{P}(E)$. Let $\\{U_\\alpha\\}$ be a trivializing cover of ${\\mathcal{M}}^s$ for ${\\mathcal{P}}|_x$, and let $g_{\\alpha\\beta}:U_\\alpha\\cap U_\\beta \\longrightarrow \\operatorname{PGL}_r(\\mathbb{C})$ be the corresponding transition functions. Observe that $\\{U_\\alpha\\}$ is also a trivializing cover for $\\mathbb{P}$ and, thus, for the $\\operatorname{PGL}_r(\\mathbb{C})$-bundle ${\\mathcal{F}}^0$. It is straightforward to check that the transition functions for ${\\mathcal{F}}^0$ as $\\operatorname{PGL}_r(\\mathbb{C})$-bundle are $(g_{\\alpha\\beta}^{-1})^t$. Therefore, we conclude that ${\\mathcal{F}}^0$ is the $\\operatorname{PGL}_r(\\mathbb{C})$-principal bundle associated to the dual bundle of ${\\mathcal{P}}|_x$, i.e., ${\\mathcal{P}}^\\vee|_x$. By \\cite{BBN09}, the projective bundle\n${\\mathcal{P}}|_x$ is stable and, therefore, its dual ${\\mathcal{P}}^\\vee|_x$ must also be stable. 
Applying the\nresults from \\cite{BG08} we know that\n ${\\mathcal{P}}^\\vee|_x$ is simple and, therefore, ${\\mathcal{F}}^0$ has no nontrivial automorphism, so $\\varphi^0$ must be the identity map.\n\\end{proof}\n\n\\begin{theorem}\n\\label{thm:mainthm}\nLet $X$ be a smooth complex projective curve of genus $g>2$. Assume that $0<\\tau<\\tau_0(r)$. Let $\\varphi:{\\mathcal{F}}\\longrightarrow {\\mathcal{F}}$ be an automorphism of the moduli space of $\\tau$-semistable framed bundles with fixed determinant $\\xi$. Then there exist\n\\begin{itemize}\n\\item an automorphism $\\sigma:X\\longrightarrow X$ with $\\sigma(x)\\,=\\, x$,\n\\item a degree zero line bundle $L\\,\\in\\, J(X)$ with $\\sigma^*\\xi\\otimes L^{\\otimes r}\\,\n\\cong \\,\\xi$, and\n\\item a matrix $[G]\\in \\operatorname{PGL}_r(\\mathbb{C})$\n\\end{itemize}\nsuch that if we pick any trivialization $\\alpha_L:L_x\\stackrel{\\sim}{\\longrightarrow} \\mathbb{C}$ then for every $(E,\\,\\alpha)\\,\\in\\, {\\mathcal{F}}$\n$$\\varphi(E,\\,\\alpha)\\,=\\,(\\sigma^*E\\otimes L,\\,G\\circ \\alpha\\cdot \\alpha_L)\\, .$$\n\\end{theorem}\n\n\\begin{proof}\nBy Lemma \\ref{lemma:autoEquivariant}, composing with $\\varphi_{[G]}$ for some $[G]\\,\\in\\,\n\\operatorname{PGL}_r(\\mathbb{C})$, we may assume without loss of generality that $\\varphi$ is a $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant\nisomorphism. 
Applying Lemma \\ref{lemma:autofibers} and Lemma \\ref{lemma:noDual}, there must exist\nan automorphism $\\sigma:X\\longrightarrow X$ with $\\sigma(x)\\,=\\, x$, and a line bundle $L$ over\n$X$ with $\\sigma^*\\xi\\otimes L^{\\otimes r}\\,\n\\cong \\,\\xi$, such that the following diagram is commutative\n\\begin{eqnarray*}\n\\xymatrix{\n{\\mathcal{F}} \\ar[r]^{\\varphi} \\ar[d]_f & {\\mathcal{F}} \\ar[d]^f\\\\\n{\\mathcal{M}} \\ar[r]^{{\\mathcal{T}}_{\\sigma,L,+}} & {\\mathcal{M}} \n}\n\\end{eqnarray*}\nComposing with $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}^{-1}\\,=\\,\n\\overline{{\\mathcal{T}}_{\\sigma^{-1},(\\sigma^{-1})^*L^{-1},+}}$, we obtain a map\n$$\\varphi'\\,=\\,\\overline{{\\mathcal{T}}_{\\sigma,L,+}}^{-1}\\circ \\varphi:{\\mathcal{F}} \\longrightarrow {\\mathcal{F}}$$ commuting with the projection to ${\\mathcal{M}}$. The map $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$ is $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant by construction, so $\\varphi'$ is a $\\operatorname{PGL}_r(\\mathbb{C})$-equivariant automorphism of ${\\mathcal{F}}$ commuting with the projection to ${\\mathcal{M}}$. By the second part of Lemma \\ref{lemma:autofibers},\nthe automorphism $\\varphi'$ preserves ${\\mathcal{F}}^0$, so it induces a $\\operatorname{PGL}_r(\\mathbb{C})$-bundle map\n\\begin{eqnarray*}\n\\xymatrix{\n{\\mathcal{F}}^{0} \\ar[rr]^{\\varphi^{0}} \\ar[dr]_f && {\\mathcal{F}}^{0} \\ar[dl]^f\\\\\n & {\\mathcal{M}}^s &\n}\n\\end{eqnarray*}\nUsing Lemma \\ref{lel} we obtain that $\\varphi^0$ is the identity map on ${\\mathcal{F}}^0$. There exists at \nmost one extension of $\\varphi^{0}$ to ${\\mathcal{F}}$, because ${\\mathcal{F}}^0$ is dense in ${\\mathcal{F}}$ and the latter \nis irreducible. 
Since the identity map of ${\\mathcal{F}}$ is one such extension, it follows that\n$\\varphi'\\,=\\,\\operatorname{Id}_{{\\mathcal{F}}}$, so we have $\\varphi\\,=\\,\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$.\n\\end{proof}\n\nLet $J(X)[r]$ denote the $r$-torsion points in the Jacobian of $X$, and let $\\operatorname{Aut}(X,x)$ be the \ngroup of automorphisms of $X$ that fix the point $x\\in X$, i.e., $$\\operatorname{Aut}(X,x)\\,=\\,\\{\\sigma\\in \\operatorname{Aut}(X) \n\\,\\mid\\, \\sigma(x)\\,=\\,x\\}\\,.$$\n\n\\begin{corollary}\n\\label{cor:maincor}\nThe automorphism group of ${\\mathcal{F}}$ is\n$$\\operatorname{Aut}({\\mathcal{F}})\\cong \\operatorname{PGL}_r(\\mathbb{C})\\times {\\mathcal{T}} $$\nfor a group ${\\mathcal{T}}$ fitting in the short exact sequence\n$$1\\longrightarrow J(X)[r] \\longrightarrow {\\mathcal{T}} \\longrightarrow \\operatorname{Aut}(X,x) \\longrightarrow 1\\, .$$\n\\end{corollary}\n\n\\begin{proof}\nWe proved that the automorphism group is generated by the maps\n\\begin{itemize}\n\\item $\\varphi_{[G]}$ for each $[G]\\in \\operatorname{PGL}_r(\\mathbb{C})$, and\n\\item $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$ for each $\\sigma\\,\\in\\, \\operatorname{Aut}(X,x)$ and each $L\\in J(X)$ such that $\\sigma^*\\xi \\otimes L^{\\otimes r}\\cong \\xi$.\n\\end{itemize}\nFirst of all, the action of $\\operatorname{PGL}_r(\\mathbb{C})$ is faithful and commutes with all of the maps $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$, so we can split the group $\\operatorname{Aut}({\\mathcal{F}})$ as a product\n$$\\operatorname{Aut}({\\mathcal{F}})\\cong \\operatorname{PGL}_r(\\mathbb{C}) \\times \\langle \\overline{{\\mathcal{T}}_{\\sigma,L,+}} \\rangle\\, .$$\nObserve that, by construction, $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$ lies over the automorphism\n$${\\mathcal{T}}_{\\sigma,L,+}\\,:\\,{\\mathcal{M}}\\,\\longrightarrow\\, {\\mathcal{M}}$$ through the forgetful map $f\\,:\\,{\\mathcal{F}}\n\\,\\longrightarrow\\, {\\mathcal{M}}$. 
The latter is trivial only for $(\\sigma,\\,L)\\,=\\,(\\operatorname{Id},\\,{\\mathcal{O}}_X)$,\nso ${\\mathcal{T}}_{\\sigma,L,+}\\,\\ne\\, \\operatorname{Id}$ whenever $(\\sigma,\\,L)\\,\\ne\\, (\\operatorname{Id},\\,{\\mathcal{O}}_X)$. Therefore, in\norder to obtain the desired result it is enough to prove that the group\n$${\\mathcal{T}}\\,=\\,\\langle \\{\\overline{{\\mathcal{T}}_{\\sigma,L,+}}\\} \\rangle$$ generated by\nthe maps $\\overline{{\\mathcal{T}}_{\\sigma,L,+}}$ is an extension of $\\operatorname{Aut}(X,x)$ by $J(X)[r]$.\n\nLet $\\sigma\\in \\operatorname{Aut}(X,x)$ be any automorphism. Since $\\text{degree}(\\sigma^*\\xi)\\,=\\,\\text{degree}(\\xi)$,\nthere is a line bundle $L_\\sigma \\in J(X)$ such that\n$$\\sigma^*\\xi \\otimes L_\\sigma^{\\otimes r} \\,\\cong\\, \\xi\\, .$$\nMoreover, if $L_\\sigma'\\in J(X)$ is another line bundle with the same property, then\n$(L_\\sigma')^{\\otimes r} \\,\\cong\\, L_\\sigma^{\\otimes r}$, so $L_\\sigma$ and $L_\\sigma'$ differ\nby tensoring with an $r$-torsion element of the Jacobian $J(X)$.\n\nThen, $\\langle \\overline{{\\mathcal{T}}_{\\sigma,L,+}}\\rangle$ is generated as a group by the maps\n\\begin{itemize}\n\\item $\\overline{{\\mathcal{T}}_{\\sigma,L_\\sigma,+}}$ for $\\sigma\\in \\operatorname{Aut}(X,x)$\n\\item $\\overline{{\\mathcal{T}}_{\\operatorname{Id},L,+}}$ for $L\\in J(X)[r]$.\n\\end{itemize}\n\nMoreover, for every $\\sigma \\in \\operatorname{Aut}(X,x)$, every $L\\in \\operatorname{Pic}(X)$ and every $L'\\in J(X)[r]$, we have\n$$\\overline{{\\mathcal{T}}_{\\sigma,L,+}}\\circ \\overline{{\\mathcal{T}}_{\\operatorname{Id},L',+}} =\n\\overline{{\\mathcal{T}}_{\\operatorname{Id},\\sigma^*L',+}}\\circ \\overline{{\\mathcal{T}}_{\\sigma,L,+}}\\, .$$\nSince $\\sigma^*:J(X)[r] \\longrightarrow J(X)[r]$ is an automorphism, it follows that\n$$\\overline{{\\mathcal{T}}_{\\sigma,L,+}} 
\\circ J(X)[r] = J(X)[r]\\circ \\overline{{\\mathcal{T}}_{\\sigma,L,+}}\\, .$$\nTherefore, $J(X)[r]$ is a normal subgroup of $\\langle \\overline{{\\mathcal{T}}_{\\sigma,L,+}} \\rangle$ and its\nquotient is precisely $\\operatorname{Aut}(X,x)$, so we obtain an exact sequence\n\\begin{eqnarray*}\n\\xymatrixcolsep{4pc}\n\\xymatrix{\n1 \\ar[r] & J(X)[r] \\ar[r]^-{L \\mapsto \\overline{{\\mathcal{T}}_{\\operatorname{Id},L,+}}} & {\\mathcal{T}} \\ar[r]^-{\\overline{{\\mathcal{T}}_{\\sigma,L,+}} \\mapsto \\sigma} & \\operatorname{Aut}(X,x) \\ar[r] & 1\n}\n\\end{eqnarray*}\nThis completes the proof.\n\\end{proof}\n\n\\section*{Acknowledgements} \n\nThis work was developed during a research stay of both authors at the Laboratoire J. \nA. Dieudonn{\\'e} at Universit{\\'e} de Nice Sophia-Antipolis. We would like to thank \nthe laboratory for its hospitality. This research was partially funded by MINECO \n(grant MTM2016-79400-P and ICMAT Severo Ochoa project SEV-2015-0554) and the 7th \nEuropean Union Framework Programme (Marie Curie IRSES grant 612534 project MODULI). \nThe first author was also supported by a predoctoral grant from Fundaci\\'on La Caixa \n-- Severo Ochoa International Ph.D. Program and would like to thank Tom\\'as G\\'omez \nfor the useful discussions held during the development of this work. The second\nauthor is supported by a J. C. Bose Fellowship.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzkhpu b/data_all_eng_slimpj/shuffled/split2/finalzzkhpu new file mode 100644 index 0000000000000000000000000000000000000000..04c274acc7c0361204750cbf4a6f304a588907a6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzkhpu @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nMultiplier ideals are an important tool in higher-dimensional algebraic geometry, and one can view them as measuring singularities.
\nThey are defined as follows:\nlet $X$ be a smooth complex variety and ${\\mathfrak{a}} \\subseteq {\\mathcal{O}} _X$ be an ideal sheaf of $X$. \nSuppose that $\\pi:\\widetilde{X} \\to X$ is a log resolution of ${\\mathfrak{a}}$, that is, $\\pi$ is a proper birational morphism, \n$\\widetilde{X}$ is smooth and $\\pi^{-1}V({\\mathfrak{a}})=F$ is a divisor with simple normal crossing support. \nIf $K_{\\widetilde{X}\/X}$ is the relative canonical divisor of $\\pi$, then the multiplier ideal of ${\\mathfrak{a}}$ with exponent $c \\in \\R_{\\ge 0}$ is \n\\[\n\\J({\\mathfrak{a}}^c)=\\J(c \\cdot {\\mathfrak{a}})=\\pi_*{\\mathcal{O}}_{\\widetilde{X}}(K_{\\widetilde{X}\/X}-\\lfloor cF \\rfloor) \\subseteq {\\mathcal{O}}_X.\n\\]\nA positive rational number $c$ is called a {\\it jumping coefficient} if $\\J({\\mathfrak{a}}^{c-\\varepsilon})\\neq \\J({\\mathfrak{a}}^{c})$ for all $\\varepsilon>0$, \nand the minimal jumping coefficient is called the {\\it log-canonical threshold} of ${\\mathfrak{a}}$ and denoted by $\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}})$. \nSince multiplier ideals are defined via log resolutions, it is difficult to compute them in general \n(when the ideal ${\\mathfrak{a}}$ is a monomial ideal or a principal ideal generated by a non-degenerate polynomial, \nthere is a combinatorial description of the multiplier ideals $\\J({\\mathfrak{a}}^c)$. See \\cite{H} and \\cite{H2}). \nIn this paper, we will give algorithms for computing multiplier ideals using the theory of $D$-modules. \n\n\nThe Bernstein-Sato polynomial (or $b$-function) is one of the main objects in the theory of $D$-modules. 
\nIt has turned out that jumping coefficients are deeply related to Bernstein-Sato polynomials.\nFor a given polynomial $f \\in {\\mathbb{C}}[x_1, \\dots, x_n]$, \nthe Bernstein-Sato polynomial $b_f(s)$ of $f$ is the monic polynomial in one variable $b(s) \\in {\\mathbb{C}}[s]$ of minimal degree \nhaving the property that there exists a linear differential operator $P(x,s)$ such that $b(s) f^s =P(x,s)f^{s+1}$. \nKoll\\'ar \\cite{Ko} proved that the log canonical threshold of $f$ is the minimal root of $b_f(-s)$. \nFurthermore, Ein--Lazarsfeld--Smith--Varolin \\cite{ELSV} extended Koll\\'ar's result to higher jumping coefficients: \nthey proved that all jumping coefficients in the interval $(0,1]$ are roots of $b_f(-s)$. \nRecently, Budur--Musta\\c{t}\\u{a}--Saito introduced the notion of Bernstein-Sato polynomials of arbitrary ideal sheaves \nusing the theory of $V$-filtrations of Kashiwara \\cite{Ka} and Malgrange \\cite{M}. \nThey then gave a criterion for membership of multiplier ideals in terms of their Bernstein-Sato polynomials, \nand proved that all jumping coefficients of ${\\mathfrak{a}}\\subset {\\mathcal{O}}_X$ in the interval $(\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}}),\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}})+1]$ \nare roots of the Bernstein-Sato polynomial of ${\\mathfrak{a}}$ up to sign. \n\nIt is difficult to compute Bernstein-Sato polynomials in general, \nbut Oaku \\cite{O1}, \\cite{O2}, \\cite{O3} gave algorithms for computing Bernstein-Sato polynomials $b_f(s)$ \nusing Gr\\\"obner bases in Weyl algebras \n(algorithms for computing Gr\\\"obner bases in Weyl algebras are implemented in some computer systems, \nsuch as Kan\/Sm1 \\cite{T} and Risa\/Asir \\cite{N}). \nIn this paper, we give algorithms for computing Budur--Musta\\c{t}\\u{a}--Saito's Bernstein-Sato polynomials \n(Theorems \\ref{Algorithm for global b-functions 1}, \\ref{Algorithm for global b-functions 2} and \\ref{Algorithm for local b-functions}). 
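The defining identity $b(s)f^s=P(x,s)f^{s+1}$ recalled above can be made concrete in the simplest case; the following worked example is our own illustration and is not taken from the original text.

```latex
% For f = x^2 in one variable, take P = (1/4) \partial_x^2:
% since \partial_x^2 x^{2(s+1)} = (2s+2)(2s+1) x^{2s}, we get
\frac{1}{4}\,\partial_x^2\, f^{s+1}
  \,=\,\frac{(2s+2)(2s+1)}{4}\,x^{2s}
  \,=\,(s+1)\Bigl(s+\tfrac{1}{2}\Bigr)\,f^{s},
\qquad\text{so}\qquad
b_{x^2}(s)\,=\,(s+1)\Bigl(s+\tfrac{1}{2}\Bigr).
% The minimal root of b_{x^2}(-s) is 1/2 = lct(x^2),
% illustrating Koll\'ar's theorem quoted above.
```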
\nOur algorithms are natural generalizations of Oaku's algorithm. \n\nThe other ingredient of this paper is a set of algorithms for computing multiplier ideals. \nThe algorithm for computing generalized Bernstein-Sato polynomials enables us to solve the membership problem for multiplier ideals, \nbut it does not give a system of generators of multiplier ideals. \nWe modify the definition of Budur--Musta\\c{t}\\u{a}--Saito's Bernstein-Sato polynomials \nto determine a system of generators of the multiplier ideals of a given ideal (Definition \\ref{new Bernstein-Sato polynomial}). \nThen we obtain algorithms for computing our Bernstein-Sato polynomials\nand algorithms for computing multiplier ideals (Theorems \\ref{algorithm for computing multiplier ideals 1} and \\ref{algorithm for computing multiplier ideals 2}). \nOur algorithms are based on the theory of Gr\\\"obner bases in Weyl algebras \n(see \\cite{OT} and \\cite{SST} for a review of Gr\\\"obner bases in Weyl algebras and their applications). \nWe conclude the paper by presenting several examples computed by our algorithms. \n\\begin{acknowledgments}\\upshape\nThe author thanks Shunsuke Takagi for useful comments and discussions. \nHe would also like to thank Naoyuki Shinohara and Kazuhiro Yokoyama for warm encouragement and support. \n\\end{acknowledgments}\n\\section{Preliminaries}\n\\subsection{Gr\\\"obner bases in Weyl algebras}\nWe denote by ${\\mathbb{C}}$ the complex number field. \nWhen we use a computer algebra system, we may work with a computable field $\\Q(z_1,\\dots,z_l)\\subset {\\mathbb{C}}$ \nwhich is sufficient to express objects that appear in the computations. \nLet $X$ be the affine space ${\\mathbb{C}}^{d}$ with the coordinate system ${\\boldsymbol{x}}= (x_1,\\dots,x_d)$, \nand ${\\mathbb{C}}[{\\boldsymbol{x}}]={\\mathbb{C}}[x_1,\\dots,x_d]$ a polynomial ring over ${\\mathbb{C}}$ which is the coordinate ring of $X$. 
\nWe denote by ${\\partial_{\\boldsymbol{x}}} = (\\partial_{x_1} ,\\dots, \\partial_{x_d})$ the partial differential operators, where \n$\\partial_{x_i} = \\frac{\\partial}{\\partial{x_i}}$. \nWe set \n\\[\nD_X={\\mathbb{C}}\\langle {\\boldsymbol{x}},{\\partial_{\\boldsymbol{x}}}\\rangle={\\mathbb{C}}\\langle x_1,\\dots,x_d,\\partial_{x_1},\\dots,\\partial_{x_d}\\rangle, \n\\]\nthe ring of differential operators of $X$, and call it the {\\it Weyl algebra} (in $d$ variables). \nThis ring is a non-commutative ${\\mathbb{C}}$-algebra with the commutation rules \n\\[\nx_ix_j=x_jx_i,~~\\partial_{x_i}\\partial_{x_j}=\\partial_{x_j}\\partial_{x_i},\n~~\\partial_{x_i}x_j=x_j\\partial_{x_i}~~\\mathrm{for}~~i\\neq j,~~\\mathrm{and}~\\partial_{x_i}x_i=x_i\\partial_{x_i}+1.\n\\]\nWe write $\\langle P_1,\\dots,P_r\\rangle$ for the left ideal of $D_X$ generated by $P_1,\\dots,P_r\\in D_X$. \nWe use the notation \n${\\boldsymbol{x}}^{\\boldsymbol{\\mu}}=\\prod_{i=1}^d x_i^{{\\mu}_{i}}$, \nand \n${\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}}=\\prod_{i=1}^d\\partial_{x_i}^{{\\nu}_{i}}$ \nfor $\\boldsymbol{\\mu} = ({\\mu}_{1}, \\dots , {\\mu}_{d})$, $\\boldsymbol{\\nu}=({\\nu}_{1},\\dots,{\\nu}_{d})\\in{\\mathbb{Z}}_{\\ge 0}^d$. \nWe denote by $|\\boldsymbol{\\mu}| := {\\mu}_{1} + \\dots+ {\\mu}_{d}$ the size of $\\boldsymbol{\\mu}$. 
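The relation $\\partial_{x_i}x_i=x_i\\partial_{x_i}+1$ can be checked concretely by letting both sides act on polynomials. The following minimal Python sketch is our own illustration (not part of the original text); it represents a one-variable polynomial by its ascending list of coefficients.

```python
def D(p):
    """Apply d/dx to a polynomial given by its ascending coefficient list."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def X(p):
    """Multiply the polynomial by x (shift coefficients up by one degree)."""
    return [0] + p

def add(p, q):
    """Add two coefficient lists of possibly different lengths."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [1, 2, 3]              # the polynomial 1 + 2x + 3x^2
lhs = D(X(p))              # (d/dx . x) applied to p
rhs = add(X(D(p)), p)      # (x . d/dx + 1) applied to p
assert lhs == rhs == [1, 4, 9]
```

Both sides send $1+2x+3x^2$ to $1+4x+9x^2$, as the commutation rule predicts.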
\nWe call a real vector $({\\boldsymbol{v}},{\\boldsymbol{w}})=(v_{1},\\dots,v_{d},w_{1},\\dots,w_{d})\\in \\R^{d}\\times\\R^{d}$ a {\\it weight vector} if \n\\[\nv_i+w_i\\ge 0 \\mathrm{~for~}i=1,2,\\dots,d.\n\\]\nWe define the ascending filtration $\\cdots \\subset F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_0{D_X}\\subset F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_1{D_X}\\subset \\cdots$ on $D_X$ \nwith respect to the weight vector $({\\boldsymbol{v}},{\\boldsymbol{w}})$ by \n\\[\nF^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m{D_X}=\\{\\sum_{{\\boldsymbol{v}}\\cdot \\boldsymbol{\\mu}+{\\boldsymbol{w}}\\cdot \\boldsymbol{\\nu}\\le m} \na_{\\boldsymbol{\\mu\\nu}}{\\boldsymbol{x}}^{\\boldsymbol{\\mu}}{\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}}\\mid \na_{\\boldsymbol{\\mu\\nu}}\\in {\\mathbb{C}}\\}\\subset D_X \n\\]\nwhere ${\\boldsymbol{v}}\\cdot \\boldsymbol{\\mu}=\\sum v_i\\mu_i$ is the usual inner product of ${\\boldsymbol{v}}$ and $\\boldsymbol{\\mu}$. \nThen we have \n\\[\nF^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_{m_1}{D_X}\\cdot F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_{m_2}{D_X}\\subset F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_{m_1+m_2}{D_X}\n\\] \nfor all $m_1$, $m_2\\in {\\mathbb{Z}}$ by the conditions $v_i+w_i\\ge 0$ and the commutation rules of $D_X$. \nIn particular, $F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_0{D_X}$ is a sub-ring of $D_X$, and $F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m{D_X}$'s are $F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_0{D_X}$-submodules of $D_X$. \nWe can define the associated graded ring of $D_X$ with respect to the filtration \n\\[\n\\mathop{\\mathrm{Gr}}\\nolimits^{({\\boldsymbol{v}},{\\boldsymbol{w}})}{D_X}=\\bigoplus_{m\\in {\\mathbb{Z}}} F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m{D_X}\/ F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_{m-1}{D_X}. 
\n\\]\n\\begin{Definition}\nThe {\\it order} of $P\\in D_X$ is defined by \n\\[\n\\mathop{\\mathrm{ord}}\\nolimits_{({\\boldsymbol{v}},{\\boldsymbol{w}})}(P)=\\min\\{m\\mid P\\in F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m{D_X}\\}. \n\\]\nFor a non-zero $P\\in D_X$ with $\\mathop{\\mathrm{ord}}\\nolimits_{({\\boldsymbol{v}},{\\boldsymbol{w}})}(P)=m$, the {\\it initial form} $\\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{v}},{\\boldsymbol{w}})}(P)$ of $P$ is the image of $P$ in $\\mathop{\\mathrm{Gr}}\\nolimits_m^{({\\boldsymbol{v}},{\\boldsymbol{w}})}{D_X}:=F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m{D_X}\/ F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_{m-1}{D_X}$. \nFor a left ideal $I\\subset D_X$, the {\\it initial ideal} $\\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{v}},{\\boldsymbol{w}})}(I)$ of $I$ is the left ideal of \n$\\mathop{\\mathrm{Gr}}\\nolimits^{({\\boldsymbol{v}},{\\boldsymbol{w}})}{D_X}$ generated by all initial forms of elements in $I$. \nA finite subset $G$ of $D_X$ is called a {\\it Gr\\\"obner basis} of $I$ with respect to $({\\boldsymbol{v}},{\\boldsymbol{w}})$ if $I$ is generated by $G$ and \n$\\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{v}},{\\boldsymbol{w}})}(I)$ is generated by the initial forms of elements in $G$. \n\\end{Definition}\nIt is known that there is an algorithm for computing Gr\\\"obner bases (\\cite{SST} Algorithm 1.2.6). \nWe can compute the restriction of ideals to sub-algebras using Gr\\\"obner bases as in the commutative case. \n\\begin{Lemma}\\label{elimination of variables}\nLet $Z$ be a subsystem of $({\\boldsymbol{x}},\\partial_{\\boldsymbol{x}})$, and ${\\mathbb{C}}\\langle Z \\rangle$ the sub-algebra of $D_X$ generated by $Z$ over ${\\mathbb{C}}$. \nLet $({\\boldsymbol{v}},{\\boldsymbol{w}})$ be a weight vector such that $v_i>0$ (resp. $w_j>0$) if $x_i$ (resp. $\\partial_{x_j}$) is not a member of $Z$, \nand $v_i=0$ (resp. $w_j=0$) otherwise. 
\nLet $I$ be a left ideal of $D_X$ and $G$ a Gr\\\"obner basis of $I$ with respect to $({\\boldsymbol{v}},{\\boldsymbol{w}})$; \nthen $G\\cap {\\mathbb{C}}\\langle Z\\rangle$ is a system of generators of the left ideal $I\\cap {\\mathbb{C}}\\langle Z\\rangle$. \n\\end{Lemma}\nWe can also compute the intersection of ideals using elimination of variables as in the commutative case. \n\\begin{Lemma}\\label{intersection}\nLet $I$ and $J$ be left ideals of $D_X$. Then \n\\[\nI\\cap J=D_X[u](u I+(1-u)J)\\cap D_X. \n\\]\n\\end{Lemma}\n\\begin{proof}\nIf $P\\in I\\cap J$, then $P=uP+(1-u)P\\in D_X[u](u I+(1-u)J)\\cap D_X$. \nLet $P \\in D_X[u](u I+(1-u)J)\\cap D_X$. Substituting $1$ and $0$ for $u$, we see that $P\\in I$ and $P\\in J$. \n\\end{proof}\nNote that substituting some $p\\in D_X$ for the variable $u$ makes sense only when $p$ is in the center of $D_X$, that is, $p\\in{\\mathbb{C}}$. \nIn this case, the left ideal of $D_X[u]$ generated by $u-p$ is a two-sided ideal. \n\nFrom now on, we assume that the weight vector $({\\boldsymbol{v}},{\\boldsymbol{w}})$ satisfies \n\\[\nv_i+w_i=0 \\mathrm{~for~}i=1,2,\\dots,d. \n\\]\nThen $D_X$ has a structure of a graded algebra: \nwe set \n\\[\n[D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m:=\\{\\sum_{{\\boldsymbol{v}}\\cdot \\boldsymbol{\\mu}+{\\boldsymbol{w}}\\cdot \\boldsymbol{\\nu}= m} \na_{\\boldsymbol{\\mu\\nu}}{\\boldsymbol{x}}^{\\boldsymbol{\\mu}}{\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}}\\mid a_{\\boldsymbol{\\mu\\nu}}\\in {\\mathbb{C}}\\}\\subset D_X. \n\\]\nThen $F^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m{D_X}=\\bigoplus_{k\\le m}[D_X]_k^{({\\boldsymbol{v}},{\\boldsymbol{w}})}$ and $\\mathop{\\mathrm{Gr}}\\nolimits_m^{({\\boldsymbol{v}},{\\boldsymbol{w}})}{D_X}\\cong [D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m$ \nsince the commutation rules of $D_X$ are homogeneous of weight $0$. 
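Before continuing with the graded structure, here is a minimal computational illustration (our own, not part of the original text) of the $({\\boldsymbol{v}},{\\boldsymbol{w}})$-order and initial form defined above, for an element of $D_X$ given by its normally ordered terms.

```python
def weight(mu, nu, v, w):
    """(v,w)-weight of the monomial x^mu d^nu."""
    return sum(vi * mi for vi, mi in zip(v, mu)) + sum(wi * ni for wi, ni in zip(w, nu))

def ord_vw(terms, v, w):
    """ord_{(v,w)}(P) for P given as a list of terms (coeff, mu, nu)."""
    return max(weight(mu, nu, v, w) for c, mu, nu in terms if c != 0)

def initial_form(terms, v, w):
    """The terms of top (v,w)-weight, i.e. the initial form of P in Gr^{(v,w)} D_X."""
    m = ord_vw(terms, v, w)
    return [(c, mu, nu) for c, mu, nu in terms
            if c != 0 and weight(mu, nu, v, w) == m]

# P = x1*d1^2 + d2 in the Weyl algebra in d = 2 variables
P = [(1, (1, 0), (2, 0)), (1, (0, 0), (0, 1))]
v, w = (0, 0), (1, 1)      # weight 1 on each derivative, 0 on the variables
assert ord_vw(P, v, w) == 2
assert initial_form(P, v, w) == [(1, (1, 0), (2, 0))]
```

With the weight $(-1,0,1,0)$, which satisfies $v_i+w_i=0$, the same element has order $1$.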
\nHence $D_X$ is a graded algebra $D_X=\\bigoplus_{m\\in {\\mathbb{Z}}} [D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m$ and isomorphic to $\\mathop{\\mathrm{Gr}}\\nolimits^{({\\boldsymbol{v}},{\\boldsymbol{w}})}{D_X}$. \nIn particular, $[D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_0$ is a sub-ring of $D_X$. We call an element in \n$[D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m$ a {\\it homogeneous element} of degree $m$. \nA left ideal $J$ of $D_X$ is called a {\\it homogeneous ideal} if $J$ is generated by homogeneous elements. \n\n\\begin{Definition}\nFor $P=\\sum P_m\\in D_X$ with $P_m\\in [D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m$ and $m_0=\\mathop{\\mathrm{ord}}\\nolimits_{({\\boldsymbol{v}},{\\boldsymbol{w}})}(P)$, \nwe define the homogenization of $P$ with homogenizing variable $u_1$ to be \n\\[\nP^h=\\sum P_mu_1^{m_0-m}\\in D_X[u_1]. \n\\]\nFor a left ideal $J$ of $D_X$, we define the homogenization of $J$ to be the left ideal of $D_X[u_1]$ \n\\[\nJ^h=\\langle P^h\\mid P\\in J\\rangle. \n\\]\n\\end{Definition}\n\\begin{Definition}\nFor a left ideal $J$ of $D_X$, we set \n\\[\nJ^*=J^h\\cap D_X=\\bigoplus_{m\\in {\\mathbb{Z}}} (J\\cap [D_X]^{({\\boldsymbol{v}},{\\boldsymbol{w}})}_m), \n\\]\nthe ideal of $D_X$ generated by all homogeneous elements in $J$. \n\\end{Definition}\n\\begin{Lemma}\\label{homogeneous part}\nLet $J=\\langle P_1,\\dots,P_r\\rangle$ be a left ideal of $D_X$. Then \n\\[\nJ^*=D_X[u_1,u_2]\\langle P_1^h,\\dots,P_r^h,u_1u_2-1\\rangle\\cap D_X. \n\\]\n\\end{Lemma}\n\\begin{proof}\nIt is easy to see that \n\\[\nJ^h=(D_X[u_1,u_1^{-1}]\\langle P_1^h,\\dots,P_r^h\\rangle) \\cap D_X[u_1]=\\langle P_1^h,\\dots,P_r^h,u_1u_2-1\\rangle\\cap D_X[u_1]. \n\\]\nSince $J^*=J^h\\cap D_X$, we obtain the assertion. 
\n\\end{proof}\n\\subsection{Bernstein-Sato polynomials}\nBudur--Musta\\c{t}\\u{a}--Saito introduced generalized Bernstein-Sato polynomials (or $b$-functions) of arbitrary varieties in \\cite{BMS} \nand proved relations between generalized Bernstein-Sato polynomials and multiplier ideals \nusing the theory of the $V$-filtration of Kashiwara and Malgrange. \n\nLet $X$ be the affine space ${\\mathbb{C}}^{n}$ with the coordinate ring ${\\mathbb{C}}[{\\boldsymbol{x}}]={\\mathbb{C}}[x_1,\\dots,x_n]$, and fix \nan ideal ${\\mathfrak{a}}$ of ${\\mathbb{C}}[{\\boldsymbol{x}}]$ with a system of generators ${\\boldsymbol{f}} =(f_1,\\dots,f_r)$. \nLet $Y=X\\times {\\mathbb{C}}^r$ be the affine space ${\\mathbb{C}}^{n+r}$ with the coordinate system $({\\boldsymbol{x}},{\\boldsymbol{t}}) = (x_1,\\dots, x_n,t_1,\\dots, t_r)$. \nThen $X\\times\\{0\\}=V(t_1,\\dots,t_r)\\cong X$ is a linear subspace of $Y$ with the defining ideal \n$I_{X\\times\\{0\\}}=\\langle t_1,\\dots,t_r\\rangle$. \nWe denote the rings of differential operators of $X$ and $Y$ by \n\\begin{eqnarray*}\nD_X&=&{\\mathbb{C}}\\langle {\\boldsymbol{x}}, {\\partial_{\\boldsymbol{x}}}\\rangle={\\mathbb{C}}\\langle x_1,\\dots,x_n,\\partial_{x_1},\\dots,\\partial_{x_n}\\rangle, \\\\\nD_Y&=&{\\mathbb{C}}\\langle {\\boldsymbol{x}},{\\boldsymbol{t}},{\\partial_{\\boldsymbol{x}}},{\\partial_{\\boldsymbol{t}}}\\rangle={\\mathbb{C}}\\langle x_1,\\dots, x_n, t_1,\\dots, t_r,\\partial_{x_1},\\dots,\\partial_{x_n},\n\\partial_{t_1},\\dots,\\partial_{t_r}\\rangle. 
\n\\end{eqnarray*}\nWe use the notation \n${\\boldsymbol{x}}^{\\boldsymbol{\\mu}_1}=\\prod_{i=1}^n x_i^{{\\mu}_{1i}}$, \n${\\boldsymbol{t}}^{\\boldsymbol{\\mu}_2}=\\prod_{j=1}^r t_j^{{\\mu}_{2j}}$, \n${\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}_1}=\\prod_{i=1}^n \\partial_{x_i}^{{\\nu}_{1i}}$, \nand ${\\partial_t}^{\\boldsymbol{\\nu}_2}=\\prod_{j=1}^r\\partial_{t_j}^{{\\nu}_{2j}}$ \nfor $\\boldsymbol{\\mu}_1 = ({\\mu}_{11}, \\dots , {\\mu}_{1n})$, $\\boldsymbol{\\nu}_1=({\\nu}_{11},\\dots,{\\nu}_{1n})\\in{\\mathbb{Z}}_{\\ge 0}^n$ \nand $\\boldsymbol{\\mu}_2 = ({\\mu}_{21},\\dots , {\\mu}_{2r})$, $\\boldsymbol{\\nu}_2=({\\nu}_{21},\\dots,{\\nu}_{2r}) \\in {\\mathbb{Z}}_{\\ge 0}^r$. \nThe ${\\mathbb{C}}[{\\boldsymbol{x}}]$-module $N_{{\\boldsymbol{f}}}:={\\mathbb{C}}[{\\boldsymbol{x}}][\\prod_i f_i^{-1},s_1,\\dots,s_r]\\prod_i f_i^{s_i}$, \nwhere $s_i$'s are independent variables and $\\prod_i f_i^{s_i}$ is a symbol, has a $D_X$-module structure as follows: \nThe action of ${\\mathbb{C}}[{\\boldsymbol{x}}]$ on $N_{{\\boldsymbol{f}}}$ is given by the canonical one, and the action of $\\partial_{x_j}$ is given by\n\\[\n\\partial_{x_j} (h\\prod_i f_i^{s_i})=\\biggl(\\partial_{x_j}(h)+h\\sum_{k=1}^{r} s_k\\frac{\\partial_{x_j}(f_k)}{f_k}\\biggr)\\prod_i f_i^{s_i}\n\\] \nfor $h\\in {\\mathbb{C}}[{\\boldsymbol{x}}][\\prod_i f_i^{-1},s_1,\\dots,s_r]$. \nThis action is defined formally, but it has an obvious meaning when some integers are substituted for $s_i$'s. \nWe define $D_X$-linear actions $t_j$ and $\\partial_{t_j}$ on $N_{{\\boldsymbol{f}}}$ by \n\\[\nt_j(h(x,s_1,\\dots,s_r)\\prod_i f_i^{s_i})=h(x,s_1,\\dots,s_j+1,\\dots,s_r)f_j\\prod_i f_i^{s_i}\n\\] \nand \n\\[\n\\partial_{t_j}(h(x,s_1,\\dots,s_r)\\prod_i f_i^{s_i})=-s_jh(x,s_1,\\dots,s_j-1,\\dots,s_r)f_j^{-1}\\prod_i f_i^{s_i}\n\\]\nfor $h(x,s_1,\\dots,s_r)\\in {\\mathbb{C}}[{\\boldsymbol{x}}][\\prod_i f_i^{-1},s_1,\\dots,s_r]$. 
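\nFor instance, in the simplest case $n=r=1$ with $f_1=x$, these formulas read \n\\[\n\\partial_{x}(x^s)=sx^{-1}x^s, \\quad t(h(s)x^s)=h(s+1)x\\cdot x^s, \\quad \\partial_{t}(h(s)x^s)=-sh(s-1)x^{-1}x^s \n\\]\nfor $h(s)\\in {\\mathbb{C}}[x,x^{-1},s]$; in particular $-\\partial_{t}t(x^s)=sx^s$. 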
\nThen it follows that $N_{{\\boldsymbol{f}}}$ is a $D_Y$-module because the actions defined above respect the commutation rules of $D_Y$. \nNote that $-\\partial_{t_i}{t_i} \\prod_i f_i^{s_i}=s_i\\prod_i f_i^{s_i}$ for all $i$. \n\\begin{Definition}[\\cite{BMS}]\\label{def of b-functions}\nLet $\\sigma =-(\\sum_i \\partial_{t_i}{t_i})$, and let $s$ be a new variable. \nThen the global generalized Bernstein-Sato polynomial $b_{{\\mathfrak{a}},g}(s)\\in {\\mathbb{C}} [s]$ of ${\\mathfrak{a}}=\\langle f_1,\\dots,f_r\\rangle$ and \n$g\\in {\\mathbb{C}}[{\\boldsymbol{x}}]$ is defined to be the monic polynomial of minimal degree satisfying \n\\begin{equation}\\label{exp1}\nb(\\sigma)g\\prod_i f_i^{s_i}=\\sum_{j=1}^{r}P_jgf_j\\prod_i f_i^{s_i}\n\\end{equation}\nfor some $P_j\\in D_X\\langle -\\partial_{t_j}t_k \\mid 1\\le j,k \\le r\\rangle$. \nWe define $b_{\\mathfrak{a}}(s)=b_{{\\mathfrak{a}},1}(s)$. \n\nFor a prime ideal ${\\mathfrak{p}}$ of ${\\mathbb{C}}[{\\boldsymbol{x}}]$, we define the local generalized Bernstein-Sato polynomial $b^{\\mathfrak{p}}_{{\\mathfrak{a}},g}(s)$ at ${\\mathfrak{p}}$ \nto be the monic polynomial of minimal degree satisfying \n\\begin{equation*}\nb(\\sigma)gh\\prod_i f_i^{s_i}=\\sum_{j=1}^{r}P_jgf_j\\prod_i f_i^{s_i}\n\\end{equation*}\nfor some $P_j\\in D_X\\langle -\\partial_{t_j}t_k \\mid 1\\le j,k \\le r\\rangle$ and $h \\not\\in {\\mathfrak{p}}$. \nWe define $b^{\\mathfrak{p}}_{\\mathfrak{a}}(s)=b^{\\mathfrak{p}}_{{\\mathfrak{a}},1}(s)$. \n\\end{Definition}\nNote that ${\\mathbb{C}}[x]_{\\mathfrak{p}} \\otimes_{{\\mathbb{C}}[x]} D_X$ is the ring of differential operators of $\\mathop{\\mathrm{Spec}}\\nolimits {\\mathbb{C}}[{\\boldsymbol{x}}]_{\\mathfrak{p}}$. \nIt is proved in \\cite{BMS} that generalized Bernstein-Sato polynomials are well-defined, \nthat is, they do not depend on the choice of generators of ${\\mathfrak{a}}$, and all their roots are negative rational numbers. 
\nThese facts follow from the theory of $V$-filtrations of Kashiwara \\cite{Ka} and Malgrange \\cite{M}. \nWhen ${\\mathfrak{a}}$ is a principal ideal generated by $f$, the polynomial $b_{\\mathfrak{a}}(s)$ coincides with the classical Bernstein-Sato polynomial $b_f(s)$ of $f$. \n\\subsection{V-filtrations}\nWe will briefly recall the definition and some basic properties of $V$-filtrations. \nSee \\cite{M}, \\cite{Ka}, \\cite{Sab} and \\cite{BMS} for details. \n\nWe fix the weight vector $({\\boldsymbol{w}},-{\\boldsymbol{w}})\\in {\\mathbb{Z}}^{2(n+r)}$, ${\\boldsymbol{w}}=((0,\\dots,0),(1,\\dots,1))\\in {\\mathbb{Z}}^{n}\\times{\\mathbb{Z}}^{r}$, that is, \nwe assign the weight $1$ to $\\partial_{t_j}$, $-1$ to $t_j$, and $0$ to $x_i$ and $\\partial_{x_i}$. \nThen \n\\[\nF_{m}^{({\\boldsymbol{w}},-{\\boldsymbol{w}})} D_Y=\\{\\sum_{-|\\boldsymbol{\\mu}_2|+|\\boldsymbol{\\nu}_2|\\le m} \na_{\\boldsymbol{\\mu}_1 \\boldsymbol{\\mu}_2 \\boldsymbol{\\nu}_1 \\boldsymbol{\\nu}_2}{\\boldsymbol{x}}^{\\boldsymbol{\\mu}_1}{\\boldsymbol{t}}^{\\boldsymbol{\\mu}_2}{\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}_1}{\\partial_{\\boldsymbol{t}}}^{\\boldsymbol{\\nu}_2}\n\\mid a_{\\boldsymbol{\\mu}_1 \\boldsymbol{\\mu}_2 \\boldsymbol{\\nu}_1 \\boldsymbol{\\nu}_2}\\in {\\mathbb{C}}\\}. \n\\]\nIn this paper, we call the decreasing filtration $V^{m}D_Y:=F_{-m}^{({\\boldsymbol{w}},-{\\boldsymbol{w}})} D_Y$ on $D_Y$ \nthe {\\it V-filtration} of $D_Y$ along $X\\times\\{0\\}$ \n(some authors call the increasing filtration $F^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}$ the $V$-filtration). 
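\nFor example, with this weight each $t_j$ lies in $V^{1}D_Y$ and each $\\partial_{t_j}$ lies in $V^{-1}D_Y$, while \n\\[\nx_i,\\ \\partial_{x_i},\\ \\partial_{t_j}t_k\\in V^{0}D_Y, \n\\]\nso that, in particular, $\\sigma=-\\sum_i \\partial_{t_i}t_i$ lies in $V^{0}D_Y$. 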
\nNote that \n\\begin{eqnarray*}\nV^m{D_Y}&=&\\{\\sum_{|\\boldsymbol{\\mu}_2|-|\\boldsymbol{\\nu}_2|\\ge m} \na_{\\boldsymbol{\\mu}_1 \\boldsymbol{\\mu}_2 \\boldsymbol{\\nu}_1 \\boldsymbol{\\nu}_2}{\\boldsymbol{x}}^{\\boldsymbol{\\mu}_1}{\\boldsymbol{t}}^{\\boldsymbol{\\mu}_2}{\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}_1}{\\partial_{\\boldsymbol{t}}}^{\\boldsymbol{\\nu}_2}\\mid \na_{\\boldsymbol{\\mu}_1 \\boldsymbol{\\mu}_2 \\boldsymbol{\\nu}_1 \\boldsymbol{\\nu}_2}\\in {\\mathbb{C}}\\}\\\\\n&=&\\{P \\in D_Y \\mid P(I_{X\\times\\{0\\}} )^j \\subset (I_{X\\times\\{0\\}} )^{j+m} ~\\mathrm{for~~any}~ j \\ge 0\\},\n\\end{eqnarray*}\nwith the convention $I_{X\\times\\{0\\}}^{j}={\\mathbb{C}}[{\\boldsymbol{x}},{\\boldsymbol{t}}]$ for all $j\\le 0$. \n\\begin{Definition}\\label{Vfiltration}\nThe $V$-filtration along $X\\times\\{0\\}$ on a finitely generated left $D_Y$-module $M$ is an exhaustive decreasing filtration \n$\\{V^{{\\alpha}}M\\}_{\\alpha\\in \\Q}$ indexed by $\\Q$, such that: \n\\\\ {\\rm(i)} $V^{{\\alpha}}M$ are finitely generated $V^0D_Y$-submodules of $M$. \n\\\\ {\\rm(ii)} \n$\\{V^{{\\alpha}}M\\}_{\\alpha}$ is left-continuous and discrete, that is, \n$V^{\\alpha} M=\\bigcap_{{\\alpha}'<{\\alpha}}V^{{\\alpha}'}M$, and every interval contains only finitely many ${\\alpha}\\in \\Q$ \nwith $\\mathop{\\mathrm{Gr}}\\nolimits_V^{\\alpha} M\\neq 0$. \nHere $\\mathop{\\mathrm{Gr}}\\nolimits_V^{\\alpha} M := V^{{\\alpha}} M\/(\\bigcup_{{\\alpha}'>{\\alpha}} V^{{\\alpha}'}M)$. \n\\\\ {\\rm(iii)} \n$(V^iD_Y)(V ^{{\\alpha}}M) \\subset V^{{\\alpha}+i}M$ for any $i \\in {\\mathbb{Z}}$, ${{\\alpha}} \\in \\Q$.\n\\\\ {\\rm(iv)} \n$(V^iD_Y)(V ^{{\\alpha}}M) = V^{{\\alpha}+i}M$ for any $i > 0$ if ${{\\alpha}}\\gg 0$.\n\\\\ {\\rm(v)} \nthe action of $\\sigma+{\\alpha}$ is nilpotent on $\\mathop{\\mathrm{Gr}}\\nolimits_V^{\\alpha} M$. 
\n\\end{Definition}\n\\begin{Remark}\\label{b from V}\n(i) The filtration $V$ is unique if it exists (\\cite{Ka}), \nand the $D_Y$-submodule $D_Y \\prod_i f_i^{s_i}\\cong D_Y\/\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i}$ of $N_{{\\boldsymbol{f}}}$ has such a $V$-filtration (see \\cite{BMS}). \\\\\n(ii) For $z\\in {\\mathbb{C}}$ with $z\\neq {\\alpha}$, the action of $\\sigma+z$ on $\\mathop{\\mathrm{Gr}}\\nolimits_V^{\\alpha} M$ is invertible. \nHence, if $\\mathop{\\mathrm{Gr}}\\nolimits_V^{\\alpha} M\\neq 0$, $u \\not\\in V^{{\\alpha} +\\varepsilon}M$ and $b(\\sigma)u \\in V^{{\\alpha} +\\varepsilon}M$ \nfor some $b(s)\\in {\\mathbb{C}}[s]$ and all sufficiently small $\\varepsilon >0$, then $s+{\\alpha}$ is a factor of $b(s)$. \n\\end{Remark}\nLet $\\iota : X \\rightarrow Y$ be the graph embedding $x \\mapsto (x, f_1(x),\\dots, f_r(x))$ of ${\\boldsymbol{f}} =(f_1,\\dots,f_r)$, \nand $M_{{\\boldsymbol{f}}}=\\iota_+{\\mathbb{C}}[{\\boldsymbol{x}}]$, where $\\iota_+$ denotes the direct image for left $D$-modules. \nThere is a natural isomorphism $M_{{\\boldsymbol{f}}} \\cong {\\mathbb{C}}[{\\boldsymbol{x}}]\\otimes_{{\\mathbb{C}}}{\\mathbb{C}}[\\partial_{t_1},\\dots, \\partial_{t_r}]$ (see \\cite{Bo}), \nand the action of ${\\mathbb{C}}[{\\boldsymbol{x}}]$ and $\\partial_{t_1}, \\dots, \\partial_{t_r}$ on $M_{{\\boldsymbol{f}}}$ is given by the canonical one, \nand the actions of a vector field $\\xi$ on $X$ and of $t_j$ are given by\n\\begin{eqnarray*}\n\\xi (g \\otimes {\\partial_t}^{\\nu} ) &=& \\xi g \\otimes {\\partial_t}^{\\nu} -\\sum_j (\\xi f_j)g\\otimes \\partial_{t_j} {\\partial_t}^{{\\nu}},\\\\\nt_j(g\\otimes {\\partial_t}^{\\nu}) &=& f_jg \\otimes {\\partial_t}^{\\nu} - {\\nu}_j g \\otimes {\\partial_t}^{{\\nu}-1_j},\n\\end{eqnarray*}\nwhere $1_j$ is the element of ${\\mathbb{Z}}^r$ whose $i$-th component is $1$ if $i=j$ and $0$ otherwise. \n\\begin{Definition}[\\cite{BMS}]\nLet $M$ be a $D_Y$-module with $V$-filtration. 
\nFor $u\\in M$, the Bernstein-Sato polynomial $b_u(s)$ of $u$ is the monic minimal polynomial\nof the action of $\\sigma$ on $V^0D_Yu\/V^1D_Yu$. \n\\end{Definition}\nBy the properties of the $V$-filtration in Definition \\ref{Vfiltration}, \nthe induced filtration $V$ on $(V^0D_Y)u\/(V^1D_Y)u$ is finite (see \\cite{BMS}, Section 2.1). \nThis guarantees the existence of $b_{u}(s)$. \nIf $u\\in V^{\\alpha} M$, then $V^0D_Yu \\subset V^{\\alpha} M$ and $V^1D_Yu \\subset V^{{\\alpha}+1} M$. \nHence, if we set ${\\alpha}_0=\\mathop{\\mathrm{max}}\\nolimits\\{{\\alpha} \\mid u \\in V^{\\alpha} M\\}$, \nthen $u\\not\\in V^{{\\alpha}_0+\\varepsilon}M$ and $b_u(\\sigma)u\\in V^1D_Yu \\subset V^{{\\alpha}_0+1} M\\subset V^{{\\alpha}_0+\\varepsilon}M$ \nfor sufficiently small $\\varepsilon >0$. Hence \n\\[\n\\mathop{\\mathrm{max}}\\nolimits\\{{\\alpha} \\mid u \\in V^{\\alpha} M\\} = \\min\\{{\\alpha} \\mid \\mathop{\\mathrm{Gr}}\\nolimits_V^{\\alpha}((V^0D_Y)u) \\neq 0\\} \n= \\min\\{{\\alpha} \\mid b_u(-{\\alpha}) = 0\\}. \n\\]\nTherefore we conclude the next proposition. \n\\begin{Proposition}[\\cite{Sab}]\nLet $M$ be a $D_Y$-module with $V$-filtration. \nThen \n\\[\nV^{\\alpha} M = \\{ u \\in M \\mid {\\alpha}\\le {\\alpha}' \\mbox{~~if~~} b_u(-{\\alpha}') = 0 \\}.\n\\]\n\\end{Proposition}\nSince we have a canonical injection $M_{{\\boldsymbol{f}}} \\rightarrow N_{{\\boldsymbol{f}}}={\\mathbb{C}}[{\\boldsymbol{x}}][\\prod_i f_i^{-1},s_1,\\dots,s_r]\\prod_i f_i^{s_i}$ \nthat sends $g\\otimes {\\partial_t}^{\\nu}$ to $g{\\partial_t}^{\\nu}\\prod_i f_i^{s_i}$, \nthe generalized Bernstein-Sato polynomial $b_{{\\mathfrak{a}},g}(s)$ coincides with $b_u(s)$ where $u=g\\otimes 1 \\in M_{{\\boldsymbol{f}}}$ \n(see Observation \\ref{obs} in the next section). \n\\subsection{Multiplier ideals}\nWe will recall the relations between generalized Bernstein-Sato polynomials and multiplier ideals following \\cite{BMS}. 
\nThe reader is referred to \\cite{L} for general properties of multiplier ideals. \nFor a positive rational number $c$, the multiplier ideal $\\J({\\mathfrak{a}}^c)$ is defined via a log resolution of ${\\mathfrak{a}}$. \nLet $\\pi: \\tilde{X}\\to X=\\mathop{\\mathrm{Spec}}\\nolimits {\\mathbb{C}}[{\\boldsymbol{x}}]$ be a log resolution of ${\\mathfrak{a}}$, \nnamely, $\\pi$ is a proper birational morphism, $\\tilde{X}$ is smooth, \nand there exists an effective divisor $F$ on $\\tilde{X}$ such that ${\\mathfrak{a}}{\\mathcal{O}}_{\\tilde{X}}={\\mathcal{O}}_{\\tilde{X}}(-F)$ \nand the union of the support of $F$ and the exceptional divisor of $\\pi$ has simple normal crossings. \nFor a given real number $c\\ge 0$, the multiplier ideal $\\J({\\mathfrak{a}}^c)$ associated to $c$ is defined to be the ideal \n\\[\n\\J({\\mathfrak{a}}^c)=H^0(\\tilde{X},{\\mathcal{O}}_{\\tilde{X}}( K_{\\tilde{X}\/X}-\\lfloor cF \\rfloor))\n\\]\nwhere $K_{\\tilde{X}\/X}=K_{\\tilde{X}}-\\pi^*K_{X}$ is the relative canonical divisor of $\\pi$. \nThis definition is independent of the choice of a log resolution $\\pi:\\tilde{X}\\to X$. \nBy the definition, if $c<c'$, then $\\J({\\mathfrak{a}}^{c'})\\subset \\J({\\mathfrak{a}}^{c})$ for any $c,c'>0$. \nThe multiplier ideals give a decreasing filtration on ${\\mathcal{O}}_X$, \nand there are rational numbers $0 =c_0< c_1 < c_2 < \\cdots$ such that \n$\\J ({\\mathfrak{a}}^{c_j}) = \\J({\\mathfrak{a}}^c) \\neq \\J({\\mathfrak{a}}^{c_{j+1}})$ for $c_j \\leq c< c_{j+1}$. \nThese $c_j$ for $j > 0$ are called the {\\it jumping coefficients}, \nand the minimal jumping coefficient $c_1$ is called the {\\it log-canonical threshold} of ${\\mathfrak{a}}$ and denoted by $\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}})$. \nBy the definition, it follows that multiplier ideals are integrally closed, \nand the multiplier ideal associated to the log canonical threshold is radical. 
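\nAs a simple illustration, take ${\\mathfrak{a}}=\\langle x^k\\rangle\\subset {\\mathbb{C}}[x]$. The identity map is already a log resolution with $F=kV(x)$ and $K_{\\tilde{X}\/X}=0$, so \n\\[\n\\J({\\mathfrak{a}}^c)=\\langle x^{\\lfloor kc \\rfloor}\\rangle. \n\\]\nHence the jumping coefficients are $j\/k$ for $j\\ge 1$ and $\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}})=1\/k$; the jumping coefficients in $[1\/k, 1\/k+1)$ are exactly $1\/k, 2\/k, \\dots, 1$, the roots of $b_{x^k}(-s)$, since $b_{x^k}(s)=\\prod_{j=1}^{k}(s+j\/k)$. 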
\nIt is known that $\\J({\\mathfrak{a}}^{c})={\\mathfrak{a}}\\J({\\mathfrak{a}}^{c-1})$ for $c\\ge \\lambda({\\mathfrak{a}})$ where $\\lambda({\\mathfrak{a}})$ is the analytic spread of ${\\mathfrak{a}}$. \nRecall that the analytic spread of ${\\mathfrak{a}}$ is the minimal number of elements needed to generate ${\\mathfrak{a}}$ up to integral closure, \nand thus $\\mu({\\mathfrak{a}})\\ge\\lambda({\\mathfrak{a}})$ where $\\mu({\\mathfrak{a}})$ is the minimal number of generators of ${\\mathfrak{a}}$. \nIn particular, if ${\\mathfrak{a}}$ is a principal ideal generated by $f$, then $\\J(f^c)=f\\J(f^{c-1})$ for $c\\ge 1$. \n\nBudur--Musta\\c{t}\\u{a}--Saito proved that the $V$-filtration on ${\\mathbb{C}}[{\\boldsymbol{x}}]$ is essentially equivalent to the filtration by multiplier ideals \nusing the theory of mixed Hodge modules (\\cite{Sa1}, \\cite{Sa2}), \nand gave a description of multiplier ideals in terms of generalized Bernstein-Sato polynomials. \n\\begin{Theorem}[\\cite{BMS}]\\label{multiplier ideals and b-functions}\nWe denote by $V$ the filtration on ${\\mathbb{C}}[{\\boldsymbol{x}}]\\cong {\\mathbb{C}}[{\\boldsymbol{x}}]\\otimes 1$ induced by the $V$-filtration on $\\iota_+ {\\mathbb{C}}[{\\boldsymbol{x}}]$. \nThen $\\J({\\mathfrak{a}}^c) = V^{c+\\varepsilon}{\\mathbb{C}}[{\\boldsymbol{x}}]$ and $V^{\\alpha} {\\mathbb{C}}[{\\boldsymbol{x}}] = \\J ({\\mathfrak{a}}^{{\\alpha}-\\varepsilon})$ \nfor any ${\\alpha}\\in \\Q$ and $0<\\varepsilon \\ll 1$. \nTherefore the following hold: \\\\\n{\\rm (i)} For a given rational number $c\\ge 0$ and a prime ideal ${\\mathfrak{p}}\\subset {\\mathbb{C}}[{\\boldsymbol{x}}]$, \n\\begin{eqnarray*}\n\\J({\\mathfrak{a}}^c) &=& \\{g\\in {\\mathbb{C}}[{\\boldsymbol{x}}] \\mid c < c' \\mbox{~~if~~} b_{{\\mathfrak{a}},g}(-c') = 0\\}, \\\\\n\\J({\\mathfrak{a}}^c)_{\\mathfrak{p}} \\cap {\\mathbb{C}}[{\\boldsymbol{x}}] &=& \\{g\\in {\\mathbb{C}}[{\\boldsymbol{x}}] \\mid c < c' \\mbox{~~if~~} b^{\\mathfrak{p}}_{{\\mathfrak{a}},g}(-c') = 0\\}. 
\n\\end{eqnarray*}\nIn particular, the log canonical threshold $\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}})$ of ${\\mathfrak{a}}=\\langle f_1,\\dots,f_r\\rangle$ is the minimal root of $b_{\\mathfrak{a}}(-s)$. \\\\\n{\\rm (ii)} All jumping coefficients of ${\\mathfrak{a}}$ in $[\\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}}), \\mathop{\\mathrm{lct}}\\nolimits({\\mathfrak{a}}) + 1)$ are roots of $b_{\\mathfrak{a}}(-s)$. \n\\end{Theorem}\nTherefore an algorithm for computing generalized Bernstein-Sato polynomials induces an algorithm for \nsolving the membership problem for multiplier ideals, and in particular, an algorithm for computing log canonical thresholds. \n\\section{Algorithms for computing generalized Bernstein-Sato polynomials}\nIn this section, we obtain an algorithm for computing generalized Bernstein-Sato polynomials of arbitrary ideals. \nAlgorithms for computing classical Bernstein-Sato polynomials were given by Oaku (see \\cite{O1}, \\cite{O2}, \\cite{O3}). \nWe will generalize Oaku's algorithm to an arbitrary ideal ${\\mathfrak{a}}$ and an arbitrary $g$. \n\nLet ${\\mathbb{C}}[{\\boldsymbol{x}}]={\\mathbb{C}}[x_1,\\dots,x_n]$ be a polynomial ring over ${\\mathbb{C}}$, \nand ${\\mathfrak{a}}$ an ideal with a system of generators ${\\boldsymbol{f}} =(f_1,\\dots,f_r)$, \nand we fix the weight vector $({\\boldsymbol{w}},-{\\boldsymbol{w}})\\in {\\mathbb{Z}}^{n+r}\\times{\\mathbb{Z}}^{n+r}$, ${\\boldsymbol{w}}=((0,\\dots,0),(1,\\dots,1))\\in {\\mathbb{Z}}^{n}\\times{\\mathbb{Z}}^{r}$. \n\\begin{Observation}\\label{obs}\nWe rewrite the definition of generalized Bernstein-Sato polynomials in several ways. 
\n\nRecall that $b_{{\\mathfrak{a}},g}(s)$ is the monic polynomial of minimal degree satisfying \n\\[\nb(\\sigma)g\\prod_i f_i^{s_i}=\\sum_{j=1}^{r}P_jgf_j\\prod_i f_i^{s_i}\n\\]\nfor some $P_j\\in D_X\\langle -\\partial_{t_j}t_k \\mid 1\\le j,k \\le r\\rangle$ (Definition \\ref{def of b-functions}, (\\ref{exp1})). \nSince \n\\begin{eqnarray*}\n[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0&=&\\{\\sum_{|\\boldsymbol{\\nu}_2|-|\\boldsymbol{\\mu}_2|=0} \na_{\\boldsymbol{\\mu}_1 \\boldsymbol{\\mu}_2 \\boldsymbol{\\nu}_1 \\boldsymbol{\\nu}_2}{\\boldsymbol{x}}^{\\boldsymbol{\\mu}_1}{\\boldsymbol{t}}^{\\boldsymbol{\\mu}_2}{\\partial_{\\boldsymbol{x}}}^{\\boldsymbol{\\nu}_1}{\\partial_{\\boldsymbol{t}}}^{\\boldsymbol{\\nu}_2}\n\\mid a_{\\boldsymbol{\\mu}_1 \\boldsymbol{\\mu}_2 \\boldsymbol{\\nu}_1 \\boldsymbol{\\nu}_2}\\in {\\mathbb{C}}\\}\\\\\n&=& D_X\\langle -\\partial_{t_j}t_k \\mid 1\\le j,k \\le r\\rangle, \n\\end{eqnarray*} \nand $\\sigma =-\\sum \\partial_{t_i}t_i$ is a homogeneous element of degree $0$,\nthe condition (\\ref{exp1}) is equivalent to saying that \n\\begin{eqnarray*}\n&&(b(\\sigma)g-\\sum_{j=1}^{r}P_jgf_j)\\prod_i f_i^{s_i}=0 \\mbox{\\quad for~~} {}^\\exists P_j\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0 \\\\\n&\\Longleftrightarrow&\nb(\\sigma)g-\\sum_{j=1}^{r}P_jgf_j\\in \\mathop{\\mathrm{Ann}}\\nolimits_{[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0}\\prod_i f_i^{s_i} \\mbox{\\quad for~~} {}^\\exists P_j\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0 \n\\nonumber\\\\\n&\\Longleftrightarrow&\nb(\\sigma)g\\in \\mathop{\\mathrm{Ann}}\\nolimits_{[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0}\\prod_i f_i^{s_i}+[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0 g{\\mathfrak{a}}. 
\n\\end{eqnarray*}\nSince $(I+J)\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0=I\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0+J\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$ for homogeneous ideals $I$ and $J$, \nwe have \n\\begin{eqnarray*}\n&&\\mathop{\\mathrm{Ann}}\\nolimits_{[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0}\\prod_i f_i^{s_i}+[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0 g{\\mathfrak{a}}\\\\\n=&& (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0+(D_Y g{\\mathfrak{a}})\\cap[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0 \\\\\n=&&((\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}})\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0. \n\\end{eqnarray*}\nHence the condition (\\ref{exp1}) is equivalent to saying that \n\\begin{equation}\\label{exp2}\nb(\\sigma)g\\in (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}}. \n\\end{equation}\nSince $t_j\\prod_i f_i^{s_i}=f_j\\prod_i f_i^{s_i}$, the condition (\\ref{exp1}) is also equivalent to saying that \n\\begin{eqnarray}\n\\label{exp3}&&b(\\sigma)g\\prod_i f_i^{s_i}\\in (V^{1}{D_Y})g\\prod_i f_i^{s_i}=(F^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_{-1}{D_Y})g\\prod_i f_i^{s_i} \\\\\n\\label{exp4}\\Longleftrightarrow && b(\\sigma) \\in \\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{w}},-{\\boldsymbol{w}})}(\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y} g\\prod_i f_i^{s_i}) \\\\\n\\label{exp5}\\Longleftrightarrow && b(\\sigma)g \\in \\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{w}},-{\\boldsymbol{w}})}((\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y} \\prod_i f_i^{s_i})\\cap D_Yg). \n\\end{eqnarray}\nBy the expression (\\ref{exp3}), the generalized Bernstein-Sato polynomial $b_{{\\mathfrak{a}},g}(s)$ coincides with $b_u(s)$ where $u=g\\otimes 1 \\in M_{{\\boldsymbol{f}}}$. 
\n\\end{Observation}\nBy (\\ref{exp4}), the polynomial $b_{{\\mathfrak{a}},g}(-s-r)$ coincides with the $b$-function for $\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}g\\prod_i f_i^{s_i}$ \nwith the weight vector $({\\boldsymbol{w}},-{\\boldsymbol{w}})$ in \\cite{SST}, p.194. \nIn the case $g=1$, one can compute $b_{\\mathfrak{a}}(s)$ using loc. cit., p.196, Algorithm 5.1.6 by the next lemma. \n\\begin{Lemma}\\label{annfs}\n\\[\n\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y} \\prod_i f_i^{s_i}=\\langle t_i-f_i\\mid 1\\le i \\le r\\rangle + \\langle \\partial_{x_j}\n+\\sum_{i=1}^{r} \\partial_{x_j}(f_i)\\partial_{t_i}\\mid 1\\le j \\le n\\rangle. \n\\]\n\\end{Lemma}\n\\begin{proof}\nOne can prove the assertion similarly to the case $r=1$. See \\cite{SST} Lemma 5.3.3. \n\\end{proof}\nBy this lemma and Lemma \\ref{homogeneous part}, the homogeneous left ideals \n\\[\n(\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}},\\mbox{~~and~~} \\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{w}},-{\\boldsymbol{w}})}((\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y} \\prod_i f_i^{s_i})\\cap D_Yg),\n\\] \nin (\\ref{exp2}) and (\\ref{exp5}) are computable. \nTherefore we can calculate generalized Bernstein-Sato polynomials if we obtain an algorithm for computing the ideal \n$\\{b({\\boldsymbol{x}},s)\\in {\\mathbb{C}}[{\\boldsymbol{x}},s]\\mid b({\\boldsymbol{x}},\\sigma)\\in J \\}\\cong J \\cap {\\mathbb{C}}[{\\boldsymbol{x}},\\sigma]$ for a given homogeneous ideal $J\\subset D_Y$. \nOne can compute this in the same way as \\cite{SST} Algorithm 5.1.6. \nThe algorithm calculates $J'=J\\cap {\\mathbb{C}}[{\\boldsymbol{x}},\\sigma_1,\\dots,\\sigma_r]$ first where $\\sigma_i=-\\partial_{t_i}t_i$, \nthen computes $J' \\cap {\\mathbb{C}}[{\\boldsymbol{x}},\\sigma]$. This algorithm requires $2r$ new variables. \nWe will give an algorithm for computing $J \\cap {\\mathbb{C}}[{\\boldsymbol{x}},\\sigma]$ without computing $J'$. \n\\begin{Lemma}\\label{substitution}\nLet $J$ be a homogeneous left ideal of $D_Y$. 
The following hold: \\\\\n{\\rm(i)} $\\sigma $ is in the center of $[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$.\\\\\n{\\rm(ii)} $D_Y[s](J+\\langle s-\\sigma \\rangle)\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0=J\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$.\\\\\n{\\rm(iii)} $\\{b({\\boldsymbol{x}},s)\\in {\\mathbb{C}}[{\\boldsymbol{x}},s]\\mid b({\\boldsymbol{x}},\\sigma)\\in J \\}=D_Y[s](J+\\langle s-\\sigma \\rangle)\\cap {\\mathbb{C}}[{\\boldsymbol{x}},s]$. \n\\end{Lemma}\n\\begin{proof}\n(i) \nSince the ring $[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$ is generated by $\\partial_{t_j}{t_k}$, $1\\le j,k\\le r$, over $D_X$, \nand $\\sigma $ commutes with any element of $D_X$, \nit is enough to show that $\\sigma (\\partial_{t_j}{t_k})=(\\partial_{t_j}{t_k})\\sigma$ for all $1\\le j,k\\le r$. \n\nIn the case $j\\neq k$, we obtain \n\\begin{eqnarray*}\n(\\partial_{t_j}{t_k})(\\partial_{t_j}{t_j})&=&\\partial_{t_j}^2t_jt_k,\\quad \\quad ~~~\n(\\partial_{t_j}{t_j})(\\partial_{t_j}{t_k}) = \\partial_{t_j}^2t_jt_k-\\partial_{t_j}t_k,\\\\\n(\\partial_{t_j}{t_k})(\\partial_{t_k}{t_k})&=&\\partial_{t_j}\\partial_{t_k}t_k^2-\\partial_{t_j}t_k,~~\n(\\partial_{t_k}{t_k})(\\partial_{t_j}{t_k}) = \\partial_{t_j}\\partial_{t_k}t_k^2,\n\\end{eqnarray*}\nthus $(\\partial_{t_j}{t_k})(\\partial_{t_j}{t_j}+\\partial_{t_k}{t_k})=(\\partial_{t_j}{t_j}+\\partial_{t_k}{t_k})(\\partial_{t_j}{t_k})$. \nHence \n\\begin{eqnarray*}\n(\\partial_{t_j}{t_k})\\sigma\n&=&-(\\partial_{t_j}{t_k})(\\partial_{t_j}{t_j}+\\partial_{t_k}{t_k}+\\sum_{\\ell\\neq j,k}\\partial_{t_\\ell}{t_\\ell})\\\\\n&=&-(\\partial_{t_j}{t_j}+\\partial_{t_k}{t_k})(\\partial_{t_j}{t_k})-(\\sum_{\\ell\\neq j,k}\\partial_{t_\\ell}{t_\\ell})(\\partial_{t_j}{t_k})\n=\\sigma(\\partial_{t_j}{t_k}).\n\\end{eqnarray*}\nIn the case $j=k$, it is obvious that $(\\partial_{t_j}{t_j})\\sigma=\\sigma(\\partial_{t_j}{t_j})$. 
\nTherefore $\\sigma$ is in the center of $[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$. \\\\\n(ii) The inclusion $D_Y[s](J+\\langle s-\\sigma \\rangle)\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0\\supset J\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$ is trivial. \nWe will show the converse inclusion. Let \n\\[\nh=\\sum P_{\\ell}s^{\\ell}+Q(s)(s-\\sigma)\\in D_Y[s](J+\\langle s-\\sigma \\rangle)\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0\n\\]\nwhere $P_{\\ell}\\in J$ and $Q(s) \\in D_Y[s]$. \nTaking the degree zero part, we may assume $P_{\\ell}\\in J\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$ and $Q(s)\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0[s]$. \nAs $\\sum P_{\\ell}s^{\\ell}-\\sum P_{\\ell}\\sigma^{\\ell}\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0[s](s-\\sigma)$, \nthere exists $Q'(s)\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0[s]$ such that \n$h=\\sum P_{\\ell}\\sigma^{\\ell}+Q'(s)(s-\\sigma)$. \nSince $h\\in D_Y$, we have $Q'(s)=0$. Therefore $h=\\sum P_{\\ell}\\sigma^{\\ell}=\\sum \\sigma^{\\ell}P_{\\ell}\\in J\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$. \\\\\n(iii) Let $b({\\boldsymbol{x}},\\sigma)\\in J$.\nSince $b({\\boldsymbol{x}},s)-b({\\boldsymbol{x}},\\sigma)\\in \\langle s-\\sigma \\rangle$, we have \n\\[\nb({\\boldsymbol{x}},s)=b({\\boldsymbol{x}},\\sigma)+(b({\\boldsymbol{x}},s)-b({\\boldsymbol{x}},\\sigma))\\in D_Y[s](J+\\langle s-\\sigma \\rangle). \n\\] \nConversely, if $b({\\boldsymbol{x}},s)\\in D_Y[s](J+\\langle s-\\sigma \\rangle)\\cap {\\mathbb{C}}[{\\boldsymbol{x}},s]$, then \n\\[\nb({\\boldsymbol{x}},\\sigma)=b({\\boldsymbol{x}},s)-(b({\\boldsymbol{x}},s)-b({\\boldsymbol{x}},\\sigma))\\in D_Y[s](J+\\langle s-\\sigma \\rangle). 
\n\\]\nSince $b({\\boldsymbol{x}},\\sigma)\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$, and by (ii), we conclude \n\\[\nb({\\boldsymbol{x}},\\sigma)\\in D_Y[s](J+\\langle s-\\sigma \\rangle)\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0=J\\cap [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0\\subset J. \n\\]\n\\end{proof}\n\\begin{Theorem}[Algorithm for global generalized Bernstein-Sato polynomials 1]\\label{Algorithm for global b-functions 1}\nLet \n\\[\nI_{{\\boldsymbol{f}}}=\\langle t_iu_1-f_i\\mid 1\\le i \\le r\\rangle + \\langle u_1\\partial_{x_j}\n+\\sum_{i=1}^{r} \\partial_{x_j}(f_i)\\partial_{t_i}\\mid 1\\le j \\le n\\rangle+\\langle u_1u_2-1\\rangle\n\\]\nbe a left ideal of $D_Y[u_1,u_2]$, and compute the following ideals: \\\\\n1. $I_{{\\boldsymbol{f}},1}=I_{{\\boldsymbol{f}}}\\cap D_Y$, \\\\\n2. $I_{({\\boldsymbol{f}};g),2}=D_Y[s](I_{{\\boldsymbol{f}},1}+g{\\mathfrak{a}}+\\langle s-\\sigma \\rangle)\\cap {\\mathbb{C}}[{\\boldsymbol{x}},s]$, \\\\\n3. $I_{({\\boldsymbol{f}};g),3}=I_{({\\boldsymbol{f}};g),2}:g=(I_{({\\boldsymbol{f}};g),2}\\cap \\langle g \\rangle)\\cdot g^{-1}$, \\\\\n4. $I_{({\\boldsymbol{f}};g),4}=I_{({\\boldsymbol{f}};g),3}\\cap {\\mathbb{C}}[s]$. \\\\\nThen $b_{{\\mathfrak{a}},g}(s)$ is the generator of $I_{({\\boldsymbol{f}};g),4}$. \n\\end{Theorem}\n\\begin{proof}\nBy Lemma \\ref{annfs} and Lemma \\ref{homogeneous part}, $I_{{\\boldsymbol{f}},1}= (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*$. \nAs $I_{{\\boldsymbol{f}},1}+D_Y g{\\mathfrak{a}}$ is a homogeneous ideal, we have \n\\[\nI_{({\\boldsymbol{f}};g),2}=\\{b({\\boldsymbol{x}},s)\\in {\\mathbb{C}}[{\\boldsymbol{x}},s]\\mid b({\\boldsymbol{x}},\\sigma)\\in (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}}\\}\n\\]\nby Lemma \\ref{substitution}. 
Since $b_{{\\mathfrak{a}},g}(s)$ is the minimal generator of the ideal \n\\[\n\\{b(s)\\in {\\mathbb{C}}[s]\\mid b(\\sigma)g({\\boldsymbol{x}})\\in (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}}\\}, \n\\]\nit follows that $I_{({\\boldsymbol{f}};g),4}=(I_{({\\boldsymbol{f}};g),2}:g)\\cap {\\mathbb{C}}[s]=\\langle b_{{\\mathfrak{a}},g}(s)\\rangle$. \n\\end{proof}\n\\begin{Theorem}[Algorithm for global generalized Bernstein-Sato polynomials 2]\\label{Algorithm for global b-functions 2}\nLet \n\\[\n\\tilde{I}_{{\\boldsymbol{f}}}=\\langle t_i-f_i\\mid 1\\le i \\le r\\rangle + \\langle \\partial_{x_j}\n+\\sum_{i=1}^{r} \\partial_{x_j}(f_i)\\partial_{t_i}\\mid 1\\le j \\le n\\rangle\\subset D_Y, \n\\]\nand compute the following ideals: \\\\\n0. $\\tilde{I}_{({\\boldsymbol{f}};g),0}=\\tilde{I}_{{\\boldsymbol{f}}}\\cap D_Yg=D_Y[u](u\\tilde{I}_{{\\boldsymbol{f}}}+(1-u)g)\\cap D_Y$, \\\\\n1. $\\tilde{I}_{({\\boldsymbol{f}};g),1}=\\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{w}},-{\\boldsymbol{w}})}(\\tilde{I}_{({\\boldsymbol{f}};g),0})$, \\\\\n2. $\\tilde{I}_{({\\boldsymbol{f}};g),2}=D_Y[s](\\tilde{I}_{({\\boldsymbol{f}};g),1}+\\langle s-\\sigma \\rangle)\\cap {\\mathbb{C}}[{\\boldsymbol{x}},s]$, \\\\\n3. $\\tilde{I}_{({\\boldsymbol{f}};g),3}=\\tilde{I}_{({\\boldsymbol{f}};g),2}:g=\\tilde{I}_{({\\boldsymbol{f}};g),2}\\cdot g^{-1}$, \\\\\n4. $\\tilde{I}_{({\\boldsymbol{f}};g),4}=\\tilde{I}_{({\\boldsymbol{f}};g),3}\\cap {\\mathbb{C}}[s]$. \\\\\nThen $b_{{\\mathfrak{a}},g}(s)$ is the generator of $\\tilde{I}_{({\\boldsymbol{f}};g),4}$. \n\\end{Theorem}\n\\begin{proof}\nBy Lemma \\ref{annfs} and Lemma \\ref{intersection}, $\\tilde{I}_{({\\boldsymbol{f}};g),0}= (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})\\cap D_Yg$. 
\nAs $\\tilde{I}_{({\\boldsymbol{f}};g),1}$ is a homogeneous ideal, we have \n\\[\n\\tilde{I}_{({\\boldsymbol{f}};g),2}=\\{b({\\boldsymbol{x}},s)\\in {\\mathbb{C}}[{\\boldsymbol{x}},s]\\mid b({\\boldsymbol{x}},\\sigma)\\in \\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{w}},-{\\boldsymbol{w}})}((\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})\\cap D_Y g) \\}\n\\]\nby Lemma \\ref{substitution}. \nSince $\\tilde{I}_{({\\boldsymbol{f}};g),0}\\subset D_Yg$, we have $\\tilde{I}_{({\\boldsymbol{f}};g),1}\\subset \\mathop{\\mathrm{in}}\\nolimits_{({\\boldsymbol{w}},-{\\boldsymbol{w}})}(D_Yg)=D_Yg$. \nHence $\\tilde{I}_{({\\boldsymbol{f}};g),2}\\subset (D_Yg +\\langle s-\\sigma \\rangle)\\cap {\\mathbb{C}}[{\\boldsymbol{x}},s]= {\\mathbb{C}}[{\\boldsymbol{x}},s]g$, \nand $\\tilde{I}_{({\\boldsymbol{f}};g),2}:g=\\tilde{I}_{({\\boldsymbol{f}};g),2}\\cdot g^{-1}$. \nSince $b_{{\\mathfrak{a}},g}(s)$ is the minimal generator of the ideal \n\\[\n\\{b(s)\\in {\\mathbb{C}}[s]\\mid b(\\sigma)g({\\boldsymbol{x}})\\in (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}}\\}, \n\\]\nit follows that $\\tilde{I}_{({\\boldsymbol{f}};g),4}=(\\tilde{I}_{({\\boldsymbol{f}};g),2}:g)\\cap {\\mathbb{C}}[s]=\\langle b_{{\\mathfrak{a}},g}(s)\\rangle$. \n\\end{proof}\n\\begin{Remark}\\label{same algorithm}\nNote that \n\\[\nI_{({\\boldsymbol{f}};g),3}=\\tilde{I}_{({\\boldsymbol{f}};g),3}=\\{b({\\boldsymbol{x}},s)\\in {\\mathbb{C}}[{\\boldsymbol{x}},s]\\mid b({\\boldsymbol{x}},\\sigma)g({\\boldsymbol{x}})\\prod_i f_i^{s_i}\\in (V^1D_Y) g({\\boldsymbol{x}})\\prod_i f_i^{s_i}\\}, \n\\]\nand in the case $g=1$, it follows that \n\\[\nI_{({\\boldsymbol{f}};1),2}=\\tilde{I}_{({\\boldsymbol{f}};1),2}=\\{b({\\boldsymbol{x}},s)\\in {\\mathbb{C}}[{\\boldsymbol{x}},s]\\mid b({\\boldsymbol{x}},\\sigma)\\prod_i f_i^{s_i} \\in (V^1D_Y) \\prod_i f_i^{s_i}\\}. 
\n\\]\n\\end{Remark}\nWe can compute local generalized Bernstein-Sato polynomials similarly to the classical case using primary decompositions \n(see \\cite{O1}, \\cite{O2}, and \\cite{O3}). \n\\begin{Theorem}[Algorithm for local generalized Bernstein-Sato polynomials]\\label{Algorithm for local b-functions}\nLet $I_{({\\boldsymbol{f}};g),3}=\\tilde{I}_{({\\boldsymbol{f}};g),3}$ be the ideal in Theorem \\ref{Algorithm for global b-functions 1} \nand Theorem \\ref{Algorithm for global b-functions 2} \nwith a primary decomposition $I_{({\\boldsymbol{f}};g),3}=\\bigcap_{i=1}^{\\ell} {\\mathfrak{q}}_i$. \nThen $b^{\\mathfrak{p}}_{{\\mathfrak{a}},g}(s)$ is the generator of the ideal\n\\[\n\\Bigl(\\bigcap_{{\\mathfrak{q}}_i \\cap {\\mathbb{C}}[{\\boldsymbol{x}}] \\subset {\\mathfrak{p}}}{\\mathfrak{q}}_i\\Bigr)\\cap {\\mathbb{C}}[s].\n\\]\n\\end{Theorem}\n\\begin{proof}\nWe set $R={\\mathbb{C}}[{\\boldsymbol{x}}]$. \nBy the definition, $b^{\\mathfrak{p}}_{{\\mathfrak{a}},g}(s)$ is the generator of the ideal \n\\[\n\\{b(s)\\in {\\mathbb{C}}[s]\\mid b(\\sigma)g({\\boldsymbol{x}})h({\\boldsymbol{x}})\\in (\\mathop{\\mathrm{Ann}}\\nolimits_{D_Y}\\prod_i f_i^{s_i})^*+D_Y g{\\mathfrak{a}} \\mbox{\\quad for~~} {}^\\exists h\\not\\in {\\mathfrak{p}} \\}. \n\\]\nThis ideal is equal to \n\\begin{eqnarray*}\n&&\\{b(s)\\in {\\mathbb{C}}[s]\\mid b(s)g({\\boldsymbol{x}})h({\\boldsymbol{x}})\\in I_{({\\boldsymbol{f}};g),2} \\mbox{\\quad for~~} {}^\\exists h\\not\\in {\\mathfrak{p}} \\}\\\\\n&=&\\{b(s)\\in {\\mathbb{C}}[s]\\mid b(s)h({\\boldsymbol{x}})\\in I_{({\\boldsymbol{f}};g),3} \\mbox{\\quad for~~} {}^\\exists h\\not\\in {\\mathfrak{p}} \\}\\\\\n&=&(I_{({\\boldsymbol{f}};g),3}\\otimes_{R} R_{\\mathfrak{p}} )\\cap {\\mathbb{C}}[s]. 
\n\\end{eqnarray*} \nSince $I_{({\\boldsymbol{f}};g),3}\\otimes_{R} R_{\\mathfrak{p}} =\\bigcap_{{\\mathfrak{q}}_i \\cap R\\subset {\\mathfrak{p}}}{\\mathfrak{q}}_i\\otimes_{R} R_{\\mathfrak{p}}$, \nand ${\\mathfrak{q}}_i\\otimes_{R} R_{\\mathfrak{p}}=R_{\\mathfrak{p}}[s]$ if and only if ${\\mathfrak{q}}_i \\cap R \\not\\subset {\\mathfrak{p}}$, \nit follows that $b^{\\mathfrak{p}}_{{\\mathfrak{a}},g}(s)$ is the generator of the ideal \n$(\\bigcap_{{\\mathfrak{q}}_i \\cap R \\subset {\\mathfrak{p}}}{\\mathfrak{q}}_i)\\cap {\\mathbb{C}}[s]$. \n\\end{proof}\nThe algorithm for computing generalized Bernstein-Sato polynomials enables us to solve the membership problem for multiplier ideals, \nbut does not give a system of generators of multiplier ideals. \nWe would have to compute $b_{{\\mathfrak{a}},g}(s)$ for infinitely many $g$ to obtain a system of generators. \n\\section{Algorithms for computing multiplier ideals}\nThe purpose of this section is to obtain algorithms for computing a system of generators of multiplier ideals. \nTo do this, we modify the definition of Budur--Musta\\c{t}\\v{a}--Saito's Bernstein-Sato polynomial. \n\nAs in the previous section, let ${\\mathbb{C}}[{\\boldsymbol{x}}]={\\mathbb{C}}[x_1,\\dots,x_n]$ be a polynomial ring over ${\\mathbb{C}}$, \nand ${\\mathfrak{a}}$ an ideal with a system of generators ${\\boldsymbol{f}} =(f_1,\\dots,f_r)$, \nand fix the weight vector $({\\boldsymbol{w}},-{\\boldsymbol{w}})\\in {\\mathbb{Z}}^{n+r}\\times{\\mathbb{Z}}^{n+r}$, ${\\boldsymbol{w}}=((0,\\dots,0),(1,\\dots,1))\\in {\\mathbb{Z}}^{n}\\times{\\mathbb{Z}}^{r}$. \nWe set $\\delta=1\\otimes 1\\in M_{{\\boldsymbol{f}}}=\\iota_+ {\\mathbb{C}}[{\\boldsymbol{x}}]$ and $\\overline{M}_{{\\boldsymbol{f}}}^{(m)}=(V^0D_Y) \\delta\/(V^m D_Y) \\delta$. \nThe induced filtration $V$ on $\\overline{M}_{{\\boldsymbol{f}}}^{(m)}$ is finite by the definition of the $V$-filtration \n(Definition \\ref{Vfiltration}) as in the case $m=1$. 
\nFor $g\\in {\\mathbb{C}}[{\\boldsymbol{x}}]$, we denote by $\\overline{g\\otimes 1}$ the image of $g\\otimes 1=g\\delta$ in $\\overline{M}_{{\\boldsymbol{f}}}^{(m)}$. \n\\begin{Definition}\\label{new Bernstein-Sato polynomial}\nWe define $b_{{\\mathfrak{a}},g}^{(m)}(s)$ to be the monic minimal polynomial of the action of $\\sigma$ on \n$(V^0D_Y)\\overline{g\\otimes 1}\\subset \\overline{M}_{{\\boldsymbol{f}}}^{(m)}$. \nWe define $b^{(m)}_{{\\mathfrak{a}}}=b^{(m)}_{{{\\mathfrak{a}},1}}$. \n\\end{Definition}\nThe existence of $b_{{\\mathfrak{a}},g}^{(m)}(s)$ follows from the finiteness of the filtration $V$ on $\\overline{M}_{{\\boldsymbol{f}}}^{(m)}$, \nand the rationality of its roots follows from the rationality of the $V$-filtration. \nNote that $b_{{\\mathfrak{a}},g}^{(m)}(s)=1$ if and only if $g\\otimes 1 \\in (V^m D_Y) \\delta$.\n\\begin{Observation}\\label{on new b-function}\nSince the ring $V^0D_Y$ is generated by ${\\boldsymbol{t}}=t_1,\\dots,t_r$ over $[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$, \n$\\sigma$ is in the center of $[D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0$, and $t_i\\cdot\\overline{g\\otimes 1}=f_i\\cdot\\overline{g\\otimes 1}$, \nit follows that $b_{{\\mathfrak{a}},g}^{(m)}(s)$ is the monic polynomial $b(s)$ of minimal degree satisfying $b(\\sigma)\\overline{g\\otimes 1}=0$. \nThis is equivalent to saying \n\\[\nb(s)g \\prod_i f_i^{s_i}\\in V^mD_Y\\prod_i f_i^{s_i}. 
\n\\]\nSince $V^mD_Y$ is generated by all monomials in ${\\boldsymbol{t}}=t_1,\\dots,t_r$ of degree $m$ as a $V^0D_Y$-module, \nand $t_j\\prod_i f_i^{s_i}=f_j\\prod_i f_i^{s_i}$, \nour Bernstein-Sato polynomial $b_{{\\mathfrak{a}},g}^{(m)}(s)$ is the monic polynomial $b(s)$ of minimal degree satisfying \n\\begin{equation}\\label{exp6}\nb(s)g \\prod_i f_i^{s_i}= \\sum_j P_j h_j \\prod_i f_i^{s_i} \\mbox{\\quad for~~} {}^\\exists P_j\\in [D_Y]^{({\\boldsymbol{w}},-{\\boldsymbol{w}})}_0, {}^\\exists h_j\\in {\\mathfrak{a}}^m.\n\\end{equation}\nHence if $g,h\\in{\\mathbb{C}}[{\\boldsymbol{x}}]$ are polynomials such that $g$ divides $h$, then $b_{{\\mathfrak{a}},h}^{(m)}(s)$ is a factor of $b_{{\\mathfrak{a}},g}^{(m)}(s)$. \nIn particular, $b_{{\\mathfrak{a}},g}^{(m)}(s)$ is a factor of $b_{{\\mathfrak{a}}}^{(m)}(s)$ for all $g\\in{\\mathbb{C}}[{\\boldsymbol{x}}]$. \n\\end{Observation}\nWe obtain a description of multiplier ideals in terms of our Bernstein-Sato polynomials similarly to \nTheorem \\ref{multiplier ideals and b-functions}. \n\\begin{Theorem}\\label{multiplier ideals and b-functions 2}\n{\\rm (i)} For a given rational number $c$\n\\end{Theorem}\nIf $\\sigma >$ v$_{\\rm e}$, then we are in the super-escape regime, so that gravitational-focusing \ncan be neglected. Conversely, in the sub-escape and super-Hill regime, v$_{\\rm H} <$ $\\sigma$ $<$ v$_{\\rm e}$, \ngravitational-focusing is significant. We do not consider the sub-Hill regime (i.e. $\\sigma <$ v$_{\\rm H}$), \nsince the Hill velocity is always very small in galactic nuclei. 
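The regime distinction just described is easy to evaluate numerically. The following sketch is our own illustration (not code from the paper): it compares the dispersion $\sigma$ with the stellar surface escape speed $v_{\rm e} = \sqrt{2GM/R}$ (the standard definition) and evaluates the gravitational-focusing enhancement factor $1 + (v_{\rm e}/\sigma)^2$ that multiplies the geometric collision rate.

```python
import math

GM_SUN = 1.327e11  # G * M_sun in km^3 s^-2

def escape_speed(m_msun, r_km):
    """Surface escape speed v_e = sqrt(2GM/R), in km/s."""
    return math.sqrt(2.0 * GM_SUN * m_msun / r_km)

def regime(sigma, v_e, v_h=0.0):
    """Classify the encounter regime for a velocity dispersion sigma (km/s)."""
    if sigma > v_e:
        return "super-escape"            # gravitational focusing negligible
    elif sigma > v_h:
        return "sub-escape/super-Hill"   # gravitational focusing significant
    return "sub-Hill"                    # not considered in galactic nuclei

def focusing_factor(sigma, v_e):
    """Enhancement factor 1 + (v_e/sigma)^2 on the geometric rate."""
    return 1.0 + (v_e / sigma) ** 2

R_SUN_KM = 6.957e5
v_e = escape_speed(1.0, R_SUN_KM)   # ~618 km/s for a Sun-like star
print(regime(100.0, v_e))           # sub-escape/super-Hill at sigma = 100 km/s
```

For a solar-type star with $\sigma = 100$ km/s the enhancement is roughly a factor of 40, which is why focusing matters for compact objects but much less so for extended giants, whose lower $v_{\rm e}$ shrinks the factor.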
\n\nPutting this all together, the collision rate for a Keplerian nuclear disk can be written: \n\\begin{equation}\n\\label{eqn:gamma1}\n\\Gamma \\approx \\Big( \\frac{{\\Sigma}{\\Omega}R^2}{M} \\Big)\\Big(1 + \\Big( \\frac{v_{\\rm e}}{\\sigma} \\Big)^2 \\Big),\n\\end{equation} \nwhere \n\\begin{equation}\n\\label{eqn:omega}\n\\Omega = \\Big( \\frac{GM_{\\rm BH}}{r^3} \\Big)^{1\/2},\n\\end{equation}\nand the second term in Equation~\\ref{eqn:gamma1} smoothly transitions the collision rate into and out of the regime \nwhere gravitational-focusing becomes important. The stellar surface mass density is:\n\\begin{equation}\n\\label{eqn:surfdens}\n\\Sigma = \\Big( \\frac{M_{\\rm d}}{\\pi{a^2}} \\Big),\n\\end{equation}\nwhere M$_{\\rm d}$ is the total stellar mass of the disk, a is its scale length and h is its scale height. \nEquation~\\ref{eqn:gamma1} gives the mean rate at which a given star or object orbiting at a distance r \nfrom the central SMBH collides with other objects. \n\nNext, we consider an NSC for which the dominant stellar distribution is (approximately) spherical and \npressure-supported, with predominantly random stellar motions. Here, \nwe assume that the stellar orbits are distributed isotropically and the velocity distribution is isothermal (i.e., Maxwellian with \ndispersion $\\sigma_{\\rm 0}$) \\citep[e.g.][]{merritt13}. \n\nSimilar to Equation~\\ref{eqn:gamma1}, an order-of-magnitude estimate for the mean rate $\\Gamma$ at which a given star or object \nundergoes direct collisions with other objects is:\n\\begin{equation}\n\\label{eqn:gamma9}\n\\Gamma \\approx \\Big( \\frac{{\\rho}{\\sigma}R^2}{M} \\Big)\\Big(1 + \\Big(\\frac{v_{\\rm e}}{\\sigma}\\Big)^2\\Big),\n\\end{equation} \nwhere, as before, the second term smoothly transitions the collision rate into and out of the regime where gravitational-focusing \nbecomes important. 
In Equation~\\ref{eqn:gamma9}, we use the local velocity dispersion $\\sigma$, calculated as:\n\\begin{equation}\n\\label{eqn:sigloc}\n\\sigma(r)^2 = \\sigma_{\\rm 0}^2 + \\frac{GM_{\\rm BH}}{r}.\n\\end{equation} \nThis serves to modify the velocity dispersion very close to the central SMBH, where the stellar orbits begin transitioning into \nthe Keplerian regime.\n\nFinally, combining Equations~\\ref{eqn:gamma1} and~\\ref{eqn:gamma9}, we obtain a \ngeneral formula for the rate of collisions in a nuclear star cluster with or without a central SMBH:\n\\begin{equation}\n\\label{eqn:gamma10}\n\\Gamma \\approx ({\\rho}\\sigma + {\\Sigma}\\Omega)\\Big( \\frac{R^2}{M} \\Big)\\Big(1 + \\Big( \\frac{v_{\\rm e}}{\\sigma} \\Big)^2 \\Big).\n\\end{equation}\nAs we will show in Section~\\ref{app}, Equation~\\ref{eqn:gamma10} reasonably describes the collision rates in every \ngalactic nucleus in our sample, provided we set \\textit{either} $\\rho =$ 0 or $\\Sigma =$ 0. This is because, within our chosen \nouter limit of integration (i.e., 2 pc), the dominant NSC \nstructure is well described by one of these two extremes, namely an (approximately) spherical pressure-supported \nsystem (i.e., the MW, M32 and M33) or a Keplerian disk (i.e., M31).\n\n\n\\subsection{Binary mergers due to Kozai-Lidov oscillations} \\label{kozai}\n\nIn this section, we derive order-of-magnitude estimates for the mean rate of binary mergers mediated \nby Kozai-Lidov oscillations with an SMBH acting as the outer triple companion, as a function \nof the host nuclear environment and distance from the central SMBH. 
\n\nThe characteristic time-scale for eccentricity oscillations to occur due to the Kozai-Lidov mechanism is \\citep{holman97,antonini16}:\n\\begin{equation}\n\\label{eqn:taukl}\n\\tau_{\\rm KL} \\approx P_{\\rm b}\\Big( \\frac{M_{\\rm b}}{M_{\\rm BH}} \\Big)\\Big( \\frac{r}{a_{\\rm b}} \\Big)^3(1-e_{\\rm BH}^2)^{3\/2}, \n\\end{equation} \nwhere M$_{\\rm b} =$ 2M is the binary mass, r is the distance from the SMBH, e$_{\\rm BH}$ is the orbital eccentricity of the \nbinary's orbit about the central SMBH, a$_{\\rm b}$ is the binary orbital separation and P$_{\\rm b}$ is the binary \norbital period. The rate at which binaries merge due to Kozai-Lidov oscillations within a given distance from the SMBH \nis then:\n\\begin{equation}\n\\label{eqn:gammakl1}\n\\Gamma_{\\rm KL}(r) \\sim \\frac{{\\pi}f_{\\rm b}f_{\\rm i}{\\Sigma}r^2}{2M\\tau_{\\rm KL}},\n\\end{equation} \nfor a nuclear disk, and \n\\begin{equation}\n\\label{eqn:gammakl2}\n\\Gamma_{\\rm KL}(r) \\sim \\frac{2{\\pi}f_{\\rm b}f_{\\rm i}{\\rho}r^3}{3M\\tau_{\\rm KL}},\n\\end{equation} \nfor a spherical pressure-supported nucleus. Here, f$_{\\rm b}$ is the fraction \nof objects that are binaries, and \nf$_{\\rm i}$ is the fraction of binaries with their inner and outer orbital planes oriented such that Kozai \ncycles operate. For simplicity, we assume f$_{\\rm i} =$ 1. Thus, the calculated rates \nare strict upper limits, since they assume that every binary is oriented relative to the SMBH such that Kozai oscillations \nwill occur, which is certainly not the case. 
We will return to this issue in Section~\\ref{discussion}.\n\nImportantly, Equation~\\ref{eqn:taukl}, and hence Equations~\\ref{eqn:gammakl1} and~\\ref{eqn:gammakl2}, \nare only valid at distances from the SMBH smaller than \\citep{prodan15}:\n\\begin{equation}\n\\begin{gathered}\n\\label{eqn:rsc}\nr_{\\rm SC} \\approx 0.02\\Big( \\frac{a_{\\rm b}}{{\\rm AU}} \\Big)^{4\/3}\\Big( \\frac{M_{\\rm b}}{2 {\\rm M_{\\rm \\odot}}} \\Big)^{-2\/3} \\\\\n\\Big( \\frac{M_{\\rm BH}}{4 \\times 10^6 {\\rm M_{\\rm \\odot}}} \\Big)^{1\/3}\\Big( \\frac{1- e_{\\rm b}^2}{1 - e_{\\rm BH}^2} \\Big)^{1\/2} {\\rm pc},\n\\end{gathered}\n\\end{equation}\nwhere e$_{\\rm b}$ is the orbital eccentricity of the binary star. This is due to general relativistic, or \nSchwarzschild apsidal, precession in the binary star \\citep{holman97,blaes02,holman06}. \n\n\n\n\\section{The effect of mass loss on the photometric appearance of red giant branch stars} \\label{ehbform}\n\nIn this section, we quantify the effects of collisions and, specifically, the subsequent tidal stripping on \nthe photometric appearance of \nRGB stars. Our primary concern is the stellar evolution, so we do not specify the nature of the impactor. \nInstead, we look at how discrete mass loss events affect the subsequent time evolution of the \nRGB luminosity and radius. We note here that our calculated collision rates are sufficiently high that \nthe underlying RGB populations could be significantly affected, and these stars dominate the light \ndistribution in an old stellar population. Due to the high extinction toward the Galactic centre, for \nexample, these are the only (``old'') stars that can be observed.\n\nStars of initial mass\\footnote{In this section, unless otherwise stated, mass, luminosity and radius are \nexpressed in solar units, and our chosen unit of time is $10^6~$yr} $0.3 \\lap M\/M_{\\odot} \\lap 10$ become RGB \nstars. 
Exhaustion of hydrogen in the core is followed by burning of hydrogen to helium in a thin shell; the \nluminosity at the beginning of this evolutionary stage, i.e. at the base of the giant branch (BGB), is \napproximated by \\citep[e.g.][]{1989ApJ...347..998E}:\n\\begin{equation}\n\\label{eqn:LBGB}\nL_{\\rm BGB} \\approx \\frac{2.15M^2+0.22M^5}{1+1.4\\times10^{-2}M^2+5\\times10^{-6}M^4},\n\\end{equation}\nwhere M is the mass of the star. The mass of the helium core gradually increases, and the star's \nradius and luminosity shoot up as the star climbs the giant branch. \nIn stellar evolution models, the radius and luminosity during the giant phase are determined \nalmost uniquely by the mass, $M_{\\rm c}$, of the (helium) core.\nApproximate relations for $0.17 \\lap M_{\\rm c}\/M_{\\odot} \\lap 1.4$ are \\citep{2006epbm.book.....E}:\n\\begin{eqnarray}\n\\label{eqn:LRRG}\nL_{\\rm GB} & \\approx & \\frac{10^{5.3}M_{\\rm c}^6}{1 + 10^{0.4}M_{\\rm c}^4 + 10^{0.5}M_{\\rm c}^5}, \\\\\nR _{\\rm GB} & \\approx & M^{-0.3}(L_{\\rm GB}^{0.4}+0.383L_{\\rm GB}^{0.76}).\n\\end{eqnarray}\nThe dominant energy source during these evolutionary phases\nis the CNO cycle, and so the rate of energy production is tied to the rate of increase of the core mass by\n\\begin{equation}\n\\label{eqn:Lpp}\nL_{\\rm GB} \\approx 1.44 \\times 10^{5} \\times \\frac{dM_{\\rm c}}{dt}.\n\\end{equation}\nGiven an initial core mass, Equations~\\ref{eqn:LRRG}-\\ref{eqn:Lpp} can be solved for the dependence of \n$M_{\\rm c}$, $L_{\\rm GB}$ and $R_{\\rm GB}$ on time.\nThe surface temperature is given by the Stefan-Boltzmann law (however, at late times, the \nmaximum radius is limited by mass loss). 
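The statement that the core-mass relations determine the giant-branch evolution can be made concrete with a simple forward integration. The sketch below is our own illustration (solar units, time in Myr, per the footnote above), not the STARS code used later: it steps the core mass with $dM_{\rm c}/dt = L_{\rm GB}/1.44\times10^5$ and updates the luminosity and radius from the fitting formulae.

```python
def l_gb(mc):
    # Giant-branch luminosity (L_sun) as a function of He-core mass (M_sun)
    return 10**5.3 * mc**6 / (1.0 + 10**0.4 * mc**4 + 10**0.5 * mc**5)

def r_gb(m, lum):
    # Giant-branch radius (R_sun) for total stellar mass m (M_sun)
    return m**-0.3 * (lum**0.4 + 0.383 * lum**0.76)

def climb_giant_branch(m, mc0, mc_tip, dt=0.05):
    """Euler-integrate dMc/dt = L_GB / 1.44e5 (t in Myr) from mc0 to mc_tip."""
    mc, t = mc0, 0.0
    while mc < mc_tip:
        mc += l_gb(mc) / 1.44e5 * dt
        t += dt
    return t, l_gb(mc), r_gb(m, l_gb(mc))

# A 1 M_sun star, from a core of 0.17 M_sun up to 0.45 M_sun near the tip:
t, lum, rad = climb_giant_branch(1.0, 0.17, 0.45)
```

With these fitting formulae the climb takes of order a Gyr, most of it spent near the base of the branch where the luminosity is lowest, and the star ends near $10^3\,L_\odot$ with a radius of order $10^2\,R_\odot$; the crude Euler step is adequate for such an order-of-magnitude sketch.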
\nThe initial core mass -- that is, the mass of the hydrogen-exhausted\nhelium core at the time the main sequence turnoff is reached -- is given by \\citep{2006epbm.book.....E}:\n\\begin{equation}\nM_{\\rm c} \\approx \\frac{0.11 M^{1.2} + 0.0022M^2 + 9.6 \\times 10^{-5} M^4}\n{1 + 0.0018M^2 + 1.75\\times 10^{-4} M^3}.\n\\end{equation}\nFor instance, for $M = (1,2,3)$ M$_{\\odot}$, the initial core mass is\n$M_{\\rm c}\/M_{\\odot} = (0.11,0.26,0.43)$ and the initial core mass ratio is\n$M_{\\rm c}\/M = (0.11,0.13,0.14)$.\n\nFor stars with masses $0.3 \\lap M\/M_{\\odot} \\lap 0.5$ the evolution on the RGB is interrupted when \n$M_{\\rm c}$ has grown to approximately $M$. Low mass stars, with $0.5 \\lap M\/M_{\\odot} \\lap 2$, \ndevelop degenerate He cores on the RGB and $3\\alpha$ reactions initiate a violent nuclear\nrunaway, called the helium-flash. Intermediate mass stars, with $2 \\lesssim M\/M_{\\odot} \\lesssim 10$,\ndevelop non-degenerate He cores, which grow only slightly on the RGB before He burning begins. \nThe luminosity at the tip of the giant branch (TGB) is roughly $L_{\\rm TGB} \\approx L_{\\rm BGB} + 2 \\times 10^3$.\n\nMass loss can significantly alter the evolution of an RGB star. For instance, it can \nprevent the star from igniting He, and therefore terminate the star's evolution at an \nearlier stage. In order to study the response of RGB stars to episodes of intense mass loss, \nwe used the stellar evolution code~STARS \\citep{eggleton71,pols+95}. \nWe compute evolution tracks for stars with various ZAMS masses, applying the \\citet{dejager+88} \nprescription for the mass loss rate due to stellar winds. \nAt some point during the RGB phase, we switch to a constant mass loss regime at the rate \n$10^{-5}M_\\odot~\\rm yr^{-1}$. During this time we turn off changes in composition due to nuclear burning, \nwhile retaining energy production. 
When the stellar mass is reduced to a certain value, $M_{\\rm f}$, we\nhalt the rapid mass loss regime, and continue with ``normal'' stellar evolution by turning the \nde~Jager mass loss rate prescription back on, which in turn allows for composition changes to \nre-initiate. This numerical procedure allows us to study the evolution of stars undergoing \nan episode of mass loss on a time scale much shorter than their nuclear time scale, as is the case for \nan RGB star that is tidally stripped of its envelope.\n\nFigure~\\ref{fig:fig1} displays stellar evolution tracks on the RGB, after different amounts of envelope \nstripping. Stars will not reach the TGB if the total stellar mass is smaller (due to stellar winds) than \nthe He core mass at the TGB of the progenitor star. In this case, the star will instead climb the giant branch \nuntil the mass of its hydrogen exhausted He core is essentially equal to its total stellar mass, at which point it \nwill turn off the giant branch to the blue at the same luminosity as other stars of the same total mass. The \nstar will fail to ignite He and will subsequently cool off to become a white dwarf \nwith an He core. If the mass left in the star is instead large enough that its He core mass \ncan reach that of the parent star at the moment of He ignition, then the remnant \nwill also ignite He at roughly the same luminosity as its (unperturbed) parent star. In this scenario, the star \neventually cools off to become a white dwarf with a CO core. 
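The branching just described can be condensed into a small decision rule. This is our own schematic paraphrase of the text (not output of the STARS code): whether a stripped star ignites He depends on whether its remaining total mass still allows the core to grow to the parent star's core mass at He ignition.

```python
def remnant_after_stripping(m_f, mc_ignition_parent):
    """Schematic outcome of envelope stripping on the RGB.

    m_f: total stellar mass left after stripping (M_sun)
    mc_ignition_parent: He-core mass of the unperturbed parent star
                        at the moment of He ignition (M_sun)
    """
    if m_f <= mc_ignition_parent:
        # Core growth stalls once it reaches the total mass: the star
        # fails to ignite He and cools off as an He white dwarf.
        return "He white dwarf"
    # The core can still reach the parent's ignition mass: He burns,
    # and the star eventually cools off as a CO white dwarf.
    return "CO white dwarf"
```

For example, with a parent-star ignition core mass near 0.47 M$_\odot$ (a typical low-mass value, assumed here for illustration), stripping a star down to 0.4 M$_\odot$ yields an He white dwarf, while leaving 0.9 M$_\odot$ still permits He ignition.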
\nFigure~\\ref{fig:fig1} shows that, for stars of total mass $0.8 \\lesssim M\/M_{\\odot} \\lesssim 1.5$, \nto significantly alter the evolution and make the TGB fainter requires $M_{\\rm f} \\lesssim 0.7 M_{\\odot}$.\nFor $M \\gtrsim 2 M_{\\odot}$, we require instead a final mass $M_{\\rm f} \\approx 0.3 M_{\\odot}$ or, \nequivalently, the removal of more than $90\\%$ of the star's envelope \\citep[e.g.][]{pradam09}.\n\nWe find that the overall evolution of the stars is relatively insensitive to the luminosity at which the mass loss \nepisode occurs. Following mass removal, a star will nearly return to its initial luminosity on the RGB, \nbut will move onto the point on the Hayashi track corresponding to its new mass. This is independent \nof the luminosity at which tidal stripping occurs. While the luminosity and evolutionary timescales \nremain virtually unchanged, the new radius of a stripped RGB star will increase (or decrease; see below) \nsignificantly as it reaches a new state of thermal equilibrium.\n\nFor $M_{\\rm c}\/M \\gap(\\lap) 0.2$, the star responds to mass loss by becoming smaller (larger).\nIn Figure~\\ref{fig:fig2}, the dashed line corresponds to an He core mass equal to 0.2M, \napproximately the mass ratio below which RGB stars respond to tidal stripping by expanding. Stars with \nmasses $M \\gtrsim 2 M_{\\odot}$ never rise above this critical mass ratio and will always expand on\nlosing mass. Lower mass stars, on the other hand, have $0.1 \\lap M_{\\rm c}\/M \\lap 0.15$ when they \nfirst leave the main sequence, and will expand upon losing mass. Farther up the giant branch, \nthe core mass increases relative to the total stellar mass, and these stars should shrink upon losing mass.\n\nFigure~\\ref{fig:fig2} shows the mass of the hydrogen exhausted core, $M_{\\rm c}$, at the TGB as a function \nof the ZAMS mass for both unperturbed and stripped stars. 
For unperturbed stars, there is a deep minimum \naround $M = 2 M_{\\odot}$ which corresponds approximately to the transition between stars with \ndegenerate, $M \\lesssim 2 M_{\\odot}$, and non-degenerate, $M \\gtrsim 2 M_{\\odot}$, He cores at the beginning \nof the giant branch \\citep{HS55}. Comparing the He core mass at the TGB with that at the time corresponding to the \nmain sequence turnoff (dot-dashed line in Figure~\\ref{fig:fig2}), we see that for $M\\gtrsim 2 M_{\\odot}$ the majority \nof the He core mass needed for ignition of He-burning is already in place before the star reaches the giant branch. \nA star with $M \\gtrsim 2M_{\\odot}$ that undergoes a strong mass loss episode on the RGB will still experience a \ncore-He burning phase and reach nearly the same luminosity as its parent star at the TGB. Therefore, for \n$M \\gtrsim 2 M_{\\odot}$, mass loss will have little effect on the luminosity corresponding to the TGB, unless most of \nthe envelope is suddenly removed near the BGB. With that said, however, mass loss during the RGB phase \ncould still make the tip of the asymptotic giant branch (AGB) non-negligibly fainter (for example, see the lower \npanels in Figure~\\ref{fig:fig1}).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.73\\linewidth,angle=0]{fig1.eps}\n\\end{center}\n\\caption[Evolution tracks for stars undergoing an episode of mass loss on the giant branch]{Evolution tracks for stars with \nZAMS masses of $M = 1 M_{\\odot}$ and $3 M_{\\odot}$ undergoing an \nepisode of mass loss on the giant branch. The total stellar mass, $M_{\\rm f}$, remaining after some fraction of \nthe RGB envelope was (instantaneously) expelled is shown in blue. The star's evolution during the mass \nremoval phase is shown by the red line, with the blue lines depicting the subsequent evolution. \nStellar evolution tracks for unperturbed stars are shown via the black lines. 
For comparison, for each model, \nwe also show via the dashed orange lines the Eggleton stellar evolution tracks for unperturbed stars with \n$M = M_{\\rm f}$. The star symbols indicate the point on the H-R diagram where stars ignite He. The \ndot-dashed lines are K=12, 15 contours obtained using giant colours and bolometric corrections from \n\\citet{johnson66}. \nStars with $K >15$ are too faint to be resolved at the MW Galactic centre. For this plot, we assume \na distance of $8~\\rm kpc$ and an extinction of $A_{\\rm K}=3$, suitable to the line of sight extending from the \nSun to the Galactic Centre.\n\\label{fig:fig1}}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=.8\\linewidth,angle=-90.]{fig2.eps}\n\\end{center}\n\\caption[The He core mass at the TGB as a function of the ZAMS mass.]{The He-core mass at the TGB is shown as a \nfunction of the ZAMS mass. The filled circles show the \nresults for unperturbed single stars. The star symbols show the hydrogen-exhausted He core mass for stars \nundergoing an episode of rapid mass loss on the giant branch. The symbols are larger for stars that ignite He. The \nindicated masses show the final stellar mass that remains after mass removal. The dashed line delineates the boundary \nbetween canonical stars (filled circles) that respond to mass loss by becoming bigger ($M_{\\rm c}\/M < 0.2$) or \nsmaller ($M_{\\rm c}\/M > 0.2$). The dot-dashed line shows the mass of the hydrogen-exhausted He core at the time \ncorresponding to the main sequence turnoff. All units are assumed to be solar.\n\\label{fig:fig2}}\n\\end{figure}\n\n\\section{Application to Local Group Galactic Nuclei} \\label{app}\n\nIn this section, we review the available observations for the central nuclear regions of our four \nLocal Group galaxies, namely the MW, M31, M32 and M33, in order to characterize their physical \nproperties for input to the rate estimates presented in Section~\\ref{rates}. 
We further present the \ncalculated collision rates for each nucleus as a function of distance from the cluster centre, using the \nobserved NSC properties.\n\nOur main results are shown in Figures~\\ref{fig:fig3} and~\\ref{fig:fig4}, which show the collision rates (both for a \nparticular object and the total integrated rates) for the MW (top left insets), M31 (top right insets), M32 (bottom left insets) and \nM33 (bottom right insets), as a function \nof distance from the cluster centre or SMBH. The solid black, dotted red, long-dashed green, dot-dashed cyan, dotted black and \ndashed blue lines show, respectively, the \nrates of MS+MS, MS+RGB, MS+WD, WD+RGB, BH+RGB and 1+2 collisions. \n\nFigure~\\ref{fig:fig3} shows the individual rates for a \n\\textit{specific} object to experience direct collisions with other objects of a given type, and must hence be multiplied by an \nappropriate fraction. \n\n We multiply the rates of MS+MS, MS+RGB, MS+WD, WD+RGB, BH+RGB and 1+2 collisions by factors of (1-f$_{\\rm b}$)f$_{\\rm MS}$, \n(1-f$_{\\rm b}$)f$_{\\rm MS}$, (1-f$_{\\rm b})$f$_{\\rm MS}$, (1-f$_{\\rm b})$f$_{\\rm RGB}$, (1-f$_{\\rm b})$f$_{\\rm RGB}$ and \n(1-f$_{\\rm b}$), respectively, where f$_{\\rm b}$ is the fraction of objects that \nare binaries and f$_{\\rm MS}$ is the fraction of single stars on the main-sequence. We require \n1 $=$ f$_{\\rm MS}$ + f$_{\\rm RGB}$ + f$_{\\rm WD}$ + f$_{\\rm NS}$ + f$_{\\rm BH}$, where f$_{\\rm RGB}$, f$_{\\rm WD}$, f$_{\\rm NS}$ and \nf$_{\\rm BH}$ are the fractions of single RGBs, WDs, NSs and BHs, respectively. We set f$_{\\rm b} =$ 0.01, f$_{\\rm MS} =$ 0.89, \nf$_{\\rm WD} =$ 0.10 and f$_{\\rm RGB} =$ 0.01 \\citep[e.g.][]{leigh07,leigh09,maeder09,leigh11b}. These fractions are \nonly approximate, but are representative of what we find upon integrating over a Kroupa initial mass function \\citep{kroupa95,kroupa95b}, \nover the appropriate mass ranges for an old stellar population. 
For the fractions of NSs and BHs, we\nset f$_{\\rm NS} =$ 0.01 and f$_{\\rm BH} =$ 0.001 \\citep[e.g.][]{alexander05,hopman06}. \n\nTo estimate the binary fraction for \neach NSC, f$_{\\rm b}$, we first calculate the expected average hard-soft boundary in the NSC for a typical binary.\nWe then calculate the fraction of the log-normal input period distribution, taken from \\citet{raghavan10} for solar-type binaries\nin the field, below this hard-soft boundary, $f_{\\rm P}$. We assume that, if the full period distribution could be occupied, this would\nyield a binary fraction of 50\\%. The (hard) binary fraction is then given as the product f$_{\\rm b} =$ 0.5f$_{\\rm P}$.\nThe resulting binary fractions are lower in clusters with hard-soft boundaries at smaller orbital periods (i.e. clusters with larger\nvelocity dispersions), as is consistent with observed star clusters \\citep[e.g.,][]{sollima08,milone12,leigh15}. This gives\nf$_{\\rm b} =$ 0.010, 0.002, 0.020 and 0.060 for, respectively, the MW, M31, M32 and M33. Finally, for the mean binary orbital\nseparation, we adopt the cluster hard-soft boundary.\n\nFigure~\\ref{fig:fig4} shows the total rates for \\textit{any} pair of objects to undergo a collision. These total rates are calculated \nby multiplying each collision rate in Figure~\\ref{fig:fig3} by the number of objects (MS, RGB or WD) in a given radial bin. We adopt a \nbin size of 0.01 pc, such that the y-axis corresponds to the number of collision products in each 0.01 pc \nradial interval, over a 100 Myr time interval. We multiply each rate by the calculated \nnumber of objects of the given type within a given radial bin, found by multiplying the mean density in each \nbin by its volume. An additional factor is included to correct for the fraction of objects of the desired type. 
That is, \nwe multiply the numbers of MS+MS, MS+RGB, MS+WD, WD+RGB, MS+NS, BH+RGB and 1+2 collisions by (1-f$_{\\rm b}$)f$_{\\rm MS}$, \n(1-f$_{\\rm b})$f$_{\\rm RGB}$, (1-f$_{\\rm b})$f$_{\\rm WD}$, (1-f$_{\\rm b})$f$_{\\rm WD}$, (1-f$_{\\rm b})$f$_{\\rm NS}$, \n(1-f$_{\\rm b})$f$_{\\rm BH}$ and f$_{\\rm b}$, respectively. We then calculate the total number of collision\/merger products \nfor each mechanism by integrating numerically over the 2-D and 3-D density profiles. That is, we sum over all radial bins to \nestimate the total number of collision products expected in a 100 Myr time interval, from 0.1 pc (i.e., the current \nlocation of the inner blue stars) to 2 pc (i.e., a rough estimate for the relevant size of the \nsurrounding old population) from the cluster centre. The results are shown in Table~\\ref{table:one} for \nall NSCs in our sample.\n\nIn making Figures~\\ref{fig:fig3} and~\\ref{fig:fig4}, we assume a mean single star \nmass and radius of 0.3 M$_{\\odot}$ and 0.3 R$_{\\odot}$, respectively, which are suitable for an old stellar \npopulation.\\footnote{We do not integrate over a present-day stellar mass function in calculating the collision rates, due \nmainly to uncertainties in the cluster age and initial stellar mass function for the NSCs in our sample. For example, \nsome authors have argued for a top-heavy initial mass function in the Galactic Centre \\citep[e.g.][]{paumard06}.} \nAll RGB stars, WDs, NSs and BHs are assumed to have masses of, respectively, 1 M$_{\\odot}$, \n1 M$_{\\odot}$, 2 M$_{\\odot}$ and 10 M$_{\\odot}$. All RGB stars are assumed to have radii\nof 10 R$_{\\odot}$, and all binaries have orbital separations equal to the hard-soft boundary. Hence, gravitational focusing\nis typically negligible for all collisions involving RGB stars (except if BHs are involved) and binaries, but is always significant for MS+MS, \nMS+WD, MS+NS and BH+RGB collisions. 
We caution that the \nrates shown in Figures~\\ref{fig:fig3} and~\\ref{fig:fig4} are order-of-magnitude estimates, \nand are highly sensitive to our assumptions for the input parameters. For example, an increase in the mean \nbinary orbital separation by a factor of only 2 translates into an \nincrease in both the corresponding rates and numbers by a factor of 8. Thus, binary destruction via 1+2 \ncollisions could be (or have been in the past) much more efficient than Figures~\\ref{fig:fig3} and~\\ref{fig:fig4} would \nsuggest, since we assume only very compact binaries. \n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{fig3.eps}\n\\end{center}\n\\caption[Individual rates of MS+MS, MS+RGB, MS+WD, WD+RGB, BH+RGB and 1+2 collisions for the MW, M31, M32 and M33]{The MS+MS (black solid lines), \nMS+RGB (red dotted lines), MS+WD (green long-dashed lines), WD+RGB (cyan dot-dashed lines), BH+RGB (dotted black lines) and 1+2 (blue dashed \nlines) collision rates are shown as a function of distance from the cluster centre (or SMBH) for the MW (top left panel), M31 (top right panel), \nM32 (bottom left panel) and M33 (bottom right panel). These are the rates for a \\textit{particular} object to undergo a collision. \nFor comparison, we also show our results for the MW \nassuming the density profile given by Equations 1 and 4 in \\citet{merritt10}, via an additional \ninset in the top left panel. The assumptions used as \ninput to each rate equation in Section~\\ref{rates} are discussed in Section~\\ref{app}. 
\n\\label{fig:fig3}}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=\\columnwidth]{fig4.eps}\n\\end{center}\n\\caption[Total Rates of MS+MS, MS+RGB, MS+WD, WD+RGB, BH+RGB and 1+2 collisions for the MW, M31, M32 and M33]{The total rates of \nMS+MS (black solid lines), MS+RGB (red dotted lines), MS+WD (green long-dashed lines), WD+RGB (cyan dot-dashed lines), \nBH+RGB (dotted black lines) and 1+2 (blue dashed lines) collisions are shown as a function of distance from the \ncluster centre (or SMBH) for the MW (top left panel), M31 (top right panel), M32 (bottom left panel) and M33 (bottom right panel). These \nare the total rates for \\textit{any} pair of objects to undergo a collision. For \ncomparison, we also show our results assuming the density profile given by Equations 1 and 4 in \\citet{merritt10}, via an additional \ninset in the top left panel. In making this figure, we adopt a \nbin size of 0.01 pc, such that the y-axis corresponds to the number of collision products in each 0.01 pc radial interval, over a 100 Myr \ntime period. 
The assumptions used as input to each collision rate are the same as in Figure~\\ref{fig:fig3}.\n\\label{fig:fig4}}\n\\end{figure}\n\n\\begin{table*}\n\\caption{Total number of collision products expected in a 100 Myr time interval}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|}\n\\hline\nGalaxy & MS+MS & MS+RGB & MS+WD & WD+RGB & MS+NS & BH+RGB & 1+2 \\\\\n\\hline\nMilky Way & 7.1 $\\times$ 10$^1$ & 1.8 $\\times$ 10$^2$ & 2.5 $\\times$ 10$^1$ & 2.0 $\\times$ 10$^1$ & 5.0 $\\times$ 10$^0$ & 1.0 $\\times$ 10$^0$ & 6.5 $\\times$ 10$^1$ \\\\\nM31 & 3.2 $\\times$ 10$^4$ & 9.3 $\\times$ 10$^4$ & 1.1 $\\times$ 10$^4$ & 1.1 $\\times$ 10$^4$ & 2.2 $\\times$ 10$^3$ & 4.6 $\\times$ 10$^2$ & 1.5 $\\times$ 10$^3$ \\\\\nM32 & 4.3 $\\times$ 10$^3$ & 5.5 $\\times$ 10$^4$ & 1.4 $\\times$ 10$^3$ & 2.0 $\\times$ 10$^3$ & 2.6 $\\times$ 10$^2$ & 6.2 $\\times$ 10$^1$ & 7.9 $\\times$ 10$^3$ \\\\\nM33 & 1.3 $\\times$ 10$^4$ & 5.0 $\\times$ 10$^3$ & 4.9 $\\times$ 10$^3$ & 1.9 $\\times$ 10$^3$ & 9.7 $\\times$ 10$^2$ & 1.8 $\\times$ 10$^2$ & 4.3 $\\times$ 10$^4$ \\\\\n\\hline\n\\end{tabular}\n\\label{table:one}\n\\end{table*}\n\n\n\\subsection{The Milky Way} \\label{MW}\n\nAt the heart of the Milky Way (MW), there resides an NSC of mass \n$\\sim$ 9 $\\times$ 10$^6$ M$_{\\odot}$ ($<$ 100 arcsec) hosting a super-massive black hole \nof mass $\\sim$ 4 $\\times$ 10$^6$ M$_{\\odot}$ \\citep[e.g.][]{genzel96,chatzopoulos15}.\nWe assume a pressure-supported nucleus with an isothermal velocity dispersion \n$\\sigma_{\\rm 0} =$ 100 km\/s \\citep{merritt13}. Using Equation~\\ref{eqn:rinf}, this yields \nan influence radius for the central SMBH of $\\sim$ 1.7 pc. 
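The quoted influence radius follows directly from the adopted numbers. A quick check of our own (assuming the standard definition $r_{\rm infl} = GM_{\rm BH}/\sigma_0^2$ for Equation~\ref{eqn:rinf}, which is defined earlier in the paper):

```python
G_PC = 4.301e-3  # G in pc (km/s)^2 / M_sun

def influence_radius(m_bh, sigma0):
    """SMBH influence radius r_infl = G M_BH / sigma_0^2, in pc (assumed form)."""
    return G_PC * m_bh / sigma0**2

r_infl = influence_radius(4.0e6, 100.0)
print(round(r_infl, 2))  # 1.72 pc, matching the ~1.7 pc quoted in the text
```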
\n\nThe MW NSC is thought to have a core \\citep[e.g.][]{merritt10}, hence we adopt the stellar density profile of \\citet{stone15}:\n\\begin{equation}\n\\label{eqn:rho}\n\\rho(r) = \\frac{\\rho_{\\rm 0}}{(1 + r^2\/r_{\\rm c}^2)(1 + r^2\/r_{\\rm h}^2)},\n\\end{equation}\nwhere r is the distance from the origin, r$_{\\rm c}$ is the core radius and r$_{\\rm h}$ is the half-mass radius. Equation~\\ref{eqn:rho} \nhas been chosen to produce a flat core inside a roughly isothermal cluster.\\footnote{These assumptions are consistent \nwith the observational constraints available for the MW NSC \\citep[e.g.][]{merritt10}. However, we note that, formally, \na certain amount of anisotropy is needed for there to exist a core (in a Keplerian potential).} Here, the core radius is given by:\n\\begin{equation}\n\\label{eqn:core}\nr_{\\rm c} = \\frac{\\sigma_{\\rm 0}}{\\sqrt{2{\\pi}G\\rho_{\\rm 0}}}.\n\\end{equation}\n\nFor comparison, we also show our results assuming the density profile given by Equations 1 and 4 in \\citet{merritt10}, \nvia an additional inset in Figures~\\ref{fig:fig3} and~\\ref{fig:fig4}. As we will show, both of these assumptions for the \ndensity profile yield very comparable radial collision profiles. The central density is assumed to be \n10$^6$ M$_{\\odot}$ pc$^{-3}$, and we adopt a half-mass radius of \nr$_{\\rm h} =$ 2.5 pc \\citep{merritt13} along with a mass-to-light ratio of 2. We use Equation~\\ref{eqn:gamma10} \nto calculate all collision rates in the MW (with $\\Sigma =$ 0), appropriate for a \npressure-supported roughly isothermal nuclear environment, and transition smoothly between these formulae \nas gravitational focusing becomes significant. Note that we set M$_{\\rm d} =$ 0, since there is no significant \nKeplerian disk component outside 0.1 pc. 
The results are shown in Figures~\\ref{fig:fig3} \nand~\\ref{fig:fig4} by the black lines.\n\nIn the MW, the collision rates are sufficiently high that all types of collision products should be \npresent, at least within $\\lesssim$ 1 pc of the Galactic Centre. This predicts \nnon-negligible numbers of MS+MS collision products and hence blue stragglers, as \nillustrated in Figure~\\ref{fig:fig4} and Table~\\ref{table:one}. More specifically, over a 1 Gyr period, \non the order of 710, or $\\sim$ 10$^3$, MS+MS collisions should occur, the products of which amount to roughly 0.1\\% of the total \nstellar mass in this region. \nAfter an initial rise from r $=$ 0, the number of collision products remains roughly constant with \nincreasing distance from the SMBH. As shown in Figure~\\ref{fig:fig3}, the \npredicted collision rates for a particular object show a near monotonic decrease with increasing distance \nfrom the cluster centre, after a relatively constant inner core that extends out to only $\\sim$ 0.2 pc.\nThe density profile taken from \\citet{merritt10}, while quantitatively very similar, shows a slightly steeper initial \ndrop followed by a very similar monotonic profile. The total collision rates for any pair of objects, however, show a \nrapid increase out to $\\sim$ 0.3 pc, \nfollowed by a monotonic decrease. This is the case for both of our assumed density profiles, which yield \nvery similar collision rate profiles, as shown in Figure~\\ref{fig:fig4}. Thus, we might expect a relatively \nweak central concentration of blue stars in the MW, assuming a collisional origin (ignoring any possible \nmass segregation of collision products). \n\nThe core radius given in Equation 4 of \\citet{merritt10} is much smaller \nthan the SMBH influence radius, such that the stellar orbits within this core should be roughly Keplerian. \nThis is what creates a central drop in some collision numbers within the core, as shown in Figure~\\ref{fig:fig4}. 
\nHowever, this inner core is such a small \nfraction of the total volume over which we integrate the collision rates that this correction should have a \nrelatively negligible impact on the total numbers of collision products. \n\nVery roughly, the numbers of 1+2 collisions that occur over a $\\gtrsim$ 10 Gyr period \nshould be comparable to the total number of binaries in the MW NSC, for our assumed binary fraction and \nconsidering only hard binaries. The hard-soft boundary in the MW NSC corresponds to the binary \ncomponents being in (or nearly in) contact, due to the very high velocity dispersion. Hence, the probability of \na direct collision or merger occurring during \nan encounter involving such a hard binary is nearly unity for small impact parameters \\citep[e.g.][]{leigh12}. The \ncollision product should expand post-collision by a factor of $\\sim$ a few \\citep[e.g.][]{sills97,fregeau04}, driving \nthe remaining binary components to merge when their radii overlap at periastron. Conversely, encounters \ninvolving soft binaries should have a high probability of being dissociative, or of significantly increasing the \nbinary separation post-encounter such that the time-scale for a subsequent encounter (with an even higher \nprobability of dissociation) becomes very short. \nThus, only (near) contact binaries should survive in the Galactic Centre for any significant amount of time although \nmany will also be destroyed, and our estimate for the binary fraction in the MW NSC \n(i.e., f$_{\\rm b} =$ 0.01) could be an over-estimate of the true binary fraction. \n\nWith that said, it seems unlikely that the disk of O- and B-type stars observed at $\\sim$ 0.1 pc from \nSgr A* has a collisional origin. First, although collisions should be dissipative, there is no obvious reason why the \ncollision products should \narrange themselves into a disk-like configuration in a spherical pressure-supported nucleus. 
Second, given \nthe average stellar mass in the \nMW NSC, multiple collisions would be required to form these massive O- and B-type stars, which must \nhave occurred over a very short interval of time, since O- and B-stars have very short lifetimes. Figure~\\ref{fig:fig3} \nsuggests that the rate of collisions involving individual MS stars is too low for this to be the case. \n\nInterestingly, however, the rate of MS+WD collisions is sufficiently high that a non-negligible supply of gas could be \nsupplied to the nucleus via this mechanism (especially assuming the density profile in \\citet{merritt10}). This is \nbecause collisions between stars and WDs should act to ablate \nthe star, nearly independent of the cluster velocity dispersion (see Section~\\ref{discussion} for more details regarding \nthe underlying physics responsible for this mechanism for gas liberation; \\citep{shara77,shara78,regev87}). This mechanism \ncould supply on the order of $\\lesssim$ 10$^3$ M$_{\\rm \\odot}$ in gas every $\\sim$ 1 Gyr in the MW NSC. This is because, using \nthe density profile of \\citet{merritt10}, on the \norder of $\\sim$ 100 MS\/RGB+WD collisions happen every 100 Myr (see Table~\\ref{table:one}). Assuming that most MS stars \nundergoing MS+WD collisions are close to the turn-off, which is of order $\\sim$ 1 M$_{\\odot}$ for an old population, then \nFigure~\\ref{fig:fig2} (see the dot-dashed line) suggests that the He core mass should comprise a small fraction of the total \nstellar mass ($\\lesssim$ 0.2 M$_{\\odot}$ or 20\\%) when the collision occurs. Hence, for our purposes, a reasonable \nassumption for the mean mass in gas liberated per MS\/RGB+WD collision is $\\sim$ 0.8 M$_{\\odot}$ (i.e., the envelope, or all \nmaterial outside of the degenerate core). Assuming 10$^3$ MS\/RGB+WD collisions per 1 Gyr, this predicts \n$\\sim$ 800 M$_{\\odot}$ $\\sim$ 10$^3$ M$_{\\odot}$ in gas should be supplied to the inner nucleus every Gyr due to \nMS+WD collisions alone. 
\n\nThe above back-of-the-envelope calculation suggests that MS+WD collisions alone cannot supply \nenough gas to form stars. This is because the time required to accumulate enough gas to form an \nactual disk is much longer than the gas cooling time (of order a few to a few hundred \nyears, for either optically thick or thin disks), which must be less than roughly 3 times the local dynamical \ntime-scale \\citep{nayakshin06,levin07,chang07}. Thus, if star formation were to occur only from gas supplied \nby MS+WD collisions, then it should only occur after nearly a Hubble Time. Given that stellar \nmass-loss in the surrounding old population should likely supply gas to the inner nucleus at a comparable \nor even higher rate \n\\citep[e.g.][]{chang07}, and the fact that the gas must not be accreted by the central SMBH before it \nfragments to form stars, we conclude that, while MS+WD collisions could contribute non-negligibly to the \ngas supply in the inner nucleus, this mechanism seems by itself insufficient to form stars and an \nadditional source of gas is needed.\n\nAs shown in Section~\\ref{ehbform}, nearly the entire RGB envelope must be stripped to significantly affect the \nphotometric appearance of an RGB star. Given that more than one collision is typically required to strip an RGB of its \nenvelope \\citep[e.g.][]{bailey99,dale09,amaroseoane14}, the \nderived rate for a given RGB star to encounter other MS stars is too low for collisions to contribute significantly \nto the observed paucity of giants (as shown by the dotted red line in Figure~\\ref{fig:fig3}). Assuming instead an \nisothermal density profile that starts to diverge near the SMBH, \nonly in the inner $\\lesssim$ 0.1 pc could the rate become sufficiently high for any given RGB star to encounter more \nthan a single MS star within $\\lesssim$ 1 Gyr (i.e. the typical lifetime of an RGB star). 
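The per-star encounter rates invoked here can be sanity-checked with the standard $n\Sigma v$ estimate including gravitational focusing. This is a simplified stand-in for Equation~\ref{eqn:gamma10}, and the parameter values below (density, dispersion, radii, masses) are illustrative assumptions rather than values taken from the paper:

```python
import math

PC_M   = 3.0857e16   # metres per parsec
RSUN_M = 6.957e8     # solar radius in metres
MSUN   = 1.989e30    # solar mass in kg
G_SI   = 6.674e-11   # m^3 kg^-1 s^-2
GYR_S  = 3.156e16    # seconds per Gyr

def collision_rate_per_gyr(n_pc3, v_kms, r_p_rsun, m_tot_msun):
    """Gamma = n * pi r_p^2 * v * (1 + v_esc^2 / v^2): the standard n-Sigma-v
    rate with gravitational focusing, per star, in collisions per Gyr."""
    r_p = r_p_rsun * RSUN_M
    v = v_kms * 1e3
    v_esc2 = 2 * G_SI * m_tot_msun * MSUN / r_p  # escape speed^2 at contact
    n = n_pc3 / PC_M**3                          # number density in m^-3
    return n * math.pi * r_p**2 * v * (1 + v_esc2 / v**2) * GYR_S

# Illustrative MS+RGB case: n ~ 1e6 pc^-3, sigma ~ 100 km/s, r_p ~ 10 Rsun,
# combined mass ~ 2 Msun  ->  ~0.1 collisions per RGB star per Gyr.
print(collision_rate_per_gyr(1e6, 100, 10, 2.0))
```

For these assumed values the estimate gives of order 0.1 collisions per RGB star per Gyr, consistent with the statement that a given RGB star encounters at most of order one MS star within its $\lesssim$ 1 Gyr lifetime outside the innermost region.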
\nAlthough we expect \nBH+RGB collisions to be more effective at stripping the RGB envelope on a per collision basis \\citep{dale09}, \nwe find that the rate of BH+RGB collisions is likely too low for more than a handful of RGB stars to have been \nfully stripped. We conclude that it seems difficult to account for any missing RGB stars in the Galactic Centre \nvia collisions. \n\nThis is consistent with the general picture described in \\citet{merritt10}, which suggests that the stellar density \nof the underlying (not yet observed) old population traces that of the observed RGB stars, and has always been \nlow. In the absence of star formation, a low initial density remains low for a long time, since the relaxation time is \nvery long. Figures 4 and 5 in \\citet{merritt10} show that a model with a core can reproduce the observed number \ncounts of RGB stars in the MW. Our fiducial model for the MW also has a core, which is partly responsible for the \nlow predicted numbers of collisions involving RGB stars. Thus, the low observed density of RGB stars could be \nconsistent with being a natural consequence of a low (initial) density in the underlying old population. This picture is \nconsistent with the low rate of RGB collisions found here, which would not be able to account for any inferred \npaucity of RGB stars.\n\nTo summarize, the predicted rates of single-single collisions are too low to have significantly affected the \nphotometric appearance of the MW NSC. This is the case for stars belonging to any evolutionary stage. In particular, \nMS+MS collisions should have produced $\\lesssim$ 1\\% of the total stellar mass in collision products after a Hubble \ntime. Collisions involving individual RGB stars occur too infrequently for the RGB envelope to have been affected \nsignificantly, and any paucity of RGB stars to be observed. 
The rate of single-binary collisions, on the other hand, is \nsufficiently high that very few binaries should remain at the present day, if any. Finally, the rate of MS+WD collisions \nis insufficient to supply enough gas to form stars and, while this mechanism for gas accumulation could contribute \nnon-negligibly to a single burst of star formation, an additional source of gas is also needed.\n\n\\subsection{M31} \\label{m31}\n\nThe M31 nucleus is a distinct stellar system at the centre of its host late-type spiral galaxy, with a surface brightness profile that \nrises significantly above the underlying galaxy light at r $<$ 10 pc. The nucleus shows an asymmetric double-lobed \nstructure within r $<$ 3 pc of the central SMBH, which is consistent with the diffuse eccentric disk model \nof \\citet{tremaine95}. At these radii, the nucleus consists of an old stellar population, but at even smaller radii (r $<$ 0.6 pc; \nrelative to the photocentre of the surrounding bulge component) there exists a UV-bright cluster of blue stars whose\norigins are unknown. Unlike the younger and hotter O- and B-stars observed in our Galactic Centre, these \nappear to be A-type stars \\citep{bender05,lauer12}. Below, we elaborate further on the details of the M31 nucleus.\n\nAt the centre of M31, there lurks an SMBH of mass $\\sim$ 1.4 $\\times$ 10$^8$ M$_{\\odot}$ \n\\citep{dressler84,dressler88,kormendy88,richstone90,kormendy99,bender05}. \nThe central blue cluster, called P3, is surrounded by two over-densities of stars, called P1 and P2 \n\\citep{lauer93}, which reside \non either side of P3 with a separation of $\\sim$ 1.8 pc \\citep{bender05}. These \ntwo ``lobes'' are distinct from P3 both in terms of their stellar content and \nkinematics \\citep[e.g.][]{lauer12}. P1 and P2 are much redder in \ncolour, while P3 contains a significant ultraviolet excess \n\\citep[e.g.][]{nieto86,king92,king95,lauer98,brown98}. 
A velocity dispersion of $\\sim$ 250 km\/s \nhas been measured in P2 along the same line of sight to P3, with a maximum \nof 373 $\\pm$ 48 km\/s on the anti-P1 side of the central blue cluster \\citep{bender05}. \nUsing Equation~\\ref{eqn:rinf}, the mean velocity dispersion gives an influence \nradius for the central SMBH of $\\sim$ 9.6 pc. P3, on the other hand, has the highest \nvelocity dispersion measured to date, with \na central dispersion of 1183 $\\pm$ 200 km\/s \\citep{bender05}.\n\nThe most likely explanation for the double-lobed structure of P1+P2 is \nan eccentric disk viewed in projection. This was originally proposed by \n\\citet{tremaine95}, who argued that P1 and P2 are unlikely to be two distinct \nstar clusters caught during the final stages of a merger, since the merger would \noccur within $\\lesssim$ 10$^8$ years by dynamical friction \\citep{lauer93,emsellem97}. \nTo reconcile this, \n\\citet{tremaine95} proposed that both nuclei or ``lobes'' are part of the same eccentric \ndisk of stars. The observations can be explained if the brighter lobe, P1, is located \nat a farther distance from the central SMBH and is the result of stars lingering near \napocentre \\citep[e.g.][]{statler99,kormendy99,bacon01}. Conversely, the fainter lobe, P2, \ncan be accounted for if it corresponds \napproximately to pericentre and the disk density increases toward the central SMBH. Hence, \nthe SMBH dominates the central potential, such that stars in the surrounding disk \nshould follow approximately Keplerian orbits.\\footnote{If the disk has a mass $\\gtrsim$ 10\\% that of the SMBH, \nthen self-gravity is required to keep the disk aligned \\citep{statler99}.} \n\\citet{peiris03} later refined the eccentric disk model, taking advantage of more recent \nground-based spectroscopy to help constrain their improved model. 
The models were \nused to predict the kinematics that should be observed via the Ca triplet in HST spectra \nof P1+P2, and the model predictions are in excellent agreement with the \ndata \\citep{bender05,lauer12}.\n\nOur collision rate estimates in Section~\\ref{collisions} are based on alterations to the \nformulation provided in \\citet{goldreich04}, originally derived to treat collisions within \nprotoplanetary disks. This approach can be almost directly applied in M31, \nsince the total mass of the Tremaine disk is M$_{\\rm d}$ $\\sim$ 3 $\\times$ 10$^7$ M$_{\\odot}$ \n\\citep{bender05} (assuming all 1.0 M$_{\\odot}$ stars and a mass-to-light ratio of \n5.7, which is appropriate for a bulge population; \\citealt{tremaine95}). This is nearly \nan order of magnitude smaller than the mass of the central SMBH. Hence, here we \ncan ignore any self-gravity within the disk \\citep[e.g.][]{statler99,emsellem07}.\n\nWe use Equation~\\ref{eqn:gamma10} to calculate the collision rates in M31 with $\\rho =$ 0, \nappropriate to a Keplerian nuclear disk (as in the MW, we transition smoothly between \nthese formulae as gravitational focusing becomes significant). The disk has a scale length and \nheight of, respectively, a $=$ 1.8 pc and h $=$ 0.7a.\\footnote{The M31 nuclear disk is observed \nto satisfy the relation 1 - h\/a $>$ 0.3 \\citep{lauer93}.} We assume a constant surface mass \ndensity $\\Sigma =$ M$_{\\rm d}$\/($\\pi$a$^2$), where M$_{\\rm d} =$ 3 $\\times$ 10$^7$ M$_{\\odot}$ \n\\citep{bender05}. This is because, as shown in Figure 17 of \\citet{lauer98}, the central surface \nbrightness profile in M31 cannot be described by a simple analytic function. \nCritically, this neglects the torus-like structure of the M31 nucleus, and over-estimates the surface \nmass density at small radii close to the SMBH (ignoring the inner blue disk). 
However, as we will show, \nthe collision rates are sufficiently high in this region that collisions would have transformed the inner \nnucleus over the last few Gyrs. Thus, the absence of a significant background of old stars in the inner \nnucleus at present is not necessarily indicative of an absence in the past. Consequently, we assume \nthat the presently-observed torus-like structure was once a diffuse disk extending further in toward the \ncentral SMBH, and include any collision products calculated to have occurred there in our estimates. However, \nwe note that if this assumption is incorrect, and the presently observed torus was always present, then \nthere is a true inversion in the stellar density at small radii and hence the collision rates here would be \nnegligible, dropping to zero at small r instead of continually rising all the way down to r $=$ 0, as shown \nin Figure~\\ref{fig:fig3}.\n\nAs in the MW, the collision rates in M31 are sufficiently high that all six types of collision products \nshould be present in non-negligible numbers over the entire extent of the disk (i.e. $\\gtrsim$ 2 pc), as \nillustrated in Figure~\\ref{fig:fig4}. The radial dependence of the collision rate is weak beyond $\\gtrsim$ 1pc, \nbut becomes more significant in the inner nucleus. \nSpecifically, the predicted collision rates at $\\sim$ 0.1 pc should outweigh those at $\\sim$ 1 pc by $\\lesssim$ an \norder of magnitude, for all six types of collision products, as shown in Figure~\\ref{fig:fig4}. \n\nOnly in the inner $\\lesssim$ 0.1 pc is the rate of MS+RGB collisions ever sufficiently high for individual RGB stars \nto have undergone enough collisions ($\\sim$ 100) over 100 Myr to unbind most of the RGB envelope, and \nhence to cause an observed paucity of RGB stars. This is the case for MS+RGB collisions alone, however, \nsince multiple collisions involving the same main sequence star and other MS stars are less common. 
\nThis suggests that any BSs present in M31 are likely only slightly more massive than the MS turn-off, \nand hence should appear as A-type stars, as observed for the inner blue disk \\citep{lauer12} (and not \nO- and B-type stars, unlike in M33 and M32; see below).\n\nThe numbers of 1+2 collisions over a $\\gtrsim$ 100 Myr interval should \nexceed the total numbers of binaries in M31, for our assumed binary fraction of f$_{\\rm b} =$ 0.01 and \nconsidering only hard binaries. The velocity dispersion in M31 is so high that the hard-soft boundary \ncorresponds to the binary components being in contact. Hence, for small impact parameters, the \nprobability of a direct collision or merger occurring during any direct 1+2 encounter involving a hard binary \nis approximately unity \\citep[e.g.][]{leigh12}. Conversely, the probability of dissociation during encounters \ninvolving soft binaries is very high. Thus, even (most) contact binaries should not have survived until the \npresent-day, and our estimate for the binary fraction in the M31 nucleus could be an over-estimate of the true \nbinary fraction. \n\nMS+WD collisions should liberate on the order of \n10$^5$ M$_{\\rm \\odot}$ in gas every $\\sim$ 1 Gyr, which can subsequently be used to form \nyoung (blue) stars. This is because on the \norder of $\\sim$ 1.1 $\\times$ 10$^4$ MS+WD collisions happen every 100 Myr (see Table~\\ref{table:one}). \nHence, following the same assumptions as in the preceding section for the MW, if \n1.1 $\\times$ 10$^5$ MS+WD collisions occur per 1 Gyr, this predicts \n$\\sim$ 8.8 $\\times$ 10$^4$ M$_{\\odot}$ $\\sim$ 10$^5$ M$_{\\odot}$ in gas should be supplied to the \ninner nucleus every Gyr due to MS+WD collisions alone. In M31, however, the critical mass needed for \nfragmentation is 10$^4 <$ M$_{\\rm crit}$\/M$_{\\odot} <$ 10$^5$ for a distance from the \ncluster centre 0.1 $\\le$ r\/pc $\\le$ 3 (see Figure 7 in \\citet{chang07}). 
As in the MW, the \ngas accumulation times needed to satisfy this critical mass requirement are much longer \nthan the gas cooling time. Thus, these results suggest that star formation should occur \non the order of every $\\sim$ 1 Gyr in M31, if the only supply of gas to the inner nucleus is \nMS+WD collisions. Once again, the rate of mass loss due to stellar evolution is, very roughly, \ncomparable to the rate at which gas is supplied by MS+WD collisions. Hence, the actual time-scale \nfor disk fragmentation could be much shorter than this simple calculation suggests, and \ngas from stellar evolution in the surrounding old population should contribute non-negligibly to the \ntotal gas mass available for fragmentation.\n\nWe note that, in M31, the central velocity dispersion rises toward r $=$ 0 and can even exceed \n1000 km\/s interior to $\\lesssim$ 0.1 pc. This should decrease the rates of MS+MS, MS+WD, MS+NS \nand BH+RGB collisions relative \nto what is shown in Figures~\\ref{fig:fig3} and~\\ref{fig:fig4}, by reducing the significance of gravitational \nfocusing. Additionally, given \nsuch high relative velocities at impact, some direct MS+MS collisions could \ncompletely unbind or ablate the collision product, leaving behind a puffed-up cloud of gas and \ndust in its place \\citep{spitzer66,spitzer67}. Similarly, grazing or off-centre collisions could \n(eventually) leave behind a very rapidly rotating collision product. In any event, the collisional velocities \nat impact are sufficiently high in M31 very close to the central SMBH that gas can be supplied to the \nNSC not only via MS+WD and WD+RGB collisions, but also via \nMS+MS and MS+RGB collisions (in addition to mass loss due to stellar evolution). Additionally, \nwe caution that, if a non-negligible fraction of MS+MS collisions ablate the product completely, then \nwe might not expect any central rise in the numbers of blue stragglers, but rather a dip in the numbers. 
\nAlternatively, if the products of MS+MS collisions are not ablated, then the deposited kinetic \nenergy at impact will serve to puff up the collision product, reducing the time-scale for subsequent \ncollisions to occur. Immediately after a collision, the product shoots up the Hayashi track before \ncontracting back down to the main-sequence \\citep[e.g.][]{sills97,sills01}. Hence, at least temporarily, \nthe collision product \nwill be brighter and redder than a normal MS star of the same mass. Interestingly, the photometric \nappearance of the object observed at the nominal centre of P3, called S11, is qualitatively consistent \nwith this general picture, since it is the brightest and reddest object in the central blue cluster \\citep{lauer12}.\n\nTo summarize, the predicted rate of MS+MS collisions is sufficiently high in the inner $\\lesssim$ parsec \nof the M31 nucleus that most, if not all, of the mass of the inner blue disk in M31 could be composed of MS+MS \ncollision products, or blue stragglers. Importantly, these collisions would have dissipated orbital angular \nmomentum and, given the disk-like nature of the M31 NSC, the thin disk-like structure of the inner blue excess \nis in general consistent with a collision origin. Note that the additional concentration of collision products within \nthe plane of the surrounding disk is unique to the M31 nucleus, relative to the other NSCs in our sample. \nWe caution that MS+MS collisions could become ablative in the inner \n$\\lesssim$ 0.1 pc due to the very high velocity dispersion here, which predicts impact velocities with enough \nkinetic energy to exceed the binding energy of any collision product. This would contribute to the total gas supply \nin the inner M31 nucleus, but likely not enough to form the inner blue disk in a burst of star formation. 
On the other \nhand, MS+WD collisions, which should be largely ablative nearly independent of the impact velocity, could \ncontribute significantly to the total gas reservoir in the inner nucleus. Together with stellar evolution-induced \nmass loss in the surrounding old population, this could supply enough gas to form stars in much less than a \nGyr. Finally, the rate at which individual RGB stars collide with other objects is sufficiently high in the inner \n$\\lesssim$ 0.1 pc that enough collisions could have occurred to completely unbind the RGB \nenvelope, causing an observed paucity of RGB stars in the inner M31 nucleus.\n\nWith the above results in mind, we note that \\citet{yu03} found from a similar analysis of analytic \ncollision rates calculated for the M31 nucleus that collisions alone cannot account for the blue excess observed \nat r $<$ 0.6 pc. We note, however, that \\citet{yu03} took the observed present-day density of the old \npopulation at face value, and did not account for the possibility that the Tremaine disk once extended inward \nto smaller radii. Moreover, \\citet{yu03} did not have the more recent results of \\citet{bender05} and \\citet{lauer12}, \nwhich provide, respectively, better constraints for the properties of the blue cluster P3 and better spatial \nresolution for the entire inner M31 nucleus. Overall, our estimates for the collision rates in M31 are in reasonable \nagreement with those calculated by \\citet{yu03}, but the additional information provided by more recent \nobservational studies has allowed us to perform a slightly more focused analysis based on a more up-to-date \nunderstanding of the structure of the inner nucleus. 
This accounts for the different conclusions reached by \n\\citet{yu03} for the M31 nucleus, relative to those presented in this paper.\n\n\\subsection{M32} \\label{M32}\n\nThe nearby elliptical galaxy M32 is home to the smallest of the three SMBHs known to exist \nin the Local Group, with a mass $\\lesssim$ 2.5 $\\times$ 10$^6$ M$_{\\odot}$ \n\\citep{tonry84,verolme02,vandenbosch10}. The properties and even detection of this SMBH are, \nhowever, controversial \\citep{merritt13}. Surrounding this putative central SMBH is a rotating \ndisk of stars with a density $>$ 10$^7$ M$_{\\odot}$ pc$^{-3}$ at r $<$ 0.1 pc \n\\citep{walker62,lauer98}. Beyond this, an additional stellar component is \npresent, with an effective radius of $\\sim$ 6 pc and a total mass \n$\\sim$ 3 $\\times$ 10$^7$ M$_{\\odot}$ \\citep{kormendy99,graham09}. This represents \n$\\sim$ 10\\% of the total galaxy mass, such that the NSC in M32 contains a much larger \nfraction of the total galaxy luminosity than is typical for early-type galaxies \n\\citep[e.g.][]{cote06}. The stellar populations characteristic of the central NSC are younger \nthan is normal for the rest of the galaxy, with a mean age of $\\sim$ 4 Gyr in the central \nnuclear region and rising to $\\sim$ 8 Gyr at larger radii \\citep{worthey04,rose05,coelho09}. \nEven though the nucleus is a two-component system, with a dominant or primary disk component \nembedded in a more spherically-distributed pressure-supported secondary component, there is \nno observed break in the properties of these stellar populations as a function of radius \\citep{seth10}. \n\nThe typical velocity dispersion in the M32 nucleus is $\\sim$ 60 km\/s, but rises to $\\sim$ 120 km\/s \nvery close to the central SMBH \\citep{seth10}. Using Equation~\\ref{eqn:rinf}, this yields an \ninfluence radius for the central SMBH of only $\\lesssim$ 0.7 pc. 
Thus, we use Equation~\\ref{eqn:gamma10} \nwith $\\Sigma =$ 0 to calculate the collision rates in M32, appropriate to a dynamically hot pressure-supported \nnuclear cluster, and transition smoothly between these formulae as gravitational focusing becomes important. \nAs in the MW, however, we correct for the influence of the central SMBH by replacing the isothermal velocity dispersion \nwith the local velocity dispersion given by Equation~\\ref{eqn:sigloc}, with $\\sigma_{\\rm 0} =$ 60 km\/s and \nM$_{\\rm BH} =$ 2.5 $\\times$ 10$^6$ M$_{\\odot}$. \n\nFor the surface mass density profile, we adopt the best-fitting solution to \nthe V-band surface brightness profile from \\citet{lauer98}, which is a Nuker-law fit of the form:\n\\begin{equation}\n\\label{eqn:sbm32}\n\\Sigma(r) = 2^{(\\beta-\\gamma)\/\\alpha}{\\Upsilon}\\Sigma_{\\rm 0}\\Big( \\frac{r_{\\rm b}}{r} \\Big)^{\\gamma}\\Big[1 + \\Big( \\frac{r}{r_{\\rm b}} \\Big)^{\\alpha}\\Big]^{(\\gamma-\\beta)\/\\alpha},\n\\end{equation} \nwhere $\\Upsilon =$ 2.0 is the V-band mass-to-light ratio (in M$_{\\odot}$\/L$_{\\odot}$), $\\alpha =$ 1.39, \n$\\beta =$ 1.47, $\\gamma =$ 0.46, r$_{\\rm b} =$ 0\".47 and $\\Sigma_{\\rm 0}$ is the central surface mass \ndensity in L$_{\\odot}$pc$^{-2}$, calculated from the V-band surface brightness $\\mu_{\\rm 0} =$ 12.91 \ngiven in \\citet{lauer98} assuming a distance to M32 of 770 kpc. In calculating the collision rates using \nEquation~\\ref{eqn:gamma10}, we assume $\\Sigma$ $=$ 0 (since there is no Keplerian disk component). Assuming \nspherical symmetry, we can obtain \nthe stellar density distribution from the observed surface brightness profile using Equation 3.65a in \\citet{merritt13}:\n\\begin{equation}\n\\label{eqn:rhofromsb}\n\\rho(r) = -\\frac{1}{\\pi} \\int_r^{\\infty} \\frac{d\\Sigma}{dR} \\frac{dR}{\\sqrt{R^2 - r^2}},\n\\end{equation}\nwhich is independent of any assumptions for the gravitational potential. 
Equation~\\ref{eqn:rhofromsb} is the \nclassical Abel inversion of a projected surface brightness profile \\citep[e.g.][]{bracewell65}. Note that, in M32, the break \nradius corresponds almost exactly to the outer radial limit we adopt for the calculated collision rates, \nor r$_{\\rm b} \\sim$ 2 pc. Inside the break radius, the approximation $\\rho$(r) $\\sim$ $\\Sigma$(r)\/r is \nreasonable for power-law galaxies, where \n$\\Sigma$(r) is the observed surface brightness profile, as given by Equation~\\ref{eqn:sbm32}. \n\n\nAs shown in Figure~\\ref{fig:fig3}, the predicted collision rates for a particular object show a rise toward \nr $=$ 0, increasing by $\\lesssim$ two orders of magnitude over the inner $\\sim$ 1 pc. For WD+RGB and 1+2 collisions, \nhowever, the central increase is much less significant. Upon integrating \nnumerically, the predicted numbers of collision products remain approximately constant with \nincreasing distance from the galaxy centre, as shown in Figure~\\ref{fig:fig4}. We see only a slight rise in \nthe numbers of collision products at very small radii $\\lesssim$ 0.1 pc, similar to what is predicted for M31. Thus, \nwe do not expect a significant central concentration of blue stars in M32 (albeit perhaps a mild one at \n$\\lesssim$ 0.1 pc that would be difficult to identify observationally, at least currently), assuming a collisional \norigin (ignoring any possible mass segregation of collision products). Interestingly, M32 is the only \ngalaxy in our sample for which this is the case, while also being the only galaxy in our sample \nthat does not show any observational evidence for a central blue excess. 
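Equations~\ref{eqn:sbm32} and~\ref{eqn:rhofromsb} can be evaluated with a short numerical sketch. The substitution $R = r\cosh u$ removes the integrable singularity in the Abel integral; $\Sigma_{\rm 0}$ is left as a free normalisation (the photometric conversion from $\mu_{\rm 0}$ is omitted), and a Plummer profile, whose deprojection is known analytically, is used only as a cross-check:

```python
import math

def nuker_sigma(R, sigma0=1.0, rb=1.75, alpha=1.39, beta=1.47, gamma=0.46):
    """Nuker-law surface density, Equation (sbm32), with Sigma0 a free
    normalisation; rb = 0".47 at 770 kpc corresponds to ~1.75 pc."""
    return (2**((beta - gamma) / alpha) * sigma0 * (rb / R)**gamma
            * (1 + (R / rb)**alpha)**((gamma - beta) / alpha))

def abel_invert(sigma, r, du=1e-3, umax=20.0):
    """Equation (rhofromsb): rho(r) = -(1/pi) int_r^inf (dSigma/dR) dR / sqrt(R^2 - r^2).
    Substituting R = r cosh(u) removes the sqrt singularity, leaving
    rho(r) = -(1/pi) int_0^inf Sigma'(r cosh u) du (midpoint rule below)."""
    def dsigma(R, h=1e-5):
        # central finite difference with a relative step
        return (sigma(R * (1 + h)) - sigma(R * (1 - h))) / (2 * R * h)
    total, u = 0.0, 0.5 * du
    while u < umax:
        total += dsigma(r * math.cosh(u)) * du
        u += du
    return -total / math.pi

# Cross-check on a Plummer model:
# Sigma(R) = (1/pi)(1+R^2)^-2  <->  rho(r) = (3/4pi)(1+r^2)^-5/2
plummer = lambda R: (1 / math.pi) * (1 + R * R)**-2
print(abel_invert(plummer, 0.5), 3 / (4 * math.pi) * (1 + 0.25)**-2.5)
```

Applied to the Nuker profile, the inversion reproduces the $\rho(r) \sim \Sigma(r)/r$ scaling inside the break radius noted above.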
\n\nBeyond $\\gtrsim$ 0.1 pc, the integrated numbers of collision products are high \nrelative to the other galaxies in our sample (ignoring M31); the predicted \nnumbers of MS+MS, MS+WD and MS+RGB collisions produced over a Hubble time should be of order a few percent \nof the total number of stars \nin the M32 nucleus, as shown in Table~\\ref{table:one}. This naively suggests that the observed light distribution in \nthe M32 NSC, and hence its stellar mass function, could have been non-negligibly affected by collisions. However, the \nMS+MS collision rates for a given MS star are too low for any massive \nO- and B-type stars to have formed from multiple collisions. And yet, the collision rates are high enough that collisions \ncould be connected with the lack of recent star formation in M32. Collisions could perhaps act to inhibit star \nformation if, for example, protostars are struck sufficiently early on in their formation that \nthe protostellar embryos are destroyed.\n\nAs in both the MW and M31, the rate of 1+2 collisions in the M32 \nnucleus is sufficiently high that most binaries should have been destroyed by the present day (since the \nhard-soft boundary is at only 0.04 AU), \nand a total mass in gas of order $\\sim$ 10$^3$ M$_{\\odot}$ should be supplied to the inner nucleus \nevery $\\sim$ 100 Myr. This is because on the \norder of $\\sim$ 1.4 $\\times$ 10$^3$ MS+WD collisions happen every 100 Myr (see Table~\\ref{table:one}). \nHence, following the same assumptions as in the preceding sections, if \n1.4 $\\times$ 10$^4$ MS+WD collisions occur per 1 Gyr, this predicts \n$\\sim$ 1.1 $\\times$ 10$^4$ M$_{\\odot}$ $\\sim$ 10$^4$ M$_{\\odot}$ in gas should be supplied to the \ninner nucleus every Gyr due to MS+WD collisions alone. This is enough gas for star formation to occur, \nignoring any other potentially complicating effects. 
More generally, it predicts that non-negligible amounts of gas should be present in the nucleus at any given time, roughly independent of any recent star formation or the details of gas heating/cooling.

Additionally, as shown in Figure~\ref{fig:fig3}, the rates of MS+RGB collisions are sufficiently high only in the inner r $\lesssim$ 0.1 pc for any given RGB star to undergo multiple (up to $\sim$ 100 or more) collisions within 100 Myr. This suggests that only in the very inner nucleus could RGB stars in M32 be fully stripped and appear as EHB stars. This predicts a paucity of giants in the very inner $\lesssim$ 0.1 pc of the M32 nucleus.

In summary, the single-single collision rates are sufficiently high that the numbers of collision products produced over a Hubble time should be of order a few percent of the total number of stars in the M32 nucleus. These rates are roughly independent of distance from the cluster centre, apart from a modest but sharp rise near the origin. Hence, only in the inner $\lesssim$ 0.1 pc of the nucleus could the collision rates ever be sufficiently high to create a (significant) blue excess via MS+MS collisions. Individual RGB stars could have undergone multiple collisions only within r $<$ 0.1 pc, with sufficient frequency for their envelopes to become fully stripped and their photometric appearance significantly affected. Finally, the rates of MS+WD collisions are sufficiently high to supply significant ($\lesssim$ 10$^4$ M$_{\odot}$ every Gyr) amounts of gas to the nucleus. This predicts that non-negligible quantities of gas should be present in the M32 nucleus at present (assuming the gas has not already formed stars; but see Section~\ref{sf}).

We note that \citet{yu03} obtained similar results to those we report here, namely that collisions alone are unlikely to produce any colour gradients in M32, consistent with observations.
However, we point out that the collision rates are sufficiently high for a significant fraction of stars in the cluster to have undergone at least one collision over a Hubble time. This could significantly impact the stellar mass function and hence the luminosity profile. The weak dependence of the collision rate on clustercentric distance suggests that collisions would alter the overall colour of the M32 nucleus roughly uniformly (with only a slight central blue excess). This could directly affect any age determination assigned to the M32 nucleus from stellar population synthesis models (likely causing an under-estimate). Only in the inner $\lesssim$ 0.1 pc could the collision rates ever become sufficiently high for individual objects to undergo multiple collisions. Here alone, our results predict a paucity of RGB stars. Thus, similar to \citet{yu03}, we conclude that collisions are unlikely to cause the appearance of an age gradient in M32.

\subsection{M33} \label{m33}

M33 is a nearby spiral galaxy lacking a significant bulge component. Intriguingly, this could be connected with the lack of any detected SMBH in M33. \citet{gebhardt01} placed an upper limit of 1500 M$_{\odot}$ on the mass of any SMBH that could reside in the nucleus, using three-integral dynamical models fit to Hubble Space Telescope WFPC2 photometry and Space Telescope Imaging Spectrograph spectroscopy. The NSC at the centre of M33 is extremely compact, with a central density approaching that of M32 \citep{kormendy93,lauer98}. The central velocity dispersion is only 21 km/s, and the central mass density reaches at least 2 $\times$ 10$^6$ M$_{\odot}$ pc$^{-3}$ \citep{lauer98}. Assuming an SMBH mass of 1500 M$_{\odot}$, Equation~\ref{eqn:rinf} yields an influence radius of only 0.01 pc. Importantly, this is only an upper limit for the SMBH mass; there could be no SMBH in M33 at all.
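The influence-radius estimate above is simple arithmetic; assuming the standard definition r$_{\rm inf} =$ GM$_{\rm BH}/\sigma^{2}$ (our assumption here, since Equation~\ref{eqn:rinf} is defined earlier in the paper), the quoted number can be reproduced directly:

```python
G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def influence_radius(m_bh_msun, sigma_kms):
    """Sphere-of-influence radius r_inf = G * M_BH / sigma^2, in pc.

    Assumed form of Eq. (rinf); the equation itself is defined earlier in the paper.
    """
    return G * m_bh_msun / sigma_kms**2

# M33 upper limit: M_BH <= 1500 Msun with sigma ~ 21 km/s gives r_inf ~ 0.015 pc,
# i.e. of order 0.01 pc as quoted above.
r_inf_m33 = influence_radius(1500.0, 21.0)
```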
Indeed, the presence of an additional strong X-ray source \citep{long96,dubus99} in the nucleus, so close to the putative SMBH, argues against the presence of a massive central BH, since SMBHs are very effective at tidally disrupting X-ray binaries and ejecting their compact remnant companions from the NSC, or even from the host galaxy \citep{leigh14,giersz15}.

We use Equation~\ref{eqn:gamma10} with $\Sigma =$ 0 to calculate the collision rates in M33, as appropriate for a dynamically hot, pressure-supported nuclear cluster. We note that in M33 we find v$_{\rm esc} > \sigma$ for all types of collisions, so gravitational focusing must always be taken into account. For the surface mass density profile, we again adopt the best-fitting solution to the V-band surface brightness profile from \citet{lauer98}, which includes an analytic core in this case:
\begin{equation}
\label{eqn:sbm33}
\Sigma(r) = {\Upsilon}\Sigma_{\rm 0}\Big[1 + \Big( \frac{r}{a} \Big)^{2}\Big]^{-0.745},
\end{equation}
where $\Upsilon =$ 0.4 is the V-band mass-to-light ratio (in M$_{\odot}$/L$_{\odot}$), a $=$ 0.1 pc is the approximate half-power radius, and $\Sigma_{\rm 0}$ is the central surface mass density in L$_{\odot}$pc$^{-2}$, calculated from the V-band surface brightness $\mu_{\rm 0} =$ 10.87 given in \citet{lauer98}, assuming a distance to M33 of 785 kpc. Although Equation~\ref{eqn:sbm33} is consistent with the observed surface brightness profile in M33, we note that \citet{lauer98} also considered a model with an inner cusp, and \citet{carson15} adopted Sersic models.

As in the preceding section, when calculating the collision rates using Equation~\ref{eqn:gamma10}, we assume $\Sigma =$ 0 (since there is no Keplerian disk component). We adopt the stellar density distribution provided in Equation 2 of \citet{merritt01b}.
This yields results that are nearly identical to those we find upon integrating over the observed surface brightness profile of the nucleus. That is, assuming spherical symmetry, we can obtain the stellar density distribution from the observed surface brightness profile using Equation~\ref{eqn:rhofromsb}, as before, requiring that the luminosity density at r $=$ 1 pc is 10$^6$ L$_{\odot}$ pc$^{-3}$. Note that, in M33, the core radius corresponds almost exactly to the inner radial limit we adopt for the calculated collision rates, or a $\sim$ 0.1 pc. Outside the core radius, the surface brightness profile approximately follows a power law, and the approximation $\rho$(r) $\sim$ $\Sigma$(r)/r is reasonable.

Figure~\ref{fig:fig3} illustrates that the predicted collision rates for a particular object decrease rapidly with increasing distance from the galaxy centre, dropping by about three orders of magnitude over the inner $\sim$ 1 pc. In Figure~\ref{fig:fig4}, we see similar behaviour: the total rates for any pair of objects decrease monotonically with increasing distance from the galaxy centre, dropping by about two orders of magnitude over the inner $\sim$ 1 pc. As shown in Table~\ref{table:one}, the predicted total numbers of MS+MS, MS+WD and MS+RGB collisions over a Hubble time are greater than the total number of stars in the central (i.e., the core) NSC. The number of 1+2 collisions is sufficiently high that almost every binary (if not all) should have undergone a direct encounter over the cluster lifetime. However, in general, a non-negligible number of binaries should survive in M33, given its low velocity dispersion of $\sim$ 21 km/s, which yields a hard-soft boundary of 0.6 AU.
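For orientation, the hard-soft boundary scales as a$_{\rm hs} \propto$ Gm/$\sigma^{2}$. The sketch below uses a Heggie-style convention and a mean stellar mass of 0.3 M$_{\odot}$; both are our illustrative assumptions rather than values adopted in the text:

```python
G_AU = 887.2  # G in AU (km/s)^2 / Msun, i.e. G*Msun/AU ~ 887 km^2/s^2

def hard_soft_boundary(m1, m2, m3, sigma_kms):
    """Heggie-style hard-soft boundary a_hs = G*m1*m2 / (m3 * sigma^2), in AU.

    m1, m2: binary component masses [Msun]; m3: typical perturber mass [Msun].
    Both the prefactor convention and the 0.3 Msun mean mass used below are
    illustrative assumptions, not values taken from the text.
    """
    return G_AU * m1 * m2 / (m3 * sigma_kms**2)

# M33 NSC: sigma ~ 21 km/s with ~0.3 Msun stars gives a_hs ~ 0.6 AU,
# consistent with the value quoted above.
a_hs_m33 = hard_soft_boundary(0.3, 0.3, 0.3, 21.0)
```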
This is very close to the hard-soft boundaries of typical Milky Way globular clusters \citep{harris96}, which are known to have binary fractions on the order of a few percent \citep[e.g.][]{sollima07,sollima08,milone12}.

A total mass in gas comparable to the total NSC mass in the core should be supplied to the inner nucleus every $\sim$ 1 Gyr due to ablating MS+WD collisions. This is because on the order of $\sim$ 4.9 $\times$ 10$^3$ MS+WD collisions happen every 100 Myr (see Table~\ref{table:one}). Hence, following the same assumptions as in the preceding sections, if 4.9 $\times$ 10$^4$ MS+WD collisions occur per Gyr, then $\sim$ 2.5 $\times$ 10$^4$ M$_{\odot}$ (i.e. of order 10$^4$ M$_{\odot}$) of gas should be supplied to the inner nucleus every Gyr due to MS+WD collisions alone. This is less than the total mass of the M33 nucleus by about an order of magnitude. It also implies that stars should have formed recently if the gas is able to become sufficiently dense to cool. As in M32, this further predicts that significant gas should be present in the nucleus at any given time, roughly independent of any recent star formation or the details of gas heating/cooling. Finally, the collision rates are never high enough in M33 for individual RGB stars to undergo multiple MS+RGB collisions within 100 Myr. Thus, we do not expect a paucity of RGB stars in the M33 nucleus.

To summarize, the photometric appearance of the M33 nucleus should also have been significantly affected by collisions; the total number of collision products should be of order $\lesssim$ 10\% of the total number of stars in the inner nucleus. An excess (by roughly an order of magnitude) of blue stars from MS+MS collisions is expected at r $\lesssim$ 0.1 pc. However, individual MS and RGB stars should not have undergone multiple collisions with any regularity.
Since of order $\sim$ 100 collisions are needed to fully strip the RGB envelope and significantly affect its photometric appearance, our results do not predict any observed paucity of RGB stars. Unlike the other nuclei in our sample, however, binary destruction due to single-binary encounters should be comparatively inefficient, due to the much lower velocity dispersion. This predicts a binary fraction of order a few percent, and possibly a large number of X-ray binaries due to exchange interactions involving hard binaries and stellar remnants.

\section{Discussion} \label{discussion}

In this section, we discuss the implications of our results for the origins of the centrally concentrated blue stars observed (or not) in the four Local Group galactic nuclei considered here and, more generally, for observing exotic or enigmatic stellar populations in galactic nuclei, including blue straggler stars, extreme horizontal branch stars, young or recently formed stars, X-ray binaries, etc. Our results suggest that collisional processes could contribute significantly to the observed blue excesses in M31 and M33, via several different channels. Below, we highlight the relevant physics for the development of improved theoretical models, which are needed for comparison to existing and future observational data.

\subsection{Young stars and/or recent star formation} \label{sf}

The presence of very blue stars in the inner nuclear regions is typically argued to be evidence for a recent (in situ) burst of star formation. This is because the timescale for such a cluster of young stars to inspiral into the nucleus via dynamical friction tends to exceed the age of the stars, for all but extremely small initial distances from r $=$ 0.
Many authors have proposed channels via which such in situ star formation might occur so close to a central SMBH \citep[e.g.][]{chang07,nayakshin06}.

Given the current state of the observations, we can neither rule out nor confirm an in situ star formation origin for any blue excesses observed in our sample, at least when considering individual nuclei. Upon consideration of the entire sample at once, however, more stringent constraints can perhaps be placed. For example, if the star formation occurs in a single burst, some fine-tuning would be required to produce 3 out of 4 galactic nuclei with recent star formation within $\lesssim$ 0.1 pc of the galactic centre at the current epoch. This is because, relative to the lifetimes of the older populations in these NSCs, the young blue stars would live for only a very short time. Thus, it is unlikely that all three nuclei were caught in the act of recent star formation, unless the process is continuous or episodic with a short recurrence time. If the process is indeed recurrent, then significant star formation must have previously occurred in the nucleus as well. It follows that the most recent burst would have to occur in the presence of a significant population of remnants orbiting within $\lesssim$ 0.1 pc. This predicts that a high mass-to-light ratio could accompany any inner blue excess, and that significant X-ray emission could be observable due to gas accretion onto these inner remnants (if gas is still being supplied to the inner nucleus; see below).

Interestingly, as shown in Section~\ref{app}, the rates of MS+WD collisions are sufficiently high that a significant and steady supply of gas could be delivered to the inner nuclei. This is because, unlike MS+MS collisions, collisions between MS stars and WDs do not require very high relative velocities to ablate the star \citep{shara77,shara78}.
The WD is sufficiently compact that, upon penetrating the star, it drives a shock wave through it, increasing the temperature in the outer layers by roughly an order of magnitude. This in turn triggers CNO burning, which liberates enough energy to unbind most of the envelope. The collision of a WD with a massive MS star is hence disruptive, with only a small part of the energy that unbinds the MS star coming from the kinetic energy of the non-degenerate star \citep{regev87}. Thus, MS+WD collisions can supply gas to the nucleus independently of its velocity dispersion, which sets the typical relative velocity at infinity for direct collisions. In the MW and M31, enough gas could be supplied to the inner NSC via MS/RGB+WD collisions to account for the mass observed in blue stars via star formation every $\lesssim$ 1 Gyr. In the inner $\lesssim$ 0.1 pc of M31 and M32, the rates of MS+WD collisions are highest, such that the recurrence time for star formation in this scenario should be considerably shorter. For comparison, Equation 8 in \citet{generozov15} gives the mass input from stellar winds, parameterized by the fraction $\eta$ of the stellar density being recycled into gas. With a lower limit of $\eta \ge$ 0.02 (and taking into account the star formation efficiency), our results suggest that the rate of gas supplied by ablative WD collisions could be comparable to the rate of gas supplied by stellar winds in the surrounding old population. This could represent a significant correction to steady-state models of gas inflow/outflow in galactic nuclei \citep[e.g.][]{quataert04,shcherbakov10,generozov15}.

We emphasize that MS+MS collisions should contribute little to the total gas supply in the nuclei in our sample. This is because the velocity dispersions are sufficiently low that the total kinetic energy at impact is only ever a small fraction of the binding energy of the collision product.
This is the case even upon consideration of the entire spectrum of possible relative velocities at impact, appropriate for a Maxwellian velocity distribution. Only in the inner $\lesssim$ 0.1 pc of the M31 nucleus does the velocity dispersion ever become sufficiently high for MS+MS collisions to ablate the collision product and contribute to the total gas reservoir.

If young stars are indeed formed from gas violently liberated during collisions, then, in a sample of NSCs with inner blue stars, we would expect a correlation between the (integrated) NSC collision rate and the total mass in blue/young stars. With this in mind, we can already say that M32 would be a significant outlier in such a relation, ignoring any overlooked effects that could preferentially inhibit star formation in the M32 nucleus (e.g. the M32 NSC is by far the densest in our sample, such that collisions could even serve to inhibit star formation). Additionally, this scenario predicts approximately solar abundances for such collisionally-derived young stars, without any significant helium enrichment (assuming that the collisionally-destroyed MS stars are members of the original old stellar population). This is because only a small fraction of the total mass actually undergoes enhanced CNO burning during the collision.

\citet{generozov15} recently calculated steady-state one-dimensional hydrodynamic profiles for hot gas in the immediate vicinity of an SMBH.
The authors provide an analytic estimate for the ``stagnation radius'' in their Equation 14, defined as the distance from the SMBH at which the radial velocity of the gas passes through zero:
\begin{equation}
\label{eqn:rstag}
r_{\rm s} = \frac{GM_{\rm BH}}{\nu(v_{\rm w}^2 + \sigma_{\rm 0}^2)}\Big( 4\frac{M(r < r_{\rm s})}{M_{\rm BH}} + \frac{13 + 8\Gamma_{\rm s}}{4 + 2\Gamma_{\rm s}} - \frac{3\nu}{2 + \Gamma_{\rm s}} \Big),
\end{equation}
where M(r $<$ r$_{\rm s}$) is the total stellar mass inside the stagnation radius, $\Gamma_{\rm s}$ is the two-dimensional power-law slope of the stellar light distribution (not to be confused with the collision rate), $\nu$ is the three-dimensional power-law slope of the gas inflow profile (defined by Equation 13 in \citealt{generozov15}) and v$_{\rm w}$ is an effective heating parameter describing the energy deposited by stellar winds, supernovae and BH feedback. For an old stellar population, we expect v$_{\rm w} \sim$ 100 km/s. Inside the stagnation radius gas flows inward, whereas outside this radius gas flows outward. In the MW and M31, the velocity dispersion is sufficiently high that the stagnation radius lies outside the SMBH sphere of influence, and beyond our outer limit of integration when calculating the integrated number of collision products (i.e. 2 pc). Thus, all of the gas liberated from MS/RGB+WD collisions in our calculations should flow quasispherically inward, until it circularizes at its angular momentum barrier. In M32, we expect r$_{\rm s} \gtrsim$ r$_{\rm inf}$, and the numerical solution of the 1D hydrodynamic equations gives r$_{\rm s} =$ 2 pc for v$_{\rm w} =$ 100 km/s (Aleksey Generozov, private communication).
In M33, assuming that no SMBH is present, the stagnation radius will be close to zero, since v$_{\rm w} \gg \sigma_{\rm 0}$, as can be seen by taking the limit M$_{\rm BH} \rightarrow$ 0 in Equation~\ref{eqn:rstag}. Numerical experiments confirm that r$_{\rm s} \ll$ 1 pc for v$_{\rm w} \sim$ 100 km/s (Aleksey Generozov, private communication). Hence, any (hot) gas liberated by collisions should be lost to a quasispherical outflow. Finally, we note that the preceding estimates for the stagnation radius assume that stellar winds dominate the energy injection rate. However, in the inner nuclei of M31 and M32, the energy injection due to collisions could dominate, causing an increase in v$_{\rm w}$ and a subsequent decrease in the stagnation radius.

In all nuclei in our sample, our results suggest that the 1+2 collision rate is sufficiently high that most stellar binaries should be driven to merge, be destroyed (via collisions) or be dissociated very early on in the cluster lifetime, compared to the age of an old stellar population ($>$ a few Gyr). It follows that most binaries \textit{currently} present could be very young (most likely formed via recent star formation). This predicts that the observation of young binaries in a galactic nucleus could be a smoking gun for recent star formation, since no other mechanism considered in this paper or presented in the literature should yield a bright blue star with a binary companion. Star formation, on the other hand, is expected to yield very high binary fractions for massive stars \citep[e.g.][]{sana12}. Provided that star formation occurred sufficiently recently, most of these binaries should still be present. Observationally, unresolved binaries should broaden the colour distribution (at a given magnitude) along the inferred main sequence, relative to a coeval population of single stars.
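The binary-broadening signature just described is simple photometric arithmetic: an unresolved pair carries the summed flux of its components, so an equal-mass pair sits $\sim$ 0.75 mag above the single-star sequence, while unequal pairs fill the region in between. A minimal sketch (illustrative only):

```python
import math

def combined_magnitude(m1, m2):
    """Apparent magnitude of an unresolved pair with component magnitudes m1, m2."""
    flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
    return -2.5 * math.log10(flux)

# An equal-mass unresolved binary is 2.5*log10(2) ~ 0.753 mag brighter than a
# single star of the same type; pairs with fainter companions lie between 0 and
# 0.753 mag above the sequence, broadening the colour-magnitude distribution.
offset_equal = 20.0 - combined_magnitude(20.0, 20.0)
offset_unequal = 20.0 - combined_magnitude(20.0, 22.0)
```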
With that said, our results presented in Section~\ref{bmkl} suggest that the time-scales for Kozai-Lidov oscillations to operate are sufficiently short very close to a central SMBH that all binaries with initially high inclinations should merge on relatively short time-scales, leaving a mix of surviving binaries and merged binaries in young nuclear environments.

To summarize, our results suggest that collisions could offer an additional pathway toward re-fueling an inner nucleus with gas for recent star formation. MS+WD collisions could contribute significantly to the total gas supply needed to form the centrally concentrated blue stars in every nucleus in our sample, with the possible exceptions of M32 and perhaps M33 (due to a relatively small stagnation radius for gas inflow/outflow). In nearly all cases, the relative velocities are not high enough to ablate the stars during MS+MS collisions, except within the inner $\sim$ 0.1 pc of the M31 nucleus, where the velocity dispersion reaches $\sim$ 1000 km/s. Here, however, MS+MS collisions could also offer an additional source of gas for star formation, as originally suggested by \citet{spitzer66} and \citet{spitzer67}. During MS+WD collisions, on the other hand, the relative velocity at impact plays only a minor role in deciding how much gas is liberated. This is because the strong surface gravity of the WD, rather than the kinetic energies of the objects at impact, is primarily responsible for generating the high-temperature shock front that triggers explosive CNO burning in the MS star (see \citealt{shara77} and \citealt{shara78}).

\subsection{Missing giants and extreme horizontal branch stars} \label{ehb}

If the envelope of an RGB star is stripped, the hot core is revealed and the star settles onto the extreme horizontal branch in the cluster colour-magnitude diagram before dimming to join the white dwarf cooling sequence \citep[e.g.][]{davies98,amaroseoane14}.
If only a single grazing collision occurs, only a small fraction of the RGB envelope should end up unbound. Although the photometric appearance of the resulting RGB star can still be slightly affected by a single mass-loss episode, our results presented in Section~\ref{ehbform} suggest that almost all of the RGB envelope must be stripped before its subsequent evolution and photometric appearance are significantly affected, which should require multiple collisions.

It is not clear how many MS+RGB collisions are needed for an RGB star to be collisionally stripped by MS stars to the point of forming a hot EHB star. The number could range from on the order of 10 to hundreds \citep[e.g.][]{dale09}. Thus, the relevant rate here is that for a \textit{specific} RGB star to undergo collisions with MS stars (i.e. Figure~\ref{fig:fig3}), which is lower than the rate for \textit{any} RGB star to undergo collisions by a factor of N$_{\rm RGB}$, where N$_{\rm RGB}$ is the total number of RGB stars. The rates of MS+RGB collisions are sufficiently high only in the inner $\lesssim$ 0.1 pc of M31 and M32 for this mechanism to potentially produce EHB stars (and hence a paucity of RGB stars) in the inner nuclear regions. Given that the EHB lifetime is only on the order of 10$^8$ years \citep[e.g.][]{maeder09,leigh11b}, however, it seems unlikely that any observed blue excess is due to EHB stars, since the implied rate of EHB formation would have to be unrealistically high. Although BHs are expected to remove a larger fraction of the envelope on a per-encounter basis \citep{dale09}, Figure~\ref{fig:fig3} suggests that only in perhaps M33 could the rate of BH+RGB collisions be of comparable significance to MS+RGB collisions for removing RGB envelopes.
A deficit of late-type giants could perhaps be searched for observationally using the presence or absence of CO bandhead absorption to distinguish late-type from early-type stars (see \citealt{genzel96} for more details).

\subsection{Blue straggler stars} \label{bss}

Blue straggler stars are rejuvenated main-sequence stars, identifiable in the cluster colour-magnitude diagram as being brighter and bluer than the main-sequence turn-off \citep{sandage53}. BSs are thought to form via a number of channels in both open and globular clusters, including direct collisions between main-sequence stars \citep[e.g.][]{shara97,sills97,sills01,leigh07,leigh11}, mass transfer in binary systems from an evolved donor onto a normal MS star \citep[e.g.][]{mccrea64,mathieu09,geller11,geller13,knigge09,leigh11b} and mergers of MS-MS binaries \citep[e.g.][]{chen09,perets09}. In M31, M32 and M33, the rate of MS+MS collisions is sufficiently high to supply, in as little as a few Gyr, a total mass in MS+MS collision products that is a significant fraction of the total mass of the inner nuclear regions, and could thus contribute non-negligibly to the observed light distribution.

With the exception of M33, the relaxation times of the NSCs in our sample tend to be on the order of a few Gyr or more \citep{merritt13}. This is longer than the expected age of a typical collision product formed from an old population \citep[e.g.][]{sills97,sills01}. It follows that mass segregation of collision products formed at larger radii into the inner nuclear regions should have only a minor impact on the observed numbers of collision products there.

With that said, any tension due to an over-abundance of predicted BSs can perhaps be reconciled by the fact that the products of direct MS+MS collisions should be puffed up post-collision, due to the deposition of kinetic energy \citep[e.g.][]{sills97,fregeau04}.
This increases the collisional cross-section and reduces the time-scale for subsequent collisions, which could act to strip the collision product of its inflated envelope. This would serve to reduce the mass acquired by collision products. The envelope should contract on a Kelvin-Helmholtz time-scale, which can be comparable to the time for subsequent collisions to occur. Finally, we note that more massive collision products should have shorter lifetimes, which reduces their probability of actually being observed.

If BSs are produced abundantly, then we might naively also expect an over-abundance of RGB stars, since the heavier BSs should evolve onto the giant branch on shorter time-scales than the rest of the MS population. As explained in the next section, these evolved BSs might be rapidly destroyed via MS+RGB and WD+RGB collisions; however, we do not expect this to be the case in the MW and M33. And yet, in the MW NSC, Table~\ref{table:one} suggests that, over a 1 Gyr period, we expect on the order of 10$^3$ MS+MS collisions in the inner $\sim$ 2 pc. Although this is only a small fraction of the total number of RGB stars, this excess is nonetheless puzzling in the MW NSC, since a paucity of giants has been observed, not an excess (but see the last paragraph in Section~\ref{MW}).

Very roughly, the rates of 1+2 collisions are sufficiently high in every NSC in our sample that most primordial binaries should have been destroyed by the present day (with the exception of M33). It follows that our assumed binary fractions could be over-estimates, in spite of correcting for the fraction of hard binaries using the observed velocity dispersion. More importantly, this suggests that not enough binaries should be present to contribute significantly to BS production via either binary coalescence or binary mass transfer. Only those binaries born in a near-contact state have sufficiently small cross-sections to survive for $\gtrsim$ 1 Gyr. This is because the orbital separation corresponding to the hard-soft boundary is near contact in nuclei with such high velocity dispersions. For very close binaries near contact, the probability of a merger or direct collision occurring during a 1+2 interaction approaches unity \citep[e.g.][]{fregeau04,leigh12}. Hence, most direct 1+2 encounters that do not result in a direct stellar collision should be dissociative, or at least widen the binary and reduce the time-scale for a subsequent (likely dissociative) encounter to occur. It follows that the rate of direct 1+2 encounters should be approximately equal to the rate of binary destruction very early on in the cluster lifetime. For those few near-contact binaries that are able to survive for the bulk of the cluster lifetime, if stable mass transfer is initiated then mass, energy and angular momentum conservation dictate that the binary orbit should (eventually) widen, significantly increasing the geometric cross-section and reducing the time-scale for a direct 1+2 encounter \citep[e.g.][]{leigh16}. Similarly, if unstable mass transfer is initiated, then a common envelope could form, which would also (at least temporarily) increase the geometric cross-section for collision and reduce the 1+2 collision time. Thus, we expect that the very few BSs that might form from binary mass transfer should quickly lose their binary companions post-formation, due to dissociative or destructive 1+2 encounters \citep{leigh16}. The simple calculations performed here should be verified and better quantified in future studies, using more realistic binary fractions and distributions of binary orbital parameters.

Finally, we consider one additional mechanism for BS formation, which has not previously been explored. This is the accretion of significant amounts of gas from the interstellar medium (ISM) onto a normal old MS star.
The gas could be supplied to the ISM by MS+WD and even MS+MS collisions (if the velocity dispersion is very high, which is the case only in M31, and only in its inner $\lesssim$ 0.1 pc; see Section~\ref{sf}), and/or mass loss from evolving stars in the surrounding old population. The gas collects at the tidal truncation radius (if an SMBH is present) due to the focusing of gas particle orbits in an axisymmetric potential \citep{chang07}. This gas is then channeled inward on a viscous time-scale, where we assume it is accreted by old main-sequence stars already orbiting there. As discussed by \citet{nayakshin06} for a similar scenario, as the gas density builds up, feedback from stars can serve to stop fragmentation of the gas, while accretion onto pre-existing old stars could proceed at very high rates. Although this scenario requires an initial old and centrally-concentrated stellar population to provide the seeds for accretion, it is appealing because it ultimately requires a much smaller gas reservoir than the recent-burst scenario, while also avoiding the special conditions required to induce the fragmentation of a giant molecular cloud in such an extreme environment so close to an SMBH. We caution that more work needs to be done to quantify the expected accretion rates in this scenario, which are poorly known.

Thus, accretion onto pre-existing old MS stars also predicts an inner disk of blue stars in the M31, M32 and M33 nuclei, assuming that the accretion rates of the ISM onto the old stars are sufficiently high. This mechanism is also appealing in that, unlike a recent burst of star formation, it is a \textit{continuous} mechanism for rejuvenating old MS stars.
Indeed, we might naively expect the mean BS mass (and hence luminosity) for BSs formed via this mechanism in the inner nucleus to correlate with the age of the surrounding old population, since older stars will have had longer to expel their winds into the ISM, and to accrete the polluted material. This mechanism also predicts low binary fractions, a very low central mass-to-light ratio and (possibly) surface abundance spreads throughout the cluster \citep[e.g.][]{lauer98,seth10}, depending on the quantity and composition of the accreted material.

We conclude that all nuclei in our sample should harbour an abundance of blue stragglers, formed mostly from MS+MS collisions. Accretion onto old MS stars already orbiting in the inner nuclear regions offers an additional possible pathway toward BS formation. Our results suggest that the rates of MS+WD collisions are sufficiently high to contribute significantly to the gas reservoir required for this mechanism to explain the observed blue stars. These ``non-binary mass-transfer BSs'' should have properties tightly correlated with those of the underlying old stellar population.

\subsection{Implications for inferred age spreads} \label{inferred}

As illustrated in Table~\ref{table:one}, our results predict that the numbers of collision products formed over only a 100 Myr interval can be a significant fraction of the total number of stars in the nuclear cluster. To help distinguish blue stars formed via collisions from young stars, and to test for a true age gradient in a given NSC, particular attention should be paid to deriving the radial dependence of the collision rate as a function of the host cluster environment. The collision number profiles can be converted into corresponding surface brightness profiles, and from there into radial colour profiles.
\nThese collisional colour profiles can then be subtracted from the observed colour profiles, after multiplying the collisional \ncolour profiles by an appropriate constant to ensure that no over- or under-subtraction occurs -- the normalization \nshould be applied at large radii, or where the predicted collision rates are at their lowest. If any \nresidual colour gradient is present, this suggests that its origin is a recent burst of star \nformation. If no residual colour gradient remains and the U-V or U-B colours turn out to be approximately \nconstant as a function of distance from r $=$ 0, then the observed colour gradient is indeed consistent with \na collisional origin. This test can be performed on the Local Group nuclei when future observations with \nsuperior resolution capabilities and, in particular, more accurate velocity information become available. \n\nA number of collisional mechanisms considered in this paper could contribute to the appearance of a colour gradient, \nand should be accounted for when looking for a true age gradient. \nIn every nucleus in our sample, our results predict non-negligible numbers of MS+MS collision products, or blue stragglers. \nIn the inner $\\lesssim$ 0.1 pc of M31 and M32, the collision rates are sufficiently high that individual \nRGB stars could undergo multiple \ncollisions within 100 Myr. If this were to result in a paucity of RGB stars and a corresponding excess of single EHB stars, then \nthe contribution of RGB-stripping to observed colour gradients is two-fold: the loss of RGB stars reduces the total amount of \nred light and the production of EHB stars can increase the total amount of blue light. Thus, the stripping of RGB \nstars could also contribute to the appearance of an age gradient, provided the numbers of MS+RGB, WD+RGB and \nBH+RGB collisions are extremely high and centrally concentrated. 
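The subtraction test described above can be sketched numerically. The following is a minimal illustration; the profile shapes, amplitudes, and normalisation radius are synthetic placeholders invented for demonstration, not fits to any of the nuclei discussed here.

```python
import numpy as np

def collisional_residual(r, observed_colour, collision_colour, r_norm):
    """Normalise the collisional colour profile to the observed one at large
    radii (r >= r_norm), subtract it, and return the residual profile."""
    outer = r >= r_norm
    # scale so the collisional profile matches the observations where the
    # predicted collision rates are lowest, avoiding over- or under-subtraction
    scale = np.mean(observed_colour[outer]) / np.mean(collision_colour[outer])
    return observed_colour - scale * collision_colour

# synthetic example: a colour gradient that is entirely collisional in origin
r = np.linspace(0.01, 1.0, 50)               # radius in pc (illustrative)
collision_colour = 0.5 * np.exp(-r / 0.1)    # predicted collisional profile
observed = 2.0 * collision_colour            # same shape, different amplitude
residual = collisional_residual(r, observed, collision_colour, r_norm=0.5)
```

In this toy case the residual is flat (consistent with a collisional origin by construction); a residual gradient surviving the subtraction would instead point to a recent burst of star formation.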
\n\nFinally, we note that future studies considering age gradients in nearby galactic nuclei should include the \nLocal Group dwarf spheroidal galaxy NGC 205, which is known to harbor bright blue (and possibly young) stars. We \nhave not done so in this paper, since the rate of collisions is expected to be much lower than in the other NSCs in our \nsample \\citep{valluri05}, due to a lower central surface brightness. \n\n\\subsection{Binary mergers due to Kozai-Lidov oscillations} \\label{bmkl}\n\nThe key point to take away from Equation~\\ref{eqn:taukl} is that, for any binary within a distance r$_{\\rm SC}$ \nfrom the centre of the galaxy with a sufficiently large angle of inclination relative to its orbital plane about the SMBH \n(i.e. i $\\gtrsim$ 40$^{\\circ}$), the \nKozai-Lidov time-scale is much shorter than the cluster age, for any old (i.e. $\\gtrsim$ a few Gyrs) stellar \npopulation. For example, for a binary with a separation of 1 AU and component masses \nof 1 M$_{\\rm \\odot}$ on a circular orbit about the SMBH with a semi-major axis equal to the \nscale radius of the nucleus, the Kozai-Lidov time-scale given by Equation~\\ref{eqn:taukl} is 4.8 $\\times$ 10$^4$, \n1.0 $\\times$ 10$^2$ and 1.5 $\\times$ 10$^5$ years in, respectively, the MW, M31 and M32. We also calculate \nthe critical distance r$_{\\rm SC}$ from the SMBH beyond which Kozai-Lidov oscillations are suppressed by general \nrelativistic precession. This is 0.04 pc, 0.15 pc and 0.04 pc for, respectively, the MW, M31 and M32.\n\nThese Kozai-Lidov time-scales are significantly shorter than the ages of the surrounding old stellar populations, \nas well as the collision times \npresented in Section~\\ref{app}. 
It follows that, over sufficiently long time-scales (though still much shorter than the cluster age), the \nrate of binary mergers due to Kozai-Lidov oscillations \nshould be determined by the time-scale for single stars to scatter the binary orbital plane (and reduce the distance from \nthe SMBH to $\\lesssim$ r$_{\\rm SC}$) into \nthe active Kozai-Lidov domain. Vector resonant relaxation acting to re-orient the binary orbital plane could contribute \nto either an increase or decrease in this rate (see \\citet{antonini12b} for more details). Regardless, in all nuclei in our \nsample, the rate of direct 1+2 collisions is \nsufficiently high relative to the age of the old stellar population that only very compact binaries should \nsurvive to the present-day, since the hard-soft boundary is near contact in these galactic nuclei due to their high velocity \ndispersions and the collision probability for a direct encounter with small impact parameter is close to unity for such compact \nbinaries \\citep{leigh12}. The \ncollision product should expand after the encounter by a factor of a few \\citep[e.g.][]{sills97,fregeau04}, causing it to merge \nwith its binary companion nearly independently of the semi-major axis or expansion factor. This is \nthe case in every NSC in our sample, with the exception of M33 due to its lower velocity dispersion. \nIf the binary progenitors are destroyed, then Kozai-Lidov oscillations cannot operate. \nTherefore, we \ndo not expect binary mergers due to Kozai-Lidov oscillations to have contributed significantly to any of the anomalous \nblue star populations currently observed in the inner regions of the (old) NSCs in our sample of galaxies. 
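Order-of-magnitude Kozai-Lidov time-scales of the kind compared above can be sketched from the commonly used quadrupole-order estimate, $t_{\\rm KL} \\sim (P_{\\rm out}^2 \/ P_{\\rm in})\\,(M_{\\rm bin} + M_{\\rm SMBH})\/M_{\\rm SMBH}\\,(1 - e_{\\rm out}^2)^{3\/2}$, up to an order-unity prefactor. The sketch below assumes this generic form and Keplerian units; it is not necessarily the exact expression of Equation~\\ref{eqn:taukl}, and the 0.5 pc outer separation is an illustrative assumption.

```python
import math

def period_yr(a_au, m_total_msun):
    # Kepler's third law in units of AU, years, and solar masses
    return math.sqrt(a_au ** 3 / m_total_msun)

def t_kozai_lidov_yr(a_in_au, m_bin_msun, a_out_au, m_smbh_msun, e_out=0.0):
    # quadrupole-order Kozai-Lidov time-scale, up to an order-unity prefactor:
    # t_KL ~ (P_out^2 / P_in) * (M_bin + M_SMBH) / M_SMBH * (1 - e_out^2)^(3/2)
    p_in = period_yr(a_in_au, m_bin_msun)
    p_out = period_yr(a_out_au, m_smbh_msun + m_bin_msun)
    return (p_out ** 2 / p_in) * ((m_bin_msun + m_smbh_msun) / m_smbh_msun) \
        * (1.0 - e_out ** 2) ** 1.5

# a 1 AU, 1 + 1 Msun binary around a 4e6 Msun SMBH at an assumed 0.5 pc
t_example = t_kozai_lidov_yr(1.0, 2.0, 0.5 * 206265.0, 4.0e6)
```

Since $t_{\\rm KL} \\propto a_{\\rm out}^3$ in this form, the time-scale grows rapidly with distance from the SMBH, which is why the oscillations are only effective well inside the nucleus.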
\n\nAlthough the mass segregation timescale for a massive population can approach the cluster age in the NSCs in our \nsample, heavy objects could still have been delivered via this mechanism to the inner nuclear regions in non-negligible \nnumbers by the present-day (provided they are born at sufficiently small clustercentric radii). The rate of BH-BH binary mergers \ninduced by Kozai-Lidov oscillations with the central SMBH \ncould still be relatively high at the present epoch relative to the rates of stellar binary mergers \n\\citep[e.g.][]{vanlandingham16}. However, these BH-BH binaries \nshould either be primordial or have formed very far out in the NSC and only recently migrated inward. This is because the rate \nof \\textit{stellar} binary destruction is expected to be very high. If few binaries exist at the present epoch, then BHs cannot \nbe exchanged into them via dynamical encounters to form BH-BH binaries. Additionally, due to the presence of \nthe massive central BH, BH binary formation via tidal capture may be unlikely since the timescale for this is comparable \nto the timescale on which the BH binary will experience a strong encounter with the central SMBH and itself \nbe stripped of its companion \\citep[e.g.][]{leigh14}.
Regardless, we can make general \ncomments about the expected frequencies of X-ray binaries in our sample of nuclei, which are also highly sensitive \nto the cluster binary fraction f$_{\\rm b}$. Based on our results for the 1+2 collision rates, we expect very few binaries \nto have survived until the present-day. This is due both to the high 1+2 collision rates and to the separations \ncorresponding to the hard-soft boundaries in such high velocity dispersion environments, which are near contact. With \nthis in mind, it seems more likely that the nuclei with the lowest velocity dispersions should harbor the largest \nbinary fractions, namely M32 and M33, which have hard-soft boundaries of 0.04 AU and 0.6 AU, respectively. In both \nM32 and M33, the 1+2 collision rates are also the highest, such that exchange encounters should occur very \nfrequently. Thus, we might expect a larger fraction of cataclysmic variables, NS and BH X-ray binaries as well as \nmillisecond pulsars (MSPs) in the inner nuclear regions of M32 and especially M33, relative to the MW and \nM31, which are home to more hostile environments for long-term binary\nsurvival in the inner $\\lesssim$ 1 pc. \nWith that said, \\citet{leigh14} recently showed (which was subsequently confirmed by \\citet{giersz15}) that the \npresence of a massive black hole near the cluster centre can \nefficiently destroy X-ray binaries, lowering their numbers significantly relative to what would be predicted in the \nabsence of a massive BH. Thus, putting this all together, we expect M33 to have the largest fraction \nof X-ray binaries per unit cluster mass, since it has both the lowest velocity dispersion and the smallest upper limit on \nthe SMBH mass. Indeed, constraining the formation of X-ray binaries in the M33 nucleus \ncould help to place constraints on the possible presence of a central BH, which remains controversial and \nis, at best, of very low mass (i.e. $\\lesssim$ 1500 M$_{\\rm \\odot}$). 
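Near-contact hard-soft boundaries of the kind quoted above follow, to order of magnitude, from $a_{hs} \\sim G m \/ \\sigma^2$ for solar-mass components. A rough sketch (the order-unity prefactor is dropped, and the velocity dispersions used below are illustrative assumptions, not the measured values):

```python
G_MSUN_KM3_S2 = 1.327e11   # G * Msun in km^3 / s^2
KM_PER_AU = 1.496e8        # kilometres per astronomical unit

def hard_soft_boundary_au(m_msun, sigma_kms):
    # order-of-magnitude hard-soft boundary: a_hs ~ G m / sigma^2
    return G_MSUN_KM3_S2 * m_msun / sigma_kms ** 2 / KM_PER_AU

# illustrative dispersions: a high-sigma nucleus yields a near-contact
# boundary, while a low-sigma nucleus yields a much wider one
a_high_sigma = hard_soft_boundary_au(1.0, 150.0)   # ~0.04 AU
a_low_sigma = hard_soft_boundary_au(1.0, 40.0)     # ~0.55 AU
```

The steep $\\sigma^{-2}$ dependence is why long-term binary survival (and hence X-ray binary and MSP formation) is so much more plausible in the low-dispersion nuclei.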
\n\nFinally, we note that various processes not considered in our study may effectively replenish the nuclear regions \nof galaxies with binaries and compact objects, possibly enhancing the formation of X-ray binaries and MSPs in NSCs.\nFor example, nuclear stellar clusters may result from the continuous infall of star clusters that disrupt close to the \nSMBH \\citep[e.g.,][]{2015ApJ...812...72A}. Such stellar clusters may harbor MSPs, X-ray binaries as well as\nlarge populations of stellar remnants. In addition, \\citet{2009ApJ...698.1330P} suggest that the disruption of triple stars \ncould leave behind a binary in a close orbit around an SMBH; the rate of triple disruptions (brought in to the inner \nnuclear regions at late times either via NSC infall or mass segregation) could be high enough to \nserve as a continuous source of binaries close to the SMBH.\n\n\\section{Summary} \\label{summary}\n\nIn this paper, we consider the origins of the enigmatic stellar populations observed in the \nnuclear star clusters of Local Group galaxies, specifically the Milky Way, M31, M32 and M33. \nThese curious populations are blue stars found in the inner $\\lesssim$ 0.1 pc of three out of \nthe four galactic nuclei considered here. The origins of these centrally-concentrated blue stars are not \nknown. Several candidates have been proposed in the literature, including blue straggler stars, extended horizontal \nbranch stars and young recently formed stars. Here, \nwe calculate rough order-of-magnitude estimates for various rates of collisions, as a \nfunction of the host nuclear cluster environment and distance from the cluster centre. We subsequently \nquantify the contributions to the enigmatic blue stars from BSs, extended horizontal branch stars \nand young recently formed stars. 
\n\nThe collision rates are sufficiently high that blue stragglers, formed via direct collisions between single main-sequence stars, \ncould contribute non-negligibly ($\\sim$ 1-10\\%) to the observed surface brightness profiles, in nearly every nucleus \nin our sample (the MW being the exception). However, \nthe radial profiles of the collision products do not always predict a strong central concentration of these objects, as \nin the MW (for our assumed density profile, which has a constant density core), M32 and, to a lesser extent, M31. \nIn M31 and M32, the rates of MS+RGB collisions might be sufficiently high for \nindividual giants to undergo multiple collisions only within $\\lesssim$ 0.1 pc from the galaxy \ncentre. Only here could a paucity of giants be observed, since the results of stellar evolution models presented in \nthis paper suggest that the envelope \nmust be nearly fully stripped to destroy an RGB star and leave an EHB star in its place. \n\nThe rates of collisions between white dwarfs and MS stars, which are expected to typically ablate \nthe MS stars, are sufficiently high that this could offer a steady supply of gas to the inner nucleus and \nsubsequently contribute non-negligibly to the total gas reservoir needed to seed star formation in every galactic \nnucleus in our sample. This scenario seems the most likely \n(dominant?) of those considered in this paper for explaining the origins of the inner blue stars, relative to a \ndirect collisional origin. However, we suggest that the gas might be more likely to simply accrete onto \npre-existing old stars rather than fragment and form new stars. This could offer a new channel for BS \nformation or the rejuvenation of old MS stars, by creating \"non-binary mass transfer blue stragglers\" in the \ninner nucleus. 
We emphasize, however, that more work needs to be done to better constrain the expected \naccretion rates in this scenario, which are poorly known.\n\nOur results suggest that collisional processes could contribute significantly to the observed blue excesses in \nM31 and M33, via several different channels. More sophisticated theoretical models are needed, however, in \norder to properly complement existing and future observations. \nFor example, accurate collision surface brightness profiles, which can be derived from theoretical collision \nnumber profiles (such as those presented in this paper), can be converted into radial colour profiles, multiplied \nby a suitable constant (to avoid under- or over-subtraction) and subtracted from the observed \nradial colour profiles. Any remaining colour gradient observed in the residuals should then be indicative \nof a real age gradient in any high-density unresolved stellar population. We caution that this method is best \nsuited to quantifying only the \\textit{gradient} in an observed age spread, and not the relative sizes of the underlying blue \nand red populations.\n\n\\section*{Acknowledgments}\n\nWe would like to kindly thank Barry McKernan and Saavik Ford for useful discussions and suggestions. \nNL acknowledges the generous support of an AMNH Kalbfleisch Postdoctoral Fellowship. \nFA acknowledges support from a NASA Fermi Grant NNX15AU69G and from a CIERA postdoctoral fellowship at Northwestern\nUniversity. Financial support was provided to NS by NASA through Einstein Postdoctoral Fellowship Award Number PF5-160145. \nFA and DM acknowledge insightful conversations with Eugene Vasiliev that stimulated them to conduct the \ncalculation presented in Section~\\ref{ehbform}. Financial support was granted to DM by the National Science Foundation under \ngrant no. AST 1211602 and by the National Aeronautics and Space Administration under grant no. 
NNX13AG92G.\n\n\n\\section{Introduction}\n\\IEEEPARstart{T}{he} number of electric vehicles on the road is expected to reach 125 million by 2030, generating 404 TWh of additional electricity demand \\cite{bunsen2018global}.\nCharging these EVs cleanly, affordably, and without excessive stress on the grid will require advances in charging system design, hardware, monitoring, and control. Collectively we refer to these advances as smart charging. Smart charging will substantially reduce the environmental footprint of transportation while unlocking immense potential for demand-side management.\n\nSmart charging is especially crucial for large-scale charging facilities such as those in workplaces, apartment complexes, shopping centers, airports, and fleet charging facilities. Providing charging at these diverse sites is vital to the widespread adoption of electric vehicles. Doing so can reduce range anxiety and provide an alternative to personal charging ports for those who cannot install them at their homes. Since many of these sites will provide charging during daytime hours, they can make use of abundant solar energy production and enable EVs to provide grid services throughout the day. However, with current technology, most sites are unable to install more than a few charging ports due to limited infrastructure capacity and fear of high electricity bills. Smart charging allows sites to scale their port capacity without costly infrastructure upgrades. Moreover, scheduling algorithms can reduce operating costs by optimizing for time-of-use tariffs, demand charges, and on-site renewable generation.\n\\iflong{Algorithms can also enable additional revenue streams by providing grid services.}\n\nIn this paper, we report the design, implementation, and application of a smart charging system that we call an Adaptive Charging Network (ACN). 
In Section~\\ref{sec:overview}, we describe the architecture through the lens of the first ACN, which was built on the Caltech campus in 2016, shown in Fig.~\\ref{fig:ACN_photo}. The ACN enables real-time control and monitoring of charging systems at scale. It has spawned a company, PowerFlex Systems, which operates over 100 similar charging systems around the United States. \\iflong{These include national laboratories, universities, schools, businesses, apartment complexes, hotels, and public parking facilities.}\n\nThrough our experience building large-scale charging facilities, we find that common assumptions made in theoretical models do not hold in many practical systems. We describe these in Section~\\ref{sec:key-insights}. This makes it challenging to apply algorithms proposed in the literature directly. In order to develop practical and robust algorithms for the ACN, we present the Adaptive Scheduling Algorithm (ASA) framework for online (causal) smart charging algorithms based on convex optimization and model predictive control (MPC). We describe the ASA framework in Section~\\ref{sec:scheduling-framework}. Then, in Section~\\ref{sec:applications}, we demonstrate ASA's performance in the context of maximizing energy delivery in congested infrastructure and minimizing operating costs, including demand charge using simulations based on real data collected from the ACN.\n\n\\begin{figure}[!t]\n\\includegraphics[width = \\columnwidth]{figs\/ACN_photo}\n\\centering\n\\caption{The ACN smart EV charging testbed at Caltech.}\n\\label{fig:ACN_photo}\n\\end{figure}\n\nBeyond serving as a model for smart charging systems, the ACN has led to the creation of the ACN Research Portal. 
This portal has three parts: ACN-Data, a collection of real fine-grained charging data collected from the Caltech ACN and similar sites \\cite{lee_ACN-Data_2019}; ACN-Sim, an open-source simulator which uses data from ACN-Data and realistic models derived from actual ACNs to provide researchers with an environment to evaluate their algorithms and test assumptions \\cite{lee_acnsim_2019}; and ACN-Live, a framework for safely field testing algorithms directly on the Caltech ACN. Thus the ACN has proven to be a valuable tool in both commercial and academic environments. \n\\section{Acknowledgment}\nLarge-scale infrastructure projects like the ACN require herculean effort and coordination between researchers, administrators, private companies, and funding agencies. In addition to the authors, the ACN project would not have been possible without the efforts of John Onderdonk from Caltech Facilities, Stephanie Yankchinski and Neil Fromer from the Resnick Sustainability Institute, Jennifer Shockroo and Fred Farina from Caltech Office of Technology Transfer, and Roger Klemm from the JPL Green Club. Technology development was aided by data provided by Roff Schreiber of Google and generations of Caltech and international students since 2016, especially the work of Karl Fredrik Erliksson of Lund University. We would also like to thank PowerFlex Systems and EDF Renewables North America for their continued support, development, and commercialization of the ACN project.\n\\section{Case Studies} \\label{sec-case-studies}\nTo demonstrate the efficacy and flexibility of our online scheduling framework, we now consider a series of case studies. These case studies consider a variety of operator objectives which have been expressed by real site owners. All case studies are run using real data collected from the Caltech ACN. 
In addition, when considering profit maximization, we use realistic revenues and Southern California Edison's time-of-use rate schedule, making our case studies not only informative of our scheduling framework's performance, but also of the business potential of operating large-scale EV charging systems.\n\n\\subsection{Preliminaries}\n\\subsubsection{Data}\nFor these case studies we consider real data collected from the Caltech ACN as part of ACN-Data between May 1, 2018 and October 1, 2018. During this period our system served 10,415 charging sessions and delivered a total of 92.78 MWh of energy. Table \\ref{tab:summary_stats} shows relevant summary statistics for this dataset. We also show the distribution of arrivals and departures in the system in Fig.~\\ref{fig:arrival_departure_dist}. These distributions can help us understand when the system is likely to be highly loaded. In addition, the distribution of arrivals relative to the TOU rate schedule helps us to understand how much shifting is necessary to reduce energy costs in our system. All simulations were conducted using ACN-Sim \\cite{lee_acnsim_2019}.\n\n\\begin{table}[t!]\n\\centering\n\\captionsetup{justification=raggedright}\n\\caption{\\hspace{15pt} Average Statistics for EV Charging Test Cases Per Day\\newline\nMay 1, 2018 - Oct. 
1, 2018}\\label{tab:summary_stats}\n\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{tabular}{l|c|c|c|c|c|}\n\\cline{2-6}\n & \\begin{tabular}[c]{@{}c@{}}Mean\\\\Daily\\\\Sessions \\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}Mean\\\\Session\\\\Duration\\\\(hours)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Mean\\\\Daily\\\\Energy\\\\(kWh)\\end{tabular} & \\begin{tabular}[c]{@{}c@{}}Mean\\\\Session\\\\Energy\\\\(kWh)\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}Max\\\\Concurrent\\\\Sessions\\end{tabular}\\\\ \\hline\n\\multicolumn{1}{|l|}{Sun} & 41.32 & 3.94 & 415.06 & 10.05 & 18\\\\\n\\multicolumn{1}{|l|}{Mon} & 71.00 & 6.14 & 677.13 & 9.54 & 42\\\\\n\\multicolumn{1}{|l|}{Tues} & 76.73 & 6.24 & 685.79 & 8.94 & 47\\\\\n\\multicolumn{1}{|l|}{Wed} & 75.45 & 6.22 & 660.16 & 8.75 & 44\\\\\n\\multicolumn{1}{|l|}{Thurs} & 78.50 & 5.96 & 665.21 & 8.47 & 42\\\\\n\\multicolumn{1}{|l|}{Fri} & 77.18 & 6.71 & 697.41 & 9.04 & 43\\\\\n\\multicolumn{1}{|l|}{Sat} & 43.32 & 5.01 & 439.59 & 10.15 & 18\\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[t!]\n\\includegraphics[width = 0.8\\columnwidth]{figs\/Weekday_Arrival_Departure_Distribution_Oct_1.pdf}\n\\centering\n\\caption{Average arrivals and departures per hour for the period May 1, 2018 - Oct. 1, 2018. Background shading depicts the TOU rate at each time, with light grey, grey, and dark grey depicting off-peak, mid-peak, and peak pricing respectively. On weekdays we can see that most drivers arrive at the beginning of the morning mid-peak period and leave prior to the end of the peak period. Weekends, however, have a much more uniform distribution of arrivals and departures, owing to the heterogeneity of drivers' weekend schedules.}\n\\label{fig:arrival_departure_dist}\n\\end{figure}\n\n\\subsubsection{Baseline Algorithms}\nFor comparison we consider a number of baseline algorithms which are included in ACN-Sim. 
These include Uncontrolled Charging, Round Robin (RR), Earliest Deadline First (EDF), Least-Laxity First (LLF), and Greedy Cost Minimization. In addition, we compare with the offline optimal to provide an upper bound on what any online algorithm could achieve. It is found by solving (\\ref{eq:SCH.1}) with the relevant utility function and perfect knowledge of future arrivals. \n\n\\subsubsection{Adaptive Scheduling Algorithm Setup}\nFor all case studies we consider the three-phase infrastructure present in the Caltech ACN. We set the length of each time slot, $\\delta$, to 5 minutes and consider a maximum optimization horizon of 12 hours. For all cases except those in Section~\\ref{sec:practical_considerations} we do not consider a maximum recompute period, $P$, as we use an ideal battery management system model for these tests and EVs are assumed to follow the control signal exactly. In Section~\\ref{sec:practical_considerations} we use more realistic models and set $P=5$ minutes.\n\n\\subsection{Energy Delivery in Oversubscribed Infrastructure}\nIn our first case study we consider the objective of maximizing total energy delivered. This objective is used when the operator's primary goal is driver satisfaction or when profit per unit energy at a site is constant. In these cases the primary purpose of adaptive scheduling is to meet driver demand with minimal infrastructure investment. To satisfy this operator objective we use the Adaptive Scheduling Algorithm (Alg. \\ref{alg:MPC}) with utility function\n\n\\begin{equation}\n U^\\mathrm{QC} := V^{QC} + 10^{-14}V^{ES}\n\\end{equation}\n\nHere $U^{QC}$ encourages the system to deliver as much energy as possible as quickly as possible which helps to free capacity for future arrivals.\nWe refer to this algorithm as SCH-Quick Charge (SCH-QC).\n\nIn order to demonstrate the effect of highly constrained infrastructure, we vary the capacity of transformer $t_1$ between 0 and 100 kW. 
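The behaviour of SCH-QC can be illustrated with a much-simplified linear program: single phase, one aggregate capacity limit, ideal EVs, and a linearly decreasing slot weight standing in for $V^{QC}$. This is a sketch of the general idea, not the actual ASA implementation (which, among other things, handles three-phase infrastructure constraints); all function and variable names below are ours.

```python
import numpy as np
from scipy.optimize import linprog

def quick_charge_schedule(arrivals, departures, demands_kwh, cap_kw,
                          T, r_max_kw=7.0, delta_h=1.0):
    """Toy quick-charge LP: maximize energy delivered, weighted toward
    earlier slots, subject to per-EV demands and one aggregate limit."""
    n = len(demands_kwh)
    # slot weights decrease with time, so earlier delivery is preferred
    w = np.tile(1.0 + 1e-3 * np.arange(T - 1, -1, -1, dtype=float), n)
    # per-variable bounds: charging rate is zero outside each EV's stay
    bounds = [(0.0, r_max_kw if arrivals[i] <= t < departures[i] else 0.0)
              for i in range(n) for t in range(T)]
    A_ub, b_ub = [], []
    for i in range(n):              # energy demand: delta * sum_t r_it <= e_i
        row = np.zeros(n * T)
        row[i * T:(i + 1) * T] = delta_h
        A_ub.append(row)
        b_ub.append(demands_kwh[i])
    for t in range(T):              # aggregate capacity in each slot
        row = np.zeros(n * T)
        row[t::T] = 1.0
        A_ub.append(row)
        b_ub.append(cap_kw)
    res = linprog(-w, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x.reshape(n, T)

# two EVs, both present for four 1-hour slots, 7 kWh each, 7 kW transformer:
# the LP front-loads, filling the first two slots and leaving the rest idle
sched = quick_charge_schedule([0, 0], [4, 4], [7.0, 7.0], 7.0, T=4)
```

The front-loading here mirrors the role of $V^{QC}$: capacity is freed for future arrivals without reducing the total energy delivered.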
For reference, the actual transformer in our system is 150 kW and a conventional system for \\numevse{} 7kW EVSEs would require 378 kW of capacity. We then measure what percent of the total energy demand can be met using SCH-QC as well as four baseline scheduling algorithms and the offline optimal. The results of this test are shown in Fig.~\\ref{fig:energy_delivered_const_infra}.\n\nFrom Fig.~\\ref{fig:energy_delivered_const_infra} we can see that SCH-QC performs near optimally in terms of the percent of demand met and with it we are able to meet 100\\% of driver demand with just 70 kW of transformer capacity even in the online setting. Meanwhile, other baseline algorithms struggle in highly constrained infrastructure.\n\nAmong the baseline algorithms, we note that RoundRobin outperforms the other baselines for extremely low transformer capacities. This is because RoundRobin is better able to balance load between phases, which the sorting-based methods do not consider. As capacity increases, however, EDF and LLF outperform RoundRobin, as prioritizing drivers by their energy demands and deadline becomes more important than squeezing out capacity from the system. SCH-QC is able to factor in driver demands and deadlines while also balancing phases, which helps to explain its superior performance.\n\n\\begin{figure}[t!]\n\\includegraphics[width=\\columnwidth]{figs\/GreedyInfrastructureDerate}\n\\centering\n\\caption{Percentage of drivers' energy demands which can be met at varying capacities for transformer $t_1$ for the month of Sept. 2018. Here demand met is defined as the ratio of total energy delivered to total energy requested.\n}\n\\label{fig:energy_delivered_const_infra}\n\\end{figure}\n\n\\subsection{Profit Maximization with TOU Rates}\\label{sec:case-no-dc}\nWe next consider the case where an operator would like to maximize their profit subject to time-of-use electricity tariffs. 
For this case study we adopt the Southern California Edison TOU rate schedule for separately metered EV charging systems between 20-500~kW which is shown in Table~\\ref{tab:tou_rates} \\cite{choi_general_2017}. We do not, however, consider a demand charge for this case, as we will do so in the next case study. We consider a fixed revenue of \\$0.30 \/ kWh.\\footnote{While this may seem high, it is actually reasonable as it can be sourced not only from the driver but also from subsidies and by selling credits in carbon markets such as the California Low-Carbon Fuel Standard program which is currently trading at the equivalent of \\$0.14~\/~kWh as of Oct. 2018.}\n\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{table}[t!]\n\\centering\n\\captionsetup{justification=raggedright}\n\\caption{\\hspace{15pt} SCE TOU Rate Schedule for EV Charging}\\label{tab:tou_rates}\n\\begin{tabular}{c|c|}\n\\cline{2-2}\n\\multicolumn{1}{l|}{} & Rates \\\\ \\hline\n\\multicolumn{1}{|c|}{On-Peak (12pm-6pm)} & \\$0.25 \/ kWh \\\\ \\hline\n\\multicolumn{1}{|c|}{Mid-Peak (8am-12pm, 6pm-11pm)} & \\$0.09 \/ kWh \\\\ \\hline\n\\multicolumn{1}{|c|}{Off-Peak (11pm-8am)} & \\$0.06 \/ kWh \\\\ \\hline\n\\multicolumn{1}{|c|}{Demand Charge} & \\$15.48 \/ kW \/ month \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nTo facilitate maximizing profits considering only energy charges, we propose two algorithms based on \\textbf{SCH}. The first, denoted SCH-Energy Charge (SCH-EC), uses the utility function\n\\begin{equation}\n U^\\mathrm{EC} := V^{EC} + 10^{-14}V^{ES}\n\\end{equation}\n\n\\noindent and the second, denoted SCH-Energy Charge Quick (SCH-ECQ) uses\n\n\\begin{equation}\n U^\\mathrm{ECQ} := V^{EC} + 10^{-4}V^{QC} + 10^{-14}V^{ES}\n\\end{equation}\n\nWe add the $V^{QC}$ term to SCH-ECQ in order to free capacity for future arrivals by encouraging charging early in each rate period. 
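The rate schedule in Table~\\ref{tab:tou_rates} and the fixed revenue assumed above translate into a simple profit model. A sketch (the \\$0.30 \/ kWh revenue is the figure assumed in this case study, demand charge is deliberately excluded here, and the function names are ours):

```python
def sce_tou_rate(hour):
    """SCE TOU energy rate ($/kWh) from the rate schedule table."""
    if 12 <= hour < 18:
        return 0.25          # on-peak (12pm-6pm)
    if 8 <= hour < 12 or 18 <= hour < 23:
        return 0.09          # mid-peak (8am-12pm, 6pm-11pm)
    return 0.06              # off-peak (11pm-8am)

def charging_profit(energy_kwh_by_hour, revenue_per_kwh=0.30):
    """Profit = fixed revenue minus TOU energy cost (no demand charge).
    energy_kwh_by_hour maps hour of day (0-23) -> kWh delivered."""
    cost = sum(e * sce_tou_rate(h) for h, e in energy_kwh_by_hour.items())
    revenue = revenue_per_kwh * sum(energy_kwh_by_hour.values())
    return revenue - cost
```

Shifting 10 kWh from the on-peak hour 13 to the off-peak hour 2 raises profit by 10 $\\times$ (0.25 $-$ 0.06) = \\$1.90, which is exactly the kind of load shifting SCH-EC performs.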
Since rates are flat during each period, we can front-load charging within a period without increasing energy costs.\n\nWe consider the profit gained by our algorithms as well as the baselines for the five-month period beginning in May 2018. The results of this test are shown in Fig.~\\ref{fig:profit_max_no_dc}. Here we can see that Uncontrolled, FCFS, and Round Robin perform poorly relative to the optimal.\\footnote{Somewhat surprisingly, uncontrolled charging generates higher profits than FCFS and Round Robin. This is because uncontrolled charging does not have to contend with infrastructure limits. This, combined with the arrival patterns shown in Fig.~\\ref{fig:arrival_departure_dist}, means that uncontrolled charging is able to deliver more energy during mid-peak (rather than on-peak) hours than FCFS or Round Robin.} This is expected, as these algorithms do not consider pricing signals at all. The IndividualGreedy algorithm does quite well in this test, achieving 98.7\\% of the optimal profit available. However, the algorithms we propose based on our framework perform even better, achieving 99.58\\% and 99.99\\% for SCH-EC and SCH-ECQ respectively. As expected, SCH-ECQ outperforms SCH-EC.\n\n\\begin{figure}[t!]\n\\includegraphics[width = \\columnwidth]{figs\/ProfitMaxNoDC.pdf}\n\\centering\n\\caption{Operator profit achieved via various scheduling approaches when using SCE's TOU rate schedule without demand charge.\n}\n\\label{fig:profit_max_no_dc}\n\\end{figure}\n\n\\subsection{Profit Maximization with Demand Charge}\\label{sec:pm_w_dc}\nWe next consider the more difficult task of maximizing profit in the presence of TOU rates and demand charge. We still consider the same rate schedule in Table~\\ref{tab:tou_rates}, but now with the additional demand charge component. Demand charges are assessed on the maximum power draw of a customer over the billing period, which we assume to be one month. 
These charges can be very high, especially in uncontrolled EV charging systems where many drivers might arrive in close succession and charge concurrently, leading to high peak demand.\n\n\\begin{figure}[t!]\n\\includegraphics[width = \\columnwidth]{figs\/ProfitMaxDcCombined}\n\\centering\n\\caption{Operator profit achieved via various scheduling approaches when using SCE's TOU rate schedule with demand charge.}\n\\label{fig:profit_max_dc}\n\\end{figure}\n\nTo demonstrate the importance of considering demand charge when scheduling, we can evaluate the profit of the schedules found by SCH-EC and SCH-ECQ when including demand charge. This is shown in Fig.~\\ref{fig:profit_max_dc}. Here we can see that demand charge reduced the profit obtained by SCH-EC and SCH-ECQ by 55\\% and 60\\% respectively. Moreover, these algorithms underperform the optimal by 35\\% and 45\\% respectively. Note that when demand charge is considered, SCH-ECQ performs worse than SCH-EC. This is because SCH-ECQ leads to higher peaks in demand. We note that the baseline algorithms we consider also perform poorly when demand charge is considered. This is especially true of uncontrolled charging, which leads to high peaks in demand that severely reduce profitability.\n\nThis motivates us to consider demand charge in our scheduling problem. To do this we use the utility function\\footnote{Note that while SCH-ECQ fared worse than SCH-EC when considering demand charge, we still include $V^{QC}$ in this formulation. This is because $V^{DC}$ will enforce a low peak, so we can still use $V^{QC}$ to encourage charging quickly when possible.}\n\\begin{equation}\n U^\\mathrm{DC} := V^{DC} + 10^{-4}V^{QC} + 10^{-14}V^{ES}\n\\end{equation}\n\nHere $V^{DC}$ has two tunable parameters: the demand charge proxy, $\\hat{\\Delta}$, and the previous\/projected peak demand, $L_0^{max}$. 
Careful selection of these parameters is necessary, as demand charge is a ratcheting cost which must be considered over the entirety of the billing period. Meanwhile, the ASA framework is designed to work on small sub-intervals of this period. If we try to naively include demand charge in our objective by setting $\\hat{\\Delta}$ to the equivalent of \\$15.48 \/ kW and $L_0^{max}=0$, no charging would occur, as the marginal cost of increasing peak demand would be far greater than the marginal benefit of charging the EVs.\n\nTo account for this, we consider three parameter configurations. The first we call SCH-Demand Charge Proportional (SCH-DCP). In this configuration we scale the true demand charge, $\\Delta$, by the percentage of the billing period which is covered by the optimization horizon to arrive at $\\hat{\\Delta}$. In this case we begin the billing period with $L^{max}_0=0$, then update it, prior to each iteration, to be the highest peak so far in the billing cycle. We see in Fig.~\\ref{fig:profit_max_dc} that this results in a significant improvement over not considering demand charge at all, but still achieves only 84.5\\% of the optimal profit available. By examining the schedules produced by SCH-DCP, we see that while it achieves lower energy costs than the optimal, it has a much higher demand charge. This is likely because $\\hat{\\Delta}$ is too low.\n\nTo address this, we can instead prorate the demand charge on a daily basis, i.e., $\\hat{\\Delta} := \\Delta \/ 30$. We denote this configuration SCH-Demand Charge Daily (SCH-DCD). The intuition behind this change is that usage in our system is extremely low overnight. Thus the scheduling algorithm should at least consider the demand charge it will generate over a full 24-hour period. We see in Fig.~\\ref{fig:profit_max_dc} that this change increases performance to within 93.6\\% of optimal.\n\nWhile SCH-DCD offers good performance, there is still room for improvement. 
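The two proration rules above reduce to simple arithmetic. The sketch below is our own illustration (the function and argument names are not from the ASA implementation); it assumes 5-minute periods, so a 12-hour optimization horizon is 144 periods and a 30-day billing month is 8640 periods.

```python
def demand_charge_proxy(delta, horizon_periods, billing_periods, mode):
    """Return the demand charge proxy used in place of the true
    demand charge delta ($/kW) when optimizing over a short horizon.

    "proportional" (SCH-DCP): scale delta by the fraction of the
    billing period covered by the optimization horizon.
    "daily" (SCH-DCD): prorate delta over a 30-day billing month.
    """
    if mode == "proportional":
        return delta * horizon_periods / billing_periods
    if mode == "daily":
        return delta / 30
    raise ValueError(f"unknown mode: {mode}")

# With delta = $15.48/kW: a 12-hour horizon covers 1/60 of a 30-day
# month, so SCH-DCP uses ~$0.258/kW, while SCH-DCD uses ~$0.516/kW.
dcp = demand_charge_proxy(15.48, 144, 8640, "proportional")
dcd = demand_charge_proxy(15.48, 144, 8640, "daily")
```

This makes explicit why SCH-DCP's proxy is half of SCH-DCD's for a 12-hour horizon, consistent with SCH-DCP undervaluing the demand charge.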
We note that selecting the correct peak usage is extremely important to maximizing profit. So rather than using a demand charge proxy, we consider what would happen if we knew a priori what the optimal peak from the offline case would be.\nWe could then set $L_0^{max}$ to be this optimal peak current and $\\hat{\\Delta}=\\Delta$. We denote this configuration SCH-Demand Charge Oracle (SCH-DCO) since we rely on an oracle to supply the optimal peak. We can see in Fig.~\\ref{fig:profit_max_dc} that SCH-DCO performs near optimally (within 99.6\\%).\n\nHowever, in practice, we do not have access to an oracle to provide the optimal peak demand, so we cannot use SCH-DCO directly. We must instead predict the optimal peak. One way to do so is to run OfflineOptimal over the previous 28-day period and use the resulting peak as an estimate of the optimal peak. We repeat this procedure daily (at midnight), so that the algorithm is able to adapt to changing usage patterns. At the beginning of each day, we set $L_0^{max}$ to be the max of the peak usage so far in that billing period and our estimate of the optimal peak. We denote this configuration SCH-Demand Charge Reflective (SCH-DCR). In Fig.~\\ref{fig:profit_max_dc} we can see that SCH-DCR is very close to optimal, achieving 97.7\\% of the available profit.\n\nWhile SCH-DCR is able to outperform SCH-DCD, there is still a trade-off between the two. First, SCH-DCD does not require access to historical system usage. This is important for new sites and for sites which do not wish to store past usage. Second, the two algorithms have very different behavior during high usage periods. Because SCH-DCR essentially fixes its peak using historical data, some vehicles will not be charged if doing so would violate this peak. Meanwhile, SCH-DCD has a much lower bar for raising its peak usage, meaning that more of the EV demand would be met, but at higher cost over the entire billing period. 
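The SCH-DCR peak update reduces to a one-line rule; a minimal sketch (naming is ours, with peaks in kW):

```python
def sch_dcr_peak_bound(daily_peaks_this_period, estimated_optimal_peak):
    """SCH-DCR: at the start of each day, set L0_max to the larger of
    the highest peak observed so far this billing period and the
    optimal peak estimated by running OfflineOptimal over the previous
    28 days. (Under SCH-DCR the proxy equals the true demand charge.)"""
    peak_so_far = max(daily_peaks_this_period, default=0.0)
    return max(peak_so_far, estimated_optimal_peak)

# Early in the billing period the historical estimate dominates; later,
# an unusually high observed peak takes over.
assert sch_dcr_peak_bound([], 52.0) == 52.0
assert sch_dcr_peak_bound([40.0, 58.5], 52.0) == 58.5
```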
Thus when choosing between the two, the site operator should consider which behavior better aligns with their goals.\\footnote{It is also possible to run both algorithms with an equality energy constraint, which would mean that the demand for all drivers would be met. However, doing this in practice could lead to incredibly high costs for the operator.}\n\n\\subsection{Practical Considerations} \\label{sec:practical_considerations}\nIn previous case studies we have ignored some of the more difficult issues which arise in practical EV charging systems, namely non-ideal battery behavior and additional (discrete) constraints on individual charging rates. In this section we address the effect of incorporating these constraints into our scheduling problem.\n\nWe once again consider the problem of profit maximization with demand charge. We use the best practical algorithm we have considered, SCH-DCR, as a base and measure performance relative to the profit achieved in the previous section, where ideal EV and EVSE models were used. The effect of including these practical EV and EVSE considerations on the operator's profit is shown in Fig.~\\ref{fig:practical_considerations}.\n\n\\begin{figure}[t!]\n\\includegraphics[width = \\columnwidth]{figs\/PracticalConsiderations}\n\\centering\n\\caption{Operator profit achieved when considering practical EV charging models as a percentage of the profit when considering ideal EV and EVSE models. Profits are aggregated over June 1 through Oct. 1.\n}\n\\label{fig:practical_considerations}\n\\end{figure}\n\n\\subsubsection{Non-ideal battery behavior}\nWe first address the effects of non-ideal battery behavior. We consider a two-stage battery charging model. The first stage, referred to as \\emph{bulk} charging, occurs up to 80\\% state of charge. In this stage, current draw, neglecting changes in pilot, is nearly constant. The second stage, called \\emph{absorption}, finishes charging the remaining 20\\%. 
In this stage, current decreases as the battery reaches full charge. To simulate this we use the Linear2Stage battery model included in ACN-Sim \\cite{lee_acnsim_2019}.\n\nWe account for this non-ideal battery behavior by computing a new solution to our scheduling problem periodically. This allows our algorithm to update its schedule to account for deviations from the assigned charging rates caused by battery behavior. For this test we set the minimum recompute period to be equal to the length of one period, 5 minutes. From Fig.~\\ref{fig:practical_considerations} we see that non-ideal battery behavior leads to approximately a 5.3\\% reduction in operator profit.\n\n\\subsubsection{Charging rate restrictions}\nWe next consider the limitations on charging rate imposed by EVSEs and battery management systems. As mentioned in Section \\ref{sec:key-insights}, most EVSEs on the market, including those used in our ACN testbed, provide only a discrete set of allowable pilot signals. For this test we use $\\rho := \\{0, 6, 7, \\ldots, 31, 32\\}$, which is the allowable rate set for AeroVironment EVSEs. In addition, the J1772 standard forbids charging rates in the range of 0-6 A, and experience has shown us that some EVs do not recover after being assigned a 0 A pilot signal, meaning EVs should receive at least a 6 A pilot from when they plug in until their demand is met. These factors motivate the post-processing heuristic described in Section \\ref{sec:post_processing}.\n\nTo demonstrate the effect of accounting for these restrictions on operator profit, we first run SCH-DCR then run the resulting schedule through our post-processing steps (\\ref{eq:post_processing}) and (\\ref{eq:rate_projection}) with $\\gamma = 10^{-2}$. We once again set the maximum recompute period to be 5 minutes (1 period) so that SCH-DCR is able to account for deviations caused by the post-processing. 
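To illustrate the kind of quantization involved, the following is a simplified stand-in for the projection onto the allowable rate set; it is not the actual heuristic of Section~\\ref{sec:post_processing}, only an assumed rounding rule for exposition.

```python
# Allowable pilot signals for AeroVironment EVSEs: {0, 6, 7, ..., 32} A.
ALLOWABLE = [0] + list(range(6, 33))

def quantize_rate(rate, demand_unmet, allowable=ALLOWABLE):
    """Round a continuous charging rate down to an allowable pilot.

    Rounding down keeps the quantized schedule within infrastructure
    limits. An EV with unmet demand receives at least the 6 A J1772
    minimum so its pilot never drops to 0 A mid-session (a
    simplification: in a real system this bump must itself be checked
    against infrastructure limits).
    """
    pilot = max((a for a in allowable if a <= rate), default=0)
    if demand_unmet and pilot == 0:
        pilot = 6
    return pilot

assert quantize_rate(7.9, demand_unmet=True) == 7    # round down
assert quantize_rate(2.0, demand_unmet=True) == 6    # keep charging
assert quantize_rate(0.0, demand_unmet=False) == 0   # session done
assert quantize_rate(40.0, demand_unmet=True) == 32  # cap at maximum
```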
From Fig.~\\ref{fig:practical_considerations} we see that accounting for charging rate restrictions using the post-processing approach we propose leads to approximately a 10\\% reduction in operator profit.\n\n\\subsubsection{Practical model}\nIn practice we must contend with both non-ideal battery behavior and rate restrictions. As such, in our final scenario we consider both factors simultaneously. The result, as seen in Fig.~\\ref{fig:practical_considerations}, is an approximately 12.5\\% reduction in operator profit relative to SCH-DCR with the ideal EV\/EVSE model. This demonstrates that our proposed algorithms are robust when applied to more realistic models of EV and EVSE behavior and capabilities.\n\\section{Related Work}\nSeveral smart EV charging systems have been developed, though usually at a smaller scale than the ACN. The Smart Energy Plaza (SEP) at Argonne National Laboratory consists of six controllable level-2 EVSEs \\cite{bohn_real_2016}. Likewise, the WinSmartEV system at UCLA consists of quad-port EVSEs capable of sharing a single oversubscribed circuit and multiple level-1 chargers with binary control \\cite{chynoweth_smart_2014, chung_master-slave_2014}. My Electric Avenue tested a control system to manage charging of 200 Nissan LEAFs in order to relieve congestion in the distribution system \\cite{quiros-tortos_how_2018}. The Parker project utilized a testbed of 10 bi-directional EVSEs at a commercial site to investigate the potential of EVs to provide frequency regulation services and adapt to marginal emissions signals \\cite{andersen_parker_2019}.\n\nThere is also a tremendous literature on EV charging algorithms; see the surveys \\cite{wang_smart_2016, mukherjee_review_2015} for extensive pointers to the literature. 
\nSome recent work specifically related to large-scale EV charging includes \\cite{chen_iems_2012,yu_intelligent_2016, wang_two-stage_2016, wang_predictive_2017, nakahira_smoothed_2017, zhang_optimal_2017, frendo_real-time_2019, alinia_online_2020}.\n\nSeveral works have been based on the ACN and data collected from it. \\cite{lee_ACN-Data_2019} uses data from the ACN to predict user behavior, size solar generation for EV charging systems, and evaluate the potential of EV charging to smooth the duck curve. \\cite{sun_electric_2020} clusters sessions based on their charging behavior using time series of charging current collected from the ACN. \\cite{lee_pricing_2020} proposes a pricing scheme to allocate costs (including demand charge) to charging sessions. \\cite{li_real-time_2020, schlund_flexability_2020} use data to quantify fleet-level flexibility within a charging facility, and \\cite{al_zishan_adaptive_2020} uses ACN-Sim to train reinforcement learning agents to schedule large-scale EV charging. \n\n\\iftoggle{long}{Our work adds to this literature insights gained from building real-world EV charging systems. It}\n{This work}\nextends \\cite{lee_adaptive_2016, lee_large-scale_2018}, which describe the ACN in earlier stages of development. It provides a more thorough description of the ACN\n\\ifelselong{architecture and algorithms}{and ASA} \nand demonstrates ASA's effectiveness in realistic scenarios through simulation.\n\\section{Practical Challenges from the Testbed} \\label{sec:key-insights}\nBy building and operating the Caltech ACN, we have identified several important features of the physical system that have not been addressed in the EV charging literature but pose real problems for implementing practical EV scheduling algorithms. 
\nAmong these are proper modeling of the unbalanced three-phase electrical network, incorporation of EVSE quantization, and adaptation to non-ideal battery behavior.\nWe describe these models in this section and explain how we incorporate them into an MPC framework in Section~\\ref{sec:scheduling-framework}. These models also form the basis of the component models included in ACN-Sim \\cite{lee_acnsim_2019}.\n\n\\subsection{Infrastructure modeling}\\label{sec:three_phase_model}\nAs discussed in Section~\\ref{sec:physical_system}, the electrical infrastructure within the ACN is oversubscribed and often unbalanced. In our data, we observe that without proper control, these phase imbalances can be significant%\n\\iflong{, as seen in Fig.~\\ref{fig:phase_imbalance}}. \nWhile many algorithms have been proposed to handle charging with an aggregate power limit or even a hierarchy of limits, most previous work has focused on single-phase or balanced three-phase systems, making these approaches inapplicable to charging systems like the ACN. An exception to this is the work of De Hoog et al. \\cite{de_hoog_optimal_2015}, which considers an unbalanced three-phase distribution system but only in the case of wye-connected EVSEs. In contrast, the EVSEs in the ACN and most large charging systems in the United States are connected line-to-line%\n\\iflong{, as shown in Fig.~\\ref{fig:circuit-diag}}. \n\n\\iflong{%\n\\begin{figure*}[htbp]\n\\includegraphics[width =0.88\\textwidth]{figs\/ACN_Circuit_Diagram}\n\\centering\n\\caption{Circuit diagram depicting the connection of loads within the California Parking Garage. For simplicity, transformer $t_2$ is omitted and all EVSEs between phases A and B have been lumped together as $I_{ab}^{evse}$, and so forth for BC and CA.}\n\\label{fig:circuit-diag}\n\\end{figure*}\n}\n\n\\iftoggle{long}{%\n \\begin{figure}[!t]\n \\centering\n \\includegraphics[width=\\columnwidth]{figs\/LineCurrentsSept10}\n \\caption{Line currents from uncontrolled charging. 
We note significant current imbalances caused by differences in the allocation of EVSEs to phases and driver preferences. In the ACN, phase AB has 26 stations, whereas phases BC and CA each have 14. This imbalance is caused by the two 8-EVSE pods, which are both on phase AB.}\n \\label{fig:phase_imbalance}\n\\end{figure}\n}\n\nIn general, infrastructure constraints can be expressed as upper bounds on the magnitudes of currents within the system. By Kirchhoff's Current Law, we can express any current within the system as the sum of load currents. Let $\\mathcal{V}(t)$ denote the set of EVs which are available to be charged at time $t$. The current draw of EV $i$ at time $t$ can be expressed as a phasor in the form $r_i(t) e^{j\\phi_i}$ where $r_i(t)$ is the charging rate of the EV in amps, and $\\phi_i$ is the phase angle of the current sinusoid. We assume in this model that each charging EV has a unity power factor, so $\\phi_i$ is known based on how the EVSE is connected and the voltage phase angles (which we assume are separated by $\\pm 120\\degree$).\\footnote{This is reasonable since EVs' onboard chargers generally include power factor correction and voltage phase angles can be easily measured.} We can then model constraints within the electrical system as\n\\begin{equation}\\label{eq:three_phase_model}\n \\left| \\sum_{i \\in \\mathcal{V}} A_{li} r_i(t)e^{j\\phi_i} + L_l(t)\\right| \\leq c_{lt} \\quad \\quad t \\in \\mathcal{T}, l \\in \\mathcal{L}\n\\end{equation}\n\\noindent Infrastructure limits of the network are indexed by resources $l\\in \\mathcal{L}$, e.g., $l$ may refer to a transformer or a breaker on a phase.\nFor each constraint $l$, $c_{lt}$ is a given capacity limit for time $t$ and $L_l(t)$ is the aggregate current draw from uncontrollable loads through resource $l$. $A=(A_{li}) \\in \\mathbb{R}^{|\\mathcal{L}|\\times|\\mathcal{V}|}$ is a matrix which maps individual EVSE currents to aggregate currents within the network. 
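The left-hand side of \\eqref{eq:three_phase_model} can be evaluated with ordinary complex arithmetic. The sketch below is our own illustration, not part of ASA; the line-to-line phase angles $30\\degree$, $-90\\degree$, $150\\degree$ and the all-ones row of $A$ are assumed for the example.

```python
import math

def aggregate_current_magnitude(rates, phis_deg, a_row, background=0j):
    """Left-hand side of the three-phase constraint for one resource l:
    |sum_i A_li * r_i * exp(j*phi_i) + L_l|."""
    total = background
    for r, phi, a in zip(rates, phis_deg, a_row):
        total += a * r * complex(math.cos(math.radians(phi)),
                                 math.sin(math.radians(phi)))
    return abs(total)

# Three EVSE groups connected line-to-line at 30, -90, and 150 degrees.
# Balanced loading: the aggregate current cancels almost exactly.
balanced = aggregate_current_magnitude([16, 16, 16], [30, -90, 150], [1, 1, 1])
# Unbalancing one group by 16 A leaves a 16 A aggregate residual.
unbalanced = aggregate_current_magnitude([32, 16, 16], [30, -90, 150], [1, 1, 1])
```

This makes concrete why phase-aware scheduling matters: the same total EVSE current can produce very different aggregate currents depending on how it is distributed across phases.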
Matrix $A$ can account for both the connection of loads and lines as well as the effect of transformers, such as the delta-wye transformers in the Caltech ACN. The constraints in (\\ref{eq:three_phase_model}) are second-order cone constraints, which are convex and can be handled by many off-the-shelf solvers such as ECOS, MOSEK, and Gurobi. In some applications, however, these constraints could be too computationally expensive or difficult to analyze. Simpler, but more conservative, constraints can be derived by observing\n\\begin{equation*}\n \\left| \\sum_{i \\in \\mathcal{V}} A_{li} r_i(t) e^{j\\phi_i} \\right| \\ \\leq \\sum_{i\\in\\mathcal{V}} |A_{li}| r_i(t)\n\\end{equation*}\nThis yields conservative affine constraints of the form\n\\begin{equation}\\label{eq:inf_constraints_affine}\n \\sum_{i \\in \\mathcal{V}} |A_{li}| r_i(t) + |L_l(t)| \\ \\leq \\ c_{lt} \\quad t \\in \\mathcal{T}, l \\in \\mathcal{L}\n\\end{equation}\n\nFor an example of how to find $A$ specifically for a subset of the Caltech ACN network, as well as a comparison of the performance of \\eqref{eq:three_phase_model} and \\eqref{eq:inf_constraints_affine}, see \\cite{lee_large-scale_2018}.\n\n\\iflong{%\n\\subsection{Battery Management System behavior} \\label{sec:bms_model}\nFor level-2 EVSEs, each EV's onboard charger and battery management system (BMS) controls its charging rate. As discussed in Section~\\ref{sec:evse}, the EV's actual charging current often deviates, sometimes significantly, from the pilot signal it receives. This requires us to develop algorithms that accurately model battery behavior or are robust against deviations from simpler models. While many tractable models for battery charging behavior exist, these models require information about the specific battery pack and initial state of charge of the vehicle \\cite{kazhamiaka_simple_2018, kazhamiaka_tractable_2019}. 
Other models rely on machine learning to learn the relationship between state of charge and current draw \\cite{frendo_data-driven_2020}.\nHowever, these ML models still require access to the state of charge of the vehicle. Since this information is not available with current charging hardware, we use a model-free approach to estimate battery behavior in real-time and use closed-loop control to account for modeling errors. This approach is described in Section~\\ref{sec:battery-tail-reclamation}.\n}\n\n\\subsection{EVSE limitations}\\label{sec:evse_limits}\nIn practice, EVSEs impose limits on the pilot signals which they support. For example, the J1772 standard does not allow pilot signals below 6 A (except 0). Also, most commercially available EVSEs only support a discrete set of pilot signals. Within the Caltech ACN, we have four types of EVSEs. EVSEs from ClipperCreek only support five pilot signals: \\{0, 8, 16, 24, 32\\} for 32 amp EVSEs and \\{0, 16, 32, 48, 64\\} for 64 amp EVSEs. EVSEs from Webasto, Tesla, and OpenEVSE offer more control with 1 A (Webasto) or 0.1 A (Tesla and OpenEVSE) increments between 6 A and their maximum rate (32 A for Webasto and 80 A for Tesla and OpenEVSE). These limitations can be expressed mathematically as:\n\\begin{equation*} \nr_i(t) \\in \\rho_i(t) \\quad \\forall i,t\n\\end{equation*}\nwhere $\\rho_i(t)$ denotes the set of allowable charging rates for EV $i$ at time $t$, which can depend on both the EVSE and our model of the EV's BMS%\n\\iflong{ described in Section~\\ref{sec:bms_model}}. \n\nWe also require the charging rate to be non-zero from when a car plugs in until its charging demand is met. 
\nThis constraint helps prevent contactor wear in the EVSEs and improves user experience, since most vehicles will notify their owner when charging stops.\nWe can encode this constraint as:\n\\begin{equation} \\label{eq:flapping-rate-const}\n r_i(t) \\in\n \\begin{cases}\n \\rho_i(t) \\setminus \\{0\\} & \\text{if $\\sum_{\\tau=1}^{t-1} r_i(\\tau) < e_i$}\\\\\n \\{0\\} & \\text{otherwise}\n \\end{cases}\n\\end{equation}\nwhere $e_i$ is the energy request of EV $i$.\nUnfortunately, these constraints are discrete, making it difficult to incorporate them into optimization-based algorithms. In Section~\\ref{sec:post_processing} we propose heuristics to deal with these discrete constraints.\n\\section{Conclusions} \\label{sec-conclusions}\nIn this paper, we describe the Adaptive Charging Network (ACN), a framework for large-scale, managed electric vehicle charging facilities. The ACN and its scheduling algorithm (ASA) have been proven at scale through deployments around the United States, including the first ACN installed on the Caltech campus in 2016.\n\nThrough building the ACN, we have identified practical challenges, including unbalanced three-phase infrastructure, quantization of pilot signals, and non-ideal battery behavior, which require us to rethink classical scheduling approaches. To meet these challenges we propose ASA, a flexible model predictive control based algorithm, along with pre- and post-processing heuristics, which can be easily configured to meet different operator objectives. \n\\iflong{We propose a collection of such objectives, including regularizers to promote desirable properties in the final schedule.}\n\n\\ifelselong{%\nThrough case studies, we consider the objective of delivering as much energy as possible in constrained infrastructure and maximizing profit, subject to time-of-use tariffs and demand charges. 
Using real workload data collected from the Caltech ACN and accurate models of ACN infrastructure, we demonstrate that ASA offers significant improvements in terms of energy delivered with constrained infrastructure when continuous pilots are allowed and performs comparably to baselines when pilots are restricted to a discrete set of values. We also note that by changing the objective function, we can easily modify ASA to maximize operator profit. Using real data from Sept. 2018, we achieve profits of \\$2,835 (98.1\\% of offline optimal) in an idealized setting, and \\$2,600 (90\\% of offline optimal) when considering non-ideal batteries and EVSEs. Compared to uncontrolled charging systems, our simulations show that an ACN-like system can increase an EV charging system operator's profit by 3.4 times (see Section \\ref{sec:cost_min}).\n}{%\nUsing real data from the ACN, we demonstrate that the ACN with ASA can reduce the infrastructure capacity necessary to meet charging demands and maximize profit subject to time-of-use tariffs and demand charges. In particular, we find that ASA is able to consistently deliver more energy in highly constrained systems than baseline algorithms, and can increase an EV charging system operator's profit by 3.4 times over uncontrolled charging systems (see Section \\ref{sec:cost_min}).\n}\n\n\\iflong{\nBeyond its operational role of charging hundreds of EVs each week, the Caltech ACN and similar sites at research institutions like JPL, NREL, SLAC, and UC San Diego provide a valuable platform for research in managed EV charging. To facilitate this new frontier of research, the ACN Research Portal provides open-access data from the Caltech and JPL ACNs and an open-source simulation environment based on the ACN architecture. 
In addition, a new project called ACN-Live will allow researchers from anywhere in the world to field test algorithms on the Caltech ACN.\n}\n\\section{Adaptive Charging Network Architecture}\\label{sec:overview}\nWe first describe the architecture of the ACN through the lens of the Caltech ACN. Since its installation in early 2016, the Caltech ACN has grown from 54 custom-built level-2 electric vehicle supply equipment (EVSEs) in a single garage \\cite{lee_adaptive_2016}, to 126 commercially available level-2 EVSEs, one 50 kW DC Fast Charger (DCFC), and four 25 kW DCFCs, spread across three parking garages on campus. These EVSEs have delivered over 1,103 MWh as of July 7, 2020. The Caltech ACN is a cyber-physical system, as shown in Fig.~\\ref{fig:ACN_arch}, that consists of five interacting subsystems: (1) the information system, which is responsible for collecting information and computing control actions; (2) the sensor system, which gathers information from the physical system; (3) the actuation system (made up of EVSEs and the EVs' battery management systems), which controls each vehicle's charging rate; (4) the physical system (electrical infrastructure), which delivers power to the EVs and other loads within the system; (5) drivers, who provide data to the system and decide when their vehicles are available to charge. \n\\ifelselong{In this section, we describe these subsystems and their interactions.}{We now focus on the components of the information and physical systems in detail.}\n\n\\begin{figure}[!t]\n\\includegraphics[width =\\columnwidth]{figs\/ACN_CPS_Diagram_small}\n\\centering\n\\vspace{-0.35in}\n\\caption{Architecture of the ACN. Blue and green arrows signify the flow of information and power, respectively. Sensors measure power flowing in the electrical network and convert this into information. 
Likewise, EVSEs and the EVs' onboard battery management system (BMS) work together as actuators to control the flow of power into each EV's battery based on signals from the information system. Drivers provide information to the system via a mobile app and directly control when EVs are plugged in or unplugged from the system (signified by the purple arrow).}\n\\label{fig:ACN_arch}\n\\end{figure}\n\\subsection{Information system}\nACN's information system collects and stores relevant data and computes control actions.\nIt consists of four components:\n\n\\textbf{Communication interface:}\nThe communication interface collects sensor data and passes it to the data storage layer. It also passes signals generated by the control algorithms to the corresponding EVSEs. An industrial computer within the parking garage controls this communication interface. It connects to the cloud-based components through a cellular internet connection and to sensors and EVSEs within the ACN via a Zigbee-based mesh network.\n\n\\textbf{Data storage:}\nThe ACN utilizes a relational database to store information such as site configurations, driver profiles, and charging session parameters. A dedicated time-series database stores measurements like voltage, current, and power readings taken from sensors in the electrical network.\nThe data storage layer allows us to create visualizations for drivers and site operators%\n\\iflong{, which help them understand the state of the system and their own EV's charging trajectory in real-time}, as seen at \\cite{caltech_dashboard_2019}.\n\n\\textbf{Mobile app:} \nOur mobile app collects data directly from drivers. After setting up an account, a driver scans a QR code on the EVSE, and then provides an estimated departure time and requested energy. If the driver's plans change, they can update these parameters throughout the charging session. The app also allows the site to collect payment and, if desired, implement access control. 
To ensure that drivers provide information through the app, an EV will only charge at 8 amps until the driver claims the session. After 15 minutes, if the session is not claimed, it will be terminated, and the EVSE will cease charging.\n\n\\textbf{Control algorithms:}\nThe control layer takes inputs from the data layer and calculates a charging schedule for each EV in the system.\nWe use an event-based system to trigger the scheduling updates. The events considered include a vehicle plugging in or unplugging, a driver changing request parameters, or a demand response signal from the utility. Events are handled by a publish-subscribe model.\nWhenever an event occurs, or the time since the last charging schedule update exceeds a threshold (for example, 5 min), we compute a new charging schedule. These periodic computations close the control loop and account for discrepancies between the control signal sent to each EV and its actual current draw. We describe this model predictive control framework in detail in Section~\\ref{sec:scheduling-framework}.\n\n\\iflong{%\n\\subsection{EVSEs and Battery Management System}\\label{sec:evse}\nTo control charging rates, we use the pilot signal mechanism defined by the J1772 standard for level-2 EVSEs \\cite{sae_sae_2017}. According to this standard, the EVSE can communicate an upper bound to the EV's battery management system (BMS) that limits the amount of current it may draw from the EVSE. Because it is only an upper bound, the vehicle's BMS may choose to charge at a lower rate. This can occur for various reasons, such as the pilot signal being higher than the vehicle's maximum charging rate or the BMS limiting current draw as the battery reaches a high state of charge. It can be difficult to diagnose why a car is charging below its allocated pilot signal since the J1772 standard does not provide a way to gather the EV's state of charge. 
Also, most EVSEs on the market today, including the ClipperCreek, Webasto, and Tesla EVSEs in the Caltech ACN, only support a finite set of pilot signal values and require quantization of the control signal.\n}\n\n\\iflong{%\n\\subsection{Sensors}\nSensors provide a bridge between the physical system and the information system. These sensors measure power, current, and voltage within the local electrical network, allowing us to monitor the system state and accurately track energy usage. The sensors also provide feedback for the control algorithm.\n}\n\n\\subsection{Physical system} \\label{sec:physical_system}\nThe physical system of the ACN includes the local electrical network (including transformers, lines, breakers, loads, and local generation), a connection to the grid, and the electric vehicles. Fig.~\\ref{fig:network-top} shows the topology of the local electrical network for one garage of the Caltech ACN. Power is delivered to the garage from the distribution transformer via three-phase service at 480 V\\textsubscript{LL}. \n\nFrom there, power is distributed throughout the garage via the main switch panel. The ACN is connected to this panel by two 150 kVA delta-wye transformers $t_1$ and $t_2$, which step the voltage down to 120 V\\textsubscript{LN}. Each level-2 EVSE is a single-phase load connected line-to-line (208 V\\textsubscript{LL}) with a maximum current draw between 32 A and 80 A depending on its type. Because of unequal loading between phases,\n\\iflong{which is unavoidable due to the stochastic nature of driver demands on the system,}\nbalanced operation cannot be assumed. This makes protection of transformers $t_1$ and $t_2$ challenging, which we discuss in Section~\\ref{sec:three_phase_model}. \n\\iflong{Another interesting feature of the Caltech ACN is the two pods of eight EVSEs. These pods are each fed by an 80 amp line. Since each EVSE in the pod has a maximum charging rate of 32 A, these lines are oversubscribed by 3.2 times. 
This demonstrates how smart charging can allow sites to scale EVSE capacity with existing infrastructure.}\n\n\\iflong{%\nIn addition to the 78 EVSEs in the garage, the ACN also includes a 50 kW DC fast charger (DCFC). This DCFC is a balanced three-phase load.\nWhile this garage does not have local generation, other garages in the Caltech ACN and other PowerFlex sites have on-site solar generation.}\n\\begin{figure}[!t]\n\\includegraphics[width = \\columnwidth]{figs\/ACN_tree.pdf}\n\\centering\n\\vspace{-0.35in}\n\\caption{System topology for the California Parking Garage in the Caltech ACN. The system consists of 78 EVSEs and one 50 kW DC Fast Charger. Switch Panel 1 is fed by a 150 kVA transformer and feeds 54 6.7 kW EVSEs, leading to a 2.4X over-subscription ratio. Nineteen of these lines feed pairs of 6.7 kW AeroVironment EVSEs. Two additional 80 A lines feed pods of eight 6.7 kW EVSEs each, one pod of AeroVironment stations and the other of ClipperCreek stations. Switch Panel 2 is fed by an identical 150 kVA transformer and feeds 14 13.3 kW ClipperCreek EVSEs and 10 16.6 kW Tesla EVSEs. Each of these EVSEs has a dedicated 80 A line. All EVSEs in the system are connected line-to-line at 208 V. The 50 kW DC Fast Charger from BTC Power is a balanced 3-phase load connected directly to the main switch panel. We do not directly control the DCFC at this time.\n}\n\\label{fig:network-top}\n\\end{figure}\n\n\\iflong{\n\\subsection{Drivers}\nHuman behavior can add significant randomness to the system. Drivers may arrive, depart, or change their input parameters at any time. Drivers are also difficult to model. Input through the mobile app can be highly inaccurate, as shown in \\cite{lee_ACN-Data_2019}. 
To combat this, we have explored using machine learning to predict driver parameters \cite{lee_ACN-Data_2019} as well as pricing schemes that incentivize drivers to provide accurate estimates \cite{lee_pricing_2020}.}\n\n\n\section{Applications} \label{sec:applications}\nWe now turn our attention to applications of the Adaptive Charging Network. We first examine the real-world operational data we have collected from the system. We then use this data to evaluate (through simulations) how the Adaptive Scheduling Algorithm proposed in Section~\ref{sec:scheduling-framework} handles the practical challenges described in Section~\ref{sec:key-insights}. To do this, we consider two practical objectives: charging users quickly in highly constrained systems and maximizing operating profits. \n\iflong{Due to limited space, we cannot address all possible use-cases of ACN and the ASA framework. For additional information about dynamic pricing and cost minimization using the ACN, see \cite{lee_pricing_2020}.}\n\n\subsection{Data Collected}\label{sec:data_collected}\nThe ACN charging data includes actual arrival and departure times of EVs, estimated departure times and energy demands provided by drivers through the mobile app, and measurements of the actual energy delivered in each session. ACNs also record time series of the control signals passed to each EV and the EV's actual charging rate at 4-second resolution. This data is freely available, see \cite{lee_ACN-Data_2019}, and can be used to evaluate new scheduling algorithms using ACN-Sim. Important workload features include the arrival and departure distribution of EVs, shown in Fig.~\ref{fig:arrival_departure_dist}. \n\iflong{Other statistics are summarized in Table~\ref{tab:summary_stats}.}\nThis data was collected from May 1, 2018 to October 1, 2018, during which charging was free and only the 54 EVSEs connected to transformer $t_1$ were active.
EVSEs on $t_2$ were added in early 2019, while the second and third garages in the ACN were added in late 2019. \n\iflong{During this period, the system served 10,415 charging sessions and delivered a total of 92.78 MWh of energy.}\n\n\iflong{%\nFrom Table \ref{tab:summary_stats}, we observe a significant difference in system usage between weekdays and weekends. The total energy delivered is much higher on weekdays, but this energy is divided over far more charging sessions, leading to a lower per-session energy delivery. Also, charging sessions on weekends tend to be shorter than those during the week. This means that our system must be able to handle large numbers of flexible sessions on weekdays and smaller numbers of relatively inflexible sessions on weekends. This behavior precludes simple solutions such as installing large numbers of level-1 chargers, which would be too slow on weekends and for low laxity weekday sessions, or small numbers of level-2 chargers, which would be insufficient for the number of concurrent sessions on weekdays.}\n\n\begin{figure}[t!]\n\includegraphics[width =\columnwidth]{figs\/arrivals_departures_2018.pdf}\n\centering\n\vspace{-0.2in}\n\caption{Average arrivals and departures per hour for the period May 1, 2018 - October 1, 2018 (54 EVSEs). On weekdays we see a peak in arrivals between 7:00 - 10:00 followed by a peak in departures between 16:00 - 19:00. The Caltech ACN also has a much smaller peak in arrivals beginning around 18:00, which is made up of community members who use the site in the evening, including some patrons of the nearby campus gym. Weekends, however, have a more uniform distribution of arrivals and departures.}\n\label{fig:arrival_departure_dist}\n\end{figure}\n\n\n\iflong{\n\begin{table}[t!]\n\centering\n\captionsetup{justification=raggedright}\n\caption{\hspace{15pt} Average Statistics for EV Charging Test Cases Per Day\newline\nMay 1, 2018 - Oct. 
1, 2018 \n}\label{tab:summary_stats}\n\resizebox{\columnwidth}{!}{\n\begin{tabular}{lccccc}\n\toprule\n & \begin{tabular}[c]{@{}c@{}}Mean\\Daily\\Sessions \end{tabular} &\n \begin{tabular}[c]{@{}c@{}}Mean\\Session\\Duration\\(hours)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Mean\\Session\\Energy\\(kWh)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Mean\\Daily\\Energy\\(kWh)\end{tabular} &\n \begin{tabular}[c]{@{}c@{}}Max\\Concurrent\\Sessions\end{tabular}\\ \midrule\nSun & 41.32 & 3.94 & 10.05 & 415.06 & 18\\\nMon & 71.00 & 6.14 & 9.54 & 677.13 & 42\\\nTues & 76.73 & 6.24 & 8.94 & 685.79 & 47\\\nWed & 75.45 & 6.22 & 8.75 & 660.16 & 44\\\nThurs & 78.50 & 5.96 & 8.47 & 665.21 & 42\\\nFri & 77.18 & 6.71 & 9.04 & 697.41 & 43\\\nSat & 43.32 & 5.01 & 10.15 & 439.59 & 18\\ \n\midrule\n\end{tabular}\n}\n\end{table}\n}\n\n\subsection{Practical Scenarios}\n\begin{table}[t]\n\caption{Modeling Assumptions by Scenario}\n\label{tab:scenarios}\n\centering\n\begin{tabular}{@{}lccccc@{}}\n\toprule\n & I & II & III & IV & V \\ \midrule\nPerfect Information? & \cmark & \xmark & \xmark & \xmark & \xmark \\\nContinuous EVSE? & \cmark & \cmark & \xmark & \cmark & \xmark \\\nIdeal Battery? & \cmark & \cmark & \cmark & \xmark & \xmark \\ \midrule\n\end{tabular}%\n\end{table}\n\nTo better understand the effect of practical limitations such as limited information, non-ideal batteries, and pilot signal quantization, we consider each operator objective in the context of the five scenarios in Table~\ref{tab:scenarios}. Here perfect information refers to having access to the arrival time, duration, and energy demand of all EVs in advance, allowing for offline optimization. Continuous EVSEs allow for continuous pilot control between 0 and the EVSE's upper bound, while quantized EVSEs only allow a discrete set of values and must keep the charging rate at or above 6 A until the EV is finished charging. 
Finally, ideal batteries are assumed to follow the pilot signal exactly. In contrast, non-ideal batteries follow the linear two-stage model described in \\cite{lee_acnsim_2019}, where the initial state of charge and battery capacity are fit to maximize tail behavior, and the tail begins at 80\\% state-of-charge.\n\nFor our simulations we use ACN-Sim, which includes realistic models for each of the scenarios above. In each case, we consider the three-phase infrastructure of the Caltech ACN. We set the length of each time slot to 5 minutes, the maximum time between scheduler calls to 5 minutes, and consider a maximum optimization horizon of 12 hours. \n\n\\subsection{Energy delivery with constrained infrastructure}\nWe first consider the objective of maximizing total energy delivered when infrastructure is oversubscribed. This is a common use case when electricity prices are static or when user satisfaction is the primary concern. To optimize for this operator objective, we use the Adaptive Scheduling Algorithm (ASA) (Alg. \\ref{alg:MPC}) with utility function\n\n\\begin{equation*}\n U^\\mathrm{QC}(r) := u^{QC}(r) + 10^{-12}u^{ES}(r)\n\\end{equation*}\n\nHere $U^{QC}$ encourages the system to deliver energy as quickly as possible, which helps free capacity for future arrivals. We include the regularizer $u^{ES}(r)$ to promote equal sharing between similar EVs and force a unique solution. We refer to this algorithm as ASA-QC.\n\nTo control congestion in the system, we vary the capacity of transformer $t_1$ between 20 and 150 kW. For reference, the actual transformer in our system is 150 kW, and a conventional system of this size would require 362 kW of capacity. 
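As a concrete illustration of the non-ideal battery behavior used in scenarios IV and V, the linear two-stage model described above can be sketched in a few lines. This is our simplified illustration (function and parameter names are ours, not the ACN-Sim API); only the 80\% tail threshold is taken from the text.

```python
def battery_rate(pilot, soc, max_rate, tail_start=0.80):
    """Current actually drawn by a non-ideal battery under a given pilot.

    Below `tail_start` state-of-charge the battery follows the pilot
    exactly (bulk stage); above it, the maximum draw decays linearly to
    zero at 100% state-of-charge, producing the charging 'tail'.
    """
    if soc < tail_start:
        cap = max_rate
    else:
        cap = max_rate * (1.0 - soc) / (1.0 - tail_start)
    return min(pilot, cap)

print(battery_rate(32, 0.50, 32))  # bulk stage: follows the 32 A pilot
print(battery_rate(32, 0.90, 32))  # tail stage: cap is roughly 16 A
print(battery_rate(32, 1.00, 32))  # full battery draws nothing
```

The gap between `pilot` and the returned rate is exactly the wasted capacity that feedback-based schemes such as rampdown try to reclaim.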
We then measure the percent of the total energy demand met using ASA-QC as well as three baseline scheduling algorithms: least laxity first (LLF), earliest deadline first (EDF), and round-robin (RR), as implemented in ACN-Sim.\n\nResults from this experiment are shown in Fig.~\ref{fig:energy_delivered_const_infra}, from which we observe the following trends. \n\begin{enumerate}\n \item In scenario II, ASA-QC performs near optimally (within 0.4\%), and significantly outperforms the baselines (by as much as 14.1\% compared to EDF with 30 kW capacity).\n \n \item In almost all cases, ASA-QC performs better than baselines, especially so in highly congested settings.\n \footnote{For scenarios III and V and transformer capacities less than 68 kW, it may sometimes be infeasible to allocate a minimum of 6 A to each active EV. When this is the case, we allocate 6 A to as many EVs as possible and then allocate 0 A to the rest.\n %\n \iflong{This allocation is done by first sorting EVs (by laxity for LLF and ASA-QC, deadline for EDF, and arrival time for RR) then allocating 6 A to each EV until the infrastructure constraints are binding.}}\n \n \item Non-ideal EVSEs (scenarios III and V) have a large negative effect on ASA-QC, which we attribute to rounding of the optimal pilots and restriction of the feasible set.\n \n \item Surprisingly, non-ideal EVSEs increase the performance of LLF and EDF for transformer capacities $<$60 kW. This may be because the minimum current constraint leads to better phase balancing.\n \n \item Non-ideal batteries (scenarios IV and V) have a relatively small effect on the performance of ASA-QC compared to baselines, indicating the robustness of the algorithm.\n\end{enumerate}\n\iflong{%\nTo understand why ASA-QC performs so much better than the baselines, especially in scenario II, we must consider what information each algorithm uses. RR uses no information aside from which EVs are currently present, and as such, performs the worst. 
Likewise, EDF uses only information about departure time, while LLF also makes use of the EV's energy demand. Only ASA-QC actively optimizes over infrastructure constraints, allowing it to better balance phases (increasing throughput) and prioritize EVs, accounting for current and anticipated congestion. A key feature of the ASA framework is its ability to account for all available information cleanly.\footnote{When even more information is available, i.e., a model of the vehicle's battery or predictions of future EV arrivals, this information can also be accounted for in the constraint set $\mathcal{R}$ and objective $U(r)$. However, these formulations are outside the scope of this paper.}\\\n}\n\n\begin{figure}[t!]\n\includegraphics[width=0.95\columnwidth]{figs\/infrastructure_heatmap.pdf}\n\centering\n\vspace{-0.2in}\n\caption{Percentage of driver's energy demands that can be met at varying capacities for transformer $t_1$ for Sept. 2018. Here demand met is defined as the ratio of total energy delivered to total energy requested.\n}\n\label{fig:energy_delivered_const_infra}\n\end{figure}\n\n\subsection{Profit maximization with TOU tariffs and demand charge}\label{sec:cost_min}\nNext, we consider the case where a site host would like to minimize their operating costs. Within this case, we will consider the Southern California Edison EV TOU-4 tariff schedule for separately metered EV charging systems between 20-500~kW, shown in Table~\ref{tab:tou_rates} \cite{choi_general_2017}. In each case, we assume that the charging system operator has a fixed revenue of \$0.30\/kWh and only delivers energy when their marginal cost is less than this revenue.
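For concreteness, the summer rate schedule in Table~\ref{tab:tou_rates} can be encoded as a simple lookup. This sketch is ours (the function name and structure are not part of the ACN codebase); the rates come directly from the table.

```python
def tou_price(hour, weekday=True):
    """Energy price ($/kWh) under the SCE EV TOU-4 summer schedule.

    All weekend hours bill at the off-peak rate; weekday hours follow
    the peak / mid-peak / off-peak windows in the tariff table.
    """
    if not weekday:
        return 0.056
    if 12 <= hour < 18:                    # peak, 12:00 - 18:00
        return 0.267
    if 8 <= hour < 12 or 18 <= hour < 23:  # mid-peak windows
        return 0.092
    return 0.056                            # off-peak, 23:00 - 8:00

# With a fixed $0.30/kWh revenue, the marginal energy cost is below
# revenue in every period, so (absent demand charge) the operator
# would always find it profitable to deliver requested energy.
assert all(tou_price(h) < 0.30 for h in range(24))
```

The demand charge of \$15.51/kW/month is what actually dominates this trade-off, which is why the objective below includes a separate demand-charge term.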
\n\\begin{table}[t!]\n\\caption{SCE EV TOU-4 Rate Schedule for EV Charging (Summer)}\n\\label{tab:tou_rates}\n\\centering\n\\begin{tabular}{lccc}\n\\toprule\nName & Time Range & Weekday & Weekend\\\\ \n\\midrule\nOff-Peak & 23:00 - 8:00 & \\$0.056 \/ kWh & \\$0.056 \/ kWh\\\\\n\\arrayrulecolor{black!30}\\midrule\nMid-Peak & \\begin{tabular}[c]{@{}l@{}} \\ 8:00 - 12:00\\\\18:00 - 23:00\\end{tabular}\n& \\$0.092 \/ kWh & \\$0.056 \/ kWh\\\\ \n\\arrayrulecolor{black!30}\\midrule\nPeak & 12:00 - 18:00 & \\$0.267 \/ kWh & \\$0.056 \/ kWh\\\\ \n\\arrayrulecolor{black!55}\\midrule\nDemand Charge & \\multicolumn{3}{c}{\\$15.51 \/ kW \/ month} \\\\ \n\\arrayrulecolor{black}\\bottomrule\n\\end{tabular}\n\\end{table}\nIn order to maximize profit, we use the objective:\n\\begin{align*}\n &U^\\mathrm{PM} := u^{EC} + u^{DC} + 10^{-4}u^{QC} + 10^{ -12}u^{ES}\n\\end{align*}\n\\noindent We denote the ASA algorithm with this objective ASA-PM.\n\\ifelselong{%\nThe revenue term $\\pi$ in $u^{EC}$ can have several interpretations. In the most straightforward case, $\\pi$ is simply the price paid by users. However, $\\pi$ can also include subsidies by employers, governments, automakers, or carbon credits through programs like the California Low-Carbon Fuel Standard (LCFS). For example, LCFS credits for EV charging have averaged between \\$0.13 - \\$0.16 \/ kWh in 2018-2019. In these cases, some energy demands might not be met if the marginal price of that energy exceeds $\\pi$. This is especially important when demand charge is considered since the marginal cost can be extremely high if it causes a spike above the previous monthly peak. Alternatively, $\\pi$ can be set to a very high value (greater than the maximum marginal cost of energy) and act as a non-completion penalty. 
When this is the case, the algorithm will attempt to minimize costs while meeting all energy demands (when it is feasible to do so).%\n}\n{%\nThe revenue term $\pi$ in $u^{EC}$ can be interpreted as either the price paid by the user, a subsidy paid by the operator, or as a price on unmet demand.\n}\n\nIn $u^{DC}$, $\hat{P}$ and $q'$ are tunable parameters. The demand charge proxy $\hat{P}$ controls the trade-off between energy costs and demand charges in the online problem. In this case, we use the heuristic proposed in \cite{lee_pricing_2020}, $\hat{P} = P\/(D_p - d)$, where $D_p$ is the number of days in the billing period, and $d$ is the index of the current day. We will consider one version of the algorithm without a peak hint, i.e., $q'=0$, and one where the peak hint is 75\% of the optimal peak calculated using data from the previous month. \n\iflong{This percentage is chosen based on maximum historic month-to-month variability in the optimal peak (+11\%\/-16\%).}\n\n\nWe fix the transformer capacity to 150~kW and consider the previous baselines along with uncontrolled charging\n\iflong{, which is the most common type of charging system today}.\nResults of the experiment are shown in Fig.~\ref{fig:profit_max}, from which we observe:\n\begin{enumerate}\n \item Profits from both ASA-PM and ASA-PM w\/ Hint are within 3.6\% and 1.9\% of the optimal, respectively, and far exceed the profits of all baseline algorithms.\n \n \item Uncontrolled, LLF and RR result in \emph{lower} energy costs, but incur \emph{very high} demand charges. These algorithms are not price aware. Instead, low energy costs are a result of drivers arriving during off-peak and mid-peak times.%\n \iflong{In particular, uncontrolled charging, which does not consider an infrastructure limit, leads to \emph{extremely high} demand charges. 
On the other hand, both ASA-PM algorithms (and the offline optimal) trade off higher energy costs for much lower peaks, resulting in lower overall costs.}\n \n \item Providing a peak hint to ASA-PM increases revenue by allowing more energy demands to be met. In this case, 97.8\% vs. 95.6\% without peak hints. Accurate hints allow the algorithm to utilize higher capacity earlier in the billing period, increasing throughput without increasing cost.\n \iflong{Even with the peak hint, ASA-PM does not meet 100\% of demands even though the offline optimal does. Since ASA-PM does not have knowledge of future arrivals, it must act conservatively in increasing the peak over time. It is, however, important that hints not be too large, as the algorithm can increase the peak as needed, but once a high peak is set, the demand charge cannot be lowered.}\n \n \item While EVSE quantization and non-ideal batteries each reduce the operator's profit, even in scenario V, ASA-PM w\/ Hint still produces 90\% of the optimal profit.\n \n \iflong{\n \item Interestingly, revenue increases in scenarios with quantization (III and V). It can be hard to reason about exactly why this occurs, though it appears that the post-processing step leads to initial conditions that cause the next solve of \textbf{OPT} to produce a higher-revenue, higher-cost solution. 
\n }\n \n \item Because we use real tariff structures, real workloads, and realistic assumptions (scenario V), we can conclude with reasonable certainty that a charging system operator could expect to net approximately \$2,600~\/~month using an ACN-like system, compared to just \$763~\/~month in a conventional, uncontrolled system.\n\end{enumerate}\n\n\begin{figure}[t!]\n\includegraphics[width = .85\columnwidth]{figs\/profit_maximization.pdf}\n\centering\n\vspace{-0.25in}\n\caption{Operator profit, costs, and revenue for various scheduling approaches when using SCE's EV TOU-4 tariff, $\pi$ = \$0.30, 150 kW transformer capacity, and data from Sept. 2018. In the middle panel we break out energy costs (darker, lower bar) from demand charge (lighter, upper bar). In each case, the offline optimal in the ideal setting is shown as a grey background. \n}\n\label{fig:profit_max}\n\end{figure}\n\section{Online Scheduling Framework} \label{sec:scheduling-framework}\nTo address these challenges, we have developed a practical and flexible framework for online scheduling of EV charging based on model predictive control and convex optimization. Within this\nframework, we introduce constraints that address unbalanced three-phase infrastructure and utilize feedback to account for inaccuracies in modeling, such as non-ideal battery behavior. Finally, we introduce a heuristic approach to efficiently account for discrete constraints arising from EVSE limitations.\n\n\subsection{Model predictive control}\label{sec:MPC}\nThe ACN computes charging rates using model predictive control, described in Alg. 
\ref{alg:MPC}.\n\begin{algorithm}[htbp] \label{alg:MPC}\n\SetAlgoLined\n\SetNlSty{texttt}{(}{)}\n\For{$k \in \mathcal{K}$}{\n \nl $\mathcal{V}_k := \{i\in \hat{\mathcal{V}}_k \mid e_i(k) > 0$ \textbf{AND} $d_i(k) > 0$\} \label{alg:active_set}\\\n \nl \If{event fired \textbf{OR} time since last computation $> P$ \label{alg:recomp_condition}}\n {\n \nl $(r^*_i(1),...,r^*_i(T), i \in \mathcal{V}_k)$ := \textbf{OPT}$(\mathcal{V}_k, U_k, \mathcal{R}_k)$ \label{alg:scheduler.1} \\\n \nl $r_i(k+t) := r^*_i(1+t), \ t=0, \dots, T-1$ \label{alg:scheduler.2}\n }\n \nl set the pilot signal of EV $i$ to $r_i(k)$, $\forall i \in \mathcal{V}_k$ \label{alg:pilot_signal}\\\n \nl $e_i(k+1) := e_i(k) - \hat{e}_i(k)$, $\forall i \in \mathcal{V}_k$ \label{alg:energy_update}\\ \nl $d_i(k+1) := d_i(k) - 1$, $\forall i \in \mathcal{V}_k$ \label{alg:departure_update}\n }\n \caption{Adaptive Scheduling Algorithm (ASA)}\n\end{algorithm}\nWe use a discrete time model, with time indexed by $k$ in $\mathcal{K} := \{1,2,3,...\}$. The length of each time period is $\delta$, e.g., 5 minutes. At time $k$, $\hat{\mathcal{V}}_k$ is the set of all EVs present at the ACN and $\mathcal{V}_k \subseteq \hat{\mathcal{V}}_k$ is the subset of \emph{active} EVs, i.e. the set of EVs whose energy demands have not been met.\nThe state of EV $i\in\mathcal V_k$ at time $k$ is described by a tuple ($e_i(k)$, $d_i(k)$, $\bar{r}_i(k)$) where $e_i(k)$ is the remaining energy demand of the EV at the beginning of the period, $d_i(k)$ is the remaining duration of the session, and $\bar{r}_i(k)$ is the maximum charging rate of EV $i$. In addition, we define $\hat{e}_i(k)$ to be the measured energy delivered to the EV over time interval $k$. For simplicity of notation, we express $r_i(t)$ in amps and $e_i(t)$ and $\hat{e}_i(t)$ in $\delta$ $\times$ amps assuming nominal voltage.\n\nWe now describe the MPC algorithm. 
In line \\ref{alg:active_set} we compute the active EV set\n$\\mathcal{V}_k$ by looking for all EVs currently plugged in which have non-zero remaining energy demand and are not already scheduled to depart. \nWe then check, in line \\ref{alg:recomp_condition}, if we should compute a new optimal schedule. This is done whenever an event-fired flag is True, or when the time since the last computed schedule exceeds $P$ periods.\n\nIf a new schedule is required, we call the optimal scheduling algorithm $\\textbf{OPT}(\\mathcal{V}_k, U_k, \\mathcal{R}_k)$ in line \\ref{alg:scheduler.1} that takes the form:\n\\begin{subequations}\n\\begin{eqnarray}\n\\max_{\\hat r} & & U_k(\\hat r)\n\\\\\n\\text{s.t.} & & \\hat r \\in \\mathcal R_k\n\\end{eqnarray}\n\\label{eq:SCH.1}\n\\end{subequations}\nThe set $\\mathcal V_k$ of active EVs defines the optimization variable $\\hat r:=(\\hat r_i(1), \\dots, \\hat r_i(T), i\\in\\mathcal V_k)$ for every active EV $i$ over the optimization horizon $\\mathcal T := \\{1, \\dots, T\\}$.\nThe utility function $U_k$ encodes the problem's objective while the feasible set $\\mathcal{R}_k$ encodes various constraints. 
They will be discussed in detail in the next two subsections.\nNote that \textbf{OPT} does not have a notion of the current time $k$ and returns an optimal solution $r^*_i := (r_i^*(1),...,r_i^*(T))$ of \eqref{eq:SCH.1} as a $T$-dimensional vector for each active EV $i$.\nThe algorithm then adjusts the indexing and sets the scheduled charging rates of EV $i$ at time $k$ as $r_i(k+t) := r^*_i(1+t)$, $t = 0,...,T-1$ in line \ref{alg:scheduler.2}.\nAt every time $k$, regardless of whether a new schedule was produced, we set the pilot signal of each EV $i$ to $r_i(k)$ (line \ref{alg:pilot_signal}) and update the system state (lines \ref{alg:energy_update}, \ref{alg:departure_update}) for the next time period.\n\nWe now describe how to design the utility function $U_k$ to achieve desirable features and how to model various constraints that define the feasible set $\mathcal R_k$ for practical systems.\n\n\subsection{Utility Functions $U_k$} \label{sec:cost_funct}\nIn general, charging system operators may have many objectives they wish to achieve via smart charging, including charging vehicles as quickly as possible, maximizing their operating profit, \ifelselong{}{or} utilizing renewable energy sources\iflong{, or smoothing their total load profile}. 
Operators also have secondary objectives such as fairly distributing available capacity.\n\nTo allow operators to specify multiple objectives, our utility function $U_k(r)$ is a weighted sum of utility functions $u_k^v(r)$:\n\begin{equation*}\nU_k(r) := \sum_{v=1}^V \alpha^{v}_k u_{k}^{v}(r)\n\end{equation*}\nWe allow the utility function to change for each computation.\n Here $u_k^{v}(r)$,~$v=1,...,V$ are a set of utility functions which capture the system operator's objectives and promote desirable properties in the final schedule.\nMeanwhile, $\alpha_k^{v} > 0$,~$v=1,...,V$ are time-dependent weights used to determine the relative priority of the various components.\nTo simplify notation, we will henceforth drop the subscript $k$ when we discuss the computation at time $k$.\n\n\textbf{Charging quickly:}\nOne common operator objective is to charge all vehicles as quickly as possible. This can be done by specifying an objective such as\n\begin{equation*}\nu^{QC}(r) := \sum_{t \in \mathcal{T}} \frac{T-t+1}{T} \sum_{i \in \mathcal{V}} r_i(t)\n\end{equation*}\nwhere the reward for delivering energy is strictly decreasing in time.\n\n\n\textbf{Minimizing cost \/ maximizing profit:}\nAnother common objective for system operators is to maximize their operating profit. Let $\pi$ be the per unit revenue from charging and $c(t)$ be the time-varying cost of one unit of energy. \nTo account for other loads and generation that share a meter with the ACN, we define the net load \n\begin{equation*}\n N(t) := \sum_{i \in \mathcal{V}} r_i(t) + L(t) - G(t)\n\end{equation*}\n\n\n\noindent where $L(t)$ denotes the net draw of the other loads while $G(t)$ denotes on-site generation such as PV. Since $L(t)$ and $G(t)$ are unknown for $t > 0$, this formulation relies on a prediction of these functions into the future. There are several methods for load\/generation prediction proposed in the literature, but these are outside the scope of this paper. 
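As a quick sanity check, the quick-charging utility $u^{QC}$ defined above can be evaluated directly. This is a pure-Python sketch; the nested-list layout of $r$ is our choice, not the paper's implementation.

```python
def u_qc(r):
    """Quick-charging utility: sum_t ((T-t+1)/T) * sum_i r_i(t).

    `r[i][t-1]` holds the charging rate of EV i in slot t (t = 1..T).
    The per-slot weight decreases linearly from 1 to 1/T, so delivering
    the same energy earlier always scores strictly higher.
    """
    T = len(r[0])
    return sum(
        (T - t + 1) / T * sum(r_i[t - 1] for r_i in r)
        for t in range(1, T + 1)
    )

# Two schedules delivering identical total energy over T = 4 slots:
front = [[32, 32, 0, 0]]   # charge as early as possible
back  = [[0, 0, 32, 32]]   # defer charging to the last slots
print(u_qc(front), u_qc(back))  # prints: 56.0 24.0
```

Front-loading frees capacity for future arrivals, which is exactly why ASA-QC uses this term as its primary objective.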
We can express the objective of maximizing profit as:\n\n\begin{equation*}\label{eq:profit_max_util_energy_only}\nu^{EC}(r) := \pi \sum_{\substack{t \in \mathcal{T}\\i \in \mathcal{V}}} r_i(t) - \sum_{t \in \mathcal{T}} c(t)N(t)\n\end{equation*}\nThis is equivalent to cost minimization when $\pi = 0$. \n\n\textbf{Minimizing demand charge:}\nIn addition to energy costs, utilities often impose a price on the maximum power draw in a billing period called demand charge. Since demand charge is assessed over an entire month, while the optimization horizon is typically $<12$ hours, we replace the full demand charge $P$ with a proxy $\hat{P} \leq P$. We also introduce $q_0$ to be the highest peak so far in the billing period, and $q'$ as a prediction of the optimal peak. The demand charge can then be expressed as:\n\n\begin{equation*} \label{eq:profit_max_util}\n u^{DC}(r) \: := -\hat P\cdot \max \left( \max_{t\in\mathcal{T}} N(t),\, q_0, q'\right)\n\end{equation*}\nNote that $\hat{P}$ and $q'$ are tunable parameters. We describe the selection of these in Section~\ref{sec:cost_min}.\n\n\iflong{%\n\textbf{Minimizing total load variations:}\nAnother common objective for EV charging operators is to minimize load variations. We can express this objective as:\n\n\begin{equation*}\n u^{LV}(r) \: := -\sum_{t \in \mathcal{T}} N(t)^2 \n\end{equation*}\n}\n\n\textbf{Fairly distributing capacity:}\nThe utility functions described so far are not strictly concave in $r$, and hence the optimal solution, $r^*$, is generally non-unique. We can force a unique optimal solution by including the regularizer:\n\begin{equation*}\label{eq:sharing}\nu^{ES}(r) := -\sum_{\substack{t \in \mathcal{T}\\ i \in \mathcal{V}}} r_i(t)^2\n\end{equation*}\nThis regularizer also promotes equal sharing among the EVs, which is desirable for the operator and drivers and minimizes line losses along the lines that feed each EVSE. 
This property comes from the fact that, all other things being equal, this component is maximized when all charging rates are as low as possible. Thus, it is sub-optimal to have one EV charging faster than another if both charging at an equal rate would result in the same optimal value for all other objective components. \n\n\iflong{%\n\textbf{Non-completion penalty:}\label{sec:non-completion}\nA general goal of EV charging systems is to meet users' energy needs by their deadlines. While this can be accomplished by an equality constraint in $\mathcal{R}_k$, doing so can lead to infeasibility. \nInstead, we can use the inequality constraint \eqref{eq:Constraints.1c}, and add a non-completion penalty of the form:\n\n\begin{equation*}\label{eq:non-completion}\nu^{NC}(r) := - \sqrt[p]{\sum_{i \in \mathcal{V}} \left|\sum_{t \in \mathcal{T}} r_i(t) - e_i \right|^p}\n\end{equation*}\n\n\noindent where $p \geq 1$. This is the p-norm of the difference between the energy delivered to each EV and its requested energy. When $p=1$, this regularizer shows no preference between EVs. For $p > 1$, EVs with higher $e_i$ will be prioritized (given more energy) over those with lower $e_i$ when it is infeasible to meet all energy demands. Note that this regularizer is 0 whenever the energy demands of all EVs are fully met, i.e., $\sum_{t \in \mathcal{T}} r_i(t) = e_i$. Thus, with sufficient weight on this component, \eqref{eq:Constraints.1c} will be tight whenever feasible. \nLikewise, if \eqref{eq:Constraints.1c} would have been tight without \eqref{eq:non-completion}, this regularizer has no effect. \n}\n\n\subsection{Feasible set $\mathcal R_k$}\nThe feasible set $\mathcal R_k$ is defined by a set of equality and inequality constraints that can depend on $k$, but for notational simplicity, we drop the\nsubscript $k$. 
These constraints then take the form:\n\begin{subequations}\n\begin{align}\n& 0 \ \leq \ r_i(t) \ \leq \ \bar{r}_i(t) & t \leq d_i, i \in \mathcal{V}\n\label{eq:Constraints.1a} \\\n& r_i(t) \ = \ 0 & t > d_i, i \in \mathcal{V}\n\label{eq:Constraints.1b} \\\n& \sum_{t\in\mathcal{T}} r_i(t) \ \leq \ e_i & i \in \mathcal{V}\n\label{eq:Constraints.1c} \\\n& \left| \sum_{i \in \mathcal{V}} A_{li} r_i(t) e^{j\phi_i} \right| \ \leq \ c_l(t) & t \in \mathcal{T}, l \in \mathcal{L}\n\label{eq:Constraints.1d}\n\end{align}\n\label{eq:Constraints.1}\n\end{subequations}\n\n\noindent Constraints (\ref{eq:Constraints.1a}) ensure that the charging rate in each period is non-negative (we do not consider V2G) and less than its upper bound defined by the EV's BMS and the maximum pilot supported by the EVSE. This is a relaxation of the set of discrete rates allowed by the EVSE and is necessary to keep the scheduling problem convex. We discuss how to recover a feasible discrete solution in Section~\ref{sec:post_processing}. \nConstraints (\ref{eq:Constraints.1b}) ensure that an EV does not charge after its departure time. We use constraints (\ref{eq:Constraints.1c}) to limit the total energy delivered to EV $i$ to at most $e_i$. To ensure feasibility, we do not require equality (the zero vector is always a feasible solution). This ensures that \textbf{OPT} always returns a feasible schedule, which is important in practice. We can then craft the objective function to ensure this constraint is tight whenever possible%\n\iflong{, see Section~\ref{sec:non-completion}}.\n\n\subsection{Quantization of pilot signal} \label{sec:post_processing}\nThe pilot signal constraints imposed by EVSEs described in Section~\ref{sec:evse_limits} are discrete and intractable in general for large problems. 
Because of this, we do not include \eqref{eq:flapping-rate-const} in the definition of $\mathcal R_k$, instead relaxing it to \eqref{eq:Constraints.1a}. However, to account for our non-zero rate constraint, we add an additional constraint \n\begin{equation*}\n r_i(0) \geq \min \left( \rho_i(0) \setminus \{0\} \right)\n\end{equation*}\n\n\noindent to \eqref{eq:Constraints.1}.\footnote{This constraint implicitly assumes that it is feasible to deliver a minimum charging rate to each EV, thus charging infrastructure should be designed with this constraint in mind if the operators want to ensure a minimum charging rate to each EV.} We denote the output of this optimization $r^* := (r^*_i(t), \forall i\in\mathcal V \ \forall t\in\mathcal T)$. For simplicity, we assume that the maximum time $P$ between scheduler calls (see Algorithm \ref{alg:MPC})\nis set to the length of one period, so that only the first charging rate in $r^*$ will be applied.\n\nWe then round $r^*_i(0)$ down to the nearest value in $\rho_i(0)$:\n\begin{equation*} \n\tilde{r}_i(0) \ \leftarrow \ \left\lfloor{r^*_i(0)}\right\rfloor_{\rho_i(0)}\n\end{equation*}\n\noindent This rounding may leave unused capacity which can be reclaimed. To reclaim this capacity, we first sort EVs in descending order by the difference between their originally allocated charging rate, $r^*_i(0)$, and the rate after rounding, $\tilde{r}_i(0)$. We then iterate over this queue and increment each EV's charging rate to the next highest value in $\rho_i(0)$, if the resulting current vector $\tilde{r}(0) \in \mathcal{R}_k$ and $\sum_i \tilde{r}_i(0) \leq \sum_i r^*_i(0)$. 
We continue to loop over this queue until we cannot increment any EV's allocated rate.\n\n\\iflong{\n\\subsection{Battery tail capacity reclamation} \\label{sec:battery-tail-reclamation}\nAs discussed in Section~\\ref{sec:bms_model}, an EV's battery management system will sometimes limit the power draw of the battery as it approaches 100\\% state-of-charge.\nWhen this happens, the difference between the pilot signal and the vehicle's actual charging rate is wasted capacity. To reclaim this capacity, we use a simple algorithm which we call \\emph{rampdown}. Let $r^k_i(0)$ be the pilot signals sent to EV $i$ at time $k$, $m_i(k)$ be its measured charging current, and $\\bar{r}_i^k(0)$ be the upper bound on its charging rate. We define two thresholds, $\\theta_d$ and $\\theta_u$. If $r_i^k(0) - m_i(k) > \\theta_d$, we can reclaim some capacity by setting the upper limit on pilot signal of EV $i$ for the next period to be $m_i(k) + \\sigma$, where $\\sigma$ is typically around 1 A. In order to account for the possibility of the EV's BMS only limiting current temporarily, if $\\bar{r}_i^k(0) - m_i(k) < \\theta_u$, we increment the pilot signal upper bound by $\\sigma$ (clipping at the EV's BMS limit or the EVSE's pilot limit). With this scheme, we can quickly reclaim capacity during the tail region, while still allowing EVs to throttle back up if this reclamation was premature. Note that in our current implementation, the upper bound on the pilot signal, $\\bar r^k_i(t)$ is the same for all $t$ within the same sub-problem $k$. \nIn more advanced rampdown schemes, this bound could depend on $t$ or the decision variables $r(t)$.\n}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nFrom the very beginning of condensed matter physics, solid state theory relied on two complementary approaches: model-building and the use of general principles. 
The atomistic model was invoked by early crystallographers to explain the laws of crystal habit \\cite{hauy}; more than 400 years ago, Kepler speculated about the connection of the shape of a snowflake with the dense packing of spheres \\cite{kepler}. The simple Drude \\cite{drude} and Sommerfeld \\cite{sommerfeld} models of metals were followed by a more realistic band theory of solids \\cite{bloch,kane-hasan}. Later, improved models that incorporate electron interactions led to some of the greatest triumphs of condensed matter\nphysics including the Fermi-liquid theory \\cite{Gabriele} and the explanation of superconductivity \\cite{BCS-book}. Any recent issue of this journal contains articles on the Hubbard, Tomonaga-Luttinger, Kondo or other models of materials.\n\nIn some cases it is possible to obtain nontrivial predictions without the use of models, solely from the basic principles of quantum and statistical mechanics. Thermodynamics is a particularly powerful source of such predictions and Einstein famously said \\cite{einstein-quote}: ``A theory is the more impressive the greater the simplicity of its premises is, the more different kinds of things it relates, and the more extended is its area of applicability. Therefore the deep impression which classical thermodynamics made upon me.'' Einstein himself made some of the most brilliant predictions from general principles by his masterful use of detailed balance \\cite{einstein1,einstein2}. The fluctuation relations, addressed in this review, are a far-reaching development of his ideas.\n\nEarly milestones in the field, opened by Einstein, included the Nyquist formula \\cite{johnson,nyquist} and the Onsager reciprocity relations \\cite{onsager,casimir}. Their generalization led to the fluctuation-dissipation theorem \\cite{FDT} (FDT) and the linear response theory \\cite{Kubo}. 
Another breakthrough in the application of general principles to many-body systems came from the idea of universality \\cite{Kadanoff}. It allowed a precise quantitative description of critical phenomena from the symmetry considerations without any microscopic information \\cite{ZJ}.\n\nThese and other achievements greatly advanced the theory of condensed matter in and close to thermal equilibrium. On the other hand, far from equilibrium, little could be told without the resort to microscopic models \\cite{kynetics}. That situation changed in the 1990s, when the discovery of fluctuation relations brought a powerful general principle to nonequilibrium statistical mechanics \\cite{fl-3,fl-2,fl-1,fl0,fl1,fl2}.\n\nThe key idea behind fluctuation relations is similar to the principle of detailed balance. Consider a time-reversal-invariant system in an initial state $|\\psi_i\\rangle$. The system evolves during the time $t_0$. From the unitarity of quantum mechanics, the probability to find the system in the final state $|\\psi_f\\rangle$\nis exactly the same as the probability to find the system with the initial state $|\\psi(t=0)\\rangle=\\Theta|\\psi_f\\rangle$ in the final state $|\\psi(t=t_0)\\rangle=\\Theta|\\psi_i\\rangle$, where $\\Theta$ is the time-reversal operator. We now consider a process made of the following three steps:\n\n\n1) The initial state of the system is measured;\n\n2) The system undergoes unitary evolution over the time interval $t_0$;\n\n3) The final state is determined through measurement.\n\n\\noindent\nThe probability to observe the evolution from $|\\psi_i\\rangle$ into $|\\psi_f\\rangle$ is no longer the same as the probability to observe the evolution from $\\Theta|\\psi_f\\rangle$ to $\\Theta|\\psi_i\\rangle$ since the probabilities to find the system in the initial states\n$|\\psi_i\\rangle$ and $\\Theta|\\psi_f\\rangle$ are not the same away from thermal equilibrium. 
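The unitarity statement invoked above can be made explicit. For a time-reversal-invariant Hamiltonian one has $\Theta U(t_0)\Theta^{-1}=U^\dagger(t_0)$, and the antiunitarity of $\Theta$ then gives (a sketch in our notation):

```latex
\begin{equation*}
\langle \Theta\psi_i \,|\, U(t_0)\, \Theta\psi_f \rangle
  = \langle \psi_i \,|\, \Theta^{-1} U(t_0) \Theta \,|\, \psi_f \rangle^{*}
  = \langle \psi_i \,|\, U^{\dagger}(t_0) \,|\, \psi_f \rangle^{*}
  = \langle \psi_f \,|\, U(t_0) \,|\, \psi_i \rangle ,
\end{equation*}
```

so the two transition probabilities indeed coincide.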
However, it is often possible to write a simple relation between the two probabilities \\cite{crooks}. This, in turn, allows a derivation of numerous relations between correlation functions of observables. In this review we will be particularly interested in the correlation functions of electric currents.\n\nOne might think that all this is of no use in the absence of time-reversal symmetry, e.g., when an external magnetic field is applied. Indeed, most work on fluctuation relations assumes the time-reversal symmetry. However, it transpired recently that a number of nontrivial fluctuation relations hold even without such symmetry. One result \\cite{saito} connects systems that transform into each other under the action of the time-reversal operator. A family of relations has been found\nfor nonlinear transport coefficients of a system, close to thermal equilibrium, at two opposite directions of the magnetic field. Another result \\cite{wang1,wang2} applies even far away from equilibrium and connects non-linear response and noise at a fixed direction of the magnetic field.\nThe latter result holds in chiral systems. Since the word ``chirality'' has all too many meanings, we need to explain ours.\n\nBy chiral we mean systems, where excitations can propagate in one direction only, that is, either all excitations can propagate only clockwise or all excitations can only propagate counterclockwise. Such transport is known to occur on the edges of certain quantum Hall liquids and in some other systems. There has been much recent interest in chiral transport in the quantum Hall effect (QHE). The interest comes, in part, from the search for elusive neutral modes \\cite{chang}. Besides, the question of chirality proved relevant for the ongoing search for non-Abelian anyons \\cite{anyons} in quantum Hall states in the second Landau level (see Section 4). 
The more powerful fluctuation relations \\cite{wang1,wang2} in chiral systems originate from a stronger form of the causality principle for chiral transport. The standard causality principle states that the past is not affected by the future events. In the chiral case, in addition, one of the following two alternatives holds: either what happens on the right is not affected by the past events on the left or what happens on the left is not affected by the past events on the right.\n\nIn this review we address theoretical and experimental work on fluctuation relations in the absence of the time-reversal symmetry in chiral and non-chiral systems. Many questions remain open and we discuss future directions in Section 6. We also address the ongoing controversy\nabout microreversibility without the time-reversal symmetry.\n\n\n\nThe paper is organized as follows. In the second section we derive the quantum fluctuation theorem \\cite{fl2,tobiska05,andrieux06,esposito07,andrieux09,campisi10} that serves as a foundation for all subsequent discussion. In Section 3, we extract from that theorem the Saito-Utsumi relations \\cite{saito} for the nonlinear transport coefficients of identical systems in the opposite magnetic fields. We also address the verification of the Saito-Utsumi relations in microscopic models and the experimental results \\cite{exp1,exp2}. In Section 4 we introduce chiral systems with the emphasis on QHE. We derive fluctuation relations \\cite{wang1,wang2} for chiral systems in Section 5. Finally, we address open problems and summarize in Section 6.\n\n\\section{Fluctuation theorems}\n\n\n\n\\begin{figure}[b]\n\\centering \n\\includegraphics{fig_ft1.eps}\n\\caption{Schematics of a quantum open system. The system can exchange energy and particles with $r$ equilibrium reservoirs. 
Chemical potential difference or temperature difference between any two reservoirs will induce particle current and$\/$or heat current.}\\label{fig1}\n\\end{figure}\n\nIn statistical mechanics, physical quantities of interest usually undergo random fluctuations. {\\it Fluctuation theorems} are a class of exact relations for the distribution functions of those random fluctuations, regardless of whether the system under investigation is in an equilibrium or nonequilibrium state. There are fluctuation theorems for, e.g., entropy in driven isolated systems, work and heat in closed systems, {\\it etc}. Fluctuation theorems are typically of the form \\cite{fl1,fl2,fl3}\n\\begin{equation}\n P_F(x) = P_B(-x) \\exp[a(x-b)], \\label{ftgeneral}\n\\end{equation}\nwhere $x$ is the physical quantity or the collection of quantities under investigation, $ P_F(x)$ and $ P_B(x)$ are the distribution functions of $x$ in the so-called {\\it forward} and {\\it backward} processes, respectively, and the constants $a$ and $b$ are determined by the initial conditions of the two processes. The definitions of the forward and backward processes may differ slightly for different systems, but in general they follow these lines: 1) Initially in both processes, the system obeys a Gibbs distribution or can be factorized into several subsystems that obey Gibbs distributions and 2) the dynamical equations (e.g., the Schr\\\"odinger equation) in the backward process are obtained from the dynamical equations in the forward process by the time-reversal operation. Generally speaking, the system in the forward process and the system in the backward process should not be considered the same system because they follow different microscopic dynamical equations. However, for a system with the time-reversal symmetry, the dynamical equation is the same in the forward and backward processes.
Hence, provided that the initial Gibbs distributions in the two processes are the same, no difference exists between the two processes for a time-reversal invariant system, and thus the subscripts $F$ and $B$ in (\\ref{ftgeneral}) can be dropped.\n\nIn this review, we consider systems without the time-reversal symmetry and focus on a particular fluctuation theorem for energy and particle transport in quantum open systems. This fluctuation theorem is very useful for studying transport phenomena in systems without the time-reversal symmetry, and, particularly, in systems with chirality (see Sections 4 and 5). For a reader interested in other fluctuation theorems, many excellent reviews exist, for example, Refs. \\refcite{fl1,fl2,fl3}.\n\nThe approach of this section builds on Refs. \\refcite{andrieux09,campisi10} and closely follows Supplementary information to Ref. \\refcite{wang2}.\n\n\n\nLet us discuss the specific fluctuation theorem that we are interested in. We consider a setup shown in Fig.~\\ref{fig1}: the system in the center is coupled to $r$ reservoirs, each being in equilibrium. The system serves as a bridge, so that energy and particles can be transported between the reservoirs. This setup is commonly used in transport experiments with mesoscopic systems, such as quantum dots, quantum Hall bars, {\\it etc}. A {\\it forward} process can then be defined as follows. Initially, at $t\\le 0$, the system and reservoirs are decoupled. An interaction $\\mathcal V(t)$ that allows particle and energy exchange with the reservoirs is turned on at the times $0\\le t\\le \\tau$. The interaction is turned off at $t\\ge \\tau$. At $t\\le 0$, reservoir $i$ is at equilibrium with the inverse temperature $\\beta_i=1\/T_i$ and the chemical potential $\\mu_i=qV_i$, where $q$ is the charge of a charge carrier and $V_i$ the electric potential. We use only one set of chemical potentials and thus assume that only one carrier type, usually electrons, is present.
We assume that the size of the system is much smaller than that of the reservoirs, so its initial state is irrelevant in the $\\tau\\rightarrow \\infty$ limit, which we will take. It is convenient to regroup the system with one of the reservoirs\\cite{andrieux09}, for example, the $r$-th reservoir. The interaction $\\mathcal V(t)$ becomes a constant $\\mathcal V_0$ when fully turned on during $\\tau_0\\le t\\le \\tau-\\tau_0$. We assume that $\\tau_0\\ll\\tau $ and $\\tau$ is much longer than the relaxation time so that the system remains in a steady state during most of the time interval $\\tau$.\n\n\nWe now find the statistical distribution of the changes $\\Delta N_i=N_i(t=\\tau)-N_i(t=0)$ and $\\Delta E_i= E_i(t=\\tau) - E_i(t=0)$, where $N_i$ is the particle number and $E_i$ is the energy of the $i$-th reservoir. Let $\\mathcal H_i$ and $\\mathcal N_i$ be the Hamiltonian and particle number operators of the $i$-th reservoir ($\\mathcal H_r$ includes the system). The particle numbers are conserved in the absence of $\\mathcal V(t)$, i.e., $[\\mathcal H_i, \\mathcal N_i]=0$. Thus, the initial density matrix factorizes into a product of Gibbs distributions in each reservoir,\n\\begin{equation}\n\\rho_{n}=\\frac{1}{Z_0^+}\\prod_{i} e^{-\\beta_i[E_{in}-\\mu_i N_{in}]},\\label{gibbs}\n\\end{equation}\nwhere $Z_0^+$ is the initial partition function and the index $n$ labels the quantum state $|\\psi_n\\rangle$ with the reservoir energies $E_{in}$ and particle numbers $N_{in}$. Here, the ``+'' sign reminds us that we are studying the forward process. An initial joint quantum measurement of $\\mathcal H_i$ and $\\mathcal N_i$ is performed at $t=0$, so that the quantum state of the system collapses to a common eigenstate $|\\psi_n\\rangle$ with the probability $\\rho_n$.
The state $|\\psi_n\\rangle$ then evolves according to the evolution operator $ U(t;+)$, which satisfies\n\\begin{equation}\ni\\frac{d}{dt} U(t; +) = \\mathcal H (t; +) U(t;+), \\label{forwardpropagator}\n\\end{equation}\nwhere the Hamiltonian $\\mathcal H(t;+)=\\sum_{i} \\mathcal H_i +\\mathcal V(t)$, and the initial condition is $U(0;+)=1$. At $t=\\tau$, a second joint measurement is taken, leading to the collapse of the system to the state $|\\psi_m\\rangle$ with the reservoir energies $E_{im}$ and particle numbers $N_{im}$.\nThe probability to observe such a process is\n\\begin{equation}\nP[m,n]=|\\langle\\psi_m|U(\\tau;+)|\\psi_n\\rangle|^2\\rho_{n}.\n\\end{equation}\nHence, repeating the forward process, we obtain the joint distribution function of the energy and particle changes $\\Delta E_{i,mn}=E_{im}-E_{in}$ and $\\Delta N_{i,mn}=N_{im}-N_{in}$\n\\begin{align}\nP[\\Delta\\mathbf E, \\Delta \\mathbf N;+] = &\\sum_{mn}\\prod_{i=1}^r\\delta(\\Delta E_i - \\Delta E_{i,mn})\\delta(\\Delta N_i - \\Delta N_{i,mn}) \\nonumber\\\\ &\\times|\\langle\\psi_m|U(\\tau;+)|\\psi_n\\rangle|^2\\rho_{n} \\label{forwarddistri},\n\\end{align}\nwhere the vectors $\\Delta\\mathbf E=(\\Delta E_1, \\dots, \\Delta E_r)$ and $\\Delta\\mathbf N=(\\Delta N_1, \\dots, \\Delta N_r)$. Since the total energy and particle number are conserved, one finds that $\\sum_i\\Delta E_i=\\sum_i\\Delta N_i=0$. Energy conservation is only approximate because of the time-dependent interaction $\\mathcal V(t)$. However, in the limit $\\tau\\rightarrow \\infty$, the violation of energy conservation is negligible since \nthe time-dependence is relevant only during a short period of time $\\tau_0$.\n\n\n\n\n\nWe now study the {\\it backward} process. In that process, the initial temperatures and chemical potentials of the reservoirs are the same as those in the forward process.
Note that such initial temperatures and chemical potentials are not necessarily the same as the final thermodynamic parameters at $t= \\tau$ in the forward process for large but finite reservoirs. The time evolution operator $U(t;-)$ is determined by the equation\n\\begin{equation}\ni\\frac{d}{dt} U(t; -) = \\mathcal H (t; -) U(t;-), \\label{backwardpropagator}\n\\end{equation}\nwhere the Hamiltonian $\\mathcal H(t;-)=\\Theta \\mathcal H(\\tau -t;+)\\Theta^{-1}$ with $\\Theta$ being the time-reversal operator. Here, the ``$-$'' sign stands for the backward process. The $i$-th reservoir has the Hamiltonian $\\Theta\\mathcal H_i\\Theta^{-1}$. Clearly, $\\Theta|\\psi_m\\rangle$ is a common eigenstate of $\\Theta\\mathcal H_i\\Theta^{-1}$ and $\\mathcal N_i=\\Theta\\mathcal N_i\\Theta^{-1}$ with the eigenvalues $E_{im}$ and $N_{im}$, respectively.\nThe amplitude of the transition from the state $\\Theta|\\psi_m\\rangle$ to $\\Theta|\\psi_n\\rangle$ after the time $\\tau$ is $(\\langle\\psi_n|\\Theta^{-1}U(\\tau;-)\\Theta|\\psi_m\\rangle)^*$.\nHence, performing two quantum measurements at $t=0$ and $t=\\tau$ in the backward process, we find that the probability to observe the collapse to the state $\\Theta|\\psi_m\\rangle$ at $t=0$ and the collapse to the state $\\Theta|\\psi_n\\rangle$ at $t=\\tau$ is\n\\begin{equation}\nP[m,n]=|\\langle\\psi_n|\\Theta^{-1}U(\\tau;-)\\Theta|\\psi_m\\rangle|^2\\rho_{m},\n\\end{equation}\nwhere the initial density matrix $\\rho_{m}=\\prod_{i} e^{-\\beta_i[E_{im}-\\mu_i N_{im}]}\/Z_0^-$. It follows from the antiunitarity of $\\Theta$ that $Z_0^+=Z_0^-$, that is, the Gibbs distribution $\\rho_m$\nin the time-reversed basis is the same as Eq.~(\\ref{gibbs}). One then finds the distribution of the energy and particle number changes, similar to Eq.
(\\ref{forwarddistri}),\n\\begin{align}\nP[\\Delta\\mathbf E, \\Delta \\mathbf N;-] = &\\sum_{mn}\\prod_{i=1}^r\\delta(\\Delta E_i - \\Delta E_{i,nm})\\delta(\\Delta N_i - \\Delta N_{i,nm}) \\nonumber\\\\ &\\times |\\langle\\psi_n|\\Theta^{-1}U(\\tau;-)\\Theta|\\psi_m\\rangle|^2\\rho_{m},\\label{backwarddistri}\n\\end{align}\nwhere $\\Delta E_{i,nm}=E_{in}-E_{im}$ and $\\Delta N_{i,nm}=N_{in}-N_{im}$.\n\n\nWe want to relate the distribution functions (\\ref{forwarddistri}) and (\\ref{backwarddistri}), i.e., obtain the fluctuation theorem. The evolution operators have an important property\\cite{andrieux09}\n\\begin{equation}\n\\Theta^{-1}U(\\tau;-)\\Theta = U^\\dag(\\tau; +).\\label{eq-U}\n\\end{equation}\nThis can be seen by checking that both operators $\\Theta^{-1}U(t;-)\\Theta$ and $U(\\tau-t;+)$ satisfy the equation\n\\begin{align}\ni\\frac{d}{dt}V(t) = -\\mathcal H(\\tau -t;+)V(t). \\label{prop}\n\\end{align}\nIn terms of $\\Theta^{-1}U(t;-)\\Theta$, we find that given the initial condition $V(0)=\\Theta^{-1}U(0;-)\\Theta=1$, the final operator at $t=\\tau$ is $V(\\tau) = \\Theta^{-1}U(\\tau;-)\\Theta$. In terms of $U(\\tau-t;+)$, given the initial condition $V(0) = U(\\tau;+)$, we have $V(\\tau) = U(0; +)$. In other words, if we have the initial condition $V(0)=1=U^\\dagger(\\tau;+)U(\\tau;+)$, the final operator at $t=\\tau$ will be $V(\\tau ) = U^\\dag(\\tau;+)U(0;+)=U^\\dag(\\tau;+)$. Therefore, due to the uniqueness of the solution of Eq.~(\\ref{prop}), the property (\\ref{eq-U}) follows.\n\n\nNow, we combine Eqs.~(\\ref{forwarddistri}), (\\ref{backwarddistri}) and (\\ref{eq-U}), and obtain the fluctuation theorem\n\\begin{equation}\n\\frac{P[\\Delta\\mathbf E, \\Delta \\mathbf N;+]}{ P[-\\Delta\\mathbf E, -\\Delta \\mathbf N;-]}=\\prod_{i} e^{\\beta_i(\\Delta E_i-\\mu_i\\Delta N_i)} \\label{eq-fr}.\n\\end{equation}\nClearly, it is of the general form (\\ref{ftgeneral}).
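The structure of Eq. (\\ref{eq-fr}) can be illustrated by a toy classical analogue (a sketch, not the open-system derivation above): for a biased random walk obeying local detailed balance between two reservoirs at equal temperatures, the net transfer $\Delta N$ satisfies $P(\Delta N)=P(-\Delta N)e^{v\Delta N}$ with $v=(\mu_1-\mu_2)\/T$, which can be checked exactly:

```python
from math import comb, exp, isclose

def transfer_distribution(n, v):
    """Exact distribution of the net particle transfer after n attempts.

    Each attempt moves one particle forward with probability p and
    backward with probability 1-p, where local detailed balance fixes
    p/(1-p) = exp(v) and v = (mu_1 - mu_2)/T.  (Toy model, not the
    general open-system setup of the main text.)
    """
    p = exp(v) / (1.0 + exp(v))
    return {2*k - n: comb(n, k) * p**k * (1-p)**(n-k) for k in range(n+1)}

n, v = 9, 0.3
P = transfer_distribution(n, v)
# Fluctuation-theorem form: P(dN) / P(-dN) = exp(v * dN) for every dN.
for dN in P:
    assert isclose(P[dN], P[-dN] * exp(v * dN), rel_tol=1e-12)
```

This is precisely the form that Eq. (\\ref{eq-fr}) takes for particle transport at a uniform temperature.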
As mentioned above, in the case of time-reversal invariant systems, i.e., for $\\mathcal H(t;+)=\\Theta \\mathcal H(\\tau -t;+)\\Theta^{-1}$, the microscopic dynamical equations (\\ref{forwardpropagator}) and (\\ref{backwardpropagator}) are the same. Hence, the ``$+$'' and ``$-$'' can be dropped in the fluctuation theorem. However, in what follows, we will focus on systems without the time-reversal symmetry. Therefore, one has to keep in mind that the two distribution functions in (\\ref{eq-fr}) describe two different systems. In most of the following applications, the system in the backward process is realized by reversing the direction of the magnetic field $B$ which is present in the system in the forward process.\n\n\n\\section{Saito-Utsumi relations}\n\nThe Saito-Utsumi relations \\cite{saito} connect transport properties of a conductor in two opposite magnetic fields. They generalize quantum fluctuation relations for the electric current and noise in time-reversal invariant systems \\cite{fl2}.\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=3in]{fig_2terminal.eps}\n\\caption{ A conductor is placed in a magnetic field and connects two terminals at the same temperature and the voltage difference $V$. }\n\\label{AAA}\n\\end{figure}\n\nWe consider a conductor, connected to two reservoirs with the voltage difference $V$ and the same temperature $T$, in the presence of a magnetic field, Fig. \\ref{AAA}. It is possible to generalize to a multi-terminal case. Below we will only investigate the simplest two-terminal situation. We will derive the relations between various current correlation functions for two opposite orientations of the magnetic field. 
An infinite number of relations can be derived for an infinite number of correlation functions, but we will focus on the few simplest relations that are most relevant for the experiment.\n\nWe will be interested in three quantities:\n\n1) The electric current\n\n\\begin{equation}\n\\label{dima1}\nI=e\\langle\\Delta N\\rangle\/\\tau;\n\\end{equation}\n\n\n2) The noise power\n\n\\begin{equation}\n\\label{dima2}\nS=2e^2(\\langle \\Delta N^2\\rangle-\\langle\\Delta N\\rangle^2)\/\\tau;\n\\end{equation}\n\n3) The third cumulant\n\n\\begin{equation}\n\\label{dima3}\nC=\\frac{2e^3}{\\tau}[\\langle \\Delta N^3\\rangle-3\\langle\\Delta N\\rangle\\langle\\Delta N^2\\rangle+2\\langle\\Delta N\\rangle^3],\n\\end{equation}\nwhere $e<0$ is the electron charge, $\\tau$ the duration of the protocol, Section 2, $\\Delta N$ the number of electrons transferred between the reservoirs, and the angular brackets denote the average over the distribution $P(\\Delta N, B)$, where $B$ is the magnetic field.\nNote that $\\langle\\Delta N\\rangle\\rightarrow 0$ at $V\\rightarrow 0$ since there is no current at zero voltage. Below, we will only compute the noise $S$ up to the linear term in $V$. Thus, it is legitimate to omit the contribution $\\langle\\Delta N\\rangle^2$ in Eq. (\\ref{dima2}).\n\nThe fluctuation relations considered in this section were first derived in Ref. \\refcite{saito}. We follow a simpler derivation from Ref.
\\refcite{exp2}.\n\n\n\n\\subsection{Symmetric and antisymmetric variables}\n\nThe results are expressed most simply in terms of the symmetrized and antisymmetrized combinations of the currents and noises $I_{\\pm}=I(B)\\pm I(-B)$, $S_{\\pm}=S(B)\\pm S(-B)$ in the opposite magnetic fields $\\pm B$.\nWe introduce the Taylor expansions in powers of the voltage $V$\n\n\\begin{equation}\n\\label{dima4}\nI_+=G_1V+\\frac{G_2V^2}{2}+\\dots;\n\\end{equation}\n\n\\begin{equation}\n\\label{dima5}\nI_-=G_1^AV+\\frac{G_2^AV^2}{2}+\\dots;\n\\end{equation}\n\n\\begin{equation}\n\\label{dima6}\nS_+=S_0+S_1V+\\dots;\n\\end{equation}\n\n\\begin{equation}\n\\label{dima7}\nS_-=S_0^A+S_1^AV+\\dots;\n\\end{equation}\n\n\\begin{equation}\n\\label{dima8}\nC_-=C_0^A+\\dots.\n\\end{equation}\n\nThe Saito-Utsumi relations connect the above Taylor coefficients.\n\nAccording to the Onsager reciprocity relations \\cite{onsager,casimir}, the linear conductance is an even function of the magnetic field, $G_1(B)=G_1(-B)$. Hence, $G_1^A=0$. Next, the Nyquist formula \\cite{nyquist} implies that\nthe equilibrium noise power $S_0=4G_1T$ does not depend on the direction of the magnetic field and $S_0^A=0$. We will also see that the symmetrized third cumulant $C_+=C(B)+C(-B)$ is zero at $V=0$.\n\n\\subsection{Fluctuation relations for symmetric variables}\n\nWe will use the notation $P(\\Delta N,B)$ for the probability to transfer $\\Delta N$ electrons from the left reservoir to the right one during the protocol, Section 2. According to the fluctuation theorem (\\ref{eq-fr}),\n\n\\begin{equation}\n\\label{dima9}\nP(\\Delta N,B)=P(-\\Delta N,-B)e^{v\\Delta N},\n\\end{equation}\nwhere $v=eV\/T$.\nWe next introduce the symmetrized and antisymmetrized probabilities $P_{\\pm}(\\Delta N)=P(\\Delta N,B)\\pm P(\\Delta N,-B)$. Eq.
(\\ref{dima9}) then yields\n\n\\begin{equation}\n\\label{dima10}\nP_{\\pm}(\\Delta N)=\\pm P_{\\pm}(-\\Delta N)e^{v\\Delta N}.\n\\end{equation}\nWe also introduce the symmetrized and antisymmetrized averages $\\langle\\Delta N^k\\rangle_\\pm=\\sum P_{\\pm}(\\Delta N)\\Delta N^k$.\n\nEq. (\\ref{dima10}) for $P_+$ assumes exactly the same form as the fluctuation theorem for $P(\\Delta N,B=0)$. Thus, all fluctuation relations for the symmetric currents $I_+$ and noises $S_+$ are the same as the relations for the currents and noises in the absence of the magnetic field.\n\nNote that at $v=0$, Eq. (\\ref{dima10}) yields $P_+(\\Delta N)=P_+(-\\Delta N)$. Hence, $\\langle \\Delta N^{2k+1}\\rangle_+=\\sum P_{+}(\\Delta N)\\Delta N^{2k+1}$ is zero at $v=0$ for any odd $2k+1$.\nA trivial consequence of this relation is the absence of the average electric current in equilibrium, $I_+=e\\langle \\Delta N\\rangle\/\\tau=0$. We also find that $C_+=C(B)+C(-B)=0$.\n\nWe now expand in powers of $v$ the left and right hand sides of the relation\n\n\\begin{equation}\n\\label{dima11}\n\\langle\\Delta N\\rangle_+=\\sum_{\\Delta N}\\Delta N P_+(\\Delta N)=-\\sum_{\\Delta N} \\Delta N P_+(\\Delta N)e^{-v\\Delta N}.\n\\end{equation}\nAfter defining\n\n\\begin{equation}\n\\label{dima12}\n\\langle\\Delta N^k\\rangle_\\pm=N^\\pm_{k,0}+vN^\\pm_{k,1}+\\frac{v^2}{2}N^\\pm_{k,2}+\\dots\n\\end{equation}\nwe obtain $N^+_{2,0}=2N^+_{1,1}$ and $N^+_{1,2}=N^+_{2,1}$, where we used the fact that $N^+_{3,0}=0$. Comparing with Eqs. (\\ref{dima4}) and (\\ref{dima6}), we finally obtain\n\n\\begin{equation}\n\\label{dima13}\nS_0=4G_1T;\n\\end{equation}\n\\begin{equation}\n\\label{dima14}\nS_1=2TG_2.\n\\end{equation}\nThe first equation is nothing but the Nyquist formula. Eq. (\\ref{dima14}) goes beyond the standard fluctuation-dissipation relation since it contains the nonlinear transport coefficient $G_2$. 
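These identities are easy to check numerically. As an illustration (with a hypothetical even weight $g$, not taken from any physical model), any distribution of the form $P_+(\Delta N)\propto g(\Delta N)\,e^{v\Delta N\/2}$ with even $g$ obeys the symmetry (\\ref{dima10}), and the relation $N^+_{2,0}=2N^+_{1,1}$ behind Eq. (\\ref{dima13}) can be verified directly:

```python
from math import exp, isclose

# Hypothetical even weight g(dN); any such choice yields a family
# P(dN) proportional to g(dN)*exp(v*dN/2) obeying P(dN) = P(-dN)*exp(v*dN).
g = {-2: 1.0, -1: 3.0, 0: 5.0, 1: 3.0, 2: 1.0}

def mean_dN(v):
    """<dN> for the normalized distribution at reduced voltage v."""
    w = {dN: gv * exp(v * dN / 2) for dN, gv in g.items()}
    Z = sum(w.values())
    return sum(dN * wv for dN, wv in w.items()) / Z

h = 1e-6
# N+_{1,1} = d<dN>/dv at v=0, via a central difference.
N11 = (mean_dN(h) - mean_dN(-h)) / (2 * h)
# N+_{2,0} = <dN^2> at v=0.
N20 = sum(dN**2 * gv for dN, gv in g.items()) / sum(g.values())
assert isclose(N20, 2 * N11, rel_tol=1e-6)  # i.e. S_0 = 4 G_1 T
```

A similar finite-difference check at the next order reproduces $N^+_{1,2}=N^+_{2,1}$, i.e. Eq. (\\ref{dima14}).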
Since that coefficient is nonzero in general, the noise $S=S_0+VS_1+\\dots$ is minimal at a nonzero voltage, Fig. \\ref{BBB}.\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=3in]{fig_parabola.eps}\n\\caption{Noise is minimal at a nonzero $V$.}\n\\label{BBB}\n\\end{figure}\n\n\n\\subsection{Fluctuation relations for antisymmetric variables}\n\nWe now turn to the new results that distinguish systems in a magnetic field from the time-reversal invariant situation.\nFirst, consider the identity\n\n\\begin{equation}\n\\label{dima15}\n\\langle \\Delta N\\rangle_-=\\sum_{\\Delta N}P_-(\\Delta N)\\Delta N=\\sum_{\\Delta N}P_-(\\Delta N)\\Delta N e^{-v\\Delta N}.\n\\end{equation}\nThe Taylor expansion of the left and right hand sides yields\n\n\\begin{equation}\n\\label{dima16}\nN^-_{3,0}=2N^-_{2,1}.\n\\end{equation}\nThis is equivalent to\n\\begin{equation}\n\\label{dima17}\nS_1^A=\\frac{C_0^A}{2T}.\n\\end{equation}\n\nThe normalization of probability implies\n\n\\begin{equation}\n\\label{dima18}\n\\sum P_-(\\Delta N)=0=-\\sum P_-(\\Delta N)e^{-v\\Delta N}.\n\\end{equation}\nOne obtains from the expansion of the right hand side in powers of $v$\n\n\\begin{equation}\n\\label{dima19}\n3N^-_{1,2}-3N^-_{2,1}+N^-_{3,0}=0.\n\\end{equation}\nThis reduces to\n\n\\begin{equation}\n\\label{dima20}\nS_1^A-2TG_2^A=C_0^A\/3T.\n\\end{equation}\n\nEquations (\\ref{dima17}) and (\\ref{dima20}), first derived in Ref. \\refcite{saito}, are the main results of Section 3.\n\n\n\\subsection{Microreversibility}\n\nThe crucial assumption behind our derivation is microreversibility: we assume that if the magnetic field, the velocities of all particles, and their spins are all reversed, then the system traces its evolution backwards.\nAs was pointed out in Refs.
\\refcite{forster08,forster09}, such an assumption is counterintuitive in mesoscopic systems without time-reversal symmetry.\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=3in]{fig_box.eps}\n\\caption{Trajectories of charged particles connect different holes in opposite magnetic fields.}\n\\label{CCC}\n\\end{figure}\n\nConsider a mesoscopic conductor, connected to several infinite reservoirs, maintained at different voltages. A standard way to calculate currents and noises is based on the Landauer-B\\\"uttiker formalism \\cite{datta}. One first finds the self-consistent charge density $\\rho({\\bf r})$ as a function of the coordinates in the conductor. The charge distribution creates a self-consistent electrostatic potential. The currents and noises can be computed from the scattering theory for free particles, entering the mesoscopic conductor from the reservoirs, and moving in the self-consistent potential in the conductor. While this picture is compatible with microreversibility at zero magnetic field, there is a conflict at a finite $B$. Indeed, $\\rho({\\bf r},B)$ is not the same as the charge distribution $\\rho({\\bf r},-B)$ in the opposite field. Fig. \\ref{CCC} illustrates why. In that oversimplified example, charged particles can enter a square box through holes 1 and 3 only. For one direction of the magnetic field, particles from hole 1 move into hole 2 and particles from hole 3 move into hole 4. For the opposite field, the particle trajectories connect hole 1 with 4 and 2 with 3. Since the charge density is nonzero only on those trajectories, $\\rho({\\bf r},B)\\ne\\rho({\\bf r},-B)$. The self-consistent electrostatic potentials depend on $\\rho$ and are different in the opposite magnetic fields. Consider now a scattering process in which a particle with the momentum ${\\bf k}_i$ acquires the momentum ${\\bf k}_f$ at the magnetic field $B$. The scattering amplitude depends on the self-consistent electrostatic potential $\\phi({\\bf r},B)$.
The scattering probability for the time-reversed process at the field $-B$ describes the transition from the state with the momentum $-{\\bf k}_f$ into the state with the momentum $-{\\bf k}_i$. It depends on a different self-consistent potential $\\phi({\\bf r},-B)$ and hence is different from the scattering probability ${\\bf k}_i\\rightarrow{\\bf k}_f$ in the field $B$.\nSuch asymmetry of the self-consistent potential is closely related to the physics of rectification in mesoscopic conductors \\cite{rec1,rec2,rec3,rec4,rec5,rec6,rec7}.\n\nThe above argument does not by itself disprove microreversibility. Indeed, it is a mean-field argument which deals with single-particle states. On the other hand, the proof of the fluctuation theorem, Section 2, considers many-particle states. The unitarity of the evolution operator in quantum mechanics shows that the transition probability between the many-body states of the whole system, $P({\\rm initial~state~}\\rightarrow{\\rm~final~state},B)$, always equals $P({\\rm time-reversed~final~state~}\\rightarrow{\\rm~time-reversed~initial~state},-B)$. At each point, the charge densities in the forward and backward processes remain exactly the same at the corresponding moments of time. Hence, the electrostatic potentials do, in fact, remain the same. The mean-field argument is based on the different charge distributions in the steady states in the opposite magnetic fields. However, if one reverses time in a system in a steady state, the charge distribution does not change and hence is not the steady-state distribution of the electric charge in the opposite magnetic field.\n\nWe would like to emphasize that time-reversal in an interacting macroscopic system is not a mathematical fiction.
It was demonstrated experimentally long ago in the context of NMR \\cite{time1,time2}.\n\nSeveral groups have verified the fluctuation relations at a finite magnetic field without the use of microreversibility.\nThe Saito-Utsumi relations were confirmed by microscopic calculations beyond the Landauer-B\\\"uttiker formalism in two models \\cite{saito09,nasb10}. Note also that an approximate calculation beyond the mean-field theory in\nRef. \\refcite{lim10} agrees with Eq. (\\ref{dima14}). The fluctuation relations for a general chiral system from Section 5 below were derived both from microreversibility \\cite{wang2} and without its use \\cite{wang1}. Recent experiments \\cite{exp1,exp2} were interpreted as supporting the fluctuation relations \\cite{saito} (see the next subsection).\nWhile all this gives credibility to the approach, based on microreversibility, one must remember some subtleties.\n\nAll models in Refs. \\refcite{saito09,nasb10,lim10,wang1} assume a finite range of the electrostatic interaction. This implies the presence of screening gates. The gates are crucial for the fluctuation relations in chiral systems \\cite{wang1,wang2} since it is meaningful to speak about chiral transport only in systems with short-range interactions (Section 4). At the same time, the Saito-Utsumi relations do not make assumptions about the range of interactions. Yet, our derivation of the fluctuation theorem, Section 2, implicitly assumes short-range forces. Indeed, in the presence of the long-range Coulomb interaction, the reservoirs are not independent even in the beginning of the forward and backward processes and cannot be described by the Gibbs distribution. Certainly, this is a rather standard issue. It can be resolved by splitting the Coulomb force into a finite-range part with some large but finite interaction radius and the long-range part which must be treated in the mean-field approximation. 
This allows using the Gibbs distribution but brings back the problem encountered in the Landauer-B\\\"uttiker approach. Indeed, the average charge densities are not the same at the opposite field orientations and hence the mean-field effective long-range potentials are not the same in the forward and backward processes. Fortunately, if the reservoirs are sufficiently large and the radius of the finite-range interaction is selected large enough, one can see that the difference of the long-range potentials can be neglected.\n\nAnother issue equally affects systems with and without the time-reversal symmetry. Our derivation, Section 2, assumes that the system is isolated. This may not be easy to accomplish in practice. Moreover, if this has been accomplished, then no experiments can be performed since any measurement device disturbs the system. Fortunately, energy exchange with the outside world turns out not to be a problem provided the temperature of the environment is the same as the temperature of the system of interest. Indeed, let us include the environment as an additional reservoir and repeat the derivation of Eq. (\\ref{eq-fr}). The energy changes $\\Delta E_i$ of the reservoirs in the forward process enter Eq. (\\ref{eq-fr}) in the form $\\sum\\beta_i\\Delta E_i$. This combination is zero at equal $\\beta_i$ by energy conservation.\nThus, all $\\Delta E_i$, including the energy absorbed by the environment, drop out from Eq. (\\ref{eq-fr}). On the other hand, the electrostatic potential of the environment drops out only in the absence of the particle exchange with the outside world. In fact, the fluctuation theorem Eq. (\\ref{eq-fr}) may break down even if charges are transferred between different regions of the environment in the absence of the particle exchange with the system of interest\n\\cite{exp3,exp4}. 
This issue has been a major difficulty in the experiments on fluctuation relations in mesoscopic conductors, as we discuss in the next subsection.\n\n\n\\subsection{Experiment}\n\nNote that Eq. (\\ref{dima17}) crucially depends on microreversibility while Eq. (\\ref{dima20}) does not require such an assumption, as shown in Ref. \\refcite{forster08}, see also Ref. \\refcite{s09}. Thus, the verification of Eq. (\\ref{dima17}) is particularly interesting. On the other hand, it is easier to measure noises and currents than the third cumulant \\cite{reznikov}. As a result, recent experiments have focused on the verification of a consequence of Eqs. (\\ref{dima17},\\ref{dima20}):\n\n\\begin{equation}\n\\label{dima21}\nS_1^A=6TG_2^A.\n\\end{equation}\n\nRefs. \\refcite{exp1,exp2} tested Eqs. (\\ref{dima21}) and (\\ref{dima14}) in a mesoscopic interferometer. Ref. \\refcite{exp1} observed $S_1^A\/[TG_2^A]=8.7^{+1.3}_{-0.7}$ and Ref. \\refcite{exp2} obtained $S_1^A\/[TG_2^A]=9.7^{+1.3}_{-1.2}$ in satisfactory agreement with Eq. (\\ref{dima21}). This was interpreted as a proof of microreversibility.\nOn the other hand, the results for $S_1$, $S_1\/TG_2=10.8^{+2.4}_{-1.4}$, Ref. \\refcite{exp1}, and $S_1\/TG_2=12.0^{+1.9}_{-2.0}$, Ref. \\refcite{exp2}, are incompatible with (\\ref{dima14}). The reasons for the discrepancy between theory and experiment are unclear. A related experiment without a magnetic field may provide some hints.\n\nA violation of the fluctuation theorem (\\ref{eq-fr}) was found in a single-electron tunneling experiment through a double quantum-dot system \\cite{exp3,exp4}. The violation has been explained by a careful analysis of the experimental circuit. The circuit included a quantum point contact\n(QPC) electrometer, used as a measurement device. When a finite voltage bias is applied to the quantum point contact, the fluctuation theorem (\\ref{eq-fr}) must be modified. 
The right-hand side now contains an additional factor $\\exp(QV_{QPC}\/T)$, where $V_{QPC}$ is the bias at the QPC and $Q$ is the charge that travels through the QPC during the forward process. The derivation of the modified relation is exactly the same as our argument in Section 2. One just needs to include the QPC, connected to two reservoirs at different electrochemical potentials, into the system under consideration. This interpretation is supported by the fact that a modified fluctuation relation was found to hold in the experiment \\cite{exp1,exp2,sinitsyn}.\n\n\n\\section{Chiral systems}\n\nThe Saito-Utsumi relations from the previous section are very general and apply to any conductor in a magnetic field. This generality comes at a price. Indeed, the relations connect the coefficients in the expansions of the currents and noises in powers of the voltage. Thus, they only apply at low voltages, that is, close to equilibrium. One can overcome this limitation\nby looking at so-called chiral systems. In such systems all excitations propagate in one direction only, for example, only to the right.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3in]{fig_idealgas.eps}\n\\caption{Ideal gas in a reservoir with a tube (from Ref. 30).}\n\\label{idealgas}\n\\end{figure}\n\nThe simplest example of a chiral system is illustrated in Fig.~\\ref{idealgas}. A box is filled with an ideal gas and placed in vacuum. A narrow tube with an open end and smooth walls is attached to the box. Particles can leave the box through the tube but do not come back. Thus, the transport in the tube is chiral.\n\nThe above example is too simple to be interesting other than as a toy model. Several more interesting examples of chiral transport are known. For example, statistical mechanics has been used to model traffic \\cite{helbig}. Transport is chiral on a network of one-way roads. In some biological systems, transport can only occur in one direction. 
Most importantly, chiral transport takes place on the edges and surfaces of some topological states of matter.\n\n\\subsection{Chiral transport in topological systems}\n\nChiral transport occurs in several topological systems without the time-reversal symmetry. For example, the surface of a 3D stack of integer quantum Hall liquids is chiral \\cite{3D1,3D2,3D3}. Transport is chiral on the edges of\n$p\\pm ip$ superconductors \\cite{anyons,green,ivanov}.\nBesides, many states of the two-dimensional electron gas in the conditions of the quantum Hall effect \\cite{perspectives} (QHE) are chiral. So far, most research on chiral transport has been focused on those QHE states.\n\nWe expect that the bulk of a quantum Hall system is gapped (see, however, Refs. \\refcite{4-3,heiblum14}). Thus, low-energy transport occurs on the edges. Wen's hydrodynamic theory \\cite{wen} predicts that in many cases the low-energy transport is chiral.\nThe integer QHE with the filling factor $\\nu=1$ is the simplest example. Wen's theory describes the edge physics in terms of a single field $\\phi$, where $\\partial_x\\phi$ is proportional to the linear charge density. The action assumes the form\n\n\\begin{equation}\n\\label{dima22}\nL=\\int dxdt[\\partial_t\\phi\\partial_x\\phi-v(\\partial_x\\phi)^2].\n\\end{equation}\nThe solution of the equation of motion, $\\phi=\\phi(x+vt)$, describes excitations that move only to the left.\n\nGeneralizations of the action (\\ref{dima22}) also predict \\cite{chang,wen} chiral transport at all other integer filling factors and in many fractional QHE states, including the states with the filling factors $\\nu=1\/3$ and $\\nu=2\/5$.\nSome other QHE states are not expected to be chiral. This point can be easily understood by considering $\\nu=2\/3$. 
The $\\nu=2\/3$ liquid can be described as the $\\nu=1\/3$ state of holes on top of the $\\nu=1$ state of electrons.\nThus, its edge theory contains two excitation branches, corresponding to $\\nu=1$ and $\\nu=1\/3$, with the opposite chirality of the electron and hole edges.\n\nSuch a description of the $2\/3$ edge conflicts with the experiment, which shows only one propagation direction for charged excitations. This was explained by Kane, Fisher and Polchinski \\cite{kfp}, who uncovered the nontrivial role of impurities, which are inevitably present along QHE edges. Impurities promote tunneling between the contra-propagating edge channels and change the nature of the edge modes: all charge excitations move in the same direction, called downstream; in addition, a neutral excitation branch of the opposite, i.e., upstream, chirality emerges.\n\nA similar picture is expected to apply in several other QHE states. Detecting upstream neutral modes proved to be a great challenge and only recently has progress been reported in the field \\cite{36,deviatov,yacoby}. We will see that the fluctuation relations from Section 5 give a tool to test the presence of upstream neutral modes. Confusingly, there were recent reports of upstream modes in the $1\/3$ and $2\/5$ states and even at the integer filling factors \\cite{heiblum14,deviatov,yacoby}. Thus, an independent test of chirality on the edges is of great importance.\n\n\\subsection{Non-Abelian quantum Hall states}\n\nThe question of chirality on QHE edges also touches upon the ongoing search for non-Abelian anyons \\cite{anyons}. Quasiparticles in fractional QHE states are known to be anyons with exchange statistics different from those of bosons and fermions \\cite{wen}. The simplest Abelian anyons accumulate nontrivial phases when they encircle each other. The many-body quantum state of a system of Abelian anyons does not change, apart from an overall phase, after one of them makes a circle around another adiabatically. 
Hypothetical non-Abelian anyons \\cite{anyons,2} change their quantum state after one particle is braided around another. This does not involve a change of the internal quantum numbers of any particle and reflects the fact that the information about a quantum state of an anyonic system is distributed over the whole system. This property makes non-Abelian anyons attractive for topological quantum information processing, naturally protected from errors \\cite{anyons,6}. Indeed, local perturbations from the interaction with the environment cannot change or erase quantum information that is stored globally.\n\nThe most promising candidate for non-Abelian anyons is the QHE state \\cite{willett87} at $\\nu=5\/2$. At the same time, there are many competing Abelian and non-Abelian candidate states at that filling factor \\cite{2,3,4,5,8,19,BS,26,Overbosch,yang} and the existing body of experiments does not allow the determination of the right state at this point, see Ref. \\refcite{yang} for a review. Interferometry\n \\cite{Stern10,chamon97,fradkin98,mz,dassarma05,11,12,13,14,hou06,grosfeld06,15,willett09,bishara09,willett10,fp331,16,kang,double,rosenow12} is the most direct approach but its implementation has faced significant difficulties. This motivated the search for non-interferometric ways to obtain information about the $5\/2$ state\n \\cite{yang,feldman08,viola12,overbosch09,CS,seidel09,yang09,wang10a,mstern10,rhone11,tiemann12,mstern12,chickering13,20,21}.\nNo single non-interferometric method would provide a direct observation of anyonic statistics but their combination may be sufficient to identify the correct state. In particular, some of the proposed states are chiral while others are not. This makes a chirality test, based on the fluctuation relations from Section 5, a useful tool in the search for non-Abelian particles.\n\n\\subsection{Chirality and causality}\n\nIn chiral systems transport occurs in one direction only. This includes the transport of information. 
Thus, the causality principle is enhanced. In addition to the requirement that future events do not affect the past, we also expect that downstream events do not affect upstream events even in the future. Such a modified causality principle allows one to generalize the Nyquist formula for nonlinear transport far from equilibrium (Section 5). Indeed, the standard derivation of the fluctuation-dissipation theorem (FDT) is based on the combination of the Gibbs distribution and linear response theory. Causality is the key ingredient of the latter theory. The enhanced causality principle allows a derivation of a generalized FDT even without the use of the Gibbs distribution, that is, far from equilibrium.\n\nNote that our definition of chirality assumes the absence of long-range forces \\cite{wang1}. Otherwise, such forces could mediate instantaneous information exchange between distant points so that downstream events would affect upstream events. In particular, we assume the presence of screening gates in chiral systems of charged particles.\n\n\n\\section{Fluctuation relations in chiral systems}\n\nIn this section we focus on chiral systems. We discover that fluctuation relations assume a form \\cite{wang1,wang2} similar to the equilibrium FDT but hold for nonlinear transport away from equilibrium.\nWe start with a heuristic derivation in Subsection 5.1. In Subsection 5.2 we verify the nonequilibrium FDT in the toy model, Fig.~\\ref{idealgas}. We then give a general proof in Subsection 5.3. Numerous generalizations \\cite{wang2} and possible applications are briefly addressed in Subsections 5.4 and 5.5.\n\nOne closely related fluctuation relation was derived in Refs. \\refcite{kf,FLS,FS} in an exactly solvable model. 
Interestingly, that model could be used to describe both a chiral system in the context of QHE physics and a nonchiral quantum wire.\nOur results show that the integrability of the Hamiltonian is not required for the existence of an infinite number of fluctuation relations in chiral systems. At the same time, chirality is crucial. The applicability of the results of Refs. \\refcite{kf,FLS,FS} in a nonchiral system is a peculiar feature of the exactly solvable model.\n\n\n\n\\subsection{Qualitative argument}\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=3in]{fig_3terminal1.eps}\n\\caption{ From Ref. 30. Three-terminal setup of a chiral system. We consider a quantum Hall bar with the lower edge coupled to terminal C. Charge can tunnel between terminal C and the lower edge. The\n solid lines represent chiral edge modes whose directions are shown by arrows and determined by the external magnetic field. The arrow on the dotted line represents our\nconvention about the positive direction of the tunneling current $I_{\\rm T}$. }\n\\label{3terminal}\n\\end{figure}\n\nIn the next three subsections, we only consider the simplest example of the nonequilibrium FDT\\cite{wang1,wang2}. The extended causality principle (Subsection 4.3)\nwill be extensively used in the derivation of these results. Many more generalized nonequilibrium fluctuation relations, e.g., fluctuation relations in a\nmulti-terminal setup and for heat transport, can be derived\\cite{wang2} but their discussion is postponed to Subsection 5.4.\n\nLet us study the nonequilibrium FDT in the three-terminal system shown in Fig.~\\ref{3terminal}. It is a Hall bar with one of the two chiral edges coupled to a\nthird terminal. The three terminals [the source (S), the drain (D) and the third contact (C)] are assumed to be ideal reservoirs that have zero impedance\nand infinitely large capacitance. The bulk of the Hall bar is gapped and the edges are chiral. 
The strength of the coupling\nbetween the Hall bar and the third contact is unimportant. We assume that the two chiral edges are fully absorbed by source S at the left end\nand drain D at the right end, and they are far apart so that they do not interfere with each other.\nSource S is biased at the voltage $V_{\\rm S}$, contact C is at $V_{\\rm C}$, and the drain is grounded. Let the\ntemperature of the system be $T$. Steady currents flow between the terminals, in particular, charge tunnels into terminal C, leading to a tunneling current\n$I_{\\rm T}$. We will show that the following relation holds\n\\begin{equation}\n\\label{chenjie1}\nS_{\\rm D} = S_{\\rm C} - 4T\\frac{\\partial I_{\\rm T}}{\\partial V_{\\rm S}} + 4GT,\n\\end{equation}\nwhere $S_{\\rm D}=\\int dt \\langle \\Delta I_{\\rm D}(t) \\Delta I_{\\rm D}(0) + \\Delta I_{\\rm D}(0) \\Delta I_{\\rm D}(t)\\rangle$ is the zero-frequency noise of\nthe drain current $I_{\\rm D}$, $S_{\\rm C}=\\int dt \\langle \\Delta I_{\\rm T}(t) \\Delta I_{\\rm T}(0) + \\Delta I_{\\rm T}(0) \\Delta I_{\\rm T}(t)\\rangle$ is the\nzero-frequency noise of the tunneling current $I_{\\rm T}$, and $G$ is the quantized Hall conductance in the absence of contact C. The FDT (\\ref{chenjie1})\nholds for arbitrary $T$, $V_{\\rm S}$ and $V_{\\rm C}$ as long as they are far below the QHE gap. Note that the system is far from equilibrium when $T\n\\lesssim V_{\\rm C}, V_{\\rm S}$.\n\nBelow we give a heuristic derivation of the nonequilibrium FDT (\\ref{chenjie1}) following Ref.~\\refcite{wang1}. As shown in Fig.~\\ref{3terminal},\nthe current $I_{\\rm D} = I_{\\rm L} - I_{\\rm U}$, where $I_{\\rm L}$ is the current, entering the drain along the lower edge, and $I_{\\rm U}$ is the current,\nemitted from the drain along the upper edge. 
Since the system is in a steady state and no charge is accumulated on the lower edge, the low-frequency part\nof $I_{\\rm L}$ can be written as $I_{\\rm L} = I_{\\rm S} - I_{\\rm T}$, where $I_{\\rm S}$ is the current, emitted from the source. Because the two edges are\nuncorrelated, the low-frequency noises obey the relation\n\\begin{equation}\n\\label{chenjie2}\nS_{\\rm D} = S_{\\rm C} -2 S_{\\rm ST} + S_{\\rm S} + S_{\\rm U},\n\\end{equation}\nwhere $S_{\\rm ST}=\\int dt \\langle \\Delta I_{\\rm T}(t) \\Delta I_{\\rm S}(0) + \\Delta I_{\\rm S}(0) \\Delta I_{\\rm T}(t)\\rangle$ is the cross noise of $I_{\\rm\nS}$ and $I_{\\rm T}$, and $S_{\\rm S}$ and $S_{\\rm U}$ are the noises of $I_{\\rm S}$ and $I_{\\rm U}$ respectively. To derive (\\ref{chenjie1}) from\n(\\ref{chenjie2}), let us first find $S_{\\rm S}$ and $S_{\\rm U}$. To do this, it is enough to consider a simplified case, where contact C is absent.\nIn that case, we have an obvious result: both $S_{\\rm S}$ and $S_{\\rm U}$ are equal to one half of the standard Nyquist noise, that is\n\\begin{equation}\n\\label{chenjie3}\nS_{\\rm S}= S_{\\rm U}= 2GT.\n\\end{equation}\nCrucially, in the presence of contact C, Eq.(\\ref{chenjie3}) still holds. We notice that adding contact C does not affect the noise of $I_{\\rm S}$\nbecause of the extended causality principle in chiral systems. Also, it does not affect the noise of $I_{\\rm U}$ because of the assumption that the two edges are\nuncorrelated. Hence, the noises $S_{\\rm S}$ and $S_{\\rm U}$ are given by Eq. (\\ref{chenjie3}) even in the presence of terminal C.\n\nWe are left with the calculation of the cross noise $S_{\\rm ST}$. Let us analyze the dependence of the tunneling current $I_{\\rm T}(t)$ on the emitted\ncurrent $I_{\\rm S}$. The tunneling current $I_{\\rm T}$ depends on the average emitted current $\\bar I_{\\rm S}$ and its fluctuations $I_{\\omega}$, where\n$\\omega$ denotes the fluctuation frequency. 
According to the extended causality principle, the average emitted current $\\bar I_{\\rm S}$\ndepends only on $V_{\\rm S}$ and is not affected by terminal C. In other words, $\\bar I_{\\rm S}= G V_{\\rm S}$. We now assume that the central part of the lower edge has a relaxation time $\\tau$, so that\nthe instantaneous value of $I_{\\rm T}(t)$ depends only on the emitted current within the time interval $\\tau$. It is convenient to separate the\nfluctuations of $I_{\\rm S}$ into the fast part $I^>$ which contains the fluctuations of the frequencies above $1\/\\tau$, and the slow part $I^<$ which contains\nthe fluctuations of the frequencies below $1\/\\tau$. Within the time interval $\\tau$, $I^<$ does not exhibit a time-dependence. Hence $I^<$ enters the expression for\nthe tunneling current $I_{\\rm T}(t)$ in the combination $\\bar I_{\\rm S}+I^<$ only, $I_{\\rm T} = \\langle I(\\bar I_{\\rm S}+I^<, I^>)\\rangle$, where the\nbrackets denote the average with respect to the fluctuations of $I_{\\rm S}$. According to the Nyquist formula for the emitted current, its harmonics with\ndifferent frequencies have zero correlations $\\langle I_{\\omega} I_{-\\omega'} \\rangle\\sim \\delta(\\omega-\\omega')$. For the sake of the heuristic argument,\nwe will assume a Gaussian distribution of $I_{\\rm S}$, hence, independence of the high- and low-frequency fluctuations. With this assumption, we\naverage over the fast fluctuations and write $I_{\\rm T} = \\langle J(\\bar I_{\\rm S}+I^<)\\rangle$, where $J$ is obtained by averaging over $I^>$. $I^<$\ncorresponds to a narrow frequency window and can be neglected in comparison with $\\bar I_{\\rm S}$, i.e., the average tunneling current $I_T= J(\\bar I_{\\rm\nS})$. 
For the calculation of the cross noise, we expand $J(\\bar I_{\\rm S}+I^<)$ to the first order in $I^<$ and obtain\n\\begin{equation}\n\\label{chenjie4}\nS_{\\rm ST} = \\langle I_{\\rm T,\\omega}I_{-\\omega} + I_{\\rm T, -\\omega} I_{\\omega}\\rangle = \\frac{\\delta J(\\bar I_{\\rm S})}{\\delta \\bar I_{\\rm S}} \\langle\nI_\\omega I_{-\\omega} + I_{-\\omega} I_{\\omega}\\rangle = 2T \\frac{\\partial I_{\\rm T}}{\\partial V_{\\rm S}},\n\\end{equation}\nwhere we have used the result $S_{\\rm S} =\\langle I_\\omega I_{-\\omega} + I_{-\\omega} I_{\\omega}\\rangle=2GT$, and $\\delta \\bar I_{\\rm S}= G\\delta V_{\\rm S}$. Combining the results (\\ref{chenjie2})-(\\ref{chenjie4}), the nonequilibrium FDT (\\ref{chenjie1}) is easily obtained.\n\nWe have seen that chirality plays an important role in the derivation of Eqs. (\\ref{chenjie3}) and (\\ref{chenjie4}). In this heuristic derivation, an unnecessary\nassumption of Gaussian fluctuations of $I_{\\rm S}$ has been made. In Subsection 5.3, the nonequilibrium FDT will be derived from the fluctuation theorem (\\ref{eq-fr}) without that\nassumption.\n\n\\subsection{Toy model}\n\n\nBefore moving to a general proof of the nonequilibrium FDT (\\ref{chenjie1}), let us verify it in a toy model of an ideal gas\n(Fig.~\\ref{idealgas}). Our discussion will follow the appendix to Ref. \\refcite{wang1}.\n\nConsider a large reservoir of an ideal gas of noninteracting molecules at the temperature $T$ and chemical potential $\\mu$.\nMolecules can leave the reservoir through a narrow tube with smooth walls. By smooth we mean walls such that the collisions of the molecules with them are elastic and\ndo not change the velocity projection along the tube axis. Thus, molecules can only leave the reservoir but never come back. The system is chiral. Imagine\nnow that molecules can escape through a side hole in the wall of the tube (Fig.~\\ref{idealgas}). 
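Anticipating the relation derived below, note that the toy-model analogue of Eq.~(\\ref{chenjie1}) can be verified energy window by energy window. The following sketch is a consistency check, not part of the argument: it assumes the standard binomial partition-noise expressions for a Fermi gas (they reappear in the derivation below, following Ref.~\\refcite{martin}) and the identity $\\partial f\/\\partial\\mu = f(1-f)\/T$:

```python
from math import exp

def fermi(E, mu, T):
    """Fermi distribution f = 1/(exp((E - mu)/T) + 1) in the box."""
    return 1.0 / (exp((E - mu) / T) + 1.0)

def fdt_residual(E, mu, T, TE):
    """Residual of the chiral FDT for one energy window.

    Assumes the standard partition-noise expressions for a Fermi gas
    with transmission TE through the side hole (per energy window dE).
    """
    f = fermi(E, mu, T)
    S_S = 2 * f * (1 - f)                        # noise of the emitted current
    S_C = 2 * TE * f * (1 - TE * f)              # noise of the tunneling current
    S_D = 2 * (1 - TE) * f * (1 - (1 - TE) * f)  # noise at the open end of the tube
    dIT_dmu = TE * f * (1 - f) / T               # d(TE*f)/dmu, since df/dmu = f(1-f)/T
    # S_D = S_C - 4T dI_T/dmu + S_S should hold identically
    return S_D - (S_C - 4 * T * dIT_dmu + S_S)

# the identity holds for any energy, chemical potential, temperature, transmission
for args in [(1.0, -3.0, 0.5, 0.3), (0.2, -1.0, 1.0, 0.9), (2.0, 0.5, 2.0, 0.05)]:
    assert abs(fdt_residual(*args)) < 1e-12
```

Because the identity holds separately in every energy window, it survives the integration over energy that produces the full currents and noises.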
We can derive a relation, similar to Eq.~(\\ref{chenjie1}):\n\\begin{equation}\n\\label{chenjie6}\nS_{\\rm D} = S_{\\rm C} - 4T \\frac{\\partial I_{\\rm T}}{\\partial \\mu} + S_{\\rm S},\n\\end{equation}\nwhere $I_{\\rm T}$ is the particle current through the side hole in the tube wall and $S_{\\rm C}$ is its noise, Fig. ~\\ref{idealgas}, $S_{\\rm D}$ is the noise of the current $I_{\\rm\nD}$ at the open end of the tube, and $S_{\\rm S}$ is the noise of the particle current $I_{\\rm S}$ at the opposite end, attached to the box. The noise $S_{\\rm S}$ can be determined\nfrom the measurement of $S_{\\rm D}$ in the absence of the side hole in the tube wall. Note that the above relation is almost the same as the nonequilibrium FDT\n(\\ref{chenjie1}) in the QHE setup, Fig.~\\ref{3terminal}.\n\nThe proof of the above expression is rather simple and builds on Ref.~\\refcite{martin}.\nIt is most convenient to work with a Fermi gas with\na high negative chemical potential.\nOther cases, such as a Bose gas, can be considered in a similar way but will not be addressed below.\nLet $f=1\/\\{\\exp[(E-\\mu)\/T]+1\\}$ be the Fermi distribution in the box and $T_{E}$ the transmission\ncoefficient through the side hole in the tube wall for a particle of the energy $E$. According to Ref.~\\refcite{martin}, for the particles within the energy window $(E,\nE+dE)$, the current through the side hole is $T_{E}f dE$ and the noises $S_{\\rm S}=2f(1-f)dE$, $S_{\\rm C} = 2T_{E}f(1-T_{E}f)dE$, and $S_{\\rm\nD}=2(1-T_{E})f[1-(1-T_{E})f]dE$. One needs to integrate over the energy to obtain the overall current and noises. It is then easy to obtain the\nexpression (\\ref{chenjie6}).\n\n\n\\subsection{General derivation}\n\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=3in]{fig_3terminal2.eps}\n\\caption{ The time-reversed setup. 
The only differences from Fig.~\\ref{3terminal} are the reversed directions of the magnetic field and the chiral edge modes.}\n\\label{3terminal-tr}\n\\end{figure}\n\n\nIn this subsection, we give a derivation of the nonequilibrium FDT (\\ref{chenjie1}) based on the fluctuation theorem (\\ref{eq-fr}). In the\nheuristic argument, Subsection 5.1, an unnecessary assumption was only made in the derivation of the expression for the cross noise (\\ref{chenjie4}). Thus, below we\nfocus on proving the relation (\\ref{chenjie4}).\n\nFig.~\\ref{3terminal} is a three-terminal case of the most general setup Fig.~\\ref{fig1}, so the fluctuation theorem (\\ref{eq-fr}) applies. Let us follow\nthe protocol in Section 2 and assume that after the time period $\\tau$, the changes of the particle numbers in the three terminals are $\\Delta N_{\\rm S}$,\n$\\Delta N_{\\rm C}$, and $\\Delta N_{\\rm D} = -\\Delta N_{\\rm S} - \\Delta N_{\\rm C}$, respectively, and the distribution function is $P(\\Delta N_{\\rm S},\n\\Delta N_{\\rm C}; B)$. Note that only two out of the three $\\Delta N$'s are independent due to the charge conservation. Then\nthe fluctuation theorem (\\ref{eq-fr}) becomes\n\n\\begin{equation}\n\\label{chenjie7}\n\\frac{P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C};+ B)}{P(-\\Delta N_{\\rm S}, -\\Delta N_{\\rm C}; -B)}= e^{-\\beta e(V_{\\rm S}\\Delta N_{\\rm S} + V_{\\rm C} \\Delta\nN_{\\rm C})}\n\\end{equation}\nwhere $\\beta =1\/T$, $e$ is the electron charge, and $P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C};-B)$ is the distribution function for the backward process (see Section 2) in the time-reversed version\nof Fig.~\\ref{3terminal} as illustrated in Fig. ~\\ref{3terminal-tr}. Note that we could write the drain voltage $V_{\\rm D}$ explicitly in the fluctuation theorem instead\nof setting $V_{\\rm D}=0$. That, however, would only burden our notation without providing any advantage. 
Fig.~\\ref{3terminal-tr} shows the\ntime-reversed setup which has the opposite chirality, compared to Fig. ~\\ref{3terminal}. It is worth emphasizing that the observables $I_{\\rm T}$, $S_{\\rm ST}$, {\\it etc.}, that we are\ninterested in are defined in the setup from Fig.~\\ref{3terminal}. The role of the time-reversed setup is only to help us prove the nonequilibrium\nFDT (\\ref{chenjie1}). In terms of the distribution function $P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C}; B)$, the observables of interest can be written as\n\n\\begin{equation}\n\\quad I_{\\rm T} =\\frac{e}{\\tau}\\langle\\Delta N_{\\rm C}\\rangle,\n\\end{equation}\n\n\\begin{equation}\nI_{\\rm R} = I_{\\rm U} - I_{\\rm S}= \\frac{e}{\\tau}\\langle\\Delta N_{\\rm S}\\rangle,\n\\end{equation}\n\n\\begin{equation}\nS_{\\rm ST} = -S_{\\rm R T} = -2\\frac{e^2}{\\tau}(\\langle\\Delta N_{\\rm S}\\Delta N_{\\rm C}\\rangle - \\langle\\Delta N_{\\rm S}\\rangle\\langle\\Delta N_{\\rm\nC}\\rangle),\n\\end{equation}\nwhere\n\\begin{equation}\n\\langle x \\rangle = \\sum_{\\Delta N_{\\rm S}, \\Delta N_{\\rm C}} x P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C}; B).\n\\end{equation}\nWe have defined the current $I_{\\rm R}$ as the overall current flowing into source S. 
The cross noise $S_{\\rm RT}$ of $I_{\\rm R}$ and $I_{\\rm T}$\nequals $-S_{\\rm ST}$ since $I_{\\rm U}$ is uncorrelated with $I_{\\rm T}$.\n\nIt is convenient to define the cumulant generating functions\n\n\\begin{equation}\n\\label{chenjie8}\nQ(x,y;\\pm B) = \\lim_{\\tau\\rightarrow \\infty} \\frac{1}{\\tau} \\ln\\left\\{\\sum_{\\Delta N_{\\rm S}, \\Delta N_{\\rm C}} e^{-x e\\Delta N_{\\rm S} - y e\\Delta N_{\\rm\nC}} P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C};\\pm B)\\right\\}.\n\\end{equation}\nWith these generating functions, the quantities of interest can be expressed as\n\n\\begin{equation}\n\\label{def1}\nI_{\\rm R} = -\\left.\\frac{\\partial Q(x,y; +B)}{\\partial x}\\right|_{x= y= 0},\n\\end{equation}\n\n\\begin{equation}\n\\label{def2}\nI_{\\rm T} = -\\left.\\frac{\\partial Q(x,y;+B)}{\\partial y}\\right|_{x=y=0},\n\\end{equation}\n\n\\begin{equation}\n\\label{def3}\nS_{\\rm ST} = -2\\left.\\frac{\\partial^2 Q(x,y; +B)}{\\partial x \\partial y}\\right|_{x=y=0}.\n\\end{equation}\n\nInserting the fluctuation theorem (\\ref{chenjie7}) into the definition (\\ref{chenjie8}), we find that the generating functions $Q(x,y;\\pm B)$ have a very\nnice property:\n\\begin{equation}\n\\label{chenjie5}\nQ(x, y; +B) = Q(-\\beta V_{\\rm S} - x, -\\beta V_{\\rm C} -y; -B).\n\\end{equation}\nNote that this is an equation, relating two generating functions, $Q(x, y; +B)$ and $Q(x, y; -B)$. Also, because the distribution functions $P(\\Delta\nN_{\\rm S}, \\Delta N_{\\rm C};\\pm B)$ are normalized, we have $Q(x=0, y=0; \\pm B)=0$. Since the distribution functions depend on the biases $V_{\\rm S}$\nand $V_{\\rm C}$, the generating functions are also functions of $V_{\\rm S}$ and $V_{\\rm C}$\n\\begin{equation}\nQ = Q(x, y, V_{\\rm S}, V_{\\rm C}; \\pm B).\n\\end{equation}\n\n We now prove the relation (\\ref{chenjie4}) and hence also the nonequilibrium FDT\n(\\ref{chenjie1}) from the property (\\ref{chenjie5}) and enhanced causality. 
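Before displaying this calculation, it is worth noting that its key algebraic step can be checked symbolically. Acting with the operator $(d\/dx - T\,d\/dV_{\\rm S})(d\/dy - T\,d\/dV_{\\rm C})$ on the right-hand side of the symmetry (\\ref{chenjie5}), with $\\beta=1\/T$, cancels all derivatives with respect to the counting variables and leaves a single cross-voltage derivative at shifted arguments. A minimal sympy sketch, with a hypothetical smooth test function standing in for $Q(-B)$:

```python
import sympy as sp

x, y, Vs, Vc, T, u, w = sp.symbols('x y V_S V_C T u w')

# A generic smooth test function stands in for Q(u, w, V_S, V_C; -B);
# the first two slots are the counting variables, the last two the voltages.
Qm = sp.sin(u) * sp.cos(w) * Vs**2 * Vc + u**2 * Vc + w * Vs

# Symmetry (chenjie5): Q(x, y, V_S, V_C; +B) = Q(-V_S/T - x, -V_C/T - y, V_S, V_C; -B)
shift = {u: -Vs / T - x, w: -Vc / T - y}
Qp = Qm.subs(shift)

# Apply D = (d/dx - T d/dV_S)(d/dy - T d/dV_C) to the right-hand side
inner = sp.diff(Qp, y) - T * sp.diff(Qp, Vc)
D_Qp = sp.diff(inner, x) - T * sp.diff(inner, Vs)

# The chain rule cancels every derivative with respect to the shifted slots,
# leaving only the explicit-voltage cross derivative at the shifted arguments
expected = (T**2 * sp.diff(Qm, Vs, Vc)).subs(shift)
assert sp.simplify(D_Qp - expected) == 0
```

Evaluating the left-hand side of the symmetry with the definitions (\\ref{def1})-(\\ref{def3}) then produces the remaining terms of the identity derived below.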
We first apply the differential operator\n\n\\begin{equation}\n\\hat D = \\left(\\frac{d}{dx} - T \\frac{d}{d V_{\\rm S}}\\right)\\left(\\frac{d}{dy} - T \\frac{d}{d V_{\\rm C}}\\right)\n\\end{equation}\nto both sides of Eq.~(\\ref{chenjie5}). A straightforward calculation using the expressions (\\ref{def1})-(\\ref{def3}) yields\n\n\\begin{align}\n\\label{chenjie9}\n\\frac{1}{2}S_{\\rm ST} = & T\\frac{\\partial I_{T}}{\\partial V_{\\rm S}} +T\\frac{\\partial I_{R}}{\\partial V_{\\rm C}}+ T^2\\left.\\frac{\\partial^2Q(x, y, V_{\\rm S}, V_{\\rm C};+B)}{\\partial V_{\\rm S}\\partial V_{\\rm C}}\\right|_{x=y=0} \\nonumber \\\\\n& -T^2 \\left.\\frac{\\partial^2Q(x, y, V_{\\rm S}, V_{\\rm C};-B)}{\\partial V_{\\rm S}\\partial V_{\\rm C}}\\right|_{x=-\\beta V_{\\rm S}, y=-\\beta V_{\\rm C}}.\n\\end{align}\n\n\\noindent\nThis equation does not look the same as Eq.~(\\ref{chenjie4}), which is simply $S_{\\rm ST} = 2T\\frac{\\partial I_{T}}{\\partial V_{\\rm S}} $. However, we will\nshow that only the first term on the right-hand side of Eq.~(\\ref{chenjie9}) is nonzero.\n\nThe third term on the right-hand side of Eq.~(\\ref{chenjie9}) is\nzero simply because $Q(x=0, y=0, V_{\\rm S}, V_{\\rm C}; B)=0$.\n\nNow the chirality-enhanced causality principle enters the game as a key tool to prove that the second and fourth terms vanish. First, it is easy to\nsee that the second term is zero, because: (1) $I_{\\rm R} = I_{\\rm U} - I_{\\rm S}$; (2) $I_{\\rm S}$ does not depend on $V_{\\rm C}$ due to extended causality and\n(3) the upper and lower edges are assumed to be uncorrelated, so $I_{\\rm U}$ does not depend on $V_{\\rm C}$.\n\nTo prove that the last term is zero, we need a more sophisticated argument from extended causality. Note that the last term comes from the\ntime-reversed setup, Fig.~\\ref{3terminal-tr}. 
Since the upper and lower edges in Fig.~\\ref{3terminal-tr} are uncorrelated, we can write the\ndistribution function $P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C};- B)$ as\n\n\\begin{equation}\nP(\\Delta N_{\\rm S}, \\Delta N_{\\rm C};- B)= \\sum_{\\Delta N_{\\rm S}'} P_1(\\Delta N_{\\rm S}';- B) P_2(\\Delta N_{\\rm S}-\\Delta N_{\\rm S}', \\Delta N_{\\rm C};-\nB),\n\\end{equation}\nwhere $P_1(\\Delta N_{\\rm S}';- B)$ is the probability that $-\\Delta N_{\\rm S}'$ particles leave source $S$ through the {\\it\nupper} edge during the time interval $\\tau$, and $P_2(\\Delta N_{\\rm S}-\\Delta N_{\\rm S}', \\Delta N_{\\rm C};- B)$ is the probability that $\\Delta\nN_{\\rm S}-\\Delta N_{\\rm S}'$ particles enter S through the {\\it lower} edge while $\\Delta N_{\\rm C}$ particles enter contact\nC. This property of the distribution function $P(\\Delta N_{\\rm S}, \\Delta N_{\\rm C};- B)$ allows one to decompose the generating function $Q(x,y;- B)$ as\n\n\n\\begin{equation}\n\\label{dimaQ}\nQ(x,y;- B) = Q_1(x,y;- B)+Q_2(x,y;- B),\n\\end{equation}\n\n\\begin{equation}\nQ_1(x,y;- B) = \\lim_{\\tau\\rightarrow \\infty} \\frac{1}{\\tau} \\ln\\left\\{\\sum_{\\Delta N_{\\rm S}'} e^{-x e\\Delta N_{\\rm S}'} P_1(\\Delta N_{\\rm S}';-\nB)\\right\\},\\nonumber\n\\end{equation}\n\n\\begin{equation}\nQ_2(x,y;- B) = \\lim_{\\tau\\rightarrow \\infty} \\frac{1}{\\tau} \\ln\\left\\{\\sum_{\\Delta N_{\\rm S}'', \\Delta N_{\\rm C}} e^{-x e\\Delta N_{\\rm S}'' - y e\\Delta\nN_{\\rm C}} P_2(\\Delta N_{\\rm S}'', \\Delta N_{\\rm C};- B)\\right\\}. \\nonumber\n\\end{equation}\nWith the above decomposition (\\ref{dimaQ}), we look at the dependences of $Q_1$ and $Q_2$ on the voltages $V_{\\rm S}$ and $V_{\\rm C}$. Since the upper edge is chiral, the\ndistribution $P_1$ depends on $V_{\\rm S}$ but not on $V_{\\rm C}$. Similarly, since the lower edge is chiral, the distribution $P_2$ does not depend on\n$V_{\\rm S}$ while it does depend on $V_{\\rm C}$. 
Therefore, $Q_{1}$ depends on $V_{\\rm S}$ but not on $V_{\\rm C}$, while $Q_{2}$ depends on $V_{\\rm C}$\nbut not on $V_{\\rm S}$. Hence,\n\\begin{equation}\n\\frac{\\partial^2Q(x, y, V_{\\rm S}, V_{\\rm C};-B)}{\\partial V_{\\rm S}\\partial V_{\\rm C}}= \\frac{\\partial^2Q_1}{\\partial V_{\\rm S}\\partial V_{\\rm C}}\n+\\frac{\\partial^2Q_2}{\\partial V_{\\rm S}\\partial V_{\\rm C}} =0.\n\\end{equation}\nThus, the last term on the right-hand side of Eq.~(\\ref{chenjie9}) is zero.\n\nTo sum up, we have proved that three of the terms on the right-hand side of Eq.~(\\ref{chenjie9}) are zero, so that the equation reduces to\n\\begin{equation}\nS_{\\rm ST} = 2 T\\frac{\\partial I_{T}}{\\partial V_{\\rm S}},\n\\end{equation}\nwhich is exactly the relation (\\ref{chenjie4}). Therefore, the nonequilibrium FDT (\\ref{chenjie1}) indeed holds for chiral systems.\n\n\n\n\\subsection{Generalizations}\n\nAbove, we studied the simplest example of the nonequilibrium FDT for charge transport. Let us mention some generalizations.\n\nOne of the generalizations is to study chiral heat transport in the three-terminal setup in Fig.~\\ref{3terminal}. In this case, the three terminals are at different temperatures, $T_{\\rm S}$ in S, $T_{\\rm C}$ in C, and $T_{\\rm D}$ in D. We can assume that they have the same chemical potential. According to Ref.~\\refcite{wang2}, the cross noise $S_{\\rm ST}^h$ of the heat currents $I_{\\rm S}^h$ and $I_{\\rm T}^h$ satisfies\n\\begin{equation}\nS_{\\rm ST}^h =2T_{\\rm S}^2 \\frac{\\partial I^h_{\\rm T}}{\\partial T_{\\rm S}},\n\\end{equation}\nwhere $I_{\\rm T}^h$ is the heat current flowing into contact C and $I^h_{\\rm S}$ is the heat current flowing out of source S.
This expression, valid for nonequilibrium states, resembles the standard equilibrium FDT, which has the form $S^h = 4T^2 G^h$ with $S^h$ being the noise of the heat current in equilibrium, $G^h$ the thermal conductance, and $T$ the temperature.\n\nNonequilibrium fluctuation relations for higher-order cumulants can also be obtained in chiral systems. For example, the following relation holds\\cite{wang2} for the three-terminal setup in Fig.~\\ref{3terminal}:\n\\begin{equation}\n\\label{chenjie10}\nC_{\\rm TTS} = T\\frac{\\partial S_{\\rm TT}}{\\partial V_{\\rm S}},\n\\end{equation}\nwhere $S_{\\rm TT}$ is the noise of the current $I_{\\rm T}$, and $C_{\\rm TTS}$ is the third cumulant defined as\n\\begin{equation}\nC_{\\rm TTS} = -\\frac{2e^3}{\\tau}\\langle(\\Delta N_{\\rm C}-\\langle\\Delta N_{\\rm C}\\rangle)\\cdot(\\Delta N_{\\rm C}-\\langle\\Delta N_{\\rm C}\\rangle) \\cdot (\\Delta N_{\\rm S}-\\langle\\Delta N_{\\rm S}\\rangle)\\rangle.\n\\end{equation}\n\nIt is also possible to generalize nonequilibrium fluctuation relations to multi-terminal systems and to higher-order cumulants for heat transport. The reader may consult Ref.~\\refcite{wang2} for details.\n\n\n\\subsection{Applications}\n\nOur main result, Eq.~(\\ref{chenjie1}), as well as its generalizations from the previous subsection, applies to chiral systems only. Thus, the nonequilibrium FDT can be used to test edge chirality. If the FDT is satisfied both in and beyond equilibrium, this is compatible with chirality. If it is broken, then the edge is not chiral.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3in]{fig_expsetup.eps}\n\\caption{From Ref. 30. A possible experimental setup. Charge carriers, emitted\nfrom the source, can either tunnel through constriction Q and\ncontinue toward the drain or be absorbed by Ohmic contact C.}\n\\label{EEE}\n\\end{figure}\n\nA possible experimental setup is shown in Fig.
\\ref{EEE}.\nOne of the mechanisms by which our FDT is broken in nonchiral systems is illustrated in Fig. \\ref{DDD}. The downstream charged mode dissipates energy in the hot spot \\cite{klass}, where it enters the drain. The upstream neutral mode carries the dissipated energy back to the tunneling contact and to the point where the charged mode exits the drain, and heats them. This affects both the tunneling current and the noise and violates the FDT (\\ref{chenjie1}).\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=3in]{fig_neutralmode.eps}\n\\caption{From Ref. 30. A nonchiral system. The solid line along the lower edge\nillustrates the downstream mode, propagating from the source to\nthe drain. The dashed line shows a counter-propagating upstream\nmode.}\n\\label{DDD}\n\\end{figure}\n\n\nRecently there has been much interest in neutral modes on QHE edges. They were first reported \\cite{36} in the particle-hole conjugated QHE states, where the theory predicts upstream neutral modes. Later, upstream neutral modes were also reported in some states where they had not been expected \\cite{heiblum14,deviatov,yacoby}. In such a situation a new experimental test, based on the FDT (\\ref{chenjie1}), will be helpful.\n\nOur results can also be used to narrow the range of the candidate QHE states at the filling factor $5\/2$. Some candidates (e.g., the Pfaffian state \\cite{2} or the 331 state \\cite{8}) have chiral edges. Others, most notably the anti-Pfaffian state \\cite{3,4}, are not chiral.\nSome evidence of an upstream neutral mode on the $5\/2$ edge has been reported recently \\cite{36}. Obviously, verification with a different method is highly desirable. Our FDT (\\ref{chenjie1}) provides such a method.\n\n\\section{Conclusion}\n\nIn this review we considered fluctuation relations in the absence of time-reversal symmetry.
The Saito-Utsumi relations\\cite{saito} apply to any conductor in a magnetic field and connect nonlinear transport coefficients in opposite magnetic fields close to equilibrium. The fluctuation theorem for chiral systems \\cite{wang1,wang2} applies even far from equilibrium and connects currents and noises for the same direction of the magnetic field. This relation can be used to test the chirality of QHE edges. This is relevant in the ongoing search for neutral modes on QHE edges and in the search for non-Abelian anyons.\n\nMany questions remain open. New experimental tests of the fluctuation relations \\cite{saito} beyond Refs. \\refcite{exp1,exp2} would be important. The conflict between the existing experimental data and Eq. (\\ref{dima14}) has not yet been understood.\nOn the theory side, fluctuation relations for electric currents can be generalized to any other conserved quantity. In particular, Ref. \\refcite{wang2} addresses fluctuation relations for heat currents in chiral systems. Fluctuation relations for spin currents are another interesting question.\nIn that context, time-reversal symmetry may be broken by an external magnetic field or by the spontaneous magnetization of the leads. Some work, based on weaker fluctuation relations \\cite{forster08}, has been published in Refs. \\refcite{lopez12,lim13}. Ref. \\refcite{spin-utsumi} attempted to apply stronger fluctuation relations to spintronics but overlooked the correct transformation law for spin currents under time reversal.\n\nExternal magnetic fields and spontaneous magnetization are not the only ways in which time-reversal symmetry can be broken. It is interesting to extend the results discussed in this review to systems with time-dependent Hamiltonians which are not invariant with respect to time reversal. Research in that direction includes Ref. \\refcite{altland10}. Ref. \\refcite{yi11} considers a system with a time-dependent magnetic field.\nThe ideas from Refs.
\\refcite{Maes1,Maes2} may be useful for the class of problems considered in this review.\n\nMost research in the field has focused on quantum systems but there is nothing inherently quantum about time-reversal symmetry breaking. A discussion of fluctuation relations in classical systems with time-reversal symmetry breaking can be found in Refs. \\refcite{class1,class2}.\nAn intriguing question involves possible applications of the results to biological systems with unidirectional transport.\n\n\n\\section*{Acknowledgments}\n\nWe thank M. Kardar and K. Saito for helpful discussions. C.W. acknowledges support from the NSF under Grant No. DMR-1254741. D.E.F. was supported in part by the NSF under Grant No. DMR-1205715.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe class $\\mathbf{N}_1$ of \\textit{generalized Nevanlinna functions with one negative square} \nis the set of all scalar functions $Q$ which are meromorphic on ${\\dC \\setminus \\dR}$, which satisfy\n $Q(\\overline{z})=\\overline{Q(z)}$, and for which the kernel\n\\begin{equation}\\label{1.0}\n\\frac{Q(z)-\\overline{Q(w)}}{z-\\bar{w}},\n\\end{equation}\nhas one negative square, see \\cite{KL73, KL77, KL81, L}. The class $\\mathbf{N}_0$ of ordinary Nevanlinna\nfunctions consists of functions holomorphic on ${\\dC \\setminus \\dR}$, which satisfy $Q(\\overline{z})=\\overline{Q(z)}$, and for which the above\nkernel has no negative squares, i.e., ${\\rm Im\\,} Q(z) \/ {\\rm Im\\,} z \\ge 0$, $z \\in {\\dC \\setminus \\dR}$; cf. \\cite{donoghue}. Let $Q(z)$\nbelong to $\\mathbf{N}_1$. 
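As a simple illustration, added here for orientation (it is not part of the original argument), consider the function $Q(z)=-z$, which returns in the examples below. In this case\n\\[\n \\frac{Q(z)-\\overline{Q(w)}}{z-\\bar{w}}=\\frac{-z+\\bar{w}}{z-\\bar{w}}\\equiv -1,\n\\]\nso the kernel \\eqref{1.0} is the constant $-1$: every Gram matrix built from it is negative semidefinite of rank one, i.e., it has exactly one negative eigenvalue, and hence $Q(z)=-z$ has exactly one negative square.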
A point $z_0\\in\\dC^+\\cup\\dR\\cup\\{\\infty\\}$ is a \\textit{generalized zero of\nnonpositive type} (\\textit{GZNT}) of $Q(z)$ if either $z_0\\in\\dC^+$ and\n\\begin{equation}\nQ(z_0)=0,\n\\end{equation}\nor $z_0\\in\\dR$ and\n\\begin{equation}\\label{x_0}\n\\lim_{z \\wh\\to z_0}\\frac{Q(z)}{z-z_0}\\in (-\\infty,0],\n\\end{equation}\nor $z_0=\\infty$ and\n\\begin{equation}\\label{x_00}\n\\lim_{z \\wh\\to \\infty} zQ(z) \\in [0,\\infty),\n\\end{equation}\nsee \\cite[Theorem 3.1, Theorem 3.1']{L}.\nHere the symbol $\\wh\\to$ denotes the non-tangential limit.\nIf $Q(z)$ is holomorphic in a neighborhood of\n$z_0\\in\\dR$, then \\eqref{x_0} simplifies to $Q(z_0)=0$ and\n$Q'(z_0)\\leq0$. Any function $Q(z) \\in \\mathbf{N}_1$ has precisely one GZNT in\n$\\dC^+\\cup\\dR\\cup\\{\\infty\\}$; cf. \\cite{KL81}.\n\nEach generalized Nevanlinna function with one negative square is the Weyl function of a closed symmetric\noperator or relation with defect numbers $(1,1)$ in a Pontryagin space with one negative square, see\n\\cite{BDHS,BHSWW,DHS1}. The selfadjoint extensions of the symmetric operator or relation are parametrized\nover $\\dR \\cup \\{\\infty\\}$; in fact, each selfadjoint extension corresponding to $\\tau \\in \\dR \\cup\n\\{\\infty\\}$ has a Weyl function of the form:\n\\begin{equation}\\label{bil}\nQ_\\tau(z)=\\frac{Q(z)-\\tau}{1+\\tau Q(z)}, \\quad \\tau \\in \\dR,\\quad \\mbox{and} \\quad Q_\\infty(z)=-\\frac1{Q(z)}.\n\\end{equation}\nThe transform \\eqref{bil} takes the class $\\mathbf{N}_1$ onto itself as follows from calculating the kernel\ncorresponding to \\eqref{1.0}. Hence, for each $\\tau \\in \\dR \\cup \\{\\infty\\}$ the function $Q_\\tau(z)$ has a\nunique GZNT, denoted by $\\alpha(\\tau)$. The study of the path $\\tau \\mapsto \\alpha(\\tau)$, $\\tau \\in \\dR \\cup\n\\{\\infty\\}$, was initiated in \\cite{DHS1}. Some simple examples of functions in $\\mathbf{N}_1$ may help to\nillustrate the various possibilities; cf. 
Theorem \\ref{factor}.\n\nFirst consider the $\\mathbf{N}_1$--function $Q(z)=-z$. It has a GZNT at the origin. For $\\tau \\in \\dR$ the equation $Q_\\tau(z)=0$ has one solution; hence the\npath $\\alpha(\\tau)$ of the GZNT is given by\n\\[\n \\alpha(\\tau)=-\\tau, \\quad \\tau \\in \\dR.\n\\]\nTherefore $\\alpha(\\tau)$ stays on the (extended) real line; cf. Section \\ref{ontherealline}.\n\nThe function $Q(z)=z^2$ provides another simple example of a function belonging to $\\mathbf{N}_1$. Observe that it\nhas a GZNT at the origin, hence $\\alpha(0)=0$. For $\\tau>0$ the equation $Q_\\tau(z)=0$ has two solutions, namely\n$-\\sqrt{\\tau}$ and $\\sqrt{\\tau}$. Since $Q_\\tau'(z)$ exists and is negative (positive) on the negative\n(positive) half-axes, it follows that the path $\\alpha(\\tau)$ of the GZNT is given by\n \\[\n \\alpha(\\tau)=-\\sqrt\\tau, \\quad \\tau>0.\n \\]\nFor $\\tau<0$ the equation $Q_\\tau(z)=0$ has precisely one solution in $\\dC^+$, so that the path\n$\\alpha(\\tau)$ of the GZNT is given by\n \\[\n \\alpha(\\tau)=\\ii\\sqrt{-\\tau}, \\quad \\tau<0.\n \\]\nHence the path approaches the real line\nvertically at the origin and continues along the negative axis; see Figure 1. Finally, note that $\\alpha(\\infty)=\\infty$.\n\nThe function $Q(z)=z^3$ also belongs to $\\mathbf{N}_1$ with a GZNT at the origin. For $\\tau \\in \\dR$\nthe equation $Q_\\tau(z)=0$ has three solutions. 
An argument similar to the one above shows that the path $\\alpha(\\tau)$ of\nthe GZNT is given by\n\\[\n\\alpha(\\tau)=\\frac{ -{\\rm sgn\\,}{\\tau} + \\sqrt{3}\\ii }2\\,\n \\sqrt[3]{|\\tau|}, \\quad \\tau\\in\\dR.\n\\]\nTherefore the path approaches the real line at an angle $\\pi\/3$ and leaves the real line at an angle\n$2\\pi\/3$; see Figure 1.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\begin{picture}(105,20)(0,0)\n\\linethickness{0.05mm} \\put(0,5){\\line(1,0){50}}\n \\put(55,5){\\line(1,0){50}}\n \\thicklines \\put(25,5){\\vector(-1,0){10}}\n \\put(15,5){\\line(-1,0){15}}\n \\put(25,17){\\vector(0,-1){9}}\n \\put(25,14){\\line(0,-1){9}}\n \\put(28,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}2$}}\n \\put(20,10){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(25,02){\\makebox(0,0)[cc]{$0$}}\n\\thicklines \\put(88,17){\\vector(-2,-3){4}}\n \\put(84,11){\\line(-2,-3){4}}\n \\put(80,5){\\vector(-2,3){4}}\n \\put(76,11){\\line(-2,3){4}}\n \\put(65,15){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(80,02){\\makebox(0,0)[cc]{$0$}}\n \\put(80,9){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}3$}}\n \\put(84,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}3$}}\n \\put(76,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}3$}}\n\\end{picture}\n\\end{center}\n\\caption{The path of $\\alpha(\\tau)$ for $Q(z)=z^2$ and for $Q(z)=z^3$.}\n\\end{figure}\n\nThe previous examples were about functions $Q(z) \\in \\mathbf{N}_1$ which have a GZNT on the real axis and\nsuch that $Q(z)$ is holomorphic in a neighborhood of the GZNT. If $Q(z)$ is not holomorphic at its GZNT, the behaviour may be quite different. For instance, consider the function $Q(z)=z^{2+\\rho}$,\nwhere $0<\\rho<1$ and the branch is chosen to make $Q(z)$\nholomorphic and positive on the positive axis. Then $Q(z)$ belongs to $\\mathbf{N}_1$\nwith a GZNT at the origin.
The path $\\alpha(\\tau)$ now approaches $z=0$ via the angles\n$\\pi\/(2+\\rho)$ and $2\\pi\/(2+\\rho)$, since\n\\[\n \\alpha(\\tau)=(-\\tau)^{1\/(2+\\rho)} e^{i \\pi \/(2+\\rho)}, \\quad \\tau<0, \\quad\n \\alpha(\\tau)=(\\tau)^{1\/(2+\\rho)} e^{i 2\\pi \/(2+\\rho)}, \\quad \\tau>0,\n\\]\nsee Figure 2.\n\n\\begin{figure}[hbt]\n \\begin{center}\n\\begin{picture}(100,20)(0,0)\n\\linethickness{0.05mm} \\put(0,5){\\vector(1,0){100}}\n \\put(55,4){\\line(0,1){2}}\n \\put(55,2){\\makebox(0,0)[cc]{$\\alpha(0)=0$}}\n \\linethickness{0.6mm} \\put(0,5){\\line(1,0){55}}\n \\put(5,2){\\makebox(0,0)[cc]{${\\rm supp\\,}\\sigma$}}\n \\thicklines \\put(63,17){\\vector(-2,-3){4}}\n \\put(59,11){\\line(-2,-3){4}}\n \\put(55,5){\\vector(-2,3){4}}\n \\put(55,5){\\line(-2,3){8}}\n \\put(40,15){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(55,11){\\makebox(0,0)[cc]{\\tiny $\\frac\\pi{2+\\rho}$}}\n \\put(61,7){\\makebox(0,0)[cc]{\\tiny $\\frac\\pi{2+\\rho}$}}\n\\end{picture}\n\\end{center}\n\\caption{The path of $\\alpha(\\tau)$ for $Q(z)=z^{2+\\rho}$, $0< \\rho <1$. }\\label{nons}\n\\end{figure}\n\nAs a final example, consider the function\n\\begin{equation}\\label{notonR}\n Q(z)= \\frac{z^2+4}{z^2+1 }\\,\\,\\ii.\n\\end{equation}\nThis function belongs to $\\mathbf{N}_1$ and it has a GZNT at $z=2\\ii$. The equation $Q(z)=\\tau$ has a nonreal\nsolution for each $\\tau\\in\\dR\\cup\\{\\infty\\}$. In fact, the path of the GZNT $\\alpha(\\tau)$ is a simple closed\ncurve bounded away from the real axis.\n\nIn general the path of the GZNT has a complicated behaviour. The path may come from $\\dC^+$, be part of the\nreal line, and then leave again to $\\dC^+$, it may approach the real line in different ways, it may stay\ncompletely on the real line, or it may stay away boundedly from the real line. In the present paper, some\naspects of the path are treated. Especially, the local behavior of $\\alpha(\\tau)$ is completely determined in\nthe domain of holomorphy of the function $Q(z)$. 
Furthermore under certain holomorphy conditions it is shown\nthat a small interval of the real line is part of the path of $\\alpha(\\tau)$. Finally, the case where the\npath stays on the extended real line is completely characterized.\n\nThe contents of this paper are now described. Section 2 contains some preliminary observations concerning\ngeneralized Nevanlinna functions with one negative square. Furthermore a useful version of the inverse\nfunction theorem is recalled. Some elementary notions concerning the path $\\alpha(\\tau)$ of the GZNT of\n$Q_\\tau(z)$ are presented in Section 3. In Section 4 the function $Q(z)$ is assumed to be holomorphic in a\nneighborhood of a real GZNT and the path $\\alpha(\\tau)$ of the GZNT is studied in such a neighborhood. In\nSection 5 necessary and sufficient conditions are given so that a left or right neighborhood of a GZNT of\n$Q(z)$ belongs to the path $\\alpha(\\tau)$. Section \\ref{ontherealline} is devoted to the case where the GZNT stays on the\nextended real line. A complete characterization is given.\n\nThe present paper has points of contact with \\cite{JL83,JL95} and \\cite{HSSW}.\nAn example of a path as described in the present paper can be found in\n\\cite{KL71}. Furthermore, it should be pointed out that there are strong\nconnections to the recent perturbation analysis\nin \\cite{MMRR1,MMRR2,RanWojtylak}.\nThe authors thank Vladimir Derkach and Seppo Hassi, who have influenced \nthis paper in more than one way.\n\n\n\\section{Preliminaries}\n\n\\subsection{Nevanlinna functions}\n\nLet $M(z)$ belong to $\\mathbf{N}_0$, i.e. 
$M(z)$ is a Nevanlinna function (without any negative squares).\nThen $M(z)$ has the usual integral representation\n\\begin{equation}\\label{nev'}\nM(z)= a+ b z+ \\int_\\dR \\left(\\frac{1}{s-z}-\\frac{s}{s^2+1}\\right) \\,d\\sigma(s),\\quad\nz\\in\\dC\\setminus\\mathbb{R},\n\\end{equation}\nwhere $a \\in \\dR$, $b \\ge 0$, and $\\sigma$ is a nondecreasing function with\n\\begin{equation}\n\\label{int'} \\int_\\dR \\frac{d \\sigma(s)}{s^2+1} < \\infty.\n\\end{equation}\nSince the function $\\sigma$ can possess jump discontinuities the following normalization is used:\n\\[\n \\sigma(t)=\\frac{\\sigma(t+0)+\\sigma(t-0)}{2}.\n\\]\nIn addition, it is assumed that $\\sigma(0)=0$. Note that $a$ and $b$ can be recovered from the function\n$M(z)$ by\n\\begin{equation}\\label{nev+}\n a=\\Re M(i), \\quad b = \\lim_{ z \\wh\\to \\infty} \\frac{M(z)}{z}.\n\\end{equation}\n Likewise, the function $\\sigma$ can be recovered from the\nfunction $M(z)$ by the Stieltjes inversion formula:\n\\[\n \\sigma(t_2)-\\sigma(t_1)=\\lim_{\\varepsilon \\downarrow\n 0} \\frac{1}{\\pi} \\int_{t_1}^{t_2} {\\rm Im\\,}\n M(x+i\\varepsilon)\\,dx, \\quad t_1 \\leq t_2,\n\\]\ncf. \\cite{donoghue}, \\cite{KK}.\nIf $\\sigma(s)$ is constant on $(\\gamma,\\delta)\\subseteq\\dR$, then $(\\gamma,\\delta)$ will be called a\n\\textit{gap} of $\\sigma$ or of $M(z)$. Note that in this case $M(z)$ given by \\eqref{nev'} is well--defined\nfor $z\\in(\\gamma,\\delta)$. By the Schwarz reflection principle $M(z)$ is also holomorphic on\n$\\dC^+\\cup\\dC^-\\cup(\\gamma,\\delta)$. Conversely, if the function $M(z)$ given by \\eqref{nev'} is holomorphic\non some interval $(\\gamma,\\delta)\\subseteq\\dR$ then $\\sigma(s)$ is constant on that interval. 
Furthermore,\nobserve that if $M(z)$ is holomorphic at $z\\in\\dC$ then\n\\begin{equation}\\label{gapp}\nM'(z)=b+\\int_{\\dR}\\frac{d\\,\\sigma(t)}{(t-z)^2}.\n\\end{equation}\nThe symbols $z \\downarrow \\gamma$ and $z \\uparrow \\delta$ will stand for the approximation of $\\gamma$ and $\\delta$ along $\\dR$ from above and below, respectively. So\n\\begin{equation}\\label{rreff}\nM(\\gamma+)=\\lim_{z\\downarrow\\gamma} M(z) \\in [-\\infty,\n\\infty),\\quad M(\\delta-)=\\lim_{z\\uparrow\\delta}M(z)\\in (-\\infty, \\infty].\n\\end{equation}\nRecall that these limits are equal to the nontangential limits at $\\gamma$ and $\\delta$, respectively;\nsee \\cite{donoghue}.\n\nThe function $\\sigma(s)$ introduces in a natural way a measure on $\\mathbb{R}$, which is denoted by the same symbol. The formula for the point mass\n\\begin{equation}\\label{nev++}\n \\sigma(\\{c\\})=\\sigma(c+0)-\\sigma(c-0)= \\lim_{z \\wh \\to c}\\, (c-z)M(z),\\quad c \\in\\dR,\n\\end{equation}\n complements the limit formula in \\eqref{nev+}.\n\nThe following result is based on a careful analysis of the relationship of the limiting behaviour of the\nimaginary part of $M(z)$ and the behaviour of the spectral function $\\sigma(s)$ in \\eqref{nev'}; see\n\\cite{donoghue} for details.\n\n\\begin{theorem}\\label{limits}\nLet $M(z)$ be a Nevanlinna function and let $(\\gamma,\\delta) \\subset \\dR$ be a finite interval. If\n\\[\n \\lim_{y \\downarrow 0} \\,\\Im M(x+ \\ii y)=0,\\quad x\\in(\\gamma,\\delta),\n\\]\nthen $M(z)$ is holomorphic on $(\\gamma,\\delta)$.\n\\end{theorem}\n\n\\begin{proof}\nLet the function $M(z)$ be of the form \\eqref{nev'} and let $x \\in (\\gamma,\\delta)$. 
It follows from\n\\eqref{nev'} that\n\\begin{equation}\\label{ilm}\n {\\rm Im\\,} M(x+iy)=by+\\int_\\dR \\frac{y}{(s-x)^2+y^2}\\,d\\sigma(s).\n\\end{equation}\nIt is known that if the limit of the integral in \\eqref{ilm} is $0$ as $y \\downarrow 0$, then $\\sigma$ is\ndifferentiable at $x$; see \\cite[Theorem IV.II]{donoghue}. An application of \\cite[Theorem IV.I]{donoghue}\nshows that $\\sigma'(x)=0$.\n\nHence, by assumption, it follows that $\\sigma$ is differentiable on $(\\gamma,\\delta)$ and that $\\sigma'(x)=0$\nfor all $x \\in (\\gamma,\\delta)$. Therefore $\\sigma$ is constant on $(\\gamma,\\delta)$ and, hence, $M(z)$ is\nholomorphic on $(\\gamma,\\delta)$.\n\\end{proof}\n\n\\subsection{Generalized Nevanlinna functions with one negative square}\n\nAssume that $Q(z) \\in \\mathbf{N}_1$. A point $z_0\\in\\dC^+\\cup\\dR\\cup\\{\\infty\\}$ is a \\textit{generalized pole\nof nonpositive type} (GPNT) of $Q(z)$ if $z_0$ is a GZNT for the function $-1\/Q(z)$ (which automatically\nbelongs to $\\mathbf{N}_1$). A function in $\\mathbf{N}_1$ has precisely one GPNT in\n$\\dC^+\\cup\\dR\\cup\\{\\infty\\}$, just as it has precisely one GZNT in $\\dC^+\\cup\\dR\\cup\\{\\infty\\}$. 
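As an added check of this notion, take the function $Q(z)=z^2$ from the introduction. Here $-1\/Q(z)=-z^{-2}$ satisfies\n\\[\n \\lim_{z \\wh\\to \\infty} z\\left(-\\frac{1}{z^2}\\right)=0\\in[0,\\infty),\n\\]\nso by \\eqref{x_00} the point $\\infty$ is the GZNT of $-1\/Q(z)$, that is, $\\infty$ is the GPNT of $Q(z)=z^2$.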
For the\nfollowing result, see \\cite{DHS1,DHS3,DLLSh}.\n\n\\begin{theorem}\\label{factor}\nAny function $Q(z) \\in \\mathbf{N}_1$ admits the following factorization\n\\begin{equation}\\label{fack}\n Q(z)=R(z) M(z),\n\\end{equation}\nwhere $M(z) \\in \\mathbf{N}_0$ and $R(z)$ is a rational function of the form\n\\begin{equation}\\label{einz}\n \\frac{(z-\\alpha)(z-\\overline{\\alpha})}{(z-\\beta)(z-\\overline{\\beta})},\n \\quad\n \\frac{1}{(z-\\beta)(z-\\bar{\\beta})}, \\quad\n \\mbox{or}\n \\quad\n (z-\\alpha)(z-\\bar{\\alpha}).\n\\end{equation}\nHere $\\alpha, \\beta \\in \\dC^+ \\cup \\dR \\cup \\{\\infty\\}$ stand for the GZNT and GPNT of $Q(z)$, respectively;\nin the first case $\\alpha$ and $\\beta$ are finite, in the second case $\\infty$ is a GZNT and $\\beta$ is\nfinite, and in the third case $\\alpha$ is finite and $\\infty$ is a GPNT.\n\\end{theorem}\n\nFor the function $Q(z) \\in \\mathbf{N}_1$ the function $M(z) \\in \\mathbf{N}_0$ and the factors in\n\\eqref{einz} are uniquely determined. Note that $\\alpha\\ne\\beta$; otherwise $Q(z)$ would not have any negative\nsquares.\n\n\\begin{corollary}\\label{factor+}\nLet $Q(z) \\in \\mathbf{N}_1$ and let $z_0 \\in \\dC^+$. If $Q(z_0)=0$, then $Q'(z_0) \\ne 0$.\n\\end{corollary}\n\n\\begin{proof}\nSince $Q(z_0)=0$, it follows that $z_0 \\in \\dC^+$ is a GZNT of $Q(z)$. Therefore, according to Theorem\n\\ref{factor}, $Q(z)=(z-z_0)(z-\\bar{z}_0) H(z)$, where $H(z)$ is holomorphic in a neighborhood of $z_0$ and\n$H(z_0) \\neq 0$ (since $M(z_0) \\neq 0$). Differentiation of this identity leads to\n\\[\n Q'(z)=(z-z_0)H(z)+(z-\\bar z_0)H(z)+(z-z_0)(z-\\bar z_0)H'(z),\n\\]\nwhich implies that\n\\[\nQ'(z_0)=2\\ii\\, ({\\rm Im\\,} z_0)H(z_0).\n\\]\nSince $z_0 \\in \\dC^+$ and $H(z_0)\\neq 0$, this implies that $Q'(z_0) \\ne 0$.\n\\end{proof}\n\nLet $Q(z) \\in \\mathbf{N}_1$ and assume that $Q(z)$ is holomorphic in a neighborhood of $z_0 \\in \\dR$.
The\nfollowing is a classification of the possibilities at a zero $Q(z_0)=0$; see \\cite{DHS3}. A proof is included for completeness.\n\n\\begin{proposition}\\label{zeros>0}\nLet $Q(z) \\in \\mathbf{N}_1$ and assume that $Q(z)$ is holomorphic in a neighborhood of $z_0 \\in \\dR$. If\n$Q(z_0)=0$, then precisely one of the following possibilities occurs:\n \\begin{itemize}\n \\item[(0)] $Q'(z_0) > 0$;\n \\item[(1)] $Q'(z_0) < 0$;\n \\item[(2a)] $Q'(z_0) = 0$ and $Q''(z_0) > 0$;\n \\item[(2b)] $Q'(z_0) = 0$ and $Q''(z_0) < 0$;\n \\item[(3)] $Q'(z_0) = 0$ and $Q''(z_0) = 0$ (in which case $Q'''(z_0) > 0$).\n \\end{itemize}\nIn the cases {\\rm(1)--(3)} the point $z_0 \\in \\dR$ is necessarily a GZNT of the function $Q(z)$.\n\\end{proposition}\n\n\\begin{proof}\nSince $Q(z) \\in \\mathbf{N}_1$ it is of the form $Q(z)=R(z)M(z)$, where $R(z)$ is of the form \\eqref{einz}\nand $M(z)$ is a Nevanlinna function; see Theorem \\ref{factor}. Assume that $Q(z)$ is holomorphic in a\nneighborhood of $z_0 \\in \\dR$ and that $Q(z_0)=0$.\n\n\\textit{Case 1}. Consider the case $R(z_0) \\neq 0$. Then $M(z)$ must be holomorphic around $z_0 \\in \\dR$ and\n$M(z_0)=0$. Observe that $M'(z_0)>0$; otherwise $M(z)$ would be constant, see relation \\eqref{gapp}, and\nhence identically zero, which would imply $Q(z)\\equiv 0\\notin \\mathbf{N}_1$.\nIt follows from $Q'(z)=R'(z)M(z)+R(z)M'(z)$ that $Q'(z_0)=R(z_0) M'(z_0)$. Observe from\n\\eqref{einz} that $R(z_0) >0$. Therefore $Q'(z_0)>0$, so that (0) occurs.\n\n\\textit{Case 2}. It remains to consider the case $R(z_0) = 0$. This implies that $z_0=\\alpha \\in \\dR$.
Hence\n\\[\n Q(z)=(z-\\alpha)^2M(z) \\quad \\mbox{or} \\quad Q(z)=\\frac{(z-\\alpha)^2}{(z-\\beta)(z-\\bar{\\beta})}\\,M(z).\n\\]\nSince $Q(z)$ is holomorphic around $\\alpha \\in \\dR$, it follows that the Nevanlinna function $M(z)$ has the\nexpansion\n\\begin{equation}\\label{nevv}\nM(z)=\\frac{m_{-1}}{z-\\alpha}+m_0+m_1 (z-\\alpha)+ \\cdots\n\\end{equation}\nwhere $m_{-1} \\le 0$ and $m_i \\in \\dR$, $i \\in \\dN$. Moreover,\n\\[\n\\begin{split}\n Q(z)&=c_0m_{-1} (z-\\alpha)+(c_0m_0+c_1m_{-1})(z-\\alpha)^2 \\\\\n &\\hspace{2cm} +(c_0m_1+c_1m_0+c_2m_{-1})(z-\\alpha)^3+\\cdots,\n\\end{split}\n\\]\nwhere $c_0>0$ and $c_i \\in \\dR$, $i \\in \\dN$, stand for the coefficients of the power series expansion of the\nfunction $[(z-\\beta)(z-\\bar{\\beta})]^{-1}$ or of the function $1$ if $\\beta=\\infty$. In particular, the\nfollowing identities are clear:\n\\begin{equation}\\label{id1}\n Q'(\\alpha)=c_0m_{-1},\n\\end{equation}\n\\begin{equation}\\label{id2}\n\\tfrac{1}{2}\\,Q''(\\alpha)=c_0m_0+c_1m_{-1},\n\\end{equation}\nand\n\\begin{equation}\\label{id3}\n \\tfrac{1}{6}\\,Q'''(\\alpha)=c_0m_1+c_1m_0+c_2m_{-1}.\n\\end{equation}\nIt follows from \\eqref{id1} that $Q'(\\alpha) \\le 0$. There is the following subdivision:\n\\begin{itemize}\n\\item $Q'(\\alpha) < 0$. Then (1) occurs.\n\\item $Q'(\\alpha)=0$ and $Q''(\\alpha)>0$ or $Q''(\\alpha)<0$. Then (2a) or (2b) occurs, respectively.\n\\item $Q'(\\alpha)=0$ and $Q''(\\alpha)=0$. In this case it follows from \\eqref{id1} and \\eqref{id2}\nthat $m_{-1}=0$ and $m_0=0$. Note that \\eqref{nevv} with $m_{-1}=m_0=0$ implies that $m_1>0$. According to\n\\eqref{id3}, it follows that $\\tfrac{1}{6}\\,Q'''(\\alpha)=c_0m_1$.\nTherefore $Q'''(\\alpha) > 0$, so that (3) occurs.\n\\end{itemize}\n\nTo conclude, observe that it follows from \\eqref{x_0} that $Q(z_0)=0$ and $Q'(z_0) \\le 0$ imply that $z_0 \\in\n\\dR$ is a GZNT of $Q(z)$.\n\\end{proof}\n\n\\subsection{The inverse function theorem}\n\nThe following consequence of the usual inverse function theorem will be useful.
It specifies branches of\nsolutions of an equation involving holomorphic functions; cf. \\cite[Theorem 9.4.3]{H}.\n\n\\begin{theorem}\\label{inver'}\nLet $Q(z)$ be a function which is holomorphic at $z_0$ and assume that $Q^{(i)}(z_0)=0$, $0 \\le i \\le n-1$,\nand $Q^{(n)}(z_0)\\ne 0$ for some $n \\ge 1$ so that\n\\[\n Q(z)=\\frac{Q^{(n)}(z_0)}{n!}(z-z_0)^n+\\frac{Q^{(n+1)}(z_0)}{(n+1)!}(z-z_0)^{n+1} + \\cdots.\n\\]\nThen there is a neighborhood of $z_0$ where the equation\n\\begin{equation}\\label{inv'}\nQ(z)=w^n\n\\end{equation}\nhas $n$ solutions $z=\\phi^{i}(w)$, $1\\le i\\le n$. The functions $\\phi^{i}(w)$ are holomorphic at $0$, of the\nform\n\\[\n \\phi^{i}(w)=z_0+\\phi^{i}_1 w+ \\phi^{i}_2 w^2+\\cdots,\n\\]\nand their first order coefficients $\\phi^{i}_1$, $1\\le i\\le n$, are the $n$ distinct roots of the equation\n\\begin{equation}\n\\label{crit'} (\\phi_1^i)^n= \\frac{n!}{Q^{(n)}(z_0)},\\quad 1\\le i\\le n.\n\\end{equation}\n\\end{theorem}\n\n\n\\section{Elementary properties of $\\alpha(\\tau)$ }\\label{basicalpha}\n\nLet $Q(z) \\in \\mathbf{N}_1$. The fractional linear transform $Q_\\tau(z)$, $\\tau \\in \\dR \\cup \\{\\infty\\}$, is\ndefined in \\eqref{bil}, so that the derivative of $Q_\\tau$ is given by\n\\begin{equation}\\label{QQ'}\n Q'_\\tau(z)=(1+\\tau^2)\\frac{Q'(z)}{(1+\\tau Q(z))^2}, \\quad \\tau \\in \\dR, \\quad \\mbox{and}\n \\quad Q'_\\infty(z)=\\frac{1}{Q(z)^2}.\n\\end{equation}\nSince $Q_\\tau(z)$ also belongs to $\\mathbf{N}_1$ for $\\tau\\in\\dR\\cup\\{\\infty\\}$, Theorem~\\ref{factor} may be\napplied.\n\n\\begin{corollary}\nLet $Q(z) \\in \\mathbf{N}_1$. 
Then $Q_\\tau(z)$, $\\tau\\in\\dR\\cup\\{\\infty\\}$, has the factorization\n\\begin{equation}\\label{QQ+}\n Q_\\tau(z)=R_{(\\tau)}(z) M_{(\\tau)}(z),\n\\end{equation}\nwhere $M_{(\\tau)}(z) \\in \\mathbf{N}_0$ and $R_{(\\tau)}(z)$ is a rational function of the form\n\\begin{equation}\\label{einzt}\n \\frac{(z-\\alpha(\\tau))(z-\\overline{\\alpha(\\tau)})}{(z-\\beta(\\tau))(z-\\overline{\\beta(\\tau}))},\n \\quad\n \\frac{ 1}{(z-\\beta(\\tau))(z-\\overline{\\beta(\\tau)})},\n \\quad\n \\mbox{or}\n \\quad\n (z-\\alpha(\\tau))(z-\\overline{\\alpha(\\tau)}).\n\\end{equation}\nHere $\\alpha(\\tau), \\beta(\\tau) \\in \\dC^+ \\cup \\dR \\cup \\{\\infty\\}$ stand for the GZNT and GPNT of\n$Q_\\tau(z)$, respectively; in the first case $\\alpha(\\tau)$ and $\\beta(\\tau)$ are finite, in the second case\n$\\infty$ is a GZNT and $\\beta(\\tau)$ is finite, and in the third case $\\alpha(\\tau)$ is finite and $\\infty$\nis a GPNT.\n\\end{corollary}\n\nNote that $\\alpha(0)=\\alpha$ and $\\beta(0)=\\beta$, and that $\\alpha(\\tau)\\ne\\beta(\\tau)$ for all\n$\\tau\\in\\dR\\cup\\{\\infty\\}$. The paths $\\alpha(\\tau)$ and $\\beta(\\tau)$ are related. To see this, observe that\nit follows from the form of the fractional linear transform \\eqref{bil} that\n\\[\n Q_{-1\/\\tau}(z)=\\frac{Q(z)+1\/\\tau}{1-Q(z)\/\\tau}\n =-Q_\\tau(z)^{-1}, \\quad \\tau \\in \\dR \\cup \\{\\infty\\}.\n\\]\nTherefore, \\eqref{QQ+} and \\eqref{einzt} lead to\n\\begin{equation}\\label{alphbet}\n \\alpha(\\tau)=\\beta({-1\/\\tau}), \\quad \\beta(\\tau)=\\alpha(-1\/\\tau),\\quad\\tau\\in\\dR\\setminus\\{0\\},\n\\end{equation}\nand, in particular, to $\\alpha(\\infty)=\\beta$ and $\\beta(\\infty)=\\alpha$.\n\nAccording to the identities in \\eqref{alphbet} it suffices to describe the function $\\alpha(\\tau)$. 
The\n\\textit{path} of the GZNT of the function $Q(z) \\in \\mathbf{N}_1$ is defined by\n\\[\n\\mathcal{F}_Q:=\\{\\,\\alpha(\\tau):\\tau\\in\\dR\\cup\\{\\infty\\}\\,\\}.\n\\]\nThe following result concerning the parametrization of the path may be useful.\n\n\\begin{lemma}\nLet $Q(z) \\in \\mathbf{N}_1$ and let $\\tau_0 \\in \\dR \\cup \\{\\infty\\}$. Then the path of the GZNT of $Q(z)$\ncoincides with the path of the GZNT of $Q_{\\tau_0}(z)$, i.e.\n \\begin{equation}\\label{FQt}\n\\mathcal{F}_Q=\\mathcal{F}_{Q_{\\tau_0}},\\quad \\tau_0\\in\\dR\\cup\\{\\infty\\}.\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nFirst consider the case $\\tau \\in \\dR$. Then a simple calculation shows that\n\\begin{equation}\\label{tr}\n (Q_\\tau)_\\rho(z)= Q_{\\frac{\\tau+\\rho}{1-\\rho\\tau}}(z),\n \\quad \\rho\\in\\dR,\n \\quad (Q_\\tau)_\\infty(z)=Q_{-1\/\\tau}(z).\n\\end{equation}\nHence, if the GZNT of $(Q_\\tau)_\\rho(z)$ is denoted by $\\alpha_\\tau(\\rho)$, then it follows from \\eqref{tr}\nthat\n\\begin{equation}\\label{tr1}\n\\alpha_\\tau(\\rho)=\\alpha\\left(\\frac{\\tau+\\rho}{1-\\rho\\tau}\\right), \\quad \\rho\\in\\dR,\\quad\n\\alpha_\\tau(\\infty)=\\alpha_{-1\/\\tau}.\n\\end{equation}\nThe identity \\eqref{tr1} shows that \\eqref{FQt} is valid. The case $\\tau=\\infty$ can be treated similarly.\n\\end{proof}\n\nThe points of $\\mathcal{F}_Q$, i.e. the solutions of the equation $\\alpha(\\tau_0)=z_0$, will be\ncharacterized in terms of the function $Q(z)$. But first observe the following. If $Q(z)$ is holomorphic in a\nneighborhood of $\\alpha(\\tau_0)$ and if $Q'(\\alpha(\\tau_0))\\neq0$, then clearly the function $\\alpha(\\tau)$\nis holomorphic in a neighborhood of $\\tau_0$; this follows from the usual inverse function theorem (see\nTheorem \\ref{inver'} with $n=1$).\n\n\\begin{theorem}\\label{charact}\nLet $Q(z) \\in \\mathbf{N}_1$ and let $\\tau_0 \\in \\dR$.\n\\begin{enumerate}\\def\\rm (\\roman{enumi}) {\\rm (\\roman{enumi})}\n\\item Let $z_0 \\in \\dC^+$. 
Then $\\alpha(\\tau_0)=z_0$ if and only if\n\\begin{equation}\\label{un}\n Q(z_0)=\\tau_0.\n\\end{equation}\nIn this case, the function $\\alpha(\\tau)$ is holomorphic in a neighborhood of $\\tau_0$.\n\n\\item Let $z_0\\in\\dR$. Then $\\alpha(\\tau_0)=z_0$ if and only if\n\\begin{equation}\\label{deux}\n\\lim_{z \\wh\\to z_0}\\frac{Q(z)-\\tau_0}{z-z_0} \\in (-\\infty,0].\n\\end{equation}\n\n\\item Let $z_0=\\infty$. Then $\\alpha(\\tau_0)=\\infty$ if and only if\n\\begin{equation}\\label{deux+}\n\\lim_{z \\wh\\to \\infty} z (Q(z)-\\tau_0) \\in [0,\\infty).\n\\end{equation}\n\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\n(i) Let $z_0 \\in \\dC^+$ and let $\\tau_0 \\in \\dR$.\n\n($\\Leftarrow$) If \\eqref{un} holds, then $Q_{\\tau_0}(z_0)=0$. Hence, $z_0 \\in \\dC^+$ is a zero of\n$Q_{\\tau_0}(z) \\in \\mathbf{N}_1$, which implies that $z_0$ is a GZNT of $Q_{\\tau_0}(z)$. By uniqueness it\nfollows that $z_0=\\alpha(\\tau_0)$.\n\n($\\Rightarrow$) If $\\alpha(\\tau_0)=z_0$, then $Q_{\\tau_0}(z_0)=Q_{\\tau_0}(\\alpha(\\tau_0))=0$, so that\n\\eqref{un} holds.\n\nIf $z_0:=\\alpha(\\tau_0)\\in\\dC^+$, then $Q_{\\tau_0}(z_0)=0$ and hence\n $Q'_{\\tau_0}(z_0) \\ne 0$; cf. Corollary \\ref{factor+}.\nIt follows from \\eqref{QQ'} that $Q'(z_0)\\ne 0$. 
Therefore $\\alpha(\\tau)$ is holomorphic at $\\tau_0$.\n\n(ii) Let $z_0 \\in \\dR$ and let $\\tau_0 \\in \\dR$.\nIt follows from the fractional linear transform \\eqref{bil} that\n\\begin{equation}\\label{deux--}\n\\frac{Q_{\\tau_0}(z)}{z-z_0}=\\frac{1}{1+\\tau_0 Q(z)} \\,\\,\\frac{Q(z)-\\tau_0}{z-z_0}.\n\\end{equation}\n\n($\\Leftarrow$) Assume that \\eqref{deux} holds.\nSince \\eqref{deux} implies $\\lim_{z\\wh\\to z_0} Q_{\\tau_0}(z)=0$, it follows from \\eqref{deux--} that\n\\[\n \\lim_{z \\wh\\to z_0}\\frac{Q_{\\tau_0}(z)}{z-z_0}\n =\\frac1{1+\\tau_0^2}\\,\\lim_{z \\wh\\to z_0} \\frac{Q(z)-\\tau_0}{z-z_0} \\in (-\\infty,0].\n\\]\nHence, by \\eqref{x_0}, $z_0$ is the GZNT of $Q_{\\tau_0}(z)$.\n\n($\\Rightarrow$)\nIf $\\alpha(\\tau_0)=z_0$, then by \\eqref{x_0} the limit of $Q_{\\tau_0}(z)\/(z-z_0)$ as $z \\wh\\to z_0$ belongs to $(-\\infty,0]$. In particular, $Q_{\\tau_0}(z) \\to 0$ and, therefore, $Q(z) \\to \\tau_0$ as $z \\wh\\to z_0$. Hence \\eqref{deux} follows from \\eqref{deux--}.\n\n(iii) Let $z_0 = \\infty$ and let $\\tau_0 \\in \\dR$.\nIt follows from the fractional linear transform \\eqref{bil} that\n\\begin{equation}\nz Q_{\\tau_0}(z) =\\frac{1}{1+\\tau_0 Q(z)} \\,\\,z( Q(z)-\\tau_0).\n\\end{equation}\nThe proof now uses \\eqref{x_00} with arguments similar to those in (ii).\n\\end{proof}\n\nThe case $\\tau_0=\\infty$ is not explicitly mentioned in Theorem \\ref{charact}.\nRecall that $\\alpha(\\infty)=\\beta$. Hence, the identity $z_0=\\alpha(\\infty)$ actually means that $z_0$ is a generalized pole of nonpositive type of the function $Q(z)$.\n\n\\begin{corollary}\\label{quh}\nLet $Q(z) \\in \\mathbf{N}_1$. Then\n\\[\n\\{\\,z\\in \\dC^+ : {\\rm Im\\,} Q(z)=0\\,\\} \\subseteq\\mathcal{F}_Q.\n\\]\n\\end{corollary}\n\n\\begin{proof}\nLet $z_0 \\in \\dC^+$ and assume that ${\\rm Im\\,} Q(z_0)=0$. Then\n\\[\n Q(z_0)-\\tau_0= {\\rm Re\\,} Q(z_0)-\\tau_0,\n\\]\nso that the left-hand side equals zero if $\\tau_0$ is defined as ${\\rm Re\\,} Q(z_0)$.
In this case $Q_{\\tau_0}(z)$\nhas a zero at $z_0 \\in \\dC^+$, so that $z_0=\\alpha(\\tau_0) \\in \\mathcal{F}_Q$.\n\\end{proof}\n\n\\begin{corollary}\\label{alphainjective}\nLet $Q(z) \\in \\mathbf{N}_1$. Then the function\n\\[\n\\dR\\cup\\{\\infty\\}\\ni\\tau\\mapsto \\alpha(\\tau)\\in\\dC^+\\cup\\dR\\cup\\{\\infty\\}\n\\]\nis injective.\n\\end{corollary}\n\n\\begin{proof}\nAssume that $\\alpha(\\tau_1)=\\alpha(\\tau_2)$ with $\\tau_1, \\tau_2 \\in \\dR \\cup \\{\\infty\\}$.\n\nFirst consider $\\tau_1, \\tau_2 \\in \\dR$ and let $z_0= \\alpha(\\tau_1)=\\alpha(\\tau_2)$.\nIf $z_0$ is in $\\dC^+$, then (i) of Theorem \\ref{charact}\nimplies $\\tau_1=Q(\\alpha(\\tau_1))=Q(\\alpha(\\tau_2))=\\tau_2$.\nIf $z_0$ is in $\\dR\\cup\\{\\infty\\}$, then\n(ii) and (iii) of Theorem \\ref{charact} imply\n\\[\n\\tau_1=\\lim_{z\\wh\\to \\alpha(\\tau_1)} Q(z)=\\lim_{z\\wh\\to \\alpha(\\tau_2)} Q(z) =\\tau_2.\n\\]\n\nNext consider $\\tau_1 \\in \\dR$ and $\\tau_2=\\infty$ and let $z_0= \\alpha(\\tau_1)=\\alpha(\\infty)$.\nThen $\\alpha(\\infty)=z_0$ means that $z_0$ is a GPNT, so that $Q(z) \\to \\infty$ as $z \\wh\\to z_0$. Furthermore, $\\alpha(\\tau_1)=z_0$ implies, by Theorem \\ref{charact},\nthat $Q(z) \\to \\tau_1$ as $z \\wh\\to z_0$, a contradiction.\n\nHence, $\\alpha(\\tau_1)=\\alpha(\\tau_2)$ with $\\tau_1, \\tau_2 \\in \\dR \\cup \\{\\infty\\}$, implies that\n$\\tau_1=\\tau_2$. This completes the proof.\n\\end{proof}\n\nThe following result can be seen as a consequence of Theorem \\ref{limits}.\n\n\\begin{theorem}\\label{tauonreal}\nLet $Q(z) \\in \\mathbf{N}_1$ and let the interval $(\\gamma,\\delta) \\subset \\dR$ be contained in\n$\\mathcal{F}_Q$. Then $Q(z)$ is holomorphic on $(\\gamma,\\delta)$ except possibly at the GPNT of $Q(z)$,\nwhich is then a pole of $Q(z)$.\n\\end{theorem}\n\n\\begin{proof}\nSince $Q(z) \\in \\mathbf{N}_1$, it can be written as $Q(z)=R(z)M(z)$ as in \\eqref{fack} where $R(z)$ is of the\nform as in \\eqref{einz} with GZNT $\\alpha$ and GPNT $\\beta$.
By assumption each $z_0 \\in (\\gamma,\\delta)$ is\nof the form $z_0=\\alpha(\\tau_0)$. Hence by Theorem \\ref{charact} it follows that\n\\begin{equation}\\label{frid}\n \\lim_{z\\wh\\to z_0} Q(z)=\\tau_0.\n\\end{equation}\nClearly, if $z_0=\\alpha(\\tau_0)$ with $\\tau_0=\\infty$ then $z_0=\\beta$, so that $z_0$ is a GPNT of $Q(z)$.\nThere are three cases to consider:\n\n\\textit{Case 1:} $\\beta \\not\\in (\\gamma,\\delta)$ and $\\alpha \\not\\in (\\gamma,\\delta)$. Then $\\lim_{z\\wh\\to\nz_0} R(z) \\in \\dR\\setminus \\{0\\}$, so that it follows from \\eqref{frid} that\n\\begin{equation}\\label{frid1}\n \\lim_{z\\wh\\to z_0} {\\rm Im\\,} M(z)=0.\n\\end{equation}\nHence, by Theorem \\ref{limits}, it follows from \\eqref{frid1} that $M(z)$, and therefore $Q(z)$, is\nholomorphic on $(\\gamma,\\delta)$.\n\n\\textit{Case 2:} $\\beta \\not\\in (\\gamma,\\delta)$ and $\\alpha \\in (\\gamma,\\delta)$. Then by Case~1 it follows\nthat $Q(z)$ is holomorphic on $(\\gamma,\\alpha)$ and on $(\\alpha, \\delta)$. Hence, either $Q(z)$ is\nholomorphic on $(\\gamma,\\delta)$ or $Q(z)$ has an isolated singularity at $\\alpha$. However, this last case\ncannot occur due to the representation \\eqref{einz}.\n\n\\textit{Case 3:} $\\beta \\in (\\gamma,\\delta)$. Then by Case 1 and Case 2 it follows that $Q(z)$ is holomorphic\non $(\\gamma,\\beta)$ and $(\\beta,\\delta)$. This implies that $\\beta$ is an isolated singularity of $Q(z)$; in\nother words the GPNT $\\beta$ is a pole of $Q(z)$.\n\\end{proof}\n\n\\section{Local behavior of $\\alpha(\\tau)$ in a gap of $Q(z)$.}\\label{Gap}\n\nLet the function $Q(z) \\in \\mathbf{N}_1$ be holomorphic in a neighborhood of $z_0 \\in \\dR$. Assume that\n$Q(z_0)=0$ and that $z_0$ is in fact a GZNT of $Q(z)$, so that $Q'(z_0) \\le 0$; cf. Proposition\n\\ref{zeros>0}.
The local form of the path $\\alpha(\\tau)$ in a neighborhood of $\\tau=0$ will now be described.\nThe items in the following theorem correspond to the classification in Proposition~\\ref{zeros>0}.\n\n\\begin{theorem}\\label{mainth}\nLet $Q(z) \\in \\mathbf{N}_1$ be holomorphic in a neighborhood of $z_0 \\in \\dR$ and let $z_0$ be a GZNT of\n$Q(z)$. Then precisely one of the following possibilities holds.\n\\begin{itemize}\n\\item[(1)] $Q'(z_0)<0$:\nThere exists $\\varepsilon > 0$ such that the function $\\alpha(\\tau)$ is real-valued and holomorphic with\n$\\alpha'(\\tau)<0$ on $(-\\varepsilon, \\varepsilon)$.\n\n\\begin{figure}[htb]\n\n\\begin{center}\n\\begin{picture}(100,12)(0,0)\n\\linethickness{0.05mm} \\put(0,5){\\vector(1,0){100}}\n \\put(55,4){\\line(0,1){2}}\n \\put(55,2){\\makebox(0,0)[cc]{$\\alpha(0)$}}\n \\linethickness{0.6mm} \\put(0,5){\\line(1,0){30}}\n \\put(70,5){\\line(1,0){20}}\n \\put(15,2){\\makebox(0,0)[cc]{${\\rm supp\\,}\\sigma$}}\n \\thicklines \\put(60,5){\\vector(-1,0){15}}\n \\put(45,5){\\line(-1,0){5}}\n \\put(45,8){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n\\end{picture}\n\\caption{Case (1)}\n\\end{center}\n\\end{figure}\n\n\\item[(2a)] $Q'(z_0)=0$ and $Q''(z_0) >0$:\nThere exist $\\varepsilon_1 > 0$ and $\\varepsilon_2 > 0$ such that the function $\\alpha(\\tau)$ is continuous\non $(-\\varepsilon_1,\\varepsilon_2)$, and holomorphic on each of the intervals $(-\\varepsilon_1,0)$ and\n$(0,\\varepsilon_2)$. Moreover, $\\alpha(\\tau) \\in \\dC^+$ for $\\tau \\in (-\\varepsilon_1,0)$ with $\\arg\n(\\alpha(\\tau)-z_0)\\to \\pi\/2$ as $\\tau \\uparrow 0$, and $\\alpha(\\tau) \\in \\dR$ for $\\tau \\in\n(0,\\varepsilon_2)$.\n\n\\item[(2b)] $Q'(z_0)=0$ and $Q''(z_0) < 0$:\nThere exist $\\varepsilon_1 > 0$ and $\\varepsilon_2 > 0$ such that the function $\\alpha(\\tau)$ is continuous\non $(-\\varepsilon_1,\\varepsilon_2)$, and holomorphic on each of the intervals $(-\\varepsilon_1,0)$ and\n$(0,\\varepsilon_2)$.
Moreover $\\alpha(\\tau) \\in \\dR$ for $\\tau \\in (-\\varepsilon_1,0)$ and $\\alpha(\\tau) \\in\n\\dC^+$ for $\\tau \\in (0,\\varepsilon_2)$ and $\\arg (\\alpha(\\tau)-z_0)\\to \\pi\/2$ as $\\tau \\downarrow 0$.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\begin{picture}(105,22)(0,0)\n\\linethickness{0.05mm} \\put(0,5){\\line(1,0){50}}\n \\put(55,5){\\line(1,0){50}}\n \\linethickness{0.6mm}\n \\put(35,5){\\line(1,0){15}}\n \\put(45,2){\\makebox(0,0)[cc]{${\\rm supp\\,}\\sigma$}}\n \\put(90,5){\\line(1,0){15}}\n \\put(100,2){\\makebox(0,0)[cc]{${\\rm supp\\,}\\sigma$}}\n \\thicklines \\put(25,5){\\vector(-1,0){10}}\n \\put(15,5){\\line(-1,0){5}}\n \\put(25,11){\\vector(0,-1){3}}\n \\put(25,14){\\line(0,-1){9}}\n \\put(20,14){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(25,2){\\makebox(0,0)[cc]{$\\alpha(0)$}}\n \\put(28,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}2$}}\n \\qbezier(25,14)(25,17)(26,20)\n\n \\put(80,5){\\vector(-1,0){10}}\n \\put(70,5){\\line(-1,0){5}}\n \\put(65,9){\\vector(0,1){3}}\n \\put(65,14){\\line(0,-1){9}}\n \\put(70,14){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(65,2){\\makebox(0,0)[cc]{$\\alpha(0)$}}\n \\put(68,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}2$}}\n \\qbezier(65,14)(65,17)(66,20)\n\\end{picture}\n\\caption{Cases (2a) and (2b)}\n\\end{center}\n\\end{figure}\n\n\\item[(3)]\n$Q'(z_0)=Q''(z_0)=0$, and $Q'''(z_0)>0$: There exist $\\varepsilon_1 > 0$ and $\\varepsilon_2 > 0$ such that\nthe function $\\alpha(\\tau)$ is continuous on $(-\\varepsilon_1,\\varepsilon_2)$, and holomorphic on each of the\nintervals $(-\\varepsilon_1,0)$ and $(0,\\varepsilon_2)$. 
Moreover $\\alpha(\\tau) \\in \\dC^+$ for $\\tau \\in\n(-\\varepsilon_1,0)$ and $\\arg (\\alpha(\\tau)-z_0)\\to \\pi\/3$ as $\\tau \\uparrow 0$; and $\\alpha(\\tau) \\in \\dC^+$\nfor $\\tau \\in (0,\\varepsilon_2)$ and $\\arg (\\alpha(\\tau)-z_0)\\to 2\\pi\/3$ as $\\tau \\downarrow 0$.\n\\begin{figure}[hbt]\n\\begin{center}\n\\begin{picture}(100,20)(0,0)\n\\linethickness{0.05mm} \\put(0,5){\\vector(1,0){100}}\n \\put(55,4){\\line(0,1){2}}\n \\put(55,2){\\makebox(0,0)[cc]{$\\alpha(0)$}}\n \\linethickness{0.6mm} \\put(0,5){\\line(1,0){30}}\n \\put(70,5){\\line(1,0){20}}\n \\put(15,2){\\makebox(0,0)[cc]{${\\rm supp\\,}\\sigma$}}\n \\thicklines \\put(63,17){\\vector(-2,-3){4}}\n \\put(59,11){\\line(-2,-3){4}}\n \\put(55,5){\\vector(-2,3){4}}\n \\qbezier(51,11)(48,17)(42,18)\n \\put(40,15){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(55,9){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}3$}}\n \\put(59,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}3$}}\n \\put(51,7){\\makebox(0,0)[cc]{\\tiny $\\frac{\\pi}3$}}\n\\end{picture}\n\\caption{Case (3) }\n\\end{center}\n\\end{figure}\n\n\\end{itemize}\n\\end{theorem}\n\n\\begin{proof}\nThe assumption is that $Q(z) \\in \\mathbf{N}_1$ is holomorphic in a neighborhood of $z_0 \\in \\dR$ and that\n$Q(z)$, possibly together with some derivatives, vanishes at $z_0 \\in \\dR$, as described in Proposition\n\\ref{zeros>0}. The theorem will be proved via Theorem \\ref{inver'} (implicit function theorem).\n\n\\textit{Case (1): $Q(z_0)=0$ and $Q'(z_0)<0$.} According to Theorem \\ref{inver'} with $n=1$ there is some neighborhood of $w=0$ where the equation\n\\begin{equation}\\label{Qphi}\n Q(\\phi(w))=w\n\\end{equation}\nhas a unique holomorphic solution $\\phi(w)$ which satisfies $\\phi(0)=z_0$ and is real-valued for real $w$. It\nfollows from \\eqref{Qphi} that $Q'(\\phi(w))\\phi'(w)=1$, so that the condition $Q'(z_0)<0$ implies that\n$\\phi'(0)<0$, and thus $\\phi'(w)<0$ on some neighborhood $(-\\varepsilon, \\varepsilon)$. 
It follows from\nTheorem \\ref{charact} that $\\alpha(\\tau)=\\phi(\\tau)$.\n\n\\textit{Case (2a): $Q(z_0)=0$, $Q'(z_0)=0$, and $Q''(z_0) >0$.} According to Theorem \\ref{inver'} with $n=2$\n there is some neighborhood of $0$ where the equation\n\\[\nQ(\\phi^\\pm(w))=w^2\n\\]\nhas holomorphic solutions $\\phi^+(w)$ and $\\phi^-(w)$ with $\\phi^\\pm(0)=z_0$, and which have expansions\n\\[\n\\phi^\\pm(w)=\\phi_1^\\pm w+\\phi_2^\\pm w^2+ \\cdots ,\n\\]\nwhere $\\phi_1^\\pm=\\pm (Q''(z_0)\/2)^{-1\/2}$. Note that all the coefficients $\\phi_i^\\pm$ in the above\nexpansions are real.\n\nLet $\\tau>0$. Put $w=\\tau^{1\/2}$ such that\n\\begin{equation}\\label{pmtau}\nQ(\\phi^\\pm(\\tau^{1\/2}))=\\tau.\n\\end{equation}\nSince $\\phi_1^-<0$, there is some $\\varepsilon_2>0$ such that $(\\phi^-)'(\\tau^{1\/2})<0$ for\n$\\tau\\in(0,\\varepsilon_2)$. Now the relation \\eqref{pmtau} implies by taking the derivative with respect to\n$\\tau$ that $Q'(\\phi^-(\\tau^{1\/2}))<0$ for $\\tau\\in(0,\\varepsilon_2)$. Hence, by relation \\eqref{pmtau} and\nTheorem \\ref{charact} one finds that $\\alpha(\\tau)=\\phi^-(\\tau^{1\/2})$ for $\\tau\\in(0,\\varepsilon_2)$.\n\nLet $\\tau<0$. Put $w=i|\\tau|^{1\/2}$ such that\n\\begin{equation}\\label{pmtaui}\nQ(\\phi^\\pm(i|\\tau|^{1\/2}))=\\tau.\n\\end{equation}\nSince $\\phi_1^+>0$, there is some $\\varepsilon_1>0$ such that $\\phi^{+}(i|\\tau|^{1\/2})\\in\\dC^+$ for\n$\\tau\\in(-\\varepsilon_1,0)$.
Hence, by relation \\eqref{pmtaui} and Theorem \\ref{charact} one finds that\n$\\alpha(\\tau)=\\phi^+(i|\\tau|^{1\/2})$ for $\\tau\\in(-\\varepsilon_1,0)$.\n\nThe expansion of $\\phi^+(i|\\tau|^{1\/2})$ implies that $\\arg (\\alpha(\\tau)-z_0)\\to \\pi\/2$ as $\\tau\\uparrow 0$.\n\n\\textit{Case (2b): $Q(z_0)=0$, $Q'(z_0)=0$, and $Q''(z_0)<0$.} This case can be treated similarly to Case\n(2a).\n\n\\textit{Case (3): $Q(z_0)=0$, $Q'(z_0)=0$, $Q''(z_0)=0$, and $Q'''(z_0) > 0$.} According to Theorem\n\\ref{inver'} there is a neighborhood of $0$ where the equation\n$$\nQ(\\phi(w))=w^3\n$$\n has three solutions $\\phi^{(j)}(w)$, $j=1,2,3$, determined by\n\\[\n\\phi^{(1)}_1=\\sqrt[3]{r},\\quad\n \\phi^{(2)}_1=\\left(-\\frac12+\\frac{\\sqrt{3}}2\\ii\\right)\\sqrt[3]{r},\\quad\n \\phi^{(3)}_1 = \\left(-\\frac12-\\frac{\\sqrt{3}}2\\ii\\right)\\sqrt[3]{r},\n\\]\nwith $r=6\/Q'''(z_0)$.\n\nLet $\\tau>0$. Then Theorem \\ref{inver'} implies that there is some $\\varepsilon_2>0$ such that\n$\\phi^{(2)}(\\tau^{1\/3})$ is in $\\dC^+$ for $\\tau\\in(0,\\varepsilon_2)$, and that\n$Q(\\phi^{(2)}(\\tau^{1\/3}))=\\tau$. Hence, by Theorem \\ref{charact}\n\\[\n\\alpha(\\tau)=\\phi^{(2)}(\\tau^{1\/3}),\\qquad \\tau\\in(0,\\varepsilon_2).\n\\]\n\nLet $\\tau<0$. Then Theorem \\ref{inver'} implies that there is some $\\varepsilon_1>0$ such that\n$\\phi^{(3)}(-|\\tau|^{1\/3})$ is in $\\dC^+$ for $\\tau\\in (-\\varepsilon_1,0)$, and that\n$Q(\\phi^{(3)}(-|\\tau|^{1\/3}))=-|\\tau|=\\tau$.
Hence, by Theorem \\ref{charact}\n\\[\n\\alpha(\\tau)=\\phi^{(3)}(-|\\tau|^{1\/3}),\\qquad \\tau\\in(-\\varepsilon_1,0).\n\\]\nMoreover, using Theorem \\ref{inver'} one finds that\n\\[\n\\lim_{\\tau\\uparrow0}\\tan(\\arg(\\alpha(\\tau)-z_0))=\\lim_{\\tau\\uparrow0}\\frac{{\\rm Im\\,}\\alpha(\\tau)}{{\\rm Re\\,}\\alpha(\\tau)-z_0}=\n \\frac{{\\rm Im\\,}\\phi_1^{(3)}}{{\\rm Re\\,}\\phi_1^{(3)}}\n =\\sqrt{3},\n\\]\nand\n\\[\n\\lim_{\\tau\\downarrow0}\\tan(\\arg(\\alpha(\\tau)-z_0))=\\lim_{\\tau\\downarrow0}\\frac{{\\rm Im\\,}\\alpha(\\tau)}{{\\rm Re\\,}\\alpha(\\tau)-z_0}=\n \\frac{{\\rm Im\\,}\\phi_1^{(2)}}{{\\rm Re\\,}\\phi_1^{(2)}}\n =-\\sqrt{3},\n\\]\nwhich shows that the angles are indeed $\\pi\/3$ and $2\\pi\/3$, respectively.\n\\end{proof}\n\n\\begin{remark}\\label{R2}\nIn Case (2) and Case (3) other zeros of $Q_\\tau(z)$ will occur, but they need not be GZNT.\nBelow each case will be considered separately.\n\nFor Case (2) it suffices to restrict to Case (2a) as Case (2b) is similar. When $\\tau>0$ the\nfunction $\\alpha_+(\\tau):=\\phi^+(\\tau^{1\/2})$ is also a local solution of $Q(\\alpha_+(\\tau))=\\tau$. However,\n$Q'(\\alpha_+(\\tau))>0$ for $\\tau>0$, so that $\\alpha_+(\\tau)$ is not a GZNT. Moreover, $\\alpha_+(\\tau)$ is\nlocally increasing in $\\tau$. A different situation happens when $\\tau<0$.
Then clearly $\\phi^-(-\\ii\n|\\tau|^{1\/2})=\\phi^+(\\ii |\\tau|^{1\/2})$, and locally $\\bar\\alpha(\\tau)=\\phi^+(-\\ii |\\tau|^{1\/2})$ is the GZNT of\n$Q_\\tau(z)$ in the lower half plane, conjugate to the GZNT $\\alpha(\\tau)$; cf.\\ \\cite{KL71}.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\begin{picture}(105,25)(0,10)\n\\linethickness{0.05mm} \\put(0,20){\\line(1,0){50}}\n \\put(55,20){\\line(1,0){50}}\n \\thicklines \\put(25,20){\\vector(-1,0){10}}\n \\put(15,20){\\line(-1,0){5}}\n \\put(25,20){\\vector(1,0){10}}\n \\put(35,20){\\line(1,0){5}}\n \\put(25,26){\\vector(0,-1){3}}\n \\put(25,29){\\line(0,-1){9}}\n \\put(25,14){\\vector(0,1){3}}\n \\put(25,11){\\line(0,1){9}}\n\\put(20,29){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}} \\put(13,17){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n\\put(37,17){\\makebox(0,0)[cc]{$\\alpha_+(\\tau)$}} \\put(30,11){\\makebox(0,0)[cc]{$\\bar\\alpha(\\tau)$}}\n \\put(65,20){\\vector(1,0){10}}\n \\put(75,20){\\line(1,0){5}}\n \\put(95,20){\\vector(-1,0){10}}\n \\put(85,20){\\line(-1,0){5}}\n \\put(80,24){\\vector(0,1){3}}\n \\put(80,29){\\line(0,-1){9}}\n \\put(80,16){\\vector(0,-1){3}}\n \\put(80,11){\\line(0,1){9}}\n\\put(75,29){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}} \\put(68,17){\\makebox(0,0)[cc]{$\\alpha_+(\\tau)$}}\n\\put(92,17){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}} \\put(85,11){\\makebox(0,0)[cc]{$\\bar\\alpha(\\tau)$}}\n\\end{picture}\n\\caption{Cases (2a) and (2b) - positive and negative zeros}\n\\end{center}\n\\end{figure}\n\nIn Case (3) there are locally two other zeros of the function $Q_\\tau(z)$ crossing $z_0$ at $\\tau=0$.
The\nfirst of them is real and given by\n\\[\n \\alpha_+(\\tau)=\\phi^{(1)}({\\rm sgn\\,}\\tau |\\tau|^{1\/3}).\n\\]\nThe second one lies in the lower half plane and is conjugate to $\\alpha(\\tau)$.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\begin{picture}(100,27)(0,0)\n\\linethickness{0.05mm} \\put(20,12){\\line(1,0){60}}\n \\thicklines\\put(58,24){\\vector(-2,-3){4}}\n \\put(54,18){\\line(-2,-3){4}}\n \\put(50,12){\\vector(-2,3){4}}\n \\put(46,18){\\line(-2,3){4}}\n \\put(50,22){\\makebox(0,0)[cc]{$\\alpha(\\tau)$}}\n \\put(58,0){\\vector(-2,3){4}}\n \\put(54,6){\\line(-2,3){4}}\n \\put(50,12){\\vector(-2,-3){4}}\n \\put(46,6){\\line(-2,-3){4}}\n \\put(50,2){\\makebox(0,0)[cc]{$\\bar\\alpha(\\tau)$}}\n \\put(25,12){\\vector(1,0){10}}\n \\put(35,12){\\vector(1,0){35}}\n \\put(70,12){\\line(1,0){5}}\n \\put(35,9){\\makebox(0,0)[cc]{$\\alpha_+(\\tau)$}}\n \\put(70,9){\\makebox(0,0)[cc]{$\\alpha_+(\\tau)$}}\n\\end{picture}\n\\caption{Case (3) - positive and negative zeros}\n\\end{center}\n\\end{figure}\n\\end{remark}\n\n\\begin{example}\\label{drie}\nLet $\\delta_1, \\delta_2 >0$ and consider the function\n\\[\n Q(z)=z^2 \\left( \\frac{\\delta_1}{-1-z}+\\frac{\\delta_2}{1-z} \\right), \\quad z \\in \\dC \\setminus\\{-1,1\\}.\n\\]\nThen $Q(z)$ is of the form \\eqref{fack} with\n\\[\n R(z)=z^2, \\quad M(z)=\\frac{\\delta_1}{-1-z}+\\frac{\\delta_2}{1-z},\n\\]\nwith $M(z) \\in \\mathbf{N}_0$. Hence $Q(z) \\in \\mathbf{N}_1$ with GZNT at $\\alpha=0$ and GPNT at\n$\\beta=\\infty$. The function $Q(z)$ is holomorphic in a neighborhood of $z=0$. Clearly, $Q(0)=0$, $Q'(0)=0$, and $Q''(0)=2(\\delta_2-\\delta_1)$. Hence,\n\\[\nQ''(0)=0 \\quad \\Leftrightarrow \\quad \\delta_1=\\delta_2,\n\\]\nin which case $Q'''(0)=12 \\delta_1=12 \\delta_2>0$. 
Therefore, with regard to Theorem \\ref{mainth},\none obtains\n\\begin{enumerate}\n\\item $\\delta_2 > \\delta_1$ implies Case (2a);\n\\item $\\delta_2 < \\delta_1$ implies Case (2b);\n\\item $\\delta_2 = \\delta_1$ implies Case (3).\n\\end{enumerate}\n\\end{example}\n\n\\section{Part of the path on the real line.}\n\nLet $Q(z) \\in \\mathbf{N}_1$. The path of the GZNT may hit the real line coming from $\\dC^+$ and immediately\nreturn to $\\dC^+$, as the example $Q(z)=z^3$ shows. However, it is also possible that the path has a whole\nsegment in common with the real line, as the example $Q(z)=z^2$ shows. In this section it is assumed that the GZNT\n$\\alpha$ belongs to the real line and the interest is in the existence of $\\varepsilon > 0$ such that\n\\[\n[\\alpha-\\varepsilon,\\alpha]\\subset\\mathcal{F}_Q \\quad \\mbox{or} \\quad\n[\\alpha,\\alpha+\\varepsilon]\\subset\\mathcal{F}_Q.\n\\]\nAccording to Theorem \\ref{tauonreal} this implies that $M(z)$ is holomorphic on\n\\[\n(\\alpha-\\varepsilon,\\alpha) \\quad \\mbox{or} \\quad (\\alpha,\\alpha+\\varepsilon),\n\\]\nrespectively. Recall that if the function $M(z) \\in \\mathbf{N}_0$ is holomorphic on an interval\n$(\\alpha,\\beta)$, then $M(z)$ has (possibly improper) limits at the end points of the interval and $M(\\alpha+) \\in [-\\infty, \\infty)$ and $M(\\beta-) \\in (-\\infty, \\infty]$. The fact that those limits are equal to the nontangential limits at $\\alpha$ and $\\beta$, respectively, will be extensively used in the proof below.\n \n\\begin{theorem}\\label{realpath}\nLet $Q(z)\\in\\mathbf{N}_1$ be of the form\n\\[\n Q(z)=(z-\\alpha)^2M(z) \\quad \\mbox{or} \\quad\n Q(z)=\\frac{(z-\\alpha)^2}{(z-\\beta)(z-\\bar{\\beta})} M(z),\n\\]\nwith $\\alpha \\in \\dR$, $\\beta \\in \\dC^+ \\cup \\dR$, $\\beta \\neq \\alpha$, and $M(z) \\in \\mathbf{N}_0$.
Then\nthe following statements are valid:\n\\begin{enumerate}\\def\\labelenumi{\\rm (\\roman{enumi})}\n\\item\nThere exists $\\varepsilon>0$ such that $(\\alpha-\\varepsilon,\\alpha]\\subset \\mathcal{ F}_Q$ if and only if\n$M(z)$ is holomorphic on $(\\alpha-\\gamma,\\alpha)$ for some $\\gamma >0$\nand $M(\\alpha-) \\in (0,\\infty]$. \n\\item\nThere exists $\\varepsilon>0$ such that $[\\alpha,\\alpha+\\varepsilon)\\subset \\mathcal{F}_Q$ if and only if \n$M(z)$ is holomorphic on $(\\alpha,\\alpha+\\gamma)$ for some $\\gamma>0$\n and $M(\\alpha+) \\in [-\\infty, 0)$.\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\nSince the proofs of the statements (i) and (ii) are analogous, only (i) will be shown. By a translation the\nstatement can be easily reduced to the case $\\alpha=0$. Hence, from now on it is assumed that $M(z)$ is\nholomorphic on some interval $(-\\gamma,0)$, with $\\gamma>0$. The proof will be carried out for the functions\n\\[\n z^2M(z) \\quad \\mbox{and} \\quad \\frac{z^2M(z)}{(z-\\beta)(z-\\bar{\\beta})},\n\\]\nrespectively, in three steps. \\\\\n\n\\textit{Step 1.} The function $R(z)=z^2M(z)$ is considered. Since $M(z) \\in \\mathbf{N}_0$ it has the\nrepresentation \\eqref{nev'}. Recall that\n\\begin{equation}\\label{hash}\n \\lim_{z \\uparrow 0} -z M(z)=\\sigma(\\{0\\}),\n\\end{equation}\ncf. \\eqref{nev++}. Note that if $\\sigma(\\{0\\})=0$, then dominated convergence on $(-\\infty,-\\gamma)$ and monotone convergence on $(0, \\infty)$ imply that\n\\begin{equation}\\label{ooo}\nM(0-)=a+\\int_{\\dR } \\frac{d\\sigma(s)}{s (s^2+1)} \\in \\dR \\cup \\{\\infty\\},\n\\end{equation}\ncf. \\eqref{rreff}.\n\nIn the general case, where possibly $\\sigma(\\{0\\}) \\neq 0$,\nthe function $R(z)= z^2M(z)$ can be written as\n\\[\n R(z)=az^2+bz^3 -z \\sigma(\\{0\\})\n +z^2 \\int_{\\dR \\setminus \\{0\\}} \\left( \\frac{1}{s-z}-\\frac{s}{s^2+1} \\right) d\\sigma(s)\n\\]\nand it is holomorphic on $(-\\gamma, 0)$.
A straightforward calculation implies that\n\\begin{equation}\\label{kuhstrich}\nR'(z)=-\\sigma(\\{0\\})+2az+3bz^2+zT(z),\n\\end{equation}\nwhere the function $T(z)$ is given by\n$$\nT(z)=\\int_{\\dR\\setminus \\{0\\}} h(s,z) \\,\\frac{d\\sigma(s)}{s-z},\n$$\nwith\n$$\nh(s,z)=\\frac{2+2sz}{s^2+1}+\\frac{z}{s-z}.\n$$\nIt will be shown that\n\\begin{equation}\\label{limzT}\n\\lim_{z\\uparrow 0}zT(z)=0.\n\\end{equation}\nIn order to see this, observe that\n\\begin{equation}\\label{limzh}\n \\lim_{z\\uparrow 0} z \\left( h(s,z) \\frac{1}{s-z} \\right) =0,\\quad s\\in\\dR\\setminus\\{0\\}.\n\\end{equation}\nThe argument will be finished via dominated convergence. For this purpose\nwrite the function $T(z)$ as $T(z)=T_1(z)+T_2(z)$ with\n$$\nT_1(z)=\\int_{(0,1\/2)} h(s,z) \\,\\frac{d\\sigma(s)}{s-z},\n\\quad \nT_2(z)= \\int_{\\dR\\setminus(-\\gamma,1\/2)} h(s,z) \\,\\frac{d\\sigma(s)}{s-z}.\n$$\nNote that for $s\\in (0,1\/2)$, $z\\in(-1\/2,0)$ the following\nestimates hold:\n\\begin{equation}\\label{est1}\n -1< \\frac{z}{s-z}<0\n\\end{equation}\nand\n\\begin{equation}\\label{est2}\n\\frac{6}{5}< \\frac{2-s}{s^2+1}<\\frac{2+2sz}{s^2+1}<2.\n\\end{equation}\nIt follows from \\eqref{est1} and \\eqref{est2} that for $s \\in (0,1\/2)$ and $z \\in (-1\/2,0)$\none has\n\\begin{equation}\\label{hschaetz}\n \\frac{1}{5} < h(s,z) < 2.\n\\end{equation}\nSince $|z| < s-z$ for $s>0$ and $z<0$, the estimate \\eqref{hschaetz} shows that the integrand of $zT_1(z)$ is bounded by $2$ on $(0,1\/2)$. Hence \\eqref{limzh} and dominated convergence imply that $zT_1(z) \\to 0$ as $z \\uparrow 0$. Since $T_2(z)$ is holomorphic on $(-\\gamma,1\/2)$, also $zT_2(z) \\to 0$ as $z \\uparrow 0$, and \\eqref{limzT} follows.\n\n\\textit{Case (a)}. Assume that\n\\begin{equation}\\label{1a}\n \\sigma(\\{0\\})>0.\n\\end{equation}\nThen the identity \\eqref{kuhstrich} together with \\eqref{limzT} implies that there is some $\\varepsilon>0$\nsuch that $R(z)$ is holomorphic with $R'(z)<0$ on $(-\\varepsilon,0)$. By Theorem \\ref{mainth} the interval\n$(-\\varepsilon,0)$ is contained in $\\mathcal{F}_Q$. Note that in this case it follows from \\eqref{hash} that\n\\begin{equation}\\label{1a'}\n M(0-)=+\\infty.\n\\end{equation}\n\n\\textit{Case (b)}.
Assume that\n\\begin{equation}\\label{1b}\n \\sigma(\\{0\\})=0 \\quad \\mbox{and} \\quad \\int_{(0,1\/2)} \\frac{d\\sigma (s)}{s}=+\\infty.\n\\end{equation}\nBy the estimate \\eqref{hschaetz} and monotone convergence it follows that\n$$\n\\int_{(0,1\/2)} h(s,z) \\,\\frac{d\\sigma(s)}{s-z} \\ge \\frac{1}{5} \\int_{(0,1\/2)}\\frac{d\\sigma(s)}{s-z}\\to +\\infty,\n \\quad z\\uparrow 0.\n$$\nConsequently, $T_1(0-)=+\\infty$. Since $T_2(z)$ is holomorphic on the interval $(-\\gamma,1\/2)$,\none obtains\n$T(0-)=+\\infty$. As $T$ is holomorphic on $(-\\gamma,0)$, the identity \\eqref{kuhstrich} implies\nagain that $R'(z)<0$ on some interval $(-\\varepsilon,0)$, which is therefore contained in $\\mathcal{F}_Q$.\nNote that in this case it follows from \\eqref{ooo} that\n\\begin{equation}\\label{1b'}\nM(0-)=+\\infty.\n\\end{equation}\n\n\\textit{Case (c)}. Assume that\n\\begin{equation}\\label{1c}\n\\sigma(\\{0\\})=0 \\quad \\mbox{and} \\quad \\int_{(0,1\/2)} \\frac{d\\sigma (s)}{s} < + \\infty.\n\\end{equation}\n Then\n\\begin{equation}\\label{altkuhstrich}\nR'(z)=2z\\left(a+\\int_{\\dR\\setminus \\{0\\}} \\frac{d\\sigma(s)}{s(s^2+1)}\\right) + z^2S(z),\n\\end{equation}\nwith\n\\begin{equation}\\label{bbb}\nS(z)=3b+ \\int_{\\dR\\setminus \\{0\\}} \\frac{3s-2z}{s(s-z)^2} \\, d\\sigma(s).\n\\end{equation}\nIt will be shown by dominated convergence that\n\\begin{equation}\\label{limzS}\n\\lim_{z \\uparrow 0} zS(z)=0.\n\\end{equation}\nTo do this, it suffices to find an integrable upper bound when the integrand in \\eqref{bbb}\nis multiplied by $z$.\nFor $s>0$ and $z<0$ one has $0< 3s-2z < 3(s-z)$, so that\n\\[\n 0 \\le \\frac{3s-2z}{s(s-z)^2} \\le \\frac{3}{s(s-z)}.\n\\]\nSince $|z| < s-z$ for $s >0$ and $z<0$, it follows that\n\\begin{equation}\\label{hen1}\n \\left| z \\frac{3s-2z}{s(s-z)^2} \\right| \\le \\frac{3}{s}, \\quad s>0, \\quad z<0,\n\\end{equation}\nand in particular this estimate holds for $0<s<1\/2$, where the bound $3\/s$ is integrable by \\eqref{1c}. For $s \\in \\dR \\setminus (-\\gamma,1\/2)$ and $z$ close to $0$ the integrand in \\eqref{bbb}, multiplied by $z$, is bounded by a constant multiple of $1\/(s^2+1)$. Hence dominated convergence yields \\eqref{limzS}.\n\nIn view of \\eqref{ooo} there are now three possibilities:\n\\begin{itemize}\n\\item\n$M(0-)>0$.
Then the identities \\eqref{altkuhstrich} and \\eqref{limzS} imply that\n$R'(z)<0$ on some interval $(-\\varepsilon,0)$, which is therefore contained in $\\mathcal{F}_Q$.\n\n\\item\n$M(0-)< 0$. Similarly, one obtains $R'(z)>0$ on some interval $(-\\varepsilon,0)$, which is\ntherefore not contained in $\\mathcal{F}_Q$.\n\n\\item\n$M(0-)=0$.\nIt follows from \\eqref{ooo} and \\eqref{altkuhstrich}\nthat $R'(z)=z^2S(z)$. The identity \\eqref{r1lim} implies that $\\lim_{z\\uparrow 0} S(z)\\in(0,+\\infty]$\n(otherwise $b$ and $\\sigma$ and by \\eqref{ooo} also $a$ would vanish, so that $M(z) \\equiv 0$,\ncontradicting the fact that $R(z)\\in\\mathbf{N}_1$).\nHence, $R'(z)>0$ on some interval $(-\\varepsilon,0)$, which is therefore not contained in $\\mathcal{F}_Q$. \\\\\n\\end{itemize}\n\n\\textit{Step 2.} In order to treat the general case it will now be proved that\n\\begin{equation}\\label{rprim}\n \\lim_{z \\uparrow 0} \\frac{R'(z)}{R(z)} = -\\infty,\n\\end{equation}\nin each of the cases (a), (b), and (c) in Step 1. \\\\\n\n\\textit{Case (a)}.\nRecall that $R'(z) <0$ on some interval $(-\\varepsilon,0)$ with $\\lim_{z \\uparrow 0} R'(z)=-\\sigma(\\{0\\})$ according to \\eqref{kuhstrich}.\nSince $R(z) >0$ on some interval $(-\\varepsilon_1,0)$ by \\eqref{1a'} with\n$\\lim_{z \\uparrow 0} R(z)=0$ (see \\eqref{hash}), it\nfollows that \\eqref{rprim} holds. \\\\\n\n\\textit{Case (b)}.\nRecall that $T(0-) =+ \\infty$, so that\n\\eqref{kuhstrich} implies that\n\\[\n\\lim_{z \\uparrow 0}\\frac{R'(z)}{z} = +\\infty.\n\\]\nFurthermore\n\\[\n\\lim_{z \\uparrow 0} \\frac{R(z)}{z} = \\lim_{z \\uparrow 0} zM(z) =0,\n\\]\nand note that $z M(z) \\uparrow 0$ for $z \\uparrow 0$ (see \\eqref{1b'}).\nThus \\eqref{rprim} follows.
\\\\\n\n\\textit{Case (c)}.\nIt follows from \\eqref{ooo}, \\eqref{altkuhstrich}, and\n\\eqref{limzS} that\n\\[\n\\lim_{z \\uparrow 0} \\frac{R'(z)}{z}=2M(0-).\n\\]\n\nIf $M(0-) \\neq 0$, then\n\\begin{equation}\n \\lim_{z \\uparrow 0} \\frac{z R'(z)}{R(z)} =\n \\lim_{z \\uparrow 0} \\frac{R'(z)}{z} \\frac{1}{M(z)} =2,\n\\end{equation}\nfrom which \\eqref{rprim} follows.\n\nIf $M(0-)=0$, then $M(z) \\uparrow0$ as $z \\uparrow 0$, so that\n\\begin{equation}\n \\lim_{z \\uparrow 0} \\frac{R'(z)}{R(z)} = \\lim_{z \\uparrow 0} \\frac{S(z)}{M(z)}=-\\infty,\n\\end{equation}\nand, again, \\eqref{rprim} follows. \\\\\n\n\\textit{Step 3.} Consider the function $Q(z)$ defined by\n\\[\n Q(z)=\\frac{z^2M(z)}{(z-\\beta)(z-\\bar{\\beta})}=\\frac{R(z)}{D(z)},\n\\]\nwhere $R(z)=z^2M(z)$ as in Step 1 and $D(z)=(z-\\beta)(z-\\bar{\\beta})$. Then\n\\begin{equation}\\label{deriv}\n Q'(z)=\\frac{R'(z)D(z)-R(z)D'(z)}{D(z)^2}=M(z)\\,\\frac{z^2}{D(z)} \\left( \\frac{R'(z)}{R(z)}-\\frac{D'(z)}{D(z)} \\right).\n\\end{equation}\nNote that $D(z) >0$ for $z \\in \\dR$ and $z \\neq \\beta$, and that\n\\[\n \\frac{D'(0)}{D(0)}=- \\frac{2\\,{\\rm Re\\,} \\beta}{|\\beta|^2}.\n\\]\nThe identity \\eqref{deriv} and the limit in \\eqref{rprim} now imply that\n\\[\nQ'(z) <0,\\,\\, z \\in (-\\varepsilon,0), \\mbox{ for some $\\varepsilon > 0$}\n\\quad \\Leftrightarrow \\quad 0< M(0-) \\le \\infty,\n\\]\nand\n\\[\nQ'(z) \\ge 0, \\,\\, z \\in (-\\varepsilon,0), \\mbox{ for some $\\varepsilon > 0$} \\quad \\Leftrightarrow \\quad M(0-) \\le 0.\n\\]\nThis completes the proof of the theorem.\n\\end{proof}\n\n\\begin{example}\nIf $Q(z)=z^{2+\\rho}$, $0 < \\rho<1$, then $Q(z)=z^2M(z)$ with $M(z)=z^\\rho \\in \\mathbf{N}_0$. Hence $Q(z)\n\\in \\mathbf{N}_1$ and $z=0$ \nis the GZNT of $Q(z)$. With the interpretation of $z^\\rho$\nas in the introduction, it follows that $M(z)$ is holomorphic on\n$(0,+\\infty)$. It is not\ndifficult to see that $\\lim_{z \\downarrow 0} M(z)=0$.
Hence, the conditions of Theorem \\ref{realpath} are not\nsatisfied. The path is indicated in Figure 2.\n\\end{example}\n\n\\begin{example}\nIf $Q(z)=z^2 \\log z$, then $Q(z)=z^2 M(z)$ with $M(z) =\\log z$. Hence $Q(z) \\in \\mathbf{N}_1$ and $z=0$ is the GZNT of $Q(z)$. Clearly $\\lim_{z \\downarrow 0} \\log z=-\\infty$, so that by Theorem\n\\ref{realpath}, there exists $\\gamma >0$ such that $(0, \\gamma) \\subset \\mathcal{F}_Q$. It can be shown that\n$\\gamma=1\/\\sqrt{e}$ is the maximal possible value.\n\\end{example}\n\nIn the special case that $Q(z) \\in \\mathbf{N}_1$ is holomorphic in a neighborhood\nof the GZNT $\\alpha \\in \\dR$, a combination of Theorem \\ref{realpath} and\nProposition \\ref{zeros>0} leads to the following description, which\nagrees with Theorem \\ref{mainth}.\n\n\\begin{corollary}\\label{zeros>0+}\nLet $Q(z) \\in \\mathbf{N}_1$ be holomorphic in a neighborhood\nof the GZNT $\\alpha \\in \\dR$. Then the following cases appear:\n\\begin{itemize}\n\\item[(1)] $Q'(\\alpha) < 0$: There exists $\\varepsilon>0$ such that $(\\alpha-\\varepsilon, \\alpha+\\varepsilon)\\subset \\mathcal{ F}_Q$.\n\n\\item[(2a)] $Q'(\\alpha) = 0$, $Q''(\\alpha) > 0$: There exists $\\varepsilon>0$ such that $(\\alpha-\\varepsilon,\\alpha]\\subset \\mathcal{ F}_Q$.\n\n \\item[(2b)] $Q'(\\alpha) = 0$, $Q''(\\alpha) < 0$: There exists $\\varepsilon>0$ such that $[\\alpha,\\alpha+\\varepsilon ) \\subset \\mathcal{ F}_Q$.\n\n\\item[(3)] $Q'(\\alpha) = 0$, $Q''(\\alpha) = 0$: There exists no $\\varepsilon>0$ such that $(\\alpha-\\varepsilon,\\alpha]\\subset \\mathcal{ F}_Q$ or $[\\alpha,\\alpha+\\varepsilon ) \\subset \\mathcal{ F}_Q$.\n\\end{itemize}\n\\end{corollary}\n\nIn order to show how this follows from Theorem \\ref{realpath}\nassume for simplicity that $Q(z)$ is of the form $Q(z)=(z-\\alpha)^2M(z)$ with $M(z) \\in \\mathbf{N}_0$.\nThen the condition that $Q(z)$ is holomorphic around $\\alpha$ implies that\n$M(z)$ has an isolated first order pole at $\\alpha$ or 
that $M(z)$ is holomorphic\naround $\\alpha$.\n\nIf $M(z)$ has a first order pole at $\\alpha$, then\n\\begin{equation}\\label{pol}\n M(z)=\\frac{c}{z-\\alpha} + \\varphi(z),\n\\end{equation}\nwith $c<0$ and $\\varphi(z)$ holomorphic around $\\alpha$. Hence, in this case\n\\[\n Q'(\\alpha) =c <0.\n\\]\n\nIf $M(z)$ is holomorphic at $\\alpha$, then\n\\[\nQ'(\\alpha)=0, \\quad Q''(\\alpha)=2 M(\\alpha), \\quad Q'''(\\alpha)=6M'(\\alpha).\n\\]\n\nIf Case (1) prevails, then $Q'(\\alpha)<0$. Hence the function $M(z)$ must be of the form \\eqref{pol}.\n(It also follows from \\eqref{pol} and Theorem \\ref{realpath} that the assertion in (1) holds.)\n\nIf Cases (2a), (2b), or (3) prevail, then $Q'(\\alpha)=0$. Hence the function $M(z)$ must be holomorphic at $\\alpha$.\nIn particular, in Case (2a) $M(\\alpha)>0$ and in Case (2b) $M(\\alpha)<0$, so that the assertions follow from Theorem \\ref{realpath}. Finally, in Case (3) $M(\\alpha)=0$ and the assertion in (3) follows from Theorem \\ref{realpath}.\n\n\\begin{remark}\\label{53}\nIt may be helpful to reconsider some of the concrete simple examples given earlier\nin terms of Theorem \\ref{realpath} and Corollary \\ref{zeros>0+}.\n\nIf $Q(z)=-z$, then $Q'(0)=-1$. Since $Q(z)=z^2M(z)$ with $M(z)=-1\/z \\in \\mathbf{N}_0$,\none has $M(0+) =-\\infty$, $M(0-)=\\infty$.\n \nIf $Q(z)=z^2$, then $Q'(0)=0$, $Q''(0)=2$. Since\n$Q(z)=z^2M(z)$ with $M(z)=1 \\in \\mathbf{N}_0$, one has $M(0)=1$.\n \nIf $Q(z)=z^3$, then $Q'(0)=0$, $Q''(0)=0$, and $Q'''(0)=6$. 
Since\n$Q(z)=z^2M(z)$ with $M(z)=z \\in \\mathbf{N}_0$, one has $M(0)=0$.\n \nIf $Q(z)$ is as in Example \\ref{drie}, then\n$Q'(0)=0$, $Q''(0)=2(\\delta_2-\\delta_1)$, and $Q'''(0)=2(\\delta_1+\\delta_2)$, and one has\n$M(0)= \\delta_2-\\delta_1$.\n\\end{remark}\n\n\n\\section{The path associated with a special function}\\label{twe}\n\nEach of the factors\nin \\eqref{einz} belongs to $\\mathbf{N}_1$ and has \n$\\alpha \\in \\dC^+ \\cup \\dR \\cup \\{\\infty\\}$ as GZNT and $\\beta\n\\in \\dC^+ \\cup \\dR \\cup \\{\\infty\\}$ as GPNT. \nIn this section the path $\\alpha(\\tau)$ of the GZNT for this factor\nwill be described. The extreme case\nwith $\\beta=\\infty$ can be treated as in the introduction (after a shift). The other extreme case with\n$\\alpha=\\infty$ can be treated similarly.\nTherefore it suffices to consider the function $Q(z)$ defined by\n\\begin{equation}\n\\label{einz-}\n Q(z)=\\frac{(z-\\alpha)(z-\\overline{\\alpha})}{(z-\\beta)(z-\\overline{\\beta})},\n \\quad \\alpha, \\beta \\in \\dC^+ \\cup \\dR.\n\\end{equation}\n\nObserve that the equation ${\\rm Im\\,} Q(z)=0$, where $z=x+ \\ii y$, is equivalent to either the equation\n\\begin{equation}\\label{EINS}\ny=0,\n\\end{equation}\nor the equation\n\\begin{equation}\\label{ZWEI}\n ({\\rm Re\\,} \\alpha-{\\rm Re\\,} \\beta) (x^2+y^2)+(|\\beta|^2-|\\alpha|^2)x+|\\alpha|^2 {\\rm Re\\,} \\beta-|\\beta|^2 {\\rm Re\\,} \\alpha=0.\n\\end{equation}\nIf ${\\rm Re\\,} \\alpha \\ne {\\rm Re\\,} \\beta$, then the equation \\eqref{ZWEI} is equivalent to \n\\[\n\\left( x-\\frac{|\\beta|^2-|\\alpha|^2}{2\\Re(\\beta-\\alpha)}\\right)^2+y^2= \\left(\n\\Re\\alpha-\\frac{|\\beta|^2-|\\alpha|^2}{2\\Re(\\beta-\\alpha)}\\right)^2 +\\left(\\Im\\alpha\\right)^2.\n\\]\nThis defines a circle $C$ which can also be described by the condition that it contains $\\alpha$ and $\\beta$ and\nthat its center lies on $\\dR$. 
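The equivalence just derived invites a quick numerical sanity check. The sketch below (with arbitrarily chosen illustrative values of $\alpha$ and $\beta$, not taken from the text) confirms that $\alpha$ and $\beta$ satisfy \eqref{ZWEI} and that $Q$ is real-valued on the circle $C$:

```python
import cmath

# Illustrative values (assumptions, not from the text)
alpha, beta = 2 + 1j, 1 + 3j

def Q(z):
    # the function (einz-)
    return ((z - alpha) * (z - alpha.conjugate())) / \
           ((z - beta) * (z - beta.conjugate()))

def zwei(x, y):
    # left-hand side of equation (ZWEI)
    return ((alpha.real - beta.real) * (x**2 + y**2)
            + (abs(beta)**2 - abs(alpha)**2) * x
            + abs(alpha)**2 * beta.real - abs(beta)**2 * alpha.real)

# alpha and beta themselves satisfy (ZWEI)
assert abs(zwei(alpha.real, alpha.imag)) < 1e-9
assert abs(zwei(beta.real, beta.imag)) < 1e-9

# center and radius of the circle C obtained by completing the square
c = (abs(beta)**2 - abs(alpha)**2) / (2 * (beta.real - alpha.real))
r = abs(alpha - c)

# Q is real at an arbitrary point of C in the upper half-plane
z = c + r * cmath.exp(0.7j)
assert abs(Q(z).imag) < 1e-9
```

Since $\beta$ also lies on $C$ (one checks $|\beta-c|=r$), this matches the description of $C$ as the circle through $\alpha$ and $\beta$ with center on $\dR$.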
If ${\\rm Re\\,} \\alpha = {\\rm Re\\,} \\beta$, then the equation \\eqref{ZWEI} is equivalent to\n\\[\nx={\\rm Re\\,} \\alpha,\n\\]\nwhich defines the vertical line $C$ given by $x ={\\rm Re\\,} \\alpha$.\nNote that by Theorem \\ref{charact} and Corollary \\ref{quh}:\n\\[\n\\mathcal{ F}_Q \\subset (C\\cap \\dC^+)\\cup\\dR\\cup\\{\\infty\\}.\n\\]\nIt is straightforward to check that the sign of $Q'(x)$, $x \\in \\dR$ (with the possible exception of $\\beta$\nwhen $\\beta \\in \\dR$), is given by the sign of the polynomial\n\\begin{equation}\\label{DREI}\n ({\\rm Re\\,} \\alpha-{\\rm Re\\,} \\beta) x^2+(|\\beta|^2-|\\alpha|^2)x+|\\alpha|^2 {\\rm Re\\,} \\beta-|\\beta|^2 {\\rm Re\\,} \\alpha.\n\\end{equation}\n\nDenote for ${\\rm Re\\,} \\alpha \\neq {\\rm Re\\,} \\beta$ the intersections of $C$ with $\\dR$ by $P_l$ and $P_r$ in such a way that $P_l < P_r$.\n\n\\textit{Case 1.} ${\\rm Re\\,} \\alpha > {\\rm Re\\,} \\beta$. Then it follows from \\eqref{ZWEI} and \\eqref{DREI} that\n$Q'(x)<0$ for $x\\in(P_l,P_r)$ and $Q'(x)>0$ for $x\\in\\dR\\setminus(P_l,P_r)$.\nHence,\n \\[\n \\mathcal{F}_Q=(C\\cap \\dC^+)\\cup[P_l,P_r].\n\\]\nThe direction of the path is indicated in Figure 8.\n\n\\begin{figure}[hbt]\\label{circle1}\n\\begin{center}\n\\begin{picture}(95,32)(0,13)\n\\linethickness{0.05mm} \\put(0,20){\\line(1,0){90}} \\thicklines\n \\put(65,20){\\vector(-1,0){10}}\n \\put(55,20){\\line(-1,0){30}}\n \\multiput(64.99,20.5)(0.01,-0.5){1}{\\line(0,-1){0.5}}\n\\multiput(64.98,21)(0.02,-0.5){1}{\\line(0,-1){0.5}}\n \\multiput(64.94,21.49)(0.03,-0.5){1}{\\line(0,-1){0.5}}\n\\multiput(64.9,21.99)(0.04,-0.5){1}{\\line(0,-1){0.5}}\n 
\\multiput(64.84,22.49)(0.06,-0.5){1}{\\line(0,-1){0.5}}\n\\multiput(64.78,22.98)(0.07,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.7,23.47)(0.08,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.6,23.96)(0.09,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.5,24.45)(0.1,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.38,24.94)(0.12,-0.48){1}{\\line(0,-1){0.48}}\n\\multiput(64.25,25.42)(0.13,-0.48){1}{\\line(0,-1){0.48}}\n\\multiput(64.11,25.9)(0.14,-0.48){1}{\\line(0,-1){0.48}}\n\\multiput(63.96,26.37)(0.15,-0.47){1}{\\line(0,-1){0.47}}\n\\multiput(63.79,26.84)(0.16,-0.47){1}{\\line(0,-1){0.47}}\n\\multiput(63.62,27.31)(0.18,-0.47){1}{\\line(0,-1){0.47}}\n\\multiput(63.43,27.77)(0.09,-0.23){2}{\\line(0,-1){0.23}}\n\\multiput(63.23,28.23)(0.1,-0.23){2}{\\line(0,-1){0.23}}\n\\multiput(63.02,28.68)(0.11,-0.23){2}{\\line(0,-1){0.23}}\n\\multiput(62.8,29.12)(0.11,-0.22){2}{\\line(0,-1){0.22}}\n\\multiput(62.56,29.57)(0.12,-0.22){2}{\\line(0,-1){0.22}}\n\\multiput(62.32,30)(0.12,-0.22){2}{\\line(0,-1){0.22}}\n\\multiput(62.07,30.43)(0.13,-0.21){2}{\\line(0,-1){0.21}}\n\\multiput(61.8,30.85)(0.13,-0.21){2}{\\line(0,-1){0.21}}\n\\multiput(61.52,31.27)(0.14,-0.21){2}{\\line(0,-1){0.21}}\n\\multiput(61.24,31.67)(0.14,-0.2){2}{\\line(0,-1){0.2}}\n \\multiput(60.94,32.08)(0.15,-0.2){2}{\\line(0,-1){0.2}}\n\\multiput(60.64,32.47)(0.1,-0.13){3}{\\line(0,-1){0.13}}\n\\multiput(60.32,32.86)(0.11,-0.13){3}{\\line(0,-1){0.13}}\n\\multiput(60,33.23)(0.11,-0.13){3}{\\line(0,-1){0.13}}\n \\multiput(59.66,33.6)(0.11,-0.12){3}{\\line(0,-1){0.12}}\n\\multiput(59.32,33.96)(0.11,-0.12){3}{\\line(0,-1){0.12}}\n\\multiput(58.96,34.32)(0.12,-0.12){3}{\\line(1,0){0.12}}\n\\multiput(58.6,34.66)(0.12,-0.11){3}{\\line(1,0){0.12}}\n \\multiput(58.23,35)(0.12,-0.11){3}{\\line(1,0){0.12}}\n\\multiput(57.86,35.32)(0.13,-0.11){3}{\\line(1,0){0.13}}\n\\multiput(57.47,35.64)(0.13,-0.11){3}{\\line(1,0){0.13}}\n\\multiput(57.08,35.94)(0.13,-0.1){3}{\\line(1,0){0.13}}\n 
\\multiput(56.67,36.24)(0.2,-0.15){2}{\\line(1,0){0.2}}\n\\multiput(56.27,36.52)(0.2,-0.14){2}{\\line(1,0){0.2}}\n \\multiput(55.85,36.8)(0.21,-0.14){2}{\\line(1,0){0.21}}\n\\multiput(55.43,37.07)(0.21,-0.13){2}{\\line(1,0){0.21}}\n \\multiput(55,37.32)(0.21,-0.13){2}{\\line(1,0){0.21}}\n\\multiput(54.57,37.56)(0.22,-0.12){2}{\\line(1,0){0.22}}\n\\multiput(54.12,37.8)(0.22,-0.12){2}{\\line(1,0){0.22}}\n\\multiput(53.68,38.02)(0.22,-0.11){2}{\\line(1,0){0.22}}\n\\multiput(53.23,38.23)(0.23,-0.11){2}{\\line(1,0){0.23}}\n\\multiput(52.77,38.43)(0.23,-0.1){2}{\\line(1,0){0.23}}\n\\multiput(52.31,38.62)(0.23,-0.09){2}{\\line(1,0){0.23}}\n\\multiput(51.84,38.79)(0.47,-0.18){1}{\\line(1,0){0.47}}\n\\multiput(51.37,38.96)(0.47,-0.16){1}{\\line(1,0){0.47}}\n\\multiput(50.9,39.11)(0.47,-0.15){1}{\\line(1,0){0.47}}\n\\multiput(50.42,39.25)(0.48,-0.14){1}{\\line(1,0){0.48}}\n\\multiput(49.94,39.38)(0.48,-0.13){1}{\\line(1,0){0.48}}\n\\multiput(49.45,39.5)(0.48,-0.12){1}{\\line(1,0){0.48}}\n \\multiput(48.96,39.6)(0.49,-0.1){1}{\\line(1,0){0.49}}\n\\multiput(48.47,39.7)(0.49,-0.09){1}{\\line(1,0){0.49}}\n\\multiput(47.98,39.78)(0.49,-0.08){1}{\\line(1,0){0.49}}\n\\multiput(47.49,39.84)(0.49,-0.07){1}{\\line(1,0){0.49}}\n \\multiput(46.99,39.9)(0.5,-0.06){1}{\\line(1,0){0.5}}\n\\multiput(46.49,39.94)(0.5,-0.04){1}{\\line(1,0){0.5}}\n \\multiput(46,39.98)(0.5,-0.03){1}{\\line(1,0){0.5}}\n\\multiput(45.5,39.99)(0.5,-0.02){1}{\\vector(1,0){0.5}}\n \\multiput(45,40)(0.5,-0.01){1}{\\line(1,0){0.5}}\n\\multiput(44.5,39.99)(0.5,0.01){1}{\\line(1,0){0.5}}\n \\multiput(44,39.98)(0.5,0.02){1}{\\line(1,0){0.5}}\n\\multiput(43.51,39.94)(0.5,0.03){1}{\\line(1,0){0.5}}\n \\multiput(43.01,39.9)(0.5,0.04){1}{\\line(1,0){0.5}}\n\\multiput(42.51,39.84)(0.5,0.06){1}{\\line(1,0){0.5}}\n \\multiput(42.02,39.78)(0.49,0.07){1}{\\line(1,0){0.49}}\n\\multiput(41.53,39.7)(0.49,0.08){1}{\\line(1,0){0.49}}\n 
\\multiput(41.04,39.6)(0.49,0.09){1}{\\line(1,0){0.49}}\n\\multiput(40.55,39.5)(0.49,0.1){1}{\\line(1,0){0.49}}\n \\multiput(40.06,39.38)(0.48,0.12){1}{\\line(1,0){0.48}}\n\\multiput(39.58,39.25)(0.48,0.13){1}{\\line(1,0){0.48}}\n \\multiput(39.1,39.11)(0.48,0.14){1}{\\line(1,0){0.48}}\n\\multiput(38.63,38.96)(0.47,0.15){1}{\\line(1,0){0.47}}\n \\multiput(38.16,38.79)(0.47,0.16){1}{\\line(1,0){0.47}}\n\\multiput(37.69,38.62)(0.47,0.18){1}{\\line(1,0){0.47}}\n \\multiput(37.23,38.43)(0.23,0.09){2}{\\line(1,0){0.23}}\n\\multiput(36.77,38.23)(0.23,0.1){2}{\\line(1,0){0.23}}\n \\multiput(36.32,38.02)(0.23,0.11){2}{\\line(1,0){0.23}}\n\\multiput(35.88,37.8)(0.22,0.11){2}{\\line(1,0){0.22}}\n \\multiput(35.43,37.56)(0.22,0.12){2}{\\line(1,0){0.22}}\n\\multiput(35,37.32)(0.22,0.12){2}{\\line(1,0){0.22}}\n \\multiput(34.57,37.07)(0.21,0.13){2}{\\line(1,0){0.21}}\n\\multiput(34.15,36.8)(0.21,0.13){2}{\\line(1,0){0.21}}\n \\multiput(33.73,36.52)(0.21,0.14){2}{\\line(1,0){0.21}}\n\\multiput(33.33,36.24)(0.2,0.14){2}{\\line(1,0){0.2}}\n \\multiput(32.92,35.94)(0.2,0.15){2}{\\line(1,0){0.2}}\n\\multiput(32.53,35.64)(0.13,0.1){3}{\\line(1,0){0.13}}\n \\multiput(32.14,35.32)(0.13,0.11){3}{\\line(1,0){0.13}}\n\\multiput(31.77,35)(0.13,0.11){3}{\\line(1,0){0.13}}\n \\multiput(31.4,34.66)(0.12,0.11){3}{\\line(1,0){0.12}}\n\\multiput(31.04,34.32)(0.12,0.11){3}{\\line(1,0){0.12}}\n \\multiput(30.68,33.96)(0.12,0.12){3}{\\line(1,0){0.12}}\n \\multiput(30.34,33.6)(0.11,0.12){3}{\\line(0,1){0.12}}\n \\multiput(30,33.23)(0.11,0.12){3}{\\line(0,1){0.12}}\n\\multiput(29.68,32.86)(0.11,0.13){3}{\\line(0,1){0.13}}\n \\multiput(29.36,32.47)(0.11,0.13){3}{\\line(0,1){0.13}}\n\\multiput(29.06,32.08)(0.1,0.13){3}{\\line(0,1){0.13}}\n \\multiput(28.76,31.67)(0.15,0.2){2}{\\line(0,1){0.2}}\n\\multiput(28.48,31.27)(0.14,0.2){2}{\\line(0,1){0.2}}\n \\multiput(28.2,30.85)(0.14,0.21){2}{\\line(0,1){0.21}}\n\\multiput(27.93,30.43)(0.13,0.21){2}{\\line(0,1){0.21}}\n 
\\multiput(27.68,30)(0.13,0.21){2}{\\line(0,1){0.21}}\n\\multiput(27.44,29.57)(0.12,0.22){2}{\\line(0,1){0.22}}\n \\multiput(27.2,29.12)(0.12,0.22){2}{\\line(0,1){0.22}}\n\\multiput(26.98,28.68)(0.11,0.22){2}{\\line(0,1){0.22}}\n \\multiput(26.77,28.23)(0.11,0.23){2}{\\line(0,1){0.23}}\n\\multiput(26.57,27.77)(0.1,0.23){2}{\\line(0,1){0.23}}\n \\multiput(26.38,27.31)(0.09,0.23){2}{\\line(0,1){0.23}}\n\\multiput(26.21,26.84)(0.18,0.47){1}{\\line(0,1){0.47}}\n \\multiput(26.04,26.37)(0.16,0.47){1}{\\line(0,1){0.47}}\n\\multiput(25.89,25.9)(0.15,0.47){1}{\\line(0,1){0.47}}\n \\multiput(25.75,25.42)(0.14,0.48){1}{\\line(0,1){0.48}}\n\\multiput(25.62,24.94)(0.13,0.48){1}{\\line(0,1){0.48}}\n \\multiput(25.5,24.45)(0.12,0.48){1}{\\line(0,1){0.48}}\n\\multiput(25.4,23.96)(0.1,0.49){1}{\\line(0,1){0.49}}\n \\multiput(25.3,23.47)(0.09,0.49){1}{\\line(0,1){0.49}}\n\\multiput(25.22,22.98)(0.08,0.49){1}{\\line(0,1){0.49}}\n \\multiput(25.16,22.49)(0.07,0.49){1}{\\line(0,1){0.49}}\n\\multiput(25.1,21.99)(0.06,0.5){1}{\\line(0,1){0.5}}\n \\multiput(25.06,21.49)(0.04,0.5){1}{\\line(0,1){0.5}}\n\\multiput(25.02,21)(0.03,0.5){1}{\\line(0,1){0.5}}\n \\multiput(25.01,20.5)(0.02,0.5){1}{\\line(0,1){0.5}}\n\\multiput(25,20)(0.01,0.5){1}{\\line(0,1){0.5}}\n \\thinlines \\put(35,36){\\line(0,1){2}}\n\\put(61,31){\\line(0,1){2}}\n \\put(36,35){$\\beta$}\n \\put(58,30){$\\alpha$}\n\\put(25,19){\\line(0,1){2}}\n \\put(65,19){\\line(0,1){2}}\n \\put(25,17){\\makebox(0,0)[cc]{$P_l$}}\n\\put(65,17){\\makebox(0,0)[cc]{$P_r$}}\n\n\\end{picture}\n\\caption{The path of $\\alpha(\\tau)$ for ${\\rm Re\\,} \\alpha > {\\rm Re\\,} \\beta$.}\n\\end{center}\n\\end{figure}\n\n\n\\textit{Case 2}. ${\\rm Re\\,} \\alpha < {\\rm Re\\,} \\beta$. 
Then it follows from \\eqref{ZWEI} and \\eqref{DREI} that\n$Q'(x)>0$ for $x\\in(P_l,P_r)$ and $Q'(x)<0$ for $x\\in\\dR\\setminus(P_l,P_r)$.\nHence,\n \\[\n \\mathcal{F}_Q=(C\\cap \\dC^+)\\cup(-\\infty, P_l] \\cup [P_r, \\infty).\n\\]\nThe direction of the path is indicated in Figure 9.\n\n\n\\begin{figure}[hbt]\\label{circle2}\n\\begin{center}\n\\begin{picture}(90,32)(0,13)\n\\linethickness{0.05mm} \\put(0,20){\\line(1,0){90}} \\thicklines\n \\put(25,20){\\vector(-1,0){10}}\n \\put(15,20){\\line(-1,0){15}}\n \\put(90,20){\\vector(-1,0){10}}\n \\put(80,20){\\line(-1,0){15}}\n \\multiput(64.99,20.5)(0.01,-0.5){1}{\\line(0,-1){0.5}}\n\\multiput(64.98,21)(0.02,-0.5){1}{\\line(0,-1){0.5}}\n \\multiput(64.94,21.49)(0.03,-0.5){1}{\\line(0,-1){0.5}}\n\\multiput(64.9,21.99)(0.04,-0.5){1}{\\line(0,-1){0.5}}\n \\multiput(64.84,22.49)(0.06,-0.5){1}{\\line(0,-1){0.5}}\n\\multiput(64.78,22.98)(0.07,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.7,23.47)(0.08,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.6,23.96)(0.09,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.5,24.45)(0.1,-0.49){1}{\\line(0,-1){0.49}}\n\\multiput(64.38,24.94)(0.12,-0.48){1}{\\line(0,-1){0.48}}\n\\multiput(64.25,25.42)(0.13,-0.48){1}{\\line(0,-1){0.48}}\n\\multiput(64.11,25.9)(0.14,-0.48){1}{\\line(0,-1){0.48}}\n\\multiput(63.96,26.37)(0.15,-0.47){1}{\\line(0,-1){0.47}}\n\\multiput(63.79,26.84)(0.16,-0.47){1}{\\line(0,-1){0.47}}\n\\multiput(63.62,27.31)(0.18,-0.47){1}{\\line(0,-1){0.47}}\n\\multiput(63.43,27.77)(0.09,-0.23){2}{\\line(0,-1){0.23}}\n\\multiput(63.23,28.23)(0.1,-0.23){2}{\\line(0,-1){0.23}}\n\\multiput(63.02,28.68)(0.11,-0.23){2}{\\line(0,-1){0.23}}\n\\multiput(62.8,29.12)(0.11,-0.22){2}{\\line(0,-1){0.22}}\n\\multiput(62.56,29.57)(0.12,-0.22){2}{\\line(0,-1){0.22}}\n\\multiput(62.32,30)(0.12,-0.22){2}{\\line(0,-1){0.22}}\n\\multiput(62.07,30.43)(0.13,-0.21){2}{\\line(0,-1){0.21}}\n\\multiput(61.8,30.85)(0.13,-0.21){2}{\\line(0,-1){0.21}}\n\\multiput(61.52,31.27)(0.14,-0.21){2}{\\line(0,-1){0.21}}\n\\
multiput(61.24,31.67)(0.14,-0.2){2}{\\line(0,-1){0.2}}\n \\multiput(60.94,32.08)(0.15,-0.2){2}{\\line(0,-1){0.2}}\n\\multiput(60.64,32.47)(0.1,-0.13){3}{\\line(0,-1){0.13}}\n\\multiput(60.32,32.86)(0.11,-0.13){3}{\\line(0,-1){0.13}}\n\\multiput(60,33.23)(0.11,-0.13){3}{\\line(0,-1){0.13}}\n \\multiput(59.66,33.6)(0.11,-0.12){3}{\\line(0,-1){0.12}}\n\\multiput(59.32,33.96)(0.11,-0.12){3}{\\line(0,-1){0.12}}\n\\multiput(58.96,34.32)(0.12,-0.12){3}{\\line(1,0){0.12}}\n\\multiput(58.6,34.66)(0.12,-0.11){3}{\\line(1,0){0.12}}\n \\multiput(58.23,35)(0.12,-0.11){3}{\\line(1,0){0.12}}\n\\multiput(57.86,35.32)(0.13,-0.11){3}{\\line(1,0){0.13}}\n\\multiput(57.47,35.64)(0.13,-0.11){3}{\\line(1,0){0.13}}\n\\multiput(57.08,35.94)(0.13,-0.1){3}{\\line(1,0){0.13}}\n \\multiput(56.67,36.24)(0.2,-0.15){2}{\\line(1,0){0.2}}\n\\multiput(56.27,36.52)(0.2,-0.14){2}{\\line(1,0){0.2}}\n \\multiput(55.85,36.8)(0.21,-0.14){2}{\\line(1,0){0.21}}\n\\multiput(55.43,37.07)(0.21,-0.13){2}{\\line(1,0){0.21}}\n \\multiput(55,37.32)(0.21,-0.13){2}{\\line(1,0){0.21}}\n\\multiput(54.57,37.56)(0.22,-0.12){2}{\\line(1,0){0.22}}\n\\multiput(54.12,37.8)(0.22,-0.12){2}{\\line(1,0){0.22}}\n\\multiput(53.68,38.02)(0.22,-0.11){2}{\\line(1,0){0.22}}\n\\multiput(53.23,38.23)(0.23,-0.11){2}{\\line(1,0){0.23}}\n\\multiput(52.77,38.43)(0.23,-0.1){2}{\\line(1,0){0.23}}\n\\multiput(52.31,38.62)(0.23,-0.09){2}{\\line(1,0){0.23}}\n\\multiput(51.84,38.79)(0.47,-0.18){1}{\\line(1,0){0.47}}\n\\multiput(51.37,38.96)(0.47,-0.16){1}{\\line(1,0){0.47}}\n\\multiput(50.9,39.11)(0.47,-0.15){1}{\\line(1,0){0.47}}\n\\multiput(50.42,39.25)(0.48,-0.14){1}{\\line(1,0){0.48}}\n\\multiput(49.94,39.38)(0.48,-0.13){1}{\\line(1,0){0.48}}\n\\multiput(49.45,39.5)(0.48,-0.12){1}{\\line(1,0){0.48}}\n \\multiput(48.96,39.6)(0.49,-0.1){1}{\\line(1,0){0.49}}\n\\multiput(48.47,39.7)(0.49,-0.09){1}{\\line(1,0){0.49}}\n\\multiput(47.98,39.78)(0.49,-0.08){1}{\\line(1,0){0.49}}\n\\multiput(47.49,39.84)(0.49,-0.07){1}{\\line(1,0){0.49}}\n 
\\multiput(46.99,39.9)(0.5,-0.06){1}{\\line(1,0){0.5}}\n\\multiput(46.49,39.94)(0.5,-0.04){1}{\\line(1,0){0.5}}\n \\multiput(46,39.98)(0.5,-0.03){1}{\\line(1,0){0.5}}\n\\multiput(45.5,39.99)(0.5,-0.02){1}{\\vector(-1,0){0.5}}\n \\multiput(45,40)(0.5,-0.01){1}{\\line(1,0){0.5}}\n\\multiput(44.5,39.99)(0.5,0.01){1}{\\line(1,0){0.5}}\n \\multiput(44,39.98)(0.5,0.02){1}{\\line(1,0){0.5}}\n\\multiput(43.51,39.94)(0.5,0.03){1}{\\line(1,0){0.5}}\n \\multiput(43.01,39.9)(0.5,0.04){1}{\\line(1,0){0.5}}\n\\multiput(42.51,39.84)(0.5,0.06){1}{\\line(1,0){0.5}}\n \\multiput(42.02,39.78)(0.49,0.07){1}{\\line(1,0){0.49}}\n\\multiput(41.53,39.7)(0.49,0.08){1}{\\line(1,0){0.49}}\n \\multiput(41.04,39.6)(0.49,0.09){1}{\\line(1,0){0.49}}\n\\multiput(40.55,39.5)(0.49,0.1){1}{\\line(1,0){0.49}}\n \\multiput(40.06,39.38)(0.48,0.12){1}{\\line(1,0){0.48}}\n\\multiput(39.58,39.25)(0.48,0.13){1}{\\line(1,0){0.48}}\n \\multiput(39.1,39.11)(0.48,0.14){1}{\\line(1,0){0.48}}\n\\multiput(38.63,38.96)(0.47,0.15){1}{\\line(1,0){0.47}}\n \\multiput(38.16,38.79)(0.47,0.16){1}{\\line(1,0){0.47}}\n\\multiput(37.69,38.62)(0.47,0.18){1}{\\line(1,0){0.47}}\n \\multiput(37.23,38.43)(0.23,0.09){2}{\\line(1,0){0.23}}\n\\multiput(36.77,38.23)(0.23,0.1){2}{\\line(1,0){0.23}}\n \\multiput(36.32,38.02)(0.23,0.11){2}{\\line(1,0){0.23}}\n\\multiput(35.88,37.8)(0.22,0.11){2}{\\line(1,0){0.22}}\n \\multiput(35.43,37.56)(0.22,0.12){2}{\\line(1,0){0.22}}\n\\multiput(35,37.32)(0.22,0.12){2}{\\line(1,0){0.22}}\n \\multiput(34.57,37.07)(0.21,0.13){2}{\\line(1,0){0.21}}\n\\multiput(34.15,36.8)(0.21,0.13){2}{\\line(1,0){0.21}}\n \\multiput(33.73,36.52)(0.21,0.14){2}{\\line(1,0){0.21}}\n\\multiput(33.33,36.24)(0.2,0.14){2}{\\line(1,0){0.2}}\n \\multiput(32.92,35.94)(0.2,0.15){2}{\\line(1,0){0.2}}\n\\multiput(32.53,35.64)(0.13,0.1){3}{\\line(1,0){0.13}}\n \\multiput(32.14,35.32)(0.13,0.11){3}{\\line(1,0){0.13}}\n\\multiput(31.77,35)(0.13,0.11){3}{\\line(1,0){0.13}}\n 
\\multiput(31.4,34.66)(0.12,0.11){3}{\\line(1,0){0.12}}\n\\multiput(31.04,34.32)(0.12,0.11){3}{\\line(1,0){0.12}}\n \\multiput(30.68,33.96)(0.12,0.12){3}{\\line(1,0){0.12}}\n \\multiput(30.34,33.6)(0.11,0.12){3}{\\line(0,1){0.12}}\n \\multiput(30,33.23)(0.11,0.12){3}{\\line(0,1){0.12}}\n\\multiput(29.68,32.86)(0.11,0.13){3}{\\line(0,1){0.13}}\n \\multiput(29.36,32.47)(0.11,0.13){3}{\\line(0,1){0.13}}\n\\multiput(29.06,32.08)(0.1,0.13){3}{\\line(0,1){0.13}}\n \\multiput(28.76,31.67)(0.15,0.2){2}{\\line(0,1){0.2}}\n\\multiput(28.48,31.27)(0.14,0.2){2}{\\line(0,1){0.2}}\n \\multiput(28.2,30.85)(0.14,0.21){2}{\\line(0,1){0.21}}\n\\multiput(27.93,30.43)(0.13,0.21){2}{\\line(0,1){0.21}}\n \\multiput(27.68,30)(0.13,0.21){2}{\\line(0,1){0.21}}\n\\multiput(27.44,29.57)(0.12,0.22){2}{\\line(0,1){0.22}}\n \\multiput(27.2,29.12)(0.12,0.22){2}{\\line(0,1){0.22}}\n\\multiput(26.98,28.68)(0.11,0.22){2}{\\line(0,1){0.22}}\n \\multiput(26.77,28.23)(0.11,0.23){2}{\\line(0,1){0.23}}\n\\multiput(26.57,27.77)(0.1,0.23){2}{\\line(0,1){0.23}}\n \\multiput(26.38,27.31)(0.09,0.23){2}{\\line(0,1){0.23}}\n\\multiput(26.21,26.84)(0.18,0.47){1}{\\line(0,1){0.47}}\n \\multiput(26.04,26.37)(0.16,0.47){1}{\\line(0,1){0.47}}\n\\multiput(25.89,25.9)(0.15,0.47){1}{\\line(0,1){0.47}}\n \\multiput(25.75,25.42)(0.14,0.48){1}{\\line(0,1){0.48}}\n\\multiput(25.62,24.94)(0.13,0.48){1}{\\line(0,1){0.48}}\n \\multiput(25.5,24.45)(0.12,0.48){1}{\\line(0,1){0.48}}\n\\multiput(25.4,23.96)(0.1,0.49){1}{\\line(0,1){0.49}}\n \\multiput(25.3,23.47)(0.09,0.49){1}{\\line(0,1){0.49}}\n\\multiput(25.22,22.98)(0.08,0.49){1}{\\line(0,1){0.49}}\n \\multiput(25.16,22.49)(0.07,0.49){1}{\\line(0,1){0.49}}\n\\multiput(25.1,21.99)(0.06,0.5){1}{\\line(0,1){0.5}}\n \\multiput(25.06,21.49)(0.04,0.5){1}{\\line(0,1){0.5}}\n\\multiput(25.02,21)(0.03,0.5){1}{\\line(0,1){0.5}}\n \\multiput(25.01,20.5)(0.02,0.5){1}{\\line(0,1){0.5}}\n\\multiput(25,20)(0.01,0.5){1}{\\line(0,1){0.5}}\n \\thinlines 
\\put(35,36){\\line(0,1){2}}\n\\put(61,31){\\line(0,1){2}}\n \\put(36,35){$\\alpha$}\n \\put(58,29){$\\beta$}\n \\put(3,22){\\makebox(0,0)[cc]{$\\infty$}}\n\\put(25,19){\\line(0,1){2}}\n \\put(65,19){\\line(0,1){2}}\n \\put(25,17){\\makebox(0,0)[cc]{$P_l$}}\n\\put(65,17){\\makebox(0,0)[cc]{$P_r$}}\n\n\\end{picture}\n\\caption{The path of $\\alpha(\\tau)$ for ${\\rm Re\\,} \\alpha < {\\rm Re\\,} \\beta$.}\n\\end{center}\n\\end{figure}\n\n\\textit{Case 3.} ${\\rm Re\\,} \\alpha = {\\rm Re\\,} \\beta$. Then it follows from \\eqref{ZWEI} and \\eqref{DREI} that\nin the case ${\\rm Im\\,} \\beta < {\\rm Im\\,} \\alpha$\n\\[\n Q'(x) <0, \\quad x > P; \\quad Q'(x) > 0, \\quad x < P,\n\\]\nand in the case ${\\rm Im\\,} \\beta > {\\rm Im\\,} \\alpha$\n\\[\n Q'(x) >0, \\quad x > P; \\quad Q'(x) < 0, \\quad x < P.\n\\]\nThe direction of the path is indicated in each of these cases in Figure 10.\n\n\\begin{figure}[hbt]\\label{nocircles}\n\\begin{center}\n\\begin{picture}(105,32)(0,13)\n\n\\thinlines \\put(0,20){\\line(1,0){50}}\n\n \\put(24,27){\\line(2,0){2}}\n \\put(23,27){\\makebox(0,0)[cc]{$\\beta$}}\n \\put(24,33){\\line(2,0){2}}\n \\put(23,33){\\makebox(0,0)[cc]{$\\alpha$}}\n \\thicklines\n \\put(50,20){\\vector(-1,0){15}}\n \\put(35,20){\\line(-1,0){10}}\n \\put(25,20){\\vector(0,1){4}}\n \\put(25,24){\\vector(0,1){13}}\n \\put(25,37){\\line(0,1){3}}\n\n\\thinlines\n \\put(55,20){\\line(1,0){50}}\n\n \\put(79,27){\\line(2,0){2}}\n \\put(78,27){\\makebox(0,0)[cc]{$\\alpha$}}\n \\put(79,33){\\line(2,0){2}}\n \\put(78,33){\\makebox(0,0)[cc]{$\\beta$}}\n \\thicklines\n \\put(80,40){\\vector(0,-1){4}}\n \\put(80,36){\\vector(0,-1){13}}\n \\put(80,23){\\line(0,-1){3}}\n \\put(70,20){\\line(-1,0){15}}\n \\put(80,20){\\vector(-1,0){10}}\n\n \\put(25,17){\\makebox(0,0)[cc]{$P$}}\n \\put(80,17){\\makebox(0,0)[cc]{$P$}}\n\\end{picture}\n\\caption{The path of $\\alpha(\\tau)$ for ${\\rm Re\\,} \\alpha ={\\rm Re\\,} \\beta$ when ${\\rm Im\\,} \\alpha > {\\rm Im\\,} \\beta$ and\nwhen 
${\\rm Im\\,} \\beta > {\\rm Im\\,} \\alpha$.}\n\\end{center}\n\\end{figure}\n\n\\begin{remark}\nIt follows from the above three cases that the point $\\infty$ belongs to $\\mathcal{F}_Q$ if and only if ${\\rm Re\\,}\n\\alpha \\le {\\rm Re\\,} \\beta$. This can be seen directly. It follows from \\eqref{x_00} \nthat $Q_\\tau(z)$ has $\\infty$ as GZNT if and only if\n\\[\n\\lim_{z \\wh\\to \\infty} z Q_\\tau(z) \\in [0,\\infty).\n\\]\nIn this case $Q_\\tau(z) \\to 0$ when $z \\wh\\to \\infty$. Since $Q(z) \\to 1$ as $z \\wh\\to \\infty$, this can only take place for $\\tau =1$, in which case\n\\[\n\\lim_{z \\wh\\to \\infty} z Q_\\tau(z)={\\rm Re\\,} \\beta -{\\rm Re\\,} \\alpha,\n\\]\nfrom which the assertion follows.\n\\end{remark}\n\n\\begin{remark}\nThe present treatment of the function $Q(z)$ in \\eqref{einz-} can be seen as an illustration of Theorem \\ref{mainth} and Theorem \\ref{realpath}.\nFor this purpose, assume (without loss of generality) that $\\alpha \\in \\dR$. When ${\\rm Re\\,} \\beta < \\alpha$, the point $P_r$ coincides with $\\alpha$, and when ${\\rm Re\\,} \\beta > \\alpha$, the point $P_l$ coincides with $\\alpha$; if ${\\rm Re\\,} \\beta=\\alpha$, then\n the point $P$ coincides with $\\alpha$ (and ${\\rm Im\\,} \\beta > 0$).\nHence the path hits $\\dR$ at $\\alpha$ in a perpendicular way (cf. Theorem \\ref{mainth}). Afterwards, it stays on the real line (cf. Theorem \\ref{realpath} with $M(z)=1 \\in \\mathbf{N}_0$). \n\\end{remark}\n\n\n\\section{Path entirely on the extended real line}\\label{ontherealline}\n\nThere exist functions $Q(z) \\in \\mathbf{N}_1$ whose GZNT $\\alpha(\\tau)$ stays on the (extended) real line.\nThese functions will be classified in this section.\n\n\\begin{example}\nLet $\\alpha, \\beta \\in \\dR$ and $c \\in \\dR$ be such that $c(\\alpha-\\beta) < 0$, and define the function $Q(z)$ by\n\\begin{equation}\\label{R0}\n Q(z)=c\\,\\frac{z-\\alpha}{z-\\beta}, \n \\quad z \\neq \\beta. 
\n\\end{equation}\nThen $Q(z)$ belongs to $\\mathbf{N}_1$; actually, one has\n\\begin{equation}\\label{R0+}\n Q(z) =\\frac{(z-\\alpha)^2}{(z-\\beta)^2}\\,\\,M(z),\n \\quad\n M(z)=c + \\frac{c(\\alpha-\\beta)}{z-\\alpha},\n\\end{equation}\nwhere $M(z)$ belongs to $\\mathbf{N}_0$.\n\nIt follows from \\eqref{R0+} that\n$M(\\alpha+)=-\\infty$, $M(\\alpha-)=\\infty$.\nHence, according to Theorem \\ref{realpath}, there exists $\\varepsilon > 0$ such that the intervals\n$(\\alpha-\\varepsilon, \\alpha)$ and $(\\alpha, \\alpha+\\varepsilon)$ are contained in $\\cF_Q$. However, in this case one can obtain stronger results. \nObserve that the GZNT $\\alpha(\\tau)$ of $Q_\\tau(z)$, $\\tau \\in \\dR \\cup \\{\\infty\\}$, is given by\n\\[\n \\alpha(\\tau)=\\frac{c \\alpha-\\tau \\beta}{c-\\tau},\n \\quad \\tau \\in \\dR\\setminus\\left\\{c\\right\\},\n\\]\nand, in the remaining cases, by \n\\[\n \\alpha(\\infty)=\\beta, \\quad \\alpha(c)=\\infty. \n\\]\nTherefore the path $\\mathcal{F}_Q$ covers the extended real line.\nMoreover, note that for the functions \n\\begin{equation}\\label{Rc}\nQ(z) =\\frac{d}{z-\\gamma}, \n\\quad \\gamma\\in\\dR,\\quad d>0,\n\\end{equation}\nand\n\\begin{equation}\\label{R1c}\nQ(z)=\\frac{\\gamma-z}{d}, \n\\quad \\gamma\\in\\dR,\\quad d>0,\n\\end{equation}\none has $\\mathcal{F}_Q=\\dR\\cup\\{\\infty\\}$ as well.\n\\end{example}\n\nIn fact, the functions in \\eqref{R0}, \\eqref{Rc}, and \\eqref{R1c} are the only functions in $\\mathbf{N}_1$\nwhose path of the GZNT stays on the (extended) real line. In order to prove this, the following lemma is\nneeded.\n\n\\begin{lemma}\\label{kuhkuh}\nLet $\\alpha, \\beta \\in \\dR$ with $\\alpha \\not=\\beta$. 
Assume that $U(z)$ and $V(z)$ are nontrivial \nNevanlinna functions which\nsatisfy one of the following relations:\n\\begin{equation}\\label{q1q0}\nU(z)=-\\frac{(z-\\alpha)^2}{(z-\\beta)^{2}}\\,V(z),\n\\end{equation}\n\\begin{equation}\\label{q1q0a}\nU(z)=-\\frac{1}{(z-\\beta)^{2}}\\,V(z),\n\\end{equation}\nor\n\\begin{equation}\\label{q1q0b}\nU(z)=-(z-\\alpha)^2 \\,V(z).\n\\end{equation}\n Then the functions $U(z)$ and $V(z)$ are of the form\n\\begin{equation}\\label{q1q0rep}\nU(z)=c\\left(1-\\frac{\\beta-\\alpha}{\\beta-z}\\right), \\quad V(z)=c\\left(-1+\\frac{\\alpha-\\beta}{\\alpha-z}\\right), \\quad\nc(\\beta-\\alpha) < 0,\n\\end{equation}\n\\begin{equation}\\label{q1q0repa}\nU(z)=\\frac{d}{\\beta-z}, \\quad V(z)=d (z-\\beta), \\quad d> 0,\n\\end{equation}\nor\n\\begin{equation}\\label{q1q0repb}\nU(z)= e (z-\\alpha), \\quad V(z)= \\frac{e}{\\alpha-z}, \\quad e> 0,\n\\end{equation}\nrespectively. Conversely, all functions of the form \\eqref{q1q0rep}, \\eqref{q1q0repa}, or \\eqref{q1q0repb}\nare Nevanlinna functions and they satisfy \\eqref{q1q0}, \\eqref{q1q0a}, and \\eqref{q1q0b}, respectively.\n\\end{lemma}\n\n\\begin{proof}\nAssume that $U(z)$ and $V(z)$ are Nevanlinna functions. Let the integral representations of $U(z)$ and $V(z)$ be of the\nform \\eqref{nev'} with spectral measures $\\sigma_1$, $\\sigma_0$, and constants $a_1 \\in \\dR$, $b_1 \\ge 0$, $a_0\n\\in \\dR$, $b_0 \\ge 0$, respectively.\n\nAssume that $U(z)$ and $V(z)$ satisfy \\eqref{q1q0}. Then the identity \\eqref{nev+} implies\nthat \n$$\nb_1=\\lim_{z\\wh\\to\\infty}\\frac{U(z)}{z} =-\\lim_{z\\wh\\to\\infty}\\frac{V(z)}{z} =-b_0.\n$$\nSince both $b_0$ and $b_1$ are nonnegative, it follows that\n\\begin{equation}\\label{b0b1=0}\nb_0=b_1=0.\n\\end{equation}\nThe Stieltjes inversion formula implies that\n$$\n(s-\\beta)^2 d\\sigma_1(s)=-(s-\\alpha)^2 d\\sigma_0(s).\n$$\nIt follows that ${\\rm supp\\,} \\sigma_1\\subset\\{\\beta\\}$ and ${\\rm supp\\,} \\sigma_0\\subset\\{\\alpha\\}$. 
If $\\sigma_1=0$, then by\n\\eqref{b0b1=0} the function $U(z)$ is equal to a real constant, and the identity \\eqref{q1q0} implies that\n$U(z)=V(z)=0$. \nThe same conclusion holds if $\\sigma_0=0$. Therefore, it may be assumed that $U(z)$ and $V(z)$\nhave representations of the form\n$$\nU(z)=a_1+\\frac{d_1}{\\beta-z}, \\quad V(z)=a_0+\\frac{d_0}{\\alpha-z},\n$$\nwith $d_0,~d_1>0$. The identity \\eqref{q1q0} implies further that $U(\\alpha)=V(\\beta)=0$, so that\n$$\na_1+\\frac{d_1}{\\beta-\\alpha}=a_0+\\frac{d_0}{\\alpha-\\beta}=0.\n$$\nThus $a_1(\\beta-\\alpha) < 0$, $a_0(\\beta-\\alpha) > 0$, and\n$$\nU(z)=a_1\\left(1-\\frac{\\beta-\\alpha}{\\beta-z}\\right), \\quad V(z)=a_0\\left(1-\\frac{\\alpha-\\beta}{\\alpha-z}\\right).\n$$\nThis and $\\eqref{q1q0}$ imply that \n$$\na_1=\\lim_{z\\wh\\to\\infty}U(z)=-\\lim_{z\\wh\\to\\infty}V(z)=-a_0,\n$$\nand the representations in $\\eqref{q1q0rep}$ follow with $c=a_1$.\n\nNow assume that $U(z)$ and $V(z)$ satisfy \\eqref{q1q0a}. The identity \\eqref{nev+} \nimplies that\n$$\nb_1=\\lim_{z \\wh\\to\\infty}\\frac{U(z)}{z} = 0.\n$$\nThe Stieltjes inversion formula implies that\n$$\n(s-\\beta)^2 d\\sigma_1(s)=-d\\sigma_0(s),\n$$\nwhich leads to ${\\rm supp\\,} \\sigma_1\\subset\\{\\beta\\}$ and ${\\rm supp\\,} \\sigma_0= \\emptyset$. \nTherefore, it follows that $U(z)$ and $V(z)$ have representations of the form\n$$\nU(z)=a_1+\\frac{d_1}{\\beta-z}, \\quad V(z)=b_0z+a_0,\n$$\nwith $d_1>0$. The identity \\eqref{q1q0a} implies that $V(\\beta)=0$, so that $b_0 \\beta+a_0=0$. Hence\n$V(z)=b_0(z-\\beta)$ and, again by \\eqref{q1q0a}, \n\\[\n a_1+\\frac{d_1}{\\beta-z}=-\\frac{b_0}{z-\\beta}.\n\\]\nHence $a_1=0$ and\n\\[\n U(z)=\\frac{d}{\\beta-z}, \\quad V(z)=-(z-\\beta)^2 U(z)=d (z-\\beta), \\quad d>0.\n\\]\n\nThe treatment of the case where the functions $U(z)$ and $V(z)$ satisfy \\eqref{q1q0b} is similar to what has\njust been shown. 
It also follows by symmetry from the previous case.\n\nAs to the converse statement: it is straightforward to check that the functions in $\\eqref{q1q0rep}$,\n$\\eqref{q1q0repa}$, or $\\eqref{q1q0repb}$, are Nevanlinna functions which satisfy the identities\n\\eqref{q1q0}, \\eqref{q1q0a}, or \\eqref{q1q0b}, respectively.\n\\end{proof}\n\nThe case $\\alpha=\\beta$ excluded in Lemma \\ref{kuhkuh} concerns Nevanlinna functions\n$U(z)$ and $V(z)$ which satisfy $U(z)=-V(z)$. Of course, this implies that $U(z)=c$ and $V(z)=-c$, where $c$\nis a real constant.\n\n\\begin{theorem}\\label{ontheline:t}\nLet $Q(z) \\in \\mathbf{N}_1$ have the property that the path $\\mathcal{F}_Q$ is contained in the extended real\nline.\n\\begin{enumerate}\\def\\rm (\\roman{enumi}){\\rm (\\roman{enumi})}\n\n\\item If both the GZNT and the GPNT of $Q(z)$ are finite, then $Q(z)$ is of the form\n\\eqref{R0} with some $\\alpha,\\beta\\in\\dR$ and $c\\in\\dR$ such that $c(\\alpha-\\beta) < 0$.\n\n\\item If the GZNT of $Q(z)$ is at $\\infty$ and the GPNT of $Q(z)$ is finite,\nthen $Q(z)$ is of the form \\eqref{Rc}. \n\n\\item If the GZNT of $Q(z)$ is finite and the GPNT of $Q(z)$ is at $\\infty$,\nthen $Q(z)$ is of the form \\eqref{R1c}. \n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\n(i) \nThe assumption that $Q_{\\tau}(z)$ has no GZNT in $\\dC^+$ for all $\\tau\\in\\dR\\cup\n\\{\\infty\\}$ implies that $\\Im Q(z)\\not=0$ in $\\dC^+$; cf. Corollary \\ref{quh}. Since $Q(z) \\in\\mathbf{N}_1$,\nthere is at least one point $z_0\\in\\dC^+$ with $\\Im Q(z_0)<0$. However, $\\Im Q(z)$ is a continuous function\non $\\dC^+$, so that $\\Im Q(z)<0$ on $\\dC^+$. Observe that the functions $U(z)=-Q(z)$ and $V(z)=M(z)$ are\nnontrivial Nevanlinna functions which satisfy $\\eqref{q1q0}$. Therefore, by Lemma \\ref{kuhkuh},\n\\[\n Q(z)=-U(z)=-c\\, \\frac{z-\\alpha}{z-\\beta},\n\\]\nwhich is of the form \\eqref{R0}, since $-c(\\alpha-\\beta)=c(\\beta-\\alpha) < 0$ by \\eqref{q1q0rep}. This completes the proof of (i). 
\n\n(ii) \\& (iii) The remaining parts of the theorem can be proved in a similar way, using the\nrespective parts of Lemma \\ref{kuhkuh}. \n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzlszz b/data_all_eng_slimpj/shuffled/split2/finalzzlszz new file mode 100644 index 0000000000000000000000000000000000000000..46574f633f24b148b66a9bbeb637a3f7296f4403 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzlszz @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\nIn 2015, 3477 deaths in car crashes in the U.S. were attributed to distracted driving \\cite{nhtsa}. The use of electronic devices, particularly cellphones, while driving is one of the most common causes. \nVehicle and cellphone manufacturers designed speech interfaces (such as Siri) that were supposed to reduce potential distraction by eliminating the need to look at a screen.\nHowever, studies show that hands-free voice technologies are still highly distracting \\cite{handsfree}. While legislation in many states bans the use of cellphones while driving, many individuals continue to speak and text while driving. Since this practice continues, another strategy must be adopted: warning the driver when a dangerous situation arises while she is distracted. \n\nAutomatic distraction detection can enable in-car systems or virtual personal assistants to choose the right time to warn the driver, give out safety information, or shut down an app in a dangerous situation.\nEarly attempts to do this had high false alarm rates \\cite{fpr}. False alarms cause drivers to ignore or disable the system. This lack of robustness is often linked to the paucity of information, that is, the use of only one or two modalities for detection, often just facial expressions. \nDetecting driver distraction is complex.\nA variety of modalities come into play, all of which should be used to detect distraction. 
In addition to facial expression, there is the driver's speech: talking to a passenger, instructing an intelligent agent, or talking on the phone. Information coming from the vehicle itself (CAN bus), such as knowing when the driver is braking, is also important. Interaction across modalities matters as well. For example, a system based on eye-gaze alone fails if the driver is wearing sunglasses.\nRecent advances in multimodal deep learning afford better performance thanks to both improved facial feature detection and the ability to learn a joint representation from multiple modalities via multimodal fusion \cite{mmml}. There are two advantages of multimodal deep learning over traditional unimodal approaches: 1) better accuracy, and 2) more robust detection. \n\nWe have developed a novel deep multimodal polynomial fusion (MPF) architecture to robustly detect distraction. Specifically, we use a polynomial function to map features from different modalities to a weighted sum of the intermodal product interactions, which serves as the fused representation for distraction detection. We also introduce a new training and assessment dataset. \n\n\nThe contribution of this paper is three-fold:\n\begin{itemize}\n\item A database of distracted driving behavior containing distraction events.\n\item Empirical evidence that incorporating multiple modalities improves distraction detection.\n\item A simple and effective multimodal fusion technique that outperforms baseline models.\n\end{itemize}\n\n\n\section{Related Work}\nThis section describes existing distraction detection approaches for each modality, and then describes previous work which explored multiple modalities. 
We also present related datasets.\n\n\\vskip0.5\\normalbaselineskip \\noindent \\textbf{Visual - Facial expression}\nPrevious work focused on facial cues such as facial landmarks, head pose turns, glances, eye-gaze tracking, and facial action units \\cite{yulan-svm, glance_lex, fernandez2016driver, dl-face, au, kang2013various}.\nWhile good detection accuracy was achieved without the use of other modalities, these approaches lack robustness in cases where either some part of the face is obscured or the lighting changes dramatically \\cite{fernandez2016driver}. Data from additional modalities can compensate for the missing information.\n\n\n\\vskip0.5\\normalbaselineskip \\noindent \\textbf{Visual - Road conditions}\nResearchers used a forward-facing camera and computer vision algorithms \\cite{lane, road, cvpr-eye-road, cmu-eye-road}. \nScene understanding is used along with information from a backward-facing camera (driver's glances) to categorize driving behavior \\cite{cvpr-eye-road,cmu-eye-road}. This bi-modal approach can detect what the driver is attending to on the road ahead \\cite{utd2}. \nLane position changes can be captured by the forward-facing camera \\cite{lane,cmu-eye-road}.\n\n\\vskip0.5\\normalbaselineskip \\noindent \\textbf{Acoustics - Speech}\nThe driver's speech has been used by \\cite{utd, speech, Craye2016}. There are voice interfaces installed in the vehicle, such as a spoken dialog system or personal assistant \\cite{utd, utd2}. The driver's speech is analyzed to derive features such as voice activity detection and strings of words via automatic speech recognition \\cite{Craye2016}.\n\n\\vskip0.5\\normalbaselineskip \\noindent \\textbf{Driving measures}\nVehicle control signals also encode changes in driving performance that reflect distraction \\cite{Craye2016}. They serve as complementary information to other modalities. 
CAN-Bus information that has been used includes: speed, steering wheel position, gas pedal usage, and brake pedal usage \cite{utd, utd2, kang2013various, lane}. \n\n\vskip0.5\normalbaselineskip \noindent \textbf{Combinations of modalities} There have been attempts to use multimodal fusion for distraction detection \cite{utd, utd2, Craye2016, cvpr-eye-road, multi-li}. These used early fusion techniques that concatenate multimodal features into a single feature vector.\nFor example, \cite{multi-li} uses modalities similar to those used in this paper. They used data from front and rear-facing cameras to extract road and facial features, audio from a microphone to extract the energy of speech, and CAN-bus information. They performed early feature fusion, trained machine learning models for binary classification of distraction, and showed promising results. This did not, however, model the intermodal interaction amongst features \cite{tfn}. We compare our MPF model to their best binary classification model, SVM, below.\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Other information}\nA few studies have used body position from a Kinect camera \cite{kinect}. This indicates whether both of the driver's hands are on the steering wheel.\nPhysiological signals such as an electroencephalogram (EEG) have also been studied \cite{kang2013various}. \nBoth approaches require additional equipment not commonly found on vehicles (often due to cost). Due to the limited amount of existing data and the fact that these signals are noisy, they are less interesting for distraction detection.\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Datasets}\nOne of the most well-studied distraction detection datasets is UTDrive \cite{utd}. Collected in the 2000s, it is naturalistic and multimodal, and has a speech interface. \nBesides the limited sensor capabilities existing at the time the data was recorded, the dataset does not have: 1) extensive dialog interaction (i.e. 
phone usage), and 2) sufficient amounts of data. \nTaamneh et al. \cite{newest-data} released a multimodal distracted driving dataset which is large enough for our needs, but does not include recorded speech. To investigate distraction detection using multimodal deep learning, we need a dataset that has speech and more instances of distraction. \n\n\n\n\section{Multimodal Distraction Dataset}\n\nWe designed a dataset that can capture more instances and nuances of distraction (we have $\sim147k$ datapoints at frame-level in the training set and $\sim25\%$ of them correspond to places where the driver was distracted). Specifically, we wanted to create distracting instances that afford different degrees of cognitive load, due to the road conditions, the type of message to be dealt with and the coincidence of the two. We also wanted to represent several sources of messages: texts, phone calls and emails. We recorded as many different modalities as possible (forward camera, backward camera, microphone, car information). There were 30 subjects, 7 female and 23 male, each driving for about 15 minutes (minimum 9 minutes, maximum 21.36 minutes). \n\nWhile driving, each subject had three types of interaction at different levels of cognitive load with the message agent on an Android phone. For example, low cognitive load combined a low-load message (Mom emailing to ask if the driver was feeling ok today) with driving on a straightaway. High load combined a demanding message (a friend asking the driver to list five things she wants for her upcoming birthday) with entering a hairpin turn. Messages were sent as text messages, phone calls, and email. The four modalities we captured were time synchronized. 
After driving, the subject was asked to watch the recording and annotate stretches of time (start and end point) when they felt they had been distracted.\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Data collection system architecture}\nThe simulated driving route was created using the OpenDS driving simulator \cite{opends}. Traffic signs, a traffic light, hairpin turns, and odd objects along the side of the road created situations that demanded attention. The simulator recorded and synchronized the driving data.\nA spoken dialog system was connected to a personal assistant that interacted with the driver to produce the predefined messages.\nThe MultiSense recorder\footnote{http:\/\/multicomp.cs.cmu.edu\/} recorded and synchronized facial videos and speech from a backward-facing camera.\nOpen Broadcaster screen capture software\footnote{https:\/\/obsproject.com\/} served as the forward-facing camera, capturing what the subject saw on the screen while driving.\nA wizard interface controlled all of the submodules and initiated tasks in the dialog system.\nEach type of signal was synchronized with timestamps and saved to the database. 
\n\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=1.0\linewidth]{model.pdf}\n \caption{Proposed Multimodal Polynomial Fusion model}\n \label{fig:model}\n\end{figure}\n\nThe multimodal distraction detection dataset will be made publicly available after Interspeech publication of this paper.\n\n\section{Multimodal Polynomial Fusion}\nIn this section, we present multimodal polynomial fusion (MPF) for learning fused representations for distraction detection.\n\nGiven feature vectors $x_{F}$, $x_{S}$, $x_{C}$ at each time frame (10Hz) from face, speech, and car modalities respectively, we want to learn a shared hidden representation $h_{fusion}$ that captures the interaction amongst these modalities to detect driver distraction at that time frame.\n\nThe cube activation function $f_{cube}(z) = z^3$ \cite{chen}, where $z = h_1 + h_2 + ... + h_n + \beta_0$, is a simple way to learn such a shared representation from multiple feature vectors ($h_1, h_2, ..., h_n$, along with a bias term $\beta_0$) of the same dimension. \nIt is a special configuration of Polynomial Networks \cite{poly1, poly2, poly3,qizhe1,qizhe2}.\nChen et al. \cite{chen} show that the product interactions of features captured by cube activation empirically lead to a better representation for dependency parsing. \nIntuitively, the cube activation function resembles a polynomial kernel that extracts 3-combinations with repetitions from $h_{1}$, $h_{2}$, ..., $h_{n}$, and $\beta_0$:\n\begin{equation}\n \begin{split}\n (h_{1} + h_{2} + ... + h_{n} + \beta_0)^3 = \sum_{i, j, k \in \{1, ..., n\}} h_{i} \odot h_{j} \odot h_{k} \\ \n + 3 \beta_0 \odot \sum_{i, j \in \{1, ..., n\}} h_{i} \odot h_{j} + 3 \beta_0^2 \odot \sum_{i \in \{1, ..., n\}} h_{i} + \beta_0^3\\\n \end{split}\n\end{equation}\nwhere $\odot$ is the Hadamard product and the factors of $3$ are the binomial coefficients of the cube.\n\nThe multimodal polynomial fusion (MPF) layer is inspired by the cube activation function. 
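As a quick sanity check, the cube expansion can be verified numerically with toy vectors; note that the identity requires binomial coefficients of 3 on the mixed $\beta_0$ terms. The sketch below uses NumPy with arbitrary random vectors (the sizes are illustrative, not values from the paper):

```python
import numpy as np

# Numerical check of the cube-activation expansion with n = 3 feature
# vectors and a bias term; all products are elementwise (Hadamard).
rng = np.random.default_rng(0)
n, dim = 3, 4
h = rng.normal(size=(n, dim))        # stand-ins for h_1, ..., h_n
b0 = rng.normal(size=dim)            # bias term beta_0

s = h.sum(axis=0)
lhs = (s + b0) ** 3                  # cube activation of the sum

# Right-hand side: S^3 + 3*b0*S^2 + 3*b0^2*S + b0^3, with the cubic and
# quadratic terms written out over all index combinations.
cubic = sum(h[i] * h[j] * h[k]
            for i in range(n) for j in range(n) for k in range(n))
quad = sum(h[i] * h[j] for i in range(n) for j in range(n))
rhs = cubic + 3 * b0 * quad + 3 * b0 ** 2 * s + b0 ** 3

assert np.allclose(lhs, rhs)
```

The check passes for any choice of vectors, since it is just the binomial expansion of $(S+\beta_0)^3$ applied elementwise.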
First, the feature vectors from each modality are linearly transformed to a common dimension $|h|$ so that an element-wise product (the Hadamard product) of features from different modalities can be performed:\n\begin{equation}\n \begin{split}\n h_{F} = W_{F} x_{F},\quad h_{S} = W_{S} x_{S},\quad h_{C} = W_{C} x_{C} \\\n \label{eq0}\n \end{split}\n\end{equation}\nwhere $W_{F} \in \mathbb{R}^{|h|\times |x_{F}|}$, $W_{S} \in \mathbb{R}^{|h|\times |x_{S}|}$, $W_{C} \in \mathbb{R}^{|h|\times |x_{C}|}$. Then $h_{F}$, $h_{S}$, and $h_{C}$ can be passed to a cube activation to model the intermodal interactions of the three: $h_{fusion} = (h_{F} + h_{S} + h_{C} + \beta_0)^3$.\n\nHowever, although it is computationally efficient, the cube activation captures redundant combinations in which a single modality appears more than once.\nFor example, $h_{F} \odot h_{F} \odot h_{C}$ is an interaction term captured by cube activation, but it is not a reasonable representation to include in multimodal fusion, since a modality need not be repeated in an intermodal interaction. Such redundancy could increase complexity and lead to inferior predictive results.\n\nTo alleviate this problem, the multimodal polynomial fusion (MPF) layer (Eq.~\ref{eq1}) is designed to model $h_{fusion}$ by summing up selected weighted intermodal interactions amongst the three feature modalities. Each interaction is derived by an element-wise product of features. The proposed neural architecture is shown in Figure~\ref{fig:model}. 
The multimodal fusion representation is calculated using the following polynomial function:\n\begin{equation}\n \begin{split}\n h_{MPF} = f_{MPF}(h_{F}, h_{S}, h_{C}) = (\alpha_0 \cdot h_{F} \odot h_{S} \odot h_{C} + \\\n \alpha_1 \cdot h_{F} \odot h_{S} + \alpha_2 \cdot h_{F} \odot h_{C} + \alpha_3 \cdot h_{S} \odot h_{C} \\\n + \alpha_4 \cdot h_{F} + \alpha_5 \cdot h_{S} + \alpha_6 \cdot h_{C} + \beta_0)\n \label{eq1}\n \end{split}\n\end{equation}\nwhere $\alpha_i \in \mathbb{R}$ are learnable parameters adjusting the weight of each term, $\beta_0 \in \mathbb{R}^{|h|}$ is the bias term, and $\odot$ is the element-wise multiplication of vectors. Thus, $h_{F} \odot h_{S} \odot h_{C}$ models the trimodal interaction; $h_{F} \odot h_{C}$, $h_{F} \odot h_{S}$, and $h_{S} \odot h_{C}$ model the bimodal interactions; $h_{F}$, $h_{S}$, and $h_{C}$ are the unimodal features. Essentially, $h_{MPF}$ is a weighted sum of the product interactions of the feature vectors from the three modalities.\n\n\nThe advantage of the polynomial fusion layer is that it explicitly specifies the desired combinations of modalities to model interaction and also learns the weights for all intermodal dynamics. The polynomial fusion layer can easily be extended to accommodate more modalities by adding more terms to the polynomial.\n\nWe then feed the fused hidden representation $h_{MPF}$ to a tanh activation: \n\begin{equation}\n h_{fusion} = \tanh(h_{MPF})\n \label{eq2}\n\end{equation}\nThe $\tanh$ activation function is used because $h_{MPF}$ is unbounded and thus needs to be controlled by a bounded non-linear activation \cite{pei}. Empirically, we found that $\tanh$ stabilized network training and led to better results than unbounded activations such as ReLU.\n\nFinally, the hidden representation $h_{fusion}$ is fed to a two-layer feed-forward neural network with dropout and ReLU activations. 
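To make the layer concrete, here is a minimal NumPy sketch of the fusion computation (the per-modality projections, the weighted polynomial of Hadamard-product interaction terms, and the tanh squashing). The random weights and the speech input size are illustrative stand-ins, not values from the paper:

```python
import numpy as np

def mpf_fusion(x_f, x_s, x_c, W_f, W_s, W_c, alpha, beta0):
    """Sketch of the MPF layer: project each modality to a common
    dimension, take a weighted sum of the Hadamard-product interaction
    terms (trimodal, bimodal, unimodal), then squash with tanh."""
    h_f, h_s, h_c = W_f @ x_f, W_s @ x_s, W_c @ x_c   # per-modality projections
    h_mpf = (alpha[0] * h_f * h_s * h_c               # trimodal interaction
             + alpha[1] * h_f * h_s                   # bimodal interactions
             + alpha[2] * h_f * h_c
             + alpha[3] * h_s * h_c
             + alpha[4] * h_f + alpha[5] * h_s + alpha[6] * h_c  # unimodal
             + beta0)
    return np.tanh(h_mpf)

rng = np.random.default_rng(1)
dim_h = 16                          # fused-representation size |h|
n_f, n_s, n_c = 534, 10, 4          # per-modality input sizes (illustrative)
x_f, x_s, x_c = (rng.normal(size=n) for n in (n_f, n_s, n_c))
W_f, W_s, W_c = (0.05 * rng.normal(size=(dim_h, n)) for n in (n_f, n_s, n_c))
h_fusion = mpf_fusion(x_f, x_s, x_c, W_f, W_s, W_c,
                      alpha=np.ones(7), beta0=np.zeros(dim_h))
assert h_fusion.shape == (dim_h,)
assert np.all(np.abs(h_fusion) <= 1.0)
```

In training, the projection matrices, the scalar weights $\alpha_i$, and the bias would all be learned jointly with the downstream classifier; the sketch only illustrates the forward pass.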
The complete model is shown in Figure~\ref{fig:model}.\n\n\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=\linewidth]{eg.pdf}\n \caption{A case-in-point visualization of distraction for three modalities: Face, Speech, and Car. Each feature modality is reduced and normalized to a one-dimensional space and projected onto a continuous time axis. The grey area denotes the time period when the driver said she was distracted.}\n \label{fig:eg}\n\end{figure}\n\n\section{Experiments}\nWe now describe the multimodal features, baseline models, experimental methodology, and results.\n\n\subsection{Multimodal Features}\nThis paper focuses on three modalities: facial expression\/movement, speech, and car information. Feature sets were extracted from the backward facing camera video and the speech signal. \n\n\n\n\n\n\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Facial features} The OpenFace \cite{openface} toolkit is a state-of-the-art tool used for facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation. The facial features we use include:\na) \textit{Facial landmarks (FL):} 68 points on the face (204 values).\nb) \textit{Gaze vectors:} 3D vector and gaze angle for each eye (8 values), and 28 2D and 3D eye region landmarks for each eye (280 values).\nc) \textit{18 Action Units (AU):} regression and binary outputs of all the available AUs in \cite{openface} (36 values).\nd) \textit{Head pose:} 3D translation and 3D rotation of head pose (6 values).\nFacial features are extracted at a frame rate of 30Hz.\n\n\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Speech features} The OpenSMILE \cite{opensmile} toolkit is an audio feature extractor that extracts a knowledge-based feature set. 
The speech-related features include:\na) \textit{Prosody:} Pitch and loudness.\nb) \textit{Voice-Quality (VQ):} jitter, shimmer, and creaky voice.\nc) \textit{Frame Energy}.\nd) \textit{Voice Activity Detection (VAD)}.\ne) \textit{F0 fundamental frequency}.\nf) \textit{Syllables per second (SPS)}.\nThe speech-related features are extracted with a moving window of 300ms and a shift of 100ms.\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Car driving measures} The features from the car logged by the driving simulator are:\na) \textit{Speed of the vehicle:} a real number (in km\/h).\nb) \textit{Steering wheel position:} a continuous number from -1 to 1.\nc) \textit{Gas pedal position:} a continuous number from 0 to 1.\nd) \textit{Brake pedal position:} a continuous number from 0 to 1.\n\nAll of the above features were synchronized to the frames of the audio features (10Hz). Distraction classification is performed here on a frame-wise basis.\n\n\vskip0.5\normalbaselineskip \noindent \textbf{Qualitative feature analysis}\nFeature value variation by dimensionality reduction for each of the three modalities over time is shown for one example in Figure~\ref{fig:eg}. The grey area denotes the time period when the driver said she was distracted. Figure~\ref{fig:eg} shows that when the driver screamed, there was a peak in the speech features as well as a corresponding peak in the facial features, due to mouth opening. Similarly, when the driver turned her head to the side toward the phone, the facial features show a negative peak. There was also a significant change in the car information at the moment when the driver realized that she had gone off the road and tried to steer back onto it. This qualitative analysis shows that the feature peaks may correlate with and complement one another in indicating driver distraction (the grey area). Thus, distraction detection may benefit from modeling the intermodal interaction of the modalities. 
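The streams above run at different rates (30Hz facial features, simulator logs, 100ms-shift audio frames), so they must be brought onto the common 10Hz frame grid before frame-wise classification. One simple way to do this, shown below as an illustrative NumPy sketch rather than the paper's exact procedure, is nearest-timestamp alignment of the faster stream to the target frames:

```python
import numpy as np

def align_to_frames(ts_target, ts_src, feats_src):
    """For each target timestamp, pick the source sample whose timestamp
    is closest (nearest-neighbour alignment of a faster stream)."""
    idx = np.searchsorted(ts_src, ts_target)
    idx = np.clip(idx, 1, len(ts_src) - 1)
    left_closer = (ts_target - ts_src[idx - 1]) < (ts_src[idx] - ts_target)
    idx = np.where(left_closer, idx - 1, idx)
    return feats_src[idx]

# Resample toy 30Hz "facial" features onto a 10Hz audio frame grid.
ts_audio = np.arange(0.0, 2.0, 0.1)              # 10Hz target frame times (s)
ts_face = np.arange(0.0, 2.0, 1.0 / 30.0)        # 30Hz source timestamps (s)
face = np.column_stack([ts_face, ts_face ** 2])  # 2-D placeholder features
aligned = align_to_frames(ts_audio, ts_face, face)

assert aligned.shape == (len(ts_audio), 2)
# Every aligned sample is within one 30Hz period of its target frame.
assert np.all(np.abs(aligned[:, 0] - ts_audio) <= 1.0 / 30.0)
```

The same routine applies to the simulator log, whose sampling need not be uniform; only sorted timestamps are required.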
\n\n\n \n\n\n\n\n\n\subsection{Baseline Models}\nWe compare our model (\textit{MPF}) to five baseline models and two \textit{MPF} variants: \textit{Majority} is the trivial baseline predicting the majority label; \textit{SVM} is the Support Vector Machine using early fusion multimodal features in \cite{multi-li} which achieves the best performance on binary classification of distraction; \textit{NN-Early} is a two-layer feed forward neural network that takes the concatenation of features from three modalities; \textit{NN-Cube} is the cube activation function for fusing features from three modalities described in \cite{chen}; \textit{NN-TC} is the tanh-cube activation function for fusing features from three modalities described in \cite{pei}; \textit{MPF-1} and \textit{MPF-2}, which are variants of our full \textit{MPF} model, use one modality and two modalities as input respectively. To ensure fair comparisons of neural network models, all of the fused representations have the same size (except for early fusion, which has a larger size due to concatenation) and are fed to a two-layer feed forward neural network with the same number of parameters, using the train\/dev\/test partition mentioned below.\n\n\n \n\n\n\begin{table}[th]\n \caption{Distraction detection results on the multimodal distraction dataset test portion. Our model outperforms the baseline models for unweighted accuracy (Acc), Area Under Curve (AUC), Equal Error Rate (EER), and F-1 score. (For EER only, the lower the score the better the performance.)} \n \label{tab:ret}\n \centering\n \n\begin{tabular}{llllll} \n\toprule\nModel & Modal. & Acc. 
& AUC & EER & F-1\\\\\n\\midrule\nMajority & -- & 0.7753 & 0.5000 & 0.5000 & -- \\\\\n\\midrule\n & F & 0.7749 & 0.6752 & 0.3714 & 0.4965 \\\\\nMPF-1 & S & 0.7491 & 0.5271 & 0.4850 & 0.1816 \\\\\n & C & 0.7724 & 0.5141 & 0.4928 & 0.0813 \\\\\n\\midrule\n & F + S & 0.7976 & 0.6960 & 0.3568 & 0.5318 \\\\\nMPF-2 & F + C & 0.7932 & 0.6935 & 0.3579 & 0.5269 \\\\\n & S + C & 0.7633 & 0.5386 & 0.4787 & 0.1987 \\\\\n\\midrule\nSVM & All & 0.7542 & 0.6637 & 0.3768 & 0.4772 \\\\\nNN-Early & All & 0.8046 & 0.6867 & 0.3693 & 0.5208 \\\\\nNN-Cube & All & 0.8023 & 0.7048 & 0.3488 & 0.5453 \\\\\nNN-TC & All & 0.8015 & 0.6931 & 0.3615 & 0.5290 \\\\\n\\midrule\nMPF & All & \\textbf{0.8139} & \\textbf{0.7152} & \\textbf{0.3416} & \\textbf{0.5641} \\\\\n\\bottomrule\n\\end{tabular}\n\n\\end{table}\n\n\n\n\n\n\\subsection{Methodology}\nThe 30 subjects were randomly separated into 20\/5\/5 train\/dev\/test sets. Each subject is in only one partition so that models can generalize to new drivers. For each subject, features are scaled to zero mean and unit variance. The binary classification of distraction is performed at frame-level, so the train\/dev\/test sets have 147k\/36k\/37k datapoints. Since the data is imbalanced, the performance of the models is evaluated by Area Under ROC Curve (AUC), Equal Error Rate (EER), and F-1 score. We chose the hyper-parameters of each model based on its development set performance. Neural network models were trained using the Adam optimizer \\cite{adam} with a step learning rate scheduler, regularized by dropouts \\cite{drop}. The size of the fused representation $|h|$ is 16 and the size of hidden layers is 8. \n\n\\subsection{Results and Discussion}\nIn Table~\\ref{tab:ret}, we report the experimental results of the baseline models and our \\textit{MPF} model with accuracy, AUC, EER, and F-1 score. For \\textit{MPF-1} and \\textit{MPF-2}, we also show the choice of feature combinations that was used. 
Other baseline models besides \textit{Majority} use all three modalities. \n\nTable~\ref{tab:ret} shows that the full \textit{MPF} model performs better than its reduced variants (\textit{MPF-1} and \textit{MPF-2}) for every combination of modalities. It also shows that even if some unimodal features have poor performance, all modalities contribute to a certain extent to the results. That is, accuracy increases monotonically as modalities are added to MPF, even when those modalities do not individually perform well. This emphasizes the importance of intermodal interaction in the multimodal fusion representation.\n\nResults show that MPF performs better than the baseline models (all using three modalities): the \textit{MPF} model achieves an AUC of 0.7152, an EER of 0.3416, and an F-1 score of 0.5641 on the test set, while the best baseline, \textit{NN-Cube}, achieves 0.7048, 0.3488, and 0.5453 respectively.\n\n\begin{figure}[t]\n \centering\n \includegraphics[width=0.6\linewidth]{roc.pdf}\n \caption{ROC curve of MPF (all three modalities), MPF-2 (facial and car modalities), and MPF-1 (facial modality).}\n \label{fig:roc}\n\end{figure}\n\nWe also show the ROC curves of selected models at various detection thresholds in Figure~\ref{fig:roc}. We see that the model using all three modalities (blue curve) has the largest ROC AUC, and that performance increased when the speech modality was added. By adjusting the detection threshold, \textit{MPF} achieves the lowest false positive rate while preserving good detection accuracy.\n\n\n\section{Conclusion}\nThe results confirm that combining signals from multiple modalities through MPF affords better prediction performance for distraction detection due to its ability to model unimodal, bimodal, and trimodal interactions.\nIn future work we plan to add a fourth modality from the forward-facing camera, which records road conditions, to further boost performance.\n\n\section{Acknowledgements}\nWe thank the anonymous reviewers for their valuable comments. 
We thank Zhenqiang Xu and Qizhe Xie for suggestions on the draft.\nThis work has been sponsored by the U.S. Department of Transportation grant (Carnegie Mellon University UTC T-SET). The opinions expressed in this paper do not necessarily reflect those of the U.S. Department of Transportation.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nItinerant van der Waals (vdW) magnets provide promising platforms to study the complex relationships between emergent magnetic phenomena and crystal chemistry. Studies of magnetism in layered vdW materials probe the nature of magnetic order and interactions in the bulk and 2D limit, how these fundamental properties can be tuned, and the various types of device-related responses that may emerge in the pure system or via heterostructure design.\\cite{Burch2018,Wang2020rev,Huang2020rev} Of the pertinent vdW materials, several metallic Fe-Ge-Te phases possess some of the highest magnetic ordering temperatures.\\cite{Deiseroth2006,Stahl2018,May2019acs,Jothi2019} Ferromagnetism in monolayer Fe$_{3-x}$GeTe$_2$ has been demonstrated, and of particular interest is the increase in the Curie temperature $T_{\\rm C}$ from $\\approx$100\\,K to over 300\\,K caused by electrochemical gating of few-layer flakes, perhaps due to the intercalation of lithium.\\cite{Fei2018,Deng2018} Importantly, $T_{\\rm C}$ of bulk Fe$_{5-x}$GeTe$_2$ ranges from 270-310\\,K and can be further enhanced by cobalt or nickel substitution.\\cite{Stahl2018,May2019acs,May2019,Tian2020,May2020} Fe$_{5-x}$GeTe$_2$ and related compositions are thus prime candidates for incorporation into synthetic vdW heterostructures, where the properties can be tuned by the local composition. For instance, the stabilization of longer-range emergent phenomena such as topological spin textures (e.g. 
skyrmions) is being heavily pursued in Fe$_{3-x}$GeTe$_2$ and related vdW heterostructures.\\cite{Ding2020,wang2020characteristics,wu2020neel,yang2020creation,Park2021skrymion} In general, Fe$_{5-x}$GeTe$_2$ and related Fe$_{4}$GeTe$_2$ have garnered significant attention recently due to their high ordering temperature and complex behaviors.\\cite{Wu2021,Yang2021,Yamagami2021,Li2020magnetic,Tan2021gate,Ly2021direct,Ohta2021butterfly,Zhang2020,Seo2020} The present work was motivated by identifying further routes to tune the magnetism in Fe$_{5-x}$GeTe$_2$ and related bulk phases so that novel properties and logical designs of heterostructures can be achieved.\n\nFe$_{5-x}$GeTe$_2$ contains significant disorder associated with a large concentration of vacancies on one of three Fe sublattices (see Fig.\\,\\ref{XRD}a).\\cite{May2019acs} A large amount of stacking disorder also exists within the crystals.\\cite{Stahl2018,May2019} Control over Fe content has not been demonstrated, but the magnetic properties of bulk and thin film Fe$_{5-x}$GeTe$_2$ depend on thermal processing and differences between polycrystalline and single crystalline samples have been observed.\\cite{Stahl2018,May2019acs,May2019,Ohta2020} M\\\"{o}ssbauer spectroscopy has evidenced that spin fluctuations on the (atomically disordered) Fe1 sublattice persist to $\\approx$100\\,K despite magnetic ordering on the other sublattices near 300\\,K.\\cite{May2019acs} Recently, evidence linking magnetic order on the Fe1 sublattice with a competing charge order state has also been discussed,\\cite{Wu2021} and inversion breaking associated with the atomic (vacancy) ordering on the Fe1(Ge) sublattice has been proposed as a source for helimagnetic order.\\cite{Ly2021direct} The ordering of moments on the Fe1 sublattice impacts the electrical properties, the lattice parameters, and the magnetic anisotropy,\\cite{May2019acs,May2019} and therefore controlling magnetism on the Fe1 sublattice is essential for tuning 
the properties of Fe$_{4.8}$GeTe$_2$. For instance, stronger inter-sublattice coupling may result in stronger magnetism and this could explain why Ni or Co substitution in Fe$_{5-x}$GeTe$_2$ increases $T_{\\rm C}$.\\cite{Stahl2018,May2020} By contrast, Ni or Co substitution into Fe$_{3-x}$GeTe$_2$ suppresses $T_{\\rm C}$,\\cite{Drachuck2018,Tian2019} as does decreasing the Fe content or substituting As for Ge.\\cite{Verchenko2015,May2016,Yuan2017} \n\nDue to the unique response of Fe$_{5-x}$GeTe$_2$ to transition metal substitutions, we were motivated to investigate the impact of As substitution for Ge in Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$. Investigation of polycrystalline Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ samples reveal an enhancement in $T_{\\rm C}$ for low As contents ($y$=0.025 and 0.05), but a clear decrease in $T_{\\rm C}$ and the saturation moment were observed for 0.25 $<$ $y$ $<$ 0.75. These results suggest that fine tuning of the crystal chemistry in Fe$_{5-x}$GeTe$_2$ is a viable means of controlling its room temperature magnetic properties. We also report the crystal structure and physical properties of Fe$_{5-x}$AsTe$_2$ ($x \\approx$ 0.2), which contains significant disorder. Magnetization measurements support a canted antiferromagnetic ground state in Fe$_{4.8}$AsTe$_2$, and butterfly-shaped magnetoresistance loops are found to be driven by a hysteretic meta-magnetic transition.\n\n\n\n\\section{Results and Discussion}\n\n\\subsection{Structural Characterization}\n\n\n\nWe utilized polycrystalline samples of Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ to examine how the lattice and magnetism evolves with arsenic substitution, though we note that structural complexities of the Fe$_{5-x}$GeTe$_2$ system could drive subtle differences in the magnetic behavior of polycrystalline versus single crystalline samples. The samples were quenched from 750$^{\\circ}$ C to promote chemical homogeneity, and x-ray powder diffraction data were collected at ambient conditions. 
A Le Bail fitting procedure was used to extract the lattice parameters because significant anisotropic peak broadening attributed to stacking disorder (intrinsic or induced by sample preparation) hinders Rietveld refinement of the diffraction data; the rhombohedral Fe$_{5-x}$GeTe$_2$ lattice symmetry was utilized. The powder diffraction patterns are shown in the Supporting Materials.\\cite{Supporting} The presence of As was confirmed by energy dispersive spectroscopy (Bruker TM3000 with Quantax EDS at 15 keV) performed for the polycrystalline Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ samples. The As\/Ge L-series peak overlaps were found to complicate the accurate measurements of the Ge\/As ratio, especially at low concentrations. The measurements were found to overestimate the As content, as demonstrated by measurements on an As-free crystal that indicated an artificial As content up to $\\approx$ 5\\% As (relative to Ge). Measurements on samples with nominal As contents of 25, 50, and 75\\% returned 29(1), 53(1), and 77(1)\\% relative to Ge, showing that the actual and nominal concentrations are similar. The nominal value of $y$ is used throughout the text to establish the qualitative trends with arsenic substitution. \n\n\\begin{figure}[ht!]%\n\\includegraphics[width=1.05\\columnwidth]{StructureAndDiffraction4.pdf}%\n\\caption{(a) Crystal structure of Fe$_{5}$(Ge,As)Te$_2$ with partially transparent Fe1 and As positions indicating split-sites with up to 50\\% occupancy. (b,c) Lattice parameters obtained by fitting room temperature x-ray diffraction data for quenched polycrystalline samples and (d) the ratio of lattice parameters.}%\n\\label{XRD}\n\\end{figure}\n\nAs shown in Fig. \\ref{XRD}(b-d), the substitution of As for Ge leads to a contraction of the unit cell within the $\\textit{ab}$-plane and an expansion along the $c$-axis. This results in an increase in the ratio $c\/a$, potentially implying an increase in the 2D character with increasing arsenic content. 
Similar lattice trends were observed when As was substituted for Ge in Fe$_{3-x}$GeTe$_2$.\\cite{Yuan2017} In Fe$_{5-x}$GeTe$_2$, the $h0l$ diffraction peaks are significantly broadened when samples are cooled slowly because there is a structural transition near $\\approx$570\\,K that induces stacking disorder upon cooling; however, quenching results in a metastable state where sharp $h0l$ reflections are maintained.\\cite{May2019} In the mixed Ge-As samples, broadening of diffraction peaks due to stacking faults was found to decrease with increasing As content from 2.5 to 75\\% arsenic relative to Ge, though some broadening is observed despite quenching (see Supporting Materials). The trends observed in Fig.\\,\\ref{XRD}(b,c,d) are rather robust despite peak broadening because the 00$l$ and $hhl$ peaks are typically not broadened by the stacking disorder.\\cite{May2020practical} \n\n\n\n\nSingle crystals of Fe$_{4.8}$AsTe$_2$ were grown in the presence of iodine as discussed in the Methods section. Firstly, we note that the crystals are plate-like in nature and behave similar to Fe$_{5-x}$GeTe$_2$ during simple cleaving tests using adhesive tapes. Since exfoliation of Fe$_{5-x}$GeTe$_2$ to near monolayer limit has been demonstrated,\\cite{May2019acs,Ohta2020,Tan2021gate} it seems likely that similar exfoliation of the arsenide or arsenic-containing crystals would be possible. Dedicated efforts are necessary to examine this characteristic in detail, and such endeavors should probably treat Fe$_{5-x}$AsTe$_2$ flakes as air sensitive. To address the composition of the crystals, wavelength dispersive spectroscopy was performed in a JEOL electron microprobe on the as-grown facets of slow cooled crystals, and this chemical analysis technique yielded an average composition of Fe$_{4.77(7)}$As$_{0.97(2)}$Te$_{2.00(5)}$. 
We thus refer to the crystals using the composition Fe$_{4.8}$AsTe$_2$ for simplicity.\n\nSamples of the arsenic end-member Fe$_{4.8}$AsTe$_2$ generally possess a large degree of stacking disorder as well as a secondary phase. For slow cooled crystals, the primary phase has $c$=29.51(2)\\AA\\, and the secondary phase has a small cell with $c$=28.67(1)\\AA\\, as obtained by fitting the 00$l$ Bragg reflections from a diffraction measurement off a crystal facet. These values are notably different from those in the polycrystalline As-Ge alloys. While the larger $c$-axis parameter of the main phase could somehow relate to the stacking disorder, the significantly reduced $c$-axis parameter of the secondary phase likely has a chemical or structural origin. The extent to which the smaller-cell phase is present appears to depend on fine details of the synthesis and some related data are shown in the Supporting Materials; further investigations into the phase stability and local homogeneity of Fe$_{5-x}$AsTe$_2$ are necessary. As discussed below, the act of cooling slowly from the growth temperature promotes long-range magnetic order in Fe$_{4.8}$AsTe$_2$ crystals whereas thermal quenching appears to induce glassy magnetic behavior. As such, this article focuses on Fe$_{4.8}$AsTe$_2$ crystals that are cooled in the furnace from the growth temperature over 8-12\\,h. \n\nSingle crystal x-ray diffraction data reveal that the slow-cooled Fe$_{4.8}$AsTe$_2$ crystals contain significant stacking disorder or local variations of $c$, which precludes structural determination from diffraction data. Less stacking disorder was observed in a quenched crystal for which the single crystal x-ray diffraction data were able to be refined using the Fe$_{5-x}$GeTe$_2$ crystal structure with $a$ = 4.0088(6)\\AA\\, and $c$ = 29.279(6)\\AA\\, at $T$ = 220\\,K (see Supporting Materials for a comparison of the data). 
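The $c$-axis values quoted above were obtained by fitting the positions of 00$l$ Bragg reflections, which amounts to inverting Bragg's law with $d(00l) = c/l$. A minimal sketch of this conversion, assuming the Cu K$\alpha_1$ wavelength quoted in the Methods (the peak positions below are illustrative round-trip values, not measured data):

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha1 in Angstrom, as quoted in the Methods

def c_from_00l(two_theta_deg, l):
    """Estimate the c lattice parameter from the position of a 00l Bragg peak.

    Bragg's law gives d = lambda / (2 sin(theta)); for a 00l reflection of a
    hexagonal/rhombohedral cell, d(00l) = c / l.
    """
    d = WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))
    return l * d

def two_theta_00l(c, l):
    """Inverse: the 2-theta position (degrees) of the 00l peak for a given c."""
    return 2.0 * math.degrees(math.asin(l * WAVELENGTH / (2.0 * c)))

# Illustrative round trip with c close to the reported main-phase value
tt = two_theta_00l(29.51, 3)          # position of the 003 reflection
print(round(c_from_00l(tt, 3), 2))    # recovers c = 29.51
```

In practice several 00$l$ orders are fit simultaneously, which averages out zero-point and sample-displacement errors in the individual peak positions.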
These results suggest that Fe$_{5-x}$AsTe$_2$ and Fe$_{5-x}$GeTe$_2$ have similar structural units as shown in Fig.\\,\\ref{XRD}(a). Fe$_{5-x}$AsTe$_2$ has a complex temperature-dependent phase evolution that promotes phase separation via changes in layer stacking, composition, and\/or site occupancy; further work is needed to understand the phase stability. Importantly, a $\\sqrt{3}a \\times \\sqrt{3}a$ supercell that was observed in Fe$_{5-x}$GeTe$_2$ was also observed for Fe$_{4.8}$AsTe$_2$ by single crystal x-ray diffraction (both quenched and furnace-cooled crystals). This further supports the structural similarity between the two materials because this in-plane supercell is related to occupancy on the Fe1a,b sublattice (a unique structural feature). Short correlation lengths along [001] preclude structural solution from x-ray diffraction data and cause a streaking of diffraction intensity (see Supporting Materials).\n\n\n\\subsection{Physical Properties}\n\nWe first discuss how arsenic substitution for germanium changes the properties in polycrystalline Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ and then consider properties of single crystal Fe$_{4.8}$AsTe$_2$. Our primary interest is in the evolution of the magnetic properties, and magnetization ($M$) measurements are the key characterization technique utilized. Figure \\ref{MagMain}(a) contains the temperature-dependent $M$ data for polycrystalline Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ at various compositions and Fig.\\ref{MagMain}(b) plots the field-dependent $M$ data at $T$=2.5\\,K. The overall trend is for decreasing ordering temperature and induced (saturation) moment with increasing arsenic content, as summarized in Fig.\\ref{MagMain}(c). However, close inspection reveals a more complicated scenario with different behavior observed for small arsenic concentrations ($y$=0.025 and 0.05) and a comparison to the polycrystalline $y$=0 data is important for understanding these results. 
In the parent material Fe$_{5-x}$GeTe$_2$, the magnetic behavior of polycrystalline samples is slightly different than that in the single crystals, though the main features appear consistent between the two. However, the Curie temperature of powders appears to be slightly lower than that of crystals and the first-order magnetostructural transition is only observed in the quenched and metastable crystals.\\cite{Stahl2018,May2019acs,May2019} As such, it is important to compare properties of the arsenic containing powders to the polycrystalline samples of the parent material. We observed plate-like morphology of the crystallites formed during the annealing of polycrystalline samples, and thus it seems reasonable to speculate that single crystal growth of Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ is possible for at least certain compounds. However, detailed growth studies are necessary to evaluate how crystal growth impacts the chemistry and properties of such samples.\n\n\nThe $M$($T$) curve for Fe$_{5-x}$GeTe$_2$ ($y$=0) in Fig.\\,\\ref{MagMain}a is characterized by a strong rise in $M$($T$) upon cooling below $T_{\\rm C}$ $\\approx$ 258\\,K and there is a strong anomaly near 100\\,K. The signature in the magnetization near 100\\,K has been associated with the establishment of magnetic order on the Fe1 sublattice (observed in both powders and crystals).\\cite{May2019acs} The magnetism on the Fe1 sublattice impacts many properties and therefore controlling the Fe1 sublattice is a primary route for manipulating the magnetism in Fe$_{5-x}$GeTe$_2$. For instance, there is a coupled magnetoelastic response and ordering of the Fe1 moments impacts the electronic properties and magnetic anisotropy,\\cite{May2019acs,May2019} with easy-axis [001] anisotropy strengthening upon cooling below 100\\,K. 
Due to this evolution of the anisotropy, topological vortex features known as (anti)merons formed at domain walls become unstable at low $T$ in Fe$_{5-x}$GeTe$_2$.\cite{Gao2020} The existence of a competing charge order above 100\,K has also been discussed recently.\cite{Wu2021}\n\n\begin{figure*}[ht!]%\n\includegraphics[width=2.05\columnwidth]{Mag_3panel_Poly.pdf}%\n\caption{Magnetization data for polycrystalline Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ samples. (a) Temperature-dependent magnetization and (b) isothermal magnetization data. (c) Magnetic ordering temperatures obtained using magnetization data collected in a small applied field of $H$ = 100\,Oe as discussed in the Supporting Materials. The inset shows the saturation magnetization reached versus nominal arsenic content $y$ ($T$=2.5\,K, $H$=50\,kOe). Antiferromagnetic behavior is observed for the Fe$_{5}$AsTe$_2$ sample while the mixed As\/Ge alloy compositions display ferromagnetic character.}%\n\label{MagMain}\n\end{figure*}\n\n\n\n\begin{table}\n\caption{Density functional theory results of total ferromagnetic (FM) moments on different Fe sublattices for Fe$_5$AsTe$_2$ and Fe$_5$GeTe$_2$. Due to the simulated supercell with a checkerboard occupation of the Fe1 sublattice, atoms at different Fe2 and Fe3 positions have different local environments and moments.
The data for Fe$_5$AsTe$_2$ were obtained using crystallographic parameters from a quenched single crystal.}\n\label{DFT}\n\begin{tabular}{|c|c|c|c|c|}\n\hline\n & \multicolumn{2}{c|}{Fe$_5$AsTe$_2$} & \multicolumn{2}{c|}{Fe$_5$GeTe$_2$} \\ \hline\nFM moment & \multicolumn{2}{c|}{9.29\,$\mu_B$\/f.u.} & \multicolumn{2}{c|}{10.92\,$\mu_B$\/f.u.} \\ \hline\nsites & \multicolumn{4}{c|}{FM Moment ($\mu_B$\/Fe)} \\ \hline\nFe1 & 1.19 & & 1.98 & \\ \hline\nFe2 & 1.99 & 1.09 & 2.14 & 1.96 \\ \hline\nFe3 & 2.64 & 2.38 & 2.52 & 2.32 \\ \hline\n\hline\n\end{tabular}\n\end{table}\n\nThe magnetic anomaly associated with the formation of magnetic order on the Fe1 sublattice is notably absent in the magnetization data for the polycrystalline arsenic-containing samples. This is true for all arsenic-containing samples that were measured (including crystals of the pure arsenide end member). This suggests that arsenic substitution quenches spin fluctuations on the Fe1 sublattice and leads to an enhanced Curie temperature for powders with $y$=0.025 and 0.05. These results show that magnetism on the Fe1 sublattice is very sensitive to small changes in the Fermi level or chemistry in Fe$_{5-x}$GeTe$_2$. $M$($T$) data provide a quick screening for this behavior, though more local probes like M\\\"{o}ssbauer spectroscopy\cite{May2019acs} or other zero-field measurements are required to determine whether the Fe sublattices remain independent after As substitution for Ge. Changes to magnetic anisotropy may impact the magnetization measurements and thus hinder a direct conclusion of the underlying behavior, especially in applied fields.
Finally, as noted above, the behavior in single crystals may vary from that in polycrystalline samples due to the metastability and disorder in the Fe$_{5-x}$GeTe$_2$ family.\n\n\n\nIn addition to the change in behavior of $M$($T$) data, the isothermal magnetization $M$($H$) in Fig.\,\ref{MagMain}(b) reveals a decreased critical field for saturation in the $y$=0.025 sample in comparison to the $y$=0 sample. This `softening' of the magnetism implies a loss of magnetic anisotropy, which is also consistent with arsenic substitution impacting magnetism on the Fe1 sublattice. A change in the anisotropy was also observed in cobalt-substituted Fe$_{5-y}$Co$_y$GeTe$_2$, where crystals with $y\approx$1 had easy-plane anisotropy that was opposite to the easy-axis [001] anisotropy of the $y$=0 parent.\cite{Tian2020,May2020} The control over magnetic anisotropy, both sign and magnitude, is important because it has implications for the design of heterostructures where the anisotropy and magnetic properties are tuned locally, and because anisotropy is considered an important ingredient to form magnetic order in the 2D limit. Indeed, magnetic anisotropy is an important parameter for the stabilization of topological spin textures such as skyrmions.\n\nThe saturation moment is reduced by increased arsenic content, as illustrated in the inset of Fig.\ref{MagMain}(c). The rather continuous decrease in the saturation moment could be linked to the smooth evolution of the lattice parameters (Fig.\ref{XRD}), as was suggested for the behavior in Fe$_{3-y}$Ge$_{1-x}$As$_x$Te$_2$.\cite{Yuan2017} Our density functional theory (DFT) calculations do not predict a strong decrease in the net moment of idealized compositions Fe$_{5}$AsTe$_2$ relative to Fe$_{5}$GeTe$_2$. For instance, as summarized in Table \ref{DFT}, our DFT results predict an average ferromagnetic moment of 1.84\,$\mu_B$\/Fe in Fe$_{5}$AsTe$_2$ compared to 2.15\,$\mu_B$\/Fe in Fe$_{5}$GeTe$_2$.
The DFT results suggest that the Fe1 sublattice (and neighboring Fe2) are the most impacted by the presence of As. While DFT suggests a dominant FM ground state in atomically-ordered Fe$_5$AsTe$_2$, an AFM order with AFM coupling along [001] was found to be only $\approx$0.6\,meV\/Fe higher in energy for a different (zig-zag) occupancy pattern on the Fe1a,b sublattice. In Fe$_5$GeTe$_2$, the first competing magnetic order was calculated to be more than 2\,meV\/Fe above the ground state and primitive layer stacking was found to decrease the stability of FM relative to AFM.\cite{May2020} These trends may help explain the formation of a non-compensated antiferromagnetic structure in Fe$_{4.8}$AsTe$_2$ crystals where significant stacking disorder is evidenced by the diffraction data. Recently, experiments have demonstrated that gating ultra-thin Fe$_{5-x}$GeTe$_2$ seemingly produces an antiferromagnetic state,\cite{Tan2021gate} and it has been shown that cobalt substitution also induces AFM.\cite{Tian2020,May2020} Also, calculations have suggested that bilayer Fe$_{5}$GeTe$_2$ will be antiferromagnetic.\cite{Yang2021} The DFT calculations are performed at an idealized stoichiometry and for idealized structures with specific Fe1a,b occupancy configurations and without any stacking faults. The situation in the crystals is much more complex, and the stacking disorder and the random Fe1 distributions could induce local AFM coupling, which may also reduce the saturation moment. In the limit of strong AFM coupling, the field-induced state that looks like saturation may in fact be a ferrimagnetic spin configuration. Further exploration of the magnetic moment with a local probe or a high-field measurement is necessary to understand the discrepancy between the induced moment and the theoretical moment in Fe$_{4.8}$AsTe$_2$ samples.\n\n\nWe now discuss the magnetic behavior of Fe$_{4.8}$AsTe$_2$ in greater detail.
The $M$($T$) data for Fe$_{4.8}$AsTe$_2$ are qualitatively different than those of the ferromagnetic Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ samples with $y \\leq 0.75$. Indeed, the $M$($T$) data for Fe$_{4.8}$AsTe$_2$ have a cusp that is characteristic of antiferromagnetic ordering, as shown in Fig.\\ref{MagXtl}(a,b) and for the polycrystalline sample in Fig.\\ref{MagMain}(a). We define $T_{\\rm N}$\\, = 42\\,K based on ac magnetic susceptibility data taken in zero applied dc magnetic field (shown in Supporting Materials). The temperature of the cusp in the dc $M$($T$) data is generally suppressed to lower $T$ with increasing applied field $H$, as expected for AFM order. However, upon increasing the applied field from $H$ $\\parallel$ $c$ = 0.1 to 4\\,kOe there is an increase in the temperature where the cusp is observed, and this qualitative behavior is suggestive of a non-compensated AFM order (a canted AFM order with a weak ferromagnetic component). An additional view of the data that highlights this behavior is shown in the Supporting Materials.\n\nThe isothermal magnetization data presented in Fig.\\ref{MagXtl}(c) portray the anisotropic magnetic response of Fe$_{4.8}$AsTe$_2$ at $T$=2\\,K. Of particular importance here is the apparent spin-flop transition observed when the applied field is parallel to the $c$-axis. Upon increasing the field from a zero field cooled (ZFC) condition, the spin-flop occurs near 10\\,kOe. This implies that the moments are oriented primarily along the $c$-axis for $H$=0, and they reorient to perpendicular to the [001] direction of the applied field near 10\\,kOe before slowly rotating towards a saturated state. 
A metamagnetic transition suggesting easy-axis anisotropy was also observed in the AFM phase induced by cobalt doping of Fe$_{5-x}$GeTe$_2$.\\cite{Tian2020,May2020} When the field is applied within the basal plane ($H$ $\\perp$ $c$) of Fe$_{4.8}$AsTe$_2$, the magnetization increases continuously up to around 25\\,kOe and then gradually increases towards an assumed saturation. This behavior also supports the hypothesis of long range antiferromagnetic order in these furnace cooled Fe$_{4.8}$AsTe$_2$ crystals. By contrast, the crystals that were quenched from 750 $^{\\circ}$ C did not display anisotropic $M$($H$) and the magnetic properties were generally consistent with the existence of short range AFM correlations and possible glassy behavior (see Supporting Materials for related data). Interestingly, the quenched crystals appear to have less stacking disorder and this may suggest that small changes in the Fe1a,b sublattice or lattice strain strongly impact the magnetism.\n\nThe spin-flop transition has significant field hysteresis, as illustrated in Fig.\\,\\ref{MHparC}(a). Upon decreasing the field to $H$=0 from high fields, a remanent moment is observed. A smaller, but finite, remanent moment is also observed for $H$ $\\perp$ $c$. The inset of Fig.\\,\\ref{MHparC}(a) displays the derivative of the isothermal magnetization d$M$\/d$H$ and small features can be observed in addition to the main spin-flop transition.\n\n\\begin{figure}[h!]%\n\\includegraphics[width=0.95\\columnwidth]{MT_MH_MainAnisotropy_FC_3.pdf}%\n\\caption{Anisotropic magnetization data for single crystalline Fe$_{4.8}$AsTe$_2$, with temperature-dependent data for (a) $H \\parallel c$ and (b) $H \\perp c$, and (c) isothermal data after cooling in zero applied field. The insets in (a,b) show the low-field data near $T_N$ using the same vertical axis units (emu\/g) as the main panels. 
In (a,b), data for applied fields of $H$ = 0.1, 1, 2.5, 4, 5, 6, 8, 10, 20\\,kOe are shown.}%\n\\label{MagXtl}\n\\end{figure}\n\n\\begin{figure}[h!]%\n\\includegraphics[width=0.95\\columnwidth]{Mhcombo.pdf}%\n\\caption{Isothermal magnetization for $H \\parallel c$ in furnace cooled single crystals of Fe$_{4.8}$AsTe$_2$. (a) Data for increasing and decreasing the applied field after zero field cooling (ZFC) with a maximum field of 70\\,kOe reached. The inset shows the derivative d$M$\/d$H$. (b,c) Contour plots of d$M$\/d$H$ as a function of $T,H$ for (b) increasing $H$ and (c) decreasing $H$.}%\n\\label{MHparC}\n\\end{figure}\n\nThe critical field of the spin-flop increases upon cooling from $T_{\\rm N}$ and so does the hysteresis associated with this meta-magnetic transition. These trends can be inferred from the contour plots in Fig.\\,\\ref{MHparC}(b,c) where the color scale is related to the value of d$M$\/d$H$ and thus the highest intensity (red) relates to the spin-flop transition where increasing (decreasing) the field rapidly increases (decreases) the magnetization. The data in Fig.\\,\\ref{MHparC}(b) were obtained while increasing $H$ after cooling in zero field, and thus they demonstrate the increasing anisotropy and critical field upon cooling. Both the increasing- and decreasing-field data contain shoulders to the spin-flop transition (a second band of large d$M$\/d$H$ at higher fields). These shoulders may be caused by complex domain behavior or inhomogeneity in the sample that promote different anisotropy energies. Similarly, the existence of a remanence could be linked to the large hysteresis of the spin-flop or the presence of complex domain walls that promote a small residual moment.\n\n\n\\begin{figure*}[ht!]%\n\\includegraphics[width=1.95\\columnwidth]{TransportCombo.pdf}%\n\\caption{In-plane electrical transport properties of furnace-cooled Fe$_{4.8}$AsTe$_2$ crystals. 
(a) Relative temperature dependence of the resistivity with inset showing the temperature derivative. (b) Hall resistivity as a function of applied field for increasing and decreasing field (ZFC not shown). (c) Transverse magnetoresistance for field along [001] as a ratio relative to the zero field cooled value of $R_0$ and (d) the corresponding magnetization loop. Panels (e,f) contain data for a field applied within the basal plane ($H$ $\perp$ $c$) with (e) the transverse magnetoresistance and (f) the magnetization loop.}%\n\label{Resist}\n\end{figure*}\n\nThe electrical transport properties of Fe$_{4.8}$AsTe$_2$ were investigated using in-plane electrical resistivity $\rho$, magnetoresistance (MR) and Hall effect measurements, and the primary results are shown in Fig.\,\ref{Resist}. The electrical resistivity of our Fe$_{4.8}$AsTe$_2$ crystals increases slightly upon cooling, which differs from the behavior observed in Fe$_{5-x}$GeTe$_2$ ($x$ $\approx$ 0.2) crystals where the resistivity decreases upon cooling. The room-temperature resistivity of both compounds is fairly similar (hundreds of $\mu \Omega$-cm). Interestingly, $\rho$ also increases upon cooling in Fe$_{3-x}$AsTe$_2$ whereas Fe$_{3-x}$GeTe$_2$ has bad-metal-like behavior.\cite{May2016,Verchenko2016new} As shown in Fig.\,\ref{Resist}(a), the resistivity in Fe$_{4.8}$AsTe$_2$ has a small anomaly near the magnetic transition and it increases more rapidly upon cooling below $T_{\rm N}$. This behavior is suggestive of gapping of the Fermi surface caused by the magnetic order. It would be interesting to probe the extent to which correlations impact the physical properties of Fe$_{5-x}$AsTe$_2$ in comparison to Fe$_{5-x}$GeTe$_2$.\n\nThe Hall effect data are shown in Fig.\,\ref{Resist}(b) for $T$=2\,K. Data are shown for decreasing the field toward zero (red data), which results in a remanent moment and an associated remanent (anomalous) Hall effect.
Data are also shown for increasing the field from this remanent state (orange data), and in the increasing field condition the spin-flop appears to have a stronger impact on the observed Hall effect signal. The anomalous portion is not very large in Fe$_{4.8}$AsTe$_2$, even after the spin-flop transition, which is consistent with the antiferromagnetic order inferred from magnetization measurements. The ordinary Hall resistance is non-linear with applied field, with curvature decreasing on warming yet still present well above $T_{\rm N}$\, (see Supporting Materials), which suggests that multiple bands contribute to conduction in Fe$_{4.8}$AsTe$_2$. At 250\,K the Hall resistance is seemingly linear with $H$ (10 to 80\,kOe), and a single-carrier analysis yields a Hall carrier density of $\approx$1$\times$10$^{22}$\,holes\/cm$^{3}$. This value, which changes with $T$, is most likely impacted by the existence of multi-carrier transport and thus the Hall data are not a good measure of the metallicity of Fe$_{4.8}$AsTe$_2$ without more detailed knowledge of the Fermi surface. Holes and electrons both contribute to conduction in Fe$_{4.86}$GeTe$_2$ as well.\cite{May2019} For the sake of comparison, the Hall carrier density calculated by assuming a single-band model is $\approx$1.5$\times$10$^{21}$\,holes\/cm$^{3}$ for Fe$_{4.86}$GeTe$_2$ crystals at 375\,K (above the Curie temperature); data taken from Ref. \cite{May2019}. These results suggest more free holes in Fe$_{4.8}$AsTe$_2$ than in Fe$_{4.86}$GeTe$_2$, though the temperature dependence of the resistivity is less metallic in Fe$_{4.86}$GeTe$_2$ and this may point to possible scattering effects.
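The single-carrier (single-band) analysis amounts to $n = 1/(e R_H)$, with $R_H$ the slope of the Hall resistivity versus field. A minimal sketch; the Hall-coefficient value below is illustrative, chosen only to reproduce the order of magnitude quoted in the text:

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs

def single_band_density(hall_coeff_cm3_per_C):
    """Carrier density n = 1 / (e * R_H) for a single-band (one-carrier) model.

    hall_coeff_cm3_per_C: Hall coefficient d(rho_xy)/dB in cm^3/C.
    Returns n in carriers per cm^3 (sign of R_H gives the carrier type).
    """
    return 1.0 / (E_CHARGE * hall_coeff_cm3_per_C)

# Illustrative: R_H ~ 6.2e-4 cm^3/C corresponds to ~1e22 holes/cm^3
print(f"{single_band_density(6.2e-4):.2e}")  # -> 1.01e+22
```

As the surrounding discussion stresses, this single-band number is only an effective density when both electrons and holes conduct, since compensation inflates the apparent carrier count.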
Of course, these numbers may be skewed by electron-hole compensation effects in the Hall effect that artificially raise the single-band carrier density.\n\nThe magnetoresistance (MR) data were collected in two transverse configurations with current always flowing within the $ab$-plane and always perpendicular to the applied field, which is either directed along [001] or orthogonal to [001]; additional schematic illustrations are provided in the Supporting Materials. The data for $H$ $\\parallel$ $c$ are shown in Fig.\\ref{Resist}(c) and data for $H$ within the basal plane (yet still perpendicular to the current) are shown in Fig. \\ref{Resist}(e). The corresponding magnetization loops are presented in Figs.\\,\\ref{Resist}(d,f) to illustrate how the magnetic hysteresis is coupled to the electrical resistivity. Starting from a zero field cooled state ($R_0$), the application of a magnetic field decreases the resistivity of Fe$_{4.8}$AsTe$_2$ and thus negative magnetoresistance is observed. Above a critical field, which varies with orientation, the sign of d$R$\/d$H$ changes and positive MR is observed at large fields. Together with the presence of strong magnetic hysteresis, this leads to butterfly-shaped magnetoresistance loops. These loops also reveal that the remanent moment leads to negative magnetoresistance at $H$=0 relative to the ZFC value of $R_0$. Upon further demagnetizing the sample and passing beyond the coercive field, the resistance approaches the $R_0$ value before the negative MR takes over and causes a local minimum near the critical field. The net result is the butterfly-shaped resistance loop. This behavior is most dominant for $H$ $\\parallel$ $c$ with the spin-flop, but can also be observed for $H$ $\\perp$ $c$ where the remanent moment and hysteresis are much smaller. 
Interestingly, butterfly-shaped hysteresis loops have recently been reported for ultra-thin Fe$_{5-x}$GeTe$_2$ where a thickness effect appears to be important.\cite{Ohta2021butterfly}\n\nThe negative MR observed for small applied field is likely caused by alignment of moments within a magnetic domain or alignment of the magnetic domains. The trend towards positive MR starts at the spin-flop ($\approx$10\,kOe) and MR ultimately reaches 10\% for $H$ $\parallel$ $c$ at 90\,kOe and 2\,K and is slightly lower for $H$ $\perp$ $c$. Positive MR is typical of nonmagnetic or paramagnetic metals and is also observed in some antiferromagnetic materials, but is not typical of a ferromagnet where the applied field typically suppresses fluctuations and aligns domains to reduce carrier scattering (especially near $T_{\rm C}$). Positive MR can also be observed above the saturation field in a ferromagnet, as in Fe$_{5-x}$GeTe$_2$ at low $T$ and high field.\cite{May2019} In Fe$_{4.8}$AsTe$_2$, the likely existence of multiple bands at the Fermi level complicates the interpretation of the magnetoresistance, though the anisotropic behavior and butterfly-shaped hysteresis loops demonstrate a strong coupling of the magnetism to the electronic transport.\n\n\n\n\n\section{Conclusions}\n\nThe impact of arsenic substitution for Ge in the high-Curie-temperature vdW material Fe$_{5-x}$GeTe$_2$ was probed and the properties of Fe$_{4.8}$AsTe$_2$ were reported. Small additions of As appear to enhance the ferromagnetism in polycrystalline Fe$_{5-x}$GeTe$_2$ by suppressing spin fluctuations on the Fe1 sublattice. This also decreases the anisotropy field and thus provides a means for local tuning of the magnetism without major lattice changes. However, large concentrations of As lead to a significant decrease in the Curie temperature and saturation moment.
While structural characterization of Fe$_{4.8}$AsTe$_2$ by x-ray diffraction was hindered due to the presence of stacking disorder and potential phase separation, a key similarity to Fe$_{5-x}$GeTe$_2$ is evidenced by observation of an in-plane supercell associated with occupancy\/vacancy order on the Fe1 sublattice. Electron diffraction and microscopy investigations are necessary to inspect the local crystallography and phase evolution in this complex material. Crystals of Fe$_{4.8}$AsTe$_2$ that are slowly cooled display characteristics of long-range antiferromagnetic order with a small ferromagnetic component, while quenched crystals display characteristics of glassy magnetism. In the magnetically ordered phase, a spin-flop transition demonstrates the dominant easy-axis [001] anisotropy of the moments, and this meta-magnetic transition is strongly coupled to the electrical transport properties and causes butterfly-shaped magnetoresistance loops. In total, these results motivate detailed experimental and theoretical efforts to identify dopants that lead to enhanced magnetic ordering temperatures and anisotropy control in itinerant vdW magnetic materials. Importantly, this work demonstrates that small concentrations of such dopants need to be considered due to the sensitivity of itinerant magnetic materials to small changes in the Fermi energy or crystal chemistry.\n\n\\section{Methods}\n\nSingle crystals of Fe$_{4.8}$AsTe$_2$ were grown by heating the pure elements (Fe, As, Te, I) in an evacuated silica ampoule to 750$^{\\circ}$ C over 30\\,h followed by a dwell period of approximately 10\\,d. The largest crystals were 2-3\\,mm in lateral dimension, and these were obtained from growths performed with a hot-side temperature of 750$^{\\circ}$ C in a horizontal tube furnace. The growth ampoules were either allowed to cool in the furnace over 8-12\\,h or were quenched into an ice-water bath. 
For quenched crystals, iodine was rinsed from the crystals using alcohols and\/or acetone to prevent accelerated tarnishing due to the hygroscopic nature of iodine; during slow cooling the iodine deposits on the silica ampoule due to the temperature gradient and rinsing is not required.\n\nPolycrystalline samples for the solid-solution series Fe$_{5}$Ge$_{1-y}$As$_y$Te$_2$ (with nominal $y$ = 0.025, 0.05, 0.25, 0.50, 0.75) were synthesized by first reacting the elements at 750$^{\circ}$ C for 72\,h using an initial heating rate of 25$^{\circ}$\/h. The reacted products were ground briefly in air, pressed into pellets with a diameter of one-half inch, and then sealed in silica ampoules with a small pressure of argon. A second heat treatment at 750$^{\circ}$ C lasted for approximately 200\,h prior to quenching into an ice-water bath. Polycrystalline samples of nominal compositions Fe$_{4.8}$AsTe$_2$, Fe$_{4.5}$AsTe$_2$ and Fe$_{5.5}$AsTe$_2$ were synthesized in a similar manner and the impact of thermal processing (quenching, cooling in furnace) was examined for these samples. Le Bail fitting was performed in FullProf\cite{FullProf} to obtain lattice parameters, though it is noted that the fits are of low quality due to asymmetric and inconsistent broadening of the diffraction intensities associated primarily with stacking faults; the data may also be impacted by phase separation issues.\n\nChemical analysis of slow-cooled, vapor-transport-grown Fe$_{4.8}$AsTe$_2$ crystals was performed using wavelength dispersive spectroscopy (WDS) in a JEOL 8200 with elemental standards of Fe, Te and binary InAs. The beam energy was 25\,keV using a current of 50\,nA.\n\nSingle-crystal x-ray diffraction data were collected at 220\,K using a Bruker D8 Quest with a nitrogen cold stream while the crystals were mounted on a Kapton loop using paratone oil.
Structural modeling was performed using ShelX after data reduction via Bruker's APEX3 software.\\cite{Sheldrick2015} Crystals ($<$70$\\mu$m) were selected from the products of growths that started with different nominal compositions (Fe$_5$AsTe$_2$ and Fe$_6$AsTe$_2$) and different thermal histories (quenching, furnace cooling). Quenched crystals were found to have less streaking along $l$ in the relevant reciprocal space maps of the diffracted intensity, suggesting they have fewer stacking faults than the furnace cooled crystals. Data from the heavily faulted crystals could not be refined, and refinement results from the structural solution for the quenched crystals are provided in the Supporting Materials. X-ray diffraction data were collected using a PANalytical X'Pert Pro MPD with a Cu K$\\alpha_1$ ($\\lambda$=1.5406\\,\\AA) incident beam monochromator. Some degradation of the diffraction data was observed after several hours of exposure, and thus these samples are mildly sensitive to moisture and\/or oxygen.\n\nTransport measurements were performed in a Quantum Design Dynacool. The Hall effect data were anti-symmetrized (odd only) to avoid mixing of the transverse and longitudinal signals due to imperfect measurement geometry. Transport data (MR, Hall) for $H \\parallel c$ were collected simultaneously using a six-wire method while data for $H \\perp c$ were collected on a separate crystal. Magnetization measurements were collected in SQUID magnetometers (MPMS-XL and MPMS3) from Quantum Design (QD) and ac susceptibility data were collected in a QD PPMS and the MPMS3. The contour plots of d$M$\/d$H$ shown in Fig.\\,\\ref{MHparC}(b,c) were obtained using temperature steps of 1\\,K and the applied field was stabilized in steps of 100\\,Oe. The data in Fig.\\,\\ref{MHparC}(b) were collected upon increasing $H$ after cooling from 150\\,K in zero field. 
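The d$M$\/d$H$ maps described above amount to a numerical derivative taken along the field axis of the measured ($T$, $H$) grid. A minimal sketch using synthetic data (numpy assumed available; the field grid mimics the 100\,Oe steps mentioned above, while the tanh step and its critical fields are purely illustrative, not fits to the measured data):

```python
import numpy as np

def dM_dH(M, H):
    """Numerical derivative dM/dH along the field axis of a (T, H) grid.

    M: 2D array of magnetization values, rows = temperatures, cols = fields.
    H: 1D array of applied-field values matching the measurement grid.
    """
    return np.gradient(M, H, axis=1)

# Synthetic example: a smooth step in M(H) mimicking a spin-flop whose
# critical field grows upon cooling (as observed in the contour plots)
H = np.arange(0.0, 20000.0, 100.0)      # 0 to 20 kOe in 100 Oe steps
T = np.array([2.0, 5.0, 10.0])          # a few measurement temperatures (K)
Hc = 10000.0 - 200.0 * T[:, None]       # illustrative critical fields (Oe)
M = np.tanh((H[None, :] - Hc) / 1000.0)
chi = dM_dH(M, H)

# The ridge of maximum dM/dH tracks the (synthetic) spin-flop field,
# which is what the color scale in the contour plots highlights
print(H[np.argmax(chi, axis=1)])
```

Plotting `chi` versus ($T$, $H$) with a filled-contour routine then reproduces the style of map shown in Fig.\,\ref{MHparC}(b,c), with the high-intensity band marking the metamagnetic transition.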
The data in Fig.\\,\\ref{MHparC}(c) were obtained while decreasing $H$ from 70\\,kOe with data first collected at 2\\,K, then 3\\,K, and so on.\n\n\nDensity functional theory (DFT) calculations were performed using the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional\\cite{PerdewGGA1996} as implemented in the VASP code.\\cite{Kresse1996a} The kinetic energy cutoff of the plane-wave basis is 268\\,eV; changing to a cutoff energy of 400\\,eV resulted in quantitative changes of less than 10\\% and no qualitative changes. The projector augmented wave method is used to describe the interaction between ions and electrons.\\cite{Kresse1999} A 6 $\\times$ 6 $\\times$ 1 k-point mesh was used for a 2 $\\times$ 2 $\\times$ 2 supercell. The crystallographic parameters are fixed at the experimentally measured values for quenched crystals but the atomic positions are optimized until the force on each atom is less than 0.01 eV\/\\AA~. It is again emphasized that these calculations are highly idealized and point and planar defects may be essential in understanding the different magnetic phases.\n\n\n\\section{Acknowledgments}\nWe thank R. Custelcean for assistance with x-ray single-crystal diffraction measurements and M. Lance for assistance with WDS measurements. This work was supported by the U. S. 
Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division.\n\n\n\\providecommand{\\newblock}{}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRobot-assisted surgery, with a short but remarkable chronicle~\\cite{moustris2011evolution}, has dramatically extended the dexterity and overall capability of surgeons, and plays an important role in modern minimally invasive surgery.\nRobotic systems enable precise control, efficient manipulation and vivid observation of surgical procedures, yielding rich sources of information~\\cite{guthart2000intuitive}.\nIntelligent understanding of such complex surgical procedures is highly desired for facilitating cognitive assistance. To this end, automatic gesture recognition is fundamentally required for supporting higher-level perception such as surgical decision making~\\cite{maier2017surgical}, surgical skill assessment~\\cite{poursartip2018analysis} and surgical task automation~\\cite{nagy2019dvrk} towards the next generation of operating theatres.\nHowever, accurately recognizing on-going surgical gestures is challenging, due to the complex multi-step actions, frequent state transitions, disturbances in sensor data, and the varied manipulation habits and proficiency levels of different surgeons.\n\nTo address the above challenges in automatic surgical gesture recognition, a set of methods have been developed in the past decade. 
Some methods were based on processing sequential robotic kinematics data (e.g., the position and velocity of the tool tips), using traditional machine learning methods such as variants of hidden Markov models~\\cite{tao2012sparse,varadarajan2009data} and linear classifiers with hand-crafted metrics~\\cite{zappella2013surgical}, as well as recent deep learning methods such as long short-term memory (LSTM) networks~\\cite{dipietro2016recognizing}, multi-task recurrent neural networks~\\cite{van2020multi} and multi-scale recurrent networks (offline)~\\cite{gurcan2019surgical}.\nMeanwhile, purely video-based solutions have been intensively explored in recent years, employing deep convolutional neural networks to extract high-quality visual features.\nPromising gesture recognition results have been achieved relying on the temporal convolutional network (TCN)~\\cite{lea2017temporal}, recurrent convolutional networks~\\cite{jin2017sv}, 3D convolutional networks~\\cite{funke2019using} and symmetric dilated convolution (offline)~\\cite{zhang2020symmetric} to extract representative visual features.\nHowever, all these solutions adopted only a single source of information,\nwithout considering the multi-modal joint knowledge inherent in the kinematics and visual data\nsynchronously recorded in robotic systems.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.93\\textwidth]{figures\/MRG-Net.pdf}\n \\vspace{-5mm}\n \\caption{The overview of our proposed multi-modal relational graph network for surgical gesture recognition in robot-assisted surgery.}\n \\label{fig1}\n \\vspace{-6mm}\n\\end{figure*}\n\nAs we understand it, the kinematics and video data can be regarded as the hands and eye of the surgical robot, with the eye giving visual guidance for the two hands to collaboratively conduct specific actions, while the hands drive changes in the visual scene.\nIn this regard, there are complementary information and joint knowledge contained in the kinematics and video data which are 
crucial for gesture recognition.\nSeveral recent works have attempted to develop multi-modal learning methods.\nFor instance, some unsupervised multi-modal methods have been proposed to handle the problem of time-consuming annotation~\\cite{murali2016tsc,zhao2018fast}.\nLea \\textit{et al.}~\\cite{lea2016learning} designed a latent convolutional skip-chain conditional random field model with variables of scene-based features and kinematics data.\nThe work of Fusion-KV~\\cite{qin2020temporal} learned individual networks for each modality, and combined their predictions through weighted voting at the output level. Qin \\textit{et al.}~\\cite{qin2020davincinet} further improved Fusion-KV with an attention-based LSTM decoder to predict the surgical state using concatenated multi-modal features.\nDespite gaining performance improvements, these multi-modal feature fusion strategies remain relatively straightforward. How to dynamically integrate the multiple sources of information in the latent feature space, to reveal and leverage the underlying relationships inherent in kinematics sequences and video scenes, is important yet still remains underexplored.\n\nRecently, graph neural networks have been receiving increasing research interest, due to their capability to model non-Euclidean relationships among entities~\\cite{dwivedi2020benchmarkgnns,scarselli2008graph,zhou2018graph}. 
Graph convolutional networks (GCN)~\\cite{kipf2016semi} have widely demonstrated promising performance in applications across various domains, including image classification~\\cite{wang2018zero}, neural machine translation~\\cite{marcheggiani2018exploiting}, social relationship understanding~\\cite{wang2018deep}, etc.\nSpecifically, for robotic-surgery-related scenarios, there have been pilot studies applying graph neural networks to tool detection in surgical videos~\\cite{wang2019graph},\n3D point cloud classification~\\cite{weibel2019robust,weibel2019addressing} and surgical activity recognition from robotic joint pose estimation~\\cite{sarikaya2020towards}.\nThese achievements inspired us to explore the potential of graph learning for modeling the distinct multi-modal data recorded in robotic surgery.\n\nIn this paper, we propose a novel \\textbf{m}ulti-modal \\textbf{r}elational \\textbf{g}raph \\textbf{net}work (i.e., MRG-Net) to effectively exploit the important yet complex relationships in robotic visual and kinematics data for accurate surgical gesture recognition.\nSpecifically, we first extract high-level embeddings from the video scenes and kinematics sequences with temporal convolutional networks and LSTM units. 
Then, we leverage a relational graph convolutional network to incorporate complementary sources of information and model the underlying multiple types of relations.\nOur main contributions are summarized as follows:\n\n\\begin{itemize}\n\t\\item We, for the first time, propose a novel online relational graph learning based framework to exploit the joint information and useful relationships in video and kinematics data for accurate surgical gesture recognition.\n\t\\item We evaluated our proposed method on the public robotic surgery dataset JIGSAWS, and set new state-of-the-art results on both suturing and knot typing tasks, showing the efficacy of the combined usage of visual and kinematics information for robotic intelligence.\n\t\\item We have extensively validated our method on in-house datasets collected from da Vinci Research Kit (dVRK) platforms in two centers (i.e., CUHK and JHU) with consistent promising results achieved, demonstrating the general effectiveness of our proposed method.\n\\end{itemize}\n\n\\section{Methods}\n\nIn robot-assisted surgery, the robotic system can generate video frames from endoscopy and kinematics sequences from multiple robotic arms, which are later synchronized to the video timestamps. The overview of our proposed network is shown in Fig.~\\ref{fig1}.\nOur network consists of three components, i.e., the visual and kinematic information extraction modules, as well as the relational graph convolutional network. We first extract the visual and kinematic embeddings with the visual and kinematic information extraction modules, and then model the complementary information and integrate the informative joint knowledge of these multi-modal features with a relational graph convolutional network. 
As a whole, MRG-Net forms a multi-input single-output design to predict the probability distribution of surgical gestures at each time step.\n\n\n\\vspace{-1mm}\n\\subsection{Visual and Kinematic Embeddings Extraction}\n\nThe first part of the network is the visual and kinematic information extraction modules, which extract representative descriptors from each of the following streams respectively: the video frames and the kinematics sequences of the robotic left and right arms.\nRegarding the visual information, for each time step $t$, the current video frame (RGB image) $I_t$ is forwarded to a standard CNN backbone (in this case we leverage an 18-layer deep residual network (ResNet-18)~\\cite{he2016deep}), yielding a vector of spatial features ${u}_t$. For an entire video sample, the series $\\{u_t\\}_{t=1}^T$ is input to a temporal convolution module, which adopts an encoder-decoder operation to hierarchically capture relationships across frames at multiple time scales, yielding stronger spatio-temporal video features $\\{s_t\\}_{t=1}^T$.\n\nFor the kinematics data, our feature extractor incorporates a TCN and an LSTM in parallel, for modeling the complex sequential information of the physical elements and for better capturing both local and longer-term temporal dependencies.\nSpecifically, the input to the TCN stacks the kinematics variables from all time steps, followed by temporal convolutions, pooling, channel-wise normalization and upsampling, to encode the kinematics features as $\\{k_t^{tcn}\\}^T_{t=1}$. Meanwhile, the LSTM obtains the feature $k^{lstm}_t$ of the current step by taking as input the sequence of kinematics from all previous steps, thus capturing the long-term dependencies in motions. Then, $k_t^{tcn}$ and $k_t^{lstm}$ are averaged to represent the kinematics feature as $k_t$. 
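As a toy illustration of the parallel TCN/LSTM kinematics encoding described above — with stand-in numeric vectors in place of the learned $k_t^{tcn}$ and $k_t^{lstm}$ embeddings (their dimensionality here is purely illustrative) — the element-wise averaging step might look like:

```python
def fuse_kinematics(k_tcn, k_lstm):
    """Average the TCN (local, multi-scale) and LSTM (long-term) features
    element-wise to obtain the per-step kinematics embedding k_t."""
    assert len(k_tcn) == len(k_lstm)
    return [(a + b) / 2.0 for a, b in zip(k_tcn, k_lstm)]

# Stand-in embeddings for one time step of one arm (values illustrative):
k_t = fuse_kinematics([0.25, 0.5, -0.5], [0.75, 0.25, 0.0])  # [0.5, 0.375, -0.25]
```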
Note that we separately encode the left and right kinematics as $\\{k_t^{l},k_t^{r}\\}_{t=1}^T$, since the two robotic arms may conduct different actions and serve specific purposes in surgery.\n\n\\subsection{Fusion of Multi-modal Embeddings with Graph Learning}\n\nNext, a dedicated graph learning module is subsequently adopted to fuse the above extracted high-level embeddings $\\{s_t, k_t^{l}, k_t^{r}\\}$ of each time step.\nThese features have already gained temporal information within each source of time-series data. The following key issue is to effectively exploit their joint knowledge by imposing structured interactions between the multi-modal features for accurate gesture recognition.\nIntuitively, the graph learning layer acts as a differentiable message passing framework~\\cite{gilmer2017neural}.\nSpecifically, we denote our graph as $G \\! = \\! \\{\\mathcal{V},\\mathcal{E}, \\mathcal{R}\\}$ with nodes $v_i \\! \\in \\! \\mathcal{V}$ and edges $(v_i, r, v_j) \\in \\mathcal{E}$, where $r\\in \\mathcal{R}$ is a relation type. As shown in Fig.~\\ref{fig1}, there are three node entities corresponding to the video, left kinematics and right kinematics, whose associated feature descriptors $\\{h_1,h_2,h_3\\}$ are initialized as $\\{s_t, k_t^{l}, k_t^{r}\\}$. 
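As a plain-Python sketch (independent of any graph library), the three-node directed graph of Fig. 1, with the typed edges used for relation-specific propagation, could be assembled as follows; node indices and relation-name strings are our own illustrative choices:

```python
# Nodes: 0 = video (s_t), 1 = left kinematics (k_t^l), 2 = right kinematics (k_t^r).
NODES = {0: "video", 1: "kin_left", 2: "kin_right"}

# Directed, typed edges (src, relation, dst); relation names follow Fig. 1.
EDGES = [
    (0, "vision_to_motion", 1), (0, "vision_to_motion", 2),
    (1, "motion_to_vision", 0), (2, "motion_to_vision", 0),
    (1, "in_between_motions", 2), (2, "in_between_motions", 1),
]

def neighbors(i, relation):
    """Indices of nodes sending messages to node i under one relation type."""
    return [src for (src, r, dst) in EDGES if dst == i and r == relation]
```

For example, the video node receives messages from both kinematics nodes under the motion-to-vision relation, mirroring the brown arrows in Fig. 1.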
These descriptors are then updated by aggregating messages from neighboring nodes with a parameterized propagation rule, which can be generally written as:\n\\begin{equation}\nh_i^{(l+1)} = \\sigma\\left( \\sum_{j\\in \\mathcal{N}_i} f_m(h_i^{(l)}, h_j^{(l)})\\right),\n\\end{equation}\nwhere $h_i^{(l)}$ is the hidden state of node $v_i$ in the $l$-th graph network layer, $\\mathcal{N}_i$ is the set of indices of all nodes connected with node $i$, $f_m(\\cdot, \\cdot)$ denotes the function for accumulating incoming messages from a relational neighbor, and $\\sigma(\\cdot)$ is the element-wise non-linear activation, i.e., the ReLU in our model.\n\nIntuitively, such interactive feature fusion imposes a learnable message passing process on nodes which have relations with each other.\nGiven that the multiple data sources from robotic surgery contain plenty of complementary information, effectively mining the inherent relationships among them is difficult yet critical for boosting the performance of gesture recognition.\n\n\\subsection{Multi-relation Modelling in Graph Latent Space}\n\nIn our scenario of robot-assisted surgery, there are at least three important types of relations in the video and kinematics data. Specifically, the first is the \\emph{vision-to-motion} relation, which can be understood as the human's perception with the ``eyes\" providing guidance information for the ``hands\" to move. Inversely, the second relation is \\emph{motion-to-vision}, which reflects the mechanism of the ``hands\" giving feedback to the ``eyes\" and also resembles the hand-eye calibration of robotic vision~\\cite{tsai1989new}. 
The last relation is \\emph{in-between-motions} of the left and right arms, which can be considered as the two ``hands\" assisting each other to complete a task.\nThe widely-used conventional graph convolutional network~\\cite{kipf2016semi} is inadequate for handling multiple distinct types of relations, given that its undirected graph with $|\\mathcal{R}| \\! = \\! 1$ is insufficient to model multi-relations of nodes. Instead, we leverage the more powerful relational graph learning scheme,\nso that our $G$ is a directed graph endowed with a higher capacity for modeling multiple types of directed edges between nodes. In this way, the parameterized propagation function $f_m(\\cdot,\\cdot)$ in Eq.(1) becomes relation-specific, where the forwarded message update to node $i$ from a relational node $j$ is elaborated as $c_{i,r} h_j^{(l)}W_r^{(l)}$. The $W_r^{(l)}$ represents a trainable transformation matrix that is uniquely associated with one certain type of relation $r \\in \\mathcal{R}$.\nIn other words, different relation types use individual matrices, and only directed edges of the same relation type share their weights.\nThe parameter $c_{i,r}$ is a normalization constant determined by the structure of the graph.\nIn this way, the layer-wise propagation is achieved by accumulating the message updates through a normalized sum over all neighbor nodes under all relation types.\n\nHierarchically, we stack two such relational graph learning layers, with each layer having its separate set of projection weights $\\{W_r^{(l)}\\}_{l=0}^1$. 
No deeper layers are added, to alleviate the over-smoothing problem~\\cite{li2018deeper} of GCNs, and our preliminary experiments also evidenced that additional layers yielded worse performance at heavier computational cost.\nAfter feature interactions, the final output representation associated with each node is computed by:\n\\begin{scriptsize}\n\\begin{equation}\nh_i^{(\\text{out})} = \\sigma\\left(\\sum_{r\\in \\mathcal{R}}\\sum_{j\\in \\mathcal{N}_i^r} c_{i,r}\\,\n\\sigma\\left(\\sum_{r'\\in \\mathcal{R}}\\sum_{k\\in \\mathcal{N}_j^{r'}} c_{j,r'}\\, h_k^{(0)} W_{r'}^{(0)}\\right) W_r^{(1)}\\right),\n\\label{rgcn2}\n\\end{equation}\n\\end{scriptsize}\nwhere $\\mathcal{N}_i^r$ denotes the set of neighbor indices of node $i$ under relation type $r \\in \\mathcal{R}$. Specifically, in our model we identify three different types of relations, i.e., $|\\mathcal{R}|=3$.\nFor instance, as shown in Fig.~\\ref{fig1}, the video node $h_1$ receives messages from the kinematics nodes $\\{h_2,h_3\\}$, both under the relation type \\emph{motion-to-vision} (cf. $W_2^{(0)}$ in brown arrow).\nThe left kinematics node $h_2$ receives messages from the video node $h_1$ under the relation type \\emph{vision-to-motion} (cf. $W_1^{(0)}$ in purple arrow), and from the right kinematics node $h_3$ under the relation type \\emph{in-between-motions} (cf. $W_3^{(0)}$ in red arrow).\nThe weight $c_{i, r}$ is heuristically set as $1\/|\\mathcal{N}_i^r|$.\nWe exclude the self-loop aggregation for each node, with the consideration that, for our graph classification task, the self-loop information tends to result in feature redundancy during the update process, weakening the messages propagated from neighbor nodes (i.e., other modalities). 
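To make the two-layer relational propagation of Eq. (2) concrete, the following minimal sketch implements it in plain Python for the three-node graph, with random toy weights standing in for the learned, basis-decomposed $W_r^{(l)}$; it uses the normalization $c_{i,r} = 1/|\mathcal{N}_i^r|$ and, as noted above, no self-loop aggregation:

```python
import random

random.seed(0)
D = 4  # toy feature dimension (the real layers are 64-dimensional)
RELS = ["vision_to_motion", "motion_to_vision", "in_between_motions"]
# Relation-specific in-neighbors N_i^r for nodes {0: video, 1: left, 2: right}.
NBRS = {
    (0, "motion_to_vision"): [1, 2],
    (1, "vision_to_motion"): [0], (1, "in_between_motions"): [2],
    (2, "vision_to_motion"): [0], (2, "in_between_motions"): [1],
}

def matvec(W, x):
    return [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

def relu(x):
    return [max(0.0, v) for v in x]

def rgcn_layer(h, W):
    """h_i <- relu( sum_r sum_{j in N_i^r} (1/|N_i^r|) * W_r h_j ), no self-loop."""
    out = []
    for i in range(len(h)):
        acc = [0.0] * D
        for r in RELS:
            nbrs = NBRS.get((i, r), [])
            for j in nbrs:
                msg = matvec(W[r], h[j])
                acc = [a + m / len(nbrs) for a, m in zip(acc, msg)]
        out.append(relu(acc))
    return out

# Random toy parameters and initial node features {s_t, k_t^l, k_t^r}.
W0 = {r: [[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)] for r in RELS}
W1 = {r: [[random.uniform(-1, 1) for _ in range(D)] for _ in range(D)] for r in RELS}
h0 = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(3)]
h_out = rgcn_layer(rgcn_layer(h0, W0), W1)  # two stacked layers, as in Eq. (2)
```

Stacking the layer twice reproduces the nested structure of Eq. (2): the inner call produces the intermediate states $h_j^{(1)}$, which the outer call aggregates with the second set of relation-specific weights.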
Note that such a practice does not cause knowledge leakage, since the hierarchical propagation can compensate for the self-contained information through iterative interactions among nodes.\nIn addition, the regularization strategy of basis decomposition~\\cite{schlichtkrull2018modeling} is applied to $\\{W_r^{(l)}\\}_{l=0}^1$ to prevent rapid growth of the number of parameters with multi-relational data.\n\n\\subsection{Overall Loss Function}\n\nAfter interacting the multi-modal information in the latent space to capture joint knowledge, the relational graph learning layers produce updated representations for the nodes. Recall that the hidden state of each node represents the descriptor of one modality at time step $t$, so we rephrase $\\{h_i^{(\\text{out})}\\}_{i=1}^3$ as $\\{\\tilde{s}_t, \\tilde{k}_t^l, \\tilde{k}_t^r\\}$, the set of features for video, left and right kinematics after graph learning. They are concatenated to convey the joint knowledge, and forwarded to a fully-connected layer for obtaining the classification prediction $\\hat{p}_t$ for each frame:\n\\begin{equation}\n \\hat{p}_t=\\text{Softmax}(\\textbf{concat}[\\tilde{s}_t, \\tilde{k}_t^l, \\tilde{k}_t^r] W_{\\text{fc}} + b).\n\\end{equation}\nSince the duration of each conducted gesture varies widely in robotic surgery (e.g., the gesture of ``loosening more suture\" occupies only about 1\\% of the whole suturing task on average, while ``pushing needle through tissue\" takes up almost 30\\% of the task duration), we use a weighted cross-entropy loss to combat such inter-class imbalance in the training samples.\nDenoting $\\alpha_y$ as the balancing weight of class $y$, $y_t$ as the ground-truth gesture label at step $t$, and $\\Theta$ as the MRG-Net parameters of all trainable layers,\nwe optimize the overall loss function:\n\\begin{equation}\n\\mathcal{L}(\\mathcal{X,Y}; \\Theta) = \\frac{1}{T}\\sum\\nolimits_{t} -\\alpha_{y_t} \\log \\hat{p}_{t,y_t},\n\\end{equation}\nwhere $\\mathcal{X}$ is the multi-modal input space and $\\mathcal{Y}$ denotes the 
gesture categories.\n\n\\subsection{Implementation Details}\nOverall, the entire framework, composed of the relational graph layers and the separate video and kinematics feature extractors, is trained end-to-end.\nThe encoder and decoder of the TCN backbone each consist of three temporal convolutional layers, with $\\{64,96,128\\}$ filters for the encoder and $\\{96,64,64\\}$ filters for the decoder, and a kernel size of $51$.\nFor the visual information, we first train the CNN backbone (ResNet-18) using the video sequences, then we generate the spatial-CNN features $u_t \\! \\in \\! \\mathbb{R}^{128}$ from the pretrained backbone to train the whole model more efficiently. For the kinematic data, we first convert the rotation matrices into Euler angles, then normalize all the data to zero mean and unit variance. The relational graph layers have 64-dimensional hidden states and output states, with dropout (rate $\\!=\\!0.2$) applied.\n\nOur graph learning framework is implemented with the Deep Graph Library (DGL)~\\cite{wang2019dgl} in PyTorch on an NVIDIA Titan Xp GPU. The video frames are resized to a resolution of $320 \\times 256$ with random crops of $224 \\times 224$ to reduce the number of training parameters and prevent over-fitting.\nWe used the Adam~\\cite{kingma2014adam} optimizer with a learning rate of $5 \\times 10^{-3}$ and weight decay of $5 \\times 10^{-4}$ to train the proposed network. 
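The class-balanced cross-entropy objective above (Eq. (4)) can be sketched numerically as follows; the class weights and probabilities are toy values, and the actual $\alpha$ depends on the gesture frequencies of the training split:

```python
import math

def weighted_ce(probs, labels, alpha):
    """Mean over time of -alpha[y_t] * log p_t[y_t]; larger weights on rare
    gestures counter the inter-class imbalance of the training frames."""
    T = len(labels)
    return sum(-alpha[y] * math.log(p[y]) for p, y in zip(probs, labels)) / T

# Toy 3-class example: the rare class 2 gets a larger balancing weight.
alpha = [0.5, 0.5, 2.0]
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]]
labels = [0, 1, 2]
loss = weighted_ce(probs, labels, alpha)  # positive; dominated by the rare class
```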
The training process took around 3 hours for one hundred epochs.\nTo mitigate the effect of random initialization, we trained the same model three times and report the average results.\n\n\n\\section{Experiments}\n\n\\subsection{Public Dataset and Evaluation Metrics}\n\nWe first extensively validate our proposed MRG-Net on the public dataset JIGSAWS~\\cite{gao2014jhu} (JHU-ISI Gesture and Skill Assessment Working Set) on two tasks (i.e., suturing and knot typing), which consist of 39 videos and 36 videos respectively, alongside kinematics data of the left and right robotic arms from the \\emph{da Vinci} surgical system.\nThe kinematics sequences include the position, rotation matrix, linear velocity and rotational velocity of the tool tips, and the gripper angles.\nA total of ten categories of surgical gestures are annotated for each single frame in the suturing task and six in the knot typing task (cf.~\\cite{gao2014jhu} for detailed gesture definitions).\nOur experimental setting adopts \\emph{leave-one-user-out} cross validation, following the practice of previous works on this benchmark~\\cite{ahmidi2017dataset}.\nThe employed evaluation metrics on the JIGSAWS dataset include: i) Accuracy (\\%) at the frame-wise level, i.e., the percentage of correctly recognized frames; ii) Edit Score~\\cite{lea2016segmental} (in the range $[0,100]$, the higher the better), which measures performance at the video segmentation level, emphasizing temporal smoothness.\n\n\\subsection{Comparison with Other State-of-the-art Methods}\n\\begin{table}[t]\n\\begin{center}\n\\caption{Results of different methods on JIGSAWS Suturing dataset for gesture recognition.}\\label{tab1}\n\\scalebox{0.94}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{|c|}{Input data} & \\multirow{2}{*}{Accuracy}& \\multirow{2}{*}{Edit Score} \\\\\n\\cline{2-3}\n~ & ~Kin & Vid &&\\\\\n\\hline\nTCN~\\cite{lea2016temporal} & \\checkmark && 79.6 & 85.8\\\\\nForward 
LSTM~\\cite{dipietro2016recognizing} & \\checkmark && 80.5 $\\pm$ 6.2 & 75.3\\\\\nTricorNet~\\cite{ding2017tricornet} & \\checkmark & & 82.9 & 86.8\\\\\nBidir. LSTM~\\cite{dipietro2016recognizing} & \\checkmark && 83.3 $\\pm$ 5.7 & 81.1\\\\\nBidir GRU~\\cite{dipietro2019segmenting} & \\checkmark && 84.7 $\\pm$ 6.0 & 88.5\\\\\nAPc~\\cite{van2020multi} & \\checkmark & & 85.5 & 85.3\\\\\n\\hline\nTCN~\\cite{lea2016temporal} & & \\checkmark & 81.4 & 83.1\\\\\nPolicy+Value~\\cite{gao2020automatic} & & \\checkmark & 81.7 & 88.5\\\\\n3D CNN(K)+window~\\cite{funke2019using} & & \\checkmark & 84.3 & 80.0\\\\\n\\hline\nLC-SC-CRF~\\cite{lea2016learning} & \\checkmark & \\checkmark & 83.5 & 76.8\\\\\nFusion-KV~\\cite{qin2020temporal} & \\checkmark & \\checkmark & 86.3 & 87.2\\\\\n\\bfseries MRG-Net (Ours) & \\checkmark & \\checkmark &\\bfseries 87.9 $\\pm$ 4.2 &\\bfseries 89.3 $\\pm$ 5.2\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.4\\textwidth]{figures\/colorbar.pdf}\n \\vspace{-3mm}\n \\caption{Color-coded ribbon illustration of surgical gesture recognition on Suturing task (a) and Knot Typing task (b) with ground truth (top) and our results (bottom).} \\label{fig3}\n \\vspace{-5mm}\n\\end{figure}\n\n\\begin{table}[t]\n\\begin{center}\n\\caption{Results of different methods on JIGSAWS Knot Typing dataset for gesture recognition.}\\label{tab_knot}\n\\scalebox{1}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{|c|}{Input data} & \\multirow{2}{*}{~Accuracy~}& \\multirow{2}{*}{Edit Score} \\\\\n\\cline{2-3}\n~ & Kin & Vid &&\\\\\n\\hline\nSC-CRF~\\cite{ahmidi2017dataset} & \\checkmark & & 78.9 & N\/A \\\\\nBoF~\\cite{ahmidi2017dataset} & & \\checkmark & 86.5 & N\/A \\\\\nMsM-CRF~\\cite{ahmidi2017dataset} & \\checkmark & \\checkmark & 77.3 & N\/A \\\\\n\\bfseries MRG-Net (Ours) & \\checkmark & \\checkmark &\\bfseries 88.1 $\\pm$ 3.8 &\\bfseries 
87.0 $\\pm$ 6.8\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\vspace{-4mm}\n\\end{table}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.25\\textwidth]{figures\/test.png}\n\\vspace{-3mm}\n\\caption{Bar chart of gesture-wise recognition accuracy.}\n\\label{fig2}\n\\vspace{-6mm}\n\\end{figure}\n\n\n\nWe compare our proposed MRG-Net with previous state-of-the-art methods on the benchmark dataset, reporting the mean accuracy and mean Edit Score with standard deviation (std.); results shown without std. did not have it reported in the original papers.\n These methods are grouped into \\emph{purely kinematics based} (i.e., SC-CRF~\\cite{ahmidi2017dataset}, TCN~\\cite{lea2016temporal}, Forward and Bidirectional LSTM~\\cite{dipietro2016recognizing}, Bidirectional GRU~\\cite{dipietro2019segmenting}, TricorNet~\\cite{ding2017tricornet} of hybrid TCN and LSTM, APc~\\cite{van2020multi} using multi-task RNN), \\emph{purely video based} (i.e., TCN~\\cite{lea2016temporal}, Policy+Value~\\cite{gao2020automatic} of offline reinforcement learning, 3D CNN with post-processing~\\cite{funke2019using}), and \\emph{multi-modal based} methods (i.e., LC-SC-CRF~\\cite{lea2016learning} and MsM-CRF~\\cite{ahmidi2017dataset} with traditional machine learning, BoF~\\cite{ahmidi2017dataset} with manually extracted features, and Fusion-KV~\\cite{qin2020temporal} with deep learning).\n\nFor the suturing task, which is the most popular task in JIGSAWS with more samples and gestures, we compare our results with eleven state-of-the-art methods listed in Table~\\ref{tab1}.\nWe first see that our MRG-Net significantly outperforms the state-of-the-art uni-modal methods, with its accuracy exceeding the previous best kinematics-based method~\\cite{van2020multi} by 2.4\\% and the best video-based method~\\cite{funke2019using} by 3.6\\%. 
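For reference, the segmental Edit Score used in these comparisons is typically computed as a normalized Levenshtein distance between the segment sequences of the prediction and the ground truth; a minimal sketch, assuming this standard definition:

```python
def segments(labels):
    """Collapse a frame-wise label sequence into its ordered segment labels."""
    return [lab for i, lab in enumerate(labels) if i == 0 or lab != labels[i - 1]]

def levenshtein(a, b):
    # Standard dynamic-programming edit distance between two sequences.
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[-1]

def edit_score(pred, gt):
    """Segmental Edit Score in [0, 100]: penalizes spurious segment
    fragmentation while ignoring exact frame boundaries."""
    p, g = segments(pred), segments(gt)
    return 100.0 * (1.0 - levenshtein(p, g) / max(len(p), len(g)))

# A jittery prediction with one extra spurious segment scores below 100
# even though 9 of 10 frames are correct.
gt = ["G1"] * 5 + ["G2"] * 5
pred = ["G1"] * 4 + ["G3"] + ["G2"] * 5
```

This is why a temporally smooth prediction can have a much higher Edit Score than a frame-accurate but fragmented one.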
With multi-modal learning to capture the complementary information of visual and motion data, improved results are obtained, with LC-SC-CRF~\\cite{lea2016learning} outperforming six of the uni-modal methods on accuracy and Fusion-KV~\\cite{qin2020temporal} outperforming all of them. Importantly, our MRG-Net achieves the highest accuracy of 87.9\\% (with the lowest std. of 4.2\\%) and the highest Edit Score of 89.3 (with the lowest std. of 5.2) among the multi-modal methods, demonstrating the superiority of our method in enabling dynamic interactions for modeling the inherent relations of multiple input sources with graph learning.\n\nFor the knot typing task, which is more complex than suturing yet less validated in previous works, we list the results in Table~\\ref{tab_knot}. The performances of the current state-of-the-art uni-modal and multi-modal methods are referenced from the benchmark in~\\cite{ahmidi2017dataset}.\nIt can be observed that the traditional multi-modal method (MsM-CRF), without sufficient integration of the visual and kinematics information in this complex task, even obtained worse performance than the pure video based and pure kinematics based methods.\nLeveraging our proposed relational graph learning method to interact the visual and kinematics embeddings in the latent space, a high recognition accuracy of 88.1\\% can be achieved on this task.\n\nFor qualitative results, Fig.~\\ref{fig3} illustrates the visualization results on both the suturing and knot typing tasks in the form of color-coded ribbons, demonstrating the temporal consistency and smoothness of the surgical gesture predictions leveraging the high-quality multi-modal representations.\n\n\n\n\\subsection{Ablation Analysis on Our Method}\n\n\\begin{table}[t]\n\\begin{center}\n\\caption{Ablation study on key components of our method using the same backbone on JIGSAWS Suturing dataset.}\n\\label{tab2}\n\\scalebox{1}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Methods} & 
\\multicolumn{2}{|c|}{Input data} & \\multirow{2}{*}{~Accuracy~}& \\multirow{2}{*}{Edit Score} \\\\\n\\cline{2-3}\n~ & Kin & Vid &&\\\\\n\\hline\nPure-Vis & & \\checkmark & 81.7 $\\pm$ 6.7 & 86.5 $\\pm$ 6.9\\\\\nPure-Kin & \\checkmark & & 82.6 $\\pm$ 6.5 & 86.6 $\\pm$ 7.5\\\\\nTCN-KV (w\/o split) & \\checkmark & \\checkmark & 86.1 $\\pm$ 5.6 & 85.3 $\\pm$ 7.1\\\\\nTCN-KV & \\checkmark & \\checkmark & 86.2 $\\pm$ 5.4 & 86.1 $\\pm$ 6.4\\\\\nGCN-KV & \\checkmark & \\checkmark & 86.8 $\\pm$ 4.9 & 87.4 $\\pm$ 6.5\\\\\n\\bfseries MRG-Net & \\checkmark & \\checkmark & \\bfseries 87.9 $\\pm$ 4.2 & \\bfseries 89.3 $\\pm$ 5.2\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[t]\n\\centering\n\\vspace{-6mm}\n\\includegraphics[width=0.32\\textwidth]{figures\/exp.pdf}\n\\vspace{-2mm}\n\\caption{Multi-modal feature embeddings before (left) and after (right) relational graph message propagation (blue: vision, red: left kinematics, pink: right kinematics). Best viewed in color.}\n\\label{fig4}\n\\vspace{-7mm}\n\\end{figure}\n\nTo validate the contribution of each key component of our proposed MRG-Net,\nTable~\\ref{tab2} lists the results of five ablation studies implemented with our own backbone on the suturing task for direct comparison:\n1) Pure-Vis: uni-modal, using visual data only;\n2) Pure-Kin: uni-modal, using kinematics only;\n3) TCN-KV (w\/o split): merging video and kinematics (without splitting the left and right arms) with TCN;\n4) TCN-KV: merging video and kinematics (splitting left\/right arms) with TCN;\n5) GCN-KV: multi-modal learning with a plain GCN without multi-relations;\nand finally our proposed multi-relational MRG-Net.\n\nWe see that fusing visual and kinematics features in the latent space can provide richer knowledge for achieving higher performance, even using simple concatenation of representations in temporal convolutional networks.\nNote that splitting the left and right kinematics data yields better results than treating both kinematics as a whole 
(comparing the 3rd\/4th rows with TCN), which also reveals the different information contained in the left and right ``hands\" of robotic systems.\nMoreover, our graph learning for interactive multi-modal message passing brings further improvement over TCN-based fusion.\nWhen the multiple relation types designed from domain knowledge are further modeled, the gesture recognition performance gets higher still, which confirms the significance of considering the ``edges\" between ``nodes\" diversely, as they incorporate distinct types of relations among the various information sources.\n\nIn addition, we analyze the detailed accuracy across gesture categories, as shown in Fig.~\\ref{fig2}.\nWe notice a large variance in the results, with the highest accuracy reaching 93\\% (G1 ``reaching for needle with right hand\") while the lowest is less than 10\\% (G10 ``loosening more suture\").\nThe performance imbalance may still be due to the large variance in gesture frequency and sample numbers, which reflects challenges in this recognition task that remain to be conquered in future research.\nBesides, we see that our relational graph multi-modal learning consistently outperforms Pure-Vis and Pure-Kin by a large margin (especially for G9 ``using right hand to help tighten suture\", with strong visual\/motion relationships), demonstrating the stable effectiveness of our method.\nLast but not least, Fig.~\\ref{fig4} visualizes the node features from the MRG-Net learning process with t-SNE~\\cite{maaten2008visualizing}, where the left and right panels embed the sets $\\{s_t,k_t^l,k_t^r\\}$ and $\\{\\tilde{s}_t, \\tilde{k}_t^l, \\tilde{k}_t^r\\}$, respectively, for observing feature clusters before and after the multi-relational graph updates.\nIt clearly shows that the multi-modal features are harmoniously fused through interactive message propagation and aggregation.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/peg_transfer_demo.pdf}\n \\vspace{-3mm}\n \\caption{(a) Gestures of peg 
transfer, (b) Color-coded ribbon illustration of surgical gesture recognition on peg transfer task with ground truth (top) and our results (bottom).}\n \\label{fig_peg}\n \\vspace{-5mm}\n\\end{figure}\n\n\\subsection{Experiment on In-house dVRK Dataset from Two Centers}\n\nTo further validate our method, we have collected robotic multi-modal datasets on dVRK platforms at two centers, CUHK and JHU. The data collection conditions of the two datasets had variations in hand-eye calibration, illumination and operating location, reflecting the complications of real-world practice.\nBoth kinematics sequences and camera videos were recorded and synchronized.\n\nTo build the in-house dVRK datasets, we experimented on the peg transfer task (see Fig.~\\ref{fig_peg}(a)), which is one of the most popular tasks in the Fundamentals of Laparoscopic Surgery~\\cite{ritter2007design} and is widely adopted for surgical skill training~\\cite{joseph2010chopstick}. Specifically, we defined and manually annotated five different gestures for peg transfer:\nA1: Idle (no action performed);\nA2: Reach for peg (with left hand);\nA3: Lift peg (with left hand);\nA4: Exchange (transfer the peg to right hand);\nA5: Place peg (with right hand).\nThe dataset consists of 24 sequences, with 12 sequences each from CUHK and JHU. The duration of the sequences is within the range of 20-60 seconds, due to different settings for transferring the peg.\nWithin each site, all the operation records were performed by the same user, who is familiar with using the dVRK platform. 
The kinematics data include the position\/orientation of the end-effector and the opening angle of the gripper; meanwhile, videos are recorded synchronously, and both streams are down-sampled to 10~Hz in pre-processing.\n\nConsidering the different conditions in data acquisition, such as the hand-eye settings of the dVRK systems and the appearance of the peg transfer boards, we individually trained and tested models for each dataset, splitting each dataset to perform 3-fold cross-validation (8 sequences for training and 4 for testing).\nWe adopted the same evaluation metrics as for JIGSAWS (i.e., accuracy and Edit Score).\nThe results are listed in Table~\\ref{tab_peg_cuhk} and Table~\\ref{tab_peg_jhu}, in which Baseline denotes the standard 2D CNN backbone used in MRG-Net (ResNet-18), while Pure-Vis and Pure-Kin represent the same configurations as in Table~\\ref{tab2}, adopting TCN and LSTM.\n\n\nIt can be observed that, on both datasets, compared to the 2D-based Baseline, the Pure-Vis and Pure-Kin methods obtain higher accuracies and Edit Scores by leveraging the temporal information in the sequential data.\nOn both datasets, our proposed MRG-Net achieves the highest accuracy and Edit Score, consistently outperforming the Baseline, Pure-Vis and Pure-Kin methods by a notable margin. The Edit Score reaches as high as 98.7\\% on the CUHK dataset and 96.4\\% on the JHU dataset, which reflects the good smoothness and stability of the predictions on dVRK data. In addition, we notice that the model performance on the CUHK dataset is overall slightly higher than that on the JHU dataset.\nWe attribute this to the variation of task duration between the two sites. The data recorded at JHU present a larger diversity in task completion speed than the CUHK data, and thus may be more challenging for recognition. 
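As a note on reproducibility, the segment-level Edit Score used above (a normalized Levenshtein distance between the predicted and ground-truth sequences of gesture segments, as commonly defined for JIGSAWS-style evaluation) can be sketched as follows; the label sequences in the example are illustrative, not taken from the datasets:

```python
from itertools import groupby

def segments(frame_labels):
    # Collapse frame-wise labels into the ordered sequence of gesture segments.
    return [g for g, _ in groupby(frame_labels)]

def levenshtein(a, b):
    # Standard dynamic-programming edit distance between two segment sequences.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1,                            # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))   # substitution
    return d[-1][-1]

def edit_score(pred_frames, gt_frames):
    # Edit Score in percent: 100 * (1 - normalized edit distance), floored at 0.
    p, g = segments(pred_frames), segments(gt_frames)
    return max(0.0, (1.0 - levenshtein(p, g) / max(len(p), len(g))) * 100.0)

# Illustrative frame-wise labels: the prediction fragments gesture A2,
# yielding segments [A1, A2, A1, A2, A3] vs. ground truth [A1, A2, A3].
gt = ["A1", "A1", "A2", "A2", "A2", "A3", "A3"]
pred = ["A1", "A1", "A2", "A1", "A2", "A3", "A3"]
print(edit_score(pred, gt))  # 60.0
```

Because the metric operates on collapsed segments, it penalizes over-segmentation (a flickering prediction) while being insensitive to small boundary shifts, which is why it complements frame-wise accuracy here.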
It will be interesting and valuable to further investigate model behavior differences between two datasets in our future work.\n\n\n\n\n\n\\begin{table}[t]\n\\begin{center}\n\\caption{Results of different methods on Peg Transfer dataset in site CUHK for gesture recognition.}\\label{tab_peg_cuhk}\n\\scalebox{1}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{|c|}{Input data} & \\multirow{2}{*}{~Accuracy~}& \\multirow{2}{*}{Edit Score} \\\\\n\\cline{2-3}\n~ & Kin & Vid &&\\\\\n\\hline\nBaseline(ResNet-18) & & \\checkmark & 80.7 $\\pm$ 7.4 & 35.1 $\\pm$ 8.6\\\\\nPure-Vis & & \\checkmark & 88.9 $\\pm$ 2.8 & 96.7 $\\pm$ 3.9\\\\\nPure-Kin & \\checkmark & & 89.2 $\\pm$ 2.5 & 95.6 $\\pm$ 3.7\\\\\n\\bfseries MRG-Net (Ours) & \\checkmark & \\checkmark &\\bfseries 91.0 $\\pm$ 2.1 &\\bfseries 98.7 $\\pm$ 3.4\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\vspace{-2mm}\n\\end{table}\n\n\\begin{table}[t]\n\\begin{center}\n\\caption{Results of different methods on Peg Transfer dataset in site JHU for gesture recognition.}\\label{tab_peg_jhu}\n\\scalebox{1}{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{Methods} & \\multicolumn{2}{|c|}{Input data} & \\multirow{2}{*}{~Accuracy~}& \\multirow{2}{*}{Edit Score} \\\\\n\\cline{2-3}\n~ & Kin & Vid &&\\\\\n\\hline\nBaseline(ResNet-18) & & \\checkmark & 78.5 $\\pm$ 8.2 & 21.3 $\\pm$ 9.8\\\\\nPure-Vis & & \\checkmark & 83.0 $\\pm$ 3.6 & 95.1 $\\pm$ 4.2\\\\\nPure-Kin & \\checkmark & & 85.1 $\\pm$ 3.4 & 95.5 $\\pm$ 3.9\\\\\n\\bfseries MRG-Net (Ours) & \\checkmark & \\checkmark &\\bfseries 87.3 $\\pm$ 2.9 &\\bfseries 96.4 $\\pm$ 3.6\\\\\n\\hline\n\\end{tabular}\n}\n\\end{center}\n\\vspace{-7mm}\n\\end{table}\n\n\\section{Conclusion and Future Work}\n\\label{CONCLUSIONS}\t\nThis paper presents a novel online multi-modal graph learning method to dynamically integrate complementary information in video and kinematics data from robotic systems, to achieve accurate surgical gesture recognition. 
Multi-relational representation aggregation is achieved through a purposely designed directed graph that captures the underlying\njoint knowledge\nbetween the visual scenes and kinematics motions.\nThe effectiveness of our method is validated with state-of-the-art performance on the public JIGSAWS dataset on the two tasks of suturing and knot tying. Meanwhile, we investigate the significance of each component in our network by conducting ablation studies on the JIGSAWS suturing dataset.\nFurthermore, the proposed method is validated on our collected in-house dVRK datasets, shedding light on the general efficacy of our approach.\n\n\nIn our future work, we shall explore\nhow to resolve the data variance and domain gap due to the different acquisition environments and hardware platforms of our two in-house datasets. Potentially, we will design a 6-DOF transformer (a trainable homogeneous mapping) to uniformly align kinematics data from different platforms to a common feature space, and rely on optical flow to tackle the visual gap among different environments.\nWith the help of these methods, we can improve the generalization ability of our method, so as to make full use of more robotic surgery datasets and achieve a cross-platform training and testing scheme. Moreover, we will investigate how to extract the multi-modal embeddings with unsupervised learning schemes in order to reduce the annotation cost.\nWe will also apply the developed visual-kinematics based surgical gesture recognition to downstream scenarios such as sub-task automation for robotic surgery.\n\n\\newpage\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\n\nComparative studies of the scaling relations in clusters of galaxies reveal\nstrong deviations of the observed relations from predictions based on\nself-similar collapse, e.g. 
the observations show a steeper $L_x-T$\nrelation than predicted by the self-similar laws (Kaiser 1986).\nThese deviations are thought to be best characterized by the\ninjection of energy (preheating) into the gas before clusters collapse\n(Kaiser 1991; Evrard \\& Henry 1991). \nRecently, an analysis of a large compilation of entropy profiles\nof groups and clusters of galaxies also\nrequired much larger entropy levels at $r_{500}$ than previously thought\n(Finoguenov et al. 2002) and led to replacing the concept of the entropy floor with\nan entropy ramp at $0.1r_{200}$ (Ponman et al. 2003). Reproduction of these\nresults, both analytically and numerically, strongly supports the scenario of\nDos Santos \\& Dore (2002), where an initial adiabatic state of the infalling\ngas is further modified by the accretion shock (Voit \\& Ponman 2003). As\nsupporting evidence for the latter, Ponman et al. (2003) noticed a\nself-similarity in the entropy profiles, once scaled to $T^{0.65}$. Some\nXMM-Newton observations are consistent with this result (Pratt \\& Arnaud\n2003). A major change introduced by these studies is that groups of galaxies\ncan again be viewed as scaled-down versions of clusters, yet the scaling\nitself is modified. Other evidence for the departure of groups from the\ntrends seen in clusters, such as the slope of the $L-T$ relation, has been\nrecently refuted by Osmond \\& Ponman (2004).\n\nThe idea of this {\\it contribution} is to check the consistency between the\ndata and both the concept and the level of the modified entropy scaling.\nWhile we give an overview of the results here, the details of the data\nanalysis can be found in Zhang et al. (2004); Finoguenov et al. (2004b,\nand in prep.).\n\nFor our study we have selected 14 groups in Mulchaey et al. (2003) at low\nredshift, together with samples of nearby hot clusters and of distant X-ray\nluminous clusters at higher redshift. Our main findings are summarized below. 
The level of the\nobserved flattening is around 100 keV cm$^2$ and could be related to the\ncooling threshold discussed in Voit et al. (2003). At the same radii, the\npressure profile in groups is on average flatter than in clusters, which\nappears as, on average, a higher pressure at the outskirts of groups than\nthe model predicts.\n\n(3) A comparison between the data and the entropy predicted from the\nevolution of shock heating in the $\\Lambda$CDM Universe can explain the\nentropy of the gas. The radial behavior of the entropy is\nflatter than the index of $1.1-1.2$ predicted for hierarchical\ncluster growth with no feedback effects (Tozzi \\& Norman 2001; Voit\n2004). This result implies that either the growth of clusters has been\nslower or feedback effects are significant. Slower accretion rates support the\nsuggestion of a dark energy dominated Universe (Schuecker et al. 2003).\n\n(4) At high redshift, the typical pressure of the gas is found to be\nin agreement with the predicted evolution, once a consistent\ndefinition of the mean temperature is assumed for the scaling. \nThe effect of the energy band on the determination of the mean temperatures has\nbeen discussed in Zhang et al. 
(2004) in application to the REFLEX-DXL\nsample and by Mazzotta et al. (2004) in application to clusters in general.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Introduction}\nThe control of the magnetization of a system by an external electric field, known as the magneto-electric effect,\nhas been widely investigated in recent years,\nexperimentally as well as theoretically,\ndue to its potential application in spintronics\n\\cite{Prinz1660,Brovko_2014,Zhang2009c}.\nThe magneto-electric effect was investigated in particular for bulk magnetic compounds with non-collinear magnetic structures \\cite{Dzyaloshinsky, Moriya,PhysRevLett.95.057205}, magnetic semiconductors \\cite{Chiba2008,Yamada1065} and multi-ferroics \\cite{Lottermoser2004,PhysRevLett.108.237201,PhysRevLett.103.257601,PhysRevB.85.134428}.\nWithin these studies, \nthe modification of the magnetic properties by an external electric field has been associated with various \nelectronic mechanisms, such as a shift of the Fermi level or a change in the charge carrier density. \n\nRecent studies have reported that an external electric\n field may affect the physical properties of layered\n systems in a very pronounced way \\cite{PhysRevLett.120.157203, Obinata2015}.\nFor example, for thin films of Pd, it was shown that the\n electric field induces a phase transition from the para- to the ferromagnetic state.\nThis finding could be explained by the Stoner\n instability: the applied electric field\n changes the occupation of the electronic states and thereby shifts the Fermi level to a position with a high density of states (DOS).\nIn the case of magnetic systems, an electric field changes the magnetic properties primarily\nthrough its influence on the spin polarization of the valence electrons. 
\nSuch a manipulation of the magnetic state by the electric field can lead to interesting and important effects concerning possible applications \\cite{Hsu2017,Schott2017,Yang2018}. \nPerforming first principles calculations, it was demonstrated that a well-defined change in the magnetic moment can be observed in the case of a ferromagnetic free-standing thin Fe, Ni or Co film, for which the \n magnetic moments show a linear dependence on the strength of the external electric field \\cite{PhysRevLett.101.137201}. \n In this case the \nelectron populations in the different spin channels are varied, and thus the balance of majority- and minority-spin electrons is distorted, leading in turn to a change of the spin magnetic moment in the system. Apart from the spin magnetic moment, many other magnetic properties may be controlled by an applied electric field, as for example the orbital moment and its anisotropy as well as the magnetic anisotropy energy \\cite{PhysRevLett.101.137201,PhysRevLett.102.187201,APL_Chiba}. \n\nIn the case of the Co\/Pt bilayer system,\nit was demonstrated \nby means of anomalous Hall effect measurements \nthat the Curie temperature of the Co layer can be controlled by an electric field \\cite{Chiba2011}. \nFor the Curie temperature of bulk 3d transition metal alloys, a Slater-Pauling like behavior\nwas found from first principles calculations \\cite{Takahashi_2007}. \nHowever, experimental results on thin films showed that the Curie temperature increases with an increasing number of valence electrons and does not follow the Slater-Pauling like behavior \\cite{Chiba2011}.\nThis finding indicates that in the case of thin films,\nwhen compared to the bulk situation,\n other mechanisms can play an important role for the magnetic properties in the presence of an electric field. 
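The linear field dependence reported for free-standing films can be illustrated with a back-of-the-envelope screening estimate: the surface charge induced by the field over one surface unit cell is, in this simple picture, spin-polarized according to the Fermi-level DOS, so the moment change per surface atom is roughly the polarization times the screening charge. A minimal numeric sketch follows; the Fermi-level DOS values `n_up` and `n_dn` are hypothetical illustration values, not results of this work:

```python
# Toy estimate of the field-induced change of a surface spin moment:
# the screening charge dN induced by a field E over one surface unit cell
# is assumed to be spin-polarized like the states at the Fermi level.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C

def delta_moment(e_field, area, n_up, n_dn):
    """Spin-moment change (Bohr magnetons per surface atom) for a field
    e_field (V/m) over a surface cell of the given area (m^2); n_up, n_dn
    are majority/minority DOS at the Fermi level (illustrative values)."""
    d_n = EPS0 * e_field * area / E_CHARGE   # screening charge in units of e
    pol = (n_up - n_dn) / (n_up + n_dn)      # Fermi-level spin polarization
    return pol * d_n                         # in units of mu_B

# 7 V/nm field over one Pd(001) surface cell (a0 = 3.89 Angstrom); the DOS
# values below are purely illustrative.
a0 = 3.89e-10
dm = delta_moment(7e9, a0 ** 2, n_up=0.4, n_dn=1.2)
print(f"{dm:+.4f} mu_B per surface atom")
```

The estimate shows why the induced change is small (a few hundredths of a Bohr magneton at several V/nm) and why it is linear in the field as long as the Fermi-level DOS itself is not strongly modified.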
\n\nParamagnetic metals such as Pt and Pd, which are close\nto the Stoner instability, acquire substantial induced moments due to the proximity effect when deposited on magnetic substrate layers. \nIn the case of Pd deposited on the Pt\/Co bilayer system,\n it was demonstrated experimentally and theoretically that the induced magnetic moment of the \n Pd layer can be controlled \n by an applied electric field \\cite{Obinata2015}. \n The inter-relation between the influence of an\n electric field on the magnetic state and the electronic structure of Pt deposited on a magnetic \n substrate was investigated \n in a recent work by exploiting \ncomponent-specific x-ray absorption spectroscopy \n(XAS) together with\n x-ray magnetic circular dichroism (XMCD) \\cite{PhysRevLett.120.157203}.\n\n\nXMCD is one of the most powerful probes for investigating the magnetization of layered systems\n in an element-resolved way \\cite{Okabayashi2018}. \n For a magnetized sample, the XMCD spectrum gives\n the difference in absorption\n between left- and right-circularly polarized x-rays. \n XMCD spectra are often analyzed on the basis of the XMCD sum rules, which link the integrals of the XAS and XMCD spectra to the spin and orbital magnetic moments of the absorbing atom\n\\cite{PhysRevLett.68.1943, PhysRevLett.70.694,PhysRevB.47.597,PhysRevB.51.1282, PhysRevB.66.094413}. \n\nMotivated by a recent experimental XMCD study by \n Yamada et al.\\ \\cite{PhysRevLett.120.157203}, we investigated the electronic and magnetic properties of the Pt layers in the surface film system Pd(001)\/Co\/Pt\n in the presence of an external electric field\n by means of first principles calculations. \n In this way, we investigated \n how the electric field influences\n the electronic states, magnetic moments and \n XMCD spectra of the Pt layers, as well as the magnetic anisotropy of the considered systems. 
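As a rough illustration of the sum-rule analysis mentioned above, the sketch below applies one common form of the orbital and spin sum rules for $L_{2,3}$ edges (neglecting the magnetic dipole term $\langle T_z \rangle$ and approximating the isotropic XAS by the polarization-averaged spectrum) to synthetic spectra. The edge positions are those of Pt, but the line shapes, amplitudes and hole number `n_h` are placeholders, not data from this work:

```python
import numpy as np

def trap(y, x):
    # Simple trapezoidal integration (avoids NumPy version differences
    # between np.trapz and np.trapezoid).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def sum_rule_moments(energy, xas, xmcd, e_split, n_h):
    """Orbital and effective spin moments (in mu_B) from L2,3 spectra.
    energy: photon energies (eV); xas: polarization-averaged absorption;
    xmcd: dichroic signal; e_split: energy separating L3 (below) from
    L2 (above); n_h: number of d holes. <T_z> is neglected."""
    mask_l3 = energy < e_split
    p = trap(xmcd[mask_l3], energy[mask_l3])  # XMCD integral over L3
    q = trap(xmcd, energy)                    # XMCD integral over L3 + L2
    r = trap(xas, energy)                     # XAS integral over L3 + L2
    m_orb = -4.0 * q * n_h / (3.0 * r)
    m_spin = -(6.0 * p - 4.0 * q) * n_h / r
    return m_orb, m_spin

# Synthetic spectra: unit-area Lorentzian white lines at the Pt L3 (11564 eV)
# and L2 (13273 eV) edges with a small, opposite dichroism.
energy = np.linspace(11300.0, 13500.0, 8000)
lor = lambda e0, w: (w / np.pi) / ((energy - e0) ** 2 + w ** 2)
xas = 2.0 * lor(11564.0, 3.0) + 1.0 * lor(13273.0, 3.0)
xmcd = -0.05 * lor(11564.0, 3.0) + 0.04 * lor(13273.0, 3.0)
m_orb, m_spin = sum_rule_moments(energy, xas, xmcd, e_split=12400.0, n_h=1.75)
print(m_orb, m_spin)
```

Different conventions for the isotropic XAS normalization exist in the literature, so the prefactors should be checked against the sum-rule references cited above before quantitative use.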
\n\nThe paper is organized as follows.\n In Sec.~\\ref{sec2}, the computational methods used are briefly sketched while in Sec.~\\ref{sec3} the results are presented and discussed. Finally, in Sec.~\\ref{sec4} we summarize our results.\n\\section{Computational details \\label{sec2}}\nAll calculations were performed within the \nframework of density functional theory, \nrelying on the local spin-density approximation (LSDA).\n For the exchange correlation potential the parametrization of Vosko, Wilk and Nusair was used \\cite{vosko1980}. \n The electronic structure is described \n on the basis of the Dirac equation, accounting for all\n relativistic effects coherently this way.\nElectronic states were represented \nby means of the corresponding Green function \ncalculated using the \nspin-polarized Korringa-Kohn-Rostoker (KKR) Green function formalism as implemented in the SPR-KKR code \\cite{Ebert_2011, Ebert_prog, PhysRevB.52.8807}. \nThe potentials were treated on the level of the atomic sphere approximation (ASA) and \nfor the self-consistent calculations \nan angular momentum cut-off of $l_{\\rm max}=3$ was used. \nAll necessary energy integrations have been done\nby sampling $32$ points on a semicircle contour in the upper complex energy semi-plane. Furthermore, \nthe $\\mathbf{k}$-space integration was done using $750$ points in the\nirreducible part of two dimensional Brillouin zone. \n\nWe investigated the Pd(001)\/Co$_{n}$\/Pt$_{m}$ thin film surface system, where $n$ and $m$ denote the number of Co and Pt layers, respectively. The first considered model system\n consisted of three Pd layers, $n=2$ or $5$ Co layers, $m=2$ Pt layers and three layers of empty spheres embedded between a semi-infinite Pd substrate and a semi-infinite vacuum region. 
For the second model system\n the number of Pt layers was varied with the system consisting of two Pd, $n=5$ Co layers, $m=1$, $4$ Pt layers and three layers of empty spheres embedded between the \n semi-infinite Pd and vacuum regions. For Pd, Co, Pt and empty layers ideal epitaxial growth was assumed on a fcc(001) textured substrate with the experimentally in-plane lattice constant of Pd, $a_{0}=3.89$ \\AA. Structural relaxations were neglected in all cases. \n\nFrom the obtained self-consistent potentials the X-ray absorption coefficients $\\mu_{\\lambda} (\\omega)$ for the photon energy $\\hbar\\omega$ and polarization $\\lambda$ were calculated using the SPR-KKR Green function method on the basis of Fermi's Golden rule \\cite{Ebert_1996, Ebert_2011}. The corresponding XMCD signal,\n\\begin{equation}\n\\Delta\\mu(\\omega)= \\frac{1}{2}\n \\big(\\mu_{+} (\\omega) - \\mu_{-} (\\omega) \\big) \\; ,\n\\end{equation}\nis defined as the difference in the absorption for left and right\ncircularly polarized radiation.\nThe broadening of the\nexperimental spectra \nwas simulated by a Lorentzian broadening function\nwith a width parameter of 1 eV. In addition to the XMCD spectra, the magnetic anisotropy (MAE) was obtained by means of magnetic torque calculations \\cite{PhysRevB.74.144411, Bornemann2007}.\n\nThe effect of a homogeneous external electric field was modeled by a periodic array of point charges\n in the vacuum region that behave essentially like a charged capacitor plate. 
\nIn the present calculations the array of point charges, i.e.\\ the capacitor plate, \n was placed in the last vacuum layer.\n This set-up leads to a homogeneous electric field of strength\n\\begin{equation}\n E = \\frac{Q}{a_{0}^{2}\\epsilon_{0}},\n\\end{equation}\nwhere $Q$ is the charge of the capacitor in units of the electron's charge,\n$\\epsilon_{0}$ is the permittivity of vacuum and $a_{0}^{2}$ is the area of\nthe unit cell for the Pd(001) plane.\n Here, the applied electric field is perpendicular to the surface and for the positively charged capacitor plate it points \n from the vacuum towards the surface, \n thereby increasing the spill-out of the electrons\n from the Pt layers into the vacuum\n with increasing electric field strength.\n\n\n\\section{Results \\label{sec3}}\n\n\n\\subsection{Variation of the thickness of the Co layer}\n\nTo investigate the impact of an external electric field on the electronic structure of the Pd(001)\/Co$_{n}$\/Pt$_{m}$ system, we focus first on Pd(001)\/Co$_{2}$\/Pt$_{2}$, corresponding essentially to a system composed of 0.5 nm Co and 0.4 nm Pt films,\nas studied recently in Ref. \\cite{PhysRevLett.120.157203}. These experiments have been accompanied by theoretical work which, however, considered in contrast to the present study a $(111)$-oriented surface.\n From the self-consistent calculations we obtained the spin magnetic moments of the layers as a function of the electric field. 
The spin-magnetic moments of the Co layers without an external electric field ($E = 0)$ are $m_{\\rm Co_{1}}=1.92\\, \\mu_{B}$ and $m_{\\rm Co_{2}}=1.89 \\, \\mu_{B}$ for\n Pd(001)\/Co$_{2}$\/Pt$_{2}$, where the indices of the Co layers start at the Pd\/Co interface.\n \nTo see an impact of the thickness of the ferromagnetic film on the\nspin and orbital polarization of the Pt film, the calculations\nhave been performed also for Pd(001)\/Co$_{5}$\/Pt$_{2}$.\nIn this case, the magnetic moments for four Co layers (for $E = 0$) are very\nclose to each other, $m_{\\rm Co_{2}}\\sim m_{\\rm Co_{3}}\\sim m_{\\rm\n Co_{4}}\\sim m_{\\rm Co_{5}}\\sim 1.8\\,\\mu_{B} $, while for the \nCo layer at the Pd\/Co interface the magnetic moment is the largest one, $m_{\\rm Co_{1}}\\sim1.9\\,\\mu_{B}$. \nFor both systems the change of the magnetic moment of the Co layers \ninduced by the electric field\nis negligible. \n\nIn the Pt layers induced spin moments are formed due to the proximity\neffect caused by the Co layer with the value of the induced moment being\nlargest for the Pt layer at the Pt\/Co interface. In the presence of the\nexternal electric field the magnetic moments in the Pt film are\nmodified. This modification is \ndepending on the thickness of the Co film, as can be seen in \nFig.\\ \\ref{fig_momPt} \nshowing the sum of the spin magnetic moments \nof the Pt layers, $\\sum\\limits_{\\rm Pt} \\rm m_{\\rm Pt}$, as a function of the electric field for both considered systems. 
\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_moments_Pt.pdf}%\n\\caption{Calculated sum of the spin magnetic moment of\nthe Pt layers,\n$\\sum\\limits_{\\rm Pt} \\rm m_{\\rm Pt}$, as a function of the external electric field $ E$ for Pd(001)\/Co$_{2}$\/Pt$_{2}$ (squares) and Pd(001)\/Co$_{5}$\/Pt$_{2}$ (circles).}\n\\label{fig_momPt}\n\\end{figure}\nNevertheless, for both systems the magnetic moment decreases with increasing positive electric field in spite of the different number of Co layers. Moreover, in contrast to calculations for free standing ferromagnetic thin films \\cite{PhysRevLett.101.137201}, the Pt spin-magnetic moment\n in the present case does not vary linearly with the field strength.\n\n\nIn order to understand the impact of an external electric field on the electronic structure of the Pt layer we calculated the density of states (DOS) for the systems in the presence of the electric field. \nFigure \\ref{fig_dosPt}\n shows for various electric field\n strengths\nthe spin resolved DOS\nprojected on to $s$, $p$ and $d$ states\nfor the topmost Pt layer in \n Pd(001)\/Co$_{2}$\/Pt$_{2}$. \n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_dos.pdf}%\n\\caption{Calculated spin-polarized density of states, \n$n^{\\uparrow (\\downarrow)}(\\varepsilon, \\pm E)$, \nprojected to $s$, $p$ and $d$ states \nfor the topmost Pt layer in \nPd(001)\/Co$_{2}$\/Pt$_{2}$ for the selected electric fields $E = \\pm 7$~V\/nm\nindicated by $\\pm E$.}\n\\label{fig_dosPt}\n\\end{figure}\nThe applied electric field strengths have the value $\\pm 7$~V\/nm denoted as $+ E$ and $- E$ in the following. One can see, that the external electric field slightly modifies the electronic states of the Pt layer. \nA positive electric field shifts the $s$, $p$ as well as $d$ states of Pt down, while a negative field shifts the states up in energy. 
Moreover, one can see that these shifts increase with the energy of the electronic states as they are more affected by the electric field due to their weaker localization. Due to the difference of the DOS for the two spin channels, these shifts lead to a change of the magnetic moments, which depend on the sign of the electric field. It should be noted that in contrast to the work on a free standing film in Ref.\\ \\onlinecite{PhysRevLett.120.157203}, the Fermi level is fixed in our case as we deal with a half-infinite substrate. Accordingly, the value of the Fermi energy is that of the Pd substrate for all applied electric fields. Another effect of the electric field seen in\n Fig.\\ \\ref{fig_dosPt} \n is the change of the amplitude of the DOS due to the change of hybridization of the electronic states of Pt and Co layers, or as it was pointed out in Ref.\\ \\onlinecite{PhysRevLett.120.157203}, the hybridization of the $sp$-states and the $d$-states of Pt. As the hybridization is spin dependent, this also results in a change of the magnetic moments induced by the electric field.\n\n\nAs discussed in the literature \\cite{PhysRevLett.120.157203}, the above-mentioned\n field-induced changes of the electronic structure and magnetic properties\ncan be probed in a detailed way \nusing XAS\/XMCD spectroscopy. 
Focusing here on the\nmagnetic properties of the Pt layers, the absorption spectra at the Pt $L_2$ and\n$L_3$ edges have been calculated both for Pd(001)\/Co$_{2}$\/Pt$_{2}$ and\nPd(001)\/Co$_{5}$\/Pt$_{2}$.\n The XAS and XMCD spectra without the influence\n of an external electric field\nare given in \nFig.\\ \\ref{fig_xas-xmcd_Pt2}, \nshowing only tiny differences between the two systems.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_xas_xmcd_Co2Pt2_Co5Pt2.pdf}\n\\caption{Calculated layer resolved XAS (top panel),\n $\\mu$, and XMCD (bottom panel),\n $\\Delta \\mu$, spectra at the L$_{2}$ and L$_{3}$ edges for the Pt layers in \n Pd(001)\/Co$_{2}$\/Pt$_{2}$ (solid line) and Pd(001)\/Co$_{5}$\/Pt$_{2}$ (dashed line).}\n\\label{fig_xas-xmcd_Pt2}\n\\end{figure}\n\nThe modifications of the XAS spectra \nfor these systems by\nthe electric field $\\pm E$ are represented\n in \n Fig.\\ \\ref{fig_xas-xmcd-E_Pt2}, \n showing the field-induced changes\n$\\mu(\\pm E)-\\mu(0)$ of the total XAS and $\\Delta \\mu(\\pm E)-\\Delta \\mu(0)$ of the XMCD signals for the Pt \nlayers in Pd(001)\/Co$_{2}$\/Pt$_{2}$ and Pd(001)\/Co$_{5}$\/Pt$_{2}$.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_xas_Co2Pt2_Co5Pt2.pdf}\\\\\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_xmcd_Co2Pt2_Co5Pt2.pdf}\\\\\n\\caption{Difference of the calculated layer resolved XAS (a) and XMCD (b) spectra of the Pt layers in the absence and presence of an electric field,\n $\\mu(\\pm E) -\\mu(0)$ and\n $\\Delta \\mu(\\pm E) -\\Delta \\mu(0)$, around the L$_{2}$ and L$_{3}$ edges in the case of the Pd(001)\/Co$_{2}$\/Pt$_{2}$ and Pd(001)\/Co$_{5}$\/Pt$_{2}$ systems.}\n\\label{fig_xas-xmcd-E_Pt2}\n\\end{figure}\nAlthough only tiny differences are found for both systems,\none can see that the changes are most pronounced\nat the L$_{3}$ edge in both cases.\nThis finding is in line with the\npreviously reported experimental results 
\\cite{PhysRevLett.120.157203}\nand can be explained as follows.\nAs an electric field will in particular shift \nelectronic states below or above the Fermi level,\n pronounced field induced changes have to be expected\nfirst of all at the absorption edges\nof the spectra.\nAs the Pt d-states in that energy region \nhave primarily $d_{5\/2}$-character\nand as the L$_{3}$ and L$_{2}$ spectra are dominated \nby their $d_{5\/2}$- and $d_{3\/2}$-contributions, \nrespectively, it follows that an\nelectric field has a much stronger impact for the \nL$_{3}$ than for the L$_{2}$ spectrum. \n\nThe modifications of XAS and XMCD spectra \ndepend directly on the\ndirection of the electric field as this determines the\ndirection of the field-induced shift of \nthe electronic states with respect to the \nFermi energy. \nDue to the screening of the electric field \nwith increasing distance from the\nsurface, the field-induced changes of the XAS and XMCD signals\nare most pronounced for\n the surface Pt layer and decrease towards the\ninterface. This behavior can be seen clearly in \nFig.\\ \\ref{fig_xas-xmcd-E_Pt2} \nshowing\nthe layer resolved results for \nPd(001)\/Co$_{2}$\/Pt$_{2}$. 
\nThe same trend\nis found also for Pd(001)\/Co$_{5}$\/Pt$_{2}$.\n\n\n\n\\subsection{Variation of the thickness of the Pt layer }\nIn order to investigate how the electric field effect changes with an increasing thickness of the Pt film, calculations have been performed for the Pd(001)\/Co$_{5}$\/Pt$_{m}$ system, where the thickness of the capping Pt layer was varied between $m=1$ and $m=4$.\nThe top panel of \nFig.\\ \\ref{fig_diff_DOS_Co5} \nshows the calculated spin moments of the individual layers in \nPd(001)\/Co$_{5}$\/Pt$_{1}$ (a) and Pd(001)\/Co$_{5}$\/Pt$_{4}$ \n for various electric fields.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_layer_res_mom.pdf}\n\\includegraphics[width=1.00\\columnwidth]{fig_DOS_Co5.pdf}%\n\\caption{Top panel: \nCalculated \nlayer resolved \nmagnetic spin moments of the individual atomic layers in \nPd(001)\/Co$_{5}$\/Pt$_{1}$ (a) and Pd(001)\/Co$_{5}$\/Pt$_{4}$ without electric field and in the presence of a positive and negative electric field. Bottom panel: Calculated difference for the minority and majority density of states \n of the interface Co layer \n between positive and negative electric field\nfor different thicknesses of the capping Pt layer.}\n\\label{fig_diff_DOS_Co5}\n\\end{figure}\nIn the case of Pd(001)\/Co$_{5}$\/Pt$_{1}$ the Pt spin moment is \n$m_{\\rm Pt_{1}}=0.22\\, \\mu_{B}$. For Pd(001)\/Co$_{5}$\/Pt$_{4}$ \n the induced moment of the Pt layers significantly decreases away from the Co interface while the Pt layer at the Co interface possesses the largest induced moment with \n$m_{\\rm Pt_{1}}=0.18\\, \\mu_{B}$\n coupled ferromagnetically to that of Co. The spin magnetic moment of the next Pt layer is \n$m_{\\rm Pt_{2}}=0.06\\, \\mu_{B}$ and is also ferromagnetically aligned to the Co moments. 
The remaining, very small magnetic moments of the third and fourth Pt layers are antiferromagnetically oriented with respect to the Co layers.\n\n\nThe Pt spin-magnetic moment in Pd(001)\/Co$_{5}$\/Pt$_{1}$ \n increases for a positive electric\nfield ($+ E$) while it \ndecreases for $- E$, with the values\n$0.23 \\, \\mu_{B}$ and \n$0.21 \\, \\mu_{B}$, respectively. \nThis dependence on the electric field is opposite to that of \nPd(001)\/Co$_{n}$\/Pt$_{2}$ \nshown in \nFig.\\ \\ref{fig_momPt}\nand can be attributed to the screening of the electric field, which becomes more important with increasing thickness of the Pt film. In particular, one can see rather strong field-induced changes of the spin magnetic moments of the interface and next-to-interface Co layers in Pd(001)\/Co$_{5}$\/Pt$_{1}$.\n In this case the spin moment of the Pt layer follows the changes of the spin moment of Co at the Co\/Pt interface. Obviously, the impact of the electric field on the Co spin moment significantly decreases with increasing thickness of the Pt film.\nThe bottom panel in \nFig.\\ \\ref{fig_diff_DOS_Co5} \nshows the difference in the density of states\nfor majority and minority spins\nof the topmost Co layer (Co$_{5}$) obtained for different electric fields. These field-induced changes in the DOS of the Co layers decrease when the thickness of the Pt film increases.\nThis screening effect is also seen in \nFig.\\ \\ref{fig_DOS_Pt}, \nwhich represents the field-induced DOS and spin density changes in the different Pt layers\n of Pd(001)\/Co$_{5}$\/Pt$_{4}$. \n \\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_DOS_Pt.pdf}\\\\\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_MDOS_Pt.pdf}\n\\caption{Top panel: Electric-field-induced change of the density of states\nin the different Pt layers in Pd(001)\/Co$_{5}$\/Pt$_{4}$. 
\nBottom panel: Electric-field-induced change of the difference $\\rm m_{\\rm spin} = n^\\uparrow(\\varepsilon) - n^\\downarrow(\\varepsilon)$ of the majority and minority DOS. }\n\\label{fig_DOS_Pt}\n\\end{figure}\nOne can see that the most pronounced DOS changes due to an\napplied electric field occur in the surface Pt layer, while the DOS modification in the deeper Pt layers and at the Pt\/Co interface is rather weak (see top panel of\n Fig.\\ \\ref{fig_DOS_Pt}).\nDespite this trend, the bottom panel of \nFig.\\ \\ref{fig_DOS_Pt} \nshows that the field-induced changes of the spin polarization (i.e.\\ m$_{spin}(\\varepsilon) = n^\\uparrow(\\varepsilon) - n^\\downarrow(\\varepsilon)$) have the same order of magnitude for all Pt layers. This can be attributed to the strongest field effect occurring for the surface Pt layer on the one hand, and the strongest proximity-induced spin moment occurring at the interface on the other hand. \n \nThus, one can conclude that, depending on the thickness of the Pt layer, the field-dependent changes of the induced spin moment can be dominated by different mechanisms, associated either with field-induced changes of the electronic structure and magnetic moments in the ferromagnetic sub-surface \nor in the non-magnetic surface parts\n of the system.\n\n\n\nNext, we discuss the influence of the electric field \non the XAS and XMCD\nspectra at the L$_{2}$- and L$_{3}$-edges of Pt in Pd(001)\/Co$_{5}$\/Pt$_{n}$\n with $n = 1$ and $4$ and its\ndependence on the\nthickness $n$ of the Pt film.\nFirst, we consider in \nFig.\\ \\ref{fig_xas-xmcd_Pt1-Pt4} (top panel) \nthe layer resolved XAS spectra calculated without including an external electric\nfield. 
\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_xas_xmcd_Co5Pt1-4.pdf}\n\\caption{Calculated layer resolved \nXAS (top panel) $\\mu$ and\nXMCD (bottom panel) $\\Delta \\mu$\n spectra at the \nL$_{2}$- and L$_{3}$-edges for the Pt layers in \n Pd(001)\/Co$_{5}$\/Pt$_{1}$ (left) and Pd(001)\/Co$_{5}$\/Pt$_{4}$ (right). } \n\\label{fig_xas-xmcd_Pt1-Pt4}\n\\end{figure}\nOne can see a weak dependence of the XAS spectra on the position of Pt layer in the \nPd(001)\/Co$_{5}$\/Pt$_{4}$ system. However,\nthe XMCD spectra shown in \nFig.\\ \\ref{fig_xas-xmcd_Pt1-Pt4} (bottom panel), \nexhibit a rather pronounced decrease, when going from the interface\n(Pt$_{1}$) to the surface (Pt$_{4}$) layer.\nNote that the XMCD \nsignal of the surface Pt layer even changes sign in line with the\nsign change for\n the induced spin moment in this layer \n(see discussion above). \nThe strongest XMCD signal occurs for the interface Pt layer reflecting the largest induced spin moment due to proximity to the ferromagnetically ordered Co layers. 
\n\n\nThe changes of the XAS and XMCD spectra of Pt in\nPd(001)\/Co$_{5}$\/Pt$_{1}$ and \nPd(001)\/Co$_{5}$\/Pt$_{4}$\nthat are caused by the\napplied electric field are presented in\n Figs.\\ \\ref{fig_diff_xmcd-E_Pt1-Pt4}, (a) and (b), respectively.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_xas_Co5Pt1-4.pdf}\\\\\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_xmcd_Co5Pt1-4.pdf} \n\\caption{Electric field induced change in\nthe XAS (a)\n and XMCD (b) spectra at the \nL$_{2}$ and L$_{3}$ edges \n for Pt in \nPd(001)\/Co$_{5}$\/Pt$_{1}$ and \nPd(001)\/Co$_{5}$\/Pt$_{4}$.}\n\\label{fig_diff_xmcd-E_Pt1-Pt4}\n\\end{figure}\nSimilar to the systems with \n$2$~ML of Pt, \none finds that the change of the \nspectra reverses its sign if the orientation of the electric field is reversed.\nIn addition, one can see \nthe asymmetry of these\nmodifications with respect to a change in the orientation of the electric field. \nThe most pronounced changes occur for the Pt L$_{3}$ edge signal,\nsimilar to the results obtained\nfor Pd(001)\/Co$_{2}$\/Pt$_{2}$. \nThe intensities of the layer-resolved changes of the XAS spectra for Pt in\nPd(001)\/Co$_{5}$\/Pt$_{4}$\ngradually decrease when going from the Pt surface to the interface layer. As discussed above, this\n can be attributed to the screening of the electric field in the surface region.\nHowever, the XMCD spectra change\nnon-monotonously towards the interface \nlayer as a consequence of\n a competition of the \ndecreasing electric field strength\nand the increasing impact of the neighboring Co layers controlling \nthe Pt spin magnetic moment via the proximity effect.\n\n\n\n\n\n\n \nIn addition to the impact of an \nelectric field on the XMCD spectra, \nits influence on the layer\nresolved MAE was investigated. 
\nFigure \\ref{fig_MAE-E_Pt1-Pt4}\nshows the layer resolved MAE of Pd(001)\/Co$_{5}$\/Pt$_{1}$ and\nPd(001)\/Co$_{5}$\/Pt$_{4}$, respectively,\n for no electric field present\n as well as for\n an applied electric field with positive and negative sign, respectively. \n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_layer_res_MAE_Pt1.pdf} \n\\includegraphics[width=1.00\\columnwidth]{fig_layer_res_MAE_Pt4.pdf}\n\\caption{Calculated layer-resolved magnetic anisotropy energy (MAE) of\nPd(001)\/Co$_{5}$\/Pt$_{1}$ (top panel) and\nPd(001)\/Co$_{5}$\/Pt$_{4}$ (bottom panel) \nfor no electric field present\n as well as for\n an applied electric field with positive and negative sign, respectively. \nThe results have been obtained from \nfully relativistic calculations \n(SOC $\\neq$ 0) and calculations\nwith the strength of the spin-orbit\n coupling in the Pt layers\nset to zero (SOC = 0).}\n\\label{fig_MAE-E_Pt1-Pt4}\n\\end{figure}\nThe definition for the MAE used implies an\nout-of-plane and in-plane anisotropy\nfor a positive or\nnegative, respectively, sign of the MAE. \n\n\nIn an earlier study \\cite{PhysRevLett.102.187201}\nit was already found that an electric field\nmay strongly affect the magneto-crystalline anisotropy of free-standing transition metal mono layers,\n as a \nresult of the distortion of the electronic structure\nby the applied electric field. \nThe authors point out that in these systems the \nelectric field breaks the z-reflection symmetry, \nlifting that way a degeneration in the \ncross-points of the energy bands via the\nfield-induced hybridization between the $d$ and $p$ orbitals that contribute to these states. 
\n\n\n\n\nAs Fig.\\ \\ref{fig_MAE-E_Pt1-Pt4} \nillustrates,\nan electric field also strongly \nmodifies the MAE for the systems \nconsidered here despite the lack of \nz-reflection symmetry for the field-free case.\nAs a reference, the figure also shows\nthe total and layer resolved MAE for\nPd(001)\/Co$_{5}$\/Pt$_{1}$ \nand Pd(001)\/Co$_{5}$\/Pt$_{4}$ \nfor zero-field conditions.\nAs one can see, Pd(001)\/Co$_{5}$\/Pt$_{1}$\nhas a total MAE corresponding to an\nin-plane anisotropy with \ndominating contributions from the Co layers\nin the middle of the Co film, \nwhile the positive contribution from the\ninterface Co\/Pt layer is rather small.\nPd(001)\/Co$_{5}$\/Pt$_{4}$, on the other hand, \n has out-of-plane\nanisotropy with its \n MAE dominated by \nthe contribution from the \nCo layer at the Co\/Pt interface.\nIt should be noted that a strong \ndependence of the MAE on the thickness \nof the over layer\nwas discussed already previously \nfor several systems\n\\cite{Beauvillain_JAP_1994, PhysRevLett.77.1805}. \nFigures \\ref{fig_diff_Co5_MAE-E_Pt1}\n and\n\\ref{fig_diff_Co5_MAE-E_Pt4} \nillustrate the contributions to the MAE from the\ninterface Co and Pt layers, \nrepresented as a function of the occupation of\nvalence band realized by artificially \nvarying the Fermi energy.\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_MAE_Pt1.pdf}\n\\caption{Top panel: Contribution to the magnetic anisotropy energy from the Co$_{5}$ and Pt layers\n (without SOC scaling and with SOC = 0 in the Pt layers), represented as a\n function of occupation of energy bands in case of Pd(001)\/Co$_{5}$\/Pt$_{1}$\n system. Bottom panel: Field-induced changes of the contribution to the magnetic\n anisotropy energy from the Co$_{5}$ layer, MAE($\\pm E$)- MAE(0). 
}\n\\label{fig_diff_Co5_MAE-E_Pt1}\n\\end{figure}\n\\begin{figure}[htb]\n\\centering\n\\includegraphics[width=1.00\\columnwidth]{fig_diff_MAE_Pt4.pdf}\n\\caption{Top panel: Contribution to the magnetic anisotropy energy from the Co$_{5}$ and Pt layers\n (without SOC scaling and with SOC = 0 in the Pt layers), represented as a\n function of the occupation of the energy bands for the Pd(001)\/Co$_{5}$\/Pt$_{4}$\n system. Bottom panel: Field-induced changes of the contribution to the magnetic\n anisotropy energy from the Co$_{5}$ layer, MAE($\\pm E$) $-$ MAE(0). }\n\\label{fig_diff_Co5_MAE-E_Pt4}\n\\end{figure}\nThese curves show a non-monotonic behavior, with extreme values always occurring when the Fermi energy passes through a SOC-induced avoided crossing of the energy bands.\nOne can see that for both systems the amplitude of the contribution to the MAE associated with the Co interface layer is much larger than that of the Pt interface layer.\nHowever, these values can accidentally be close to each other at a certain occupation, as is the case for Pd(001)\/Co$_{5}$\/Pt$_{1}$ at the actual Fermi energy.\nIt should be noted that the MAE of materials composed of magnetic and heavy-element components is determined essentially by the spin-dependent hybridization of their electron orbitals as well as by the SOC of the heavy element (see for example Refs.\\ \\onlinecite{ASE+07,SMME08,OKS+17}).\nAs the Pd(001)\/Co$_{5}$ substrate is common to both systems considered, the difference in the MAE has to be attributed to the details of the electronic structure associated with the different thickness of the Pt surface film. 
\nAs a result, artificially switching off the SOC on the Pt atoms leads to a strong weakening of the dependence of the electronic structure of the Pt film on the magnetization direction and, in turn, of the MAE.\nThis is seen in Figs.\\ \\ref{fig_diff_Co5_MAE-E_Pt1} and \\ref{fig_diff_Co5_MAE-E_Pt4}, which indicate that the contribution of the Co interface layer to the MAE drops by about one order of magnitude when the SOC is switched off for Pt.\nFor this situation, the layer resolved MAE is rather similar for both systems (see Fig.\\ \\ref{fig_MAE-E_Pt1-Pt4}).\nThe difference between the results for the two systems can be attributed to some extent to a different hybridization of the Co- and Pt-related electronic states, which obviously depends on the thickness of the Pt film.\nBased on these results, one can expect the field-induced changes of the MAE to be associated first of all with the influence of the electric field on the Pt-related electronic states.\n\nAnalysing the field-induced changes of the total and layer resolved MAE of Pd(001)\/Co$_{5}$\/Pt$_{1}$ and Pd(001)\/Co$_{5}$\/Pt$_{4}$, the most pronounced changes are found at the interface, although the changes for the other layers are not negligible.\nDespite the pronounced screening effect in case of Pd(001)\/Co$_{5}$\/Pt$_{4}$ with 4~MLs of Pt, the field-induced change of the Co interface contribution to the MAE is significant, indicating a key role of the electronic structure changes occurring in the Pt film due to the electric field.\nThe field-induced changes of the MAE in the systems with $1$ and $4$ Pt monolayers are associated primarily with the Co\/Pt interface contribution.\nCorresponding results for the interface layers are plotted in Figs.\\ \\ref{fig_diff_Co5_MAE-E_Pt1} and \\ref{fig_diff_Co5_MAE-E_Pt4}, respectively, as a function of the occupation. 
From these figures one can see that for most occupation numbers the MAE changes reverse their sign when the orientation of the electric field is reversed.\nThe origin of these changes of the MAE can be attributed first of all to the modification of the electronic structure of the Pt film, i.e.\\ the electric-field-controlled hybridization of the $d$ and $p$ orbitals, as discussed in Ref.\\ \\onlinecite{PhysRevLett.102.187201}. \n\n\n\\section{Conclusions \\label{sec4}}\nIn this work we examined the influence of an electric field on the magnetic properties of the Pt layers in Pd(001)\/Co$_{n}$\/Pt$_{m}$ thin-film structures by performing first-principles calculations.\nFor this purpose, a homogeneous external electric field was modeled by a charged plate in front of the surface.\nFrom the self-consistent calculations, we determined the spin magnetic moments and XMCD spectra in the presence of an electric field. We found that for Pd(001)\/Co$_{2}$\/Pt$_{2}$ and Pd(001)\/Co$_{5}$\/Pt$_{2}$ the spin magnetic moments vary, independently of the number of Co layers, roughly quadratically as a function of the electric field strength.\nAn inspection of the angular-momentum-resolved DOS reveals that the electric field slightly shifts the $s$ and $p$ states around the Fermi level.\nFrom the calculated XMCD spectra, it was found that the electric field has its major impact on the L$_{3}$ edge spectra.\nWe also investigated the dependence of the electric field effect on the thickness of the Pt layer.\nFor Pd(001)\/Co$_{5}$\/Pt$_{1}$ as well as Pd(001)\/Co$_{5}$\/Pt$_{4}$, the electric-field-induced change of the XMCD spectra is most significant at the L$_{3}$ edge, independent of the thickness of the Pt capping layer.\nIn addition, the layer-dependent MAE and its dependence on an electric field were examined. 
It was found that the electric field strongly modifies the MAE.\n It turned out in particular \n that this change is still considerable for\n deeper lying layers. \n\n\n\\begin{acknowledgments}\nThis work was supported by the Deutsche Forschungsgemeinschaft\ngrant: DFG EB 154\/35.\n\\end{acknowledgments}\n\n\\section{Introduction}\n\nThe basic structure of a pulsar wind nebula (PWN) is determined by the\nspin-down energy injected by the central pulsar and the interaction\nof the nebula with the interior regions of the supernova remnant\n(SNR) in which it evolves. Losses from synchrotron radiation in the\nnebular magnetic field, whose strength depends both on the nature\nof the injected wind and on the evolving size of the PWN, inverse-Compton\nscattering of ambient photons by the energetic electron population\nwithin the nebula, and adiabatic expansion as the nebula sweeps up\nthe surrounding supernova ejecta, all combine to determine emission\nstructure and long-term evolution of the nebula. (See Gaensler \\&\nSlane 2006 for a review.) Multiwavelength observations of PWNe\nprovide crucial information on the underlying particle spectrum\nwhich, in turn, strongly constrains both the magnetic field strength\nand the stage of evolution. Of particular importance is the spectrum\nof particles injected into the PWN. 
While this is typically assumed\nto be a power law, it has long been known that the spectrum of the\nCrab Nebula is not fully consistent with such a structure; changes\nin the spectral index in the optical band seem to indicate inherent\nfeatures in the injection spectrum, and the large population of\nradio-emitting particles may have a completely distinct origin from\nthat of the higher energy particles.\n\nComplex structure in the observed spectrum of a PWN can originate\nin a number of ways that are associated with the long-term evolution,\nincluding synchrotron losses and interaction with the SNR reverse\nshock (Reynolds \\& Chevalier 1984; Gelfand, Slane, \\& Zhang 2009).\nIn addition, recent studies of the spectrum immediately downstream of\nthe wind termination shock show that, at least in some cases, the\ninjection spectrum itself deviates significantly from a simple power\nlaw (Slane et al. 2008), and particle-in-cell simulations of the\nacceleration process produce spectra that are well-described by a\nMaxwellian population with a power law tail (Spitkovsky 2008). Any\nsuch structure in the spectrum of the injected particles imprints\nitself on the broadband emission spectrum of the entire nebula. The\nresulting breaks or regions of curvature in the emission spectrum,\nand the frequencies at which they are observed, depend upon the\nenergy at which features appear in the electron spectrum as well\nas the means by which the photons are produced (e.g. synchrotron\nradiation or inverse-Compton emission). To fully understand the\nnature of the particle injection, as well as the long-term evolution\nof PWNe, it is thus crucial to study the emission structure over\nthe entire electromagnetic spectrum.\n\nGamma-ray observations have provided an important window into the\nlate-phase structure of PWNe. 
While the X-ray emission from older\nPWNe is relatively weak due to the long-term decline in magnetic\nfield strength, inverse-Compton scattering of the cosmic microwave\nbackground (CMB), as well as other ambient photons, by the energetic\nparticles in the nebula produce very energetic photons, providing\nunique discovery space for these systems. Even in younger nebulae,\nthe $\\gamma$-ray emission provides a crucial probe of the particle\nspectrum in energy regions that are not accessible with other\nobservations. The inverse-Compton emission from photons with a\ncharacteristic temperature $T$ peaks at an energy $\\epsilon_{ic}\n\\approx 5 \\gamma^2 kT$ for scattering from electrons of energy\n$\\gamma m_e c^2$. CMB photons scattered into the bandpass of the\n{\\em Fermi}\\ Large Area Telescope (LAT), for example, originate from\ninteractions with electrons in the approximate energy range 0.25\n-- 4 TeV. Such particles produce synchrotron radiation with photon\nenergies in the range $\\epsilon_{s} \\approx 0.03 - 7 B_{10}$~eV,\nwhere $B_{10}$ is the magnetic field strength in units of 10$\\mu$G.\nDepending on the magnetic field, which can range from $>100 \\mu$G\nor higher for young PWNe, down to $\\sim 5 \\mu$G for highly evolved\nsystems, the synchrotron emission from particles in this energy\nrange can be difficult to detect due to instrumentation limitations\n(at long radio wavelengths), absorption (in the optical\/UV range)\nor confusion with the bright sky Galactic background (in the\ninfrared). {\\em Fermi}\\ measurements can thus provide a unique probe\nof the emission from a significant population of the particle\nspectrum in PWNe.\n\nHESS J1640$-$465\\ (see Figure 1) is an extended source of very high energy\n$\\gamma$-ray emission discovered with the High Energy Stereoscopic\nSystem (H.E.S.S.) during a survey of the Galactic plane (Aharonian\net al. 2006). 
Centered within the radio SNR G338.3$-$0.0 (Shaver\n\\& Goss 1970), the deconvolved TeV image of the source has an RMS\nwidth of $2.7 \\pm 0.5$~arcmin (Funk et al. 2007). HI measurements\nshow absorption against G338.3$-$0.0 out to velocities corresponding\nto the tangent point, indicating a distance of at least 8 kpc\n(Lemiere et al. 2009), and thus implying a rather large size for\nthe PWN ($R_{PWN} > 6.4 d_{10} $~pc, where $d_{10}$ is the distance\nin units of 10~kpc). X-ray observations with {\\em XMM}\\ (Funk et al.\n2007) and Chandra (Lemiere et al. 2009) establish the presence of\nan accompanying X-ray nebula and an X-ray point source that appears\nto be the associated neutron star. In addition, Aharonian et al.\n(2006) noted the presence of the unidentified {\\em EGRET} source\n3EG~J1639$-$4702 located at a nominal position 34$^\\prime$ from\nHESS J1640$-$465, but the very large error circle on its position made any\nassociation with HESS J1640$-$465\\ highly uncertain. Here we report on\nobservations of HESS J1640$-$465\\ with the {\\em Fermi}-LAT. The observations and\ndata analysis are summarized in Section 2, and a discussion of the\nobserved $\\gamma$-ray emission is discussed in the context of the\nevolutionary state of HESS J1640$-$465\\ in Section 3. Our conclusions are\nsummarized in Section 4.\n\n\\begin{figure}[t]\n\\epsscale{1.15}\n\\plotone{f1.ps}\n\\epsscale{1.0}\n\\caption{\n{\\em Fermi}\\ LAT image (2 - 200 GeV) of HESS J1640$-$465. The cyan circle indicates\nthe uncertainty in the centroid of the {\\em Fermi}\\ LAT source, the\nmagenta circle indicates the 95\\% encircled flux distribution of\nthe HESS image, and the white circle indicates the 95\\% probability\ncontour for the position of 3EG J1639$-$4702. 
The white contours\noutline radio emission from G338.3$-$0.0 while the black contours\nat the center outline extended X-ray emission observed with {\\em XMM}.\nA compact X-ray source detected with {\\em Chandra}\\ resides within the\nX-ray contours.\n}\n\\end{figure}\n\n\n\n\\section{Observations and Data Analysis}\n\nWe investigated $\\gamma$-ray events acquired from the region\nsurrounding HESS J1640$-$465\\ with the {\\em Fermi}-LAT during the period 2008 August\n5 to 2009 November 13. Standard event selection was applied, using\nthe ``diffuse'' class as defined in Atwood et al. (2009), using\nevents with zenith angles less than 105 degrees to minimize the\nportion of the Earth limb in the LAT field-of-view (Abdo et al.\n2009a). The ``Pass6 version 3'' instrument response functions were\nused. Standard analysis tools available from the {\\em Fermi}\\ Science\nSupport Center (version v9r15p2) were used for the reduction and\nanalysis. In this work, the mapcube file gll\\_iem\\_v02.fit is used\nto describe the Galactic $\\gamma$-ray emission, and the isotropic\ncomponent is modeled using the isotropic\\_iem\\_v02.txt table. Data\nanalysis details follow those in Castro \\& Slane (2010).\n\nFor source detection and spatial analysis of the source, we used\nonly events with energies in the range 2 - 200 GeV converting in\nthe front section of the tracker, where the Point Spread Function\n(PSF) is narrower. At 2 GeV, the 68\\% containment radius of the PSF\nis approximately 18 arcmin, which is considerably larger than the\nsource of interest. Based on an unbinned maximum likelihood analysis,\nusing the {\\tt gtlike} routine, a LAT source coincident with HESS J1640$-$465\\\nis detected with a significance of $\\sim 17 \\sigma$ based on the\nTS statistic (Mattox et al. 1996). The counts map is shown in\nFigure 1, along with the uncertainty in the centroid position based\non our analysis (cyan circle). 
We have also included the HESS error\ncircle (magenta) and contours from the {\\em XMM}\\ observation of HESS J1640$-$465\\\n(black) and the MOST observation of G338.3$-$0.0 (white contours). The error\ncircle for 3EG~J1639$-$4702 is indicated by a dashed white\ncircle.\\footnote{EGRET position contours are not necessarily circular;\nthe radius of the contour shown, from Hartman et al. (1999), contains\nthe same solid angle as the formal 95\\% contour.} The centroid of\nthe LAT emission is located at $16^{\\rm h} 40^{\\rm m} 46^{\\rm s}$,\n$-46^\\circ 30^\\prime 44^{\\prime\\prime}$, in good agreement with the\nposition of HESS J1640$-$465, and the brightness distribution is consistent\nwith an unresolved source (Figure 2); models for a disk of extent\nbetween $0.05 - 0.2$~degrees significantly degrade the quality of\nthe fit, providing strong evidence for an unresolved source.\n\n\\begin{figure}[t]\n\\epsscale{1.20}\n\\plotone{f2.ps}\n\\epsscale{1.00}\n\\caption{\nProfile of the {\\em Fermi}\\ LAT emission from HESS J1640$-$465. The histogram\ncorresponds to the best-fit point source profile, including the\ndiffuse Galactic and extragalactic background which is shown separately\nin red.\n}\n\\end{figure}\n\n\nThe source spectrum was extracted using both front and back events,\nand covering the energy range $0.2 - 51.2$~GeV. Standard background\nmodels were used to account for both Galactic and extragalactic\ndiffuse emission as well as instrumental background. Contributions\nfrom field sources identified in the one-year {\\em Fermi}-LAT First\nSource Catalog (Abdo et al. 
2010a) were included in the analysis.\nThe LAT spectrum for HESS J1640$-$465\\ is shown in Figure 3, and is well-described\nby a power law with $\\Gamma = 2.30 \\pm 0.09$ and $F(>100{\\rm\\\nMeV}) = (2.8 \\pm 0.4) \\times 10^{-7}{\\rm\\ photons\\ cm^{-2}}{\\rm\\\ns}^{-1}$ [or $(1.7 \\pm 0.2) \\times 10^{-10}{\\rm\\ erg\\ cm^{-2}\\\ns}^{-1}$ in the $0.1 - 300$~GeV band] based on spectral fits for\nwhich statistical and systematic errors associated with\nuncertainties in the Galactic background level (estimated by\nartificially changing the Galactic background normalization by $\\pm\n3\\%$ -- see Abdo et al. 2009b) are added in quadrature.\n\nWe note that {\\em Fermi}\\ observations of pulsars reveal spectra with\ndistinct cutoffs above $\\sim 1 - 10$~GeV (e.g. Abdo et al. 2009).\nThe absence of such a cutoff in Figure 3 implies that the bulk\nof the emission does not arise directly from an unseen pulsar in\nHESS J1640$-$465. Our fits imply a lower limit of 40~GeV (at the 90\\% confidence\nlevel) for any exponential cutoff energy in the spectrum. Joint\nfits with the HESS spectrum are also well-described by a power law.\nAddition of a second power law, with an exponential cutoff, does\nnot improve the fit. Such a fit can, however, accommodate $\\sim\n20\\%$ of the observed flux in the second power law for cutoff\nenergies between $1 - 8$~GeV (beyond which such a component is\nstatistically disfavored).\n\n\\begin{figure}[t]\n\\epsscale{1.2}\n\\plotone{f3.eps}\n\\epsscale{1.0}\n\\caption{\n{\\em Fermi}\\ LAT spectrum of HESS J1640$-$465. Statistical (systematic) uncertainties are\nindicated by solid (dashed) error bars. The dashed line corresponds to\nthe best-fit power law model described in the text.\n}\n\\end{figure}\n\n\nThe source 1FGL~J1640.8$-$4634, from the one-year catalog, is\ncoincident with the source we detect, and the quoted flux is in\nagreement with our measurements as well. 
The flux of the source\nis consistent with that of 3EG~J1639$-$4702; the spectral index is\nsomewhat flatter, though also consistent within the uncertainties.\n\n\\section{Discussion}\n\nThe evolutionary state of a composite supernova remnant system is\nstrongly constrained by the observed size of the SNR and the inferred\nspin-down properties of the associated pulsar. Radio observations\nestablish a radius $R_{SNR} \\sim 11.6 d_{10}$~pc for G338.3$-$0.0. The\nobserved extent of HESS J1640$-$465\\ constrains the radius of the PWN to $R_{PWN}\n> 6.4 d_{10}$~pc.\\footnote{Here, and throughout, $R_{SNR}$ refers to\nthe distance from the SNR center to the outer edge of its blast wave,\nwhile $R_{PSR}$ refers to the distance from the pulsar to the outer\nboundary of its wind nebula.}\nFor evolution in the Sedov phase, the SNR radius\nis \n\\begin{equation} R_{SNR} = 4.9 \\left(\\frac{E_{51}}{n_0}\\right)^{1\/5}\nt_3^{2\/5}{\\rm\\ pc} \\end{equation} \nwhere $E_{51}$ is the supernova explosion energy in units of\n$10^{51}$~erg, $n_0$ is the number density of the ambient medium,\nand $t_3$ is the SNR age in kyr. The evolution of the PWN radius\nthrough the SNR ejecta is given by (Chevalier 1977) \n\\begin{equation}\nR_{PWN} \\sim 1.1 \\dot{E}_{0,38}^{1\/5} E_{51}^{3\/10} M_{10}^{-1\/2}\nt_3^{6\/5} {\\rm\\ pc} \\end{equation} \nwhere $\\dot{E}_{0,38}$ is the initial spin-down power in units of\n$10^{38}{\\rm\\ erg\\ s}^{-1}$ and $M_{10}$ is the ejecta mass in units\nof $10 M_\\odot$. Figure 4 illustrates the evolution of the SNR and\nPWN radii under such assumptions for a range of values for the\nambient density and initial spin-down power, assuming an ejecta\nmass of $8 M_\\odot$ and $E_{51} = 1$. For the observed radius of\nG338.3$-$0.0, we see that the age must be around $5 - 8$~kyr for reasonable\nambient conditions. 
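The quoted age range can be checked numerically from Eqs.~(1) and (2). The sketch below (illustrative only) inverts Eq.~(1) for the age at which the SNR reaches the observed $R_{SNR} \sim 11.6\,d_{10}$~pc, using the parameter values adopted in the text ($E_{51} = 1$, $M_{ej} = 8\,M_\odot$, i.e. $M_{10} = 0.8$, and $\dot{E}_0 = 10^{40}{\rm\ erg\ s}^{-1}$, i.e. $\dot{E}_{0,38} = 100$):

```python
# Numerical check of Eqs. (1) and (2). Parameter values follow the text:
# E_51 = 1, M_ej = 8 M_sun (M_10 = 0.8), Edot_0 = 1e40 erg/s (Edot_{0,38} = 100).

def r_snr(t3, e51=1.0, n0=1.0):
    """Sedov-phase SNR radius in pc, Eq. (1); t3 is the age in kyr."""
    return 4.9 * (e51 / n0) ** 0.2 * t3 ** 0.4

def r_pwn(t3, edot0_38=100.0, e51=1.0, m10=0.8):
    """Unimpeded PWN radius in pc while expanding into ejecta, Eq. (2)."""
    return 1.1 * edot0_38 ** 0.2 * e51 ** 0.3 * m10 ** (-0.5) * t3 ** 1.2

# Invert Eq. (1) for the age at which R_SNR = 11.6 d_10 pc:
# t3 = (R / (4.9 (E_51/n_0)^{1/5}))^{5/2}
ages = {n0: (11.6 / (4.9 * (1.0 / n0) ** 0.2)) ** 2.5 for n0 in (0.1, 1.0)}
print(ages)  # roughly 2.7 kyr (n0 = 0.1) to 8.6 kyr (n0 = 1)

# At these ages the unimpeded PWN radius from Eq. (2) would already
# exceed the SNR radius:
t3 = ages[1.0]
print(r_pwn(t3), r_snr(t3))
```

The second comparison shows why, under real conditions, the reverse shock must have reached the PWN well before the present epoch, as discussed next.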
The PWN has ideally expanded to a larger radius\nby this time, which means that under real conditions the SNR reverse\nshock has already encountered and disrupted the PWN. This is\nillustrated in Figure 4 where the solid blue and red curves show\nthe SNR and PWN radii (respectively) as a function of time for the\nmodel shown in Gelfand et al. (2009) with $\\dot{E_0} = 10^{40} {\\rm\\\nerg\\ s}^{-1}$, $M_{ej} = 8 M_\\odot$, $n_0 = 0.1 {\\rm\\ cm}^{-3}$,\nand $E_{51} = 1$. Here the early SNR evolution is calculated using\nthe solution from Truelove \\& McKee (1999), and the SNR and PWN\nevolution is treated self-consistently. The SNR curve initially\nrises more quickly than the dashed blue curves, which assume a Sedov\nsolution from the onset, but approaches the Sedov solution at later\ntimes. The solid red curve shows a distinct reduction in the radius\nof the PWN upon compression by the SNR reverse shock. Such a reverse\nshock interaction is consistent with both the inferred size of HESS J1640$-$465\\\nand with the observed offset between the putative pulsar and the\nsurrounding nebula (Funk et al. 2007, Lemiere et al. 2009).\n\nA full exploration of the parameter space constrained by both the\nspatial and spectral properties of HESS J1640$-$465\\ and G338.3$-$0.0\\ is beyond the scope of\nthis initial investigation. Here we have modeled the PWN emission\nfollowing the description in Lemiere et al. (2009). A simple power\nlaw injection of particles from the pulsar into a 1-zone nebula is\nassumed, and radiative losses are assumed to be dominated by\nsynchrotron radiation.\\footnote{This assumption is valid for magnetic\nfields larger than $\\sim 1 \\mu$G, which is typically the case in\nPWNe. 
We note, however, that at late times (ages of order 10 kyr,\nas suggested for HESS J1640$-$465), when the magnetic field\ndecreases due to expansion, IC losses can begin to be significant.} The\ntime evolution of the pulsar spin-down is determined by \n\\begin{equation} \\dot{E}(t) = \\dot{E_0} \\left(1 +\nt\/\\tau_0\\right)^{-\\frac{b+1}{b-1}} \\end{equation} \nwhere $\\tau_0$ is a characteristic spin-down timescale for the\npulsar and $b$ is the pulsar braking index. Pulsations from the\nputative pulsar have not yet been detected. To estimate the current\nspindown power, we use the empirical relationship for $\\dot{E}\/L_x$\nobtained by Possenti et al. (2002), which yields $\\dot{E} = 4\n\\times 10^{36} {\\rm\\ erg\\ s}^{-1}$. Based on the observed scatter\nin the $\\dot{E}\/L_x$, the uncertainty on this estimated value may\nbe a factor of 10 or larger. The maximum particle energy is\nassumed to be limited by the condition that the particle gyroradii\ndo not exceed the termination shock radius (de Jager \\& Harding\n1992). The lower limit on the particle energy is a free parameter,\nand the overall normalization is set by the wind\nmagnetization parameter $\\sigma$, equal to the ratio of power in\nparticles to that in the Poynting flux. The magnetic field in the\nnebula is assumed to evolve as \n\\begin{equation} B(t) = B_0\/\\left[\n1 + (t\/\\tau_0)^\\alpha\\right] \\end{equation} \nwhere $\\alpha$ is a\nfree parameter.\n\n\\begin{figure}[t]\n\\epsscale{1.2}\n\\plotone{f4.ps}\n\\epsscale{1.0}\n\\caption{\nTime evolution of the SNR and PWN radii for a range of values for\nthe ambient density and initial spin-down power of the pulsar. The\nsolid curves correspond to models from Gelfand et al. (2009) using\n$\\dot{E_0} = 10^{40} {\\rm\\ erg\\ s}^{-1}$, $M_{ej} = 8 M_\\odot$, $n_0\n= 0.1 {\\rm\\ cm}^{-3}$, and $E_{51} = 1$. 
See text description for\ndetails.\n}\n\\end{figure}\n\n\nThe broadband emission model results are shown in Figure 5 where\nwe plot the {\\em Fermi}\\ and H.E.S.S spectra along with the radio upper\nlimit from GMRT observations (Giacani et al. 2008) and spectral\nconfidence bands derived from Chandra (Lemiere et al. 2009). The\nblack curves represent the model prediction for the synchrotron\n(left) and inverse-Compton (right) emission that best describes the\nX-ray and TeV $\\gamma$-ray spectra, similar to results from Lemiere\net al. (2009); the parameters for the model, which were adjusted ``by\nhand'' to provide good agreement with the radio, X-ray, and TeV\n$\\gamma$-ray data, as well as the inferred size of the PWN, \nare summarized in the\ncaption. As seen in Figure 5, this model significantly underpredicts\nthe observed {\\em Fermi}-LAT emission.\n\n\\begin{figure}[t]\n\\epsscale{1.2}\n\\plotone{f5.ps}\n\\epsscale{1.0}\n\\caption{\nElectron spectrum (upper) and broadband emission model (lower) for\nHESS J1640$-$465\\ assuming the evolutionary history described in the text. The\nblack curves represent a PWN with an age $T = 10$~kyr, and $B(T) =\n5 \\mu$G, assuming $\\dot{E}(T) = 4 \\times 10^{36} {\\rm\\ erg\\ s}^{-1}$\nand an injection spectrum with $\\sigma = 10^{-3}$, $\\gamma = 2.5$,\n$\\tau_0 = 500$~yr, and $E_{\\rm min} = 115$~GeV. The magnetic field\nevolution is characterized by $\\alpha = 0.65$. The magenta curves\nrepresent the scenario with a low-energy Maxwellian electron component\nreplacing the low-energy portion of the electron power-law spectrum.\nThe mean temperature for the IR and optical photon fields are 15~K\nand 5000~K, respectively, and the energy densities relative to the\nCMB are 4 and 1.15. The dashed curve in the upper panel represents\nthe truncated portion of the power law that was replaced by a\nMaxwellian. 
The dashed blue curve in the lower panel represents a\nmodel for which all of the $\\gamma$-ray emission results from pion\ndecay.\n}\n\\end{figure}\n\n\nAs discussed above, our spectral fits can formally accommodate up\nto $\\sim 20\\%$ of the observed flux in a pulsar-like component\ncharacterized by a power law with an exponential cutoff energy\nbetween 1 and 8 GeV. This corresponds to an energy flux (above\n100~MeV) of $\\sim 3.8 \\times 10^{-11}{\\rm\\ erg\\ cm}^{-2}{\\rm\\\ns}^{-1}$. For the spin-down power suggested by its X-ray luminosity,\nthe available energy flux from the pulsar that powers HESS J1640$-$465\\ is $\\sim\n3.3 \\times 10^{-10} d_{10}^{-2}{\\rm\\ erg\\ cm}^{-2}{\\rm\\ s}^{-1}$.\nThus, as much as 10\\% of its spin-down could conceivably be\ncontributing directly to $\\gamma$-rays in the LAT band. There are\nknown radio pulsars within the field of 1FGL~J1640.8$-$4634 as well.\nPSR~J1637$-$4642, for example, is located within $\\sim 37$~arcmin\nand has a total energy flux of $2 \\times 10^{-10} {\\rm\\ erg\\\ncm}^{-2}{\\rm\\ s}^{-1}$ given its estimated distance and measured\nspin-down. Thus, while our spectral fits do not require a pulsar-like\ncomponent, it is quite feasible that one or more of these sources\ncontributes as much as 20\\% of the observed flux. Inspection of\nFigure 5 makes it clear, however, that the remaining LAT emission\nstill vastly exceeds the predicted flux from HESS J1640$-$465.\n\nWe note that simple power-law injection models for Vela X, another\nevolved PWN, fail to reproduce the observed broadband spectrum\n(LaMassa, Slane, \\& de Jager 2009). The presence of an excess\npopulation of low-energy electrons has been suggested, and models\nfor the inverse-Compton scattering of photons by this population\npredict an excess of $\\gamma$-rays in the GeV range (de Jager,\nSlane, \\& LaMassa 2009). This excess has been confirmed with\nobservations by both AGILE (Pellizzoni et al. 
2010) and {\\em Fermi}\\\n(Abdo et al. 2010b). Motivated by this result, we modified the\nevolved power law spectrum from our model for HESS J1640$-$465\\ by truncating\nthe lower end of the power law and adding a distinct low-energy\ncomponent. Based on results from simulations of shock acceleration\n(Spitkovsky 2008), we chose a Maxwellian distribution for this\npopulation. Our resulting (ad hoc) particle spectrum is shown in\nthe upper panel in Figure 5, and the resulting broadband emission\nis shown in the magenta curves in the lower panel. Here we have\nadjusted the normalization of the Maxwellian to reproduce the\nemission in the {\\em Fermi}-LAT band, which is produced primarily by\nupscattered infrared (IR) photons from local dust. The energy density\nand mean temperature of the IR photon field were adjusted slightly\nto improve the agreement between the data and the model, but the\nvalues (listed in the caption) are within reasonable expectations\n(see, e.g., Strong et al. 2000). We find a mean value of $\\gamma\n\\approx 2 \\times 10^5$ for the electrons in the Maxwellian component,\nand roughly 5\\% of the total electron energy in the power law tail,\nconsistent with results from particle-in-cell simulations.\\footnote{A.\nSpitkovsky, private communication.} The associated pair multiplicity\nrelative to the integrated Goldreich-Julian injection rate is of\norder $10^6$, similar to that inferred for the Crab Nebula as well\nas several other PWNe (see Bucciantini et al. 2010). 
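The hybrid spectrum invoked here has a convenient analytic property: for the relativistic ($\gamma \gg 1$) Maxwellian form $N(\gamma) \propto \gamma^2 e^{-\gamma/\Theta}$, the mean Lorentz factor is exactly $\langle\gamma\rangle = 3\Theta$, so the fitted $\langle\gamma\rangle \approx 2 \times 10^5$ corresponds to $\Theta \approx 6.7 \times 10^4$. A minimal numerical check of this relation (an illustrative sketch, not the modeling code used in the text; the integration grid is arbitrary):

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integral (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Relativistic (gamma >> 1) Maxwellian, N(gamma) ~ gamma^2 exp(-gamma/theta);
# its mean Lorentz factor is exactly 3*theta, so the <gamma> ~ 2e5 quoted
# in the text corresponds to theta ~ 6.7e4.
theta = 2e5 / 3.0
gamma = np.logspace(2, 7, 4000)
n_gamma = gamma**2 * np.exp(-gamma / theta)
mean_gamma = trapz(gamma * n_gamma, gamma) / trapz(n_gamma, gamma)
```

The numerical mean recovers $3\Theta$ to well under a percent over this grid.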
Recent work\nby Fang \\& Zhang (2010) uses a similar input distribution to\nsuccessfully model the emission for several PWNe including HESS J1640$-$465.\nHowever, their results for HESS J1640$-$465\\ underpredict the observed GeV\nemission from this source, apparently due to use of a slightly lower\nbulk Lorentz factor and a larger fraction of the total energy in\nthe power law tail than we have used in this analysis.\n\nWe note that the Maxwellian shape for the low-energy electron\npopulation is not unique. Indeed even a very narrow Gaussian\ndistribution can produce the GeV $\\gamma$-ray emission without\nexceeding the radio upper limit for the synchrotron emission. For\nany of these models, the total energy in electrons requires a larger\ninitial spin-down period than assumed for the simple power-law\ninjection models -- by a factor of roughly 3 for the adopted\nMaxwellian distribution. This is within the uncertainty in our\nassumed value based on scaling from the X-ray luminosity of the\nPWN. It is also important to note that while our proposed model is\nconsistent with the observed properties of this system, there are\npotential degeneracies in the effects of different model parameters,\nmeaning that the proposed scenario is not necessarily unique.\n\n\nAn alternative scenario for the $\\gamma$-ray emission is that it\narises from the SNR itself, and not the PWN. Nonthermal bremsstrahlung\nhas been suggested as a mechanism for the production of $\\gamma$-rays\nin SNRs (e.g. Bykov et al. 2000). This process can be dominant for\nelectron-to-proton ratios of order 1. However, the value typical\nof local cosmic rays is closer to $10^{-2}$, and even smaller values\nappear favored in models for $\\gamma$-ray emission from SNRs (e.g.\nMorlino et al. 2009, Zirakashvili \\& Aharonian 2010, Ellison et al.\n2010), so that this process is typically not dominant. 
The dashed\nblue curve in Figure 5 represents a model for the emission from the\ncollision of protons accelerated in the SNR with ambient material,\nleading to $\\gamma$-rays from the production and subsequent decay\nof neutral pions. The $\\gamma$-ray spectrum is calculated based on\nKamae et al. (2006) using a scaling factor of 1.85 for helium and\nheavy nuclei (Mori 2009), and we have used a power law distribution\nof protons with $dN_p\/dE \\propto E^{-2.4}$ to best reproduce the\nobserved spectrum. Assuming a shock compression ratio of 4 and that\n25\\% of the total supernova energy appears in the form of relativistic\nprotons, an ambient density $n_0 \\approx 100 {\\rm\\ cm}^{-3}$ is\nrequired to produce the model shown in Figure 5. This is much higher\nthan can be accommodated for the observed size of the SNR and the\nlack of observed thermal X-ray emission from the SNR. Such high\ndensities are found in dense molecular clouds, suggesting that the\n$\\gamma$-rays could be produced by particles that stream away to\ninteract with high-density material outside the SNR. However, only\nthe most energetic particles can escape the acceleration region,\nwhich is in conflict with the proton spectrum we require to match\nthe data. Moreover, the observed TeV emission appears to originate\nfrom within the SNR boundaries, making such an escaping-particle\nscenario appear problematic. 
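The density requirement quoted above can be checked with a standard order-of-magnitude scaling for hadronic emission, $L_\gamma \sim W_p/(3\,t_{pp})$ with a proton-proton cooling time $t_{pp} \approx 5.3 \times 10^7\,(n/{\rm cm^{-3}})^{-1}$ yr. This sketch is not the Kamae et al. (2006) calculation used for Figure 5; the canonical supernova energy $E_{\rm SN} = 10^{51}$ erg is an assumption, while the 25\% proton fraction, $n_0 \approx 100$ cm$^{-3}$, and the $d_{10}$ distance scaling are taken from the text:

```python
import math

# Back-of-the-envelope pion-decay luminosity (not the Kamae et al. 2006
# spectral calculation used in the text).  E_SN = 1e51 erg is an assumed
# canonical value; the 25% proton fraction, n0 = 100 cm^-3, and the
# 10 kpc distance scaling are taken from the text.
YR = 3.156e7                  # seconds per year
KPC = 3.086e21                # cm per kpc

E_sn = 1e51                   # erg (assumption)
W_p = 0.25 * E_sn             # energy in relativistic protons
n0 = 100.0                    # cm^-3, ambient density required by the model

t_pp = 5.3e7 / n0 * YR        # p-p cooling time, ~5.3e7 (n/cm^-3)^-1 yr
L_gamma = W_p / (3.0 * t_pp)  # roughly 1/3 of p-p losses emerge as gammas

d = 10.0 * KPC
F_gamma = L_gamma / (4.0 * math.pi * d**2)
```

With these numbers the predicted energy flux comes out of order $10^{-10}$ erg cm$^{-2}$ s$^{-1}$, comparable to the LAT band, which illustrates why such a high ambient density is needed in this scenario.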
Based on this, along with the lack\nof a spectral cutoff that might suggest emission from a central\npulsar, we conclude that the GeV $\\gamma$-ray emission most likely\narises from the PWN.\n\nIt is well-known that a distinct low-energy electron population\nresides in the Crab Nebula, although the origin is not well-understood.\nAtoyan (1999) has suggested that the spin-down timescale $\\tau_0$\n(see Equation 1) may itself be time-dependent, resulting in a large\nenergy input in the earliest epoch of pulsar spin-down, when\nsignificant synchrotron and adiabatic losses would result in rapid\ncooling of the electrons. Studies of the broadband emission from\n3C~58 indicate an injection spectrum that differs from a pure power\nlaw as well (Slane et al. 2008). As noted above, observations of\nVela~X appear to require an excess of low-energy electrons which\nmay be a relic population or could have been produced through rapid\nsynchrotron losses associated with the increased magnetic field\nstrength during the reverse-shock crushing stage. More complete\nmodeling of the broadband emission of HESS J1640$-$465, accounting for the full\ndynamical evolution of the system, including the effects of the\nreverse shock on the PWN, is required to assess the overall energetics\nand underlying particle spectrum more completely, but is beyond the\nscope of the results we report here.\n\n\\section{Conclusions}\n\nBroadband studies of HESS J1640$-$465\\ have identified this source as a likely\nPWN, with X-ray observations providing images of an extended nebula\nas well as the putative pulsar powering the system. Modeling of the\nPWN evolution based on inferred parameters of the pulsar implies\ndetectable emission in the GeV $\\gamma$-ray band, and our investigations\nof the {\\em Fermi}-LAT observations of this region reveal clear evidence\nof such emission, consistent with the source 1FGL~J1640.8$-$4634. 
\nThe flux and spectrum we derive are consistent with that\nof the previously-identified source 3EG~J1639$-$4702, and the much-improved\nposition makes it likely that the emission arises from HESS J1640$-$465. The\nlack of a spectral cutoff rules out an association with emission\ndirectly from the pulsar that powers the nebula, and the flux is\ninconsistent with $\\gamma$-rays from G338.3$-$0.0, in which HESS J1640$-$465\\ \nresides.\n\nWe have investigated the radio, X-ray, and $\\gamma$-ray emission\nfrom HESS J1640$-$465\\ in the context of a simple one-zone model in which power\nis injected into a nebula at a time-dependent rate consistent with\nthe current observed X-ray emission from the system. We find that\nmodels constrained by the observed size of the associated SNR, as\nwell as limits on the size of the PWN, require an approximate age\nof 10~kyr and a current magnetic field strength of only $\\sim 4\n\\mu$G, consistent with expectations for the late-phase evolution\nof a PWN. The conditions in such an evolved PWN yield a considerably\nhigher $\\gamma$-ray flux, relative to the X-ray flux, than in younger\nsystems where the higher magnetic field results in significant\nsynchrotron radiation.\n\nThe observed {\\em Fermi}-LAT emission from HESS J1640$-$465\\ significantly exceeds\nthat predicted by our broadband models. We propose that the excess\nemission is a signature of a distinct population of low-energy\nelectrons similar to that inferred from studies of the Crab Nebula\nand Vela~X, although the nature of this electron component is not\nwell constrained. Deeper radio observations are needed to place\nstronger constraints on this population. Sensitive searches for\npulsations from the central pulsar are of particular importance to\nconstrain the spin-down properties of the system, which can only\nbe very roughly constrained at present. 
There has been considerable\nsuccess in identifying such pulsars with the {\\em Fermi}-LAT, but the\nlack of an obvious pulsar-like spectrum in HESS J1640$-$465\\ may argue for more\nlikely success with deep radio timing searches.\n\n\\acknowledgments\n\nThe work presented here was supported in part by NASA Contract\nNAS8-39073 (POS) and {\\em Fermi}\\ Grant NNX09AT68G. JDG is supported\nby an NSF Astronomy and Astrophysics Postdoctoral Fellowship under\naward AST-0702957. POS, SF, and YU are grateful to the KITP in\nSanta Barbara, where elements of the work presented here were first\ndiscussed during a KITP program. The authors would like to thank\nDon Ellison, Luke Drury, Felix Aharonian, and David Smith for helpful\ndiscussions during the preparation of this manuscript.\n\nThe $Fermi$ LAT Collaboration acknowledges support from a number\nof agencies and institutes for both development and the operation\nof the LAT as well as scientific data analysis. These include NASA\nand DOE in the United States, CEA\/Irfu and IN2P3\/CNRS in France,\nASI and INFN in Italy, MEXT, KEK, and JAXA in Japan, and the\nK.~A.~Wallenberg Foundation, the Swedish Research Council and the\nNational Space Board in Sweden. Additional support from INAF in\nItaly and CNES in France for science analysis during the operations\nphase is also gratefully acknowledged.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\nStar formation is ultimately controlled by the processes that regulate the formation of density enhancements in molecular clouds. In our current picture, the density statistics of the interstellar medium are heavily affected by supersonic turbulence \\citep[for a review, see][]{hen12}. 
\nThe density statistics depend on characteristics such as the total turbulent and magnetic energy \\citep[e.g.,][FK13 hereafter]{pad97mnras, nor99, vaz01, kow07, mol12, fed13}, the driving mechanism of the turbulence \\citep[e.g.,][FK12, hereafter]{fed10, fed12}, the equation of state \\citep[e.g.,][]{pas98, gaz13}, and the driving scale \\citep[e.g.,][]{fis04, bru09}. Constraining these characteristics is fundamental for virtually all analytic star formation theories.\n\n\nWe have previously employed near-infrared dust extinction mapping in analyzing column density statistics of molecular clouds \\citep[][KT13 hereafter]{kai09, kai11a, kai11b, kai13}. This technique is sensitive and well-calibrated at low column densities, making it suitable to study the mass reservoirs of molecular clouds. Exploiting this advantage, we studied how the clouds gather gas to the regime where star formation occurs. We used an easily accessible characteristic to quantify this, namely the dense gas mass fraction\\footnote{We purposefully use here the term \"dense gas mass fraction\" instead of \"cumulative mass function\" (CMF) from our previous works. This is to avoid confusion with the \"core mass function\" that is commonly used in literature.} (DGMF, hereafter), defined as a function that gives the fraction of the cloud's mass above a column density value\n\\begin{equation}\n\\mathrm{d}M' (> N) = \\frac{M(> N)}{M_\\mathrm{tot}},\n\\label{eq:dgmf}\n\\end{equation}\nwhere $M(> N)$ is the mass above the column density $N$ and $M_\\mathrm{tot}$ is the total mass. The DGMF is linked to the probability density function (PDF), $p(N)$, of column densities, which gives the probability to have a column density between $[N, N+dN]$, via\n\\begin{equation}\ndM' = \\int_N^{N_\\mathrm{high}} p(N') dN' \/ \\int_{N_\\mathrm{low}}^{N_\\mathrm{high}} p(N') dN',\n\\label{eq:dgmf-pdf}\n\\end{equation}\nwhere $[N_\\mathrm{low}, N_\\mathrm{high}]$ is the probed column density range. 
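Definition (1) maps directly onto a column density map; a minimal sketch, assuming uniform pixels (so that the mass above $N$ is proportional to the sum of column densities above $N$); the log-normal test field is purely illustrative:

```python
import numpy as np

def dgmf(colden, thresholds):
    """Dense gas mass fraction dM'(>N) of Eq. (1): with uniform pixels,
    the mass above N is proportional to the sum of column densities
    above N."""
    colden = np.asarray(colden, dtype=float).ravel()
    m_tot = colden.sum()
    return np.array([colden[colden > n].sum() / m_tot for n in thresholds])

# purely illustrative log-normal test field (not simulation data)
rng = np.random.default_rng(0)
field = np.exp(rng.normal(np.log(3e21), 0.5, size=(256, 256)))
thresholds = np.linspace(3e21, 11e21, 9)
dm = dgmf(field, thresholds)
```

By construction $dM'$ starts at or below unity and decreases monotonically with the column density threshold.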
The reason for analyzing DGMFs instead of PDFs is simply the intuitive connection to the total mass reservoir of the cloud. Previously, DGMFs have been analyzed by, e.g., \\citet{kai09} who showed that starless clouds contain much less dense gas than star-forming clouds and by \\citet{lad10} who used them to derive a star-formation threshold.\n\nFrom the theoretical point-of-view, the form of the DGMF can be controlled by any of the forces affecting the cloud's density structure. The key parameters describing these forces are\\footnote{However, see the discussion on the caveat related to the Reynolds numbers of simulations in Section \\ref{subsec:sim-DGMFs}} \\emph{i)} the sonic Mach number, ${\\cal M}_\\mathrm{s}$, \\emph{ii)} the turbulence \\emph{driving} \\citep{fed08, fed10}, which is commonly denoted by $b$, with $b=1\/3$ corresponding to purely solenoidal driving and $b=1$ to fully compressive driving, and \\emph{iii)} the magnetic field strength, $B$, reflected by the Alfv\\'en Mach number, ${\\cal M}_\\mathrm{A}$. These parameters relate to density fluctuations via \\citep[][]{nor99, pri11, mol12}\n\\begin{equation}\n\\sigma^2_{\\ln{\\rho \/ \\langle \\rho \\rangle}} = \\ln{(1+b^2 {\\cal M_\\mathrm{s}}^2 \\frac{\\beta}{\\beta+1})},\n\\label{eq:b}\n\\end{equation}\nwhere $\\sigma_{\\ln{\\rho \/ \\langle \\rho \\rangle}}$ is the standard deviation of logarithmic, mean-normalized densities and $\\beta = 2{\\cal M}^2_\\mathrm{A} \/ {\\cal M}^2_\\mathrm{s}$. This form of Eq. \\ref{eq:b} \\citep{mol12} is valid up to moderate magnetic field strengths, ${\\cal M}_\\mathrm{A} \\gtrsim 6$. The strength of the ${\\cal M}_\\mathrm{s}$ - density coupling is of great importance for analytic star formation theories, because it directly affects the star formation rates and efficiencies (SFE) they predict \\citep[e.g.,][see FK12]{kru05, hen11, pad11}. \n\nIn this work, we will estimate how the different physical parameters affect the observed DGMFs of molecular clouds. 
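Eq. \ref{eq:b} is straightforward to evaluate; a sketch, in which the non-magnetized case is taken as the limit $\beta \to \infty$ (where $\beta/(\beta+1) \to 1$):

```python
import math

def sigma_ln_rho(b, mach_s, mach_a=None):
    """Standard deviation of ln(rho/<rho>) from Eq. (3); mach_a=None
    gives the non-magnetized limit beta -> infinity."""
    if mach_a is None:
        factor = 1.0                        # beta/(beta+1) -> 1
    else:
        beta = 2.0 * mach_a**2 / mach_s**2
        factor = beta / (beta + 1.0)
    return math.sqrt(math.log(1.0 + b**2 * mach_s**2 * factor))

# solenoidal (b = 1/3) versus compressive (b = 1) driving at Ms = 10
s_sol = sigma_ln_rho(1.0 / 3.0, 10.0)   # ~1.58
s_comp = sigma_ln_rho(1.0, 10.0)        # ~2.15
```

At fixed ${\cal M}_\mathrm{s}$, compressive driving widens the density distribution substantially, and a magnetic field (finite ${\cal M}_\mathrm{A}$) narrows it.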
To this goal, we will analyze numerical turbulence simulations and derive predictions for observable DGMFs. We will then compare the predictions to the results of \\citet{kai09, kai11b} and KT13 \\citep[see also][]{lad10}.\n\n\\section{Simulation data} \n\\label{sec:data}\n\n\n\n\nWe analyze a set of magneto-hydrodynamic simulations of isothermal, driven turbulence in a periodic box, including self-gravity and sink particles to follow gas accretion onto protostars (see FK12). Each simulation is a time-series which starts ($t=0$) when the turbulence is fully developed and the gravity is switched on. Then, the evolution is followed as a function of SFE, defined as the fraction of mass accreted into sink particles. The formation of the first sink particle occurs at $\\mathrm{SFE=0\\%}$.\nThe sink particles affect their surroundings because of gas accretion, and we eliminated them from the simulations. The issue is described in Appendix \\ref{app:sinks}. Here we quote the main result: the DGMFs of ${\\cal M}_\\mathrm{s} = 10$ simulations (which we directly compare with observations) with $512^3$ cells are unaffected by sink particles below $N(\\mathrm{H}_2) < 11 \\times 10^{21}$ cm$^{-2}$. They are $70$\\% accurate up to $N(\\mathrm{H}_2) \\approx 25 \\times 10^{21}$ cm$^{-2}$. We also show in Appendix \\ref{app:resolution} that the resolution does not affect the DGMFs in this range.\n\n\n \\begin{figure*}\n \\centering\n\\includegraphics[bb = 0 4 960 225, clip=true, width=\\textwidth]{fig1.eps}\n \\caption{DGMFs of four simulations (black lines) with ${\\cal M}_\\mathrm{s} = 10$, processed to mimic those observed with near-infrared dust extinction mapping technique. The solid lines show the DGMFs at $t=0$ and the dotted lines at time-steps $\\mathrm{SFE} = \\{1, 3, 10\\}\\%$. The panels also show with dashed lines the mean DGMF of nearby starless clouds (blue) and of Taurus \\citep[][red]{kai09}, and of a sample of IRDCs (KT13, green). 
\n }\n \\label{fig:DGMFs-nir}\n \\end{figure*}\n\n\nThe simulations were scaled so that their virial parameters, $\\alpha_\\mathrm{vir, 0} = 5 \\sigma^2_\\mathrm{v} L \/ (6 G M)$ where $\\sigma_\\mathrm{v}$ is the 3-D velocity dispersion and $L$ the size of the simulation, were close to unity. Observations have shown that molecular clouds, on average, show $\\alpha_\\mathrm{vir, 0} \\approx 1$ \\citep[e.g.,][]{hey09}. However, this definition is an idealized approximation. The actual virial parameters, $\\alpha_\\mathrm{vir} = 2|E_\\mathrm{kin}|\/|E_\\mathrm{pot}|$, vary by more than an order of magnitude in the simulations. However, the actual virial parameters do not affect density PDFs greatly (FK12). If the virial parameter is \"low-enough\" to allow some collapse, the density structure is determined by other parameters \\citep[FK12;][]{mol12}.\n\nTo make a realistic comparison with observations, we processed the simulations with ${\\cal M}_\\mathrm{s} = 5-10$ to mimic data derived using near-infrared dust extinction mapping \\citep{lom01}. First, column density data from simulations was re-gridded to $60 \\arcsec \/ \\mathrm{pixel}$ and smoothed to \nthe $FWHM = 120\\arcsec$ resolution (0.09 pc at $150$ pc distance). The native resolution of the simulations with ${\\cal M}_\\mathrm{s} > 10$ is coarser than this, and we could not smooth them (we do not compare them with the lower ${\\cal M}_\\mathrm{s}$ simulations). Then, the column densities outside $N(\\mathrm{H}_2) = [3, 25] \\times 10^{21}$ cm$^{-2}$ were discarded, approximating the dynamic range of extinction mapping. The lower limit of the range was chosen to be high enough that it is possible to define separate structures in simulations using (approximately) closed contours of constant column density. This is because observationally, \"clouds\" are commonly defined in this manner \\citep[e.g.,][]{lad10}. 
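The mock-observation processing used in this section (regridding to $60\arcsec$ pixels, smoothing to $FWHM = 120\arcsec$, restricting to the $N(\mathrm{H}_2) = [3, 25] \times 10^{21}$ cm$^{-2}$ dynamic range, and adding the column-density-dependent noise $\sigma(N) = 0.018N + 0.2 \times 10^{21}$ cm$^{-2}$) can be sketched as follows; the input pixel scale of $15\arcsec$ and the block-averaging regridding scheme are assumptions for illustration, not taken from the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mock_extinction_map(colden, pix_in=15.0, pix_out=60.0, fwhm=120.0,
                        n_lo=3e21, n_hi=25e21, seed=0):
    """Mimic NIR dust extinction mapping of a simulated column density
    map: regrid, smooth to the target FWHM, keep only the observable
    dynamic range, and add N-dependent Gaussian noise."""
    rng = np.random.default_rng(seed)
    f = int(round(pix_out / pix_in))             # block-average regridding
    ny, nx = (s - s % f for s in colden.shape)
    m = colden[:ny, :nx].reshape(ny // f, f, nx // f, f).mean(axis=(1, 3))
    sigma_pix = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / pix_out
    m = gaussian_filter(m, sigma_pix)            # smooth to FWHM resolution
    valid = (m > n_lo) & (m < n_hi)              # approximate dynamic range
    m = m + rng.normal(0.0, 0.018 * m + 0.2e21)  # sigma(N) = 0.018N + 0.2e21
    return np.where(valid, m, np.nan)

# purely illustrative log-normal column density field
field = np.exp(np.random.default_rng(1).normal(np.log(4e21), 0.6, (256, 256)))
obs = mock_extinction_map(field)
```

Pixels outside the observable range are masked as NaN, mirroring the discarding of column densities outside the extinction-mapping dynamic range.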
Finally, Gaussian noise with $\\sigma (N) = 0.018N(\\mathrm{H}_\\mathrm{2}) + 0.2 \\times 10^{21}$ cm$^{-2}$ was added, following typical uncertainties in the data of \\citet{kai09}. This procedure was repeated for three different projections of the simulation data, and the DGMFs from them were averaged to form the final DGMF.\n\nWe examined the effects of the resolution and noise on the DGMFs. We experimented with the resolution of 0.03 pc that studies employing \\emph{Herschel} data of nearby clouds will reach \\citep[e.g.,][]{sch13}. Similar resolution is reached by combined near- and mid-infrared extinction mapping when applied to infrared dark clouds (IRDCs, KT13). The effect of the resolution and noise on the DGMFs was practically negligible.\n\n\\section{Results and Discussion} \n\\label{sec:results}\n\\label{sec:discussion}\n\n\\subsection{Dependence of the DGMF on physical parameters} \n\\label{subsec:sim-DGMFs}\n\n\nWe derived the DGMFs for the simulations up to $\\mathrm{SFE} = 10$\\%. Figure \\ref{fig:DGMFs-nir} shows the DGMFs of four simulations with ${\\cal M}_\\mathrm{s}=10$ and $b = \\{1\/3, 0.4, 1\\}$. For the case $b=0.4$, both a non-magnetized and a magnetized simulation are shown. The DGMFs at early stages ($t = 0$ and $\\mathrm{SFE} = 0$\\%) are well-described by exponential functions, $dM' \\propto e^{\\alpha N}$. When star formation begins, the DGMFs flatten. Their shapes remain close to an exponential function, or curve upwards approaching a powerlaw shape. This behavior is similar in all models. Since the DGMFs are close to exponential functions in the range $N(\\mathrm{H_2}) = 3-11 \\times 10^{21}$ cm$^{-2}$, we quantified their shapes through fits of exponentials. This yielded the range $\\alpha = [-0.41, -0.023]$ in all models.\n\n\nWe examined the dependence of the DGMF slopes on the driving of turbulence and magnetic field strength ($B$) in the simulations with ${\\cal M}_\\mathrm{s} = 10$. The results are shown in Fig. 
\\ref{fig:slopes} (left and center). Most importantly, \\emph{the DGMF slope responds most sensitively to the turbulence driving}, changing by a factor of $4.8-8.5$ when $b$ changes from $1\/3$ to 1. The slopes depend clearly less on $B$. The non-magnetic simulations show significantly shallower slopes than magnetized ones, but if $ B \\gtrsim 3$ $\\mu$G, the slopes are uncorrelated with it. \n\n\nThe DGMF slopes depend on the SFE. The dependency is stronger in magnetized than in non-magnetized simulations: the spreads of the slopes in the range $\\mathrm{SFE}=[1, 10]\\%$ for these cases are $0.09$ and 0.03, respectively. The mean difference in the slopes of non-magnetized and magnetized runs is 0.05. The early stages ($t=0$, $\\mathrm{SFE}=0$\\%) show clearly steeper slopes than the higher SFEs. \nWe also examined the relationship between the DGMF slopes and ${\\cal M}_\\mathrm{s}$. For this, we derived the DGMFs in the native resolution of the simulations (smoothing would greatly reduce the size of the low-${\\cal M}_\\mathrm{s}$ runs). Therefore, the results should be compared to observations with caution. Figure \\ref{fig:slopes} shows the DGMF slopes and ${\\cal M}_\\mathrm{s}$ in simulations with $b=1\/3$. The slopes are non-responsive to ${\\cal M}_\\mathrm{s}$, except when ${\\cal M}_\\mathrm{s} = 5$. \n\n\nThe DGMFs can vary also due to \\emph{i)} the random nature of turbulence (\"cloud-to-cloud\" variations) and \\emph{ii)} projection effects. The former can be examined by comparing simulations that have the same input parameters, but different random number seeds (e.g., \\#12, 14, and 17, see Table \\ref{tab:sinks}). Unfortunately, we only had three simulation pairs with varying random number seeds. The mean difference in the DGMF slopes among these was 0.08 at the early stages ($t = 0$, $\\mathrm{SFE} = 0\\%$). However, for time-steps $\\mathrm{SFE} \\ge 1$ the mean difference was only 0.02. 
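The exponential characterization $dM' \propto e^{\alpha N}$ used for these fits amounts to a linear least-squares fit of $\ln dM'$ against $N$; a sketch, assuming $N$ is expressed in units of $10^{21}$ cm$^{-2}$ (consistent with the magnitude of the quoted slopes):

```python
import numpy as np

def dgmf_slope(n_col, dm, n_lo=3.0, n_hi=11.0):
    """Exponential slope alpha of dM' = A exp(alpha N), fitted by linear
    least squares on ln(dM') over N = 3-11 (units of 1e21 cm^-2)."""
    sel = (n_col >= n_lo) & (n_col <= n_hi) & (dm > 0)
    alpha, _ = np.polyfit(n_col[sel], np.log(dm[sel]), 1)
    return alpha

# synthetic exponential DGMF with alpha = -0.2, for illustration only
n_col = np.linspace(3.0, 11.0, 20)
dm = np.exp(-0.2 * n_col)
```

For a purely exponential input the fit recovers the slope exactly; for simulated DGMFs it returns the best-fitting $\alpha$ over the probed range.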
The projection effects were studied by examining the standard deviation of the slopes derived for three different projections of all models. The mean standard deviation of the slopes in all models was 0.03.\n\n\n \\begin{figure*}\n \\centering\n\\includegraphics[bb= 20 5 480 355, clip=true, width=0.33\\textwidth]{fig2.eps}\\includegraphics[bb= 20 5 480 355, clip=true, width=0.33\\textwidth]{fig3.eps}\\includegraphics[bb= 20 5 480 355, clip=true, width=0.33\\textwidth]{fig4.eps}\n \\caption{Exponential slopes of the DGMFs as a function of $b$ (\\emph{left}), $B$ (\\emph{center}), and ${\\cal M}_\\mathrm{s}$ (\\emph{right}). The solid black lines show the timestep $t = 0$, and the dotted lines $\\mathrm{SFE}=\\{0, 1, \\dots, 10\\}\\%$. The blue, red, and green shaded regions indicate the slopes observed in starless and star-forming nearby clouds \\citep{kai09} and in IRDCs (KT13, however, see the discussion on these data in Section \\ref{subsec:comp_with_obs}). The median masses of each set of the clouds, $M_\\mathrm{1\/2}$, are shown in the panels.\n }\n \\label{fig:slopes}\n \\end{figure*}\n\n\nWe note that the effective Reynolds numbers of our simulations ($\\lesssim$$10^4$) are lower than that of the interstellar medium ($\\sim$$10^7$). It is not clear how this affects the predicted statistical properties. \\citet{alu13} has rigorously shown that the direct influence of driving on the kinetic energy is restricted to scales larger than the smallest scale at which the turbulence is stirred. However, numerical \\citep{fed10} and analytic \\citep{gal11} works have found differences in flow statistics in the range that can be considered to be the \"inertial range\" of compressible turbulence simulations. Resolution studies of the simulations suggest that the driving-induced differences remain when the Reynolds number increases. 
As this issue cannot be addressed with the current computational methods, our results are also subject to it.\n\n\\subsection{Comparing the predictions with observations} \n\\label{subsec:comp_with_obs}\n\nFigures \\ref{fig:DGMFs-nir} and \\ref{fig:slopes} show observed DGMFs to be compared with the simulated ones. Figure \\ref{fig:DGMFs-nir} shows the mean DGMF of quiescent clouds (LDN1719, Lupus V, Cha III, and Musca) and a DGMF of a typical star-forming cloud (Taurus) from \\citet{kai09}, and a mean DGMF of ten IRDCs from KT13. Figure \\ref{fig:slopes} shows the ranges of the observed slopes from \\citet{kai09}, which span $\\alpha = [-0.17, -0.45]$ for 13 nearby star-forming clouds and $\\alpha = [-0.35, -1.2]$ for four quiescent clouds. The range of IRDC slopes from KT13 is also shown. We note that the DGMFs of IRDCs in KT13 were derived from a slightly different column density range than those of nearby clouds (they begin from $N(\\mathrm{H}_2) \\approx 7 \\times 10^{21}$ cm$^{-2}$). Thus, the comparison of them with the other data should be considered only suggestive.\n\n\nThe dependence of the DGMF slopes on the turbulence driving allows us to constrain $b$ (see Fig. \\ref{fig:slopes}). None of the simulations shows as steep slopes as observed in starless clouds. From the non-magnetized simulations, only those with $b=1\/3$ are in agreement with the nearby star-forming clouds. Magnetic fields can steepen the slopes by about $0.05$ (Fig. \\ref{fig:slopes}, center). Therefore, from the magnetized runs those with $b=1\/3$, or $b=0.4$ and $B \\ge 3 \\ \\mu$G agree with star-forming clouds. \\emph{The fully compressive simulations produce a greatly higher fraction of dense gas than observed in nearby clouds}. 
The comparison suggests a low $b$ for nearby molecular clouds on average, possibly lower than previously estimated by \\citet{pad97apj} and \\citet{bru10taurus} in Taurus, $b \\approx 0.5$.\n\nThe DGMF slopes correlate with the SFE, depending on whether the cloud is magnetized or not. Since in the current view clouds have magnetic fields \\citep[][]{cru12}, the spread of slopes is likely the most realistic in magnetized simulations (i.e., 0.1, see Fig. \\ref{fig:slopes}). Thus, it seems that part of the spread in the observed slopes originates from the SFEs of the clouds. We used a Monte Carlo simulation to estimate whether all the variation in the observed slopes can originate from changes in the SFE and statistical variations. We assumed that the changes due to SFE are uniformly distributed between $[0, 0.1]$ and the statistical variations are normally distributed with $\\sigma = 0.04$. The test showed that the probability that 13 clouds span a range $>0.28$ is $0.2\\%$. Note that the range of the observed slopes can be wider. KT13 showed that IRDCs possibly have flatter DGMFs than nearby clouds (Fig. \\ref{fig:slopes}). In conclusion, it seems likely that the spread of the observed DGMF slopes cannot be explained by statistical variations and changes in the SFE alone. Changes in the clouds' average compression provide one possible source to account for this variation.\n\nOne interesting question for the future is to examine the effect of cloud mass on the DGMFs. There are no very massive clouds in the nearby cloud sample (median mass $0.5 \\times 10^4$ M$_\\odot$). In contrast, the median mass of the IRDCs is $5 \\times 10^4$ M$_\\odot$, which is ten times higher. This could contribute to the differences seen in the slopes of the two cloud sets. However, as discussed earlier, comparing DGMFs of IRDCs with nearby clouds is not without caveats. 
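The Monte Carlo test described above is simple to reproduce with the stated distributions (a uniform SFE-driven shift in $[0, 0.1]$ plus Gaussian statistical scatter with $\sigma = 0.04$, for 13 clouds); the random seed and trial count below are arbitrary:

```python
import numpy as np

# Monte Carlo estimate of the probability that 13 clouds span a slope
# range > 0.28 if the only variations are an SFE-driven shift, uniform
# in [0, 0.1], plus Gaussian statistical scatter with sigma = 0.04.
rng = np.random.default_rng(42)          # seed and trial count arbitrary
n_clouds, n_trials = 13, 200_000
slopes = (rng.uniform(0.0, 0.1, size=(n_trials, n_clouds))
          + rng.normal(0.0, 0.04, size=(n_trials, n_clouds)))
span = slopes.max(axis=1) - slopes.min(axis=1)
p = (span > 0.28).mean()                 # of order 0.2%, as quoted
```

The estimate lands at the sub-percent level quoted in the text, supporting the conclusion that SFE differences and statistical scatter alone cannot produce the observed spread.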
The question could be properly addressed by a study of a statistical sample of IRDCs, or a study of the nearest high-mass clouds (e.g., Orion, Cygnus, Rosette) employing \\emph{Herschel} data.\n\nThe weak dependence of the DGMF slopes on ${\\cal M}_\\mathrm{s}$ appears to be an effect of the narrow column density range we examine (note that the results were derived from simulations that have differing physical resolutions and are only suggestive). The density PDF is expected to respond to ${\\cal M}_\\mathrm{s}$ following Eq. \\ref{eq:b}, which should be reflected in the DGMFs. However, it appears that in the range of $N(\\mathrm{H}_2) = 3-11 \\times 10^{21}$ cm$^{-2}$ the effect is insignificant. This result is in agreement with \\citet{goo09} who did not detect any dependence between column density PDF widths and CO linewidths in Perseus. However, we recently measured the column density PDF widths using a high-dynamic-range technique (KT13) and concluded that if a wider range is examined, the PDF widths correlate with ${\\cal M}_\\mathrm{s}$. \n\n\nWhen comparing observed DGMFs with simulations, it should be kept in mind that in simulations \"driving\" is well-defined and ideal: energy is injected at large scales, with certain characteristics such as the divergence and curl. In real clouds, energy is likely injected at multiple scales and the characteristics of the driving can depend on the scale. However, if some of these driving modes excite more compression than others, particular regions in a cloud, and hence, also clouds \\emph{on average}, can show characteristics of the flows produced with ideal driving with different mixtures of solenoidal and compressive modes.\n\n\nFinally, we comment on the relation between the DGMFs and column density PDFs. The column density PDFs of nearby clouds are log-normal below $N(\\mathrm{H}_2) \\lesssim 3 \\times 10^{21}$ cm$^{-2}$. 
In the range $N(\\mathrm{H}_2) = 3-25 \\times 10^{21}$ cm$^{-2}$, they are in agreement with either powerlaws or (wide) log-normals (KT13). It is not established if the PDFs above $N(\\mathrm{H}_2) \\gtrsim 3 \\times 10^{21}$ cm$^{-2}$ are log-normals (KT13) or powerlaws \\citep[][see Fig. \\ref{fig:pdfs}]{sch13}. Importantly, it follows from Eq. \\ref{eq:dgmf-pdf} that \\emph{a log-normal PDF yields an exponential DGMF and a powerlaw PDF yields a powerlaw DGMF.} The simulated DGMFs in the range $N(\\mathrm{H}_2) \\gtrsim 3-25 \\times 10^{21}$ cm$^{-2}$ appear exponential at the early stages. Therefore, the column density PDFs at these stages are close to log-normals. When the simulations evolve, the DGMFs become closer to powerlaws (see FK13). This means that the underlying column density PDF transits from a log-normal to a powerlaw.\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nWe have examined the relationship between the dense gas mass fraction (DGMF), star formation, and turbulence properties in molecular clouds by comparing DGMFs derived from isothermal, magneto-hydrodynamic, self-gravitating turbulence simulations to observed ones. Our conclusions are as follows. \t\n\\begin{enumerate}\n\n \\item Simulations predict close-to exponential DGMFs for molecular clouds in the column density range of $N(\\mathrm{H}_2) = 3-11 \\times 10^{21}$ cm$^{-2}$. The DGMF slopes span the range $\\alpha = [-0.41, -0.023]$, being clearly steeper at the early stages of the simulations compared to the stages when stars are forming ($\\mathrm{SFE} \\geq 1$\\%). These predictions are accurate on a 70\\% level up to $N(\\mathrm{H}_2) \\approx 25 \\times 10^{21}$ cm$^{-2}$.\n \n \\item The DGMF slopes depend strongly on the turbulence driving ($b$). They depend less, but significantly, on the exact SFE. The dependence on the SFE is stronger in magnetized than non-magnetized cases. Generally, the effect of the magnetic field to the DGMF is small. 
Also ${\\cal M}_\\mathrm{s}$ has a negligible effect on the slopes in the examined column density range. The statistical variations are comparable to those arising from varying SFE. However, how compressive the turbulence is (i.e., parameter $b$) is the largest single factor in determining the slope of the DGMF in the simulations. \n \n \\item The observed DGMFs can be used to constrain the turbulence driving parameter $b$. The DGMFs of nearby clouds are only reproduced by simulations that are driven by relatively non-compressive forcing, i.e., $b = 1\/3$ or 0.4. The fully compressive simulations ($b = 1$) over-estimate the DGMFs greatly. Massive IRDCs can show flatter DGMFs that are in agreement with more compressive driving. The spread of the observed DGMFs cannot be explained by different SFEs and statistical variations alone. Variations in the clouds' average compression level offer one explanation to account for the observed spread. \n \n\\end{enumerate}\n\n\n\\begin{acknowledgements}\nThe work of JK was supported by the Deutsche Forschungsgemeinschaft priority program 1573 (\"Physics of the Interstellar Medium\"). C. F. acknowledges a Discovery Projects Fellowship from the Australian Research Council (grant DP110102191).\t\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nThe origin of the galactic halo has been an interesting question ever since halo stars and clusters became the defining objects of population II (\\citealt{Baade1944}, \\citealt{OConnell1958}). Halo subdwarfs were soon recognized to be metal-poor (\\citealt{Chamberlain1951}), as were red giant branch (RGB) stars in globular clusters of the halo (\\citealt{Helfer1959}). Halo cepheids are now called Type II Cepheid (T2C) stars, and were first recognized by \\citet{Joy1937}, and found to be metal-poor by \\citet{Abt1954} and \\citet{Rodgers1963}. 
Short period cepheids were found in metal-poor globular clusters by \\citet{Arp1955}, and catalogued by \\citet{Clement2001}.\\footnote{The most recent update is to be found at http:\/\/www.astro.utoronto.ca\/~cclement\/cat\/listngc.html}\n\nTo the surprise of some, \\citet{Woolley1966} found that the halo cepheids showed thick disk kinematics and, hence, were likely to be only moderately metal-poor. The short period stars received the name BL~Her, after a star that has a period of 1.3 days and is known to show a metallicity of --0.16 (\\citealt{Maas2007}).\n\nIn an effort to further understand the T2C stars of the galactic halo, \\citet{Maas2007} derived metallicities of 19 stars, 7 of which have periods of 3 days or less. Except for one star, UY~Eri, they showed very modest depletions of heavy elements and, hence, fit the classification of BL~Her stars. To further expand the database for T2C stars in the halo, we have obtained high resolution spectra of field T2C stars, 10 of which have periods below 3 days. The stars were selected to be bright enough for the available telescopes, and to be conveniently placed when observing time was assigned.\n\nIn this paper, we report on the chemical composition of the 10 T2C field stars with periods from 0.9 to 3 days. The upper limit marks the edge of an almost empty gap from 4 to 9 days in the distribution shown in Figure~1 of \\citet{Soszynski2008} and Figure~6 of \\citet{Schmidt2011}.
The separation of both cepheid strip candidates and Type II Cepheids into metal-normal and metal-poor stars can be seen in Figure~8 of \\citet{Schmidt2011}.\n\n\\begin{table*}[t]\n \\scriptsize\n \\caption{Details of Observations and Stellar Parameters}\n \n \\label{tab:atmparam}\n \\begin{adjustbox}{width=\\textwidth,center=\\textwidth}\n {\\begin{tabular}{rcccrccccrl}\n \\hline \\hline\n Star & P(days) & Telescope & JD (2400000+) & Exp (s) & Phase & ${T_{\\rm eff}}$ {} (K) & $\\log \\mathrm{g}$ {} & $\\mathrm{V_{t}}$ {} (km s$^{-1}$) & [Fe\/H] & Remarks \\\\\n \\hline\n UY~CrB & 0.929 & APO & 53162.6736 & 3600 & 0.491 & 6150 & 1.8 & 2.0 & --0.43 & \\\\\n & & APO & 53456.9981 & 1800 & 0.245 & 6700 & 2.0 & 2.4 & --0.32& \\\\\n & & APO & 53626.7528 & 1800 & 0.839 & 6300 & 2.5 & 2.0 & --0.47& \\\\\n NSV 10788 & 1.081 & VLT & 56124.7937 & 1200 & 0.451 & 6250 & 2.3 & 2.8 & --2.41 & \\\\\n V716~Oph & 1.116 & APO & 52336.9382 & 1200 & 0.154 & 7000 & 1.8 & 2.7 & --1.64 & \\\\\n & & APO & 52417.7674 & 1800&0.595 & 6100 & 2.2 & 2.6 & --1.67 & \\\\\n & & APO & 52447.6805 & 1800&0.380 & 6600 & 2.6 & 2.6 & --1.56 & \\\\\n & & APO & 52448.6667 & 1800&0.284 & 6700 & 2.2 & 2.6 & --1.72 & \\\\\n & & APO & 52449.7423 & 1800&0.248 & 6750 & 2.0 & 2.3 & --1.62 & \\\\\n BF~Ser & 1.165 & APO & 52336.8431 & 2400 & 0.652 & 5800 & 1.0 & 2.3 & --2.15 & \\\\\n & & APO & 52417.7090 & 1800 & 0.038 & 7300 & 2.2 & 3.0 & --2.04 & \\\\\n & & APO & 52804.7590 & 900 & 0.145 & 7000 & 2.0 & 3.0 & --2.08 & H$\\alpha$ double or emission \\\\\n & & APO & 53457.8495 & 1800 & 0.526 & 6300 & 2.1 & 2.2 & --2.04 & H$\\alpha$ double or emission \\\\\n BL~Her & 1.307 & APO & 53163.8646 & 900 & 0.104 & 7000 & 2.2 & 2.2 & --0.12 & \\\\\n & & OHP & 49572.3784 & 900 & 0.147 & 6650 & 2.5 & 2.2 & --0.20 & \\\\\n XX~Vir & 1.348 & APO &52417.6479 & 1800 & 0.070 & 7500 & 2.2 & 2.3 & --1.62& \\\\\n & & APO &52449.7188 & 1800 & 0.858 & 6100 & 2.5 & 2.8 & --1.51& \\\\\n & & APO &53541.6750 & 1800 & 0.791 & ... & ... & ... 
& ... & \\\\\n V1287~Sco & 1.956 & VLT &56126.4862 & 1500 & 0.487 & 5950 & 2.2 & 3.5 & --1.94 & H$\\alpha$ emission \\\\\n V553~Cen & 2.061 & ESO & 56748.6532 & 90 & 0.818 & 6060 & 2.2 & 2.7 & 0.01 & see \\citet{Wallerstein1996} \\\\\n UY~Eri & 2.213 & APO & 52984.7985 & 1800 & 0.889 & 6400 & 2.0 & 2.0 & --1.83 & H$\\alpha$ emiss or double \\\\\n & & APO & 53687.7856 & 1800 & 0.511 & 6200 & 1.8 & 2.6 & --1.66 & \\\\\n & & APO & 54044.7828 & 1800 & 0.809 & 6000 & 1.9 & 2.6 & --1.70 & H$\\alpha$ emission \\\\\n AU~Peg & 2.402 & APO & 53626.7292 & 900 & 0.848 & 6008 & 2.0 & 2.8 & 0.33 & orbit phase = 0.538 \\\\\n & & APO & 53687.6390 & 900 & 0.109 & 5544 & 1.5 & 2.3 & 0.21 & orbit phase = 0.681 \\\\\n \\hline\n \\end{tabular}}\n \\end{adjustbox}\n \n \\begin{itemize}\n \\item[] {\\it Remarks: \\\\}\n \n(a) -- {\\it NSV 10788} is a star with P = 1.08 days and a metallicity of [Fe\/H] = --2.4. The light curve shows an amplitude of 0.5 magnitudes. Thus, the classification of this star as a cepheid is uncertain.\n\n(b) -- For {\\it BF~Ser}, all four spectra were taken at important phases. This star is very likely a member of the UY~Eri group. It is a metal-poor star with a pulsation period of 1.2 days.\n\n(c) -- {\\it XX~Vir} is a high latitude star of the UY~Eri group with a short pulsation period of 1.3 days. For one spectrum, with emission in iron lines, the chemical composition was not determined.\n\n(d) -- {\\it V1287~Sco} is a variable star of UY~Eri type. There is not much information in the literature about this star. H$\\alpha$ emission is seen in the spectrum. The lines are slightly asymmetric.\n\n(e) -- {\\it V553~Cen} is a star recognized by \\citet{Evans1983} as a rare C-rich Cepheid. Its composition was investigated by \\citet{Wallerstein1998}.\n\n(f) -- {\\it AU~Peg} is a spectroscopic binary \\citep{Harris1984} with an orbital period of 53.3 days. The chemical composition of this star was studied by \\citet{Harris1984} and \\citet{Maas2007}.
It appears to be slightly metal rich, which may be due in part to mass transfer from its unobserved companion.\n\n \\end{itemize}\n\\end{table*}\n\n\\section{Observations and Data Reduction}\n\\label{sec:obs}\n\nSeven objects were observed with the 3.5-m telescope at the Apache Point Observatory with the ARC Echelle Spectrograph (ARCES). By using a prism as a cross-disperser, the APO echelle captures the entire spectrum from 3500 \\AA{} to 10\\,400 \\AA{} with a resolving power of 31\\,500. However, the red-sensitive 2048x2048 chip has decreasing sensitivity for cool stars at shorter wavelengths and beyond 9000 \\AA{}. The observations were obtained as part of a program to derive the chemical composition of certain Type II Cepheids and RR Lyrae stars. The exposure times were approximately 10--30 minutes. The estimated S\/N ratio per pixel at the continuum level, depending upon the wavelength interval, is approximately 80--150. The uncertainty in the determination of velocities is a few tenths of a km s$^{-1}$.\n\nTwo objects were observed with the cross-dispersed echelle spectrograph, UVES, at the Very Large Telescope.\\footnote{Based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere under ESO programme 089.D-0489(A).} The red arm was used, which covers the wavelength region between 4200 \\AA{} and 11\\,000 \\AA{}. Standard instrumental settings were used to achieve wavelength coverage from 4790--5760 \\AA{} and 5830--6810 \\AA{} with a resolution of 0.16 \\AA{}. The observations were done in service mode. For some objects, several spectra were obtained. The exposure times were approximately 20--30 minutes. The primary data reduction, such as bias subtraction, flat-field correction, wavelength calibration, sky subtraction, and spectrum extraction, was performed with the UVES pipeline (\\citealt{Ballester2000}).\n\nOne object, the carbon cepheid, V553~Cen, was taken from the ESO archive.
The star was observed with the echelle spectrograph HARPS at the ESO La Silla 3.6-m telescope. The spectral range is 4000--6800 \\AA{} with a resolving power of 100\\,000. For BL~Her, an additional spectrum was taken from the archive of the Elodie spectrograph at the Observatoire de Haute-Provence 1.93-m telescope (R=40\\,000, $\\lambda$ 4000-6800 \\AA{}). The observed stars and their atmospheric parameters are given in Table 1.\n\n\\section{Spectroscopic Analysis}\n\\label{sec:spec}\n\nTo determine the effective temperature, $T_{\\rm eff}$, we used the line depth ratio method of \\citet{Kovtyukh2007}. The uncertainties in this method range from 30 to 100 K depending on the S\/N. To set the $\\log{g}$ value, we required that the iron abundances, as determined from lines of FeI and FeII, be equal. \n\nElemental abundances were determined using LTE and NLTE approximations combined with atmospheric models by \\citet{Castelli2004}, computed for the parameters of each star. The solar abundances were computed for lines from the solar spectrum \\citep{Kurucz1984} with $\\log{gf}$ from the Vienna Atomic Line Database (VALD) \\citep{Kupka1999}, and the solar model by \\citet{Castelli2004}. They are listed in \\citet{Lemasle2015}.
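The ionization-balance determination of $\log g$ described above, requiring that the Fe I and Fe II abundances agree, can be sketched as a simple bisection. This is an illustrative sketch only: the abundance sensitivities to $\log g$ below are invented placeholders, not values from this analysis.

```python
def fe_abundances(logg):
    """Hypothetical A(Fe I), A(Fe II) as functions of log g.

    Fe II lines are far more gravity-sensitive than Fe I lines, which is
    the physical lever behind the ionization-balance method. The slopes
    and zero points here are placeholders for illustration.
    """
    a_fe1 = 5.90 + 0.02 * logg    # Fe I: weak log g dependence
    a_fe2 = 6.30 - 0.18 * logg    # Fe II: strong log g dependence
    return a_fe1, a_fe2

def balance_logg(lo=0.0, hi=5.0, tol=1e-6):
    """Bisect for the log g at which A(Fe I) = A(Fe II)."""
    diff = lambda g: fe_abundances(g)[0] - fe_abundances(g)[1]
    assert diff(lo) * diff(hi) < 0, "ionization balance not bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With these toy sensitivities, the two ionization stages agree at log g = 2.0.
assert abs(balance_logg() - 2.0) < 1e-4
```

In practice the two abundances come from a full spectrum-synthesis or equivalent-width analysis at each trial gravity, but the root-finding structure is the same.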
\n\n\\begin{table}\n \\centering\n \\caption{Table of $\\log (gf)$ and $\\chi_{\\rm{exc}}$ for the Na and O lines used.}\n \\label{tab:loggf}\n \\begin{tabular}{crr}\n \\hline \\hline Lambda & $\\chi_{\\rm{exc}}$ & $\\log (gf)$\\\\ \\hline\n & Sodium & \\\\ \\hline\n 5682.63 & 2.09 & --0.71 \\\\\n 5688.19 & 2.10 & --0.41 \\\\\n 6154.23 & 2.09 & --1.56 \\\\\n 6160.75 & 2.10 & --1.26 \\\\\n \\hline\n & Oxygen & \\\\ \\hline\n 6300.30 & 0.00 & --9.717 \\\\\n 7771.94 & 9.11 & 0.369 \\\\\n 7774.17 & 9.11 & 0.223 \\\\\n 7775.39 & 9.11 & 0.001 \\\\\n 8446.25 & 9.52 & --0.462 \\\\\n 8446.36 & 9.52 & 0.236 \\\\\n 8446.76 & 9.52 & 0.014 \\\\ \n \\hline\n \\end{tabular}\n\\end{table}\n\n\\subsection{Oxygen}\n\nThe NLTE model of the oxygen atom was first described by \\citet{Mishenina2000}, and then updated by \\citet{Korotin2014}. The model consists of 51 OI levels of singlet, triplet, and quintet systems, and the ground level of the OII ion. Fine structure splitting was taken into account only for the ground level and the 3p5P level (the upper level of the 7772,4,5 triplet lines). A total of 248 bound-bound transitions were included. Oxygen line parameters are listed in Table \\ref{tab:loggf}. The high excitation OI triplet suffers from departure from LTE (\\citealt{Parsons1964}, \\citealt{Altrock1968}, \\citealt{Amarsi2016}). Its strength depends sensitively on the star's surface gravity. In stars of high luminosity, the triplet is greatly enhanced since radiative effects dominate recombination and ionization, as compared to collisional excitation, which is controlled by the local temperature.\n\n\\subsection{Sodium}\n\nSodium line parameters are given in Table \\ref{tab:loggf}. We derived the sodium abundances by line profile fitting. The NLTE atomic model of sodium was presented by \\citet{Korotin1999} and then updated by \\citet{Dobrovolskas2014}. The updated sodium model currently consists of twenty energy levels of NaI and the ground level of NaII. 
In total, 46 radiative transitions were taken into account for the calculation of the population of all levels. For four stars, the Na D lines were saturated, so we used the pair of lines at 5682 \\AA{} and 5688 \\AA{}. For stars with nearly solar metallicity, we used the weaker pair at 6154 \\AA{} and 6160 \\AA{}. We chose not to use the 8183 \\AA{} and 8194 \\AA{} lines due to blending with absorption by atmospheric lines, which depends on the stellar radial velocity and the humidity above the observatory.\n\nThe results of the abundance analysis of our program stars are given in Table \\ref{tab:MetalCMn}.\n\n\\section{Discussion}\n\\label{sec:disc}\n\nThe T2C stars have been divided into W Vir stars with periods from 9 to 30 days (though stars with periods greater than 20 days often show mild RV Tau properties); and the BL Her stars with periods less than 3 days.\\footnote{The gap from 3 to 9 days is violated by variable 3 in the globular cluster M10.} We advocate for the separation of the 1--3 day stars into the BL Her group with near-solar metallicity, and the distinctly metal-poor group that we will call the UY Eri class. The latter are to be found also in metal-poor globular clusters \\citep{Clement2001}. The relatively metal-rich globulars with [Fe\/H] $\\textgreater$ --1.0 do not have cepheids with periods less than 3 days. Of the 10 stars in Table 1, 6 fall into the metal-poor UY Eri group, all of which show [Fe\/H] $\\textless$ --1.5.\n\nOur database may be enhanced by including the abundances in \\citet{Maas2007}. Just by chance, all of their short period stars, except for UY~Eri, fall into the more metal-rich BL~Her class. Hence, we have combined the two sources to develop mean abundance ratios for the BL~Her subgroup, and used only our data for the metal-poor UY~Eri group. Our abundance data are summarized in Table~\\ref{tab:OurStats}.\n\nFor our small sample, formal probable errors are less indicative of the accuracy of a quantity than one might hope.
Hence, we show the dependence of [O\/Fe] and [$\\alpha$\/Fe] on [Fe\/H] in Figure~\\ref{XFe}, using the elements Mg, Si, Ca, and Ti. The dependence of [O\/Fe] on [Fe\/H] is similar to what is seen in both giants and dwarfs, with a rise of [O\/Fe] from near 0.0 at [Fe\/H] = 0.0 to +0.7 between [Fe\/H] = --1.5 and --2.5. For [$\\alpha$\/Fe], the pattern is similar to that found in many other investigations of the $\\alpha$-elements, rising from near 0.0 to +0.5 for the metal-poor UY~Eri group.\n\nThe metal-rich BL Her group shows abundances of 22 elements that differ little in their ratios relative to iron, with the possible exception of sodium. For the UY~Eri group, the abundances of oxygen and the $\\alpha$-elements from Mg to Ti show excesses that are comparable to the excesses in most metal-poor red giants and main sequence stars.\n\nWe await the discovery and observation of additional T2C stars by Gaia and eventually the Large Synoptic Survey Telescope (LSST). The Gaia spectra in the 8400--8800 \\AA{} region may be used to derive metallicities from the strength of the Ca II IR triplet, as has been calibrated for RR Lyrae stars by \\citet{Wallerstein2012}.\n\n\\subsection{Carbon, Nitrogen, and Oxygen}\n\nThere are a few CI lines of high excitation in these stars. Despite our NLTE analyses, the scatter of [C\/Fe] is too great to consider the mean values of [C\/Fe] to be definitive. A significant excess in [N\/Fe] for the BL~Her stars is approximately what is expected from the enhancement of nitrogen by the CNO cycle and internal mixing. For the metal-poor UY~Eri stars, the nitrogen lines are too weak for analysis. For oxygen, the analysis depends upon the OI triplet at 7772,4,5 \\AA{} and the 8446.63 \\AA{} lines.
As we have shown in Figure~\\ref{XFe}, for the metal-poor UY~Eri stars, the mean excess in [O\/Fe] is $0.72 \\pm 0.09$, which is typical for metal-poor stars.\n\n\\subsection{Light Elements}\n\nFor sodium, we see small excesses that are approximately equal to their uncertainties for both groups of stars. In most, if not all, globular clusters, some red giants show a significant excess of sodium in second generation stars \\citep{Carretta2009}. Our abundances are based on the lines arising from the 2.1 eV excited level. We did not use the Na D lines because they are usually too strong and are sometimes blended with circumstellar or interstellar lines. For the alpha elements, Mg, Si, Ca, and Ti, the BL~Her stars do not show a difference in their abundances relative to iron, as is usual for stars with near-solar metallicity. S is omitted from the $\\alpha$-elements because its lines are too weak for analysis, except in UY~Eri. For the UY~Eri stars, we find a mean value of [$\\alpha$\/Fe] = $0.35\\pm0.19$, which is typical of metal-poor stars.\n\n\\begin{figure}\n\\epsscale{1.2}\n\\plotone{f1.pdf}\n\\caption{~The dependence of [O\/Fe] and [$\\alpha$\/Fe] on [Fe\/H] for the observed stars. \\label{XFe}}\n\\end{figure}\n\n\\begin{table*}\n \n \\tiny\n \n \n \\caption{Abundances Relative to Fe in Type II Cepheids (C--Ti)}\n \\label{tab:MetalCMn}\n \\begin{adjustbox}{width=\\textwidth,center=\\textwidth}\n {\\begin{tabular}{rrrrrrrrrrrrrrr}\n \\hline \\hline\n & P, days & [Fe\/H] & C & O & Na & Mg & Al & Si & S & Ca & Sc & Ti \\\\\n \\hline\n \n \n UY~CrB & 0.929 & --0.40 & 0.65 & 0.67 & 0.52 & 0.31 & 0.35 & 0.24 & 0.25 & 0.20 & 0.01 & 0.23 \\\\\n NSV 10788 & 1.081 & --2.41 & ... & ... & ... & 0.52 & ... & ... & ... & 0.49 & 0.27 & 0.37 \\\\\n V716~Oph & 1.116 & --1.64 & 0.43 & 0.72 & 0.36 & 0.21 & ... & 0.14 & ... & 0.53 & 0.23 & 0.49 \\\\\n BF~Ser & 1.165 & --2.08 & 0.13 & 0.73 & ... & 0.31& ... & ... & ...
& 0.75 & 0.20 & 0.41 \\\\\n BL~Her & 1.307 & --0.16 & 0.36 & 0.35 & 0.63 & 0.01 & 0.00 & --0.01 & 0.10 & 0.01 & 0.06 & 0.15 \\\\\n XX~Vir & 1.348 & --1.55 & ... & 0.83 & ... & 0.19 & ... & 0.05 & ... & 0.46 & 0.26 & 0.39 \\\\\n V1287~Sco & 1.956 & --1.94 & ... & ... & ... & 0.55 & ... & ... & ... & 0.52 & 0.25 & 0.47 \\\\\n V553~Cen & 2.061 & 0.01 & 0.78 & --0.11 & 0.43 & 0.19 & 0.12 & 0.00 & 0.10 & 0.23 & 0.28 & 0.28 \\\\\n UY~Eri & 2.213 & --1.73 & --0.19 & 0.61 & 0.18 & 0.37 & ... & 0.06 & 0.33 & 0.60 & 0.06 & 0.45 \\\\\n AU~Peg & 2.402 & 0.26 & 0.17 & 0.15 & --0.04 & --0.07 & 0.18 & --0.11 & 0.15 & --0.14 & --0.27 & --0.04 \\\\\n \\hline\n \\end{tabular}}\n \\end{adjustbox}\n \\\\\n \n\\end{table*}\n\n\\section{Conclusions}\n\\label{sec:conc}\n\nWe have found that the T2C stars with periods of 1--3 days may be divided into a group of nearly normal metallicity, usually called BL~Her stars, and a newly recognized group with [Fe\/H] = --1.5 to --2.4. The BL~Her group probably belongs to the thick disk, while the latter are similar to short period cepheids in metal-poor globular clusters. With a few exceptions, it appears that globulars with [Fe\/H] $\\textgreater$ --1.0 do not have cepheid members of any period. \n\nThe relationship between globular clusters and the general halo of our galaxy is not clear. \\citet{Martell2016} have shown that only a very small percentage of the general halo could have come from the evaporation of stars from globular clusters. Since all stars seem to form in groups and clusters, it is possible that the single halo stars are descendants of small, loose clusters no longer recognizable as such. In addition, the populations of variable stars in the halo and globulars are not the same. \\\\\n\nWe thank Giuseppe Bono, Charli Sakari, and Joanne Hughes for reading the manuscript and making some good suggestions.
This research has been supported by the Kennilworth Fund of the New York Community Trust.\n\n\\begin{table}\n \\centering\n \\caption{Mean abundances of UY~Eri stars from this study, and BL~Her stars from this study and \\citet{Maas2007} for critical elements relative to Fe.}\n \\label{tab:OurStats}\n \\begin{tabular}{lrrrrc}\n \\hline \\hline\n Element & Mean$_{BL}$ & $\\sigma_{BL}$ & Mean$_{UY}$ & $\\sigma_{UY}$ & Mean$_{\\mid BL-UY \\mid}$\\\\\n \\hline\n O & 0.24 & 0.29 & 0.72 & 0.27 & 0.48 \\\\\n Na & 0.37 & 0.26 & 0.27 & 0.20 & 0.10 \\\\\n Mg & 0.08 & 0.16 & 0.35 & 0.20 & 0.27 \\\\\n Si & 0.02 & 0.13 & 0.13 & 0.14 & 0.11 \\\\\n S & 0.14 & 0.07 & 0.50 & 0.26 & 0.36 \\\\\n Ca & 0.05 & 0.16 & 0.54 & 0.26 & 0.49 \\\\\n Ti & 0.14 & 0.13 & 0.41 & 0.17 & 0.27 \\\\\n $[$Fe\/H$]$ & --0.07 & 0.24 & --1.89 & 0.29 & 1.82 \\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe successes of quantum computing in the past decade have laid the foundations for the interdisciplinary field of quantum machine learning (QML), where parameterized quantum circuits (PQC) are used as machine learning models to extract features from the training data. It was argued~\\cite{amirapaper} that such quantum neural networks (QNN) could have higher trainability, capacity, and generalization bound than their classical counterparts. Practically, hybrid quantum neural networks (HQNN) have shown promise in small-scale benchmarking tasks~\\cite{boston-housing,schetakis2022review} and larger-scale industrial tasks~\\cite{vw-paper,pharma}. Nevertheless, the utility, practicality, and scalability of pure QNNs are still unclear. 
Furthermore,~\\cite{schuld-advantage} provided a thorough overview of this field, showing that while classical machine learning is solving large real-world problems, QNNs are mostly tried on synthetic, clean datasets and show no immediate real-world advantage in their current state\\footnote{This is true for pure quantum models trained on classical datasets. There are numerous successes in applying quantum models to quantum-native problems~\\cite{hhl,vqe,discocat}.}. It also suggested that the QNN research focus ought to be shifted from seeking quantum advantage to new research questions, such as finding a new, advantageous quantum neuron. This work explores a new quantum neuron, arguing for moving beyond single-qubit Pauli gates for encoding data and instead employing higher-dimensional unitaries through gate decomposition. \n\nSince quantum gates are represented by elements of compact groups, Fourier analysis is a natural tool to analyse QNNs. \\cite{schuld_fourier} showed that certain quantum encodings can create an asymptotically universal Fourier estimator. A universal Fourier estimator is a model that can fit a trigonometric series to any given function; as the number of terms in the series approaches infinity, the fit becomes an asymptotically perfect estimate. This estimator can initially infer the coarse correlations in the supplied data, and by increasing the number of Fourier terms, it can incrementally judge the more granular properties of the dataset. This provides an adjustable, highly-expressive machine-learning model.
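This incremental behaviour can be illustrated entirely classically: a least-squares fit of truncated trigonometric series of increasing length to a target function improves monotonically as terms are added. The square-wave target and the term counts below are arbitrary illustrative choices, not from the cited works.

```python
import numpy as np

def fit_truncated_fourier(x, y, n_terms):
    """Least-squares fit of y ~ c0 + sum_k (a_k cos kx + b_k sin kx)."""
    cols = [np.ones_like(x)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(k * x))
        cols.append(np.sin(k * x))
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return design @ coeffs

x = np.linspace(-np.pi, np.pi, 400, endpoint=False)
y = np.sign(np.sin(x))                 # square wave: infinitely many harmonics
errors = [float(np.sqrt(np.mean((y - fit_truncated_fourier(x, y, n)) ** 2)))
          for n in (1, 3, 9, 27)]
# Adding Fourier terms monotonically sharpens the estimate of the target.
assert all(b < a for a, b in zip(errors, errors[1:]))
```

The coarse, low-frequency structure is captured first; the finer features near the discontinuities require the higher harmonics.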
\n\nIn \\cite{schuld_fourier}, the authors showed that a QNN can operate as such in two ways: a \\emph{sequential} single-qubit architecture with $n$ repetitions of the same encoding could yield $n$ Fourier bases, which could also arise from an $n$-qubit architecture with the encoding gates applied to all qubits in \\emph{parallel}.\n\nThe sequential architecture is widely shown to be an efficient Fourier estimator, demonstrated both theoretically \\cite{schuld_fourier} and empirically~\\cite{data_reuploading,mo_paper}. However, sequential circuits are often deep \\cite{mo_paper}, and assuming that near-term quantum computers will have a non-negligible noise associated with each gate, these circuits can experience noise-induced barren plateaus~\\cite{noise-bp}. Barren plateaus are a phenomenon observed in optimization problems where the gradients of the model vanish, rendering them impossible to train. More importantly, a single qubit can be simulated efficiently in a classical setting, so this architecture brings no quantum advantage.\n\nIn contrast, the parallel setting offers the exponential space advantage of quantum computing but poses two challenges for large numbers of qubits: \n\\begin{itemize}\n \\item An exponentially growing parameter count with the number of qubits, required to span the entire group space $\\mathcal{SU}(2^n)$~\\cite{haar_measure_paper}, which could also lead to noise-induced barren plateaus. Spanning the full space is especially important for a priori problems, where the best model lies somewhere in the Hilbert space, but the machine learning scientist has no prior knowledge of how to parameterize a circuit to reach this point. In addition, gradient calculations in QNNs -- as of this publication -- use the parameter-shift rule discovered and presented in \\cite{parameter-shift}. This method requires two evaluations of the QNN to find the derivative of the circuit with respect to each of the trainable parameters. 
An exponentially growing number of trainable parameters translates to an exponentially increasing amount of resources required for gradient computation. In mitigation, \\cite{poly1,poly2} showed that a polynomially-growing number of parameters could generate a similar result based on the quantum $t$-design limits~\\cite{t-design}.\n \n \\item Strongly-parameterized QNNs have a gradient variance that vanishes exponentially with the number of qubits~\\cite{bp}. This means that, for a large number of qubits, a randomly initialized QNN will encounter a barren plateau. This happens because the expectation value of the derivative of the loss function with respect to each variable for any well-parameterized quantum circuit is zero, and its variance decreases exponentially with the number of qubits. Mitigation methods have been suggested in \\cite{bp-mit1,bp-mit2}, and most notably in \\cite{log-nobp} it was shown that by relaxing the well-parameterized constraint to a circuit depth that grows only logarithmically with the number of qubits and using local measurements, the circuit is guaranteed to evade barren plateaus. \\cite{bp-zx} developed a platform based on ZX-calculus\\footnote{A graphical language, based on tensor networks, to analyze quantum circuits\\cite{zx1}.} to explore which QNN architectures are affected by the barren plateau phenomenon and found that strongly-entangling, hardware-efficient circuits suffer from them.
In contrast with the previously-mentioned cases of barren plateaus, the latter is not noise-induced, and thus it is a problem that needs to be addressed even in the fault-tolerant future of quantum computing.\n\\end{itemize} \n\nTherefore, the practicing QML scientist is limited in choosing her QNN architectures for general data science problems: they need to be shallow or employ only a few qubits.\n\nThis contribution suggests a modification to the encoding strategies in \\cite{schuld_fourier} to increase the growth of the Fourier bases in a QNN from linear in the number of qubits\/number of repetitions to exponential. The proposed encoding is constructed by decomposing large unitary generators into local Pauli-Z rotations. This improves the expressivity of the QNNs without requiring additional qubits or encoding repetitions. The increased expressivity is a product of eliminating the encoding degeneracies of the quantum kernel, making efficient use of the available Hilbert space by assigning a unique wave-vector to each of its dimensions. However, such encodings could introduce a greater risk of limiting the model's Fourier accessibility\\footnote{Each of these bases has a Fourier amplitude as well as a phase angle, both of which need to be altered to fit a model on the training data. However, depending on the architecture (of both the encoding layers and the trainable layers), these quantities may be limited in the values they can represent. This could significantly limit their ability to represent various functions. This will henceforth be referred to as the \\emph{Fourier accessibility} of the quantum architectures.}.\n\n\n\nSec.~\\ref{sec:lin_arch} provides a review of how angle-embedded QNNs approximate their input distributions by fitting to them a truncated Fourier series.
Specifically, Sec.~\\ref{sec:lin_arch} reviews the linear encoding architectures and how their number of Fourier bases grows linearly with the number of repetitions -- sequential linear in Sec.~\\ref{sec:seq_lin} -- as well as the number of qubits -- parallel linear in Sec.~\\ref{sec:par_lin}. Then, Sec.~\\ref{sec:exp_arch} introduces the same two architectures but with a slight modification to represent an exponentially-growing number of Fourier bases. To use these architectures in practice, Sec.~\\ref{sec:training} compares the training performance of these architectures and shows that the parallel exponential architecture trains better than the others on a synthetic, one-dimensional dataset. Finally, Sec.~\\ref{sec:critical} critically evaluates the work and suggests areas for future investigation. \n\n\n\n\n\\section{Background review -- linear architectures} \\label{sec:lin_arch}\n\nAs discussed in \\cite{schuld_fourier}, all quantum neural networks that use angle embedding\\footnote{Ref.~\\cite{schuld_kernel} explores other embedding strategies too, such as basis embedding and amplitude embedding. This work primarily focuses on angle embedding. Nevertheless, it is worth noting that in the circuit model all circuit parameters enter as angles at some level of the description.} as their encoding strategy produce a truncated Fourier series approximation to the dataset. \n\\cite{schuld_fourier} also specifically explored two families of architectures of quantum neurons: a single-qubit architecture with a series of sequential $\\mathcal{SU}(2)$ gates, and a multi-qubit architecture with parallel $\\mathcal{SU}(2)$ encoding gates.
In this section, the results and the architectures introduced in \\cite{schuld_fourier} are explored in depth; in Sec.~\\ref{sec:exp_arch}, two alternative neuron architectures are presented with the capability of achieving an exponentially higher Fourier expressivity for the same number of gates.\n\nConsider a quantum neuron that maps a real feature $x \\in \\mathcal{X}$ onto the quantum circuit via a parametric gate $S(x)=e^{-i\\mathcal{G}x}$. In most common architectures the only parametric gates are single-qubit rotations $\\{R_x,R_y,R_z\\}$. For this work, the Pauli-Z generated rotations are used without any loss of generality, $\\mathcal{G}=\\frac{1}{2}\\sigma_z=\\frac{1}{2}\\big(\\begin{smallmatrix}\n 1 & 0\\\\\n 0 & -1\n\\end{smallmatrix}\\big)$\\footnote{Extra gates to convert between different generators can be absorbed into the variational gates -- see \\cite{schuld_fourier}.}; the embedding gate then takes the simple form $S(x)= \\big(\\begin{smallmatrix}\n e^{-i x \/2 } & 0\\\\\n 0 & e^{i x \/2}\n\\end{smallmatrix}\\big)$. In general, the dependence of the expected value of any observable on the parameter $x$ is then given by\n\\begin{equation*}\n \\langle M\\rangle = \\bra{\\Psi}(S^\\dag(x)\\otimes \\mathbb{1}) M (S(x)\\otimes \\mathbb{1})\\ket{\\Psi} = (c_0+c_0^*) + c_{1} e^{i x}+ c_1^* e^{-i x}\n\\end{equation*}\nwith some complex parameters $c_0$ and $c_1$ which depend on the rest of the circuit and the measurements.\n\nThis expected value is a function of the feature $x$ with a very simple Fourier series. The \\textit{data re-uploading method}~\\cite{data_reuploading} is a natural way to construct neurons that give rise to richer Fourier series. These are architectures where several parametric gates depend on the same $x$.
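This single-frequency structure can be checked numerically: sandwiching $S(x)$ between randomly drawn unitaries and measuring a random Hermitian observable yields a signal whose discrete Fourier transform is supported only on wavenumbers $\{-1, 0, 1\}$. The sketch below uses NumPy; the random matrices are stand-ins for the variational layers and the measurement, not anything from the cited circuits.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(rng):
    """A random 2x2 unitary from the QR decomposition of a complex Gaussian."""
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases

W0, W1 = random_unitary(rng), random_unitary(rng)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = H + H.conj().T                                 # random Hermitian observable

def f(x):
    """Expectation <psi| M |psi> for |psi> = W1 S(x) W0 |0>."""
    S = np.diag([np.exp(-0.5j * x), np.exp(0.5j * x)])
    psi = W1 @ S @ W0 @ np.array([1.0, 0.0])
    return float(np.real(psi.conj() @ M @ psi))

n = 64
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
coeffs = np.fft.fft([f(x) for x in xs]) / n
ks = np.rint(np.fft.fftfreq(n, d=1.0 / n)).astype(int)   # integer wavenumbers
support = {k for k, c in zip(ks, coeffs) if abs(c) > 1e-10}
# A single encoding layer is band-limited to wavenumbers {-1, 0, 1}.
assert support <= {-1, 0, 1} and len(support) >= 1
```

The amplitudes at $k = 0, \pm 1$ are exactly the $c_0, c_1$ of the expression above for this particular random choice of circuit and observable.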
It is most straightforward to consider gates that have a hardwired dependence on the feature\\footnote{In principle, one may also consider gates $S(x;\\theta)= e^{-i f_\\theta(x) \\mathcal{G}}$ with a \\textit{variational} dependence on the feature $x$, where the parameters $f_\\theta$ are to be learned. However, this could present additional challenges related to the computation of gradients and an increased number of parameters.}. In particular, consider gates such that the expected value of any observable takes the form of a discrete Fourier series \n\\begin{equation}\n\\label{eqn:fourier_general}\n f(x,\\theta) =\\sum_{k} c_k(\\theta) e^{ikx},\n\\end{equation}\nwhere $\\theta$ denotes the variational parameters and $c_k \\in \\mathbb{C}$ with $c^{*}_{-k} = c_k$ for real observables. In Sec.~\\ref{sec:seq_lin} and \\ref{sec:par_lin}, two architectures exhibited in \\cite{schuld_fourier} are reviewed. The \\textit{Fourier expressivities} of these architectures are of particular interest, that is, the list of wavenumbers $\\{k_1,k_2,\\dots\\}$ appearing in the exponents in Eq.~\\eqref{eqn:fourier_general}.\n\n\\subsection{Sequential linear}\\label{sec:seq_lin}\n\nThe single-qubit sequential linear method uses repetitions of the same single-qubit encoding gate $S(x)$ interlaced between trainable variational layers. Fig.~\\ref{fig:archs}a shows this implementation with generalized variational gates. Since the eigenvalues of each unitary are $e^{\\pm i \\frac{1}{2}x}$, it is\nstraightforward to observe (see e.g. App.~\\ref{appendix:seq_lin}) that after $n$ encoding layers the expected value of any observable takes the form\n\\begin{equation}\n\\label{eqn:fourier_first}\n f(x,\\theta) =\\sum_{k=-n}^{n} c_k(\\theta) e^{ikx}.\n\\end{equation}\nThus, the repetitions have an additive effect such that for $n$ repetitions the final list becomes $\\{-n,-n+1,\\cdots,0,\\cdots,n-1,n\\}$. Each wavenumber in the list gives rise to a sinusoidal term of the corresponding frequency.
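This additive effect can be verified numerically by simulating $n$ re-uploading layers with randomly drawn unitaries standing in for the trainable layers and Fourier-transforming the resulting $f(x)$; the spectral support is confined to $\{-n, \dots, n\}$. The choice of $n = 4$ layers and random seeds below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(rng):
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

n_layers = 4
Ws = [random_unitary(rng) for _ in range(n_layers + 1)]   # W0, ..., Wn
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = H + H.conj().T

def f(x):
    """n_layers re-uploadings of S(x), interlaced with variational unitaries."""
    S = np.diag([np.exp(-0.5j * x), np.exp(0.5j * x)])
    psi = Ws[0] @ np.array([1.0, 0.0])
    for W in Ws[1:]:
        psi = W @ S @ psi
    return float(np.real(psi.conj() @ M @ psi))

n = 32
xs = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
coeffs = np.fft.fft([f(x) for x in xs]) / n
ks = np.rint(np.fft.fftfreq(n, d=1.0 / n)).astype(int)
support = {k for k, c in zip(ks, coeffs) if abs(c) > 1e-10}
# The spectrum is confined to the 2n+1 wavenumbers {-n_layers, ..., n_layers}.
assert support <= set(range(-n_layers, n_layers + 1))
```

Increasing `n_layers` widens the admissible window of wavenumbers one unit at a time, matching the linear growth described above.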
Therefore, for $n$ repetitions of the encoding $S(x)= e^{-i\\mathcal{G}x}$, $n$ distinct Fourier bases are generated. \n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=1\\linewidth]{fig1.pdf}\n \\caption{The four general circuits under analysis in this paper. In the exponential architectures, the first encoding is kept the same and the subsequent encoding gates are multiplied by the coefficients in Eq.~(\\ref{eqn:numbers}). The parallel circuits have a CNOT layer at the end to ensure that all qubits are cooperating in the training by propagating the $\\pi$-measurement through all quantum wires.}\n\\label{fig:archs}\n\\end{figure*}\n\n\n\\subsection{Parallel linear}\\label{sec:par_lin}\nIn the parallel setting, the single-qubit encoding gates are applied in parallel on separate qubits -- see Fig.~\\ref{fig:archs}c. Similarly to the sequential encoding, for $r$ parallel rotations $r$ Fourier bases are produced. This is due to the commutativity between the parallel rotations, as they act on separate qubits. The generator $\\mathcal{G}$ becomes:\n\\begin{equation}\n \\mathcal{G} = \\frac{1}{2} \\sum_{q=1}^{r} \\sigma_z^{(q)},\n\\end{equation}\nwhere $q$ is the qubit index, $r$ indicates the total number of qubits, and $\\sigma_z^{(q)}$ is the Pauli-Z matrix applied to the $q^\\text{th}$ qubit. In App.~\\ref{appendix:par_lin_proof}, it is shown that $\\mathcal{G}$ -- being a square matrix of dimensions $2^r$ -- has only $r+1$ unique eigenvalues. This indicates a high degree of degeneracy in its eigenspectrum. As before, the pairwise differences of these values yield a list of wavenumbers ranging from $-r$ to $r$, generating $r$ Fourier bases.\n\n\\subsection{Redundancy}\n\nIn both the sequential and parallel linear architectures there is a lot of redundancy in how the feature is encoded into the circuit.
This is the easiest to see for the parallel architecture, where the eigenvalues of $\\exp(i x \\mathcal{G})$ are highly degenerate, as the encoding commutes with qubit permutations.\n\n\n\\section{Results -- exponential architectures}\\label{sec:exp_arch}\n\nIn this section, two new families of architectures are suggested that can encode an exponential number of Fourier bases for a given number of repetitions\/parallel encodings. The basis of this generalization is to modify each ``subsequent'' appearance of the encoding gate in the circuit by a re-scaling of the generator, $S(x)\\to S(m x)$ with an integer $m$. Keeping the factors $m$ integer guarantees that this procedure results in a discrete Fourier series of the form of Eq.~\\eqref{eqn:fourier_general}.\n\n\\subsection{Sequential exponential}\\label{sec:seq_exp}\nIt was shown in Sec.~\\ref{sec:seq_lin} that the wavenumbers created in the linear models are highly degenerate. By modifying the circuit encoding, this degeneracy can be reduced, adding new wavenumbers to the list. This is accomplished by altering the generators in the individual encoding layers. In the linear case, the diagonal elements of the generator $\\lambda_i$ always belong to the set $\\{-\\frac{1}{2},\\frac{1}{2}\\}$, but they can be altered by scaling the generator $\\mathcal{G}$ in each layer. In practice, this is achieved by scaling the embedded data $x$ and then associating the scaling factor with the generator. The resultant function becomes (with an implicit summation over the repeated indices)\n\n\\begin{eqnarray*}\n f(x,\\theta) =\\left( W_{1,j_1}^{(0)\\dagger}W_{j_1,j_2}^{(1)\\dagger}\\cdots M_{k',k} \\cdots W_{i_2,i_1}^{(1)} W_{i_1,1}^{(0)}\\right)\\\\\ne^{i\\left((\\lambda^{(1)}_{j_1}+\\lambda^{(2)}_{j_2}+\\cdots)-(\\lambda^{(1)}_{i_1}+\\lambda^{(2)}_{i_2}+\\cdots)\\right)x},\n\\end{eqnarray*}\n\nwhere $\\lambda^{(l)}_i = a_l \\lambda_i \\in \\{-\\frac{a_l}{2},\\frac{a_l}{2}\\}$ for $a_l \\in \\mathbb{N}$~\\footnote{In comparison, for linear architectures $a_l=1$ for all $l$.}.
In this work, the scaling factors are chosen as $a_l \\in \\{2^0,2^1,2^2,\\cdots,2^{n-2},2^{n-1}+1\\}$. The motivation behind this choice is the sum of powers of 2, $\\sum_{i=0}^{n-1} 2^i = 2^{n} - 1$, where the largest wavenumber possible, $2^n$, is obtained by taking all the positive contributions from the list of eigenvalues, i.e. $k_{max}= \\sum_{i=0}^{n-2} 2^i + 2^{n-1} + 1 = 2^n$. Next, one can switch the signs of the positive values to negative starting from the smallest term to produce all integers from $-2^n$ to $2^n$. This generates $2^n$ Fourier frequencies. Fig.~\\ref{fig:archs}b shows a quantum circuit encoded using the sequential exponential strategy with 2 layers. App.~\\ref{appendix:constrained} demonstrates that this network produces extreme constraints on the Fourier accessibility, and thus is an undesirable choice for general data modelling. However, this scheme motivates extending this idea to parallel architectures.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{fig2.pdf}\n \\caption{Variational gates used in single-qubit (a) and 2-qubit (b) experiments.}\n \\label{fig2:variational}\n\\end{figure*}\n\n\\subsection{Parallel exponential}\\label{sec:exp_par}\nTo perform this extension, it is appropriate to proceed with a 2-qubit example. The parallel linear method described in Sec.~\\ref{sec:par_lin} produces the generator:\n\\begin{equation}\n\\mathcal{G}^{\\text{lin}}=\\frac{1}{2} \\left(\\sigma_z \\otimes \\mathbb{I} + \\mathbb{I} \\otimes \\sigma_z\\right) = \\begin{pmatrix}\n 1 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & -1\n\\end{pmatrix}\n\\end{equation}\nThis matrix has 3 unique eigenvalues, $\\lambda \\in \\{-1,0,1\\}$, and when subtracted from itself -- yielding wavenumbers $L^{\\text{(lin)}}_k = \\{-2,-1,0,1,2\\}$ -- it can produce 2 Fourier bases with frequencies $\\{1,2\\}$. One could generate a matrix with more unique eigenvalues.
For example,\n\\begin{equation}\n\\mathcal{G}^{\\text{exp}}=\\frac{1}{2} \\left(\\sigma_z \\otimes \\mathbb{I} + 3\\,\\mathbb{I} \\otimes \\sigma_z\\right)=\\begin{pmatrix}\n 2 & 0 & 0 & 0\\\\\n 0 & -1 & 0 & 0\\\\\n 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & -2\n\\end{pmatrix},\\label{eqn:exp_matrix}\n\\end{equation}\nis a generator with 4 unique eigenvalues that generate 9 wavenumbers $\\{-4, -3, -2, -1, 0, 1, 2, 3, 4\\}$. This generator can be constructed using the quantum circuit shown in Fig.~\\ref{fig:archs}d. In this case, an $\\mathcal{SU}(4)$ generator is employed. This is decomposed into two $\\mathcal{SU}(2)$ generators, one using the group parameter $x$ and the other $3x$. This can be generalized to $n$ qubits by extending the matrix to larger numbers of qubits, i.e. for $n$ qubits $\\mathcal{G}$ would be a diagonal matrix with entries ranging from $-2^{(n-1)}$ up to $2^{(n-1)}$, producing $2^n$ Fourier bases. The quantum circuit associated with this generator is an application of Pauli-Z rotations of $x$ with coefficients increasing in the following way: \\begin{equation}\\label{eqn:numbers}\n \\mathcal{L} = \\{2^0,2^1,2^2,...,2^{n-2},2^{n-1}+1\\},\n\\end{equation}where $n$ is the number of qubits. Note the similarities between the sequential and parallel encodings and their symmetries in the way the circuits are constructed. One also recognizes similarities between the parallel encoding and Kitaev's quantum phase estimation algorithm~\\cite{kitaev1995quantum}, albeit in this case $x$ is a classical feature.\n\nThis can be significantly more expressive than the parallel linear method, but the advantage must be accompanied by Fourier accessibility: if the Fourier coefficients of these newly-acquired bases cannot be altered, there is no advantage in pursuing this setting.
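The claim that the scaled encodings reach every integer wavenumber between $-2^n$ and $2^n$ can be verified by brute force. The sketch below assumes the coefficient list reads $\\{2^0,2^1,\\dots,2^{n-2},2^{n-1}+1\\}$, the reading consistent with the 2-qubit example $\\{1,3\\}$:

```python
from itertools import product

def wavenumbers(scales):
    # Wavenumbers are half-differences of the achievable sums of +-scales[l].
    sums = {sum(s * e for s, e in zip(scales, signs))
            for signs in product((-1, 1), repeat=len(scales))}
    return sorted({(a - b) // 2 for a in sums for b in sums})

def exp_scales(n):
    # Assumed reading of the coefficient list: 2^0, 2^1, ..., 2^(n-2), 2^(n-1)+1.
    return [1] if n == 1 else [2**i for i in range(n - 1)] + [2**(n - 1) + 1]

assert exp_scales(2) == [1, 3]  # the 2-qubit generator of the example above
for n in (2, 3, 4):
    # every integer wavenumber from -2^n to 2^n is reachable
    assert wavenumbers(exp_scales(n)) == list(range(-2**n, 2**n + 1))
```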
Sec.~\\ref{sec:training} shows there is a significant advantage in using parallel exponential encoding in a simple toy example.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{fig3.pdf}\n \\caption{Training losses indicate a training advantage for the parallel exponential, and that the sequential exponential architecture performs only marginally better than the linear architectures. The training was done on QMware hardware\\cite{qmware} using the PennyLane Python package~\\cite{pennylane}. The Adam optimizer was employed to minimize a mean squared loss function with a learning rate of $\\epsilon=0.1$ and with uniformly-distributed parameters $\\theta \\in [0,2\\pi]$.}\n \\label{fig3:losses}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=\\linewidth]{fig4.pdf}\n \\caption{The QNNs fit the best possible truncated Fourier series on the top-hat function. The parallel exponential architecture provides the best fit, and even though the sequential exponential architecture has access to the same 4 Fourier frequencies, it fails to access all of them efficiently and as a result, it performs sub-optimally. The linear architectures perform similarly to each other, potentially arising from their high Fourier accessibility to the 2 Fourier frequencies that they can represent.}\n \\label{fig4:tophat}\n\\end{figure*}\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=\\textwidth]{fig5.pdf}\n \\caption{The parallel exponential fit to the top-hat function on a simulator vs on a quantum processor. The noisy solid line is the evaluation of this network for 100 equally-spaced points using 100 shots.}\n \\label{fig5:qpu}\n\\end{figure*}\n\n\n\n\\subsection{Training}\\label{sec:training}\nIn this section, the training performance of these 4 architectures on a simple dataset is compared. Each architecture is trained to reproduce a one-dimensional top-hat function. 
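For reference, the best truncated Fourier fits that the trained models are compared against can be computed classically by projecting the target onto the lowest frequencies. The sketch below uses an assumed $2\\pi$-periodic top-hat of width $\\pi$ as a stand-in for the paper's target function:

```python
import numpy as np

def tophat(x, width=np.pi):
    # An assumed 2*pi-periodic top-hat centred at 0 (stand-in for the target).
    return (np.mod(x + width / 2, 2 * np.pi) < width).astype(float)

def truncated_fourier_fit(f, K, N=1024):
    """L2-optimal Fourier series restricted to wavenumbers |k| <= K,
    obtained by keeping the low-order modes of an N-point FFT of f."""
    xs = 2 * np.pi * np.arange(N) / N
    c = np.fft.fft(f(xs)) / N
    def fit(x):
        return np.real(sum(c[k % N] * np.exp(1j * k * x)
                           for k in range(-K, K + 1)))
    return fit

xs = np.linspace(0, 2 * np.pi, 512, endpoint=False)
rms = {K: np.sqrt(np.mean((truncated_fourier_fit(tophat, K)(xs) - tophat(xs)) ** 2))
       for K in (2, 4)}  # 2 frequencies (linear) vs 4 (exponential)
```

The $K=4$ fit has a strictly smaller error than the $K=2$ fit, mirroring the advantage of the exponential architectures over the linear ones.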
Fig.~\\ref{fig4:tophat} shows the ground truth as well as the fitting performances of these architectures, and Fig.~\\ref{fig3:losses} shows their training performance. It is clear that the parallel exponential architecture fits the ground truth most closely, whereas the sequential exponential architecture has the worst performance of all models. Furthermore, the Fourier decompositions of the models in Fig.~\\ref{fig6:decomposition} show that exactly 2 Fourier terms are accessed by the linear architectures and 4 by the exponential ones. Additionally, Fig.~\\ref{fig5:qpu} demonstrates the performance of the parallel exponential architecture on a trapped-ion quantum processor. IonQ implements a high-fidelity gate-based quantum processing unit based on laser-pumped trapped ions, as explained in~\\cite{ionq-paper}. The hardware was shown to be one of the most accurate in recent benchmarking tests~\\cite{benchmarkingQMW}. We specifically used the hardware introduced in~\\cite{11qubit-ionq}, with a single-qubit fidelity of $0.997$ and a 2-qubit fidelity of $0.9725$. The code implementation was done through Amazon Web Services (AWS) Braket, and the forward pass for 100 data points took 4 hours and 11 minutes, due to the delays and queuing times. It can be observed that the low number of shots is the dominant source of noise here, and higher shot counts could yield a smoother curve that is closer to the simulator. \n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=\\linewidth]{fig6.pdf}\n \\caption{Fourier decomposition of the four architectures after training to fit the top-hat function.
We see that the linear architectures can only access two Fourier frequencies, whereas the exponential ones can access four.}\n \\label{fig6:decomposition}\n\\end{figure*}\n\n\n\\subsection{Critical Evaluation}\\label{sec:critical}\nWhile the results for the parallel exponentials are encouraging, it is equally important to understand the limitations of this approach. Firstly, while the exponential growth in the number of Fourier frequencies is evident, this is not the upper limit of Fourier frequency growth. Ref.~\\cite{schuld_fourier} showed that for $L$ repetitions of an encoding gate with a Hilbert space of dimension $d$, there is an upper limit to this growth of the form\n\n\\begin{equation}\nK < \\frac{d^{2L}}{2} -1,\n\\end{equation}\nwhere $K$ is the number of Fourier frequencies. This suggests that there is a potential for square-exponential growth, whereas the method discussed in this work only grows exponentially. In App.~\\ref{appendix:maths_problem} a mathematical problem is proposed, whose solution could unlock the maximum possible Fourier accessibility.\n\nSecondly, it is important to emphasize that the two parallel architectures are identical up to the multiplicative factors added to the encoding gates in the exponential case. This means that training them for a fixed number of epochs requires the same computational resources. However, adding more Fourier bases by eliminating the degeneracy of the network could result in under-parameterized models. Therefore, it is often necessary to parameterize the exponential architectures more heavily than the linear ones, indirectly affecting the required resources. Every Fourier frequency requires 2 degrees of freedom (real-valued parameters), and an exponentially-growing Fourier space requires the resources to grow exponentially, too. These resources could include the classical memory required to store the parameters or the classical optimizer that needs to calculate the gradient for these parameters.
And lastly, extending this to many qubits will still result in barren plateaus.\n\n\\section{Conclusion and future work}\\label{sec:conclusion}\nThis work suggested two new families of QNN architectures, dubbed sequential and parallel exponential circuits, that provided an exponentially growing Fourier space. It was demonstrated that the former struggled with accessing these frequencies, but also that the latter showed an advantage in approximating a top-hat function. \n\nFuture work could focus on a quantitative understanding of the Fourier accessibility of these networks, such that the optimal variational parameterization could be chosen for a specific problem. Another possible direction for future work is to depart from hardwired encoding gates. A natural elementary step in this direction is to consider single-qubit gates of the form $S_i(x,w_i)= \\exp(- i \\, x w_i \\frac{1}{2}\\sigma_z )$, where the scaling factor $w_i$ is an independent scalar trainable parameter for each occurrence of the encoding gate in the circuit. In this case, the final wavevectors $k$ are linear combinations of the parameters $w_i$ that can be potentially trained efficiently.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{S1. First-principles calculations of phonon transport across a silicon interface}\nThe system used for calculating phonon transport across an intrinsic silicon (Si) interface is shown in Fig.~S\\ref{Fig.S1}(a). The system is the same as Fig. 1(a) of the main text, except that there is no vacuum gap region.\n\nA 1~${\\times}$~1~${\\times}$~4 supercell shown in Fig.~S\\ref{Fig.S1}(b) is used for computing the interatomic force constants via first-principles calculations based on the density functional theory. The interatomic force constants of cells 1 and 2 are extracted for the left and right leads, and also for the device region. 
The gap distance in contact is defined as the inter-planar spacing in the (001) orientation \\cite{Chiloyan2015,Xiao2017} (i.e., 0.137 nm). Surface reconstruction is not considered when the Si crystals are in contact.\n\n\\begin{figure}[p!]\n\\centering\n\\includegraphics[width=1\\linewidth]{Fig_S1.png}\n\\caption{(a) Schematic of the system used for calculating phonon heat transfer across an interface. The semi-infinite left and right leads are respectively maintained at temperatures $T_{\\mathrm{L}}$ = 305 K and $T_{\\mathrm{R}}$ = 300 K. (b) A 1~${\\times}$~1~${\\times}$~4 supercell, which is the combination of four conventional unit cells (black numbered boxes), is used for calculating the interatomic force constants.}\n\\label{Fig.S1}\n\\end{figure} \\clearpage\n\\newpage\n\n\\section{S2. Near-field radiation modeling}\nThe heat transfer coefficient due to near-field radiation between the left (L) and right (R) semi-infinite leads separated by a vacuum gap $d$ is calculated using fluctuational electrodynamics \\cite{Rytov1978,Polder1971}: \n\\begin{equation}\n\\label{radiative flux}\nh_\\mathrm{rad} \\ = \\ \\frac{1}{\\pi^{\\mathrm{2}}(T_\\mathrm{L}-T_\\mathrm{R})}\\int_{0}^{\\infty}d{\\omega}\\left[\t\\Theta\\left(\\omega,T_\\mathrm{L}\\right) - \\Theta\\left(\\omega,T_\\mathrm{R}\\right)\\right]\n\\\\ \\\\\n\\int_{0}^{\\infty}dk_\\mathrm{\\rho}k_\\mathrm{\\rho}\\sum_{\\gamma=\\mathrm{TE,TM}}{\\cal T_\\mathrm{rad}^{\\gamma}}\\left(\\omega,k_\\mathrm{\\rho}\\right)\n\\end{equation} \nwhere $T_{j}$ is the temperature of semi-infinite lead $j$ ($j = \\mathrm{L}, \\mathrm{R}$), $\\omega$ is the angular frequency, $k_\\mathrm{\\rho}$ is the component of the wave vector parallel to a surface, and $\\Theta(\\omega, T_{j})$ is the mean energy of an electromagnetic state calculated as $\\hbar\\omega\/[\\mathrm{exp}({\\hbar\\omega\/k_\\mathrm{B}T_{j}})-1]$. 
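As a quick sanity check of the mean-energy factor $\\Theta(\\omega,T_{j})$ defined above, the sketch below evaluates it with SI constants and verifies the low-frequency (Rayleigh-Jeans) limit $\\Theta \\to k_\\mathrm{B}T$:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J s
KB = 1.380_649e-23        # Boltzmann constant, J/K

def theta(omega, T):
    """Mean energy of an electromagnetic state:
    Theta(omega, T) = hbar*omega / (exp(hbar*omega/(kB*T)) - 1)."""
    x = HBAR * omega / (KB * T)
    return HBAR * omega / math.expm1(x)

# Low-frequency limit: Theta approaches the classical value kB*T
assert abs(theta(1e11, 300.0) - KB * 300.0) < 0.01 * KB * 300.0
# The net flux in h_rad is driven by Theta(T_L) - Theta(T_R) > 0 for T_L > T_R
assert theta(1e14, 305.0) > theta(1e14, 300.0)
```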
The transmission functions in polarization state $\\gamma$ for propagating ($k_\\mathrm{\\rho}$ $<$ $k_\\mathrm{0}$) and evanescent ($k_\\mathrm{\\rho}$ $>$ $k_\\mathrm{0}$) electromagnetic waves in vacuum are calculated as:\n\\begin{equation}\n\\label{propagating transmission}\n{\\cal T_\\mathrm{rad,prop}^\\gamma}\\left(\\omega,k_\\mathrm{\\rho}\\right) \\ = \\frac{\\left(1-\\left|r_\\mathrm{0L}^{\\gamma}\\right|^{2}\\right)\\left(1-\\left|r_\\mathrm{0R}^{\\gamma}\\right|^{2}\\right)}{4\\left|1-r_\\mathrm{0L}^{\\gamma}r_\\mathrm{0R}^{\\gamma}e^{2i\\mathrm{Re}\\left(k_{\\mathrm{z0}}\\right)d}\\right|^{2}}\n\\end{equation}\n\\begin{equation}\n\\label{evanescent transmission}\n{\\cal T_\\mathrm{rad,evan}^\\gamma}\\left(\\omega,k_\\mathrm{\\rho}\\right) \\ = e^{-2\\mathrm{Im}\\left(k_\\mathrm{z0}\\right)d}\\frac{\\mathrm{Im}\\left(r_\\mathrm{0L}^{\\gamma}\\right)\\mathrm{Im\\left(r_\\mathrm{0R}^{\\gamma}\\right)}}{\\left|1-r_\\mathrm{0L}^{\\gamma}r_\\mathrm{0R}^{\\gamma}e^{-2\\mathrm{Im}\\left(k_{\\mathrm{z0}}\\right)d}\\right|^{2}}\n\\end{equation} where $k_\\mathrm{0}={\\omega}\/c_{0}$ is the magnitude of the vacuum wave vector with $c_{0}$ as the speed of light in vacuum, $k_\\mathrm{z0}$ is the component of the vacuum wave vector perpendicular to an interface, and $r_{0j}^\\gamma$ is the Fresnel reflection coefficient at the vacuum-lead $j$ ($j = \\mathrm{L}, \\mathrm{R}$) interface in polarization state $\\gamma$ \\cite{Yeh1988}. The local, frequency-dependent dielectric function of intrinsic Si provided in Refs. \\cite{Adachi1988,Aoki1991} is used in the calculations.\n\\newpage\n\n\\section{S3. Number of electrons and interatomic force constants in the vacuum gap region}\nThe number of electrons, $n_\\mathrm{e}$, in the middle of the vacuum gap within the 1~${\\times}$~1~${\\times}$~8 supercell is calculated by integrating the electron density within the volume specified by the red shaded box in Fig.~S\\ref{Fig.S2}(a). 
Specifically, the ratio $l\/d$, where $l$ is defined in Fig.~S\\ref{Fig.S2}(a) and $d$ is the average vacuum gap thickness, was varied from 0.4 to $5{\\times}10^{-6}$. For $l\/d$ values smaller than $5{\\times}10^{-3}$, the number of electrons follows a $d^{-8.15 \\pm 0.85}$ power law, as shown in Fig.~S\\ref{Fig.S2}(b). This converging trend indicates that the number of electrons in the middle of the vacuum gap can be calculated using $l\/d$ $<$ $5{\\times}10^{-3}$. \n\nThe interatomic force constants reported in Fig. 3(d) of the main text are calculated by extracting the $zz$ components of the interatomic force constants acting on the atoms contained in the volume of the device region specified by the orange box in Fig.~S\\ref{Fig.S2}(c). Only the $zz$ components of the interatomic force constants are used since phonon transport across a single-digit nanometer vacuum gap is quasi-one-dimensional \\cite{Chiloyan2015,Tokunaga2021}.\n\n\\begin{figure}[p!]\n\\centering\n\\includegraphics[width=1\\linewidth]{Fig_S2.png}\n\\caption{(a) Vacuum gap region of a 1~${\\times}$~1~${\\times}$~8 supercell after structural optimization. The number of electrons is calculated within the volume specified by the red shaded box. The length $l$ surrounds the middle of the average vacuum gap. (b) Number of electrons as a function of the average vacuum gap thickness $d$ and the ratio $l\/d$. (c) The $zz$ components of the interatomic force constants acting on the atoms within the volume identified by the orange box are extracted, and their summations are reported in Fig. 3(d) of the main text.}\n\\label{Fig.S2}\n\\end{figure} \\clearpage\n\\newpage\n\n\\section{S4. Phonon dispersion relations and phonon density of states of bulk silicon calculated via the density functional theory}\nFigure S\\ref{Fig.S3} shows the phonon dispersion relations and phonon density of states of bulk Si obtained via first-principles calculations based on the density functional theory. 
In this work, acoustic and optical phonon modes are characterized by frequencies respectively lower and higher than 12 THz. This threshold is defined based on the boundary between longitudinal acoustic (LA) and longitudinal optical (LO) phonon modes located around 12 THz along the $\\Gamma$-$\\mathrm{X}$ direction, as shown in Fig. S\\ref{Fig.S3}.\n\n\\begin{figure}[p!]\n\\centering\n\\vspace{100pt}\n\\includegraphics[width=1\\linewidth]{Fig_S3.png}\n\\caption{Phonon dispersion relations and phonon density of states (DoS) of bulk Si obtained via the density functional theory \\cite{Gonze2002,Gonze2020}. Transverse optical, longitudinal optical, longitudinal acoustic, and transverse acoustic phonon modes are respectively denoted as TO, LO, LA, and TA. The order of the symmetric points within the first Brillouin zone used for generating the phonon dispersion relations is selected based on Refs. \\cite{Setyawan2010,Davis2011}. The horizontal dashed line is drawn at a frequency of 12 THz along the $\\Gamma$-$\\mathrm{X}$ direction.}\n\\label{Fig.S3}\n\\end{figure} \\clearpage \n\\newpage\n\n\\section{S5. Phonon populations}\nPhonon populations of the atomic layers adjacent to the vacuum gap in the left and right leads are calculated for vacuum gaps of 0.47, 0.69, 0.89, and 1.09 nm and are reported in Fig.~S\\ref{Fig.S4}. These atomic layers are denoted as layer 16 in the left lead and layer 1 in the right lead in Fig. 1(b) of the main text. Note that surface reconstruction is considered in the calculations. The phonon populations are obtained by multiplying the Bose-Einstein distribution function by the local phonon density of states of the device region obtained via the atomistic Green's function method. 
The local phonon density of states, $D_{\\mathrm{d}}$, is calculated as follows \\cite{Zhang_NHTPBF_2007}:\n\\begin{equation}\n\\label{pldos}\n D_{\\mathrm{d}} = i\\left(G_{\\mathrm{d}}-G_{\\mathrm{d}}^{\\dagger}\\right)\\omega\n\\end{equation} where $G_{\\mathrm{d}}$ is the Green's function of the device region, the superscript $\\dagger$ denotes conjugate transpose, and $\\omega$ is the phonon frequency. \n\nFigure~S\\ref{Fig.S4} clearly shows that the populations of acoustic phonons characterized by frequencies lower than 12 THz largely surpass those of optical phonons for all vacuum gaps considered. \n\n\\begin{figure}[p!]\n\\centering\n\\includegraphics[width=0.9\\linewidth]{Fig_S4.png}\n\\caption{Phonon populations of the atomic layers adjacent to the vacuum gap in the left and right leads for: (a) $d$ = 0.47 nm. (b) $d$ = 0.69 nm. (c) $d$ = 0.89 nm. (d) $d$ = 1.09 nm. The vertical dashed line at 12 THz delimits the frequencies associated with acoustic and optical phonons.}\n\\label{Fig.S4}\n\\end{figure} \\clearpage\n\\newpage\n\n\\normalem\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzotmh b/data_all_eng_slimpj/shuffled/split2/finalzzotmh
new file mode 100644
index 0000000000000000000000000000000000000000..44f7888263f7fe5be4961356d442549c12fe00af
--- /dev/null
+++ b/data_all_eng_slimpj/shuffled/split2/finalzzotmh
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction and outline}\n\nThe time-dependent thermal radiative transport equations couple a transport equation with an internal matter density evolution equation.
\nSimulating these dynamics is extremely time consuming, and \nwe are interested in stochastic (Monte Carlo) approaches, more precisely in situations involving a large range of \nopacities~; in such cases the particles used in the Monte Carlo simulation undergo a wide range of behaviors~: on the one hand, long-time rectilinear propagation interrupted by rare scattering events, and on the other hand, high-intensity scattering with negligible overall displacement; all intermediate regimes are also present.\nThe two extreme regimes can be simulated either directly or with good-quality approximations, and the corresponding works have been documented in the literature (see section~\\ref{sec:motivation}). \nBut treating all regimes simultaneously\nhas remained a challenge, and our contribution introduces a unified method to tackle this situation. To this end we exploit a hidden smoothness in these models, situated at the level of the statistics of the escape laws of a particle from a given domain. \n\nThe outline of the paper is the following~: we present in section \n\\ref{sec:motivation} some motivating comments and a presentation of the state of the art; then we describe in section \\ref{sec:method} our method based on an offline-online approach that exploits the quantiles of the escape laws from a domain. \nThe assumptions of the method are checked numerically in section \\ref{sec:test_exit_time}\nand then the method is tested on a benchmark with good results in section~\\ref{sec:transfert_rad}. \nConcluding remarks are presented in section~\\ref{sec:conclusion}.\n\n\\section{Motivation and state of the art}\n\\label{sec:motivation}\n\n\nWe present briefly the principles of the Monte Carlo method \nused to solve the transport equation~\\eqref{eq:transport} and the problem of the \ndiffusion limit in a general setting.
We then present the state of the art of the methods \nthat treat this highly collisional regime.\nConsider the integro-differential transport equation of the form~:\n\\begin{equation}\n\t\\frac{1}{c} \\partial_t u(t,x,\\omega) + \\omega \\cdot \\nabla u(t,x,\\omega) + (\\sigma_a(t,x) + \\sigma_s(t,x)) u(t,x,\\omega) = \\sigma_s(t,x) \\moyAng{u}(t,x),\n\t\\label{eq:transport}\n\\end{equation}\nwith time $t \\in \\mathbb{R}^+$, position $x\\in \\mathcal{D} \\subset \\mathbb{R}^d$ ($d\\ge 1$ is the dimension), $\\omega \\in \\mathcal{S}^{d}$ (unit sphere in $\\mathbb{R}^d$) the angle of propagation and $\\moyAng{u}(t,x) = \\frac{\\int_{\\mathcal{S}^d} u(t,x,\\omega') d \\omega'}{\\int_{\\mathcal{S}^d} 1 \\cdot d \\omega'}$\nthe angular average of $u$ on $\\mathcal{S}^d$. \n\nThe absorption opacity $\\sigma_a$ and the scattering opacity $\\sigma_s$ are (known) functions depending on the spatial discretization. To solve this equation we focus on the approaches described in~\\cite{sentis98}, which interpret~\\eqref{eq:transport} as a time-evolving probability density and simulate the underlying stochastic process.\n\nWhen $\\sigma_s(t,x) \\to \\infty$ we are in the ``diffusion limit'' and the cost of the Monte Carlo method is prohibitive~\\cite{densmore2005discrete}~: each particle undergoes a high number of collisions, with the mean time between two collisions being \n$O\\left(\\frac{1}{\\sigma_s}\\right)$.
But the asymptotic analysis~\\cite{larsen1980diffusion} shows that the solution of \\eqref{eq:transport} converges towards that of the diffusion limit equation~:\n\\begin{equation}\n\t\\frac{1}{c} \\partial_t \\moyAng{u}(t,x) = \\nabla \\cdot \\left[ \\frac{1}{3 \\sigma_s} \\nabla \\moyAng{u}(t,x) \\right].\n\t\\label{eq:diffusion} \n\\end{equation}\n\nThe equation \\eqref{eq:transport} appears in particular when solving radiative transfer equations, where an isotropic scattering term is necessarily added by the \\textit{Implicit Monte Carlo} linearization method \\cite{FLECK1971313} in order to artificially represent the phenomena of absorption and re-emission.\nAnother approach avoiding artificial scattering is proposed in \\cite{brooks1989symbolic}, but the problem remains unchanged when important physical scattering terms are present. In the context of radiative transfer, several methods have been proposed exploiting the limit regime~\\eqref{eq:diffusion}.\n\nThe \\textit{Random Walk} (RW) methods \\cite{fleck1984random, giorla1987random} exploit the fact that the trajectories of the particles are close to those of a Brownian motion: in an optically thick medium they replace (a part of) the trajectory by a single diffusion step in the largest sphere contained in the mesh.\nThe \\textit{Random Walk} methods have the advantage of being applicable everywhere on the domain, and are easily applied to $3$-dimensional problems as well as multi-group problems.
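For intuition on the regime these methods exploit, the diffusion limit equation \\eqref{eq:diffusion} is cheap to solve directly on a grid. The sketch below is a minimal 1D explicit finite-difference scheme, with $c=1$, a constant $\\sigma_s$ and reflecting boundaries -- all choices made here purely for illustration:

```python
import numpy as np

def diffuse(u0, sigma_s, dx, dt, steps, c=1.0):
    """Explicit scheme for (1/c) d<u>/dt = d/dx( 1/(3 sigma_s) d<u>/dx )
    with constant sigma_s and zero-flux (reflecting) boundaries."""
    D = 1.0 / (3.0 * sigma_s)
    r = c * D * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for r > 1/2"
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]      # reflecting left boundary
        lap[-1] = u[-2] - u[-1]   # reflecting right boundary
        u += r * lap
    return u

# A localized pulse spreads while its total mass is conserved
u0 = np.zeros(101)
u0[50] = 1.0
u = diffuse(u0, sigma_s=10.0, dx=0.01, dt=1e-4, steps=1000)
```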
Their use in a production context remains limited by their strong dependence on mesh size (the smaller the mesh size, the smaller the sphere where the method will be applied) and the loss of precision introduced by the use of the diffusion limit for transient regimes.\n\n\n\nInitially called \\textit{Implicit Monte Carlo Diffusion}, the \\textit{Discrete Diffusion Monte Carlo} (DDMC) method~\\cite{gentile2001implicit,densmore2005discrete,densmore2006hybrid} splits the domain into two regions: one optically thick region solved by a Monte Carlo method using a diffusion equation and another part treated by the \\textit{IMC} method. The numerical simulation in the optically thick region uses a linearization similar to the \\textit{IMC} method. A new type of particle is then introduced to solve the diffusion equation. The advantage of this method is that it does not have any net flux to consider between the diffusion and transport regions (the flux is carried by the particles) and the particles can go from one region to another (by a conversion) and, more importantly, can change the cell (having different $\\sigma_a$ and $\\sigma_s$ values) with no particular treatment. The introduction of a new type of particles to treat the diffusion region allows easy\ntreatment of the interface between the transport and diffusion regions. Contrary to the \\textit{RW} methods, the efficiency of these methods is not dependent on the mesh; however their use is still restricted by the loss in precision introduced by the diffusion approximation when particles change the region.\n\n\nHybrid approaches \\cite{pomraning1979, clouet2003hybrid} solve the diffusion equation analytically in some spatial areas and use the \\textit{IMC} approach in others. Both methods are coupled by boundary conditions. The hybrid methods use an analytical resolution of the scattering equation when certain criteria are met (delimited areas or according to the frequency group). 
The use of these methods remains limited by the delicate coupling between the analytical resolution of the diffusion equation and the Monte Carlo method solving the transport equation, as well as by the choice of criteria (e.g. the definition of areas where the diffusion approximation can be used).\n\n\nWhen the coefficient $\\sigma_s(t,x)$ in \\eqref{eq:transport} is large, the classical Monte Carlo method uses Markov particles that undergo a large number of scattering events. The randomness of the scattering part dominates, and after a certain time the state of the particle follows a probability law; in this case the \\textit{RW} approximation is justified. However, there are always intermediate regimes where the number of collisions is large enough to slow down the computation but not high enough to justify the use of the diffusion approximation.\n\nA new Monte Carlo method that is efficient regardless of the value of $\\sigma_s$ and that does not reduce the accuracy of the solution is still a challenge. Ideally the method should not be sensitive to the mesh used (i.e. robust to changes in the value of $\\sigma_s$ and not limited to simple spatial domains, e.g., a sphere); \nand it needs to be valid regardless of the value of $\\sigma_s$ (i.e., it should activate according to criteria independent of a choice of spatial areas, contrary to methods of \\textit{RW} type).\n\n\nOur approach, called the \\methodname{} method, is not to use the diffusion limit approximation but to work with an approximation of the probability law of the \\textbf{exact solution} of the escape time, position and direction from the spatial cell.\n\n\\section{The \\methodname{} method} \\label{sec:method}\n We will consider $d=1$ throughout this section and work on a segment (possibly divided into several sub-intervals).
To ease notation, we will also use $\sigma$ instead of the scattering opacity $\sigma_s$.\n\n\subsection{Toy model illustration}\n\nWe describe here a simple example, used later in the numerical tests of section~\ref{sec:transfert_rad}, that will be useful to present the \methodname{} method below.\n\nConsider a $1D$ particle in the segment $[x_{min},x_{max}]$, situated at the initial time $t=0$ at position $x=x_{init}$ with angle $a\in \{-1,1\}$. The total remaining simulation time is $t_{max}$; in the general simulation $t_{max}$ equals the overall time step $\Delta t$ decremented by any previous time increments for this particle (for instance when the particle traverses several cells during the same $\Delta t$).\n\n The exact evolution of the particle is the following: rectilinear movement in direction $a$ for a time $\tau$ (an exponential random variable of mean $1\/\sigma$), after which a collision takes place. This collision changes the angle uniformly at random to a new value $a'\in\{-1,1\}$. The process then repeats until a boundary is reached~: $x=x_{min}$, $x=x_{max}$ or $t=t_{max}$.\n\nWe are interested precisely in this escape point (one of the extremities of the segment or of the time domain) and in the escape angle. 
This is a random variable whose distribution will be denoted \n$\mathcal{E}(\sigma,\ell,t_{max})$\nwhere $\ell=(x_{init}-x_{min})\/ (x_{max}-x_{min})$ is the relative initial position of the particle.\n\n The probability law $\mathcal{E}(\sigma,\ell,t_{max})$ has support on \n \begin{equation}\n\left[\n \Big\{(x_{min},t), t\in[0,t_{max}]\Big\} \cup \n\Big\{(x,t_{max}), x\in[x_{min},x_{max}]\Big\}\n\cup \Big\{(x_{max},t), t\in[0,t_{max}]\Big\}\right] \times \{-1,1\}.\n \end{equation}\nNote that, although the distribution seems to be $3$-dimensional, conditional on knowing the escape side only one dimension is essential: for instance, escaping through \n$x_{min}$ \/ $x_{max}$ implies that the angle is $-1$ \/ $1$; when the escape occurs at $t=t_{max}$ the angle is random in $\{-1,1\}$.\nAn illustration is given in figure~\ref{fig:xminxmaxtmax}.\n\begin{figure}[htb!]\n\t\begin{center}\n\t\t\includegraphics[width=.49\textwidth]{exit_time_plot_v1.pdf}\t\n\t\t\caption{An illustration of the escape dynamics of a particle starting at $x_{init}$ and undergoing collisions after $Exp(\sigma)$ times (exponential random variables of average $1\/\sigma$). The particle can escape through any of the domain's frontiers: either because it leaves the spatial domain (dotted trajectory) or because the time is up (dashed trajectory). The random events accumulate into a probability law \n\tdenoted\t$\mathcal{E}(\sigma,\ell,t_{max})$\n\t\t\twith support on the boundaries of the time-space domain (together with an escape-angle attribute).\n\t\t} \label{fig:xminxmaxtmax}\n\t\end{center}\n\end{figure} \n\n\subsection{The method}\n\nSection~\ref{sec:motivation} highlighted the difficulty of dealing with the diffusion limit of the equation \eqref{eq:transport} and the limitations of existing Monte Carlo methods. 
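The toy-model dynamics above can be sampled directly by a short Monte Carlo routine. The following Python sketch is ours (function and parameter names are illustrative, not part of the \methodname{} code; unit speed and a unit segment are assumed as defaults); it generates one draw from $\mathcal{E}(\sigma,\ell,t_{max})$:

```python
import random

def sample_escape(sigma, x_init, a, x_min=0.0, x_max=1.0, t_max=1.0,
                  speed=1.0, rng=random):
    """One draw from the escape law E(sigma, ell, t_max) of the toy model:
    free flight at `speed` in direction a in {-1, +1}, collision times
    Exp(sigma) (mean 1/sigma), uniform re-sampling of the angle at each
    collision.  Returns the escape point (x, t) and the escape angle a."""
    x, t = x_init, 0.0
    while True:
        tau = rng.expovariate(sigma)  # time until the next collision
        # Time needed to reach the spatial boundary in the current direction.
        t_boundary = (x_max - x) / speed if a > 0 else (x - x_min) / speed
        dt = min(tau, t_boundary, t_max - t)
        x += a * speed * dt
        t += dt
        # Escape through x_min, x_max, or because the time budget is spent.
        if t >= t_max or x <= x_min or x >= x_max:
            return x, t, a
        a = rng.choice((-1, 1))       # collision: new uniform angle
```

Histogramming many such draws gives an empirical approximation of $\mathcal{E}(\sigma,\ell,t_{max})$, which is essentially what the offline phase described below tabulates.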
We propose a new Monte Carlo method, inspired by algorithms such as the \textit{Random Walk}, that \nworks with the probability laws $\mathcal{E}(\sigma,\ell,t_{max})$ of escape from a cell and is based on vector quantization techniques~\cite{book_quantization_measures}.\n\n\n\begin{enumerate}\n\item We define grids of representative values of the main parameters involved~; for instance in 1D, we employ a grid $G_{sc}$ for the $\sigma$ values (in practice a log-uniform grid from $7.5\times 10^{-3} cm^{-1}$ to $9.0\times 10^{6} cm^{-1}$), a grid $G_{time}$ for the simulation time values $t_{max}$ (a uniform grid from $400fs$ to $40000fs$) and a grid $G_{ini}$ for the relative initial position in the cell, from $0\%$ to $100\%$ relative to the left segment end. Each grid $G_{sc}$, $G_{time}$, $G_{ini}$ has $100$ points. We denote by $|G|$ the size of a grid $G$.\n\n\item An \textbf{offline} computation is done once and for all (independently of the final simulation) in order to obtain an approximation of the joint distribution (escape time, escape point, escape direction) \n $\mathcal{E}(\sigma,\ell,t_{max})$. \nFor each point in \n$G_{sc} \times G_{time} \times G_{ini}$\n we compute and store the quantiles of the law. This approximation is valid beyond the framework of the diffusion limit; in particular it does not use any analytical form. In practice we perform $1500$ simulations for each point in $G_{sc} \times G_{time} \times G_{ini}$ but extract only $J$ quantiles from the whole distribution (cf. the previous remark that the distribution is essentially one-dimensional)\footnote{When $\sigma$ is large enough to ensure that the diffusion approximation is valid, one can sample this law using the diffusion approximation. 
In practice we use a very conservative approach by replacing, for large $\sigma$, several collisions with one collision, provided that the diffusion approximation ensures that the probability to escape is less than $10^{-6}$. Note that this is only a way to compute the exact law faster; the \methodname{} does not depend on this choice, as any sampler of the exact escape law will do.}. \nThe quantiles are minimizers of the Huber-energy distance to the target and correspond to the optimal quantization of the measure, cf. \cite[prop. 21]{measure_quantization_turinici22} and \cite[prop. 3 and 4]{turinici_deep_2023}; for $J$ points the optimal quantile levels are $\frac{j+1\/2}{J}$, $j=0,...,J-1$. \nThis part of the simulation is highly parallelizable. The results are stored as a \n$|G_{sc}| \times |G_{time}| \times |G_{ini}| \times J$ array of escape points $x$ or $t$, \ntogether with the $3$ positive numbers (summing up to $1$) indicating the probability of escape through each side; for us $J=100$, so the number of stored values is $100^3\times 103$, requiring $\sim 800$Mb of storage.\n\item During the {\bf online} simulation, \neach time a particle \nwith parameters $(\sigma,\ell,t_{max})$ \nneeds to be advanced to its next escape point, \na set of parameter values \n$\sigma^g,\ell^g,t_{max}^g$ from the 3D grid \n$G_{sc} \times G_{time} \times G_{ini}$ is chosen (see below for details) and \n a random quantile from the stored distribution \n $\mathcal{E}(\sigma^g,\ell^g,t_{max}^g)$ is selected. The particle is then advanced with the corresponding space\/time increments prescribed by the escape quantile. 
\nThe grid point \n$\sigma^{g},\ell^g,t_{max}^g$ is chosen by identifying, for each of the parameters \n$\sigma,\ell,t_{max}$, \n the $2$ closest values on the grid~: \n$\sigma \in [\sigma^{k_1},\sigma^{k_1+1}]$,\n$t_{max} \in [t_{max}^{k_2},t_{max}^{k_2+1}]$,\n$\ell \in [\ell^{k_3},\ell^{k_3+1}]$~; then we select one of them at random, with probabilities depending on the relative distance between the actual parameter and the grid points, for instance $\sigma^g=\sigma^{k_1}$ with probability \n $(\sigma^{k_1+1}-\sigma)\/ (\sigma^{k_1+1}-\sigma^{k_1})$.\n \n\end{enumerate}\t\nSuch an approach does not raise questions about the validity of the diffusion limit or about the calculation of the escape time from spheres (which rely on partial differential equations with assumptions and boundary conditions that are sometimes difficult to tackle, cf.~\cite{https:\/\/doi.org\/10.1002\/eng2.12109, giorlaCEAn84}).\n\nThe method is called "quantized" because we always sample from a pre-defined list of quantiles. In practice this quantization is no more surprising than, e.g., the spatial discretization of the mesh, and if enough quantiles are considered its contribution to the overall error is negligible. The foundations of the method are well established \n(see~\cite{book_quantization_measures} for general information on the mathematical objects involved and \cite{measure_quantization_turinici22} for results more specifically tailored to our applications).\n\n\n\section{Numerical tests}\n\n\subsection{Toy model tests: escape time and position} \label{sec:test_exit_time}\n\nIn order for the \methodname{} method to work conveniently, one needs to ensure that \nthe distribution $\mathcal{E}(\sigma,\ell,t_{max})$ is close to the mixture of the closest grid distributions $\mathcal{E}(\sigma^g,\ell^g,t_{max}^g)$. 
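The per-parameter selection rule just described amounts to an unbiased stochastic rounding of each parameter onto its grid. A minimal Python sketch (the helper name `stochastic_round` is ours; the grid is assumed sorted):

```python
import bisect
import random

def stochastic_round(value, grid, rng=random):
    """Return one of the two grid neighbours of `value`, the closer one being
    the more likely, so that the chosen point equals `value` in expectation.
    Values outside the grid are clamped to the nearest end point."""
    if value <= grid[0]:
        return grid[0]
    if value >= grid[-1]:
        return grid[-1]
    k = bisect.bisect_right(grid, value) - 1   # grid[k] <= value < grid[k+1]
    lo, hi = grid[k], grid[k + 1]
    p_lo = (hi - value) / (hi - lo)            # probability of the left neighbour
    return lo if rng.random() < p_lo else hi
```

Applying this rule independently to $\sigma$, $\ell$ and $t_{max}$ selects the grid point $(\sigma^g,\ell^g,t_{max}^g)$; in expectation, the sampled law is then the proximity-weighted mixture of the neighbouring tabulated laws.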
This, in turn, depends on the \nsmoothness of the mapping $(\sigma,\ell,t_{max}) \mapsto \mathcal{E}(\sigma,\ell,t_{max})$, which we investigate in the following. More precisely, we plot in figure~\ref{fig:toy_example} several histograms corresponding to different typical parameter values encountered in the numerical tests of section~\ref{sec:transfert_rad}. As expected, the laws vary slowly with the parameters.\nFor instance, in practice we noted that a grid of values for $\sigma$ spaced log-uniformly, with about a $25\%$ increase from one point to the next, gives very satisfactory results.\n\n\begin{figure*}[htb!]\n\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma0.75_xinit0.005.pdf}\n\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma7.5_xinit0.005.pdf}\n\n\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma1.0_xinit0.005.pdf}\n\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma10_xinit0.005.pdf}\n\n\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma1.25_xinit0.005.pdf}\n\includegraphics[width=0.49\textwidth]{laws_tmax4000_sigma12.5_xinit0.005.pdf}\n\caption{We plot the escape times for a particle from the time-space domain \n\t$[x_{min},x_{max}]\times [0, t_{max}]$. The probabilities of escape are given in the title of each plot. \nWe take\t$x_{min}=0.0$, \n\t$x_{max}=0.01$, initial direction $+1$, \n\t$t_{max}=4000 fs$, speed $3.0\times 10^{-5}$ (speed of light in cm\/fs), $x_{init}=0.005$ and vary the $\sigma$ parameter (in $cm^{-1}$).\nThe plots in the left column correspond to the escape distributions for $\sigma=0.75$ (first line), \n $\sigma=1$ (second line) and $\sigma=1.25$ (third line); the plots in the right column correspond to \n $\sigma=7.5$ (first line), \n $\sigma=10$ (second line) and $\sigma=12.5$ (third line). 
In these latter three plots the collisions are too numerous and the particle does not move significantly, i.e., it escapes only because the time is exhausted.}\n\label{fig:toy_example}\n\end{figure*}\n\n\subsection{Propagation of a Marshak-type wave with multi-regime physics and temperature dependent opacity}\label{sec:transfert_rad}\n\nWe test the method on the propagation of a Marshak-type wave in an opaque medium (see \cite{marshak1958,MCCLARREN20089711} for details), which is considered a good benchmark for difficult multi-regime computations.\nWe assume an ideal gas equation of state under the gray approximation.\nThe Monte Carlo method used here is based on the Fleck and Cummings \cite{FLECK1971313} linearization.\n We use a model with two temperatures (radiative and matter)~: unless mentioned otherwise, the term \textit{temperature} (denoted $ T_{matter} $) will indicate the \nmatter temperature.\nThis is a 1D benchmark in rod geometry (like the $S_N$ method~\cite{carlson_solution_1958} with $N=2$) with symmetry conditions on the top and bottom edges of the mesh.\n The values and units used are specified in table~\ref{tab:valeursMarshak}.\nWe then solve the system of equations \eqref{eq:IMC_rod} for $t \in [t^n, t^{n+1}[$, where $I^+(t, x) = I(t, x, \mu=1)$, $I^-(t, x) = I(t, x, \mu=-1)$ and $f^n$ is the Fleck factor~:\n\begin{equation}\n\begin{aligned}\n\frac{1}{c} \partial_t I^+ + \partial_x I^+ + \sigma^n f^n I^+ &= \sigma^n f^n \frac{a c T_{matter}^4(t^n)}{2} + \sigma^n (1-f^n) \frac{1}{2} (I^+ + I^-) \\\n\frac{1}{c} \partial_t I^- - \partial_x I^- + \sigma^n f^n I^- &= \sigma^n f^n \frac{a c T_{matter}^4(t^n)}{2} + \sigma^n (1-f^n) \frac{1}{2} (I^+ + I^-) \\\nC_V \partial_t T_{matter} &= \sigma^n f^n ( 2 \pi (I^+ + I^-) - a c T_{matter}^4(t^n))\n\end{aligned}\n\label{eq:IMC_rod}\n\end{equation}\n\n\begin{table*}[!htb]\n\t\centering\n\t\begin{tabular}{|c|c|c|c|}\n\t\t\hline\n\t\t$I$ & $erg . cm^{-2} . 
s^{-1}$ & $a$ & $ 7.56 \times 10^{-15} \ erg . cm^{-3} . K^{-4} $ \\ \hline\n\t\t$\Delta t$ & $4 \times 10^{-11} \ s$ & $d$ & $1.56 \times 10^{23} \ K^3 . g^{-1} . cm^2$ \\ \hline\n\t\t$\rho$ & $3 \ g . cm^{-3}$ & $c$ & $3 \times 10^{10} \ cm . s^{-1}$ \\ \hline\n\t\t$T_{matter}$ & $K$ & $T_{matter}(0,\cdot)$ & $11604 \ K$ \\ \hline\n\t\t$C_V$ & $8.6177 \times 10^{7} \ erg.g^{-1}.K^{-1}$ & $T_{matter}(\cdot, \text{left border})$ & $11604000 \ K$\t \\ \hline\t \n\t\end{tabular}\n\t\caption{Values and units used in the numerical simulation of the propagation of a Marshak-type wave in an opaque medium. \label{tab:valeursMarshak}}\n\end{table*}\n\nWe analyze the wave profile at $1ns$, $5ns$ and $10ns$ using a time step of $\Delta t = 4 \times 10^{-11} s$.\nTo do this, we perform a run with the classical Monte Carlo method \nand a run with our method, using\n$50$ cells and $N_{obj} = 200$ (target number of particles per cell); we employ the local regularization method of~\cite{laguzet2020, LAGUZET2022111373} \nand compare the wave intensity to check for physical consistency.\n\nThe \methodname{} is tested \nwith a temperature dependent opacity given by the formula~:~$\sigma = \rho \times d \times T_{matter}^{-3}\ cm^{-1}$ \cite{LAGUZET2022111373}.\nThe value used is computed at each iteration by the Fleck linearization method.\nNote that the Fleck factor induces a scattering term that also depends on the matter temperature. This case illustrates the behavior of the method in circumstances where the scattering values belong to different regimes. The results are presented in figure~\ref{fig:second_result_M}.\nThe comparison with reference results shows good physical agreement, independent of the collision regime~: moreover the number of events per particle is substantially reduced (by a factor $1000$, cf. 
right axis in the right plot of figure~\ref{fig:second_result_M}), together with the computation time.\nMoreover, we notice that the computation time is no longer strictly proportional to the number of events, as it is for the classical IMC method; this indicates that with the new method the trajectography is no longer the \nlimiting phase of the computation, and that the processing carried out between tracking phases (emission and regulation of the particles, for example) becomes important (this time increases with the number of particles remaining at the end of the iteration).\n \n\begin{figure*}[htb!]\n\t\includegraphics[width=0.9\textwidth]{comparaison_M2A_mcc_qmc.pdf}\n\t\caption{Results of the simulation described in~\ref{sec:transfert_rad} (multi-regime physics, temperature dependent opacity). The number of events per particle for the \methodname{} is reduced with respect to the reference while keeping the physical properties of the solution. \n {\bf Left image~:} temperature profile at the times $1ns$, $5ns$ and $10ns$.\n{\bf Right image, dashed lines, right axis~:} mean number of particle events per iteration for the classical Monte Carlo trajectography compared with the quantized simulation; {\bf Right image, solid lines, left axis:} execution time per iteration for the two procedures. All plots refer to the same simulations.}\n\t\label{fig:second_result_M}\n\end{figure*}\n\n\section{Conclusion} \label{sec:conclusion}\n\nWe introduce the \methodname{} method to solve a computationally intensive multi-regime thermal radiative transport equation within a unifying framework. The method does not rely on random walk assumptions to treat the high-collision regime and \nis based on an offline computation followed by online sampling from a database. 
We check empirically that the smoothness assumptions underlying the method hold, for the applications considered, with satisfactory accuracy; we next test the approach on a 1D benchmark and obtain physically coherent results while improving the computation time. This opens the perspective of future work on more complicated geometries and higher-dimensional settings.\n\begin{acknowledgments}\nL.L. and G.T. acknowledge the support from their institutions.\n\end{acknowledgments}\n\bibliographystyle{unsrt}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\nSuperconducting detectors have been recognized as one of the most successful superconducting applications. \nGenerally, they have the advantages of high sensitivity, fast response, and high energy resolution, and have successfully been applied to detect cosmic rays \cite{TES, MKID_1, MKID_2} and single photons \cite{SNSPD}. \nThe three most successful examples include a transition edge sensor (TES) \cite{TES}, a superconducting nanowire single photon detector (SNSPD) \cite{SNSPD}, and a microwave kinetic inductance detector (MKID) \cite{MKID_1,MKID_2}. \nThe TES is a kind of bolometer, in which a sharp superconducting transition edge is utilized as an ultra-sensitive thermometer \cite{TES}. 
\nTherefore, the sensor temperature should be kept just at the midpoint of the superconducting transition at the critical temperature $T_{\rm c}$.\nThe main part of the SNSPD is a meander line of superconducting nanowire, in which a bias current is fed just below the critical current.\nA tiny fraction of the nanowire becomes normal when Cooper pairs are broken by an incident photon; thus, the photon-incidence event is detected by probing the superconducting-to-normal transition.\nIn the case of the MKID, the kinetic inductance change induced by Cooper pair breaking in a superconducting resonator is detected as a change of the resonance frequency.\n\n\nSome efforts were recently made to detect a neutron beam using superconducting detectors.\nA TES with a boron neutron-absorption layer was proposed as a neutron detector \cite{TES_B}.\nTwo superconducting tunnel junctions (STJs) on a single crystal of Li$_2$B$_4$O$_7$ with a neutron-converter layer of $^6$Li or $^{10}$B were also applied to detect neutrons \cite{STJ_1,STJ_2}.\nWe have been developing a unique superconducting neutron detector, called the current-biased kinetic inductance detector (CBKID), aiming at a neutron imager with higher spatial and energy resolutions via the time-of-flight (TOF) method (see Section~3.2 for details) using pulsed neutron sources \cite{MgB2_1,MgB2_2,CB-KID_1,CB-KID_2,CB-KID_3,CB-KID_4, Iizawa}.\nThe CBKID has superconducting meander microstrip lines, to which a finite DC bias current is fed, and a $^{10}$B neutron conversion layer.\nThe nuclear reaction between $^{10}$B and a neutron locally breaks a finite number of Cooper pairs in the superconducting microstrip lines (see Section~3.1 for details).\nA transient change of the Cooper pair density under the DC bias current generates voltage pulses proportional to the time derivative of the local kinetic inductance.\nTherefore, the CBKID can operate in a wide region of the superconducting phase.\nHigh-resolution 
imaging in two dimensions with four signal readout lines was recently achieved using a combination of the CBKID and the delay-line method (delay-line CBKID).\nSuccessful neutron imaging with a spatial resolution of 22\,$\mu$m \cite{CB-KID_4} was demonstrated.\nVarious approaches other than superconducting detectors have been used to develop neutron imaging systems with high spatial and temporal resolutions.\nA gadolinium-gallium-garnet (GGG) scintillator-based neutron detector with a cooled charge-coupled device (CCD) camera has reached a spatial resolution of 11 $\mu$m with a field of view of 6$\times$6\,mm$^2$ by optical magnification, using cold neutrons at a reactor source \cite{GGG_scintillator}. \nHowever, the method using the CCD is not suitable for highly energy-resolved imaging with pulsed neutron sources.\nThe highest spatial resolution of 2 $\mu$m was recently achieved using a gadolinium-oxysulfide scintillator and a cooled complementary metal-oxide semiconductor (CMOS) camera from Andor Technology with a reactor neutron source \cite{2um}.\nThe detector magnifies the scintillation light from the scintillator.\nOne must calculate the center of mass of the scintillation events to obtain the high spatial resolution of 2 $\mu$m.\nAdditionally, a neutron imager with high spatial and energy resolutions can be realized using a microchannel plate (MCP) \cite{MCPs}.\nA $^{10}$B-doped MCP combined with a Timepix readout \cite{Timepix} as a neutron imager was developed \cite{A.S.Tremsin_2011, A.S.Tremsin_2015}.\nThe nuclear reaction $^{10}$B(n, $^4$He)$^7$Li, which mainly emits a 0.84\,MeV $^7$Li ion and a 1.47\,MeV $^4$He ion, is converted into pulsed electrons, which are amplified in the $^{10}$B-doped MCP.\nThe signal is further electronically amplified, read out by the Timepix sensor array, and processed in the FPGA circuit, thereby achieving a high spatial resolution of 55\,$\mu$m with a high time resolution of 10\,ns.\nA 
new chip, called Timepix 3, is expected to overcome the previous count-rate limitation in modes with a high temporal resolution \cite{Timepix3}.\par\n\n\nIn the wavelength dependence of the neutron transmission, characteristic sawtooth structures, called Bragg edges, appear at wavelengths where the Bragg conditions are satisfied, because of an abrupt change of the coherent scattering.\nTherefore, one can distinguish the crystal structure and crystalline quality of the samples from the Bragg edges.\nThe analysis of the Bragg edges gives unique information on the samples, and is an important technique in material sciences \cite{Bragg, nuetron_diff_2018}.\nA pulsed neutron source, which generates neutrons over a wide energy range, is suitable for observing the Bragg edges.\nThus, the combination of a neutron detector with high temporal resolution and the TOF technique at a pulsed neutron source is promising for the Bragg-edge analysis of the neutron transmission.\nThe delay-line CBKID has potential for application to the Bragg-edge method with a high spatial resolution.\n\n\nThe present work expands the active area of CBKIDs from 10\,$\times$\,10\,mm$^2$ in our preceding work\cite{CB-KID_2} to 15 $\times$ 15\,mm$^2$; consequently, the spatial resolution is improved, as discussed in Section \ref{Sec.imaging}.\nWe demonstrate herein clear imaging of various test specimens, including biological and metal samples, over the whole sensor active area.\nA successful observation of a stainless-steel Bragg edge in the neutron transmission is also discussed.\n\n\section{Current-biased kinetic inductance detector}\label{Sec.imaging}\n\nDetailed principles of the delay-line CBKID were described in Ref.~\cite{CB-KID_4}.\nHere we briefly discuss the principles of signal generation and propagation, and of imaging using the delay-line method.\n\nThe kinetic inductance in the superconductor is inversely proportional to the Cooper pair density $n_{\rm 
s}$. \nThe transient change of $n_{\rm s}$ in the superconducting wire occurs locally at a hot spot induced by a collision of a charged particle created via the nuclear reaction between $^{10}$B and a neutron.\nWhen a DC bias current $I_{\rm b}$ is fed into the superconducting wire, a pair of voltage pulses is generated at a hot spot within a tiny segment of the superconducting wire of length $\Delta l$, and each pulse propagates toward one of the two ends of the wire, with opposite polarities, as an electromagnetic wave.\nThe voltage $V$ across the hot spot is expressed as follows:\n\begin{equation}\n\label{eq:V}\nV=I_{\rm b}\frac{{\rm d}L}{{\rm d}t}\simeq I_{\rm b}\frac{{\rm d}L_k}{{\rm d}t}=I_{\rm b}\frac{\rm d}{{\rm d}t} \left(\frac{m_{\rm s}\Delta l}{n_{\rm s}q^2_{\rm s}S}\right)=-\frac{m_{\rm s}\Delta l I_{\rm b}}{n_{\rm s}^2 q_{\rm s}^2 S}\frac{{\rm d}n_{\rm s}}{{\rm d}t}, \n\end{equation}\nwhere $S$ is the cross-sectional area of the superconducting wire, and $m_{\rm s}$ and $q_{\rm s}$ are the effective mass and electric charge of the Cooper pair, respectively.\nWe stress that $V$ is a function not only of $n_{\rm s}$ but also of ${\rm d}n_{\rm s}\/{\rm d}t$.\nThis is a crucial difference from the MKID.\nBecause of the ultrafast quasi-particle excitation, ${\rm d}n_{\rm s}\/{\rm d}t$ can be sufficiently large, and thus $V$ becomes finite even if the superconducting wire remains in the superconducting zero-resistance state at the hot spot. 
\nThis is in sharp contrast with the TES and the SNSPD.\n\nThe delay-line CBKID can image the hot-spot distribution on the detector.\nFigure~\ref{CBKID}~(a) shows a schematic of the CBKID system.\nThe CBKID has two mutually orthogonal meander lines of superconducting Nb nanowires above a superconducting Nb ground plane.\nTherefore, one can regard the meander lines with the ground plane as superconductor-insulator-superconductor (S-I-S) coplanar waveguides and expect a low attenuation of the high-frequency traveling waves \cite{Swirhart}.\nAs a result, one can observe signals that have traveled through a 151\,m-long superconducting waveguide.\nThe signal propagation velocity can be suppressed by placing the superconducting meander line closer to the ground plane \cite{Koyama}. \nSince the two orthogonal meander lines lie at different distances from the ground plane, their propagation velocities differ from each other. \nA more detailed discussion of the signal generation and transmission of the CBKID based on superconducting electromagnetism has been reported in Ref. \cite{Koyama}.\nAs mentioned above, a pair of voltage pulses with opposite polarities appears at a hot spot on the meander line and propagates as electromagnetic waves toward both ends along the Nb meander line. 
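In code, the delay-line readout reduces to converting the arrival-time difference of such a pulse pair into a segment index along the meander line; a schematic Python sketch (function name is ours, and the velocity value used below is purely illustrative, not a measured parameter of the device):

```python
import math

def decode_coordinate(t_late_end, t_early_end, v, h, p):
    """Delay-line decoding for one meander line: the two pulses of a pair
    reach the two ends at different times; half of the time difference,
    scaled by the propagation velocity v, locates the hot spot along the
    line.  Dividing by the segment length h gives the segment index, and
    multiplying by the repetition pitch p converts it into a transverse
    coordinate."""
    return math.ceil((t_late_end - t_early_end) * v / (2.0 * h)) * p
```

This is the time-difference-to-coordinate conversion that the reconstruction formulas below formalize for the $X$ and $Y$ meander lines.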
\n\n\nWe can identify the signal quartet originating from a single event, even though several signals may be simultaneously present on the meander lines, as discussed elsewhere~\cite{CB-KID_4}.\nTherefore, the CBKID has a high multi-hit tolerance, up to the temporal-resolution limit at which the signals can still be discriminated.\nThe neutron incident positions $X$ and $Y$ are determined as follows:\n\n\begin{eqnarray}\n\label{eq:X}\nX={\rm ceil}\left[\frac{(t_{\rm Ch4}-t_{\rm Ch3})v_x}{2h}\right] p,\n\\\n\label{eq:Y}\nY={\rm ceil}\left[\frac{(t_{\rm Ch2}-t_{\rm Ch1})v_y}{2h}\right] p,\n\end{eqnarray}\nwhere $h$ is the length of each segment of the meander line, $p$ is the repetition pitch of the meander line, and $t_{\rm Ch1}$, $t_{\rm Ch2}$, $t_{\rm Ch3}$, and $t_{\rm Ch4}$ are the time stamps of the signals received at Ch1, Ch2, Ch3, and Ch4, which correspond to the signals propagated toward both ends of the $X$ (Ch3, Ch4) and $Y$ (Ch1, Ch2) meander lines \cite{CB-KID_4}.\nHence, we can image the positions of the mesoscopic excitations in a two-dimensional (2D) plane using a very limited number (four) of electric leads for the readout circuits. \nWe note that the pixel size is proportional to $v_{x, y}$ and inversely proportional to $h$. 
\nTherefore, the pixel size becomes finer with the reduction of $v_{x, y}$ and\/or the extension of $h$.\n\nThe acquired timestamp data are processed according to the abovementioned procedures on a data-processing computer to obtain a neutron transmission image.\n\n\n\section{Detector structure and experimental apparatus}\n\subsection{Detector structure}\nOur CBKID consists of the following stack, listed sequentially from bottom to top: (1) a 625-$\mu$m-thick silicon substrate, (2) a 300-nm-thick SiO$_2$ layer, (3) a 300-nm-thick superconducting Nb ground plane, (4) a 350-nm-thick insulating SiO$_2$ layer, (5) a 40-nm-thick superconducting Nb $Y$ meander line, (6) a 50-nm-thick insulating SiO$_2$ layer, (7) a 40-nm-thick superconducting Nb $X$ meander line, (8) a 50-nm-thick passivation SiO$_2$ layer, and (9) a $^{10}$B neutron capture layer.\nThe nuclear reaction $^{10}$B(n, $^4$He)$^7$Li mainly emits a $^4$He ion of 1.47\,MeV and a $^7$Li ion of 0.84\,MeV. The local energy dissipation provided to the meander line by each projectile creates a hot spot on the Nb meander lines.\nIn this detector, the $^{10}$B layer was made by painting a mixture of GE7031 varnish and $^{10}$B powder with a brush.\nThis method is intended to achieve a sufficient thickness compared to the ion ranges, but it introduces inhomogeneity of the $^{10}$B density in the conversion layer because the GE7031 varnish segregates upon drying.\nAll turning points of the meander lines were rounded. 
The line width was kept constant to ensure smooth propagation, without reflection of the electromagnetic waves, along the whole meander line.\nMoreover, the line width was gradually tapered from the end of each meander line to the electrode pad to prevent the signal reflection caused by a sudden change in impedance.\nThe $X$ and $Y$ meander lines, of 0.9\,$\mu$m width and 15.1\,mm segment length, were folded 10,000 times with 0.6\,$\mu$m spacing.\nThe repetition period $p$ was 1.5\,$\mu$m, and the total length of the meander line $l$ reached 151\,m.\nThe Nb meander line with two end electrodes was fabricated in the Clean Room for Analog-Digital Superconductivity (CRAVITY) at the National Institute of Advanced Industrial Science and Technology (AIST).\nCompared with the previously reported detector \cite{CB-KID_4}, we extended the segment length by a factor of 1.5, and the pitch width was reduced to 3\/4.\nThe extension of the segment length not only increases the sensor detection area, but also improves the spatial resolution with the assistance of the segment-pitch refinement.\nAlthough the ultimate pixel size of our detector may be defined by the repetition period of the meander line, the actual pixel size is an integral multiple of the repetition period because of the limited temporal resolution of the readout circuit.\n\n\n\subsection{Experimental apparatus}\nThe experimental apparatus is schematically shown in Fig.~\ref{CBKID}~(b).\nThe DC bias currents $I^x_b$ and $I^y_b$ were applied by two constant voltage sources through 3\,k$\Omega$ resistors for both meander lines.\nThe signals from Ch1, Ch2, Ch3, and Ch4 were amplified via differential ultralow-noise amplifiers (SA-430 F5 by NF Corporation), and the negative signals from Ch1 and Ch3 were inverted. 
A readout board (Kalliope-DC) and a 2.5 GHz sampling digital oscilloscope (Teledyne LeCroy HDO4104-MS) simultaneously received the positive signals, because the positive thresholds for counting signals in the Kalliope-DC board were configured for convenience.\nThe Kalliope-DC readout circuit has a 1-ns-sampling multichannel (16\,Ch\,$\times$\,2) time-to-digital converter (TDC), which was originally developed by Kojima {\it et al.} for muon-spin relaxation ($\mu$SR) measurements at the J-PARC facility \cite{Kalliope}.\nThe CBKID and test samples were cooled down to low temperatures below $T_{\rm c}$ using a Gifford-McMahon (GM) cryocooler.\nThe detector temperature was monitored using a Cernox thermometer and controlled using a heater placed near the detector.\nThe neutron beam irradiated the detector from the silicon-substrate side through the test samples, which were placed at a distance of 0.8\,mm from the detector and cooled down together with it.\nThe neutron-irradiation experiments were performed with pulsed neutrons having a collimator ratio of $L\/D=14\,{\rm m}\/0.10\,{\rm m}=140$ at BL10 of the Materials and Life Science Experimental Facility (MLF) of J-PARC \cite{BL10}.\nThe neutron energy is proportional to the square of the velocity; therefore, measuring the neutron flight time over a known distance provides the neutron energy. This is the so-called TOF method. 
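The TOF arithmetic is elementary; the following Python sketch (constants in SI units; function names are ours) recovers the neutron energy from the flight time over a path of length $L$, together with the first-order energy resolution implied by a timing width $\Delta t$:

```python
M_N = 1.674927e-27       # neutron mass (kg)
E_CHARGE = 1.602177e-19  # elementary charge (C), used for the J -> eV conversion

def neutron_energy_meV(path_m, tof_s):
    """Kinetic energy E = m v^2 / 2 of a neutron covering `path_m` metres
    in `tof_s` seconds, converted to meV."""
    v = path_m / tof_s
    return 0.5 * M_N * v * v / E_CHARGE * 1e3

def relative_energy_resolution(tof_s, dt_s):
    """Since E is proportional to t^(-2), a timing width dt gives, to first
    order, a relative energy width dE/E = 2 dt / t."""
    return 2.0 * dt_s / tof_s
```

For example, a 10\,meV neutron covers the 14\,m path in about 10.1\,ms, so a 33\,$\mu$s pulse width translates into $\Delta E/E \approx 0.7\,\%$ at this energy.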
\nThe energy resolution achieved with the TOF method through the 14-m flight path corresponds to 33\\,$\\mu$s full width at half maximum (FWHM) at 10\\,meV.\n\n\\section{Results and discussion}\n\\subsection{Signals from the neutron reaction in the CBKID}\nFigure~\\ref{Signal} shows a typical signal quartet measured using the oscilloscope.\nCh1 and Ch2 corresponded to the two ends of the $Y$ meander line, whereas Ch3 and Ch4 corresponded to the two ends of the $X$ meander line.\nNotably, the negative signals from Ch1 and Ch3 were inverted to positive signals using a differential amplifier.\nThus, these four different signals arose from a single neutron-reaction event at the hot spot.\nFrom Eqs.~(\\ref{eq:X}) and (\\ref{eq:Y}), using the times at which these four signals were detected, the position of the hot spot created by the neutron reaction was specified as a 2D coordinate.\nThe widths of these signals were approximately 40\\,ns, indicating a sharp response.\nThese narrow signal widths and the signal-quartet selection procedure, which are characteristic of the CBKID, enabled high-speed measurement and energy-dispersive neutron imaging in combination with the TOF technique.\nWe estimated the theoretical limit of the detection-rate tolerance to be as high as a few tens of MHz because, in contrast with other techniques, the CBKID can discriminate multi-hit events.\nIn practice, a detection rate of 0.2\\,MHz can be read out using our current system.\n\n\\subsection{Direct beam measurement and image processing}\\label{background}\nHere we show the neutron transmission image obtained using the CBKID without any test samples.\nThe color scale indicates the number of events (NOE).\nThe image was obtained by summing over 17.9\\,h under the conditions of $T=4.0\\,$K, $I_{\\rm b}^x = I_{\\rm b}^y = $0.15\\,mA, and 395\\,kW beam power.\nFigure~\\ref{DirectBeam}~(a) shows the image with an incident neutron wavelength $\\lambda$ ranging from 0.052 to 1.13\\,nm.\nThe NOEs from 
$10\\times10$ pixels were combined in Figs.~\\ref{DirectBeam}~(a) and (b) to obtain a high-contrast image.\n\nThe $^{10}$B neutron-conversion layer was not sufficiently homogeneous in the present CBKID sensor, as evidenced by the irregular curves seen in Fig.~\\ref{DirectBeam}~(a).\nAdditionally, a white diagonal line from the upper left to the lower right can be seen.\nWe consider this line to be caused by signal leaks between the $X$ and $Y$ meander lines.\nThe actual signal is weakened if it is opposite in polarity to the leak signal and the two merge at a leak point.\nAssuming that the signals arising from the neutron reaction at $(n_x, n_y)$ weaken each other at a leak point $(n_x^l, n_y^l)$, the relation between $(n_x, n_y)$ and $(n_x^l, n_y^l)$ satisfies the following equation:\n\\begin{eqnarray}\n\\label{eq:line1}\n\\frac{n_x^l-n_x}{v_x}=\\frac{n_y-n_y^l}{v_y},\n\\end{eqnarray}\nwhere $v_x$ and $v_y$ are the propagation velocities of the $X$ and $Y$ meander lines, respectively.\nFrom Eq.~(\\ref{eq:line1}), we obtain the following linear function:\n\\begin{eqnarray}\n\\label{eq:line2}\nn_y=-\\frac{v_y}{v_x}n_x+n_y^l+\\frac{v_y}{v_x}n_x^l.\n\\end{eqnarray}\nFrom Eq.~(\\ref{eq:line2}), the diagonal line is predicted to be a straight line with a slope of $-v_y\/v_x = -5.74672\\times 10^7\/6.29011\\times 10^7 = -0.913612$.\nThe slope of the diagonal line obtained from Fig.~\\ref{DirectBeam}~(a) was $-0.9135$, in good agreement with this prediction.\nThe neutron imaging of the test samples was performed before the direct beam measurement and showed no diagonal line, implying that the diagonal line appeared because of aging degradation of the detector.\nWe removed the diagonal line via image processing before proceeding with the background correction (Fig.~\\ref{DirectBeam}~(b)).\n\n\\subsection{Neutron imaging of the test samples}\nFigure~\\ref{SampleImage}~(a) shows a photograph of the test objects, namely (\\#1) a spider, (\\#2) a titanium screw, (\\#3) 
a stainless-steel screw, (\\#4) another stainless-steel screw, (\\#5) a Japanese beautyberry (plant), and (\\#6) a circuit board.\nIn addition, we superimposed well-shaped $^{10}$B dots as a neutron absorber to test the spatial resolution, as shown in Fig.~\\ref{SampleImage}~(b).\nThe test absorber comprised a 50-$\\mu$m-thick stainless-steel mesh (15\\,$\\times$\\,15\\,mm$^2$ in size), wherein 100\\,$\\times$\\,100\\,$\\mu$m$^2$ square holes were arrayed in a square lattice (lattice constant: 250\\,$\\mu$m).\nEach hole was tightly filled with very fine $^{10}$B particles. The stainless-steel mesh was fabricated using the wet-etching technique; hence, the corners and edges of the square holes were somewhat rounded [see the optical photograph shown in Fig.~\\ref{SampleImage}~(b)].\nThe measurement was performed for 104.9\\,h under the conditions of a bias current $I_{\\rm b}=0.15\\,$mA, a temperature $T$=4.0\\,K, and a 304\\,kW beam power.\n\nFigure~\\ref{SampleImage}~(c) shows the neutron transmission image with incident neutron wavelength $\\lambda$ ranging from 0.052 to 1.13\\,nm after correcting for the background by dividing the neutron image with the test samples by the direct beam image of Fig.~\\ref{DirectBeam}~(b).\nNotably, the NOEs from $10\\times10$ pixels were combined.\nThe plant fruit, three screws, spider, and $^{10}$B-dot pattern could all be confirmed, demonstrating the capability of the CBKID for neutron imaging of organic and metal samples.\nMoreover, an internal structure can be seen inside the two stainless-steel screws. \nSuch a structure was not seen in the titanium screw.\nAdditionally, the difference between the pulp and seed parts of the internal structure of the berry (\\#5) can be seen.\nIrregular curves still remained visible. 
\nAs mentioned earlier, we succeeded in imaging the test objects of interest over the whole 15\\,$\\times$\\,15\\,mm$^2$ area using the CBKID.\n\n\\subsection{Spatial resolution}\nHere we discuss the spatial resolution of the CBKID using the $^{10}$B-dot pattern embedded in the test sample.\nFigures~\\ref{LineProf}~(a), (b), (c), and (d) show typical line profiles along the (a) $X$- and (c) $Y$-directions and the corresponding derivatives along the (b) $X$- and (d) $Y$-directions, with minimum pixel sizes of 3 or 4.5\\,$\\mu$m and 1.5 or 3\\,$\\mu$m for the $X$- and $Y$-directions, respectively.\nThe Gaussian fits to the derivatives are depicted by the solid lines in Figs.~\\ref{LineProf}~(b) and (d).\nWe performed Gaussian fits to the derivatives for all the clear dot patterns (480\\,points) appearing in Fig.~\\ref{SampleImage}~(c) and obtained average FWHM values of 19.2\\,$\\mu$m and 16.2\\,$\\mu$m for the $X$- and $Y$-directions, respectively.\nThe spatial resolution in the $Y$-direction was better than that in the $X$-direction because $v_y$ was slower than $v_x$.\nThe imperfect edge sharpness of the holes in the stainless-steel mesh and the incomplete filling of the holes with $^{10}$B particles may partially affect these spatial-resolution results.\nAs expected, the present spatial resolutions were improved compared with those in a previous report \\cite{CB-KID_4}, in which the spatial resolutions were evaluated using a similar test pattern.\n\n\\subsection{Material analysis using the Bragg edge}\nThe combination of the high temporal resolution of the CBKID and the TOF method at a pulsed neutron source allowed us to demonstrate wavelength (energy) selective neutron imaging.\nFigure~\\ref{BraggEdge}~(a) shows the neutron transmission of the stainless-steel screw (sample \\#3). 
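The positions of the Bragg edges discussed in this subsection can be estimated from the cutoff condition lambda_hkl = 2 d_hkl, beyond which the hkl lattice planes can no longer satisfy Bragg's law. A minimal Python sketch for a cubic lattice (the austenite lattice constant a ≈ 0.359 nm is an assumed literature value, not one taken from these measurements):

```python
import math

def bragg_edge_nm(h, k, l, a_nm=0.359):
    """Bragg-edge (cutoff) wavelength lambda = 2 * d_hkl for a cubic
    lattice, with interplanar spacing d_hkl = a / sqrt(h^2 + k^2 + l^2)."""
    return 2.0 * a_nm / math.sqrt(h ** 2 + k ** 2 + l ** 2)
```

With this assumed lattice constant, the 111 edge falls near 0.41\,nm and the 200 edge near 0.36\,nm, consistent with the wavelength windows chosen just below (0.390--0.401\,nm) and just above (0.423--0.434\,nm) the observed 111 edge.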
\nNotably, the stainless-steel mesh was not attached during the direct beam measurements; however, it was installed during the neutron imaging measurements of the test samples.\nThe 111 Bragg edge of stainless steel was clearly observed, as shown in Fig.~\\ref{BraggEdge}~(a).\nA small but distinct 200 Bragg edge was also observed.\nFigure~\\ref{BraggEdge}~(b) shows the ratio of the neutron transmission image at wavelengths shorter than the 111 Bragg edge (0.390(1)\\,nm$<\\lambda<$0.401(1)\\,nm) to that at wavelengths longer than the 111 Bragg edge (0.423(1)\\,nm$<\\lambda<$0.434(2)\\,nm). \nAs shown in Fig.~\\ref{BraggEdge}~(b), only the stainless-steel screws were clearly observed; the other samples were canceled out by the division.\n\n\\section{Summary}\nIn this study, we demonstrated neutron imaging with a higher spatial resolution than in the previous report \\cite{CB-KID_4}, as well as Bragg-edge analysis, using delay-line CBKID systems.\nWe succeeded in fabricating 0.9-$\\mu$m-wide strip lines without any disconnection, even over a length of 151\\,m.\nThis improvement of the CBKIDs brought the spatial resolution down to 16.2\\,$\\mu$m.\nTest samples of various shapes and materials were clearly observed over the whole sensor area of 15\\,$\\times$\\,15\\,mm$^2$.\nIn addition, our detector achieves a high temporal resolution in combination with the Kalliope-DC readout circuit, whose TDC has 1\\,ns resolution, enabling high-speed data acquisition.\nBy combining with the TOF method, the delay-line CBKID became capable of wavelength (energy)-selective neutron imaging and Bragg-edge analysis, as demonstrated for stainless steel.\nFurther improvement of the fabrication method for a homogeneous $^{10}$B-conversion layer is required. Nonetheless, the CBKID has potential as a unique neutron imager.\n\n\\ack\nThis work is partially supported by a Grant-in-Aid for Scientific Research (Grant Nos. 
JP16H02450, JP19K03751) from JSPS.\nThe neutron-irradiation experiments at the Materials and Life Science Experimental Facility (MLF) of J-PARC are conducted under the support of MLF programs (Proposal Nos. 2015A0129, 2016B0112, 2017A0011, 2017B0014, 2018A0109, 2015P0301, 2016P0301, 2018P0201).\nDevelopment of the Kalliope TDC and readout electronics\/software is conducted in collaboration with the KEK Open Source Consortium of Instrumentation (Open-IT).\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nVision-based deep reinforcement learning has recently been applied to robotic manipulation tasks with promising success (\\cite{Quillen, Kalashnikov, haarnoja2, ebert2018, Agrawal2016, Schwab}). Despite successes in terms of task performance, reinforcement learning is not an established solution method in robotics, mainly because of lengthy training times (e.g., four months with seven robotic arms in \\cite{Kalashnikov}). We argue in this work that reinforcement learning can be made much faster, and therefore more practical in the context of robotics, if additional elements of human physiology and cognition are incorporated: namely, the abilities associated with active, goal-directed perception.\n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/main.pdf}\n \\caption{Our active perception setup, showing the interaction between two manipulators (A, E). The camera manipulator (A) is used to shift a wrist-attached camera frame (B) about a fixation point (C) while maintaining the line of sight (D) aligned with the point. The gripper manipulator (E) is equipped with a 6-DoF action space. The original image observation (G) is sampled with a log-polar-like transform to obtain (H). 
Note that the log-polar sampling reduces the image width and height by a factor of four (256$\\times$256 to 64$\\times$64) without sacrificing the quality of the central region.}\n\\end{figure}\n\nWe focus in particular on two related strategies used by the human visual system \\cite{Gegenfurtner2016}.\nFirst, the human retina provides a space-variant sampling of the visual field such that the density of photoreceptors is highest in the central region (fovea) and declines towards the periphery. This arrangement allows humans to have high-resolution, sharp vision in a small central region while maintaining a wider field-of-view. Second, humans (and primates in general) possess a sophisticated repertoire of eye and head movements \\cite{Liversedge2011} that align the fovea with different visual targets in the environment (a process termed ``foveation''). This ability is essential for skillful manipulation of objects in the world: under natural conditions, humans will foveate an object before manipulating it \\cite{Johansson2001}, and performance declines for actions directed at objects in the periphery \\cite{Prado2005}. \n\nThese properties of the primate visual system have not gone unnoticed in the developmental robotics literature. Humanoid prototypes are often endowed with viewpoint control mechanisms (\\cite{Orabona, Colombo1996, Metta2000, Falotico2009}). The retina-like, space-variant visual sampling is often approximated using the log-polar transform, which has been applied to a diverse range of visual tasks (see \\cite{Traver2010} for a review). Space-variant sampling, in conjunction with an active perception system, allows a robot to perceive high-resolution information about an object (e.g., shape and orientation) and still maintain enough contextual information (e.g., location of the object and its surroundings) to produce appropriate goal-directed actions. We mimic these two properties in our model. 
First, in addition to the grasping policy, we learn an additional `fixation' policy that controls a second manipulator (Figure 1A, B) to look at different objects in space. Second, images observed by our model are sampled using a log-polar-like transform (Figure 1G, H), disproportionately representing the central region.\n\nActive perception provides two benefits in our model: an attention mechanism (often termed ``hard'' attention in the deep learning literature) and an implicit way to define goals for downstream policies (manipulate the large central object in view). A third way we exploit viewpoint changes is for multiple-view self-supervised representation learning. The ability to observe different views of an object or a scene has been used in prior work (\\cite{Sermanet2017a, Eslami2018, Dwibedi2018, Yan2017a}) to learn low-dimensional state representations without human annotation. Efficient encoding of object and scene properties from high-dimensional images is essential for vision-based manipulation; we utilize Generative Query Networks \\cite{Eslami2018} for this purpose. While prior work assumed multiple views were available to the system through unspecified or external mechanisms, here we use a second manipulator to change viewpoints and to parameterize the camera pose with its proprioceptive input.\n\nWe apply our active perception and representation (APR) model to the benchmark simulated grasping task published in \\cite{Quillen}. We show that our agent can a) identify and focus on task-relevant objects, b) represent objects and scenes from raw visual data, and c) learn a 6-DoF grasping policy from sparse rewards. In both the 4-DoF and 6-DoF settings, APR achieves competitive performance (85\\% success rate) on test objects in under 70,000 grasp attempts, providing a significant increase in sample-efficiency over algorithms that do not use active perception or representation learning \\cite{Quillen}. 
Our key contributions are:\n\\begin{itemize}\n\\item a biologically inspired model for visual perception applied to robotic manipulation\n\\item a simple approach for joint learning of eye and hand control policies from sparse rewards\n\\item a method for sample-efficient learning of 6-DoF, viewpoint-invariant grasping policies\n\\end{itemize}\n\n\\begin{figure*} [t!]\n \\centering\n \\includegraphics[width=0.9\\textwidth]{images\/model.pdf}\n \\caption{The APR Model. Visual (A) and proprioceptive (B) input from one view are encoded by the multimodal encoder to obtain the representation $r1$. The representation $r2$ is similarly obtained by encoding the visual (C) and proprioceptive input (D) from a second view. $r1$ and $r2$ are added to obtain the combined scene representation $r$. The action $a$, state-value $v$, and action-value function $q$ are computed for both the grasp policy (E) and the fixation policy (G). The GQN generator $g$ predicts the image from a query viewpoint, which is compared to the ground-truth image from that view (F). Yellow boxes represent fully connected layers. Pink boxes represent convolutional blocks.}\n \\label{figure:model}\n\\end{figure*}\n\\section{Related Work}\n\n\\subsection{Deep RL for Vision-Based Robotic Manipulation} \n\nOur task is adapted from the simulated setup used in \\cite{Quillen} and \\cite{Kalashnikov}. \\cite{Kalashnikov} showed that intelligent manipulation behaviors can emerge through large-scale Q-learning in simulation and on real-world robots. The robots were only given RGB inputs from an uncalibrated camera along with proprioceptive inputs. A comparative study of several Q-learning algorithms in simulation was performed in \\cite{Quillen} using the same task setup. Achieving a success rate of over 80\\% required over 100K grasp attempts. Performance of 85\\% or over is reported with 1M grasp attempts. 
Furthermore, \\cite{Kalashnikov} and \\cite{Quillen} restricted the action space to 4-DoF (top-down gripper orientations). We remove such restrictions, allowing the gripper to control all 6-DoF as this is important for general object manipulation. \n\nReinforcement learning with high-dimensional inputs and sparse rewards is data intensive (\\cite{Mnihb, Kaiser2019}), posing a problem for real world robots where collecting large amounts of data is costly. Goal-conditioned policies have been used to mitigate the sparse reward problem in previous work (\\cite{Andrychowicz, Nair2018}). In addition to optimizing the sparse rewards available from the environments, policies are also optimized to reach different goals (or states), providing a dense learning signal. We adopt a similar approach by using the 3D points produced by a fixation policy as reaching targets for the grasping policy. This ensures that the grasping policy always has a dense reward signal. We use the Soft Actor-Critic algorithm \\cite{Haarnoja2018} for policy learning, which was shown to improve both sample-efficiency and performance on real world vision-based robotic tasks \\cite{haarnoja2}. \n\n\\subsection{Multiple View Object and Scene Representation Learning}\n\nClassical computer vision algorithms infer geometric structure from multiple RGB or RGBD images. For example, structure from motion \\cite{Ozyesil2017} algorithms use multiple views of a scene across time to produce an \\textit{explicit} representation of it in the form of voxels or point sets. Multiple, RGBD images across space can also be integrated to produce such explicit representations \\cite{Zollhofer2018}. The latter approach is often used to obtain a 3D scene representation in grasping tasks (\\cite{Zeng2017a, Zeng2017}). In contrast to these methods, neural-based algorithms learn \\textit{implicit} representations of a scene. 
This is typically structured as a self-supervised learning task, where the neural network is given observations from some viewpoints and is tasked with predicting observations from unseen viewpoints. The predictions can take the form of RGB images, foreground masks, depth maps, or voxels (\\cite{Rezende2016, Gadelha, Tulsiani2017, Yan2016, Wu2016b, Eslami2018}). The essential idea is to infer low-dimensional representations by exploiting the relations between the 3D world and its projections onto 2D images. A related approach is described in \\cite{Florence}, where the network learns object descriptors using a pixelwise contrastive loss. However, data collection required a complex pre-processing procedure (including a 3D reconstruction) in order to train the network in the first place. Instead of predicting observations from different views, Time Contrastive Networks (TCNs) \\cite{Sermanet2017a} use a metric learning loss to embed different viewpoints closer to each other than to their temporal neighbors, learning a low-dimensional image embedding in the process.\n\nMultiple view representation learning has proven useful for robotic manipulation. TCNs \\cite{Sermanet2017a} enabled reinforcement learning of manipulation tasks and imitation learning from humans. Perspective transformer networks \\cite{Yan2016} were applied to a 6-DoF grasping task in \\cite{Yan2017a}, showing improvements over a baseline network. \\cite{Florence} used object descriptors to manipulate similar objects in specific ways. GQNs \\cite{Eslami2018} were shown to improve data-efficiency for RL on a simple reaching task. In this work we chose to use GQNs for several reasons: a) they require minimal assumptions, namely, the availability of RGB images only, and b) they can handle unstructured scenes, representing both multiple objects and contextual background information. We adapted GQNs to our framework in three ways. 
First, viewpoints are not arbitrarily distributed across the scene; rather, they maintain the line of sight directed at the 3D point chosen by the fixation policy. Second, we apply the log-polar-like transform to all the images, such that the central region of the image is disproportionately represented. These two properties allow the representation to be largely focused on the central object, with contextual information receiving less attention according to its distance from the image center. Third, instead of learning the representation prior to the RL task as done in \\cite{Eslami}, we structure the representation learning as an auxiliary task that is jointly trained along with the RL policies. This approach has been previously used in \\cite{Jaderberg2016a}, for example, resulting in 10x better data-efficiency on Atari games. Thus APR jointly optimizes two RL losses and a representation loss from a single stream of experience. \n\n\\subsection{Visual Attention Architectures}\n\nAttention mechanisms are found in two forms in the deep learning literature \\cite{Xu2015a}. ``Soft'' attention is usually applied as a weighting on the input, such that more relevant parts receive heavier weighting. ``Hard'' attention can be viewed as a specific form of soft attention, where only a subset of the attention weights are non-zero. When applied to images, this usually takes the form of an image crop. Hard attention architectures are not the norm, but they have been used in several prior works, where a recurrent network is often used to iteratively attend to (or ``glimpse'') different parts of an image. In \\cite{Eslami}, this architecture was used for scene decomposition and understanding using variational inference. In \\cite{Gregor2015}, it was used to generate parts of an image one at a time. In \\cite{Mnih}, it was applied to image classification tasks and dynamic visual control for object tracking. 
More recently in \\cite{Elsayed}, hard attention models have been significantly improved to perform image classification on ImageNet. Our work can be seen as an extension of these architectures from 2D to 3D. Instead of a 2D crop, we have a 3D shift in position and orientation of the camera that changes the viewpoint. We found a single glimpse was sufficient to reorient the camera so we did not use a recurrent network for our fixation policy. \n\n\\section{Method}\n\n\\subsection{Overview}\n\nWe based our task on the published grasping environment \\cite{Quillen}. A robotic arm with an antipodal gripper must grasp procedurally generated objects from a tray (Figure 1). We modify the environment in two ways: a) the end-effector is allowed to move in full 6-DoF (as opposed to 4-DoF), and b) a second manipulator (the head) is added with a camera frame fixed onto its wrist. This second manipulator is used to change the viewpoint of the attached camera. The agent therefore is equipped with two action spaces: a viewpoint control action space and a grasp action space. Since the camera acts as the end-effector on the head, its position and orientation in space are specified by the joint configuration of that manipulator: $v = (j_1, j_2, j_3, j_4, j_5, j_6)$. The viewpoint action space is three-dimensional, defining the point of fixation $(x, y, z)$ in 3D space. Given a point of fixation, we sample a viewpoint from a sphere centered on it. The yaw, pitch and distance of the camera relative to the fixation point are allowed to vary randomly within a fixed range. We then use inverse kinematics to move the head to the desired camera pose. Finally, the second action space is 6-dimensional $(dx, dy, dz, da, db, dc)$, indicating the desired change in gripper position and orientation (Euler angles) at the next timestep. \n\nEpisodes are structured as follows. 
The agent is presented with an initial view (fixation point at the center of the bin) and then executes a glimpse by moving its head to fixate a different location in space. This forms a single-step episode from the point of view of the glimpse policy (which reduces the glimpse task to the contextual bandits formulation). The fixation location is taken as the reaching target; this defines the auxiliary reward for the grasping policy. The grasping policy is then executed for a fixed number of timesteps (maximum 15) or until a grasp is initiated (when the tool tip drops below a certain height). This defines an episode from the point of view of the grasping policy. The agent receives a final sparse reward if an object is lifted and the tool position at grasp initiation was within 10cm of the fixation target. The latter condition encourages the agent to look more precisely at objects, as it is only rewarded for grasping objects it was looking at. The objective of the task is to maximize the sparse grasp success reward. The grasping policy is optimized using the sparse grasp reward and the auxiliary reach reward, and the fixation policy is optimized using the grasp reward only. \n\nNote that all views sampled during the grasping episode are aligned with the fixation point. In this manner, the grasping episode is implicitly conditioned by the line of sight. Essentially, this encourages the robot to achieve a form of eye-hand coordination where reaching a point in space is learnt as a reusable skill. The manipulation task is thus decomposed into two problems: localize and fixate a relevant object, then reach for and manipulate said object. \n\n\\subsection{Model}\n\nAn overview of APR is given in Figure \\ref{figure:model}. 
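As a concrete sketch of the viewpoint sampling described in the Overview (a camera pose drawn from a sphere centered on the fixation point, with the line of sight kept on that point), the following illustrates one way such a sampler could be written; the angle and distance ranges and the look-at construction are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

def sample_camera_pose(fix, yaw_range=(-np.pi, np.pi),
                       pitch_range=(0.2, 1.2), dist_range=(0.4, 0.8),
                       rng=np.random):
    """Sample a camera position on a sphere around the fixation point `fix`
    and return (position, rotation matrix) with the optical axis (third
    column) passing through the fixation point."""
    yaw = rng.uniform(*yaw_range)
    pitch = rng.uniform(*pitch_range)   # kept away from +-pi/2 so the
    d = rng.uniform(*dist_range)        # look-at construction is defined
    offset = d * np.array([np.cos(pitch) * np.cos(yaw),
                           np.cos(pitch) * np.sin(yaw),
                           np.sin(pitch)])
    eye = np.asarray(fix, dtype=float) + offset
    z = (np.asarray(fix) - eye) / d         # view direction (unit vector)
    x = np.cross([0.0, 0.0, 1.0], z)        # camera right axis
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                      # camera down/up axis
    return eye, np.stack([x, y, z], axis=1)
```

In the full system, the sampled pose would then be reached via inverse kinematics of the head manipulator, so that the camera is parameterized by its joint angles rather than by this pose directly.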
Multimodal input from one view, consisting of the view parameterization (six joint angles of the head $v = (j_1, j_2, j_3, j_4, j_5, j_6)$), the image ($64 \\times 64 \\times 3$), and the gripper pose $g = (x, y, z, \\sin(a), \\cos(a), \\sin(b), \\cos(b), \\sin(c), \\cos(c))$, is encoded into a scene representation, $r1$, using a seven-layer convolutional network with skip connections. $(a, b, c)$ are the Euler angles defining the orientation of the gripper. The scene representation $r1$ is of size $16 \\times 16 \\times 256$. The proprioceptive input vectors $g$ and $v$ are given spatial extent and tiled across the spatial dimension ($16 \\times 16$) before being concatenated to an intermediate layer of the encoder. The input from a second view (Figure 2C, D) is similarly used to obtain $r2$, which is then summed with $r1$ to obtain $r$, the combined scene representation.\n\nThe fixation and grasping policies operate on top of $r$. Their related outputs (action $a$, state-value $v$, and action-value function $q$) are each given by a convolutional block followed by a fully connected layer. The convolutional blocks each consist of three layers of $3 \\times 3$ kernels with 128, 64, and 32 channels, respectively. The generator is a conditional, autoregressive latent variable model that uses a convolutional LSTM layer. Conditioned on the representation $r$, it performs 12 generation steps to produce a probability distribution over the query image. The encoder and generator architectures are unmodified from the original work; for complete network details, we refer the reader to \\cite{Eslami2018}.\n\n\\begin{figure}[t!] \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/episode.pdf}\n \\caption{Comparing the visual inputs of the active and passive models during a five-step episode. Top: images sampled from different views centered on the target object. Bottom: images from one of the static cameras of the passive model. 
An interesting feature of the active input is that the gripper appears larger as it approaches the target object, providing an additional learning cue.}\n\\end{figure}\n\nThe log-polar-like sampling we use is defined as follows. Let $(u, v) \\in [-1, 1] \\times [-1, 1]$ be a coordinate in a regularly spaced image sampling grid. We warp $(u, v)$ to obtain the log-polar sampling coordinate $(u', v')$ using the following equation: \n\n$$\n(u', v') = \\log(\\sqrt{u^2 + v^2} + 1) \\cdot (u, v)\n$$\n\n\\subsection{Learning}\n\nWe learn both policies using the Soft Actor-Critic algorithm \\cite{Haarnoja2018}, which optimizes the maximum-entropy RL objective. For detailed derivations of the loss functions for policy and value learning, we refer the reader to \\cite{Haarnoja2018}. In conjunction with the policy learning, the multimodal encoder and generator are trained using the generative loss (evidence lower bound) of GQN. This loss consists of KL divergence terms and a reconstruction error term obtained from the variational approximation \\cite{Eslami2018}. Note that the encoder does not accumulate gradients from the reinforcement learning losses and is only trained with the generative loss. To obtain multiple views for training, we sample three viewpoints centered on the given fixation point at every timestep during a grasping episode. Two of those are randomly sampled and used as context views to obtain $r$; the third acts as the ground truth for prediction. We did not perform any hyperparameter tuning for the RL or GQN losses and used the same settings found in \\cite{Haarnoja2018} and \\cite{Eslami2018}. \n\n\\section{Experiments}\n\nWe perform three experiments that examine the performance of active vs passive models (Section A), the performance of active models that choose their own targets (Section B), and the benefits of log-polar images and representation learning for active models (Section C).\n\nIn our experiments, training occurs with a maximum of 5 objects in the bin. 
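For reference, the log-polar-like warp defined above can be implemented in a few lines of NumPy (a sketch; the nearest-neighbour sampling, the defensive clipping, and the 256-to-64 sizes are our own illustrative choices, not details from the paper):

```python
import numpy as np

def log_polar_warp(image, out_size=64):
    """Resample `image` on a foveal grid: an output pixel at normalized
    coords (u, v) reads the source at (u', v') = log(sqrt(u^2+v^2)+1)*(u, v),
    so the center is sampled densely and the periphery sparsely."""
    h, w = image.shape[:2]
    lin = np.linspace(-1.0, 1.0, out_size)
    uu, vv = np.meshgrid(lin, lin)
    scale = np.log(np.sqrt(uu ** 2 + vv ** 2) + 1.0)
    su, sv = scale * uu, scale * vv  # warped source coordinates in [-1, 1]
    # map normalized coordinates to pixel indices (nearest neighbour)
    cols = np.clip(np.rint((su + 1) / 2 * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip(np.rint((sv + 1) / 2 * (h - 1)).astype(int), 0, h - 1)
    return image[rows, cols]
```

Applied to a 256x256 observation with out_size=64, this reproduces the four-fold reduction per dimension described in Figure 1 while oversampling the central region.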
A typical run takes approximately 26 hours (on a single machine), with learning saturating before 70K grasps. Every episode, the objects are sampled from a set of 900 possible training objects. For evaluation, we use episodes with exactly 5 objects present in the bin. Evaluation objects are sampled from a different set of 100 objects, following the protocol in \\cite{Quillen}. \n\n\\subsection{Active vs Passive Perception}\n\nWe evaluate active looking vs a passive (fixed) gaze for targeted grasping. This setting is designed to test goal-oriented manipulation, rather than grasping any object arbitrarily. For this experiment, we do not learn a fixation policy, but instead use goals, or target objects, that are selected by the environment. The policies are only rewarded for picking up the randomly chosen target object in the bin. The active model and the passive model receive the same inputs (visual and proprioceptive) along with a foreground object mask that indicates the target object. The only difference between the two models is the nature of the visual input: the active model observes log-polar images that are centered on the target object, while the passive model observes images of the bin from three static cameras (Figure 3). The static cameras are placed such that each can clearly view the contents of the bin and the gripper. This mimics a typical setup where cameras are positioned to view the robot workspace, with no special attention to any particular object or location in space. Using an instance mask to define the target object was previously done in \\cite{Fangb}, for example. Note that the generator is trained to also reconstruct the mask in this case, forcing the representation $r$ to preserve the target information. \n\nTable 1 (Active-Target, Passive-Target) shows the evaluation performance of the active vs passive model with environment-selected targets. We observe that the active model achieves 8\\% better performance. 
Figure 4 (yellow vs blue curves) shows that the active model is more sample-efficient as well. \n\nThe performance of the passive model (at 76\\%) is in line with the experiments in \\cite{Quillen} on targeted grasping. None of the algorithms tested there surpassed an 80\\% success rate, even with 1M grasp attempts. The experiment above suggests that, had the robot been able to observe the environment in a more ``human-like'' manner, targeted grasping performance could approach performance on arbitrary object grasping.\n\n\\begin{table}[t]\n\\caption{Evaluation Performance}\n\\label{table_example}\n\\begin{center}\n\\begin{tabular}{|c||c|}\n\\hline\nModel & Grasp Success Rate\\\\\n\\hline\nActive-Target & 84\\% \\\\\n\\hline\nPassive-Target & 76\\% \\\\\n\\hline\nActive-Target w\/o Log-Polar & 79\\% \\\\\n\\hline\nActive-Learned-6D (after 70K grasps) & 85\\% \\\\\n\\hline\nActive-Learned-4D (after 25K grasps) & 85\\% \\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\begin{figure}[t!] \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/line-plot.pdf}\n \\caption{Learning curves for our experiments. Active-Target, Passive-Target: active and passive models with environment selected targets. Active-Learned: full APR model with fixation policy. Active-w\/o representation, Active-Target w\/o log-polar: APR versions without representation learning or log-polar sampling, respectively. Shaded regions indicate the standard deviation over two independent runs with different random seeds.}\n\\end{figure} \n\n\\subsection{Learning Where to Look}\n\nThe experiment above shows that active perception outperforms passive models in goal-oriented grasping. But can a robot learn where to look? Here we use the full version of the model with the learned fixation policy. Grasp rewards are given for picking up any object, as long as the object was within 10cm of the fixation point. This ensures that the model is only rewarded for goal-oriented behavior. 
In this setting, the model learns faster in the initial stages than in the targeted grasping case (Figure 4) and is slightly better in final performance (Table 1). This does not necessarily imply that this model is better at grasping; it could be that the model chooses easier grasping targets. The latter may nevertheless be a virtue depending on the context (e.g., a bin-emptying application). This result indicates that active perception policies can be learnt in conjunction with manipulation policies. \n\nNote that the full version of APR does not use additional information from the simulator beyond the visual and proprioceptive inputs. In contrast to Section A, the fixation point (and therefore the auxiliary reaching reward) is entirely self-generated. This makes APR directly comparable to the vanilla deep Q-learning algorithms studied in \\cite{Quillen}. With 100K grasp attempts, the algorithms in \\cite{Quillen} achieve approximately an 80\\% success rate. We tested the model in the 4-DoF case, where it achieves an 85\\% success rate with 25K grasps (Table 1). Therefore, APR outperforms these previous baselines with four times fewer samples. [33] reported improved success rates of 89-91\\% with vanilla deep Q-learning after 1M grasps (though it was not reported what levels of performance were attained between 100K and 1M grasps). On the more challenging 6-DoF version, we achieve an 85\\% success rate with 70K grasps, but we have not yet extended the simulations to 1M grasps to allow a direct comparison with these results. \n\n\\subsection{Ablations} \n\nTo examine the effects of the log-polar image sampling and the representation learning, we ran two ablation experiments in the environment selected target setting (as in Section A). Figure 4 (red curve) shows that APR without representation learning achieves negligible improvement within the given amount of environment interaction. 
(Without the representation learning loss, we allow the RL loss gradients to backpropagate to the encoder; otherwise it would not receive any gradient at all). The pink curve shows APR without log-polar images. The absence of the space-variant sampling impacts both the speed of learning and final performance (Table 1).\n\n\\section{Discussion and Future Work}\n\nWe presented an active perception model that learns where to look and how to act using a single reward function. We showed that looking directly at the target of manipulation enhances performance compared to statically viewing a scene (Section 4A), and that our model is competitive with prior work while being significantly more data-efficient (Section 4B). We applied the model to a 6-DoF grasping task in simulation, which requires appropriate reaching and object maneuvering behaviors. This is a more challenging scenario as the state space is much larger than the 4-DoF state space that has typically been used in prior work \\cite{Pinto2015, Quillen, Kalashnikov}. 6-DoF control is necessary for more general object manipulation beyond top-down grasping. Figure 5 shows interesting cases where the policy adaptively orients the gripper according to scene and object geometry. \n\nThe biggest improvement over vanilla model-free RL algorithms came from representation learning, which benefited both passive and active models. Figure 6 shows sample generations from query views along with ground truth images from a single run of the active model. Increasingly sharp renderings (a reflection of increasingly accurate scene representation) correlated with improving performance as the training run progressed. While the generated images retained a degree of blurriness, the central object received a larger degree of representational capacity simply by virtue of its disproportionate size in the image. 
This is analogous to the phenomenon of ``cortical magnification'' observed in visual cortex, where stimuli in the central region of the visual field are processed by a larger number of neurons compared to stimuli in the periphery \\cite{DANIEL1961}. We suspect that such a representation learning approach -- one that appropriately captures the context, the end-effector, and the target of manipulation -- is useful for a broad range of robotic tasks. \n\n\\begin{figure}[t] \n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/nice.pdf}\n \\caption{Examples of pre-grasp orienting behaviors due to the policy's 6-DoF action space.}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=0.9\\columnwidth]{images\/repr.pdf}\n \\caption{Scene renderings from query views at different snapshots during active model training. At later stages, the gripper, central object, and bin are well-represented. Surrounding objects occupy fewer pixels in the image, so they are not represented in as much detail.}\n\\end{figure} \n\nLooking ahead to testing the APR model in a physical environment, we see additional challenges. Realistic images may be more difficult for the generative model of GQN, which could hamper the representation learning. Exploration in a 6-DoF action space is more time-consuming and potentially more collision-prone than a top-down, 4-DoF action space. Some mechanism for force sensing or collision avoidance might be needed to prevent the gripper from colliding with objects or the bin. Active camera control introduces another complicating factor. It requires a physical mechanism to change viewpoints and a way of controlling it. We used a second 6-DoF manipulator in our simulator, but other simpler motion mechanisms are possible. Learning where to look with RL as we did in this work may not be necessary. 
It might be possible to orient the camera based on 3D location estimates of relevant targets.\n \nLooking at relevant targets in space and reaching for them are general skills that serve multiple tasks. We believe an APR-like model can therefore be applied to a wide range of manipulation behaviors, mimicking how humans operate in the world. Whereas we structured how the fixation and grasping policies interact (``look before you grasp''), an interesting extension is one in which both policies can operate dynamically during an episode. For example, humans use gaze shifts to mark key positions during extended manipulation sequences \\cite{Johansson2001}. In the same manner that our fixation policy implicitly defines a goal, humans use sequences of gaze shifts to indicate subgoals and monitor task completion \\cite{Johansson2001}. The emergence of sophisticated eye-hand coordination for object manipulation would be exciting to see.\n\n\n\n\\section{Conclusion}\n\n\\cite{Hassabis2017} argues that neuroscience (and biology in general) still contains important clues for tackling AI problems. We believe the case is even stronger for AI in robotics, where both the sensory and nervous systems of animals can provide a useful guide towards intelligent robotic agents. We mimicked two central features of the human visual system in our APR model: the space-variant sampling property of the retina, and the ability to actively perceive the world from different views. We showed that these two properties can complement and improve state-of-the-art reinforcement learning algorithms and generative models to learn representations of the world and accomplish challenging manipulation tasks efficiently. 
Our work is a step towards robotic agents that bridge the gap between perception and action using reinforcement learning.\n\n\\printbibliography\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $\\mathcal W_n$ be a complete undirected graph on $n$ vertices where each edge is assigned an \nindependent exponential weight with mean $n$; this is referred to as the \\emph{stochastic mean-\nfield} ($\\mbox{SMF}_n$) model. For a (self-avoiding) \\emph{path} $\\pi = (v_0, v_1,\\ldots, v_m)$, \ndefine its length $len(\\pi)$ and average weight $A(\\pi)$ by\n\\begin{align*}\n len(\\pi) = m \\,, \\mbox{ and }A(\\pi) = \\tfrac{1}{m}\\mbox{$\\sum_{i = 1}^m$} W_{(v_{i-1},v_i)}\\,,\n\\end{align*}\nwhere $W_{(u, v)}$ is the weight of the edge $(u, v)$. For $\\lambda >0$, let $L(n,\\lambda)$ be \nthe length of the longest path with average weight below $\\lambda$, i.e., \n\n$$L(n,\\lambda) = \\max\\{len(\\pi): A(\\pi) \\leq \\lambda, \\mbox{ }\\pi\\mbox{ is a path in \nthe $\\mbox{SMF}_n$ model}\\}\\,.$$\n\nIn a non-rigorous paper of Aldous \\cite{Aldous05}, it was predicted that $L(n, \\lambda) \\asymp n \n(\\lambda - \\mathrm{e}^{-1})^\\beta$ with $\\beta = 3$ as $\\lambda \\downarrow \\mathrm{e}^{-1}$. Our main \nresult is the following theorem, which corrects Aldous' prediction.\n\\begin{theorem}\n\\label{Prop}\nLet $\\lambda = 1\/\\mathrm{e} + \\eta$ where $\\eta > 0$. Then there exist absolute constants $C_1, \nC_2 , \\eta^*>0$ such that for all $\\eta \\leq \\eta^*$,\n\\begin{equation}\n\\label{main_prop}\n\\lim_{n\\to \\infty}\\mathbb{P}\\big(n \\mathrm{e}^{- C_1 \/ \\sqrt{\\eta}} \\leq L(n,\\lambda)\\leq n \n\\mathrm{e}^{- C_2 \/ \\sqrt{\\eta}}\\big) = 1\\,.\n\\end{equation}\n\\end{theorem}\n\nThe study of the object $L(n, \\lambda)$ was initiated by Aldous \\cite{Aldous98}, where a phase \ntransition was discovered at the threshold $\\mathrm{e}^{-1}$. 
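For intuition only, $L(n,\lambda)$ can be computed by brute force on very small instances. The following seeded sketch (with the arbitrary choices $n = 7$ and a handful of $\lambda$ values, far below the asymptotic regime of Theorem~\ref{Prop}) simply illustrates the definition:

```python
import itertools
import random

def longest_light_path(n, lam, weight):
    # Brute-force L(n, lambda): the length (number of edges) of the longest
    # self-avoiding path whose average edge weight is at most lam.
    best = 0
    for m in range(1, n):
        for pi in itertools.permutations(range(n), m + 1):
            total = sum(weight[frozenset(e)] for e in zip(pi, pi[1:]))
            if total <= lam * m:
                best = m
                break          # one witness of length m suffices
    return best

random.seed(0)
n = 7
# i.i.d. exponential edge weights with mean n, as in the SMF_n model
weight = {frozenset(e): random.expovariate(1.0 / n)
          for e in itertools.combinations(range(n), 2)}
lengths = [longest_light_path(n, lam, weight) for lam in (0.2, 1.0, 5.0, 1000.0)]
```

Since any path that is light for a given $\lambda$ is also light for any larger $\lambda$, the computed lengths are nondecreasing in $\lambda$, and a very large $\lambda$ admits a Hamiltonian path.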
It was shown that with high \nprobability $L(n, \lambda)$ is of order $\log n$ for $\lambda < \mathrm{e}^{-1}$ and $L(n, \n\lambda)$ is of order $n$ when $\lambda > \mathrm{e}^{-1}$. The critical behavior was established \nin \cite{Ding13}, where it was proved that with high probability $L(n, \lambda)$ is of order \n$(\log n)^3$ when $\lambda$ is around $\mathrm{e}^{-1}$ within a window of order $(\log n)^{-2}$. \nOur Theorem~\ref{Prop} describes the behavior in the near-supercritical regime, and in \nparticular states that $L(n, \lambda)\/n$ is a stretched exponential in $\eta$ as $\eta = \lambda \n- \mathrm{e}^{-1} \downarrow 0$. Another interesting result proved in \cite{Ding13} states that \n$L(n, \lambda) \geq n^{1\/4}$ in a somewhat similar regime, namely $\lambda \geq 1 \/ \mathrm{e} \n+ \beta(\log n)^{-2}$, where $\beta > 0$ is an absolute constant. Notice that substituting $\eta = \nC(\log n)^{-2}$ in \eqref{main_prop}, we indeed get a fractional power of $n$. In fact, our method \nshould work, subject to some technical modifications, all the way down to $\eta = C(\log \nn)^{-2}$ for a large absolute constant $C$. However, we do not attempt any rigorous proof of \nthis in the current paper. \n\nA highly related question is the length of the cycle of minimal mean weight, which was studied by \nMathieu and Wilson \cite{MW13}. An interesting phase transition was found in \cite{MW13} with \ncritical threshold $\mathrm{e}^{-1}$ on the mean weight. Further results on this problem have been \nproved in \cite{DSW}. 
It might be relevant to mention here that the method used in \cite{DSW} \ncould potentially be useful for nailing down the second phase transition detected in \n\cite{Ding13}, namely the transition from $\eta = \alpha (\log n)^{-2}$ to $\eta = \beta (\log \nn)^{-2}$ where $\alpha, \beta$ are positive constants.\n\nAnother related question is the classical travelling salesman problem (TSP), where one minimizes \nthe weight of the path subject to passing through every single vertex in the graph. For the TSP in \nthe mean-field setup, W\"{a}stlund \cite{Wastlund10} established the sharp asymptotics for more \ngeneral distributions on the edge weight, confirming the Krauth-M\'{e}zard-Parisi conjecture\n\cite{MP86b, MP86, KM89}. Indeed, it is an interesting challenge to give a sharp estimate on $L(n, \n\lambda)$ for $\mathrm{e}^{-1} <\lambda < \lambda^*$ (here $\lambda^*$ is the asymptotic value for \nTSP), interpolating the critical behavior and the extremal case of TSP. A question of the same \nflavor on Steiner trees is given in \cite{Bollobas04}.\n\nOne can also look at the maximum size of a tree with average weight below a certain threshold, \nwhere a phase transition was proved in \cite{Aldous98}. The extremal case of the question on the \ntree with minimal average weight is the well-known minimal spanning tree problem, where a \n$\zeta(3)$ limit was established by Frieze \cite{Frieze85}.\n\n\noindent {\bf Main ideas of our proofs.} A straightforward first moment computation as done in \n\cite{Aldous98} implies that $\lim_{n \to \infty}\mathbb{P}\big(L(n, \lambda) = O(\log n)\big) = \n1$ when $\lambda < 1\/\mathrm{e}$ (see also \cite[Theorem 1.3]{Ding13}). For $\lambda> \n1\/\mathrm{e}$, a sprinkling method was employed in \cite{Aldous98} to show that with high \nprobability $L(n, \lambda) = \Theta(n)$. 
The author first proved that with high probability there \nexist a large number of paths with average weight slightly above $1\/\mathrm{e}$ and then used a \ncertain greedy algorithm to connect these paths into a single long path with average weight \nslightly above $1\/\mathrm{e}$. However, the method in \cite{Aldous98} was not able to describe the \nbehavior at criticality. In \cite{Ding13} (see also \cite{MW13} for the cycle with minimal average \nweight), a second moment computation was carried out restricted to paths of average weight below \n$1\/\mathrm{e}$ and with the maximal deviation (defined in \eqref{eq-def-M} below) at most $O(\log \nn)$, thereby yielding that with high probability $L(n, 1\/\mathrm{e}) = \Theta((\log n)^3)$. A \ncrucial fact responsible for the success of the second moment computation is that the length of \nthe target path is $\Theta((\log n)^3) \ll \sqrt{n}$. As such, a straightforward adaptation of this \nmethod would not be able to succeed in the regime considered by this paper. \n\nTSP, where one studies paths (cycles) that visit every single vertex, is in a sense analogous to \nthe question of finding the minimal value $\lambda$ for which $L(n, \lambda) = n$ with high \nprobability. W\"{a}stlund \cite{Wastlund10} showed that the minimum average cost of TSP converges \nin probability to a positive constant by relaxing it to a certain linear optimization problem. But \nit seems difficult to extend his method to ``incomplete'' TSP, i.e., when the target object is the \nminimum cost cycle having at least $pn$ many edges for some $p \in (0,1)$. Since our problem is in \na sense dual to incomplete TSP in the regime we are interested in, the method of \cite{Wastlund10} \ndoes not seem to be suitable for our purpose either. In the current work, our method is inspired \nby the (first and) second moment method from \cite{Ding13, MW13} as well as the sprinkling method \nemployed in \cite{Aldous98}. 
\n\nIn order to prove the upper bound, our main intuition is that if $L(n, \\lambda)$ were greater than \n$\\mathrm{e}^{-C_2\/\\sqrt{\\eta}} n$ then we would have a larger number of short and light paths (a \nlight path refers to a path with small average weight --- at most a little above $1\/\\mathrm{e}$) \nthan we would typically expect. Formally, let $\\ell = \\frac{c_1}{\\eta}$ where $c_1$ is a small \npositive constant, and consider the number of paths (denoted by $N_{\\eta \/ c_1, c_2}$) with length \n$\\ell$ and total weight no more than $\\lambda \\ell - c_2 \\sqrt{\\ell}$ for a positive constant \n$c_2$. We call such a path a \\emph{downcrossing}. A straightforward computation gives $\\mathbb E \nN_{\\eta \/ c_1, c_2} = O(1) n \\ell \\mathrm{e}^{-c_3\/\\sqrt{\\eta}}$ for a positive constant $c_3$ \ndepending on $c_1$ and $c_2$. Now we consider the number of paths (denoted by $N_\\delta$) of \nlength $\\delta(\\lambda) n$ and average weight at most $\\lambda$. Such paths have two \npossibilities: (1) The path contains substantially more than $\\mathbb E N_{\\eta \/ c_1, c_2}$ many \ndowncrossings, which is unlikely by Markov's inequality. (2) The path does not have substantially \nmore than $\\mathbb E N_{\\eta \/ c_1, c_2}$ many downcrossings. This is also unlikely for the \nfollowing reasons: (a) A straightforward first moment computation gives that $\\mathbb E N_\\delta \n= O(n) \\mathrm{e}^{c_4 \\delta n \\eta}$ for a constant $c_4 > 0$; (b) The number of downcrossings \nalong a path of this kind, or a random variable that is ``very likely'' smaller, should dominate a \nBinomial random variable $\\mathrm{Bin}(\\delta n\/\\ell, c_5)$ where $c_5 > 0$ is an absolute \nconstant (since in the random walk bridge, every subpath of size $\\ell$ has a positive chance to \nhave such a downcrossing). 
If we choose $\delta$ suitably large as in Theorem~\ref{Prop}, we \nsuffer a probability cost for the constraint on the number of downcrossings (the probability that a \nbinomial is much smaller than its mean) and this probability cost is of magnitude $\mathrm{e}^{-c_6 \n\delta n \/\ell}$ for a constant $c_6 > 0$ depending on $c_1$. If we choose $c_1$ small enough, this \nprobability cost kills the growth of $\mathrm{e}^{c_4 \delta n \eta}$ in $\mathbb E N_\delta$. \nTherefore, paths of this kind do not exist either. The details are carried out in \nSection~\ref{sec-upper}.\n\n\nFor the lower bound, our proof consists of two steps. In light of the preceding discussion, we \ncannot hope to directly apply the second moment method from \cite{Ding13, MW13} to show the \nexistence of a light path that is of length linear in $n$. As such, in the first step of our proof \nwe prove that with high probability there exists a linear (in $n$) number of disjoint paths, each \nof which has average weight slightly below $\lambda$ and is of length $\mathrm{e}^{c_7\/\sqrt{\eta}}$ for \nan absolute constant $c_7>0$. This is achieved by two second moment computations, which are \nexpected to succeed as the length of the path under consideration is $\ll \sqrt{n}$ (indeed it \nremains bounded as $n\to \infty$). In the second step, we propose an algorithm which, with \nprobability going to 1, strings together a suitable collection of these short light paths to form \na light path of length $\mathrm{e}^{-c_8\/\sqrt{\eta}}n$ for an absolute constant $c_8>0$. Our \nalgorithm is similar to the greedy algorithm (also known as an exploration process) employed \nin \cite{Aldous98}. But in order to ensure that the additional weight introduced by these \nconnecting bridges only increases the average weight of the final path by at most a multiple of \n$\eta$, we have to use a more delicate algorithm. 
The details are carried out in Section~\ref{sec-lower}.\n\n\noindent {\bf Notation convention.} \n For a graph $G$, we denote by $V(G)$ and $E(G)$ the sets of vertices and edges of $G$, \n respectively. A path in a graph $G$ is a (finite) ordered tuple of vertices $(v_0, v_1, \cdots, \n v_m)$, all distinct. For a path $\pi = (v_0, v_1, \cdots, v_m)$, we also use $\pi$ to denote the \n graph whose vertices are $v_0, v_1, \cdots, v_m$ and edges are $(v_0, v_1), \cdots, (v_{m-1}, \n v_m)$. This would be clear from the context. The weight of an edge $e$ in $\mathcal{W}_n$ is denoted by $W_e$ and we define the total weight $W(\pi)$ of a path $\pi$ as $\sum_{e \in E(\pi)} W_{e}$. The collection of all paths in $\mathcal{W}_n$ of length $\ell \in [n]$ is denoted as $\Pi_\ell$. We let $\lambda = 1\/\mathrm{e} + \eta$ where $\eta$ is a fixed positive number. A path is called \emph{$\lambda$-light} if its average weight is at most $\lambda$, and a path is called \emph{$(\lambda, C)$-light} if its total weight is at most $\lambda \ell - C\sqrt{\ell}$, where $\ell$ is the length of \n the path. For nonnegative real or integer-valued variables $x_0, x_1, \cdots, x_n$, let $S$ be a \n statement involving $x_0, x_1, \cdots, x_n$. We say that $S$ holds ``for large $x_0$ (given \n $x_1, \cdots, x_n$)'' or ``when $x_0$ is large (given $x_1, \cdots, x_n$)'' if it holds for any \n fixed values of $x_1, \cdots, x_n$ in their respective domains and $x_0 \geq a_0$, where $a_0$ is \n some positive number depending on the fixed values of $x_1, \cdots, x_n$. In case $a_0$ is an \n absolute constant, the phrase ``(given $x_1, \cdots, x_n$)'' will be dropped. We use ``for small \n $x_0$'' or ``when $x_0$ is small'' with or without the qualifying phrase ``(given $x_1, x_2, \n \cdots, x_n$)'' in similar situations if the statement $S$ holds instead for $0 < x_0 \leq a_0$. 
\n Throughout this paper the order notations $O(.), \Theta(.), o(.)$ etc.~are assumed to be with \n respect to $n \to \infty$ while keeping all the other involved parameters (such as $\ell$, $\eta$ \n etc.) fixed. We will use $C_1, C_2, \ldots$ to denote constants, and each $C_i$ will denote the \n same number throughout the rest of the paper.\n\smallskip\n\n\noindent {\bf Acknowledgements.} We are grateful to David Aldous for very useful discussions, and \nwe thank an anonymous referee for a careful review of an earlier manuscript and suggesting a \nsimpler proof of Lemma~\ref{count_vertices}.\n\n\section{Proof of the upper bound}\n\label{sec-upper}\n\nLet $\eta'$ be a multiple of $\eta$ by a constant bigger than 1, whose precise value is to be \nselected. Set $\ell = \lfloor 1 \/ \eta' \rfloor$ and let $N_{\eta'}$ be the number of ``$(\lambda, \n1)$-light'' paths of length $\ell$. We assume $\eta < 1$ so that $\ell \geq 1$. As outlined in the \nintroduction, we shall first control $N_{\eta'}$. 
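Before turning to the moment computations, one can sanity-check the basic distributional fact used below, namely that the total weight of a fixed path of length $k$ is a sum of $k$ i.i.d.\ mean-$n$ exponentials and hence has mean $kn$ and variance $kn^2$. A seeded Monte-Carlo sketch (the parameter values are arbitrary and purely illustrative):

```python
import random

random.seed(1)
n, k, trials = 50, 8, 20000
# Total weight of a fixed path of length k: a sum of k i.i.d.
# exponential edge weights with mean n (i.e. a Gamma(k, 1/n) variable).
totals = [sum(random.expovariate(1.0 / n) for _ in range(k)) for _ in range(trials)]
mean = sum(totals) / trials
var = sum((t - mean) ** 2 for t in totals) / trials
```

The empirical mean and variance come out close to $kn$ and $kn^2$, as expected for $\mathrm{Gamma}(k, 1/n)$.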
\n\nIt is clear that the total weight of a path of length $k$ follows the Gamma \ndistribution $\mathrm{Gamma}(k, 1\/n)$, where the density $f_{\theta, k}(z)$ of $\mathrm{Gamma}(k, \theta)$ \nis given by\n\begin{equation}\n\label{gamma_density}\nf_{\theta, k}(z) = \theta^k z^{k - 1}\mathrm{e}^{-\theta z} \/ (k - 1)!\mbox{ for all }z \geq 0, \n\theta > 0\mbox{ and }k\in \mathbb{N}.\n\end{equation}\nBy \eqref{gamma_density} and Stirling's formula, we carry out a straightforward computation \nand get that\n\begin{eqnarray}\n\mathbb{E} N_{\eta'} & = & (1 + o(1)) \times n^{\ell+1} \times \mathbb{P}\Big ( \mathrm{Gamma}\n(\ell,1\/n) \leq \lambda \ell - \sqrt{\ell} \Big ) \nonumber\\\n   & = & (1 + o(1)) \times n^{\ell+1}\times \frac{\mathrm{e}^{-(\lambda \ell - \n   \sqrt{\ell})\/n}(\lambda \ell - \sqrt{\ell})^{\ell}}{\ell!n^{\ell}}\nonumber\\\n\label{first_bnd}& = & (1 + o(1))C_0(\eta)\alpha\mathrm{e}^{\mathrm{e}\eta\/\eta'} \sqrt{\eta'} \n\mathrm{e}^{-\mathrm{e}\/\sqrt{\eta'}}n,\n\end{eqnarray}\nwhere $C_0(\eta) \to 1$ as $\eta \to 0$, and $\alpha$ is a positive constant. Furthermore, the \nfactors $1 + o(1)$ are strictly less than 1.\n\\\nWe also need a bound on the second moment of $N_{\eta'}$ to control its concentration around \n$\mathbb{E}N_{\eta'}$. For $\gamma \in \Pi_\ell$, define $F_{\gamma}$ to be the event that \n$\gamma$ is $(\lambda, 1)$-light. Then clearly we have $N_{\eta'} = \sum_{\gamma \in \n\Pi_\ell}\mathbf{1}_{F_{\gamma}}$. In order to compute $\mathbb E (N_{\eta'})^2$, we need to estimate \n$\mathbb{P}(F_{\gamma} \cap F_{\gamma'})$ for $\gamma, \gamma' \in \Pi_\ell$. In the case \n$E(\gamma) \cap E(\gamma') = \emptyset$, $F_\gamma$ and $F_{\gamma'}$ are independent of each \nother and thus $\mathbb{P}(F_{\gamma'}|F_{\gamma}) = \mathbb{P}(F_{\gamma'})$. 
In the case $|\nE(\\gamma) \\cap E(\\gamma')| = j > 0$, we have\n\\begin{eqnarray}\n\\label{cond_prob_order}\n\\mathbb{P}(F_{\\gamma'}|F_{\\gamma}) &\\leq & \\mathbb{P}\\big(\\mathrm{Gamma}(\\ell-j,1\/n) \\leq \\lambda \n\\ell\\big)\n \\leq \\tfrac{1}{(\\ell - j)!}\\tfrac{(\\lambda \\ell)^{\\ell - \n j}}{n^{\\ell - j}}.\n\\end{eqnarray}\nFurther notice that if $|E(\\gamma) \\cap E(\\gamma')| = j$, then $|V(\\gamma) \\cap V(\\gamma')|$ is at \nleast $j + 1$ as $\\gamma \\cap \\gamma'$ is acyclic. So given any $\\gamma \\in \\Pi_\\ell$, the number \nof paths $\\gamma'$ such that $|E(\\gamma) \\cap E(\\gamma')| = j$ is at most $O(n^{\\ell - j})$. \nAltogether, we obtain that\n\\begin{align}\n\\mathbb{E} N_{\\eta'}^2 & = \\mbox{$\\sum_{\\gamma, \\gamma' \\in \\Pi_\\ell}$}\\mathbb{P}(F_{\\gamma}\\cap \nF_{\\gamma'}) = \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}(F_{\\gamma})\\mbox{$\\sum_{\\gamma' \\in \n\\Pi_\\ell}$}\\mathbb{P}(F_{\\gamma'}|F_{\\gamma}) \\nonumber \\\\\n & \\leq \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}\n (F_{\\gamma})\\Big(\\sum_{\\gamma':E(\\gamma' \\cap \n \\gamma)=\\emptyset}\\mathbb{P}(F_{\\gamma'}) + \\sum_{1 \\leq j \\leq \n \\ell} \\sum_{\\gamma':|E(\\gamma' \\cap \\gamma)|=j}\\frac{1}{(\\ell - \n j)!}\\frac{(\\lambda \\ell)^{\\ell - j}}{n^{\\ell - j}}\\Big) \\nonumber\\\\\n & = \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}\n (F_{\\gamma})\\Big(\\sum_{\\gamma':E(\\gamma' \\cap \n \\gamma)=\\emptyset}\\mathbb{P}(F_{\\gamma'}) + \n \\sum_{1 \\leq j \\leq \n \\ell}\\frac{O(n^{\\ell - j})}{(\\ell - \n j)!}\\frac{(\\lambda \\ell)^{\\ell - j}}{n^{\\ell - \n j}}\\Big) \\nonumber\\\\\n &\\leq \\mbox{$\\sum_{\\gamma \\in \\Pi_\\ell}$}\\mathbb{P}\n (F_{\\gamma})\\Big(\\mathbb{E}N_{\\eta'} + O(1) \\Big) = \n \\mathbb{E}N_{\\eta'}\\Big(\\mathbb{E}N_{\\eta'} + O(1) \\Big). 
\n \label{first_second_moment}\n\end{align}\nSince $\mathbb E N_{\eta'} = \Omega(1)$ as implied by \eqref{first_bnd}, \eqref{first_second_moment} \nyields that\n\begin{equation}\n\label{concentration_1}\n\mathbb{E}N_{\eta'}^2 = (\mathbb{E}N_{\eta'})^2 (1 + o(1)).\n\end{equation}\nAs a consequence of Markov's inequality (applied to $|N_{\eta'} - \mathbb E N_{\eta'} |^2$), we get that\n\begin{equation}\n\label{concentration_2}\n\mathbb{P}\big(N_{\eta'} \geq 2 \mathbb{E}N_{\eta'}\big) = o(1).\n\end{equation}\n\nNext, we set out to show that any long $\lambda$-light path should have a large number of subpaths \nwhich are $(\lambda, 1)$-light. Let $\pi$ be a path of length $\delta n$ for some $\delta > \n0$. Denote its successive edge weights by $X_1, X_2, \ldots, X_{\delta n}$ and let $S_k = \n\sum_{i=1}^k X_i$ for $1\leq k\leq \delta n$. Probabilities of events involving edge weights of \n$\pi$, unless specifically mentioned, will be assumed to be conditioned on ``$\{A(\pi) \leq \n\lambda\}$'' throughout the remainder of this section. Now divide $\pi$ into edge-disjoint \nsubpaths of length $\ell$ (with the last subpath of length possibly less than $\ell$ in case \n$\ell$ does not divide $\delta n$) and denote the $k$-th subpath by $b_k^{\pi}$ for $1 \leq k \n\leq \delta n \/ \ell$. Call any such subpath a downcrossing if it is $(\lambda, 1)$-light. Let \n$D_{k, \eta', n}^{\pi}$ be the event that $b_k^\pi$ is a downcrossing. The following \nwell-known result about exponential random variables (see, e.g., \cite[Theorem 6.6]{Dasgupta2011}) \nwill be very useful.\n\n\begin{lemma}\n\label{Beta}\nLet $W_1, W_2, \ldots, W_N$ be i.i.d.\ exponential random variables with mean $1\/\theta$, and let \n$S_k = \sum_{i=1}^k W_i$ for $1\leq k\leq N$. 
Then the random vector $(\frac{W_1}{S_N},\ldots, \n\frac{W_{N-1}}{S_N})$ follows the $\mathrm{Dirichlet}(\mathbf{1}_{N})$ distribution, $S_N$ follows \nthe $\mathrm{Gamma}(N, \theta)$ distribution, and they are independent of each other. Here \n$\mathbf{1}_{N}$ is the $N$-dimensional vector all of whose entries are $1$.\n\end{lemma}\nWe will also require the following simple lemma, which we prove for the sake of completeness. \n\begin{lemma}\n\label{Azuma}\nLet $Z_1, Z_2, \ldots, Z_N$ be i.i.d.\ exponential random variables with mean 1 and let $S_N = \n\sum_{i = 1}^N Z_i$. Then\n\begin{align}\n\mathbb{P}(S_N \geq N + \alpha) &\leq \mathrm{e}^{-\alpha^2\/4N} \mbox{ for all } 0<\alpha \leq (2 \n- \sqrt{2})N\,, \label{Azuma1}\\\n\mathbb{P}(S_N \leq N - \alpha) &\leq \mathrm{e}^{-\alpha^2\/2N}\,, \mbox{ for all } \alpha>0\,. \n\label{Azuma2}\n\end{align}\n\end{lemma}\n\begin{proof}\nBy Markov's inequality, we get that for any $\alpha > 0$ and $0 < \theta < 1$,\n$$\mathbb{P}(S_N \geq N + \alpha) = \mathbb P(e^{\theta S_N} \geq e^{\theta (N+ \alpha)}) \leq \n\tfrac{\mathrm{e}^{-\theta N - \theta \alpha}}{(1-\theta)^N}.$$\nWhen $ \theta \leq 1 - 1\/\sqrt{2}$, the right hand side is bounded above by $\mathrm{e}^{N \n\theta^2 - \alpha\theta}$. So setting $\theta = \alpha \/ 2N$ yields \eqref{Azuma1} as long as $0 < \n\alpha \/ 2N \leq 1 - 1\/\sqrt{2}$. One can prove \eqref{Azuma2} in the same manner.\n\end{proof}\n \nAs hinted in the introduction, let us begin with the intent to prove that the number of \ndowncrossings along the first half of $\pi$ (or any fraction of it) dominates a Binomial random \nvariable $\mathrm{Bin}(\delta n \/ 2\ell, p)$ for some positive, absolute constant $p$. So \nessentially we need to prove that a subpath $b_k^{\pi}$ can be a downcrossing with probability $p$ \nregardless of the first $(k-1)\ell$ edges of $\pi$ that precede it. 
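The tail bounds of Lemma~\ref{Azuma} can also be probed numerically. A seeded Monte-Carlo sketch (the choices of $N$, $\alpha$ and the sample size are arbitrary and purely illustrative):

```python
import math
import random

# Empirically check that the tails of S_N, a sum of N i.i.d. mean-1
# exponentials, sit below the exponential bounds of Lemma 2.2.
random.seed(2)
N, alpha, trials = 100, 30.0, 20000    # note alpha <= (2 - sqrt(2)) * N
samples = [sum(random.expovariate(1.0) for _ in range(N)) for _ in range(trials)]
upper_tail = sum(s >= N + alpha for s in samples) / trials
lower_tail = sum(s <= N - alpha for s in samples) / trials
upper_bound = math.exp(-alpha ** 2 / (4 * N))   # bound in the first inequality
lower_bound = math.exp(-alpha ** 2 / (2 * N))   # bound in the second inequality
```

For these parameters the empirical tails fall well below the corresponding bounds, consistent with the lemma.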
Now the conditional distribution \nof $X_{(k-1)\ell + 1}, X_{(k-1)\ell + 2}, \cdots, X_{\delta n}$ given $X_1, X_2, \cdots, \nX_{(k-1)\ell}$ and $A(\pi) \leq \lambda$ is essentially the distribution of $X_{(k-1)\ell + \n1}, X_{(k-1)\ell + 2},$ $\cdots, X_{\delta n}$ conditioned on $\sum_{i = (k - 1)\ell + 1}^{\delta \nn}X_i \leq \lambda \delta n - S_{(k - 1)\ell}$. On the other hand, we get from Lemma~\ref{Beta} \nthat the conditional mean and variance of $W(b_k^\pi)$ given $S_{\delta n} - S_{(k-1)\ell} = \mu \n(\delta n - (k-1)\ell)$ are $\mu \ell$ and $\mu^2 \ell(1 + o(1))$ respectively for all $\mu > 0$ \nand $k \leq \delta n \/ 2$. Hence it is plausible to expect that the probability of the event \n$\{W(b_k^{\pi}) \leq \Lambda_k(\ell - C\sqrt{\ell})\}$ conditional on any set of values for $X_1, \nX_2, \cdots, X_{(k - 1)\ell}$ is bounded away from 0 for large $\ell$ and $n$, where $\Lambda_k \n= \Lambda_k^\pi = (S_{\delta n} - S_{(k-1)\ell})\/(\delta n - (k-1)\ell)$ and $C$ is some positive \nnumber. Let us denote the event $\{W(b_k^{\pi}) \leq \Lambda_k(\ell - C\sqrt{\ell})\}$ by $A_{k, \n\eta', \delta, n}^{C, \pi}$. Thus it seems more immediate to prove the stochastic domination for \nthe number of occurrences of the $A_{k, \eta', n}^{C, \pi}$'s which, for the time being, can be treated as \na ``proxy'' for the number of downcrossings. The formal statement is given in the next lemma, where \nwe use $6$ as the value of $C$ since this allows us to avoid unnecessary named variables and also \nsuits our specific needs for the computations carried out at the end of this section.\n\n\begin{lemma}\n\label{drop_prob}\nLet $N_{\eta', n}^\pi$ be the number of occurrences of the events $A_{k, \eta', n}^{\pi} = \nA_{k, \eta', n}^{6, \pi}$ for $1 \leq k \leq \delta n \/ 2\ell$. 
Then for any $0 < \\eta' < \\eta_0$ \nwhere $\\eta_0$ is a positive, absolute constant and any $0 < \\delta_0 < 1$ there exist a positive \ninteger $n_d = n_d(\\delta_0, \\eta')$ and an absolute constant $c > 0$ such that for all $\\delta \n\\geq \\delta_0$ and $n \\geq n_d $ the conditional distribution of $N_{\\eta',n}^\\pi$ given \n$\\{A(\\pi) \\leq \\lambda\\}$ stochastically dominates the binomial distribution $\\mathrm{Bin}(\\delta \nn \/2\\ell, c)$. \n\\end{lemma}\n\n\\begin{proof}\nNotice that it suffices to prove that there exist positive absolute constants $\\ell_0, c$ such \nthat uniformly for $\\mu > 0$, $\\ell \\geq \\ell_0$ and large $L$ (given $\\ell$)\n$$\\mathbb{P}(S_{\\ell} \\leq \\tfrac{S_L}{L}(\\ell - 6\\sqrt{\\ell}) | S_L = \\mu L)\\geq c\\,.$$ To this \nend, we see that for $L > \\ell$\n\\begin{eqnarray}\n\\mathbb{P}(S_{\\ell} \\leq \\tfrac{S_L}{L}(\\ell - 6\\sqrt{\\ell}) | S_L = \\mu L) = \\mathbb{P}\n(\\tfrac{S_{\\ell}}{S_L} \\leq (\\ell - 6\\sqrt{\\ell})\/L | S_L = \\mu L) = \\mathbb{P}(\\tfrac{S_{\\ell}}\n{S_L} \\leq (\\ell - 6\\sqrt{\\ell})\/L) \\,,\\label{ratio_prob} \n\\end{eqnarray}\nwhere the last equality follows from Lemma~\\ref{Beta}. Since the distribution of $\\frac{S_{\\ell}}\n{S_L}$ does not depend on the mean of the underlying $X_j$'s, we can in fact assume that the $X_j$'s \nare i.i.d.\\ exponential variables with mean 1 for the purpose of computing \\eqref{ratio_prob}. By \n\\eqref{Azuma2}, we have \n$$\\mathbb{P}\\big(S_L\/L \\leq 1 - 1\/(2\\sqrt{\\ell})\\big) \\leq \\mathrm{e}^{-L \/ 8 \\ell}\\,.$$\n So for $\\ell - 6\\sqrt{\\ell} > 0$, we get\n\\begin{equation}\n\\label{prob_ineq}\n\\mathbb{P}(S_{\\ell} \\leq \\tfrac{S_L}{L}(\\ell - 6\\sqrt{\\ell})) \n\\geq \\mathbb{P}(S_{\\ell} \\leq \\ell - 6.5\\sqrt{\\ell}) - \\mathrm{e}^{-L \/ 8 \\ell}.\n\\end{equation}\nBy the central limit theorem there exist absolute numbers $\\ell_0, c'>0$ such that $\\mathbb{P}\n(S_{\\ell} \\leq \\ell - 6.5\\sqrt{\\ell}) \\geq c'$ for $\\ell \\geq \\ell_0$. 
Hence from \n\\eqref{prob_ineq} it follows that for any $\\ell \\geq \\ell_0$ there exists $L_0 = L_0(\\ell)$ such \nthat the right hand side of \\eqref{ratio_prob} is at least $c = 0.99c'$ for $L \\geq L_0$.\n\\end{proof}\nNow what remains to show is that the number of downcrossings $\\tilde N_{\\eta', n}^\\pi$ \nalong $\\pi$ is bigger than $N_{\\eta', n}^\\pi$ with high probability. Notice that the \noccurrence of $A_{k, \\eta', n}^{\\pi} \\setminus D_{k, \\eta', n}^{\\pi}$ implies that \n$\\Lambda_k$ must be ``significantly'' above $\\lambda$. But that can only be caused by a \nsubstantial drop in $S_k$ for some $1 \\leq k \\leq \\delta n \/2$, an event that occurs with small \nprobability.\n\\begin{lemma}\n\\label{slope_rise}\nDenote by $E_{\\eta', n}^\\pi$ the event that $\\Lambda_k$ is more than $\\lambda + \\sqrt{\\eta'}$ for \nsome $1 \\leq k \\leq \\frac{\\delta n}{2\\ell}$. Then for any $0 < \\eta' < 1\/4$ and $0 < \\delta_0 < 1$ \nthere exists a positive integer $n_s = n_s(\\delta_0, \\eta')$ such that,\n\\begin{equation}\n\\label{Ineq2}\n\\mathbb{P}(E_{\\eta', n}^\\pi| A(\\pi) \\leq \\lambda) \\leq 2n\\mathrm{e}^{-\\delta n\\eta' \/ 16} \\mbox{ \nfor all } \\delta \\geq \\delta_0 \\mbox{ and }n \\geq n_s\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFor $1 \\leq k \\leq \\delta n\/2\\ell$, let $\\ell_k = (k - 1)\\ell$, $n_s = \\lceil 2\\ell \/ \\delta_0 \n\\rceil$ and $E_{k, \\eta', n}^\\pi = \\{\\Lambda_k\\geq \\lambda + \\sqrt{\\eta'}\\}$. Assume $n \\geq n_s$ \nso that $\\delta n \/ 2\\ell \\geq 1$. 
On $E_{k, \\eta', n}^\\pi$, we have\n\\begin{eqnarray*}\n\\frac{S_{\\ell_k}}{S_{\\delta n}} \\leq \\frac{\\ell_k S_{\\delta n} \/ \\delta n - \\sqrt{\\eta'}(\\delta n \n- \\ell_k)}{S_{\\delta n}} \\leq \\frac{\\ell_k}{\\delta n} - \\frac{\\sqrt{\\eta'}(\\delta n - \\ell_k)}\n{\\delta n}\\,,\n\\end{eqnarray*}\nwhere the last inequality holds since we are conditioning on $S_{\\delta n} \\leq \\lambda \\delta n$ \nand $\\lambda \\leq 1$ when $\\eta' < 1\/4$ (recall that $\\eta < \\eta'$). Therefore, we get\n\\begin{equation}\\label{slope_change}\n\\mathbb{P}(E_{k,\\eta',n}^\\pi | A(\\pi) \\leq \\lambda) \n\\leq \\mathbb{P}(S_{\\ell_k} \\leq \\tfrac{S_{\\delta n}}{\\delta n}\\big(\\ell_k - \\sqrt{\\eta'}(\\delta n \n- \\ell_k)\\big))\\,.\n\\end{equation}\nNow we evaluate the right hand side of \\eqref{slope_change}. As with \\eqref{ratio_prob} in \nthe proof of Lemma~\\ref{drop_prob}, we can assume without loss of generality that $X_1, \nX_2, \\ldots, X_{\\delta n}$ are i.i.d.\\ exponential variables with mean 1.\nIt is routine to check that $$(1+\\sqrt{\\eta'}\/2)\\times \\big(\\ell_k - \\sqrt{\\eta'}(\\delta n - \n\\ell_k)\\big) \\leq \\ell_k - \\sqrt{\\eta'}\\delta n\/4 \\,, \\mbox{ for all } 1\\leq k \\leq \\delta \nn\/2\\ell\\,.$$\nThus, for all $1 \\leq k \\leq \\delta n\/2\\ell$ we get\n\\begin{eqnarray*}\n\\mathbb{P}\\Big(S_{\\ell_k} \\leq \\frac{S_{\\delta n}}{\\delta n}\\big(\\ell_k - \\sqrt{\\eta'}(\\delta n - \n\\ell_k)\\big)\\Big) \n& \\leq & \\mathbb{P}\\Big(S_{\\ell_k} \\leq \\ell_k - \\sqrt{\\eta'}\\delta n\/4 \\Big) + \n\\mathbb{P}\\Big(\\frac{S_{\\delta n}}{\\delta n} \\geq 1 + \\sqrt{\\eta'}\/2\\Big)\\\\\n& \\leq & \\mathrm{e}^{-\\delta n \\eta' \/ 16} + \\mathrm{e}^{-\\delta n \\eta'\/ 16},\n\\end{eqnarray*}\nwhere the second inequality follows from \\eqref{Azuma2} and \\eqref{Azuma1} respectively. 
Combined \nwith \\eqref{slope_change}, it gives that\n\\begin{equation*}\n\\mathbb{P}(E_{k,\\eta',n}^\\pi|A(\\pi) \\leq \\lambda) \\leq 2\\mathrm{e}^{- \\delta n \\eta'\/16}\\,, \\mbox{ \nfor all }1 \\leq k \\leq \\delta n\/2\\ell\\,.\n\\end{equation*}\nAn application of a union bound over $k$ completes the proof of the lemma.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{Prop}: upper bound]\nAssume that $\\eta' < 1\/4 \\wedge \\eta_0$ where $\\eta_0$ is the same as in the statement of \nLemma~\\ref{drop_prob}. Fix a $\\delta_0 = \\delta_0(\\eta')$ in $(0,1)$ and let $n_0 = n_0(\\delta_0, \n\\eta') = n_d(\\delta_0, \\eta') \\vee n_s(\\delta_0, \\eta')$, where $n_d, n_s$ are as stated in \nLemmas \\ref{drop_prob} and \\ref{slope_rise} respectively. In the remaining part of this section we \nwill assume that $n \\geq n_0$ and $\\delta \\geq \\delta_0$, so that Lemmas \\ref{drop_prob} and \n\\ref{slope_rise} become applicable. Now let $\\pi$ be a path with length $\\delta n$. From \nLemma~\\ref{slope_rise} we get that with high probability $\\Lambda_k \\leq \\lambda + \\sqrt{\\eta'}$ \nfor all $k$ between 1 and $\\delta n \/2\\ell$. But it takes a routine computation to show that \n$A_{k,\\eta',n}^\\pi \\cap \\{\\Lambda_k \\leq \\lambda + \\sqrt{\\eta'}\\}\\subseteq D_{k,\n\\eta',n}^\\pi$ when $\\eta'$ is small. Thus $\\tilde N_{\\eta', n}^\\pi \\geq N_{\\eta', n}^\\pi$ except on \n$E_{\\eta', n}^\\pi$. Consequently Lemma~\\ref{drop_prob} allows us to use the binomial distribution to \nbound quantities like $\\mathbb{P}(\\tilde N_{\\eta', n}^\\pi \\leq x)$ with a ``small error term'' \ncaused by the rare event $E_{\\eta', n}^\\pi$. 
Formally, \n\n\\begin{eqnarray*}\n\\mathbb{P}\\Big(\\tilde N_{\\eta' , n}^\\pi \\leq 2\\mathbb{E}N_{\\eta'} | A(\\pi) \\leq \n\\lambda\\Big) \n&\\leq & \\mathbb{P}\\Big(N_{\\eta' , n}^\\pi \\leq 2\\mathbb{E}N_{\\eta'} | A(\\pi) \\leq \n\\lambda\\Big) + \\mathbb{P}\\Big(E_{\\eta', n}^\\pi| A(\\pi) \\leq \\lambda\\Big)\\\\\n& \\leq & \\mathbb{P}\\Big(N_{\\eta' , n}^\\pi \\leq 2\\mathbb{E}N_{\\eta'}|A(\\pi) \\leq \\lambda\\Big) + \n2n\\mathrm{e}^{-\\delta n \\eta'\/16}\\,,\n\\end{eqnarray*}\nwhere the last inequality follows from Lemma~\\ref{slope_rise}.\nTherefore, by Lemma~\\ref{drop_prob}, we get that\n\\begin{equation}\n\\mathbb{P}\\Big(\\tilde N_{\\eta', n}^\\pi \\leq 2\\mathbb{E}N_{\\eta'} | A(\\pi) \\leq \n\\lambda\\Big) \\leq \\mathbb{P}\\Big(\\mathrm{Bin}(\\delta n \/ 2\\ell, c) \\leq 2\\mathbb{E}N_{\\eta'}\\Big) \n+ 2n\\mathrm{e}^{-\\delta n \\eta'\/16} \\,.\\label{second_bd}\n\\end{equation} \nNext let us define a new event as \n$$\\Xi_{\\eta,\\delta_0,n} = \\mbox{$\\bigcup_{k \\geq \\delta_0 n}$ $\\bigcup_{\\pi \\in \n\\Pi_k}$}\\big\\{\\tilde N_{\\eta', n}^\\pi \\geq 2\\mathbb{E}N_{\\eta'}, A(\\pi) \\leq \\lambda \\big\\}.$$\nSo $\\Xi_{\\eta,\\delta_0,n}$ is the event that there exists a $\\lambda$-light path $\\pi$ with \n$len(\\pi) \\geq \\delta_0 n$ which contains at least $2\\mathbb{E}N_{\\eta'}$ many \ndowncrossings. Thus the occurrence of $\\Xi_{\\eta,\\delta_0,n}$ implies that $N_{\\eta'} \\geq \n2\\mathbb{E}N_{\\eta'}$, which has small probability owing to \\eqref{concentration_2}. On the other \nhand if $\\Xi_{\\eta,\\delta_0,n}$ does not occur, $L(n, \\lambda) \\geq \\delta_0 n$ implies the \nexistence of a $\\lambda$-light path of length at least $\\delta_0 n$ that has no more than \n$2\\mathbb{E}N_{\\eta'}$ many downcrossings. 
Formally,\n\n\\begin{align}\n\\mathbb{P}&\\big(L(\\lambda, n) \\geq \\delta_0 n \\big) = \\mathbb{P}\\big(\\Xi_{\\eta,\\delta_0,n}\\big) + \n\\mathbb{P}\\big(\\{L(\\lambda, n) \\geq \\delta_0 n\\} \n\\setminus \\Xi_{\\eta,\\delta_0,n}\\big)\\nonumber \\\\\n& \\leq \\mathbb{P}\\big(N_{\\eta'} \\geq \n 2\\mathbb{E}N_{\\eta'}\\big) + \n \\mathbb{P}\\Big(\\mbox{$\\bigcup_{k \\geq \\delta_0 \n n}$ $\\bigcup_{\\pi \\in \\Pi_k}$}\\big\\{\\tilde \n N_{\\eta', n}^\\pi \\leq 2\\mathbb{E}N_{\\eta'}, \n A(\\pi) \\leq \\lambda \\big\\}\\Big) \\nonumber\\\\\n & \\leq o(1) + \n \\mbox{$\\sum_{k \\geq \\delta_0 n}$ $\\sum_{\\pi \\in \n \\Pi_k}$}\\mathbb{P}\\Big(\\tilde N_{\\eta', n}^\\pi \\leq \n 2\\mathbb{E}N_{\\eta'} | A(\\pi) \\leq\n \\lambda\\Big)\\mathbb{P}\\big(A(\\pi) \\leq \\lambda\\big)\\,.\n \\label{break-up}\n\\end{align}\nNow choose $\\delta_0 = \\delta_0(\\eta')$ such that \n\\begin{equation}\n\\label{Ineq3}\n\\delta_0 n \\eta' c \/4 = 2\\mathbb{E}N_{\\eta'}\\,.\n\\end{equation}\nSince $1 \/ \\ell \\geq \\eta'$, we then get from Binomial concentration that for $\\delta \\geq \n\\delta_0$,\n$$\\mathbb{P}\\Big(\\mathrm{Bin}(\\delta n \/ 2\\ell, c) \\leq 2\\mathbb{E}N_{\\eta'}\\Big) \\leq \n\\mathrm{e}^{-\\delta n\\eta'c^2\/ 16}\\,.$$\nPlugging this into \\eqref{second_bd} we have\n\\begin{equation*}\n\\mathbb{P}\\Big(\\tilde N_{\\eta', n}^\\pi \\leq 2\\mathbb{E}N_{\\eta'} | A(\\pi) \\leq \n\\lambda\\Big) \\leq 2n\\mathrm{e}^{- len(\\pi)\\eta'\/16} + \\mathrm{e}^{- len(\\pi)\\eta'c^2\/ 16}\\,,\n\\end{equation*}\nwhenever $len(\\pi) \\geq \\delta_0 n$. 
A straightforward computation using \n\\eqref{gamma_density} yields\n$$\\mbox{$\\sum_{\\pi \\in \\Pi_k}$}\\mathbb{P}\\big(A(\\pi) \\leq \n\\lambda\\big) \\leq \\frac{n}{\\sqrt{2\\pi k}}\\mathrm{e}^{\\mathrm{e} k \\eta}\\,.$$\n\nThe last two displays and \\eqref{break-up} together imply that\n\\begin{equation}\n\\label{break_up2}\n\\mathbb{P}\\big(L(\\lambda, n) \\geq \\delta_0 n \\big) \\leq o(1) + \\mbox{$\\sum_{k \\geq \\delta_0 n}$}\n(2n\\mathrm{e}^{- k\\eta'\/16} + \\mathrm{e}^{- k\\eta'c^2\/ 16})\\mathrm{e}^{\\mathrm{e} k \n\\eta}\\frac{n}{\\sqrt{2\\pi k}}\\,.\n\\end{equation}\nSetting $\\eta' = 32\\mathrm{e}\\eta \/ c^2$ we get from \\eqref{break_up2},\n$$\\mathbb{P}\\big(L(n, \\lambda)\\geq \\delta_0 n\\big) = o(1)\\,.$$\nIt remains to be checked whether $\\delta_0$ obtained from \\eqref{Ineq3} has the correct functional \nform as in \\eqref{main_prop}. To this end recall from \\eqref{first_bnd} that\n$$2\\mathbb{E}N_{\\eta'} \\leq 3\\alpha \\mathrm{e}^{\\mathrm{e}\\eta\/\\eta'} \\sqrt{\\eta'} \n\\mathrm{e}^{-\\mathrm{e}\/\\sqrt{\\eta'}}n\\,,$$\nwhere $\\eta$ is small enough so that $C_0(\\eta)$ in \\eqref{first_bnd} is less than $3\/2$. Hence \n$\\delta_0 \\leq \\mathrm{e}^{-C_2 \/ \\sqrt{\\eta}}$ for some absolute constant $C_2$ when $\\eta$ is \nsmall.\n\n\n\\end{proof}\n\n\n\\section{Proof of the lower bound}\n\\label{sec-lower}\n\n\\subsection{Existence of a large number of vertex-disjoint light paths}\nAs we mentioned in the introduction, the proof of lower bound is divided into two steps. In the \nfirst step we split the vertices into two parts and show that there exist a large number of \nshort (i.e. of $O(1)$ length) vertex-disjoint $\\lambda$-light paths containing vertices from only \none part. In the second step we use vertices in the other part as ``links'' to connect a \nsubcollection of the short paths obtained from step 1 into a long (i.e. of $\\Theta(n)$ length) and \nlight path. 
The current and next subsections are devoted to these two steps in that order.\n\\par\n\nIn light of the preceding discussion, let us first select a complete subgraph $\\mathcal W^*_n$ of \n$\\mathcal W_n$ containing $n_* = n_{*; \\eta, \\zeta_1} = (1 - \\zeta_1 \\eta)n$ vertices where $\\eta, \n\\zeta_1 \\in (0, 1)$. To be specific we can order the vertices of $\\mathcal W_n$ in some arbitrary \nway and define $\\mathcal W^*_n$ as the subgraph induced by the ``first'' $n_*$ vertices. It will be \nshown that there are substantially many short and light paths that can be formed with the vertices \nin $V(\\mathcal W_n^*)$. We will in fact require slightly more from a path than just being \n$\\lambda$-light. For $\\pi \\in \\Pi_\\ell$ and some $\\zeta_2 > 0$, define\n\n\\begin{equation}\nG_{\\pi} = G_{\\pi; \\eta, \\zeta_2} = \\Big \\{\\lambda \\ell - 1 \\leq W(\\pi) \\leq \\lambda \\ell, \nM(\\pi) \\leq (\\zeta_2\/\\sqrt{\\eta})\\cdot(W(\\pi)\/\\lambda \\ell)\\Big \\}\\,,\n\\label{good_event}\n\\end{equation}\n\nwhere $M(\\pi)$ is the maximum deviation of $\\pi$ away from the linear interpolation between the \nstarting and ending edges, formally given by\n\\begin{equation}\\label{eq-def-M}\nM(\\pi) = \\sup_{1\\leq k \\leq \\ell}\\big|\\mbox{$\\sum_{i = 1}^{k}$}W_{e_i} - \\tfrac{k}\n{\\ell}W(\\pi)\\big|\\,.\n\\end{equation}\nA similar class of events was considered in \\cite{Ding13, MW13} for the purpose of second moment \ncomputations. As the authors mentioned in these papers, the factor $W(\\pi)\/\\lambda \\ell$ provides \nsome technical ease in view of the following property, which is a consequence of Lemma~\\ref{Beta}:\n\\begin{equation}\n\\mathbb{P}(M(\\pi) \\leq (\\zeta_2\/\\sqrt{\\eta})\\cdot(W(\\pi)\/\\lambda \\ell)\\mbox{ }|\\mbox{ }W(\\pi) = w) \n\\equiv \\mbox{constant for all }w > 0.\\label{cond_Property}\n\\end{equation}\nCall a path $\\pi\\in \\Pi_\\ell$ \\emph{good} if $G_{\\pi}$ occurs. 
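To make the definition of a good path concrete, the following small sketch (the function names and the specific parameter values are ours, purely for illustration and not part of the proof) computes the maximum deviation $M(\\pi)$ of \\eqref{eq-def-M} from a list of edge weights and tests the event $G_\\pi$ of \\eqref{good_event}:

```python
def max_deviation(weights):
    """M(pi) = max over k of | sum_{i <= k} W_{e_i} - (k/l) * W(pi) |."""
    l = len(weights)
    total = sum(weights)
    partial, m = 0.0, 0.0
    for k, w in enumerate(weights, start=1):
        partial += w
        m = max(m, abs(partial - (k / l) * total))
    return m

def is_good(weights, lam, eta, zeta2):
    """The event G_pi: total weight in [lam*l - 1, lam*l] and small M(pi)."""
    l = len(weights)
    total = sum(weights)
    return (lam * l - 1 <= total <= lam * l
            and max_deviation(weights) <= (zeta2 / eta ** 0.5) * total / (lam * l))

# A path whose partial sums follow the linear interpolation exactly:
print(max_deviation([1.0] * 8))  # 0.0
```

A path of constant edge weights has $M(\\pi) = 0$, and it is good whenever its total weight lands in the prescribed window, which matches the intuition that good paths are light and nearly ``straight'' in their weight profile.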
Since we are only interested in \ngood paths whose vertices come from $V(\\mathcal W^*_n)$, we need some related notation. For $\\ell \n\\in \\mathbb N$, denote by $\\Pi_\\ell^* = \\Pi_{\\ell; \\eta, \\zeta_1}^*$ the set of all paths of \nlength $\\ell$ in $\\mathcal{W}_n^*$ and by $N^*_\\ell = N^*_{\\ell; \\eta, \\zeta_1, \\zeta_2}$ the \ntotal number of good paths in $\\Pi_\\ell^*$, i.e., $N^*_\\ell = \\sum_{\\pi \\in \n\\Pi_\\ell^*}\\mathbf{1}_{G_{\\pi}}$. In order to carry out a second \nmoment analysis of $N^*_\\ell$ we need to control the correlation between $\\mathbf{1}_{G_{\\pi}}$ \nand $\\mathbf{1}_{G_{\\pi'}}$ where $\\pi$, $\\pi' \\in \\Pi_\\ell^*$. It is plausible that such a \ncorrelation depends on the number of common edges between $\\pi$ and $\\pi'$, and in fact bounding \nthe correlation in terms of the number of common edges was sufficient for proving \n\\eqref{concentration_1} in Section \\ref{sec-upper}. But in this case we need an additional \nquantity besides $|E(\\pi) \\cap E(\\pi')|$. This is discussed in detail in \\cite{Ding13, \nMW13} and some of their results will be used. Let $\\pi$ be a path in $\\Pi_\\ell ^*$ and $S \n\\subseteq E(\\pi)$. A segment of $\\pi$ is called an $S$-component or a component of $S$ if it is a \nmaximal segment of $\\pi$ all of whose edges belong to $S$. Notice that $S$-components can be defined \nsolely in terms of $S$. For two paths $\\pi$ and $\\pi'$, define a functional $\\theta(\\pi, \\pi')$ to \nbe the number of $S$-components where $S = E(\\pi) \\cap E(\\pi')$. As $\\pi$ and $\\pi'$ are self-avoiding, $\\theta(\\pi, \\pi')$ is simply the number of maximal segments shared between $\\pi$ and \n$\\pi'$. 
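As a concrete illustration of $\\theta(\\pi,\\pi')$ (the code and the integer encoding below are ours; primed vertices of the figure are written as negative integers), the number of $S$-components can be computed by scanning the edges of $\\pi$ in order and counting maximal runs of shared edges:

```python
# Illustration only: theta(pi, pi') = number of maximal runs of edges of
# pi that also belong to pi'. Edges are unordered, hence the frozensets.
def edges(path):
    return {frozenset(e) for e in zip(path, path[1:])}

def theta(pi, pi2):
    shared = edges(pi) & edges(pi2)
    count, in_run = 0, False
    for e in zip(pi, pi[1:]):
        if frozenset(e) in shared:
            count += 0 if in_run else 1  # a new S-component starts here
            in_run = True
        else:
            in_run = False
    return count

# The two paths of Figure fig:fig0: v1,...,v9 and v1',v2,v3',v3,v4,v5',v6,v7,v8,v9'.
pi = [1, 2, 3, 4, 5, 6, 7, 8, 9]
pi2 = [-1, 2, -3, 3, 4, -5, 6, 7, 8, -9]
S = edges(pi) & edges(pi2)
V_S = set().union(*S) if S else set()
```

On this example $\\theta(\\pi,\\pi') = 2$, $|S| = 3$ and $|V(S)| = 5$, consistent with the identity $|V(S)| = |S| + \\theta(\\pi,\\pi')$ from \\cite[Lemma 2.9]{Ding13} quoted below.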
We refer the readers to Figure \\ref{fig:fig0} for an illustration.\n\n\\begin{figure}[!ht]\n \\centering\\includegraphics[width=0.8\\textwidth, keepaspectratio]{component.png}\n \\caption{{\\bf Components of the set of edges common to two paths.} In this figure the sequences \nof vertices $v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9$ and $v_1', v_2, v_3', v_3, v_4, v_5', \nv_6, v_7, v_8, v_9'$ define the paths $\\pi$ and $\\pi'$ respectively. The dark edges belong to $S \n= E(\\pi) \\cap E(\\pi')$. Here $\\theta(\\pi, \\pi') = 2$ with the segments $(v_3, v_4)$ and \n$(v_6, v_7, v_8)$ being the two $S$-components.}\n\\label{fig:fig0}\n\\end{figure}\n\nThe following result (\\cite[Lemma 2.9]{Ding13}) relates cardinality of $V(S)$, the union of all \nendpoints of edges in $S = E(\\pi) \\cap E(\\pi')$, to $\\theta(\\pi, \\pi')$ and $|S|$.\n\n$$ |V(S)| = |S| + \\theta(\\pi, \\pi')\\,.$$\n\nThe pair $\\big(\\theta(\\pi, \\pi'), |E(\\pi) \\cap E(\\pi')|\\big)$ turns out to be sufficient for \nbounding the correlation between $\\mathbf{1}_{G_{\\pi}}$ and $\\mathbf{1}_{G_{\\pi'}}$ from above. \nConsequently it makes sense to partition $\\Pi_\\ell^*$ based on the value of this pair. More \nformally for $\\pi \\in \\Pi_\\ell^*$ and integers $i \\leq j$, define the set $A_{i,j}$ as\n\\begin{equation} \\label{eq-def-A-i-j}\nA_{i,j} \\equiv A_{i,j}(\\pi) = \\{\\pi' \\in \\Pi_\\ell^*: \\theta(\\pi,\\pi') = i, |E(\\pi)\\cap E(\\pi')| = \nj\\}.\n\\end{equation}\nWe need a number of lemmas from \\cite{Ding13}. 
\n\\begin{lemma}\n\\label{up_bnd_lemma}(\\cite[Lemma 2.10]{Ding13})\nFor any $1 \\leq \\ell \\leq n_*$ and any $\\pi \\in \\Pi_\\ell^*$, we have that \nfor any positive integers $i \\leq j$\n\\begin{eqnarray*}\n\\label{up_bnd}|A_{i,j}(\\pi)| \\leq \\tbinom{\\ell + 1}{2i} \\tbinom{n_* - i - j}{\\ell + 1 - i - \nj}2^i(\\ell + 1 -j)!\\leq \\ell^{3i}n_*^{\\ell + 1 - i - j}.\n\\end{eqnarray*}\n\\end{lemma}\n\n\\begin{lemma}( \\cite[Lemma 2.3]{Ding13})\n\\label{cond_dev1}\nLet $Z_i$ be i.i.d.\\ exponential variables with mean $\\theta > 0$ for $1 \\leq i \\leq \\ell$. For \n$1\/4 \\leq \\rho \\leq 4$, consider the variable\n\\begin{equation}\n\\label{max_dev_new}\nM_\\ell = \\sup_{1 \\leq k \\leq \\ell}\\mid\\mbox{$\\sum_{i=1}^k$} Z_i - \\rho k \\mid.\n\\end{equation}\nThen there exist absolute constants $c^*$, $C^* > 0$ such that for all $r \\geq 1$ and $\\ell \\geq \nr^2$,\n\\begin{equation*}\n\\mathrm{e}^{-C^*\\ell\/r^2} \\leq \\mathbb{P}(M_\\ell \\leq r \\mid \\mbox{$\\sum_{i=1}^\\ell$} Z_i = \\rho \n\\ell ) \\leq \\mathrm{e}^{-c^*\\ell\/r^2}\\,.\n\\end{equation*}\n\\end{lemma}\n\n\\begin{lemma}\\cite[Lemma 3.2]{Ding13}\n\\label{cond_dev}\nLet $Z_i$ be i.i.d.\\ exponential variables with mean $\\theta > 0$ for $i \\in \\mathbb{N}$. Consider \n$1 \\leq r \\leq \\sqrt{\\ell}$ and the integer intervals $[a_1, b_1], [a_2, b_2], \\cdots, [a_m, b_m]$ \nsuch that $1 \\leq a_1 \\leq b_1 \\leq a_2 \\leq \\cdots \\leq a_m \\leq b_m \\leq \\ell$ and $q = \n\\sum_{i=1}^m(b_i - a_i + 1) \\leq \\ell - 1$. Let $1\/4 \\leq \\rho \\leq 1$ and $M_\\ell$ be defined as \nin the previous lemma. Also write $A = \\cup_{i = 1}^m [a_i,b_i] \\cap \\mathbb{N}$ and $p_\\ell \n= \\mathbb{P}(M_\\ell \\leq r | \\sum_{i=1}^\\ell Z_i = \\rho \\ell)$. 
Then for all $z_j$, $j \\in A$, such that\n$$\\mbox{$\\sum_{j = a_i}^{b_i}$} z_j - \\rho(b_i - a_i + 1) \\leq 2r \\mbox{ for each } 1 \\leq i \\leq m\\,,$$\nwe have \n\\begin{equation}\n\\label{cond_dev_ineq}\n\\mathbb{P}(M_\\ell \\leq r | \\mbox{$\\sum_{i = 1}^{\\ell}$}Z_i = \\rho \\ell, Z_j = z_j \\mbox{ for all \n}j \\in A) \\leq C_3r\\sqrt{q \\wedge (\\ell - q)} p_\\ell 10^{100mr}\\mathrm{e}^{C^*q\/r^2}\\,,\n\\end{equation}\nwhere $C^*$ is the constant from Lemma~\\ref{cond_dev1} and $C_3 > 0$ is an absolute constant.\n\\end{lemma}\n\\begin{remark}\n(1) Notice that the bounds in Lemmas~\\ref{cond_dev1} and \\ref{cond_dev} do not depend on the \nparticular mean of the $Z_i$'s due to Lemma~\\ref{Beta}. (2) Although the bounds on $p_\\ell$ in \nLemma~\\ref{cond_dev1} do not contain any $\\rho$ (as it was restricted to a bounded interval), \n$p_\\ell$ actually depends on $r$ only through the ratio $r\/\\rho$. This follows from an application \nof Lemma~\\ref{Beta} with a little manipulation. (3) Lemma~\\ref{cond_dev} is the same as Lemma 3.2 in \n\\cite{Ding13} except that in the latter $q$ is restricted to be at most $\\ell - 10r$. But we can \neasily extend this to all $q \\leq \\ell - 1$. To see this assume $\\ell - 1 \\geq q \\geq \\ell - 10r$. \nThen the right hand side in \\eqref{cond_dev_ineq} becomes at least $C_3 p_\\ell \n\\mathrm{e}^{C^*\\ell\/r^2}\\mathrm{e}^{-10C^*\/r}$. Now from Lemma~\\ref{cond_dev1} we get $p_\\ell \n\\mathrm{e}^{C^*\\ell\/r^2} \\geq 1$. So the right hand side in \\eqref{cond_dev_ineq} is bigger than \n$C_3\\mathrm{e}^{-10C^*}$ whenever $\\ell - 1 \\geq q \\geq \\ell - 10r$. Increasing $C_3$ if necessary \nwe can make this number bigger than 1 and thus Lemma~\\ref{cond_dev} follows.\n\\end{remark}\nBy a second moment computation, we can hope to show that $N^*_\\ell \\sim \\mathbb E N^*_\\ell$ with high \nprobability. Then the main challenge is to prove that a large fraction of the good paths are \nmutually vertex-disjoint with high probability. 
To this end, we consider a graph $\\mathcal{G}_n$ \nwhere each vertex corresponds to a good path in $\\mathcal W_n^*$ and an edge is present whenever \nthe corresponding paths intersect in at least one vertex. Thus the presence of a large number \nof vertex-disjoint good paths in $\\mathcal W^*_n$ is equivalent to the existence of a large \nindependent subset (i.e., a set of vertices with no edge among them) in the graph $\\mathcal G_n$. The \nfollowing simple lemma is sometimes referred to as Tur\\'{a}n's theorem, and can be proved simply \nby employing a greedy algorithm (see, e.g., \\cite{Erdos70}).\n\\begin{lemma}\n\\label{relation}\nLet $G = (V,E)$ be a finite, simple graph with $V \\neq \\emptyset$. Then $G$ contains an \nindependent subset of size at least $|V|^2 \/ (2|E| + |V|)$. Notice $2|E|$ is the total degree of \nthe vertices in $G$.\n\\end{lemma}\nIn light of Lemma~\\ref{relation}, we wish to show that with high probability the total degree of \nthe vertices in $\\mathcal{G}_n$ is not large relative to $|V(\\mathcal{G}_n)|$.\nFor this purpose, it is desirable to show that the typical number of good paths that intersect \na fixed good path $\\pi \\in \\Pi_\\ell^*$ is not large. Thus, we need to estimate $\\sum_{\\pi' \\in \n\\Pi_{\\ell,\\pi}^*}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$ where $\\Pi_{\\ell,\\pi}^*$ is the collection of all \npaths $\\pi'$ in $\\mathcal W_n^*$ sharing at least one vertex with $\\pi$. Drawing upon the \ndiscussion preceding \\eqref{eq-def-A-i-j}, we will first estimate $\\mathbb{P}(G_{\\pi'}|G_{\\pi})$ \nfor a specific value of the pair $(\\theta(\\pi, \\pi'), |E(\\pi)\\cap E(\\pi')|)$. Our next lemma is \nvery similar to Lemma~3.3 in \\cite{Ding13}.\n\\begin{lemma}\n\\label{conditional_prob}\nLet $\\pi \\in \\Pi_\\ell^*$ and $\\pi' \\in A_{i,j}$ with $1 \\leq i \\leq j \\leq \\ell$. 
Then there exist \nabsolute constants $\\eta_1, C_4>0$ such that for $0 < \\eta < \\eta_1$, $\\zeta_2 > 1 \\vee \n\\sqrt{2C^*\/\\mathrm{e}}$ and $\\ell \\geq \\zeta_2^2\/\\eta$ we have\n\\begin{equation}\n\\label{cond_est}\n\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq C_4(1 + o(1))\\mathbb{P}(G_{\\pi})n^j\\sqrt{\\ell\/\\eta} \n\\mathrm{e}^{-j\\eta}\\mathrm{e}^{1000\\zeta_2 i\/\\sqrt{\\eta}}\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nDenote by $S$ and $S'$ the sets $E(\\pi)\\cap E(\\pi')$ and $E(\\pi')\\setminus E(\\pi)$ respectively. \nBy standard calculus, there exists $0<\\eta_1 \\leq 1$ such that $1 + \\mathrm{e}\\eta \\geq \n\\mathrm{e}^{(1 + \\mathrm{e}\/2)\\eta}$ for all $0 < \\eta < \\eta_1$. Note that $\\mathbb{P}(G_{\\pi'} \n\\mid G_{\\pi}) = \\mathfrak p_1 \\cdot \\mathfrak p_2$, where\n\n\n\\begin{eqnarray*}\n\\mathfrak p_1 &=& \\mathbb{P}(\\lambda \\ell - 1 \\leq W(\\pi') \\leq \\lambda \\ell\\mbox{ }|\\mbox{ \n}G_{\\pi})\\,, \\\\\n \\mathfrak p_2 &=& \\mathbb{P}\\big(M(\\pi') \\leq (\\zeta_2 \/ \\sqrt{\\eta}).\n (W(\\pi')\/\\lambda \\ell)\\mbox{ }|\\mbox{ }G_{\\pi}, \\lambda \\ell - 1 \\leq W(\\pi') \n \\leq \\lambda \\ell\\big)\\,.\n\\label{decomposition}\n\\end{eqnarray*}\n\nSince the maximum deviation of a good path from its linear interpolation between starting and \nending edges is at most $\\zeta_2\/\\sqrt{\\eta}$, the weight of an $S$-component, say $s$, is at \nleast $W(\\pi)|s| \/ \\ell - 2\\zeta_2\/\\sqrt{\\eta}$ when $\\pi$ is good. Here $|s|$ denotes the number \nof edges in $s$. Adding over all the $\\theta(\\pi, \\pi')$ components of $S$ we get that\n$\\mbox{$\\sum_{e\\in S}$}W_e \\geq W(\\pi)|S| \/ \\ell - 2\\theta(\\pi, \\pi')\\zeta_2\/\\sqrt{\\eta} \\mbox{ \non }G_\\pi$. 
As $\\pi' \\in A_{i, j}$ and the weight of a good path is at least $\\lambda \\ell - 1$, the \nprevious inequality implies that on $G_\\pi$,\n$$\\mbox{$\\sum_{e\\in S}$}W_e \\geq \\lambda j - 1 - 2i\\zeta_2\/\\sqrt{\\eta}\\,.$$\nConsequently when $j \\leq \\ell - 1$,\n\\begin{eqnarray}\n\\mathfrak p_1&\\leq & \\mathbb{P}(\\mbox{$\\sum_{e\\in S'}$}W_e \\leq \\lambda|S'| + 1 + 2i \n\\zeta_2\/\\sqrt{\\eta} \\mbox{ }|\\mbox{ }G_{\\pi})\\nonumber\\\\\n&=& \\mathbb{P}\\big(\\mathrm{Gamma}(\\ell-j, 1\/n) \\leq \\lambda(\\ell - j) + 1 + 2i \n\\zeta_2\/\\sqrt{\\eta}\\big) \\nonumber\\\\\n&\\leq & C_4' n^{-(\\ell-j)}(\\ell-j)^{-1\/2}(1 + \\mathrm{e}\\eta)^{\\ell-j} \\mathrm{e}^{2i \n\\mathrm{e}\\zeta_2 \/ \\sqrt{\\eta}(1 + \\mathrm{e}\\eta)}\\,, \\label{eq-p-1}\n\\end{eqnarray}\nwhere $C_4'>0$ is an absolute constant and the last inequality used \\eqref{gamma_density}. For the \nsecond term $\\mathfrak p_2$, we can apply \\eqref{cond_Property} \nand Lemma~\\ref{cond_dev} to obtain\n\\begin{eqnarray}\n\\mathfrak p_2 \\leq C_3\\mathbb{P}\\big(M(\\pi) \\leq \\zeta_2\/\\sqrt{\\eta}\\mbox{ }|\\mbox{ \n}W(\\pi)=\\lambda \\ell\\big)\\sqrt{(j\\wedge (\\ell-j))\/\\eta}\\, 10^{100i\\zeta_2 \/ \n\\sqrt{\\eta}}\\mathrm{e}^{C^*j\\eta\/\\zeta_2^2} \\,, \\label{cond_prob1}\n\\end{eqnarray}\nwhen $j \\leq \\ell - 1$ and $\\ell \\geq \\zeta_2^2 \/ \\eta$ (see the conditions in \nLemma~\\ref{cond_dev}). 
Using \\eqref{cond_Property} again, we get that\n\\begin{eqnarray*}\n\\mathbb{P}\\big(M(\\pi) \\leq \\zeta_2 \/ \\sqrt{\\eta}\\mbox{ }|\\mbox{ }W(\\pi)=\\lambda \\ell\\big) \n& = & \\mathbb{P}\\big(M(\\pi) \\leq (\\zeta_2 \/ \\sqrt{\\eta})\\cdot(W(\\pi)\/\\lambda \\ell)\\mbox{ }|\\lambda \n\\ell - 1 \\leq W(\\pi) \\leq \\lambda \\ell\\big) \\\\ \n& = & \\mathbb{P}(G_{\\pi})\/\\mathbb{P}(\\lambda \\ell - 1 \\leq W(\\pi) \\leq \\lambda \\ell)\\\\\n& = & \\mathbb{P}(G_{\\pi}) \/ \\mathbb{P}\\big(\\lambda \\ell - 1 \\leq \\mathrm{Gamma}(\\ell, 1\/n)\\leq \n\\lambda \\ell\\big)\\\\\n&\\leq & C_4''(1 + o(1))\\mathbb{P}(G_{\\pi})\\ell!\\big(n \/ \\lambda \\ell\\big)^\\ell \\,,\n\\end{eqnarray*}\nwhere $C_4'' > 0$ is an absolute constant and the last inequality follows from \n\\eqref{gamma_density}.\nPlugging the preceding inequality into \\eqref{cond_prob1} and using the fact $\\ell! \\leq \n\\mathrm{e}\\sqrt{\\ell}(\\ell\/\\mathrm{e})^{\\ell}$ (Stirling's approximation), we obtain\n$$\\mathfrak p_2 \\leq \\mathrm{e}C_3C_4''(1 + o(1))\\mathbb{P}(G_\\pi)n^\\ell \\sqrt{\\ell(\\ell - \nj)\/\\eta}(1 + \\mathrm{e}\\eta)^{-\\ell}10^{100i\\zeta_2 \/ \n\\sqrt{\\eta}}\\mathrm{e}^{C^*j\\eta\/\\zeta_2^2}\\,.$$\nCombined with \\eqref{eq-p-1}, it yields that\n\\begin{eqnarray*}\n\\mathbb{P}(G_{\\pi'}|G_{\\pi}) &\\leq & \\mathrm{e}C_3C_4' C_4''\\zeta_2(1 + o(1))\\mathbb{P}\n(G_{\\pi})\\sqrt{\\ell \/ \\eta} n^j (1 + \\mathrm{e} \\eta)^{-j} 10^{100i\\zeta_2 \/ \n\\sqrt{\\eta}}\\mathrm{e}^{C^*j\\eta\/\\zeta_2^2}\\mathrm{e}^{2i\\mathrm{e}\\zeta_2 \/ \\sqrt{\\eta}(1 + \\mathrm{e}\\eta)} \\,.\n\\end{eqnarray*}\nSince $\\zeta_2 \\geq \\sqrt{2C^*\/\\mathrm{e}}$ and $\\eta < \\eta_1$ we have\n\\begin{eqnarray*}\n\\mathbb{P}(G_{\\pi'}|G_{\\pi}) &\\leq & \\mathrm{e}C_3C_4' C_4''(1 + o(1))\\mathbb{P}\n(G_{\\pi})n^j\\sqrt{\\ell\/\\eta} \\mathrm{e}^{-j\\eta}\\mathrm{e}^{1000\\zeta_2 i\/\\sqrt{\\eta}}\n\\end{eqnarray*}\nprovided $j \\leq \\ell - 1$. The case $j = \\ell$ can also be easily accommodated. 
To this end let \nus first compute $\\mathbb{P}(G_{\\pi})$. It follows from \\eqref{gamma_density} and \nLemma~\\ref{cond_dev1} that\n\\begin{equation*}\n\\mathbb{P}(G_{\\pi}) \\geq (1 + o(1))(1 - \\mathrm{e}^{-1\/\\lambda})(\\lambda \\ell \/ n)^{\\ell} (1 \/ \n\\ell!) \\mathrm{e}^{-C^* \\ell \\eta \/ \\zeta_2^2}\\,.\n\\end{equation*}\n Applying Stirling's formula again, we get that for $\\zeta_2 \\geq \\sqrt{2C^*\/e}$ and $\\eta < \n \\eta_1$, \\begin{equation*}\n\\mathbb{P}(G_{\\pi}) \\geq C_4'''(1 + o(1))n^{-\\ell}\\ell^{-1\/2}\\mathrm{e}^{\\ell\\eta}\\,,\n\\end{equation*}\nfor an absolute constant $C_4'''>0$. Hence, with the choice of $C_4 = 1\/C_4''' \\vee \n\\mathrm{e}C_3C_4' C_4''$ the right hand side of \\eqref{cond_est} is at least 1, and thus \n\\eqref{cond_est} holds in this case.\n\\end{proof}\nArmed with Lemma~\\ref{conditional_prob}, we can now obtain an upper bound on $\\sum_{\\pi' \\in \n\\Pi^*_{\\ell,\\pi}}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$. Similarly we can bound $\\sum_{\\pi' \\in \n\\Pi^*_\\ell}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$ which is useful for the computation of $\\mathbb E((N_\\ell^*)^2)$ \nin view of the following simple observation:\n\n\\begin{equation}\n\\label{sec_mom_cond}\n\\mathbb E ((N_\\ell^*)^2) = \\mbox{$\\sum_{\\pi \\in \\Pi_\\ell^*}$}\\mathbb{P}(G_\\pi)\\mbox{$\\sum_{\\pi' \\in \n\\Pi^*_{\\ell}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) = \\mathbb E(N_\\ell^*)\\mbox{$\\sum_{\\pi' \\in \n\\Pi^*_{\\ell}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi})\\,,\n\\end{equation}\nwhere the last equality follows from the fact that $\\sum_{\\pi' \\in \n\\Pi^*_{\\ell}}\\mathbb{P}(G_{\\pi'}|G_{\\pi})$ is independent of $\\pi$.\n\n\\begin{lemma}\n\\label{lemma_crucial1}\nLet $0 < \\zeta_1 < 1\/4$ and let $\\zeta_2, \\ell, \\eta$ satisfy the same conditions as stated in \nLemma~\\ref{conditional_prob}. 
Then there exists an absolute constant $C_5 > 0$ such that,\n\\begin{align}\n&\\mbox{$\\sum_{\\pi' \\in \\Pi^*_{\\ell,\\pi}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq C_5(1 + \no(1))\\mathrm{e}^{1000\\zeta_2\/ \\sqrt{\\eta}}\\sqrt{\\ell^7\/\\eta^3}\\tfrac{\\mathbb{E} N^*_\\ell}{n}\\,, \n\\label{sum1}\\\\\n&\\mbox{$\\sum_{\\pi' \\in \\Pi^*_{\\ell}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq (1 + o(1)) \n\\mathbb{E}N^*_\\ell \\,. \\label{sum2}\n\\end{align}\n\\end{lemma}\n\\begin{proof}\n\nBy Lemmas \\ref{conditional_prob} and \\ref{up_bnd_lemma}, we get that for $1 \\leq i \\leq j \\leq \n\\ell$,\n\\begin{eqnarray}\n\\mbox{$\\sum_{\\pi' \\in A_{i,j}} $}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) &\\leq& (1 + o(1))n_*^{\\ell + \n1}\\mathbb{P}(G_{\\pi})\\tfrac{\\xi(\\eta,\\ell,i,j, \\zeta_1)}{n^i} \\leq \n(1 + o(1))\\mathbb{E}N^*_\\ell\\tfrac{\\xi(\\eta,\\ell,i,j, \\zeta_1)}{n^i}\\label{secstage1}\\,,\n\\end{eqnarray}\nwhere $\\xi(\\eta,\\ell,i,j, \\zeta_1)$ is a number depending only on $(\\eta, \n\\ell, i, j, \\zeta_1)$ (so in particular, $\\xi(\\eta,\\ell,i,j, \\zeta_1)$ does not depend \non $n$). It is also clear that \n$$\\mbox{$\\sum_{\\pi' \\in A_{0, 0}} $}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq \\mbox{$\\sum_{\\pi' \\in A_{0, \n0}} $}\\mathbb{P}(G_{\\pi'}) \\leq \\mathbb E N^*_\\ell\\,.$$\nCombined with \\eqref{secstage1}, it yields \\eqref{sum2}. It remains to prove \\eqref{sum1}. To this \nend, we note that the major contribution to the term $\\sum_{\\pi' \\in \\Pi^*_{\\ell,\\pi}} \\mathbb{P}\n(G_{\\pi'}|G_{\\pi})$ comes from those paths $\\pi'$ with $\\theta(\\pi,\\pi') = 1$ or $|V(\\pi') \\cap \nV(\\pi)| = 1$. Thus, we revisit \\eqref{secstage1} for the case of $i = 1$. 
By Lemmas \n\\ref{conditional_prob} and \\ref{up_bnd_lemma} again, we get that\n\\begin{eqnarray}\n\\mbox{$\\sum_{1 \\leq j \\leq \\ell} \\sum_{A_{1,j}}$}\\mathbb{P}(G_{\\pi'} | G_{\\pi}) \n&\\leq & 2C_4 (1 + o(1))\\mathrm{e}^{1000\\zeta_2 \/ \n\\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta}n_*^{\\ell+1}\\mathbb{P}(G_{\\pi}) n^{-1} \\mbox{$\\sum_{1\\leq j \\leq \n\\ell}$} \\mathrm{e}^{-j\\eta}(1 - \\zeta_1 \\eta)^{-j} \\nonumber\\\\\n& \\leq & 2C_4 (1 + o(1))\\mathrm{e}^{1000\\zeta_2 \/ \\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta} \n\\mathbb{E}N^*_\\ell (n(1-\\mathrm{e}^{-\\frac{\\eta}{2}}))^{-1} \\nonumber \\\\\n&\\leq & 8C_4 (1 + o(1))\\mathrm{e}^{1000\\zeta_2 \/ \n\\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta^3}\\mathbb{E}N^*_\\ell n^{-1}\\,, \\label{eq-A-1-j}\n\\end{eqnarray}\nwhere the last two inequalities follow from the facts that $\\zeta_1 < 1\/4$ and $\\mathrm{e}^{-\\eta \n\/ 2} \\leq 1 - \\eta\/4$ whenever $0 < \\eta < 1$. \nWe still need to consider paths that share vertices with $\\pi$ but no edges. For $1\\leq i\\leq \n\\ell$, define $B_i$ to be the collection of paths which share $i$ vertices with $\\pi$ but no \nedges, i.e., \n\\begin{equation*}\nB_i = \\{\\pi' \\in \\Pi_\\ell^* : |V(\\pi') \\cap V(\\pi)| = i, E(\\pi') \\cap E(\\pi) = \\emptyset\\}\\,.\n\\end{equation*}\nWe need an upper bound on the size of $B_i$. To this end notice that there are $\\tbinom{\\ell + 1}\n{i}$ many choices for $V(\\pi') \\cap V(\\pi)$ as the cardinality of the latter is $i$, and these vertices \ncan be placed along $\\pi'$ in at most $\\tbinom{\\ell + 1}{i}i!$ many different ways. Also the \nnumber of ways we can choose the remaining $\\ell + 1 - i$ vertices is at most $n_*^{\\ell + 1 - \ni}$. 
Multiplying these numbers, we get\n$$|B_i| \\leq \\tbinom{\\ell + 1}{i}^2i!n_*^{\\ell + 1 - i}\\,.$$\nSince the edge sets are disjoint, $\\mathbb{P}(G_{\\pi'}|G_{\\pi}) = \\mathbb{P}(G_{\\pi'})$ for all $\\pi' \\in B_i$ and $1\\leq i \\leq \\ell$. So we have\n\\begin{equation}\n\\mbox{$\\sum_{\\pi' \\in B_{i}} $}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq (1+o(1)) \\tbinom{\\ell + 1}{i}^2i!(1 - \\zeta_1\\eta)^{-i}\\tfrac{\\mathbb{E}N^*_\\ell}{n^i} \\leq (8+o(1)) \\ell^2\\tfrac{\\mathbb{E}N^*_\\ell}{n} \\,.\\label{secstage2}\n\\end{equation}\nCombined with \\eqref{eq-A-1-j}, this completes the proof of \\eqref{sum1}.\n\\end{proof}\nWe will now proceed with our plan of finding a large independent subset of $\\mathcal{G}_n$. For any two paths $\\pi$ and $\\pi'$ in $\\Pi^*_{\\ell}$, define an event \n$$H_{\\pi,\\pi'} = H_{\\pi,\\pi'; \\eta, \\zeta_2} = \\begin{cases}\nG_{\\pi} \\cap G_{\\pi'} &\\mbox{ if } V(\\pi) \\cap V(\\pi') \\neq \\emptyset\\,,\\\\\n\\emptyset, &\\mbox{ otherwise}\\,.\n\\end{cases}$$\nWriting $N_{\\ell}'= N_{\\ell; \\eta, \\zeta_1, \\zeta_2}' = \\sum_{\\pi, \\pi' \\in \\Pi_{\\ell}^*}\\mathbf{1}_{H_{\\pi,\\pi'}}$, we see that $N'_\\ell = 2 |E(\\mathcal{G}_n)| + |V(\\mathcal{G}_n)|$. Also notice that $N^*_\\ell = |V(\\mathcal{G}_n)|$.
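The quantity $N'_\\ell = 2|E(\\mathcal{G}_n)| + |V(\\mathcal{G}_n)|$ is the natural one to control here because of the Tur\\'an-type bound invoked below through Lemma~\\ref{relation}: a graph on $N$ vertices with $E$ edges contains an independent set of size at least $N^2/(2E+N)$, which in our notation reads $(N^*_\\ell)^2/N'_\\ell$. The following Python sketch is a purely illustrative brute-force check of that bound on small random graphs; the sizes and the edge density $0.3$ are arbitrary choices, not taken from the paper.

```python
# Brute-force check (illustrative only) of the Turan-type bound behind
# Lemma "relation": every graph on N vertices with E edges contains an
# independent set of size at least N^2 / (2E + N).
import random
from itertools import combinations

random.seed(1)
N = 9
for _ in range(20):
    # a small Erdos-Renyi-type graph with arbitrary edge probability 0.3
    edges = {frozenset(e) for e in combinations(range(N), 2)
             if random.random() < 0.3}
    # independence number alpha by exhaustive search over vertex subsets
    alpha = max(k for k in range(1, N + 1)
                for S in combinations(range(N), k)
                if all(frozenset(p) not in edges for p in combinations(S, 2)))
    assert alpha >= N * N / (2 * len(edges) + N)
```

This is exactly how $(N^*_\\ell)^2/N'_\\ell$ arises in the proof of Lemma~\\ref{lemma_crucial2} below, with $N = N^*_\\ell$ and $2E + N = N'_\\ell$.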
As an immediate consequence \nof Lemma~\\ref{lemma_crucial1}, we can compute an upper bound of $\\mathbb{E}N_{\\ell}'$ as follows:\n\\begin{eqnarray}\n\\mathbb{E}N_{\\ell}' = \\mbox{$\\sum_{\\pi \\in \\Pi^*_\\ell} $} \\mathbb{P}(G_\\pi) \\mbox{$\\sum_{\\pi' \n\\in \\Pi^*_{\\ell,\\pi}}$}\\mathbb{P}(G_{\\pi'}|G_{\\pi}) \\leq C_5(1 + \no(1))\\mathrm{e}^{1000\\zeta_2\/\\sqrt{\\eta}}\\sqrt{\\ell^{7}\/\\eta^3}\\tfrac{(\\mathbb{E}N^*_{\\ell})^2} \n{n}.\\label{expected_edge}\n\\end{eqnarray}\nIf $N_\\ell^*$ and $N_\\ell'$ are concentrated around their respective means in the sense that \n$N_\\ell^* = \\mathbb E N_\\ell^*(1 + o(1))$ and $N_\\ell' = \\mathbb E N_\\ell'(1 + o(1))$ with high probability, \nthen we can use Lemma~\\ref{relation} and \\eqref{expected_edge} to derive a lower bound on the size \nof a maximum independent subset of $\\mathcal G_n$. For this purpose, it suffices to show that \n$\\mathbb{E}((N_\\ell^*)^2) = (\\mathbb{E}N_\\ell^*)^2(1 + o(1))$ and $\\mathbb{E}((N_\\ell')^2) = \n(\\mathbb{E}N_\\ell')^2(1 + o(1))$. The former has already been addressed by \\eqref{sum2} (see \n\\eqref{sec_mom_cond}). For the latter we need to estimate contributions from terms like \n$\\mathbb{P}(H_{\\pi_1,\\pi_2} \\cap H_{\\pi_3,\\pi_4})$ in the second moment calculation for $N_\\ell'$. \nOur next lemma will be useful for this purpose.\n\\begin{lemma}\n\\label{count_vertices}\nLet $\\pi_1, \\pi_2, \\pi_3, \\pi_4$ be paths in $\\Pi_\\ell^*$ such that $|E(\\pi_3 \\cup \\pi_4)| = \n2\\ell - j$ and $|E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4)| = j'$ where $0 \\leq j \\leq \\ell$ \nand $1 \\leq j' \\leq 2\\ell - j$. Also assume that $V(\\pi_3 \\cap \\pi_4) \\neq \\emptyset$. 
Then,\n\\begin{equation}\n\\label{vertex_count}\n|V(\\pi_3) \\cap V(\\pi_4)| + |V(\\pi_3 \\cup \\pi_4) \\cap V(\\pi_1 \\cup \\pi_2)| \\geq j + j' +2\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{figure}[!ht]\n \\centering\\includegraphics[width=0.8\\textwidth]{vertex_count.png}\n \\caption{{\\bf Removing edges from the union of two paths.} In these figures the sequences of vertices $v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8, v_9$ and $v_1', v_2, v_3, v_4, v_5', v_6, v_7, v_8, v_9'$ define the paths $\\pi_4$ and $\\pi_3$ respectively. $C^O_1$ and $C^O_2$ are the two connected components of $\\pi_3 \\cap \\pi_4$. In the figure at the top, the vertices $v_4, v_5, v_6, v_5'$ define a cycle. After removing the edge $(v_4, v_5)$ from the only segment in $E(\\pi_4) \\setminus E(\\pi_3)$ between $C^O_1$ and $C^O_2$, we get the acyclic graph displayed at the bottom.}\n \\label{fig:fig2}\n\\end{figure}\n\\begin{proof}\nSuppose the graph $\\pi_3 \\cap \\pi_4$ has exactly $k + 1$ (connected) components, namely $C^O_1, \\cdots, C^O_{k+1}$. Notice that $k$ is nonnegative as $\\pi_3 \\cap \\pi_4 \\neq \\emptyset$. Since $|E(\\pi_3 \\cap \\pi_4)| = j$ and $\\pi_3 \\cap \\pi_4$ is acyclic with $k + 1$ components, we have that $|V(\\pi_3 \\cap \\pi_4)| = j + k + 1$. Now suppose we could show that $\\pi_3 \\cup \\pi_4$ can be made acyclic by removing at most $k$ edges while keeping the vertex set the same, and call this new graph $H$. One would then have\n\\begin{equation*}\n|V\\big(H \\cap (\\pi_1 \\cup \\pi_2)\\big)| \\geq |E\\big(H \\cap (\\pi_1 \\cup \\pi_2)\\big)| + 1 \\geq |E\\big((\\pi_3 \\cup \\pi_4) \\cap (\\pi_1 \\cup \\pi_2)\\big)| - k + 1 = j' - k + 1\\,.\n\\end{equation*}\nAdding this to $|V(\\pi_3 \\cap \\pi_4)| = j + k + 1$ would immediately give \\eqref{vertex_count}.
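The edge-removal claim just invoked can also be checked mechanically on the concrete configuration of Figure~\\ref{fig:fig2}, where $k = 1$: the union of $\\pi_3$ and $\\pi_4$ contains exactly one cycle, and removing the single edge $(v_4, v_5)$ makes it acyclic. The Python sketch below is purely illustrative (not part of the proof); the vertex labels mirror the figure, with primed vertices written as `v1p`, `v5p`, `v9p`.

```python
# Illustrative check of the edge-removal claim on the configuration of
# Figure 2: pi3 and pi4 share k+1 = 2 components, their union has a cycle,
# and removing one edge (k = 1) makes it acyclic.

def edges_of_path(vertices):
    """Edge set of a path given by its vertex sequence."""
    return {frozenset(e) for e in zip(vertices, vertices[1:])}

def is_forest(edges):
    """A graph is acyclic iff union-find never joins two already-linked ends."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    acyclic = True
    for e in edges:
        u, v = tuple(e)
        ru, rv = find(u), find(v)
        if ru == rv:
            acyclic = False      # this edge closes a cycle
        else:
            parent[ru] = rv
    return acyclic

pi4 = ["v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8", "v9"]
pi3 = ["v1p", "v2", "v3", "v4", "v5p", "v6", "v7", "v8", "v9p"]
union = edges_of_path(pi4) | edges_of_path(pi3)

assert not is_forest(union)                          # v4, v5, v6, v5p form a cycle
assert is_forest(union - {frozenset(("v4", "v5"))})  # removing k = 1 edge suffices
```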
In \nthe remaining part of this proof we will show that one can remove $k$ edges from $\\pi_3 \\cup \n\\pi_4$ so that the resulting graph becomes acyclic.\\par\n\nLet $\\mathcal C$ be a cycle in $\\pi_3 \\cup \\pi_4$. Since $\\pi_3$ and $\\pi_4$ are acyclic, \n$\\mathcal C$ consists of an alternating sequence of segments in $E(\\pi_4) \\setminus E(\\pi_3)$ \nand $E(\\pi_3) \\setminus E(\\pi_4)$ interspersed with segments in any one of the $C^O_i$'s (possibly \ntrivial i.e.~consisting of a single vertex). This implies that for some $1 \\leq i, i' \\leq \nk+1$, $\\mathcal C$ contains a (nontrivial i.e.~of positive length) segment in $E(\\pi_4) \\setminus \nE(\\pi_3)$ joining $C^O_i$ and $C^O_{i'}$. In fact $i \\neq i'$ since $\\pi_4$ is acyclic. Hence the \nonly case we need to consider is when $k \\geq 1$. As $\\pi_4$ is a path, $C^O_1, C^O_2, \\cdots, \nC^O_{k+1}$'s are vertex-disjoint segments (possibly trivial) aligned along $\\pi_4$ in some order \nwith $k$ intervening (nontrivial) segments in $E(\\pi_4) \\setminus E(\\pi_3)$. Pick one edge from \neach of these $k$ segments. It follows from the discussions so far that $\\mathcal C$ must contain \none of these edges. Consequently removing these $k$ edges from $\\pi_3 \\cup \\pi_4$ would make the \nresulting graph acyclic. We refer the readers to Figure \\ref{fig:fig2} for an illustration.\n\\end{proof}\nWe will now use \\eqref{vertex_count} and Lemma~\\ref{lemma_crucial1} to show that $N^*_\\ell$ and \n$N_\\ell'$ concentrate around their expected values.\n\\begin{lemma}\n\\label{lemma_crucial3}\nAssume the same conditions on $\\zeta_1, \\zeta_2$, $\\ell$ and $\\eta$ as in \nLemma~\\ref{lemma_crucial1}. 
Then there exists $g_{\\ell, \\eta} = g_{\\ell, \\eta; \\zeta_1, \\zeta_2}: \\mathbb N \\to [0, \\infty)$ depending on $\\ell, \\eta$ (and $\\zeta_1, \\zeta_2$) with $g_{\\ell, \\eta}(n)\\to 0$ as $n\\to \\infty$ such that the following hold:\\\\\n(1) $\\mathbb{P}\\big(|N^*_\\ell - \\mathbb{E} N^*_\\ell| \\leq g_{\\ell,\\eta}(n) \\mathbb{E}N^*_\\ell\\big) \\to 1$ as $n \\to \\infty$;\\\\\n(2) $\\mathbb{P}\\big(|N_\\ell' - \\mathbb{E} N_\\ell'| \\leq g_{\\ell,\\eta}(n)\\mathbb{E} N_\\ell'\\big) \\to 1$ as $n \\to \\infty$.\n\\end{lemma}\n\\begin{proof}\nThe proof of (1) is rather straightforward. By \\eqref{sec_mom_cond} and \\eqref{sum2} we see that\n\\begin{eqnarray*}\n\\mathbb{E}((N^*_\\ell)^2) \\leq (\\mathbb{E}N^*_\\ell)^2(1 + o(1))\\,.\n\\end{eqnarray*}\nAn application of Markov's inequality then yields Part (1). \nIn order to prove Part (2), we first argue that $\\mathbb{E}N_\\ell' = \\Theta(n)$. As in the computation of \\eqref{first_bnd}, we can show that $\\mathbb{E}N^*_\\ell$ is $O(n)$. But then \\eqref{expected_edge} tells us that the same is also true for $\\mathbb{E}N_\\ell'$. For the lower bound, notice that given any path $\\pi_1$ in $\\Pi_\\ell^*$, there are $\\Theta(n^{\\ell})$ many paths in $\\Pi_\\ell^*$ that intersect $\\pi_1$ in exactly one vertex. Furthermore for any such pair $(\\pi_1, \\pi_2)$ we have\n$$\\mathbb{P}(H_{\\pi_1,\\pi_2})= (\\mathbb{P}(G_{\\pi_1}))^2 = \\Theta(n^{-2\\ell}),$$ where the last equality follows from \\eqref{gamma_density} (see the computation in \\eqref{first_bnd}) and Lemma~\\ref{cond_dev1}. Therefore, we obtain that\n\\begin{eqnarray*}\n\\mathbb{E}N_\\ell' = \\Theta(n^{\\ell + 1}) \\mbox{$\\sum_{\\pi_2 \\in \\Pi_{\\ell,\\pi_1}^*}$}\\mathbb{P}(G_{\\pi_1} \\cap G_{\\pi_2})\\geq \\Theta(n^{\\ell + 1}) \\Theta(n^{\\ell}) \\Theta(n^{-2\\ell})= \\Theta(n).\n\\end{eqnarray*}\nNext we estimate $\\mathbb E ((N_\\ell')^2)$.
For this purpose, we first consider two fixed $\\pi_1, \\pi_2\\in \\Pi^*_\\ell$ such that $V(\\pi_1) \\cap V(\\pi_2) \\neq \\emptyset$. For $0 \\leq j \\leq \\ell$ and $1 \\leq j' \\leq 2\\ell - j$, let $\\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}$ be the collection of all pairs of paths $(\\pi_3, \\pi_4) \\in \\Pi_\\ell^*$ such that $|E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4)| = j'$ and $|E(\\pi_3 \\cup \\pi_4)| = 2\\ell - j$.\nFor $(\\pi_3, \\pi_4)\\in \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}$, we see that $|E(\\pi_3 \\cup \\pi_4) \\setminus E(\\pi_1 \\cup \\pi_2)| = 2\\ell - j - j'$ and thus by reasoning similar to that employed in \\eqref{cond_prob_order} we get\n$$\\mathbb{P}(H_{\\pi_3, \\pi_4}|H_{\\pi_1, \\pi_2}) = O(n^{j + j' - 2\\ell})\\,.$$ Now let $\\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}(n_1, n_2) \\subseteq \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}$ contain all the pairs $(\\pi_3, \\pi_4)$ such that $|V(\\pi_3) \\cap V(\\pi_4)| = n_1 \\geq 1$ and $|V(\\pi_3 \\cup \\pi_4) \\cap V(\\pi_1 \\cup \\pi_2)| = n_2$. Then $|V(\\pi_3 \\cup \\pi_4) \\setminus V(\\pi_1 \\cup \\pi_2)| = 2\\ell + 2 - n_1 - n_2$ and consequently $|\\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}(n_1, n_2)| = O(n^{2\\ell + 2 - n_1 - n_2})$.\nBy Lemma~\\ref{count_vertices}, we know that $n_1 + n_2 \\geq j + j' + 2$ for $(\\pi_3, \\pi_4)\\in \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}(n_1, n_2)$.
Therefore, \n\\begin{equation*}\n\\mbox{$\\sum_{(\\pi_3, \\pi_4) \\in \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}}$}\\mathbb{P}(H_{\\pi_3, \\pi_4}|H_{\\pi_1, \\pi_2}) = \\mbox{$\\sum_{1\\leq n_1, n_2 \\leq 2\\ell+2} \\sum_{(\\pi_3, \\pi_4) \\in \\Pi_{\\pi_1, \\pi_2}^{\\ell,j,j'}(n_1, n_2)}$} \\mathbb{P}(H_{\\pi_3, \\pi_4}|H_{\\pi_1, \\pi_2}) = O(1)\\,.\n\\end{equation*}\nThis implies that\n$$ \\mbox{$ \\sum_{(\\pi_1, \\pi_2), (\\pi_3, \\pi_4)} $}\\mathbb P(H_{\\pi_1, \\pi_2} \\cap H_{\\pi_3, \\pi_4}) = O(1) \\mathbb E N'_\\ell\\,,$$\nwhere the sum is over all pairs such that $E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4) \\neq \\emptyset$. In addition, \n$$ \\mbox{$ \\sum_{(\\pi_1, \\pi_2), (\\pi_3, \\pi_4)} $}\\mathbb P(H_{\\pi_1, \\pi_2} \\cap H_{\\pi_3, \\pi_4}) = (1+o(1))( \\mathbb E N'_\\ell)^2\\,,$$\nwhere the sum is over all pairs such that $E(\\pi_1 \\cup \\pi_2) \\cap E(\\pi_3 \\cup \\pi_4) = \\emptyset$ (thus in this case $H_{\\pi_1, \\pi_2}$ is independent of $H_{\\pi_3, \\pi_4}$). \nCombined with the fact that $\\mathbb E N'_\\ell = \\Theta(n)$, this gives that $\\mathbb E ((N'_\\ell)^2) = (1+o(1)) (\\mathbb E N'_\\ell)^2$. At this point, another application of Markov's inequality completes the proof of the lemma.\n\\end{proof}\nWe are now well-equipped to prove the main lemma of this subsection. For convenience of notation, write \n\\begin{equation}\\label{eq-def-f-l-eta}\nf(\\ell, \\eta) = f_{\\zeta_2}(\\ell, \\eta) = \\mathrm{e}^{-1000\\zeta_2\/\\sqrt{\\eta}}\\sqrt{\\eta^3 \/ \\ell^7}\\,.\n\\end{equation}\n\\begin{lemma}\n\\label{lemma_crucial2}\nAssume the same conditions on $\\zeta_1, \\zeta_2$, $\\ell$ and $\\eta$ as in Lemma~\\ref{lemma_crucial1}. Let $S_{n, \\eta, \\ell} = S_{n, \\eta, \\ell; \\zeta_1, \\zeta_2}$ be a set with maximum cardinality among all subsets of $\\Pi_\\ell^*$ containing only pairwise disjoint good paths.
Then there exists an absolute constant $C_6>0$ such that, \n\\begin{equation}\n\\label{num_dis_good}\n\\mathbb{P}(|S_{n, \\eta, \\ell}| \\geq C_6f(\\ell, \\eta) n) \\to 1\\mbox{ as }n\\to\\infty\\,.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nLet $h(\\ell, \\eta) = C_5\\mathrm{e}^{1000\\zeta_2\/\\sqrt{\\eta}}\\sqrt{\\ell^7 \/ \\eta^3}$. By Lemma~\\ref{lemma_crucial3} and \\eqref{expected_edge}, we may assume without loss of generality that \n$$|N^*_\\ell - \\mathbb{E}N^*_{\\ell}| \\leq g_{\\ell,\\eta}(n)\\mathbb{E}N^*_{\\ell} \\mbox{ and } N_\\ell' \\leq (1 + o(1))h(\\ell, \\eta)\\tfrac{(\\mathbb{E}N^*_\\ell)^2}{n} (1 + g_{\\ell,\\eta}(n)),$$\nwhere $g_{\\ell, \\eta}(n)$ is defined as in Lemma~\\ref{lemma_crucial3}. Since $N_\\ell' = 2|E(\\mathcal{G}_n)| + |V(\\mathcal{G}_n)|$, by Lemma~\\ref{relation} we get that the graph $\\mathcal{G}_n$ has an independent subset of size at least\n$$ (N^*_\\ell)^2 \/ N_{\\ell}' \\geq n(1+o(1))\/h(\\ell,\\eta).$$\nTherefore, with high probability $|S_{n, \\eta, \\ell}| \\geq n\/(2h(\\ell,\\eta))$, which leads to \\eqref{num_dis_good} with $C_6 = 1\/(2C_5)$.\n\\end{proof}\n\n\\subsection{Connecting short light paths into a long one}\nWe set $\\zeta_1 = 1\/5$ and $\\zeta_2 = 1 + \\sqrt{2C^*\/\\mathrm{e}}$ in this subsection. Note that this choice satisfies the conditions in Lemma~\\ref{lemma_crucial2}. Denote by $\\mathcal{E}_{n, \\eta, \\ell}$ the event $\\{|S_{n, \\eta, \\ell}| \\geq C_6f(\\ell, \\eta) n\\}$.\\\\\nThe remaining part of our scheme is to connect a fraction of these disjoint good paths in a suitable way to form a light and long path $\\gamma$. In order to describe our algorithm for the construction of $\\gamma$, we need a bit more notation. Denote the vertex sets $V(\\mathcal{W}_n^*)$ and $V(\\mathcal{W}_n)\\setminus V(\\mathcal{W}_n^*)$ by $V_1$ and $V_2$ respectively.
Let $\\delta > 0$ be a number and $\\nu > 0$ be an integer satisfying \n\\begin{equation}\\label{algo_inequation}\n1 \\leq \\delta n \/\\ell \\leq |S_{n, \\eta, \\ell}|\\mbox{ and }\\delta n \\nu \/ \\ell \\leq |V_2|.\n\\end{equation}\nNow label the paths in $S_{n, \\eta, \\ell}$ as $\\pi_1, \\pi_2, \\ldots$ in some arbitrary way. Our aim is to build up the path $\\gamma$ in a step-by-step fashion starting from $\\pi_1$. In each step we will connect $\\gamma$ to some $\\pi_j$ by a path of length 2 whose \\emph{middle} vertex is in $V_2$. These paths will be referred to as \\emph{bridges}. To gain additional flexibility we also demarcate two segments of length $\\lfloor \\ell\/4 \\rfloor$, one at each end of each path $\\pi_j$, which we call \\emph{end segments}.\nThese end segments will allow us to ``choose'' endpoints of the $\\pi_j$'s while connecting them (as such, it is possible that we only keep half of the vertices of $\\pi_j$ in $\\gamma$). A vertex $v$ will be said to be adjacent to a path or an edge if it is an endpoint of that path or edge. If an edge $e$ has exactly one endpoint in a vertex set $S$, we denote that endpoint by $v_{e,S}$. The following algorithm, referred to as $\\mathrm{BRIDGE}(\\nu, \\ell, \\delta)$, will construct a long path $\\gamma$. \nSee Figure \\ref{fig:fig1} for an illustration.\n\\smallskip\n\n\\textbf{Initialization.} $\\gamma = \\pi_1$, $T$ is the set of all vertices which are in end segments of $\\pi_j$'s for $j \\geq 2$, $M = V_2$, $P = \\emptyset$ and designate an end segment of $\\gamma$ as the open end $\\gamma_O$. Also let $v$ be the endpoint of $\\gamma$ \\textbf{not} in $\\gamma_O$.\n\n\nNow repeat the following sequence of steps $\\lfloor \\delta n \/ \\ell \\rfloor - 1$ times:\n\n\n\n\\textbf{Step 1.} Repeat $\\nu$ times: find the lightest edge $e$ between $\\gamma_O$ and $M$, remove $v_{e,M}$ from $M$ and include it in $P$.
These $\\nu$ edges will be called predecessor edges (so at the end of this step, $|P| = \\nu$).\n\n\\vspace{0.05 cm}\n\n\\textbf{Step 2.} Find the lightest edge between $P$ and $T$. Call it $e'$. Then $v_{e',T}$ comes from an end segment of some path in $S_{n, \\eta, \\ell}$, say $\\pi$.\n\n\\vspace{0.05 cm}\n\n\\textbf{Step 3.} The edge $e'$ and the \\emph{unique} predecessor edge adjacent to $v_{e',P}$ define a path $b$ of length 2 (so $b$ connects a vertex in $\\gamma_O$ to a vertex in $\\pi$). Let $w$ be the endpoint of $\\pi$ not in the end segment that $v_{e',T}$ came from. Then there is a unique path $\\gamma'$ in the tree $\\gamma \\cup b \\cup \\pi$ between $v$ and $w$. Set $\\gamma = \\gamma'$ and $\\gamma_O =$ the end segment of $\\pi$ containing $w$.\n\n\\vspace{0.05 cm}\n\n\\textbf{Step 4.} Remove the vertices on the end segments of $\\pi$ from $T$ and reset $P$ to $\\emptyset$.\\newline \n\\begin{figure}[!ht]\n \\centering\\includegraphics[width=0.8\\textwidth]{algorithm.png}\n \\caption{{\\bf Illustrating an iteration of BRIDGE for $\\nu = 2$ and $\\ell = 4$.} The edges $e'$ and $e''$ define the path $b$. So in this iteration the paths $\\gamma$ and $\\pi$ are shortened slightly before being joined via $b$.}\n \\label{fig:fig1}\n\\end{figure}\n\\newline\nNotice that the conditions in \\eqref{algo_inequation} ensure that we never run out of vertices in $T$ or $M$ during the first $\\lfloor \\delta n \/ \\ell \\rfloor - 1$ iterations of steps 1 to 4. Thus what we described above is a valid algorithm for such choices of $\\delta$ and $\\nu$. Denote the length and average weight of the path $\\gamma$ generated by $\\mathrm{BRIDGE}(\\nu, \\ell, \\delta)$ by $L_{\\text{BRIDGE}}(\\nu, \\ell, \\delta)$ and $A_{\\text{BRIDGE}}(\\nu, \\ell, \\delta)$ respectively, when $\\delta, \\nu, \\ell$ satisfy these inequalities.
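A small arithmetic point worth recording, since it is used later when bounding $L_{\\text{BRIDGE}}$ and $A(\\gamma)$: at most one end segment of length $\\lfloor \\ell/4 \\rfloor$ of a connected path can be discarded at each of its two ends, so each $\\pi_j$ keeps at least $\\ell - 2\\lfloor \\ell/4 \\rfloor \\geq \\ell/2$ of its edges in $\\gamma$. The floor-function inequality itself admits a one-line check (illustrative only):

```python
# Check that ell - 2*floor(ell/4) >= ell/2 for all positive path lengths ell,
# i.e. a good path always contributes at least half of its edges to gamma.
for ell in range(1, 10000):
    assert ell - 2 * (ell // 4) >= ell / 2
```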
For the sake of completeness we may define these quantities to be $0$ and $\\infty$ respectively, and regard the output path $\\gamma$ as ``empty'' if any one of the inequalities in \\eqref{algo_inequation} fails to hold. We are now just one lemma short of proving the lower bound in \\eqref{main_prop}.\n\\begin{lemma}\n\\label{algo}\nFor any $0 < \\eta < \\eta_2$, where $\\eta_2 > 0$ is an absolute constant, there exist positive integers $\\nu = \\nu(\\eta)$, $\\ell = \\ell(\\eta) \\geq \\zeta_2^2 \/ \\eta$ and a positive number $\\delta = \\delta(\\eta)$ such that\n$$\\mathbb{P}\\big(L_{\\text{BRIDGE}}(\\nu, \\ell, \\delta) \\geq \\mathrm{e}^{-C_{7}\/\\sqrt{\\eta}} n\\mbox{ and }A_{\\text{BRIDGE}}(\\nu, \\ell, \\delta) \\leq 1\/\\mathrm{e} + 12\\eta \\mid \\mathcal{E}_{n, \\eta, \\ell}\\big) \\to 1$$\nas $n$ tends to infinity. Here $C_{7}>0$ is an absolute constant.\n\\end{lemma}\n\\begin{proof}\nWe will omit the phrase ``conditioned on $\\mathcal{E}_{n, \\eta, \\ell}$'' while talking about probabilities in this proof (barring formal expressions), although it is to be implicitly assumed. We use $\\mathrm{Exp}(1 \/ \\theta)$ to denote the distribution of an exponential random variable with mean $\\theta > 0$. Define $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ to be the event that the total weight of bridges does not exceed $3\\ell\\eta \\times \\lfloor \\delta n\/\\ell \\rfloor$. Notice that if any one of the inequalities in \\eqref{algo_inequation} does not hold, then $\\gamma$ is ``empty'' and hence $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ is a sure event. Suppose $\\delta, \\nu$ and $\\ell$ are such that \\eqref{algo_inequation} is satisfied. We will first bound the average weight $A(\\gamma)$ of $\\gamma$ assuming that $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ occurs. Let $\\ell_i$ be the length of the segment selected by the algorithm in the $i$-th iteration.
We see that its weight can be no more than $\\lambda \\ell_i + 2\\zeta_2 \/ \\sqrt{\\eta}$, since the segment is chosen from a good path whose average weight is at most $\\lambda$ and whose maximum deviation from its linear interpolation is at most $\\zeta_2 \/ \\sqrt{\\eta}$ (see \\eqref{good_event} as well as the proof of Lemma~\\ref{conditional_prob}). Thus the total weight of edges in $\\gamma$ coming from the good paths is bounded by $\\lambda L + \\lfloor \\delta n \/ \\ell \\rfloor \\cdot (2\\zeta_2 \/ \\sqrt{\\eta})$ where $L = \\sum_{i}\\ell_i$. Adding this to the total weight of bridges, we get, with probability tending to 1 as $n\\to \\infty$,\n\\begin{equation*}\nW(\\gamma) \\leq L( 1 \/ \\mathrm{e} + \\eta) + \\lfloor \\delta n \/ \\ell \\rfloor \\cdot (2\\zeta_2 \/ \\sqrt{\\eta}) + \\lfloor \\delta n\/\\ell \\rfloor \\cdot 3\\ell\\eta.\n\\end{equation*}\nSince the algorithm selects at least $\\ell \/ 2$ edges from each of the $\\lfloor \\delta n \/ \\ell \\rfloor$ good paths it connects, we have $\\ell_i\\geq \\ell\/2$ for each $i$ and thus $L \\geq \\lfloor \\delta n \/ \\ell \\rfloor \\times \\ell\/2$. Therefore, \n\\begin{eqnarray*}\nA(\\gamma)\n &\\leq & 1\/\\mathrm{e} + \\eta + \\lfloor \\delta n \/ \\ell \\rfloor \\cdot (2\\zeta_2 \/ L\\sqrt{\\eta}) + \\lfloor \\delta n\/\\ell \\rfloor \\cdot 3\\ell\\eta \/ L \\\\\n &\\leq & 1 \/ \\mathrm{e} + \\eta + 4\\zeta_2 \/ \\ell\\sqrt{\\eta} + 6\\eta\\,.\n\\end{eqnarray*}\nIf $\\ell\\geq \\zeta_2 \/ \\eta^{3\/2}$, then from the last display we can conclude $A(\\gamma) \\leq 1 \/ \\mathrm{e} + 12\\eta$. We can assume this restriction on $\\ell$ for now.
Indeed, later we will \nspecify the value of $\\ell$ and it will satisfy the condition $\\ell\\geq \\zeta_2 \/ \\eta^{3\/2}$.\\par\n\n\nSo it remains to find positive numbers $\\delta, \\nu, \\ell$ as functions of $\\eta$ and an absolute \nconstant $\\eta_2 > 0$ such that the following three hold for all $0 < \\eta < \\eta_2$: (a) \n$\\mathbb{P}(\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta} \\mid \\mathcal E_{n, \\eta, \\ell}) \\rightarrow \n1$ as $n \\to \\infty$, (b) $\\ell \\geq \\zeta_2 \/ \\eta^{3\/2} \\vee \\zeta_2^2 \/ \\eta$ (see the \nstatement of the lemma as well as the last paragraph) and (c) $\\gamma$ has the desired length. In \nthe next paragraph we will find a triplet $(\\delta, \\nu, \\ell)$ and an absolute constant $\\eta_2'' \n> 0$ such that (a) holds for $0 < \\eta < \\eta_2''$. In the final paragraph we will show that our \nchoice of $(\\delta, \\nu, \\ell)$ also satisfies (b) and (c) whenever $0 < \\eta < \\eta_2$ where \n$\\eta_2 < \\eta_2''$ is an absolute constant.\\par\n\nLet us begin with the crucial observation that, at the start of each iteration the edges between \n$M$ and $\\gamma_O$ are still unexplored. The same is true for the edges between $P$ and $T$ at the \nend of Step 1 in any iteration. Consequently their weights are i.i.d.\\ $\\mathrm{Exp}(1\/n)$ \nregardless of the outcomes from the previous iterations. Therefore, all the bridge weights are \nindependent of each other. Now suppose the mean and variance of each bridge weight can be bounded \nabove by $2\\ell\\eta$ and $\\sigma^2$ respectively and we emphasize that the latter does not depend \non $n$. By Markov's inequality it would then follow that $\\lim_{n \\to \\infty}\\mathbb{P}(\\mathcal \nB_{n, \\eta, \\nu, \\ell, \\delta} \\mid \\mathcal E_{n, \\eta, \\ell}) = 1$. To that end let us consider \nthe bridge obtained from the $m$-th iteration where $1\\leq m\\leq \\lfloor \\delta n\/\\ell \\rfloor - \n1$. 
Note that here we implicitly assume \\eqref{algo_inequation}, but this will shortly be shown to be implied by some other constraints involving $\\delta, \\nu$ and $\\ell$. Let $e'$ be the lightest edge between $P$ and $T$ in Step 2 and $e$ be the predecessor edge adjacent to $e'$ (for this iteration). So the bridge weight is simply $W_{e'} + W_e$. By the discussion of independence above, it follows that $W_{e'}$ and $W_e$ are independent of each other and also of the weights of bridges already chosen. Since these weights are minima of some collections of i.i.d.\\ exponentials, they will be of small magnitude provided that we are minimizing over a large collection of exponentials, i.e., provided $|T|$, $|M|$ and $\\nu$ are big. It follows from the description of the algorithm that at each iteration we lose $2\\lfloor \\ell\/4\\rfloor$ many vertices from $T$ and $\\nu$ many vertices from $M$. By simple arithmetic we then get\n\\begin{equation}\\label{eq-size-P1P2}\n|T| \\geq C_6\\lfloor\\ell \/4\\rfloor f(\\ell,\\eta)n \\mbox{ and }|M| \\geq \\zeta_1 \\eta n\/ 2\\,,\n\\end{equation}\nfor all $1 \\leq m \\leq \\lfloor \\delta n \/ \\ell\\rfloor - 1$ provided\n\\begin{equation}\n\\label{eq-delta-zeta-0}\n\\delta \\leq C_6 f(\\ell,\\eta)\\ell \/ 2 \\mbox{ and } \\nu \\delta \/ \\ell \\leq \\zeta_1 \\eta \/ 2\\,.\n\\end{equation}\nNotice that these inequalities automatically imply $\\delta n \/ \\ell \\leq C_6 f(\\ell, \\eta)n$ and $\\delta n \\nu \/ \\ell \\leq |V_2|$. Thus if $\\delta, \\nu, \\ell$ satisfy \\eqref{eq-delta-zeta-0}, then \\eqref{algo_inequation} is also satisfied for all large $n$ (given $\\delta, \\ell$). Assume for now that \\eqref{eq-delta-zeta-0} holds. Since $W_{e'}$ is the minimum of $\\nu \\times |T|$ many independent $\\mathrm{Exp}(1\/n)$ random variables, it is distributed as $\\mathrm{Exp}(\\nu|T| \/ n)$. As for $W_{e}$, it is bounded by the maximum weight of the $\\nu$ predecessor edges.
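The distributional fact just used for $W_{e'}$ — the minimum of $k$ i.i.d.\\ $\\mathrm{Exp}(1/n)$ weights (each of mean $n$) is $\\mathrm{Exp}(k/n)$, hence has mean $n/k$ — is easy to confirm by simulation. In the sketch below the values of $n$, $k$, the sample size and the tolerance are arbitrary toy choices:

```python
# Monte-Carlo illustration (not a proof): the min of k i.i.d. Exp(1/n)
# variables is Exp(k/n), so its mean is n/k.  Toy parameters, fixed seed.
import random

random.seed(0)
n, k, trials = 100.0, 50, 20000

# each weight is exponential with rate 1/n, i.e. mean n
emp_mean = sum(min(random.expovariate(1.0 / n) for _ in range(k))
               for _ in range(trials)) / trials
assert abs(emp_mean - n / k) < 0.1 * (n / k)   # n/k = 2; generous tolerance
```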
From properties of exponential distributions and the description of the algorithm, it is not difficult to see that this maximum weight is distributed as $E_1 + E_2 + \\cdots + E_\\nu$, where $E_{i+1}$ is exponential with rate $(|M| - i)\\times 1 \/ n\\times \\lfloor \\ell \/ 4\\rfloor$. By \\eqref{eq-size-P1P2}, we can then bound the expected weight of the bridge from above by\n\\begin{eqnarray}\\label{mean_bnd1}\n\\tfrac{1}{C_6\\lfloor\\ell \/4\\rfloor f(\\ell,\\eta)n} \\times \\tfrac{1}{\\nu} \\times n\\mbox{ }+\\mbox{ }\\tfrac{\\nu}{\\big(\\frac{\\zeta_1\\eta}{2} - \\tfrac{\\nu}{n}\\big)\\lfloor\\ell \/4\\rfloor} \\leq \\tfrac{5}{C_6\\nu\\ell f(\\ell,\\eta)} + \\tfrac{11\\nu}{\\zeta_1\\eta \\ell},\n\\end{eqnarray}\nwhere the last inequality holds for $\\ell \\geq 20$ and large $n$ (given $\\eta$, $\\nu$).\nBy the same line of argument, we get that its variance is bounded by a number that depends only on $\\eta$, $\\ell$ and $\\nu$ (so in particular is independent of $n$). To make the right hand side of \\eqref{mean_bnd1} bounded above by $2\\ell\\eta$, we may require each of the summands in \\eqref{mean_bnd1} to be bounded by $\\ell\\eta$. After a little simplification this amounts to\n\\begin{equation}\n\\nu \\geq 5 \/ C_6\\ell^2\\eta f(\\ell, \\eta) \\,, \\mbox{ and } \\zeta_1 (\\ell\\eta)^2 \\geq 11 \\nu\\,.\n\\label{mean_bnd2}\n\\end{equation}\nSo we need to pick a positive $\\delta = \\delta(\\eta)$, positive integers $\\nu = \\nu(\\eta), \\ell = \\ell(\\eta)$ and an absolute constant $\\eta_2'' > 0$ such that \\eqref{eq-delta-zeta-0} and \\eqref{mean_bnd2} hold for $0 < \\eta < \\eta_2''$.
We will deal with \\eqref{mean_bnd2} first, which is in fact equivalent to\n\\begin{equation}\n\\label{ineq_solve_1}\n\\zeta_1(\\ell \\eta)^2 \/ 11 \\geq \\nu \\geq 5 \/ C_6\\ell^2\\eta f(\\ell, \\eta)\\,.\n\\end{equation}\nLet us try to find an integer $\\ell$ satisfying $\\zeta_1(\\ell \\eta)^2 \/ 11 \\geq \\big(10 \/ C_6\\ell^2\\eta f(\\ell, \\eta)\\big)\\vee 2$, since this will ensure the existence of a positive integer $\\nu$ such that $\\nu, \\ell$ satisfy \\eqref{ineq_solve_1}. Using $f(\\ell, \\eta) = \\mathrm{e}^{-1000\\zeta_2\/ \\sqrt{\\eta}}\\sqrt{\\eta^3 \/ \\ell^7}$, we get that this amounts to\n$$\\ell \\geq \\tfrac{C_7'\\mathrm{e}^{2000\\zeta_2 \/ \\sqrt{\\eta}}}{\\eta^9} \\vee \\tfrac{C_7''}{\\eta}\\,,\n$$\nfor some positive, absolute constants $C_7'$ and $C_7''$. Hence there exists an absolute constant $\\eta_2''' > 0$ such that the integers $\\ell = \\lceil \\mathrm{e}^{2001\\zeta_2 \/ \\sqrt{\\eta}} \\rceil$ and $\\nu = \\lfloor \\zeta_1(\\ell \\eta)^2\/ 11 \\rfloor$ satisfy \\eqref{ineq_solve_1} whenever $0 < \\eta < \\eta_2'''$. Now we need to find a $\\delta$ satisfying \\eqref{eq-delta-zeta-0}, which can be rewritten as\n\n\\begin{equation}\n\\label{ineq_solve_2}\n\\delta \\leq (C_6 f(\\ell, \\eta)\\ell \/ 2) \\wedge (\\zeta_1\\eta\\ell\/ 2\\nu)\\,.\n\\end{equation}\nAgain substituting $f(\\ell, \\eta) = \\mathrm{e}^{-1000\\zeta_2\/ \\sqrt{\\eta}}\\sqrt{\\eta^3 \/ \\ell^7}$, we can simplify \\eqref{ineq_solve_2} to\n\\begin{equation}\n\\label{ineq_solve_3}\n\\delta \\leq (C_6 \\mathrm{e}^{-1000\\zeta_2\/\\sqrt{\\eta}}\\tfrac{\\eta^{3\/2}}{2\\ell^{5\/2}}) \\wedge (\\zeta_1\\eta\\ell\/ 2\\nu)\\,.\n\\end{equation}\nSince $\\nu = \\lfloor \\zeta_1(\\ell \\eta)^2\/ 11 \\rfloor$, \\eqref{ineq_solve_3} would be satisfied if\n$$\\delta \\leq (C_6 \\mathrm{e}^{-1000\\zeta_2\/\\sqrt{\\eta}}\\tfrac{\\eta^{3\/2}}{2\\ell^{5\/2}}) \\wedge (11 \/ 2\\ell \\eta)\\,.$$\nThe last display together with our particular choice of $\\ell$, i.e. 
$\\lceil \n\\mathrm{e}^{2001\\zeta_2 \/ \\sqrt{\\eta}} \\rceil$ imply that there exists a positive, absolute \nconstant $\\eta_2'' < \\eta_2'''$ such that $\\delta = \\mathrm{e}^{-7000\\zeta_2 \/ \\sqrt{\\eta}}$ \nsatisfies \\eqref{ineq_solve_2} for $0 < \\eta < \\eta_2''$. Thus our choice of the triplet $(\\delta, \n\\nu, \\ell)$ satisfies \\eqref{eq-delta-zeta-0} and \\eqref{mean_bnd2} for $0 < \\eta < \\eta_2''$ and \nconsequently the event $\\mathcal B_{n, \\eta, \\nu, \\ell, \\delta}$ occurs with high probability for \nthis choice.\\par\n\nAs to the constraint on $\\ell$, it is also clear that there exists a positive, absolute constant \n$\\eta_2' < \\eta_2''$ such that $\\ell = \\lceil \\mathrm{e}^{2001\\zeta_2 \/ \\sqrt{\\eta}} \\rceil$ is \nlarger than $\\zeta_2 \/ \\eta^{3\/2} \\vee \\zeta_2^2 \/ \\eta$ for all $0 < \\eta < \\eta_2'$. Finally it \nis left to ensure whether $\\gamma$ has the length required by the lemma. Since our particular \nchoice of the triplet $(\\delta, \\nu, \\ell)$ satisfies \\eqref{algo_inequation} for large $n$ (given \n$\\eta$), we have that $L_{\\text{BRIDGE}}(\\nu, \\ell, \\delta) \\geq \\lfloor \\delta n \/ \\ell \\rfloor \n\\times \\ell\/2$. It then follows that there exists a positive, absolute constant $\\eta_2 < \\eta_2'$ \nsuch that $L_{\\text{BRIDGE}}(\\nu, \\ell, \\delta) \\geq \\mathrm{e}^{-7001\\zeta_2\/ \\sqrt{\\eta}}n$ for \nthese particular choices of $\\nu, \\ell$ and $\\delta$ whenever $0 < \\eta < \\eta_2$ and $n$ is large \n(given $\\eta$). This completes the proof of the lemma.\n\n\\end{proof}\n\nCombining Lemmas \\ref{lemma_crucial2} and \\ref{algo} completes the proof of the lower bound in \nTheorem~\\ref{Prop}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n Homotopy theory, originally developed in the realm of topology, is now used in most domains of mathematics. 
One can cite algebraic topology, $\\infty$-algebras, operads, higher category theory, mathematical physics or derived geometry as examples. This tendency is in part due to the success of the concept of {\\it closed model category} introduced by Quillen in \\cite[Chapter 1]{Quillen}. Indeed, a model structure on a category gives access to topological intuition and a collection of important results.\n\nThe aim of this article is to illustrate how homotopy theory can be used to get a better understanding of {\\it singular foliations}. Our motivation comes from recent results of Lavau, Laurent-Gengoux and Strobl \\cite{Lavau} on singular foliations which are reminiscent of classical results in homotopy theory. We show that these results can actually be deduced from a {\\it left semi-model} structure on the category $L_\\infty\/A$ of {\\it $L_\\infty$-algebroids}.\n\nWe first recall the results of \\cite{Lavau} about resolutions of singular foliations and state analogous statements in the language of model categories. We then outline the structure of the paper.\n\n\\mypara Foliations are classical objects of study in mathematics. Examples arise in PDEs, Poisson geometry, Lie groups or differential geometry to cite a few. Roughly speaking, a {\\it foliation} $\\mathcal{F}$ consists of a partition of a smooth manifold $M$ into a collection of submanifolds called the {\\it leaves} of $\\mathcal{F}$. The collection $D(\\mathcal{F})$ of tangent spaces to the leaves of $\\mathcal{F}$ is called the {\\it distribution} associated to the foliation $\\mathcal{F}$. The foliation is called {\\it regular} when $D(\\mathcal{F})$ forms a vector bundle. As the leaves are submanifolds, the module $\\Gamma (D(\\mathcal{F}))$ of sections of this bundle is involutive under the bracket of vector fields. However, in the general case, i.e. when the tangent spaces to the leaves do not form a vector bundle, one can still consider the modules of vector fields tangent to the leaves. 
{\\it Singular foliations} are therefore defined in \\cite{An-Sk09}\nas locally finitely generated submodules of the module of vector fields, closed under the bracket.\n\nContrary to regular foliations, which are by now well understood, singular foliations are still mysterious objects. A promising approach has been proposed in \\cite{Lavau} to understand a given singular foliation $\\mathcal{F}$. The idea is to associate to $\\mathcal{F}$ a {\\it Lie $\\infty$-algebroid} $Q\\mathcal{F}$ which is equivalent to $\\mathcal{F}$ \\cite[Theorem 1.6]{Lavau}. This $Q\\mathcal{F}$ is called the {\\it universal Lie $\\infty$-algebroid over $\\mathcal{F}$}. The use of the adjective {\\it universal} is justified by \\cite[Theorem 1.8]{Lavau} which states that two such universal Lie $\\infty$-algebroids over $\\mathcal{F}$ are quasi-isomorphic, and that such a quasi-isomorphism is essentially unique (up to homotopy).\n\n\\mypara These results should immediately ring a bell to a reader familiar with model categories. For the convenience of the reader who is not, let us briefly explain why. Model categories were introduced in order to write down a minimal set of axioms enabling one to perform constructions familiar in homological algebra (projective resolutions, derived functors) and in homotopy theory (homotopy relation for maps, fibrations, cofibrations, homotopy category). \n\n\nAmong these constructions, the {\\it cofibrant replacement}, which subsumes the notion of projective resolution, is central. The idea is, given $\\mathcal{F}$, to find another object $Q\\mathcal{F}$ which is better behaved (i.e. is cofibrant) and remains equivalent to $\\mathcal{F}$.\nThe proper way to formulate these properties is to introduce three classes of maps, $\\mathcal{W}$, $Cof$ and $Fib$, respectively called {\\it weak equivalences}, {\\it cofibrations} and {\\it fibrations}.
That $Q\\mathcal{F}$ is {\\it weakly equivalent} to $\\mathcal{F}$ means that there exists a map in $\\mathcal{W}$ between $Q\\mathcal{F}$ and $\\mathcal{F}$. That $Q\\mathcal{F}$ is {\\it cofibrant} means that the map $0\\hookrightarrow Q\\mathcal{F}$ from the initial object is a cofibration.\n\n\nCofibrant replacements are important to derive functors. The {\\it left derived functor $D(F)$} of a functor $F$ is defined by $D(F)(\\mathcal{F}):=F(Q\\mathcal{F})$. A non-trivial part of the theory is to ensure that this derived functor is well defined, i.e. does not depend on the choice of the cofibrant replacement. This is relevant for our purpose since key lemmas involved in proving this fact are the exact analogues of the results of \\cite{Lavau} we are interested in (see Proposition~\\ref{machinerysemimodel}).\n\n\\mypara The natural strategy would therefore be to search for a model structure on the category of Lie $\\infty$-algebroids. However, with the definition of \\cite{Lavau} (in terms of complexes of {\\it vector bundles}), not every singular foliation admits a cofibrant replacement (Section 1.1 of \\cite{Lavau}). We therefore need to allow ourselves to consider what we call {\\it $L_\\infty$-algebroids} in Definition~\\ref{holiea}, i.e. a complex $L$ of {\\it $A$-modules} equipped with an $L_\\infty$-structure compatible with the $A$-module structure. The nuance between Lie $\\infty$-algebroids and $L_\\infty$-algebroids is that for a Lie $\\infty$-algebroid, the module $L$ consists of sections of a differential graded vector bundle. \n\nHowever, as remarked by Nuiten in \\cite[Example 3.1.12]{Nuiten}, even in the more general setting of $L_\\infty$-algebroids, an axiom of the definition of a model category fails to be satisfied.
Therefore, Nuiten has instead equipped the category $L_\\infty\/A$ of $L_\\infty$-algebroids with a {\\it semi-model structure}, a relaxed version of the concept of model category (see Definition~\\ref{semi-model}).\n\nWe can still apply this machinery to singular foliations, but in order to do so, we need the analogues of \\cite[Lemma 5.1]{DS} and \\cite[Lemma 4.9]{DS} for semi-model categories. This is why we start Section~\\ref{tools} by stating Proposition~\\ref{machinerysemimodel}, whose proof is essentially the same as for model categories. The only ``work'' consists in checking that the axioms involved in the classical case are still valid for a semi-model category. In order to facilitate this task for the reader, we recall the\noutline of the classical proof in the appendix.\n\nWe then give in Section~\\ref{semi} details about Nuiten's construction. After recalling in Subsection~\\ref{rapleft} the precise statement and the relevant definitions, we recall Lemma~\\ref{lemmatransfer}, which enables us to transport a semi-model structure via an adjunction. The next step is to describe in Subsection~\\ref{slice} the model structure on $Mod\/T_A$ that can be transported to $L_\\infty\/A$ via the Free\/Forget adjunction. We conclude this section by describing thoroughly in Subsections~\\ref{catholie} and \\ref{freeLR} the free functor involved in this adjunction. \n\nWith this at hand we are able to compare in Section~\\ref{cofibrantreplacement} the results of Proposition~\\ref{machinerysemimodel} applied to $L_\\infty\/A$ with the results of \\cite{Lavau} that we wish to recover. We conclude in Subsection~\\ref{open} with a list of open questions.\n\n\\paragraph{Acknowledgements} We would like to thank Fran\u00e7ois M\u00e9tayer for a discussion about filtered colimits and Chris Rogers for mentioning the reference \\cite{Nuiten}.
A large part of this work was carried out at the Max Planck Institute for Mathematics in Bonn and we are grateful for the excellent resources and working conditions there.\n\n\\section{Main technical tool}\\label{tools}\n\nIn \\cite{Lavau}, the universal Lie $\\infty$-algebroid of a singular foliation \\cite[Definition 1.5]{Lavau} is proven to behave, with respect to homotopies, in the same manner as a cofibrant replacement in a left semi-model category would.\n\nThe relevant definitions of left semi-model category, cofibrant replacement, and left and right homotopies can be found in Definition~\\ref{semi-model}, Definition~\\ref{cofibrantrep}, and Definition~\\ref{lrhomotopy}, respectively. The definition of left semi-model category is taken from \\cite[Section 4.4.1]{Fresse}, \\cite[Definition 1]{Spitzweck} and \\cite[Definition 1.5]{Barwick}.\n\nTo be more precise, \\cite[Theorem 1.6, 1.8, Proposition 1.22]{Lavau} are reminiscent respectively of Proposition~\\ref{machinerysemimodel}.\\ref{1}, .\\ref{5} and .\\ref{3} below:\n\n\\begin{prop}\\label{machinerysemimodel}\nLet $\\mathcal{C}$ be a left semi-model category. Suppose that $A$, $X$ and $Y$ are objects in $\\mathcal{C}$.\n\n\\begin{enumerate}\n\\item \\label{1} Every object in $\\mathcal{C}$ has a cofibrant replacement.\n\\item\\label{3}\nIf $A$ is cofibrant, then $\\overset{l}{\\sim}$ is an equivalence relation on $Hom_\\mathcal{C}(A,X)$.
We denote the set of equivalence classes by $\\pi^l (A,X)$.\n\\item \\label{2}\nIf $A$ is cofibrant and $p\\colon Y\\to X$ is an acyclic fibration, then composition with $p$ induces a bijection:\n$$\np_\\ast\\colon \\pi^l (A,Y)\\to \\pi^l (A,X), \\ [f]\\mapsto [pf].\n$$\n\\item \\label{5} Any two cofibrant replacements of $A$ are weakly equivalent and any two such weak equivalences between them are left homotopic.\n\\end{enumerate}\n\\end{prop}\n\nWe flesh out this analogy in Section~\\ref{app} of this work by applying this proposition to $\\mathcal{C}=L_\\infty\/\\mathcal{O}_M$ (Definition~\\ref{holiea}), the category of $L_\\infty$-algebroids over a manifold $M$.\n\n\\begin{rem}\nIn what follows we stop writing ``left'' and simply write semi-model category for a left semi-model category.\n\\end{rem}\n\n\\section{A semi-model structure on $\\bm{L_\\infty\/A}$}\\label{semi}\nWe remind the reader that throughout this section we consider categories of differential non-negatively graded modules over some base field $\\mathbbm{k}$ of characteristic 0 and degree-preserving morphisms unless otherwise stated.\n\nIn order to be able to apply Proposition~\\ref{machinerysemimodel} to the category of $L_\\infty$-algebroids, we need to equip it with a cofibrantly generated semi-model structure. In Section~\\ref{rapleft} we state Theorem~\\ref{transfer} \\cite{Nuiten}, which gives a semi-model structure on $L_\\infty\/A$, the category of $L_\\infty$-algebroids, from the model structure described in Section~\\ref{slice} on $Mod\/T_A$, the category of $dg\\text{-}A\\text{-}$modules over $T_A$.
In Section~\\ref{catholie} we define the category $L_\\infty\/A$, which is the main focus of this work, and in Section~\\ref{freeLR} we describe the free functor $LR\\colon Mod\/T_A\\to L_\\infty\/A$ that allows us to apply Lemma~\\ref{lemmatransfer}.\n\n\\subsection{Statement of the result}\\label{rapleft}\nIn this subsection we present Theorem~\\ref{transfer}, which allows us to use the machinery available in model categories for $L_\\infty\/A$ (Definition~\\ref{holiea}). This theorem is originally proved for $dg$-algebras and unbounded modules, but for our purposes we just need to state it for an algebra $A$ concentrated in degree 0 and differential non-negatively graded modules.\n\nWe start by stating the definition of a semi-model category.\n\n\n\\begin{df}\\label{semi-model}\nA {\\bf semi-model category} is a bicomplete category $\\mathcal{C}$ with three subcategories $\\mathcal{W}$, $Fib$ and $Cof$, each containing all identity maps, such that:\n\\begin{enumerate}\n\\item[SM1)] $\\mathcal{W}$ has the 2-out-of-3 property. As well, $\\mathcal{W}$, $Fib$ and $Cof$ are stable under retracts.\n\\item[SM2)] \\label{SM2} The elements of $Cof$ have the left lifting property with respect to $\\mathcal{W}\\cap Fib$. The elements of $\\mathcal{W}\\cap Cof$ with cofibrant domain have the left lifting property with respect to fibrations.\n\\item[SM3)]\\label{semifactorization} Every map can be factored functorially into a cofibration followed by a trivial fibration.
Every map with cofibrant domain can be factored as a trivial cofibration followed by a fibration.\n\\item[SM4)] The fibrations and trivial fibrations are stable under transfinite composition, products, and base change.\n\\end{enumerate}\n\\end{df}\n\n\\begin{rem}\\label{semifrommod}\nThe definition of a semi-model category is a weakening of the notion of {\\bf closed model category}, which requires in addition the following modifications of the axioms:\n\\begin{enumerate}\n \\item[SM2' \\& SM3'] The axioms SM2 and SM3 hold without any assumption of cofibrancy of the domain.\n\\end{enumerate}\nFor a closed model category one does not require the axiom SM4, which follows from the other axioms.\n\\end{rem}\n\n\\begin{rem}\nIn particular, any model category is a semi-model category.\n\\end{rem}\n\n\\begin{thm}\\cite[Lemma 2.14, Theorem 3.1]{Nuiten}\\label{transfer}\nLet $A$ be an algebra. Then the category $L_\\infty\/A$ of $L_\\infty$-algebroids over $A$ admits a semi-model structure, in which a map is a weak equivalence (resp.
a fibration) if and only if it is a quasi-isomorphism (resp. a surjection in positive degrees).\n\\end{thm}\n\nIn fact, this structure is cofibrantly generated \\cite[Definition 11.1.2]{HirschModel}, \\cite[Chapter 11]{HirschModel} and \\cite[Section 2.1]{Hovey}, which means that there is a set of maps generating the class of trivial fibrations through the right lifting property.\n\nTheorem~\\ref{transfer} is proven through the following version of the transfer lemma:\n\\begin{lem}[Transfer of semi-model structure]\\label{lemmatransfer}\nLet $F\\colon \\mathcal{M}\\leftrightarrow \\mathcal{N}\\cocolon G$ be an adjunction between locally presentable categories and suppose that $\\mathcal{M}$ carries a semi-model structure with sets of generating (trivial) cofibrations $I$ and $J$.\n\nDefine a map in $\\mathcal{N}$ to be a weak equivalence (fibration) if its image under $G$ is a weak equivalence (fibration) in $\\mathcal{M}$ and a cofibration if it has the left lifting property against the trivial fibrations. Assume that the following condition holds:\n\\begin{itemize}\n \\item Let $f\\colon A\\to B$ be a map in $\\mathcal{N}$ with cofibrant domain, obtained as a transfinite composition of pushouts of maps in $F(J)$.
Then $f$ is a weak equivalence.\n\\end{itemize}\nThen the above classes of maps determine a tractable semi-model structure on $\\mathcal{N}$ whose generating (trivial) cofibrations are given by $F(I)$ and $F(J)$.\n\\end{lem}\n\nThe transfer takes the model structure on the category $Mod\/T_A$ (Definition~\\ref{anchored}) and gives a semi-model structure on $L_\\infty\/A$ through an adjunction\n\\begin{center}\n\\begin{tikzcd}\nL_\\infty\/A\n\\arrow[bend left=35]{r}[name=F]{F}\n& Mod\/T_A\n\\arrow[bend left=35]{l}[name=LR]{LR}\n\\end{tikzcd}\n\\end{center}\nThe existence of such an adjunction is the content of Proposition~\\ref{freelr} below.\n\n\\begin{proof}[Proof of Theorem~\\ref{transfer}]\nThe constructions in \\cite{Nuiten} used to prove the unbounded case preserve the subcategory $dg\\text{-}A\\text{-}Mod^{\\geq0}$. Thus, the result follows in this case.\n\\end{proof}\n\n\n\\subsection{The model structure on $\\bm{Mod\/T_A}$}\\label{slice}\nThe objective of this subsection is to prove Proposition~\\ref{modinMODA}, where we give a model structure on the slice category $Mod\/T_A$, the category of anchored differential non-negatively graded modules over an algebra, which we describe as a slice category in Definition~\\ref{anchored}. We then state a general result, Theorem~\\ref{modelunder}. It enables us to obtain the model structure on $Mod\/T_A$ from the model structure on $dg\\text{-}A\\text{-}Mod^{\\geq0}$ given in Theorem~\\ref{modinMOD}.\n\n\nRecall the classical notion of slice category:\n\n\\begin{df} Let $\\mathcal{M}$ be a category and $Z$ an object of $\\mathcal{M}$. The {\\bf slice category} $\\mathcal{M}\/Z$ has for objects the arrows of $\\mathcal{M}$ of the form $X\\rightarrow Z$\nand for arrows commutative diagrams in $\\mathcal{M}$ of the form\n\\begin{center}\n\\begin{tikzpicture}[normal line\/.style={->},font=\\scriptsize]\n\\matrix (m) [matrix of math nodes, row sep=2em,\ncolumn sep=1.5em, text height=1.5ex, text depth=0.25ex]\n{ X& & Y \\\\\n & Z.
 & \\\\ };\n\n\\path[normal line]\n(m-1-1) edge (m-1-3)\nedge (m-2-2)\n(m-1-3) edge (m-2-2);\n\\end{tikzpicture}\n\\end{center}\n\\end{df}\n\nIn what follows, we denote by $T_A$ the $A$-module of derivations of $A$. We can look at $T_A$ as a graded $A$-module, and if $A$ is concentrated in degree 0, so is $T_A$.\n\\begin{df} \\label{anchored}\nWe denote by $\\bm{Mod\/T_A}$ the slice category $dg\\text{-}A\\text{-}Mod^{\\geq0}\/T_A$. We refer to such an object $\\rho\\colon M\\to T_A$ as an {\\bf anchored module}.\n\\end{df}\n\n\n\n\\begin{prop}\\label{modinMODA}\nThe category $Mod\/T_A$ has a cofibrantly generated semi-model structure.\n\\end{prop}\n\nIn Remark~\\ref{semifrommod} we highlight the fact that a model structure is a particular example of a semi-model structure. So, we actually exhibit a stronger, i.e. a {\\it cofibrantly generated} model structure on $Mod\/T_A$.\n\n\n\\begin{thm}\\cite[Theorem 7]{Hirsch}\n \\label{modelunder}\n Let $\\mathcal{M}$ be a cofibrantly generated model category (see\n \\cite[Definition~11.1.2]{HirschModel}) with generating cofibrations $I$ and\n generating trivial cofibrations $J$, and let $Z$ be an object of\n $\\mathcal{M}$.
If\n \\begin{enumerate}\n \\item $I_{Z}$ is the set of maps in $\\mathcal{M}\/{Z}$ of the form\n \\begin{center}\n \\begin{equation} \\label{morphabove}\n\\begin{tikzpicture}[normal line\/.style={->},font=\\scriptsize]\n\\matrix (m) [matrix of math nodes, row sep=2em,\ncolumn sep=1.5em, text height=1.5ex, text depth=0.25ex]\n{ X& & Y \\\\\n & Z & \\\\ };\n\n\\path[normal line]\n(m-1-1) edge (m-1-3)\nedge (m-2-2)\n(m-1-3) edge (m-2-2);\n\\end{tikzpicture}\n\\end{equation}\n\\end{center}\n in which the map $X \\to Y$ is an element of $I$ and\n \\item $J_{Z}$ is the set of maps in $\\mathcal{M}\/{Z}$ of the\n form \\eqref{morphabove} in which the map $X \\to Y$ is an element of\n $J$,\n \\end{enumerate}\n then the standard model category structure on $\\mathcal{M}\/{Z}$\n (in which a map \\eqref{morphabove} is a cofibration, fibration, or\n weak equivalence in $\\mathcal{M}\/{Z}$ if and only if the map $X\n \\to Y$ is, respectively, a cofibration, fibration, or weak\n equivalence in $\\mathcal{M}$) is cofibrantly generated, with generating\n cofibrations $I_{Z}$ and generating trivial cofibrations $J_{Z}$.\n\\end{thm}\n\nWe introduce the generating cofibrations and generating trivial cofibrations in $dg\\text{-}A\\text{-}Mod^{\\geq0}$:\n\nLet the ``disc'' $D_A(n)$ denote the chain complex $A[n]\\xrightarrow{Id_A} A[n-1]$, and the ``sphere'' $S_A(n)$ denote the chain complex $A[n]$ with trivial differential. Let $J$ be the set of morphisms of complexes of the form $0\\to D_A(n)$ and let $I$ be the set of inclusions $S_A(n-1)\\to D_A(n)$.\n\n\\begin{thm}\\cite[Theorem 2.3.11]{Hovey}\\label{modinMOD}\n$dg\\text{-}A\\text{-}Mod^{\\geq0}$ is a cofibrantly generated model category with $I$ as its generating set of cofibrations, $J$ as its generating set of trivial cofibrations, and quasi-isomorphisms as its weak equivalences.
The fibrations are the surjections.\n\\end{thm}\n\n\\begin{proof}[Proof of Proposition~\\ref{modinMODA}]\nWe apply Theorem~\\ref{modelunder} to the slice category $Mod\/T_A$ and the cofibrantly generated model structure on $dg\\text{-}A\\text{-}Mod^{\\geq0}$ given by Theorem~\\ref{modinMOD}.\n\\end{proof}\n\n\\subsection{The category $\\bm{L_\\infty\/A}$}\\label{catholie}\n\nIn this subsection we define the category of $L_\\infty$-algebroids associated to a graded algebra $A$. In broad terms, such an algebroid is given by an $L_\\infty$-algebra with a compatible $A$-module structure, in the same spirit as a Lie algebroid in \\cite{Kapranov}. In the following we consider non-negatively graded dg-modules.\n\n\\begin{df}\\label{holiea}\n\\begin{enumerate}\n\\item An {\\bf $\\bm{L_\\infty}$-algebra} is a module $L$ with an $L_\\infty$-structure \\cite[Definition 2]{Vitagliano2} given by degree $k-2$ anti-symmetric multibrackets\n$$\n[\\ ]_k\\colon \\Lambda^k (L)\\to L,\n$$\nwith\n$$\n\\sum_{i+j=k}(-1)^{ij}\\sum_{\\sigma\\in S_{i,j}}\\chi(\\sigma, v)\\left[[v_{\\sigma(1)},\\ldots,v_{\\sigma(i)}]_i,v_{\\sigma(i+1)},\\ldots,v_{\\sigma(i+j)}\\right]_{j+1}=0.\n$$\nHere, $\\chi(\\sigma, v)$ is the sign for which $v_1\\wedge\\cdots \\wedge v_k=\\chi(\\sigma, v)v_{\\sigma(1)}\\wedge\\cdots\\wedge v_{\\sigma(k)}$.\n\nWe remind the reader that $\\bigwedge^k L$ is the quotient of the graded vector space $\\bigotimes^k L$ under the equivalence relation linearly generated by\n$$\nv_1\\otimes\\cdots\\otimes v_k=-(-1)^{\\lvert v_i\\rvert\\lvert v_{i+1}\\rvert}v_1\\otimes\\cdots\\otimes v_{i+1}\\otimes v_i\\otimes\\cdots\\otimes v_k\n$$\nfor all $i$.
As well, $S_{i,j}$ is the group of $(i,j)$-unshuffles, which is to say, the set of permutations $\\sigma$ of $\\{1,\\ldots, k\\}$, with $k=i+j$, such that\n$$\n\\sigma(1)<\\cdots<\\sigma(i) \\text{ and }\\sigma(i+1)<\\cdots<\\sigma(k).\n$$\n\n\\item An $L_\\infty$-algebra $L$ with an $A$-module structure (an {\\bf $\\bm{A\\text{-}L_\\infty}$-algebra}) is an {\\bf $\\bm{L_\\infty}$-algebroid} if it has a compatible action on $A$. The action is specified by an $A$-module morphism (which we call the anchor)\n$$\n\\rho\\colon L\\to T_A\n$$\nof degree $0$. The compatibility conditions for the anchor and brackets are given by:\n\\begin{gather*}\n[v_0,av_1]_2=(-1)^{\\lvert a\\rvert\\lvert v_0\\rvert}a[v_0,v_1]_2+\\rho(v_0)(a)v_1\\\\\n\\left[v_0,\\ldots,av_{k}\\right]_{k+1}=(-1)^{\\lvert a\\rvert \\left(k+\\lvert v_0\\rvert+\\cdots+\\lvert v_{k-1}\\rvert\\right)}a\\left[v_0,\\ldots,v_{k}\\right]_{k+1}\n\\end{gather*}\nfor $a$ in $A$ and $v_i\\in L$.\n\nWe denote by $\\bm{L_\\infty\/A}$ the category of $L_\\infty$-algebroids with morphisms given by the $A$-module morphisms that preserve the multibrackets.\n\\end{enumerate}\n\\end{df}\n\n\\begin{rem}\\label{remarkLR}\nSince the higher brackets of $T_A$ all vanish, the data of an $L_\\infty$-algebroid on an $A$-module $L$ is equivalent to that of an $L_\\infty$-algebra structure on $L$ where the higher brackets are $A$-linear in the first coordinate, together with a strict morphism \\cite[Definition 5]{Vitagliano2} of $L_\\infty$-algebras to $T_A$.\n\nAn $L_\\infty$-algebroid is a version up to homotopy of the Lie-Rinehart algebra defined in \\cite[Section 1]{Kapranov}. The higher brackets and the jacobiators give the structure up to homotopy.\n\\end{rem}\n\n\n\\subsection{The free functor $\\bm{LR}$}\\label{freeLR}\nIn Proposition~\\ref{freelr} we describe a left adjoint to the forgetful functor\n$$\nF\\colon L_\\infty\/A\\to Mod\/T_A,\n$$\nwhich takes an $L_\\infty$-algebroid and gives back its underlying module.
The existence of this functor is implicit throughout \\cite[Section 3]{Nuiten}, but here we give an explicit construction.\n\nWe start by presenting the $L_\\infty$-words on a fixed $A$-module $M$ (not necessarily bounded; $A$ possibly differential graded): An {\\bf $\\bm{L_\\infty^A}$-word} of arity $k$ is recursively defined as an element of\n$$\nA\\otimes\\left(\\bigwedge^k M'\\right)[k-2],\n$$\nwhere $M'$ is an $A$-module of previously defined $L_\\infty^A$-words. We represent one such word by a symbol\n$$\na\\left[v_0,\\ldots,v_{k-1}\\right]_k,\n$$\nwhere $a$ is an element in $A$, while the $v_i$ are all previously constructed $L_\\infty^A$-words. The degree of $a\\left[v_0,\\ldots,v_{k-1}\\right]_k$ is defined to be\n$$\ndeg\\left(a\\left[v_0,\\ldots,v_{k-1}\\right]_k\\right)=\\sum_i deg(v_i)+ \\lvert a\\rvert +k-2.\n$$\n\nFor non-negatively graded $M$ and non-graded $A$, it is necessary to modify this definition. The recursive construction goes as follows: We start by setting all the elements of $\\lambda_0=M$ as $L_\\infty^A$-words.\n\nWe can then form the symbols that involve elements of $\\lambda_0$. The set of all such symbols forms an $A$-module which can be expressed as\n$$\n\\lambda'_1\\coloneqq \\bigoplus_{k\\geq 1} A\\otimes \\left(\\bigwedge\\nolimits^k \\lambda_0\\right)[k-2]\/K_0.\n$$\nHere we take $\\bigwedge^1 \\lambda_{0}[1]$ to be $\\lambda_{0}[1]$ and $K_0$ to be the $A$-submodule of negatively graded elements. Thus, the resulting module is non-negatively graded.
Including the elements of $\\lambda_0$ themselves we obtain\n$$\n\\lambda_1\\coloneqq \\lambda_0\\oplus \\lambda'_1.\n$$\n\n\nRecursively, once $\\lambda_n$, the $A$-module of all $L_\\infty^A$-words with at most $n$ brackets, has been defined, we define\n$$\n\\lambda'_{n+1}\\coloneqq \\bigoplus_{k\\geq 1} A\\otimes \\left(\\bigwedge\\nolimits^k \\lambda_{n}\\right)[k-2]\/K_n,\n$$\nwhere again, $K_n$ is the $A$-submodule of negatively graded elements, and then\n$$\n\\lambda_{n+1}\\coloneqq \\lambda_0\\oplus \\lambda'_{n+1}.\n$$\nIt is clear from the construction that $\\lambda_n\\subset \\lambda_{n+1}$ for all $n$.\n\n\\begin{df}\nGiven an $A$-module $M$, we define the {\\bf set of $\\bm{L_\\infty^A}$-words on $\\bm{M}$} as the colimit of $A$-modules\n$$\nL(M)=\\mathrm{colim}\\ \\lambda_{i}.\n$$\n\\end{df}\n\nGiven an element $v\\in \\lambda_n$, we say that $w\\in \\lambda_{n+1}$ is {\\bf directly over} $v$ if there are $v_1,\\ldots,v_{k-1}\\in \\lambda_n$ and $a$ in $A$ such that $w=[a v,v_1,\\ldots,v_{k-1}]_k$.
We say that {\\bf$\\bm{w}$ is over $\\bm{v}$} if there is a sequence $v=v_0,\\ldots,v_k=w$ such that $v_{i+1}$ is directly over $v_i$.\n\n\\begin{df}\nThe {\\bf free $\\bm{L_\\infty^A}$-algebra} associated to $M$ is the $A$-module given by\n$$\nL_\\infty^A(M)\\coloneqq L(M)\/\\sim,\n$$\nthe quotient of the set of $L_\\infty^A$-words on $M$ by the relation $\\sim$ generated by the $A$-submodule of all the elements over the jacobiators\n$$\n\\sum_{i+j=k}(-1)^{ij}\\sum_{\\sigma\\in S_{i,j}}\\chi(\\sigma, v)\\left[[v_{\\sigma(1)},\\ldots,v_{\\sigma(i)}]_i,v_{\\sigma(i+1)},\\ldots,v_{\\sigma(i+j)}\\right]_{j+1},\n$$\nwith $\\chi(\\sigma,v)$ the sign for which $v_1\\wedge\\cdots \\wedge v_k=\\chi(\\sigma, v)v_{\\sigma(1)}\\wedge\\cdots\\wedge v_{\\sigma(k)}$.\n\nThe $L_\\infty$-brackets on $L_\\infty^A(M)$ are defined in the obvious way: given $k+1$ elements of $L_\\infty^A(M)$ represented by $L_\\infty^A$-words $v_0,\\ldots,v_k$, their bracket is given by the class represented by $[v_0,\\ldots, v_k]_{k+1}$.\n\nThese brackets are well defined on $L_\\infty^A(M)$ and their jacobiators vanish by construction. \n\\end{df}\n\nIn the construction of $L_\\infty^A(M)$ we made use neither of the internal differential structure nor of the anchor of the module $M$.
These are considered below in the construction of the free $L_\\infty$-algebroid associated to $M$.\n\n\\begin{prop}\\label{free linf1}\nGiven a graded $A$-module $M$, the construction $L_\\infty^A(M)$ is functorial and is a left adjoint to the forgetful functor from the category of $A$-modules with an $L_\\infty$-algebra structure and strict morphisms of $A$-modules, to the category of $A$-modules.\n\\end{prop}\n\n\\begin{proof}\nGiven a morphism of $A$-modules $f\\colon M\\to N$, we give a morphism of $A$-modules $L_\\infty^A(f)\\colon L_\\infty^A(M)\\to L_\\infty^A(N)$ by defining it on the level of $L_\\infty^A$-words: we start by setting $L(f)m\\coloneqq f(m)$ for $m$ in $M$, and recursively define\n\\begin{equation}\\label{equation}\nL(f)a\\left[v_0,\\ldots, v_{k-1}\\right]_k\\coloneqq a\\left[L(f)v_0,\\ldots, L(f)v_{k-1}\\right]_k.\n\\end{equation}\nThis is clearly a morphism of $A$-modules that preserves jacobiators. Therefore, it descends to the required morphism $L_\\infty^A(f)\\colon L_\\infty^A(M)\\to L_\\infty^A(N)$. Since it preserves the $L_\\infty$-brackets by construction, it is a strict $L_\\infty$-morphism.\n\nThe adjunction property can be verified using that $M$ is an $A$-submodule of $L_\\infty^A(M)$. Therefore, given any $A$-module, bracket-preserving morphism ({\\bf$\\bm{A\\text{-}L_\\infty}$-morphism}) $F\\colon L_\\infty^A(M)\\to L$, the restriction to $M$ is an $A$-module morphism.\n\nAs well, given an $A\\text{-}L_\\infty$-algebra $L$, Equation~\\ref{equation} above defines the unique strict $L_\\infty$-extension to $L_\\infty^A(M)$ of a given $A$-module morphism $f\\colon M\\to L$.
The verification that these two assignments are mutually inverse follows from the construction of the $L_\\infty$-extensions.\n\\end{proof}\n\n\\begin{rem}\nProposition~\\ref{free linf1} shows that $L_\\infty^A(M)$ is a free object associated to $M$ in the category of $A\\text{-}L_\\infty$-algebras.\n\\end{rem}\n\n\\begin{rem}\\label{anchors}\nIf $\\alpha\\colon M\\to T_A$ is an anchored module, so is $L_\\infty^A(M)$. In fact, we obtain an $A\\text{-}L_\\infty$-morphism $\\hat{\\alpha}\\colon L_\\infty^A(M)\\to T_A$ from $\\alpha$ through the described adjunction\n\\begin{center}\n\\begin{tikzcd}\nA\\text{-}L_\\infty \\arrow[bend left=35]{r}[name=F]{F} & Mod \\arrow[bend left=35]{l}[name=L]{L}\n\\end{tikzcd}\n\\end{center}\nHere we consider the dg-Lie algebra $T_A$ as an $A$-module with the $L_\\infty$-structure coming from the Lie bracket.\n\\end{rem}\n\n\\begin{prop}\\label{freelr}\nLet $\\alpha\\colon M\\to T_A$ be in $Mod\/T_A$ with differential $d$. The $A$-module over $T_A$ given by the quotient\n$$\nLR(M)\\coloneqq L_\\infty^A(M)\/\\sim\n$$\nis an $L_\\infty$-algebroid.
The anchor $\\rho\\colon LR(M)\\to T_A$ is the map induced on the quotient by $\\hat{\\alpha}\\colon L_\\infty^A(M)\\to T_A$ described in Remark~\\ref{anchors}.\n\nHere, $\\sim$ is the $A\\text{-}L_\\infty$-ideal generated by the relations\n\\begin{gather*}\n[v]_1=d(v)\\\\\n[v_0,av_1]_2=\na[v_0,v_1]_2+\\hat{\\alpha}(v_0)(a)v_1\\\\\n\\left[v_0,\\ldots,av_{k}\\right]_{k+1}=\na\\left[v_0,\\ldots,v_{k}\\right]_{k+1}\n\\end{gather*}\nfor $v$ in $M$, the $v_i$ in $L_\\infty^A(M)$, and $a$ in $A$.\n\nFinally, this construction is functorial and gives a left adjoint to the forgetful functor $F\\colon L_\\infty\/A\\to Mod\/T_A$.\n\\end{prop}\n\n\\begin{rem}\nAn {\\bf$\\bm{A\\text{-}L_\\infty}$-ideal} in an $A\\text{-}L_\\infty$-algebra $L$ generated by a set $S$ is the smallest $A$-submodule of $L$ containing $S$ that is an ideal for the $L_\\infty$-brackets.\n\\end{rem}\n\n\\begin{proof}\nFrom the definition of the quotient, the $L_\\infty$-brackets of $L_\\infty^A(M)$ descend to $L_\\infty$-brackets on $LR(M)$ and, since $T_A$ is already an $L_\\infty$-algebroid, so does the anchor.\n\nThe compatibility conditions for the brackets and anchor from Definition~\\ref{holiea} are exactly the quotient relations, and so $LR(M)$ is itself an $L_\\infty$-algebroid.\n\nGiven $\\alpha_M\\colon M\\to T_A$, $\\alpha_N\\colon N\\to T_A$ and $f\\colon M\\to N$ in $Mod\/T_A$, from Proposition~\\ref{free linf1} we obtain a commutative diagram of $A\\text{-}L_\\infty$-algebras\n\\begin{center}\n \\begin{tikzcd}[column sep=small]\n L_\\infty^A(M)\n \\arrow[rr, \"L(f)\"]\n \\arrow[rd, swap, \"\\hat{\\alpha}_{M}\"]\n && L_\\infty^A(N)\n \\arrow[ld, \"\\hat{\\alpha}_{N}\"]\n \\\\\n & T_A\n \\end{tikzcd}\n\\end{center}\nThis induces a morphism $LR(f)\\colon LR(M)\\to LR(N)$ in $L_\\infty\/A$.\n\nThe proof that the functors $F\\colon L_\\infty\/A\\leftrightarrow Mod\/T_A\\cocolon LR$ are adjoint to each other is similar to that in the proof of Proposition~\\ref{free linf1}.\n\\end{proof}\n\n\\begin{df}\\label{freehlr}\nWe define the {\\bf free $\\bm{L_\\infty}$-algebroid} associated to $\\alpha\\colon M\\to T_A$ in $Mod\/T_A$ to be the $L_\\infty$-algebroid $LR(M)$.\n\\end{df}\n\n\n\\section{Applications to singular foliations}\\label{app}\nIn this section we propose results parallel to those of \\cite[Section 1]{Lavau} for Lie $\\infty$-algebroids in terms of the (semi-)model category structure on $L_\\infty\/A$ given by Theorem~\\ref{transfer}. We first show how singular foliations and Lie $\\infty$-algebroids \\cite{Lavau} are examples of $L_\\infty$-algebroids. With this connection in mind, we then compare each statement of \\cite{Lavau} with its semi-model category analogue, which we introduce and prove as corollaries of our main technical result, Proposition~\\ref{machinerysemimodel}. We conclude with open questions raised by the discrepancies between the two sets of results.\n\n\\subsection{Universal Lie $\\bm{\\infty}$-algebroids}\nIn this subsection, in order to make the connection with the semi-model structure on $L_\\infty\/\\mathcal{O}_M$, we show that both Lie $\\infty$-algebroids and singular foliations can be seen as objects in $L_\\infty\/\\mathcal{O}_M$. We fix a smooth manifold $M$ and its ring of smooth functions $\\mathcal{O}_M$. Recall\n\n\\begin{df}\\cite[Definition 1.13]{Lavau}\nA {\\bf Lie $\\infty$-algebroid} is a non-positively graded differential vector bundle over a manifold $M$ whose space of sections has a strong homotopy Lie-Rinehart algebra structure. This is to say, an $LR_\\infty[1]$-algebra in the sense of \\cite[Definition 7]{Vitagliano2} over the algebra $\\mathcal{O}_M$ of smooth functions on $M$.\n\\end{df}\n\n\\begin{rem}\nIn \\cite{Lavau}, the definitions are given in terms of non-positive cohomological chain complexes.
We can directly transport our results to this setting by using the isomorphism of this category to that of non-negative homological chain complexes.\n\\end{rem}\n\nThen one has \n\n\n\\begin{lem}\\label{inclusion}\nThe de-suspension of the space of sections of a Lie $\\infty$-algebroid forms an $L_\\infty$-algebroid.\n\\end{lem}\n\n\\begin{proof}\n The compatibility conditions of the anchor and brackets in \\cite[Definition 1.13]{Lavau} are the higher Jacobi equations for the $LR_\\infty[1]$-algebra. In this case, since both $\\mathcal{O}_M$ and $T_{\\mathcal{O}_M}$ are concentrated in degree 0, the action of the algebroid on $\\mathcal{O}_M$ has no higher terms. Therefore, taking the $L_\\infty$-algebroid associated to this $LR_\\infty[1]$-algebra we obtain (after desuspension) an object in $L_\\infty\/\\mathcal{O}_M$.\n \\end{proof}\n\nWe can also express the notion of singular foliation in terms of $L_\\infty$-algebroids. First recall \n\n\\begin{df}\nA {\\bf singular foliation} over $M$ is a locally finitely generated $\\mathcal{O}_M$-submodule of $\\mathfrak{X}(M)$ that is closed under the Lie bracket.\n\\end{df}\n\n Since $T_{\\mathcal{O}_M}$ is just $\\mathfrak{X}(M)$ (with the $L_\\infty\/{\\mathcal{O}_M}$-structure given by the commutator of derivations and the identity as anchor), one can restate the definition in terms of the $L_\\infty$-structure.\n\n\\begin{lem}\nA singular foliation $\\mathcal{F}$ over $M$ is a locally finitely generated $\\mathcal{O}_M$-module $\\mathcal{F}\\subset T_{\\mathcal{O}_M}$ that is closed under the $L_\\infty\/{\\mathcal{O}_M}$-structure of $T_{\\mathcal{O}_M}$.\n\\end{lem}\n\n\\subsection{Cofibrant replacements in $\\bm{L_\\infty\/{\\mathcal{O}_M}}$}\\label{cofibrantreplacement}\n\nWe can now show that results similar to those of \\cite{Lavau} can be understood in terms of the model structure on $Mod\/T_{\\mathcal{O}_M}$ (see Proposition~\\ref{modinMODA}) and the semi-model structure on $L_\\infty\/\\mathcal{O}_M$
(see Theorem~\\ref{transfer}).\n\nWe first recall the definition of a resolution of a singular foliation \\cite[Definition 1.1]{Lavau}:\n\n\\begin{df}\nA {\\bf resolution of a singular foliation $\\mathcal{F}$} is a complex of vector bundles for which the cohomology of the associated complex of sections gives the $\\mathcal{O}_M$-module $\\mathcal{F}$.\n\\end{df}\n\nThe model structure on $Mod\/T_{\\mathcal{O}_M}$ given by Proposition~\\ref{modinMODA} is such that:\n\n\\begin{lem}\nA resolution of a singular foliation $\\mathcal{F}$ is a cofibrant replacement in $Mod\/T_{\\mathcal{O}_M}$.\n\\end{lem}\n\nWe remind the reader of the definition of cofibrant replacements in a (semi-)model structure.\n\n\\begin{df}\\label{cofibrantrep}\nA cofibrant replacement of an object $X$ in a (semi-)model category $\\mathcal{C}$ with initial object $0$ is an object $QX$ such that there is a factorization of the initial arrow $0\\to X$ as $0\\xrightarrow{i}QX\\xrightarrow{p} X$, with $i$ a cofibration and $p$ a trivial fibration.\n\\end{df}\n\nThe following remark is the main reason for us to work with $L_\\infty$-algebroids instead of Lie-$\\infty$ algebroids:\n\\begin{rem}\nResolutions of singular foliations do not always exist (see Section 1.1 of \\cite{Lavau}), while the existence of a cofibrant replacement in $Mod\/\\mathcal{O}_M$ of a singular foliation is guaranteed by the axiom $SM3$ of a semi-model category. 
\n\\end{rem}\n\nIf a resolution of $\\mathcal{F}$ exists, one can consider the question of equipping it with a Lie-$\\infty$ algebroid structure.\n\n\n\\begin{df}\\cite[Definition 1.5]{Lavau}\\label{universal}\nA resolution of $\\mathcal{F}$ is called a {\\bf universal Lie-$\\infty$ algebroid over $\\mathcal{F}$} if the differential on the sections can be extended to a homotopy Lie-Rinehart structure.\n\\end{df}\n\nOne of the main results of \\cite{Lavau} (Theorem 1.6) reads: \n\n\\begin{thm}\nIf $\\mathcal{F}$ admits a resolution, then this resolution can be upgraded to a universal Lie-$\\infty$ algebroid over $\\mathcal{F}$.\n\\end{thm}\n\n The analogous notion in terms of the semi-model category on $L_\\infty\/\\mathcal{O}_M$ is given by a cofibrant replacement. The existence of such a cofibrant replacement is then a direct consequence of the axioms:\n\n\\begin{thm}\\label{replacement}\nFor any singular foliation $\\mathcal{F}$ there exists a replacement by a dg-$\\mathcal{O}_M$-module endowed with a structure of $L_\\infty$-algebroid over $\\mathcal{F}$.\n\\end{thm}\n\\begin{proof}\nIt suffices to apply Proposition~\\ref{machinerysemimodel}.\\ref{1} to the semi-model structure on $L_\\infty\/\\mathcal{O}_M$ given by Theorem~\\ref{transfer}.\n\\end{proof}\n\nThe terminology ``universal'' in Definition~\\ref{universal} is due to the following result (\\cite[Theorem 1.8]{Lavau}):\n\n\\begin{thm}\\label{liftinglavau}\nTwo universal Lie $\\infty$-algebroids over $\\mathcal{F}$ are quasi-isomorphic in the category of Lie $\\infty$-algebroids, and such a quasi-isomorphism is essentially unique (up to homotopy).\n\\end{thm}\n\nThis result has a direct analog in terms of the semi-model structure on $L_\\infty\/\\mathcal{O}_M$.\n\n\\begin{thm}\\label{lifting}\nLet $\\mathcal{F}$ be a singular foliation. 
Then, any two cofibrant replacements of $\\mathcal{F}$ in $L_\\infty\/T_{\\mathcal{O}_M}$ are quasi-isomorphic (with strict morphisms of $L_\\infty$-algebroids) and any two such quasi-isomorphisms are (left-)homotopic.\n\\end{thm}\n\n\\begin{proof}\nThis is Proposition~\\ref{machinerysemimodel}.\\ref{5}.\n\\end{proof}\n\nEven though Lemma~\\ref{inclusion} states that any universal Lie $\\infty$-algebroid over $\\mathcal{F}$ is a replacement of $\\mathcal{F}$ in $L_\\infty\/\\mathcal{O}_M$ (i.e. an element of $L_\\infty\/\\mathcal{O}_M$ weakly equivalent to $\\mathcal{F}$), it is a priori \\underline{not} a cofibrant replacement. In particular, we cannot a priori deduce Theorem~\\ref{liftinglavau} from Theorem~\\ref{lifting}. Moreover, Theorems~\\ref{liftinglavau} and \\ref{lifting} differ in two further respects: \n\n\\begin{rem}\\label{homodif}\n\\begin{itemize}\n \\item The morphisms in Theorem~\\ref{liftinglavau} are $L_\\infty$-morphisms, while the morphisms in Theorem~\\ref{lifting} are strict morphisms of $L_\\infty$-algebras.\n\\item The notion of homotopy equivalence used in Theorem~\\ref{liftinglavau} is very different from the one we are using, which is based on cylinder objects (see Definition~\\ref{lrhomotopy}.1).\n\\end{itemize}\n\\end{rem}\n\nThe semi-model category statement analogous to \\cite[Proposition 1.22]{Lavau} is the following theorem, which is a corollary of Proposition~\\ref{machinerysemimodel}.\\ref{5}.\n\n\\begin{thm}\nLeft homotopy of Lie $\\infty$-algebroid morphisms is an equivalence relation denoted by $\\overset{l}{\\sim}$, which is compatible with composition.\n\\end{thm}\n\nOnce again, the notions of homotopy equivalence used in the two statements do not coincide. 
The reason is that when one works in the realm of model categories, there are standard definitions of left and right homotopies between morphisms, namely Definition~\\ref{lrhomotopy}.\\ref{leftHomotopy} and \\ref{lrhomotopy}.\\ref{rightHomotopy} below.\n\n\\begin{df}\\label{lrhomotopy}\nLet $A$ and $X$ be objects in a semi-model category $\\mathcal{C}$.\n\\begin{enumerate}\n \\item A {\\bf cylinder object} for $A$ is an object $Cyl(A)$ together with a factorization\n \\begin{center}\n \\begin{tikzcd}\n A\\amalg A\n \\arrow[rr, bend right, \"id_{A}+ id_{A}\"']\n \\arrow[r, \"i_0+i_1\"]\n &Cyl(A)\n \\arrow[r, \"\\sim\"]\n &A\n \\end{tikzcd}\n \\end{center}\n where the map $Cyl(A)\\to A$ is a weak equivalence.\n \\item Similarly, a {\\bf path object} for $X$ is an object $Path(X)$ together with a factorization\n \\begin{center}\n \\begin{tikzcd}\n X\n \\arrow[rr, bend right, \"id_{X}\\times id_{X}\"']\n \\arrow[r, \"\\sim\"]\n &Path(X)\n \\arrow[r, \"p_0\\times p_1\"]\n &X\\times X\n \\end{tikzcd}\n \\end{center}\n where the morphism $X\\to Path(X)$ is a weak equivalence.\n \\item\\label{leftHomotopy}Two morphisms $f,g\\colon A\\to X$ are {\\bf left homotopic}, and we write {\\bf $f\\overset{l}{\\sim}g$}, if there is a cylinder object of $A$ and a map $H\\colon Cyl(A)\\to X$ such that $f+ g\\colon A\\amalg A\\to X$ factorizes as\n $$\n A\\amalg A\\to Cyl(A)\\xrightarrow{H}X\n $$\n \\item\\label{rightHomotopy} Analogously, $f,g\\colon A\\to X$ are {\\bf right homotopic}, and we write {\\bf $f\\overset{r}{\\sim}g$}, if there is a path object of $X$ and a map $H\\colon A\\to Path(X)$ such that $f\\times g\\colon A\\to X\\times X$ factorizes as\n $$\n A\\xrightarrow{H} Path(X)\\to X\\times X\n $$\n\\end{enumerate}\n\\end{df}\n\n\\begin{rem}\\label{remho}\n Because of the restrictions imposed by Axiom~\\emph{SM3}, it is natural to use cylinder objects for the notion of homotopy in $L_\\infty\/\\mathcal{O}_M$, i.e. to use left-homotopies. 
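For orientation, the classical models of these two constructions in topological spaces (not the semi-model structures considered in this paper) are the following; this is a standard illustration only:

```latex
% Classical cylinder and path objects in topological spaces:
\[
  Cyl(A) \;=\; A \times [0,1],
  \qquad
  Path(X) \;=\; X^{[0,1]},
\]
% with i_0, i_1 : A -> A x [0,1] the endpoint inclusions
% a |-> (a,0), (a,1), the projection A x [0,1] -> A a homotopy
% equivalence, and p_0, p_1 : X^{[0,1]} -> X evaluation at the
% endpoints.  A left homotopy H : Cyl(A) -> X between f and g is
% then exactly a homotopy f ~ g in the usual topological sense.
```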
However, it is still possible to define right homotopies by a construction of path objects akin to the one found in \\cite[Section 2]{Vezzosi}, which turns out to give a notion similar to the one used by \\cite{Lavau} (see \\cite[Remark 6]{Lavau}). But then, it is not clear to us how to prove a statement similar to Proposition~\\ref{machinerysemimodel}.4 for right homotopies. \n\\end{rem}\n\n\\subsection{Open questions}\\label{open}\n\nThe original goal of this paper was to obtain the results of \\cite{Lavau} as genuine properties of a (semi-)model category on $L_\\infty\/T_{\\mathcal{O}_M}$. However, Remark~\\ref{homodif} implies that one needs to change the (semi-)model structure we consider if one wants to reach this goal. More precisely, this work triggers the following questions.\n\n\\begin{que}\\label{q1}\nDoes there exist a variant of a notion of model category on $L_\\infty\/T_{\\mathcal{O}_M}$ for which the morphisms are no longer necessarily strict?\n\\end{que}\n\nIf such a structure exists we can consider the following questions:\n\n\\begin{que}\\label{q2}\nIs any universal Lie $\\infty$-algebroid cofibrant? In other words, is the first part of Theorem~\\ref{liftinglavau} a corollary of Theorem~\\ref{lifting} (with not necessarily strict morphisms of $L_\\infty$-algebroids)?\n\\end{que}\n\n\\begin{que}\\label{q3}(Inspired by Remark~\\ref{remho})\nDoes the notion of left homotopy coincide with the notion of homotopy considered in \\cite{Lavau}? In other words, is the second part of Theorem~\\ref{liftinglavau} a corollary of Theorem~\\ref{lifting}?\n\\end{que}\n\nQuestion \\ref{q3} can be divided into two parts.\n\n\\begin{que}\nDoes the equivalence relation given by left homotopy coincide with the one given by right homotopy? \n\\end{que}\n\n\\begin{que}\\label{q4}\nDoes the notion of right homotopy coincide with the definition of homotopy used in \\cite{Lavau}? 
\n\\end{que}\n\nAnother possible generalisation of an $L_\\infty$-algebroid is given by allowing the existence of higher anchors which form a homotopy morphism to $T_A$.\n\n\\begin{que}\\label{q5}\nIs there a (semi-)model structure on the category of $L_\\infty$-algebroids with homotopy anchors? How does it relate to the other model structures above?\n\\end{que}\n\nWe have already noted (Remark~\\ref{remarkLR}) that $L_\\infty$-algebroids are particular examples of Lie-Rinehart algebras up to homotopy. On the other hand, in \\cite[Section 5]{Kji}, Kjeseth studies the notion of a resolution of a Lie-Rinehart pair. \n \n \n \\begin{que}\n Does there exist a variant of a notion of model category on the category of homotopy Lie-Rinehart pairs for which the resolutions considered in \\cite[Definition 5.1]{Kji} are cofibrant replacements?\n \\end{que}\n\n\\begin{que}\nDoes it induce, when restricted to a fixed associative algebra $A$, the structure aimed at in Question~\\ref{q1}?\n\\end{que}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzpapq b/data_all_eng_slimpj/shuffled/split2/finalzzpapq new file mode 100644 index 0000000000000000000000000000000000000000..5e3e9411bc358f53327dd4e0824db07a6dddd82e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzpapq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nThe variability exhibited by active galactic nuclei (AGN) has been studied extensively but the physical mechanisms responsible for these variations remain difficult to pin down. On timescales greater than a few days, variations are typically on the order of 0.2 mag and the variability amplitude tends to increase with frequency. 
While the exact cause of the intrinsic variations remains unknown, the variability allows a probe of the inner structure of AGN, namely the radial profile of both the broad line region (BLR) and accretion disc (e.g.: \\citet{Peterson2004,MacLeod2012,Homayouni2019} and review by \\citet{Lawrence2016b}). Variability of an extrinsic nature, such as the ongoing microlensing `flickering' seen in multiply-imaged AGN, also allows us to probe the structure of the inner regions \\citep{Morgan2010, Motta2017}.\n\nRarely, AGN are seen to undergo longer-lived outbursts which can rise an order of magnitude above quiescence. The AGN known as `Sharov21' is one such extreme example. First discovered by \\citet{Nedialkov1996} and reported in \\citet{Sharov1998}, this object was interpreted as a nova residing in M31. A later spectroscopic confirmation and detailed analysis from \\citet{Meusinger2010} showed that Sharov21 was in fact an AGN at the much greater redshift of $z=2.109$. The outburst seen in this object was on the order of one year in duration and rose to more than three magnitudes above the background, with no other outbursts evident in the decades-long lightcurve. \\citet{Meusinger2010} explore two explanations for this outburst: 1) that it was caused by the tidal disruption of a $\\sim$10\\,M$_\\odot$ star in close proximity to an extant accretion disc and 2) that it was caused by a microlensing event due to a stellar-mass lens in M31. In their work the former of these two scenarios is favoured, in part due to the low probability of a microlensing event occurring.\n\nIn this paper, we wish to re-examine the microlensing hypothesis. Recent observations have suggested that a number of long-lived AGN transients can be explained as the result of isolated microlensing events \\citep{Lawrence2016a,Bruce2017}. 
Not only does this remain a viable explanation for the Sharov21 event, it may also be the tip of the proverbial iceberg for the archival\/future detection of microlensed background AGN in the vicinity of M31 or other nearby galaxies.\n\nOur main aim is to critically re-evaluate the light curve data for Sharov21 and then to carry out a microlensing model fit. We also examine whether or not Sharov21 appears to be a normal AGN. We then go on to discuss other possible microlensing events seen through M31. In Section \\ref{sec:Data} we describe the data sets used in our analysis. Section \\ref{sec:Methods} details the microlensing model and MCMC model-fitting procedures. In Section \\ref{sec:Results} we present our results and in Section \\ref{sec:Discussion} we assess event probabilities and implications for the future.\n\n\n\n\\section{Data}\n\\label{sec:Data}\n\nThis section provides an overview of the suite of data sets we have employed. We will describe 1) the time series data for the Sharov21 event, 2) imaging data used to construct an SED for Sharov21, including HST, Spitzer and XMM-Newton observations, and 3) the \\citet{Vilardell2006} study that we use to search for more examples of Sharov21-like events. For reference, Figure \\ref{fig:M31} shows a wide angle view of M31 and the objects of particular interest to this work. The three most promising candidates for microlensing events, including Sharov21, are highlighted here along with the results of our search for additional candidate microlensing events.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.9]{figures\/M31fig.png}\n \\caption{Wide angle view of M31. The image is a V band UK Schmidt plate (via DSS) and the major and minor axes are drawn assuming a position angle for the M31 disc of $37^\\circ$ with indicative lengths of $1^\\circ$ and $30\\arcmin$ for the major and minor axes respectively. 
The figure shows the location of Sharov21 (circle labelled `1') and other objects from the \\citet{Vilardell2006} survey, discussed in Section \\ref{sec:Villardell}. The ellipses are isodensity contours with semi-major axes of $1300\\arcsec$ and $3100\\arcsec$ respectively and are discussed in Section \\ref{sec:Rates}.}\n \\label{fig:M31}\n\\end{figure}\n\n\n\\subsection{Sharov21 time series data}\n\\label{sec:S21timeseries}\n\nThe archival time series data for Sharov21 used in this paper are almost identical to those used in \\citet{Meusinger2010}. As noted in that work, approximately 80\\% of the data has been taken in the B band whilst the remaining data has been corrected to $B$. The resulting B-band light curve spans several decades and the numerous sources of data, including archival plates, are detailed therein. The sources of these data and the number of epochs are listed in Table \\ref{tab:S21timeseries}. The primary data set is that from \\citet{Sharov1998}, which comprises data from four telescopes. We note that no formal errors are reported in that work. \\citet{Meusinger2010} adopt a 0.1 mag error for these data, which we carry forward, though this assumption should be treated with caution. We have also opted to exclude 10 epochs flagged as uncertain and\/or of low quality. With the addition of one further epoch from SuperCOSMOS \\citep{Hambly2001}, which agrees well with the quiescent value, this gives us a total of 147 epochs. Table \\ref{tab:S21timeseries} also notes the number of epochs available within 6 months of the peak of the light curve on approximately MJD 48916. It is these data that will prove to be the most important in testing the microlensing hypothesis.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{l|l|l|l}\n Source & $\\rm{N_{epochs}}$ & $\\rm{N_{peak}}$ & Plates? 
\\\\\n \\hline\n Sharov et al (1998) Table 1 & 58 & 50 & y \\\\\n Sharov et al (1998) 2-m Roshen & 1 & & y \\\\\n Tautenburg Schmidt photografisch & 29 & 6 & y \\\\\n Tautenburg Schmidt CCD & 14 & & \\\\\n ING Vilardell et al. & 6 & & y \\\\\n NOAO LGS & 2 & & \\\\\n CA 2.2-m CAFOS & 2 & & \\\\\n CA Schmidt & 14 & 4 & y \\\\\n CA 1.2-m & 5 & & y \\\\\n Palomar Schmidt & 4 & & y \\\\\n Asiago Schmidt & 7 & 6 & y \\\\\n INT (WFS) & 1 & & \\\\\n 3.6-m CFHT & 1 & & \\\\\n 40-cm Astrograph Sonneberg & 1 & 1 & y \\\\\n 60-cm Ganymede Skinakas & 1 & & \\\\\n SuperCOSMOS & 1 & & y \\\\\n \\hline\n Total & 147 & & \\\\\n \\end{tabular}\n \\caption{Data sources for the Sharov21 time series. With the exception of the SuperCOSMOS data, the source IDs are noted as per the data supplied to us and used in \\citet{Meusinger2010}.}\n \\label{tab:S21timeseries}\n\\end{table}\n\nThis impressive decades-long lightcurve, shown in Figure \\ref{fig:S21_LC}, allows us to confidently state that Sharov21 shows no sign of any notable outburst with the exception of the main event in 1992. As reported in \\citet{Meusinger2010}, there are two other things to note with regard to the light curve data. The first is that there is tentative evidence for a more rapid rise to the peak, as evidenced by a shoulder in the data, which can also be seen in our zoomed view of the light curve (Figure \\ref{fig:S21_LCzoom}). The second is that there are reported colour changes \\citep[see][Fig. 3]{Meusinger2010}. Both of these observations, if accurate, call into question the microlensing hypothesis. This is because, particularly in the case of a point-source, microlensing events are expected to be achromatic in nature and display a symmetric light curve.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/S21LC}\n \\caption{Sharov21 full light curve including our microlensing model (discussed in Section \\ref{sec:S21microResults}). Residuals plotted below. 
Error bars reflect the original errors on the photometry.}\n \\label{fig:S21_LC}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/S21LCzoom}\n \\caption{Sharov21 zoomed light curve including our microlensing model (discussed in Section \\ref{sec:S21microResults}). Residuals plotted below. Error bars reflect the original errors on the photometry.}\n \\label{fig:S21_LCzoom}\n\\end{figure}\n\nThe data showing evidence for the more rapid rise to peak come from the Sharov plates, primarily those taken at their Crimean station. Given our incomplete understanding of the systematics inherent in this photographic plate data, the assumed 0.1 mag errors may be overly optimistic. The third-party, non-Sharov observational data about the peak (data points in Figure \\ref{fig:S21_LCzoom} with larger errors) appear to be in good agreement with the model shown, and this reinforces the notion that the shoulder may simply be an artefact in the data rather than a real change. With reference to the previously reported colour indices, it is only the $B-R$ data that show colour information for epochs with $B < 20$. Note also that, though there is an indication of a trend, the $B-R$ data are still consistent with being flat, i.e.\\ they are not sufficient to rule out the achromatic case.\n\nWith these caveats in mind, we retain the assumed 0.1 mag errors on the Sharov data for our microlensing analysis. We cannot rule out the presence of the shoulder or colour changes but neither do we believe that the data quality is sufficient to rule out the null hypothesis. 
We continue with the assumption that the simple point-lens, point-source microlensing model remains valid in the case of Sharov21.\n\n\n\\subsection{Optical imaging data}\nThe Panchromatic Hubble Andromeda Treasury (PHAT) survey \\citep{Dalcanton2012} with the Hubble Space Telescope (HST) provides high-resolution imaging in the UV (WFC3\/UVIS F275W, F336W), optical (ACS\/WFC F435W, F814W) and near-infrared (WFC3\/IR F110W, F160W). We construct a catalogue of all objects in the PHAT brick 8, field 14, imaging using the software {\\tt SExtractor} \\citep{Bertin1996} in dual-image mode with the F814W imaging serving as the detection image.\n\nAll of the photometry for the HST imaging was measured with $0.4^{\\prime\\prime}$-diameter apertures, and corrected to total assuming a point-source correction, as estimated from the flux versus radius curve-of-growth. Although the aperture correction to total is more sizeable for the F160W filter -- a factor $\\simeq 1.7$ compared to the more modest factor $\\simeq 1.2$ for the UV and optical filters -- this is necessary given the severe source crowding within the field. Moreover, this particular object can also be reasonably approximated to be a point-source.\n\nPhotometric errors for each HST filter were calculated using a local depth analysis \\citep[e.g.][]{McLeod2016}. A grid of non-overlapping $0.4^{\\prime\\prime}$ apertures was placed spanning the entire field of view and, using {\\sc SExtractor}, a segmentation map was generated in order to mask out any apertures containing significant flux from sources. 
Statistical analysis was performed on a smaller grid of $\\simeq 150$ ``blank-sky'' apertures local to our target object, and the Median Absolute Deviation (MAD) estimator was used in order to derive the local $\\sigma$ photometric error.\n\nTo supplement our HST photometry, we also included the photometry in {\\it g, r, i, z} and {\\it y} from the DR1 release of the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) \\citep{Chambers2016,Magnier2016a,Magnier2016b,Magnier2016c,Waters2016,Flewelling2016}. Here we use the PSF magnitudes.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/RGBcutout_F475W_F814_F110W}\n \\caption{Colour composite of a $10^{\\prime\\prime}$x$10^{\\prime\\prime}$ region centred on Sharov21 using our stacks of the PHAT survey data. Blue channel: F475W filter; green: F814W; red: F110W.}\n \\label{fig:S21_cutout}\n\\end{figure}\n\nFigure \\ref{fig:S21_cutout} shows a composite cutout of a $10^{\\prime\\prime}$x$10^{\\prime\\prime}$ region surrounding Sharov21, approximately $26^\\prime$ away from the centre of M31. This false colour image makes use of our PHAT stacked data in three filters (F475W, F814W and F110W) and provides an excellent combination of resolution and sensitivity. Sharov21 is well described by a point source in these images. At longer wavelengths blending becomes a serious concern, which makes an accurate background subtraction difficult, as will be described in Section \\ref{sec:SED}.\n\n\\subsection{Mid-infrared data}\nSharov21 has also been imaged in several Spitzer programs\\footnote{The program IDs are 3400, 3126, 61001} and we include mid-infrared photometry in our analysis. Imaging in the mid-infrared is from the Spitzer Infrared Array Camera (IRAC), downloaded from the IRSA Spitzer Heritage Archive. This includes imaging in channel 1 ($3.6\\,{\\rm \\mu m}$), channel 2 ($4.5\\,{\\rm \\mu m}$), channel 3 ($5.8\\,{\\rm \\mu m}$) and channel 4 ($8.0\\,{\\rm \\mu m}$). 
One drawback of Spitzer imaging is the blending of sources due to the broad PSF FWHM, and so one requires deconfusion techniques to extract reliable photometry. To this end, we used the deconfusion software {\\tt TPHOT} \\citep{Merlin2015}. This uses the spatial and surface brightness information based on the high-resolution imaging (HST F814W in this instance) as a prior. Cutouts of all objects in an input catalogue are then convolved with a kernel which produces object model templates in the low-resolution IRAC image. The fluxes of all these templates are then fitted simultaneously. In order to be consistent with the aperture-corrected photometry, we opt to subtract all objects except the target Sharov21, and then perform aperture photometry on the cleaned-up image to avoid contamination. For channel 1 and channel 2, we employ $2.8^{\\prime\\prime}$-diameter apertures, and for channel 3 and channel 4, we use $5^{\\prime\\prime}$-diameter apertures, before correcting to the total flux assuming a point-source and the estimated flux curve-of-growth.\n\n\n\\subsection{X-ray and radio data}\nThe XMM-Newton data have been taken directly from the XMM-Newton Science Archive. Of four XMM observations, two had a reasonable number of counts and the catalogue fluxes for these have been averaged across each bandpass. Assumed upper limits for GALEX and two M31 radio surveys are taken directly from \\citet{Meusinger2010} \\citep{Braun1990,Gelfand2004}.\n\n\n\\subsection{Vilardell survey time series data}\n\\label{sec:Villardell}\n\nThe Sharov21 event may not be the only one of its kind so, in order to begin a search for other candidate events, we require suitable long-term monitoring data in the immediate vicinity of M31. The cadence of this data need not be high as we are primarily concerned with timescales on the order of months or more.\n\nA very useful archival source of time series data is that from \\citet{Vilardell2006}. 
This four year photographic survey of the Northeastern quadrant of M31 comprises five approximately month-long observing epochs in both B and V, each separated by one year. Our Sharov21 lightcurve includes binned data from this survey. Though the survey was designed to search for variable stars, in particular eclipsing binaries, it nevertheless allows us to search for additional long term transient events which may be similar in nature to Sharov21.\n\nOut of their 236,238 sources, 3,964 have been identified as a variable star over an approximate $34\\arcmin\\times34\\arcmin$ field of view. Of these, 853 have been classified as an eclipsing binary or Cepheid. The remaining 3,111 object catalogue contains other variables with periods outwith the 1-100 day analysis range and also includes non-periodic sources. It is this `variable star' catalogue that we make use of to perform a search for other candidate AGN microlensing events (see Section \\ref{sec:search}).\n\n\\section{Methodology}\n\\label{sec:Methods}\n\\subsection{Microlensing analysis}\n\nWith the light curve noted in the previous section, we take a similar approach to modelling the lensing event as \\citet{Bruce2017}. Our first assumption is that the event can be well described by a simple microlensing model of a point-mass object passing in front of a background point-source. The light curve in this case has an analytic solution involving the following parameters: source redshift ($z_{\\rm s}$); lens redshift\/mass\/transverse velocity ($z_{\\rm d}, M_{\\rm l}, v_\\perp$); impact parameter ($y_0$); mid-point epoch ($t_0$); source\/background flux ($F_{\\rm s}, F_{\\rm b}$). With the source redshift constrained to be $z_{\\rm s}=2.109$ from spectroscopy, this leaves 7 free parameters, also noted in Table \\ref{tab:microParams}. 
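As a point of reference, the standard point-source, point-lens magnification and the flux model described above can be sketched as follows (a minimal illustration; here the Einstein crossing time is treated as a single input parameter, whereas in the fit itself it is determined by the lens mass, lens redshift and transverse velocity):

```python
import numpy as np

def magnification(u):
    """Point-source, point-lens (Paczynski) magnification; u is the
    lens-source separation in units of the Einstein radius."""
    u = np.asarray(u, dtype=float)
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def model_flux(t, t0, tE, y0, Fs, Fb):
    """Observed flux: lensed source flux Fs * A(u) plus the unlensed
    background Fb; y0 is the impact parameter in Einstein radii and
    tE the Einstein-radius crossing time (same time units as t)."""
    u = np.sqrt(y0**2 + ((np.asarray(t, dtype=float) - t0) / tE)**2)
    return Fs * magnification(u) + Fb

def inflated_sigma(sig_phot, sig_agn=0.1):
    """Error inflation used in the likelihood, allowing for intrinsic
    AGN variability on top of the photometric errors."""
    return np.sqrt(np.asarray(sig_phot, dtype=float)**2 + sig_agn**2)
```

For small impact parameters the peak magnification is roughly $1/y_0$, so $y_0 \approx 0.034$ corresponds to a peak amplification of order 30, i.e.\ several magnitudes.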
Cosmological calculations in this paper make use of \\texttt{Planck13} values \\citep{Planck2014}: $H_0=67.8$\\,km\\,s$^{-1}$\\,Mpc$^{-1}$, $\\Omega_\\Lambda=0.693$.\n\n\\begin{table}\n\t\\centering\n\t\\caption{Free parameters in the simple microlensing model.}\n\t\\label{tab:microParams}\n\t\\begin{tabular}{cl}\n\t\t\\hline\n\t\tParameter & Description \\\\\n\t\t\\hline\n\t\t$M_\\textrm{l}$ & Lens mass \\\\\n\t\t$v_\\perp$ & Transverse velocity \\\\\n\t\t$t_0$ & Mid-point epoch \\\\\n\t\t$z_\\textrm{d}$ & Lens redshift \\\\\n\t\t$y_0$ & Impact parameter \\\\\n\t\t$F_\\textrm{s}$ & Source flux (pre-lensing) \\\\\n\t\t$F_\\textrm{b}$ & Background flux (unlensed) \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table}\n\nIn order to efficiently explore this parameter space we employ the software package {\\tt emcee} \\citep{Foreman-Mackey2013} to perform an MCMC analysis. We also include some relatively simple assumptions in the choice of priors. This includes a log-normal prior on the lens mass ($\\mu=0, \\sigma=1$, median value 1\\,M$_\\odot$) to ensure we are in the stellar-mass regime and a Gaussian prior on the transverse velocity, centred on 400 km\/s with a sigma of 200 km\/s. A further consideration was to place a constraint of $\\pm$60 days about the peak to minimise the amount of time spent in low likelihood regions.\n\nWe know that any AGN will also display some level of intrinsic variability regardless of any extrinsic cause. To allow for this in the MCMC analysis, we increased the errors on the light curve as $\\sigma_{\\rm MCMC}^2=\\sigma_{\\rm phot}^2 + \\sigma_{\\rm AGN}^2$, where we initially set $\\sigma_{\\rm AGN}=0.1$, a conservative value for typical AGN fluctuations.\n\nInitial testing with a starting guess for the lens redshift of $z_{\\rm d}=0.2$ failed to converge in the time allotted. 
With the possibility that the true lens position may lie at the much smaller ``redshift'' corresponding to M31 (on the order $z\\sim 10^{-4}$), the decision was made to perform a search in $\\log(z_{\\rm d})$ instead. With the starting guess kept the same, this second attempt converged successfully. The results of the analysis are shown in Section \\ref{sec:S21microResults}, with parameter constraints taken using the one-sigma percentiles from the MCMC trace and the `best-fit' model in this case corresponding to the peak in the posterior distribution.\n\n\\subsection{Search for other microlensing candidates}\n\\label{sec:search}\n\nIn order to search for additional microlensing candidates which may be similar to Sharov21, we perform a search through the published \\citet{Vilardell2006} data tables, in particular their sources already identified as variable but not confirmed as either an eclipsing binary or Cepheid, a total of 3,111 objects. The data comprise groupings of approximately month-long observing periods, performed once per year, over a total of four years. As we are concerned primarily with locating candidate events with $\\sim$year-long timescales, the median magnitude across each of the five principal observing epochs was determined after excluding data with uncertainty $> 0.5$\\,mag. We then exclude objects classified as variable stars and perform a search for changes of $> 0.75$\\,mag in either B or V across any of the five epochs. To exclude sources displaying rapid variability, an additional constraint was that the standard deviation over any one of the five clusters of observations was not more than 0.2\\,mag. After imposing these constraints, we identified 139 objects of interest. Each lightcurve was then visually inspected for smooth variations over the period of observations, as would be expected in a point-source, point-lens microlensing event. The assumption that the event need be achromatic was relaxed during this process. 
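The selection cuts just described can be sketched as follows (illustrative only: the array layout and epoch labelling are assumptions, not the actual format of the \citet{Vilardell2006} tables, and the separate handling of the B and V bands is omitted):

```python
import numpy as np

def epoch_medians(mags, errs, epoch_ids, max_err=0.5):
    """Median magnitude per observing epoch, ignoring points with
    uncertainty > max_err mag."""
    good = errs <= max_err
    return {e: np.median(mags[good & (epoch_ids == e)])
            for e in np.unique(epoch_ids)}

def is_candidate(mags, errs, epoch_ids, min_jump=0.75, max_intra_std=0.2):
    """Cuts for long-timescale transients: a > 0.75 mag change between
    epoch medians (taken here as max minus min), with no epoch showing
    rapid intra-epoch variability (standard deviation > 0.2 mag)."""
    med = np.array(list(epoch_medians(mags, errs, epoch_ids).values()))
    jump = med.max() - med.min()
    intra = max(np.std(mags[(epoch_ids == e) & (errs <= 0.5)])
                for e in np.unique(epoch_ids))
    return bool(jump > min_jump and intra < max_intra_std)
```

Surviving objects would then still require the visual inspection step described above.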
This left us with 20 candidate events. Of these 20, two objects were identified as likely background AGN due to the existence of corroborating data. The candidates are plotted in Figure \\ref{fig:M31} with position information noted in Table \\ref{tab:TargetIDs}.\n\nT2 has spectroscopic data which confirm the presence of a background AGN at $z=0.215$ \\citep{Dorn-Wallenstein2017}. It has been identified as a possible binary AGN, though the binary nature of this AGN is in doubt \\citep{Barth2018}. Nevertheless, the lightcurve for this object underwent a smooth, significant change of $\\sim 0.75$\\,mag and has not been seen to exhibit this type of behaviour in the observations since. As target T2 has a spectroscopic redshift, we perform a microlensing analysis of the event as per the procedures outlined for Sharov21. T3 has no available spectroscopy but is detected in the XMM-Newton Science Archive (as are all our targets), which makes it likely that this is another background AGN. In this case the lightcurve is seen to undergo a smooth, significant change of $\\sim 1.1$\\,mag. Without a spectroscopic redshift, we do not include T3 in our MCMC microlensing analysis.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{c|c|c|c}\n LongID & ShortID & RA & Dec \\\\\n Sharov21 & S21 & 00:44:57.94 & +41:23:43.72 \\\\ \n M31V\\_J00452730 & T2 & 00:45:27.31 & +41:32:54.06 \\\\ \n M31V\\_J00443792 & T3 & 00:44:37.95 & +41:45:14.10 \\\\\n M31 centre & M31* & 00:42:44.35 & +41:16:08.63 \\\\\n \\end{tabular}\n \\caption{Target coordinates for the three microlensing candidates and our assumed position for the centre of M31. T3 coordinates are from the XMM-Newton Science Archive detection, within 0.3 arcsec of the \\citet{Vilardell2006} position. 
Sharov21 and T2 coordinates are from Gaia.}\n \\label{tab:TargetIDs}\n\\end{table}\n\n\\section{Results}\n\\label{sec:Results}\n\n\\subsection{Sharov21 SED}\n\\label{sec:SED}\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/S21SEDfull}\n \\caption{Full SED for Sharov21. The data points have not been corrected for Milky Way reddening. The triangles represent upper limits.}\n \\label{fig:S21_SEDfull}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/S21SEDzoom}\n \\caption{Zoom-in on the IR--UV range of the SED for Sharov21. The data points have not been corrected for Milky Way reddening, though the orange line shows the \\citet{Shang2011} template, reddened assuming $B-V=0.2$.}\n \\label{fig:S21_SEDzoom}\n\\end{figure}\n\nThe full SED is shown in Fig. \\ref{fig:S21_SEDfull}. The first point to note is that there is nothing immediately unusual about this AGN. It is clear from the non-detections in the radio that this object can be considered radio-quiet and, for comparison, we have included the radio-quiet SED template from \\citet{Shang2011}, which is in broad agreement. At present, we have made no attempt to correct for Milky Way reddening (approximately $B-V=0.2$), which explains the drop-off seen in the UV. In Fig. \\ref{fig:S21_SEDzoom} we show a zoomed-in view of the IR--UV region which includes a reddened SED template that provides a good match to the data. One potential issue is that the IRAC mid-IR fluxes seem to be underestimated by $\\sim 40\\%$, most likely due to the uncertainties inherent in working in a crowded field such as this. We believe our estimates are more accurate than the published Spitzer Enhanced Imaging Product (SEIP) source list values for this object. This is particularly true for IRAC channel one, which appears anomalously high at the expected rest-frame $1\\mu{\\rm m}$ minimum, perhaps a problem concerning contamination from one or more neighbors. 
Indeed, the SEIP source list entry is flagged as having a bad background match.\n\n\\subsection{Microlensing analysis results for Sharov21}\n\\label{sec:S21microResults}\n\n\\begin{table}\n\t\\centering\n\t\\begin{tabular}{clc}\n \t\\hline\n\t\tSharov21 & & $z_{\\rm agn}=2.109$ \\\\\n\t\t\\hline\n\t\tparameter & value & unit \\\\\n\t\t\\hline\n\t\t${\\rm log_{10}}(z_{\\rm d})$\t& $-3.82\\,^{+0.63}_{-0.67}$ & \\\\\n\t\t$M_{\\rm l}$\t\t\t\t\t& $1.26\\,^{+2.39}_{-0.79}$ & M$_\\odot$ \\\\\n\t\t$v_\\perp$\t\t\t\t\t& $420\\,^{+205}_{-192}$ & km\\,s$^{-1}$ \\\\\n\t\t$y_0$\t\t\t\t\t\t& $0.0343\\,^{+0.0077}_{-0.0075}$ & $\\theta_E$\\\\\n\t\t$t_0$\t\t\t\t\t\t& $48915.7\\,^{+0.9}_{-0.9}$ & MJD \\\\\n\t\t$F_{\\rm s}$\t\t\t\t\t& $2.52\\,^{+0.55}_{-0.54}$ & $\\times10^{-17}$erg\\,s$^{-1}$cm$^{-2}$\\AA$^{-1}$ \\\\\n\t\t$F_{\\rm b}$\t\t\t\t\t& $9.20\\,^{+5.14}_{-5.30}$ & $\\times10^{-18}$erg\\,s$^{-1}$cm$^{-2}$\\AA$^{-1}$ \\\\\n\t\t$r_{\\rm E}$\t\t\t\t\t& $1260\\,^{+2780}_{-860}$ & light-days \\\\\n \\hline\n\t\\end{tabular}\n\t\\caption{Results of the MCMC microlensing analysis for Sharov21. The full corner plot can be found in Fig. \\ref{fig:S21_corner}.}\n\t\\label{tab:MCMC_S21}\n\\end{table}\n\nWe first report on our analysis of the Sharov21 event. The results from the MCMC analysis are displayed in Table \\ref{tab:MCMC_S21} and the corresponding corner plot is displayed in Fig. \\ref{fig:S21_corner}. In general, the parameters are well constrained. Of particular interest is the range of allowable values for the projected lens redshift. These redshifts correspond to a physical distance in the range 0.14--2.84\\,Mpc, with the most probable value being 0.67\\,Mpc. This is very close to the true distance of M31, $\\simeq0.78$\\,Mpc.\n\nIn contrast to the suspected microlensing events reported in \\citet{Lawrence2016a} and \\citet{Bruce2017}, the timescale for this event is shorter, with the Einstein timescale $t_{\\rm E}\\approx 1$\\,year. 
This is a natural consequence of the low lens redshift, but it does mean that events of this kind are in a more favourable regime for a coordinated observation campaign if detected on the rise. The peak amplification of the event is a factor of 30 above the base AGN level of ${\\rm B}\\sim20.5$\\,mag. A further consequence of the low lens redshift is that the Einstein radius when projected at the distance of the source ($r_{\\rm E}$) is on the order of light years as opposed to light days. With this lens footprint, our assumption that the AGN can be regarded as a point source would appear to be secure, as any radial temperature profile in the accretion disc would be expected to go unresolved. Chromatic effects would be expected to creep in only if the source were larger than approximately 10\\% of the Einstein radius. This may also mean that the inner regions of the BLR underwent significant amplification during this event.\n\nHow well does the microlensing model explain the bulk changes seen in this object? The full light curve is displayed in Fig. \\ref{fig:S21_LC} and a zoomed-in view of the main event is displayed in Fig. \\ref{fig:S21_LCzoom}. In both cases we overplot the model which corresponds to the peak of the posterior distribution. This model, using our MCMC errors defined above, produces a reduced chi-squared of $\\chi^2_\\nu=1.48$ and, broadly speaking, performs very well. A potential issue can be seen in the residuals to the zoomed-in view, where the data, particularly around the ${\\rm MJD}\\sim48880$ mark, show some structure. As discussed in Section \\ref{sec:S21timeseries}, we believe that this shoulder in the data may be unreliable. Tests with simulated damped random walk (DRW) models \\citep[e.g.,][]{MacLeod2012}, using typical parameters for radio-quiet AGN, show that changes of this nature occur very rarely, if ever, when using the same cadence\/sampling as the Sharov data. 
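The DRW comparison can be reproduced in outline with the standard Ornstein--Uhlenbeck update. A minimal sketch follows, with illustrative parameter values (not fits to Sharov21) and the common convention ${\\rm SF}_\\infty^2 = 2\\sigma^2$ for the asymptotic variance:

```python
import math
import random

def simulate_drw(times, tau=300.0, sf_inf=0.2, mean_mag=20.5, seed=1):
    """Simulate a damped-random-walk lightcurve at the given epochs (days).

    tau:    damping timescale in days
    sf_inf: asymptotic structure-function amplitude in mag
    Parameter values here are illustrative only.
    """
    rng = random.Random(seed)
    sigma2 = 0.5 * sf_inf ** 2  # asymptotic variance of the process
    mag = mean_mag + rng.gauss(0.0, math.sqrt(sigma2))
    out = [mag]
    for dt in (t2 - t1 for t1, t2 in zip(times, times[1:])):
        decay = math.exp(-dt / tau)
        # exact OU update: decay toward the mean plus a stochastic kick
        var = sigma2 * (1.0 - decay ** 2)
        mag = mean_mag + (mag - mean_mag) * decay + rng.gauss(0.0, math.sqrt(var))
        out.append(mag)
    return out
```

Running this at the actual Sharov cadence and counting magnitude excursions comparable to the observed shoulder gives the kind of rarity estimate referred to in the text.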
However, a modest increase to our intrinsic variability parameter, $\\sigma_{\\rm AGN}=0.15$, is sufficient to bring the reduced chi-squared value of the fit to unity.\n\n\\subsection{Microlensing analysis results for candidate T2}\n\\label{sec:T2microResults}\n\nIn addition to the Sharov21 analysis, we have also performed a microlensing analysis of the candidate T2 event. The results are reported in Table \\ref{tab:MCMC_T2} and the corresponding fit is shown in Figure \\ref{fig:T2_LC}. It is immediately apparent that the sampling of the data is sub-optimal, but the fit to the data performs very well, with a reduced chi-squared of $\\chi^2_\\nu=0.15$. The redshift values correspond to a physical distance in the range 0.63--11.0\\,Mpc, with the most probable value being 2.63\\,Mpc. This is greater than the true distance to M31, though still consistent with the lens residing in M31 within the one-sigma constraints noted. In contrast to the Sharov21 event, the peak amplification is much lower, a factor of two above the background level in the B-band.\n\n\\begin{table}\n\t\\centering\n\t\\begin{tabular}{clc}\n \t\\hline\n\t\tT2 & & $z_{\\rm agn}=0.215$ \\\\\n\t\t\\hline\n\t\tparameter & value & unit \\\\\n\t\t\\hline\n\t\t${\\rm log_{10}}(z_{\\rm d})$\t& $-3.23\\,^{+0.62}_{-0.62}$ & \\\\\n\t\t$M_{\\rm l}$\t\t\t\t\t& $1.12\\,^{+1.68}_{-0.67}$ & M$_\\odot$ \\\\\n\t\t$v_\\perp$\t\t\t\t\t& $420\\,^{+203}_{-187}$ & km\\,s$^{-1}$ \\\\\n\t\t$y_0$\t\t\t\t\t\t& $0.424\\,^{+0.045}_{-0.097}$ & $\\theta_E$\\\\\n\t\t$t_0$\t\t\t\t\t\t& $52612\\,^{+8}_{-8}$ & MJD \\\\\n\t\t$F_{\\rm s}$\t\t\t\t\t& $1.37\\,^{+0.21}_{-0.40}$ & $\\times10^{-17}$erg\\,s$^{-1}$cm$^{-2}$\\AA$^{-1}$ \\\\\n\t\t$F_{\\rm b}$\t\t\t \t& $<6.5$ & $\\times10^{-18}$erg\\,s$^{-1}$cm$^{-2}$\\AA$^{-1}$\\\\\n\t\t$r_{\\rm E}$\t\t\t\t\t& $249\\,^{+484}_{-167}$ & light-days \\\\\n \\hline\n\t\\end{tabular}\n \\caption{Results of the MCMC microlensing analysis for T2. The full corner plot can be found in Fig. \\ref{fig:T2_corner}.}\n\t\\label{tab:MCMC_T2}\n\\end{table}\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/T2LC}\n \\caption{T2 light curve including our microlensing model. Residuals are plotted below. Error bars reflect the original errors on the photometry.}\n \\label{fig:T2_LC}\n\\end{figure}\n\nThis object displays differential evolution in the lightcurve, with a change in the V-band of $\\sim0.25$ mag. This differential change is more significant than the tentative colour change noted for Sharov21; however, this does not immediately rule out microlensing as the cause of the event. The extended nature of the T2 host galaxy is clear in the PHAT imaging and indicates that there may be a significant contribution to the background flux in the V-band. In the B-band, the data are not sufficient to reliably constrain the background contribution and are consistent only with an upper limit, which suggests that the AGN is the dominant contribution in this filter. Another possible explanation for the chromatic lightcurve is that the simple point-source approximation is not appropriate in this case. The constraints on the Einstein radius of the lens when projected in the source plane are smaller than those for Sharov21 and may be an indication that we need to allow for an extended source in order to reproduce the observed colour changes as a consequence of the accretion disc being partially resolved by the lens.\n\n\n\n\n\\section{Discussion}\n\\label{sec:Discussion}\n\n\\subsection{Evidence for microlensing}\n\nWe first turn our attention to Sharov21 and the case for this being a high-amplitude microlensing event. Our MCMC analysis lends strong support to the microlensing hypothesis in that it successfully predicts that the most probable lens location is at the distance to M31 with only a small number of initial assumptions. 
Figure \\ref{fig:cropcorner} shows the 1D posterior distribution in log($z_{\\rm{d}}$) and the 2D distribution in the log($z_{\\rm{d}}$)--$M_{\\rm{lens}}$ plane. Also shown is the effective redshift that corresponds to M31, which is in excellent agreement with the data. For comparison, the top panel also shows the posterior distribution obtained for T2. In this case the peak of the distribution is at a greater redshift, but the location of M31 is still enclosed within the one-sigma bounds about this peak.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{figures\/S21cropcorner.pdf}\n \\caption{Top panel: 1D posterior redshift distributions for targets S21 (solid line) and T2 (dotted line) from our microlensing analysis. Bottom panel: 2D posterior in the log($z_{\\rm{d}}$)--$M_{\\rm{lens}}$ plane for Sharov21. The solid vertical line indicates the effective ``redshift'' for M31, consistent with a distance of 785 kpc.}\n \\label{fig:cropcorner}\n\\end{figure}\n\nIn addition to a satisfactory distance estimate, our lensing model also provides evidence that this microlensing event should be achromatic, given the constraint on the projected Einstein radius at the source. As has been previously mentioned, there is reported evidence of a colour change and the presence of a shoulder in the data that is not well described by the model. However, we do not believe that the lightcurve data about the peak are of sufficient quality to falsify the simple achromatic microlensing model. As discussed in Section \\ref{sec:S21microResults}, a modest increase to the error bars in our lightcurve provides for a satisfactory fit to the data. We note that it is only the Sharov observations which appear to be in tension -- all of the additional third-party data about the peak are in good agreement with the model. 
A better understanding of any potential systematics inherent in the Sharov plates and assumed errors would be required before discarding the simple lensing model in favour of, for example, a binary lens configuration.\n\nThe T2 event exhibits a satisfactory microlensing fit to the B-band data, but the simple model is lacking in that it cannot reproduce the differential colour changes seen in the data, in particular with respect to the smaller changes in the V-band. This may be due to the presence of a significant host galaxy component resulting in a dilution of the microlensing signal. This is reinforced by the spectroscopic component fits in \\citet{Dorn-Wallenstein2017}, which show that in the V-band the quasar component is on the order of 20\\% of the total contribution in that band. We may also need to take into account the possibility that the accretion disc in this case may be partially resolved by the lens, leading to the differential changes seen. Figure \\ref{fig:cropcorner} shows that the expected lens distance does not align well with the position of M31, though this location is still reasonable given the spread in the data.\n\n\\subsection{Event Rates}\n\\label{sec:Rates}\n\nWe are now concerned with estimating the expected microlensing event rate for a background AGN being lensed by a stellar-mass object in M31. We use a similar approach to \\citet{Meusinger2010} in order to estimate these rates, but update some parameters to reflect more recent data. These parameters are noted in Table \\ref{tab:GeoParams}. Deriving a rate estimate requires an estimate both of the number density of background AGN and of the expected microlensing optical depth, i.e. 
the probability that a background source falls within the projected Einstein radius of a lensing object.\n\n\\begin{table}\n \\centering\n \\begin{tabular}{c|r|l}\n M31 ``redshift'' & $1.775\\times10^{-4}$ & \\\\\n M31 distance$^1$ & 785 & kpc \\\\\n M31 inclination$^1$ & 77.5$^\\circ$ & \\\\\n M31 PA$^2$ & 37$^\\circ$ & \\\\\n \\hline\n S21 redshift$^3$ & 2.109 & \\\\\n S21 separation from M31 & $26.2\\arcmin$ & \\\\\n S21 projected separation & 6.0 & kpc \\\\\n \\hline\n Critical density: & & \\\\\n $D_{\\rm{s}}$ & 1758.58 & Mpc \\\\\n $D_{\\rm{d}}$ & 0.785 & Mpc \\\\\n $D_{\\rm{ds}}$ & 1758.33 & Mpc \\\\\n $\\Sigma_{\\rm crit}$ & $4.4\\times10^{3}$ & $\\rm{kg\\,m^{-2}}$ \\\\\n & $2.1\\times10^{6}$ & $\\rm{M_\\odot\\,pc^{-2}}$ \\\\\n & $3.1\\times10^{7}$ & $\\rm{M_\\odot\\,arcsec^{-2}}$ \\\\\n \\hline\n Quasar counts$^4$, $n_{\\rm QSO}$: & & \\\\\n $15.5 < g < 21$ & 64 & $\\rm{deg^{-2}}$ \\\\\n $15.5 < g < 22$ & 141 & $\\rm{deg^{-2}}$ \\\\\n $15.5 < g < 23$ & 271 & $\\rm{deg^{-2}}$ \\\\\n $15.5 < g < 24$ & 487 & $\\rm{deg^{-2}}$ \\\\\n $15.5 < g < 25$ & 840 & $\\rm{deg^{-2}}$ \\\\ \n \\end{tabular}\n \\caption{Parameters used in the geometric event rate analysis. The ``redshift'' for M31 corresponds to the value required to obtain an angular diameter distance of 785 kpc. [1]\\citet{Geehan2006}; [2]\\citet{Tamm2012}; [3]\\citet{Meusinger2010}; [4]\\citet{Palanque2016}.}\n \\label{tab:GeoParams}\n\\end{table}\n\nThe optical depth to microlensing can be obtained via $\\tau=\\Sigma_{*}\/\\Sigma_{\\rm cr}$, where $\\Sigma_{*}$ is the stellar surface-mass density and $\\Sigma_{\\rm cr}$ is the critical surface-mass density given by:\n\n\\begin{equation}\n\\Sigma_{\\rm cr}=\\frac{c^2}{4\\pi G}\\frac{D_{\\rm s}}{D_{\\rm d}D_{\\rm ds}},\n\\end{equation}\nwhere $D_{\\rm s}$, $D_{\\rm d}$ and $D_{\\rm ds}$ are the angular diameter distances to the source, to the lens, and between the lens and source, respectively. The critical surface density is noted in Table \\ref{tab:GeoParams}. 
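As a numerical cross-check, the critical surface density follows directly from the angular diameter distances in Table \\ref{tab:GeoParams}. A minimal Python sketch with SI constants:

```python
import math

C = 2.998e8        # speed of light, m s^-1
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.0857e22    # one megaparsec in metres
PC = 3.0857e16     # one parsec in metres
M_SUN = 1.989e30   # solar mass in kg

def sigma_crit(d_s_mpc, d_d_mpc, d_ds_mpc):
    """Critical surface-mass density (kg m^-2) from the angular diameter
    distances (Mpc) to the source, the lens, and between lens and source."""
    d_s, d_d, d_ds = (d * MPC for d in (d_s_mpc, d_d_mpc, d_ds_mpc))
    return C**2 / (4.0 * math.pi * G) * d_s / (d_d * d_ds)

# Sharov21 / M31 geometry, using the distances from the table
sig_si = sigma_crit(1758.58, 0.785, 1758.33)   # ~4.4e3 kg m^-2
sig_msun_pc2 = sig_si * PC**2 / M_SUN          # ~2.1e6 M_sun pc^-2
```

Both values agree with the tabulated $\\Sigma_{\\rm crit}$ entries, which also confirms the unit conversion between kg\,m$^{-2}$ and M$_\\odot$\,pc$^{-2}$.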
With the optical depth estimates in hand, the expected event rate can be found using:\n\n\\begin{equation}\n\\Gamma=\\frac{2N_{\\rm QSO}\\tau}{\\pi t_{\\rm E}}.\n\\end{equation}\n\nFor these rate calculations we make the simplifying assumption that the event timescale is the same for all events and set $t_{\\rm E}=1\\,{\\rm yr}$, appropriate to the Sharov21 event.\n\nIn order to determine the total number of background QSOs we make use of the estimates reported in \\citet{Palanque2016}. The expected number densities for various apparent magnitude ranges are noted in Table \\ref{tab:GeoParams}. One further factor to consider is the ability of any survey to reliably detect AGN through the disc of M31. \\citet{Dalcanton2012} note that typical extinction values in M31 are on the order of $A_{\\rm V}\\sim1$, with less than 10\\% of sightlines displaying $A_{\\rm V} > 2$. The \\citet{Vilardell2006} survey reports limiting magnitudes of 25.5 and 26.0 in $V$ and $B$ respectively, though we note that our candidate events are all brighter than $B\\sim25$ at their faintest epoch. In our rate estimates we therefore adopt the $15.5 < g < 24$ QSO density as the most appropriate choice, one which allows for a reliable detection of the full light curve of any event.\n\nWhat remains is for us to derive an estimate of the expected stellar surface-mass density at a given distance from the M31 centre. For this we make use of the stellar mass models from \\citet{Tamm2012} and for simplicity focus on their two-component model. This model consists of a bulge and a disc ellipsoid, each with rotational symmetry, to describe the stellar-mass profile of M31. When projected onto the sky, these components provide us with stellar surface-mass densities along the major and minor axes of M31. These are shown in Figures \\ref{fig:TammMajor} and \\ref{fig:TammMinor} and have been truncated to $3\\leq R_{\\rm proj}\\leq30,000\\,{\\rm pc}$. 
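The optical depth and rate formulae above combine into a short numerical sketch. The critical density is taken from Table \\ref{tab:GeoParams}, and the stellar surface-mass density used below is the value we estimate for the Sharov21 location later in this section:

```python
import math

SIGMA_CRIT_MSUN_PC2 = 2.1e6  # critical surface density, M_sun pc^-2

def optical_depth(sigma_star_msun_pc2):
    """Microlensing optical depth, tau = Sigma_* / Sigma_crit."""
    return sigma_star_msun_pc2 / SIGMA_CRIT_MSUN_PC2

def event_rate(n_qso, tau, t_einstein_yr=1.0):
    """Expected event rate Gamma = 2 N_QSO tau / (pi t_E), events per year."""
    return 2.0 * n_qso * tau / (math.pi * t_einstein_yr)

# at the Sharov21 location we estimate Sigma_* ~ 135 M_sun pc^-2,
# which reproduces the quoted tau ~ 6e-5
tau_s21 = optical_depth(135.0)
```

The full rate estimate then follows by summing `event_rate` over the isodensity annuli described below, with `n_qso` set from the quasar counts in Table \\ref{tab:GeoParams} scaled to each annulus area.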
Armed with this information, we are now in a position to define isodensity elliptical contours with corresponding semi-major and semi-minor distances. This in turn allows us to define elliptical annuli of assumed constant optical depth with which to derive our rate estimates across the extent of M31.\n\nFigure \\ref{fig:microRate} shows the cumulative microlensing rate as a function of distance along the M31 major axis. We estimate the overall event rate for any background AGN to fall within the Einstein radius of a stellar lens in M31 to be $\\Gamma=0.0826\\,{\\rm yr}^{-1}$, giving an average timescale between events of $\\sim12\\,{\\rm yr}$. We can add a further geometric constraint to this estimate if we require a higher-amplitude event. The peak magnification for a point-source, point-lens microlensing event is determined by the impact parameter; at $y_0=1$ this corresponds to a magnification $\\mu\\simeq1.34$. A minimum magnification factor of two, or thirty in the case of Sharov21, requires $y_0\\leq0.556$ or $y_0\\leq0.033$ respectively. The areas subtended by these higher-magnification regions are reduced by the square of these values, giving rise to an average timescale between events of this type of approximately 40 and 11,000 years respectively. These simplified estimates confirm the exceptional nature of the Sharov21 event but also allow for the possibility of detecting a number of lower-amplitude events on more reasonable timescales.\n\n\\begin{figure}\n\t\\includegraphics[width=\\columnwidth]{figures\/cumulativerate}\n \\caption{Cumulative predicted microlensing rate against the M31 major axis. The rates are calculated using isodensity ellipses based on the two-component mass model described in \\citet{Tamm2012}. 
The vertical line shows the location of the isodensity ellipse appropriate to Sharov21.}\n \\label{fig:microRate}\n\\end{figure}\n\nOne point to note is that, though the optical depth to microlensing is greatest at the innermost radii, the area subtended by these regions is considerably smaller than for more distant regions with lower optical depth, and they thus contribute only a small fraction to the overall rate estimates. The optical depth falls within the range $1.3\\times10^{-6}\\leq \\tau\\leq5.6\\times10^{-2}$. With these values we are still safely in the low optical depth regime. At the location of Sharov21 we estimate $\\tau\\simeq6\\times10^{-5}$ and $M_{\\rm surface}\\simeq135\\,M_\\odot\\,{\\rm pc}^{-2}$. We have plotted the isodensity ellipse corresponding to the position of Sharov21 in Figure \\ref{fig:M31}, with a semi-major axis distance of approximately 12 kpc or $3100\\arcsec$. Given that our lensing rate is proportional to the product of the surface-mass density and the area subtended by any isodensity annulus, there is a turnover in the derived rates, centred at approximately 5 kpc or $1300\\arcsec$ along the major axis. This location, where the microlensing rate effectively peaks, is highlighted by the inner annulus in Figure \\ref{fig:M31}.\n\n\\subsection{Event Rate Discussion}\n\nOur event rate estimate is likely an oversimplification, but it suggests that the average time between microlensing events of this nature, with a factor of two or greater amplification, is on the order of half a century, and that such events would occur most frequently at intermediate radii from the M31 centre. In these calculations, given the close proximity of M31 to us as observers, the optical depth is relatively insensitive to the redshift of the source, though the assumption of a point source will break down for some lens geometries. 
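The impact-parameter thresholds used in the event-rate estimates above ($y_0\\leq0.556$ for a factor of two, $y_0\\leq0.033$ for a factor of thirty) can be verified against the standard point-source, point-lens magnification formula; a quick numerical sketch:

```python
import math

def magnification(y):
    """Peak magnification of a point-source, point-lens event at
    impact parameter y (in units of the Einstein radius)."""
    return (y * y + 2.0) / (y * math.sqrt(y * y + 4.0))

# thresholds quoted in the event-rate discussion:
# magnification(1.0)   ~ 1.34
# magnification(0.556) ~ 2    (factor-of-two events)
# magnification(0.033) ~ 30   (Sharov21-like events)
```

Since the lensing cross-section scales as $y_0^2$, dividing the $\\sim$12-yr mean waiting time by $0.556^2$ and $0.033^2$ recovers the quoted $\\sim$40-yr and $\\sim$11,000-yr timescales.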
In these cases the peak amplification of the event will be lower and there may be evidence for chromatic changes as a consequence of the AGN disc being partially resolved during the event. We would also expect that, assuming a binary fraction of $\\sim50\\%$, approximately 10\\% of events would show additional structure in their light curve due to the presence of a binary lens, where a favourable alignment produces notable deviations from the symmetric point-lens case. One additional factor to consider is that the blending of the target flux with other M31 sources and\/or a strong host galaxy component may further dilute any microlensing signal. The number of confirmed AGN behind M31 remains low at present \\citep{Massey2019}, though this is certain to evolve in the coming years. It seems clear that spectroscopic confirmation will be required for any candidate to confirm its status as a background AGN. It is not trivial to perform a colour selection given that many of these background AGN will be seen through a not-insignificant dust column.\n\nGiven the low probability of observing a background AGN microlensing event, it is perhaps surprising that we have been able to identify two other candidate AGN microlensing events in the \\citet{Vilardell2006} data. Both of these appear within a four-year timeframe and a $\\simeq0.3\\,\\rm{deg}^2$ survey footprint. T2 is spectroscopically confirmed as a background AGN and T3, as an X-ray source, remains a likely candidate for a background AGN. The peak amplifications for these additional events are a factor of 2--3. Though much less than the factor of 30 seen in Sharov21, such low-amplitude microlensing events are far more likely to occur in general. 
We must therefore allow for the possibility that either our hypothesis is incorrect or that there is an additional population of stellar-mass lenses which is as yet unaccounted for.\n\n\\subsubsection{Non-stellar microlenses?}\n\nCurrent cosmological observations \\citep[e.g.,][]{Planck2018} and galactic dynamics inform us that only 20\\% of the observed mass density is in baryonic form. One candidate for this non-baryonic (`dark') matter is a population of massive astrophysical compact halo objects \\citep[MACHOs;][]{Paczynski1986}. MACHOs could comprise a range of non-luminous astrophysical objects, but would readily reveal themselves via gravitational microlensing events. With the recent detection of gravitational waves from merging black holes \\citep{Abbott2016, Abbott2019}, interest in primordial black holes (PBHs), which are potential MACHO candidates, has been reinvigorated \\citep[e.g., ][]{Clesse2017, Stegmann2019}. The MACHO experiment was designed to detect the microlensing of stars in the Magellanic Clouds by compact bodies in the Milky Way Galactic halo \\citep{Alcock1996}. Using the results from the MACHO collaboration \\citep{Alcock2000}, and a range of Galactic models for LMC microlensing, \\citet{Hawkins2011} finds that a MACHO contribution to the MW halo is not ruled out, and that the MACHO content could potentially be around 20\\%. Recent analyses from \\citet{Calcino2018}, using microlensing constraints towards the Large Magellanic Cloud, suggest that although the likelihood for $\\sim$1--10 M$_{\\odot}$ objects is weakened, the constraints for masses around 10 M$_{\\odot}$ are still viable.\n\nAs such, though we generally remain agnostic, we acknowledge the potential existence of MACHOs in the M31\/MW halo and note that the presence of a population of these objects may help to explain the discrepancy between our predicted and observed microlensing rates. 
Suffice it to say, this provides an additional and compelling reason to undertake a systematic search for more events of this nature and on these year-long timescales. Of particular import will be the positions of the events relative to the M31 centre. Our rate analysis indicates that the most favourable location for detecting microlensing events lies nearer to the centre of M31 than the positions of our three current microlensing candidates. It is possible that the blending effects described above are simply biasing us towards detecting events at greater radii. It is also possible that the outskirts of the stellar disc of M31 play host to an additional repository of lenses as yet unaccounted for. Further speculation at this stage is premature as we are currently limited by a low number of candidate events.\n\n\\section{Conclusions}\n\nWe have re-examined the decades-long lightcurve for the object known as Sharov21, a background AGN seen through M31 which underwent a thirty-fold increase in brightness over one year. Armed with only the lightcurve, a point-source\/point-lens microlensing model and the assumption of a stellar-mass lens, it has been possible to derive constraints on the expected distance of the lens which are in excellent agreement with the true distance of M31. We believe this provides strong evidence that Sharov21 was indeed a rare, high-amplitude microlensing event. We find that slight discrepancies with the data are not sufficient, given the data quality, to justify discarding the simple model in favour of more complex lensing scenarios.\n\nWe have analysed archival data on Sharov21 from multiple sources and our resulting SED shows that this AGN can be considered an otherwise unremarkable type-I AGN. 
The high-resolution Hubble data from the PHAT survey confirm that Sharov21 is consistent with a point source.\n\nIn addition to our work on Sharov21, we have undertaken a search for additional candidates that display the characteristics of a simple microlensing event on similar timescales in a four-year survey of a sector of M31. This search has yielded 20 candidate events, two of which are likely background AGN: one is spectroscopically confirmed and the other is an X-ray source and thus a promising background candidate. For the confirmed background AGN, our microlensing analysis shows that this event is also consistent with the lens object being located at the distance of M31.\n\nOur exploration of the expected microlensing event rate shows that these events should occur on average every half century or so. This is a higher rate than that derived in \\citet{Meusinger2010}, but not high enough to explain our current number of candidates if our microlensing hypothesis is correct. This may suggest the presence of an additional population of lensing objects in the outskirts of M31 which is not yet accounted for.\n\nA detailed, systematic search for long-term microlensing candidates in M31, including spectroscopic follow-up, is required in order to address the discrepancy between our observed and predicted rates. The timescales for these events are on the order of years, and the events are most likely to occur toward intermediate radii from the M31 centre. These events can in principle provide valuable information about these distant AGN if the data are well sampled. Perhaps more importantly, monitoring as many background sources as possible allows a probe of the stellar and dark halo populations of both M31 and the Milky Way. The timescales for these events are longer than, and stand in contrast to, those of the microlensing of source stars in the LMC\/SMC\/M31 by intervening compact objects, a regime worthy of further exploration.\n\n\n\\section*{Acknowledgements}\n\nThe authors would like to thank H. 
Meusinger for providing the Sharov21 light curve data used in their paper.\n\nNPR acknowledges support from the STFC and the Ernest Rutherford Fellowship scheme.\n\nThe Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. 
AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation.\n\n\n\n\n\\bibliographystyle{mnras}\n\n\\section{Introduction}\n\nThe journal \\textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \\LaTeX.\nThe style file \\verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.\nThis document, \\verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.\n\nThis is not a general guide on how to use \\LaTeX, of which many excellent examples already exist.\nWe particularly recommend \\textit{Wikibooks \\LaTeX}\\footnote{\\url{https:\/\/en.wikibooks.org\/wiki\/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.\nAlternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.\n\nFor guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\\footnote{\\label{foot:itas}\\url{http:\/\/www.oxfordjournals.org\/our_journals\/mnras\/for_authors\/}}.\nOnly technical issues with the \\LaTeX\\ class are considered here.\n\n\n\\section{Obtaining and installing the MNRAS package}\nSome \\LaTeX\\ distributions come with the MNRAS package by default.\nIf yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \\TeX\\ Archive Network\\footnote{\\url{http:\/\/www.ctan.org\/tex-archive\/macros\/latex\/contrib\/mnras}} (CTAN).\n\nThe files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \\LaTeX\\ distribution), or used temporarily by placing them in the working directory for your paper.\n\nTo use the MNRAS package, simply 
specify \\verb'mnras' as the document class at the start of a \\verb'.tex' file:\n\n\\begin{verbatim}\n\\documentclass{mnras}\n\\end{verbatim}\nThen compile \\LaTeX\\ (and if necessary \\bibtex) in the usual way.\n\n\\section{Preparing and submitting a paper}\nWe recommend that you start with a copy of the \\texttt{mnras\\_template.tex} file.\nRename the file, update the information on the title page, and then work on the text of your paper.\nGuidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\\ref{foot:itas}}$.\nNote that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).\n\nIf a paper is accepted, it is professionally typeset and copyedited by the publishers.\nIt is therefore likely that minor changes to presentation will occur.\nFor this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.\n\nPapers must be submitted electronically via the online submission system; paper submissions are not permitted.\nFor full guidance on how to submit a paper, see the instructions to authors.\n\n\\section{Class options}\n\\label{sec:options}\nThere are several options which can be added to the document class line like this:\n\n\\begin{verbatim}\n\\documentclass[option1,option2]{mnras}\n\\end{verbatim}\nThe available options are:\n\\begin{itemize}\n\\item \\verb'letters' -- used for papers in the journal's Letters section.\n\\item \\verb'onecolumn' -- single column, instead of the default two columns. This should be used {\\it only} if necessary for the display of numerous very long equations.\n\\item \\verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.\n\\item \\verb'referee' -- \\textit{(deprecated)} single column, double spaced, larger text, bigger margins. 
Please don't submit papers in this format.\n\\item \\verb'galley' -- \\textit{(deprecated)} no running headers, no attempt to align the bottom of columns.\n\\item \\verb'landscape' -- \\textit{(deprecated)} sets the whole document on landscape paper.\n\\item \\verb\"usenatbib\" -- \\textit{(all papers should use this)} this uses Patrick Daly's \\verb\"natbib.sty\" package for citations.\n\\item \\verb\"usegraphicx\" -- \\textit{(most papers will need this)} includes the \\verb'graphicx' package, for inclusion of figures and images.\n\\item \\verb'useAMS' -- adds support for upright Greek characters \\verb'\\upi', \\verb'\\umu' and \\verb'\\upartial' ($\\upi$, $\\umu$ and $\\upartial$). Only these three are included, if you require other symbols you will need to include the \\verb'amsmath' or \\verb'amsymb' packages (see section~\\ref{sec:packages}).\n\\item \\verb\"usedcolumn\" -- includes the package \\verb\"dcolumn\", which includes two new types of column alignment for use in tables.\n\\end{itemize}\n\nSome of these options are deprecated and retained for backwards compatibility only.\nOthers are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.\nIf you want to include any other packages, see section~\\ref{sec:packages}.\n\n\\section{Title page}\n\nIf you are using \\texttt{mnras\\_template.tex} the necessary code for generating the title page, headers and footers is already present.\nSimply edit the title, author list, institutions, abstract and keywords as described below.\n\n\\subsection{Title}\nThere are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').\nEnter them with \\verb'\\title[]{}' like this:\n\\begin{verbatim}\n\\title[Running head]{Full title of the paper}\n\\end{verbatim}\nThe full title can be multiple lines (use \\verb'\\\\' to start a new 
line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\\le~45$ characters on a single line.\n\nSee appendix~\\ref{sec:advanced} for more complicated examples.\n\n\\subsection{Authors and institutions}\n\nLike the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \\verb'\\author[]{}' command.\n\nIf the author list is more than one line long, start a new line using \\verb'\\newauthor'. Use \\verb'\\\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.\n\nFor example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:\n\\begin{verbatim}\n\\author[K. T. Smith et al.]{\nKeith T. Smith,$^{1}$\nA. N. Other,$^{2}$\nand Third Author$^{2,3}$\n\\\\\n$^{1}$Affiliation 1\\\\\n$^{2}$Affiliation 2\\\\\n$^{3}$Affiliation 3}\n\\end{verbatim}\nAffiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.\n\nEmail addresses can be inserted with the \\verb'\\thanks{}' command which adds a title page footnote.\nIf you want to list more than one email, put them all in the same \\verb'\\thanks' and use \\verb'\\footnotemark[]' to refer to the same footnote multiple times.\nPresent addresses (if different to those where the work was performed) can also be added with a \\verb'\\thanks' command.\n\n\\subsection{Abstract and keywords}\n\nThe abstract is entered in an \\verb'abstract' environment:\n\\begin{verbatim}\n\\begin{abstract}\nThe abstract of the paper.\n\\end{abstract}\n\\end{verbatim}\n\\noindent Note that there is a word limit on the length of abstracts.\nFor the current word limit, see the journal instructions to 
authors$^{\\ref{foot:itas}}$.\n\nImmediately following the abstract, a set of keywords is entered in a \\verb'keywords' environment:\n\\begin{verbatim}\n\\begin{keywords}\nkeyword 1 -- keyword 2 -- keyword 3\n\\end{keywords}\n\\end{verbatim}\n\\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.\nDo \\emph{not} make up new keywords!\nFor the current list of allowed keywords, see the journal's instructions to authors$^{\\ref{foot:itas}}$.\n\n\\section{Sections and lists}\n\nSections and lists are generally the same as in the standard \\LaTeX\\ classes.\n\n\\subsection{Sections}\n\\label{sec:sections}\nSections are entered in the usual way, using \\verb'\\section{}' and its variants. It is possible to nest up to four section levels:\n\\begin{verbatim}\n\\section{Main section}\n \\subsection{Subsection}\n \\subsubsection{Subsubsection}\n \\paragraph{Lowest level section}\n\\end{verbatim}\n\\noindent The other \\LaTeX\\ sectioning commands \\verb'\\part', \\verb'\\chapter' and \\verb'\\subparagraph{}' are deprecated and should not be used.\n\nSome sections are not numbered as part of journal style (e.g. 
the Acknowledgements).\nTo insert an unnumbered section use the `starred' version of the command: \\verb'\\section*{}'.\n\nSee appendix~\\ref{sec:advanced} for more complicated examples.\n\n\\subsection{Lists}\n\nTwo forms of lists can be used in MNRAS -- numbered and unnumbered.\n\nFor a numbered list, use the \\verb'enumerate' environment:\n\\begin{verbatim}\n\\begin{enumerate}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{enumerate}\n\\end{verbatim}\n\\noindent which produces\n\\begin{enumerate}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{enumerate}\nNote that the list uses lowercase Roman numerals, rather than the \\LaTeX\\ default Arabic numerals.\n\nFor an unnumbered list, use the \\verb'description' environment without the optional argument:\n\\begin{verbatim}\n\\begin{description}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{description}\n\\end{verbatim}\n\\noindent which produces\n\\begin{description}\n \\item First item\n \\item Second item\n \\item etc.\n\\end{description}\n\nBulleted lists using the \\verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.\n\n\\section{Mathematics and symbols}\n\nThe MNRAS class mostly adopts standard \\LaTeX\\ handling of mathematics, which is briefly summarised here.\nSee also section~\\ref{sec:packages} for packages that support more advanced mathematics.\n\nMathematics can be inserted into the running text using the syntax \\verb'$1+1=2$', which produces $1+1=2$.\nUse this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.\n\n\\subsection{Equations}\nEquations should be entered using the \\verb'equation' environment, which automatically numbers them:\n\n\\begin{verbatim}\n\\begin{equation}\n a^2=b^2+c^2\n\\end{equation}\n\\end{verbatim}\n\\noindent which produces\n\\begin{equation}\n a^2=b^2+c^2\n\\end{equation}\n\nBy default, the equations are 
numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \\verb'\\numberwithin{equation}{section}' to the preamble.\n\nIt is also possible to produce un-numbered equations by using the \\LaTeX\\ built-in \\verb'\\['\\textellipsis\\verb'\\]' and \\verb'$$'\\textellipsis\\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.\n\n\\subsection{Special symbols}\n\n\n\\begin{table}\n \\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}\n \\label{tab:anysymbols}\n \\begin{tabular}{lll}\n \\hline\n Command & Output & Meaning\\\\\n \\hline\n \\verb'\\sun' & \\sun & Sun, solar\\\\[2pt]\n \\verb'\\earth' & \\earth & Earth, terrestrial\\\\[2pt]\n \\verb'\\micron' & \\micron & microns\\\\[2pt]\n \\verb'\\degr' & \\degr & degrees\\\\[2pt]\n \\verb'\\arcmin' & \\arcmin & arcminutes\\\\[2pt]\n \\verb'\\arcsec' & \\arcsec & arcseconds\\\\[2pt]\n \\verb'\\fdg' & \\fdg & fraction of a degree\\\\[2pt]\n \\verb'\\farcm' & \\farcm & fraction of an arcminute\\\\[2pt]\n \\verb'\\farcs' & \\farcs & fraction of an arcsecond\\\\[2pt]\n \\verb'\\fd' & \\fd & fraction of a day\\\\[2pt]\n \\verb'\\fh' & \\fh & fraction of an hour\\\\[2pt]\n \\verb'\\fm' & \\fm & fraction of a minute\\\\[2pt]\n \\verb'\\fs' & \\fs & fraction of a second\\\\[2pt]\n \\verb'\\fp' & \\fp & fraction of a period\\\\[2pt]\n \\verb'\\diameter' & \\diameter & diameter\\\\[2pt]\n \\verb'\\sq' & \\sq & square, Q.E.D.\\\\[2pt]\n \\hline\n \\end{tabular}\n\\end{table}\n\n\\begin{table}\n \\caption{Additional commands for mathematical symbols. 
These can only be used in maths mode.}\n \\label{tab:mathssymbols}\n \\begin{tabular}{lll}\n \\hline\n Command & Output & Meaning\\\\\n \\hline\n \\verb'\\upi' & $\\upi$ & upright pi\\\\[2pt]\n \\verb'\\umu' & $\\umu$ & upright mu\\\\[2pt]\n \\verb'\\upartial' & $\\upartial$ & upright partial derivative\\\\[2pt]\n \\verb'\\lid' & $\\lid$ & less than or equal to\\\\[2pt]\n \\verb'\\gid' & $\\gid$ & greater than or equal to\\\\[2pt]\n \\verb'\\la' & $\\la$ & less than of order\\\\[2pt]\n \\verb'\\ga' & $\\ga$ & greater than of order\\\\[2pt]\n \\verb'\\loa' & $\\loa$ & less than approximately\\\\[2pt]\n \\verb'\\goa' & $\\goa$ & greater than approximately\\\\[2pt]\n \\verb'\\cor' & $\\cor$ & corresponds to\\\\[2pt]\n \\verb'\\sol' & $\\sol$ & similar to or less than\\\\[2pt]\n \\verb'\\sog' & $\\sog$ & similar to or greater than\\\\[2pt]\n \\verb'\\lse' & $\\lse$ & less than or homotopic to \\\\[2pt]\n \\verb'\\gse' & $\\gse$ & greater than or homotopic to\\\\[2pt]\n \\verb'\\getsto' & $\\getsto$ & from over to\\\\[2pt]\n \\verb'\\grole' & $\\grole$ & greater over less\\\\[2pt]\n \\verb'\\leogr' & $\\leogr$ & less over greater\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\nSome additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\\ref{tab:anysymbols}--\\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.\n\nMany other mathematical symbols are also available, either built into \\LaTeX\\ or via additional packages. If you want to insert a specific symbol but don't know the \\LaTeX\\ command, we recommend using the Detexify website\\footnote{\\url{http:\/\/detexify.kirelabs.org}}.\n\nSometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. 
There is no need to worry about this as it will be corrected by the typesetter during production.\n\nTo produce bold symbols in mathematics, use \\verb'\\bmath' for simple variables, and the \\verb'bm' package for more complex symbols (see section~\\ref{sec:packages}). Vectors are set in bold italic, using \\verb'\\mathbfit{}'.\n\nFor matrices, use \\verb'\\mathbfss{}' to produce a bold sans-serif font e.g. \\mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\\nabla$ (del, used in gradients, divergence etc.) use \\verb'$\\nabla$'.\n\n\\subsection{Ions}\n\nA new \\verb'\\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.\nFor example, to typeset singly ionised calcium use \\verb'\\ion{Ca}{ii}', which produces \\ion{Ca}{ii}.\n\n\\section{Figures and tables}\n\\label{sec:fig_table}\nFigures and tables (collectively called `floats') are mostly the same as built into \\LaTeX.\n\n\\subsection{Basic examples}\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{example}\n \\caption{An example figure.}\n \\label{fig:example}\n\\end{figure}\nFigures are inserted in the usual way using a \\verb'figure' environment and \\verb'\\includegraphics'. 
The example Figure~\\ref{fig:example} was generated using the code:\n\\begin{verbatim}\n\\begin{figure}\n \\includegraphics[width=\\columnwidth]{example}\n \\caption{An example figure.}\n \\label{fig:example}\n\\end{figure}\n\\end{verbatim}\n\n\\begin{table}\n \\caption{An example table.}\n \\label{tab:example}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n Sun & 1.00 & 1.00\\\\\n $\\alpha$~Cen~A & 1.10 & 1.52\\\\\n $\\epsilon$~Eri & 0.82 & 0.34\\\\\n \\hline\n \\end{tabular}\n\\end{table}\nThe example Table~\\ref{tab:example} was generated using the code:\n\\begin{verbatim}\n\\begin{table}\n \\caption{An example table.}\n \\label{tab:example}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n Sun & 1.00 & 1.00\\\\\n $\\alpha$~Cen~A & 1.10 & 1.52\\\\\n $\\epsilon$~Eri & 0.82 & 0.34\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\end{verbatim}\n\n\\subsection{Captions and placement}\nCaptions go \\emph{above} tables but \\emph{below} figures, as in the examples above.\n\nThe \\LaTeX\\ float placement commands \\verb'[htbp]' are intentionally disabled.\nLayout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.\nSimply place the \\LaTeX\\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.\n\nBy default a figure or table will occupy one column of the page.\nTo produce a wider version which covers both columns, use the \\verb'figure*' or \\verb'table*' environment.\n\nIf a figure or table is too long to fit on a single page it can be split into several parts.\nCreate an additional figure or table which uses \\verb'\\contcaption{}' instead of \\verb'\\caption{}'.\nThis will automatically correct the numbering and add `\\emph{continued}' at the start 
of the caption.\n\\begin{table}\n \\contcaption{A table continued from the previous one.}\n \\label{tab:continued}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n $\\tau$~Cet & 0.78 & 0.52\\\\\n $\\delta$~Pav & 0.99 & 1.22\\\\\n $\\sigma$~Dra & 0.87 & 0.43\\\\\n \\hline\n \\end{tabular}\n\\end{table}\nTable~\\ref{tab:continued} was generated using the code:\n\n\\begin{verbatim}\n\\begin{table}\n \\contcaption{A table continued from the previous one.}\n \\label{tab:continued}\n \\begin{tabular}{lcc}\n \\hline\n Star & Mass & Luminosity\\\\\n & $M_{\\sun}$ & $L_{\\sun}$\\\\\n \\hline\n $\\tau$~Cet & 0.78 & 0.52\\\\\n $\\delta$~Pav & 0.99 & 1.22\\\\\n $\\sigma$~Dra & 0.87 & 0.43\\\\\n \\hline\n \\end{tabular}\n\\end{table}\n\\end{verbatim}\n\nTo produce a landscape figure or table, use the \\verb'pdflscape' package and the \\verb'landscape' environment.\nThe landscape Table~\\ref{tab:landscape} was produced using the code:\n\\begin{verbatim}\n\\begin{landscape}\n \\begin{table}\n \\caption{An example landscape table.}\n \\label{tab:landscape}\n \\begin{tabular}{cccccccccc}\n \\hline\n Header & Header & ...\\\\\n Unit & Unit & ...\\\\\n \\hline\n Data & Data & ...\\\\\n Data & Data & ...\\\\\n ...\\\\\n \\hline\n \\end{tabular}\n \\end{table}\n\\end{landscape}\n\\end{verbatim}\nUnfortunately this method will force a page break before the table appears.\nMore complicated solutions are possible, but authors shouldn't worry about this.\n\n\\begin{landscape}\n \\begin{table}\n \\caption{An example landscape table.}\n \\label{tab:landscape}\n \\begin{tabular}{cccccccccc}\n \\hline\n Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\\\\n Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\\\\n \\hline\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & 
Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\\\\n \\hline\n \\end{tabular}\n \\end{table}\n\\end{landscape}\n\n\\section{References and citations}\n\n\\subsection{Cross-referencing}\n\nThe usual \\LaTeX\\ commands \\verb'\\label{}' and \\verb'\\ref{}' can be used for cross-referencing within the same paper.\nWe recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.\nThis ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).\n\nIt is best to give each section, figure and table a logical label.\nFor example, Table~\\ref{tab:mathssymbols} has the label \\verb'tab:mathssymbols', whilst section~\\ref{sec:packages} has the label \\verb'sec:packages'.\nAdd the label \\emph{after} the section or caption command, as in the examples in sections~\\ref{sec:sections} and \\ref{sec:fig_table}.\nEnter the cross-reference with a non-breaking space between the type of object and the number, like this: \\verb'see Figure~\\ref{fig:example}'.\n\nThe \\verb'\\autoref{}' command can be used to automatically fill out the type of object, saving on typing.\nIt also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.\nFor example, \\verb'\\autoref{tab:journal_abbr}' produces \\autoref{tab:journal_abbr}.\n\n\\subsection{Citations}\n\\label{sec:cite}\n\nMNRAS uses the Harvard -- author (year) -- citation style, e.g. 
\\citet{author2013}.\nThis is implemented in \\LaTeX\\ via the \\verb'natbib' package, which in turn is included via the \\verb'usenatbib' package option (see section~\\ref{sec:options}), which should be used in all papers.\n\nEach entry in the reference list has a `key' (see section~\\ref{sec:ref_list}) which is used to generate citations.\nThere are two basic \\verb'natbib' commands:\n\\begin{description}\n \\item \\verb'\\citet{key}' produces an in-text citation: \\citet{author2013}\n \\item \\verb'\\citep{key}' produces a bracketed (parenthetical) citation: \\citep{author2013}\n\\end{description}\nCitations will include clickable links to the relevant entry in the reference list, if supported by your \\LaTeX\\ compiler.\n\n\\defcitealias{smith2014}{Paper~I}\n\\begin{table*}\n \\caption{Common citation commands, provided by the \\texttt{natbib} package.}\n \\label{tab:natbib}\n \\begin{tabular}{lll}\n \\hline\n Command & Output & Note\\\\\n \\hline\n \\verb'\\citet{key}' & \\citet{smith2014} & \\\\\n \\verb'\\citep{key}' & \\citep{smith2014} & \\\\\n \\verb'\\citep{key,key2}' & \\citep{smith2014,jones2015} & Multiple papers\\\\\n \\verb'\\citet[table 4]{key}' & \\citet[table 4]{smith2014} & \\\\\n \\verb'\\citep[see][figure 7]{key}' & \\citep[see][figure 7]{smith2014} & \\\\\n \\verb'\\citealt{key}' & \\citealt{smith2014} & For use with manual brackets\\\\\n \\verb'\\citeauthor{key}' & \\citeauthor{smith2014} & If already cited in close proximity\\\\\n \\verb'\\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\\\\n \\verb'\\citetalias{key}' & \\citetalias{smith2014} & \\\\\n \\verb'\\citepalias{key}' & \\citepalias{smith2014} & \\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\nThere are a number of other \\verb'natbib' commands which can be used for more complicated citations.\nThe most commonly used ones are listed in Table~\\ref{tab:natbib}.\nFor full guidance on their use, consult the \\verb'natbib' 
documentation\\footnote{\\url{http:\/\/www.ctan.org\/pkg\/natbib}}.\n\nIf a reference has several authors, \\verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \\bibtex\\ (see section~\\ref{sec:ref_list}) then this is handled automatically. If not, the \\verb'\\citet*{}' and \\verb'\\citep*{}' commands can be used at the first citation to include all of the authors.\n\n\\subsection{The list of references}\n\\label{sec:ref_list}\n\nIt is possible to enter references manually using the usual \\LaTeX\\ commands, but we strongly encourage authors to use \\bibtex\\ instead.\n\\bibtex\\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.\nAn MNRAS \\bibtex\\ style file, \\verb'mnras.bst', is distributed as part of this package.\nThe rest of this section will assume you are using \\bibtex.\n\nReferences are entered into a separate \\verb'.bib' file in standard \\bibtex\\ formatting.\nThis can be done manually, or there are several software packages which make editing the \\verb'.bib' file much easier.\nWe particularly recommend \\textsc{JabRef}\\footnote{\\url{http:\/\/jabref.sourceforge.net\/}}, which works on all major operating systems.\n\\bibtex\\ entries can be obtained from the NASA Astrophysics Data System\\footnote{\\label{foot:ads}\\url{http:\/\/adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.\nSimply copy this into your \\verb'.bib' file or into the `BibTeX source' tab in \\textsc{JabRef}.\n\nEach entry in the \\verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.\nSimply cite it 
in the usual way, as described in section~\\ref{sec:cite}, using the specified key.\nCompile the paper as usual, but add an extra step to run the \\texttt{bibtex} command.\nConsult the documentation for your compiler or latex distribution.\n\nCorrect formatting of the reference list will be handled by \\bibtex\\ in almost all cases, provided that the correct information was entered into the \\verb'.bib' file.\nNote that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.\nIf in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\\ref{foot:itas}}$ for the current guidelines on how to format the list of references.\n\n\\section{Appendices and online material}\n\nTo start an appendix, simply place the \\verb'","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nIn \\cite{Cr}, problem $4.6,$ T. Tao and B. Green ask how small can $|A+A|$ be for an $n$-element subset $A$ of the set of squares of integers. This question originates from the paper \\cite{Ch} where it is shown that $|A+A| > c n(n{\\rm ln}(n))^{\\frac{1}{12}}$ for some absolute constant $c>0.$ In our paper we obtain upper bounds on how small $|A+A|$ can be. See \\cite{Ci} for another paper investigating this problem.\n\\\\\n\\\\\nWe generalise this problem to arbitrary rings and give satisfactory answers for fields of a prime order. We also show connections between the original problem and the question of the existence of perfect cuboids.\n\n\\section{Definitions and notation}\nThe sumset (also known as the Minkowski sum) of two sets, $A$ and $B$, is defined as the set of all possible results from summing an element of $A$ and an element of $B$. We are primarily concerned with the case when A is a finite set of the natural numbers and when $B=A$.\n\n\\noindent Let $A$ be a finite subset of the natural numbers. 
We can then define the sumset of $A$ as\n$$A+A:=\\{a+b|a,b\\in A\\}.$$ \nDefine $S(R)$ to be the set of squares in a given ring $R$,\n$$S(R):=\\{a^{2}|a\\in R\\}.$$\nWe aim to minimise the sumset of finite subsets of $S(R)$,\n$$N_{n}(R):=\\inf_{A\\subset S(R),|A|=n}|A+A|,$$\nwhere the size of the subset is $n\\in\\mathbb{N}$. We will provide upper bounds for $N_{n}(R)$ and compute $N_{n}(\\mathbb{Z})$ for sufficiently small $n$.\n\n\\section{Sumsets in integral domains}\n\nWe are primarily interested in estimating $N_{n}(\\mathbb{Z}).$ However, it is easier to work over $\\mathbb{Q}$ than $\\mathbb{Z}$. In this section we show that\n$$N_{n}(\\mathbb{Z})=N_{n}(\\mathbb{Q}).$$\nFurthermore we show that it is possible to generalise this property to integral domains and their fields of fractions.\n\\begin{definition}\nLet $R$ be a commutative ring in which $a\\cdot b\\neq 0$ whenever $a\\neq 0$ and $b\\neq 0.$ We say that $R$ is an integral domain.\n\\end{definition}\n\n\\begin{definition}\nLet $R$ be an integral domain. For $a,b\\in R$ with $b\\neq 0,$ let $\\frac{a}{b}$ denote the equivalence class of fractions, where $\\frac{a}{b}$ is equivalent to $\\frac{c}{d}$ if and only if $ad = bc$. The field of fractions of $R$ is the set of all such equivalence classes with the obvious operations.\n\\end{definition}\n\n\\begin{theorem}\nLet $R$ be an integral domain. Then there exists a field $F$ such that\n$$N_{n}(R)=N_{n}(F)$$\nfor all $n\\in\\mathbb{N}.$ Furthermore, if $R$ has characteristic $c$ then $F$ can be chosen to have characteristic $c$ also.\n\\end{theorem}\n\\begin{proof}\nLet $F$ be the field of fractions of $R$. 
Note that $F$ and $R$ have the same characteristic.\n\\\\\n\\\\\nLet $n\\in\\NN.$\n\\\\\n\\\\\nSince $R\\subset F$ it is clear that \n$$N_{n}(R)\\geq N_{n}(F).$$\nLet $A\\subset S(F)$ such that $|A| = n$ and $|A+A| = N_{n}(F).$ Suppose that\n$$A = \\left\\{\\left(\\frac{a_{1}}{b_{1}}\\right)^{2},\\ldots,\\left(\\frac{a_{n}}{b_{n}}\\right)^{2}\\right\\},$$\nwhere $a_{1},\\ldots,a_{n},b_{1},\\ldots,b_{n}\\in R.$\nConsider the set\n$$A' = \\left\\{(a_{1}\\times b_{2}\\times\\cdots\\times b_{n})^{2},\\ldots, (a_{n}\\times b_{1}\\times\\cdots\\times b_{n-1})^{2} \\right\\}.$$\nThen\n$$|A'+A'| = N_{n}(F).$$\nThus \n$$N_{n}(R)\\leq N_{n}(F).$$\n\\end{proof}\n\\begin{lemma}\nLet $F$ be a field and $G$ a subfield of $F$. Then\n$$N_{n}(G)\\geq N_{n}(F)$$\nfor all $n\\in\\mathbb{N}.$\n\\end{lemma}\n\\begin{proof}\nLet $n\\in\\NN.$ Let $A\\subset S(G)$ such that $|A| = n$ and $|A+A| = N_{n}(G).$\n\\\\\n\\\\\nSince $S(G)\\subset S(F)$ we have that $A\\subset S(F),$ and hence $N_{n}(F)\\leq |A+A| = N_{n}(G)$. \n\\end{proof}\n\\begin{corollary}\nLet $R$ be an integral domain of characteristic $0$. Then\n$$N_{n}(R)\\leq N_{n}(\\mathbb{Z}).$$\n\\end{corollary}\n\\begin{proof}\nLet $F$ be a field of characteristic $0$ such that $N_{n}(R)=N_{n}(F)$. Since $F$ has characteristic $0$ it contains a subfield isomorphic to $\\mathbb{Q}.$ The result easily follows. \n\\end{proof}\n\n\\section{Properties of the elliptic curve $E_{d}:\\:y^{2}=x^{3}-d^{2}x$}\nIn this section we use results on arithmetic progressions of squares from \\cite{Co}.\n\\\\\n\\\\\nLet $d\\in\\mathbb{N}.$ Consider the elliptic curve\n$$E_{d}:\\:y^{2}=x^{3}-d^{2}x.$$\nDefine \n$$E_{d}[\\mathbb{Q}]=\\{(x,y)\\in E_{d}|x,y\\in\\mathbb{Q}\\}.$$\nThe set $E_{d}[\\mathbb{Q}]$ is deeply connected to locating arithmetic progressions of 3 rational squares. 
We now demonstrate this.\n\\begin{lemma}\nLet $P=(x,y)\\in E_{d}[\\mathbb{Q}]$ and set\n$$a=\\frac{x^{2}-2dx-d^2}{2y},$$\n$$b=\\frac{x^{2}+d^2}{2y},$$\n$$c=\\frac{-x^{2}-2dx+d^2}{2y}.$$\nThen $\\{a^{2},b^{2},c^{2}\\}$ is an arithmetic progression of length 3 in $\\mathbb{Q}$ with common difference $d$.\n\\end{lemma}\n\n\\begin{proof}\nObserve that\n\\begin{align*}\nb^{2}-a^{2} & =\\left(\\frac{x^{2}+d^2}{2y}\\right)^{2}-\\left(\\frac{x^{2}-2dx-d^2}{2y}\\right)^{2}\n\\\\\n& = \\left(\\frac{2x^{2}-2dx}{2y}\\right)\\left(\\frac{2d^2+2dx}{2y}\\right)\n\\\\\n& = \\left(\\frac{x^{2}-dx}{y}\\right)\\left(\\frac{d^2+dx}{y}\\right)\n\\\\\n& = \\frac{dx^3-d^{3}x}{y^2}\n\\\\\n& = \\frac{d(x^3-d^{2}x)}{x^3-d^{2}x}\n\\\\\n& = d.\n\\end{align*}\nSimilarly,\n\\begin{align*}\nc^{2}-b^{2} & =\\left(\\frac{-x^{2}-2dx+d^2}{2y}\\right)^{2}-\\left(\\frac{x^{2}+d^2}{2y}\\right)^{2}\n\\\\\n& = \\left(\\frac{2d^{2}-2dx}{2y}\\right)\\left(\\frac{-2x^2-2dx}{2y}\\right)\n\\\\\n& = \\left(\\frac{d^{2}-dx}{y}\\right)\\left(\\frac{-x^2-dx}{y}\\right)\n\\\\\n& = \\frac{dx^3-d^{3}x}{y^2}\n\\\\\n& = \\frac{d(x^3-d^{2}x)}{x^3-d^{2}x}\n\\\\\n& = d.\n\\end{align*}\n\\end{proof}\n\n\\begin{lemma}\nLet $a,b,c\\in\\mathbb{Q}$ be such that $\\{a^{2},b^{2},c^{2}\\}$ is an arithmetic progression of length 3 with common difference $d$. Then\n$$\\left(\\frac{d(c-b)}{a-b},\\frac{d^2(2b-a-c)}{(a-b)^{2}}\\right)\\in E_{d}[\\mathbb{Q}].$$\n\\end{lemma}\n\n\\begin{proof}\nOne simply has to verify that\n$$\\frac{d^{4}(2b-a-c)^{2}}{(a-b)^{4}}-\\frac{d^{3}(c-b)^{3}}{(a-b)^{3}}+\\frac{d^{3}(c-b)}{a-b}=0.$$\n\\end{proof}\nWe say that the points of $E_{d}[\\mathbb{Q}]$ and the arithmetic progressions in the above lemmas are associated to each other.\n\\\\\n\\\\\nLet $P=(x,y)\\in E_{d}[\\mathbb{Q}].$ Define \n$$P\\circ P=\\left(\\left(\\frac{x^{2}+d^2}{2y}\\right)^{2},Y\\right),$$\nwhere $Y$ is chosen such that $P\\circ P\\in E_{d}[\\mathbb{Q}].$\n\n\n\\section{Sumsets of squares in $\\mathbb{Z}$}\nWe now turn our attention to 
estimating $N_{n}(\\mathbb{Z})$. Let $N_{n} = N_{n}(\\mathbb{Z}) = N_{n}(\\mathbb{Q}).$\n\\\\\n\\\\\nIt is not difficult to show that\n$$2n-1\\leq N_{n}\\leq \\frac{n(n+1)}{2}.$$\nThis instantly gives that $N_{1}=1$ and $N_{2}=3.$ We also see that \n$$5\\leq N_{3}\\leq 6.$$\nTo compute the value of $N_{3}$ let \n$$A=\\{1,25,49\\}.$$ \nWe have\n$$A+A=\\{2,26,50,74,98\\}.$$\nIn this case $|A+A|=5,$ which shows that $N_{3}=5.$\n\\\\\n\\\\\nThis happens because $\\{1,25,49\\}$ is an arithmetic progression. It turns out that arithmetic progressions are an efficient way to make the sumset small; in fact, an arithmetic progression is how we attain the lower bound of $2|A|-1.$ \n\\\\\n\\\\\nFermat showed that there exists no arithmetic progression of 4 or more squares, so this method does not generalise. \n\\\\\n\\\\\nAlthough there exists no arithmetic progression of squares of length 4, we do have the following interesting set:\n$$A=\\{49,169,289,529\\}.$$\n$A$ consists of the squares in the length-5 arithmetic progression $49,169,289,409,529$ with the (non-square) fourth term removed. 
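The size of the sumset of this four-element set can be checked directly; a minimal sketch (the brute-force check below is ours, not part of the paper):

```python
# Check the sumset of A = {49, 169, 289, 529}: the squares of 7, 13, 17, 23,
# i.e. the length-5 arithmetic progression 49, 169, 289, 409, 529 (common
# difference 120) with the non-square fourth term 409 removed.
A = [49, 169, 289, 529]

# every element of A is a perfect square
assert all(round(a ** 0.5) ** 2 == a for a in A)

sumset = {a + b for a in A for b in A}
print(sorted(sumset))  # [98, 218, 338, 458, 578, 698, 818, 1058]
print(len(sumset))     # 8
```

The collisions $49+289=169+169$ and $49+529=289+289$ are what push the sumset below the trivial count of $\binom{4}{2}+4=10$.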
Since $|A+A|=8$ we obtain that $N_{4}=8$ (because $N_{4}=7$ would imply the existence of an arithmetic progression of 4 squares).\n\\\\\n\\\\\nLet $n\\in\\mathbb{N}.$ We will calculate an upper bound for $N_{3n}$ and then state similar results for $N_{3n+1}$ and $N_{3n+2}.$ \n\\\\\n\\\\\nBefore we begin, note the crude upper bound we already have:\n$$N_{3n}\\leq \\frac{3n(3n+1)}{2}\\sim \\frac{9}{2}n^{2}.$$\nWe improve this as follows.\n\\begin{theorem}\\label{ub}\nLet $n\\in\\mathbb{N}.$ Then\n$$N_{3n}\\leq \\frac{5n(n+1)}{2}\\sim\\frac{5}{2}n^{2}.$$\n\\end{theorem}\n\\begin{proof}\nWe will construct a set $A\\subset S(\\mathbb{Q})$ with $|A|=3n$ such that \n$$|A+A|\\leq \\frac{5n(n+1)}{2}.$$\nWe first prove the case $n=1.$ Let $A_{1}=\\{1,25,49\\}.$ Then $|A_{1}+A_{1}|=5.$\n\\\\\n\\\\\nNow suppose that $n\\geq 2.$ \n\\\\\n\\\\\nLet $P_{1}$ be the point of $E_{24}[\\mathbb{Q}]$ associated to $A_{1}.$\n\\\\\n\\\\\nLet $P_{i+1}=P_{i}\\circ P_{i}$ for $1\\leq i\\leq n-1.$ Let $A_{i}$ be the arithmetic progression associated with $P_{i}$ for each $2\\leq i \\leq n.$\n\\\\\n\\\\\nLet \n$$A=\\bigcup_{i=1}^{n}A_{i}.$$\nWe have that $A\\subset S(\\mathbb{Q})$ and $|A|=3n.$ We claim that \n$$|A+A|\\leq\\frac{5n(n+1)}{2}.$$\nNote that if $B$ and $C$ are two arithmetic progressions of length 3 with the same common difference such that $B\\cap C=\\emptyset$ then $|B+C|=5.$ \n\\\\\n\\\\\nTherefore\n\\begin{align*}\n|A+A| & = \\left|\\left(\\bigcup_{i=1}^{n}A_{i}\\right)+\\left(\\bigcup_{i=1}^{n}A_{i}\\right)\\right|\n\\\\\n& \\leq \\sum_{k=1}^{n}|A_{k}+A_{k}|+\\sum_{i=1}^{n-1}\\sum_{j>i}|A_{i}+A_{j}|\n\\\\\n& = \\frac{5n(n+1)}{2}.\n\\end{align*}\n\\end{proof}\nUsing a similar technique we obtain the following result.\n\\begin{theorem}\\label{ub2}\nLet $n\\in\\mathbb{N}.$ Then\n$$N_{3n}\\leq\\frac{5n^{2}+n}{2},$$\n$$N_{3n+1}\\leq\\frac{5n^{2}+9n+2}{2},$$\n$$N_{3n+2}\\leq\\frac{5n^{2}+13n+6}{2}.$$\n\\end{theorem}\n\\section{Possible ways for small sumsets}\nIt is a famous open problem whether or 
not a perfect cuboid exists. A perfect cuboid is a cuboid with integer sides, integer face diagonals and an integer-valued long diagonal. It is conjectured that such an object does not exist. However, if a perfect cuboid does exist, then there exist integers $a,b,c,d,e,f,g$ such that\n\\begin{align*}\na^{2}+b^{2} & =d^{2}\n\\\\\na^{2}+c^{2} & =e^{2}\n\\\\\nb^{2}+c^{2} & =f^{2}\n\\\\\na^{2}+b^{2}+c^{2} & =g^{2}.\n\\end{align*}\nThis is a lot of additive structure amongst squares. In fact, the existence of a perfect cuboid is closely related to the existence of a $3\\times 3$ magic square in which every entry is a square, and to the notion of a generalised arithmetic progression.\n\\begin{definition}\nLet $a,n_{1},...,n_{d},N_{1},...,N_{d}\\in\\mathbb{N}.$ Set\n$$A=\\{a+m_{1}n_{1}+...+m_{d}n_{d}\\,|\\,0\\leq m_{i}\\leq N_{i}\\}.$$\nWe say that $A$ is a generalised arithmetic progression. More specifically, $A$ is an $(N_{1}+1)\\times\\ldots\\times(N_{d}+1)$ generalised arithmetic progression of dimension $d$.\n\\end{definition}\nA $3\\times 3$ generalised arithmetic progression is equivalent to a $3\\times 3$ magic square: the nine entries of any $3\\times 3$ magic square form such a progression. Let $A$ be a (proper) $3\\times 3$ generalised arithmetic progression. Then $|A|=9$ and $|A+A|=25.$ Note that this is $5$ lower than the upper bound for $N_{9}$ obtained in Theorem~\\ref{ub}.\n\\\\\n\\\\\nThe trick to calculating $N_{4}$ was finding 4 elements inside an arithmetic progression of length 5. 
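The size claims for GAPs are easy to check computationally. A small Python sketch, using our own generic choice of parameters $a=1$, $n_{1}=1$, $n_{2}=10$ for a $3\\times 3$ GAP:

```python
from itertools import product

def gap(a, steps, bounds):
    # {a + m_1*n_1 + ... + m_d*n_d : 0 <= m_i <= N_i}
    return {a + sum(m * n for m, n in zip(ms, steps))
            for ms in product(*(range(N + 1) for N in bounds))}

A = gap(1, steps=(1, 10), bounds=(2, 2))   # a 3x3 GAP: 0 <= m_1, m_2 <= 2
S = {x + y for x in A for y in A}
print(len(A), len(S))  # 9 25
```

Since the two step sizes here are generic, $|A|=9$ and $|A+A|=25$; degenerate choices of the $n_{i}$ can only make both sets smaller.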
Although a $3\\times 3$ GAP of squares has not been found, a $3\\times 3$ GAP with 7 of its entries square has been found.\n\\begin{center}\n{\\begin{picture}(180,180)(0,0)\n\\put(0,0){\\line(0,1){180}}\n\\put(60,0){\\line(0,1){180}}\n\\put(120,0){\\line(0,1){180}}\n\\put(180,0){\\line(0,1){180}}\n\\put(0,0){\\line(1,0){180}}\n\\put(0,60){\\line(1,0){180}}\n\\put(0,120){\\line(1,0){180}}\n\\put(0,180){\\line(1,0){180}}\n\\put(20,150){$373^{2}$}\n\\put(80,150){$289^2$}\n\\put(140,150){$565^2$}\n\\put(20,90){360721}\n\\put(90,90){$425^2$}\n\\put(140,90){$23^2$}\n\\put(20,30){$205^2$}\n\\put(80,30){$527^2$}\n\\put(140,30){222121}\n\\end{picture}}\n\\end{center}\n\nThis gives us a candidate to lower our upper bound for $N_{7}.$ Theorem \\ref{ub2} gives us that $N_{7}\\leq 20.$ If we let $A$ be the 7 squares in the above magic square, then $|A+A|=19,$ which shows that our upper bound is not optimal.\n\n\n\\section{Fields of prime order}\nWe now turn our attention to sets of squares inside a finite field, in particular fields of prime order. The following inequality gives us a lower bound for $N_{n}(\\mathbb{Z}_{p}).$\n\\begin{theorem}[Cauchy-Davenport inequality]\nIf $p$ is a prime and $A$ is a set in $\\mathbb{Z}_{p},$ then\n$$|A+A|\\geq \\min(2|A|-1,p).$$\n\\end{theorem}\nWe will show that this bound can be attained for all $n\\in\\mathbb{N}$ for $p$ sufficiently large. We do this by applying an inverse theorem to the Cauchy-Davenport inequality.\n\\begin{theorem}[Vosper]\nLet $p$ be a prime and $A$ a set in $\\mathbb{Z}_{p}$ such that $|A|>2$ and $|A+A|\\leq p-2.$ Then $|A+A|=2|A|-1$ if and only if $A$ is an arithmetic progression.\n\\end{theorem}\n\n\\begin{theorem}[Van der Waerden, Gowers]\\label{vdw}\n Let $r,k\\in\\mathbb{N}.$ Then there exists $N(r,k)\\in\\mathbb{N}$ such that if $\\{1,2,...,N(r,k)\\}$ is expressed as the disjoint union of non-empty sets, $\\{A_{j}\\}_{j=1}^{r},$ then there exists a $j$ such that $A_{j}$ contains an arithmetic progression of length $k$. 
Furthermore $N(r,k)\\leq 2^{2^{r^{2^{2^{9+k}}}}}.$\n\\end{theorem}\nWe now prove the main result of this section.\n\\begin{theorem}\nLet $n\\in\\mathbb{N}$ and let $p>2^{2^{2^{2^{2^{9+n}}}}}$ be a prime number. Then\n$$N_{n}(\\mathbb{Z}_{p})=2n-1.$$\n\\end{theorem}\n\\begin{proof}\nLet $R=S(\\mathbb{Z}_{p})\\setminus\\{0\\}$ and $T=\\mathbb{Z}_{p}\\setminus S(\\mathbb{Z}_{p}).$ Then $R$ and $T$ are disjoint non-empty sets with $R\\cup T=\\{1,2,...,p-1\\}.$ Therefore, by Van der Waerden's theorem (with $r=2$), either $R$ or $T$ contains an arithmetic progression of length $n$. Suppose that $T$ contains such a progression and denote it by $P.$ Let $q=\\min P.$ Then one can show that $q\\cdot P$ is a subset of $R$ and is an arithmetic progression of length $n$. Therefore $R$ contains an arithmetic progression of length $n$, which we denote by $Q$. By Vosper's theorem we have\n$$|Q+Q|=2n-1.$$\nThis shows that $N_{n}(\\mathbb{Z}_{p})\\leq 2n-1.$ By the Cauchy-Davenport inequality we have that $N_{n}(\\mathbb{Z}_{p})\\geq 2n-1$. This completes the proof.\n\\end{proof}\nNote that the lower bound for $p$ given above is far larger than necessary. For small values of $n$ we can significantly improve it: applying the results on arithmetic progressions of quadratic residues from \\cite{Br}, we obtain the following.\n\\begin{theorem}\n$$N_{5}(\\mathbb{Z}_{p})=9\\:\\:\\text{for}\\:\\:p\\geq 41,$$\n$$N_{6}(\\mathbb{Z}_{p})=11\\:\\:\\text{for}\\:\\:p\\geq 149,$$\n$$N_{7}(\\mathbb{Z}_{p})=13\\:\\:\\text{for}\\:\\:p\\geq 619,$$\n$$N_{8}(\\mathbb{Z}_{p})=15\\:\\:\\text{for}\\:\\:p\\geq 1087,$$\n$$N_{9}(\\mathbb{Z}_{p})=17\\:\\:\\text{for}\\:\\:p\\geq 3391.$$\n\\end{theorem}\n\nTherefore, for small $n$, we can calculate $N_{n}(\\mathbb{Z}_{p})$ for all $p$. Below are the values for $n=5$ and $n=6$, calculated by computer. 
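The computed values below can be reproduced by brute force. A Python sketch (here we take $S(\\mathbb{Z}_{p})$ to be the set of squares modulo $p$ including $0$, as the notation $S(\\mathbb{Z}_{p})\\setminus\\{0\\}$ in the preceding proof suggests):

```python
from itertools import combinations

def min_sumset(n, p):
    # minimum of |A+A| over all n-element subsets A of the squares mod p
    squares = sorted({x * x % p for x in range(p)})
    return min(len({(a + b) % p for a in A for b in A})
               for A in combinations(squares, n))

print(min_sumset(5, 11), min_sumset(5, 13), min_sumset(5, 17))  # 10 11 9
```

For $p=17$ the minimum $9=2\cdot5-1$ is attained by the arithmetic progression $\{15,16,0,1,2\}$ of squares modulo $17$.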
\n\\begin{theorem}\n$N_{5}(\\mathbb{Z}_{p})=\n\\begin{cases}\n9 & \\text{for } p=17,23\\:\\:\\text{and}\\:\\:p\\geq 41 \\\\\n10 & \\text{for } p=11,19,29,31,37\\\\\n11 & \\text{for } p=13.\n\\end{cases}$\n\\end{theorem}\n\\begin{theorem}\n$N_{6}(\\mathbb{Z}_{p})=\n\\begin{cases}\n11 & \\text{for } p=11,53,61,67,71,73,79,83,89,97,101,103,\\\\\n&107,109,127,131,137\\:\\:\\text{and}\\:\\:p\\geq 149 \\\\\n12 & \\text{for } p=17,23,41,43,47,113,139\\\\\n13 & \\text{for } p=13,19,31,37,59\\\\\n14 & \\text{for } p=29.\n\\end{cases}$\n\\end{theorem}\n\n\n\\section{Remarks}\nThere are still many problems in this area left to answer. There exists a $2\\times 3$ GAP of squares. Does there exist a $3\\times 3$ GAP? A $2\\times 2\\times 2$ GAP? A $2\\times\\cdots\\times 2$ GAP? If we can find arbitrarily long $2\\times\\cdots\\times 2$ GAPs of squares, then we can show that $N_{n}\\leq K\\cdot n^{2-\\varepsilon}$ for some constant $K$ and some $\\varepsilon>0.$\n\\\\\n\\\\\nWhat happens for infinite integral domains of finite characteristic? Embedding a field of prime order is very unsatisfactory here, as it gives no information about the values of $N_{n}(R)$ beyond a certain point.\n\\\\\n\\\\\nWe also need to look more at fields with a prime power number of elements. Consider $\\mathbb{F}_{9}.$ There exists a copy of $\\mathbb{Z}_{3}\\subset\\mathbb{F}_{9}$ consisting entirely of squares. 
Since $\\mathbb{Z}_{3}$ is closed under addition, we obtain\n$$|\\mathbb{Z}_{3}+\\mathbb{Z}_{3}|=|\\mathbb{Z}_{3}|=3,$$\nwhich gives $N_{3}(\\mathbb{F}_{9})=3<5.$\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Appendix}\nIn this appendix, we list the statements of our Theorems and their formal proofs.\nIn~\\cref{sec:minusedge} we prove that the Complete graph is locally optimal\n (\\cref{thm:minusedge}).\nIn~\\cref{sec:bipartite} we prove that for the (complete) Bipartite graph $B_N$ both \n $\\rho(B_N)$ and $\\fp^\\star(B_N)$ are within a constant factor of $1\/N$ (\\cref{thm:bipartite}).\nFinally, in~\\cref{sec:app-1d-lattices} we prove that for 1-dimensional lattices with dispersal radii 2 and 1,\nthe fixation probability $\\fp^{\\operatorname{1D}}_N(4,2)$ is bounded away from 0 (\\cref{thm:1d-lattices}).\n\n\n \n %\n\n %\n\n\\section{The Complete graph is locally optimal}\\label{sec:minusedge}\nHere we show that when it comes to $\\rho(G_N)$ and $\\fp^\\star(G_N)$, the Complete graph is locally optimal.\nThat is, we show that the graph $M_N$ obtained from $K_N$ by removing a single edge satisfies\n$\\rho(M_N)=\\rho(M_N,K_N)<1\/N$ and $\\fp^\\star(M_N)=\\rho(K_N,M_N)>1\/N$.\n\n\\begin{theorem}[Complete graph is locally optimal]\\label{thm:minusedge}\nFix $N\\ge 2$ and let $M_N$ be a graph obtained from the Complete graph $K_N$ by removing a single edge.\nThen\n\\[ \\rho(M_N,K_N)=\\frac{N-2}{(N-1)^2}<\\frac1N \\qquad\\text{and}\\qquad \\rho(K_N,M_N)=\\frac{(N-1)^{2}+1}{N(N-1)^{2}}>\\frac1N.\n\\]\n\\end{theorem}\n\n\\begin{proof} Fix $N\\ge 2$. 
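As a numerical sanity check, the claimed values can be verified for small $N$ by solving the full $2^N$-state Markov chain of the two-graph Moran process. A Python sketch (the adjacency-dict encoding and the value-iteration solver are our own):

```python
import itertools

def fixation_probability(adj_mut, adj_res, sweeps=5000):
    # Neutral Moran Birth-death process on two overlaid graphs: mutant
    # offspring disperse along adj_mut, resident offspring along adj_res
    # (dicts: node -> list of neighbours).  Returns the fixation probability
    # of a single mutant placed at a uniformly random node, computed by
    # value iteration over all 2^n configurations.
    n = len(adj_mut)
    states = list(itertools.product((0, 1), repeat=n))

    def transitions(s):
        out = {}
        for i, occ in enumerate(s):
            nbrs = (adj_mut if occ else adj_res)[i]
            for j in nbrs:
                t = list(s)
                t[j] = occ
                t = tuple(t)
                out[t] = out.get(t, 0.0) + 1.0 / (n * len(nbrs))
        return out

    trans = {s: transitions(s) for s in states}
    phi = {s: 0.0 for s in states}
    phi[(1,) * n] = 1.0  # all-mutant configuration is absorbing with value 1
    for _ in range(sweeps):
        for s in states:
            if 0 < sum(s) < n:  # update transient states only
                phi[s] = sum(p * phi[t] for t, p in trans[s].items())
    singles = [tuple(int(k == i) for k in range(n)) for i in range(n)]
    return sum(phi[s] for s in singles) / n

N = 4
K = {i: [j for j in range(N) if j != i] for i in range(N)}        # K_4
M = {i: [j for j in K[i] if {i, j} != {0, 1}] for i in range(N)}  # K_4 minus {0,1}
print(round(fixation_probability(M, K), 6))  # 0.222222
```

For $N=4$ the printed value matches $(N-2)\/(N-1)^{2}=2\/9$.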
First we focus on $\\rho(M_N)=\\rho(M_N,K_N)$; to lighten notation, we write $n=N$ throughout the proof.\nDenote by $\\varphi(a,b)$ the fixation probability starting from a configuration with\n$a$ mutants among the $n-2$ fully connected vertices\nand $b$ mutants among the other two vertices that miss one edge.\nWe claim that\n\\[\\varphi(a,b)=\\frac{(a+b)(n-2)+\\frac12b(b-1)}{(n-1)^2}.\n\\]\n\n\nClearly, the formula satisfies $\\varphi(0,0)=0$ and $\\varphi(n-2,2)=1$. Therefore it suffices to check that, given an arbitrary configuration (that is, any $0\\le a\\le n-2$ and $0\\le b\\le 2$), the expected value of the formula after a single transition does not change.\n\nThe transition probabilities are as follows (recall that mutant offspring disperse along $M_N$, where the two special vertices have degree $n-2$, while resident offspring disperse along $K_N$, where every vertex has degree $n-1$):\n\n\\begin{enumerate}\n\\item[(i)] $p_{b+}\\equiv\\mathbb{P}[(a,b)\\to (a,b+1)]=\n\\frac{a}{n}\\cdot \\frac{2-b}{n-1}$\n\\item[(ii)] $p_{a+}\\equiv\\mathbb{P}[(a,b)\\to (a+1,b)]=\n\\frac{a}{n}\\cdot \\frac{n-2-a}{n-1} + \\frac{b}{n}\\cdot \\frac{n-2-a}{n-2}$\n\\item[(iii)] $p_{b-}\\equiv\\mathbb{P}[(a,b)\\to (a,b-1)]=\n\\frac{n-a-b}{n}\\cdot \\frac{b}{n-1}$\n\\item[(iv)] $p_{a-}\\equiv\\mathbb{P}[(a,b)\\to (a-1,b)]=\n\\frac{n-a-b}{n}\\cdot \\frac{a}{n-1}$\n\\end{enumerate}\n\nIgnoring the shared denominator $(n-1)^2$, the expected change in the value produced by the formula in the respective cases is as follows:\n\\begin{enumerate}\n\\item[(i)] $\\Delta\\varphi_{b+}= +\\,n-2+b$\n\\item[(ii)] $\\Delta\\varphi_{a+}= +\\,n-2$\n\\item[(iii)] $\\Delta\\varphi_{b-}= -(n-2+b-1)$\n\\item[(iv)] $\\Delta\\varphi_{a-}= -(n-2)$\n\\end{enumerate}\n\nDenoting $I=\\{b+,a+,b-,a-\\}$ we compute\n\n\\begin{align*}\n\\Delta\\varphi&\\equiv \\sum_{i\\in I} p_i\\cdot\\Delta\\varphi_i=\n\\frac1{n(n-1)}\\cdot\\Big[ a(2-b)(n-2+b) + a(n-2-a)(n-2) \\\\\n&+ b(n-2-a)(n-1) \n- (n-a-b)b(n-2+b-1) -(n-a-b)a(n-2)\n\\Big]\n\\end{align*}\nGrouping the terms with $n$ raised to the same power, the terms in the square brackets on the right-hand side can be rearranged as $n^2\\cdot X+n\\cdot Y+Z$, where $X=a+b-b-a=0$,\n\n\\begin{align*}\nY&= 
(2a-ab)+(-2a-a^2-2a) + (-2b-ab-b) \\\\\n &\\hspace{1.87cm}- (-ab-b^2-2b+b^2-b) - (-2a-a^2-ab)\\\\\n&= 0,\n\\end{align*}\nand\n\n\\begin{align*}\nZ&= (-ab^2+4ab-4a) +(4a+2a^2) + (ab+2b) +(b^3+ab^2-3b^2-3ab) -(2a^2+2ab)\\\\\n&=b^3-3b^2+2b=b(b-1)(b-2).\n\\end{align*}\nSince $Z=0$ for any admissible integer $b$ (recall $0\\le b\\le 2$), we are done.\n\nRegarding $\\fp^\\star(M_N)$, a completely analogous proof\nestablishes the formula\n\\[\\varphi(a,b)=\\frac{(a+b)(n-2)+b(3-b)\/2}{(n-1)^2},\n\\]\nwhich, averaging over the uniformly random position of the initial mutant, implies the desired\n\\[\n\\fp^\\star(M_N)=\\frac{n-2}{n}\\cdot\\varphi(1,0)+\\frac{2}{n}\\cdot\\varphi(0,1)=\\frac{(n-2)^{2}+2(n-1)}{n(n-1)^{2}} = \\frac{(n-1)^{2}+1}{n(n-1)^{2}}.\\qedhere\n\\]\n\\end{proof}\n\n\n\n\n\\section{Bipartite graph $B_N$ is within a constant factor of $K_N$}\\label{sec:bipartite}\nRecall that for $N$ even we denote by $B_N$ the (complete) Bipartite graph with parts of sizes $N\/2$ and $N\/2$.\nWe prove that for $N$ large, both $\\rho(B_N)$ and $\\fp^\\star(B_N)$ are within a constant factor of $1\/N$.\nRecall that numerical experiments in the main text suggest that $\\rho(B_N)\\approx 0.82\/N$ and $\\fp^\\star(B_N)\\approx 1.1\/N$, as $N\\to\\infty$.\n\\begin{theorem}[Bipartite graph $B_N$]\\label{thm:bipartite} Fix an even $N\\ge 2$. Then\n\\[\n\\rho(B_N)> \\frac1{e-1}\\cdot\\frac1N \\quad\\text{and}\\quad \\fp^\\star(B_N)<\\frac e{e-1}\\cdot\\frac1N.\n\\]\n\\end{theorem}\n\\begin{proof}\nFirst we bound $\\rho(B_N)$.\nConsider a time point at which there are $n$ mutants in total: $k$ of them in one part and $n-k$ in the other part. 
The probability $p^+(k,n-k)$ that in the next time step we gain a mutant equals\n\\begin{align*}\np^+(k,n-k) &=\\frac kN\\cdot\\frac{N\/2-(n-k)}{N\/2} +\\frac{n-k}N\\cdot\\frac{N\/2-k}{N\/2} \\\\\n&= \\frac{ k(N-2n+2k)+(n-k)(N-2k)}{N^2}\\\\\n&=\\frac{n\\cdot N-4k(n-k)}{N^2}.\n\\end{align*}\nSince $4k(n-k)\\le n^2$ for any $k=0,\\dots,n$ (and with equality only for $k=n\/2$), we can bound\n\\[\np^+(k,n-k)\\ge \\frac{n\\cdot N-n^2}{N^2}=\\frac{n(N-n)}{N^2}.\n\\]\nOn the other hand, the probability $p^-(k,n-k)$ that in the next time step we lose a mutant equals\n\\[\np^-(k,n-k)=\\frac{N-n}N\\cdot\\frac{n}{N-1}\n\\]\nand thus\n\\[ \\frac{p^-(k,n-k)}{p^+(k,n-k)}\\le \\frac N{N-1}\\equiv t.\n\\]\nSince this bound is independent of $k$ and $n$, plugging it into the standard formula for the absorption probability of a 1-dimensional Markov chain we get\n\\[\n\\rho(B_N)\\ge \\frac1{1+ \\sum_{i=1}^{N-1} t^i} =\\frac{t-1}{t^N-1}.\n\\]\nSince $t-1=1\/(N-1)$ and\n\\[\nt^N=\\left(\\frac N{N-1}\\right)^N=\\left(1+\\frac 1{N-1}\\right)^N\\le e\\cdot\\left(1+\\frac 1{2(N-1)}\\right)< e+\\frac{e-1}{N-1},\n\\]\nwhere $e\\doteq2.718$ is Euler's number (the first inequality can be checked for all $N\\ge2$; the second uses $e\/2<e-1$), we get $t^N-1<(e-1)\\cdot\\frac{N}{N-1}$ and thus\n\\[\n\\rho(B_N)> \\frac{1\/(N-1)}{(e-1)\\cdot N\/(N-1)}=\\frac1{e-1}\\cdot \\frac1N>\\frac{0.58}N.\n\\]\n\nRegarding $\\fp^\\star(B_N)$, a completely analogous proof yields\n\n\\begin{align*}\np^+(k,n-k) &= \\frac{n}N\\cdot\\frac{N-n}{N-1}\\quad\\text{and} \\\\\np^-(k,n-k) &= \\frac{n\\cdot N-4k(n-k)}{N^2}\\ge \\frac{n(N-n)}{N^2}.\n\\end{align*}\nTherefore\n\\[\n\\frac{p^-(k,n-k)}{p^+(k,n-k)}\\ge \\frac {N-1}{N} \\equiv t.\n\\]\nFinally, since $t^N=(1-1\/N)^N\\le 1\/e$, we get\n\\[\\fp^\\star(B_N) \\le \\frac{t-1}{t^N-1} \\le\\frac{1\/N}{1-1\/e} = \\frac e{e-1}\\cdot\\frac1N.\\qedhere\n\\]\n\\end{proof}\n\n\n\n\\section{1-D Lattices}\\label{sec:app-1d-lattices}\nRecall that for a fixed even integer $d$ we denote by $\\operatorname{Cir}^d_N$ the graph whose $N$ vertices, labelled $1,\\dots,N$, are arranged along a circle and each vertex is connected with $d\/2$ closest vertices clockwise and $d\/2$ 
closest vertices counter-clockwise.\nAlso, recall that given two connectivities $d_1$, $d_2$ we shorthand\n\\[\\fp^{\\operatorname{1D}}_N(d_1,d_2)=\\rho(\\operatorname{Cir}^{d_1}_N,\\operatorname{Cir}^{d_2}_N).\n\\]\nFigure~6\nfrom the main text suggests that when $d_1>d_2$, the expression $\\fp^{\\operatorname{1D}}_N(d_1,d_2)$ remains bounded away from 0 as $N\\to\\infty$.\nBelow we prove this in the special case $(d_1,d_2)=(4,2)$.\n\\begin{theorem}[1-D lattices]\\label{thm:1d-lattices} We have\n\\[ 0.138 < \\lim_{N\\to\\infty} \\fp^{\\operatorname{1D}}_N(4,2) < 0.34.\n\\]\n\\end{theorem}\n\\begin{proof}\nBy a \\textit{configuration} we mean a subset of vertices occupied by mutants.\nWe denote the possible configurations as a sequence of numbers (corresponding to blocks of consecutive mutants) and symbols ``$\\circ$'' (corresponding to individual residents).\nThat is, for instance the notation $k\\circ 1$ denotes a configuration with $k$ consecutive mutants, then one resident, then one more mutant (and residents before and after).\n\n\\smallskip\\noindent\\textit{Proof of the upper bound.\\ }\nThis is straightforward. We say that a step of the Moran process is \\textit{active} if it changes the configuration.\nGiven a single mutant, there are two possible active steps leading to immediate extinction (each occurs with probability equal to $\\frac1N\\cdot \\frac12$). On the other hand, there are four possible active steps where mutants reproduce (each occurs with probability equal to $\\frac1N\\cdot\\frac14$). 
Thus, in total, with probability $\\frac{2\/2}{2\/2+4\/4}=\\frac12$ the first active step results in the mutant extinction, hence $\\fp^{\\operatorname{1D}}_N(4,2)\\le 1\/2$.\n\nAccounting for trajectories that never reach a configuration with more than $m$ mutants, we can push this upper bound further down:\nFor instance, taking $m=2$, denote by $x$, $y$, $z$ the extinction probabilities from configurations 1, 2, $1\\circ1$, respectively.\nConditioning on the first active step (see~\\cref{fig:circulations-proof}a) we obtain:\n\\[ x=\\frac14(2\\cdot 1+y+z),\\quad y\\ge \\frac15(2x+3\\cdot 0), \\quad z\\ge \\frac17(4x+3\\cdot 0),\n\\]\nhence $x\\ge \\frac14(2+\\frac25x + \\frac47x)$. This rewrites as $x\\ge 35\/53$, hence $\\fp^{\\operatorname{1D}}_N(4,2)\\le 18\/53<0.34$ for any $N\\ge 5$.\n\n\\smallskip\\noindent\\textit{Proof of the lower bound.\\ }\nFor $k\\ge 1$ denote by $a_k$ the fixation probability from configuration $k$.\nSimilarly, for $k\\ge 2$ denote by $b_k$ the fixation probability from the configuration $k\\circ 1$.\nThen $a_0=0$ and\n\\[a_1=\\frac{ \\frac22a_0+\\frac24a_2+\\frac24b_2}{\\frac22+\\frac24+\\frac24} = \\frac14(a_2+b_2).\n\\]\n\n\\begin{figure}[h] \n\t\\centering\n\t\\includegraphics[width=\\linewidth]{fig-circulations-proof.pdf}\n \\caption{\\textbf{Proof of~\\cref{thm:1d-lattices}.}\n \\textbf{a,}~Upper bound.\n Boxes represent configurations.\n Mutants are shown as blue disks, residents as red crosses.\n Blue (red) transitions correspond to mutants (residents) reproducing.\n A fraction $a\/b$ denotes that there exist $a$ relevant edges, each pointing from a vertex with degree $b$.\n With a constant probability, the random evolutionary trajectory leads to mutant extinction without ever reaching a configuration with 3 or more mutants.\n \\textbf{b,}~Lower bound.\n We consider a process $M^\\downarrow$ where, any time a configuration below the dashed line is reached, we remove a mutant to instead reach one of the configurations above the 
dashed line.\n We then compute the fixation probability in $M^\\downarrow$, which is a lower bound.\n}\n\\label{fig:circulations-proof}\n\\end{figure}\n\nFor $k\\ge 2$ we have (see~\\cref{fig:circulations-proof}b)\n\\[a_k = \\frac{\\frac22 a_{k-1} + \\frac24b_k + \\frac44 a_{k+1}}{\\frac22+\\frac24+\\frac44} = \\frac15(2a_{k-1}+b_k+2a_{k+1}).\n\\]\n\nFor analogous expressions of $b_k$, we would need to introduce new variables $m_k$, $x_k$, $y_k$, $z_k$ to describe fixation probabilities from configurations listed below the dashed line.\nInstead of computing the fixation probability exactly, we bound it by considering a different process $M^\\downarrow$, in which mutants have a lower fixation probability than in the original process $M$.\nWe define $M^\\downarrow$ as follows:\nThe two processes coincide, except that when $M$ reaches any configuration below the dashed line,\nwe remove several mutants so as to obtain a configuration $a_i$ or $b_i$ (for some $i$), see~\\cref{fig:circulations-proof}b.\nFor instance, if the current configuration is $k\\circ1$ and the resident in the gap spawns an offspring that moves one place left, resulting in the configuration $(k-1)\\circ\\circ 1$, then we additionally remove the single separated mutant, reaching the configuration $k-1$, from which the mutants have fixation probability $a_{k-1}$.\nSince for any two configurations $C\\subseteq C'$ we have $\\rho(C)\\le \\rho(C')$, the fixation probability in $M^\\downarrow$ is indeed decreased, as compared to $M$.\n(We note that by considering a process $M^\\uparrow$ where we occasionally add mutants so as to only visit configurations $a_i$, $b_i$, one can obtain a stronger upper bound than the one presented above.)\n\nIn $M^\\downarrow$ we thus obtain\n\n\\begin{align*}\nb_k&= \\frac{ \\frac22a_k + \\frac12b_{k-1}+\\frac12a_{k-1} + \\frac34 b_k+\\frac24b_{k+1}+\\frac34a_{k+2} }{\\frac22+\\frac12+\\frac12+\\frac34+\\frac24+\\frac34} \\\\\n&=\\frac1{16}(2a_{k-1}+4a_k+3a_{k+2} + 
2b_{k-1}+3b_k+2b_{k+1})\n\\end{align*}\n\nIt remains to compute the fixation probability in $M^\\downarrow$, in the limit $N\\to\\infty$. We do this by a standard argument.\nConsider a ``potential function'' $\\varphi$ which assigns a positive real number to each configuration, defined by $\\varphi(k)=\\alpha^k$, $\\varphi(k\\circ1)=c\\cdot\\alpha^k$, where $\\alpha,c>0$ are positive real numbers (to be defined later).\nBy solving the system of 2 equations\n\n\\begin{align*}\n5 &= 2\/\\alpha + c + 2\\alpha \\\\\n16c &=2\/\\alpha+4+3\\alpha^2 + 2c\/\\alpha+3c+2c\\alpha\n\\end{align*}\nwe find values $\\alpha\\doteq0.860$, $c\\doteq0.954$ (the roots of certain degree-3 polynomials) for which the (expected) potential does not change in one step of the process\n(that is, the function $\\varphi$ is a martingale), except when at configuration $1$.\nNow for any initial configuration $x\\ne 1$, run the process starting from $x$ until it reaches either the configuration $1$ or the configuration $N$, and let $p_x$ be the probability that the former happens. Then\n\\[\n \\varphi(x) = \\mathbb{E}[\\varphi(\\text{configuration when the process ends})] = p_x\\cdot\\varphi(1) + (1-p_x)\\cdot\\varphi(N),\n\\]\nwhich rewrites as\n\\[ p_x = \\frac{\\varphi(x)-\\varphi(N)}{\\varphi(1)-\\varphi(N)}.\n\\]\nSince $\\alpha<1$ we have $\\varphi(N)\\to_{N\\to\\infty}0$, thus for $x\\in\\{2,1\\circ1\\}$ we can write\n\\[p_{2} \\to_{N\\to\\infty} \\frac{\\alpha^2}{\\alpha}=\\alpha \\quad\\text{and}\\quad p_{1\\circ1} \\to_{N\\to\\infty} \\frac{c\\cdot\\alpha}{\\alpha}=c.\n\\]\nNow using the expressions\n\\[a_2 = p_2\\cdot a_1 + (1-p_2)\\cdot 1 \\quad\\text{and}\\quad b_2 = p_{1\\circ1}\\cdot a_1 + (1-p_{1\\circ1})\\cdot 1,\n\\]\nand plugging them into $a_1=\\frac14(a_2+b_2)$ we finally obtain\n\\[ a_1 \\to_{N\\to\\infty} \\frac14\\big( (2-p_{2}-p_{1\\circ1}) + (p_{2}+p_{1\\circ1}) \\cdot a_1\\big),\n\\]\nhence\n\\[a_1 \\to_{N\\to\\infty} \\frac{2-p_{2}-p_{1\\circ1} }{4-p_{2}-p_{1\\circ1}} =\\frac{2-\\alpha-c}{4-\\alpha-c}>0.138. 
\\qedhere\n\\]\n\\end{proof}\n\n\n\\end{document}\n\n\n\n\n\n\\section*{Introduction}\n\nEvolutionary dynamics is the study of how different traits arise and disappear in a population of reproducing individuals.\nEach trait might confer a fitness advantage (or disadvantage) on its bearer,\nthus in turn altering the probability that the trait spreads through the population (an event called \\textit{fixation}) or disappears (\\textit{extinction}).\nBesides the fitness advantage, another important factor in determining the fate of a trait over time (its fixation or extinction)\nis the spatial structure of the population \\cite{durrett2008probability,nowak2006evolutionary,broom2014game,moran,nagylaki1992introduction}.\nFor instance, the population might be subdivided into ``islands'':\nAn offspring of a reproducing individual then typically stays in the same island, but occasionally it migrates to some nearby island.\nThe fixation probability of a trait then crucially depends on the dispersal pattern, that is, the migration rates among the islands.\nIncorporation of population structure into a model of selection dynamics substantially improves the descriptive power of the model ~\\cite{durrett2008probability,nagylaki1992introduction,pollak1966survival,nagylaki1980strong,whitlock1997effective,durrett1994importance,komarova2006spatial,santos2006evolutionary}.\n\nEvolutionary graph theory is a powerful framework for studying natural selection in population structures with arbitrarily complex dispersal patterns~\\cite{lieberman2005evolutionary,antal2006evolutionary,broom2008analysis,diaz2014approximating,adlam2015amplifiers,monk2018martingales,allen2017evolutionary}.\nOn an evolutionary graph (network), individuals occupy the nodes (vertices), and the edges (links) specify where the offspring can migrate.\nGraphs can represent spatial structures, contact networks in epidemiology, social networks, and phenotypic or genotypic structures in biological 
populations~\\cite{lieberman2005evolutionary, santos2005scale, keeling2011modeling, szabo2007evolutionary, castellano2009statistical,perc2013evolutionary}.\nThe question is then: How does a graph structure affect the fixation probability of a new mutant introduced into a background population of residents?\nExtensive research over the past decade has produced many remarkable population structures with various desirable properties~\\cite{broom2011stars,mertzios2013natural,Galanis17,tkadlec2019population,allen2021fixation}.\nAs one example, consider a mutation that increases the reproduction rate of the affected individual.\nPopulation structures that increase the fixation probability of such mutations, as compared to the baseline case of unstructured (well-mixed) populations, are known as amplifiers of selection.\nMany amplifiers of selection are known, both simple ones and strong ones~\\cite{monk2014martingales,pavlogiannis2018construction,Goldberg19,tkadlec2021fast}.\n\n\nIn this work, we consider mutations that do not change the reproductive rate of the affected individual, but rather its motility potential.\nIn nature, an altered motility potential could arise in a variety of scenarios.\nWe give three examples.\n\nFirst, consider a species occupying a region that is split by a geographical barrier into two parts.\nIf the mutation allows the offspring to successfully cross the barrier, the \nmutants will perceive the population structure as being close to well-mixed, whereas\nthe residents will continue perceiving it as being split into two parts (islands).\n\nAs a second example, consider structured multicellular organisms.\nThere, cells are arranged in symmetric lattice structures known as epithelia.\nAn epithelial tissue may be described as a two-dimensional sheet defined by vertex points representing wall junctions,\none-dimensional edges representing cell walls, and two-dimensional faces representing cells.\nThe form of this tissue network is determined by 
the extracellular matrix (ECM).\nThe ECM is a network consisting of\nextracellular macromolecules, collagen, and enzymes that provide structural and biochemical support to surrounding cells.\nThe composition of the ECM varies between multicellular structures \\cite{frantz2010extracellular,hay2013cell,walker2018role,gibson2009cell,kachalo2015mechanical}.\nThus, when discussing somatic evolution in multicellular organisms,\nthe invading genotype might differ in the network structure it forms \\cite{radisky2002order,walker2018role}.\nIn other words, each type, in the absence of the other type, forms its own, different extracellular matrix.\nThis leads to a different alignment of cells and thus a new population structure, see~\\cref{fig:tissue_1}.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=1]{fig-tissue.pdf}\n\\end{center}\n\\caption{In epithelial tissues, different cell types align along different lattice-like structures.}\n\\label{fig:tissue_1}\n\\end{figure}\n\nCarcinoma is yet another example of how the tissue organization of the invader and the resident type can differ.\nIn this case, tumor cells normally have a highly disorganized neighborhood structure, due to\nthe variability in cell-cell adhesion and the lack of proper epithelial programs among tumor cells in the tumor microenvironment \\cite{nelson2006extracellular,brauchle2018biomechanical}.\nNormal epithelial cells, on the other hand, typically follow symmetric geometric lattice patterns.\nThis change in structure between an invading trait and the resident type\ncan have substantial consequences for the outcome of the evolutionary process.\nHowever, in the context of evolutionary graph models, such considerations have not yet received appropriate attention. 
\n\nIn order to model differences in the motility potential within the framework of evolutionary graph theory,\nwe represent the population structure as two graphs $G^A$, $G^B$\noverlaid on top of each other on the same set of nodes~\\cite{spirakis2021extension}.\nThe two graphs $G^A$, $G^B$ represent the dispersal patterns for the mutants and residents, respectively.\nIn other words, mutant offspring migrate along the edges of $G^A$,\nwhereas resident offspring migrate along the edges of $G^B$.\nWe study the fixation probability $\\rho(G^A,G^B)$ of a single neutral mutant who\nappears at a random node and perceives the population structure as $G^A$,\nas it attempts to invade a population of residents who perceive the population through $G^B$.\n\nThere is a large body of literature on the evolution and ecology of migration and dispersal~\\cite{comins1980evolutionarily,dieckmann1999evolutionary,hutson2003evolution,levin2003ecology,ronce2007does},\nespecially for population structures formed by islands (also called patches, demes, or metapopulations)~\\cite{may1994superinfection,olivieri1995metapopulation,heino2001evolution}.\nOur framework is a generalization of this approach in the same way that evolutionary graph theory is a generalization of the\nvast literature on evolution and ecology in spatially structured populations~\\cite{lieberman2005evolutionary,durrett1994importance}.\nThe framework is flexible, allowing us to study both simple and arbitrarily complex population structures of any population size.\nAs such, it facilitates a discovery of new phenomena.\n\nAmong the graph-theoretical approaches, other ways to model motility and dispersal have been suggested in the literature.\nThey allow for the offsprings to disperse in more complex forms and reach locations that are not directly connected to the mother location.\nThis introduces migration potential as an independent quantity relative to the proliferation potential of the types 
\\cite{ohtsuki2007evolutionary,thalhauser2010selection,manem2014spatial,krieger2017effects,herrerias2019motion,waclaw2015spatial,manem2015modeling}.\nIn those cases, the motility potential represents random motion and is typically decoupled from reproduction events.\nSuch random motility and motion have an anti-synergistic relationship with the proliferation potential.\nIn other words, if invaders are more motile, their fixation probability tends to decrease \\cite{thalhauser2010selection,manem2014spatial,krieger2017effects}.\n\nHere we show that, in contrast to random motility, enhanced structured motility generally leads to an increase in the fixation probability of the invading mutant.\nSpecifically, we prove that for any population size $N$ the Complete graph $K_N$ is ``locally optimal''.\nThat is, if mutants instead perceive the population through a graph $M_N$ that misses a single edge, \ntheir fixation probability is decreased.\nHowever, we show that the obvious generalization of this claim is not true:\nBy numerically computing the fixation probabilities for small population sizes,\nwe identify specific circumstances in which \nmaking mutants less motile actually increases their fixation probability.\nNext, we show that even for simple population structures that correspond to island models,\nthe extent to which increased motility helps the mutant fixate can vary considerably, depending on the exact layout of the extra connections.\nFinally, we show that for low-dimensional lattices,\nthe effect of altered motility is comparable to the effect of altered reproductive rate:\nin the limit of large population size,\nthe fixation probability of a mutant is either constant or exponentially small, depending on whether it is more or less motile than the residents.\n\n\n\n\n\\section{Model}\n\\paragraph{Standard Moran process on a graph.}\nWithin the framework of evolutionary graph theory~\\cite{lieberman2005evolutionary},\na population structure is 
described as a graph (network), where\nnodes (vertices) represent locations (sites) and the graph connectivity defines the topology and the neighborhood.\nThere are $N$ nodes and each node is occupied by a single individual.\nEach individual is either of type~$A$ (mutant) with fitness $r_A$, or of type~$B$ (resident) with fitness $r_B$.\nThe evolutionary dynamics is governed by the standard stochastic discrete-time Moran Birth-death process, adapted to the population structure:\nat each time point, a single individual is picked for reproduction, proportionally to its fitness.\nThis focal individual produces offspring (a copy of itself), and the offspring then migrates and replaces a random neighboring individual.\n\nThe probability of migration from node $i$ to node $j$ is given by an $N\\times N$ dispersal matrix $M=(m_{i,j})_{i,j=1}^N$.\nThus, for undirected, unweighted graphs (which are the focus of this work), the entries $m_{i,j}$ of the dispersal matrix $M$ satisfy\n\\[m_{i,j}=\\begin{cases}\n 1\/\\operatorname{deg}(i), &\\text{ if nodes $i$ and $j$ are adjacent,}\\\\\n 0, &\\text{ otherwise.}\n \\end{cases}\n \\]\n (Here $\\operatorname{deg}(u)$ is the \\textit{degree} of node $u$, that is, the number of nodes adjacent to $u$.)\n\n\\paragraph{Moran process on two graphs.}\nIt is commonly assumed that the dispersal matrix is independent of the two types, that is,\nboth types of individuals perceive the population through the same population structure.\nFollowing the recent work of Melissourgos et al.~\\cite{spirakis2021extension}, here we study a more general case in which\nthe dispersal pattern depends on the type of the offspring that migrates.\nThus, we consider two graphs $G^A$, $G^B$ and the corresponding dispersal matrices $M^A=(m^A_{i,j})_{i,j=1}^N$, $M^B=(m^B_{i,j})_{i,j=1}^N$.\nThat is, any time a type~$A$ individual reproduces at a node $i$, the offspring replaces an individual at node $j$ with probability $m^A_{ij}$.\nIn contrast, the offspring of 
a type~$B$ individual reproducing at node $i$ migrates to node $j$ with probability $m^B_{ij}$, see~\\cref{fig:fig1}.\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[scale=1]{fig-moran-2.pdf}\n\\end{center}\n\\caption{\\textbf{Moran process with type-dependent dispersal patterns.}\nIn each discrete time-step, a random individual reproduces and the offspring proliferates to a neighboring node.\nType-$A$ offspring (mutant, blue) migrate along the edges of the blue graph $G_A$,\nwhereas type-$B$ offspring (residents, red) migrate along the red edges of $G_B$.\nThe key quantity is the fixation probability $\\rho(G^A,G^B)$ that a single initial mutant\nsuccessfully invades the population of residents.\n}\n\\label{fig:fig1}\n\\end{figure}\n\nThe state of the population at any given time point is described by a vector ${\\bf n}=(n_1,\\dots,n_N)$ of $N$ zeros and ones, where $n_i=1$ denotes that node $i$ is currently occupied by a type~$A$ individual (mutant).\nThe model is a Markov chain with $2^N$ possible states.\nTwo of the states are absorbing, and they correspond to homogeneous population consisting purely of type~$A$ individuals (state ${\\bf n^1}=(1,\\dots,1)$) or type~$B$ individuals (state ${\\bf n^0}=(0,\\dots,0)$).\nFormally, the transition probabilities between the states are given by the following equations:\n\n\\begin{align}\np^{+}_{i}({\\bf n}) :=& \\mathbb{P}[ (n_1,\\dots,n_i,\\dots, n_N) \\ \\to\\ (n_1,\\dots,n_i+1,\\dots, n_N)] \\nonumber\\\\\n=& \\frac{\\sum_{j} n_j(1-n_i) r_A m^{A}_{ji}}{\\sum_{k}\\left(n_kr_A+ (1-n_k)r_B\\right) } \\nonumber\\\\\np^{-}_{i}({\\bf n}) :=& \\mathbb{P}[(n_{1},\\dots,n_{i},\\dots, n_{N}) \\ \\to\\ (n_{1},\\dots,n_{i}-1,\\dots, n_{N})] \\nonumber\\\\\n=&\\frac{\\sum_{j} (1-n_{j})n_{i} r_{B}m^{B}_{ji}}{\\sum_{k}\\left(n_kr_A+ (1-n_k)r_B\\right)}\n\\label{transition}\n\\end{align}\n\n\\paragraph{Questions and Results.}\nIn this work, we study how differences in the migration and dispersal pattern\n$G^A$ of mutants 
and $G^B$ of residents influence\nthe fate of a single random mutant who appears at a random location.\nAs a measure of the mutant's success, we use its fixation probability under neutral drift (that is, $r_A=r_B$).\nWe denote this quantity by $\\rho(G^A, G^B)$.\nIt is known that whenever the two types have the same dispersal pattern ($G^A=G^B$), the fixation probability under neutral drift is equal to $1\/N$, regardless of the common graph $G^A=G^B$~\\cite{broom2010two}.\nThus, the regime of neutral drift provides a clean baseline and decouples the effect of a difference in population structure from other effects. \n\nSpecifically, we study the following questions:\n\\begin{enumerate}\n\\item Does increased motility increase or decrease the mutant fixation probability?\n\\item Can the effect be quantified for simple natural structures, such as island models or low-dimensional lattices?\n\\end{enumerate}\n\nTo address the first question,\nin~\\cref{sec:small} we numerically compute the fixation probabilities $\\rho(G^A, G^B)$ for all pairs $G^A$, $G^B$ of graphs of small size.\nWe find that, generally speaking, increased motility potential (that is, living on a graph with more edges) tends to\nincrease the fixation probability of the mutant.\nIn particular, we prove (see~Theorem~1{} in~the Appendix) that the Complete graph is locally optimal, in a sense described below.\nHowever, we also identify special cases in which\nan increase in the motility potential decreases the fixation probability rather than increasing it.\nThis suggests that for arbitrary population structures the effects of motility on the fixation probability are complex.\nGiven this complexity, we proceed to study pairs of regular structures.\n\nTo address the second question,\nin~\\cref{sec:dense} we consider certain population structures that correspond to island models with two equal islands.\nWe show that two such structures with the same total number of edges exhibit a substantially different behavior in the 
limit $N\\to\\infty$.\nThis implies that the effect of altered motility in dense regular graphs cannot be easily quantified in terms of a single parameter (the total number of edges).\nThen, motivated by tissue organization in multicellular organisms, in~\\cref{sec:lattices} we consider 1- and 2-dimensional lattices.\nWe show that in this setting, the difference in motility can be quantified and its effect is analogous to that of a difference in reproductive rate:\nincreased motility results in mutant fixation with constant probability, whereas decreased motility causes the fixation probability to be exponentially small.\n\n\n\\paragraph{Related work.}\nThe question of computing fixation probabilities for various versions of Moran processes on graphs has been studied extensively.\nIn principle, for any population structure the fixation probability can be computed numerically by solving a system of linear equations~\\cite{hindersin2016exact}.\nHowever, since the size of the system is generally exponential in the population size, this approach is practically feasible only for very small populations, or for very specific population structures~\\cite{tkadlec2019population,moller2019exploring,pavlogiannis2017amplification}.\nFor large population sizes, there exist efficient approximation algorithms either in the limit of weak selection~\\cite{allen2017evolutionary,allen2021fixation,mcavoy2021fixation}\nor when the underlying graph is undirected~\\cite{diaz2014approximating,ann2020phase}. \nWhile this manuscript was under preparation, Melissourgos et al.~\\cite{spirakis2021extension}\nextended the latter result to a special case of the two-graph setting, namely\nfor mutants with a reproductive advantage ($r_A\\gneqq r_B$) who perceive the population as a Complete graph ($G^A=K_N$).\nThey also established bounds for certain special pairs of graphs, such as\nthe Complete graph invading the Star graph. 
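The exact approach mentioned above, solving a linear system over all $2^N$ states, can be sketched compactly. The following Python snippet is an illustrative implementation rather than the authors' code: it assumes the neutral case $r_A=r_B$, uses the birth-death transition probabilities of the two-graph model, encodes states as bitmasks, and averages over the $N$ possible starting nodes of the single mutant.

```python
import numpy as np

def dispersal_matrix(adj):
    """Row-stochastic dispersal matrix of an undirected, unweighted graph:
    the offspring of node i replaces a uniformly random neighbour of i."""
    adj = np.asarray(adj, dtype=float)
    return adj / adj.sum(axis=1, keepdims=True)

def fixation_probability(adj_A, adj_B):
    """rho(G^A, G^B): fixation probability of a single neutral mutant
    (r_A = r_B) placed at a uniformly random node, computed by solving
    the 2^N-state linear system; feasible for small N only."""
    MA, MB = dispersal_matrix(adj_A), dispersal_matrix(adj_B)
    N = MA.shape[0]
    S = 1 << N                                # states encoded as bitmasks
    P = np.zeros((S, S))
    for s in range(S):
        n = np.array([(s >> i) & 1 for i in range(N)], dtype=float)
        for i in range(N):
            if n[i] == 0:                     # mutant offspring invades node i
                P[s, s | (1 << i)] = (n * MA[:, i]).sum() / N
            else:                             # resident offspring retakes node i
                P[s, s & ~(1 << i)] = ((1 - n) * MB[:, i]).sum() / N
        P[s, s] = 1.0 - P[s].sum()            # remaining probability: no change
    full = S - 1                              # all-mutant absorbing state
    trans = list(range(1, full))              # transient (mixed) states
    idx = {s: k for k, s in enumerate(trans)}
    A = np.eye(len(trans)) - P[np.ix_(trans, trans)]
    b = P[trans, full]
    phi = np.linalg.solve(A, b)               # phi(s) = P(fixation | state s)
    return float(np.mean([phi[idx[1 << i]] for i in range(N)]))
```

For $N=3$, this returns $1/3$ when both types live on the complete graph, and $1/4$ when a mutant on the two-edge path graph invades residents on the complete graph.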
\nIn contrast, in this work we consider the problem from a biological perspective and\nwe study mutants with no reproductive advantage ($r_A=r_B$) who, similarly to the residents,\nperceive the population structure either as an island model or as a low-dimensional lattice.\nIn this way, the two manuscripts complement each other.\nWe also answer some questions stated in~\\cite{spirakis2021extension} related to the best-response dynamics in the space of all graphs.\nNamely, we show that while the Complete graph is locally optimal (see~Theorem~1{} in~the Appendix), it is not always the best response (see~\\cref{fig:game}).\n\n\\section{Results}\n\n\\subsection{Small Graphs}\\label{sec:small}\nIn this section we consider population structures on $N$ labeled nodes, for small values of $N$.\nIn this regime, the fixation probability $\\rho(G^A,G^B)$ can be computed exactly,\nby numerically solving a system of $2^N$ linear equations. \n\nFor $N=2$ there is only one connected graph and, by symmetry, the fixation probability of a single type~$A$ individual is equal to $1\/2$.\nFor $N=3$ there are four connected undirected graphs: a single graph $G^0$ with three edges (equivalently a complete graph, or a cycle), and\nthree different graphs $G^1$, $G^2$, $G^3$ with two edges each.\nThe corresponding fixation probabilities are given in~\\cref{fig:n3}b.\nNote that $\\rho(G^A,G^B)=1\/N$ when $G^A$ and $G^B$ are identical, but in general $\\rho(G^A,G^B)$ can be either larger or smaller than $1\/N$, even when $G^A$ and $G^B$ are isomorphic (if they are not identical), see~\\cref{fig:n3}c.\n\n\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[scale=1]{fig-n3.pdf}\n\\end{center}\n\\caption{\\textbf{Small populations $N=3$.}\n\\textbf{a,} There are four connected graphs $G^0,\\dots,G^3$ on $N=3$ labeled nodes.\n\\textbf{b,} The fixation probabilities $\\rho(G^A,G^B)$ for all $4\\cdot 4=16$ combinations.\n\\textbf{c,} When $G^A$ and $G^B$ are isomorphic but not identical, the 
fixation probability is not necessarily equal to $1\/N$. For instance,\n we have $\\rho(S_4,S'_4)=63\/208\\doteq 0.31$.\n}\n\\label{fig:n3}\n\\end{figure}\nFor general $N$, there are $2^{N^2-N}$ pairs of graphs on $N$ labeled nodes.\nAlready for $N=6$ this is more than a billion pairs, hence\nin what follows we focus on the case when one of the graphs $G^A$, $G^B$ is a Complete graph, denoted $K_N$.\nWe use a shorthand notation $\\rho(G)=\\rho(G,K_N)$, for the fixation probability of a single mutant who perceives the population structure as a graph $G$ and invades a population of residents who perceive the population structure as a Complete graph $K_N$.\nAnalogously, we denote by $\\fp^\\star(G)= \\rho(K_N,G)$ the fixation probability of a single mutant living on a Complete graph $K_N$ and invading a population of residents who live on~$G$.\n\\cref{fig:n6} shows $\\rho(G)$ and $\\fp^\\star(G)$ for all undirected graphs on $N=6$ vertices, based on the number of edges in $G$.\n\n\\begin{figure}[h] \n\t\\centering\n\\includegraphics[scale=1]{fig-n6.pdf}\n \\caption{\\textbf{Small populations $N=6$.} The fixation probabilities\n \\textbf{a,} $\\rho(G)=\\rho(G,K_N)$ and\n \\textbf{b,} $\\fp^\\star(G)=\\rho(K_N,G)$ \n for all 112 graphs $G$ on $N=6$ vertices.\n Each dot corresponds to a graph~$G$, the orange dots correspond to regular graphs.\n When $G=K_N$, both $\\rho(G)$ and $\\fp^\\star(G)$ are equal to $1\/N$.\n Other graphs $G_6$ on six vertices satisfy $\\rho(G_6)<1\/6$ and $\\fp^\\star(G_6)>1\/6$. 
\n}\n\\label{fig:n6}\n\\end{figure}\n \n\n\\paragraph{Maximal and minimal fixation probability.} \nAmong the graphs on 6 vertices, fixation probability $\\rho(G)$ is maximized when $G$ is the Complete graph $K_6$.\nRecall that $\\rho(K_N)=1\/N$, for any integer $N$.\nIn relation to this, we prove that $\\rho(K_N)$ is ``locally maximal'':\nthat is, we show that if one edge is removed from the Complete graph $K_N$, then the resulting graph $M_N$ satisfies\n$\\rho(M_N)=\\frac{N-2}{(N-1)^2}<\\frac1N=\\rho(K_N)$.\nSimilarly, we prove that $K_N$ is locally minimal with respect to $\\fp^\\star(G)$:\nwe show that $\\fp^\\star(M_N)=1\/(N-1)>1\/N$, see~Theorem~1{} in the Appendix.\n\nNote that, in contrast, for $N=6$ the fixation probability $\\rho(G)$ is minimized for the Star graph $S_6$.\nHere a \\textit{Star graph}, denoted $S_N$, consists of a single node (``center'') connected to all other nodes (``leaves'').\nIt is known~\\cite{spirakis2021extension} that $\\rho(S_N)\\le 1\/(N-2)!$ and $\\fp^\\star(S_N)\\to 1$ as $N\\to\\infty$.\n\n\n\\paragraph{Relation to the number of edges.}\nIn general, fixation probability $\\rho(G)$ tends to be higher for graphs $G$ with more edges.\nHowever, this is only a rule of thumb.\nFor instance, the Lollipop graph $LP_6$ has a relatively low fixation probability $\\rho(LP_6)$, given its number of edges.\nHere a \\textit{Lollipop graph}, denoted $LP_N$, consists of a Complete graph on $N-1$ vertices and a single extra edge connecting the last node.\nMoreover, adding edges to a graph $G$ to produce a graph $G'$ sometimes does not increase the fixation probability but rather decreases it:\nthis is illustrated by the Pan graph $P_6$ and the Treetop graph $TT_6$ for which we have $\\rho(P_6)>0.071$ and $\\rho(TT_6)<0.065$.\nHere a \\textit{Pan graph}, denoted $P_N$, consists of a cycle on $N-1$ nodes and a single extra edge connecting the last node.\nIn a \\textit{Treetop graph}, denoted $TT_N$, the vertex with degree 3 is further connected 
to all other vertices.\n\n\n\\paragraph{Regular graphs.}\nRecall that a graph is \\textit{regular} if all its nodes have the same degree (that is, the same number of neighbors).\n\\cref{fig:n6} shows that, given a fixed number of edges, the fixation probability $\\rho(G)$ tends to be higher for regular (or almost regular) graphs as compared to non-regular graphs.\nFor instance, for the Cycle graph $C_6$ and the Line graph $L_6$, the fixation probabilities $\\rho(C_6)$, $\\rho(L_6)$ are relatively high, given the low number of edges of $C_6$ and $L_6$.\nHere a \\textit{Cycle graph}, denoted $C_N$, is the connected graph where each node is connected to two neighbors, and\na \\textit{Line graph}, denoted $L_N$, is the Cycle graph with one edge missing.\nHowever, we prove that the Line graph generally does not maximize the fixation probability among the connected graphs with $N-1$ edges (so-called trees):\nin particular, for $N=8$ the graph $G_8$ consisting of three paths of lengths 2, 2, and 3 meeting at a single vertex satisfies $\\rho(G_8)>0.0098>0.0095>\\rho(L_8)$.\n\nMoreover, the Isothermal Theorem of~\\cite{lieberman2005evolutionary}\n does not hold:\nfor two different regular graphs $G$, $G'$ (even with the same degree) the fixation probabilities $\\rho(G)$, $\\rho(G')$ are generally different, as witnessed by the two 3-regular graphs with $N=6$ nodes and 9 edges.\n\n\n\n\n\n\\subsection{Dense regular graphs}\\label{sec:dense}\n\nAs suggested by~\\cref{fig:n6}, regular graphs $G$ have high fixation probability $\\rho(G)$, compared to other graphs with the same number of edges.\nHere we consider certain simple regular graphs that contain approximately half of the total possible number of edges.\nWe show that for some such graphs, the fixation probability is comparable to that of a Complete graph,\nwhereas for other graphs it is substantially smaller. 
Thus the Isothermal Theorem~\\cite{lieberman2005evolutionary} is strongly violated.\n\nGiven a population size $N$ (with $N$ even), let $B_N=K_{N\/2,N\/2}$ be a (complete) \\textit{Bipartite graph} with equal parts $N\/2$, $N\/2$ and let $T_N$ be a \\textit{Two-clique} graph obtained by adding $N\/2$ matching edges to a union of two disjoint Complete graphs of size $N\/2$ each, see~\\cref{fig:bntn}a.\nNote that both $B_N$ and $T_N$ have precisely $\\frac14N^2$ edges, which is roughly half of the edges of $K_N$.\nAlso, note that both $B_N$ and $T_N$ represent populations subdivided into two large islands:\nin case of $B_N$, the offspring always migrates to the opposite island, whereas\nin case of $T_N$ the offspring mostly stays in the same island and it migrates only rarely (namely with probability of the order of $1\/N$).\n\n\\begin{figure}[h] \n\t\\centering\n\\includegraphics[width=\\linewidth]{fig-bntn.pdf}\n \\caption{\\textbf{Dense regular graphs.}\n \\textbf{a,} In a (complete) Bipartite graph $B_N$ and a Two-clique graph $T_N$, each vertex is connected to $N\/2$ other vertices (here $N$ is even).\n \\textbf{b,} When the mutant lives on $B_N$, the fixation probability satisfies $\\rho(B_N)\\approx 0.82\\cdot \\frac1N$. 
In contrast, when the mutant lives on $T_N$, the fixation probability $\\rho(T_N)$ tends to zero faster than $1\/N$.\n \\textbf{c,} When the residents live on $B_N$ or $T_N$, we have $\\fp^\\star(B_N)\\approx 1.1\\cdot \\frac1N$ and $\\fp^\\star(T_N)\\approx 1.4\\cdot\\frac1N$.\n }\n\\label{fig:bntn}\n\\end{figure}\n\nWe prove that $\\rho(B_N)>0.58\/N$ (see Theorem~2{} in the Appendix).\nSince $\\rho(K_N)=1\/N$, this implies that missing roughly half of the edges only reduces the fixation probability by a constant factor, independent of the population size $N$.\nIn fact, numerical computation shows that $N\\cdot \\rho(B_N)\\approx 0.82$ whereas for the Two-clique graph we observe $N\\cdot \\rho(T_N)\\to 0$, see~\\cref{fig:bntn}b.\n\nThe intuition for this distinction is as follows.\nOn both graphs, the state of the system at any given time point is completely described by the frequencies $N_L\\in[0,N\/2]$ and $N_R\\in[0,N\/2]$ of mutants in the left and the right half.\nOn $B_N$, the two frequencies remain roughly equal throughout the process ($N_L\\approx N_R$):\nindeed, once, say, $N_L\\gg N_R$, more mutant offspring are produced on the left and migrate to the right, thereby helping balance the numbers again. \nIn contrast, on $T_N$ the mutants migrate rarely, thus the lineage produced by the initial mutant remains trapped in one half for a substantial amount of time. 
\nThroughout that time, the mutants are ``blocking'' each other from spreading more than they would block each other if they were split evenly between the two halves:\nindeed, with all mutants in one half, the probability that a reproducing mutant replaces another mutant (thus not increasing the size of the mutant subpopulation) is twice as large, as compared to the situation where the mutants are evenly split.\nFor small mutant subpopulations, this effect is non-negligible and it causes the fixation probability $\\rho(T_N)$ to decay faster than inversely proportionally to $N$.\n\nRegarding $\\fp^\\star$, we observe $N\\cdot \\fp^\\star(B_N)\\approx 1.11$ and $N\\cdot\\fp^\\star(T_N)\\approx 1.4$, see~\\cref{fig:bntn}c.\nThe intuition is that when mutants live on a Complete graph $K_N$, the offspring is equally likely to migrate to any location.\nBy randomness, the condition $N_L\\approx N_R$ is thus maintained throughout most of the early stages of the process.\nTherefore, as with $\\rho(B_N)$, both $\\fp^\\star(B_N)$ and $\\fp^\\star(T_N)$ are inversely proportional to $N$.\nTo sum up, the graphs $B_N$ and $T_N$ show a considerably different behavior in terms of $\\rho$ but a qualitatively comparable behavior in terms of $\\fp^\\star$.\n\n\n\\subsection{Lattice graphs}\\label{sec:lattices}\nHere we study sparse regular graphs, specifically lattice graphs.\nLattices exist in any number of dimensions.\nWe focus on one- and two-dimensional lattices, since those are biologically relevant.\nFor each dimension, we study the effect of increased or decreased connectivity (degree) of the lattice on the fixation probability of an invading mutant.\n\n\n\n\\paragraph{One-dimensional lattices.}\nIn one dimension, we consider circulation graphs $\\operatorname{Cir}^d_N$ (already studied in this context from a different point of view, see~\\cite{spirakis2021extension}).\nFor a fixed even integer $d$, a \\textit{$d$-Circulation} graph, denoted $\\operatorname{Cir}^d_N$, consists 
of $N$ vertices arranged in a cycle, where each vertex is connected to $d$ other vertices, namely the next $d\/2$ vertices and the previous $d\/2$ vertices in the cyclic order, see~\\cref{fig:1d-lattices}a.\n\nTo shorten the notation, we denote by $\\fp^{\\operatorname{1D}}_N(d_1,d_2)= \\rho(\\operatorname{Cir}^{d_1}_N,\\operatorname{Cir}^{d_2}_N)$ the fixation probability of a mutant living on a one-dimensional lattice $\\operatorname{Cir}^{d_1}_N$ with degree $d_1$ invading a population of residents living on a one-dimensional lattice $\\operatorname{Cir}^{d_2}_N$ with degree $d_2$.\nNote that if $d_1=d_2=d$ then $\\fp^{\\operatorname{1D}}_N(d,d)=1\/N$.\n\n\\begin{figure}[h] \n\t\\centering\n\\includegraphics[width=\\linewidth]{fig-circulations.pdf}\n \\caption{\\textbf{Overlaying 1-D lattices with different connectivities.}\n \\textbf{a,} A circulation graph $\\operatorname{Cir}^d_N$ is a 1-dimensional lattice with periodic boundary and connectivity (degree) $d$.\n We consider $d\\in\\{2,4,6\\}$.\n \\textbf{b,} When mutants live on a less connected graph ($d_1<d_2$), their fixation probability decays to zero exponentially quickly.\n \\textbf{c,} When mutants live on a more connected graph ($d_1>d_2$), their fixation probability tends to a constant.\n In both panels, the black dashed line shows the neutral baseline $1\/N$.\n The values for $N\\le 13$ are computed by numerically solving a large system of linear equations.\n The values for $N\\ge 14$ are obtained by simulating the process $10^5$ times and reporting the proportion of the runs that terminated with the mutant fixating.\n }\n \\label{fig:1d-lattices}\n\\end{figure}\n\nWhen the degrees $d_1$, $d_2$ of the mutant and resident graph differ, the fixation probability crucially depends on which of the two degrees is larger. 
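The simulation protocol described in the caption above (running the process repeatedly and reporting the fraction of runs in which the mutant fixates) can be sketched as follows. This is an illustrative Python sketch rather than the authors' code: it assumes the neutral case $r_A=r_B$, represents each graph by neighbour lists, and the helper names are hypothetical.

```python
import random

def circulation_neighbors(N, d):
    """Neighbour lists of the d-Circulation graph Cir^d_N: node v is linked
    to the next d/2 and the previous d/2 nodes in cyclic order."""
    half = d // 2
    return [[(v + k) % N for k in range(-half, half + 1) if k != 0]
            for v in range(N)]

def mutant_fixates(nbrs_A, nbrs_B, rng):
    """One run of the neutral (r_A = r_B) two-graph Moran Birth-death process
    from a single mutant at a random node; True iff the mutant fixates."""
    N = len(nbrs_A)
    is_mutant = [False] * N
    is_mutant[rng.randrange(N)] = True
    count = 1
    while 0 < count < N:
        parent = rng.randrange(N)             # uniform choice: neutral fitness
        nbrs = nbrs_A if is_mutant[parent] else nbrs_B
        target = rng.choice(nbrs[parent])     # offspring migrates to a neighbour
        if is_mutant[target] != is_mutant[parent]:
            count += 1 if is_mutant[parent] else -1
            is_mutant[target] = is_mutant[parent]
    return count == N

def estimate_rho(nbrs_A, nbrs_B, runs=10**4, seed=0):
    """Monte Carlo estimate of rho(G^A, G^B): fraction of runs that fixate."""
    rng = random.Random(seed)
    return sum(mutant_fixates(nbrs_A, nbrs_B, rng) for _ in range(runs)) / runs
```

As a sanity check, for $d_1=d_2$ the estimate should agree with the exact value $1/N$ up to sampling error.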
\nWhen the mutant graph has a lower connectivity ($d_1<d_2$), the fixation probability $\\fp^{\\operatorname{1D}}_N(d_1,d_2)$ decays to zero exponentially quickly, see~\\cref{fig:1d-lattices}b.\nIn contrast, when the mutant graph has a higher connectivity ($d_1>d_2$), $\\fp^{\\operatorname{1D}}_N(d_1,d_2)$\n tends to a positive constant $c$ that depends on $d_1$ and $d_2$, see~\\cref{fig:1d-lattices}c.\nSpecifically, for large $N$ we observe that $\\fp^{\\operatorname{1D}}_N(4,2)\\approx 0.16$, $\\fp^{\\operatorname{1D}}_N(6,2)\\approx 0.17$ and $\\fp^{\\operatorname{1D}}_N(6,4)\\approx 0.09$.\n\nThese results are in agreement with the bounds $0.11\\le \\fp^{\\operatorname{1D}}_N(4,2)\\le 0.25$ that we prove analytically by a stochastic domination argument (see~Theorem~3{} in the Appendix).\nThe intuition behind the argument is that once the mutants form a contiguous block of a large size, the block is more likely to expand than to diminish at both interfaces.\nIndeed, the probability of gaining a boundary node is the same as that of losing the (other) boundary node but, on top of that, mutants can skip the boundary node, invade the interior of the resident territory and only after that gain the skipped node.\nThis event has a non-negligible probability of happening, hence there is a positive bias favoring the spread of mutants.\nFor a formal proof, see~Theorem~3{} in the Appendix.\n\n\n\\paragraph{Two-dimensional lattices.}\n In two dimensions, we consider graphs drawn on a square lattice with periodic boundary conditions.\nFor instance, by connecting each vertex to its 4 closest vertices (Von Neumann neighborhood), we obtain a graph $\\operatorname{Sq}^4_N$, see~\\cref{fig:2d-lattices}a.\nSimilarly, by connecting each vertex to its 8 closest vertices (Moore neighborhood) we obtain a graph $\\operatorname{Sq}^8_N$.\nWe also consider other graphs $\\operatorname{Sq}^d_N$ with different connectivities $d\\in\\{6,12,20\\}$.\nWe again shorten the notation by denoting $ \\fp^{\\operatorname{2D}}_N(d_1,d_2)=\\rho(\\operatorname{Sq}^{d_1}_N,\\operatorname{Sq}^{d_2}_N)$.\n\n\\begin{figure}[h] \n\t\\centering\n\\includegraphics[width=\\linewidth]{fig-lattices.pdf}\n 
\\caption{\\textbf{Overlaying 2-D lattices with different connectivities.}\n \\textbf{a,} We consider two-dimensional lattices with degree $4$ (Von Neumann neighborhood), $6$ (triangular grid), and $8$ (Moore neighborhood), and with dimensions $3\\times 3,3\\times 4,\\dots,30\\times 30$.\n \\textbf{b, c} Similarly to the 1-D case, the fixation probability decays to 0 exponentially quickly when $d_1<d_2$, and tends to a constant when $d_1>d_2$.\n The black dashed line shows the baseline $1\/N$.\n The values are obtained by simulating the process (at least $10^5$ repetitions per data point).\n }\n \\label{fig:2d-lattices}\n\\end{figure}\n\nThe results are analogous to the case of one-dimensional lattices.\nWhen the mutants live on a less connected lattice,\ntheir fixation probability tends to 0 exponentially quickly.\nIn contrast, when they live on a more densely connected lattice,\ntheir fixation probability tends to a constant as the population size $N$ tends to infinity (see~\\cref{fig:2d-lattices}).\n\n\n\\paragraph{Effective fitness.}\nThe behavior of the fixation probability for pairs of low-dimensional lattices is reminiscent of the behavior of the fixation probability\n$\\fp(K_N;r)$ of a single mutant with relative reproductive rate $r\\ne 1$ in a well-mixed population of $N-1$ other residents.\nIn that setting, we have $\\fp(K_N;r)=\\frac{1-1\/r}{1-1\/r^N}$.\nFor any fixed $r\\ne 1$, the formula exhibits one of two possible behaviors in the limit $N\\to\\infty$.\nWhen $r<1$, $\\fp(K_N;r)$ decays exponentially in $N$, approximately as $r^N$.\nIn contrast, when $r>1$, it tends to a positive constant $1-1\/r$.\n(When $r=1$ we have $\\fp(K_N;r)=1\/N$ by symmetry.)\n\nThis suggests a possible interpretation:\nfor the neutral mutant, living on a more densely connected lattice has an effect on the fixation probability comparable to having a certain relative reproductive advantage $r_{d_1,d_2}$.\nFormally, given a population size $N$ and two lattices $L_N$, $L'_N$ we define the \\textit{effective fitness}, 
denoted\n$r(L_N,L'_N)$, as the unique number $r$ such that \n\\[ \\rho(L_N,L'_N) = \\fp(K_N;r).\n\\]\nIn other words, the effective fitness $r(L_N,L'_N)$ is the number such that a neutral mutant on a lattice $L_N$ invading a lattice $L'_N$ has the same fixation probability as a mutant with relative reproductive advantage $r(L_N,L'_N)$ in a well-mixed population.\n\nFor pairs of low-dimensional lattices with different connectivities $d_1,d_2$, the effective fitness can be computed from the data presented above, see~\\cref{fig:effective}.\nWe observe that while the effective fitness depends on the connectivities $d$, $d'$ of the two lattices and on their dimensionality,\nit is mostly independent of the population size $N$.\n\n\\begin{figure}[h] \n\t\\centering\n\\includegraphics[width=\\linewidth]{fig-effective.pdf}\n \\caption{\\textbf{Effective fitness.} Given the connectivities $d$, $d'$ of the mutant and resident lattice, we compute the effective fitness that would result in the same fixation probability, had both types lived on a Complete graph.\n \\textbf{a,} One-dimensional lattices $\\operatorname{Cir}^d_N$ with $d\\in\\{2,4,6\\}$.\n \\textbf{b,} Two-dimensional lattices $\\operatorname{Sq}^d_N$ for $d\\in\\{4,6,8\\}$. 
We have\n $ r^{\\operatorname{2D}}_N(8,4)\\approx 1.06$,\n $ r^{\\operatorname{2D}}_N(6,4)\\approx r^{\\operatorname{2D}}_N(8,6)\\approx 1.03$, and\n $ r^{\\operatorname{2D}}_N(6,8)\\approx 0.96$,\n $ r^{\\operatorname{2D}}_N(4,6)\\approx 0.95$,\n $ r^{\\operatorname{2D}}_N(4,8)\\approx 0.92$.\n In both panels, the black dashed line shows the neutral baseline $r=1$.\n }\n \\label{fig:effective}\n\\end{figure}\n\n\n\n\n \n \\section{Discussion}\nIn this work, we studied the effect of mutations that, rather than altering the reproductive rate of the affected individual,\nalter how the individual experiences the population structure.\nTo that end, we considered a powerful framework based on the classical Moran Birth-death process on graphs, in which\nthe two types of individuals (the novel mutant and the existing residents) perceive the population structure through different graphs.\nAs the key quantity, we studied the probability $\\rho(G^A,G^B)$ that a single neutral mutant who perceives the population structure as a graph $G^A$ successfully invades the population of residents who perceive the population structure as a graph $G^B$.\nFor small population sizes, we computed the pairwise fixation probabilities numerically, and we observed that $\\rho(G^A,G^B)$ tends to be higher when $G^A$ contains many edges (that is, the mutant is more motile) and when $G^A$ is regular.\nWe note that the latter aspect contrasts with other models of motility, where an increased dispersal potential of the mutant generally diminishes the fixation probability~\\cite{thalhauser2010selection,manem2014spatial,krieger2017effects}.\n\nNext, motivated by island models, we considered two regular graphs with the same total number of edges\nand we showed that the corresponding fixation probabilities are asymptotically different.\nIn particular, as the population size~$N$ increases, the fixation probabilities decay at different rates.\nThus, in the asymptotic sense, the Isothermal Theorem 
of~\\cite{lieberman2005evolutionary} is strongly violated.\n\nFinally, we studied the biologically relevant case of 1- and 2-dimensional lattices\nand we showed that the dispersal radius has a similar effect on the fixation probability as the reproductive rate.\nRecall that in large unstructured populations, a beneficial mutation fixates with constant probability, whereas the fixation probability of a deleterious mutation is exponentially small.\nLikewise, neutral mutants on lattices with a larger dispersal radius have a constant chance of successfully fixating, whereas\nhaving a lower dispersal radius leads to fixation of the mutant only with exponentially small probability.\nThus, in terms of the fixation probability of the mutant, perceiving the population through a more densely connected lattice\nis effectively equivalent to having an increased reproductive rate.\n\n\nMoving on to more complex (though perhaps less realistic) population structures, many natural questions arise.\nWe conclude by commenting on three of them.\nRecall that for any graph $G_N$ on $N$ nodes we have $\\rho(G_N,G_N)=1\/N$~\\cite{broom2010two}.\n\nFirst, \\cref{fig:n6} suggests that $\\rho(G^A_N,K_N)< 1\/N$ for all mutant graphs $G^A_N\\ne K_N$.\nWhile we can prove that $\\rho(M_N,K_N)< 1\/N$ for a graph $M_N$ that misses a single edge (see~Theorem~1{} in the Appendix),\nthe general claim is left as an open problem.\nSimilarly, we do not know whether $\\rho(K_N,G^B_N)>1\/N$ holds for all resident graphs $G^B_N\\ne K_N$ (we do know that it holds for $G^B_N=M_N$).\n\nSecond, following the game theory perspective, Melissourgos et al.~\\cite{spirakis2021extension} asked for the best mutant response to a given resident graph. 
That is, given a resident graph $G^B_N$ on $N$ nodes, which mutant graph $G^A_N$ on $N$ nodes maximizes the fixation probability $\\rho(G^A_N,G^B_N)$?\nOur results for small graphs show that although the Complete graph $K_N$ is frequently the best mutant response,\nit is not always the case, see~\\cref{fig:game}.\nIn particular, when the residents live on a Star graph $S_6$, the population is easier to invade through a graph $M_6$ that misses a single edge, rather than through the Complete graph $K_6$ -- direct computation gives\n$\\rho(M_6,S_6)>0.643>0.641>\\rho(K_6,S_6)$.\nWe note that the difference is minor -- both mutant graphs $M_6$ and $K_6$ provide a fixation probability well over the neutral threshold value $1\/6\\approx 0.167$.\n\n\\begin{figure}[h] \n\t\\centering\n\\includegraphics[scale=1]{fig-game.pdf}\n \\caption{\\textbf{Best-response graphs.} The Complete graph is sometimes not the best response when optimizing the fixation probability $\\rho$.\n \\textbf{a.} The resident population living on the Star graph $S_6$ (red) is easier to invade by mutants living on $M_6$ (blue) than by mutants living on the Complete graph $K_6$.\n \\textbf{b,} The mutants living on a graph $G_6$ (blue) have a harder time invading the graph $G'_6$ than they have invading the Complete graph $K_6$.\n }\n \\label{fig:game}\n\\end{figure}\n\nFor the complementary question of what is the best resident response $G^B_N$ to a given mutant graph $G^A_N$, the situation is analogous:\nwhile the Complete graph is generally hard to invade, it is sometimes not the hardest one.\nAs an example (see~\\cref{fig:game}b), when mutants live on a graph $G_6$ then for the graph $G'_6$ we have\n$\\rho(G_6,G'_6)<0.025<0.026<\\rho(G_6,K_6)$.\n\nThird, we observe that when mutants and residents live on different graphs $G^A_N\\ne G^B_N$ with the same edge density, the fixation probability $\\rho(G^A_N,G^B_N)$ typically drops below $1\/N$.\nThe intuition is that the mutant subpopulation tends to 
form clusters in $G^A_N$ but not necessarily in $G^B_N$.\nAs a consequence, mutants block each other from spawning onto a resident\nbut they do not guard each other from being replaced by residents. \n\nAs an extreme example of this phenomenon, suppose that mutants live on a long cycle $G^A_N=C_N$ and they currently form a contiguous block of 10 individuals.\nThe probability $p^+$ that, in a single step, a mutant is selected for reproduction and its offspring replaces a resident is equal to $p^+=1\/N$.\nHowever, if the residents perceive the population as a completely different cycle $G^B_N=C'_N$ (such that no two mutants are adjacent in $C'_N$), then the probability $p^-$ that a resident replaces a mutant equals $p^-=10\/N$.\nThus, in a single step, the size of the mutant subpopulation is $10\\times$ more likely to decrease than it is to increase.\nTo some extent, similar effects occur whenever the two graphs $G^A_N$ and $G^B_N$ differ.\nThis suggests that for distinct graphs $G^A_N\\ne G^B_N$ we typically have $\\rho(G^A_N,G^B_N)<1\/N$.\n\nIn one direction, this phenomenon can be easily overcome, for instance when one graph is denser than the other.\nMoreover, as witnessed by the two Star graphs depicted in~\\cref{fig:n3}, there are pairs of irregular graphs for which the phenomenon is overcome in both directions.\nHowever, we are not aware of any such pair $G_N$, $G'_N$ of regular graphs.\nHence this is another open problem:\ndo there exist two regular graphs $G_N$, $G'_N$ such that both $\\rho(G_N,G'_N)>1\/N$ and $\\rho(G'_N,G_N)>1\/N$?\n\n\n\n\\section*{Acknowledgements}\nK.C. acknowledges support from ERC Consolidator grant no.~863818 (ForM-SMart).\n\n\\section*{Author Contributions}\nAll authors designed the research.\nJ.T. and K.K. performed the mathematical analysis.\nJ.T. 
wrote the computer code and produced the figures.\nAll authors wrote the manuscript.\n\n\\section*{Competing interests}\nThe authors declare no competing interests.\n\n\\section*{Code and Data availability}\nThe datasets generated during and\/or analysed during the current study are available in the Figshare repository,\n\\url{https:\/\/figshare.com\/s\/2d9cc41100151547b61a}.\n \n\\bibliographystyle{naturemag}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Dynamics of networks with time-varying links}\nThrough extensive numerical simulations of this dynamical network we\nfirst obtain bifurcation diagrams with respect to coupling strength\n$\\epsilon$. From these bifurcation diagrams we find the critical\ncoupling strength $\\epsilon_{sync}$ such that one obtains spatio-temporal\nsynchronization, namely a spatio-temporal fixed point, for $\\epsilon\n\\ge \\epsilon_{sync}$.\n\nIt is clearly evident from Fig. \\ref{bif} that\nthe critical coupling strength $\\epsilon_{sync}$ decreases as the link\nswitching probability $p_t$ increases. Namely, as the {\\em\n  probability of changing links increases, the range for the\n  spatio-temporal fixed point increases}. 
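This trend is easy to reproduce in a minimal coupled-map sketch. The settings below are illustrative assumptions, not the simulation parameters behind the figures: logistic maps at r = 4, two coupled neighbours per node, and a "global" rewiring event that replaces the whole neighbour table with random links (i.e. the fraction of random links is taken to be 1):

```python
import numpy as np

def sync_error(eps, p_t, N=100, T=3000, transient=2000, seed=1):
    """Time-averaged synchronization error Z for logistic maps (r = 4),
    each node coupled to two neighbours; with probability p_t the whole
    neighbour table is redrawn at random in a given time step."""
    rng = np.random.default_rng(seed)
    x = rng.random(N)
    idx = np.arange(N)
    a, b = np.roll(idx, 1), np.roll(idx, -1)   # start from a static ring
    z = []
    for n in range(T):
        if rng.random() < p_t:                 # global rewiring event
            a = rng.integers(0, N, N)
            b = rng.integers(0, N, N)
        fx = 4.0 * x * (1.0 - x)
        x = (1.0 - eps) * fx + 0.5 * eps * (x[a] + x[b])
        if n >= transient:
            z.append(np.mean((x - x.mean()) ** 2))
    return float(np.mean(z))

z_static = sync_error(0.9, 0.0)    # never rewired: stays desynchronized
z_dynamic = sync_error(0.9, 1.0)   # rewired every step: Z collapses to ~0
```

In this sketch, at eps = 0.9 the rapidly rewired network collapses onto the spatio-temporal fixed point (Z of order machine precision), while the static ring remains unsynchronized -- consistent with the claim that faster link switching enlarges the fixed-point regime.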
So a more dynamic web of\nlinks is more favourable for inducing spatiotemporal regularity in\ncoupled chaotic systems.\nSurprisingly, we found that both methods show similar qualitative\neffects on the dynamics of the nodes, despite the fact that the local\nmethod involves incremental changes in connectivity and the\nglobal case implies sudden and large changes in connectivity.\n\n\\bigskip\n\n \nWe now describe the degree of synchronization \\cite{barahona, hong,\n  intermediate} in the system quantitatively through the\nsynchronization error function defined as\n\\begin{equation}\n\\ Z(n) = \\frac{1}{N}\\sum_{i=1}^N[x_n(i) - x_{mean}]^2\n\\end{equation}\naveraged over time \\emph{n} and calculated after transient time, with\n$x_{mean}$ being the mean value of $x(i)$, $i = 1,2, \\dots, N$, at a given time\nstep $n$.\n\nFig. \\ref{sync} displays the variation of the synchronization error\n(averaged over space and time) with respect to the coupling strength\n$\\epsilon$, for the case of both local and global link rewiring. It\ncan be clearly seen that as the connection network becomes more\ndynamic, the range of complete synchronization increases. Furthermore,\nfor both the connection rewiring scenarios the qualitative results are\nsimilar.\n\n\\bigskip\n\n\n \nNow, we consider the scenario where link changes are infrequent,\nnamely a network near the static limit, with $p_t$ close to zero. Here\none obtains a range of critical coupling strengths, with\n$\\epsilon_{sync}$ being strongly dependent on the initial configuration\nof links. A deeper understanding is gained by studying the\ndistribution of $\\epsilon_{sync}$, at fixed $p_t$ and $p_s$, for\ndifferent initial realizations, as displayed for representative cases\nin Fig. \\ref{subfig: histo01}. It is clearly seen from the numerics\nthat when we are closer to the static limit there is a spread in\nvalues of $\\epsilon_{sync}$. As the system becomes more dynamic,\ni.e. 
as $p_t$ increases, we observe that the spread of\n$\\epsilon_{sync}$ narrows considerably, converging rapidly to the\naverage value $\\langle \\epsilon_{sync} \\rangle$. This is a reflection\nof the more effective ``self-averaging'' arising from dynamically\nchanging network configurations as the system evolves for larger\n$p_t$.\n\nFurther, the average critical coupling strength $\\langle\n\\epsilon_{sync} \\rangle$ also shifts to a smaller value with\nincreasing link switching probability $p_t$. This implies that lower\ncoupling strengths are necessary to bring about synchronization when\nthe link changes are more rapid. \n\n\nNext we show the variation of the average critical coupling strength\n$\\langle \\epsilon_{sync} \\rangle$ with respect to the probability of\nlink change $p_t$. Fig. \\ref{subfig: spread01} displays the average\ncritical coupling strength, the maximum value of critical coupling\nstrength $\\epsilon_{max}$ and the minimum value of critical coupling\nstrength $\\epsilon_{min}$. It is evident from the plot that $\\langle\n\\epsilon_{sync} \\rangle$ displays a clear trend under increasing $p_t$, even\nat low $p_t$ when there is considerable separation in the\n$\\epsilon_{min}$ and $\\epsilon_{max}$ values.\nResults from the local rewiring scheme are displayed here. Similar\nphenomena are observed for the case of global link changes as well.\n\n\\bigskip\n\n{\\em Intermittent approach to synchronization:}\\\\\n \n Examining the spatiotemporal evolution of the network, as displayed\n in Fig. \\ref{density}, reveals the following feature: one can see that at\n low coupling strengths the system exhibits spatio-temporal chaos\n (cf. left panel of Fig. \\ref{density}). However, as the coupling\n strength increases,\n one observes a dynamical regime in which the system displays\n intermittent behaviour. Fig. 
\\ref{density} (right) shows the\n spatio-temporal evolution of one such representative regime,\n exhibiting synchronized periods with bursts of unsynchronized\n behaviour. Similar qualitative dynamical patterns are obtained for\n both local and global network changes.\n\n\nNow to study these intermittent patterns in greater detail, we define\na parameter $L_{intermittent}$ which is the average length of\nintermittent behaviour in time. Quantitatively this length represents\nthe time between the first event of near complete spatiotemporal\nsynchronization and the last observed unsynchronized\nburst. Specifically, without loss of generality, we consider a system\nsynchronized if the synchronization error $Z < 10^{-5}$.\n\nIn Fig. \\ref{subfig: int02} we present a simple illustration of one\nsuch case. Here the first synchronized stretch is seen at $t \\approx\n800$. Subsequently one obtains desynchronized bursts, followed by\nsynchronized intervals, over a period of time. Finally at $t \\approx\n4600$ the last burst of unsynchronized behaviour is seen, after which\nthe system remains synchronized up to the limits of the simulation time\n($t \\approx 10^4$). For the completely chaotic region at low coupling\nstrengths, and the completely synchronized region at high coupling\nstrengths, there is no intermittency, as evident from $L_{intermittent}\n\\rightarrow 0$. However, for a range of coupling approaching the\ncritical coupling strength, the average length of the intermittent\nperiod increases as a power law with respect to distance from the critical\npoint (see Fig. \\ref{subfig: int01} for representative examples). 
It\nis further evident that increasing the probability of changing links\nreduces the range of coupling strengths over which this intermittent\napproach to synchronization is observed.\n\n\n\n\\bigskip\n\n{\\em Mean time to reach the synchronized state:}\\\\\n\nWe have also investigated the average time taken by the system,\nstarting from generic random initial conditions, to reach the\nsynchronized state. The results are displayed for two representative\ncases in Fig. \\ref{mean_sync}. It is clear that more rapid link changes\nlead to much shorter transients. So the system very quickly reaches\nthe spatiotemporal fixed point when the connections are varying fast.\n\n\n\\bigskip\n\n{\\em Analysis:}\\\\\n\nWe now analyse the system to account for the much enhanced stability\nof the spatiotemporal fixed point under rapidly changing\nconnections. The only possible solution for a spatiotemporally\nsynchronized state here is $x_n (i) = x^*$, where $x^* = f(x^*)$ is\nthe fixed point solution of the local map. For the case of the\nlogistic map this is the solution of $x^* = 4 x^* (1-x^*)$, namely $x^* = 3\/4$.\n\nTo calculate the stability of the network with all sites at $x^*$, we\nconstruct a {\\it probabilistic evolution rule} for the state of the\nnodes. 
In this mean field-like version of the dynamics, the effective\ninfluence of the random connections on the local dynamics is given by\n$p_{eff}$, and the influence of the nearest neighbours is given by\n$(1-p_{eff})$, where $p_{eff}$ is determined by the link change\nprobability $p_t$ as well as the fraction of random sites $p_s$.\n\nIn terms of $p_{eff}$ the averaged evolution equation of node $j$ ($j\n= 1, \\dots N$) is\n\\begin{equation}\nx_{n+1} (j) =\n(1-\\epsilon) f (x_n(j)) + (1-p_{eff})\\frac{\\epsilon}{2} ( x_{n}(j+1)+x_n (j-1) )+\np_{eff} \\frac{\\epsilon}{2} (x_n (\\zeta) + x_n(\\eta)),\n\\label{peff}\n\\end{equation}\nwhere $\\zeta $ and $\\eta $ are two random sites ($\\zeta, \\eta \\in [1,\nN]$).\n\nNow in order to calculate the stability of the synchronized state, we\nlinearize Eq.~\\ref{peff} by considering $x_n(j) = x^* + h_n(j)$, and\nexpanding to first order:\n\\begin{equation}\nh_{n+1}(j) = (1 - \\epsilon)f'(x^*)h_{n}(j) + (1-p_{eff}) \\frac{\\epsilon}{2} \\left\\{\nh_n (j+1) + h_n (j-1) \\right\\} + p_{eff} \\frac{\\epsilon}{2} \\left\\{ h_n (\\zeta) + h_n (\\eta) \\right\\}.\n\\end{equation}\nConsidering the sum over uncorrelated random neighbours to be zero on average, one obtains the approximate evolution equation:\n\\begin{equation}\nh_{n+1}(j) = (1 - \\epsilon)f'(x^*)h_{n}(j) + (1-p_{eff}) \\frac{\\epsilon}{2} \\left\\{\nh_n (j+1) + h_n (j-1) \\right\\}.\n\\end{equation}\n\nFor stability considerations one can diagonalize the above expression\nusing a Fourier transform $h_n(j) = \\sum_{q} \\phi_n (q) \\exp(i \\ j q)$,\nwhere $q$ is the wavenumber and $j$ is the site index, which yields\nthe following growth equation:\n\n\\begin{equation}\n\\frac{\\phi_{n+1} (q)}{\\phi_n (q)} = f'(x^*)(1 - \\epsilon) + \\epsilon (1-p_{eff}) \\cos q,\n\\end{equation}\nwith $q$ going from $0$ to $\\pi$. Specifically, for the case of the\nchaotic logistic map at $r=4$ we have $f^{\\prime} (x^*) = -2$. 
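The stability window implied by this growth coefficient can be cross-checked numerically with a short scan (a sketch; the value p_eff = 0.3 and the scan grids are arbitrary illustrative choices):

```python
import numpy as np

def growth_max(eps, p_eff, fprime=-2.0):
    """Maximum over q in [0, pi] of |f'(x*)(1 - eps) + eps (1 - p_eff) cos q|."""
    q = np.linspace(0.0, np.pi, 2001)
    return np.max(np.abs(fprime * (1.0 - eps) + eps * (1.0 - p_eff) * np.cos(q)))

p_eff = 0.3                                   # illustrative value
eps_grid = np.linspace(0.01, 0.999, 5000)
stable = np.array([growth_max(e, p_eff) < 1.0 for e in eps_grid])
eps_sync_numeric = eps_grid[stable].min()     # should approximate 1/(1 + p_eff)
```

The smallest stable coupling found by the scan matches the analytic threshold 1/(1 + p_eff) to within the grid resolution, and the window extends up to eps close to 1.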
So the\nmagnitude of the growth coefficient that appears in the above\nexpression is smaller than $1$ if and only if\n\\begin{equation}\n\\frac{1}{1+p_{eff}} < \\epsilon < 1.\n\\end{equation} \n\nThis inequality then yields the critical coupling strength $\\epsilon_{sync}$\nbeyond which the spatiotemporal fixed point gains stability:\n\\begin{equation}\n\\epsilon_{sync} = \\frac{1}{1+p_{eff}}.\n\\end{equation}\n\nFurther, the range of the spatiotemporal fixed point $\\cal{R}$ is given by:\n\n\\begin{equation}\n{\\cal R} = 1-\\epsilon_{sync} = \\frac{p_{eff}}{1+p_{eff}}.\n\\end{equation}\n\nNow $p_{eff}$ is the probability that the links are different from\none time step to the next. So $p_{eff}$ must be directly proportional to the\nprobability of random rewiring $p_t$ and the fraction of random links\n$p_s$. Starting with the ansatz that $p_{eff} = f (p_s \\ p_t)$, where the\nfunction $f$ is a power law, gives:\n\\begin{equation}\n{\\cal R} \\sim \\frac{(p_s \\ p_t)^{\\nu}}{1+(p_s \\ p_t)^{\\nu}}. \n\\label{ansatz}\n\\end{equation}\n\nFig.~\\ref{scale} displays the dependence of the range of the\nspatiotemporal fixed point, obtained numerically for the case of local\nlink change, on $p_s p_t$. Fitting this to Eq.~\\ref{ansatz} yields\n$\\nu \\sim 0.4$ for the range $0.1 < p_s p_t < 1$. The range of the\nspatiotemporal fixed point for the case of global changes can also be\nfit to the same functional form, with the best fit yielding $\\nu \\sim 0.3$\nin a similar range of $p_s p_t$.\n\n\\bigskip\n\n\n{\\em Generality of the Results:}\\\\\n\nIn order to gauge the generality of our results, we also analysed a\nnetwork of Exponential Maps (also known as the Ricker Map). These are\ngiven by the dynamical equation:\n\\begin{equation}\nf(x) = x \\ e^{r (1-x)}.\n\\end{equation}\nIn the results presented here we take the nonlinearity parameter $r$ to be $2.6$, where the map is strongly chaotic.\n\nRepresentative results are shown in Fig. 
\\ref{subfig: bif03}.\nClearly, spatiotemporal synchronization is obtained at coupling\nstrengths $\\epsilon > \\epsilon_{sync}$, where $\\epsilon_{sync} = 0.48$\nfor $p_t = 1$.\n\nFurther, we calculate the variation of $\\langle \\epsilon_{sync}\n\\rangle$, maximum $\\epsilon_{sync}$ and minimum $\\epsilon_{sync}$ with\nrespect to the link rewiring probability $p_t$, and the results are\ndisplayed in Fig. \\ref{subfig: spread02}. It is evident that the\nqualitative picture that emerges is the same as in a network of\nchaotic logistic maps. Namely, we again observe a wider separation\nbetween $\\epsilon_{max}$ and $\\epsilon_{min}$ at very low $p_t$\n(i.e. close to the static limit), and this shrinks rapidly as $p_t$\nincreases. As before, we also find a smooth decreasing trend for\n$\\langle \\epsilon_{sync} \\rangle$ with increasing $p_t$. So it is clear\nthat more frequent link changes enhance the range of spatiotemporal\nsynchronization, and the critical coupling strength necessary to\nobtain the spatiotemporal fixed point is lower in networks with\nfaster variation in connectivity.\n\n\n In summary, in this work we have investigated time-varying networks\n with complex dynamics at the nodes. We considered two scenarios of\n network change in an interval of time: first, we have the case\n where each link can change with probability $p_t$, i.e. the network\n changes occur locally and independently at each node. Secondly, we\n considered the case where the entire connectivity matrix changes\n with probability $p_t$, i.e. the change is global. We demonstrated\n that network changes, occurring both locally and globally, yield an\n enhanced range of synchronization. When the connections are\n changed slowly (i.e. $p_t$ is low) the nodes display nearly\n synchronized intervals interrupted by intermittent unsynchronized\n chaotic bursts. However, when the connections are switched quickly\n (i.e. 
$p_t$ is large), the intermittent behavior quickly settles\n down to a steady synchronized state. Furthermore, we found that the\n range of synchronization increases significantly with the\n probability of network change $p_t$. Additionally, the system\n reaches the synchronized state much more rapidly for the case of\n switched links. Thus our results highlight the strong effects of\n time-varying connections on the nodal dynamics, and our principal\n observations are relevant to the understanding of temporal networks\n in general.\n\n\\bigskip\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{}\n\\label{sec:appendixA}\n\\noindent\nHere we give some properties of $G(t)$ which are required for obtaining the asymptotic values of the MSD. \nWe have the following properties of fractional integrals and derivatives of $G(t)$:\n\\begin{subequations}\n\\label{eq:appendixA_001}\n\\begin{align}\n  I^{1-\\mu} G(t) & = I^{1-\\mu}G_\\circ(t) \\nonumber \\\\\n  & \\quad \n  + \\sum_{n=1}^\\infty \\left(-\\lambda_1\\right)^n\\int_0^t du I^{1-\\mu} G_\\circ(t - u) G_\\circ^{*n}(u)\n\\label{eq:appendixA_001a}\n\\end{align}\nand\n\\begin{align}\n  D^\\phi G(t) & = D^\\phi G_\\circ(t) \\nonumber \\\\\n  & \\quad\n  + \\sum_{n=1}^\\infty \\left(-\\lambda_1\\right)^n\\int_0^t du D^\\phi G_\\circ(t - u) G_\\circ^{*n}(u).\n\\label{eq:appendixA_001b}\n\\end{align} \n\\end{subequations}\nFrom (\\ref{eq:withExForce_005a}) the following can be calculated\n\\begin{subequations}\n\\label{eq:appendixA_002}\n\\begin{align}\n  D^\\alpha G_\\circ(t) &= E_{\\alpha - \\gamma +1,1}\\left(-\\lambda_2 t^{\\alpha-\\gamma+1}\\right) \n\\label{eq:appendixA_002a}\n\\\\\n  I^{1-\\alpha} G_\\circ(t) &= tE_{\\alpha - \\gamma +1,2}\\left(-\\lambda_2 t^{\\alpha-\\gamma+1}\\right) \n\\label{eq:appendixA_002b}\n\\\\\n  I^{1-\\kappa} G_\\circ(t) &= t^{\\alpha-\\kappa+1}E_{\\alpha - \\gamma +1,\\alpha-\\kappa+2}\\left(-\\lambda_2 t^{\\alpha-\\gamma+1}\\right). 
\n\\label{eq:appendixA_002c}\n\\end{align}\n\\end{subequations}\nIn addition, one needs to consider the following:\n\\begin{align}\n  \\int_0^t du G_\\circ(u) &= \\int_0^t du u^{\\alpha}\n  E_{\\alpha - \\gamma +1,\\alpha+1}\\left(-\\lambda_2 u^{\\alpha-\\gamma+1}\\right) \\nonumber\\\\\n  & = t^{\\alpha+1}E_{\\alpha - \\gamma +1,\\alpha+2}\\left(-\\lambda_2 t^{\\alpha-\\gamma+1}\\right). \n\\label{eq:appendixA_003}\n\\end{align}\n\n\\section{}\n\\label{sec:appendixB}\n\\noindent\nWe give the derivation of the expression for the variance (\\ref{eq:withExForce_015}). \nFrom (\\ref{eq:withoutExForce_008}), the Laplace transform of the convolution\n\\begin{align}\n  A(t) & = \\int_0^t du C(t - u)G(u)\n\\label{eq:appendixB_001}\n\\end{align}\nis given by\n\\begin{align}\n  \\tilde{A}(s) &= \\tilde{C}(s)\\tilde{G}(s).\n\\label{eq:appendixB_002}\n\\end{align}\nFrom the fluctuation dissipation relation we have \n$C(t) = k_BT\\gamma(t)$, \nand, expressing $\\tilde{G}(s)$ according to definition (\\ref{eq:withoutExForce_007}), we have\n\\begin{align}\n  \\tilde{A} &= k_BT\\frac{\\tilde{\\gamma}(s)}{s^{\\alpha+1} + \\tilde{\\gamma}(s) s}\n  = \\frac{k_BT}{s}\\left[1 - \\frac{s^\\alpha}{s^\\alpha + \\tilde{\\gamma}(s)}\\right].\n\\label{eq:appendixB_003}\n\\end{align}\nReplacing the convolution term in \n(\\ref{eq:withoutExForce_008})\n by the inverse Laplace transform of (\\ref{eq:appendixB_003}), we get (\\ref{eq:withExForce_015}).\n\\section{}\n\\label{sec:appendixC}\n\\noindent\nConsider the fractional generalized Langevin equation \n(\\ref{eq:withoutExForce_001}) and (\\ref{eq:withoutExForce_002}) \nwith $f(t) = 0$. 
\nThe solution for the position process is given by\n\\begin{align}\n  x(t) &= x_\\circ + \\frac{v_\\circ}{\\Gamma(1 - \\alpha)}\\int_0^t (t - u)^{-\\alpha} G(u) du \\nonumber \\\\\n  & \\quad + \\int_0^t G(t - u)\\xi(u) du,\n\\label{eq:appendixC_001}\n\\end{align}\nwhere $G(t)$ is the inverse Laplace transform of \n\\begin{align}\n  \\tilde{G}(s) &= \\frac{1}{s^{\\alpha+1} + \\tilde{\\gamma}(s) s}.\n\\label{eq:appendixC_002}\n\\end{align}\nFor $\\gamma(t) = \\lambda_2 \\frac{t^{-\\gamma}}{\\Gamma(1 - \\gamma)}$, one gets\n\\begin{align}\n  G(t) &= t^\\alpha E_{\\alpha - \\gamma +1,\\alpha +1}\\left(-\\lambda_2 t^{\\alpha - \\gamma +1}\\right).\n\\label{eq:appendixC_003}\n\\end{align}\nThe mean and variance of $x(t)$ are respectively\n\\begin{align}\n  \\bar{x} &= x_\\circ + v_\\circ I^{1-\\alpha}G(t)\n\\label{eq:appendixC_004}\n\\end{align}\nand\n\\begin{align}\n  \\left<\\left(x(t) - \\bar{x}\\right)^2\\right> &= 2k_BT\\left[\n  \\int_0^t du G(u)\n  - \\int_0^t du G(u)D^\\alpha G(u)\n  \\right].\n\\label{eq:appendixC_005}\n\\end{align}\nNow using the relation $R^2 = \\left(\\bar{x} - x_\\circ\\right)^2 + \\sigma^2$, \none sees that the MSD $R^2$ can have a closed form provided the second term of the variance cancels out the term \n$\\left(\\bar{x} - x_\\circ\\right)^2$. \nThis is possible if $\\alpha = 1$. 
\nWe thus obtain\n\\begin{align}\n  R^2 &= 2k_BT\\int_0^t du G(u)\n  = 2k_BT\\int_0^t du uE_{2-\\gamma,2}\\left(-\\lambda_2 u^{2-\\gamma}\\right) \\nonumber \\\\\n  & = 2k_BT t^2 E_{2-\\gamma,3}\\left(-\\lambda_2 t^{2-\\gamma}\\right).\n\\label{eq:appendixC_006}\n\\end{align}\nWhen $\\gamma = 1\/2$, one gets \n\\begin{align}\n  R^2 = 2k_BT t^2 E_{3\/2,3}\\left(-\\lambda_2 t^{3\/2}\\right).\n\\label{eq:appendixC_007}\n\\end{align}\n\n\n\n\\section{New Closed Expressions for MSD of SFD}\n\\label{sec:clossExp}\n\\noindent\nFrom the above discussion we see that it is not possible to obtain a closed analytic expression for the MSD \nfor the position process from the solution to \n(\\ref{eq:withoutExForce_001})\nand \n(\\ref{eq:withoutExForce_002}), \neven though we are able to show that both the long time and short time asymptotic properties of the MSD agree with those for SFD. \nHowever, for the overdamped case one has the closed expression (\\ref{eq:withExForce_026}) for the MSD.\nBy letting $\\gamma = 1\/2$, (\\ref{eq:withExForce_026}) gives the correct short and long time limits for the MSD of SFD.\n\n\nLet us consider this special case in more detail.\nIf we let $\\gamma = 1\/2$, and with appropriate values for $\\zeta$ and $\\lambda$, \nwe can then show that \n$R^2 = 2k_BT\\zeta tE_{1\/2,2}\\left(-\\lambda t^{1\/2}\\right)$\ncan provide the correct description of SFD. \nBy comparing this expression with (\\ref{eq:introduction_03}) one gets, in the limit $t \\rightarrow 0$, \n\\begin{align}\n  R^2 & \\sim 2k_BT\\zeta t = l^2(1 -\\theta) \\frac{t}{\\tau},\n\\label{eq:clossExp_001}\n\\end{align}\nwhich requires $\\zeta = l^2(1-\\theta)\/(2k_BT\\tau)$. 
\nOn the other hand, by comparing it with (\\ref{eq:introduction_04})\none has, for $t \\rightarrow \\infty$,\n\\begin{gather}\n  R^2 \\sim 2k_BT\\zeta t \\frac{\\left(\\lambda t^{1\/2}\\right)^{-1}}{\\Gamma(3\/2)}\n  = \\frac{4k_BT\\zeta}{\\lambda\\sqrt{\\pi}}t^{1\/2} \\nonumber \\\\\n  = l^2 \\frac{1-\\theta}{\\theta}\\sqrt{\\frac{2}{\\pi}}\\sqrt{\\frac{t}{\\tau}},\n\\label{eq:clossExp_002}\n\\end{gather}\nwhich requires $\\lambda =\\theta\\sqrt{2\/\\tau}$. \nTherefore, the expression $R^2 = 2k_BT\\zeta tE_{1\/2,2}\\left(-\\lambda t^{1\/2}\\right)$\nwith $\\zeta = l^2(1-\\theta)\/(2k_BT\\tau)$ and $\\lambda = \\theta\\sqrt{2\/\\tau}$\nprovides a new alternative expression for the MSD of SFD. \n\n\nBy noting that both $E_{1\/2,2}(t)$ and $E_{1\/2,\\beta}(t)$, $\\beta > 0$, \nhave the same short and long time limits (up to a multiplicative constant), it is then possible to generalize the above closed\nexpression for the MSD to the following more general expression\n\\begin{align}\n  R^2 & = 2k_BT\\zeta^\\prime t E_{1\/2,\\beta}\\left(-\\lambda^\\prime t^{1\/2}\\right),\n\\label{eq:clossExp_003}\n\\end{align}\nfor $\\beta > 0$. \nAgain, by comparing (\\ref{eq:clossExp_003}) with (\\ref{eq:introduction_03}) and (\\ref{eq:introduction_04}) one gets \n$\\zeta^\\prime = l^2(1- \\theta)\\Gamma(\\beta)\/(2k_BT\\tau)$ \nand $\\lambda^\\prime = \\sqrt{\\pi}\\,\\theta\\Gamma(\\beta)\/\\left(\\Gamma(\\beta-\\tfrac{1}{2})\\sqrt{2\\tau}\\right)$, which reduces to $\\lambda = \\theta\\sqrt{2\/\\tau}$ at $\\beta = 2$.\nExpression (\\ref{eq:clossExp_003}) gives a family of closed form expressions for SFD, \nand they can be regarded as alternatives to the expression (\\ref{eq:introduction_06}) obtained by Brandani \\cite{Brandani96}. 
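For beta = 2 the Mittag-Leffler function can be written in terms of the scaled complementary error function, E_{1/2,1}(-x) = exp(x^2)erfc(x), so both limits above can be checked numerically. The sketch below assumes SciPy's erfcx is available, sets kBT = zeta = lambda = 1 (an arbitrary normalization), and uses the recurrence E_{a,b}(z) = z E_{a,a+b}(z) + 1/Gamma(b) twice:

```python
import numpy as np
from scipy.special import erfcx  # erfcx(x) = exp(x^2)*erfc(x) = E_{1/2,1}(-x) for x >= 0

def ml_half_two(x):
    """E_{1/2,2}(-x) for x > 0, via E_{a,b}(z) = z*E_{a,a+b}(z) + 1/Gamma(b)."""
    e_three_halves = (1.0 - erfcx(x)) / x               # E_{1/2,3/2}(-x)
    return (2.0 / np.sqrt(np.pi) - e_three_halves) / x  # 1/Gamma(3/2) = 2/sqrt(pi)

def msd(t, zeta=1.0, lam=1.0, kBT=1.0):
    """R^2 = 2*kBT*zeta*t*E_{1/2,2}(-lam*sqrt(t))."""
    return 2.0 * kBT * zeta * t * ml_half_two(lam * np.sqrt(t))

# short-time regime: R^2 ~ 2*kBT*zeta*t ; long-time: R^2 ~ 4*kBT*zeta*sqrt(t)/(lam*sqrt(pi))
short_ratio = msd(1e-10) / (2.0 * 1e-10)
long_ratio = msd(1e8) / (4.0 * np.sqrt(1e8) / np.sqrt(np.pi))
```

Both ratios approach 1, confirming that the closed expression interpolates between the normal-diffusion and single-file regimes.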
\nSimulations of \n$R^2 = 2\\zeta tE_{1\/2,2}\\left(-\\lambda t^{1\/2}\\right)$ \nand the more general case \n$R^2 = 2\\zeta tE_{1\/2,\\beta}\\left(-\\lambda t^{1\/2}\\right)$ \nwith some specific values of $\\zeta$, $\\lambda$ and $\\beta$ are given in Figures \\ref{fig:SFD01} and \\ref{fig:SFD02}.\n\\begin{figure*}[h!]\n  \\centering\n  \\subfloat[]{\n  \\includegraphics*[viewport=30 30 680 480,scale=0.35]{SFDfig01a.eps}}\n  \\subfloat[]{\n  \\includegraphics*[viewport=30 30 680 480,scale=0.35]{SFDfig01b.eps}}\n  \n  \\caption{Short- and long-time simulations for MSD $2tE_{1\/2,2}\\left(-t^{1\/2}\\right)$.}\n  \\label{fig:SFD01}\n\\end{figure*}\n\\begin{figure*}[h!]\n  \\centering\n  \\subfloat[]{\n  \\includegraphics*[viewport=30 30 680 480,scale=0.35]{SFDfig02a.eps}}\n  \\subfloat[]{\n  \\includegraphics*[viewport=30 30 680 480,scale=0.35]{SFDfig02b.eps}}\n  \n  \\caption{Short- and long-time simulations for MSD $2t^2E_{3\/2,3}\\left(-t^{3\/2}\\right)$.}\n  \\label{fig:SFD02}\n\\end{figure*}\n\nThe SFD model exhibiting a transition from the normal diffusion to the subdiffusion regime can only be regarded as an approximation to the real SFD system. \nAt very short times such a system first undergoes ballistic motion with \n${\\rm MSD} \\sim t^2$.\nBallistic motion occurs before the particle has had a chance to collide with anything during the initial very short times. 
\nThere exists the possibility of a direct transition from the ballistic regime to the single-file behavior \n\\cite{Karger08,KargerHahn96,HahnKarger96}.\nSuch a tendency becomes more prominent with increasing particle concentration, \nand this has been demonstrated by molecular dynamics simulations \\cite{KargerHahn96,HahnKarger96}.\nIn order to describe such a situation, \none considers the generalized Langevin equation \n(\\ref{eq:withoutExForce_001})\nwith $f(t) = 0$ and $\\gamma(t) = \\lambda_2 t^{-1\/2}$, which gives the MSD as\n\\begin{align}\n  R^2 & = 2k_BT t^2 E_{3\/2,3}\\left(-\\lambda_2 t^{3\/2}\\right).\n\\label{eq:clossExp_004}\n\\end{align}\nThe detailed derivation of \n(\\ref{eq:clossExp_004})\n is given in Appendix~\\ref{sec:appendixC}. \nIt can be shown easily that~(\\ref{eq:clossExp_004}) gives the correct long and short time limits for an SF system \nthat goes directly from ballistic motion to the SF regime. \nSimulation of (\\ref{eq:clossExp_004}) is given in Figure \\ref{fig:SFD02}. \nWe also show the presence of the three regimes, \nnamely ballistic motion, normal diffusion and SFD subdiffusion, in Figure \\ref{fig:SFD03} based on (\\ref{eq:clossExp_004}). \n\n\\begin{figure*}[h!]\n  \\centering\n  \\includegraphics*[viewport=30 30 680 480,scale=0.35]{SFDfig03.eps}\n  \\caption{Plots showing the three regimes of SFD according to MSD $R^2 = 2t^2E_{3\/2,3}\\left(-t^{3\/2}\\right)$.}\n  \\label{fig:SFD03}\n\\end{figure*}\n\n\\section{Concluding Remarks}\n\\label{sec:conclude}\n\\noindent\nWe have studied a class of generalized Langevin equations with and without external force. 
\nIt is found that with a specific choice of external force, namely one which varies as $t^{-\\kappa}$, $0< \\kappa \\leq 1$, \nthe solution of the generalized Langevin equation \n(\\ref{eq:withoutExForce_001})\ngives a Gaussian process with MSD that has the same asymptotic properties as that of SFD.\n\nWe remark that an equivalent way to characterize SFD is to use the probability distribution function $W(x,t)$,\nwhich can be obtained as the solution to the Fokker-Planck equation. \nDerivation of the Fokker-Planck equation corresponding to the fractional generalized Langevin equation \n(\\ref{eq:withoutExForce_001}) is a highly non-trivial problem. Furthermore, it may not be fruitful to consider such an equation since we only use \n(\\ref{eq:withoutExForce_001}) to obtain the asymptotic properties of SFD. \nHowever, it may be interesting to note that\n$W(x,t)$ in the short and long time limits is given respectively by solutions to the following Fokker-Planck equations: \n\\begin{align}\n  \\frac{\\partial W(x,t)}{\\partial t} & = D_\\circ \\frac{\\partial^2 W(x,t)}{\\partial x^2},\n  \\quad t \\rightarrow 0,\n\\label{eq::conclude_001}\n\\end{align}\nand\n\\begin{align}\n  \\frac{\\partial W(x,t)}{\\partial t} & = \\frac{F}{2\\sqrt{t}} \\frac{\\partial^2 W(x,t)}{\\partial x^2},\n  \\quad t \\rightarrow \\infty.\n\\label{eq::conclude_002}\n\\end{align}\n(\\ref{eq::conclude_001})\nis just the usual diffusion equation and its solution gives the probability distribution of normal diffusion, \nwhile \n(\\ref{eq::conclude_002})\nis similar to the effective Fokker-Planck equation for fractional Brownian motion with Hurst index \n$H=1\/4$\n\\cite{WangLung90}. 
\nOne may heuristically combine these two equations into one:\n\\begin{subequations}\n\\label{eq::conclude_003}\n\\begin{align}\n  \\frac{\\partial W(x,t)}{\\partial t} & = \\left(\\frac{F}{D_\\circ\\sqrt{t}}\\right)^{\\rho(t)}D_\\circ \\frac{\\partial^2 W(x,t)}{\\partial x^2},\n\\label{eq::conclude_003a}\n\\end{align}\nwhere\n\\begin{align}\n  \\rho(t) & =\n  \\begin{cases}\n    0 & \\text{if} \\ t \\rightarrow 0 \\\\\n    1 & \\text{if} \\ t \\rightarrow \\infty \n  \\end{cases}.\n\\label{eq::conclude_003b}\n\\end{align} \n\\end{subequations}\n(\\ref{eq::conclude_003}) has the disadvantage that the intermediate values of $\\rho(t)$ are not known. \nOne concrete version of the Fokker-Planck equation \nthat gives the short and long time limiting equations (\\ref{eq::conclude_001}) and (\\ref{eq::conclude_002})\nis the following:\n\\begin{align}\n  \\frac{\\partial W(x,t)}{\\partial t} & = D_\\circ\n  E_{1\/2,1}\\left(-\\frac{2D_\\circ}{\\sqrt{\\pi}F}t^{1\/2}\\right)\n  \\frac{\\partial^2 W(x,t)}{\\partial x^2},\n\\label{eq::conclude_004}\n\\end{align}\nwhich has the required properties since\n\\begin{align}\n  E_{1\/2,1}\\left(-\\frac{2D_\\circ}{\\sqrt{\\pi}F}t^{1\/2}\\right) & \\underset{t \\to 0}{\\sim} \\frac{1}{\\Gamma(1)}\n  = 1\n\\label{eq::conclude_005}\n\\end{align}\nand\n\\begin{align}\n  E_{1\/2,1}\\left(-\\frac{2D_\\circ}{\\sqrt{\\pi}F}t^{1\/2}\\right) & \\underset{t \\to \\infty}{\\sim} \n  \\frac{1}{\\Gamma(1\/2)\\frac{2D_\\circ}{\\sqrt{\\pi}F}t^{1\/2}}\n  = \\frac{F}{2D_\\circ t^{1\/2}},\n\\label{eq::conclude_006}\n\\end{align}\nso that the effective diffusion coefficient in (\\ref{eq::conclude_004}) tends to $D_\\circ$ as $t \\rightarrow 0$ and to $F\/(2\\sqrt{t})$ as $t \\rightarrow \\infty$.\n\nNote that (\\ref{eq::conclude_004}) is one possible effective Fokker-Planck equation which gives the correct asymptotic\nprobability distributions for SFD.\nOur remarks on the limitation of using the MSD to characterize SFD apply to the probability distribution as well. \n$W(x,t)$ does not describe SFD uniquely even if we know its values for all intermediate times, \nor the correct Fokker-Planck equation for the process. 
\nOn the other hand, if one has the correct fractional Langevin equation for SFD at all times, \nthen it describes the process uniquely.\n\nWe also obtain a class of new closed expressions for the MSD of SFD by considering the solution of the overdamped case, \nthus providing an alternative expression to that of Brandani \\cite{Brandani96}. \nFrom another special case of the generalized fractional Langevin equation (\\ref{eq:withoutExForce_001}), \nwe derive a closed expression describing the MSD of SFD with three regimes, namely ballistic motion, \nnormal diffusion and sub-diffusion. \nThese closed analytic expressions of the MSD are given in terms of Mittag-Leffler functions. \nFinally, we remark that despite numerous studies carried out on SFD, \nthere are very few results which deal with possible concrete realizations of SFD as a specific stochastic process. \nIn a recent work, we proposed one such possible realization of SFD as the step fractional Brownian motion, \nwhich has the flexibility of giving the correct description of SFD with two and three regimes \n\\cite{LimTeo09}. \n\n\\section{Introduction}\n\\label{sec:introduction}\n\\noindent\nRecent advances in nanofabrication allow the preparation of new types of nanotube materials such as carbon nanotubes, peptide nanotubes, \ninorganic and organic zeolites, etc. \nThe study of the molecular transport processes inside these nanotubes or channels has attracted considerable attention \n\\cite{KargerRuthven09,Karger08,RoqueMalherbe07,ChengBowen07,Strook00,AidleyStandfield96,Alberts08}. \nThe one-dimensional transport process of an assembly of non-passing particles (without mutual passage) \nin confined geometries such as narrow pores or nanotubes is known as single-file diffusion (SFD). \nIn other words, particles undergoing SFD maintain the order of their arrangement at all times.\n\nThe main feature of SFD can be characterized by the short time and long time limits of its mean square displacement (MSD), denoted by $R^2$. 
\nFor very short observation times, the particles diffuse normally and satisfy Fick's law, \nsuch that the short time limit of the MSD of SFD is given by\n\\begin{align}\n  R^2 & \\equiv \\left<\\left(x(t)-x_\\circ\\right)^2\\right> = 2D_\\circ t,\n  \\quad t \\rightarrow 0,\n\\label{eq:introduction_01}\n\\end{align}\nwhere\n$D_\\circ$\nis the diffusion coefficient. \nIn other words, for very short diffusion times, \nthe motion is just ordinary Brownian motion, \nwhich is a Markov process. \nAs for the long time limit of the MSD for SFD, one has \n\\begin{align}\n  R^2 & = 2F\\sqrt{t}, \n  \\quad t \\rightarrow \\infty,\n\\label{eq:introduction_02}\n\\end{align}\nwhere $F$ is the SFD mobility. \nRecall that diffusion that does not satisfy Fick's law is known as anomalous diffusion, with MSD satisfying\n$R^2 \\propto t^\\alpha$, \n$\\alpha\\neq 1$. \nWhen \n$\\alpha > 1$\nthe diffusion is enhanced and it is called superdiffusion. \nIn the case with\n$\\alpha < 1$\nthe diffusion is subdued and one has subdiffusion. \nThus, the long time behavior of SFD belongs to subdiffusion, \nwhich is a non-Markovian process, indicating the motion is correlated.\n\nConsider the simple case of a single molecule in a one-dimensional pore, \nsuch that the molecule can move in either direction with equal probability via activated jumps with step length \n$l$. \nIf\n$\\tau$ \nis the average time between successive jumps, \nand \n$\\theta$\nis the fractional occupancy, \nthen one has\n\\begin{align}\n  R^2 & = l^2(1-\\theta)\\frac{t}{\\tau}, \\quad\n  t \\rightarrow 0,\n\\label{eq:introduction_03}\n\\end{align}\nwhich corresponds to normal self-diffusion. 
\nFedders \n\\cite{Fedders78}\nhas derived the long time limit of the MSD as \n\\begin{align}\n R^2 & = l^2 \\frac{(1-\\theta)}{\\theta}\\sqrt{\\frac{2}{\\pi}}\\sqrt{\\frac{t}{\\tau}}, \\quad\n t \\rightarrow \\infty,\n\\label{eq:introduction_04}\n\\end{align}\nwhich shows that at longer times the motion is strongly suppressed by collisions with neighboring particles \n(see also \\cite{Karger92}). \nFrom (\\ref{eq:introduction_02}) and (\\ref{eq:introduction_04}) one obtains the SFD mobility as\n\\begin{align}\nF & = l^2 \\frac{(1-\\theta )}{\\theta}\\frac{1}{{\\sqrt{2\\pi}}}.\n\\label{eq:introduction_05}\n\\end{align}\n\nAbout a decade ago, \nBrandani \\cite{Brandani96} introduced heuristically (without derivation) the following analytic expression for the MSD in SFD:\n\\begin{align}\n R^2 & = l^2 \\frac{(1-\\theta)t\/\\tau}{1+\\theta\\sqrt{\\pi\/2}\\sqrt{t\/\\tau}}.\n\\label{eq:introduction_06}\n\\end{align}\nExpression \n(\\ref{eq:introduction_06})\nis able to give the correct limiting cases \n(\\ref{eq:introduction_03})\nand \n(\\ref{eq:introduction_04}). \nEven today, \n(\\ref{eq:introduction_06})\nis still regarded as an expression that \n``comprises both cases with satisfactory accuracy'' \n[Ref. \\cite{Karger08}, page 334]. \n\nA similar expression for the MSD of SFD was obtained by \nLin et al. \n\\cite{Lin05}\nbased on the following ansatz:\n\\begin{align}\n \\frac{1}{R^2} & = \\frac{1}{2D_\\circ t} + \\frac{1}{2F\\sqrt{t}} .\n\\label{eq:introduction_07}\n\\end{align}\nBy solving\n(\\ref{eq:introduction_07})\nfor the MSD one gets\n\\begin{align}\n R^2 & = \\frac{2D_\\circ t}{1+D_\\circ\\sqrt{t}\/F} .\n\\label{eq:introduction_08}\n\\end{align}\nJust like \n(\\ref{eq:introduction_06}), \none can obtain from \n(\\ref{eq:introduction_08})\nthe short time and long time limits \n(\\ref{eq:introduction_01})\nand\n(\\ref{eq:introduction_02})\nrespectively. 
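As a quick numerical sanity check (our own illustration, not part of the original analyses; the parameter values are arbitrary), one can verify that the interpolation (\\ref{eq:introduction_08}) indeed reproduces the limiting behaviors (\\ref{eq:introduction_01}) and (\\ref{eq:introduction_02}):

```python
import math

def msd_lin(t, D0, F):
    # Harmonic-mean interpolation of the Lin et al. ansatz:
    # R^2 = 2*D0*t / (1 + D0*sqrt(t)/F)
    return 2.0 * D0 * t / (1.0 + D0 * math.sqrt(t) / F)

D0, F = 1.3, 0.7  # arbitrary illustrative values

# Short-time limit: R^2 -> 2*D0*t (normal diffusion)
t = 1e-8
assert abs(msd_lin(t, D0, F) / (2 * D0 * t) - 1) < 1e-3

# Long-time limit: R^2 -> 2*F*sqrt(t) (single-file subdiffusion)
t = 1e8
assert abs(msd_lin(t, D0, F) / (2 * F * math.sqrt(t)) - 1) < 1e-3
```

The same check applied to (\\ref{eq:introduction_06}) recovers (\\ref{eq:introduction_03}) and (\\ref{eq:introduction_04}).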
\nTo the best of our knowledge, \nso far there still do not exist any derivations of the closed analytic expressions \n(\\ref{eq:introduction_06})\nor \n(\\ref{eq:introduction_08}).\n\nAlthough research on SFD can be traced back to the 1960s, until now there have not been many studies that provide a comprehensive \ndescription of the process. \nOne notable exception is that of reference \\cite{RodenbeckKargerHahn98}, \nwhich has derived the exact SFD propagator valid for all time scales based on the reflection principle of \nChandrasekhar \\cite{Chandrasekhar43}. \nOther attempts to model SFD range from the early statistical and probabilistic models \n\\cite{Fedders78,Karger92,Lin05,Harris65,Richards77,Levitt73,Beijeren83,Liggett85}\nto the more recent ones based on fractional dynamics \n\\cite{DemontisSuffritti06,Baqndyopadhyay08a,Baqndyopadhyay08b,TaloniLomholt08,LimTeo09}.\nThe main aims of this paper are twofold. \nFirst, we introduce a new type of fractional generalized Langevin equation with external force to model SFD. \nOur second objective is to derive a new closed expression for the MSD of SFD. \nAlthough the fractional generalized \nLangevin equation under consideration does not lead to a closed expression for the MSD of SFD, \nit is still possible to show that it gives the correct short and long time limits under some specific conditions. \nIt is possible to derive from various special cases of the fractional generalized Langevin equation\na completely new class of closed expressions for the MSD of SFD, which can be regarded as alternatives to\n(\\ref{eq:introduction_06}) and (\\ref{eq:introduction_08}).\nOur previous work \n\\cite{LimTeo09}, \nthough it contains a detailed discussion of the fractional generalized Langevin approach to SFD and its realization as a step fractional Brownian motion, \ndoes not provide the derivation of closed expressions of the MSD for SFD. 
\nIn addition, the present work also provides a discussion of the effective \nFokker-Planck equation for SFD.\n\n\n\n\n\\section{Fractional Generalized Langevin Equation with External Force}\n\\label{sec:withExForce}\n\\noindent\nIn this section we consider the generalized Langevin equation \n(\\ref{eq:withoutExForce_001})\nwith external force under the following general setting:\n\n\\vspace{0.5cm}\n\\noindent{\\bf Case 3.}\n$0 < \\alpha < 1$,\n$\\gamma(t) = \\lambda_1 \\delta(t) + \\lambda_2 \\frac{t^{-\\gamma}}{\\Gamma(1-\\gamma)}$\nand\n$a \\neq 0$.\n\n\\noindent\nA special case with\n$\\alpha = 1$,\n$\\gamma = 1\/2$ and \n$a = 0$\nhas been considered recently in \n\\cite{TaloniLomholt08}. \nThe solution for the position process is then given by \n(\\ref{eq:withoutExForce_006}), \nwith \n$G(t)$ \ngiven by the inverse Laplace transform of\n\\begin{align}\n \\tilde{G}(s) & = \\frac{1}{s^{\\alpha+1}+ \\lambda_1 s + \\lambda_2 s^\\gamma}.\n\\label{eq:withExForce_001}\n\\end{align}\nIn order to study the asymptotic properties of the solution, \nwe expand\n$\\tilde{G}(s)$\nin the following series form\n\\begin{align}\n \\tilde{G}(s) & = \\frac{s^{-\\gamma}}{s^{\\alpha+1-\\gamma} + \\lambda_2}\n \\sum_{n=0}^\\infty \\left[\\frac{-\\lambda_1 s^{1-\\gamma}}{s^{\\alpha+1-\\gamma} + \\lambda_2}\\right]^n.\n\\label{eq:withExForce_002}\n\\end{align}\nOne can express the solution in terms of \nthe two parameter Mittag-Leffler function\n$E_{\\alpha,\\beta}(z)$\nby using the following Laplace transform relation:\n\\begin{align}\n L\\left[t^{\\beta-1} E_{\\alpha,\\beta}\\left(\\lambda t^\\alpha\\right)\\right]\n & = \\frac{s^{\\alpha-\\beta}}{s^\\alpha - \\lambda},\n\\label{eq:withExForce_003}\n\\end{align}\nwhere\n$E_{\\alpha,\\beta}(z)$\nis defined by \n\\cite{Erdelyi3_53}\n\\begin{align}\n E_{\\alpha,\\beta}(z) & = \\sum_{n=0}^\\infty \\frac{z^n}{\\Gamma(\\alpha n + \\beta)}.\n\\label{eq:withExForce_004}\n\\end{align}\nNow define \n\\begin{subequations} \n\\label{eq:withExForce_005}\n\\begin{align} 
\nG_\\circ(t) & = t^\\alpha E_{\\alpha-\\gamma+1,\\alpha+1}\\left(-\\lambda_2 t^{\\alpha-\\gamma+1}\\right),\n\\label{eq:withExForce_005a}\n\\\\\nG_\\circ^*(t) & = t^{\\alpha -1} E_{\\alpha-\\gamma+1,\\alpha}\\left(-\\lambda_2 t^{\\alpha-\\gamma+1}\\right),\n\\label{eq:withExForce_005b}\n\\end{align}\n\\end{subequations}\nand\n\\begin{align}\n G_\\circ^{*n}(t) & = \\int_0^t du_1 G_\\circ^*\\left(t-u_1\\right)\n \\int_0^{u_1} du_2 G_\\circ^*\\left(u_1 - u_2\\right)\n \\cdots \\nonumber \\\\\n & \\quad \n \\cdots\n \\int_0^{u_{n-2}} du_{n-1} G_\\circ^*\\left(u_{n-2} - u_{n-1}\\right)\n G_\\circ^*\\left(u_{n-1}\\right)\n\\label{eq:withExForce_006}\n\\end{align}\nfor $n \\geq 2$, with $G_\\circ^{*1} = G_\\circ^*$. \nWe then have\n\\begin{align}\n G(t) & = G_\\circ (t) + \\sum_{n=1}^\\infty \\left(-\\lambda_1\\right)^n\n \\int_0^t du G_\\circ(t - u)G_\\circ^{*n}(u).\n\\label{eq:withExForce_007}\n\\end{align}\nSome properties of\n$G(t)$\nwhich are necessary for obtaining the asymptotic limits of the MSD are given in \n\\ref{sec:appendixA}.\n\nWe shall also need the following asymptotic expansions of the \nMittag-Leffler function \n\\cite{Erdelyi3_53}\nto obtain the asymptotic properties of the variance and MSD. \nFor $z \\rightarrow \\infty$,\n\\begin{align}\n E_{\\alpha,\\beta}(-z) & = \\sum_{n=1}^N \\frac{(-1)^{n-1}z^{-n}}{\\Gamma(\\beta - n\\alpha)}\n + \\mathcal{O}\\left(|z|^{-1-N}\\right), \\nonumber \\\\\n & \\hspace{2cm} \\left|\\arg(z)\\right| < \\left(1 - \\frac{\\alpha}{2}\\right)\\pi,\n\\label{eq:withExForce_008}\n\\end{align}\nand for $z \\rightarrow 0$,\n\\begin{align}\nE_{\\alpha,\\beta}(-z) & \\sim \\frac{1}{\\Gamma(\\beta)} + \\mathcal{O}(z).\n\\label{eq:withExForce_009}\n\\end{align}\n\n\\vspace{0.5cm}\n\\noindent\n{(i). 
Short time limit}\n\nFor $t \\rightarrow 0$, we have \n\\begin{align}\n G(t) & \\underset{t \\to 0}{\\sim} \\frac{t^\\alpha}{\\Gamma(\\alpha+1)}\n - \\lambda_2 \\frac{t^{2\\alpha-\\gamma+1}}{\\Gamma(2\\alpha-\\gamma+2)} \n - \\lambda_1 \\frac{t^{2\\alpha}}{\\Gamma(2\\alpha+1)}.\n\\label{eq:withExForce_010}\n\\end{align}\nSince $\\gamma \\leq 1$, one gets\n\\begin{align}\n G(t) & \\underset{t \\to 0}{\\sim} \\frac{t^\\alpha}{\\Gamma(\\alpha+1)}\n - \\lambda_1 \\frac{t^{2\\alpha}}{\\Gamma(2\\alpha+1)}.\n\\label{eq:withExForce_011}\n\\end{align}\nFrom (\\ref{eq:withoutExForce_006}) \nand (\\ref{eq:appendixA_001}) in \\ref{sec:appendixA}\none gets\n\\begin{align}\n \\bar{x} - x_\\circ &= v_\\circ I^{1-\\alpha} G(t) + a I^{1-\\kappa} G(t) \n\\label{eq:withExForce_012}\n \\\\\n &\\underset{t \\to 0}{\\sim} \n v_\\circ\\left[t - \\lambda_1 \\frac{t^{\\alpha+1}}{\\Gamma(\\alpha+2)}\\right] \n \\nonumber \\\\\n & \\quad + a\\left[\n \\frac{t^{\\alpha-\\kappa+1}}{\\Gamma(\\alpha-\\kappa+2)}\n -\\lambda_1\\frac{t^{2\\alpha-\\kappa+1}}{\\Gamma(2\\alpha-\\kappa+2)}\n \\right].\n\\label{eq:withExForce_013}\n\\end{align}\nWe therefore obtain\n\\begin{subequations}\n\\label{eq:withExForce_014}\n\\begin{align}\n \\bar{x} - x_\\circ &\\underset{t \\to 0}{\\sim} \n v_\\circ t + a\\frac{t^{\\alpha-\\kappa+1}}{\\Gamma(\\alpha-\\kappa+2)},\n\\label{eq:withExForce_014a}\n\\end{align}\nand\n\\begin{align}\n \\bar{x} - x_\\circ &\\underset{t \\to 0}{\\sim}\n \\begin{cases}\n v_\\circ t, & a = 0 \\ \\text{or} \\ \\alpha > \\kappa \\\\\n \\left(v_\\circ + a\\right)t, & a \\neq 0, \\alpha = \\kappa \\\\\n a\\frac{t^{\\alpha-\\kappa+1}}{\\Gamma(\\alpha-\\kappa+2)}, & a \\neq 0, \\alpha < \\kappa \n \\end{cases}.\n\\label{eq:withExForce_014b}\n\\end{align}\n\\end{subequations}\nUsing the following expression for the variance\n(see \\ref{sec:appendixB} for its derivation)\n\\begin{align}\n \\sigma^2 &= 2k_B T\\left\\{\\int_0^t du G(u) - \\int_0^t du G(u)D^\\alpha 
G(u)\\right\\},\n\\label{eq:withExForce_015}\n\\end{align}\none gets\n\\begin{align}\n \\sigma^2 &\\underset{t \\to 0}{\\sim} 2k_B T \n \\lambda_1 \\left[\\frac{t^{2\\alpha+1}}{(2\\alpha+1)\\Gamma^2(\\alpha+1)}\\right].\n\\label{eq:withExForce_016}\n\\end{align}\nHere we remark that in the derivation of (\\ref{eq:withExForce_015}), \nwe have made use of the fluctuation-dissipation (FD) theorem \n(see \\ref{sec:appendixB}). \nIt has been pointed out that the FD theorem fails in the presence of external random noise \n\\cite{Kubo66}. \nHowever, the external force in the fractional generalized Langevin equation \n(\\ref{eq:withoutExForce_001})\nis a non-random force, \nso the FD theorem is applicable in this case up to first order of the external force term \n\\cite{WangTokuyama99}. \nNote that we also assume that the FD theorem applies to the fractional generalized Langevin equation, \nas done in references \n\\cite{TaloniLomholt08,LimTeo09b}.\nThe usual fluctuation-dissipation theorem is valid for an SFD system provided the fluid or gas is elastic and dilute \n\\cite{Villamaina08}. \nIn the case of strongly inelastic and dense systems, \nthe fluctuation-dissipation formula fails and a more general form of the fluctuation-dissipation relation has to be used.\n\nSince the short time behavior of the variance is of order $t^{2\\alpha+1}$, \nfor $\\alpha > 1\/2$ \nit is smaller than \n$\\left(\\bar{x} - x_\\circ\\right)^2$\ngiven by the square of \n(\\ref{eq:withExForce_014b}), \nand thus it is the latter term that dominates the short time limit of the MSD. 
\nThis gives\n\\begin{subequations}\n\\label{eq:withExForce_017}\n\\begin{align}\n R^2 &\\underset{t \\to 0}{\\sim}\n \\begin{cases}\n v_\\circ^2 t^2, & a = 0 \\ \\text{or} \\ \\alpha > \\kappa \\\\\n \\left(v_\\circ + a\\right)^2t^2, & a \\neq 0, \\alpha = \\kappa \\\\\n a^2\\frac{t^{2\\alpha-2\\kappa+2}}{\\Gamma^2(\\alpha-\\kappa+2)}, & a \\neq 0, \\alpha < \\kappa 
 \\end{cases}.\n\\label{eq:withExForce_017a}\n\\end{align}\nThe short time limits associated with $\\alpha > 1\/2$ given by (\\ref{eq:withExForce_017a})\nimply that the two cases with $\\alpha > \\kappa$ (zero external force) \nand $\\alpha = \\kappa$ (non-zero external force) are both ballistic in nature. \nThe case corresponding to non-zero external force with $\\alpha < \\kappa$ is non-ballistic, \nand it leads to normal diffusion if $\\kappa = \\alpha +1\/2$. \n\nFor $\\alpha = 1\/2$, one gets\n\\begin{align}\n R^2 &\\underset{t \\to 0}{\\sim}\n \\begin{cases}\n \\left(v_\\circ^2 + \\frac{4k_BT\\lambda_1}{\\pi}\\right)t^2, & a = 0 \\ \\text{or} \\ 1\/2 > \\kappa \\\\\n \\left[\\left(v_\\circ + a\\right)^2 + \\frac{4k_BT\\lambda_1}{\\pi}\\right]t^2, & a \\neq 0, \\kappa = 1\/2 \\\\\n a^2\\frac{t^{3-2\\kappa}}{\\Gamma^2(3\/2-\\kappa)}, & a \\neq 0, 1\/2 < \\kappa 
 \\end{cases}.\n\\label{eq:withExForce_017b}\n\\end{align}\nThe short time limits given by (\\ref{eq:withExForce_017b}) are quite similar to those in (\\ref{eq:withExForce_017a}). 
\nAgain, the first two cases with $\\alpha =1\/2 > \\kappa$ (zero external force) \nand $\\alpha =1\/2 = \\kappa$ (non-zero external force) both lead to ballistic motion, \nwhile the third case with $\\alpha =1\/2 < \\kappa$ (non-zero external force) is non-ballistic; \nit gives normal diffusion only if $\\kappa = 1$.\n\nFinally, for $\\alpha < 1\/2$ one obtains\n\\begin{align}\n R^2 &\\underset{t \\to 0}{\\sim}\n \\begin{cases}\n 2k_B T\\lambda_1 \\left[\\frac{t^{2\\alpha+1}}{(2\\alpha+1)\\Gamma^2(\\alpha+1)}\\right], & a = 0 \\ \\text{or} \\ \\alpha > \\kappa \\\\\n a^2\\frac{t^{2\\alpha-2\\kappa+2}}{\\Gamma^2(\\alpha-\\kappa+2)}, & a \\neq 0, \\alpha < \\kappa 
 \\end{cases}.\n\\label{eq:withExForce_017c}\n\\end{align}\n\\end{subequations}\nNeither case in (\\ref{eq:withExForce_017c}) can be ballistic. \nThe first case, with $\\alpha > \\kappa$ \nor zero external force, gives superdiffusion, \nand hence does not describe SFD. \nFor non-zero external force with $\\alpha < \\kappa$, \nthe short time limit leads to normal diffusion if $\\kappa = \\alpha + 1\/2$, \nwhich is the same condition as for the third case in (\\ref{eq:withExForce_017a}) and (\\ref{eq:withExForce_017b}). \nIn other words, we always obtain normal diffusion in the short time limit for $\\kappa = \\alpha + 1\/2$.\n\n\\vspace{0.5cm}\n\\noindent\n{(ii). 
Long time limit}\n\nFor $t \\gg 1$, we have from (\\ref{eq:withExForce_013})\n\\begin{align}\n \\bar{x} - x_\\circ & \\underset{t \\to \\infty}{\\sim}\n v_\\circ\\left[\\frac{t^{\\gamma - \\alpha}}{\\lambda_2\\Gamma(\\gamma - \\alpha +1)}\\right]\n + a\\left[\\frac{t^{\\gamma - \\kappa}}{\\lambda_2\\Gamma(\\gamma - \\kappa +1)}\\right].\n\\label{eq:withExForce_018}\n\\end{align}\nWhen $\\kappa > \\alpha$, (\\ref{eq:withExForce_018}) becomes\n\\begin{align}\n \\bar{x} - x_\\circ & \\underset{t \\to \\infty}{\\sim}\n v_\\circ\\left[\\frac{t^{\\gamma - \\alpha}}{\\lambda_2\\Gamma(\\gamma - \\alpha +1)}\\right].\n\\label{eq:withExForce_019}\n\\end{align}\nFrom the variance relation (\\ref{eq:withExForce_015}) one gets\n\\begin{align}\n \\sigma^2 & \\underset{t \\to \\infty}{\\sim}\n 2k_B T \\left\\{\n \\frac{t^\\gamma}{\\lambda_2\\Gamma(\\gamma+1)}\n \\right. \\nonumber \\\\\n & \\qquad\\qquad - \\left.\n \\frac{t^{2\\gamma - \\alpha -1}}{\\lambda_2^2(2\\gamma - \\alpha -1)\\Gamma(\\gamma)\\Gamma(\\gamma - \\alpha)}\n \\right\\} \\nonumber\\\\\n & \\underset{t \\to \\infty}{\\sim} 2k_B T \\frac{t^\\gamma}{\\lambda_2\\Gamma(\\gamma+1)},\n\\label{eq:withExForce_020}\n\\end{align}\nsince $\\gamma < \\alpha +1$.\nUsing (\\ref{eq:withoutExForce_010}) for the MSD together with (\\ref{eq:withExForce_019})\nand (\\ref{eq:withExForce_020}), \nwe obtain the following three cases for the long time limit of the MSD:\n\n\\begin{subequations}\n\\label{eq:withExForce_021}\n\\noindent\nIf $\\gamma < 2\\alpha$,\n\\begin{align}\n R^2 & \\underset{t \\to \\infty}{\\sim} 2k_B T \\frac{t^\\gamma}{\\lambda_2\\Gamma(\\gamma+1)}.\n\\label{eq:withExForce_021a}\n\\end{align}\nIf $\\gamma = 2\\alpha$,\n\\begin{align}\n R^2 & \\underset{t \\to \\infty}{\\sim} \\left[\n \\frac{2k_B T}{\\lambda_2\\Gamma(\\gamma+1)}\n + \\frac{v_\\circ^2}{\\lambda_2^2\\Gamma^2\\left(\\frac{1}{2}\\gamma + 1\\right)}\n \\right]t^\\gamma.\n\\label{eq:withExForce_021b}\n\\end{align}\nIf $\\gamma > 2\\alpha$, then we 
have\n\\begin{align}\n R^2 & \\underset{t \\to \\infty}{\\sim} v_\\circ^2\\left[\n \\frac{t^{2(\\gamma-\\alpha)}}{\\lambda_2^2\\Gamma^2\\left(\\gamma - \\alpha + 1\\right)}\n \\right].\n\\label{eq:withExForce_021c}\n\\end{align}\n\\end{subequations}\nWe remark that the initial velocity can be determined by the equipartition principle of \nkinetic energy, $v_\\circ^2 = k_BT$. \nOne may therefore assume that the coefficient $a$ for the external force is proportional \nto $\\sqrt{k_BT}$ and takes the form $a = a_1\\sqrt{k_BT}$, where $a_1$ is a positive constant. \nNote that the fractional order $\\alpha$ of the Langevin equation \ndoes not appear in the long time limits of the MSD for the position process \nin the case of $\\gamma < 2\\alpha$ and the case $\\gamma = 2\\alpha$, just like for\nCase 1 and Case 2 (without external force) discussed in Section \\ref{sec:withoutExForce}. \nHowever, when $\\gamma > 2\\alpha$, \nthe long time limit of the MSD $R^2$ given by (\\ref{eq:withExForce_021c}) is dependent on $\\alpha$. \nOn the other hand, the exponent $\\kappa$ of the external force is absent in (\\ref{eq:withExForce_021}).\n\nHere we would like to take note of the importance of initial conditions on SFD, \nwhich has been pointed out in a recent work\n\\cite{BarkaiSilbey09}.\nWe recall that according to equations \n(\\ref{eq:withExForce_016}) and (\\ref{eq:withExForce_017}), \nthe short time limits for the variance and MSD have different time dependence. \nIf now we suppose the initial condition is $x(0)=0$ instead of $x(0) = x_\\circ$, \nand we assume the process is centred, that is $\\bar{x} = 0$, then we have $R^2=\\sigma^2$. \nNow the short time limit of the MSD has the same time dependence as the variance. \nFor the long time limits, both the variance and MSD \nhave the same time dependence except for \n$\\gamma > 2\\alpha$, \nand they are independent of $x(0)$. 
\nA similar observation has been made in reference \n\\cite{LimTeo09}.\n\nThe above discussion shows that \nit is possible to obtain the correct short and long time behaviors of SFD using the generalized fractional Langevin\nequations (\\ref{eq:withoutExForce_001}) and (\\ref{eq:withoutExForce_002}) under certain specific conditions.\nIn the following we want to consider some limiting and special cases.\n\n\\vspace{0.5cm}\n\\noindent{\\bf Case 3a.}\nOverdamped case:\n\n\\noindent\nConsider the overdamped case such that the Newton acceleration or ballistic term \n(for $\\alpha = 1$) \nor the fractional acceleration term \n(for $\\alpha \\neq 1$) \nis neglected. \nNow the second term in the expression for the variance (\\ref{eq:withExForce_015}) will be absent, \nand the mean is simply\n\\begin{align}\n \\left<x(t)\\right> &= x_\\circ,\n\\label{eq:withExForce_022}\n\\end{align}\nsuch that\n\\begin{align}\n R^2 = \\sigma^2 = 2k_BT \\int_0^t du G(u).\n \\label{eq:withExForce_023}\n\\end{align}\n\nLet us calculate the overdamped case in more detail. 
\nNow the Laplace transform of $G(t)$ is\n\\begin{align}\n \\tilde{G}(s) &= \\frac{1}{\\lambda_1 s + \\lambda_2 s^\\gamma}\n = \\zeta \\frac{s^{(1-\\gamma)-1}}{s^{1-\\gamma} + \\lambda},\n \\label{eq:withExForce_024}\n\\end{align}\nwhere $\\zeta = 1\/\\lambda_1$ and $\\lambda = \\lambda_2\/\\lambda_1$, thus\n\\begin{align}\n G(t) & = \\zeta E_{1-\\gamma,1}\\left(-\\lambda t^{1-\\gamma}\\right).\n \\label{eq:withExForce_025}\n\\end{align}\nOne now gets\n\\begin{align}\n R^2 = \\sigma^2 & = 2k_BT\\zeta \\int_0^t du E_{1-\\gamma,1}\\left(-\\lambda u^{1-\\gamma}\\right) \\nonumber \\\\\n & = 2k_BT\\zeta t E_{1-\\gamma,2}\\left(-\\lambda t^{1-\\gamma}\\right).\n \\label{eq:withExForce_026}\n\\end{align}\nThe asymptotic behaviors of the MSD are given by\n\\begin{align}\n R^2 & \\sim\n \\begin{cases}\n 2k_BT\\zeta t, & t \\rightarrow 0 \\\\\n 2k_BT\\zeta \\frac{t^\\gamma}{\\lambda\\Gamma(1+\\gamma)} & t \\rightarrow \\infty\n \\end{cases}.\n \\label{eq:withExForce_027}\n\\end{align}\nTherefore for $\\gamma = 1\/2$, one gets the correct subdiffusion behavior corresponding to SFD.\n\n\\vspace{0.5cm}\n\\noindent{\\bf Case 3b.}\n$0 < \\alpha <1$,\n$\\gamma(t) = \\lambda_2 \\frac{t^{-\\gamma}}{\\Gamma(1-\\gamma)}$\nand\n$a \\neq 0$.\n\n\\noindent\nThis is the case of (\\ref{eq:withoutExForce_001})\nwith external force and a single-term memory kernel \n(with $\\lambda_1 = 0$).\nWe have \n$\nG(t) = G_\\circ(t) = t^\\alpha E_{\\alpha -\\gamma +1, \\alpha + 1}\n\\left(-\\lambda_2 t^{\\alpha - \\gamma +1}\\right)\n$\nfrom (\\ref{eq:withExForce_005a}) as the Green function. \nUsing (\\ref{eq:appendixA_002}) and (\\ref{eq:appendixA_003}) in \\ref{sec:appendixA}, \ntogether with the asymptotic formulas of the Mittag-Leffler function (\\ref{eq:withExForce_008}) and (\\ref{eq:withExForce_009}), \none gets from (\\ref{eq:withExForce_015}) the asymptotic properties of the variance. 
\nFor $t \\rightarrow \\infty$,\n\\begin{align}\n \\sigma^2 & \\sim 2k_BT\\left[\n \\frac{t^\\gamma}{\\lambda_2\\Gamma(\\gamma+1)}\n -\\frac{t^{2\\gamma - 1 -\\alpha}}\n {\\lambda_2^2(2\\gamma -1 -\\alpha)\n \\Gamma(\\gamma)\\Gamma(\\gamma-\\alpha)}\n \\right] \\nonumber \\\\\n & \\sim 2k_BT\\frac{t^\\gamma}{\\lambda_2\\Gamma(\\gamma+1)},\n \\label{eq:withExForce_028}\n\\end{align}\nwhere we use the fact that $\\gamma < 1 + \\alpha$. \nOn the other hand, for $t \\rightarrow 0$,\n\\begin{align}\n \\sigma^2 & \\sim 2k_BT\\left[\\lambda_2\n \\frac{t^{2\\alpha - \\gamma +2}}\n {(2\\alpha - \\gamma +2)\n \\Gamma(\\alpha+1)\\Gamma(\\alpha - \\gamma+2)}\n \\right].\n \\label{eq:withExForce_029}\n\\end{align}\n\nNow consider the mean\n\\begin{subequations}\n \\label{eq:withExForce_030}\n\\begin{align}\n \\bar{x} - x_\\circ &= v_\\circ tE_{\\alpha - \\gamma +1,2}\\left(-\\lambda_2t^{\\alpha - \\gamma +1}\\right) \n\\nonumber \n\\\\ \n & \\quad + at^{\\alpha-\\kappa+1}E_{\\alpha - \\gamma +1,\\alpha - \\kappa+2}\\left(-\\lambda_2t^{\\alpha - \\gamma +1}\\right) \n \\label{eq:withExForce_030a}\n\\\\\n & \\underset{t \\to \\infty}{\\sim} v_\\circ \\frac{t^{\\gamma -\\alpha}}{\\lambda_2\\Gamma(\\gamma - \\alpha +1)}\n + a \\frac{t^{\\gamma - \\kappa}}{\\lambda_2\\Gamma(\\gamma - \\kappa +1)},\n \\label{eq:withExForce_030b}\n\\end{align}\n\\end{subequations}\nwhich decays to zero for \n$\\alpha > \\gamma$ and $\\kappa > \\gamma$ as $t \\rightarrow \\infty$. 
\nThe MSD behaves asymptotically in the same way as the variance, i.e.\n\\begin{align}\n R^2 & \\sim 2k_BT \\frac{t^\\gamma}{\\lambda_2\\Gamma(\\gamma + 1)}, \n \\quad t \\rightarrow \\infty.\n \\label{eq:withExForce_031}\n\\end{align}\n\n\nOne gets from (\\ref{eq:withExForce_030a}) the small time limit\n\\begin{align}\n \\bar{x} - x_\\circ & \\sim v_\\circ t + a \\frac{t^{\\alpha - \\kappa +1}}{\\Gamma(\\alpha - \\kappa +2)},\n \\quad t\\rightarrow 0.\n \\label{eq:withExForce_032}\n\\end{align}\nSince $0 < \\alpha \\leq 1$ and $0 < \\kappa \\leq 1$, if $\\alpha < \\kappa$, we have\n\\begin{align}\n \\bar{x} - x_\\circ & \\sim a \\frac{t^{\\alpha - \\kappa +1}}{\\Gamma(\\alpha - \\kappa +2)},\n \\quad t\\rightarrow 0.\n \\label{eq:withExForce_033}\n\\end{align}\nWe can now conclude that the MSD takes the form\n\\begin{align}\n R^2 & \\sim a^2 \\frac{t^{2\\alpha - 2\\kappa +2}}{\\Gamma^2(\\alpha - \\kappa +2)}\n\\nonumber \\\\\n & \\quad + 2k_BT\\left[\\lambda_2\n \\frac{t^{2\\alpha - \\gamma +2}}\n {(2\\alpha - \\gamma +2)\n \\Gamma(\\alpha+1)\\Gamma(\\alpha - \\gamma+2)}\n \\right].\n \\label{eq:withExForce_034}\n\\end{align}\nThe case $\\alpha = 1\/2$, $\\gamma = 1\/2$, $\\kappa = 1$ gives the MSD as \n\\begin{align}\n R^2 & \\sim \n \\begin{cases}\n a^2 \\frac{t}{\\Gamma^2(3\/2)} & t \\rightarrow 0 \\\\\n 2k_BT\\frac{t^{1\/2}}{\\lambda_2\\Gamma(3\/2)} & t \\rightarrow \\infty\n \\end{cases},\n \\label{eq:withExForce_035}\n\\end{align}\nwhich satisfies the behavior of SFD.\n\n\nAs a manifestation of the non-passing many-particle system, a realistic physical model of SFD should include the long-range inter-particle correlation, and hence the particle density term.\nHowever, in our model the correlation function does not have a closed analytic form; we do not pursue it, since SFD can be characterized by its MSD, which can be calculated more easily.\nThe particle density and other many-body effects have been absorbed into the memory kernel term and the random force,\nand 
it can be linked to the variance and MSD via the diffusion coefficient \n$D$ and SFD mobility $F$ (see for example \n\\cite{TaloniLomholt08}). \nWe shall identify in the next section the constants \nin the memory kernel with the diffusion coefficient and SFD mobility. \nThe long-range correlation property of SFD in our model is manifested in the realization of the single-file subdiffusion process as fractional Brownian motion with \nHurst index $H = 1\/4$. \nThe long-range dependence for such a realization can be verified by using the argument given in \n\\cite{LimMuniandy03}.\n\nWe remark that the mathematical modeling of SFD based on its MSD will not be unique. \nRecall that a Gaussian random process is determined by its mean and covariance function. \nTwo different Gaussian processes can have the same MSD or variance \nbut with different covariance functions. \nThus, any modeling based on fractional Langevin equations that gives the correct short and long time limits of \nthe MSD of SFD can at best be regarded as one of the possible models. \nIn order to have a more realistic and concrete model of SFD, \nit is necessary to take into account the physical interactions and the interplay between the boundary conditions and the particle motion.\n\n\n\n\\section{Fractional Generalized Langevin Equation without External Force}\n\\label{sec:withoutExForce}\n\\noindent\nIn this section we want to study whether it is possible to model SFD using the fractional generalized Langevin equation without external force. \nFor the purpose of subsequent discussion, \nlet us first consider the following fractional generalized Langevin equation with external force: \n\\begin{align}\n D^\\alpha v(t) + \\int_0^t \\gamma(t-\\tau)v(\\tau)d\\tau & = f(t) + \\xi(t),\n\\label{eq:withoutExForce_001}\n \\\\\nv(t) & = Dx(t),\n\\label{eq:withoutExForce_002}\n\\end{align}\nwhere $0 < \\alpha \\leq 1$. 
\n$\\xi(t)$ is an internal Gaussian noise with mean zero and covariance\n\\begin{align}\n \\left<\\xi(t)\\xi(s)\\right> & = C\\!\\left(\\left|t-s\\right|\\right),\n\\label{eq:withoutExForce_003}\n\\end{align} \n$f(t)$ \nis a time-dependent external force given by\n\\begin{align}\n f(t) & = a \\frac{t^{-\\kappa}}{\\Gamma(1-\\kappa)}, \n \\quad\n 0 < \\kappa \\leq 1,\n\\label{eq:withoutExForce_004}\n\\end{align}\nand\n$\\gamma(t)$\nis the memory kernel, which will be specified later on. \nThe fractional derivative used in \n(\\ref{eq:withoutExForce_001})\nis the Caputo fractional derivative, which is defined by \n\\cite{Podlubny99,MetzlerKlafter00,West03}\n\\begin{subequations}\n\\label{eq:withoutExForce_005}\n\\begin{align}\n D^\\alpha g(t) &= I^{m-\\alpha}D^m g(t),\n\\label{eq:withoutExForce_005a}\n\\end{align}\nwhere the fractional integral\n$I^\\alpha$\nis defined for\n$\\alpha > 0$\nas\n\\begin{align}\n I^\\alpha g(t) & = \\frac{1}{\\Gamma(\\alpha)}\\int_0^t (t - u)^{\\alpha-1} g(u) du ,\n\\label{eq:withoutExForce_005b}\n\\end{align}\n\\end{subequations}\nwith \n$m-1 < \\alpha \\leq m$, \nwhere $m$ is a positive integer.\n\nNote that the generalized Langevin equation \n(\\ref{eq:withoutExForce_001})\nwith \n$f(t)=0$\nhas been studied by several authors \n\\cite{Lutz01,Fa06,Fa07,LimTeo09}. 
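As an aside (our own numerical illustration; the function and variable names are ours), the fractional integral (\\ref{eq:withoutExForce_005b}) can be evaluated by quadrature once the substitution $w = (t-u)^\\alpha$ removes the integrable endpoint singularity; the elementary identity $I^\\alpha t^\\beta = \\Gamma(\\beta+1) t^{\\alpha+\\beta} \\Gamma(\\alpha+\\beta+1)^{-1}$ serves as a check:

```python
import math

def frac_integral(g, t, alpha, n=20000):
    # Riemann-Liouville fractional integral:
    #   I^alpha g(t) = 1/Gamma(alpha) * int_0^t (t-u)^(alpha-1) g(u) du.
    # Substituting w = (t-u)^alpha removes the endpoint singularity:
    #   I^alpha g(t) = 1/(alpha*Gamma(alpha)) * int_0^{t^alpha} g(t - w^(1/alpha)) dw,
    # which is then evaluated by the midpoint rule on a smooth integrand.
    W = t**alpha
    h = W / n
    s = sum(g(t - ((k + 0.5) * h)**(1.0 / alpha)) for k in range(n))
    return s * h / (alpha * math.gamma(alpha))

# Check against I^alpha t^beta = Gamma(beta+1)/Gamma(alpha+beta+1) * t^(alpha+beta)
alpha, beta, t = 0.5, 1.0, 2.0
exact = math.gamma(beta + 1) / math.gamma(alpha + beta + 1) * t**(alpha + beta)
approx = frac_integral(lambda u: u**beta, t, alpha)
assert abs(approx - exact) / exact < 1e-4
```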
\nThe solution for the position process from \n(\\ref{eq:withoutExForce_001}) and (\\ref{eq:withoutExForce_002})\nwith initial conditions \n$x(0)=x_\\circ$,\n$v(0)=v_\\circ$\nis given by \n\\begin{align}\n x(t) & = x_\\circ + v_\\circ I^{1-\\alpha} G(t) + a I^{1-\\kappa} G(t)\n + \\int_0^t G(t-u) \\xi(u) du,\n\\label{eq:withoutExForce_006}\n\\end{align}\nwhere \n$G(t)$\nis given by the inverse Laplace transform of\n\\begin{align}\n \\tilde{G}(s) &= \\frac{1}{s^{\\alpha+1}+\\tilde{\\gamma}(s)s}.\n\\label{eq:withoutExForce_007}\n\\end{align}\nThe mean of $x(t)$ is given by\n\\begin{align}\n \\bar{x} = \\left<x(t)\\right> \\nonumber \\\\\n & = x_\\circ + \\frac{v_\\circ}{\\Gamma(1-\\alpha)}\\int_0^t (t-u)^{-\\alpha} G(u) du \\nonumber \\\\\n & \\quad + \\frac{a}{\\Gamma(1-\\kappa)} \\int_0^t (t-u)^{-\\kappa} G(u) du,\n\\label{eq:withoutExForce_008}\n\\end{align}\nand its variance is \n\\begin{align}\n \\sigma^2 & = \\left<\\left(x(t) - \\bar{x}\\right)^2\\right> \\nonumber \\\\\n & = 2\\int_0^t du G(u) \\int_0^u dv C(u-v)G(v).\n\\label{eq:withoutExForce_009}\n\\end{align}\nThe MSD of $x(t)$ in terms of its mean and variance is\n\\begin{align}\n R^2 & = \\left<\\left(x(t) - x_\\circ\\right)^2\\right>\n = \\left(\\bar{x}-x_\\circ\\right)^2 + \\sigma^2.\n\\label{eq:withoutExForce_010}\n\\end{align}\n\nHere we shall consider two cases without external force, \nthat is with the constant\n$a=0$; \nthe case with external force will be studied in the next section. 
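Since the Green functions used throughout are expressed through Mittag-Leffler functions, a direct way to evaluate them numerically is to truncate the defining series (cf. (\\ref{eq:withExForce_004})). The sketch below (our own illustration, not part of the original analysis) validates the truncation against the elementary special cases $E_{1,1}(z) = e^z$ and $E_{1,2}(z) = (e^z - 1)/z$:

```python
import math

def mittag_leffler(alpha, beta, z, n_terms=100):
    # Truncated series E_{alpha,beta}(z) = sum_{n>=0} z^n / Gamma(alpha*n + beta).
    # Adequate for moderate |z|; the asymptotic expansion should be used for large |z|.
    return sum(z**n / math.gamma(alpha * n + beta) for n in range(n_terms))

z = 0.5
# E_{1,1}(z) = exp(z)
assert abs(mittag_leffler(1, 1, z) - math.exp(z)) < 1e-12
# E_{1,2}(z) = (exp(z) - 1)/z
assert abs(mittag_leffler(1, 2, z) - (math.exp(z) - 1) / z) < 1e-12
```

For large arguments the series converges slowly and suffers cancellation, which is why the asymptotic expansions quoted earlier are used for the long time limits.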
\nThe generalized Langevin equation\n(\\ref{eq:withoutExForce_001})\nincludes the following cases:\n\n\\vspace{0.5cm}\n\\noindent{\\bf Case 1.}\n$0<\\alpha<1$,\n$\\gamma(t) = \\lambda_1\\delta(t)$,\n$\\lambda_1 > 0$,\n$C(t) = c\\delta(t)$,\nand\n$c > 0$\n\n\\noindent\n(\\ref{eq:withoutExForce_001})\nbecomes the fractional Langevin equation and its solution has been considered by several authors \n\\cite{KobolevRomanov00,LimMuniandy02,WestPicozzi02,LimEab06,LimLiTeo08}.\nThe variance and MSD of the position process at long times behave like normal diffusion, \nthat is\n$\\sigma^2 \\sim t$ and $R^2 \\sim t$ as $t \\rightarrow \\infty$.\nOn the other hand, we have\n$\\sigma^2 \\sim t^{2\\alpha+1}$, \n$R^2 \\sim t^2$ as $t \\rightarrow 0$.\nNote that if\n(\\ref{eq:withoutExForce_002})\nis replaced by the fractional velocity \n$v(t)=D^\\beta x(t)$, $0 < \\beta < 1$, then for $x_\\circ = 0$ one has\n$R^2 \\sim t^{2\\beta-1}$\n\\cite{LimTeo09}.\nThis leads to SFD subdiffusion when\n$\\beta = 3\/4$. \nFor $t \\rightarrow 0$, \none obtains\n$R^2 \\sim t^{2(\\alpha+\\beta)-1}$, \nwhich gives normal diffusion if\n$\\beta = 1 - \\alpha$, and ballistic motion if\n$\\alpha = \\beta = 3\/4$\n\\cite{LimTeo09}.\n\n\\vspace{0.5cm}\n\\noindent{\\bf Case 2.}\n$0<\\alpha \\leq 1$,\n$\\gamma(t) = \\gamma_2(t)=\\lambda_2\\frac{t^{-\\gamma}}{\\Gamma(1-\\gamma)}$,\nand\n$C(t) = c_\\zeta t^{-\\zeta}$,\n$c_\\zeta > 0$, \n$0 < \\zeta \\leq 1$\n\n\\noindent\nThis is the case of the fractional generalized Langevin equation without external force and with a power-law type of memory kernel \n\\cite{Lutz01,Fa06,Fa07,LimTeo09}.\nThe MSD corresponding to this case satisfies the following asymptotic properties. 
\nAs $t \\rightarrow 0$, \none gets\n\\begin{align}\nR^2 \\sim \n\\begin{cases}\n t^2, & \\text{if} \\ \\alpha \\geq \\zeta\/2 \\\\\n t^{2+2\\alpha-\\zeta}, & \\text{if} \\ \\alpha < \\zeta\/2\n\\end{cases}.\n\\label{eq:withoutExForce_011}\n\\end{align}\nIn the case when \n$\\alpha \\geq \\zeta\/2$, \nthe particle undergoes ballistic motion for very short times. \nHowever, \nwhen $\\alpha < \\zeta\/2$, \nnormal diffusion with \n$R^2 \\sim t$ \nis possible only if\n$\\zeta = 1+2\\alpha > 1$, \nwhich contradicts our assumption that\n$\\zeta \\leq 1$.\nFor $t \\rightarrow \\infty$,\n\\begin{align}\n R^2 \\sim \n \\begin{cases}\n t^{2\\gamma-\\zeta}, & \\text{if} \\ \\gamma > \\zeta\/2 \\\\\n \\log{t}, & \\text{if} \\ \\gamma = \\zeta\/2 \\\\\n \\text{constant}, & \\text{if} \\ \\gamma < \\zeta\/2\n \\end{cases},\n\\label{eq:withoutExForce_012}\n\\end{align}\nwhich gives subdiffusion with $R^2 \\sim \\sqrt{t}$ when $\\gamma = (2\\zeta+1)\/4$.\nHere we note that the large time asymptotic behavior of the MSD is independent of the fractional order $\\alpha$\nof the generalized fractional Langevin equation. 
\nFrom (\\ref{eq:withoutExForce_011}) one notices that the particle does not diffuse normally for short times, \nthough it can undergo subdiffusion for $\\gamma > \\zeta\/2$ after sufficiently long time.\nHowever, if the particle is assumed to undergo ballistic motion at very short times, \nthen the generalized Langevin equation in this case gives the correct short and long time limits of the MSD for SFD.","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzztzbi b/data_all_eng_slimpj/shuffled/split2/finalzztzbi new file mode 100644 index 0000000000000000000000000000000000000000..cad7e6ebde3ba5fd5a6e58cb6ee9d5fe2d819f3b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzztzbi @@ -0,0 +1,5 @@ +{"text":"2008-09-05: James went tromping in the woods of a neighboring graticule.\nby being the first to reach any hashpoint in the (30, -83) graticule, here, on 2008-09-05.\n2008-09-12: James carpooled with Zingor for more stomping through woods and stickers in an adjacent graticule, after enduring the gas panic.\nThis page was last modified on 7 March 2012, at 21:18.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Back to school can be a tough time for high school sweethearts, especially when one person is off to college.\nHowever, before he left for his first week of university life, a man named Jasper made his girlfriend Zara an adorable, pop-up photo album of their beautiful memories (so far, at least).\nFull of intricately made, hidden compartments and photo collages of their favorite Instagrams and Snapchats, it's enough to make anyone weep with joy.\nGet ready for permanent heart eyes.\nBONUS: You're invited to shake your butts at corgi prom!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Perfect 10 PR is run by Louise Duffield, who has worked in the media industry since 1987.\nAs a qualified journalist, who has written for both regional and national newspapers and magazines, Louise knows 
how the media works and what makes a good story.\nUsing PR expertise gained over many years, we achieve exposure for clients online, on TV, in national, regional and local magazines and newspapers, on radio and in the trade press\u2026.helping to raise profiles and shape reputations.\nWe pride ourselves on having excellent contacts, particularly in the East Midlands regional and business media, and in the national food and construction trade press.\nWe are communications experts, who make words count\u2026whether they are on social media, in newsletters or media releases.\nAlways happy to share our expertise and advice, we also provide media training.\nPerfect 10 PR is based in Nottingham and has clients in Nottinghamshire, Derbyshire and Northamptonshire, as well as elsewhere in the UK and also in France and Sweden.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Arab world has long been subjected to super-power rivalry for influence and control. The area has been characterized by bloody conflict with Israel and the internal instability that has been particularly prevalent in the last few years. Whilst these political struggles have been highly visible and at times spectacular over the decades, other transformations have taken place within the societies and peoples of the region, on a less pronounced \u2013 although just as profound \u2013 scale. The integration of the region into the world economy and the spread of Islamic revivalism are perhaps the most significant of these transformations. This volume, inspired by a lecture series on the Arab world in transition at the American University, Washington D.C., was first published in 1985. It discusses a wide range of issues, from economic to religious, which together form an in-depth analysis of the complex processes of transformation in Arab society. 
This is a fascinating work that holds the same interest and value to scholars and students of Middle Eastern history, politics and domestic affairs, as it did when it was first published.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"World Environment Day 2015 was celebrated on June 5 with the UN tagline: \u201cSeven billion dreams. One Planet. Consume with care\u201d. The theme echoed concerns about the unsustainable consumption patterns that dominate our planet.\nRapid urbanization and industrialization, demographic and economic growth, make sustainable consumption and production (SCP) remarkably relevant in the context of the Asia-Pacific region including Indonesia.\nThe UN defines SCP as a holistic approach to minimize the negative environmental impact from consumption and production systems while promoting quality of life for all.\nThe UN Environment Program (UNEP) suggests four key SCP principles for analysis and policy action.\nThese principles also include decoupling economic growth from environmental degradation. Decoupling refers to the ability of an economy to grow without corresponding increases in environmental pressure.\nThus, an economy that is able to sustain gross domestic product (GDP) growth without having a negative impact on environmental conditions is said to be decoupled.\nIn the context of Asia and particularly Indonesia, changing the growth model could become a driver for a new industrial system, which does not replicate the resource intense development of industrialized countries.\nWe should leapfrog them by skipping polluting technologies and move directly to cleaner and more advanced systems, as explained by the UN Institute for Training and Research (UNITAR).\nSustainable practices create jobs. 
The private sector plays a pivotal role in shifting society toward SCP.\nWhile consumers typically have limited knowledge of the full life-cycles of the products they buy, producers are in a much better position to apply a life-cycle perspective to their operations and supply chains and initiate improvements.\nHowever, industrial sectors must avoid \u201cgreen-washing\u201d or the over-spending of financial resources on \u201cgreen-branding\u201d, rather than the actual work of minimizing environmental impacts.\nUltimately, consumers also have the sovereignty to make an informed choice on sustainable alternatives. Beyond eco-labeling, consumers should put on their \u201cenvironmental thinking caps\u201d prior to determining their choices.\nSimilarly, in developed countries, electric cars were touted as \u201csustainable\u201d. But how sustainable is it if the electric cars are using electricity produced in coal power plants as the main source? It would of course be more sustainable if the source of electricity used 100 percent renewable energy sources.\nOrganic food is now becoming a household name as a more sustainable alternative. Although it is pesticide-free, chemical-free and was labeled as such, consumers must also consider from where the organic foods originated.\nAgain, it is an issue of locality. When living in Japan, I lived in an apartment in front of an organic agricultural field.\nThe French thinker Ren\u00e9 Descartes philosophically proposed: Cogito ergo sum (I think, therefore I am). Consumers have the power. 
Think before we consume.\nThe writer is an environmental consultant who served as a technical expert at the GIZ company and provided input for the Community Renewable Energy Grant of the Millennium Challenge Account.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzuunf b/data_all_eng_slimpj/shuffled/split2/finalzzuunf new file mode 100644 index 0000000000000000000000000000000000000000..2435ba5a257fcaee1fd6dbdf96d6d0513d2deb36 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzuunf @@ -0,0 +1,5 @@ +{"text":"Vagabond is in the introductory blurb for the print edition but not in the Top 50 listings. Sun & Moon closed ages ago, HMRC closed them down.\nVagabond isn't in the top ten in the print edition, but it is in the top 50 online. Pleasingly, stroud green beat crouch end on this one - their only entry is Harris and Hoole.\nOh dear, that will upset them! Suggests a bit of lazy journalism, if they had wandered about a bit they would have found at least one coffee place nicer than the Tesco one.\nThere isn't much detail on the online edition<\/a>, but not only did Vagabond make it into Timeout's finest coffee shops, it was also marked up as the Staff Pick<\/a> for North London. Well earned in my opinion.\nHa ha. I knew that bringing up hipsters would get people going. Anyone that says \"if you know what a hipster is then you are one\" is a hipster :-). Hmm. Ironic clothes......im.going to say its making every effort possible to look like a bell end because looking like a bell end is cool. Eg. Tying your shoe laces around your legs whilst wearing shorts. Seen it, there is no other explanation other than pre meditated first degree hipsterness. Also seen paper sunglasses.\nBack in 2006 when we set this site up, (yea we set up an online community website in 2006, what were you doing?) I don't think we were hipsters. 
Now we're just old.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Cheshire Parker Schneider & Bryan attorney Ashley L. Oldham was recently recognized as a Board Certified Specialist in Family Law by the North Carolina State Bar . Ms. Oldham concentrates her legal practice in the areas of property division, custody, support and all aspects of military divorce. Her regular practice includes practice in Wake, Cumberland, Harnett, Moore, Chatham, Orange, Durham and Johnston County.\nMs. Oldham is also a member of the Family Law Section of the North Carolina Bar Association and has taught various continuing legal education courses for the North Carolina Bar Association, The Judge Advocate General's Legal Center and School in Charlottesville, Virginia, and the American Bar Association. The North Carolina State Bar, an agency of the State of North Carolina, certifies lawyers as specialists in designated practice areas as a service to the public. The program assists members of the public in the selection of legal counsel by identifying lawyers who have demonstrated special knowledge, skill, and proficiency in certain areas of law. The program also gives lawyers a credible way of making their expertise known to the public and other lawyers.\nAttorney Brentley Tanner joins CPSB Family Law section as a Partner.\nWe are excited to announce that Brentley Tanner is joining the Cheshire Parker Family Law section as a partner. He adds an expertise in military law and brings with him two fabulous associates, Ashley Oldham and Kaitlin Kober, and paralegals, Amber Caling and Eliza Lynch. We are thrilled to expand our practice statewide. As part of our growing practice, we have also opened a satellite office in Holly Springs.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"LG Ultra HD TVs are 4 times the resolution of Full HD for an ultra clear picture, even on large screens. 
Access premium content providers like Amazon Instant Video, Hulu Plus, Netflix and YouTube direct from your TV with the fun and easy to use Magic Remote. LG's Cinema 3D technology uses lightweight, battery free glasses to deliver an immersive 3D experience for movies, TV shows and gaming, all with amazing picture quality.\nThis technological triumph in cinema provides marvelous cutting-edge 3D technology similar to the theaters, both in quality and scale. With an enhanced resolution roughly 4X the pixel count of Full HD, it's nearly impossible to discern a single pixel \u2013 even from inches away. Watch everything bigger, richer and in more detail than ever in stunning 4K, the next generation of TV resolution. You can be among the first to bring this new spectacle of LG ULTRA HD TV* picture quality and sound to your home theater.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"After a brutal flu season last winter that saw 80,000 deaths in the U.S., Redlands Community Hospital is issuing temporary visiting guidelines to slow its spread this year.\nChildren ages 14 and under are discouraged from visiting the hospital, unless they are seeking medical treatment. If they do visit they must wear a mask, must be accompanied by an adult, and must go directly to the room of the patient, then directly to the exit.\nAll visitors should wash hands or use hand sanitizer before and after visiting; avoid touching eyes, nose and mouth; and avoid close contact with those who are sick. Those who are sick are encouraged to stay home unless seeking treatment, to cover their nose and mouth with a tissue when they cough or sneeze; and to stay home at least 24 hours after the fever is gone. 
Everyone is encouraged to get a flu vaccine.\nSerious symptoms that merit a trip to the ER include fever above 103 degrees, trouble breathing, shortness of breath, severe headache, stiff neck, and confusion or trouble staying awake.\nInformation: redlandshospital.org or call: 909-335-5500.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Work is underway on a 1km-long cycle track on the site of a lake next to Doncaster Dome.\nBuilders have already drained the lake next to the leisure centre, and diggers are now well on with earthworks at the site.\nThe building site is fenced off and yellow construction vehicles can be seen on the site, which now looks like mud.\nWhen it is completed, the 1km-long circuit is expected to attract more than 40,000 cyclists a year and experts see it as a way of getting more people more active.\nA report to planners when planning permission for the scheme was put to the council said the track will cater for the existing and growing numbers of recreational cyclists, particularly providing a safe traffic free environment for young people and adults to develop their skills.\nThe track would wind around the car park and use some of the land taken up by two lakes.\nIt is understood that fish have already been relocated from the lake which was previously on the site.\nConcerns were raised before the scheme was approved about the effect building the track would have on wildlife habitats.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzuyil b/data_all_eng_slimpj/shuffled/split2/finalzzuyil new file mode 100644 index 0000000000000000000000000000000000000000..0cef099b14f2f4c3603834382c679865d31795cd --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzuyil @@ -0,0 +1,5 @@ +{"text":"Budget friendly! Cavazos End Table top design By Millwood Pines. Cavazos End Table very well made, sleek and simple. 
Complete your living room furniture with a modern Cavazos End Table. Its cute sturdy, attractivce and it looks expensive and a good value for the money. Cavazos End Table is one of the most homey, cozy, beautiful look and exotic Cavazos End Table especially for the price and made of superior products. Great quality, easy to assemble, delivery on time and in perfect condition. Cavazos End Table is good merchandise at fair prices and amazing free shipping. Guarantee damaged claim by offering to send parts or to keep the item at a discounted price. Great buy would definitely recommend. Shop with our low-price guarantee and find great deals on ##ptitle# and more!. Reading the reviews helped you purchase.\nEveryone wishes to possess a club within their house! Catching a glass or two in your own personal club is much more exciting than located on the couch. The couch doesn't give the exact same feeling! There are some individuals, who develop the bar of the dreams but they are unsure about the type of bar stools that they can purchase. If you request us, it is important to purchase the right bar stool. It should be visually appealing, helpful and comfortable. You may decide to purchase stools for your restaurant or perhaps your houses club. You need to keep few things in mind before buying the right bar stools. Today, we will cover the tips for purchasing for leather bar stools. Also, we have produced a summary of Top 10 Leather-based Barstools. This list has evaluations of every club feces which will help you to pick the perfect 1. Continue reading to find out and make the decision. Looking for the right club feces for your house? Nicely, you must go through the checklist to obtain an concept. We are sure that you will find what you are looking for within this checklist! We've covered the important thing functions to help you make a decision. The good news is that you can buy these stools on the internet! These can be delivered to your house within few days. 
So, feel the checklist and pick the right.\nDoes your living space show room requirements? Are you willing to toss a slumber celebration in the light of the absence of room to sleep? This DHP Futon could be ideal for you in this situation. This Futon mattress sofa by DHP includes the functionality of a couch bed having a modern and stylish appearance. Place it inside your living room to attain an additional sleepy mattress at night. A micro-fiber floor that took off in the middle will be a perfect combination. This amazing couch accompanies a tapestry with gleaming stainless hair and legs. Consolidating these results with each other in a comfortable popular design sofa. You can choose to get it in Faux Leather, Purple velvet or Linen. An element that is really worth indicating is its back again style. Considering the various chairs, DHP promotes the alteration of the couch for your comfort and ease levels. You can level up for an animated babble, or for silent shifting picture evening.\nWith regards to buying living room furniture, leather is always a smart choice. It doesn't only look great with many designs, but its very long lasting (its the perfect material for any home with kids or pets) and it is extremely-easy to thoroughly clean, too. The down-side of leather Epsom 3 Piece Coffee Table Set? It can have a higher price tag than material, micro-fiber or fake leather Epsom 3 Piece Coffee Table Set. This reclining loveseat stays a financial budget-friendly choice because of 1 guru technique: its seating area is padded with leather, while the sides are upholstered with more affordable fake leather-based. Which means you receive the look and feel of the complete leather loveseat without the hefty cost. Make use of the lever on the loveseats equip to kick back and relax on its higher-density froth filling, and consider all the money you saved. 
This particular loveseat comes with an added bonus: expert assembly will come in many locations of the nation for the next fee.\nThis is the next sofa on the list. It's extra comfortable and very ideal for little rooms or attic living. It is padded in rayon material, prospects inset control keys that provide a stylish gemstone-tufted design. It is made from durable supplies and also the thighs are constructed with durable wooden to add to its durability. The loveseat has an espresso stained wooden thighs and no-marking foot caps. It includes a comfortable froth cushioning and polyester material furniture which makes it very magnificent. It has a longue spot that provides an exceptional room for resting.\nPuffy, overstuffed loveseat cushions arent your lifestyle? This reclining loveseat from GDF has sharper, cleaner lines than your conventional lying loveseat, making it an elegant accessory for a middle-hundred years modern, modern, eclectic or modern type of home. And, at just 46.46 inches wide, it's very easy to match a smaller space, like apartments, dog dens or office spaces. Obtainable in several colour and fabric options, such as charcoal material, navy fabric and standing micro-fiber, its easy to find the perfect complement for your home decorations, and the sturdiness your family needs for daily use. Gone are the days of good-searching, but uncomfortable furniture you no longer have to sacrifice comfort and ease for design. This reclining loveseat features a big, soft, plush-filled seat cushion with sufficient room for two or enough space for you to disseminate and unwind.\nWith nearly 100 5-celebrity evaluations, this reclining loveseat is touted for its comfort and ease, sturdiness, appearance and ease of set up. Along with gentle, but encouraging cushions, it features a sliding mechanism, so that you can put your ft up and rock and roll (although, not simultaneously.) 
Padded with extremely-durable glued leather, this lying loveseat will avoid holes, rips and stainswhich is a great thing, because it also includes a storage space area console (where you can hide your snacks, of course) and a two-mug owner. Whether youre deciding set for a Netflix binge or viewing football together with your crew, this lying loveseat offers the durability and comfort without breaking the bank.\nPuffy, overstuffed loveseat soft cushions arent your thing? This reclining loveseat from GDF has sharper, cleaner outlines than your conventional lying loveseat, making it a stylish accessory for a middle-hundred years contemporary, contemporary, modern or modern type of home. And, at just 46.46 inches in width, it's very easy to match a smaller space, like flats, dog dens or office spaces. Available in several colour and material options, including charcoal material, navy fabric and standing micro-fiber, its easy to find the right match for your house dcor, and also the durability your family requirements for everyday use. Gone are the days of excellent-looking, but uncomfortable Stambaugh 3 Piece Coffee Table Set you will no longer have to give up comfort for design. This lying loveseat functions an oversized, gentle, luxurious-filled seat cushion with enough room for 2 or sufficient space for one to spread out and relax.\nIf the Chesterfield is the graceful man amongst sofas, the Cabriole may be the grande dame. Known for its exposed wooden and elegant thighs, similar to the Louis XV time period, the cabriole also offers a unique silhouette. It was also a popular shape within the function of furniture producer Thomas Chippendale. Typically, the back is one continuous item without soft cushions and it has a stylish bending collection. This particular cabriole edition includes gems in the tufts for additional allure. It is a couch style that can be as simple or luxurious as you wish. 
Padded with a luxury material like purple velvet yields a much different design feeling than if it had been padded in a much more muted, textural neutral. The main aura of elegance derives from its general shape and lithe thighs which means it'll always lend a processed air to some space.\nThis is actually the next couch on the checklist. It's additional comfortable and incredibly ideal for little rooms or attic residing. It is padded in polyester fabric, potential customers inset buttons that provide a stylish diamond-tufted design. It is constructed of long lasting materials and also the thighs are made of long lasting wood to increase its durability. The loveseat comes with an espresso discolored wooden legs and no-tagging foot caps. It includes an appropriate foam padding and rayon fabric upholstery which makes it very luxurious. It features a longue spot that gives an exceptional space for resting.\nThis set includes a 1 remaining arm couch set, two armless couch models, and something corner couch set. This provides sufficient room to accommodate your family and friends. The fabric is 100Percent rayon for enough durability and comfort. The good thing about this couch established may be the matching and mixing of chairs in the space in the room for a ideal shape. As well as enables fitted even in little areas. It requires just mild putting together.Additionally, it features drive soft cushions for optimum comfort and ease. You might want to try this set. It does the job nicely.\nThis is by far the most adorable and ingeniously designed loveseat. Depending in the tone, it gives a disconcerting look for your office or areas. The sofa unimaginably imperceptible for a morning espresso or an animated journal studying very carefully. It's obtainable both in leather and in consistency depending on that which you assistance a bright look, or perhaps a beautiful, lovely look at. The good thing, it is very sensible! 
The extremely stylish tufting is shaped precious stone, which holds an ideal Chesterfield design. The most adorable thing about this sofa is its form. Along with the appeal, the trunk design provides the back knowledge of rest. It is cleverly designed to ensure that smaller sized rooms combine luxury in a smaller room. It is a clever choice for the kids space, because it provides several sweet tones. They would love to hit them with the story or an incredible dream. No hassle for adults as well for a comfy night with Manga or Wonders.\nCopyright \u00a9 Cavazos End Table By Millwood Pines in Sqaure Coffee Tables All right reserved.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Looking for an Estate Agent in Poole? Emoov list and sell properties across the nation for a one off fee. Emoov is the UK's No.1 Hybrid Estate Agent in the UK and we list and sell properties nationwide.\nPoole is one of the most beautiful areas of the South coast and offers a variety of things to do from tours of Brownsea Island and the Jurassic Coast by boat, to the laying on its gold sandy beaches or trying your hand at windsurfing in Poole Bay.\nThere is a number of places to eat, drink and shop from the Dolphin Shopping Centre in Poole Quay to the Lighthouse Theatre and the trendy bars and restaurants of Ashley Cross. Poole is also adjoined to the East by the town of Bournemouth. Whether you are on holiday with your family, a student at the university or enjoying a stag or hen do there is something on offer for everyone in and around Poole.\nThe coastal town is home to Poole Grammar School and Bournemouth Universities Talbot Campus, popular for the strength of courses provided by its industry renowned Media School. The Sandbanks in Poole is one of the most sought after property locations in the UK. The beautiful beaches, geographically historic coast line and wealth of places to eat, drink and relax make Poole a popular location for property buyers and sellers. 
Particularly amongst students looking for a high level of education in an affordable location and those looking to retire to the coast.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I was in the Woodlands looking for a lunch spot and came across this.\nSure, I like cheese, why not?!\nThis is just a portion of their menu, the entire menu here. They have a half soup, half sandwich combo for lunch, that's the one I went with.\nI got the Mushroom & Brie Bisque. Obviously, it has mushroom, brie & swiss cheese, potato, onion, chives. Nice and hearty for a cold winter day.\nSince I'm a Proscuitto fan, I got the sandwich of its namesake. It has mozzarella, tomato slices, pesto, basil, balsamic vinegar. The baguette is a gluten free baguette. You can ask for that at the time of order.\nThey do have some seating for dine in. You basically order at the counter, they'll give you a number. But seating is limited, don't be surprise if you have to wait. They also have a small market area to buy cheese to go. I saw that they even have events and tasting, wouldn't mind trying them out one of these days.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We have been marked down as a noted organization engaged in manufacturing and supplying Poly Cotton Yarn Dyed Dobby Fabric. Best quality polyester and cotton yarn is used in our sound processing unit to design this fabric. Offered fabric is used for designing curtains, garments and decorative fabric items. Clients can avail this fabric from us in different colors, designs and patterns as per their requirements. 
We offer this Poly Cotton Yarn Fabric at market leading prices to clients.\n2) The color and print of the fabric does not fade after wash.\n3) It is available in different dyeing options including piece dyeing, yarn dyeing and package dyeing.\n4) Offered fabric is known for colorfastness and skin friendliness.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The creation of artificial structures with very narrow spectral features in the terahertz range has been a long-standing goal, as they can enable many important applications. Unlike in the visible and infrared, where compact dielectric resonators can readily achieve a quality factor (Q) of 106, terahertz resonators with a Q of 103 are considered heroic. Here, we describe a new approach to this challenging problem, inspired by the phenomenon of extraordinary optical transmission (EOT) in 1D structures. In the well-studied EOT problem, a complex spectrum of resonances can be observed in transmission through a mostly solid metal structure. However, these EOT resonances can hardly exhibit extremely high Q, even in a perfect structure with lossless components. In contrast, we show that the inverse structure, a periodic array of very thin metal plates separated by air gaps, can exhibit non-trivial bound states in the continuum (BICs) reflection resonances, with arbitrarily high Q, and with peak reflectivity approaching 100% even for a vanishingly small metal filling fraction. Our analytical predictions are supported by numerical simulations, and also agree well with our experimental measurements. This configuration offers a new approach to achieving ultra-narrow optical resonances in the terahertz range, as well as a new experimentally accessible configuration for studying BICs.\nC. W. Hsu, B. Zhen, J. Lee, S.-L. Chua, S. G. Johnson, J. D. Joannopoulos, and M. Solja\u010di\u0107, \"Observation of trapped light within the radiation continuum,\" Nature 499(7457), 188\u2013191 (2013).\nC. W. Hsu, B. Zhen, A. D. 
Stone, J. D. Joannopoulos, and M. Solja\u010di\u0107, \"Bound states in the continuum,\" Nat. Rev. Mater. 1(9), 16048 (2016).\nJ. von Neumann and E. Wigner, \"\u00dcber merkw\u00fcrdige diskrete Eigenwerte,\" Phys. Z. 30, 465\u2013467 (1929).\nA. Kodigala, T. Lepetit, Q. Gu, B. Bahari, Y. Fainman, and B. Kant\u00e9, \"Lasing action from photonic bound states in continuum,\" Nature 541(7636), 196\u2013199 (2017).\nB. Midya and V. V. Konotop, \"Coherent-perfect-absorber and laser for bound states in a continuum,\" Opt. Lett. 43(3), 607\u2013610 (2018).\nY. Liu, W. Zhou, and Y. Sun, \"Optical refractive index sensing based on high-Q bound states in the continuum in free-space coupled photonic crystal slabs,\" Sensors (Basel) 17(8), 1861 (2017).\nS. Romano, A. Lamberti, M. Masullo, E. Penzo, S. Cabrini, I. Rendina, and V. Mocella, \"Optical biosensors based on photonic crystals supporting bound states in the continuum,\" Materials (Basel) 11(4), 526 (2018).\nJ. M. Foley, S. M. Young, and J. D. Phillips, \"Symmetry-protected mode coupling near normal incidence for narrow-band transmission filtering in a dielectric grating,\" Phys. Rev. B 89(16), 165111 (2014).\nU. Fano, \"The theory of anomalous diffraction gratings and of quasi-stationary waves on metallic surfaces (Sommerfeld's waves),\" J. Opt. Soc. Am. 31(3), 213\u2013222 (1941).\nK. S. Reichel, P. Y. Lu, S. Backus, R. Mendis, and D. M. Mittleman, \"Extraordinary optical transmission inside a waveguide: spatial mode dependence,\" Opt. Express 24(25), 28221\u201328227 (2016).\nS. Astilean, P. Lalanne, and M. Palamaru, \"Light transmission through metallic channels much smaller than the wavelength,\" Opt. Commun. 175(4-6), 265\u2013273 (2000).\nF. Marquier, J. Greffet, S. Collin, F. Pardo, and J. Pelouard, \"Resonant transmission through a metallic film due to coupled modes,\" Opt. Express 13(1), 70\u201376 (2005).\nH. E. Went, A. P. Hibbins, J. R. Sambles, C. R. Lawrence, and A. P. 
Crick, \"Selective transmission through very deep zero-order metallic gratings at microwave frequencies,\" Appl. Phys. Lett. 77(18), 2789\u20132791 (2000).\nQ. Cao and P. Lalanne, \"Negative role of surface plasmons in the transmission of metallic gratings with very narrow slits,\" Phys. Rev. Lett. 88(5), 057403 (2002).\nF. J. Garcia-Vidal and L. Martin-Moreno, \"Transmission and focusing of light in one-dimensional periodically nanostructured metals,\" Phys. Rev. B 66(15), 155412 (2002).\nP. Lalanne, C. Sauvan, J. P. Hugonin, J. C. Rodier, and P. Chavel, \"Perturbative approach for surface plasmon effects on flat interfaces periodically corrugated by subwavelength apertures,\" Phys. Rev. B 68(12), 125404 (2003).\nJ. T. Shen and P. M. Platzman, \"Properties of a one-dimensional metallophotonic crystal,\" Phys. Rev. B 70(3), 035101 (2004).\nJ. W. Lee, M. A. Seo, D. J. Park, S. C. Jeoung, Q. H. Park, Ch. Lienau, and D. S. Kim, \"Terahertz transparency at Fabry-Perot resonances of periodic slit arrays in a metal plate: experiment and theory,\" Opt. Express 14(26), 12637\u201312643 (2006).\nO. Mata-Mendez, J. Avenda\u00f1o, and F. Chavez-Rivas, \"Rigorous theory of the diffraction of Gaussian beams by finite gratings: TM polarization,\" J. Opt. Soc. Am. A 23(8), 1889\u20131896 (2006).\nP. B. Catrysse, G. Veronis, H. Shin, J.-T. Shen, and S. Fan, \"Guided modes supported by plasmonic films with a periodic arrangement of subwavelength slits,\" Appl. Phys. Lett. 88(3), 031101 (2006).\nS. Collin, G. Vincent, R. Ha\u00efdar, N. Bardou, S. Rommelu\u00e8re, and J.-L. Pelouard, \"Nearly perfect Fano transmission resonances through nanoslits drilled in a metallic membrane,\" Phys. Rev. Lett. 104(2), 027401 (2010).\nW. E. Kock, \"Metal-lens antennas,\" Proc. IRE. 34, 828\u2013836 (1946).\nW. L. Shuter, C. P. Chan, E. W. P. Li, and A. K. C. Yeung, \"A metal plate Fresnel lens for 4 GHz satellite TV reception,\" IEEE Trans. Antenn. Propag. 32(3), 306\u2013307 (1984).\nF. 
Gallee, G. Landrac, and M. M. Ney, \"Artificial lens for third-generation automotive radar antenna at millimetre-wave frequencies,\" IEEE Trans. Antenn. Propag. 150, 470\u2013476 (2003).\nR. Mendis, M. Nagai, Y. Wang, N. Karl, and D. M. Mittleman, \"Terahertz artificial dielectric lens,\" Sci. Rep. 6(1), 23023 (2016).\nR. Mendis, M. Nagai, W. Zhang, and D. M. Mittleman, \"Artificial dielectric polarizing-beamsplitter and isolator for the terahertz region,\" Sci. Rep. 7(1), 5909 (2017).\nR. G\u00f3mez-Medina, M. Laroche, and J. J. S\u00e1enz, \"Extraordinary optical reflection from sub-wavelength cylinder arrays,\" Opt. Express 14(9), 3730\u20133737 (2006).\nM. A. Ordal, L. L. Long, R. J. Bell, S. E. Bell, R. R. Bell, R. W. Alexander, and C. A. Ward, \"Optical properties of the metals Al, Co, Cu, Au, Fe, Pb, Ni, Pd, Pt, Ag, Ti, and W in the infrared and far infrared,\" Appl. Opt. 22(7), 1099\u20131119 (1983).\nH. Lochbihler and R. Depine, \"Highly conducting wire gratings in the resonance region,\" Appl. Opt. 32(19), 3459\u20133465 (1993).\nE. Popov, L. Mashev, and D. Maystre, \"Theoretical study of the anomalies of coated dielectric gratings,\" Opt. Acta (Lond.) 33(5), 607\u2013619 (1986).\nX. Ming, X. Liu, L. Sun, and W. J. Padilla, \"Total absorption by degenerate critical coupling,\" Opt. Express 25, 24658 (2017).\nR. Mendis and D. Grischkowsky, \"Undistorted guided-wave propagation of subpicosecond terahertz pulses,\" Opt. Lett. 26(11), 846\u2013848 (2001).\nH. Friedrich and D. Wintgen, \"Interfering resonances and bound states in the continuum,\" Phys. Rev. A Gen. Phys. 32(6), 3231\u20133242 (1985).\nW. Suh, Z. Wang, and S. Fan, \"Temporal coupled-mode theory and the presence of non-orthogonal modes in lossless multimode cavities,\" IEEE J. Quantum Electron. 40(10), 1511\u20131518 (2004).\nP. U. Jepsen, D. G. Cooke, and M. Koch, \"Terahertz spectroscopy and imaging \u2013 modern techniques and applications,\" Laser Photonics Rev. 
5(1), 124\u2013166 (2011).\nFig. 1 Geometry of the scattering problem with an array of thin metal plates. A periodic array of identical free-standing metal plates with length L, plate thickness a, uniformly spaced with separation d is illuminated by p-polarized plane waves with an incident angle of \u03b8. We assume a << d, for the thin-plate approximation.\nFig. 2 Theoretical amplitude-reflectance |r| for a = 30 \u03bcm and L = 4 mm as functions of (a) k\/\/ and frequency with d = 1 mm, and (b) d and frequency with \u03b8 = 10\u00b0. Green ellipses indicate a few locations where the resonance linewidth becomes infinitesimally small, so that the resonance appears to vanish. The green dashed lines indicate the cut-off condition of the (\u22121st)-order free-space mode.\nFig. 3 Analytic model to explain the resonant condition and the vanishing linewidth, and the comparison with the rigorous calculations. (a) D0 = 0 (solid lines) and N0 = 0 (dashed lines) with different symmetry (red for the symmetric sub-problem; blue for the anti-symmetric sub-problem) for L = 4 mm. D0 = 0 gives a good approximation to the rigorously-calculated resonant condition (RC) (square dots) in Fig. 2(b), and the cross-points of D0 = 0 and N0 = 0 with the same symmetry give good approximations to the vanishing-linewidth conditions for the corresponding resonances in Fig. 2(b). (b) Rigorously-calculated Qs of the first four resonances in Fig. 2(b).\nFig. 4 FEM simulations of the thin-metal-plate array with a beam illumination. A collimated Gaussian beam illuminates a periodic array of 200 metal plates with L = 4 mm, a = 30 \u03bcm and d = 1 mm at \u03b8 = 10\u00b0. While generally the structure is almost perfectly transparent, at (a) 153.927 GHz and (b) 165.262 GHz strong reflections can be observed. Zoom-in view of the H-field norm at (c) 153.927 GHz and (d) 165.262 GHz clearly show the excitation of TM1 cavity modes.\nFig. 5 Comparison of experimental results with theoretical predictions. 
(a) Schematic of the experimental setup. The transmission spectra of a device with L = 4 mm, a = 100 \u03bcm and d = 1 mm are measured with a THz-TDS system at various incident angles from 0 to 45\u00b0. (b) Experimental, and (c) theoretical power-transmittance as functions of frequency and \u03b8. In (b) and (c), BIC-induced vanishing of resonances for normal incidence and for the 4th resonance for oblique incidence are indicated by the green ellipses and yellow ellipse, respectively. (d), Experimental (blue) and theoretical (red) power transmittance as functions of frequency at \u03b8 = 26\u00b0.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzuyvc b/data_all_eng_slimpj/shuffled/split2/finalzzuyvc new file mode 100644 index 0000000000000000000000000000000000000000..3199f21b77b51f96ca2c1bb8198ac2b1bcb32680 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzuyvc @@ -0,0 +1,5 @@ +{"text":"On the 6th January 2019, The Arena played host to the inaugural Battle of the King of Kings.\nThe event, sanctioned by the World MuayThai Council (WMC) in collaboration with The Arena and Muay Art Fitness promotions, brought 18 professional Muay Thai athletes hailing from seven different nations to the new sports park.\nThe evening saw Singaporean fighters, including Vincent Chew, Alvin Sham Raaj Elanshran, Lee Dejun Damien, Bryan Tee and Marilyn Cheng, facing challengers from around Asia.\nThe homegrown fighters from various gyms across the island were given a rare platform to showcase their skills against their international counterparts.\nIn addition to the thrill of taking to ring in competitive matches, the athletes relished the opportunity to entertain the passionate gathering of supporters, drawing attention to the sport and culture they had dedicated so much of their lives to, but rarely had a chance to celebrate.\nAmidst the charged atmosphere, local fighter, Bryan Tee, seized his opportunity to challenge 
for the Super 4 Title.\nIn his first bout, he stared down Le Hoang Duc of Vietnam, the intense match-up ending in a KO during the first round, which sent Bryan to the final. Sharing the ring with Malaysia's Zulhilmi Bin Rosli in the fight for the title, Bryan Tee once again emerged victorious with a KO in round 3 to clinch himself the trophy.\nFollowing a successful evening at this first such event, WMC, The Arena and Muay Art Fitness will continue to support the local Singapore scene of dedicated Muay Thai specialists and cultivate excitement around the sport which has brought us silverware in the very first The Arena King of Kings.\nThis entry was posted on Wednesday, January 16th, 2019 at 4:23 PM\tand is filed under News.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We carry out many end of tenancy cleans every day throughout Kingston and the surrounding towns, and we are proudly trusted by many local letting agents, so we are the safe choice if you're looking for a company that will ensure you get all your deposit back at the end of your tenancy.\n\"Very pleased with the services provided by Mayer Enterprises. The team is professional and hard-working, very good at what they do too. They did a fantastic job with my end of tenancy cleaning. Will definitely be using them again.\"","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Our Darjeeling Sikkim tour package gives you a chance to explore the natural beauty of Darjeeling and Sikkim with the best package cost offer and service.\nDarjeeling is a gem of a place, known for its youthful vibe and for its modern as well as colonial charm. Because it was a popular hill station during the old days, a lovely Victorian town was built among the Himalayan foothills.
The remnants of this era are still visible around the town, and Darjeeling remains a popular summer and fall resort for the natives of Kolkata.\nFor foreign tourists, the main attractions are the cultural diversity, the beautiful views, a variety of trekking options, and the opportunity to cool down after a stint in the plains. Stable year-round weather, hill drives, adventure sports and an India-China cultural combination are other key attractions of this place.\nPick-up at Bagdogra Airport \/ NJP railway station \u2013 Gangtok (5,500 ft), (128 kms \/ 5hrs). Meet & Greet on arrival at Bagdogra Airport \/ NJP railway station & transfer to Gangtok. Overnight stay at Gangtok.\nAfter breakfast at the hotel, start a tour to Tsomgo Lake (12,400 ft, 34km). The lake is 1 km long, oval in shape and 50 ft deep. Here you can find snowfall & yak riding. A visit to the legendary Baba Mandir and Nathula pass depends upon permit availability, at an additional cost of Rs. 3,000\/- per vehicle for the permit (the pass is closed on Monday and Tuesday). Overnight stay at Gangtok.\nMorning go for full sightseeing covering Rumtek monastery, Bhanjhakri waterfalls, Ropeway, Do-DrulChorten, Namgyal Institute of Tibetology, Directorate of Handicraft and Handloom (Sunday Closed) Flower Show (Orchid). Overnight stay at Gangtok.\nTransfer to Kalimpong (4,100 ft, 75 km). Go for sightseeing covering Army Golf course, Durpin Monastery, Pine view nursery, Mangal Dham, Delo Hill. Overnight stay at Kalimpong.\nDrive towards Darjeeling (7,200 ft, 50 km). Overnight stay at Darjeeling.\nEarly Morning (at 04:00 am) drive to Tiger hill to watch the spectacular sunrise over Mt. Khangchendzonga (28,208 ft, world's 3rd highest peak) (subject to clear weather); on your way back visit Ghoom Monastery and Batasay Loop. After breakfast visit Himalayan Mountaineering Institute, P.N. Zoological Park (Thursday closed), Tenzing Rock, Tibetan Refugee self-help Centre (Sunday closed), Tea Garden (outer view) and Japanese Temple (4 hrs from 9.00 to 13.00 hrs).
Evening free for shopping and personal activities. Overnight stay at Darjeeling.\nDrop to Bagdogra Airport \/ NJP railway station according to the schedule. Tour Ends.\nTwin sharing in one room.\nAll Transfer & Sightseeing will be done as per the given trip schedule.\nDue to rough roads and seeing the clearance of the roads the cost has been based on 06pax in a vehicle.\nJain food will be provided on request kindly specify at the time of booking.\nCost for Airfare, Train fare, Paragliding, Rafting Charges.\nCost for personal expenses such as laundry, bottled water, soft drinks, porter charges, tips etc.\nAny cost arising due to natural calamities like, landslides, road blockage, political disturbances etc. (Such cost is to be borne by the client and directly payable on the spot).","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Proclaim Christian Treatment Program at Twelve Oaks treats those individuals who are seeking relief from the effects of substance abuse in a faith-based program.\nThis program offers the intensity of a Christ-centered process, integrated with our traditional treatment program. The psychoeducation and intensive therapy takes place in an atmosphere of faith and hope, based on belief in Christian values. Our program offers Biblically and psychologically based counseling. The combination of this program with the Twelve Oaks traditional treatment program enables you to look at the impact of substance abuse on your life and on your relationship with God.\nThe primary modality of treatment in Proclaim is group therapy with adjunctive individual therapy. Therapy is focused on the impact that substance abuse has had on not only the physical, cognitive, psychological and emotional aspects of daily living, but also on your commitment to your faith. 
Psychoeducation groups will explore topics including the impact of substance abuse, decision making, anger management, relationships, emotional expression, awareness, behavioral responses, self-harm, personal power, addiction & trauma, and responsibility.\nIn conjunction with Proclaim, you will be participating in the substance abuse program at Twelve Oaks. The combination of these two components provides an integrated treatment program.\nDiscerning God's will from your own and abstaining from substances and addictive behaviors.\nWorking with staff members who are committed Christians as well as addiction professionals.\nTreatment Services supported by scripture and biblical teachings.\nLetting go of the guilt, shame and self-doubt related to substance abuse.\nTreatment with a group of like-minded Christians committed to their faith and recovery.\nAn effective blend of traditional treatment, alternative treatment and faith based treatment.\nOpportunities for spiritual development and relationships with self, others and God.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Recruiting and hiring are not the same thing. Hiring is the process of making an offer to a candidate you need, negotiating employment terms, salary, and benefits, then filling out a lot of paperwork. Recruiting comes before hiring, and it's the process of selling your company and its opportunities to a prospective employee. The distinction has become increasingly important, as the job market has quietly shifted from an Employers' Market to one where desirable employees hold the cards. As the economy has continued in its recovery and businesses are gearing up for a period of growth, experienced managers are in high demand, and if you want to hire the best, you're going to have to learn to recruit.\nYou've built a profile of the ideal person for your position. Do you understand what motivates that person? Will they be moved by the ideals and mission of your company? 
Belief in the company's products? Earning potential or a specific career path? Creative control over a program and team? In all likelihood, it's some combination of these things. The important part is that you identify which things matter as you're interviewing that candidate, then highlight those things in your presentation, and reiterate them if and when you make an offer.\nFew companies are so flush at this point that they can afford to offer bloated salaries in order to snap up top talent, but the managers who are actively searching for new opportunities know their value in today's market, and you're not going to win them over with a lowball offer. Don't base your offered salary solely on the candidate's current or last salary, because a large number of experienced managers are now moving because of inadequate salaries. Understand the current market and make an offer that reflects that understanding.\nWe've just come through a period when employers could afford to take a month or more on a hiring decision, even for seasoned managers. That period is over. If you find the right candidate, stop interviewing and make an offer immediately. If an additional interview is required before you can make an offer, schedule it immediately, because there is probably another company writing an offer for that candidate.\nWorking with a management recruiter is helpful in finding the right candidates to interview. It's also a great advantage to be able to discuss a potential offer with your recruiter and gain their insight as to what selling points are most likely to connect with that candidate, and what it's going to take to make an offer that candidate will accept.
If you need to negotiate, your recruiter can also talk you through the process and help you formulate a strategy to win over your perfect new manager.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzwipz b/data_all_eng_slimpj/shuffled/split2/finalzzwipz new file mode 100644 index 0000000000000000000000000000000000000000..b08d3e32c3bf77215113adcf71c504f1c7d4ec02 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzwipz @@ -0,0 +1,5 @@ +{"text":"I generally forget to check the age thing. Yet another case. Wonder if overage Georgia LB prospect scares the Steelers off after Jarvis.\nHe's not really overaged. I think the average prospect is 22.5 or something at the start of the NFL season and Carter will be 22.75.\nHarrison Phillips in the 5th!!!! That totally unrealistic pick makes the draft for me.\nnfldraftscout currently has Harrison Phillips as a 1st round pick.\nHere is a cutout of the actual board draft of the first round. If the Steelers stayed put at 28 or traded out for a second round pick like B2B did; who else here that fell in this area would you take for the Steelers other than Lorenzo Carter??? (Outside of Rashaan Evans because I'm thinking Evans vs Carter would be about a 50\/50 split) Who is better than Carter between picks 28 thru 37 from this list???\n1 26 ATL - Jobus Rum - Taven Bryant DT Florida Gators.\n1 32 PHI - Ice - Lorenzo Carter, OLB\/ILB\/EDGE, UGA.\nI'd rather take Reid, or Jackson, than Carter.\nJackson is a straight up zone CB. He's great at it and would have been a wonderful fit in the cover 3 days.\nHarrison Phillips, maybe? Otherwise CBs, OL, RBs, WRs... not too many safeties, ILB, edge guys at that point that seemed like good value.\nI'll be honest-- while I'm high on Lorenzo Carter, he's a risk not unlike Dupree. Super athlete who made more plays than Kentucky Dupree, but is a real tweener, without classic edge rush trait nor experience playing inside. 
I think Nwosu has a similar toolkit but arguably better burst and instincts that make him look faster than he is.\nI think Carter will not give the quick Edge rush skills at first but has shown the ability to cover and set the edge. The question will be can you project him to learn the pass rushing skills? Dupree has been ok in pass rushing but can't set the edge.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We're looking for exceptional candidates who will be an asset to our firm, but we also know that we need to impress the best candidates.\nTo see if we're right for each other, you can attend our Summer Scheme, which provides a real insight into life with Ashfords.\nDuring the Summer Scheme you will spend a week at either our Exeter or Bristol office working in two departments, getting to grips with some challenging yet rewarding tasks. You'll also spend time with our current trainees, getting first-hand knowledge of what it's like to be a trainee with us.\nFor those applicants who attend the Summer Scheme, the Assessment Centre takes place on the final day.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Alexandria is known as a sweet destination with nearly 20 artisanal sweets and dessert spots, and when the summer sun is shining, you can cool off with a sweet treat from boozy gelato to liquid nitrogen ice cream and everything in between. We've rounded up our favorite spots for ice cream and other cold treats in the Port City, with enough options to keep you cool all summer long.\nYou definitely don't want to miss Dolci Gelati, which might just have the best gelato flavor in the world. The all-natural flavors change from day to day, with fun recipes like Mojito (lime and mint), Maple Brown Sugar, Poached Pear Cinnamon and more, and founder Gianluigi Dellaccio was even appointed Italy's \"Ambassador of Homemade Gelato\" in 2018.\nThis family-owned gelato shop serves homemade authentic Argentinean gelato, including boozy flavors like Malbec. 
The distinctive flavors at Casa Rosada Artisan Gelato like Sambayon and Dulce de Leche are inspired by the owners' Argentine heritage, and you'll have a hard time finding them anywhere else.\nKiller E.S.P. is the place to be if you want homemade gelato or sorbet with really good coffee. An affogato is the perfect mid-day pick-me-up, with a shot of espresso poured over a heaping scoop of gelato.\nOpened in 2017 in Alexandria's Del Ray neighborhood, Dolce & Bean is a family owned confectionery shop offering premium artisanal products, including an incredible selection of macarons, gourmet fudge, crepes, slow-churned gelato and more. It's the perfect neighborhood spot for sweet treats and gelato with the family or an afternoon pick-me-up.\nThe Dairy Godmother's authentic Wisconsin-style frozen custard is an absolute must in Alexandria's Del Ray neighborhood. The custard hits the spot, and you can even pick up Puppy Pops for the dog to enjoy. Check out the Flavor Forecast to see upcoming special flavors.\nLiquid nitrogen meets fresh farm ingredients at made-to-order ice cream concept Nicecream, opened in Old Town in the spring of 2017. Each Nicecream serving is crafted right before the customer's eyes with rotating weekly and seasonal flavors such as Mint Mojito, Cran-Orange Dark Chocolate Chunk and Butter Toffee Pecan.\nPop's Old Fashioned Ice Cream Co.\nPop's Old Fashioned is the classic ice cream parlor, offering over 60 homemade flavors. Known as the place \"where you can take a date or your three-year-old on a Saturday night,\" Pop's is perfect for a King Street or waterfront stroll on a warm summer night.\nKilwins is a sweet-tooth's dream, filled with chocolatey treats and ice cream. 
Stop in to pick up sweet treats for friends and family, and pick up a scoop (or two) of ice cream for yourself.\nNestled into a small shop near the Marina waterfront, Ben & Jerry's is the perfect spot to try a funky flavor or throw an ice cream party.\nOur impressive list of ice cream spots doesn't end there. You can pick up milkshakes on the side at some of the best restaurants in town. At Triple Craft on Daingerfield Island north of Old Town, enjoy craft milkshakes with whipped cream, sprinkles, candies, drizzles, and sweet treats on the Potomac River waterfront. Make it an adult shake by adding chocolate Kahlua, vanilla Bourbon or strawberry rum-amaretto. In Del Ray, you can get a hand-spun milkshake with your burger at Holy Cow, topped with fresh farmers' market strawberries. Lost Dog Cafe in the Parker-Gray neighborhood of North Old Town is open until 11 p.m. for those late night milkshake cravings, while Haute Dogs and Fries in North Old Town has extra-thick shakes in flavors like salted caramel.\nFor more information on sweet spots in Alexandria, click here. Be sure to check out the Ice Cream Bowl Fundraiser at the King Street Art Festival each year, where you can choose from over 1000+ ice cream bowls handmade at The Art League in Alexandria.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"From the Disney classic tale, Beauty and the Beast comes this delightful Mrs Potts charm. It features a lavender enamel teapot design with a hinged lid that opens up. Features a parrot clasp to clip onto either of the Disney charm bracelets or the icon charm necklace.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Johnson Controls is adding more manufacturing capacity for batteries that power Start-Stop vehicles in China.\nThe company is increasing production of Absorbent Glass Mat (AGM) batteries in its Changxing, Zhejiang Province facility, from 1.5 million to 3.4 million a year, the expansion project is expected to be complete in 2017. 
\"Increasing orders of AGM batteries from customers looking to improve fuel efficiency of vehicles has prompted us to add to our manufacturing capacity, and we expect this momentum towards Start-Stop technology to continue,\" said Kenneth Yeng, vice president and general manager, Johnson Controls Power Solutions China. Currently, about 5 percent of new vehicles in China have Start-Stop systems. The company predicts this number to rise to about 40 percent by 2020 as automakers have been challenged to meet aggressive fuel economy targets set by the government. \"Start-Stop is the best solution to help automakers meet increasingly strict environmental regulations,\" said Yeng. \"Being the world's leading provider of batteries for Start-Stop vehicles, we are confident we can provide the same high-quality performing products and services to customers in China.\" Johnson Controls recently announced it will invest USD 555 million between 2011 and 2020 to expand AGM battery production capacity in Germany, the United States and China in anticipation of increasing global demand. In August, the company also unveiled plans to build a USD 200 million state-of-the-art plant in Shenyang, China, to produce batteries for Start-Stop vehicles.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzxefu b/data_all_eng_slimpj/shuffled/split2/finalzzxefu new file mode 100644 index 0000000000000000000000000000000000000000..fa6c9c2e2337c5c1d63e8861115bd89145017379 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzxefu @@ -0,0 +1,5 @@ +{"text":"Or in the pipeline of becoming Manager?\nOr still student but want to learn about various aspects of Management?\nOr an Entrepreneurs planning to open an office for your business?\nThis course will take you through from the basics of Office to Office Management concepts.\nOnce, long ago, every one in the world ice skated. In fact, everyone worked on the ice. 
Then, someone came up with the idea of working off the ice. A new word was then formed by combining the words off and ice. That is where the word office comes from. This is why the majority of people in the world don't ice skate and instead work in offices.\nThe office is described as the nerve centre of the entire organisation. Present-day office activities have expanded to a wider extent to keep pace with rapid globalisation.\nThe office is now an indispensable part of any business organisation. Modern offices are organised on scientific principles, and their management and administration are in the hands of techno-savvy Managers, which has paved the way for the sustenance of a business amidst cut-throat competition.\nAn office performs tasks like the framing of business policies, processing and communication of information, record keeping, handling mail, execution of orders and managing receipts and payments. An office can be described as any place where information converges on paper, which is documented, preserved and used for both current and future operations of the business.\nHence, Office Management is very important for the successful functioning of the business.\nThis course has video lectures covering the topics discussed above.\nTake this course to learn the basics of Office Management.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"If a given block size limit is part of a given cryptocurrency at a given time, can economists legitimately say anything with regard to such a limit? Must this topic be left alone as a mere qualitative characteristic of a product that users have freely selected?\nFrom one perspective, if user preferences are subjective matters of taste and opinion, nothing can be said other than that Ravi prefers this, Setsuko prefers that, and Heinrich prefers some other thing.
If various users prefer a cryptocurrency with one block size limit or another, economists must remain silent and leave users to their purely subjective preferences, only taking note in abstract and neutral terms of the shape of these preferences. Personal preferences are \"ultimate givens,\" their specific content irreducible \"black box\" starting points for economists.\nThis appears to be a sounder critique. Block size limits are indeed characteristics of specific cryptocurrencies as products. Users may well differ in their subjective preferences on such matters for reasons not even fully understandable. Users differ in their values. Motivations can even include various grades of membership signaling. An economist speaking on such things, this criticism goes, merely \"smuggles in\" his own particular personal preferences or party affiliation \"dressed up as\" objective analysis.\nCan any role for economic analysis here be rescued from this critique? It may help to take a step back and consider some other scenarios to gain perspective and then return to apply that perspective to the case under consideration.\nFirst, consider two hypothetical cryptocurrencies, one with a block size limit that directly influences the ordinary structure of supply and demand in its transaction-inclusion market, and another that does not (this can equally be the same cryptocurrency, such as Bitcoin, at two different phases in its history). The first cryptocurrency's code alters the operation of the market between transaction senders and miners, limiting the total quantity of services that can be supplied per time period. Certain economic and industry-structure effects follow. These effects apply to a coin with this characteristic, but not to one without it. What are those differences? Those differences were the central theme of the interview to which this series follows.\nYet subjective individual preferences do not alter the distinctions analyzed. 
Thus, even though the content of the preferences themselves may be a black box for economists, the two differing transaction-inclusion markets still have objectively describable economic distinctions independent of any such preferences. Dropping a stone from the Tower of Pisa is a choice, one with all manner of possible motivations, but the resulting acceleration of gravity is not altered by any personal opinion as to the nature and effects of such gravity.\nNext, consider several hypothetical intentional communities. It is possible to establish and run such communities under various rule sets. Although intentional communities have often been to some degree communistic (\"commune\"), it is possible to set up other idealistic havens, perhaps some real-life attempt at an Ayn-Rand-style Galt's Gulch or a Neal-Stephenson-style Thousander retreat. Participation is governed by a kind of \"social contract,\" but in this context the contract is more likely to be one that actually exists, including specified conditions to which participants have assented by joining and staying, possibly even signing a written agreement with terms of residence.\nLet us assume that in all cases, no matter what the other internal rules and cultures, participants are not forced to either join or stay. This freedom of entry and exit corresponds to cryptocurrency participation choices.\nNow consider three such voluntary intentional communities. Bernieland features a $20 minimum wage. MagicCorner bans \"wage relations\" altogether. Finally, Murrayville has no numerical restrictions on wage agreements. Even though all three are voluntary communities, only Bernieland and MagicCorner include labor rules that restrict wage rates. The voluntarily agreed community rules specify certain wage-market restrictions. 
These types of restrictions are traditionally analyzed under the rubric of market intervention by state agencies, which are often subsumed under the term \"government.\" Whether one wants to also call a complex around intentional community rules and enforcement measures a type of \"government\" or not is beside the point. There may be valid reasons for either using or not using that word, provided suitable definitions and qualifications are set out.\nIn this case, it is analytically valuable to be able to note how Murrayville is free of rules that specify restrictions on the existence or range of wages in its labor market. Murrayville might therefore be described within this context as having a labor market free of intervention\u2014unlike Bernieland and MagicCorner. Considering this difference alone, one would expect Murrayville to therefore have the best functioning labor market of the three, with more ample employment opportunities for those aiming to work on a wage basis.\nThe fact that all participants in all three communities voluntarily join and agree to the respective terms of each does not alter the economic distinctions between their differing labor market rules. Even though all three communities are voluntary, it remains that only one has a minimum wage, another bans wages, and a third does neither.\nArguing that the term \"intervention\" can only apply to state agency actions does not aid in the economic analysis of wage rate restrictions within these voluntary intentional communities. One might try to suggest a better term to use here instead of intervention. However, since the effects of wage restrictions have already been analyzed under the rubric of state-made laws described as \"interventions,\" using established terms\u2014with suitable qualifications, as was done\u2014easily accesses the appropriate implications.\nNow in an effort to compete for residents, each community launches its own altcoin. 
Berniecoin does not allow any transaction with a fee above 1.5 Bernielashes\/byte to be mined. This seeks to create a price ceiling for transaction inclusion. No one can pay more within the protocol. No one can use greater wealth to supersede other transaction senders. MCcoin's protocol includes no way for transaction fees to be included at all; no one can bid for priority by including a fee. Finally, Murraycoin does neither. Transactions with any fee, or none, can be sent, and each miner is free to include or exclude any of these. Each node is likewise free to either relay any of them or not, or to try to figure out some ways to monetize such services.\nOnce again, based on this alone, Berniecoin and MCcoin demonstrate forms of what has heretofore been best characterized as \"market intervention\" within their respective communities. In this case, their protocols specify this directly. Murraycoin alone is free of any such effective intervention in its transaction-inclusion market. The others have policies that place a ceiling on the payment of transaction fees. The voluntary nature of participation in all three does not alter this distinction. One cryptocurrency has a maximum transaction fee, another bans fees, and the third does neither. These respective encoded policies are indeed part of what users implicitly choose when they use one rather than another. Nevertheless, distinct economic and social implications follow from those differences, and do so apart from any beliefs or wishes as to the nature of such implications.\nThis price-ceiling example demonstrates the general applicability of market intervention analysis within the context of voluntary arrangements. With the issue of a block size limit that restricts normal transaction volume, the relevant concept is not a price ceiling, but an output ceiling.\nA subtler misconstrual of my interview assumes that I argued that since a particular situation or dynamic exists, someone must have acted to bring it about. 
However, I made no mention of any specific persons or groups, nor did I attribute any intentionality or motive. If there is thunder, it does not necessarily follow that Thor must have hammered it out.\nInstead, I identified a market. I noted an effective limit to industrywide service provision as actual market volume begins to interact with a limit long in place, but formerly inert for this purpose. I described some of the general effects of any such limit to the extent it actually begins to limit ordinary volume. I argued that these effects are negative, but also easy for observers and participants of all kinds to miss or underestimate because they entail hidden costs and distort industry structure evolution from paths it could have taken instead, but did not, thus rendering those possibly better alternative paths \"not seen\" in Bastiat's sense.\nCertain economic effects follow from output ceilings and these have commonly been analyzed in terms of cartel situations. Yet this implies no necessary argument that anyone has set out to form a cartel or to create any of these situations or dynamics. That would be a completely different argument, more journalistic in nature and evidence requirements.\nBeing encoded in a protocol is a new way for an output ceiling to exist. Normally\u2014but not in this case\u2014any given industry actor, either current player or potential entrant, could just violate such a ceiling unless facing some overt or threatened form of legal or quasi-legal enforcement. Consider post-war Japanese steel production. An industrywide output ceiling was maintained for many years to limit competition. The Ministry of International Trade and Industry \"recommended\" this as a \"voluntary\" measure for domestic steelmakers. Of course, when some rebels sought to exceed the limit, MITI simply refused to approve their requests for increased purchases of more iron ore and fuel, which it also oversaw. 
Only through MITI could such a limit be maintained.\nThis type of limit sets up an upside-down and sub-zero-sum dynamic in an industry. There are concentrated gains for the inefficient (who should otherwise probably quit and sell off assets), somewhat less concentrated losses for the more efficient (who are unable to expand as much), hidden losses for would-be entrants (who are never seen because they avoid entering a market with an arbitrary ceiling), and dispersed and nearly invisible losses for many anonymous end users (who mostly have little clue about any of this and how it is happening at their own expense). Once again, though, all this can be so regardless of anyone's knowledge or intentions.\nThat said, noting the social science concept of spontaneous emergence as one factor to consider does not also constitute a claim that certain effects have not been planned or that they do not actually produce special interest benefits for some at the expense of others. It only points out that any such intentions and plans as may or may not exist are not directly relevant to the comparative analysis of rule effects. The topics are distinct.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"quality management, process improvement, cost control, project management, mergers\/acquisitions, and leadership development.\nDaley is a certified manager of quality and organizational excellence, Six Sigma Black Belt, medical technologist, and diplomate in laboratory management. She's a frequent speaker at seminars and national meetings, and has authored several publications in industry journals and guidebooks.\nShe has a Master of Arts degree in administrative leadership and a Bachelor of Science degree in medical technology.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"For your business to make it big in the highly competitive global market, you need to be able to develop a powerful brand experience for your customers. 
With the development of marketing trends in recent years and the popularity of social media and mobile applications, we now have the power to choose which media suit our businesses best. This, however, gives you the challenge of finding the perfect creative solutions service and offshore web development team to help spread the news about the company to your potential customers effectively.\nThe initial step is to decide the purpose of your website. Your goals need to be in tune with your plans for reaching that success.\nLet us say that your website is for logistics or transport; then your priorities should shift to delivering the goods on time, as well as growing and taking care of your customers.\nIf you are planning to launch a start-up and require brand awareness, the step you need to take is to give as much information as you can to get the word about your products and services out there to your potential customers.\nHow much does your business need to spend on marketing?\nThis is an important factor that you need to keep in mind when establishing your business website. Budget is a deciding factor when it comes to hiring or outsourcing a creative team.\nAs with all the things in your business that you spend your money on, the old saying \"You only get your money's worth\" also holds true in this industry. If you decide to cut your budget just to save a few pennies, you may not get the results you expected or the worth of the service you want.\nBasically, this is not about how long these people have been in the business. To tell you the truth, a lot of newcomers have the freshest marketing and graphic design ideas that old players may not have. 
What you need to consider here is how well the artist knows what your business needs are and how well he can deliver it, so that it makes a strong impact on your business.\nThe fast-paced, fast-changing world of web development and design makes this technically diverse topic a highly complicated subject to talk about, even for those of us who've been in the industry for years. So it is very important that you know what your creative team is doing for your website.\nMake sure that the project is a collaborative effort and explain to them what you are trying to achieve and which methods both parties can choose to attain the goals.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Thoughts from a Traveling Tech: Consider yourself mooned.\nOn May 5th, 2012 the Moon reached perigee and we were treated to a special view.\nFor those less technical, the Moon actually orbits in an oval around the Earth, not a circle. That means that at any time the Moon is moving back and forth in distance from the Earth and the perceived size of the Moon changes with that distance.\nOn May the 5th the Moon reached the closest point in its orbit AND was full, which has not happened in a long time.\nI happen to have close friends who are also shutter bugs and who have access to the roof of a large building that is several stories tall and sits on top of a hill. This leads to some beautiful shots of the moon. My only regret is that my skills with the new camera do not do the moment justice, and I am still learning to use photo processing tools to make the shot even better. Still waiting to find out how crazy things got while the Moon was full.\nDon't sell yourself short. 
That is a great shot.\nThanks, I am still working out how to get the clouds AND the Moon in the picture.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzxlxh b/data_all_eng_slimpj/shuffled/split2/finalzzxlxh new file mode 100644 index 0000000000000000000000000000000000000000..497f7987ea386b802a15cf17105e60247af7e5ad --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzxlxh @@ -0,0 +1,5 @@ +{"text":"***DEALER SERVICED***, ***QUALITY ASSURED VEHICLE***, ***The Right Price...Right Up Front!***, Fully Detailed, Quality Assured 100pt. Inspection, *FREE LIFETIME POWERTRAIN WARRANTY*, All New Vehicles from The Razzari Auto Centers come with our Exclusive Lifetime Powertrain Warranty, Transit Connect XLT, 4D Cargo Van, EcoBoost 1.6L I4 GTDi DOHC Turbocharged VCT, 6-Speed Automatic with Select-Shift, FWD, Frozen White, Cloth, 2 Speakers, 3.21 Axle Ratio, 4-Wheel Disc Brakes, ABS brakes, Air Conditioning, AM\/FM radio, Bodyside moldings, Brake assist, Bumpers: body-color, CD player, Cloth Front Bucket Seats, Driver door bin, Driver vanity mirror, Dual front impact airbags, Dual front side impact airbags, Electronic Stability Control, Front anti-roll bar, Front Bucket Seats, Front Center Armrest w\/Storage, Front fog lights, Front reading lights, Front wheel independent suspension, Heated door mirrors, Illuminated entry, Low tire pressure warning, Occupant sensing airbag, Outside temperature display, Overhead airbag, Overhead console, Passenger door bin, Passenger vanity mirror, Power door mirrors, Power steering, Power windows, Radio data system, Radio: AM\/FM Stereo Receiver w\/Single CD, Rear anti-roll bar, Remote keyless entry, Speed control, Tachometer, Telescoping steering wheel, Tilt steering wheel, Traction control, Variably intermittent wipers, Wheels: 16\" x 6.5\" Steel w\/XLT Full Wheel Covers.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"So, you have been playing LA Times crossword and got 
stuck on Golden State sch. in Davis? Don't worry, we can help you!\nSolving all the clues to a crossword is nearly impossible. Only a few people can do it. So it is not a big deal if you don't know the answer to the \"Golden State sch. in Davis\" clue. Just let us help you.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"For heritage companies like Fila and Champion \u2014 which have product ranges covering everything from hype sneakers to activewear \u2014 success relies on being able to appeal to a diverse consumer base.\nAccording to Colon, Fila's history in a variety of different categories created an opportunity to authentically stretch the brand and reach a newer, younger customer.\nOn episode 4 of Glossy Trend Watch: Streetwear Edition, fashion reporter Danny Parisi sits down with Colon to discuss the role of a heritage brand, the categories a brand should enter to feel authentic, and the way a brand built for tennis courts became an essential player in streetwear. Edited highlights below.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Mobilized Fuels delivers fuel to fleets, truck to truck (wet-hosing) or with a bulk drop to a tank. Serving Metro Atlanta and the surrounding area, we refuel vehicles, tanks, equipment, generators, and construction sites \u2013 anything you have that needs fuel.\nOur on-site fueling service frees your organization from the need for costly tanks and dispensing equipment, saving time and money and avoiding risk. We are committed to providing unmatched reliability and service that exceeds expectations, all at a fair and competitive price. We are eager to meet your specific needs, so we offer flexibility in delivery time and a variety of fuels ranging from ULSD to clean burning biodiesel.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I first came to Dr. Giaquinto on March 21, 2008. My wife had jaw surgery twenty-five years ago and had her jaw wired shut. 
I would wake up anytime she cleared her throat, fearing she was choking. Poor sleep ensued, and it worsened over the years due to job stress. I would routinely fall asleep easily, then wake up about two to three hours later and be restlessly awake for the rest of the night. This usually happened four to five times a week for the last twenty-five years. I would rarely, if ever, feel totally rested, which I believe contributed to low energy in general and fatigue. On top of that, I also would wake up four to five times a night to go to the bathroom to urinate. My medical doctor prescribed drugs to help me sleep. Another chiropractor also gave me natural sleeping aids. The only thing that worked was one of the drugs prescribed, but I was reluctant to take it regularly for fear of becoming dependent. I was referred to Dr. Giaquinto by my daughter. Since being treated by Dr. Giaquinto in a short amount of time with adjustments and enzyme nutrition, I would estimate that my sleep pattern has improved at least 50%, and I might get up one time a night to urinate, but that is rare now. I still have some nights where I have trouble sleeping, but I feel that is related to major stress in my life, and stress management is something I am working on to help me sleep. Overall, I feel my conditions have greatly improved with Dr. Giaquinto's treatment over the last six weeks.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzxwuu b/data_all_eng_slimpj/shuffled/split2/finalzzxwuu new file mode 100644 index 0000000000000000000000000000000000000000..9d5eea1350707e95cb84cdca51f371e58780856e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzxwuu @@ -0,0 +1,5 @@ +{"text":"Cockle Creek, on Tasmania's south-east coast, is the most southerly point you can drive in Australia. 
It's also on the edge of Tasmania's Southwest National Park and the Tasmanian World Heritage Wilderness Area.\nThis tiny seaside settlement of a few shacks is ideal for a summer swim, picnic or camping. It's also the beginning \u2013 or the end \u2013 of the South Coast Track, one of Tasmania's great bushwalks \u2013 and the furthest south you can drive in Australia.\nA stroll along the beach will reveal the beauty of this remote place. Continue to Fishers Point Navigation Light and Pilot Station Ruins and take the well-marked track to South East Cape for stunning cliff top views of Maatsuyker Island.\nCockle Creek is a 2-hr drive (148 km) south of Hobart via Geeveston.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A different group of single-premium policies is based on traditional policies. They are aimed at generating the highest possible guaranteed net return, both over short periods of two to five years and for longer terms up to 15 years. There are two types, those aimed at producing a guaranteed capital sum at the end of the term and those aimed at generating a high net income over the period with return of the original capital at the end.\nPartly because of their ability to offset their expenses against taxable income, life insurance companies can often generate a very attractive return on this type of contract. For example, in late 2017 when long-term interest rates were about 13%, several companies were offering net returns of up to 9% p.a. over periods from one to three years.\nThe growth bond is based either on a non-profit endowment policy or on a deferred annuity. In both cases, the gain on the original investment is subject to tax at the higher income tax rates (there is no capital gains tax) but in the case of the deferred annuity basic-rate income tax is also chargeable on the gain. The non-profit endowment is therefore the more popular type of contract. 
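The guaranteed net returns discussed above compound like any fixed-rate lump sum. A minimal sketch, assuming an illustrative £1,000 single premium at the 9% p.a. net figure quoted above (the stake and the five-year term are chosen purely for illustration):

```python
def compound_value(principal, annual_rate, years):
    """Value of a lump sum compounding at a fixed guaranteed net annual rate."""
    return principal * (1 + annual_rate) ** years

# An illustrative £1,000 single premium at a guaranteed 9% net p.a. over five years:
final = compound_value(1000, 0.09, 5)
print(f"£{final:.2f}")  # £1538.62
```

Because the rate is guaranteed for the whole term, this end value is known at the outset, which is the feature the article contrasts with deposit accounts whose rates are only held for months.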
A growth bond guaranteeing 9% net would produce \u00a31,538 from a \u00a31,000 investment over five years. For the basic-rate taxpayer not subject to further tax this could be attractive. The pattern of interest rates is always shifting, however, so that it is not possible to predict how attractive the rates companies are offering at different times will be compared with the alternatives. The point is that the company is guaranteeing a rate over a period of years, whereas building societies and most other deposit-taking institutions do not guarantee a rate for more than a few months or a year at the most, except in special cases.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Thread: How Involved In Current Neo-Fascist President of Brazil Was the CIA?!\nBrazil's fascist leader Bolsonaro immediately visited CIA headquarters on his first official trip to the US.\nMoro's Lava Jato, which was based on his own 2004 study of Italy's Mani Pulite (Clean Hands), and whose concept and structure were outlined in a 2009 State Department cable, not only created two false pretexts for one President's removal (corruption and economy), it prevented the election of another (Lula), and also worked during the 2018 election to attack the reputation of his replacement, Fernando Haddad, who was later cleared of the accusations.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"E-mail, it seems, has become yesterday's news. 
While checking e-mail used to be our primary reason for going online, we now devote more of our online time to surfing social networks, according to new numbers from Nielsen.\nWe spend 23 percent of our online time surfing around social networks like Facebook, while we only spend 8.3 percent of that time checking e-mail.\nThat news isn't necessarily surprising to me: I can spend 10 minutes on Facebook and get caught up on the activities of 25 different people, while spending that same 10 minutes on e-mail would allow me to delete a lot of useless junk mail and maybe read an actual message or two.\nBut, if I really stop to think about it, Facebook isn't really keeping me better connected to most people. Here are five reasons why.\nSure, some of the interactions I have on Facebook are what I would call quality. This morning alone, I saw new pictures of my niece and nephew and made plans to visit my sister-in-law. But I also spent some time looking at pictures of someone I barely knew in high school. We haven't seen each other since graduation, and \u2014 let's face it \u2014 we weren't even that close back then. But we're Facebook friends and she posted new family pictures, so I took a look. I also saw a daily update on wedding plans posted by another Facebook friend; this one a casual acquaintance who I might not even recognize if I ever did see her in person. But I now know that she jogged five miles this morning in hopes of fitting into her wedding dress when her big day arrives this fall.\nThat brings me to my second point about Facebook: much of what we do on the site is passive, especially in terms of communicating. I can read status updates and look at pictures \u2014 even those posted by close friends \u2014 and feel as though I am in touch with them. I know, for example, that one of my good friends is sore from her yoga class last night, and I know that my niece and nephew had a great time at an amusement park. 
But I never asked my friend about her yoga class specifically, or talked to my brother about his trip to New Hampshire, and chances are, I never will. Facebook lets me take in a lot of information on the surface, but wouldn't I be a better friend (and a better aunt) if I actually spent that time communicating directly with people instead?\nNielsen is exactly right: the more time I spend on Facebook, the less time I spend on e-mail. And while it may seem that the whole world is on Facebook, that's not exactly true. In my closest circle of friends, there are two or three people who refuse to join.\nAnd those people often find themselves excluded from conversations shared by the rest of us. Everyone who is on Facebook will start talking about information or photos we've seen posted on the site, and will talk about them as if they were common knowledge. Which they are to us\u2026but not to the people who aren't on the site. So Facebook has created something of a social divide between the people who are on there, and those who aren't.\nWe all know that Facebook has had more than its share of privacy problems, and I'm not interested in debating them here. Everyone takes a different tactic toward handling them: some people (like my friends mentioned above) refuse to join the site, while others have abandoned ship. I know that Facebook offers some granular controls that allow me to adjust who sees what info I post and when.\nAnd though I have adjusted my privacy settings, I still treat Facebook as if everything I put up there is for public consumption. I never post anything \u2014 a status update or a photo or a link \u2014 that I wouldn't want my boss or a prospective employer to see. That means that my work contacts and closest friends get the same treatment, and the same semi-sanitized look at my life. 
Over e-mail, of course, this isn't the case: I can share real opinions and communicate more honestly, as I would face-to-face.\nNielsen's study tracks the time we spend on Facebook instead of on e-mail or surfing news Web sites. What it doesn't track is the time we spend on Facebook instead of talking with people face-to-face.\nWhile the time we spend on social networks can help us keep track of friends who live far away, it can also detract from the time we should actually be spending with people who live nearby \u2026 and in some cases, in the same house. My husband is a new convert to Facebook and we haven't become the couple who communicates through Facebook posts \u2026 at least not yet! But I can see how it would be an easy trap to stumble into, and it's one that I'll work to avoid.\nI'm not completely down on Facebook. I do often feel like the 25 minutes I spend on Facebook are more fruitful than an hour spent on e-mail, and I love the ability to stay connected to friends and family who live far away. But I'm not fooling myself into thinking that time spent on Facebook is quality communication time.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Welcome back! It's been more than a month and a half since my last post on the Macbook Pro Retina panel \u2013 damn my \"real job\" and the time it consumes. But we're here now, so let's dive into things.\nIn the previous two posts on this subject, I have talked about two boards that I have been developing for the 15.4\u2033 Macbook Pro Retina display assembly: a breakout for the FaceTime camera, and the main display controller \/ backlight driver board. Well, in the month-and-a-half that has elapsed, both boards have been received and built. Let's start by taking a look at the former.\nWhen I went to order these boards, I did so immediately after the cutoff for the current OSH Park 4 layer order, so the estimated delivery time was three or four weeks. 
Unhappy and impatient, I held off and instead ordered the board as 2-layer for more instant gratification. The two inner layers are only GND, so this is possible without too much trouble. It throws off the impedance of the USB traces probably significantly, but as the entire trace length is a half inch or so, I figured it'd be close enough for at least temporary use.\nSo the boards arrived a week and a half later. I immediately noticed something interesting and bothersome. Board fab houses typically require a gap between the edge of any copper and the routed board edge. If you are making a small board and including mounting pads for screws, the pads may consume a large percentage of the total board area \u2013 in fact, the pads may well drive the overall dimensions of the board, and larger dimensions mean greater board cost. I wanted this board to screw mount, but even the 2-56 screws I designed for needed quite significant pads. I decided it would be nice to trim the amount of room wasted on mount pads by clipping the pads at the board edge, allowing the screw to extend into the no-copper area at the edges of the board \u2013 it really only drops a few cents from the board in this case, but it makes me feel better.\nMount pads look OK here.\nWhen I poured the rest of the GND polygon, apparently Altium attempted to connect to the inner Pad, and generated Polygon Cutouts surrounding it, which happened also to cut the Multilayer polygon. It didn't show in the PCB editor, although it did in the Gerbers \u2013 but I was in too much of a hurry when ordering to notice. Oops. But so be it, it's only the mount pads.\nI then assembled the board, and with bated breath, plugged it into a USB port. And Hooray! The Device Connected sound sounded, and the camera showed up in device manager. But, strangely, the power LED was orange. I specified a green LED, I thought I placed a green LED, but it was bright orange. I checked the package to be sure but it was correct. 
It was then that I discovered a tiny short, and simultaneously learned that if you apply 5V directly across a 2.1V green LED, it becomes an orange LED. Neato! It also gets real real hot. But miraculously, after I corrected the short, which had caused the full USB voltage to appear across the LED, it returned to green and operated as normal. So I guess I can say, these OSRAM CHIPLED parts seem to be pretty hardy! I unfortunately don't have any photos, but these parts are cheap, so buy some and experiment for yourself. Science!\nNow, the information you've all been waiting for. As I said on the previous post on this subject, I quickly threw together this board to get it in before the following OSH Park cutoff. As such it's not perfect, but it is fairly functional (or so it seems from the minimal amount of testing I've done so far).\nThere's a lot of new stuff going on on this board, as compared to my iPad board (on which this one is loosely based). Of course the panel output connector is different, because the panel is different. Also, though, the barrel jack has been removed, and a 2-pin shrouded connector has been added in its place. Barrel jacks, I finally came to realize, are just too damn big and expensive for a little cheap board like this. The new solution is less than a dollar in single quantities for plug, receptacle and terminals, whereas the board mount barrel receptacle was more than a dollar by itself, not counting wire mount plug. Plus the barrel jack tended to be the tallest component on the board, driving enclosure dimensions. It takes a bit more work to implement, but I still think this was a good change.\nThe dc-dc converters are new as well. The board contains two dc-dc buck regulators, both based on the AOZ1281 from Alpha & Omega Semiconductor. This part was chosen due to very low cost \u2013 about $0.90 in single quantities \u2013 as well as ample output current (1.8A) and acceptable input voltage range (3-26V). 
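The orange-LED incident above is ordinary Ohm's law: with no series resistance, nothing limits the current once the supply exceeds the LED's forward voltage. A minimal sizing sketch, where the 20 mA target and the 150 Ω standard value are my own illustrative choices rather than anything from this board:

```python
def led_series_resistor(v_supply, v_forward, i_target):
    """Series resistance needed to hold an LED at a target current (Ohm's law)."""
    return (v_supply - v_forward) / i_target

# 5 V rail, 2.1 V green LED, 20 mA target current:
r = led_series_resistor(5.0, 2.1, 0.020)
print(round(r))  # 145 ohms -> nearest standard E24 value is 150 ohms

# Current actually drawn with the 150-ohm standard value:
i_actual = (5.0 - 2.1) / 150
print(round(i_actual * 1000, 1))  # 19.3 mA
```

With the short in place, the resistor was effectively bypassed and the diode's own small dynamic resistance was the only limit, hence the heat and the colour shift.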
The board implements this part as one 3.3V\/500mA converter and one 3.8V\/1A converter, the former for processor, indicators and other functions, and the latter for the panel itself, which was investigated and found to run well on 3.8V in a prior post. It's probably a 5V panel, in retrospect, and I may someday become ambitious enough to test it at this higher voltage, but for now 3.8V works.\nThe board shares the same Freescale MKL25Z128VFM4 processor as the iPad board, again due to cost to performance ratio of the $3 ARM Cortex-M0+ core. Sure, I could drop $0.50 and put in a MSP430, but I like the potential for other functions that the more powerful processor enables. I still need to learn how to program the damn things, but I digress. The other big IC on the board, the backlight driver, has completely changed compared to the iPad version. Whereas the iPad panel has 12 backlight channels at 20V apiece, the Macbook has only six channels but at some 52V. The LT3754 used on the iPad board only does 45V, so a new backlight driver is needed. Of the (relatively few) integrated boost converter \/ LED driver ICs, the best choice for this application seemed to be the Freescale MC34844A, which happily will push out 60V with appropriate configuration.\nIn the interest of keeping this post from being more than fifteen or twenty pages, I'll leave the rest of the description to the schematic, but if anything is unclear please feel free to drop me a comment or email.\nAgain, this board uses 2-56 mounting screws, and again, I opted to include clipped pads. And again the boards showed up with voids in the pads. But this time, inexplicably, some of the pads did not have voids. They were all created the same way so I can't quite explain that one yet. But one thing is for sure, I'll certainly pay more attention from here on out. I built the pads for one of my boards at work with clipped edges as well, but this time used top and bottom polygons instead of one multilayer one. 
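The backlight-driver swap described above reduces to a headroom check: the required LED string voltage must sit below the driver's maximum output. A small sketch using the figures from the post (roughly 52 V strings; a 45 V limit for the LT3754 and 60 V for the MC34844A); the optional safety margin parameter is my own addition:

```python
def driver_can_drive(string_voltage, driver_max_voltage, margin=0.0):
    """True if a boost LED driver can reach the required string voltage plus margin."""
    return string_voltage * (1 + margin) <= driver_max_voltage

STRING_V = 52  # Macbook panel: six backlight channels at roughly 52 V apiece

print(driver_can_drive(STRING_V, 45))  # False -> LT3754 (45 V max) is out
print(driver_can_drive(STRING_V, 60))  # True  -> MC34844A (60 V max) fits
```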
In this way the pads are slightly smaller in the internal layers, but on the other hand the polygons connect without issue, so that's a small price to pay.\nTop. Ignore the soldered-on power wires.\nI'm not 100% happy with it \u2013 in particular, I suspect that my overzealous attempt to match the lengths of the DisplayPort lanes may have actually had an adverse effect on signal integrity as the traces have to bend a whole lot and come in much closer range of each other than is really recommended. But I can say that the board works like this, and I can't keep from releasing something forever because I'm not 100% happy with it (or so my boss always says \u2013 \"You can always fix it in Rev B\"). So here we are.\nWhew. This has been a grueling post to prepare. Enough for now. Er, except for one last picture!\nThis entry was posted in Electronics on 2013-09-29 by mike.\nGreat Work on the Facetime Camera Breakout Board!!! My question is what application would you suggest I use to open the gerber and layer files with or alternately, how can I go about getting a Facetime HD Camera breakout board to test and try out? Your work has inspired me to educate myself further in PCB and Electronics design \u2013 I can't say thanks enough.\nNow, how can you go about getting a board? Well, the easiest way is to go to OSH Park (http:\/\/oshpark.com\/), upload the zip file of Gerbers and order a few \u2013 the Gerbers should already be set up in a format that OSH Park likes. They're real cheap \u2013 3 boards for $5 shipped \u2013 and the components aren't pricey either \u2013 or you could even solder on a cut USB cable which we all have hundreds of around, and no-load all the rest of the components, and it'd probably still work (er, so long as you short across the transient suppressor IC). Now, getting the camera connector is going to be the most difficult part, because I honestly don't know where to get it or even what the part number might be. 
I got mine from a Macbook Air camera processor baseboard, for which I paid about $6 (not terrible for qty 1 of an exotic connector). You have to be cautious of exactly which assembly you buy, though, because Apple has used a whole bunch of different connectors for their camera assemblies over the years. I looked at several boards and identified one as having the correct connector from the seller's photo before I bought it \u2013 you can see photos of the board I bought in this post: http:\/\/mikesmods.com\/mm-wp\/?p=340 .\nNow if your question is \"can I buy an assembled board from you\"\u2026 that one's probably \"no\", simply because I don't like having to rely on ripping connectors off of eBay camera modules to build these things. If I ever find a source for new connectors, though, I'll happily rethink that.\nYou're very welcome for the inspiration I have been toying with electronics for practically as long as I can remember, and it continues to be a field where the more I learn, the more I find I don't know and want to learn. If you need any help as you delve into this stuff, please don't hesitate to drop me a comment or an email.\nHello! Do you plan to sell this boards? I'd like to buy one for my damaged laptop assembly if possible. Thank's!\nTo tell you the truth, there's not been a whole lot of demand for them yet. The initial costs I have to incur to sell these (bulk component purchases to make them affordable, stencil since I refuse to manually paste any more of the damn things) have to be offset by projected sales of the board, and right now that's not happening. Plus I've got a plan to switch out some components once I get ahold of a healthy enough source of the alternative, so I'd want to do that before I invest in a stencil or production-quantity boards.\nIn summary, unless you've got a few friends, I don't have plans in the short-term. But keep an eye out, because it may happen sometime in the future.\nIf your version would cost half of that incl. 
shipping I'd be happy to order one even without a camera breakout.\nHmm. Actually, Daniel Rosznyo helped me out a bit as I worked through the reverse engineering process, as can be seen from some of the early posts on the subject.\nThe BOM cost of $150 does seem a bit steep, though, considering I calculate the BOM cost of my boards somewhere around the $60 mark, not counting ancillary costs and manufacturing fees.\nThe problem is and has always been that it's only economical to produce these boards in batches larger than one. Whereas on the iPad unit it might take me two hours to build, test and package an assembly by hand (since I don't have stencils), this one easily took six or more when all was said and done, not counting testing time. There's no price I can reasonably charge to make it worth my while to spend six or eight hours pushing solder paste around with a toothpick. This is of course not a problem if I have a solder stencil, but for the very small pitch of some of the pads on these boards, I can't use a cut-rate stencil fab and need to purchase a professional framed stencil, which usually costs upwards of $100 and can approach $200. So I need to sell \"a few\" boards in order for the cost to be properly amortized across each, to avoid having to charge each person an arm and a leg. Right now that number is around ten or fifteen, and I don't have that many interested parties.\nWould OSH Stencils be an option for this?\nFeel free to give us a try, Mike; I think you'll be surprised at the precision we can attain. We have a very different process for our Kapton stencils compared to what you may be used to. Shoot me an e-mail if you have any questions.\nOne of our customers recently posted this, for reference: http:\/\/imgur.com\/dIHKSOK,KQMr1QV#0 These are .2mm apertures, and .5mm spacing between them. If you click the second image (a competitor product) you can see a very clear difference. We do a lot of ultra fine pitch work, and have had great results.
If this isn't sufficient, hang tight, we're going to announce some new material options by the end of the year that may suit your needs if our Kapton solutions aren't a fit.\nHey, thanks for the input. Those apertures do look very sharp, but they appear to be rounded as if they are made with a single laser kerf. Is 0.2mm the smallest that can be cut?\nFor comparison's sake, my Simple iPad board uses 0.28\u00d72.40mm apertures on 0.50mm pitch (web thickness 0.22mm) and my Macbook board uses 0.20\u00d71.20mm apertures on 0.40mm pitch (web thickness 0.20mm). It looks like you can cut the apertures, but what I've been told by other vendors is that it is unreliable to cut with such small webs since the laser tends to melt the material. Is that not a factor with your process?\nI guess what I really want, not just from OSH Stencils but from all stencil vendors, is minimum aperture dimensions and minimum recommended web thickness for the material and process used. It's so hard to get that kind of information out of vendors \u2013 I understand it's hard to give an accurate number due to some dependence on the web length and surrounding geometry, but it's so hard to get a good feel for what's feasible and what isn't!\nThe rounding is the user's design, not an attribute of the laser. We cut exactly what is sent to us. This customer just happened to design his board with rounded edges on his ICs and all his pads.\nI understand the frustration with getting exact specs out of vendors, and that's largely attributed to the inconsistent nature of how they handle their cutting. I don't have exact specs to release either, but that's because I haven't sat down and made the effort to determine the true capabilities. That said, we haven't had a single design to date (knock on virtual wood) that we haven't been able to cut to meet a customer's needs.\nIf you want to give our stencils a try, feel free.
If you are unhappy with the results, we'll make it right to your satisfaction.\nCan the camera breakout board be used with almost all old and new Macbook displays, or just the new ones like the 2011-2013 models?\nElectrically, the camera breakout will work for any of the recent Macbook cameras, since they have had a USB interface for at least the last few generations. However, Apple has a bad habit of changing connectors from one model to the next, so it's quite likely that the board may be mechanically incompatible. Is there a particular camera model you're interested in? I have some docs on Apple hardware that might give us a better idea.\nSorry about the delay. I will try to look into this tonight.\nI couldn't find any of the newest models or the 2012 models, even though I thought I saw one last week. I just can't seem to find it anymore.\nIf you would ever consider taking orders for the board, count me in!\nPlease do. I've a perfectly good panel lying around, gathering dust, but lack the skills and tools to assemble the board myself. Those QFN packages are just beyond me.\nThere is one other option that comes to my mind: I don't suppose that you take requests, but would it be possible for you to put together an OSH-Park-compliant design of a board with only the DP pass-through in place and inputs for an external power supply (with optionally some feedback to the PS that there is in fact something connected to the DP)? That way I could try going crazy with some less portable, but doable PS design. With my luck, I'm afraid that I would struggle with the design for quite a few iterations before I get something done without breaking any rules. Not to mention that the wait alone would probably kill me. I hope I could manage soldering the sockets in place; if not, for the first time I would have 3 tries at the price of 1.\nI've had several requests for a simpler board for the Macbook recently.
It might be time to consider designing one of those\u2026 I'll add it to my to-do list, but I can't tell you when I might get around to it \u2013 there are quite a few projects in line ahead of it at the moment.\nNo pressure. Whatever you come up with, be it a simpler board or one that I could order from you directly, I'll take it. Once again: great work, and I see that saying that you should keep it up is really pointless, because you are just like that Energizer rabbit.\nOn \"not baking\"\u2026 That's tough, because the Macbook mating connectors are just such fine pitch (0.4mm) that it's difficult to hand-solder them. It can be done, but you'll need a magnifying lamp, tiny solder, a very fine iron tip and a steady hand. I find it easier to reflow process them, because if you screw up paste application, you can just wipe it off and do it again, no permanent damage. But if you hand-solder a bridge in between those pins, getting it out can be a very frustrating process.\nNice work! Count me in as a very interested customer for a board or two.\nWhy don't you start a Kickstarter for this? You would definitely receive the funding you need to get it started!\nEh, Kickstarter. The problem with Kickstarter is that it pushes this stuff from 'hobby' to 'second job'. Well-run Kickstarter campaigns need set schedules, regular updates, timely communication. I can't guarantee some of those things without devoting more time per day to the project than I really want to (or can).\nIt's never been a problem of funding\u2026 money just can't buy free time. Well, it could if I quit my day job, but I don't think I want to gamble on that!\nPlease, let me know if you decide to make some PCBs for sale.\nI may be interested in purchasing one.\nActually, I would be interested in interfacing a 13\" retina display (LP133WQ1-SJA1).\nCould I use your board to achieve the purpose?
I think that the two panels differ only in the LED load.\nI haven't looked into working with the 13\u2033 yet, but they are quite similar so it should be trivial to do so. But don't quote me on that, I haven't actually looked into the nitty-gritty of it all \u2013 and Apple loves to kill cross-compatibility by changing pinouts between products. I think I've got a specification document on that panel, so let me dig into it a little deeper.\nThank you for these posts and for inspiring the reverse engineering of this. I was just thinking about such an adapter board, but I see you and a couple others have already created them. I've enjoyed the read through your stories.\nDo you know, would the adapter board work on a MacBook Pro Retina to drive an external display at 1920*1200? The MBPr spec states external displays at up to 2560 by 1600. Being that the external display is physically 2880 * 1800, I wasn't sure if this would be a problem. Boy, I've been dreaming of small, high resolution external displays, and I may finally have a solution.\nI actually don't know\u2026 I don't have a MacBook to test it with. The board itself doesn't mangle the signal at all, it's just a passthrough \u2013 and as such should support a practically infinite resolution. But whether the MacBook GPU hardware can handle it, I don't know, sorry!\nThank you Mike for the response. I see how the board is just passing the signals through. So perhaps my question is more generic and along the lines of what happens when a high resolution display, one with more hardware pixels than can be output by the laptop, is connected to the laptop. I imagine it will work at the lower resolutions, but it has been a while since I've done this. On to googling; this is a basic enough question. Thanks for helping rephrase the question, and for the inspiration from your work.\nI found this HiDPI LG panel: LP129QE1-SPA1.\nIt is used on the 'chromebook' device.
It is a 12.9\" display with a resolution of 2560\u00d71700.\nAnd it is very cheap compared to the 13\" Macbook Retina.\nI would order one or more of them today if you told me you could build drivers (and ship them to Germany) in less than a month. Thank you for the efforts, from all of us who don't understand anything of PWM, eDP, LVDS, HDCP, DPCP and those other fancy words I have now read until 7am O.o The topic is just too exciting\u2026.\nShouldn't be all too hard. If you took a PC motherboard like the Asus Q87T, then all you would need on an intermediate board would be a Freescale MC34844A, and you could include a connector for both the Pixel LCD and the 15\u2033 Retina. The motherboard produces 12v or 19v DC for the MC34844A and even provides a PWM signal directly from the ACPI (OS). I'm in the process of designing some boards.\nI've actually already built a controller for the Chromebook Pixel panel. It looks great! I'm going to have to document it at some point..\nPlease! No one has documented how yet, but plenty have tried and given up. You'd be doing everyone a big service.\nI work mostly with displaying tons of text (don't really care for font quality as far as it's readable: source code mostly) and I think that Apple just hit the right spot with their 2880\u00d71800@15.6\u2033. Anything more dense makes me just go pixel-blind and I start to scale the fonts. It appears to me that the next usable and worthwhile solution would be some 4k panel in the 17-19\u2033 realm. Anything above (DPI-wise) just seems a plain waste of (pixel)ammo.\nSaying all this: hey Mike, get your act together and start taking orders for the MBPR board.\nPentile on a notebook? Ick. Hey, that's not the panel from the Samsung Ativ Book 9, is it? I was really excited to hear about that when it was announced, and I haven't heard anything since.\nYep. I'm afraid that it's all over the place: Samsung, Lenovo, Eurocom, Clevo all have models built around this panel.
One can hope that it'll drive the cost down, but then again I'm not quite sure it's worth it. I mean: it's ok (works as advertised @3200\u00d71800) when you display monochrome text on it. But things go south really quickly when you start to employ something as cutting edge as syntax highlighting, for instance.\nEither way, the DPI is a bit too much for me to use it comfortably. Maybe for viewing images it would be fine (lower spatial frequencies), but for (colored or not) text it's just unusable.\nAs for the programming part: if it's got a C compiler I could give it a try. I'm afraid that I don't know much about ARM assembly, but I did a fair amount of 8051 and x86 asm programming back in the day. But it's just talking the talk, as I'm not capable of building the board on my own and I can't even start to imagine how to remote debug this sort of stuff.\nBeing like da Vinci is not as easy as it used to be, I suppose.\nHi Mike, any of the Retina boards for sale? Would need one.\nNot yet\u2026 but there has been an awful lot of interest lately. Maybe a link got posted somewhere. In any case, it's climbing higher on my priority list by the day, but it's not quite at the top yet. You'll have to be patient a little while longer, sorry!\nHi Mike, it's very interesting to read about your work. I'm also interested in 3 or 4 of the MBP Retina boards if you should ever sell them.\nThanks!
I do think I'll eventually end up making some boards to sell, but probably not in the immediate future.\nHey \u2013 Did you ever try setting the refresh rate above 60Hz?\nOK, I am going to go out on a limb here: would anyone know where I can get a controller board for the LP173WF2-TPB2? I have a broken M17xR3 and would like to make the LCD an external monitor, and no one, and I mean no one, has the controller board for this model, which I find really interesting being that it's a couple of years old already. Some people advertise that they have universal ones, but before I buy one I would like to make sure.\nI also want a small 120Hz screen for my gaming machine (a) for portability\/taking it to LAN parties and (b) because I have tunnel vision and don't really benefit from larger screens. I know Mike had started looking at the -TPA1 version of this panel a while back \u2013 he was going to use it with the MBP retina controller but had some trouble with the cabling. I wonder if he's had a chance to look at it since then.\nVery nice project. Please don't stop. You are very good at this. I absolutely admire people with such skills and joy in their hobby.\nDo you think it is possible to modify the circuits of a non-retina Macbook to run a retina panel?\nIt depends. A lot of the older Macbook non-Retinas used LVDS instead of DisplayPort to drive their displays; if this is the case, then it is non-trivial or impossible (depending on configuration) to convert one to the other. I'd have to do a little digging to give you a more concrete answer.\nHi Mike, a bit offtopic: do you know if the keyboard\/trackpad from a Macbook Pro retina can be modded to USB?\n@Kristian! Yay! Trackpads and keyboards! I'm working on that right now, I have some breakout boards designed. I can send you what I have if you're interested.\nI've already ordered almost all components.
However, I am not sure about the crystal\/oscillator to buy.\nI'm missing the load capacitance.\nI know this is an old thread, but any chance you could add an outline to the \"STENCIL\" files quick??\nOSH Stencils prefers an outline for the stencils. Not a requirement, but I wanted to give them a go and am not sure of the consequences of not having an outline . . .\nThanks and best of luck on the house and lab!\nThis is an awesome and inspiring project! I've learned much, and have been inspired to learn more.\nI don't quite think I'm able to build my own yet, but I'm not averse to trying!\nI want a 16:10 monitor for my DIY laptop; having a modern beautiful display, with a cam in it, is gonna be awesome.\nIf you do happen to sell a controller board, hit me up.\nAfter my screen didn't want to light up anymore, I had my laptop checked in an official Apple repair center.\nSo now I have this new LCD, and if your board for the retina is compatible and available I'd like to order one.\nIf not, I might experiment with assembling the board, enthused by your work and posts.\nWhat I'd really like to do is use an MBP Retina Display as an external monitor via mDP\/DP. HDMI is a bonus.\nBesides the board you designed (I can send those files to a fab), what other parts would I need?\nI do not mind scrounging Fleabay for the pieces and soldering them on the board.\nIs your design capable of this? PS: I was wondering why there is talk of a USB cable (for power??) and a camera (what does it have to do with DP)?\nSorry, I was reading and just got lost. I wouldn't mind the effort in assembling\/soldering the pieces together as long as I can get clear on what pieces are needed \u2013 an item list.\nI just downloaded the last 3 linked files.
Would that be enough to save?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaafpi b/data_all_eng_slimpj/shuffled/split2/finalzzzaafpi new file mode 100644 index 0000000000000000000000000000000000000000..31400f64ebffb1cd047bb588f13df812608b9f9d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaafpi @@ -0,0 +1,5 @@ +{"text":"Sear the salmon pieces in the olive oil in a very hot pan; remove and set on a large plate to cool. Mix the crab, horseradish, basil, a pinch of salt, and the whipped egg whites. Place 1\/6 of this mixture on each salmon piece. Refrigerate.\nPlace all the vegetables in a non-metallic container. Bring the water, vinegar, and spices to a boil, then immediately strain the hot pickling liquid over the vegetables and place in the refrigerator to cool.\nWarm the salmon pieces at room temperature for 20 minutes. Bake at 190 degrees C (375 degrees F) for 10 minutes. Remove the pickled vegetables from the refrigerator, rinse under warm water, then saute them in butter for 2 minutes. 
Serve the salmon on a bed of sauteed vegetables.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Dartford finished their pre-season campaign with a defeat on Saturday after losing to Norwich City at Princes Park.\nDespite a strong start, Tony Burman's side fell behind on 33 minutes when a long pass eluded Darts' keeper Deren Ibrahim allowing Carlton Morris to tap in for the Canaries.\nThe home side continued to play positively into the second half with winger Luke Wanadio proving a nuisance for the Championship side's defence.\nIt was Wanadio who earned Dartford a penalty on 48 minutes but the Darts failed to capitalise when Andy Pugh's spot-kick was saved by Aston Oxborough.\nNorwich were resilient and sealed the win on 73 minutes when Sergi Canos' curled effort with the outside of his boot nestled in to the top corner.\nAlthough his side came away with a defeat, there were plenty of positives for Burman.\nHe said: \"I think we have played some good football and made some good chances. We needed to score the penalty but overall it was a great workout for us.\n\"It is the last time to say that the result does not really matter because from now on it certainly does matter.\n\"The guys have worked really hard in pre-season and I am pleased with the level of fitness we have got.\"\nThe Dartford manager also praised Wanadio for his performance during the friendly: \"He has come in and he works hard and I think he will be a good asset for the squad.\"\nOne of the frustrations for Burman in the close season has been trying to sign a new striker, with Saturday's friendly allowing him to experiment with using both Danny Harris and Ryan Hayes as forwards in case a transfer does not go through.\n\"We had to have a look at things,\" the boss added.\n\"We have signed two players with pace and that gives us an option. 
Ryan gives us a lot of quality on the ball and it might be that he plays out wide or he comes in behind the front one or two.\n\"I am pleased with where we are at the moment. We are still missing that elusive big striker but we are still talking to people and maybe that will happen before Saturday.\"","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Spending money is subjective. What I think is a waste of money may be prized by someone else as a necessity or an investment. Here's what I save and splurge on.\nTransportation \u2013 I live in L.A. and don't drive.\nMy savings add up to $3,624 while my splurges are only $2,540. My biggest expense is my cell phone bill. I don't feel as guilty about this expense as I use it mostly for work and I'm making changes to my plan to lower the monthly cost when I upgrade.\nAre you spending or saving on similar items as I am? Find out what Niconail can and can't live without.\nVotes are in! Niconail goes on spending moratorium!\nWe would definitely drop DirecTV but we do not get any local channel reception, and to upgrade to high speed internet will cost us the same as our satellite, $65 a month. Since we don't go to movies or out much (it is 30 miles one way to get to town), we figure it is good entertainment.\nI myself canceled my cable about 4 years ago. I would buy movies or series to watch on my own time when i could..\nI realized i was spending way more even though I was getting the 5$ movies.\nNow i have netflix and spend about 18 a month, but get movies streamed to my tv and movies on cd. Again all on my time. The great thing is my daughter can watch movies on her computer at college from using my password.\nWhy so much. I also dont go to the movies as often as before. we all know how expensive that is!\nAs a woman of a certain age, I have fewer needs\/wants than I did years ago. I do take advantage of senior rates at the movies, where the Laemmle chain offers matinees for $4.50 on Wednesdays. 
I usually go with some friends, and we \"do\" lunch before the movie.\nI'm an inveterate coupon clipper and user, so I save a lot of $$$ on food. My one food splurge is the divine strawberries from Tapia Bros., as well as their tomatoes when they're in season. It's worth the extra cost for the unbeatable flavor and quality.\nI don't buy into cable either, it is a waste of money. I do buy a Sunday paper a few times a month. Other than that, we have a lot of activities, such as the outdoor mall across the street that has live music for free two nights a week in the summer, and we do free things that do not cost anything, such as going to the library functions.\nOh, and we do go to the farmers market and spend a whole day and spend about $5. We have fun just looking at stuff and buying a jar of jam or apple butter to take home!\nI notice you often say \"as a woman of a certain age\" about yourself. What are you saying? I am a woman of a certain age too. Every woman is of a certain age. I am just wondering what you mean.\nWithout trying to undermine the importance of finding a balance on saving vs. splurging\u2013and making sure that one comes out on the right side of it\u2013I am appalled at how unclear the author is regarding her math. Her \"totals\" for save vs. spend don't match up at all to her parentheticals in each entry. Either this is a poorly-edited version of a longer article, or just a poorly-written article to begin with.\n@Robin: Good catch! Your comment made me get hold of a calculator. Her splurges were mathematically correct, but her savings were way off! I'm guessing it was some strange kind of typo. I assume the \"lead\" bloggers (Julia, Yazmin, et al.) do their own editing. (A former teacher, I notice quite a few grammatical errors.) The \"big lesson\" here is that the rest of us apparently just accepted what she said without question.
Good for you for doing the math!\n@Lotta ~ The expression, \"a woman of a certain age,\" is a euphemistic way of saying a woman past the age of 65. It's generally considered a humorous and polite way of avoiding saying old, senior, elderly, etc., which many women dislike. Personally, I don't mind being referred to in any of those ways, as I'm no longer young. My youngest son is 44, and I have grandsons well into their 20s. Hope this clears it up and answers your question.\nI figured you probably meant that. Now I have a new question from your answer. If you don't mind those other words why not just use one of them? Saying \"a woman of a certain age\" sounds effected to me altho YOU sound very nice.\n@Yazmin ~ I am disappointed not to hear back from you regarding Robin's (and, to a lesser degree, my) comments. I was sure we would.\n@Diane Thanks so much for being a loyal reader and active commenter. Like Robin pointed out, there was an error in the post. It has since been updated.\n@Robin If it's still not clear, I'd love to answer any questions you may have.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Japan has launched a departure tax for Japanese and foreigners leaving Japan, from January 7th.\nThe Japanese government will charge the new tax of 1,000 yen, or about nine dollars, to people leaving the country by air or sea. This is the first new national tax to be launched by Japan since a land value tax was introduced 27 years ago.\nTravelers are to pay the tax when buying tickets. 
But transit passengers who leave Japan within 24 hours and infants aged younger than two will be exempted from the taxation.\nThe government estimates that the revenue from the new tax will be about 55 million dollars, for the last three months of this fiscal year ending in March, and about 460 million dollars annually for fiscal 2019 and onward.\nThe government says it plans to use the revenue to boost the annual number of foreign tourists from the current 30 million to 40 million by 2020.\nThis year, the government plans to introduce facial recognition systems in airports to speed up immigration procedures. It also plans to improve multilingual explanations for foreign visitors to national parks and Japan's cultural assets.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"GiveUpAlready.com > Online Games > Official Kings of Chaos Forum > Catacombs > Age 5 > Barracks > Question..\nFriend of mine clicked my unique link but timed out on the number verification. I did not receive a soldier, nor can he click the link for another 24 hours.. Is this common? Thanks..\nNevermind, it just took awhile to process..\nno need to keep this thread open then?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaawem b/data_all_eng_slimpj/shuffled/split2/finalzzzaawem new file mode 100644 index 0000000000000000000000000000000000000000..c72f9a007ed9f490947b7d1189607c2abeaa43d5 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaawem @@ -0,0 +1,5 @@ +{"text":"1. My kids love sweet potatoes but were getting a little tired of sweet potato fries (usually I just cut them up into sticks and oven roast them with olive oil, salt and pepper).\n2. I decided to go lighter and vegetarian for dinner because over the weekend we hosted two BBQs and definitely had our share of meaty proteins.\nThe girls (piepie and boss) ate this sweet potato hash right up and didn't even flinch at the green spinach. 
It also packs easily for lunch the next day, delicious at room temperature or slightly cool. You could even spice it up by adding cumin, cayenne pepper, coriander, and\/or turmeric.\nPlace washed sweet potatoes in a pot and cover with cold water. Bring to a boil then turn down to medium low, parboil (partly cooking) for about 10 minutes. You want them to be tender but not falling apart.\nOnce cool enough to handle, roughly cut sweet potato into bite size chunks. (I like to leave the skin on for texture and fibre).\nIn a large skillet (best to use cast iron) \u2013 on medium high heat, add butter and cooking oil. (If you are using a non-stick pan, use medium heat). Once the butter melts, add the cooked sweet potato chunks and mash slightly with a large fork or potato masher. Lightly brown the sweet potatoes to make them hash-like.\nAdd in your chick peas and spinach \u2013 stirring to mix everything together \u2013 about 5 minutes \u2013 once the spinach wilts and the chick peas are warmed, season with salt and pepper to taste.\nAfter a weekend of eating out twice at dinner time (we ordered pizza with some friends one night and then had a decadent dinner at downtown Ottawa's Beckta), we were feeling a little in need of a healthy, lighter dinner. Everything in moderation, right? If we were totally clean eaters I suppose I should have left out the feta cheese \u2013 but this bite of saltiness just works so well against the honey glazed sweet potatoes and adds a creamy texture to the dish! You could sub in saut\u00e9ed olives instead (but my husband hates olives\u2026sad for me!). The fresh cranberries play a key role too, as their tart tanginess counteracts the sweet and salty of the sweet potatoes and feta. The green beans need to be just blanched so that they retain their crisp bite, adding a crunchy texture to the salad. You can serve this salad at room temperature (just crumble the feta last minute!) or when all the ingredients are warm. 
Either way, it's delicious! Even piepie (our 3 1\/2 year old) ate it up!\nNote: I didn't make a dressing for this salad \u2013 but if you find you need one, try lemon juice and olive oil, or balsamic vinegar and olive oil (about 3:1 ratio of vinegar:oil) to give it more zip and moisture. Also \u2013 if you're looking for more protein and nutrients, try topping this salad with salmon. We had some leftover pan-seared salmon that went really well with it!\nPrep everything \u2013 wash and clean green beans (take off the rough tips). Wash sweet potato thoroughly and peel if desired (I left the skins on). Rinse the cranberries. Cut the lemon in half. Chop up garlic roughly. Drain and rinse chick peas. Preheat oven to 425 F (400 F if you have convection). Line a baking sheet with parchment paper.\nCut the sweet potato into cubes (bite size) and toss on the baking sheet with the fresh cranberries and 1-2 tsp cooking oil \u2013 season lightly with salt and pepper. Place the baking sheet into your oven for about 15-20 minutes, until the sweet potato is tender and the cranberries pop.\nBlanch the green beans quickly. To blanch \u2013 bring a pot of water to a boil, add the green beans, wait about 1 minute, then immediately remove them from the water and cool them down by shocking them in cold water to stop the cooking process. This will maintain their crispness.\nCheck on your sweet potatoes \u2013 at this point, glaze with honey (if your honey is too thick, heat it up slightly so that it drizzles onto the sweet potatoes and cranberries). Use a wooden spoon to gently stir everything and return to the oven for the remaining time to bake.\nIn a large pan on medium-high heat, heat up the cooking oil and garlic, then add the chick peas and green beans, stirring occasionally until coated with the garlic oil (this won't take very long, just a few minutes) \u2013 season with salt and pepper as desired and remove from heat.
Transfer onto serving dish, squeeze lemon all over.\nRemove sweet potatoes and cranberries from the oven and layer them onto your green beans and chick peas. Crumble feta on top and enjoy immediately, or serve later at room temperature!\nI really shouldn't even call this a hummus sandwich because it is SO much more than that! This is one of my favourite go to lunches \u2013 it's quick, easy, super delicious and very satisfying. Plus it's healthy! And vegetarian, so it doesn't weigh you down for the rest of your work day! Some key ingredients are good bread, creamy avocado, slightly zingy hot peppers, and crunchy tangy pickles. Oh, and the sharp cheddar is key too. If you don't have time to fry up an egg that isn't a big deal \u2013 but if you can it sure adds some rich and flavourful bites to your sandwich. I used to make this all the time for my friend Kim whenever we got together for play dates. She was always surprised at how good it tasted because it seemed like I put a bunch of random things in it! A few weeks ago she recreated it as a build your own sandwich station at a baby shower. Brilliant way to easily serve a crowd!\nIf you are using eggs, fry them up until desired doneness (I like overeasy).\nPrep the rest of your ingredients and have them ready for sandwich assembly \u2013 wash spinach and pat dry, allow pickles and hot peppers to rest on a paper towel for a minute (this soaks up excess liquid), thinly slice cheese, cut avocado in half and remove pit.\nPlace toast on a plate and load them up! One side with hummus, then top with cheese and egg. On the other, spread avocado on, then load with spinach, pickles, and hot peppers. Gently bring the hummus side on top. Cut in half, and enjoy!\nHere's a meal that comes together pretty quickly but tastes like it's been stewing on the stove all day. The aromas of the spices warm your house without scaring away the kiddies (it's not spicy at all!). 
Make some basmati or plain rice to eat alongside this delicious hearty stew. My little ones (now 3 and 1) love sweet potatoes so this one was a winner in our household. You can actually serve this as baby food (mild warmth from ginger but not spicy! plus ginger has great benefits for the tummy) \u2013 just make sure to remove ginger pieces and gently smash the chick peas and sweet potatoes with the back of a spoon so that it is soft but still has a bit of texture.\nIt's a great stew that freezes well and is perfect to pull out on a later day for a quick lunch when you're in a crunch. We actually served leftovers to some friends the other night and they loved it so much I decided to post this recipe.\nWash and peel sweet potatoes and chop into bite size chunks.\nRemove skin from garlic and shallots, then chop finely. Remove skin from ginger and thickly slice.\nDrain and rinse chick peas with cold water.\nIn a large pot (preferably a dutch oven or heavy soup pot), heat up oil on medium.\nAdd shallots and ginger, saut\u00e9 until fragrant. Add garlic and spices. Be careful not to burn the spices and garlic, continue stirring and add a little bit of oil if it seems too dry. Once you smell the fragrant aromas, add chick peas and sweet potatoes, stirring to mix.\nImmediately add your 3 cups of liquid. Stir to combine, then close the lid. Turn the heat to high, then once it comes to a boil, turn it down to simmer for 45 min.\nWhen serving, be sure to remove the ginger pieces so that no one gets a big bite of ginger. Serve with chopped cilantro on top, alongside rice. Enjoy!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"deliver a nationwide business transformation programme across 8,300 stores.\nOur approach centred around bringing the organisation's strategy to life through its 250,000-strong workforce. To improve performance, the entire culture and ways of working needed to change. Our programme spanned three phases: discover, design and enable. 
During the initial discover phase we tested our hypotheses across two pilot regions: San Francisco and Dallas.\nWe mobilised teams through self-discovery diagnostics and storyboarding, defining the problem we had to solve together. From the customer's viewpoint, we analysed all aspects of field operations and support centre interactions \u2013 culture and leadership, performance management, operational efficiency, and customer behaviour. This culminated in an insight-rich story, owned by the whole business, driving the business case for change.\nThe design phase focused on getting engaged people doing the right things to delight customers and drive performance improvement. Store- and regional-level interventions focused on isolating and improving areas of opportunity identified during diagnostics. We combined tools and techniques such as problem solving, lean process mapping, RACI, plan-do-review and coaching and feedback with leadership coaching to create a self-sustaining culture.\nDuring the enable phase, we worked alongside internal change agents to empower the wider workforce to implement the solution design. We also implemented a visual performance management system centred around a balanced scorecard to give line of sight from the board room to the shop floor, and drive continuous improvement and ownership of performance.\nOur diagnostics uncovered the picture of a culture crying out for change. It was task orientated, with a command-and-control leadership style. A lack of collaboration, ownership and customer focus was compounded by overly complex processes, unclear performance management systems driving metrics, not behaviours, and conflicting priorities.\nThe case for change was clear at all levels in the organisation.\nWith a team of just 16 consultants and 88 client change leaders, we actively engaged with more than 100,000 store and regional staff. 
We introduced them to the concept of 'freedom in a framework', transferring skills and building capability.\nThis helped us to achieve a statistically significant uplift in sales and margin, as well as an impressive reduction in team member turnover \u2013 which in turn contributed to an overall programme RoI of over 20:1.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Not that it needs it, but the facelifted Mercedes-Benz C-Class range has been given a little boost in the form of EQ Boost: the 48V electric system with mild hybrid assistance from the Integrated Starter Generator (ISG).\nWe've covered the 48V and ISG previously as it made its first local appearance in the new CLS, and now it's available in a consolidated range that starts with the C200, powered by a new 1.5-litre turbocharged four-cylinder petrol engine good for 184hp and 280Nm of torque. That ISG, bolted between the engine and the nine-speed 9G-Tronic automatic, adds an additional 13hp when accelerating.\nIn case you were wondering, the C180 is destined for our memories as it's still powered by the 1.6-litre turbo-four and would've been priced too closely to the C200, so Mercedes-Benz Malaysia (MBM) tacked on more features and discontinued the C180 locally.\nGone the way of the dodo bird as well is the C250, though that's a global discontinuation, so you can't buy a brand new C250 anywhere now. This leads us to the C300, the step up, still powered by a 2.0-litre turbo-four, though it's an all-new engine. It makes 258hp and 370Nm of torque, also sent to the rear wheels via the 9G-Tronic auto.\nInterestingly, the C300 makes do without the 48V electrical system and EQ Boost from the ISG. The new mill is deemed sufficiently powerful with a useable torque curve. EQ Boost in the C200 is there to supplement the missing torque at certain points of the powerband.\nRounding out the facelifted C-Class range is the C43 sedan that will continue to be produced locally alongside the C200 and C300. 
It runs on the same 3.0-litre bi-turbo V6 but with a little more juice wrung from the engine.\nPower is up to 390hp and 520Nm of torque. A nine-speed AMG Speedshift TCT 9G automatic lets power to all wheels run wild via the AMG Performance 4Matic system. It gets the car to 100kph in 4.7 seconds and will be electronically limited to 250kph.\nThe real upgrades are on the equipment list, however. Opt for the C200 and you now get blind spot assist, 18-inch wheels, a 12.3-inch digital instrument cluster, a larger 10.25-inch media display and Apple CarPlay\/Android Auto connectivity.\nMake it rain some extra money for the C300 and you'll have 19-inch AMG wheels, AMG Line interior and exterior, lane-keeping assist and multibeam LED headlights tacked on.\nPrices have understandably gone up. The C200 starts at RM259,888 while the C300 is priced from RM304,888. If the C43 tickles your taste buds, you'll need to fork out RM421,888.\nIf you're quick to jump on the order bandwagon (by 31 December 2018), MBM, via its Flex-C financing programme, offers plans starting at RM2,788 a month that cover financing, insurance, and service packages. Also included in the deal is complimentary First Year Tire and Rim Insurance.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"You all know what \"this\" is. My most dazzling friend and colleague, the magnificent Kyrie Irving, is out for the year. The knee he hyperextended is now infected because the doctors may have been double agents for the Cavs and put dirty screws in during surgery. He's finished. Done. Boned. Whatever you wanna call it, it stinks to the high heavens. Not only is the Celtics' season basically over with Kyrie and Marcus Smart out, but now the haters have some serious ammo against Kyrie. He's hurt that knee too many times now. I'm not saying he's injury-prone, because a couple of them were freak injuries that nobody could avoid, but it definitely isn't good for him to need a second surgery on this one. 
Put me down as someone who is assuredly not a fan of repeated knee injuries.\nDid I mention that this really stinks? I have absolute confidence in Scary Terry Rozier to hold his own going into the playoffs, but they have no shot at the Finals now. Zip, zero, zilch, nada, no chance they get past either the Cavs, Raptors or Sixers without their two best players (Kyrie and Gordon Hayward) and their best defender in Smart, as well as a key bench piece in Daniel Theis. Unless they luck into playing the Heat, there's a decent shot they don't even make it out of the first round. The Wizards and Bucks are both probably just as good as, if not better than, the Celtics minus Kyrie, Smart, Hayward and Theis. I feel like I've been riding in a hot air balloon since mid-November and a fighter jet just flew right through it and sent me tumbling back to earth. Total ball-buster. They lose Hayward and then rip off 16 in a row, are the best team in the East for the majority of the season, overcome Kyrie and Smart being hurt to make a late run at the No. 1 seed, and then it all goes away in a two-day span. The Raptors blow them out and Kyrie is out for the year. I guess I'll take this year as a bonus?\nThis also renders the Eastern Conference playoffs pretty much over. No team has a chance at the finals other than the Cavs, Raptors or Sixers, and including the Sixers might be pushing it. Until LeBron loses, I'm not gonna bet against him. He's made it eight years in a row for a reason, and the Raptors always choke. I'm just hoping the C's draw the Heat in the first round and can make it competitive against the Cavs for a few games in the second round. If they get the Bucks or Wizards, forget it. The Bucks would dominate inside with Giannis and Jabari and the Wizards would run the Celtics out of the building.\nI've been trying to go about my day as usual, but it's hard. A part of my life was just taken and put in a blender right in front of my eyes. Maybe not a blender, actually. 
Perhaps they laid it down on the table and inserted bacterially infected screws into my ailing knee, ensuring that King James returns to the Finals for the 375th year in a row. Perhaps. It's just a theory.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"welcome you to our website!\nOur office is known for its excellence in integrating clinical expertise, state of the art technology, and artistic capabilities. We specialize in making your smile and health our number one priority!\nOur experienced and friendly staff takes pride in helping educate all of our patients with up-to-date research, home care instructions, and nutritional guidance. We strive to provide quality care and high ethical standards in a pleasant and comfortable environment.\nGreat Staff \u2013 Teresa does a great job, every time, with cleaning teeth. She's professional, caring about the patient, and gentle when she does her work on you.\nDr. Davis's practice consistently gives good, friendly, professional service. I am a new patient and I could not be more pleased.\nall of my experience in Lauren Davis's office have been efficient, professional and caring.\nI moved out of Charleston a few years ago but am back and very happy to be a patient of Dr Davis' again. My hygienist today, Colleen, was great.\nAs usual, in less than an hour I had a cleaning, x-rays, and dentist's checkup. No waiting.\nThis is a five star practice!\nHad an issue that needed to be looked at. Called at 8:30, seen at 10, out by 10:15.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzabwzb b/data_all_eng_slimpj/shuffled/split2/finalzzzabwzb new file mode 100644 index 0000000000000000000000000000000000000000..2cfd95c5bda1255474c3458892c13a8e27773a44 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzabwzb @@ -0,0 +1,5 @@ +{"text":"The < c:redirect > tag redirects the browser to a new URL. 
It supports the context-relative URLs, and the < c:param > tag.\nIt is used for redirecting the browser to an alternate URL by using automatic URL rewriting.\nSince the value of the variable 'url' is 0 the page gets directed to the http:\/\/javatpoint.com .","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Search our Bridgeton, MO phone book by phone number to get the owner's name, address, social media profiles and more!\nFind out about suspected Bridgeton, MO scam phone calls and other nuisance calls and texts - all thanks to our active community of CallerSmart users. Run a reverse phone lookup on any Bridgeton, MO phone number to see what others have reported about it as well.\nFirst caller feedback on (314) 596-4863 shared by jazzbazz: \"Trying to charge to take care of Student loans\"\njazzbazz just unlocked their Wordsmith badge because their Caller I.Q. score increased - congrats!\nbrandi_n_petersen just unlocked their Freedom Fighter badge because their Caller I.Q. score increased - congrats!\nFirst caller feedback on (314) 596-6793 shared by brandi_n_petersen: \"Spam nerd.. Loser joke \"\nFirst caller feedback on (314) 637-0438 shared by BryanA: \"Personal cell phone.\"\nOur Hall of Shame highlights the numbers of the worst phone scammers and spammers from Bridgeton, MO. Below you'll find the worst offenders according to our community of CallerSmart users when it comes to Bridgeton, MO phone scams. 
These are the Bridgeton, MO numbers with the lowest Trust Factor ratings and the most negative feedback so please beware!\nVictim of a Bridgeton, MO Phone Scam?\nIf you've been the victim of a Bridgeton, MO phone scam or fraud, then be sure to file a complaint with the appropriate authorities.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"They must terminate their Agency Agreements with Apple within seven days after entry of the proposed Final Judgment.\nThey must terminate those contracts with e-book retailers that contain either a) a restriction on the e-book retailer's ability to set the retail price of any e-book, or b) a \"Price MFN,\" as defined in the proposed Final Judgment, as soon as each contract permits starting thirty days after entry of the proposed Final Judgment.\nFor at least two years, they may not agree to any new contract with an e-book retailer that restricts the retailer's discretion over e-book pricing.\nFor at least five years, they may not enter into an agreement with an e-book retailer that includes a Price MFN.\nCote basically said that this is a perfectly straightforward price fixing case, and the settlement directly counteracts the price fixing issues, so there's no reason not to just move forward with it.\nThe Complaint and CIS provide a sufficient factual foundation as to the existence of a conspiracy to raise, fix, and stabilize the retail price for newly-released and bestselling trade e-books, to end retail price competition among trade e-books retailers, and to limit retail price competition among the Publisher Defendants. Although the Government did not submit any economic studies to support its allegations, such studies are unnecessary. The Complaint alleges a straightforward, horizontal price-fixing conspiracy, which is per se unlawful under the Sherman Act.... The Complaint also details the defendants' public statements, conversations, and meetings as evidence of the existence of the conspiracy. 
The decree is directed narrowly towards undoing the price-fixing conspiracy, ensuring that price-fixing does not immediately reemerge, and ensuring compliance. Based on the factual allegations in the Complaint and CIS, it is reasonable to conclude that these remedies will result in a return to the pre-conspiracy status quo. In this straightforward price-fixing case, no further showing is required.\nIt is not necessary to hold an evidentiary hearing before approving the decree. Given the voluminous submissions from the public and the non-settling parties, which describe and debate the nature of the alleged collusion and the wisdom and likely impact of settlement terms in great detail, as well as the detailed factual allegations in the Complaint, the Court is well-equipped to rule on these matters. A hearing would serve only to delay the proceedings unnecessarily.\nShe does try to summarize the comments against the settlement into four broad categories: (1) that the settlement would harm third party players like indie book stores, indie ebook retailers, indie publishers and authors, (2) that the settlement is \"unworkable,\" (3) that there weren't enough facts to support the price fixing claim, (4) that the impact of such price fixing was actually pro-competition, in that it broke up Amazon's market dominance. She then breaks down each of these arguments to show why none of them apply and the settlement should move forward.\nI won't go through all four issues, but I would like to focus on the two that get the most attention, the first and the last. On the first issue, she points out that antitrust law is not designed to protect businesses from the working of the market, but to protect the public from the failure of the market. 
If the settlement causes some businesses to suffer, but it's in the public interest, there is no problem there.\nIf unfettered e-books retail competition will add substantially to the competitive pressures on physical bookstores, or if smaller e-book retailers are unable to compete with Amazon on price, these are not reasons to decline to enter the proposed Final Judgment.\nNone of the comments demonstrate that either condition for predatory pricing by Amazon existed or will likely exist. Indeed, while the comments complain that Amazon's $9.99 price for newly-released and bestselling e-books was \"predatory,\" none of them attempts to show that Amazon's e-book prices as a whole were below its marginal costs.\nOh, and finally, the court points out that swinging back the blame to Amazon is meaningless for the purpose of this case, anyway, because even if the court accepted that Amazon was price fixing too, that doesn't make it okay for the publishers to price fix themselves. Think of it as the \"two wrongs don't make a right\" rule.\nThird, even if Amazon was engaged in predatory pricing, this is no excuse for unlawful price-fixing. Congress \"has not permitted the age-old cry of ruinous competition and competitive evils to be a defense to price-fixing conspiracies.\" ... The familiar mantra regarding \"two wrongs\" would seem to offer guidance in these circumstances.\nGiven how quickly the settlements were worked out, as long as they keep the same judge, and I don't really see any reason not to, then yeah, the holdouts will probably find themselves wishing they'd just settled before long.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Celebrate turning 18 with this Personalised 18th Birthday Women's T-Shirt. The t-shirt is personalised with the year of birth and available in many sizes.\nA fun, slogan t-shirt that can be customised with the year of your choice. 
Perfect to celebrate an 18th Birthday.\nThis t-shirt makes a lovely gift for this special birthday!\nThe t-shirt features the word EST and then the year that they were born, which is written in a bold rainbow coloured font.\nThe t-shirt is made from a polyester and cotton mix to create a garment which is breathable, durable and easy to care for. Your personalised t-shirt is printed using dye sublimation inks which actually colour the fabric so there's no need to worry about the design peeling off.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"what the front of my fridge says, about me.\nto its right, and held up by the Official First Magnet of this new home (from Zainab -- a miniature turquoise flip-flop with a very happy frog on it), is a hurriedly-scribbled note to self about how i gave Evelyn an extra fifteen dollars for her cleaning session on the twelfth of August, because i didn't have change.\nright under -- and not holding anything up -- is the magnet-card Romolo gave me that last limoncello-laden night in Rome. this is the picture on it. it says, \"Show me a day when the world wasn't new.\"\nperch\u00e9 non so amare altrimenti. (because i don't know how to love any other way.)\nso. what's on your fridge?\ni know. three whole weeks into the month and NOW i'm posting the new page? i know. blame Italy.\nblame blessedly good wine and even better friends. blame an afternoon at Greve in Chianti and an evening-massage in San Giovanni. blame jasmine-flavored gelato and my first taste of lampredotto. blame the way the peach-juice runs off your wrist and into the crook of your elbow, while you sit in the piazza and wait for the next bus, whenever that is.\nblame the ever-charming manager at the Deutsche Bank by St. 
Peter's, for telling me he wishes all his customers had such lovely smiles; blame Giorgio-the-portiere at Via Giordano Bruno, for telling me he misses me and can he kiss my hand again; blame the women at the pet-store in Prati, for hugging me goodbye and asking if i could send them a postcard from Canada; blame Monica at Enoteca d'Orio, for pinching my cheeks and asking when i'm moving back.\nblame porchetta d'Ariccia sandwiches and the FORVMROMANVM writers group at Ai Tre Scalini; blame a big book by Dante and blame Pablo Neruda in Italian. blame Angelo and the gay-Chinese restaurant in Monti; blame Giorgio-in-Cortona and his seventeen-euro-a-kilo prosciutto; blame pizza bianca on a rainy day in Monteverde; blame the salad-and-wine-bar by Piazza Navona and an afternoon full of papal daughters and Italian ex-boyfriends.\nblame the eighty-five-year-old man who made the John Lennon mosaic in Central Park, sitting half-blind and ever-friendly in his artisanal hole-in-the-wall studio on Via Urbana.\nblame what is still -- no matter how hard it is to go back there -- my favorite wine bar in the whole world; blame the way the wind feels on a motorino in Florence; blame a boy who always knows what to fill your glass with.\nblame risotto ai funghi porcini and friends that feel like home. blame home.\ni am on the InterCity to Florence. it is crowded like always, and it is late like always. the much-touted, lazy afternoon in Greve is looking less and less lazy. i wonder if i should bail and just go straight to Rina in Campi, and have her fuss over me like only a Sicilian housewife can. i think about her vegetable garden, and wonder whether the fiori di zucca are in season.\nit is cooler today than it has been all week. 
all week i have been enjoying the heat, and the world that comes with it: the way things get quieter in the afternoons -- as people retreat behind shuttered houses; the way sweat trickles down the back of your knee, the way a night out starts late, and lasts long -- stretched out by aperitivi before dinner, and gelato after, and a long walk home over finally-cooled cobbles.\nbut today is cloudy, and i wonder as we pull out of Rome whether La Notte Bianca will be as wet this year, as the last.\nsometimes i think Trenitalia gets away with all the delays, with full-to-bursting trains and less-than-stellar service, thanks only to the sheer loveliness of the Italian countryside. when i got to Termini this morning, there were no more seats on the next two, three, and four EuroStars to Florence, and my only option was to take a slower (and already-delayed) InterCity. i will miss the one o'clock bus connection to Greve, i might miss Erica, i might not even get to go to Greve this afternoon -- i have lost so much time. but the fields of fieno in upper Lazio do that burnished-gold, rolling-ripple thing, and the olive trees march down the hillsides like footsoldiers gone slightly awry. i am sitting in my favorite spot -- by the window and facing the tail of the train, and i get to watch the land unfurl away from me for a hundred and fifty minutes (instead of ninety) -- and it's okay.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzadcuc b/data_all_eng_slimpj/shuffled/split2/finalzzzadcuc new file mode 100644 index 0000000000000000000000000000000000000000..f8156ae58cfa06a6d2eec24876de36ca11bfaf22 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzadcuc @@ -0,0 +1,5 @@ +{"text":"Five races and five different winners from five different constructors. 
It certainly has been a crazy season that no-one could have foreseen when Formula One touched down in Australia six weeks ago.\nNow the worlds most viewed soap opera reconvenes in Monaco, the epitome of the sports DNA. The glitz, the glamour, the sun, the sea, the rich and the famous can all be seen in the world?s most luxurious location.\nWhat we will also find this weekend is a race that will, hopefully, help us find some sort of understanding of the madness that has so far entailed. How can McLaren have been so blisteringly quick in Australia, but struggle over race pace since? How can Red Bull go from a weekend of mediocrity, in China, to one of utter dominance in Bahrain? How can Williams, who had yet to really threaten the big boys, run away with the Spanish Grands Prix?\nThe season will at some stage settle down into a pattern as the bigger teams afford the more expensive upgrades, that?s the way it works unfortunately, money matters, massively. This weekend though is one of the rare events where a strong car is a help, but not the be all and end all. The great drivers perform around Monaco, the men are separated from the boys. Last season Michael Schumacher averaged around 9th in all his qualifying efforts. In Monaco the 7 time champion achieved his best quail result of 5th.\nTyres will undoubtedly play their part, just as they did last season where Sebastian Vettel scraped home to win by the skin of his teeth on fledgling Pirellis. The Red Bull leads the championship, and a masterful drive could put him in a very handsome position. Vettel, like Hamilton, has decided that consistency is the key to success this season. When the season does settle down and we have consistent fast drivers scoring consistent big points Vettel and Hamilton could be looking good.\nHamilton, the 2008 winner of the race, is perhaps in need of his maiden 2012 win. He has been unlucky, but consistently in the points. 
However, he needs to accept that consistency is something he lacks, despite a promising start to the season. Hamilton will make mistakes, crash occasionally and make poor errors of judgement. He has done it before and will probably do so again. To combat that he usually produces brilliant drives as well, but they have so far been lacking in regularity since the back end of 2010, and this has cost him. In such a tight season, you would back Lewis to produce more of these drives.\nThen the Matador, Fernando Alonso, who, despite an ailing car for the first portion of the season, has driven superbly since the first round in Australia including his win in Malaysia. He will be by far the happiest driver, as he waits for the Ferrari team to catch up in terms of raw pace, no-one is scampering away at the front, and after a very good result in Spain the car and the driver could come together in Monaco this weekend. Whenever I make a prediction I am usually wrong, but Alonso would be my bet this weekend.\nThe Mercedes is still an enigma. Rosberg was superb in China, and Schumacher quick over the first three races. Yet since that maiden win, Rosberg has faltered back into the mid-field, uncompetitive at the front. Schumacher meanwhile looks to have been worn down by his atrocious luck this season thus far. He has shown the pace is still there, and a very good couple of races could catapult him back up the standings.\nAnd dare I say it, if Kimi Raikkonen turns up like he has on occasion, we could have six different winners from six different constructors. Grosjean has pushed his Lotus teammate hard, but the Finn has the experience around the Principality. And then Maldonado. The result that no-one ever expected or predicted. Can the Venezuelan do it again? How will Williams react after the garage fire?\nIt makes for a tantalising race. 
Let's embrace the madness.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Dapper Professional shops the sales for you!\nDo you want to make sure you're getting the best deal? Do you have a fashion-related question and no one to ask? Do you need a second or third or fourth opinion? Or maybe you just want someone to talk to about fashion!\nFeel free to reach out for any and all inquiries and we'll respond as soon as possible!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"1. SMD-2835 beads of imported chips are used as the light source to ensure the quality of the beads.\n2. The thermal conductivity of the aluminium substrate for the PCB board is 1.2 W\/m.K. The aluminium substrate and glass tube (0.95 W\/m.K) are bonded tightly with thermally conductive dealcoholized silica gel (1.0 W\/m.K), which ensures good heat dissipation of the lamp body and ensures the service life of the lamp.\n3. The perfect combination of science ensures that the product can work normally in various climatic environments of -20 to 60 \u00b0C, and the lamp's safety, brightness, color temperature, color rendering and other aspects can achieve the best results.\n4. No glare: All-glass transmitter and advanced anti-glare diffusion technology are used.\n5. Long life: Under normal use, the service life of the LED light source is up to 35,000 hours.\n6. Low energy consumption: Compared with an ordinary light source, the LED light source saves more than 60% electricity.\n7. High light efficiency: Select high-quality imported chips, 40% brightness per watt higher than a traditional fluorescent lamp tube, 80% higher than an incandescent lamp.\n8. 
Low light attenuation: Unique lamp structure design, fast heat dissipation, light attenuation is 80% lower than traditional fluorescent lamp tube.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Back in January, Reading Council began consultation around the future of our Reading Your Way Service, and there was a real chance our service would have to close at the end of the financial year. This was very unwelcome news to the people who benefit from the service, who joined forces to form the Service User Action Group and petition to save Reading Your Way. Their hard work paid off, as after a mixture of petitioning, protesting and making their voices at heard at public meetings, funding for the service was extended for 12 months.\nWhat makes Reading Your Way special?\nFriends in a friendly place.\nThe people attending make it and it offers a good service. It offers comradery. It's the only place of its kind in Reading.\nOn arrival, there is an atmosphere of calm and a reassuring welcome, which encourages trust, safety and confidentiality. This, in due course, results in discussion about personal mental health problems.\nFirstly, the people; the members, the staff, the Peer Supporters. Secondly it's a place where we are accepted as we are, not defined by our diagnoses or our mental health histories. Thirdly, what we gain by seeing the progress of other members when they move on, become peer supporters, get jobs etc.\nWhy did you decide to take action to save the service?\nI felt largely recovered and I felt I had a moral duty to defend the organisation so it could continue to help those more unwell than myself. I also wanted the staff to keep their jobs.\nBecause to lose Reading Your Way would have had a big impact on service users' mental health. We all need each other to keep on an even level.\nBecause I saw a service that really made a difference to vulnerable people's lives. 
Initial rumours about the possible closure of the service were already causing disquiet, I felt I could help.\nHow did you feel when your efforts paid off?\nJoy, relief. I thought: 'thank god'.\nElated and happy, it was a weight off my shoulders. No more sleepless nights.\nSo pleased, but a feeling that we must put in actions to not let this happen again.\nA palpable feeling of achievement knowing that our collective voice had impact and had helped restore stability.\nRelieved mainly, but also proud of myself and the other Reading Your Way members and staff who had helped convince the council and CCG.\nOur main strategy was to put faces to the service so that we were no longer just anonymous figures on a spreadsheet, and I think we succeeded in that. We also proved that even though many of us have serious problems, we are still prepared to fight for what we value.\nAt Together, we strive to make sure that the people we support lead the way, not just in their own support but in decisions at every level about Together's governance, and the design and delivery of our services. One of our initiatives to support this aim is a grant scheme that enables people to apply for funding to develop service user involvement and leadership projects and ideas across the organisation.\nGrants were awarded to 20 successful initiatives.\nWe awarded an average amount of around \u00a3600 per proposal.\nA grant to support a person who uses Hastings Your Way who wanted to promote the work of artists who have lived experience of mental health by recording a CD and publishing a short book of creative writing.\nLawn Court in Bexhill received a grant to help keep their sports club up and running and organise tournaments for their members.\nA grant to improve an allotment project in Swale. 
The project was already running and many service users were involved, but new equipment was needed to maintain the allotment and make sure we could continue to offer this activity.\nWe supported the set up of a therapeutic arts group. This was applied for by the individual service user who would be leading the project, but with a view to involving a great number of people who access Swale Your Way.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Universidad Peruana de Ciencias Aplicadas (UPC) is a comprehensive private research university, institutionally accredited by WASC Senior College and University Commission (WSCUC), one of the six regional accrediting agencies in the United States, and one of the most prestigious worldwide.\nUPC has been recognized for three consecutive years as the number one Peruvian university in internationality by the Am\u00e9rica Econom\u00eda Ranking, and additionally, has consistently been positioned among Peru's top universities. From March 2018, UPC will be offering students the opportunity to study both the NCUK International Foundation Year and International Year One in Lima.\nUPC's educational model provides a set of guidelines that summarizes its academic philosophy and guides the educational process towards the graduate's personal and professional development, based on local and global demands. 
This model, which is expressed through the University's essential functions \u2013teaching, research and community outreach \u2013, is based on five pedagogical principles that support our educational processes and actions: competency-based learning, student-centered learning, autonomous and self-regulated learning, learning in diversity with a global vision, and learning towards sustainability.\nUPC evolves and strives to provide access to the most modern and state-of-the-art facilities, while also ensuring the comfort and productivity of its students through the constant creation of new learning environments and facilities.\nSports facilities, which include: gyms, semi-Olympic pool, recreation areas, football field and more.\nA wide range of student support services, including medical, dental, physical therapy, student coaching and counseling, etc.\nFirst Place in Accreditation: UPC is recognized as Peru's # 1 university in Accreditation, according to the \"Ranking of the Best Universities of Peru 2016\" presented by the Am\u00e9rica Econom\u00eda National Ranking. This award reaffirms our commitment to academic excellence and towards meeting the highest of both domestic and international standards.\nFor more than 20 years, UPC has been committed to the development of highly qualified professional leaders and is proud to offer an exceptional array of degree options which are complemented by our outstanding faculty, modern facilities, and comfortable, yet dynamic learning environment. The collaboration with NCUK offers the opportunity to continue building on this success by offering flexible and high quality qualifications to provide Latin American students international education experience and fantastic progression onto international degree programmes.\nWhich qualification and grade(s) did you achieve? * For example: IGCSE Science, C grade. 
Entry requirements for NCUK Qualifications are available on the respective qualification pages.\nYour Enquiry * Please include the details of your enquiry.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzadsei b/data_all_eng_slimpj/shuffled/split2/finalzzzadsei new file mode 100644 index 0000000000000000000000000000000000000000..ab96f4137e2b0389260bf7ec2e393b0892127365 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzadsei @@ -0,0 +1,5 @@ +{"text":"\" ... While certain forms of competition can be healthy, the kind of claws out, Housewives of Atlanta style undermining that many women partake in is more worthy of the unflattering term, cat fight. As women gain momentum and presence in all aspects of life, we would do well to take heed of Albright's ominous assertion. ... \"\nThe quotes above were extracted from an article at Seekyt.com that has become unpublished. The link supplied below is a comparable replacement link.\nWays Women Can Support Other Women.\nFollow Cmoneyspinner's HomeBiz Projects's board GIRL POWER on Pinterest.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Monitor 200 services at 1 minute intervals all month for only $15. All our plans include unlimited international SMS notifications so your costs are predictable, no matter how many alerts you need. No contracts, cancel at any time.\nOur SSL checks verify your HTTPS certificates and our HTTP checks verify your content and the server's reply. One minute intervals mean you're the first to know when something's not right.\nWe ensure your SMTP, POP3, and IMAP4 services are working well and let you know if your servers show up on spam blacklists (RBL). Email uptime is important.\nHTTP, SSL Certificates, SSH, DNS, real ICMP Ping checks, WHOIS domain expiration, and more. Full REST API to integrate with your provisioning systems.\nUse our on-demand diagnostic tools to quickly find out 'why' a service is failing. 
Hosting companies can use our free WHMCS module to resell whitelabel services to their own clients.\nTell others about NodePing by sharing your unique URL and receive a generous referral bounty.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Luxembourg Police are urging caution after a motorist was stopped by a fake police officer for allegedly speeding.\nAccording to a police press release, the female motorist was driving through Lauterborn, close to Grevenmacher in the east of Luxembourg at around 8.30am on Tuesday, when she was stopped.\nThe man claimed he was a police officer and told the woman to pay a 100 euro fine for speeding. The victim paid the sum but when she went to the police station in Echternach to collect a receipt in the afternoon, the officers had no information about the incident.\nThe man posing as an officer was described as 1.60 - 1.70m tall, aged 25-30 years old, of slim figure with a black beard. He spoke Luxembourgish without an accent and was wearing a baseball cap with the word 'Police'.\nAnyone with relevant information related to the incident should call Luxembourg Police on 113.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The candidates of the District of Columbia can apply for the certification if they have successfully completed their EMT program from a state-approved program school. Candidates are required to clear the exam, conducted by the National Registry of Emergency Medical Technicians, in order to be certified as EMT professionals. 
EMT certification in the District of Columbia is valid only for two years.\nIt is compulsory for the candidates to be at least 18 years of age.\nA state-approved program course must be completed by the aspirant.\nYou ought to finish your CPR (Cardiopulmonary Resuscitation) certification.\nQualifying both parts of the NREMT examination is compulsory.\nGood stamina along with a sound mental state is also obligatory.\nThe candidates must also provide a lawful proof of their citizenship in the United States.\nGood communication skills in English should be possessed by the candidate.\nThey should compulsorily undergo a background check.\nSubmit the application for certification, as well as the required application fee.\nThey have to clear an approved practical examination in the District of Columbia.\nThe foremost thing an applicant needs to do is to complete an EMT program from a state-approved program school in the District of Columbia.\nThe applicant should begin by creating an account on the NREMT website by clicking on \"Create New Account\".\nNext, the applicant needs to create a new application by clicking on the \"Create Initial Entry Application\" link.\nNow, the application fees must be paid by the applicant soon after the online application is completed.\nAfter that, the applicant needs to check the progress of his\/her application process, as well as check the \"Authorization to Test\" (ATT) Letter. Go to the website's homepage and log in using the username and password through which you created your account. Here, click on the link \"Check Initial Entry Application Status\".\nThe word \"Submitted\" appears next to the link \"Course Completion Verification\" if your information is submitted by the website. 
But, keep in mind that the approval by the education program director is still pending.\nIf \"Not Submitted\" appears next to the \"Application Payment\", it indicates that the applicant should pay the application fee before getting the ATT Letter.\n\"Print ATT Letter\" will appear on the website if your course completion has been checked thoroughly by the education program director.\nNow, print the ATT Letter and decide the date to take the NREMT examination.\nImportant Note: \"Print ATT Letter\" will not appear until and unless your course completion is verified by the program director and the payment of fees is made by you.\nThe ATT Letter will instruct the applicant how to schedule the time and date of his\/her exam through a website called 'Pearson VUE'.\nThe practical NREMT examination in the District of Columbia is organised by state-approved EMT program institutes. For further assistance, the applicant can take the help of the Emergency Medical Technician Practical Examination Handbook.\nA clear copy of the course completion certificate.\nA CPR (Cardiopulmonary Resuscitation) certification card.\nInitial certification fee of $45.00, either through a check or by a money order made to the \"DC Treasurer\".\nIt is important for the candidate to undergo a criminal background check. For this, the applicant needs to contact the EMT course coordinator.\nThe EMT candidate must be employed in an Emergency Medical Service or he\/she must have worked in a rescue operation.\nThe candidate must prove his\/her cognitive skills either by appearing in an examination or through Continuing Education (CE).\nA state-approved 12-hour refresher course should be completed by the candidate.\nA payment of $10.00 (non-refundable) should be made by the candidate.\nIt is important to submit the recertification application before September 30 of the year in which your renewal of the certificate is due.\nThe applicant needs to submit the recertification application before its expiration date. 
The application should arrive before March 31.\nA 24-hour refresher course, done from a state-approved program, is mandatory.\nBesides this, an added EMT program of 48 hours is also necessary.\nApplicants of the District of Columbia must pay a non-refundable amount of $15.00 while submitting their paperwork, needed for the recertification.\nSkill verification must be provided by the applicant, and should be confirmed and signed by the Program Director and Director of Operations\/Physician Medical Director.\nThe applicant must maintain his\/her CPR (Cardiopulmonary Resuscitation) card.\nThe applicant must provide his\/her recertification application before March 31.\nThe applicant must have completed a 36-hour refresher course from a state-approved program.\nA 36-hour additional course should also be completed by the applicant.\nAn amount of $15.00 must be paid by the candidate during the submission of recertification documents.\nEMT-paramedic applicants in the District of Columbia need to complete a state-approved 48-hour refresher course, including all necessary topics, along with a 24-hour additional program.\nHe\/she also needs to pay a non-refundable application fee of $20.00.\nThe applicant should get his\/her skills verification proof approved by the Physician Medical Director.\nA CPR (Cardiopulmonary Resuscitation) certification should be maintained by the applicant.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Globe is seeking to change the landscape of entertainment by promoting the digital lifestyle and at the same time protecting the industry through its advocacy #PlayItRight. It pursues its mission to bring more original content to Filipinos as it brings Globe's first ever co-produced film, \"All Of You\". 
The beautifully crafted romantic comedy is among the movies participating in the Metro Manila Film Festival (MMFF).\n\"Producing original content also comes with the responsibility of educating our customers to consume the right and legitimate way through the advocacy #PlayItRight,\" says Quark Henares, head of Globe Studios.\nThe #PlayItRight campaign champions the entertainment industry by producing or releasing quality online or offline content and making it more accessible to audiences. It encourages Globe customers to consume their entertainment in the right way, by avoiding piracy. Piracy undervalues and takes away the creative efforts that each member of a production team puts into making a movie or staging a show.\nThe noted director adds that as Filipinos enjoy their annual film fest through the MMFF, they need to learn and appreciate the hard work that comes with putting together a movie \u2013 from the number of people down to the hours.\nThe movie, which is headlined by Derek Ramsey, Jennylyn Mercado, Sam Milby and Solenn Heusaff, is a romcom set in Taiwan about two individuals who find modern day love through Tinder. After three years of being together, the couple finds themselves at a point where they have to make the major decision to stay with each other or keep looking for #ThePerfectMatch.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"HealthBridge Management provides management services to Skilled Nursing and Rehabilitation Centers in Massachusetts and Connecticut. 
We are looking for a Regional Director of Human Resources to provide HR assistance to approximately 7 Centers in Massachusetts and 1 Center in Connecticut including CareOne at Brookline, CareOne at Newton, CareOne at Millbury, Sweetwood of Williamstown, CareOne at Holyoke, CareOne at Northampton, CareOne at Redstone and River Glen Health Care Center (CT).\nThe Regional Director of Human Resources is responsible for providing advice and guidance to management on all aspects of Human Resources, including company-wide policies, federal and state labor and employment laws and regulations. In addition, this role provides support and assistance to employees across all levels of the organization, and ensures that Center management focuses on providing a positive and productive work environment for all employees. In this role, you will not only be a member of an exceptionally talented HR team, but you will be a member of a regional team known for its ability to collaborate with each other, support each other and provide great outcomes together.\nTo be considered for this position, you must have excellent communication skills\u2014both verbal and written, the ability to foster and maintain relationships across all levels of the organization, the talent and skill to support multiple locations and a willingness to travel on a regular basis to the various Centers as well as the confidence, knowledge and judgment to be an outstanding partner for your Centers and regional teammates. Prior HR experience supporting multiple locations preferred.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This bungalow is a mail-order house from the Lewis Manufacturing Company. The name of the model is Cortez. It was offered about 1917; this house was built in 1928. The open front porch was enclosed during a renovation. The exterior is cedar shake siding. 
Another Cortez model is located in Libertyville at 425 West Cook Street.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Since this is an optional product, some people try to do away with it, saying that it is unnecessary coverage.\nThese are just some of the possibilities of the problems you might face as a homeowner. And there are many more possibilities.\nThis is where residential title insurance can help, as it ensures that these problems don't happen to you and that if ever they do, you can get monetary assistance for legal fees and indemnity in case you lose and the property is reverted to the claimant.\nResidential title insurance is quite useful in these situations. We are sure you will appreciate the fact that you had yourself insured.\nEven if the seller signs a warranty deed, it is still useful to get residential title insurance. The preliminary title search also serves to keep these possibilities from ever happening. The title searcher from the insurance company will already check the line of title to see that there are no problems with the title and that you rightfully and wholly own the property and that there are no hidden debts that you will have to pay.\n> Is it really necessary to buy residential title insurance?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The pack hysteria that has overtaken America has been headlined by the Hollywood response to Harvey Weinstein, Kevin Spacey, Bill Cosby and many other members of that community, largely on the basis of testimony of sexual harassment or assault given years after it may have occurred. 
In the case of James Levine, the media reported abuse of teenagers. Of interest, the film which received many Academy Award nominations, winning the Oscar for its screenwriter, is Call Me By Your Name, a languorous look at a homosexual affair between a teenage student and a much older graduate student. The difference in age is never questioned, nor is there any trace of assumption that the older man might have unduly influenced the younger one. To the contrary, the boy's father confesses his own regret at not having had the courage to experience a similar rite of passage in his own youth. Almost every man accused by the MeToo and TimesUp posses has apologized profusely either for wrong-doing or for being insensitive albeit misinterpreted, yet this has been insufficient for the various corporations, foundations, museums, universities and media centers for whom dismissal is the only appropriate response.\nWhen it comes to shocking criminal behavior, America wants to be on the side of the perpetrator, forgetting the insult to the families of the victims and the travesty of justice and focusing instead on humanitarian behavior towards the murderer in his senior years. When it comes to men in power possibly acting crudely, the default position is that the complainant must be telling the truth and besides, recognition of one's bad deeds is insufficient as a penalty. Ironically, when it came to Dr. Nassar, who sexually assaulted many teenage Olympic athletes over the course of many years, some of whom complained immediately to the various coaches and people in charge, nothing was done and Dr. Nassar was allowed to indulge his perversions for over a decade. 
So far, none of the administrators of the various entities for whom he worked has been charged with dereliction of duty or criminally endangering the welfare of minors.\nIn 1993, Daniel Moynihan coined the term \"defining deviancy down\" to describe society's shift in exempting conduct previously stigmatized and normalizing what was once reprehensible and\/or criminal. The parole of a vicious murderer of two cops is an example of this. What is happening now with the MeToo and TimesUp movements is the discarding of due process and the acceptance of anonymous statements as testimony - two violations of our legal system. Allowing testimony of those who never reported a crime when it happened but waited years and decades before coming forward is riddled with the threat of its own abuse on many levels: memory distortion, the desire for personal gain or revenge, the desire to gain attention, the imposition of ex post facto standards. Our acceptance of this is normalizing what we fully understood to be corrosive and should be anathema to a nation of laws.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The park museum is designed to replicate a prehistoric Native American mound. Recently remodeled, the museum focuses on Tennessee's prehistory.\nThe park's archaeological features and wildlife can be viewed along six miles of interconnecting trail. The paved trail sections are bicycle\/wheelchair accessible. Gravel\/forest floor hiking trails, which do not permit bicycles, are of easy to moderate level. 
Flora and fauna of three intersecting ecosystems, a cypress swamp, mixed beech-oak slopes, and oak-hickory uplands, are viewable along these trails.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"As with other areas of law, state law governing this area in each state will vary. In this particular area, the time periods and remedies are critical in each state, and may be completely different. Sometimes even cities have their own Rent Control laws, so it is imperative that you check with your local Plan Attorney before proceeding. This section can only be a general guide and if you have a serious landlord-tenant problem, it is advisable to seek help immediately from a lawyer.\nUnderstanding your rights as a landlord or tenant can assist you in avoiding problems should you decide to rent instead of purchase your home.\nIf you are a landlord, you should have your lease agreement drafted by a lawyer and periodically reviewed. This will protect you and your tenants from most unforeseen contingencies during the course of the lease term. If you are a tenant, you should take your lease agreement to an attorney if you have any questions about any term in the agreement. Remember, even if the landlord makes a statement during negotiations, unless the statement is set forth in the agreement, it will be difficult to prove later, especially if the building changes managers or owners.\nAssume that you are about to sign a lease, and the landlord specifically states that the apartment complex is quiet and never has any problems with noise. 
When asked where this is stated in the rental agreement, you retort, \"The landlord is a nice older man, and he seems honest, I believe him.\" Only after the lease is signed do you learn that three rock bands practice in the complex for hours each afternoon and night.\nAny agreement should set forth the terms by which the premises will be leased, such as a time period, and the monthly rental. Any agreement not specifying a term will in many states be construed to be a 30-day term. That means either the tenant or the landlord can, for any reason, terminate the lease by giving the other party thirty days written notice of termination.\nIn major metropolitan areas, where a shortage of housing exists, Rent Control ordinances may apply which will not allow landlords to evict tenants, except for specified reasons. For example, Santa Monica, California, has what many consider to be one of the most tenant-favorable Rent Control standards in the United States. Tenants may only be evicted for specific reasons set forth in the Rent Control ordinance, such as the landlord wants to move his\/her family into a particular unit. Many of the usual landlord-tenant principles of law do not apply in such areas.\nGenerally a landlord's duties are to provide the unit rented in reasonably habitable shape for the tenant, and to include such periodic repairs as may be necessary. For instance, the landlord must supply a unit which meets the Building and Safety Codes as well as local Health Codes.\nThe landlord usually has a duty to make reasonable repairs but not to repair every single item which may be considered unnecessary. Also, the landlord is generally not required to make any improvements that a tenant desires during the lease term. Once rented, the landlord has a further duty to provide \"quiet enjoyment\" of the premises to the tenant. This means that the landlord cannot enter the premises at will, and must respect a tenant's right to privacy. 
If the landlord does not honor this implied or expressed covenant, the tenant may have a cause of action for invasion of privacy.\nThis also includes a duty on the part of the landlord to make sure that each tenant respects the right of quiet enjoyment expected by each other tenant. This does not mean that the building must be free from noise, however, and generally, absent a special ordinance or statute, the standard used to measure any disturbance is whether each tenant is being reasonable. Tenants should be aware that at the expiration of any written lease, the tenant will be expected to have fully and completely moved from the premises. Any \"holdover\", even if it is one day, may subject the tenant to penalties or even rent charges.\nRent. First and foremost, the tenant's first duty is to pay rent in the full amount owed and on the day specified. Failure to do so, depending upon the terms of the lease, or the lack of terms, can subject the tenant to eviction procedures. Landlords who are overly technical might even commence such proceedings if the rent is one day late!\nShould the premises need necessary repairs, e.g., leaking pipes, broken windows, defective toilet, etc., the tenant should make a demand to the landlord to fix the problem and give the landlord a reasonable time to make repairs. If the landlord does not do so, the tenant is permitted to fix the problem and withhold, or \"abate\" the rent.\nRent abatement is an exception to the tenant's duty to pay rent in full. If you are going to abate the rent be sure whatever you are doing is reasonable. For example, obtaining three estimates on the cost to fix a problem is more reasonable than obtaining one estimate. It is advisable, but not necessary, to obtain the advice of an attorney before you decide to abate your rent, if you believe it is necessary. Your Plan Attorney will be able to assist in determining the reasonableness of your actions.\nDamages. 
Generally, unless specified, a tenant is liable for any damages caused to the premises, except normal wear and tear and those caused by problems such as water leaks, etc. Thus, any tenant, or any guest of any tenant, who causes any damages whatsoever to the premises will be responsible to make repairs that return the premises to the condition in which they were found. This is whether or not the damages were intentional or accidental.\nShould the premises be damaged in any manner, a landlord is able to deduct an amount equal to the amount of repairs from a tenant's security deposit; and if the damage is in an amount greater than the security deposit, the landlord will have a cause of action to sue the tenant for the cost of repairs.\nNuisances. Tenants are also requested to keep the premises free of any and all nuisances to the other tenants, their guests, the landlord, as well as any neighbors from outside the building. Thus, loud stereos which disturb other tenants will create a nuisance. In some cities, such as Los Angeles, local ordinances may be enacted to control these nuisances and provide a manner by which violators may be subjected to criminal penalties. Thus, in such cities, a loud stereo played during the day may be enough to violate the noise abatement ordinance, and subject the tenant to not only civil action for the nuisance, but possible criminal penalties for violating the statute.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"AC Wholesalers offers a huge selection of Single Stage Furnaces online at wholesale prices. Find the best Single Stage Heater Furnace deals that fit your budget by comparing our full range of Single Stage Furnace models, specs, and cost. Read customer and expert ratings on 1 Stage Furnaces to find out what other customers have to say about our products and services. 
Our in-house 1 Stage Furnace experts are here to help you purchase a 1 Stage Heating Furnace at the lowest price possible, with free shipping available on all orders over $199. For your Single Stage Furnace installation needs we offer a full network of preferred installers.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"American businessman who dropped out of Harvard University, yet cofounded Microsoft, a computer software company whose business practices were at times ruled to be anti-competitive in several court battles.\nComputer science \u2026 jobs should be way more interesting than even going to Wall Street or being a lawyer--or, I can argue, than anything but perhaps biology, and there it's just a tie.\nFrom interview (24 May 2004) in Scientific American (Jun 2004), 45.\nHey, size works against excellence.\nUpside (Apr 1992). Quoted in Thomas J. Peters, Liberation Management: Necessary Disorganization for the Nanosecond Nineties (1992), 554.\nPBS interview with David Frost (Nov 1995). In Lisa Rogak (ed.) The Impatient Optimist - Bill Gates in his Words (2012), 107.\nThe Chinese are clearly inculcating the idea that science is exciting and important, and that's why they, as a whole\u2014they're graduating four times as many engineers as we are, and that's just happened over the last 20 years.\nNPR Radio interview, Morning Edition, (29 Apr 2005). In Lisa Rogak (ed.) The Impatient Optimist: Bill Gates in his Words (2012), 32.\n28 Oct - short biography, births, deaths and events on date of Gates's birth.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Score is Weight used for 1 rep + Weight used for 2 reps + Weight used for 1 rep.\nMet-Con: We will be outside, RAIN or SHINE. Dress appropriately.\n1 Mile Sled Pull** (135\/95 on the sled). Partner Carries 2 KB's (70\/53) Stay together, and switch work as often as needed to get your team back as fast as possible!\n**Team of 3 will have 1 partner rest as they walk alongside the team. 
Alternate as needed, with 1 partner always on rest, to get back to the gym as fast as possible!\nEarly morning and Evening classes, bring your headlamps or flashlights!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Looks awesome! Bathroom Faucet good design By Elite. Bathroom Faucet very well made, sleek and simple. Complete your living room furniture with a modern Bathroom Faucet. It's elegant, sturdy, attractive and it looks expensive and a good value for the money. Bathroom Faucet is one of the most homey, cozy, nice-looking and exotic Bathroom Faucets especially for the price and made of fantastic materials. Great quality, easy to assemble, delivery on time and in best condition. Bathroom Faucet is good merchandise at fair prices and amazing free shipping. Guaranteed damage claims by offering to send parts or to keep the item at a discounted price. Great buy would definitely recommend. Shop with our low-price guarantee and find great deals on ##ptitle# and more! Reading the reviews helped you purchase.\nBar tables are not just specific furniture intended to be utilized in bars as the name suggests. If you're seeking to give your kitchen area or dining area of your home a business-like appearance, then you need to buy the best bar furniture. Bar furniture is making an ever-increasing presence in people's homes as they look for that unique business-like presence at home. However, with a large number of bar tables to select from, obtaining the best offer can be a challenging task for most owners. For all those seeking to change the dining room into a lively modern bar, we've got your back with the best bar furniture on the market currently. This bar table is really a breakfast set ideal for houses with small eating spaces. It's a fairly compact table that will not take up too much room in the dining area. It features caster wheels that provide simple movement inside the home especially when cleaning. 
There are also some stools and drop leaves, giving you great bang for your buck. Made of natural beechwood, this table is a superb buy and arrives in round and square shapes.\nIf you are looking for some serious relaxation, this loveseat's solid pillow back and rolled arms will deliver, but at a higher price tag. Every seat cushion contains 30 individual pocketed coils for spring and support, and high-density seat foam for an extra layer of plushness and durability. Plus, the foam cushions are topped with extremely soft, blended down feathers. For your extra comfort, there are 3 standard positions: seated, a slight recline with an elevated footrest, and a complete lie-down with a raised footrest. It doesn't hurt that this reclining loveseat looks lux, either. Its all-leather upholstery and decorative nail head trim provide a vintage look that can work with a lot of home decor styles, from traditional to country to arts and crafts. In addition, leather stands up well against daily use, children and pet hair and nails. Just be sure you condition the leather every six to twelve months.\nThis is the next couch on our list. It's extra comfortable and incredibly well suited for small rooms or attic living. It's upholstered in polyester material and features inset buttons that provide an elegant diamond-tufted style. It is constructed of long-lasting materials and the legs are made of long-lasting wood to add to its durability. The loveseat has espresso-stained wooden legs and no-marking foot caps. It includes comfortable foam padding and polyester upholstery which makes it very luxurious. It has a lounge place that provides exceptional room for resting.\nThe Carolina Light Grey Fabric Sectional Couch is well known for its stunning comfort and impressive design. 
It has a modular style that allows for many arrangement choices to suit your needs. The set features soft fabric and plush cushions to bring about the desired comfort and ease. More to the point, it's of a great size to accommodate you together with a group of buddies. Depending on the available space and your preferred form, you are able to mix and match the individual chairs to produce a remarkable shape. It's recommended for those with little space who require a couch.\nThe terms for this one are comfort and a sophisticated look. This set includes a tufted backrest and nail head accents that provide the sophisticated look. Its grey colour contributes to its remarkable look. It can fit up to an 8 in. mattress. That alone lets you know how comfy the set can be. Notably, it does not include a mattress, so you may need to purchase one. The trundle includes castors to allow easy access. You may even pull out the trundle for people to sit on. Just to point out, it requires assembly; however, it's very easy to put together.\nThis suits you if all you need is a beautiful couch. It is made with a flared body and pillow-top armrests in an awesome cobblestone grey. It has tough foam cushions that are wrapped in polyester upholstery to create the desired comfort. It features a sturdy corner-blocked frame that adds to the sturdiness. Also, the feet are in a faux wood finish. It has an impressive grey colour that goes with any decor. It measures 89 W by 39 D by 40 H, therefore big enough to accommodate you along with your loved ones\/friends. More to the point, it arrives fully assembled. This saves you the pain of having to put the set together.\nThis set includes one left-arm sofa unit, two armless sofa units, and one corner sofa unit. This provides enough room to accommodate your family and friends. The fabric is 100% polyester for sufficient comfort and durability. 
The advantage of this sofa set is the mixing and matching of chairs within the space of the room for an ideal form. It also allows for fitting even into small areas. It takes just light assembly. Additionally, it features plush cushions for optimum comfort and ease. You might want to try this set. It does the job well.\nThis sofa is praised for distinctive design and comfort. Its impressive flare-arm sofa with chaise offers you optimum room for stretching and relaxing in your room. The modern couch chaise adds to the beauty of this set. The set also includes pillow-top arms and padded back and seats for optimum comfort. The sofa includes a corner-blocked wooden frame that gives the sofa sufficient toughness. The couch measures 89 in. wide by 62 in. in depth by 4 inches in height, making it ideal for medium-size areas. You may need to do minor assembly, for example of the legs.\nThis is the next sofa on the list. It is extra comfy and very suitable for small rooms or attic living. It's upholstered in rayon material and features inset buttons that provide a stylish diamond-tufted style. It is made from durable materials and the legs are made of long-lasting wood to increase its sturdiness. The loveseat has espresso-stained wooden legs and no-marking foot caps. It offers comfortable foam cushioning and rayon fabric upholstery that makes it extremely luxurious. It has a lounge place that gives an extraordinary space for relaxing.\nThis set is padded in a plush and comfy textured cushioned velvet. This gives it the incredible comfort that it's loved for. Once again it supports chaise-style seating for sufficient comfort and ease. Moreover, the sinuous spring base adds to the comfort and durability. Additionally, the sofa has hardwood frames that add to its toughness and durability. The couch has soft cushions on both the seat and the back. 
Yet still, the sofa can move from a seated position to a reclined position easily. That said, just to reiterate, the couch is very soft and provides outstanding comfort and ease.\nPuffy, overstuffed loveseat cushions are not your thing? This reclining loveseat from GDF has sharper, cleaner lines than your conventional reclining loveseat, making it an elegant accessory for a mid-century modern, contemporary or eclectic type of house. And, at only 46.46 in. in width, it's very easy to squeeze into a smaller space, like flats, dens or offices. Available in several color and material choices, including charcoal fabric, navy fabric and slate microfiber, it's simple to find the right match for your house decor, and the durability your family requires for daily use. Gone are the days of good-looking but uncomfortable \"Double Handle Deck Mount 20\"\" Water Fall Roman Tub Faucet Trim\"; you no longer need to give up comfort for design. This reclining loveseat features a big, soft, plush-filled chair cushion with enough room for two, or enough room for one to spread out and unwind.\nCopyright \u00a9 Bathroom Faucet By Elite in Bathtub Faucets All rights reserved.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzahdvc b/data_all_eng_slimpj/shuffled/split2/finalzzzahdvc new file mode 100644 index 0000000000000000000000000000000000000000..892e47988c425e674dfcc6b1b534bf682a921ca6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzahdvc @@ -0,0 +1,5 @@ +{"text":"Lonely Screen is one of the sleekest yet most efficient AirPlay receivers for Mac OS X. 
It turns your Mac into an AirPlay-compatible Apple TV receiver, so that you can easily stream your audio or mirror your iOS iPhone or iPad screen.\nLonely Screen even supports the iOS 9.1 and 9.2 updates and is an Apple TV 2\/3\/4 compatible AirPlay receiver.\nYou can easily stream using AirPlay and use your Mac as an Apple TV.\nClick here to Download Lonely Screen for Mac OS X 10.7 & later.\nLaunch the LonelyScreen app on Mac. It will now set up its server.\nNow make sure that your Mac OS X receiver and your iOS or AirPlay device are on the same WiFi.\nNow on your AirPlay sender tap the AirPlay icon and in the list of available devices select the \"LonelyScreen\" option as shown in the image alongside.\nThat's it! Your iOS device will now be mirrored on your Mac.\n5kPlayer is a multi-function media player which can even be used as an AirPlay receiver. Though 5kPlayer has an inbuilt media player, it's in second position on our list as it's AirPlay compatible only up to iOS 8.3 and 8.4. Hence, 5kPlayer won't be able to receive AirPlay casts from iOS 9 and later.\nClick here to Download 5kPlayer for Mac OS X.\nMirroring 360 comes from Splashtop, which is well known for producing quality applications. Mirroring 360 is also a quality product. Unlike 5kPlayer, Mirroring 360 is AirPlay compatible with iOS 9 and later, so you can even use your latest iPhone 6S or iPhone 6 for mirroring. The only downside of Mirroring 360 is that it's a paid app, and hence you will need to pay for it after 7 days to continue using it.\nClick here to Visit the Mirroring 360 Website.\nSo, these are the 3 Best AirPlay receivers for Mac OS, which will turn your Mac into a wireless Apple TV. If you know of any other such good app, feel free to comment.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I recorded this video the other week and just got around to uploading it. In it I take a look at the new Netflix app that's available on Dish Network's Hopper. 
The app is a convenient way for many to access Netflix if they don't already have a way of using the service (such as a Roku, Chromecast, smart TV or video game console).\nThis entry was posted in Technology, TV and tagged Dish Network, Hopper, Netflix, netflix on dish, netflix on dish network, netflix on the hopper. Bookmark the permalink.\nAmazon Prime is $72, today only!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"When debts begin to accumulate around you and you cannot make your regular monthly payments on time, or even at all, you may be faced with a really stressful situation. To make things worse, you will be denied credit from other lenders because you cannot repay the credit you already have. If that wasn't bad enough, you will also get rude, irate and threatening letters and phone calls from your creditors, demanding that you pay them what is owed. Check this site out.\nAs these problems intensify, so do your bills. The issue with lots of consumer debts or unsecured credit is that the interest rates are so high that, even if you are keeping up with your minimum monthly payments, chances are that you will never settle your debts anyway. If the interest wasn't bad enough, once you begin to fall behind on your repayments or you borrow above the limit on your credit cards, you are likely to wind up paying a whole host of other additional charges, such as late payment and over-the-limit charges.\nWhen confronted with these situations, you need debt relief, or methods to get your debt under control, to put yourself in a position where you are able to eliminate your debts once and for all. Before exploring debt relief options, remember that it didn't take you a matter of days or weeks to get into debt, so you can hardly expect that debt relief will work for you in a matter of days or weeks either. 
Any option that you utilize to get out of debt will take time, patience and careful planning of your finances to make it effective.\nThere are several ways to obtain debt relief. Before you start, you will have to sit down and make a list of all of your debts, then make a note of each creditor, their name, telephone number and what their interest rates are. You will also have to work out your incoming money and where that money goes every week. Set yourself up with a budget and stick to it, while you are searching for options that will fit your situation much better and help you get some debt relief.\nSee which of your debts are attracting the highest rate of interest and target them. They are the greatest pressure on you, so the quicker that you pay them off, the closer you will be to obtaining some debt relief. Pay the minimum on all your other debts, except for the debt at the top of your list, and pay as much on that one as you possibly can.\nNext, you will need to call each of your creditors and explain your situation to them. Be truthful with them. Where possible, ask them if you could pay your debt in full for less money or if they would reduce your interest rates while you are paying your debts off. Ask your creditors how you can work together to get your debts paid off. You might be surprised at how willing they are to help you repay your debts.\nThe most common method that individuals typically think of for handling far too many bills is to go bankrupt. This is most likely the worst thing that you can do. By going bankrupt, you are likely to still wind up with some of your debts having to be repaid, in addition to badly damaging your credit report, which will hamper your chances of getting credit in the future. 
Even if you do get credit after a bankruptcy, you will have to pay huge amounts of interest, which will put you back in the exact same situation you are currently in. So even though bankruptcy may look like an option, use it only as your very last resort, and even then use caution.\nOne of the best ways to obtain some financial relief is debt consolidation. Generally, a debt consolidation loan will pay off all the debts that you currently owe and roll them into one, usually with lower interest rates and lower monthly repayments. There are loans available from lending institutions that don't require you to have collateral. The interest rates will be higher than on a secured loan, although they will be much lower than the rates being paid to other credit companies or on credit cards.\nIf you already own your own house, you may also want to consider the possibility of a home refinance, also described as a home equity loan, which can be used for a range of purposes, including repaying your debts. By refinancing, you may be able to get a lower interest rate on your home, as well as settle your debts. If you take the refinanced loan out over a longer term, your payments will be lower every month, giving you instant debt relief.\nWhile debt relief is crucial to get out of the debt you are already in, it is also important to educate yourself in how to budget your money carefully and manage it better in the future. You want to avoid getting into a continuous cycle of getting in and out of debt.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Technology for every sector \u2013 II BlockchainTech Congress.\nOn 1-2 October, the BlockchainTech Congress took place. The event was an occasion to meet with businesses, providers, regulators and technologies. 
43 speakers delivering 27 presentations drew more than 600 people over the two days.\nBefore us lies an internet-sized revolution. During the BlockchainTech Congress, participants could take a huge step towards the upcoming changes in the market. Every person who took part in the event can be sure that they are up to date with a technology which is already entering many economic sectors, and will soon be implemented as a norm in business.\nThe event started with a joint session between the BlockchainTech Congress and the AI & Big Data Congress. After the official opening of the Congresses by the Chairmen of the Advisory Boards, Norbert Biedrzycki and Tomasz Motyl, blockchain technology experts and professionals in the areas of artificial intelligence and big data debated the vision of the future.\nEven though blockchain is a much-discussed topic, it's worth talking about the basic mechanisms of this technology. With this in mind, the first thematic block of the BlockchainTech Congress started with a simulation showing the principles of operation of one blockchain-based process. Armed with such knowledge, participants took on the vision of development for the next 3-5 years.\nDuring the two days of the Congress, leading companies presented examples of using this technology in their own operations. Everyone who wanted to gain an advantage over their competition and get to know concrete examples of implementing blockchain solutions now knows how to implement them in their own company. This is all thanks to the debates, speeches and use cases hosted by experts in the technology. There was also no shortage of questions and clarifications of doubts, and after the official part of the congress, there was time for priceless business meetings.\nThe participation of speakers from Spain, the United Kingdom, Switzerland, Sweden, Liechtenstein and Gibraltar brought an international perspective on blockchain implementation. 
The congress offered a wealth of inspiration, something the modern, fast-changing market needs.\nKnowledge about blockchain was shared by speakers including Tomasz Buczak, Emmanuel Djengue, Dave Ebbitt, dr hab. Iwona Karasek-Wojciechowicz, Tomasz Kozar, Karolina Marzantowicz, Manuel Machado, Micha\u0142 Turalski, Szymon Wa\u0142ach and Max Wang.\nStrategic Partners: Atende, Asseco, Dell EMC, Biuro Informacji Kredytowej, Heyka Capital Markets Group, Vivus.\nPartners: Luno, Bacca, Abak, ProfesCapital, CallPage, Gamfi.\nWe invite you to view the full photo gallery of the II BlockchainTech Congress.\nThe event was organised by MMC Poland.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"When cycling artist John Etheridge was forced to give up his job through injury he made an executive decision about what to do next.\nAdopting a 'nothing to lose' attitude, he set about creating his own business with a website selling his cycling paintings and greetings cards.\nNow the on-line cycling world can discover the unique style of an artist who works mostly in pastel and pencil, creating the soft, warm and delicate tones associated with this medium.\nHis pictures include cyclists awheel in the countryside on idyllic summer days, time triallists racing the clock, track riders on the boards and also portraits of top roadmen, including Britain's greatest Tour de France stage winner Mark Cavendish, winning of course!\nAs well as sport, he does traditional country scenes. Some feature those most enigmatic of structures, the windmill; seen through a gap in the hedgerow, or beside the fens. There's a lighthouse, boats on rivers, and snow-covered fields. There's one of the shoreline being washed by a stormy North Sea.\nHe's quite fond of pheasants \u2013 there's usually one or two in view. The one of the cyclist in the country lane depicts a richly plumaged bird in the road. It begs the question, did the bird wait until the rider had passed before waddling into the scene? 
I found myself waiting to see what it did next, the bird that is.\nJohn Etheridge is 47 and his website tells us all about him. He was born in Hillingdon, Middlesex. At one year old his family moved to Norfolk and the fens.\nHe was still very young when he developed an aptitude for painting.\nAlways sporty, he played football and coached. He was an RAF boxing champion. He took up cycling in his late 20s.\nBut he stopped racing after five years, when he rediscovered art. Only recently has he got back into the saddle.\nCycling and art remain his two passions in life, he says. His prize-winning work has been featured in Anglian magazines and newspapers and exhibited at the Royal Norfolk Show.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzahoud b/data_all_eng_slimpj/shuffled/split2/finalzzzahoud new file mode 100644 index 0000000000000000000000000000000000000000..42e56128607de199e9af400ccadeeef3e1fc2255 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzahoud @@ -0,0 +1,5 @@ +{"text":"nudge needed a new logo design and launched a design contest on 99designs.\nA winner was chosen from among 81 designs by 34 freelance designers.\nThe logo should position us as the leading agency focused on social network application development and consultancy.\nWe came up with the idea of 'nudge' as it's an action we're used to doing on social networks to grab our friends' attention, whether it's poking someone on facebook or hi5ing someone on hi5.\n- It needs to encapsulate the essence of social networks.\n- Fonts, colours, styles, capitalisation, imagery left to your imagination.\n- Overcomplication, dull, dated, amateurish - the usual suspects!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"So your blinds were delivered and you are excited about having them installed.\nWe are thankful you have chosen us to install your new window treatments. 
Here are a few tips to help make your installation appointment a success.\nIt is important to us that we can do a great job for you. You can help us do that by considering a few simple steps.\nClear access to windows. Sofas or couches, chairs, tables, etc. Small furniture that can be moved easily.\nWindow sills clear and free from all items (this will allow me to fully extend and test the new window treatments).\nLarge furniture items moved (if possible) to allow access to windows. It is not always possible to move large furniture items like pianos, bookshelves and the like. Working around these things is possible; we can work with you to get that taken care of.\nFor elevated windows, we will need to make sure there is more space available so that I can safely set up the extension ladder.\nBe available during installation if I have questions or need to discuss aspects of the project.\nWe strive to provide quality service on every install. We will open boxes and sort blinds, and after the install we take away the boxes and shipping debris.\nIn the event that a blind is missing, damaged in shipping, or otherwise incorrect, we will help see it through by communicating with you about which blind it is, and how to place the reorder.\nIt is difficult to safely install new blinds in windows that are not easily accessible. 
Please consider this when preparing for your install appointment.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Meet the project of 2010.\nThe project included working out and implementing a new concept (positioning and image) for Ugorskaya (\u0423\u0433\u043e\u0440\u0441\u043a\u0430\u044f), one of the oldest brands of bottled water, together with its delivery service.\nNew packaging, website, ad materials, new sales activities and promotions - and we've got new clients as a result.\nSergey Durachenko - strategic planning, creative direction, conception.\nIgor Stepakhin, Tanya Cherkiz - package design.\nLesya Kuzvesova - guidelines and ads.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"In a matter of minutes and without a single line of code, Zapier allows you to connect BoomTown! and FullContact, with as many as 52 possible integrations. Are you ready to find your productivity superpowers?\nIt's easy to connect BoomTown! + FullContact and requires absolutely zero coding experience\u2014the only limit is your own imagination.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Thursday night the Chargers obliterated the Chiefs 31-13. And yet, even though 44 points were scored, the highest scoring fantasy stud wasn't even a player, it was the Chargers defense. The Chargers defense, previously ranked as a Top 15 fantasy defense, had two defensive touchdowns, three fumbles, one interception and a sack. Not too shabby.\nFor the rest of both teams though, there's room for a lot of concern. We're talking about a fantasy wasteland here. Before we get to the wasteland, let me give some free passes to a couple of players. For starters, I'm not going to talk much about Matt Cassel because he doesn't matter. He's a guy who's on most waiver wires and is a bye week play, if you're desperate.\nI'm also not going to give a hard time to Dwayne Bowe. Bowe was drafted in the 6th round in most drafts either as a low end WR2 or a high end WR3. 
He's hovering around the bottom end of the Top 15 receivers, which is good enough considering what his expectations were for the season. He's on pace to have one of his best seasons, with 37 catches, 488 yards and 3 touchdowns in seven games. If you consider the fact that the Chiefs quarterbacks have been hellaciously awful, he's actually having a pretty good season.\nNo, I'm not focusing on Bowe or Cassel but everyone else on the Chargers and Chiefs. Let's start with the highest profile player, this season, on either team: Jamaal Charles. For years I've watched the Chiefs and seen, clearly their best player, just disappear from games. Even though he's the best player on the team, the guy that ran for 233 yards a month ago, he still somehow doesn't seem to be the main centerpiece of the offense.\nLast week, Charles disappeared again with five rushing attempts for four yards. When Coach Romeo Crennell was asked after the game why Charles was ignored even he didn't have any answers and he's the coach! For the last three weeks, including last night, Charles hasn't topped 40 yards rushing. Part of this is the fact that the Chiefs are always playing from behind but still, as a receiver in catch-up mode, Charles is being underutilized.\nAnd don't even get me started on Peyton Hillis, one of the biggest fantasy busts of the season. Can you imagine that this guy was on the cover of Madden at some point? Did I dream it all? Was that one Browns season a mirage?\nSticking with the same position but moving on to the other side of the field, Ryan Mathews has been one of this season's biggest disappointments. Bothered by injuries earlier in the season, he's seemingly gotten healthy and yet he still hasn't produced like the RB1 we expected him to be. He has yet to top 100 yards rushing this season and he has more fumbles (2) than he has touchdowns (1).\nMathews is still averaging 4.41 yards per carry, so he's still effective when given the opportunity. 
Even last night we saw glimpses of Mathews talent on a 31-yard scamper. It's a wonder they don't give him the ball more. If he's healthy and able, why not give him the keys to the offense? It's not like Antonio Gates is carrying this offense anymore.\nAntonio Gates barely makes the Top 15 for fantasy football tight ends. We're talking about the 3rd tight end drafted overall. Last night's 3\/43\/1 line is a great night for Gates, compared to the rest of his season. That was his second game with a touchdown this season, third TD overall.\nHere's some more depressing Gates stats for you. Gates has only gotten more than three receptions in a game, twice this season. He's only had two games all season where he's scored in double-digits for fantasy purposes. After his \"stellar\" performance against the Chiefs, his owners might want to think about moving him. His name still has a nice currency in fantasy circles and right now might be the best price owners can get for him the rest of the season.\nOur last stop in the fantasy wasteland of the Chiefs\/Chargers game is Philip Rivers. While he hasn't been terrible, he also hasn't even cracked the Top 20 in fantasy quarterbacks, statistically speaking. It's never a good sign when you have a 12\/10 touchdown-to-interception ratio. His yardage totals have been decent but he's only passed for over 300 passing yards once. He has had five games where he's thrown for multiple touchdowns but he's also thrown eight interceptions in those games as well.\nThe fact is, there's a large number of quarterbacks in fantasy football who are having a better season than Rivers. Alex Smith, Brandon Weeden and Ryan Fitzpatrick, to name a few. He's become a low end QB1, high end QB2. The 2011 Philip Rivers looks more like the norm now than the stud fantasy QB he used to be.\nAll in all, the future doesn't look bright for any of these guys in the 'Fantasy Wasteland' we witnessed last night. 
I would expect more of the same despite some of these players having genuine talent.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzahwdx b/data_all_eng_slimpj/shuffled/split2/finalzzzahwdx new file mode 100644 index 0000000000000000000000000000000000000000..5c68f125cbe1d7987e8531dc884615f5729a7ecd --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzahwdx @@ -0,0 +1,5 @@ +{"text":"Today's weather is turning out to be partly cloudy. The visibility is going to be around 17 km i.e. 10 miles and an atmospheric pressure of 1020 mb . The daytime temperature is going to reach 25 \u00b0c and the temperature is going to dip to 14 \u00b0c at night. It will be dry with no precipitation and cloud covering 46% of the sky, the humidity will be around 79%.\nTomorrow weather is forecasted to be partly cloudy. The visibility is going to be around 20 km i.e. 12 miles and an atmospheric pressure of 1019 mb. The daytime temperature is going to reach 25 \u00b0c and the temperature is going to dip to 13 \u00b0c at night. It will be dry with no precipitation and cloud covering 25% of the sky, the humidity will be around 66%.\nOn Sunday weather will be partly cloudy with daytime temperature reaching 25 \u00b0c. Night time temperature are expected to be 12 \u00b0c.It will be dry with no precipitation. The visibility is going to be around 20 km i.e. 12 miles and an atmospheric pressure of 1017 mb. It will be dry with no precipitation and cloud covering 6% of the sky, the humidity will be around 63%.\nMonday seems to be partly cloudy. Rusape, Zimbabwe visibility is going to be around 20 km i.e. 12 miles and an atmospheric pressure of 1017 mb. The daytime temperature is going to reach 23 \u00b0c and the temperature is going to dip to 12 \u00b0c at night. It will be dry with no precipitation and cloud covering 19% of the sky, the humidity will be around 74%.\nPartly cloudy will be the weather pattern for the Tuesday. 
The visibility is going to be around 20 km i.e. 12 miles and an atmospheric pressure of 1018 mb. The daytime temperature is going to reach 25 \u00b0c and the temperature is going to dip to 11 \u00b0c at night. It will be dry with no precipitation and cloud covering 18% of the sky, the humidity will be around 75%.\nLooking at the weather in Rusape, Zimbabwe over the next 7 days, the maximum temperature will be 25\u2103 (or 78\u2109) on Sunday 21st April at around 2 pm. In the same week the minimum temperature will be 11\u2103 (or 52\u2109) on Tuesday 23rd April at around 5 am.\nLooking at the world weather radar, national weather service and satellite images, the Rusape, Zimbabwe weather forecast is reporting little or no rainfall over the next 7 days. So make the most of it while you are on vacation in Rusape, Zimbabwe.\nThe windiest of all days will be Friday 19th April, as wind will reach 11mph (or 18kmph) at around 2 pm.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Available in three sizes, our Personalised Keyrings are the ideal way to keep that special picture of your nearest and dearest with you all the time. Simply upload those treasured photos and leave the rest to us. The Personalised Keyrings can be customised with a different image on each side.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The thought of being cut up bugs me. I am seriously going to request that even if my death is under suspicious\/unknown circumstances, I will not receive an autopsy. I mean why? They are just indescribably horrible. Well, that's my opinion anyway.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Essence of Essential Oils - Find.Learn.Share.\nEssential oils have been used for centuries for many different purposes. 
Before modern manufacturing processes were discovered, human beings depended on purely natural ingredients, and plant-based ingredients, or botanicals, played an important role in the lives of our ancestors.\nAncient records demonstrate that the Egyptians and many other populations of the Fertile Crescent and the Far East used essential oils. Plant-based ingredients were used for mummification and ritualistic ceremonies, for instance, and fragrant oils were highly valued and often given as gifts. With uses ranging from wound healing to therapeutic massage, recipes for the use of botanical ingredients were collected and shared among cultures.\nIn modern times, essential oils are used in the fragrance, flavor, and aromatherapy industries. The unique characteristics of each plant give these essential oils their powers. But to truly understand why essential oils continue to play an important role in so many lives, we need to first understand what they are.\nEssential oils are volatile products obtained from plants using either a steam, water, or\u2014in limited circumstances\u2014cold-press distillation process. The word \"volatile\" is used to describe these oils because they release their scent when they evaporate at room temperature. This is why essential oils are a popular ingredient in fragrances. Some companies sell plant extracts that they call essential oils, but to be a true essential oil, an extract must be obtained without the use of chemical solvents.\nVolatile Oil? What Does that Mean?\n\"Volatile\" is a term used in chemistry to describe something that evaporates at room temperature. In other words, volatile liquids easily become vapor or gas. We often think of the word \"volatile\" as meaning something is flammable. For instance, gasoline is a volatile liquid that is also flammable. However, the terms \"volatility\" and \"flammability\" describe different properties of the substance. 
A flammable liquid that is also volatile is dangerous because once vaporized the substance can easily ignite. Not all volatile substances are flammable.\nEssential oils are called volatile because their molecules will easily change from liquid to gaseous form when exposed to the open air. In non-scientific terms, the oil's volatility is what makes it aromatic\u2014the molecules released as vapor into the air carry the essential oil's scent.\nWhy Are They Called \"Essential\" Oils?\nWhen something is described as \"essential,\" it is only natural to think of the item as a necessary thing because that is often what the word \"essential\" means. For instance, some vitamins and minerals are described as being essential because the body needs them to remain healthy. Essential amino acids are those that the body needs but cannot produce on its own. These amino acids must be derived from the foods we eat. Given the use of the word \"essential\" to describe nutrients that our bodies need, it is no wonder that there is some confusion as to what essential oils are and why we call them essential.\nHowever, when used to describe essential oils, the word \"essential\" is a different word altogether. It is a shortened version of the word \"quintessential.\" In modern times, the word \"quintessential\" means \"embodying or possessing the essence of something.\" This essence is the term that describes essential oils; these natural liquids are drawn from the very essence of the plant.\nThe word \"quintessential\" has a history that dates back to the very discovery of essential oils, and the two have been interlinked since that time. \"Quintessential\" can be literally translated to mean \"the fifth essence.\" This quinta essentia was thought to be the fifth and highest element, and it was believed that when the quinta essentia combined with earth, air, fire and water, it made up the whole of a being. The quintessence was the life force or spirit of the being or plant.
Distilling a plant's quintessential oil was thought to pull out its spirit. In the Middle Ages, it was believed that harnessing this quintessential element could cure all disease.\nWhile essential oils aren't really composed of the spirit of the plant, they do well at representing the plant's essence. By using only mechanical means to extract the chemical compounds from a particular plant, the essential oil extraction process results in a rich, pure mix that is unique to the plant from which it is derived. Not only will the composition of the extracted oil be unique for each species of plant, but it will also reflect the individual plant's distinct growing environment. In her book, The Taste of Place: A Cultural Journey into Terroir, Amy B. Trubek describes this terroir effect\u2014the unique combination of environmental factors in a specific location that affect the plants grown there. Terroir is what gives a wine its regional accents.\nTechnically, essential oils are chemicals. But understanding why these oils are classified as chemicals requires an understanding of the nature of chemicals. Again, this is a matter of language and meaning. Chemicals, at their base, are the molecules and atoms that compose all matter.\nThe elements of the periodic table are all chemicals. With a few exceptions, these elements are distinct and naturally occurring. Without these chemicals, there would be nothing on Earth\u2014in fact, there would be no Earth at all. So the plants themselves, along with their extracts, are made of chemicals. However, the more commonly accepted meaning for \"chemical\" is \"something that is manufactured and synthetic.\" When discussing products, we often distinguish between natural and chemical when in reality chemicals can be either. The chemicals found in essential oils are naturally occurring, not synthetically produced.\nWhat Are These Plant Chemicals?\nPlants are more complex than many imagine.
Plants are the only living things that can produce their own energy using the light of the sun. The rest of us, human and animal alike, are dependent on plants to provide us with the energy and amino acids we need for survival. Daniel Chamovitz, author of What A Plant Knows: A Field Guide to the Senses, describes how plants, though not sentient, have an \"intelligence\" of their own. More than just roots, stems, and leaves, plants have a complicated system of communication, both within each individual plant and also with the surrounding environment. These communications are made by means of chemical messengers.\nThese natural plant chemicals create bad tastes to protect the plant from predators, sweet smells to attract pollinators, and coatings to keep bacteria and fungi at bay. Some plants even release a chemical that prevents competing plant species from growing in the first plant's territory! In some instances, these chemicals are contained inside the plant's structure. In other instances, the chemical is found on the surface of the plant's leaves. Essential oils are extracts of these various plant chemicals. Once distilled from the plant, each essential oil will carry with it a taste, smell and texture that is unique to the plant from which it was derived. The oil will carry with it the plant's essence.\nWhy Does the Extraction Process Matter?\nIn the fragrance and food industries, the term \"essential oil\" has a specifically assigned meaning. The International Organization for Standardization (ISO) defines essential oils as those distilled using water, steam, cold-press, or dry distillation. The specific process used varies depending on the plant type. After distillation, the oil is separated from the water that was drawn out during extraction. This separated oil is then the \"essential oil\" that may be marketed using that specific term.\nOther plant extracts are also used by fragrance and food producers, and each of these types of extract has a distinct name.
The classification of the end product is based on the type of extraction process used. When water, steam or cold-press distillation is used to distil the plant's chemicals, no solvent comes into contact with the natural plant chemicals. This non-use of solvent is an important feature that distinguishes essential oils from other plant-based products.\nForms of extraction used to derive non-essential oils include separating the desired plant materials from the body of the plant using ethanol and hexane (or similar chemicals). When solvents are used for extraction, there is always a chance of solvent residue remaining in the final product. Products that are not extracted as essential oils are referred to as absolutes, concretes, florasols, and CO2s. Each of these plant products plays a role in the manufacturing of a variety of goods, but they are not essential oils.\nWhen trying any new product, it is important to remember every body is different. \"Natural\" is not a synonym for \"safe.\" Essential oils are highly concentrated chemicals. Whether inhaled or applied to the skin, this level of concentration means that any sensitivity or allergy to the source plant will be magnified. Strong fragrances may trigger respiratory reactions in people with asthma or similar conditions, and applying the oil directly to your skin may cause a skin reaction.\nAdditionally, some plants have natural medicinal qualities that can react with your prescribed medications or exacerbate a medical condition. So, to be safe, always start slowly when using a new essential oil. Dilute the oil before using it on your skin. Check with your physician if you are taking medication or suffer from allergies. Furthermore, some essential oils should be avoided if you are pregnant.
It is always a good idea to research the specific risks associated with the essential oil you are considering.\nCan Essential Oils Cure Disease?\nMany modern medicines include natural plant ingredients, and by necessity, ancient remedies were based on natural ingredients. Herbalists and natural healers have known about the benefits of botanical ingredients for generations. Yet information about essential oils remains limited. In some instances, studies have shown a correlation between a specific essential oil and a health benefit. For most essential oils, however, there are simply not enough clinical studies available to demonstrate their effect on the body.\nThe use of essential oils in the United States for medicinal purposes is still limited, but medical aromatherapy is common in Europe. Essential oils are most often used as part of a comprehensive treatment plan. Approximately 100 varieties of essential oils are used in Austria and Europe for medical aromatherapy. The benefits of essential oils are probably best realized when they are used as a supplement to the benefits of modern medicines and treatments. The use of an essential oil can enhance your life, but it should be a part of a supervised treatment plan if you are dealing with a serious disease.\nNow that you know the What and Why of essential oils, take some time to discover the unique properties of each plant. While these botanical power packs might not be the quinta essentia cure-all that our ancestors longed to find, their delightful aromatics can definitely lift your spirits.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"If you have been doing your research, then you know that Lasik eye surgery is a popular cosmetic treatment that improves the lives of many people each year. It also has a high success rate. There are a few things you may not have heard about Lasik surgery.\nWithout the foggy glasses and the pressure on your nose, your sinuses may not get so stopped up.
You can breathe better when you are not wearing glasses. With a less stuffy nose, you have fewer headaches.\nProfessional Lasik surgeons say that people tell them that they can see more clearly and further to the sides after their operation. They liken it to the clarity of forward vision. This effect is often attributed to the fact that glasses do not wrap around the head. With glasses out of the way, the eye can see further in each direction.\nMany people can see much better at night after their full healing time has elapsed after Lasik eye surgery. Often, those with high refractive errors like astigmatism and myopia have decreased vision when looking high or low without moving their head. Having Lasik surgery typically increases vision in all directions.\nYour Lasik eye surgeon, like an expert at Manhattan Lasik Center, can give you a more detailed exam that allows them to customize your outcome. We have more than 20 years' experience in Lasik eye surgery, and we have seen the results firsthand. Many of our patients leave testimonials and referrals for new patients and those thinking about having Lasik surgery. The process is quick with very little discomfort. You heal fast, and you have an excellent chance of getting rid of your eyeglasses and contact lenses. For a contoured map of your eye, a look around our humidity- and temperature-controlled surgery room, or for questions call us at 212-759-9617.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Sunrooms are excellent investments for homeowners who wish to increase the overall livable space of their homes while taking in natural light.
At SRA Home Products, we install TEMO sunrooms in Bear, Middletown, and other nearby Delaware communities because we believe in offering our customers only the best products on the market. Our sunrooms are designed to beautifully complement virtually any type of house and to withstand the elements for years, so they're the perfect choice for anyone seeking a new home addition.\nAll the sunrooms that SRA Home Products installs provide these important benefits, but that's not to say that our home additions are one-size-fits-all solutions by any means. Rather, we invite each client to choose from several general profiles; from there, our crew of home improvement experts works closely with them to develop a customized design that will incorporate seamlessly into the rest of the home. Once we have the perfect plan prepared, our factory-trained, experienced sunroom professionals can get to work creating the new room quickly and effectively. What's more, all the sunrooms that we create are backed by TEMO's lifetime transferable warranty for additional peace of mind.\nTo learn more about the sunrooms that we build for homeowners in Bear, Middletown, and other DE communities, contact SRA Home Products today. We can schedule a free, in-home consultation with you to discuss the design options that will best suit your home.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We recommend the best cryptocurrency exchanges to keep you safe and secure. There are over 10 trillion existing coins with a ridiculous staking.
List of Cryptocurrencies Binance Supports, Including NEO and AppCoins.\nStellar (XLM) historical charts for 1 day, 7 days, 1 month, 3 months, 1 year and all time candlestick price charts.\nFind the current Neblio to Binance Coin rate and access our NEBL BNB converter, charts, historical data, news, and more.\nTXKtoday.com is the official site of TXK Today, your trusted source for breaking news, weather and more in Texarkana, Texas and Arkansas.\nCrypto news offering the latest coin, token, and cryptocurrency news.\nNarendra Modi News - Check out the latest News on Narendra Modi. The time and occasion of the possible trip are under consideration, official sources said today. Clicking on these links will open a new page with individual data about the chosen coin.\nIf you have been paying attention to the crypto markets today, you have probably noticed that the AdToken coin is one of the best.\nFunFair Prediction 2018, FUN Forecast and Price Charts - When to buy FUN.\nLive Neblio prices from all markets and NEBL coin market capitalization. View live Neblio trade prices on all markets: Neblio Price, NEBL Stock and live Index. Datacoinz.com, the Cryptocurrency Expert.\nThe current price of Neblio is 2.279 USD today. Neblio technical analysis, Neblio coin future price, NEBL projections. Come chat with us and join our growing community. Next Generation Enterprise Blockchain Solutions.\nStay up to date with the latest Neblio price movements and forum discussion.
I created this blog to help readers keep abreast of the latest altcoin news.\nNeblio NEBL price graph info 24 hours, 7 day, 1 month, 3 month, 6 month, 1 year.\nGet the latest updates on politics, entertainment, food, sports, business, technology, education, jobs, nation, world and weather.\nThe action in the cryptocurrency market today was similar to what happened yesterday, with 29 of the top 100 coins seeing double-digit moves. Neblio (NEBL) news, analysis.\nThe Neblio API suite will support uniform core APIs that are among the most commonly used today.\nIt has great content that includes sports, groundbreaking documentaries and investigative features.\nYou need to know how many units of a coin you have and how much it is worth.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A taxing issue: navigating the complex US tax environment | Vertex, Inc.\nThe US sales and use tax environment is hugely complex. It involves a confounding network of tax types, rates and exceptions, all with a multitude of variations.\nMinor errors can have huge implications, including tax fines, interest and penalty charges, making it important that businesses review US tax implications prior to expansion in the country. It's vital that all businesses operating in the US equip themselves with detailed local knowledge and data-driven tax technology if they're to succeed.\nAmerica's tax system is decentralised, meaning individual states, cities and districts have the freedom to decide their own tax rules. As such, identifying jurisdictions and determining rates accurately is a monumental challenge.
Furthermore, unlike VAT, US classifications and sales tax exemptions depend not only on the location, but also the product being sold, who it's being sold to and how it is being used.\nFor example, in Chicago, sales tax is 10.25 percent, a number that is made up of 6.25 percent state tax, 1.25 percent city tax, 1.75 percent county tax and one percent regional transport authority tax. In Tennessee, there's a seven percent sales tax on the first $1,600 (\u20ac1,374), with 2.75 percent on the next $1,600. In Texas, certain sales of electronic data are subject to tax, but only on 80 percent of the price. In New York, clothing sold for less than $110 (\u20ac94) is exempt from state sales tax, but subject to local sales tax.\nThese variations can be confusing, but it's imperative that businesses understand what's required of them. Unlike European VAT, which is included in the advertised price, US sales tax is highly visible \u2013 literally added on at the point of sale. Retailers risk a class action suit if they overcharge tax, and a hefty audit liability if they undercharge. They also risk tainting the customer's experience.\nUnderstanding the basic state and local tax rules, plus using comprehensive tax automation technology, can make a big difference to expanding businesses. There are some unique tax rules international businesses should consider when expanding in the US.\n'Nexus' is the term used to describe when a business is required to register to collect tax in a jurisdiction. If a business has nexus in a state, it must collect tax on sales there. This is different from form-based VAT registration in Europe, as nexus can be acquired through a temporary or permanent presence of property or people (employees, service people or independent sales\/service agents). It even includes sales visits, trade show attendance and temporary inventory in warehouses.\nBecause each state defines nexus differently, it's not a straightforward concept. 
Some states have drafted legislation that taxes businesses even if they do not have a physical presence but do have economic nexus. This means tax administrations are looking at sales and frequency to determine if a taxpayer has nexus. For example, a business that has sales over $500,000 (\u20ac430,000) and more than 500 transactions in the state can be determined as having nexus.\nState legislation is ever-changing and can cause real confusion for businesses that are simply trying to comply. Automation is crucial to the correct identification and management of nexus. The compliance process is complex, and attempting to process it manually can have negative repercussions for businesses.\nExemptions exist for a range of customers and also differ by state. The responsibility for getting it right falls solely on the seller. If a seller exempts a transaction but does not have the proper exemption documentation, they will be responsible for the payment of that tax due on audit.\nThe final challenge of complying with US tax systems is submitting returns accurately. Each month, returns are due at state level, and often at local level as well. All of these returns must be accurate and filed on time or penalties will be incurred.\nComplying with the thousands of US tax jurisdictions places a strain on resources, and US states are only increasing their audit activity with regards to overseas sellers. In order to eliminate mistakes, businesses must reduce the amount of manual calculations they make. Automated tax technology can spare tax professionals from making human errors when calculating returns; inflexible, time-consuming workflows can become efficient and accurate processes.\nAutomation lets businesses verify tax jurisdictions and rates using geospatial technology. Furthermore, automated jurisdictional assignment means addresses conform to USPS standards with full zip codes at all times. 
As a result, businesses are covered for the most minor change in the smallest jurisdiction.\nTo reduce the risk even further, businesses can keep important tax decisions in the hands of professionals. This allows in-house finance and tax teams to spend more time on strategic work to support the business, instead of checking rates, rules and manually filing returns.\nWith changing regulations, as well as additional pressure from tax regimes and non-governmental organisations, the stakes for global tax functions have never been higher. As a result, tax planning and management needs to be more data-driven than ever. Tax functions will need to segregate and allocate data to keep up with more detailed information reporting requirements. This requires more precise methods of analysing and organising data.\nTo achieve the defensibility that global organisations will require, tax executives need to have a strong plan in place. While the specifics will depend on the unique characteristics of the industry, company and market, tax executives are better equipped to navigate compliance needs with a strong tax technology system.\nThe best forms of tax technology tend to have the following capabilities: access to standardised quality data; data management and advanced analytics; the ability to consolidate transactional data across multiple systems; the ability to synchronise, transform, and catalogue tax data for future needs; and portals for sharing tax and financial data.\nWithout automated tax solutions, staying up to date and compliant with tax regulations is difficult and time consuming. 
Tax technology is not only crucial to successful expansion in the US; it can also support global tax efforts, allowing tax professionals to focus on business strategy to increase the bottom line.\nOriginally published in European CEO.\nIn this video, Bernadette Pinamont, Vice President of Tax Research at Vertex, explains that while US tax reform promised simplicity, what it has delivered will prove challenging for most companies.\nLearn about the challenges organisations face when managing U.S. indirect sales tax, and what businesses must do to ensure a smooth transition.\nIn this video, Danny Vermeiren, VAT Director, Vertex, explains the two big trends in the European VAT landscape: digitisation and real-time reporting. Danny discusses why these trends have emerged and the technologies available to help businesses comply.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Channel set hoop earrings set with a matching set of Princess cut diamonds.\nThe hypnotic sparkle of the Princess cut Diamonds is truly ignited within these majestic hoop earrings. Held discreetly, yet securely within the invisible channel setting, the quality Diamonds are exquisitely displayed. The total 0.50 carats of Diamonds, 16 x 0.03cts, have been graded F Colour, VS1 Clarity prior to being expertly set and the finished earrings rest gracefully on the earlobe allowing the natural radiance of the Diamonds to shine. You can select finishes of 18 Carat White, Rose or Yellow Gold and 950 Palladium or 950 Platinum. Please allow 4 weeks for delivery as each pair of precision set earrings is crafted to order. Your earrings will then arrive with you beautifully presented, ready to be the centrepiece to grace any outfit.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Holcim, Concrete - Ready-Mix, listed under the \"Concrete - Ready-Mix\" category, is located at Reynolds St, Mareeba QLD, 4880, Australia and can be reached on 0740922171.
Holcim currently has 0 reviews.\nBrowse all Concrete - Ready-Mix in Mareeba QLD.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaiifg b/data_all_eng_slimpj/shuffled/split2/finalzzzaiifg new file mode 100644 index 0000000000000000000000000000000000000000..017a725fed7dbfd9d73978bc261a348dd8e51472 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaiifg @@ -0,0 +1,5 @@ +{"text":"This coming Sunday is going to be a very special event. I'm getting together with my SD tribe \u2013 Dance Klassique \u2013 and we are teaming up with another posse and bringing you an entire Sunday of amazing house music called Depp End & Dance.\nI'm headed straight there after Denver, and I'm gonna try my best to make the scene in time for sound check at noon.\nReally hope to see you there. Love.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We are a family-run business not only recognised for our high-quality second hand commercial vehicles but also for understanding and meeting the needs of our customers, whatever their business or company size.\nThey go the extra mile for us. All the trucks we have bought from Chris Hodge have been sound, reliable and well-priced.\nThey were awesome.\nBuying a used truck can be a daunting task. Explore a huge range of high quality, carefully sourced and prepared, problem-free used trucks from a range of manufacturers. Our stock of used trucks for sale includes a wide range of carefully selected commercial vehicle chassis types including 3.5ton, 7.5ton, 12ton, 18ton, 26ton, 32ton, 44ton weights and used Euro 5 & Euro 6 category vehicles.\nOur constantly changing stock of used rigid trucks for sale includes a quality range of rigid heavy haulage brands.\nSearching for a quality used tractor unit?
Our used tractor units & semi-trailers can include: 4x2 tractor; 6x2 Mid-lift tractor; 6x2 Twin-steer tractor & 6x2 Rear-lift tractor units.\nBuying a used van? Our range of high quality used vans for sale will suit every business need. All our used vans have the assurance of being fully checked and approved.\nBuying a used trailer can be a daunting task. Explore a huge range of high quality, problem-free used trailers.\nWe are constantly on the lookout for quality used trucks. If you are looking to sell your truck then why not save yourself the time and hassle and let us give you a quote?\nThe European truck market is the most sophisticated in the world and the British market, the most competitive of all European countries.\nCarefully sourced and prepared used trucks from a range of manufacturers. MAN, Iveco, DAF, Peugeot, Mitsubishi, Land Rover, Montracon, Isuzu, Mercedes Benz.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"As you read this I'll be making my way to the sandy shores of Riviera Maya, Mexico, for a much-awaited weeklong vacation. Aside from relaxing and taking a break from reality, I'm most looking forward to exploring the Mayan Ruins and swimming in the most stunning underwater cave rivers in the world. My camera, and additional zoom lens, are all ready to capture every detail of these world wonders. Have a happy Memorial Day weekend, and the next time I'll have plenty of photos to share!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Get Scientific Method essential facts below. View Videos or join the Scientific Method discussion. Add Scientific Method to your PopFlock.com topic list for future reference or share this resource on social media.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Wildlife, pets, landscape, abstract and anything you can imagine. Basically if you can dream it, I can paint it.\nIf you have an empty wall in your home and need something to bring a bit of life to it.
Then I will gladly come round to your home and give you some ideas. Alternatively you can send me photos of the space.\nNo project is too small or too big. And no idea is too crazy for me to paint. So let your imagination and dreams go wild and I will do my utmost to bring them to life.\nMy main passion is art; doing commissions for people is one of the biggest and most exciting parts of being an artist.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzajxpv b/data_all_eng_slimpj/shuffled/split2/finalzzzajxpv new file mode 100644 index 0000000000000000000000000000000000000000..58d935c1cafb943e30072eee24ccc15eb4986329 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzajxpv @@ -0,0 +1,5 @@ +{"text":"Please be advised that we plan to hold an agent seminar at Guangzhou Joint Visa Application Centre (Austria, Poland and Portugal) on 09 April 2019; the VAC will stop application submission as well as passport collection service for Austria, Poland and Portugal operations from 13:00, 09 April 2019. Applicants are advised to plan their submission\/passport collection accordingly. Should you have any inquiries, please contact Guangzhou Joint Visa Application Centre.\nNotice for Beijing jurisdiction: Dear applicant! As per the Austrian Embassy's regulation, please make sure your application complies with the below requirements: 1. All applicants must provide a bank statement of their salary card. 2. The balance of your bank statement should be able to cover all expenses of your travel. In case of low balance, please provide another account's bank statement or a deposit certificate. 3. If the applicant has other income that is not in the form of salary, please provide an explanation letter and relevant documents; if the applicant has no income, please provide proof of solvency of a family member such as parent(s)\/spouse (salary card bank statement + work certificate + notary kinship + explanation letter). 4.
Group members with the same itinerary must submit together; ADS applications must be submitted at least three weeks in advance.\nNotice for Beijing jurisdiction: According to the Austrian Embassy's requirement, tourism visa applications and family\/friend visit visa applications must be submitted at least 20 days (15 days if submitted in Beijing) before the departure date. In case the processing time is not enough, the application will not be accepted until the travel date is changed accordingly.\nImportant notice about insurance: please be aware that the travel medical insurance policy should cover the full validity of the visa respecting Central European Time. In case the insurance is defined in terms of Beijing time, the expiration date should be at least one day after the intended date of departure from the Schengen area.\nPlease kindly be informed that the Joint Visa Application Center in Nanjing will be closed early on 19 July 2017 at 11:30 as requested. We regret the inconvenience.\nThere is a power outage from the municipal transmission line at the nearest substation and this has caused a temporary shutdown of our office building. The office is in no position to accept visa applications today (15 May 2017). Power grid authorities are working to restore the power and the news of restoration of normal services will be published soon on our website. We regret the inconvenience caused.\nPlease note that the Joint Visa Application Center in Xi'an will be closed on 21 April 2017 at 12:00 as requested.\nThe Visa Application Centre in Hangzhou will be closed early at 12:30 on 28 March 2017.\nDue to the Consulate event on 28 March 2017, kindly be informed that the Visa Application Centre in Hangzhou will be closed early at 12:30. We regret the inconvenience.\nPlease note that Austrian Visa Application Centres in China will be closed for one working day on 02 January 2017 for the New Year's Holiday and will open as usual on 03 January 2017.
Thank you.\nPlease be careful of touts and agents when you visit the visa application centers. We have received reports that they offer their services claiming to be officers of the visa application center and charge exorbitant prices for services which are either provided for free at the visa application center or at very nominal prices. Please note that neither the visa application center nor the Embassies endorse any agency or agents, and they cannot be held responsible for their actions.\nPlease note that the Austrian Visa Application Centre will be closed for one working day on 15 September 2016 and will open as usual on 16 September 2016. Thank you.\nAs a solution has been found to the technical issues, the Joint Visa Application Centres in Hangzhou and Nanjing will be able to accept applications for Austria from now on. All applicants from the Shanghai jurisdiction can submit their visa application in Hangzhou and Nanjing for their convenience. Thank you.\nThe Embassy of Austria in Beijing and the Consulate General of Austria in Shanghai are pleased to announce the launch of the Joint Visa Application Centre \u2013 Xi'an, Chongqing and Hangzhou on 5 April 2016, the Joint Visa Application Centre \u2013 Shenyang, Wuhan, Changsha and Chengdu on 8 April 2016, the Joint Visa Application Centre \u2013 Jinan, Shenzhen and Fuzhou on 15 April 2016, and the Joint Visa Application Centre \u2013 Nanjing and Kunming on 22 April 2016.\nage of 12 or if it is physically impossible, she\/he need not come for submission in person.\nMore information is available on the \"Biometric Data Collection\" page of our website.\nPlease be informed that the Austria Visa Application Centre in Guangzhou will be relocated to a new location from 28 August 2015 onwards. Please visit our Centre at \"Room 06B, 2nd F, Sunrich Plaza, No.988, Guangzhou Da Dao Zhong, Tianhe District, Guangzhou China\".
Please accept our apologies for any inconvenience caused.\nApplicants are required to submit the new version of the Visa Application Form from 5 June 2015.\nFrom now on, applicants in the Beijing and Guangzhou jurisdictions are required to submit the new version of the Visa Application Form. Please note that the new form is editable. It is highly recommended that the applicants' details are typed into the PDF visa application form before printing it; you can download it from this website.\nPlease be informed that the Austrian Visa Application Centre \u2013 Shanghai will be relocated to a new location for the submission of visa applications and for the collection of processed visa application results. Effective 1 June 2015, applicants need to visit the new Austrian Visa Application Centre located at \"3rd Floor, Jiushi Commercial Building, 213 Sichuan Middle Road, Huangpu District, Shanghai, China\".\nFrom 1 May 2014 the Austrian visa application centre in Shanghai will change its processes in an effort to further improve the customer journey. From this date applicants can submit their visa application without the need to make a prior appointment. Group applications (more than 5 applications) can be submitted between 12:00 and 14:00.\nBusiness visa applicants holding an invitation (original or copy) from an Austrian company, an EVE or a GVE, and who have held 3 Schengen visas within the last 2 years, can authorize a representative to submit the visa application on their behalf. The representative will need to carry an ordinary authorization letter issued by the applicant and his\/her ID card in order to submit the application at the visa application centre.\nApplicants are reminded that they are responsible for ensuring that application forms are correctly completed and that information is accurate.
The Austrian Consulate in Shanghai will refuse applications if information contained within the form is found to be inaccurate, even if the form was completed by an agent\/representative.\nFrom 25 June 2013, the Austrian Visa Application Centre (VAC) in Shanghai will accept Slovenian Schengen Visa (Type C) applications on behalf of the Austrian Consulate General in Shanghai.\nApplicants need to make a prior appointment to submit their application at the VAC. To fix an appointment, please contact our helpdesk on +86 (21) 33661347 between 8:00 and 15:00 (Monday-Friday, except public holidays) or write to us at infosha.autcn@vfshelpline.com.\nAttention: the guarantor should submit the standard form of the formal obligation letter as provided for by Slovenia's national legislation (Article 24 of the Aliens Act). The formal obligation letter must be legalized by an Administrative Unit in Slovenia.\nThe Consulate General of Austria warns applicants against taking assistance and consultations from any kind of unauthorized agents - especially agents offering such services outside the Embassy, Consulate General and Visa Application Centre entrances. Neither the Consulate General nor the Visa Application Centre approves of such services, and neither shall assume any responsibility for the information provided by such agents.\nApplicants are expressly warned about the use of services offered by visa agents (who usually charge high amounts for assistance in obtaining a visa), as their services are often associated with illegal and unlawful practices. The visa applicant may then be confronted with rejection of the visa application as well as the refusal of a visa by all other Schengen Embassies\/Consulates.\nPlease note that scheduling an appointment to submit your application at the Visa Application Centre Shanghai is FREE OF COST. No additional money should be paid to anyone for this service. You can also call the helpline number (Shanghai: 0086-21-33661347) to make an appointment.
You need to give the name of the applicant and the passport number.\nPlease note that for security reasons we do not allow anybody other than applicants inside the Austrian Visa Application Centre. The only exceptions to this policy are those accompanying children under 18 years of age. In addition, applicants who have previously held a 6-month or one-year multiple-entry Schengen visa issued by the Consulate General of Austria in Shanghai can submit their application through an agent\/representative who holds an authorization letter issued by the applicant.\nThe Austrian Visa Application Centre has been appointed by the Austrian Ministry for European and International Affairs as a collection agent for visa applications in Beijing, Shanghai and Guangzhou with effect from 18 April 2012.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Direct Costing: Check out the Cost Details of a Specific Project!\nGetting sponsors for projects is one difficult aspect. Did you know that there are multiple facets associated with this concept that need to be taken into consideration?\nWe at 24x7assignmenthelp.com make sure that students using our manuals get a detailed idea of the concept through our Direct Costing on Sponsored Projects assignment help services and learn the financial details that are required in this context. We provide a detailed analysis and description that makes all our manuals unique compared with other help services.\nSo, geared up? We have the best in store for you!\nIn the case of projects that are sponsored by the government or any such agency, there is an associated cost attached. This can be better defined as knowing whether there is an appropriate budget that can help in charging a certain amount for the project that has been sponsored by the government. There are investors, departmental officers and a set of grant administrators who are present to determine the complete context.\nConfused, are you?
With Direct Costing on Sponsored Projects assignment help you can get a detailed idea of the amount of money that is to be charged against this project. Also, varying rates are associated with projects such as these, and hence one needs to have an idea of when each rate is to be charged.\nHow is a homework manual more efficient than a project?\nA homework manual has a detailed analysis of the concepts as well as a systematic presentation of these details. This is in stark contrast to assignments, which are far more elaborate and provide you with inordinate detail.\nThe best part of such homework manuals is the set of formulas and questionnaires associated with them. Unlike any project, these manuals have answers for probable queries, making students ready to take the test.\nSo, with Direct Costing on Sponsored Projects homework help you are sure to get a systematic approach to the query.\nOur offerings are specialized to ensure that all students get maximum service from us.\nOur manuals are prepared by experts who have complete knowledge regarding sponsorship projects and how to deal with them in a specified manner. Therefore, with the Direct Costing on Sponsored Projects assignment help you get a chance to learn the systematic mode of functioning.\nWe have a specialized team to answer all your queries via mail, live chat and hotlines. So, whatever your query and whatever the hour, you can just click on our website; we are ready to serve you.\nOur specialized team of experts has a special segment for children with special needs, who need some extra help.
Our manuals are specifically prepared for them to understand the concepts in detail and prepare themselves.\nAlso, our test series and additional manuals are available to make students all set for answering queries.\nGet yourself a copy of Direct Costing on Sponsored Projects homework help from 24x7assignmenthelp.com and see how well you understand the facts.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We are the Animals is an installation examining human-animal interaction and how we relate, individually and collectively, with different species. Through the use of masquerade and collaborative performances this piece explores the similarities and differences between species. By anthropomorphizing the behavior of animals that coexist with humans, the behavior becomes easier to identify with and evokes notions in the viewer that may have otherwise lain dormant. Humans are the hardest animals to cohabitate with. The wild and domesticated animals surrounding us are nuisances, pets, endangered or extinct. We have cultivated animals selfishly for our needs: protection, food, testing, hunting, transportation, clothing, and comfort. This is an homage to all the non-Homo sapiens survivors of the environmental artifice civilization has created. May they endure in spite of us.\nThese images are from an installation at the Broad Museum of Art in East Lansing, MI.\nWe Are The Animals from Elise Toups on Vimeo.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Home Supplies (Wessex) Ltd & Paramount Plating Limited have both been established for over 40 years and are family-run businesses priding themselves on quality of product and a reliable service.
Although run as separate companies, they work closely together to operate an almost unique one-stop shop for a huge range of manufacturing and finishing operations.\nHome Supplies have a comprehensive manufacturing facility including pressings, forming, tube bending, punching, welding, mesh welding, vacuum forming, wire basket production, and a very extensive plastic injection moulding shop which uses all commonly used plastics, e.g. polypropylene, nylon, ABS, styrene, acetal, etc. Our modern facility uses the latest technology to maintain colour and quality 24 hours a day.\nParamount Plating has a well-equipped, fully automated finishing shop, processing metallic parts in a variety of different finishes, including zinc (bright and colour), barrel zinc, vibro-deburring and polishing, mechanical polishing and others.\nAt Paramount Fixings we hold in stock a huge range of all the most commonly used nuts, bolts, screws and other fixings for use in industry, agriculture, building, manufacturing, engineering and DIY. Our range and experience in this field mean that we can source practically any shape, size, finish or material.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"There is, I am convinced, no picture that conveys in all its dreadfulness, a vision of sorrow, despairing, remediless, supreme. If I could paint such a picture, the canvas would show only a woman looking down at her empty arms.\nMy fine visions are all very well, but I must not forget they are absolutely unreal.
I have a rosy sky and a green flowery Eden in my brain; but without, I am perfectly aware, lies at my feet a rough tract to travel, and around me gather black tempests to encounter.\nI mean that I value vision, and dread being struck stone blind.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzalqvg b/data_all_eng_slimpj/shuffled/split2/finalzzzalqvg new file mode 100644 index 0000000000000000000000000000000000000000..0c471a94bb3c6b2bb06f1bedf3fe2cd01251e769 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzalqvg @@ -0,0 +1,5 @@ +{"text":"najdi.si is a website that ranks 7,493 in Alexa. najdi.si is ranked 7,375 on statisy and has 174,087 backlinks according to Alexa. The hostname or fully qualified domain name (FQDN) najdi.si is identical to the domain name najdi.si. The domain is registered under the domain suffix si and is named najdi. The najdi.si server is hosted by AMiS and is located in Slovenia (Brezovica). najdi.si is not listed in the dmoz open directory project. After analyzing najdi.si's demographics, we have determined that najdi.si's average users are 25-34 years old, with a College education. We have also determined that najdi.si's average user earns $0 - $30K a year and is most likely Male. Oh wait, it seems like we know a little bit more about najdi.si's average user: they have No Children, browse from Home and are Other.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Microsoft is testing yet another Windows 10 build from the development branch (Redstone 3) for Windows Insiders. Future builds of Windows 10 are going to introduce many new features and major UI changes on PCs and tablets. Microsoft has recently released an update for Windows 10 PCs and tablets in the Fast Ring.\nMicrosoft has now internally compiled a new Windows 10 Build 16193.1001.
The company compiled the update on May 7 and it will be released to Windows Insiders in the Fast Ring today or tomorrow.\nFurthermore, Microsoft will also release ISO images of Windows 10 Build 16193.1001. As today is Build 2017 Keynote Day 2, Microsoft may release the update later today along with the ISO images and SDK.\nToday, Microsoft is expected to talk about the new features coming to Windows 10 with Redstone 3. You can read Build 2017 updates by clicking here.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The services of a dentist are critical to every family that is committed to maintaining proper dental health for its members. The choice of professional determines the quality of services that people can get. Proper dental health requires individuals to seek checkups from the right dentist even when they do not have any unwanted feeling in their teeth. Patients should seek to have a good understanding of a dentist before acquiring their services, to be assured of the needed quality of treatment.\nDentists have found tooth pain to be a condition affecting many patients. Tooth pain can be a sign that an individual has a dental problem. There are times when the feeling of pain in the teeth comes together with weakening of the teeth. A bad feeling in the teeth during meals or when taking water requires individuals to seek dental help with immediate effect. Pain in the teeth might be an indication of a problem with the nerves or the blood vessels. An immediate response helps in managing the condition at the right stage to protect the teeth from further damage.\nA crack in a tooth demands immediate treatment, as continued exposure of the nerves can easily encourage infections. Teeth might sometimes crack due to eating hard foods. Sensitivity to cold or hot drinks requires fast action by the affected to acquire the services of a dentist to determine the problem and the solution to the condition.
Sometimes infections of the teeth can manifest themselves by causing sensitivity to cold or hot substances. People need to be concerned about tooth sensitivity, as it might be a sign of an unwanted condition within the teeth.\nSwelling of the tissues around the teeth requires a fast response by the affected to acquire the right dental services. If the teeth attached to the gums have a problem, the gums are likely to swell. A dark color of the teeth might result from poor dental care. Food particles left on the surface of the teeth end up decaying, leading to tooth decay. A bad smell of the mouth might indicate a problem even if the owner has been keeping the mouth clean, thus the need to seek the advice of a dentist.\nPeople should seek dental help from the right dentists as soon as they notice a minor condition within the mouth, as it might be an indication of oral cancer. Getting dental help for teeth conditions can help individuals notice the onset of oral cancer and therefore seek the right treatment while the condition is manageable. Regular visits to dental clinics for checkups should be the order of the day for every individual to prevent the infections leading to poor dental health.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Under certain conditions, users may experience unusual behaviour when moving parts or sub-assemblies using the compass.
It may appear that the part or sub-assembly being moved is not moving as the cursor is moved along the screen, only for it to suddenly jump across and then appear to freeze again.\nIn the example shown above, the part will appear to jump when moved in the U (or X) direction, and will move by 100mm once a certain amount of cursor input has been received.\nThis behaviour can be used to great advantage if, for example, a user wants to translate a part in 2mm increments to see how the position appears in the assembly, whilst being able to just drag the mouse and be confident the part will only move in the set increments.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Is United's destroyer Lionel Messi the first 'New Maradona' worthy of the name?\nIf you're a promising footballer, five foot nothing and from Argentina, there's a good chance you'll be described as the 'New Maradona'.\nPundits and punters alike fall over themselves to hail the 'new Diego', the player who can take on the Argentine's mantle as a force in world football.\nIf you type the phrase into an internet search engine, you get 340,000 entries.
And that's just the English web pages.\nLionel Messi's header against Manchester United in the Champions League final was a case in point.\nSportsmail described Barcelona's second goal as the 'Head of God', a reference to El Diego's 'Hand of God' against England in the World Cup quarter-finals of 1986.\nIt is not the first time similarities have been drawn between Messi's talent and the ability of his national coach.\nThe Barcelona player's strike in a 5-2 win against Getafe in 2007 ignited the debate - Messi's dribbling and trickery had clear echoes of Maradona's spellbinding winner against England in 1986.\nMessi's not immune to putting the ball in the back of the net using his hand, either.\nNow, finally, 12 years after Maradona retired, his successor may have been crowned.\nDiego Latorre, who disappeared without trace after joining Fiorentina in 1991, was the original 'new Maradona', but what about the long list of players who have followed?\nThe 5ft 6in striker was hailed as the 'new Maradona' when he became the youngest player to win El Goleador, the Golden Boot, in the Argentine league. Whose record did he beat? Maradona's of 1978.\nThe comparisons continued thanks to Saviola's heroics in the 2001 Under 20 World Cup when he scored 11 goals and was named player of the tournament - just like Maradona in the 1979 competition.\nSaviola's decision to leave River Plate, whom he helped to the Argentine championship, and join Barcelona for \u00a320m did not help matters. The Nou Camp was also Maradona's first European port of call when he left Boca Juniors in 1982.\nBut then the wheels came off . Saviola, nicknamed 'The Little Rabbit', fell out of favour at Barcelona and his 2007 move to Real Madrid has failed to get his career back on track. 
As if things couldn't get any worse, he's been linked with Newcastle United.\nHe may not look like Maradona (he's 6ft tall) or, indeed, emulate El Diego in the way he plays, but he certainly followed in Maradona's footsteps in his playing career.\nBoth started at Boca Juniors, wore the No 10 shirt and were successful before moving to Spain and struggling at Barcelona.\nBut Maradona and Riquelme (left) went on to find glory with smaller clubs and returned to Boca Juniors later in their careers.\nAn article in The Scotsman called Riquelme 'the daddy' of all the so-called 'new Maradonas' in 2006.\nA 5ft 7in number 10, Aimar (right) fitted the 'new Maradona' bill as he helped Argentina to the 1997 Coca-Cola World Youth Cup.\nSuccess with River Plate led to a \u00a320m move to Rafa Benitez's Valencia, with whom Aimar won La Liga and reached the Champions League final.\nDiego Maradona once described Aimar as his 'legitimate successor as the world's best player' and said he would pay to watch him play.\nMaradona called Tevez 'the Argentinian prophet for the 21st century', which sounds like a lot of South American hyperbolic drivel but makes the point.\nSquat, 5ft 6in, no oil painting but a powerful, explosive player, Tevez (left) was touted as the 'new Maradona' when he burst on to the scene with Boca Juniors in 2001.\nThe pair became close and Maradona has admitted the Manchester United striker is like a 'son' to him.\nOrtega enjoyed early success with River Plate and impressed on the international scene with his pace and dribbling ability.\nThe 5ft 7in attacking midfielder was a key part of the Argentina team for over a decade, representing his country at the 1994, 1998 and 2002 World Cups.\nBut, like Maradona, Ortega's off-field antics had a negative impact on his performance on the pitch.\nAlcoholism has dogged his career, while his temper sometimes got the better of him.
Remember Ortega head-butting Edwin Van der Sar at the 1998 World Cup?\nA 19-year-old d'Alessandro trained with Harry Redknapp's West Ham in 2001, with the Hammers boss saying: 'He's a fantastic talent. He's the type of lad that can play anywhere.'\nForget Joe Cole, d'Alessandro was lauded as the latest 'new Maradona' after coming through the River Plate youth ranks and impressing during the 2001 Under 20 World Cup.\nJust 5ft 4in, the playmaker eventually moved to Europe in 2003, joining Bundesliga side Wolfsburg for \u00a36.2m.\n'A young Argentine dubbed \"the new Maradona\" has been training with West Ham,' said The Daily Star in February 2001.\nThe latest 'Argentine wonder kid' joined Chelsea in January 2008 after spending two years at Chilean side Audax Italiano La Florida.\nDi Santo shows just how far the 'new Maradona' label stretches - he's 6ft 4ins and 13 stone worth of muscle, but he's still going to be the next Diego. We'll see.\n'Up to 10 Premier League clubs are chasing the signature of 6ft 4in, 18-year-old Argentine wonderkid Franco Di Santo - described as the new Maradona,' said BBC Sport's Sunday gossip column in August 2007.\nMarinelli was a tad tall for the Maradona comparisons (5ft 11ins), but he came to Europe from Boca Juniors so everyone overlooked that.\nEven Middlesbrough, who signed the 17-year-old for \u00a31.5m in 1999.\nHe had a miserable time on Teesside and skulked back to Argentina after being released at the end of the 2003 season.\nBoro manager Bryan Robson said: 'People always like to tag a player and I hope he does turn out to be the new Maradona.'\nThere really is a 'new Maradona'.
And we don't mean Lionel Messi.\nAtletico Madrid's Sergio Aguero, a 5ft 7in Argentina striker also often likened to El Diego, is engaged to Maradona's daughter, Giannina, who gave birth to the couple's first child in February this year.\nBenjamin (left, with father Sergio) is Maradona's first grandson and expectations are already rife that the baby will follow in his father and grandfather's footsteps.\nWell, he's hardly going to be a giant, is he?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzalvqv b/data_all_eng_slimpj/shuffled/split2/finalzzzalvqv new file mode 100644 index 0000000000000000000000000000000000000000..fbbe51107212dc57eb05f00e57d9ce32d5a048ed --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzalvqv @@ -0,0 +1,5 @@ +{"text":"Break out the leg warmers and parachute pants, it's time to party like the '80s are back with Toma's fun and funky new single.\nSummer and surf rock go together like peanut butter and jelly, and this single by Mean Jolene is making our hot days much cooler.\nIs the summer heat giving you the blues? Well get some relief with these Austin shows, featuring some of our favorite blues artists.\nPunk in Austin is not dead, and Lung Letters is helping to keep it alive. Their newest EP and single are definitely proof of that.\nFeaturing local and national bands, we've got some of the most diverse groups Austin has to offer this weekend. Let's see some live music!\nVertical Vice may be a new band, but its members have been Austin musicians for years. 
\"Repetition\" is the perfect single to show off this new group.\nNocturne is an earth-shaking, punk-inspired tune that rages, while remaining passionate throughout.\nBlood Pumps may just be our favorite new band currently creating music in Austin -- this pop song certainly gets our plasma flowing!\nThe frenetic pace of \"Chasing,\" from Dude Elsberry's first full-length album, gives us a rush almost strong enough to make us skip our morning coffee.\nQuite a few in-store performances made our list this weekend, which means free for you -- but you'll have to get out of the house early to grab a spot!\nWith so much live music in Austin to see every weekend, we know it is difficult to pick. To make it easier, we've picked 10 shows that we personally want to go see this weekend. Check it out!\nTele Novella's ode to California is an infectious track, perfect to keep on repeat as the summer months begin to heat up.\nIt's that time of the week again -- here are our top picks for live music in Austin. Grab your earplugs and your favorite local beer and check them out!\nWe're ready for swimsuit weather now that Austin's Summer Salt has provided us with the ultimate collection of poolside music.\nIt's that time of the week again! Here are our top 10 picks for live music in Austin. Be sure to add them to your calendar.\nSabrina Ellis, known as the lead singer for Austin punk band A Giant Dog, is back with a new solo project called Sweet Spirit.\n\"I Had to Do It\" by local band Daniel Francis Doyle and The Dreams is their latest release in several years, and also slotted to be their last.\nAlex Rose plays soft indie rock that is perfect for the genre's evolving scene in Austin.\nThere is always too much live music going on in Austin for anybody to sort through on a given night. Thankfully, there's also this weekly list!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Humayun Tomb. Humayun was the eldest son of Babar, the first emperor of the Mughal Empire in India.
He succeeded in becoming the next emperor. Humayun ruled India for about a decade until he was beaten by the Afghan emperor Sher Shah Suri. In 1555 AD Humayun regained Delhi with the help of the Shah of Persia. Humayun died an unfortunate death less than a year after his reconquest. He fell from the stairs of his own library, known as the Sher Mandal library. The Persian wife of Humayun, named Bega Begum, then decided to build a tomb for her husband, which was named Humayun's Tomb. The construction of the tomb started in 1562 and the building was completed in the year 1572. This tomb became a landmark in establishing different essential norms for buildings later built in the Mughal era.\nAn excellent post... and very educative.\nGreat one, Geeta... it might be helpful.\nGeeta, you have always shared something educative. This time is the same. Keep going.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Local locksmith of Crystal Bay MN. Get a mobile locksmith near Crystal Bay, Minnesota in 15 minutes.\nLocal locksmith of Dayton MN. Get a mobile locksmith near Dayton, Minnesota in 15 minutes.\nLocal locksmith of Hampton MN. Get a mobile locksmith near Hampton, Minnesota in 15 minutes.\nLocal locksmith of Hills MN. Get a mobile locksmith near Hills, Minnesota in 15 minutes.\nLocal locksmith of Hopkins MN. Get a mobile locksmith near Hopkins, Minnesota in 15 minutes.\nLocal locksmith of Lonsdale MN. Get a mobile locksmith near Lonsdale, Minnesota in 15 minutes.\nLocal locksmith of Maple Plain MN. Get a mobile locksmith near Maple Plain, Minnesota in 15 minutes.\nLocal locksmith of Marine On Saint Croix MN. Get a mobile locksmith near Marine On Saint Croix, Minnesota in 15 minutes.\nLocal locksmith of Minneapolis MN. Get a mobile locksmith near Minneapolis, Minnesota in 15 minutes.\nLocal locksmith of Minnetonka Beach MN. Get a mobile locksmith near Minnetonka Beach, Minnesota in 15 minutes.\nLocal locksmith of New Prague MN.
Get a mobile locksmith near New Prague, Minnesota in 15 minutes.\nLocal locksmith of Plato MN. Get a mobile locksmith near Plato, Minnesota in 15 minutes.\nLocal locksmith of Saint Francis MN. Get a mobile locksmith near Saint Francis, Minnesota in 15 minutes.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The basic Japanese tree forms have evolved over the years as a way of categorising bonsai and also helping to establish basic guidelines for styling trees.\nThese form definitions are helpful to the beginner to help develop an eye for different tree shapes and to help define different trunk and branch patterns.\nIt is very useful for the beginner to start his or her bonsai styling education by learning these basic forms. However, once learnt, the enthusiast must not make the mistake of being bound by these definitions either.\nIn many textbooks, the following forms are described as bonsai styles, however there is a strong movement, instigated by Walter Pall, to make a distinction between the form ( according to the predominant feature or direction of the trunk) and the style (the manner in which the form is displayed), and for this reason this article follows this re-categorisation by listing bonsai forms.\nIn summary: The form describes the basic shape of the tree as defined by trunk, the style describes the the way in which the tree has been styled (for instance windswept, near or far away from the viewer, naturalistic or abstract).\nThis is a list of the basic bonsai forms but is by no means a complete list of all bonsai forms or the many variations of the different forms that exist.\nThis form is the most commonly seen in Bonsai and in nature. It can be used for most tree species. The trunk can twist, turn and change direction with a number of bends along its length though the growth is basically upwards.\nBranches tend to emerge from the outside of bends. 
Branches emerging directly from the inside of bends often look awkward. The overall silhouette of an informal upright is often irregularly triangular but does not have to be.\nConiferous species such as Pines and Junipers are often seen with largely horizontal branching and clearly defined 'clouds' of foliage.\nDeciduous and broadleaf species such as Elms, Maples and box should have predominantly naturally ascending branching and should not have clearly defined foliage pads; a too-common mistake is for deciduous species to be styled with horizontal branching, clearly defined foliage pads and 'pompoms' of foliage.\nIn this form the trunk is completely straight and upright. Previously a popular form, it is now rarely seen, as the majority of upright trees have some movement that makes them informal uprights.\nIdeally the trunk should display an even taper from base to apex. This form replicates trees growing unimpeded in open growing conditions without competition from other trees. The branches leave the trunk alternately from left to right to back, and no branches face the front until the top third of the tree. All the branches will be mainly horizontal or slightly drooping, as if weighed down by snow in winter. This can be a difficult form to carry out convincingly and it is recommended that only trees with a naturally straight trunk be used. The silhouette of a formal upright is triangular though not strictly symmetrical.\nDeciduous species are unsuitable as formal uprights. Coniferous species such as yew, swamp cypress and cryptomeria make good candidates.\nThe broom form replicates the way many deciduous trees grow in nature given ideal growing conditions with no competition from other trees. It is particularly recommended for fine-branching species, particularly Ulmus and Zelkova, but all deciduous and broadleaf tree species are suitable.
The broom form is not suitable for coniferous species, including pines and junipers.\nThe broom form can be further divided into two types, the formal and the informal broom.\nThe best-known broom form has a main trunk that divides at a certain point into three or more branches of roughly equal thickness which grow out diagonally upwards from the central trunk.\nThe silhouette of the tree resembles an upturned Japanese broom; hence the name.\nThe main trunk of the formal broom tends to be 1\/3 of the overall height of the bonsai. There are no horizontal branches; all branching is placed diagonally in a fan shape with no, or very few, crossing branches.\nThere are variations of the classic formal broom; there can be a main trunk that runs from base to apex of the tree. However, unlike an (in)formal upright, the branches are nearly as thick and dominant as the central trunk, but all of these branches are placed at upturned diagonals from the main trunk, forming a broom silhouette.\nThere is no reason why a tree without a straight trunk cannot be used for the broom form; a trunk with bends or movement is simply an informal broom. Quite possibly the most common form of broom seen in nature.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Rejuvenate your face with a Nu Skin Galvanic Spa\u00ae Hollywood facial treatment. A 45-minute treatment which aims to stimulate collagen growth and give instant results, with benefits lasting into the future. Designed to combat signs of ageing and improve skin health. Performed by experienced beauty professionals. Based at DH Faces, a pristine clinic on world-famous Harley Street.
Valid Saturday 9am-9pm and Sunday 9am-5pm.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzalynh b/data_all_eng_slimpj/shuffled/split2/finalzzzalynh new file mode 100644 index 0000000000000000000000000000000000000000..0ca8bb4901bc9dc2b2047ae882ee5304f1a612dd --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzalynh @@ -0,0 +1,5 @@ +{"text":"These muffins are truly rich, chocolatey and decadent. If you want a chocolate overload then this is a winner. There is nothing healthy about them but the flavour makes up for it. The muffins turned out perfectly and had a nice dense bite to them.\nLine a muffin pan with liners and heat oven to 180C. I got a total of 14 muffins.\nWhisk together the eggs, buttermilk and vanilla extract.\nIn a large bowl, whisk together all the dry ingredients: the flour, cocoa powder, brown sugar, powdered white sugar, baking soda, baking powder and salt. Stir all the ingredients together till well combined.\nAdd the wet ingredients to the dry mixture and add the melted butter and chocolate chips in. Reserve some chocolate chips for the topping. Mix with a spatula or spoon till just combined. It doesn't matter if the batter remains lumpy\u2026that's good!\nFill each muffin case till 3\/4 full and add a few chocolate chunks to the top. Bake at 180C for 12-15 minutes on the middle rack.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I like to put my headphones in and let the pencil do the talking. I'm not the best at shading yet but I'm practicing in hopes of getting better. I have a few drawings done and some that are almost finished that I can complete.
I hope y'all like them.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The West Midlands Housing Association Partnership (WMHAP) has welcomed the Chancellor's commitment to invest an extra \u00a3392 million in the region through the Local Growth Fund and address the housing shortage.\nMr Hammond's Midlands Engine strategy pledges to invest millions to unlock land for new housing developments and overcome productivity barriers by closing the skills gap.\nPartnership chair, Kevin Rodgers, chief executive of WM Group, said: \"We welcome the ambition shown by the Chancellor to drive growth and the delivery of much needed housing across the region. The West Midlands has a proud history as a powerhouse of the national economy and this announcement gives us confidence that we can continue this tradition.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A student armed with two handguns opened fire inside a classroom at an Indiana middle school Friday morning, wounding a 13-year-old girl and a teacher, authorities said.\nThe attack occurred just after 9 a.m. at the Noblesville West Middle School in Noblesville.\nSeventh-grader Ethan Stonebraker said that his science teacher \u2014 later identified as 29-year-old former college football player Jason Seaman \u2014 took down the shooter and credited him with preventing more bloodshed.\nThe shooting unfolded after the suspect \u2014 whose identity was not released \u2014 asked to be excused from class and then returned armed with two handguns, cops said.\nPolice believe the shooter, who was taken into custody, acted alone.\n\"The situation resolved extremely quickly,\" said Noblesville Police Chief Kevin Jowitt.\nThe wounded girl was rushed to Riley Hospital for Children.\nSeaman \u2014 who was shot in the abdomen, hip and forearm \u2014 still managed to get the gun and tackle the shooter. 
The teacher underwent surgery at IU Health Methodist Hospital.\nSeaman played defensive end for Southern Illinois University between 2007 and 2010.\nTwice he made the Missouri Valley Conference's honorable mention all-academic team and is still in school record books for most tackles for loss in a single game.\nHis former coach, current SIU athletic director Jerry Kill, said Seaman has always been selfless.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Bonnie Lou's Caf\u00e9 is a late 1800s General Store that belonged to the Ruggle family for four generations. All the original counters, hardwood floors, shelving, spice bins, and even the old post office boxes still remain. There are many antiques on display to enjoy, even a porcelain doll that was found in the wall during renovation!\nThe caf\u00e9 opened in May 2009, and it's a great destination to add to your list of places to see.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzambfj b/data_all_eng_slimpj/shuffled/split2/finalzzzambfj new file mode 100644 index 0000000000000000000000000000000000000000..9117f2a5e41deed53fc4c323d18fb5b5641cb76c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzambfj @@ -0,0 +1,5 @@ +{"text":"In the last seven years, 615 solar companies have gone into liquidation. An even scarier statistic is 105: the number of solar companies that went into liquidation in 2018 alone.\nSure, these figures are frightening.
But for the hundreds of thousands of customers with no warranty, broken installations and no support, it's a nightmare.\nSo, why are so many solar companies going broke?\nThe race to the bottom seems to get worse every year.\nMany companies are bidding for work at a loss just so they can pay their staff and other overheads.\nRather than delivering quality, their focus has been volume of jobs, which has led to some of Australia's largest installation companies going under in the past three months, including Energy Matters and True Value Solar. This means, when companies go into liquidation, their clients are often left with no warranties or support.\nThe other major trap is winning the bulk of work through tenders and requests for proposals (RFPs), where an inevitable reverse auction leads to losses being made by the \"winning\" solar company.\nThe Government's stance on renewables and government funding programs appears to change direction on a weekly basis.\nSetting up a solar company that relies on premium feed-in tariffs, upfront rebates, and battery storage grants is very dangerous because when the legislation inevitably changes, the business model often falls apart.\nA good financial adviser would encourage their clients to diversify their investments so that if one investment class drops then the majority of the portfolio may still grow.\nThe problem with solar companies is they put 'all their eggs in one basket', and when the solar industry sees a downturn or if power prices drop, the entire business could get dragged down in a very short period of time.\nWithout having complementary services that can sustain a company through a downturn in solar sales, it's very unlikely the solar business will be around in the medium to long term.\nPotential clients are seeking a lot more than just product specs for their purchasing decision, and yet most solar companies have no idea what these factors are.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Arrive Delhi
and transfer to hotel. Overnight at hotel.\nMorning sightseeing of Delhi visiting Red Fort, Jama Masjid, Dargah of Hazrat Nizamuddin Auliya, Humayun's Tomb. Also visit India Gate, Qutab Minar. Overnight at hotel.\nTransfer to railway station to connect train for Ajmer. On arrival transfer to hotel. Afternoon tour of Ajmer visiting Dargah Shareef - the shrine of Khwaja Moinuddin Chisthi. Continue on to Adhey Din - Ka Jhanpara. Proceed to Emperor Akbar's Royal Palace made of red sand stone - housing a museum with a rich collection of Moghul & Rajput armour. After that, drive around Ana Sagar lake. Overnight at hotel.\nDrive to Jaipur and check in at hotel (Ajmer\/Jaipur 131 kms\/03 hrs). Tour of Jaipur visiting Maharaja's Palace, Hawa Mahal, Royal Observatory and Albert Museum. Overnight at hotel.\nMorning excursion to Amber Fort, where ascent to the fort is on elephants. Afternoon at leisure.\nDrive to Agra via Fatehpur Sikri (Jaipur\/Fatehpur Sikri\/Agra 221 kms\/05 hours) to visit the Mosque built by Emperor Akbar in the memory of Sheikh Salim Chisti. On arrival, check in at hotel. Overnight at hotel.\nMorning visit Taj Mahal - built by the Moghul Emperor Shah Jehan in memory of his wife Mumtaj Mahal; Agra Fort on the opposite bank of River Yamuna built by four successive Emperors; and the tomb of Itmad-Ud-Daullah. Afternoon at leisure.\nAfternoon drive to Delhi via Sikandra (Agra\/Sikandra\/Delhi 203 kms\/05 hours) - Sikandra is the Mausoleum of Emperor Akbar. On arrival, check in at hotel. Overnight at hotel.\nDrive back to Delhi and overnight at hotel.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Sorrento Castellammare di Stabia ferry route connects Sorrento with Castellammare di Stabia in Italy and is currently operated by 2 ferry companies.
The Alilauro service runs up to 7 times per week with a sailing duration of around 20 minutes, while the Seremar service runs up to 7 times per week with a duration of around 30 minutes.\nSo that's a combined 14 sailings on offer per week on the Sorrento Castellammare di Stabia route between the two Italian ports. Compare now and get the best fare at the time that you want to travel.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Rio 2 Postkartenkalender 2015 is a good choice for you if you are looking for a nice reading experience. We hope you are glad to visit our website. Please read our description and our privacy and policy page.\nFinally I get this ebook; thanks for all these, I can get Rio 2 Postkartenkalender 2015 now!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Problem with Forever by Jennifer L. Armentrout was so good! I absolutely loved it. It is a young adult novel but it was just what I needed after being in a bit of a funk last week. It definitely still had some dark and tragic moments in the book but that's okay because they were important to the story line and so well written!\nThe book is about a girl named Mallory. She has been homeschooled because of her traumatic past but decides she needs to go to high school for her senior year. She obviously didn't expect to run into Rider, the boy from her past, on her first day. They grew up together in a foster home but then they were separated for years.\nHe protected her in that home and he has every intention of doing so now that they are back together again, but now they are different people. We follow their relationship and everything they have to face as Rider never came to terms with their past. He starts to spiral out of control and Mallory needs to decide whether she's going to stay quiet or do something about it.\nThis book really makes you realize that you have no idea what anyone has been through or what challenges they are still facing.
It definitely goes to show that you should always reserve judgment. No one's life is perfect but that's hard to remember sometimes.\nI'm not going to give anything away so you'll have to read it for yourself. It's worth a read; I couldn't put it down!\nHave you read this before? Let me know!","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzanjqz b/data_all_eng_slimpj/shuffled/split2/finalzzzanjqz new file mode 100644 index 0000000000000000000000000000000000000000..17fd9f390880f852b59f6653373c1a37da2416c6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzanjqz @@ -0,0 +1,5 @@ +{"text":"I remember watching the original Carrie when I was younger, and actually a few years ago the hubby and I saw it on TV and watched it again. It's just one of those classic movies you think shouldn't be remade, but know it's going to happen because of how popular it was. Well I have to admit, I had my doubts on how this new version would compare with the original version, but I have to say I was pretty impressed. Chloe Grace Moretz did an amazing job as Carrie and Julianne Moore was amazing as usual. With all the special effects, I definitely think this remake was done with great respect for the original movie. I also loved that there was an alternative ending to the movie and over an hour of extras you can watch. If you loved the original version, be sure to check out the latest remake of Carrie!\nDisclosure: All opinions are my own. I received Carrie on Blu-ray at no cost for the purpose of this review. No other compensation was received.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Pemberley Pouch is designed to be a multifunctional portable storage pouch that is perfect to store just about anything, from personal items and household items to all kinds of gadgets that need to be organized neatly. Use them at home or away to organize and carry craft items, kid items, cosmetics and so much more.
Ideal for daily use and travel inside a bag or alone.\nThe Pemberley Trio pouches are compact and easy to carry. Perfectly portable, the set includes 3 different sizes to fit neatly into different purses, handbags, backpacks or luggage, with tons of pockets to arrange everything neatly inside and out and save space in your bags. Easy to lift out to use separately.\nA quick sew that does not require a lot of materials or time to make. Makes a perfect gift for adults and kids alike, or a sew-and-sell project for your next market or fair.\nThe trio includes 3 sizes; make one or make a set!\nSmall Measures: 6\" tall, 9\" wide, and 3\" deep.\nMedium Measures: 7\" tall, 10\" wide, and 3 \u00bd\" deep.\nLarge Measures: 8\" tall, 11\" wide, and 4\" deep.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"1 Tanzanian Shilling (TZS) to Macedonian Denar (MKD) Exchange Rate - TZS\/MKD Charts, History, Historical Rates, Currency Converter.\nThe 1 TZS\/MKD exchange rate shows how much 1 Tanzanian Shilling is worth in Macedonian Denars.\nYou can also check the inverse of the rate, 1 Macedonian Denar to Tanzanian Shilling. Exchange Rates are updated each minute at 1exchangerate.com. In addition to the exchange rate, you are also able to convert Tanzanian Shilling and Macedonian Denar to any other currency.
You can calculate cross rates of these two currencies by using the currency calculator below.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"How do I access wired LAN resources in Datto Network Manager?\nIn order to access local resources on your LAN you will need to bridge one of your SSIDs to your LAN so your wireless clients will get an IP address from your router.\nTo bridge an SSID, select the desired SSID, then go into the Advanced section, and enable the 'Bridge to LAN' toggle.\nThen click \"Save Settings\" and wait for the configuration to take effect after about five minutes.\nNote: For networks running legacy firmware you can find the option to bridge on the SSID 2 tab.\nCan I adjust the transmit power of the OM2P?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Great pics! I came via the challenge post pingback, can't get mine to stick grrr, anyway your site is looking very professional, makes me feel like I need to do some work. Your water market photos are wonderful, I esp like the \"busy\" shot.\nThe photos were all rather poor \u2013 the morning light was grey and humid and the movement on the boat made it difficult. It's always hard to get good light in the tropics. So I am pleased that you think they look ok. The busy shot was majorly edited.\nThat's annoying when the post doesn't load into the wordpress site. Annoying. It will go there eventually.\nPhoto editing definitely has its uses, no one would know..\nawesome, i missed my opportunity to go there last time i was in vietnam. I will make it a priority on my next visit! nice post!!!\nThanks Patrick. It's a great trip. They used to do a trip by river from Can Tho to Phnom Penh in Cambodia but it has been cancelled. This would make me go back there. I am a little obsessed with the Mekong.\nI think Vietnam is a must to return to soon. We did not visit HCMC last time, so we will be this time. Great photo of the guide.\nShe was a lovely young woman, intelligent, educated, funny.
Her tour guide gigs were worked in around her studies at Uni.\nOh Lord: this post really puts Vietnam first on my 'to-do' list: never mind about friends on a few other continents awaiting! Remember doing wonderful early morning canal trips to markets in Bangkok \u2013 obviously way back and nowhere as interesting tho' mindbending then! Methinks people visiting soon should really take your advice to have an incomparable experience . . .\nI hope you get to go there Eha - think of Vietnam as a nice warm stopover for a couple of weeks on the way to those other continents.\nGreat photo shots Francesca of life on the Mekong. Can't believe you did all that! You have so much energy.\nI am a gypsy at heart. I love doing these things throughout Asia.\nYes, you've always been a tiger. The way I feel lately, the only boat I'll be going on to see Vietnam is a 66,000+ tonner! Yes a cruise ship stopping all ports, OMG, but I did manage to board the slow canal boats in Norway and St Petersburg last year \u2013 required absolutely no energy \u2013 just sat there and stared at all the gold baroque buildings and wonders.\nI thoroughly loved that tour but wonder if you encountered the \"Lotto Lady\". She was whizzing about in her small motor boat selling tickets to all the marketers. As a sideline she also served hot delicious coffee. Her voice is what I remember \u2013 it was so loud and squeaky she could stop traffic in Saigon!\nHahha, I came across quite a few lotto ladies in Vietnam. I also noticed some poorly clad young Spanish tourists abuse a Lotto Lady, thinking that she was trying to con them for some motorbike parking money. Off they went, in a huff. I just sat under a shady tree and laughed.\nYou are a real traveler Francesca. What a very thorough description of getting to the markets and I love how deftly you tied it into the H2O theme \ud83d\ude42 That python scene really did my head in.\nIt did my head in too. I didn't know what to expect until I arrived there.
I suppose it isn't much different from force feeding a goose in the Perigord to enlarge their livers to make pate.\nYes, force feeding a goose was exactly what came to my mind and that does not sit well with me either.\nShe was an English High School teacher and it comes out in her descriptions of things! Very seasoned.\nGreat photos! I really love how you twined water AND markets. Really glad you didn't include a photo of the snake farm. It would have done my head in as well. Rice paper and noodle making, however, sound much more interesting!\nThe snake farm was really shocking.\nLovely experience and interesting shots!\nI think you would love it Julie.\nA 7 hour tour on the water? You're hardy! I love to get out and see the sights and much of the time, tours are the best way to sample what a region has on offer, but I think I'd balk at 7 hours. Duly noted about the snake farm.\nThe only way to visit these markets is with a lady with a boat, and a tour guide who speaks English. It is a personalised tour in a fairly remote district. Yes, I am hardy. You do a 7 hour tour or you don't get to see anything at all.\nReading your posts back to back, I came here from Trump and I find your description of the snake farm eerily reminiscent of what capitalism and politics does to humanity: making us more pliable for when we die. Thanks for the warning.\nAhaha, yes! We are all being massaged with crap and brainwashed with nonsense along the way to a reptilian death.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaslip b/data_all_eng_slimpj/shuffled/split2/finalzzzaslip new file mode 100644 index 0000000000000000000000000000000000000000..0249a5214706bd606af7cdafe2b83ceb6d07f586 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaslip @@ -0,0 +1,5 @@ +{"text":"You found Neoprene Universal Wrap Around Knee Support (Regular) in category Exercise\/Wellness and subcategory Sports Medicine.
If you need to buy more Sports Medicine then you are in the right place.\nAdditional 5 cm2 head for the Intelect\u00ae Legend ultrasound and combo units. There is no actual image of this item. The image shown is representative only. The actual item will be a sound head only.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Dear beloved children and brethren in the lands of immigration, Clergy and congregation.\nIt is my pleasure to congratulate you on the Glorious Feast of Resurrection, hoping that you have a joyful, holy and blessed life.\nTalking about the Resurrection is a joy for the ears, because it makes the heart full of joy and hope. Why?\nBut death was an intruder to human nature that God created for life. Therefore, due to His love, God wanted to return us once more to life. So, as He allowed death to enter into our nature, He also allowed the resurrection to enter into our nature\u2026so that we will return to life.\nGod has given us examples of resurrection from the dead in the Old Testament through Elijah the Prophet (I Kings) and Elisha the Prophet (II Kings) and in the New Testament, the Lord raised the daughter of Jairus (Mark 5:42, Luke 8:41) and the son of the widow of Nain (Luke 7:14-15). And less than two weeks before His crucifixion the Lord raised Lazarus from the dead so that His disciples would believe that if He is able to raise a person from the dead after four days, He is also able to rise.\nAnd the Lord Jesus Christ rose from the dead, and it has been said that He has become \"the firstfruits of those who have fallen asleep\" (I Corinthians 15:20). What does it mean \"The firstfruits of those who have fallen asleep\" although many were raised from the dead before Him?\nThe phrase \"The firstfruits of those who have fallen asleep\" that has been said about the Lord Jesus Christ means two things.
Because those who were raised from the dead before Him, ended up dying later on, and they are waiting for the general resurrection.\nSecond: He arose with a glorified body that enabled Him to enter the upper room while the doors were shut.\nAnd by this glorious resurrection that has been granted to us, we will be in a better state than the first man before the fall.\n1. Adam and Eve had a materialistic body that lives a materialistic life.\nAs for us, as St. Paul the Apostle said, we will be resurrected in glory and in power.\n2. Adam and Eve, when they were created, they had a body capable of death, and both have died and all their descendents. But we will have a body that would not die later on, because God has prepared it for eternal life.\n3. Also Adam and Eve, when created, had a simple, pure body that did not know sin, but could fall in sin. And they actually did fall and both deserved a punishment. As for us, we will be in bodies that would not know sin, and will live in an everlasting purity.\nAs for Adam and Eve, they only lived with animals of the wilderness that they were trying to tame. But we never heard that one of them saw angels.\n8. In the resurrection, God will give us to eat from the tree of life (Revelation 2:7), which was not permitted for Adam and Eve.\nI refer to all these, while congratulating you on the Glorious Feast of Resurrection, the feast that represents the living hope and the eternal future.\nMay the Lord grant you many such a feast, while you are full of joy and blessing.\nBe well in the Lord, absolved by the Holy Spirit. Pray for us.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Life is hectic. Face it. It seems like, if we're not actively doing something, then we're on our way to the next thing. There is laundry to do, dinner to prepare, work piling up on your desk and you're already late for your son's soccer practice. 
How in the world are you supposed to find time for yourself?\nYet at the same time you know that you will be a much better spouse, mother, friend and employee if your life is in balance and you take time to relax and rejuvenate.\nWe are all limited to 24 hours in any given day. Since you can't get extra time, you need to free up a few hours a day by delegating some of your tasks. Get the kids and your spouse to help out around the house, ask someone else to work out all the details for the church fundraiser this year, and delegate some of your simpler tasks at work to your assistant or if you own your own business, consider outsourcing.\nMultitasking is great, but sometimes you waste a lot of time trying to get 5 things done at the same time. Try focusing on one project at a time, putting your head down and getting it done before moving on to the next one. You'll find yourself getting things done much faster without all the distractions multi-tasking can bring.\nMake yourself a priority. There are always going to be at least 25 other things that you could be doing. Just ignore them for a moment and schedule time for yourself first. Get out your day planner and block out a few hours each week that are just for you. Unless there is a dire emergency, don't reschedule.\nNow that you have carved out some time for yourself, put it to good use. Take a hot bath, read a good book, or give yourself a facial. 
Relax before diving back into your hectic everyday life.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Wake up in one of our secluded Luxury Cottages, Villas and Farmhouse that combine the best of a boutique luxury resort with the organic elements you would expect from 'green hotels', and savor the bountiful natural surroundings that inspire the design of your accommodation and everything else we do at Belle Mont Farm.\nAll of the accommodations are designed by the award-winning architect, Bill Bensley, whose designs regularly feature on the cover of Conde Nast. Bill's designs for Belle Mont Farm have been crafted in harmony with the natural landscape, and feature spectacular views of the ocean and forest that will fill you with a sense of space and openness that refreshes the spirit. Gaze at the verdant forests, which blanket the slopes of Mount Liamuiga and descend to the Caribbean Sea, from your private wrap-around verandah.\nWe provide all the modern comforts you need, including plush bedding and rainwater showers. With Belle Mont Farm being on a mountainside, our public areas and accommodation are quite spread out and a few of our walkways are quite steep. Some of our guests may find it difficult walking to their cottage, villa or Farm House, so our Team is on hand to transport our guests by Golf Carts to every part of our Resort, always at a moment's notice! Also, our Team ensures the service you receive responds to your individual needs and matches the standards of the very best hotels.\nYou can even transform your room into a personal cinema with pull-down screens and projectors, although it should be noted that we do not have cable televisions in our guest rooms.
Playful and inviting, Belle Mont Farm provides a restorative experience that is rarely found within a typical Caribbean resort.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A man can always ejaculate inside; it all depends on what he intends and whether he takes precautions. However, during menstruation there is no fertile period, so it is not possible to become pregnant.\n\"\u2026they advised me not to buy any cards or boxes through the internet\" I thought everyone knew that? The only Sky boxes and cards that will work properly come from Sky.\nI have a question! I wanted to go to my Facebook page today and all of a sudden IIS7 appeared! HELP!!!!!!!!! How do I get rid of it again?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzatdxr b/data_all_eng_slimpj/shuffled/split2/finalzzzatdxr new file mode 100644 index 0000000000000000000000000000000000000000..3fbfa9a5b04c34e85f516a585416c45a9ac48a54 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzatdxr @@ -0,0 +1,5 @@ +{"text":"Located in South Ottawa, Pine Grove Trail is part of the National Capital Commission (NCC) Greenbelt trail system. Parking can be found off Davidson Road at P18 (N45 21.292 W075 35.560) which is between Hawthorne Road and Conroy Road.\nThe hike is a 5 kilometer loop through natural forests, reclaimed white and red pine forests and marshlands. It should take between 60 and 90 minutes to complete following trails 43 and 44. Starting at the parking lot head north and cross Davidson Road to the beginning of the trail; here you'll see an information panel explaining the history of the forest. Before 1962, most of this area was cleared farmland. After it was purchased by the NCC both red and white pines were planted in most of the clearings and the area was eventually developed into a full grown forest.\nHeading north on 43, the first kilometer of the trail is gravel covered.
This is part of the Greenbelt pathway which will eventually run from Greens Creek in the east end of Ottawa to Shirley's Bay in the west. As you walk make sure you take a few moments and read the panels which describe the different trees in the forest.\nOne of the best parts of this trail is the new signs that are located at all intersections. As the gravel path heads east continue on 43 (to the north) to where it changes into a grass\/dirt covering.\nAs you follow the trail you'll eventually pass by a large marsh on your left. There are a few spots where you can sneak through the brush and get a better look but beware, the closer you get the more mosquitos you'll anger. After another five minutes you'll arrive at another intersection; turn left and keep going on 43.\nKeep heading south through the old natural forest until you reach Davidson Road. From here you have two options... 1. You could cross the road and go over a small bridge onto a narrow, not well maintained trail, or... 2. Go left and follow the road for about 100 meters until you reach the opening on the left. Personally, I chose the road... it was a nice break from the bugs.\nFrom here the trail keeps heading south. If you're getting tired you have the option of cutting the hike short. At the intersection of trails 43 and 44, if you turn left you will reach the parking lot after only a few minutes. If you're still up for the hike go straight through onto trail 44. This part of the walk brings you around a very large marsh with a few boardwalks, some great views and lots of mosquitos.\nThe trail follows a southern flow for a kilometer until you make it around the marsh, then it heads straight north towards the parking lot. Along this stretch there are a lot of side trails which will take you to side street parking on Hawthorne Road as well as other parts of the Pine Grove Trail System.
Stay on 44 as this is the only one that leads you back to P18.\nIn conclusion, Pine Grove Trail is a great hike for beginners and families. It offers some very easy terrain, some nice scenery and is by far one of the best hikes in the Ottawa Green Belt.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Cardinal IG has had a presence in Fremont for 20 years.\nThe Fremont Town Council has approved tax abatements for two expansion projects in Steuben County. Cardinal IG Co. and Miller Waste Mills plan to invest a total of $5.7 million to add new equipment to their respective facilities.\nCardinal IG is a subsidiary of Minnesota-based Cardinal Glass Industries and makes insulating glass products. The Steuben County Economic Development Corp. says the company will invest $3.1 million to add new machinery and equipment to its Fremont facility.\nThe need for the new equipment follows the company's previously-announced expansion, which resulted in 45 new jobs. The installation of the new equipment will not result in any additional jobs.\nMiller Waste Mills plans to invest $1.5 million to add new machinery and expand into more of its facility. The Steuben County EDC says the expansion will create 10 additional jobs in Fremont.\nThe tax abatements for Cardinal IG and Miller will run for five years and four years, respectively.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Q. What is Reparations about?\nA: There is an abundance of proof that Africa was the cradle of civilization. Centuries before the birth of Christ, the stories of the Queen of Sheba and her visits to King Solomon, with an organized retinue, and also the architectural wonders of the pyramids are clear evidence of the height to which African civilization had reached. Further, the ancient kingdoms of Africa like those of Songhai, Benin, Ghana and others were highly organized and even the ancient universities like Timbuktu existed.
At this time Europe was very underdeveloped and America had not even been discovered. This development of Africa was interrupted some time around the 14th century by the heinous institution of slavery. This slavery robbed Africa of her best and strongest men, women and children, who were put in chains and were exported like goods and chattel - like goats and pigs - to the islands of the Caribbean, the United States, Brazil and elsewhere. These slaves worked under very hard conditions, planting sugar cane and cotton for their masters' enrichment, and this prevented for centuries the development of their own countries in Africa. The result of their work as slaves was to enrich the countries of their masters. Those countries became rich and developed, while Africa, the home of the slaves, remained poor.\nReparations, which comes from the word 'repair', is a Movement which seeks to identify and redress those wrongs, so that the countries and people that suffered will enjoy full freedom to continue their own development on more equal terms.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This section will start off with a look at power regulation and then shift into general circuit selection and implementation.\nThe VRM on this board is deceptively powerful. It is powered by an IR3580, which is a popular PWM for overclockers. It is produced by International Rectifier, who in recent years has been killing it with their high-performance digital PWMs and power stages. Only 6 of the 8 phases are being utilized; however, GIGABYTE tossed in the IR3556, also produced by International Rectifier. Each IR3556 can provide up to 50A of current, and the IR3556 is IR's second generation of integrated high-current power stages.\nGIGABYTE chose to upgrade their inductors and is using 76A Cooper Bussmann high-current inductors. The key to this 6-phase VRM's performance is the inductors.
To tie things up, GIGABYTE is using nine of their Chemicon 10K black solid polymer capacitors for a total of 5040uF.\nGIGABYTE decided to stick with International Rectifier for the memory VRs as well. For the main DDR4 and VPP outputs, the X99-UD4 is using IR3553's, which are 40A integrated power-stages from International Rectifier. Two IR3570s from International Rectifier are 3+2 phase digital PWMs; each controls the VRs for one set of four DIMMs.\nThe X99-UD4 uses an ALC1150 for its audio codec, which works in unison with the integrated Azalia audio processor in the PCH. The X99-UD4 has only one audio amplifier, the Texas Instruments NE5532, which is quite common; it is used to amplify the backpanel IO's headphone jack. Nichicon audio capacitors are also present, and GIGABYTE has isolated the analog audio lines from the digital domain of the rest of the motherboard. There are also 14 yellow LEDs on the back of the PCB, which illuminate the PCB divide.\nAn Intel i218v is being used as the integrated NIC's PHY, which many prefer because of Intel's strong reputation in the NIC industry. A Renesas D720210 is used to expand one USB 3.0 port into four ports for the backpanel IO.\nOn the left are two iTE chips: the IT8620E is the Super I\/O in charge of voltage, temperature, and fan monitoring and control, and the IT9792E is an embedded controller (EC) which handles features such as overclocking, extra fan controls, or even LEDs. On the right are the two 128Mbit BIOS ROMs for Dual BIOS and an IT8951E, which is used to power GIGABYTE's additional USB BIOS recovery. The additional recovery method can update the BIOS even if the system doesn't have a CPU.\nOn the left is the PCH VRM, powered by a single-phase Richtek PWM and some Vishay MOSFETs. An IDT 6V49322NLG is an external clock generator which is supposed to help with BCLK overclocking. The NXP L04083B is a PCI-E quick switch used to switch between SATA Express and M.2.
On the right is a Nuvoton NCT3941, which is one of four on the motherboard. They are used to control the voltage mode fan headers, which require more motherboard hardware than PWM control because voltage mode requires each header to have its own voltage regulator.\nThese four NXP L04083Bs switch 8X PCI-E 3.0 from the third PCI-E slot to the fourth.
Plus, check out our vision for the future of Keuka College in the Main Gym or take a walk down memory lane in the Nostalgia and Old KC Story Arena, also known as the Auxiliary Gym.\nCheer on our field hockey team as they compete against the Wilson Phoenix. Be sure to visit our gameday tent for your free novelty KC baseball helmet and check out the halftime performance featuring the College's Dance Team.\nPresident Jorge D\u00edaz-Herrera warmly invites students and their families to a reception for light refreshments and conversation. Chat with the President, and hear his vision for the future.\nThe Athletics Hall of Fame honors those whose achievements and contributions to the athletics program have established a standard of excellence and cultivated a winning tradition. Help us congratulate and honor our 2017 Athletics Hall of Fame inductees.\nAt this year's ceremony we will also present the 2017 Rob Smets Heart of a Champion Award and recognize all past recipients. Cocktail hour begins at 6 p.m. followed by dinner and formal ceremony at 7 p.m.\nName a game show that provides fun, laughs and friendly competition. If you said \"Family Feud,\" you'll be thrilled to participate in the Feud Game Show, a spin-off of the well-known televised mainstay. A set, lights, and a podium will recreate the show's stage, and a professional host will lead players through friendly-feud shenanigans. Teams can be composed of roommates, friends, or family members.\nKeuka College has been lighting up careers for decades, but we also know how to light up the sky. Watch our marvelous fireworks show from the main lawn or any other comfortable location on campus.\nIf you're looking to end the night with, well, a nightcap, we'll see that you get to and from that barstool without incident.
Shuttles to downtown Penn Yan pubs will be available, allowing you to safely meet up over a cocktail with friends, old and new.\nRound-trip shuttles run every 30 minutes and will go from the front of Ball Hall, Main St. Penn Yan, and three area hotels: Best Western, Hampton Inn, and Microtel Inn & Suites.\nLocated in the heart of the Finger Lakes, The Windmill Farm & Craft Market south of Penn Yan was the first of its kind in upstate New York. Now in its 30th year, The Windmill plays host to nearly 200 retailers offering clothing, collectibles, handcrafted items, home and garden tools, produce, and much more!\nShuttles will depart at 8:30 a.m. & 10 a.m from Ball Hall, and 11 a.m. & Noon from the Windmill.\nPenn Yan is as charming as small upstate villages get. Mom-and-pop stores, historic diners, a museum, gift shops, coffee and wine bars, and even an old-time candy shop line the pedestrian-friendly sidewalks. Don't forget to explore the Outlet Trail while you're in town.\nShuttles will depart at 9 a.m. & 10 a.m. from Ball Hall, and 10:30 a.m. & Noon from town.\nFor alumni and their families, join us for a breakfast, hear from your Alumni Executive Council and learn about what's coming up this year.\nJoin Professor Melissa Newcomb and members of the Yates County Arts Center in painting a Keuka College campus scene. We'll provide the canvas, paints, brushes and direction to help you create your own unique memento. No artistic experience is necessary \u2013 just a willingness to be creative! Space is limited.\nRelive your shining moments as a student-athlete! Sign up for this event if you're interested in getting together with former players, either to play in friendly competition, or to swap stories of your glory days. Games available for women's and men's soccer, baseball, softball or women's lacrosse (men play separately), but we welcome all student-athletes to gather together.\nMeet Keuka College authors Prof. Sander Diamond and Prof. 
Stan Wilczek Jr., discuss their recent novels and purchase your own signed copy at the Campus Bookstore.\n10 a.m. - 4 p.m. - \"Last Witness,\" Wilczek's fourth novel, is the tale of Reece Landis, who, as a 4-year-old, stood just 20 feet from the presidential limousine in Dallas as shots rang out on Nov. 22, 1963.\n2 - 4 p.m. - In Diamond's \"The Herod Mosaic,\" follow the story of two First Century C.E. scholars as they find themselves in a thriller-adventure involving the Israeli elite, the Iranians, Palestinians, and the Vatican.\nDiscover secrets of creative and healthy aging with Professor of Social Work Stephanie Craig. She will share a model of healthy aging, including tips and positive pointers to optimize aging for all adults at any life stage. A folder of relevant references and resources will be provided to enrich your personal journey.\nCome see for yourself why our Occupational Therapy graduates are among the most sought after in the region. From state-of-the-art lab spaces, to knowledgeable professors, to engaging projects and scholarship opportunities, get your own hands-on look at the Occupational Therapy Division.\nHas it been five years already...or ten? It has, and that means it's time to celebrate! Mark your 5- or 10-Year Reunion by joining our Laser Tag Challenge at Roseland Bowl in Canandaigua. Bring fellow classmates, family, or friends to create a team. $6 per game. Pay at venue. Free games to the first ten who register.\nNorton Chapel affords one of the most contemplative spaces on campus. Reflect in the architectural splendors of the College's spiritual center, and light a candle to honor and remember an alum, classmate or friend who cannot be here to share the weekend.\nHelp us celebrate several faculty members who are on their journey to retirement.
Hear them reflect on their time educating Keuka College students, and ask them questions about their favorite aspects of teaching and what's next.\nIt was \"Camelot\" in the White House, \"Lawrence of Arabia\" on the big screen and Elvis soundtracks on the jukebox. Share memories of your College days \u2013 and days since \u2013 when you reunite over lunch with your classmates from 1962.\nIt was a pre-Watergate Nixon in the White House, \"The Godfather\" on the big screen and the seemingly never-ending \"American Pie\" on the radio. Share memories of your College days \u2013 and days since \u2013 when you reunite over lunch with your classmates from 1972.\nIt was year seven of the Reagan Revolution at the White House, \"Dirty Dancing\" on the big screen and The Bangles \"Walk Like An Egyptian\" on MTV. Share memories of your College days \u2013 and days since \u2013 when you reunite with your classmates from 1987.\nIt was Jimmy Carter in the White House, \"Star Wars\" ruling the big screen and \"Saturday Night Live\" solidifying a new late-night niche on TV. Share memories of your College days \u2013 and days since \u2013 as you celebrate your 40-Year Reunion and enjoy a fun Fiesta Fajita Buffet with the Class of 1977.\nIt was \"Singin' in the Rain\" on the big screen, Kay Starr's \"Wheel of Fortune\" on the jukebox, and Harry Truman's last year in the White House. Share memories of your College days \u2013 and days since \u2013 when you reunite over lunch with your classmates from 1952.\nCelebrate your 50th-Reunion and revisit the Summer of Love by sharing your Keuka College memories over a generous buffet lunch and wine tasting courtesy of Glenora Wine Cellars. Hear about the evolution of our Field Period\u00ae program from Assistant Director of Field Period and Internships Tara Bloom.\nIt's been 60 years since Elvis Presley monopolized transistor radios and \"I Was a Teenage Werewolf\" graced drive-in movie screens. 
Share memories of your College days \u2013 and days since \u2013 when you reunite over lunch with your classmates from 1957.\nJoin us for an open house reception at the new Spanish Language & Cultural Center at Allen House. Come meet our Fulbright Foreign Language Teaching Assistant, Alejandro Rico Pol. You can also learn more about the College's Spanish programs and activities. Refreshments will be served.\nIt was the spring of 1982: \"Ebony and Ivory\" was the Number One song, \"E.T.\" ruled at the box office, and you were a newly minted graduate. Revisit those youthful days and reconnect with fellow alums. Join us for a Class of 1982 meeting where we'll make class decisions and enjoy cheese, crackers, fresh fruit, and beverages.\nFormer men's lacrosse student-athletes can sign up to play in friendly competition with fellow alums. Pre-registration required.\nIt's been 60 years since Elvis Presley monopolized transistor radios and \"I Was a Teenage Werewolf\" graced drive-in movie screens. Share memories of your College days \u2013 and days since \u2013 when you reunite with your classmates from 1957.\nCalling all Nursing alumni! We want to hear \u2013 and more importantly, document \u2013 your stories. Stop by for a brief interview where we'll capture your Keuka College experiences and personal nursing story on video. We're eager to hear about your favorite teachers, clinical experiences, and best memories as a nursing student. Come individually or with a group; all are welcome! Feel free to drop off your written memories as well.\nYou might not remember their names, but you'll be glad to see their faces at this milestone reunion gathering. Open to alumni celebrating reunion years ending in '2 and '7, this event will give alumni the opportunity to rekindle old friendships, and reminisce. 
We'll take class photos starting at 4:30 p.m., with the reception beginning at 5 p.m.\nClass of 1967: Class photos will be taken during the Pathfinders Luncheon and Medallion Ceremony Friday afternoon.\nThe College's mission strives to create exemplary citizens and leaders, so it's no surprise leaders flourish on campus. Always have, always will. Past and present student leaders \u2014 RAs, mentors, senators, club officers \u2014 are invited to join us for networking, conversation, and reminiscing.\nIt all started with a request from then-first lady Eleanor Roosevelt, and has grown into one of Keuka College's most revered programs. Take a walk down memory lane as you admire photographs and memorabilia spanning the decades of the College's nursing program. Then share your stories, hear about the future of nursing at the College, and mingle with alumni, students and faculty.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Yesterday the international jury of the XXV Premio Compasso d'Oro ADI awarded \"Food Design Progetto e Comunicazione del Prodotto Alimentare\" the PREMIO COMPASSO d'ORO ADI in a ceremony at the Sforza Castle in Milan, Italy. The jury motivated their decision saying \"Food in Italy is acknowledged as an excellence worldwide. This book illustrates how Italy has earned its leadership in this field. The recipes, equipment and tools used in food preparation underpin a system which must be safeguarded and handed on, not just abroad, but also to future generations of Italians\".\nThe first book to study and explain food product design as part of an overall system, ranging from concept design to production and distribution through to communication and consumption.\nThe story of each product (pizza margherita, San Daniele cured ham, tinned tomatoes, pasta, Mio processed cheese, Motta panettone and many more) is explored in detail. 
Each step is fully explained, from the original product concept to its final form, the packaging, the communication strategy and the advertising campaign, without losing sight of the impact these foodstuffs have had on consumption and people's daily habits, together with the role of the manufacturers and the major players.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Image Title: Buy 12 Piece Queen Comforter Set From Bed Bath Beyond Intended For Design 19. Post Title: 12 Piece Queen Comforter Set. Filename: buy-12-piece-queen-comforter-set-from-bed-bath-beyond-intended-for-design-19.jpg. Image Dimension: 400 x 400 pixels. Images Format: jpg\/jpeg. Publisher\/Author: Pedro Casper. Uploaded Date: Thursday - October 18th. 2018 16:18:29 PM. Category: Bedroom. Image Source: dornob.com. Janna 12 Piece Comforter Set Bed Bath Beyond Within Queen Inspirations 9. Shop VCNY Istanbul 12 Piece Comforter Set Free Shipping Today Regarding Queen Remodel 16. Stylish Romantic Garden Reversible 12 Pc King Comforter Set Bed In A For Piece Queen Inspirations 14. Amazon Com Chic Home 8 Piece Ruth Ruffled Comforter Set King Inside 12 Queen Plan 6. Amazon Com Chic Home Vermont 12 Piece Bedding Comforter Set Cozy Regarding Queen Decor 4. Amazon Com Madison Park Paxton 12 Piece Jacquard Comforter Set In Queen Designs 10. Amazon Com Madison Park Tiburon Queen Size Bed Comforter Set In With 12 Piece Ideas 8. Chic Home Arlington 12 Piece Bed In A Bag Comforter Set Walmart Com Within Queen Prepare 11. Chic Grace Embroidered Bridal Collection 12 Piece Comforter Set Pertaining To Queen Decorations 13. Fall Sale Jordan 12 Piece Queen Comforter Set In White Grey Pertaining To Decorations 0.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A new brand of natural toothpaste has a lot of people talking. 
Earthpaste is as natural a toothpaste as you'll ever find, and it uses Redmond Clay as one of its few ingredients.\nEarthpaste's list of ingredients is unique, but the things they left out are just as important. Most brands of toothpaste contain foaming agents like SLS (sodium lauryl sulfate), and chemicals like titanium dioxide to make the paste bright white. Not Earthpaste. It isn't just safe to swallow \u2014 each ingredient in Earthpaste has been used to support healthy systems.\nIf I use the clay infrequently, does the clay dry out in the jar?\nThe powdered clay stores forever if kept dry. The hydrated clay will also last a very long time if kept in a glass container with a secure, tight lid.\nI received this in my conscious box, and we love love love it! So excited about a healthier toothpaste!\nYes, Liz. All Redmond products are gluten free!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Full of Noises are a sound art and new music organisation based in a public park on Cumbria's Furness peninsula. We invite artists from all over the world to create and share new work in Barrow, ranging from installations and performances to workshops and residencies. This year's programme includes a wild soundcamp in May, a festival weekend in August and the third edition of 'Yo No Bi', a tour bringing Japanese Sound Art to venues in the North.
Slitasjeindikatorene er markeringer p sommerdekkene, som viser nr dekket har. Luckily, after completing the soap making process, there is no lye in the finished product. 1.04 km Away Previous Next More Follow visitBergen Other Sites. 14pcs\/Set Clay Sculpting Wax Carving Pottery Tools Polymer Ceramic Modeling.\nMosjoen Mosj\u00f6en mosj motion by city town stadt Vefsn. Each kit comes with all the components needed to make theatrical foam rubber: High Solids, Natural. Galaxie, KingMeiler og Elite. Helrsdekk er derimot svrt gode p sn og slaps, og er dessuten det mest miljvennlige alternativet. After several hours the chamber will contain a liquid with coffee-like appearance, 5 6 7 and the only solids that remain are very fragile bone hulls of mostly calcium phosphate, which can be mechanically crushed to a fine powder with very little force. Vi har samarbeidspartnere over hele landet. 10 Sections from it were serialised over four issues of The Western Mail (Wales).\nOslo, camerata livamle logen Leder og solist: Henning Kraggerud Mandag.\nVi yter service til studenter p en rekke ulike studiesteder.\nNew customers can now instantly discover and get in touch with places like Edgars Bakeri in Mandal.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Looking to develop your company's future leaders?\nAimed at graduates from any discipline, our intense five-day tailored Graduate Programme, hosted at The Development Company's very own Newgale Lodge, will aid in shaping delegates into future company leaders, learning skills which are essential for achieving maximum success within their graduate careers.\nThere are five core areas which we have identified as critical to shaping graduates into tomorrow's leaders.\nThese issues will be explored and analysed over the course of the five days at Newgale Lodge in a range of challenging ways.\nWhy choose The Development Company?\nDCo's Graduate Programme has been designed to complement in-house graduate development to provide 
a well-rounded experience for delegates.\nThe programme will deftly combine classroom-based learning \u2013 whilst avoiding leaving participants poring over a textbook or flat PowerPoints \u2013 with experiential hands-on tasks, allowing graduates to expand their practical knowledge of essential skills and to really challenge their boundaries, all whilst utilising the beautiful local area surrounding the Lodge.\nThe Development Company have expertise in supporting graduate progression, and our vibrant, passionate and challenging consultants and technicians have an abundance of experience in delivering programmes and bringing out the best in delegates.\nA residential course based at Newgale Lodge, which is proudly owned by The Development Company, takes delegates out of their workplaces and homes, away from their day-to-day routines and stresses and into the unknown, making the course a lot more intense, ensuring that everything learned during their time at the Lodge will be retained far into their graduate careers.\nThe local area is an integral part of the course; graduates will take part in range of activities, including a CSR module in which they will liaise with local charities to provide a service to them, which will involve decision-making, communicating effectively and developing their ability to work as a team towards a common goal.\nAlongside this, delegates will attend workshops at a local theatre to focus on presentation and communication skills, with continuous feedback and guidance at the centre of the learning process.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Find fun things to do in Beckley, WV. View our list of attractions, activities, events, restaurants and visitor information.\nCurrently as low as $234.08\/night!\nThe ultimate thrill seekers paradise Adventures on the Gorge is the only vacation destination located on the very rim of the New River Gorge. 
Free Internet, Outdoor Pool, Restaurant, Pets Allowed, Non Smoking Rooms.\nCurrently as low as $57.20\/night!\nWi-Fi and a fitness room are some of the perks guests find at the Baymont Inn & Suites Beckley. All 55 rooms at the three-story Baymont are non-smoking and feature flat-panel cable TVs, free Wi-Fi and coffeemakers. Enjoy a drink at the on-site bar. Free Internet, Non Smoking Rooms.\nFeaturing free WiFi, Budget Inn is located in Princeton within 20 km of Bowen Field. The property provides a 24-hour front desk. At the motel, rooms have a wardrobe. The private bathroom is fitted with a bath. The units will provide guests with a fridge. Non Smoking Rooms.\nFree hot breakfast, free Wi-Fi and a seasonal outdoor pool foster favor with our guests at Comfort Inn Beckley. This two-story hotel has 119 rooms with down pillows and cable TVs. Turn up the heat with a microwave or chill out with a mini-fridge. Free Internet, Outdoor Pool, Non Smoking Rooms.\nFree hot breakfast, free internet and free parking put cash back in the pockets of our guests at Country Inn and Suites by Carlson Beckley. For an additional fee, you and a four-legged friend can pad over to this three-story hotel with 156 rooms. Free Internet, Swimming Pool, Indoor Pool, Outdoor Pool, Pets Allowed, Non Smoking Rooms.\nHotel-wide free Wi-Fi, complimentary newspapers and a cozy lobby with a fireplace add to the homey feel at the non-smoking Country Inn & Suites by Carlson Princeton WV. Free Internet, Swimming Pool, Indoor Pool, Pets Allowed, Non Smoking Rooms.\nFree high-speed internet access, free parking and a heated indoor pool are a few of the fixings at the completely non-smoking Courtyard by Marriott Beckley. This four-story hotel has 106 rooms with thick mattresses, crisp cotton sheets and fluffy down pillows. Free Internet, Swimming Pool, Indoor Pool, Restaurant, Non Smoking Rooms.\nWith free Wi-Fi and a convenient location, the Days Inn Princeton is a value proposition for our guests staying in the area.
All 122 rooms at the low-rise exterior-corridor Days Inn feature Wi-Fi, cable TV with HBO, data port phones and traditional furnishings. Free Internet, Indoor Pool, Pets Allowed.\nFreebies such as Wi-Fi, continental breakfast and parking add up to big savings for our guests at Econo Lodge Beckley. Pets are welcome at this three-story hotel with 130 rooms with cable TV, free Wi-Fi, roomy work desks, microwaves and refrigerators. Free Internet, Pets Allowed, Non Smoking Rooms.\nFeaturing free WiFi and a fitness centre, Fairfield Inn & Suites by Marriott Princeton offers accommodation in Princeton. Each air-conditioned guest room and suite has a desk for writing, an iron, a small refrigerator and a microwave. Free Internet, Indoor Pool.\nA small market, a heated pool and free breakfast and Wi-Fi are a few of the plus-size features guests discover at the non-smoking Fairfield Inn & Suites by Marriott Princeton. Free Internet, Swimming Pool, Indoor Pool, Free Breakfast, Non Smoking Rooms.\nA free breakfast buffet and free parking add to the value of a stay at the completely non-smoking Fairfield Inn Beckley, one of the most popular hotels in the area among our guests. Free Internet, Swimming Pool, Outdoor Pool, Pets Allowed, Non Smoking Rooms.\nCurrently as low as $122.40\/night!\nFree hot breakfast, free parking and free Wi-Fi give our guests a little extra at Hampton Inn Beckley. This five-story hotel has 108 rooms with plush pillowtop mattresses, custom comforters and fluffy down pillows. Free Internet, Swimming Pool, Outdoor Pool, Non Smoking Rooms.\nAn extensive free breakfast buffet and well-equipped rooms attract our guests to the Hampton Inn Princeton. The five-story Hampton Inn is home to 112 rooms, all featuring free Wi-Fi, cable TV, work desks and beds with soft white sheets and bright white duvet covers. Free Internet, Swimming Pool, Indoor Pool, Non Smoking Rooms.\nHawks Nest Lodge is located in Ansted.
The property features a business centre and free WiFi. All rooms feature private bathrooms with bathtubs and showers. Extras include cable TV. At Hawks Nest Lodge you will find a swimming pool and barbecue facilities. Free Internet, Pets Allowed, Non Smoking Rooms.\nCurrently as low as $113.75\/night!\nWarm decor, a heated indoor pool and free hot breakfast earn top ratings from our guests for the Holiday Inn Express Princeton I-77. Free Internet, Indoor Pool, Free Breakfast, Pets Allowed.\nFree Wi-Fi, a refreshing indoor pool and free parking strike a chord with our guests at Holiday Inn Hotel & Suites Beckley. This four-story hotel has 110 rooms with cable TVs and plush triple-sheeted mattresses. Free Internet, Indoor Pool, Restaurant, Non Smoking Rooms.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"In 12-inch nonstick skillet, heat 2 teaspoons of the oil over medium-high heat. Add onion; cook 5 minutes, stirring occasionally, until onion begins to brown. Remove from skillet; set aside.\nSeason both sides of pork chops with 1\/4 teaspoon of the seasoned salt, the thyme and pepper. Heat remaining 3 teaspoons oil in same skillet over medium heat. Add pork; cook 2 to 3 minutes on each side or until browned. Transfer from skillet to plate.\nAdd water, vinegar, quinoa, parsley, cooked onion and remaining 1\/2 teaspoon seasoned salt to skillet. Heat to boiling. Reduce heat; place pork and any juices on top of quinoa mixture.
Cover; simmer 10 minutes.\nAdd broccoli and peaches; cover and cook about 10 minutes longer or until quinoa is tender and meat thermometer inserted in center of pork reads 145\u00b0F.\nOmit the seasoned pork chops and this quinoa skillet dish would make a great side dish to pair with other plain meats such as roasted chicken or pot roast.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Drosera aliciae, also known as the Alice sundew, is a herbaceous carnivorous plant of the genus Drosera. A perennial evergreen grown as an ornamental plant, it can grow in subtropical and Mediterranean climates and in hardiness zones 9-10.\nThe leaves, which may be green, red or yellow, grow in a rosette structure. Each leaf has little antennas along its edges; these can differ in color from the leaf itself, appearing green, red or white, and the tip of each antenna carries a round, pompon-like shape.\nHOO PRODUCTS - Drosera Peltata Seeds Potted Plant Circular Sundew Carnivorous Plants Garden Seeds 100 Seeds \/ Bag New Arrival !
Thank you so much for a fun time and some fantastic photos!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"South Africa has recalled its ambassador to Israel following a bloody crackdown on Palestinian protesters by Israel forces that left at least 55 dead and 2,700 injured either by live gunfire, tear gas or other means.\nHigh-profile opening of the U.S. embassy to Israel in Jerusalem raised tension to boiling point after weeks of demonstrations.\nThe South African foreign ministry explained that its decision had been taken to protest the indiscriminate killing of protesters.\nBecause of the grave and indiscriminate nature of the latest Israeli attack, the South African government has decided to recall Ambassador Sisa Ngombane with immediate effect.\n\"Because of the grave and indiscriminate nature of the latest Israeli attack, the South African government has decided to recall Ambassador Sisa Ngombane with immediate effect,\" read part of the statement.\nThe violence took place on the day of the controversial transfer of the American embassy from Tel Aviv to Jerusalem.\nAfrican countries celebrate launch of US embassy in Jerusalem While South Africa and Turkey recalled their ambassadors to Israel because of the US decision that has been widely criticised by its Western allies, some African countries sent representatives to witness the official opening of the US embassy in disputed Jerusalem.\nThe United Nations General Assembly rejected by a huge majority the US recognition of Jerusalem as Israel's capital.\nIsrael's foreign ministry said thirty three countries attended the ceremony including Angola, Cameroo, Republic of Congo, Democratic Republic of Congo, Ivory Coast, Ethiopia, Kenya, Nigeria, Rwanda, South Sudan, Tanzania and Zambia.\nIn December 2017, Togo was the only African country to support the US motion to establish an American embassy in Jerusalem.\nThe United States which was furious at the lack of support from its allies, threatened to cut 
aid to countries that don't support its positions at the United Nations and in several other international fora.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Wandering Chopsticks: Vietnamese Food, Recipes, and More: Beet. Beet. Borscht!\nInspired by the gorgeous, gorgeous borscht I saw on Rambling Spoon, I knew I was going to try making it soon.\nBy the time I stocked up on supplies, the weather had turned. A big pot of soup was perfect for today's gray skies and intermittent showers. Borscht, also spelled borsch or borshch, is a beet soup with Ukrainian origins. Borscht (English-speakers pronounce it with the T) made its way west to Eastern and Central Europe. Jewish immigrants brought the recipe with them to America. I've only had the tomato-based version, a frequent staple at Hong Kong cafes in the San Gabriel Valley.\nBut if I was going to make borscht for the first time, I knew I was going to make it with beets. Gloriously brilliant beets.\nI've simplified the recipe as much as possible because I'm lazy like that. I eliminated the first step of making a stock and just put everything in the pot to simmer. The beets added so much flavor that you can easily omit the beef and make this vegetarian.\nAdapted from Rambling Spoon. She says serving it with crumbled bacon is key. I also found a Russian borsch recipe, and a Ukrainian borsch recipe online. Between those three recipes, I came up with this.\nDice the vegetables. Set aside.\nIn a large stock pot, add a few drizzles of olive oil and saute mire poix (that fancy French word for carrots, celery, and onions). Add 2 tsp salt, 1 tsp caraway seeds, and 2 tsp bay leaves and saut\u00e9 until fragrant and mire poix is softened.\nAdd beef bones and the rest of the ingredients. Add enough water to cover the bones by several inches.\nSimmer on medium-low for about an hour for beefy flavor, or half an hour if you're making a vegetarian version. 
Before serving, fish out the beef bones, shred any meat, and add that back into the pot. Serve with a dollop of sour cream, bacon crumbles, and the chopped fresh parsley and dill that had been set aside earlier. Serve with bread too if you wish.\nBeets add so much flavor and color to the soup. The lemon juice and vinegar added a slight sour depth. The parsley and dill added a freshness to the earthy root vegetables. The bacon crumbles give savoriness and a nice crunch in texture.\nJust be very, very careful not to spill the beet juice when you're cooking or eating. And like all soups and stews, it's even better the next day when all the vegetables have a chance to meld.\nOh my gosh, that borscht has an eye-popping pink color.\nWhy have I never made this before? I love the ingredients, and your photo is so appetizing. Thanks for the inspiration!\nExcellent! So happy to hear you were inspired to make this vivacious dish.\nI wondered the same thing myself. :) And all those veggies are good for you too.\nYour borscht looked so gorgeous, I had to try making it myself.\nThough I'm of the right heritage, I've never had the beef borscht, only the vegetarian kind.\nChilled in the summer, with sour cream and bits of matzo crumbled in it for crunch it is YUMMY and refreshing.\nIf you're going to wander down the eastern european road - have you ever had sweet and sour stuffed cabbage? It can be done with meat or, as a veggie soup.\nOoh, I bet cold borscht would be lovely. I've never had sweet and sour stuffed cabbage, although my mom makes a VNese version.\ndid u sere cold or hot?\nServe? I like borscht hot, but some people serve it cold during hot weather months.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Are You Suffering but Calling it Adulting?\nYou made it! You've arrived! You're a grown-up!\nYou graduated from college and started your career. 
Now you have little rectangular cards that say your name, maybe with a few new letters written after it, and the title of that adult thing you do. You can stay up as late as you want watching Netflix, order cookie delivery to your apartment, and drink right out of the carton!\nThen\u2026you wake up. You go to your job where you have a boss who tells you what to do. You do that because you have bills to pay: rent, food (those cookies), those student loans that helped you land this job where your boss tells you what to do. Your work doesn't make you feel like the competent adult you imagined when you earned that degree. Sometimes you feel like a fraud, like you're just a kid playing dress up\u2026except it's not fun. You thought you wanted this career but now you're not so sure. You want something different, but what?\nThen there's your personal life. You have a solid group of friends that you see in the margins surrounding your work day, but you can't shake the feeling that something's missing. You live in New York City with 8 million other people, but you feel lonelier than ever.\nIf you're single, you may be trying to figure out if you want to be. You might be straddling the hazy line between hookups and dating. Maybe you're torn between wanting to be in a relationship and wanting to remain fiercely independent, or you want a partner but the idea of dating makes you want to run for the hills. If you're going on dates, it can feel like a hopeless revolving door. \"What if,\" you worry, \"I'm alone forever?\" Maybe you checked off that relationship goal from your to-do list, but now you're left with the day-to-day of what that's supposed to look and feel like. It doesn't fill the emptiness you're carrying.\nAfter graduating, your life opened up into a wide field full of potential. So why do you feel so stuck? With so many possibilities come overwhelming unknowns. This uncertainty can lead to intense anxiety and despair that something is terribly wrong. 
You're desperate to escape this trapped feeling.\nYou want help with all of these swirling thoughts but it's so messy and tangled up that you can't come up with any concise questions to ask. Even if you could find the words to describe this dread, you doubt anyone would get it. Or worse, they'd confirm your fear that you're the only one going through this, that everyone except you feels as confident inside as they look on the outside.\nYou might be surprised to know that other bright, successful, young professionals also feel lost and alone in their struggles. In a survey, 86% of millennials reported going through a quarter-life crisis, a period of intensely questioning one's identity and purpose. That's hard enough. According to the National Institute of Mental Health, about one-third of people in their 20s and 30s have an anxiety disorder, and nearly one-quarter suffer from depression or bipolar disorder.\nHere's what you need to know: Your pain is real. You're entitled to feel it. You're not alone.\nShame keeps this suffering in the dark, in silence, where it has free rein to keep growing. Exposing these conflicts to light by sharing them with safe, empathic people who can relate is the first step to easing the burden.\nAsking for help is the scariest and most courageous step. If this sounds like something you'd like more help with, I invite you to give me a call.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"With research showing that inefficient cooking stoves were responsible for approximately 25 per cent of emissions of black carbon, and that indoor emissions contributed to thousands of premature deaths annually, the United Nations Environment Programme (UNEP) has joined efforts to boost the efficiency of around three billion cook stoves across Africa, Asia and Latin America. According to research under the UNEP-supported Atmospheric Brown Cloud (ABC) project, black carbon could now be responsible for a significant level of current climate change. 
The UNEP has associated itself with the Global Alliance for Clean Cook Stoves launched last week in New York during the 65th session of the UN General Assembly, as part of the Clinton Global Initiative and spearheaded by the UN Foundation. The initiative can also make a contribution to reducing deforestation by curbing the large quantities of wood and other biomass used to make charcoal or by households switching to alternative fuels including cookers powered by solar energy. Emissions of black carbon may also be accelerating melting rates of glaciers in mountain ranges such as the Himalayas, with the dark particles absorbing sunlight and raising ice temperatures. In addition, black carbon - a key component of brown clouds in some parts of the world - is contributing to dimming and reducing the amount of sunlight hitting the ground in polluted parts of the globe. For example, some major cities in Asia may be up to 25 per cent dimmer or darker than they were half a century ago. Reductions in visible light may also be harming agriculture - again with implications for poverty and combating hunger under the MDGs. UNEP's Project ABC, which is led by Professor Veerabhadran Ramanathan of the Scripps Institution in La Jolla, California, and includes researchers from countries such as India and China, has established a network of ABC observatories throughout the Asia-Pacific region that are now operated by national scientists.\nPlans are underway to extend the network into Africa and beyond. The UNEP is already supporting a black carbon and cook stoves demonstration project called ''Project Surya'' in rural areas of India. Surya aims to provide sustainable, effective and incentive-based action plans as well as infrastructure and technologies to switch to cleaner technologies such as efficient cooking stoves. A pilot phase of Project Surya has been implemented in a rural village in India with 500 households and a population of 2,500 people. 
The pilot phase, with Professor Ramanathan as the Principal Investigator and The Energy Resource Institute (TERI) as the implementing agency, tested several available commercial cook stoves for climate and health benefits and fuel efficiency using specially designed cell phones, capable of collecting and uploading data on pollutant exposure and cooking time periods, wireless technology and the establishment of indoor air quality sensors as well as an outdoor climate monitoring tower. The pilot phase also included gathering baseline socio-economic data, and assessing different technological options for cooking as a way to evaluate the acceptance of the stoves by the public. With the successful implementation of the pilot phase, Surya is embarking on the demonstration phase, which will last for two years and will involve two to three rural areas, each with a population of 15,000 people spread from north to south India. Pilot phases of Surya are also being developed for other developing countries such as Bhutan, Nepal and Kenya. It is hoped to link the declining emissions of black carbon, both indoor and outdoor, with the reduced impact on the regional climate as detected by the monitoring tower and satellites taking the pollution levels of the atmosphere.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaulnz b/data_all_eng_slimpj/shuffled/split2/finalzzzaulnz new file mode 100644 index 0000000000000000000000000000000000000000..22da3a65219b14995677bd06158bf9db79a803d1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaulnz @@ -0,0 +1,5 @@ +{"text":"Alder Hey in the Park was, last night, crowned the BBC Building of the Decade at a prestigious awards night in Manchester.\nBBC North West Tonight asked its viewers to vote for their favourite modern building constructed in the last 10-years. 
North West Tonight, in partnership with the business magazine 'Insider Media', then launched an online vote for 'People's Choice - North West Building of the Decade', which Alder Hey in the Park won.\nAlder Hey moved into their new hospital, Europe's only hospital in the park, in October 2015, and it features a uniquely designed hospital alongside a dedicated children's research and innovation facility, creating a leading-edge centre for children's healthcare and research.\nChildren and young people were involved in designing the new hospital when almost 1000 patients drew pictures and shared their views on what their new hospital would look like. A Children and Young People's Design Group, made up of current and former patients aged 10-22, have had their say throughout the design process on everything from the colour of their room, to the artwork displayed in the new hospital and what their wards should look like.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Join Dr Jason Fox & Mykel Dixon for a live uncut conversation about character.\nIn a maturing social media landscape, where 'likes' have come to define our lives, can our personality lead us to prominence? In a post-truth world, where mere attention equates to influence, can our values determine our value?\nIn a fragile economic ecosystem, where job security grows more elusive by the hour, can we ensure our relevance by being\u2026ourselves? And if so, how do we harvest, leverage & exploit who we are and hope to be, without losing face, losing clients or losing our authenticity?\nJoin Dr Jason Fox (keynote speaker of the year 2016) & Mykel Dixon (breakthrough speaker of the year 2018) for an intimate, emergent & uncut conversation into the nature of character. 
And how we might infuse it throughout our on & offline lives for more influence & impact.\nSet within the lush, private surrounds of The Elk Room, in Melbourne's premier cocktail establishment \u2013 The Everleigh, complete with live music and dangerous drinks, this is an evening not to be missed.\nto hold, articulate & share themselves with the world.\nAnd a rare opportunity to hear from two of Australia's most in-demand speakers with no pretence, nor a pre-planned performance. Just a raw, revealing voyage into how they plan to navigate the emerging meta-modern climate.\nTickets are strictly limited to 40 people to ensure intimacy and elbow room.\nAnd only 2 tickets per person to ensure diversity of wisdom and experience.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I often don't remember enough about my dreams to include any wild ideas in my writing, but usually that magic period right before I fall asleep is when a good bit of dialogue or a plot twist will hit me, and I'll have to write it down before I drift off and forget it. In my last dream, one of my MFA instructors turned out to be friends with one of my high school teachers, and I was happy because they'd always seemed like cosmic twins to me.\nI first started writing X-Files fan fiction when I was twelve. I created my own first characters in my late teens but wasn't published until 2013.\nChick lit has evolved so much and is no longer a genre in itself but a style of writing that's finding its way into almost all other genres. I think the first I heard the term was around the time Maryjanice Davidson started writing her Dead series. I loved the snarky, funny tone. It was so different from the prim and proper writing I was used to reading. I'm so glad I can find that kind of voice in romance, mystery, sci-fi, and even horror now.\nThis is the 4th book in my Las Vegas Sinners series. I introduced Dylan in the first book, and readers loved him so much, they wanted to hear his story. 
He was inspired by Sidney Crosby, my favorite player. I was always intrigued by how well he handled so much pressure and responsibility at such a young age. I asked myself what kind of woman would be his perfect match, and it needed to be someone who could understand what he was going through. An Olympic skater was just right.\nI'm published through Crimson Romance, so their art department creates my covers.\nI've always seen Josh Hartnett as Dylan. My inspiration for Lori is a model named Andi Davis.\nDefinitely One For the Money by Janet Evanovich! I love the Stephanie Plum series.\nI've been under writing deadlines for the last two years and haven't had much time to read, but I'm still trying to finish the last Odd Thomas book by Dean Koontz because it's one of my all time favorite series, and before that, I read Miss Guided by Alexia Adams.\nRachel Caine! She has a decent following now, but she's brilliant, and I wish her books were movies or TV series.\nAt home, I usually write at the kitchen table, but I have three writing dates a week where I'll meet my RWA chapter mates at local coffee shops for a few hours.\nI've been a vegetarian since I was twelve.\nReese Witherspoon. Short, blonde, blue eyes, and just the right mix of sweet and sass.\nI live on an Air Base, so at any given time, there's probably a C-17 plane above you.\nI'm wrapping up the Sinners series with the fifth book, Fair Trade, out later this year. After that, I'm going to do a collection of short stories, each one catching up with a couple from the series. After that, I'm not sure! I may genre hop, but there will always be romance in what I write.\nI've always wanted to visit the Maldives. Tropical paradise!\nThe song that's been kind of an anthem for my series is \"Vegas Girl\" by Conor Maynard. It's up-beat, and you can't help but dance.\nIf I'm writing at home, I need quiet because it's easy to get distracted. 
If I'm writing in a coffee shop, the background chatter and light music blend into the background, and I can shut them out.\nI wanted to be a doctor. I loved the idea of really helping people and making them feel better. My dad is a general physician, and I used to go with him on hospital rounds to see patients. When I was a junior in high school and took my second chemistry class, it became clear that my talent was writing. I realized what I considered a fun hobby could actually be a life path, and it was the right choice for me.\nI live in Seattle with my Air Force hubby, and I write hockey romance. I played one season of roller hockey when I was fifteen--it hurt enough that I decided I liked it better as a spectator--and it's been true love ever since! My fictional team is the Las Vegas Sinners, and my real-world team is the Pittsburgh Penguins.\nI like strong, capable heroines who bring out the vulnerability in their tough guys.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"2015 marked the fourth full year of operation for the states1 SSBCI-funded programs. The map below shows the approximate location of all loans and investments as reported in the state's Annual Reports to Treasury. The pins shown indicate one or more loans or investments within the county. Click pins for more details. The table below the map shows all reported transactions as of December 31, 2015.\n2 States were approved to use SSBCI funds to support loans and investments to small businesses within the state or outside its geographic borders if the state had determined that the loan or investment would result in significant economic benefit to the state.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"NIIT Technologies Limited is a information technology solutions organization running globally in North America, Europe, Middle East, Asia and Australia. It was founded in 1981. 
Some of its products are banking & financial services, channel and customer interface solutions in insurance, e-business, e-procurement, SAP retail for retail & distribution and travel & transportation. It offers services like BPO, application development management, managed services, package implementation, and platform based services.\nWe have provided you with a brief description of NIIT Technologies Limited and now we are providing some useful contact details of the company including customer care numbers, toll free numbers, phone numbers, office address, etc.\nThis is the customer care toll free number provided by NIIT Technologies Limited for all the Indian customers. If you have any query or need any support, you can call this number. It's free and available 24\/7, 365 days a year.\nWe are providing the contact details of the corporate office or the head office of NIIT Technologies Limited. If you want to contact or visit this office then you can use the information we are providing and here we are mentioning the full address, phone number, fax number, mail address and the official website of the company.\nFull Address: NIIT Technologies Ltd., Corporate Heights (Tapasya), Plot No. 5, EFGH, Sector 126, Noida-Greater Noida Expressway, Noida-201301, U.P., India.\nIn case you want any information regarding the registered office of NIIT Technologies Limited, we are mentioning the full address that is provided by the company.\nFull Address: NIIT Technologies Limited, B-234, Okhla Phase 1, New Delhi-110020.\nWe have provided all the information here that we thought would be helpful for you in contacting NIIT Technologies Limited but still if you want to know something more then you can go through the official website we have mentioned above.\nWe are a leading BPO & have a good set up in Gurgaon, Bihar, Jharkhand, Mumbai, West Bengal & NESA. 
We are interested in partnering with your reputed organization to help you serve government projects wherever digitization is required. If you found our profile suitable then please share the concerned mail ID & contact so we can share the detailed profile of our BPO.\ni m looking for sql server so,are there same fee for each course in every branch?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaulsr b/data_all_eng_slimpj/shuffled/split2/finalzzzaulsr new file mode 100644 index 0000000000000000000000000000000000000000..df8e702e45398058e544dda93240bcad4489a174 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaulsr @@ -0,0 +1,5 @@ +{"text":"When you buy a conventional house in Orlando, or anywhere else for that matter, you have to spend money to furnish and equip the house.\nIt's not just the money, but all the work to go out and buy everything. Find out about the best shopping places, schedule all deliveries, replace what did not look right, and so on.\nAnd it is not only the work to buy everything, but the work, money and the headache to maintain, restore, repair, fix, paint everything later on.\nAt Signum Bella Vida Orlando you don't need to worry about any of that. The price you pay, which is a fraction of that of a conventional house, already includes all the furniture, equipment and utensils you need.\nThe photos on the site are not of model units, but of the actual home you purchase.\nAfter that, you need not worry about anything regarding maintenance, repairs, replacements. 
Signum Resorts takes care of everything for you.\nThus, unlike any other vacation home, when you arrive everything is new and working.\nThere is no smarter way to own a home.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Eco-Pan offers concrete washout waste, containment, disposal, and recycling services to all of Georgia including Atlanta, Savannah, Augusta, Macon, Athens, and Gainesville.\nEco-Pan Georgia has worked with some of the largest construction builders in the industry including Cooper & Co., Pioneer Concrete Pumping, Brundage-Bone Concrete Pumping, Kiewit, Holder, Century Communities, CW Matthews Contracting, Keystone Concrete, and Thomas Concrete. Some of the projects we have serviced include Camp Southern Ground, Quik Trip, City of Newnan, KDC State Farm Building, RM Clayton Water Reclamation Center, Cherokee Country Club, and City of Atlanta.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"British PM Theresa May's valiant efforts may have failed because Brits don't really want to leave the European Union. They shouldn't, say more than 5.1 million which is 5 million more than is needed to trigger a parliamentary debate.\nRead if you wish ->Petition live: Revoke Article 50 and remain in the EU.\nThe British public is speaking out against Brexit in the millions. That cannot be ignored. 
But there are significant voices being heard from Theresa May's side of the house.\nPhilip Anthony Hammond has been Chancellor of the Exchequer since 2016 and Member of Parliament for Runnymede and Weybridge since 1997.\nSpeaking out in contrast with his party leader's obstinate stand against a second Brexit plebiscite, Chancellor Philip Hammond said a second referendum is a \"coherent proposition\" that deserves consideration.\n\"One way or another Parliament is going to have the opportunity this week to decide what it is in favor of, and I hope that it will take that opportunity \u2014 if it can't get behind the Prime Minister's deal \u2014 to say clearly and unambiguously what it can get behind,\" Hammond told Sky News.\nMore history. Above, at the front of a demonstration against Brexit in Manchester in September 2017, a float showed a multi-headed chimera with the faces of Theresa May and three leading Brexit campaigners: Foreign Secretary Boris Johnson, Environment Secretary Michael Gove and Brexit Secretary David Davis. It bore the inscription \"Brexit is a monstrosity\" \u2013 \"Let's stop it\". The float was made by Jacques Tilly and his team, opposing Brexit.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"In following the debates over raising the U.S. debt ceiling, I'm struck by the frequent claim that defaulting on public debt is unthinkable because of the \"signal\" that would send. If you can't rely on the T-bill, what can you rely on? Debt instruments backed by the \"full faith and credit\" of the United States are supposed to be risk-free - almost magically so - somehow transcending the vagaries of ordinary debt markets. The Treasury bill, in other words, has become a myth and symbol, just like the Constitution.\nI find this line of reasoning unconvincing. A T-bill is a bond just like any other bond. 
Corporations, municipalities and other issuers default on bonds all the time, and the results are hardly catastrophic.\nFinancial markets have been restructuring debt for many centuries, and they've gotten pretty good at it. From the discussion regarding T-bills, you would think no one had ever heard of default-risk premiums before.\nInterestingly, this seems to be a case of American exceptionalism: People aren't particularly happy about Greek, Irish and Portuguese defaults, but no one thinks the world will end because of them.\nSo, isn't it time to demythologize all of this? Treasuries are bonds just like any other bonds. There's nothing magic, mythical or sacred about them. A default on U.S. government debt is no more or less radical than a default on any other kind of debt.\n\"What is prudence in the conduct of every private family can scarce be folly in that of a great kingdom,\" Adam Smith famously observed. Bankrupt firms, like bankrupt families, restructure their debt obligations all the time. The notion of T-bills as sacred relics to be once and forever \"risk-free\" seems more like religion than economics to me.\nPublic discussion on the U.S. debt crisis assumes the only options for meeting debt obligations are increasing taxes, cutting spending or both.\nBut asset sales are another viable option. There's a huge literature in corporate finance that explores the benefits and costs of asset sales as a source of liquidity for financially distressed firms.\nOf course, selling assets at fire-sale prices under dire circumstances is far from the best option, but as this literature points out, it is often better than bankruptcy or liquidation. One of the best-known results is that asset sales tend to increase firm value when they result in an increase in focus. 
Would it really be so bad if the government sold off some foreign treasuries and currency, the Strategic Petroleum Reserve, its vast holdings of commercial land, and other elements of its highly diversified and unaccountably bloated portfolio?\nIf asset sales aren't feasible, is default really an option? Isn't the global financial system dependent on the U.S. dollar and the AAA rating of U.S. government debt? Isn't default \"off the table,\" as President Barack Obama and congressional leaders insist?\nOf course not. Default and even repudiation are policy options that have benefits and costs, just as continuing to borrow and increasing the debt have benefits and costs. Reasonable people can disagree about the relevant magnitudes, but comparative institutional analysis is obviously the way to go here.\nBetween 1841 and 1843, eight states and one territory defaulted on their obligations, and by the end of the decade, four states and one territory had repudiated all or part of their debts. These debts are properly seen as sovereign debts both because the U.S. Constitution precludes suits against states to enforce the payment of debts and because most of the state debts were held by residents of other states and other countries, primarily Britain.\nIn spite of the inability of the foreign creditors to impose direct sanctions, most U.S. states repaid their debts. It appears states repaid to maintain their access to international capital markets. The states that repaid were able to borrow more in the years leading up to the Civil War, and those that did not repay were, for the most part, unable to do so. States that defaulted temporarily were able to regain access to the credit market by settling their old debts. 
More surprisingly, two states that repudiated a part of their debt were able to regain access to capital markets after servicing the remainder of their debt for a time.\nAmazingly, the Earth did not crash into the sun, nor did the residents of the delinquent states experience locusts, boils or Nancy Grace. Bond yields rose, of course, in the repudiating, defaulting and partially defaulting states but not to \"catastrophic\" levels. There were complex restructuring deals and other transactions undertaken to try to mitigate harms.\nDoesn't sound like the end of the world to me.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Adobe Photoshop CC 2017 Crack works fast with zero percent risk. In addition, several tools are updated in Adobe Photoshop CC Keygen, including the pen tool, move tool, selection tool, crop tool, slice tool, shape tool and a myriad of other tools. First of all, it was developed by Adobe Systems for Windows. So, you can use this expert software to edit high quality three dimensional pictures with ease using its complex effects, because it is enriched with all types of functions that are necessary to edit pictures. Photoshop Crack comes with amazing tools such as HDR imaging, color management, brushes, precise selection tools, histogram palettes, smart correction, masks, effects and animations.\nAdobe Photoshop CC 2017 crack is a stunning tool to change the background of your photos. It is an excellent tool for 3D and 2-dimensional designs, whether you are working on videos or movies for graphics or making your personal videos. It is also helpful in making your personality shine. You will look like a model if you can edit and retouch your pictures with Photoshop CC 2017. If you are a Mac user and you are looking for graphical editing software, then it's the best software to enhance the grace of your editing and graphics with MAC. 
Adobe Photoshop CC 2017 For Mac works efficiently so you can get the benefit of it for free. Secondly, one of the major issues is to activate Adobe Photoshop CC 2017. The first option is to purchase an activation key, which is a bit of a harsh method. Don't worry, because we have found a new method for activating Adobe Photoshop CC 2017 for free. In the link given at the bottom, you can also download the activation key file to activate Adobe Photoshop CC 2017 Updated.\nAdobe continues making the Photoshop interface more adjustable. You can choose among a few purpose-focused workspace layouts, including 3D, Graphic and Web, Motion, Painting, and Photography, or make your own custom layout of panels and windows. You can even rearrange the program's toolbar button rail to taste. Photoshop's icons now sport the flat, 2D, non-skeuomorphic style that began with Windows 8, later arrived in iOS 7, and has since become a widely adopted interface design standard.\nThe interface also adapts to the current task. A case in point is the new Select and Mask workspace, which is an available option whenever you have a selection tool active. This shows just the tools useful during selection, for example, Refine Edge, Lasso, Brush, Hand, and Zoom, alongside the relevant Properties panel. The interface's color themes offer a pleasing, context-sensitive consistency, as well. If you set the window borders to be light gray, all dialogs will likewise be light gray.\nThere are 3D compositing and editing abilities and improved video controls to expand creative options dramatically.\nHard Disk: 10 GB Free HDD Space.\nDownload the Adobe Photoshop CC 2017 Serial Key from the given links.\nInstall the crack and paste the serial number in the registration box.\nNow your software is registered.\nThanks for the working link.. 
but how to use serial key bRo??","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaxyym b/data_all_eng_slimpj/shuffled/split2/finalzzzaxyym new file mode 100644 index 0000000000000000000000000000000000000000..b8eac17c66bd8f849f4c0fa11e5f19740fbf4173 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaxyym @@ -0,0 +1,5 @@ +{"text":"Palombo, F; Gallinari, P; Iaccarino, I; Lettieri, T; Hughes, M; D'Arrigo, A; Truong, O; Hsuan, J J; Jiricny, J (1995). GTBP, a 160-kilodalton protein essential for mismatch-binding activity in human cells. Science, 268(5219):1912-1914.\nThis list was generated on Thu Apr 18 21:29:52 2019 CEST.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"VENERA LIZZIO was born 24 April 1885, received Social Security number 114-10-7655 (indicating New York) and, Death Master File says, died July 1969. Research in ZIP Code 11971.\nVERA HARDING was born 24 April 1885, received Social Security number 470-56-2858 (indicating Minnesota) and, Death Master File says, died January 1976. Research in ZIP Code 56215.\nVERGE MANN was born 24 April 1885, received Social Security number 428-36-9228 (indicating Mississippi) and, Death Master File says, died March 1984. Research in ZIP Code 38917.\nVERNA HOVEY was born 24 April 1885, received Social Security number 079-09-0643 (indicating New York) and, Death Master File says, died March 1973. Research in ZIP Code 13403.\nVERNON LOOK was born 24 April 1885, received Social Security number 547-10-1222 (indicating California) and, Death Master File says, died September 1973. Research in ZIP Code 95501.\nVICTOR REESE was born 24 April 1885, received Social Security number 300-18-9804 (indicating Ohio) and, Death Master File says, died January 1967. Research in ZIP Code 43140.\nVICTORIA COTE was born 24 April 1885, received Social Security number 369-09-2461 (indicating Michigan) and, Death Master File says, died November 1978. 
Research in ZIP Code 48060.\nVICTORIA OLDAKOWSKI was born 24 April 1885, received Social Security number 210-18-2822 (indicating Pennsylvania) and, Death Master File says, died June 1978. Research in ZIP Codes 16507 and 16504.\nVINCENZO GIZZI was born 24 April 1885, received Social Security number 383-28-9452 (indicating Michigan) and, Death Master File says, died May 1968. Research in ZIP Code 48228.\nVIOLET HUMPHREY was born 24 April 1885, received Social Security number 421-09-8936 (indicating Alabama) and, Death Master File says, died March 1967. Research in ZIP Code 35805.\nVIOLETTE GARDNER was born 24 April 1885, received Social Security number 239-06-1102 (indicating North Carolina) and, Death Master File says, died May 1974. Research in ZIP Code 27288.\nVITO GALGANO was born 24 April 1885, received Social Security number 051-01-6167 (indicating New York) and, Death Master File says, died March 1964.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Stimulates new cell growth, tightens tissues, speeds healing of sores, wounds, reducing body odor, cools fever, and repels insects. Calms nerves, lifts depression, reduces inflammation, fights infection, stimulates sexual desire.\nPatchouli: (Pogostemon cablin) rejuvenates skin cells, so it's used on mature, aging skin and also treats dry skin prone to acne problems. It is also antiseptic and anti-fungal on skin problems. In aromatherapy, patchouli essential oil is typically used to treat acne, anxiety, athlete's foot, constipation, dandruff, eczema, fatigue, indigestion and insomnia.\nA preliminary study published in the Journal of Natural Medicines in 2011 found that patchouli essential oil may help promote sleep.\nIn tests on mice, the study's authors determined that inhaling the aroma of patchouli essential oil may have sedative effects that could be useful in the treatment of sleep problems.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Height 7 3\/8 in. 
by Diameter 4 7\/8 in.; 18.7 by 12.4 cm.\nChristopher Bangs, The Lear Collection: A Study of Copper-Alloy Socket Candlesticks, A.D. 200-1700. (Bethlehem, PA: Oaks Printing Company, 1995), pp. 149, 334, no. 128.\nIn overall fine condition. Wear commensurate with age and use. The electrified candle is attached in a non-invasive manner and may be detached.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"One key deduction not only offers tax benefits, but it also helps you save systematically and prepare for retirement and other financial needs. Registered Retirement Savings Plans (RRSPs) let you put money into a registered plan and deduct the money from your taxable income until you take it out of the plan.\nDeducting your RRSP contribution from your net income means you don't have to pay income taxes on it until you take it out of the registered plan. You will pay lower taxes on the money in the plan when you take the money out if you are in a lower tax bracket at that time.\nAn RRSP can include almost any type of investment. Ensure that the investment is a good one for your needs. Returns may be guaranteed or they may not\u2014some types of investments, such as stock market shares, can lose money. See the module on Investing for more information.\nThe maximum amount you can put into an RRSP depends on your income. It's listed on the Notice of Assessment that the CRA sends you when you file your annual tax return. For example, the maximum limit for 2018 was $26,230. If you did not contribute the maximum in previous years, you may be able to contribute more.\nRRSPs are designed to help you save money for your retirement. You can withdraw money from your RRSP for certain purposes, such as buying your first home and financing your education. However, you will have to pay it back within a limited period of time. 
In addition, if you take your money out of the plan for any other reason before you retire, part of the amount you take out will be withheld for taxes.\nFor details, go to Canada Revenue Agency's information on RRSPs.\nYou have 60 days after the end of the year, usually March 1, to put money in your RRSP to get a tax deduction for the previous year. But don't wait until the deadline. Begin regular contributions (monthly or every payday) as soon as possible and your investment savings will start to grow sooner.\nTo see the benefits you can make by protecting your investments in an RRSP, go to Autorit\u00e9 des march\u00e9s financiers information on RRSP - Registered Retirement Savings Plan.\nExample: Michel plans to retire in 10 years. This year, he has saved $10,000 that he can invest. He's not sure if an RRSP is the best approach for him.\nIn his circumstances, investing $10,000 at five percent per year for 10 years would result in earnings of $6,288.95, if his investment is in an RRSP.\nIf Michel invests outside of an RRSP, he'll pay tax on his investment earnings at a rate of 31.15 percent. The tax will reduce his earnings to $4,027.82, costing him $2,261.13.\nMichel will receive a tax refund if he uses the RRSP. If he also invests the tax refund from his investment in the RRSP, it will be worth $5,074.01 in 10 years.\nMichel's investment inside an RRSP, including the re-invested tax refund, would be worth $7,335.14 more in 10 years, compared to the same amount invested outside an RRSP.\nMichel will pay income tax on the money he takes out of the plan when he retires. The amount will depend on his tax rate when he takes it out. Even after paying these taxes, Michel will likely have saved significantly more money than he would have if he had not placed money into his RRSP.\nWhen you put money into an RRSP, it reduces your taxable income for the year, and may produce a tax refund. 
You can use the refund to pay down a mortgage or other debt, save for a child's education or pursue other financial goals. In this way, an RRSP helps you prepare for retirement and your other goals. Talk to a financial advisor about the best way to use an RRSP to achieve your goals.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaygfv b/data_all_eng_slimpj/shuffled/split2/finalzzzaygfv new file mode 100644 index 0000000000000000000000000000000000000000..ceaf4e67818fc551e176ffcb34229dd88ece9c92 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaygfv @@ -0,0 +1,5 @@ +{"text":"Bit optimistic but closed above another key support. cant no take this long. target next swing high.ambitious but if bTC remains bull would be scary.\nBB bands getting closer and closer with a breakout coming soon. Bias long.\nH4: - Structure: Donwntrend However: - Broken trendline - Tendency: upward --> Buy pullback !\nForecast EOSBTC ! CUP pattern forming ?!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Intex Twin 18\" Dura-Beam Standard Raised Pillow Rest Airbed Mattress, Fiber-Tech Construction, 3-in-1 Valve, Weight Capacity: 300 lbs, 39\" x 75\" x 18\"\nDisney Star Wars Classic \"Space Logo\" Blanket, 62\" x 90\"\nWEN 944 7-Amp Angle Grinder, 4-1\/2\"\n(2 Pack) Keebler Chips Deluxe Cookies, Rainbow, 11.3 Oz.\nFrame It All 2-inch Series Composite Raised Garden Bed Kit - 4ft. x 8ft. x 5.5in.\nComfortable, Convertible, Easy Set Up, Polyester Made, 4-persons Ozark Trail Folding Sports Blue Bench. 
This Bench Has a Backrest and Can Be Used for Camping, Indoor\/outdoor Activities, and Athletic Events.\nStanley STHT30159 Chrome Tape Rule, 25' x 1\"\nBetter Homes Gardens Rustic Country Antiqued Black\/Pine Panel TV Stand TVs up to 52\"\nZevia All Natural Strawberry Soda, 12 Ounce - 24 per case.\nZatarains Pre-Seasoned Crab and Shrimp Boil, 72 Ounce - 6 per case.\nSlumber 1 Mattress-in-a-box Size, Twin 8\"","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Please read these Conditions of Use carefully. The contents of this website are copyright of Bennetts Auctioneers Ltd. Your access to this website is subject to your agreement to these Conditions of Use.\nIf you make any permitted use of this website then you are deemed to have agreed these Conditions of Use. If you reject these Conditions of Use then you are not allowed access to this site. Please note these Conditions of Use may be updated from time to time by Bennetts Auctioneers along with changes in content of the pages at Bennetts Auctioneers discretion. You should ensure you verify the Conditions of Use when making access. You agree to be bound by any updated Conditions of Use in force at the time you access the site.\nAll intellectual property rights (including but not limited to copyright, trade and service marks and registered designs) for the content, look and feel of this website and including all software, logos, text, photographs, demonstration videos, graphics, sounds and other materials are reserved by Bennetts Auctioneers or its associated companies or the owners of such rights. Any reproduction or use save to the extent permitted by these Conditions of Use is strictly prohibited.\nBennetts Auctioneers grants to you the right to view, browse and make a copy of any material published by Bennetts Auctioneers on any page of this website solely by you for your personal use as a consumer or for non-commercial use within your organisation. 
No other right is granted to you unless expressly permitted by Bennetts Auctioneers in writing. If you make any permitted copy, you must retain all copyright and other proprietary notices in the same formal manner as appear on the original. You are not permitted to alter or modify, to transmit to any other person, make any commercial use of or distribute or create any derivative work from any material appearing on this website or make any other use not expressly granted or authorised by Bennetts Auctioneers.\nAny software or other downloadable material available from this website is subject strictly to the terms of any applicable licence, including any exclusions and limitations of liability, appearing in connection with that software or such material. Bennetts Auctioneers does not warrant that any such software or material is error free and you have exclusive responsibility for any application of the same which shall be at your own risk.\nThe content of this website is made available \"as is\" and without any warranty, express or implied, as to the accuracy or suitability of that content or any infringement of any third party right. The content of this website does not represent any warranty as to fitness for any particular purpose. In no event will Bennetts Auctioneers or any person or entity involved in creating, producing or distributing content on this website be liable for any damages, including, without limitation, direct, indirect, incidental, special, consequential or exemplary or punitive damages arising out of any use or inability to use this site as permitted by Bennetts Auctioneers. 
By agreeing to the Conditions of Use you agree that the provisions of this disclaimer shall apply to all software or other downloadable material as may be available on this website.\nWhere this website provides links to other websites which are not under the control of Bennetts Auctioneers, Bennetts Auctioneers makes no representation concerning the contents of those websites nor does Bennetts Auctioneers endorse any such website. Bennetts Auctioneers does not accept any liability for the contents of any linked website, their terms and conditions of use or the protection and treatment of any information you may submit to such linked websites. Any software or other downloadable material as may be found on such sites has not been tested by Bennetts Auctioneers and Bennetts Auctioneers makes no representation regarding quality or the suitability of any such software or material. You should verify the conditions of access to any linked site as such sites are likely to be independent of Bennetts Auctioneers.\nWhere this website permits you to transmit information to Bennetts Auctioneers, you acknowledge that Internet transmission remains inherently insecure and you transmit any information at your own risk. Upon receipt by Bennetts Auctioneers, Bennetts Auctioneers will make its best efforts to protect your information in its systems.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"For scholarship contestants who wish to apply.\nWe have a team of 137 researchers who assist students in coming up with great, in-depth custom essays and academic papers to enhance their research on whatever assignment they're undertaking.\nWe also recognize that university or college can be expensive and know how difficult it can be for college students. With the many expenses required to survive campus life, it can easily lead to debts that could go into the tens of thousands. 
With that in mind, we want to offer a scholarship of $2,500 to an individual who is currently attending a United States recognized college in an effort to make life easier.\nYour essays must reach us by July 23, 2018.\nMake sure to include your full details, which would include the college you are currently attending.\nWe will not review any essays sent after the deadline.\nWe are very excited to receive the submissions and to assist the winners in easing the burden of their learning costs. The winner will be declared at the end of March. Keep checking this page.\nAll your personal information will be kept private and will be used for the contest only, thus will not be shared with any third party.\nBy submitting your essay to our contest, you'll be giving WritingSharks.net exclusive rights to your essay. Credit will be given to the author if used in any way.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Gears of Halo - Video game reviews, news and cosplay: What is the plot of The Impossible Life and the Possible Death of Preston J. Cole?\nWhat is the plot of The Impossible Life and the Possible Death of Preston J. Cole?\nIn the beginning, before Halo: Combat Evolved was even released on the original Xbox, Bungie had thought big and organised a book tie-in. That book was The Fall of Reach. Its job was to set up the events and plot that had led to the start of the Halo game. Its author was a chap called Eric Nylund and he went on to write two more of the 5 Halo novels that followed The Fall of Reach.\nHalo: Evolutions is a collection of Halo fiction in the 'short story format'. It seems only natural then that Microsoft brought back Eric Nylund to have a crack at a story. 'The Impossible Life and the Possible Death of Preston J. Cole' is his take on one of the Halo universe's key characters.\nThe title itself is most interesting. On the face of it, it says nothing. 
It's almost in the style of The Curious Case of Benjamin Button and other book titles that seem to describe the plot.\nPlayers of the original Halo game may remember that the Pillar of Autumn arrived at the Halo, after following the directive known as The Cole Protocol. Named after the protagonist of Nylund's short story, the Cole protocol was itself the subject and name of the sixth halo novel.\nThe plot of Nylund's story gives an insight into what may have really happened to Admiral Cole as according to the official records, he died...but did he?\nClearly, this isn't the full plot summary, it will be here when I've read the whole thing!\nThe ship from the first Halo game wasn't the \"Amber Clad\" (the ship is actually called \"In Amber Clad\") it was the Pillar of Autumn. The In Amber Clad found the second Halo from following the Prophet of Regret as he left Africa in Halo 2.\nYou're right. My bad. Thanks!","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaymff b/data_all_eng_slimpj/shuffled/split2/finalzzzaymff new file mode 100644 index 0000000000000000000000000000000000000000..42bd876665dd0361c2bec7702f26733a7c21a641 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaymff @@ -0,0 +1,5 @@ +{"text":"We turn your requirements into a production-ready machine design. Our highly qualified employees will be glad to provide you with direct support related to any technical questions you may have.\nOur electrical engineering department will provide both the software (PLC controls, PC-based controls, etc.) and the hardware planning (switch cabinets, wiring, etc.) for all control-relevant components. The designs are turned into products in our in-house electrical production. 
It goes without saying that compilation of customer documentation, such as operating manuals, spare parts catalogues or qualification documents, is also part of our daily work.\nDirect contact between our engineering team and our production, our suppliers and our customers guarantees a practical, high-quality design of our machines and machine lines.\nHigh-performance process engineering always requires multiple coordinated assembly groups and machines. For that reason, we offer you our systems engineering service. The core of our systems is always one of our mixers, dryers, reactors or coaters, around which we plan and build an optimised system of devices and machines. Typical examples of system elements might be devices for product transport and product dosage, temperature control of supply and exhaust air or temperature control of heating\/cooling jackets.\nStarting with a product idea, we turn the process into a system concept and create all necessary design and planning documents, such as P&ID flow diagrams, installation drawings and descriptions. In the area of pharmaceutical and food technology, you can make use of our experience in system and process qualification.\nOur work is supported by CAD, PPS and numerous calculation programs. Our drawing archive permits retracing back to the very first hand-made drawing.\nAt the end of all this work, the result is a production-ready system, an individual mixer or a mixing tool developed according to your specific requirements.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Thank you Susan! I love to think about how that conference in 2013 led to such exciting collaboration and hopefully lifelong friendship. I might expect that women attending the Homestead Retreat might find some of the same long-lasting relationships with those that they meet there, as well as sparking new excitement in their creative process. Can't wait!\nI was thinking the same thing, and I'm looking forward to meeting new friends at the retreat, too. 
And to spending time with YOU, my friend!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Project: Pax participates in a Project Management course at UBA (University of Buenos Aires).\nOur partner in Argentina, Marcelo Collia, is giving a masterclass in Project Management at the University of Buenos Aires. The studies are based on improving the learning process, in particular the retention and formation of concepts. The class is divided into project teams, and the roles within each team are rotated so that members experience the different perspectives of real-world Project Management. At every phase of the simulation the students discuss the different alternatives, and each team applies its choices during the simulation. In this way students combine the different phases of listening, reading, watching, demonstrating, arguing and practising the methodology. The simulation provides a realistic situation inside an organization, with a project launch process and organizational components covering Lessons Learned and Risks.\nThis course is part of the Project Management speciality within the Marketing & Management studies.\nWhat kinds of profiles can access this course? Professionals who manage and hold responsibilities for the development of programs and projects; directors and coordinators of work teams in private organizations or public institutions; and also people who want to redirect their careers towards programs and projects, or managers in organizations that execute projects.\nTALAIA OpenPPM allows students to access a simulator on their own initiative during the course and once their studies are finished. One of the clear vocations of TALAIA has always been teaching the effective management of projects; from its beginnings it has been present at universities and institutions of higher education for the use of students. 
For this reason TALAIA not only helps students but also helps companies that want to improve their projects and prepare people for real situations in real jobs.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Neuroscientists who've been studying creativity and insight for the past ten years recently located the part of the brain where the 'eureka' moment takes place.\nHere's what it looks like on a scan.\nYep, a teeny part of your brain, just above your right ear lights up.\nThe coining of Adapt was a eureka moment for us, but unlike the many eureka stories in history, it didn't just drop like an apple from the sky. It came about from lots of hard work, different sources of inspiration, and, like all good learning design \u2013 because there was a need to do it (and, of course, a deadline).\nSo, what's happened since Kineo's collective lobes were flashing like crazy? Here's the story of Adapt, and how it came about.\nThink back to 2010 when nearly all content was created in Flash and delivered just to the desktop. At that time, Kineo was busy designing and developing an onboarding course for a major high street bank. The course would be completed by pre-joining staff on their home PCs, as well as by existing staff in the workplace. But there was a problem: within some areas of the business there was no ability to play Flash. Not only that: stringent accessibility requirements meant that creating a duplicate course in a format such as HTML was looking more than likely. Then, never one to be daunted, Kineo raised the bar of the challenge even further, agreeing to create a mobile version too.\nThis meant a need to create at least three other versions of the course in addition to the core Flash delivery \u2013 one for those learners who couldn't run Flash, one that would meet the organisation's commitment to meet W3C accessibility guidelines, and the mobile version. 
If we tried meeting the mobile brief via a native app then we'd need even more courses \u2013 one for iOS and one for Android. It was clear that a pretty novel solution was required (and fast!), to solve this challenge within the budget.\nIt was at that point that inspiration struck. We realised that rather than create multiple versions of one course at great cost, it was possible for a single HTML course to do it all. Yes: we could create a solution where the same course content would work as well on a smartphone as it did on the desktop. And because it would be built in HTML and displayed in the phone's browser, the device's operating system would be rendered irrelevant. It would also meet the client's accessibility guidelines.\nWhat's more, by adhering to responsive web design principles, we could deliver a multi-device solution that reflected the experiences learners had elsewhere online. We really felt like we might be on to something.\nwe happily swap from one device to another depending on where we are at the time, our preferences and the goal we're trying to achieve.\nThis made us realise that we should go beyond what we'd done for that major high street bank. To meet the expectations of our 'always connected', tech-savvy learners, we needed to place multi-device behaviours at the heart of our entire design approach.\nFollowing the success of the bank project, and mindful of the Google research and the challenges with Flash, I took a bold step. I pitched to the Kineo directors to secure funding for a proof of concept project which we called Adapt. The idea was to test just how far we could push this new approach to designing and delivering elearning. The pitch was approved unanimously and the project commenced.\nWe initially had a three-month development window for the proof of concept project but enthusiasm got the better of us. 
We sold it to a mobile phone giant and another company, who'd heard what we were developing and wanted in, well before the framework had been officially released.\nSo, take it slow or jump in at the deep end?\nIt was time to make a decision. What do you do when you have an entire production department producing Flash courses, but a backlog of work calling for something other than Flash? Tread softly? Outsource? Not if you're Kineo.\nYou trust your instincts about where the industry is heading, you re-train your entire workforce to use a new elearning development framework and kick-start a revolutionary new approach to designing elearning content. And it was a good job we did, because the market was changing rapidly and we were committed to getting ahead of it.\nWe sold \u00a3700k of Adapt elearning in the first year. We could see that the multi-device angle and the modern design approach was really appealing to our market. Sales were going from strength to strength, so much so that by the end of FY13, we'd sold \u00a33m.\nBut enough of the technical details \u2013 how else were we innovating? For a start, we took the opportunity to refresh our design approach by using scrolling page layouts. For many years web designers and instructional designers avoided the scroll bar at all costs and kept all content 'above the fold' to ensure users didn't miss anything out of sight. Yet much of this thinking was based on research that was addressing the needs of na\u00efve users new to a fledgling technology emerging at the end of the 20th century.\nThings have changed. A large proportion of our audience today are tech savvy, and for them surfing the net is second nature. They're used to seamlessly switching back and forth between devices and applications and for that reason we adopted a 'mobile first' approach to our designs. 
Layouts and interactions that worked well on desktop but not on a smartphone or tablet were unceremoniously cut and redesigned from the ground up so they worked equally well on mobile devices.\nThe speed and scale of our transition to Adapt was impressive, and if we're honest, intense. It required huge capital investment as well as real commitment from Kineo's directors. It also meant we had to overhaul nearly every aspect of our processes. And whilst we all knew Adapt was a game changer, we had to sell the idea to the market and prove our new offering was superior to what had gone before.\nIt wasn't an easy journey \u2013 and we didn't expect it to be. When challenges inevitably occurred the team rolled up their sleeves, fixed the problem and improved the framework \u2013 all the while delivering best in class courses to our clients. The rock solid product that is Adapt today exists in no small part to the boundless skill and patience of all Kineo's employees, past and present, who were involved in its creation. Thank you all!\nAt the end of Adapt's first year, and having refined the technology and our design thinking, we started discussing making it a collaborative project; partnering with other companies with different skills and ambitions to help it flourish. In May 2013 the Adapt Framework, \"Kineo's ground-breaking responsive elearning design\", won the Platinum Award for Best Learning Design Technology at LearnX. It was then that we knew that we couldn't keep this to ourselves. We'd proved that this thing had potential but it needed more minds, more hands, and more input to be able to realise its full potential.\nWe invited interested organisations to become part of this journey and the open source project took flight.\nWhat happened next? Well that's another story and it's still very much being written. But the Adapt framework recently won another LearnX award for the fourth year in a row, so it's obviously working.\nWant more? 
Download Going Mobile \u2013 our guide to creating great multi-device learning with Adapt.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"status: Mismatch between pool hostid and system hostid on imported pool.\nand then was verbatim imported into this system.\naction: Export this pool on all systems on which it is imported.\nThen import it to correct the mismatch.\nThe hostid file in the initramfs does not match the input file from \/etc\/, due to genkernel's mangling of the hostid value. So you can't generate a host ID that way. Unless you work around that problem by copying the hostid file out of the initramfs into your live system! Once the files match, zpool status will no longer complain.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaynah b/data_all_eng_slimpj/shuffled/split2/finalzzzaynah new file mode 100644 index 0000000000000000000000000000000000000000..f64ce96523a0a3cf739b86222a4e0e0f86874539 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaynah @@ -0,0 +1,5 @@ +{"text":"If you have atrial fibrillation (also called AFib or AF) and you are on blood thinners, this procedure can help you get off your meds and reduce your risk of stroke.\nAtrial Fibrillation is a quivering or irregular heartbeat (arrhythmia) that can lead to blood clots, stroke, heart failure and other heart-related complications. At least 2.7 million Americans are living with AFib. Now a new technique can help you get off blood thinning meds.\nWatch this video to show how a clot can cause a stroke and how Watchman can help open the blood vessel. Dr. William Hirsch, Chair of Cardiology at Deborah Heart and Lung Center, explains the procedure.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"One of the emerging technologies that has the potential to dramatically change the world as we know it, for the good and the bad, is 3D printing. 
Being able to download product designs from the Internet and manufacture them with your 3D printer is an exciting thought; the technology still needs to mature, but the possibilities are endless.\nIn the news is Have Blue, a hobbyist from the AR-15 forum who has become one of the first people to construct an AR-15 class assault rifle using a 3D printer. Have Blue used 3D CAD files from CNCGunsmith.com to print the gun's lower receiver, the part that in a legal sense actually constitutes a firearm. After making some minor changes to the design, he fed about $30 worth of standard ABS plastic into his fairly old school Stratasys 3D printer and combined the body with off-the-shelf, metal AR-15 parts to complete the gun. Afterwards he loaded the gun with relatively low-powered .22 caliber pistol rounds and after firing 200 rounds he posted online that it runs great, without visible signs of wear and tear.\nThe lower receiver was created using a fairly old school Stratasys 3D printer, using a normal plastic resin. HaveBlue estimates that it cost around $30 of resin to create the lower receiver, but \"Makerbots and the other low cost printers exploding onto the market would bring the cost down to perhaps $10.\" Commercial, off-the-shelf assault rifle lower receivers are a lot more expensive. If you want to print your own AR-15 lower receiver, HaveBlue has uploaded the schematic to Thingiverse.\nHaveBlue tried to use the same lower receiver to make a full-blown .223 AR-15\/M16 rifle, but it didn't work. Funnily enough, he thinks the off-the-shelf parts are causing issues, rather than the 3D-printed part.\nMuch more information can be read at Have Blue's website.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Hey, I'm fallencrow305. The first person who can tell me where I got the fallencrow part, I'll mail you $20. I like to build things, because there isn't much else to do in this town. I also like to read xkcd, and play TPT (The Powder Toy). 
I hope to be a good contributor to this community, and get something back from it as well!\nHey, what paper did you use, specifically? I'm unable to find any large yellow paper besides yellow butcher paper, and I'm not sure that's suitable.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Kevin the Locksmith offers a Keyed Alike service for new and existing locks.\nThis is where a number of compatible locks in your property are re-keyed to all work from the same key.\nWe can reduce your property's collection of keys down to just one or two.\nDoes your home or business have lots of different keys?\nYou can easily reduce the number of keys you need to carry by 50-75%.\nAny doors that use the same type of key can be re-keyed to work from the same single key.\nThis involves changing the tumblers within the locks so they all match the one key.\nIf the locks are not the same type, we can replace them with new locks, pre-set to use the same key.\nMaster Key Systems work by installing locks with their own individual different keys, but also providing a single 'Master Key' that works with them all.\nThis is useful for when access needs to be limited to certain areas.\nThis feature is popular with our commercial clients, but also for parents or landlords who need to maintain access to multiple sites.\nTo find out which of your locks are compatible, contact us now.\nWant to reduce the number of keys you carry?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"For other people with the same name, see Michael Weiner.\nMichael S. Weiner (December 21, 1961 \u2013 November 21, 2013) was an American attorney who served as the executive director of the Major League Baseball Players Association for 4 years. He assumed the role on June 22, 2009, replacing Donald Fehr, becoming only the fifth executive director of the union. 
Weiner joined the organization in September 1988 and had been general counsel since 2004.\nHe was born in Paterson, New Jersey.\nWith Weiner at the helm, the union signed an agreement in November 2011 for a five-year contract running until December 2016, which ensured 21 consecutive years of labor peace in Major League Baseball. The agreement allowed for blood testing for human growth hormone, introduced restraints on bonuses for amateur draft picks and international signings, and restored salary arbitration eligibility for part of a class of players that lost it in the 1980s.\nWeiner received his undergraduate degree in political economy from Williams College in 1983. He graduated from Harvard Law School in 1986. From 1986 to 1988, Michael served as law clerk to H. Lee Sarokin, then United States District Court Judge, in Newark, New Jersey.\nWeiner was diagnosed with a brain tumor in August 2012, and died 15 months later, on November 21, 2013. He was 51 years old. He was succeeded by his deputy, Tony Clark, the first former Major League Baseball player to lead the union.\n^ \"Michael Weiner, Executive Director\". Major League Baseball Players Association. Retrieved 2010-11-23.\n^ Crasnick, Jerry (2009-06-29). \"'Regular genius' to be next union chief\". USA Today. Retrieved 2011-11-29.\n^ \"Michael Weiner, MLB Players' Union Head, Treated For Brain Tumor\". Huffington Post. Retrieved 2012-08-29.\n^ \"Michael Weiner, Executive Director\". Major League Baseball . Retrieved 2012-08-29.\n^ McCullough, Andy. \"Michael Weiner, battling inoperable brain tumor, continues to draw people together\", The Star-Ledger, January 6, 2013. Accessed May 3, 2015.\n^ Shaikin, Bill (2012-08-21). \"Baseball union chief Michael Weiner being treated for brain tumor\". Los Angeles Times.\n^ \"Michael Weiner, head of MLB players' union, dies at 51\". New York Post. 
November 22, 2013.\nThis page was last edited on 22 November 2018, at 04:42 (UTC).","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzaywvf b/data_all_eng_slimpj/shuffled/split2/finalzzzaywvf new file mode 100644 index 0000000000000000000000000000000000000000..9d794b32044e232820918c8b49aac1562bb7bab0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzaywvf @@ -0,0 +1,5 @@ +{"text":"Our Maltese puppies are of top quality With Health Guarantee $ Free Shipping to any location. Adopt or Reserve Today. 100% Satisfaction. Special Offers. Great Quality. Fast Delivery. of Maltese puppies and other puppies available.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I'm going NUTS over the PINK, I should say HOT PINK! Over 4 years ago, we tried to hit this color, but they just couldn't get the correct hue, and previous colors tend to be rosey pink to reddish salmon. Just couldn't nail it. So the other day when after seeing it in person, it is sooo HOT PINK! Yes, I was happy and so the pink has to be my fav, but then again, the\u2026..lol.\n\u2190 \"HAPPY HOLIDAZE\" SERIES: 18th ISSUE KUSTOMCITY\u00ae EVO GT DRAG BUS!\nWhile pink is sweet, I am gonna have to go with red, season wise its the most fitting.\nThat black gold looks smokin too, Almost carbonish. On the blue if you threw some chrome rims on it, it would outpace the pink, however pink as it is, in the scale of 1 to 7 gets 2, for 2nd, red is 1st.
\ud83d\ude00 Of course, any I get, I will be lovin.\nhmmmmmmm\u2026\u2026\u2026\u2026I am going to go with the black gold one, and the pink one.\nI know that's 2 BUT\u2026\u2026\u2026.\nThese are yet again, amazing, and I am already sitting at the computor waiting for the sale, LOL.\nRed gets my vote, for true Christmas colors. It just makes me think of all my childhood dreams. Now when it comes to custom Cool it goes to Blue for me. Reminds me of the color of frozen ice. All those years of skating on the lakes of Michigan.\nI think the red and green fit the christmas theme the best. I also love the pink one, doesn't everybody.\nWHOO HOO, just ordered my 2 sets.\nOh well, still got my 2 sets, come oooooooon black gold.\nDave bought my master set today. My fifth. All the colors look cool to me. How about considering a medium purple chrome with pink flames. What do you think??\nCan't wait to receive mine!!\nThanks, and all the best and sucess for the New Year!!!!\nO.K., I think I have finally decided.\nFor the reg. set, I like the green one.\nThe chase versions, I like the pink one, and the super chase, I like the black one, then of course I am sure the mystery chase will be fantastic as well.\nI just knew the mystery chase was going to be silver chrome, FANTASTIC, that is now my second fave, after the black gold.\nHi dave. It was a super idea selling off the chase cars for the X-mas ones. How about doing the same for the big tows if you have any left? I started collecting just after that issue and missed out on the master set and would love to get my hands on the chase ones.Please?\nFor most of these marvels.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Four banking chief executives talked about their outlook on the global economy for 2017. They spoke at the institute's 2016 annual meeting.\nPanelists talked about the economy and the state of the U.S. housing market. 
They focused on foreclosure and interest rates, as well as construction and sales.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Aeon Labs Recessed Door sensor is a Z-Wave sensor that detects whether a door is open or closed.\nThis beautifully designed, tiny sensor fits into your door so that it is almost invisible.\nIt then notifies your Z-Wave system when the door is opened or closed.\nLooks perfectAt around 10 millimetres in width, some might call it small and compact. We prefer to call it invisible. It's the Z-Wave door sensor that's been designed as the door sensor should have always been.\nFor a start it doesn't change your home's aesthetic. It's not plastic stuck on a door. Instead it's a clever piece of technology that installs simply within your door's frame. Aeotec's Recessed Door Sensor is as beautiful as the rest of your home. You never see it, you just feel its benefits.\nSimple idea - Powerful applicationsThe Recessed Door Sensor is a simple idea. It tells you and your Z-Wavenetwork if a door is open or closed. Allowing you to create a whole new level of control through Z-Wave. Alarms. Safety. Intelligence. Automated decisions. All enabled by one simple, powerful and invisible device.\nSimple to installThe beauty of an invisible installation is complemented by the beauty of a simple installation. Installing Recessed Door Sensor is easy. Adding it to your home's network takes no more than activating its battery and syncing it with your Z-Wave system. Making the whole thing invisible means little more than a slight drill hole hidden away on the top of your door. It's the sort of installation that takes no more than 10 minutes but provides years' worth of benefits.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The posting from my end has been a little light - life has been interfering. Thought provoking discussion outsourced to Eli Rabett for today.\nHowever there is one housekeeping item that needs to be announced! 
That is the winner of the first ever annual Dawg's Blawg climate prediction contest. It was a tough fight but Catelli and Lenny both were spot on for the December - November prediction, and were both only 2 hundredths of a degree off for the November prediction. So as stipulated in the rules, the winner is Catelli, who posted 2 hours ahead of Lenny (sorry Lenny, but in my opinion that still qualifies you as having bragging rights, you just won't be able to brag while sipping from your pirate's mug).\nCatelli, can you please contact me at john dot croix at hotmail dot com.\nNow I can start to set up next years. Does anyone have any ideas about the next one?\nThis page contains a single entry by published on December 21, 2009 1:40 PM.\nQOTY was the previous entry in this blog.\nKiss me, Kate is the next entry in this blog.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbaspp b/data_all_eng_slimpj/shuffled/split2/finalzzzbaspp new file mode 100644 index 0000000000000000000000000000000000000000..462d720590dd90b22ea34cced00b9f6eb06944d6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbaspp @@ -0,0 +1,5 @@ +{"text":"We are a door manufacturer based in the North East of England that can meet any requirements from our vast range of composite, fire and flood doors. All Bowater by Birtley doorsets deliver outstanding energy efficient performance, low maintenance, high security and robust construction. The company's British-made doorsets are created to the highest specifications to meet the needs of specifiers, fabricators, builders, installers, residents and homeowners alike.\nDownload free BIM Content for residential style doors available from bimstore created for Birtley Group Bowater Doors.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Since Layout Builder was added to Drupal core in 8.5, Lightning has had plans to adopt it and retire Panels and Panelizer. 
We've been working hard at closing the feature gap between out of the box Layout Builder and what Lightning Layout currently provides.\nUnder certain circumstances, it might be necessary to build a specific version of Lightning with dependencies exactly as they were when it was released. But sometimes building older versions of Lightning can be problematic. For example, maybe the older version assumes an older version of a dependency, or a patch no longer applies with an updated dependency.\nThis post was originally published on Medium. Ah, the config system. Crown jewel of Drupal 8, amirite? Well, yeah, it's fantastic and flexible (as is most of Drupal). But if you have advanced use cases\u200a\u2014\u200asuch as building a system that alters config dynamically\u200a\u2014\u200athere are traps you should know about.\nIt's here! Lightning 2.2.1 provides a migration to the core media system that was introduced in Drupal 8.4.0.\nLightning 2.1.7 includes a new top-level component: Content API. Its purpose is to provide a very basic server-side framework for building decoupled apps using Lightning as a backend. It has no strong opinions about how the \"front-end\" of such an application is implemented -- out of the box, it merely provides tools to deliver Drupal entities according to the JSON API specification.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A thick, filling porridge with a taste of maple syrup and brown sugar provides a feeling of fullness for long hours. One serving is only 132 kcal, and only 9g of carbohydrates, and at the same time 18g of protein and 5g of fiber! Do you train intensely? You are on a diet? DIETmeal PRO Oatmeal is the perfect breakfast for you!\nWeight net. 36g.; 14,64 pln\/100g.\nMix the contents of the sachet with water, heat - after 3 minutes you will admire the intense taste of delicious protein porridge, and it will seduce you with the smell of maple syrup and hot brown sugar.
A portion of porridge will give you energy and keep hunger away for long hours. A perfectly composed and balanced composition of high-protein porridge means a minimum amount of calories and a maximum feeling of fullness.\nOne serving is only 119 kcal, and only 6g of carbohydrates, and at the same time 18g of protein and 5g of fiber!\nDo you train intensely? You are on a diet? DIETmeal PRO Oatmeal is the perfect breakfast for you!\nIngredients: milk proteins (soy), oatmeal (gluten) (22.8%), oat bran (gluten), protein isolate from legumes, chicory extract, corn bran, oat fiber (gluten), salt, aromas (milk), sweeteners (aspartame and sucralose). It contains a source of phenylalanine.\nAllergens: the product contains milk, soy, gluten; possible presence of mustard, celery, eggs and sulfites.\nQuantity (portion) necessary to obtain the product's beneficial effects: 36g. A balanced way of nutrition and a healthy lifestyle are the basis for the proper functioning of the body.\nPreparation: pour the contents of the sachet into 160-170ml boiling water, mix, cook on low heat for 2 minutes, stirring several times. Set aside for 2-3 minutes until thick. Preparation in the microwave: pour 160-170ml of cold water over the contents of the sachet, stir, heat in the oven 1min on medium power. Stir and reheat for 30 seconds. Set aside for 2-3 minutes until thick. For optimal taste and consistency, it is best to eat directly after cooking.\nGold Smucker's syrup without sugar is a delicate addition to cakes, ice cream and desserts. Thanks to the content of natural maple flavor it is as delicious as traditional maple syrup, but it contains only 20 calories! This dense, amber syrup is a perfect sugar substitute that can be used to pour over desserts and baked goods.\nPieces of juicy beef with firm vegetables cream and herbal sauce.
New from DietiMeal is a ready-made dish of carefully selected ingredients that deliver up to 35g of protein in no more than 230kcal.\nBlend of sweet pastry brownie does not contain fat or added sugar - sweetened steak cake has intense chocolate-fruity taste and natural sweetness, it is moist and aromatic though the melon has a minimal amount of fat and sugar.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The 10th International Conference of Iranian Linguistics will be held on Tuesday and Wednesday, 3-4 Mehr 1397, at the Faculty of Persian Literature and Foreign Languages of Allameh Tabataba'i University.\nEach person may submit one single-authored paper and one co-authored paper to the conference. Abstracts must be anonymous.
When submitting an abstract, the paper title, the full name(s), academic affiliation(s), email address(es) and mobile phone number(s) of the author(s) must be sent on a separate page.\nWe encourage submissions in all areas related to descriptive as well as theoretical linguistics, with a focus on Iranian languages and other languages and dialects of Iran.
Areas of interest include, but are not limited to, the following: phonetics and phonology, morphology, syntax, semantics, semiotics, pragmatics, discourse analysis, comparative\/historical linguistics, endangered languages and language documentation, ancient Iranian languages, atlas of languages, language typology, applied linguistics, teaching Persian as a second\/foreign language, translation, sociolinguistics, psycholinguistics, neurolinguistics, sign languages, clinical linguistics, computational linguistics, corpus linguistics, forensic linguistics, cognitive linguistics, philosophy of language, ecolinguistics, lexicology and lexicography, stylistics, writing systems.\n\u00b7 Abstracts are accepted in Persian and English and should be submitted to the following email address no later than 04-Apr-2018.\nseparate page at the time of submission.\nDetails about the registration fees will be provided later.\nAll non-Iranian citizens who are attending the Conference need a Visa to enter Iran. Please contact the Iranian embassy\/Consulate nearest to you for Visa procedures. When your abstract is accepted, you will be issued an official letter of invitation to the Conference, which you can submit with your Visa application.\nAccommodations will be provided by Allameh Tabataba'i University, Tehran, Iran.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"An often overlooked consequence of hurricanes in Florida is their effect upon residential and commercial real estate sales. As soon as a hurricane's approach is announced, real estate closings are usually delayed until after the storm has passed and weather conditions have improved. That delay may lead prospective buyers to change their minds and decide to \"wiggle out\" of the purchasing contract. The best interests of all of the parties involved, however, can be better protected if proper safeguards are in place.
A competent Florida real estate attorney who is also well versed in state-of-the-art tracking systems can assist in making quick changes to a purchasing contract and help prevent unnecessary delays.\nOur firm integrates an innovative software system, Qualia, into the real estate closing process. It streamlines the required steps taken by the attorney and the title, escrow, and mortgage companies. Qualia consolidates the purchasing, due diligence, escrow, and closing processes into one centralized internet portal, reducing the time it takes to get to the closing. The buyer, seller, and other parties to the transaction can also watch the progress of the real estate closing in real time.\nBut even before a closing happens, there are other functions involved in the transaction that can be affected by severe weather conditions. The prospective buyer must also perform a title search, which can be delayed during a hurricane due to power outages. Under Florida law, the title to a property must be clear and not have pending liens or other ownership disputes before it can be successfully transferred and insured. Most mortgage companies will also require a re-inspection of a property after a hurricane.\nAnd if a property that has not yet closed sustains hurricane damage, the lender may not provide all of the necessary funds for the purchase if there are concerns about lasting damage to the property. Without these funds in place, there won't be a finalized sale agreement, so the closing can't happen.\nFrank Charles Miranda, P.A. can explain how our state-of-the-art technology can prevent delays in your real estate closing and help protect you from unforeseen problems. 
Call us at 813-254-2637 or contact us online to learn more.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbasul b/data_all_eng_slimpj/shuffled/split2/finalzzzbasul new file mode 100644 index 0000000000000000000000000000000000000000..7347995fb49702aadf821bff97a27a5ee27e05f7 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbasul @@ -0,0 +1,5 @@ +{"text":"This Sheffield Wednesday Football Calendar is a unique Calendar gift idea for a football fan.\nOn each month of this Calendar, we feature a newspaper report from one of Sheffield Wednesday's key games.\nYou can choose the start Date of these Sheffield Wednesday Newspaper Football Calendars. You can select any month in any of the next 12 months. So if you are viewing Football Calendar details in June, you can select a start Date from June through to any month to June next year. Includes all Public Holidays and key dates.\nOur Football Calendars can be personalised with the Name of the recipient and a short Message on the opening page. The Football Calendar is especially made for the recipient, which makes this a must have Football Calendar gift for any Sheffield Wednesday fan.\nOur Sheffield Wednesday Calendars are unofficial products, and are in no way affiliated with Sheffield Wednesday FC. They have been independently compiled and research by our archive team.\nA Unique Gift Idea For Sheffield Wednesday Fans!\nSheffield Wednesday formed in 1867, making it the second oldest football team in England. The club was originally a cricket side. They began to play football in the winter months to keep the players together and this would soon become common for cricket teams around the country. Football quickly became the club's main sport and player power forced the Owls to turn professional. Sheffield Wednesday won four First Division titles and three FA Cup trophies over the next fifty years. 
The club has since found trophies harder to come by, but the Owls have enjoyed many periods of top flight football and great local support. One reason for disappointing results has apparently been the Sheffield Wednesday strip. Superstitious fans have become convinced that narrow stripes lead to faltering form and traditional broader sections of blue and white will guarantee success for the club.\nOur Personalised Calendars feature reproduced Sheffield Wednesday match reports from some of the best games from the club's history. Starting from the month of your choice, these A3 Calendars are a terrific gift idea for any fan of the Owls.\nThe Personalised Calendars feature an impressive selection of encounters from league fixtures and domestic cups, including trophy-winning games, major finals and two 'Steel City' derbies. Below you will find just a few highlights from this popular Calendar Gift.\nSheffield Wednesday were a formidable team at the end of the 1920s and developed a strong reputation at the top of the game. In 1930, the Owls beat title-rivals Derby County 6-3 to win the Championship for the second consecutive year. The club won another piece of silverware in 1935, defeating West Bromwich 4-2 in the FA Cup Final. The match had been fiercely contested, with the Baggies levelling each time Sheffield Wednesday scored a goal. At 2-2 West Brom had a great chance to win the match, but missed the target. They were unable to recover from the disappointment and Ellis Rimmer scored twice to win the game for the Owls; Rimmer had therefore finished the competition having scored in every round.\nRon Atkinson became manager of the Owls in 1989. The A3 Calendar features an article on his appointment and some of the key cup ties of his reign. In 1991, Sheffield Wednesday were beginning to flourish under their new boss and the team reached the League Cup Final. They faced Atkinson's old club Manchester United and were considered rank outsiders in the match. 
However, John Sheridan scored in the first half for the Owls and United were unable to muster a reply. Sheffield Wednesday won the final 1-0, claiming their first trophy for over fifty years. The club would also reach the FA Cup final in 1993, following a thrilling semi-final win over Sheffield United that was played at Wembley Stadium. Our Football Calendar incorporates reports on this classic semi-final and also the exciting final replay against Arsenal.\nThe most recent game included in the Personalised Calendar is the League One play-off final from 2005. Over 40,000 Sheffield Wednesday fans travelled to the Millennium Stadium and a 4-2 win in extra time guaranteed promotion to the Championship.\nWe have an exciting selection of alternative football gifts that are designed especially for fans of the Owls. These include our Personalised Diaries and a unique Personalised Football Book, which is smartly presented and can be treasured for years to come. Other more general football presents include our Spoof Football Headlines and Stadium Tours.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"WASHINGTON, Feb 17 (KUNA) -- US President Donald Trump said that the US is asking Britain, France, Germany and other European allies to take back over 800 so-called Islamic State (IS) fighters that we captured in Syria and put them on trial.\nTrump reiterated in a tweet that \"the Caliphate is ready to fall. The alternative is not a good one in that we will be forced to release them.\" He affirmed that the US \"does not want to watch as these IS fighters permeate Europe, which is where they are expected to go.\n\"We do so much, and spend so much time for others to step up and do the job that they are so capable of doing,\" he remarked.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Looking for pilates, yoga or a place to do your regular workout? All-Gyms.com is your first stop to discover the best gyms, health clubs and fitness centers in the USA. 
The Pampering Place is one of 85,000 gyms in the USA. Show more Gyms in Harrisonburg, VA.\nBrowse the most complete directory of health clubs, fitness centers and gyms in Harrisonburg, VA.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Best Cuban Bakery in Miami (miami) Try our delicious Cuban bakery shop in Miami. Find our additional smooth heavenly and delicate surface of our cakes. Give your taste bud the trial chomp with the tart flavor. Visit us at http:\/\/www.rickybakery.com\/.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We specialize in premium jacquard lingerie as a manufacturer, supplier and factory in China. Shenzhen Yiyixing Zipper Manufacture Co.,Ltd is one of China's famous brands, wholesaling high-quality premium jacquard lingerie at low, cheap prices.\nWholesale premium jacquard lingerie from China. Need to find cheap premium jacquard lingerie at a low price but from leading manufacturers?
Just find high-quality brands from our premium jacquard lingerie factory. You can also give feedback about what you want; start saving and explore our premium jacquard lingerie. We'll reply to you fastest.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbcuzg b/data_all_eng_slimpj/shuffled/split2/finalzzzbcuzg new file mode 100644 index 0000000000000000000000000000000000000000..3ea120a35ae27e3c5aae134b3c812c1d2d7278d9 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbcuzg @@ -0,0 +1,5 @@ +{"text":"To locate Mayfair Tufted Cocktail Ottoman ByOphelia & Co.\nWonderful Zaylee Slipper chair ByWinston Porter is essential to have in virtually any home. You want to get the greatest items, and you also want to make sure you never pay too much for them. Seems just a little complex, right? Well, this article is here to assist. Keep reading and find some good expert methods for finding the best deals on furniture pieces you are going to really love. Before buying any cabinets, open all of the drawers and check inside. You're not merely making sure that all of the drawers are made to last and open without any hitches; you also want to be sure that the interior of the drawers has some kind of finishing as well. When buying Zaylee Slipper chair ByWinston Porter, measure your room before you go shopping. There is practically nothing worse than picking out the ideal set of Zaylee Slipper chair ByWinston Porter only to find that it will not easily fit in your room once you get it home. Instead, precisely measure your space's proportions before you go, noting where doorways and windows are situated.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Within this fast-changing world of competition and advancement, it has been an undeniable fact that people barely get enough time to spend at their homes, looking after themselves.
It's unavoidable that we spend at least half of the day's time in the office, doing our duties. When the office lacks proper tools and amenities for its employees, the workers develop serious health issues. That is why the decorum and the tools of an office are given immense importance. These days, almost all equipment is selected carefully so as to ensure the employees' comfort. One such important item in the office is the ergonomic office chair. As the majority of offices have become smart, meaning they depend more on machines than on manual labour, ergonomic office chairs are designed especially for such offices. They are the chairs used in workplaces with computer workstations. These specially developed chairs help reduce the pressure on the various nerves of the spinal cord, thereby limiting the risk of muscular and neural illnesses. They also help to maintain correct body posture, aligning the forearms, wrists, lower back, neck and head in the appropriate positions, and so reducing back pain, work-related musculoskeletal disorders and other such illnesses. And if you are simply searching for a chair to unwind in, consider one of the recliners on our list. In this article we will discuss the top 15 best ergonomic chairs, to give readers the accurate understanding they need to select the right one when they buy.\nThis reclining loveseat earns the 2nd spot because it's a super inexpensive, extremely comfy option that can fit in with most home decor styles. Its wood frame is padded with coils (for some spring) and high-density foam (for some softness), and upholstered with dark brown faux leather that makes it an ultra-versatile piece of furniture.
Combine it with the person cave for a cozy spot to watch the game (and dont worry about any spillsits vinyl fabric upholstery makes clean-up easy) or perhaps your living room with a few ornamental throw cushions for that perfect place to cozy up following a long-day. Real-life clients love this lying loveseat simply because its comfy, long lasting, easy to assemble 1 rater even noted that they could put together two models within forty-five minutes! and shipping from the merchant is easy.\nThe Bridgewater is a sofa style that is mainly casual and definitely comfy. This style is the darling of numerous a designer simply because you can use it to produce a informal, pleasant room. Bridgewater couches are versatile, with respect to the furniture you select. Completed in an unbiased material, this style of sofa does not contend with additional factors in the room that may be much more remarkable, such as art work or any other large features. Much more formal material will yield a far more grand type of sofa. A number of characteristics distinguish Bridgewater couches: Typically they have reduced hands and a high back, which contributes to the casual appear. Most also have a customized skirt that hides the thighs and free cushions for that seat and back again. This particular design was created to support slipcovers, which also give an informal air towards the sofa.\nThis is the subsequent couch on our list. It's additional comfortable and incredibly ideal for small rooms or loft living. It is upholstered in polyester material, potential customers inset buttons that provide an elegant gemstone-tufted design. It is constructed of long lasting supplies and the thighs are constructed with long lasting wooden to add to its durability. The loveseat has an espresso stained wooden legs and non-tagging foot caps. It offers a comfortable foam padding and rayon material upholstery which makes it very magnificent. 
It features a longue spot that gives an exceptional room for relaxing.\nThis set of RTA sofas comes in various furniture textures. A number of them include bonded leather, linen, corduroy, bed linen texture, rayon. All you need is to find the right tapestry in significance for the balanced needs. Accessible materials offer an easy-to-clean component. A characteristic that implies its adaptability to resist food and drinks. The luxurious chair and back again cushions vote in favor of a cutting-edge look. The square shaped arm that starts just over them enhances the magnificence. You have the conveniences and soft designs that hit your entry.\nThe best part that compensates for the lost time the glory of the buyers is its style effortlessly. Only for the look of the furniture, you receive a nice and tasteful environment. It's available in wealthy solid shading, imitating as incredible formal attire. While keeping a respectable prominence, the material furniture could be washed effortlessly. You shouldn't rue following turning over a glass of consume in your nice sofa. All you need is to wash the spot for ever and ever precisely if this hits the furniture. Even though great repute accompanies delicacy, it is not the situation here. This incredible couch is given having a contour development of wood. You're more reluctant to hear squeaks and wobbling seems.\nThe Lawson might be known as perfect United states couch style. It's comfortable and simple: The boxy shape is large and characteristically has three back soft cushions in addition to 3 seat soft cushions. The classic Lawson also offers A higher back again and box-shaped cushions which have welts at the sides, just like the back pillows. Ideal for cuddling and napping, this sofa style was designed for Jones Lawson, a united states copper tycoon in the change from the century. He desired a sofa that was quite different from the picky Victorian styles that were typical at that time. 
Current variations from the Lawson may also incorporate metal or wood included in the hands. Most significantly, this sofa design is the one which will change its feel using the textile you choose for that furniture. The shape is so flexible that it may be glamorous, magnificent, industrial, casual or formal depending on your material choice.\nThis lying loveseat requires the 2nd spot simply because its an excellent affordable, extremely comfy choice that can fit in with most home decorations styles. Its solid wood body is cushioned with circles (for some springtime) and high-denseness foam (for many gentle), and padded with dark brown fake leather which makes it an ultra-versatile furniture piece. Add it to the man cave for a cozy spot to view the game (and do not worry about any spillsits vinyl fabric upholstery tends to make cleanup easy) or your family room with a few ornamental toss cushions for the perfect spot to comfortable up after a long day. Real-life customers adore this reclining loveseat simply because its comfy, durable, easy to assemble 1 reviewer even mentioned that she was able to assemble two models within forty-five minutes! and shipping from the merchant is easy.\nThis suits you if all you need is an attractive coach. It is made with a flared body and cushion top amrests in a awesome cobblestone gray. Now, it has tough froth soft cushions which are wrapped up in in a polyester upholstery to bring about the desired comfort and ease. It has a durable corner blocked body that adds to the sturdiness. Also, feet are in a fake wood complete. It has an impressive grey color that suits with any decor. Its dimensions are 89 T by 39 D by 40 They would thus big enough to support you together with your loved onesOrbuddies. More importantly, it occurs completely assembled. This protects the agony of having to assemble the set.\nThe Lawson could be called quintessential United states sofa design. 
It is comfy and straightforward: The boxy form is large and characteristically has three back again cushions as well as 3 seat soft cushions. The classic Lawson also offers A taller back and container-formed soft cushions which have welts in the sides, just like the rear pillows. Well suited for snuggling and sleeping, this couch style was created for Jones Lawson, a united states copper mineral magnate at the change from the hundred years. He desired a sofa which was quite different from the picky Victorian designs which were typical at that time. Current variations of the Lawson may also incorporate wood or metal contained in the arms. Most significantly, this sofa style is the one which will morph its vibe with the textile you select for that furniture. The form is really versatile that it may be glamorous, luxurious, commercial, casual or formal depending on your material option.\nThis reclining loveseat takes the 2nd place because its an excellent inexpensive, extremely comfortable choice that can participate in most home decorations designs. Its wood frame is padded with circles (for some spring) and-density foam (for many soft), and upholstered with dark brown fake leather that makes it an ultra-flexible furniture piece. Combine it with the person cavern for any comfy destination to watch the sport (and do not worry about any spillsits vinyl fabric furniture makes cleanup simple) or perhaps your living room with a couple of decorative toss cushions for that ideal spot to cozy up after a long day. Actual-existence clients adore this reclining loveseat simply because its comfortable, long lasting, easy to put together one reviewer even noted that she could assemble two models within forty-five minutes! and delivery in the merchant is easy.\nThe raise top a coffee table have now turn out to be the standard and essential furniture all over the world. No living room ought to skip this phenomenal lift leading table. 
Getting the super high quality, distinctive and sturdy one can be the greatest accomplishment in life. All you need is remember factors such as size, utilizes and construction material is really essential. Grasp the important thing points and make certain of determining the real thing from the fakes Aranda Sphere Pouf.Using the top greatest checklist above, you can confidently choose one of them which will capture your interest. You won't ever rue instead, you will question why you did not think it is prior to.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Mutrana marketing agency was founded with the goal of providing full services of marketing based on the experiences of the founders of this company.\nOne of the main concerns of entrepreneurs in the Iranian market is the development of a comprehensive marketing plan and the provision of new solutions, implementation, control and evaluation of its effectiveness on the basis of this plan. In Mutrana, we have been trying to bring academic foundations alongside the market experience to achieve an optimal combination.\nIn Mutrana, we are trying to promote your brand by providing full services of marketing based on scientific principles beside experience.\nDeveloping purposeful business based on scientific principles cause job creation, extra income, costumer's network expansion and brand promotion.\nPersonal and organizational improvements are one of main purposes of Mutrana. To achieve this prosperity, we planned courses, technical and general workshops on our agenda.\nAddress: 19 Giti St. #101, Golestan St.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Never judge a book by its cover \u2013 except that many people do. Your Booktrack cover is one of the most important first impressions for a reader, so it makes sense to make it as attractive and attention-grabbing as possible.\nYou've created an amazing Booktrack, chosen the genre and type, made a fantastic cover and are ready to publish. 
Just one small problem: you're stuck at the book description.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"All these 3830406C13RIK cDNA clone are full sequence confirmed. There are 15 3830406C13RIK expression cDNA clones with various fusion tags, especially GFPspark tag and OFPspark tag. 3830406C13RIK expression cDNA clones are expression validated.3830406C13RIK cDNA clones customerized service are available.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Extreame Savings Item! Free Shipping Included! Save 41% on the 15\" Vintage Handmade Leather Messenger Satchel Bag | Briefcase Bag By Aaron Leather by Aaron Leather at Mas Fashion. Hurry! Limited time offer. Offer valid only while supplies last.\n**STYLISH AND ELEGANT - A Unique, Eye-catching Rugged Vintage Look Messenger Bag That Is Solid, Sturdy And Reliable | Looks High End With Casual And Formal Attire | Great For Students, Business Or Travelers Who Want To Make A Fashion Statement.\n**DIMENSIONS - 15\"(Length) x 11\"(Height) x 4\"(Depth) | Adjustable Strap With Max Length 55\"\n**PERFECT GIFT- A Wise And Thoughtful Gift For Active Students & Professionals.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbdtwh b/data_all_eng_slimpj/shuffled/split2/finalzzzbdtwh new file mode 100644 index 0000000000000000000000000000000000000000..c7c4fc61f0a18b0be45ae84f1fac53ae23c8e100 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbdtwh @@ -0,0 +1,5 @@ +{"text":"Wai Lana is an amazing personality; she has introduced many people to yoga as to aid them in their lives. We all know how yoga is beneficial for our physical, emotional and spiritual health. She is one of the most well-known yoga teachers in the world. She is a TV host for yoga series, director and also designs her clothes \u2013 Wai Lana is multi-talented. 
Wai Lana has also been a prominent TV star and has played a vital role in making yoga popular across five continents, including North and South America, Asia, Europe, Australia, and the Middle East.\nWai Lana is a global yoga icon, and she has a very lovely family. She is the proud mother of three and grandmother of six. She loves her family very much and spends happy times with them. She misses them on the shooting sets; sometimes the family also surprises her by visiting the shooting locations, and they have fun and share meals together right on set. Wai Lana is multi-talented: she is a songwriter, author, and a genuine advocate for people's inner peace and well-being.\nWai Lana has many passions that include yoga, music, and vegetarian cuisine. She is also a lyricist and has written all her famous songs, like Namaste, Alive Forever, Colors and Oh My Sweet Lord. When it comes to health and food, she has her own line of gluten-free snacks, meditation CDs, and DVDs. Wai Lana's outstanding effort to spread the teachings of yoga has touched the lives of all, from preschoolers to great-grandparents.\nWai Lana was honored with India's prestigious Padma Shri award for her extraordinary achievements in popularizing yoga globally. Wai Lana is one of only two Chinese nationals ever to win a Padma Shri award in its 62-year history. The Padma Awards are the highest and most well-known civilian awards in India for exceptional and distinguished achievement in a particular field of human endeavor.\nThe TV series \"Wai Lana Yoga\" has aired nationwide for 18 years and is considered to be the longest-running fitness series ever on public television. In 2015, she produced her famous music video \"Namaste\" to help people around the world learn the important wisdom of yoga and to celebrate the first-ever International Yoga Day on June 21, 2015.
She wrote the Namaste song to help us \"to remember one of yoga's greatest lessons\u2014humility and respect for others.\" \"Namaste\" was included in an hour-long PBS special called \"Wai Lana Yoga for a Better Life & a Better World\" which aired nationwide.\nWai Lana has also launched several yoga DVDs for people of all age group, developed teacher training programs, published books, designed kids' yoga products, recorded music, and meditation albums, developed a complete line of yoga gear and even has her range of snacks.\nEveryone appreciates Wai Lana for her excellent yoga skills and her music that helps as a therapy to people for them to find enlightenment in their life. Yoga should be a part of every human's life, and that's what Wai Lana wants, to spread the message of yoga among every person on this planet. So it is ideal to start living your life in a positive notion, avoid any negative element with yoga wisdom shared by Wai Lana. This will help us to live fearlessly and embrace the beauty of aging and growing old. That way you will live a healthy and blissful life with no regrets and no guilt.\nMusic is an essential part of our life, and that's why we often turn to music during our happy and sad times. Now if you notice, we often listen to those songs that make us feel better. When we are in love, we listen to love songs, and when we are sad, we listen to soft songs. But the spectrum of emotions doesn't limit here. Humans tend to have more than two emotions. If we feel angry, we also feel calm and so on. With aging, we often develop existential crisis or fear regarding death and to get away with it we make ourselves busy in our day to day lives. 
That's why Wai Lana, an internationally-renowned yoga teacher & icon, wants to share the teachings of yoga, which can alleviate the fear of death and aging, in her song \"Alive Forever\".\nThe song shares the wisdom of ancient yoga through the teaching called \"Aham Brahmasmi\"\u2014I am the eternal spiritual being, I am not my body, and I will not die when my body dies. The short music film is an inspiring story which features people of all age groups. It delivers the message that we should break the shackles of the fear of aging and embrace the truth. As Wai Lana sings, we can all be peaceful and happy knowing that we need not be afraid of getting old, as the true self is ever-youthful and ever-lasting. The song represents the truth about life. The lyrics are well versed, as every line pours out a golden message that needs to be heard and adopted by the listener.\nThe fear of death and bodily aging has always been a part of human life. A human is born to serve his\/her life term on this planet, and then he\/she has to die one day. It's a bitter truth of human life, but here is something surprising: you are the spirit in essence and not the physical body; once you appreciate this, you can live fearlessly, for you stay alive forever. The music film talks about Aham Brahmasmi.\nThis phrase has been taken from the Vedic scriptures, where the word Aham means 'I', Brahma means 'full' and Asmi means 'am' \u2013 I am Brahman. Once you understand the concept of Aham Brahmasmi, then even death cannot make you fearful. Brahman means that you are part and parcel of the Supreme Soul, and even after your death, you will continue to live.\nEven if your body decays, you will not die. One should be ready for everything in life, and nothing should stop you from living your life fearlessly. We have to accept that death is something which is not in our hands.
So rather than spending time being worried as for how can we relive the past or what will be our future, the best is to live the present to the fullest.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Home \u00bb Breakfasts \u00bb Salted Caramel Mocha at Home!\nToday, I am doing something I never do.\nBut seriously, I felt that withholding this beverage from you, even for just a few days, would be cruel. You could be experiencing pure bliss, and I have the power to either give it to you or withhold it! I can't handle that kind of power. You must have this recipe!\nSo here's the scoop: I (like way too many of my fellow citizens) am a gigantic sucker. As much as I know it is foolish go pay $5 for a specialty seasonal coffee drink at a specialty coffee shop (which will remain unnamed), I do it anyway! I talk a big game about discerning wants from needs and eliminating excess food spending\u2026 but there are still occasions when my precious monthly allowance of \"blow money\" gets very blown indeed, on drinks such as this Salted Caramel Mocha. It's just such a treat, darnit!\nAnyway, the other night, I pinned this at-home recipe on a whim\u2026 thinking that, as usual, it would sit on my Pinterest boards gathering dust and never actually get made. But I woke up the next morning, craving coffee as always, and realized, \"I have sea salt. I have caramel. I have espresso. What the hey? Let's get crazy!\" And I did, and well, 9 am was basically the peak of my day. There was nowhere for it to go after that but South, haha! This drink was amazing. Better than the coffee-shop version, in my opinion. I've made it again and again since!\nYou will definitely want to share the love with this one!\nSalted Caramel Mocha at Home? Yes, please!!\nP.S. I'm still planning to post about bringing food to new neighbors, sick friends, bereaved families, new mommies, etc. I'm very excited about it! 
Barring any further life-changing discoveries (such as this Salted Caramel Mocha), that post will be coming your way on Wednesday! So stay tuned.\nNotes of caramel, espresso, and sea salt... the perfect drink to warm you up during Fall and Winter, made at home for a fraction of the cost of specialty coffee shops!\nMix together the 1 tsp. raw sugar and 1 tsp. sea salt and set aside.\nIn the microwave, heat the milk in a large mug. When milk is steaming, add the 1 or 2 Tbsp. caramel (depending on your sweetness preference), the cocoa, the sweetener, and a pinch of sea salt and whisk well to combine\/froth.\nSlowly pour the hot espresso into the mug with the frothy milk mixture. Top with whipped cream, drizzle with extra caramel, and sprinkle lightly with the mixture of sea salt and raw sugar.\nMakes 1 large mug. Enjoy!\nI used this idea to turn my caramel frappe recipe into a caramel mocha frappe, and it was delicious! I can't wait to try the hot version once cooler weather arrives.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The band took the stage at midnight, five South Korean musicians with day jobs and a dream of stardom playing an unlikely brand of music borrowed from another place and time.\nEarly in a frenetic 12-song set, the RockTigers launched into an old standard that made the crowd, most of them foreigners, many already standing, jammed near the stage, surge in movement.\nAn American turned to a companion. \"Elvis,\" she shouted.\nIt was the King's \"Baby, Let's Play House,\" being crooned in Korean by lead singer Velvet Geena. Dressed in tall boots, a white leather jacket and swirly red dress, her black hair dyed impossibly blond, she led the band in a fluid progression of rockabilly beats that trace their roots back 60 years, to 1950s rural America.\nIt's a Korean twist on the rocker-as-rebel story.
In a conservative culture in which parents dream of their daughters meeting a nice Korean boy, Velvet Geena recalls sneaking out of the house in her rocker boots to join the band for gigs.\nHer band mates, all slicked-up pompadours, U.S. Navy tattoos and cut-off sleeves, made old-school moves straight out of Memphis, one spinning a big slap bass decorated in flames and leopard print, with the rhythm guitarist named Tiger crouching, gyrating, peeling off exaggerated guitar strokes.\nThe rockabilly riffs flowed, one into the other, minus any preening breaks between songs, with Velvet Geena's smile beaming nonstop. This was a band that wasn't taking itself too seriously, just five talented musicians having fun with another culture's music they were seamlessly making their own.\n\"They're so fresh,\" said Keigh Cleveland, an Ohio transplant, as she danced at the bar. \"It's not just the style, the hair \u2014 you can tell they really like the music they play.\"\nIn a South Korean music industry dominated by often bland pretty-boy-and-girl K-pop bands, the RockTigers have developed a cult following among foreigners here with the unique retro sound they call Kimchibilly.\nIt's an Asian version of the old rockabilly genre that combines rock music with the sounds of swing, bluegrass and hillbilly, summoning images of Buddy Holly, bobby socks and malt shops \u2014 but mostly sung in Korean.\nAssuming stage names such as Tiger, Roy, Eddie Tarantula and Jack \"The Knife,\" the RockTigers have attracted a following mostly of young Americans and foreign English teachers.\nAnd though most young Koreans have yet to embrace the sound, the band hasn't missed a beat, playing regular gigs in Seoul's Hongdae university area, mixing in an occasional tour in Japan.\nBut the place they really want to play is America. 
For Velvet Geena, who sports tattoos of angels wings on her back \u2014 she pays the rent by modeling underground fashion \u2014 it would be a pilgrimage back to the home of the music she has come to love.\n\"America is the heart of where this music all started,\" she said. \"We want to explore the roots of rockabilly.\"\nThe RockTigers formed in 2001 as an experiment with different musical styles. \"We weren't serious, we started out just to have fun,\" says Tiger, who, along with Velvet Geena, makes up the band's creative core. \"We were all sunglasses, leather and loud volume.\"\nIn a video from their debut album \"Come on Let's Go!\" the band plays off the outsider image. Decked out in leather and boots, they dance and parade inside a Seoul subway car as baffled middle-age passengers look on.\nGathering material for their second album in 2004, the RockTigers were in search of a new sound. That's when they turned to rockabilly, a genre they'd seen only in old Internet videos.\n\"We liked the sound and the fact that nobody was doing it in Korea,\" recalled Tiger, who by day is a Web designer. \"But it was difficult to learn the technical things about the music, like the different backbeat.\"\nThe band studied the musical style and dress of such pioneers as Jerry Lee Lewis, Wanda Jackson and Eddie Cochran. Then a studio photo they took of themselves in rockabilly garb gave them their first break.\nBased on the publicity shot alone, they were invited in 2004 to perform at the Tokyo Big Rumble, a rockabilly music festival. 
That's where the audience and the promoters learned that the band from South Korea didn't just look the part \u2014 they could really play.\nEncouraged by the warm reception in Japan, the band returned to Korea to release two more CDs, including this year's \"Rock 'N' Roll Licence.\"\nThey also began sponsoring monthly events known as \"Kimchibilly\" nights \u2014 a reference to the famous South Korean pickled vegetable \u2014 with guest bands from South Korea and Japan to promote the new boogie-woogie sound.\nThey admit their success with young Koreans has been mixed, but foreign fans often arrive at shows dressed in ponytails and pompadours.\n\"We never targeted foreigners, they just come to see us,\" said Velvet Geena. \"Many Koreans still think we're a little weird.\"\nWest Virginia native Michael Kunhavijit sees every RockTigers show he can. \"Where I'm from rockabilly is big and so seeing them is a reminder of home,\" he said. And if the band can't cure homesickness, the online world offers help.\n\"The Internet pushes influences from the West into South Korea, Japan \u2014 but it also goes in the other direction \u2014 there's a mutual fascination,\" said Stephen Epstein, director of the Asian Studies Institute at Victoria University in Wellington, New Zealand, who has written about the Korean punk music scene.\n\"You can get on the Internet and hear just about anything, from Indonesian rock to Mongolian hip-hop, all with the click of a mouse.\"\nThat doesn't mean that the RockTigers are expecting Westerners to translate their lyrics; the band has been recording English versions of many of their songs. \"We're looking for an international spotlight,\" said Tiger, expressing a desire that transcends cultures. \"Once we have that, maybe the Koreans will follow.\"\nSome already have. 
At the recent midnight show, 24-year-old Goo Ban-sook sucked down a gin and tonic and danced in place.\n\"I close my eyes and I hear Elvis Presley,\" he said, \"my father's favorite form of music.\"","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This entry was posted in Uncategorized on March 19, 2018 by Michelle Craig.\nTo go along with this week's anchor text\u2026.\nThis entry was posted in Uncategorized on March 11, 2018 by Michelle Craig.\nThese 3rd graders have this down! How about you?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We've written extensively about the phenomenon of network microbursts and how to use the iPerf network performance tool to create them in order to test their effects on your network. Our interest in them grew out of our work with Velocimetrics, since microbursts can have pretty significant effects in financial\/trade markets.\nOur journey down the rabbit-hole got us interested in seeing the effects of microbursts on switches and interfaces in a test network. We wanted to find a switch that would drop traffic when we sent \"bursty\" traffic through it. We ended up uncovering an even deeper problem.\nOur exercise demonstrated how the root cause of packet loss can be very difficult to pin down, and how to use tools like iPerf and packet capture in the right way to find it.\nWhen experimenting with microbursts, we wanted to set up a simple network with a piece of networking gear (in this case a 100 Mbps switch) to serve as the subject. Part of the setup involved sending traffic from more than one interface to the switch, in order to see if microbursts were having an effect on the switch's packet buffers. 
As a matter of convenience, we wanted to use a single machine to run the iPerf client(s), so we added a 1000\/100\/10 Mbps USB adapter to serve as the second interface.\nUsing this, we could run two iPerf clients to send traffic bursts through the switch.\nFrom this graph you can see that the two Network Interface Cards (the internal NIC and the USB dongle) sent data very differently. The internal NIC sent at a (fairly constant) rate of 100Mb\/sec, but the USB NIC sent at 1Gb\/sec for 100ms and then nothing for the rest of the second - the very definition of a microburst.\niPerf works by comparing the data sent by the client with the data received by the server. While doing the test, however, iPerf's results showed 90% packet loss of traffic that was sourced through the USB NIC! At first we assumed that the switch was dropping the packets due to the microburst, but thanks to advice from the folks at packet-foo.com, we knew that you can miss a lot of information about root cause when you rely on captures taken at the source rather than in-line using a TAP.\nA TAP (Test Access Point) is a hardware tool for network monitoring. You can think of a TAP as a network probe that sits in-line between the data source and the device you are evaluating for network performance. Using a TAP, rather than capturing on the source machine, lets you see the data as it actually appears \"on the wire\" rather than a record of what the operating system thinks it is sending.\niPerf's results showed a ~50% packet loss from the client sending traffic through the USB NIC. By taking a packet capture on the TAP, we could see what was actually being sent to the switch. What we found was that not all of the traffic in the capture taken on the host was actually being sent on the wire.\nThe blue bars show packets sent by the OS; the black bars show packets seen on the wire.\nThis graph showed us where the packet loss happened. It wasn't because the switch was dropping packets.
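For contrast, here is a rough sketch of what switch-buffer drops would have looked like if the burst really had overflowed a small egress buffer. This toy model is ours, not from the original test: the 512 KB buffer size and the 1 ms step are illustrative assumptions; only the 1 Gb/sec-for-100 ms burst shape and the 100 Mb/sec line rate come from the measurements above.

```python
# Toy tail-drop queue model of a microburst hitting a switch egress buffer.
# Assumptions: 100 Mbit/s egress link, an (invented) 512 KB buffer, and a
# sender that pushes 1 Gbit/s for 100 ms, then idles for the rest of the second.

LINK_BPS = 100e6               # egress drain rate (bits/s)
BURST_BPS = 1e9                # sender rate during the burst (bits/s)
BUFFER_BITS = 512 * 1024 * 8   # assumed switch buffer size, in bits
STEP = 1e-3                    # 1 ms simulation step

def simulate(burst_ms=100, total_ms=1000):
    """Return (bits_delivered, bits_dropped) over one second of traffic."""
    queue = delivered = dropped = 0.0
    for ms in range(total_ms):
        arriving = BURST_BPS * STEP if ms < burst_ms else 0.0
        accepted = min(arriving, BUFFER_BITS - queue)  # tail-drop past capacity
        dropped += arriving - accepted
        queue += accepted
        sent = min(queue, LINK_BPS * STEP)             # drain at line rate
        queue -= sent
        delivered += sent
    return delivered, dropped

delivered, dropped = simulate()
loss = dropped / (delivered + dropped)
```

With these numbers the buffer absorbs only the first few milliseconds of the burst and the bulk of the bits are tail-dropped, which is why switch buffering was such a plausible first suspect for the loss.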
It was somewhere before that and due to using this USB Ethernet adapter!\nAgain, the blue bars show packets sent by the os, the black bars show packets seen on wire.\nPeople tend to blame the network when things are slow or poorly performing. In this case, the problem arose before even leaving the source - possibly a USB issue, a driver problem, a Mac OS bug, or problems with the hardware itself. Microbursts are tricky and it's very difficult to nail down exactly where they are causing a problem.\nConsequently, this is why \"they\" tell you not to run servers behind a low-cost USB interface. What would happen if this were a web server pushing lots of traffic?\nAdditionally, it's important when doing this type of network investigation that you take your captures from the right location; capturing directly from your local host will give you different results than those seen by the network itself.\nWe're still on a mission to find a switch that will drop bursty traffic. We went in search of a hypothesis, and came up with something totally different. Such is the way with packets! We'll keep hunting - stay tuned.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbeivx b/data_all_eng_slimpj/shuffled/split2/finalzzzbeivx new file mode 100644 index 0000000000000000000000000000000000000000..dc65ce9caf4c753c53a855d5163c7fc5beb8e1d1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbeivx @@ -0,0 +1,5 @@ +{"text":"Caiden \\ca(i)-den\\ as a boy's name is a variant of Kaden (Arabic), and the meaning of Caiden is \"companion\".\nThe baby name Caiden sounds like Caden, Cayden, Kaiden, Caidon, Caidin and Caidan. Other similar baby names are Caide, Camden, Carden, Jaiden and Raiden.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This exclusive Pink Hearts 1 Mug Teapot is the perfect addition to your Valentine's breakfast tray. 
Priced at \u00a329.95 it will be available to buy from the Members Page on Monday 5th February from 9am, while stocks last.\nOur Wallflower and Spring Floral Ducks are also back in stock on Monday 5th February, and will be available exclusively on the Members Page. If you miss out, the Ducks will be back and we will keep you updated on this page!\nOur Spring Floral Cats and Dogs are currently only available to Collectors too. A small number of Dogs will be available to buy on Monday 5th February and any Cats and Dogs remaining will go onto general sale from Monday 12 February.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Where the naturally untainted foot of Mount Santubong gently welcomes the undulating South China Sea proudly sits Cove 55, a private boutique hotel hideaway where ethnic luxury takes on a refreshingly renewed and refined form.\nBuilt originally as a holiday home for an Iban family, the sprawling villa is reborn as an intimate escape with just 16 individual guest rooms and suites.\nThe jewel of Cove 55 is an over-water platform sea deck with an infinity salt-water pool allowing guests to relish breathtaking uninterrupted panoramic views over the horizon.\nAt the heart of the retreat is the unpretentious, casual and social, 'Kechala', Cove 55's destination restaurant that's spearheading the concept of modern Sarawakian cuisine.\nBeyond rejuvenating urban weary souls, it is the owners' wish to have guests discover and immerse in the pride of the Sarawakian land here. Every aspect of Cove 55 is detailed to make you feel wonderfully relaxed at home, explore the lush diversity of Borneo, or indulge in the luxury of lounging about, doing nothing at all.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I know people usually only talk about the weather when there's nothing else to say, but today, the weather was worthy subject matter all it's own.\nToday, the weather was beyond belief. 
It's July in the Midwest, which usually renders more than 5 minutes outside an impossibility, and yet I write this post from my deck, after cooking dinner outside and even \u2013 yes \u2013 building a fire. After a week of monsoon conditions followed by a week of drought, this week Mother Nature decided to grace us with weather usually reserved for football Sundays in October.\nIt's been downright magical, and the dogs have been soaking in every minute.\nCan we take a second to appreciate the fact that Lambeau actually stood still long enough to capture that shot? Lambeau, our 2.5-year-old beagle\/fox hound (we're guessing?) mix never sits still for long, and this may be the only clear photo I have of him.\nI completely agree, Cooper. Today was a perfect way to wrap up the weekend, and I'll admit I'm not quite ready for it to be over.\nOh well \u2026 Monday's just another day, I suppose.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Wm. Notman & Son, J. Tait's Farm, North Battleford, SK, 1920. Silver salts on film \u2013 Gelatin silver process, (20 x 25 cm). McCord Museum VIEW-8544.\nA. Buisson, St. Anthony Mine (Panorama) Sturgeon Lake, Ont, 1936. Photograph. LAC PA-014872.\nCameron Lake Settlement, Great Bear Lake, N.W.T., Photograph. LAC PA-017818.\nMan and woman in a garden, Toronto, Ontario, around 1859. Photograph, silver salts on paper \u2013 Albumen process, (10 x 12 cm). McCord Museum MP-1975.221.\nLouis Riel North West Rebellion 1885, 1885. Photograph (13.1 x 21.4cm) CMH no. 996.2.10.\nHomer Watson. Hay Wagon, date unknown, purchased 1962. Pen and black ink on wove paper, (14.4 x 20.8 cm) NGC no.
7892.10r.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbemrn b/data_all_eng_slimpj/shuffled/split2/finalzzzbemrn new file mode 100644 index 0000000000000000000000000000000000000000..e693bc86a3f998709460334675d69e43a5fdc3ff --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbemrn @@ -0,0 +1,5 @@ +{"text":"Sunscreen and sunblock are essential for protecting yourself from the sun. Try to keep covered up during the peak sun hours, which means from 12 \u2013 2 PM. When that's not possible \u2013 because, you know, summer involves things like going to the beach \u2013 sunscreen is your best defense. Sun protection factor (SPF) is a rating of how well sunscreen will protect your skin from ultraviolet rays. When a sunscreen is labeled \"SPF 30\", this means that 1\/30th of the UV rays will reach the skin. Most experts recommend using sunscreen or sunblock with an SPF of 30 or higher to ensure you are adequately protected.\nBabo Botanicals Daily Sheer Sunscreen for Face SPF 40 is a great product for young ones and it is great for the face and for sensitive skin. The formula is non-greasy and sheer. So long as you rub it in correctly, you won't see any white glaze, which can be the issue with some mineral sunscreens. It contains a unique combination of clear zinc oxide and titanium dioxide as physical barriers to UVA & UVB rays while also containing Aloe, White Tea, Avocado and Jojoba Oil, which are powerful soothing and moisturizing ingredients.\nDevita Solar Body Moisturizer SPF 30 is a super light moisturizer and also a 100% natural mineral sunscreen. It is a perfect all-over body sun-ray blocker. Also non-greasy and super light. The mineral sunscreen has non-nano particles, so it sits on the skin to protect you. Take care not to apply too liberally, or you'll have some of the mineral sunscreen rolling off.
This is a gentle, physical mineral sunscreen that is skin- and ocean-friendly.\nGoddess Garden, maker of the Sport Spray SPF 30, has a huge assortment of products when it comes to suncare. This one protects for around 80 minutes before needing reapplication, even with swimming and sweating. The spray is also really handy for quicker application. It is non-nano, and uses no chemical sunscreens, but rather Zinc Oxide and Titanium Dioxide and red-raspberry seed oil. This one is especially useful for the active lifestyle types. You will need to rub in the spray to make sure it's blended before heading outside.\nResearch by the Environmental Working Group suggests that some chemicals in common sunscreens and sunblocks are endocrine disruptors, which can interfere with certain hormone processes such as those of the thyroid. For this reason, focus on ingredients first and find mineral sunblocks that are not absorbed into the skin.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"People sometimes wonder why things made today aren't made like they were in the \"good old days.\" The old furnace in your grandmother's house, for example, might have lasted 50 years or more.\nThe problem with most old furnaces (and most older fuel-powered machines in general) was that, more often than not, they were extremely inefficient \u2013 sometimes as low as 50 percent or even less. That meant that half the money you spent to heat your home was wasted.\nAt today's energy prices, you simply can't afford to do that (our environment can't afford it, either).\nEfficiency standards started to change in the 1970s in response to oil shortages: manufacturers improved furnace efficiency by adding fans to circulate warm air and producing thinner heat exchanger walls, pushing heating efficiency ratings into the 70 percent range.
Later came electronic ignitions in place of pilot lights, then exhaust fans, then computer chips that would push efficiencies to 98 percent!\nBut all those improvements came at a cost in two areas: durability and simplicity. Thinner heat exchangers crack, more motors mean more breakable parts (and more complicated designs), and an electronic ignition means more maintenance. The truth is that with better technology, more can go wrong \u2013 which means that more must be done to make sure it works right.\nToday's gas furnaces can last 15-20 years \u2013 but that length greatly depends on many factors \u2013 especially how well you maintain your equipment. Right-sizing your furnace and making sure it is properly installed matter, too. If you don't do those things, you could be cutting the lifespan of your gas furnace significantly \u2013 and get higher heating bills, too.\nMaintain your equipment. Routine maintenance will make your equipment run more efficiently and last longer \u2013 and it will prevent many costly repairs.\nUpgrade older equipment. If your system is more than 15 years old, consider an upgrade. Today's systems are much more efficient than they were 15 years ago \u2013 especially if you haven't maintained your equipment.\nUse this rule of thumb. If a repair on your 10+ year-old system is more than half the cost to replace it, upgrade your unit \u2013 the efficiency improvements will offset the cost quickly and give you better overall comfort and peace of mind.\nNot sure whether to repair or replace your gas furnace? We can help you decide what makes sense for your home!
Contact us today to learn more.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Slide 12: Bauhaus \u2014 Plan of the first training programme in Weimar, n.d.\nSchlemmer's work on this project became the basis for his fundamental-systematizing dances \u2013 the Form, Gesture, Space, Stick, Scenery, and Hoop dances \u2013 which he taught at the Bauhaus while in charge of its theater workshop from 1923 to 1929.\nSlide 24: Eugen Batz \u2013 Ordering of colours and forms from Kandinsky's seminar, 1929-30.\nDirect artists into a new machine age by marrying art and industry. The process started in 1897 with Arts & Crafts workshops in Germany and ended in 1933. Bauhaus is not unique. Its principal aim was to bring together ideas from art and industry.\nThe main idea was to improve manufacturing in order to improve the quality of life in Germany and create products for export. They had to have a German identity.\nKirchner's Bathers at Moritzburg \u2013 Expressionism and Jugendstil; he tried to create a unique German art. In Germany artists were all aware of the importance of design and were all trained in Arts & Crafts. This meant no life classes and no copying old masters.\nPeter Behrens' AEG logo and unique AEG fan.\nWilliam Morris felt the machine alienated the worker, so he wanted all his goods to be handmade, but this was utopian: they were so expensive only the upper classes could buy his work, and this was in conflict with his desire to create beautiful goods for everybody.\nThe German approach was very different, as the German Government had been involved in Arts & Crafts since unification (1866 and 1870-71). They wanted to put money into the project and make it successful, so it became better known than in England. Their ideas were therefore applied to industry rather than being utopian.\nThe Bauhaus was a coalition of existing authorities in Weimar.
They asked Walter Gropius to head the new art academy \u2013 the old Fine Arts School and the Arts & Crafts school. They would learn both skills and work with industry and create products that could be mass produced, supported by the Government.\nThey looked at Morris's medieval-inspired designs but applied them to mass culture. The new post-war Weimar Republic brought a new idealism. Gropius's experience during the war caused him to employ painters with utopian leanings.\nGropius was an established architect \u2013 he had designed a locomotive, factory and office building. He married craftsmanship with modern materials. He had a utopian idea that architecture could bring together all the arts. He thought the Bauhaus would contribute to post-war society.\nHe introduced a guild system \u2013 lecturers were called masters, first-year students apprentices and second-year students journeymen. A medieval craft guild system. The school was divided into workshops \u2013 metal, wood, stained glass, colour, textiles, stone and clay. There was also a theatre workshop. Wagnerian ideal of the Gesamtkunstwerk of uniting all the arts. The Futurists and the Expressionists also had this idea of uniting all the arts.\nEach workshop had a master of form and a workshop master (creativity and the practical application).\nThe ultimate aim of the programme was building. Students were encouraged to experiment with different materials.\nA large number of female students were accepted, unlike at traditional art colleges. Elsewhere females were not accepted into life classes until the late 19th century.\nAreas they felt suited the female nature \u2013 intuitive, weaving \u2013 looms were brought into the colleges.\nJohannes Itten (pronounced \"Eaten\") was one of the first masters of form. He wanted to free the hidden talents of the student rather than impose ideas. He was interested in Eastern religions \u2014 he introduced breathing exercises before class and dance.
He got students to reduce old masters to blocks of colour. He was Utopian and mystical. There was a clash between utopianism and the manufacture of goods. So he resigned in 1923 after an argument with Gropius.\nKandinsky and Klee were also interested in mysticism and utopian ideas.\nThe book Point and Line to Plane by Kandinsky explained the spiritual association of form and colour. So blue was a circle, red a square and yellow a triangle. Intermediate forms had intermediate colours.\nAngles were deemed to have a certain temperature: 90 degrees was red, smaller angles yellow, larger cooler.\nHe wrote a colour theory that also linked colour to sound.\nMoholy-Nagy was appointed as a much more practical director as Gropius had sold very little in the first three years. Moholy-Nagy replaced Itten. He was influenced by the Constructivists in Russia and wanted to link art and industry.\nHis medium was photography (see famous photograph looking down from a radio tower).\nMoholy-Nagy was much more practical and he introduced distinct design ideas and typography (very distinct and became a symbol of the Bauhaus). He introduced a photographic course from 1923. He produced some very significant works (see photomontage of eyes in palms of hands before a city, alienation. Also see Kranz's superimposition of portrait and Bauhaus).\nIn 1923 he opened an exhibition that included a model house, the Haus am Horn. This was very significant as it was designed for the modern middle class. The purpose was to make life easy for the inhabitants. The living room was an open space for the whole family. The kitchen was the first fitted kitchen worldwide. The children's room had blackboards on the walls and toy boxes that became tables and chairs. These were all firsts and are common today. The fitted kitchen, cupboards and so on were all designed to be easily mass produced.
The house did away with ornamentation; it was a minimalist house.\nPolitical events in Weimar 1919-1925 were significant and ran parallel with the Bauhaus. There was very strong local resistance to having the school there. An extreme right-wing local government accused the Bauhaus of left-wing and revolutionary tendencies. The life of the Bauhaus students was Bohemian and threatening to the locals. Gropius was accused of favouring Jewish students.\nGropius designed a memorial to the dead left-wing strikers of 1921. In 1923 his house was searched. In 1924 all funding was withdrawn. The school was moved to Dessau (north east on the way to Berlin, an industrial city).\nThey all designed the new building. Moholy-Nagy designed the lighting, Gropius the building. It used modern materials and was practical. He also designed the staff accommodation.\nThe old guild method was abolished and each workshop now had one professor. The school began designing useful goods, such as table lamps, tea infusers and chairs \u2013 the famous Wassily (Vaseely) chair made of steel tubes with two stripes as the seat. They had funding from Dessau but also started generating income from designs. Ruth Hallos carpet design (gobelin). Also Gunta St\u00f6lzl encouraged consumers to put up textile wall hangings. Consumer goods became highly desirable in the late 1920s. The typography workshop was influential. They favoured lower-case letters against the German system of capitalizing all nouns. Smooth, clean, clinical typography. It was very provocative to challenge the German language itself. Moholy-Nagy was a very political person who wanted to challenge the state. The Bauhaus followed the rise of the Nazi party. When the Nazis took control of Dessau in 1932 the school moved to Berlin, but it lasted only one year; in 1933 Hitler took control of the country. It was immediately closed as it was accused of harbouring Jews and left-wingers. Most staff and students went to the US and founded a school in Chicago.
Compare with the Constructivists as they had similar ideas. The majority of the artists had been to Russia in the early 1920s. It is important to note the similarities.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Prayer is the foundation for everything at Pine Lake Covenant Church and the key to unlocking God's prevailing power in your life. We invite the congregation and community to seek the Lord through prayer in several ways.\nCome when you can. It is our intent to always have at least one pastor there. Prayer time lasts 30 minutes.\nIn life we all have times when we need healing. You are invited to a simple one-hour gathering to worship and receive prayer for spiritual, emotional, relational, and physical healing. A ministry team of caring and trained intercessors is available to pray for you.\nThis service is offered quarterly at Pine Lake Covenant Church and meets in the sanctuary. You can also arrange for healing prayer by appointment. See dates below.\nThe prayer network at Pine Lake Covenant Church rapidly mobilizes believers into God-centered prayer for those suffering physically, emotionally, or spiritually. This leads to a closer relationship with God and a deeper trust in Him. The prayer network is maintained by the church office and includes the pastoral staff and other individuals at PLCC who request to participate. You may submit prayer requests for the PLCC community, family and friends.\nLove to pray for others?\nThe Visitation Team consists of lay persons filled with Christ's love and trained to visit people in times of need or immobility, including illness, hospitalization, assisted living, nursing care or being homebound. We offer support through a one-time visit or ongoing conversation and prayer.\nPine Lake Covenant Church occasionally hosts a prayer room to facilitate a time of focused prayer for the whole church community.
Individuals or small groups are invited to sign up to visit the room and use its resources to direct their own personal time of focused prayer.\nWe periodically host prayer services, typically in the fall and during Lent.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Why is visual content important for your brand?\nWhy is visual content important for websites, SEO, and social media?\nPeople engage first with images - whatever captures the eye.\nAs a business owner, you need to draw people into your service or product, your website, or your offers. In this era of shortened attention spans and instant gratification, you need to captivate them immediately or risk losing their patronage.\nVisual content is highly preferred over written content for increasing your Google ranking. Search Engine Optimization, or SEO, takes into consideration a variety of factors such as whether your pages are text-heavy vs. visual and if you have video content. It's all about the overall user experience because, when there are pictures and videos that users enjoy, they spend more time on your site and spend more time looking around at other pages on the site, which Google tracks and factors into your ranking.\nAll of these help you rank higher in Google.\nRead more about SEO here.\nSocial media plays into all of the above as both another way to get your message out and another way to support your Google ranking. The more helpful and engaging information you share, the more you build credibility and establish yourself as an expert and authority in your industry or market. This is where eye-catching Facebook posts, video Tweets, infographics, and more are extremely helpful.
If you're not using fresh visual content, you'll get drowned in a sea of social media and you're less likely to get that client or customer looking for products and services like yours.\nHow can you use fresh visual content to stand out in your business?\nand more - we can find the solution that's right for your business.\nAsk us how we can help you be more consistent and grab your customers' attention with ongoing, fresh visual content.\nSee our Visual Content Creation Packages here.\nWriter, geek, Mom, wife, friend \u2013 in no particular order, just like my house.\nCopyright \u00a9 2015 - 2019 Capturing Spaces is a division of Lethologica. All rights reserved.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbepqv b/data_all_eng_slimpj/shuffled/split2/finalzzzbepqv new file mode 100644 index 0000000000000000000000000000000000000000..68347f3379539f264c62c641068cbbfe472b66aa --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbepqv @@ -0,0 +1,5 @@ +{"text":"WASHINGTON, D.C. \u2014 April 24, 2017 \u2014 Many American cities are undergoing the process of gentrification. This is not a new phenomenon and Washington, D.C., is not a stranger to it. A new book by Derek Hyra, associate professor at American University's School of Public Affairs and the director of AU's Metropolitan Policy Center, unravels the complex forces driving the process of gentrification in the nation's capital. Entitled Race, Class, and Politics in the Cappuccino City, the book examines D.C.'s rapidly changing economic landscape through the prism of the revitalization of the city's historic Shaw\/U Street neighborhood.\nAmerica is witnessing the emergence of what Hyra calls \"cappuccino cities.\" A cappuccino has essentially the same ingredients as a cup of coffee with milk, but is considered upscale and is double the price.\nRace, Class, and Politics in the Cappuccino City is an in-depth ethnographic analysis of a D.C.
neighborhood that has become almost unrecognizable to its older long-time residents in part because of an influx of young, white, and relatively wealthy professionals. Hyra witnesses the upheaval, conflict, and micro-segregation as this black inner-city neighborhood becomes racially \"lighter\" and more expensive.\nHyra also suggests that building social bridges between newcomers and long-term residents through developing neutral \"third spaces\", where residents have meaningful interactions across race and class, could help to reduce the tensions and inequalities associated with gentrification. Hyra insists that more equitable, inclusive, and integrated neighborhood redevelopment will only occur in places like Shaw\/U Street through policy actions that help grease the wheels of micro-level social interactions.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"When the landlord-tenant relationship reaches a dead end, eviction often becomes a challenging and thankless, but necessary task for landlords. Laws at both the state and local level dictate the permissible grounds to terminate a tenancy.\nthe tenant's refusal to correct any number of lease provisions after being afforded a cure-or-quit notice.\nThis list of enumerated reasons is not all-inclusive.\nA growing chorus of Bay Area cities has implemented \"just cause\" eviction rules that prohibit landlords from removing tenants without a carved-out exception, a \"just cause\", and these grounds may differ by municipality.\nAlthough the limited causes to evict will vary from city to city, determining those grounds is the easy part. The procedural requirements and framework to actually prove that the tenant violated the lease are where things get technical.\nAt Bornstein Law, we have always dissuaded our clients from wishful thinking and hoping a problem will resolve itself on its own.
When a tenant is violating the lease terms, it is best to take proactive action and address the underlying behavior at the outset, rather than letting the issue escalate.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Will Denham is a Houston-based litigator specializing in civil litigation, criminal defense, trademark litigation, employment litigation, and copyright-related issues. Before starting his own firm, he worked at Baker Botts, LLP. Licensed in several states and federal courts, he is a highly rated attorney who has been recognized with multiple awards. Will holds a J.D., with honors from the University of Texas Law School.\n\"Mr. Denham recently helped me with a trademark infringement issue. Throughout the entire process he was easy to work with and very thoro...\"\n\"Excellent & very professional work! They are in constant communication with you during the work and give the upmost attention to your nee...\"\nTim Sutherland is both an attorney at law and a business owner. He specializes in assisting startup companies and entrepreneurs found their businesses successfully. Tim is also experienced in drafting and negotiating commercial contracts. He has been licensed to practice law in Texas for the past six years and received his degree in law from the University of Houston Law Center. In August 2014, Tim founded his own legal firm, Sutherland, Attorney & Counselor.\n\"I had an issue with my previous employer who refuse to give me my last check. I contacted Ms. McClain and she went to work immediately to...\"\nTiffany worked for Eagle Land Services, Inc before joining the Pratt Aycock team as a practicing lawyer. Her areas include business law, real estate law, dispute resolution, immigration law and oil and gas. She received a JD from Texas A&M School of Law. In addition to her law degree, she also attended Southern Methodist University where she graduated with a Masters in Dispute Resolution. 
Volunteer service is a priority for her as she is part of the Junior League of Dallas.\nWhy use UpCounsel to hire a Sugar Land Employment Attorney?\nOur experienced Sugar Land employment attorneys & lawyers can help guide you on how to proceed with various employee decisions such as reviewing employee documents such as contracts, agreements, policies, and handbooks, along with difficult decisions such as firing, lawsuits, claims, and complaints.\nA confidentiality agreement and a non-compete agreement are common forms of employee contracts that one of our Sugar Land employment attorneys can help customize for your business. If your business needs to fire an employee, proper measures should be taken from a business legal standpoint to ensure proper communication and a smooth transition of dismissing that employee. In any case, we suggest you connect with our employment attorneys to discuss your options.\nImprove Your Legal ROI with Affordable Employment Attorneys that service Sugar Land, TX.\nWant to Connect with Top Sugar Land Employment Attorneys & Lawyers?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We have created a 2019 ranking of the best colleges in Arizona that offer Natural Resources Management And Policy degrees to help you find a school that fits your needs. Each school's ranking is based on the compilation of our data from reliable government sources, student surveys, college graduate interviews, and editorial review. In addition, you can view our entire list of all 2 Natural Resources Management And Policy schools located within Arizona. We also provide reviews, facts, and questions and answers for schools on our site and offer you access to get valuable information from colleges and universities today.\nNorthern Arizona University offers 2 Natural Resources Management And Policy Degree programs. It's a large public university in a small city. 
In 2015, 8 students graduated in the study area of Natural Resources Management And Policy with students earning 8 Master's degrees.\nPhoenix College offers 2 Natural Resources Management And Policy Degree programs. It's a large public college in a large city. In 2015, 10 students graduated in the study area of Natural Resources Management And Policy with students earning 10 Certificate degrees.\nOnline Natural Resources Management and Policy degrees are available with as many as 4 degrees earned at the most popular school. Read more below about all schools that have offered online Natural Resources Management and Policy degrees. If you are interested in learning more about getting a degree online, check out our page dedicated to online degree information.\nHow many Arizona schools offer online Natural Resources Management and Policy degrees?\nWhat Natural Resources Management And Policy degree programs were earned in Arizona?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Cake chain Patisserie Valerie - which has a store in Preston - has collapsed into administration.\nThe company's troubles were first uncovered in October last year when its shares were suspended after \"significant, and potentially fraudulent\" activity was discovered within its accounts.\nThe chain has a branch in the St George's Shopping Centre on Friargate in Preston.\nThe cafe opened in October 2015, and was the first to open in Lancashire.\nFears are now growing that jobs will be lost across the business.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Transport for London has announced that it is carrying out studies of three SE1 road junctions as part of a major review
of cycle safety ordered by the Mayor.\nLast autumn the Waterloo roundabout was named as the junction with the fourth highest number of collisions involving cyclists in the capital. The Mayor said at the time that work was in hand to provide better signage for alternative routes so that cyclists could avoid the roundabout.\nTowards the end of 2011 a number of cyclists were killed on London's roads including two in the vicinity of Bow roundabout and Ellie Carey in Tower Bridge Road.\nAs a result of pressure from the London Assembly and cycle campaigners, the Mayor asked TfL to carry out a thorough review of around 150 major junctions and planned schemes on TfL roads as well as all junctions on the existing Barclays Cycle Superhighways, to see if more could be done for cyclists in these locations.\n\"Improving safety for cyclists in London has always been a key focus for TfL whenever designing or implementing any improvements to our road network,\" says Leon Daniels, TfL's managing director of surface transport.\n\"By getting key stakeholders from road user groups involved as part of our junction review process, we can build on the good work and skills-base already available to ensure London becomes an even more world class city for cyclists.\"\nThe review has been welcomed by Labour London Assembly member Val Shawcross.\n\"I'm particularly pleased to see that St George's Circus is to receive attention as traffic speeds there are high and there are a number of tourist attractions and public facilities \u2013 as well as schools \u2013 nearby,\" she said.\n\"I hope that Elephant & Castle will be borne in mind in this regard too. I am very pleased that the Mayor has finally taken note of the London Assembly's concerns about cyclist and pedestrian safety.\"\nTfL has already formed a steering group and held the first of a series of meetings as part of the junction review programme. 
Senior staff from TfL and representatives of the main road user groups including freight vehicle drivers, motorists, cyclists, pedestrians and road safety organisations will continue to meet regularly to discuss the establishment and progression of the review.\n\"We know that cycle safety is the big problem which puts Londoners off jumping on their bikes and this review of challenging junctions is something which the London Assembly has been pressing the Mayor to carry out,\" says Jenny Jones, Green Party assembly member and mayoral candidate.\n\"However, the review will not make the street safer unless the Mayor tells his engineers that cyclists and pedestrians come first. I am worried that the Mayor's policy of smoothing traffic flow is saving time for some people, by putting the lives of others in danger.\"","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The OISTAT Theatre Architecture Competition is an international ideas competition, aimed at students and emerging practitioners, which is organised every four years by the Architecture Commission of OISTAT (International Organisation of Scenographers, Technicians and Theatre Architects). The next competition will be generously supported by and exhibited at the Stage, Set, Scenery conference and exhibition organised by the DTHG (German OISTAT Centre) in Berlin from 9-11 June 2015. Selected entries will be exhibited and cash prizes awarded.\nThe theme for the competition will be the design of a floating theatre to be moored at a particular location on the river Spree in Berlin, Germany, but capable of being moved to other sites on the river.\nThe floating theatre will provide a performance space for an audience of 200-300 people and backstage accommodation for a cast of no more than 20 performers. 
Facilities for the audience, such as foyer space, toilets and refreshment areas, will be located on the land and will be temporary and easily moved to another location, when required.\nThere is increasing interest amongst theatre practitioners in the use of temporary site-specific locations to present particular productions. These settings can often provide a unique atmosphere, which resonates with a particular production or style of presentation, in a way which may not be possible in a conventional theatre.\nThese are the themes to be explored in this competition.\nThe site for the competition is on the northeast bank of the river Spree in Berlin in an area known as the 'Holzmarkt' or 'wood market'.\nThis is therefore an 'alternative' kind of place in a state of transition.\nThe Holzmarkt is the center of the neighbourhood \u2013 physically and spiritually, the market, the creative village, the club and the restaurant invite, surprise, inspire and entertain. Artists, artisans, musicians and hedonists create with and for each other.\nA 3D CAD model of the area will also be made available shortly.\nThemed \"Theatre as Public Space\", TAC 2017 opens for submission from February 17 to March 17, 2017. The site will be in the Public Activity Center, a disused sports stadium in Hsinchu City, Taiwan.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Finland's rise to the bookies' top 10, and Latvia's fall to just outside the top. The Czech Republic is now the least favourite to qualify, with odds as low as 63\/10. Winning bets paid in cash based on the normal price on our website. The mayor of Dublin called for Ireland to not take part in the Eurovision 2019 competition since it will be held in Jerusalem.
Jowst, who represented Norway in 2017 and composed yet another competing entry in Melodi Grand Prix 2018, has revealed that he would like to compose yet another song for the Eurovision Song Contest, but this time not for his home country; the producer would instead. This year SuRie will represent the UK in May's contest with the song Storm after winning Eurovision: You Decide. More like this., this list is based on my personal opinion I do not own one of the following videos all the credits go to the artists and producers of the following music and videos. Salvador Sobral has been in the top five with the bookies for the past few weeks, and has now risen to third place with best odds of 12\/1.\nSan Marino is currently the least favourite to qualify from their semi-final, with best odds of 81\/10. Welcome to the 2017 ESC Nation Scoreboard Simulator. It didn't take long for the Israeli prime minister, Benjamin Netanyahu, to attach his name to the victory: You know what we say: Those who didn't want Jerusalem in the #Eurovision are going to get the Eurovision in Jerusalem, he tweeted, in a series.\nMore like this., This is the top 43 according to the odds (March 13th 2018) Chosen songs: Albania - Eugent Bushpepa - Mall Armenia - Sevak Khanagyan - Qami. More like this., This top is based on a poll on the eurovisionworld website 28088 people have voted for the country that they think will win, not their opinion! More like this., this list is based on the odds on the eurovisionworld website m\/?oddseurovision-semi-final-1 I do not own one of the following videos. We still have not defined any city or any dates but we will start to talk to see how we should organise this and when it should. More like this., This is the top 43 according to the odds (March 27th 2018) Chosen songs: Albania - Eugent Bushpepa - Mall Armenia - Sevak Khanagyan - Qami. 
Winner, according to Oddschecker, the current favourite to win this year's contest remains Italy. Odds correct as of 11th May, 16:15. Sweden, meanwhile, has dropped slightly in the odds to fourth place, with best odds of 13\/1.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Platinum Spas There are no products in this category.\nThe deluxe range has models suitable for 3 to 8 persons.\nThe Santorini is a 2018 What Spa award winner and is one of our best sellers from this range.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Katie Carrell is a Board Certified Behavior Analyst (BCBA) who has worked with children and families affected by autism spectrum disorder (ASD) and other developmental differences using the principles of applied behavior analysis (ABA) since 2007. She has trained and worked in a variety of settings which include home, school, and clinic-based educational and behavioral programs. She completed the Master of Education program at Vanderbilt University in Nashville, TN with a specialization in early childhood special education and applied behavior analysis. 
She is passionate about working closely with families and seeing children learn, grow, and succeed.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbfhug b/data_all_eng_slimpj/shuffled/split2/finalzzzbfhug new file mode 100644 index 0000000000000000000000000000000000000000..e38557c92e0c3943785877df89a7c2fd39961c7c --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbfhug @@ -0,0 +1,5 @@ +{"text":"Published 04\/22\/2019 10:11:54 pm at 04\/22\/2019 10:11:54 pm in Where To Buy Glass Pitchers.\nwhere to buy glass pitchers set of 2 glass pitchers with lid and spout 18 liters ribbed design fridge door amazoncom mini glass pitcher ounces high child sized mini glass pitcher ounces quot high.\nwhere to buy glass pitchers, amazoncom artcome ounces glass iced tea pitcher with stainless artcome ounces glass iced tea pitcher with stainless steel strainer lid hotcold, heres a great price on vintage style glass pitcher vintage style glass pitcher, amazoncom luminarc quadro liter ounce pitcher amazoncom luminarc quadro liter ounce pitcher fridge jug carafes pitchers, heres a great price on personalized monogram glass pitcher j clear personalized monogram glass pitcher j clear, holiday shopping special french home recycled glass pitcher and french home recycled glass pitcher and tumblers, amazoncom anchor hocking chiller glass pitcher with lid anchor hocking chiller glass pitcher with lid ounce, impressions drink pitcher reviews crate and barrel , amazoncom mr coffee ice tea glass pitcher qt bvsttp mr coffee ice tea glass pitcher qt bvsttp, williams sonoma pitcher williams sonoma roll over image to zoom, amazoncom libbey mario glass pitcher ounce crisa glass amazoncom libbey mario glass pitcher ounce crisa glass pitcher carafes pitchers.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"You may be thinking that with all the bright, eye-catching holiday decorations in stores these days there isn't much reason 
to create your own wreath.\nHere's a secret\u2026 It's not just about the wreath itself.\nThis activity is a reminder to all of us, young and old, that there is much to discover outside all year long.\nWorking on this project together is a good way to remind your kids that the makings for some of the most beautiful creations can be found outside in nature, rather than at a superstore.\nBe sure to stop and toss some leaves along the way. Joy like this is the true value you get from making a nature wreath with your kids.\nAnd it's a lot of fun to play in the crunchy leaves. This is a great chance to go outside with your kids and make some memories. And then come in and make some more.\nAllow for some shopping time at your favorite craft store to get the wreath and any materials you want to add.\nYour walk will take at least 15-20 minutes, but can take longer if you choose to make a longer hike out of it. The crafting time will take about 30 minutes to an hour.\nYou can make your adventure a full-day affair (if weather permits, pack a lunch and make it a full hike) or just an impromptu outing in your neighborhood. The most interesting places to have this adventure are nearby parks, river or creek beds, hiking trails or just around your neighborhood.\nYou can do this activity in any season and come up with a different-looking wreath every time. We aimed for autumn colors for this article, but the possibilities are endless.\nGet a wreath and a few embellishments at the craft store. Everything else comes from your nature walk.\nGot a wreath and a glue gun? Set them out in your work area and you'll be ready when you return from the nature walk with your kids. Go collect items from nature and bring them home to create something wonderful.\nBefore you head out the door, ask your kids what they expect to find outside. This is a good time to remind the younger kids how much their environment changes with the seasons. 
Those flowers that bloomed over the summer may not be there anymore.\nCheck out this great video that shows all four seasons on planet earth.\nAsk your older kids what they've seen outside lately and what they hope to find.\nIf you don't know where to start on your hike, ask your child. My daughter already had some ideas of what she wanted to collect.\nListen to your kids and let them take charge of your leaf-finding expedition. They may have been paying closer attention to the changing seasons than you realize. Who knows\u2014your children might be able to lead you straight to some collectable acorns or interesting berry bushes.\nThis project can reflect a true snapshot of your surroundings. In the Pacific Northwest, we have tons of trees, so our wreaths are filled with maple leaves and pine needles.\nRemind your kids to look high and low when they collect. In addition to finding neat things, all the extra searching gives them some exercise.\nHelp set your kids' expectations: Different locations will result in much different-looking wreaths.\nA walk on the beach may yield a collection of shells, sea glass and driftwood, while a walk through a grasslands area may offer up long beautiful braids of grass that you can turn and twist into a wreath.\nWherever you are, let your children be creative in finding items from nature that they find beautiful.\nIt's a good idea to collect more than you think you'll need while on your walk.\nA greater assortment of items to choose from will help your kids come up with some creative patterns when they get to the wreath-designing stage.\nBe warned that your table may temporarily look like your back yard. For that reason, it's wise to lay down newspaper before dumping out the collection bags.\nYou might not expect to find a lot of colorful things to add to your wreath in winter. Don't let that stop you! Head outside. You'll be surprised at how much you can collect on your walk.\nKeep your eyes open for unusual items wherever you go. 
Be on the lookout constantly for things\u2014even common things\u2014to add to your wreath.\nSome larger items may be a bit tricky to fit on a wreath, so be sure to guide your younger children toward smaller items that will be simple to attach later. Otherwise, they might get frustrated.\nAfter your walk, lay out what you've found. Rather than firing up the glue gun and starting to attach the items right away, encourage your kids to stop and look at what they found.\nIt's a good idea to spread the collection on a table and sort through what to use versus discard. I always like to have some extra items on hand to offer the kids in case they find they don't have enough.\nIf you sort the collected items into groups it can help the kids plan their wreaths, especially if they're trying to make patterns with different-colored leaves or design a wreath of all acorns and berries, etc.\nOnce you lay out your collections, you can group together similar colors or sizes. This makes it easier to plan a pattern.\nThis is also an opportunity for kids to share or trade items with each other as they plan their wreaths. Those interactions are always fun to watch!\nBe aware that there may be some critters that made it into the collection. When we laid out our nature collection, we had to make a couple of trips to the window to deliver bugs back to the great outdoors.\nOnce everyone has his or her wreath designed, it's time to fire up the glue gun.\nCAUTION: Hot glue guns and melted glue can cause very bad burns! Parents should help younger children and supervise older kids carefully.\nGlue items onto the wreath one at a time and let the glue dry in between items.\nAs they start gluing on items from nature, it's exciting to see what your children create.\nAttach larger items with craft wire or tie them around the wreath with ribbon or raffia.\nEncourage everyone to create an individual masterpiece. Some children like to decorate their wreaths solely with natural items. 
Others may want to embellish the wreath with crafting items like ribbon or buttons. Resist the urge to \"fix\" things unless you see someone getting discouraged.\nTo keep your nature items fresh, try to do the nature walk and make the wreath on the same day. If you need to do them in two separate sessions, I recommend that you press delicate leaves or flowers to keep them in good shape until you have time to glue. This will help prevent the leaves from curling up as they dry.\nPressing leaves and flowers is easier than you'd think. Just find a thick book and lay your leaves or flowers between the pages with paper towels or toilet paper on either side of each leaf. The paper absorbs any dampness from the items.\nWhen you have all your items in the pages, close the book and put something heavy on top until you are ready to glue the leaves.\nTalk to your kids about different wreaths they could make in different places. It's fun to say \"What would a wreath look like if it were collected at Grandma's house in California?\" or brainstorm what a wreath made by kids in Hawaii might look like.\nThis gives the kids a chance to imagine and picture the foliage in different parts of the country (or world) and talk about why different things grow in different places.\nDon't forget to show off your wreath after the glue dries. Adding a loop of crafting wire or ribbon to the backside of your wreath will allow you to hang it on the front door.\nI hope this adventure gives you and your kids a great reason to get out and enjoy the outdoors, and that you make a beautiful wreath to decorate your home during the holidays.\nWhat do you think? Please let us know how it goes. What did you and your kids find on your adventure? How did your wreaths represent your environment? Post a snapshot of your wreath below. We'd love to see what you came up with and learn about where you live. 
My kids and I would love to see what wreaths in other places look like, compared to our \"Pacific Northwest\" wreath.\nThanks, Christina! This looks like fun. I like your ideas about observing the world around us in different locations. We're headed out of state for Thanksgiving & it will be interesting to compare what we can find there to what we have at home.\nSounds fun, Jennifer. If you collect some of the non-perishable natural items (rocks, sticks, acorns, etc.) you can even bring them home to make a post-vacation wreath as a keepsake to remember your holiday trip.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Designed to deliver live security training to just a few or hundreds of remote employees, Private Simulcast allows your multi-location workforce to login to a virtual classroom and experience the same SANS training as employees who are receiving SANS training in person. Private Simulcast is ideal for organizations with distributed workforces, limited travel budgets or training facilities that cannot accommodate all students who need training.\nSANS Simulcast uses Citrix's industry-leading online learning software, GoToTraining, to deliver live classroom instruction to remote students. Remote students will receive the added benefit of six months' access to an archived copy of the class to use as a reference tool or to catch up on a missed session.\nFlexibility: Your team can attend in classrooms (maximizing accountability while minimizing distractions) or as individuals (great for a dispersed workforce or if there is no classroom available). 
With either option students attend their live training session in a virtual classroom.\nCourseware: SANS Simulcast utilizes the same SANS Certified Courseware and Instructors you find at SANS training events.\nTo get more information or to schedule a Private Simulcast Session please email us at simulcast@sans.org.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"US DOL Info Q: I was just laid off from my job. What would a typical pregnancy cost with this insurance? Please help. Thanks, Andrew. A: COBRA is a federal law, which allows you to continue on your previous employer's (not your new employer's) group health plan. When you elect COBRA coverage, it will be a continuation of the same group health plan that you were on the day before your insurance was terminated. If the pregnancy was covered now, it will be covered with the COBRA election. Since this would be the same policy, the benefits would be the same. COBRA costs are 102% of the monthly group health plan rates. Your employer will be able to give you that information.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Statistics show that, on average, U.S. companies lose half of their customers every five years. Here are a few ways to keep your customers loyal.\nIt's true that acquiring new customers will help your business grow. However, your current customers are the lifeblood of your business and keeping them happy should be your highest priority.\nHere are six ways to improve your customer loyalty.\nMany business owners mistakenly believe that customers choose to patronize other companies solely because of better prices. While pricing can be a concern, customers often head to the competition when they don't feel valued.\nMaybe it's reliability or speed or cost. Your company should know your clientele's No. 1 priority and consistently deliver it. 
Remember, customers' desires change frequently, so re-examine this every six months.\nThe lifetime value of your customers is the income you would gain if a customer stayed with you as long as they possibly could.\nGood first impressions tend to generate loyal customers, and you get only one chance to make a positive first impression. Appearance is important. The exterior and interior of your business should be neat and clean.\nEmployees should listen actively to customers. Reassure your customers that you genuinely want to help them. Customers will judge your business based on the politeness, empathy, effort and honesty of your staff.\nInevitably, your employees will encounter unsatisfied customers. Whether they're returning an item or changing a service, customers expect a fair policy. If you cannot offer a resolution immediately, let the customer know when he or she can expect an answer.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbfjrh b/data_all_eng_slimpj/shuffled/split2/finalzzzbfjrh new file mode 100644 index 0000000000000000000000000000000000000000..c679d1f62b727ce27c311d4ddec537316f54e06a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbfjrh @@ -0,0 +1,5 @@ +{"text":"LDF, the finance broker from Flintshire in North Wales, has won Aldermore's inaugural five-a-side football tournament which took place last week (Monday 19th October) at St George's Park, the Football Association's National Football Centre and home of England's national teams.\nThirteen teams competed in the tournament consisting of 11 teams fielded by Aldermore's introducer partners and two teams made up of Aldermore employees.\nThe winners from LDF were awarded the Adam Massen Memorial Cup, named after a member of Aldermore's asset finance team who sadly passed away from a brain haemorrhage in 2014.\nOver \u00a36,000 was raised from the tournament. 
These funds will be donated to Headway, Aldermore's charity of the year, which promotes understanding of all aspects of brain injury and provides support and services to those suffering from brain injury, as well as their families and carers.\nAndy Davies, Sales Director at LDF, said: \"We enjoyed a great day at St. George's Park. The event was a brilliant opportunity to get together with other brokers whilst supporting a great charity in Headway.\"\nJon Maycock, Sales Director, Asset Finance at Aldermore, said: \"Aldermore chose Headway as its charity of the year in memory of Adam Massen, our colleague in Asset Finance who sadly passed away just over a year ago. Adam was a huge football fan so it seemed fitting to organise a fundraising five-a-side football event in memory of him, which I know he would have loved to have been part of.\"","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Kerry Kennedy, second from left, walks with her mother, Ethel Kennedy, third from left, as she leaves the Westchester County Courthouse, Friday, Feb. 28, 2014 in White Plains, N.Y.\nA jury found Kerry Kennedy not guilty of impaired driving on Friday.\nThe daughter of the late New York Sen. Robert Kennedy testified during the trial that she'd taken a sleeping pill by mistake before getting in the car, and did not remember the events that led to her sideswiping a truck on a New York interstate in July 2013, the Associated Press reports. Prosecutors agreed that Kennedy, 54, had taken the sleeping medication zolpidem by accident instead of her thyroid medicine, but focused their case on whether she realized she was drugged and whether she should have stopped the car before swerving out of control. 
Kennedy's lawyers said the behavior was accidental, not criminal.\nKennedy failed sobriety tests at the scene of the accident, but passed several tests hours later at the police station.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Welcome to the Peace Country Roots Group!\nThe Peace Country Roots Group was organized in 1985 and has continued since that date. We operate out of the small annex building next to the museum. We maintain an index to the newspaper records regarding all births, deaths, and marriages. We also have copies of all of the obituaries that have been in the Newspaper. We have recorded all known cemeteries in this area. We also have a fairly extensive library covering self-help genealogical information and books of specific interest, local and international.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"\"Its subtitle will alert the prospective reader that this isn't just a book of 'reading' but rather, it has suggested exercises for the mind and body, grounded in the ritual and teachings of Freemasonry.\nThose looking for a 'workbook' will likely find this intriguing and it's FAR better than a couple of other works along these lines in the past.\nIf you're the type of reader who likes activity-oriented learning with triggers to help stimulate your mind, this may well be the book for you. Others would likely find it a waste of time.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Company Name: Dhruvtara Chemicals Pvt Ltd.\nIntroduction of Dhruvtara Chemicals Pvt Ltd.\nProducts of Dhruvtara Chemicals Pvt Ltd.\nbeta-Tetralone is an organic chemical compound with the molecular formula C10H10O. This colourless oil is an intermediate in organic synthesis. It is a ketone derivative of tetralin, a hydrogenated derivative of naphthalene.\nBenzidine, also called 1,1'-biphenyl-4,4'-diamine, is an organic compound with the formula (C\u2086H\u2084NH\u2082)\u2082. It is an aromatic amine. It is a component of a test for cyanide. 
Related derivatives are used in the production of dyes.\nCrystals or brick red liquid.\n6-FABA (6-Fluoroanthranilic Acid) is a small-molecule inhibitor of MTB tryptophan synthesis that converts Mtb into a tryptophan auxotroph and restores the efficacy of a failed host defense; has an MIC of 5 uM in liquid broth in the absence of tryptophan in Pseudomonas aeruginosa.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbfpqk b/data_all_eng_slimpj/shuffled/split2/finalzzzbfpqk new file mode 100644 index 0000000000000000000000000000000000000000..663df5f32917c4966e36d9963eff404087cb00 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbfpqk @@ -0,0 +1,5 @@ +{"text":"The Campbell Collaboration is pleased to announce the winner of the 2013 Frederick Mosteller Award.\nThe Frederick Mosteller Award honours an individual who has made an important contribution to the theory, method or practice of systematic reviewing in areas within the ambit of the Campbell Collaboration. This year Frank L. Schmidt has been selected as the recipient of this award. Schmidt was one of the pioneer developers of systematic review methods, who widely advocated their use. His methodological contributions and insights have enriched the field, which have, in part, made the Campbell Collaboration possible.\nSchmidt, John Hunter (1939-2002) and Greg Jackson produced one of the first methodological guides to meta-analysis in the early 1980s. Schmidt and Hunter went on to create two subsequent guides, all of which had a significant influence on the early growth and application of meta-analysis. Schmidt broke early on from conventional wisdom, which held that individual studies were best-suited for advancing knowledge, arguing instead that the meta-analysis of research findings was. 
Over the years Schmidt has continued to publish both substantive and methodological meta-analysis papers on the subject.\nToday Schmidt is a Professor Emeritus at the Tippie College of Business at the University of Iowa.\nFollow Campbell on Twitter @C2Update or visit us on Facebook!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Turkish authorities have dismissed more than 10,000 civil servants over their suspected links with US-based cleric Fethullah Gulen, blamed by Ankara for orchestrating the failed coup in July.\nThrough the decrees, elections to choose a rector at the universities have also been abolished. President Recep Tayyip Erdogan will directly appoint the rectors from the candidates nominated by the High Educational Board, Reuters reported.\nThe extent of the crackdown has worried rights groups and some western allies who fear Erdogan is using it to curtail dissent. The government says the actions are justified by the threat to the state on July 15, when more than 240 people were killed.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"It is just possible that the value of certain assets such as land and building may increase. This increase in the value is a gain, so it will be credited. It will also increase the value of assets, so assets account will be debited as the rule for assets goes 'debit the increase'.\n(i) If appreciation appears in Trial balance. It will be shown at the credit side of profit and loss account only, because it is a gain.\n(ii) If appreciation appears in adjustment. All the items appearing in adjustments are shown at two places in the final accounts. Appreciation is a gain so it will be shown at the credit side of profit and loss account on the one hand, and on the other hand, it will be added to the value of concerned assets in the Balance sheet. If land and building worth $ 1,00,000 appreciated @ 15%, the following entry will be passed: Land and Building A\/c Dr. $ 15,000; To Appreciation A\/c $ 15,000.\nNote. 
As a convention of conservatism, firms generally do not show the increase in the value of their assets. As such, appreciation is a casual item, whereas depreciation is a usual item.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The precast concrete industry is showing solid growth worldwide. Quality, productivity, efficient cost and value management from project inception to completion, independence from weather conditions, etc. are just some of the advantages of precast concrete over traditional construction methods.\nHow to save on materials and costs?\nHow to program machines directly from a 3D model?\nHow to be sure that the expensive machines in the factory are used efficiently?\nAnd how to guarantee traceability of an element from early design stage, production, transportation and up to erection?\nPlanning software for the precast parts industry means more than just commercially available CAD software in the construction industry. Automated precast parts production requires the consistency of data from planning to factory and assembly.\nIt requires automation and the highest efficiency with series products, as well as full flexibility and high-performance functions, in order to plan and produce complicated precast parts. This applies to all types of precast parts, from basic ceilings right up to complex architectural elements and specialist parts.\nThe Nemetschek precast software solution supports the technical areas of a precast company, from the planning of precast parts, to operations scheduling, logistics, production and right up to assembly and settlement.\nFor the planning area, this is Allplan Precast, software aimed at professionals for highly efficient planning. 
A plug-in secures quick and error-free data exchange with Scia Engineer for structural analysis.\nThe other company departments use the work results of the planners with TIM, the Technical Information Manager, for virtual planning of tasks, processes and decisions.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"For over fifty years, ADELTA OPTEC and its founders have been manufacturing microscopes that feature the perfect balance of performance, reliability and affordability. INNOVATE ALWAYS \u2013 this is the business philosophy at ADELTA OPTEC and our objective is to build ourselves into one of the top medical microscope manufacturing companies in India and Asia Pacific.\nAt the heart of ADELTA OPTEC will always be our commitment to quality, service, teamwork and faster on-time delivery schedules, which continues to make us an industry leader. Our constant development and commitment to lasting quality and technical excellence ensures that our customers are getting the best product for their application at the best budgeted price.\nOur innovative and easy to use APVIEW imaging software provides for real-time high resolution live imaging and an interactive learning experience. The user is able to adjust RGB, contrast, brightness, gamma, saturation, exposure and frame speed on live image preview. The software provides full editing tools for image correction and placement of text & arrows. The APVIEW software is available with all ADELTAVISION DIGITAL microscopes only.\nOur ADELTAVISION AV10 & AV70 series microscopes come with Adaptive Focusing Management (AFM) system. The AFM system is a precision silky smooth ball bearing system with precision gear mechanism for maintenance free focusing throughout the life of the microscopes. Both coarse & fine movement are achievable by a single knob. 
The AFM system has an Auto Lock feature which prevents the objective & slide from crashing & damage.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbfueo b/data_all_eng_slimpj/shuffled/split2/finalzzzbfueo new file mode 100644 index 0000000000000000000000000000000000000000..b0d66b2b3a98c5d05ce7c3312f482f84a02185dc --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbfueo @@ -0,0 +1,5 @@ +{"text":"Partners Group often gets asked by a wide range of different stakeholders to conduct research in order to achieve a wide range of outcomes.\nThe EITI is a Norwegian-headquartered global standard for the good governance of oil, gas and mineral resources. The EITI had asked the Partners Group to condense its several hundred page annual reports for Zambia into a simple one-page infographic that would highlight the most interesting points for policy makers, stakeholders in the mining sector and other key relevant members of the public in an engaging manner.\nThe Partners Group received feedback from the World Bank and the International Monetary Fund (IMF) that the infographic was effective in helping to raise awareness about the activities of the EITI in Zambia and also to assist the Zambian government to revise policies concerning the mining sector that were more effective and conducive for both the general public as well as stakeholders in the mining sector.\nIn 2015, various undisclosed mining interests approached the Partners Group to research the historical copper production output of Chile and Zambia and how any legislation or legislative changes may have led to increased output and therefore increased taxes being collected and accelerated economic development being achieved. 
An infographic was produced to highlight this simply and published in the Partners Guide Magazine with a view to influencing the Zambian government to emulate the strategies of the Chilean government for the mutual benefit of stakeholders connected to the mining sector as well as the Zambian general public.\nDISCLAIMER: The Partners Group will only embark on research projects whose findings it believes may benefit a wider number of Zambians and only publish things that it feels can be factually substantiated and validated.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A love for cooking may seem like the only prerequisite for opening a restaurant, but experts and veterans agree that beneath the allure lies one of the most challenging industries to enter.\nMany people see money, money, money, but it's one of the hardest industries to get into.\nRestaurants accounted for 772,000 jobs, or 9 percent of all employment in New York in 2014, according to the National Restaurant Association, but a Cornell University study said about 60 percent of independent restaurants fail in their first year.\nWhile the numbers are daunting, there are steps restaurant owners can take to avoid landing in the 60 percent.\nA majority of the work opening a restaurant comes from the planning stages.\nThere are two ways to start a restaurant: find a great location and base your concept on that, or find a great concept and find the location to suit it.\nIt is highly recommended that you create a detailed business plan, including a location, concept, audience, demographic, competition and menu. However, none of these matter without proper funding.\nThe number one reason restaurants fail is that they don't have enough capital.\nIt could take up to a year before a new restaurant starts making a profit. 
Numbers are hard to pin down in the restaurant business.\nThere isn't a crystal ball that tells you how many people are going to come in each night.\nBeyond the work starting up the restaurant, it's essential to create and maintain good customer relations.\nIf you don't stay in the front of the mind of your guests, they're going to start to forget about you.\nTo manage this, a strong social media presence, as well as word-of-mouth is very important.\nA common misconception in the restaurant industry is that the profits come from food. While that is part of the equation, rising food prices mean restaurants must find alternate sources for income, such as catering.\nWhile it may be tempting to raise menu prices to help combat costs, it won't work with restaurant-goers.\nPeople are more value-focused today and definitely won't tolerate higher prices.\nIt is impossible for a restaurant owner to do everything, but it's necessary that everyone working is on the same page.\nIt's important that your staff share the same vision as you do, so they can carry that vision through to the best of their abilities.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Corlene ensures that businesses, foundations, and non-profit groups meet federal and state requirements, operate successfully, and make important changes to their organizations. In addition to overseeing audit and tax work, Corlene consults to dissolve partnerships, distribute assets, maximize working capital, and transfer business ownership.\nShe continuously looks for ways to improve organizations' operations and tighten up their internal controls. 
Corlene consults on the processes and procedures that reduce errors and mitigate the risks of fraud, non-qualification as a non-profit entity, and non-compliance with applicable accounting standards.\nDeciphering special tax rules and statutes enabling a service organization to maintain its non-profit status and advising on the best ways to reduce exposure to its foundation and real estate holdings.\nTailoring tax advice to businesses so owners can plan for and achieve their personal and business goals.\nCorlene grew up in Mankato, Kansas. In high school, she took a liking to bookkeeping, studied accounting in college, and landed at the firm over 35 years ago. A steward to many organizations in rural Kansas, Corlene works to keep local businesses and nonprofits thriving to benefit their communities.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Tiffany & Co is an American jewelry retailer that is known all over the world. The New York City company was founded by Charles Lewis Tiffany and John B. Young in 1837. The story of Charles Lewis Tiffany embodies the American dream. On the first day of business, the takings of Tiffany and Young's \"stationery and fancy goods store\" amounted to a mere $4.98. However, this soon changed.\nIn the 1840s, Charles Lewis Tiffany took the calculated risk of buying diamonds from Europe. After the French revolution and abdication of Louis Philippe, members of the French aristocracy sold their treasured jewels in order to raise funds to relocate and avoid the political turmoil in France. Tiffany's acquisition of such exceptional diamonds earned him the nickname, \"King of Diamonds\".\nThus began the association of Tiffany with diamonds. Tiffany & Co. is now known for the \"Tiffany Diamond\", which is a fancy yellow diamond. The Tiffany Diamond was discovered in 1877 in the South African Kimberley Mines. The rough stone weighed 287.42 carats. 
A year later, Tiffany bought the stone and had it cut down to 128.54 carats; the cutting was supervised by George Frederick Kunz, the resident gem expert at Tiffany's. The famous stone was later set into a Ribbon Rosette necklace designed by Jean Schlumberger and worn by Audrey Hepburn in publicity photos for the iconic movie, \"Breakfast at Tiffany's\" (see image, below). Hepburn was the second woman to have worn the famous gemstone.\nIn 1995, Jean Schlumberger's \"Bird on a Rock\" brooch was mounted. This brooch was displayed at Tiffany & Co. and in the Smithsonian Museum of Natural History in New York in 2006. In 2012, the Tiffany Diamond was set into a white diamond and platinum necklace and exhibited around the world to celebrate Tiffany's 175th anniversary. It now resides in the Fifth Avenue Tiffany & Co. store in New York. The Tiffany Diamond is said to be one of the largest and finest yellow diamonds in the world.\nPart of the Tiffany & Co. brand is a certain shade of robin's egg blue, known as \"Tiffany Blue\". This is the color of fine turquoise, which is the gemstone thought to have inspired the signature color choice. Tiffany Blue was first seen on the cover of the Blue Book, which was the first mail-order catalogue published in the US. The Blue Book of 1845 was sent to customers as part of the Tiffany service. The Tiffany Blue Book is now considered to be a work of art in itself, with glossy color pictures of beautiful jewelry. Some editions are highly sought-after and valuable. The Blue Book is sent out only to certain customers, depending on purchases, and displays the most sumptuous gems and jewelry. Tiffany Blue is also the color used for the boxes in which Tiffany products are presented, tied with a white ribbon.\nTiffany & Co. began with only \"precious\" gemstones; emerald, ruby, diamond and sapphire. 
Later, the company was responsible for acquainting the world market with some colored gemstones that may have remained relatively unknown otherwise. Such gems include tourmaline, which was sold to Charles Comfort Tiffany by George Frederick Kunz in 1875. Another gem type that was popularized by Tiffany & Co. is tanzanite. After its discovery, tanzanite was rejected by the Saks Department Store jewelry department, but admired by Tiffany & Co. who gave it the name \"tanzanite\", after the location of its discovery in the hills of Northern Tanzania. On the Tanzanian and Kenyan border, a green variety of garnet was found by Campbell Bridges and named tsavorite, after the Tsavo National Park, by Henry B Platt, who was the president of Tiffany at the time. Mr Bridges and Tiffany & Co. introduced tsavorite garnet to the world. Another colored gem that was marketed by Tiffany & Co. is kunzite, which was first discovered in the USA, and then named after George Kunz, the first man to describe the pink-violet variety of spodumene. The more delicate pink gemstone, morganite, was named after esteemed Tiffany & Co customer and gemstone collector, J.P. Morgan.\nTiffany & Co. has long been famous for silverware, especially the iconic Tiffany heart tag bracelet and necklace. The company has been selling silverware since 1837. Shortly after Tiffany & Co. became known for diamonds, the company implemented the 925\/1000 silver standard, leading the US silver industry. This means that the silver is comprised of 925 parts per 1000 parts silver. It consists of 92.5% silver and 7.5% other metals, typically copper. This was the British sterling silver standard and was adopted by Tiffany and then the whole of the US. 
Tiffany's silverware won a number of international awards, including the prestigious Grand Prize for Excellence at the 1878 Exposition Universelle in Paris for a stunning \"Japanesque\" silver and mixed metals piece.\nAll of the above has contributed to Tiffany & Company's popularity and iconic status over the years. If you ever find yourself wandering down Fifth Avenue, New York City, be sure to pay a visit to Tiffany & Co., whether it is to see their beautiful gems and jewelry, or simply to stare at the creative window displays, which draw people from all over the globe.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Is March NOT the best month of the year for basketball? Every year college hoops fans go into the NCAA\u00ae tournament with the high hope of creating a perfect bracket, and the more realistic aspiration that every game in the tournament will be a nail-biting thrill. This year, even more so than years past, the madness has delivered.\nFans brought passion, Bing brought the smarts \u2013 how was March Madness for you?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbhmmf b/data_all_eng_slimpj/shuffled/split2/finalzzzbhmmf new file mode 100644 index 0000000000000000000000000000000000000000..b0a3015e464aaeccd40fc23dba4dbdb2fe52f509 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbhmmf @@ -0,0 +1,5 @@ +{"text":"These pics show the current state of the van (January 2018).\nOne thing we've noticed, compared to many Pro conversions, is the lack of 'holes' cut in the sides of Deep Red - no gas fridge vents, or door for the toilet holding tank, or gas bottle locker.\nPics of our travels are now on dedicated pages. We know folks like to see where we've been in Deep Red.\nThe colour of the upholstery is mid-green - the camera tends to go to grey (like us really!).\nThe new washroom and wardrobe layout. 
Still some minor work to be done.\nNew washroom with toilet cassette door and water filler on rear wall. Corner edging still has to be fitted.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"People come from all over the world to enjoy one of the top tourist destinations in the world. Whether enjoying Bourbon Street, the French Quarter, the Casinos, the Riverboats, the Cruises, the Conventions, the Museums, the Stadiums, the Restaurants or simply relaxing in the Big Easy, comedy can be the capper to any business trip or vacation.\nContact us to find out how to be entertained by shows that are never to be repeated. From short-form shows similar to TV's Whose Line Is It Anyway? to long-form shows spinning entertainment from audience suggestions, there are options for every type of audience.\nStand-Up Comedy is the industry standard. There are options ranging from Open-Mic shows to nationally recognized comedians performing at local clubs. Contact us about options that we can bring to your group!\nWhile some public performances may be available, there are great options for having a show custom-tailored to your group. Contact us to find out how we can have a show just for you!\nAt venues with other things going on, such as at casinos or on riverboats, we can provide additional entertainment options. Likewise, this can draw people to booths at conventions or develop camaraderie among employees for team building.\nComedy is a great way to entertain family and friends. With a wide variety of options to entertain your groups, we can serve your needs and raise your private party to a new level. Simply contact us to come up with options.\nFor companies looking to entertain their employees, there are great options to entertain your group while achieving your goals for developing skills and team building. 
Contact us to help determine what we can do to help your group while staying within your budget.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This barber chair would be a perfect addition for Barbershops, Salons, Tattoo Parlors and more. Textured synthetic leather with high density sponge provides a comfortable place for your customer to relax. It is made from high quality materials with modern technology to ensure high quality production. The reclining backrest, adjustable headrest, and comfortable T-footrest make it the best reason for your selection. Furthermore, it is very light and you can move it effortlessly.\nClassic styling with modern flavor, high quality workmanship, unmatched reliability.\nErgonomically designed to fit the natural shape of human backs, providing comfort and support to the lower back.\nOil Pump Warranty---We offer old-for-new service within the first year if there is a quality problem.\nFully Reclinable --- Backrest reclines up to 135\u00b0 for a wide adjustment of back position (Backrest can be easily adjusted to multiple locking positions by a gas piston lock&release mechanism).","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"After school let out back in the 1960's, mom used to make a quick easy cake that dad loved; he hated frosting. We always used to sit around the table and this song used to come on the transistor radio, boy was that a long time ago, I was around 10 years old... see if you just may have heard this one along the way. So today being the anniversary of her death on June 16th, 1999, I miss her every day, I decided to make the cake and play this song as it brought back wonderful memories of both of them, hope you smile too!\nHave ready a greased 9-by-13-inch rectangular baking pan. Set the oven at 350 degrees.\nIn a large bowl, cream butter and 1 1\/2 cups sugar, then mix in the egg yolks, sour cream, and vanilla.\nIn a medium bowl, sift together the flour, baking soda, baking powder, and salt. 
Stir into the butter mixture.\nBeat the egg whites until they hold stiff peaks, then fold into the batter. In a small bowl, mix the cinnamon with the remaining 1\/2 cup of sugar and the chocolate chips.\nPour half of the cake batter into the pan. Sprinkle the top with the cinnamon-sugar mixture. Pour remaining batter on top.\nBake 40-50 minutes or until a toothpick comes out clean. Sprinkle with confectionery sugar.\nVery touching Claudia. I'm sure your mother is so incredibly proud of you. You are so extraordinary and full of love.\nThis cake is wonderful, I can see why your father loved it!\nSounds delicious. Unfortunately I have never tried sourcream cake but I have tried yogurt cake. Guess it is similar.\nThis cake sounds delicious, Claudia! And I agree with your Dad, no frosting for me. I usually scrape it off so I can get right to the cake. Yum.\nClaudia, this is such a beautiful touching post. I love the family photo, great memories for you...from food and music too! I am with your dad there on no frosting as long as it's a moist yummy cake...just like this one! Perfect!\nI remember the song well; my father used to get a kick out of it, too. Love the cake as well. The third anniversary of my mom's death was last Friday. It's always a hard day, isn't it?\nClaudia, what a lovely post in remembrance of your parents! I enjoyed reading it and could really feel your love for them. Your mother must have made your father very happy with her cooking and baking!!! Thanks for telling me about the linky not working. I went and made a few changes, and it works now, so I just went ahead and added this post for you. Thanks for continually sharing again each week. I selected your post as one of the featured posts today and am glad that you are pleased!\nThe sour cream cake is delectable! And this post is bittersweet - sending you a big HUG today!\nTransistor radios, and Camp Granada! I remember it well. 
My Honey will love this cake, I cannot wait to make it for him, thanks!\nI love a coffee cake that has a little something \"extra\" in it like chocolate chips. This one looks wonderful! I really love how recipes\/food can bring back fond memories.\nHow sweet! Thanks for sharing such wonderful memories, plus the the recipe looks delicious! I love how food (and music!) can bring such wonderful memories back.\nI'd so love it if you could share this recipe at our Father's Day inspired blog hop today. Have you ever hopped before? It's a pretty easy way to get some link love.\nDear Claudia, I I had a transistor radio with the ear plug! Do you remember those? I felt so cool! I remember listening to that song as a little girl. My mom is gone many, many years. I know that you think about her everyday. My dad is also gone even longer, therefore I understand how everyday and everything you do is with a memory.\nI'm with your Dad - I'd take that cake over something topped with eons of too-sweet frosting any day! He liked the real stuff. Camp Grenada was a favorite of mine \"back in the day.\" And I smiled at the transistor radio reference. Even my Walkman is outdated! Hugs to you on this anniversary - remembrances and a life of love go a long way.\nFun! Love that photograph :) Great recipe too!\nThat is so sweet and touching! And what a lovely recipe too! Yum! I love food memories. lol!\nNice tribute to mom. A lot of people just think of food as sustenance. In actuality it can be quite comforting and stir up some cherished moments in time.\nThe cake has stirred up my cravings for something with chocolate chips-yum!\nThat cake sounds mighty tasty. I wished I had old photos of my parents. I am sure your mom is proud of the way you are passing down her recipes to your sons and sharing them with us.\nBeautiful post, Claudia! God Bless you and yours and may she rest in peace.\nA lovely tribute and a beautiful cake. 
My hubby doesn't like a bunch of frosting, either.\nSour Cream, chocolate chip, and cake all rolled into one?? Yes please!!! Love this one, your dad has good taste!\nCould you make this with different flavors of baking chips?\nThis sounds like a delicious coffee cake. Simple ingredients and easy to make.\nI loved cooking for my dad.\nYour post made me feel tearful. So sweet. How blessed you have been and how blessed you are to have such lovely memories of your mother. May she always be alive \"in your garden of beautiful memories\".","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Large Octavo. Gilt letters on beige cover with large gold chop on front with Brodart pictorial dust jacket in FINE (near new) condition. Plum endpapers. 191pp. Index. A beautiful full-color exhibition catalogue, including two pages of seals used.\n(12D3) No trades. Cash or Credit Card only.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbimct b/data_all_eng_slimpj/shuffled/split2/finalzzzbimct new file mode 100644 index 0000000000000000000000000000000000000000..9c1a0b2130305a5f1866b5d57e483747e45d3931 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbimct @@ -0,0 +1,5 @@ +{"text":"Data Stash is an award-winning clever little security tool that allows you to hide sensitive data files within other files. Select a large bitmap or database file (or any file you'd like to use) as a receptacle, then add the data files you'd like to hide, via an easy drag and drop mechanism. The receptacle file remains fully functional. Password protection is also provided, using Blowfish encryption. This is useful if you wish to keep out certain files from prying eyes, under the guise of a normal file. For example you can hide sensitive document files into a bitmap file, and retrieve them later. Opening the bitmap file would only show the image of the bitmap, and not the files that are stored in it. 
You can hide files in .exe, .com, .mpg files, and so on.\nInstantly remove protection or recover original passwords to encrypted Word 95-2013 documents. Accent WORD Password Recovery removes passwords to modify and passwords to VBA macros while retrieving the original passwords to open from encrypted docs.\nThis site does not contain a serial number, crack, or keygen for Data Stash. A serial number is the simplest type of crack: a serial number (CD key, product key, etc.) registers the program, as you may already know. A crack or keygen may contain harmful software. If you cannot find the exact version you want, just give it a try. For more information, read the .nfo\/.txt\/.diz file included in the zipped file.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The game is a story-driven adventure game. The player awakens with no memory in a hospital setting. A radio implanted in their head is telling them they're an Agent, and need to escape.\nAs the player gets further through the story, clues are revealed which help the player to understand what's really going on.\nAt the end, the player is presented with a decision. They have to make the right decision by picking up on the clues revealed during the story in order to win.\nI wanted to explore the idea of having a voice in your head, telling you what to do. Being your primary motivator.\nHow did you address the themes and keywords of the jam?\nThe game momentum is controlled by a Narrator voice that exists only inside the player's head. The player also needs to gain access to a safe to complete the game.\nStandard first person controls. WASD to move. 
Hold Left Shift to sprint (you will NEED to do this to complete the game).\nLeft click while looking directly at objects to perform a default interaction with them.\nRight click while looking directly at objects to get a context menu with all interaction options.\nRight click again to close interaction menus.\nPress the 'I' key to open your personal inventory.\nExtract to your chosen installation directory.\nDouble click AgentEscape.exe to run the game.\nOnce the cutscene completes, you can use the Settings menu to adjust your mouse controls (sensitivity, invert Y) if desired.\nThis is a serious game that explores the motivations of a paranoid and depressive schizophrenic. Auditory and visual hallucinations are common in these patients.\nOne form of treatment these kinds of patients sometimes receive is called electroconvulsive therapy, where psychiatrists use a probe to send highly targeted shocks through the brain.\nAlthough this treatment is effective, unfortunately, a common side effect of this treatment is memory loss. The opening cutscene is meant to hint at the player receiving this kind of treatment (which is pretty barbaric and torturous looking by all accounts). However, we didn't have time to really flesh it out and make it nice\/effective. Just be glad you have a cutscene at all.\nThis explains the memory loss.\nWhen the player gets back to their apartment there are signs of general uncleanliness and drug use. Lack of motivation and drug abuse are very common in schizophrenic patients. A further clue is the message from your caring mother, and the fact that you use the last four digits of your mother's phone number as your safe code suggests you actually care about this woman.\nThese are the clues the player is meant to pick up on to decide to take themselves back to hospital. 
They must talk with the nurse at the hospital, who explains their situation, in order to win the game.\nI hope that through playing this game the player gains a better understanding of the motivations and mental state of paranoid delusional schizophrenics, and will exhibit empathy and understanding. The player is meant to sense that the decisions the character is making are sensible (within their deluded view of reality).\nThere were several additional puzzles planned to flesh out the story and make the game more interactive and immersive.\nIn the opening scene the player is meant to be handcuffed to the bed, and use their IV line to pick the lock.\nAfter the player lures the nurse into the other room, they are meant to have to sabotage the nurse's chair with a blood sample, so that the nurse gets stained pants. This is so that the nurse will change pants and the player can steal her prox card. The player should need this prox card in order to use the elevator to escape.\nThe player is meant to find and examine their wallet in the nurses' station. Looking at their ID will reveal their apartment address (and name). The player was meant to have to do this in order to travel to their apartment (otherwise they should have no idea where it is, memory loss!).","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The digital revolution is increasingly a focus area for private equity firms, so our deep industry understanding is invaluable to clients. Through lasting relationships with our clients, together we find optimal, creative solutions to the legal and commercial issues arising throughout their investment life cycle.\nThe information economy is global. Technological aptitude is advantageous. Private equity firms recognise the importance of industry players pushing for pole positions in a digitalised industry and their growth potential. The need for industry specialist legal advisers is growing rapidly. 
Our reputation is built on deep technological expertise.\nOur sophisticated advice increases the value of the company our clients are investing in, and we stay with them until they want to sell it. We also advise on corporate reorganisation and restructuring, alongside regulatory reporting requirements, using fully integrated sector groups.\nWe advised Forenom, a leading Nordic corporate housing provider and a portfolio company of Nordic private equity investor CapMan, on Forenom's acquisition of Apartment HS, a Swedish serviced apartment provider. Value: over 9 million Euros. Hospitality sector. Led by Jan Bystr\u00f6m. Buy-side.\nOur Italian team has recently assisted Giulio Fiocchi S.p.A. (the holding company owned by the Fiocchi family) on the sale of its subsidiary, Fiocchi Munizioni S.p.A. (an Italian small caliber ammunition manufacturer) and its controlled entities to Charme III (a fund of the European private equity firm, Charme Capital Partners). It is one of the most significant transactions in the Italian private equity market, both in terms of size and importance of the target company, given its leading position in the market. As a consequence of the size of this deal, we have received some strong media coverage.\nOur German team advised Vitruvian Partners on its partnership with the German-Swiss doctari Group. In addition to acquiring the shares and setting up the holding structure for the investment, Bird & Bird advised Vitruvian on the partial financing of the purchase price. Likewise, we handled the required merger control clearance at the Federal Cartel Office. The new ownership structure and Vitruvian's expertise with high-growth, technology-enabled companies will enable doctari to further capitalize on the great market opportunity and expand its service offering going forward. 
In line with market trends, the founders continue to hold a significant stake in the company after the transaction.\nAssisting Semtech Corporation (a leading supplier of high performance analog and mixed-signal semiconductors) in connection with its equity stake in IDOSENS, an IoT platform that offers home automation solutions.\n\"Bird & Bird's international corporate offering is growing from strength to strength. In the past year we have launched some outstanding initiatives and welcomed 13 new partners to the team, an 18% increase from 2017.\nBird & Bird appoints financial services expert Pawel Bajno as a partner in its Corporate practice in Warsaw.\nDr. Hans Peter Leube, LL.M.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"1 (To the chief Musician, A Psalm of David.) The fool hath said in his heart, There is no God. They are corrupt, they have done abominable works, there is none that doeth good.\n- The fool hath said in his heart, There is no God. The sinner here described is an atheist, one that saith there is no Judge or Governor of the world, no Providence ruling over the affairs of men. He says this in his heart. He cannot satisfy himself that there is none, but wishes there were none, and pleases himself that it is possible there may be none; he is willing to think there is none. This sinner is a fool; he is simple and unwise, and this is evidence of it: he is wicked and profane, and this is the cause. The word of God is a discerner of these thoughts. No man will say, There is no God, till he is so hardened in sin, that it is become his interest that there should be none to call him to an account. The disease of sin has infected the whole race of mankind. They are all gone aside, there is none that doeth good, no, not one. Whatever good is in any of the children of men, or is done by them, it is not of themselves, it is God's work in them. 
They are gone aside from the right way of their duty, the way that leads to happiness, and are turned into the paths of the destroyer. Let us lament the corruption of our nature, and see what need we have of the grace of God: let us not marvel that we are told we must be born again. And we must not rest in any thing short of union with Christ, and a new creation to holiness by his Spirit. The psalmist endeavours to convince sinners of the evil and danger of their way, while they think themselves very wise, and good, and safe. Their wickedness is described. Those that care not for God's people, for God's poor, care not for God himself. People run into all manner of wickedness, because they do not call upon God for his grace. What good can be expected from those that live without prayer? But those that will not fear God, may be made to fear at the shaking of a leaf. All our knowledge of the depravity of human nature should endear to us salvation out of Zion. But in heaven alone shall the whole company of the redeemed rejoice fully, and for evermore. The world is bad; oh that the Messiah would come and change its character! There is universal corruption; oh for the times of reformation! The triumphs of Zion's King will be the joys of Zion's children. The second coming of Christ, finally to do away the dominion of sin and Satan, will be the completing of this salvation, which is the hope, and will be the joy of every Israelite indeed. With this assurance we should comfort ourselves and one another, under the sins of sinners and sufferings of saints.\nThe fact that there is \"none that doeth good, no, not one.\" proves that the only ones who do good and are saved are those who believe in Jesus. 
Cause since all of us are evil sinners undeserving of eternal life (Romans 3:23) then that means the only way to be saved is to have your sins forgiven and righteous imputed upon you by faith and the only way that could happen is by believing in Jesus.\nAny person\/sinner is changed from fool to fellowship with God, only by his grace. We don't find God, he is the potter, we are the clay. It is nonsense\/ignorance to believe, or suggest, he owes us anything, due to anything, from within us. We love him because he first loved us.1John4:19. Sovereign, Almighty, Savior, Love, is he alone.\nVerse 1: God views those who do not believe in him as fools. Such ones can become corrupt and have detestable actions.\n\"Oh that THE SALVATION OF ISRAEL were come out of Zion! when I AM brings back the captivity of HIS PEOPLE, JACOB shall rejoice, and ISRAEL shall be glad.\" As the heathen beat their plowshares into swords, and beat more fiercely the drums of war; The God of Israel, the Lord and Saviour Jesus Christ, is working all these things for the salvation of His People Israel, and for all grafted in by faith.\n\"I AM looked down from heaven upon the children of men, to see if there were any that did understand, seek God.\" Some may say that the Lord knows their hearts, when they are reproved for sin and bad behavior. Yet when it comes to the very root of their moral choices and character they play ignorant. God KNOWS that we understand the truth and His commandments: our duty is to truth in the inner man.\n\"The fool hath said in his heart . . No God! They are corrupt, they have done abominable works, there is none that does good.\" Notice that it doesn't say, The angry hostile hater . . but, The Fool! It is a moral behavior of willful stupidity. When the Lord called the scribes and Pharisees, Fools and blind; the word in the Greek literally means, Stupid, Moron; which is what men act out morally.\nv. 1 - We need to pray, pray really and sincerely pray for the 'fools'. 
Pray with REAL compassion - which compassion ONLY the Holy Ghost dwelling in us can give; we cannot 'conjure' it up ourselves.\nThis passage of Scripture isn't condemning the whole earth as if there wasn't anyone doing good or understanding or seeking God. It specifically identifies the non-compliant subjects as the fool, being the children of men. There is a difference between the children of men and the sons of God; see Genesis 6:1-2, 4. ALSO Psalms 82:6,7 I have said, Ye are gods and all of you are children of the Most High. But you shall die like men... In other words the sons of God now die just like men die. The only way to know the difference today is to observe who is eating up the people identified in verse 4 of Psalms chapter 14. In Genesis 3:15 the Lord God put enmity between the seed of the woman and the serpent. Therefore we have to observe the fruit or works to identify who is creating all the hell on earth and who is catching all this hell that is being created. Therein lies the truth. I speak strength and endurance unto all the children of the Most High, the sons of God, that they may endure to the end, not being caught in the trap of materialism. Blessings unto all you who strive for the good of all! Amen.\nWhat Do You Think of Psalms 14?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Look after pregnant Mommy and her sweet newborn baby girl! Take care of their health, home, food and style!\nAre you excited to have a little sister? Play Mommy's little helper and be the best nanny for a sweet newborn baby girl. Prepare for baby's arrival: go to the doctor and check Mommy's health, bake a yummy cake and throw a baby shower, decorate a nursery and treat Mommy with a relaxing facial spa.\nWhen the baby is born, get ready to change smelly diapers, make baby food, give a bubble bath, iron baby clothes and clean up the house. 
Don't forget to design matching outfits for Mommy and her newborn baby!\n\u00b7 Mommy is so hungry, make crazy snacks and feed her!\n\u00b7 Design the cutest baby room \u2013 paint and decorate it the way you like!\n\u00b7 Cook and make a delicious strawberry cake for a baby shower!\n\u00b7 Throw the best baby shower and open baby presents!\n\u00b7 Take Mommy to the doctor and check her health!\n\u00b7 Listen to Mommy's baby bump and hear baby's heartbeat!\n\u00b7 Paint on Mommy's baby bump: color a stork, a tree or draw your own picture!\n\u00b7 Play beauty salon and give Mommy a relaxing facial spa treatment!\n\u00b7 When the baby is here, help Mommy at home and iron tiny baby clothes!\n\u00b7 Prepare baby formula and fruit puree for hungry baby girl Emma!\n\u00b7 Dress up Mommy and create an amazing look!\n\u00b7 Dress up sweet baby girl Emma and design an adorable baby outfit!\n\u00b7 Babysit and learn how to change baby diapers!\n\u00b7 Give newborn baby girl a bubble bath and wash her hair!\n\u00b7 Help Mommy clean up the rooms and vacuum the carpets!\n\u00b7 Play Sweet Baby Girl Newborn 2 every day and collect cute trophies!\n\u00b7 Watch videos for kids to get more coins and unlock new game items!\nYou can play Iron Laundry, Baby Food, Baby Shower, Mommy Dress Up, Baby Dress Up, Room Cleanup, Painting and Coloring mini games for free.\nYou can purchase Mommy Facial Spa, Mommy Care, Mommy Food, Make a Cake, Baby Formula, Change Diapers and Baby Bath mini games for $1.99 each; Stork Coloring, Catch the Toys and Find a Diamond mini games for $0.99 each; or get the full game version with no ads for $4.99.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbkbdo b/data_all_eng_slimpj/shuffled/split2/finalzzzbkbdo new file mode 100644 index 0000000000000000000000000000000000000000..9ea4dfce0e5525b4cdfc715c88b0af50ee3d360d --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbkbdo @@ -0,0 +1,5 @@
+{"text":"CAF U-20 Championship hosts Senegal scored in the 86th minute to equalise in their final Group A match against Congo, and then in the third minute of stoppage time the Lions of Teranga scored again to win 4-3. The result booked Senegal a spot in the semi-finals of the competition along with a place at the FIFA U-20 World Cup New Zealand 2015.\nIn a dramatic seven-goal thriller in Dakar, Senegal's El Hadji Malick Niang opened the scoring early on, but six minutes later Moise Justalain Nkounkou equalised for Congo. Ibrahima Wadji gave Senegal the lead to head into half-time up 2-1, but Congo would score twice in a ten-minute span in the second period to take a 3-2 lead.\nSidy Sarr proved to be Senegal's hero, scoring in the 86th and then again seven minutes later to send the hosts' supporters at the Stade Leopold Sedar Senghor into a state of delirium as coach Marie Joseph Francois Koto's side won 4-3 and punched their ticket into the semi-finals and New Zealand 2015 in the process.\nSenegal will be joined in New Zealand by Nigeria, Ghana and Mali. The Flying Eagles finished in front of Senegal in Group A at the CAF U-20 Championship, while Ghana and Mali, who have both earned a spot in the semi-finals, will square off on Sunday to determine the top spot in Group B.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This vaccine protects against chickenpox, a disease caused by the varicella-zoster virus.\nChickenpox is caused by the varicella-zoster virus (VZV). It is very common and highly contagious, occurring more often in the winter and spring. Most of the time, the infection is mild and not life-threatening. However, there are thousands of cases each year in which people become seriously ill, requiring hospitalization, and some children do die from it.\nThe chickenpox vaccine works very well in preventing the disease. A small number of people who get the vaccine will still get chickenpox.
However, they usually have a milder case than those seen in persons who did not receive the vaccine.\nThe American Academy of Pediatrics (AAP) and the Centers for Disease Control and Prevention (CDC) recommend that children receive two doses of the traditional chickenpox vaccine.\nThe first dose should be given when the child is 12 - 15 months old.\nChildren should receive the second dose when they are 4 \u2013 6 years old. However, the second dose can be given before age 4, as long as 3 months have passed since the first dose.\nPeople 13 and older who have not received the vaccine and have not had chickenpox should get 2 doses 4 to 8 weeks apart.\nPeople 13 and older who have had a previous dose and have not had chickenpox should receive a second dose.\nOther reactions, such as low blood counts and brain involvement, are so rare that their link to the vaccine is questionable.\nChildren or adults who have a weakened immune system as a result of HIV\/AIDS, cancer, organ transplants, or other factors should not be vaccinated for chickenpox.\nChildren or adults who are allergic to the antibiotic neomycin or gelatin should not receive this vaccine.\nChildren or adults taking steroids for any condition should consult with their doctor about the proper timing of chickenpox vaccine.\nAnyone who has recently received a blood transfusion or other blood product (including gamma globulin) should consult with their doctor about the proper timing of the chickenpox vaccine.\nChildren receiving aspirin or other salicylates should not receive this vaccine because of the theoretical risk of Reye syndrome.\nChaves SS, Gargiullo P, Zhang JX, et al. Loss of vaccine-induced immunity to varicella over time. N Engl J Med. 2007;356:1121-1129.\nAmerican Academy of Pediatrics Committee on Infectious Diseases. Recommended immunization schedules for children and adolescents--United States, 2008. Pediatrics. 2008;121:219-220.\nAdvisory Committee on Immunization Practices.
Recommended adult immunization schedule: United States, October 2007-September 2008. Ann Intern Med. 2007;147:725-729.\nUS Centers for Disease Control and Prevention. Recommended Immunization Schedules for Persons Aged 0 Through 18 Years --- United States 2009. MMWR, January 2, 2009: 57(51&52);Q1-Q4.\nCoonrod DV, Jack BW, Boggess KA. The clinical content of preconception care: immunizations as part of preconception care. Am J Obstet Gynecol. 2008 Dec;199(6 Suppl 2):S290-5.\nReviewed By: David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc., and Neil K. Kaneshiro, MD, MHA, Clinical Assistant Professor of Pediatrics, University of Washington School of Medicine.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"We have grown over the years, expanding from our signature Blue Bunny\u00ae. Our crew now includes the iconic Bomb Pop\u00ae and Blue Ribbon Classics. Wells also manufactures licensed brands Sorbabes gourmet sorbet and Weight Watchers\u00ae frozen novelties. But through it all we've stuck to our roots. Our best-in-class novelties offer something for everyone, helping us share the love of ice cream while staying committed to quality.\nExplore our brands and get to know them better!\nMeet the one who started it all. Blue Bunny ice cream can be found across the United States. From classic flavors to our custom creations, the Blue Bunny lineup has deliciousness in store for everyone.\nFind these frozen desserts and novelties in a location near you: grocery stores, convenience stores, your favorite restaurant, on ice cream trucks, and at events. Check out our website to see the full flavor lineup, and use our store locator to find your nearest location!\nTurn everyday moments into special memories with Blue Ribbon Classics ice cream. With more than 50 beloved products, the Classic lineup features family favorites like Vanilla, Cookies 'n Cream, and Classic Sundae Cones. Make your next afternoon snack or birthday party more than memorable. 
Make it classic.\nLearn more about our flavors online, and use the store locator to find Blue Ribbon Classics in a convenience or grocery store near you. Your next tradition could be just around the corner.\nIt's time to make your taste buds your taste besties. Declare Your Freezedom with the original red white and blue Bomb Pop, or get a lick of other flavors like Warheads, watermelon, banana fudge and more. This is a rocket-shaped revolution. This is a colorful call to do you. This is Bomb Pop.\nLearn more on their website and grab one at your local grocery or convenience store.\nIn a world taken over by health crazes, kale obsessions and ice cream imitations, people are deprived of the things that give them real joy and satisfaction. That's why Chilly Cow is helping the people of earth get their souls back, one lower-calorie half-pint at a time. Chilly Cow is the first protein-packed light ice cream to use Ultra-Filtered Milk, making it ultra-delicious, ultra-creamy and ultra-soul-saving.\nWells proudly offers frozen fun for everyone, from families to restaurants. See our foodservice and vending websites for customer information!\nTrust your taste to Wells Foodservice for premium flavors, industry-leading food safety, and a little bit of fun.\nSee which products are available through your local distributor!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"If you've ever seen a film by the 48-year-old Anderson, you'll know that the component parts of one of his films are unmistakable.\nThe collective genius of the internet has once again worked its magic. A group of Reddit users have gone about collecting a series of photographs that are not from a Wes Anderson film, but might as well be.\nIf you've ever seen a film by the 48-year-old Anderson, you'll know that the component parts of one of his films are unmistakable, regardless of the project's theme and characters. There are the eye-popping colors and the strong, well-defined lines.
There are the eccentric architectural triumphs and eerie quiet. But most importantly, there are those shots \u2015 the ones that zoom in and out with an almost borderline obsessiveness in their quest for near-perfect, everything-just-so symmetry.\nIt is a beautiful, signature look \u2015 and apparently one that, every so often, people around the world have stumbled upon in real life too. Hence, \"Accidental Wes Anderson,\" a brilliant subdivision of Reddit where people are collecting reality's best attempts at recreating Anderson's art.\nAnd by the look of things, they've done quite a good job over their first couple months in business. We've said all we can. Just look at the photos.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Welcome to Paw'D! This website is still work-in-progress - but you can already play and explore the Kingdom of Elyssia in the paws of brave adventurous wolves.\nFrom the small mountain village of Saliko, over the frozen plains around Ashaya, to the perilous jungles of the ever-dark Midnight Mountain - there are heaps to discover, and many challenges to best. Curious what Paw'D is? Look at the feature description, or read on to get started!","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbkmea b/data_all_eng_slimpj/shuffled/split2/finalzzzbkmea new file mode 100644 index 0000000000000000000000000000000000000000..f960aac7d9fe3aebba5f061a3c73849484264184 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbkmea @@ -0,0 +1,5 @@ +{"text":"Manuel Rodriguez Killed in Apartment Shooting. Did a Security Lapse Contribute to the Death of a Young Houston Father?\n28-year-old Manuel Rodriguez was reportedly shot and killed in front of his family during a robbery outside of the Santa Monica Apartments in Houston Sunday evening, January 8, 2017.\nAccording to KPRC 2 News, Mr. Rodriguez \"was unloading groceries with his wife and three children around 8:30 p.m.
outside the Santa Monica Apartments in the 7000 block of Ashcroft Drive when a would-be robber walked up to his family.\" Mr. Rodriguez was fatally struck while attempting to protect his family in the subsequent shootout.\nDid negligent security contribute to the death of this young father? We represent individuals and families who have suffered a tragic loss or injury as a consequence of negligent property security. Read Our Legal Take to find out if the Rodriguez family may have a legal avenue for justice and claims for substantial compensation in Texas, or call now for a free consultation with our legal team: 888.842.1616.\nWhat security measures, such as bright lighting, security patrols and surveillance cameras, were in place to deter crime at the time of the shooting?\nHave there been prior incidents of violence on or near the property? If so, were any additional security precautions implemented by the apartment complex owner and management company to protect residents and guests?\nGenerally, property owners are required to protect all patrons legally on the premises from any foreseeable harm. Should the facts of this matter reveal that the owner or management of the apartment complex failed to provide adequate security to protect those on its premises, the family of Manuel Rodriguez may seek justice and elect to pursue legal claims for his wrongful death.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Enjoy your planning and revel even more with All 3 Promocode.club's great Budget Vet Care US promotion codes!\nChoose a Budget Vet Care US coupon you would like to use from Promocode.club. Press the 'use Budget Vet Care US code' box, which is located just below. Now your web browser should copy the All 3 Budget Vet Care US code for you, although it is best to make sure and copy it manually. The https:\/\/www.budgetvetcare.com\/ web page will open in a new window for you. Go to their Budget Vet Care US checkout section and find the Promotional Keycode box.
Paste your Budget Vet Care US coupon code there and click apply.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Star Tile Roof Repair will be available for your needs regarding Tile Roof Repair in Roanoke, AL. You expect the most sophisticated technologies in the industry, and our workforce of highly trained experts can provide that. We make certain that you get the very best service, the right price, and the finest materials. Call us today at 888-738-5526 to learn more.\nStar Tile Roof Repair focuses on customer satisfaction to make sure you'll be ultimately satisfied with our business. We want you to enjoy the tasks we accomplish, and we will strive to fulfill all your goals and visions. We understand all your concerns, and we're ready to help. We predict all of your questions and concerns, and we'll resolve them when you call us. We have the practical experience and knowledge to help you to come up with the very best choices regarding your venture.\nHere at Star Tile Roof Repair, we are aware that you must stay within your financial budget and reduce costs wherever you're able to. On top of that, you require the most impressive and finest quality of services regarding Tile Roof Repair in Roanoke, AL. We offer the very best quality even while saving you money. Any time you deal with us, you will get the advantages of our practical experience and top quality products to make certain that the project can last while saving time and resources. For instance, we take care to stay away from pricey complications, work quickly to save working hours, and be certain that you are given the most effective bargains on products and labor. Connect with Star Tile Roof Repair when you want the most impressive products and services at a minimal price. Contact 888-738-5526 to talk with our customer service representatives, right now.\nTile Roof Repair is available in Roanoke, AL.\nYou have to be well informed when it comes to Tile Roof Repair in Roanoke, AL.
We will not inspire you to come up with unwise decisions, because we understand just what we are doing, and we make sure you understand what to expect from the task. That's the reason why we make every attempt to make sure that you comprehend the process and are not confronted with any unexpected situations. Get started by contacting 888-738-5526 to talk about your project. We are going to review your concerns as soon as you contact us and get you set up with a scheduled visit. We consistently get there at the scheduled time, all set to work closely with you.\nThere are many reasons to choose Star Tile Roof Repair regarding Tile Roof Repair in Roanoke, AL. Our company is the first choice when you need the most efficient cash saving methods, the best quality products, and the highest level of customer support. We'll be available to assist you with the greatest experience and skills available. Call 888-738-5526 to reach Star Tile Roof Repair and discuss all your expectations about Tile Roof Repair in Roanoke.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"MISSISSAUGA, ON, April 9, 2019 \/PRNewswire\/ - The Green Organic Dutchman Holdings Ltd. (the \"Company\" or \"TGOD\") (TSX:TGOD, US:TGODF) is pleased to announce that its website, TGOD.ca, has been recognized with two prestigious global awards.\nTGOD was presented two Horizon Interactive Awards for the TGOD.ca website in the categories of Best Responsive \/ Mobile Website (Gold) and Best E-commerce Website (Bronze).\n\"We are thrilled to receive global recognition for our new site,\" said Brian Athaide, CEO of TGOD. \"Having launched in only December 2018, the fact that we are already receiving awards is remarkable.
This recognition showcases the importance of not only building a leading digital platform for the cannabis industry, but a platform that competes across all industries.\"\nSince 2002, the Horizon Interactive Awards have recognized the best websites, videos, online advertising, print media and mobile applications. The competition receives thousands of entries from around the world which are then judged by a panel of industry professionals specializing in advertising, marketing and design to determine which works should be celebrated. Entries are judged based on their design, creativity, technical approach, clarity of message and overall effectiveness.\n\"These global recognitions provide validation that we have built a scalable best-in-class website that works for patients, organic enthusiasts and investors,\" said Andrew Pollock, VP of Marketing.\nThe new TGOD.ca focuses on driving awareness of TGOD's position as Canada's only 100% supplier of certified organic cannabis while facilitating direct medical product purchases with ease for patients providing medical documentation confirming the diagnosis by a health care practitioner. In March, the first group of patients, the Growers' Circle, received access to purchase TGOD's first certified organic cannabis strain through the website and early feedback has been extremely positive. TGOD will be releasing more of its premium organic product in the coming months and the Company will continue to add enhancements and improvements to its website.\nThe Green Organic Dutchman Holdings Ltd. (TSX:TGOD) is a publicly traded, premium global organic cannabis company, with operations focused on medical cannabis markets in Canada, Europe, the Caribbean and Latin America, as well as the Canadian adult-use market. The Company grows high quality, organic cannabis with sustainable, all-natural principles. TGOD's products are laboratory tested to ensure patients have access to a standardized, safe and consistent product. 
TGOD has a planned capacity of 219,000 kgs and is building 1,643,600 sq. ft. of cultivation and processing facilities across Ontario, Quebec, Jamaica and Denmark.\nFor more information on The Green Organic Dutchman Holdings Ltd., please visit www.tgod.ca.\nThis news release includes statements containing certain \"forward-looking information\" within the meaning of applicable securities law (\"forward-looking statements\"). Forward-looking statements in this release include, but are not limited to, statements about further release of product by the Company, statements about enhancements or improvements to the Company's website, statements about global growth by the Company, statements about the offering of any particular products by the Company in any jurisdiction and statements regarding the future performance of the Company. Forward-looking statements are frequently characterized by words such as \"plan\", \"continue\", \"expect\", \"project\", \"intend\", \"believe\", \"anticipate\", \"estimate\", \"may\", \"will\", \"potential\", \"proposed\" and other similar words, or statements that certain events or conditions \"may\" or \"will\" occur. These statements are only predictions. Various assumptions were used in drawing the conclusions or making the projections contained in the forward-looking statements throughout this news release. Forward-looking statements are based on the opinions and estimates of management at the date the statements are made, and are subject to a variety of risks and uncertainties and other factors that could cause actual events or results to differ materially from those projected in the forward-looking statements.
The Company is under no obligation, and expressly disclaims any intention or obligation, to update or revise any forward-looking statements, whether as a result of new information, future events or otherwise, except as expressly required by applicable law.\nSOURCE The Green Organic Dutchman Holdings Ltd.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Your privacy is important to us. We are bound by our professional code of conducts and code of ethics and are dedicated to upholding these.\nWe have a professional responsibility to maintain records of each appointment we have with you. This information is stored securely and for a time period required by Australian legislation. After this time, all records are disposed of (shredded or deleted).\nWe will not disclose any information you provide to us without your consent. We will provide feedback to your GP or specialist on your progress and at the completion of our services.\nFor occupational therapy home visits, we will provide your information to home modification service providers and equipment suppliers in order for them to complete their work or make deliveries where necessary. We will discuss this with you during your home visit.\nAll information provided on this website is generic in nature. For individualised information, please consult with your doctor or specialist or dietitian.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzblhjh b/data_all_eng_slimpj/shuffled/split2/finalzzzblhjh new file mode 100644 index 0000000000000000000000000000000000000000..a1771733a4e4f4423f0f2d981bd72e9f01024d49 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzblhjh @@ -0,0 +1,5 @@ +{"text":"Community languages schools help students learn and use their community language. They are also open to any student who wants to learn a new language.\nThis information is current as at 23\/04\/2019 02:22pm, AEST. 
For the most up-to-date information, go to https:\/\/education.nsw.gov.au\/our-priorities\/a-to-z-priorities.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Considering a move to a fantastic North Raleigh Neighborhood like Stonebridge?\nNorth Raleigh is a hugely popular place to live in the Greater Triangle Area. There are a lot of reasons, but one of them is the opportunity to live in a neighborhood like Stonebridge (see map here). Few neighborhoods have such a wonderful combination of rolling terrain, large forested lots, beautiful amenities, and spacious homes within minutes of all of the awesome North Raleigh lifestyle features. More specifically, the Stonebridge neighborhood is located just north of Interstate 540 between Six Forks Road and Creedmoor Road. There are 520 homes situated on 663 acres of mostly wooded land located just north of the Raleigh city limits in the rolling hills of northern Wake County.\nNeighborhood amenities include two pools with a clubhouse, playground, six tennis courts, jogging trails and a soccer field and a wonderful lake for fishing. Stonebridge is a very active community that has opportunities for everyone - from kids to seniors, and couples to groups.\nThere are many community organizations in Stonebridge. The Swim Team competes with other local community and club teams.\nThe Junior Tennis team gives kids the opportunity to learn how to play tennis and compete with other tennis teams at various skill levels. The adult tennis programs for ages 18 years and older includes the Women's Team Tennis, World Team Tennis and Wednesday Night Open Tennis programs. Visit the tennis programs page for more details.\nSome of the other organizations are the toddler play group, the book club and the wine lovers, to name a few.\n\"We love the Stonebridge Forums -- we all share referrals for repair people and look out for children's safety and suggestions on successful landscaping, gardening and pest control. 
There is lots of free, well-informed advice on all kinds of topics. We have very intelligent and friendly neighbors. There is also the Stonebridge Trader -- on-line swap shop where we sell and give things away to our neighbors.\"\nYou have to catch the \"luminaries at Christmastime\" and experience \"driving thru the neighborhood with car lights off on Christmas Eve.\" Also, we have an \"active Home Owners Board that helps keep property values up by addressing unsightly problems.\"\nIt is easy to understand why people are passionate about their neighborhood! Stonebridge home styles are mostly traditional and transitional in style. They range in size from a little over 2,000 up to 5,000 square feet and are generally priced in the high $300,000's to $700,000's range. Similar subdivisions in the area include Stone Creek to the East, The Gates at Ethan's Glen to the North and Heritage Point to the South.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Please enjoy a wonderful panoramic view of Huis Ten Bosch on the ferris wheel!\nThe white ferris wheel is the symbol of happiness.\nYou can enjoy the perfect combination of the ferris wheel and Europe Street during the daytime, and appreciate the beautiful night scene of Huis Ten Bosch.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"A contemporary remake of the classic digital watch. This version updates a heritage look with full functionality, including a calendar, alarm and stopwatch. The comfortable silicone strap adds to the utilitarian, street-ready style.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"00:06 eythian it's the right place to ask, I just have no answer for your question.\n00:08 dcook oa10712: No idea. We just use Code39, and that works great.\n00:35 huginn` eythian: The current temperature in Wellington, New Zealand is 21.0\u00b0C (12:00 PM NZST on April 07, 2015). Conditions: Mostly Cloudy. Humidity: 73%. Dew Point: 16.0\u00b0C.
Pressure: 30.06 in 1018 hPa (Falling).\n00:35 eythian dew point of 16\u00b0 is weird.\n01:03 huginn` dcook: The current temperature in Sydney, New South Wales is 22.0\u00b0C (10:30 AM AEST on April 07, 2015). Conditions: Scattered Clouds. Humidity: 57%. Dew Point: 13.0\u00b0C. Pressure: 29.74 in 1007 hPa (Steady).\n01:04 dcook Ahh, wait. I think I figured it out... sort of.\n01:05 dcook A \"Main entry\" search returns the result... but not \"Anywhere\", \"Heading match\", or \"Main entry ($a only)\".\n01:12 dcook Aha. Indexing issue..\n02:49 eythian rangi: what's the release date (roughly) for 3.20?\n02:51 eythian OK good. I'm going to push for retiring squeeze and precise for that release.\n02:51 huginn` dcook: The operation succeeded.\n02:51 eythian and jessie release is April 25.\n02:52 dcook What's the plan for Zebra and Jessie?\n02:52 eythian I don't have one.\n02:52 eythian Short of packaging a new version myself, it's a tricky one.\n02:53 rangi post freeze, we can agitate to get the new version in, its just bugfix eh?\n02:53 rangi or did they load in new features as well?\n02:54 dcook Doesn't Ubuntu drop a new version soon as well?\n02:57 dcook Once Debian have 2.0.60, would we contact the Zebra maint for Ubuntu then?\n03:05 dcook I think eythian already has an open bug report?\n04:07 dcook Hmm, it went for blue skies to storm to blue skies again here..\n04:15 dcook Hmm getting that problem that cait pointed out the other day..\n04:16 eythian dcook: cait points out too many problems for that to be a useful specifier.\n04:17 dcook It was the one about having a problem opening \"updatedatabase_2015-04-07T14:16:45.log\"\n04:21 dcook Hmm... 
is Apache run as root on Debian?\n04:24 eythian at least, I think that's how it works.\n04:24 dcook How do we subvert it though?\n04:24 dcook Actually, something I've been meaning to try..\n04:25 dcook But I would think that would cause problems with a package install..\n04:26 eythian they either shouldn't be editable, or (better) they should save somewhere so that they only apply to that instance.\n04:27 wizzyrea pretty sure I wasn't involved at the time.\n04:28 dcook How do we change the Apache user with the packages?\n04:28 dcook I'm so intrigued now..\n04:34 dcook Or when I'm breathing..\n04:38 eythian and that's why it was dying.\n04:40 eythian I don't know what will happen with the packages.\n04:44 eythian I don't think it's actually in place yet is it?\n04:46 wizzyrea that moment when you think to yourself \"gosh, it's been a long time since I haven't been able to make THAT work\" and then you realise zebra isn't running.\n04:47 eythian possibly since bug 13068?\n04:47 eythian it's why I have on my ES list to give a sensible response if you can't connect to the ES server.\n04:47 eythian rather than \"no results\"\n05:38 dcook I admit I've been delaying upgrading Vbox as I didn't want to break anything..\n05:38 dcook Not that I use it for anything too important any way though..\n05:39 eythian I'm wondering if the module has got itself into a weird state.\n05:39 eythian I might try rebooting.\n05:40 dcook Alas, I don't know anything about that one.\n05:49 eythian ah, that seemed to fix it.\n05:50 eythian I probably could have fixed that without a reboot actually, it just didn't give me useful error messages to diagnose the problem.\n05:50 eythian I think a kernel update had removed stuff so that new vms couldn't start.\n05:52 dcook Yeah, sometimes I debate the merits of restarting vs finding the root problem..\n06:05 huginn` eythian: The current temperature in Wellington, New Zealand is 19.0\u00b0C (5:30 PM NZST on April 07, 2015). Conditions: Light Rain. 
Humidity: 94%. Dew Point: 18.0\u00b0C. Pressure: 30.09 in 1019 hPa (Steady).\n06:06 huginn` magnuse: The current temperature in Bodo Vi, Norway is 6.0\u00b0C (8:00 AM CEST on April 07, 2015). Conditions: Light Rain Showers. Humidity: 92%. Dew Point: 5.0\u00b0C. Pressure: 29.70 in 1006 hPa (Rising).\n06:06 huginn` magnuse: The current temperature in Realtor, CABRIES, France is 5.1\u00b0C (7:58 AM CEST on April 07, 2015). Conditions: Clear. Humidity: 88%. Dew Point: 3.0\u00b0C. Windchill: 5.0\u00b0C. Pressure: 30.33 in 1027 hPa (Rising).\n06:06 eythian I think we still have your heat here.\n06:07 huginn` dcook: The current temperature in Bardwell Park, Sydney, New South Wales is 19.2\u00b0C (4:07 PM AEST on April 07, 2015). Conditions: Mostly Cloudy. Humidity: 46%. Dew Point: 7.0\u00b0C. Pressure: 29.77 in 1008 hPa (Steady).\n06:07 eythian pfft, you win due to rounding!\n06:10 eythian at least you don't have 94% humidity.\n06:10 eythian I blame daylight saving going away.\n06:10 eythian that's clearly messed with the weather.\n06:10 dcook People always say it's so humid here, but I think they might be wrong..\n06:11 dcook I was all \"It's 6pm! Where did the sun go?!\"\n06:18 eythian OK, now I have a jessie VM for testing koha on. I think it's time to go brave the elements and actually do the testing tomorrow.\n06:18 dcook magnuse: uh oh!\n06:33 Viktor Kia ora Magnuse!\n07:28 nlegrand Ooh! Aah! #koha!\n07:34 kivilahtio does anybody know what is the status regarding this http:\/\/bywatersolutions.com\/20[\u2026]fund-volunteers\/?\n07:35 kivilahtio have the sub-committee members been nominated already?\n07:39 kivilahtio wahanui: my friend!\n08:59 wahanui hmmm... magnuse is a Norwegian giant.\n09:04 andreashm is there a way to automatically generate invoices for lost books in Koha? anyone know?\n09:13 andreashm sophie_m: doesn't that just generate a fee? not an invoice? 
I need to look more closely at that script.\n09:23 andreashm so the only way to create invoices in Koha for late\/lost books is to do it manually?\n09:59 andreashm kivilahtio: at the moment, in our old prop. system, we run a report on books that have recently been marked as lost (long overdue) and then invoices are automatically generated. so we can then send them via email or snail mail. there are to many to handle manually.\n10:02 andreashm cait: yeah, an sql report might be the answer. and then we could probably fix the invoice generating and sendout in our Viola system. but other libraries in Sweden have asked about this feature too, and I was hoping something was built into Koha.\n10:03 andreashm cait: I'll do just that - search bugzilla atleast.\n10:17 cait kivilahtio: hm, not sure, does it cover the lost item process? (what happens after the overdues) yet?\n10:20 kivilahtio cait: we have a cronjobs for setting items to lost status?\n10:27 cait meh - is it possible that for calculating the pickup date we don't check the holidays calendar?\n10:39 huginn` Bug http:\/\/bugs.koha-community.org[\u2026]_bug.cgi?id=12353 enhancement, P5 - low, ---, olli-antti.kivilahti, In Discussion , Reserves last pickup date needs to respect holidays, and everybody need to know the last pickup date for reserves, even notices.\n12:49 octo19 Hello. How do I force an update of the Koha library via CLI? I want to make a cron job that updates every minute.\n12:51 tcohen you mean have your records indexed?\n12:53 tcohen how did you get your Koha instlaled?\n12:53 tcohen what instructions did you follow?\n12:54 octo19 I inherited our current system. I have no idea how it was installed. 
I can tell you it is version 3.12 and running in openSUSE 12.3.\n12:55 octo19 I am trying to update koha and openSUSE but the readout keeps telling me everything is outdated so it can't update.\n12:57 tcohen can't you install ubuntu?\n12:57 octo19 is that easy?\n12:58 octo19 Koha recommends Debian, but I know more about Ubuntu. Is one easier than the other?\n12:59 octo19 I want to have a system with little-to-no user involvement.\n13:00 nlegrand octo19: I'm a bit puzzled, you're looking to update the software or the search engine indexation?\n13:01 octo19 nlegrand i need to both, but my initial question was to update the index.\n13:02 octo19 tcohen ah so adding the debian repo to ubuntu will make everything run smoothly?\n13:02 nlegrand octo19: ok :) did you check the crontab of the unix account dedicated to koha ?\n13:03 octo19 akafred less of a hassle to run or install?\n13:06 octo19 what version of apache do i need for koha?\n13:06 octo19 or does it matter?\n13:09 octo19 tcohen thanks for the tips!\n14:58 cait hmm did you edit the item?\n16:31 misilot With the patron import, if I want to blank out passwords can I sent NULL in the csv import file?\n16:47 misilot dateenrolled should get auto populated on import?\n18:26 barton cait: I wonder if you could take another look at http:\/\/bugs.koha-community.org[\u2026]w_bug.cgi?id=6499 ?\n18:44 cait not sure when i will get to it barton - maybe reset to needs signoff with a note for now?\n18:44 tcohen @later tell rangi would you join me on a talk with randal schwartz?\n18:44 huginn` tcohen: The operation succeeded.\n18:53 barton cait: will do, thanks.\n19:21 barton cait: I updated bug 6499, including a new test procedure and some notes about some 'gotchas' that I've run into along the way.\n20:10 huginn` TGoat: The operation succeeded.\n20:12 huginn` TGoat: The operation succeeded.\n21:05 cait what happened to the manual?\n21:12 barton if I want to report a bug regarding koha-gitify, how do I categorize that? 
Installer? Tools?\n21:16 wizzyrea barton: do it on github.\n21:16 wizzyrea no one will see it in our bz.\n21:16 wizzyrea koha gitify is a separate project to koha. Just like kohadevbox.\n21:38 wizzyrea yep, feel free to submit a patch too, if you like.\n21:39 wizzyrea please tell me you're not trying to use gitify in production.\n21:40 barton no, but it's still pain to have all of the zebra indexes break in test environments.\n21:40 wizzyrea good because that would be nuts.\n21:41 barton wizzyrea: didn't you know that all of the bywater folk were a bit touched?","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbmvps b/data_all_eng_slimpj/shuffled/split2/finalzzzbmvps new file mode 100644 index 0000000000000000000000000000000000000000..c9f978559fbb96868d19b988f266ad578f7b38bf --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbmvps @@ -0,0 +1,5 @@ +{"text":"Flat Education - 2796 icons.\niPhone Medical - 2552 icons.\nWindows 8 Web Design - 2698 icons.\nMaterial accounting - 2705 icons.\nIcon Fonts Graphics - 2689 icons.\nline icons: 12000 free outline icons - 2780 icons.\nReal vista computer gadgets - 2544 icons.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I would like a new Pyrex bowl. I ruined this one while attempting to make peanut brittle this week.\nJust thought I'd share with you my letter to Santa after a horrible peanut brittle making accident. Okay, so it wasn't horrible, but I did break my bowl. I was making this delicious microwave peanut brittle recipe the other day. I did lots of things wrong.\nThe sound of the pyrex bowl popping and breaking was pretty cool. Next time I'll follow the directions and make the peanut brittle right and never pour water over a super hot bowl.\nAnyway...I still really love my recipe for peanut brittle. It is super, duper easy and really yummy. I think I'll buy some more corn syrup and make it the right way.\nThat's what I'm making this week. 
How about you?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Haven Family Holidays Devon. Book Haven Devon Cliffs Holiday Park in Devon. Best Haven Deals Online.\nVisit Devon where you will find activities for all ages. With its spectacular coastline boasting award winning beaches and Dartmoor National Park in the region you can see why this area is so popular with holidaymakers.\nDartmoor National Park is a fantastic place to explore with vast cycle trails and sprawling country walks. Wherever you find yourself, you'll never be far from an abundance of wildlife and 33,000 miles of hedgerow. Nature lovers will be in their element here.\nAfter a holiday in Devon you are sure to return home with fantastic family memories in this idyllic setting.\nBabbacombe Model Village is one of the most popular and unique family attractions in Devon. You will have the opportunity to see over 400 scale models, all handmade by Babbacombe's own craftsmen.\nThis place also has over 4 acres of award winning miniature landscaped gardens. With both indoor and outdoor displays, a licensed cafe and gift shop, it's a fun day out whatever the weather.\nAt dusk the model city is illuminated by thousands of tiny lights and sets a breathtaking atmosphere. A sight not to be missed!\nCrealy Amusement Park in Devon is an unforgettable family experience set in 100 acres with huge indoor and outdoor play areas.\nThere is an Adventure Zone with 6 slides, aerial walkways, climbing nets, ball pools and loads more with two floors to explore.\nThe outdoor amusements include a log flume, roller coasters, race carts, bumper boats and a traditional Victorian Carousel. For animal lovers there is even an animal handling section where you can visit the small animals.\nCertainly, with the variety of cafes and eateries on offer it will be a fun day out in Devon for all the family.\nThe recently improved Devon Cliffs Holiday Park has been a favourite with Haven customers for many years. 
","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Former cabinet secretary Lord Gus O'Donnell has said he expects a public inquiry to take place into the UK's decision to leave the European Union \u2013 and for constitutional issues that have arisen since to be properly probed.\nO'Donnell, who was head of the civil service under prime ministers Tony Blair, Gordon Brown and David Cameron, also said that current cab sec Sir Mark Sedwill was facing \"mission impossible\" compared with the challenges on his watch.\nIn an interview for BBC Radio 4's Westminster Hour politics programme, O'Donnell said the fallout from Cameron's decision to offer \u2013 and then call \u2013 a referendum on EU membership was such a \"big issue\" that he expected an inquiry to take place. Calls for such an inquiry have already come from former civil service head Lord Bob Kerslake and senior Labour Party figures.\nO'Donnell also said the recent months of parliamentary wrangling over prime minister Theresa May's withdrawal agreement had exposed constitutional issues that needed to be addressed.\n\"I think we have seen a change in the balance between parliament and the executive,\" he said. \"There are many people [who] will think with our system, with first past the post, that if you've got a clear majority you can pretty much do what you like and that the checks and balances that are provided by parliament are relatively limited.\n\"I think that there's something to be said for rebalancing that somewhat. 
I think there's also a question that we need to look at over the role of the speaker.\"\nO'Donnell did not criticise speaker John Bercow, but observed that in a constitutional system governed by precedent, the speaker had \"changed a lot of those precedents\".\nThe former cabinet secretary appeared to suggest that looking at the relationship between parliament, the executive and the speaker was a more pressing issue than the anticipated inquiry into Brexit.\nO'Donnell said governments generally only agreed to parliamentary demands for public inquiries after the leaders responsible for the key decisions to be probed had moved on.\nHe said a public inquiry should probe how referendum questions should be phrased in future, and some of the key claims from the 2016 vote.\n\"I'm certain they'll go through the campaign picking up those areas where there were things that were highly misleading,\" he said.\n\"We all know the \u00a3350m on the bus, and we all know that [the] emergency budget saying that there would be a big recession if there were a vote one way or another was also misleading.\"\nElsewhere in the interview, O'Donnell said he believed that the \"true voice\" of Northern Ireland was not being heard because of the power vacuum at Stormont.\nHe said that while the Democratic Unionist Party had a \"legitimate role\" as partners in Theresa May's minority government, they were not the only important group in Northern Ireland.\n\"It's a power-sharing government and we are only hearing from one side,\" he said.\nAsked how the challenges faced by the current cabinet secretary compared to those he had dealt with, O'Donnell said Sedwill's were much bigger.\n\"I think he's got mission impossible at the minute,\" O'Donnell said.\nThere should certainly be an inquisition if Brexit is not delivered.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"1. Boil onions, tomatoes, green chillies, ginger and garlic and grind them into a paste.\n3. Then add the ground paste and stir
fry till it starts leaving oil.\n4. Then add chilli garlic sauce, lemon juice, salt and black pepper powder.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbntpw b/data_all_eng_slimpj/shuffled/split2/finalzzzbntpw new file mode 100644 index 0000000000000000000000000000000000000000..bc973d739664547225b7848c18263a81dbfd8e93 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbntpw @@ -0,0 +1,5 @@ +{"text":"A few nights back I was looking for a way to explain to my dad how to file a bug, and this project started from there. mether and I had the initial discussion in the middle of the night about the basic design and workflow.\nTo get automated information, I wrote some code based on this wiki page. If your package requires any other information which can be collected automatically, please drop a mail to me.\nI started writing a console-based client for the same. For now we assume that the user knows against which package he wants to file the bug.\nIn the next screen, the user needs to enter a username and password, or the client will pick them up from a config file.\nNext it will open up your favorite editor based on the $EDITOR environment variable, or it will fall back to vi.\nFinally it will show the report to the user, who may choose to submit it, go back, or cancel the whole operation.\nWhile submitting the bug it will automatically upload the files to be attached. After submitting the report the user will get the URL of the bug just filed.\nYou can see the process in this screencast.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Our love for the stories of the past is what our tables show. At Square Nail Table Co. our refurbished and repurposed tables are made from reclaimed wood from another time and weathered to perfection. 
We design, build, and hand-finish the tables, maintaining original character\u2014nail holes, weathering, and saw blade marks.\nWe take pride in transforming these pieces of history that the boards hold into new heirlooms for your home. We, also, welcome designing a custom built table with you. Square Nail Table Co. is exclusively situated inside Barn & Barrel, 115 W Main St, Florence, CO. Come see our latest inventory.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"What does the destiny knot do in Pokemon Diamond?\n10\/03\/2017 \u00b7 Destiny Knots are available at the Department store, on the first \"Niche items\" floor (I think floor 3 or 4).... A long, bright red string to be held by a Pok\u00e9mon. If the holder becomes infatuated, the foe does too.\nWhat happens if you give a Destiny Knot to a female pokemon?\nKirby Fighters - Destiny Knot is a party\/fighter game developed by Kirbystar247, planned for release mid-2016, and available on the 3DS, Wii and Wii U Platforms.\nIn the Abandoned Ship\/Sea Mauville, there's a couple. It's a random chance, but if you beat them, they'll give you a destiny knot. You also have a random chance of getting one from a fan after you beat a Master Rank Contest.\nEventually you'll end up with 5x31 IVs, at this point make sure to give one of the parents the Destiny Knot as this will start allowing the parents to pass down 5 stats, rather than 3 without the Destiny Knot. I'd suggest only using the Destiny Knot when you have 4x31, 5x31 or 6x31 on one of the parents.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"When running large parallel jobs, there is often a trade-off between runtime (sped up by requesting many cores) and long waits in the queue. This is especially a problem if the walltime is set to a high estimate because the actual time required is uncertain. 
A tool that queries the Hansen queue and returns information on the number of jobs, their sizes (number of cores), and their remaining time, as well as the number of available free cores, would enable users to optimize their resource requests and better anticipate time-to-results.\nSince this task would only require a query to the scheduler and not a job run, a dashboard widget that regularly updates usage or has a refresh button (to minimize unnecessary traffic) would be even better.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The Lung Flute has received national attention, appearing in Popular Science magazine's list of the top 100 innovations of 2009.\nThe success of Buffalo-based Medical Acoustics, a medical device company expecting to turn a profit for the first time in 2011, exemplifies how partnerships with universities like UB can help businesses bring new products to the market.\nMedical Acoustics distributes the Lung Flute, a hand-held device that employs sound waves to break up mucus in the lungs of patients suffering from respiratory illnesses, including emphysema and chronic bronchitis. Patients use the apparatus\u2014a hollow tube with a reed inside\u2014by simply exhaling into it.\nThe product also landed on national television in 2010, starring in a segment of \"The Doctors\" titled \"How to Get it Out.\"\nMedical Acoustics, which has partnered with UB for years on research and development, began shipping orders to the U.S. hospital market in November. The firm has also contracted with distributors to sell the Lung Flute in the European Union and Asia.\nThe company, located in the Innovation Center on the Buffalo Niagara Medical Campus, has 12 employees, nine of whom are full-time. 
Manufacturing is also local, with Polymer Conversions in Orchard Park producing the Lung Flute.\nOver the years, UB's support has been instrumental in helping Medical Acoustics commercialize the Lung Flute, said company CEO Frank Codella. Since its founding in 2002, the firm has taken advantage of university resources ranging from financial support to help with clinical trials.\nIn 2003, the UB Office of Science, Technology Transfer and Economic Outreach (STOR) helped Medical Acoustics draft its first business plan. Market research conducted by STOR identified the Lung Flute's key market: an underserved population of 12 million patients with chronic obstructive pulmonary disease (COPD).\nSanjay Sethi, MD, professor of medicine, identified a secondary market for the Lung Flute when he informed Codella and his associates that the device could serve as a noninvasive diagnostic tool, enabling doctors to secure sputum samples for testing. Sputum contains germs and biological markers that health practitioners can use to spot diseases, including tuberculosis and pneumonia. Other detection procedures\u2014such as bronchoscopies, in which doctors thread a thin tube through a patient's airways\u2014are more invasive. Medical Acoustics began shipping the Lung Flute as a diagnostic instrument in 2007.\nSethi led three clinical trials that demonstrated the safety and efficacy of the Lung Flute. The studies played a critical role in the U.S. Food and Drug Administration's decision to clear the Lung Flute for diagnostic and therapeutic use. Sethi, chief of UB's Division of Pulmonary, Critical Care and Sleep Medicine, is now conducting a six-month trial with 80 patients to examine the Lung Flute's performance over time. 
\"It's good to see something such as this come to fruition and reach patients as a treatment option,\" notes Sethi.\nThe UB Center for Advanced Biomedical and Bioengineering Technology (UB CAT), funded by the New York State Foundation for Science, Technology and Innovation (NYSTAR), assisted Medical Acoustics with federal grant submissions in 2007 and provided the company with $50,000 in the 2009-10 fiscal year to ready the Lung Flute for its commercial launch in the hospital market. This fiscal year, the UB CAT supported the company with another $60,000 to fund Sethi's research.\nSethi said the university's partnership with Medical Acoustics demonstrates how valuable a research institution can be to industry. Both he and Codella take particular pride in the fact that their work is contributing to the growth of Western New York's innovation economy.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbpnea b/data_all_eng_slimpj/shuffled/split2/finalzzzbpnea new file mode 100644 index 0000000000000000000000000000000000000000..2cf021fce306a5651c208bf0e555093e8d04a690 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbpnea @@ -0,0 +1,5 @@ +{"text":"We all have old photographs that we treasure, whether they are old family photographs passed on from previous generations, or historic photographs of the homes and towns we live in.\nBecause photos are printed on paper things such as handling, light, moisture, dust, and scratches often affect them. 
Regrettably, these things can ruin our photos and prevent them from lasting through the generations.\nMost of the photo papers and inks used up until recently have not been archival quality, and as a result, many photo prints show signs of fading and colour shifts after only a few years.\nPoor storage practices such as keeping photos in attics or basements or allowing photos to be exposed to sunlight, have also contributed to the poor condition of most photo collections.\nPeople don't realize there is a problem until they take a look at their photo collection after many years of storage. Our precious photo collections are in fact fading away and in time many will be beyond recovery. There is also the possibility of their demise in the case of fire or flood.\nIt is a good idea to restore your special photographs or even entire photo collection in order to stop the unavoidable destruction that will take place no matter how carefully the photos are stored. Once restored, the photos can be copied to CD or DVD for long-term storage and safekeeping. Once you have digital copies you needn't worry about further damage because digital images do not change at all. You can use these digital images to make a set of prints on archival paper, which will last over 100 years in an album with no signs of degradation or twenty years exposed to daylight for framed prints. Additional prints can be made at any time in the future should disaster strike. You can also share the images with other members of your family and friends.\nDuring the restoration process it is also possible to improve the quality of the original photograph \u2013 for example removal of spots, stains, marks, tears as well as improving contrast, brightness, colour balance and enhance detail, focus and sharpness.\nIt is also possible to add colour to black and white photographs, as well as being able to make quality enlargements for framing.\nHere is a photograph I was asked to digitally restore. 
The original photograph is about a metre and a half wide and the photograph is absolutely fascinating as it contains so much detail. It's a photograph of Robertson and Ginnetts Gigantic Circus at the British Empire Exhibition in Wembley Stadium in 1925 featuring my customer's great grandfather who is the ringmaster!\nHere is the whole photograph before restoration, after restoration and then a single detail followed by a detail of that detail! There are literally hundreds of recognisable faces.\nThe British Empire Exhibition, held in 1924 and 1925, assembled the member nations of the empire to develop imperial trade connections and to cultivate closer political ties between Britain and her territories.\nThe British Empire Exhibition opened for a second season in May 1925, but only after considerable debate. Despite the enthusiastic press reports and the self-congratulatory comments of the exhibition organizers, the 1924 exhibition was a financial disaster. Executive director Sir William Travers Clark blamed the cold, rainy summer. Although 17 million people had passed through the turnstiles, that figure was much lower than the anticipated 30 million visitors that had been the basis for 1924's projected returns. If only to try and recoup its investment, the British government agreed to re-open Wembley in 1925.\nMore recently, the British Empire Exhibition appears in the 2010 film about the Duke of York's stammer wherein Prince Bertie delivers a painful public address at the exhibition's closing ceremony in October 1925.\nShirley T brought a small but high quality print to me and asked for it to be scanned and enlarged. The original print is about 8cm (3 inches) square. The final enlargement I supplied was 76cm (30 inches) square. 
Here is the before, after and the close up of a detail of the tree.\nThe Speich family formed a significant presence in the Sernf Valley of Switzerland in the 13th, 14th, 15th and 16th Centuries but during the latter part of the period economic depression and repeated plagues drastically reduced the population and of course the Speichs.\nThe difference between the format used by the professional company that compiled the Speich Tree and that found in this Country is that it starts at the centre with the latest members of the family and expands outwards to include the earliest members of the family that could be traced including, where possible, their dates of birth and death, their occupations and their coats of arms.\nShirley's dad's uncle commissioned the work which is all hand painted we think about 70 years ago. The tree detail goes back to 14th century!","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Symposium registration includes access to all technical sessions, invited speakers, lunches, breakfasts, breaks, the reception, and one copy of the SPCE Boston 2017 Proceedings.\nPlease note that there will be no refunds for cancellations after October 31, 2017.\nPlease note that BOTH an Author of an accepted Formal Paper AND a Presenter of an accepted Presentation (without Formal Paper) must register, attend, and present the work at SPCE 2017 in order for the work to be published in the proceedings. It is conference policy that an author or presenter must register at the full IEEE Member or Non-Member rate in order to meet this requirement. If the only author registering to cover the author and presenter requirement is a student, he or she has to pay the appropriate IEEE Member or Non-Member fee.\nCooperation from all involved in the 2017 SPCE will allow for the volunteer Technical Program Committee members and volunteer Peer Reviewers to assemble the best technical program possible. 
Thank you in advance.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Prior to Thursday November 22, 2018, the Ugandan and South African parliaments were on record as places that had 'hosted' violent confrontation between lawmakers and security agents.\nZimbabwe joined the list of chaotic parliaments after opposition lawmakers were forcibly removed from the chamber during the reading of the first post-Mugabe budget statement.\nReports indicate that the lawmakers had refused to respect the President Emmerson Mnangagwa who was present in the house to witness the presentation by Finance Minister Mthuli Ncube.\nThe MPs remained seated as Mnangagwa and his vice-presidents entered the chamber leading to the decision to expel them.\nPhotos shared on Twitter by the state-owned Zimpapers Images showed a very physical encounter between security detail and lawmakers of the main opposition Movement for Democratic Change \u2013 Alliance.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"This is as odd a club as I have seen recently. Typical 1920s 'stripie' driver, the shaft whipping below a soft black leather grip 'decorative' as was also prevalent in the 1920s. But the head is just a bit too elongated, more characteristic of Forgan heads from 1905-1910, and then there is a crown mark on the head, a symbol of the royal patronage of King Edward VII between 1901 and his death in May 1910 and not allowed to be used thereafter.\nSo, how to explain the anachronism? I think the clue is in the stamp for \"Lumley\" just above the stripe. Lumley's (qv) was a large sports retailer in Glasgow and I think either they refurbished the head (or had a batch from Forgan where they asked for 'modernisation' and a stripe was added over the maker's name and crown). Maybe Forgan (or Lumley) had a bunch of wooden heads only as, unusually, 
the shaft does not have a Forgan stamp so the club may have been shafted at the time of the 'striping' with the more modern whipping.\nAnyone with a better theory, please let me know!\nAnyway, the shaft is through-hoselled (i.e. flush with the sole) and has a horn slip secured by three hickory dowels, in keeping with the 1907 date. The sole is stamped, or rather embossed, with 'lady's' which, again I have not seen in a 1905-1910 club. In fact, other than my suggestion that the shaft has been added in the 1920s I would not see the head as being a lady's one at all. The size does not seem smaller nor the backweight lighter than a man's club from the twentieth century's first decade.\nIt's a club with an interesting story: I'm just annoyed I don't know what the story is.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"The power of negative thinking. That's the phrase I keep repeating over and over -- because the negativity is overwhelming the facts. But no one is interested in the facts right now, for fear that the facts may change -- and change for the worse. We know from John Maynard Keynes, one of the greatest economists and thinkers of all time, that when the facts change, you have to change your view, too. So that means everything you might want to buy could turn into a sale, or even a short, or the tariffs and rate hikes could keep going higher.\nRight now, for example, we are all transfixed by the FAANG. They are all, we know, over-owned, meaning that there are plenty of funds that have huge gains in them and, theoretically and ironically, at least, the prevailing thinking is that unless these funds sell these stocks now, they will give up all their gains. They are going to have to give up all of their gains. That's why the stocks can't bottom.\nI totally get that. I regularly check my Twitter feed and it is filled with people who think I just recommended Amazon (AMZN) for the first time at $1750. Like I fought it all the way up or something. 
I did recommend it at $1750. Guilty, your honor. But, how about if I recommended it at $75 or $175, does that count against my transgressions?\nApparently not, because the facts have supposedly changed and all who liked it at $1750 are now dead wrong.\nThat's the current storyline -- and I wish it were true. That would make me feel a little more certain. But it isn't. The company's crushing it and even the sellers would agree with that. They are just more fearful of their other shareholders than they are of their own work. The miss was minuscule. The reaction gigantic.\nSome may have been sanguine about the story after Thursday's earnings blow out --oops, after Thursday's revenue miss -- but because Red Hat (RHT) has now teamed up with IBM (IBM) to \"get\" Amazon, I guess Amazon is no longer investable. Forget that Red Hat was disliked because it had missed the last quarter. Forget that IBM did the same. You put them together and they are supposed to beat Amazon? I like the deal, but let's not go too crazy. AWS is the undisputed king of the cloud.\nI believe that if you loved Amazon at $2050 you have to worship it at $1500. But that's not how this new, very bearish, game works. We find a chink -- a minute revenue miss -- and we decide that Amazon is no longer any good and that Target (TGT) and Walmart (WMT) are great and not only that but now that Red Hat and IBM have teamed up to defrock Amazon it is double game over for the colossus that ruled the world a few weeks ago.\nThat's what I call the power of negative thinking.\nI am seeing it everywhere. I got a call last night about Northrop Grumman (NOC) . The caller couldn't believe that after its amazing quarter, the stock should be shelled. The lightning round is fast. You don't have time to hem and haw. But I had been on the amazing NOC call and I have followed this company and loved it since BEFORE Ron Sugar was CEO -- and he left a decade ago. 
He's now chairman of Uber, so it is top of mind.\nSo, what could I do? Should I have joined the throng who are hurtling out of it -- as they are Raytheon (RTN) , a company with shares that my charitable trust, which you can follow along by joining the Action Alerts Plus club, owns?\nI stood there for a moment, music blaring in the background, the clock ticking and I realized that, at this point, with the stock down more than 100 points from its high and off 16% for the year, it was too late to bail. But did I say buy? No. Too much selling. You'd be down a great deal before you even got your report.\nIn truth, Amazon is most likely a buy down here if you ask those selling it, but who has that kind of chutzpah? Even if the Democrats take the house, is Northrop still a good sale? I don't think so. I think it is a buy. But I am loath to just come out and say it. Too much blowback if it doesn't bottom today and rally big.\nSo, while I don't join the selling exodus, I, too, won't swim against the tide. That's because I, too, have been infected by the power of negative thinking, even when I believe that the facts haven't changed and the negativists at this juncture, with these lower prices, may actually be wrong.\nSo I wait until the selling stops. Like everyone else. Which is why they can fall so fast. Which is how bargains are created. Or at least will be, one day. Right now, though, I'm thinking negatively. 
And it's powerful as all get out.\nAction Alerts PLUS, which Cramer co-manages as a charitable trust, was long AMZN and RTN.","meta":{"redpajama_set_name":"RedPajamaC4"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzzbpodr b/data_all_eng_slimpj/shuffled/split2/finalzzzbpodr new file mode 100644 index 0000000000000000000000000000000000000000..b8e3ca487cb36e5116a50be47a7562e26529640b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzzbpodr @@ -0,0 +1,5 @@ +{"text":"North Korea has been discussed widely, especially since Kim Jong-un took the reins after Kim Jong-il's death. Yet, not often is it considered to be a subject of comparative strategic thought. The preoccupation with North Korean thinking, as hard as that may be to discern, and US thinking often leaves insufficient room for attention to the views of the other states that were part of the Six-P..\nIn 2013, China is faced with the challenge of weighing the alternative goals of a favorable balance of power in Northeast Asia, a process of reunification on the Korean Peninsula that pays suitable attention to China's national interests, and denuclearization. Of these, the oldest concern is the region's balance of power, which is best seen through a brief overview of how it had changed since ..\nFor more than six decades, South Korea has coped with one of the most enduring geopolitical asymmetries in the world, namely, the South-North conundrum on the Korean Peninsula. Virtually every aspect of Seoul's policies and strategies towards Pyongyang is governed by this fundamental quandary that has resulted oftentimes in contrasting and even conflicting approaches and attendant policy objecti..\nThrough the repeated cycles of North Korean missile and nuclear testing, negotiations, and sanctions that have characterized international reaction to Pyongyang's proliferation, Japan has gradually lost ground in its effort to shape events on the Korean Peninsula. 
Tokyo made some progress in direct negotiations with Kim Jong-il, most notably the visits to Pyongyang by Prime Minister Koizumi Jun..\nFor Russia the DPRK is not a normal state. The two facts that it is one of Russia's nearest neighbors and it was founded by the Soviet Union exert unquestionable influence on the current thinking about this country among Russia's political elite. In this issue of the Asan Forum there is extended discussion in the Country Report: Russia of an August article by Evgenyi Bazhanov on Russia's str..","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Influencer marketing is the form of marketing in which the focus is moved to influential people. This type of marketing has become quite popular in recent times as there has been a substantial increase in the number of internet celebrities.\nInternet celebrities are now getting the same amount of fame as the A-list superstars of the film industry.\nMarketing through these people, who have a large influence over the audience, is known as influencer marketing. What do we offer?\nNEED TO GET IN TOUCH WITH A CELEBRITY?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"Bingo was initially a kind of lottery which has been around since its origins in Italy. It was then referred to as Lo Giucco del Lotto D'Italia. Le lotto, as bingo was then known, attracted people, particularly the intelligentsia, in the 16th century. In the early versions of the game, bingo cards were split into three horizontal rows and nine vertical columns. The vertical columns contained numbers from 1 to 90, while the horizontal rows had five numbers in random order with four blank spaces. Within the vertical columns the numbers one to ten were placed in the first column, 11 to 20 in the second column, and it continued in the same fashion up to 90.\nOver time a more recent form of the game emerged. The first bingo game as we know it now was set up by a toymaker from Georgia named Lowe. 
The two variations of the bingo game offered were the 12-card pack game at $1 and the 24-card pack game at $2.\nIt's not tough to make your own bingo card. All you need is a piece of cardboard that is split into five columns and six rows. Across the top of the columns write the name \"BINGO\", write numbers beneath the letters, and your bingo game card is ready. Usually the winning format of bingo includes any row, meaning all of the numbers crossed out in either the top, center or bottom row, or a couple of full houses, where the first and second person who crosses out all of the numbers on the bingo card wins a bumper prize.\nBingo games can be adapted to a theme. For instance a Christmas bingo can be played with the whole family. Instead of writing \"BINGO\" on top of the handmade bingo cards, the word \"ANGEL\" could be written. The rows and columns can be filled with Christmas-themed pictures. Even children would love this particular variation of bingo.\nBingo for kids can be turned into an educational tool. Once the children have learnt the alphabet, the teacher can make \"Bingo\" cards with uppercase and lowercase letters, and the child who crosses out all of the letters correctly first can get a prize. In the same manner children can be taught basic multiplication with a \"Math Bingo\" game in which the teacher would call out 5\u00d73 and the child has to find the answer on the bingo card. 
Bingo games really are a huge hit for children's parties too.\nDo Betting Tips Systems Help You Win?","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"time is like orange juice!\nAccording to my orange juice theory, a day is like an orange and the amount of time that you have in a day to do your tasks is like orange juice.\nWith the right tools, techniques and determination, you can always squeeze a little bit more (juice) out of your day.","meta":{"redpajama_set_name":"RedPajamaC4"}} +{"text":"I am Claire Gibbs, one half of the originators of the #WhyWeDoResearch campaign. This time last year I was not in the twittersphere and not particularly interested in joining. However in the spirit of being a Senior Clinical Research Nurse, and therefore living and breathing research and development (R&D) and innovation every day, I felt I should give it a go not only as an individual human being but also in leading our R&D department at The James Paget University Hospitals NHS Foundation Trust. I think it's safe to say that twitter is an extremely powerful tool for connecting with patients and the public in a way I never would have dreamt was possible.\nWe launched the #WhyWeDoResearch campaign at Christmas (2014) as a way of introducing our core research team members to our local area using twitter as the social media platform, and a way of giving people a voice to say why they do what they do. Between May and December 2014 our confidence had grown with SoMe, therefore we felt it was appropriate and we were happy to aim for more of a patient and public focus. To participate in the campaign individuals held a placard which included the '#WhyWeDoResearch' in type followed by their written reason why they do what they do.\nWithin a week of the campaign starting other NHS Trusts began getting involved and the campaign, originally planned as the 12 days of Christmas, was extended to the New Year. Within four weeks we had 'gone global' reaching Australia and Spain first. 
At this point it seemed madness to stop, so we re-evaluated our approach. We had appointed Michael Keeling (Stroke Research Nurse in York) as a National Collaborator and decided to expand this to create a collaborators group (The group now consists of representatives from England, Ireland, Scotland, Australia, Canada and Spain).\nSix months later and the impacts have been fabulous. The hashtag now has 29 million impressions, >31,000 tweets and >3,200 individual accounts\/participants.\nWe announced our #WhyWeDoResearch ambassador role in May 2015 which enables organisations, patients and the public to have a more formal role in the campaign at local levels. Within 48 hours of the announcement we appointed 20 ambassadors who varied from representing NHS Trusts & Universities to members of the public; a fantastic example is @WendyPMitchell who as a member of the public is representing people with dementia.\nWhilst the campaign started and continues in the healthcare setting, it has gradually been spreading into other areas too. Some fantastic Primary School teachers became aware of the hashtag and used this within their schools. @WestQuartPrim in Scotland created an assembly purely around the campaign \u2013 their assembly trailer is fantastic and can be viewed here https:\/\/www.youtube.com\/watch?v=LDuem3eyA9g It shows that research is really fun as well as exciting and educational and they have since become our first #WhyWeDoResearch school ambassadors; we welcome them wholeheartedly. 
Children are of course the researchers of the future.\nIn June 2015 we were delighted to launch our website www.whywedoresearch.weebly.com which gives background to the campaign, as well as collaborator and ambassador tabs, alongside an 'opportunities for patients' tab (which was created following public followers' requests), and downloadable templates of placards and campaign leaflets.\nWe have been absolutely humbled and inspired by everyone involved in #whywedoresearch and the incredible reasons that researchers, staff, public and patients are involved in their various ways. I am very proud to have created the campaign; however, without everyone else it would not be what it is today. The collaborators team work hard to realise the campaign purpose and it is wonderful that patient and public followers are now helping to shape this.\nEveryone involved in this campaign is doing so entirely voluntarily and the campaign is not funded other than that which we as collaborators have put in ourselves. We do it because we think that it is important, and we do it because we love it; feeling inspired by peers and public and patients every single day is wonderful, motivating and has created a real sense of community in an area which has previously felt quite lonely.\nWe joke about people joining the \"#WhyWeDoResearch train\" quite often \u2013 if you would like to get involved and share your voice, the train isn't stopping any time soon so please do.","meta":{"redpajama_set_name":"RedPajamaC4"}}