diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzotml" "b/data_all_eng_slimpj/shuffled/split2/finalzzotml" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzotml" @@ -0,0 +1,5 @@ +{"text":"\\section{I. Introduction}\n\nThe Nobel Laureate Richard Feynman first suggested that a new type of\ncomputer, running on the principles of quantum rather than classical\nmechanics, might have an exponential speed-up in computation time over a\nclassical device for some tasks[1]. This observation engendered the\ndevelopment of a theory of a digital quantum computer by David Deutsch of\nCambridge University[2], in which he demonstrated the intrinsic massive\nparallelism of quantum computation by operating on a coherent superposition of\na large number of states. In particular, a single computation acting on N\nquantum bits can achieve the same effect as $2^{N}$ simultaneous computations\nacting on classical bits. For N=300, this is a number greater than the number\nof atoms in the visible universe. Obviously, no classical computer could ever\nbe built to compete with this kind of processing power. Exact methods and\napplications that harness this enormous potential are currently the subject of\nintense research. In 1994 Peter Shor of Bell Labs announced the discovery of\nan algorithm which, if run on a quantum computer, could find prime factors\nof large numbers in polynomial time[3]. There is currently no known classical\ncounterpart that can accomplish the same task in less than exponential time.\nShor's factoring algorithm was the impetus for an explosion of research aimed\ntoward the theory and experimental implementation of a working digital quantum\ncomputer[4]. In a subsequent development, Lov Grover discovered an algorithm\nthat shows a square-root speed-up over a classical machine for the task of\nsearching a large unordered database[5]. 
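A quick numerical sanity check of the exponential state-space figure quoted above can be done in a few lines (illustrative only; the figure of $10^{80}$ atoms in the visible universe is a common rough estimate, not taken from the text):

```python
import math

def hilbert_dim(n_qubits: int) -> int:
    """Dimension of the joint state space of n qubits: 2**n amplitudes."""
    return 2 ** n_qubits

ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80  # rough standard estimate (assumption)

# For N = 300 the state-space dimension already dwarfs the atom count.
print(hilbert_dim(300) > ATOMS_IN_VISIBLE_UNIVERSE)  # True
print(round(math.log10(hilbert_dim(300)), 1))        # 90.3
```

The comparison is exact integer arithmetic, so no floating-point overflow issues arise even for such large exponents.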
Even though Grover's algorithm is\nonly polynomially faster than a classical algorithm, the wide range of its\napplicability to data search makes it very enticing. Another extremely\nimportant class of problems is represented by the so-called ''intractable''\nNP-complete set. These problems are at the core of a wide range of practical\nrouting, layout, and other logistics issues. A prototypical example here is\nthe traveling salesman problem (TSP). It is widely believed that no classical\nalgorithm can solve this problem, or any other member of this class,\nexactly in less than exponential time. Such problems are now a challenge for\nquantum computers and algorithm developers.\n\nVarious approaches are currently in progress toward the laboratory realization of\nessential elements to perform quantum computing, namely, the construction of\nqubits and demonstration of quantum logic operations. The work of Jeff Kimble\nand his colleagues at the California Institute of Technology (Caltech)\nemphasizes the use of entangled photon states using microcavities to construct\nquantum logic gates [6]. Correspondingly, David Wineland and his group at\nNIST, following the theoretical predictions of Peter Zoller and coworkers[7],\nachieved what amounts to a two-bit quantum register using laser excitation\ncontrol of atoms in cold ion traps[8]. These methods require large scale,\nstate-of-the-art laboratory facilities and defy scalability beyond a few\nqubits. Aside from the requirements of extremely sophisticated laboratory\nfacilities and procedures, quantum decoherence occurs in these schemes at a\ntime scale of nanoseconds at best, and is a major obstacle. Recently, Neil\nGershenfeld and Isaac Chuang introduced a revolutionary scheme for quantum\ncomputing using nuclear spins and the methods of nuclear magnetic resonance\n(NMR) to construct and manipulate quantum logic[9]. 
Free temporal evolution\ntakes place virtually without decoherence due to the robust isolation of\nnuclear spins caused by screening by the atom's electrons. This version of\nquantum computation can be controlled and processed with current\nstate-of-the-art technology. More recently, Chuang, Gershenfeld, and Kubinec\n(CGK) reported, using this scheme, the first ever laboratory demonstration of\na genuine quantum computation [10] by execution of Grover's quantum search\nalgorithm [5]. This included loading of the registers, unitary execution of\nthe quantum calculation, and read-out of the results. Their results, though they\nutilized only four input registers, constitute a significant step\ntoward the practical realization of a useful quantum computer. One of the\nproblems in scaling, in this regard, to a practical device is that the nuclear\nspins within a molecule that form the computational basis must rely on\nchemical shifts to be distinguished from one another. These shifts are usually\nvery small and consequently the rf pulses that must control the computation\nare required to be sufficiently long so as not to couple distinct spins by\nresonance overlap. This is usually of the order of microseconds. Thus,\ncomputation is required to be exceedingly slow. A second, and more severe,\nproblem in scaling is that the read-out signal necessarily diminishes\nexponentially with the size of the quantum bit register. The current\nstate-of-the-art limits this size to no more than 15 bits. We recognized early\non [11] that these difficulties could be mitigated by selectively coupling the\nnuclear and electronic intrinsic spin angular momenta via hyperfine or\ntransferred hyperfine interactions [12,13]. The use of electron-nuclear spin\ncoupling and control for quantum computation has recently been further\nemphasized by David DiVincenzo[14]. These comments were stimulated by\nthe specific seminal suggestions by Kane[15] for nanostructures in Si. 
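The exponential read-out loss mentioned above can be illustrated with a toy scaling model (a sketch under the standard assumption that the pseudo-pure-state signal in liquid-state NMR falls off roughly as $N/2^{N}$ with register size $N$; the precise prefactor is immaterial for the scaling argument):

```python
def relative_signal(n_qubits: int) -> float:
    """Toy model: pseudo-pure-state signal fraction ~ N / 2**N (assumed scaling)."""
    return n_qubits / 2 ** n_qubits

# A CGK-scale register versus the ~15-bit practical limit quoted in the text.
for n in (2, 4, 15, 30):
    print(n, relative_signal(n))
```

The signal at 30 qubits is already some six orders of magnitude below the 15-qubit level, which is the scaling obstacle the electron-mediated scheme below is designed to avoid.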
The\nscheme proposed by Kane surmounts the problems of scalability and individual\nnuclear spin addressability that are inherent in the liquid state NMR methods\n[9,10]. However, it suffers in principle from the requirement of electron\ncharge transfer, and the measurement of single electronic charge by\nnanostructured gates for signal read-in and read-out. Moreover, electronic\ncharge migration is required for mediation of nuclear spin coupling as well as\nthe shift of individual nuclear spins in and out of resonance with a constant\nrf field. Electronic charge and spin migration is controlled by nanofabricated\ngates. The requirements of the procedure are beyond current fabrication\ntechnology, but this is likely to change in the reasonably near future.\nHowever, there remain problems connected with charge transfer control for\nnuclear spin identification, and single charge measurements for read-in and read-out.\n\nOur purpose here is to present a scheme for quantum computation that is\nuniversal, does not suffer from the limiting conditions of liquid\nstate NMR, and may offer a more viable alternative to the proposed scheme\nof Kane. We draw upon the favorable decoherence properties of nuclear spin\nsystems utilized in NMR quantum computing, and the solid state system of\ndistributed implanted nuclear spin elements coupled via electron hyperfine and\ntransferred hyperfine interactions proposed by Kane. Our scheme involves a\ndistributed individually addressable system of electron spin\/nuclear spin\nqubits and state-of-the-art microwave single electron spin resonance for\nsignal read-in and sensitive optical fluorescence for read-out. Our treatment\nhere is generic. Specific materials and optimizations are relegated to a\nfuture publication.\n\nIn the next section we present and discuss the irreducible requirements for a\nuniversal quantum computer. 
Our generic scheme is presented in Section III.\nQuantum logic, entanglement, nonlocality and quantum information memory are\ndiscussed in Section IV where it is shown that all the conditions of Section\nII are met. The last section is used for discussion and comparison with other\nproposed methods and schemes, and for concluding remarks.\n\n\\bigskip\n\n\\section{II. Requirements for Universal Quantum Computer}\n\n\\bigskip\n\nQuantum computational (QC) algorithms are of only esoteric significance apart\nfrom the hardware capability to execute them. The resource and technological\ninvestment necessary to develop such a device would necessitate design\nrequirements that fulfill the capability to perform any given quantum\ncomputation and accommodate the execution of any algorithm, including initial\nstate preparation, read-in and read-out. A quantum computer design meeting\nthese requirements will be termed universal. The design must meet, therefore,\ncertain irreducible requirements that are both necessary and sufficient for\nthe execution of any given QC algorithm. Establishing these requirements\nnecessitates the identification of the set of elemental operations common to\nthe execution of all quantum computational algorithms that, in a particular\noperational sequence, are sufficient for the execution of any given algorithm.\nA universal quantum computer must be capable of executing the elemental operations.\n\nIt is conjectured that the universal elemental operations\ncommon to all QC algorithms, and sufficient in specific combination, are simple\nHadamard transforms together with a generalized quantum phase shift operation\n[16]. The work of Cleve et al. [16] is the basis for this conjecture. 
A\nsimple Hadamard transform, $H$ , on a qubit $\\left| q\\right\\rangle $, $q=0,1$\nis given by [16]%\n\n\\begin{align}\n& \\left| 0\\right\\rangle \\overset{H}{\\rightarrow}\\frac{1}{\\sqrt{2}}(\\left|\n0\\right\\rangle +\\left| 1\\right\\rangle )\\label{eq1}\\\\\n& \\left| 1\\right\\rangle \\overset{H}{\\rightarrow}\\frac{1}{\\sqrt{2}}(\\left|\n0\\right\\rangle -\\left| 1\\right\\rangle )\n\\end{align}\nThis operation is exactly equivalent to a uniform single particle beam\nsplitter[17], and is a special case of the more general Quantum Fourier\nTransform (QFT),\n\\begin{equation}\n\\left| a\\right\\rangle \\overset{F_{2^{m}}}{\\rightarrow}\\frac{1}{\\sqrt{2^{m}}}\\sum_{y=0}^{2^{m}%\n-1}e^{2\\pi iay\/2^{m}}\\left| y\\right\\rangle\n\\end{equation}\nIn terms of a two-bit operation, a subsequent phase shift operation can impart\na conditional phase shift to one component of the transformed wave function,\n\\begin{equation}\n\\left| 0\\right\\rangle \\left| u\\right\\rangle \\overset{H}{\\rightarrow}\\frac\n{1}{\\sqrt{2}}(\\left| 0\\right\\rangle +\\left| 1\\right\\rangle )\\left|\nu\\right\\rangle \\overset{c-u}{\\rightarrow}\\frac{1}{\\sqrt{2}}(\\left|\n0\\right\\rangle +e^{i\\varphi}\\left| 1\\right\\rangle )\\left| u\\right\\rangle\n\\end{equation}\nor a conditional transition of the auxiliary qubit as well,\n\\begin{equation}\n\\frac{1}{\\sqrt{2}}(\\left| 0\\right\\rangle +\\left| 1\\right\\rangle )\\left|\nu\\right\\rangle \\overset{C-NOT}{\\rightarrow}\\frac{1}{\\sqrt{2}}(\\left|\n0\\right\\rangle \\left| u\\right\\rangle +e^{i\\varphi}\\left| 1\\right\\rangle\n\\left| v\\right\\rangle )\n\\end{equation}\nthus forming an entangled state and constituting a controlled NOT-gate\n(C-NOT). Ultimately, quantum trajectories are brought together as in a\nMach-Zehnder interferometer to produce quantum interference[17]. The\nintermediate conditional operation, phase shift and\/or entanglement, requires\nauxiliary qubits and a unitary operation[16]. 
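The split/phase/recombine sequence of Eqs. (1)-(5) can be checked numerically with plain 2x2 matrices (an illustrative single-qubit Mach-Zehnder sketch, not the authors' code):

```python
import cmath

S2 = 2 ** -0.5
H = [[S2, S2], [S2, -S2]]               # Hadamard, as in Eqs. (1)-(2)

def phase(phi):
    """Conditional phase on the |1> branch, as in Eq. (4)."""
    return [[1, 0], [0, cmath.exp(1j * phi)]]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

psi = apply(H, [1, 0])                   # split: (|0> + |1>)/sqrt(2)
psi = apply(phase(cmath.pi), psi)        # conditional phase, phi = pi
psi = apply(H, psi)                      # recombine -> interference
probs = [abs(a) ** 2 for a in psi]
print([round(p, 12) for p in probs])     # [0.0, 1.0]
```

With phi = pi the two paths interfere destructively in |0> and all amplitude arrives in |1>; with phi = 0 it would return to |0>, which is the phase-readout mechanism the interferometer picture describes.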
The corresponding network\nrepresentation is given in fig. 1.%\n\n\n\\par\\noindent Fig. 1. Network representation of quantum Mach-Zehnder interferometer with\ngeneralized phase shifter.\n\n\\bigskip\n\nThus, the elemental operations are conjectured to be constituted by a simple\nQFT, a conditional phase shift and\/or entanglement, and ultimately a QFT that\nbrings together quantum trajectories to produce quantum interference. The\nlatter constitutes the results of a computation, i.e. collapse of the wave\nfunction. These operations are entirely equivalent to the unique\ncharacterization of quantum computation: a) quantum superposition, b) quantum\nentanglement, and c) quantum interference.\n\nIn addition to the ability to facilitate the operational requirements\ndiscussed above, a universal digital quantum computer must satisfy the\nfollowing criteria:\n\n\\bigskip\n\ni)\\qquad A distinct set of distributed qubits must be defined that are\nindividually addressable.\n\nii)\\qquad The qubits must be linkable via a binary interaction.\n\niii)\\qquad The system must be capable of performing two-bit logic operations, i.e. a\nC-NOT gate.\n\niv)\\qquad The speed of operation must satisfy the Preskill criterion, i.e. the\ncapability must exist for the execution of at least $10^{4}$\ndistinct operations within the quantum decoherence time.\n\nv)\\qquad Arbitrary initial state preparation must be clearly executable.\n\nWe present our generic scheme for realization of a quantum computer in the\nnext section. In section IV we discuss the fulfillment of the above criteria\nin our proposed scheme.\n\n\\section{III. Realization of a Universal Quantum Computer}\n\nOur proposed scheme is similar to the seminal proposal of Kane[15], but does\nnot depend upon electronic charge migration for read-in and read-out, nor wave\nfunction displacement to distinguish nuclear spin qubits. 
Our scheme, that we\nfeel offers a viable alternative, is represented in fig. 2.%\n\n\\par\\noindent Fig. 2. Schematic representation of distributed qubit array as discussed in\nthe text. Atoms C and A are embedded in the Si wave guide as shown. The Stark\ngates are depicted above atoms C external to the guide, and the bias gates,\nlocated on the top and edge of the guide, are also shown.\n\n\\bigskip\n\nAs depicted in fig. 2 the two-qubit system is constituted by the unpaired\nsingle electron spin of atom C, and the nuclear spin of atom A. The\nspecification of the atomic constituents is that atom C be composed of a\nsingle loosely bound unpaired electron and an isotope of nuclear spin $I_{C}=0$,\nwhereas atom A is composed of an isotope with $I_{A}=1\/2$ and no unpaired\nelectrons. The configuration of atoms A and C, shown in fig. 2, is embedded in\nan optical waveguide composed of pure Si of isotope $I_{Si}=0$. The substrate\nalso consists of Si with $I_{Si}=0$ to avoid background interference. The\nelectron spin of atom C is coupled to the nuclear spin of atom A by electronic\nwavefunction overlap as shown in fig. 2. The coupling is via Fermi contact\ninteraction[18] given generically for isotropic transferred hyperfine\ninteraction by\n\\begin{equation}\nA_{s}=\\frac{8\\pi\\beta\\hbar\\beta_{n}}{3S}\\left[ a_{1s}^{2}\\left| \\psi\n_{1s}\\left( 0\\right) \\right| ^{2}+a_{2s}^{2}\\left| \\psi_{2s}\\left(\n0\\right) \\right| ^{2}+2a_{1s}a_{2s}\\left| \\psi_{1s}\\left( 0\\right)\n\\right| \\left| \\psi_{2s}\\left( 0\\right) \\right| \\right]\n\\end{equation}\n\nwhere $\\beta$ is the Bohr magneton, $\\beta_{n}$ the nuclear magneton, the\nwave functions $\\psi_{ks}(0)$, $k=1,2$, are the 1s and 2s orbitals of atom A\nevaluated at its nucleus, and the $a_{1s}$, $a_{2s}$ are overlap integrals coupling the\nelectron wave function of atom C with the 1s and 2s inner shell wave functions\nof atom A. 
Thus, the unpaired electron spin of atom C is coupled with the\nnuclear spin of atom A by contact interaction mediated by the inner shell\ns-state orbitals of atom A.\n\nThe interaction is controlled by laser field induced electronic excitation of\natom C from its ground s-state (no overlap interaction with atom A) to an\nelectronic excited state with a p- or d-orbital, with strong overlap and\nsuperhyperfine interaction with one or both of its nearest neighbor atoms A.\nEnhanced, as well as suppressed, interaction is controlled by gates mounted\nexternal to the waveguide that produce a positive or negative bias. Outer\nshell, loosely bound, electronic wave functions may be as extensive as 20 nm in\na solid of sufficiently high dielectric constant, so nearest neighbor C-A\natom configurations must be within this range. Selective excitation is\nprovided by the gates that sandwich atom C, used to Stark-shift atom C into and\nout of resonance with the cw laser field in the waveguide, thus enabling\n$\\pi$-excitation \/ deexcitation. The nuclear spin flips are controlled by\nmicrowave induced simultaneous electron-nuclear spin flips mediated by the\ntransferred hyperfine interaction.\n\nThe elementary, time independent, spin Hamiltonian for this system is,\n\\begin{equation}\nH=g\\beta H_{0}S_{z}+\\mathbf{S}\\cdot\\mathbf{A}\\cdot\\mathbf{I}-g_{n}\\beta_{n}H_{0}I_{z}%\n\\end{equation}\n\nwhere $g$ and $g_{n}$ are the electron and nuclear g-factors. Here,\nwe have omitted the weak transverse time dependent part of the Hamiltonian for\nsimplicity. The first and third terms are the Zeeman terms for the electronic\nand nuclear spins, respectively, coupling to the constant magnetic field,\n$H_{0}$. The second term expresses the transferred hyperfine coupling between\nthe electron spin S of atom C and the nuclear spin I of atom A. 
These are\ncoupled via the superhyperfine tensor A, which for isotropic homogeneous\ninteraction is diagonal in the representation where $S_{z}$ and $I_{z}$ are\naligned with $H_{0}$, and is given in magnitude by (6). For simplicity we\nassume isotropy in what follows unless stated otherwise. To first order, and\nwith the condition\n\\begin{equation}\ng\\beta H_{0}>A_{s}\\gg g_{n}\\beta_{n}H_{0}%\n\\end{equation}\nthe eigenenergies of (7) with associated quantum numbers, $m_{s}$ and $m_{I}\n$ for electron spin and nuclear spin in the representation in which the Zeeman\nterms are diagonal are given by,\n\\begin{equation}\nE(m_{s},m_{I})=g\\beta H_{0}m_{s}+A_{s}m_{s}m_{I}%\n\\end{equation}%\n\n\\par\\noindent Fig. 3. Zeeman and superhyperfine energy level splitting and microwave\nfrequency transition for simultaneous electron-nuclear spin flips.\n\n\\bigskip\n\nThe corresponding Zeeman energy level diagram is shown in fig. 3. It is seen\nthat simultaneous electron-nuclear spin flips can be induced by an applied\nmicrowave field of frequency $\\omega$, according to the selection rule $\\Delta\nm_{F}=0$, where F = S + I is the total spin angular momentum. Thus the\n$E_{4}\\rightarrow E_{2}$ transition, corresponding to simultaneous electron-nuclear\nspin flips, is allowed under the proper selection rules, $\\Delta\nm_{F}=\\pm1,0$, but the $E_{3}\\rightarrow E_{1}$ transition is forbidden.\nExplicitly, for an isotropic tensor A, i.e. diagonal in this representation,\ntransitions are induced according to the second term in (7) by the operators\n$S^{\\pm}I^{\\mp}$, consistent with the selection rule for simultaneous electron-nuclear\nspin transitions indicated in fig. 3. 
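The first-order spectrum of Eq. (9) is easy to tabulate. The sketch below uses illustrative magnitudes (an electron Zeeman splitting of ~10 GHz and $A_s$ of ~100 MHz, expressed as frequencies; these specific numbers are assumptions consistent with the orders of magnitude quoted later in the text):

```python
from itertools import product

def level(ms, mi, gbh0=10e9, a_s=100e6):
    """Eq. (9): E(m_s, m_I) = g*beta*H0*m_s + A_s*m_s*m_I, in Hz (E/h)."""
    return gbh0 * ms + a_s * ms * mi

E = {(ms, mi): level(ms, mi) for ms, mi in product((0.5, -0.5), repeat=2)}

# Delta m_F = 0 flip-flop: |m_s=-1/2, m_I=+1/2> -> |m_s=+1/2, m_I=-1/2>
flip_flop = E[(0.5, -0.5)] - E[(-0.5, 0.5)]

# Electron-only flips form a doublet split by exactly A_s:
split = (E[(0.5, 0.5)] - E[(-0.5, 0.5)]) - (E[(0.5, -0.5)] - E[(-0.5, -0.5)])
print(flip_flop)  # g*beta*H0 at this order
print(split)      # equals A_s
```

The superhyperfine term thus splits the electron-spin-flip lines by $A_s$, which is what makes the simultaneous-flip transition spectrally resolvable from the forbidden one.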
However, provided the symmetry\nis such that A is not diagonal in this representation, or if there is an\nelement of symmetry breaking, then the superhyperfine interaction described by\nthe second term in (7) can contain contributions from mixed terms $S_{x}I_{y}%\n$, $S_{y}I_{x}$ that can cause the transition $E_{3}\\rightarrow E_{4}$. Such\nconditions can, in principle, be regulated by local impurities or the natural\nlocal symmetry environment.\n\nWith these conditions, simultaneous electron-nuclear spin flips can be\ncontrolled by externally applied microwave pulses mediated by transferred\nhyperfine interactions. The interaction can be regulated, i.e. turned on and\noff, by laser pulse electronic excitation \/ deexcitation between the ground\ns-state of atom C (no electronic wave function overlap with the nuclear spin\nof atom A) and an electronic excited state of nonzero orbital angular momentum\n(strong overlap with the nucleus of atom A) as indicated in fig. 4. This\nalso introduces the option of storing information in the nuclear spin system.\nMost important, however, is the single qubit selectivity facilitated by\nselective laser pulse excitation using the Stark shift gates discussed\nearlier. In the next Section we discuss some useful specific performance\naspects of the model.%\n\n\n\\par\\noindent Fig. 4. Zeeman and superhyperfine (shf) energy level splittings and laser and microwave\npulse sequence for shf interaction control and simultaneous electron-nuclear\nspin flips.\n\n\\bigskip\n\n\\section{IV. Quantum Calculational Features}\n\nWith reference to figs. 
3-4 we identify the calculational basis,\n\\begin{equation}\n\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\\right\\rangle _{n},\\left|\n\\uparrow\\right\\rangle _{e}\\left| \\downarrow\\right\\rangle _{n},\\left|\n\\downarrow\\right\\rangle _{e}\\left| \\downarrow\\right\\rangle _{n},\\left|\n\\uparrow\\right\\rangle _{e}\\left| \\uparrow\\right\\rangle _{n}%\n\\end{equation}\n\nwhere $\\left| {}\\right\\rangle _{e}$ and $\\left| {}\\right\\rangle _{n}$\nrepresent electron, and nuclear spin states, respectively. The Zeeman split\nelectron spin manifold depicted in fig. 3 can be driven by a microwave pulse\nof frequency $\\omega$ into a coherent superposition described by a single spin\nrotation unitary transformation, U(t),\n\\begin{equation}\nU(t)\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\\right\\rangle\n_{n}=\\alpha(t)\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\n\\right\\rangle _{n}+\\beta(t)\\left| \\uparrow\\right\\rangle _{e}\\left|\n\\downarrow\\right\\rangle _{n}%\n\\end{equation}\nwhere $|\\alpha|^{2}+|\\beta|^{2}=1.$ The operation (11), together with\n\\begin{equation}\nU(t)\\left| \\downarrow\\right\\rangle _{e}\\left| \\downarrow\\right\\rangle\n_{n}=\\alpha(t)\\left| \\downarrow\\right\\rangle _{e}\\left| \\downarrow\n\\right\\rangle _{n}+\\beta(t)\\left| \\uparrow\\right\\rangle _{e}\\left|\n\\downarrow\\right\\rangle _{n}%\n\\end{equation}\nform a quantum controlled NOT-gate (C-NOT-gate). The target bit $\\left|\n{}\\right\\rangle _{n}$ is flipped contingent upon the state of the control bit\n$\\left| {}\\right\\rangle _{e}$. The externally applied coherent microwave\npulse induced unitary transformation drives the electron-nuclear spin system\nground and excited states into an entangled pair via the transferred hyperfine\ninteraction, and the entanglement is manifestly nonlocal. 
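At a full $\pi$ pulse ($\alpha=0$, $|\beta|=1$) this conditional-flip behavior reduces to the standard C-NOT truth table. A minimal sketch (the encoding of spin-down/up as bits 0/1 and the basis ordering are our assumptions for illustration, not the authors'):

```python
import math

def cnot(state):
    """Amplitudes over |e n> in the order |00>, |01>, |10>, |11>
    (electron bit first, acting as control): flip n iff e = 1."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

# Target flipped only when the control is set:
print(cnot([0, 0, 1, 0]))   # [0, 0, 0, 1]
print(cnot([1, 0, 0, 0]))   # [1, 0, 0, 0]

# A superposed control yields the entangled (nonlocal) pair:
s = 1 / math.sqrt(2)
print(cnot([s, 0, s, 0]))   # (|00> + |11>)/sqrt(2), a Bell state
```

The last line shows why a single conditional operation on a superposed control suffices to create entanglement, which is the resource the text's nonlocality remark refers to.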
Since two-bit gates\nare universal in quantum computing [19], the identification of the set of\nqubits and the calculational basis, together with the demonstrated C-NOT gate,\n(12), fulfills the requirement for universal quantum computation, i.e., any\nalgorithm is executable in the system provided the system is scalable.\n\nWorth noting here is that if the target state is initially $\\left|\n\\uparrow\\right\\rangle _{n}$ , then the unitary transformation is to the\ntwo-bit sub-manifold, $\\{\\left| \\downarrow\\right\\rangle _{e}\\left|\n\\uparrow\\right\\rangle _{n},\\left| \\uparrow\\right\\rangle _{e}\\left|\n\\downarrow\\right\\rangle _{n}\\}$, (11); whereas, given the target state\n$\\left| \\downarrow\\right\\rangle _{n}$, the same transformation results in the\ntransformation to the submanifold, $\\{\\left| \\downarrow\\right\\rangle\n_{e}\\left| \\downarrow\\right\\rangle _{n},\\left| \\uparrow\\right\\rangle\n_{e}\\left| \\uparrow\\right\\rangle _{n}\\}$. These alternatives are isomorphic\nto the oracle of the Deutsch-Jozsa Promise Algorithm [20], and the initial\nnuclear spin target state can be determined by application of that algorithm\nin a single unitary operation.\n\nA different calculational basis can be identified, independently, with respect\nto the nuclear spins. Selective rf field induced nuclear spin flips can be\ndriven under laser pulse controlled transferred hyperfine interaction. The\nselective transferred hyperfine interaction can be used to induce significant\nnuclear spin level shifts. This interesting alternative and option will not be\npursued further here, but will be treated elsewhere. We feel that we have\npresented here the simplest approach to quantum computation with respect to\nthis particular scheme.\n\nSo far, we have discussed results for selective coupling between an atom C,\nwith respect to its electronic component, and a single atom A, with its\nnuclear component. 
Now we focus attention on electronic wave function mediated\ncoupling of two adjacent nuclear spins, fig. 2. For this case, instead of (7)\nwe have the Hamiltonian\n\\begin{equation}\nH=g\\beta H_{0}S_{z}+A_{k}\\mathbf{S}\\cdot\\mathbf{I}_{k}+A_{k+1}\\mathbf{S}\\cdot\\mathbf{I}_{k+1}-g_{nk}\\beta_{n}H_{0}I_{z}%\n^{(k)}-g_{nk+1}\\beta_{n}H_{0}I_{z}^{(k+1)}%\n\\end{equation}\nand to first order and within the approximation (8), the associated energy\nlevels are given by\n\\begin{equation}\nE(m_{s},m_{Ik},m_{Ik+1})=g\\beta H_{0}m_{s}+A_{s}m_{s}(m_{Ik}+m_{Ik+1})\n\\end{equation}\nHere, k labels the location of atom $A_{k}$, and $m_{Ik}$ and $m_{Ik+1}$ are\nthe spin quantum numbers for nuclei of atoms $A_{k}$ and $A_{k+1}$,\nrespectively. The corresponding energy level diagram and transitions are\ndisplayed in fig. 5.%\n\n\n\\par\\noindent Fig. 5. Zeeman and superhyperfine energy level splittings and microwave field\ninduced transitions for electron spin mediated nuclear spin flips (see fig.\n2). Here, the left-hand arrow corresponds to the nuclear spin of the kth atom\nof type A whereas the right-hand arrow corresponds to the nuclear spin of the\nk+1 atom of type A.\n\n\\bigskip\n\n\\bigskip\n\nAs an example, consider the $\\Delta m_{I}=-1$ microwave excitation from the\nground initial state\n\\begin{equation}\n\\left| \\psi_{l}\\right\\rangle =\\left| \\downarrow\\right\\rangle _{e}\\left|\n\\uparrow\\uparrow\\right\\rangle _{n}%\n\\end{equation}\nto the state\n\\begin{equation}\nU_{1}(t)\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\\uparrow\n\\right\\rangle _{n}=\\left| \\uparrow\\right\\rangle _{e}\\left| \\uparrow\n\\downarrow+\\downarrow\\uparrow\\right\\rangle _{n}%\n\\end{equation}\ncorresponding to the transition $E_{6}\\longrightarrow E_{2}$, fig. 5. Here,\nthe left arrow in $\\left| {}\\right\\rangle _{n}$ always refers to atom $A_{k}$\nand the right arrow refers to atom $A_{k+1}$ nuclear spin. 
This is followed\nby the $\\Delta m_{I}=0$ microwave pulse deexcitation,\n\\begin{equation}\nU_{1}(t)\\left| \\uparrow\\right\\rangle _{e}\\left| \\uparrow\\downarrow\n+\\downarrow\\uparrow\\right\\rangle _{n}=\\left| \\downarrow\\right\\rangle\n_{e}\\left| \\uparrow\\downarrow+\\downarrow\\uparrow\\right\\rangle _{n}%\n\\end{equation}\ncorresponding to the transformation $E_{2}\\longrightarrow E_{5}$.\nSubsequently, this is followed by laser pulse induced adiabatic electron\nspin-nuclear spin decoupling\n\\begin{equation}\n\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\\downarrow+\\downarrow\n\\uparrow\\right\\rangle _{n}\\longrightarrow\\left| \\uparrow\\downarrow\n+\\downarrow\\uparrow\\right\\rangle _{n}%\n\\end{equation}%\n\\begin{equation}\nA_{s}\\longrightarrow0\n\\end{equation}\nthus storing entangled state information in the nuclear spin system,\n\\begin{equation}\n\\left| \\psi_{f}\\right\\rangle _{n}=\\left| \\uparrow\\downarrow+\\downarrow\n\\uparrow\\right\\rangle _{n}%\n\\end{equation}\n\nNow, suppose the superhyperfine coupling is again induced adiabatically via\nlaser pulse electronic excitation, except that the overlap is enhanced with\nrespect to the left-hand atom $A_{k}$, and at the same time suppressed with\nregard to the right-hand atom $A_{k+1}$, by means of the outer gate elements,\nfig. 2. The electron spin of atom C is now coupled to the nuclear spin of atom\n$A_{k}$ only. 
This is represented by\n\\begin{align}\n\\left| \\uparrow\\downarrow+\\downarrow\\uparrow\\right\\rangle _{n} &\n\\longrightarrow\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\n\\downarrow+\\downarrow\\uparrow\\right\\rangle _{n}\\\\\nA_{s} & \\neq0\n\\end{align}\nThen, microwave excitation from the initial state (18) gives\n\\begin{equation}\nU_{1}(t)\\left| \\downarrow\\right\\rangle _{e}\\left| \\uparrow\\downarrow\n+\\downarrow\\uparrow\\right\\rangle _{n}=\\left| \\uparrow\\right\\rangle\n_{e}\\left| \\uparrow\\downarrow+\\downarrow\\uparrow\\right\\rangle _{n}%\n\\end{equation}\nThen, adiabatic laser pulse induced electron-spin, nuclear-spin decoupling\nyields the entangled nuclear spin state,\n\\begin{equation}\n\\left| \\psi_{f}\\right\\rangle _{n}=\\left| \\uparrow\\uparrow+\\downarrow\n\\downarrow\\right\\rangle _{n}%\n\\end{equation}\nThus, we have manipulated the system to induce two distinct entangled pairs of\nspin states, (20) and (24), and this information can be stored in the nuclear\nspin system. In a similar manner we can produce a uniform superposition of\n(20) and (24).\n\n\\section{V. Universality}\n\n\\bigskip\n\nWe are now in a position to discuss the universality of our scheme in relation\nto the criteria of Section II.: i) We have identified a distinct set of qubits\nand a computational basis, (10), fig. 2; ii) The qubits are distinct,\ndistributed, and individually addressable, fig. 3; iii) The coupling of\nelectron and nuclear spins is via transferred hyperfine interaction, (5-6),\nfig. 3; iv) We have demonstrated a C-NOT gate, (11,12); v) The speed of\noperation is governed by the rate at which an externally applied microwave\nfield can cause simultaneous electron-nuclear spin flips. 
This rate is limited\nby the strength of the transferred hyperfine interaction, $A_{s}\\sim$ 10 kHz--100 MHz.\nThe nuclear spin flip relaxation times in Si:$^{31}$P are measured to be within the\n1-10 hour range at low temperatures [21], whereas the direct electron spin\nrelaxation is found to be on the order of one hour. The electron spin\nresonance (ESR) line width for $10^{6}$ phosphorus ions \/ cm$^{3}$ in Si was observed to\nbe 1 kHz using spin-echo techniques [22], much narrower than the ESR frequency\non the order of 10 GHz. Thus, Preskill's criterion [23] is well satisfied for the\nconfiguration of our QC scheme; vi) We demonstrated selective single qubit\naddressability in Section IV. Initial state preparation can be established\neither globally by the usual techniques of electron-nuclear spin resonance,\nNMR, and ESR, or by selective single bit preparation using laser pulse\nselective electronic excitation and subsequent microwave pulse initial state\npreparation. There are a variety of combinations of techniques that can be\nused; however, the selective single qubit preparation is sufficient to\nguarantee arbitrary initial state preparation capability; vii) Input\ninformation can be imparted selectively to either the nuclear or electronic\nspin systems, or both, by microwave pulse excitation following selective laser\npulse electronic excitation. Output information can be rendered efficiently\nvia optical fluorescence from electronic excited states. Output information is\ntherefore imparted to the excited state electronic spin system. The\nfluorescence yield conveys all information involving the electron-spin\nnuclear-spin system. The specifics of optical fluorescence and detection for\nthis system will be the subject of another publication [24]; viii) Scalability\nwas clearly identified in the previous Section, (15-24).\n\n\\section{VI. 
Summary and Conclusions}\n\n\\bigskip\n\nWe have demonstrated a scheme for quantum computing and shown that it\nsatisfies the irreducible set of criteria for universality. Our scheme has\ndefinitive advantages over other schemes involving solid state systems,\nnotably those of Kane [15] and Yablonovitch [25]. These schemes each involve\nelectron charge transfer and read-in \/ read-out requiring single electron charge\nmeasurements. Our method is more robust, depending upon microwave pulse\nexcitation control for read-in and sensitive optical fluorescence and single\nphoton detection for read-out.\n\nFabrication technology requirements for the scheme proposed here are currently\nbeyond the state-of-the-art, but this current impasse is expected to be\nalleviated within a few years due to the rapid progress in nanoscience of\nmaterials and fabrication, especially in regard to silicon fabrication\ntechnology. Another concern involves the specific choice of atomic species to\nsatisfy the conditions required for the qubit design. This requires\nsimultaneous materials study and optimization analysis. The spectroscopy\ntechniques required in this scheme provide a particular challenge. The NMR,\nESR, and double resonance requirements correspond to well developed\ntechniques; however, combining these with integrated laser pulse excitation adds\na third component to constitute a triple resonance spectroscopy. This may lead\nto the cultivation of an interesting and useful spin-off as a novel method in spectroscopy.\n\nIt is felt that the proposed scheme combines several novel ideas to provide a\nrealistic multi-purpose method of exploiting quantum parallelism,\nentanglement, and interference. We anticipate that the scheme and analysis\nexpressed here can serve as a template for advancement toward realization of a\npractical universal quantum computer.\n\n\\bigskip\n\n\\section{References}\n\n\\bigskip\n\n1. Feynman, R. P., 1982, Int. J. Theor. Phys. 21, 467.\n\n2. 
Deutsch, D., 1985, Proc. Roy. Soc. London A 400, 96.\n\n3. Shor, P., 1994, Proc. 35th Annual Symp. on Foundations of Computer Science,\nLos Alamitos, CA, IEEE Press, 124.\n\n4. Ekert, A., and Jozsa, R., 1996, Rev. Mod. Phys. 68, 733.\n\n5. Grover, L. K., 1997, Phys. Rev. Lett. 78, 325.\n\n6. Turchette, Q. A., Hood, C. J., Lange, W., Mabuchi, H., and Kimble, H. J.,\n1995, Phys. Rev. Lett. 75, 4710.\n\n7. Cirac, J. I., and Zoller, P., 1995, Phys. Rev. Lett. 74, 4091.\n\n8. Monroe, C., Meekhof, D. M., King, B. E., Itano, W. M., and Wineland, D. J.,\n1995, Phys. Rev. Lett. 75, 4714.\n\n9. Gershenfeld, N. A., and Chuang, I. L., 1997, Science 275, 350.\n\n10. Chuang, I. L., Gershenfeld, N. A., and Kubinec, M., 1998, Phys. Rev. Lett.\n80, 3408.\n\n11. Bowden, C. M., Dowling, J. P., and Hotaling, S. P., 1997, Proc. SPIE 1111\nAnnual International Symposium on Aerospace \/ Defence, Sensing, Simulations,\nand Control, Orlando, FL, April.\n\n12. Bowden, C. M., and Miller, J. E., 1967, Phys. Rev. Lett. 19, 4.\n\n13. Marshall, W., and Stuart, R., 1961, Phys. Rev. 123, 2048.\n\n14. DiVincenzo, D. P., 1998, Nature 393, 113.\n\n15. Kane, B. E., 1998, Nature 393, 133.\n\n16. Cleve, R., Ekert, A., Macchiavello, C., and Mosca, M., 1997, quant-ph\/9708016.\n\n17. Scully, M. O., and Zubairy, M. S., 1997, Quantum Optics (Cambridge), Ch. 4, 17.\n\n18. Slichter, C. P., 1963, Principles of Magnetic Resonance (Harper \\& Row), Ch. 7.\n\n19. DiVincenzo, D. P., 1995, Phys. Rev. A 51, 1015.\n\n20. Deutsch, D., and Jozsa, R., 1992, Proc. Roy. Soc. London A 439, 553.\n\n21. Feher, G., and Gere, E. A., 1959, Phys. Rev. 114, 1245.\n\n22. Chiba, M., and Hirai, A., 1972, J. Phys. Soc. Japan 33, 730.\n\n23. Preskill, J., 1998, Proc. Roy. Soc. London A 454, 385.\n\n24. Bowden, C. M., Pethel, S. D., and Bloemer, M. J., in preparation.\n\n25. 
Yablonovitch, E., private communication.\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\subsection{Preliminaries}\nWe begin with the definition of the process that we are going to study. It is a two-component Markov process $(X_t^\\varepsilon,I_t^\\varepsilon)$, to which we refer throughout this work as the \\textit{\"molecular motors model\"} or the \\textit{\"motor proteins model\"}.\n\\begin{definition}[Molecular motors]\\label{contmodel}\nGiven an integer $J$, we consider the state space $E = {{\\mathbb R}}^d \\times \\{1,\\dots,J\\}$. For all $i,j$ in $\\{1,\\dots,J\\}$, let $r_{ij} \\in C^\\infty(\\mathbb{R}^d\\times {{\\mathbb R}}^d; [0,\\infty))$ denote nonnegative smooth maps, $\\psi^i \\in C^\\infty({{\\mathbb R}}^d\\times{{\\mathbb R}}^d)$ a smooth function and $\\nabla \\psi^i$ its gradient with respect to the first variable. We suppose that $\\psi^i$ grows at most linearly in the first component and is periodic in the second one. Finally, given the following operator\n\\begin{equation}\n\\widetilde{A}_\\varepsilon f(x,i)\n:=-\\nabla \\psi^i(\\varepsilon x,x) \\cdot \\nabla_x f(\\cdot,i)(x)\n+\n \\frac{1}{2} \\Delta_x f(\\cdot,i)(x)\n+\n\\sum_{j = 1}^J r_{ij}(\\varepsilon x,x) \n\\left[\nf(x,j) - f(x,i)\n\\right],\n\\label{eq:intro:MP_with_switching_L_varepsilon}\n\\end{equation} \nwe define the $E$-valued Markov process $(X^\\varepsilon_t, I^\\varepsilon_t)|_{t \\geq 0}$ as the solution to the martingale problem corresponding to $\\widetilde{A}_\\varepsilon$. 
More precisely, $(X^\\varepsilon_t, I^\\varepsilon_t)$ is such that for all $f\\in D(\\widetilde{A}_\\varepsilon)$,\n\\begin{equation}\nf(X^\\varepsilon(t), I^\\varepsilon(t))- f(X^\\varepsilon(0),I^\\varepsilon(0)) - \\int_0^t \\widetilde{A}_\\varepsilon f(X^\\varepsilon(s),I^\\varepsilon(s))\\, ds\n\\end{equation}\nis a martingale.\n\n\n\n\\end{definition}\n\n\n\\begin{remark}\nIn our case $r_{ij}$ is regular enough that the martingale problem associated to $\\widetilde{A}_\\varepsilon$ is well posed (see \\cite{EithKurtz} and \\cite{StroVar}).\n\\end{remark}\n\\begin{remark}\nIt is straightforward to see that the above defined process solves the stochastic differential equation \\eqref{sde} given in the introduction.\n\\end{remark}\nWe first study this particular model, for which we prove the large deviations property. Then, building on this model, we derive a theorem for a general class of processes called \\textit{switching Markov processes} (see Section \\ref{general}).\n\nAs mentioned in the introduction, we will work with the rescaled process $(Y_{t}^\\varepsilon, \\bar{I}_t^\\varepsilon)=\\left(\\varepsilon X_{t\/\\varepsilon}^\\varepsilon, I_{t\/\\varepsilon}^{\\varepsilon}\\right)$. \nThen, by the chain rule, the generator becomes\n\\begin{equation} \\label{gen}\nA_\\varepsilon f(x,i) = - \\nabla \\psi^i \\left(x, \\frac{x}{\\varepsilon}\\right) \\cdot \\nabla_x f(\\cdot,i)(x) + \\frac{\\varepsilon}{2} \\Delta_x f(\\cdot,i)(x) + \\frac{1}{\\varepsilon}\\sum_{j=1}^J r_{ij}\\left(x, \\frac{x}{\\varepsilon}\\right)[f(x,j)-f(x,i)]. \n\\end{equation}\nWe will assume in the main theorem that the matrix $(R_{ij}(x))_{ij} := (\\sup_{y \\in \\mathbb{R}^d}r_{ij}(x,y))_{ij}$ is irreducible. 
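The rescaled dynamics generated by \eqref{gen} lends itself to direct simulation: Euler--Maruyama steps for the spatial component, with the internal state switching at rate $r_{ij}\/\varepsilon$. The following sketch is purely illustrative; the choices $d=1$, $J=2$, constant switching rates and the particular $\nabla_x\psi^i$ below are assumptions made for the example, not taken from the model above.

```python
import numpy as np

# Euler-Maruyama sketch of the rescaled switching diffusion (Y_t, I_t):
# drift -grad_x psi^I(Y, Y/eps), noise sqrt(eps) dW_t, and jumps of the
# internal state I at rate r_{IJ}(Y, Y/eps)/eps.  The choices d = 1, J = 2,
# the gradients grad_psi and the constant switching rate are illustrative
# assumptions only.
def simulate(eps=0.05, T=1.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    # grad_x psi^i(x, y): bounded (hence at most linear growth in x)
    # and periodic in the fast variable y, as required by the model.
    grad_psi = (lambda x, y: 1.0 + 0.5 * np.cos(2 * np.pi * y),
                lambda x, y: -1.0 + 0.5 * np.cos(2 * np.pi * y))
    rate = 1.0                      # r_12 = r_21 = 1: an irreducible choice
    y, i = 0.0, 0
    path = [y]
    for _ in range(int(round(T / dt))):
        y += -grad_psi[i](y, y / eps) * dt \
             + np.sqrt(eps * dt) * rng.standard_normal()
        if rng.random() < rate / eps * dt:   # switch with prob ~ r/eps * dt
            i = 1 - i
        path.append(y)
    return np.array(path)

path = simulate()
```

For small $\varepsilon$ the simulated path concentrates around a deterministic trajectory, in line with the law of large numbers discussed below.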
Here we give the rigorous definition.\n\n\\begin{definition}\\label{irreducible:def}\n We say that a matrix $A=(A_{ij}(x))_{i,j\\in \\{1,\\dots,J\\}}$, $x\\in {{\\mathbb R}}^d$, is irreducible if there is no decomposition of $\\{1,\\dots,J\\}$ into two disjoint sets $\\mathcal{J}_1$ and $\\mathcal{J}_2$ such that $A_{ij} = 0$ on $\\mathbb{R}^d$ whenever $i \\in \\mathcal{J}_1$ and $j \\in \\mathcal{J}_2$.\n\\end{definition}\n\nThe main goal of this work is to prove that the spatial component $Y^\\varepsilon$ of the above Markov process satisfies the large deviation principle. Here we give the main definitions; they are stated for a general Polish space $\\rchi$ and will later be applied to, e.g., $\\rchi = \\mathbb{R}^d$ or $\\rchi = C_{\\mathbb{R}^d}([0,\\infty))$.\n\n\\begin{definition}\\label{def:LDP}Let $\\{X_\\varepsilon\\}_{\\varepsilon>0}$ be a sequence of random variables on a Polish space $\\rchi$. Given a function $I : \\rchi \\to [0,\\infty]$, we say that\n\\begin{enumerate}[(i)]\n\\item the function $I$ is a \\textit{good rate function} if the set $\\{x \\,|\\, I(x) \\leq c\\}$ is compact for every $c\\ge0$.\n\\item the sequence $\\{X_\\varepsilon\\}_{\\varepsilon>0}$ satisfies the \\textit{large deviation principle} with good rate function $I$ if for every closed set $A\\subseteq\\rchi$, we have\n\\begin{equation}\n\\limsup_{\\varepsilon\\to0} \\varepsilon\\log \\mathbb{P}[X_\\varepsilon \\in A] \\leq -\\inf_{x\\in A} I(x)\n\\end{equation}\nand, for every open set $U \\subseteq \\rchi$,\n\\begin{equation}\n\\liminf_{\\varepsilon\\to0} \\varepsilon\\log\\mathbb{P}[X_\\varepsilon \\in U] \\geq -\\inf_{x\\in U} I(x).\n\\end{equation}\n\\end{enumerate}\n\\end{definition}\nWe recall the definitions of exponential tightness and the compact containment condition, properties that typically arise in a large deviations context.\n\\begin{definition}[Exponential tightness]\nA sequence of probability measures $\\{P_\\varepsilon\\}$ on 
a Polish space $\\rchi$ is said to be \\textit{exponentially tight} if for each $a>0$, there exists a compact set $K_a \\subset \\rchi$ such that\n\\begin{equation}\n\\limsup_{\\varepsilon \\to 0} \\varepsilon \\log P_\\varepsilon(K_a^c) \\leq -a.\n\\end{equation}\nA sequence $\\{X_\\varepsilon\\}$ of $\\rchi$-valued random variables is exponentially tight if the corresponding\nsequence of distributions is exponentially tight.\n\\end{definition}\nIn what follows we will take $\\rchi=C_{{{\\mathbb R}}^d}[0,\\infty)$ in the above definitions.\n\\begin{definition}\nWe say that the processes $(Z_\\varepsilon(t))$ satisfy the \\textit{exponential compact containment condition} if for all $T >0$ and $a > 0$ there is a compact set $K= K(T,a) \\subseteq E$ such that \n\\begin{equation*}\n\\limsup_{\\varepsilon\\rightarrow 0} \\varepsilon\\log \\mathbb{P}\\left[Z_\\varepsilon(t) \\notin K \\text{ for some } t \\in [0,T] \\right] \\leq - a.\n\\end{equation*}\n\\end{definition}\n\n\\subsection{Statement of the main theorem}\nNow we state the main theorem, which gives sufficient conditions for the large deviation property of the spatial component of the switching process defined in Definition \\ref{contmodel}. \n\\begin{theorem}[Large deviation for the \"molecular motors model\"]\\label{maintheorem}\nLet $(X^\\varepsilon_t, I^\\varepsilon_t)$ be the Markov process of Definition \\ref{contmodel}. Suppose that the matrix $(R_{ij}(x))_{ij} = (\\sup_{y \\in \\mathbb{R}^d}r_{ij}(x,y))_{ij}$ is irreducible. Denote by $Y^\\varepsilon_t=\\varepsilon X^\\varepsilon_{t\/\\varepsilon}$ the rescaled spatial component.\nSuppose further that at time zero, the family of random variables $\\{Y^\\varepsilon(0)\\}_{\\varepsilon > 0}$ satisfies a large deviation principle in $\\mathbb{R}^d$ with good rate function $\\mathcal{I}_0 : {{\\mathbb R}}^d \\rightarrow [0,\\infty]$. 
Then, the spatial component $\\{Y_t^\\varepsilon\\}$ satisfies a large deviation principle in $C_{\\mathbb{R}^d}[0,\\infty)$ with good rate function $\\mathcal{I} : C_{{{\\mathbb R}}^d}[0,\\infty) \\to [0,\\infty]$ given in the integral form \n\\begin{equation}\n\\mathcal{I}(x)=\\begin{cases}\n\\mathcal{I}_0(x(0)) +\\int_0^\\infty\\mathcal{L} \\left(x(t),\\dot{x}(t)\\right) \\, dt &\\quad \\text{if } x\\in AC([0,\\infty); \\mathbb{R}^d),\\\\\n\\infty & \\quad \\text{else},\n\\end{cases}\n\\end{equation}\nwith $\\mathcal{L}(x,v)=\\sup_p\\{p\\cdot v-{{\\mathcal H}}(x,p)\\}$ the Legendre transform of a Hamiltonian ${{\\mathcal H}}(x,p)$ given in variational form by\n\\begin{equation} \\label{eigen:repr}\n\\mathcal{H}(x,p)= \\sup_{\\mu\\in\\mathcal{P}(E^\\prime)}\\left[\\int_{E^\\prime} V_{x,p}(z) \\,d\\mu(z) - I_{x,p}(\\mu)\\right],\n\\end{equation}\nwhere $E^\\prime = \\mathbb{T}^d \\times \\{1, \\dots, J\\}$, \\begin{equation}\n\tV_{x,p} (y,i) = \\frac{1}{2} |p|^2 - p\\cdot \\nabla_x\\psi^i\\left(x,y\\right)\n\\end{equation}\nand the map $I_{x,p} : \\mathcal{P}(E^\\prime) \\rightarrow [0,\\infty]$ is the Donsker--Varadhan function, i.e.\n\\begin{equation}\nI_{x,p}(\\mu)= -\\inf_{\\varphi}\\int_{E^\\prime} e^{-\\varphi} L_{x,p} (e^{\\varphi}) \\, d\\mu,\n\\end{equation}\nwhere the infimum is taken over vectors of functions $\\varphi(\\cdot,i) \\in C^2(\\mathbb{T}^d)$, and $L_{x,p}$ is the operator defined by\n\\begin{equation}\\label{generator}\nL_{x,p} u(z,i) =\\frac{1}{2} \\Delta_z u(z,i) + (p-\\nabla_x \\psi^i(x,z))\\cdot \\nabla_z u(z,i)+\\sum_{j = 1}^J r_{ij}(x,z) \\left[u(z,j) - u(z,i)\\right].\n\\end{equation}\n\\end{theorem}\n\n\\begin{remark}\n$E^\\prime$ captures the periodic behaviour and the internal state. 
In the homogenisation context described in the introduction, $E^\\prime$ is exactly what is being homogenised over, while $L_{x,p}$ describes the dynamics on it.\n\\end{remark}\n\n\\subsection{Law of large numbers and speed of the limit process}\nThe following corollary characterises the limit process.\n\\begin{corollary}\\label{velocity:cor}\nAssume the hypotheses of Theorem \\ref{maintheorem} for the Markov process $(Y_t^\\varepsilon, I_t^\\varepsilon)$. Then, as $\\varepsilon \\to 0$, the spatial component converges almost surely to the path with velocity given by\n\\begin{equation}\n\\partial_t x = \\partial_p \\mathcal{H}(x,0) = - \\int_{E'} \\nabla_x \\psi^i (x,y) \\, d\\mu_x^*(y,i),\n\\end{equation}\nwith $\\mu_x^*$ the unique stationary measure of the operator $L_{x,0}$ given in \\eqref{generator}.\n\\end{corollary}\n\\begin{proof}\nBy Theorem A.2 of \\cite{PelSch}, the spatial component $Y_t^\\varepsilon$ converges almost surely to the set of minimizers of the rate function. More precisely, \n\\begin{equation}\nd(Y_t^\\varepsilon, \\{\\mathcal{I}=0\\})\\to 0 \\qquad \\text{a.s. as $\\varepsilon \\to 0$}\n\\end{equation}\nwhere $\\{\\mathcal{I} = 0\\}=\\{x\\in C_{{{\\mathbb R}}^d}[0,\\infty) \\, :\\, \\mathcal{I}(x)=0\\}$. We now prove that this set is actually a singleton and then characterise the unique element.\n\nTo this end, note that by \\autocite[Theorem 23.5]{Ro70}, $v$ is a minimizer of $\\mathcal{L}$ if and only if $v \\in \\partial_p{{\\mathcal H}}(x,0)$. 
Moreover, by \\autocite[Theorem 4.4.2]{HULe01},\n\\begin{equation}\n\\partial_p {{\\mathcal H}} (x,0)= \\mathrm{co} \\bigcup \\left\\{ \\partial \\left[ \\int_{E^\\prime}V_{x,0}\\, d\\mu - I_{x,0}(\\mu)\\right]\\, : \\, \\mu\\in \\mathcal{P}(E^\\prime) \\ \\text{s.t.} \\ {{\\mathcal H}}(x,0)= \\int_{E^\\prime}V_{x,0}\\, d\\mu - I_{x,0}(\\mu) \\right\\},\n\\end{equation}\nwhere $\\mathrm{co}$ denotes the convex hull of a set and $\\partial \\left[ \\int_{E^\\prime}V_{x,0}\\, d\\mu - I_{x,0}(\\mu)\\right]$ is the subdifferential at $p=0$ of the convex function $p \\mapsto \\int_{E^\\prime}V_{x,p}\\, d\\mu- I_{x,p}(\\mu)$.\n\nWe know that ${{\\mathcal H}}(x,0)=0$ and $V_{x,0}(z)=0$ for all $z\\in E^\\prime$. Then, if $\\mu^*_x$ is the optimal measure for ${{\\mathcal H}}(x,0)$, we have that $$0={{\\mathcal H}}(x,0)= \\int_{E^\\prime} V_{x,0}(z) \\,d\\mu^*_x - I_{x,0}(\\mu^*_x) = I_{x,0}(\\mu^*_x).$$ We can conclude that the optimal $\\mu^*_x$ is the unique stationary measure of $L_{x,0}$ (see Proposition \\ref{prop:stationarymeasure} in the appendix for existence and uniqueness of $\\mu^*_x$). We thus find that $\\partial_p {{\\mathcal H}}(x,0)= \\left\\{\\frac{\\partial}{\\partial p}{{\\mathcal H}}(x,p)\\vert_{p=0}\\right\\}$ and hence $\\mathcal{I}(x)=0 \\iff \\partial_t x=\\frac{\\partial}{\\partial p}\\mathcal{H}(x,0)$ for almost all $t$\nand \n\\begin{align}\n\\partial_t x &=\\frac{\\partial}{\\partial p}\\mathcal{H}(x,0)= \\int_{E'} \\frac{\\partial V_{x,p}(z)}{\\partial p} \\biggl\\vert_{p=0} \\, d\\mu_x^*(z)\\\\\n&=-\\int_{E'} \\nabla_x \\psi(x,z) \\, d\\mu_x^*(z).\n\\end{align}\n\n\\end{proof}\n\n\n\\begin{remark}\nThe above corollary confirms the suggestion of Figure \\ref{limit} that, when there is no dependence on $x$ in the drift, the spatial component is converging to a path with constant speed. 
Indeed, for small $\\varepsilon$, $Y_t^\\varepsilon$ tends to concentrate around a path with a constant velocity $v= \\partial_p{{\\mathcal H}}(0)$.\n\\end{remark}\n\n\\end{section}\n\\section{Connection with Hamilton-Jacobi equations and strategy of proof}\\label{method:sub}\nWe now present a brief overview of the technical aspects of the Hamilton-Jacobi approach, introduced by Feng and Kurtz \\cite{FK}, to path-space large deviations theory for Markov processes, and outline the main steps of the argument.\n\nFeng and Kurtz in \\cite[Theorem 5.15]{FK} used a variation of the projective limit method (\\cite{DawGar}, \\cite{deAcosta}) to prove that the large deviations property can be obtained as a consequence of the large deviations of the finite dimensional distributions and the exponential tightness of the process. By Bryc's theorem and the Markov property, large deviations for the finite dimensional distributions follow from the convergence of the conditional \"moment generating functions\" that form the semigroup \n\\begin{equation} \\label{semigroup}\nV_\\varepsilon(t)f(x)= \\varepsilon \\log \\mathbb{E}\\left[e^{f(X_\\varepsilon(t))\/\\varepsilon}\\vert X(0)=x\\right]=\\varepsilon \\log \\int_E e^{f(y)\/\\varepsilon}\\mathbb{P}_\\varepsilon(t,x,dy),\n\\end{equation}\nwith $\\mathbb{P}_\\varepsilon(t,x,dy)$ the transition probabilities of $X^\\varepsilon_t$. Note that $V_\\varepsilon(t)f = \\varepsilon \\log S_\\varepsilon(t) e^{f \/ \\varepsilon}$, where $S_\\varepsilon$ is the linear semigroup associated to the generator $A_\\varepsilon$.\nComputing $V_\\varepsilon$ and verifying its convergence is usually hard. In analogy to results for linear semigroups and their generators, the convergence of $V_\\varepsilon$ follows from the convergence of the \\textit{nonlinear generators} $H_\\varepsilon$. 
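To make the semigroup concrete, $V_\varepsilon(t)f(x)=\varepsilon \log \mathbb{E}[e^{f(X_\varepsilon(t))\/\varepsilon}]$ can be estimated by Monte Carlo in a toy case. Assume $X_\varepsilon(t) = x + \sqrt{\varepsilon}\,B_t$ (pure diffusion, without drift or switching; an assumption made only for this illustration) and take the linear test function $f(y)=y$; then $\mathbb{E}[e^{B_t\/\sqrt{\varepsilon}}] = e^{t\/(2\varepsilon)}$ gives $V_\varepsilon(t)f(x)= x + t\/2$ exactly, for every $\varepsilon$.

```python
import numpy as np

# Monte Carlo estimate of V_eps(t)f(x) = eps * log E[exp(f(X_eps(t))/eps)]
# in the toy case X_eps(t) = x + sqrt(eps) * B_t with f(y) = y, where the
# exact value is x + t/2 for every eps.  Illustrative sanity check only;
# this is not the switching model.
def V_eps(f, t, x, eps, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    X = x + np.sqrt(eps * t) * rng.standard_normal(n)
    return eps * np.log(np.mean(np.exp(f(X) / eps)))

est = V_eps(lambda y: y, t=1.0, x=0.0, eps=0.5)  # exact value: 0.5
```

Note how such estimators degrade as $\varepsilon \downarrow 0$: the weight $e^{f\/\varepsilon}$ becomes heavy-tailed, which is one more reason to study convergence at the level of the generators $H_\varepsilon$ rather than of $V_\varepsilon$ directly.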
Formally applying the chain rule to $V_\\varepsilon(t)$ in terms of the linear semigroup $S_\\varepsilon(t)$ yields the following definition, which can be put on more rigorous grounds as exhibited in \\cite{FK,Kr21}.\n\n\\begin{definition}[Nonlinear generator]\\label{nonlineargenerators:def}\nLet $A_\\varepsilon$ be the generator of a process $X^\\varepsilon_t$. The \\textit{nonlinear generator} of $X^\\varepsilon_t$ is the map defined on the domain $D(H_\\varepsilon)=\\left\\{f\\in C(E)\\, : \\, e^{f(\\cdot)\/\\varepsilon}\\in D(A_\\varepsilon)\\right\\}$ by\n\\begin{equation}\\label{eq:nonlinear generator}\nH_\\varepsilon f(x) = \\varepsilon e^{-f(x)\/\\varepsilon} A_\\varepsilon e^{f(x)\/\\varepsilon}.\n\\end{equation}\n\\end{definition}\nMore precisely, the problem comes down to two steps. First, one needs to prove the convergence of the generators $H_\\varepsilon \\to H$ in a suitable sense. \nThen, one has to show that the limiting operator generates a semigroup. By the Crandall--Liggett theorem \\cite{CraLigg}, sufficient conditions are the \\textit{range condition} and the \\textit{dissipativity} property.\n\\begin{definition}[Range condition]\nLet $E$ be an arbitrary metric space and $H: D(H) \\subseteq C_b(E) \\to C_b(E)$ a nonlinear operator. We say that $H$ satisfies the range condition if\n\\begin{equation}\n\\exists \\lambda_0>0 \\, : \\, D(H) \\subset \\overline{\\mathcal{R}(I-\\lambda H)} \\qquad \\text{for all $0<\\lambda <\\lambda _0$.}\n\\end{equation}\n\\end{definition}\n\n\\begin{definition}[Dissipative operator]\nWe say that an operator $(H,D(H))$ is dissipative if for all $\\lambda>0$,\n\\begin{equation*}\n\t\\vn{(f_1 - \\lambda Hf_1) - (f_2 - \\lambda H f_2)} \\geq \\vn{f_1 - f_2}\n\\end{equation*}\nfor all $f_1,f_2 \\in \\mathcal{D}(H)$.\n\\end{definition}\n\nThe range condition corresponds to the existence of classical solutions for the equation $(1- \\lambda H)u=h$. 
Hence, we can conclude that, in order to prove large deviations, we need the convergence of the nonlinear generators to a dissipative operator $H$ such that the existence of classical solutions for $(1- \\lambda H)u=h$ holds. However, it is well known that the existence of classical solutions is too strong a condition for this method to work in most cases. As observed by Crandall and Lions in \\cite{CraLio}, the use of \\textit{viscosity solutions} allows one to overcome this problem. These weak solutions are defined in order to create an extension $\\tilde{H}$ of $H$ that automatically satisfies the range condition and that is still dissipative.\nBelow we give the definitions for both single- and multivalued operators.\n\\begin{definition}[Sub- and supersolutions for single valued operators]\nLet $H:\\mathcal{D}(H)\\subseteq C(E)\\rightarrow C(E)$ be a nonlinear operator. Then for $\\lambda>0$ and $h\\in C(E)$, define viscosity sub- and supersolutions of $(1 - \\lambda H) u = h$ as follows:\n\\begin{itemize}\n\\item[(i)] \nWe say that $u:E \\to {{\\mathbb R}}$ is a viscosity subsolution if it is bounded and upper semicontinuous and, for every $f \\in \\mathcal{D}(H)$, there exists a sequence $x_n \\in E$ such that\n\\begin{gather*}\n\\lim_{n \\uparrow \\infty} u(x_n) - f(x_n) = \\sup_x u(x) - f(x), \\\\\n\\limsup_{n \\uparrow \\infty} u(x_n) - \\lambda H f(x_n) - h(x_n) \\leq 0.\n\\end{gather*}\n\\item[(ii)]\nWe say that $v:E\\to{{\\mathbb R}}$ is a viscosity supersolution if it is bounded and lower semicontinuous and, for every $f \\in \\mathcal{D}(H)$, there exists a sequence $x_n \\in E$ such that\n\\begin{gather*}\n\\lim_{n \\uparrow \\infty} v(x_n) - f(x_n) = \\inf_x v(x) - f(x), \\\\\n\\liminf_{n \\uparrow \\infty} v(x_n) - \\lambda Hf(x_n) - h(x_n) \\geq 0.\n\\end{gather*}\n\n\\end{itemize} \nA function $u\\in C(E)$ is called a viscosity solution of $(1 - \\lambda H) u = h$ if it is both a viscosity sub- and supersolution.\n\n\\end{definition}\n\n\\begin{definition}[Sub- 
and supersolutions for multivalued operators]\n\\label{def:viscosity_solution}\nLet $H \\subseteq C(E) \\times C(E\\times E^\\prime)$ be a multivalued operator with domain $\\mathcal{D}(H) \\subseteq C(E)$. Then for $h\\in C(E)$ and $\\lambda>0$, define viscosity solutions of $(1 - \\lambda H) u = h$ as follows:\n\\begin{itemize}\n\\item[(i)] $u:E\\to {{\\mathbb R}}$ is a viscosity subsolution of $(1 - \\lambda H) u = h$ if it is bounded and upper semicontinuous and, for all $(f,g) \\in H$, \nthere exists a sequence $(x_n,z_n) \\in E\\times E'$ such that\n\\begin{gather*}\n\\lim_{n \\uparrow \\infty} u(x_n) - f(x_n) = \\sup_x u(x) - f(x), \\\\\n\\limsup_{n \\uparrow \\infty} u(x_n) - \\lambda g(x_n,z_n) - h(x_n) \\leq 0.\n\\end{gather*}\n\\item[(ii)] $v:E\\to {{\\mathbb R}}$ is a viscosity supersolution of $(1 - \\lambda H) u = h$ if it is bounded and lower semicontinuous and, for all $(f,g) \\in H$, \nthere exists a sequence $(x_n,z_n) \\in E\\times E'$ such that\n\\begin{gather*}\n\\lim_{n \\uparrow \\infty} v(x_n) - f(x_n) = \\inf_x v(x) - f(x), \\\\\n\\liminf_{n \\uparrow \\infty} v(x_n) - \\lambda g(x_n,z_n) - h(x_n) \\geq 0.\n\\end{gather*}\n\\end{itemize}\n\nA function $u\\in C(E)$ is called a viscosity solution of $(1 - \\lambda H) u = h$ if it is both a viscosity sub- and supersolution.\n\n\\end{definition}\n\\begin{remark} \\label{remark:existence of optimizers}\nConsider the definition of subsolutions. If the test function $f \\in \\mathcal{D}(H)$ has compact sublevel sets, then instead of working with a sequence $x_n$ there exists $x_0 \\in E$ such that\n\\begin{gather*}\nu(x_0) - f(x_0) = \\sup_x u(x) - f(x), \\\\\nu(x_0) - \\lambda H f(x_0) - h(x_0) \\leq 0.\n\\end{gather*}\nA similar simplification holds in the case of supersolutions.\n\\end{remark}\n\nIn the classical context, the range condition, combined with the dissipativity of the operator, can be shown to imply unique solvability of the equation $u - \\lambda H u = h$. 
However, for viscosity solutions this argument does not work anymore. The main reason is that viscosity solutions are in general not in the domain of $H$. One way to address this issue is to assume that the following \\textit{comparison principle} (which implies uniqueness) holds.\n \\begin{definition}[Comparison Principle]\nWe say that a Hamilton-Jacobi equation $(1 - \\lambda H) u = h$ satisfies the comparison principle if for any viscosity subsolution $u$ and viscosity supersolution $v$, $u\\leq v$ holds on $E$.\n\nFor two operators $H_\\dagger, H_\\ddagger \\subseteq C(E) \\times C(E \\times E^\\prime)$, we say that the comparison principle holds if for any viscosity subsolution $u$ of $(1 - \\lambda H_\\dagger) u = h$ and viscosity supersolution $v$ of $(1 - \\lambda H_\\ddagger) u = h$, $u\\leq v$ holds on $E$.\n\\end{definition}\n\nThe theory above was made rigorous in \\cite{FK}, \\cite{Kr21}. We present the key result in our context and notation.\n\\begin{theorem}[Adaptation of Theorem 7.18 of \\cite{FK} to our context]\\label{FengKurtz:th}\nLet $(X_\\varepsilon,I_\\varepsilon)$ be a Markov process with generator $A_\\varepsilon$ and nonlinear generator $H_\\varepsilon$ as in Definition \\ref{nonlineargenerators:def}. 
Consider the semigroup $V_\\varepsilon$ defined in \\eqref{semigroup} and suppose the following \n\\begin{enumerate}[(i)]\n\\item there exists $H$ such that $H_\\varepsilon$ converges to $H$ in the sense of Definition \\ref{def:convergence},\n\\item for all $\\lambda>0$ and $h \\in C(E)$, the comparison principle holds for $(1-\\lambda H)u=h$,\n\\item $X_\\varepsilon$ is exponentially tight,\n\\item $X_\\varepsilon(0)$ satisfies a large deviation principle with good rate function $I_0$.\n\\end{enumerate}\nThen, there exists a unique viscosity solution $R(\\lambda)h$ to $(1 - \\lambda H ) f = h$ and a unique semigroup $V(t):C_b(E)\\to C_b(E)$ such that\n\\begin{enumerate}\n\\item $\\lim_{m\\to \\infty} R(\\frac{t}{m})^m f(x) = V(t)f(x)$ for every $f\\in C_b(E), t\\geq 0$ and every $x\\in E$,\n\\item $V_\\varepsilon$ converges to $V$ in the sense that for any sequence of functions $f_\\varepsilon \\in C(E)$ and $f\\in C(E)$, \n\\begin{equation}\n\\text{if \\quad $\\Vert f_\\varepsilon - f \\Vert_E \\xrightarrow[]{\\text{$\\varepsilon \\to 0$}} 0$ \\quad then \\quad $\\Vert V_\\varepsilon(t) f_\\varepsilon - V(t) f \\Vert_E \\xrightarrow[]{\\text{$\\varepsilon \\to 0$}}0$.}\n\\end{equation}\n\\end{enumerate}\nMoreover, $X_\\varepsilon$ satisfies the large deviation principle with rate function $I: C_E [0,\\infty) \\to [0,\\infty]$ given by\n\\begin{equation} \\label{rate:eq}\nI(x)= I_0(x(0)) + \\sup_{k\\in \\mathbb{N}} \\sup_{(t_1,\\dots,t_k)} \\sum_{i=1}^k I_{t_i-t_{i-1}}(x(t_i)\\vert x(t_{i-1}))\n\\end{equation}\nwith $I_t(z\\vert y)=\\sup_{f\\in C(E)}[f(z)-V(t)f(y)]$, where the supremum above is taken over all finite tuples $t_0=0 < t_1 < t_2 <\\dots< t_k$.\n\\end{theorem}\n\n\n\n\\section{Proof of the main theorem}\\label{proof}\n\n\nUsing the discussion of the previous section and Theorem \\ref{FengKurtz:th}, we can prove Theorem \\ref{maintheorem}.\n\\begin{proof}[Proof of Theorem \\ref{maintheorem}]\nWe claim the following five 
facts:\n\\begin{enumerate}\n\\item The nonlinear generators $H_\\varepsilon f=\\varepsilon e^{-f\/\\varepsilon} A_\\varepsilon e^{f\/\\varepsilon}$ of $X^\\varepsilon_t$ converge to a multivalued operator\n$H:=\\left\\{(f, H_{f,\\varphi}) \\, : \\,f \\in C^2(\\mathbb{R}^d), \\; H_{f, \\varphi} \\in C( \\mathbb{R}^d\\times E^\\prime) \\text{ and }\\varphi \\in C^2(E^\\prime)\\right\\}$,\n\\item there exists $\\tilde{\\varphi}$ such that $H_{f, \\tilde{\\varphi}}(x,z)=H_{\\tilde{\\varphi}}(x,p,z)=\\mathcal{H}(x,p)$ for all $z \\in E'$ and $p=\\nabla f(x)$,\n\\item the comparison principle for $ (1- \\lambda H)u= h$ holds,\n\\item $X_\\varepsilon$ verifies the exponential tightness property,\n\\item the rate function \\eqref{rate:eq} can be represented in the following integral form\n\\begin{equation}\n\\mathcal{I}(x)=\\mathcal{I}_0(x(0)) +\\int_0^\\infty\\mathcal{L} \\left(x(t),\\dot{x}(t)\\right) \\, dt\n\\end{equation} \nwhere $\\mathcal{L}(x,v)=\\sup_{p \\in \\mathbb{R}^d}\\left[p \\cdot v - \\mathcal{H}(x,p)\\right]$ is the Legendre transform of $\\mathcal{H}(x,p)$ in \\eqref{eigen:repr}.\n\\end{enumerate}\nWe prove the above claims respectively in Propositions \\ref{convergence:th}, \\ref{hamiltonian:prop}, \\ref{theorem:comparison}, \\ref{proposition:exponential_compact_containment}, \\ref{ratefunction:th} in the following subsections.\nOnce the above facts are proved, we can apply Theorem \\ref{FengKurtz:th} and the required large deviation property follows.\n\\end{proof}\n\\subsection{The convergence of generators and an eigenvalue problem}\\label{eigenvalueProb}\nThe first step of the proof of large deviations is based on operator convergence. Since the process and its limit do not live in the same space, we cannot work with the usual definition. 
In the following, we introduce a new definition of limit for functions and multivalued operators on different spaces.\n\\begin{definition}\nLet $f_\\varepsilon \\in C({{\\mathbb R}}^d \\times \\{1, \\dots, J\\})$ and $f\\in C^2({{\\mathbb R}}^d)$. We say that $LIM f_\\varepsilon=f$ if \n\\begin{equation}\n\\Vert f_\\varepsilon - f \\circ \\eta_\\varepsilon \\Vert_{{{\\mathbb R}}^d\\times \\{1,\\dots,J\\}} = \\sup_{{{\\mathbb R}}^d\\times \\{1,\\dots,J\\}} \\vert f_\\varepsilon - f \\circ \\eta_\\varepsilon \\vert \\to 0 \\quad \\text{as $\\varepsilon \\to 0$},\n\\end{equation}\nwhere $\\eta_\\varepsilon: {{\\mathbb R}}^d \\times \\{1,\\dots,J\\} \\to {{\\mathbb R}}^d$ is the projection \n$$ \\eta_\\varepsilon (x,i)= x.$$\n\\end{definition}\n\\begin{definition}[extended limit of multivalued operators]\\label{def:convergence}\nLet $H_\\varepsilon\\subseteq C({{\\mathbb R}}^d\\times\\{1,\\dots,J\\}) \\times C({{\\mathbb R}}^d\\times\\{1,\\dots,J\\})$. Define $ex - LIM H_\\varepsilon$ as the set\n\\begin{multline}\nex - LIM H_\\varepsilon=\\\\\n=\\bigg \\{ (f,H)\\in C^2({{\\mathbb R}}^d)\\times C({{\\mathbb R}}^d \\times \\mathbb T^d\\times \\{1,\\dots,J\\})\\lvert \\, \\exists f_\\varepsilon\\in D(H_\\varepsilon) \\,:\\, f=LIM f_\\varepsilon \\\\\n\\text{and} \\, \\Vert H\\circ \\eta'_\\varepsilon - H_\\varepsilon f_\\varepsilon \\Vert_{{{\\mathbb R}}^d\\times\\{1,\\dots,J\\}}\\to 0 \\bigg \\},\n\\end{multline}\nwhere $\\eta'_\\varepsilon : {{\\mathbb R}}^d \\times \\{1,\\dots,J\\} \\to {{\\mathbb R}}^d \\times \\mathbb T^d \\times \\{1, \\dots, J\\}$ is the function $\\eta'_\\varepsilon(x,i)=\\left(x, \\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}, i \\right)$.\n\\end{definition}\nThe following basic example illustrates the intuition behind the definitions above. 
\n\\textbf{Example:}\nGiven $H_\\varepsilon f (x,i) = f'(x) + \\varepsilon \\Delta f (x)$, define \n$$f_\\varepsilon (x,i) = f(x) + \\varepsilon \\varphi \\left(\\frac{x}{\\varepsilon}, i \\right) \\qquad \\text{and} \\qquad H(x,y,i) = \\Delta \\varphi ^i (y);$$\nthen $f = LIM f_\\varepsilon$ and $(f,H) \\in ex - LIM H_\\varepsilon$.\n\\begin{proposition}[Convergence of nonlinear generator]\\label{convergence:th}\nLet $E={{\\mathbb R}}^d \\times \\{1,\\dots,J\\}$, let $(X_t^\\varepsilon, I_t^\\varepsilon)$ be the Markov process of Definition \\ref{contmodel} with rescaled generator $A_\\varepsilon$ from \\eqref{gen} and let $H_\\varepsilon$ be the nonlinear generator of Definition \\ref{nonlineargenerators:def}. Then, the multivalued operator $H\\subseteq C({{\\mathbb R}}^d) \\times C({{\\mathbb R}}^d \\times \\mathbb T^d\\times \\{1,\\dots, J\\})$ given by\n$$ H:= \\left \\{ (f, H_{f,\\varphi}) \\, : \\, f\\in C^2({{\\mathbb R}}^d), H_{f,\\varphi} \\in C({{\\mathbb R}}^d\\times E') \\, \\text{and}\\, \\varphi \\in C^2(E')\\right\\},$$\nwhere the images $H_{f,\\varphi} : \\mathbb{R}^d \\times \\mathbb T^d \\times \\{1, \\dots, J\\} \\to \\mathbb{R}$ are\n\\begin{multline}\nH_{f,\\varphi}(x,y,i):= \\frac{1}{2}\\Delta_y\\varphi^i(y) + \\frac{1}{2} \\big|\\nabla f(x)+\\nabla_y\\varphi^i(y) \\big|^2 - \\nabla_x\\psi^i(x,y)\\cdot(\\nabla f(x) + \\nabla_y\\varphi^i(y))\\\\ + \\sum_{j = 1}^J r_{ij}(x,y)\\left[e^{\\varphi(y, j)-\\varphi(y, i)}-1\\right],\n\\end{multline}\nis such that $H\\subseteq ex - LIM H_\\varepsilon$. 
Moreover, for all $\\varphi$ parametrising the images we have a map\n$\nH_\\varphi : \\mathbb{R}^d \\times \\mathbb{R}^d \\times \\mathbb{T}^d\\times \\{1,\\dots,J\\} \\to \\mathbb{R} $\nsuch that for all\n $f \\in \\mathcal{D}(H)$ and any $x \\in \\mathbb{R}^d$, the images $H_{f,\\varphi}$ of $H$ are given by\n$$\nH_{f,\\varphi}(x,z^\\prime) = H_\\varphi(x ,\\nabla f(x),z^\\prime), \\quad \\text{ for all } z^\\prime \\in \\mathbb{T}^d\\times \\{1,\\dots,J\\}. $$\n\\end{proposition}\n\\begin{proof}\nWe want to prove that $H_\\varepsilon$ converges to $H$ in the sense of Definition \\ref{def:convergence}.\nTo this end, note that, by the definitions of $A_\\varepsilon$ and $H_\\varepsilon$, we have \n\\begin{align}\nH_\\varepsilon f(x,i)= \\frac{\\varepsilon}{2} \\Delta_x f^i(x) +\\frac{1}{2} \\left| \\nabla_x f^i(x)\\right|^2 - &\\nabla\\psi^i\\left(x,\\frac{x}{\\varepsilon}\\right) \\cdot \\nabla_x f^i(x)\\\\& + \\sum_{j=1}^J r_{ij} \\left(x,\\frac{x}{\\varepsilon}\\right)\\left(e^{(f(x,j)-f(x,i))\/\\varepsilon} -1\\right).\n\\end{align}\nChoosing functions $f_\\varepsilon(x,i)$ of the form\n$$\nf_\\varepsilon(x,i)= f(x)+\\varepsilon\\,\\varphi\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}, i\\right)=f\\circ \\eta_\\varepsilon (x,i) +\\varepsilon\\,\\varphi\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}, i\\right),$$\n\t%\nwe obtain\n\\begin{multline*}\nH_\\varepsilon( f_\\varepsilon)(x,i)= \\frac{\\varepsilon}{2}\\Delta f(x) + \\frac{1}{2}\\Delta_y\\varphi^i\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}\\right) + \\frac{1}{2}\\big|\\nabla f(x)+\\nabla_y\\varphi^i\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}\\right)\\big|^2 \\\\ -\n\\nabla\\psi^i\\left(x,x\/\\varepsilon \\right) \\cdot \\left(\\nabla f(x)+\\nabla_y\\varphi^i\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}\\right)\\right)\n+\n\\sum_{j = 1}^J r_{ij}\\left(x, x\/\\varepsilon 
\\right)\n\\left[\ne^{\\varphi\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}, j\\right)-\\varphi\\left(\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d}, i\\right)}-1\\right],\n\\end{multline*}\nwhere $\\nabla_y$ and $\\Delta_y$ denote the gradient and Laplacian with respect to the variable $y=x\/\\varepsilon$.\nWe can conclude that\n\\begin{equation}\n\\|f\\circ\\eta_\\varepsilon - f_\\varepsilon\\|_{E}=\\|f(x) -f_\\varepsilon (x, i) \\|_{E}=\\varepsilon\\|\\varphi\\|_{E^\\prime} \\to 0 \\, \\text{as $\\varepsilon \\to 0$},\n\\end{equation}\nand\n\n\\begin{align*}\n\\|H_{f,\\varphi}\\circ\\eta_\\varepsilon^\\prime-H_\\varepsilon f_\\varepsilon\\|_{E} &=\n\\sup_{(x,i)\\in E}\\bigg{|} H_{f,\\varphi}\\left(x,\\left[\\frac{x}{\\varepsilon}\\right]_{{{\\mathbb Z}}^d},i\\right)-H_\\varepsilon f_\\varepsilon(x,i)\\bigg{|}\\\\\n\t&= \\frac{\\varepsilon}{2}\\sup_{(x,i)\\in E}|\\Delta f(x)|\n\t\t\\xrightarrow{\\varepsilon\\rightarrow 0}0.\n\\end{align*}\n\\end{proof}\n\\begin{remark}\\label{limit_repr}\t\nNote that for all $f \\in D(H)$ the image $H_\\varphi$ has the representation\n$$H_\\varphi(x,p,z)=e^{- \\varphi(z)}\\left[B_{x,p} +V_{x,p}+R_x\\right]e^\\varphi (z)$$\nwith $p=\\nabla f(x)$ and \n\\begin{align*}\n(B_{x,p} h)(y,i) &:= \\frac{1}{2} \\Delta_y h(y,i) + \\left( p - \\nabla_x \\psi^i(x,y)\\right)\\cdot \\nabla_y h(y,i) \\\\\n(V_{x,p} h)(y,i)&:= \\left(\\frac{1}{2} |p|^2 - p\\cdot \\nabla_x\\psi^i(x,y)\\right) h(y,i), \\\\\n(R_x\\,h)(y,i)&:=\\sum_{j = 1}^J r_{ij}(x,y) \\left[h(y,j) - h(y,i)\\right].\n\\end{align*}\n\\end{remark}\n\\begin{proposition}[Existence of an eigenvalue]\\label{eigenvalue:th}\nLet $E'=\\mathbb T^d \\times \\{1, \\dots, J\\}$ and let $H_\\varphi :{{\\mathbb R}}^d\\times{{\\mathbb R}}^d \\times E' \\to {{\\mathbb R}}$ denote the images of $H$ given in Proposition \\ref{convergence:th}. 
Then, for all $x,p\\in {{\\mathbb R}}^d$ there exist an eigenfunction $g_{x,p} \\in C^2(E^\\prime)$ with $g_{x,p}^i>0$ and an eigenvalue $\\lambda_{x,p}$ such that\n\\begin{equation}\n\\left[B_{x,p} +V_{x,p}+R_x\\right]g_{x,p}=\\lambda_{x,p}g_{x,p}.\n\\end{equation}\n\\end{proposition}\t\n\n\\begin{proof}\nWe want to solve the following eigenvalue problem\n\\begin{equation}\\label{problem}\n\\left[L_{x,p} +R_x\\right]g_{x,p} = \\lambda_{x,p} g_{x,p}\n\\end{equation}\nwhere $L_{x,p}$ is a diagonal matrix with $(L_{x,p})_{ii}=(B_{x,p})_{i}+(V_{x,p})_i$, and where $(R_x)_{ij}=r_{ij}$ for $i\\neq j$ and $(R_x)_{ii}=-\\sum_{j \\neq i} r_{ij}$.\\\\\nGuido Sweers showed (see \\cite{Sweers1992}) that there exist $\\gamma_{x,p}$ and $g_{x,p} >0$ such that \n\\begin{equation}\n\\left[-L_{x,p} -R_x\\right]g_{x,p} = \\gamma_{x,p} g_{x,p}\n\\end{equation}\nwhenever $L_{x,p}$ is a diagonal matrix with $(L_{x,p})_{ii}$ of the type $-\\Delta + p\\cdot\\nabla +c$. Hence, in our case, the equality \\eqref{problem} is verified by taking $\\lambda_{x,p}=-\\gamma_{x,p}$.\n\\end{proof}\nIn the next proposition we prove that, for a suitable choice of $\\varphi$, the image $H_{\\varphi}$ depends only on $x$ and $p$. \n\\begin{proposition}\\label{hamiltonian:prop}\nConsider the same setting of Proposition \\ref{eigenvalue:th} and let ${{\\mathcal H}}(x,p)$ be the constant depending on $p$ and $x$ given in \\eqref{eigen:repr}.\nThen, for all $x,p \\in {{\\mathbb R}}^d$ there exists a function $\\varphi_{x,p} \\in C^2(E')$ such that \n\\begin{equation}\\label{hamiltonian}\nH_{\\varphi_{x,p}}(x,p,z)= {{\\mathcal H}}(x,p) \\qquad \\text{for all $z \\in E'$}.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nBy Proposition \\ref{eigenvalue:th}, there exist a function $g_{x,p}$ and a constant $\\lambda_{x,p}$ that satisfy the eigenvalue problem for the operator $L_{x,p} +R_{x}$ defined in \\eqref{problem}.
By the variational representation established by Donsker and Varadhan in \\cite{DV2}, the eigenvalue is equal to the constant ${{\\mathcal H}}(x,p)$ defined in \\eqref{eigen:repr}. Then, equality \\eqref{hamiltonian} follows from Remark \\ref{limit_repr} and Proposition \\ref{eigenvalue:th} by choosing $\\varphi_{x,p}= \\log g_{x,p}$.\n\\end{proof}\n\\subsection{Regularity of the Hamiltonian}\\label{sub:regularity}\nBefore proving the comparison principle, we first show that the map $p \\mapsto \\mathcal{H}(x,p)$, constructed out of the eigenvalue problem in Propositions \\ref{eigenvalue:th} and \\ref{hamiltonian:prop}, is convex in $p$, coercive in $p$ uniformly with respect to $x$, and continuous. \n\\begin{proposition}[Convexity and Coercivity of $\\mathcal{H}$]\\label{conv:th}\nThe map ${{\\mathcal H}}: (x,p)\\mapsto\\mathcal{H}(x,p)$ in \\eqref{eigen:repr} is convex in $p$ and coercive in $p$ uniformly with respect to $x$. Precisely, \n\\begin{equation}\\label{coercivity:eq}\n\\lim _{|p| \\to \\infty} \\inf_{x\\in K} \\mathcal{H}(x,p) = \\infty\n\\end{equation}\nfor every compact set $K$.
Moreover, ${{\\mathcal H}}(x,0)=0$ for all $x \\in {{\\mathbb R}}^d$.\n\\end{proposition}\n\n\\begin{proof}\nBy Proposition \\ref{hamiltonian:prop} the eigenvalue $\\mathcal{H}(x,p)$ admits the representation\n\\begin{align*}\n\\mathcal{H}(x,p)\n&=\n- \\sup_{g > 0} \\inf_{z^\\prime \\in E^\\prime} \n\\left\\{ \n\\frac{1}{g(z^\\prime)}\\left[\n(-B_{x,p} - V_{x,p} - R_x)g\n\\right](z^\\prime) \n\\right\\} \\\\\n&=\n\\inf_{g > 0} \\sup_{z^\\prime \\in E^\\prime}\n\\left\\{\n\\frac{1}{g(z^\\prime)}\n\\left[\n(B_{x,p} + V_{x,p} + R_x)g\n\\right](z^\\prime)\n\\right\\} \\\\\n&=\n\\inf_{\\varphi} \\sup_{z^\\prime \\in E^\\prime}\n\\left\\{\ne^{-\\varphi(z^\\prime)}\n\\left[(B_{x,p} + V_{x,p} + R_x)e^{\\varphi}\n\\right] (z^\\prime)\n\\right\\}\n=:\n\\inf_{\\varphi} \\sup_{z^\\prime \\in E^\\prime} F(x,p,\\varphi)(z^\\prime),\n\\end{align*}\nwhere the map $F$ is given by\n\\begin{align*}\nF(x,p,\\varphi)(y,i)\n=\n\\frac{1}{2}\\Delta\\varphi^i(y)\n+\n\\frac{1}{2} |\\nabla\\varphi^i(y)+p|^2\n&-\n\\nabla_x\\psi^i(x,y)(\\nabla\\varphi^i(y)+p)\n\\\\&+\n\\sum_{j = 1}^J r_{ij}(x,y)\n\\left[ e^{\\varphi^j(y)-\\varphi^i(y)}-1 \\right].\n\\end{align*}\nNote that $F$ is jointly convex in $p$ and $\\varphi$. 
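This joint convexity, and the convexity of $\mathcal{H}(x,\cdot)$ that it implies, can be checked term by term; the following sketch (a standard verification, not spelled out in the original argument) uses only the display above:

```latex
\begin{itemize}
\item $\varphi \mapsto \tfrac{1}{2}\Delta\varphi^i(y)$ is linear;
\item $(p,\varphi) \mapsto \tfrac{1}{2}\,\big|\nabla\varphi^i(y)+p\big|^2$ is the convex map
      $q \mapsto \tfrac{1}{2}|q|^2$ composed with the affine map
      $(p,\varphi) \mapsto \nabla\varphi^i(y)+p$, hence jointly convex;
\item $(p,\varphi) \mapsto -\nabla_x\psi^i(x,y)\,\big(\nabla\varphi^i(y)+p\big)$ is affine;
\item $\varphi \mapsto r_{ij}(x,y)\,\big[e^{\varphi^j(y)-\varphi^i(y)}-1\big]$ is a nonnegative
      multiple of the convex exponential composed with an affine map, hence convex.
\end{itemize}
Consequently $\sup_{z^\prime \in E^\prime} F(x,\cdot,\cdot)(z^\prime)$ is jointly convex in
$(p,\varphi)$, and $p \mapsto \mathcal{H}(x,p)$, being a partial minimization over $\varphi$
of a jointly convex function, is convex.
```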
\nRegarding coercivity of $\\mathcal{H}(x,p)$, we isolate the $p^2$ term in $V_{x,p}$, to obtain\n\\begin{align*}\n\\mathcal{H}(x,p)\n&=\n\\frac{p^2}{2}\n+\n\\inf_{\\varphi}\\sup_{E^\\prime}\n\\left\\{\ne^{-\\varphi}\n\\left[\nB_{x,p} - p\\cdot\\nabla_x \\psi + R_x\n\\right]e^{\\varphi}\n\\right\\}\n=:\n\\frac{p^2}{2}+\\inf_{\\varphi}\\Gamma(x,p,\\varphi).\n\\end{align*}\nAny $\\varphi \\in C^2(E^\\prime)$ attains its minimum at some point $z_m=(y_m,i_m)$ of the compact set $E^\\prime$, and evaluating at $z_m$ yields the uniform lower bound\n\\begin{align*}\n\\Gamma(x,p,\\varphi) \n&=\\sup_{z^\\prime\\in E'} \\left\\{e^{-\\varphi(z^\\prime)}\\left[ B_{x,p} -p\\cdot \\nabla_x\\psi +R_x\\right]e^{\\varphi}(z^\\prime)\\right\\} \\\\\n&\\geq\ne^{-\\varphi(z_m)}\\left[ B_{x,p} -p\\cdot \\nabla_x\\psi +R_x\\right]e^{\\varphi}(z_m) \\\\\n&=\n\\underbrace{\\frac{1}{2} \\Delta_y \\varphi(y_m,i_m)}_{ \\displaystyle \\geq 0} \n+\n\\frac{1}{2}|\\underbrace{\\nabla_y \\varphi(y_m,i_m)}_{\\displaystyle = 0}|^2\n+ (p-\n\\nabla_x \\psi^{i_m}(x,y_m))\\cdot \\underbrace{\\nabla_y \\varphi(y_m,i_m)}_{\\displaystyle = 0}\\\\\n&+\n\\sum_{j\\neq i_m} r_{i_m j}(x,y_m) \n\\underbrace{\\left[ e^{\\varphi(y_m,j) - \\varphi(y_m,i_m)} - 1 \\right]}_{\\displaystyle \\geq 0} - p \\cdot \\nabla_x\\psi^{i_m}(x,y_m)\n\\geq\n- p \\cdot \\nabla_x\\psi^{i_m}(x,y_m).\n\\end{align*}\nUsing the lower bound $\\Gamma(x,p,\\varphi)\\geq -p\\cdot \\nabla_x\\psi^{i_m} (x,y_m)\\geq \\inf_{E'} (-p \\cdot \\nabla_x\\psi)$ and Young's inequality $p\\cdot v\\leq \\frac{1}{4}p^2+|v|^2$, it follows that, for any compact set $K$,\n\\begin{align*}\n\\inf_{x\\in K} \\mathcal{H}(x,p)\n&\\geq\n\\frac{p^2}{2} - \\sup_{x\\in K}\\sup_{E^\\prime} (p\\cdot \\nabla _x\\psi^i(x,y))\n\\geq\n\\frac{1}{4}p^2 - \\sup_{x\\in K}\\sup_{E^\\prime}|\\nabla_x\\psi^i(x,y)|^2 \\xrightarrow{|p|\\rightarrow \\infty} \\infty.\n\\end{align*}\nRegarding ${{\\mathcal H}}(x,0)=0$, note that the bound above with $p=0$ gives $\\Gamma(x,0,\\varphi)\\geq0$ for all $x$ and $\\varphi$. Hence we have the first inequality ${{\\mathcal H}}(x,0)= \\inf_\\varphi \\Gamma(x,0,\\varphi)\\geq 0$.
For the opposite inequality we choose the constant function $\\varphi = (1,\\dots,1)$ in the representation of ${{\\mathcal H}}$: every term of $F(x,0,\\varphi)$ then vanishes, so that ${{\\mathcal H}}(x,0)\\leq 0$.\n\\end{proof}\n\n\\begin{proposition}[Continuity of $\\mathcal{H}$]\\label{continuity-th}\nThe map ${{\\mathcal H}}:(x,p)\\in {{\\mathbb R}}^d\\times{{\\mathbb R}}^d\\to {{\\mathcal H}}(x,p)\\in {{\\mathbb R}}$ is continuous.\n\\end{proposition}\n\nWe will prove the continuity of ${{\\mathcal H}}$ by showing that it is lower and upper semicontinuous. For that, we need the following auxiliary results. In particular, for the lower semicontinuity we will make use of $\\Gamma$--convergence, in the sense of the following lemma, which we prove in a general setting. Later, we will apply it to $\\mathcal{J}(x,p,\\theta)=I_{x,p}(\\theta)$.\n\n\\begin{lemma}[$\\Gamma$-convergence]\\label{gammaConv}\nGiven two sets $U,V\\subseteq {{\\mathbb R}}^d$ and a constant $M\\geq0$ we define $\\Theta_{U,V,M}$ as\n$$\\Theta_{U,V,M} = \\bigcup_{x\\in U, p\\in V} \\{\\theta \\in \\Theta \\vert \\mathcal{J}(x,p,\\theta)\\leq M \\}.$$\nLet $\\mathcal{J}:{{\\mathbb R}}^d \\times {{\\mathbb R}}^d \\times\\Theta\\to[0,\\infty]$ satisfy the following assumptions:\n\\begin{enumerate}[(i)]\t\n\\item \\label{item:assumption:I1} \n The map $(x,p,\\theta) \\mapsto \\mathcal{J}(x,p,\\theta)$ is lower semi-continuous on ${{\\mathbb R}}^d \\times {{\\mathbb R}}^d\\times\\Theta$.\n \n\n\\item \\label{item:assumption:I4}\nFor every fixed $x$, $p$ and $M\\geq 0$, there exist open and bounded neighbourhoods $U_x$ of $x$ and $U_p$ of $p$ and a constant $M'$ such that \n$$ \\mathcal{J}(y,q,\\theta) \\leq M' \\qquad \\text{for all $y \\in U_x$, $q \\in U_p$ and $\\theta \\in \\Theta_{\\{x\\},\\{p\\},M}$.}$$\n\\item \\label{item:assumption:I5}For all compact sets $K_1 \\subseteq {{\\mathbb R}}^d$ and $K_2 \\subseteq {{\\mathbb R}}^d$ and each $M \\geq 0$ the collection of functions $\\{\\mathcal{J}(\\cdot,\\cdot, \\theta)\\}_{\\theta \\in \\Theta_{K_1,K_2,M}}$ is 
equi-continuous.\n\t\n\\end{enumerate}\n\n\n\nThen if $x_n\\to x$ and $p_n \\to p$, the functionals $\\mathcal{J}_n$ defined by\n\t\n\\begin{equation*}\n\\mathcal{J}_n(\\theta) := \\mathcal{J}(x_n,p_n,\\theta)\n\\end{equation*}\n\t\nconverge in the $\\Gamma$-sense to $\\mathcal{J}_\\infty(\\theta) := \\mathcal{J}(x,p,\\theta)$. That is:\n\\begin{enumerate}\n\\item If $x_n \\rightarrow x$, $p_n \\to p$ and $\\theta_n \\rightarrow \\theta$, then $\\liminf_{n\\to\\infty} \\mathcal{J}(x_n,p_n,\\theta_n) \\geq \\mathcal{J}(x,p,\\theta)$,\n\\item For $x_n \\rightarrow x$ and $p_n \\to p$ and all $\\theta \\in \\Theta$ there are controls $\\theta_n \\in \\Theta$ such that $\\theta_n \\rightarrow \\theta$ and $\\limsup_{n\\to\\infty} \\mathcal{J}(x_n,p_n,\\theta_n) \\leq \\mathcal{J}(x,p,\\theta)$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nLet $x_n\\to x$ and $p_n \\to p$ in ${{\\mathbb R}}^d$. If $\\theta_n\\to \\theta$, then by lower semicontinuity~\\ref{item:assumption:I1},\n\\begin{equation*}\n\\liminf_{n\\to\\infty}\\mathcal{J}(x_n,p_n,\\theta_n) \\geq \\mathcal{J}(x,p,\\theta).\n\\end{equation*}\nFor the $\\text{lim-sup}$ bound, let $\\theta\\in\\Theta$. If $\\mathcal{J}(x,p,\\theta)=\\infty$, there is nothing to prove. Thus suppose that $\\mathcal{J}(x,p,\\theta)$ is finite, i.e., $\\theta \\in \\Theta_{\\{x\\},\\{p\\},M}$ for some $M$. Then, by~\\ref{item:assumption:I4}, there exist a bounded neighborhood $U_x$ of $x$, a bounded neighborhood $U_p$ of $p$ and a constant $M' $ such that for any $y\\in U_x$ and $q\\in U_p$,\n\\begin{equation*}\n\\mathcal{J}(y,q,\\theta) \\leq M'.\n\\end{equation*}\nSince $x_n\\to x$ and $p_n \\to p$, the sequences $x_n$ and $p_n$ are, for $n$ large, contained in $U_x$ and $U_p$, respectively. Taking the constant sequence $\\theta_n:=\\theta$, we thus get that $\\mathcal{J}(x_n,p_n,\\theta_n) \\leq M'$ for all $n$ large enough. 
By~\\ref{item:assumption:I5}, the family of functions $\\{\\mathcal{J}(\\cdot,\\cdot, \\theta)\\}_{\\theta \\in \\Theta_{\\bar{U}_{x},\\bar{U}_p,M'}}$ is equi-continuous, and hence\n\\begin{equation*}\n\\lim_{n\\to\\infty}|\\mathcal{J}(x_n,p_n,\\theta_n)-\\mathcal{J}(x,p,\\theta)| = 0,\n\\end{equation*}\nand the $\\text{lim-sup}$ bound follows.\n\\end{proof}\n\nWe can now prove the $\\Gamma$-convergence result for the function $I_{x,p}$ in \\eqref{eigen:repr}.\n\\begin{proposition}[$\\Gamma$-convergence of $I_{x,p}$]\\label{prop:gamma-conv}\nLet $I_{x,p}: \\Theta \\to [0,\\infty]$ be the function defined in \\eqref{eigen:repr}. If $x_n \\to x$ and $p_n \\to p$, the functionals $I_n(\\theta) := I_{x_n,p_n}(\\theta)$ converge in the $\\Gamma$-sense to $I_\\infty (\\theta):= I_{x,p}(\\theta)$.\n\\end{proposition}\n \n\\begin{proof}\nBy Lemma \\ref{gammaConv}, it suffices to prove that $\\mathcal{J}(x,p,\\theta):=I_{x,p}(\\theta)$ satisfies the assumptions of that lemma.\n\n\\textbf{Assumption \\ref{item:assumption:I1}.}\nFor any fixed function $u\\in\\mathcal{D}(L_{x,p})$ such that $u > 0$, the function $(L_{x,p}u\/u)$ is continuous. Thus, for any such fixed $u > 0$ it follows that\n\\begin{equation*}\n(x,p,\\theta)\\mapsto \\int_{E' }\\frac{L_{x,p}u}{u}\\,d\\theta\n\\end{equation*}\nis continuous on ${{\\mathbb R}}^d\\times{{\\mathbb R}}^d\\times\\Theta$. As a consequence $I(x,p,\\theta)$ is lower semicontinuous, being a supremum of continuous functions.\n\n\\textbf{Assumption \\ref{item:assumption:I4}.}\nFix $x$, $p$ and $M\\geq0$. Let $\\theta \\in \\Theta_{\\{x\\},\\{p\\},M}$. Then, $I_{x,p}(\\theta)=I(x,p,\\theta)\\leq M$. It follows from\n\\cite[Theorem 3]{Pin07} that the density $\\frac{d\\theta}{dz}$ exists.
Moreover, by the same theorem, for all $y$ and $q$ there exist positive constants $c_1(y,q)$ and $c_2(y,q)$, depending continuously on $y$ and $q$ but not on $\\theta$, such that\n\\begin{equation} \n I_{y,q}(\\theta) \\leq c_1(y,q) \\int_{E^\\prime}|\\nabla g_\\theta |^2\\,dz + c_2(y,q),\n\\end{equation}\nwhere $g_\\theta = (d\\theta\/dz)^{1\/2}$ is the square root of the Radon--Nikodym derivative. Moreover, since $I_{x,p}(\\theta)\\leq M$, the Dirichlet integral $\\int_{E^\\prime}|\\nabla g_\\theta|^2\\,dz$ is bounded by a constant $C=C(x,p,M)$ uniformly over $\\theta \\in \\Theta_{\\{x\\},\\{p\\},M}$.\nAs the dependence is continuous in $y$ and $q$, we can find two open neighbourhoods, $U \\subseteq {{\\mathbb R}}^d$ of $x$ and $V\\subseteq {{\\mathbb R}}^d$ of $p$, and positive constants $c_1,c_2$, not depending on $\\theta$, such that for any $y \\in U$ and $q \\in V$:\n\\begin{equation}\nI_{y,q}(\\theta) \\leq c_1 \\int_{E'}|\\nabla g_\\theta |^2\\,dz + c_2 \\leq c_1 C + c_2 =: M',\n\\end{equation}\t\nwhich establishes \\ref{item:assumption:I4}.\n\n\\textbf{Assumption \\ref{item:assumption:I5}.} \n By the continuity of $r_{ij}$ and $\\psi$, assumption \\ref{item:assumption:I5} follows from Theorem 4 of \\cite{Pin07}.\n \n\n\\end{proof}\n\nThe following technical lemma will give us the upper semi-continuity of $\\mathcal{H}$.\n\n\\begin{lemma}[Lemma 17.30 in~\\cite{AlBo}] \\label{lemma:upper_semi}\nLet $\\mathcal{X}$ and $\\mathcal{Y}$ be two Polish spaces. Let $\\phi : \\mathcal{X} \\rightarrow \\mathcal{K}(\\mathcal{Y})$, where $\\mathcal{K}(\\mathcal{Y})$ is the space of non-empty compact subsets of $\\mathcal{Y}$. Suppose that $\\phi$ is upper hemi-continuous, that is, if $x_n \\rightarrow x$, $y_n \\rightarrow y$ and $y_n \\in \\phi(x_n)$, then $y \\in \\phi(x)$. \nLet $f : \\text{Graph} (\\phi) \\rightarrow {{\\mathbb R}}$ be upper semi-continuous.
Then the map $m(x) = \\sup_{y \\in \\phi(x)} f(x,y)$ is upper semi-continuous.\n\\end{lemma} \n\n\\medspace\n\nWe can finally prove the continuity of $\\mathcal{H}(x,p)$.\n\\begin{proof}[Proof of Proposition \\ref{continuity-th}]\nWe have already shown that $I_{x,p}(\\theta)$ is lower semicontinuous and, since $V_{x,p}$ is continuous and bounded, $\\int_{E^\\prime} V_{x,p} \\,d\\theta$ is continuous. Then, $f(x,p,\\theta):=\\int_{E^\\prime} V_{x,p} \\,d\\theta - I_{x,p}(\\theta)$ is upper semi-continuous. \\\\\nLet $x,p\\in {{\\mathbb R}}^d$. We know, by Proposition \\ref{prop:stationarymeasure} in the appendix, that there exists a unique stationary measure $\\theta_{x,p}^0$ such that for all $g \\in D(L_{x,p})$,\n\\begin{equation}\\label{stationary}\n\\int_{E'} L_{x,p}g(z,i) d\\theta_{x,p}^0=0.\n\\end{equation}\n\nLet $L_{x,p}^{\\lambda}= \\lambda (\\lambda - L_{x,p})^{-1} L_{x,p}$ be the Hille--Yosida approximation of $L_{x,p}$. Then we have\n\\begin{align}\n&- \\int_{E^\\prime} \\frac{L_{x,p} u}{u} d\\theta_{x,p}^0 = - \\int_{E^\\prime} \\frac{L_{x,p}^{\\lambda} u}{u} d\\theta_{x,p}^0 +\\int_{E^\\prime} \\frac{\\left(L_{x,p}^{\\lambda}-L_{x,p}\\right)u}{u} d\\theta_{x,p}^0\\\\\n&\\leq -\\int_{E^\\prime} \\frac{L_{x,p}^{\\lambda}u}{u} d\\theta_{x,p}^0 + \\frac{1}{\\inf_{E^\\prime} u} \\Vert{(L_{x,p}^{\\lambda} - L_{x,p})u}\\Vert_{E^\\prime} \\\\\n&\\leq -\\int_{E^\\prime} L_{x,p}^\\lambda \\log u \\, d\\theta_{x,p}^0 +o(1).\n\\end{align}\nSending $\\lambda \\to \\infty$ and using \\eqref{stationary} we have that $I_{x,p}(\\theta^0_{x,p})=0$.\nThen, $\\mathcal{H}(x,p) \\geq \\int_{E'}V_{x,p} d\\theta^0_{x,p}$.\nThus, it suffices to restrict the supremum to $\\theta \\in \\phi(x,p)$ where \n\\begin{equation*}\n\\phi(x,p) := \\left\\{\\theta \\in \\mathcal{P}(E') \\, \\middle| \\, I_{x,p}(\\theta) \\leq 2 \\Vert{\\Pi(x,p,\\cdot)}\\Vert_{\\mathcal{P}(E')}\\right\\},\n\\end{equation*}\nwhere $\\Vert{\\cdot}\\Vert_{\\mathcal{P}(E')}$ denotes the supremum norm on $\\mathcal{P}(E')$ and we 
write for simplicity $\\Pi(x,p,\\theta)=\\int_{E'} V_{x,p} d\\theta.$\\\\\nNote that $\\Vert{\\Pi(x,p,\\cdot)}\\Vert_{\\mathcal{P}(E')} < \\infty$ by definition of $V_{x,p}$.\n It follows that\n\\begin{equation*}\n\\mathcal{H}(x,p) = \\sup_{\\theta\\in \\phi(x,p)}\\left[\\int_{E^\\prime} V_{x,p} \\,d\\theta-I_{x,p}(\\theta)\\right].\n\\end{equation*}\n$\\phi(x,p)$ is non-empty as $\\theta_{x,p}^0 \\in \\phi(x,p)$, and it is compact because, by the lower semi-continuity of $I_{x,p}$, it is a closed subset of the compact space $\\mathcal{P}(E')$. We are left to show that $\\phi$ is upper hemi-continuous.\nLet $(x_n,p_n,\\theta_n) \\rightarrow (x,p,\\theta)$ with $\\theta_n \\in \\phi(x_n,p_n)$. We establish that $\\theta \\in \\phi(x,p)$. By the lower semi-continuity of $I$ and the definition of $\\phi$ we find\n\\begin{equation*}\nI_{x,p}(\\theta) \\leq \\liminf_n I_{x_n,p_n}(\\theta_n) \\leq \\liminf_n 2\\Vert{\\Pi(x_n,p_n,\\cdot)}\\Vert_{\\mathcal{P}(E')} = 2 \\Vert{\\Pi(x,p,\\cdot)}\\Vert_{\\mathcal{P}(E')}\n\\end{equation*}\nwhich implies indeed that $\\theta \\in \\phi(x,p)$. Thus, upper semi-continuity of $\\mathcal{H}$ follows by an application of Lemma \\ref{lemma:upper_semi}.\n\nWe proceed with proving lower semi-continuity of $\\mathcal{H}$. Suppose that $(x_n,p_n) \\rightarrow (x,p)$; we prove that $\\liminf_n \\mathcal{H}(x_n,p_n) \\geq \\mathcal{H}(x,p)$. \nLet $\\theta$ be a measure attaining the supremum, so that $\\mathcal{H}(x,p) = \\Pi(x,p,\\theta) - I_{x,p}(\\theta)$; such a $\\theta$ exists by the compactness of $\\phi(x,p)$ and the upper semi-continuity of $f$. By Proposition \\ref{prop:gamma-conv}, there are $\\theta_n$ such that $\\theta_n \\rightarrow \\theta$ and $\\limsup_n I_{x_n,p_n}(\\theta_n) \\leq I_{x,p}(\\theta)$.
\nMoreover, $\\Pi(x_n,p_n,\\theta_n)$ converges to $\\Pi(x,p,\\theta)$ by continuity.\nTherefore,\n\\begin{align*}\n\\liminf_{n\\to\\infty}\\mathcal{H}(x_n,p_n)&\\geq \\liminf_{n\\to\\infty} \\left[\\Pi(x_n,p_n,\\theta_n)-I_{x_n,p_n}(\\theta_n)\\right]\\\\\n&\\geq \\liminf_{n\\to\\infty}\\Pi(x_n,p_n,\\theta_n)-\\limsup_{n\\to\\infty}I_{x_n,p_n}(\\theta_n)\\\\\n&\\geq \\Pi(x,p,\\theta)-I_{x,p}(\\theta) = \\mathcal{H}(x,p),\n\\end{align*}\nestablishing that $\\mathcal{H}$ is lower semi-continuous.\n\\end{proof}\n\n\n\\subsection{Comparison principle}\\label{comparison-sub}\nIn this section we prove the comparison principle for the Hamilton--Jacobi equation in terms of $H$ by relating it to a set of Hamilton-Jacobi equations constructed from $\\mathcal{H}$ (Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP}). We introduce the operators $H_\\dagger,H_\\ddagger$ and $H_1,H_2$. In both cases, the new Hamiltonians will serve as natural upper and lower bounds for $\\mathbf{H}f(x)={{\\mathcal H}}(x,\\nabla f(x))$ and $H$ respectively, where $\\mathcal{H}$ and $H$ are the operators introduced in Propositions \\ref{hamiltonian:prop} and \\ref{convergence:th}. These new Hamiltonians are defined in terms of a containment function $\\Upsilon$, which allows us to restrict our analysis to compact sets.\nHere we give the rigorous definition.\n\\begin{definition}[Containment function]\\label{contfun_def}\n A function $\\Upsilon: {{\\mathbb R}}^d \\to [0,\\infty)$ is a containment function for $V_{x,p}$ in \\eqref{eigen:repr}, if $\\Upsilon \\in C^1({{\\mathbb R}}^d)$ and it is such that\n \\begin{itemize}\n \\item $\\Upsilon$ has compact sub-level sets, i.e. 
for every $c\\geq0$ the set $\\{x\\vert \\Upsilon(x)\\leq c\\}$ is compact;\n \\item $\\sup_{x\\in{{\\mathbb R}}^d, z\\in E^\\prime} V_{x,\\nabla \\Upsilon(x)}(z) < \\infty$.\n \\end{itemize}\n\\end{definition}\n\n\\begin{lemma}\\label{contfun_log_lemma}\n The function $\\Upsilon(x)= \\frac{1}{2}\\log\\left(1+\\abs{x}^2\\right)$ is a containment function for $V_{x,p}$.\n\\end{lemma}\n\\begin{proof}\nFirstly note that $\\Upsilon$ has compact sub-level sets. \nRegarding the second property, since $\\nabla\\Upsilon(x)=\\frac{x}{1+\\abs{x}^2}$, by the definition of $V_{x,p}$ we have for every $x\\in {{\\mathbb R}}^d$ and $z=(y,i)\\in \\mathbb{T}^d\\times \\{1,\\dots,J\\}$,\n\\begin{align}\n V_{x,\\nabla \\Upsilon(x)} (y,i) =\\frac{\\abs{x}^2}{2(1+\\abs{x}^2)^2} - \\nabla_x \\psi^i(x,y) \\cdot \\frac{x}{1+\\abs{x}^2}.\n\\end{align}\nRecalling that $\\psi$ grows at most linearly in $x$, we can conclude that $\\sup_{x,z}V_{x,\\nabla \\Upsilon}(z)< \\infty$.\n\\end{proof}\nUsing the above lemma we are now able to define the auxiliary operators in terms of $\\Upsilon$.\nIn the following we will denote by $C_l^\\infty(E)$ the set of smooth functions on $E$ that have a lower bound and by $C_u^\\infty(E)$ the set of smooth functions on $E$ that have an upper bound.\n\\begin{definition} \\label{aux}\nFix $\\eta \\in (0,1)$. Given $\\Upsilon(x)= \\frac{1}{2}\\log\\left(1+|x|^2\\right)$, $C_\\Upsilon := \\sup_{x,z} V_{x,\\nabla\\Upsilon(x)}(z)$ and $\\mathbf{H}f(x)=\\mathcal{H}(x,\\nabla f(x))$, we define\n\\begin{itemize}\n\\item For $f \\in C_l^\\infty(E)$,\n\\begin{gather*}\nf^\\eta_\\dagger := (1-\\eta) f + \\eta \\Upsilon, \\\\\nH_{\\dagger,f}^\\eta(x) := (1-\\eta) \\mathbf{H} f(x) + \\eta C_\\Upsilon,\n\\end{gather*}\nand set\n\\begin{equation*}\nH_\\dagger := \\left\\{(f^\\eta_\\dagger,H_{\\dagger,f}^\\eta) \\, \\middle| \\, f \\in C_l^\\infty(E), \\eta \\in (0,1) \\right\\}.\n\\end{equation*} \n\\item For $f \\in C_u^\\infty(E)$,\n\n\\begin{gather*}\nf^\\eta_\\ddagger := (1+\\eta) f - \\eta \\Upsilon, \\\\\nH_{\\ddagger,f}^\\eta(x) 
:= (1+\\eta) \\mathbf{H} f(x) - \\eta C_\\Upsilon,\n\\end{gather*}\nand set\n\\begin{equation*}\nH_\\ddagger := \\left\\{(f^\\eta_\\ddagger,H_{\\ddagger,f}^\\eta) \\, \\middle| \\, f \\in C_u^\\infty(E), \\eta \\in (0,1) \\right\\}.\n\\end{equation*} \n\\end{itemize}\n\\end{definition}\n\n\\begin{definition}\nFix $\\eta \\in (0,1)$ and given $\\Upsilon(x)= \\frac{1}{2}\\log\\left(1+|x|^2\\right)$, $C_\\Upsilon := \\sup_{x,z} V_{x,\\nabla\\Upsilon(x)}(z)$ and $\\mathbf{H}f(x)=\\mathcal{H}(x,\\nabla f(x))$, we define\n\\begin{itemize}\n\\item For $f \\in C_l^\\infty(E)$ , $\\varphi\\in C^2(E^\\prime)$, $\\eta \\in (0,1)$ set \n\\begin{gather*}\nf^\\eta_1 := (1-\\eta) f + \\eta \\Upsilon, \\\\\nH^\\eta_{1,f,\\varphi}(x,z) :=\n(1-\\eta) H_{f,\\varphi}(x,z) + \\eta C_\\Upsilon ,\n\\end{gather*}\nand set\n\\begin{equation*}\nH_1 := \\left\\{(f^\\eta_1,H^\\eta_{1,f,\\varphi}) \\, \\middle| \\, f \\in C_l^\\infty(E), \\varphi \\in C^2(E^\\prime), \\eta \\in (0,1) \\right\\}.\n\\end{equation*} \n\\item For $f \\in C_u^\\infty(E)$, $\\varphi\\in C^2(E^\\prime)$, $\\eta \\in (0,1)$ set \n\\begin{gather*}\nf^\\eta_2 := (1+\\eta) f - \\eta \\Upsilon, \\\\\nH^\\eta_{2,f,\\varphi}(x,z) :=(1+\\eta) H_{f,\\varphi}(x,z) - \\eta C_\\Upsilon,\n\\end{gather*}\nand set\n\\begin{equation*}\nH_2 := \\left\\{(f^\\eta_2,H^\\eta_{2,f,\\varphi}) \\, \\middle| \\, f \\in C_u^\\infty(E), \\varphi\\in C^2(E^\\prime), \\eta \\in (0,1)\\right\\}.\n\\end{equation*}\n\\end{itemize}\n\\label{definition:H1H2}\n\\end{definition}\n\n\\begin{figure}[h!]\n\\begin{center}\n\\begin{tikzpicture}\n\\matrix (m) [matrix of math nodes,row sep=1em,column sep=4em,minimum width=2em]\n{\n{ } & H_1 &[7mm] H_\\dagger &[5mm] { } \\\\\nH & { } & { } & \\mathbf{H} \\\\\n{ } & H_2 & H_\\ddagger & { } \\\\};\n\\path[-stealth]\n(m-2-1) edge node [above] {sub} (m-1-2)\n(m-2-1) edge node [below] {super \\qquad { }} (m-3-2)\n(m-1-2) edge node [above] {sub \\qquad { }} (m-1-3)\n(m-3-2) edge node [below] {super \\qquad { }} 
(m-3-3)\n(m-2-4) edge node [above] {\\qquad sub} (m-1-3)\n(m-2-4) edge node [below] {\\qquad super} (m-3-3);\n\\begin{pgfonlayer}{background}\n\\node at (m-2-3) [rectangle,draw=blue!50,fill=blue!20,rounded corners, minimum width=1cm, minimum height=2.5cm] {comparison};\n\\end{pgfonlayer}\n\\end{tikzpicture}\n\\end{center}\n\\captionsetup{width=.9\\textwidth}\n\\caption{An arrow connecting an operator $A$ with operator $B$ with subscript 'sub' means that viscosity subsolutions of $(1 - \\lambda A)f= h$ are also viscosity subsolutions of $(1 - \\lambda B)f = h$. Similarly for arrows with a subscript 'super'. The box around the operators $H_\\dagger$ and $H_\\ddagger$ indicates that the comparison principle holds for subsolutions of $(1 - \\lambda H_\\dagger) f = h$ and supersolutions of $(1 - \\lambda H_\\ddagger) f = h$.}\n\\label{SF:fig:CP-diagram-in-proof-of-CP}\n\\end{figure}\n\nWe now prove the comparison principle for $f - \\lambda Hf = h$ based on the results summarized in Figure~\\ref{SF:fig:CP-diagram-in-proof-of-CP}. \n\\begin{theorem}[Comparison principle] \\label{theorem:comparison}\nLet $h \\in C_b(E)$ and $\\lambda >0$. Let $u$ and $v$ be, respectively, any subsolution and any supersolution to $(1 - \\lambda H)f = h$. Then we have that \n\\begin{equation*}\n\\sup_x u(x) - v(x) \\leq 0.\n\\end{equation*}\n\\end{theorem}\n\\begin{proof}\n\nFix~$h\\in C_b(E)$ and~$\\lambda>0$. Let~$u$ be a viscosity subsolution and~$v$ be a viscosity supersolution to~$(1-\\lambda H)f=h$. By Figure~\\ref{SF:fig:CP-diagram-in-proof-of-CP}, the function~$u$ is a viscosity subsolution to~$(1-\\lambda H_\\dagger)f=h$ and~$v$ is a viscosity supersolution to~$(1-\\lambda H_\\ddagger)f=h$. 
Hence by the comparison principle for $H_\\dagger,H_\\ddagger$ established in Theorem \\ref{theorem:comparison_from_otherpaper} below,~$\\sup_x u(x) - v(x) \\leq 0 $, which finishes the proof.\n\\end{proof}\n\nThe rest of this subsection is devoted to establishing Figure~\\ref{SF:fig:CP-diagram-in-proof-of-CP}. More precisely, we establish Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP} in results \\ref{theorem:comparison_from_otherpaper}, \\ref{H12}, \\ref{lemma:viscosity_solutions_arrows_based_on_eigenvalue} and \\ref{lemmaRepre}.\n \\\\\n\nThe next theorem contains the comparison principle for $H_\\dagger$ and $H_\\ddagger$. The proof follows standard ideas that can be found for instance in \\cite{BaCaDo} and \\cite{CrIsLi}. In order to be able to use both the subsolution and supersolution properties in the estimate of $\\sup_x u(x) - v(x)$, we use the following strategy based on the introduction of double variables.\n\\begin{enumerate}\n\\item First of all, note that the supremum over $x$ of $u(x)-v(x)$ can be replaced, sending $\\varepsilon \\to 0$, with the supremum over $x$ and $y$ of the doubled-variables function $u(x)-v(y)- (2\\varepsilon)^{-1}|x-y|^2$.\n\\item Once a maximising pair $(x,y)$ is found, we can use the subsolution and supersolution properties in the following way: \n\\begin{itemize}\n\\item fixing $y$ and optimising over $x$, the pair can be used in the application of the subsolution property of $u$;\n\\item fixing $x$ and optimising over $y$, it can be used in the application of the supersolution property of $v$.\n\\end{itemize}\n\\end{enumerate}\n\\begin{theorem} \\label{theorem:comparison_from_otherpaper}\nLet $h \\in C_b(E)$ and $\\lambda >0$. Let $u$ be any subsolution to $(1 - \\lambda H_\\dagger) f = h$ and let $v$ be any supersolution to $(1- \\lambda H_\\ddagger) f = h$.
Then we have that \n\\begin{equation*}\n\\sup_x u(x) - v(x) \\leq 0.\n\\end{equation*}\n\\end{theorem}\n\n\n\\begin{proof}\nFollowing the above steps we define the double variables function \n\\begin{equation}\n\\Phi_{\\varepsilon,\\beta}(x,y) = \\frac{u(x)}{1-\\beta} - \\frac{v(y)}{1+\\beta} - \\frac{|x-y|^2}{2\\varepsilon} - \\frac{\\beta}{1-\\beta}\\Upsilon(x) -\\frac{\\beta}{1+\\beta}\\Upsilon(y).\n\\end{equation}\nNote that the containment function $\\Upsilon$ is introduced in order to be able to work in a compact set, and the positive constant $\\beta$ will allow us to use the convexity of $\\mathcal{H}$.\nSince $\\Phi_{\\varepsilon,\\beta}$ is upper semicontinuous and $\\lim_{|x|+|y| \\to \\infty}\\Phi (x,y) =-\\infty$, for every $\\varepsilon\\in(0,1)$ there exists $(x_{\\varepsilon},y_{\\varepsilon})$ such that\n\\begin{equation}\\label{supremum}\n \\Phi_{\\varepsilon,\\beta}(x_{\\varepsilon},y_{\\varepsilon})=\\sup_{{{\\mathbb R}}^d\\times{{\\mathbb R}}^d} \\Phi_{\\varepsilon,\\beta}(x,y).\n \\end{equation}\nSuppose by contradiction that $\\delta= u(\\tilde{x})-v(\\tilde{x})> 0 $ for some $\\tilde{x}$. We choose $\\beta$ such that $\\frac{2\\beta}{(1-\\beta)(1+\\beta)} \\Upsilon(\\tilde{x})<\\delta\/2$ and $\\frac{2\\beta}{1-\\beta^2}\\left(\\vn{h} + C_\\Upsilon\\right)<\\delta\/2$. 
Then, \n\\begin{equation}\\label{positivity}\n\\Phi_{\\varepsilon,\\beta}(x_\\varepsilon,y_\\varepsilon) \\geq \\Phi_{\\varepsilon,\\beta}(\\tilde{x},\\tilde{x}) > \\delta - \\frac{2\\beta}{(1-\\beta)(1+\\beta)} \\Upsilon(\\tilde{x})>\\frac{\\delta}{2}> 0,\n\\end{equation}\nand \n$$\\frac{\\beta}{1-\\beta} \\Upsilon(x_\\varepsilon) + \\frac{\\beta}{1+\\beta} \\Upsilon(y_\\varepsilon) \\leq \\sup\\left(\\frac{u}{1-\\beta}\\right)+\\sup\\left(\\frac{-v}{1+\\beta}\\right) <\\infty.$$\nTherefore there exists $R_\\beta>0$ such that $x_\\varepsilon$ and $y_\\varepsilon$ belong to $B(0,R_\\beta)$.\n\nNext we observe that by Lemma 3.1 of \\cite{CrIsLi}, \n\\begin{equation}\\label{conv2}\n\\frac{|x_\\varepsilon -y_\\varepsilon|^2}{\\varepsilon} \\to 0 \\qquad \\text{as $\\varepsilon \\to 0^+$},\n\\end{equation}\nand, as a consequence, $\\vert x_\\varepsilon - y_{\\varepsilon} \\vert \\to 0$ as $\\varepsilon \\to 0^+$.\nDefine the functions $\\varphi_1^{\\varepsilon,\\beta} \\in D(H_\\dagger)$ and $\\varphi_2^{\\varepsilon,\\beta} \\in D(H_\\ddagger)$ by\n\\begin{align}\n&\\varphi_1^{\\varepsilon,\\beta} (x)= (1-\\beta) \\left [ \\frac{v(y_\\varepsilon)}{1+\\beta} +\\frac{|x-y_\\varepsilon|^2}{2\\varepsilon} +\\frac{\\beta}{1-\\beta} \\Upsilon (x) + \\frac{\\beta}{1+\\beta}\\Upsilon(y_\\varepsilon)+(1-\\beta)(x-x_{\\varepsilon})^2 \\right] \\\\\n& \\varphi_2^{\\varepsilon,\\beta}(y) = (1+ \\beta) \\left [ \\frac{u(x_\\varepsilon)}{1-\\beta} - \\frac{|x_\\varepsilon-y|^2}{2\\varepsilon} -\\frac{\\beta}{1-\\beta} \\Upsilon (x_\\varepsilon) - \\frac{\\beta}{1+\\beta}\\Upsilon(y) - (1+\\beta) (y-y_{\\varepsilon})^2 \\right].\n\\end{align}\nUsing \\eqref{supremum}, observe that $u - \\varphi_1^{\\varepsilon,\\beta}$ attains its supremum at $x = x_{\\varepsilon}$, and thus\n\\begin{equation*}\n\\sup_E (u-\\varphi^{\\varepsilon,\\beta}_1) = (u-\\varphi^{\\varepsilon,\\beta}_1)(x_{\\varepsilon}).\n\\end{equation*}\nMoreover, by addition of the $(1-\\beta)(x-x_\\varepsilon)^2$ term, 
the point $x_\\varepsilon$ is the unique optimizer of $u- \\varphi_1^{\\varepsilon,\\beta}$.\nThen, by the subsolution property, taking into account Remark \\ref{remark:existence of optimizers},\n\\begin{equation}\\label{subsolution}\nu(x_\\varepsilon) -\\lambda \\left[ (1-\\beta)\\mathcal{H}\\left(x_\\varepsilon, \\frac{x_\\varepsilon -y_\\varepsilon}{\\varepsilon}\\right) + \\beta \\,C_\\Upsilon \\right] \\leq h(x_\\varepsilon).\n\\end{equation}\nWith a similar argument for $v$ and $\\varphi^{\\varepsilon,\\beta}_2$, we obtain by the supersolution inequality that\n\\begin{equation}\\label{eq:proof-CP:supersol-ineq}\nv(y_{\\varepsilon}) - \\lambda \\left[(1+\\beta)\\mathcal{H}\\left(y_{\\varepsilon}, \\frac{x_\\varepsilon -y_\\varepsilon}{\\varepsilon} \\right) - \\beta C_\\Upsilon\\right] \\geq h(y_{\\varepsilon}).\n\\end{equation}\nBy the coercivity property of Proposition \\ref{conv:th} and by the inequality \\eqref{eq:proof-CP:supersol-ineq}, $p_\\varepsilon := \\frac{x_\\varepsilon-y_\\varepsilon}{\\varepsilon}$ is bounded uniformly in $\\varepsilon$, allowing us to extract a converging subsequence $p_{\\varepsilon_k}$.
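The boundedness of $p_\varepsilon$ can be made explicit; the following sketch (not spelled out in the text) combines \eqref{positivity}, \eqref{eq:proof-CP:supersol-ineq} and the uniform coercivity \eqref{coercivity:eq}:

```latex
\begin{align*}
\lambda(1+\beta)\,\mathcal{H}(y_\varepsilon,p_\varepsilon)
  \;\leq\; v(y_\varepsilon) - h(y_\varepsilon) + \lambda\beta\, C_\Upsilon
  \;\leq\; \frac{1+\beta}{1-\beta}\,\sup_E u \;+\; \Vert h \Vert \;+\; \lambda\beta\, C_\Upsilon
  \;<\;\infty,
\end{align*}
where the bound $v(y_\varepsilon)\leq \frac{1+\beta}{1-\beta}\,u(x_\varepsilon)$ follows from
$\Phi_{\varepsilon,\beta}(x_\varepsilon,y_\varepsilon)>0$. Since $y_\varepsilon$ ranges in the
compact set $\overline{B(0,R_\beta)}$, the coercivity \eqref{coercivity:eq}, uniform over this
compact set, forces $\sup_{\varepsilon}\,|p_\varepsilon| < \infty$.
```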
\n\nWe conclude that for each $\\beta$\n\t\\begin{align}\n\t\t& \\liminf_{\\varepsilon\\to0}\\Phi(x_\\varepsilon,y_\\varepsilon) \\\\\n\t\t& \\leq \\liminf_{\\varepsilon \\to 0} \\frac{u(x_\\varepsilon)}{1-\\beta} - \\frac{v(y_\\varepsilon)}{1+\\beta} \\\\\n\t\t& \\leq \\liminf_{k \\rightarrow \\infty} \\lambda \\mathcal {H}\\left(x_{\\varepsilon_k}, p_{\\varepsilon_k}\\right)+\\frac{\\beta}{1-\\beta} C_\\Upsilon - \\lambda \\mathcal{H}\\left(y_{\\varepsilon_k},p_{\\varepsilon_k}\\right) + \\frac{\\beta}{1+\\beta} \\, C_\\Upsilon\\\\\n\t\t&\\qquad + \\frac{h(x_{\\varepsilon_k})}{1-\\beta} - \\frac{h(y_{\\varepsilon_k})}{1+\\beta} \\\\\n\t\t& \\leq \\liminf_{k \\rightarrow \\infty} \\lambda \\left[ \\mathcal{H}\\left(x_{\\varepsilon_k}, p_{\\varepsilon_k}\\right) - \\mathcal{H}\\left(y_{\\varepsilon_k},p_{\\varepsilon_k}\\right)\\right] +\\frac{h(x_{\\varepsilon_k}) - h(y_{\\varepsilon_k})}{1-\\beta^2} + \\frac{2\\beta}{1-\\beta^2}\\left(\\vn{h} + C_\\Upsilon\\right) \\\\\n\t\t& \\leq \\frac{2\\beta}{1-\\beta^2}\\left(\\vn{h} + C_\\Upsilon\\right).\n\t\\end{align}\nAs $\\beta$ is chosen such that $\\frac{2\\beta}{1-\\beta^2}\\left(\\vn{h} + C_\\Upsilon\\right)<\\delta\/2$, we obtain a contradiction with \\eqref{positivity}, establishing the comparison principle. \n\\end{proof}\nBelow, we complete the figure by proving the left-hand side of Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP}.\n\\begin{lemma}\\label{H12}\nFor all $h \\in C(\\mathbb{R}^d)$ and $\\lambda > 0$, viscosity subsolutions of\n$(1 - \\lambda H) f =h$\nare viscosity subsolutions of $(1 - \\lambda H_1) f =h,$ and viscosity supersolutions of $(1 - \\lambda H) f = h $ are viscosity supersolutions of $(1 - \\lambda H_2) f =h.$\n\\end{lemma}\n\n\n\\begin{proof}\n\nFix $\\lambda > 0$ and $h \\in C_b(E)$. Let $u$ be a subsolution to $(1 - \\lambda H) f = h$. We prove it is also a subsolution to $(1 - \\lambda H_1) f = h$. 
Fix $\\eta \\in (0,1)$, $\\varphi\\in C^2(E')$ and $f \\in C_{l}^\\infty(E)$, so that $(f^\\eta_1,H^\\eta_{1,f,\\varphi}) \\in H_1$ with $f_1^\\eta$ and $H^\\eta_{1,f,\\varphi}$ as in Definition \\ref{definition:H1H2}. We will prove that there are $(x_n,z_n)$ such that\n\t\t\\begin{gather} \n\t\t\t\\lim_n u(x_n) - f^\\eta_1(x_n) = \\sup_x u(x) - f^\\eta_1(x),\\label{eqn:proof_HH1_conditions_for_subsolution_first} \\\\\n\t\t\t\\limsup_n u(x_n) - \\lambda H^\\eta_{1,f,\\varphi}(x_n,z_n) - h(x_n) \\leq 0. \\label{eqn:proof_HH1_conditions_for_subsolution_second}\n\t\t\\end{gather}\n\t\nGiven $M:= \\eta^{-1}\\sup_y \\left[u(y)-(1-\\eta)f(y)\\right]<\\infty$, as $u$ is bounded and $f\\in C_l^\\infty(E)$, we have that the sequence $x_n$ along which the limit in \\eqref{eqn:proof_HH1_conditions_for_subsolution_first} is\nattained is contained in the compact set $K:=\\left\\{x\\vert \\Upsilon(x)\\leq M\\right\\}$. We define $\\gamma:{{\\mathbb R}} \\to {{\\mathbb R}}$ as a smooth increasing function such that \n\\begin{equation}\n\\gamma(r)=\n\\begin{cases}\nr \\qquad &\\text{if $r\\leq M$},\\\\\nM+1 \\qquad &\\text{if $r\\geq M+2$.}\n\\end{cases}\n\\end{equation}\nDenote by $f_\\eta$ the function on $E$ defined by\n$$ f_\\eta(x)=\\gamma((1-\\eta)f(x)+\\eta\\Upsilon(x))=\\gamma(f_1^\\eta(x)).$$\nBy construction, $f_\\eta$ is smooth and constant outside a compact set and thus lies in $\\mathcal{D}(H)$. We conclude that $(f_\\eta, H_{f_{\\eta},(1-\\eta)\\varphi}) \\in H$. \nAs $u$ is a viscosity subsolution for $(1 - \\lambda H)u = h$, there exist $x_n \\in E$ and $z_n \\in E'$ with\n\\begin{gather}\n\\lim_n u(x_n) - f_\\eta(x_n) = \\sup_x u(x) - f_\\eta(x), \\label{eqn:visc_subsol_sup} \\\\\n\\limsup_n u(x_n) - \\lambda H_{f_{\\eta},(1-\\eta)\\varphi}(x_n,z_n) - h(x_n) \\leq 0. 
\\label{eqn:visc_subsol_upperbound}\n\\end{gather}\nSince $f_1^\\eta$ equals $f_\\eta$ in $K=\\left\\{x\\vert \\Upsilon(x)\\leq M\\right\\}$, we also have that \n\\begin{equation*}\n\\lim_n u(x_n) - f_1^\\eta(x_n) = \\sup_x u(x) - f_1^\\eta(x),\n\\end{equation*}\nestablishing \\eqref{eqn:proof_HH1_conditions_for_subsolution_first}. Convexity of $H_{f,\\varphi}(x,z)=H_\\varphi(x,\\nabla f (x), z)$ in $p$ and $\\varphi$ yields for arbitrary $(x,z)$ the elementary estimate\n\\begin{align*}\nH_{f_{\\eta},(1-\\eta)\\varphi}(x,z) = & H_{(1-\\eta)\\varphi}(x, (1-\\eta)\\nabla f (x)+ \\eta \\nabla \\Upsilon(x), z) \\\\\n& \\leq (1-\\eta) H_\\varphi(x,\\nabla f(x), z) +\\eta H_0(x, \\nabla \\Upsilon (x), z) \\\\\n&= (1-\\eta)H_{\\varphi}(x,\\nabla f(x), z) + \\eta V_{x,\\nabla \\Upsilon(x)}(z)\\\\\n& \\leq H^\\eta_{1,f,\\varphi}(x,z).\n\\end{align*} \nCombining the above inequality with \\eqref{eqn:visc_subsol_upperbound}, we have\n\\begin{align*}\n\\limsup_n u(x_n) - \\lambda H^\\eta_{1,f,\\varphi}(x_n,z_n) - h(x_n) \\leq \\limsup_n u(x_n) - \\lambda H_{f_{\\eta},(1-\\eta)\\varphi}(x_n,z_n) - h(x_n) \\leq 0,\n\\end{align*}\nestablishing \\eqref{eqn:proof_HH1_conditions_for_subsolution_second}.\nThe supersolution statement follows in the same way.\n\\end{proof}\n\n\\begin{lemma}\\label{lemma:viscosity_solutions_arrows_based_on_eigenvalue}\nFix $\\lambda > 0$ and $h \\in C_b(E)$. \n\\begin{enumerate}[(a)]\n\\item Every subsolution to $(1 - \\lambda H_1) f = h$ is also a subsolution to $(1 - \\lambda H_\\dagger) f = h$.\n\\item Every supersolution to $(1 - \\lambda H_2 )f = h$ is also a supersolution to $(1 - \\lambda H_\\ddagger) f = h$.\n\\end{enumerate}\n\\end{lemma}\n\nThe definition of viscosity solutions, Definition \\ref{def:viscosity_solution}, is written down in terms of the existence of a sequence of points that maximizes $u-f$ or minimizes $v-f$. 
To prove the lemma above, we would like to have the subsolution and supersolution inequalities for any point that maximizes or minimizes the difference. This is achieved by the following auxiliary lemma.\n\t\n\\begin{lemma} \\label{lemma:strong_viscosity_solutions}\n\t\tFix $\\lambda > 0$ and $h \\in C_b(E)$. \n\t\t\\begin{enumerate}[(a)]\n\t\t\t\\item Let $u$ be a subsolution to $(1 - \\lambda H_1) f = h$, then for all $(f,g) \\in H_1$ and $x_0 \\in E$ such that\n\t\t\t\\begin{equation*}\n\t\t\t\tu(x_0) - f(x_0) = \\sup_x u(x) - f(x)\n\t\t\t\\end{equation*}\n\t\t\tthere exists a $z \\in E'$ such that \n\t\t\t\\begin{equation*}\n\t\t\t\tu(x_0) - \\lambda g(x_0,z) \\leq h(x_0).\n\t\t\t\\end{equation*}\n\t\t\t\\item Let $v$ be a supersolution to $(1 - \\lambda H_2) f = h$, then for all $(f,g) \\in H_2$ and $x_0 \\in E$ such that\n\t\t\t\\begin{equation*}\n\t\t\t\tv(x_0) - f(x_0) = \\inf_x v(x) - f(x)\n\t\t\t\\end{equation*}\n\t\t\tthere exists a $z \\in E'$ such that \n\t\t\t\\begin{equation*}\n\t\t\t\tv(x_0) - \\lambda g(x_0,z) \\geq h(x_0).\n\t\t\t\\end{equation*}\n\t\t\\end{enumerate}\n\t\\end{lemma}\nFor a proof of the above Lemma see Lemma 5.7 of \\cite{KraSch}.\n\n\\begin{proof}[Proof of Lemma \\ref{lemma:viscosity_solutions_arrows_based_on_eigenvalue}]\nWe only prove the subsolution statement. Fix $\\lambda > 0$ and $h \\in C_b(E)$. \nLet $u$ be a subsolution of $(1 - \\lambda H_1 ) f = h$. We prove that it is also a subsolution of $(1 - \\lambda H_\\dagger) f = h$. 
Let $f^\\eta_1 = (1-\\eta)f + \\eta \\Upsilon \\in \\mathcal{D}(H_1)$ and let $x_0$ be such that\n\\begin{equation*}\nu(x_0) - f^\\eta_1(x_0) = \\sup_x u(x) - f_1^\\eta(x).\n\\end{equation*}\nSince $\\mathcal{H}(x,p)$ is a principal eigenvalue for $L_{x,p}+R_x$ (as remarked in Proposition \\ref{hamiltonian:prop}), there exists a function $g$ such that \n\\begin{equation} \\label{eqn:inequality_based_on_ev}\n\\mathcal{H}(x_0,\\nabla f(x_0)) = g^{-1}\\left( L_{x_0,\\nabla f(x_0)} + R_{x_0}\\right)g.\n\\end{equation}\nAs\n\\begin{equation*}\n\\left(f^\\eta_1, (1-\\eta) g^{-1}\\left( L_{x_0,\\nabla f(x_0)} + R_{x_0}\\right)g + \\eta C_\\Upsilon \\right) \\in H_1,\n\\end{equation*}\nwe find by the subsolution property of $u$ that there exists $z$ such that\n\\begin{align}\nh(x_0) & \\geq u(x_0) - \\lambda \\left((1-\\eta) g^{-1}\\left( L_{x_0,\\nabla f(x_0)} + R_{x_0}\\right)g + \\eta C_\\Upsilon\\right) \\\\\n& = u(x_0) - \\lambda \\left((1-\\eta) \\mathcal{H}(x_0,\\nabla f(x_0)) + \\eta C_\\Upsilon \\right),\n\\end{align}\nwhere the equality follows by \\eqref{eqn:inequality_based_on_ev}; this establishes that $u$ is a subsolution for $(1 - \\lambda H_\\dagger )f = h$.\n\\end{proof}\n\nWe conclude this subsection by proving the right part of Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP}.\n\\begin{proposition}\\label{right-diag:prop}\nLet the map $\\mathcal{H}: \\mathbb{R}^d\\times {{\\mathbb R}}^d \\to \\mathbb{R}$ be the eigenvalue \\eqref{eigen:repr} and let $\\mathbf{H} : \\mathcal{D}(\\mathbf{H}) \\subseteq C^1(\\mathbb{R}^d) \\to C(\\mathbb{R}^d)$ be the operator $\\mathbf{H} f(x) := \\mathcal{H}(x,\\nabla f(x))$. 
Then, for all $\\lambda > 0$ and $h \\in C(\\mathbb{R}^d)$, every viscosity subsolution of $(1 - \\lambda \\mathbf{H}) f = h$ is also a viscosity subsolution of $(1 - \\lambda H_\\dagger) f = h$ and every viscosity supersolution of $(1-\\lambda \\mathbf{H})f=h$ is also a viscosity supersolution of $(1-\\lambda H_\\ddagger)f=h$.\n\\end{proposition}\n\\begin{proof}\nFix $\\lambda > 0$ and $h \\in C_b(E)$. Let $u$ be a subsolution to $(1- \\lambda \\mathbf{H})f = h$. We prove it is also a subsolution to $(1 - \\lambda H_\\dagger) f = h$.\n\\smallskip\nFix $\\eta > 0 $ and $f\\in C_\\ell^\\infty(E)$ and let $(f^\\eta_\\dagger,H^\\eta_{\\dagger,f}) \\in H_\\dagger$ as in Definition \\ref{aux}. We will prove that\n\\begin{equation}\n\\left(u-f_\\dagger^\\eta\\right)(x) = \\sup_{y\\in E}\\left(u(y)-f_\\dagger^\\eta(y) \\right),\\label{eqn:proof_lemma_conditions_for_subsolution_first}\n\\end{equation}\nimplies\n\\begin{equation}\nu(x)-\\lambda H_{\\dagger,f}^\\eta(x) - h(x)\\leq 0.\n\\label{eqn:proof_lemma_conditions_for_subsolution_second}\n\\end{equation}\nAs $u$ is a viscosity subsolution for $(1 - \\lambda \\mathbf{H})f = h$ and $f_\\dagger^\\eta \\in D(\\mathbf{H})$, if\n\\begin{equation}\n\\left(u-f^\\eta_\\dagger\\right)(x) = \\sup_{y\\in E} \\left(u(y)-f^\\eta_\\dagger(y)\\right), \\label{eqn:visc_subsol_sup_dagger}\n\\end{equation}\nthen,\n\\begin{equation}\nu(x) - \\lambda \\mathbf{H} f^\\eta_\\dagger(x) - h(x) \\leq 0. 
\\label{eqn:visc_subsol_upperbound_dagger}\n\\end{equation}\nConvexity of $p \\mapsto \\mathcal{H}(x,p)$ yields the estimate\n\\begin{align*}\n\\mathbf{H} f^\\eta_\\dagger(x) &= \\mathcal{H}(x,\\nabla f^\\eta_\\dagger(x)) \\\\\n& \\leq (1-\\eta) \\mathcal{H}(x,\\nabla f(x)) + \\eta \\mathcal{H}(x,\\nabla \\Upsilon(x)) \\\\\n&\\leq (1-\\eta) \\mathcal{H}(x,\\nabla f(x)) + \\eta C_\\Upsilon = H^\\eta_{\\dagger,f}(x).\n\\end{align*} \nCombining this inequality with \\eqref{eqn:visc_subsol_upperbound_dagger}, we have\n\\begin{equation}\nu(x) - \\lambda H^\\eta_{\\dagger,f}(x) - h(x)\n\\leq u(x) - \\lambda \\mathbf{H} f^\\eta_\\dagger(x) - h(x) \\leq 0,\n\\end{equation}\nestablishing \\eqref{eqn:proof_lemma_conditions_for_subsolution_second}. The supersolution statement follows in a similar way.\n\\end{proof}\n\n\\subsection{Exponential tightness}\\label{sub:expo}\n\t\nTo establish exponential tightness, we first note that by \\cite[Corollary 4.19]{FK} it suffices to establish the exponential compact containment condition. This is the content of the next proposition.\n\\begin{proposition} \\label{proposition:exponential_compact_containment}\nFor all $K\\subset E$ compact, $T >0$ and $a > 0$ there is a compact set $\\hat{K}_{K,T,a} \\subset E$ such that \n\\begin{equation}\\label{compactcontainment}\n\\limsup_{\\varepsilon \\rightarrow 0} \\varepsilon \\log \\mathbb{P}\\left[\\bigcup_{t\\in[0,T]}\\left\\{ X_\\varepsilon(t) \\notin \\hat{K}_{K,T,a}\\right\\}\\neq \\emptyset \\right] \\leq \\max\\{- a, \\limsup_{\\varepsilon \\to 0} \\varepsilon \\log \\mathbb{P}(X_\\varepsilon(0) \\notin K)\\}.\n\\end{equation}\t\n\\end{proposition}\t\n\\begin{remark}\nNote that, since $X_\\varepsilon(0)$ satisfies the large deviations principle by assumption, inequality \\eqref{compactcontainment} gives the desired compact containment condition.\n\\end{remark}\n\\begin{proof}[Proof of Proposition \\ref{proposition:exponential_compact_containment}]\nFirst of all, consider $\\varphi \\equiv 0$. 
Note that, by Lemma \\ref{contfun_log_lemma}, we have $\\sup_{x,z} H_0(x,\\nabla\\Upsilon(x),z)= \\sup_{x,z}V_{x,\\nabla \\Upsilon(x)}(z)\\leq C_{\\Upsilon}$. Choose $\\beta > 0$ such that $T C_\\Upsilon - \\beta \\leq -a$. \nSince $\\Upsilon$ is continuous, there is some $c$ such that the set $G := \\left\\{x \\, \\middle| \\, \\Upsilon(x) < c + \\beta \\right\\}$ is non-empty. Note that $G$ is open and let $\\overline{G}$ be the closure of $G$. Then, $\\overline{G}$ is compact.\nLet $f := \\iota \\circ \\Upsilon$, where $\\iota$ is some smooth increasing function such that\n\\begin{equation*}\n\\iota(r) = \\begin{cases}\nr & \\text{if } r \\leq \\beta +c, \\\\\n2\\beta+ c & \\text{if } r \\geq \\beta + c + 2.\n\\end{cases}\n\\end{equation*}\nIt follows that $\\iota \\circ \\Upsilon$ equals $\\Upsilon$ on $\\overline{G}$ and is constant outside of a compact set. Set $f_\\varepsilon = f \\circ \\eta_\\varepsilon$, $g_\\varepsilon = H_\\varepsilon f_\\varepsilon$ and $g = H_{f,\\varphi}$. Note that $g(x,z) = H_{\\varphi}(x,\\nabla \\Upsilon(x),z)$ if $x \\in \\overline{G}$. Therefore, we have $\\sup_{x \\in \\overline{G}, z \\in E'} g(x,z) \\leq C_{\\Upsilon}$.\nLet $\\tau$ be the stopping time $\\tau := \\inf \\left\\{t \\geq 0 \\, \\middle| \\, X_\\varepsilon(t) \\notin \\overline{G} \\right\\}$ and let\n\\begin{equation*}\nM_\\varepsilon(t) := \\exp\\left\\{\\frac{1}{\\varepsilon}\\left( f(X_\\varepsilon(t)) - f(X_\\varepsilon(0)) - \\int_0^t g_\\varepsilon(X_\\varepsilon(s),I_\\varepsilon(s)) \\mathrm{d} s \\right)\\right\\}.\n\\end{equation*}\nBy construction, $M_\\varepsilon$ is a martingale. Let $K\\subset E$ be compact. 
We have\n\\begin{align*}\n& \\mathbb{P}\\left[\\bigcup_{t\\in[0,T]}\\left\\{X_\\varepsilon(t) \\notin \\overline{G}\\right\\}\\neq \\emptyset \\right] \\\\\n& \\leq \\mathbb{P}\\left(X_\\varepsilon(0) \\in K , \\bigcup_{t \\in [0,T]}\\left\\{X_\\varepsilon(t) \\notin \\overline{G}\\right\\}\\right) + \\mathbb{P}\\left(X_\\varepsilon (0) \\notin K\\right)\\\\\n& = \\mathbb{E}\\left[\\mathbbm{1}_{\\{X_\\varepsilon(0)\\in K\\}}\\mathbbm{1}_{\\left\\{\\bigcup_{t \\in [0,T]}\\left\\{X_\\varepsilon(t) \\notin \\overline{G}\\right\\}\\right\\}}M_\\varepsilon(\\tau) M_\\varepsilon(\\tau)^{-1} \\right] +\\mathbb{P}\\left(X_\\varepsilon (0) \\notin K\\right) \\\\\n& \\leq \\exp\\left\\{ -\\frac{1}{\\varepsilon} \\left(\\inf_{y_1 \\notin \\overline{G}} f(y_1)- f(X_\\varepsilon(0)) \\right. \\right. \\\\\n& \\hspace{4cm} \\left. \\left. - T \\sup_{y_2 \\in \\overline{G}, i \\in \\{1,\\dots,J\\}} g_\\varepsilon(y_2,i) \\right) \\right\\} \\\\\n& \\hspace{2.5cm} \\times \\mathbb{E}\\left[\\mathbbm{1}_{\\{X_\\varepsilon(0)\\in K\\}}\\mathbbm{1}_{\\left\\{\\bigcup_{t \\in [0,T]}\\left\\{X_\\varepsilon(t) \\notin \\overline{G}\\right\\}\\right\\}}M_\\varepsilon(\\tau) \\right] + \\mathbb{P}\\left(X_\\varepsilon (0) \\notin K\\right).\n\\end{align*}\nSince $\\sup_{x \\in \\overline{G}, z \\in E'} g(x,z) \\leq C_{\\Upsilon}$ and $g$ is the limit of $g_\\varepsilon$ for $\\varepsilon \\to 0$ in the sense of Definition \\ref{def:convergence}, we obtain that the term in the exponential is bounded by $ \\frac{1}{\\varepsilon}\\left(C_\\Upsilon T - \\beta \\right) \\leq - \\frac{1}{\\varepsilon} a$ for sufficiently small $\\varepsilon$. 
The expectation is bounded by $1$ due to the martingale property of $M_\\varepsilon(\\tau)$.\nWe can conclude that\n\\begin{equation*}\n\\limsup_{\\varepsilon \\to 0} \\varepsilon \\log \\mathbb{P} \\left[\\bigcup_{t\\in[0,T]} \\left\\{X_\\varepsilon(t) \\notin \\hat{K}_{K,T,a}\\right\\}\\neq\\emptyset\\right] \\leq \\max\\{-a, \\limsup_{\\varepsilon \\to 0}\\varepsilon \\log\\mathbb{P}\\left(X_\\varepsilon (0) \\notin K\\right)\\}\n\\end{equation*}\nwhere $\\hat{K}_{K,T,a}=\\overline{G}$.\n\\end{proof}\n\n\n\\begin{subsection}{Action-integral representation of the rate function}\\label{rate}\nIn this section we establish a representation of the rate function as an integral of a Lagrangian function $\\mathcal{L}$. We refer to this representation as the ``action-integral representation'' of the rate function $\\mathcal{I}$. We argue on the basis of Section 8 of \\cite{FK}, for which we need to check the following two conditions.\n\\begin{lemma}\\label{lemmaRepre}\nLet $\\mathcal{H}: \\mathbb{R}^d\\times {{\\mathbb R}}^d \\to \\mathbb{R}$ be the map given in \\eqref{eigen:repr} and $\\mathbf{H} : \\mathcal{D}(\\mathbf{H}) \\subseteq C^1(\\mathbb{R}^d) \\to C(\\mathbb{R}^d)$ the operator $\\mathbf{H} f(x) := \\mathcal{H}(x,\\nabla f(x))$. Then:\n\\begin{enumerate}[(i)]\n\\item\nThe Legendre-Fenchel transform $\\mathcal{L}(x,v) := \\sup_{p \\in \\mathbb{R}^d} (p\\cdot v - \\mathcal{H}(x,p))$ and the operator $\\mathbf{H}$ satisfy Conditions 8.9, 8.10 and 8.11 of \\cite{FK}.\n\\item For all $\\lambda > 0$ and $h \\in C(\\mathbb{R}^d)$, the comparison principle holds for\n$(1 - \\lambda \\mathbf{H}) u = h.$\n\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\nTo prove the first aim, we will show that the following items (a), (b) and (c) imply Conditions 8.9, 8.10 and 8.11 of \\cite{FK}. 
The proofs of (a), (b) and (c) are given in \\cite[Proposition 6.1]{PelSch}.\n\\begin{enumerate}[(a)]\n\\item The function $\\mathcal{L}:\\mathbb{R}^d\\times {{\\mathbb R}}^d \\rightarrow[0,\\infty]$ is lower semicontinuous and for every $C \\geq 0$, the level set\n$\n\\{(x,v)\\in \\mathbb{R}^d\\times {{\\mathbb R}}^d\\,:\\,\\mathcal{L}(x,v)\\leq C\\}\n$\nis relatively compact in $\\mathbb{R}^d \\times {{\\mathbb R}}^d$.\n\\item For all $f\\in\\mathcal{D}(H)$ there exists a right continuous, nondecreasing function $\\psi_f:[0,\\infty)\\rightarrow[0,\\infty)$ such that for all $(x,v)\\in \\mathbb{R}^d \\times \\mathbb{R}^d$,\n\\[\n|\\nabla f(x)\\cdot v|\\leq \\psi_f(\\mathcal{L}(x,v))\\qquad \\text{and} \\qquad\n\\lim_{r\\rightarrow\\infty}\\frac{\\psi_f(r)}{r}=0.\n\\]\n\\item For each $x_0\\in {{\\mathbb R}}^d$ and every $f\\in\\mathcal{D}(\\mathbf{H})$, there exists an absolutely continuous path $x : [0,\\infty) \\to \\mathbb{R}^d$ such that $x_0=x(0)$ and\n\\begin{equation}\n\\int_0^t \\mathcal{H}(x(s), \\nabla f (x(s))) \\, ds\n=\n\\int_0^t \\left[\n\\nabla f(x(s)) \\cdot \\dot{x}(s) - \\mathcal{L}(x(s),\\dot{x}(s))\n\\right] \\, ds.\n\\label{eq:action_integral:optimal_path_for_H}\n\\end{equation}\n\\end{enumerate}\nThen, regarding Condition 8.9, the operator $A f(x,v) := \\nabla f(x) \\cdot v$ on the domain $\\mathcal{D}(A) = \\mathcal{D}(H)$ satisfies (1). For (2), we can choose $\\Gamma = \\mathbb{T}^d \\times \\mathbb{R}^d$, and for $x_0 \\in \\mathbb{T}^d$, take the pair $(x,\\lambda)$ with $x(t) = x_0$ and $\\lambda(dv \\times dt) = \\delta_{0} (dv) \\times dt$. Part (3) is a consequence of (a) from above. Part (4) can be verified as follows. Let $\\Upsilon$ be the containment function used in Definition \\ref{aux} and note that the sub-level sets of $\\Upsilon$ are compact. 
Let $\\gamma \\in \\mathcal{AC}$ with $\\gamma(0)\\in K$ be such that\n\\begin{equation}\n\\int_0^T \\mathcal{L}(\\gamma(s),\\dot{\\gamma}(s))\\, ds \\leq M.\n\\end{equation}\nWe show that $\\gamma(t) \\in \\hat{K}$ for all $t\\leq T$, with $\\hat{K}$ compact. Indeed,\n\\begin{equation}\n\\begin{aligned}\n\\Upsilon(\\gamma(t))&=\\Upsilon(\\gamma(0))+\\int_0^t \\left\\langle\\nabla\\Upsilon(\\gamma(s)),\\dot{\\gamma}(s)\\right\\rangle \\,ds\\\\\n&\\leq \\Upsilon(\\gamma(0))+\\int_0^t \\mathcal{L}(\\gamma(s),\\dot{\\gamma}(s)) + \\mathcal{H}(\\gamma(s),\\nabla \\Upsilon(\\gamma(s))) \\,ds\\\\\n& \\leq \\sup_{y\\in K} \\Upsilon(y) + M +\\int_0^T \\sup_z {{\\mathcal H}}(z, \\nabla \\Upsilon(z)) \\,ds\\\\\n&=: C < \\infty.\n\\end{aligned}\n\\end{equation}\nHence, we can take $\\hat{K} = \\{z \\in {{\\mathbb R}}^d | \\Upsilon(z) \\leq C\\}$.\n\nPart (5) is implied by (b) from above. Condition 8.10 is implied by Condition 8.11 and the fact that $\\mathbf{H}1 = 0$, by Theorem \\ref{conv:th} (see Remark 8.12 (e) in \\cite{FK}). Finally, Condition 8.11 is implied by (c) above, with the control $\\lambda(dv \\times dt) = \\delta_{\\dot{x}(t)}(dv) \\times dt$.\n\nThe comparison principle for $\\mathbf{H}$ follows from Proposition \\ref{right-diag:prop} and Theorem \\ref{theorem:comparison_from_otherpaper}.\n\\end{proof}\nIn the following, we prove the integral representation of the rate function. 
Firstly, recall that Theorem \\ref{FengKurtz:th} gives the existence of a semigroup $V(t)$ and a family of operators $R(\\lambda)$, and let $\\mathbf{V}(t):C({{\\mathbb R}}^d)\\to C({{\\mathbb R}}^d)$ be the Nisio semigroup with cost function $\\mathcal{L}$, that is\n\\begin{equation}\n\\mathbf{V}(t) f(x)=\\sup_{ \\substack{ \\gamma \\in \\mathrm{AC}_{\\mathbb{R}^d}[0,\\infty) \\\\ \\gamma(0) = x}}\\left[f(\\gamma(t)) - \\int_0^t \\mathcal{L}(\\gamma(s),\\dot{\\gamma}(s)) \\, ds\\right].\n\\end{equation}\nLet $\\mathbf{R}(\\lambda)$ be the operator given by\n\\begin{equation*}\n\t\t\t\\mathbf{R}(\\lambda) h(x) = \\sup_{\\substack{\\gamma \\in \\mathcal{A}\\mathcal{C}\\\\ \\gamma(0) = x}} \\int_0^\\infty \\lambda^{-1} e^{-\\lambda^{-1}t} \\left[h(\\gamma(t)) - \\int_0^t \\mathcal{L}(\\gamma(s),\\dot{\\gamma}(s))\\, \\mathrm{d} s\\right] \\, \\mathrm{d} t.\n\\end{equation*}\n\nThe proof of the result below is based on the following four main steps.\n\\begin{itemize}\n\\item Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP} on page \\pageref{SF:fig:CP-diagram-in-proof-of-CP} shows that $R(\\lambda)h$ is the unique function that is a sub- and supersolution to the equations $(1 - \\lambda H_\\dagger )f = h$ and $(1 - \\lambda H_\\ddagger) f = h$ respectively.\n\\item $\\mathbf{R}(\\lambda) h$ has been proven to be the unique viscosity solution to $(1 - \\lambda \\mathbf{H}) f = h$. Then, again by Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP}, we must have $R(\\lambda) h = \\mathbf{R}(\\lambda)h$. \n\\item Starting from the equality of resolvents, we work towards an equality for the semigroups $V(t)$ and $\\mathbf{V}(t)$. 
\n\\item Recalling that the rate function in Theorem \\ref{maintheorem} is given by\n\\begin{equation}\nI(x)= I_0(x(0)) + \\sup_{k\\in \\mathbb{N}} \\sup_{(t_1,\\dots,t_k)} \\sum_{i=1}^k I_{t_i-t_{i-1}}(x(t_i)\\vert x(t_{i-1}))\n\\end{equation}\nwith $I_t(z\\vert y)=\\sup_{f\\in C(E)}[f(z)-V(t)f(y)]$, it is not difficult to see that, if $V(t)=\\mathbf{V}(t)$, then $I_t(z\\vert y)= \\inf_{\\substack{\\gamma : \\gamma(0)=y, \\\\ \\gamma(t)=z}} \\int_0^t \\mathcal{L}(\\gamma(s),\\dot{\\gamma}(s))\\, ds$.\n\\end{itemize}\n\n\n\\begin{theorem}[Integral representation of the rate function]\\label{ratefunction:th}\nThe rate function of Theorem \\ref{maintheorem} has the following representation\n\\begin{equation}\n\\mathcal{I}(x)=\\begin{cases}\n\\mathcal{I}_0(x(0)) +\\int_0^\\infty\\mathcal{L} \\left(x(t),\\dot{x}(t)\\right) \\, dt &\\quad \\text{if } x\\in AC([0,\\infty); \\mathbb{R}^d),\\\\\n\\infty & \\quad \\text{else},\n\\end{cases}\n\\end{equation}\nwhere %\n$\\mathcal{L}(x,v) =\\sup_{p \\in \\mathbb{R}^d}\\left[p \\cdot v - \\mathcal{H}(x,p)\\right]\n$\nis the Legendre transform of $\\mathcal{H}$.\n\\end{theorem}\n\n\\begin{proof}\nFollowing the above-mentioned steps, we recall that, as stated by Theorem \\ref{FengKurtz:th}, there exists a family of operators $R(\\lambda) : C_b({{\\mathbb R}}^d) \\rightarrow C_b({{\\mathbb R}}^d)$, such that for $\\lambda > 0$ and $h \\in C_b({{\\mathbb R}}^d)$, the function $R(\\lambda)h$ is the unique function that is a viscosity solution to $(1- \\lambda H ) f = h$ and such that\n\\begin{equation} \\label{eqn:convergence_R_to_V}\n\\lim_{m \\rightarrow \\infty} \\vn{R\\left(\\frac{t}{m}\\right)^m f - V(t)f } = 0 \\qquad \\text{for all $f$ in a dense set $D\\subseteq C_b({{\\mathbb R}}^d)$.}\n\\end{equation}\nSee also \\autocite[Theorem 7.10]{Kraaij2020} or \\autocite[Theorem 7.17]{FK} for the construction of the operators $R(\\lambda)$.\nBy \\autocite[Proposition 6.1]{KraSch} (or \\cite[Chapter 8]{FK}), 
$\\mathbf{R}(\\lambda)h$ is the unique viscosity solution to $(1 - \\lambda \\mathbf{H}) f = h$. Then, Figure \\ref{SF:fig:CP-diagram-in-proof-of-CP} on page \\pageref{SF:fig:CP-diagram-in-proof-of-CP} shows that it must equal $R(\\lambda) h$. \nMoreover, we find by \\cite[Lemma 8.18]{FK} (whose assumptions are implied by Lemma \\ref{lemmaRepre} above) that for all $f \\in C_b({{\\mathbb R}}^d)$ and $x \\in {{\\mathbb R}}^d$\n\\begin{equation} \\label{eqn:convergence_bfR_to_bfV}\n\t\t\t\\lim_{m \\rightarrow \\infty} \\mathbf{R}\\left(\\frac{t}{m}\\right)^m f(x) = \\mathbf{V}(t)f(x).\n\\end{equation}\nWe conclude from \\eqref{eqn:convergence_R_to_V} and \\eqref{eqn:convergence_bfR_to_bfV} that $V(t)f = \\mathbf{V}(t)f$ for all $t$ and $f \\in D$.\nNow recall that $D$ is sequentially strictly dense, so that equality for all $f \\in C_b({{\\mathbb R}}^d)$ follows if $V(t)$ and $\\mathbf{V}(t)$ are sequentially continuous. The first statement follows by \\cite[Theorem 7.10]{Kraaij19} and \\cite[Theorem 6.1]{Kraaij2020}. The second statement follows by \\cite[Lemma 8.22]{FK}. We conclude that $V(t)f = \\mathbf{V}(t)f$ for all $f \\in C_b({{\\mathbb R}}^d)$ and $t \\geq 0$. 
Using Theorem 8.14 of \\cite{FK} and the convexity of $v\\mapsto \\mathcal{L}(x,v)$, we get the integral representation.\n\\end{proof}\n\n\\end{subsection}\n\n\n\\begin{section}{A more general theorem}\\label{general}\n\nAnalysing the proofs in the previous sections, we can state the following facts:\n\\begin{itemize}\n\\item In the proof of the large deviation principle, the main steps are:\n\\begin{enumerate}\n\\item convergence of the nonlinear operators $H_\\varepsilon$ to a multivalued operator $H$,\n\\item the comparison principle for $(1-\\lambda H)f= h$.\n\\end{enumerate}\n\\item The existence of an eigenvalue ${{\\mathcal H}} (x,p)$ and its convexity, coercivity and continuity are crucial for the proof of the comparison principle, and\n\\begin{itemize}\n\\item the arguments for existence, convexity and coercivity (proofs of Propositions \\ref{eigenvalueProb} and \\ref{conv:th}) are based on the fact that ${{\\mathcal H}} (x,p)$ is the eigenvalue of an operator of the type $B_{x,p} + V_{x,p} + R_x$, where the three operators satisfy particular properties such as coercivity and the maximum principle,\n\\item to show the continuity of ${{\\mathcal H}}$, the representation \\eqref{eigen:repr} is needed. In particular, some properties of $V$ and $I$, such as $\\Gamma$-convergence, are necessary.\n\\end{itemize}\n\\end{itemize}\nThe above observations allow for a straightforward generalization in Theorem \\ref{maintheorem_gen} and justify the assumptions of the next subsection. In this section we indeed prove the large deviation principle for a general switching Markov process. 
In particular, we will study the Markov process $(X_t^\\varepsilon, I_t^\\varepsilon)$, that is, the solution to the martingale problem corresponding to the following operator\n\n\\begin{equation}\\label{generator_gen}\nA_\\varepsilon f(x,i)\n:=\nA_\\varepsilon^{(i)} f(\\cdot,i) (x)\n+\n\\sum_{j = 1}^J r_{ij}(x,x\/\\varepsilon) \n\\left[\nf(x,j) - f(x,i)\n\\right]\n\\end{equation}\nwhere $\nA_\\varepsilon^{(i)} : \\mathcal{D}(A_\\varepsilon^{(i)}) \\subseteq C({{\\mathbb R}}^d) \\rightarrow C({{\\mathbb R}}^d)\n$\nis the generator of a strong ${{\\mathbb R}}^d$-valued Markov process with domain $\\mathcal{D}(A_\\varepsilon^{(i)})$.\n \n\\begin{subsection}{Assumptions}\\label{assumption}\n\nHere we state the assumptions needed.\n\n\\begin{assumption}\\label{assumption1}\nThe nonlinear generators $H_\\varepsilon f = \\varepsilon e^{-f\/\\varepsilon} A_\\varepsilon e^{f\/\\varepsilon}$ admit an extended limit $H \\subseteq ex - LIM H_\\varepsilon$ with $H$ of the type \n\\begin{align*}\nH:=\\left\\{(f, H_{f,\\varphi}) \\, : \\,f \\in C^2(\\mathbb{R}^d), \\; H_{f, \\varphi} \\in C( \\mathbb{R}^d\\times E^\\prime) \\text{ and }\\varphi \\in C^2(E^\\prime)\\right\\}.\n\\end{align*}\nFor all $\\varphi$ there exists a map $H_{\\varphi}:{{\\mathbb R}}^d \\times {{\\mathbb R}}^d \\times E^\\prime \\to {{\\mathbb R}}$ such that for all $f\\in D(H)$, $x\\in{{\\mathbb R}}^d$ and $z\\in E^\\prime$, $H_{f,\\varphi}(x,z)=H_{\\varphi}(x,\\nabla f(x), z)$.\nMoreover, $H_\\varphi$ has the representation\n$$H_\\varphi(x,p,z)=e^{- \\varphi(z)}\\left[B_{x,p} +V_{x,p}+R_x\\right]e^\\varphi (z)$$\nwith $p=\\nabla f(x)$ and $B_{x,p}, V_{x,p}, R_x$ such that\n\\begin{enumerate}[(i)]\n\\item For all $p\\in {{\\mathbb R}}^d $ there exists an eigenfunction $g_{x,p} \\in C^2({{\\mathbb R}}^d \\times J)$ with $g_{x,p}^i>0$ and an eigenvalue $\\mathcal{H}(x,p)$ such that\n$\n\\left[B_{x,p} +V_{x,p}+R_x\\right]g_{x,p}=\\mathcal{H}(x,p)g_{x,p}.\n$\n\\item $T_{x,p}=B_{x,p} + R_x$ satisfies the maximum principle: 
\n\nif $(i_m,y_m)=\\argmin \\varphi$, then $\\left(e^{-\\varphi}T_{x,p}e^{\\varphi}\\right)(i_m,y_m) \\geq 0$.\n\\item $p \\mapsto V_{x,p}$ is coercive uniformly with respect to $x$.\n\\item $p \\mapsto B_{x,p}$ and $p \\mapsto V_{x,p}$ are convex uniformly on compact sets.\n\\end{enumerate}\n\n\\end{assumption}\n\nThe above assumption implies the convergence of the nonlinear operators and the existence of the principal eigenvalue $\\mathcal{H}$. Moreover, it implies convexity and coercivity of $\\mathcal{H}(x,p)$.\n\n\\begin{assumption}\\label{assumption2}\nThe eigenvalue $\\mathcal{H}$ is of the type $\\mathcal{H}(x,p) = \\sup_{\\mu \\in \\mathcal{P}(E')} \\left[\\Lambda (x,p,\\mu) - I_{x,p}(\\mu)\\right]$ with \n\\begin{equation}\n\\Lambda(x,p,\\mu)=\\int_{E^\\prime}V_{x,p}\\,d\\mu, \\qquad\\text{and}\\qquad I_{x,p}(\\mu)= -\\inf_{u>0}\\int_{E^\\prime}\\frac{(B_{x,p}+R_x)u}{u}\\, d\\mu,\n\\end{equation}\nand the following properties hold\n\\begin{enumerate}[(i)]\n\\item $I_{x,p}$ satisfies the assumptions of Lemma \\ref{gammaConv},\n\\item $\\Lambda (x,p,\\mu)$ is continuous and $\\norma{\\Lambda(x,p,\\mu)}_\\Theta < \\infty$,\n\\item there exists a containment function $\\Upsilon$ for $\\Lambda$ in the sense of Definition \\ref{contfun_def},\n\\item for all $x$, there exists a unique measure $\\mu_x^*$ such that $I_{x,0}(\\mu_x^*)=0$.\n\\end{enumerate}\n\\end{assumption}\nAssumption \\ref{assumption2} implies the continuity of $\\mathcal{H}$.\n\n\\end{subsection}\n\n\\begin{subsection}{Large Deviation for a Switching Markov process}\\label{mainTheorem-sub}\n\nWe are ready to state the general theorem. 
\n\\begin{theorem}[Large deviation for a Switching Markov process]\\label{maintheorem_gen}\nLet $(X_t^\\varepsilon, I_t^\\varepsilon)$ be the solution of the martingale problem corresponding to the operator given in \\eqref{generator_gen}.\nSuppose that Assumptions \\ref{assumption1} and \\ref{assumption2} hold and that, at time zero, the family of random variables $\\{X^\\varepsilon(0)\\}_{\\varepsilon > 0}$ satisfies a large deviation principle in $\\mathbb{R}^d$ with good rate function $\\mathcal{I}_0 :\\mathbb{R}^d \\rightarrow [0,\\infty]$. Then the spatial component $\\{X_t^\\varepsilon\\}$ satisfies a large deviation principle in $C_{\\mathbb{R}^d}[0,\\infty)$. \n\\end{theorem}\nThe proof of the above theorem follows the same lines as in Subsection \\ref{comparison-sub}. \n\\end{subsection}\n\\end{section}\n\n\n\\section{Introduction}\n\n\\review{Fluid-structure interaction (FSI) problems often occur in Engineering (aircraft and automotive industries, wind turbines) as well as in medical applications (cardiovascular systems, artificial organs, artificial valves, medical devices, etc.). Today the design of such systems usually requires advanced studies, and high-fidelity (HF) numerical simulation has become an essential tool of computer-aided analysis. However, computational FSI is known to be very time-consuming even on high-performance computing facilities. Usually, engineering problems are parameterized, and the search for suitable designs requires numerous computer experiments, leading to prohibitive computational times. \nFor particular applications such as the tracking of drug carrier capsules flowing in blood vessels, it would be ideal to have real-time simulations for a better understanding of the behaviour of the dynamics and for efficiency assessment. 
Unfortunately, real-time high-fidelity FSI simulations are today far from being achievable with current High Performance Computing (HPC) facilities. \n\nA current trend is to use machine learning (ML) or artificial intelligence (AI) tools such as artificial neural networks (ANN). Such tools learn numerical simulations from HF solvers and try to map input parameters to output criteria in an efficient way, with response times far less than HF ones, say 3 or 4 orders of magnitude smaller. In some sense, the heavy HF computations and the training stage are done offline, and trained ANNs can be used online for real-time evaluations and analysis. However, ML and ANN today are not fully satisfactory for dynamical problems, and\/or the training stage itself may be time-consuming, thus requiring more Central Processing Unit (CPU) time.\nAnother option is the use of model order reduction (MOR). Reduced-order modeling (ROM) can be seen as a 'grey-box' supervised ML methodology, taking advantage of the expected low-order dimensionality of the FSI mechanical problem. By 'grey-box' we mean that the low-dimensional encoding of the ML process is based on mechanical principles and a man-made preliminary dimensionality reduction study. This allows for better control of the ROM accuracy and behaviour. \nThere are two families of MOR: intrusive and non-intrusive approaches. The intrusive approaches use the physical equations. The low-order model is derived by setting the physical problem on a suitable low-dimensional space. The accuracy can be very good, but the price to pay is the generation of a new code, which can be a tedious and long task. The non-intrusive approach does not require heavy code development. It is based on HF simulation results used as input data. Although it is not based on high-fidelity physical equations, a non-intrusive approach can include a priori physical information, such as 
meaningful physical features, a prototype system of equations, pre-computed principal components, consistency with physical principles, etc.\n\nIn the recent literature, efficient intrusive ROMs for FSI\nhave been proposed e.g. in \\citep{Quarteroni2016}. But to our knowledge\nthere are far fewer contributions on non-intrusive ROMs dedicated to FSI. \n\n\nIn this paper, we propose a data-driven model order reduction approach for FSI problems which is consistent with the equations of kinematics and is designed from a low-order meaningful system of equations.\nAs a case study, we focus on the motion of a microcapsule, a droplet surrounded by a membrane, subjected to a confined or unconfined Stokes flow. \n\nArtificial microcapsules can be used in various industrial applications such as cosmetics \\citep{Miyazawa2000, Casanova2016}, the food industry \\citep{Yun2021} and biotechnology, where drug targeting is a high-potential application \\citep{Ma2013,Abuhamdan2021, Ghiman2022}.\nOnce in suspension in an external fluid, capsules are subjected to hydrodynamic forces, which may lead to large membrane deformation, wrinkle formation or damage. \nThe numerical model must be able to capture the time evolution of the nonlinear 3D large deformations of the capsule membrane. \nDifferent numerical strategies are possible to solve the resulting large systems of equations \\citep{Lefebvre2007, Hu2012, Ye2017, Tran2020}. However, they all have long computational times.\n\nDifferent approaches have been used over the past decade to accelerate the computations, such as HPC (e.g. \\cite{Zhao2010}) and Graphics Processing Units (e.g. 
\\cite{Matsunaga2014}).\nMore recently, reduced order models have been proposed to predict the motion of capsules suspended in an external fluid flow.\nIn~\\citet{Quesada2021}, the authors used the large amount of data generated by numerical simulations to show how relevant it is to recycle these data to produce a lower-dimensional problem using physics-based reduced order models. However, their method can only predict the steady-state capsule deformed shape. \n\\citet{Boubehziz2021} showed for the first time the efficiency of a data-driven model-order reduction technique to predict the dynamics of a capsule in a microchannel. However, the method is cumbersome, as it requires two POD bases: one to predict the velocity field, the other to capture the shape evolution over time. The solution is then reconstructed in the parameter space thanks to a diffuse approximation (DA) strategy.\n\n\nThe proposed method serves different objectives.\nWe have designed the method to be non-intrusive for practical use with an existing high-fidelity FSI solver (also referred to as the Full-Order Model, or FOM). This means that the ROM methodology should be data-driven.\nWe also want the ROM to be consistent with the equations of kinematics. The model must thus return the displacement $\\{u\\}$ and velocity $\\{v\\}$ fields from a few snapshots provided by the FOM. \nIt must also be able to predict the solution for any parameter vector in a predefined admissible domain.\nFinally, the kinematics-consistent data-driven reduced-order model of capsule dynamics must ideally open the way to real-time simulations. \nTo do so, we use a coupling between Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD), as well as a Tikhonov regularization for robustness purposes and interpolation to predict solutions for any parameter value. 
\n\n\nAs indicated above, we mainly consider the case of an initially spherical capsule flowing in a microfluidic channel with a square cross-section.\nThe corresponding FOM was developed by \\citet{Hu2012} and used to get a complete numerical database of the three-dimensional capsule dynamics as a function of the parameters of the problem: the capsule-to-tube confinement ratio, hereafter referred to as size ratio $a\/\\ell$ and the capillary number $Ca$, which measures the ratio between the viscous forces acting onto the capsule membrane and the membrane elastic forces. \nFor clarity reasons, different ROMs are introduced with increasing levels of generality, as detailed in Table~\\ref{tab:1}. First, we consider a fixed parameter vector, and get a space-time ROM in the form of a low-order dynamical system. Next, we generate such $N$ ROMs for the~$N$ parameter samples that fill the admissible parameter domain, and then assess the uniform accuracy (space-time accuracy over the whole sample set). Finally, we propose a strategy to derive a general space-time-parameter ROM for any value of the parameter vector $(Ca,a\/\\ell)$ in the admissible space.\n\n\n\\begin{table}\n \\centering\n \\begin{tabular}{|c|c|c|c|}\\hline\n Nb of parameter & ROM output type & Verification & Related Section(s) \\\\ \n samples for data && (accuracy) & in the paper\\\\ \\hline\\hline\n 1 & 1 space-time ROM & Space-time accuracy & Sections \\ref{sec:ROM} and \\ref{sec:1example} \\\\ \\hline\n $N$ & $N$ space-time ROM & Uniform space-time & Section~\\ref{sec:database} \\\\ \n && accuracy on the & \\\\ \n && sample set & \\\\ \\hline\n $N$ & 1 space-time-parameter ROM & Uniform accuracy & Section~\\ref{sec:interpolation} \\\\\n & (any parameter couple) && \\\\ \\hline\n \\end{tabular}\n \\caption{\\review{General methodology, stepwise procedure for ROM construction of increasing level of generality.}}\n \\label{tab:1}\n\\end{table}\n\n\nThe paper is organized as follows. 
First, we present the physics of the problem and the FOM in Section \\ref{sec:FOM}. \nThe strategy used to develop a non-intrusive space-time ROM is detailed in Section \\ref{sec:ROM}. The results are presented for a given configuration in Section \\ref{sec:1example}. The accuracy is then estimated in Section \\ref{sec:database} on the entire database, formed by all the cases that have reached a stationary state. Finally in Section \\ref{sec:interpolation} we present the methodology of space-time-parameter ROM. The ROM accuracy is confirmed by some numerical experiments.}\n\n\n\\section{Full-order microcapsule model, parameters and quantities of interest}\n\\label{sec:FOM}\n\n\\subsection{Problem description}\n\nAn initially spherical capsule of radius $a$ flows within a long microfluidic channel having a constant square section of side $2\\ell$ (Figure \\ref{IllustrationPb}). The suspending fluid and capsule liquid core are incompressible Newtonian fluids with the same kinematic viscosity $\\eta$. \n\nThe capsule liquid core is enclosed by a hyperelastic isotropic membrane. Its thickness is assumed to be negligible compared to the capsule dimension. The membrane is thus modeled as a surface devoid of bending stiffness with surface shear modulus $G_S$. \nThe two non-dimensional governing parameters of the problem are the size ratio $a\/\\ell$ and the capillary number \n\\begin{equation}\nCa= \\eta V\/G_S\n\\end{equation}\nwhere $V$ is the mean axial velocity of the undisturbed external Poiseuille flow.\\\\\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{CapsuleInChannel.eps}\n\\caption{Sketch of the model geometry showing an initially spherical capsule of radius $a$ placed in a channel with a constant square section of side $2\\ell$. }\n\\label{IllustrationPb} \n\\end{figure}\n\nThe flow Reynolds number is assumed to be very small. 
We solve the Stokes equations in the external ($\\beta = 1$) and internal fluids ($\\beta = 2$), together with the membrane equilibrium equation, to determine the dynamics of the deformable capsule within the microchannel. \n\nFor the fluid problem, we denote by $\\vbold^{(\\beta)}$, $\\boldsymbol{\\sigma}^{(\\beta)}$ and $p^{(\\beta)}$ the velocity, stress and pressure fields in the two fluids. These fields are non-dimensionalized using $\\ell$ as characteristic length, $\\ell\/V$ as characteristic time and $G_S \\ell$ as characteristic force. \nThe non-dimensional Stokes equations \n \\begin{equation}\n\\nabla p^{(\\beta)} = Ca \\nabla^2\\vbold^{(\\beta)},\\quad\\nabla\\cdot\\vbold^{(\\beta)}=0,\\quad\\beta=1,2\n\\label{eq:Stokes}\n\\end{equation}\nare solved in the domain bounded by the cross sections $S_{in}$ at the tube entrance and $S_{out}$ at the exit. These cross sections are assumed to be both located far from the capsule. The reference frame $(O,\\bm{x}, \\bm{y}, \\bm{z})$ is centered at each time step on the capsule center of mass $O$ in the high-fidelity code, but the displacement of the capsule center of mass along the tube axis~$\\bm{Oz}$ is computed.\n\nThe boundary conditions of the problem are the following:\n\n\\begin{itemize}\n\\item~The velocity field is assumed to be the unperturbed flow field on $S_{in}$ and $S_{out}$, i.e. the flow disturbance vanishes far from the capsule.\n\\item~The pressure is uniform on $S_{in}$ and $S_{out}$.\n\\item~A no-slip boundary condition is assumed at the channel wall $W$ and on the capsule membrane $M$:\n\\begin{equation}\n\\forall \\bm{x}\\in W, \\vbold(\\bm{x})=\\bm{0};\\quad \n\\forall \\bm{x} \\in M,\\ \\vbold(\\bm{x})=\\frac{\\partial \\ubold}{\\partial t}.\n\\label{eq:noslipcondition}\n\\end{equation}\n\\item~The load on the capsule membrane $M$ is continuous, i.e. 
the non-dimensionalized external load per unit area $\\bm{q}$ exerted by both fluids is due to the viscous traction jump:\n\\begin{equation} \\label{eq:traction_jump}\n(\\boldsymbol{\\sigma}^{(1)}-\\boldsymbol{\\sigma}^{(2)})\\bcdot\\boldsymbol{n} = \\bm{q}\n\\end{equation} where $\\boldsymbol{n}$ is the unit normal vector pointing towards the suspending fluid. \n\\end{itemize}\n\nTo close the problem, the external load $\\bm{q}$ on the membrane is deduced from the local equilibrium equation, which, in the absence of inertia, can be written as\n\\begin{equation}\n \\bnabla_s \\bcdot \\boldsymbol \\tau + \\bm{q} = \\boldsymbol 0\n \\label{eq:equi-mb}\n\\end{equation}\nwhere $\\bm{\\tau}$ is the non-dimensionalized Cauchy tension tensor (forces per unit arclength in the deformed plane of the membrane) and $\\bnabla_s \\bcdot$ is the surface divergence operator. We assume that the membrane deformation is governed by the strain-softening neo-Hookean law. The principal Cauchy tensions can then be expressed as\n\\begin{equation}\n\\tau_1 = \\frac{G_S}{\\lambda_1 \\lambda_2}\\left[ \\lambda_1^2 - \\frac{1}{(\\lambda_1 \\lambda_2)^2}\\right] ~(\\text{likewise for } \\tau_2),\n\\end{equation}\nwhere $\\lambda_1$ and $\\lambda_2$ are the principal extension ratios measuring the in-plane deformation. 
At the last step, nodes are added at the middle of all the element edges to obtain a capsule mesh with 1280 $P_2$ triangular elements and 2562 nodes, which corresponds to a characteristic mesh size $\\Delta h_C= 0.075\\,a$. The channel mesh of the entrance surface $S_{in}$ and exit surface $S_{out}$ and of the channel wall is generated using \n\\texttt{Modulef} (INRIA, France). The central portion of the channel, where the capsule is located, is refined. The channel mesh comprises 3768 $P_1$~triangular elements and 1905~nodes. \n\n\nAt time $t=0$, a spherical capsule is positioned with its center of mass $O$ on the channel axis. \nAt each time step, the in-plane stretch ratios $\\lambda_1$ and $\\lambda_2$ are computed from the nodal deformation.\n The elastic tension tensor $\\boldsymbol \\tau$ is then deduced from the values of $\\lambda_1$ and $\\lambda_2$. The finite element method is used to solve the weak form of the membrane equilibrium equation~\\eqref{eq:equi-mb} and determine the external load $\\bm{q}$.\n\nApplying the boundary integral method, the velocity of the nodes on the capsule membrane reads \\citep{Pozrikidis1992}:\n\\begin{equation}\n\\vbold(\\bm{x}) = \\vbold^\\infty(\\bm{x}) - \\frac{1}{8\\pi\\mu_F} \\left[ \\int_M \\bm{J}(\\bm{r})\\cdot\\bm{q}\\, dS(\\bm{y}) + \\int_W \\bm{J}(\\bm{r})\\cdot \\bm{f}\\, dS(\\bm{y}) -\\Delta P \\int_{S_{out}} \\bm{J}(\\bm{r})\\cdot \\bm{n}\\, dS(\\bm{y})\\right]\n\\label{eq:flo1}\n\\end{equation}\nfor any $\\bm{x}$ in the spatial domain when the suspending and internal fluids have the same viscosity. The vector $\\bm{f}$ is the disturbance wall friction due to the capsule, $\\Delta P$ is the additional pressure drop and $\\bm{r}=\\bm{y}-\\bm{x}$. 
\n\nTo update the position of the membrane nodes, the nodal displacement $\\ubold$ is computed by integrating equation \\eqref{eq:noslipcondition} in time.\nThe procedure is repeated until the desired non-dimensional time $VT\/\\ell$.\n\nFor later development, it is more convenient to work with the condensed abstract form of the system. The full-order semi-discrete FSI system to solve consists of the kinematics and the membrane equilibrium algebraic equations:\n\\begin{align}\n& \\{\\dot u\\rbrace = \\{v\\rbrace, \\qquad t\\in [0,T],\\label{eq:flo2}\\\\\n& \\{v\\rbrace = \\varphi(\\{u\\rbrace) \\label{eq:flo3}\n\\end{align}\nwhere $\\varphi$ is a nonlinear mapping from $\\mathbb{R}^{3d}$ to~$\\mathbb{R}^{3d}$ and $d$ is the number of nodes on the membrane.\nRegarding time discretization, a Runge-Kutta Ralston scheme is used:\n\\begin{align*}\n & \\{\\widehat u^{n+2\/3}\\} = \\{u^n\\} + \\frac{2}{3}\\Delta t \\, \\{v^{n}\\},\n \\\\\n & \\{\\widehat v^{n+2\/3}\\} = \\{\\varphi\\}(\\{\\widehat u^{n+2\/3}\\}), \\\\\n & \\{u^{n+1}\\} = \\{u^n\\} + \\Delta t \\, \\left(\\frac{1}{4}\\{v^{n}\\}+\\frac{3}{4}\\{\\widehat v^{n+2\/3}\\}\\right),\\\\\n & \\{v^{n+1}\\} = \\{\\varphi\\}(\\{u^{n+1}\\}), \\\\\n & \\{u^0\\} = \\{0\\},\\ \\{v^{0}\\} = \\{\\varphi\\}(\\{0\\})\n\\end{align*}\n\\noindent where $\\Delta t>0$ is a constant time step and $\\{u\\rbrace^n$ and $\\{v\\rbrace^n$ respectively represent the discrete membrane displacement field and the discrete membrane velocity field at discrete time~$t^n=n\\Delta t$. 
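As an illustration, the Ralston time advance above can be sketched in a few lines of Python. The mapping `phi` below is a hypothetical linear stand-in for the finite-element/boundary-integral solve of the actual FOM, used only so that the example is self-contained:

```python
import numpy as np

def ralston_step(u, v, phi, dt):
    """One Ralston (RK2) step for du/dt = v with v = phi(u)."""
    u_hat = u + (2.0 / 3.0) * dt * v            # predictor at t + (2/3) dt
    v_hat = phi(u_hat)                          # membrane equilibrium solve
    u_new = u + dt * (0.25 * v + 0.75 * v_hat)  # Ralston weights 1/4, 3/4
    v_new = phi(u_new)
    return u_new, v_new

# Hypothetical stand-in phi(u) = c - u for the true nonlinear mapping.
rng = np.random.default_rng(0)
d = 6
c = rng.standard_normal(d)
phi = lambda u: c - u

u, dt = np.zeros(d), 0.01
v = phi(u)                                      # v^0 = phi(0)
for _ in range(1000):                           # integrate up to t = 10
    u, v = ralston_step(u, v, phi, dt)
# du/dt = c - u relaxes towards the steady state u = c
assert np.allclose(u, c, atol=1e-3)
```

Each step costs two evaluations of `phi`, which is the dominant expense in the full-order solver.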
The initial condition is simply $\\{u\\rbrace^0=\\lbrace 0\\rbrace $.\n\nBecause of its explicit nature, the whole numerical scheme is subject to a Courant-Friedrichs-Lewy (CFL) type stability condition on the time step \\citep{Walter2010}.\nThe numerical method is conditionally stable if the time step $\\Delta t$ satisfies\n\\begin{equation}\n\\frac{V}{\\ell}\\Delta t < O\\left( \\frac{\\Delta h_C}{\\ell}Ca\\right).\n\\label{StabilityCondition}\n\\end{equation}\nFrom the computational point of view, the resolution of~\\eqref{eq:flo3} at each time step requires i) the computation of the disturbance wall friction $\\bm{f}$ at all the wall nodes, ii) the additional pressure drop $\\Delta P$, iii) the traction jump $\\bm{q}$ at the membrane nodes and iv) the boundary integrals for each node.\nThe resulting numerical FOM may thus be time-consuming, depending on the membrane discretization and the number of time steps. Figure \\ref{SimuTime} shows the evolution of the computational cost with the time step when $a\/\\ell=0.7$, considering the mesh discretization described above and a workstation equipped with two Intel\\textsuperscript{\\textregistered} Xeon\\textsuperscript{\\textregistered} Gold 6130 processors (2.1 GHz). A week of computation is sometimes necessary to simulate the dynamics of an initially spherical capsule in a microchannel over the non-dimensional time $VT\/\\ell=10$. 
Simulations were performed on a workstation equipped with two Intel\\textsuperscript{\\textregistered} Xeon\\textsuperscript{\\textregistered} Gold 6130 processors (2.1 GHz).}\n\\label{SimuTime} \n\\end{figure}\n\n\n\n\nFor that reason, a model-order reduction (MOR) strategy is studied in this paper, in order to reduce the computational time by several orders of magnitude. ROMs try to approximate the solutions of the initial problem by strongly lowering the dimensionality of the numerical model, generally using a reduced basis (RB) of suitable functions, and then deriving a low-order system of equations. \n\n\nIn the case of differential algebraic equations (DAE) like~\\eqref{eq:flo2}-\\eqref{eq:flo3}, the reduced system of equations should also be a DAE. Note that it is often possible to reformulate DAEs as a system of ordinary differential equations (ODEs) \\citep{Ascher1998}. \nIn the next section, we give details on the chosen ROM methodology for the particular case of the FSI capsule problem. 
\n\n\\section{\\review{Non-intrusive space-time model-order reduction strategy}}\n\\label{sec:ROM}\n\n\\review{In this section, the parameter couple $\\bm{\\theta}=(Ca,a\/\\ell)$ is fixed, thus we omit the dependency of the solutions with respect to $\\bm{\\theta}$ for the sake of simplicity}.\nFor the derivation of the ROM, we consider the semi-discrete time-continuous version of the FOM, i.e.~\\eqref{eq:flo2}-\\eqref{eq:flo3}.\n\\subsection{Dimensionality reduction and reduced variables for displacements and velocities}\n %\nAssume first that, for any $t\\in [0,T]$, the discrete velocity field can be accurately approximated according to the expansion\n\\begin{equation}\n \\{v\\rbrace(t) \\approx \\sum_{k=1}^K \\beta_k(t)\\, \\lbrace \\phi_k\\rbrace \n\\label{eq:flo6}\n\\end{equation}\n\n\\noindent for some orthonormal modes $\\lbrace \\phi_k\\rbrace \\in\\mathbb{R}^d$ and real coefficients $\\beta_k(t)$.\nThe truncation rank $K\\leq d$ is of course expected to be far smaller than $d$, as in any ROM methodology. From the kinematics equations we have\n\\begin{align*}\n\\{u\\rbrace(t) &= \\int_0^t \\{v\\rbrace(s)\\, ds\\\\\n &\\approx \\sum_{k=1}^K \\left(\\int_0^t \\beta_k(s)\\, ds\\right) \\lbrace \\phi_k\\rbrace \n\\end{align*}\nso that the displacement field can be accurately represented by \n\\begin{equation}\n\\{u\\rbrace(t) \\approx \\sum_{k=1}^K\\alpha_k(t)\\, \\lbrace \\phi_k\\rbrace \n\\label{eq:flo7}\n\\end{equation}\nwhere $\\displaystyle{\\alpha_k(t)=\\int_0^t \\beta_k(s)\\, ds}$. The coefficients\n$(\\alpha_k(t))_k$ and $(\\beta_k(t))_k$ are called the reduced variables. 
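As a quick numerical illustration of this kinematic consistency (with synthetic modes and coefficients rather than capsule data), integrating the reduced velocity coefficients and lifting by the mode matrix reproduces the time integral of the reconstructed velocity field:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(1)
d, K = 120, 4
Q = np.linalg.qr(rng.standard_normal((d, K)))[0]      # orthonormal modes phi_k
t = np.linspace(0.0, 1.0, 201)
# Invented reduced velocity coefficients beta_k(t), shape K x Nt
beta = np.stack([np.cos(2 * np.pi * (k + 1) * t) for k in range(K)])

alpha = cumulative_trapezoid(beta, t, initial=0.0)    # alpha_k(t) = int_0^t beta_k
u = Q @ alpha                                         # reduced displacement lift
v = Q @ beta                                          # reduced velocity lift
# u(t) = int_0^t v(s) ds holds because the same modes carry both fields
assert np.allclose(u, cumulative_trapezoid(v, t, initial=0.0))
```

The equality is exact (up to round-off) by linearity of the integral, which is precisely why a single basis can serve both displacement and velocity.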
For the sake of readability and mental correspondence between full-order unknowns and reduced ones, we will use the convenient notations\n\\[\n\\bm{\\alpha}(t)=(\\alpha_1(t),...,\\alpha_K(t))^T,\\quad\n\\bm{\\beta}(t)=(\\beta_1(t),...,\\beta_K(t))^T\n\\]\nwhere the superscript $^T$ denotes transposition.\nThe condensed matrix forms of~\\eqref{eq:flo7} and~\\eqref{eq:flo6}\nrespectively are\n\\begin{equation}\n\\{u\\rbrace(t) \\approx Q\\, \\bm{\\alpha}(t),\\quad\n\\{v\\rbrace(t) \\approx Q\\, \\bm{\\beta}(t),\n\\end{equation}\nwhere $Q=[\\lbrace \\phi_1\\rbrace ,...,\\lbrace \\phi_K\\rbrace ]\\in\\mathscr{M}_{dK}$. Since the modes $\\lbrace \\phi_k\\rbrace $ are assumed to be orthonormal (for the standard Euclidean inner product), the matrix $Q$ is a semi-orthogonal matrix, i.e. $Q^TQ=I_K$. In particular, we have\n$\\bm{\\alpha}(t)\\approx Q^T\\, \\{u\\rbrace(t)$ and $\\bm{\\beta}(t)\\approx Q^T\\, \\{v\\rbrace(t)$.\n\n\\review{Note that the modes $\\lbrace \\phi_k\\rbrace $ and reduced variables $\\bm{\\alpha}$, $\\bm{\\beta}$ are determined for each parameter set ($Ca, a\/\\ell$), but a common value of the truncation rank $K$ is chosen for all the sets. Its practical computation will be detailed in a later subsection, as well as that of the modes $\\lbrace \\phi_k\\rbrace $}.\n\\subsection{ROM prototype}\nThe expressions $\\lbrace \\tilde u\\rbrace (t)=Q\\, \\bm{\\alpha}(t)$ and\n$\\lbrace \\tilde v\\rbrace (t)=Q\\, \\bm{\\beta}(t)$ provide low-order representations of the displacement and velocity fields respectively. We can now write equations for the reduced vectors $\\bm{\\alpha}(t)$ and $\\bm{\\beta}(t)$. In this subsection, let us consider a projection Galerkin-type approach. Let us denote $\\langle.,.\\rangle$\nthe standard Euclidean scalar product in $\\mathbb{R}^d$. 
Considering \na test vector $\\lbrace w\\rbrace $ in $W=span(\\lbrace \\phi_1\\rbrace ,...,\\lbrace \\phi_K\\rbrace )$, we look for an approximate displacement field $\\lbrace \\tilde u\\rbrace (t)$, solution of the projected kinematics equations\n\\[\n\\langle \\frac{d}{dt}\\lbrace \\tilde u\\rbrace (t),\\lbrace w\\rbrace \\rangle = \n\\langle \\lbrace \\tilde v\\rbrace (t),\\lbrace w\\rbrace \\rangle\n\\quad \\forall\\, \\lbrace w\\rbrace \\in W.\n\\]\nBy considering each test vector $\\lbrace w\\rbrace =\\lbrace \\phi_k\\rbrace $, we get the consistent reduced kinematics equation\n\\begin{equation}\n\\dot{\\bm{\\alpha}} = \\bm{\\beta}.\n\\label{eq:flo9}\n\\end{equation}\nConsider now the projected field $\\lbrace \\tilde v\\rbrace (t)$ which is the solution of the\nsystem of algebraic equations (Galerkin approach):\n\\begin{equation}\n\\langle \\lbrace \\tilde v\\rbrace (t),\\lbrace w\\rbrace \\rangle = \\langle \\varphi(\\lbrace \\tilde u\\rbrace (t)),\\lbrace w\\rbrace \\rangle \\quad \\forall\\, \\lbrace w\\rbrace \\in W.\n\\label{eq:flo10}\n\\end{equation}\n\\review{\nAgain by taking the test vector $\\{w\\}=\\{\\phi_k\\}$, we have\n\\[\n\\{\\phi_k\\}^T Q \\bm{\\beta}(t) = \\{\\phi_k\\}^T \\varphi(Q \\bm{\\alpha}(t)).\n\\]\nConsidering all $k$ in $\\{1,...,K\\}$, since $Q=[\\lbrace \\phi_1\\rbrace ,...,\\lbrace \\phi_K\\rbrace ]$ and $Q^TQ=I_K$, we get}\n\\[\nQ^T Q \\bm{\\beta}(t) = \\bm{\\beta}(t) = Q^T \\varphi(Q \\bm{\\alpha}(t)).\n\\]\nIt is in the form\n\\begin{equation}\n\\bm{\\beta}(t) = \\varphi_r(\\bm{\\alpha}(t))\n\\label{eq:flo11}\n\\end{equation}\nwith the mapping $\\varphi_r:\\mathbb{R}^K\\rightarrow\\mathbb{R}^K$\ndefined by $\\varphi_r(\\bm{\\alpha})=Q^T\\varphi(Q\\bm{\\alpha})$. 
\nWe get a reduced-order algebraic equilibrium equation.\nUnfortunately,\nbecause of the nonlinearities in $\\varphi$, the computation of\n$\\varphi_r(\\bm{\\alpha})$ requires high-dimensional $O(d)$ operations, making this approach inefficient from the performance point of view.\nA possible solution to deal with the nonlinear terms would be to use for example Empirical Interpolation Methods (EIM) \\citep{Barrault2004}, but from the algorithm and implementation point of view, this would lead to an intrusive approach with specific code developments. We rather adopt a linearization strategy in the following sense: by differentiating~\\eqref{eq:flo11} with respect to time, we get\n\\[\n\\dot{\\bm{\\beta}}(t) = \\frac{\\partial\\varphi_r}{\\partial \\bm{\\alpha}}(\\bm{\\alpha}(t))\n\\, \\dot{\\bm{\\alpha}}(t).\n\\]\nThanks to the reduced kinematics equation~\\eqref{eq:flo9}, we get\n\\begin{equation}\n\\dot{\\bm{\\beta}}(t) = \\frac{\\partial\\varphi_r}{\\partial \\bm{\\alpha}}(\\bm{\\alpha}(t))\n\\ \\bm{\\beta}(t).\n\\label{eq:12}\n\\end{equation}\nSince $\\varphi_r$ is hard to evaluate, it is even harder to evaluate its differential. But the differential $\\frac{\\partial\\varphi_r}{\\partial \\bm{\\alpha}}(\\bm{\\alpha}(t))$ can be approximated itself, or replaced by some matrix $A(t)$.\nThen we get a ROM structure (ROM prototype) in the form\n\\begin{align}\n& \\dot{\\bm{\\alpha}}(t) = \\bm{\\beta}(t), \\label{eq:flo13} \\\\\n& \\dot{\\bm{\\beta}}(t) = A(t) \\, \\bm{\\beta}(t). \\label{eq:flo14}\n\\end{align}\nThe differential system~\\eqref{eq:flo13}-\\eqref{eq:flo14} is linear with variable coefficient matrix $A(t)\\in \\mathscr{M}_K(\\mathbb{R})$. 
It can be written in matrix form\n\\begin{equation}\n\\frac{d}{dt}\\begin{pmatrix} \\bm{\\alpha}(t)\\\\ \\bm{\\beta}(t) \\end{pmatrix}\n= \\underbrace{\\begin{pmatrix} [0] & I_K \\\\ [0] & A(t) \\end{pmatrix}}_{=\\mathbb{A}(t)}\n\\begin{pmatrix} \\bm{\\alpha}(t)\\\\ \\bm{\\beta}(t) \\end{pmatrix}.\n\\label{eq:flo15}\n\\end{equation}\nThe spectral properties of the differential system~\\eqref{eq:flo15} are related to the spectral properties of the matrix $A(t)$. In particular, if all the (complex) eigenvalues $\\lambda_k(t)$ of $A(t)$ are such that\n$\\Re(\\lambda_k(t))<0$ for all~$k$ (uniformly in time), then the system is dissipative.\n\\subsection{Non-intrusive approach, SVD and POD modes}\nOne of the requirements of this work is to achieve a non-intrusive reduced-order model, meaning that the effective implementation of the ROM does not involve tedious low-level code development into the FOM code. For that, a data-based approach is adopted: from the FOM code, it is possible to compute FOM solutions $(\\{u\\rbrace^n,\\{v\\rbrace^n)$ at discrete times $t^n$, $n=0,...,N$ ($t^N=N\\Delta t=T$), then store some of these solutions (called snapshots) into a database for data analysis and later design of a ROM. Proper Orthogonal Decomposition (POD) \\citep{Berkooz1993} is today a well-known dimensionality reduction approach to determine the principal components from solutions of partial differential equations. Sirovich's snapshot variant \\citep{Sirovich1987} is based on snapshot solutions from a FOM to get \\textit{a posteriori} empirical POD modes $\\lbrace \\phi_k\\rbrace $. 
For the sake of simplicity, assume that the snapshot solutions are all the discrete FOM solutions at the simulation instants.\nApplying a singular value decomposition (SVD) to the displacement snapshot matrix\n\\[\n\\mathbb{S}^u = \\left[\\ubold^1,\\ubold^2,...,\\ubold^N\\right],\n\\]\nof size $d\\times N$, we get the decomposition \n\\begin{equation}\n\\mathbb{S}^u = U \\Sigma V^T\n\\end{equation}\nwith orthogonal matrices $U\\in\\mathscr{M}_d(\\mathbb{R})$,\n$V\\in\\mathscr{M}_N(\\mathbb{R})$ and the singular value matrix\n$\\Sigma=\\mathrm{diag}(\\sigma_k)\\in\\mathscr{M}_{d\\times N}(\\mathbb{R})$, with $\\sigma_k\\geq 0$ for all $k$, organized in decreasing order:\n$\\sigma_1\\geq \\sigma_2\\geq ...\\geq\\sigma_{\\min(d,N)}\\geq 0$.\nFrom SVD theory, for a given accuracy threshold $\\varepsilon>0$,\nthe truncation rank $K=K(\\varepsilon)$ is computed as the smallest integer such that the inequality\n\\begin{equation}\n \\frac{\\displaystyle{\\sum_{k=K+1}^{\\min(d,N)} \\sigma_k^2}}{\\displaystyle{\\sum_{k=1}^{\\min(d,N)}\\sigma_k^2}}\\leq \\varepsilon\n\\label{eq:RIC}\n\\end{equation}\nholds \\citep{Shawe2004}. Proceeding in this way, it is shown that the relative orthogonal projection error of the snapshots~$\\ubold^n$ onto the linear subspace $W$ spanned by the first $K$ columns of $U$ is controlled by $\\varepsilon$. Denoting~$\\pi^W$ the linear orthogonal projection operator over $W$, we have:\n\\[\n\\sum_{n=1}^N \\|\\ubold^n-\\pi^W\\ubold^n\\|^2 \\leq \\varepsilon\n\\sum_{n=1}^N \\|\\ubold^n\\|^2.\n\\]\nThe matrix $Q$ is obtained as the restriction of $U$ to its first $K$ columns. 
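A minimal NumPy sketch of this truncation criterion, applied to a synthetic snapshot matrix whose singular-value decay is invented for illustration:

```python
import numpy as np

def pod_basis(S, eps):
    """Return (Q, K): the first K left singular vectors of S, with K the
    smallest integer whose relative truncated energy is <= eps."""
    U, sig, _ = np.linalg.svd(S, full_matrices=False)
    energy = np.cumsum(sig**2) / np.sum(sig**2)      # retained-energy fractions
    K = int(np.searchsorted(energy, 1.0 - eps) + 1)  # smallest K meeting eq. (RIC)
    return U[:, :K], K

# Synthetic snapshots with singular values 1, 1e-1, 1e-2, ...
rng = np.random.default_rng(2)
d, N = 200, 50
left = np.linalg.qr(rng.standard_normal((d, N)))[0]
right = np.linalg.qr(rng.standard_normal((N, N)))[0]
S = left @ np.diag(10.0 ** -np.arange(N, dtype=float)) @ right.T

Q, K = pod_basis(S, eps=1e-5)
assert K == 3                                    # residual energy ~1e-6 <= 1e-5
assert np.allclose(Q.T @ Q, np.eye(K))           # semi-orthogonality Q^T Q = I_K
```

With squared singular values decaying by a factor of 100 per mode, three modes leave a relative residual energy of about $10^{-6}$, which is the first value below the threshold.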
\n\n\\subsection{Data-driven identification of coefficient matrix}\\label{sec34}\nThe system~\\eqref{eq:flo13}-\\eqref{eq:flo14} is still not closed since the coefficient matrices $A(t)$ are unknown.\nFrom FOM data, one can try to identify the matrices by minimizing some residual function that measures the model discrepancy.\nThe simplest linear model corresponds to the case where $A(t)$ is sought as a time-constant matrix~$A$. In this case, equation~\\eqref{eq:flo14} becomes\n$\\dot{\\bm{\\beta}}(t) = A \\, \\bm{\\beta}(t)$. This is the scope of this article. From the time-continuous problem, one could determine the matrix $A$ by minimizing the least-squares functional\n\\[\n\\min_{A\\in\\mathscr{M}_K(\\mathbb{R})} \\frac{1}{2}\\int_0^T\n\\|\\dot{\\bm{\\beta}}(t)-A \\bm{\\beta}(t)\\|^2\\, dt.\n\\]\nBut in practice, we only have velocity snapshot data at discrete times and\nwe do not have access to the time derivatives of the velocity fields. \nSo the following numerical procedure is adopted: from the velocity snapshot matrix \n$\\mathbb{S}^v=[\\{v\\rbrace^1,...,\\{v\\rbrace^N]$, we first compute the reduced snapshot variables:\n\\[\n\\bm{\\beta}^n = Q^T\\, \\{v\\rbrace^n,\\quad n=1,...,N.\n\\]\nNext, we determine a matrix $A$ that minimizes the least-squares cost function:\n\\begin{equation}\n\\min_{A\\in\\mathscr{M}_K(\\mathbb{R})}\\ \\frac{1}{2} \\sum_{n=1}^{N-1}\n\\left\\|\\frac{\\bm{\\beta}^{n+1}-\\bm{\\beta}^n}{\\Delta t}-A\\bm{\\beta}^n \\right\\|^2.\n\\label{eq:flo17}\n\\end{equation}\nIn~\\eqref{eq:flo17}, the finite difference $\\dfrac{\\bm{\\beta}^{n+1}-\\bm{\\beta}^n}{\\Delta t}$\nis a first-order approximation (in $\\Delta t$) of $\\dot{\\bm{\\beta}}$ at time $t^n$.\nIn appendix \\ref{app:a}, we provide a mathematical analysis of the impact of the time discretization in~\\eqref{eq:flo17} on the stability of the resulting identified differential system compared to the initial one. 
\n\nThe minimization problem~\\eqref{eq:flo17} can be written in condensed matrix form\n\\begin{equation}\n\\min_{A\\in\\mathscr{M}_K(\\mathbb{R})}\\ \\frac{1}{2} \\|\\mathbb{Y}-A\\mathbb{X}\\|_F^2\n\\label{eq:flo18}\n\\end{equation}\nwith the two data matrices\n\\begin{equation}\n\\mathbb{X} = \\left[\\bm{\\beta}^1,\\bm{\\beta}^2,...,\\bm{\\beta}^{N-1}\\right],\\quad\n\\mathbb{Y} = \\left[\\frac{\\bm{\\beta}^2-\\bm{\\beta}^1}{\\Delta t},...,\\frac{\\bm{\\beta}^N-\\bm{\\beta}^{N-1}}{\\Delta t} \\right].\n\\label{eq:flo19}\n\\end{equation}\nBecause $\\mathbb{X}$ and $\\mathbb{Y}$ store reduced variables (of size $K$), for a sufficient number of discrete snapshot times, these two matrices are\nwide (they have more columns than rows). We will assume that the rank of $\\mathbb{X}$ is always maximal, i.e. equal to $K$.\nThe least-squares solution $A$ of~\\eqref{eq:flo18} is then given by\n\\begin{equation}\nA = \\mathbb{Y}\\mathbb{X}^\\dagger\n\\label{eq:flo20}\n\\end{equation}\nwhere $\\mathbb{X}^\\dagger=\\mathbb{X}^T(\\mathbb{X}\\mathbb{X}^T)^{-1}$ is the Moore-Penrose pseudoinverse matrix of $\\mathbb{X}$.\nThis least-squares approach has close connections with SVD-based Dynamic Mode Decomposition (DMD) \\citep{Schmid2010,Kutz2016}.\n \\subsection{Tikhonov least-squares regularized formulation}\n %\n From standard spectral theory arguments, it is expected that the POD coefficients rapidly decay when $k$ increases as soon as both displacement and velocity fields are smooth enough. \n A possible side effect is a poor condition number of the matrix $\\mathbb{X}$, since the last rows of $\\mathbb{X}$ have small coefficients (thus leading to row vectors close to zero 'at the scale' of the first row of $\\mathbb{X}$). 
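Before regularizing, the plain identification~\eqref{eq:flo20} is easy to reproduce on synthetic reduced snapshots; the stable diagonal 'truth' system below is an assumption made purely for illustration:

```python
import numpy as np

K, N, dt = 4, 400, 0.01
lam = np.array([-0.5, -1.0, -1.5, -2.0])       # spectrum of the test system
rng = np.random.default_rng(3)

# Exact trajectories beta^n of d(beta)/dt = diag(lam) beta
beta = np.empty((K, N))
beta[:, 0] = rng.standard_normal(K)
decay = np.exp(lam * dt)
for n in range(N - 1):
    beta[:, n + 1] = decay * beta[:, n]

X = beta[:, :-1]                               # [beta^1, ..., beta^{N-1}]
Y = (beta[:, 1:] - beta[:, :-1]) / dt          # finite-difference derivatives
A = Y @ np.linalg.pinv(X)                      # A = Y X^+, least-squares fit

# The identified A matches diag(lam) up to the O(dt) finite-difference bias
assert np.allclose(A, np.diag(lam), atol=0.03)
```

The residual bias $(e^{\lambda \Delta t}-1)/\Delta t - \lambda = O(\Delta t)$ visible in the last assertion is exactly the discretization effect analyzed in appendix \ref{app:a}.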
Even if the solution $A$ in~\\eqref{eq:flo20} always exists, it may be sensitive to perturbations, noise or round-off errors.\n In order to get a robust identification approach, one can regularize the least-squares problem~\\eqref{eq:flo18} by adding a Tikhonov regularization term (see e.g. \\citep{Aster2005})\n %\n \\begin{equation}\n \\min_{A\\in\\mathscr{M}_K(\\mathbb{R})}\\ \\frac{1}{2} \\|\\mathbb{Y}-A\\mathbb{X}\\|_F^2 + \\frac{\\mu}{2} \\|\\mathbb{X}\\|_F^2\\, \\|A\\|_F^2\n \\label{eq:flo21}\n \\end{equation}\n %\n where the scalar $\\mu>0$ is the regularization coefficient. The factor\n $\\|\\mathbb{X}\\|_F^2$ in the regularization term has been added for scaling purposes.\n The solution $A_\\mu$ of~\\eqref{eq:flo21} is given by\n %\n \\begin{equation}\n A_\\mu = \\mathbb{Y} \\mathbb{X}^T\\left(\\mathbb{X}\\mathbb{X}^T+\\mu \\|\\mathbb{X}\\|_F^2\\, I_K\\right)^{-1}.\n \\label{eq:flo22}\n \\end{equation}\n %\n \\subsubsection*{Choice of optimal regularization coefficient}\n Of course, the solution matrix $A_\\mu$ depends on the regularization coefficient $\\mu$, and one can ask which choice of $\\mu$ is optimal.\n There is a trade-off between the approximation quality measured by the residual $\\|\\mathbb{Y}-A_\\mu\\mathbb{X}\\|_F$ and the solution norm $\\|A_\\mu\\|_F$. The minimization of $\\|A_\\mu\\|_F$ should ensure that unneeded features will not appear in the regularized solution.\n When plotted on a log-log scale, the curve of \n $\\mu\\mapsto \\|A_\\mu\\|_F$ versus the residual $\\mu\\mapsto\\|\\mathbb{Y}-A_\\mu\\mathbb{X}\\|_F$ often takes on a characteristic L shape \\citep{Aster2005}. 
A design of experiment testing\n different values of $\\mu$ (ranging say from \\review{$10^{-12}$ to $10^{-5}$})\n generally allows one to find quasi-optimal values of $\\mu$ located at the corner of the L-curve, thus providing a good trade-off between the two criteria.\n %\n \\subsection{Reduced-order continuous dynamical system}\n %\n Once the matrix $A_\\mu$ has been determined, we get the reduced-order continuous dynamical system\n %\n \\begin{align}\n & \\dot\\bm{\\alpha} = \\bm{\\beta}, \\label{eq:flo23}\\\\\n & \\dot\\bm{\\beta} = A_\\mu\\, \\bm{\\beta} \\label{eq:flo24}\n \\end{align}\n %\n with initial conditions $\\bm{\\alpha}(0)=\\bm{0}$, $\\bm{\\beta}(0)=Q^T \\{v\\rbrace(0)$. At any time $t$, one can go back to the high-dimensional physical space using the POD modes: $\\{u\\rbrace(t)=Q\\bm{\\alpha}(t)$, $\\{x\\rbrace(t)=\\lbrace X\\rbrace +\\{u\\rbrace(t)$, \n $\\{v\\rbrace(t)=Q \\bm{\\beta}(t)$. As already mentioned, the system can be written in condensed matrix form\n %\n \\begin{equation}\n \\dot{\\bm{w}} = \\mathbb{A}_\\mu\\, \\bm{w}\n \\label{eq:flo25}\n \\end{equation}\n %\n where $\\bm{w}(t)=(\\bm{\\alpha}(t),\\bm{\\beta}(t))^T$ and\n $\\mathbb{A}_\\mu = \\begin{pmatrix}[0]_K & I_K\\\\ [0]_K & A_\\mu \\end{pmatrix}$.\n \n The exact analytical solution of~\\eqref{eq:flo25} is\n \\begin{equation}\n \\bm{w}(t) = \\exp(\\mathbb{A}_\\mu t)\\,\\bm{w}(0).\n \\label{eq:flo26}\n \\end{equation}\n %\n The stability of the differential system depends on the spectral structure of $\\mathbb{A}_\\mu$, or equivalently on the spectrum of $A_\\mu$. Because of the stability of the coupled fluid-capsule system and of the accuracy of the FOM solutions, one can expect the solution $A_\\mu$ of the least-square identification problem to have the required spectral properties. This will be studied and discussed in the numerical experimentation section.
From the kinetic energy point of view, it is shown in appendix \\ref{app:b} that the stability of the kinetic energy is linked to the property of the (real) spectrum of the symmetric matrix $(A_\\mu+A_\\mu^T)\/2$. \n %\n \\subsubsection*{Model consistency with steady states} \n %\n A steady state in our context is defined by a capsule that reaches a constant\n velocity $\\{v\\rbrace_\\infty$, so that the motion becomes a translation flow in time\n in the direction $\\{v\\rbrace_\\infty$. From~\\eqref{eq:flo6}, this shows that \n $\\bm{\\beta}(t)$ also reaches a constant vector $\\bm{\\beta}_\\infty$, and\n $\\dot\\bm{\\beta} = 0$ at steady state. As a consequence, from~\\eqref{eq:flo24},\n we get $A_\\mu \\bm{\\beta}_\\infty=0$, meaning that $0$ is an eigenvalue of $A_\\mu$ with $\\bm{\\beta}_\\infty$ as eigenvector. As a conclusion, the matrix $A_\\mu$ must have zero\n in its spectrum in order to be consistent with the existence of steady states.\n %\n \\subsection{Reduced-order discrete dynamical system}\n %\n Of course, it is also possible to derive a discrete dynamical system from the continuous one by using a standard time advance scheme. For example the explicit forward Euler scheme with a constant time step $\\Delta t$ gives\n %\n \\begin{align}\n & \\bm{\\alpha}^{n+1} = \\bm{\\alpha}^n + \\Delta t\\, \\bm{\\beta}^n, \\label{eq:flo27} \\\\\n & \\bm{\\beta}^{n+1} = \\bm{\\beta}^n + \\Delta t\\, A_\\mu \\bm{\\beta}^n. 
\\label{eq:flo28}\n \\end{align}\n %\n By multiplying~\\eqref{eq:flo27} by $Q$ we get the space-time approximate solution\n \\[\n \\{u\\rbrace^{n+1} = \\{u\\rbrace^n + \\Delta t\\, \\{v\\rbrace^n,\n \\]\n so the ROM model is completely consistent with the kinematics equation.\n Stability properties of the discrete system are linked to the spectral properties of the matrix\n \\[\n A_\\mu^\\Delta = \\begin{pmatrix}\n I_K & \\Delta t\\, I_K \\\\\n [0]_K & (I_K+\\Delta t\\, A_\\mu) \n \\end{pmatrix}.\n \\]\n For stability in time, the eigenvalues\n of $I_K+\\Delta t\\, A_\\mu$ are required to stay in the unit disk of the complex plane.\n \n More generally, it is possible to use any other time advance scheme, according to the expected order of accuracy or stability domain.\n %\n \\subsection{Accuracy criteria and similarity distances between ROM and FOM solutions}\n \nIn order to quantify the error induced by the approximations, we introduce three accuracy criteria. \nThe first one is the relative information content (RIC), defined by \n\\[\n\\text{RIC}(K) = \\frac{\\displaystyle{\\sum_{k=K+1}^{\\min(d,N)} \\sigma_k^2}}{\\displaystyle{\\sum_{k=1}^{\\min(d,N)}\\sigma_k^2}},\n\\] \nwhich quantifies the relative amount of neglected information when truncating the number of modes at rank $K$. \nThe truncation rank is determined such that the RIC is below the accuracy threshold~$\\varepsilon$, fixed here at $10^{-6}$.\n\n\nThe second accuracy criterion is the relative time residual $\\mathcal{R}$. It quantifies the relative error induced by the minimization of the least square cost function (\\ref{eq:flo17}) using $A_{\\mu}$. It is given by\n\\[\n\\mathcal{R}(j)=\\frac{\\Vert A_{\\mu} \\mathbb{X}_j-\\mathbb{Y}_j \\Vert_1}{\\Vert \\mathbb{Y}_j \\Vert_1}\n\\]\nwhere $\\mathbb{X}_j$ represents the $j^{th}$ column of $\\mathbb{X}$ and $\\mathbb{Y}_j$ the $j^{th}$ column of $\\mathbb{Y}$.
The index $j$ is thus linked to the snapshots ($j\\in \\{1, ..., N\\}$). To better draw a parallel between the evolution of this criterion and the capsule dynamics, it will be represented as a function of the non-dimensional time $Vt\/\\ell$ hereafter.\n\nThe third accuracy criterion $\\varepsilon_{\\text{Shape}}(Vt\/\\ell)$ measures the difference between the 3D reference capsule shape given by the FOM ($\\mathcal{S}_{\\text{FOM}}$) and the 3D shape predicted by the ROM ($\\mathcal{S}_{\\text{ROM}}$). It is defined at a given non-dimensional time $Vt\/\\ell$ as the modified Hausdorff distance (MHD) between $\\mathcal{S}_{\\text{FOM}}$ and $\\mathcal{S}_{\\text{ROM}}$, non-dimensionalized by $\\ell$:\n\\[\n\\varepsilon_{\\text{Shape}}(Vt\/\\ell) =\\frac{\\text{MHD}(\\mathcal{S}_{\\text{FOM}}(Vt\/\\ell), \\mathcal{S}_{\\text{ROM}}(Vt\/\\ell))}{\\ell}.\n\\]\nThe modified Hausdorff distance is the maximum of the mean distance from $\\mathcal{S}_{\\text{FOM}}$ to $\\mathcal{S}_{\\text{ROM}}$ and the mean distance from $\\mathcal{S}_{\\text{ROM}}$ to $\\mathcal{S}_{\\text{FOM}}$ \\citep{Dubuisson1994}.\n\n\n\n\\section{Numerical experimentation on a given configuration}\n\\label{sec:1example}\n\nThe method is first applied to a given configuration, in order to set the model parameters and to study its stability and precision. We consider the dynamics of an initially spherical capsule flowing in a microchannel when $Ca=0.17$ and $a\/\\ell=0.8$. The time step between two snapshots is $\\Delta t=0.04$. The dynamics predicted by the FOM is illustrated in Fig. \\ref{DynamicFOM} up to a nondimensional time $VT\/\\ell=10$. As the capsule flows, its membrane is gradually deformed by the hydrodynamic forces inside the channel during a transient phase until a steady state is reached.
We assume that the capsule has reached its steady-state shape when the surface area of the capsule varies by less than $5\\times10^{-4}\\times(4 \\pi a^2)$ over a non-dimensional time $Vt\/\\ell=1$. For $(Ca=0.17, a\/\\ell=0.8)$, the steady state is reached at $Vt\/\\ell=2.6$ and is characterized by a parachute capsule shape (Figure \\ref{DynamicFOM}). \n\n\\begin{figure}\n\\centering\n\n(a)\\includegraphics[width=0.18\\textwidth]{newfigures\/CuttedPlane.eps}\n(b)\\includegraphics[width=0.7\\textwidth]{newfigures\/Comp_capsule_al0.8_Ca0.17_FOM.eps}\n\\caption{Dynamics of a microcapsule flowing in a microchannel with a square cross-section predicted by FOM in the vertical cutting plane represented in grey in (a). The in-plane capsule profiles are shown for $Ca=0.17$ and $a\/\\ell=0.8$ at the non-dimensional times $Vt\/\\ell=$ 0, 0.4, 2, 4, 6 in (b). The horizontal lines on (b) represent the channel borders. The capsule will always be represented flowing from left to right.} \n\\label{DynamicFOM}\n\\end{figure}\n\n\\subsection{Proper orthogonal decomposition, truncation and modes}\n\nThe singular value decomposition is first applied to the displacement snapshot matrix. To determine the truncation rank, the evolution of $1-\\text{RIC}$ is illustrated in Figure~\\ref{RIC} as a function of the number of modes considered. With a single mode, the RIC is only about 1\\%. The more modes are kept, the less information is neglected. In the following, we fix the number of modes to 20. The accuracy threshold $\\varepsilon$ is thus equal to $10^{-6}$.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.55\\textwidth]{newfigures\/RIC_Ca0.17_al0.8.eps}\n\\caption{Evolution of the relative amount of neglected information 1-RIC as a function of the number of modes considered ($Ca=0.17, a\/\\ell=0.8$).}\n\\label{RIC}\n\\end{figure}\n\nThe modes are determined from the displacement snapshot matrix. For visualization, they are added to the sphere of radius 1 and amplified by a factor 2.
The first six modes are represented in Figure~\\ref{modesRepresentation} for ($Ca=0.17, a\/\\ell=0.8$).\nThey are mostly dedicated to changing the shape of the rear of the capsule. The following modes appear to become noisy (not shown). However, these modes are not negligible if one wants to reach an accuracy of $10^{-6}$.\n\n\\begin{figure}\n\\centering\n(a)\\includegraphics[width=0.28\\textwidth]{newfigures\/Modes2_al0.80_Ca0.170.eps}\n(b)\\includegraphics[width=0.28\\textwidth]{newfigures\/Modes3_al0.80_Ca0.170.eps}\n(c)\\includegraphics[width=0.28\\textwidth]{newfigures\/Modes4_al0.80_Ca0.170.eps}\n(d)\\includegraphics[width=0.28\\textwidth]{newfigures\/Modes5_al0.80_Ca0.170.eps}\n(e)\\includegraphics[width=0.28\\textwidth]{newfigures\/Modes6_al0.80_Ca0.170.eps}\n(f)\\includegraphics[width=0.28\\textwidth]{newfigures\/Modes7_al0.80_Ca0.170.eps}\n\\caption{Representation of the first six modes of the capsule dynamics when $a\/\\ell =0.80$ and $Ca=0.17$. To be visualized, the displacement modes were added to the sphere of radius 1 and amplified by a factor 2.} \n\\label{modesRepresentation}\n\\end{figure}\n \n\\subsection{Dynamic Mode Decomposition: empirical regularization}\n\nBefore determining the matrix $A$, we check the condition numbers of the matrices $\\mathbb{X}$ and $\\mathbb{XX^T}$. They are respectively equal to $7.9\\times 10^4$ and $6.2 \\times 10^9$. These condition numbers are very high and the matrix $A$, determined by solving (\\ref{eq:flo20}), may be sensitive to perturbations or noise.
To improve the robustness, a Tikhonov regularization is applied to solve the least-square problem (\\ref{eq:flo17}) and the matrix $A_{\\mu}$ is computed using (\\ref{eq:flo22}), which depends on the regularization coefficient $\\mu$.\nTo determine the optimal value of $\\mu$, the relative least square error $\\Vert A_{\\mu}\\mathbb{X}-\\mathbb{Y}\\Vert_F\/\\Vert\\mathbb{Y}\\Vert_F$ is plotted against the solution norm $\\Vert A_{\\mu}\\Vert_F$ when 20 modes are considered and when $\\mu$ is varied between \\review{$10^{-12}$ and $10^{-5}$} (Figure \\ref{muDetermination}). \nThe best trade-off between the least square error $\\Vert A_{\\mu}\\mathbb{X}-\\mathbb{Y}\\Vert_F$ and the solution norm $\\Vert A_{\\mu}\\Vert_F$ is obtained when $\\mu=10^{-9}$. In the following, $\\mu$ is thus fixed to $\\mu=10^{-9}$ and the number of modes to 20. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.55\\textwidth]{newfigures\/DeterminationMu_Ca0.17_al0.8_Xrelatif.eps}\n\\caption{Evolution of the solution norm $\\Vert A_{\\mu}\\Vert_F$ as a function of\nthe least square error $\\Vert A_{\\mu}\\mathbb{X}-\\mathbb{Y}\\Vert_F \/ \\Vert \\mathbb{Y}\\Vert_F$\n when the number of modes is fixed to 20 and $(Ca=0.17, a\/\\ell=0.8)$.} \n\\label{muDetermination}\n\\end{figure}\n\n\\subsection{Validity check of the ROM: spectral study of the resulting matrix}\n\nIn order to detect anomalies, a spectral analysis of the reduced-order model learned by the DMD method is carried out. The spectrum of the matrix $A_{\\mu}$ is represented in Figure~\\ref{eigenvalues}. All the eigenvalues $\\lambda_k$ of the matrix $A_{\\mu}$ have non-positive real parts. The resulting linear ROM is thus stable. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.55\\textwidth]{newfigures\/ValeurPropreCa0.17Al0.8_ModelKBfModelA_X.eps}\n\\caption{Eigenvalues $\\lambda_k$ of $A_{\\mu}$ when 20 modes are considered, $\\mu=10^{-9}$ and $(Ca=0.17, a\/\\ell=0.8)$.
} \n\\label{eigenvalues}\n\\end{figure}\n\nThe temporal evolution of the residual $\\mathcal{R}$ (Figure~\\ref{ErrorModel}) shows that the error is less than 4\\%. The maximal value is reached\nat the beginning of the simulation ($Vt\/\\ell < 0.3$) and $\\mathcal{R}$ decreases afterwards. \nWhen $Vt\/\\ell \\lesssim 2$, i.e. before the capsule has reached its steady state, high frequency oscillations are observed. This probably means that a high frequency mode is neglected, even if 20 modes are considered. For $Vt\/\\ell >6$, $\\mathcal{R}$ is of order $0.01\\%$. The stationary state is thus well predicted by the model and the error during the transient stage is more than acceptable. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.55\\textwidth]{newfigures\/ErrorSnapCa0.17Al0.8_X.eps}\n\\caption{Temporal evolution of the normalized time residual when 20 modes are considered, $\\mu=10^{-9}$ and $(Ca=0.17, a\/\\ell=0.8)$.} \n\\label{ErrorModel}\n\\end{figure}\n\n\n\n\\subsection{ROM online stage and accuracy assessment}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{newfigures\/resultROM3D.eps}\n\\caption{Dynamics of a microcapsule flowing in a microchannel with a square cross-section predicted by ROM at the non-dimensional times $Vt\/\\ell=$ 0.4, 2.8, 5.2, 7.6, 10 when $Ca=0.17$ and $a\/\\ell=0.8$. The initial spherical capsule is shown on the left by transparency. The number of modes is fixed to 20 and $\\mu=10^{-9}$.} \n\\label{DynamicsROM3D}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{newfigures\/ROM_FOM_Profilcapsule.eps}\n\\caption{Comparison of the capsule contours given by the FOM (dotted line) and estimated by the ROM (orange line). The capsule is shown for $(Ca=0.17, a\/\\ell=0.8)$ at the non-dimensional times $Vt\/\\ell=$ 0, 0.4, 2, 4, 6. The horizontal lines represent the channel borders.
The number of modes is fixed to 20 and $\\mu=10^{-9}$.} \n\\label{CompProfils}\n\\end{figure}\n\n\n\n\nThe displacement of all the nodes of the capsule mesh estimated by the ROM is then added to the corresponding node of the sphere of radius 1 to visualize the temporal evolution of the capsule shape in three dimensions. Figure \\ref{DynamicsROM3D} shows the capsule dynamics for the reference case ($Ca=0.17, a\/\\ell=0.8$). The ROM allows us to reproduce the capsule deformation from the initial state up to the parachute-shaped steady state. \nFor the FOM and the ROM, the capsule profile is then determined in the\n cutting plane passing through the middle of the microchannel. Figure \\ref{CompProfils} shows that the two capsule profiles perfectly overlap at $Vt\/\\ell=0, 0.4, 2, 4, 6$.\nThe temporal evolution of $\\varepsilon_{\\text{Shape}}$ is shown in Figure \\ref{TimeLearning}a. The maximum value of the error committed on the 3D shape $\\varepsilon_{\\text{Shape}}$ equals 0.022\\%.\n The error on the capsule shape $\\varepsilon_{\\text{Shape}}$ is thus negligible. The deformation of the capsule from its initially spherical shape to its steady state over a non-dimensional time $Vt\/\\ell=10$ can thus be estimated very precisely with the developed reduced-order model. \n\n\n\n\n\\review{\\subsection{Sensitivity of the ROM to the learning time}\n\nThe DMD method predicts the capsule displacement at time $t^{n+1}$ from that at time $t^n$. The model has been constructed until now by considering the dynamics of the capsule over a non-dimensional time $Vt\/\\ell$ of 10.\n\nIn order to study the influence of the learning time $T_L$, i.e. the non-dimensional time over which the model is trained, we increase it from 2 to 8 and estimate the capsule dynamics using the ROM up to a non-dimensional time $Vt\/\\ell$ of 10. The number of modes is always equal to 20 and $\\mu=10^{-9}$. The estimated shape is then compared to the one simulated with the FOM.
The time evolution of $\\varepsilon_{\\text{Shape}}$ is shown in Figure \\ref{TimeLearning}a. The error on the 3D shape increases as soon as the learning time is exceeded. The longer the learning time, the smaller the error $\\varepsilon_{\\text{Shape}}$ at $Vt\/\\ell=10$ (Figure \\ref{TimeLearning}b). The comparison of the capsule profile estimated by the ROM at $Vt\/\\ell=10$ with the one simulated with the FOM (Figure \\ref{TimeLearning}c) confirms that considering only the dynamics of the capsule up to a non-dimensional time $Vt\/\\ell$ of 2 is not sufficient. In fact, the capsule is still in the transient phase at $Vt\/\\ell=2$. When $T_L=4$, a small difference persists in the parachute at $Vt\/\\ell=10$ and $\\varepsilon_{\\text{Shape}}=1.5\\%$. However, when the learning time is increased above 6, $\\varepsilon_{\\text{Shape}}$ falls below 0.19\\% at $Vt\/\\ell=10$. This learning time is sufficient to estimate the overall shape of the capsule.}\n\n\n\\begin{figure}\n\\centering\na)\\includegraphics[width=0.6\\textwidth]{newfigures\/Influence_TimeLearning_avecInsert.eps}\\\\\nb)\\includegraphics[width=0.6\\textwidth]{newfigures\/Error_TimeLearning_insert.eps}\\\\\nc)\\includegraphics[width=0.95\\textwidth]{newfigures\/Influence_TimeLearning_snap.eps}\n\\caption{\\review{ a) Influence of the learning time $T_L$ on the temporal evolution of the error on the capsule shape $\\varepsilon_{\\text{Shape}}$. The error during the learning time is shown as a solid line. b) Evolution of $\\varepsilon_{\\text{Shape}}$ measured at $Vt\/\\ell=10$ as a function of the learning time $T_L$. c) Comparison of the capsule contours given by the FOM (dotted line) and estimated by the ROM (orange line) for the different learning times $T_L$. For this case, the parameters are 20 modes, $\\mu=10^{-9}$, $Ca = 0.17$ and $a\/\\ell=0.8$.
} } \n\\label{TimeLearning}\n\\end{figure}\n\n\\section{\\review{Space-time ROM accuracy assessment over the full parameter sample set}}\n\\label{sec:database}\nThe capillary number $Ca$ and the aspect ratio $a\/\\ell$ are now considered as variable parameters.\nA database of 119 simulations of the deformation of an initially spherical capsule in a microchannel has been generated using the FOM with the same time step and mesh size as in section \\ref{sec:1example}. Figure~\\ref{DB} shows the different values of $Ca$ and $a\/\\ell$ for which the simulations have been computed to create the training database. \nWhen the capsule initial radius is close to or larger than the microchannel cross-dimension ($a\/\\ell \\geq 0.90$), the capsule is predeformed into a prolate spheroid to fit in the channel. For a given $a\/\\ell$, a limit value of $Ca$ exists beyond which a capsule does not reach a steady-state (Figure~\\ref{DB}). This is due to the softening behavior of the neo-Hookean law. \n\n\\begin{figure} \n\\centering\n\\includegraphics[width=0.55\\textwidth]{GraphDataBaseLearning.eps}\n\\caption{Values of $Ca$ and $a\/\\ell$ included in the training database. The dotted line delimits the domain where a steady-state capsule deformation exists for capsules following the neo-Hookean law.}\n\\label{DB}\n\\end{figure}\n\nFor all the couples ($Ca, a\/\\ell$) of the training database, the capsule shape is reconstructed from the ROM results at given non-dimensional times and compared to the shape predicted by the FOM at the same non-dimensional time. The evolution of the error committed on the capsule shape $\\varepsilon_{Shape}$ on the full database is illustrated in Figure~\\ref{ErreurBaseA20} at $Vt\/\\ell=0, 0.4, 1, 2, 5, 10$. $\\varepsilon_{Shape}$ is null at $Vt\/\\ell=0$. The ROM is therefore able to predict the initial capsule shape correctly, whether it is spherical or slightly ellipsoidal. 
Up to $Vt\/\\ell=2$, $\\varepsilon_{Shape}$ remains essentially zero on the majority of the database and is at most equal to 0.15\\% elsewhere. At $Vt\/\\ell = 5$ and 10, the error $\\varepsilon_{Shape}$ slightly increases for most of the couples ($Ca, a\/\\ell$) of the database. It remains fully acceptable since it is equal to 0.35\\% at maximum. When considering 20 modes and $\\mu = 10^{-9}$, the developed ROM allows us to estimate with great precision the dynamics of an initially spherical capsule in a microchannel with a square cross-section. \n\n\\begin{figure}\n\\centering\n(a)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_DB0_20modes_mu10-8.eps}\n(b)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_DB0.4_20modes_mu10-8.eps}\n(c)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_DB1_20modes_mu10-8.eps}\n(d)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_DB2_20modes_mu10-8.eps}\n(e)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_DB5_20modes_mu10-8.eps}\n(f)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_DB10_20modes_mu10-8.eps}\n\\caption{Heat maps of $\\varepsilon_{Shape}$ on the training database as a function of $Ca$ and $a\/\\ell$ at (a) $Vt\/\\ell=$ 0, (b) 0.4, (c) 1, (d) 2, (e) 5, (f) 10. 20 modes and $\\mu=10^{-9}$ are considered. The dotted line delimits the domain where a steady-state capsule deformation exists.} \n\\label{ErreurBaseA20}\n\\end{figure}\n\nTo respect the stability condition (see Equation~\\ref{StabilityCondition}), the time step imposed to simulate the capsule dynamics with the FOM decreases when $Ca$ decreases. The lower the $Ca$, the longer the simulation lasts (Figure~\\ref{SimuTime}).
The time needed to calculate the capsule shape and write the results was estimated on the same workstation used to simulate and generate the result files with the FOM (2-CPU Intel\\textsuperscript{\\textregistered} Xeon\\textsuperscript{\\textregistered} Gold 6130, 2.1 GHz). The speedup is the ratio between the FOM runtime and the ROM runtime. Its evolution as a function of the FOM time step is illustrated in Figure~\\ref{Speedup}. It was estimated from the ROM and FOM simulation times obtained when $a\/\\ell=0.7$. The speedup varies between 52106 for a FOM time step of $10^{-4}$ (i.e. for the lowest value of $Ca$ tested) and 4200 for $5 \\times 10^{-4}$ (i.e. $Ca\\leq 0.05$). It is thus possible to estimate the capsule dynamics very precisely with the developed ROM, while considerably reducing the computational time.\n\n\\review{Another significant advantage is the gain in storage of the simulation results. By storing only the reduced variables $\\bm{\\alpha}$, $\\bm{\\beta}$, the modes $\\lbrace \\phi_k\\rbrace $ and the initial position of the nodes of each couple $\\theta=(Ca,a\/\\ell)$, the training database is reduced from 1.9 GB, when computed with the FOM, to 0.15 GB with the ROM. It can therefore be more easily shared. }\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.55\\textwidth]{Speedup.eps}\n\\caption{Evolution of the speedup as a function of the time step imposed to simulate the capsule dynamics with the FOM ($a\/\\ell=0.7$).}\n\\label{Speedup} \n\\end{figure}\n\n\\section{\\review{Full space-time-parameter ROM (for any admissible parameter value)}}\n\\label{sec:interpolation}\n\\review{\n\\subsection{General methodology}\n\nIt is here again assumed that a training database of $N$ precomputed FOM results is available. Now we would like to derive a ROM for any\nparameter couple $\\bm{\\theta}=(Ca,a\/\\ell)$ in the admissible parameter domain. The proposed space-time-parameter ROM is made of two steps.
The first step consists in predicting the space-time solution $\\{u\\}(t;\\bm{\\theta})$ \nby means of a robust interpolation procedure. The second step consists in deriving a ROM in the form of a low-order dynamical system\nby using the predicted solutions of the first step as training data. Then we apply the procedure previously detailed in Section \\ref{sec:ROM}. Below we give a detailed explanation of the two steps. \\medskip\n\n\\textbf{Step 1: predictor step.} Considering a parameter couple $\\bm{\\theta}$, we first search the three nearest neighbor parameters in the sample set that form a nondegenerate triangle in the plane $(Ca,a\/\\ell)$. Let us denote them by $\\bm{\\theta}_1$, $\\bm{\\theta}_2$ and $\\bm{\\theta}_3$. We will define a linear interpolation operator in the triangle $(\\bm{\\theta}_1,\\bm{\\theta}_2,\\bm{\\theta}_3)$. For that, let us introduce the barycentric coordinates $(\\lambda_1,\\lambda_2,\\lambda_3)$, $\\lambda_i\\in[0,1]$, $i=1,2,3$, such that\n\\begin{align}\n \\lambda_1 + \\lambda_2 + \\lambda_3 & = 1, \\label{eq:baryc1}\\\\\n \\bm{\\theta}_1 \\lambda_1 + \\bm{\\theta}_2 \\lambda_2 + \\bm{\\theta}_3\\lambda_3 & = \\bm{\\theta}. \\label{eq:baryc2}\n\\end{align}\nThe $3\\times 3$ linear system~\\eqref{eq:baryc1},\\eqref{eq:baryc2} is invertible as soon as the triangle $(\\bm{\\theta}_1,\\bm{\\theta}_2,\\bm{\\theta}_3)$ is nondegenerate. Notice that the $\\lambda_i$ ($i=1,2,3$) are actually functions of $\\bm{\\theta}$.\nLet us now denote by $\\{u_1\\}$, $\\{u_2\\}$ and\n$\\{u_3\\}$ the displacement fields for the parameter vectors\n$\\bm{\\theta}_1$, $\\bm{\\theta}_2$ and~$\\bm{\\theta}_3$ respectively.
Then we can consider the predicted displacement field $\\{\\hat u\\}(t;\\bm{\\theta})$\ndefined by\n\\begin{equation}\n\\{\\hat u\\}(t,\\bm{\\theta}) =\n\\lambda_1 \\{u_1\\}(t) + \\lambda_2 \\{u_2\\}(t) + \\lambda_3 \\{u_3\\}(t).\n\\label{eq:hatv}\n\\end{equation}\n\n\\textbf{Step 2: low-order dynamical system ROM.}\nExpression~\\eqref{eq:hatv} can be evaluated at some discrete instants\nin order to generate new training data. Then the SVD-DMD ROM methodology presented in Section~\\ref{sec:ROM} can be applied to these data to get a reduced dynamical system in the form\n\\begin{align*}\n & \\dot\\bm{\\alpha}(\\bm{\\theta}) = \\bm{\\beta}(\\bm{\\theta}), \\\\\n & \\dot\\bm{\\beta}(\\bm{\\theta}) = A_\\mu(\\bm{\\theta})\\, \\bm{\\beta}(\\bm{\\theta}).\n\\end{align*}\n\nWe also have a matrix $Q(\\bm{\\theta})$ of orthogonal POD modes and we can go back to the high-dimensional physical space by the standard operations\n\\begin{equation}\n\\{\\hat u\\}(t,\\bm{\\theta}) \\approx Q(\\bm{\\theta})\\, \\bm{\\alpha}(t,\\bm{\\theta}),\\quad\n\\{\\hat v\\}(t,\\bm{\\theta}) \\approx Q(\\bm{\\theta})\\, \\bm{\\beta}(t,\\bm{\\theta}).\n\\end{equation}\nNotice that the capsule position field $\\{\\bm{x}\\}(t,\\bm{\\theta})$ is given by\n\\[\n\\{ x\\}(t;\\bm{\\theta}) = \\{X\\}(\\bm{\\theta}) + \\{\\hat u\\}(t,\\bm{\\theta})\n\\]\nwith an initial capsule position $\\{X\\}(\\bm{\\theta})$ that may depend on $\\bm{\\theta}$ because of the pre-deformation preprocessing\nif $a\/\\ell \\geq 0.95$.\n}\n \n\\review{\\subsection{Numerical experiments, ROM accuracy assessment}\n\\begin{figure} \n\\centering\n\\includegraphics[width=0.55\\textwidth]{GraphDataBaseLearningTesting.eps}\n\\caption{Values of $Ca$ and $a\/\\ell$ included in the testing database (open circles).
The filled squares represent the cases in the training database.\nThe dotted line delimits the domain where a steady-state capsule deformation exists for capsules following the neo-Hookean law.}\n\\label{DB2}\n\\end{figure}\n\nA testing database is created using the FOM as in Section \\ref{sec:database}, considering $(Ca,a\/\\ell)$-couples which are not in the training database. A set of 110 $(Ca,a\/\\ell)$-couples is included in this database (Figure \\ref{DB2}). \nFor all the $(Ca,a\/\\ell)$-couples of the testing database, the capsule dynamics is interpolated from the dynamics of the 3 closest neighbors at a given non-dimensional time. \nCapsule shapes obtained by the ROM are compared to the ones predicted by the FOM at the same nondimensional time. Figure~\\ref{ErreurBaseTestingA20} represents the evolution of the error committed on the capsule shape $\\varepsilon_{Shape}$ on the testing database at $Vt\/\\ell=0,\\ 0.4,\\ 1,\\ 2,\\ 5,\\ 10$. At initial time, $\\varepsilon_{\\text{Shape}}$ is zero. The interpolation method is therefore able to capture the initial capsule shape. When the time increases, $\\varepsilon_{\\text{Shape}}$ increases and is greater than when the POD-DMD method is directly applied to the FOM results to reconstruct the dynamics. However, $\\varepsilon_{\\text{Shape}}$ remains less than 0.3\\% on the majority of the testing database, which is fully acceptable.
$\\varepsilon_{\\text{Shape}}$ is larger near the steady-state limit and at the lowest values of $Ca$, because these cases are close to the limits of the training database.}\n\n\n\n\n\n\\begin{figure}\n\\centering\n(a)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_outDB0_20modes_mu10-8_ROMFOM.eps}\n(b)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_outDB0.4_20modes_mu10-8_ROMFOM.eps}\n(c)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_outDB1_20modes_mu10-8_ROMFOM.eps}\n(d)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_outDB2_20modes_mu10-8_ROMFOM.eps}\n(e)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_outDB5_20modes_mu10-8_ROMFOM.eps}\n(f)\\includegraphics[width=0.45\\textwidth]{newfigures\/GraphMHD_outDB10_20modes_mu10-8_ROMFOM.eps}\n\\caption{\\review{Heat maps of $\\varepsilon_{\\text{Shape}}$ on the testing database as a function of $Ca$ and $a\/\\ell$ at (a) $Vt\/\\ell=$ 0, (b) 0.4, (c) 1, (d) 2, (e) 5, (f) 10. The dotted line delimits the domain for which a steady-state capsule deformation exists.} }\n\\label{ErreurBaseTestingA20}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n(a)\\includegraphics[height=0.17\\textwidth]{newfigures\/SSFCuttedPlane.eps}\n(b)\\includegraphics[height=0.17\\textwidth]{newfigures\/SSF_1.eps}\n(c)\\includegraphics[height=0.17\\textwidth]{newfigures\/SSF_2.eps}\n(d)\\includegraphics[height=0.17\\textwidth]{newfigures\/SSF_4.eps}\\\\\n\\caption{\\review{Dynamics estimated by the ROM of a capsule subjected to a simple shear flow. The capsule is shown for $Ca=0.3$ at the non-dimensional times (a) $\\dot{\\gamma}t=$ 0, (b) 1.6, (c) 4.8, (d) 6.4. 15 modes and $\\mu=10^{-6}$ are considered. The red point is a membrane point.
}} \n\\label{SSF3D}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{newfigures\/SSF2D_2plan.eps}\\\\\n\\caption{\\review{Capsule subjected to a simple shear flow: Comparison of the contours given by the FOM (dotted line) and estimated by the ROM (orange line). The capsule is shown for $Ca=0.3$ in the shear plane in the top and in the cross plane in the bottom. 15 modes and $\\mu=10^{-6}$ are considered. } } \n\\label{SSFCompProfils}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{newfigures\/SSFErrorShape.eps}\\\\\n\\caption{\\review{Evolution of the maximum error committed on the shape of a capsule subjected to a simple shear flow as a function of the capillary number $Ca$. The capsule dynamics was simulated up to a non-dimensional time $\\dot{\\gamma}t=10$.} } \n\\label{SSFError}\n\\end{figure}\n\n\\section{\\cdd{Discussion and conclusion}}\n\\review{\nAs a summary, in this paper we have considered a $\\bm{\\theta}$-parametrized reduced-order model of microcapsule dynamics in the form\n\\begin{align*}\n & \\dot\\bm{\\alpha}(\\bm{\\theta}) = \\bm{\\beta}(\\bm{\\theta}), \\\\\n & \\dot\\bm{\\beta}(\\bm{\\theta}) = A_\\mu(\\bm{\\theta})\\, \\bm{\\beta}(\\bm{\\theta}).\n\\end{align*}\n\n\\noindent The vector $\\bm{\\theta}=(Ca,a\/\\ell)$ contains the governing parameters, the coefficients $\\alpha_k(t,\\bm{\\theta})$ and $\\beta_k(t,\\bm{\\theta})$ are the spectral coefficients of the \nPOD decomposition for the displacement and velocity fields respectively, and the \nmatrix $A_\\mu(\\bm{\\theta})$ is identified from data using a dynamic mode decomposition least-square procedure.\nWe have numerically shown, for a broad range of capillary numbers $Ca$ and aspect ratios $a\/\\ell$, that it is able to capture the dynamics of a capsule flowing in a channel up to its steady state, including its large deformations.\nAs a first approach, we have presently chosen to use a DMD method that is linear in time to build
the ROM model. Still, the ROM captures spatial non-linearity by means of the POD modes. \nThe resulting reduced-order model is of high fidelity, with small discrepancies observed only in the early transient stage.\nWe have also shown that the learning time needs to be larger than the transient stage duration and that we can go beyond the FOM time window used for the training of the ROM model.\n\nFor generalization, we have computed the capsule dynamics for any parameter set. The generalization algorithm is based on interpolation: we first pre-calculate the ROM dynamic model at a finite number of points in the parameter space domain and determine the $\\alpha$, $\\beta$ and $\\phi_k$ (and thus the capsule displacement) at these points. For any other value of the parameters, we first predict the time evolution of the capsule node displacements using a linear interpolation procedure in the parameter space and then build a dynamical system based on the DMD methodology. The error is mostly below 0.3\\% over the entire domain, which proves the precision and utility of the ROM approach.\n\nLike any other data-driven model, the model requires a certain number of high-fidelity simulations to provide accurate predictions. By discretizing the parameter space in a regular and homogeneous way (Figure \\ref{DB}), we have not presently tried to optimize the number of FOM simulations. But sampling strategies like Latin Hypercube Sampling (LHS) exist and would yield a net reduction in the number of FOM simulations. \nThe empirical rule, conventional in the data-driven model community, is that one needs between $10\\times D$ and $50 \\times D$ points, where $D$ is the dimension of the problem ($D = 2$ in our case). This rule shows that the number of high-fidelity simulations does not explode with the problem dimension, owing to its linear dependence on $D$.\n\nTo prove the generality of the proposed approach, we have additionally applied the ROM to a capsule in simple shear flow. 
This classical case has been extensively studied over the past years \citep{ramanujan98,lac_dbb:05,Li2008,Walter2010,Foessel2011,dbb2010book,dupont2015}.\nWe build a ROM to predict the evolution of an initially spherical capsule subjected to a shear rate $\dot{\gamma}$ until $\dot{\gamma}t=10$ with 15 modes, a learning time of $T_L=10$ and $\mu=10^{-6}$. \nThe time step~$\Delta t$ between each snapshot is equal to 0.04.\nWe recover the fact that the initial capsule is elongated in the straining direction by the external flow and that the membrane rotates around the deformed shape due to the flow vorticity (Figure~\ref{SSF3D}). The ROM is thus able to recover the tank-treading motion. The capsule contours in both the shear and perpendicular planes predicted by the ROM and simulated by the FOM are in very good agreement (Figure \ref{SSFCompProfils}).\nFigure~\ref{SSFError} shows the evolution of the maximum error on the capsule shape for different values of $Ca$. At $Ca=0.1$, folds appear periodically on the capsule, which prevents the ROM from reproducing the wrinkling phenomenon precisely. \nFor $Ca\geq 0.3$, the error is reduced by an order of magnitude and is below 0.2\%.\n\n\nThe linear differential model is stable as soon as the eigenvalues of $A_\mu$ have nonpositive real parts, and is consistent with steady states as soon as zero is an eigenvalue. Numerical experiments show that the matrices $A_\mu$\nidentified from data have eigenvalues with negative real parts, one of which is very close to zero.\n\nAs is often the case with spectral-like methods, there is a trade-off between accuracy and ill-conditioning effects: when a large number of POD modes are used ($K>20$), the data matrix~$\mathbb{X}$ of snapshot POD coefficients is ill-conditioned. 
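A standard cure for this kind of ill-conditioning is the ridge (Tikhonov) form of the DMD least-squares problem, whose closed-form solution is $A_\mu = \mathbb{Y}\mathbb{X}^T(\mathbb{X}\mathbb{X}^T+\mu I)^{-1}$. A minimal sketch, with synthetic snapshot pairs standing in for the POD coefficient matrices of the paper:

```python
import numpy as np

# Ridge (Tikhonov) regularized least squares for the DMD operator:
#   A_mu = argmin_A ||Y - A X||_F^2 + mu ||A||_F^2,
# closed form: A_mu = Y X^T (X X^T + mu I)^{-1}.
# X, Y below are synthetic snapshot pairs, not capsule data.
rng = np.random.default_rng(0)
K, n_snap, mu = 5, 200, 1e-6
A_true = np.diag(np.linspace(0.5, 0.95, K))   # hypothetical discrete-time dynamics
X = rng.standard_normal((K, n_snap))          # snapshot states
Y = A_true @ X                                # time-shifted snapshots
A_mu = Y @ X.T @ np.linalg.inv(X @ X.T + mu * np.eye(K))
```

For well-conditioned data the regularized solution is indistinguishable from the true operator; the regularization only becomes important when $\mathbb{X}\mathbb{X}^T$ is nearly singular.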
For the determination of $A_\mu$, we have used a Tikhonov regularization in the least-squares cost function (see~\eqref{eq:flo21}) in order to have a better-conditioned problem, together with an $L$-curve procedure to determine the best regularization coefficient $\mu$. Unfortunately, we observe some limitations in the accuracy. A perspective would be to use a proximal approach: within an iterative procedure, at iteration~$(p+1)$, compute the matrix $A_\mu^{(p+1)}$, solution of\n\[\nA_\mu^{(p+1)} = \arg \n \min_{A\in\mathscr{M}_K(\mathbb{R})}\ \frac{1}{2} \|\mathbb{Y}-A\mathbb{X}\|_F^2 + \frac{\mu}{2} \|\mathbb{X}\|_F^2\, \|A-A_\mu^{(p)}\|_F^2\n\]\nusing $A_\mu^{(0)}=0$. At convergence, one can observe that the regularization term vanishes, so that one can expect better accuracy with this approach. This will be investigated in a future work.\n\nWe have proposed a successful and very efficient ROM for FSI problems.\nIt is an alternative to the use of HPC. It must be seen as a complementary (and non-competing) approach to full-order models, and has many advantages. \nAmong them, one can mention the ease of implementation. \nIt leads to a very handy set of ODEs that are easy to determine from an algorithmic point of view. Furthermore, the system can be run on any computer. The size of the matrices is, indeed, reduced from ($3 \times 2562~\text{nodes} \times 250~\text{snapshots}$) to about ($3 \times 2562~\text{nodes} \times (K+1)$), where the number of modes is $K = 20$. The required computation time is a few milliseconds for one parameter set. The current speedups are between 5 000 and 52 000, which outperforms any full-order model approach. 
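To illustrate why a single evaluation takes only milliseconds, note that the linear reduced system $\dot{\bm{\alpha}}=\bm{\beta}$, $\dot{\bm{\beta}}=A_\mu\bm{\beta}$ can be advanced in closed form with $K\times K$ linear algebra only. The matrix below is a hand-picked stable example (negative real parts, one eigenvalue near zero), not one identified from capsule data:

```python
import numpy as np
from scipy.linalg import expm

# Closed-form integration of the linear ROM  d(alpha)/dt = beta,
# d(beta)/dt = A_mu beta, with an illustrative stable A_mu.
K = 3
A_mu = np.diag([-2.0, -0.5, -1e-8])   # negative real parts, one eigenvalue ~0
alpha0 = np.zeros(K)
beta0 = np.array([1.0, -0.5, 0.2])

def rom_state(t):
    """Exact ROM state at time t (A_mu invertible here)."""
    E = expm(A_mu * t)
    beta = E @ beta0
    # alpha(t) = alpha0 + A_mu^{-1} (exp(A_mu t) - I) beta0
    alpha = alpha0 + np.linalg.solve(A_mu, (E - np.eye(K)) @ beta0)
    return alpha, beta

alpha_T, beta_T = rom_state(10.0)
```

Each call costs a small matrix exponential, which is where speedups of this magnitude over a full-order solve originate.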
We believe that this work is an encouraging milestone to move toward real-time simulation of general coupled problems and to deal with high-level parametric studies, sensitivity analysis, optimization and uncertainty quantification.\n\n\nThe next milestone following this work would be to go toward nonlinear differential dynamical systems as reduced-order models. There are three natural ways to do so. The first one is to use Kernel Dynamic Mode Decomposition (KDMD) rather than DMD. But we have recently shown in \cite{DeVuyst2022} that a non-linear low-order dynamical model does not provide significant improvement.\nThe second one is to use Extended Dynamic Mode Decomposition (EDMD) \citep{Williams2015}. The EDMD method adds some suitable nonlinear observables (or features) to the data, so that a linear 'augmented' dynamical system is searched for. \nA third option would be to directly use artificial neural networks (ANN), in particular recurrent neural networks (RNN) \citep{Trischler2016}. The RNN would replace the DMD procedure and would be trained with the same POD coefficient matrices $\mathbb{X}$ and~$\mathbb{Y}$. As shown in the recent study by \cite{Lin2021}, artificial intelligence may prove efficient and precise in predicting capsule deformation.\n}\n\n\backsection[Acknowledgements]{\nThe authors warmly thank Prof. Pierre Villon for fruitful discussions on model-order reduction and related topics.}\n\n\backsection[Funding]{This project has received funding from the European Research Council (ERC) under the European Union's Horizon\n2020 research and innovation programme (Grant agreement No. ERC-2017-COG - MultiphysMicroCaps). }\n\n\backsection[Declaration of interests]{The authors report no conflict of interest.}\n\n\backsection[Author ORCID]{C. Dupont, https:\/\/orcid.org\/0000-0002-7727-3846; F. De Vuyst, https:\/\/orcid.org\/0000-0003-0854-4670; A.-V. 
Salsac, https:\/\/orcid.org\/0000-0001-8652-5411}\n\n\backsection[Author contributions]{A.-V.S. and F.D.V. created the research plan and formulated the numerical problem. C.D. implemented the numerical method and performed the tests. All authors contributed to analysing data and reaching conclusions, and to writing the paper.}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum tunneling through a barrier is a fundamental physical effect\n\\cite{AG,Lee}, and the neutral atoms trapped in optical lattices\nhave given us an opportunity to study aspects of tunneling,\nincluding coherent dynamics on a macroscopic scale. For the symmetric\ndouble-well system, which has long served as a paradigm of quantum\nphysics, the eigenvalue structures are well known for the low lying\nexcited states \\cite{WKB1,WKB2,WKB3,WKB4,GDJH}, and it has been pointed out that, by\nadding a specific driving force, the tunneling dynamics can be\nbrought to a complete standstill, a phenomenon known as coherent destruction of\ntunneling (CDT) \\cite{GDJH}. Recently, this coherent destruction has\nbeen visualized in single-particle tunneling \\cite{KSSTO} as well as\nin the tunneling of Bose-Einstein condensates (BECs) \\cite{Valle}.\nOn the other hand, in experiments, localized wave packets have been\nprepared in one well of an asymmetric double-well potential, and the\ntunneling dynamics has been observed by turning off the asymmetric\npart of the potential \\cite{HADGJ,DJ,KSSTO}. 
It is also known that\nthe density distributions of the BECs of interacting particles are\nasymmetric in asymmetric double-well potentials\n\cite{AG,HWAHS,SH,DHC}, resulting in a non-vanishing relative\nphase evolution rate \cite{JS1,JS2}.\n\nIn this article,\nwe will find the energy eigenvalues of low lying excited states of\nan asymmetric double-well potential $V(x)$, which has one local maximum\nat $x=x_c$ between the wells, by constructing WKB wave functions with\nthe quadratic connection formula.\nTo quantify the degree of tunneling, we define the {\em tunneling\nvisibility} for a wave function $\psi(x,t)$ as ${\cal V}= (P_{max}\n-P_{min})\/(P_{max} +P_{min}),$ where $P_{max}$ $(P_{min})$ denotes\nthe maximum (minimum) value of $P_r(t)=\int_{x_c}^\infty \psi^*(x,t)\n\psi(x,t) dx$ during the time evolution. We find that wave\nfunctions of (almost) arbitrary ${\cal V}$ ranging from 0 to 1 can\nbe realized from a Gaussian wave packet by controlling the\npotential energy difference between the bottoms of the double well.\nThe case of ${\cal V}\approx 0$ with $P_{max}\approx 1$ amounts to\nCDT, and this case can be realized when the two wells can be\nconsidered to be separate.\nA Gaussian wave packet at a standstill can also be realized in the\nwell with the higher bottom.\n\nWhile the results for a single particle apply only to\nsystems of noninteracting particles, we note that interacting systems\nhave been extensively studied \cite{MCWW,NK,JI,JMP}. 
In particular,\nfor systems of a few bosons, highly delayed pair tunneling\nanalogous to nonlinear self-trapping has\nbeen found in the medium range of the interaction strength \cite{JMS}.\nThough we only consider double-well potentials bounded\nfrom below, the system of a particle in a periodic potential\nwith an additional constant force has been of great interest \cite{Sias,BWK,Holthaus},\nand we note that this system has been analyzed through the instanton method\n\cite{LFZLC},\nwhich is intimately related to the WKB analysis \cite{WKB1}.\n\nIn the next section, before presenting the main {\em analytic} results,\ntwo asymmetric systems will be numerically studied.\nIn Sec.~III, we will construct the WKB wave functions for a general potential\n$V(x)$. It will be shown that the asymmetric systems\ncan be classified into two different regimes: In one regime, an eigenfunction\nof low lying excited states has\nsignificant amplitude in both wells as in the symmetric systems,\nwhile, in the other, the eigenfunction describes the\nparticle mostly localized in just one of the wells.\nIn Sec.~IV, we will develop formulas for the estimation of the energy eigenvalues\nin the regime of the localized eigenfunctions, and for the estimation of\nthe tunneling visibility in the other.\nIn Sec.~V, the asymmetric double oscillator model\nwill be numerically solved to give insight into the eigenvalue structure\nof a general double-well system, and to show that the {\em WKB description} could be\n{\em remarkably accurate}.\nThe last section will be devoted to a summary and discussions.\n\n\n\n\n\n\n\section{Coherent control of tunneling: Numerical examples}\nIn this section, we will study two systems numerically to indicate that\nthe coherent control method could also be useful in a general potential\nwhose wells are {\em not} exactly quadratic,\nand to show that gravity may be used to control the tunneling dynamics in\nthe settings of the recent experiments 
\cite{KSSTO,HADGJ}.\n\n\nFirst, we\nconsider the system of a particle of mass $m$ in the quartic\ndouble-well potential\n\begin{equation}\nV_Q(\alpha;x)=\hbar\omega\left[\n-\frac{x^2}{4l_{ho}^2}+\frac{x^4}{96l_{ho}^4}+\frac{\alpha\nx}{8\sqrt{3}l_{ho}}+C(\alpha)\right],\n\end{equation}\nwith $l_{ho}=\sqrt{\frac{\hbar}{m\omega}}$, where $C(\alpha)$ is\nintroduced to ensure that the minimum of the potential is 0. While\nthe term proportional to $\alpha$ is added to give the asymmetry,\n$V_Q(0;x)$ is a special case of the well-known potentials\n\cite{GDJH,WKB1,WKB2,WKB3,WKB4}.\nFor $V_Q(0;x)$, the angular frequency for small oscillations\nat the bottom of each well is $\omega$, and the barrier height is\n$3\hbar\omega\/2$.\n\nFor the Gaussian wave packet\n$\phi_G(\alpha;x)=\exp[-(x-a(\alpha))^2\/(2l_{r}^2)]\/(l_r^2\pi)^{1\/4}$\ncentered at the bottom of the right well, $a(\alpha)$, with\n$l_r^{-4}=\frac{m^2\omega_r^2(\alpha)}{\hbar^2}=\frac{m}{\hbar^2}\frac{d^2\nV_\alpha(x)}{dx^2}|_{x=a(\alpha)}$, in Fig.~\ref{quartic}, we\nevaluate the probabilities ${\nP}_i(\alpha)=|<\phi_G(\alpha)|\varphi_i(\alpha)>|^2$, where\n$\varphi_i(\alpha;x)$ are the eigenfunctions\narranged in the order of ascending energy eigenvalue $E_i(\alpha)$\n$(i=0,1,2,\cdots)$. 
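The probabilities $P_i(\alpha)$ can be reproduced with a simple finite-difference diagonalization. A sketch in the dimensionless units $\hbar=m=\omega=1$ (so $l_{ho}=1$); the constant $C(\alpha)$ only shifts the spectrum and is dropped here, and the grid parameters are illustrative choices:

```python
import numpy as np

# Finite-difference Hamiltonian for the quartic double well V_Q(alpha; x)
# in units hbar = m = omega = 1 (C(alpha) dropped: it only shifts energies).
def hamiltonian(alpha, x):
    dx = x[1] - x[0]
    V = -x**2 / 4 + x**4 / 96 + alpha * x / (8 * np.sqrt(3))
    kin = 1.0 / dx**2                          # diagonal of -(1/2) d^2/dx^2
    H = np.diag(kin + V)
    H -= np.diag(np.full(x.size - 1, kin / 2), 1)
    H -= np.diag(np.full(x.size - 1, kin / 2), -1)
    return H

x = np.linspace(-10.0, 10.0, 900)
E, phi = np.linalg.eigh(hamiltonian(0.0, x))   # columns = eigenfunctions

# Grid-normalized Gaussian packet in the right well (for alpha = 0 the
# right minimum is at a = sqrt(12) and the local frequency is omega_r = 1).
a = np.sqrt(12.0)
phi_G = np.exp(-(x - a)**2 / 2)
phi_G /= np.linalg.norm(phi_G)
P = (phi.T @ phi_G)**2                         # overlap probabilities P_i
```

For the symmetric case $\alpha=0$, the packet is carried almost entirely by the lowest doublet, consistent with the tunneling combination of the symmetric problem.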
When we define the (unnormalized) wave function $\psi_\alpha (x,t)$ as\n\begin{eqnarray}\n&\psi_\alpha(x,t)&\cr\n&=&\sqrt{P_1(\alpha)}\varphi_1(\alpha;x)\exp\left[\frac{-iE_1(\alpha)t}{\hbar}\right]\cr\n&&+\sqrt{P_2(\alpha)}\varphi_2(\alpha;x)\exp\left[\frac{-iE_2(\alpha)t}{\hbar}\right],\n\end{eqnarray}\nFig.~\ref{quartic} indicates that, in this system in the deep quantum regime,\n$\psi_\alpha(x,0)$ closely describes the Gaussian wave function $\phi_G(\alpha;x)$\nfor $0.01\le \alpha $ in the given range of $\alpha$.\nFurther, plot (e) shows that, if $ 0.01<\alpha<0.9$,\nthe Gaussian wave\npacket is closely described by an eigenfunction,\n$\varphi_1(\alpha;x),$ which is in turn mostly localized in the\nright well with the higher bottom (we note that localized eigenfunctions have also\nbeen found in buried double-well systems \cite{WM}).\n\n\begin{figure}\n\includegraphics[width=3.3in]{1a.eps}\n\includegraphics[width=3.3in]{1b.eps}\n\includegraphics[width=3.3in]{1cd.eps}\n\includegraphics[width=3.3in]{1e.eps}\n\caption{(Color online) Tunneling dynamics in $V_Q(\alpha;x)$. (a)\nA potential (thick solid line) and the three lowest eigenvalues (thin\nblack, green, red solid lines), for $\alpha=\alpha_0=0.985$. The\ndotted and dashed, dashed, solid vertical lines indicate the values\nof $a(\alpha_0)$, $-b(\alpha_0)$, $x_c(\alpha_0)$, respectively,\nwhere $V_Q(\alpha_0;-b(\alpha_0))=0$. (b) $|\phi_G(\alpha_0;x)|^2$\n(solid line), and $|\psi_{\alpha_0}(x,t)|^2$ at $t=0$ (dotted line)\nand at $t=\hbar \pi\/(E_2(\alpha_0 )-E_1(\alpha_0 ))$ (dashed line).\n (c) and (d)\nThe calculated tunneling visibility for $\psi_\alpha(x,t)$ (solid\nlines) and an estimation (dotted lines). The estimation is made from\nEq.~(\ref{visibility}). 
For (c),\n$\delta_\epsilon\/\delta_a=2V_\alpha(a(\alpha))\/(E_1(0)-E_0(0)).$ For\n(d), since $E_2(\alpha)-E_1(\alpha)$ has the minimum at\n$\alpha=\alpha_1=1.00$,\n$2[V_\alpha(a(\alpha))-V_{\alpha_1}(a(\alpha_1))]\/(E_2(\alpha_1)-E_1(\alpha_1))$\nis used as $\delta_\epsilon\/\delta_a$. The \"{\bf +}\" mark denotes\nthe value of $\alpha_0$. } \label{quartic}\n\end{figure}\n\nAs is well known for the systems of symmetric potentials \cite{WKB1,WKB3},\nthe wave function $\frac{1}{\sqrt{2}}\left(\varphi_0(0;x)e^{\frac{-iE_0(0)t}{\hbar}}\n+\varphi_1(0;x)e^{\frac{-iE_1(0)t}{\hbar}}\right)$ describes a system\nthat tunnels back and forth between an almost Gaussian state localized\nin the left well and the state localized in the right well,\nwith ${\cal V} \approx 1$.\nAs, for $0.01\le\alpha\le 0.9$, the ground state wave function $\varphi_0(\alpha;x)$\nand the first excited state wave function $\varphi_1(\alpha;x)$\nare mostly localized in the left and right wells with shapes close to a Gaussian, respectively,\nif we could change $\alpha$ from 0 to a value\nlarger than 0.01 but smaller than 0.9, without distorting the wave function\nby the change,\nthen we would have a system where the probability density is almost stationary.\nFor this stationary system, the probability of finding the particle in the left\n(or right) well depends on the details of changing $\alpha$.\nIf the change could be made in a much shorter period of time compared with\nthe period of tunneling $2\hbar \pi\/(E_1(0)-E_0(0))$, the probability of finding the\nparticle in one of the wells\ncrucially depends on the timing of the change.\nIf we tune $\alpha$ back to 0 or to 1, tunneling dynamics appears again,\nbut this time the tunneling visibility ${\cal V}$ could be less than 1,\ndepending on the process.\n\n\n\nSecond, for an atomic spinor trapped in a double well of the potential\n\begin{eqnarray}\nV_L(\beta;x)&=&C E_R[\cos^2 kx 
+\xi\cos^2\n(\frac{kx}{2})-{\xi}\/{2}+{\xi^2}\/{16}]\cr\n&&+m\beta gx,\n\end{eqnarray}\nwith $E_R=(\hbar\nk)^2\/2m$ \cite{KSSTO,HADGJ}, we explicitly consider the case of\n$C=10$ and $\xi=\frac{1}{2}$, which is similar to an experimental\nsituation \cite{KSSTO}. For $\beta=0$, with the vanishing boundary condition\nat $x=\frac{2j\pi}{k}$ ($j$: integer), there are three energy\neigenvalues $E_i^L~(i=0,1,2)$ under the barrier height, with\n$E_1^L-E_0^L=0.122E_R$, $E_1^L+E_0^L=5.70E_R$, and $E_2^L=7.27E_R$.\n\nIf $\lambda=2\pi\/k=811$ nm, $g=980$ ${\rm cm\/s^2}$ and\ncesium atoms are in $V_L(1;x)$, the fact that\n$mg\lambda\/2=0.580E_R=4.75(E_1^L-E_0^L)$ then suggests that the\ntunneling visibility is very low for a cesium atom trapped in a\nvertical optical lattice aligned along the Earth's gravity. If the\nright minimum of a double well is located at $x_r$, we construct a\nGaussian wave packet\n$\phi(f;x)=\exp[-(x-x_r)^2\/(2f^2l_{r}^2)]\/(f^2l_r^2\pi)^{1\/4}$\nhere, with a fitting factor $f$. Indeed, for the first excited state\n$|\varphi_1^L>$ of $\beta=1$, we find that\n$|<\phi(0.759)|\varphi_1^L>|^2=0.980$ and\n$|<\phi(1.00)|\varphi_1^L>|^2=0.945$,\nwhich shows that the eigenfunction\n$\varphi_1^L(x)$ is closely described by a Gaussian wave\npacket, proving the low visibility of the wave packet.\nAs the visibility is high for the symmetric horizontal lattice, this shows that\ngravity may be used to control the tunneling dynamics in optical\nlattices.\n\n\section{WKB wave functions}\n\nIn this section, we will construct WKB wave functions for a general potential\n$V(x)$, assuming that $V(x)$ is written as $\frac{m\omega_l^2}{2}(x+b)^2$ and as\n$\frac{m\omega_r^2}{2}(x-a)^2+\epsilon\hbar\omega_r$ around the\nbottoms of the left and right wells, respectively. 
The eigenvalue structure\nwill then be found by requiring the WKB wave functions to be asymptotically matched,\nin the overlapping regions, onto the exact solutions of the quadratic wells.\n\n\nIn the quadratic\nregions of $V(x)$, the eigenfunction is described by the parabolic\ncylinder function $D_\\eta(z)$, and the eigenfunction of an energy\neigenvalue $\\hbar\\omega_r (\\nu+\n\\epsilon+\\frac{1}{2})[\\equiv\\hbar\\omega_l (\\mu +\\frac{1}{2})]$ is\nwritten as\n\\begin{equation}\nC_L D_{\\mu}\\left(- \\frac{\\sqrt{2}(x+b)}{l_{l}} \\right) ~{\\rm and}~\nC_R D_{\\nu}\\left( \\frac{\\sqrt{2}(x-a)}{l_{r}} \\right),\n\\end{equation}\nnear the bottoms of the left and right wells, respectively, with\n$l_i=\\sqrt{\\frac{\\hbar}{m\\omega_i}}$ $(i=l,r)$. On the other hand,\nby taking $x_c=0$,\nin the region of the barrier we have an approximate solution for the\neigenfunction through the WKB method \\cite{WKB1,WKB2,WKB3}, as\n\\begin{eqnarray}\n\\psi_{WKB}(x)&=& \\frac{N_R\\sqrt{\\hbar}}{\\sqrt{l_{ho}\np(x)}}\\exp\\left[\\int_0^x\\frac{p(y)}{\\hbar} dy\\right]\\ \\cr\n&&+\\frac{N_L\\sqrt{\\hbar}}{\\sqrt{l_{ho}\np(x)}}\\exp\\left[-\\int_0^x\\frac{p(y)}{\\hbar} dy\\right], \\label{WKB}\n\\end{eqnarray}\nwhere\n$p(y)=\\sqrt{2m[V(y)-(\\nu+\\epsilon+\\frac{1}{2})\\hbar\\omega_r]}$. The\nrelations between the real coefficients $C_L,~C_R,~N_L,~N_R$ may be\ngiven by comparing the eigenfunctions in the regions where the\ndescriptions by the parabolic cylinder function and by the WKB\nfunction are both valid. 
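The parabolic cylinder functions used in this matching are available numerically, e.g. through scipy.special.pbdv. A quick consistency check against the Hermite-function form they reduce to at non-negative integer order, $D_k(\sqrt{2}y)=e^{-y^2/2}H_k(y)/(\sqrt{2})^k$ (an identity invoked later in this section):

```python
import numpy as np
from scipy.special import pbdv, eval_hermite

# Check D_k(sqrt(2) y) = exp(-y^2/2) H_k(y) / sqrt(2)^k for k = 0..3.
y = np.linspace(-3.0, 3.0, 121)
max_err = 0.0
for k in range(4):
    # pbdv returns the pair (D_v(z), D_v'(z)); keep the function value.
    Dk = np.array([pbdv(k, z)[0] for z in np.sqrt(2.0) * y])
    ref = np.exp(-y**2 / 2.0) * eval_hermite(k, y) / np.sqrt(2.0)**k
    max_err = max(max_err, float(np.max(np.abs(Dk - ref))))
```

The same routine evaluates $D_\eta(z)$ at the non-integer orders $\mu$, $\nu$ that appear in the asymmetric problem.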
On the negative real axis, the asymptotic\nexpansion of the parabolic cylinder function is \\cite{AS}\n\\begin{eqnarray}\nD_\\eta(z) &\\sim\n&\\frac{\\sqrt{2\\pi}}{\\Gamma(-\\eta)}\\exp\\left[\\frac{z^2}{4}\\right]\\frac{1}{|z|^{\\eta+1}}\n\\left[1 +O\\left(\\frac{\\eta^2}{z^2}\\right) \\right]\\cr &&+\\cos(\\eta\n\\pi)\n\\exp[-\\frac{z^2}{4}]|z|^\\eta\\left[1+O\\left(\\frac{\\eta^2}{z^2}\\right)\\right].\n~~~~~ \\label{pcf asymp}\n\\end{eqnarray}\n\n\n\\begin{figure}\n\\includegraphics[width=3.3in]{vxf.eps}\n\\caption{(Color online) An illustration of turning points.} \\label{vxf}\n\\end{figure}\n\n\n\nThe WKB wave function can also be expanded, for instance, as\n\\begin{eqnarray}\n&&\\psi_{WKB}(x) \\cr && \\sim2^\\frac{1}{4}N_R\n\\sqrt{\\frac{l_r}{l_{ho}}}\\left(\\frac{e}{\\nu+\\frac{1}{2}}\\right)\n ^{\\frac{\\nu}{2}+\\frac{1}{4}}\\left(\\frac{\\sqrt{2}(a-x)}{l_{r}}\\right)^\\nu\n \\cr\n&&~~~~~~~~~~~~~\\times\n\\exp\\left({-\\frac{(a-x)^2}{2l_{r}^2}+\\int_0^{a_\\nu}\n\\frac{p(y)}{\\hbar} dy}\\right)\\cr &&~~~ +2^\\frac{1}{4}N_L\n\\sqrt{\\frac{l_r}{l_{ho}}} \\left(\\frac{\\nu+\\frac{1}{2}}{e}\\right)\n ^{\\frac{\\nu}{2}+\\frac{1}{4}}\\left(\\frac{l_{r}}{\\sqrt{2}(a-x)}\\right)^{\\nu+1}\n \\cr\n&&~~~~~~~~~~~~~\\times \\exp\\left(\\frac{(a-x)^2}{2l_{r}^2}-\\int_0^{a_\\nu} \\frac{p(y)}{\\hbar} dy\\right),\n\\label{WKB asymptotic}\n\\end{eqnarray}\nwhen $\\frac{a-x}{a-a_\\nu} \\gg 1$ and $x$ is in the region of\nquadratic potential of the right well, where $a_\\nu, -b_\\mu$ are\nturning points satisfying\n$V(a_\\nu)=V(-b_\\mu)=(\\nu+\\epsilon+\\frac{1}{2})\\hbar\\omega_r$ ~~$(a>\na_\\nu>0,~b>b_\\mu >0)$ (see Fig.~\\ref{vxf}). By comparing the leading terms in the\nparabolic cylinder function description which originates from the\nfirst term in the right hand side (r.h.s.) 
of Eq.~(\ref{pcf asymp})\nwith the relevant terms in the WKB description, we have\n\begin{eqnarray}\nN_R&=&\frac{\sqrt{\sqrt{2}\pi}}{\Gamma(-\mu)}\sqrt{\frac{l_{ho}}{l_l}}\n \left(\frac{e}{\mu+\frac{1}{2}}\right)^{\frac{\mu}{2}+\frac{1}{4}}\n e^{\int_{-b_\mu}^0 \frac{p(y)}{\hbar} dy}C_L,\cr\nN_L&=&\frac{\sqrt{\sqrt{2}\pi}}{\Gamma(-\nu)}\sqrt{\frac{l_{ho}}{l_r}}\n \left(\frac{e}{\nu+\frac{1}{2}}\right)^{\frac{\nu}{2}+\frac{1}{4}}\n e^{\int_0^{a_\nu} \frac{p(y)}{\hbar} dy}C_R.\n\label{necessary condition}\n\end{eqnarray}\nIf neither $\mu$ nor $\nu$ is close to a non-negative integer, in\nthe large separation limit of $e^{\int_{-b_\mu}^0 \frac{p(y)}{\hbar}\ndy}\gg 1$ and $e^{\int_0^{a_\nu} \frac{p(y)}{\hbar} dy}\gg 1$,\nEq.~(\ref{necessary condition}) yields\n\begin{equation}\n|N_R| \gg |C_L| ~~~{\rm and}~~~ |N_L|\gg|C_R|.\n\label{noninteger}\n\end{equation}\nIf the WKB condition $p^2(x) \gg \hbar|\frac{dp}{dx}|$ is satisfied\nin the region of the barrier, the first term in the r.h.s.~of\nEq.~(\ref{WKB}) with positive (negative) $N_R$ is a monotonically\nincreasing (decreasing) function and the second term with positive\n(negative) $N_L$ is a monotonically decreasing (increasing) function.\nEq.~(\ref{noninteger}) then implies that if an eigenfunction exists\nfor such $\nu$, it gives a probability distribution in which the\nprobability of finding the particle in the barrier region is\nconsiderable. Since $E_\varphi >\int \varphi^*(x) V(x) \varphi(x) dx$,\nif an eigenfunction gives a considerable probability in the barrier\nregion, the eigenvalue cannot be much smaller than $V(0)$.\n\nOn the other hand, if either $\nu$ or $\mu$ is close to a\nnon-negative integer, the relations in Eq.~(\ref{noninteger}) are\nnot valid. If $\nu$ is close to a non-negative integer, due to the\nsingularity in the gamma function, the second term of the r.h.s.~of\nEq.~(\ref{pcf asymp}) can also be a leading term. 
By comparing this\ntype of leading term in the parabolic cylinder function description\nwith the relevant term in the WKB wave function, we have a relation\nbetween $N_R$ and $C_R$. By combining this relation with the first\none in Eq.~(\\ref{necessary condition}), we have\n\\begin{eqnarray}\n\\frac{C_L}{C_R}&=&\\sqrt{\\frac{l_l}{l_r}}\\frac{\\cos(\\nu\\pi)\\Gamma(-\\mu)}{\\sqrt{2\\pi}}\n \\left( \\frac{\\nu+\\frac{1}{2}}{e}\\right)^{\\frac{\\nu}{2}+\\frac{1}{4}}\n \\cr\n&&\\times \\left(\n\\frac{\\mu+\\frac{1}{2}}{e}\\right)^{\\frac{\\mu}{2}+\\frac{1}{4}}\n\\exp\\left[- \\int_{-b_\\mu}^{a_\\nu} \\frac{p(y)}{\\hbar}dy\\right].\n\\label{integer nu}\n\\end{eqnarray}\nIf $\\mu$ is not close to an integer, Eqs.~(\\ref{necessary\ncondition}) and (\\ref{integer nu}) imply that the eigenfunction\ngives the probability distribution in which the particle is mostly\nfound in the right well.\n\nIf $\\mu$ is close to a non-negative integer, we have the relation\n\\begin{eqnarray}\n\\frac{C_L}{C_R}&=&\\sqrt{\\frac{l_l}{l_r}}\\frac{\\sqrt{2\\pi}}{\\cos(\\mu\\pi)\\Gamma(-\\nu)}\n \\left( \\frac{e}{\\nu+\\frac{1}{2}}\\right)^{\\frac{\\nu}{2}+\\frac{1}{4}}\\cr\n&&\\times \\left( \\frac{e}{\\mu+\\frac{1}{2}}\\right)^{\\frac{\\mu}{2}+\\frac{1}{4}}\n \\exp\\left[ \\int_{-b_\\mu}^{a_\\nu} \\frac{p(y)}{\\hbar}dy\\right].\n\\label{integer mu}\n\\end{eqnarray}\nIn this case, if $\\nu$ is not close to an integer,\nEqs.~(\\ref{necessary condition}) and (\\ref{integer mu}) imply that\nthe eigenfunction gives the probability distribution of the particle\nmostly localized in the left well.\n\nFor an eigenstate whose eigenvalue is much lower than the barrier\nheight, the eigenvalue thus must be close to\n$\\hbar\\omega_l(m+\\frac{1}{2})$ or\n$\\hbar\\omega_r(n+\\frac{1}{2}+\\epsilon)$ $(n,m=0,1,2,\\cdots)$, the\neigenvalues of the quadratic potentials of the wells. 
Furthermore,\nthe fact that $D_k(\sqrt{2}y)=e^{-y^2\/2}H_k(y)\/(\sqrt{2})^k$ for a\nnon-negative integer $k$ implies that, if the eigenvalue is close\nto $\hbar\omega_r(n+\frac{1}{2}+\epsilon)$\n($\hbar\omega_l(m+\frac{1}{2})$), the eigenfunction of the\ndouble-well system must be closely described by\n$\psi_n^{ho}(a;l_r;x)$ ($\psi_m^{ho}(-b;l_l;x)$) around the bottom\nof the right (left) well, with an eigenfunction of a simple\nharmonic oscillator $\psi_k^{ho}(c;l;x)$ ($\equiv H_k(\frac{x-c}{l})\exp[-\frac{(x-c)^2}{2l^2}]\/\sqrt{\sqrt{\pi}l 2^k\nk!}$).\n\n\n\section{Two different regimes}\n\nThe analysis of the previous section shows that an eigenfunction of\nthe low lying excited states in the large separation limit has\nsignificant amplitude either in both wells or in just one of the wells.\nIn this section, we will show that the eigenfunction of significant\namplitude in both wells must be accompanied by another eigenfunction\nto form a doublet. As in the symmetric case, a linear combination\nof the doublet is responsible for the tunneling, and\nwe will calculate the tunneling visibility for the combination.\nFor the eigenfunctions localized in one of the wells, we will develop a\nformula for the energy eigenvalue estimation.\n\n\n\subsection{Tunneling visibility}\n\n\nFor the case that both $\mu$ and $\nu$ are close to integers $m$ and $n$,\nrespectively,\nwe define $\delta_\epsilon= \epsilon+n- \frac{\omega_l}{\omega_r}m$,\nso that $|\delta_\epsilon|$ is equal to or less than the minimum of\n$\frac{1}{2}$ and $ \frac{\omega_l}{2\omega_r}$. 
In this case,\nthe corresponding eigenfunction gives\nconsiderable probabilities in both the left and right wells,\nand $\mu$ and $\nu$ should be written as $\mu=m+\delta_\mu$ and\n$\nu=n+\delta_\nu$ with $|\delta_\mu|,$ $|\delta_\nu|,$\n$|\delta_\epsilon|\ll 1.$ From the fact that\n$\delta_\mu=\frac{\omega_r}{\omega_l}(\delta_\nu+\delta_\epsilon)$,\nEqs.~(\ref{integer nu}) and (\ref{integer mu}) then yield\n$\delta_\nu^2+\delta_\epsilon\delta_\nu-\delta_a^2=0,$ with\n\begin{eqnarray}\n\delta_a&=&\sqrt{\frac{\omega_l}{\omega_r}}\sqrt{\frac{1}{2\pi n!m!}}\n \left(\frac{n+\frac{1}{2}}{e}\right)^{\frac{2n+1}{4}}\cr\n&&~~~~ \times \left( \frac{m+\frac{1}{2}}{e}\right)^{\frac{2m+1}{4}}\n \exp\left[- \int_{-b_m}^{a_n} \frac{p(y)}{\hbar}dy\right].~~~\n\end{eqnarray}\n\nWith\n\begin{equation}\n\delta_\pm= \frac{1}{2}(-\delta_{\epsilon} \pm\n\sqrt{\delta_\epsilon^2+4\delta_a^2}),\n\end{equation}\nwhen $\delta_\nu=\delta_-$,\nthe eigenfunction is written as\n\begin{equation}\n\psi^-(x)=\n\frac{(-1)^n\delta_a\n\psi_{m}^{ho}\n(-b;l_l;x)+\delta_+\psi_{n}^{ho}(a;l_r;x)}{\sqrt{\delta_a^2+\delta_+^2}},\n\end{equation}\nwhile the eigenfunction of $\delta_\nu=\delta_+$ is\n\begin{equation}\n\psi^+(x)=\n\frac{-(-1)^{n}\delta_+ \psi_{m}^{ho}\n(-b;l_l;x)+\delta_a\psi_{n}^{ho}(a;l_r;x)}{\sqrt{\delta_a^2+\delta_+^2}}.\n\end{equation}\nThe formal expression of $\delta_{\pm}$ in terms of\n$\delta_{\epsilon}$ and $\delta_a$ can be understood from the fact\nthat, for $|\delta_\epsilon|\ll 1$, the tunneling dynamics is\nessentially described by that of a two-level system (TLS)\n\cite{CDL,RMP,GDJH}. 
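The reduction to a TLS can be checked directly: $\delta_\pm$ are the roots of $x^2+\delta_\epsilon x-\delta_a^2=0$ and hence the eigenvalues of a $2\times 2$ Hamiltonian with coupling $\delta_a$ and detuning $\delta_\epsilon$ (the basis convention below, in units of $\hbar\omega_r$, is an illustrative choice), and propagating an initial right-well state recovers the visibility ${\cal V}=1/[1+\frac{1}{2}(\delta_\epsilon/\delta_a)^2]$ numerically:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative two-level reduction: H (in units of hbar*omega_r) has
# coupling delta_a and off-diagonal detuning delta_eps; its eigenvalues
# are delta_±, the roots of x^2 + delta_eps*x - delta_a^2 = 0.
d_eps, d_a = 0.3, 0.2
H = np.array([[0.0, d_a], [d_a, -d_eps]])
lam = np.sort(np.linalg.eigvalsh(H))
d_minus = 0.5 * (-d_eps - np.sqrt(d_eps**2 + 4 * d_a**2))
d_plus = 0.5 * (-d_eps + np.sqrt(d_eps**2 + 4 * d_a**2))

# Propagate an initial right-well state and measure the visibility of
# the right-well population P_r(t).
psi0 = np.array([0.0, 1.0], dtype=complex)
ts = np.linspace(0.0, 100.0, 2001)
P_r = np.array([abs((expm(-1j * H * t) @ psi0)[1]) ** 2 for t in ts])
V_num = (P_r.max() - P_r.min()) / (P_r.max() + P_r.min())
V_formula = 1.0 / (1.0 + 0.5 * (d_eps / d_a) ** 2)
```

The numerical visibility and the closed-form expression agree to the sampling accuracy, including the factor $\frac{1}{2}$ in the denominator.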
If $\psi_n^{ho}(a;l_r;x)$ is written as a linear\ncombination of $\psi^-(x)$ and $\psi^+(x)$, the visibility of the\nlinear combination is\n\begin{equation}\n{\cal V}=1\/[1+\frac{1}{2}(\frac{\delta_\epsilon}{\delta_a})^2].\n\label{visibility}\n\end{equation}\n\n\nFor the visibility estimation in Fig.~\ref{quartic}(c) and (d), the parameter is\ndetermined from the consideration that, when $\delta_\epsilon=0$, the\nenergy splitting is given as $2\hbar\omega_r\delta_a$.\nThe fact that the tunneling dynamics takes place significantly for\n$|\delta_\epsilon|\ll 1$, even with $m\neq n$, may be closely\nrelated to the resonant enhancement of tunneling in\nmultiple-well structures \cite{WM,Sias}. As in the BEC loaded into\nan asymmetric double-well potential \cite{HWAHS}, if there are\n$N$ noninteracting atoms in the ground state of $V(x)$ with small\n$\delta_\epsilon$ ($\ll \delta_a$), the number difference between\nthe left and right wells is proportional to $\delta_\epsilon$.\n\n\subsection{Energy eigenvalue estimation}\n\nFor an eigenfunction $\psi(x)$ of the system of $V(x)$ with the\nenergy eigenvalue $E$, we have the identity\n\begin{eqnarray}\n&&E-\hbar\omega_r(n+\frac{1}{2})\cr\n&&=\frac{\int_{-\infty}^{\infty}\left[ V(x)- V_r^{ho}(x)\right]\n\psi(x)\psi_{n}^{ho}(a;l_r;x)\ndx}{\int_{-\infty}^{\infty}\psi(x)\psi_{n}^{ho}(a;l_r;x) dx},~~~~~~~\n\label{harmonic identity}\n\end{eqnarray}\nwhere $V_r^{ho}(x)= \frac{\hbar\omega_r}{2}\frac{(x-a)^2}{l_r^2}$.\nIn numerical calculations, this identity may be efficiently used\nin estimating the energy eigenvalue of an eigenfunction which is close to\n $\psi_{n}^{ho}(a;l_r;x)$.\n\n\nAs the visibility also implies, when $\delta_\epsilon$ is much\nlarger than $\delta_a$, $\psi(x)$ of $E\approx \hbar\n\omega_r(n+\epsilon+\frac{1}{2})$ is mostly localized in the right well,\nand around the bottom it will be closely described by\n$\psi_{n}^{ho}(a;l_r;x)$, to give 
$\psi_{n}^{app}\n(l_r;x)$, an approximation of $\psi(x)$ in this well.\nIn the other regions, Eqs.~(5,7) and the\nWKB method can be used to find $\psi_{n}^{app}\n(l_r;x)$. Eq.~(\ref{harmonic identity}) may then be\nused to find a correction to $\nu$, as\n\begin{eqnarray}\n&&\nu-n +\epsilon \cr &&\approx\frac{\int_{-\infty}^{\infty}[ V(x)-\nV_r^{ho}(x)] \psi_n^{app}(l_r;x)\psi_{n}^{ho}(a;l_r;x) dx}\n{\hbar\omega_r\int_{-\infty}^{\infty}\n\psi_n^{app}(l_r;x)\psi_{n}^{ho}(a;l_r;x) dx}.~~~~~~\n\end{eqnarray}\nThis approximation of a localized eigenfunction (ALE) can be made similarly for the eigenstate\nof $E\approx \hbar \omega_l(m+\frac{1}{2})$, which describes a probability distribution mostly localized\nin the left well.\n\n\n\n\n\section{Precision test: Asymmetric double oscillator}\n\nIn application of the WKB method to a symmetric double-well potential, it is\nknown that the energy splitting can be found accurately if the (ground state)\nenergy eigenvalue,\nand thus the turning points, are appropriately chosen \cite{WKB2}. 
If a well is quadratic with\nangular frequency $\\omega$, then $(j+\\frac{1}{2})\\hbar\\omega$ ($j$: nonnegative integer) may be\na good estimate of an energy eigenvalue.\n\nIn order to check the accuracy of the formalism\nwe have provided, avoiding the turning-point problem as much as possible,\nwe consider the system of the asymmetric double oscillator potential \\cite{Song}\n\\begin{equation}\nV_D(\\epsilon;x)=\\left\\{\\begin{array}{ll}\n \\hbar\\omega\\left(\\frac{x+\\sqrt{a^2+2\\epsilon l_{ho}^2}}\n { \\sqrt{2} l_{ho}} \\right)^2 &\n ~{\\rm for}~ x< 0,\\\\\n \\hbar\\omega\\left[\\left(\\frac{x-a}{ \\sqrt{2} l_{ho}} \\right)^2+\\epsilon\\right]\n & ~ {\\rm for}~ x\\geq 0.\n \\end{array}\\right.\n\\end{equation}\nFor this system, since both wells are exactly quadratic, the eigenfunctions\nare described by the parabolic\ncylinder functions on both sides of $x=0$ \\cite{Song,Merzbacher}, and the\ncontinuity of the eigenfunction and its derivative at $x=0$ can be\nused to find the eigenvalues $E_0^D(a),E_1^D(a),\\cdots$. As shown in\nFig.~\\ref{Double Osc}, the calculations indeed confirm that, when $a$ is\na few times $l_{ho}$, the eigenvalues of the low-lying excited\nstates are close to $\\hbar\\omega(n+\\epsilon+\\frac{1}{2})$ or\n$\\hbar\\omega(m+\\frac{1}{2})$. To show that the estimation through\nthe ALE is not valid when $\\hbar\\omega\\delta_\\epsilon$ is of the order of\nor smaller than the energy difference between adjacent energy\neigenstates, we add the ratio\n$2\\epsilon\\hbar\\omega\/(E_1^D(a)-E_0^D(a))$ (dotted and dashed line)\nin Fig.~\\ref{Double Osc}(b).
For the ALEs of the ground and second excited states,\nEq.~(\\ref{integer mu}) and the turning points $a_n$, $b_n$\nsatisfying $V_D(\\epsilon;a_n)=V_D(\\epsilon;-b_n)=\n(n+\\frac{1}{2})\\hbar\\omega$ are used with $n=0$ and $n=1$,\nrespectively; and for the first and third excited states,\nEq.~(\\ref{integer nu}) and $a_n$, $b_n$ of\n$V_D(\\epsilon;a_n)=V_D(\\epsilon;-b_n)=\n(n+\\epsilon+\\frac{1}{2})\\hbar\\omega$ are used with $n=0$ and $n=1$,\nrespectively. In the approximation to a TLS, $a_0$ and $b_0$ are\ndetermined from $V_D(0;a_0)=V_D(0;-b_0)= \\frac{1}{2}\\hbar\\omega$.}\n\\label{Double Osc}\n\\end{figure}\n\nWhen $\\epsilon$ is as large as 0.3, the ALE gives\nbetter results than the approximation to a TLS practically in the\nwhole range where both methods are applicable [Fig.~\\ref{Double Osc}(c)].\nAs the approximation to a TLS is suggested by the WKB method,\nFig.~\\ref{Double Osc} indeed shows that the {\\em WKB description} could be\n{\\em very accurate}. Fig.~\\ref{quartic}(c) and (d) also suggest that\nthis accuracy is not limited to systems whose wells\nare exactly quadratic.\nThis, together with the reasoning the WKB method provides, implies that,\nif the potential $V(x)$ is quadratic up to several times\nthe zero-point energies $\\hbar\\omega_l\/2$ and $\\hbar\\omega_r\/2$\nfrom the bottoms of the left\nand right well, respectively, the energy eigenvalues of the low-lying\nexcited states of the system must be close to the eigenvalues\nof the quadratic potentials.\n\n\n\n\n\n\n\n\\section{conclusions and outlook}\n\nWe have shown, through the WKB method with the quadratic connection formula,\nthat the systems of asymmetric double-well potentials\ncan be classified into two different regimes. In the regime of\neigenfunctions giving significant amplitude in both wells, the tunneling\ndynamics could take place, while there is no tunneling\nin the regime of localized eigenfunctions. 
In this respect, systems whose\neigenfunctions are mostly localized in just one of the wells are very different\nfrom those of symmetric potentials. As Fig.~\\ref{quartic}(c), (d) and\nFig.~\\ref{Double Osc} clearly show, the WKB description could be very accurate,\nand the results given here may be valid for a system whose potential\nwells are not exactly quadratic.\n\n\nFor the regime of localized eigenfunctions, even in the deep quantum limit,\nit may be possible to confine a large number of noninteracting bosons\nin just one of the wells.\nFor single-component fermions, in light of the particle density\n$\\rho_R(\\epsilon;x,t)=\\sum_i\\psi_{Ri}^*(\\epsilon;x,t)\\psi_{Ri}(\\epsilon;x,t)$\n(see, e.g., Ref.~\\cite{BB}),\nthe number of fermions which can be confined in one of the wells\nis limited by that of the localized eigenfunctions $\\psi_{Ri}(\\epsilon;x,t)$.\nFor a system of particles confined in just one of the wells, the\ntunneling dynamics can be initiated and controlled by adjusting $\\epsilon$,\nthe potential-energy difference between the bottoms of the double well, since,\nif we change $\\epsilon$ so that $\\delta_\\epsilon \\ll 1,$\n$\\psi_{Ri}(\\epsilon;x,t)$ turns into a linear combination of the\neigenfunctions of the new system.\n\nIn the periodic arrangement of double wells of an optical lattice,\nwhere the tunneling is accompanied by a precession of the atom's\nangular momentum \\cite{HADGJ,KSSTO}, a considerable time-periodic\n{\\em fluctuation of the population} of atoms in a spin state could\nimply that the atomic spinors are in states of {\\em high\nvisibility}. 
Since $\\delta_a$ is very small in the large-separation limit\nand the period of tunneling is inversely proportional to $\\delta_a$,\nif a tunneling phenomenon can be established over a long period of time,\nit can be used for precision measurements.\n\n\n\n\\acknowledgments\nThe author thanks Professors Kyungwon An and Yong-il Shin for discussions\non experimental aspects.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nKnowledge is distinguished by its ability to evolve over time. \nThis progression of knowledge is usually incremental and its formation is related to the cognitive areas being studied. \nThe process of Knowledge Tracing (KT), defined as the task of predicting students' performance, has attracted the interest of many researchers in recent decades \\cite{corbett1994knowledge}.\nThe Knowledge State (KS) of a student is the degree to which he or she has mastered the Knowledge Components (KC) of a certain domain, for example ``Algebra'' or ``Physics''.\nA knowledge component generally refers to a learnable entity, such as a concept or a skill, that can be used alone or in combination with other KCs in order to solve an exercise or a problem \\cite{koedinger2012knowledge}.\nKnowledge Tracing is the process of modeling and assessing a student's KS in order to predict his or her ability to answer the next problem correctly.\nThe estimation of the student's knowledge state is useful for improving the educational process by identifying the level of his\/her understanding of the various knowledge components. By exploiting this information it is possible to suggest appropriate educational material to cover the student's weaknesses and thus maximize the learning outcome.\n\n\nThe main problem of Knowledge Tracing is the efficient management of the responses over time. \nOne of the factors which adds complexity to the problem of KT is the student-specific learning pace. 
\nThe knowledge acquisition may differ from person to person and may also be influenced by already existing knowledge. \nMore specifically, KT is predominantly considered as a supervised sequence learning problem where the goal is to predict the probability that a student will answer correctly the future exercises, given his or her history of interactions with previous tests.\nThus, the prediction of the correctness of the answer is based on the history of the student's answers in combination with the skill that is currently examined at this time instance.\n\nMathematically, the KT task is expressed as the probability $P(r_{t+1} = 1|q_{t+1}, X_t)$ that the student will offer the correct response in the next interaction $x_{t + 1}$, where the students learning activities are represented as a sequence of interactions $X_t = \\{ x_1,x_2,x_3,...,x_t\\}$ over time $T$. \nThe $x_t$ interaction consists of a tuple $(q_t, r_t)$ which represents the question $q_t$ being answered at time $t$ and the student response $r_t$ to the question.\nWithout loss of generality, we shall assume that knowledge components are represented by skills from a set $S = \\{s_1, s_2,..., s_m\\}$.\nOne simplifying assumption, used by many authors \\cite{zhang2017dynamic}, is that\nevery question in the set $Q = \\{q_1, q_2,..., q_T\\}$ is related to a unique skill from $S$.\nThen the knowledge levels of the student for each one of the skills in $S$ compose his or her knowledge state.\n\nThe dynamic nature of Knowledge Tracing leads to approa-ches that have the ability to model time-series or sequential data. \nIn this work we propose two dynamic machine learning models that are implemented by time-dependent methods, specifically recurrent and time delay neural networks. \nOur models outperform the current state-of-the-art approaches in four out of five benchmark datasets that we have studied. 
\nThe proposed models differ from the existing ones in two main architectural aspects:\n\\begin{itemize}\n \\item we find that attention does not help improve the performance and therefore we make no use of attention layers\n \\item we experiment with and compare between two different skill embedding types: (a) initialized by pre-trained embeddings of the textual descriptions of the skill names using standard methods such as Word2Vec and FastText and (b) randomly initialized embeddings based on skill ids\n\\end{itemize}\n\n\nThe rest of the paper is organized as follows. \nSection 2 reviews the related works on KT and the existing models for student performance prediction. \nIn Section 3 we present our proposed models and describe their architecture and characteristics. \nThe datasets we prepared and used are present in Section 4 while the experiments setup and the results are explained in Section 5. \nFinally, Section 6 concludes this work and discusses the future works and extensions of the research.\n\n\n\\section{Related Works}\nThe problem of knowledge tracing is dynamic as student knowledge is constantly changing over time.\nThus, a variety of methods, highly structured or dynamic, have been proposed to predict students' performance. \nOne of the earlier methods is Bayesian Knowledge Tracing (BKT) \\cite{corbett1994knowledge} which models the problem as a Hidden Markov chain in order to predict the sequence of outcomes for a given learner.\nThe Performance Factors Analysis Model (PFA) \\cite{pavlik2009performance} proposed to tackle the knowledge tracing task by modifying the Learning Factor Analysis model.\nIt estimates the probability that a student will answer a question correctly by maximizing the likelihood of a logistic regression model. 
\nThe features used in the PFA model, although interpretable, are relatively simple and designed by hand, and may not adequately represent the students' knowledge state \\cite{yeung2019deep}.\n\nDeep Knowledge Tracing (DKT) \\cite{piech2015deep} is the first dynamic model proposed in the literature utilizing recurrent neural networks (RNN) and specifically the Long Short-Term Memory (LSTM) model \\cite{hochreiter1997long} to track student knowledge. \nIt uses one-hot encoded skill tags and associated responses as inputs and it trains the neural network to predict the next student response. \nThe hidden state of the LSTM can be considered as the latent knowledge state of a student and can carry the information of the past interactions to the output layer. \nThe output layer of the model computes the probability of the student answering correctly a question relating to a specific Knowledge Component.\n\nAnother approach for predicting student performance is the Dynamic Key-Value Memory Network (DKVMN) \\cite{zhang2017dynamic} which relies on an extension of \\textit{memory networks} proposed in \\cite{miller2016key}.\nThe model tries to capture the relationship between different concepts.\nThe DKVMN model outperforms DKT using memory slots as key and value components to encode the knowledge state of students. \nLearning or forgetting of a particular skill are stored in those components and controlled by read and write operations through the Least Recently Used Access (LRUA) attention mechanism \\cite{santoro2016meta}.\nThe key component is responsible for storing the concepts and is fixed during testing while the value component is updated when a concept state changes. \nThe latter means that when a student acquires a concept in a test the value component is updated based on the correlation between exercises and the corresponding concept.\n\nThe Deep-IRT model \\cite{yeung2019deep} is the newest approach that extends the DKVMN model. 
\nThe author combined the capabilities of DKVMN with the Item Response Theory (IRT) \\cite{hambleton1991fundamentals} in order to measure both student ability and question difficulty. \nAt the same time, another model, named Sequential Key-Value Memory Networks (SKVMN) \\cite{abdelrahman2019knowledge}, tried to overcome the problem of DKVMN to capture long term dependencies in the sequences of exercises and generally in sequential data. \nThis model combines the DKVMN mechanism with the Hop-LSTM, a variation of LSTM architecture and has the ability to discover sequential dependencies among exercises, but it skips some LSTM cells to approach previous concepts that are considered relevant.\nFinally, another newly proposed model is Self Attentive Knowledge Tracing (SAKT) \\cite{pandey2019self}. \nSAKT utilizes a self-attention mechanism and mainly consists of three layers: an embedding layer for interactions and questions followed by a Multi-Head Attention layer \\cite{vaswani2017attention} and a feed-forward layer for student response prediction. \n\nThe above models either use simple features (e.g. PFA) or they use machine learning approaches such as key-value memory networks or attention mechanisms that may add significant complexity. 
\nHowever we will show that similar and often, in fact, better performance can be achieved by simpler dynamic models combining embeddings and recurrent and\/or time-delay feed-forward networks as proposed next.\n\n\\section{Proposed Approach}\n\n\\subsection{Dynamic Models}\nAs referenced in the relative literature, knowledge change over time is often modeled by dynamic neural networks.\nThe dynamic models produce output based on a time window, called ``context window'', that contains the recent history of inputs and\/or outputs.\n\nThere are two types of dynamic neural networks (Figure \\ref{fig:DNN_architecture}):\n(a) Time-Delay Neural Networks (TDNN), with only feed-forward connections and finite-memory of length $L$ equal to the length of the context window, and\n(b) Recurrent Neural Networks (RNN) with feed-back connections that can have potentially infinite-memory although, practically, their memory length is dictated by a forgetting factor parameter.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.6]{dynamic_models.png}\n \\caption{Dynamic model architectures: (a) Time-Delay Neural Network (b) Recurrent Neural Network.}\n \\label{fig:DNN_architecture}\n\\end{figure}\n\n\\subsection{The Proposed Models}\n\nWe approach the task of predicting the student response (0=wrong, 1=correct) on a question involving a specific skill as a dynamic binary classification problem.\nIn general, we view the response $r_t$ as a function of the previous student interactions:\n\\begin{equation}\n r_t = h( q_t,q_{t-1},q_{t-2},\\dots,r_{t-1},r_{r-2},\\dots ) + \\epsilon_t\n \\label{eq:total_recurrent_model}\n\\end{equation}\nwhere $q_t$, is the skill tested on time $t$ and $\\epsilon_t$ is the prediction error. 
The response is therefore a function of the current and the previously tested skills $\\{q_t, q_{t-1}, q_{t-2}, \\dots\\}$, as well as the previous responses $\\{r_{t-1}, r_{t-2}, \\dots\\}$ given by the student.\n\nWe implement $h$ as a dynamic neural model.\nOur proposed general architecture is shown in Figure \\ref{fig:EDM_architecture}.\nThe inputs are the skill and response sequences $\\{q\\}$, $\\{r\\}$ collected during a time-window of length $L$ prior to time $t$.\nNote that the skill sequence includes the current skill $q_t$ but the response sequence does not contain the current response, which is what we actually want to predict.\nThe architecture consists of two main parts:\n\\begin{itemize}\n \\item The Encoding sub-network. It is used to represent the response and skill input data using different embeddings.\n Clearly, embeddings are useful for encoding skills since skill ids are categorical variables. \n We found that using embeddings to encode responses is also very beneficial.\n The details of the embedding initialization and usage are described in the next section.\n \n \\item The Tracing sub-network. 
This first estimates the knowledge state of the student and then uses it to predict his\/her response.\n Our model function consists of two parts: (i) the Knowledge-Tracing part, represented by the dynamic model $f$, which predicts the student knowledge state $\\mathbf{v}_t$ and (ii) the classification part $g$, which predicts the student response based on the estimated knowledge state:\n \\begin{eqnarray}\n \\mathbf{v}_t &=& f(q_t,q_{t-1},q_{t-2},\\dots,r_{t-1},r_{t-2},\\dots)\n \\label{eq:kt_estimation}\n \\\\\n \\hat{r}_t &=& g(\\mathbf{v}_t)\n \\label{eq:classification}\n \\end{eqnarray}\n Depending on the memory length, we obtain two categories of models:\n \\begin{itemize}\n \\item[(a)] models based on RNN networks which can potentially have infinite memory.\n In this case the KT model is recurrent:\n \\[\n \\mathbf{v}_t = f(\\mathbf{v}_{t-1}, q_t,q_{t-1},\\dots,q_{t-L},r_{t-1},\\dots,r_{t-L})\n \\]\n \\item[(b)] models based on TDNN networks which have finite memory of length $L$.\n In this case the KT model has finite impulse response $L$:\n \\[\n \\mathbf{v}_t = f(q_t,q_{t-1},\\dots,q_{t-L},r_{t-1},\\dots,r_{t-L})\n \\]\n \\end{itemize}\n\\end{itemize} \n \nAlthough RNNs have been used in the relevant literature, it is noteworthy that TDNN approaches have not been investigated in the context of knowledge tracing. \nThe classification part is modeled by a fully-connected feed-forward network with a single output unit.\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.62]{edm_architecture.png}\n \\caption{General proposed architecture. 
The dynamic model can be either a Recurrent Neural Network (with a feedback connection from the output of the dynamic part into the model input) or a Time Delay Neural Network (without feedback connection).}\n \\label{fig:EDM_architecture}\n\\end{figure}\n\nWe investigated two different architectures: one based on recurrent neural networks and another based on time delay neural networks.\nThe details of each proposed model architecture are described below.\n\n\\subsection{Encoding Sub-network}\nThe first part in all our proposed models consists of two parallel embedding layers with dimensions $d_q$ and $d_r$, respectively, which encode the tested skills and the responses given by the student.\nDuring model training the weights of the Embedding layers are updated. \nThe response embedding vectors are initialized randomly.\nThe skill embedding vectors, on the other hand, are initialized either randomly or using pretrained data. In the latter case we use pretrained vectors corresponding to the skill names obtained from the Word2Vec \\cite{mikolov2013efficient} or FastText \\cite{joulin2016fasttext} methods.\n\nA 1D spatial dropout layer \\cite{tompson2015efficient} is added after each Embedding layer. \nThe intuition behind the addition of spatial dropout was the overfitting phenomenon that was observed in the first epochs on each validation set. We postulated that correlation among skill name embeddings, which might not actually exist, confused the model.\n\n\\subsection{Tracing Sub-network}\n\nWe experimented with two main types of dynamic sub-networks, namely Recurrent Neural Networks and Time Delay Neural Networks. 
These two approaches are described next.\n\n\\subsubsection{RNN Approach: Bi-GRU Model}\n\nThe model architecture based on the RNN method for the knowledge tracing task is shown in Figure \\ref{fig:Bi_GRU}.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.525]{GRUModel.png}\n \\caption{Bi-GRU model}\n \\label{fig:Bi_GRU}\n\\end{figure}\n\nThe Spatial Dropout rate following the input embedding layers is $0.2$ for most of the datasets used.\nNext, we feed the skill and response input branches into a Convolutional layer consisting of 100 filters, with kernel size 3, stride 1, and ReLU activation function.\nThe Convolutional layer acts as a projection mechanism that reduces the input dimensions from the previous Embedding layer.\nThis is found to help alleviate the overfitting problem.\nTo the best of our knowledge, Convolutional layers have not been used in previously proposed neural models for this task.\nThe two input branches are then concatenated to feed a Bidirectional Gated Recurrent Unit (GRU) layer with 64 units \\cite{cho2014learning}.\nBatch normalization and ReLU activation layers are applied between the convolutional and concatenation layers.\nThis structure resulted from extensive experiments with other popular recurrent models, such as LSTM, plain GRU, and the bi-directional versions of those models, and we found the proposed architecture to be the most efficient one.\nOn top of the RNN layer we append a fully connected sub-network consisting of three dense layers with 50 units, 25 units, and one output unit, respectively.\nThe first two dense layers have a ReLU activation function while the last one has a sigmoid activation, which is used to make the final prediction ($0 < \\hat{r}_t < 1$).\n\n\n\n\n\\subsubsection{TDNN Approach}\n\nIn our TDNN model (Figure \\ref{fig:Tdnn_v1}) we add a Convolutional layer after each embedding layer with 50 filters and kernel size equal to 5. 
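The role of the convolutional projection over an embedded input sequence, used in both of our models, can be illustrated with a minimal NumPy sketch; the shapes follow the TDNN settings above (kernel size 5, 50 filters), while the random weights and the function name are illustrative only.

```python
import numpy as np

def conv1d_valid(x, w, b):
    """'Valid' 1-D convolution over a sequence of embedding vectors.
    x: (seq_len, d) embedded sequence, w: (k, d, n_filters), b: (n_filters,).
    Returns (seq_len - k + 1, n_filters) with ReLU applied."""
    seq_len, d = x.shape
    k, _, n_filters = w.shape
    out = np.empty((seq_len - k + 1, n_filters))
    for t in range(seq_len - k + 1):
        # Each output step mixes a k-step window of the d-dim embeddings.
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)  # ReLU

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 100))            # L=50 steps, 100-dim embeddings
w = rng.normal(size=(5, 100, 50)) * 0.01  # kernel size 5, 50 filters
b = np.zeros(50)
y = conv1d_valid(x, w, b)                 # shape (46, 50)
```

Each output step thus summarizes a short window of embedding vectors into 50 features, which is why the layer acts as a dimensionality-reducing projection before the recurrent or dense layers.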
\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.525]{TDNN_Model.png}\n \\caption{TDNN model}\n \\label{fig:Tdnn_v1}\n\\end{figure}\n\nBatch normalization is used before the ReLU activation is applied.\nAs with the RNN model, the two input branches are concatenated to feed the classification sub-network.\nIt consists of four dense layers with 20, 15, 10, and 5 units, respectively, using the ReLU activation function.\nThis funnel schema of hidden layers (starting with wider layers and continuing with narrower ones) has helped achieve better results for all datasets we have experimented with. \nAt the beginning of the classification sub-network we insert a Gaussian Dropout layer \\cite{srivastava2014dropout}, which multiplies neuron activations with a Gaussian random variable of mean value 1. This has been shown to work as well as the classical Bernoulli noise dropout and in our case even better.\n\n\\section{Datasets}\n\n\\begin{table*} [htb]\n\\centering\n \\caption{Datasets Overview.}\n \\label{tab:datasets}\n \\begin{tabular}{|c|c|c|c|c|}\n \\hline\n Dataset & Skills & Students & Responses & Baseline Accuracy \\\\ \\hline\n ASSISTment09 & 110 & 4,151 & 325,637 & 65.84\\%\\\\ \\hline\n ASSISTment09 corrected & 101 & 4,151 & 274,590 & 66.31\\%\\\\ \\hline\n ASSISTment12 & 196 & 28,834 & 2,036,080 & 69.65\\%\\\\ \\hline\n ASSISTment17 & 101 & 1,709 & 864,713 & 62.67\\%\\\\ \\hline\n FSAI-F1toF3 & 99 & 310 & 51,283 & 52.98\\%\\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\nWe tested our models using four popular datasets from the ASSISTments online tutoring platform.\nThree of them, ``\\textit{ASSISTment09}'', ``\\textit{ASSISTment09 corrected}''\\footnote{\\rule{0pt}{1.2\\baselineskip}https:\/\/sites.google.com\/site\/assistmentsdata\/home\/assist\nment-2009-2010-data\/skill-builder-data-2009-2010},\nand ``\\textit{ASSISTment12}''\\footnote{https:\/\/sites.google.com\/site\/assistmentsdata\/home\/2012-13-school-data-with-affect} were provided by 
the above platform.\nThe fourth dataset, named ``\\textit{ASSISTment17}'', was obtained from the 2017 Data Mining competition page\\footnote{https:\/\/sites.google.com\/view\/assistmentsdatamining\/data-mining-competition-2017}.\nFinally, a fifth dataset, ``\\textit{FSAI-F1toF3}'', provided by ``Find Solution Ai Limited'', was also used in our experiments.\nIt is collected using data from the 4LittleTrees\\footnote{https:\/\/www.4littletrees.com} adaptive learning application. \n\n\n\\subsection{Datasets Descriptions}\n\nThe ASSISTments datasets contain data from student tests on mathematical problems \n\\cite{assistmentsdata} and the content is organized in a columnar format.\nThe student's interaction is recorded on each line.\nThere are one or more interactions recorded for each student. \nWe take into account the information concerning the responses of students to questions related to a skill.\nThus, we use the following columns:\n``\\textit{user\\_id}'', ``\\textit{skill\\_id}'',\n``\\textit{skill\\_name}'', \nand ``\\textit{correct}''. \nThe ``\\textit{skill\\_name}'' column contains a verbal description of the skill tested.\nThe ``\\textit{correct}'' column contains the values of the students' responses, which are either $1$ (for correct) or $0$ (for wrong).\n\nThe original ``\\textit{ASSISTment09}'' dataset contains 525,534 student responses.\nIt has been used extensively in the KT task by several researchers, but according to \\cite{assistmentsdata} data quality issues have been detected concerning duplicate rows. \nIn our work we used the ``\\textit{preprocessed ASSISTment09}'' dataset found in the GitHub repositories of the DKVMN\\footnote{https:\/\/github.com\/jennyzhang0215\/DKVMN} and Deep-IRT\\footnote{https:\/\/github.com\/ckyeungac\/DeepIRT} models. 
\nIn this dataset the duplicate rows and the empty field values were cleaned, so that finally 1,451 unique students participate with 325,623 total responses and 110 unique skills.\n\nEven after this cleaning there are still some problems such as duplicate skill ids for the same skill name.\nThese problems have been corrected in the ''\\textit{Assistment09 corrected}'' dataset.\nThis dataset contains 346,860 students interactions and has been recently used in \\cite{xu2020dynamic}.\n\nThe ``\\textit{ASSISTment12}'' dataset contains students' data until the school year 2012-2013. The initial dataset contains 6,123,270 responses and 198 skills. \nSome of the skills have the same skill name but different skill id.\nThe total number of skill ids is 265. \nThe ``\\textit{Assistment17}'' dataset contains 942,816 students responses and 101 skills. \n\nFinally, the ``\\textit{FSAI-F1toF3}'' dataset is the smallest dataset we used. \nIt involves responses to mathematical problems from 7th grade to 9th grade Hong Kong students and consists of 51,283 students responses from 310 students on 99 skills and 2,266 questions. As it is commonly the case in most studies using this dataset, we have used the question tag as the model input $q_t$.\n\n\n\\subsection{Data Preprocessing}\n\nNo preprocessing was performed on the ``\\textit{ASSISTment09}'' and ``\\textit{FSAI-F1toF3}'' datasets.\nFor the remaining datasets we followed three preparation steps. \n\nFirst, the skill ids had been repaired by replacement. \nIn particular, the ``\\textit{ASSISTments09 corrected}'' dataset contained skills of the form of ``\\textit{skill1\\_skill2}'' and ``\\textit{skill1\\_skill2\\_skill3}'' which correspond to the same skill names, so we have merged them into the first skill id, found before the underscore. 
\nIn other words, the skill ``\\textit{10\\_13}'' was replaced with skill ``\\textit{10}'' and so on.\nMoreover, few misspellings were observed that were corrected and the punctuations found in three skill names were converted to the corresponding words. \nFor example, in the skill name ``\\textit{Parts of a Polnomial Terms Coefficient Monomial Exponent Variable}'' we corrected the ``\\textit{Polnomial}'' with ``\\textit{Polynomial}''. Also, in the skill name ``\\textit{Order of Operations +,-,\/,*() positive reals}'' we replaced the symbols ``\\textit{+,-,\/,* ()}'' with the words that express these symbols, ie. ``\\textit{addition subtraction division multiplication parentheses}''. \nThe latter preprocessing action was preferred over the removal of punctuations since the datasets referred to mathematical methods and operations and without them, we would lose the meaning of each skill. \nSimilar procedure has been followed for the ``\\textit{ASSISTments12}'' dataset. \nFurthermore, spaces after some skill names were removed i.e. the skill name ``\\textit{Pattern Finding }'' became ``\\textit{Pattern Finding}''. \nIn the ``\\textit{ASSISTment17}'' dataset we came across skill names as ``\\textit{application: multi-column subtraction}'' and corrected them by replacing punctuation marks such as ``\\textit{application multi column subtraction}''.\nThat text preparation operations made to ease the generation of word embeddings of the skill names descriptions.\nIn addition, in the ``\\textit{ASSISTment17}'' dataset, the problem ids are used instead of the skill ids. \nWe had to match and replace the problem ids with the corresponding skill ids with the aim of uniformity of the datasets between them.\n\n\nSecondly, all rows containing missing values were discarded. 
\nThus, after the preprocessing, the statistics of the datasets are as described in Table \\ref{tab:datasets}.\n\nFinally, we split the datasets so that 70\\% was used for training and 30\\% for testing.\nThen, the training subset was further split into five train-validation subsets using 80\\% for training and 20\\% for validation. \n\n\\section{Experiments}\n\nIn this section we experimentally validate the effectiveness of the proposed methods by comparing them with each other and also with other state-of-the-art performance prediction models.\nThe Area Under the ROC Curve (AUC) \\cite{ling2003auc} metric is used for comparing the predicted probabilities of the correctness of students' responses. \n\nThe state-of-the-art knowledge tracing models we compare against are DKT, DKVMN and Deep-IRT. \nWe performed the experiments for our proposed models, Bi-GRU and TDNN, as well as for each of the previous models for all datasets, using the code provided by the authors in their GitHub repositories. \nIt is worth noting that the python GitHub code\\footnote{\\rule{0pt}{1.2\\baselineskip}https:\/\/github.com\/lccasagrande\/Deep-Knowledge-Tracing} used for the DKT model experiments requires the entire dataset file and the train\/test splitting is performed during the code execution. \n\nAll the experiments were performed on a workstation with the Ubuntu operating system, an Intel i5 CPU and a 16GB Titan Xp GPU card.\n\n\\begin{table*}[htb]\n \\caption{Model experimental settings} \n \\label{tab:settings}\n \\centering\n \\begin{tabular}{|l | r| r| }\n \\hline \n \\multicolumn{1}{|c|}{Parameters} & \n \\multicolumn{1}{c|}{Bi-GRU} & \n \\multicolumn{1}{c|}{TDNN}\\\\ \\hline\n Learning rate & 0.001 & 0.001 \\\\ \\hline\n Learning rate schedule & yes & no\\\\ \\hline\n Training epochs & 30 & 30 \\\\ \\hline\n Batch size & 32 & 50 \\\\ \\hline\n Optimizer & Adam & AdaMax\\\\ \\hline\n History window length & 50 & 50 \\\\\\hline\n Skill embeddings dim. 
& 100 \\& 300 & 100 \\& 300\\\\ \\hline\n Skill embeddings type & Random, W2V, FastText & Random, W2V, FastText\\\\ \\hline\n Responses embeddings dim. & Same as skill dim. & Same as skill dim.\\\\ \\hline\n Responses embeddings type & Random & Random\\\\\n \\hline \n\\end{tabular}\n\\end{table*}\n\\subsection{Skill embeddings initialization}\n\nAs mentioned earlier, skill embeddings are initialized either randomly or using pretrained vectors.\nRegarding the initialization of the skill embeddings with pretrained vectors, we used the two methods described next. \nIn the first method we used the text files from Wikipedia2Vec\\footnote{https:\/\/wikipedia2vec.github.io\/wikipedia2vec\/} \\cite{yamada2020wikipedia2vec}, which is based on the Word2Vec method and contains pretrained word representation vectors for the English language in 100 and 300 dimensions.\nIn the second method we used the ``\\textit{SISTER}'' (SImple SenTence EmbeddeR)\\footnote{https:\/\/pypi.org\/project\/sister\/} library to prepare the skill name embeddings based on FastText 300-dimensional pretrained word embeddings.\nEach skill name consists of one or more words. \nThus, for the Word2Vec method, the skill name embedding vector is created by adding the word embedding vectors, while in the case of FastText, the skill name embeddings are created by taking the average of the word embeddings. \n\nEspecially for the FSAI-F1toF3 dataset, the question embeddings are initialized either randomly or using the pretrained word representations of the corresponding skill descriptions by employing the Wikipedia2Vec and SISTER methods as described above. Since many questions belong to the same skill, in this case the corresponding rows in the embedding matrix are initialized by the same vector. \n\n\n\\subsection{Experimental Settings}\nWe performed cross-validation using the 5 training and validation set pairs. 
\nThis allowed us to choose the best architecture and parameter settings for each of the proposed models.\nUsing the train and test sets, we then evaluated the chosen architectures on all datasets. \n\nOne of the basic hyperparameters of our models that affects the inputs is $L$, the length of the student's interaction history window. \nThe inputs consist of a sequence of $L$ questions and a sequence of $L-1$ responses. \nWe obtained the best results using $L=50$ for both the Bi-GRU and TDNN models.\nThe batch sizes used during training are 32 for Bi-GRU and 50 for TDNN. \n\nSince the pretrained word embeddings are provided in specific dimensions, we used the same dimensions for the randomly initialized embeddings in order to obtain comparable results. \nSkill embeddings and response embeddings are set to the same dimensions. \n\nA learning rate schedule is implemented in Bi-GRU, starting from 0.001 and decaying over the course of training, which runs for 30 epochs.\nSpecifically, we applied the following learning rate schedule depending on the epoch number $n$:\n\\[\n lr = \\left\\{\n \\begin{array}{ll}\n r_{init} &\n \\text{if } n < 10\n \\\\\n r_{init} \\times e^{(0.1\\cdot(10-n))} &\n \\text{otherwise}\n \\end{array}\n \\right.\n\\]\n\nIn the case of the TDNN-based model, the learning rate is fixed at 0.001 throughout the 30 training epochs.\nWe used the cross-entropy criterion with the Adam or AdaMax \\cite{kingma2014adam} optimizers.\n \nDropout with rate $0.2$ or $0.9$ is also applied to the Bi-GRU model, while the TDNN uses a Gaussian dropout layer with rate chosen from $\\{0.2, 0.4, 0.6, 0.9\\}$.\nWe observed a reduction of overfitting during training by adapting the Gaussian dropout rate to the dataset size: the smaller the dataset, the larger the dropout rate used. \n\nThe various combinations of parameter settings were 
applied during the experimental process for all proposed models presented in Table \\ref{tab:settings}. \n\n\n\\subsection{Experimental Results}\n\n\\begin{table*}[htb]\n \\centering\n \\caption{Comparison between our proposed models - AUC (\\%). (R) = random skill embedding initialization, (W) = skill embedding initialization using W2V, (F) = skill embedding initialization using FastText. Datasets: (a) \\textit{ASSISTment09}, (b) \\textit{ASSISTment09 corrected}, (c) \\textit{ASSISTment12}, (d) \\textit{ASSISTment17}, (e) \\textit{FSAI-F1toF3}}\n \\label{tab:compare_our_models} \n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n & $d_q=100$(R) & $d_q=300$(R) & $d_q=100$(W) & $d_q=300$(W) & $d_q=300$(F)\n \\\\\n \\hline\n Bi-GRU & 82.55 & 82.45 & 82.52 & 82.55 & 82.39\n \\\\ \\hline\n TDNN & 81.54 & 81.67 & 81.59 & 81.50 & 81.53\n \\\\\n \\hline\n \\end{tabular}\n \\\\\n (a)\n \\\\\n ~\\\\\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n & $d_q=100$(R) & $d_q=300$(R) & $d_q=100$(W) & $d_q=300$(W) & $d_q=300$(F)\n \\\\\n \\hline\n Bi-GRU & 75.27 & 75.13 & 75.14 & 75.09 & 75.12\n \\\\ \\hline\n TDNN & 74.38 & 74.39 & 74.40 & 74.33 & 74.37\n \\\\\n \\hline\n \\end{tabular}\n \\\\\n (b)\n \\\\\n ~\\\\\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n & $d_q=100$(R) & $d_q=300$(R) & $d_q=100$(W) & $d_q=300$(W) & $d_q=300$(F)\n \\\\\n \\hline\n Bi-GRU & 68.37 & 68.37 & 68.40 & 68.23 & 68.27\n \\\\ \\hline\n TDNN & 67.95 & 67.97 & 67.99 & 67.95 & 67.91\n \\\\\n \\hline\n \\end{tabular}\n \\\\\n (c)\n \\\\\n ~\\\\\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n & $d_q=100$(R) & $d_q=300$(R) & $d_q=100$(W) & $d_q=300$(W) & $d_q=300$(F)\n \\\\\n \\hline\n Bi-GRU & 73.62 & 73.58 & 73.76 & 73.54 & 73.58\n \\\\ \\hline\n TDNN & 71.68 & 71.75 & 71.52 & 71.81 & 71.83\n \\\\\n \\hline\n \\end{tabular}\n \\\\\n (d)\n \\\\\n ~\\\\\n \\begin{tabular}{|c|c|c|c|c|c|}\n \\hline\n & $d_q=100$(R) & $d_q=300$(R) & $d_q=100$(W) & $d_q=300$(W) & $d_q=300$(F)\n \\\\\n \\hline\n Bi-GRU & 70.47 & 69.34 & 
70.24 & 69.80 & 69.51\n \\\\ \\hline\n TDNN & 70.03 & 69.80 & 69.80 & 70.11 & 70.06 \n \\\\\n \\hline\n \\end{tabular}\n \\\\\n (e)\n\\end{table*}\n\n\n\\begin{table*}[htb]\n \\caption{Comparison of test results using the AUC metric (\\%)}\n \\label{tab:results}\n \\centering\n \\begin{tabular}{|l|c|c|c|c|c|}\n \\hline\n Dataset & DKT & DKVMN & Deep-IRT & Bi-GRU & TDNN\\\\ \\hline\n ASSISTment09 & 81.56\\% & 81.61\\% & 81.65\\% & \\textbf{82.55}\\%$^{(1,2)}$ & 81.67\\%$^{(3)}$ \\\\ \\hline\n ASSISTment09 corrected & 74.27\\% & 74.06\\% & 73.41\\% & \\textbf{75.27}\\%$^{(1)}$ & 74.40\\%$^{(2)}$\\\\ \\hline\n ASSISTment12 & 69.40\\% & 69.26\\% & \\textbf{69.73}\\% & 68.40\\%$^{(4)}$ & 67.99\\%$^{(4)}$ \\\\ \\hline\n ASSISTment17 & 66.85\\% & 70.25\\% & 70.54\\% & \\textbf{73.76}\\%$^{(4)}$ & 71.83\\%$^{(5)}$ \\\\ \\hline\n FSAI-F1toF3 & 69.42\\% & 68.40\\% & 68.69\\% & \\textbf{70.47}\\%$^{(1)}$ & 70.11\\%$^{(2)}$ \\\\\n \\hline\n \\multicolumn{6}{l}{\n \\rule{0pt}{3ex}\n $^{(1)}$ $d_q=d_r=100$, Random, ~~\n $^{(2)}$ $d_q=d_r=300$, W2V, ~~\n $^{(3)}$ $d_q=d_r=300$, Random,\n }\n \\\\\n \\multicolumn{6}{l}{\n $^{(4)}$ $d_q=d_r=100$, W2V, ~~\n $^{(5)}$ $d_q=d_r=300$, FastText\n }\n \\end{tabular}\n\\end{table*}\n\nThe experimental results of our models are shown in Table \\ref{tab:compare_our_models}. Comparing our models with each other, we see that the RNN-based Bi-GRU model outperforms the TDNN-based model on all datasets. \nIt achieved its best results when 100-dimensional embeddings were used, with either pretrained or random initialization. \n\nWe observed that for both Bi-GRU and TDNN, the embedding initialization type is not a significant parameter affecting model performance. 
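As a side note, the AUC values reported in the tables follow the standard rank-based definition: the probability that a randomly chosen correct response receives a higher predicted probability than a randomly chosen incorrect one, with ties counted as one half. The following is a minimal illustrative sketch, not the evaluation code used in our experiments; the labels and scores are made up:

```python
# Rank-based AUC: fraction of (positive, negative) pairs where the
# positive (correct response) is scored above the negative, with
# ties counted as one half.
def auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up predicted probabilities for four graded responses:
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

In practice a library routine such as scikit-learn's `roc_auc_score` computes the same quantity more efficiently.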
\nThe experimental results also showed that the embedding dimension did not contribute much to the final result; the corresponding differences in model performance were small. \n\nIn addition to our models, we ran experiments on all datasets with the previous models we compare against. \nFor three of the datasets, specifically ``\\textit{ASSISTment09 corrected}'', ``\\textit{ASSISTment12}'' and ``\\textit{ASSISTment17}'', no results were available in the corresponding papers. \nFor these, we present the results of the experiments we ran using those models' code.\n\nThe best experimental results of our models in comparison with the previous models for each dataset are presented in Table \\ref{tab:results}.\nThe best-performing model on four of the datasets is Bi-GRU. \nMoreover, the TDNN-based model also performs better than the previous models on four datasets.\nThe only dataset on which the previous models outperform ours is ``\\textit{ASSISTment12}''.\n\n\\subsection{Discussion}\n\nOur model architecture is loosely based on the DKT model and offers improvements in the aspects discussed below.\nFirst, we employ embeddings for representing both skills and responses.\nIt is known that embeddings offer more useful representations compared to one-hot encoding because they can capture the similarity between the items they represent \\cite{wang2020survey}.\nSecond, we thoroughly examined dynamical neural models for estimating the student knowledge state by trying both infinite-memory RNNs and finite-memory TDNNs.\nTo our knowledge, TDNNs have not been well studied in the literature with respect to this problem.\nThird, we used convolutional layers in the input encoding sub-net. We found that these layers functioned as a mechanism for reducing the embedding dimensions and, in conjunction with the dropout layer, mitigated the overfitting problem. 
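To make the role of the convolutional layers concrete, the sketch below shows how a 1-D convolution over a sequence of embedding vectors can act as a dimensionality-reduction step, mapping vectors of size $d_{in}$ to vectors of size $d_{out} < d_{in}$. This is our simplified illustration, not the actual sub-net; all sizes and weights are made up:

```python
# A 1-D convolution over a sequence of embedding vectors.
# seq: L vectors of length d_in; kernels: d_out filters, each k x d_in.
# Output: (L - k + 1) vectors of length d_out, so choosing
# d_out < d_in reduces the embedding dimension.
def conv1d(seq, kernels):
    k = len(kernels[0])  # temporal width of each filter
    out = []
    for t in range(len(seq) - k + 1):
        window = seq[t:t + k]
        out.append([sum(w[i][j] * window[i][j]
                        for i in range(k)
                        for j in range(len(window[0])))
                    for w in kernels])
    return out

# Toy example: L=4 steps, d_in=3 embedding dims, k=2, d_out=2 filters.
seq = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]
kernels = [[[1, 1, 1], [1, 1, 1]],   # filter 1: sums the whole window
           [[1, 0, 0], [0, 0, 0]]]   # filter 2: first dim of first step
print(conv1d(seq, kernels))  # [[2, 1], [2, 0], [4, 0]]
```

Each output vector has 2 components instead of the 3 embedding dimensions, which is the reduction effect referred to above.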
\nThe use of convolutional layers is a novelty in models tackling the knowledge tracing problem. \nFourth, unlike DKT, we used more hidden layers in the classification sub-net.\nOur experiments demonstrate that this gives the classifier more discriminating capability and improves the results.\nFinally, our experiments with key-value modules and attention mechanisms did not further improve our results, so they are not reported here.\nOn the majority of the datasets we examined, our models outperform the state-of-the-art models employing key-value mechanisms, such as DKVMN and Deep-IRT.\n\n\\begin{table*}[hbt]\n\\centering\n \\caption{Statistical significance testing results of Bi-GRU and TDNN}\n \\label{tab:p_value}\n \\begin{tabular}{|l|c|}\n \\hline\n Dataset & P-value \\\\ \\hline\n ASSISTment09 & 7.34e-59 \\\\ \\hline\n ASSISTment09 corrected & 2.31e-52 \\\\ \\hline\n ASSISTment12 & 1.45e-203 \\\\ \\hline\n ASSISTment17 & 7.96e-44 \\\\ \\hline\n FSAI-F1toF3 & 1.38e-84 \\\\ \\hline\n \\end{tabular}\n\\end{table*}\n\nIn addition to the AUC metric, which is typically used for evaluating the performance of machine learning models, we applied statistical significance testing to check the similarity between our Bi-GRU and TDNN models.\nSpecifically, we performed a t-test between the outputs of the two models on all training data, using the best configuration settings shown in Table \\ref{tab:results}.\nThe results reported in Table \\ref{tab:p_value} show that the calculated p-value is practically zero in all cases, supporting the hypothesis that the outputs of the two models are significantly different.\n\n\\section{Conclusion and Future Work}\n\nIn this paper, we propose a novel two-part neural network architecture for predicting student performance in the next exam or exercise based on their performance in previous exercises.\nThe first part of the model is a dynamic network which tracks the student knowledge state and the second part is a 
multi-layer neural network classifier.\nFor the dynamic part we tested two different models: a potentially infinite-memory recurrent bidirectional GRU (Bi-GRU) model and a finite-memory time-delay neural network (TDNN). \nThe experimental process showed that the Bi-GRU model achieves better performance than the TDNN model.\nDespite the fact that TDNN models have not been used for this problem in the past, our results show that they can be as effective as, or even better than, previous state-of-the-art RNN models, and only slightly worse than our proposed RNN model.\nThe model inputs are the student's skill and response history, which are encoded using embedding vectors. Skill embeddings are initialized either randomly or by pretrained vectors representing the textual descriptions of the skills.\nA novel feature of our architecture is the addition of spatial dropout and convolutional layers immediately after the embedding layers.\nThese additions have been shown to reduce the overfitting problem.\nWe found that the choice of initialization of the skill embeddings has little effect on the outcome of our experiments.\nMoreover, since the same datasets are used differently across studies, we described the dataset pre-processing in detail, and we provide the train, validation and test splits used in our experiments in our GitHub repository\\footnote{\\rule{0pt}{1.2\\baselineskip} https:\/\/github.com\/delmarin35\/Dynamic-Neural-Models-for-Knowledge-Tracing}.\nExtensive experimentation with more benchmark datasets, as well as the study of variants of the proposed models, will be the subject of our future work, with the aim of further improving the prediction performance of the models.\n\n\n\n\\section{Acknowledgments}\n\nWe would like to thank NVIDIA Corporation for the kind donation of a Titan Xp GPU card that was used to run our 
experiments.\n\n\n\\bibliographystyle{abbrv}\n