diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjbbj" "b/data_all_eng_slimpj/shuffled/split2/finalzzjbbj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjbbj" @@ -0,0 +1,5 @@ +{"text":"\\section{ Introduction }\nDepression is a significant health concern worldwide, and early-stage symptom monitoring, detection, and prediction are becoming crucial to mitigating this disease. Although considerable attention has been devoted to this field, traditional diagnosis and monitoring procedures still rely largely on subjective measurements, so it is desirable to develop biomarkers that can be extracted automatically from objective measurements. Depression leaves recognizable markers in a patient's vocal acoustic, linguistic, and facial patterns, all of which have demonstrated increasing promise for evaluating and predicting a patient's mental condition in an unobtrusive way \\cite{kachele2014fusion}. In this work, we extend the existing body of related work by investigating the performance of each biomarker modality (audio, linguistic, and facial) for the task of depression severity evaluation, and we further boost our results with a confidence-based fusion mechanism that combines all three modalities. Experiments on the recently released AVEC 2017 \\cite{AVEC2017} depression dataset verify the promising performance of the proposed model.\n\n\\section{Feature Engineering}\n\nThe AVEC 2017 dataset includes audio and video recordings, as well as extensive questionnaire responses in text format, collected in (nearly) real-world settings. We next describe the feature engineering techniques we developed for the given data and features in each modality. \n\nThe original audio data were provided as features pre-extracted with the COVAREP toolbox. 
We further extracted the following descriptors: fundamental frequency (F0), voicing (VUV), normalized amplitude quotient (NAQ), quasi open quotient (QOQ), the first two harmonics of the differentiated glottal source spectrum (H1, H2), parabolic spectral parameter (PSP), maxima dispersion quotient (MDQ), spectral tilt\/slope of wavelet responses (peak\/slope), shape parameter of the Liljencrants-Fant model of the glottal pulse dynamic (Rd), Rd conf, Mel cepstral coefficients (MCEP 0-24), harmonic model and phase distortion means (HMPDM 0-24) and deviations (HMPDD 0-12), and the first 3 formants. The $10$ largest discrete cosine transformation (DCT) coefficients were computed for each descriptor to balance information loss against efficiency. Delta and Delta-Delta features, known as differential and acceleration coefficients, were calculated as additional features to capture dynamic information in the spectral domain. In addition, a series of statistical descriptors, such as mean, median, standard deviation, and peak-magnitude-to-RMS ratio, were calculated. Overall, a total of 1425 audio features were extracted.\n\n2D coordinates of 68 points on the face, estimated from the raw video data, were provided. To develop visual features in this data-limited setting, we chose the stable region between the eyes and the mouth because of its minimal involvement in facial expressions. We calculated the mean shape of 46 stable points that do not confound with gender. The pairwise Euclidean distances between the coordinates of the landmarks were calculated, as well as the angles (in radians) between the points, resulting in 92 features. Finally, we split the facial landmarks into three groups of different regions: the left eye and left eyebrow, the right eye and right eyebrow, and the mouth. We calculated the differences between the coordinates of the landmarks and finally calculated the Euclidean distances ($\\ell_2$-norm) between the points for each group, resulting in 41 features. 
Overall, we obtained 133 visual features.\n\nThe transcript file includes the transcribed communication between each participant and the animated virtual interviewer 'Ellie'. Basic statistics of words and sentences from the transcription file were calculated, including the number of sentences over the duration, the number of words, and the ratio of the number of laughters to the number of words. Depression-related words were identified from a dictionary of more than 200 words downloaded from online resources\\footnote{\\url{https:\/\/myvocabulary.com\/word-list\/depression-vocabulary\/}}. The ratio of depression-related words to the total number of words over the duration was calculated. \n\nIn addition, we introduced a new set of \\textbf{text sentiment} features, obtained with the AFINN sentiment analysis tool \\cite{nielsen2011new}, which represents the valence of the current text by comparing it to an existing word list with known sentiment labels. The outcome of AFINN is an integer between minus five (negative) and plus five (positive), where negative and positive numbers indicate negative and positive sentiment, respectively. The mean, median, min, max, and standard deviation of the sentiment analysis outcomes (treated as a time series) were used. A total of 8 features were extracted. This new set of sentiment features proved highly helpful in our experiments.\n\n\\section{Multi-Modal Fusion Framework}\n\nWe adopted an input-specific classifier for each modality, followed by a decision-level fusion module to predict the final result. In detail, for each biomarker modality we used a random forest to translate features into predictive scores, and these scores were then combined by a confidence-based fusion method to make the final prediction of the PHQ-8 score. Rather than simply averaging the modalities, we recognized that each modality itself might be noisy. 
Therefore, for each modality we calculated the standard deviation of the outcomes of all trees and used it to define the modality-wise \\textbf{confidence score} (a smaller deviation indicating higher confidence). After trying several different strategies, we found the \\textit{winner-take-all} strategy, i.e., picking the single-modality prediction with the highest confidence score as the final result, to be the most effective and reliable in our setting. In most cases, we observed that the audio modality tended to dominate the prediction. We conjecture that this reflects the imbalanced (or, say, complementary) informativeness of the three modalities, with one modality often dominating each individual prediction. An overview of the confidence-based decision-level fusion method is shown in Figure \\ref{fig:framework}.\n\n\\begin{figure}[ht]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{framework}\n\\caption{Overview of the confidence based decision-level fusion method}\n\\label{fig:framework}\n\\end{figure}\n\n\\section{Preliminary Result and Future Work }\nBaseline scripts provided by the AVEC organizers have been made available in the data repositories, where depression severity was computed using a random forest regressor.\nTable~\\ref{tab:perf_fusion} reports the performance of the baseline and our model on the development and training sets. For both models, we report the performance of the single-modality and multi-modal fusion methods. Compared to the baseline, confidence-based fusion achieves comparable or even marginally better performance in terms of both RMSE and MAE. 
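As a minimal sketch (not the exact implementation), the winner-take-all rule described above can be written as follows. We assume each modality's random forest exposes its per-tree predictions, and we take a smaller spread of tree outputs to mean higher confidence:

```python
import numpy as np


def winner_take_all(tree_preds_by_modality):
    """Pick the single-modality prediction whose trees agree most.

    tree_preds_by_modality: dict mapping a modality name ("audio",
    "video", "text") to an array of per-tree PHQ-8 predictions for one
    sample. The standard deviation over the trees is the modality-wise
    confidence score; a smaller spread is taken to mean higher
    confidence, so the modality with the smallest spread wins.
    """
    best_name, best_pred, best_spread = None, None, float("inf")
    for name, preds in tree_preds_by_modality.items():
        preds = np.asarray(preds, dtype=float)
        spread = preds.std()
        if spread < best_spread:
            best_name = name
            best_pred = float(preds.mean())
            best_spread = spread
    return best_name, best_pred
```

With a scikit-learn forest `rf`, the per-tree predictions for a sample `x` can be collected as `np.array([t.predict(x)[0] for t in rf.estimators_])`; the exact forest configuration used in the paper is not specified.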
\n\n\n\\begin{table}[thp]\n\\caption{Performance comparison of single-modality models and the confidence-based fusion model}\n\\centering\n\\label{tab:perf_fusion}\n\\begin{tabular}{ c| c c | c c }\n\\hline\nFeature used & \\multicolumn{2}{c}{`development'} & \\multicolumn{2}{c}{`train'} \\\\\n& RMSE & MAE & RMSE & MAE \\\\\n\\hline \\hline\n\\multicolumn{5}{c}{Baseline provided by the AVEC organizers} \\\\\n\\hline \nVisual only & 7.13 & 5.88 & 5.42 & 5.29 \\\\\nAudio only & 6.74 & 5.36 & 5.89 & 4.78 \\\\\nAudio \\& Video & 6.62 & 5.52 & 6.01 & 5.09 \\\\\n\\hline \n\\multicolumn{5}{c}{Our model without the gender variable} \\\\\n\\hline\nVisual only & 6.67 & 5.64 & 6.13 & 5.08 \\\\\nAudio only & 5.45 & 4.52 & 5.21 & 4.26 \\\\\nText only & 5.59 & 4.78 & 5.29 & 4.47 \\\\\nFusion model & 5.17 & 4.47 & 4.68 & 4.31 \\\\\n\\hline\n\\multicolumn{5}{c}{Our model with the gender variable} \\\\\n\\hline\nVisual only & 5.65 & 4.87 & 4.99 & 4.46 \\\\\nAudio only & 5.11 & 4.69 & 4.84 & 4.23 \\\\\nText only & 5.51 & 4.87 & 5.13 & 4.28 \\\\\nFusion model & 4.81 & 4.06 & 4.23 & 3.89 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nWe plan to enhance our methodology in the following directions. First, to improve the decision rules, we will use rule ensemble models to exhaustively search interactions among features and scale to the high-dimensional feature space. In addition, we are interested in performing vowel formant analysis to allow straightforward detection of high-arousal emotions. Second, we found that refining more relevant features (e.g., via silence detection) could improve the overall performance. Finally, we plan to apply our model in a more general clinical environment (e.g., routine patient-provider communication) to characterize social interactions and support clinicians in predicting depression severity. 
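To make the text-feature computation from the Feature Engineering section concrete, here is a minimal sketch of the sentiment statistics over a per-utterance AFINN score series. The per-utterance scores are assumed inputs, and only the five statistics named in the text are reconstructed (the remaining features of the 8-dimensional sentiment set are not specified here):

```python
import statistics


def sentiment_summary(scores):
    """Summarize a per-utterance AFINN score series (integers in [-5, 5]).

    Returns the five statistics named in the feature-engineering
    section: mean, median, min, max, and standard deviation of the
    sentiment outcomes treated as a time series.
    """
    s = [float(x) for x in scores]
    return {
        "mean": statistics.fmean(s),
        "median": statistics.median(s),
        "min": min(s),
        "max": max(s),
        # population std over the whole session, one of several
        # reasonable conventions
        "std": statistics.pstdev(s),
    }
```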
\n\n \\pdfoutput=1\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA natural question in Riemannian geometry is: When does a closed manifold $X$ admit a Riemannian metric with positive scalar curvature? (See \\cite{rosenberg1994manifolds} for a survey on this problem. We call such manifolds ``psc-manifolds''.) The answer is fully understood in the following two cases:\n\\begin{itemize}\n\\item $X$ is $3$-dimensional or less \\cite{perelman2003ricci};\n\\item $X$ is simply connected and $5$-dimensional or more \\cite{gromov-Lawson,stolz1992simply}.\n\\end{itemize}\n\nNow consider the case that $X$ is a $4$-dimensional psc-manifold. Then we have the following three constrains on the topology of $X$:\n\\begin{enumerate}[(i)]\n\\item Suppose $X$ is spin. Then the signature of $X$ (denoted by $\\operatorname{sign}(X)$) must be zero. Similar result holds for its covering spaces \\cite{hitchin1974harmonic,lichnerowicz1963spineurs};\n\\item Suppose $b_{3}(X)>0$. Then up to a nonzero multiple, any element of $H_{3}(X;\\mathds{R})$ can be represented by an embedded, oriented psc $3$-manifold. Similar result holds for its covering spaces \\cite{schoen1979structure};\n\\item Suppose $b_{2}^{+}(X)>1$. Then the Seiberg-Witten invariant $SW(X,\\hat{\\mathfrak{s}})$ must equal $0$ for any spin$^{c}$ structure $\\hat{\\mathfrak{s}}$. Similar result holds for its covering spaces \\cite{Witten}.\n\\end{enumerate}\n\nIn the current paper, we consider the following case:\n\\begin{assum}\\label{homology S3timesS1}\n $X$ is a $4$-manifold with the same homology as $S^{1}\\times S^{3}$; the homology group $H_{3}(X;\\mathds{Z})$ is generated by an embedded $3$-manifold $Y$ with $b_{1}(Y)=0$.\n\\end{assum}\n\nFor such $X$, condition (i) tells nothing interesting and condition (ii) provides a cobordism between $Y$ and a psc $3$-manifold. 
As for condition (iii), it cannot be applied because the Seiberg-Witten invariants are not well defined (since $b_{2}^{+}(X)=0$).\n\nThe first purpose of the current paper is to obtain a new obstruction to positive scalar curvature in the direction of (iii). Recall that for $X$ satisfying Assumption \\ref{homology S3timesS1}, although the original Seiberg-Witten invariant is not well defined, there are two other invariants from the Seiberg-Witten theory:\n\\begin{itemize}\n\\item The $4$-dimensional Casson-type invariant $\\lambda_{SW}(X)$, defined by Mrowka-Ruberman-Saveliev \\cite{MRS};\n\\item The Fr{\\o}yshov invariant $\\operatorname{h}(Y,\\mathfrak{s})$, defined by Fr{\\o}yshov \\cite{Froyshov}, where $\\mathfrak{s}$ is the unique spin structure on $Y$ that can be extended to a spin structure on $X$. (It was proved in \\cite{Froyshov} that this invariant does not depend on the choice of $Y$.)\n\\end{itemize}\nHere is the main theorem of the paper:\n\\begin{thm}\\label{new obstruction}\nSuppose $\\lambda_{SW}(X)+\\operatorname{h}(Y,\\mathfrak{s})\\neq 0$. 
Then $X$ admits no Riemannian metric with positive scalar curvature.\n\\end{thm}\n\\begin{rmk}\nWe conjecture that one should be able to recover Theorem \\ref{new obstruction} as a special case of Schoen-Yau's result \\cite{schoen1979structure} using monopole Floer homology.\n\\end{rmk}\n\n\nSince it was proved in \\cite{MRS} that the mod-$2$ reduction of $\\lambda_{SW}(X)$ is always $\\rho(Y,\\mathfrak{s})$ (the Rohlin invariant of $(Y,\\mathfrak{s})$), we have the following corollary:\n\\begin{cor}\\label{rohlin=froyshov}\nSuppose $X$ is a homology $S^{3}\\times S^{1}$ with $H_{3}(X;\\mathds{Z})$ generated by an embedded rational homology sphere $Y$ satisfying $$\n\\operatorname{h}(Y,\\mathfrak{s}) \\not\\equiv \\rho(Y,\\mathfrak{s})\\ (\\textrm{mod}\\ 2).\n$$\nThen $X$ admits no Riemannian metric with positive scalar curvature.\n\\end{cor}\nThis corollary gives a large family of interesting examples of $4$-manifolds (with $b_{2}=0$) admitting no positive scalar curvature metric.\n\\begin{ex}\nLet $X$ be obtained by furling up any homology cobordism from $Y=\\Sigma(2,3,7)$ (the Brieskorn sphere) to itself. Then $X$ admits no Riemannian metric with positive scalar curvature because $\\rho(Y)=1$ and $\\operatorname{h}(Y)=0$.\n\\end{ex}\n\nWe summarize the idea of the proof of Theorem \\ref{new obstruction} as follows: Let $W$ be the cobordism from $Y$ to itself obtained by cutting $X$ along $Y$. We consider the manifold\n$$\nZ_{+}=((-\\infty,0]\\times Y)\\cup_{Y}W\\cup_{Y}W\\cup_{Y}...\n$$\nThis non-compact manifold has two ends: one is cylindrical and the other is periodic. (The word ``periodic'' indicates the fact that we are gluing together infinitely many copies of the same manifold $W$. See \\cite{Taubes} for the precise definition.) For a Riemannian metric $g_{X}$ on $X$, we can construct, using a cut-off function, a metric on $Z_{+}$ that equals a lift of $g_{X}$ over the periodic end and restricts to the product metric on the cylindrical end. 
Now consider the (suitably perturbed) Seiberg-Witten equations on $Z_{+}$. More specifically, let $[\\mathfrak{b}]$ be a critical point of the Chern-Simons-Dirac functional with a certain absolute grading. We consider the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ of gauge equivalence classes of solutions that approach $[\\mathfrak{b}]$ on the cylindrical end and have exponential decay on the periodic end. By adding end points to the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$, which correspond to ``broken solutions'' on $Z_{+}$, we get the moduli space $\\mathcal{M}^{+}([\\mathfrak{b}],Z_{+})$, which is a $1$-manifold with boundary. Now we use the assumption that $g_{X}$ has positive scalar curvature. Under this assumption, we can prove that $\\mathcal{M}^{+}([\\mathfrak{b}],Z_{+})$ is compact. Therefore, the number of points in $\\partial\\mathcal{M}^{+}([\\mathfrak{b}],Z_{+})$, counted with sign, should be $0$. This actually implies that a certain reducible critical point $[\\mathfrak{a}_{0}]$ cannot be ``killed by the boundary map'' and hence survives in the monopole Floer homology. By this argument, we show that $-2\\operatorname{h}(Y,\\mathfrak{s})\\leq 2\\lambda_{\\textnormal{SW}}(X)$. By the same argument on $-X$, we can also prove $-2\\operatorname{h}(Y,\\mathfrak{s})\\geq 2\\lambda_{\\textnormal{SW}}(X)$, which completes the proof of Theorem \\ref{new obstruction}.\n\nAs can be seen from the above discussion, the study of Seiberg-Witten equations on end-periodic manifolds plays a central role in our argument. We note that the first application of gauge theory on end-periodic manifolds was given by Taubes \\cite{Taubes} in the context of Donaldson theory, where he proved that the Euclidean space $\\mathds{R}^{4}$ admits uncountably many exotic smooth structures. However, the Seiberg-Witten theory on end-periodic manifolds is still not well developed. 
One major difficulty in this direction is finding a reasonable substitute for the assumption $\\pi_{1}(W)=1$ (which was used in \\cite{Taubes}) and proving the compactness theorem under the new assumption. In the current paper, we use the positive scalar curvature assumption, which yields interesting results but is still not general enough. This motivates the second purpose of the paper: we try to develop a framework that can be useful in the further study of Seiberg-Witten theory on general end-periodic manifolds. In fact, all the results (except Lemma \\ref{orientation reversal 2}) in Section 2, Section 3 and the appendix are stated and proved without the positive scalar curvature assumption.\n\nWe note that many of the results and proofs in the current paper follow the same line as Kronheimer-Mrowka's book \\cite{KM}. The idea is that, by working with suitably weighted Sobolev spaces, one can treat the non-compact manifold $$X_{+}=W\\cup_{Y} W\\cup_{Y}...$$ as a compact manifold whose signature equals the correction term $-w(X,0,g_{X})$ (see Subsection 2.4).\n\nThe precise statements of all the results used in the current paper will be given. However, to keep the length of the paper somewhat under control, we will omit proofs that are word-for-word translations from the corresponding parts of \\cite{KM}. To help the reader follow the argument, we will always give precise references for the omitted details. From now on, we will refer to \\cite{KM} as \\textbf{the book}.\n\nThe paper is organized as follows: In Section 2, we briefly recall the definition of the monopole Floer homology, the Fr\\o yshov invariant $\\operatorname{h}(Y,\\mathfrak{s})$ and the $4$-dimensional Casson-type invariant $\\lambda_{\\textnormal{SW}}(X)$. We will also review and prove some results about linear analysis on end-periodic manifolds. In Section 3, we start setting up the gauge theory on end-periodic manifolds and define the moduli spaces. 
In Section 4, we prove the compactness result under the positive scalar curvature assumption. In Section 5, we put all the pieces together and finish the proof of Theorem \\ref{new obstruction}. In the appendix, we prove (using the Fourier-Laplace transformation) Proposition \\ref{laplace equation}, which states the existence and uniqueness of the solution of the Laplace equation on end-periodic manifolds. This may be of independent interest to some readers.\n\\\\\n\\\\\n\\textbf{Acknowledgement.} The author wishes to thank\nPeter Kronheimer, Tomasz Mrowka, Ciprian Manolescu, Daniel Ruberman, Nikolai Saveliev and Richard Schoen for sharing their expertise in several inspiring discussions. The author is especially grateful to Clifford Taubes for suggesting the idea of the proof of Lemma \\ref{exp decay} (the key estimate in Section 4) and Terence Tao for providing an alternative proof of Lemma \\ref{Solving laplace equation on covering space}. Corollary \\ref{rohlin=froyshov} was also proved by Daniel Ruberman \\cite{Ruberman} using Schoen-Yau's minimal surface result.\n\nDuring the preparation of the current paper, the author noticed that a different version of the compactness theorem for Seiberg-Witten equations over manifolds with periodic ends of positive scalar curvature was proved earlier in Diogo Veloso's thesis \\cite{Diogo}. A different type of Hodge decomposition for such manifolds was also studied there.\n\n\n\\section{Preliminaries}\n\\subsection{The setup and notation}\nLet $X$ be a connected, oriented, smooth $4$-manifold satisfying the condition $$H_{1}(X;\\mathds{Z})\\cong \\mathds{Z},\\ H_{2}(X;\\mathds{Z})\\cong 0.$$ In other words, $X$ is a homology $S^{1}\\times S^{3}$. We further assume that $H_{3}(X;\\mathds{Z})$ is generated by an embedded rational homology 3-sphere $Y$. (This is not always the case.) We fix a homology orientation of $X$ by fixing a generator $[1]\\in H_{1}(X;\\mathds{Z})$. 
This induces an orientation on $Y$ by requiring that $[1]\\cup [Y]=[X]$. Let $W$ be the cobordism from $Y$ to itself obtained by cutting $X$ open along $Y$. The infinite cyclic covering space of $X$ has a decomposition\n$$\n\\tilde{X}=...\\cup_{Y} W_{-1}\\cup_{Y} W_{0}\\cup_{Y} W_{1}\\cup ... \\ \\text{with all }W_{n}\\cong W.\n$$\n We choose a lift of $Y$ to $\\tilde{X}$ and still call it $Y$. We let\n$$X_{+}=W_{0}\\cup_{Y} W_{1}\\cup_{Y} W_{2}\\cup ... $$\nbe one of the two components of $\\tilde{X}\\setminus Y$.\n\\begin{nota}\nIn the current paper, we will use $\\cup$ to denote the disjoint union and use $\\cup_{Y}$ to denote the result of gluing two manifolds along their common boundary $Y$.\n\\end{nota}\nThere are two spin structures on $X$. We pick one of them and denote it by $\\hat{\\mathfrak{s}}$. It induces spin structures on the various manifolds we have constructed so far. In particular, we have an induced spin structure on $Y$, which we denote by $\\mathfrak{s}$. It is not hard to see that $\\mathfrak{s}$ does not depend on the choice of $\\hat{\\mathfrak{s}}$. These spin structures will be fixed throughout the paper and we will suppress them from most of our notations. We denote by $S^{+}$ and $S^{-}$ the positive and negative spinor bundles over the various $4$-manifolds. The spin connections over the $4$-manifolds are all denoted by $A_{0}$. For the 3-manifold $Y$, we denote the spinor bundle by $S$ and the spin connection by $B_{0}$. In both dimensions, we write $\\rho$ for the Clifford multiplication.\n\nOther than $\\tilde{X}$ and $X_{+}$, we also consider the following two (non-compact) spin $4$-manifolds\n$$\nM_{+}:=M\\cup_{Y} X_{+}\\ \\text{and }Z_{+}:=Z\\cup_{Y} X_{+},\n$$\nwhere $Z=(-\\infty,0]\\times Y$ and $M$ is a compact spin $4$-manifold bounded by $(Y,\\mathfrak{s})$. By doing surgeries along loops in $M$, we can assume that $b_{1}(M)=0$. 
We denote by $\\bar{M}$ the orientation reversal of $M$.\n\n\nNow we specify Riemannian metrics on these manifolds: Let $g_{X}$ be a metric on $X$. We consider a harmonic map\n\\begin{equation}\\label{harmonic function}f:X\\rightarrow S^{1}\\cong \\mathds{R}\/\\mathds{Z}\\end{equation}\nsatisfying\n$$\nf^{*}(d\\theta)=[1]\\in H^{1}(X;\\mathds{Z}).\n$$\nIt was proved in \\cite{ruberman2007dirac} that for a generic choice of $g_{X}$, the Dirac operator\n$$\\slashed{D}^{+}_{A}:L^{2}_{1}(X;S^{+})\\rightarrow L^{2}(X;S^{-}),$$\nassociated to the connection $A=A_{0}+i a\\cdot f^{*}(d\\theta)$ for any $a\\in \\mathds{R}$, has trivial kernel. We call such a metric an ``admissible metric''.\n\\begin{assum}\\label{admissible metric}\nThroughout this paper, we fix a choice of an admissible metric $g_{X}$.\n\\end{assum}\n\n\\begin{rmk}\nBy the Weitzenb\\\"ock formula, any metric with positive scalar curvature is admissible. However, we will not impose this positive scalar curvature condition until Section 4.\n\\end{rmk}\n\nLet $g_{\\tilde{X}}$ be the lift of $g_{X}$ to $\\tilde{X}$ and $g_{Y}$ be an arbitrary metric on $Y$. Using a cut-off function, we can construct a metric $g_{X_{+}}$ on $X_{+}$ which is isometric to the product metric $[0,3]\\times g_{Y}$ near the boundary (with $\\{0\\}\\times Y$ identified with $\\partial X_{+}$)\n and whose restriction to $X_{+}\\setminus W_{0}$ equals $g_{\\tilde{X}}$. Let $g_{M}$ be a metric on $M$ isometric to the product metric near the boundary. By gluing $g_{M}$ and $g_{X_{+}}$ together, we get a metric $g_{M_{+}}$ on $M_{+}$. Similarly, we obtain the metric\n$g_{Z_{+}}$ on $Z_{+}$ by gluing the metric $g_{X_{+}}$ together with the product metric on $Z$.\n\n\n\\subsection{The monopole Floer homology and the Fr\\o yshov invariant }\nIn this subsection, we briefly review the definition of the monopole Floer homology and the Fr\\o yshov invariant. 
For details, we refer to the book and \\cite{Froyshov}.\n\nLet $k\\geq 3$ be an integer fixed throughout the paper. To begin with, we define\n$$\n\\mathcal{A}_{k-1\/2}(Y)=\\{B_{0}+a| a\\in L^{2}_{k-1\/2}(Y;i\\mathds{R})\\}\n$$\nas the space of spin$^{\\text{c}}$ connections over $Y$ of class $L^{2}_{k-1\/2}$. Consider the configuration space:\n$$\n\\mathcal{C}_{k-1\/2}(Y)=\\mathcal{A}_{k-1\/2}(Y)\\times L^{2}_{k-1\/2}(Y;S).\n$$\nThe pair $(B,\\Psi)\\in \\mathcal{C}_{k-1\/2}(Y)$ is called reducible if $\\Psi=0$. Denote by $\\mathcal{C}^{\\text{red}}_{k-1\/2}(Y)$ the space of reducible pairs. We will also consider the blown-up configuration space:\n\\begin{equation}\n\\begin{split}\n\\mathcal{C}^{\\sigma}_{k-1\/2}(Y)=\\{&(B,s,\\Psi)|\\ B\\in \\mathcal{A}_{k-1\/2}(Y),\\\\\n&s\\in \\mathds{R}_{\\geq 0} \\text{ and } \\Psi\\in L^{2}_{k-1\/2}(Y;S) \\text{ satisfies }\\|\\Psi\\|_{L^{2}}=1\\}.\\end{split}\n\\end{equation}\nThe gauge group\n$$\\mathcal{G}_{k+1\/2}(Y)=\\{u:Y\\rightarrow S^{1}|\\ \\|u\\|_{L^{2}_{k+1\/2}}<\\infty\\}$$ acts on both $\\mathcal{C}_{k-1\/2}(Y)$ and $\\mathcal{C}^{\\sigma}_{k-1\/2}(Y)$. Denote the quotient spaces by $\\mathcal{B}_{k-1\/2}(Y)$ and $\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$ respectively. It was proved in the book that $\\mathcal{C}_{k-1\/2}(Y)$ and $\\mathcal{B}_{k-1\/2}(Y)$ are Hilbert manifolds without boundary, while\n$\\mathcal{C}^{\\sigma}_{k-1\/2}(Y)$ and $\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$ are Hilbert manifolds with boundary.\n\nWe define the Chern-Simons-Dirac functional $\\mathcal{L}$ (with $B_{0}$ as the preferred reference connection) on $\\mathcal{C}_{k-1\/2}(Y)$ as\n\\begin{equation}\\label{CSD}\n\\mathcal{L}(B,\\Psi)=-\\frac{1}{8}\\int_{Y}(B^{t}-B^{t}_{0})\\wedge (F_{B^{t}}+F_{B^{t}_{0}})+\\frac{1}{2}\\int_{Y}\\langle \\slashed{D}_{B}\\Psi,\\Psi\\rangle\\,d\\text{vol},\n\\end{equation}\nwhere $B^{t}$ and $B_{0}^{t}$ denote the induced connections on the determinant bundle $\\text{det}(S)$ and $F_{B^{t}}$, $F_{B_{0}^{t}}$ denote their curvatures. 
We denote by $\\operatorname{grad}\\mathcal{L}$ the formal gradient of $\\mathcal{L}$. This is a section of the $L^{2}_{k-3\/2}$-completed tangent bundle of $\\mathcal{C}_{k-1\/2}(Y)$. To achieve transversality, we need to add a perturbation $\\mathfrak{q}$ to $\\operatorname{grad}\\mathcal{L}$. The sum $\\operatorname{grad}\\mathcal{L}+\\mathfrak{q}$ is gauge invariant and gives rise to a ``vector field'' $$v_{\\mathfrak{q}}^{\\sigma}:\\mathcal{B}_{k-1\/2}^{\\sigma}(Y)\\rightarrow \\mathcal{T}_{k-3\/2}(Y),$$ where $\\mathcal{T}_{k-3\/2}(Y)$ denotes the $L^{2}_{k-3\/2}$ completion of the tangent bundle of $\\mathcal{B}_{k-1\/2}^{\\sigma}(Y)$. (We put the quotation marks here because $v_{\\mathfrak{q}}^{\\sigma}$ is not a section of the actual tangent bundle.) We call the perturbation $\\mathfrak{q}$ admissible if all critical points of $v^{\\sigma}_{\\mathfrak{q}}$ are nondegenerate and the moduli spaces of flow lines connecting them are regular. (See Page 411 of the book for the precise definition.) 
Under this admissibility condition, the set $\\mathfrak{C}$ of critical points of $v^{\\sigma}_{\\mathfrak{q}}$ is discrete and can be decomposed into the disjoint union of three subsets:\n\\begin{itemize}\n\\item $\\mathfrak{C}^{o}$: the set of irreducible critical points;\n\\item $\\mathfrak{C}^{s}$: the set of reducible, boundary stable critical points (i.e., reducible critical points where $v^{\\sigma}_{\\mathfrak{q}}$ points outside the boundary);\n\\item $\\mathfrak{C}^{u}$: the set of reducible, boundary unstable critical points (i.e., reducible critical points where $v^{\\sigma}_{\\mathfrak{q}}$ points inside the boundary).\n\\end{itemize}\nThe monopole Floer homologies $\\widebar{HM}(Y,\\mathfrak{s};\\mathds{Q})$,\n$\\widecheck{HM}(Y,\\mathfrak{s};\\mathds{Q})$ and $\\widehat{HM}(Y,\\mathfrak{s};\\mathds{Q})$ are defined as the homology of the chain complexes freely generated by $\\mathfrak{C}^{o}$, $\\mathfrak{C}^{o}\\cup \\mathfrak{C}^{s}$ and $\\mathfrak{C}^{o}\\cup \\mathfrak{C}^{u}$ respectively.\n\nOur main concern will be $\\widebar{HM}(Y,\\mathfrak{s};\\mathds{Q})$ and $\\widecheck{HM}(Y,\\mathfrak{s};\\mathds{Q})$. To give the precise definitions, we first recall that a two-element set $\\Lambda([\\mathfrak{b}])$ (called the orientation set) can be associated to each $[\\mathfrak{b}]\\in \\mathfrak{C}$ (see Section 20.3 of the book). After making a choice of preferred element $\\chi([\\mathfrak{b}])\\in\\Lambda([\\mathfrak{b}])$ for each $[\\mathfrak{b}]$, we can canonically orient the moduli spaces of trajectories connecting them. Now let $C^{o}$ (resp. $C^{u}$ and $C^{s}$) be a vector space over $\\mathds{Q}$ with basis $\\{e_{[\\mathfrak{b}]}\\}$ indexed by elements $[\\mathfrak{b]}$ in $\\mathfrak{C}^{o}$ (resp. $\\mathfrak{C}^{s}$ and $\\mathfrak{C}^{u}$). 
We define the linear maps\n$$\n\\partial^{o}_{o}:C^{o}\\rightarrow C^{o},\\ \\ \\ \\ \\partial^{o}_{s}:C^{o}\\rightarrow C^{s},\n$$\n$$\n\\partial^{u}_{o}:C^{u}\\rightarrow C^{o},\\ \\ \\ \\ \\partial^{u}_{s}:C^{u}\\rightarrow C^{s}\n$$\nby the formulae\n$$\n\\partial^{o}_{o}e_{[\\mathfrak{b}]}=\\mathop{\\sum}\\limits_{[\\mathfrak{b}']\\in \\mathfrak{C}^{o}} \\#\\breve{\\mathcal{M}}([\\mathfrak{b}],[\\mathfrak{b}'])\\cdot e_{[\\mathfrak{b}']}\\ \\ \\ \\ ([\\mathfrak{b}]\\in \\mathfrak{C}^{o})\n$$\nand so on, where the integer $\\#\\breve{\\mathcal{M}}([\\mathfrak{b}],[\\mathfrak{b}'])$ counts (with sign) the number of points in $\\breve{\\mathcal{M}}([\\mathfrak{b}],[\\mathfrak{b}'])$ (the moduli space of Seiberg-Witten trajectories going from $[\\mathfrak{b}]$ to $[\\mathfrak{b}']$) when it has dimension $0$.\n\nBy considering the number $\\#\\breve{\\mathcal{M}}^{\\text{red}}([\\mathfrak{b}],[\\mathfrak{b}'])$ instead (i.e., only counting reducible trajectories), we can similarly define the linear maps\n$$\n\\bar{\\partial}^{s}_{s}:C^{s}\\rightarrow C^{s},\\ \\ \\ \\\n\\bar{\\partial}^{s}_{u}:C^{s}\\rightarrow C^{u},$$\n$$\n\\bar{\\partial}^{u}_{s}:C^{u}\\rightarrow C^{s},\\ \\ \\ \\ \\bar{\\partial}^{u}_{u}:C^{u}\\rightarrow C^{u}.\n$$\n(We note that $\\bar{\\partial}^{u}_{s}$ is different from $\\partial^{u}_{s}$.)\n\nThe following definition was given as Definition 22.1.7 of the book.\n\\begin{defi}\\label{monopole Floer}\nThe monopole Floer homology groups $\\widebar{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$ and $\\widecheck{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$ are defined as the homology groups of the chain complexes $\\bar{C}=C^{s}\\oplus C^{u}$ and $\\check{C}=C^{o}\\oplus C^{s}$ with the differentials\n\\begin{equation}\\label{differential for hm-bar}\n\\bar{\\partial}=\\left(\\begin{array} {cc}\n \\bar{\\partial}^{s}_{s} & \\bar{\\partial}^{u}_{s} \\\\\n \\bar{\\partial}^{s}_{u} & \\bar{\\partial}^{u}_{u}\n\\end{array}\\right)\\text{ and 
}\\check{\\partial}:=\\left(\\begin{array} {cc}\n \\partial^{o}_{o} & -\\partial^{u}_{o}\\bar{\\partial}^{s}_{u} \\\\\n \\partial^{o}_{s} & \\bar{\\partial}^{s}_{s}-\\partial^{u}_{s}\\bar{\\partial}^{s}_{u}\n\\end{array}\\right)\n\\end{equation}\nrespectively. There is a natural map $i_{*}:\\widebar{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})\\rightarrow \\widecheck{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$ induced by the chain map $i:\\bar{C}\\rightarrow \\check{C}$ defined as\n\\begin{equation}\\label{chain map}\n\\left(\\begin{array} {cc}\n0 & -\\partial^{u}_{o}\\\\\n 1 & -\\partial^{u}_{s}\n\\end{array}\\right).\n\\end{equation}\n\\end{defi}\nTo each $[\\mathfrak{b}]\\in \\mathfrak{C}$, we can assign a rational number $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])$ (called the absolute grading) as follows (see Definition 28.3.1 of the book): Let $\\operatorname{gr}(M,[\\mathfrak{b}])$ be the ``relative $M$-grading'' of $[\\mathfrak{b}]$. This number describes the expected dimension of the Seiberg-Witten moduli space on the manifold $M^{*}=M\\cup_{Y} ([0,+\\infty)\\times Y)$ with limit $[\\mathfrak{b}]$. It was proved in the book that the quantity\n\\begin{equation}\\label{absolute grading}\n-\\operatorname{gr}(M,[\\mathfrak{b}])-b_{2}^{+}(M)-\\frac{1}{4}\\operatorname{sign}(M)-1\n\\end{equation}\ndoes not depend on the choice of $M$ and we define it as $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])$. This grading induces absolute gradings on $\\widebar{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q}),\\ \\widehat{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$ and $\\widecheck{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$. The map $i_{*}$ in Definition \\ref{monopole Floer} preserves this grading.\n\\begin{rmk}\nIn (\\ref{absolute grading}), we use $\\operatorname{gr}(M,[\\mathfrak{b}])$ instead of $\\operatorname{gr}([\\mathfrak{a}_{0}],M\\setminus B^{4},[\\mathfrak{b}])$ as in the book. 
Here $[\\mathfrak{a}_{0}]$ denotes the first boundary stable critical point in $\\mathcal{B}^{\\sigma}_{k-1\/2}(S^{3})$. These two gradings satisfy the relation (see Lemma 27.4.2 of the book)\n$$\n\\operatorname{gr}(M,[\\mathfrak{b}])=\\operatorname{gr}(B^{4},[\\mathfrak{a}_{0}])+\\operatorname{gr}([\\mathfrak{a}_{0}],M\\setminus B^{4},[\\mathfrak{b}])=-1+\\operatorname{gr}([\\mathfrak{a}_{0}],M\\setminus B^{4},[\\mathfrak{b}]).\n$$\nThis explains the extra term ``$-1$'' in our formula.\n\\end{rmk}\n\n\n\\begin{rmk}\nIn general, one needs to specify a connected component of $\\mathcal{B}^{\\sigma}_{k}(M)$ (the blown-up quotient configuration space of $M$) to define the relative $M$-grading. However, in our case the space $\\mathcal{B}^{\\sigma}_{k}(M)$ is connected since $b_{1}(Y)=0$.\n\\end{rmk}\n\n\\begin{defi}\\cite{Froyshov}\nThe Fr\\o yshov invariant is defined as\n$$\\operatorname{h}(Y,\\mathfrak{s}):=-\\frac{1}{2}\\cdot\\inf\\{\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])|[\\mathfrak{b}]\\text{ represents a nonzero element in }\\operatorname{im}i_{*}\\}.$$\n\\end{defi}\nThe following lemma was proved in \\cite{Froyshov} (in a (possibly) different version of monopole Floer homology). 
The proof can be easily adapted to the version used in the book.\n\\begin{lem}\\label{orientation reversal}\nFor any rational homology sphere $Y$ and any spin$^{\\text{c}}$ structure $\\mathfrak{s}$ on $Y$, we have $\\operatorname{h}(-Y,\\mathfrak{s})=-\\operatorname{h}(Y,\\mathfrak{s})$.\n\\end{lem}\n\n\n\\begin{defi}\nAn admissible perturbation $\\mathfrak{q}$ is called a ``nice perturbation'' if $\\mathfrak{q}=0$ when restricted to ${\\mathcal{C}^{\\operatorname{red}}_{k-1\/2}(Y)}$.\n\\end{defi}\n\n\\begin{rmk}\\label{component of perturbation}\nSince the tangent bundle of $\\mathcal{C}_{k-1\/2}(Y)$ is trivial with fiber $$L^{2}_{k-1\/2}(Y;i\\mathds{R})\\oplus L^{2}_{k-1\/2}(Y;S),$$ we can write the perturbation $\\mathfrak{q}$ as $(\\mathfrak{q}^{0},\\mathfrak{q}^{1})$, where $\\mathfrak{q}^{0}$ denotes the connection component and $\\mathfrak{q}^{1}$ denotes the spinor component. Note that by the gauge invariance, the restriction of $\\mathfrak{q}^{1}$ to $\\mathcal{C}^{\\operatorname{red}}_{k-1\/2}(Y)$ is always $0$. Therefore, an admissible perturbation $\\mathfrak{q}$ is nice if and only if $\\mathfrak{q}^{0}=0$ when restricted to ${\\mathcal{C}^{\\operatorname{red}}_{k-1\/2}(Y)}$.\n\\end{rmk}\n\nUnder the assumption that $\\mathfrak{q}$ is nice, there is only one reducible critical point downstairs (up to gauge transformation), which is just $(B_{0},0)$. As for the critical points upstairs, the sets $\\mathfrak{C}^{u}$ and $\\mathfrak{C}^{s}$ can be described explicitly as follows:\nConsider the self-adjoint operator\n\\begin{equation}\\label{perturbed dirac}\n\\slashed{D}_{\\mathfrak{q},B_{0}}:L^{2}_{k-1\/2}(Y;S)\\rightarrow L^{2}_{k-3\/2}(Y;S)\n\\end{equation}\n$$\n\\Psi\\mapsto \\slashed{D}_{B_{0}}\\Psi+\\mathcal{D}_{(B_{0},0)}\\mathfrak{q}^{1}(0,\\Psi) .\n$$\nSince $\\mathfrak{q}$ is admissible, $0$ is not an eigenvalue of $\\slashed{D}_{\\mathfrak{q},B_{0}}$ and all eigenvalues have multiplicity $1$ (see Proposition 12.2.5 of the book). 
We arrange the eigenvalues $\\lambda_{*}$ so that\n$$\n...\\lambda_{-2}<\\lambda_{-1}<0<\\lambda_{0}<\\lambda_{1}<...\n$$\nFor each $i$, we pick an eigenvector $\\psi_{i}$ with eigenvalue $\\lambda_{i}$ and $\\|\\psi_{i}\\|_{L^{2}}=1$. We let $[\\mathfrak{a}_{i}]=[(B_{0},0,\\psi_{i})]$. By Proposition 10.3.1 of the book, we have\n$$\n\\mathfrak{C}^{s}=\\{[\\mathfrak{a}_{i}]|\\,i\\geq 0\\},\\ \\mathfrak{C}^{u}=\\{[\\mathfrak{a}_{i}]|\\,i< 0\\}.\n$$\nFrom now on, we always use $[\\mathfrak{a}_{*}]$ to denote these reducible critical points. Note that $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{a}_{i}])-\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{a}_{i-1}])$ equals $1$ when $i=0$ and equals $2$ otherwise.\n\n\\begin{defi}\nLet $\\mathfrak{q}$ be a nice perturbation. The height of $\\mathfrak{q}$ is defined as $$\\operatorname{ht}(\\mathfrak{q})=\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{a}_{0}]).$$\nIn other words, the height is defined to be the absolute grading of the lowest boundary stable critical point.\n\\end{defi}\nConsider the operator\n$$\nD_{\\mathfrak{q}}:L^{2}_{k}(M;S^{+})\\rightarrow L^{2}_{k-1}(M;S^{-})\\oplus (L^{2}_{k-1\/2}(Y;S)\\cap H_{1}^{-})\n$$\n$$\n\\Phi\\mapsto (\\slashed{D}^{+}_{\\hat{\\mathfrak{q}},A_{0}}\\Phi,\\pi^{-}(\\Phi|_{Y}))\n$$\nwhere $\\slashed{D}^{+}_{\\hat{\\mathfrak{q}},A_{0}}$ is a perturbed Dirac operator over $M$ which equals $\\frac{d}{dt}+\\slashed{D}_{\\mathfrak{q},B_{0}}$ near the boundary; $H^{-}_{1}$ (resp. $H^{+}_{1}$) is the closure in $L^{2}(Y;S)$ of the span of the eigenvectors of $\\slashed{D}_{\\mathfrak{q},B_{0}}$ with negative (resp. 
positive) eigenvalue; $\\pi^{-}$ is the projection to $L^{2}_{k-1\/2}(Y;S)\\cap H_{1}^{-}$ with kernel $H^{+}_{1}$.\n\\begin{lem}\\label{height as index}\nFor any nice perturbation $\\mathfrak{q}$, we have\n\\begin{equation}\\label{height as eta invariant}\n\\operatorname{ht}(\\mathfrak{q})=-2\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}}-\\tfrac{\\operatorname{sign}(M)}{4}.\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nBy the same argument as on Page 508 of the book, we can identify $\\operatorname{gr}(M,[\\mathfrak{a}_{0}])$ with the index of the Fredholm operator (24.41) in the book. A further deformation identifies this index with the index of the operator $D_{\\mathfrak{q}}\\oplus B$, where $B$ is the Fredholm operator\n$$\nL^{2}_{k}(M;iT^{*}M)\\rightarrow L^{2}_{k-1}(M;i\\mathds{R}\\oplus i\\wedge^{2}_{+}T^{*}M)\\oplus L^{2}_{k-1\/2}(Y;i\\mathds{R})\\oplus C^{-}\n$$\n$$\n\\alpha\\mapsto (d^{*}\\alpha,d^{+}\\alpha,\\langle \\alpha,\\vec{v}\\rangle,\\alpha^{-}).\n$$\nHere $C^{-}\\subset (\\operatorname{ker}d^{*}\\cap L^{2}_{k-1\/2}(Y;iT^{*}Y))$ denotes the negative eigenspace of the operator $*d$ and $\\alpha^{-}\\in C^{-}$ denotes the projection of $\\alpha|_{Y}$. By Lemma 24.8.1 of the book, we have $\\operatorname{ind}_{\\mathds{R}}B=-b^{+}_{2}(M)-1$. 
Therefore, we get\n$$\n\\operatorname{gr}(M,[\\mathfrak{a}_{0}])=2\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}}-b_{2}^{+}(M)-1.\n$$\nBy (\\ref{absolute grading}), we then have\n$$\n\\operatorname{ht}(\\mathfrak{q})=\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{a}_{0}])=-(2\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}}-b_{2}^{+}(M)-1)-b_{2}^{+}(M)-\\tfrac{\\operatorname{sign}(M)}{4}-1=-2\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}}-\\tfrac{\\operatorname{sign}(M)}{4},\n$$\nwhich implies the lemma.\n\\end{proof}\nNow consider the following subset of $\\mathds{Q}$:\n$$\n\\mathfrak{m}(Y,\\mathfrak{s})=\\{a\\in \\mathds{Q}|\\, a=[-\\frac{\\operatorname{sign}(M)}{8}]\\in \\mathds{Q}\/\\mathds{Z}\\}.\n$$\n\\begin{rmk}$\\mathfrak{m}(Y,\\mathfrak{s})$ is actually determined by the Rohlin invariant $\\rho(Y,\\mathfrak{s})$ and hence independent of the choice of $M$.\\end{rmk}\n\n\n\\begin{pro}\\label{height of nice perturbation}\nFor any $e\\in \\mathfrak{m}(Y,\\mathfrak{s})$, there exists a nice perturbation $\\mathfrak{q}$ with $\\tfrac{\\operatorname{ht}(\\mathfrak{q})}{2}=e$.\n\\end{pro}\n\\begin{proof}\nLet $\\{\\psi_{n}|\\,n\\in \\mathds{Z}_{\\geq 0}\\}$ be a complete, orthonormal set of eigenvectors of $\\slashed{D}_{B_{0}}$. Let the eigenvalue of $\\psi_{n}$ be $\\lambda_{n}'$. For each $n$, we consider the function\n$$\nf_{n}:\\mathcal{C}_{k-1\/2}(Y)\\rightarrow \\mathds{R}\n$$\n$$\n(B_{0}+a,\\Psi)\\mapsto |\\langle e^{i\\xi}\\Psi,\\psi_{n}\\rangle_{L^{2}}|^{2}\n$$\nwhere $\\xi:Y\\rightarrow\\mathds{R}$ is the unique solution of\n$$\ni\\Delta \\xi=d^{*}da,\\ \\int_{Y}\\xi=0.\n$$\nOne can prove that $f_{n}$ is invariant under the action of $\\mathcal{G}_{k+1\/2}(Y)$. We denote by $\\mathfrak{q}_{n}$ the formal gradient of $f_{n}$. A simple calculation shows that\n$$\n\\mathcal{D}_{(B_{0},0)}\\mathfrak{q}_{n}^{1}(0,\\Psi)=2\\langle \\Psi,\\psi_{n}\\rangle_{L^{2}}\\cdot \\psi_{n}.\n$$\nWe let $\\mathfrak{q}'=\\mathop{\\sum}\\limits_{n=0}^{+\\infty}c_{n}\\mathfrak{q}_{n}$, where $\\{c_{n}\\}$ is a sequence of real numbers. We require that $|c_{n}|$ decrease to $0$ fast enough so that $\\mathfrak{q}'$ is a tame perturbation (see Definition 10.5.1 of the book). Now consider the perturbed Dirac operator $\\slashed{D}_{\\mathfrak{q}',B_{0}}$ (see (\\ref{perturbed dirac})). 
Its eigenvalues are of the form $\\lambda'_{n}+2c_{n}$ and the corresponding eigenvector is just $\\psi_{n}$. By choosing a generic sequence $\\{c_{n}\\}$, we can assume\n$$\n\\lambda'_{n}+2c_{n}\\neq \\lambda'_{m}+2c_{m},\\ \\forall n\\neq m\\text{ and } \\lambda'_{n}+2c_{n}\\neq 0,\\ \\forall n\\in\\mathds{Z}_{\\geq 0}.\n$$\nNote that the number $\n-\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}'}-\\tfrac{\\operatorname{sign}(M)}{8}\n$\nalways belongs to $\\mathfrak{m}(Y,\\mathfrak{s})$. Moreover, as we vary $\\{c_{n}\\}$, this number changes by the spectral flow of $\\slashed{D}_{\\mathfrak{q}',B_{0}}$. Therefore, by choosing suitable $\\{c_{n}\\}$, we may assume that\n$$\ne=-\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}'}-\\frac{\\operatorname{sign}(M)}{8}.\n$$\nUnder this perturbation $\\mathfrak{q}'$, the reducible critical points are just $[(B_{0},0,\\psi_{n})]$ with $n\\geq 0$. All of them are non-degenerate by \\cite[Proposition 12.2.5]{KM}. Therefore, by the compactness result for the critical points, we can find $\\epsilon>0$ such that the gauge invariant open subset\n$$\nU(\\epsilon)=\\{(B,\\Phi)| \\|\\Phi\\|_{L^{2}}<\\epsilon\\} \\subset \\mathcal{C}_{k-1\/2}(Y)\n$$\ncontains no irreducible critical point. Now consider the Banach space\n$$\\mathcal{P}(U(\\epsilon)):=\\{\\mathfrak{q}''\\in \\mathcal{P}|\\ \\mathfrak{q}''|_{U(\\epsilon)}= 0\\},$$\nwhere $\\mathcal{P}$ is the large Banach space of tame perturbations constructed in Theorem 11.6.1 of the book. By repeating the proof of Theorem 15.1.1 of the book, we can find a perturbation $\\mathfrak{q}''\\in \\mathcal{P}(U(\\epsilon))$ such that the perturbation $\\mathfrak{q}=\\mathfrak{q}''+\\mathfrak{q}'$ is admissible. Since both $\\mathfrak{q}''$ and $\\mathfrak{q}'$ vanish on $\\mathcal{C}_{k-1\/2}^{\\text{red}}(Y)$, the perturbation $\\mathfrak{q}$ is nice. Moreover, since $\\mathfrak{q}''$ vanishes on $U(\\epsilon)$, we have $D_{\\mathfrak{q}}=D_{\\mathfrak{q}'}$. 
By Lemma \\ref{height as index}, we have\n$$\n\\frac{\\operatorname{ht}(\\mathfrak{q})}{2}=-\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}}-\\tfrac{\\operatorname{sign}(M)}{8}=-\\operatorname{ind}_{\\mathds{C}}D_{\\mathfrak{q}'}-\\tfrac{\\operatorname{sign}(M)}{8}=e.\n$$This finishes the proof.\\end{proof}\n\n\\begin{lem}\\label{alternative defi of Froyshov}\nSuppose $\\mathfrak{q}$ is a nice perturbation with $\\operatorname{ht}(\\mathfrak{q})<-2\\operatorname{h}(Y,\\mathfrak{s})$. Then we have\n\\begin{equation}\\label{Froyshov 1}\n\\begin{split}\n-2\\operatorname{h}(Y,\\mathfrak{s})=&\\inf\\{\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{a}_{j}])|\\,j\\geq 0;\\ \\nexists\\ n,m_{1},...,m_{l}\\in \\mathds{Z}_{\\neq 0} \\text{ and }[\\mathfrak{b}_{1}],...,[\\mathfrak{b}_{l}]\\in \\mathfrak{C}^{o} \\text{ s.t. }\\\\\n&\\partial^{o}_{o}(m_{1}[\\mathfrak{b}_{1}]+...+m_{l}[\\mathfrak{b}_{l}])=0 \\text{ and } \\partial^{o}_{s}(m_{1}[\\mathfrak{b}_{1}]+...+m_{l}[\\mathfrak{b}_{l}])=n[\\mathfrak{a}_{j}]\\}.\n\\end{split}\n\\end{equation}\n\\end{lem}\n\\begin{proof}\nFor grading reasons, all the maps $\\bar{\\partial}^{*}_{*}$ vanish. As a result, the set\n$$\\{[e_{[\\mathfrak{a}_{j}]}]|\\,j\\in \\mathds{Z}\\}$$ is a basis of $\\widebar{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$. 
For $j\\geq 0$, the map $i_{*}$ sends $$[e_{[\\mathfrak{a}_{j}]}]\\in \\widebar{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})$$ to $$[e_{[\\mathfrak{a}_{j}]}]\\in \\widecheck{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q}).$$ Since we have $\\operatorname{ht}(\\mathfrak{q})<-2\\operatorname{h}(Y,\\mathfrak{s})$, the set\n$$\nS=\\{j|j\\geq 0,\\ [e_{[\\mathfrak{a}_{j}]}]\\neq 0\\in \\widecheck{HM}_{*}(Y,\\mathfrak{s};\\mathds{Q})\\}\n$$\ndoes not equal $\\mathds{Z}_{\\geq 0}$ and we have\n\\begin{equation}\\label{Froyshov 2}-2\\operatorname{h}(Y,\\mathfrak{s})=\\inf\\{\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{a}_{j}])|j\\in S\\}.\\end{equation}\nSince we have\n$$\n\\check{\\partial}=\\left(\\begin{array} {cc}\n \\partial^{o}_{o} & 0\\\\\n \\partial^{o}_{s} & 0\n\\end{array}\\right)\n$$ in the current case, (\\ref{Froyshov 1}) and (\\ref{Froyshov 2}) coincide with each other. This finishes the proof of the lemma.\n\\end{proof}\n\n\n\\subsection{Linear analysis on end-periodic manifolds}\n\nIn this subsection, we will set up the appropriate Sobolev spaces on end-periodic manifolds and review the related Fredholm theory. Our construction is inspired by \\cite{Taubes} and \\cite{MRS}.\n\nLet $E$ be an end-periodic bundle (over $\\tilde{X},X_{+},M_{+} \\text{ or }Z_{+}$) equipped with an end-periodic metric $|\\cdot|$ and an end-periodic connection $\\nabla$ (see \\cite{Taubes} for the definition). 
For any $j,p\\in \\mathds{Z}_{\\geq 0}$, we can define the unweighted Sobolev norm of a smooth section $s$ in the usual way:\n\\begin{equation}\\label{unweighted sobolev space}\n\\|s\\|_ {L^{p}_{j}}:=(\\mathop{\\Sigma}\\limits_{i=0}^{j}\\int|\\nabla^{(i)}s|^{p} d\\operatorname{vol})^{\\frac{1}{p}}.\n\\end{equation}\n(We can also define the $L^{p}_{j}$ norm for negative $j$ using integration.)\n\\begin{rmk}\nOther than a trivial real or complex line bundle, which we denote by $\\mathds{R},\\mathds{C}$ respectively, two other types of end-periodic bundle will be considered: the spinor bundle $S^{\\pm}$ (associated to spin structures) and the bundle of differential forms. Both of them have a canonical metric. As for the connection, we use the spin connection for the former and the Levi-Civita connection for the latter.\n\\end{rmk}\n\nIn general, the differential operators that we will consider do not have Fredholm properties under the norms defined in (\\ref{unweighted sobolev space}). Therefore, we need to use the weighted Sobolev norms instead. To define them, recall that we have a harmonic map $f:X\\rightarrow S^{1}$ corresponding to a generator of $H^{1}(X;\\mathds{Z})$. 
We lift $f$ to a function $\\tilde{f}:\\tilde{X}\\rightarrow \\mathds{R}$ satisfying\n$$\n\\tilde{f}^{-1}([-1,1])\\subset \\mathop{\\cup}\\limits_{n=-N}^{N}W_{n}\n\\text{ for some }N\\gg 0.$$\nNow consider the following smooth cut-off functions:\n\\begin{itemize}\n\\item $\\tau_{0}:\\tilde{X}\\rightarrow [0,+\\infty)$: a function that equals $|\\tilde{f}|$ on $\\tilde{X}\\setminus \\mathop{\\cup}\\limits_{n=-N}^{N} W_{n}$;\n\\item $\\tau_{1}:X_{+}\\rightarrow [0,+\\infty)$: the restriction of $\\tau_{0}$;\n\\item $\\tau_{2}:M_{+}\\rightarrow [0,+\\infty)$: an extension of $\\tau_{1}$;\n\\item $\\tau_{3}:Z_{+}\\rightarrow [0,+\\infty)$: an extension of $\\tau_{1}$ with the property that $$\\tau_{3}(t,y)=|t|,\\ \\forall (t,y)\\in (-\\infty,-1]\\times Y.$$\n\\end{itemize}\n\\begin{defi}\nFor $\\delta\\in \\mathds{R},j\\in \\mathds{Z},p\\in \\mathds{Z}_{\\geq 0}$, we define the weighted Sobolev norm of a smooth section $s$ of $E$ in different ways depending on the underlying manifold:\n\\begin{itemize}\n\\item Over $X_{+}$, we set $\\|s\\|_ {L^{p}_{j,\\delta}}=\\|e^{\\delta\\cdot \\tau_{1}}\\cdot s\\|_ {L^{p}_{j}}$;\n\\item Over $M_{+}$, we set $\\|s\\|_ {L^{p}_{j,\\delta}}=\\|e^{\\delta\\cdot \\tau_{2}}\\cdot s\\|_ {L^{p}_{j}}$;\n\\item Over $\\tilde{X}$, we set $\\|s\\|_ {L^{p}_{j;-\\delta,\\delta}}=\\|e^{\\delta\\cdot \\tau_{0}}\\cdot s\\|_ {L^{p}_{j}}$;\n\\item Over $Z_{+}$, we set $\\|s\\|_ {L^{p}_{j;-\\delta,\\delta}}=\\|e^{\\delta\\cdot \\tau_{3}}\\cdot s\\|_ {L^{p}_{j}}$.\n\\end{itemize}\n(Note that we use two weight indices for manifolds $\\tilde{X}$ and $Z_{+}$ because they both have two ends.) 
We denote the corresponding Sobolev spaces respectively by $$L^{2}_{j,\\delta}(X_{+};E),\\ L^{2}_{j,\\delta}(M_{+};E),\\ L^{2}_{j;-\\delta,\\delta}(\\tilde{X};E) \\text{ and } L^{2}_{j;-\\delta,\\delta}(Z_{+};E).$$\nWe remove $j$ from our notation when it equals $0$.\nWe sometimes also suppress the bundle $E$ when it is clear from the context.\n\\end{defi}\n\n\n\nThe following lemma is a straightforward corollary of \\cite[Lemma 5.2]{Taubes}. It asserts that one can control the weighted Sobolev norm of a function using the weighted Sobolev norm of its derivative. (Although \\cite{Taubes} only stated the result for smooth functions, we can prove the general case easily using standard arguments, i.e., approximating a Sobolev function by smooth functions.)\n\n\\begin{lem} \\label{Taubes's lemma}\nFor any $\\delta>0,j\\geq 0$, we can find a positive constant $C$ with the following significance:\n\\begin{enumerate}\n\\item For any $u\\in L^{2}_{1,\\operatorname{loc}}(X_{+};\\mathds{R})$ with $\\|du\\|_{L^{2}_{j,\\delta}}<\\infty$, there exists a unique number $\\bar{u}\\in \\mathds{R}$ such that $\\|u-\\bar{u}\\|_{L^{2}_{j+1,\\delta}}<\\infty$. Moreover, in this case we have $$\\|u-\\bar{u}\\|_{L^{2}_{j+1,\\delta}}\\leq C\\|du\\|_{L^{2}_{j,\\delta}}.$$\n\\item Fix a smooth function $$\\tau_{4}:Z_{+}\\rightarrow [0,1] \\text{ with } \\tau_{4}|_{Z}=0, \\tau_{4}|_{W_{i}}=1\\ \\forall i\\geq 1.$$Then for any $u\\in L^{2}_{1,\\operatorname{loc}}(Z_{+};\\mathds{R})$ with $\\|du\\|_{L^{2}_{j;-\\delta,\\delta}}<\\infty$, there exist unique numbers $\\bar{u},\\bar{\\bar{u}}\\in \\mathds{R}$ such that $\\|u-\\bar{u}-\\bar{\\bar{u}}\\cdot\\tau_{4}\\|_{L^{2}_{j+1;-\\delta,\\delta}}<\\infty$. In this case we have $$\\|u-\\bar{u}-\\bar{\\bar{u}}\\cdot\\tau_{4}\\|_{L^{2}_{j+1;-\\delta,\\delta}}\\leq C\\|du\\|_{L^{2}_{j;-\\delta,\\delta}}.$$\n\\end{enumerate}\n\\end{lem}\n\n\nNext, we summarize the Sobolev embedding and multiplication theorems. 
We focus on the manifold $X_{+}$ (although similar results hold for the other manifolds) because that will be our main concern. The proofs are straightforwardly adapted from the unweighted case (Theorem 13.2.1 and Theorem 13.2.2 of the book) and the cylindrical end case\n(\\cite[Proposition 2.9, Proposition 2.10]{francescolin}) so we omit them.\n\\begin{pro}\\label{Sobolev embedding}\nLet $E$ be an end-periodic bundle over $X_{+}$. There is a continuous inclusion\n$$\nL^{p}_{j,\\delta}(X_{+};E)\\rightarrow L^{q}_{l,\\delta'}(X_{+};E)\n$$\nfor $j\\geq l,\\ \\delta\\geq \\delta'\\geq 0,\\ p \\leq q$ and $(j-4\/p)\\geq (l-4\/q)$. This embedding is compact when $j>l,\\ \\delta>\\delta'$ and $(j-4\/p)> (l-4\/q)$.\n\\end{pro}\n\\begin{pro}\\label{Sobolev multiplication}\nLet $E,F$ be two end-periodic bundles over $X_{+}$.\nSuppose $\\delta+\\delta'\\geq \\delta'',\\ j,l\\geq m$ and $1\/p+1\/q\\geq 1\/r$, with $\\delta,\\delta',\\delta''\\geq 0$ and $p,q,r>1$. Then the multiplication\n$$\nL^{p}_{j,\\delta}(X_{+};E)\\times L^{q}_{l,\\delta'}(X_{+};F)\\rightarrow L^{r}_{m,\\delta''}(X_{+};E\\otimes F)\n$$\nis continuous in any of the following three cases:\n\\begin{enumerate}\n\\item \\begin{enumerate}\n\\item $(j-4\/p)+(l-4\/q)\\geq m-4\/r,$ and\n\\item $j-4\/p<0,$ and\n\\item $l-4\/q<0$;\n\\end{enumerate}\n\\hspace{-4mm} or\n\\item \\begin{enumerate}\n\\item $\\min \\{j-4\/p,l-4\/q\\}\\geq m-4\/r,$ and\n\\item either $j-4\/p>0$ or $l-4\/q>0$;\n\\end{enumerate}\n\\hspace{-4mm} or\n\\item \\begin{enumerate}\n\\item $\\min \\{j-4\/p,l-4\/q\\}> m-4\/r,$ and\n\\item either $j-4\/p=0$ or $l-4\/q=0$.\n\\end{enumerate}\n\n\\end{enumerate}\nWhen the map is continuous, it is a compact operator as a function of the second variable for fixed first variable, provided $l>m$ and $l-4\/q>m-4\/r$.\n\\end{pro}\n\n\nThe following corollary will be very useful because the differential operators we are going to consider can often be decomposed into the sum of a first-order linear operator and a 
zeroth-order, quadratic operator.\n\\begin{cor}\nFor any $j>2,\\delta>0$, the multiplication map $$L^{2}_{j,\\delta}(X_{+};E)\\times L^{2}_{j,\\delta}(X_{+};F)\\rightarrow L^{2}_{j-1,\\delta}(X_{+};E\\otimes F)$$ is compact.\n\\end{cor}\n\\begin{proof}\nBy Proposition \\ref{Sobolev multiplication}, this map factors through the natural inclusion $$L^{2}_{j,2\\delta}(X_{+};E\\otimes F)\\rightarrow L^{2}_{j-1,\\delta}(X_{+};E\\otimes F),$$\nwhich is compact by Proposition \\ref{Sobolev embedding}.\n\\end{proof}\nWe now turn to the relevant Fredholm theory.\n\\begin{pro}\\label{laplace equation}\nThere exists a small $\\delta_{0}>0$ such that for any $j\\in \\mathds{Z}_{\\geq 0}$ and $\\delta\\in (0,\\delta_{0})$, we have the following results:\n\\begin{enumerate}[(i)]\n\\item The operator\n$$\n\\Delta(\\tilde{X};-\\delta,\\delta):L^{2}_{j+2;-\\delta,\\delta}(\\tilde{X};\\mathds{R})\\rightarrow L^{2}_{j;-\\delta,\\delta}(\\tilde{X};\\mathds{R})$$\n$$u \\mapsto \\Delta u\n$$\nis a Fredholm operator with trivial kernel and two-dimensional cokernel. 
The same result holds for the manifold $Z_{+}$.\n\\item The operator\n$$\n\\Delta(M_{+};\\delta):L^{2}_{j+2,\\delta}(M_{+};\\mathds{R})\\rightarrow L^{2}_{j,\\delta}(M_{+};\\mathds{R})$$\n$$u\\mapsto \\Delta u\n$$\nis a Fredholm operator with trivial kernel and 1-dimensional cokernel.\n\\item The operator\n$$\n\\Delta(X_{+};\\delta):L^{2}_{j+2,\\delta}(X_{+};\\mathds{R})\\rightarrow L^{2}_{j,\\delta}(X_{+};\\mathds{R})\\oplus L^{2}_{j+1\/2}(Y;\\mathds{R})$$ $$u\\mapsto (\\Delta u,\\langle du,\\vec{v}\\rangle)$$\nis Fredholm with trivial kernel and $1$-dimensional cokernel, where $\\vec{v}$ denotes the inward normal vector on the boundary.\n\\end{enumerate}\n\\end{pro}\nProposition \\ref{laplace equation} will be proved in the appendix.\n\\begin{lem}\\label{half De rham complex}\nThere exists a constant $\\delta_{1}\\in (0,\\delta_{0})$ such that for any $j\\in \\mathds{Z}_{\\geq 0}$ and $\\delta\\in(0,\\delta_{1})$, we have the following results:\n\\begin{enumerate}[(i)]\n\\item For any $w\\in L^{2}_{j;-\\delta,\\delta}(Z_{+};\\mathds{R})$ with $\n\\int_{Z_{+}} w\\,d\\operatorname{vol}=0,\n$\nwe can find $u\\in L^{2}_{j+2,\\operatorname{loc}}(Z_{+};\\mathds{R})$ satisfying\n$$\n|du|_{L^{2}_{j+1;-\\delta,\\delta}}<\\infty,\\ \\Delta u=w.\n$$\n\\item The operator $$\nD(M_{+}):L^{2}_{j+1,\\delta}(M_{+};T^{*}M_{+})\\rightarrow L^{2}_{j,\\delta}(M_{+};\\mathds{R}\\oplus \\wedge_{2}^{+}T^{*}M_{+} ):\\alpha\\mapsto (d^{*}\\alpha,d^{+}\\alpha)\n$$ is Fredholm with index $-(b_{2}^{+}(M)+1)$;\n\\item The operator $$D(Z_{+}):L^{2}_{j+1;-\\delta,\\delta}(Z_{+};T^{*}Z_{+})\\rightarrow L^{2}_{j;-\\delta,\\delta}(Z_{+};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}Z_{+}):\n\\alpha\\mapsto (d^{*}\\alpha,d^{+}\\alpha)\n$$ is Fredholm with trivial kernel and $1$-dimensional cokernel. 
Its image equals $$\\{(w,\\beta)|\\ \\int_{Z_{+}}w \\,d\\operatorname{vol}=0\\}.$$\n\\item The operator $$D(X_{+}):L_{j+1,\\delta}^{2}(X_{+};T^{*}X_{+})\\rightarrow L^{2}_{j,\\delta}(X_{+};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}X_{+})\\oplus L^{2}_{j+1\/2}(Y;\\mathds{R})\\oplus C^{+}$$\ngiven by\n\\begin{equation}\\label{half De Rham with boundary}\n\\alpha \\mapsto (d^{*}\\alpha,d^{+}\\alpha,\\langle \\alpha,\\vec{v}\\rangle,\\pi^{+}(\\alpha|_{Y}))\n\\end{equation}\nis Fredholm with trivial kernel and one-dimensional cokernel, which can be canonically identified with $\\mathds{R}$. Here $C^{+}$ (resp. $C^{-}$) is the closure in $ L^{2}_{j+1\/2}(Y;T^{*}Y)\\cap \\operatorname{ker} d^{*}$ of the space spanned by the eigenvectors of $*d$ with positive (resp. negative) eigenvalues and $$\\pi^{+}:L^{2}_{j+1\/2}(Y;iT^{*}Y)\\rightarrow C^{+}$$ is the projection with kernel $C^{-}$.\n\\end{enumerate}\n\n\\end{lem}\n\\begin{proof}\n(i) We consider two vector spaces:\n$$\nV_{1}=\\{u\\in L^{2}_{j+2,\\text{loc}}(Z_{+};\\mathds{R})|\\ \\|du\\|_{L^{2}_{j+1;-\\delta,\\delta}}<\\infty\\}\n$$\n$$\nV_{2}=\\{w\\in L^{2}_{j;-\\delta,\\delta}(Z_{+};\\mathds{R})|\\ \\int_{Z_{+}} w\\,d\\text{vol}=0\\}.\n$$\nNow assume $\\delta \\in (0,\\delta_{0})$, where $\\delta_{0}$ is the constant in Proposition \\ref{laplace equation}. By Lemma \\ref{Taubes's lemma}, we also have \\begin{equation}\\label{equivalent defi}V_{1}=L^{2}_{j+2;-\\delta,\\delta}(Z_{+};\\mathds{R})\\oplus\\mathds{R}\\oplus \\mathds{R}\\tau_{4}.\n\\end{equation}\nUsing this identification and integration by parts, we can show that $\\Delta u\\in V_{2}$ for any $u\\in V_{1}$. In other words, we have a well-defined operator\n$$\n\\Delta: V_{1}\\rightarrow V_{2}.\n$$\nComparing the domain and target of this operator with the one in Proposition \\ref{laplace equation} (i), we see that it is a Fredholm operator with index $1$. To finish the proof, we just need to prove that the kernel of $\\Delta$ consists only of constant functions. 
This is a simple consequence of the maximum principle, noticing that all functions in $V_{1}$ are bounded (because of (\\ref{equivalent defi})).\n\n(ii) Consider the operator\n$$\nd^{+}:L^{2}_{j+1,\\delta}(M_{+};T^{*}M_{+})\\rightarrow L^{2}_{j,\\delta}(M_{+};\\wedge_{+}^{2}T^{*}M_{+} ).\n$$\nBy \\cite[Proposition 5.1]{Taubes}, when $\\delta_{1}>0$ is small enough, both the kernel and the image of this operator (which we denote by $V_{3}$ and $V_{4}$ respectively) are closed with the following properties:\n\\begin{equation}\\label{1st homology vanish}\nV_{3}\\cong L^{2}_{j+2,\\delta}(M_{+};\\mathds{R}):du\\leftrightarrow u;\n\\end{equation}\n\\begin{equation}\\label{2nd homology}\n\\text{dim}(L^{2}_{j,\\delta}(M_{+};\\wedge_{2}^{+}T^{*}M_{+} )\/V_{4})=b_{2}^{+}(M).\n\\end{equation}\n\nBy (\\ref{1st homology vanish}), the operator\n$$\nV_{3}\\rightarrow L^{2}_{j-1,\\delta}(M_{+};\\mathds{R}): \\alpha\\mapsto d^{*}\\alpha\n$$\nis essentially the same as the operator $\\Delta(M_{+};\\delta)$ in Proposition \\ref{laplace equation}, which is Fredholm with index $-1$. This implies that the operator\n$$\nL^{2}_{j,\\delta}(M_{+};T^{*}M_{+})\\rightarrow L^{2}_{j-1,\\delta}(M_{+};\\mathds{R})\\oplus V_{4}:\\alpha\\mapsto (d^{*}\\alpha,d^{+}\\alpha)\n$$\nis also Fredholm with the same index. Therefore, by (\\ref{2nd homology}), the operator\n$$\nL^{2}_{j+1,\\delta}(M_{+};T^{*}M_{+})\\rightarrow L^{2}_{j,\\delta}(M_{+};\\mathds{R})\\oplus L^{2}_{j,\\delta}(M_{+};\\wedge^{2}_{+}T^{*}M_{+} ):\\alpha\\mapsto (d^{*}\\alpha,d^{+}\\alpha)\n$$\nis Fredholm with index $-(b_{2}^{+}(M)+1)$.\n\n(iii) To apply the excision principle for the index, we consider the manifold $M_{-}=Z\\cup_{Y}\\bar{M}$. (Recall that $\\bar{M}$ is the orientation reversal of $M$.) 
We choose a function\n$$\\tau:M_{-}\\rightarrow [0,+\\infty)\\text{ with }\\tau(t,y)=|t|,\\ \\forall(t,y)\\in (-\\infty,-1]\\times Y$$\nand define the weighted Sobolev norm of a section $s$ over $M_{-}$ as\n$$\n\\|s\\|_{L^{2}_{j,-\\delta}}:=\\|e^{\\delta\\tau}s\\|_{L^{2}_{j}}\n.$$\nBy an argument similar to that in (ii), one can show that the operator\n$$\nL^{2}_{j+1,-\\delta}(M_{-};T^{*}M_{-})\\rightarrow L^{2}_{j,-\\delta}(M_{-};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}M_{-} ) :\\alpha\\mapsto (d^{*}\\alpha,d^{+}\\alpha)\n$$\nis Fredholm with index $-(b_{2}^{+}(\\bar{M})+1)$. Notice that we have the decompositions\n$$\nM_{+}=M\\cup_{Y}X_{+},\\ M_{-}=Z\\cup_{Y}\\bar{M},\\ Z_{+}= Z\\cup_{Y}X_{+}.\n$$\nBy an excision argument, we see that the operator\n$$\n(d^{*},d^{+}):L^{2}_{j+1;-\\delta,\\delta}(Z_{+};T^{*}Z_{+})\\rightarrow L^{2}_{j;-\\delta,\\delta}(Z_{+};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}Z_{+})\n$$\nis Fredholm with index\n$$\n-(1+b_{2}^{+}(M))-(1+b_{2}^{+}(\\bar{M}))+(1+b_{2}^{+}(M\\cup_{Y}\\bar{M}))=-1.\n$$\nHaving proved this fact, we are left to show that the kernel is trivial. Suppose we have $$\\alpha\\in L^{2}_{j+1;-\\delta,\\delta}(Z_{+};T^{*}Z_{+})\\text{ with }d^{*}\\alpha=0,d^{+}\\alpha=0.$$ Integrating by parts, we get $d\\alpha=0$. Since $H^{1}(Z_{+};\\mathds{R})=0$, we have $\\alpha=du$ for some harmonic function $u$. Notice that $\\|du\\|_{L^{2}_{j+1;-\\delta,\\delta}}<\\infty$. By Lemma \\ref{Taubes's lemma}, the function $u$ is bounded. By the maximum principle, $u$ is a constant, which implies $\\alpha=du=0$.\n\n(iv) Consider the operator\n$$D(\\bar{M}):L_{j+1}^{2}(\\bar{M};T^{*}\\bar{M})\\rightarrow L^{2}_{j}(\\bar{M};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}\\bar{M})\\oplus L^{2}_{j+1\/2}(Y;\\mathds{R})\\oplus C^{+}$$\ndefined by the same formula as (\\ref{half De Rham with boundary}). By Lemma 24.8.1 of the book, $D(\\bar{M})$ is a Fredholm operator with index $-b_{2}^{+}(\\bar{M})-1$. 
We note that the boundary of $\\bar{M}$ is $-Y$ while the boundary of the manifold in that lemma is $Y$; this explains why we use $C^{+}$ while the book uses $C^{-}$. We also note that the additional term ``$-1$'' in our index formula comes from the $1$-dimensional cokernel of the map\n$$D(\\bar{M}):L_{j+1}^{2}(\\bar{M};T^{*}\\bar{M})\\rightarrow L^{2}_{j}(\\bar{M};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}\\bar{M})\\oplus L^{2}_{j+1\/2}(Y;\\mathds{R})$$\n$$\\alpha\\mapsto (d^{*}\\alpha, d^{+}\\alpha,\\langle\\alpha,\\vec{v}\\rangle ).$$ By an excision argument involving the operators $D(X_{+}),D(\\bar{M}),D(M_{+})$ and the operator\n$$d^{*}\\oplus d^{+}:L^{2}_{j+1}(M\\cup_{Y}\\bar{M};T^{*}(M\\cup_{Y}\\bar{M}))\\rightarrow L^{2}_{j}(M\\cup_{Y}\\bar{M};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}(M\\cup_{Y}\\bar{M})),$$\nwe can prove that $D(X_{+})$ is Fredholm with index $-1$. Now suppose $\\alpha\\in\\operatorname{ker}D(X_{+})$. Then by the integration by parts argument on page 502 of the book, we can prove $d\\alpha=0$. Since $H^{1}(X_{+};\\mathds{R})=0$, we have $\\alpha=df$ for some function $f\\in L^{2}_{j+1,\\operatorname{loc}}(X_{+};\\mathds{R})$. By Lemma \\ref{Taubes's lemma}, we can assume $\\|f\\|_{L^{2}_{j+1,\\delta}}<\\infty$ after adding some constant function. Then $f$ satisfies\n$\\Delta f=0,\\ \\langle df,\\vec{v}\\rangle=0$.\nBy Proposition \\ref{laplace equation}, we see that $f$ (hence also $\\alpha$) equals $0$. We have proved that the kernel is trivial, which implies that the cokernel is $1$-dimensional. Using integration by parts again, one can easily see that a necessary condition for an element $$(w_{1},\\beta,w_{2},\\alpha' )\\in L^{2}_{j,\\delta}(X_{+};\\mathds{R}\\oplus \\wedge^{2}_{+}T^{*}X_{+})\\oplus L^{2}_{j+1\/2}(Y;\\mathds{R})\\oplus C^{+}$$ to belong to $\\operatorname{im}D(X_{+})$ is $$\\int_{X_{+}}w_{1}d\\text{vol}+\\int_{Y}w_{2}d\\text{vol}=0.$$ Since the cokernel is $1$-dimensional, we see that this is also a sufficient condition. 
Moreover, we have a canonical isomorphism $$\\operatorname{coker}D(X_{+})\\cong \\mathds{R}:\\ [(w_{1},\\beta,w_{2},\\alpha' )]\\leftrightarrow \\int_{X_{+}}w_{1}d\\text{vol}+\\int_{Y}w_{2}d\\text{vol}.$$\n\\end{proof}\n\nNow we study the Fredholm properties related to the linearized Seiberg-Witten equations. Recall that we chose an ``admissible metric'' $g_{X}$ on $X$ (see Assumption \\ref{admissible metric}). Under this assumption, we have the following proposition.\n\n\\begin{pro}[\\cite{MRS}]\\label{Dirac operator is Fredholm}There exists a number $\\delta_{2}>0$ such that for any $\\delta\\in (-\\delta_{2},\\delta_{2}),j\\in \\mathds{Z}_{\\geq 0}$, the end-periodic Dirac operator\n$$\n\\slashed{D}^{+}_{A_{0}}:L^{2}_{j+1,\\delta}(M^{+};S^{+})\\rightarrow L^{2}_{j,\\delta}(M^{+};S^{-})\n$$\nis Fredholm. Moreover, the number $$\\operatorname{ind}_{\\mathds{C}}(\\slashed{D}^{+}_{A_{0}}(M_{+}))+\\frac{\\operatorname{sign}(M)}{8}$$ is an invariant of the pair $(X,g_{X})$, which we denote by $w(X,g_{X},0)$.\n\\end{pro}\n\nTo end this subsection, let us consider the Atiyah-Patodi-Singer boundary problem on the end-periodic manifold $X_{+}$. This will be essential in our study of local structure of the Seiberg-Witten moduli space. To simplify our notation, we denote the following bundles over $X_{+}$\n$$iT^{*}X_{+}\\oplus S^{+}\\text{ and }i(\\mathds{R}\\oplus \\wedge_{+}^{2}T^{*}X_{+})\\oplus S^{-}$$ respectively by $E_{1}$ and $E_{2}$. We also write $F_{1}$ for the bundle $i(\\mathds{R}\\oplus T^{*}Y)\\oplus S$ over $Y$.\n\nRecall that $k$ is a fixed integer greater than $2$. First consider the linear operator\n$$D=D_{0}+K:L^{2}_{k,\\delta}(X_{+};E_{1})\\rightarrow L^{2}_{k-1,\\delta}(X_{+};E_{2}),$$\nwhere $D_{0}=(d^{*},d^{+},\\slashed{D}_{A_{0}})$ and $K$ is an operator that can be extended to a bounded operator\n$$\nK:L^{2}_{j,\\delta}(X_{+};E_{1})\\rightarrow L^{2}_{j,2\\delta}(X_{+};E_{2})\n$$\nfor any integer $j\\in [-k,k]$. 
Next, we define the restriction map\n$$\nr:L^{2}_{k,\\delta}(X_{+};E_{1})\\rightarrow L^{2}_{k-1\/2}(Y;F_{1})$$\n$$(a,\\phi)\\mapsto (\\langle a,\\vec{v}\\rangle,a|_{Y},\\phi|_{Y}).$$\n Let $H_{0}^{+}$ (resp. $H_{0}^{-}$) be the closure in $L^{2}_{1\/2}(Y;F_{1})$ of the span of the eigenvectors of the operator\n$$\nL_{0}: C^{\\infty}(Y;F_{1})\\rightarrow C^{\\infty}(Y;F_{1})\n$$\n$$\n(u,\\alpha,\\phi)\\mapsto (d^{*}\\alpha,*d\\alpha-du,\\slashed{D}_{A_{0}}\\phi)\n$$\nwith positive (resp. non-positive) eigenvalues. We write $\\Pi_{0}$ for the projection\n$$\nL^{2}_{1\/2}(Y;F_{1})\\rightarrow L^{2}_{1\/2}(Y;F_{1})\n$$\nwith image $H_{0}^{-}$ and kernel $H_{0}^{+}$. It also maps $L^{2}_{s}(Y;F_{1})$ to $L^{2}_{s}(Y;F_{1})$ for all $s$. Consider another projection\n$$\n\\Pi: L^{2}_{1\/2}(Y;F_{1})\\rightarrow L^{2}_{1\/2}(Y;F_{1})$$\nsatisfying\n$$\\Pi (L^{2}_{s}(Y;F_{1}))\\subset L^{2}_{s}(Y;F_{1})$$ for any $s$. We say that $\\Pi$ and $\\Pi_{0}$ are $k$-commensurate if the difference $$\n\\Pi-\\Pi_{0}: L^{2}_{j-1\/2}(Y;F_{1})\\rightarrow L^{2}_{j-1\/2}(Y;F_{1})\n$$\nis a compact operator, for all $1\\leq j\\leq k$. We write $H^{-}$ for $\\operatorname{im}(\\Pi)\\subset L^{2}_{1\/2}(Y;F_{1})$ and $H^{+}$ for $\\operatorname{im}(1-\\Pi)\\subset L^{2}_{1\/2}(Y;F_{1}).$\n\\begin{pro}\\label{APS}\nLet $\\delta_{1},\\delta_{2}$ be the constants provided by\nLemma \\ref{half De rham complex} and Proposition \\ref{Dirac operator is Fredholm} respectively. Then for any $\\delta\\in (0,\\min(\\delta_{1},\\delta_{2}))$ and any $1\\leq j\\leq k$, the operator\n$$\nD\\oplus ((1-\\Pi)\\circ r): L^{2}_{j,\\delta}(X_{+};E_{1})\\rightarrow L^{2}_{j-1,\\delta}(X_{+};E_{2})\\oplus (H^{+}\\cap L^{2}_{j-1\/2}(Y;F_{1}))\n$$\nis Fredholm. In addition, if $u_{i}$ is a bounded sequence in $L^{2}_{j,\\delta}(X_{+};E_{1})$ and $Du_{i}$ is Cauchy in $L^{2}_{j-1,\\delta}(X_{+};E_{2})$, then $\\Pi\\circ r(u_{i})$ has a convergent subsequence in $L^{2}_{j-1\/2}(Y;F_{1})$. 
In particular, the maps $\\Pi\\circ r$ and $(1-\\Pi)\\circ r$, restricted to the kernel of $D$, are respectively compact and Fredholm.\n\\end{pro}\n\\begin{proof}\nWe consider the following two operators:\n\\begin{itemize}\n\\item The operator over $\\bar{M}$\n$$\n(d^{*},d^{+},\\slashed{D}_{A_{0}})\\oplus (1-\\Pi)\\circ r_{\\bar{M}}: L^{2}_{j}(\\bar{M})\\rightarrow L^{2}_{j-1}(\\bar{M})\\oplus (H^{+}\\cap L^{2}_{j-1\/2}(Y)),$$ where $r_{\\bar{M}}$ is the restriction map defined similarly to $r$;\n\\item The operator over $M_{+}$\n$$\n(d^{*},d^{+},\\slashed{D}_{A_{0}}): L^{2}_{j,\\delta}(M_{+})\\rightarrow L^{2}_{j-1,\\delta}(M_{+}).$$\n\\end{itemize}\n By Proposition 17.2.5 of the book, Lemma \\ref{half De rham complex} and Proposition \\ref{Dirac operator is Fredholm}, both of these operators are Fredholm. Note that they correspond to the operator $D_{0}\\oplus((1-\\Pi)\\circ r)$ on $X_{+}$. We can prove the Fredholm property of $D_{0}\\oplus((1-\\Pi)\\circ r)$ using a standard parametrix patching argument (see Page 245 of the book). Since the embedding $L^{2}_{j,2\\delta}\\rightarrow L^{2}_{j-1,\\delta}$ is compact, the operator $D\\oplus((1-\\Pi)\\circ r)$ is a compact perturbation of $D_{0}\\oplus((1-\\Pi)\\circ r)$, and we conclude that $D\\oplus((1-\\Pi)\\circ r)$ is also Fredholm. To prove the second part of the proposition, we multiply the sequence $\\{u_{i}\\}$ by a bump function $\\beta$ supported near $\\partial X_{+}$ and follow the argument on Page 304 of the book.\n\\end{proof}\n\\subsection{The invariant $\\lambda_{\\textnormal{SW}}(X)$} Now we review the definition of $\\lambda_{\\textnormal{SW}}(X)$. 
By \\cite[Lemma 2.1]{MRS}, for a generic pair $(g_{X},\\beta)$ with $\\beta\\in L^{2}_{k+1}(X;iT^{*}X)$, the blown-up Seiberg-Witten moduli space $\\mathcal{M}(X,g_{X},\\beta)$, consisting of the gauge equivalence classes of the triples $$(A,s,\\phi)\\in \\mathcal{A}_{k}(X)\\times \\mathds{R}_{\\geq 0}\\times L^{2}_{k}(X;S^{+}),\\ \\|\\phi\\|_{L^{2}}=1$$ that solve the Seiberg-Witten equation\n\\begin{equation}\\label{blown up seiberg-witten}\n\\left\\{\\begin{array} {cc}\n F_{A}^{+}-s^{2}(\\phi\\phi^{*})_{0}=d^{+}\\beta \\\\\n \\slashed{D}_{A}^{+}(X,g_{X})(\\phi)=0\n\\end{array}\\right.\n\\end{equation}\nis an oriented manifold of dimension $0$ and contains no reducible points (i.e. triples with $s=0$). We call such $(g_{X},\\beta)$ a regular pair. Now consider the end-periodic (perturbed) Dirac operator\n$$\n\\slashed{D}^{+}_{A_{0}}(M_{+},g_{M_{+}})+\\rho(\\beta'): L^{2}_{1}(M_{+};S^{+})\\rightarrow L^{2}(M_{+};S^{-}),\n$$\nwhere $\\beta'$ is an imaginary valued one-form on $M_{+}$ that equals the pullback of $\\beta$ when restricted to $X_{+}$. As proved in \\cite{MRS}, this operator is Fredholm and the quantity\n$$\n\\operatorname{ind}_{\\mathds{C}}(\\slashed{D}^{+}_{A_{0}}(M_{+},g_{M_{+}})+\\rho(\\beta'))+\\frac{\\operatorname{sign}(M)}{8}\n$$\nis an invariant of $(X,g_{X},\\beta)$, which we denote by $w(X,g_{X},\\beta)$.\n\\begin{thm}[\\cite{MRS}] The number $\n\\#\\mathcal{M}(X,g_{X},\\beta)-w(X,g_{X},\\beta)\n$\ndoes not depend on the choice of regular pair $(g_{X},\\beta)$ and hence is an invariant of the manifold $X$, which we define as $\\lambda_{\\textnormal{SW}}(X)$; moreover, the reduction of $\\lambda_{\\textnormal{SW}}(X)$ modulo $2$ is the Rohlin invariant of $X$.\n\\end{thm}\n\\begin{lem}\\label{casson for psc}\nSuppose $g_{X}$ is a metric with positive scalar curvature. 
Then the pair $(g_{X},0)$ is regular and $\\lambda_{\\textnormal{SW}}(X)=-w(X,g_{X},0)$.\n\\end{lem}\n\\begin{proof}\nThis is a simple consequence of the Weitzenb\\\"ock formula.\n\\end{proof}\n\n\\begin{lem}\\label{orientation reversal 2}\nSuppose $X$ admits a metric $g_{X}$ with positive scalar curvature. Then we have\n$\n\\lambda_{\\textnormal{SW}}(X)=-\\lambda_{\\textnormal{SW}}(-X)\n$.\n\\end{lem}\n\\begin{proof}\n By Lemma \\ref{casson for psc}, we have $\\lambda_{\\textnormal{SW}}(X)=-w(X,g_{X},0)$. Similarly, $\\lambda_{\\textnormal{SW}}(-X)=-w(-X,g_{X},0)$. Notice that $$\\operatorname{sign}(M)+\\operatorname{sign}(\\bar{M})=\\operatorname{sign}(M\\cup_{Y}\\bar{M})=0.$$ By an excision argument relating indices of the Dirac operator on $M_{+}\\cup \\bar{M}_{+}$ (where $\\bar{M}_{+}$ denotes the orientation reversal of $M_{+}$) and the Dirac operator on $(M\\cup_{Y}\\bar{M})\\cup \\tilde{X}$, we get\n\\begin{equation}\n\\begin{split}\nw(X,g_{X},0)+w(-X,g_{X},0)=&\\operatorname{ind}_{\\mathds{C}}\\slashed{D}^{+}_{A_{0}}(M_{+},g_{M_{+}})+\\operatorname{ind}_{\\mathds{C}}\\slashed{D}^{+}_{A_{0}}(\\bar{M}_{+},g_{M_{+}})\\\\=&\\operatorname{ind}_{\\mathds{C}}\\slashed{D}^{+}_{A_{0}}(\\tilde{X},g_{\\tilde{X}}),\n\\end{split}\n\\end{equation}\nwhere $$\\slashed{D}^{+}_{A_{0}}(\\tilde{X},g_{\\tilde{X}}):L^{2}_{1}(\\tilde{X};S^{+})\\rightarrow L^{2}(\\tilde{X};S^{-})$$\nis the (unperturbed) Dirac operator on $\\tilde{X}$. As in the proof of \\cite[Proposition 5.4]{MRS}, this operator has index $0$. Therefore, $w(X,g_{X},0)+w(-X,g_{X},0)=0$, which gives $\\lambda_{\\textnormal{SW}}(X)=-\\lambda_{\\textnormal{SW}}(-X)$.\n\\end{proof}\n\\begin{rmk}\nIt was conjectured in \\cite{MRS} that the relation $\n\\lambda_{\\textnormal{SW}}(X)=-\\lambda_{\\textnormal{SW}}(-X)$ holds for a general homology $S^{3}\\times S^{1}$ (without any assumption on the metric). This conjecture is still open. \\end{rmk}\n\n\\section{Gauge theory on end-periodic manifolds}\nIn this section, we study gauge theory on end-periodic manifolds. 
First, we will carefully set up the (blown-up) configuration space, the gauge group and the moduli spaces. Once this is done correctly, the arguments in Sections 24 and 25 of the book can be repeated without essential difficulty. For this reason, some proofs in this section will only be sketched and we refer to the book for complete details.\n\n\n\n Let $\\delta$ be a positive number smaller than $\\min(\\delta_{1},\\delta_{2})$, where $\\delta_{1},\\delta_{2}$ are the constants provided by Lemma \\ref{half De rham complex} and Proposition \\ref{Dirac operator is Fredholm} respectively. We let\n $$\n \\mathcal{A}_{k,\\delta}(X_{+})=\\{A_{0}+a| a\\in L^{2}_{k,\\delta}(X_{+};iT^{*}X_{+})\\}\n $$\n be the space of spin$^{\\text{c}}$ connections of class $L^{2}_{k,\\delta}$. The configuration spaces are defined as\n\\begin{equation}\\label{configuration space}\n\\begin{split}\n\\mathcal{C}_{k,\\delta}(X_{+})=\\mathcal{A}_{k,\\delta}(X_{+})\\times L^{2}_{k,\\delta}(X_{+};S^{+});\\\\\n\\mathcal{C}_{k,\\delta}^{\\sigma}(X_{+})=\\{(A,s,\\phi)| A\\in\\mathcal{A}_{k,\\delta}(X_{+}),\\phi\\in L_{k,\\delta}^{2}(X_{+};S^{+}), \\|\\phi\\|_{L^{2}}=1, s\\in \\mathds{R}_{\\geq 0}\\}.\\end{split}\n\\end{equation}\nIt is easy to see that $\\mathcal{C}_{k,\\delta}(X_{+})$ is a Hilbert manifold without boundary, while $\\mathcal{C}_{k,\\delta}^{\\sigma}(X_{+})$ is a Hilbert manifold with boundary. There is a map $\\boldsymbol{\\pi}:\\mathcal{C}_{k,\\delta}^{\\sigma}(X_{+})\\rightarrow \\mathcal{C}_{k,\\delta}(X_{+})$ given by\n$$\n\\boldsymbol{\\pi}(A,s,\\phi)=(A,s\\phi).\n$$\nNext, we define the gauge groups\n$$\n\\mathcal{G}^{0}_{k+1,\\delta}(X_{+})=\\{u:X_{+}\\rightarrow S^{1}| (1-u)\\in L^{2}_{k+1,\\delta} (X_{+};\\mathds{C}) \\};\n$$\n$$\n\\mathcal{G}_{k+1,\\delta}(X_{+})=\\mathcal{G}_{c}\\times \\mathcal{G}^{0}_{k+1,\\delta}(X_{+}),\n$$\nwhere $\\mathcal{G}_{c}\\cong S^{1}$ denotes the group of constant gauge transformations. 
Note that we impose the $L^{2}_{k+1,\\delta}$-topology on $\\mathcal{G}^{0}_{k+1,\\delta}(X_{+})$ and the product topology on $\\mathcal{G}_{k+1,\\delta}(X_{+})$. Using the equality $$1-uv=(1-u)+(1-v)-(1-u)(1-v)$$ together with the Sobolev multiplication theorem, one can prove that $\\mathcal{G}^{0}_{k+1,\\delta}$ (and hence $\\mathcal{G}_{k+1,\\delta}$) is a group. A standard argument (see \\cite{Taubes} and \\cite{Freed-Uhlenbeck} for the non-abelian case) shows that they are actually Hilbert Lie groups. The Lie algebra of $\\mathcal{G}_{k+1,\\delta}$ is given by \\begin{equation}\\label{lie algebra}T_{e}\\mathcal{G}_{k+1,\\delta}(X_{+})=\\mathds{R}\\oplus L^{2}_{k+1,\\delta}(X_{+};i\\mathds{R}).\\end{equation}\n\\begin{rmk}\nOur main concern will be the group $\\mathcal{G}_{k+1,\\delta}(X_{+})$, while the group $\\mathcal{G}^{0}_{k+1,\\delta}(X_{+})$ is introduced to streamline the arguments.\n\\end{rmk}\nThe actions of $\\mathcal{G}_{k+1,\\delta}(X_{+})$ on $\\mathcal{C}_{k,\\delta}(X_{+})$ and $\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$ are respectively given by\n$$\nu\\cdot (A,\\Phi)=(A-u^{-1}du,u\\Phi)$$ and $$u\\cdot (A,s,\\phi)=(A-u^{-1}du,s,u\\phi).\n$$\nNote that the latter action is free. We denote the quotient spaces by $\\mathcal{B}_{k,\\delta}(X_{+})$ and $\\mathcal{B}^{\\sigma}_{k,\\delta}(X_{+})$ respectively.\n\n\n\n\n\\begin{lem}\n$\\mathcal{B}^{\\sigma}_{k,\\delta}(X_{+})$ is Hausdorff.\n\\end{lem}\n\\begin{proof}\nBy a standard argument, the proof is reduced to the following claim:\n\n\\begin{claim} Suppose we have sequences $\\{u_{n}\\}\\subset \\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+}),\\ \\{g_{n}\\}\\subset \\mathcal{G}_{k+1,\\delta}(X_{+})$ such that $u_{n}\\rightarrow u_{\\infty}\\text{ and } g_{n}u_{n}\\rightarrow v_{\\infty}$\nfor some $u_{\\infty},v_{\\infty}\\in \\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$. 
Then we can find $g_{\\infty}\\in \\mathcal{G}_{k+1,\\delta}(X_{+})$ such that $g_{\\infty}u_{\\infty}=v_{\\infty}$.\\end{claim}\nTo prove the claim, we let $u_{n}=(A_{n},s_{n},\\phi_{n})$. Then both $A_{n}$ and $A_{n}-g_{n}^{-1}dg_{n}$ converge in the $L^{2}_{k,\\delta}$ norm, which implies that the sequence $\\{g_{n}^{-1}dg_{n}\\}$ is Cauchy in $L^{2}_{k,\\delta}(X_{+};iT^{*}X_{+})$. Let $g_{n}=e^{\\xi_{n}}$. Then $\\{d\\xi_{n}\\}$ is Cauchy in $L^{2}_{k,\\delta}(X_{+};iT^{*}X_{+})$. By Lemma \\ref{Taubes's lemma}, we can find numbers $\\bar{\\xi}_{n}\\in i\\mathds{R}$ such that $\\{\\xi_{n}-\\bar{\\xi}_{n}\\}$ is a Cauchy sequence in $L^{2}_{k+1,\\delta}(X_{+};i\\mathds{R})$. Using the fact that the exponential map\n$$\n L^{2}_{k+1,\\delta}(X_{+};i\\mathds{R})\\rightarrow \\mathcal{G}^{0}_{k+1,\\delta}(X_{+}):\\ \\xi\\mapsto e^{\\xi}\n$$\nis well defined and continuous (which is a consequence of the Sobolev multiplication theorem), we see that $\\{e^{\\xi_{n}-\\bar{\\xi}_{n}}\\}$ is a Cauchy sequence in $\\mathcal{G}^{0}_{k+1,\\delta}(X_{+})$.\n\nOn the other hand, by replacing $\\xi_{n}$ with $\\xi_{n}-2m_{n}\\pi i$ for suitable $m_{n}\\in \\mathds{Z}$, we can assume $\\bar{\\xi}_{n}\\in i[0,2\\pi)$. After passing to a subsequence, we may assume $\\bar{\\xi}_{n}$ converges to some number $\\bar{\\xi}_{\\infty}$, which implies $e^{\\bar{\\xi}_{n}}$ converges to $e^{\\bar{\\xi}_{\\infty}}$ as elements of $\\mathcal{G}_{c}$.\n\nNow we see that $g_{n}=e^{\\bar{\\xi}_{n}}\\cdot e^{\\xi_{n}-\\bar{\\xi}_{n}}$ has a subsequential limit $g_{\\infty}$ in $\\mathcal{G}_{k+1,\\delta}(X_{+})$. Since the action of $\\mathcal{G}_{k+1,\\delta}(X_{+})$ is continuous, we get $g_{\\infty}\\cdot u_{\\infty}=v_{\\infty}$.\\end{proof}\n\nNext, we define the local slice $\\mathcal{S}^{\\sigma}_{k,\\delta,\\gamma}$ of the gauge action at $\\gamma=(A_{0},s_{0},\\phi_{0})\\in \\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$. 
By differentiating the gauge group action, we obtain a map\n$$d^{\\sigma}_{\\gamma}:T_{e}\\mathcal{G}_{k+1,\\delta}(X_{+})\\rightarrow T_{\\gamma}\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$$\n$$\n\\xi\\mapsto(-d\\xi,0,\\xi\\phi_{0}).\n$$\nWe denote the image of $d^{\\sigma}_{\\gamma}$ by $\\mathcal{J}^{\\sigma}_{k,\\delta,\\gamma}$, which is the tangent space of the gauge orbit. To define its complement, we consider the subspace $\\mathcal{K}^{\\sigma}_{k,\\delta,\\gamma}\\subset T_{\\gamma}\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$ defined as the kernel of the operator (cf. formula (9.12) of the book)\n\\begin{equation}\\label{d-sigma-flat}\\begin{split}\nd^{\\sigma,\\flat}_{\\gamma}:L^{2}_{k,\\delta}(X_{+};iT^{*}X_{+})\\oplus \\mathds{R}\\oplus L^{2}_{k,\\delta}(X_{+};S^{+})\\rightarrow L^{2}_{k-1\/2}(Y;i\\mathds{R})\\oplus L^{2}_{k-1,\\delta}(X_{+};i\\mathds{R})\\\\\n(a,s,\\phi)\\mapsto (\\langle a, \\vec{v} \\rangle, -d^{*}a+is_{0}^{2}\\operatorname{Re}\\langle i\\phi_{0},\\phi\\rangle +i|\\phi_{0}|^{2}\\cdot \\int_{X_{+}} \\operatorname{Re}\\langle i\\phi_{0},\\phi\\rangle\\,d\\text{vol}).\n\\end{split}\\end{equation}\n\\begin{rmk}To motivate this construction, we note that when $s_{0}>0$, $\\mathcal{K}^{\\sigma}_{k,\\delta,\\gamma}$ is obtained by lifting the $L^{2}$-orthogonal complement of the tangent space of the gauge orbit (through $\\boldsymbol{\\pi}(\\gamma)$) in $\\mathcal{C}_{k,\\delta}(X_{+})$.\n\\end{rmk}\n\\begin{rmk}\nWe also note that in the book, the integral in the formula corresponding to (\\ref{d-sigma-flat}) is divided by the total volume of the $4$-manifold. 
However, this difference is not essential because the kernel is not affected.\n\\end{rmk}\n\\begin{lem}\\label{decomposition of the tangent space}\nFor any $\\gamma$, we have a decomposition $T_{\\gamma}\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})=\\mathcal{J}^{\\sigma}_{k,\\delta,\\gamma}\\oplus \\mathcal{K}^{\\sigma}_{k,\\delta,\\gamma}$.\n\\end{lem}\n\\begin{proof}\nWe want to show that for any $(a,s,\\phi)\\in T_{\\gamma}\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$, there exists a unique $\\xi\\in T_{e}\\mathcal{G}_{k+1,\\delta}(X_{+})$ such that\n$$(a,s,\\phi)-d^{\\sigma}_{\\gamma}\\xi\\in \\mathcal{K}^{\\sigma}_{k,\\delta,\\gamma}.$$\nThis is equivalent to the condition\n\\begin{equation}\\label{perturbed laplace}\nD\\xi=(\\langle a, \\vec{v}\\rangle,-is_{0}^{2}\\operatorname{Re}\\langle i\\phi_{0},\\phi\\rangle -i|\\phi_{0}|^{2}\\int_{X_{+}} \\operatorname{Re}\\langle i\\phi_{0},\\phi\\rangle d\\text{vol} +d^{*}a),\n\\end{equation}\nwhere the operator $$D:T_{e}\\mathcal{G}_{k+1,\\delta}(X_{+})\\rightarrow L^{2}_{k-1\/2}(Y;i\\mathds{R})\\oplus L^{2}_{k-1,\\delta}(X_{+};i\\mathds{R})$$\nis given by\n$$\n\\xi\\mapsto (\\langle d\\xi, \\vec{v}\\rangle, \\Delta \\xi +s_{0}^{2}|\\phi_{0}|^{2}\\xi +i |\\phi_{0}|^{2} \\int_{X_{+}} (-i\\xi)|\\phi_{0}|^{2}d\\text{vol}).\n$$\nNotice that the map\n$$\n\\xi \\mapsto s_{0}^{2}|\\phi_{0}|^{2}\\xi +i |\\phi_{0}|^{2} \\int_{X_{+}} (-i\\xi)|\\phi_{0}|^{2}d\\text{vol}\n$$\nactually factors through the space $L^{2}_{k,2\\delta}(X_{+};i\\mathds{R})$. Therefore, the operator $D$ is a compact perturbation of the operator $D'$ given by\n$$\n\\xi\\mapsto (\\langle d\\xi, \\vec{v}\\rangle, \\Delta \\xi).\n$$\nThe index of $D'$ (and hence of $D$) equals $0$ by Proposition \\ref{laplace equation} (iii); here the index is increased by $1$ relative to that proposition because the domain contains the additional summand $\\mathds{R}$ (see (\\ref{lie algebra})). As in the proof of Proposition 9.3.5 of the book, we can show that $D$ has trivial kernel using integration by parts. 
Therefore, $D$ is an isomorphism and (\\ref{perturbed laplace}) has a unique solution. \\end{proof}\n\\begin{rmk}\nThe integration by parts argument over the noncompact manifold $X_{+}$ is justified by the following fact (which can be proved using a bump function argument): For any $\\delta>0$ and $\\theta\\in L^{2}_{k,\\delta}(X_{+};\\wedge^{3}T^{*}X_{+})$, we have\n$$\n\\int_{X_{+}}d\\theta=\\int_{\\partial X_{+}}\\theta.\n$$\nIndeed, applying Stokes' theorem to $\\beta_{T}\\theta$ for cut-off functions $\\beta_{T}$ exhausting $X_{+}$ and letting $T\\rightarrow \\infty$, the error term $\\int_{X_{+}}d\\beta_{T}\\wedge\\theta$ tends to $0$ because of the exponential decay of $\\theta$.\n\\end{rmk}\nWe define the local slice $\\mathcal{S}^{\\sigma}_{k,\\delta,\\gamma}\\subset \\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$ (at $\\gamma$) as the set of points $(A,s,\\phi)$ satisfying\n$$\nd^{\\sigma,\\flat}_{\\gamma}(A-A_{0},s,\\phi)=0.\n$$\nBy Lemma 9.3.2 of the book, Lemma \\ref{decomposition of the tangent space} has the following corollary.\n\\begin{cor}\n$\\mathcal{B}^{\\sigma}_{k,\\delta}(X_{+})$ is a Hilbert manifold with boundary. For any $\\gamma\\in \\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$, there is an open neighborhood of $\\gamma$ in the slice\n$$\nU\\subset \\mathcal{S}^{\\sigma}_{k,\\delta,\\gamma}\n$$\nsuch that $U$ is mapped diffeomorphically onto its image under the natural projection from $\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$ to $\\mathcal{B}^{\\sigma}_{k,\\delta}(X_{+})$, and this image is an open neighborhood of $[\\gamma]$ in $\\mathcal{B}^{\\sigma}_{k,\\delta}(X_{+})$.\n\\end{cor}\n\n\nNow we study the Seiberg-Witten equations on the manifold $X_{+}$. Let $\\mathcal{V}^{\\sigma}_{k,\\delta}(X_{+})$ be the trivial bundle over $\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$ with fiber $L^{2}_{k-1,\\delta}(i\\mathfrak{su}(S^{+})\\oplus S^{-})$, where $\\mathfrak{su}(S^{+})$ denotes the bundle of skew-hermitian, trace-$0$ endomorphisms of $S^{+}$. 
We consider a smooth section $$\\mathfrak{F}^{\\sigma}:\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})\\rightarrow \\mathcal{V}^{\\sigma}_{k,\\delta}(X_{+})$$ given by\n$$\n\\mathfrak{F}^{\\sigma}(A,s,\\phi)=(\\frac{1}{2}\\rho(F^{+}_{A^{t}})-s^{2}(\\phi\\phi^{*})_{0},\\slashed{D}_{A}^{+}\\phi).\n$$\nThe zero locus of $\\mathfrak{F}^{\\sigma}$ describes the solutions of the (blown-up) Seiberg-Witten equations.\n\nTo obtain the transversality condition, we introduce a perturbation of $\\mathfrak{F}^{\\sigma}$. This is done in the same way as in the book: Recall that the boundary $\\partial X_{+}$ has a neighborhood $N$ which is isomorphic to $[0,3]\\times Y$ (with $\\{0\\}\\times Y$ identified with $\\partial X_{+}$). Pick two $3$-dimensional tame perturbations $\\mathfrak{q}$ and $\\mathfrak{p}_{0}$. We impose the following assumption on $\\mathfrak{q}$:\n\\begin{assum}\\label{3 dimensional perturbation}\n$\\mathfrak{q}$ is a nice perturbation with $\\operatorname{ht}(\\mathfrak{q})=-2w(X,g_{X},0)$. Such a perturbation exists by Proposition \\ref{height of nice perturbation}.\n\\end{assum}\nThese two perturbations induce, in a canonical way, $4$-dimensional perturbations $\\hat{\\mathfrak{q}}^{\\sigma},\\hat{\\mathfrak{p}}_{0}^{\\sigma}$ on $N$ (see Pages 153 and 155 of the book). Pick a cut-off function $\\beta$ that equals $1$ near $\\{0\\}\\times Y$ and equals $0$ near $\\{3\\}\\times Y$ and a bump function $\\beta_{0}$ supported in $(0,3)\\times Y$. 
Then the sum\n\\begin{equation}\\label{mixed perturbation}\n\\hat{\\mathfrak{p}}^{\\sigma}=\\beta\\cdot \\hat{\\mathfrak{q}}^{\\sigma}+\\beta_{0}\\cdot \\hat{\\mathfrak{p}}_{0}^{\\sigma}\n\\end{equation} is a section of $\\mathcal{V}^{\\sigma}_{k,\\delta}(X_{+})$ with the following property: $\\hat{\\mathfrak{p}}^{\\sigma}(A,s,\\phi)\\in L^{2}_{k-1,\\delta}(i\\mathfrak{su}(S^{+})\\oplus S^{-})$ is supported in $N$ and depends only on $(A|_{N},s,\\phi|_{N})$.\n\nWe denote by $\\mathfrak{p}$ the $4$-dimensional perturbation given by the section $\\hat{\\mathfrak{p}}^{\\sigma}$. Let $\\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}=\\mathfrak{F}^{\\sigma}+\\hat{\\mathfrak{p}}^{\\sigma}$.\nWe can define the moduli spaces $$\\mathcal{M}(X_{+})=\\{(A,s,\\phi)|\\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}(A,s,\\phi)=0\\}\/\\mathcal{G}_{k+1,\\delta}(X_{+})\\subset \\mathcal{B}_{k,\\delta}^{\\sigma}(X_{+})$$\n$$\\mathcal{M}^{\\text{red}}(X_{+})=\\{[(A,s,\\phi)]\\in \\mathcal{M}(X_{+})|\\ s=0 \\}$$\nas the sets of gauge equivalence classes of the solutions of the perturbed Seiberg-Witten equations. (For simplicity, we do not include $\\mathfrak{p}$ in our notation for the moduli spaces.)\n\\begin{lem}\nFor any choice of perturbations $\\mathfrak{q},\\mathfrak{p}_{0}$, the moduli space $\\mathcal{M}(X_{+})$ is always a Hilbert manifold with boundary $\\mathcal{M}^{\\operatorname{red}}(X_{+})$.\n\\end{lem}\n\\begin{proof}\nThe proof is essentially identical to that of Proposition 24.3.1 in the book: just replace the manifold $X$ there with $X_{+}$ and use weighted Sobolev spaces throughout the argument.\n\\end{proof}\nBecause of the unique continuation theorem (see Proposition 10.8.1 of the book), we have $\\phi|_{\\partial X_{+}}\\neq 0$ for any $[(A,s,\\phi)]\\in \\mathcal{M}(X_{+})$. 
Therefore, we have a well defined map\n\\begin{equation}\\label{restriction from the right}\nR_{-}:\\mathcal{M}(X_{+})\\rightarrow \\mathcal{B}_{k-1\/2}^{\\sigma}(Y)\n\\end{equation}\ngiven by\n$$\n(A,s,\\phi)\\mapsto (A|_{\\partial X_{+}},s\\|\\phi|_{\\partial X_{+}}\\|_{L^{2}},\\frac{\\phi|_{\\partial X_{+}}}{\\|\\phi|_{\\partial X_{+}}\\|_{L^{2}}}).\n$$\n\nNow we attach the cylindrical end $(-\\infty,0]\\times Y$ to $X_{+}$ and consider the Seiberg-Witten equations on the manifold $Z_{+}$. We define the configuration space as\n$$\\mathcal{C}_{k;\\text{loc},\\delta}(Z_{+})=\\{(A_{0}+a,\\Phi)| (a,\\Phi)\\in L^{2}_{k,\\text{loc}}(Z_{+};iT^{*}Z_{+}\\oplus S^{+}),\\ \\|(a,\\Phi)|_{X_{+}}\\|_{L^{2}_{k,\\delta}}<\\infty\\}$$\nand the gauge group as\n$$\n\\mathcal{G}_{k+1;\\text{loc},\\delta}(Z_{+})=\\{u:Z_{+}\\rightarrow S^{1}|\\ u\\in L^{2}_{k+1,\\text{loc}}(Z_{+};\\mathds{C}),\\ u|_{X_{+}}\\in \\mathcal{G}_{k+1,\\delta}(X_{+})\\}.\n$$\nNote that in the above definitions, we only impose the exponential decay condition over the periodic end. As before, the action of $\\mathcal{G}_{k+1;\\text{loc},\\delta}(Z_{+})$ on $\\mathcal{C}_{k;\\text{loc},\\delta}(Z_{+})$ is not free. Therefore, we need to blow up the configuration space. Since $\\mathcal{C}_{k;\\text{loc},\\delta}(Z_{+})$ is no longer a Banach manifold, the blown-up configuration space should be defined in the following manner: Let $\\mathds{S}$ be the topological quotient of the space\n$$\n\\{\\Phi\\in L^{2}_{k,\\text{loc}}(Z_{+};S^{+})|\\|\\Phi|_{X_{+}}\\|_{L^{2}_{k,\\delta}}<\\infty\\}\\setminus \\{0\\}\n$$\nby the action of $\\mathds{R}_{>0}$. 
The blown-up configuration space is defined as\n$$\n\\mathcal{C}^{\\sigma}_{k;\\text{loc},\\delta}(Z_{+})=\\{(A,\\Phi,\\phi)|(A,\\Phi)\\in\\mathcal{C}_{k;\\text{loc},\\delta}(Z_{+}),\\ \\phi \\in \\mathds{S},\\ \\Phi\\in \\mathds{R}_{\\geq 0}\\phi\\}.\n$$\nNow we define the blown-up quotient configuration space as $$\\mathcal{B}^{\\sigma}_{k;\\text{loc},\\delta}(Z_{+})=\\mathcal{C}^{\\sigma}_{k;\\text{loc},\\delta}(Z_{+})\/\\mathcal{G}_{k+1;\\text{loc},\\delta}(Z_{+}).$$\n\nThe bundle $\\mathcal{V}^{\\sigma}_{k;\\text{loc},\\delta}(Z_{+})$ and its section $\\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}(Z_{+})$ are defined similarly as in the book. The section $\\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}(Z_{+})$ is invariant under the action of $\\mathcal{G}_{k+1;\\text{loc},\\delta}(Z_{+})$. We omit the details here because the specific definition is not important for us. Just keep in mind that the perturbation equals $\\hat{\\mathfrak{q}}^{\\sigma}$ over the cylindrical end $Z$, equals $\\hat{\\mathfrak{p}}^{\\sigma}$ over $[0,3]\\times Y$ and equals $0$ on $Z_{+}\\setminus ((-\\infty,3]\\times Y)$. We call $(A,\\Phi,\\phi)$ a ``$Z_{+}$-trajectory'' if $\n \\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}(Z_{+})(A,\\Phi,\\phi)=0$. This is equivalent to the condition that $(A,\\Phi,\\phi)$ satisfies the blown-up perturbed Seiberg-Witten equations\n$$\n\\left\\{\\begin{array} {cc}\n F_{A}^{+}-(\\Phi\\Phi^{*})_{0}=\\hat{\\mathfrak{p}}^{0,\\sigma}_{Z_{+}}(A,\\Phi) \\\\\n \\slashed{D}_{A}^{+}\\phi=\\hat{\\mathfrak{p}}^{1,\\sigma}_{Z_{+}}(A,\\phi)\n\\end{array}\\right.\n$$\nwhere $\\hat{\\mathfrak{p}}^{0,\\sigma}_{Z_{+}}(A,\\Phi)$ and $\\hat{\\mathfrak{p}}^{1,\\sigma}_{Z_{+}}(A,\\phi)$ are certain perturbation terms supported on $(-\\infty,3]\\times Y$. The second equation should be thought of as a homogeneous equation in $\\phi$, i.e., both sides of the equation will be rescaled by the same factor as we change the representative of $\\phi$. 
By the unique continuation theorem, we have $\\phi|_{\\{t\\}\\times Y}\\neq 0$ for any $t\\leq 0$. As a result, the triple\n$\n(A|_{\\{t\\}\\times Y},\\|\\Phi|_{\\{t\\}\\times Y}\\|_{L^{2}},\\tfrac{\\phi|_{\\{t\\}\\times Y}}{\\|\\phi|_{\\{t\\}\\times Y}\\|_{L^{2}}})\n$\ngives a point of $\\mathcal{C}^{\\sigma}_{k-1\/2}(Y)$, which we define to be the restriction $(A,\\Phi,\\phi)|_{\\{t\\}\\times Y}$. By restricting to $(-\\infty,0]\\times Y$, a gauge equivalence class $[(A,\\Phi,\\phi)]\\in \\mathcal{B}^{\\sigma}_{k;\\text{loc},\\delta}(Z_{+})$ of a $Z_{+}$-trajectory gives a path $(-\\infty,0]\\rightarrow \\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$.\n\nLet $[\\mathfrak{b}]\\in \\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$ be a critical point of $\\mathfrak{F}^{\\sigma}_{\\mathfrak{q}}(Y)$. We consider the moduli space $$\\mathcal{M}([\\mathfrak{b}],Z_{+})=\\{[\\gamma]\\in \\mathcal{B}^{\\sigma}_{k;\\text{loc},\\delta}(Z_{+})|\\ \\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}(Z_{+})(\\gamma)=0,\\ \\mathop{\\lim}\\limits_{t\\rightarrow -\\infty}[\\gamma|_{\\{t\\}\\times Y}]=[\\mathfrak{b}]\\}.$$\nIt consists of $Z_{+}$-trajectories that are asymptotic to $[\\mathfrak{b}]$ over the cylindrical end. By restricting to the submanifolds $Z$ and $X_{+}$, we get a map\n\\begin{equation}\\label{restriction}\n\\rho:\\mathcal{M}([\\mathfrak{b}],Z_{+})\\rightarrow \\mathcal{M}([\\mathfrak{b}],Z)\\times \\mathcal{M}(X_{+}).\n\\end{equation}\nHere $\\mathcal{M}([\\mathfrak{b}],Z)$ denotes the moduli space of Seiberg-Witten half-trajectories with limit $[\\mathfrak{b}]$. 
In other words, $\\mathcal{M}([\\mathfrak{b}],Z)$ consists of gauge equivalence classes of paths\n$$\n\\gamma:(-\\infty,0]\\rightarrow \\mathcal{C}^{\\sigma}_{k-1\/2}(Y)\\text{ with }\\frac{d}{dt}\\gamma(t)+\\mathfrak{F}^{\\sigma}_{\\mathfrak{q}}(Y)(\\gamma(t))=0,\\ \\mathop{\\lim}_{t\\rightarrow -\\infty}\\gamma(t)=\\mathfrak{b}.\n$$\nJust like $\\mathcal{M}(X_{+})$, the moduli space $\\mathcal{M}([\\mathfrak{b}],Z)$ is always a Hilbert manifold with boundary $\\mathcal{M}^{\\text{red}}([\\mathfrak{b}],Z)$ (the moduli space of reducible half-trajectories) for an arbitrary perturbation. Note that we have a well defined restriction map\\begin{equation}\\label{restriction from the left} R_{+}: \\mathcal{M}([\\mathfrak{b}],Z)\\rightarrow \\mathcal{B}^{\\sigma}_{k-1\/2}(Y) \\text{ given by }[\\gamma]\\mapsto [\\gamma(0)].\\end{equation}\nThe proof of the following lemma is identical to that of Lemma 24.2.2 in the book.\n\\begin{lem}\\label{fiber sum}\nThe map $\\rho$ is a homeomorphism from $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ to its image, which equals the fiber product $\\operatorname{Fib}(R_{+},R_{-})$. (The maps $R_{\\pm}$ are defined in (\\ref{restriction from the left}) and (\\ref{restriction from the right}) respectively.)\n\\end{lem}\n\nNow we start discussing the regularity of the moduli spaces. Recall that for any point $[\\mathfrak{c}]\\in \\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$, we have a decomposition\n$$\nT_{[\\mathfrak{c}]}\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)\\cong \\mathcal{K}^{+}_{\\mathfrak{c}}\\oplus \\mathcal{K}^{-}_{\\mathfrak{c}}\n$$\ngiven by the spectral decomposition of the Hessian operator $\\text{Hess}^{\\sigma}_{\\mathfrak{q}}(\\mathfrak{c})$ (see Page 313 of the book).\n\\begin{lem}\\label{APS for moduli space}\nLet $([\\gamma_{1}],[\\gamma_{2}])\\in \\operatorname{Fib}(R_{+},R_{-})$ and let $[\\mathfrak{c}]$ be the common restriction of $[\\gamma_{j}]\\ (j=1,2)$ to the boundary $Y$. 
Denote by $\\pi$ the projection from $T_{[\\mathfrak{c}]}\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$ to $\\mathcal{K}^{-}_{\\mathfrak{c}}$ with kernel $\\mathcal{K}^{+}_{\\mathfrak{c}}$. Then we have the following results.\n\\begin{enumerate}[(i)]\n\\item The linear operators $$(1-\\pi)\\circ\\mathcal{D}R_{+}:T_{[\\gamma_{1}]}\\mathcal{M}([\\mathfrak{b}],Z)\\rightarrow \\mathcal{K}_{\\mathfrak{c}}^{+} \\text{ and }\\pi\\circ\\mathcal{D}R_{+}:T_{[\\gamma_{1}]}\\mathcal{M}([\\mathfrak{b}],Z)\\rightarrow \\mathcal{K}_{\\mathfrak{c}}^{-}$$\nare respectively compact and Fredholm.\n\\item The linear operators $$(1-\\pi)\\circ\\mathcal{D}R_{-}:T_{[\\gamma_{2}]}\\mathcal{M}(X_{+})\\rightarrow \\mathcal{K}_{\\mathfrak{c}}^{+} \\text{ and }\\pi\\circ\\mathcal{D}R_{-}:T_{[\\gamma_{2}]}\\mathcal{M}(X_{+})\\rightarrow \\mathcal{K}_{\\mathfrak{c}}^{-}$$\nare respectively Fredholm and compact.\n\\item The linear operator $$\\mathcal{D}R_{+}+\\mathcal{D}R_{-}:T_{[\\gamma_{1}]}\\mathcal{M}([\\mathfrak{b}],Z)\\oplus T_{[\\gamma_{2}]}\\mathcal{M}(X_{+})\\rightarrow T_{[\\mathfrak{c}]}\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$$\nis Fredholm.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nPart (i) was proved in Theorem 17.3.2 of the book. With Proposition \\ref{APS} in place, (ii) can be proved similarly (see also Proposition 24.3.2 of the book). (iii) is directly implied by (i) and (ii).\n\\end{proof}\n\nThe following definition is parallel to Definition 24.4.2 of the book.\n\n\\begin{defi}\nLet $[\\gamma]\\in \\mathcal{M}([\\mathfrak{b}],Z_{+})$. If $[\\gamma]$ is irreducible, we say the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ is regular at $[\\gamma]$ if the maps of Hilbert manifolds\n$$ R_{+}:\\mathcal{M}([\\mathfrak{b}],Z)\\rightarrow \\mathcal{B}^{\\sigma}_{k-1\/2}(Y)\\text{ and } R_{-}:\\mathcal{M}(X_{+})\\rightarrow \\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$$are transverse at $\\rho([\\gamma])$. 
If $[\\gamma]$ is reducible, we say the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ is regular at $[\\gamma]$ if the maps of Hilbert manifolds\n$$ R_{+}:\\mathcal{M}^{\\operatorname{red}}([\\mathfrak{b}],Z)\\rightarrow \\partial\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)\\text{ and } R_{-}:\\mathcal{M}^{\\operatorname{red}}(X_{+})\\rightarrow \\partial\\mathcal{B}^{\\sigma}_{k-1\/2}(Y)$$are transverse at $\\rho([\\gamma])$ (see (\\ref{restriction})). We say the moduli space is regular if it is regular at all points.\n\\end{defi}\nRecall that the perturbation $\\mathfrak{p}$ on $Z_{+}$ is determined by a pair of $3$-dimensional perturbations $(\\mathfrak{q},\\mathfrak{p}_{0})$ (see (\\ref{mixed perturbation})), where $\\mathfrak{q}$ is a nice perturbation that is fixed throughout our argument (see Assumption \\ref{3 dimensional perturbation}). We want to obtain the transversality condition by varying the second perturbation $\\mathfrak{p}_{0}$. To do this, let $\\mathcal{P}(Y)$ be the large Banach space of $3$-dimensional tame perturbations provided by Theorem 11.6.1 of the book. 
We have the following result.\n\\begin{pro}\\label{transversality}\nThere exists a residual subset $U_{1}$ of $\\mathcal{P}(Y)$ such that for any $\\mathfrak{p}_{0}\\in U_{1}$, the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ corresponding to $(\\mathfrak{q},\\mathfrak{p}_{0})$ is regular for any critical point $[\\mathfrak{b}]\\in\\mathfrak{C}$.\n\\end{pro}\n\\begin{proof}\nThe proof follows the standard argument as in the proof of Proposition 24.4.7 of the book: We consider the parametrized moduli space\n$$\\mathfrak{M}(X_{+})\\subset \\mathcal{B}^{\\sigma}_{k,\\delta}(X_{+})\\times \\mathcal{P}(Y)$$\n$$\n\\mathfrak{M}(X_{+})=\\{(A,s,\\phi,\\mathfrak{p}_{0})|\\,\\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}=0\\}\/\\mathcal{G}_{k+1,\\delta}(X_{+}).\n$$\nFor any $[\\mathfrak{b}]\\in \\mathfrak{C}$, we can prove that the map\n$$\nR_{+}\\times \\mathfrak{R}_{-}: \\mathcal{M}([\\mathfrak{b}],Z)\\times \\mathfrak{M}(X_{+})\\rightarrow \\mathcal{B}_{k-1\/2}^{\\sigma}(Y)\\times \\mathcal{B}_{k-1\/2}^{\\sigma}(Y)\n$$\nis transverse to the diagonal by the same argument as in the book. Here the map $\\mathfrak{R}_{-}$ is defined similarly to $R_{-}$ (but with a larger domain). Now we apply the Sard-Smale lemma (Lemma 12.5.1 of the book) to finish the proof. We note that Lemma \\ref{APS for moduli space} (iii) is used essentially in this last step.\n\\end{proof}\n\n\nThe proof of the following proposition is by a standard transversality argument and we omit it. (Compare Proposition 24.4.3 of the book.)\n\\begin{pro}\\label{moduli space is a manifold}\nSuppose the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ is regular and non-empty. 
Then the moduli space is\n\\begin{enumerate}[(i)]\n\\item a smooth manifold consisting only of irreducibles, if $[\\mathfrak{b}]$ is irreducible;\n\\item a smooth manifold consisting only of reducibles, if $[\\mathfrak{b}]$ is reducible and boundary-stable;\n\\item a smooth manifold with (possibly empty) boundary, if $[\\mathfrak{b}]$ is reducible and boundary-unstable.\n\\end{enumerate}\nIn the last case, the boundary consists of the reducible elements of the moduli space (i.e., we have $\\partial\\mathcal{M}([\\mathfrak{b}],Z_{+})=\\mathcal{M}^{\\text{red}}([\\mathfrak{b}],Z_{+})$).\n\\end{pro}\nRecall that we associated a rational number $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])$ to each critical point $[\\mathfrak{b}]$. We have the following result.\n\\begin{pro}\\label{expected dimension}\nSuppose the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ is regular. Then the moduli space is\n\\begin{enumerate}[(i)]\n\\item the empty set, if $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])+2w(X,g_{X},0)<0$;\n\\item a manifold of dimension $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])+2w(X,g_{X},0)$, if $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])+2w(X,g_{X},0)\\geq 0$.\n\\end{enumerate}\n\\end{pro}\n\\begin{proof}\nWe just need to show that the expected dimension of $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ (which we denote by $\\operatorname{gr}(Z_{+};[\\mathfrak{b}])$) can be expressed as\n $$\\operatorname{gr}(Z_{+};[\\mathfrak{b}])=\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])+2w(X,g_{X},0).$$ This can be done by a direct computation, but we follow an alternative argument here. Recall that $M$ is a spin manifold bounded by $(Y,\\mathfrak{s})$ with $b_{1}(M)=0$. We let $M^{*}=M\\cup _{Y}([0,+\\infty)\\times Y)$. 
As discussed before, the $M$-grading of $[\\mathfrak{b}]$ (which we denoted by $\\operatorname{gr}(M;[\\mathfrak{b}])$) equals the expected dimension of the moduli space consisting of solutions on $M^{*}$ that are asymptotic to $[\\mathfrak{b}]$. One can deform the linearized Seiberg-Witten equations over the manifold $M^{*}\\cup Z_{+}$ first to the corresponding equations over the manifold $$M\\cup_{Y}([-T,T]\\times Y)\\cup_{Y}X_{+}\\text{ with }T\\gg 0$$\n and then to the manifold $M_{+}$. We see that the grading is additive in the sense that the sum $\\operatorname{gr}(M;[\\mathfrak{b}])+\\operatorname{gr}(Z_{+};[\\mathfrak{b}])$ equals the expected dimension of $\\mathcal{M}(M_{+})$, the moduli space consisting of gauge equivalence classes of solutions over $M_{+}$ that decay exponentially on the periodic end. The linear operator that determines the local structure of $\\mathcal{M}(M_{+})$ is a compact perturbation of the operator\n $$\n L^{2}_{k,\\delta}(M_{+};iT^{*}M_{+}\\oplus S^{+})\\rightarrow L^{2}_{k-1,\\delta}(M_{+};i\\mathds{R}\\oplus i\\wedge^{2}_{+}T^{*}M_{+}\\oplus S^{-})\n $$\n $$\n (a,\\Phi)\\mapsto (d^{*}a,d^{+}a,\\slashed{D}_{A_{0}}\\Phi).\n $$\n By Lemma \\ref{half De rham complex} and Proposition \\ref{Dirac operator is Fredholm}, the (real) index of this operator equals\n $$\n -\\frac{\\text{sign}(M)}{4}+2w(X,g_{X},0)+b^{+}_{2}(M)-1.\n $$\n By (\\ref{absolute grading}), this implies\n \\begin{equation}\\begin{split}\\operatorname{gr}(Z_{+};[\\mathfrak{b}])&=-\\frac{\\text{sign}(M)}{4}+2w(X,g_{X},0)+b^{+}_{2}(M)-1-\\operatorname{gr}(M;[\\mathfrak{b}])\\\\&=\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])+2w(X,g_{X},0).\\end{split}\\end{equation}\n \\end{proof}\n\nRecall that we denote the lowest boundary-stable reducible critical point by $[\\mathfrak{a}_{0}]$. 
Recall that the absolute grading of $[\\mathfrak{a}_{0}]$ equals the height of the nice perturbation $\\mathfrak{q}$, which has been chosen to be $-2w(X,g_{X},0)$ (see Assumption \\ref{3 dimensional perturbation}). By Proposition \\ref{moduli space is a manifold} and Proposition \\ref{expected dimension}, for any $\\mathfrak{p}_{0}\\in U_{1}$ (the residual set provided by Proposition \\ref{transversality}), the moduli space $\\mathcal{M}([\\mathfrak{a}_{0}],Z_{+})$ consists of isolated points, all of which are reducible because $[\\mathfrak{a}_{0}]$ is boundary-stable. The moduli spaces $\\mathcal{M}([\\mathfrak{a}_{i}],Z_{+})\\ (i< 0)$ are all empty.\n\\begin{pro}\\label{only one reducible}\nThere exists an open neighborhood $U_{2}\\subset \\mathcal{P}(Y)$ of $0$ such that for any $\\mathfrak{p}_{0}\\in U_{2}$, the moduli space $\\mathcal{M}([\\mathfrak{a}_{0}],Z_{+})$ corresponding to $(\\mathfrak{q},\\mathfrak{p}_{0})$ contains a single point.\n\\end{pro}\n\\begin{proof}\nSince the moduli space only consists of reducibles, we do not need to consider the nice perturbation $\\mathfrak{q}$, because it vanishes on the reducibles. Moreover, we can describe the moduli space explicitly: each gauge equivalence class of solutions of the downstairs equation\n\\begin{equation}\\label{downstairs reducible equation}\nd^{+}a-\\beta_{0}\\cdot \\rho^{-1}(\\mathfrak{\\hat{p}}^{0}_{0}(A_{0}+a,0))=0,\\ a\\in L^{2}_{k+1;\\text{loc},\\delta}(Z_{+};i\\mathds{R})\n\\end{equation}\ncontributes a copy of $\\mathds{CP}^{0}$ to $\\mathcal{M}([\\mathfrak{a}_{0}],Z_{+})$. (Here $\\beta_{0}$ is the bump function in (\\ref{mixed perturbation}) and $\\hat{\\mathfrak{p}}^{0}_{0}$ is a component of the $4$-dimensional, downstairs perturbation $\\hat{\\mathfrak{p}}_{0}$ induced by the $3$-dimensional perturbation $\\mathfrak{p}_{0}$.) 
We want to show that when $\\mathfrak{p}_{0}$ (hence $\\hat{\\mathfrak{p}}_{0}^{0}$) is small enough, (\\ref{downstairs reducible equation}) has exactly one solution up to gauge equivalence. By the exponential decay result Theorem 13.3.5 of the book (applied to $a|_{Z}$) and Lemma \\ref{half De rham complex} (i), we see that each equivalence class contains a unique representative satisfying\n$$\n\\|a\\|_{L^{2}_{k;-\\delta,\\delta}}<\\infty,\\ d^{*}a=0.\n$$\nIn other words, we just need to prove that (\\ref{downstairs reducible equation}) has a unique solution satisfying the above gauge fixing condition when the perturbation is small. To do this, we consider the map\n$$\n\\mathfrak{P}:\\mathcal{P}(Y)\\times L^{2}_{k;-\\delta,\\delta}(Z_{+};iT^{*}Z_{+})\\rightarrow V\\oplus L^{2}_{k;-\\delta,\\delta}(Z_{+}; i\\wedge^{2}_{+}T^{*}Z_{+}),\n$$\nwhere $V=\\{\\xi\\in L^{2}_{k,-\\delta,\\delta}(Z_{+};i\\mathds{R})|\\int_{Z_{+}}\\xi d\\text{vol}=0\\}$, given by\n$$\n(\\mathfrak{p}_{0},a)\\mapsto (d^{*}a,d^{+}a-\\beta_{0}\\cdot \\rho^{-1}(\\mathfrak{\\hat{p}}_{0}^{0}(A_{0}+a,0))).\n$$\nBy Lemma \\ref{half De rham complex} (iii), the restriction of $\\mathfrak{P}$ to $\\{0\\}\\times L^{2}_{k;-\\delta,\\delta}(Z_{+};iT^{*}Z_{+})$ is a (linear) isomorphism. Therefore, by the implicit function theorem, there exist a neighborhood $U$ of $0\\in L^{2}_{k;-\\delta,\\delta}(Z_{+};iT^{*}Z_{+})$ and a neighborhood $U'$ of $0\\in \\mathcal{P}(Y)$ with the following property: for any $\\mathfrak{p}_{0}\\in U'$, there exists a unique solution of the equation $\\mathfrak{P}(\\mathfrak{p}_{0},a)=0$ with $a\\in U$. Now we claim that we can find another neighborhood $U''$ of $0\\in \\mathcal{P}(Y)$\nsuch that for any $\\mathfrak{p}_{0}\\in U''$, $\\mathfrak{P}(\\mathfrak{p}_{0},a)=0$ implies $a\\in U$. This will finish the proof because we can set $U_{2}=U'\\cap U''$. Now we prove our claim by contradiction. 
Suppose there exist $\\mathfrak{p}_{0,n}\\rightarrow 0$ and $a_{n}\\notin U$ such that $\\mathfrak{P}(\\mathfrak{p}_{0,n},a_{n})=0$ for each $n$. Integrating by parts on $(-\\infty,0]\\times Y$ and $X_{+}\\setminus [3,+\\infty)$ respectively, we see that\n$$\n\\operatorname{CSD}((A_{0}+a_{n})|_{Y\\times\\{0\\}},0)<0,\\ \\operatorname{CSD}((A_{0}+a_{n})|_{Y\\times\\{3\\}},0)>0.\n$$\nUsing these energy estimates, one can easily adapt the proof of Theorem 10.7.1 of the book (from the single perturbation case to the case of a convergent sequence of perturbations) and prove that: after passing to a subsequence and applying suitable gauge transformations $u_{n}$, the sequence $\nu_{n}\\cdot((A_{0}+a_{n})|_{Y\\times[1,2]},0)\n$ converges smoothly. Notice that the gauge invariant term $\\beta_{0}\\cdot \\rho^{-1}(\\mathfrak{\\hat{p}}_{0,n}^{0}(A_{0}+a_{n},0))$ is supported on $Y\\times [1,2]$ and only depends on $(A_{0}+a_{n})|_{Y\\times[1,2]}$ (because the bump function $\\beta_{0}$ is supported on $Y\\times[1,2]$). We see that $$\\|(d^{*}a_{n},d^{+}a_{n})\\|_{L^{2}_{k-1;-\\delta,\\delta}} =\\|\\beta_{0}\\cdot \\rho^{-1}(\\mathfrak{\\hat{p}}_{0,n}^{0}(A_{0}+a_{n},0))\\|_{L^{2}_{k-1;-\\delta,\\delta}}\\rightarrow 0\\text{ as }n\\rightarrow\\infty$$ since $\\mathfrak{p}_{0,n}\\rightarrow 0$. By Lemma \\ref{half De rham complex} (iii) again, we get $\\|a_{n}\\|_{L^{2}_{k;-\\delta,\\delta}}\\rightarrow 0$. This contradicts our assumption $a_{n}\\notin U$ and completes our proof.\\end{proof}\n\\begin{assum}\\label{4 dim perturbation}\nFrom now on, we fix a choice of perturbation $\\mathfrak{p}_{0}\\in U_{1}\\cap U_{2}$, where $U_{1}$, $U_{2}$ are subsets of $\\mathcal{P}(Y)$ provided by Proposition \\ref{transversality} and Proposition \\ref{only one reducible} respectively.\n\\end{assum}\nAs in the cylindrical case, a sequence of $Z_{+}$-trajectories (even with uniformly bounded energy) can converge to a broken trajectory. 
For this reason, we have to introduce the moduli space of broken trajectories before discussing the compactness property. Although our construction can be generalized to moduli spaces of higher dimension without essential difficulty, we focus on $1$-dimensional moduli spaces for simplicity. This will be enough for our application.\n\nWe start by recalling the ``$\\tau$-model'' for the blow-up. (See Section 6.3 of the book for details.) Let $I\\subset \\mathds{R}$ be an interval. Denote the product manifold $I\\times Y$ by $Z_{I}$. There are two cases:\n\\begin{itemize}\n\\item If $I$ is compact, we define the configuration space \\begin{equation}\\label{tau-module}\\begin{split}\n\\mathcal{C}^{\\tau}_{k}(Z_{I})=&\\{(A_{0}+a,s,\\phi)|(a,\\phi)\\in L^{2}_{k}(Z_{I};iT^{*}Z_{I}\\oplus S^{+}),\\ s\\in L^{2}_{k}(I;\\mathds{R})\\\\& \\text{satisfies }s(t)\\geq 0,\\ \\|\\phi|_{Y\\times \\{t\\}}\\|_{L^{2}(Y)}=1\\text{ for any }t\\in I \\}\\end{split}\\end{equation}\nThe gauge group $\n\\mathcal{G}_{k+1}(Z_{I})$ acts on $\\mathcal{C}^{\\tau}_{k}(Z_{I})$ as\n$$\nu\\cdot(A_{0}+a,s,\\phi)= (A_{0}+a-u^{-1}du,s, u\\phi).\n$$ We denote the quotient space by $\\mathcal{B}^{\\tau}_{k}(Z_{I})$.\n\\item If $I$ is non-compact, we define $\\mathcal{C}^{\\tau}_{k,\\text{loc}}(Z_{I})$ by replacing $L^{2}_{k}$ with $L^{2}_{k,\\text{loc}}$ in (\\ref{tau-module}). We let $\\mathcal{B}^{\\tau}_{k,\\text{loc}}(Z_{I})=\\mathcal{C}^{\\tau}_{k,\\text{loc}}(Z_{I})\/\\mathcal{G}_{k+1,\\text{loc}}(Z_{I})$.\n\\end{itemize}\nIn both cases, we impose the quotient topology on the quotient configuration space. For any $[\\mathfrak{b}],[\\mathfrak{b}']\\in \\mathfrak{C}$, the moduli space $\\mathcal{M}([\\mathfrak{b}],[\\mathfrak{b}'])$ is a subset of $\\mathcal{B}^{\\tau}_{k,\\text{loc}}(Z_{(-\\infty,+\\infty)})$ and consists of the non-constant Seiberg-Witten trajectories going from $[\\mathfrak{b}]$ to $[\\mathfrak{b}']$. 
We let $\\breve{\\mathcal{M}}([\\mathfrak{b}],[\\mathfrak{b}'])=\\mathcal{M}([\\mathfrak{b}],[\\mathfrak{b}'])\/\\mathds{R}$, where $\\mathds{R}$ acts as translation (reparametrization).\n\nNow we define the moduli space of broken trajectories. Let $[\\mathfrak{b}_{0}]$ be a critical point with $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}_{0}])=-2w(X,g_{X},0)+1$. By our assumption about $\\operatorname{ht}(\\mathfrak{q})$, $[\\mathfrak{b}_{0}]$ must be irreducible. We consider the set\n$$\n\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})=\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})\\cup(\\mathop{\\cup}\\limits_{[\\mathfrak{b}]\\in \\mathfrak{C}}\\breve{\\mathcal{M}}([\\mathfrak{b}_{0}],[\\mathfrak{b}])\\times \\mathcal{M}([\\mathfrak{b}],Z_{+})).\n$$\nBy our regularity assumption, $\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$ is a $1$-dimensional manifold (without boundary). The set $\\breve{\\mathcal{M}}([\\mathfrak{b}_{0}],[\\mathfrak{b}])\\times \\mathcal{M}([\\mathfrak{b}],Z_{+})$ is nonempty only if $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])=-2w(X,g_{X},0)$, in which case it is a discrete set.\n\nTo define the topology on $\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$, we need to specify a neighborhood basis for each point. For those points in $\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$, we just use their neighborhood bases inside $\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$. For a broken trajectory $([\\gamma_{-1}],[\\gamma_{0}])\\in \\breve{\\mathcal{M}}([\\mathfrak{b}_{0}],[\\mathfrak{b}])\\times \\mathcal{M}([\\mathfrak{b}],Z_{+})$, we let $[\\gamma_{-1}]$ be represented by a parametrized trajectory\n$$\n\\gamma_{-1}\\in \\mathcal{M}([\\mathfrak{b}_{0}],[\\mathfrak{b}]).\n$$\nLet $U_{0}$ be a neighborhood of $[\\gamma_{0}]$ in $\\mathcal{B}^{\\sigma}_{k,\\operatorname{loc},\\delta}(Z_{+})$, let $I\\subset\\mathds{R}$ be a compact interval, and let $U_{-1}\\subset \\mathcal{B}^{\\tau}_{k}(Z_{I})$ be a neighborhood of $[\\gamma_{-1}|_{I}]$. 
For any $T\\in \\mathds{R}_{>0}$ with the property that $I-T$ (the translation of $I$ by $-T$) is contained in $\\mathds{R}_{\\leq 0}$, we define $\\Omega(U_{-1},U_{0},T)$ to be the subset of $\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$ consisting of the broken $Z_{+}$-trajectory $([\\gamma_{-1}],[\\gamma_{0}])$ and the (unbroken) $Z_{+}$-trajectories $[\\gamma] \\in \\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$ satisfying the following conditions:\n\\begin{itemize}\n\\item $[\\gamma]\\in U_{0};$\n\\item There exists $T_{-1}>T$ such that $[\\tau_{T_{-1}}(\\gamma|_{I-T_{-1}})]\\in U_{-1}$, where $\\tau_{T_{-1}}(\\gamma|_{I-T_{-1}})$ denotes the translation of $\\gamma|_{I-T_{-1}}$ by $T_{-1}$ (in the positive direction).\n\\end{itemize}\nThe sets of the form $\\Omega(U_{-1},U_{0},T)$ then form a neighborhood basis for $([\\gamma_{-1}],[\\gamma_{0}])$.\nWith the topology on $\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$ defined, we have the following gluing theorem, whose proof is a word-by-word translation of the proof of Theorem 24.7.2 in the book, so we omit it.\n\n\\begin{thm}\\label{gluing}\nFor each broken $Z_{+}$-trajectory $([\\gamma_{-1}],[\\gamma_{0}])\\in \\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$, we can find its open neighborhood $U$ with $U\\setminus ([\\gamma_{-1}],[\\gamma_{0}])\\subset\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$ and a homeomorphism $f: (0,1]\\times ([\\gamma_{-1}],[\\gamma_{0}]) \\rightarrow U$ that sends $\\{1\\}\\times ([\\gamma_{-1}],[\\gamma_{0}]) $ to $([\\gamma_{-1}],[\\gamma_{0}])\\in U$.\n\\begin{rmk}\nTheorem 24.7.2 in the book actually contains two parts: the boundary obstructed case and the boundary unobstructed case. The second case is much easier than the first one. Theorem \\ref{gluing} here corresponds to the second case with the additional assumption that the moduli space is $1$-dimensional and the boundary of the $4$-manifold is connected. 
This further simplifies the statement of the result.\n\\end{rmk}\n\\end{thm}\n\nNow we consider the orientation of the moduli spaces. As mentioned in Subsection 2.2, a choice of $\\chi([\\mathfrak{b}])$ in the orientation set $\\Lambda([\\mathfrak{b}])$ for each $[\\mathfrak{b}]$ canonically induces an orientation of the moduli space $\\breve{\\mathcal{M}}([\\mathfrak{b}],[\\mathfrak{b}'])$ for any critical points $[\\mathfrak{b}],[\\mathfrak{b}']$. It was also proved in Theorem 24.8.3 of the book that a choice of $\\chi([\\mathfrak{b}])$ and a homology orientation of $M$ determine an orientation of $\\mathcal{M}(M^{*},[\\mathfrak{b}])$ (the moduli space of gauge equivalence classes of solutions on $M^{*}=M\\cup_{Y}[0,+\\infty)\\times Y$ that are asymptotic to $[\\mathfrak{b}]$). By replacing the compact manifold $M$ with the non-compact manifold $X_{+}$ and working with the weighted Sobolev spaces instead of the unweighted ones, one can repeat the argument there and prove the following similar result. Note that we do not need any homology orientation of $X_{+}$. This is essentially because of Lemma \\ref{half De rham complex} (iv) (compare Lemma 24.8.1 of the book). An alternative viewpoint is that $H^{1}(X_{+};\\mathds{R})=H^{2}(X_{+};\\mathds{R})=0$.\n\\begin{thm}\\label{orientation}\nA choice of $\\{\\chi([\\mathfrak{b}])|\\,[\\mathfrak{b}]\\in\\mathfrak{C}\\}$ canonically induces an orientation on the moduli space $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ for any critical point $[\\mathfrak{b}]$. 
These orientations are compatible with the gluing map in the following sense: the map $f$ provided by Theorem \\ref{gluing} is orientation preserving when restricted to $(0,1)\\times ([\\gamma_{-1}],[\\gamma_{0}])$, if we orient the moduli spaces $\\breve{\\mathcal{M}}([\\mathfrak{b}_{0}],[\\mathfrak{b}])$, $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ and $\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$ by the same choice $\\{\\chi([\\mathfrak{b}])|\\,[\\mathfrak{b}]\\in\\mathfrak{C}\\}$ and use the positive orientation on the interval $(0,1)$.\n\\end{thm}\n\\section{Compactness}\nIn this section and the next, we impose the following assumption:\n\\begin{assum}\nThe scalar curvature $\\operatorname{scal}$ of $g_{X}$ is everywhere positive. In other words, we have\n$$\ns_{0}=\\mathop{\\operatorname{inf}}\\limits_{x\\in X}\\operatorname{scal}(x)>0.\n$$\n\\end{assum}\nThis assumption implies that the restriction of $g_{Z_{+}}$ to $\\mathop{\\cup}\\limits_{n\\geq 1}W_{n}$, which is a lift of $g_{X}$, has uniformly positive scalar curvature. Under this assumption, we will prove the following compactness theorem:\n\\begin{thm}\\label{compactness}\nFor any $[\\mathfrak{b}_{0}]\\in \\mathfrak{C}$ with $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}_{0}])=-2w(X,g_{X},0)+1$, the moduli space $\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$ is compact.\n\\end{thm}(Again, the result can be generalized to arbitrary $[\\mathfrak{b}_{0}]$, but we focus on the current case because that is all we need.)\n\\subsection{The topological energy $\\mathcal{E}^{\\text{top}}$ and the quantity $\\Lambda_{\\mathfrak{q}}$} We start with some standard definitions from the book, which will be useful in our proof of the compactness theorem. Let $\\hat{X}$ be a general spin$^{c}$ $4$-manifold and $(A,\\Phi)$ be a point of the configuration space (i.e., $A$ is a spin$^{c}$ connection and $\\Phi$ is a positive spinor over $\\hat{X}$). 
Its topological energy is defined as\n\\begin{equation}\\label{topological energy}\n\\mathcal{E}^{\\text{top}}(A,\\Phi)=\\frac{1}{4}\\int_{\\hat{X}}F_{A^{t}}\\wedge F_{A^{t}}-\\int_{\\partial\\hat{X}}\\langle \\Phi|_{\\partial\\hat{X}},\\slashed{D}_{B}(\\Phi|_{\\partial\\hat{X}})\\rangle d\\text{vol}+\\int_{\\partial\\hat{X}}(H\/2)|\\Phi|^{2}d\\text{vol}\n\\end{equation}\nwhere $B=A|_{\\partial\\hat{X}}$ and $H$ denotes the mean curvature of the boundary, which vanishes if we use a product metric near the boundary.\nNote that in our situation, the integrals in (\\ref{topological energy}) are always convergent (even if $\\hat{X}$ is not compact) because $F_{A^{t}}$ decays exponentially over the end of $\\hat{X}$.\n We also talk about the topological energy of a point in the blown-up configuration space (i.e., a triple $(A,s,\\phi)$ with $s\\geq 0$ and $\\|\\phi\\|_{L^{2}}=1$). In this case, we define $\\mathcal{E}^{\\text{top}}(A,s,\\phi)$ to be $\\mathcal{E}^{\\text{top}}(\\boldsymbol\\pi(A,s,\\phi))$ where $$\\boldsymbol{\\pi}(A,s,\\phi)=(A,s\\phi)$$\nas before. Since the topological energy is invariant under gauge transformations, it also makes sense to talk about the topological energy of a gauge equivalence class.\n\nNow we return to our end-periodic manifold $X_{+}$. Recall that $\\mathfrak{q}$ is a nice perturbation (of height $-2w(X,g_{X},0)$). We choose a gauge invariant function\n\\begin{equation}\\label{perturbation}\nv:\\mathcal{C}_{k-1\/2}(Y)\\rightarrow \\mathds{R}\n\\end{equation}\nwhose formal gradient equals $\\mathfrak{q}$. We can then define the perturbed topological energy of a point $\\gamma\\in \\mathcal{C}^{\\sigma}_{k,\\text{loc}}(X_{+})$ as\n$$\n\\mathcal{E}^{\\text{top}}_{\\mathfrak{q}}(\\gamma)=\\mathcal{E}^{\\text{top}}(\\gamma)-2v(\\boldsymbol\\pi(\\gamma)|_{Y}).$$\n\nLet $\\epsilon$ be a number lying in $(0,\\frac{1}{2})$. 
We consider two other manifolds:\n$$\nX^{'}_{+}=X_{+}\\setminus ([0,2\\epsilon)\\times Y),\\\nX^{''}_{+}=X_{+}\\setminus ([0,\\epsilon)\\times Y).\n$$ We can define the blown-up configuration space $\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+}^{'})$ similarly to $\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$. There is a partially defined restriction map $$\\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})\\dashrightarrow \\mathcal{C}^{\\sigma}_{k,\\delta}(X^{'}_{+})$$\n$$\n(A,s,\\phi)\\mapsto (A|_{X^{'}_{+}},s\\|\\phi|_{X^{'}_{+}}\\|_{L^{2}},\\frac{\\phi|_{X^{'}_{+}}}{\\|\\phi|_{X^{'}_{+}}\\|_{L^{2}}})\n$$\nwhose domain contains the triples $(A,s,\\phi)$ with $\\phi|_{X^{'}_{+}}\\neq 0.$ We denote by $(A,s,\\phi)|_{X'_{+}}$ the image of $(A,s,\\phi)$ under this map. Under the assumption $\\phi|_{Y\\times \\{\\epsilon\\}}\\neq 0$, we can define $(A,s,\\phi)|_{Y\\times \\{\\epsilon\\}}\\in \\mathcal{C}^{\\sigma}_{k-1\/2}(Y)$ in a similar vein. Note that since we are considering solutions of the perturbed Seiberg-Witten equations, these conditions are always satisfied by the unique continuation theorem.\n\nOther than the (perturbed) topological energy, there is another quantity that will be useful when dealing with the blown-up configuration space. Let $(B,r,\\psi)$ be a point of $\\mathcal{C}^{\\sigma}_{k-1\/2}(Y)$. We define the quantity $$\\Lambda_{\\mathfrak{q}}(B,r,\\psi)=\\operatorname{Re}\\langle \\psi,\\slashed{D}_{B}\\psi+\\tilde{\\mathfrak{q}}^{1}(B,r,\\psi)\\rangle_{L^{2}}$$\nwhere $\\tilde{\\mathfrak{q}}^{1}(B,r,\\psi)$ is defined as (see Remark \\ref{component of perturbation})\n$$\n\\tilde{\\mathfrak{q}}^{1}(B,r,\\psi)=\\int_{0}^{1}\\mathcal{D}_{(B,sr\\psi)}\\mathfrak{q}^{1}(0,\\psi)ds.\n$$\n(Recall that $\\mathfrak{q}^{1}$ denotes the spinor component of the perturbation $\\mathfrak{q}$.)\n\n\\subsection{Compactness: local results}\nIn this subsection, we will prove the compactness results for solutions on the manifold $X_{+}=W_{0}\\cup_{Y}W_{1}\\cup_{Y}...$. 
To simplify the notation, we denote by $W_{n,n'}$ the manifold\n$$\nW_{n}\\cup_{Y}W_{n+1}\\cup_{Y}...\\cup_{Y}W_{n'}\\subset X_{+}\n,$$\nand write $\\|\\cdot\\|_{L^{2}_{j}(W_{n,n'})}$ for the $L^{2}_{j}$ norm of the restriction to $W_{n,n'}$. We will use similar notation for other manifolds.\n\nLet us start with the following lemma, which was communicated to the author by Clifford Taubes.\n\n\\begin{lem}\\label{exp decay}\nThere exist uniform constants $C,\\delta_{3}>0$ with the following significance: for any $\\delta\\in (0,\\delta_{3})$ and any solution $\\gamma=(A,s,\\phi)\\in \\mathcal{C}^{\\sigma}_{k,\\delta}(X_{+})$ of the equation $\\mathfrak{F}^{\\sigma}_{\\mathfrak{p}}(\\gamma)=0$, we have\n$$\n\\|\\phi\\|_{L^{2}(W_{n})}\\leq Ce^{-\\delta_{3}n},\\ \\ \\forall n\\geq 0.\n$$\n\\end{lem}\n\\begin{proof}\nWe first consider $W_{n}$ for $n\\geq 1$. Over these manifolds, the perturbation $\\mathfrak{p}$ equals $0$ and hence we have\n\\begin{equation}\\begin{split}\n\\rho(F^{+}_{A^{t}})-2s^{2}(\\phi\\phi^{*})_{0}=0\\\\\\slashed{D}^{+}_{A}\\phi=0.\\end{split}\n\\end{equation}\nWe choose an integer $N$ large enough such that there exists a bump function\n$$\n\\tau:W_{1,3N}\\rightarrow[0,1]\n$$\nwith the following properties: i) $\\tau$ is supported on $W_{2,3N-1}$; ii) $\\tau$ equals $1$ when restricted to $W_{N+1,2N}$; iii) $|d\\tau(x)|^{2}$ is sufficiently small for all $x$. For any $\\delta'>\\delta$, the natural inclusion $C^{\\sigma}_{k+1,\\delta'}(X')\\rightarrow C^{\\sigma}_{k,\\delta}(X')$ maps a bounded closed set to a compact set. Therefore, we can find a subsequence that converges in $C^{\\sigma}_{k,\\delta}(X')$.\\end{proof}\n\\subsection{Compactness: broken trajectories} With Theorem \\ref{local compactness} (compare Theorem 24.5.2 in the book) proved, the proof of Theorem \\ref{compactness} is essentially the same as the proof of Theorem 24.6.4 in the book. 
For completeness, we sketch it as follows:\n\\begin{proof}[Proof of Theorem \\ref{compactness}](Sketch)\nWe first consider a sequence $[\\gamma_{n}]\\in \\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$ $(n\\geq 1)$ represented by unbroken $Z_{+}$-trajectories $\\gamma_{n}$. Using integration by parts, it is easy to see that $\\mathcal{E}_{\\mathfrak{p}}^{\\text{top}}(\\gamma_{n}|_{X_{+}})=\\mathcal{L}_{\\mathfrak{q}}(\\gamma_{n}|_{\\{0\\}\\times Y})$ for any $n$, which implies $\\mathcal{E}_{\\mathfrak{p}}^{\\text{top}}(\\gamma_{n}|_{X_{+}})<\\mathcal{L}_{\\mathfrak{q}}([\\mathfrak{b}_{0}])$ (because $\\gamma_{n}|_{Z}$ is a flow line with limit $[\\mathfrak{b}_{0}]$). By a similar decomposition as in the proof of Lemma \\ref{energy bound}, we can prove that\n$$\n\\mathcal{L}_{\\mathfrak{q}}(\\gamma_{n}|_{\\{\\epsilon\\}\\times Y})=\\mathcal{E}_{\\mathfrak{p}}^{\\text{top}}(\\gamma_{n}|_{X''_{+}})>C,\\ \\mathcal{L}_{\\mathfrak{q}}(\\gamma_{n}|_{\\{2\\epsilon\\}\\times Y})=\\mathcal{E}_{\\mathfrak{p}}^{\\text{top}}(\\gamma_{n}|_{X'_{+}})>C\n$$\nfor some uniform constant $C$. This implies both \\begin{equation}\\label{energy control1}\\mathcal{E}_{\\mathfrak{q}}^{\\text{top}}(\\gamma_{n}|_{(-\\infty,\\epsilon]\\times Y})<\\mathcal{L}_{\\mathfrak{q}}([\\mathfrak{b}_{0}])-C\\end{equation} and \\begin{equation}\\label{energy control2}\\mathcal{E}_{\\mathfrak{q}}^{\\text{top}}(\\gamma_{n}|_{(-\\infty,2\\epsilon]\\times Y})<\\mathcal{L}_{\\mathfrak{q}}([\\mathfrak{b}_{0}])-C.\\end{equation}\nBy the same argument as the proof of Lemma 16.3.1 in the book, condition (\\ref{energy control1}) actually implies $$\\Lambda_{\\mathfrak{q}}(\\gamma_{n}|_{\\{t\\}\\times Y})\\leq C',\\ \\forall t\\in (-\\infty,\\epsilon]$$ for some constant $C'$. 
Now we apply Theorem \\ref{local compactness} to show that after applying suitable gauge transformations $u_{n}:X_{+}\\rightarrow S^{1}$ and passing to a subsequence, the restriction $u_{n}(\\gamma_{n}|_{X_{+}})|_{X'_{+}}$ has a limit in $C^{\\sigma}_{k,\\delta}(X')$. Since $\\Lambda_{\\mathfrak{q}}(\\cdot)$ is gauge invariant, we get $$\\Lambda_{\\mathfrak{q}}(\\gamma_{n}|_{\\{2\\epsilon\\}\\times Y})=\\Lambda_{\\mathfrak{q}}(u_{n}(\\gamma_{n})|_{\\{2\\epsilon\\}\\times Y})\\geq C''$$ for some uniform constant $C''$. Another application of Lemma 16.3.1 in the book provides a uniform lower bound\n$$\n\\Lambda_{\\mathfrak{q}}(\\gamma_{n}|_{\\{t\\}\\times Y})\\geq C''',\\ \\forall t\\in (-\\infty,2\\epsilon].\n$$\nNow the proof proceeds exactly as in the book: we can show that after passing to a further subsequence, $\\gamma_{n}|_{(-\\infty,2\\epsilon]}$ converges to a (possibly broken) half trajectory. Putting the two pieces $\\gamma_{n}|_{(-\\infty,2\\epsilon]\\times Y}$ and $\\gamma_{n}|_{X'_{+}}$ together, we see that after passing to a subsequence and composing with suitable gauge transformations, $\\gamma_{n}$ converges to a (possibly broken) $Z_{+}$-trajectory $\\gamma_{\\infty}$. By our regularity assumption, $\\gamma_{\\infty}$ can have at most one breaking point, whose absolute grading must be $-2w(X,g_{X},0)$. In other words, the limit $\\gamma_{\\infty}$ represents a point of $\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$.\n\nWe have shown that any sequence $[\\gamma_{n}]\\in \\mathcal{M}([\\mathfrak{b}_{0}],Z_{+})$ contains a convergent subsequence in $\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})$. By a similar argument, we see that $\\mathcal{M}([\\mathfrak{b}],Z_{+})$ contains at most finitely many elements for any $[\\mathfrak{b}]$ with $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])=-2w(X,g_{X},0)$. 
Since there are only finitely many critical points $[\\mathfrak{b}]$ with $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])=-2w(X,g_{X},0)$ and $\\breve{\\mathcal{M}}([\\mathfrak{b}_{0}],[\\mathfrak{b}])$ is a finite set for each of them, we see that $$\\mathcal{M}^{+}([\\mathfrak{b}_{0}],Z_{+})\\setminus\\mathcal{M}([\\mathfrak{b}_{0}],Z_{+}) =(\\mathop{\\cup}\\limits_{ \\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}])=-2w(X,g_{X},0)}\\breve{\\mathcal{M}}([\\mathfrak{b}_{0}],[\\mathfrak{b}])\\times \\mathcal{M}([\\mathfrak{b}],Z_{+}))$$\nis a finite set. This finishes the proof of the theorem.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{new obstruction}}\n Suppose $g_{X}$ has positive scalar curvature everywhere. We first prove that $-2\\operatorname{h}(Y,\\mathfrak{s})\\leq 2\\lambda_{\\textnormal{SW}}(X)$. Suppose this is not the case. Recall that $\\lambda_{\\textnormal{SW}}(X)=-w(X,g_{X},0)$ by Lemma \\ref{casson for psc}. By Assumption \\ref{3 dimensional perturbation}, the perturbation $\\mathfrak{q}$ is chosen so that the condition of Lemma \\ref{alternative defi of Froyshov} is satisfied. 
As a result, we can find nonzero integers\n$ n,m_{1},...,m_{l}$ and irreducible critical points $[\\mathfrak{b}_{1}],...,[\\mathfrak{b}_{l}]\\in \\mathfrak{C}^{o}$ with $\\operatorname{gr}^{\\mathds{Q}}([\\mathfrak{b}_{i}])=-2w(X,g_{X},0)+1$ for each $i$ such that\n\\begin{equation}\\label{b0 killed}\n\\partial^{o}_{o}(m_{1}[\\mathfrak{b}_{1}]+...+m_{l}[\\mathfrak{b}_{l}])=0 \\text{ and } \\partial^{o}_{s}(m_{1}[\\mathfrak{b}_{1}]+...+m_{l}[\\mathfrak{b}_{l}])=n[\\mathfrak{a}_{0}].\\end{equation}\nNow consider the manifold $$\\mathcal{M}=(\\mathop{\\cup}\\limits_{\\{l|m_{l}>0\\}}m_{l}\\cdot\\mathcal{M}^{+}([\\mathfrak{b}_{l}],Z_{+}))\\cup( \\mathop{\\cup}\\limits_{\\{l|m_{l}<0\\}}(-m_{l})\\cdot \\bar{\\mathcal{M}}^{+}([\\mathfrak{b}_{l}],Z_{+}))$$\nwhere $m\\cdot *$ (for $m>0$) means the disjoint union of $m$ copies of $*$ and $\\bar{\\mathcal{M}}^{+}([\\mathfrak{b}_{l}],Z_{+})$ denotes the orientation reversal of $\\mathcal{M}^{+}([\\mathfrak{b}_{l}],Z_{+})$. By Theorem \\ref{gluing}, Theorem \\ref{orientation}, Theorem \\ref{compactness} and condition (\\ref{b0 killed}), $\\mathcal{M}$ is an oriented, compact $1$-dimensional manifold with $$\\#\\partial\\mathcal{M}=n\\cdot\\#\\mathcal{M}([\\mathfrak{a}_{0}],Z_{+}),$$\nwhere, as before, $\\#*$ denotes the number of points, counted with sign, in an oriented $0$-dimensional manifold. 
By Assumption \\ref{4 dim perturbation} and Proposition \\ref{only one reducible}, we get\n$$\\#\\partial\\mathcal{M}=n\\cdot \\pm 1=\\pm n\\neq 0,$$\nwhich is impossible because we know that the number, counted with sign, of boundary points in any compact $1$-manifold should be $0$.\nThis contradiction finishes the proof of the inequality $\\operatorname{h}(Y,\\mathfrak{s})\\leq 2\\lambda_{SW}(X)$.\n\nBy applying the same argument to the manifold $-X$, we also get $-2\\operatorname{h}(-Y,\\mathfrak{s})\\leq 2\\lambda_{\\textnormal{SW}}(-X)$, which implies $-2\\operatorname{h}(Y,\\mathfrak{s})\\geq 2\\lambda_{\\textnormal{SW}}(X)$ by Lemma \\ref{orientation reversal} and Lemma \\ref{orientation reversal 2}. Therefore, we have $-2\\operatorname{h}(Y,\\mathfrak{s})= 2\\lambda_{\\textnormal{SW}}(X)$ and the theorem is proved.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\subsubsection{ObjectNav}\n\nIn ObjectNav, the agent is tasked with navigating to one of a set of target object types (\\eg navigate to the bed) given ego-centric sensory inputs. The sensory input can be an RGB image, a depth image, or combination of both. At each time step the agent must issue one of the following actions: \\emph{Move Forward}, \\emph{Rotate Right}, \\emph{Rotate Left}, \\emph{Look Up}, \\emph{Look Down}, and \\emph{Done}. The \\emph{Move Forward} action moves the agent by 0.25m and the rotate and look actions are performed in $30^\\circ$ increments.\n\nEpisodes are considered successful if (1) the object is visible in the camera's frame (2) the distance between the agent and the target object is within 1 meter and (3) the agent issues the Done action. The starting location of the agent is a random location in the scene.\n\nOur workshop has held 2 ObjectNav challenges: the RoboTHOR ObjectNav Challenge~\\cite{deitke2020robothor} and the Habitat ObjectNav Challenge~\\cite{habitatchallenge2022, savva2019habitat}. 
Both challenges use the aforementioned action and observation space, as well as a simulated LoCoBot robotic agent. In comparison:\n\\begin{itemize}\n \\item \\textit{Scenes.} The RoboTHOR Challenge\\footnote{\\url{https:\/\/ai2thor.allenai.org\/robothor\/challenge}} includes 89 room-sized dorm-like scenes. The Habitat 2021 Challenge\\footnote{\\url{https:\/\/aihabitat.org\/challenge\/2021\/}} uses 90 houses from the Matterport3D dataset~\\cite{Chang3DV2017Matterport} and the Habitat 2022 Challenge\\footnote{\\url{https:\/\/aihabitat.org\/challenge\/2022\/}} uses 120 houses from the HM3D Semantics dataset~\\cite{ramakrishnan2021habitat}. Both iterations of the Habitat Challenge use scenes collected from real-world scans. In contrast, RoboTHOR scenes were hand-built by 3D artists to be accessible in AI2-THOR \\cite{ai2thor} in the Unity game engine. Habitat houses are significantly larger than those in RoboTHOR, often consisting of multiple floors.\n \\item \\textit{Target Objects.} The RoboTHOR Challenge uses 13 relatively small objects as target object types (\\eg Alarm Clock, Basketball, Laptop). The Habitat 2021 Challenge used 21 target object types and the Habitat 2022 Challenge used 6 target object types. The target object types in both Habitat Challenges typically represent larger objects (\\eg Bed, Fireplace, Sofa).\n\\end{itemize}\n\nFor the RoboTHOR Challenge, state-of-the-art is currently held by ProcTHOR~\\cite{deitke2022procthor}, which has a test SPL~\\cite{anderson_arxiv18} of 0.2884 and a success rate of 65\\% on scenes unseen during training. ProcTHOR uses a fairly simple model that embeds images with CLIP, feeds them through a GRU, and uses an actor-critic output optimized with DD-PPO. Its novelty is pre-training on 10K procedurally generated houses (ProcTHOR-10K). It then fine-tunes in RoboTHOR. For the Habitat 2022 Challenge, state-of-the-art by SPL is also held by ProcTHOR, achieving 0.32 SPL and a success rate of 54\\% on unseen scenes. 
For the Habitat 2022 Challenge, ProcTHOR pre-trains on ProcTHOR-10K and fine-tunes on the HM3D Semantics scenes. When sorting the Habitat 2022 Challenge entries by success rate, imitation learning with Habitat-Web~\\cite{ramrakhya2022habitat}, fine-tuned with RL, achieves a state-of-the-art 60\\% success rate and an SPL of 0.30 on unseen scenes. Habitat-Web built a web interface to collect human demonstrations of ObjectNav with Amazon Mechanical Turk. It also achieved state-of-the-art in the Habitat 2021 Challenge, with an SPL of 0.146 and a success rate of 34\\%.\n\n\n\n\n\n\\section{Introduction}\n\n\nWithin the last decade, advances in deep learning, coupled with the creation of massive datasets and high-capacity models, have resulted in remarkable progress in computer vision, audio, NLP, and the broader field of AI. This progress has enabled models to obtain superhuman performance on a wide variety of passive tasks (\\eg image classification). However, this progress has also enabled a paradigm shift towards embodied agents (\\eg robots) which learn, through interaction and exploration, to creatively solve challenging tasks within their environments. The field of embodied AI focuses on how intelligence emerges from an agent's interactions with its environment. An interaction in the environment involves an agent taking an action that affects its future state. For instance, the agent may perform navigation actions to move around the environment or take manipulation actions to open or pick up objects within reach. Embodied AI is a focus of a growing collection of researchers and research challenges.\n\n\nConsider asking a robot to \\myquote{Clean my room} or \\myquote{Drive me to my favorite restaurant}. 
To succeed at these tasks in the real world, the robots need skills like \\textit{visual perception} (to recognize scenes and objects), \\textit{audio perception} (to receive the speech spoken by the human), \\textit{language understanding} (to translate questions and instructions into actions), \\textit{memory} (to recall how items should be arranged or to recall previously encountered situations), \\textit{physical intuition} (to understand how to interact with other objects), \\textit{multi-agent reasoning} (to predict and interact with other agents), and \\textit{navigation} (to safely move through the environment). The study of embodied agents both provides a challenging testbed for building intelligent systems and tries to understand how intelligence emerges through interaction with an environment. As such, it involves many disciplines, such as computer vision, natural language processing, acoustic learning, reinforcement learning, developmental psychology, cognitive science, neuroscience, and robotics.\n\nIn this paper, we present a retrospective on the state of embodied AI, focusing on the challenges highlighted at the 2020--2022 CVPR embodied AI workshops. The challenges presented in the workshop have focused on benchmarking progress in navigation, rearrangement, and embodied vision-and-language. 
The navigation challenges include Habitat PointNav~\\cite{habitat2020sim2real} and ObjectNav~\\cite{batra2020objectnav}, Interactive and Social Navigation with iGibson~\\cite{xia2020interactive}, RoboTHOR ObjectNav~\\cite{deitke2020robothor}, MultiON~\\cite{wani2020multion}, RVSU Semantic SLAM~\\cite{hall2020robotic}, and Audio-Visual Navigation with SoundSpaces~\\cite{chen_soundspaces_2020}; rearrangement challenges include AI2-THOR Rearrangement~\\cite{weihs2021rearrangement}, TDW-Transport~\\cite{gan2022threedworld}, and RVSU Scene Change Detection~\\cite{hall2020robotic}; and embodied vision-and-language challenges include RxR-Habitat~\\cite{rxr}, ALFRED~\\cite{ALFRED20}, and TEACh~\\cite{teach}. We discuss the setup of each challenge and its state-of-the-art performance, analyze common approaches between winning entries across the challenges, and conclude with a discussion of promising future directions in the field.\n\n\n\n\n\n\n\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/reduced\/header_v3.pdf}\n \\vspace{-12mm}\n \\caption{Passive AI tasks are based on predictions over independent samples of the world, such as images collected without a closed loop with a decision-making agent. In contrast, embodied AI tasks include an active artificial agent, such as a robot, that must perceive and interact with the environment purposely to achieve its goals, including in unstructured or even uncooperative settings. \n Enabled by the progress in computer vision and robotics, embodied AI represents the next frontier of challenges to study and benchmark intelligent models and algorithms for the physical world.}\n \\label{fig:passive-vs-embodied-ai}\n\\end{figure*}\n\n\\section{What is Embodied AI?}\n\n\n\n\\emph{Embodied AI} studies artificial systems that express intelligent behavior through bodies interacting with their environments. 
The first generation of embodied AI researchers focused on robotic embodiments \\cite{pfeifer2004embodied}, arguing that robots need to interact with their noisy environments with a rich set of sensors and effectors, creating high-bandwidth interaction that breaks the fundamental assumptions of clean inputs, clean outputs, and static world states required by \\textit{classical AI} approaches \\cite{wilkins1988practical}. More recent embodied AI research has been empowered by rich simulation frameworks, often derived from scans of real buildings and models of real robots, to recreate environments more closely resembling the real world than those previously available. These environments have enabled both discoveries about the properties of intelligence \\cite{partsey2022mapping} and systems which show excellent sim-to-real transfer \\cite{8968053,DBLP:journals\/corr\/abs-1804-10332}.\n\nAbstracting away from real or simulated embodiments, embodied AI can be defined as the study of intelligent agents that can\n\\begin{inparaitem}[]\n \\item \\emph{see} (or more generally perceive their environment through vision, audition, or other senses),\n \\item \\emph{talk} (\\ie hold a natural language dialog grounded in the environment),\n \\item \\emph{listen} (\\ie understand and react to audio input anywhere in a scene),\n \\item \\emph{act} (\\ie navigate their environment and interact with it to accomplish goals), and \n \\item \\emph{reason} (\\ie consider the long-term consequences of their actions).\n\\end{inparaitem}\nEmbodied AI focuses on tasks which break the clean input\/output formalism of passive tasks such as object classification and speech understanding, and require agents to interact with, and sometimes even modify, their environments over time (Fig. 
\\ref{fig:passive-vs-embodied-ai}).\nFurthermore, embodied AI environments generally violate the clean dynamics of structured environments such as games and assembly lines, and require agents to cope with noisy sensors, effectors, dynamics, and other agents, which creates unpredictable outcomes.\n\n\\paragraph{Why is Embodiment Important?}\nEmbodied AI can be viewed as a reaction against extreme forms of the \\emph{mind-body duality} in philosophy, which treat intelligence as a purely mental phenomenon. The mind-body problem has faced philosophers and scientists for millennia \\cite{crane2012history}: humans are simultaneously ``physical agents'' with mass, volume and other bodily properties, and at the same time ``mental agents'' that think, perceive, and reason in a conceptual domain which seems to lack physical embodiment. Some scholars argue in favor of a strict mind-body duality in which intelligence is a purely mental quality only loosely connected to bodily experience \\cite{ryle2009concept}. Other scholars, across philosophy, psychology, cognitive science and artificial intelligence, have challenged this mind-body duality, arguing that intelligence is intrinsically connected to embodiment in bodily experience, and that separating them has distorting effects on research \\cite{brooks1990elephants, paul2021extended, varela2017embodied, mehta2011mind, ryle2009concept}.\n\nThe history of research in artificial intelligence has mirrored this debate over mind and body, focusing first on computational solutions for symbolic problems which appear hard to humans, a strategy often called GOFAI (\"Good Old Fashioned AI\", \\cite{boden20144,mcdermott2015gofai}). The computational theory of mind argued that if intelligence consists of reasoning operations in the mind, computers performing similar computations could also be intelligent \\cite{piccinini2004first,Sclar2022}. 
%\nPurely symbolic artificial intelligence was often disconnected from the physical world, requiring symbolic representations as input, creating problems with grounding symbols in perception \\cite{harnad1990symbol,steels2008symbol} and often leading to brittleness \\cite{mccarthy2007here,lohn2020estimating,cummings2020surprising}. However, symbolic reasoning problems themselves often proved to be relatively easy, whereas the physical problems of perceiving the environment or acting in it were actually the most challenging: what is unconscious for humans often requires surprising intelligence, a phenomenon known as Moravec's Paradox \\cite{goldberg2015robotics,agrawal2010study}. \nSome researchers challenged this approach, arguing that for machines to be intelligent, they must interact with noisy environments via rich sets of sensors and effectors, creating high-bandwidth interactions that break the assumption of clean inputs and outputs and discrete states required by \\textit{classical AI} \\cite{wilkins1988practical}; %\nthese ideas were echoed by roboticists already concerned with connecting sensors and actuators more directly \\cite{arkin1998behavior,brooks1990elephants,moravec2000ripples}.\nMuch as neural network concepts hibernated through several AI winters before enjoying a renaissance, embodied AI ideas have now been revived by new interest from fields such as computer vision, machine learning and robotics, often in combination with neural networks.\nNew generations of artificial neural networks are now able to digest raw sensor signals, generate commands to actuators, and autonomously learn problem representations, linking \"classical AI\" tasks to embodied setups.\n\nThus, embodied AI is more than just the study of agents that are active and situated in their environments: it is an exploration of the properties of intelligence. 
Embodied AI research has demonstrated that intelligent systems that perform well at embodied tasks often look different from their passive counterparts \\cite{fu2022coupling}; conversely, models that excel at passive AI tasks can often contribute greatly to embodied systems as components \\cite{shridhar2022cliport}. Furthermore, the control over embodied agents provided by modern simulators and deep learning libraries enables ablation studies that reveal fine-grained details about the properties needed for individual embodied tasks \\cite{partsey2022mapping}.\n\n\n\\paragraph{What is \\emph{not} Embodied AI?}\n\nEmbodied AI overlaps with many other fields, including robotics, computer vision, machine learning, artificial intelligence, and simulation.\nHowever, there are differences in focus which make embodied AI a research area in its own right.\n\nAll \\textit{robotic} systems are embodied; however, not all embodied systems are robots (e.g., AR glasses), and robotics requires a great deal of work beyond purely trying to make systems intelligent. Embodied AI also includes work that focuses on exploring the properties of intelligence in realistic environments while abstracting some of the details of low-level control.\nFor example, the ALFRED~\\cite{ALFRED20} benchmark uses simulation to abstract away low-level robotic manipulation (\\eg moving a gripper to grasp an object) to focus on high-level task planning. 
Here, the agent is tasked with completing a natural language instruction, such as \\textit{rinse the egg to put it in the microwave}, and it can open or pick up an object by issuing a high-level \\textit{Open} or \\textit{Pickup} action that succeeds if the agent is looking at the object and is sufficiently close to it.\nAdditionally, \\cite{partsey2022mapping} provides an example of studying properties of intelligence, where they attempt to answer whether mapping is strictly required for a form of robotic navigation.\nConversely, robotics includes work that focuses directly on the aspects of the real world, such as low-level control, real-time response, or sensor processing.\n\n\\textit{Computer vision} has contributed greatly to embodied AI research; however, computer vision is a vast field, much of which is focused purely on improving performance on passive AI tasks such as classification, segmentation, and image transformation. Conversely, embodied AI research often explores problems that require other modalities with or without vision, such as navigation with sound \\cite{chen_soundspaces_2020} or pure LiDAR images.\n\n\\textit{Machine learning} is one of the most commonly used techniques for building embodied agents. However, machine learning is a vast field encompassing primarily passive tasks, and most embodied AI tasks are formulated in such a way that they are learning-agnostic. 
For example, the iGibson 2020 challenge \\cite{shen2020igibson} allowed training in simulation but evaluated deployment in held-out environments, both simulated and real; nothing required the solutions to use a learned approach as opposed to a classical navigation stack (though learned approaches were the ones deployed).\n\n\\textit{Artificial intelligence} is written into the name of embodied AI, but the field of embodied AI was created to address the perceived limitations of classical artificial intelligence \\cite{pfeifer2004embodied}, and much of artificial intelligence is focused on problems like causal reasoning or automated programming which are hard enough without introducing the messiness of real embodiments. More recently, techniques from more traditional artificial intelligence domains like natural language understanding have been applied to embodied problems with great success \\cite{ahn2022can}.\n\n\\textit{Simulation} and embodied AI are intimately intertwined; while simulations of real-world systems go far beyond the topics of robotics, and the first generation of embodied AI focused on robotic embodiments \\cite{pfeifer2004embodied}, much of modern embodied AI research has expanded to simulated benchmarks, emulating, or even scanned from, real environments, which provide challenging problems for traditional AI approaches, with or without physical embodiments. Despite not starting with robots, systems that have resulted from this work have nevertheless found success in real-world environments \\cite{8968053,DBLP:journals\/corr\/abs-1804-10332}, providing hope that simulated benchmarks will prove a fruitful way to develop more capable real-world intelligent systems.\n\n\n\n\n\n\\paragraph{Why focus on real-world environments?}\nMany researchers are exploring intelligence in areas such as image recognition or natural language understanding where, at first blush, interaction with an environment appears not to be required. 
Genuine discoveries about intelligent systems appear to have been made here, such as the role of convolutions in image processing and the role of recurrent networks and attention in language processing. So a reasonable question is: why do we need to focus on interactive and realistic (if not real-world) environments if we want to understand intelligence?\n\nFocusing on interactive environments is important because each new modality of intelligence we consider (classification, image processing, natural language understanding, and so on) has required new architectures for learning systems \\cite{goodfellow2016deep,chollet2021deep}. Interacting with an environment over time requires the techniques of reinforcement learning. Deep reinforcement learning has made massive strides in creating learning systems for synthetic environments, including traditional board games, Atari games, and even environments with simulated physics such as the MuJoCo environments.\n\nHowever, embodied AI research focuses on environments that are either more realistic \\cite{habitatchallenge2022} or which require actual deployments in the real world \\cite{habitat2020sim2real,shen2020igibson}. This shift in emphasis has two primary reasons. First, many embodied AI researchers\nbelieve that the challenges of realistic environments are critical for developing systems that can be deployed in the real world. Second, many embodied\nAI researchers believe that there are genuine discoveries to be made about the properties of intelligence needed to handle real-world environments that can only be made by attempting to solve problems in environments that are as close to the real world as is feasible at this time.\n\n\n\n\n\n\\section{Challenge Details}\n\nIn this section, we discuss the 13 challenges presented at our Embodied AI Workshops between 2020--2022. The challenges are partitioned into navigation challenges, rearrangement challenges, and embodied vision-and-language challenges. 
Most challenges present distinctive tasks, metrics, and training datasets, though many challenges share similar observation spaces, action spaces, and environments.\n\n\\subsection{Navigation Challenges}\n\nOur workshop has featured a number of challenges relating to embodied visual navigation. At a high level, the tasks consist of an agent operating in a simulated 3D environment (\\eg a household), where its goal is to move to some target. For each task, the agent has access to an egocentric camera and observes the environment from a first-person perspective. The agent must learn to navigate the environment from its visual observations.\n\nThe challenges primarily differ based on how the target is encoded (\\eg ObjectGoal, PointGoal, AudioGoal), how the agent is expected to interact with the environment (\\eg static navigation, interactive navigation, social navigation), the training and evaluation scenes (\\eg 3D scans, video-game environments, the real world), the observation space (\\eg RGB vs. RGB-D, whether to provide localization information), and the action space (\\eg outputting discrete high-level actions or continuous joint movement actions).\n\n\\noindent\n\\begin{minipage}[l]{0.46\\textwidth}\n \\vspace{0.1in}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/pointnav_task_v2.pdf}\n \\vspace{-0.1in}\n \\captionsetup{type=figure}\n \\captionof{figure}{The \\emph{PointNav} task requires an agent to navigate to a goal coordinate in a novel environment (potentially with noisy sensory inputs), without access to a pre-built map of the environment.}\n \\vspace{-0.1in}\n\\end{minipage}\n\n\n\\subsubsection{PointNav}\n\nIn PointNav, the agent's goal is to navigate to target coordinates in a novel environment that are relative to its starting location (\\eg navigate 5m north, 3m west relative to its starting pose), without access to a pre-built map of the environment. 
The agent has access to egocentric sensory inputs (RGB images, depth images, or both), and an egomotion sensor (sometimes referred to as a GPS+Compass sensor) for localization. The action space for the robot consists of \\emph{Move Forward 0.25m}, \\emph{Rotate Right $30^\\circ$}, \\emph{Rotate Left $30^\\circ$}, and \\emph{Done}. An episode is considered successful if the agent issues the \\emph{Done} command within 0.2 meters of the goal and within a maximum of 500 steps. The agent is evaluated using the Success Rate (SR) and ``Success weighted by Path Length'' (SPL) \\cite{anderson_arxiv18} metrics, which measure the success and efficiency of the path taken by the agent. For training and evaluation, challenge participants use the train and val splits from the Gibson 3D dataset \\cite{zamir_cvpr18}. \n\nIn 2019, AI Habitat hosted its first challenge on PointNav.\\footnote{\\url{https:\/\/aihabitat.org\/challenge\/2019\/}} The winning submission \\cite{chaplot2020learning} utilized a combination of classical and learning-based methods, and achieved a high test SPL of 0.948 in the RGB-D track, and 0.805 in the RGB track. In 2020 and 2021, the PointNav challenge was modified to emphasize increased realism and sim2real predictivity (the ability to predict performance on a real robot from its performance in simulation) based on findings from Kadian et al. \\cite{habitatsim2real20ral}. Specifically, the challenge (PointNav-v2) introduced (1) no GPS+Compass sensor, (2) noisy actuation and sensing, (3) collision dynamics and `sliding', and (4) minor changes to the robot embodiment\/size, camera resolution, and height to better match the LoCoBot robot. These changes proved to be much more challenging, with the winning submission in 2020 \\cite{ramakrishnan2020occant} achieving an SPL of 0.21 and SR of 0.28. 
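The SPL metric referenced above averages, over $N$ episodes, the binary success $S_i$ weighted by the ratio of the shortest-path distance $l_i$ to the length $p_i$ of the path the agent actually took: $\mathrm{SPL}=\frac{1}{N}\sum_i S_i\,\frac{l_i}{\max(p_i,l_i)}$. A minimal sketch following that definition:

```python
# Sketch of Success weighted by Path Length (SPL): each episode i
# contributes its success S_i scaled by l_i / max(p_i, l_i), where
# l_i is the shortest-path distance to the goal and p_i is the
# length of the path the agent actually took.
def spl(successes, shortest_dists, agent_path_lengths):
    total = 0.0
    for s, l, p in zip(successes, shortest_dists, agent_path_lengths):
        total += s * (l / max(p, l))
    return total / len(successes)
```

An agent that succeeds along the shortest possible path scores 1.0 on that episode; a failure scores 0 regardless of path length.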
In 2021, there was a major breakthrough with a 3$\\times$ performance improvement over the winners in 2020; the winning submission achieved an SPL of 0.74 and SR of 0.96 \\cite{habitat2020sim2real}. Since an agent with perfect GPS+Compass sensors in this PointNav-v2 setting can only achieve a maximum of 0.76 SPL and 0.99 SR, the PointNav-v2 challenge was considered solved, and discontinued in future years. \n\n\n\n\\subsubsection{Interactive and Social PointNav}\n\nIn Interactive and Social Navigation, the agent is required to reach a PointGoal in dynamic environments that contain dynamic objects (furniture, clutter, etc.) or dynamic agents (pedestrians). Although robot navigation achieves remarkable success in static, structured environments like warehouses, it remains a challenging research question in dynamic environments like homes and offices. In 2020 and 2021, the Stanford Vision and Learning Lab in collaboration with Robotics@Google hosted challenges on Interactive and Social (Dynamic) Navigation\\footnote{\\url{https:\/\/svl.stanford.edu\/igibson\/challenge2021.html}}. These challenges used the simulation environment iGibson~\\cite{shen2020igibson, li2021igibson} with a number of realistic indoor scenes, as illustrated in Fig.~\\ref{fig:iGibsonChallenges}. 
The 2020 Challenge\\footnote{\\url{https:\/\/svl.stanford.edu\/igibson\/challenge2020.html}} also featured a Sim2Real component where the participants trained their policies in the iGibson simulation environment and deployed them in the real world.\n\n\n\\begin{figure}[ht!]\n \\centering\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=1\\textwidth]{fig\/igibson\/ig_interactive_nav_2022.jpg}\n \\caption{Interactive Navigation}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.23\\textwidth}\n \\centering\n \\includegraphics[width=1\\textwidth]{fig\/igibson\/ig_social_nav_2022.jpg}\n \\caption{Social Navigation}\n \\end{subfigure}\n \\caption{\\emph{Interactive Navigation} (left) requires the agent to push aside small obstacles (\\eg shoes, boxes) whereas \\emph{Social Navigation} (right) requires the agent to navigate among pedestrians and respect their personal space.}\n \\label{fig:iGibsonChallenges}\n\\end{figure}\n\nIn \\textit{Interactive Navigation}, we challenge the notion that navigating agents must avoid collisions at any cost. We argue for the contrary -- in clutter-filled real environments, such as homes, an agent will have to interact with and push away objects to achieve meaningful navigation. Note that all objects in the scenes are assigned realistic physical weights and are interactable. As in the real world, while some objects are light and movable by the robot, others are not. Along with the furniture objects originally in the scenes, additional objects (\\eg shoes and toys) from the Google Scanned Objects dataset~\\cite{downs2022google} are added to simulate real-world clutter. The performance of the agent is evaluated using a novel Interactive Navigation Score (INS)~\\cite{xia2020interactive} that measures both navigation success as well as the level of disturbance to the scene an agent has caused along the way.\n\nIn \\textit{Social Navigation}, the agent navigates among walking humans in a home environment. 
The humans in the scene move towards randomly sampled locations, and their 2D trajectories are simulated using the model of Optimal Reciprocal Collision Avoidance (ORCA)~\\cite{berg2011reciprocal} integrated in iGibson~\\cite{shen2020igibson, li2021igibson, darpino2021socialnav}. The agent must avoid collisions with pedestrians and never come closer to them than 0.3 meters; otherwise, the episode is terminated. It should also maintain a comfortable distance of 0.5 meters to pedestrians, within which the score is penalized but episodes are not terminated. The Social Navigation Score (SNS), which is the average of STL (Success weighted by Time Length) and PSC (Personal Space Compliance), is used to evaluate the performance of the agent.\n\nThe agent takes in the current RGB-D images, the target coordinates in its local frame, and current velocities as observations, and outputs a continuous twist command (desired linear and angular velocities) as actions. The dataset includes eight training scenes, two validation scenes and five testing scenes. All scenes are fully interactive.\n\nIn the 2020 edition we saw 4 submissions, while in the subsequent 2021 edition we had 6 submissions. The current state-of-the-art learning-based methods achieved some level of success for Interactive and Social Navigation tasks (around $0.5$ INS and $0.45$ SNS), but these tasks are still far from being solved. In both competitions participants improved navigation success rate while keeping environment disturbance relatively constant. The common failure cases include the agent being too conservative and not being able to clear the obstacles in time, and the agent being too aggressive and colliding with the other moving pedestrians. \n\nOne of the challenges for the Social Nav part was the difficulty in simulating the trajectories of the human agents, including reactivity and interaction between agents. 
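The SNS metric above can be sketched as follows, under two assumptions that are illustrative rather than the exact challenge implementation: STL takes the same form as SPL but over episode time, and PSC is the fraction of timesteps at which the agent keeps the comfortable 0.5-meter distance from every pedestrian.

```python
# Hedged sketch of the Social Navigation Score (SNS): the mean of
# STL (Success weighted by Time Length, assumed here to mirror the
# SPL formula but over episode time) and PSC (Personal Space
# Compliance, assumed here to be the fraction of timesteps at which
# the agent stays a comfortable distance from every pedestrian).
def stl(success: int, shortest_time: float, agent_time: float) -> float:
    return success * shortest_time / max(agent_time, shortest_time)

def psc(min_ped_dists, comfort_dist: float = 0.5) -> float:
    ok = sum(1 for d in min_ped_dists if d >= comfort_dist)
    return ok / len(min_ped_dists)

def sns(success, shortest_time, agent_time, min_ped_dists) -> float:
    return 0.5 * (stl(success, shortest_time, agent_time)
                  + psc(min_ped_dists))
```

Here `min_ped_dists` holds, per timestep, the distance to the nearest pedestrian; these input names are hypothetical.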
Oftentimes, getting to the goal requires negotiating the space, which forces the agent to cross the desired personal-space threshold; in addition, the simulated human agents can behave erratically due to limitations of the behavior models and the space constraints. For future editions, we plan to emphasize high-fidelity simulation of navigation with human-like behaviors.\n\nFor the Sim2Real component of the 2020 Challenge, a significant performance drop was observed during the Sim2Real transfer, due to the reality gap in visual sensor readings, dynamics (\\eg motor actuation), and 3D modeling (\\eg soft carpets). More analysis of the takeaways can be found in the iGibson Challenge 2020\\footnote{\\url{https:\/\/www.youtube.com\/watch?v=0BvUSjcc0jw}} and 2021\\footnote{\\url{https:\/\/www.youtube.com\/watch?v=1uSsds7HSrQ}} videos, along with the winning entry paper~\\cite{yokoyama2021benchmarking}.\n\n\n\\input{challenges\/robothor-objectnav}\n\n\\noindent\n\\begin{figure}[ht!]\n \\vspace{0.1in}\n \\centering\n \\textbf{Task:} Find the Bed\\\\[-0.2in]\n \\includegraphics[width=0.475\\textwidth]{fig\/objectnav.png}\n \\vspace{-0.1in}\n \\caption{\n \\emph{ObjectNav} tasks the agent with navigating to a given object type in the scene. This example shows the agent tasked with navigating to the \\emph{Bed} in the scene. 
The house is courtesy of the ArchitecTHOR dataset \\cite{deitke2022procthor}.\n }\n \\vspace{-0.1in}\n\\end{figure}\n\n\n\n\n\n\\begin{figure}[ht!]\n \\centering\n \\begin{subfigure}[b]{0.46\\textwidth}\n \\centering\n \\includegraphics[width=1\\textwidth]{fig\/multion-cyl.png}\n \\caption{}\n \\label{fig:multion_cyl}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.48\\textwidth}\n \\centering\n \\includegraphics[width=1\\textwidth]{fig\/multion-real.png}\n \\caption{}\n \\label{fig:multion_real}\n \\end{subfigure}\n \\caption{\\emph{Multi-ObjectNav}: (a) Top-down visualization of a MultiON episode with 5 target cylinder objects in a particular sequence; (b) Top-down visualization of a MultiON episode with 5 target real objects in a particular sequence.}\n\\end{figure}\n\n\\subsubsection{Multi-ObjectNav}\nIn Multi-ObjectNav (MultiON)~\\cite{wani2020multion}, the agent is initialized at a random starting location in an environment and asked to navigate to an ordered sequence of objects placed within realistic 3D interiors (Figures~\\ref{fig:multion_cyl},~\\ref{fig:multion_real}). The agent must navigate to each target object in the given sequence and call the \\emph{Found} action to signal the object's discovery.\nThis task is a generalized variant of ObjectNav, whereby the agent must navigate to a sequence of objects rather than a single object.\nMultiON explicitly tests the agent's navigation capability in locating previously observed goal objects and is, therefore, a suitable test bed for evaluating memory-based architectures for Embodied AI.\n\nThe agent is equipped with an RGB-D camera and a (noiseless) GPS+Compass sensor.\nThe GPS+Compass sensor provides the agent's current location and orientation relative to its initial location and orientation in the episode.\nIt is not provided with a map of the environment. 
The action space comprises \\emph{Move Forward} by 0.25 meters, \\textit{Rotate Left} by 30$^\\circ$, \\textit{Rotate Right} by 30$^\\circ$, and \\emph{Found}.\n\nThe MultiON dataset is created by synthetically adding objects in the Habitat-Matterport 3D (HM3D)~\\cite{ramakrishnan_arxiv21} scenes. The objects are either cylinder-shaped or natural-looking (real) objects. As shown in Figure~\\ref{fig:multion_cyl}, the cylinder objects are of the same height and radius, with different colors. However, such objects do not appear realistic in the indoor scenes of Matterport houses. Furthermore, detecting the same object shape in different colors might be easy for the agent to learn. This has led us to include realistic-looking objects that can naturally occur in houses (Figure~\\ref{fig:multion_real}). These objects are of varying sizes and shapes and pose a more demanding detection challenge. \nThere are 800 HM3D scenes and 8M episodes in the training split, 30 unseen scenes and 1050 episodes in the validation split, and 70 unseen scenes and 1050 episodes in the test split.\nThe episodes are generated by sampling random navigable points as start and goal locations, such that the locations are on the same floor and a navigable path exists between them. Next, five goal objects are randomly sampled from the set of Cylinder or Real objects to be inserted between the start and the goal, maintaining a minimum pairwise geodesic distance between them to avoid cluttering. Furthermore, to make the task even more realistic and challenging, three distractor objects (which are not goals) are inserted in each episode. The presence of distractors encourages agents to distinguish between goal objects and other objects in the environment. An episode is considered successful if the agent is able to reach within 1 meter of every goal in the specified order and generate the \\emph{Found} action at each goal object. 
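The ordered-goal success criterion above can be sketched as follows. Input names are illustrative rather than the challenge API, and we assume here that a Found call issued away from the current goal ends the episode; the fraction of goals reached in order underlies the Progress metric used in evaluation.

```python
# Hedged sketch of MultiON episode scoring: the agent must come
# within 1 m of each goal, in order, and call Found there.
# `found_positions` holds the agent position at each Found call and
# `goal_positions` the ordered goal coordinates (both hypothetical
# names, given as 2D tuples).
import math

def multion_score(found_positions, goal_positions, thresh=1.0):
    reached = 0
    for agent_pos, goal_pos in zip(found_positions, goal_positions):
        if math.dist(agent_pos, goal_pos) <= thresh:
            reached += 1
        else:
            break  # assumption: a wrong Found call ends the episode
    progress = reached / len(goal_positions)
    success = reached == len(goal_positions)
    return progress, success
```

Progress can then be weighted by path efficiency (PPL) in the same way SPL weights success.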
Apart from the standard evaluation metrics used in ObjectNav, such as Success Rate (SR) and Success weighted by Path Length (SPL)~\\cite{anderson_arxiv18}, we additionally use Progress and Progress weighted by Path Length (PPL) to measure agent performance. The leaderboard for the challenge is based on the PPL metric. The MultiON challenge was hosted on EvalAI, an open-source platform for evaluating and comparing artificial intelligence methods. The participants implemented their methods in Docker images and submitted them to EvalAI, where the images were run on evaluation servers and the results were posted to the leaderboard.\n\nThe MultiON task is similar to ObjectNav, but at the same time, it tries to solve different challenges. Notably, it aims to inject long-term planning capabilities into the agents. In the ObjectNav task, object detection plays a fundamental role, but the agent does not have to remember all the objects (and their semantic information) encountered in the past. In MultiON, on the other hand, detection is more limited (e.g., cylinders or a small set of natural objects), but the agent must be able to remember the objects already seen. Thus, this task is more tailored to the real world than ObjectNav. In fact, the agents operate in the same environment for a very long time and, therefore, must be able to remember what has already been seen. For this reason, the approaches developed for MultiON, unlike those for ObjectNav, always add a component that stores the semantic information obtained through exploration.\n\nFor the 2021 challenge, a simpler setup was used. The distractors were absent, the objects were only the cylinders, and the dataset was developed on Matterport3D \\cite{chang2017matterport3d}. The Proj-Neural model was used as the baseline \\cite{wani2020multion}. 
This model feeds an egocentric map into an end-to-end policy and achieved 29\\% Progress and 12\\% Success. Surprisingly, two models based on mapping and path planning, SgoLAM (64\\% Progress, 52\\% Success) and Memory Augmented SLAM (Mem-SLAM) (57\\% Progress, 36\\% Success), exceeded the baseline by a large margin, demonstrating that this type of model works well on long-horizon tasks. The model proposed in \\cite{marza2021teaching} ultimately won the 2021 challenge, with 67\\% Progress and 55\\% Success. This model is an evolution of Proj-Neural in which three auxiliary tasks were used to inject information about the map and objects into the agent's internal representation.\n\nIn the 2022 challenge, we noticed some similarities between Mem-SLAM and the winning entry, Exploration and Semantic Mapping for Multi Object-Goal Navigation (EXP-MAP). Both methods are modular, consisting of detection (identifying objects from raw RGB images), mapping (incrementally building a top-down map of the environment using depth observations and relative poses), and planning (navigating to a detected goal object by generating low-level actions) modules. Both record previously seen objects in some memory (e.g., a semantic map of the environment). EXP-MAP achieves 70\\% Progress and 60\\% Success in the Test-Challenge split of the Cylinder objects track of the challenge, and 55\\% Progress and 40\\% Success in the Real objects track. 
These results show that natural objects are more challenging to detect than cylinders.\n\n\n\\noindent\n\\begin{minipage}[l]{0.46\\textwidth}\n \\vspace{0.1in}\n \\centering\n \\includegraphics[width=\\textwidth, height=1.75in]{fig\/rvsu\/challenge_hero_image_small.jpg}\n \\vspace{-0.1in}\n \\captionsetup{type=figure}\n \\captionof{figure}{In the \\emph{RVSU Semantic SLAM} task, an autonomous agent explores an environment to create a semantic 3D cuboid map of objects.}\n \\vspace{-0.1in}\n\\end{minipage}\n\n\\subsubsection{Navigating to Identify All Objects in a Scene}\n\nThe RVSU semantic SLAM challenge tasks participants with exploring a simulation environment to map out all objects of interest therein. \nThis challenge asks a robot agent the question, ``what objects are where?'' within the scene.\nRobot agents traverse a scene, create an axis-aligned 3D cuboid semantic map of the objects within that scene, and are evaluated based on their map's accuracy.\nProviding a semantic understanding of objects can assist a robot's ability to interpret attributes of its environment, such as knowing how to interact with objects and understanding what type of room it might be in.\nThis semantic understanding is typically viewed as a semantic simultaneous localization and mapping (SLAM) problem.\nThe task of semantic SLAM has already been investigated extensively using static datasets such as KITTI~\\cite{Geiger2013IJRR}, Sun RGBD~\\cite{song2015sun} and SceneNet~\\cite{McCormac:etal:ICCV2017}.\nHowever, these static datasets ignore the active capabilities of robots and forego searching the physical action space for the actions that best explore and understand an environment.\nAddressing this limitation, the RVSU semantic SLAM challenge~\\cite{hall2020robotic} helps bridge the gap between passive and active semantic SLAM systems by providing a framework and simulation environments for repeatable, quantitative comparison of both passive and active 
approaches.\n\nParticipation in the challenge is conducted through simulated environments, accessed and controlled using the BenchBot framework~\\cite{talbot2020benchbot}.\nThe environments used are a version of the BenchBot environments for active robotics (BEAR)~\\cite{hall2022bear} rendered using the NVIDIA Omniverse Isaac Simulator\\footnote{\\url{https:\/\/developer.nvidia.com\/isaac-sim}}.\nBEAR provides 25 high-fidelity indoor environments comprising five base environments with five variations thereof.\nBetween variations, objects are added and removed, and lighting conditions are changed.\nAcross environments, there are 25 object classes of interest to be mapped within the challenge.\nThe challenge splits BEAR into two base environments for algorithm development and three for final testing and evaluation.\nThe BenchBot framework enables a simulated robot to explore BEAR using either passive or active control through discretised actions that are pre-defined or actively chosen by the agent, respectively.\nThe action space for robot agents is \\textit{MOVE\\_NEXT} for passive mode, and \\textit{MOVE\\_DISTANCE} and \\textit{MOVE\\_ANGLE} for active mode, with the magnitude of movement defined by users down to a minimum distance of 0.01 m and a minimum angle of 1$^\\circ$.\nBenchBot provides the robot agent access to an RGB-D camera, a laser scanner, and either ground-truth or estimated pose information for the robot immediately after completing any given action.\nThe progression from passive control with ground-truth pose data through to active control with estimated pose data is designed to gradually bridge the gap from passive to active semantic SLAM.\nThe final cuboid map created by the agent within the challenge is evaluated using the new object map quality (OMQ) measure outlined in~\\cite{hall2020robotic}.\nThis evaluation measure considers the quality of every provided object cuboid, in terms of both geometric and semantic accuracy, when compared to its best match in the 
ground-truth map, as well as the number of provided cuboids with no matching ground-truth equivalent and vice versa.\nThe final OMQ score is between 0 and 1, with 1 being the best score.\n\nCurrent results from the RVSU Semantic SLAM challenge have shown that while the challenge is simple in concept, there is still room for improvement from current state-of-the-art methods.\nThe highest score achieved for semantic SLAM was 0.39 OMQ, using ground-truth pose data and passive control.\nDigging deeper into the results, we can see that although the quality of matching cuboids is often good (pairwise quality of up to 0.72), there are too many unmatched cuboids to get a high score.\nWhen competitors bridge the gap from passive to active control, we also commonly see a drop in OMQ of approximately 0.06, despite the agent having more control over the robot's observations.\nThose who participated in both the passive and active control versions of the semantic SLAM task focused their research on how to map a scene given a sequence of inputs, rather than on how to actively explore to maximize understanding of the scene.\nThese results suggest that the most fruitful areas for future research may lie in better filtering out cuboids that do not match any true object, and in how to best exploit active robot control to improve scene understanding.\nThere has also not yet been an attempt at solving this challenge using active control and noisy pose estimation, which adds further difficulty.\n\n\n\\subsubsection{Audio-Visual Navigation}\n\n\nMoving around in the real world is a multi-sensory experience, and an intelligent agent should be able to see, hear and move to successfully interact with its surroundings. 
While current navigation models tightly integrate seeing and moving, they are deaf to the world around them. Motivated by these factors, the audio-visual navigation task was introduced~\\cite{gan2019look,chen_soundspaces_2020}, in which an embodied agent is tasked with navigating to a sounding object in an unknown, unmapped environment using its egocentric visual and audio perception (Figure~\\ref{fig:soundspaces_concept}). This audio-visual navigation task can find applications in assistive and mobile robotics, e.g., robots for search and rescue operations and assistive home robots. Along with the task, the SoundSpaces platform was also introduced: a first-of-its-kind audio-visual simulator in which an embodied agent can move around a simulated environment while seeing and hearing.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{fig\/soundspaces.pdf}\n \\caption{\\emph{AudioGoal} tasks an autonomous agent with finding an audio source in an unmapped 3D environment by navigating to a goal. Here the top-down map is overlaid with the acoustic pressure field heatmap. While audio provides rich directional information about the goal, and audio intensity variation is correlated with the shortest path distance, vision reveals the surrounding geometry in the form of obstacles and free space. An AudioGoal navigation agent should intelligently leverage the synergy of these two complementary signals to successfully navigate in the environment.}\n \\label{fig:soundspaces_concept}\n\\end{figure}\n\n\nAudio-visual navigation is a challenging task because the agent not only needs to perceive the surrounding environment, but also to reason about the spatial location of the sound emitter via the received sound. 
This new multimodal embodied navigation task has gained attention over the past few years, and different methods have been proposed to solve it, including learning hierarchical policies~\\cite{chen_waypoints_2020}, training robust policies with adversarial attacks~\\cite{YinfengICLR2022saavn}, and data augmentation for generalization to novel sounds~\\cite{dynamic_av_nav}. However, the performance of SOTA audio-visual navigation models is still not perfect, and thus we organized the SoundSpaces Challenge\\footnote{\\url{https:\/\/soundspaces.org\/challenge}} at CVPR 2021 and 2022, which aims to promote research on autonomous embodied agents capable of navigating to sounding objects of interest using audio and vision. \n\nMore specifically, in an AudioGoal navigation episode, a sound source is placed at a random location in the environment, and the agent is positioned with a random pose (location and orientation) at the start of the episode. The agent is tasked to navigate to the sounding object using one of four actions from the action space: \\emph{Move Forward}, \\emph{Rotate Left}, \\emph{Rotate Right}, and \\emph{Done}. At each episode step, the agent receives egocentric (noiseless) RGB-D images captured with a $90^\\circ$ field-of-view (FoV) camera, as well as the binaural audio received at its current pose. The episode terminates when the agent executes the \\emph{Done} action, or it runs out of a pre-specified time budget. The agent is evaluated using standard embodied navigation metrics, such as Success Rate (SR) and SPL~\\cite{anderson_arxiv18}. We use SPL as the metric for ranking challenge participants. \n\nWe set up the AudioGoal navigation task on the Matterport3D (MP3D)~\\cite{Chang3DV2017Matterport} scene dataset, chosen for its large scale and split 59\/10\/12 into train\/val\/test scenes for this challenge. 
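As a concrete reference for the SPL ranking metric mentioned above, the following sketch follows the standard definition of Anderson et al. (success weighted by the ratio of shortest-path length to the path length actually travelled, averaged over episodes); the episode values are illustrative:

```python
def mean_spl(episodes):
    # Each episode is (success in {0, 1}, shortest-path length l,
    # length of the path the agent actually took p).  A successful
    # episode scores l / max(p, l); a failed one scores zero.
    return sum(s * l / max(p, l) for s, l, p in episodes) / len(episodes)

episodes = [
    (1, 10.0, 12.5),  # success, path 25% longer than optimal -> 0.8
    (0, 5.0, 9.0),    # failure -> 0.0
    (1, 4.0, 4.0),    # success along the shortest path -> 1.0
]
score = mean_spl(episodes)  # ~0.6
```

SR, by contrast, would simply average the binary success column, ignoring path efficiency.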
SoundSpaces provides audio renderings for MP3D in the form of pre-rendered room impulse responses (RIRs), which are transfer functions that characterize how sound propagates from one point in space to another. For all MP3D scenes, SoundSpaces discretizes them into grids of spatial resolution 1 meter $\\times$ 1 meter and provides RIRs for all pairs of grid points. For the source sound, we use 73\/11\/18 disjoint sounds in our train\/val\/test splits, respectively. Each sound clip is 1 second long. The received sound at every step is the result of convolution between the source sound and the RIR corresponding to the source location and current agent pose in the scene. While the \\emph{Move Forward} action takes the agent forward by 1 meter in the direction it is currently facing if there is a navigable node in the scene grid in that direction, \\emph{Rotate Left} and \\emph{Rotate Right} rotate the agent by $90^\\circ$ in the anti-clockwise and clockwise directions, respectively. The episode terminates when the agent issues the \\emph{Done} action, or it exceeds a budget of 500 steps. \n\nIn SoundSpaces Challenge 2021 and 2022, a total of 25 teams showed interest and 8 teams participated. For SoundSpaces Challenge 2022, we observed some similarities in the model designs of the top two teams. Both models used a hierarchical navigation architecture (inspired by AV-WaN~\\cite{chen_waypoints_2020}), where a high-level (long-term) planner predicts a navigation waypoint in the local neighborhood of the agent at each step, and a low-level (short-term) planner executes atomic actions, such as \\emph{Move Forward} and \\emph{Rotate Left}, to take the agent to the predicted waypoint. Further, agents that leverage the audio-visual cues from the full $360^\\circ$ FoV and train a separate model for stopping are more successful and efficient than the others. 
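The per-step audio rendering described above, in which each received channel is the source clip convolved with the RIR for the current source location and agent pose, can be sketched in plain Python; the toy signals and impulse responses are illustrative, not SoundSpaces data:

```python
def convolve(signal, rir):
    # Discrete convolution: each impulse-response tap contributes a
    # scaled, delayed copy of the source signal to the output.
    out = [0.0] * (len(signal) + len(rir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(rir):
            out[i + j] += s * h
    return out

def render_binaural(source, rir_left, rir_right):
    # One output channel per ear, using the RIR pair indexed by the
    # source location and the agent's current pose.
    return convolve(source, rir_left), convolve(source, rir_right)

# toy source and impulse responses: a direct path plus a delayed echo
left, right = render_binaural([1.0, 0.5], [1.0, 0.0, 0.5], [0.0, 1.0])
```

In practice the simulator performs this convolution with the pre-rendered RIR selected for the agent's grid cell and heading, so the binaural signal changes as the agent moves.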
Moreover, training an AudioGoal navigation agent in the presence of distractor sound sources also results in learning robust navigation policies that boost navigation performance. The presentation videos from the leading teams can be found on the challenge website.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=\\linewidth]{fig\/ss2.pdf}\n \\caption{SoundSpaces 2.0, a continuous, configurable, and generalizable audio-visual simulation platform. It models various acoustic phenomena and renders visual and audio observations with spatial and acoustic correspondence.}\n \\label{fig:ss2}\n\\end{figure}\n\nOne of the limitations of the SoundSpaces platform is that it provides pre-rendered RIRs for fixed grid points and does not allow users to render sounds for arbitrary locations or environments. To tackle this issue, we have introduced SoundSpaces 2.0~\\cite{chen22soundspaces2} (Fig.~\\ref{fig:ss2}), a continuous, configurable and generalizable simulator. This new simulator has enabled continuous audio-visual navigation as well as many other embodied audio-visual tasks. We believe this simulator will take the audio-visual navigation task to the next level. Another important direction for future research is for the agent to reason about the semantics between the sound and objects (\\eg semantic audio-visual navigation~\\cite{chen2021savi} and finding fallen objects~\\cite{gan2022finding}). If the agent could leverage the semantics of sounding objects, it could navigate faster by reasoning about where the object is located in space based on its category information.\n\nWe believe studying audio-visual embodied AI is of vital importance for building truly autonomous robots with rich perception modalities in the real world.\n\n\\subsection{Rearrangement Challenges}\n\nThis section discusses rearrangement challenges. 
Rearrangement is described as a canonical task in Embodied AI that may lead to learning representations that are useful for many downstream tasks~\\cite{batra2020rearrangement}. Here, the agent's goal is to transform a scene from one state to another, or to detect the changes between states. For example, several objects, such as an apple and a banana, may have moved, and the agent is tasked with detecting that they moved and putting them back in their correct locations.\n\n\\noindent\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=.46\\textwidth]{fig\/rvsu\/scd_example.png}\n \\caption{Example of the scene change detection challenge. Between two scenes, some objects are added (blue) and removed (orange); these need to be identified and mapped out.}\n\\end{figure}\n\n\\subsubsection{Scene Change Detection} \n\nThe RVSU scene change detection (SCD) challenge, an extension to the RVSU semantic SLAM challenge, requires identification and mapping of objects that have been added and removed between two traversals of the same base scene~\\cite{hall2020robotic}.\nHuman environments are inherently non-static, with objects frequently being added, removed, or shifted.\nIn order to operate within such environments whilst utilising object maps, it becomes important to be able to identify when these changes have occurred.\nThis challenge examines perhaps the simplest of such scenarios, where some objects are added or removed while all others remain fixed in place.\n\nThe setup for SCD is similar to that of the RVSU Semantic SLAM challenge described previously, but with some differences in challenge setup within BenchBot.\nThe SCD challenge also uses the BEAR dataset~\\cite{hall2022bear}, which already has multiple variants of a set of base scenes.\nVariants differ in that some objects are added and some removed (as desired for the SCD task), with some lighting variations also included to increase challenge difficulty.\nBenchBot enables the switching between environment variants within one SCD 
submission as soon as the robot agent determines it is finished with its first traversal.\nBenchBot supplies the same robot control as before, progressing from passive to active control via discrete actions, wherein each action is followed by an observation containing RGB-D images, a laser scan, and either ground-truth or estimated robot pose data.\nThe SCD challenge utilises a variant of the OMQ evaluation measure~\\cite{hall2020robotic} which evaluates the final 3D object cuboid map output as part of the challenge.\nThis variant introduces the necessity for the map to provide an estimate of the likelihood that an object has been added or removed from the scene between traversals.\nThis state estimation for the object is then combined with the estimation of the label and location of the object to make up the object-level quality score.\nAs before, the best OMQ score possible is 1 and the worst is 0.\n\nThere has been limited engagement with the SCD challenge, and there is much room for improvement.\nIn the CVPR 2022 iteration of this challenge, the highest OMQ score achieved was 0.25.\nThis is considerably lower than the best score in the semantic SLAM challenge, which reached an OMQ of 0.39.\nThis can be attributed in part to the approach that competitors used to solve SCD.\nAll SCD submissions performed semantic SLAM on the two different traversals and did a naive comparison of the resultant cuboid maps.\nThis led to an accumulation of the errors seen across the maps for both traversals.\nThese initial attempts show that there are many directions still to be explored to improve SCD in future years.\nThis may include more targeted approaches to navigation and\/or mapping within the second traversal that utilise the scene knowledge from the first traversal.\nThere is still much research to be done on how to reliably identify and map out changes between scenes.\n\n\\noindent\n\\begin{figure}[ht!]\n \\vspace{0.1in}\n \\centering\n 
\\includegraphics[width=0.47\\textwidth]{fig\/rearrangement.jpg}\n \\vspace{-0.1in}\n \\captionsetup{type=figure}\n \\caption{\\textbf{AI2-THOR Visual Room Rearrangement Challenge.} An agent must change the pose and attributes of objects in a household environment to restore the environment to an initial state.}\n \\vspace{-0.1in}\\label{fig:rearrangement}\n\\end{figure}\n\n\\subsubsection{Interactive Rearrangement}\n\n\n\n\nThe PointNav and ObjectNav tasks have led to substantial advances in embodied AI, and performance on them has steadily improved, with PointNav being nearly solved~\\cite{ramakrishnan_arxiv21}. In light of this fast progress, researchers from nine institutions proposed \\emph{rearrangement} as the next frontier for research in embodied AI~\\cite{batra2020rearrangement}. At a high level, in rearrangement, an embodied agent must interact with its environment to transform the environment from its initial state $s^{\\text{init}}$ to a goal state $s^{\\text{goal}}$. This general formulation of rearrangement leaves much unspecified, namely: (1) which environment? (2) what effectors\/actions are available to the agent? (3) how are states $s^{\\text{init}},s^{\\text{goal}}$ specified? Given the EAI community's focus on building agents capable of assisting humans in everyday tasks, all existing instantiations of the rearrangement task embody agents in household environments and focus on object-based rearrangement: the difference between goal and initial environment states is confined to object poses (position\/rotation) and attributes (e.g., is the object open or closed?). Successful rearrangement in these tasks requires agents to flexibly encode environment states, to dynamically update these encodings as they interact with their environment, and to make long-term plans (frequently of the traveling-salesman variety) that maximize the efficiency of rearrangement. 
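The object-based formulation above can be made concrete with a small sketch; the state schema (a map from object name to position and an openness attribute) and the position tolerance are illustrative assumptions, not any challenge's actual API:

```python
import math

def objects_to_rearrange(s_init, s_goal, pos_tol=0.05):
    # An object needs rearranging when its position or an attribute
    # (e.g. opened/closed) differs between the initial and goal states.
    fix = []
    for name, goal in s_goal.items():
        cur = s_init[name]
        moved = math.dist(cur["pos"], goal["pos"]) > pos_tol
        changed = cur.get("open") != goal.get("open")
        if moved or changed:
            fix.append(name)
    return fix

s_goal = {"apple": {"pos": (0.0, 0.0, 0.0)},
          "fridge": {"pos": (2.0, 0.0, 0.0), "open": False}}
s_init = {"apple": {"pos": (1.0, 0.0, 0.0)},                  # shuffled
          "fridge": {"pos": (2.0, 0.0, 0.0), "open": True}}   # left open
```

The hard part of the task, of course, is that the agent never sees these states directly and must infer them from egocentric observations.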
We now detail the two rearrangement challenges, AI2-THOR Visual Room Rearrangement and TDW-Transport, held at the EAI workshop in past years.\n\nThe AI2-THOR Visual Room Rearrangement (RoomR) task~\\cite{weihs2021rearrangement} occurs in two phases (see Figure~\\ref{fig:rearrangement}). In the \\emph{Walkthrough phase}, the agent explores a room and builds an internal representation of the room's configuration ($s^{\\text{goal}}$). Then, in the \\emph{Unshuffle phase}, the agent is placed within the same environment, but objects within this environment have been randomly moved to different locations and opened\/closed ($s^{\\text{init}}$); the agent must now restore objects to their original states. As this 2-phase RoomR task is quite challenging, a 1-phase variant was also proposed in which the agent enacts the Walkthrough and Unshuffle phases simultaneously, receiving egocentric RGB-D images of the environment in both the $s^{\\text{init}}$ and $s^{\\text{goal}}$ states at each step. In the 2021 RoomR challenge, no participants were able to outperform the baseline model, which used a 2D semantic mapping approach along with imitation learning from a heuristic expert agent. In 2022, however, several exciting approaches were released, resulting in dramatic improvements in performance. For the 1-phase variant, performance leapt from ${\\approx}9\\%$ to ${\\approx}24\\%$ on the \\textsc{FixedStrict} metric on the test set. Advances making this possible included (1) the use of CLIP-pretrained visual encoders~\\cite{khandelwalEtAl2021embodiedclip} and (2) large-scale pre-training using procedurally generated environments~\\cite{deitke2022procthor}. Unlike the end-to-end approaches used for the 1-phase variant, the most successful methods for the 2-phase variant used powerful inductive biases in the form of semantic mapping and planning algorithms. 
In a yet-unpublished work, the 2022 2-phase challenge winner used a voxel-based 3D semantic map and shortest-path planners to build an agent attaining ${\\approx}15\\%$ \\textsc{FixedStrict} on the test set (dramatically beating the baseline performance of $<1\\%$). The differences between the approaches used in the 1- and 2-phase variants are striking: it seems that new algorithms are required to bring fully end-to-end methods to the challenging 2-phase setting.\n\n\n\\noindent\n\\begin{figure}[ht!]\n \\vspace{0.1in}\n \\centering\n \\includegraphics[width=0.47\\textwidth]{fig\/transport_comp.jpg}\n \\vspace{-0.1in}\n \\captionsetup{type=figure}\n \\caption{\\textbf{TDW-Transport Challenge.} In this example task, the agent must transport two objects on the table in one room and place them on the bed in the bedroom. The agent can first pick up the container, put the two objects into it, and then transport them to the target location.}\n \\vspace{-0.1in}\\label{fig:TDW-transport}\n\\end{figure}\n\nThe TDW-Transport Challenge~\\cite{gan2022threedworld} is an object-goal-driven interactive navigation task (see Figure~\\ref{fig:TDW-transport}). In this challenge, an embodied agent is spawned randomly in a house and is required to find a small set of objects scattered around the house and transport them to a desired final location.\nWe also position various containers around the house; the agent can find these containers and place some objects into them. Without using a container as a tool, the agent can only transport up to two objects at a time; using a container, the agent can collect several objects and then transport them together. While containers help the agent transport more than two items, it also takes some time to find them. Therefore, the agent has to decide whether or not to use containers.\n\nThe embodied agent is equipped with an RGB-D camera. The agent has two types of actions: navigation and interactive actions. 
Navigation actions include Move Forward ($\\alpha$ meters), Turn Left ($\\theta$ degrees), and Turn Right ($\\theta$ degrees). Interactive actions include Reach to object, Put into container, Grasp, and Drop. The objective of this challenge is to transport the maximum number of objects within a fixed number of steps, as efficiently as possible. We use the transport rate as the evaluation metric, which measures the fraction of the objects successfully transported to the desired position within a given budget.\n\n\\subsection{Embodied Vision-and-Language}\n\nThis section discusses the embodied vision-and-language challenges. In each challenge, natural language is used to convey the goal to the agent. For example, the agent may be tasked with following instructions to complete a task. Since language is the primary means of human communication, advances in embodied vision-and-language research will make it easier for humans to interact naturally with trained agents. Additionally, language imposes a data-sparse regime: examples cannot be created automatically, since precision in language is directly tied to a specific scene layout (e.g., ``on the left\/right''), and it remains an open challenge whether unimodal representations can be leveraged in this embodied space \\cite{bisk2020}.\n\n\\noindent\n\\begin{minipage}[l]{0.46\\textwidth}\n \\vspace{0.1in}\n \\centering\n \\includegraphics[width=\\textwidth]{challenges\/rxr-habitat\/rxr-habitat-challenge.pdf}\n \\vspace{-0.1in}\n \\captionsetup{type=figure}\n \\captionof{figure}{The Room-Across-Room Habitat Challenge (RxR-Habitat) is a multilingual instruction-following task set in simulated indoor environments requiring realistic navigation over long action sequences.}\n \\vspace{-0.1in}\n \\label{fig:rxr_habitat_challenge}\n\\end{minipage}\n\n\\subsubsection{Navigation Instruction Following}\nNavigation guided by natural language has long been a desired foundational ability of intelligent agents. 
In Vision-and-Language Navigation (VLN), an agent is given egocentric vision in a realistic, previously-unseen environment and tasked with following a path described in natural language, \\eg, \\textit{Move toward the dining table. Go down the hallway toward the kitchen and stop at the sink}. The Room-Across-Room Habitat Challenge (RxR-Habitat) instantiates VLN in simulated indoor environments, provides multilingual instructions, and requires agents to navigate via long action sequences in a realistic, continuous 3D world (Figure~\\ref{fig:rxr_habitat_challenge}). Solving RxR-Habitat would have applications in many domains, such as personal robotic assistants, and lead to a better scientific understanding of the connection between language, vision, and action.\n\nThe RxR-Habitat Challenge takes place in 3D reconstructions of Matterport3D scenes \\cite{matterport3d} and interacts with those scenes using the Habitat Simulator \\cite{savva2019habitat}. We model the agent embodiment after a robot of radius 0.18m and height 0.88m with a camera mount at 0.88m. An episode is specified by a scene, a start location, a language instruction, and the implied path.\nAt each time step, the agent observes egocentric vision in the form of a single forward-facing, noiseless 480$\\times$640 RGB-D image with a 79$^{\\circ}$ HFOV. The agent also receives a natural language instruction in one of three languages: English, Hindi, or Telugu. The action space is discrete and noiseless, consisting of actions $\\{$\\texttt{MOVE\\_FORWARD}, \\texttt{TURN\\_LEFT}, \\texttt{TURN\\_RIGHT}, \\texttt{STOP}, \\texttt{LOOK\\_UP}, \\texttt{LOOK\\_DOWN}$\\}$. Forward movement is 0.25m, and turning and looking actions are performed in 30$^{\\circ}$ increments. Actions that result in collision terminate upon collision, \\ie, there is no wall sliding. 
An episode ends when the agent calls \\texttt{STOP}.\n\nThe dataset used in RxR-Habitat is the Room-Across-Room (RxR) dataset \\cite{rxr}, ported from the high-level discrete VLN environments \\cite{anderson_cvpr18} to the continuous VLN-CE environments \\cite{krantz_vlnce_2020} used in Habitat. The dataset is split into training (Train: 60,300 episodes, 59 scenes), validation in environments seen during training (Val-Seen: 6,746 episodes, 57 scenes), validation in environments not seen during training (Val-Unseen: 11,006 episodes, 11 scenes), and testing in environments not seen during training (Test-Challenge: 9,557 episodes, 17 scenes), each with a roughly equal distribution between English, Hindi, and Telugu instructions. To submit to the RxR-Habitat leaderboard\\footnote{\\url{https:\/\/ai.google.com\/research\/rxr\/habitat}}, participants run inference on the Test-Challenge split and submit the inferred agent paths. The leaderboard evaluates these paths against held-out ground-truth paths. Agent performance is reported as the average of episodic performance. The official comparison metric between the agent's path and the ground-truth path is normalized dynamic time warping (nDTW) \\cite{magalhaes2019effective}, which scores path alignment between 0 and 1, with 1 indicating identical paths. Additional metrics reported for analysis include path length (PL), navigation error (NE), success rate (SR) and success weighted by inverse path length (SPL)~\\cite{anderson_arxiv18}.\n\nRxR-Habitat is incredibly difficult; the interplay between perception, control, and language understanding makes instruction-following an interdisciplinary problem. Realistic environments and unconstrained natural language lead to a long tail of vision and language grounding, and the low-level action space makes learning the relationship between instructions and actions highly implicit. The RxR-Habitat Challenge took place in 2021 and again in 2022. 
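The nDTW comparison metric described above can be sketched under the usual formulation: a dynamic-time-warping alignment cost between paths under Euclidean distance, normalized by the reference-path length and a success-threshold distance, then squashed through an exponential. The 3.0 m threshold below is an illustrative assumption, not a value taken from the challenge:

```python
import math

def ndtw(path, ref, d_th=3.0):
    # DTW cost between the agent's path and the reference path, mapped
    # into (0, 1]; identical paths score exactly 1.
    n, m = len(path), len(ref)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(path[i - 1], ref[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    return math.exp(-cost[n][m] / (m * d_th))
```

Unlike SR or SPL, nDTW rewards fidelity to the entire described path rather than only reaching its endpoint, which is why it is the ranking metric for an instruction-following task.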
The baseline model is a cross-modal attention (CMA) model \\cite{krantz_vlnce_2020} that attends between vision and language encodings, predicts actions end-to-end from observation, and is trained with behavior cloning (nDTW: 0.3086). In the first year, teams failed to surpass the performance of this baseline. However, a significant improvement in SOTA was attained in 2022; the top submission (Reborn \\cite{an20221st}) produced an nDTW of 0.5543 --- an 80\\% relative improvement over the baseline. This was enabled by an effective hierarchy of waypoint candidate prediction, waypoint selection (the discrete VLN task), and waypoint navigation. For waypoint selection, a history-aware transformer was trained in discrete VLN with augmentations including synthetic instructions, environment editing, and ensembling. It was then transferred and tuned in continuous environments. Despite this remarkable improvement, a performance gap still exists between SOTA in continuous versus discrete environments, with human performance even higher. Evidently, this direction of research is still far from saturated.\n\n\\noindent\n\\begin{minipage}[l]{0.46\\textwidth}\n \\vspace{0.1in}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/inst_teaser_v9.pdf}\n \\vspace{-0.1in}\n \\captionsetup{type=figure}\n \\captionof{figure}{ALFRED involves interactions with objects, keeping track of state changes, and references to previous instructions. 
The dataset consists of 25K language directives corresponding to expert demonstrations of household tasks.\n We highlight several frames corresponding to portions of the accompanying language instruction.}\n \\vspace{-0.1in}\n\\end{minipage}\n\n\\subsubsection{Interactive Instruction Following} \n\nALFRED is a benchmark for connecting human language to \\textit{actions}, \\textit{behaviors}, and \\textit{objects} in interactive visual environments.\nPlanner-based expert demonstrations are accompanied by both high- and low-level human language instructions in 120 indoor scenes in AI2-THOR. \nThese demonstrations involve partial observability, long action horizons, underspecified natural language, and irreversible actions. \n\nThe dataset includes over 25K English language directives describing 8K expert demonstrations averaging 50 steps each, resulting in over 428K image-action pairs.\nMotivated by work in robotics on segmentation-based grasping, agents in ALFRED interact with objects visually, specifying a pixelwise interaction mask of the target object.\nThis inference is more realistic than simple object class prediction, where localization is treated as a solved problem.\nExisting beam-search and backtracking solutions are infeasible due to the larger action and state spaces, long horizon, and inability to undo certain actions. Agents are evaluated on their ability to achieve directives in both seen and unseen rooms. Evaluation metrics include success rate (SR), success weighted by path length (SPL), and Goal-Condition success, which measures completed subtasks.\n\nCurrent state-of-the-art approaches in ALFRED use spatial-semantic mapping \\cite{blukis2022persistent,min2021film} to explore and build persistent representations of the environment before grounding instructions. These representations have also been coupled with symbolic planners and modular policies for better generalization to unseen rooms. 
Currently, the best performing agent achieves 40\\% success in seen rooms and 36\\% in unseen rooms.\n\n\\noindent\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=.45\\textwidth]{fig\/teach\/teaser.pdf}\n \\caption{\n In the TEACh Two-Agent Task Completion challenge, the \\textit{Commander}\\ has oracle task details (a), object locations (b), a map (c), and egocentric views from both agents, but cannot act in the environment, only communicate.\n The \\textit{Follower}\\ carries out the task and asks questions (d).\n The agents can only communicate via text.}\n \\label{fig:teach_teaser}\n\\end{figure}\n\n\\subsubsection{Interactive Instruction Following with Dialog}\n\\textbf{T}ask-driven \\textbf{E}mbodied \\textbf{A}gents that \\textbf{C}hat (TEACh) is a dataset of over 3,000 human--human, interactive dialogues and demonstrations of household task completion in the AI2-THOR simulator.\nRobots operating in human spaces must be able to engage in such natural language interaction with people, both understanding and executing instructions and using conversation \\cite{thomason:corl19,Roman2020} to resolve ambiguity \\cite{Nguyen2022} and recover from mistakes.\nA \\textit{Commander}\\ with access to oracle information about a task communicates in natural language with a \\textit{Follower}. 
\nThe \textit{Follower}\ navigates through and interacts with the environment to complete tasks varying in complexity from \texttt{Make Coffee} to \texttt{Prepare Breakfast}, asking questions and getting additional information from the \textit{Commander}\ (Figure~\ref{fig:teach_teaser}).\n\nThere are 12 task types in TEACh with 438 unique combinations of task parameters (e.g., \texttt{Make Salad} with 1 versus 2 slices of \texttt{Tomato}) in 109 AI2-THOR environments.\nOn average, there are more than 13 utterances in each cooperative dialogue, with tasks taking an average of 131 \textit{Follower}\ actions to complete compared to ALFRED's 50 due to both task complexity and non-optimal planning.\nA major difference between the TEACh and ALFRED challenges is edge cases in the environments due to ALFRED's rejection sampling: if a PDDL planner could not resolve an ALFRED task given an initial scene configuration, it was rejected from the data, whereas TEACh scene configurations are rejected only when a \textit{human} cannot resolve them.\nThis decision results in many ``corner cases'' in TEACh that require human ingenuity, for example, filling a pot with water using a cup as an intermediate vessel when the pot itself is too large to fit in the sink basin.\n\nThe Two-Agent Task Completion (TATC) challenge is based on the TEACh data, and involves modeling \textit{both} the \textit{Commander}\ and \textit{Follower}\ agents, which have distinct action and observation spaces but a common household task goal.\nThe \textit{Commander}\ agent has access to a structured representation of the goal and its component parts, as well as search functions to identify the locations and physical appearance of objects in the environment by class or id.\nThe \textit{Follower}\ is analogous to an ALFRED agent, but with a wider action space that includes, for example, pouring liquids from one container to another.\nFurther, object interactions are done via individual $(x,y)$ 
coordinate predictions, rather than the full object masks used in ALFRED, analogous to the click inputs of human users who provided demonstrations.\nThe agents both have a \texttt{communicate} action that adds to a mutually-visible dialogue history, and requires generating text.\n\nTATC agents are evaluated via SR and SPL, similar to ALFRED agents.\nRule-based planning agents for TATC achieve about 24\% SR, with planning corner cases dominating failures.\nA learned \textit{Follower}\ based on the Episodic Transformer~\cite{pashevich:et} with a simple rule-based \textit{Commander}\ that reports the raw text of the next task subgoal as a communication action achieves nearly 0\%.\nWe are eager to see whether mapping-based approaches like those succeeding at ALFRED can adapt to the wider space of tasks and environment corner cases in TEACh.\n\n\n\section{Common Approaches}\n\nThis section presents common approaches used by the winners of the challenges. We discuss large-scale training by scaling up datasets and compute, leveraging visual pre-trained models such as CLIP, the use of inductive biases such as maps, goal embeddings to represent different tasks, and visual and dynamic augmentation to make simulators noisier and closer to reality.\n\n\subsection{Large-Scale Training}%\n\input{directions\/large-scale-training}\n\n\subsection{Visual Pre-Training}\nInitial successes in deep reinforcement learning were largely focused on graphically simplistic environments, \eg Atari games, for which complex visual processing was, in large part, unnecessary. For instance, the seminal work of Mnih et al.~\cite{mnih_nature15} achieved human-level performance on dozens of Atari games using a model with only three convolutional layers. 
Several initial works in embodied AI, in part due to computational constraints when training RL agents, adopted this mindset; for instance, Savva et al.~\cite{habitat19iccv} trained models using a 3-layer image processing CNN for Point Navigation. As embodied agents are, ostensibly, meant to be embodied in the real world, one might expect that they would benefit from image processing architectures designed for use with real images and, indeed, this has proven to be the case. Recent work has shown that modifying existing embodied baseline models by replacing their visual backbones with a CLIP-pretrained ResNet-50 can result in dramatic improvements~\cite{khandelwalEtAl2021embodiedclip}. The top-performing models on the 1-Phase Rearrangement, RoboTHOR ObjectNav, and Habitat ObjectNav leaderboards use variants of this ``EmbCLIP'' architecture~\cite{deitke2022procthor}. Several other top-performing models in other challenges use pretrained vision models for object detection and semantic segmentation (RVSU Semantic SLAM, MultiON, and Two-Phase Rearrangement).\n\n\setlength{\tabcolsep}{4pt}\n\begin{table*}[]\n\small \n\begin{tabular}{lllllclllc}\n\hline\n\textbf{} & \textbf{} & \textbf{} & \multicolumn{3}{c}{\textbf{Best End-to-end}} & \textbf{} & \multicolumn{3}{c}{\textbf{Best Modular}} \\\n\textbf{Challenge} & \textbf{Simulator} & \textbf{} & \textbf{Method} & \textbf{Success} & \textbf{Rank} & \textbf{} & \textbf{Method} & \textbf{Success} & \textbf{Rank} \\ \hline\nObjectNav & Habitat & & Habitat-Web & 60 & 2 & & Stretch & 60 & 1 \\\nAudio-Visual Navigation & SoundSpaces & & Freiburg Sound & 73 & 2 & & colab\_buaa & 78 & 1 \\\nMulti-ON & Habitat & & - & - & - & & exp\_map & 39 & 1 \\\nNavigation Instruction Following & VLN-RxR & & CMA Baseline & 13.93 & 10 & & Reborn & 45.82 & 1 \\\nInteractive Instruction Following & AI2-THOR & & APM & 15.43 & 14 & & EPA & 36.07 & 1 \\\nRearrangement & AI2-THOR & & ResNet18 + 
ANM & 0.5 & 6 & & TIDEE & 28.94 & 1 \\ \hline\n\end{tabular}\n\vspace{-6pt}\n\caption{Performance of the best end-to-end and best modular methods across various challenges.}\n\label{tab:e2e_modular}\n\end{table*}\n\n\subsection{End-to-end vs Modular} \nIn the last few years, two classes of methods have emerged for various embodied AI tasks: (1) end-to-end and (2) modular. The end-to-end methods learn to predict low-level actions directly from input observations. They typically use a deep neural network consisting of a visual encoder followed by a recurrent layer for memory and are trained using imitation learning or reinforcement learning. The earliest applications of end-to-end methods to embodied AI tasks include \cite{lample2016playing, zhu2017target, mirowski2016learning, chaplot2017arnold, savva2017minos, hermann2017grounded, chaplot2017gated}. End-to-end RL methods have also been scaled to train with billions of samples using distributed training~\cite{wijmans2019decentralized} or using tens of thousands of procedurally generated scenes~\cite{deitke2022procthor}. Researchers have also introduced some structure in end-to-end policies, such as using spatial representations~\cite{gupta2017cognitive, parisotto2017neural, chaplot2018active, henriques2018mapnet, gordon2018iqa} and topological representations~\cite{yang2018visual, savinov2018semi, savinov2018episodic}. \n\nModular methods use multiple modules to break down an embodied AI task. Each module is trained for a specific subtask using direct supervision. The modular decomposition typically includes separate modules for perception (mapping, pose estimation, SLAM), encoding goals, global waypoint selection policies, planning, and local obstacle avoidance policies. Because modules are trained separately rather than end-to-end, non-differentiable classical modules can also be used within the embodied AI pipeline. 
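The modular decomposition just described can be sketched as a simple control loop that chains independently trained components; all module names and interfaces below are illustrative assumptions, not the design of any cited system:

```python
class ModularAgent:
    """Illustrative modular navigation agent.

    Each component is trained (or hand-designed) separately, so
    non-differentiable pieces such as a classical path planner can be
    slotted in directly alongside learned modules.
    """

    def __init__(self, mapper, goal_encoder, waypoint_policy, local_planner):
        self.mapper = mapper                    # perception: builds a top-down map
        self.goal_encoder = goal_encoder        # encodes the task goal
        self.waypoint_policy = waypoint_policy  # learned long-term waypoint selector
        self.local_planner = local_planner      # e.g., a classical A*/FMM planner

    def act(self, observation, goal):
        # Perception updates the persistent map and the agent's pose estimate.
        world_map, pose = self.mapper.update(observation)
        goal_embedding = self.goal_encoder(goal)
        # The global policy picks a waypoint; the local planner emits an action.
        waypoint = self.waypoint_policy(world_map, pose, goal_embedding)
        return self.local_planner.plan(world_map, pose, waypoint)
```

Because the interfaces between modules are explicit (maps, poses, waypoints), any single module can be swapped or retrained without touching the rest of the pipeline.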
The earliest learning-based modular methods~\cite{chaplot2020learning, chaplot2020neural, chaplot2020object} showed their effectiveness on various navigation tasks such as Exploration, ImageNav, and ObjectNav. Variants of these methods include improvements in mapping by anticipating unseen parts~\cite{ramakrishnan2020occant} or by using density-based maps~\cite{bigazzi2022focus}; and learning global waypoint selection policies in ObjectNav and ImageNav entirely using offline or passive datasets to improve sample and compute efficiency~\cite{ramakrishnan2022poni,hahn_nrns_2021,mezghani2021memory,wassermanlast}. Recently, modular methods have also been applied to longer-horizon tasks such as Navigation Instruction Following in VLN-CE~\cite{Krantz_2021_ICCV,an20221st,raychaudhuri2021language}, Interactive Instruction Following in ALFRED~\cite{min2021film, liu9planning, murray2022following}, and Rearrangement in AI2-THOR~\cite{sarch2022tidee, trabucco2022simple}.\n\nIn Table~\ref{tab:e2e_modular}, we show the performance of the best end-to-end and modular methods in various 2022 Embodied AI challenges. The table shows that while end-to-end methods are comparable to modular methods on easier and relatively shorter-horizon tasks such as ObjectNav and Audio-Visual Navigation, the performance gap widens as task complexity increases, as in Interactive Instruction Following and Rearrangement. This is likely because as the task horizon increases, the exploration complexity increases exponentially when training end-to-end with just reinforcement learning.\n\n\n\n\subsection{Visual and Dynamic Augmentation}\nVisual and dynamic augmentation of real-world datasets has proven to be a key technique for enabling robotic systems trained in simulation to transfer to unseen environments and even to reality. For years in the robotics and learning community, a prevalent attitude has been that simulation transfers poorly to reality. 
One justification for this perspective is that the dynamics models of most simulations are not good enough to reveal problems that typically occur in real robotic deployments, such as wheel slippage, odometry drift, floor irregularities, nonlinear motor and dynamic responses, and component breakage and burnout. Another justification is that simulated evaluation can reveal problems with systems, but cannot validate them: validation tests for robotic systems must ultimately be performed on-robot.\n\nNevertheless, many existing systems have shown successful transfer to novel and real-world environments by augmenting training datasets with noise, static obstacles, dynamic obstacles, and changes to visual appearance. \nMany approaches add noise to sensors, actions, and even environment dynamics, effectively making each episode occur in a distinctive environment; these techniques have proved useful for translating LiDAR-based policies trained in simulation to the real world \cite{faust2018prm,francis2020long} and for estimating the safety of plans prior to deployment \cite{xiao2021toward}. %\nOther approaches improve performance by adding static obstacles to the environment in simulation, also effectively increasing the space of environments trained on \cite{xiao2021toward}.\nAn interesting example of this, presented at the workshop, involves training in a simulated environment with variable dynamics and using an adaptation module to perform system identification in real environments \cite{kumar2021rma,fu2022coupling}.\n\nHowever, visual policies present other difficulties: a policy trained on one set of objects and lighting conditions is unlikely to transfer to other objects and conditions~\cite{deitke2022procthor}. Adding noise has been used to improve robustness \cite{fang2019scene}, and the RVSU challenge adds distractor objects to encourage robustness to visual clutter~\cite{hall2020robotic}. 
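The sensor- and actuation-noise augmentation described above can be sketched as a thin wrapper around a simulator step; the noise model and the magnitudes here are arbitrary placeholders, not values from any cited system:

```python
import random

def noisy_step(env_step, action, pos_noise_std=0.02, rot_noise_std=1.0,
               sensor_noise_std=0.05):
    """Wrap a simulator step with Gaussian actuation and sensor noise so that
    each training episode effectively occurs in a slightly different world.

    `env_step` is any callable mapping an action dict to a list of sensor
    readings (a stand-in for a real simulator interface).
    """
    # Perturb the commanded action (e.g., forward distance in meters,
    # turn angle in degrees) to emulate actuation error.
    noisy_action = {
        "forward": action["forward"] + random.gauss(0.0, pos_noise_std),
        "turn": action["turn"] + random.gauss(0.0, rot_noise_std),
    }
    observation = env_step(noisy_action)
    # Perturb the returned sensor readings (e.g., depth values) as well.
    return [d + random.gauss(0.0, sensor_noise_std) for d in observation]
```

In practice such wrappers are applied during training only, with the noise distributions ideally fit to measurements from the target robot.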
The RL-CycleGan approach uses style transfer to make simulated environments appear more like the real world \cite{rao2020rl}. Most recently, ProcTHOR~\cite{deitke2022procthor} attempts to address the visual diversity issue by generating large numbers of synthetic environments.\n\nFinally, while the pandemic disrupted many plans for real-world deployments, the iGibson, RoboTHOR, and Habitat challenges all included tests of simulation-trained policies in real deployments~\cite{xia2020interactive,deitke2020robothor,batra2020objectnav}. These environments proved challenging for many policies; nevertheless, many policies were still able to function, and going forward, tests in the real world will be an important validation step for embodied AI agents. As datasets collected from real evaluations increase, the opportunity exists to train policies directly over this real-world data, which has already proved useful in a grasping and manipulation context~\cite{bahl2022human} and for legged locomotion~\cite{smith2022walk}. \n\n\section{Future Directions}\n\nIn this section, we discuss promising future directions for embodied AI, including further leveraging pre-trained models, world models and inverse graphics, simulation and dataset advances, sim2real approaches, procedural generation, generalist agents, and multi-agent and human interaction.\n\n\subsection{Pre-training}\n\nPre-training has powered impressive results in visual recognition~\cite{girshick2014rich}, natural language~\cite{radford2019language,devlin2018bert}, and audio~\cite{oord2016wavenet}.\nPre-trained models can be repurposed through fine-tuning, zero-shot generalization, or prompting to perform diverse tasks.\nHowever, pre-training has not yet found such levels of success in embodied AI. 
\nRecent work has begun to explore this direction, showing that pre-trained models can improve performance and efficiency and expand the scope of solvable tasks.\nThis section discusses how pre-training can help embodied AI with visual pre-training objectives, the role of scale in pre-training, pre-training for task specification, and pre-trained behavioral priors.\n\nOne promising area is new pre-training objectives for visual representations in embodied AI.\nPrior work shows supervised pre-training is effective for navigation and manipulation tasks \cite{yen2020learning, shah2021rrl, sax2018mid}.\nHowever, a large-scale study \cite{wijmans2019dd} showed that supervised pre-training of visual representations on ImageNet could hurt downstream performance in PointNav.\nEmbCLIP shows that unsupervised pre-training with a pre-trained CLIP visual encoder is effective for various embodied AI tasks \cite{khandelwalEtAl2021embodiedclip}.\nOther works explore pre-training with masked auto-encoders~\cite{xiao2022masked}, contrastive learning~\cite{du2021curious,nair2022r3m,sermanet2018time}, or other SSL objectives~\cite{yadav2022offline}.\nFuture work may explore tailoring pre-training objectives specifically for control.\nFor example, pre-training may account for the temporal aspect of decision making~\cite{gregor2018temporal}, be embodiment agnostic~\cite{stadie2017third}, be curiosity-driven~\cite{du2021curious}, or avoid pixel reconstruction~\cite{zhang2020learning}. 
Analogous to pre-trained visual representations for visual navigation, audio-visual representations~\cite{alwassel2020self,Morgado2021AudioVisualID,mittal2022learning} can be adopted for tasks with multi-modal inputs~\cite{gan2019look,chen_soundspaces_2020} in future work.\n\nAnother way pre-training may benefit embodied AI is through scaling model and dataset size.\nCurrently, works use a variety of datasets for pre-training, such as Epic Kitchens \cite{damen2022rescaling,damen2018scaling,VISOR2022}, YouTube 100 days of hands \cite{shan2020understanding}, Something-Something \cite{goyal2017something}, Ego4D \cite{grauman2022ego4d}, and RealEstate10k \cite{46965}.\nThe curation of data for pre-training matters, with pre-training on unlabeled curated datasets outperforming labeled datasets on downstream tasks \cite{xiao2022masked}.\nIncreasing model size also promises benefits, with larger ResNets showing better performance \cite{wijmans2020train}.\nPrior work pre-trains ResNet-50 \cite{nair2022r3m,khandelwalEtAl2021embodiedclip,yadav2022offline}, CLIP \cite{khandelwalEtAl2021embodiedclip}, or ViT models \cite{xiao2022masked}.\nWith the success of neural scaling laws~\cite{kaplan2020scaling} in vision and language, future work in embodied AI may translate these lessons to pre-training larger models with larger datasets.\n\nPre-training also provides a way to specify diverse tasks for agents easily.\nOpen-world agents must be able to flexibly complete tasks with unseen goals or task specifications.\nPrior work shows that pre-trained models can provide dense reward supervision \cite{cui2022can, shao2021concept2robot, chen2021learning}. \nOther work shows that pre-trained models can be leveraged for open-world object detection, allowing for zero-shot generalization to new goals in navigation tasks \cite{al2022zero,gadre2022clip,majumdar2022zson}. 
\nAdditionally, some methods explore generalization to new language instructions by employing pre-trained models \cite{shridhar2022cliport}. \nThere are further opportunities to use such models for zero-shot generalization to completing new tasks, new goals, or flexibly specifying goals in different input modalities.\n\nFinally, pre-training can learn behavioral priors for interaction.\nThe previously discussed pre-training objectives primarily focus on learning representations of input modalities.\nHowever, this leaves out a critical part of embodied AI: interacting with the environment.\nRather than pre-training representations, pre-training can also learn models of behavior that account for agent actions.\nOne line of work pre-trains models with supervised learning to predict actions from sensor inputs on large interaction datasets and then fine-tunes this model on specific downstream tasks~\cite{baker2022video}.\nOther work learns skills or reusable behaviors from offline datasets that can adapt to downstream tasks~\cite{pertsch2020accelerating,gupta2019relay}.\nFuture work may explore how scaling dataset size, model size, and compute can pre-train behavioral policies better suited for fine-tuning on downstream tasks.\n\n\n\subsection{World models and inverse graphics}\nAs previously discussed, semantic and free-space maps have been hugely successful in enabling high performance and efficient learning across embodied-AI tasks (\eg in navigation~\cite{chaplot2020learning} and rearrangement~\cite{trabucco2022mass}). These mapping approaches are successful as they provide a simple, highly structured model of the agent's environment that enables explicit planning. 
The simplicity of existing mapping approaches is also one of their major limitations: as embodied tasks become more complex, they require agents to reason about new semantic categories and new types of interaction (\eg arm-based manipulation).\nExtending existing approaches to include new capabilities is generally possible but non-trivial, often requiring substantial human effort. For instance, a 2D free-space mapping approach successful for PointGoal Navigation~\cite{chaplot2020learning} was explicitly extended to include semantic mapping channels so as to enable training agents for ObjectGoal Navigation~\cite{chaplot2020object}. \nThese challenges in mapping raise an important question: how can we build flexible models of an agent's environment that can be used for general purpose task planning? We identify two exciting directions toward answering this question: end-to-end trainable world models and game-engine simulation via inverse-graphics.\n\nAt a high level, a world model $W$ is a function that, given the state of the environment $s_t$ at time $t$ and an agent action $a$, produces a prediction $W(s_t,a)=\widehat{s}_{t+1}$ of the state of the world at time $t+1$ if the agent were to take action $a$~\cite{ha2018worldmodels}. Iterative application of the world model can therefore simulate agent trajectories and thus support model-based planning. 
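Planning by iterating $W$ can be illustrated with a toy sketch that assumes a deterministic, fully observed world model (assumptions that, as discussed in the surrounding text, real embodied settings do not satisfy):

```python
def plan_with_world_model(world_model, state, candidate_action_seqs, reward_fn):
    """Toy model-based planning: roll each candidate action sequence
    through the world model W(s, a) -> s' and keep the best-scoring one.
    """
    best_seq, best_return = None, float("-inf")
    for actions in candidate_action_seqs:
        s, total = state, 0.0
        for a in actions:
            s = world_model(s, a)  # predicted next state
            total += reward_fn(s)
        if total > best_return:
            best_seq, best_return = actions, total
    return best_seq

# 1-D toy world: the state is a position, actions translate it, and the
# reward is closeness to a goal at position 3.
W = lambda s, a: s + a
reward = lambda s: -abs(s - 3)
print(plan_with_world_model(W, 0, [[1, 1, 1], [1, -1], [2, 1]], reward))  # -> [2, 1]
```

Real systems replace the exhaustive loop with sampling-based or gradient-based trajectory optimization, and the learned $W$ operates on latent states rather than raw positions.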
As may be expected, building and training world models is made challenging by several factors: (1) full state information ($s_t$) is generally not available, as agents have access only to partial, egocentric observations; (2) the dynamics of an environment are frequently stochastic and thus cannot be predicted deterministically; (3) many details encoded in a state are irrelevant to task completion (\eg minor color or texture variations of objects) and attempting to predict these details needlessly complicates training; and (4) collecting high-quality training data for the end-to-end training of world models may require the design of increasingly complex physical states (\eg a tower of plates to be knocked over). While more work is needed before world models will become a ubiquitous tool for embodied AI agents, recent work has shown that world models can be successfully used to train agents to play Atari games~\cite{hafner2021discreteworldmodels} and to build navigation-only models of embodied environments~\cite{koh2021pathdreamer}.\n\nAs world models are meant to be broadly applicable and learned from data, they frequently eschew inductive biases and use general purpose architectures. The disadvantage of this approach is clear: we have well-understood models of physics that should not have to be re-learned from data for every task. Moreover, we have simulators designed explicitly to simulate 3D objects and their physical interactions: video game engines. These observations suggest another approach: rather than learning an implicit world model, can we use techniques from inverse-graphics to back-project an agent's observations to 3D assets within a scene in a game engine? Once this back-projection is complete, the game engine can be used to perform physical simulations and planning. 
This approach, which can be thought of as world modeling with strong inductive biases, has been used successfully to build models of intuitive physics in constrained settings~\cite{wu2017learningtoseephysics}. While this approach appears very promising, it does present some challenges: (1) the problem of inverse graphics is especially challenging in this setting, as de-rendered objects must be in physically plausible relationships with one another for simulation to be meaningful, and (2) game engines are, generally, non-differentiable and can be slow. Nevertheless, this approach of explicitly bringing our understanding of physical laws to world models seems a promising direction toward building embodied models that can physically reason and plan.\n\n\subsection{Simulation and Dataset Advances}\n\n\nOne factor in improving the reliability and scope of embodied AI research in the future will be the continued improvement of simulation capabilities and realism, and an increase in the scale and quality of the 3D assets used in simulation.\nRepeatable, quantitative analysis of embodied AI systems at scale has been made possible through the use of simulation.\nAs research in embodied AI continues to grow and tackle increasingly complex problems within increasingly complex scenes, the needs placed on simulation environments and assets will increase. \n\nOne important area of improvement for simulation environments is physics realism during agent-object interaction. Past simulation environments have solidly supported both abstracted~\cite{ai2thor,puig2018virtualhome} and rigid-body physics-based agent interactions~\cite{shen2020igibson, szot2021habitat, gan2020threedworld, ehsani2021manipulathor}. There has been considerable progress in physics simulation of flexible materials (rope, cloth, soft bodies)~\cite{lin2020softgym, seita_bags_2021}, fluids~\cite{fu2022rfuniverse}, and contact-rich interaction (e.g. 
nut-and-bolt)~\cite{narang2022factory}, leveraging state-of-the-art physics engines like PyBullet~\cite{coumans2021} and NVIDIA's PhysX\/FleX. Some environments, like iGibson 2.0~\cite{li2021igibson}, even attempt to go beyond kinodynamic simulation and use approximate models to simulate more complex physical processes such as thermodynamics. However, all of these simulations are still far from perfect and often face a stark trade-off between fidelity and efficiency. More efficient and realistic simulation of agents' physical interactions with all elements of their environment can greatly improve the applicability of simulation-trained embodied AI to real-world problems.\n\nWith the prevalence of vision sensors, increased visual realism has also become imperative for research intended to translate to the real world. This has been aided in recent years by new graphics technology such as real-time ray tracing.\nAn example of how these advances can improve visual realism can be found within iterations of the RVSU challenge~\cite{hall2020robotic} %\nthat recently migrated to NVIDIA's Isaac Omniverse\footnote{see \url{https:\/\/developer.nvidia.com\/blog\/making-robotics-easier-with-benchbot-and-isaac-sim\/} for details}. Yet, rendering speed can still become a bottleneck as the number of objects and light sources in a scene increases. \n\nAside from advances in computer graphics, visual realism also relies on high-quality 3D assets of scenes and objects. It has been standard practice for embodied AI researchers to benchmark navigation agents in large-scale static scene datasets like Matterport3D~\cite{matterport3d}, Gibson~\cite{xiazamirhe2018gibsonenv}, and HM3D~\cite{ramakrishnan2021habitat}. On the other hand, interactive scenes have been quite limited. 
iGibson 2.0~\cite{li2021igibson} provides fifteen fully interactive scenes with added clutter that aim to capture the messiness of the real world, and Habitat 2.0~\cite{szot2021habitat} similarly converts a subset of an existing static dataset~\cite{replica19arxiv} to become fully interactive. ProcTHOR~\cite{deitke2022procthor} recently attempted to scale up the effort and procedurally generate fully interactive scenes with realistic room structures and object layout.\n\nMany object datasets have been proposed and heavily utilized by embodied AI researchers in the past years~\cite{chang2015shapenet,mo2019partnet,xiang2020sapien,calli2017yale,srivastava2022behavior,downs2022google,collins2022abo}. Although increased scale and quality have been the general trend for these datasets, it remains extremely costly to make them usable for interactive tasks. For example, most of the objects in these datasets do not support interaction, such as the ability to open cabinets. Such work not only requires modifying meshes, but also requires a tremendous amount of annotation to provide part-level and articulation annotation, as was done in the PartNet and PartNet-Mobility datasets~\cite{mo2019partnet, xiang2020sapien}. Similarly, it requires additional annotation and mesh editing to support object states (\eg whether the object is cookable or sliceable) for the BEHAVIOR dataset~\cite{srivastava2022behavior} or in AI2-THOR~\cite{ai2thor}. Yet, these annotations are essential as we ramp up the complexity of embodied AI tasks.\n\nAnother important aspect of realistic simulation is its multimodal nature; among the most important modalities is auditory perception. Existing acoustic simulations like SoundSpaces~\cite{chen_soundspaces_2020} allow the agent to move around in the environment with both visual and auditory sensing to search for a sounding object. However, SoundSpaces pre-computes the room impulse response (RIR) based on scene geometry and is not configurable. 
Recent work like SoundSpaces 2.0~\cite{chen22soundspaces2} (Fig.~\ref{fig:ss2}) extended the simulation to make it continuous, configurable, and generalizable to arbitrary scene datasets, which enables the agent to explore the acoustics of the space even further.\n\nIn addition, tactile sensing is also critically important for future simulation environments. As these sensors become more cost-efficient, robots will likely be equipped with these new sensing capabilities in the foreseeable future. Researchers have made tremendous progress in tactile simulation~\cite{narang2021sim, agarwal2021simulation} in the past years, which can unlock significant potential for multi-modal embodied AI research. \n\n\n\n\subsection{Sim2Real Approaches}\n\nAs the embodied AI community grows, and benchmarks in simulation continue to improve, a fundamental question remains: how well does this progress translate to the real world? Towards answering this question, the embodied AI community has made significant efforts in 1) building infrastructure to facilitate sim2real transfer on hardware, 2) providing support for researchers across the world to evaluate policies in the real world, and 3) developing sim2real adaptation techniques. \n\nSignificant advances have been made in recent years on real-world hardware targets, with the emergence of low-cost robots for evaluation \cite{pyrobot2019,kemp2022design} and open-source infrastructure for sim2robot deployment \cite{habitat2020sim2real, talbot2020benchbot, deitke2020robothor}. These advances have lowered the barrier to entry for robotics and enable the embodied AI community to evaluate the performance of various research algorithms both in simulation and on real-world robots. Currently, each approach is limited to a specific simulator or a limited set of robot platforms. 
A key future direction is for these translation technologies to become ubiquitous interfaces, with support for any simulator or physical robot platform required by the researcher.\n\nBy comparing the performance of policies in simulation and the real world, researchers are able to identify flaws in simulator design that lead to poor sim2real transfer \cite{habitat2020sim2real}, and develop novel methods to overcome the sim2real gap. Common approaches for bridging the sim2real gap include domain randomization \cite{tobin2017domain, anderson2020sim} and domain adaptation, a technique in which data from a source domain is adapted to more closely resemble data from a target domain. Prior works leveraged GAN techniques to adapt the visual appearance of objects from sim-to-real \cite{rao2020rl}, and other works built models \cite{truong2021bi, truong2022kin2dyn, deitke2020robothor}, or learned latent embeddings of the robot's dynamics \cite{truong2020learning, kumar2021rma, yu2017preparing}, to adapt to the actuation noise found in the real world. Models of real-world camera and actuation noise have since been integrated into simulators and included as part of the Habitat, RoboTHOR, RVSU, and iGibson Challenges, thereby improving the realism of these challenges and decreasing the sim2real gap. Continuing this close integration between real-world evaluation and improving simulators and benchmarks will help accelerate the speed of progress in robotics research.\n\nA final future direction is addressing the differences between simulated and real-world sensorimotor interfaces. It is currently common for actuation to be broken into discretized chunks and for simulated sensor inputs to be treated the same as real-world inputs. While simulators and datasets will continue to advance, there will likely always be a difference between emulated and real-world sensorimotor experiences. 
Research approaches that leverage simulated data to learn policies, then embrace the limitations of these policies when transferring to real-world scenarios, have begun to emerge in recent years \\cite{rana2021zero}. This is a start, but approaches like these will need to be expanded upon in the future. \n\n\n\\subsection{Procedural Generation}\n\\input{directions\/procedural-generation}\n\n\\subsection{Generalist Agents}\n\\input{directions\/generalist-agents}\n\n\n\\subsection{Multi-Agent \\& Human Interaction}\n\n\nAnalogous to social learning in humans, it is desirable that embodied agents can observe, learn from, and collaborate with other agents (including humans) in their environment. The advanced and realistic simulated environments being developed for Embodied AI research will serve as virtual worlds for agent-agent and human-agent interaction. The two pillars for social, multi-agent, and human-in-the-loop embodied agents are (1) accurately simulating a subset of agent and human behavior relevant to a given embodied task and (2) creating realistic benchmarks for multi-agent and human-AI collaboration.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.47\\textwidth]{fig\/multi-agent\/furnmove.pdf}\n \\caption{Furniture Moving~\\cite{jain2020cordial} is a collaborative multi-agent task for agents to move a heavy furniture item.}\n\\end{figure}\n\nImmersing humans in simulation creates an opportunity for a new class of experiences and user studies that involve human-virtual agent interaction, data collection of human demonstrations at scale in controlled environments, and creation of events and visualizations that are impossible or irreproducible in real scenarios. Some examples towards this goal include VirtualHome~\\cite{puig_cvpr18} where programs are collected and created to model human behaviors along with animated atomic actions such as walk\/run, grab, switch-on\/off, open\/close, place, look-at, sit\/standup, touch. 
\nTEACh~\\cite{teach} collects human instructions, demonstrations, and question answers from humans who interact with the simulator through a web interface, while BEHAVIOR uses virtual reality to collect high-fidelity human demonstrations directly in the action space of a simulated robot agent~\\cite{srivastava2022behavior}.\nTo train policies, modeling the task-relevant aspects of human behavior is a primary focus. \nIn challenges such as SocialNav, human agents are simulated following a simple model of interactions between agents. \nLooking forward, with robust motion-capture models~\\cite{rong2021frankmocap,lugaresi2019mediapipe} and human behavior animation~\\cite{won2020scalable,2021-TOG-AMP}, learning from large-scale human-activity datasets~\\cite{gu2018ava,smaira2020short,damen2022rescaling,grauman2022ego4d} is an exciting prospect for modeling human behaviors in simulation. To train and transfer these policies to the real world, we must develop low-shot approaches and realistic benchmarks to learn socially intelligent agents.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=0.47\\textwidth]{fig\/multi-agent\/wah.pdf}\n \\caption{Watch and Help encourages social intelligence where an agent learns in the presence of human-like teachers (image credits: Puig~\\etal~\\cite{puig2021watchandhelp}).}\n\\end{figure}\n\nSeveral benchmarks have helped make progress within the space of multi-agent and social learning in embodied AI. Within AI2-THOR, collaborative task completion~\\cite{jain2019two} and furniture moving~\\cite{jain2020cordial} were among the first benchmarks for multi-agent learning in embodied AI, focused on tasks that cannot be completed by a single agent. While abstracted gridworlds~\\cite{jain2021gridtopix} provide a faster training ground for such tasks, efficiently scaling beyond 2-3 agents with high visual fidelity is challenging. 
Emergent communication~\\cite{Patel_2021_ICCV} and emergent visual representations~\\cite{weihs2020learning} provide examples of learning heterogeneous agents with specialized skills. SocialNav in iGibson presents early steps towards robot learning for mobility around humans and other moving objects within the environment. Within VirtualHome, the watch-and-help~\\cite{puig2021watchandhelp} benchmark enabled few-shot learning of policies that can interact with a human-like agent to replicate demonstrations in an unseen environment.\n\nOverall, simulated environments offer a scalable platform for procedural training and testing of interactive policies, potentially addressing some of the limiting challenges inherent to research on human interaction: scaling up with safety and speed, standardizing environments to support reproducible research, and running a minimum set of procedural tests and benchmarks before deploying on real robots. Progress on all these fronts requires the integration and convergence of contributions from diverse fields such as graphics, animation, and simulation, towards fully functional, realistic and interactive virtual environments. 
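As one concrete, purely illustrative instance of the simple interaction models used to simulate human agents in challenges like SocialNav, a social-force-style update drives each simulated pedestrian toward its goal while repelling it from nearby agents. All parameter names and values below are assumptions for the sketch, not the model used by any specific challenge:

```python
import numpy as np

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.0, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a social-force-style pedestrian model (illustrative).

    pos, vel, goal: (2,) arrays for this agent; others: (N, 2) positions of
    nearby agents. v0 is the preferred speed, tau a relaxation time, and
    (A, B) set the strength and range of agent-agent repulsion.
    """
    # Drive toward the goal at preferred speed v0 with relaxation time tau.
    e_goal = (goal - pos) / (np.linalg.norm(goal - pos) + 1e-9)
    f = (v0 * e_goal - vel) / tau
    # Exponentially decaying repulsion from each nearby agent.
    for q in others:
        d = pos - q
        r = np.linalg.norm(d) + 1e-9
        f = f + A * np.exp(-r / B) * d / r
    vel = vel + dt * f
    return pos + dt * vel, vel
```

Even a model this simple produces goal-seeking agents that avoid collisions, which is often sufficient for training navigation policies around moving "humans" in simulation.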
\n\n\\subsection{Impact of Embodied AI}\n\nWhether in simulation or reality, embodied AI research focuses on embodied tasks in the hope of delivering on the fundamental promise of AI: the creation of embodied agents, such as robots, which learn, through interaction and exploration, to creatively solve challenging tasks within their environments.\nMany embodied AI researchers believe that creating intelligent agents that can solve embodied tasks will produce outsized real-world impacts.\nIncreasingly capable robotic platforms and effective sim-to-real techniques make it easier to transfer learned policies to the real world.\nEven small advances at interesting embodied tasks could serve as the foundation for technologies that could improve the lives of people with disabilities or free able-bodied humans from mundane tasks.\nHowever, these advances, as with all automation, could result in disruptions such as the elimination of jobs or disempowerment of individuals.\nWe must be careful to ensure that the benefits of embodied AI become available to all and do not reinforce inequality.\nTherefore, the embodied AI community has promoted discussion of these issues in the hope that it will guide us towards more equitable solutions.\n\n\n\n\\section{Conclusion}\n\nIn this paper, we presented a retrospective on the state of Embodied AI research. We discussed 13 different challenges that make up a testbed for a suite of embodied navigation, interaction, and vision-and-language tasks. Over the past 3 years, we observed large-scale training, visual pre-training, modular and end-to-end training, and visual \\& dynamic augmentation as common approaches to many of the top challenge entries. 
We discuss improvements to pre-training, world models and inverse graphics, simulation and dataset advances, sim2real, procedural generation, generalist agents, and multi-agent \\& human interaction as promising future directions in the field.\n\n\n\n\n\\section*{Contributions}\n\n\\paragraph{Matt Deitke} led the planning, outline, and coordination of the paper; worked on the abstract, introduction, \\& conclusion and worked on the ObjectNav section, the large-scale training section, the procedural generation, and the generalist agents section.\n\n\\paragraph{Yonatan Bisk} said we should do this, attended a few planning meetings, but then delegated and Matt really ran with it.\n\n\\paragraph{Tommaso Campari} co-wrote the section on Multi-ObjectNav challenge.\n\n\\paragraph{Devendra Singh Chaplot} worked on Habitat Challenge sections and the end-to-end vs modular subsection.\n\n\n\\paragraph{Changan Chen} worked on the audio-visual navigation section and the simulation and dataset advances section.\n\n\\paragraph{Claudia P\\'{e}rez-D'Arpino} worked on the Introduction, Interactive and Social PointNav, and the Multi-Agent \\& Human Interaction sections.\n\n\\paragraph{Anthony Francis} worked on the Introduction, What Is Embodied AI, and Sim to Real sections, and edited other sections.\n\n\\paragraph{Chuang Gan} worked on the rearrangement challenges section.\n\n\\paragraph{David Hall} worked on the RVSU challenge sections and provided some editing on the simulation and dataset advances section.\n\n\\paragraph{Winson Han} created the Figure 1 cover graphic.\n\n\\paragraph{Unnat Jain} worked on audio-visual navigation, multi-object navigation, and multi-agent sections.\n\n\\paragraph{Jacob Krantz} worked on the challenge section on Navigation Instruction Following.\n\n\\paragraph{Chengshu Li} worked on the Interactive and Social PointNav section and the Simulation and Dataset Advances section in Future Directions.\n\n\\paragraph{Sagnik Majumder} worked on the 
audio-visual navigation section.\n\n\\paragraph{Roberto Mart\\'{i}n-Mart\\'{i}n} worked on the What Is Embodied AI section, and the Interactive and Social PointNav section.\n\n\\paragraph{Sonia Raychaudhuri} co-wrote the section on Multi-ObjectNav challenge.\n\n\\paragraph{Mohit Shridhar} worked on the interactive instruction following section for challenge details and the generalist agents section for future directions. \n\n\\paragraph{Niko S\\\"{u}nderhauf} worked on the RVSU challenge sections.\n\n\\paragraph{Andrew Szot} worked on the pre-training section for future directions.\n\n\\paragraph{Ben Talbot} worked on the RVSU challenge sections and Sim2Real Approaches advances section.\n\n\\paragraph{Jesse Thomason} worked on the interactive instruction following and interactive instruction following with dialog sections of the challenge details, and the multi-agent \\& human interaction section of future directions.\n\n\\paragraph{Alexander Toshev} worked on Social and Interactive Navigation section.\n\n\\paragraph{Joanne Truong} worked on the PointNav section for challenge details, and the Sim2Real approaches section for future directions.\n\n\\paragraph{Luca Weihs} worked on the rearrangement section for challenge details, the visual pre-training section for common approaches, and the world models and inverse graphics section for future directions.\n\n\\paragraph{Dhruv Batra, Angel X. Chang, Kiana Ehsani, Ali Farhadi, Li Fei-Fei, Kristen Grauman, Aniruddha Kembhavi, Stefan Lee, Oleksandr Maksymets, Roozbeh Mottaghi, Mike Roberts, Manolis Savva, Silvio Savarese, Joshua B. Tenenbaum, Jiajun Wu} advised and provided feedback on the draft, workshop, and\/or challenges.\n\n\n\n\n\n\\section*{References}\n\n\\singlespace\n\\renewcommand{\\section}[2]{}\n\\bibliographystyle{plain}\n\n\n\\section{1. 
Experimental Methodology} \\label{sec:experimental_methodology}\n\n\\subsection{Preparation of lipid-coated particles}\n\nFluorescently labeled, lipid-coated particles were created by coating silica micro-beads with a supported lipid bilayer (SLB) containing a minority fraction of fluorescently tagged lipid. \n1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) and 1,2-dioleoyl-sn-glycero-3-phospho-L-serine (DOPS) were purchased from Avanti Polar Lipids.\nAtto 647-1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE-Atto 647) was purchased from ATTO-TEC GmbH.\nSilica microspheres (diameter 2.5 $\\mu$m; catalog code: SS05000) were purchased from Bangs Laboratories.\nSmall unilamellar vesicles (SUVs) were formed using an established sonication method \\cite{bakalar2018size}.\nIn brief, a lipid film containing DOPC, 5\\% DOPS, and 0.5\\% DOPE-Atto 647 was dried under nitrogen and then under vacuum for 30 minutes.\nThe film was rehydrated in Milli-Q (MQ) water to 0.2 mg\/mL lipids, sonicated at low power using a tip sonicator (Branson SFX250 Sonifier) at 20\\% of maximum, 1 s\/2 s on\/off, for three minutes. \nMOPS buffer was added at a final concentration of 50 mM MOPS, pH 7.4, 100 mM NaCl to the resulting SUV mixture.\n\nSilica microspheres were cleaned using a 3:2 mixture of sulfuric acid:hydrogen peroxide (Piranha) for 30 minutes in a bath sonicator, spun at 1000 g, and washed 3 times before being resuspended in MQ water. \nTo form SLBs on the beads, 50 $\\mu$L of SUV solution was mixed with 10 $\\mu$L of the cleaned bead suspension. \nThe bead\/SUV mixture was incubated for 15 minutes at room temperature while allowing the beads to sediment to the bottom of the centrifuge tube. \nBeads were washed 5 times with MQ water by gently adding\/removing the liquid without resuspending the beads into solution. 
\nThe fluidity of the SLB was verified by imaging beads on a glass coverslip at high laser intensity, where the diffusion of labeled lipids was visible after photo-bleaching a small region. \nLipid-coated beads were deposited into a chamber containing MQ water, which was sealed to eliminate drift.\nThe beads settled down to the bottom of the chamber and all experiments were conducted in 2D.\n\n\n\n\n\n\\subsection{Optical tweezer setup and calibration}\n\nAn array of moving harmonic traps was generated using optical tweezers (Tweez 305, Aresis Ltd; Ljubljana, Slovenia), using an IR laser (1064 nm) with a maximum power of 5 W continuous wave (CW).\nWe selected a trap-to-trap switching rate of 100 kHz to ensure that the particles will effectively feel a continuous harmonic potential.\nWe used a 16 $\\times$ 16 array of traps, which results in $\\approx 2.5$ ms time delay to illuminate all trap positions.\nThis time delay is significantly smaller than the Brownian and oscillatory convection timescales in our system, ensuring that the particles experience a continuous harmonic potential.\nA custom MATLAB script was written to construct a time trajectory of oscillatory trap positions for each cell lattice position and incorporated into the tweezer software.\nThe trap focus was adjusted to the mid-plane of the colloids sitting above the substrate.\nLaser powers were adjusted from 0.05-0.5 W to vary the trap stiffness from $\\kappa = 0.5$-6 $kT\/\\mu\\mathrm{m}^2$.\n\n\nThe trap stiffness $\\kappa$ was calibrated by measuring the equilibrium probability distribution of the particles in a stationary array of traps.\nFor each laser power, $\\kappa$ was obtained by binning particles by their radial position $r$ from the center of the trap and fitting the binned data to a Boltzmann distribution, $P(r) = (\\kappa\/2\\pi)\\mathrm{e}^{-\\kappa r^{2}\/(2kT)}$.\nAn example of a distribution and fit is shown in Fig.~\\ref{Fig:SI1}.\nWe verified that there are no variations in trap 
stiffness between different lattice positions in the array.\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.7\\linewidth]{Figs\/SI\/SIFig_Trap_Stiffness.jpg}\n\t\t\\caption{%\n\t\t\tMeasurement of trap stiffness $\\kappa$ from the equilibrium probability distribution of particles diffusing in a harmonic well generated by optical tweezers. \n\t\t\tData are fit to a Boltzmann distribution to obtain $\\kappa$ ($\\kappa = 4~kT \/\\mu\\mathrm{m}^2$ in the case shown).\n\t\t\tThis measurement was averaged over all 16 $\\times$ 16 trap positions in the lattice array and repeated for every laser power used in this study.\n\t\t}\n\t\t\\label{Fig:SI1}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\nThe trap width $W_{\\text{trap}}$ was determined from a separate set of experiments.\nTwo traps were placed side-by-side with center-to-center separation distance $W$.\nThe first trap, containing a trapped particle, was held fixed while the position of the second trap was varied; the average position $\\langle x_{i}(t) \\rangle$ of the particle was measured as a function of the separation distance $W$ (Fig.~\\ref{Fig:SI2}).\nWhen the second trap is placed far away, no interference is observed on the average position of the particle.\nHowever, as the second trap is moved closer, $W < 3$ $\\mu$m for a particle of radius $a = 1.25$ $\\mu$m, the average position drifts towards the second trap.\nWe found that the average particle position remains approximately constant within the range of separation distances of $W =$ 3-3.5 $\\mu$m, giving an approximate trap width $W_{\\text{trap}} \\approx 3.2$ $\\mu$m.\n\n\n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.7\\linewidth]{Figs\/SI\/SIFig_Trap_Width.jpg}\n\t\t\\caption{%\n\t\t\tMeasurement of trap width $W_{\\text{trap}}$.\n\t\t\tA second trap was placed at varying separation distances from the first trap containing a trapped bead. 
\n\t\t\tWe measured the time-averaged position of the trapped bead, $\\langle x_i(t) \\rangle$, for varying separation distances at fixed trap stiffness.\n\t\t\tWe found that the average position is pulled towards the second trap at distances $W < 3$ $\\mu$m and is approximately constant in the range $W = 3$-3.5 $\\mu$m. This gives an average trap width $W_{\\text{trap}} \\approx 3.2$ $\\mu$m.\n\t\t}\n\t\t\\label{Fig:SI2}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\n\\subsection{Measurement of diffusivity}\n\nThe long-time self-diffusivity was determined by particle tracking. All imaging was carried out on an inverted Nikon Ti2-Eclipse microscope (Nikon Instruments) using a water-immersion objective (Plan Apochromat VC 60x, numerical aperture 1.2, water). \nA Lumencor SpectraX Multi-Line LED Light Source was used for excitation (Lumencor, Inc.).\nFluorescent light was spectrally filtered with an emission filter (680\/42; Semrock, IDEX Health and Science) and imaged on a Photometrics Prime 95 CMOS Camera (Teledyne Photometrics).\nIn order to achieve satisfactory long-time statistics, particle trajectories were measured for times much larger than all other timescales in the system (including the diffusive timescale $\\gamma L^{2}\/kT$, oscillation period $2\\uppi\/\\omega$, and trapping timescale $\\gamma\/\\kappa$). A modified MATLAB script, based on the IDL code by Crocker and Grier \\cite{crocker1996methods,crockerweeksIDL,blairdufresneMATLAB}, was used to track the individual particles by identifying each particle center and tracking its trajectory over time using an image stack with one frame taken every 1-2 s. 
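The time-window averaging used to convert tracked trajectories into mean squared displacements [defined formally in Eq.~\eqref{eq:SI_msd_experiments} below] can be sketched in Python. This is an illustrative translation of the analysis, not the authors' MATLAB code:

```python
import numpy as np

def time_averaged_msd(positions, lags):
    """Time-averaged MSD of a single tracked trajectory.

    positions: (T, 2) array of (x, y) positions sampled at a fixed frame
    interval; lags: iterable of integer frame lags with 0 < lag < T.
    Returns (len(lags), 2): mean squared x- and y-displacements per lag.
    """
    positions = np.asarray(positions, dtype=float)
    out = []
    for lag in lags:
        # All displacement windows of this lag within the trajectory.
        disp = positions[lag:] - positions[:-lag]
        out.append((disp ** 2).mean(axis=0))
    return np.array(out)
```

Averaging the result over all tracked particles and dividing by $2t$ then gives the long-time diffusivity plateau used in the analysis below.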
\nParticles that were immobile (due to defects) were filtered out so as not to be considered during image post-processing.\n\n\n\n\nThe average diffusivity tensor is classically defined in terms of the long-time derivative of the mean squared displacements (MSD) of the particles: \n\\begin{equation}\n \\overline{\\tens{D}} = \\lim_{t \\rightarrow \\infty} \\frac{1}{2}\n \\frac{\\mathrm{d}}{\\mathrm{d}t}\n \\langle \\Delta \\bm{R}(t) \\Delta \\bm{R}(t) \\rangle\n ,\n \\label{eq:SI_diffusivity_tensor}\n\\end{equation}\nwhere $\\bm{R}$ denotes the {\\it global} position vector [related to the {\\it local} position vector $\\bm{r}$ by Eq. \\eqref{eq:SI_coordinate_conversion}, below] and the angle brackets $\\langle\\,\\cdot\\,\\rangle$ denote an {\\it ensemble} average (not to be confused with the {\\it cell} average defined in the main text). The MSD tensor over a time interval $t$ is computed from the formula,\n\\begin{equation}\n\t\\braket{\n\t\\Delta \\bm{R} (t)\n\t\\Delta \\bm{R} (t)\n\t}\n\t=\n\t\\frac{1}{N_{\\text{p}}}\n\t\\sum_{i=1}^{N_{\\text{p}}}\n\t\\lim_{\\tau\\rightarrow\\infty}\n\t\\frac{1}{\\tau-t}\n\t\\int_{0}^{\\tau-t} \n\t\\left[ \\bm{R}_{i}(s + t) - \\bm{R}_{i}(s) \\right]\n\t\\left[ \\bm{R}_{i}(s + t) - \\bm{R}_{i}(s) \\right]\n\t\\, \\mathrm{d} s\n\t,\n\t\\label{eq:SI_msd_experiments}\n\\end{equation}\nwhere $\\bm{R}_{i}(t)$ denotes the global position of the $i$th particle at time $t$.\nIn Eq.~\\eqref{eq:SI_msd_experiments}, the squared displacement of a particle with index $i$ is first averaged over all time windows of duration $t$ within the interval $\\tau$ of the particle's trajectory. 
This ``time average'' for each $i$th particle, evaluated in the limit as $\\tau \\rightarrow \\infty$, is subsequently averaged over all particles $i = 1, 2, \\dots, N_{\\text{p}}$ to approximate the ensemble average of all squared displacements with satisfactory statistics.\nAt long times, the MSD tensor $\\langle \\Delta \\bm{R}(t) \\Delta \\bm{R}(t) \\rangle$ oscillates with fixed amplitude about a steady, linear growth. Thus, the long-time derivative of the MSD can be measured by simply dividing by time, leading to the relation,\n\\begin{equation}\n \\overline{\\tens{D}} = \\lim_{t \\rightarrow \\infty} \\frac{1}{2 t}\n \\langle \\Delta \\bm{R}(t) \\Delta \\bm{R}(t) \\rangle\n .\n \\label{eq:SI_diffusivity_experiments}\n\\end{equation}\nEquation \\eqref{eq:SI_diffusivity_experiments} was used to measure the diffusivity from the measured particle trajectories (see Fig.~\\ref{Fig:SI3}). Trajectories were averaged over a sufficiently long time interval $\\tau$ to ensure linear growth, and the time integral in Eq.~\\eqref{eq:SI_msd_experiments} was discretized using the left Riemann sum.\nStatistical errors in the MSD were calculated using a bootstrap algorithm \\cite{ross2020introduction}.\n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.5\\linewidth]{Figs\/SI\/SIFig_MSD_Example.jpg}\n\t\t\\caption{%\n\t\t\tRepresentative mean squared displacements $\\langle \\Delta x(t) \\Delta x(t) \\rangle \/ (2t)$ (black symbols) and $\\langle \\Delta y(t) \\Delta y(t) \\rangle \/ (2t)$ (blue symbols) measured using Eq.~\\eqref{eq:SI_msd_experiments} for Brownian particles diffusing through an oscillating array of harmonic traps.\n\t\t\tDiffusivities reported in the main text were computed from the long-time plateaus of these curves, using Eq.~\\eqref{eq:SI_diffusivity_experiments}. 
Statistical errors were calculated using the bootstrap algorithm \\cite{ross2020introduction} over the entire observation time window.\n\t\t}\n\t\t\\label{Fig:SI3}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\nThe particle resistivity $\\gamma$ used in all theoretical calculations was calibrated by measuring the Stokes-Einstein-Sutherland diffusivity $D_{0} = kT\/\\gamma \\approx 0.105$ $\\mu$m$^2$\/s of particles diffusing in the absence of a harmonic potential. For a spherical particle of radius $a$ in a fluid of viscosity $\\eta$, the particle resistivity is given by $\\gamma = 6 \\uppi \\eta a K_{D}$, where $K_{D}$ is a drag-correction factor to account for the hydrodynamic interaction with a nearby wall (in our case, the substrate floor). For our system with $a = 1.25$ $\\mu$m and $\\eta = 1$ cP, we estimate the drag-correction factor to be $K_{D} = kT\/(6\\uppi \\eta a D_{0}) \\approx 1.63$, corresponding to a particle-to-wall spacing of about 0.5 $\\mu$m according to Fax\\'en's formula \\cite{happel2012low}. This gives a particle resistivity of $\\gamma \\approx 9.49$ $kT\\cdot\\text{s}\/\\mu\\text{m}^2$.\n\n\n\n\n\n\n\n\n\\section{2. Taylor-Dispersion Theory} \\label{sec:dispersion_theory}\n\n\\subsection{Derivation of Eqs. 
(3)-(5): governing equations for the probability density and displacement}\n\n\nThe starting point for deriving the basic equations in the main text is the single-particle Smoluchowski equation,\n\\begin{equation}\n\t\\frac{\\partial P(\\bm{R}, t)}{\\partial t}\n\t=\n\t-\n\t\\bm{\\nabla}_{\\bm{R}} \\cdot \\bm{J} (\\bm{R}, t)\n\t,\n\t\\label{eq:SI_smoluchowski_eqn}\n\\end{equation}\nwhere $P(\\bm{R}, t)$ is the probability density of finding a Brownian particle at a {\\it global} position $\\bm{R}$ and time $t$ and\n\\begin{equation}\n\t\\bm{J} (\\bm{R}, t)\n\t=\n\t\\bm{u} (t) P\n\t-\n\t\\frac{1}{\\gamma}\n\t[\n\tkT \\bm{\\nabla}_{\\bm{R}} P \n\t+\n\tP \\bm{\\nabla}_{\\bm{R}} V (\\bm{R})\n\t]\n\t\\label{eq:SI_probability_flux}\n\\end{equation}\nis the probability flux. The spatial periodicity of the potential-energy field allows us to convert the ``global'' position $\\bm{R}$ to the ``local'' position $\\bm{r}$ via the transformation,\n\\begin{equation}\n\t\\bm{R} = \\bm{n} L + \\bm{r},\n\t\\label{eq:SI_coordinate_conversion}\n\\end{equation}\nwhere $\\bm{n}$ contains the lattice indices of a given periodic cell. In terms of lattice and local coordinates, $V(\\bm{R}) \\equiv V(\\bm{r})$, $P(\\bm{R},t) \\equiv P_{\\bm{n}} (\\bm{r}, t)$, and $\\bm{J} (\\bm{R}, t) \\equiv \\bm{J}_{\\bm{n}} (\\bm{r},t)$. 
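The global-to-local decomposition in Eq.~\eqref{eq:SI_coordinate_conversion} is also the operation one applies when reducing measured trajectories to a single periodic cell; a minimal sketch for a square lattice of period $L$ (the helper name is hypothetical):

```python
import numpy as np

def global_to_local(R, L):
    """Decompose global positions R into lattice indices n and local
    coordinates r, following R = n*L + r with each component of r in [0, L).

    R: array of positions (any shape, components treated independently);
    L: lattice period.
    """
    n = np.floor(R / L).astype(int)
    r = R - n * L
    return n, r
```

With this convention the potential satisfies $V(\bm{R}) \equiv V(\bm{r})$, so the local coordinate alone determines the force on the particle.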
\n\n\nIn the following, we employ the ``flux-averaging'' approach of Brady and coworkers \\cite{morris1996self,Zia2010,Takatori2014,burkholder2017tracer,burkholder2019fluctuation,peng2020upstream}.\nFirst, we define the continuous wavevector $\\bm{k}$ and apply the discrete Fourier transform $\\hat{(\\,\\cdot\\,)} \\equiv \\sum_{\\bm{n}} (\\, \\cdot\\,) \\mathrm{e}^{\\mathrm{i} \\bm{k} \\cdot \\bm{n} L}$ to Eqs.~\\eqref{eq:SI_smoluchowski_eqn}-\\eqref{eq:SI_probability_flux}, obtaining\n\\begin{equation}\n\t\\frac{\\partial \\hat{P}(\\bm{k}, \\bm{r}, t)}{\\partial t}\n\t=\n\t-\n\t(\\mathrm{i} \\bm{k} + \\bm{\\nabla}_{\\bm{r}}) \\cdot \\hat{\\bm{J}} (\\bm{k}, \\bm{r}, t)\n\t,\n\t\\label{eq:SI_smoluchowski_eqn_ft}\n\\end{equation}\n\\begin{equation}\n\t\\hat{\\bm{J}} (\\bm{k}, \\bm{r}, t)\n\t=\n\t\\bm{u} (t) \\hat{P}\n\t-\n\t\\frac{1}{\\gamma}\n\t[\n\tkT (\\mathrm{i} \\bm{k} + \\bm{\\nabla}_{\\bm{r}}) \\hat{P} \n\t+\n\t\\hat{P} \\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t]\n\t.\n\t\\label{eq:SI_probability_flux_ft}\n\\end{equation}\nNext, we spatially average Eqs.~\\eqref{eq:SI_smoluchowski_eqn_ft}-\\eqref{eq:SI_probability_flux_ft} over one periodic cell according to $\\braket{\\,\\cdot\\,} \\equiv L^{-2}\\int_{L^{2}} (\\,\\cdot\\,) \\, \\mathrm{d} \\bm{r}$, apply the divergence theorem, and invoke periodic boundary conditions to obtain the continuity equation,\n\\begin{equation}\n\t\\frac{\\partial \\hat{\\rho}(\\bm{k}, t)}{\\partial t}\n\t=\n\t-\n\t\\mathrm{i} \\bm{k} \\cdot \\braket{\\hat{\\bm{J}}} (\\bm{k}, t)\n\t,\n\t\\label{eq:SI_smoluchowski_eqn_ft_avg}\n\\end{equation}\n\\begin{equation}\n\t\\braket{\\hat{\\bm{J}}} (\\bm{k}, t)\n\t=\n\t\\bm{u} (t) \\hat{\\rho}\n\t-\n\t\\frac{1}{\\gamma}\n\t[\n\tkT \\mathrm{i} \\bm{k} \\hat{\\rho} \n\t+\n\t\\braket{\\hat{P} \\bm{\\nabla}_{\\bm{r}}V}\n\t]\n\t,\n\t\\label{eq:SI_probability_flux_ft_avg}\n\\end{equation}\nwhere $\\hat{\\rho} (\\bm{k}, t) \\equiv \\braket{\\hat{P}} (\\bm{k}, t)$ is the Fourier-transformed number 
density. Eqs.~\\eqref{eq:SI_smoluchowski_eqn_ft_avg}-\\eqref{eq:SI_probability_flux_ft_avg} represent the macroscopic transport equations for the periodic lattice.\n\n\nNext, we define the structure function $\\hat{G}(\\bm{k}, \\bm{r}, t)$ as\n\\begin{equation}\n\t\\hat{P}(\\bm{k}, \\bm{r}, t) \n\t=\n\t\\hat{\\rho} (\\bm{k}, t)\n\t\\hat{G}(\\bm{k}, \\bm{r}, t)\n\t.\n\t\\label{eq:SI_conditional_probability}\n\\end{equation}\nMultiplying Eq.~\\eqref{eq:SI_smoluchowski_eqn_ft_avg} by $\\hat{G}$, subtracting from Eq.~\\eqref{eq:SI_smoluchowski_eqn_ft}, and dividing through by $\\hat{\\rho}$ then gives\n\\begin{flalign}\n\t\\frac{\\partial \\hat{G}(\\bm{k}, \\bm{r}, t)}{\\partial t}\n\t&=\n\t-\n\t\\hat{\\rho}^{-1} [ \n\t\\mathrm{i} \\bm{k} \\cdot ( \\hat{\\bm{J}} - \\braket{\\hat{\\bm{J}}} \\hat{G} )\n\t+\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \\hat{\\bm{J}}\n\t]\n\t\\nonumber\\\\\n\t&=\n\t- \\bm{u} (t) \\cdot \\bm{\\nabla}_{\\bm{r}} \\hat{G}\n\t+\n\t\\frac{kT}{\\gamma}\n\t\\nabla_{\\bm{r}}^{2} \\hat{G}\n\t+\n\t\\frac{1}{\\gamma}\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t[\n\t\\hat{G} \\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t]\n\t+\n\t\\mathrm{i} \\bm{k} \\cdot \n\t\\left(\n\t\\frac{2kT}{\\gamma} \\bm{\\nabla}_{\\bm{r}} \\hat{G} \n\t+\n\t\\frac{1}{\\gamma}\n\t[\n\t\\hat{G} \\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t-\n\t\\braket{\\hat{G} \\bm{\\nabla}_{\\bm{r}}V} \\hat{G} \n\t]\n\t\\right)\n\t,\n\t\\label{eq:SI_structure_field_eqn}\n\\end{flalign}\nwhere in the last line we have substituted Eqs.~\\eqref{eq:SI_probability_flux_ft}, \\eqref{eq:SI_probability_flux_ft_avg}, and \\eqref{eq:SI_conditional_probability}. 
Taylor-expanding $\\hat{G}$ about $\\bm{k} = \\bm{0}$,\n\\begin{equation}\n\t\\hat{G}(\\bm{k}, \\bm{r}, t)\n\t=\n\tg(\\bm{r}, t)\n\t+\n\t\\mathrm{i} \\bm{k} \\cdot \\bm{d} (\\bm{r}, t)\n\t+\n\t\\cdots\n\t,\n\t\\label{eq:SI_taylor_series}\n\\end{equation}\nsubstituting the expansion into Eq.~\\eqref{eq:SI_structure_field_eqn}, and collecting terms of like order in $\\mathrm{i} \\bm{k}$ yields the ordered set of equations,\n\\begin{flalign}\n\t\\frac{\\partial g(\\bm{r}, t)}{\\partial t}\n\t+\n\t\\bm{u} (t) \\cdot \\bm{\\nabla}_{\\bm{r}} g\n\t-\n\t\\frac{kT}{\\gamma}\n\t\\nabla_{\\bm{r}}^{2} g\n\t-\n\t\\frac{1}{\\gamma}\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t[\n\tg \\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t]\n\t=\n\t0\n\t,\n\\end{flalign}\n\\begin{flalign}\n\t\\frac{\\partial \\bm{d}(\\bm{r}, t)}{\\partial t}\n\t+ \n\t\\bm{u} (t) \\cdot \\bm{\\nabla}_{\\bm{r}} \\bm{d}\n\t-\n\t\\frac{kT}{\\gamma}\n\t\\nabla_{\\bm{r}}^{2} \\bm{d}\n\t-\n\t\\frac{1}{\\gamma}\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t[\n\t\\bm{d}\\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t]^{\\dag}\n\t=\n\t\\frac{2kT}{\\gamma} \\bm{\\nabla}_{\\bm{r}} g\n\t+\n\t\\frac{1}{\\gamma}\n\t[\n\tg\\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t- \n\t\\braket{g \\bm{\\nabla}_{\\bm{r}}V} g\n\t]\n\t.\n\\end{flalign}\nThe last two equations are exactly Eqs.~\\eqref{eq:g_eqn} and \\eqref{eq:d_eqn} from the main text. Conservation of probability requires the $g$- and $\\bm{d}$-fields to satisfy periodic boundary conditions as well as the normalization conditions $\\braket{g} = 1$ and $\\braket{\\bm{d}} = \\bm{0}$.\n\n\n\n\n\n\\subsection{Derivation of Eqs. 
(6)-(7): effective drift velocity and diffusivity}\n\n\nThe effective drift velocity $\\bm{U}(t)$ and diffusivity $\\tens{D} (t)$ of the Brownian particle are related to the Fourier-transformed, average flux $\\braket{\\hat{\\bm{J}}}$ via the large-wavelength expansion,\n\\begin{equation}\n\t\\braket{\\hat{\\bm{J}}} (\\bm{k}, t)\n\t=\n\t\\hat{\\rho}\n\t\\left[ \n\t\t\\bm{U} (t)\n\t\t-\n\t\t\\mathrm{i} \\bm{k} \\cdot \\tens{D} (t)\n\t\t+\n\t\t\\cdots\n\t\\right]\n\t.\n\t\\label{eq:SI_average_flux_ft_eqn1}\n\\end{equation}\nIn order to derive expressions for $\\bm{U}$ and $\\tens{D}$, we insert Eqs.~\\eqref{eq:SI_conditional_probability} and \\eqref{eq:SI_taylor_series} into \\eqref{eq:SI_probability_flux_ft_avg}, obtaining\n\\begin{flalign}\n\t\\braket{\\hat{\\bm{J}}} (\\bm{k}, t)\n\t&=\n\t\\hat{\\rho}\n\t\\left(\n\t\\bm{u} (t)\n\t-\n\t\\frac{1}{\\gamma}\n\t[\n\tkT \\mathrm{i} \\bm{k} \n\t+\n\t\\braket{\\hat{G} \\bm{\\nabla}_{\\bm{r}}V}\n\t]\n\t\\right)\n\t\\nonumber\\\\\n\t&=\n\t\\hat{\\rho}\n\t\\left[\n\t\\bm{u} (t) - \\frac{1}{\\gamma} \\braket{g \\bm{\\nabla}_{\\bm{r}}V}\n\t-\n\t\\mathrm{i} \\bm{k} \n\t\\left(\n\t\\frac{kT}{\\gamma} \\tens{I} \n\t+\n\t\\frac{1}{\\gamma}\\braket{\\bm{d} \\bm{\\nabla}_{\\bm{r}}V}\n\t\\right)\n\t+\n\t\\cdots\n\t\\right]\n\t.\n\t\\label{eq:SI_average_flux_ft_eqn2}\n\\end{flalign}\nEquating terms of like order in $\\mathrm{i} \\bm{k}$ in Eqs.~\\eqref{eq:SI_average_flux_ft_eqn1} and \\eqref{eq:SI_average_flux_ft_eqn2} furnishes the expressions,\n\\begin{equation}\n\t\\bm{U}(t)\n\t=\n\t\\bm{u} (t)\n\t-\n\t\\frac{1}{\\gamma} \\braket{g \\bm{\\nabla}_{\\bm{r}} V} (t)\n\t,\n\\end{equation}\n\\begin{equation}\n\t\\tens{D}(t)\n\t=\n\t\\frac{kT}{\\gamma} \\tens{I} \n\t+\n\t\\frac{1}{\\gamma}\\braket{\\bm{d} \\bm{\\nabla}_{\\bm{r}}V} (t)\n\t,\n\\end{equation}\nwhich are exactly Eqs.~\\eqref{eq:effective_drift}-\\eqref{eq:effective_diffusivity} in the main text.\n\n\n\n\n\n\n\n\n\n\\section{3. 
Numerical Method} \\label{sec:numerical_method}\n\nEqs.~\\eqref{eq:g_eqn} and \\eqref{eq:d_eqn} were solved using the finite-element method in COMSOL Multiphysics$^\\text{\\textregistered}$ (Version 5.5) with the ``Coefficient Form PDE'' physics interface. An $L \\times L$ square cell was set up and discretized into triangular elements (Fig.~\\ref{Fig:SIMeshes}). Periodic boundary conditions were applied to the $g$- and $\\bm{d}$-fields at the edges of the cell. Studies were run using both time-dependent ($\\bm{u}\\not=\\bm{0}$) and stationary ($\\bm{u}=\\bm{0}$) solvers. For the time-dependent studies, the $g$- and $\\bm{d}$-fields were initialized to uniform values $1$ and $\\bm{0}$, respectively, and time-advanced using the backward differentiation formula with a timestep $\\Delta t = 0.001(2\\uppi\/\\omega)$ until a periodic steady state was achieved. The number of periods needed to reach steady state generally increased with the oscillation frequency. For the stationary studies, the equations were solved iteratively using Newton's method and the normalization conditions $\\braket{g} = 1$ and $\\braket{\\bm{d}} = \\bm{0}$ were implemented as weak-form constraints. Upon solving for the $g$- and $\\bm{d}$-fields, Eqs.~\\eqref{eq:effective_drift} and \\eqref{eq:effective_diffusivity} were evaluated using a fourth-order domain integration method and (in the time-dependent studies) subsequently time-averaged over the final oscillation period.\n\n\n\n\n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.7\\linewidth]{Figs\/SI\/SIFig_Meshes.png}\n\t\t\\caption{%\n\t\t\tTriangular meshes used for the finite-element calculations. Meshes containing (a) 1132 elements (for the time-dependent studies) and (b) 29,018 elements (for the stationary studies) were used. 
Coarser meshes were used in the time-dependent calculations to save computational time.\n\t\t}\n\t\t\\label{Fig:SIMeshes}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{4. Asymptotic Limits} \\label{sec:asymptotic_limits}\n\n\n\n\\subsection{Derivation of Eq. (8): stationary traps with shallow potential wells}\n\nIf the harmonic traps are held in a fixed configuration, $\\bm{u} = \\bm{0}$ and the $g$- and $\\bm{d}$-fields achieve a steady state. Equations \\eqref{eq:g_eqn} and \\eqref{eq:d_eqn} then simplify to\n\\begin{flalign}\n\tkT\n\t\\nabla_{\\bm{r}}^{2} g (\\bm{r})\n\t+\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t[\n\tg(\\bm{r}) \\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t]\n\t=\n\t0\n\t,\n\t\\label{eq:SI_g_eqn_steady}\n\\end{flalign}\n\\begin{flalign}\n\tkT\n\t\\nabla_{\\bm{r}}^{2} \\bm{d}(\\bm{r})\n\t+\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t[\n\t\\bm{d}(\\bm{r})\\bm{\\nabla}_{\\bm{r}} V (\\bm{r})\n\t]^{\\dag}\n\t=\n\t-\n\t2kT \\bm{\\nabla}_{\\bm{r}} g\n\t-\n\tg\\bm{\\nabla}_{\\bm{r}} V \n\t+ \n\t\\braket{g \\bm{\\nabla}_{\\bm{r}}V} g\n\t.\n\t\\label{eq:SI_d_eqn_steady}\n\\end{flalign}\nEq.~\\eqref{eq:SI_g_eqn_steady} may be solved subject to the constraint $\\braket{g} = 1$ to obtain the Boltzmann distribution,\n\\begin{equation}\n\tg (\\bm{r})\n\t=\n\t\\frac{\\mathrm{e}^{-V(\\bm{r})\/kT}}{\\braket{\\mathrm{e}^{-V\/kT}}}\n\t.\n\t\\label{eq:SI_g_field_steady}\n\\end{equation}\nThe governing equation for the $\\bm{d}$-field, Eq.~\\eqref{eq:SI_d_eqn_steady}, then simplifies to\n\\begin{flalign}\n\tkT\n\t\\nabla_{\\bm{r}}^{2} \\bm{d}\n\t+\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t(\n\t\\bm{d}\\bm{\\nabla}_{\\bm{r}} V \n\t)^{\\dag}\n\t=\n\t-\n\tkT \\bm{\\nabla}_{\\bm{r}} g\n\t.\n\t\\label{eq:SI_d_eqn_steady_2}\n\\end{flalign}\n\n\nEq.~\\eqref{eq:SI_d_eqn_steady_2} cannot be solved analytically in general. 
However, for ``shallow'' potential wells, $\\Delta V \\ll kT$, we may Taylor-expand Eq.~\\eqref{eq:SI_g_field_steady} as\n\\begin{equation}\n\tg\n\t=\n\t1 \n\t- \n\t\\frac{V - \\braket{V}}{kT} \n\t+\n\t\\frac{V^{2} - \\braket{V^{2}} - 2 \\braket{V} (V - \\braket{V})}{2(kT)^{2}} \n\t+ \n\t\\cdots\n\t,\n\\end{equation}\nso that Eq.~\\eqref{eq:SI_d_eqn_steady_2} becomes\n\\begin{flalign}\n\tkT\n\t\\nabla_{\\bm{r}}^{2} \\bm{d}\n\t+\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t(\n\t\\bm{d}\\bm{\\nabla}_{\\bm{r}} V \n\t)^{\\dag}\n\t=\n\t\\left( 1 - \\frac{V - \\braket{V}}{kT} + \\cdots \\right)\\bm{\\nabla}_{\\bm{r}} V\n\t.\n\t\\label{eq:SI_d_eqn_steady_3}\n\\end{flalign}\nTo solve Eq.~\\eqref{eq:SI_d_eqn_steady_3}, we expand the $\\bm{d}$-field in a perturbation series,\n\\begin{equation}\n\t\\bm{d} (\\bm{r})\n\t=\n\t\\bm{d}^{(0)} (\\bm{r})\n\t+\n\t\\bm{d}^{(1)} (\\bm{r})\n\t+\n\t\\cdots\n\t,\n\t\\label{eq:SI_d_field_perturbation_series}\n\\end{equation}\nwhere $\\bm{d}^{(0)} = O(\\Delta V\/kT)$, $\\bm{d}^{(1)} = O[\\Delta V^{2}\/(kT)^{2}]$, and so on. Inserting Eq.~\\eqref{eq:SI_d_field_perturbation_series} into \\eqref{eq:SI_d_eqn_steady_3} and collecting terms of like order in $\\Delta V \/ kT$ yields the ordered set of equations,\n\\begin{flalign}\n\tkT\n\t\\nabla_{\\bm{r}}^{2} \\bm{d}^{(0)}\n\t=\n\t\\bm{\\nabla}_{\\bm{r}} V\n\t,\n\t\\label{eq:SI_d_eqn_steady_4a}\n\\end{flalign}\n\\begin{flalign}\n\tkT\n\t\\nabla_{\\bm{r}}^{2} \\bm{d}^{(1)}\n\t=\n\t-\n\t\\bm{\\nabla}_{\\bm{r}} \\cdot \n\t(\n\t\\bm{d}^{(0)}\\bm{\\nabla}_{\\bm{r}} V \n\t)^{\\dag}\n\t-\n\t\\frac{V - \\braket{V}}{kT} \\bm{\\nabla}_{\\bm{r}} V\n\t,\n\t\\label{eq:SI_d_eqn_steady_4b}\n\\end{flalign}\nsubject to the constraints $\\braket{\\bm{d}^{(0)}} = \\bm{0}$, $\\braket{\\bm{d}^{(1)}} = \\bm{0}$, etc. 
Since $V$ and $\\bm{d}$ are spatially periodic, Eqs.~\\eqref{eq:SI_d_eqn_steady_4a}-\\eqref{eq:SI_d_eqn_steady_4b} may be sequentially solved by means of Fourier series:\n\\begin{equation}\n\t\\bm{d}^{(0)} (\\bm{r})\n\t=\n\t-\n\t\\frac{1}{kT}\n\t\\sum_{\\bm{q} \\not= \\bm{0}}\n\t\\frac{\\mathrm{i} \\bm{q}}{q^{2}} \n\tV_{\\bm{q}} \\mathrm{e}^{\\mathrm{i} \\bm{q} \\cdot \\bm{r}}\n\t,\n\t\\label{eq:SI_d0_soln}\n\\end{equation}\n\\begin{equation}\n\t\\bm{d}^{(1)} (\\bm{r})\n\t=\n\t\\frac{1}{2(kT)^{2}}\n\t\\sum_{\\bm{q} \\not= \\bm{0}}\n\t\\sum_{\\bm{q}' \\not= \\bm{0}}\n\t\\left( \\frac{\\mathrm{i} \\bm{q}}{q^{2}} + \\frac{2 \\mathrm{i} \\bm{q} \\cdot (\\bm{q} - \\bm{q}')\\bm{q}'}{q^{2} q'^{2}} \\right)\n\tV_{\\bm{q} - \\bm{q}'} V_{\\bm{q}'}\n\t\\mathrm{e}^{\\mathrm{i} \\bm{q} \\cdot \\bm{r}}\n\t,\n\t\\label{eq:SI_d1_soln}\n\\end{equation}\nwhere $\\bm{q}$ is the discrete wavevector and $V_{\\bm{q}} \\equiv L^{-2} \\int_{L^{2}} [V(\\bm{r}) - \\braket{V}] \\mathrm{e}^{- \\mathrm{i} \\bm{q} \\cdot \\bm{r}} \\, \\mathrm{d} \\bm{r}$ denotes the Fourier coefficient of $V$. 
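These spectral formulas are straightforward to check numerically. The following is a minimal one-dimensional sketch (an illustrative two-mode periodic potential, with $kT$ and $L$ set to arbitrary values; NumPy FFT conventions assumed) that builds $\bm{d}^{(0)}$ from Eq.~\eqref{eq:SI_d0_soln} and verifies that the cell average $\braket{d^{(0)} \, \mathrm{d}V/\mathrm{d}x}$ reduces to the spectral sum $-(1/kT)\sum_{q \neq 0} |V_{q}|^{2}$:

```python
import numpy as np

# 1D numerical check of Eq. (SI_d0_soln): build d0 spectrally for an
# illustrative two-mode periodic potential (kT = 1, L = 2*pi are arbitrary
# choices) and verify <d0 V'> = -(1/kT) * sum_{q != 0} |V_q|^2.
kT, L, N = 1.0, 2 * np.pi, 256
x = np.arange(N) * L / N
V = 0.3 * np.cos(2 * np.pi * x / L) + 0.1 * np.cos(4 * np.pi * x / L)

q = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # discrete wavevectors
Vq = np.fft.fft(V - V.mean()) / N                # Fourier coefficients V_q
d0q = np.zeros_like(Vq)
nz = q != 0
d0q[nz] = -(1j * q[nz] / q[nz] ** 2) * Vq[nz] / kT   # Eq. (SI_d0_soln)
d0 = np.real(np.fft.ifft(d0q) * N)               # d0(x) on the grid

dVdx = np.real(np.fft.ifft(1j * q * np.fft.fft(V)))  # spectral derivative
lhs = np.mean(d0 * dVdx)                         # cell average <d0 V'>
rhs = -np.sum(np.abs(Vq[nz]) ** 2) / kT          # spectral sum
print(lhs, rhs)  # the two agree to machine precision
```

The same construction extends to the 2D cell by replacing $q$ with a wavevector grid and applying Eq.~\eqref{eq:SI_d0_soln} componentwise.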
\n\nBy use of Eqs.~\\eqref{eq:effective_diffusivity} and \\eqref{eq:SI_d_field_perturbation_series}, the effective diffusivity of the Brownian particle is given by\n\\begin{flalign}\n\t\\tens{D}\n\t&=\n\t\\frac{kT}{\\gamma} \\tens{I}\n\t+\n\t\\frac{1}{\\gamma} \\braket{\\bm{d} \\bm{\\nabla}_{\\bm{r}} V}\n\t\\nonumber\\\\\n\t&=\n\t\\frac{kT}{\\gamma} \\tens{I}\n\t+\n\t\\frac{1}{\\gamma} \\braket{\\bm{d}^{(0)} \\bm{\\nabla}_{\\bm{r}} V}\n\t+\n\t\\frac{1}{\\gamma} \\braket{\\bm{d}^{(1)} \\bm{\\nabla}_{\\bm{r}} V}\n\t+\n\t\\cdots\n\t.\n\t\\label{eq:SI_diffusivity_tensor_smallV_expansion}\n\\end{flalign}\nMultiplying Eqs.~\\eqref{eq:SI_d0_soln} by $\\bm{\\nabla}_{\\bm{r}} V = \\sum_{\\bm{q}\\not=\\bm{0}} \\mathrm{i} \\bm{q} V_{\\bm{q}} \\mathrm{e}^{\\mathrm{i} \\bm{q} \\cdot \\bm{r}}$ and averaging over an $L \\times L$ cell yields the force-displacement dyads,\n\\begin{equation}\n\t\\braket{\\bm{d}^{(0)} \\bm{\\nabla}_{\\bm{r}} V}\n\t=\n\t-\n\t\\frac{1}{kT}\n\t\\sum_{\\bm{q} \\not= \\bm{0}}\n\t\\frac{\\bm{q}\\bm{q}}{q^{2}} \n\t|V_{\\bm{q}}|^{2}\n\t,\n\t\\label{eq:SI_d0_gradV_avg}\n\\end{equation}\n\\begin{equation}\n\t\\braket{\\bm{d}^{(1)} \\bm{\\nabla}_{\\bm{r}} V}\n\t=\n\t\\frac{1}{2(kT)^{2}}\n\t\\sum_{\\bm{q} \\not= \\bm{0}}\n\t\\sum_{\\bm{q}' \\not= \\bm{0}}\n\t\\left( \\frac{\\bm{q}\\bm{q}}{q^{2}} + \\frac{2 \\bm{q} \\cdot (\\bm{q} - \\bm{q}')\\bm{q}'\\bm{q}}{q^{2} q'^{2}} \\right)\n\tV_{\\bm{q} - \\bm{q}'} V_{\\bm{q}'} V_{-\\bm{q}}\n\t,\n\t\\label{eq:SI_d1_gradV_avg}\n\\end{equation}\nwhere $|V_{\\bm{q}}|^{2} \\equiv V_{\\bm{q}} V_{-\\bm{q}}$. 
Thus, the diffusivity tensor $\\tens{D}$ admits the Fourier-series representation,\n\\begin{flalign}\n\t\\tens{D}\n\t&=\n\t\\frac{kT}{\\gamma} \\,\n\t\\left[\n\t\\tens{I}\n\t-\n\t\\frac{1}{(kT)^{2}}\\sum_{\\bm{q} \\not= \\bm{0}}\n\t\\frac{\\bm{q}\\bm{q}}{q^{2}} \n\t|V_{\\bm{q}}|^{2}\n\t+\n\t\\frac{1}{2(kT)^{3}}\n\t\\sum_{\\bm{q} \\not= \\bm{0}}\n\t\\sum_{\\bm{q}' \\not= \\bm{0}}\n\t\\left( \\frac{\\bm{q}\\bm{q}}{q^{2}} + \\frac{2 \\bm{q} \\cdot (\\bm{q} - \\bm{q}')\\bm{q}'\\bm{q}}{q^{2} q'^{2}} \\right)\n\tV_{\\bm{q} - \\bm{q}'} V_{\\bm{q}'} V_{-\\bm{q}}\n\t+\n\t\\cdots\n\t\\right]\n\t.\n\t\\label{eq:SI_diffusivity_smallV_1}\n\\end{flalign}\n\n\n\nAn alternative expression for $\\tens{D}$ can be obtained by writing the leading-order displacement field as the negative gradient of a potential, \n\\begin{equation}\n\t\\bm{d}^{(0)} (\\bm{r})\n\t=\n\t- \\frac{1}{kT} \\bm{\\nabla}_{\\bm{r}} \\varPhi(\\bm{r})\n\t,\n\t\\label{eq:SI_diffusivity_as_gradient_of_potential}\n\\end{equation}\nwhere $\\varPhi(\\bm{r})$ satisfies the Poisson equation,\n\\begin{equation}\n\t\\nabla^{2}_{\\bm{r}} \\varPhi(\\bm{r}) = - [V(\\bm{r}) - \\braket{V}],\n\t\\label{eq:SI_poisson_eqn}\n\\end{equation}\nsubject to the closure $\\braket{\\varPhi} = 0$. 
The Fourier-series solution of Eq.~\\eqref{eq:SI_poisson_eqn} is\n\\begin{equation}\n\t\\varPhi(\\bm{r}) \n\t=\n\t\\sum_{\\bm{q} \\not= \\bm{0}}\n\tq^{-2}\n\tV_{\\bm{q}} \\mathrm{e}^{\\mathrm{i} \\bm{q} \\cdot \\bm{r}}\n\t.\n\t\\label{eq:SI_poisson_field_soln}\n\\end{equation}\nBy use of Eqs.~\\eqref{eq:SI_d0_soln}, \\eqref{eq:SI_d1_gradV_avg}, and the convolution theorem, it can be shown that\n\\begin{equation}\n\t\\braket{\\bm{d}^{(1)} \\bm{\\nabla}_{\\bm{r}} V}\n\t=\n\t\\frac{1}{2kT} \n\t\\braket{(V - \\braket{V})^{2} \\bm{\\nabla}_{\\bm{r}} \\bm{d}^{(0)}}\n\t+\n\t\\braket{(\\bm{\\nabla}_{\\bm{r}} \\bm{d}^{(0)} \\cdot \\bm{\\nabla}_{\\bm{r}} V)\\bm{d}^{(0)}} \n\t.\n\t\\label{eq:SI_d1_gradV_avg_identity}\n\\end{equation}\nThen, by Eqs.~\\eqref{eq:SI_d0_gradV_avg}, \\eqref{eq:SI_diffusivity_as_gradient_of_potential}, \\eqref{eq:SI_poisson_field_soln}, and \\eqref{eq:SI_d1_gradV_avg_identity}, it follows that\n\\begin{equation}\n\t\\braket{\\bm{d}^{(0)} \\bm{\\nabla}_{\\bm{r}} V}\n\t=\n\t- \\frac{1}{kT} \\braket{\\bm{\\nabla}_{\\bm{r}} \\varPhi \\bm{\\nabla}_{\\bm{r}} V}\n\t,\n\t\\label{eq:SI_d0_gradV_avg_2}\n\\end{equation}\n\\begin{equation}\n\t\\braket{\\bm{d}^{(1)} \\bm{\\nabla}_{\\bm{r}} V}\n\t=\n\t\\frac{1}{2(kT)^{2}} \n\t\\left[\n\t-\n\t\\braket{(V - \\braket{V})^{2} \\bm{\\nabla}_{\\bm{r}} \\bm{\\nabla}_{\\bm{r}} \\varPhi}\n\t+\n\t2 \\braket{(\\bm{\\nabla}_{\\bm{r}} \\bm{\\nabla}_{\\bm{r}} \\varPhi \\cdot \\bm{\\nabla}_{\\bm{r}} V)\\bm{\\nabla}_{\\bm{r}} \\varPhi} \n\t\\right]\n\t.\n\t\\label{eq:SI_d1_gradV_avg_2}\n\\end{equation}\nSubstituting Eqs.~\\eqref{eq:SI_d0_gradV_avg_2}-\\eqref{eq:SI_d1_gradV_avg_2} into \\eqref{eq:SI_diffusivity_tensor_smallV_expansion} then gives the alternative representation,\n\\begin{flalign}\n\t\\tens{D}\n\t&=\n\t\\frac{kT}{\\gamma} \n\t\\left(\n\t\\tens{I}\n\t-\n\t\\frac{1}{(kT)^{2}}\n\t\\braket{\\bm{\\nabla}_{\\bm{r}} \\varPhi \\bm{\\nabla}_{\\bm{r}} V}\n\t+\n\t\\frac{1}{2(kT)^{3}}\n\t\\left[\n\t-\n\t\\braket{(V - 
\\braket{V})^{2} \\bm{\\nabla}_{\\bm{r}} \\bm{\\nabla}_{\\bm{r}} \\varPhi}\n\t+\n\t2 \\braket{(\\bm{\\nabla}_{\\bm{r}} \\bm{\\nabla}_{\\bm{r}} \\varPhi \\cdot \\bm{\\nabla}_{\\bm{r}} V)\\bm{\\nabla}_{\\bm{r}} \\varPhi} \n\t\\right]\n\t+\n\t\\cdots\n\t\\right)\n\t.\n\t\\label{eq:SI_diffusivity_smallV_2}\n\\end{flalign}\n\n\n\nSince $V(\\bm{r})$ is isotropic, only the trace of the steady diffusivity tensor need be computed: $D \\equiv \\tfrac{1}{2} \\tens{D} : \\tens{I}$. Using Eq.~\\eqref{eq:SI_diffusivity_tensor_smallV_expansion}, the scalar diffusivity $D$ is given by\n\\begin{equation}\n\tD\n\t=\n\t\\frac{kT}{\\gamma}\n\t+\n\t\\frac{1}{2\\gamma} \\braket{\\bm{d}^{(0)} \\cdot \\bm{\\nabla}_{\\bm{r}} V}\n\t+\n\t\\frac{1}{2\\gamma} \\braket{\\bm{d}^{(1)} \\cdot \\bm{\\nabla}_{\\bm{r}} V}\n\t+\n\t\\cdots\n\t.\n\t\\label{eq:SI_diffusivity_scalar_smallV_expansion}\n\\end{equation}\nTaking the trace of Eqs.~\\eqref{eq:SI_d0_gradV_avg_2}-\\eqref{eq:SI_d1_gradV_avg_2}, integrating by parts, and applying Eq.~\\eqref{eq:SI_poisson_eqn} then gives\n\\begin{flalign}\n\t\\braket{\\bm{d}^{(0)} \\cdot \\bm{\\nabla}_{\\bm{r}} V}\n\t&=\n\t- \\frac{1}{kT} \\braket{\\bm{\\nabla}_{\\bm{r}} \\varPhi \\cdot \\bm{\\nabla}_{\\bm{r}} V}\n\t\\nonumber\\\\\n\t&=\n\t-\\frac{1}{kT} \\braket{(V - \\braket{V})^{2}}\n\t,\n\t\\label{eq:SI_d0_gradV_avg_3}\n\\end{flalign}\n\\begin{flalign}\n\t\\braket{\\bm{d}^{(1)} \\cdot \\bm{\\nabla}_{\\bm{r}} V}\n\t&=\n\t\\frac{1}{2(kT)^{2}} \n\t\\left[\n\t-\n\t\\braket{(V - \\braket{V})^{2} \\nabla^{2}_{\\bm{r}} \\varPhi}\n\t+\n\t2 \\braket{(\\bm{\\nabla}_{\\bm{r}} \\bm{\\nabla}_{\\bm{r}} \\varPhi \\cdot \\bm{\\nabla}_{\\bm{r}} V) \\cdot\\bm{\\nabla}_{\\bm{r}} \\varPhi} \n\t\\right]\n\t\\nonumber\\\\\n\t&=\n\t\\frac{1}{2(kT)^{2}} \n\t\\left[\n\t\\braket{(V - \\braket{V})^{3}}\n\t+\n\t\\braket{\\bm{\\nabla}_{\\bm{r}} (|\\bm{\\nabla}_{\\bm{r}} \\varPhi|^{2}) \\cdot \\bm{\\nabla}_{\\bm{r}} V} 
\n\t\\right]\n\t.\n\t\\label{eq:SI_d1_gradV_avg_3}\n\\end{flalign}\nInserting Eqs.~\\eqref{eq:SI_d0_gradV_avg_3}-\\eqref{eq:SI_d1_gradV_avg_3} into \\eqref{eq:SI_diffusivity_scalar_smallV_expansion} then gives\n\\begin{equation}\n\tD\n\t=\n\t\\frac{kT}{\\gamma}\n\t\\left(\n\t1\n\t-\n\t\\frac{1}{2(kT)^{2}}\n\t\\braket{(V - \\braket{V})^{2}}\n\t+\n\t\\frac{1}{4(kT)^{3}}\n\t\\left[\n\t\\braket{(V - \\braket{V})^{3}}\n\t+\n\t\\braket{\\bm{\\nabla}_{\\bm{r}} (|\\bm{\\nabla}_{\\bm{r}} \\varPhi|^{2}) \\cdot \\bm{\\nabla}_{\\bm{r}} V} \n\t\\right]\n\t+\n\t\\cdots\n\t\\right)\n\t.\n\t\\label{eq:SI_diffusivity_scalar_smallV_final}\n\\end{equation}\nThe last expression is exactly Eq.~\\eqref{eq:diffusivity_soft_traps} from the main text.\n\n\n\n\\subsection{Derivation of Eq. (9): stationary traps with deep potential wells}\n\n\nFor stationary, ``deep'' potential wells, $\\Delta V \\gg kT$, the small-potential perturbation series \\eqref{eq:SI_d_field_perturbation_series} fails to converge. Unfortunately, no exact analytical solution of Eq.~\\eqref{eq:SI_d_eqn_steady_2} is readily available. However, one can take advantage of the fact that, for deep potential wells, the probability density is strongly localized near the origin $\\bm{r} = \\bm{0}$ of the lattice cell where the potential-energy field $V(\\bm{r})$ is minimized. Then, a useful {\\it approximation} of the $\\bm{d}$-field is\n\\begin{flalign}\n\t\\bm{d} (\\bm{r})\n\t&\\approx\n\t- \\bm{r} g (\\bm{r})\n\t\\nonumber\\\\\n\t&=\n\t- \\frac{\\bm{r} \\mathrm{e}^{-V(\\bm{r})\/kT}}{\\braket{\\mathrm{e}^{-V\/kT}}}\n\t.\n\t\\label{eq:SI_d_field_largeV_approx}\n\\end{flalign}\nEq.~\\eqref{eq:SI_d_field_largeV_approx} is the particular solution of Eq.~\\eqref{eq:SI_d_eqn_steady_2} and conserves probability, $\\braket{\\bm{d}} = \\bm{0}$. 
However, this particular solution clearly violates the periodic boundary conditions at the edges of the lattice cell $x = \\pm L\/2$, $y=\\pm L\/2$, incurring an error of $O(L\\mathrm{e}^{-\\Delta V\/kT}\/\\braket{\\mathrm{e}^{-V\/kT}})$ that decreases in magnitude with increasing trap stiffness. Fig.~\\ref{Fig:SI_StrongTrap_dx_vs_x} compares the approximation, Eq.~\\eqref{eq:SI_d_field_largeV_approx}, against the ``exact'' numerical solution for the displacement field, showing very good agreement. The slight error in the approximation is due to the neglect of the homogeneous solution of Eq.~\\eqref{eq:SI_d_eqn_steady_2}, which is complicated for the 2D potential-energy field given by Eq.~\\eqref{eq:harmonic_potential}. It will be shown that the error in this approximation for the $\\bm{d}$-field quantitatively (though not qualitatively) impacts the prediction for the effective diffusivity.\n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=1\\linewidth]{Figs\/SI\/SIFig_StrongPotentialLimit_Dx_vs_X.png}\n\t\t\\caption{\n\t\tComparison of the numerical solution for the steady displacement field density $d_{x}(x,y)$ against the particular solution [see Eq.~\\eqref{eq:SI_d_field_largeV_approx}] for a stiff trap, $\\kappa=5$ $kT\/\\mu$m$^2$. (a) 2D contour plot of $d_{x}$ with line traces at four distinct values of $y$. (b) Plot of $d_{x}$ against $x$ for each line trace shows favorable agreement with Eq.~\\eqref{eq:SI_d_field_largeV_approx}. 
\n\t\t}\n\t\t\\label{Fig:SI_StrongTrap_dx_vs_x}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\nUsing Eq.~\\eqref{eq:harmonic_potential} for $V(\\bm{r})$ and Eq.~\\eqref{eq:SI_d_field_largeV_approx} for $\\bm{d}(\\bm{r})$, the force-displacement dyad that appears in Eq.~\\eqref{eq:diffusivity_soft_traps} can now be approximated as\n\\begin{flalign}\n\t\\braket{\\bm{d} \\bm{\\nabla}_{\\bm{r}} V}\n\t&\\approx\n\t-\n\t\\frac{1}{\\kappa}\n\t\\frac{\\braket{\\mathrm{e}^{-V\/kT}\\bm{\\nabla}_{\\bm{r}} V \\bm{\\nabla}_{\\bm{r}} V}}{\\braket{\\mathrm{e}^{-V\/kT}}}\n\t,\n\t\\label{eq:SI_d_gradV_avg_largeV}\n\\end{flalign}\nwhere we've used the fact that $\\bm{\\nabla}_{\\bm{r}} V = \\kappa \\bm{r}$ for $r \\le \\frac{1}{2} W_{\\text{trap}}$ and $=\\bm{0}$ otherwise. Defining the well depth as $\\Delta V =\\tfrac{1}{8} \\kappa W_{\\text{trap}}^{2}$, the cell averages in Eq.~\\eqref{eq:SI_d_gradV_avg_largeV} become\n\\begin{flalign}\n\t\\braket{\\mathrm{e}^{-V\/kT}}\n\t&=\n\t\\frac{2\\uppi kT}{\\kappa L^{2}}\\left( 1 - \\mathrm{e}^{-\\Delta V\/kT} \\right)\n\t+\n\t\\bigg( 1- \\frac{2\\uppi \\Delta V}{\\kappa L^{2}} \\bigg) \\mathrm{e}^{-\\Delta V\/kT}\n\t,\n\\end{flalign}\n\\begin{flalign}\n\t\\braket{\\mathrm{e}^{-V\/kT}\\bm{\\nabla}_{\\bm{r}} V \\bm{\\nabla}_{\\bm{r}} V}\n\t&=\n\t\\frac{2\\uppi (kT)^{2}}{L^{2}}\n\t\\left[\n\t\t1\n\t\t-\n\t\t\\left( 1 + \\frac{\\Delta V}{kT} \\right)\\mathrm{e}^{-\\Delta V\/kT}\n\t\\right]\n\t\\tens{I}\n\t.\n\\end{flalign}\nSubstitution into Eq.~\\eqref{eq:SI_d_gradV_avg_largeV} then gives, upon simplification,\n\\begin{flalign}\n\t\\braket{\\bm{d} \\bm{\\nabla}_{\\bm{r}} V}\n\t&\\approx\n\tkT\n\t\\left\\{\n\t\t-1\n\t\t+\n\t\t\\left[ \n\t\t1\n\t\t+\n\t\t\\frac{2\\uppi kT}{\\kappa L^{2}}\n\t\t\\left(\n\t\t\t\\mathrm{e}^{\\Delta V\/kT}\n\t\t\t-\n\t\t\t\\frac{\\Delta V}{kT}\n\t\t\t-\n\t\t\t1\n\t\t\\right)\n\t\t\\right]^{-1}\n\t\\right\\}\n\t\\tens{I}\n\t\\nonumber\\\\\n\t&\\approx\n\t\\left(\n\t- k T\n\t+\n\t\\frac{\\kappa 
L^{2}}{2\\uppi}\n\t\\mathrm{e}^{-\\Delta V \/ kT} \n\t\\right)\n\t\\tens{I}\n\t\\quad\n\t\\text{for}\n\t\\quad\n\t\\Delta V \\gg kT.\n\\end{flalign}\nSubstitution into Eq.~\\eqref{eq:effective_diffusivity} and replacing $\\kappa \\tens{I}$ by $(\\bm{\\nabla}_{\\bm{r}}\\bm{\\nabla}_{\\bm{r}}V)|_{\\bm{r}=\\bm{0}}$ then gives the following approximation for the diffusivity tensor:\n\\begin{equation}\n\t\\tens{D}\n\t\\approx\n\t\\frac{L^{2}}{2\\uppi \\gamma}\n\t\\mathrm{e}^{-\\Delta V \/ kT}\n\t(\\bm{\\nabla}_{\\bm{r}}\\bm{\\nabla}_{\\bm{r}}V)|_{\\bm{r}=\\bm{0}}\n\t,\n\\end{equation}\nor, upon taking one-half the trace,\n\\begin{equation}\n\tD\n\t\\approx\n\t\\frac{L^{2}}{4\\uppi \\gamma}\n\t\\mathrm{e}^{-\\Delta V \/ kT}\n\t(\\nabla_{\\bm{r}}^{2}V)|_{\\bm{r}=\\bm{0}}\n\t.\n\t\\label{eq:SI_diffusivity_largeV_approx}\n\\end{equation}\nThis is exactly the form that would be predicted by Kramers' theory for the escape of a Brownian particle from a deep potential well \\cite{kramers1940brownian,brinkman1956brownian,brinkman1956brownian2}.\nComparison of Eq.~\\eqref{eq:SI_diffusivity_largeV_approx} to numerical calculations of $D$ indicates the qualitatively correct dependence on the trapping strength, but quantitative discrepancies due to errors in the approximation \\eqref{eq:SI_d_field_largeV_approx} for the $\\bm{d}$-field (see Fig.~\\ref{Fig:SI_StrongTrap_D_vs_K_LogLinearPlot}). Quantitative agreement can be obtained by renormalizing the above result by a factor that depends upon the ratio $W_{\\text{trap}} \/ L$. Therefore, we write\n\\begin{equation}\n\tD\n\t\\propto\n\t\\frac{L^{2}}{4\\uppi \\gamma}\n\t\\mathrm{e}^{-\\Delta V \/ kT}\n\t(\\nabla_{\\bm{r}}^{2}V)|_{\\bm{r}=\\bm{0}}\n\t\\label{eq:SI_diffusivity_largeV_approx_proportionality}\n\\end{equation}\nup to a proportionality constant. 
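As a quick numerical illustration, the unrenormalized deep-well estimate can be evaluated directly (a sketch with $kT$ and $\gamma$ normalized to one; the values of $\kappa$, $W_{\text{trap}}$, and $L$ match the trap geometry considered in this study):

```python
import math

# Evaluate D ~ (L^2 / 4*pi*gamma) * exp(-dV/kT) * (Laplacian V)|_0,
# with (Laplacian V)|_0 = 2*kappa for the 2D harmonic well.
# kT and gamma are set to one (illustrative normalization).
kT, gamma = 1.0, 1.0
kappa, W_trap, L = 5.0, 3.2, 6.0      # kT/um^2, um, um
dV = kappa * W_trap ** 2 / 8          # well depth dV = kappa*W^2/8 = 6.4 kT
D = (L ** 2 / (4 * math.pi * gamma)) * math.exp(-dV / kT) * (2 * kappa)
print(dV, D)  # D is exponentially small compared to kT/gamma = 1
```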
Eq.~\\eqref{eq:SI_diffusivity_largeV_approx_proportionality} is identical to Eq.~\\eqref{eq:diffusivity_stiff_traps} from the main text.\nFor traps of diameter $W_{\\text{trap}} = 3.2$ $\\mu$m spaced a distance $L=6$ $\\mu$m apart, a proportionality constant of 1.5 gives quantitative agreement with the exact dispersion theory (see Fig.~\\ref{Fig:SI_StrongTrap_D_vs_K_LogLinearPlot}).\n\n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.5\\linewidth]{Figs\/SI\/SIFig_StrongPotentialLimit_ExponentialTrends.png}\n\t\t\\caption{%\n\t\tLog-linear plot of diffusivity $D$ against trap stiffness $\\kappa$. The full numerical solution (solid curve) is compared against Eq.~\\eqref{eq:diffusivity_stiff_traps} (dashed curves) using two different constants of proportionality. Irrespective of the numerical prefactor, Eq.~\\eqref{eq:diffusivity_stiff_traps} demonstrates the appropriate scaling with the trapping strength and is consistent with Kramers' theory of activated escape. A proportionality constant of 1.5 gives quantitative agreement with the exact solution for the specific geometry considered in this study.\n\t\t}\n\t\t\\label{Fig:SI_StrongTrap_D_vs_K_LogLinearPlot}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\subsection{Derivation of Eq. (11): oscillating traps in the high-frequency limit}\n\nIn the high-frequency limit, the potential-energy field is cycled in the $x$-direction at a rate much faster than the response time of the Brownian particle. A reasonable model for this system is a quasi-steady, uniform convection in the $x$-direction, for which we make the ansatz $g = g(y)$ and $d_{y} = d_{y} (y)$ (for the time being, we will ignore the $d_{x}$-field). 
Eqs.~\\eqref{eq:g_eqn} and \\eqref{eq:d_eqn} then simplify to\n\\begin{equation}\n\tkT\n\t\\frac{\\mathrm{d}^{2}g}{\\mathrm{d} y^{2}}\n\t+\n\t\\frac{\\partial V}{\\partial y} \\frac{\\mathrm{d} g}{\\mathrm{d} y}\n\t+\n\t\\left( \\frac{\\partial^{2} V}{\\partial x^{2}} + \\frac{\\partial^{2} V}{\\partial y^{2}} \\right) g\n\t=\n\t0\n\t,\n\t\\label{eq:SI_g_eqn_1d}\n\\end{equation}\n\\begin{equation}\n\tkT\n\t\\frac{\\mathrm{d}^{2} d_{y}}{\\mathrm{d} y^{2}}\n\t+\n\t\\frac{\\partial V}{\\partial y} \\frac{\\mathrm{d} d_{y}}{\\mathrm{d} y}\n\t+\n\t\\left( \\frac{\\partial^{2} V}{\\partial x^{2}} + \\frac{\\partial^{2} V}{\\partial y^{2}} \\right) d_{y}\n\t=\n\t-\n\t2 kT \\frac{\\mathrm{d} g}{\\mathrm{d} y}\n\t- \n\tg \\frac{\\partial V}{\\partial y}\n\t+\n\t\\bigg\\langle g \\frac{\\partial V}{\\partial y} \\bigg\\rangle g\n\t.\n\t\\label{eq:SI_dy_eqn_1d}\n\\end{equation}\nAveraging Eqs.~\\eqref{eq:SI_g_eqn_1d}-\\eqref{eq:SI_dy_eqn_1d} over the $x$-direction only and defining the modified potential,\n\\begin{equation}\n\tv(y) \n\t=\n\t\\frac{1}{L} \\int_{-L\/2}^{L\/2} V(x,y) \\, \\mathrm{d} x\n\t,\n\\end{equation}\nthen gives\n\\begin{equation}\n\tkT\n\t\\frac{\\mathrm{d}^{2}g}{\\mathrm{d} y^{2}}\n\t+\n\t\\frac{\\mathrm{d}}{\\mathrm{d} y} \\left( g\\frac{\\partial v}{\\partial y} \\right)\n\t=\n\t0\n\t,\n\t\\label{eq:SI_g_eqn_1d_2}\n\\end{equation}\n\\begin{equation}\n\tkT\n\t\\frac{\\mathrm{d}^{2} d_{y}}{\\mathrm{d} y^{2}}\n\t+\n\t\\frac{\\mathrm{d}}{\\mathrm{d} y} \\left( d_{y} \\frac{\\partial v}{\\partial y} \\right)\n\t=\n\t-\n\t2 kT \\frac{\\mathrm{d} g}{\\mathrm{d} y}\n\t- \n\tg \\frac{\\mathrm{d} v}{\\mathrm{d} y}\n\t+\n\t\\bigg\\langle g \\frac{\\mathrm{d} v}{\\mathrm{d} y} \\bigg\\rangle g\n\t,\n\t\\label{eq:SI_dy_eqn_1d_2}\n\\end{equation}\nwhere we have applied the conditions $V(L\/2,y)=V(-L\/2,y)$ and $(\\partial V \/ \\partial x)|_{x = \\pm L\/2} = 0$. 
Here, it is understood that the cell average of a one-dimensional (1D) function $f(y)$ simplifies to a 1D average in the $y$-direction, $\\braket{f} = L^{-1} \\int_{-L\/2}^{L\/2} f(y)\\, \\mathrm{d}y$.\n\nEqs.~\\eqref{eq:SI_g_eqn_1d_2}-\\eqref{eq:SI_dy_eqn_1d_2} are the 1D versions of Eqs.~\\eqref{eq:SI_g_eqn_steady}-\\eqref{eq:SI_d_eqn_steady}. The solution of Eq.~\\eqref{eq:SI_g_eqn_1d_2} for the $g$-field, subject to the constraint $\\braket{g} = 1$, is the 1D analog of Eq.~\\eqref{eq:SI_g_field_steady}:\n\\begin{equation}\n\tg(y)\n\t=\n\t\\frac{\\mathrm{e}^{-v(y)\/kT}}{\\braket{\\mathrm{e}^{-v\/kT}}}\n\t.\n\\end{equation}\nEq.~\\eqref{eq:SI_dy_eqn_1d_2} then simplifies to\n\\begin{flalign}\n\tkT\n\t\\frac{\\mathrm{d}^{2} d_{y}}{\\mathrm{d} y^{2}}\n\t+\n\t\\frac{\\mathrm{d}}{\\mathrm{d} y} \\left( d_{y} \\frac{\\partial v}{\\partial y} \\right)\n\t&=\n\t-\n\tkT \\frac{\\mathrm{d} g}{\\mathrm{d} y}\n\t\\nonumber \\\\\n\t&=\n\t\\frac{\\mathrm{e}^{-v\/kT}}{\\braket{\\mathrm{e}^{-v\/kT}}}\n\t\\frac{\\mathrm{d} v}{\\mathrm{d} y}\n\t,\n\t\\label{eq:SI_dy_eqn_1d_3}\n\\end{flalign}\nwhich is the 1D analog of Eq.~\\eqref{eq:SI_d_eqn_steady_2}. 
Unlike the 2D problem, the 1D problem admits an exact analytical solution:\n\\begin{flalign}\n\td_{y} (y)\n\t&=\n\t- y g (y)\n\t+\n\tc_{1} \\mathrm{e}^{- v(y)\/kT}\n\t\\int_{0}^{y}\n\t\\mathrm{e}^{v(\\eta)\/kT} \\, \\mathrm{d} \\eta\n\t+\n\tc_{2} L \\mathrm{e}^{-v(y) \/ kT}\n\t\\nonumber\\\\\n\t&=\n\t- \\frac{y \\mathrm{e}^{-v(y)\/kT}}{\\braket{\\mathrm{e}^{-v\/kT}}} \n\t+\n\tc_{1} \\mathrm{e}^{- v(y)\/kT}\n\t\\int_{0}^{y}\n\t\\mathrm{e}^{v(\\eta)\/kT} \\, \\mathrm{d} \\eta\n\t+\n\tc_{2} L \\mathrm{e}^{-v(y) \/ kT}\n\t.\n\t\\label{eq:SI_dy_field_1d_general_soln}\n\\end{flalign}\nThe first term on the right-hand side of Eq.~\\eqref{eq:SI_dy_field_1d_general_soln} is simply the particular solution of Eq.~\\eqref{eq:SI_dy_eqn_1d_3}; it is the 1D analog of Eq.~\\eqref{eq:SI_d_field_largeV_approx}, which was used to approximate the full solution in the strong-potential limit. The remaining terms in Eq.~\\eqref{eq:SI_dy_field_1d_general_soln} are the homogeneous solutions, with constants $c_{1}$, $c_{2}$ that must be determined from the periodicity and normalization conditions,\n\\begin{subequations}\n\\begin{gather}\n\td_{y} (L\/2) - d_{y} (-L\/2) = 0,\n\t\\label{eq:SI_dy_1d_bc1}\n\t\\\\\n\t\\braket{d_{y}}\n\t=\n\t\\frac{1}{L} \\int_{-L\/2}^{L\/2} d_{y}(y) \\, \\mathrm{d}y\n\t=\n\t0.\n\t\\label{eq:SI_dy_1d_bc2}\n\\end{gather}\n\t\\label{eq:SI_dy_1d_bcs}%\n\\end{subequations}\nInserting Eq.~\\eqref{eq:SI_dy_field_1d_general_soln} into \\eqref{eq:SI_dy_1d_bcs}, setting $v(L\/2) = v(-L\/2)$, and solving for the two unknowns $c_{1}$ and $c_{2}$ gives\n\\begin{subequations}\n\\begin{flalign}\n\tc_{1} &= \\braket{\\mathrm{e}^{-v\/kT}}^{-1} \\braket{\\mathrm{e}^{v\/kT}}^{-1}\n\t,\n\t\\\\\n\tc_{2} &= \n\t\\frac{1}{L} \\braket{\\mathrm{e}^{-v\/kT}}^{-2}\n\t\\left(\n\t\t\\braket{y\\mathrm{e}^{-v\/kT}}\n\t\t-\n\t\t\\braket{\\mathrm{e}^{v\/kT}}^{-1}\n\t\t\\bigg\\langle \\mathrm{e}^{-v\/kT}\\int_{0}^{y} \\mathrm{e}^{v(\\eta)\/kT} \\mathrm{d} \\eta 
\\bigg\\rangle\n\t\\right)\n\t.\n\\end{flalign}\n\t\\label{eq:SI_dy_1d_constants}%\n\\end{subequations}\n\n\nWith the solution for $d_{y}(y)$ fully specified, it remains to compute the effective diffusivity along the $y$-axis. Multiplying Eq.~\\eqref{eq:SI_dy_field_1d_general_soln} by $\\mathrm{d} v \/ \\mathrm{d}y$, applying the inverse chain rule, and averaging over the $y$-direction gives\n\\begin{flalign}\n\t\\bigg\\langle d_{y} \\frac{\\mathrm{d} v}{\\mathrm{d}y}\\bigg\\rangle\n\t&=\n\tkT\n\t\\left(\n\t\\braket{\\mathrm{e}^{-v\/kT}}^{-1}\n\t\\bigg\\langle y \\frac{\\mathrm{d}\\mathrm{e}^{-v\/kT}}{\\mathrm{d}y} \\bigg\\rangle \n\t-\n\tc_{1} \n\t\\bigg\\langle \\frac{\\mathrm{d}\\mathrm{e}^{-v\/kT}}{\\mathrm{d}y}\n\t\\int_{0}^{y}\n\t\\mathrm{e}^{v(\\eta)\/kT} \\, \\mathrm{d} \\eta\n\t\\bigg\\rangle \n\t-\n\tc_{2} L \n\t\\bigg\\langle \\frac{\\mathrm{d}\\mathrm{e}^{-v\/kT}}{\\mathrm{d}y} \\bigg\\rangle \n\t\\right).\n\t\\label{eq:SI_dy_gradv_1d_avg}\n\\end{flalign}\nInserting Eqs.~\\eqref{eq:SI_dy_1d_constants} into \\eqref{eq:SI_dy_gradv_1d_avg} and integrating by parts then gives, after some simplification,\n\\begin{flalign}\n\t\\bigg\\langle d_{y} \\frac{\\mathrm{d} v}{\\mathrm{d}y}\\bigg\\rangle\n\t&=\n\tk T\n\t\\left( \n\t\t-1\n\t\t+\n\t\t\\braket{\\mathrm{e}^{-v\/kT}}^{-1} \\braket{\\mathrm{e}^{v\/kT}}^{-1}\n\t\\right)\n\t.\n\\end{flalign}\nSince $d_{y}$ is independent of $x$, $\\braket{d_{y} (\\partial V \/ \\partial y)} = \\braket{d_{y} (\\mathrm{d} v \/ \\mathrm{d} y)}$. 
Thus, the $yy$ component of Eq.~\\eqref{eq:effective_diffusivity} simplifies to\n\\begin{flalign}\n\t\\overline{D}_{yy}\n\t&=\n\t\\frac{kT}{\\gamma}\n\t+\n\t\\frac{1}{\\gamma}\n\t\\bigg\\langle d_{y} \\frac{\\mathrm{d} v}{\\mathrm{d}y}\\bigg\\rangle\n\t\\nonumber\\\\\n\t&=\n\t\\frac{kT}{\\gamma}\n\t\\braket{\\mathrm{e}^{-v\/kT}}^{-1} \\braket{\\mathrm{e}^{v\/kT}}^{-1}\n\t,\n\t\\label{eq:SI_Dyy_1d}\n\\end{flalign}\nwhere an overbar is used to denote the long-time average over one periodic cycle.\nThis is the classical result for diffusion of a Brownian particle in a 1D periodic potential \\cite{lifson1962self,Festa1978}.\n\nUp until now, we have neglected the $d_{x}$-field, which appears in the $xx$-component of Eq.~\\eqref{eq:effective_diffusivity} and, therefore, influences the effective diffusivity along the $x$-axis. To a first approximation, we assume that the gradients in the $x$-direction have been ``smeared out'' so that dispersion in that direction is negligible: $\\braket{d_{x} (\\partial V \/ \\partial x)} \\approx 0$. This approximation is consistent with a model of dispersion in an effectively 1D potential. Therefore, the $xx$-component of Eq.~\\eqref{eq:effective_diffusivity} (time-averaged) is simply the Stokes-Einstein-Sutherland diffusivity:\n\\begin{equation}\n\t\\overline{D}_{xx} = \\frac{kT}{\\gamma}\n\t.\n\t\\label{eq:SI_Dxx_1d}\n\\end{equation}\nEqs.~\\eqref{eq:SI_Dyy_1d} and \\eqref{eq:SI_Dxx_1d} are exactly the same as Eq.~\\eqref{eq:diffusivity_high_frequency} from the main text.\n\n\n\n\n\\section{5. 
Brownian Dynamics Simulations} \\label{sec:brownian_dynamics_simulations}\n\n\nThe Langevin equation of motion corresponding to Eqs.~\\eqref{eq:SI_smoluchowski_eqn}-\\eqref{eq:SI_coordinate_conversion} is given by\n\\begin{equation}\n\t\\frac{\\mathrm{d} \\bm{r}_{i} (t)}{\\mathrm{d} t}\n\t=\n\t-\\bm{u}(t)\n\t-\n\t\\frac{1}{\\gamma} \\bm{\\nabla}_{\\bm{r}} V[\\bm{r}_{i}(t)]\n\t+\n\t\\sqrt{\\frac{2kT}{\\gamma}}\\bm{B}_{i}(t)\n\t,\n\t\\qquad\n\ti = 1, 2, \\dots, N_{\\text{p}}\n\t,\n\t\\label{eq:SI_langevin_eqn}\n\\end{equation}\nwhere $i$ is the particle index, $N_{\\text{p}}$ is the total number of particles in the system, and $\\bm{B}_{i} (t)$ is a white-noise source with statistics,\n\\begin{equation}\n\t\\braket{\\bm{B}_{i}(t)}\n\t=\n\t\\bm{0},\n\t\\qquad\n\t\\braket{\\bm{B}_{i}(t)\\bm{B}_{i}(t')}\n\t=\n\t\\delta(t - t')\n\t\\tens{I}\n\t.\n\t\\label{eq:SI_fluctuation_dissipation_theorem}\n\\end{equation}\n[Note that the angle brackets $\\langle\\,\\cdot\\,\\rangle$ appearing in Eq.~\\eqref{eq:SI_fluctuation_dissipation_theorem} denote {\\it ensemble} averages and are not to be confused with the {\\it cell} average defined in the main text.]\nThe potential-energy field $V(\\bm{r})$ and convective velocity $\\bm{u}(t)$ appearing in Eq.~\\eqref{eq:SI_langevin_eqn} are given by Eqs.~\\eqref{eq:harmonic_potential} and \\eqref{eq:trap_velocity}, respectively. Interactions between particles have been neglected, so the $N_{\\text{p}}$ equations of motion are uncoupled.\nFor the purpose of numerically time-advancing Eq.~\\eqref{eq:SI_langevin_eqn}, it is convenient to shift to the laboratory frame in which the position of each particle is measured as $\\bar{\\bm{r}}_{i}(t) = \\bm{r}_{0}(t) + \\bm{r}_{i}(t)$, where $\\bm{r}_{0}(t) = \\int_{0}^{t} \\bm{u} (\\tau) \\, \\mathrm{d} \\tau = \\hat{\\bm{e}}_{x} A \\sin{(\\omega t)}$ denotes the time-dependent position of the moving traps. 
In this frame, Eq.~\\eqref{eq:SI_langevin_eqn} becomes\n\\begin{equation}\n\t\\frac{\\mathrm{d} \\bar{\\bm{r}}_{i}(t)}{\\mathrm{d} t}\n\t=\n\t-\n\t\\frac{1}{\\gamma} \\bm{\\nabla}_{\\bar{\\bm{r}}} V[\\bar{\\bm{r}}_{i}(t)-\\bm{r}_{0}(t)]\n\t+\n\t\\sqrt{\\frac{2kT}{\\gamma}}\\bm{B}_{i}(t)\n\t,\n\t\\qquad\n\ti = 1, 2, \\dots, N_{\\text{p}}\n\t.\n\t\\label{eq:SI_langevin_eqn_relative}\n\\end{equation}\nHere, the convective term has been eliminated and the potential-energy field oscillates in time. \n\n\n\n\nIn our Brownian dynamics simulations, we numerically advanced Eq.~\\eqref{eq:SI_langevin_eqn_relative} using the GPU-enabled HOOMD-blue software package \\cite{anderson2020hoomd}. A system of $N_{\\text{p}} = 10,000$ particles was initialized at random positions within a periodically replicated $L\\times L$ cell and advanced for $\\tau = 10,000$ s (2.78 h) using a time step $\\Delta t = 1$ ms. Fig.~\\ref{Fig:SI_Gfield_Theory_vs_Simulation} shows that the simulated probability density is in excellent agreement with the deterministic solution of the corresponding Smoluchowski equation [Eq.~\\eqref{eq:g_eqn}]. The MSD and effective diffusivity of the particles were then computed exactly as in the experiments using Eqs.~\\eqref{eq:SI_msd_experiments}-\\eqref{eq:SI_diffusivity_experiments}, wherein the time integral was discretized using the left Riemann sum. 
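For reference, the Euler-Maruyama update underlying such a simulation can be sketched in a few lines of NumPy (a stripped-down stand-in, not the production HOOMD-blue setup; the particle count, run length, and drive parameters here are illustrative):

```python
import numpy as np

# Euler-Maruyama integration of Eq. (SI_langevin_eqn_relative) for
# non-interacting particles in a lattice of truncated harmonic wells whose
# centers oscillate as r0(t) = (A sin(wt), 0). Parameters are illustrative.
rng = np.random.default_rng(0)
kT, gamma = 1.0, 1.0
kappa, W_trap, L = 5.0, 3.2, 6.0        # stiffness, trap diameter, spacing
A, omega = 1.0, 2 * np.pi * 0.01        # drive amplitude, angular frequency
Np, dt, nsteps = 1000, 1e-3, 2000

r = rng.uniform(-L / 2, L / 2, size=(Np, 2))   # lab-frame positions

def force(r, t):
    """Force -grad V: -kappa*s inside a well of diameter W_trap, 0 outside."""
    s = r - np.array([A * np.sin(omega * t), 0.0])   # offset from moving traps
    s -= L * np.round(s / L)                          # nearest trap image
    inside = (np.hypot(s[:, 0], s[:, 1]) <= W_trap / 2)[:, None]
    return -kappa * s * inside

for n in range(nsteps):
    noise = np.sqrt(2 * kT * dt / gamma) * rng.standard_normal(r.shape)
    r += force(r, n * dt) / gamma * dt + noise
# the MSD then follows from r relative to its initial values, as above
```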
\n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\\includegraphics[width=1\\linewidth]{Figs\/SI\/SIFig_Gfield_TheoryVsSimulation.png}\n\t\t\\caption{%\n\t\tComparison of the convected probability density $g(x,y,t)$ for a stiff trap near the critical frequency ($\\kappa=5$ $kT\/\\mu$m$^2$, $\\omega\/2\\uppi=18.33$ mHz) from\n\t\t(a) deterministic solution of the Smoluchowski equation [Eq.~\\eqref{eq:g_eqn}] and (b) stochastic simulation of the Langevin equation [Eq.~\\eqref{eq:SI_langevin_eqn}].\n\t\t}\n\t\t\\label{Fig:SI_Gfield_Theory_vs_Simulation}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\n\n\n\n\n\n\n\\subsection{Derivation of Eq. (12): convective escape of a Brownian particle from a harmonic well}\n\nWe wish to estimate the critical oscillation frequency $\\omega_{\\text{max}}$ at which a Brownian particle rattling around the bottom of a potential-energy well is convected near the edge of the well with ample probability for escape. To make such an estimate, we start with the Langevin equation, Eq.~\\eqref{eq:SI_langevin_eqn}, simplified for a single particle in a harmonic well $V(\\bm{r}) = \\tfrac{1}{2} \\kappa r^{2}$:\n\\begin{equation}\n\t\\frac{\\mathrm{d} \\bm{r} (t)}{\\mathrm{d} t}\n\t=\n\t-\n\t\\frac{\\kappa}{\\gamma} \\bm{r}(t)\n\t-\n\t\\bm{u}(t)\n\t+\n\t\\sqrt{\\frac{2kT}{\\gamma}}\\bm{B}(t)\n\t.\n\t\\label{eq:SI_langevin_eqn_harmonic_well}\n\\end{equation}\nEq.~\\eqref{eq:SI_langevin_eqn_harmonic_well} may be straightforwardly integrated with the initial condition $\\bm{r}(0) = \\bm{0}$ to give the fluctuating particle position,\n\\begin{flalign}\n\t\\bm{r} (t)\n\t&=\n\t\\mathrm{e}^{-\\kappa t \/ \\gamma}\n\t\\int_{0}^{t}\n\t\\mathrm{e}^{\\kappa s \/ \\gamma}\n\t\\left(\n\t\t-\\bm{u}(s)\n\t\t+\n\t\t\\sqrt{\\frac{2kT}{\\gamma}} \\bm{B}(s)\n\t\\right)\n\t\\mathrm{d} s\n\t.\n\t\\label{eq:SI_langevin_eqn_harmonic_well_soln}\n\\end{flalign}\nSubstituting Eq.~\\eqref{eq:trap_velocity} into \\eqref{eq:SI_langevin_eqn_harmonic_well_soln} for the 
convective velocity then gives, upon integration,\n\\begin{flalign}\n\t\\bm{r} (t)\n\t&=\n\t-\\hat{\\bm{e}}_{x} A \n\t\\left( \\frac{\\gamma \\omega \/\\kappa}{1 + (\\gamma \\omega \/ \\kappa)^{2}} \\right)\n\t\\left(\n\t\t\\cos{(\\omega t)}\n\t\t+\n\t\t\\frac{\\gamma \\omega}{\\kappa}\n\t\t\\sin{(\\omega t)}\n\t\t-\n\t\t\\mathrm{e}^{-\\kappa t \/ \\gamma}\n\t\\right)\n\t+\n\t\\sqrt{\\frac{2kT}{\\gamma}} \n\t\\mathrm{e}^{-\\kappa t \/ \\gamma}\n\t\\int_{0}^{t}\n\t\\mathrm{e}^{\\kappa s \/ \\gamma}\n\t\\bm{B}(s)\n\t\\, \\mathrm{d} s\n\t.\n\t\\label{eq:SI_langevin_eqn_harmonic_well_soln_2}\n\\end{flalign}\nThe first term on the right-hand side of Eq.~\\eqref{eq:SI_langevin_eqn_harmonic_well_soln_2} is the deterministic part of the fluctuating particle position, which is driven by oscillatory convection and attenuated by the trapping force. The second term is the stochastic part due to Brownian motion. The mean displacement and mean squared displacement of the particle capture the strength of these deterministic and stochastic contributions, respectively:\n\\begin{flalign}\n\t\\braket{\\bm{r}(t)}\n\t=\n\t-\\hat{\\bm{e}}_{x} A \n\t\\left( \\frac{\\gamma \\omega \/\\kappa}{1 + (\\gamma \\omega \/ \\kappa)^{2}} \\right)\n\t\\left(\n\t\t\\cos{(\\omega t)}\n\t\t+\n\t\t\\frac{\\gamma \\omega}{\\kappa}\n\t\t\\sin{(\\omega t)}\n\t\t-\n\t\t\\mathrm{e}^{-\\kappa t \/ \\gamma}\n\t\\right)\n\t,\n\t\\label{eq:SI_drift_in_harmonic_well}\n\\end{flalign}\n\\begin{flalign}\n\t\\braket{(\\bm{r}(t) - \\braket{\\bm{r}(t)})(\\bm{r}(t) - \\braket{\\bm{r}(t)})}\n\t=\n\t\\frac{kT}{\\kappa} \\left( 1 - \\mathrm{e}^{-2\\kappa t \/ \\gamma} \\right) \\tens{I}\n\t,\n\t\\label{eq:SI_variance_in_harmonic_well}\n\\end{flalign}\nwhere we have applied the white-noise statistics, Eq.~\\eqref{eq:SI_fluctuation_dissipation_theorem}, of the fluctuating $\\bm{B}$-field.\n\nAfter waiting a long enough time $t \\gg \\gamma \/\\kappa$, the exponential terms in 
Eqs.~\\eqref{eq:SI_drift_in_harmonic_well}-\\eqref{eq:SI_variance_in_harmonic_well} die off and we are left with an oscillating particle probability with variance $k T \/ \\kappa$ given by Eq.~\\eqref{eq:SI_variance_in_harmonic_well}. The amplitude of these oscillations is found from the extrema of the particle drift, Eq.~\\eqref{eq:SI_drift_in_harmonic_well}:\n\\begin{equation}\n\t\\sup_{t\\ge 0}\n\t| \\langle \\bm{r}(t)\\rangle |\n\t=\n\t\\frac{\\gamma \\omega A\/\\kappa}{\\sqrt{1+ (\\gamma \\omega \/ \\kappa)^{2}}}\n\t\\approx\n\t\\frac{\\gamma \\omega A}{\\kappa}\n\t\\quad\n\t\\text{for}\n\t\\quad\n\t\\frac{\\gamma \\omega}{\\kappa} \\ll 1\n\t.\n\\end{equation}\nThus, the basin of probability of size $\\sim \\sqrt{kT\/\\kappa}$ oscillates with amplitude $\\sim \\gamma \\omega A \/ \\kappa$ about the center of the potential-energy well. As the frequency $\\omega$ is increased, the oscillations become more pronounced. The particle is expected to escape a well of finite width $W_{\\text{trap}}$ when the spatial extent of the particle probability density crosses the edge of the well, at a critical frequency $\\omega_{\\text{max}}$:\n\\begin{equation}\n\t\\tfrac{1}{2} W_{\\text{trap}}\n\t\\approx\n\t\\frac{\\gamma \\omega_{\\text{max}} A}{\\kappa}\n\t+\n\t\\sqrt{\\frac{kT}{\\kappa}}\n\t,\n\\end{equation}\nor, solving for $\\omega_{\\text{max}}$,\n\\begin{equation}\n\t\\omega_{\\text{max}}\n\t\\approx\n\t\\frac{\\kappa}{\\gamma A}\n\t\\left(\n\t\t\\tfrac{1}{2} W_{\\text{trap}}\n\t\t-\n\t\t\\sqrt{\\frac{kT}{\\kappa}}\n\t\\right)\n\t.\n\\end{equation}\nThe last expression is exactly Eq.~\\eqref{eq:critical_frequency} from the main text. \n\n\n\n\n\\section{6. 
Additional Data}\n\n\n\nIn addition to measuring the effective diffusivity $\\overline{\\tens{\\bm{D}}}$ as a function of the oscillation frequency $\\omega$, we also varied the amplitude $A$ while holding the frequency fixed.\nThe strength of the convective velocity $\\bm{u}(t) = \\hat{\\bm{e}}_{x} \\omega A \\cos{(\\omega t)}$ may be modified by varying either the amplitude $A$ or the frequency $\\omega$. \nFig.~\\ref{Fig:SI4} plots $\\overline{D}_{xx}$ and $\\overline{D}_{yy}$ against $A$ for a fixed trap stiffness $\\kappa = 5~kT \/ \\mu\\mathrm{m}^2$ and frequency $\\omega\/2\\uppi = 18.3$ mHz.\nThis frequency corresponds to the critical frequency $\\omega_{\\text{max}}$ (for which $\\overline{D}_{xx}$ is maximized) for $\\kappa = 5~kT \/ \\mu\\mathrm{m}^2$ and $A = 5$ $\\mu$m, as shown in the main text (see Fig.~\\ref{fig:Fig3}).\nWe find that $\\overline{D}_{xx}$ is non-monotonic and achieves a maximum at $A = 5$ $\\mu$m.\nFor amplitudes $A > 5$ $\\mu$m, the convective motion is fast compared to the particle response time. Consequently, the particles sample regions outside of the harmonic well and their average diffusivity along the convection axis is reduced. \n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.5\\linewidth]{Figs\/SI\/SiFig_AmplitudeSweep.png}\n\t\t\\caption{%\n\t\t\tEffective diffusivity as a function of oscillation amplitude at a fixed trap stiffness $\\kappa = 5~kT \/ \\mu\\mathrm{m}^2$ and frequency $\\omega\/2\\uppi = 18.3~\\mathrm{mHz}$.\n\t\t\tAs in Fig.~\\ref{fig:Fig3} of the main text, $\\overline{D}_{xx}$ is non-monotonic and reaches a maximum when the convection strength balances the harmonic trap strength.\n\t\t\tAt very large amplitudes, the particle cannot quickly respond to the rapidly oscillating trap and explores regions outside of the harmonic well.\n\t\t}\n\t\t\\label{Fig:SI4}\n\t\\end{center}\n\t\\vspace{-18pt}\n\\end{figure}\n\n\n\\pagebreak\n\\section{7. 
Supplemental Movies}\nBelow, we describe the Supplemental Movies associated with this manuscript. All time stamps correspond to hours:minutes:seconds.\n\n\\begin{enumerate}\n\t\\item[] \\textbf{S1.} Experimental micrographs of silica particles with radius $a = 1.25~\\mu$m diffusing through a stationary array of harmonic traps (6$\\times$6 grid shown) with varying trap stiffness.\n\t\n\t\\item[] \\textbf{S2.} Microscopic Brownian dynamics simulations of a small sample of particles diffusing through a stationary array of harmonic traps (6$\\times$6 grid shown) with varying trap stiffness (same parameters as in S1).\n\n\t\\item[] \\textbf{S3.} Experimental micrographs of silica particles with radius $a = 1.25~\\mu$m diffusing through an oscillating array of stiff traps (6$\\times$6 grid shown) with varying oscillation frequency and fixed trap stiffness $\\kappa = 5~kT\/\\mu\\mathrm{m}^2$. The second part of the movie shows the trajectories of several tagged particles.\n\t\n\t\\item[] \\textbf{S4.} Microscopic Brownian dynamics simulations of a small sample of particles diffusing through an oscillating array of stiff traps (6$\\times$6 grid shown) with varying oscillation frequency and fixed trap stiffness $\\kappa = 5~kT\/\\mu\\mathrm{m}^2$ (same parameters as in S3).\n\t\n\t\\item[] \\textbf{S5.} Macroscopic Brownian dynamics simulations of 10,000 particles diffusing through a stationary array of harmonic traps (60$\\times$60 grid shown) over long length and time scales, varying the trap stiffness.\n\t\n\t\\item[] \\textbf{S6.} Macroscopic Brownian dynamics simulations of 10,000 particles diffusing through an oscillating array of stiff traps (60$\\times$60 grid shown) over long length and time scales, varying the oscillation frequency at a fixed trap stiffness $\\kappa = 5~kT\/\\mu\\mathrm{m}^2$.\n\t\n\t\\item[] \\textbf{S7.} 2D contour plots of the displacement field density $d_{x}(x,y,t)$ in an $L\\times L$ periodic cell containing an oscillating harmonic trap, 
varying the oscillation frequency at a fixed trap stiffness $\\kappa = 5~kT\/\\mu\\mathrm{m}^2$ (same parameters as in S6). Bottom row plots the long-time average over one periodic cycle.\n\t\n\\end{enumerate}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Generalities on Poisson algebras}\n\n\\subsection{Poisson algebras}\nLet $\\K$ be a field of characteristic $0$. A $\\K$-Poisson algebra is a $\\K$-vector space $\\p$ equipped with two bilinear products denoted by $x\\cdot y$ and $\\{x ,y \\}$, \nhaving the following properties:\n\\begin{enumerate}\n\\item The couple $(\\p,\\cdot)$ is an associative commutative $\\K$-algebra.\n\n\\item The couple $(\\p, \\{ , \\})$ is a $\\K$-Lie algebra.\n\n\\item The products $\\cdot$ and $\\{, \\}$ satisfy the Leibniz rule:\n$$\\{x\\cdot y,z\\}=x \\cdot\\{y,z\\}+\\{x,z\\}\\cdot y,$$\nfor any $x,y,z \\in \\p.$\n\\end{enumerate}\nThe product $\\{,\\}$ is usually called Poisson bracket and the Leibniz identity means that the Poisson bracket acts as a derivation of the associative product.\n\nIn \\cite{markl-eli-poisson}, one proves that any Poisson structure on a $\\K$-vector space is also given by a nonassociative product, denoted by $xy$ and satisfying the non associative identity\n\\begin{eqnarray}\n\\label{associator} 3A(x,y,z)=(xz)y+(yz) x-(y x)z-(z x) y.\n\\end{eqnarray}\nwhere $A(x,y,z)$ is the associator $A(x,y,z)=(xy)z-x(yz)$. In fact, if $\\p$ is a Poisson algebra given by the associative product $x\\cdot y$ and the Poisson bracket $\\{x,y\\}$, \nthen $xy$ is given by $$xy=\\{x,y\\}+x \\cdot y.$$ Conversely, the Poisson bracket and the associative product of $\\p$ are the skew-symmetric part and the symmetric part of \nthe product $xy$. Thus it is equivalent to present a Poisson algebra classically or by this nonassociative product. In \\cite{Mic-Eli-Poisson}, we have studied algebraic \nproperties of the nonassociative algebra $\\p$. 
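Identity (\ref{associator}) can be verified on a concrete example. The following Python sketch (our illustration, not part of the original text; the polynomial encoding and helper names are ours) realizes the nonassociative product $xy=\{x,y\}+x\cdot y$ on the polynomial Poisson algebra $\K[x,y]$ with the standard bracket $\{f,g\}=\partial_x f\,\partial_y g-\partial_y f\,\partial_x g$, and checks that $3A(x,y,z)-\big((xz)y+(yz)x-(yx)z-(zx)y\big)$ vanishes on sample triples.

```python
from fractions import Fraction
from itertools import product

# Polynomials in K[x,y] encoded as {(i, j): coeff} for the monomial x^i y^j.

def add(p, q, sign=1):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, Fraction(0)) + sign * c
    return {m: c for m, c in r.items() if c}

def mul(p, q):
    r = {}
    for (i1, j1), c1 in p.items():
        for (i2, j2), c2 in q.items():
            m = (i1 + i2, j1 + j2)
            r[m] = r.get(m, Fraction(0)) + c1 * c2
    return {m: c for m, c in r.items() if c}

def diff(p, var):  # var 0 -> d/dx, var 1 -> d/dy
    r = {}
    for (i, j), c in p.items():
        e = (i, j)[var]
        if e:
            m = (i - 1, j) if var == 0 else (i, j - 1)
            r[m] = r.get(m, Fraction(0)) + e * c
    return r

def bracket(p, q):  # standard Poisson bracket {p,q} = p_x q_y - p_y q_x
    return add(mul(diff(p, 0), diff(q, 1)), mul(diff(p, 1), diff(q, 0)), -1)

def star(p, q):  # nonassociative Poisson product pq = {p,q} + p.q
    return add(bracket(p, q), mul(p, q))

def scale(p, k):
    return {m: k * c for m, c in p.items()}

def assoc(p, q, r):  # associator A(p,q,r) = (pq)r - p(qr)
    return add(star(star(p, q), r), star(p, star(q, r)), -1)

def identity_defect(p, q, r):
    # Identity (1): 3A(p,q,r) = (pr)q + (qr)p - (qp)r - (rp)q
    rhs = add(add(star(star(p, r), q), star(star(q, r), p)),
              add(star(star(q, p), r), star(star(r, p), q)), -1)
    return add(scale(assoc(p, q, r), 3), rhs, -1)

# Sample polynomials (illustrative choices): x^2 + 3y, xy, y^2 - 2x.
f = {(2, 0): Fraction(1), (0, 1): Fraction(3)}
g = {(1, 1): Fraction(1)}
h = {(0, 2): Fraction(1), (1, 0): Fraction(-2)}
ok = all(identity_defect(a, b, c) == {} for a, b, c in product([f, g, h], repeat=3))
print(ok)
```

The check runs over all 27 triples built from the three sample polynomials and returns `True` for these samples, consistent with the identity.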
\nIn particular we have proved that this algebra is flexible, power-associative and admits a Peirce decomposition.\n\n\\medskip\n\nIf $\\p$ is a Poisson algebra given by the nonassociative product (\\ref{associator}), \nwe denote by $\\g_{\\p}$ the Lie algebra on the same vector space $\\p$ whose Lie bracket is $$\\{x,y\\}=\\displaystyle\\frac{xy-yx}{2}$$ and by $\\mathcal{A}_{\\p}$ \nthe commutative associative algebra, on the same vector space, whose product is $$x \\cdot y=\\displaystyle\\frac{xy+yx}{2}.$$\nAn important problem in mathematical physics, and more precisely in Quantum Field Theory, is the deformation of Poisson algebras. \nThe classical deformations of Poisson algebras consist of deformations of the Poisson brackets, that is, deformations of $\\g_\\mathcal{P}$ which leave the\nassociative multiplication of $\\mathcal{A}_\\mathcal{P}$ unchanged and satisfy the Leibniz identity \\cite{Pich}. In \\cite{remm-deformPoisson} \nthis type of deformation has been generalized by using a nonassociative multiplication defining the Poisson structure. The deformations of this nonassociative\nmultiplication provide general Poisson deformations.\n\n\n\n\\section{Poisson superalgebras}\nBy a $\\K$-super vector space, we mean a $\\Z_2$-graded vector space $V=V_0 \\oplus V_1$. The vectors of $V_0$ and $V_1$ are called homogeneous vectors, of degree $0$ and $1$ respectively. 
For a homogeneous vector $x$, we denote by $\\mid x\\mid$ its degree.\nA $\\K$-Poisson superalgebra is a $\\K$-super vector space $\\p=\\p_0 \\oplus \\p_1$ equipped with two bilinear products denoted by $x\\cdot y$ and $\\{x ,y \\}$, having the following properties:\n\\begin{itemize}\n\\item The couple $(\\p,\\cdot)$ is an associative super commutative $\\K$-algebra,\nthat is, $$x \\cdot y=(-1)^{\\mid x \\mid \\mid y \\mid}y \\cdot x.$$ \n\\item The couple $(\\p, \\{ , \\})$ is a $\\K$-Lie superalgebra, that is,\n$$\\{x ,y \\}=-(-1)^{\\mid x \\mid \\mid y \\mid}\\{y ,x \\}$$\nand satisfies the super Jacobi condition:\n$$(-1)^{\\mid z \\mid \\mid x \\mid}\\{x , \\{y ,z \\}\\}+(-1)^{\\mid x \\mid \\mid y \\mid}\\{y , \\{z ,x \\}\\}+\n(-1)^{\\mid y \\mid \\mid z \\mid}\\{z , \\{x ,y \\}\\}=0.$$\n\\item The products $\\cdot$ and $\\{, \\}$ satisfy the super Leibniz rule:\n$$\\{x,y\\cdot z\\}=\\{x,y\\} \\cdot z+(-1)^{\\mid x \\mid \\mid y \\mid}y \\cdot \\{x,z\\},$$\nwhere $x,y$ and $z$ are homogeneous vectors.\n\\end{itemize}\n\n\\bigskip\n\n\n\\begin{theorem}\nLet $\\p$ be a $\\K$-super vector space. 
Then $\\p$ is a Poisson superalgebra if and only if there exists on $\\p$ a nonassociative product $x y$ satisfying\n \\begin{equation}\\label{superPoisson}\n\\left\\{\n \\begin{array}{l}\n\\medskip\n 3(xy)z-3x(yz) + (-1)^{\\mid x \\mid \\mid y \\mid}(yx)z -(-1)^{\\mid y \\mid \\mid z \\mid} (xz)y\n - (-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid}(yz)x \\\\\n + (-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}(zx)y =0 \n\\end{array}\n\\right.\n\\end{equation}\nfor any homogeneous vectors $x,y,z \\in\\p$.\n\\end{theorem}\n\n\\noindent{\\it Proof.} Assume that $(\\p,\\cdot,\\{,\\})$ is a Poisson superalgebra.\nConsider the multiplication\n$$\\begin{array}{l}\nxy=x \\cdot y + \\{ x,y \\}.\n\\end{array}$$\nWe deduce that\n$$\\displaystyle x \\cdot y=\\frac{1}{2}(xy+(-1)^{\\mid x \\mid \\mid y \\mid}yx).$$\nThen the associativity condition reads, for homogeneous vectors,\n$$\\begin{array}{rl}\n\\medskip\nv_1(x,y,z)= & A(x,y,z)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}A(z,y,x)\n +(-1)^{\\mid x \\mid \\mid y \\mid}(yx)z\\\\\n & -(-1)^{\\mid y \\mid \\mid z \\mid}x(zy)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z\\mid}(yz)x+(-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}z(xy)\\\\\n = &0\n\\end{array}$$\nwhere $A(x,y,z)=(xy)z-x(yz)$. 
\nLikewise, the Poisson bracket reads for homogeneous vectors\n$$\\displaystyle \\{x , y\\}=\\frac{1}{2}(xy-(-1)^{\\mid x \\mid \\mid y \\mid}yx)$$\nand the super Jacobi condition reads\n$$\\begin{array}{rl}\n\\medskip\nv_2(x,y,z)=& (-1)^{\\mid x \\mid \\mid z \\mid}A(x,y,z)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid}A(y,x,z)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid y \\mid \\mid z \\mid}A(z,y,x)\\\\\n& -(-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}A(x,z,y)+(-1)^{\\mid x \\mid \\mid y \\mid}A(y,z,x)+(-1)^{\\mid y \\mid \\mid z \\mid}A(z,x,y) \\\\\n=&0\n\\end{array}$$\nThe super Leibniz rule reads\n$$\\begin{array}{rl}\n\\medskip\nv_3(x,y,z)=& A(x,y,z)-(-1)^{\\mid x \\mid \\mid y \\mid}A(y,x,z)+(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}A(z,y,x)\\\\\n& +(-1)^{\\mid y \\mid \\mid z \\mid}A(x,z,y)\n +(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid}A(y,z,x)-(-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}A(z,x,y) \\\\\n =& 0.\n\\end{array}$$\nLet us consider the vector\n\n \n$$\\begin{array}{ll}\n\\medskip\nv(x,y,z) = \n & \\frac{1}{3}\\left[ (-1)^{\\mid x \\mid \\mid y \\mid}(yx)z -(-1)^{\\mid y \\mid \\mid z \\mid} (xz)y\n - (-1)^{\\mid x \\mid (\\mid y \\mid +\\mid z \\mid)}(yz)x\n + (-1)^{(\\mid x \\mid +\\mid y \\mid )\\mid z \\mid}(zx)y\\right] \\\\\n& +(xy)z-x(yz) .\\\\\n\\end{array}$$\nThen \n$$\\begin{array}{l}\nv(x,y,z)= \\frac{1}{6}\\left( 2v_1(x,y,z)+(-1)^{\\mid x \\mid \\mid z \\mid}v_2(x,y,z)+v_3(x,y,z)+2(-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}v_3(z,x,y) \\right).\n\\end{array}\n$$\nWe deduce that the product $xy$ satisfies\n$$v(x,y,z)=0$$\nfor any homogeneous vectors $x,y,z$.\n\n\\noindent Conversely, assume that the product of the nonassociative algebra $\\p$ satisfies $v(x,y,z)=0$ for any homogeneous vectors $x,y,z.$ Let \n$v_1(x,y,z), v_2(x,y,z), v_3(x,y,z)$ be the vectors of $\\p$ defined in the first part respectively in 
relation with the associativity, the super Jacobi and super Leibniz relations. \nWe have\n$$\\begin{array}{ll}\n\\medskip\nv_1(x,y,z)=&v(x,y,z)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}v(z,y,x)\n+(-1)^{\\mid y \\mid \\mid z \\mid}v(x,z,y)\\\\\n\\medskip\n& -(-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}v(z,x,y)\\\\\n\\medskip\nv_2(x,y,z) =&(-1)^{\\mid x \\mid \\mid z \\mid}v(x,y,z)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid}v(y,x,z)-(-1)^{\\mid x \\mid \\mid y \\mid+\\mid y \\mid \\mid z \\mid}v(z,y,x)\\\\\n\\medskip\n& -(-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}v(x,z,y)+(-1)^{\\mid x \\mid \\mid y \\mid}v(y,z,x)+(-1)^{\\mid y\\mid \\mid z \\mid}v(z,x,y)\\\\\n\\medskip\nv_3(x,y,z)=&v(x,y,z)-(-1)^{\\mid x \\mid \\mid y \\mid}v(y,x,z)+(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}v(z,y,x)\\\\\n& +(-1)^{\\mid y \\mid \\mid z \\mid}v(x,z,y)+(-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid}v(y,z,x)-(-1)^{\\mid x\\mid \\mid z \\mid+\\mid y\\mid \\mid z \\mid}v(z,x,y)\n\\end{array}$$\nSince $v(x,y,z)=0$ for any homogeneous vectors $x,y,z$, we deduce $v_1=v_2=v_3=0$, that is, $\\p$ is a Poisson superalgebra.\n\n\\medskip\n\\noindent{\\bf Examples.} Any $2$-dimensional superalgebra $\\p=V_0 \\oplus V_1$ with a homogeneous basis $\\{e_0,e_1\\}$ is defined by\n$$\\left\\{\n\\begin{array}{l}\ne_0e_0=ae_0,\\\\\n e_0e_1=be_1, \\ e_1e_0=ce_1,\\\\\ne_1e_1=de_0.\n\\end{array}\n\\right.\n$$\nThis is a super Poisson multiplication if and only if we have\n$$\\left\\{\n\\begin{array}{l}\nd=0,\\\\\n3(a-b)b+ab-2bc+c^2=0,\\\\\n3(a-c)c+ab-2bc+c^2=0,\n\\end{array}\n\\right.\n$$\nor\n$$a=b=c=0.\n$$\nWe obtain the following $2$-dimensional Poisson superalgebras\n$$\\left\\{\n\\begin{array}{ll}\n\\mathcal{SP}_{2,1} & e_0e_0=ae_0\\\\\n\\mathcal{SP}_{2,2} & e_0e_0=ae_0, e_0e_1=e_1e_0=ae_1\\\\\n\\mathcal{SP}_{2,3} & e_0e_1=-e_1e_0=be_1\\\\\n\\mathcal{SP}_{2,4} & 
e_1e_1=de_0,\\\\\n\\end{array}\n\\right.\n$$\nthe products not written being equal to $0$.\nLet us note that these $2$-dimensional algebras correspond to the algebras $(\\mu_{16},\\beta_2=0),(\\mu_{16},\\beta_2=1),(\\mu_{9},\\alpha_2=0,\\beta_4=0),(\\mu_{5},\\alpha_2=0)$ in the classification \\cite{goze-remm-2algebras}.\n\\section{Properties of Poisson superalgebras}\n\\begin{definition}\nA nonassociative superalgebra is called super flexible if the multiplication $xy$ satisfies\n$$A(x,y,z) + (-1)^{(|x||z|+|x||y|+|y||z|)}A(z,y,x)=0$$\nfor any homogeneous elements $x,y,z$, where $A(x,y,z)=(xy)z-x(yz)$ is the associator of the multiplication.\n\\end{definition}\n\\begin{proposition}\nLet $\\p$ be a Poisson superalgebra. Then the nonassociative product defining the super Poisson structure is super flexible.\n\\end{proposition}\n\\noindent{\\it Proof.} In fact, let \n$$B(x,y,z)=3(A(x,y,z) + (-1)^{(|x||z|+|x||y|+|y||z|)}A(z,y,x)) .\n$$\nWe have\n$$\n\\begin{array}{rl}\n\\medskip\nB(x,y,z) =& -(-1)^{\\mid x \\mid \\mid y \\mid}(yx)z +(-1)^{\\mid y \\mid \\mid z \\mid} (xz)y\n + (-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid}(yz)x \n - (-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}(zx)y \\\\\n \\medskip\n \n &+ (-1)^{(|x||z|+|x||y|+|y||z|)}(-(-1)^{\\mid z \\mid \\mid y \\mid}(yz)x +(-1)^{\\mid y \\mid \\mid x \\mid} (zx)y \\\\\n \\medskip\n \n &+ (-1)^{\\mid z \\mid \\mid y \\mid+\\mid z \\mid \\mid x \\mid}(yx)z \n - (-1)^{\\mid z \\mid \\mid x \\mid+\\mid y \\mid \\mid x \\mid}(xz)y )\\\\\n \\medskip\n \n =& (-(-1)^{\\mid x \\mid \\mid y \\mid} +(-1)^{\\mid x \\mid \\mid y \\mid})(yx)z + ((-1)^{\\mid y \\mid \\mid z \\mid} -(-1)^{\\mid y \\mid \\mid z \\mid}) (xz)y \\\\\n \\medskip\n &+((-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid} - (-1)^{\\mid x \\mid \\mid y \\mid+\\mid x \\mid \\mid z \\mid})(yz)x \n +( - (-1)^{\\mid x \\mid \\mid z \\mid+\\mid y \\mid \\mid z \\mid}\\\\\n \\medskip\n & +(-1)^{\\mid x \\mid 
\\mid z \\mid+\\mid y \\mid \\mid z \\mid})(zx)y \\\\\n \\medskip\n =& 0.\n \\end{array}\n $$\n \n \\medskip\n \n \\noindent{\\bf Remark: on power associativity.} Recall that a nonassociative algebra is power associative if every element\ngenerates an associative subalgebra. Let $\\p$ be a Poisson superalgebra provided with its nonassociative product $xy$. If $V_0$ is its even homogeneous part, then the \nrestriction of the product $xy$ is a multiplication in \nthis homogeneous vector space satisfying Identity (\\ref{superPoisson}). Since all the vectors of $V_0$ are of degree $0$, Identity (\\ref{superPoisson}) is reduced to Identity\n (\\ref{associator}). We deduce that $V_0$ is a Poisson algebra and any vector $x$ in $V_0$ generates an associative subalgebra of $V_0$ and of $\\p$.\n \n Assume now that $y$ is an odd vector. We have\n $$y\\cdot y=\\displaystyle\\frac{1}{2}(yy+(-1)yy)=0,$$\n and\n $$\\{y,y\\}=\\displaystyle\\frac{1}{2}(yy-(-1)yy)=yy.$$\n If we write $y^2=yy$, then $$y^2=\\{y,y\\}.$$\n This implies\n $$yy^2=y\\{y,y\\}=y \\cdot \\{y,y\\}+\\{y,\\{y,y\\}\\}.$$\n But from the super Jacobi identity, $\\{y,\\{y,y\\}\\}=0.$ Thus we have\n $$yy^2= y \\cdot \\{y,y\\}=\\{y,y\\} \\cdot y=y^2y.$$\n We can write\n $$y^3=yy^2=y^2y.$$\n Now\n $$y^2y^2=\\{y,y\\}\\{y,y\\}=\\{y,y\\}\\cdot\\{y,y\\}+\\{\\{y,y\\},\\{y,y\\}\\}.$$\n We have also\n $$yy^3=y\\cdot y^3+\\{y,y^3\\}=y\\cdot y\\cdot\\{y,y\\}+\\{y,y\\cdot \\{y,y\\}\\}.$$\n But $y \\cdot y=0$. 
Thus, from the Leibniz rule,\n $$yy^3=\\{y,y\\cdot \\{y,y\\}\\}=-y\\cdot \\{y,\\{y,y\\}\\}+\\{y,y\\}\\cdot\\{y,y\\}=\\{y,y\\}\\cdot \\{y,y\\}.$$\n We deduce\n $$y^2y^2-yy^3= \\{\\{y,y\\},\\{y,y\\}\\}.$$\n Since $\\{y,y\\}$ is of degree $0$, we obtain\n$$y^2y^2-yy^3=0.$$\nWe can write\n$$y^4=y^2y^2=yy^3=y^3y,$$\nwhere the last equality results from $\\{y,y\\cdot \\{y,y\\}\\}=\\{y\\cdot \\{y,y\\},y\\}.$ Now, applying Identity (\\ref{superPoisson}) to the triple $(y^i, y^j,y^k)$ with $i+j+k=5$, we obtain a \nlinear system \non the vectors $y^iy^j$ with $i+j=5$, which admits as solutions\n$$yy^4=y^2y^3= y^3y^2=y^4y.$$\nThus $y^5$ is well determined. By induction, applying Identity (\\ref{superPoisson}) to the triple $(y^i, y^j,y^k)$ with $i+j+k=n$ and the induction hypothesis $y^py^{n-1-p}=y^{n-1}$, we obtain that\n$$y^n=y^py^{n-p}$$\nfor any $p=1,\\cdots,n-1$. Thus any homogeneous element of odd degree generates an associative subalgebra.\n \n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}