diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzipxi" "b/data_all_eng_slimpj/shuffled/split2/finalzzipxi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzipxi" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nBiologically motivated models serve twofold: to give insight on the\npossibilities of real processes and systems\n\\cite{membrane_sim,gillespie,argollo_pnas} or to inspire the\ndevelopment of new artificial devices, such as neural networks\n\\cite{pitts,hopfield_model} and DNA computers\n\\cite{adleman-dnacomputer,shapiro}, touching upon issues as the\ndefinition of life, computation and self-awareness\n\\cite{whatislife,emergent_comp,immune_model_perelson,bacteria_computer,turing_plant}.\nWe advance on the latter and seek inspiration in the problem of\ndiversity generation by the adaptive immune system of vertebrates,\nwhere protein receptors expressed by B cells (called antibodies)\n``recognize'' complementary antigenic patterns by means of very\nspecific molecular interactions that initiate an immune response\nwhenever a threshold affinity is reached \\cite{bcell_affinity,thecell,janeway_book}. \nEach B cell expresses its own unique receptor and a\nhuman can make about $10^{12}$ different receptors\n\\cite{jerne,thecell,janeway_book}, an astonishing\nnumber if compared to the number of genes in its whole genome \n(about $50,000$) making it impossible to have antibody genes encoded on\nDNA. Instead, a relatively small number of disjoint gene segments is\ninherited and the antibody region relevant for binding is assembled\nduring B cell development by rearrangement of some gene segments\n\\cite{ais-review, janeway_book, biomonitor}. After stimulation by antigen,\nB cells reproduce and introduce further mutations on the antibody\nbinding region greatly increasing diversity \\cite{thecell,\n antibody-evolution, antibody-antigen}. \n\nOne might ask whether these genomic modifications are completely\nrandom or if they are guided by some organizing principles\n\\cite{affinity_nonrandom,affinity_nonrandom1}. Inspired by this\nquestion, we study how one could improve the probability that an\nimmune system with its antibodies can recognize a random antigen. In\nparticular, we encode each molecular pattern relevant to binding by\n$L$ binary features on Hamming shape space \\cite{perelson-bitstring}\nand allow for a finite number of modifications of each string\ncharacterizing an antibody. We ask if cellular automata (CA) rules of\nWolfram type can outperform the rule of random search, when strings\nare randomly shuffled. We develop a genetic algorithm where agents\nfrom a population are faced with an antigen's string, perform the\nspecific computational task of string matching \\cite{melanie} and\nreproduce if overlap exceeds an arbitrary threshold $T$. 
Minimum\noverlap for reproduction can be achieved by evolving the agent's\nstring a finite number of times according to its specific grammar\nrules.\nAlgorithms inspired by the human immune system are \nabundant in the literature and,\neven if it is not possible to identify one archetypal model\n\\cite{greensmith10}, they are usually named\nArtificial Immune Systems \\cite{forrest93}.\nIn this context, our approach \nis built on the fundamental difference\nof evolving not simple bit-strings but rules, in the form of \nCellular Automata \\cite{melanie}.\n\nThis study has been clearly motivated by\nthe determinism versus randomness debate\nwith respect to diversity generation in the immune system.\nHowever, our goal is not the definition of a toy\nmodel for simulating the biological immune system.\nIn contrast, we focus on answering the general and abstract \nquestion of whether it is possible to obtain a better efficiency in the pattern \nrecognition task using a random or a deterministic computation.\nThis question is strongly connected with the exploration of how collective \ncomputation can emerge through an evolutionary stochastic process\n\\cite{emergent_comp, melanie, hordijk}.\nA better understanding of this approach could \ngenerate methods for information processing \nand for the engineering of new forms of computing systems.\\\\\n\nThe paper is organized as follows. \nIn section \\ref{sec:random} we study, both analytically and\nnumerically, the case where the only rule is random shuffling. As in\nPerelson and Oster \\cite{perelson-bitstring}, we find a steplike\nbehavior for the probability that an antibody will bind a\nrandom antigen as a function of the minimum overlap for reproduction. \nIn section \\ref{sec:automata}, we analyse the case where agent\nrules are those defining elementary cellular automata and find that\nefficiency in the pattern \nrecognition task is enhanced. Moreover, maximal\nefficiency is achieved when agents with different automata rules\ncoexist, showing that in this system unsupervised collective\ncomputation emerges from evolution.\n\n \\section{The random model}\n \\label{sec:random} \n\nWe develop a genetic algorithm where agents coexist in a population\nthat, at each time step $t$, faces a recognition challenge originating\nfrom a randomly chosen bit string $Y$ of size $L$ (agents' strings are\nof the same size) that persists for $P$ time steps before being replaced\nby another random string. One time step is accounted for when all\nagents have undergone the following selection rules:\n\n$(1)$ Death with population-dependent rate $\\frac{N(t)}{K}$ where $K$ is\nthe carrying capacity of the medium and $N(t)$ is the number of agents at time $t$. \nThis process is responsible for\nlimiting the size of the total population.\n\n$(2)$ Overlap-dependent replication: After assembling a random string\n$X$, the agent determines its affinity with $Y$ as the Hamming\ndistance $H(X,Y)$ from antigen $Y$. Replication occurs if $H(X,Y)\\le T$\nwhere $L$ is the size of the string. Step 2 is repeated $S$ times by\neach agent and reproduction adds a new agent to the population.\\\\\n\nThe last step of rule $(2)$ mimics the mechanisms of diversity\ngeneration. This model is implemented on a computer, in which an\ninitial population of $P_0=10^3$ agents with strings of size $L=32$\nevolves in a medium with carrying capacity $K=10^5$, eventually\nreaching a steady state. 
In this state an average population is\nestimated over $10^5$ time steps \n(simulation parameters are summarized in Table~\\ref{table:params}).\n\nWe repeat this procedure for\ndifferent values of $T$ and $S$ and investigate the effects of binding\nspecificity and sequence recombination on the average repertoire size,\n$N_{S,T}$.\nThis quantity can be obtained in the mean-field level from the solution of\n\\begin{equation}\nN(t+1)-N(t)=G_S [N(t)-\\frac{N(t)^2}{K}]-\\frac{N(t)^2}{K}\n\\end{equation}\n\nwhen $N(t+1)=N(t)=N_{S,T}$. Here, $G_{S,T}=\\sum^{S}_{j=0}\nF_T(1-F_T)^j$ is the probability that matching with antigen has\noccurred in at most $S$ attempts and $F_T=2^{-32}\\sum_{i=0}^{T}\n {32 \\choose i}$\nis the probability of occurrence of two random strings\nwith Hamming distance less or equal to $T$.\n\n\\begin{table}\n\\caption{Parameters of the model.}\n\\begin{center}\n\\begin{tabular}{l|r}\nParameter & Meaning\\\\\n\\hline\n\nK & carrying capacity (controls population size)\\\\\nT & threshold for Hamming distance\\\\\nP & time steps each stimulus remains in the system\\\\\nS & number of tests\\\\\nM & mutation rate of CA\n\\end{tabular}\n\\end{center}\n\\label{table:params}\n\\end{table}\n\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=270,width=0.8\\textwidth]{spop.ps}\n\\end{center}\n\\caption{The random model and its analytical description match\n perfectly and exhibit a sharp transition in the size of the mean\n population as a particular value of the dissimilarity threshold $T$\n is reached. Results for different values of $S$ are presented where\n circles depict simulations and solid lines represent the mean field\n description. Mean populations were averaged over the last $100000$\n time steps after a stationary-like state was reached in the\n simulations (other parameters: $K= 10^5, P=100$). }\n\\label{Fig_mpop}\n\\end{figure}\n\nFor $T\\le 10$, $G_0\\approx 1$ and so the population at equilibrium is\n equal to $K\/2$. For higher $T$ values the population decreases until\n it becomes extinct. Increasing the $S$ values leads to a higher\n reproduction rate (larger $G_S$ values) which maintains the\n equilibrium population nearer to the classical equilibrium solution\n of $K\/2$. In figure \\ref{Fig_mpop} we can appreciate how well the\n mean field description captures the results generated by the\n simulations.\nThe important global quantity which we need to quantify is \nthe success in the recognition task, obtained by evaluating \nthe efficiency of the system.\nOne simple measure of efficiency can be the ratio between the\ntotal number of successful recognitions ($H\\le T$) \nand the total number of performed tests.\n\nFigure \\ref{Fig_meff} depicts efficiency in the recognition task\nas a function of the threshold $T$ when a stationary regime is\nreached. For the random model, efficiency is equal to the \nprobability of two 32-bit strings to have a Hamming distance \nless or equal to $32-T$: $F_{32-T}$. \nThe efficiency of the model does not depend on the other parameters $S$ and \n$P$. \nAs expected, the requirement of larger overlaps between stimulus and agents leads to a sudden decrease of the efficiency.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[angle=270,width=0.8\\textwidth]{Seff.ps}\n\\includegraphics[angle=270,width=0.8\\textwidth]{CApopS.ps}\n\\end{center}\n\\caption{\\small The CA model outperforms the random model both in terms of population \nsize and efficiency. 
\nTop: Efficiency as a function of the threshold $T$.\nThe squares represent the data obtained by the model of random somatic mutations.\nThe solid line shows the perfect match with the function $F_{32-T}$.\nThe circles represent the data obtained by the CA model for structured \nsomatic mutations with $S=1, M=0.01$. \nAs can be appreciated, cellular automata can be more efficient\nin bringing antibodies closer to antigens ($K= 10^5, P=1000$).\nBottom: Mean population versus dissimilarity threshold $T$ for different $S$ ($P=100$). For comparison, \nthe solid line represents results for the random model when $S=1$.\nInset: mean population versus $T$ for different values of $P$ ($S=10$).\nAverages were taken over the last 100000 time steps after a stationary state \nwas reached and divided by the carrying capacity $K$ \n(other parameters: $M=0.01, K= 10^5$).}\n\\label{Fig_meff}\n\\end{figure}\n\nOur dynamics can be illustrated using an abstract description.\nGiven a metric space ${\\mathcal V}$, a stimulus is represented by a feature\nvector ${\\vec x}=(x_0,...,x_N)$.\nAn agent is represented by a vector ${\\vec\n y}=(y_0,...,y_N)$. The distance $|{\\vec x} - {\\vec y}|$ decides whether a test is successful and the agent reproduces.\nWorking with binary features, as in our case, the\ndistance between stimulus ${\\vec x}$ and detector ${\\vec y}$ can be\ngiven by the Hamming distance $H$ (the number of distinct binary\nfeatures). The agent carries out random jumps of ${\\vec y}$\nwhich might move it closer to ${\\vec x}$. \nFrom the analysis developed \nby Perelson and Oster \\cite{perelson-bitstring},\nin the continuum limit,\na step-like behavior is found for the probability \nof binding a random antigen (stimulus). \nThese results are analogous to the ones we have presented for our\nmodel and, effectively,\n our simulations, for $S=0$, correspond to a discrete version of the \n model studied in \\cite{perelson-bitstring}.\n\n\n\\section{The CA model}\n\\label{sec:automata}\n\nIn the following, we introduce the CA model, which is motivated by the analogy between antibody generation and grammatical structures.\nEach agent is now characterized by one rule that deterministically changes its bit-string.\nThis rule is taken from one of\nall possible $256$ elementary Wolfram cellular automata \\cite{wolfram}.\nThese automata are composed of a one-dimensional array of two-state \ncells and of rules operating on the nearest neighbors.\n\nNow, the model is based on the following steps:\n\n1) Each agent dies with a rate $\\frac{N(t)}{K}$\nwhere $K$ is a carrying capacity.\n\n2) Surviving agents get a random bit-string.\nThey reproduce if a positive presentation is reached within $S$ tests.\nAfter each unsuccessful test ($H>T$), the agent's CA \nrule is applied to the bit-string and \nthe mutated string is re-compared with the stimulus.\nSuccessful detection generates one new agent with \nthe same CA as the ancestor or, with probability $M$, a different random CA rule.\nThe stimulus bit-string is randomly generated every $P$ time steps.\\\\\n\nIn Figure \\ref{Fig_meff}, we show the mean population as a function of the \nthreshold $T$ for different values of $S$. For all $S$ values, we notice that\nthe mean population is larger than that of the random model for $T$ values where strong selective processes are forcing adaptation of the\nCA system. \nIn the inset, we present the mean population as a function of the threshold $T$ for different values of $P$. 
\nIt is possible to see how for higher $P$ values the population grows, indicating that \nit reaches an adapted phase with\na structure in the CA rule distribution, which allows more efficient recognition of stimuli.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[angle=270,width=0.8\\textwidth]{Evoeff.ps}\n\\end{center}\n\\caption{\\small Fast adaptation of the system\nin the CA model leads to better performance. \nData show the temporal behavior of the efficiency. For large enough $P$, the CA model\noutperforms the random model.\nThe green curve is obtained for $P=1000$, the red for $P=1$,\nand the black one is the result generated by the random model.\nParameters: $T=20, M=0.01, K=10^5, S=10$. }\n\\label{Fig_dyn}\n\\end{figure}\n\nThe performance improvement of the CA model can be quantified by looking at the efficiency measure. \nAs shown in Figure \\ref{Fig_meff}, CA rules perform better than random changes. \nThese results can be clarified by looking at the time evolution of the efficiency\nfor a given simulation (Figure \\ref{Fig_dyn}). \nThe efficiency of the CA model is higher than that of the random model if the same stimulus is presented for a sufficiently long time.\nIn fact, if $P=1$, the CA model exhibits the same mean efficiency but higher fluctuations than the random model.\nThe system is not capable of adapting to the new stimulus within one iteration.\nIn contrast, a selective dynamics operates for $P=1000$.\nAfter a change of the stimulus, the efficiency drops, \nfollowed by a rapid transient where the efficiency grows towards a new plateau higher than the\ncorresponding value for the random model.\nThis is because the most efficient CA rules become selected \nand thus initially random agents\ncan be successfully mutated closer to the stimulus. \nSpecific CA rules map specific sub-spaces which contain strings close to the \nstimulus. These rules are able to take different random strings and to\nbring them closer to the antigen following deterministic paths. \n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[angle=270,width=0.8\\textwidth]{CAeffS.ps}\n\\end{center}\n\\caption{\\small\nThe CA model outperforms the random model for small $S$-values. \nThe figure shows the difference between the efficiency of the CA model and that of the random model \n as a function of the threshold $T$ for different values of $S$. Each point is the time average over \nthe last 500000 steps (Parameters: $K= 10^5, M=0.01, P=100$).\n}\n\\label{fig_caeff}\n\\end{figure}\n\nIn Figure \\ref{fig_caeff}, we analyze the efficiency in the recognition task for different values of $S$.\nWe can see an improvement up to more than 40\\% ($S=1, T=18$). \nFor high $S$ values, this advantage is reduced, and for $S>20$ the random model begins to outperform the CA model.\nIn general, efficiency increases for higher $P$ values, when selection can effectively operate, defining the ensemble of the best CA rules.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[angle=270,width=0.8\\textwidth]{zipf.ps}\n\\includegraphics[angle=270,width=0.8\\textwidth]{zipf2.ps}\n\\end{center}\n\\caption{Top: Zipf plot for the rules present in the steady state for different $P$ values ($T=20$). \nBottom: Zipf plot for the rules present in the steady state for different $T$ values ($P=100$). \nAll data are averaged over the last 500000 time steps ($M=0.01, K=10^5, S=10$).\n}\n\\label{fig_dist}\n\\end{figure}\n\nFurthermore, we studied the distribution of the population\nin terms of the CA rules. 
Figure \\ref{fig_dist} depicts the Zipf plot of the rules.\nIf reproduction success is not affected by the \nrecognition operation ($T\\le10$), all the possible CA rules\nare maintained in the population, generating a flat distribution with equal \nprobability for each rule.\nIn contrast, for higher selection pressures, some rules are selected over the\nensemble of all the CA rules and a structured distribution appears. \n The rules coexist in the population and they correspond to the ones which \n allow better performance in the recognition task.\nFor very high $T$ values, only a minimal fraction of rules survive. \nThis happens in correspondence with a very small population, where random drift effects become dominant.\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[angle=0,width=0.7\\textwidth]{LastGrammarM0.01b1N100000.ps}\n\\end{center}\n\\caption{Suppression of a subgroup of CA rules for intermediate $T$ values. The frequency of the CA rules ($\\log_{10}$-values) is\nshown \nas a function of the threshold $T$ (on the ordinate) for different values of $P$ and $S$ (other parameters: $M=0.01$ and $K=10^5$). \n}\n\\label{fig_heat1}\n\\end{figure}\n\n\nFigure \\ref{fig_heat1} represents the frequency of the CA rules for different values of $P$ and $T$.\nFrom these figures it is clear that, when selection is operating effectively (high $P$ and small $S$), a subset of the rules is suppressed and a structured population, with a larger number of surviving and coexisting CA rules, sets in. \nWe tried to determine whether this subpopulation of CA rules can be \nrelated to some particular class following Wolfram's heuristic classification scheme, \nbut unfortunately we were not able to distinguish any specific class of CA rules among \nthe ones that perform better in our simulations. \nIn contrast, an assortment of CA rules from different classes persists in the population.\nIn Table \\ref{tab_1} we present some of the most successful rules in a specific simulation where $T=19$ and $P=1000$.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{|c|c|}\n\\hline\nCellular Automaton & Frequency \\\\ \n\\hline\n1 & 0.1935\\\\\n256 & 0.1256\\\\\n248 & 0.0842\\\\\n128 & 0.0839\\\\\n192 & 0.0535\\\\\t\n252 & 0.043\\\\\n58 & 0.0272\\\\\t\n51 & 0.0228\\\\\n52 & 0.0181\\\\\n56 & 0.0169\\\\\n20 & 0.0166\\\\\n19 & 0.0165\\\\\n64 & 0.016\\\\\n168 & 0.0156\\\\\n49 & 0.0153\\\\\n232 & 0.0146\\\\\n244 & 0.0118\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\label{tab_1}\n\\caption{ \\small Best performing CA, identified by their number, and the corresponding frequency in the population.\nData are averaged over the last 500000 time steps of one simulation \n($M=0.01$, $K=10^5$, $S=1$, $T=19$, $P=1000$). \n}\n\\label{tabula}\n\\end{table}\n\n\\begin{figure}[htb]\n\\begin{center}\n\\includegraphics[angle=0,width=0.7\\textwidth]{LastGrammarTempS10P1000M0.01b1N100000.ps}\n\\end{center}\n\\caption{\\small Fast switching on but slow switching off of new rules when selective pressure is high.\nThe temporal behavior of the $\\log_{10}$-frequencies of the CA rules\nis shown for different $T$ values.\nFor higher $T$, the system relies on a small number of highly abundant CA rules (parameters: $S=10$, $P=1000$ and $K=10^5$).}\n\\label{fig_heat2}\n\\end{figure}\n\nThe frequencies of the CA rules exhibit a peculiar temporal behavior in simulations where the selection pressure is high (e.g. 
$T=27$).\nFig.~\\ref{fig_heat2} shows that the introduction of new stimuli leads to a sudden on-switching of single rules that begin to dominate the\npopulation. They however remain in the system over several cycles of new stimuli, leading to long-term correlations between\ndifferent stimuli due to memory effects.\nThis also explains why prevailing CA become selected without following the general trends observed in Fig.~\\ref{fig_heat1} for intermediate $T$\nvalues. Rules are selected depending on the context of their co-efficiency with the other rules already in the system. \n\nAs a consequence, the CA model relies on the possibility of having at its disposal a large number of different rules that might be switched on as soon \nas their specific properties are required. A reduction to only the best performing rules therefore results in a decrease of its performance in\nrecognizing random patterns for high selection pressures (large $T$), or, in other words, demanding recognition tasks.\n \n \\section{Conclusions}\n\nWe presented the study of a collective model for pattern recognition inspired by the basic biomolecular mechanisms that enable an immune system to detect new antigens.\nWe explore \nhow different mechanisms of antibody diversity generation\ncan improve the performance in antigen recognition.\nAs usual, we represent antigens and antibodies by using bit strings \nand we test two possible strategies for generating antibody diversity: \nto randomly shuffle or to apply deterministic rules to the strings which \nrepresent them.\nIn the latter case we have been influenced by Jerne's analogy \nbetween some properties of the immune system and the concept of\ngenerative grammars \\cite{jerne}. \nWe have implemented these ideas by introducing a genetic algorithm which evolves \nan ensemble of Wolfram's cellular automata\nthat perform the computational task of string identification. \nThanks to the employment of evolutionary simulations \nbased on a genetic algorithm, we find that not one, but a group of rules, \nperforms the recognition task better than dull random shuffling.\\\\\n\nOur study outlines interesting results which can be useful \nfor general information processing.\nBecause of the biomolecular nature of the biological \nproblem which we have theoretically explored, \nwe speculate that our abstract result could be transposed \ninto practical applications for designing \ncomputational devices for pattern recognition implemented \nby means of a biomolecular computer.\n\n\\ack\nMAM acknowledges partial financial support\nfrom the Brazilian agency Conselho Nacional de Desenvolvimento\nCient\\'{\\i}fico e Tecnol\\'ogico (CNPq). VS was supported by the Danish\nCouncil for Independent Research, Natural Sciences (FNU).\\\\\n\n\n\\bibliographystyle{unsrt}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe Hohenberg-Kohn (HK) theorems \\cite{1} constitute a fundamental\nadvance in quantum mechanics. As a consequence they have furthered\nour understanding of the electronic structure of matter: atoms,\nmolecules, solids, clusters, surfaces, lower dimensional electronic\nsystems such as heterostructures, quantum dots, graphene, etc.\nMatter, according to HK, is described as a system of $N$ electrons\nin an external electrostatic field ${\\boldsymbol{\\cal{E}}}\n({\\bf{r}}) = - {\\boldsymbol{\\nabla}} v ({\\bf{r}})$. The first HK\ntheorem defines the concept of a \\emph{basic variable} of quantum\nmechanics. 
Knowledge of this gauge invariant property -- the\nnondegenerate ground state density $\\rho ({\\bf{r}})$ -- is of\ntwo-fold significance: \\emph{(a)} It determines the Schr\\\"{o}dinger\ntheory wave functions $\\Psi$ of the system, both ground and excited\nstate; \\emph{(b)} As the wave function $\\Psi$ is now proved to be a\nfunctional of the basic variable, it constitutes together with the\nsecond HK theorem -- the energy variational principle for arbitrary\nvariations of the density -- the basis of theories of electronic\nstructure such as of Hohenberg-Kohn \\cite{1}, Kohn-Sham \\cite{2},\nand quantal density functional theory \\cite{3,4}. The theorems are\nvalid for arbitrary confining potential $v ({\\bf{r}})$ and electron\nnumber $N$, but are derived \\cite{5} for the constraint of\n\\emph{fixed} $N$. In this paper we generalize the HK theorems for\nspinless electrons to the added presence of an external\n\\emph{uniform} magnetostatic field. As the presence of the magnetic\nfield constitutes a new degree of freedom, we introduce the further\nnatural constraint of \\emph{fixed} canonical orbital angular\nmomentum. Thereby we prove that the basic variables in quantum\nmechanics in a uniform magnetic field are the gauge invariant\nnondegenerate ground state density $\\rho ({\\bf{r}})$ and physical\ncurrent density ${\\bf{j}} ({\\bf{r}})$. These theorems are then\nfurther generalized to electrons with spin by imposing the\nconstraints of \\emph{fixed} canonical orbital and spin angular\nmomentum.\n\n\nThe generalization is motivated by the considerable recent interest\nin yrast states which are states of lowest energy for fixed angular\nmomentum. These states have been studied experimentally and\ntheoretically for both bosons and fermions, e.g. rotating trapped\nBose-Einstein condensates \\cite{6}, and harmonically trapped\nelectrons in the presence of a uniform perpendicular magnetic field\n\\cite{7}. The theorems derived are applicable to all\nexperimentation with a uniform magnetic field such as the\nmagneto-caloric effect \\cite{8}, the Zeeman effect, cyclotron\nresonance, magnetoresistance, the de-Haas-van Alphen effect, the\nHall effect, the quantum Hall effect, the Meissner effect, nuclear\nmagnetic resonance, etc.\n\n\nThe manner by which a basic variable is so defined is via the proof\nof the first HK theorem for $v$-representable densities. To explain\nthis, and to contrast the present proofs with the HK proof, we first\nbriefly describe the HK arguments. The HK theorems are proved for a\nnondegenerate ground state. Particularizing to electrons without any\nloss of generality, the Hamiltonian $\\hat{H}$ in atomic units\n(charge of electron $-e; |e| = \\hbar = m = 1)$ is $\\hat{H} =\n\\frac{1}{2} \\sum_{k} p^{2}_{k} + \\frac{1}{2} \\sum'_{k, \\ell}\n1\/|{\\bf{r}}_{k} - {\\bf{r}}_{\\ell}| + \\sum_{k} v ({\\bf{r}}_{k})$,\nwhere the terms correspond to the kinetic $\\hat{T}$ (with momentum\n$\\hat{\\bf{p}}_{k} = - i {\\boldsymbol{\\nabla}}_{{\\bf{r}}_{k}}$), the\nelectron-interaction $\\hat{W}$, and external potential $\\hat{V}$\noperators, respectively. The Schr\\\"{o}dinger equation is $\\hat{H}\n({\\bf{R}}) \\Psi ({\\bf{X}}) = E \\Psi ({\\bf{X}})$, where $\\Psi\n({\\bf{X}}), E$ are the eigenfunctions and eigenenergies, with\n${\\bf{R}} = {\\bf{r}}_{1}, \\ldots, {\\bf{r}}_{N}$; ${\\bf{X}} =\n{\\bf{x}}_{1}, \\ldots, {\\bf{x}}_{N}$; ${\\bf{x}} = {\\bf{r}} \\sigma$\nbeing the spatial and spin coordinates of the electron. 
The energy\n$E$ is the expectation $E = <\\Psi ({\\bf{X}}) | \\hat{H} ({\\bf{R}}) |\n\\Psi ({\\bf{X}}) >$. In the first HK theorem it is initially proved\n(Map C) that there is a one-to-one relationship between the external\npotential $v ({\\bf{r}})$ and the nondegenerate ground-state wave\nfunction $\\Psi ({\\bf{X}})$. \\emph{Employing this relationship}, it\nis then proved (Map D) that there is a one-to-one relationship\nbetween the wave function $\\Psi ({\\bf{X}})$ and the corresponding\nnondegenerate ground state density $\\rho ({\\bf{r}})$. Thus,\nknowledge of $\\rho ({\\bf{r}})$ determines $v ({\\bf{r}})$ to within a\nconstant. Since for a fixed electron number $N$, the kinetic\n$\\hat{T}$ and electron-interaction potential $\\hat{W}$ energy\noperators are known, so is the system Hamiltonian. Solution of the\ncorresponding Schr\\\"{o}dinger equation then leads to the wave\nfunctions $\\Psi$ of the system. \\emph{It is the one-to-one\nrelationship between the external potential and the gauge invariant\ndensity that defines the latter as a basic variable}. As the wave\nfunction $\\Psi$, and hence energy $E_{v} [ \\rho]$ are functionals of\nthe density $\\rho ({\\bf{r}})$, the variational Euler equation for\nthe density with \\emph{fixed} $v ({\\bf{r}})$ follows subject to the\nconstraint of \\emph{known} electron number $N$ (see Table 1). (The\nlowest nondegenerate \\cite{9,10} excited state density $\\rho^{e}\n({\\bf{r}})$ of a given symmetry different from that of the ground\nstate is also a basic variable.)\n\nIn the added presence of an external magnetostatic field ${\\bf{B}}\n({\\bf{r}}) = \\nabla \\times {\\bf{A}} ({\\bf{r}})$, where ${\\bf{A}}\n({\\bf{r}})$ is the vector potential, the Hamiltonian when the\ninteraction of the field is only with the orbital angular momentum\nis\n\\begin{equation}\n\\hat{H} = \\frac{1}{2} \\sum_{k} \\bigg[ \\hat{\\bf{p}}_{k} + \\frac{1}{c}\n{\\bf{A}} ({\\bf{r}}_{k}) \\bigg]^{2} + \\hat{W} + \\hat{V}.\n\\end{equation}\nWhen the interaction of the magnetic field is with both the orbital\nand spin angular momentum, the Hamiltonian is\n\\begin{equation}\n\\hat{H} = \\frac{1}{2} \\sum_{k} \\bigg[ \\hat{\\bf{p}}_{k} + \\frac{1}{c}\n{\\bf{A}} ({\\bf{r}}_{k}) \\bigg]^{2} + \\hat{W} + \\hat{V} + \\frac{1}{c}\n\\sum_{k} {\\bf{B}} ({\\bf{r}}_{k}) \\cdot {\\bf{s}}_{k},\n\\end{equation}\nwhere ${\\bf{s}}$ is the electron spin angular momentum vector. In\nderiving the Hamiltonians of Eqs. (1) and (2), we have hewed to the\nphilosophy \\cite{11} that the only `fundamental' interactions are\nthose that can be generated by the substitution $\\hat{\\bf{p}}\n\\rightarrow \\hat{\\bf{p}} + \\frac{1}{c} {\\bf{A}}$. (This then\ndefines the physical momentum operator in the presence of a magnetic\nfield, and thereby the physical current density ${\\bf{j}}\n({\\bf{r}})$.) In non-relativistic quantum mechanics, the\nHamiltonian of Eq. (2) is derived \\cite{11} by Schr\\\"{o}dinger-Pauli\ntheory for spin $\\frac{1}{2}$ particles via the kinetic energy\noperator $\\frac{1}{2} \\boldsymbol{\\sigma} \\cdot ({\\bf{p}} +\n{\\bf{A}}) \\boldsymbol{\\sigma} \\cdot ({\\bf{p}} + {\\bf{A}})$, where\n$\\boldsymbol{\\sigma}$ is the Pauli matrix, and ${\\bf{s}} =\n\\frac{1}{2} \\boldsymbol{\\sigma}$. 
The spin magnetic moment generated\nin this way has the correct gyromagnetic ratio $g = 2$.\n\nIt would appear that one could prove a one-to-one relationship\nbetween the gauge invariant properties $\\{ \\rho ({\\bf{r}}), {\\bf{j}}\n({\\bf{r}}) \\}$ and the external potentials $\\{ v ({\\bf{r}}),\n{\\bf{A}} ({\\bf{r}}) \\}$ along the lines of the HK proof. However, no\nsuch proof is possible as the relationship between the external\npotentials $\\{ v ({\\bf{r}}), {\\bf{A}} ({\\bf{r}}) \\}$ and the\nnon-degenerate ground state wave function $\\Psi ({\\bf{X}})$ can be\n\\emph{many-to-one} \\cite{12} and even \\emph{infinite-to-one}\n\\cite{13}. Hence, in these cases, there is no equivalent of Map C,\nand therefore the original HK path is not possible. The proof that\n$\\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$ are the basic variables\nmust then differ from the original HK proof. Furthermore, the proof\nmust account for the many-to-one relationship between $\\{ v\n({\\bf{r}}), {\\bf{A}} ({\\bf{r}}) \\}$ and $\\Psi ({\\bf{X}})$.\n\nIn the literature \\cite{2,12,14}, the proofs of what properties\nconstitute the basic variables are not rigorous in the HK sense of\nthe one-to-one relationship between the basic variables and the\nexternal potentials $\\{ v, {\\bf{A}} \\}$. Further, they do not\naccount for the many-to-one relationship between $\\{ v, {\\bf{A}} \\}$\nand $\\Psi$. Additionally, the system angular momentum is not\nconsidered. The choice of the basic variables is arrived at solely\non the basis of a Map D-type proof between these assumed properties\nand the nondegenerate ground state $\\Psi$, thereby the claim that\n$\\Psi$ is a functional of these properties. In these proofs, the\nexistence of a bijective Map C is implicitly assumed, \\cite{15,16}\n(see also last reference of 12). For example, in spin-DFT\n\\cite{2,12,14} for which the Hamiltonian is that of Eq. (2) with the\nfield component of the momentum absent, the basic variables are\nassumed to be $\\{ \\rho ({\\bf{r}}), {\\bf{m}} ({\\bf{r}}) \\}$, where\n${\\bf{m}} ({\\bf{r}})$ is the magnetization density. In current-DFT\n\\cite{14}, corresponding to the Hamiltonian of Eq. (1), the basic\nvariables are assumed to be $\\rho ({\\bf{r}})$ and the gauge variant\nparamagnetic current density ${\\bf{j}}_{p} ({\\bf{r}})$. For the\nHamiltonian of Eq. (2), the basic variables are assumed to be $\\{\n\\rho ({\\bf{r}}), {\\bf{m}} ({\\bf{r}}), {\\bf{j}}_{p} ({\\bf{r}}) \\}$ or\n$\\{ \\rho ({\\bf{r}}), {\\bf{m}} ({\\bf{r}}), {\\bf{j}}_{p} ({\\bf{r}}),\n{\\bf{j}}_{p, {\\bf{m}}} ({\\bf{r}}) \\}$ where ${\\bf{j}}_{p, {\\bf{m}}}\n({\\bf{r}})$ are the gauge variant paramagnetic currents of each\ncomponent of the magnetization density. Subsequently, a Map D proof\nis provided. Additionally, with the basic variables now assumed\nknown, a Percus-Levy-Lieb (PLL)-type proof \\cite{17,18} can then be\nformulated \\cite{19}. More recently, we gave a derivation\n\\cite{15,20} which purported to prove that $\\{ \\rho ({\\bf{r}}),\n{\\bf{j}} ({\\bf{r}}) \\}$ were the basic variables but the proof was\nin error \\cite{21}. Subsequently, we proved \\cite{22} for the\nHamiltonian of Eq. (1) that for the significant subset of systems\n\\cite{13,23} for which the ground state wave function $\\Psi$ is\nreal, the basic variables are $\\{ \\rho ({\\bf{r}}), {\\bf{j}}\n({\\bf{r}}) \\}$. 
Our proof of bijectivity between $\\{ \\rho\n({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$ and $\\{ v ({\\bf{r}}), {\\bf{A}}\n({\\bf{r}}) \\}$ explicitly accounts for the many-to-one $\\{ v\n({\\bf{r}}), {\\bf{A}} ({\\bf{r}}) \\}$ to $\\Psi$ relationship. This\nproof then constitutes a special case of the more general proof for\n$\\Psi$ complex presented in this work.\n\nHere we extend the HK theorems to systems of electrons in external\nelectrostatic ${\\boldsymbol{\\cal{E}}} ({\\bf{r}}) = -\n{\\boldsymbol{\\nabla}} v ({\\bf{r}})$ and magnetostatic ${\\bf{B}}\n({\\bf{r}}) = \\nabla \\times {\\bf{A}} ({\\bf{r}})$ fields with known\nelectron number $N$ and angular momentum ${\\bf{J}}$. The proofs are\nfor a uniform magnetostatic field, and for Hamiltonians in which the\ninteraction of the magnetic field is \\emph{(i)} solely with the\norbital angular momentum $({\\bf{J}} = {\\bf{L}})$, and \\emph{(ii)}\nwith both the orbital and spin angular momentum $({\\bf{J}} =\n[{\\bf{L}}$ and ${\\bf{S}}])$. We prove, in the \\emph{rigorous} HK\nsense, that for \\emph{fixed} $N$ and ${\\bf{J}}$ the basic variables\nare the gauge invariant nondegenerate ground state density $\\rho\n({\\bf{r}})$ and \\emph{physical} current density ${\\bf{j}}\n({\\bf{r}})$. In other words, knowledge of $\\{ \\rho ({\\bf{r}}),\n{\\bf{j}} ({\\bf{r}}) \\}$ determines the potentials $\\{v ({\\bf{r}}),\n{\\bf{A}} ({\\bf{r}}) \\}$ to within a constant and the gradient of a\nscalar function, respectively. Hence, with the Hamiltonians known,\nsolution of the respective Schr\\\"{o}dinger and Schr\\\"{o}dinger-Pauli\nequations lead to the wave functions of each system. The proof is\nfor $(v, {\\bf{A}})$-representable $\\{ \\rho ({\\bf{r}}), {\\bf{j}}\n({\\bf{r}}) \\}$. The extension to the Percus-Levy-Lieb (PLL)\n\\cite{17,18} constrained-search path for $N$-representable and\ndegenerate states readily follows. As the wave function $\\Psi$ is a\nfunctional of $\\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$, theories\nof electronic structure based on $\\{ \\rho ({\\bf{r}}), {\\bf{j}}\n({\\bf{r}}) \\}$ as the basic variables can then be formulated.\n\n\n\\section{Proof of Generalized Hohenberg-Kohn Theorems}\n\nTo accentuate the role of the density $\\rho ({\\bf{r}})$ and physical\ncurrent density ${\\bf{j}} ({\\bf{r}})$, we rewrite the Hamiltonians\nof Eqs. (1) and (2) in terms of operators representative of these\ngauge invariant properties. 
The Hamiltonians can then be written,\nrespectively, as\n\\begin{equation}\n\\hat{H} = \\hat{T} + \\hat{W} + \\hat{V}_{A},\n\\end{equation}\nand\n\\begin{equation}\n\\hat{H} = \\hat{T} + \\hat{W} + \\hat{V}_{A} - \\int \\hat{\\bf{m}}\n({\\bf{r}}) \\cdot {\\bf{B}} ({\\bf{r}}) d {\\bf{r}},\n\\end{equation}\nwhere the total external potential operator $\\hat{V}_{A}$ is\n\\begin{equation}\n\\hat{V}_{A} = \\hat{V} + \\frac{1}{c} \\int \\hat{\\bf{j}} ({\\bf{r}})\n\\cdot {\\bf{A}} ({\\bf{r}}) d {\\bf{r}} - \\frac{1}{2c^{2}} \\int\n\\hat{\\rho} ({\\bf{r}}) A^{2} ({\\bf{r}}) d {\\bf{r}},\n\\end{equation}\nand the corresponding energy expectations $E = <\\Psi ({\\bf{X}}) |\n\\hat{H} | \\Psi ({\\bf{X}}) >$ as\n\\begin{equation}\nE = T + E_{ee} + V_{A},\n\\end{equation}\nand\n\\begin{equation}\nE = T + E_{ee} + V_{A} - \\int {\\bf{m}} ({\\bf{r}}) \\cdot {\\bf{B}}\n({\\bf{r}}) d {\\bf{r}},\n\\end{equation}\nwhere the total external potential energy $V_{A}$ is\n\\begin{equation}\nV_{A} = ~ < \\Psi ({\\bf{X}}) | \\hat{V}_{A} | \\Psi ({\\bf{X}}) = \\int\n\\rho ({\\bf{r}}) v ({\\bf{r}}) d {\\bf{r}} + \\frac{1}{c} \\int {\\bf{j}}\n({\\bf{r}}) \\cdot {\\bf{A}} ({\\bf{r}}) d {\\bf{r}} - \\frac{1}{2c^{2}}\n\\int \\rho ({\\bf{r}}) A^{2} ({\\bf{r}}) d {\\bf{r}},\n\\end{equation}\nand where $T$ and $E_{ee}$ are the kinetic and electron-interaction\nenergy expectations. In the above equations, the physical current\ndensity ${\\bf{j}} ({\\bf{r}})$ is defined in terms of the physical\nmomentum operator $(\\hat{\\bf{p}} + \\frac{1}{c} {\\bf{A}})$ as\n\\begin{equation}\n{\\bf{j}} ({\\bf{r}}) = N \\Re \\sum_{\\sigma} \\int \\Psi^{\\star}\n({\\bf{r}} \\sigma, {\\bf{X}}^{N-1}) \\bigg(\\hat{\\bf{p}} + \\frac{1}{c}\n{\\bf{A}} ({\\bf{r}})\\bigg) \\Psi ({\\bf{r}} \\sigma, {\\bf{X}}^{N-1}) d\n{\\bf{X}}^{N-1},\n\\end{equation}\nor equivalently as the expectation of the current density operator\n$\\hat{\\bf{j}} ({\\bf{r}})$:\n\\begin{equation}\n{\\bf{j}} ({\\bf{r}}) = < \\Psi ({\\bf{X}}) | \\hat{\\bf{j}} ({\\bf{r}}) |\n\\Psi ({\\bf{X}}) >\n\\end{equation}\nwhere\n\\begin{equation}\n\\hat{\\bf{j}} ({\\bf{r}}) = \\hat{\\bf{j}}_{p} ({\\bf{r}}) +\n\\hat{\\bf{j}}_{d} ({\\bf{r}}),\n\\end{equation}\nwith the paramagnetic $\\hat{\\bf{j}}_{p} ({\\bf{r}})$ and diamagnetic\n$\\hat{\\bf{j}}_{d} ({\\bf{r}})$ operator components defined,\nrespectively, as\n\\begin{equation}\n\\hat{\\bf{j}}_{p} ({\\bf{r}}) = \\frac{1}{2} \\sum_{k} \\big[\n\\hat{\\bf{p}}_{k} \\delta ({\\bf{r}}_{k} - {\\bf{r}}) + \\delta\n({\\bf{r}}_{k} - {\\bf{r}}) \\hat{\\bf{p}}_{k} \\big],\n\\end{equation}\nand\n\\begin{equation}\n\\hat{\\bf{j}}_{d} ({\\bf{r}}) = \\hat{\\rho} ({\\bf{r}}) {\\bf{A}}\n({\\bf{r}})\/c,\n\\end{equation}\nwith the density operator $\\hat{\\rho} ({\\bf{r}})$ being\n\\begin{equation}\n\\hat{\\rho} ({\\bf{r}}) = \\sum_{k} \\delta ({\\bf{r}}_{k} - {\\bf{r}}).\n\\end{equation}\nThe magnetization density ${\\bf{m}} ({\\bf{r}})$ is the expectation\n\\begin{equation}\n{\\bf{m}} ({\\bf{r}}) = <\\Psi ({\\bf{X}}) | \\hat{\\bf{m}} ({\\bf{r}}) |\n\\Psi ({\\bf{X}}) >,\n\\end{equation}\nwith the local magnetization density operator $\\hat{\\bf{m}}\n({\\bf{r}})$ defined as\n\\begin{equation}\n\\hat{\\bf{m}} ({\\bf{r}}) = - \\frac{1}{c} \\sum_{k} {\\bf{s}}_{k} \\delta\n({\\bf{r}}_{k} - {\\bf{r}}).\n\\end{equation}\n(The current density operator $\\hat{\\bf{j}} ({\\bf{r}})$ can also be\ndefined in terms of the Hamiltonian $\\hat{H}$ as $\\hat{\\bf{j}}\n({\\bf{r}}) = c \\partial \\hat{H}\/\\partial {\\bf{A}}$. This confirms\nthat for both the Hamiltonians of Eqs. 
(3) and (4), the physical\ncurrent density is the orbital current density.)\n\n\nWe first present the proof of bijectivity between $\\{ \\rho\n({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$ and $\\{ v ({\\bf{r}}), {\\bf{A}}\n({\\bf{r}}) \\}$ for spinless electrons corresponding to the\nHamiltonian of Eq. (1) or (3) for fixed electron number $N$ and\nangular momentum ${\\bf{L}}$. The proof is by \\emph{reductio ad\nabsurdum}. Let us consider two different physical systems $\\{ v,\n{\\bf{A}} \\}$ and $\\{ v', {\\bf{A}}' \\}$ that generate different\nnondegenerate ground state wave functions $\\Psi$ and $\\Psi'$. We\nassume the gauges of the unprimed and primed systems to be the same.\nLet us further assume that these systems lead to the \\emph{same}\nnondegenerate ground state $\\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}})\n\\}$. We prove this cannot be the case. From the variational\nprinciple for the energy for a nondegenerate ground state, one\nobtains the inequality\n\\begin{equation}\nE = ~ <\\Psi | \\hat{H} | \\Psi > ~ < ~ <\\Psi' | \\hat{H} | \\Psi' >.\n\\end{equation}\nNow\n\\begin{eqnarray}\n<\\Psi' | \\hat{H} | \\Psi'> = < \\Psi' | \\hat{T} &+& \\hat{W} + \\hat{V}'\n+ \\frac{1}{c} \\int \\hat{\\bf{j}}' ({\\bf{r}}) \\cdot {\\bf{A}}'\n({\\bf{r}}) d\n{\\bf{r}} \\nonumber \\\\\n&-& \\frac{1}{2c^{2}} \\int \\hat{\\rho} ({\\bf{r}}) A'^{2} ({\\bf{r}}) d\n{\\bf{r}} | \\Psi' > + < \\Psi' | \\hat{V} - \\hat{V}' | \\Psi'> \\nonumber\n\\\\ &+& \\frac{1}{c} < \\Psi' | \\int [ \\hat{\\bf{j}} ({\\bf{r}}) \\cdot {\\bf{A}}\n({\\bf{r}}) - \\hat{\\bf{j}}' ({\\bf{r}}) \\cdot {\\bf{A}}' ({\\bf{r}})] d\n{\\bf{r}}| \\Psi' > \\nonumber \\\\\n&-& \\frac{1}{2c^{2}} < \\Psi' | \\int \\hat{\\rho} ({\\bf{r}}) [A^{2}\n({\\bf{r}}) - A'^{2} ({\\bf{r}})] d {\\bf{r}} | \\Psi' >.\n\\end{eqnarray}\nEmploying the above assumptions, and following the same steps as in\n\\cite{22}, one obtains the inequality\n\\begin{equation}\nE + E' < E + E' + \\int \\big[ {\\bf{j}}'_{p} ({\\bf{r}}) - {\\bf{j}}_{p}\n({\\bf{r}}) \\big] \\cdot \\big[ {\\bf{A}} ({\\bf{r}}) - {\\bf{A}}'\n({\\bf{r}}) \\big] d {\\bf{r}},\n\\end{equation}\nwhere $E' = <\\Psi' | \\hat{H}' | \\Psi' >$.\n\nAs the majority of the experimental and consequent theoretical work\nis performed for uniform magnetic fields, our proof too is for such\nfields.\n\nConsider next the third term on the right hand side of Eq. (19).\nWith ${\\bf{B}} ({\\bf{r}}) = B \\hat{\\bf{i}}_{z}$, ${\\bf{B}}'\n({\\bf{r}}) = B' \\hat{\\bf{i}}_{z}$, and the symmetric gauge ${\\bf{A}}\n({\\bf{r}}) = \\frac{1}{2} {\\bf{B}} \\times {\\bf{r}}$, ${\\bf{A}}'\n({\\bf{r}}) = \\frac{1}{2} {\\bf{B}}' \\times {\\bf{r}}$, this term may\nbe written as\n\\begin{equation}\nI = \\frac{1}{2} \\Delta {\\bf{B}} \\cdot \\int {\\bf{r}} \\times \\bigg[\n{\\bf{j}}'_{p} - {\\bf{j}}_{p} ({\\bf{r}}) \\bigg] d {\\bf{r}},\n\\end{equation}\nwhere $\\Delta {\\bf{B}} = (B - B') \\hat{\\bf{i}}_{z}$. First consider\nthe integral\n\\begin{eqnarray}\nI_{1} &=& \\int {\\bf{r}} \\times {\\bf{j}}_{p} ({\\bf{r}}) d {\\bf{r}} \\\\\n&=& - \\frac{i} {2} \\sum_{k} \\int d {\\bf{X}} \\int d {\\bf{r}}\n\\Psi^{\\star} ({\\bf{X}}) \\big[ {\\bf{r}} \\times\n{\\boldsymbol{\\nabla}}_{{\\bf{r}}_{k}} \\delta ({\\bf{r - r}}_{k}) +\n\\delta ({\\bf{r - r}}_{k}) {\\bf{r}} \\times\n{\\boldsymbol{\\nabla}}_{{\\bf{r}}_{k}} \\big] \\Psi ({\\bf{X}}).\n\\end{eqnarray}\nNext consider the second integral of $I_{1}$ of Eq. 
(22):\n\\begin{eqnarray}\nI_{12} &=& \\frac{1} {2} \\int d {\\bf{X}} \\Psi^{\\star} ({\\bf{X}})\n\\big( \\sum_{k} {\\bf{r}}_{k}\n\\times \\hat{\\bf{p}}_{k} \\big) \\Psi ({\\bf{X}}) \\\\\n&=& \\frac{1} {2} \\int d {\\bf{X}} \\Psi^{\\star} ({\\bf{X}}) \\sum_{k}\n\\hat{\\bf{L}}_{k} \\Psi ({\\bf{X}}) = \\frac{1}{2} {\\bf{L}},\n\\end{eqnarray}\nwhere $\\hat{\\bf{L}}_{k} = {\\bf{r}}_{k} \\times \\hat{\\bf{p}}_{k}$ is\nthe canonical orbital angular momentum operator, with $\\hat{\\bf{p}}$\nthe canonical momentum operator $(\\hat{\\bf{p}} =\n\\hat{\\bf{p}}_{kinetic} + \\hat{\\bf{p}}_{field} = m {\\bf{v}} +\n\\frac{q}{c} {\\bf{A}})$, and ${\\bf{L}}$ the total canonical orbital\nangular momentum defined by Eq. (24). Note that the canonical\nangular momentum is gauge variant.\n\nThe first integral of $I_{1}$ of Eq. (22) is\n\\begin{equation}\nI_{11} = - \\frac{i}{2} \\sum_{k} \\int d {\\bf{X}} \\int d {\\bf{r}}\n\\Psi^{\\star} ({\\bf{X}}) \\epsilon_{\\alpha \\beta \\gamma}\n\\frac{\\partial} {\\partial r_{k \\gamma}} \\big(r_{\\beta} \\delta\n({\\bf{r - r}}_{k}) \\Psi ({\\bf{X}}) \\big).\n\\end{equation}\nOn integrating the inner integral by parts and dropping the surface\nterm, one obtains\n\\begin{eqnarray}\nI_{11} &=& - \\frac{i}{2} \\sum_{k} \\int d {\\bf{X}} \\big [ -\n\\epsilon_{\\alpha \\beta \\gamma} \\int d {\\bf{r}} \\frac{\\partial\n\\Psi^{\\star} ({\\bf{X}})} {\\partial r_{k \\gamma}} r_{\\beta} \\delta\n({\\bf{r -\nr}}_{k}) \\Psi ({\\bf{X}}) \\big] \\\\\n&=& - \\frac{i}{2} \\sum_{k} \\int d {\\bf{X}} \\big [ - \\epsilon_{\\alpha\n\\beta \\gamma} \\frac{\\partial \\Psi^{\\star} ({\\bf{X}})} {\\partial r_{k\n\\gamma}} r_{k \\beta} \\Psi ({\\bf{X}}) \\big].\n\\end{eqnarray}\nOn integrating by parts again, one obtains\n\\begin{eqnarray}\nI_{11} &=& - \\frac{i}{2} \\sum_{k} \\epsilon_{\\alpha \\beta \\gamma}\n\\int d {\\bf{X}} \\Psi^{\\star} ({\\bf{X}}) \\frac{\\partial} {\\partial\nr_{k \\gamma}}\n\\big(r_{k \\beta} \\Psi ({\\bf{X}}) \\big) \\\\\n&=& - \\frac{i}{2} \\sum_{k} \\int d {\\bf{X}} \\Psi^{\\star} ({\\bf{X}}) ~\n({\\bf{r}}_{k} \\times {\\boldsymbol{\\nabla}}_{{\\bf{r}}_{k}}) \\Psi\n({\\bf{X}}) = \\frac{1}{2} {\\bf{L}}\n\\end{eqnarray}\nHence, the integral $I$ of Eq. (20) is\n\\begin{equation}\nI = \\frac{1}{2} \\Delta {\\bf{B}} \\cdot ({\\bf{L}}' - {\\bf{L}}).\n\\end{equation}\nIf one imposes the condition that the total canonical orbital\nangular momentum is \\emph{fixed} so that ${\\bf{L}} = {\\bf{L}}'$,\nthen the integral $I$ vanishes so that the third term on the right\nhand side of Eq. (19) vanishes.\n\n\nFor states with fixed orbital angular momentum ${\\bf{L}}$, Eq. (19)\nthen reduces to the contradiction\n\\begin{equation}\nE + E' < E + E'.\n\\end{equation}\nWhat this means is that the original assumption that $\\Psi$ and\n$\\Psi'$ differ is erroneous, and that there can exist a $\\{ v,\n{\\bf{A}} \\}$ and a $\\{ v', {\\bf{A}}' \\}$ with the same nondegenerate\nground state wave function. The fact that $\\Psi=\\Psi'$ means that\n$\\rho ({\\bf{r}}) |_{\\Psi} = \\rho' ({\\bf{r}}) |_{\\Psi'}$. 
However,\nthe corresponding physical current densities are not the same:\n${\\bf{j}} ({\\bf{r}}) |_{\\Psi} \\neq {\\bf{j}}' ({\\bf{r}}) |_{\\Psi'}$,\nbecause ${\\bf{j}}_{d} ({\\bf{r}})|_{\\Psi} \\neq\n{\\bf{j}}'_{d}|_{\\Psi'}$ if one hews with the original assumption\nthat ${\\bf{A}} ({\\bf{r}})$ is different from ${\\bf{A}}' ({\\bf{r}})$.\nThis proves that the assumption that there exists a different $\\{\nv', {\\bf{A}}' \\}$ (with the same $N$ and ${\\bf{L}}$) that leads to\nthe same $\\{ \\rho, {\\bf{j}} \\}$ as that due to $\\{ v, {\\bf{A}} \\}$\nis incorrect. This step takes into account the fact that there could\nexist many $\\{ v, {\\bf{A}} \\}$ that lead to the same nondegenerate\nground state $\\Psi$. Hence, there exists only one $\\{ v, {\\bf{A}}\n\\}$ for fixed $N$ and ${\\bf{L}}$ that leads to a nondegenerate\nground state $\\{ \\rho, {\\bf{j}} \\}$. The one-to-one relationship\nbetween $\\{ \\rho, {\\bf{j}} \\}$ and $\\{ v, {\\bf{A}} \\}$ is therefore\nproved for the case when the interaction of the magnetic field is\nsolely with the orbital angular momentum.\n\nWith $ \\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$ as the basic\nvariables, the wave function $\\Psi$ is a functional of these\nproperties. By a density and physical current density\n\\emph{preserving} unitary transformation \\cite{4,15,24} it can be\nshown that the wave function must also be a functional of a gauge\nfunction $\\alpha ({\\bf{R}})$. This ensures that the wave function\nwhen written as a functional: $\\Psi = \\Psi [\\rho, {\\bf{j}}, \\alpha]$\nis gauge variant. However, as the physical system remains unchanged\nfor different gauge functions, the choice of vanishing gauge\nfunction is valid.\n\nAs the ground state energy is a functional of the basic variables:\n$E = E_{v, {\\bf{A}}} [\\rho, {\\bf{j}}]$, a variational principle for\n$E_{v, {\\bf{A}}} [\\rho, {\\bf{j}}]$ exists for arbitrary variations\nof $(v, {\\bf{A}})$-representable densities $ \\{ \\rho ({\\bf{r}}),\n{\\bf{j}} ({\\bf{r}}) \\}$. The corresponding Euler equations for $\\rho\n({\\bf{r}})$ and ${\\bf{j}} ({\\bf{r}})$ follow, and these must be\nsolved self-consistently with the constraints $\\int \\rho ({\\bf{r}})\nd {\\bf{r}} = N$, $\\int {\\bf{r}} \\times ({\\bf{j}} ({\\bf{r}}) -\n\\frac{1}{c} \\rho ({\\bf{r}}) {\\bf{A}} ({\\bf{r}}) d {\\bf{r}} =\n{\\bf{L}}$ and ${\\boldsymbol{\\nabla}} \\cdot {\\bf{j}} ({\\bf{r}}) = 0$.\nImplicit in this variational principle, as in all such energy\nvariational principles, \\emph{is that the external potentials remain\nfixed throughout the variation}. (See Table I.)\n\nWe next consider electrons with spin corresponding to the\nHamiltonian of Eq. (2) or (4). In this case, with the same\nassumptions made regarding the two different physical systems $\\{ v,\n{\\bf{A}}; \\psi \\}$ and $\\{ v', {\\bf{A}}'; \\psi' \\}$ leading to the\nsame $\\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}})$ as before, the\ninequality of Eq. (19) is replaced by\n\\begin{eqnarray}\nE + E' < E + E' &+& \\int \\big[ {\\bf{j}}'_{p} ({\\bf{r}}) -\n{\\bf{j}}_{p} ({\\bf{r}}) \\big] \\cdot \\big[ {\\bf{A}} ({\\bf{r}}) -\n{\\bf{A}}' ({\\bf{r}}) \\big] d {\\bf{r}} \\nonumber \\\\\n&-& \\int \\big[ {\\bf{m}}' ({\\bf{r}}) - {\\bf{m}} ({\\bf{r}}) \\big]\n\\cdot \\big[ {\\bf{B}} ({\\bf{r}}) - {\\bf{B}}' ({\\bf{r}}) \\big] d\n{\\bf{r}}.\n\\end{eqnarray}\nThe third term on the right hand side vanishes if one imposes the\nconstraint that the orbital angular momentum ${\\bf{L}}$ of the\nunprimed and primed systems are the same. 
Hence, next consider the\nlast term of Eq. (32). For a uniform magnetic field with ${\\bf{B}}\n({\\bf{r}}) = B \\hat{\\bf{i}}_{z}$ and ${\\bf{B}}' ({\\bf{r}}) = B\n\\hat{\\bf{i}}_{z}$, we have\n\\begin{equation}\n\\int {\\bf{m}} ({\\bf{r}}) \\cdot {\\bf{B}} ({\\bf{r}}) d {\\bf{r}} = B\n\\int m_{z} ({\\bf{r}}) d {\\bf{r}},\n\\end{equation}\nwhere \\cite{19}\n\\begin{equation}\nm_{z} ({\\bf{r}}) = - \\frac{1}{2c} \\big[ \\rho_{\\alpha} ({\\bf{r}}) -\n\\rho_{\\beta} ({\\bf{r}}) \\big],\n\\end{equation}\nwith $\\rho_{\\alpha} ({\\bf{r}}), \\rho_{\\beta} ({\\bf{r}})$ being the\nspin-up and spin-down spin densities. The last term of the\ninequality is then\n\\begin{equation}\n\\int \\big[ {\\bf{m}}' ({\\bf{r}}) - {\\bf{m}} ({\\bf{r}}) \\big] \\cdot\n\\Delta {\\bf{B}} ({\\bf{r}}) d {\\bf{r}} = - \\frac{1} {2c} \\Delta B\n\\int \\big[ \\{ \\rho'_{\\alpha} ({\\bf{r}}) - \\rho'_{\\beta} ({\\bf{r}})\n\\} - \\{ \\rho_{\\alpha} ({\\bf{r}}) - \\rho_{\\beta} ({\\bf{r}}) \\} \\big]\nd {\\bf{r}},\n\\end{equation}\nwith $\\Delta B = B - B'$. If the $z$-component of the total spin\nangular momentum $S_{z}$ for the unprimed and primed systems are the\nsame, the corresponding spin densities are the same. The last term\nof Eq. (35) thus vanishes leading once again to the contradiction $E\n+ E' < E + E'$. More generally, the magnetization densities\n${\\bf{m}} ({\\bf{r}})$ and ${\\bf{m}}' ({\\bf{r}})$ are the same if the\ntotal spin angular momentum ${\\bf{S}}$ are the same. Hence, once\nagain, the bijective relationship between the nondegenerate ground\nstate densities $\\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$ and the\npotentials $\\{ v ({\\bf{r}}), {\\bf{A}} ({\\bf{r}}) \\}$ is proved\nprovided one imposes the constraint that the total orbital\n${\\bf{L}}$ and spin ${\\bf{S}}$ angular momentum are fixed.\n\nThis may be seen in a different manner by accentuating the role of\nthe spin angular momentum. With the $z$-component of the total spin\n${\\bf{S}}$ being $S_{z} = \\sum_{k} s_{z,k}$, the density $m_{z}\n({\\bf{r}})$ may be written as\n\\begin{equation}\nm_{z} ({\\bf{r}}) = - \\frac{1}{cN} \\sum_{\\sigma} S_{z} \\gamma\n({\\bf{r}} \\sigma, {\\bf{r}} \\sigma),\n\\end{equation}\nwith $\\gamma ({\\bf{x x}}') = N \\int \\Psi^{\\star} ({\\bf{r}} \\sigma,\n{\\bf{X}}^{N-1}) \\Psi ({\\bf{r}}' \\sigma', {\\bf{X}}^{N-1}) d\n{\\bf{X}}^{N-1}$, the density matrix. Since in the primed system,\nthe spin vectors are different, i.e. some ${\\bf{s}}'_{k}$, we have\n\\begin{eqnarray}\n\\int \\big[ {\\bf{m}}' ({\\bf{r}}) - {\\bf{m}} ({\\bf{r}}) \\big] \\cdot\n\\Delta {\\bf{B}} ({\\bf{r}}) d {\\bf{r}} &=& \\Delta B \\int \\big[ m'_{z}\n({\\bf{r}}) - m_{z} ({\\bf{r}}) \\big] d {\\bf{r}} \\\\\n&=& \\frac{\\Delta B} {c N} \\sum_{\\sigma} \\int \\big[ S'_{z} \\gamma'\n({\\bf{r}} \\sigma, {\\bf{r}} \\sigma) - S_{z} \\gamma ({\\bf{r}} \\sigma,\n{\\bf{r}} \\sigma) \\big] d {\\bf{r}}.\n\\end{eqnarray}\nEmploying the original assumption that the diagonal matrix elements\n$\\gamma ({\\bf{r}} \\sigma, {\\bf{r}} \\sigma)$ of the density matrix\n$\\gamma ({\\bf{x x}}')$ are the same for the unprimed and primed\nsystems we have the right hand side of Eq. (38) to be\n\\begin{equation}\n\\frac{\\Delta B} {c N} \\sum_{\\sigma} \\int \\big[ S'_{z} - S_{z} \\big]\n\\gamma ({\\bf{r}} \\sigma, {\\bf{r}} \\sigma) = 0\n\\end{equation}\nprovided $S'_{z} = S_{z}$.\n\n\nIn the above proofs for the Hamiltonians of Eqs (3) and (4), the\ndefinition of the current density ${\\bf{j}} ({\\bf{r}})$ employed is\nthat of Eq. (10). 
However, for finite systems, the Hamiltonian of\nEq. (4) can also be written as \\cite{25}\n\\begin{eqnarray}\n\\hat{H} = \\hat{T} + \\hat{W} + \\hat{V} + \\frac{1}{c} \\int\n\\hat{\\bf{j}}_{p} ({\\bf{r}}) \\cdot {\\bf{A}} ({\\bf{r}}) d {\\bf{r}} &+&\n\\frac{1} {2c^{2}} \\int \\hat{\\rho} ({\\bf{r}}) A^{2} ({\\bf{r}}) d\n{\\bf{r}} \\nonumber \\\\\n&+& \\frac{1}{c} \\int \\hat{\\bf{j}}_{m} ({\\bf{r}}) \\cdot {\\bf{A}}\n({\\bf{r}}) d {\\bf{r}},\n\\end{eqnarray}\nwhere the magnetization current density operator $\\hat{\\bf{j}}_{m}\n({\\bf{r}})$ is defined as\n\\begin{equation}\n\\hat{\\bf{j}}_{m} ({\\bf{r}}) = - c {\\boldsymbol{\\nabla}} \\times\n{\\bf{m}} ({\\bf{r}}).\n\\end{equation}\nHence the physical current density ${\\bf{j}} ({\\bf{r}})$ may also be\ndefined as \\cite{25}\n\\begin{equation}\n{\\bf{j}} ({\\bf{r}}) = c \\frac{\\partial \\hat{H}} {\\partial {\\bf{A}}\n({\\bf{r}})} = {\\bf{j}}_{p} ({\\bf{r}}) + {\\bf{j}}_{d} ({\\bf{r}}) +\n{\\bf{j}}_{m} ({\\bf{r}}),\n\\end{equation}\nthe sum of the paramagnetic, diamagnetic, and magnetization current\ndensities. Even for this definition of the physical current density\n${\\bf{j}} ({\\bf{r}})$, the proof of bijectivity between $\\{ \\rho,\n{\\bf{j}} \\}$ and $\\{ v, {\\bf{A}} \\}$ is valid provided the angular\nmomentum ${\\bf{L}}$ and ${\\bf{S}}$ are fixed. For spin-compensated\nsystems, the magnetization current density ${\\bf{j}}_{m} ({\\bf{r}})$\nvanishes.\n\n\\section{Concluding Remarks}\n\nIn conclusion, we have generalized the HK theorems to the added\npresence of a uniform magnetic field. We have considered the cases\nof the interaction of the magnetic field with the orbital angular\nmomentum as well as when the interaction is with both the orbital\nand spin angular momentum. In this work we have proved a one-to-one\nrelationship between the external potentials $\\{ v ({\\bf{r}}),\n{\\bf{A}} ({\\bf{r}}) \\}$ and the nondegenerate ground state densities\n$\\{ \\rho ({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$. The proof differs\nfrom that of the original HK theorem, and explicitly accounts for\nthe many-to-one relationship between the potentials $\\{ v\n({\\bf{r}}), {\\bf{A}} ({\\bf{r}}) \\}$ and the nondegenerate ground\nstate wave function $\\Psi$. To account for the presence of the\nmagnetic field, which constitutes an added degree of freedom, one\nmust then impose a further constraint beyond that of fixed electron\nnumber $N$ as in the original HK theorems. For the Hamiltonian\ncorresponding to spinless electrons, the added constraint is that of\nfixed canonical orbital angular momentum ${\\bf{L}}$. For that\ncorresponding to electrons with spin, the constraints imposed are\nthose of fixed canonical orbital ${\\bf{L}}$ and spin ${\\bf{S}}$\nangular momentum. (The gauge employed for the canonical angular\nmomentum ${\\bf{L}}$ can be chosen to be the same as that employed\nfor the Hamiltonian.) It is the further constraint on the angular\nmomentum that makes a rigorous HK-type proof of bijectivity between\nthe gauge invariant basic variables and the external scalar and\nvector potentials possible. 
Additionally, the HK-type proofs are\npossible because the Hamiltonians considered are rigorously derived\nfrom the tenets of nonrelativistic quantum mechanics.\n\nWith the knowledge that the basic variables are $\\{ \\rho ({\\bf{r}}),\n{\\bf{j}} ({\\bf{r}}) \\}$, a variational principle for the energy\nfunctional $E_{v, {\\bf{A}}} [\\rho, {\\bf{j}}]$ for arbitrary\nvariations of $(v, {\\bf{A}})$-representable densities $\\{ \\rho\n({\\bf{r}}), {\\bf{j}} ({\\bf{r}}) \\}$ is then developed for each\nHamiltonian considered. The constraints on the corresponding Euler\nequations are those of fixed electron number and angular momentum,\nand the satisfaction of the equation of continuity.\n\nAgain, knowing what the basic variables are, it is possible to map\nthe interacting system defined by the Hamiltonians of Eqs. (1) and\n(2) to one of noninteracting fermions with the same $\\rho\n({\\bf{r}}), {\\bf{j}} ({\\bf{r}})$, and ${\\bf{J}}$. Such a mapping\nhas been derived within QDFT \\cite{26}. The theory has been applied\nto map an interacting system \\cite{13} of two electrons in a\nmagnetic field and a harmonic trap $v ({\\bf{r}}) = \\frac{1}{2}\n\\omega_{0} r^{2}$ for which the ground state wave function is $\\Psi\n({\\bf{r}}_{1}, {\\bf{r}}_{2}) = C (1 + r_{12}) e^{- \\frac{1}{2}\n(r^{2}_{1} r^{2}_{2})}$, where $r_{12} = | {\\bf{r}}_{1} -\n{\\bf{r}}_{2}|$ and $C^{2} = 1\/\\pi^{2} (3 + \\sqrt{2 \\pi})$, to one of\nnoninteracting fermions with the same $ \\{ \\rho ({\\bf{r}}), {\\bf{j}}\n({\\bf{r}}) \\}$. This example corresponds to the special case of\nzero angular momentum. However, the QDFT mapping for finite angular\nmomentum is straight forward. For other recent work see\n\\cite{27,28}. The conclusions in the latter are based on the\nassumption of existence of a HK theorem but one without the\nrequirement of the constraint on the angular momentum.\n\n\n\n\nX.-P. was supported by the National Natural Sciences Foundation of\nChina, Grant No. 11275100, and the K.C. Wong Magna Foundation of\nNingbo University. The work of V.S. was supported in part by the\nResearch Foundation of the City University of New York.\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec_introduction}\n\nIn quantum field theory, particle creation effects exist in many different settings including the Schwinger effect \\cite{Sauter1931742,Schwinger1951} in quantum electrodynamics, the Hawking effect \\cite{BlackHexplosons} in black holes, and particle creation effects in cosmology \\cite{PhysRev.183.1057} such as the Gibbons-Hawking effect~\\cite{ PhysRevD.15.2738}. In order to study these phenomena, it is common to consider the dynamics of canonically quantized matter fields coupled to time-dependent external electromagnetic and\/or gravitational fields. In such analyses, one usually neglects backreaction, thus assuming a mean-field approximation for the test matter fields, which propagate on a classical background. We will do so in this work. In particular, we will focus our study on the time dependence of the number of created particles $N(t)$ generated in this kind of process.\n\nParticle creation is rooted in the breaking of symmetries caused by external agents. Free fields in flat spacetime possess Poincar\u00e9 symmetry. When imposing the invariance of the canonical quantum theory under this group of symmetry, there is only one possible basis of modes in which solutions to the equations of motion can be expanded: plane waves. 
This basis determines unique sets of annihilation and creation operators, which in turn define the Fock vacuum of the quantum theory: the so-called Minkowski vacuum. However, when, for instance, a time-dependent external field is coupled to the matter fields, the classical Hamiltonian is no longer invariant under time translations and there is freedom in the choice of the annihilation and creation operators (and thus of the vacuum) of the corresponding Fock quantization, which is therefore not unique. Depending on the choice of vacuum at each time, its evolution might produce particle-antiparticle pairs, determining the time dependence of $N(t)$. Owing to these ambiguities, the physical interpretation of $N(t)$ and of other physical observables such as the energy density is the subject of ongoing discussion~\\cite{superadiabatic1,superadiabatic2,IldertonZerothOrder,Yamada_2021}.\n\nDifferent choices of vacuum can be found in the literature. In general, the selection depends on the particular system under study and on the properties that we want to impose on its quantum theory. One of the most common options is to choose adiabatic vacua, introduced by Parker in \\cite{PhysRev.183.1057} and later formalized by L\u00fcders and Roberts in \\cite{LudersRoberts}. They are extensively used not only in cosmology but also in the context of the Schwinger effect~\\cite{Kluger_1998,MottolaDeSitter99}. These adiabatic modes are understood to be the most natural and simplest extensions of Minkowski plane waves when the external agent varies slowly in time. However, many other options motivated by different criteria are also legitimate \\emph{a priori}. Some examples considered so far are: modes diagonalizing the Hamiltonian in cosmology \\cite{fahn2019dynamical,HamiltonianDiagonalizationGuillermo,Cortez:2020rla}, modes minimizing oscillations in the primordial power spectrum in cosmology \\cite{PrimordialPowerJavi,nonoscillations}, adiabatic modes minimizing oscillations in the number of particles created by the Schwinger effect and in cosmological settings \\cite{superadiabatic1,superadiabatic2,Yamada_2021}, etc. In this work we are interested in generalizing the standard expressions found in the literature for the evolution of the particle number $N(t)$ by allowing arbitrary selections of modes for the quantization. Bogoliubov transformations within the canonical quantization approach will allow us to address this question.\n\nIn the study of classical non-equilibrium physical systems, kinetic theory has been a very successful tool \\cite{liboff}. In particular, when describing a system composed of identical particles, the starting point of this theory is the Liouville equation for the joint probability distribution of the entire system. If we assume that the particles are weakly correlated, we can deduce a closed equation of motion for the probability distribution of each individual particle: the so-called classical Vlasov equation. This equation neglects collisions between particles; these can be incorporated through a more general but more involved description, the Boltzmann kinetic equation. A generalization of the classical Vlasov equation to quantum field theory must account for particle creation. This is done in the context of the quantum kinetic approach. The widely accepted proposal, based on incorporating a particle creation term, is the so-called quantum Vlasov equation (QVE), which is an integro-differential equation for $N(t)$. 
In the context of the Schwinger effect, this equation was first presented in \\cite{Kluger_1998} for charged scalar fields under a spatially homogeneous, time-dependent external electric field. Its extension to fermionic quantum fields was later proposed in \\cite{1998}. This equation and its formalism have been used in a wide range of frameworks, including continuum strong QCD \\cite{RobertsSchmidt}, electron-positron pair creation in QED (from nuclear phenomena to black hole physics)~\\cite{Ruffini}, laser technology~\\cite{Dunne,DumluDunne,HebenstreitShortLaserPulses}, and cosmology in a de Sitter spacetime \\cite{MottolaDeSitter99,MottolaDeSitter}. Most of the literature uses this QVE for the particular choice of vacuum defined by zeroth-order adiabatic modes. One of the main aims of this work is to generalize this equation to arbitrary vacua, obtaining in this way a ``generalized QVE''. We will later restrict this generalized QVE to higher-order adiabatic vacua \\cite{MottolaDeSitter99}. \n\nIn order to reduce the ambiguities in the quantization of classical theories with no time-translational invariance, a criterion has been proposed in various settings, from homogeneous cosmologies \\cite{CORTEZ201536,Cortez:2019orm,Cortez:2020rla,universe7080299} to the Schwinger effect \\cite{Garay2020,AlvarezFermions}: the unitary implementation of the matter field dynamics in the quantum theory. Weaker conditions are also found in the literature, requiring only that the in and out states (at asymptotic past and future times) be related by a unitary $S$-matrix \\cite{Gavrilov:1996pz,WALD1979490}. The motivation for imposing the former, stronger requirement at all intermediate times is two-fold. First, it ensures that the quantum theories at all times are physically equivalent, in the sense that they provide the same probability amplitudes. Moreover, in those references it was proved that in a wide range of settings this requirement reduces the ambiguities in the quantization to a unique family of unitarily equivalent quantizations. Second, it ensures that the total number of created particles is well defined (i.e., finite) at every finite time. \n\nThe unique family of vacua associated with the quantizations that unitarily implement the dynamics is precisely the family to which we will restrict the generalized QVE derived in section \\ref{sec_GQVE}. We will see that there is an interesting connection between the usual QVE and its generalization to modes unitarily implementing the dynamics: under certain conditions, the former is precisely the leading order of the latter in the ultraviolet regime. This will allow us to propose a stricter criterion for reducing the ambiguity in the quantization, based on the ultraviolet behavior of the generalized QVE. \n\nFor definiteness, we will consider a charged scalar field in the presence of a spatially homogeneous but time-dependent electric field, a system studied in \\cite{Garay2020}, although extensions to other homogeneous systems follow straightforwardly.\n\nThe structure of the paper is as follows.\nIn section \\ref{sec_schwingercurved} we specify the pair creation effects to which the generalizations of the expressions found in the literature for $N(t)$ can be extended. In section \\ref{sec_canonical} we present the key ideas of the canonical quantization approach, parametrizing the ambiguities and deducing a general expression for $N(t)$. 
These results will be used in section \\ref{sec_GQVE}, where we will obtain the generalization to arbitrary quantizations of the quantum Vlasov equation. In section \\ref{sec_unitary}, we will analyze the unitary dynamics criterion in the scalar Schwinger effect, particularizing our generalized QVE to modes satisfying this requirement. We will also propose an additional criterion for reducing the quantization ambiguities. Finally, section \\ref{sec_conclusions} is devoted to summarizing and discussing the essential results of this work.\n\n\\section{From the Schwinger effect to curved spacetimes} \n\\label{sec_schwingercurved}\n\nIn order to study the Schwinger effect, let us consider a scalar field $\\phi(t,\\textbf{x})$ of mass $m$ and charge $q$ propagating in Minkowski spacetime under an external time-dependent electric field with four-vector potential $A_{\\mu}$. It satisfies the Klein-Gordon equation of motion\n\\begin{equation} \\label{eq_KG}\n \\left[ (\\partial_{\\mu}+iqA_{\\mu})(\\partial^{\\mu}+iqA^{\\mu})+m^2 \\right]\\phi(t,\\textbf{x})=0.\n\\end{equation}\nIn this work we will also assume that the electric field is homogeneous, although not necessarily isotropic. We will use the temporal gauge, i.e., $A_{\\mu}(t)=(0,\\textbf{A}(t))$. Therefore, after Fourier transforming \\eqref{eq_KG} the time-dependent $\\textbf{k}$-modes\n\\begin{equation} \\label{eq_Fourier}\n \\phi_{\\textbf{k}}(t)=\\int \\frac{d^3\\textbf{x}}{(2\\pi)^{3\/2}} \\ e^{-i\\textbf{k}\\cdot\\textbf{x}}\\phi(t,\\textbf{x})\n\\end{equation}\nsatisfy decoupled harmonic oscillator equations\n\\begin{equation} \\label{eq_harmonic} \n \\ddot{\\phi}_{\\textbf{k}}(t)+\\omega_{\\textbf{k}}(t)^2\\phi_{\\textbf{k}}(t)=0,\n\\end{equation}\nwith time-dependent frequencies\n\\begin{equation} \\label{eq_omegaSchwinger}\n \\omega_{\\textbf{k}}(t)=\\sqrt{[\\textbf{k}+q\\textbf{A}(t)]^2+m^2}.\n\\end{equation}\nNote that complex scalar modes $\\phi_{\\textbf{k}}(t)$ can be split into their real and imaginary parts, both satisfying harmonic oscillator equations \\eqref{eq_harmonic}. Thus, from now on we will consider without loss of generality $\\phi_{\\textbf{k}}(t)$ as real variables.\n\nObserve that the dependence on the electric field only appears in $\\omega_{\\textbf{k}}(t)$. Due to the fact that we are treating it as an external agent, frequencies $\\omega_{\\textbf{k}}(t)$ are fixed and not affected by the dynamics of the modes $\\phi_{\\textbf{k}}(t)$. In other words, we neglect backreaction effects, only dealing with \\eqref{eq_harmonic} and forgetting about the equation of motion of the external field.\n\nAlthough our working example will be the scalar Schwinger effect, our approach can be easily extended to many other systems. Indeed, for most parts of this work we are not going to use the explicit expression of the Schwinger frequency \\eqref{eq_omegaSchwinger}, except for section~\\ref{sec_unitary}, where we will use a system-dependent reasoning. Thus, the key requirement that a theory has to verify so that our formalism is applicable is that it can be characterized by a collection of real degrees of freedom satisfying decoupled harmonic oscillator equations with time-dependent frequencies. 
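\n\nAs a purely illustrative aside, equations of the type \\eqref{eq_harmonic} with the Schwinger frequency \\eqref{eq_omegaSchwinger} are straightforward to integrate numerically. The following minimal Python sketch (ours, not part of the analysis of this work) evolves a single complex mode with positive-frequency initial data for an assumed Sauter-type pulse, $A_x(t)=-(E_0\/w)\\tanh(w t)$; the charge, mass, pulse parameters and wave vector are arbitrary illustrative choices. The conserved Wronskian-type combination printed at the end is a basic check of the integration:\n\\begin{verbatim}\n# Integrate one Fourier mode of (eq_harmonic) with the Schwinger\n# frequency (eq_omegaSchwinger); illustrative units and parameters.\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\nq, m = 1.0, 1.0\nE0, w = 0.5, 0.3\nkvec = np.array([0.4, 0.0, 0.2])\n\ndef A(t):       # temporal gauge, E(t) = E0 \/ cosh(w t)**2\n    return np.array([-E0\/w*np.tanh(w*t), 0.0, 0.0])\n\ndef omega(t):\n    return np.sqrt(np.sum((kvec + q*A(t))**2) + m**2)\n\ndef rhs(t, y):  # y = (Re z, Im z, Re zdot, Im zdot)\n    w2 = omega(t)**2\n    return [y[2], y[3], -w2*y[0], -w2*y[1]]\n\nt0, t1 = -60.0, 60.0\nw0 = omega(t0)\ny0 = [1\/np.sqrt(2*w0), 0.0, 0.0, -np.sqrt(w0\/2)]\nsol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12)\nz = sol.y[0] + 1j*sol.y[1]\nzdot = sol.y[2] + 1j*sol.y[3]\n# normalization -i (z zdot* - zdot z*) is conserved and equals 1\nprint(np.real(-1j*(z[-1]*np.conj(zdot[-1]) - zdot[-1]*np.conj(z[-1]))))\n\\end{verbatim}\nNumerical solutions of this kind can later be used as reference modes $z_{\\textbf{k}}(t)$.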
\n\nFor instance, we can consider systems that have degrees of freedom $\\psi_i(t)$ verifying the equation of motion of a damped oscillator\n\\begin{equation} \\label{eq_eom2}\n \\ddot{\\psi}_i(t)+2\\gamma_i(t)\\dot{\\psi}_i(t)+\\Omega_i(t)^2\\psi_i(t)=0,\n\\end{equation}\nsince there always exists a canonical transformation $\\psi_i(t)=e^{-\\int^t dt' \\gamma_i(t')}\\phi_i(t)$ which removes the first order term \\cite{doi:10.1063\/1.527707}, transforming the equation of motion \\eqref{eq_eom2} for $\\psi_i(t)$ into a harmonic oscillator equation for $\\phi_i(t)$ with time-dependent frequency\n\\begin{equation}\n \\omega_i(t)=\\sqrt{\\Omega_i(t)^2- \\dot{\\gamma}_i(t)-\\gamma_i(t)^2}.\n\\end{equation} \n\nA second example of a system characterized by equations of the type \\eqref{eq_harmonic} is a fermionic field coupled to a homogeneous time-dependent electric field. In this case, the Fourier transform of the Dirac equation yields fermionic modes formed by four real variables satisfying~\\eqref{eq_harmonic} with time-dependent frequencies (see e.g. \\cite{1998,AlvarezFermions})\n\\begin{equation}\n \\omega_{\\textbf{k}}^{(\\pm)}(t)=\\sqrt{[\\textbf{k}+q\\textbf{A}(t)]^2+m^2\\pm iq|\\dot{\\textbf{A}}(t)|}.\n\\end{equation}\nHowever, these variables are not completely decoupled and minor manipulations inspired in references \\cite{1998,AlvarezFermions} should be applied to the procedure followed here in order to extend these results to the fermionic case.\n\nOther significant examples are found in cosmological settings. Let $\\varphi$ be a real scalar field with mass $m$ in a Friedmann-Lema\u00eetre-Robertson-Walker (FLRW) spacetime, defined by the well-known metric \n\\begin{equation}\n ds^2=a(t)^2\\left(-dt^2+h_{ij}dx^{i}dx^{j}\\right).\n\\end{equation}\nHere $a$ is the scale factor, $t$ is the conformal time, and $h_{ij}$ is the time-independent three-dimensional metric on a spatial hypersurface~$\\Sigma$. It can be easily seen \\cite{Cortez:2019orm} that the redefined scalar field $\\phi=a\\varphi$ satisfies\n\\begin{equation} \\label{eq_cosmologyKG}\n \\ddot\\phi-\\Delta\\phi+m(t)^2\\phi=0, \\quad m(t)=\\sqrt{m^2a(t)^2-\\ddot a(t)\/a(t)},\n\\end{equation}\nwhere $\\Delta$ is the Laplace-Beltrami operator on the spatial hypersurface~$\\Sigma$. While in the Schwinger effect the agent generating particle production is the external electric field, now in FLRW spacetimes the particle production is due to the evolution of the Universe, characterized by $a(t)$. An alternative interpretation is that the field $\\phi$ is a free field propagating in the static spacetime $ds^2=-dt^2+h_{ij}dx^{i}dx^{j}$, but with a time-dependent mass $m(t)$. In order to obtain a harmonic oscillator equation of the type \\eqref{eq_harmonic} for certain modes, different orthonormal bases for the expansions of the solutions to \\eqref{eq_cosmologyKG} can be chosen depending on the particular system. For example, if $\\Sigma$ is a three-sphere in a closed FLRW spacetime, an expansion in terms of hyperspherical harmonics of order $n$ leads to decoupled modes satisfying harmonic oscillator equations with time-dependent frequencies \\cite{Cortez:2019orm}\n\\begin{equation} \\label{eq_omegan}\n \\omega_n(t)=\\sqrt{n(n+2)+m(t)^2}.\n\\end{equation} \n\nIn all these systems the external agent (either the electric or the gravitational field) is assumed to be spatially homogeneous. 
Thanks to this symmetry, it is possible to find modes of the scalar matter field verifying decoupled harmonic oscillator equations with time-dependent frequencies. However, this is not the case when dealing with spatial inhomogeneities. In that case the mode decomposition would lead to a tower of coupled equations of motion for the infinite modes of the field and we would need other techniques. For example, in the case of the inhomogeneous Schwinger effect, one could consider the Wigner approach \\cite{Hebenstreit1,Hebenstreit2,Hebenstreit3,Schwingerwigner,WignerFonarev}. We leave those more complicated systems for future work.\n\n\\section{Canonical quantization approach} \\label{sec_canonical}\n\nIn this section we will quantize the classical systems described in the previous section following the canonical quantization approach. We present here the essential aspects in order to understand our work. For deeper analyses covering a wide range of systems of the type described in section \\ref{sec_schwingercurved}, see e.g. \\cite{wald1994quantum,Garay2020,AlvarezFermions,Cortez:2019orm, Cortez:2020rla}.\n\n\\subsection{Ambiguities in the canonical quantization}\n\nGiven a particular complex solution $z_{\\textbf{k}}(t)$ of the harmonic oscillator equation with time-dependent frequency \\eqref{eq_harmonic}, \nthere exists a unique complex coefficient $a_{\\textbf{k}}$ such that \nany other real solution $\\phi_{\\textbf{k}}(t)$ and its canonically conjugate momentum $\\pi_{\\textbf{k}}(t)=\\dot{\\phi}_{\\textbf{k}}(t)$ can be uniquely written as\n\\begin{align} \n \\mqty(\\phi_{\\textbf{k}}(t)\\\\\\pi_{\\textbf{k}}(t))&=G_{(z_{\\textbf{k}},\\dot z_{\\textbf{k}})}(t)\\mqty(a_{\\textbf{k}}\\\\a^*_\\textbf{k}), \n \\notag\\\\\n G_{(z_{\\textbf{k}},\\dot z_{\\textbf{k}})}(t)&=\n \\mqty(z_{\\textbf{k}}(t)&z_{\\textbf{k}}^*(t)\\\\\\dot z_{\\textbf{k}}(t)& \\dot z_{\\textbf{k}}^*(t)).\n\\label{eq_matrixz}\\end{align}\nThe coefficient $a_{\\textbf{k}}$ and its complex conjugate $a_{\\textbf{k}}^*$ of this linear combination are called annihilation and creation variables, respectively.\n\nThe above decomposition depends on the choice of the solution $z_\\textbf{k}(t)$. More generally, we have the possibility of expressing the solution $\\phi_{\\textbf{k}}(t)$ and its momentum $\\pi_{\\textbf{k}}(t)$ in terms of complex functions $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$, respectively:\\footnote{In the literature about the canonical study of the quantum unitary implementation of the dynamics (e.g., see references \\cite{Garay2020,AlvarezFermions,Cortez:2019orm}) it is usual to denote these time-dependent functions as $\\zeta_{\\textbf{k}}=ig_{\\textbf{k}}^*$ and $\\rho_{\\textbf{k}}=-if_{\\textbf{k}}^*$.}\n\\begin{align} \n \\mqty(\\phi_{\\textbf{k}}(t)\\\\\\pi_{\\textbf{k}}(t))&=G_{(\\zeta_{\\textbf{k}},\\rho_{\\textbf{k}})}(t)\\mqty(a_{\\textbf{k}}(t)\\\\a^*_\\textbf{k}(t)), \\notag\\\\ G_{(\\zeta_{\\textbf{k}},\\rho_{\\textbf{k}})}(t)&=\n \\mqty(\\zeta_{\\textbf{k}}(t)&\\zeta_{\\textbf{k}}^*(t)\\\\\\rho_{\\textbf{k}}(t)& \\rho_{\\textbf{k}}^*(t)).\n\\label{eq_matrix}\\end{align}\nLet us remark that $\\zeta_{\\textbf{k}}(t)$ is not necessarily a solution to the harmonic oscillator equation~\\eqref{eq_harmonic}. Only if this is the case, the annihilation and creation variables $a_{\\textbf{k}}(t)$ and $a^*_{\\textbf{k}}(t)$ are time-independent. 
Otherwise, these variables have to carry the appropriate time dependence compensating for that of $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$, so that the combination \\eqref{eq_matrix} leads to a solution $\\phi_{\\textbf{k}}(t)$ of~\\eqref{eq_harmonic}. \n\nNote that equation \\eqref{eq_matrix} reduces to \\eqref{eq_matrixz}\nif we choose $\\zeta_{\\textbf{k}}(t)=z_{\\textbf{k}}(t)$, which then implies $\\rho_{\\textbf{k}}(t)=\\dot{z}_{\\textbf{k}}(t)$ (and $a_{\\textbf{k}}(t)=a_{\\textbf{k}})$. In the general case, i.e., when $\\zeta_{\\textbf{k}}(t)$ is not a solution to~\\eqref{eq_harmonic}, $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$ are not completely independent. Indeed, the pair of canonical modes and the annihilation and creation variables have to verify the Poisson bracket relations\n\\begin{align}\n \\{\\phi_{\\textbf{k}}(t),\\pi_{\\textbf{k}'}(t)\\}&=\\delta(\\textbf{k}-\\textbf{k}'), \\notag\\\\ \\{a_{\\textbf{k}}(t),a_{\\textbf{k}'}^*(t)\\}&=-i\\delta(\\textbf{k}-\\textbf{k}'),\n\\end{align}\nwhere $\\delta$ denotes the Dirac delta distribution. They impose the normalization conditions\n\\begin{equation} \\label{eq_relzetarho}\n \\zeta_{\\textbf{k}}(t)\\rho_{\\textbf{k}}^*(t)-\\zeta_{\\textbf{k}}^*(t)\\rho_{\\textbf{k}}(t)=i.\n\\end{equation}\nIt can be easily verified that this requirement ensures that the expression for $\\pi_{\\textbf{k}}(t)$ given \n in the second row of \\eqref{eq_matrix} is equivalent to the time derivative of $\\phi_{\\textbf{k}}(t)$ in the first row. \n \n \nIn the canonical quantization approach we promote the annihilation and creation variables to annihilation and creation operators acting on the corresponding Fock space. Then, in view of \\eqref{eq_matrix}, one classical theory can have infinitely many associated quantum theories. Indeed, we have the ambiguity in the particular choice of functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$, which have to verify the relation \\eqref{eq_relzetarho}. This selection determines a one-parameter family of quantizations, one for each value of the time variable~$t$: the corresponding quantum operators $(\\hat{a}_{\\textbf{k}}(t),\\hat{a}_{\\textbf{k}}(t)^{\\dagger})$ determine the associated Fock vacuum state $|0\\rangle_t$ as the state annihilated by $\\hat{a}_{\\textbf{k}}(t)$ for all $\\textbf{k}$. In other words, and connecting with analog discussions in the literature of unitary implementation of the quantum field dynamics (see e.g. \\cite{CORTEZ201536}), we have ambiguity in the choice of canonical variables to be quantized and in the choice of complex structure to carry out the quantization, both encoded in the functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$.\n \n\nOne criterion to reduce these ambiguities is to unitarily implement the symmetries of the classical system in the quantum theory, which reduces the possible selections of functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$. In fact, in Minkowski spacetime when no external field is present, Poincar\u00e9 symmetry fixes completely this choice: this maximal symmetry fixes $\\zeta_{\\textbf{k}}(t)$ to be the plane wave of frequency $\\omega_{\\textbf{k}}$. For systems which only differ slightly from flat spacetime, one can expect that this construction can be extended. This is the case when $\\omega_{\\textbf{k}}(t)$ varies slowly throughout time, recovering the Minkowski case in the limit of constant frequency. However, in our work we will go beyond this particular case. 
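\n\nTo make concrete the earlier observation that $a_{\\textbf{k}}(t)$ is constant precisely when $\\zeta_{\\textbf{k}}(t)$ solves \\eqref{eq_harmonic}, the following self-contained numerical sketch (ours; the smooth frequency profile and all parameter values are arbitrary illustrative choices) extracts $a_{\\textbf{k}}(t)$ by inverting the $2\\times 2$ matrix in \\eqref{eq_matrix} with the help of \\eqref{eq_relzetarho}, once for $\\zeta_{\\textbf{k}}=z_{\\textbf{k}}$ (an exact mode) and once for a simple Minkowski-like choice built from the instantaneous frequency:\n\\begin{verbatim}\n# a_k(t) = -i (rho* phi_k - zeta* pi_k): constant only if zeta_k\n# solves (eq_harmonic).  Illustrative smooth frequency profile.\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\ndef omega(t):\n    return np.sqrt(1.6 + 0.6*np.tanh(0.5*t))\n\ndef rhs(t, y):  # y = (Re z, Im z, Re zdot, Im zdot)\n    w2 = omega(t)**2\n    return [y[2], y[3], -w2*y[0], -w2*y[1]]\n\nt0 = -40.0\nw0 = omega(t0)\nsol = solve_ivp(rhs, (t0, 40.0), [1\/np.sqrt(2*w0), 0, 0, -np.sqrt(w0\/2)],\n                rtol=1e-10, atol=1e-12, dense_output=True)\n\ndef mode(t):\n    y = sol.sol(t)\n    return y[0] + 1j*y[1], y[2] + 1j*y[3]\n\ndef a_k(t, zeta, rho):\n    z, zdot = mode(t)\n    phi, pi = z.real, zdot.real      # a real solution of (eq_harmonic)\n    return -1j*(np.conj(rho)*phi - np.conj(zeta)*pi)\n\nfor t in (-20.0, 0.0, 20.0):\n    z, zdot = mode(t)\n    w = omega(t)\n    zeta0, rho0 = 1\/np.sqrt(2*w), -1j*np.sqrt(w\/2)  # Minkowski-like pair\n    print(t, a_k(t, z, zdot), a_k(t, zeta0, rho0))\n\\end{verbatim}\nThe first printed value stays at the constant fixed by the initial data, whereas the Minkowski-like choice yields an explicitly time-dependent $a_{\\textbf{k}}(t)$, in line with the discussion above.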
\n\nLet us note that in our system, requiring that our quantization unitarily implements the classical symmetries implies invariance of the vacuum under spatial translations. As a consequence, other expansions of the canonical pair $(\\phi(t,\\textbf{x}),\\pi(t,\\textbf{x}))$ mixing Fourier modes are not allowed, and all the ambiguity that we are considering is the one encoded in the choice of $ (\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$.\n\n\\subsection{Parametrization of the ambiguities} \\label{sec_parametrization}\n\nFor later convenience, we parametrize the freedom in the choice of $\\zeta_{\\textbf{k}}(t)$ in terms of two arbitrary real functions $W_{\\textbf{k}}(t) > 0$ and $\\varphi_{\\textbf{k}}(t)$ related to its modulus and its phase, respectively, in the following way:\n\\begin{equation} \\label{eq_zetaparam}\n \\zeta_{\\textbf{k}}(t)=\\frac{1}{\\sqrt{2W_{\\textbf{k}}(t)}}e^{-i\\varphi_{\\textbf{k}}(t)}.\n\\end{equation}\nIn addition, it is easy to verify that the normalization condition \\eqref{eq_relzetarho} reduces the ambiguity in the choice of the complex function $\\rho_{\\textbf{k}}(t)$ to just one real function $Y_{\\textbf{k}}(t)$ such that\n\\begin{equation} \\label{eq_rhoparam}\n \\rho_{\\textbf{k}}(t)=-\\sqrt{\\frac{W_{\\textbf{k}}(t)}{2}}[i+Y_{\\textbf{k}}(t)]e^{-i\\varphi_{\\textbf{k}}(t)}.\n\\end{equation}\n\nThere are occasions in which certain families of functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ stand out. In the particular case in which we demand $\\zeta_{\\textbf{k}}(t)$ to be a solution to the harmonic oscillator equation with time-dependent frequency \\eqref{eq_harmonic}, then not only the normalization condition~\\eqref{eq_relzetarho} has to be verified but also $\\rho_{\\textbf{k}}(t)=\\dot{\\zeta}_{\\textbf{k}}(t)$. This fixes $\\dot{\\varphi}_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ as functions of $W_{\\textbf{k}}(t)$ according to\n\\begin{align} \n\\label{eq_eqW}\n W_{\\textbf{k}}(t)^2&=\\omega_{\\textbf{k}}(t)^2-\\frac{1}{2}\\left[ \\frac{\\ddot{W}_{\\textbf{k}}(t)}{W_{\\textbf{k}}(t)}-\\frac{3}{2}\\frac{\\dot{W}_{\\textbf{k}}(t)^2}{W_{\\textbf{k}}(t)^2} \\right], \n\\\\ \\label{eq_phigammasol}\n \\dot{\\varphi}_{\\textbf{k}}(t)&=W_{\\textbf{k}}(t), \\qquad Y_{\\textbf{k}}(t)=\\frac{\\dot{W}_{\\textbf{k}}(t)}{2W_{\\textbf{k}}(t)^2}.\n\\end{align}\nThus, the freedom in the choice of the pair $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ when we impose that $\\zeta_{\\textbf{k}}(t)$ is a particular normalized solution to \\eqref{eq_harmonic} is encoded in the initial conditions $W_{\\textbf{k}}(t_0)$, $\\dot{W}_{\\textbf{k}}(t_0)$, and $\\varphi_{\\textbf{k}}(t_0)$ at some initial time~$t_0$.\n\nAnother possibility is to require that $\\zeta_{\\textbf{k}}(t)$ is an approximate solution to the equation of motion. In this case, equations \\eqref{eq_eqW} and \\eqref{eq_phigammasol} must hold approximately. For instance, when the time-dependent frequency $\\omega_{\\textbf{k}}(t)$ is slowly-varying, the most common selection in the literature is the adiabatic approximation \\cite{birrell_davies_1982}. It is recursively defined from the zeroth-order approximation \\begin{equation} \\label{eq_adiabatic0}\n W_{\\textbf{k}}^{(0)}(t)=\\omega_{\\textbf{k}}(t), \\qquad \\varphi^{(0)}_{\\textbf{k}}(t)=\\int_{t_0}^t dt' \\ \\omega_{\\textbf{k}}(t').\n\\end{equation}\nFor $Y^{(0)}_{\\textbf{k}}(t)$ there are two choices with different adiabatic order (i.e., number of time derivatives of $\\omega_{\\textbf{k}}(t)$). 
The first possibility $Y^{(0)}_\\textbf{k} (t)=\\dot\\omega_\\textbf{k}(t)\/[2\\omega_\\textbf{k}(t)^2]$ is common in the references about quantum field theory in curved spacetime (see, e.g., \\cite{birrell_davies_1982}). This approximates exact modes (called zeroth-order adiabatic modes) and their derivatives up to second-adiabatic order. The other choice $\\breve Y^{(0)}_\\textbf{k}(t)=0$ (we have added a $\\,\\breve{}\\,$ to differentiate it from the previous option) is common in the QVE literature (see, e.g., \\cite{Kluger_1998,1998,Fedotov_2011}). This approximates exact adiabatic modes only up to first-adiabatic order. We will emphasize the consequences of these two different selections later in the text. The $n$th-adiabatic approximation can be obtained in the standard way \\cite{birrell_davies_1982} recursively introducing the previous order in \\eqref{eq_eqW}.\nThe corresponding exact mode $z_{\\textbf{k}}^{(n)}(t)$, determined by fixing the initial data according to $z_{\\textbf{k}}^{(n)}(t_0)=\\zeta^{(n)}_{\\textbf{k}}(t_0)$ and $\\dot{z}_{\\textbf{k}}^{(n)}(t_0)=\\rho^{(n)}_{\\textbf{k}}(t_0)$, is usually called the $n$th-order adiabatic mode.\\footnote{In the context of cosmology it is also usual to define the zeroth-order adiabatic approximation via $W^{(0)}_{\\textbf{k}}(t)=k$ and $Y^{(0)}_{\\textbf{k}}(t)=0$ \\cite{parker_toms_2009}. With that convention the $(n+2)$th-order adiabatic mode is our $n$th-order adiabatic mode.}\n\nOn the other hand, remember that in general we do not require $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ to be solutions to the equation of motion, not even approximately. We find in the literature other selections, including functions diagonalizing the Hamiltonian for large wave numbers \\cite{HamiltonianDiagonalizationGuillermo,Cortez:2020rla}, and others which minimize oscillations of the number of created particles throughout time or of the primordial power spectrum \\cite{superadiabatic1,superadiabatic2,PrimordialPowerJavi,nonoscillations}. Moreover, recently the so-called exact WKB analysis has been used, which consists of a Borel resummation of the ordinary WKB approximations, to study the Schwinger effect \\cite{exactWKB}. Our analysis will be general, without assuming specific selections of these functions. In section~\\ref{sec_unitary} we will restrict the study to the family of Fock quantizations with unitary dynamics, as we consider that property as essential.\n\n\nIn summary, a particular family of canonical quantum theories (one for each time $t$) is unequivocally selected by choosing $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ for each $\\textbf{k}$. The preservation of the Poisson algebra of the canonical fields and the creation and annihilation variables at each time restricts in a precise way the choice of $\\rho_{\\textbf{k}}(t)$ through the normalization condition~\\eqref{eq_relzetarho}. We have a complete freedom of two real time-dependent functions ($W_{\\textbf{k}}(t)$, $\\varphi_{\\textbf{k}}(t)$) to determine $\\zeta_{\\textbf{k}}(t)$ and only one additional real function $Y_{\\textbf{k}}(t)$ to characterize $\\rho_{\\textbf{k}}(t)$. 
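\n\nSince all subsequent formulas rely on this parametrization, it may be worth verifying explicitly that it is consistent. The following short symbolic check (ours) confirms that \\eqref{eq_zetaparam} and \\eqref{eq_rhoparam} satisfy the normalization condition \\eqref{eq_relzetarho} identically, for any real $W_{\\textbf{k}}>0$, $Y_{\\textbf{k}}$ and $\\varphi_{\\textbf{k}}$:\n\\begin{verbatim}\n# Check (eq_zetaparam)-(eq_rhoparam) against (eq_relzetarho).\nimport sympy as sp\n\nt = sp.symbols('t', real=True)\nW = sp.Function('W', positive=True)(t)\nY = sp.Function('Y', real=True)(t)\nvphi = sp.Function('varphi', real=True)(t)\n\nzeta = sp.exp(-sp.I*vphi)\/sp.sqrt(2*W)\nrho = -sp.sqrt(W\/2)*(sp.I + Y)*sp.exp(-sp.I*vphi)\n\nprint(sp.simplify(zeta*sp.conjugate(rho) - sp.conjugate(zeta)*rho))  # -> I\n\\end{verbatim}\nNo additional relation ties $Y_{\\textbf{k}}(t)$ to $W_{\\textbf{k}}(t)$ at this stage; relations such as \\eqref{eq_phigammasol} only arise when $\\zeta_{\\textbf{k}}(t)$ is required to solve \\eqref{eq_harmonic}, exactly or approximately.\n\n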
As we are going to see, the number of created particles throughout time will strongly depend on the choice of both $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$.\n\n\\subsection{Number of created particles} \\label{sec_N(t)}\n\nWe are interested in computing the number of particles in the vacua $|0\\rangle_t$ with respect to the vacuum of another quantum theory that we will take as reference. Furthermore, we will see the variation in $t$ in the functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ as providing time evolution for the quantization, so that particles are created or destroyed as time evolves. For this comparison, first we have to choose such reference vacuum. With that aim we fix a complex basis $(z_{\\textbf{k}}(t), z^*_{\\textbf{k}}(t))$ for the space of solutions of the harmonic oscillator equation \\eqref{eq_harmonic}, and determine the associated creation and annihilation time-independent variables $({a}_{\\textbf{k}}, {a}^*_{\\textbf{k}})$. Then, the reference vacuum, that will be denoted by~$\\ket{0}$, will be the state annihilated by all the operators~$\\hat{a}_{\\textbf{k}}$.\n\nThe different sets of annihilation and creation variables $({a}_{\\textbf{k}}, {a}_{\\textbf{k}}^*)$ and $({a}_{\\textbf{k}}(t), {a}_{\\textbf{k}}^*(t))$, associated with $(z_{\\textbf{k}}(t), z^*_{\\textbf{k}}(t))$ and $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$, respectively, are related by a canonical transformation $\\mathcal{B}(t)$ called a Bogoliubov transformation. Since the modes $\\phi_{\\textbf{k}}(t)$ satisfy decoupled harmonic oscillator equations \\eqref{eq_harmonic} for different wave vectors $\\textbf{k}$, $\\mathcal{B}(t)$ does not mix them. Its $\\textbf{k}$-component $\\mathcal{B}_{\\textbf{k}}(t)$ can be written as\n\\begin{equation} \\label{eq_bogoliubov}\n \\mqty(a_{\\textbf{k}}(t)\\\\a_{\\textbf{k}}^*(t))=\\mathcal{B}_{\\textbf{k}}(t)\\mqty(a_{\\textbf{k}}\\\\a_{\\textbf{k}}^*), \\qquad \\mathcal{B}_{\\textbf{k}}(t)=\\mqty(\\alpha_{\\textbf{k}}(t)&\\beta_{\\textbf{k}}(t)\\\\\\beta_{\\textbf{k}}^*(t)&\\alpha_{\\textbf{k}}^*(t)).\n\\end{equation}\nThe preservation of the Poisson algebra of the creation and annihilation variables relates the Bogoliubov coefficients for all $t$ according to\n\\begin{equation} \\label{eq_relationbog}\n |\\alpha_{\\textbf{k}}(t)|^2-|\\beta_{\\textbf{k}}(t)|^2=1.\n\\end{equation}\nThis Bogoliubov transformation \\eqref{eq_bogoliubov} enables us to better understand the physical consequences of having an ambiguity in the selection of annihilation and creation variables. As long as these $\\beta$-coefficients do not vanish, the associated quantum theories will have different notions of particles and antiparticles. \nIn this way, the number of particles for each wave vector $\\textbf{k}$ in the quantum theory defined by the set $(\\hat{a}_{\\textbf{k}}(t), \\hat{a}_{\\textbf{k}}^*(t))$ measured with respect to the reference vacuum $|0\\rangle$ is given by\n\\begin{equation} \\label{eq_Nbeta}\n N_{\\textbf{k}}(t)=\\langle 0|\\hat{a}_{\\textbf{k}}(t)^{\\dagger}\\hat{a}_{\\textbf{k}}(t)|0\\rangle=|\\beta_{\\textbf{k}}(t)|^2.\n\\end{equation}\nThe last equality is obtained by substituting the expression of $\\hat{a}_{\\textbf{k}}(t)$ in terms of $\\hat{a}_{\\textbf{k}}$ using~\\eqref{eq_bogoliubov}. 
We see that this number of created particles strongly depends both on the reference vacuum and on the particular functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ chosen, and this will be made explicit in the following.\n\nIn order to write an expression for the Bogoliubov coefficients we use the classical equivalence between $\\phi_{\\textbf{k}}(t)$ and $\\pi_{\\textbf{k}}(t)$ written in terms of an exact solution $z_{\\textbf{k}}(t)$ (equation \\eqref{eq_matrixz}) and in terms of $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$ (equation \\eqref{eq_matrix}). Using also the normalization condition~\\eqref{eq_relzetarho} we finally deduce that\n\\begin{align}\n \\mqty(\\alpha_{\\textbf{k}}(t)\\\\\\beta_{\\textbf{k}}^*(t))&= G_{(\\zeta_\\textbf{k},\\rho_\\textbf{k})}^{-1}(t) \n \\mqty(z_{\\textbf{k}}(t)\\\\ \\dot z_{\\textbf{k}}(t)),\n \\notag\\\\\n G_{(\\zeta_\\textbf{k},\\rho_\\textbf{k})}^{-1}(t)&= -i\\mqty(\\rho^*_{\\textbf{k}}(t)&-\\zeta_{\\textbf{k}}^*(t)\\\\ -\\rho_{\\textbf{k}}(t)& \\zeta_{\\textbf{k}}(t)).\n\\label{eq_alphabetat}\\end{align}\nThen, it is direct to write $N_{\\textbf{k}}(t)$ in terms of the free functions $W_{\\textbf{k}}(t)$, $\\varphi_{\\textbf{k}}(t)$, and $Y_{\\textbf{k}}(t)$ that characterize $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$, and the particular solution $z_{\\textbf{k}}(t)$ defining the reference vacuum~$\\ket{0}$:\n\\begin{align} \n N_{\\textbf{k}}(t)&=\\frac{W_{\\textbf{k}}(t)}{2}\\left[1+Y_{\\textbf{k}}(t)^2\\right]|z_{\\textbf{k}}(t)|^2+\\frac{1}{2W_{\\textbf{k}}(t)}|\\dot{z}_{\\textbf{k}}(t)|^2\n \\notag\\\\\n &-\\frac{1}{2}+Y_{\\textbf{k}}(t)\\Re{z_{\\textbf{k}}^*(t)\\dot{z}_{\\textbf{k}}(t)}.\n\\label{eq_N1}\\end{align}\nThis first result is a generalized expression of the one found in \\cite{Fedotov_2011}, which corresponds to the particular case in which we choose $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ to be the zeroth-order adiabatic approximation $(\\zeta^{(0)}_{\\textbf{k}}(t),\\rho^{(0)}_{\\textbf{k}}(t))$, fixed by \\eqref{eq_adiabatic0} and the choice $\\breve Y^{(0)}_{\\textbf{k}}(t)=0$:\n\\begin{equation} \\label{eq_Nad}\n N^{(0)}_{\\textbf{k}}(t)=\\frac{\\omega_{\\textbf{k}}(t)}{2}|z_{\\textbf{k}}(t)|^2+\\frac{1}{2\\omega_{\\textbf{k}}(t)}|\\dot{z}_{\\textbf{k}}(t)|^2-\\frac{1}{2}.\n\\end{equation}\nIn this case, $z_{\\textbf{k}}(t)$ would naturally be the zeroth-order adiabatic mode (with initial adiabatic conditions at $t_0$) $\\breve z^{(0)}_\\textbf{k}(t)$. In particular, our formalism also allows us to write the alternative version of this equation when we select $ {Y}^{(0)}_{\\textbf{k}}(t)=\\dot{\\omega}_{\\textbf{k}}(t)\/[2\\omega_{\\textbf{k}}(t)^2]$ instead of $\\breve Y^{(0)}_{\\textbf{k}}(t)$, which provides a better zeroth-order adiabatic approximation to the equation of motion, as explained in section \\ref{sec_parametrization}.\n\nAs we see from \\eqref{eq_N1}, \n$N_{\\textbf{k}}(t)$ does not depend on the phase $\\varphi_{\\textbf{k}}(t)$ of $\\zeta_{\\textbf{k}}(t)$. Thus, although we have a freedom of three real time-dependent functions to determine the canonical quantization, the number of created particles only depends on two of them: $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$. 
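\n\nAs a concrete illustration of \\eqref{eq_N1}, the following self-contained sketch (ours; the smooth frequency profile and all numbers are arbitrary illustrative choices rather than a specific physical background) evaluates the particle number along a numerically integrated reference mode $z_{\\textbf{k}}(t)$ with positive-frequency data in the far past, for the two zeroth-order choices of $Y_{\\textbf{k}}$ mentioned in section \\ref{sec_parametrization}; note that the phase $\\varphi_{\\textbf{k}}(t)$ never enters the computation:\n\\begin{verbatim}\n# Evaluate Eq. (eq_N1) along a numerically integrated reference mode.\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\ndef omega(t):\n    return np.sqrt(1.6 + 0.6*np.tanh(0.5*t))\n\ndef domega(t, h=1e-5):\n    return (omega(t + h) - omega(t - h))\/(2*h)\n\ndef rhs(t, y):  # y = (Re z, Im z, Re zdot, Im zdot)\n    w2 = omega(t)**2\n    return [y[2], y[3], -w2*y[0], -w2*y[1]]\n\nt0 = -40.0\nw0 = omega(t0)\nsol = solve_ivp(rhs, (t0, 40.0), [1\/np.sqrt(2*w0), 0, 0, -np.sqrt(w0\/2)],\n                rtol=1e-10, atol=1e-12, dense_output=True)\n\ndef N(t, W, Y):  # Eq. (eq_N1)\n    y = sol.sol(t)\n    z, zdot = y[0] + 1j*y[1], y[2] + 1j*y[3]\n    return (0.5*W*(1 + Y**2)*abs(z)**2 + 0.5*abs(zdot)**2\/W - 0.5\n            + Y*np.real(np.conj(z)*zdot))\n\nfor t in (0.0, 5.0, 40.0):\n    w, dw = omega(t), domega(t)\n    print(t, N(t, w, 0.0),           # breve choice, Eq. (eq_Nad)\n          N(t, w, dw\/(2*w**2)))      # Y = omega'\/(2 omega^2)\n\\end{verbatim}\nThe two choices differ at intermediate times and coincide once $\\dot{\\omega}_{\\textbf{k}}$ vanishes, illustrating the dependence of $N_{\\textbf{k}}(t)$ on $Y_{\\textbf{k}}(t)$ just discussed.\n\n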
This is obvious from the fact that, in our formalism where we do not ask the functions $\\zeta_{\\textbf{k}}(t)$ to solve the equation of motion, multiplying $\\zeta_{\\textbf{k}}(t)$ by a time-dependent phase is a trivial Bogoliubov transformation, i.e., a transformation with null $\\beta$-coefficients. In the particular case that we choose $\\zeta_{\\textbf{k}}(t)$ as a solution to \\eqref{eq_harmonic} related to $z_{\\textbf{k}}(t)$ by a non-trivial Bogoliubov transformation, $\\dot{\\varphi}_{\\textbf{k}}(t)$ would be fixed by $W_{\\textbf{k}}(t)$ according to \\eqref{eq_phigammasol}. Therefore, in that case, the only freedom in the phase is its value at initial time $\\varphi_{\\textbf{k}}(t_0)$, but again $N_{\\textbf{k}}(t)$ is independent from such initial value.\n\nOnce $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ are chosen, in order to compute $N_{\\textbf{k}}(t)$ there is still a residual ambiguity in the choice of reference vacuum $\\ket{0}$, or equivalently, in the selection of a particular solution $z_{\\textbf{k}}(t)$ to the harmonic oscillator equation \\eqref{eq_harmonic} for each ${\\textbf{k}}$. However, this ambiguity can be suitably fixed under certain circumstances. For example, let us consider matter fields which behave as in free Minkowski spacetime in the asymptotic past. This can be achieved, for instance, in the Schwinger effect by turning on the electric field at a finite time or in FLRW spacetimes by considering an asymptotically static expanding universe~\\cite{birrell_davies_1982}. Then, the system possesses Poincar\u00e9 symmetry when $t\\rightarrow -\\infty$. When we require that the quantum theory preserves this classical symmetry in the past, we need to impose that in the asymptotic past $z_{\\textbf{k}}(t)$ behaves as a positive-frequency plane wave (according to our conventions of creation and annihilation of particles). This asymptotic condition $z_{\\textbf{k}}(t\\rightarrow-\\infty)$ completely determines $z_{\\textbf{k}}(t)$ for all $t$ and there remains no ambiguity in the selection of $\\ket{0}$. Another example, already mentioned above, in which there is a natural choice for $z_{\\textbf{k}}(t)$ is when the field modes behave adiabatically. In that case we are interested in comparing the $n$th-order adiabatic approximation $\\left(\\zeta_{\\textbf{k}}^{(n)}(t),\\rho_{\\textbf{k}}^{(n)}(t)\\right)$ with the corresponding exact solution, and then one chooses $z_{\\textbf{k}}(t)$ as the $n$th-order adiabatic mode~$z^{(n)}_\\textbf{k}(t)$.\n\nIn addition, there are only few cases in which it is possible to find particular solutions to~\\eqref{eq_harmonic}, and hence compute $N_{\\textbf{k}}(t)$ from \\eqref{eq_N1}. For example, this is the case in the Schwinger effect when the external electric field derives from a Sauter-type potential \\cite{Sauter1931742}, which turns off in the asymptotic past, and following the arguments above we search for solutions that behave as positive-frequency plane waves in $t\\rightarrow -\\infty$ \\cite{BreakingAdiabaticNavarro}. However, in general this is not possible and it would be useful to obtain a differential equation for $N_{\\textbf{k}}(t)$ in which particular solutions $z_{\\textbf{k}}(t)$ to the equations of motion do not take part explicitly. 
This is precisely what we are going to do in the next section.\n \n\\section{Generalized quantum Vlasov equation} \\label{sec_GQVE}\n\nIn the following we are interested in deducing a differential equation for the number of created particles for which, unlike \\eqref{eq_N1}, there is no need to solve the harmonic oscillator equation with time-dependent frequency first. Of course, this equation, just like \\eqref{eq_N1}, will strongly depend on the particular choices of $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$. \n\nThe dynamics of the number of particles as compared with the reference vacuum, $N_{\\textbf{k}}(t)=|\\beta_{\\textbf{k}}(t)|^2$, is determined by the evolution of the Bogoliubov coefficients. Hence, it will be useful to write time evolution equations for both $\\alpha_{\\textbf{k}}(t)$ and $\\beta_{\\textbf{k}}(t)$. For that, \nwe differentiate \\eqref{eq_alphabetat} with respect to $t$ and replace $\\ddot{z}_{\\textbf{k}}(t)$ by $-\\omega_{\\textbf{k}}(t)^2z_{\\textbf{k}}(t)$ as dictated by the equation of motion \\eqref{eq_harmonic}. Finally, we use the inverse of \\eqref{eq_alphabetat} and obtain\n\\begin{equation} \\label{eq_evbogoliubov}\n \\mqty(\\dot{\\alpha}_{\\textbf{k}}\\\\\\dot{\\beta}_{\\textbf{k}}^*)=i\\mqty(s_{\\textbf{k}}+\\dot{\\varphi}_{\\textbf{k}}&r_{\\textbf{k}}e^{2i\\varphi_{\\textbf{k}}}\\\\-r_{\\textbf{k}}^*e^{-2i\\varphi_{\\textbf{k}}}&-(s_{\\textbf{k}}+\\dot{\\varphi}_{\\textbf{k}}))\\mqty(\\alpha_{\\textbf{k}}\\\\\\beta_{\\textbf{k}}^*),\n\\end{equation}\nwhere $s_{\\textbf{k}}$ is a real time-dependent function given by\n\\begin{equation} \\label{eq_s}\n s_{\\textbf{k}}=-\\frac{\\omega_{\\textbf{k}}^2}{2W_{\\textbf{k}}}+\\frac{1}{2}\\left[\\dot{Y}_{\\textbf{k}}-W_{\\textbf{k}}\\left(1+Y_{\\textbf{k}}^2\\right)\\right]+\\frac{\\dot{W}_{\\textbf{k}}}{2W_{\\textbf{k}}}Y_{\\textbf{k}},\n\\end{equation}\nwhile the time-dependent function $r_{\\textbf{k}}$ is determined by its real and imaginary parts, $\\mu_{\\textbf{k}}$ and $\\nu_{\\textbf{k}}$, respectively:\n\\begin{equation} \\label{eq_AB}\n \\mu_{\\textbf{k}}=s_{\\textbf{k}}+W_{\\textbf{k}}, \\qquad\\!\\!\n \\nu_{\\textbf{k}}=-\\frac{\\dot{W}_{\\textbf{k}}}{2W_{\\textbf{k}}}+W_{\\textbf{k}}Y_{\\textbf{k}}, \\qquad\\!\\! r_{\\textbf{k}}=\\mu_{\\textbf{k}}+i\\nu_{\\textbf{k}}.\n\\end{equation}\nNote that we have deliberately eliminated the dependence on the phase $\\varphi_{\\textbf{k}}$ in $s_{\\textbf{k}}$ and $r_{\\textbf{k}}$, extracting it explicitly in \\eqref{eq_evbogoliubov}. Thus, both $s_{\\textbf{k}}$ and $r_{\\textbf{k}}$ are unequivocally specified once the free functions $(W_{\\textbf{k}},Y_{\\textbf{k}})$, which characterize the particular annihilation and creation operators $\\hat{a}_{\\textbf{k}}(t)$ and $\\hat{a}_{\\textbf{k}}(t)^{\\dagger}$ in the quantum theory, are fixed. Equations \\eqref{eq_evbogoliubov} coincide with the results of \\cite{MottolaDeSitter99}, with the appropriate change of variables. \n\nOnce these evolution equations are known, we generalize the procedure followed in \\cite{Kluger_1998}. 
Differentiating $|\\beta_{\\textbf{k}}(t)|^2$ and using \\eqref{eq_evbogoliubov} it can be easily seen that\n\\begin{equation} \\label{eq_dotN}\n \\dot{N}_{\\textbf{k}}(t)=2\\Im{e^{-2i\\varphi_{\\textbf{k}}(t)}r_{\\textbf{k}}^*(t)M_{\\textbf{k}}(t)},\n\\end{equation}\nwhere we have taken advantage of the real character of $s_{\\textbf{k}}$ and we have defined the auxiliary function\n\\begin{equation}\n M_{\\textbf{k}}(t)=\\alpha_{\\textbf{k}}(t)\\beta_{\\textbf{k}}(t).\n\\end{equation}\nAnalogous to this deduction, it is not difficult to obtain an equation for $M_{\\textbf{k}}(t)$,\n\\begin{equation} \\label{eq_dotM}\n \\dot{M}_{\\textbf{k}}(t)=ir_{\\textbf{k}}(t)e^{2i\\varphi_{\\textbf{k}}(t)}[1+2N_{\\textbf{k}}(t)]+2i[s_{\\textbf{k}}(t)+\\dot{\\varphi_{\\textbf{k}}}(t)]M_{\\textbf{k}}(t),\n\\end{equation}\nby using \\eqref{eq_evbogoliubov} and the relation \\eqref{eq_relationbog} between the Bogoliubov coefficients. \n\nNote that neither equation \\eqref{eq_dotN} nor \\eqref{eq_dotM} depend explicitly on the particular solution $z_{\\textbf{k}}(t)$ of the harmonic oscillator equation with time-dependent frequency. However, the residual ambiguity in the choice of reference vacuum $\\ket{0}$ has not disappeared but has been transformed from the freedom in the selection of $z_{\\textbf{k}}(t)$ to the freedom in the initial conditions for $N_{\\textbf{k}}(t)$ and $M_{\\textbf{k}}(t)$. The natural choice ${z}_{\\textbf{k}}(t_0)=\\zeta_{\\textbf{k}}(t_0)$ and $\\dot{z}_{\\textbf{k}}(t_0)=\\rho_{\\textbf{k}}(t_0)$ ensures that both sets of annihilation and creation operators coincide at $t_0$, which implies \n$\\beta_{\\textbf{k}}(t_0)=0$ and hence $N_{\\textbf{k}}(t_0)=M_{\\textbf{k}}(t_0)=0$. \n\n\nIn order to make a direct comparison with the results in the quantum kinetic approach \\cite{Kluger_1998,1998,Fedotov_2011}, it will be interesting to rewrite equations \\eqref{eq_dotN} and \\eqref{eq_dotM} as an integro-differential equation for $N_{\\textbf{k}}(t)$ where the auxiliary function $M_{\\textbf{k}}(t)$ does not intervene. With this objective, we solve \\eqref{eq_dotM} by the method of variation of constants with $N_{\\textbf{k}}$ fixed and initial condition $M_{\\textbf{k}}(t_0)=0$. Then,\n\\begin{equation}\n M_{\\textbf{k}}(t)=e^{2i\\varphi_{\\textbf{k}}(t)}\\int^t_{t_0} d\\tau \\ ir_{\\textbf{k}}(\\tau)[1+2N_{\\textbf{k}}(\\tau)]e^{i\\theta_{\\textbf{k}}(t,\\tau)},\n\\end{equation}\nwhere \n\\begin{equation} \\label{eq_theta}\n \\theta_{\\textbf{k}}(t,\\tau)=2\\int_{\\tau}^t dt' \\ s_{\\textbf{k}}(t').\n\\end{equation}\nSubstituting this expression in \\eqref{eq_dotN} we finally obtain, in terms of the real and imaginary parts of $r_{\\textbf{k}}=\\mu_{\\textbf{k}}+i\\nu_{\\textbf{k}}$:\n\\begin{align} \n \\dot{N}_{\\textbf{k}}(t)&=\\int^t_{t_0} d\\tau \\ 2[1+2N_{\\textbf{k}}(\\tau)]\n \\notag\\\\\n &\\times\\big\\{ \\big[\\mu_{\\textbf{k}}(t)\\mu_{\\textbf{k}}(\\tau)+\\nu_{\\textbf{k}}(t)\\nu_{\\textbf{k}}(\\tau)\\big]\\cos[\\theta_{\\textbf{k}}(t,\\tau)] \\nonumber \\\\ \n &-\\big[\\mu_{\\textbf{k}}(t)\\nu_{\\textbf{k}}(\\tau)-\\nu_{\\textbf{k}}(t)\\mu_{\\textbf{k}}(\\tau)\\big]\\sin[\\theta_{\\textbf{k}}(t,\\tau)]\\big\\}.\n\\label{eq_QVE2}\\end{align} \nNote that $\\dot{N}_{\\textbf{k}}$ does not depend on the arbitrary phase $\\varphi_{\\textbf{k}}$, but only on $W_{\\textbf{k}}$ and $Y_{\\textbf{k}}$, as we already deduced in section \\ref{sec_canonical}. 
This equation is exact and completely general for any given quantization characterized by the pair $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$.\n\nThe equation above shows that pair creation is non-local in time: time evolution of $N_{\\textbf{k}}(t)$ depends on the values of this magnitude in previous times through the bosonic enhancement factor $1+2N_{\\textbf{k}}(\\tau)$.\\footnote{In fermionic systems, the factor $1+2N_{\\textbf{k}}(\\tau)$ transforms into a Pauli blocking factor $1-2N_{\\textbf{k}}(\\tau)$ \\cite{1998}.} This is due to coherence between particle creation events when intense external fields are applied. Conversely, in the limit in which external agents are weak enough, particle creation events are sufficiently separated in time so that a local approximation of this equation is feasible \\cite{Kluger_1998,SchmidtNonMarkovian}.\n\nThe integro-differential equation \\eqref{eq_QVE2} might seem at first sight difficult to solve. However, the canonical approach discussed in section \\ref{sec_canonical} helped us to indirectly solve it. Indeed, the expression \\eqref{eq_N1} for $N_{\\textbf{k}}$ is a solution to the above equation. The difficulty in solving an integro-differential equation translates into calculating a particular solution $z_{\\textbf{k}}(t)$ of the harmonic oscillator equation with time-dependent frequency \\eqref{eq_harmonic}, which, as we have already discussed, can only be analytically done in specific cases such as constant external fields.\n\nWhen we choose $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ as a zeroth-order adiabatic approximation \\eqref{eq_adiabatic0} with $\\breve Y^{(0)}_{\\textbf{k}}(t)=0$, the real time-dependent functions taking part in the previous equation reduce to\n\\begin{align} \n & \\breve \\mu_{\\textbf{k}}^{(0)}(t)=0, \\qquad \\breve \\nu_{\\textbf{k}}^{(0)}(t)=-\\frac{\\dot{\\omega}_{\\textbf{k}}(t)}{2\\omega_{\\textbf{k}}(t)}, \\notag\\\\ & \\breve \\theta_{\\textbf{k}}^{(0)}(t,\\tau)=-2\\int_{\\tau}^t dt' \\ \\omega_{\\textbf{k}}(t'),\n\\label{ABthetaad}\\end{align}\nleading to the usual integro-differential QVE found in the literature \\cite{Kluger_1998}:\n\\begin{align}\n \\dot{ N}_{\\textbf{k}}^{(0)}(t)&=\\frac{\\dot{\\omega}_{\\textbf{k}}(t)}{2\\omega_{\\textbf{k}}(t)}\\int^t_{t_0} d\\tau \\ \\frac{\\dot{\\omega}_{\\textbf{k}}(\\tau)}{\\omega_{\\textbf{k}}(\\tau)}\\left[1+2N_{\\textbf{k}}^{(0)}(\\tau)\\right]\n \\notag\\\\\n &\\times\\cos\\left[ 2\\int^t_{\\tau} dt' \\ \\omega_{\\textbf{k}}(t') \\right].\n \\label{eq_QVEadiabatic}\\end{align} \nTherefore, \\eqref{eq_QVE2} is the generalized QVE for arbitrary chosen functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$. In particular, this generalization allows us to write the QVE corresponding to the zeroth-order adiabatic approximation, but with the selection $ {Y}^{(0)}_{\\textbf{k}}(t)=\\dot{\\omega}_{\\textbf{k}}(t)\/[2\\omega_{\\textbf{k}}(t)^2]$. 
Indeed, it is easy to verify that this equation is characterized by the functions\n\\begin{align} \n {\\mu}_{\\textbf{k}}^{(0)}(t)&=\\frac{1}{4}\\left[ \\frac{\\ddot{\\omega}_{\\textbf{k}}(t)}{\\omega_{\\textbf{k}}(t)^2}-\\frac{3}{2}\\frac{\\dot{\\omega}_{\\textbf{k}}(t)^2}{\\omega_{\\textbf{k}}(t)^3} \\right], \\qquad {\\nu}^{(0)}_{\\textbf{k}}(t)=0, \n \\notag\\\\\n {\\theta}_{\\textbf{k}}^{(0)}(t)&=-2\\int_{\\tau}^{t} dt' \\ W_{\\textbf{k}}^{(2)}(t').\n\\label{eq_QVEYtilde}\\end{align}\nAs explained in section \\ref{sec_parametrization}, with this last choice one ensures a better adiabatic approximation to the equation of motion while maintaining the same expression for $\\zeta^{(0)}_{\\textbf{k}}(t)$. Moreover, remember that this is the usual definition for the zeroth-order adiabatic mode in the context of quantum field theory in curved spacetime \\cite{birrell_davies_1982}. In addition, while the only non-vanishing contribution to the usual QVE \\eqref{eq_QVEadiabatic}, $\\breve{\\nu}_{\\textbf{k}}^{(0)}(t)$, is of first-adiabatic order, for the generalized QVE characterized by \\eqref{eq_QVEYtilde} the only term which contributes, $\\mu_{\\textbf{k}}^{(0)}(t)$, is of second-adiabatic order. This translates into $\\dot{N}^{(0)}_{\\textbf{k}}(t)$ being of two higher adiabatic orders for the choice of $ {Y}_{\\textbf{k}}^{(0)}(t)$ than for $\\breve Y_{\\textbf{k}}^{(0)}(t)$. Thus, the generalized QVE for the choice of $ {Y}^{(0)}_{\\textbf{k}}(t)$ provides a good balance between precision and simplicity when compared to the usual QVE \\eqref{eq_QVEadiabatic}.\n\nIn section \\ref{sec_unitary} we will particularize the generalized QVE to quantizations that allow for a unitary implementation of the dynamics in the quantum theory, studying their relation in the ultraviolet limit with the ones for adiabatic modes.\n\nFinally, we note that in order to perform explicit calculations it is more convenient to rewrite the integro-differential equation \\eqref{eq_QVE2}, whose numerical resolution is not generally easy \\cite{SchmidtNonMarkovian}, as a real linear system of ordinary differential equations. This was first done in~\\cite{Bloch1999} for the standard QVE. To that end, we define two auxiliary time-dependent functions:\n\\begin{align}\n M_{1{\\textbf{k}}}(t)&=\\int^t_{t_0} d\\tau \\ 2[1+2N_{\\textbf{k}}(\\tau)]\n \\notag\\\\\n &\\times\\left\\{ \\mu_{\\textbf{k}}(\\tau)\\cos[\\theta_{\\textbf{k}}(t,\\tau)]-\\nu_{\\textbf{k}}(\\tau)\\sin[\\theta_{\\textbf{k}}(t,\\tau)] \\right\\}, \\nonumber\\\\\n M_{2{\\textbf{k}}}(t)&=\\int^t_{t_0} d\\tau \\ 2[1+2N_{\\textbf{k}}(\\tau)]\n \\notag\\\\\n &\\times\n \\left\\{ \\mu_{\\textbf{k}}(\\tau)\\sin[\\theta_{\\textbf{k}}(t,\\tau)]+\\nu_{\\textbf{k}}(\\tau)\\cos[\\theta_{\\textbf{k}}(t,\\tau)] \\right\\},\n\\end{align}\nsuch that\n\\begin{equation}\n \\dot{N}_{\\textbf{k}}(t)=\\mu_{\\textbf{k}}(t)M_{1{\\textbf{k}}}(t)+\\nu_{\\textbf{k}}(t)M_{2{\\textbf{k}}}(t).\n\\end{equation}\nDifferentiating these auxiliary functions we obtain the linear differential system: \n\\begin{equation} \\label{eq_difsyst}\n \\frac{d}{dt} \\mqty(1+2N_{\\textbf{k}}\\\\M_{1{\\textbf{k}}}\\\\M_{2{\\textbf{k}}})=2\\mqty(0&\\mu_{\\textbf{k}}&\\nu_{\\textbf{k}}\\\\\\mu_{\\textbf{k}}&0&-s_{\\textbf{k}}\\\\\\nu_{\\textbf{k}}&s_{\\textbf{k}}&0) \\mqty(1+2N_{\\textbf{k}}\\\\M_{1\\textbf{k}}\\\\M_{2\\textbf{k}}).\n\\end{equation}\nThese real differential equations are also equivalent to the complex differential system composed by \\eqref{eq_dotN} and \\eqref{eq_dotM}. 
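\n\nAs an illustration of how \\eqref{eq_difsyst} can be used in practice, the following minimal sketch (ours; the frequency profile is the same arbitrary illustrative choice used in the previous snippets) integrates the linear system for the choice \\eqref{ABthetaad}, for which \\eqref{eq_s} gives $s_{\\textbf{k}}=-\\omega_{\\textbf{k}}$; this is just the usual QVE \\eqref{eq_QVEadiabatic} recast as ordinary differential equations:\n\\begin{verbatim}\n# Integrate (eq_difsyst) for the choice (ABthetaad), with s_k = -omega_k.\n# u = (1 + 2N, M1, M2); vacuum initial data at t0.\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\ndef omega(t):\n    return np.sqrt(1.6 + 0.6*np.tanh(0.5*t))\n\ndef domega(t, h=1e-5):\n    return (omega(t + h) - omega(t - h))\/(2*h)\n\ndef rhs(t, u):\n    mu, nu, s = 0.0, -domega(t)\/(2*omega(t)), -omega(t)\n    M = 2*np.array([[0.0, mu, nu],\n                    [mu, 0.0, -s],\n                    [nu, s, 0.0]])\n    return M @ u\n\nsol = solve_ivp(rhs, (-40.0, 40.0), [1.0, 0.0, 0.0],\n                rtol=1e-10, atol=1e-12)\nN = 0.5*(sol.y[0] - 1.0)\nprint(N[-1])                         # particle number at the final time\nprint(sol.y[0, -1]**2 - sol.y[1, -1]**2 - sol.y[2, -1]**2)  # conserved, = 1\n\\end{verbatim}\nThe second printed quantity, $(1+2N_{\\textbf{k}})^2-M_{1\\textbf{k}}^2-M_{2\\textbf{k}}^2$, is conserved by \\eqref{eq_difsyst} and equal to one, which is nothing but the relation \\eqref{eq_relationbog} written in these variables.\n\n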
We have verified that this system of equations is equivalent to the one derived in \\cite{MottolaDeSitter99}, that carries out an analog analysis focusing on adiabatic modes of arbitrary order.\n\n\\section{Unitary dynamics}\n\\label{sec_unitary}\n\nHamiltonians associated with the type of systems studied in this work are time-dependent. Thus, time translational invariance is broken. One usually wants the associated Fock quantum theory to preserve the symmetries of the classical system. When it no longer possesses Poincar\u00e9 symmetry, as it is the case, this requirement is not restrictive enough to select particular functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ in general and ambiguities emerge in the canonical quantization (see section \\ref{sec_canonical}). In order to reduce these ambiguities, previous studies of both scalar and fermionic fields in homogeneous cosmological settings \\cite{universe7080299,CORTEZ201536,Cortez:2019orm,Cortez:2020rla} as well as in the context of the Schwinger effect \\cite{Garay2020,AlvarezFermions} impose that the canonical time evolution of the fields be unitarily implemented in the quantum theory. Physically, this translates into a well-defined total number of created particles throughout the evolution of fields at all finite times. This physical condition imposes a restriction on the large wave vector $\\textbf{k}$-functions $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$. The main consequence of demanding a unitary implementation of the quantum field dynamics, as proven in references \\cite{universe7080299,CORTEZ201536,Cortez:2019orm,Cortez:2020rla,Garay2020,AlvarezFermions}, is its uniqueness: quantizations compatible with this requirement form a unique unitarily equivalent family. One of the primary objectives in this section is to emphasize and generalize the procedures from references \\cite{CORTEZ201536,universe7080299,Cortez:2019orm,Cortez:2020rla,Garay2020,AlvarezFermions} in order to extract relevant physical properties of the generalized QVE \\eqref{eq_QVE2} when particularized to this unique family of quantizations. More precisely, once $\\zeta_{\\textbf{k}}(t)$ and $\\rho_{\\textbf{k}}(t)$ allowing for a unitary implementation of the dynamics are characterized in terms of their asymptotic ultraviolet behavior, we will see that the usual QVE \\eqref{eq_QVEadiabatic} is in most cases (but not all) the leading order of the generalized QVE~\\eqref{eq_QVE2}. This analysis will motivate a new criterion to further reduce the quantization ambiguities.\n\n\\subsection{Time evolution} \\label{sec_timeevolution}\n\nFirst, we study time evolution as a classical Bogoliubov transformation. In addition, we are interested in comparing formalisms used in works about unitary implementation of the quantum dynamics \\cite{universe7080299,Cortez:2019orm,CORTEZ201536,Cortez:2020rla,Garay2020,AlvarezFermions} and others dealing with the quantum kinetic approach \\cite{Kluger_1998,MottolaDeSitter99,1998,Fedotov_2011} for deducing the usual QVE \\eqref{eq_QVEadiabatic}. Moreover, this will help to simplify proofs in future sections.\n\nLet us consider the canonical time evolution $\\mathcal{T}(t_0,t)$ of the canonically conjugate fields $(\\phi(t,\\textbf{x}),\\pi(t,\\textbf{x}))$ from $t_0$ to time $t$. The pairs of modes $(\\phi_{\\textbf{k}}(t),\\pi_{\\textbf{k}}(t))$ are dynamically decoupled for different $\\textbf{k}$, i.e., $\\phi_{\\textbf{k}}(t)$ satisfy decoupled harmonic oscillator equations \\eqref{eq_harmonic}. 
Thus, we can write\n\\begin{equation} \\label{eq_defTt0t}\n \\mqty(\\phi_{\\textbf{k}}(t)\\\\\\pi_{\\textbf{k}}(t))=\\mathcal{T}_{\\textbf{k}}(t_0,t) \\mqty(\\phi_{\\textbf{k}}(t_0)\\\\\\pi_{\\textbf{k}}(t_0)),\n\\end{equation}\nwhere $\\mathcal{T}_{\\textbf{k}}(t_0,t)$ is the component of $\\mathcal{T}(t_0,t)$ relating the $\\textbf{k}$-modes. As the annihilation and creation variables $a_{\\textbf{k}}$ and $a_{\\textbf{k}}^*$ are time-independent, from \\eqref{eq_matrixz} we deduce that\n\\begin{equation} \\label{eq_Tt0tG}\n \\mathcal{T}_{\\textbf{k}}(t_0,t)=G_{(z_{\\textbf{k}},\\dot{z}_{\\textbf{k}})}(t)G^{-1}_{(z_{\\textbf{k}},\\dot{z}_{\\textbf{k}})}(t_0).\n\\end{equation}\nNote that, although the fundamental matrix $G_{(z_{\\textbf{k}},\\dot{z}_{\\textbf{k}})}(t)$ depends on the particular solution $z_{\\textbf{k}}(t)$ to the equation of motion \\eqref{eq_harmonic} that we had chosen, according to the general knowledge about linear ordinary differential equations, the canonical matrix $\\mathcal{T}_{\\textbf{k}}(t_0,t)$ is independent of~$z_{\\textbf{k}}(t)$.\n\nThe time evolution transformation $\\mathcal{T}(t_0,t)$ has an associated Bogoliubov transformation $\\mathcal{\\tilde{B}}(t_0,t)$ whose $\\textbf{k}$-component $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$ relates the initial conditions $a_{\\textbf{k}}(t_0)$ and $a_{\\textbf{k}}^*(t_0)$ for the creation and annihilation variables to their time-evolved ones, i.e.,\n\\begin{align} \n \\mqty(a_{\\textbf{k}}(t)\\\\a_{\\textbf{k}}^*(t))&=\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t) \\mqty(a_{\\textbf{k}}(t_0)\\\\a_{\\textbf{k}}^*(t_0)), \n \\notag\\\\\n \\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)&=\\mqty(\\tilde{\\alpha}_{\\textbf{k}}(t_0,t)&\\tilde{\\beta}_{\\textbf{k}}(t_0,t)\\\\\\tilde{\\beta}_{\\textbf{k}}^*(t_0,t)&\\tilde{\\alpha}_{\\textbf{k}}^*(t_0,t)).\n\\label{eq_Bogoliubovtime}\\end{align}\nTherefore, at the quantum level, $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$ compares the quantum theories defined by the same choice of $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ at two different times $t_0$ and $t$, characterized by annihilation and creation operators $(\\hat{a}_{\\textbf{k}}(t_0),\\hat{a}_{\\textbf{k}}(t_0)^{\\dagger})$ and $(\\hat{a}_{\\textbf{k}}(t),\\hat{a}_{\\textbf{k}}(t)^{\\dagger})$, respectively. \n\nExplicitly, the relation between the time-evolution transformation $\\mathcal{T}_{\\textbf{k}}(t_0,t)$ defined in~\\eqref{eq_defTt0t} and $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$ reads\n\\begin{equation} \\label{eq_Bt0tG}\n \\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)=G^{-1}_{(\\zeta_{\\textbf{k}},\\rho_{\\textbf{k}})}(t)\\mathcal{T}_{\\textbf{k}}(t_0,t)G_{(\\zeta_{\\textbf{k}},\\rho_{\\textbf{k}})}(t_0),\n\\end{equation}\nas it is easy to deduce using \\eqref{eq_matrix}. Written in this way it is clear how the time-dependent transformations $G_{(\\zeta_{\\textbf{k}},\\rho_{\\textbf{k}})}(t)$ for each $\\textbf{k}$ mediate between the classical time-evolution of the field $\\mathcal{T}(t_0,t)=\\oplus_{\\textbf{k}} \\mathcal{T}_{\\textbf{k}}(t_0,t)$ and the Bogoliubov transformation $\\mathcal{\\tilde{B}}(t_0,t)=\\oplus_{\\textbf{k}} \\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$ that relates Fock quantizations at different times. 
It is the latter that encodes the quantum field dynamics and therefore the transformation that one would like to implement via a unitary operator $\\hat{U}(t_0,t)=\\oplus_{\\textbf{k}}\\hat{U}_{\\textbf{k}}(t_0,t)$ such that\n\\begin{equation} \\label{eq_unitary}\n \\mqty(\\hat{a}_{\\textbf{k}}(t)\\\\\\hat{a}_{\\textbf{k}}(t)^{\\dagger})=\\hat{U}_{\\textbf{k}}(t_0,t) \\mqty(\\hat{a}_{\\textbf{k}}(t_0)\\\\\\hat{a}_{\\textbf{k}}(t_0)^{\\dagger})\\hat{U}_{\\textbf{k}}(t_0,t)^{-1} .\n\\end{equation}\nThis is a non-trivial question, and only appropriate choices of $ (\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ render $\\mathcal{\\tilde{B}}(t_0,t)$ unitarily implementable at the quantum level \\cite{CORTEZ201536}, as we will discuss later.\n\nNote the difference between this time evolution Bogoliubov transformation $\\mathcal{\\tilde{B}}(t_0,t)$ and the previously considered $\\mathcal{B}(t)$, defined by \\eqref{eq_bogoliubov}. $\\mathcal{B}_{\\textbf{k}}(t)$ relates the reference Fock quantization associated with a particular solution $z_{\\textbf{k}}(t)$ of the harmonic oscillator equation~\\eqref{eq_harmonic} (with annihilation and creation variables denoted by $a_{\\textbf{k}}$ and $a_{\\textbf{k}}^*$) to another canonical quantization defined by chosen functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ (associated with $a_{\\textbf{k}}(t)$ and $a_{\\textbf{k}}^*(t)$). While $\\mathcal{B}(t)$ is usually studied in works concerning the quantum kinetic approach and the QVE \\cite{Kluger_1998,MottolaDeSitter99,1998,Fedotov_2011}, the works that study the uniqueness of the quantizations that unitarily implement the dynamics, such as \\cite{universe7080299,CORTEZ201536, Cortez:2019orm,Cortez:2020rla,Garay2020,AlvarezFermions}, deal with the time evolution Bogoliubov transformation $\\mathcal{\\tilde{B}}(t_0,t)$.\n\nFor the present study, it is useful to find a relation between $\\mathcal{\\tilde{B}}(t_0,t)$ and $\\mathcal{B}(t)$. The latter can be computed from \\eqref{eq_alphabetat}, and \nusing \\eqref{eq_Tt0tG} we can rewrite \\eqref{eq_Bt0tG} as\n\\begin{equation} \\label{eq_relBt0tBt}\n \\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)=\\mathcal{B}_{\\textbf{k}}(t)\\mathcal{B}_{\\textbf{k}}(t_0)^{-1}.\n\\end{equation}\nNote that this decomposition explicitly depends on the reference quantization. We can interpret the Bogoliubov transformation $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$ implementing the time evolution of the field as a composition of two canonical transformations. First, $\\mathcal{B}_{\\textbf{k}}(t_0)^{-1}$ transforms the initial conditions $(a_{\\textbf{k}}(t_0),a_{\\textbf{k}}^*(t_0))$ into the time-independent annihilation and creation variables $(a_{\\textbf{k}},a_{\\textbf{k}}^*)$ associated with the particular solution $z_{\\textbf{k}}(t)$. Second, $\\mathcal{B}_{\\textbf{k}}(t)$ takes $(a_{\\textbf{k}},a_{\\textbf{k}}^*)$ to the time-evolved $(a_{\\textbf{k}}(t),a_{\\textbf{k}}^*(t))$. In other words, by means of an auxiliary set of modes $z_{\\textbf{k}}(t)$ we have factorized $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$ in terms of Bogoliubov transformations relating $\\zeta_{\\textbf{k}}(t)$ and $z_{\\textbf{k}}(t)$ at different times.\n\n\nGiven $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$, we can define the magnitude $\\tilde{N}_{\\textbf{k}}(t_0,t)=|\\tilde{\\beta}_{\\textbf{k}}(t_0,t)|^2$. 
It measures the number of created particles at some instant $t$ from the evolution of the vacuum defined at $t_0$, which is the state annihilated by all the operators $\\hat{a}_{\\textbf{k}}(t_0)$. As discussed previously, we will naturally choose $z_{\\textbf{k}}(t_0)=\\zeta_{\\textbf{k}}(t_0)$, and hence $\\dot{z}_{\\textbf{k}}(t_0)=\\rho_{\\textbf{k}}(t_0)$; equivalently, $\\hat{a}_{\\textbf{k}}(t_0)=\\hat{a}_{\\textbf{k}}$. Then, that vacuum state is just the reference vacuum $|0\\rangle$ of previous sections, and we would simply obtain $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)=\\mathcal{B}_{\\textbf{k}}(t)$. Hence, both notions of the number of created particles coincide: $\\tilde{N}_{\\textbf{k}}(t_0,t)=N_{\\textbf{k}}(t)=|\\beta_{\\textbf{k}}(t)|^2$,\nwith $N_{\\textbf{k}}(t_0)=0$. This will simplify the study of the unitary implementation of the dynamics in the next section.\n\n\n\\subsection{Unitary implementation of the dynamics}\n\nIn this section we will characterize those quantizations that unitarily implement the quantum field dynamics. For concreteness, we will restrict our arguments to the scalar Schwinger effect, already studied in \\cite{Garay2020}. Here we will review the results that we need for our analysis, also adapting them to our present formalism. Other references \\cite{Cortez:2019orm,Cortez:2020rla,CORTEZ201536,universe7080299} have already studied this question in cosmological settings and in the fermionic Schwinger effect~\\cite{AlvarezFermions}. This section depends on the particularities of the system, as we will use the asymptotic dependence of the frequencies on their label (the wave number $\\textbf{k}$ in the Schwinger effect, for example). Instead of working with $Y_{\\textbf{k}}(t)$ fixed to zero, as is often done, we will also study the restrictions imposed on it. This is interesting because $N_{\\textbf{k}}(t)$ depends on it (see \\eqref{eq_N1} and~\\eqref{eq_QVE2}). \n\nA theorem by Shale \\cite{Shale:1962,RUIJSENAARS1978105} ensures that $\\mathcal{\\tilde{B}}(t_0,t)$ is unitarily implementable if and only if the total number of created particles in the evolution of the field,\n\\begin{equation} \\label{eq_Shale}\n \\int d^3\\textbf{k} \\tilde{N}_{\\textbf{k}}(t_0,t)=\\int_0^{2\\pi}d\\phi \\int_0^{\\pi}d\\theta \\sin{\\theta} \\int_0^{\\infty}dk k^2\\tilde{N}_{\\textbf{k}}(t_0,t),\n\\end{equation}\nis finite for each fixed finite time $t$.\\footnote{For other systems the integral might be replaced by a sum over the discrete indices enumerating the frequencies, with their corresponding degeneracies; e.g., a sum in $n$ (see \\eqref{eq_omegan}) for closed FLRW spacetimes with spherical spatial symmetry.} Note that the notion of unitary implementation of the Bogoliubov transformation $\\mathcal{\\tilde{B}}(t_0,t)$ involves all its $\\textbf{k}$-components $\\mathcal{\\tilde{B}}_{\\textbf{k}}(t_0,t)$. 
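To see what finiteness of \\eqref{eq_Shale} requires of each individual mode, it is useful to spell out the standard power-counting step (an illustrative estimate, under the simplifying assumption of an exact power-law falloff): if, for a fixed direction and a fixed finite time, the number of created particles behaves in the ultraviolet as $\\tilde{N}_{\\textbf{k}}(t_0,t)\\sim C\\,k^{-2\\lambda}$ for some constant $C>0$, then the radial part of the integral behaves as\n\\begin{equation*}\n \\int^{\\infty}dk\\,k^{2}\\,\\tilde{N}_{\\textbf{k}}(t_0,t)\\sim C\\int^{\\infty}dk\\,k^{2-2\\lambda},\n\\end{equation*}\nwhich converges at large $k$ if and only if $2-2\\lambda<-1$, i.e., $\\lambda>3\/2$. This is precisely the ultraviolet decay that will be required of $\\beta_{\\textbf{k}}(t)$ below.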
Since $\\tilde{N}_{\\textbf{k}}(t_0,t)=N_{\\textbf{k}}(t)=|\\beta_{\\textbf{k}}(t)|^2$, the unitary implementation of $\\mathcal{\\tilde{B}}(t_0,t)$ is possible if and only if in the ultraviolet limit $|\\textbf{k}|=k\\rightarrow \\infty$ we have\n\\begin{equation} \\label{eq_unitarycondition}\n \\beta_{\\textbf{k}}(t)=\\order{k^{-\\lambda}},\\qquad \\lambda>3\/2,\n\\end{equation}\nat all finite times $t$ and for almost all fixed directions $(\\theta,\\phi)$.\\footnote{Note that we only consider ultraviolet divergences because we deal with massive scalar fields and, consequently, there are no infrared divergences.} Note that because of the anisotropy of the Schwinger effect, the ultraviolet behavior of $\\beta_{\\textbf{k}}(t)$ depends on the direction in which we calculate the limit of large $k$. Indeed, from \\eqref{eq_omegaSchwinger} we see that the time derivatives of the frequencies carry a leading order contribution $\\dot{\\omega}_{\\textbf{k}}(t)=\\order{k^0}$ for directions with constant $\\theta\\in (0,\\pi)$, while in the direction parallel to the vector potential ($\\theta=0,\\pi$), $\\dot{\\omega}_{\\textbf{k}}(t)=\\order{k^{-1}}$. However, this axis has zero measure in $\\mathbb{R}^3$ and does not contribute to the integral in~\\eqref{eq_Shale}.\n\nRemember that $\\beta_{\\textbf{k}}(t)$ depends both on the particular reference solution $z_{\\textbf{k}}(t)$ of the harmonic oscillator equation \\eqref{eq_harmonic} and the functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$. As we said before, in the most realistic case in which the electric field is switched off in the asymptotic past, there is no ambiguity in the selection of $z_{\\textbf{k}}(t)$, since at that initial time we are forced to choose positive-frequency plane waves for all $\\textbf{k}$. Furthermore, assuming general mild conditions on the time dependence of the frequencies\\footnote{In the scalar Schwinger effect, a sufficient condition of this kind is that $\\dot{\\omega}_{\\textbf{k}}(t)\/\\omega_{\\textbf{k}}(t)$ both remains finite and changes its sign a finite number of times in each closed interval of time.}, reference \\cite{Garay2020} proves that this particular solution behaves in the ultraviolet as\n\\begin{equation} \\label{eq_zUV}\n |z_{\\textbf{k}}(t)|^2=\\order{k^{-1}}, \\quad \\dot{z}_{\\textbf{k}}(t)=i\\left[-\\omega_{\\textbf{k}}(t)+\\Lambda_{\\textbf{k}}(t)\\right]z_{\\textbf{k}}(t),\n\\end{equation}\nwhere $\\Lambda_{\\textbf{k}}(t)$ converges to zero at least as fast as $\\order{k^{-1}}$. Once $z_{\\textbf{k}}(t)$ is fixed, let us characterize the functions $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ which satisfy the unitary dynamics condition \\eqref{eq_unitarycondition}. 
Using \\eqref{eq_alphabetat} and \\eqref{eq_zUV}, we can write $\\beta_{\\textbf{k}}(t)$ as\n\\begin{align} \n \\beta_{\\textbf{k}}(t)&=\\bigg\\{ \\sqrt{\\frac{W_{\\textbf{k}}(t)}{2}}[1+iY_{\\textbf{k}}(t)]\n \\notag\\\\\n &+\\frac{1}{\\sqrt{2W_{\\textbf{k}}(t)}}[-\\omega_{\\textbf{k}}(t)+\\Lambda^*_{\\textbf{k}}(t)] \\bigg\\} e^{i\\varphi_{\\textbf{k}}(t)}z^*_{\\textbf{k}}(t).\n\\label{eq_betaLO}\\end{align}\nWe see that both its real and its imaginary parts are $\\order{k^{-\\lambda}}$ if and only if $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ behave in the ultraviolet as\n\\begin{equation} \\label{eq_unitarysigma}\n W_{\\textbf{k}}(t)=\\omega_{\\textbf{k}}(t)\\big[1+\\order{k^{-\\gamma}}\\big], \\qquad Y_{\\textbf{k}}(t)=\\order{k^{-\\eta}}, \n\\end{equation}\nwith $\\gamma,\\eta>3\/2$,\nfor each finite time $t$ and for almost all $\\textbf{k}$. These two conditions characterize the choices of $(\\zeta_{\\textbf{k}}(t),\\rho_{\\textbf{k}}(t))$ that allow for a unitary implementation of the dynamics. \n\nAll adiabatic approximations of arbitrary order are in this family, as they all behave in the ultraviolet as $W^{(n)}_{\\textbf{k}}(t)=\\omega_{\\textbf{k}}(t)\\big[1+\\order{k^{-3}}\\big]$, $Y^{(n)}_{\\textbf{k}}(t)=\\order{k^{-2}}$, and $\\breve{Y}^{(n)}_{\\textbf{k}}(t)=\\order{k^{-2}}$. However, without any additional criteria we cannot distinguish them from the rest of the possible choices that allow for a unitary implementation of the dynamics. On the other hand, note that as long as the external agent is time-dependent, the usual Minkowski positive-frequency plane wave modes \ndo not allow for a unitary implementation of the dynamics since, for this quantization, $\\gamma=1$ (see e.g. references~\\cite{Nonadiabatic_Kim,Huet_2014} for the corresponding QVE). Thus, using Minkowski modes in the Schwinger effect would lead to finite values of $N_{\\textbf{k}}(t)$ when the electric field is turned on, but the sum of all of them would diverge~\\cite{Ruijsencharged}. \n\nAs an aside, in cosmological isotropic settings such as FLRW spacetimes, the behavior of the time derivative of the frequencies does not depend on the angle $\\theta$. For instance, frequencies would be of the form $\\omega_{\\textbf{k}}(t)~=~\\sqrt{k^2+m(t)^2}$, where $m(t)$ is independent of $\\textbf{k}$, and hence $\\dot{\\omega}_{\\textbf{k}}(t)=\\order{k^{-1}}$ in all directions. An analysis analogous to the one developed here leads to the same behavior \\eqref{eq_unitarysigma} of the functions.\n\nIn summary, in order to completely fix the canonical quantization scheme, we started with a freedom of three real time-dependent functions $(W_{\\textbf{k}}(t),Y_{\\textbf{k}}(t),\\varphi_{\\textbf{k}}(t))$ for each $\\textbf{k}$. We proved in sections \\ref{sec_N(t)} and \\ref{sec_GQVE} that the number of created particles $N_{\\textbf{k}}(t)$ does not depend on $\\varphi_{\\textbf{k}}(t)$. Now, the behavior of the functions $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ at large $k=|\\textbf{k}|$ has been restricted. Moreover, reference \\cite{Garay2020}\nshows that the possible selections compatible with these restrictions form a unique unitarily equivalent family of Fock quantizations in which the total number of created particles (sum of all the contributions by each~$\\textbf{k}$) remains finite at all finite times. 
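As a purely illustrative complement to this discussion, the following minimal numerical sketch treats a single mode of the scalar Schwinger effect. It assumes a Sauter-type pulse $A(t)=A_0\\tanh(t\/\\tau)$ with hypothetical parameter values, the standard form $\\omega_{\\textbf{k}}(t)^2=m^2+k_{\\perp}^2+(k_{\\parallel}-qA(t))^2$ for the frequencies, and one common normalization of the Bogoliubov coefficients for the lowest adiabatic choice $W_{\\textbf{k}}=\\omega_{\\textbf{k}}$, $Y_{\\textbf{k}}=0$, $\\varphi_{\\textbf{k}}=0$ (which may differ from \\eqref{eq_alphabetat} by phase conventions); it is not the computation of the references above. It checks the normalization $|\\alpha_{\\textbf{k}}(t)|^2-|\\beta_{\\textbf{k}}(t)|^2=1$ along the evolution and evaluates the late-time occupation number $N_{\\textbf{k}}=|\\beta_{\\textbf{k}}|^2$ for two different choices of $W_{\\textbf{k}}$.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_ivp\n\n# Toy single-mode illustration; all parameter values are hypothetical.\nm, q, A0, tau = 1.0, 1.0, 2.0, 1.0\nkpar, kperp = 1.5, 0.7\n\ndef A(t):\n    return A0 * np.tanh(t / tau)\n\ndef omega(t):\n    return np.sqrt(m**2 + kperp**2 + (kpar - q * A(t))**2)\n\n# Mode equation z'' + omega(t)^2 z = 0 as a real first-order system.\ndef rhs(t, y):\n    zr, zi, vr, vi = y\n    w2 = omega(t)**2\n    return [vr, vi, -w2 * zr, -w2 * zi]\n\nt0, t1 = -12.0, 12.0\nw0 = omega(t0)\nz0 = 1.0 / np.sqrt(2.0 * w0)           # positive frequency in the asymptotic past\ny0 = [z0, 0.0, 0.0, -w0 * z0]          # z(t0) real, z'(t0) = -i omega(t0) z(t0)\nts = np.linspace(t0, t1, 2001)\nsol = solve_ivp(rhs, (t0, t1), y0, t_eval=ts, rtol=1e-10, atol=1e-12)\nz = sol.y[0] + 1j * sol.y[1]\nzd = sol.y[2] + 1j * sol.y[3]\n\n# Bogoliubov coefficients for the instantaneous choice W = omega, Y = 0.\nw = omega(ts)\nalpha = np.sqrt(w / 2.0) * z + 1j * zd / np.sqrt(2.0 * w)\nbeta = np.sqrt(w / 2.0) * np.conj(z) + 1j * np.conj(zd) / np.sqrt(2.0 * w)\nprint('normalization error:', np.max(np.abs(np.abs(alpha)**2 - np.abs(beta)**2 - 1)))\nprint('late-time N_k, adiabatic choice:', np.abs(beta[-1])**2)\n\n# A different choice of W for the same mode and the same classical solution\n# (no claim about admissibility is intended; this is only a comparison).\nW2 = 1.1 * w\nbeta2 = np.sqrt(W2 / 2.0) * np.conj(z) + 1j * np.conj(zd) / np.sqrt(2.0 * W2)\nprint('late-time N_k, detuned choice:  ', np.abs(beta2[-1])**2)\n\\end{verbatim}\nThe first printed number is of the order of the integration error, while the two occupation numbers differ, already anticipating the caveat that follows.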
Nevertheless, it is important to remember that each particular selection in the family provides a different total number of particles.\n\n\\subsection{Generalized QVE and unitary quantum dynamics}\n\nIn the following we are going to study the asymptotic ultraviolet behavior of the generalized QVE \\eqref{eq_QVE2} for canonical quantizations unitarily implementing the dynamics. In particular, this study will provide us with an additional physical criterion, stronger than the unitary implementation of the dynamics, to reduce the ambiguities in the canonical quantization.\n\nIn the ultraviolet, our system should resemble free Minkowski spacetime regardless of the curvature or the external fields at work. This suggests a kind of generic ultraviolet behavior for the generalized QVE, independent of the specifics of the canonical quantization, at leading order. Such details should certainly play a role in subleading terms. \n\nActually, we are going to argue that quantizations that unitarily implement the dynamics, i.e., with the ultraviolet behavior \\eqref{eq_unitarysigma}, satisfy this criterion provided that $\\gamma,\\eta>2$. Otherwise the leading order of the generalized QVE depends on the specific quantization that is being carried out. It turns out that this generic quantization-independent leading order is precisely that of the QVE, i.e., the generalized QVE for zeroth-order adiabatic modes.\n\nIndeed, let us consider a canonical quantization defined by functions $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ which behave in the ultraviolet according to the unitary dynamics requirement \\eqref{eq_unitarysigma} but with the stronger condition\n\\begin{equation} \\label{eq_newcriterion}\n W_{\\textbf{k}}(t)=\\omega_{\\textbf{k}}(t)\\big[1+\\order{k^{-\\gamma}}\\big], \\qquad Y_{\\textbf{k}}(t)=\\order{k^{-\\eta}},\n\\end{equation}\nwith $\\gamma,\\eta>2$.\nThis faster ultraviolet decay implies that the leading order of its generalized QVE \\eqref{eq_QVE2} coincides with the usual QVE \\eqref{eq_QVEadiabatic} as can be seen by straightforward calculation. Thus, for generic functions $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ in this subfamily of quantizations allowing for a unitary implementation of the dynamics, the ultraviolet behavior of $\\dot{N}_{\\textbf{k}}(t)$ at leading order is independent of the particular time-dependence of those functions other than that imposed by the external electric field through $\\omega_\\textbf{k}(t)$.\n\n \nOn the other hand, when generic canonical quantizations allow for a unitary implementation of the dynamics but do not satisfy the previous stronger condition \\eqref{eq_newcriterion}, their generalized QVE provides particle creation rates $\\dot{N}_{\\textbf{k}}(t)$ whose ultraviolet behaviors at leading order strongly depend on functions $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ themselves, even with a slower ultraviolet decay. 
More precisely, under these hypotheses the leading orders in the expansions in $k=|\\textbf{k}|$ of the functions \\eqref{eq_AB} defining the generalized QVE are\n\\begin{align}\n \\mu_{\\textbf{k}}|_{\\text{L.O.}}=&2k\\left.\\left(1-\\sqrt{\\frac{\\omega_{\\textbf{k}}}{W_{\\textbf{k}}}}\\right)\\right|_{\\text{L.O.}}=\\order{k^{1-\\gamma}}, \\nonumber\\\\\n \\nu_{\\textbf{k}}|_{\\text{L.O.}}=&-\\frac{1}{2}k^{-1}q\\dot{A}\\cos{\\theta}+kY_{\\textbf{k}}|_{\\text{L.O.}}\n \\notag\\\\ =&\\order{k^{-1}}+\\order{k^{1-\\eta}}.\n\\end{align}\nWhen $\\gamma<2$ or $\\eta<2$, one of them converges to zero more slowly than $\\breve{\\nu}_{\\textbf{k}}^{(0)}(t)=\\order{k^{-1}}$ in the case of the usual QVE for the lowest adiabatic approximation (see~\\eqref{ABthetaad}). \nThe limiting cases $\\gamma=2$ and $\\eta\\geq 2$ and vice versa lead\nto the same ultraviolet decay as the generic one (for $\\gamma,\\eta~>~2$) but in a state-dependent fashion.\n\nNote that this analysis is valid as long as the leading order of the generalized QVE is of the same adiabatic order as the standard QVE. There are a few exceptions to this generic case, such as canonical quantizations based on higher-order adiabatic approximations, whose generalized QVEs are of higher order. The $n$th-order adiabatic approximation cancels the lower-order contributions to its generalized QVE, and in particular, that of the usual QVE. But this makes the leading order that of the $n$th-order adiabatic approximation, which decays faster in the ultraviolet than the zeroth-order one, significantly diminishing the rate of particle creation. More explicitly, the leading orders of the functions $\\mu^{(n)}_{\\textbf{k}}$ and $\\nu^{(n)}_{\\textbf{k}}$ for the $n$th-order adiabatic modes (with $n\\geq 2$) are\n\\begin{align}\n \\mu^{(n)}_{\\textbf{k}}|_{\\text{L.O.}}&=W_{\\textbf{k}}^{(n)}-W_{\\textbf{k}}^{(n+2)}=\\mathcal O (k^{-(n+2)}), \\nonumber\\\\\n \\nu^{(n)}_{\\textbf{k}}|_{\\text{L.O.}}&=k\\big(Y_{\\textbf{k}}^{(n)}-Y_{\\textbf{k}}^{(n+2)}\\big)=\\mathcal O (k^{-(n+3)}).\n\\end{align}\n\nFor all these reasons, we consider that the physically reasonable choices for generic $W_{\\textbf{k}}(t)$ and $Y_{\\textbf{k}}(t)$ should satisfy \\eqref{eq_newcriterion}. This not only allows for a unitary implementation of the dynamics, but also provides a generalized QVE whose leading order coincides with that corresponding to the lowest adiabatic order. Other selections that do not satisfy this criterion but cancel the contribution of the usual QVE (e.g., higher-order adiabatic approximations, which have $Y^{(n)}_{\\textbf{k}}=\\order{k^{-2}}$) lead to particle creation rates which, as we have discussed, converge to zero even faster than all the others, and are therefore good candidates as well. \n\nOne could also consider more restrictive criteria for reducing the ambiguity in the quantization, based on the generalized QVE for higher adiabatic orders. A motivation for these criteria may come from the fact that, in cosmological settings and within the strict family of adiabatic vacua, it is necessary to consider higher adiabatic orders to obtain a well-defined renormalized stress-energy tensor \\cite{PhysRevD.9.341}. \n\n\\section{Conclusions} \n\\label{sec_conclusions}\n\nIn standard quantum field theory in Minkowski spacetime, Poincar\u00e9 symmetry fixes the vacuum. 
In curved spacetimes, or if an external agent is coupled to a matter field in flat spacetime, the classical system is, in general, not invariant under such a restrictive symmetry group. In particular, when time translational invariance is lost, the vacuum changes throughout time and particle creation effects can occur. When imposing that the associated quantum theory preserves the classical symmetries, we find that there are still ambiguities in the choice of vacuum defining the quantum theory.\n\nIn this work we have written a generalized version of the usual quantum Vlasov equation~\\cite{Kluger_1998}, which is an integro-differential equation for the number of created particles throughout time for the Schwinger effect, extending it to arbitrary canonical quantizations. We have also provided its formal solution, thus generalizing the result in \\cite{Fedotov_2011}. Moreover, we have particularized it for arbitrary $n$th-order adiabatic modes, calculating its leading order in an adiabatic expansion.\n\nAlthough our analysis has been carried out for the scalar Schwinger effect, in which an external homogeneous time-dependent electric field is applied in flat spacetime, we have also argued how it can be straightforwardly applied to quantum matter fields propagating in FLRW spacetimes.\n\nNext, we have resorted to the unitary implementation of the quantum field dynamics as a physical criterion to restrict the set of acceptable quantizations. This criterion, mainly pushed forward in the context of FLRW cosmological spacetimes \\cite{Cortez:2019orm,Cortez:2020rla,universe7080299,CORTEZ201536}, reduces the ambiguities in the canonical quantization to a unique family of unitarily equivalent quantizations. This also happens to be true in the scalar and fermionic Schwinger effects with a homogeneous electric field \\cite{Garay2020,AlvarezFermions}. In practice, this requirement restricts the ultraviolet behavior of the Fourier modes used in the quantization so that the total number of created particles is well-defined at all finite times. \n\nFocusing on the quantizations that allow for a unitary implementation of the dynamics, in the present work we have proved that there is a wide family of them whose generalized QVE behaves, at leading order in the ultraviolet asymptotic expansion, exactly as the standard QVE for zeroth-order adiabatic modes. Namely, the time dependence of such leading order is only due to the characteristics of the external agent (electric field) responsible for the creation of particles, and not to the specific modes used to quantize our field.\nOn the other hand, we have also proved that there is another family of quantizations that, while also allowing for a unitary implementation of the dynamics, yield a generalized QVE whose leading order in the ultraviolet limit depends explicitly on the quantization (via a time-dependent term that is not simply determined by the time dependence of the external agent). In view of this last result we have proposed a new criterion which, together with the unitary implementation of the dynamics, further restricts the quantizations that we consider acceptable: those for which the leading order of the generalized QVE is just that of the zeroth-order adiabatic vacuum (except when this leading order vanishes, e.g. for the higher-order adiabatic vacua). 
This criterion guarantees that the particle creation rate is independent of the details of the quantization at leading order in the ultraviolet, and which decays at least as fast as for the lowest adiabatic approximation.\n\n\n\\acknowledgments\n\nThis work has been supported by Project. No. MICINN PID2020-118159GB-C44 from Spain.\nAAD acknowledges financial support from Universidad Complutense de Madrid through the predoctoral Grant No. CT82-~20.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe game of Cops and Robbers is a perfect-information two-player pursuit-evasion game played on a graph. One player controls a group of cops and the other player controls a single robber. To begin the game, the cops and robber each choose vertices to occupy, with the cops choosing first. Play then alternates between the cops and the robber, with the cops moving first. On a turn a player may move to an adjacent vertex or stay still. If any cop and the robber ever occupy the same vertex, the robber is caught and the cops win. On a graph \\graph{G}, the fewest number of cops required to catch the robber is called the \\dword{cop number} and denoted\n$c(\\graph{G})$. If $c(\\graph{G})=k$ we say that \\graph{G} is $k$-cop-win.\n The game was introduced by Nowakowski and Winkler \\cite{NW83}, and Quilliot \\cite{Qui78}. A nice introduction to the game and its many variants is found in \nthe book by Bonato and Nowakowski \\cite{BN11}. \n\nIn this paper we discuss a variation of Cops and Robbers introduced by Fitzpatrick, Howell, Messinger, and Pike \\cite{FHMP16} called deterministic Zombies and Survivors;\na probabilistic version was introduced by Bonato, Mitsche, Perez-Gimenez, and Pralat in \\cite{BMGP_ProbZombies_2016}.\nIn deterministic Zombies and Survivors, the cops are replaced by zombies and the robber is replaced by a survivor.\nThe rules of this variant are the same as Cops and Robbers, except for the following significant restriction on the zombies: each zombie must move on every turn, and furthermore, must move closer to the survivor on every turn. If there are multiple moves available to the zombies, they can coordinate and choose their moves intelligently. The number of zombies required to capture the survivor on a graph \\graph{G} is known as the \\dword{zombie number} and denoted $z(\\graph{G})$. \n\n\n\nWe focus on a comparison between the zombie number and the cop number.\nSince any zombie strategy can be played by the same number of cops, for all graphs \\graph{G}, \n$z(\\graph{G}) \\ge c(\\graph{G})$. Fitzpatrick, Howell, Messinger, and Pike \\cite{FHMP16} \ngave examples of graphs where $z(\\graph{G}) > c(\\graph{G})$ and asked about the relationship between these two parameters. Using hypercube graphs, they noted that the gap $z(\\graph{G})-c(\\graph{G})$ can be arbitrarily large. They also observed that the graph $\\graph{G}_5$ (See Figure~\\ref{G5}) is 1-cop-win but requires 2 zombies to win, \nand so $z(\\graph{G})\/c(\\graph{G})$ can be at least 2. In Question~19, they asked how large the ratio $z(\\graph{G})\/c(\\graph{G})$ can be. We show that any ratio of 1 or larger is possible, as in Section~\\ref{finalg}, we describe a family of graphs $\\graph{Z}_{k,m}$ such that for any integers $m \\ge k \\ge 1$, $z(\\graph{Z}_{k,m}) =m$ and $c(\\graph{Z}_{k,m}) = k$. 
\nThese graphs are constructed by combining a so-called base graph that has cop and zombie number $k$ with ``arms'' on which 1 cop can catch a robber, but many zombies may be required to catch a survivor. Thus $k$ cops can win by either catching the robber on the base graph, or forcing the robber onto one of the arms, and then catching the robber on the arm. However while $k$ zombies can force the survivor onto one arm of the graph, it requires more zombies to then catch the survivor. We describe the construction of the arm graphs in Section~\\ref{constr}, and then describe the base graph and final construction in Section~\\ref{zkmsec}.\n\n\n\n\nIn Section~\\ref{cube} we prove that Conjecture~18 from \\cite{FHMP16} is true, that is, the zombie number for the $n$-dimensional hypercube is $\\lceil 2n\/3\\rceil$.\nThis conjecture is also proved by Fitzpatrick in a different manner in\n\\cite{FitzpatrickArxiv2018} as a corollary of a more general result.\nNote that the cop number of the $n$-dimensional hypercube is \n$\\lceil (n+1)\/2 \\rceil$, as shown by Maamoun and Meyniel in \\cite{MM87}.\n\n\n\n\n\\begin{figure}\n\\begin{center}\n \\begin{tikzpicture}[every node\/.style={circle, inner sep=3pt, fill=black}, scale=.75]\n \\node (11) at (-1,-1) {};\n \\node (21) at (-2,-2) {};\n \\node (31) at (-3,-3) {};\n \\node (12) at (1,-1) {};\n \\node (22) at (2,-2) {};\n \\node (32) at (3,-3) {};\n \\node (13) at (1.5,.5) {};\n \\node (23) at (2.75,1) {};\n \\node (33) at (4,1.5) {};\n \\node (14) at (0,1.5) {};\n \\node (24) at (0,3) {};\n \\node (34) at (0,4.5) {};\n \\node (10) at (-1.5,.5) {};\n \\node (20) at (-2.75,1) {};\n \\node (30) at (-4,1.5) {};\n \\foreach \\from\/\\to in {11\/21, 21\/31, 12\/22, 22\/32, 13\/23, 23\/33, 14\/24, 24\/34, 10\/20, 20\/30}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/12, 11\/13, 11\/14, 11\/10, 12\/13, 12\/14, 12\/10, 13\/14, 13\/10, 14\/10}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/12, 12\/13, 13\/14, 14\/10, 10\/11, 20\/21, 21\/22, 22\/23, 23\/24, 24\/20, 31\/32, 32\/33, 33\/34, 34\/30, 30\/31}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/22, 12\/23, 13\/24, 14\/20, 10\/21, 20\/31, 21\/32, 22\/33, 23\/34, 24\/30, 31\/22, 32\/23, 33\/24, 34\/20, 30\/21, 21\/12, 22\/13, 23\/14, 24\/10, 20\/11}\n \\draw (\\from) -- (\\to);\n\n \\end{tikzpicture}\n \\hfill\n \\begin{tikzpicture}[every node\/.style={circle, fill=black!20}]\n \\node (11) at (-1,-1) {\\armin{4}{}};\n \\node (21) at (-2.1,-2.1) {\\armout{4}{}};\n \\node (12) at (1,-1) {\\armin{3}{}};\n \\node (22) at (2.1,-2.1) {\\armout{3}{}};\n \\node (13) at (1.5,.6) {\\armin{2}{}};\n \\node (23) at (3,1.2) {\\armout{2}{}};\n \\node (14) at (0,1.5) {\\armin{1}{}};\n \\node (24) at (0,3) {\\armout{1}{}};\n \\node (10) at (-1.5,.6) {\\armin{0}{}};\n \\node (20) at (-3,1.2) {\\armout{0}{}};\n \\node (c) at (0,0) {$c$};\n \\foreach \\from\/\\to in {11\/21, 12\/22, 13\/23, 14\/24, 10\/20}\n \\draw (\\from) -- (\\to);\n\\foreach \\from\/\\to in {11\/c, 12\/c, 13\/c, 14\/c, 10\/c}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/12, 12\/13, 13\/14, 14\/10, 10\/11, 20\/21, 21\/22, 22\/23, 23\/24, 24\/20}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/22, 12\/23, 13\/24, 14\/20, 10\/21, 21\/12, 22\/13, 23\/14, 24\/10, 20\/11}\n \\draw (\\from) -- (\\to);\n \\end{tikzpicture}\n\\end{center}\n\\caption{The graphs $\\graph{G}_5$ (left) and \\graph{T} (right)}\n\\label{G5}\n\\end{figure}\n\n\n\n\n\n\\section{The key building block: The arm 
graph}\\label{constr}\n\nIn this section we define what we call an \\dword{arm graph}, constructed using copies of a graph we call \\graph{T}. Then we prove two key properties about the arm graphs in Lemmas~\\ref{thm_one_arm} and \\ref{1armUB}. In both lemmas we describe the same kind of graph: some graph with an arm graph attached to it. The first lemma will describe a scenario where the survivor has a winning strategy, while the second lemma will describe a scenario where the zombies have a winning strategy.\n\n\n\n\n\nFor a graph \\graph{G} we denote its vertex set by $V(\\graph{G})$, its edge set by $E(\\graph{G})$,\nand an edge between $x$ and $y$ is denoted \\edge{x}{y}.\nThe graph \\graph{T} is shown on the right in Figure~\\ref{G5}.\nSince a typical arm graph will contain multiple copies of $T$, we formally define the graph $\\graph{T}_x$, isomorphic to \\graph{T}, as follows, where the parameter $x$ is an\ninteger. \n\\begin{align*} \nV(\\graph{T}_x)&= \\{ \\ \\armin{i}{x} \\ \\setbar \\ 0 \\le i \\le 4 \\ \\} \\ \\cup \\\n \\{ \\ \\armout{i}{x} \\ \\setbar \\ 0 \\le i \\le 4 \\ \\}\n\\ \\cup \\ \\{c^x\\}\\\\\nE(\\graph{T}_x)&= \\{ \\ \\edge{ \\armvar{i}{x}{a} }{ \\armvar{j}{x}{b} } \\ \\setbar \\ |i-j| = 1 \\hbox{ or } 4\\ \\} \\\n\\cup \\ \\{\\ \\edge{ \\armin{i}{x} }{ \\armout{i}{x} }, \\ \\edge{\\armin{i}{x}}{c^x} \\ \\setbar \\ 0 \\le i \\le 4 \\ \\} \n\\end{align*}\nNote that in the above definition, the variables $i$ and $j$ range over the integers from 0 to 4, inclusive, while the variables $a$ and $b$\nrange over the formal symbols \\inside \\ and \\outside, whose names are meant to remind us, respectively, of ``inside'' vertices and ``outside'' vertices.\nThe graph \\graph{T} is similar to other graphs appearing in the literature, such as the graphs $\\graph{G}'$ and $\\graph{G}_5$ from \\cite{FHMP16} and the graph in the proof of Proposition 6 from \\cite{BoyOBoy2011}. \n\n\n\nWe now define the \\dword{arm graphs} $A_n$ for any integer $n \\ge 0$. See Figure~\\ref{ARM3} for a drawing of $\\arm_3$. 
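Before giving the formal definition, we record a short computational sanity check of the graph $\\graph{T}_x$ defined above (this sketch is ours and is not part of the construction; the vertex names \\texttt{('in',i)}, \\texttt{('out',i)} and \\texttt{'c'} stand for $\\armin{i}{x}$, $\\armout{i}{x}$ and $c^x$). It builds the edge set directly from the definition and confirms the parameters that can be read off the right side of Figure~\\ref{G5}: there are $11$ vertices and $30$ edges, each inside vertex has degree $6$, and each outside vertex, as well as $c^x$, has degree $5$.\n\\begin{verbatim}\nfrom itertools import product\n\ndef build_T():\n    # One copy of T (that is, T_x for a fixed x), as an adjacency map.\n    V = [('in', i) for i in range(5)] + [('out', i) for i in range(5)] + ['c']\n    E = set()\n    # Edges between ring vertices indexed i and j whenever |i-j| = 1 or 4.\n    for i, j in product(range(5), repeat=2):\n        if abs(i - j) in (1, 4):\n            for a, b in product(('in', 'out'), repeat=2):\n                E.add(frozenset({(a, i), (b, j)}))\n    # Edges joining in_i to out_i and in_i to c.\n    for i in range(5):\n        E.add(frozenset({('in', i), ('out', i)}))\n        E.add(frozenset({('in', i), 'c'}))\n    adj = {v: set() for v in V}\n    for e in E:\n        u, w = tuple(e)\n        adj[u].add(w)\n        adj[w].add(u)\n    return adj\n\nadj = build_T()\nprint(len(adj), 'vertices,', sum(len(s) for s in adj.values()) // 2, 'edges')\nprint('degree of c:', len(adj['c']))\nprint('inside degrees:', sorted(len(adj[('in', i)]) for i in range(5)))\nprint('outside degrees:', sorted(len(adj[('out', i)]) for i in range(5)))\n\\end{verbatim}\nIn particular the outside vertices induce a $5$-cycle, which is the cycle the survivor repeatedly walks around in the proof of Lemma~\\ref{thm_one_arm} below.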
\n\\begin{defn}\nFor $n \\ge 0$, define $\\arm_n$ to\nconsist of the vertices \\armout{2}{-1} and \\armout{0}{n}, along with graphs $\\graph{T}_0, \\graph{T}_1, \\ldots, \\graph{T}_{n-1}$, with the following edges added: $\\{\\edge{ \\armout{2}{x} }{ \\armout{0}{x+1} } \\ \\setbar \\ -1 \\le x \\le n-1\\}$\n\\end{defn}\nNote that $\\arm_0$ is the graph with two vertices, \\armout{2}{-1} and \\armout{0}{0}, and a single edge between these two vertices.\n\n\n\\begin{figure}\n\\begin{center}\n \\begin{tikzpicture}[every node\/.style={circle, inner sep=2pt, fill=black}, scale=.55]\n \\node (11) at (-1,-1) {};\n \\node (21) at (-2,-2) {};\n \\node (12) at (1,-1) {};\n \\node (22) at (2,-2) {};\n \\node (13) at (1.5,.5) {};\n \\node (23) at (2.75,1) [label=above: {\\armout{2}{0}}]{};\n \\node (14) at (0,1.5) {};\n \\node (24) at (0,3) {};\n \\node (10) at (-1.5,.5) {};\n \\node (20) at (-2.75,1) [label=above: {\\armout{0}{0}} ] {};\n \\node (211) at (-4.75,1) [label=left: {\\armout{2}{-1}} ] {};\n \\node (c) at (0,0) {};\n \\node[fill=none,rectangle] at (0,-3) {$T_0$}; \n \\foreach \\from\/\\to in {20\/211, 11\/21, 12\/22, 13\/23, 14\/24, 10\/20}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/c, 12\/c, 13\/c, 14\/c, 10\/c}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/12, 12\/13, 13\/14, 14\/10, 10\/11, 20\/21, 21\/22, 22\/23, 23\/24, 24\/20}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11\/22, 12\/23, 13\/24, 14\/20, 10\/21, 21\/12, 22\/13, 23\/14, 24\/10, 20\/11}\n \\draw (\\from) -- (\\to);\n \\node (11a) at (7,-1) {};\n \\node (21a) at (6,-2) {};\n \\node (12a) at (9,-1) {};\n \\node (22a) at (10,-2) {};\n \\node (13a) at (9.5,.5) {};\n \\node (23a) at (10.75,1) [label=above: {\\armout{2}{1}}]{};\n \\node (14a) at (8,1.5) {};\n \\node (24a) at (8,3) {};\n \\node (10a) at (6.5,.5) {};\n \\node (20a) at (5.25,1) [label=above: {\\armout{0}{1}}] {};\n \\node (ca) at (8,0) {};\n \\node[fill=none,rectangle] at (8,-3) {$T_1$}; \n \\foreach \\from\/\\to in {11a\/21a, 12a\/22a, 13a\/23a, 14a\/24a, 10a\/20a}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11a\/ca, 12a\/ca, 13a\/ca, 14a\/ca, 10a\/ca, 23\/20a}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11a\/12a, 12a\/13a, 13a\/14a, 14a\/10a, 10a\/11a, 20a\/21a, 21a\/22a, 22a\/23a, 23a\/24a, 24a\/20a}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11a\/22a, 12a\/23a, 13a\/24a, 14a\/20a, 10a\/21a, 21a\/12a, 22a\/13a, 23a\/14a, 24a\/10a, 20a\/11a}\n \\draw (\\from) -- (\\to);\n \\node (11b) at (15,-1) {};\n \\node (21b) at (14,-2) {};\n \\node (12b) at (17,-1) {};\n \\node (22b) at (18,-2) {};\n \\node (13b) at (17.5,.5) {};\n \\node (23b) at (18.75,1) [label=above: {\\armout{2}{2}}]{};\n \\node (14b) at (16,1.5) {};\n \\node (24b) at (16,3) {};\n \\node (10b) at (14.5,.5) {};\n \\node (20b) at (13.25,1) [label=above: {\\armout{0}{2}}]{};\n \\node (cb) at (16,0) {};\n \\node (203) at (20.75,1) [label=right: {\\armout{0}{3}} ] {};\n \\node[fill=none,rectangle] at (16,-3) {$T_2$}; \n \\foreach \\from\/\\to in {11b\/21b, 12b\/22b, 13b\/23b, 14b\/24b, 10b\/20b}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11b\/cb, 12b\/cb, 13b\/cb, 14b\/cb, 10b\/cb, 23a\/20b}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11b\/12b, 12b\/13b, 13b\/14b, 14b\/10b, 10b\/11b, 20b\/21b, 21b\/22b, 22b\/23b, 23b\/24b, 24b\/20b}\n \\draw (\\from) -- (\\to);\n \\foreach \\from\/\\to in {11b\/22b, 12b\/23b, 13b\/24b, 14b\/20b, 10b\/21b, 21b\/12b, 22b\/13b, 23b\/14b, 24b\/10b, 20b\/11b, 
23b\/203}\n \\draw (\\from) -- (\\to);\n \\end{tikzpicture}\n\\end{center}\n\\caption{The arm graph $\\arm_3$}\n\\label{ARM3}\n\\end{figure}\n\n\n\n\nWe adopt the following notation. If there are $m$ zombies, we typically denote them as $z_1, z_2, \\ldots, z_m$. Let $d_i$ denote the distance from the survivor to $z_i$, a parameter that will typically change during the play of a game. For integers $n$ and $r$, let $(n)_r$ denote the remainder of $n$ (mod $r$); for example: $(7)_5 = 2$ and $(10)_5 = 0$.\n\n\n\n\\begin{lemma} \\label{thm_one_arm}\nConsider the graph obtained by combining $\\arm_n$ with any graph \\graph{H} by adding an edge from $\\armout{2}{-1} \\in V(\\arm_n)$ to a vertex $v \\in V(\\graph{H})$. Suppose the survivor is at vertex \\armout{0}{0}, and there are $m$ zombies $z_1, z_2, \\ldots, z_m$, all on vertices of \\graph{H}, such that $d_1 \\le d_2 \\le \\cdots \\le d_m$. Then the survivor has a winning strategy if\n\\[(d_2 - d_1)_5 + (d_3 - d_2)_5 + \\cdots + (d_m - d_{m-1})_5 \\le n-1.\\]\n\\end{lemma}\n\n\\begin{proof}\n\\newcommand{\\distone}{\\ensuremath{\\mathcal{Z}}}\n\\newcommand{\\sgood}{-survivor-good}\n\nWe describe a winning strategy for the survivor. Let \\distone \\ refer to the set of zombies that are adjacent to the survivor when it is the survivor's turn. The set \\distone \\ will change as the game goes on.\n \nFor $0 \\le x \\le n-1$, define a position to be $x$\\dword{-survivor-good} if the following\nconditions are satisfied:\n\\begin{enumerate}\n\n\\item The survivor is at vertex \\armout{0}{x} and it is the survivor's turn to move.\n\n\n\\item The zombies not in \\distone \\ are in \n$\\graph{H} \\cup \\{\\armout{2}{-1}\\} \\cup \\graph{T}_0 \\cup \\cdots \\cup \\graph{T}_{x-1}$.\n\n\\item\nEvery zombie in \\distone \\ is at one of these vertices:\n\\armout{1}{x}, \\armin{1}{x}, or \\armout{2}{x-1}.\n\n\\item \\label{cond_s_good}\n$(d_2 - d_1)_5 + (d_3 - d_2)_5 + \\cdots + (d_m - d_{m-1})_5 \\le n-1-x$.\n\n\\end{enumerate}\nThe survivor strategy, depending on cases we will describe, will be to move from an $x$\\sgood \\ position to a new $x$\\sgood \\ position or to an $(x+1)$\\sgood \\ position. Either way we will guarantee that the sum $d_1 + \\cdots + d_m$ strictly decreases until all the zombies are in \\distone.\n\nThe survivor starts at \\armout{0}{0} and should not move from \\armout{0}{0} until a zombie arrives at the vertex \\armout{2}{-1}, adjacent to the survivor. \nAt this point, the zombies in \\distone \\ are all at vertex \\armout{2}{-1}, so by the assumptions of the lemma the position is $0$\\sgood.\n\nWe now describe what the survivor does in an $x$\\sgood \\ position.\nThe survivor is at vertex \\armout{0}{x}. Suppose\n$\\distone = \\{ z_1, \\ldots, z_i \\}$, and, if \\distone \\ does not include all the zombies, let $z_{i+1}$ be a zombie not in \\distone \\ whose distance $d_{i+1}$ to the survivor is minimum. \nFrom the vertex \\armout{0}{x} the survivor's first\nmove will always be to \\armout{4}{x} and thus all of the zombies in \\distone \\ will move to \\armout{0}{x} or \\armin{0}{x}.\nSubsequently, there are two different courses of action, depending on \nwhether or not the distance from $z_{i+1}$ to the survivor is more than 5.\n\nCase 1. Suppose $d_{i+1} \\ge 6$ or all the zombies are in \\distone. In this case, the survivor will follow a cycle around the outside vertices of $\\graph{T}_x$, eventually arriving back at \\armout{0}{x}, i.e. 
the survivor will move from \\armout{4}{x} to \\armout{3}{x} to \\armout{2}{x} to\n\\armout{1}{x}, and finally return to \\armout{0}{x}. The zombies in \\distone \\ are forced to follow around, so that when the survivor returns to \\armout{0}{x}, all the zombies who were in \\distone \\ will remain in \\distone, and be either at \\armout{1}{x} or \\armin{1}{x}. Since the survivor made 5 moves in cycling around $\\graph{T}_x$, any zombies that were not in \\distone \\ will be 5 steps closer to the survivor. Since $d_{i+1}$ was initially at least 6, only the zombies initially in \\distone \\ can be in $\\graph{T}_x$, though some other zombies may have joined \\distone, by being at the vertex \\armout{2}{x-1}. Note the position is still $x$\\sgood. \nCondition~\\ref{cond_s_good} of being $x$\\sgood \\ holds because the sum on the left of the inequality has not changed:\n$d_j$ is unchanged if $z_j$ started off in \\distone, and otherwise $d_j$ is reduced by 5. \nIf all the zombies were initially in \\distone, then they \nstill are, and otherwise the sum $d_1 + \\cdots + d_m$ has strictly decreased.\n\n \n\n\nCase 2. Suppose $d_{i+1} \\le 5$. In this case, the survivor moves from\n \\armout{4}{x} to \\armout{3}{x} to \\armout{2}{x}, and then\nto \\armout{0}{x+1} in $\\graph{T}_{x+1}$. The zombies that were in \\distone \\ remain in \\distone, ending up at $\\armout{2}{x}$. For a zombie that did not start out in \\distone, its distance to the survivor was at least 2, so when the survivor moves to \\armout{2}{x}, that zombie\nis at \\armout{0}{x}, or in $\\graph{T}_y$ for $y < x$, or at \\armout{2}{-1}, or in $H$. Thus when the survivor moves from \\armout{3}{x} to \\armout{2}{x}, the distance to any of these zombies does not change, and the zombies not in \\distone \\ will get one step closer to the survivor. \n Thus we note that\nthe sum $d_1 + \\cdots + d_m$ has strictly decreased. The position is now \n$(x+1)$\\sgood. Most of the conditions are immediate.\nWe point out that Condition~\\ref{cond_s_good} of being\n$(x+1)$\\sgood \\ holds.\nThe value of\n$x$ increased by 1, and so the right side of the inequality decreased by one. As for the left side, it continues to be the case that \n$(d_2 - d_1)_5 = \\cdots = (d_i - d_{i-1})_5 = 0$, but \n$(d_{i+1} - d_i)_5$ goes down by exactly 1, while \n$(d_{j+1} - d_j)_5$ is unchanged for $j \\ge i+1$. So the left side also decreases by one.\n\n \nIn both cases the sum $d_1 + \\cdots + d_m$ is strictly reduced, so eventually the left side of the inequality in Condition~\\ref{cond_s_good} must be reduced to zero (and is guaranteed to be reduced to zero should the survivor ever reach \\armout{0}{n-1}). \nAt that point the strategy will remain indefinitely in Case 1 with \\distone \\ eventually containing all the zombies and all zombies cycling around some copy of \\graph{T}, one step behind the survivor.\n\\end{proof}\n\n\n\n\n\n\\begin{lemma} \\label{1armUB}\nConsider the graph obtained by combining $\\arm_n$ with a graph \\graph{H} by adding an edge from $\\armout{2}{-1} \\in V(\\arm_n)$ to a vertex $v \\in V(\\graph{H})$. Suppose the survivor is on a vertex in $\\arm_n$ and there are zombies $z_1, \\ldots, z_m$ such that $z_1$ is at $v$, and for all $1 \\le i \\le m-1$, $0 \\le d_{i+1} - d_i \\le 4$, and $d_m - d_1 \\ge n$.\nThen the zombies have a winning strategy.\n\\end{lemma}\n\n\\begin{proof}\n\\newcommand{\\zgood}{-zombie-good}\n\nWe describe a winning strategy for the zombies. 
\nFor $0 \\le x \\le n$, define a position to be $x$\\dword{-zombie-good} if the following conditions hold:\n\\begin{enumerate}\n\n\\item The survivor is in $\\graph{T}_y$, for $y \\ge x$, or \nat the vertex \\armout{0}{n}.\n\n\\item For some $i \\ge 1$, all the zombies $z_1, \\ldots, z_i$ are at\nvertex \\armout{0}{x}.\n\n\\item The zombies $z_{i+1}, \\ldots, z_m$ are in\n$\\graph{H} \\cup \\{\\armout{2}{-1}\\} \\cup \\graph{T}_0 \\cup \\cdots \\cup \\graph{T}_{x-1}$.\n\n\\item\nFor $1 \\le j \\le m-1$, \\ $0 \\le d_{j+1} - d_j \\le 4$.\n\n\n\\item \\label{2zombies}\n$d_m - d_1 \\ge n-x$.\n\n\\end{enumerate}\nThe proof relies on the following claim, which we prove after giving the main argument.\n\\begin{claim}\\label{maintech}\nFor $0 \\le x < n$, if the game is in an $x$\\zgood \\ position, then the zombies can either catch the survivor in $\\graph{T}_x$ or force an $(x+1)$\\zgood \\ position. \n\\end{claim}\n\nThroughout the analysis, we will assume the survivor does not move adjacent to a zombie unless forced to, \nsince then the zombies immediately win and we are done. In the first two zombie moves, $z_1$ moves from $v$ to \\armout{2}{-1} to \\armout{0}{0}, with the other zombies following behind, and so for all $i$, $d_{i+1}-d_i$ remains unchanged, and the conditions of the lemma guarantee that this position is $0$\\zgood. By repeatedly applying Claim~\\ref{maintech}, the zombies either catch the survivor, or force $x$\\zgood \\ positions for $x=1, 2, \\ldots, n$. In an $n$\\zgood \\ position both the survivor and $z_1$ are at \\armout{0}{n}, so the survivor is caught.\n\nNow we prove Claim~\\ref{maintech}. Suppose the position is \n$x$\\zgood, where $0 \\le x < n$. \nFirst we dispense with the case where the survivor is in $\\graph{T}_y$ where $y > x$ or at \\armout{0}{n}. In this case, $z_1$ can move to \\armout{1}{x}, and since \\armout{1}{x} is adjacent to \\armout{2}{x}, the survivor cannot move into $\\graph{T}_x$ without being caught. Thus on the next two moves, $z_1$ should move to \\armout{2}{x}, and then \n\\armout{0}{x+1}. Since the relative distances between zombies do not change during these three moves, the position is now $(x+1)$\\zgood. \n\nThus we only need to consider the case in which\nthe position is $x$\\zgood \\ and the survivor is actually in\n$\\graph{T}_x$. The zombies $z_1, \\ldots, z_i$ all begin on vertex \\armout{0}{x}, while the zombies $z_{i+1}, \\ldots, z_m$ (Condition~\\ref{2zombies} implies there must be at least one) begin in \\graph{H} or at \\armout{2}{-1} or in some $\\graph{T}_y$ with $y < x$. We will describe a strategy for $z_1$, which all the zombies $z_1, \\ldots, z_i$ will follow, and a strategy for $z_{i+1}$. It is not necessary to explicitly describe the strategy for $z_{i+2}, \\ldots, z_m$, so long as they move toward the survivor on each turn. First \nwe make an observation, which we will use repeatedly below. \n\\begin{obs}\nSuppose that $z_1$ is at $c^x$, $z_{i+1}$ is adjacent to \\armout{0}{x} or \\armout{2}{x-1}, and the survivor is at either \\armin{0}{x} or \\armin{4}{x}.\nThen these two zombies can catch the survivor.\n\\end{obs}\nWe point out why the observation is true.\nIf the survivor moves to \\armout{4}{x}, \\armout{3}{x}, \\armout{1}{x}, or \\armout{0}{x}, $z_1$ should move to \n\\armin{4}{x}, \\armin{3}{x}, \\armin{1}{x}, or \\armin{0}{x}, respectively. In the first three cases, $z_1$ alone can catch the survivor on the next move. In the fourth case (i.e. 
the survivor is at \\armout{0}{x} and $z_1$ is at \\armin{0}{x}), the zombie $z_{i+1}$ will have already caught the survivor before the survivor moves to \\armout{0}{x} or will be at \\armout{2}{x-1}. Thus the survivor is already caught or will be caught on the zombies' next move.\n\nNow we describe the strategies that $z_1$ and $z_{i+1}$ follow in order to either catch the survivor in $\\graph{T}_x$ or force the position to be $(x+1)$\\zgood.\nThe strategy for $z_{i+1}$ is simple: if adjacent to the survivor it catches the survivor; otherwise, it moves towards vertex \n\\armout{0}{x} and once there, it moves to \n\\armin{1}{x} or \\armin{4}{x}, whichever is closer to the survivor. If more moves are required of $z_{i+1}$, it just moves towards the survivor in any fashion. \n The strategy for $z_1$ will depend on the survivor's position, so we consider two cases (we leave out vertices adjacent to\n\\armout{0}{x} since then $z_1$ immediately catches the survivor).\n\n\n\n\n\n\\textbf{Case 1:} Suppose the survivor is at $c^x$, \\armin{2}{x} or \\armout{2}{x}.\n Zombie $z_1$ should move to \\armin{1}{x}, and then there are three subcases.\n\\begin{enumerate}[label={(\\alph*)}]\n\n\\item If the survivor was at \\armout{2}{x} and moves to \\armout{0}{x+1}, then $z_1$ should move to \\armout{2}{x} and then \\armout{0}{x+1} for a total of three moves, which is a shortest path\nfrom \\armout{0}{x} to \\armout{0}{x+1}. Note that now the survivor is in $\\graph{T}_{x+1}$ or at \\armout{0}{n} and no values of $d_i$ have changed. Thus the position is $(x+1)$\\zgood.\n\n\n\\item \\label{cx_to_4x}\nIf the survivor moves to \\armin{4}{x}, $z_1$ should move to $c^x$.\n Then the zombies apply the observation to win.\n\n\n\n\\item If the survivor moves to \\armin{3}{x} or \\armout{3}{x}, then $z_1$ should move to \\armin{2}{x}. This forces the survivor to move to \\armin{4}{x} or \\armout{4}{x}, and in either case $z_1$ should move to \\armin{3}{x}. If the survivor is not already caught by $z_{i+1}$ and moves to \\armout{0}{x} then the survivor will be caught by $z_{i+1}$. Otherwise, the survivor must move to \\armin{0}{x}. In this case, $z_1$ should move to $c^x$ and \nthe zombies apply the observation to win.\n\n\n\\end{enumerate}\n\n\n\n\\textbf{Case 2:} If the survivor is at \\armin{3}{x} or \\armout{3}{x}, then $z_1$ should move to \\armin{4}{x}. This forces the survivor to move to \\armin{2}{x} or \\armout{2}{x}, \nand in either case $z_1$ should move to \\armin{3}{x}. There are then two subcases.\n\n\\begin{enumerate}[label={(\\alph*)}]\n\n\\item If the survivor is at \\armout{2}{x} and moves to \\armout{0}{x+1}, then $z_1$ should move to \\armout{2}{x} and then to \\armout{0}{x+1} for a total of 4 moves. In this case, note that the zombies $z_{i+1}, \\ldots, z_m$ are all one step closer to the survivor than before, because the survivor move from \\armin{3}{x} or \\armout{3}{x} to \\armout{2}{x} did not increase the distance to any of these zombies. To verify that Condition~\\ref{2zombies} still holds, note that at the end of this sequence of moves, $d_{i+1}, \\ldots, d_m$ have all decreased by 1, while the value of $x$ has increased by 1. Thus the new position is $(x+1)$\\zgood.\n\n\n\\item If the survivor moves to \\armin{1}{x} or \\armout{1}{x}, \nthen $z_1$ should move to \\armin{2}{x}. This forces the survivor to move to \n\\armin{0}{x} or \\armout{0}{x}. If the survivor moves to \\armout{0}{x} then the survivor will be caught by $z_{i+1}$. Otherwise, the survivor must move to \\armin{0}{x}. 
In this case, $z_1$ should move to $c^x$ and\nthe zombies apply the observation to win.\n\n\n\\end{enumerate}\n\\end{proof}\n\n\n\n\n\n\\section{A graph family exhibiting all feasible combinations of cop and zombie number}\\label{zkmsec}\n\nFor integers $m \\ge k \\ge 1$, we will describe a graph with cop number $k$ and zombie number $m$, by first describing the base graph in Definition~\\ref{def_basegraph}, and then describing how we hang arm graphs and paths off of the base graph.\n\n\n\\subsection{The base graph} \\label{sec_bibd}\n\nLet $V$ be a set of points, and $\\mathcal{B}$ be a collection of subsets of $V$, called \\dword{blocks}. Then the pair $(V,\\B)$ is called a \\dword{design}. A design $(V,\\B)$ is called a \\dword{balanced incomplete block design} with parameters $(v,k,\\lambda)$ if $|V|=v$, each block in $\\B$ has cardinality $k$, and each pair of distinct points in $V$ is in exactly $\\lambda$ blocks.\n\n\nRecall that from any collection of sets, one can define its \\dword{intersection graph}: The graph where the vertices are the sets and two sets are adjacent if their intersection is nonempty. Given any balanced incomplete block design $(V,\\B)$, its \\dword{block intersection graph} is the intersection graph of its blocks. Let $BIG(v,k,\\lambda)$ denote the set of block intersection graphs of balanced incomplete block designs with parameters $(v, k,\\lambda)$. The cop numbers of graphs in $BIG(v,k,\\lambda)$ are studied by Bonato and Burgess in Section 5 of \\cite{BonBurg2017_Designs}. \n\n\n\\begin{lemmaNoNum} \n(Lemma 5.4 from \\cite{BonBurg2017_Designs})\nIf $v > k(k-1)^2 + 1$, and $\\graph{G} \\in BIG(v,k,1)$ then $c(\\graph{G}) \\ge k$.\n\\end{lemmaNoNum}\n\nLemma 5.1 from \\cite{BonBurg2017_Designs} states if \n$\\graph{G} \\in BIG(v,k,\\lambda)$ then $c(\\graph{G}) \\le k$. In the next lemma, we follow their proof; restricting our attention to $BIG(v,k,1)$, we show that $k$ zombies catch a survivor on these graphs. \n\n\n\\begin{lemma} \\label{lemma_zombie_upper}\nIf\n$\\graph{G} \\in BIG(v,k,1)$, then $z(\\graph{G}) \\le k$.\n\\end{lemma}\n\n\\begin{proof}\nLet $(V,\\B)$ be a balanced incomplete block design with parameters $(v,k,1)$ and \n\\graph{G} be its block intersection graph. We describe how $k$ zombies $z_1, \\ldots, z_k$ can catch the survivor on \\graph{G}. Pick any $y \\in V$ and have each zombie start at a block which contains $y$ (multiple zombies \ncan start at the same vertex). Suppose the survivor starts at a block $B=\\{x_1, \\ldots, x_k\\} \\in \\B$. We can assume $y \\notin B$ as otherwise the zombies could win on the next turn.\nFor $1 \\le i \\le k$, zombie $z_i$ should move to the unique block containing $y$ and $x_i$. Note that this move is a legal zombie move since it puts each zombie adjacent to the survivor.\nNow wherever the survivor moves, it is to a block which intersects $B$, so at least one zombie is adjacent to the survivor and catches the survivor.\n\\end{proof}\n\n\n\\begin{defn} \\label{def_basegraph}\nFor $k \\ge 2$, let $\\graph{ZC}_k$ be a specified element of $BIG(v,k,1)$ for some $v > k(k-1)^2 + 1$. \nLet $\\graph{ZC}_1$ be a single\nvertex.\n\\end{defn}\n\nWilson \\cite{Wilson1972, Wilson1975} proved that for any positive integer $k$ there are infinitely many $v$ for which some balanced incomplete block design with parameters $(v, k, 1)$ exists, so the graph $\\graph{ZC}_k$ exists for all $k$. 
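To make these definitions concrete, the following short sketch (ours, purely illustrative) builds the smallest nontrivial example, the Fano plane, verifies that it is a balanced incomplete block design with parameters $(7,3,1)$, and forms its block intersection graph. Note that this example is far too small to serve as $\\graph{ZC}_3$, since $7$ is not larger than $k(k-1)^2+1=13$ for $k=3$; it only illustrates the construction.\n\\begin{verbatim}\nfrom itertools import combinations\n\n# The Fano plane: a (7,3,1)-design on the points 1,...,7.\npoints = range(1, 8)\nblocks = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},\n          {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]\n\n# Every pair of distinct points lies in exactly one block (lambda = 1).\nfor p, q in combinations(points, 2):\n    assert sum(1 for B in blocks if p in B and q in B) == 1\n\n# Block intersection graph: vertices are blocks, adjacent when they intersect.\nedges = [(i, j) for i, j in combinations(range(len(blocks)), 2)\n         if blocks[i] & blocks[j]]\nprint('blocks:', len(blocks), ' edges:', len(edges))  # 7 blocks, 21 edges\n\\end{verbatim}\nFor the Fano plane any two blocks share a point, so its block intersection graph is the complete graph $K_7$; the designs used to define $\\graph{ZC}_k$ are much larger.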
Of course for a given $k$ there may be many possible choices for $v$, and many graphs in $BIG(v,k,1)$, so we have arbitrarily chosen one representative to be the graph $\\graph{ZC}_k$. \nSince the zombie number is at least as large as the cop number for any graph, we use Lemma 5.4 from \\cite{BonBurg2017_Designs} and\nLemma~\\ref{lemma_zombie_upper}, in order to conclude that $c(\\graph{ZC}_k) = z(\\graph{ZC}_k) = k$.\n\n\\subsection{The final construction}\\label{finalg}\n\nWe now describe the graph $\\graph{Z}_{k,m}$. For $n \\ge 1$, denote by $P_n$ the path with $V(P_n) = \\{v_1, \\ldots, v_n\\}$ and $E(P_n) = \\{\\edge{v_i}{v_{i+1}} \\ \\setbar \\ 1 \\le i \\le n-1\\}$. \n\n\n\\begin{defn}\nLet $m \\ge k \\ge 1$. Then $\\graph{Z}_{k,m}$ is the graph obtained from $\\graph{ZC}_k$ by adding the following for each vertex $v \\in V(\\graph{ZC}_k)$.\n\n\\begin{itemize}\n\n\\item\nOne copy of $\\graph{P}_{4(m-k)}$ and an edge from $v$ to vertex $v_1$.\n\n\\item\n$m$ copies of $\\arm_{4(m-k)}$ and an edge from $v$ to the vertex \\armout{2}{-1}\nin each copy of $\\arm_{4(m-k)}$. \n\n\\end{itemize}\n\n\\end{defn}\nThe inclusion of the paths in $Z_{k,m}$ is primarily to provide an easy way to describe a winning starting position for $m$ zombies.\n\n\n\n\n\\begin{thm}\nThe graph $\\graph{Z}_{k,m}$ has cop number $k$ and zombie number $m$.\n\\end{thm}\n\n\\begin{proof} \nWe first show that $c(\\graph{Z}_{k,m}) = k$. If a vertex $w$ \nis not in $\\graph{ZC}_k$ (i.e. $w$ is on one of the attached arms or paths), then we refer to the closest vertex in $\\graph{ZC}_k$ as the \\dword{projection} of $w$.\nSince $\\graph{ZC}_k$ has cop number at least $k$, a robber can evade $k-1$ cops in $\\graph{Z}_{k,m}$ simply by playing a winning strategy on $\\graph{ZC}_k$, and regarding any cop on an arm or path as being at its projection. Conversely, if $k$ cops play against the projection of the robber on $\\graph{ZC}_k$, they can catch the projection. Once this is done the robber is trapped on an arm or path. On a path, one cop is sufficient to catch the robber simply by moving toward the robber. On an arm, one cop can catch the robber as follows. Upon arriving at \\armout{0}{x}, with the robber in $\\graph{T}_y$ where $y\\ge x$ or at \\armout{0}{4(m-k)},\n the cop can move to \\armin{0}{x} then to $c^x$. After the robber's two responses, the only safe (i.e. non-adjacent) locations for the robber are the vertices\n\\armout{1}{x}, \\armout{2}{x}, \\armout{3}{x}, \\armout{4}{x}, \\armout{0}{4(m-k)}, or a vertex in some $\\graph{T}_y$ with $y > x$.\nIf the robber is at \\armout{4}{x}, \\armout{3}{x}, or \n\\armout{1}{x}, the cop moves to \\armin{4}{x}, \\armin{3}{x}, or \\armin{1}{x}, respectively, thus\ncornering the robber, and catching it on the following move. If the robber is either at \\armout{2}{x}, \\armout{0}{4(m-k)}, or in $\\graph{T}_y$ where $y > x$, the cop should move to \\armin{2}{x}, then \\armout{2}{x}, and then \\armout{0}{x+1}. By following this strategy, the cop forces the robber down the arm, eventually catching the robber at \\armout{0}{4(m-k)}.\n\nFor the zombie number, we first show that $m$ zombies can catch the survivor. As in the proof of Lemma~\\ref{lemma_zombie_upper}, start $k$ zombies in the graph $\\graph{ZC}_k$ at a single vertex $y \\in V(\\graph{ZC}_k)$, calling one of \nthem $z_1$. 
\nThe remaining $m-k$ zombies $z_2, \\ldots, z_{m-k+1}$ should start on vertices $v_4, v_8, \\ldots, v_{4(m-k)}$ on the $P_{4(m-k)}$ that is attached to $y$.\nSuppose the survivor starts at a vertex $v \\in V(\\graph{ZC}_k)$ or on a copy of \n$\\arm_{4(m-k)}$ or $P_{4(m-k)}$ connected to $v$. If $v$ is adjacent to $y$, all $k$ zombies on $y$ must move onto $v$. If $v$ is at distance 2 from $y$, the $k$ zombies at $y$ play the strategy from Lemma~\\ref{lemma_zombie_upper}, moving to mutual neighbors of $v$ and $y$ in $\\graph{ZC}_k$. Since this forces the survivor onto the path or one of the arms connected to $v$, on the next move all of these $k$ zombies will arrive at $v$. A third possible case is $v=y$. In all three cases, after 1, 2, or 0 moves, respectively, we obtain a position where the survivor is on a path or arm connected to a vertex $v$, there are $k$ zombies at $v$, and none of the relative distances between the zombies have changed from their initial positions. If the survivor is on the path, then the zombies will win just by advancing down the path toward the survivor. \nOtherwise we can apply Lemma~\\ref{1armUB} with $4(m-k)$ in place of $n$ and $m-k+1$ in place of $m$. \nWe have the $m-k+1$ zombies $z_1, \\ldots, z_{m-k+1}$\nin the position required by Lemma~\\ref{1armUB} such\nthat $d_{m-k+1} - d_1 \\ge 4(m-k)$, so the zombies have a winning strategy.\n\n\nWe now show that the survivor can beat $m-1$ zombies using a strategy that has two phases. We say that a zombie in $\\graph{Z}_{k,m}$ \\textit{touches} $\\graph{ZC}_{k}$ if it is in $V(\\graph{ZC}_k)$ or adjacent to a vertex in $\\graph{ZC}_k$. If there are initially fewer than $k$ zombies touching $\\graph{ZC}_k$, the survivor starts in Phase 1. Otherwise the survivor starts in Phase 2. For Phase 1, the survivor plays the winning robber strategy on $\\graph{ZC}_k$ described in \\cite{BonBurg2017_Designs}, Lemma 5.4, by ignoring all zombies who do not touch $\\graph{ZC}_k$ and regarding those who are adjacent to $\\graph{ZC}_k$ as being at their projections, and does this until at least $k$ zombies touch $\\graph{ZC}_k$. Phase 1 is possible since $c(\\graph{ZC}_k) = k$, which means that so long as fewer than $k$ zombies touch $\\graph{ZC}_k$, the survivor cannot be caught. Once $k$ or more zombies touch $\\graph{ZC}_k$, the survivor moves to Phase 2.\n\nAs indicated above, the survivor may start in Phase 2, or transition to Phase 2 after starting in Phase 1.\nIf the survivor starts in Phase 2, then the survivor should start at vertex \\armout{0}{0} on a copy of $\\arm_{4(m-k)}$ which contains no zombies. If the survivor transitions from Phase 1 to Phase 2, then the survivor should begin Phase 2 by moving to vertex \\armout{2}{-1}, then to vertex \\armout{0}{0}, on a copy of $\\arm_{4(m-k)}$ which contains no zombies. In either case, the survivor can move to a copy of $\\arm_{4(m-k)}$ with no zombies since each vertex of $\\graph{ZC}_k$ is attached to $m$ copies of $\\arm_{4(m-k)}$, while there are only $m-1$ zombies.\n\nOnce in Phase 2, the survivor will remain in Phase 2, and win by applying Lemma~\\ref{thm_one_arm}.\nSince $\\graph{ZC}_k$ has diameter 2, \nafter the survivor moves to \\armout{2}{-1} and then \\armout{0}{0}, or simply starts at \\armout{0}{0}, when it is the survivor's next turn to move, all the zombies who were initially touching $\\graph{ZC}_k$ are now at distance 1, 2, 3, or 4 from the survivor. 
Thus, supposing $d_1 \\le d_2 \\le \\cdots \\le d_{m-1}$, we conclude that \n\\[(d_2 - d_1)_5 + (d_3 - d_2)_5 + \\cdots + (d_k - d_{k-1})_5 \\le 3.\\]\nFurthermore, we can conclude: \n\\[\\begin{split}\n(d_2 - d_1)_5 + (d_3 - d_2)_5 + \\cdots + (d_{m-1} - d_{m-2})_5 &\\le 3+ (d_{k+1} - d_k)_5 + \\cdots + (d_{m-1} - d_{m-2})_5\\\\\n&\\le 3+ 4(m-1-k)\\\\\n&= 4(m-k)-1.\n\\end{split}\\]\nBy Lemma~\\ref{thm_one_arm}, with $n=4(m-k)$, the survivor has a winning strategy.\n\\end{proof}\n\nWe remark that if one is not concerned with 1-cop-win graphs, but only cares about the above theorem when $k \\ge 2$, then the construction and argument can be significantly simplified:\nin place of the \\graph{T} graph we could simply use a \ncycle $\\graph{C}_5$. The arguments basically work the same with this modification, and much of the case analysis of Lemma~\\ref{1armUB} disappears. However, the result for $k=1$ is one of the most significant concerns. \n\n\n\n\n\n\n\n\n\n\n\\section{Zombie number of the hypercube graph}\\label{cube}\n\nLet $\\graph{Q}_n$ denote the $n$-dimensional hypercube.\nIn \\cite{FHMP16}, Theorem 16, Fitzpatrick, Howell, Messinger, and Pike noted that $z(\\graph{Q}_n) \\ge \\lceil\\frac{2n}{3}\\rceil$, which follows as a corollary from \\cite{OO14}.\nConjecture~18 of \\cite{FHMP16} states that $z(\\graph{Q}_n) = \\lceil 2n\/3 \\rceil$. We prove the conjecture is true. Independently, a more general result was just proven by Fitzgerald in \\cite{FitzpatrickArxiv2018} which obtains the conjecture as a corollary. Our proof is simpler, but just proves the conjecture.\nOur proof is a modification of our proof of Theorem 3.1 from \\cite{OO14}. \n\\begin{thm}\nFor all $n \\ge 1$, $z(\\graph{Q}_n)= \\lceil 2n\/3\\rceil$.\n\\end{thm}\n\n\\begin{proof}\n\nIn light of Theorem 16 from \\cite{FHMP16}, it suffices to show that $\\lceil 2n\/3 \\rceil$ zombies is enough to catch the survivor, by describing a winning zombie strategy.\n\nWe follow standard notation and write a vertex of $\\graph{Q}_n$ as a binary vector of dimension $n$. We will refer to the $n$ coordinates of the vector as coordinates $1, 2, \\ldots, n$. Since a move from one vertex to another changes (or \\dword{flips}) one of the coordinates, we describe moves of the zombies and survivor in terms of flipping coordinates. For example, if $n=6$, a move from $(001101)$ to $(001001)$ is achieved by flipping the fourth coordinate.\n\nIf it is the survivor's turn, we say that a zombie is \\dword{even} if its distance to the survivor is even, and otherwise we say that zombie is \\dword{odd}. If it is the zombies' turn,\nwe say that a zombie is \\dword{even} if its distance to the survivor is odd, and otherwise we say that zombie is \\dword{odd}; i.e. on the zombies' turn we think of measuring a zombie's parity after it moves. \n\nWe can now describe the zombie strategy.\nWe break the zombies into equal size groups or into groups whose sizes differ by at most one, \nputting one group (Group I) all on the single vertex $(0 \\ldots 0 0)$ and the other (Group II) on\n$(0 \\ldots 0 1)$, more on the latter if need be. \nSince $\\graph{Q}_n$ is bipartite, whatever the zombies or survivor do throughout the course of play, on either player's turn, all the zombies in Group I will have the same parity as one another (i.e. 
they are all even or all odd), and all the zombies in Group II will have the opposite parity.\n\nOn the zombies' first turn, and on every turn that occurs immediately after the survivor remains stationary, they will carry out\nan organizational action we call a \n\\dword{home setting}.\nSuppose there are $e$ even zombies and $d$ odd zombies.\nThe $d$ odd zombies each choose a unique coordinate from among the coordinates $1, \\ldots, d$ as their \\dword{home coordinate}. The $e$ even zombies choose unique coordinates among $d+2, d+4, \\ldots, d+2e$ as their home coordinates.\nThroughout the proof, we understand coordinates to \\dword{wrap around}, that is,\nfor any coordinate $N>n$ we interpret this as the coordinate $k$ so that $k \\equiv N \\pmod{n}$. A short calculation shows that the homes at least reach coordinate $n$ \n(i.e. $d+2e \\ge n$), since $d + e = \\lceil 2n\/3 \\rceil$ and $e \\ge \\lfloor (1\/2) \\lceil 2n\/3 \\rceil \\rfloor$.\nEvery time the survivor chooses to remain stationary, all zombies have their parity flipped, i.e. all even zombies become odd, and all odd zombies become even. Thus every time the survivor remains stationary, the zombies will carry out a new home setting.\n\n\n\nFrom a zombie's home, say coordinate $k$, it regards the coordinates $k+1, k+2, \\ldots, n, 1, 2, \\ldots k-1$, in that order, as being to the right of $k$. \nDefine the \\dword{reach} of the zombie with home at coordinate $k$ to be the number of consecutive coordinates on which the zombie's vector matches the survivor's, starting at coordinate $k$ and going right. \nEach zombie follows the same strategy relative to its home:\nwhen it is the zombies' turn, each zombie simply flips the coordinate which makes its reach as large as possible. For example, suppose $n = 7$ and the survivor is at vertex $(0000000)$. Suppose a zombie is at vertex $(0110010)$ and its home is coordinate $4$. Then this zombie's reach is currently $2$, and if it were the zombies' turn, this zombie would flip coordinate $6$ to make its reach $5$.\nWe will show that this strategy eventually catches the survivor. \n\nWe say that a zombie has a \\dword{critical reach} (only measured right before the survivor's turn) if its reach is $n-2$ for an even zombie, or $n-1$ for an odd zombie.\nAn even zombie with critical reach has two coordinates that differ from the survivor's; \nif the survivor flips one of these two coordinates, then that zombie flips the other to catch the survivor.\nAn odd zombie with critical reach has one coordinate that differs from the survivor's;\nif the survivor flips that one, then the survivor has landed on the zombie and is caught.\nThus, when a zombie has critical reach, we say that the coordinate to the left of its home (or the two coordinates to the left of its home, in the case of an even zombie) are \\dword{closed} to the survivor and assume that the survivor never flips such a coordinate.\n \nNote that each time the survivor stays still, each zombie gets one step closer, so the survivor can only do this a finite number of times, and eventually we can assume the survivor always moves and so there is no home setting and no parity change for any zombie (recall that, except for the beginning of the game, there is only a home setting when the survivor remains stationary).\nWe consider the game from that point on, where the survivor never stays still.\n\nWe will show that eventually all the zombies will have critical reach, unless the survivor gets caught earlier. 
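\n\nBefore doing so, we record, purely as an illustration (it plays no role in the proof, and the function names are ours), a short Python sketch of the reach computation and of the greedy coordinate flip just described; it reproduces the example above with $n = 7$.\n\\begin{verbatim}\n# Coordinates are 1-based and wrap around, as in the text.\ndef reach(zombie, survivor, home, n):\n    r = 0\n    while r < n and zombie[(home - 1 + r) % n] == survivor[(home - 1 + r) % n]:\n        r += 1\n    return r\n\ndef greedy_flip(zombie, survivor, home, n):\n    # Flip the first coordinate to the right of the home on which the zombie\n    # and the survivor disagree: this makes the reach as large as possible and\n    # decreases the distance to the survivor, so it is a legal zombie move.\n    r = reach(zombie, survivor, home, n)\n    if r == n:\n        return zombie  # the zombie already occupies the survivor's vertex\n    i = (home - 1 + r) % n\n    return zombie[:i] + (1 - zombie[i],) + zombie[i + 1:]\n\n# Example from the text: n = 7, survivor at 0000000, zombie at 0110010 with\n# home coordinate 4: the reach is 2, and flipping coordinate 6 makes it 5.\nsurvivor = (0, 0, 0, 0, 0, 0, 0)\nzombie = (0, 1, 1, 0, 0, 1, 0)\nassert reach(zombie, survivor, 4, 7) == 2\nassert reach(greedy_flip(zombie, survivor, 4, 7), survivor, 4, 7) == 5\n\\end{verbatim}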
\nOnce all the zombies have critical reach, since the homes at least reach coordinate $n$, all the coordinates will be closed to the survivor, so the survivor will lose no matter what is done.\n\nTo finish the proof and show that eventually all the zombies will have critical reach, it suffices to show that whatever the survivor does, every zombie's reach can at least be maintained, and some zombie without a critical reach can have its reach extended. \nSuppose the survivor flips coordinate $k$, which as mentioned above, we can assume is not a closed coordinate. \nBecause of the positioning of the zombie homes and the fact that the zombie homes extend out to at least coordinate $n$, at least one of the coordinates, $k+1$ or $k+2$, is the home of some zombie (where coordinates may wrap around as mentioned above).\n If coordinate $k+1$ is the home of a zombie, then since coordinate $k$ was not yet closed to the survivor, that means the zombie with home $k+1$ did not yet have critical reach, so its reach can be extended. If coordinate $k+1$ is home to no zombie, but $k+2$ is the home of a zombie (in this case, it must be an even zombie), then since coordinate $k$ was not yet closed to the survivor, that means the zombie with home $k+2$ did not yet have critical reach, so its reach can be extended. In both cases, all other zombies can at least maintain their reach just by flipping the same coordinate the survivor last flipped.\n\\end{proof}\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\label{s.introduction}\n\n\n\\subsection{Motivation and informal summary of results}\nWe identify the \\emph{optimal} error estimates for the stochastic homogenization of solutions $u^{\\varepsilon}$ solving:\n\\begin{equation}\\label{e.maineq}\n\\begin{cases}\n-\\tr \\left(A\\left(\\frac{x}{\\varepsilon}\\right)D^{2}u^{\\varepsilon}\\right)=0&\\mbox{in}\\quad U,\\\\\nu^{\\varepsilon}(x)=g(x)&\\mbox{on}\\quad \\partial U.\n\\end{cases}\n\\end{equation}\nHere $U$ is a smooth bounded subset of $\\mathbb{R}^d$ with $d\\geq 2$, $D^2v$ is the Hessian of a function $v$ and $\\tr(M)$ denotes the trace of a symmetric matrix $M\\in \\mathbb{S}^d$. The coefficient matrix~$A(\\cdot)$ is assumed to be a stationary random field, with given law~$\\P$, and valued in the subset of symmetric matrices with eigenvalues belonging to the interval~$[\\lambda,\\Lambda]$ for given ellipticity constants $0<\\lambda \\leq \\Lambda$. The solutions $u^{\\varepsilon}$ are understood in the \\emph{viscosity sense}~\\cite{CIL} although in most of the paper the equations can be interpreted classically. We assume that the probability measure $\\P$ has a product-type structure and in particular possesses a \\emph{finite range of dependence} (see Section \\ref{ss.assump} for the precise statement). According to the general qualitative theory of stochastic homogenization developed in~\\cite{PV2,Y0} for nondivergence form elliptic equations (see also the later work~\\cite{CSW}), the solutions $u^{\\varepsilon}$ of \\eqref{e.maineq} converge uniformly as $\\varepsilon \\to 0$, $\\P$--almost surely, to that of the homogenized problem \n\\begin{equation}\n\\begin{cases}\n-\\tr(\\overline{A} D^{2}u)=0&\\mbox{in}\\quad U,\\\\\nu(x)=g(x)&\\mbox{on}\\quad \\partial U,\n\\end{cases}\n\\end{equation}\nfor some deterministic, uniformly elliptic matrix $\\overline{A}$. Our interest in this paper is to study the rate of convergence of $u^\\varepsilon$ to $u$. 
\n\n\\smallskip\n\nError estimates quantifying the speed homogenization of $u^{\\varepsilon}\\rightarrow u$ have been obtained in~\\cite{Y0, Y2,CS, AS3}. The most recent paper~\\cite{AS3} was the first to give a general result stating that the typical size of the error is at most algebraic, that is, $O(\\varepsilon^{\\alpha})$ for some positive exponent $\\alpha$. The earlier work~\\cite{Y2} gave an algebraic error estimate in dimensions $d>4$. The main purpose of this paper is to reveal explicitly the optimal exponent. \n\n\\smallskip\n\nOur main quantitative estimates concern the size of certain stationary solutions called the \\emph{approximate correctors}. These are defined, for a fixed symmetric matrix $M\\in\\mathbb{S}^{d}$ and $\\varepsilon>0$, as the unique solution $\\phi_\\varepsilon \\in C(\\mathbb{R}^d) \\cap L^\\infty(\\mathbb{R}^d)$ of the equation\n\\begin{equation}\\label{e.approxcorrector}\n\\varepsilon^{2}\\phi_{\\varepsilon}-\\tr\\left(A(x)(M+D^{2}\\phi_{\\varepsilon})\\right)=0\\quad\\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation}\nOur main result states roughly that, for every $x\\in\\mathbb{R}^d$, $\\varepsilon \\in (0,\\tfrac12]$ and $t >0$,\n\\begin{equation} \\label{e.opt}\n\\P \\Big[ \\big| \\varepsilon^2\\phi_\\varepsilon(x) - \\tr\\left(\\overline{A}M \\right) \\big| \\geq t \\mathcal{E}(\\varepsilon) \\Big] \\lesssim \\exp \\left( -t^{\\frac12} \\right),\n\\end{equation}\nwhere the typical size $\\mathcal{E}(\\varepsilon)$ of the error depends only on the dimension~$d$ in the following way:\n\\begin{equation} \\label{e.error}\n\\mathcal{E}(\\varepsilon):= \\left\\{ \\begin{aligned}\n& \\varepsilon \\left| \\log \\varepsilon \\right| && \\mbox{in} \\ d = 2, \\\\\n& \\varepsilon^{\\frac32} && \\mbox{in} \\ d=3,\\\\ \n& \\varepsilon^2 \\left| \\log \\varepsilon \\right|^{\\frac12} && \\mbox{in} \\ d=4,\\\\ \n& \\varepsilon^2 & & \\mbox{in} \\ d > 4.\n\\end{aligned} \\right.\n\\end{equation}\nNote that the rescaling $\\phi^\\varepsilon(x):= \\varepsilon^2 \\phi_\\varepsilon\\left( \\tfrac x\\varepsilon \\right)$ allows us to write~\\eqref{e.approxcorrector} in the so-called theatrical scaling as\n\\begin{equation*}\n\\phi^{\\varepsilon}-\\tr\\left(A\\left( \\tfrac x\\varepsilon\\right) \\left(M+D^{2}\\phi^{\\varepsilon}\\right)\\right)=0\n\\quad\\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation*}\nThis is a well-posed problem (it has a unique bounded solution) on $\\mathbb{R}^d$ which homogenizes to the equation\n\\begin{equation*}\n\\phi -\\tr\\left(\\overline{A} \\left(M+D^{2}\\phi\\right)\\right)=0\n\\quad\\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation*}\nThe solution of the latter is obviously the constant function $\\phi\\equiv \\tr\\left( \\overline{A}M \\right)$ and so the limit \n\\begin{equation}\n\\label{e.quallimit}\n\\varepsilon^2\\phi_\\varepsilon(x) = \\phi^\\varepsilon(\\varepsilon x) \\rightarrow \\tr\\left( \\overline{A}M \\right)\n\\end{equation}\nis a qualitative homogenization statement. Therefore, the estimate~\\eqref{e.opt} is a quantitative homogenization result for this particular problem which asserts that the speed of homogenization is $O(\\mathcal{E}(\\varepsilon))$. \n\n\\smallskip\n\nMoreover, it is well-known that estimating the speed of homogenization for the Dirichlet problem is essentially equivalent to obtaining estimates on the approximate correctors (see~\\cite{Evans,AL2,CS, AS3}). Indeed, the estimate~\\eqref{e.opt} can be transferred without any loss of exponent to an estimate on the speed of homogenization of the Dirichlet problem. 
One can see this from the standard two-scale ansatz\n\\begin{equation*}\nu^\\varepsilon (x) \\approx u(x) + \\varepsilon^2\\phi_\\varepsilon\\left( \\tfrac x\\varepsilon \\right),\n\\end{equation*}\nwhich is easy to formalize and quantify in the linear setting since the homogenized solution~$u$ is completely smooth. We remark that since~\\eqref{e.opt} is an estimate at a single point $x$, an estimate in $L^\\infty$ for the Dirichlet problem will necessarily have an additional logarithmic factor $| \\log \\varepsilon|^q$ for some $q(d)<\\infty$. Since the argument is completely deterministic and essentially the same as in the case of periodic coefficients, we do not give the details here and instead focus on the proof of~\\eqref{e.opt}.\n\n\\smallskip\n\nThe estimate~\\eqref{e.opt} can also be expressed in terms of an estimate on the subquadratic growth of the correctors $\\phi$, which are the solutions, for given $M\\in\\mathbb{S}^d$, of the problem\n\\begin{equation}\\label{e.corrector}\n\\left\\{ \n\\begin{aligned}\n& -\\tr\\left(A(x)(M+D^{2}\\phi \\right)= -\\tr(\\overline{A}M) & \\mbox{in} & \\ \\mathbb{R}^d, \\\\\n& \\limsup_{R\\to \\infty} \\, R^{-2} \\osc_{B_R} \\phi=0.\\\\\n\\end{aligned}\n\\right.\n\\end{equation}\nRecall that while $D^2\\phi$ exists as a stationary random field, $\\phi$ itself may not be stationary. The estimate~\\eqref{e.opt} implies that, for every $x\\in\\mathbb{R}^d$, \n\\begin{equation*}\n\\P \\left[ \\left| \\phi(x) - \\phi(0) \\right| \\geq t |x|^2 \\mathcal{E}\\left(|x|^{-1}\\right) \\right] \\lesssim \\exp\\left( -t^{\\frac12} \\right).\n\\end{equation*}\nNotice that in dimensions $d>4$, this implies that the typical size of $|\\phi(x) - \\phi(0)|$ stays bounded as $|x| \\to \\infty$, suggesting that $\\phi$ is a locally bounded, stationary random field. In Section~\\ref{s.correctors}, we prove that this is so: the correctors are locally bounded and stationary in dimensions~$d>4$.\n\n\\smallskip\n\nThe above estimates are optimal in the size of the scaling, that is, the function $\\mathcal{E}(\\varepsilon)$ cannot be improved in any dimension. This can be observed by considering the simple operator $-a(x)\\Delta$ where $a(x)$ is a scalar field with a random checkerboard structure. Fix a smooth (deterministic) function $f\\in C^\\infty_c(\\mathbb{R}^d)$ and consider the problem\n\\begin{equation}\n\\label{e.easyexample}\n-a\\left(\\tfrac x\\varepsilon \\right) \\Delta u^\\varepsilon = f(x) \\quad \\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation}\nWe expect this to homogenize to a problem of the form\n\\begin{equation*}\n-\\overline a \\Delta u = f(x) \\quad \\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation*}\nIn dimension $d=2$ (or in higher dimensions, if we wish) we can also consider the Dirichlet problem with zero boundary conditions in a ball much larger than the support of $f$. We can then move the randomness to the right side of the equation to have\n\\begin{equation*}\n-\\Delta u^\\varepsilon = a^{-1}\\left(\\tfrac x\\varepsilon \\right) f(x) \\quad \\mbox{in} \\ \\mathbb{R}^d\n\\end{equation*}\nand then write a formula for the value of the solution $u^\\varepsilon$ at the origin as a convolution of the random right side against the Green's kernel for the Laplacian operator. 
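For instance, for $d \\geq 3$ this representation can be written out as\n\\begin{equation*}\nu^\\varepsilon(0) = \\int_{\\mathbb{R}^d} \\Phi(y)\\, a^{-1}\\left( \\tfrac y\\varepsilon \\right) f(y) \\, dy, \\qquad \\Phi(y) := c_d \\left| y \\right|^{2-d},\n\\end{equation*}\nwith $\\Phi$ the fundamental solution of the Laplacian and $c_d$ a dimensional constant; this notation is introduced only to fix ideas in the present heuristic discussion, and in the two-dimensional Dirichlet problem mentioned above $\\Phi$ is replaced by the Green's function of the large ball.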
The size of the fluctuations of this convolution is easy to determine, since it is essentially a sum (actually a convolution) of i.i.d.~random variables, and it turns out to be precisely of order $\\mathcal{E}(\\varepsilon)$ (with a prefactor constant depending on the variance of $a(\\cdot)$ itself). For instance, we show roughly that\n\\begin{equation*}\n\\var\\left[ u^\\varepsilon(0) \\right] \\simeq \\mathcal{E}(\\varepsilon)^2.\n\\end{equation*}\nThis computation also serves as a motivation for our proof of~\\eqref{e.opt}, although handling the general case of a random diffusion matrix is of course much more difficult than that of~\\eqref{e.easyexample}, in which the randomness is scalar and can be split from the operator. \n\n\n\\subsection{Method of proof and comparison to previous works}\n\nThe arguments in this paper are inspired by the methods introduced in the divergence form setting by Gloria and Otto~\\cite{GO1,GO2} and Gloria, Neukamm and Otto~\\cite{GNO1} (see also Mourrat~\\cite{M}). The authors combined certain concentration inequalities and analytic arguments to prove optimal quantitative estimates in stochastic homogenization for linear elliptic equations of the form\n\\begin{equation*}\n-\\nabla \\cdot \\left( A(x) \\nabla u \\right) = 0.\n\\end{equation*}\nThe concentration inequalities provided a convenient mechanism for transferring quantitative ergodic information from the coefficient field to the solutions themselves, an idea which goes back to an unpublished paper of Naddaf and Spencer~\\cite{NS}. Most of these works rely on some version of the Efron-Stein inequality~\\cite{ES} or the logarithmic Sobolev inequality to control the fluctuations of the solution by estimates on the spatial derivatives of the Green's functions and the solution.\n\n\\smallskip\n\nA variant of these concentration inequalities plays an analogous role in this paper (see Proposition~\\ref{p.concentration}). There are then two main analytic ingredients we need to conclude: (i) an estimate on the decay of the Green's function for the heterogeneous operator (note that, in contrast to the divergence form case, there is no useful deterministic bound on the decay of the Green's function); and (ii) a higher-order regularity theory asserting that, with high $\\P$-probability, solutions of our random equation are more regular than the deterministic regularity theory would predict. We prove each of these estimates by using the sub-optimal (but algebraic) quantitative homogenization result of~\\cite{AS3}: we show that, since solutions are close to those of the homogenized equation on large scales, we may ``borrow\" the estimates from the constant-coefficient equation. This is an idea that was introduced in the context of stochastic homogenization for divergence form equations by the first author and Smart~\\cite{AS2} (see also~\\cite{GNO2,AM}) and goes back to work of Avellaneda and Lin~\\cite{AL1,AL2} in the case of periodic coefficients. \n\n\\smallskip\n\nWe remark that, while the scaling of the error $\\mathcal{E}(\\varepsilon)$ is optimal, the estimate~\\eqref{e.opt} is almost certainly sub-optimal in terms of stochastic integrability. This seems to be one limitation of an approach relying on (nonlinear) concentration inequalities, which so far yield only estimates with exponential moments~\\cite{GNO2} rather than Gaussian moments~\\cite{AKM1,AKM2,GO4}. 
Recently, new approaches based on renormalization-type arguments (rather than nonlinear concentration inequalities) have been introduced in the divergence form setting~\\cite{AKM1,AKM2,GO4}. It was shown in~\\cite{AKM2} that this approach yields estimates at the critical scale which are also optimal in stochastic integrability. It would be very interesting to see whether such arguments could be developed in the nondivergence form case.\n\n\n\\subsection{Assumptions}\n\\label{ss.assump}\nIn this subsection, we introduce some notation and present the hypotheses. Throughout the paper, we fix ellipticity constants $0< \\lambda\\leq \\Lambda$ and the dimension $d \\geq 2$. The set of $d$-by-$d$ matrices is denoted by $\\mathbb{M}^{d}$, the set of $d$-by-$d$ symmetric matrices is denoted by $\\mathbb{S}^{d}$, and $Id \\in \\mathbb{S}^d$ is the identity matrix. If $M,N \\in \\mathbb{S}^d$, we write $M \\leq N$ if every eigenvalue of $N-M$ is nonnegative; $|M|$ denotes the largest eigenvalue of $M$. \n\n\\subsubsection{Definition of the probability space $(\\Omega,\\mathcal{F})$}\nWe begin by giving the structural hypotheses on the equation. \nWe consider matrices $A\\in \\mathbb{M}^{d}$\nwhich are uniformly elliptic:\n\\begin{equation} \\label{e.Fellip}\n\\lambda Id\\leq A(\\cdot) \\leq \\Lambda Id\n\\end{equation}\nand H\\\"older continuous in $x$: \n\\begin{equation} \\label{e.Fcont}\n\\exists\\sigma \\in (0,1] \\quad \\mbox{such that} \\quad \\sup_{x,y\\in\\mathbb{R}^d} \\frac{\\left| A(x)-A(y) \\right|}{|x-y|^\\sigma} < \\infty,\n\\end{equation}\nWe define the set \n\\begin{equation} \\label{e.Omega}\n\\Omega:= \\left\\{ A \\, :\\, \\mbox{$A$ satisfies~\\eqref{e.Fellip} and~\\eqref{e.Fcont}} \\right\\}.\n\\end{equation}\nNotice that the assumption~\\eqref{e.Fcont} is a qualitative one. We purposefully do not specify any quantitative information regarding the size of the supremum in~\\eqref{e.Fcont}, because none of our estimates depend on this value. We make this assumption in order to ensure that a comparison principle holds for our equations. \n\n\\smallskip\n\nWe next introduce some $\\sigma$--algebras on $\\Omega$. For every Borel subset $U\\subseteq \\mathbb{R}^d$ we define $\\mathcal{F}(U)$ to be the $\\sigma$--algebra on $\\Omega$ generated by the behavior of $A$ in $U$, that is,\n\\begin{multline} \\label{e.FU}\n\\mathcal{F}(U):= \\mbox{$\\sigma$--algebra on $\\Omega$ generated by the family of random variables} \\\\ \\left\\{ A\\mapsto A(x) \\,:\\, \\ x\\in U \\right\\}.\n\\end{multline}\nWe define $\\mathcal{F} := \\mathcal{F}(\\mathbb{R}^d)$.\n\n\\subsubsection{Translation action on $(\\Omega,\\mathcal{F})$}\n\nThe translation group action on $\\mathbb{R}^d$ naturally pushes forward to $\\Omega$. We denote this action by $\\{ T_y \\}_{y\\in \\mathbb{R}^d}$, with $T_y:\\Omega \\to \\Omega$ given by $$(T_yA)(x): =A(x+y).$$\nThe map $T_y$ is clearly $\\mathcal{F}$--measurable, and is extended to~$\\mathcal{F}$ by setting, for each $E \\in \\mathcal{F}$,~$$T_yE:= \\left\\{ T_yA \\,:\\, A\\in E\\right\\}.$$ We also extend the translation action to $\\mathcal{F}$--measurable random elements $X:\\Omega\\to S$ on $\\Omega$, with $S$ an arbitrary set, by defining $(T_yX)(F):=X(T_yF)$.\n\n\\smallskip\n\nWe say that a random field $f:\\mathbb{Z}^d \\times \\Omega \\to S$ is \\emph{$\\mathbb{Z}^d$--stationary} provided that $f(y+z,A) = f(y,T_zA)$ for every $y,z\\in\\mathbb{Z}^d$ and $A\\in \\Omega$. 
Note that an $\\mathcal{F}$--measurable random element $X:\\Omega\\to S$ may be viewed as a $\\mathbb{Z}^d$--stationary random field via the identification with $\\widetilde X(z,A):= X(T_zA)$.\n\n\n\\subsubsection{Assumptions on the random environment}\nThroughout the paper, we fix $\\ell \\geq 2\\sqrt{d}$ and a probability measure $\\P$ on $(\\Omega,\\mathcal{F})$ which satisfies the following:\n\\begin{itemize}\n\n\\item[(P1)] $\\P$ has $\\mathbb{Z}^d$--stationary statistics: that is, for every $z\\in\\mathbb{Z}^d$ and $E\\in \\mathcal{F}$,\n\\begin{equation*\n \\P \\left[ E \\right] = \\P \\left[ T_z E\\right]. \n\\end{equation*}\n\n\\item[(P2)] $\\P$ has a range of dependence at most $\\ell$: that is, for all Borel subsets $U,V$ of $\\mathbb{R}^d$ such that $\\dist(U,V) \\geq \\ell$, \n\\begin{equation*}\n\\mbox{ $\\mathcal{F}(U)$ and $\\mathcal{F}(V)$ \\ are \\ $\\P$--independent.}\n\\end{equation*}\n\\end{itemize}\n\nSome of our main results rely on concentration inequalities (stated in Section~\\ref{ss.concentration}) which require a stronger independence assumption than finite range of dependence (P2) which was used in \\cite{AS3}. Namely, we require that~$\\P$ is the pushforward of another probability measure which has a product space structure. We consider a probability space $(\\Omega_0,\\mathcal{F}_0,\\P_0)$ and denote\n\\begin{equation} \\label{e.starform}\n(\\Omega_*,\\mathcal{F}_*,\\P_*) := (\\Omega_0^{\\mathbb{Z}^d},\\mathcal{F}_0^{\\mathbb{Z}^d},\\P_0^{\\mathbb{Z}^d}).\n\\end{equation}\nWe regard an element $\\omega\\in \\Omega_*$ as a map $\\omega:\\mathbb{Z}^d \\to \\Omega_0$. For each $\\Gamma \\subseteq\\mathbb{Z}^d$, we denote by $\\mathcal{F}_*(\\Gamma)$ the $\\sigma$-algebra generated by the family $\\{ \\omega\\mapsto \\omega(z) \\,:\\, z\\in \\Gamma\\}$ of maps from $\\Omega_*$ to $\\Omega_0$. We denote the expectation with respect to $\\P_*$ by $\\mathbb{E}_*$. Abusing notation slightly, we also denote the natural $\\mathbb{Z}^d$-translation action on $\\Omega_*$ by $T_z$, that is, $T_z:\\Omega_*\\to \\Omega_*$ is defined by $(T_z\\omega)(y):=\\omega(y+z)$.\n\n\\smallskip\n\nWe assume that there exists an $(\\mathcal{F}_*,\\mathcal{F})$--measurable map $\\pi:\\Omega_* \\to \\Omega$ which satisfies the following:\n\\begin{equation} \\label{e.pullback}\n\\P \\left[ E \\right] = \\P_* \\!\\left[ \\pi^{-1} (E)\\right] \\quad \\mbox{for every $E \\in \\mathcal{F}$,} \n\\end{equation}\n\\begin{equation} \\label{e.transcommute}\n\\pi \\circ T_z = T_z \\circ \\pi \\quad \\mbox{for every} \\ z\\in \\mathbb{Z}^d,\n\\end{equation}\nwith the translation operator interpreted on each side in the obvious way, and\n\\begin{multline} \\label{e.siginclusion}\n\\mbox{for every Borel subset $U \\subseteq \\mathbb{R}^d$ and $E \\in \\mathcal{F}(U)$,} \\\\ \\pi^{-1}(E) \\in \\mathcal{F}_{*}\\left( \\left\\{ z\\in \\mathbb{Z}^d \\,:\\, \\dist(z,U) \\leq \\ell\/2 \\right\\} \\right).\n\\end{multline}\nWe summarize the above conditions as:\n\\begin{itemize}\n\\item[(P3)] There exists a probability space $(\\Omega_*,\\mathcal{F}_*,\\P_*)$ of the form~\\eqref{e.starform} and a map $$\\pi:=\\Omega_* \\to \\Omega,$$ which is $(\\mathcal{F},\\mathcal{F}_*)$--measurable and satisfies~\\eqref{e.pullback},~\\eqref{e.transcommute} and~\\eqref{e.siginclusion}.\n\n\\end{itemize}\n\nNote that, in view of the product structure, the conditions~\\eqref{e.pullback} and~\\eqref{e.siginclusion} imply~(P2) and~\\eqref{e.pullback} and~\\eqref{e.transcommute} imply~(P1). 
Thus~(P1) and~(P2) are superseded by~(P3). \n\n\n\\subsection{Statement of main result}\n\nWe next present the main result concerning quantitative estimates of the approximate correctors. \n\n\\begin{theorem}\n\\label{t.correctors}\nAssume that $\\P$ is a probability measure on $(\\Omega,\\mathcal{F})$ satisfying~(P3). Let $\\mathcal{E}(\\varepsilon)$ be defined by~\\eqref{e.error}. Then there exist $\\delta(d,\\lambda,\\Lambda)>0$ and $C(d,\\lambda,\\Lambda,\\ell) \\geq 1$ such that, for every $\\varepsilon \\in (0,\\tfrac12]$, \n\\begin{equation} \\label{e.correctorerror}\n\\mathbb{E} \\left[ \\exp \\left(\\left( \\frac1{\\mathcal{E}(\\varepsilon)} \\sup_{x\\in B_{\\sqrt{d}}(0)}\\left| \\varepsilon^2 \\phi_\\varepsilon(x) - \\tr\\left( \\overline{A}M \\right) \\right| \\right)^{\\frac{1}{2} + \\delta } \\,\\right) \\right] \\leq C.\n\\end{equation}\n\\end{theorem}\nThe proof of Theorem~\\ref{t.correctors} is completed in Section~\\ref{s.optimals}. \n\n\n\n\n\n\n\\subsection{Outline of the paper} \nThe rest of the paper is organized as follows. In Section~\\ref{s.prelims}, we introduce the approximate correctors and the modified Green's functions and give some preliminary results which are needed for our main arguments. In Section~\\ref{s.reg}, we establish a~$C^{1,1}$ regularity theory down to unit scale for solutions. Section~\\ref{s.green} contains estimates on the modified Green's functions, which roughly state that the these functions have the same rate of decay at infinity as the Green's function for the homogenized equation (i.e, the Laplacian). In this section, we also mention estimates on the invariant measure associated to the linear operator in~\\eqref{e.maineq}. In Section~\\ref{s.sensitivity}, we use results from the previous sections to measure the ``sensitivity'' of solutions of the approximate corrector equation with respect to the coefficients. Finally, in Section~\\ref{s.optimals}, we obtain the optimal rates of decay for the approximate corrector, proving our main result, and, in Section~\\ref{s.correctors}, demonstrate the existence of stationary correctors in dimensions~$d>4$. In the appendix, we give a proof of the concentration inequality we use in our analysis, which is a stretched exponential version of the Efron-Stein inequality. \n\n\n\\section{Preliminaries}\n\\label{s.prelims}\n\n\\subsection{Approximate correctors and modified Green's functions}\n\nFor each given~$M\\in\\mathbb{S}^{d}$ and $\\varepsilon>0$, the approximate corrector equation is\n\\begin{equation*\n\\varepsilon^{2}\\phi_{\\varepsilon}-\\tr(A(x)(M+D^{2}\\phi_{\\varepsilon}))=0\\quad\\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation*}\nThe existence of a unique solution $\\phi_\\varepsilon$ belonging to $C(\\mathbb{R}^d)\\cap L^\\infty(\\mathbb{R}^d)$ is obtained from the usual Perron method and the comparison principle. \n\n\\smallskip\n\nWe also introduce, for each $\\varepsilon\\in (0,1]$ and $y\\in\\mathbb{R}^d$, the ``modified Green's function\" $G_{\\varepsilon}(\\cdot, y; A)=G_\\varepsilon(\\cdot,y)$, which is the unique solution of the equation\n\\begin{equation} \\label{e.GF}\n\\varepsilon^2 G_\\varepsilon -\\tr\\left(A (x) D^2G_\\varepsilon \\right) = \\chi_{B_\\ell(y)} \\quad \\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation}\nCompared to the usual Green's function, we have smeared out the singularity and added the zeroth-order ``massive\" term. \n\nTo see that~\\eqref{e.GF} is well-posed, we build the solution $G_{\\varepsilon}$ by compactly supported approximations. 
We first solve the equation in the ball $B_R$ (for $R>\\ell$) with zero Dirichlet boundary data to obtain a function $G_{\\varepsilon,R}(\\cdot,y)$. By the maximum principle, $G_{\\varepsilon,R} (\\cdot,y)$ is increasing in $R$ and we may therefore define its limit as $R\\to \\infty$ to be $G_{\\varepsilon}(\\cdot,y)$. We show in the following lemma that it is bounded and decays at infinity, and from this it is immediate that it satisfies~\\eqref{e.GF} in~$\\mathbb{R}^d$. The lemma is a simple deterministic estimate which is useful only in the regime $|x-y| \\gtrsim \\varepsilon^{-1} \\left| \\log \\varepsilon \\right|$ and follows from the fact (as demonstrated by a simple test function) that the interaction between the terms on the left of~\\eqref{e.GF} give the equation a characteristic length scale of~$\\varepsilon^{-1}$. \n\n\\begin{lemma}\n\\label{l.Gtails}\nThere exist $a(d,\\lambda,\\Lambda)>0$ and $C(d,\\lambda,\\Lambda)>1$ such that, for every $A\\in \\Omega$, $x,y\\in \\mathbb{R}^d$ and $\\varepsilon\\in(0,1]$,\n\\begin{equation} \\label{e.Gtails}\nG_\\varepsilon(x,y) \\leq C \\varepsilon^{-2} \\exp\\left( -\\varepsilon a |x-y|\\right).\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nWithout loss of generality, let $y=0$. Let $\\phi(x):= C\\exp\\left( -\\varepsilon a |x| \\right)$ for $C,a>0$ to be selected below. Compute, for every $x\\neq 0$, \n\\begin{equation*} \\label{}\nD^2\\phi(x) = \\phi(x) \\left( -\\varepsilon a \\frac{1}{|x|} \\left( I - \\frac{x\\otimes x}{|x|^2}\\right) + \\varepsilon^2 a^2 \\frac{x\\otimes x}{|x|^2} \\right)\n\\end{equation*}\nThus for any $A\\in\\Omega$,\n\\begin{equation*} \\label{}\n - \\tr\\left( A(x) D^2 \\phi(x) \\right) \\geq \\phi(x) \\left( \\frac{1}{|x|}\\lambda\\varepsilon a (d-1) - \\Lambda(\\varepsilon^2 a^2) \\right) \\quad \\mbox{in} \\ \\mathbb{R}^d\\setminus \\{ 0 \\}. \n\\end{equation*}\nSet $a:=\\frac{1}{\\sqrt{2\\Lambda}}$. Then \n\\begin{equation} \\label{e.detsupersol}\n\\varepsilon^{2}\\phi - \\tr\\left( A(x) D^2 \\phi(x) \\right) \\geq \\varepsilon^{2}\\phi(x)(1-\\Lambda a^{2})+\\frac{\\phi(x)}{|x|}\\lambda\\varepsilon a(d-1)\\geq 0\\quad \\mbox{in} \\ \\mathbb{R}^d\\setminus \\{ 0 \\}.\n\\end{equation}\nTake $C := \\exp(a\\ell)$ so that $\\phi(x) \\geq 1$ in $|x| \\leq \\ell$. Define $\\tilde \\phi: = \\varepsilon^{-2} \\min\\{ 1, \\phi \\}$. Then $\\tilde \\phi$ satisfies the inequality\n\\begin{equation*} \\label{}\n\\varepsilon^2 \\phi(x) - \\tr\\left( A(x) D^2 \\phi(x) \\right) \\geq \\chi_{B_{\\ell}} \\quad \\mbox{in} \\ \\mathbb{R}^d. \n\\end{equation*}\nAs $\\tilde \\phi>0$ on $\\partial B_R$, the comparison principle yields that $\\tilde \\phi \\geq G_{\\varepsilon,R}(\\cdot,0)$ for every $R>1$. Letting $R\\to \\infty$ yields the lemma.\n\\end{proof}\n\n\n\\subsection{Spectral gap inequalities}\\label{ss.concentration}\n\nIn this subsection, we state the probabilistic tool we use to obtain the quantitative estimates for the modified correctors. The result here is applied precisely once in the paper, in Section~\\ref{s.optimals}, and relies on the stronger independence condition (P3). It is a variation of the Efron-Stein (``spectral gap\") inequality; a proof is given in Appendix~\\ref{s.spectralgap}. \n\n\n\\begin{proposition}\n\\label{p.concentration}\nFix $\\beta \\in (0,2)$. 
Let $X$ be a random variable on $(\\Omega_*,\\mathcal{F}_*, \\P_{*})$ and set \n\\begin{equation*} \\label{}\nX_z' := \\mathbb{E}_* \\left[ X \\,\\vert\\, \\mathcal{F}_*(\\mathbb{Z}^d\\setminus \\{ z\\}) \\right] \\qquad \\mbox{and} \\qquad \\mathbb{V}_*\\!\\left[ X \\right] := \\sum_{z\\in\\mathbb{Z}^{d}} (X-X'_z)^2.\n\\end{equation*}\nThen there exists $C(\\beta)\\geq 1$ such that\n\\begin{equation} \\label{e.concentration}\n\\mathbb{E}_* \\!\\left[ \\exp\\left( \\left| X - \\mathbb{E} \\left[ X\\right] \\right|^\\beta \\right) \\right] \\leq C \\mathbb{E}_* \\!\\left[ \\exp\\left( \\left(C\\mathbb{V}_*\\!\\left[ X \\right] \\right)^{\\frac{\\beta}{2-\\beta}} \\right) \\right]^\\frac{2-\\beta}{\\beta}.\n\\end{equation}\n\\end{proposition}\n\nThe conditional expectation $X_z'$ can be identified by resampling the random environment near the point $z$ (this is explained in depth in Section \\ref{s.sensitivity}). Therefore, the quantity $(X-X'_{z})$ measures changes in $X$ with respect to changes in the environment near $z$. Following~\\cite{GNO1}, we refer to $(X-X'_{z})$ as the \\emph{vertical derivative} of $X$ at the point $z$. \n\n\n\n\n\\subsection{Suboptimal error estimate}\n\nWe recall the main result of \\cite{AS3}, which is the basis for much of the analysis in this paper. We reformulate their result slightly, to put it in a form which is convenient for our use here.\n\n\\begin{proposition}\n\\label{p.subopt}\nFix $\\sigma \\in (0,1]$ and $s \\in (0,d)$. Let $\\mathbb{P}$ satisfy (P1) and (P2). There exists an exponent $\\alpha(\\sigma,s,d,\\lambda,\\Lambda,\\ell)>0$ and a nonnegative random variable $\\X$ on $(\\Omega,\\mathcal{F})$, satisfying\n\\begin{equation} \\label{e.expboundX}\n\\mathbb{E} \\left[ \\exp\\left( \\X \\right) \\right] \\leq C(\\sigma,s,d,\\Lambda,\\lambda,\\ell) < \\infty\n\\end{equation}\nand such that the following holds: for every $R\\geq1$, $f\\in C^{0,\\sigma}(B_R)$, $g\\in C^{0,\\sigma}(\\partial B_R)$ and solutions $u,v\\in C(\\overline B_R)$ of the Dirichlet problems\n\\begin{equation} \\label{e.DP}\n\\left\\{ \n\\begin{aligned}\n& -\\tr \\left(A(x)D^{2}u\\right) = f(x)& \\mbox{in} & \\ B_{R}, \\\\\n& u = g & \\mbox{on} & \\ \\partial B_R,\n\\end{aligned} \n\\right.\n\\end{equation}\nand\n\\begin{equation}\n\\label{e.DP2}\n\\left\\{ \n\\begin{aligned}\n& -\\tr \\left(\\overline{A}D^{2}v\\right) = f(x) & \\mbox{in} & \\ B_{R}, \\\\\n& v = g & \\mbox{on} & \\ \\partial B_R,\n\\end{aligned} \n\\right.\n\\end{equation}\nwe have, for a constant $C(\\sigma,s,d,\\Lambda,\\lambda,\\ell)\\geq1$, the estimate\n\\begin{equation*} \\label{}\nR^{-2} \\sup_{B_R} \\left| u(x) - v(x) \\right| \\leq C R^{-\\alpha} \\left(1+\\X R^{-s} \\right) \\Gamma_{R,\\sigma}(f,g), \n\\end{equation*}\nwhere\n\\begin{equation*} \\label{}\n\\Gamma_{R,\\sigma}(f,g) := \\sup_{B_R} \\left| f \\right| + R^{\\sigma} \\left[ f \\right]_{C^{0,\\sigma}(B_R)} + R^{-2} \\osc_{\\partial B_R} g + R^{-2+\\sigma} \\left[ g \\right]_{C^{0,\\sigma}(\\partial B_R)}.\n\\end{equation*}\n\\end{proposition}\n\nWe note that, in view of the comparison principle, Proposition~\\ref{p.subopt} also give one-sided estimates for subsolutions and supersolutions.\n\n\\section{Uniform \\texorpdfstring{$C^{1,1}$}{C11} Estimates}\\label{s.reg}\n\n\nIn this section, we present a large-scale ($R\\gg 1$) regularity theory for solutions of the equation\n\\begin{equation}\n\\label{e.pder}\n-\\tr\\left(A(x)D^{2}u\\right)=f(x) \\quad \\mbox{in} \\ B_R\n\\end{equation}\nRecall that according to the Krylov-Safonov H\\\"{o}lder 
regularity theory \\cite{CC}, there exists an exponent~$\\sigma(d,\\lambda,\\Lambda)\\in (0,1)$ such that $u$ belongs to $C^{0, \\sigma}(B_{R\/2})$ with the estimate\n\\begin{equation}\\label{e.classicKS}\nR^{\\sigma-2}\\left[u\\right]_{C^{0, \\sigma}({B_{R\/2})}}\\leq C\\left(R^{-2} \\osc_{B_{R}} u+ \\left( \\fint_{B_R} \\left| f(x)\\right|^d\\,dx \\right)^{\\frac1d}\\right).\n\\end{equation}\nThis is the best possible estimate, independent of the size of $R$, for solutions of general equations of the form~\\eqref{e.pder}, even if the coefficients are smooth. What we show in this section is that, due to statistical effects, solutions of our random equation are typically much more regular, at least on scales larger than~$\\ell$, the length scale of the correlations of the coefficient field. Indeed, we will show that solutions, with high probability, are essentially $C^{1,1}$ on scales larger than the unit scale. \n\n\\smallskip\n\nThe arguments here are motivated by a similar $C^{0,1}$ regularity theory for equations in divergence form developed in~\\cite{AS2} and should be seen as a nondivergence analogue of those estimates. In fact, the arguments here are almost identical to those of~\\cite{AS2}. They can also be seen as generalizations to the random setting of the results of Avellaneda and Lin for periodic coefficients, who proved uniform $C^{0,1}$ estimates for divergence form equations~\\cite{AL1} and then $C^{1,1}$ estimates for the nondivergence case~\\cite{AL2}. Note that it is natural to expect estimates in nondivergence form to be one derivative better than those in divergence form (e.g., the Schauder estimates). Note that the ``$C^{1,1}$ estimate'' proved in~\\cite{GNO2} has a different statement which involves correctors; the statement we prove here would be simply false for divergence form equations.\n\n\\smallskip\n\nThe rough idea, similar to the proof of the classical Schauder estimates, is that, due to homogenization, large-scale solutions of~\\eqref{e.pder} can be well-approximated by those of the homogenized equation. Since the latter are harmonic, up to a change of variables, they possess good estimates. If the homogenization is fast enough (for this we need the results of~\\cite{AS3}, namely Proposition~\\ref{p.subopt}), then the better regularity of the homogenized equation is inherited by the heterogeneous equation. This is a quantitative version of the idea introduced in the context of periodic homogenization by Avellaneda and Lin \\cite{AL1, AL2}.\n\n\\smallskip\n\nThroughout this section, we let $\\mathcal{Q}$ be the set of polynomials of degree at most two \nand let $\\mathcal{L}$ denote the set of affine functions. For $\\sigma \\in (0,1]$ and $U\\subseteq\\mathbb{R}^d$, we denote the usual H\\\"older seminorm by $\\left[ \\cdot\\right]_{C^{0,\\sigma}(U)}$.\n\n\\begin{theorem}[{$C^{1,1}$ regularity}]\n\\label{t.regularity}\nAssume (P1) and (P2). Fix $s\\in (0,d)$ and $\\sigma \\in (0,1]$. 
There exists an $\\mathcal{F}$--measurable random variable $\\X$ and a constant $C(s,\\sigma,d,\\lambda,\\Lambda,\\ell)\\geq 1$ satisfying\n\\begin{equation} \\label{e.scrbound}\n\\mathbb{E} \\left[ \\exp\\left( \\X^s \\right) \\right] \\leq C < \\infty\n\\end{equation}\nsuch that the following holds: for every $M\\in\\mathbb{S}^{d}$, $R\\geq 2\\X$, $u\\in C(B_R)$ and $f\\in C^{0,\\sigma}(B_R)$ satisfying \n\\begin{equation*} \\label{}\n-\\tr\\left(A(x)(M+D^{2}u) \\right) = f(x) \\quad \\mbox{in} \\ B_R\n\\end{equation*}\nand every $r\\in \\left[ \\X, \\frac12 R \\right]$,\n we have the estimate\n\\begin{multline} \\label{e.pwC11}\n\\frac{1}{r^2} \\inf_{l\\in\\L} \\sup_{B_r} |u-l|\n\\\\\n\\leq C\\left( \\left|f(0)+\\tr(\\overline{A}M)\\right| + R^{\\sigma}\\! \\left[ f \\right]_{C^{0,\\sigma}(B_R)} + \\frac{1}{R^2} \\inf_{l\\in\\L} \\sup_{B_R} |u-l| \\right).\n\\end{multline}\n\\end{theorem}\n\nIt is well-known that any~$L^\\infty$ function which can be well-approximated on all scales by smooth functions must be smooth. The proof of Theorem~\\ref{t.regularity} is based on a similar idea: any function~$u$ which can be well-approximated on all scales above a fixed scale $\\X$, which is of the same order as the microscopic scale, by functions~$w$ enjoying an \\emph{improvement of quadratic approximation} property must itself have this property. This is formalized in the next proposition, the statement and proof of which are similar to those of~\\cite[Lemma 5.1]{AS3}.\n\n\\begin{proposition}\n\\label{p.quadapprox}\n\nFor each $r>0$ and $\\theta \\in (0,\\tfrac12)$, let $\\mathcal A(r,\\theta)$ denote the subset of $L^\\infty(B_r)$ consisting of $w$ which satisfy\n\\begin{equation*} \\label{}\n\\frac{1}{(\\theta r)^2} \\inf_{q\\in \\mathcal Q} \\sup_{x\\in B_{\\theta r}} \\left| w(x) - q(x) \\right| \\leq \\frac12 \\left( \\frac1{r^2} \\inf_{q\\in \\mathcal Q} \\sup_{x\\in B_{r}} \\left| w(x) - q(x) \\right| \\right).\n\\end{equation*}\nAssume that $\\alpha, \\gamma, K,L>0$, $1 \\leq 4r_0 \\leq R$ and $u\\in L^\\infty(B_R)$ are such that, for every $r \\in [r_0,R\/2]$, there exists $v\\in \\mathcal A(r,\\theta)$ such that \n\\begin{equation} \\label{e.quappr}\n\\frac1{r^2}\\sup_{x\\in B_r}\\left| u(x) - v(x) \\right| \\leq r^{-\\alpha} \\left( K + \\frac1{r^2}\\inf_{l\\in\\L} \\sup_{x\\in B_{2r}} \\left| u(x) - l(x) \\right| \\right) + Lr^\\gamma.\n\\end{equation}\nThen there exist $\\beta(\\theta) \\in (0,1]$ and $C(\\alpha,\\theta,\\gamma) \\geq 1$ such that, for every $s\\in [r_0,R\/2]$, \n\\begin{equation} \\label{e.C11lem2}\n\\frac{1}{s^2} \\inf_{l\\in\\L} \\sup_{x\\in B_s} \\left| u(x) - l(x) \\right| \\leq C\\left(K+LR^\\gamma+\\frac{1}{R^2} \\inf_{l\\in\\L} \\sup_{x\\in B_R} \\left| u(x) - l(x) \\right| \\right).\n\\end{equation}\nand\n\\begin{multline} \\label{e.C11lem1}\n\\frac{1}{s^2} \\inf_{q\\in\\mathcal Q} \\sup_{x\\in B_s} \\left| u(x) - q(x) \\right| \n\\leq C\\left( \\frac{s}{R} \\right)^{\\beta} \\left( LR^\\gamma+ \\frac{1}{R^2} \\inf_{q\\in\\mathcal Q} \\sup_{x\\in B_R} \\left| u(x) - q(x) \\right| \\right) \\\\\n+ Cs^{-\\alpha} \\left( K+LR^\\gamma + \\frac{1}{R^2} \\inf_{l\\in\\L} \\sup_{x\\in B_R} \\left| u(x) - l(x) \\right| \\right).\n\\end{multline}\n\\end{proposition}\n\n\\begin{proof}\nThroughout the proof, we let $C$ denote a positive constant which depends only on $(\\alpha,\\theta,\\gamma)$ and may vary in each occurrence. 
We may suppose without loss of generality that $\\alpha \\leq 1$ and that $\\gamma \\leq c$ so that $\\theta^\\gamma \\geq \\frac23$.\n\n\\smallskip\n\n\\emph{Step 1.} We set up the argument. We keep track of the two quantitites\n\\begin{equation*} \\label{}\nG(r):= \\frac1{r^2} \\inf_{q\\in \\mathcal Q} \\sup_{x\\in B_r} \\left| u(x) - q(x) \\right| \\quad \\mbox{and} \\quad H(r):= \\frac1{r^2} \\inf_{l\\in \\mathcal L} \\sup_{x\\in B_r} \\left| u(x) - l(x) \\right|.\n\\end{equation*}\nBy the hypotheses of the proposition and the triangle inequality, we obtain that, for every $r\\in [r_0,R\/2]$,\n\\begin{equation} \\label{e.improvequad}\nG(\\theta r) \\leq \\frac12 G(r) +C r^{-\\alpha} \\left( K + H(2r) \\right)+ Lr^\\gamma.\n\\end{equation}\nDenote $s_0:=R$ and $s_{j}:= \\theta^{j-1} R\/4$ and let $m\\in\\mathbb{N}$ be such that \n\\begin{equation*} \\label{}\ns_m^{-\\alpha} \\leq \\frac14 \\leq s_{m+1}^{-\\alpha}.\n\\end{equation*}\nDenote $G_j:= G(s_j)$ and $H_j := H(s_j)$. Noting that $\\theta \\leq \\frac12$, from~\\eqref{e.improvequad} we get, for every $j\\in\\{ 1,\\ldots,m-1\\}$,\n\\begin{equation} \\label{e.improvequad2}\n G_{j+1} \\leq \\frac12 G_j + C s_j^{-\\alpha} \\left( K + H_{j-1} \\right) + Ls_{j}^\\gamma.\n\\end{equation}\nFor each $j\\in\\{ 0,\\ldots,m-1\\}$, we may select $q_j\\in \\mathcal Q$ such that \n\\begin{equation*} \\label{}\nG_j = \\frac{1}{s_j^2} \\sup_{x\\in B_{s_j}} \\left| u(x) - q_j(x) \\right|.\n\\end{equation*}\nWe denote the Hessian matrix of $q_j$ by $Q_j$. The triangle inequality implies that \n\\begin{equation} \\label{e.GjHj}\nG_j \\leq H_j \\leq G_j + \\frac{1}{s_j^2}\\sup_{x\\in B_{s_j}} \\frac12 x \\cdot Q_jx = G_j + \\frac12 |Q_j|,\n\\end{equation}\nand \n\\begin{align*}\n\\frac{1}{s_j^2} \\sup_{x\\in B_{s_{j+1}}} \\left| q_{j+1}(x) - q_j(x) \\right| \\leq G_j + \\theta^2 G_{j+1}.\n\\end{align*}\nThe latter implies \n\\begin{equation*} \n\\left| Q_{j+1} - Q_j \\right| \\leq \\frac{2}{s_{j+1}^2} \\sup_{x\\in B_{s_{j+1}}} \\left| q_{j+1}(x) - q_j(x) \\right| \\leq \\frac{2}{\\theta^{2}} G_j + 2G_{j+1}.\n\\end{equation*}\nIn particular, \n\\begin{equation}\\label{e.Qjdiff}\n|Q_{j+1}| \\leq |Q_j| + C \\left( G_j + G_{j+1} \\right)\n\\end{equation}\nSimilarly, the triangle inequality also gives \n\\begin{equation} \\label{e.Qjstup}\n|Q_j| = \\frac{2}{s_j^2} \\inf_{l\\in\\L} \\sup_{x\\in B_{s_j}} \\left| q_j (x) - l(x) \\right| \\leq 2G_j + 2H_j \\leq 4H_j.\n\\end{equation}\nThus, combining \\eqref{e.Qjdiff} and \\eqref{e.Qjstup}, yields\n\\begin{equation} \\label{e.Qkbound}\n|Q_j| \\leq |Q_0| + C\\sum_{i=0}^j G_i \\leq C \\left( H_0 + \\sum_{i=0}^j G_i \\right).\n\\end{equation}\nNext, combining~\\eqref{e.improvequad2}~\\eqref{e.GjHj} and~\\eqref{e.Qkbound}, we obtain, for every $j\\in \\{0,\\ldots,m-1\\}$,\n\\begin{align} \\label{e.Gjdiff}\n G_{j+1} & \\leq \\frac12 G_{j} + C s_j^{-\\alpha} \\left( K + H_0 + \\sum_{i=0}^j G_i \\right) + Ls_j^\\gamma.\n\\end{align}\nThe rest of the argument involves first iterating~\\eqref{e.Gjdiff} to obtain bounds on $G_j$, which yield bounds on $|Q_j|$ by~\\eqref{e.Qkbound}, and finally on $H_j$ by~\\eqref{e.GjHj}. \n\n\\smallskip\n\n\\emph{Step 2.} We show that, for every $j\\in \\{ 1,\\ldots, m\\}$, \n\\begin{equation} \\label{e.GjdiffmaxQj}\nG_j \\leq 2^{-j} G_0 + C s_j^{-\\alpha} \\left( K+H_0 \\right) + CL \\left( s_j^\\gamma+ R^\\gamma s_j^{-\\alpha} \\right). \n\\end{equation}\nWe argue by induction. 
Fix $A,B\\geq 1$ (which are selected below) and suppose that $k\\in \\{ 0,\\ldots,m-1\\}$ is such that, for every $j \\in\\{ 0,\\ldots,k\\}$, \n\\begin{equation*} \\label{}\nG_j \\leq 2^{-j} G_0 + As_j^{-\\alpha} \\left( K+H_0 \\right) + L\\left( A s_j^\\gamma+ BR^\\gamma s_j^{-\\alpha} \\right).\n\\end{equation*}\nUsing~\\eqref{e.Gjdiff} and this induction hypothesis (and that $G_0 \\leq H_0$), we get\n\\begin{align*}\nG_{k+1} & \\leq \\frac12 G_{k} + C s_k^{-\\alpha} \\left( K + H_0 + \\sum_{j=0}^k G_j \\right) + Ls_{k}^\\gamma \\\\\n& \\leq \\frac12 \\left( 2^{-k} G_0 + As_k^{-\\alpha} \\left( K+H_0 \\right) + L \\left( As_k^\\gamma+ BR^\\gamma s_k^{-\\alpha} \\right) \\right) + Ls_k^\\gamma \\\\\n& \\qquad + Cs_k^{-\\alpha} \\left(K+H_0+\\sum_{j=0}^k \\left( 2^{-j} G_0 + As_j^{-\\alpha} \\left( K+H_0 \\right) + L \\left(A s_j^\\gamma+ BR^\\gamma s_j^{-\\alpha} \\right) \\right) \\right) \\\\\n& \\leq 2^{-(k+1)} G_0 + s_{k+1}^{-\\alpha} (K+H_0) \\left(\\frac12 A + CAs_k^{-\\alpha} +C \\right) \\\\ \n& \\qquad + Ls_{k+1}^\\gamma \\left( \\frac12\\theta^{-\\gamma}A+C \\right) + L R^\\gamma s_{k+1}^{-\\alpha} \\left( \\frac12 B + CA + CBs_k^{-\\alpha} \\right).\n\\end{align*}\nNow suppose in addition that $k\\leq n$ with $n$ such that $Cs_n^{-\\alpha} \\leq \\frac14$. Then using this and $\\theta^{\\gamma} \\geq \\frac23$, we may select $A$ large enough so that \n\\begin{equation*} \\label{}\n\\frac12 A + C A s_k^{-\\alpha} + C\\leq \\frac34 A + C \\leq A \\quad \\mbox{and} \\quad \\frac12 \\theta^{-\\gamma} A + C \\leq A,\n\\end{equation*}\nand then select $B$ large enough so that \n\\begin{equation*} \\label{}\n\\frac12 B + CA + CBs_k^{-\\alpha} \\leq B.\n\\end{equation*}\nWe obtain\n\\begin{equation*} \\label{}\nG_{k+1} \\leq 2^{-(k+1)} G_0 + As_{k+1}^{-\\alpha} \\left( K+H_0 \\right) + ALs_{k+1}^\\gamma + BLR^\\gamma s_{k+1}^{-\\alpha}.\n\\end{equation*}\nBy induction, we now obtain~\\eqref{e.GjdiffmaxQj} for every $j \\leq n\\leq m$. In addition, for every $j \\in \\{ n+1,\\ldots,m\\}$, we have that $1 \\leq s_j \/ s_m \\leq C$. This yields~\\eqref{e.GjdiffmaxQj} for every $j\\in \\{ 0,\\ldots,n\\}$.\n\n\\smallskip\n\n\\smallskip\n\n\\emph{Step 3.} The bound on $H_j$ and conclusion.\nBy~\\eqref{e.Qkbound},~\\eqref{e.GjdiffmaxQj}, we have\n\\begin{align*} \\label{}\n|Q_j| & \\leq C\\left(H_0 + \\sum_{i=0}^j G_i\\right) \\\\\n& \\leq C\\left(H_0 + \\sum_{i=0}^j \\left( 2^{-i} G_0 + Cs_i^{-\\alpha} (K+H_0) + CL(s_i^\\gamma+R^\\gamma s_i^{-\\alpha} \\right) \\right)\\\\\n& \\leq C\\left(H_0 + G_0 + Cs_j^{-\\alpha} (K+H_0) + C LR^\\gamma\\left(1+s_j^{-\\alpha} \\right) \\right)\\\\\n& \\leq CH_0 + CKs_j^{-\\alpha} + CLR^\\gamma. \n\\end{align*}\nHere we also used that $s_j^{-\\alpha} \\leq s_m^{-\\alpha} \\leq C$. Using the previous inequality,~\\eqref{e.GjHj} and~\\eqref{e.GjdiffmaxQj}, we conclude that \n\\begin{equation*} \\label{}\nH_j \\leq G_j + \\frac{1}{2}|Q_j| \\leq C H_0 + CKs_j^{-\\alpha}+ CLR^\\gamma.\n\\end{equation*}\nThis is~\\eqref{e.C11lem2}. Note that~\\eqref{e.GjdiffmaxQj} already implies~\\eqref{e.C11lem1} for $\\beta:=(\\log 2) \/ |\\log \\theta|$.\n\\end{proof}\n\nNext, we recall that solutions of the homogenized equation (which are essentially harmonic functions) satisfy the ``improvement of quadratic approximation'' property.\n\n\\begin{lemma}\n\\label{l.checkdyad}\nLet $r>0$. 
Assume that $\\overline{A}$ satisfies~\\eqref{e.Fellip} and let $w\\in C(B_r)$ be a solution of \n\\begin{equation} \n\\label{e.Ahom}\n-\\tr \\left(\\overline{A}D^2w\\right)= 0 \\quad \\mbox{in} \\ B_r.\n\\end{equation}\nThere exists~$\\theta(d,\\lambda,\\Lambda) \\in (0,\\tfrac12]$ such that, for every $r>0$,\n\\begin{equation*} \\label{}\n\\frac{1}{(\\theta r)^2} \\inf_{q\\in\\mathcal Q} \\sup_{x\\in B_{\\theta r}} \\left| w(x) - q(x) \\right| \\leq \\frac12 \\left( \\frac{1}{ r^2} \\inf_{q\\in\\mathcal Q} \\sup_{x\\in B_{r}} \\left| w(x) - q(x) \\right| \\right).\n\\end{equation*}\n\n\\end{lemma}\n\\begin{proof}\nSince $\\overline A$ is constant, the solutions of~\\eqref{e.Ahom} are harmonic (up to a linear change of coordinates). Thus the result of this lemma is classical.\n\\end{proof}\n \nEquipped with the above lemmas, we now give the proof of Theorem~\\ref{t.regularity}.\n\n\\begin{proof}[{Proof of Theorem~\\ref{t.regularity}}]\nFix $s\\in (0,d)$. We denote by $C$ and $c$ positive constants depending only on $(s,\\sigma,d,\\lambda,\\Lambda,\\ell)$ which may vary in each occurrence. We proceed with the proof of~\\eqref{e.pwC11}. Let $\\mathcal Y$ be the random variable in the statement of Proposition~\\ref{p.subopt}, with $\\alpha$ the exponent there. Define $\\X:= \\Y^{1\/s}$ and observe that $\\X$ satisfies~\\eqref{e.scrbound}. We take $\\sigma$ to be the smallest of the following: the exponent in~\\eqref{e.classicKS} and half the exponent $\\alpha$ in Proposition~\\ref{p.subopt}.\n\n\n\\smallskip\n\nWe may suppose without loss of generality that $-\\tr\\left(\\overline{A}M\\right) = f(0) = 0$. \n\n\n\n\n\\smallskip\n\n\\emph{Step 1.} We check that~$u$ satisfies the hypothesis of Proposition~\\ref{p.quadapprox} with $r_0= C\\X$. Fix $r\\in [ C\\X,R\/2]$. We take $v,w\\in C(B_{3r\/4})$ to be the solutions of the problems \n\\begin{equation*} \\label{}\n\\left\\{ \n\\begin{aligned}\n& -\\tr\\left(\\overline{A} D^2 v\\right)= f(x)& \\mbox{in} & \\ B_{3r\/4}, \\\\\n& v = u & \\mbox{on} & \\ \\partial B_{3r\/4},\n\\end{aligned} \n\\right. 
\\qquad \\left\\{ \n\\begin{aligned}\n& -\\tr \\left(\\overline{A} D^2w\\right) = 0 & \\mbox{in} & \\ B_{{3r\/4}}, \\\\\n& w = u & \\mbox{on} & \\ \\partial B_{3r\/4}.\n\\end{aligned} \n\\right.\n\\end{equation*}\nBy the Alexandrov-Bakelman-Pucci estimate \\cite{CC}, we have\n\\begin{equation} \\label{e.tregabp}\n\\frac1{r^2}\\sup_{B_{r\/2}} | v-w | \\leq C \\left( \\fint_{B_r} \\left| f(x) \\right|^d \\, dx \\right)^{\\frac1d} \\leq C r^{\\sigma} \\left[ f \\right]_{C^{0,\\sigma}(B_r)}.\n\\end{equation}\nBy the Krylov-Safonov H\\\"older estimate \\eqref{e.classicKS},\n\\begin{multline} \\label{e.tregKSapp}\nr^{\\sigma-2}\\left[ u \\right]_{C^{0,\\sigma}(B_{3r\/4}) } \\leq C \\left( \\frac{1}{r^2} \\osc_{B_r} u + \\left( \\fint_{B_r} \\left| f(x) \\right|^d \\, dx \\right)^{\\frac1d} \\right) \\\\\n\\leq C \\left( \\frac{1}{r^2} \\osc_{B_r} u + r^{\\sigma} \\left[ f \\right]_{C^{0,\\sigma}(B_r)} \\right).\n\\end{multline}\nBy the error estimate (Proposition~\\ref{p.subopt}), we have \n\\begin{multline*} \\label{}\n\\frac1{r^2}\\sup_{B_{r\/2}} | u-v | \\\\\n\\leq Cr^{-\\alpha} \\left( 1 + \\Y r^{-s} \\right) \\left( r^\\sigma \\left[ f \\right]_{C^{0,\\sigma}(B_r)} + \\frac 1{r^2} \\osc_{\\partial B_{3r\/4}} u + r^{\\sigma-2}\\left[ u \\right]_{C^{0,\\sigma}(B_{3r\/4}) }\\right).\n\\end{multline*}\nUsing the assumption $r^s \\geq \\X^s = \\Y$ and~\\eqref{e.tregKSapp}, this gives \n\\begin{equation} \\label{e.tregee}\n\\frac1{r^2}\\sup_{B_{r\/2}} | u-v | \\leq Cr^{-\\alpha} \\left( r^\\sigma \\left[ f \\right]_{C^{0,\\sigma}(B_r)} + \\frac 1{r^2} \\osc_{B_r} u \\right).\n\\end{equation}\nUsing~\\eqref{e.tregabp} and~\\eqref{e.tregee}, the triangle inequality, and the definition of $\\sigma$, we get\n\\begin{equation*} \\label{}\n\\frac1{r^2}\\sup_{B_{r\/2}} | u-w | \\leq Cr^{-\\alpha} \\left( \\frac 1{r^2} \\osc_{B_r} u \\right) + Cr^\\sigma \\left[ f \\right]_{C^{0,\\sigma}(B_R)}.\n\\end{equation*}\nBy Lemma~\\ref{l.checkdyad}, $w\\in \\mathcal A(r,\\theta)$ for some $\\theta\\geq c$.\n\n\\smallskip\n\n\\emph{Step 2.} We apply Proposition~\\ref{p.quadapprox} to obtain~\\eqref{e.pwC11}. The proposition gives, for every $r\\geq r_0 = C\\X$,\n\\begin{equation*} \\label{}\n\\frac{1}{r^2} \\inf_{l\\in\\L} \\sup_{x\\in B_{r}} \\left| u(x) - l(x) \\right| \\leq C\\left(R^\\sigma \\left[ f \\right]_{C^{0,\\sigma}(B_R)} +\\frac{1}{R^2} \\inf_{l\\in\\L} \\sup_{x\\in B_R} \\left| u(x) - l(x) \\right| \\right), \n\\end{equation*}\nwhich is \\eqref{e.pwC11}. \n\\end{proof}\n\nIt is convenient to restate the estimate~\\eqref{e.pwC11} in terms of ``coarsened\" seminorms. Recall that, for $\\phi \\in C^\\infty(B_1)$,\n\\begin{equation*}\n\\begin{aligned} \\label{}\n\\left|D\\phi(x_0) \\right| & \\simeq \\frac{1}{h} \\osc_{B_h(x_0)} \\phi(x), \\\\\n\\left|D^2\\phi(x_0) \\right| & \\simeq \\frac{1}{h^2}\\inf_{p\\in \\mathbb{R}^d} \\osc_{B_h(x_0)} \\left( \\phi(x) - p\\cdot x \\right),\n\\end{aligned} \\qquad \\mbox{for} \\quad 0 < h \\leq 1.\n\\end{equation*}\nMotivated by this, for $x_0 \\in U \\subseteq \\mathbb{R}^d$, $\\alpha \\in (0,1]$ and $h>0$, we define\n\\begin{equation*} \\label{}\n\\left[ \\phi \\right]_{C^{0,\\alpha}_h(x_0,U)} := \\sup_{r > h} \\frac1{r^\\alpha} \\osc_{B_r(x_0) \\cap U} \\phi,\n\\end{equation*}\nand\n\\begin{equation*} \\label{}\n\\left[ \\phi \\right]_{C^{1,\\alpha}_h(x_0,U)} := \\sup_{r > h} \\frac1{r^{1+\\alpha}} \\inf_{l\\in \\mathcal{L}} \\osc_{B_r(x_0) \\cap U} \\left( \\phi(x) - l(x) \\right).\n\\end{equation*}\nThis allows us to write \\eqref{e.pwC11} in the form\n\\begin{multline}\\label{e.pwcC11}\n\\left[ u \\right]_{C^{1,1}_1(0,B_{R\/2})} \\\\\n\\leq C\\X^{2}\\left( \\left|f(0)+\\tr(\\overline{A}M)\\right| + R^{\\sigma}\\! 
\\left[ f \\right]_{C^{0,\\sigma}(B_R)} + \\frac{1}{R^2} \\inf_{l\\in\\L} \\sup_{x\\in B_R} |u-l| \\right).\n\\end{multline}\n\nAs a simple corollary to Theorem \\ref{t.regularity}, we also have $C^{0,1}_{1}$ bounds on $u$:\n\\begin{corollary}\\label{c.C01reg}\nAssume the hypotheses and notation of Theorem \\ref{t.regularity}. Then,\n\\begin{equation} \\label{e.pwC01}\n\\left[ u \\right]_{C^{0,1}_1(0,B_{R\/2})} \\leq \\X \\left( R \\left|f(0)+\\tr \\overline{A}M\\right| + R^{1+\\sigma} \\left[ f \\right]_{C^{0,\\sigma}(B_R)} + \\frac1R\\osc_{B_R}u \\right).\n\\end{equation}\n\\end{corollary}\n\n\nThe proof follows from a simple interpolation inequality, which controls the seminorm $\\left[ \\cdot \\right]_{ C^{0,1}_{h}(B_R)}$ in terms of $\\left[ \\cdot \\right]_{C^{1,1}_{h}(B_R)}$ and the oscillation in~$B_R$:\n\\begin{lemma}\n\\label{l.interp}\nFor any $R>0$ and $\\phi \\in C(B_R)$, we have\n\\begin{equation} \\label{e.interp}\n\\left[ \\phi \\right]_{C^{0,1}_h(0,B_R)} \\leq 14 \\left(\\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}\\right)^{\\frac12} \\left(\\osc_{B_R} \\phi\\right)^{\\frac12}. \n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n\nWe must show that, for every $s \\in [h,R]$, \n\\begin{equation} \\label{e.verinterp}\n\\frac1s \\osc_{B_s} \\phi \\leq 14 \\left(\\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}\\right)^{\\frac12} \\left(\\osc_{B_R} \\phi\\right)^{\\frac12}.\n\\end{equation}\nSet $$K:= \\left(\\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}\\right)^{-\\frac12} \\left(\\osc_{B_R} \\phi\\right)^{\\frac12}$$\nand observe that, for every $s\\in[ K , R]$, we have\n\\begin{equation} \\label{e.easyste}\n\\frac1s \\osc_{B_s} \\phi \\leq K^{-1} \\osc_{B_R} \\phi =\\left( \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}\\right)^{\\frac12} \\left(\\osc_{B_R} \\phi\\right)^{\\frac12}.\n\\end{equation}\nThus we need only check~\\eqref{e.verinterp} for $s\\in [h,K]$.\n\n\\smallskip\n\nWe next claim that, for every $s\\in [h,R)$, \n\\begin{equation} \\label{e.iters}\n\\frac{2}{s} \\osc_{B_{s\/2}} \\phi \\leq 3s \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)} + \\frac{1}{s} \\osc_{B_s} \\phi. \n\\end{equation}\nFix $s$ and select $p\\in\\mathbb{R}^d$ such that \n\\begin{equation*} \\label{}\n\\frac{1}{s^2} \\osc_{ B_s} \\left( \\phi(y) - p\\cdot y \\right) \\leq \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}.\n\\end{equation*}\nThen\n\\begin{equation*} \\label{}\n|p| = \\frac{1}{2s} \\osc_{ B_s} \\left( -p\\cdot y \\right) \\leq \\frac1{2s} \\osc_{ B_s} \\left( \\phi(y) - p\\cdot y \\right) + \\frac1{2s} \\osc_{B_s} \\phi.\n\\end{equation*}\nTogether these yield\n\\begin{align*}\n\\frac{2}{s} \\osc_{B_{s\/2}} \\phi & \\leq \\frac{2}{s} \\osc_{ B_{s\/2}} \\left( \\phi(y) - p\\cdot y \\right) + \\frac{2}{s} \\osc_{B_{s\/2}} \\left( -p\\cdot y \\right) \\\\\n& = \\frac{2}{s} \\osc_{ B_{s\/2}} \\left( \\phi(y) - p\\cdot y \\right) + 2|p| \\\\\n& \\leq \\frac{3}{s} \\osc_{ B_{s}} \\left( \\phi(y) - p\\cdot y \\right) + \\frac1{s} \\osc_{B_s} \\phi \\\\\n& \\leq 3s \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)} + \\frac{1}{s} \\osc_{B_s} \\phi.\n\\end{align*}\nThis is~\\eqref{e.iters}.\n\nWe now iterate~\\eqref{e.iters} to obtain the conclusion for $s\\in [h,K]$. 
By induction, we see that for each $j\\in\\mathbb{N}$ with $R_j := 2^{-j} K \\geq h$,\n\\begin{align*} \\label{}\nR_{j}^{-1} \\osc_{B_{R_{j}}} \\phi & \\leq K^{-1} \\osc_{B_K} \\phi + 3 \\left( \\sum_{i=0}^{j-1} R_i \\right) \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)} \\\\\n& \\leq K^{-1} \\osc_{B_K} \\phi + 6K \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}.\n\\end{align*}\nUsing~\\eqref{e.easyste}, we deduce that for each $j\\in\\mathbb{N}$ with $R_j := 2^{-j} K \\geq h$\n\\begin{equation*} \\label{}\nR_{j}^{-1} \\osc_{B_{R_{j}}} \\phi \\leq 7 \\left( \\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}\\right)^{\\frac12} \\left(\\osc_{B_R} \\phi\\right)^{\\frac12}.\n\\end{equation*}\nFor general $s\\in [h,R)$ we may find $j\\in \\mathbb{N}$ such that $R_{j+1} \\leq s < R_j$ to get\n\\begin{equation*} \\label{}\ns^{-1} \\osc_{B_s} \\phi \\leq R_{j+1}^{-1} \\osc_{B_{R_j}} \\phi \\leq 2 R_{j}^{-1} \\osc_{B_{R_{j}}} \\phi \\leq 14 \\left(\\left[ \\phi \\right]_{C^{1,1}_h(0,B_R)}\\right)^{\\frac12} \\left(\\osc_{B_R} \\phi\\right)^{\\frac12}. \\qedhere\n\\end{equation*}\n\\end{proof}\n\nEquipped with this lemma, we now present the simple proof of Corollary \\ref{c.C01reg}:\n\\begin{proof}[Proof of Corollary \\ref{c.C01reg}]\nThe estimate~\\eqref{e.pwC01} follows from~\\eqref{e.pwC11} and~Lemma~\\ref{l.interp} by interpolation, as follows:\n\\begin{align*}\n\\left[ u \\right]_{C^{0,1}_h(0,B_R)} & \\leq C \\left[ u \\right]^{\\frac12}_{C^{1,1}_h(0,B_R)} \\left(\\osc_{B_R} u \\right)^{\\frac12} \\\\\n& \\leq C\\X \\left( \\left|f(0)+\\tr(\\overline{A}M)\\right| + R^{\\sigma} \\left[ f \\right]_{C^{0,\\sigma}(B_R)} + R^{-2} \\osc_{B_R} u \\right)^{\\frac12} \\left(\\osc_{B_R} u \\right)^{\\frac12}\\\\\n& \\leq C\\X \\left( R\\left|f(0)+\\tr(\\overline{A}M)\\right| + R^{1+\\sigma} \\left[ f \\right]_{C^{0,\\sigma}(B_R)} + R^{-1} \\osc_{B_R} u \\right),\n\\end{align*}\nwhere we used~\\eqref{e.interp} in the first line,~\\eqref{e.pwC11} to get the second line and Young's inequality in the last line. Redefining $\\X$ to absorb the constant $C$, we obtain~\\eqref{e.pwC01}.\n\\end{proof}\n\n\n\n\n\n\\section{Green's function estimates}\n\\label{s.green}\n\nWe will now use a similar argument to the proof of Theorem \\ref{t.regularity} to obtain estimates on the modified Green's functions $G_{\\varepsilon}(\\cdot, 0)$ which are given by the solutions of:\n\\begin{equation}\\label{e.greens2}\n\\varepsilon^2 G_\\varepsilon -\\tr\\left(A (x) D^2G_\\varepsilon \\right) = \\chi_{B_\\ell} \\quad \\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation}\n\n\\begin{proposition}\n\\label{p.Green}\nFix $s\\in (0,d)$. 
There exist $a(d,\\lambda,\\Lambda)>0$, $\\delta(d,\\lambda,\\Lambda)>0$ and an $\\mathcal{F}$--measurable random variable $\\X:\\Omega\\to [1,\\infty)$ satisfying\n\\begin{equation} \\label{e.Green3dC}\n\\mathbb{E} \\left[ \\exp\\left( \\X^{s} \\right) \\right] \\leq C(s,d,\\lambda,\\Lambda,\\ell) < \\infty\n\\end{equation}\nsuch that, for every $\\varepsilon \\in (0,1]$ and $x\\in \\mathbb{R}^d$,\n\\begin{equation} \\label{e.Greenest}\nG_\\varepsilon(x,0) \\leq \\X^{d-1-\\delta}\\xi_{\\varepsilon}(x)\n\\end{equation}\nwhere $\\xi_{\\varepsilon}(x)$ is defined by:\n\\begin{equation}\\label{e.dimsep}\n\\xi_{\\varepsilon}(x):= \\exp\\left( -a \\varepsilon|x| \\right) \\cdot \\left\\{ \\begin{aligned} \n& \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right), & \\mbox{in} \\ d=2,\\\\\n& \\left( 1 + |x| \\right)^{2-d}, & \\mbox{in} \\ d>2,\n\\end{aligned}\\right.\n\\end{equation}\n\nand\n\\begin{equation} \\label{e.Greenoscest}\n\\osc_{B_1(x)} G_\\varepsilon(\\cdot,0) \\leq (T_x\\X) \\X^{d-1-\\delta} \\left( 1+|x| \\right)^{1-d} \\exp\\left( -a \\varepsilon |x| \\right).\n\\end{equation}\n\\end{proposition}\n\nWe emphasize that \\eqref{e.Greenest} is a \\emph{random} estimate, and the proof relies on the homogenization process. In contrast to the situation for divergence form equations, there is no \\emph{deterministic} estimate for the decay of the Green's functions. Consider that for a general $A\\in \\Omega$, the Green's function $G(\\cdot,0;A)$ solves\n\\begin{equation*} \\label{}\n-\\tr\\left( AD^2G(\\cdot,0;A) \\right) = \\delta_0 \\quad \\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation*}\nThe solution may behave, for $|x| \\gg 1$, like a multiple of \n\\begin{equation*} \\label{}\nK_\\gamma(x) := \\left\\{ \\begin{aligned} \n& |x|^{-\\gamma} & \\ \\gamma > 0, \\\\\n& \\log |x| & \\ \\gamma = 0, \\\\\n& - |x|^{-\\gamma} & \\ \\gamma < 0, \\\\\n\\end{aligned} \\right.\n\\end{equation*}\nfor any exponent~$\\gamma$ in the range\n\\begin{equation*} \\label{}\n \\frac{d-1}{\\Lambda} -1 \\leq \\gamma \\leq \\Lambda(d-1) - 1.\n\\end{equation*}\nIn particular, if $\\Lambda$ is large, then $\\gamma$ may be negative and so $G(\\cdot,0; A)$ may be bounded near the origin. To see that this range for $\\gamma$ is sharp, it suffices to consider, respectively, the diffusion matrices\n\\begin{equation*} \\label{}\nA_1(x) = \\Lambda \\frac{x \\otimes x}{|x|^2} + \\left( I - \\frac{x \\otimes x}{|x|^2} \\right) \\quad \\mbox{and} \\quad A_2(x) = \\frac{x \\otimes x}{|x|^2} + \\Lambda \\left( I - \\frac{x \\otimes x}{|x|^2} \\right).\n\\end{equation*}\nNote that $A_1$ and $A_2$ can be altered slightly to be smooth at $x=0$ without changing the decay of $G$ at infinity.\n\n\n\nBefore we discuss the proof of Proposition \\ref{p.Green}, we mention an interesting application to the \\emph{invariant measure} associated with \\eqref{e.greens2}. Recall that the invariant measure is defined to be the solution $m_\\varepsilon$ of the equation in double-divergence form:\n\\begin{equation*} \\label{}\n\\varepsilon^2(m_\\varepsilon-1) - \\div\\left( D( A(x)m_\\varepsilon ) \\right) = 0 \\quad \\mbox{in} \\ \\mathbb{R}^d. 
\n\\end{equation*}\nBy \\eqref{e.Greenest}, we have that for every $y\\in \\mathbb{R}^{d}$, \n\\begin{equation*} \\label{}\n\\int_{B_{\\ell}(y)} m_\\varepsilon(x)\\,dx \\leq \\int_{\\mathbb{R}^d} G_{\\varepsilon}(x,0)\\, dx\\leq \\X^{d-1-\\delta}.\n\\end{equation*}\nIn particular, we deduce that, for some $\\delta > 0$, \n\\begin{equation*} \\label{}\n\\P \\left[ \\int_{B_{\\ell}(y)} m_\\varepsilon(x)\\,dx > t \\right] \\leq C\\exp\\left( -t^{\\frac{d}{d-1}+\\delta} \\right).\n\\end{equation*}\nThis gives a very strong bound on the location of particles undergoing a diffusion in the random environment. \n\n\\smallskip\n\nWe now return to the proof of Proposition \\ref{p.Green}. Without loss of generality, we may change variables and assume that the effective operator $\\overline{A}=I$. The proof of \\eqref{e.Greenoscest} is based on the idea of using homogenization to compare the Green's function for the heterogeneous operator to that of the homogenized operator. The algebraic error estimates for homogenization in Proposition~\\ref{p.subopt} are just enough information to show that, with overwhelming probability, the ratio of Green's functions must be bounded at infinity. This is demonstrated by comparing the modified Green's function $G_{\\varepsilon}(\\cdot, 0)$ to a family of carefully constructed test functions. \n\nThe test functions $\\{ \\varphi_R\\}_{R\\geq C}$ will possess the following three properties:\n\\begin{equation} \\label{e.varphiRBR}\n\\inf_{A\\in\\Omega} -\\tr\\left( A(x) D^2 \\varphi_R \\right) \\geq \\chi_{B_\\ell} \\quad \\mbox{in} \\ B_R,\n\\end{equation}\n\\begin{equation} \\label{e.varphiRBR2}\n-\\Delta \\varphi_R (x) \\gtrsim |x|^{-d} \\quad \\mbox{in} \\ \\mathbb{R}^d \\setminus B_{R\/2},\n\\end{equation}\n\\begin{equation} \\label{e.varphiRBR3}\n\\varphi_R(x) \\lesssim R^{d-1-\\delta}(1+ |x|^{2-d}) \\quad \\mbox{in} \\ \\mathbb{R}^d\\setminus B_R.\n\\end{equation}\nAs we will show, these properties imply, for large enough $R$ (which will be random and depend on the value of $\\X$ from many different applications of Proposition~\\ref{p.subopt}), that $G_\\varepsilon(\\cdot,0) \\leq \\varphi_R$ in $\\mathbb{R}^d$.\n\n\\smallskip\n\nThe properties of the barrier function $\\varphi_{R}$ inside and outside of $B_{R}$ will be used to compare with $G_{\\varepsilon}(\\cdot, 0)$ in different ways. If $G_\\varepsilon(\\cdot,0) \\not \\leq \\varphi_R$ then, since they both decay at infinity, $G_\\varepsilon(\\cdot,0) - \\varphi_R$ must achieve its global maximum somewhere in $\\mathbb{R}^d$. Since $\\varphi_R$ is a supersolution of~\\eqref{e.varphiRBR}, this point must lie in $\\mathbb{R}^d\\setminus B_R$. As~$\\varphi_R$ is a supersolution of the homogenized equation outside $B_{R\/2}$, this event is very unlikely for $R\\gg1$, by Proposition~\\ref{p.subopt}. Note that there is a trade-off in our selection of the parameter $R$: if $R$ is relatively large, then $\\varphi_R$ is larger and hence the conclusion $G_\\varepsilon(\\cdot,0) \\leq \\varphi_R$ is weaker, however the probability that the conclusion fails is also much smaller.\n\n\\smallskip\n\nSince the Green's function for the Laplacian has different qualitative behavior in dimensions $d=2$ and $d>2$, we split the proof of Proposition~\\ref{p.Green} into these two cases, which are handled in the following subsections. \n\n\\subsection{Proof of Proposition \\ref{p.Green}: Dimensions three and larger}\n\n\\begin{lemma}\\label{l.testfcnd3}\nLet $s\\in (0, d)$. 
Then there exist constants $C, c, \gamma,\beta>0$, depending only on~$(s, d, \lambda, \Lambda, \ell)$, and a family of continuous functions $\left\{\varphi_{R}\right\}_{R\geq C}$ satisfying the following: (i) for every $R \geq C$ and $x\in\mathbb{R}^d$, \n\begin{equation} \label{e.varphimax}\n\varphi_{R}(x) \leq C R^{d-2+\gamma} \left( 1 + |x|\right)^{2-d},\n\end{equation}\n(ii) there exists a smooth function $\psi_{R}$ such that \n\begin{equation}\n\left\{ \n\begin{aligned}\n& -\Delta \psi_{R}\geq c|x|^{-2-\beta}\psi_{R} & \mbox{in} & \ \mathbb{R}^d\setminus B_{R\/2}, \\\n& \varphi_{R}\leq \psi_{R} & \mbox{in} & \ \mathbb{R}^d\setminus B_{R\/2},\n\end{aligned}\n\right.\n\end{equation}\nand (iii) for each $R\geq C$ and $A\in \Omega$, we have \n\begin{equation} \label{e.eqvarphiR}\n-\tr\left(A(x) D^2 \varphi_R \right) \geq \chi_{B_\ell} \quad \mbox{in} \ B_R.\n\end{equation}\n\end{lemma}\n\n\begin{proof}\nThroughout, we fix $s\in (0,d)$ and let $C$ and $c$ denote positive constants which depend only on $(s,d,\lambda,\Lambda,\ell)$ and may vary in each occurrence.\n\n\n\n\n\n\smallskip\n\nWe define $\varphi_R$. For each $R \geq 4\ell$, we set\n\begin{equation*} \label{}\n\varphi_R(x):= \left\{ \begin{aligned}\n& m_R - \frac{h}{\gamma} \left( \ell^2 + |x|^2 \right)^{\frac\gamma2}, && 0\leq |x| \leq R, \\\n& k_R |x|^{2-d} \exp \left( -\frac1\beta |x|^{-\beta} \right), && |x|> R, \\\n\end{aligned} \right.\n\end{equation*}\nwhere we define the following constants:\n\begin{equation*}\n2\beta := \alpha(s,d,\lambda,\Lambda,\ell) > 0 \n\end{equation*}\nis the exponent from Proposition~\ref{p.subopt} with $\sigma=1$,\n\begin{align*}\n \gamma & := \max\left\{ \frac12 , 1-\frac{\lambda}{2\Lambda} \right\} \\ \n h &:= \frac{2}{\lambda} (2\ell)^{2-\gamma} \\\n k_R & := h \left(d-2-2^\beta R^{-\beta} \right)^{-1} R^{d-2+\gamma} \exp\left(\frac1\beta 2^\beta R^{-\beta} \right) \\\n m_R & := \frac{h}{\gamma} \left( \ell^2 + R^2 \right)^{\frac\gamma2} + k_R R^{2-d} \exp \left( -\frac1\beta R^{-\beta} \right).\n\end{align*}\nNotice that the choice of $m_{R}$ makes $\varphi_{R}$ continuous. We next perform some calculations to guarantee that this choice of $\varphi_{R}$ satisfies the above claims. \n\smallskip\n\n\emph{Step 1.}\nWe check that~\eqref{e.varphimax} holds for every $R\geq 4\ell$ and $x\in \mathbb{R}^d$. Note that $\beta = \frac12\alpha \geq c$ and thus, for every $R \geq 4\ell$,\n\begin{equation} \label{e.easyexp}\nc\leq \exp\left( -\frac1\beta R^{-\beta} \right) \leq 1.\n\end{equation}\nFor such $R$, since $R \geq 4\ell \geq 4$ and $\beta\geq c$, we have $(2\/R)^\beta \leq 1-c$; since $d\geq 3$, this implies that $(d-2-2^\beta R^{-\beta}) \geq c$. Using also that $h\leq C$, we deduce that for every $R\geq 4\ell$,\n\begin{equation} \label{e.easykas}\nk_R \leq CR^{d-2+\gamma}\quad \mbox{and} \quad m_R \leq C R^\gamma .\n\end{equation}\nFor $|x|>R$,~\eqref{e.varphimax} is immediate from the definition of $\varphi_R$,~\eqref{e.easyexp} and~\eqref{e.easykas}. For~$|x|\leq R$, we first note that $\varphi_R$ is a decreasing function in the radial direction and therefore $\sup_{\mathbb{R}^d} \varphi_R = \varphi_R(0) \leq m_R $. 
We then use~\\eqref{e.easykas} to get, for every $|x| \\leq R$,\n\\begin{equation*} \\label{}\n\\varphi_R(x) \\leq m_R \\leq CR^\\gamma \\leq C R^{d-2+\\gamma} (1+|x|)^{2-d}.\n\\end{equation*}\nThis gives~\\eqref{e.varphimax}.\n\n\\smallskip\n\n\\emph{Step 2.} We check that $\\varphi_R$ satisfies\n\\begin{equation} \\label{e.belowhalf}\n\\varphi_R(x) \\leq \\psi_R (x): = k_R |x|^{2-d} \\exp\\left( -\\frac1\\beta |x|^{-\\beta} \\right)\\quad \\mbox{in} \\ \\mathbb{R}^d\\setminus B_{R\/2}.\n\\end{equation}\nSince this holds with equality for $|x| \\geq R$, we just need to check it in the annulus $\\{ R\/2 \\leq |x| < R\\}$. For this it suffices to show that in this annulus, $\\psi_R-\\varphi_R$ is decreasing in the radial direction. Since both $\\psi_R$ and $\\varphi_R$ are decreasing radial functions, we simply need to check that\n\\begin{equation} \\label{e.checkdec}\n\\left|D\\varphi_R(x) \\right| < \\left|D\\psi_R(x) \\right| \\quad \\mbox{for every} \\ x\\in B_R \\setminus B_{R\/2}.\n \\end{equation}\nWe compute, for $R\/2\\leq |x|\\leq R$, since $\\gamma\\leq 1$, \n\\begin{align*} \\label{}\n\\left| D\\varphi_R(x) \\right| & = h \\left( \\ell^2 + |x|^2 \\right)^{\\frac\\gamma2-1} |x| \\leq h |x|^{\\gamma-1}\n\\end{align*}\nand\n\\begin{align*} \\label{}\n\\left| D\\psi_R(x) \\right| & =|x|^{-1}\\left(d-2-|x|^{-\\beta}\\right)\\psi_{R}(x)\\\\\n&= k_R \\left(d-2-|x|^{-\\beta}\\right) |x|^{1-d} \\exp\\left( -\\frac1\\beta |x|^{-\\beta} \\right) \\\\\n& \\geq k_R \\left(d-2-2^\\beta R^{-\\beta} \\right) |x|^{1-d} \\exp\\left( -\\frac1\\beta 2^\\beta R^{-\\beta} \\right).\n\\end{align*}\nIt is now evident that the choice of $k_R$ ensures that~\\eqref{e.checkdec} holds. This completes the proof of~\\eqref{e.belowhalf}. \n\n\\smallskip\n\n\\emph{Step 3.} We check that $\\psi_R$ satisfies \n\\begin{equation} \\label{e.psiR}\n-\\Delta \\psi_R(x) \\geq c |x|^{-2-\\beta} \\psi_R(x) \\quad \\mbox{in} \\ |x| \\geq C.\n\\end{equation}\nBy a direct computation, we find that, for $x\\neq 0$,\n\\begin{align*} \n-\\Delta \\psi_R(x) & = |x|^{-2-\\beta} \\left( d-2+\\beta -|x|^{-\\beta} \\right)\\psi_R(x).\n\\end{align*}\nThis yields~\\eqref{e.psiR}. For future reference, we also note that for every $|x| >1$,\n\\begin{equation} \\label{e.hesspsib}\n|x|^{-2} \\osc_{B_{|x|\/2}(x)} \\psi_R + \\sup_{y\\in B_{|x|\/2}(x)}\\left( |y|^{-1} \\left| D\\psi_R(y) \\right| \\right) \\leq C|x|^{-2} \\psi_R(x).\n\\end{equation}\nThis follows from the computation\n\\begin{equation*} \\label{}\n\\left| D\\psi_R(x)\\right| = |x|^{-1} \\psi_R(x) \\left( 2-d+|x|^{-\\beta} \\right).\n\\end{equation*}\n\n\n\n\\smallskip\n\n\\emph{Step 4.} We check that \\eqref{e.eqvarphiR} holds. 
By a direct computation, we find that for every $x\\in B_R$,\n\\begin{align*} \\label{}\nD^2 \\varphi_R(x) &= -h\\left(\\ell^{2}+|x|^{2}\\right)^{\\frac{\\gamma}{2}-1}\\left(Id+\\frac{\\gamma-2}{\\ell^{2}+|x|^{2}}\\left(x\\otimes x\\right)\\right)\\\\\n&=-h \\left( \\ell^2+|x|^2 \\right)^{\\frac\\gamma2-1} \\left( \\left( Id - \\frac{x\\otimes x}{|x|^2} \\right) + \\frac{\\ell^2 - (1-\\gamma)|x|^2}{\\ell^2+|x|^2}\\left( \\frac{x\\otimes x}{|x|^2} \\right) \\right).\n\\end{align*}\nMaking use of our choice of $\\gamma$, we see that, for any $A\\in \\Omega$ and $x\\in B_R$,\n\\begin{align*} \\label{}\n-\\tr\\left( A(x) D^2 \\varphi_R(x) \\right)\n& \\geq h \\left( \\ell^2 +|x|^2 \\right)^{\\frac\\gamma2 - 1} \\left( (d-1)\\lambda - \\Lambda(1-\\gamma)(\\ell^2+|x|^2)^{-1}|x|^2 \\right) \\\\\n& \\geq h\\left( \\ell^2 +|x|^2 \\right)^{\\frac\\gamma2 - 1} \\left( (d-1)\\lambda - \\Lambda(1-\\gamma)\\right).\n\\end{align*}\nThe last expression on the right side is positive since, by the choice of $\\gamma$, \n\\begin{equation*} \\label{}\n(d-1)\\lambda - \\Lambda(1-\\gamma) \\geq \\left(d-\\frac32\\right)\\lambda >\\lambda > 0,\n\\end{equation*}\nwhile for $x\\in B_\\ell$, we have, by the choice of $h$, \n\\begin{align*} \\label{}\nh\\left( \\ell^2 +|x|^2 \\right)^{\\frac\\gamma2 - 1} \\left( (d-1)\\lambda - \\Lambda(1-\\gamma)\\right) \\geq h\\left( 2\\ell^2 \\right)^{\\frac\\gamma2 - 1} \\lambda > 1.\n\\end{align*}\nThis completes the proof of~\\eqref{e.eqvarphiR}. \n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{p.Green}~when $d\\geq 3$]\nAs before, we fix $s\\in (0,d)$ and let $C$ and $c$ denote positive constants which depend only on $(s,d, \\lambda, \\Lambda, \\ell)$. We use the notation developed in Lemma~\\ref{l.testfcnd3} throughout the proof. \n\n\\smallskip\n\nWe make one reduction before beginning the main argument. Rather than proving~\\eqref{e.Greenest}, it suffices to prove\n\\begin{equation} \\label{e.Greenest2}\n\\forall x\\in\\mathbb{R}^d, \\quad \nG_\\varepsilon(x,0) \\leq \\X^{d-1-\\delta} \\left( 1 + |x| \\right)^{2-d}.\n\\end{equation}\nTo see this, we notice that \n\\begin{equation*} \\label{}\nG_\\varepsilon(x,0) \\leq \\left( \\sup_{|x|\\leq\\varepsilon^{-1}} \\frac{G_\\varepsilon(x,0)}{(1+|x|)^{2-d}} \\right) \\varepsilon^{d-2} \\exp(a)\\exp\\left( -a\\varepsilon|x| \\right) \\quad \\mbox{in} \\ \\mathbb{R}^d \\setminus B_{\\varepsilon^{-1}}.\n\\end{equation*}\nIndeed, the right hand side is larger than the left hand side on $\\partial B_{\\varepsilon^{-1}}$, and hence in $\\mathbb{R}^d\\setminus B_{\\varepsilon^{-1}}$ by the comparison principle and the fact that the right hand side is a supersolution of~\\eqref{e.detsupersol} for $a(d,\\lambda,\\Lambda)>0$ (by the proof of Lemma~\\ref{l.Gtails}). 
We then obtain~\\eqref{e.Greenest} in $\\mathbb{R}^d \\setminus B_{\\varepsilon^{-1}}$ by replacing $\\X$ by $C\\X$ and $a$ by $\\frac12a$, using~\\eqref{e.Greenest2}, and noting that\n\\begin{equation*} \\label{}\n\\varepsilon^{d-2} \\exp\\left( -a\\varepsilon|x| \\right) \\leq C |x|^{2-d} \\exp\\left( -\\frac a2 \\varepsilon |x| \\right) \\quad \\mbox{for every} \\ |x| \\geq \\varepsilon^{-1}.\n\\end{equation*}\nWe also get~\\eqref{e.Greenest} in $B_{\\varepsilon^{-1}}$, with $\\X$ again replaced by $C\\X$, from~\\eqref{e.Greenest2} and the simple inequality\n\\begin{equation*} \\label{}\n\\exp\\left( -a\\varepsilon|x| \\right) \\geq c \\quad \\mbox{for every} \\ |x| \\leq \\varepsilon^{-1}.\n\\end{equation*}\n\\emph{Step 1.} We define $\\X$ and check that it has the desired integrability. Let $\\mathcal Y$ denote the random variable~$\\X$ in the statement of Proposition~\\ref{p.subopt} in $B_{R}$ with $s$ as above and $\\sigma=1$. Also denote $\\mathcal Y_x(A):= \\mathcal Y(T_x A)$, which controls the error in balls of radius $R$ centered at a point $x\\in\\mathbb{R}^d$. \n\nWe now define\n\\begin{equation} \\label{e.defscrC}\n\\X(A) := \\sup \\left\\{ |z| \\, :\\, z\\in \\mathbb{Z}^d, \\ \\mathcal Y_z(A) \\geq 2^d|z|^s \\right\\}.\n\\end{equation}\nThe main point is that $\\X$ has the following property by Proposition~\\ref{p.subopt}: for every $z\\in \\mathbb{Z}^d$ with $|z| > \\X$, and every $R>\\frac18|z|$ and $g\\in C^{0,1} (\\partial B_R(z))$, every pair $u,v\\in C(\\overline B_R)$ such that \n\\begin{equation} \\label{e.X1defp1}\n\\left\\{ \\begin{aligned} \n& - \\tr(A(x)D^2u) \\leq 0 \\leq -\\Delta v & \\mbox{in} & \\ B_R(z), \\\\\n& u \\leq g \\leq v & \\mbox{on} & \\ \\partial B_R(z),\n\\end{aligned}\n\\right.\n\\end{equation}\nmust satisfy the estimate\n\\begin{equation} \\label{e.X1defp2}\nR^{-2}\\sup_{B_R(z)} \\left( u(x) - v(x) \\right) \\leq CR^{-\\alpha} \\left( R^{-2} \\osc_{\\partial B_{R}(z)} g + R^{-1} \\left[ g \\right]_{C^{0,1}(\\partial B_{R}(z))} \\right).\n\\end{equation}\n\nLet us check that \n\\begin{equation} \\label{e.expC1m}\n\\mathbb{E} \\left[ \\exp\\left(\\X^s\\right)\\right] \\leq C(s,d,\\lambda, \\Lambda,\\ell) < \\infty. \n\\end{equation}\nA union bound and stationarity yield, for $t\\geq 1$, \n\\begin{align*} \\label{}\n\\P \\left[ \\X > t \\right] & \\leq \\sum_{z\\in \\mathbb{Z}^d \\setminus B_t} \\P \\left[ \\mathcal Y_z \\geq 2^d|z|^s\n \\right]\n \\\\ & \\leq \\sum_{n\\in \\mathbb{N}, \\, 2^{n} \\geq t} \\ \\sum_{z\\in B_{2^n} \\setminus B_{2^{n-1}}} \\P \\left[ \\mathcal Y_z \\geq 2^d|z|^s\n \\right] \\\\ \n & \\leq C \\sum_{n\\in \\mathbb{N}, \\, 2^{n} \\geq t} 2^{dn} \\,\\P \\left[\\mathcal Y \\geq 2^{(n-1)s+d} \\right]. 
\n\\end{align*}\nBy Proposition~\\ref{p.subopt} and Chebyshev's inequality,\n\\begin{align*} \\label{}\n\\sum_{n\\in \\mathbb{N}, \\, 2^{n} \\geq t} 2^{dn} \\,\\P \\left[\\mathcal Y \\geq 2^{(n-1)s+d} \\right] & \\leq C\\sum_{n\\in \\mathbb{N}, \\, 2^{n} \\geq t} 2^{dn} \\exp\\left(-2^{(n-1)s+d} \\right) \\\\\n& \\leq C \\exp\\left( -2t^s \\right).\n\\end{align*}\nIt follows then that \n\\begin{align*}\n\\mathbb{E}[\\exp(\\X^{s})]&=s\\int_{0}^{\\infty} t^{s-1}\\exp(t^{s})\\P[\\X>t]\\, dt\\\\\n&\\leq sC\\int_{0}^{\\infty}t^{s-1} \\exp(-t^{s}) \\,dt\\leq C.\n\\end{align*}\nThis yields~\\eqref{e.expC1m}.\n\n\\smallskip\n\n\\emph{Step 2.}\nWe reduce the proposition to the claim that, for every $R\\geq C$,\n\\begin{equation}\n\\label{e.domclam2}\n\\left\\{ A\\in \\Omega \\,:\\, \\sup_{0<\\varepsilon\\leq 1} \\sup_{x\\in \\mathbb{R}^d} \\left( G_\\varepsilon(x,0; A) - \\varphi_R(x) \\right) >0 \\right\\} \\subseteq \\left\\{ A \\in\\Omega \\,:\\, \\X(A) > R \\right\\}.\n\\end{equation}\nIf~\\eqref{e.domclam2} holds, then by~\\eqref{e.varphimax} we have\n\\begin{multline*}\n\\left\\{ A\\in \\Omega \\,:\\, \\sup_{0<\\varepsilon\\leq1}\\sup_{x\\in\\mathbb{R}^d} \\left( G_\\varepsilon(x,0; A) - C R^{d-2+\\gamma} \\left( 1+ |x|\\right)^{2-d}\\right) >0 \\right\\} \\\\\n\n \\subseteq \\left\\{ A\\in\\Omega\\,:\\, \\X(A) > R \\right\\}.\n\\end{multline*}\nHowever this implies that, for every $R\\geq C$, $0<\\varepsilon \\leq 1$ and $x\\in\\mathbb{R}^d$,\n\\begin{equation*} \\label{}\nG_\\varepsilon(x,0) \\leq C \\X^{d-2+\\gamma} \\left(1+ |x|\\right)^{2-d}.\n\\end{equation*}\nSetting~$\\delta := 1-\\gamma \\geq c(d,\\lambda,\\Lambda)>0$, we obtain~\\eqref{e.Greenest2}.\n\n\\smallskip\n\n\\emph{Step 3.} We prove~\\eqref{e.domclam2}. Fix $A\\in \\Omega$, $0< \\varepsilon\\leq 1$ and $R\\geq 10\\sqrt{d}$ for which \n\\begin{equation*} \\label{}\n\\sup_{\\mathbb{R}^d} \\left( G_\\varepsilon(\\cdot,0) - \\varphi_R \\right) > 0. \n\\end{equation*}\nThe goal is to show that $\\X \\geq R$, at least if $R\\geq C$. We do this by exhibiting $|z| > R$ and functions $u$ and $v$ satisfying~\\eqref{e.X1defp1}, but not~\\eqref{e.X1defp2}. \n\n\\smallskip\n\nAs the functions $G_\\varepsilon(\\cdot,0)$ and $\\varphi_R$ decay at infinity (c.f. Lemma~\\ref{l.Gtails}), there exists a point $x_0 \\in \\mathbb{R}^d$ such that \n\\begin{equation*} \\label{}\nG_\\varepsilon(x_0,0) - \\varphi_R(x_0) = \\sup_{\\mathbb{R}^d} \\left( G_\\varepsilon(\\cdot,0) - \\varphi_R \\right) > 0.\n\\end{equation*}\nBy the maximum principle and~\\eqref{e.eqvarphiR}, it must be that $|x_0| \\geq R$. By~\\eqref{e.belowhalf},\n\\begin{equation} \\label{e.Greent1}\nG_\\varepsilon(x_0,0) - \\psi_R(x_0) = \\sup_{B_{|x_0|\/2}(x_0)} \\left( G_\\varepsilon(\\cdot,0) - \\psi_R \\right).\n\\end{equation}\nWe perturb $\\psi_R$ by setting $\\widetilde \\psi_R (x):= \\psi_R(x) + c|x_0|^{-2-\\beta} \\psi_R(x_0)|x-x_{0}|^{2}$ which, in view of~\\eqref{e.psiR}, satisfies\n\\begin{equation*} \\label{}\n-\\Delta \\widetilde \\psi_R \\geq 0 \\quad \\mbox{in} \\ B_{|x_0|\/2}(x_0).\n\\end{equation*}\nThe perturbation improves~\\eqref{e.Greent1} to\n\\begin{equation*} \nG_\\varepsilon(x_0,0) - \\tilde \\psi_R(x_0) \\geq \\sup_{\\partial B_{|x_0|\/2}(x_0)} \\left( G_\\varepsilon(\\cdot,0) - \\tilde \\psi_R \\right) + c|x_0|^{-\\beta} \\psi_R(x_0).\n\\end{equation*}\nIf $R\\geq C$, then we may take $z_0 \\in \\mathbb{Z}^d$ to be the nearest lattice point to $x_0$ such that $|z_0| > |x_0|$ and get $x_0\\in B_{|z_0| \/4}(z_{0})$. 
Since $\\psi(|x|)$ is decreasing in $|x|$, this implies\n\\begin{equation*}\nG_\\varepsilon(x_0,0) - \\tilde \\psi_R(x_0) \\geq \\sup_{\\partial B_{|z_0|\/4}(z_0)} \\left( G_\\varepsilon(\\cdot,0) - \\tilde \\psi_R \\right) + c|z_0|^{-\\beta} \\psi_R(z_0).\n\\end{equation*}\nIn view of~\\eqref{e.hesspsib}, this gives\n\\begin{equation*} \\label{}\nG_\\varepsilon(x_0,0) - \\tilde \\psi_R(x_0) \\geq \\sup_{\\partial B_{|z_0|\/4}(z_0)} \\left( G_\\varepsilon(\\cdot,0) - \\tilde \\psi_R \\right) + c \\Gamma |z_0|^{-\\beta},\n\\end{equation*}\nwhere \n\\begin{equation*} \\label{}\n\\Gamma: = \\osc_{\\partial B_{|z_0|\/4}(z_0)} \\tilde \\psi_R + |z_0| \\big[ \\tilde \\psi_R \\big]_{C^{0,1}(\\partial B_{|z_0|\/4}(z_0))}.\n\\end{equation*}\nWe have thus found functions satisfying~\\eqref{e.X1defp1} but in violation of~\\eqref{e.X1defp2}. That is, we deduce from the definition of $\\mathcal Y_z$ that $\\mathcal Y_{z_0} \\geq c|z_0|^{s+\\alpha-\\beta}-C$ and, in view of the fact that $\\beta =\\frac12\\alpha< \\alpha$ and $|z_0|>R$, this implies that $\\mathcal Y_{z_0} \\geq 2^d|z_0|^{s}$ provided $R \\geq C$. Hence $\\X \\geq |z_0| > R$. This completes the proof of~\\eqref{e.domclam2}. \n\\end{proof}\n\n\n\\subsection{Dimension two}\n\nThe argument for~\\eqref{e.Greenest} in two dimensions follows along similar lines as the proof when $d\\geq 3$, however the normalization of $G_\\varepsilon$ is more tricky since it depends on $\\varepsilon$. \n\n\\begin{lemma}\\label{l.testfcnd2}\nLet $s\\in (0, 2)$. Then there exist constants $C, c, \\gamma, \\beta>0$, depending only on $(s, d, \\lambda, \\Lambda, \\ell)$, and a family of continuous functions $\\left\\{\\varphi_{R, \\varepsilon}\\right\\}_{R\\geq C, \\varepsilon\\leq c}$ satisfying the following: (i) for every $R\\geq C$,\n\\begin{equation} \\label{e.varphiepbnds2}\n\\varphi_{R,\\varepsilon}(x) \\leq CR^\\gamma \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right) \\exp\\left( - a \\varepsilon |x| \\right),\n\\end{equation}\n(ii) there exists a smooth function $\\psi_{R, \\varepsilon}$ such that \n\\begin{equation*}\n\\left\\{\n\\begin{aligned}\n& -\\Delta \\psi_{R, \\varepsilon}\\geq c|x|^{-2-\\beta}\\psi_{R,\\varepsilon} & \\mbox{in} & \\ B_{2\\varepsilon^{-1}}\\setminus B_{R\/2}, \\\\\n& \\varphi_{R, \\varepsilon}\\leq \\psi_{R, \\varepsilon} & \\mbox{in} & \\ B_{2\\varepsilon^{-1}}\\setminus B_{R\/2},\n\\end{aligned}\n\\right.\n\\end{equation*}\n and (iii) for every $R\\ge C$ and $A\\in\\Omega$,\n\\begin{equation} \\label{e.eqvarphiR2}\n-\\tr\\left(A(x) D^2 \\varphi_{R, \\varepsilon} \\right) \\geq \\chi_{B_\\ell} \\quad \\mbox{in} \\ B_R\\bigcup\\left(\\mathbb{R}^{2}\\setminus B_{\\varepsilon^{-1}}\\right) .\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nThroughout we assume $d=2$, we fix $s\\in (0,2)$ and let $C$ and $c$ denote positive constants which depend only on $(s,\\lambda,\\Lambda,\\ell)$ and may vary in each occurrence. We roughly follow the outline of the proof of~\\eqref{e.Greenest} above in the case of $d\\geq 3$. \n\n\\smallskip\n\n\\emph{Step 1.} The definition of $\\varphi_R$. 
For $\\varepsilon\\in (0,\\frac12]$, $4\\ell\\leq R\\leq \\varepsilon^{-1}$ and $x\\in\\mathbb{R}^2$, we set\n\\begin{equation*} \\label{}\n\\varphi_{R,\\varepsilon}(x):= \\left\\{ \\begin{aligned}\n& m_{R,\\varepsilon} - \\frac{h}{\\gamma} \\left( \\ell^2 + |x|^2 \\right)^{\\frac\\gamma2}, && 0\\leq |x| \\leq R, \\\\\n& k_{R}\\left( \\frac1a \\exp(a) + \\left| \\log \\varepsilon \\right| - \\log |x| \\right) \\exp \\left( -\\frac1\\beta |x|^{-\\beta} \\right), && R< |x| \\leq \\frac 1\\varepsilon, \\\\\n& b_{R,\\varepsilon} \\exp\\left( -a\\varepsilon |x|\\right), && |x| > \\frac 1\\varepsilon,\n\\end{aligned} \\right.\n\\end{equation*}\nwhere the constants are defined as follows:\n\\begin{align*}\n a & : = a(\\lambda,\\Lambda) \\ \\ \\mbox{is the constant from Lemma~\\ref{l.Gtails},}\\\\\n2\\beta & := \\lefteqn{ \\alpha(s,\\lambda,\\Lambda,\\ell) > 0 \\ \\ \\mbox{is the exponent from Proposition~\\ref{p.subopt} with $\\sigma=1$,}} \\\\\n \\gamma & := \\max\\left\\{ \\frac12 , 1-\\frac{\\lambda}{2\\Lambda} \\right\\}, \\\\ \n h &:= \\frac{2}{\\lambda} (2\\ell)^{2-\\gamma},\\\\\n k_R & := 2h \\exp\\left(\\frac1\\beta 2^\\beta R^{-\\beta} \\right) R^{\\gamma} , \\\\\n m_{R,\\varepsilon} & := \\frac{h}{\\gamma} \\left( \\ell^2 + R^2 \\right)^{\\frac\\gamma2} + k_R \\left( \\frac1a\\exp(a) + \\left| \\log \\varepsilon \\right| - \\log R \\right) \\exp\\left(-\\frac1\\beta R^{-\\beta} \\right),\\\\\n b_{R,\\varepsilon} & :=\\frac{1}{a}k_R \\exp\\left(2a -\\frac1\\beta \\varepsilon^{\\beta} \\right)\n\\end{align*}\nObserve that\n\\begin{equation} \\label{e.easy2bnds}\nk_R \\leq CR^\\gamma, \\quad m_{R,\\varepsilon} \\leq C R^\\gamma \\left( 1+ \\left| \\log \\varepsilon \\right| - \\log R \\right) \\quad \\mbox{and} \\quad b_{R,\\varepsilon} \\leq CR^\\gamma.\n\\end{equation}\n\n\n\\smallskip\n\n\\emph{Step 2.} We show that, for every $\\varepsilon \\in(0, \\frac12]$, $4\\ell \\leq R\\leq \\varepsilon^{-1}$ and $x\\in\\mathbb{R}^2$,~\\eqref{e.varphiepbnds2} holds. This is relatively easy to check from the definition of $\\varphi_{R,\\varepsilon}$, using~\\eqref{e.easyexp} and~\\eqref{e.easy2bnds}. For $x\\in B_R$, we use $\\varphi_{R,\\varepsilon} \\leq m_{R,\\varepsilon}$,~\\eqref{e.easy2bnds} and $\\exp(-a\\varepsilon R) \\geq c$ to immediately obtain~\\eqref{e.varphiepbnds2}. For $x\\in B_{\\varepsilon^{-1}}\\setminus B_R$, the estimate is obtained from the definition of $\\varphi_{R,\\varepsilon}$, the bound for $k_R$ in~\\eqref{e.easy2bnds} and~\\eqref{e.easyexp}. For $x\\in \\mathbb{R}^{2}\\setminus B_{\\varepsilon^{-1}}$, the logarithm factor on the right side of~\\eqref{e.varphiepbnds2} is $C$ and we get~\\eqref{e.varphiepbnds2} from the bound for $b_{R,\\varepsilon}$ in~\\eqref{e.easy2bnds}.\n\n\n\n\\smallskip\n\n\\emph{Step 3.} We show that, for $\\varepsilon \\in (0,c]$ and $R\\geq C$, we have\n\\begin{multline} \\label{e.midrngs}\n\\varphi_{R,\\varepsilon} (x) \\leq \\psi_{R,\\varepsilon} (x) := k_{R}\\left( \\frac1a\\exp(a) + \\left| \\log \\varepsilon \\right| - \\log |x| \\right) \\exp \\left( -\\frac1\\beta |x|^{-\\beta} \\right) \\\\ \\mbox{in} \\ \\frac12 R \\leq |x| \\leq \\frac{2}{\\varepsilon}.\n\\end{multline}\nWe have equality in~\\eqref{e.midrngs} for $R \\leq |x| \\leq \\varepsilon^{-1}$ by the definition of~$\\varphi_{R,\\varepsilon}$. 
As $\varphi_{R,\varepsilon}$ is radial, it therefore suffices to check that the magnitude of the radial derivative of $\varphi_{R,\varepsilon}$ is less than (respectively, greater than) that of $\psi_{R,\varepsilon}$ in the annulus $\{ R\/2\leq |x| \leq R\}$ (respectively, $\{ \varepsilon^{-1} \leq |x| \leq 2\varepsilon^{-1}\}$). This is ensured by the definitions of $k_R$ and $b_{R,\varepsilon}$, as the following routine computation verifies: first, for $x\in B_R \setminus B_{R\/2}$, we have \n\begin{align*}\n\left| D\varphi_{R,\varepsilon}(x) \right| = h\left( \ell^2 +|x|^2 \right)^{\frac\gamma2-1} |x| < h |x|^{\gamma-1},\n\end{align*}\nand thus in $B_{R}\setminus B_{R\/2}$, provided $R\geq C$, we have that \n\begin{align*}\n\left| D\psi_{R,\varepsilon}(x) \right| & = k_R |x|^{-1} \left| -1+|x|^{-\beta} \left( \frac1a\exp(a) + \left| \log \varepsilon \right| - \log |x| \right) \right|\exp\left( -\frac1\beta |x|^{-\beta} \right) \\\n& > \frac{1}{2}k_R |x|^{-1} \exp\left( -\frac1\beta 2^\beta R^{-\beta} \right) = h R^{\gamma} |x|^{-1} \geq h |x|^{\gamma-1} > \left| D\varphi_{R,\varepsilon}(x) \right|.\n\end{align*}\nNext we consider $x\in B_{2\varepsilon^{-1}} \setminus B_{\varepsilon^{-1}}$ and estimate\n\begin{align*}\n\left| D\varphi_{R,\varepsilon}(x) \right| = a\varepsilon b_{R,\varepsilon} \exp\left( -a\varepsilon|x| \right) > a \varepsilon b_{R,\varepsilon} \exp\left( -2a \right) = 2\varepsilon k_R \exp\left(-\frac{1}{\beta}\varepsilon^{\beta}\right)\n\end{align*}\nand\n\begin{align*}\n\left| D\psi_{R,\varepsilon}(x) \right| & \leq k_R |x|^{-1} \left(1+ \frac{1}{a}\exp(a)|x|^{-\beta}\right)\exp\left(-\frac{1}{\beta}\frac{\varepsilon^{\beta}}{2^{\beta}}\right)\leq 2\varepsilon k_{R}\exp\left(-\frac{1}{\beta}\varepsilon^{\beta}\right), \n\end{align*}\nthe latter holding provided that $\varepsilon \leq c$. This completes the proof of~\eqref{e.midrngs}.\n\n\smallskip\n\n\emph{Step 4.} We show that $\psi_{R,\varepsilon}$ satisfies\n\begin{equation} \label{e.psiRep}\n-\Delta \psi_{R,\varepsilon} \geq c |x|^{-2-\beta} \psi_{R,\varepsilon}(x) \quad \mbox{in} \ C \leq |x| \leq \frac{2}{\varepsilon}.\n\end{equation}\nBy a direct computation, for every $x\in \mathbb{R}^2\setminus\{ 0\}$, we have\n\begin{align*} \label{}\n-\Delta \psi_{R,\varepsilon}(x) & = |x|^{-2-\beta} \left( \left(\beta - |x|^{-\beta} \right) \psi_{R,\varepsilon}(x) + k_R\left(\frac{1}{|x|^{2}}+1\right) \exp\left( -\frac1\beta|x|^{-\beta} \right) \right) \\\n& \geq |x|^{-2-\beta}\left(\beta - |x|^{-\beta} \right) \psi_{R,\varepsilon}(x).\n\end{align*}\nFrom $\beta \geq c$ and the definition of~$\psi_{R,\varepsilon}$, we see that $\psi_{R,\varepsilon} >0$ and $(\beta-|x|^{-\beta}) \geq c$ for every $\beta^{-1\/\beta} \leq |x| \leq 2\varepsilon^{-1}$. 
This yields~\eqref{e.psiRep}.\n\nFor future reference, we note that, for every $|x| \leq 2\varepsilon^{-1}$,\n\begin{equation} \label{e.blaggber}\n|x|^{-2} \osc_{B_{|x|\/2}(x)} \psi_{R,\varepsilon} + \sup_{y\in B_{|x|\/2}(x)} \left( |y|^{-1} \left|D\psi_{R,\varepsilon}(y)\right| \right) \leq C |x|^{-2} \leq C |x|^{-2} \psi_{R,\varepsilon}(x).\n\end{equation}\n\n\smallskip\n\n\emph{Step 5.} We check that~\eqref{e.eqvarphiR2} holds, that is, that for every $A\in \Omega$, \n\begin{equation}\label{e.eqvarphiR2*}\n-\tr\left(A(x) D^2 \varphi_{R,\varepsilon} \right) \geq \chi_{B_{\ell}} \quad \mbox{in} \ B_R \cup (\mathbb{R}^2\setminus B_{\varepsilon^{-1}}).\n\end{equation}\nIn fact, for $B_R$, the computation is identical to the one which established~\eqref{e.eqvarphiR}, since our function $\varphi_{R,\varepsilon}$ here is the same as $\varphi_R$ (from the argument in the case $d>2$) in $B_R$, up to a constant. Therefore we refer to Step 4 in the proof of Lemma~\ref{l.testfcnd3} for details. In $\mathbb{R}^{2}\setminus B_{\varepsilon^{-1}}$, $\varphi_{R, \varepsilon}$ is a supersolution by the proof of Lemma \ref{l.Gtails} and by the choice of $a$. \n\n We remark that in the case that $R= \varepsilon^{-1}$, the middle annulus in the definition of $\varphi_{R,\varepsilon}$ disappears, and we have that $\varphi_{R,\varepsilon}$ is a global (viscosity) supersolution, that is, \n\begin{equation} \label{e.eqvarphiR2spec}\n-\tr\left(A(x) D^2 \varphi_{R,\varepsilon} \right) \geq \chi_{B_{\ell}} \quad \mbox{in} \ \mathbb{R}^2 \quad \mbox{if} \ R = \varepsilon^{-1}.\n\end{equation}\n\n\nTo see this, note that by~\eqref{e.eqvarphiR2} we need only check that $\varphi_{R,\varepsilon}$ is a viscosity supersolution of~\eqref{e.eqvarphiR2} on the sphere $\partial B_R = \partial B_{\varepsilon^{-1}}$. However, the function $\varphi_{R,\varepsilon}$ cannot be touched from below on this sphere, since, by the computations in Step 3, the magnitude of its inner radial derivative is smaller than that of its outer radial derivative. Therefore we have~\eqref{e.eqvarphiR2spec}. It follows by comparison that, for every $\varepsilon \in (0,\frac12]$ and $x\in \mathbb{R}^2$,\n\begin{align} \label{e.taut}\nG_\varepsilon(x,0) &\leq \varphi_{R,\varepsilon}(x) \quad \mbox{if} \ R = \varepsilon^{-1}\notag\\\n&\leq C\varepsilon^{-\gamma} \log\left( 2 + \frac{1}{\varepsilon(1+|x|)} \right) \exp\left( - a \varepsilon |x| \right).\n\end{align}\n\n\n\n\end{proof}\n\n\begin{proof}[Proof of Proposition~\ref{p.Green}~when $d=2$]\nThe proof is very similar to that of the case $d\geq 3$, using the appropriate adaptations for the new test function introduced in Lemma~\ref{l.testfcnd2}. As before, we fix $s\in (0,2)$ and let $C$ and $c$ denote positive constants which depend only on $(s, \lambda, \Lambda, \ell)$. We use the notation developed in Lemma~\ref{l.testfcnd2} throughout the proof. 
\n\n\\smallskip\n\n\\emph{Step 1.} We define the random variable $\\X$ in exactly the same way as in \\eqref{e.defscrC}, so \n\\begin{equation}\\label{e.defscrC2}\n\\X(A) := \\sup \\left\\{ |z| \\, :\\, z\\in \\mathbb{Z}^d, \\ \\mathcal Y_z(A) \\geq 2^d|z|^s \\right\\},\n\\end{equation}\nThe argument leading to \\eqref{e.expC1m} follows exactly as before, so that \n\\begin{equation*}\n\\mathbb{E}[ \\exp(\\X^{s}) ]\\leq C(s, \\lambda, \\Lambda, \\ell)<\\infty.\n\\end{equation*}\n\n\\smallskip\n\n\\emph{Step 2.} We reduce the proposition to the claim that, for every $R\\geq C$,\n\\begin{multline}\n\\label{e.domclam22d}\n\\left\\{ A\\in \\Omega \\,:\\, \\sup_{0<\\varepsilon< \\frac1R} \\sup_{x\\in \\mathbb{R}^2} \\left( G_\\varepsilon(x,0,A) - \\varphi_{R,\\varepsilon}(x) \\right) >0 \\right\\} \\\\ \\subseteq \\left\\{ A \\in \\Omega \\,:\\, \\X (A) > R \\right\\}.\n\\end{multline}\nIf~\\eqref{e.domclam22d} holds, then by~\\eqref{e.varphiepbnds2} we have\n\\begin{align*}\n\\left\\{ A \\,:\\, \\sup_{0<\\varepsilon<\\frac1R}\\ \\sup_{x\\in\\mathbb{R}^2} \\left( G_\\varepsilon(x,0,A) - CR^\\gamma \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right) \\exp\\left( - a \\varepsilon |x| \\right)\\right) >0 \\right\\} \\\\\n \\subseteq \\left\\{ A\\in\\Omega\\,:\\, \\X(A) > R \\right\\}.\n\\end{align*}\nFrom this, we deduce that, for every $0<\\varepsilon < \\X^{-1}$ and $x\\in\\mathbb{R}^2$,\n\\begin{equation} \\label{e.Gep2bnd}\nG_\\varepsilon(x,0) \\leq C \\X^{\\gamma} \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right) \\exp\\left( - a \\varepsilon |x| \\right).\n\\end{equation}\nMoreover, if $\\varepsilon \\in (0,\\frac12]$ and $\\varepsilon \\geq \\X^{-1}$, then by ~\\eqref{e.taut} we have\n\\begin{align*}\nG_\\varepsilon(x,0) & \\leq C \\varepsilon^{-\\gamma} \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right) \\exp\\left( - a \\varepsilon |x| \\right) \\\\\n& \\leq C \\X^{\\gamma} \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right) \\exp\\left( - a \\varepsilon |x| \\right).\n\\end{align*}\nThus we have~\\eqref{e.Gep2bnd} for every $\\varepsilon \\in (0,\\frac12]$ and $x\\in\\mathbb{R}^2$. Taking $\\delta := 1-\\gamma \\geq c$, we obtain the desired conclusion~\\eqref{e.Greenest} for $d=2$.\n\n\\smallskip\n\n\\emph{Step 3.} We prove~\\eqref{e.domclam22d}. This is almost the same as the last step in the proof of~\\eqref{e.domclam2} for dimensions larger than two. Fix $A\\in \\Omega$, $0< \\varepsilon\\leq 1$ and $R\\geq 2\\ell$ for which \n\\begin{equation*} \\label{}\n\\sup_{\\mathbb{R}^d} \\left( G_\\varepsilon(\\cdot,0) - \\varphi_{R,\\varepsilon} \\right) > 0. \n\\end{equation*}\nAs in the case of dimensions larger than two, we select a point $x_0 \\in \\mathbb{R}^d$ such that \n\\begin{equation*} \\label{}\nG_\\varepsilon(x_0,0) - \\varphi_{R,\\varepsilon}(x_0) = \\sup_{\\mathbb{R}^d} \\left( G_\\varepsilon(\\cdot,0) - \\varphi_{R,\\varepsilon} \\right) > 0.\n\\end{equation*}\nBy the maximum principle and~\\eqref{e.eqvarphiR2}, it must be that $R \\leq |x_0| \\leq \\varepsilon^{-1}$. 
By~\eqref{e.midrngs},\n\begin{equation} \label{e.Greent12d}\nG_\varepsilon(x_0,0) - \psi_{R,\varepsilon}(x_0) = \sup_{B_{|x_0|\/2}(x_0)} \left( G_\varepsilon(\cdot,0) - \psi_{R,\varepsilon} \right).\n\end{equation}\nWe perturb $\psi_{R,\varepsilon}$ by setting $\widetilde \psi_{R,\varepsilon} (x):= \psi_{R,\varepsilon}(x) + c|x_0|^{-2-\beta} \psi_{R,\varepsilon}(x_0) |x-x_0|^2$ which, in view of~\eqref{e.psiRep}, satisfies\n\begin{equation*} \label{}\n-\Delta \widetilde \psi_{R,\varepsilon} \geq 0 \quad \mbox{in} \ B_{|x_0|\/2}(x_0).\n\end{equation*}\nThe perturbation improves~\eqref{e.Greent12d} to\n\begin{equation*}\nG_\varepsilon(x_0,0) - \tilde \psi_{R,\varepsilon}(x_0) \geq \sup_{\partial B_{|x_0|\/2}(x_0)} \left( G_\varepsilon(\cdot,0) - \tilde \psi_{R,\varepsilon} \right) + c|x_0|^{-\beta} \psi_{R,\varepsilon}(x_0).\n\end{equation*}\nAssuming $R\geq C$, we may take $z_0 \in \mathbb{Z}^d$ to be the nearest lattice point to $x_0$ such that $|z_0| > |x_0|$ and deduce that $x_0\in B_{|z_0|\/4}(z_0)$ as well as\n\begin{equation*}\nG_\varepsilon(x_0,0) - \tilde \psi_{R,\varepsilon}(x_0) \geq \sup_{\partial B_{|z_0|\/4}(z_0)} \left( G_\varepsilon(\cdot,0) - \tilde \psi_{R,\varepsilon} \right) + c|z_0|^{-\beta} \psi_{R,\varepsilon}(z_0).\n\end{equation*}\nIn view of~\eqref{e.blaggber}, this gives\n\begin{equation*} \label{}\nG_\varepsilon(x_0,0) - \tilde \psi_{R,\varepsilon}(x_0) \geq \sup_{\partial B_{|z_0|\/4}(z_0)} \left( G_\varepsilon(\cdot,0) - \tilde \psi_{R,\varepsilon} \right) + c \Gamma |z_0|^{-\beta},\n\end{equation*}\nwhere \n\begin{equation*} \label{}\n\Gamma: = |z_0|^{-2} \osc_{\partial B_{|z_0|\/4}(z_0)} \tilde \psi_{R,\varepsilon} + |z_0|^{-1} \big[ \tilde \psi_{R,\varepsilon} \big]_{C^{0,1}(\partial B_{|z_0|\/4}(z_0))}.\n\end{equation*}\nWe have thus found functions satisfying~\eqref{e.X1defp1} but in violation of~\eqref{e.X1defp2}, that is, we deduce from the definition of $\mathcal Y_z$ that $\mathcal Y_{z_0} \geq c|z_0|^{s+\alpha-\beta}-C$. In view of the fact that $\beta =\frac12\alpha< \alpha$ and $|z_0|>R$, this implies that $\mathcal Y_{z_0} \geq 2^d|z_0|^{s}$ provided $R \geq C$. By the definition of $\X$, we obtain $\X \geq |z_0| > R$. This completes the proof of~\eqref{e.domclam22d}. \n\end{proof}\n\n\n\n\section{Sensitivity estimates}\n\label{s.sensitivity}\nIn this section, we present an estimate which uses the Green's function bounds to control the vertical derivatives introduced in the spectral gap inequality (Proposition \ref{p.concentration}). Recall the notation from Proposition~\ref{p.concentration}:\n\begin{equation*} \label{}\nX_z' := \mathbb{E}_* \left[ X \,\vert\, \mathcal{F}_*(\mathbb{Z}^d\setminus \{ z\}) \right] \qquad \mbox{and} \qquad \mathbb{V}_*\!\left[ X \right] := \sum_{z\in\mathbb{Z}^{d}} (X-X'_z)^2.\n\end{equation*}\nThe vertical derivative $(X-X'_{z})$ measures, in a precise sense, the \emph{sensitivity} of $X$ to changes in the environment near $z$; we may therefore interpret it as a derivative of $X$ with respect to the coefficients near $z$. The goal then will be to understand the vertical derivative $(X-X_z')$ when $X$ is $\phi_{\varepsilon}(x)$, for fixed $x\in \mathbb{R}^{d}$. 
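\n\nFor orientation, we record an elementary example, which is not needed in what follows: suppose $X = F(\omega(z_0))$ for a fixed site $z_0\in\mathbb{Z}^d$ and a bounded measurable $F$. If the coordinates $(\omega(y))_{y\in\mathbb{Z}^d}$ are independent, as in the product setting underlying the resampling identity~\eqref{e.resampling} below, then\n\begin{equation*} \label{}\nX - X_z' = \left\{ \begin{aligned}\n& F(\omega(z_0)) - \mathbb{E}_*\!\left[ F(\omega(z_0)) \right] & \mbox{if} & \ z = z_0, \\\n& 0 & \mbox{if} & \ z \neq z_0,\n\end{aligned} \right.\n\end{equation*}\nso that $\mathbb{V}_*\!\left[ X \right]$ reduces to the squared fluctuation of the single coefficient $F(\omega(z_0))$.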
\n\\smallskip\n\nThe main result of this section is the following proposition which computes $\\phi_\\varepsilon(x) - \\mathbb{E}_*\\!\\left[ \\phi_\\varepsilon(x) \\,\\vert\\, \\mathcal{F}_*(\\mathbb{Z}^d\\setminus \\{ z \\})\\right]$ in terms of the random variable introduced in Proposition \\ref{p.Green}. Throughout the rest of the section, we fix $M\\in \\mathbb{S}^{d}$ with $|M|=1$ and let $\\xi_{\\varepsilon}$ be defined as in \\eqref{e.dimsep}. \n\n\\begin{proposition}\\label{p.sensitivity}\nFix $s\\in (0,d)$. There exist positive constants $a(d,\\lambda,\\Lambda)>0$, $\\delta(d,\\lambda,\\Lambda)>0$ and an $\\mathcal{F}_*$--measurable random variable $\\X:\\Omega_* \\to [1,\\infty)$ satisfying\n\\begin{equation} \\label{e.scruff}\n\\mathbb{E} \\left[ \\exp(\\X^s) \\right] \\leq C(s,d,\\lambda,\\Lambda,\\ell) < \\infty\n\\end{equation}\nsuch that, for every $\\varepsilon\\in \\left(0,\\tfrac12\\right]$, $x\\in\\mathbb{R}^d$ and $z\\in \\mathbb{Z}^d$, \n\\begin{equation}\\label{e.Vbound1}\n\\left| \\phi_\\varepsilon(x)\\right.\\left. - \\mathbb{E}_*\\!\\left[ \\phi_\\varepsilon(x) \\,\\vert\\, \\mathcal{F}_*(\\mathbb{Z}^d\\setminus \\{ z \\}) \\right] \\right|\n\\leq (T_z\\X)^{d+1-\\delta} \\xi_{\\varepsilon}(x-z).\n\\end{equation} \n\\end{proposition}\n\nBefore beginning the proof of Proposition~\\ref{p.sensitivity}, we first provide a heuristic explanation of the main argument. We begin with the observation that we may identify the conditional expectation $\\mathbb{E}_*\\!\\left[ X \\,\\vert\\, \\mathcal{F}_*(\\mathbb{Z}^d \\setminus \\{ z \\}) \\right]$ via resampling in the following way. Let $(\\Omega_*',\\mathcal{F}_*', \\P_*')$ denote an independent copy of $(\\Omega_*,\\mathcal{F}_*, \\P_*)$ and define, for each $z\\in\\mathbb{Z}^d$, a map\n\\begin{equation*} \\label{}\n\\theta_z':\\Omega\\times\\Omega'\\to \\Omega\n\\end{equation*}\nby\n\\begin{equation*} \\label{}\n\\theta_z'(\\omega,\\omega')(y):= \\left\\{ \\begin{aligned} & \\omega(y) & \\mbox{if} \\ y\\neq z,\\\\\n& \\omega'(z) & \\mbox{if} \\ y=z.\n\\end{aligned}\\right.\n\\end{equation*}\nIt follows that, for every $\\omega\\in\\Omega$,\n\\begin{equation} \\label{e.resampling}\n\\mathbb{E}_*\\left[ X \\,\\vert\\, \\mathcal{F}_*(\\mathbb{Z}^d\\setminus \\{ z \\}) \\right](\\omega) = \\mathbb{E}_*'\\left[ X(\\theta_z'(\\omega,\\cdot))\\right].\n\\end{equation}\n\nTherefore, we are interested in estimating differences of the form $X(\\omega) - X( \\theta'_z(\\omega,\\omega'))$, which represent the expected change in $X$ if we resample the environment at~$z$. Observe that, by ~\\eqref{e.siginclusion}, if $\\omega,\\omega' \\in\\Omega_*$, $z\\in \\mathbb{R}^d$, and $A := \\pi(\\omega)$ and $A':=\\pi(\\theta_z'(\\omega,\\omega'))$, then\n\\begin{equation} \\label{e.AtotA}\nA \\equiv A'\\quad \\mbox{in} \\ \\mathbb{R}^d\\setminus B_{\\ell\/2}(z).\n\\end{equation}\nDenote by~$\\phi_\\varepsilon$ and~$\\phi_\\varepsilon'$ the corresponding approximate correctors with modified Green's functions~$G_\\varepsilon$ and~$G_\\varepsilon'$. Let $w:= \\phi_\\varepsilon - \\phi_\\varepsilon'$. Then we have\n\\begin{align*} \\label{}\n\\varepsilon^2 w - \\tr\\left( A'(x) D^2w \\right) = \\tr\\left( \\left( A(x) - A'(x) \\right)\\right.&\\left.(M+D^2\\phi_\\varepsilon) \\right) \\\\\n&\\leq d\\Lambda\\left(1+ \\left| D^2 \\phi_\\varepsilon(x) \\right|\\right) \\chi_{B_{\\ell\/2}(z)}(x),\n\\end{align*}\nwhere~$\\chi_E$ denotes the characteristic function of a set~$E\\subseteq\\mathbb{R}^d$. 
By comparing~$w$ to~$G_\varepsilon'$, we deduce that, for $C(d,\lambda,\Lambda)\geq1$,\n\begin{equation} \label{e.rescomp}\n\phi_\varepsilon(x) - \phi_\varepsilon'(x) \leq C(1+\left[ \phi_\varepsilon \right]_{C^{1,1}(B_{\ell\/2}(z))}) G_\varepsilon'(x,z).\n\end{equation}\n\nIf $\phi_{\varepsilon}$ satisfied a pointwise $C^{1,1}$ bound, then by~\eqref{e.pwcC11} and~\eqref{e.Greenest} we would deduce that\n\begin{equation*} \label{}\n\phi_\varepsilon(0) - \phi_\varepsilon'(0) \leq C \left((T_z\X)(\omega)\right)^2 \left((T_z\Y)(\theta'_z(\omega,\omega'))\right)^{d-1-\delta} \xi_{\varepsilon}(z),\n\end{equation*}\nfor $\X$ defined as in \eqref{e.pwcC11}, $\Y$ defined as in \eqref{e.Greenest}, and $\xi_{\varepsilon}(z)$ defined as in~\eqref{e.dimsep}.\n\nTaking expectations of both sides with respect to $\P_*'$, we obtain\n\begin{equation} \label{e.wtssense}\n \phi_\varepsilon(0) - \mathbb{E}_*\!\left[ \phi_\varepsilon(0) \,\vert\, \mathcal{F}_*(\mathbb{Z}^d\setminus \{ z \}) \right] \leq C (T_z\X)^2 (T_z\Y_*)^{d-1-\delta}\xi_{\varepsilon}(z),\n\end{equation}\nwhere \n\begin{equation} \label{e.Ysrv}\n\Y_*:= \mathbb{E}_*' \left[ \Y(\theta'_0(\omega,\omega'))^{d-1-\delta} \right]^{1\/(d-1-\delta)}.\n\end{equation}\n\nJensen's inequality implies that the integrability of $\Y_*$ is controlled by the integrability of $\Y$. First, consider that, for $s\geq d-1-\delta$, by the convexity of $t\mapsto \exp(t^{r})$ for $r\geq 1$, we have\n\begin{align*}\n\mathbb{E} \left[ \exp\left( \Y_*^s \right) \right] & = \mathbb{E} \left[ \exp\left( \mathbb{E}_*' \left[ \Y(\theta'_0(\omega,\omega'))^{d-1-\delta} \right]^{s\/(d-1-\delta)} \right) \right] \\\n& \leq \mathbb{E} \left[ \mathbb{E}_*' \left[ \exp\left( \Y(\theta'_0(\omega,\omega'))^{s} \right) \right] \right] \\\n& = \mathbb{E} \left[ \exp\left( \Y^s \right) \right].\n\end{align*}\nIntegrability of lower moments for $s\in (0, d-1-\delta)$ follows by the bound\n\begin{equation*}\n\mathbb{E} \left[ \exp\left( \Y_*^s \right) \right] \leq \mathbb{E} \left[ \exp\left( \Y_*^{d-1-\delta} \right) \right] \sup\left|\exp\left( -\Y_{*}^{d-1-\delta}+\Y_{*}^{s}\right)\right|\leq \mathbb{E} \left[ \exp\left( \Y_*^{d-1-\delta} \right) \right]\n\end{equation*}\nby the monotonicity of the map $p\mapsto x^{p}$ for $p\geq 0$ and $x\geq 1$, which we may assume without loss of generality by replacing $\Y_{*}$ with $\Y_{*}+1$. We may now redefine $\X$ to be $\X+\Y_*$ to get one side of the desired bound~\eqref{e.Vbound1}. The analogous bound from below is obtained by exchanging $M$ for $-M$ in the equation for $\phi_\varepsilon$, or by repeating the above argument and comparing~$w$ to~$-G_\varepsilon'$.\n\n\smallskip\n\nThe main reason that this argument fails to be rigorous is technical: the quantity $\left[ \phi_\varepsilon \right]_{C^{1,1}( B_{\ell\/2}(z))}$ is not actually controlled by Theorem~\ref{t.regularity}; rather, we have control only over the coarsened quantity $\left[ \phi_\varepsilon \right]_{C^{1,1}_{1}( B_{\ell\/2}(z))}$. Most of the work in the proof of Proposition~\ref{p.sensitivity} is therefore to fix this glitch by proving that~\eqref{e.rescomp} still holds if we replace the H\"older seminorm on the right side by the appropriate coarsened seminorm. 
This is handled by the following lemma, which we write in a rather general form:\n\begin{lemma}\n\label{l.coarseABP}\nLet $\varepsilon \in (0,\frac{1}{\ell}]$ and $z\in \mathbb{Z}^d$ and suppose $A,A'\in \Omega$ satisfy~\eqref{e.AtotA}. Also fix $f,f'\in C(\mathbb{R}^d) \cap L^\infty(\mathbb{R}^d)$ and let $u,u'\in C(\mathbb{R}^d) \cap L^{\infty}(\mathbb{R}^{d}) $ be the solutions of\n\begin{equation*} \label{}\n\left\{ \begin{aligned}\n& \varepsilon^2 u - \tr\left(A(x) D^2u\right) = f & \mbox{in} & \ \mathbb{R}^d, \\\n& \varepsilon^2 u' - \tr\left(A'(x) D^2u'\right) = f' & \mbox{in} & \ \mathbb{R}^d.\n\end{aligned} \right.\n\end{equation*}\nThen there exists~$C(d,\lambda,\Lambda,\ell)\geq 1$ such that, for every $x\in \mathbb{R}^d$ and $\delta\in (0,1]$,\n\begin{multline} \label{e.coarseABP}\nu(x) - u'(x) \\\n\leq C\left( \delta + \left[ u \right]_{C^{1,1}_1(z,B_{1\/\varepsilon}(z))} + \sup_{y\in B_\ell(z), \, y'\in\mathbb{R}^d} \left( f(y') - f(y) - \delta \varepsilon^{2}|y'-z|^2 \right) \right)G_\varepsilon'(x,z) \\ + \sum_{y\in \mathbb{Z}^d} G_\varepsilon(x,y) \sup_{B_{\ell\/2}(y)} (f-f').\n\end{multline}\nHere $G_\varepsilon'$ denotes the modified Green's function for $A'$. \n\end{lemma}\n\n\begin{proof}\nWe may assume without loss of generality that $z=0$. By replacing $u(x)$ by the function $$u''(x):= u(x)-\sum_{y\in\mathbb{Z}^d} G_\varepsilon(x,y) \sup_{B_{\ell\/2}(y)} (f-f'),$$ we may assume furthermore that~$f' \geq f$ in~$\mathbb{R}^d$. Fix $\varepsilon\in(0,\tfrac{1}{\ell}]$. We will show that\n\begin{equation*} \label{}\n\sup_{x\in \mathbb{R}^d} \left( u(x) - u'(x) - K G_\varepsilon'(x,0) \right) > 0\n\end{equation*}\nimplies a contradiction for \n\begin{equation*} \label{}\nK > \nC\left( \delta + \left[ u \right]_{C^{1,1}_1(0,B_{1\/\varepsilon})} + \sup_{y\in B_\ell, \, y'\in\mathbb{R}^d} \left( f(y') - f(y) - \delta \varepsilon^{2}|y'|^2 \right) \right)\n\end{equation*}\nand $C=C(d,\lambda,\Lambda,\ell)$ chosen sufficiently large.\n\n\smallskip\n\n\emph{Step 1.} We find a touching point $x_0 \in B_{\ell\/2}$. Consider the auxiliary function\n\begin{equation*} \label{}\n\xi(x):= u(x) - u'(x) - K G_\varepsilon'(x,0).\n\end{equation*}\nBy~\eqref{e.AtotA} and using that $f'\geq f$, we see that $\xi$ satisfies\n\begin{equation*} \label{}\n\varepsilon^2 \xi - \tr\left(A(x) D^2\xi\right) \leq 0 \quad \mbox{in} \ \mathbb{R}^d \setminus B_{\ell\/2}.\n\end{equation*}\nBy the maximum principle and the hypothesis, $\sup_{\mathbb{R}^d} \xi = \sup_{B_{\ell\/2}} \xi > 0$. Select $x_0\in B_{\ell\/2}$ such that \n\begin{equation} \label{e.touchx0}\n\xi(x_0) = \sup_{\mathbb{R}^d} \xi.\n\end{equation}\n\n\emph{Step 2.} We replace $u$ by a quadratic approximation in $B_{\ell}$ and get a new touching point. 
Select $p\in \mathbb{R}^d$ such that \n\begin{equation} \label{e.innerp1}\n\sup_{x\in B_{\ell} } \left| u(x) - u(x_{0}) - p\cdot (x-x_{0})\right| \leq 4\ell^2 \left[ u \right]_{C^{1,1}_1(0,B_{1\/\varepsilon})}.\n\end{equation}\nFix $\nu\geq1$ to be chosen below and define the function\n\begin{equation*} \label{}\n\psi(x):= u(x_0) + p\cdot (x-x_0) - \nu |x-x_0|^2 - u'(x) - KG_\varepsilon'( x,0).\n\end{equation*}\nThe claim is that \n\begin{equation} \label{e.touchclm}\nx \mapsto \psi(x) \quad \mbox{has a local maximum in \ $B_{\ell}$.}\n\end{equation}\nTo verify~\eqref{e.touchclm}, we check that $\psi(x_0) > \sup_{\partial B_{\ell}} \psi$. For~$y\in \partial B_{\ell}$, we compute\n\begin{align*}\n\psi(x_0) & = u(x_0) - u'(x_0) - KG_\varepsilon'(x_0,0) && \\\n& \geq u(y) - u'(y) - KG_\varepsilon'(y,0) && \mbox{(by~\eqref{e.touchx0})} \\\n& \geq u(x_0) +p\cdot(y-x_0) - 8\ell^2 \left[ u \right]_{C^{1,1}_1(0,B_{1\/\varepsilon})} - u'(y) - KG_\varepsilon'(y,0)\n&& \mbox{(by~\eqref{e.innerp1})} \\\n& = \psi(y) + \nu|y-x_0|^2 - 8\ell^2\left[ u \right]_{C^{1,1}_1(0,B_{1\/\varepsilon})} \\\n& \geq \psi(y) + \frac{\ell^2 \nu}{4} - 8\ell^2\left[ u \right]_{C^{1,1}_1(0,B_{1\/\varepsilon})},\n\end{align*}\nwhere in the last line we used $|y-x_0| \geq\frac{\ell}{2}$. Therefore, for every \n\begin{equation*} \label{}\n\nu > 32 \left[ u \right]_{C^{1,1}_1(0,B_{1\/\varepsilon})},\n\end{equation*}\nthe claim~\eqref{e.touchclm} is satisfied. \n\n\n\smallskip\n\n\emph{Step 3.} We show that, for every $x\in\mathbb{R}^d$, \n\begin{equation} \n\label{e.ufdirtybound}\nu(x) \leq C\delta \varepsilon^{-2} + |x|^2 + \sup_{y\in\mathbb{R}^d} \left( \varepsilon^{-2} f(y) - \delta |y|^2 \right). \n\end{equation}\nDefine\n\begin{equation*} \label{}\nw(x):= u(x) - \left( \delta|x|^2 + L \right), \quad \mbox{for} \quad L:= \sup_{x\in\mathbb{R}^d} \left( \varepsilon^{-2} f(x) - \delta |x|^2 \right) + 2d\Lambda \varepsilon^{-2} \delta. \n\end{equation*}\nUsing the equation for $u$, we find that \n\begin{equation*} \label{}\n\varepsilon^2 w - \tr\left(A(x) D^2w \right) \n\leq\nf - \varepsilon^2 ( \delta|x|^2 + L) + 2d\Lambda\delta.\n\end{equation*}\nUsing the definition of $L$, we deduce that \n\begin{equation*} \label{}\n\varepsilon^2 w - \tr\left(A(x) D^2w \right) \leq 0 \quad \mbox{in} \ \mathbb{R}^d. \n\end{equation*}\nSince $w(x) \to -\infty$ as $|x|\to \infty$, we deduce from the maximum principle that $w\leq 0$ in $\mathbb{R}^d$. This yields~\eqref{e.ufdirtybound}. \n\n\smallskip\n\n\emph{Step 4.} We conclude by obtaining a contradiction to~\eqref{e.touchclm} for an appropriate choice of~$K$. Observe that, in $B_\ell$, the function $\psi$ satisfies\n\begin{align*}\n\varepsilon^{2}\psi-\tr \left( A' (x)D^2 \psi \right)&\leq \varepsilon^{2}(u(x_{0})+p\cdot (x-x_{0})-\nu|x-x_{0}|^{2})+C\nu- f'(x)-K\\\n& \leq \varepsilon^2 u(x) + C\nu - f(x) - K \\\n& \leq C\nu + \sup_{y\in\mathbb{R}^d} \left( \delta+ f(y) - f(x) - \delta \varepsilon^{2}|y|^2 \right) - K. \n\end{align*}\nThus~\eqref{e.touchclm} violates the maximum principle provided that\n\begin{equation*} \label{}\nK > C(\nu+\delta) + \sup_{x\in B_\ell, \, y\in\mathbb{R}^d} \left( f(y) - f(x) - \delta \varepsilon^{2}|y|^2 \right).\n\end{equation*}\nThis completes the proof. 
\n\end{proof}\n\nWe now use the previous lemma and the estimates in Sections~\ref{s.reg} and~\ref{s.green} to prove the sensitivity estimate~\eqref{e.Vbound1}.\n\n\begin{proof}[Proof of Proposition~\ref{p.sensitivity}]\nFix $s\in (0,d)$. Throughout, $C$ and $c$ will denote positive constants depending only on $(s,d,\lambda,\Lambda,\ell)$ which may vary in each occurrence, and $\X$ denotes an $\mathcal{F}_*$--measurable random variable on $\Omega_*$ satisfying\n\begin{equation*} \label{}\n\mathbb{E} \left[ \exp\left( \X^p \right) \right] \leq C \quad \mbox{for every} \ p < s,\n\end{equation*}\nwhich we also allow to vary in each occurrence.\n\nWe fix $z\in \mathbb{Z}^d$ and identify the conditional expectation with respect to~$\mathcal{F}_*(\mathbb{Z}^d\setminus \{z\})$ via resampling, as in~\eqref{e.resampling}. By the discussion following the statement of the proposition, it suffices to prove the bound~\eqref{e.wtssense} with $\Y_*$ defined by~\eqref{e.Ysrv} for some random variable $\Y \leq \X$. To that end, fix $\omega\in\Omega$ and $\omega'\in\Omega'$, and denote $\tilde \omega:=\theta_z'(\omega,\omega')$ as well as $A=\pi(\omega)$ and $A'=\pi(\tilde \omega)$. Note that~\eqref{e.AtotA} holds. Also let $\phi_\varepsilon$ and $\phi'_\varepsilon$ denote the approximate correctors and $G_\varepsilon$ and $G_\varepsilon'$ the Green's functions. \n\smallskip\n\n\emph{Step 1.} We use Theorem~\ref{t.regularity} to estimate $\left[ \phi_\varepsilon \right]_{C^{1,1}_1(z,B_{1\/\varepsilon}(z))}$. In preparation, we rewrite the equation for $\phi_\varepsilon$ in terms of \n\begin{equation*} \label{}\nw_\varepsilon(x): = \frac{1}{2}x\cdot Mx + \phi_\varepsilon(x),\n\end{equation*}\nwhich satisfies\n\begin{equation*} \label{e.modmodcorr}\n-\tr\left( A(x) D^2w_\varepsilon \right) =-\varepsilon^2 \phi_\varepsilon(x). 
\n\end{equation*}\nIn view of the fact that the constant functions $\pm\varepsilon^{-2}\sup_{x\in \mathbb{R}^{d}} \left|\tr(A(x)M)\right|$ are, respectively, super- and subsolutions, we have\n\begin{equation} \label{e.detep2bnd}\n\sup_{x\in\mathbb{R}^d} \varepsilon^2 \left| \phi_\varepsilon(x)\right| \leq \sup_{x\in\mathbb{R}^d} \left| \tr(A(x)M) \right| \leq d\Lambda \leq C,\n\end{equation}\nand this yields\n\begin{equation} \label{e.woscbndd}\n\varepsilon^{2} \osc_{B_{4\/\varepsilon}} w_\varepsilon \leq \varepsilon^{2} \osc_{ B_{4\/\varepsilon}} \frac{1}{2}x\cdot Mx + \varepsilon^{2} \osc_{B_{4\/\varepsilon}} \phi_\varepsilon \leq C.\n\end{equation}\nThe Krylov-Safonov H\"older estimate \eqref{e.classicKS}, applied to $w_\varepsilon$ in $B_R$ with $R=4\varepsilon^{-1}$, yields\n\begin{equation} \label{e.Holder.scaledapp}\n\varepsilon^{2-\beta} \left[ w_\varepsilon \right]_{C^{0,\beta}(B_{2\/\varepsilon})} \leq C.\n\end{equation}\n Letting $Q(x):= \frac{1}{2}x\cdot Mx$, we have\n\begin{equation}\n\label{e.phiepcbeta}\n\varepsilon^2\left[ \phi_\varepsilon \right]_{C^{0,\beta}(B_{2\/\varepsilon})} \leq \varepsilon^2 \left[ w_\varepsilon \right]_{C^{0,\beta}(B_{2\/\varepsilon})} +\varepsilon^2 \left[ Q \right]_{C^{0,\beta}(B_{2\/\varepsilon})} \leq C\varepsilon^{\beta} + C|M|\varepsilon^{\beta} \leq C\varepsilon^{\beta}.\n\end{equation}\nWe now apply Theorem~\ref{t.regularity} (specifically \eqref{e.pwcC11}) to $w_{\varepsilon}$ with $R=2\varepsilon^{-1}$ to obtain\n\begin{align*} \label{}\n\left[ w_\varepsilon \right]_{C^{1,1}_{1}(z,B_{1\/\varepsilon}(z))} & \leq (T_z\X)^2 \left( \sup_{B_{2\/\varepsilon}} \varepsilon^2 \left|\phi_\varepsilon\right| + \varepsilon^{-\beta} \left[ \varepsilon^2\phi_\varepsilon \right]_{C^{0,\beta}(B_{2\/\varepsilon})} + \varepsilon^2 \osc_{B_{2\/\varepsilon}} w_\varepsilon\right) \\\n& \leq C (T_z\X)^2.\n\end{align*}\nAs $w_\varepsilon$ and $\phi_\varepsilon$ differ by the quadratic $Q$, we obtain\n\begin{equation} \label{e.C11bndcorr}\n\left[ \phi_\varepsilon \right]_{C^{1,1}_{1}(z,B_{1\/\varepsilon}(z))} \leq C \left( (T_z\X)^2 + |M| \right) \leq C (T_z\X)^2. \n\end{equation}\n\smallskip\n\n\emph{Step 2.} We estimate~$G_\varepsilon'(\cdot,z)$ and complete the argument for~\eqref{e.wtssense}. By Proposition~\ref{p.Green}, we have\n\begin{equation} \label{e.Gwrapp}\nG_\varepsilon'(x,z) \leq (T_z\X)^{d-1-\delta} \xi_\varepsilon(x-z)\n\end{equation}\nfor $\xi_\varepsilon(x-z)$ defined as in~\eqref{e.dimsep}.\nLemma~\ref{l.coarseABP} yields, for every $x\in\mathbb{R}^d$,\n\begin{equation*} \label{}\n\big| \phi_\varepsilon(x) - \phi_\varepsilon'(x)\big| \leq C\left( 1 + \left[ \phi_\varepsilon\right]_{C^{1,1}_{1}(z,B_{1\/\varepsilon}(z))} \right) G_\varepsilon'(x,z)+2d\Lambda G_{\varepsilon}(x,z).\n\end{equation*}\nInserting~\eqref{e.C11bndcorr} and~\eqref{e.Gwrapp} gives\n\begin{equation} \n\label{e.phiepprime}\n\big| \phi_\varepsilon(x) - \phi_\varepsilon'(x)\big| \leq C (T_z\X)^2 (T_z\X)^{d-1-\delta} \xi_\varepsilon(x-z).\n\end{equation}\nThis is~\eqref{e.wtssense}.\n\end{proof}\n\n\section{Optimal scaling for the approximate correctors}\n\label{s.optimals}\n\nWe complete the rate computation for the approximate correctors $\phi_{\varepsilon}$. 
We break the error $\\varepsilon^{2}\\phi_{\\varepsilon}(0)-\\tr(\\overline{A}M)$ into two contributions: \n\\begin{equation*} \\label{}\n \\varepsilon^2 \\phi_\\varepsilon(0) - \\tr(\\overline{A}M) = \\underbrace{\\varepsilon^2 \\phi_\\varepsilon(0) - \\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(0) \\right]}_{\\mbox{\\small``random error\"}} + \\underbrace{\\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(0) \\right] - \\tr(\\overline{A}M)}_{\\mbox{\\small``deterministic error\"}}.\n\\end{equation*}\n\nThe ``random error\" will be controlled by the concentration inequalities established in~Section~\\ref{s.sensitivity}. We will show that the ``deterministic error\" is controlled by the random error, and this will yield a rate for $\\varepsilon^{2}\\phi_{\\varepsilon}(0)-\\tr(\\overline{A}M)$. \n\n\\smallskip\n\nFirst, we control the random error using Proposition~\\ref{p.concentration} and the estimates from the previous three sections.\n\n\\begin{proposition}\n\\label{p.randomerror}\nThere exist $\\delta(d,\\lambda,\\Lambda)>0$ and $C(d,\\lambda,\\Lambda,\\ell) \\geq 1$ such that, for every $\\varepsilon \\in (0,\\tfrac12]$ and $x\\in \\mathbb{R}^{d}$,\n\\begin{equation} \\label{e.randomerror}\n\\mathbb{E} \\left[ \\exp \\left(\\left( \\frac1{\\mathcal{E}(\\varepsilon)} \\left| \\varepsilon^2 \\phi_\\varepsilon(x) - \\mathbb{E} \\left[\\varepsilon^2 \\phi_\\varepsilon(x) \\right] \\right| \\right)^{\\frac{1}{2} + \\delta } \\right) \\right] \\leq C.\n\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\nFor readability, we prove \\eqref{e.randomerror} for~$x=0$. The argument for general $x\\in\\mathbb{R}^d$ is almost the same. \nDefine \n\\begin{equation*} \\label{}\n\\xi_\\varepsilon(x):= \\exp(-a\\varepsilon|x|)\\cdot \\left\\{ \\begin{aligned} \n & \\log\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)}\\right) && \\mbox{if} \\ d = 2 , \\\\\n & (1 + |x|)^{2-d}&& \\mbox{if} \\ d\\geq 3.\n \\end{aligned} \\right.\n\\end{equation*}\nAccording to Proposition~\\ref{p.sensitivity}, for $\\beta>0$, \n\\begin{align*} \\label{}\n\\lefteqn{ \\exp\\left( C \\left( \\mathbb{V}_*\\left[ \\frac{\\varepsilon^2 \\phi_\\varepsilon(0)}{\\mathcal{E}(\\varepsilon)} \\right] \\right)^\\beta \\right) } \\qquad & \\\\\n& = \\exp\\left( C \\left( \\frac{\\varepsilon^4}{\\mathcal{E}(\\varepsilon)^2} \\sum_{z\\in\\mathbb{Z}^d} \\left( \\phi_\\varepsilon(0) - \\mathbb{E}_*\\!\\left[ \\phi_\\varepsilon(0) \\,\\vert\\, \\mathcal{F}_*(\\mathbb{Z}^d\\setminus \\{ z \\}) \\right] \\right)^2 \\right)^\\beta \\right) \\\\\n& \\leq \\exp\\left( C \\left( \\frac{\\varepsilon^4}{\\mathcal{E}(\\varepsilon)^2} \\sum_{z\\in\\mathbb{Z}^d} (T_z\\X)^{2d+2-2\\delta} \\xi_\\varepsilon(z)^2 \\right)^\\beta \\right).\n\\end{align*}\nWe claim (and prove below) that \n\\begin{equation}\\label{e.randerrwts} \n\\sum_{z\\in\\mathbb{Z}^{d}} \\xi_\\varepsilon(z)^{2}\\leq C\\varepsilon^{-4}\\mathcal{E}(\\varepsilon)^{2}.\n\\end{equation}\nAssuming~\\eqref{e.randerrwts} and applying Jensen's inequality for discrete sums, we have\n\\begin{align*} \\label{}\n\\lefteqn{\n\\exp\\left( C \\left( \\frac{\\varepsilon^4}{\\mathcal{E}(\\varepsilon)^2} \\sum_{z\\in\\mathbb{Z}^d} (T_z\\X)^{2d+2-2\\delta} \\xi_\\varepsilon(z)^2 \\right)^\\beta \\right) \n} \\qquad & \\\\\n&\\leq \\exp\\left(C\\left(\\frac{\\sum_{z\\in\\mathbb{Z}^d} (T_z\\X)^{2d+2-2\\delta} \\xi_\\varepsilon(z)^2 }{\\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon(z)^{2}}\\right)^\\beta \\right) \\\\\n&\\leq \\frac{\\sum_{z\\in\\mathbb{Z}^d}\\xi_\\varepsilon(z)^2 \\exp 
\\left(C\\left(T_z\\X\\right)^{(2d+2-2\\delta)\\beta}\\right) }{\\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon(z)^{2}}.\n\\end{align*}\nSelect $\\beta := d\/(2d+2- 3\\delta)$. Taking expectations, using stationarity, and applying Proposition~\\ref{p.sensitivity}, we obtain \n\\begin{equation*} \\label{}\n\\mathbb{E}_*\\left[ \\exp\\left( C \\left( \\mathbb{V}_*\\left[ \\frac{\\varepsilon^2 \\phi_\\varepsilon(0)}{\\mathcal{E}(\\varepsilon)} \\right] \\right)^\\beta \\right) \\right] \\leq C.\n\\end{equation*}\nFinally, an application of Proposition~\\ref{p.concentration} gives, for $\\gamma : = 2\\beta \/ (1+\\beta) \\in (0,2)$,\n\\begin{equation*} \\label{}\n\\mathbb{E}\\left[ \\exp\\left( \\left| \\frac{\\varepsilon^2 \\phi_\\varepsilon(0)}{\\mathcal{E}(\\varepsilon)} -\\mathbb{E}\\left[ \\frac{\\varepsilon^2 \\phi_\\varepsilon(0)}{\\mathcal{E}(\\varepsilon)} \\right] \\right|^\\gamma \\right) \\right] \\leq C.\n\\end{equation*}\n\nThis completes the proof of the proposition, subject to the verification of~\\eqref{e.randerrwts}, which is a straightforward computation. In dimension $d\\geq 3$, we have\n\\begin{align*}\n\\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon(z)^2 & \\leq C \\int_{\\mathbb{R}^d} \\left( 1+ |x| \\right)^{4-2d} \\exp\\left(-2a\\varepsilon|x| \\right) \\, dx \\\\\n& = C \\varepsilon^{d-4} \\int_{\\mathbb{R}^d} \\left( \\varepsilon + |y| \\right)^{4-2d}\\exp\\left(-2a|y| \\right)\\, dy \\\\\n& \\leq C\\varepsilon^{d-4} \\left( \\int_{\\mathbb{R}^d \\setminus B_\\varepsilon} |y|^{4-2d} \\exp\\left(-2a|y| \\right) \\, dy + \\int_{B_\\varepsilon} \\varepsilon^{4-2d}\\, dy \\right) \\\\\n& = C \\cdot \\left\\{ \\begin{aligned} & 1 + \\varepsilon^{d-4} && \\mbox{in} \\ d\\neq 4,\\\\\n& 1+ \\left| \\log \\varepsilon \\right| && \\mbox{in} \\ d=4,\n\\end{aligned} \\right. 
\\\\\n&= C \\varepsilon^{-4} \\mathcal{E}(\\varepsilon)^2.\n\\end{align*}\nIn dimension $d=2$, we have\n\\begin{align*}\n\\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon(z)^2 & \\leq C \\int_{\\mathbb{R}^d} \\log^2\\left( 2 + \\frac{1}{\\varepsilon(1+|x|)} \\right) \\exp\\left(-2a\\varepsilon|x| \\right) \\, dx \\\\\n& \\leq C \\varepsilon^{-2} \\int_{\\mathbb{R}^d} \\log^2\\left(2+\\frac{1}{\\varepsilon+|y|} \\right) \\exp\\left(-2a|y| \\right) \\, dy \\\\\n& \\leq C \\varepsilon^{-2} \\bigg( \\int_{B_\\varepsilon} \\log^2\\left( 2+\\frac{1}{\\varepsilon}\\right) \\exp\\left(-2a|y| \\right)\\,dy \\\\\n& \\qquad \\qquad \\qquad + \\int_{\\mathbb{R}^d \\setminus B_{\\varepsilon}} \\log^2\\left( 2+\\frac{1}{|y|}\\right) \\exp\\left(-2a|y| \\right) \\bigg).\n\\end{align*}\nWe estimate the two integrals on the right as follows:\n\\begin{equation*} \\label{}\n\\int_{B_\\varepsilon} \\log^2\\left( 2+\\frac{1}{\\varepsilon}\\right) \\exp\\left(-2a|y| \\right)\\,dy \\leq |B_\\varepsilon| \\log^2\\left( 2+\\frac{1}{\\varepsilon}\\right) \\leq C \\varepsilon^2 \\log^2\\left( 2+\\frac{1}{\\varepsilon}\\right)\n\\end{equation*}\nand\n\\begin{multline*}\n\\int_{\\mathbb{R}^d \\setminus B_{\\varepsilon}} \\log^2\\left( 2+\\frac{1}{|y|}\\right) \\exp\\left(-2a|y| \\right)\\,dy\\\\\n\\leq |B_1| \\log^2\\left(2+\\frac{1}{\\varepsilon} \\right) + C\\int_{\\mathbb{R}^d\\setminus B_1} \\exp\\left( -2a|y| \\right)\\, dy \\leq C\\log^2\\left(2+\\frac1\\varepsilon \\right).\n\\end{multline*}\nAssembling the last three sets of inequalities yields in $d=2$ that \n\\begin{equation*} \\label{}\n\\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon(z)^2 \\leq C \\varepsilon^{-2} \\log^2\\left(2+\\frac1\\varepsilon \\right) \\leq C\\varepsilon^{-4} \\mathcal{E}(\\varepsilon)^2. \\qedhere\n\\end{equation*}\n\\end{proof}\n\nNext, we show that the deterministic error is controlled from above by the random error. The basic idea is to argue that if the deterministic error is larger than the typical size of the random error, then this is inconsistent with the homogenization. The argument must of course be quantitative, so it is natural that we will apply~Proposition~\\ref{p.subopt}. Note that if we possessed the bound $\\sup_{x\\in\\mathbb{R}^d}|\\phi_\\varepsilon(x) - \\mathbb{E}\\left[\\phi_\\varepsilon(0)\\right]| \\lesssim \\varepsilon^{-2} \\mathcal{E}(\\varepsilon)$, then our proof here would be much simpler. However, this bound is too strong-- we do not have, and of course cannot expect, such a uniform estimate on the fluctuations to hold-- and therefore we need to cut off larger fluctuations and argue by approximation. 
This is done by using the Alexandrov-Bakelman-Pucci estimate and~\\eqref{e.randomerror} in a straightforward way.\n\n\n\\begin{proposition}\n\\label{p.deterministicerror}\nThere exists $C(d,\\lambda,\\Lambda,\\ell)\\geq1$ such that, for every $\\varepsilon \\in (0,\\tfrac12]$ and $x\\in\\mathbb{R}^d$,\n\\begin{equation} \\label{e.determinserr}\n\\left| \\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(x) \\right] - \\tr(\\overline{A}M ) \\right| \\leq C \\mathcal{E}(\\varepsilon).\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nBy symmetry, it suffices to prove the following one-sided bound: for every $\\varepsilon\\in (0, \\frac{1}{2}]$ and $x\\in\\mathbb{R}^d$,\n\\begin{equation} \\label{e.systwts}\n\\tr(\\overline{A}M)-\\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(x) \\right] \\geq -C \\mathcal{E}(\\varepsilon).\n\\end{equation}\nThe proof of \\eqref{e.systwts} will be broken down into several steps. \n\n\\smallskip\n\n\\emph{Step 1.} We show that \n\\begin{equation} \\label{e.offgrid}\n\\mathbb{E} \\left[ \\exp\\left( \\left( \\frac{1}{\\varepsilon^{-2} \\mathcal E(\\varepsilon)}\\left( \\osc_{B_{\\sqrt{d}}} \\phi_\\varepsilon\\right)\\right)^{\\frac12+\\delta}\\right) \\right] \\leq C.\n\\end{equation}\nLet $k \\in\\L$ be the affine function satisfying\n\\begin{equation*} \\label{}\n\\sup_{x\\in B_{\\sqrt{d}}} \\left| \\phi_\\varepsilon(x) - k(x) \\right| = \\inf_{l \\in\\L} \\sup_{x\\in B_{\\sqrt{d}}} \\left| \\phi_\\varepsilon(x) - l(x) \\right|.\n\\end{equation*}\nAccording to~\\eqref{e.C11bndcorr}, \n\\begin{equation} \n\\label{e.canoteherc11}\n\\sup_{x\\in B_{\\sqrt{d}}} \\left| \\phi_\\varepsilon(x) - k(x) \\right| \\leq C\\X^2. \n\\end{equation}\nSince $k$ is affine, its slope can be estimated by its oscillation on $B_{\\sqrt{d}} \\cap \\mathbb{Z}^d$:\n\\begin{equation*} \\label{}\n\\left| \\nabla k \\right| \\leq C \\osc_{B_{\\sqrt{d}}\\cap \\mathbb{Z}^d} k.\n\\end{equation*}\nThe previous line and~\\eqref{e.canoteherc11} yield that \n\\begin{equation*} \\label{}\n\\left| \\nabla k \\right| \\leq C \\osc_{B_{\\sqrt{d}}\\cap \\mathbb{Z}^d} \\phi_\\varepsilon + C \\X^2. \n\\end{equation*}\nBy stationarity and~\\eqref{e.randomerror}, we get \n\\begin{equation} \n\\label{e.bghlas}\n\\mathbb{E} \\left[ \\exp\\left( \\left( \\frac{1}{\\varepsilon^{-2} \\mathcal E(\\varepsilon)}\\left( \\osc_{B_{\\sqrt{d}}\\cap \\mathbb{Z}^d} \\phi_\\varepsilon\\right)\\right)^{\\frac12+\\delta}\\right) \\right] \\leq C.\n\\end{equation}\nTherefore, \n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\exp\\left( \\left( \\frac{1}{\\varepsilon^{-2} \\mathcal E(\\varepsilon)}\\left| \\nabla k \\right|\\right)^{\\frac12+\\delta}\\right) \\right] \\leq C.\n\\end{equation*}\nThe triangle inequality,~\\eqref{e.canoteherc11} and~\\eqref{e.bghlas} imply~\\eqref{e.offgrid}. 
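\nFor the reader's convenience, here is a brief sketch of how the last assertion is assembled (using only the displays above): since $k$ is affine, $\\osc_{B_{\\sqrt{d}}} k \\leq 2\\sqrt{d}\\left| \\nabla k \\right|$, and therefore\n\\begin{equation*} \\label{}\n\\osc_{B_{\\sqrt{d}}} \\phi_\\varepsilon \\leq 2\\sup_{x\\in B_{\\sqrt{d}}} \\left| \\phi_\\varepsilon(x) - k(x) \\right| + \\osc_{B_{\\sqrt{d}}} k \\leq C\\X^2 + 2\\sqrt{d}\\left| \\nabla k \\right|,\n\\end{equation*}\nso that~\\eqref{e.offgrid} follows by combining the exponential moment bounds for the two terms on the right-hand side.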
\n\n\\smallskip\n\n\\emph{Step 2.} Consider the function\n\\begin{equation*} \\label{}\nf_\\varepsilon (x): = \\left( -\\varepsilon^2 \\phi_\\varepsilon(x) + \\mathbb{E}\\left[\\varepsilon^2 \\phi_\\varepsilon(0) \\right] \\right)_+.\n\\end{equation*}\nWe claim that, for every $R\\geq 1$,\n\\begin{equation} \\label{e.ABPsetup}\n\\mathbb{E} \\left[ \\left( \\fint_{B_R} \\left| f_\\varepsilon(x) \\right|^d \\, dx \\right)^{\\frac1d} \\right] \\leq C \\mathcal{E}(\\varepsilon).\n\\end{equation}\nIndeed, by Jensen's inequality,~\\eqref{e.randomerror} and~\\eqref{e.offgrid},\n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\left( \\fint_{B_R} \\left| f_\\varepsilon(x) \\right|^d \\, dx \\right)^{\\frac1d} \\right] \\leq \\left( \\fint_{B_R} \\mathbb{E} \\left[ \\left| f_\\varepsilon(x) \\right|^d \\right]\\, dx \\right)^{\\frac1d} \\leq C\\mathcal{E}(\\varepsilon). \n\\end{equation*}\n\n\\smallskip\n\n\\emph{Step 3.} We prepare the comparison. Define\n\\begin{equation*} \\label{}\n\\widehat f_\\varepsilon(x):= \\min\\left\\{ -\\varepsilon^2 \\phi_\\varepsilon(x), \\mathbb{E} \\left[ -\\varepsilon^2\\phi_\\varepsilon(0) \\right] \\right\\} = -\\varepsilon^2 \\phi_\\varepsilon(x) - f_\\varepsilon(x),\n\\end{equation*}\nfix $R\\geq 1$ (we will send $R\\to \\infty$ below) and denote by $\\widehat \\phi_\\varepsilon$ the solution of\n\\begin{equation*}\n\\left\\{ \\begin{aligned} \n& -\\tr\\left( A(x) (M+D^2 \\widehat \\phi_\\varepsilon )\\right) = \\widehat f_\\varepsilon & \\mbox{in} & \\ B_R, \\\\\n& \\widehat \\phi_\\varepsilon = d\\Lambda \\varepsilon^{-2} |M|& \\mbox{on} & \\ \\partial B_R.\n\\end{aligned} \\right.\n\\end{equation*}\nNote that the boundary condition was chosen so that $\\phi_\\varepsilon \\leq \\widehat\\phi_\\varepsilon$ on $\\partial B_R$. Thus the Alexandrov-Bakelman-Pucci estimate and~\\eqref{e.ABPsetup} yield\n\\begin{equation} \\label{e.ABPapp}\n\\mathbb{E} \\left[ R^{-2} \\sup_{B_R} \\left( \\phi_\\varepsilon - \\widehat\\phi_\\varepsilon \\right) \\right] \\leq C \\mathcal{E}(\\varepsilon).\n\\end{equation}\n\n\n\\smallskip\n\n\\emph{Step 4.} Let $\\widehat{\\phi}$ denote the solution to\n\\begin{equation} \\label{e.homogdetcmp}\n \\left\\{ \\begin{aligned} \n& -\\tr(\\overline{A} (M+D^{2}\\widehat \\phi)) = \\mathbb{E} \\left[ -\\varepsilon^2\\phi_\\varepsilon(0) \\right] & \\mbox{in} & \\ B_R, \\\\\n& \\widehat \\phi = d\\Lambda \\varepsilon^{-2} |M|& \\mbox{on} & \\ \\partial B_R.\n\\end{aligned} \\right.\n\\end{equation}\nNotice that the right-hand side and boundary condition for~\\eqref{e.homogdetcmp}~are chosen to be constant. 
\nMoreover, we can solve for $\\widehat \\phi$ explicitly: for $x\\in B_R$, we have\n\\begin{equation} \\label{e.formhatphi}\n\\widehat \\phi(x) = d\\Lambda \\varepsilon^{-2} |M| - \\frac{|x|^2-R^2}{2\\tr \\overline{A}}\\left(\\tr(\\overline{A}M) - \\mathbb{E} \\left[ \\varepsilon^2\\phi_\\varepsilon(0) \\right] \\right).\n\\end{equation}\nWe point out that since $\\widehat f_\\varepsilon \\leq \\mathbb{E} \\left[ -\\varepsilon^2\\phi_\\varepsilon(0) \\right]$, we have that $\\widehat\\phi_{\\varepsilon}$ satisfies \n\\begin{equation*} \n \\left\\{ \\begin{aligned} \n& -\\tr(A(x) (M+D^{2}\\widehat \\phi_{\\varepsilon})) \\leq \\mathbb{E} \\left[ -\\varepsilon^2\\phi_\\varepsilon(0) \\right] & \\mbox{in} & \\ B_R, \\\\\n& \\widehat \\phi_{\\varepsilon} = d\\Lambda \\varepsilon^{-2}|M| & \\mbox{on} & \\ \\partial B_R.\n\\end{aligned} \\right.\n\\end{equation*}\nIt then follows that we may apply Proposition~\\ref{p.subopt}~to the pair $\\frac{1}{2}x\\cdot Mx + \\widehat\\phi_\\varepsilon(x)$ and $\\frac{1}{2}x\\cdot Mx + \\widehat \\phi(x)$, which gives\n\\begin{equation} \\label{e.EEapp}\n\\mathbb{E} \\left[ R^{-2} \\sup_{x\\in B_R} \\left( \\widehat \\phi_\\varepsilon - \\widehat \\phi \\right) \\right] \\leq CR^{-\\alpha}.\n\\end{equation}\n\n\n\n\\smallskip\n\n\\emph{Step 5.} The conclusion. We have, by~\\eqref{e.ABPapp},~\\eqref{e.EEapp} and~\\eqref{e.formhatphi},\n\\begin{align*}\n-d\\Lambda \\varepsilon^{-2} |M| &\\leq \\mathbb{E} \\left[ \\phi_\\varepsilon(0) \\right] \\\\\n& \\leq \\mathbb{E} \\left[\\widehat \\phi_\\varepsilon(0) \\right] + C \\mathcal{E}(\\varepsilon) R^2 \\\\\n& \\leq \\widehat\\phi(0) + CR^{2-\\alpha} + C \\mathcal{E}(\\varepsilon) R^2 \\\\\n& = d\\Lambda \\varepsilon^{-2} |M| + \\frac{R^2}{2\\tr\\overline{A}} \\left(\\tr(\\overline{A}M) - \\varepsilon^2\\mathbb{E} \\left[ \\phi_\\varepsilon(0) \\right] \\right) + CR^{2-\\alpha} + C \\mathcal{E}(\\varepsilon) R^2. \n\\end{align*} \nRearranging, we obtain\n\\begin{equation*} \\label{}\n\\tr(\\overline{A}M) - \\varepsilon^2\\mathbb{E} \\left[ \\phi_\\varepsilon(0) \\right] \\geq -C\\left( |M| \\varepsilon^{-2} R^{-2} + R^{-\\alpha} + \\mathcal{E}(\\varepsilon)\\right).\n\\end{equation*}\nSending $R\\to \\infty$ yields \n\\begin{equation*}\n\\tr(\\overline{A}M)-\\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(0) \\right] \\geq -C \\mathcal{E}(\\varepsilon).\n\\end{equation*}\nSince~\\eqref{e.offgrid} and stationarity imply that, for every $x\\in\\mathbb{R}^d$, \n\\begin{equation*} \\label{}\n\\left| \\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(x) \\right] - \\mathbb{E} \\left[ \\varepsilon^2 \\phi_\\varepsilon(0) \\right] \\right| \\leq C\\mathcal{E}(\\varepsilon),\n\\end{equation*}\nthe proof of~\\eqref{e.systwts} is complete.\n\\end{proof}\n\nThe proof of Theorem~\\ref{t.correctors} is now complete, as it follows immediately from Propositions~\\ref{p.randomerror},~\\ref{p.deterministicerror} and~\\eqref{e.offgrid}. \n\n\n\n\n\\section{Existence of stationary correctors in $d>4$}\n\\label{s.correctors}\n\nIn this section we prove the following result concerning the existence of stationary correctors in dimensions larger than four. \n\n\\begin{theorem}\n\\label{t.existcorrectors}\nSuppose $d>4$ and fix $M\\in\\mathbb{S}^d$, $|M|=1$. 
Then there exists a constant $C(d,\\lambda,\\Lambda,\\ell)\\geq 1$ and a stationary function $\\phi$ belonging $\\P$--almost surely to $C(\\mathbb{R}^d) \\cap L^\\infty(\\mathbb{R}^d)$, satisfying \n\\begin{equation}\n\\label{e.correctoreq}\n-\\tr\\left(A(x)\\left(M+D^2\\phi \\right) \\right) = -\\tr\\left( \\overline AM \\right) \\quad \\mbox{in} \\ \\mathbb{R}^d\n\\end{equation}\nand, for each $x\\in\\mathbb{R}^d$ and $t\\geq1$, the estimate\n\\begin{equation}\n\\label{e.correctorest}\n\\P \\left[ \\left| \\phi(x) \\right| > t \\right] \\leq C \\exp\\left( -t^{\\frac12} \\right). \n\\end{equation}\n\\end{theorem}\n\nTo prove Theorem~\\ref{t.existcorrectors}, we argue that, after subtracting an appropriate constant,~$\\phi_\\varepsilon$ has an almost sure limit as $\\varepsilon \\to 0$ to a stationary function~$\\phi$. We introduce the function\n\\begin{equation*}\n\\widehat \\phi_{\\varepsilon}:=\\phi_{\\varepsilon}-\\frac{1}{\\varepsilon^{2}}\\tr(\\overline{A}M).\n\\end{equation*}\nObserve that\n\\begin{equation}\n\\label{e.hatphiep}\n\\varepsilon^{2}\\widehat \\phi_{\\varepsilon}-\\tr\\left(A(x)\\left(M+D^{2}\\widehat \\phi_{\\varepsilon}\\right)\\right)=-\\tr (\\overline{A}M).\n\\end{equation}\nTo show that $\\widehat\\phi_\\varepsilon$ has an almost sure limit as $\\varepsilon\\to 0$, we introduce the functions\n\\begin{equation*} \\label{}\n\\psi_\\varepsilon := \\phi_\\varepsilon - \\phi_{2\\varepsilon}.\n\\end{equation*}\nThen \n\\begin{equation*}\n\\widehat \\phi_{\\varepsilon}-\\widehat \\phi_{2\\varepsilon}=\\psi_{\\varepsilon}-\\frac{3}{4 \\varepsilon^{2}} \\tr (\\overline{A}M),\n\\end{equation*}\nand the goal will be to prove bounds on $\\widehat \\phi_{\\varepsilon} - \\widehat{\\phi}_{2\\varepsilon}$ which are summable over the sequence $\\varepsilon_n:=2^{-n}$. We proceed as in the previous section: we first estimate the fluctuations of~$\\psi_\\varepsilon$ using a sensitivity estimate and a suitable version of the Efron-Stein inequality. We then use this fluctuation estimate to obtain bounds on its expectation using a variation of the argument in the proof of Proposition~\\ref{p.deterministicerror}.\n\n\\smallskip\n\nWe begin by controlling the fluctuations. \n\n\\begin{lemma}\n\\label{l.psiepsensitivity}\nFor every~$p\\in [1, \\infty)$ and $\\gamma>0$, there exists $C(p,\\gamma,d,\\lambda,\\Lambda,\\ell)<\\infty$ such that, for every $\\varepsilon \\in\\left(0,\\frac12\\right]$ and $x\\in \\mathbb{R}^{d}$, \n\\begin{equation}\\label{e.flucpsi}\n\\mathbb{E}\\left[\\left|\\psi_{\\varepsilon}(x)-\\mathbb{E}[\\psi_{\\varepsilon}(x)]\\right|^{p}\\right]^{\\frac1p}\\leq C\\varepsilon^{\\left(\\frac{d-4}2 \\wedge 2 \\right) -\\gamma}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nIn view of the Efron-Stein inequality for $p$th moments (cf.~\\eqref{e.pefronstein}), it suffices to show that \n\\begin{equation}\\label{e.vertflucpsi}\n\\mathbb{E} \\left[ \\mathbb{V}_{*}\\left[\\psi_{\\varepsilon}(x)\\right]^{\\frac p2} \\right]^{\\frac1p} \\leq C\\varepsilon^{\\left( \\frac{d-4}2 \\wedge 2 \\right) -\\gamma}.\n\\end{equation}\nWe start from the observation that $\\psi_{\\varepsilon}$ satisfies the equation\n\\begin{equation} \\label{e.eq1forpsiep}\n\\varepsilon^2 \\psi_\\varepsilon -\\tr\\left(A(x)D^2\\psi_\\varepsilon \\right) = 3\\varepsilon^2\\phi_{2\\varepsilon} \\quad \\mbox{in} \\ \\mathbb{R}^d.\n\\end{equation}\nDenote the right-hand side by $h_\\varepsilon:= 3\\varepsilon^2\\phi_{2\\varepsilon}$. \n\n\\smallskip\n\n\\emph{Step 1.} We outline the proof of~\\eqref{e.vertflucpsi}. 
Fix $z\\in \\mathbb{Z}^{d}$. We use the notation from the previous section, letting $A':=\\pi(\\theta'_z(\\omega,\\omega'))$ denote a resampling of the coefficients at~$z$. We let $\\psi_\\varepsilon'$, $\\phi_\\varepsilon'$, $G_\\varepsilon'$, etc., denote the corresponding functions defined with respect to~$A'$. Applying Lemma~\\ref{l.coarseABP} with $\\delta = \\varepsilon^2$, in view of~\\eqref{e.eq1forpsiep}, we find that\n\\begin{align*} \n\\lefteqn{\n\\psi_\\varepsilon(x) - \\psi'_\\varepsilon(x)\n} \\ & \n\\\\ & \\notag\n\\leq \nC\\left( \\varepsilon^2 + \\left[ \\psi_\\varepsilon \\right]_{C^{1,1}_1(z,B_{1\/\\varepsilon}(z))} + \\sup_{y\\in B_\\ell(z), \\, y'\\in\\mathbb{R}^d} \\left( h_\\varepsilon(y') - h_\\varepsilon(y) - \\varepsilon^{4}|y'-z|^2 \\right) \\right)G_\\varepsilon'(x,z) \n\\\\ & \\notag \\qquad\n+ \\sum_{y\\in \\mathbb{Z}^d} G_\\varepsilon(x,y) \\sup_{B_{\\ell\/2}(y)} (h_\\varepsilon- h_\\varepsilon') \n\\\\ & \\notag\n =: CK(z) \\xi_\\varepsilon(x-z) + C\\sum_{y\\in\\mathbb{Z}^d} H(y,z) \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y).\n\\end{align*}\nHere we have defined\n\\begin{multline*}\nK(z):= \\xi_\\varepsilon(x-z)^{-1} \\bigg( \\varepsilon^2 + \\left[ \\psi_\\varepsilon \\right]_{C^{1,1}_1(z,B_{1\/\\varepsilon}(z))} \\\\\n+ \\sup_{y\\in B_\\ell(z), \\, y'\\in\\mathbb{R}^d} \\left( h_\\varepsilon(y') - h_\\varepsilon(y) - \\varepsilon^{4}|y'-z|^2 \\right) \\bigg)G_\\varepsilon'(x,z)\n\\end{multline*}\nand\n\\begin{equation*}\nH(y,z):= \\left( \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^{-1} G_\\varepsilon(x,y) \\sup_{B_{\\ell\/2}(y)} (h_\\varepsilon- h_\\varepsilon').\n\\end{equation*}\nThese are random variables on the probability space~$\\Omega\\times\\Omega'$ with respect to the probability measure $\\tilde{\\P} := \\P_*\\times \\P_*'$. Below we will check that, for each $p\\in [1, \\infty)$ and $\\gamma>0$, there exists $C(p,\\gamma,d,\\lambda,\\Lambda,\\ell)<\\infty$ such that\n\\begin{equation} \\label{e.expKzHyz}\n\\tilde{\\mathbb{E}} \\left[ K(z)^p \\right]^{\\frac1p} \n+\n\\tilde{\\mathbb{E}} \\left[ H(y,z)^p \\right]^{\\frac1p} \\leq C\\varepsilon^{2-\\gamma}.\n\\end{equation}\nTo ease the notation, we will drop the tildes and just write $\\mathbb{E}$ instead of $\\tilde{\\mathbb{E}}$. \nWe first complete the proof of~\\eqref{e.vertflucpsi} assuming that~\\eqref{e.expKzHyz} holds. 
In view of the discussion in Section~\\ref{s.sensitivity}, we compute:\n\\begin{align*}\n\\lefteqn{\n\\mathbb{E} \\left[ \\mathbb{V}_{*}\\left[\\psi_{\\varepsilon}(x)\\right]^{\\frac p2} \\right]\n} \\quad & \n\\\\ &\n\\leq \\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left( CK(z) \\xi_\\varepsilon(x-z) + C\\sum_{y\\in\\mathbb{Z}^d} H(y,z) \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \\right)^{\\frac p2} \\right] \n\\\\ &\n\\leq C \\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left[ K(z) \\xi_\\varepsilon(x-z) \\right]^2 \\right)^{\\frac p2} \\right] \n\\\\ & \\qquad\n+ C \\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left( \\sum_{y\\in\\mathbb{Z}^d} H(y,z) \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \\right)^{\\frac p2} \\right].\n\\end{align*}\nBy Jensen's inequality,~\\eqref{e.randerrwts} and~\\eqref{e.expKzHyz},\n\\begin{align*}\n\\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left[ K(z) \\xi_\\varepsilon(x-z) \\right]^2 \\right)^{\\frac p2} \\right] \n& \n\\leq \\left( \\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon^2(x-z)\\right)^{\\frac p2 -1} \\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} K(z)^{p} \\xi_\\varepsilon^2(x-z) \\right) \\right] \n\\\\ & \n\\leq C\\varepsilon^{(2-\\gamma)p} \\left( \\sum_{z\\in\\mathbb{Z}^d} \\xi_\\varepsilon^2(x-z)\\right)^{\\frac p2} \\leq C\\varepsilon^{(2-\\gamma)p}\n\\end{align*}\nand by Jensen's inequality and~\\eqref{e.expKzHyz}, \n\\begin{align*}\n\\lefteqn{\n\\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left( \\sum_{y\\in\\mathbb{Z}^d} H(y,z) \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \\right)^{\\frac p2} \\right]\n} \\qquad & \n\\\\ & \n= \\mathbb{E}\\left[ \\left( \\sum_{z\\in\\mathbb{Z}^d} \\sum_{y,y'\\in\\mathbb{Z}^d} H(y,z)H(y',z) \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y)\\xi_\\varepsilon( z-y') \\xi_\\varepsilon(x-y') \\right)^{\\frac p2} \\right]\n\\\\ & \n\\leq \\left( \\sum_{z,y,y'\\in\\mathbb{Z}^d} \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y)\\xi_\\varepsilon( z-y') \\xi_\\varepsilon(x-y') \\right)^{\\frac p2-1}\n \\\\ & \\ \\ \\times\n \\mathbb{E}\\left[\\sum_{z,y,y'\\in\\mathbb{Z}^d} H(y,z)^{\\frac p2}H(y',z)^{\\frac p2} \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y)\\xi_\\varepsilon( z-y') \\xi_\\varepsilon(x-y') \\right]\n\\\\ & \n= \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left( \\sum_{y\\in\\mathbb{Z}^d} \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \\right)^{\\frac p2-1}\n\\\\ & \\quad \n\\times\n\\sum_{z,y,y'\\in\\mathbb{Z}^d} \\mathbb{E}\\left[H(y,z)^{\\frac p2}H(y',z)^{\\frac p2} \\right]\\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y)\\xi_\\varepsilon( z-y') \\xi_\\varepsilon(x-y') \n\\\\ & \n\\leq \\left( \\sum_{z\\in\\mathbb{Z}^d} \\left( \\sum_{y\\in\\mathbb{Z}^d} \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \\right)^{\\frac p2} C\\varepsilon^{(2-\\gamma)p}.\n\\end{align*}\nIn view of the inequality \n\\begin{equation} \\label{e.Greenie1}\n\\left( \\sum_{z\\in\\mathbb{Z}^d} \\left( \\sum_{y\\in\\mathbb{Z}^d} \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \\right)^{\\frac 12} \n\\leq C\\left(1+\\varepsilon^{-1} \\right)^{4-\\frac d2},\n\\end{equation}\nwhich we also will prove below, the demonstration of~\\eqref{e.vertflucpsi} is complete. \n\n\\smallskip\n\nTo complete the proof, it remains to prove~\\eqref{e.expKzHyz} and~\\eqref{e.Greenie1}.\n\n\\smallskip\n\n\\emph{Step 2.} Proof of~\\eqref{e.Greenie1}. 
We first show that, for every $x,z\\in \\mathbb{R}^{d}$, \n\\begin{equation}\\label{e.estiny}\n\\sum_{y\\in \\mathbb{Z}^{d}} \\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\leq C\\exp(-c\\varepsilon|x-z|) \\left((1+|x-z|)^{4-d}+\\varepsilon^{d-4}\\right).\n\\end{equation}\nRecall that in dimensions $d>4$, $\\xi_{\\varepsilon}(x)=\\exp(-a\\varepsilon|x|)(1+|x|)^{2-d}$. Denote $r:=|x-z|$. We estimate the sum by an integral and then split the integral into five pieces:\n\\begin{align*}\n\\lefteqn{\n\\sum_{y\\in \\mathbb{Z}^{d}} \\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\n} \\qquad & \\\\\n& \\leq C\\int_{\\mathbb{R}^{d}}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\n\\\\ & \n\\leq C \\int_{B_{r\/4}(x)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\n +C\\int_{B_{r\/4}(z)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\\\\\n& \\quad+C\\int_{B_{2r}(z)\\setminus \\left(B_{r\/4}(z)\\cup B_{r\/4}(x)\\right)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\\\\\n& \\quad+C\\int_{B_{1\/\\varepsilon}(z)\\setminus B_{2r}(z)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\\\\\n& \\quad+C\\int_{\\mathbb{R}^{d}\\setminus B_{1\/\\varepsilon}(z)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy.\n\\end{align*}\nWe now estimate each of the above terms. Observe first that\n\\begin{multline*}\n\\int_{B_{r\/4}(x)} \\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy+\\int_{B_{r\/4}(z)} \\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\n\\\\\n\\leq C \\exp(-c\\varepsilon r) \\int_{B_{r}(0)} (1+r)^{2-d}(1+|y|)^{2-d}\\, dy = C\\exp(-c\\varepsilon r) (1+r)^{4-d}. \n\\end{multline*}\nNext, we estimate \n\\begin{multline*}\n\\int_{B_{2r}(z)\\setminus \\left(B_{r\/4}(z)\\cup B_{r\/4}(x)\\right)} \\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\n\\\\\n\\leq C\\exp (-c\\varepsilon r)\\int_{B_{2r}(z)} (1+r)^{2(2-d)}\\, dy \\leq C\\exp(-c\\varepsilon r) (1+r)^{4-d} \n\\end{multline*}\nand\n\\begin{multline*}\n\\int_{B_{1\/\\varepsilon}(z)\\setminus B_{2r}(z)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\n\\\\\n\\leq C\\int_{B_{1\/\\varepsilon}\\setminus B_{2r}} \\exp(-c\\varepsilon r) (1+|y|)^{4-2d}\\, dy \n= C\\exp(-c\\varepsilon r) (1+r)^{4-d}. \n\\end{multline*}\nFinally, since $d>4$, \n \\begin{align*}\n \\lefteqn{\n\\int_{\\mathbb{R}^{d}\\setminus B_{1\/\\varepsilon}(z)}\\xi_{\\varepsilon}(z-y)\\xi_{\\varepsilon}(x-y)\\, dy\n} \\qquad & \\\\\n&\\leq C\\exp(-c\\varepsilon r)\\int_{\\mathbb{R}^{d}\\setminus B_{1\/\\varepsilon}} \\exp(-c\\varepsilon |y|)(1+|y|)^{4-2d}\\, dy \\\\\n&=C\\exp(-c\\varepsilon r)\\varepsilon^{d-4}\\int_{\\mathbb{R}^{d}\\setminus B_{1}} \\exp(-2a|y|)(\\varepsilon+|y|)^{4-2d}\\, dy \\notag\\\\\n&= C\\exp(-c\\varepsilon r)\\varepsilon^{d-4}. \\notag\n\\end{align*}\nCombining the above inequalities yields~\\eqref{e.estiny}.\n\n\\smallskip\n\nTo obtain \\eqref{e.Greenie1}, we square~\\eqref{e.estiny} and sum it over $z\\in\\mathbb{Z}^d$ to find that \n\\begin{multline*}\n \\sum_{z\\in\\mathbb{Z}^d} \\left( \\sum_{y\\in\\mathbb{Z}^d} \\xi_\\varepsilon( z-y) \\xi_\\varepsilon(x-y) \\right)^2 \n \\\\\n \\leq\\int_{\\mathbb{R}^{d}}C \\exp\\left( -c\\varepsilon|x-z| \\right) \\left((1+|x-z|)^{8-2d}+\\varepsilon^{2d-8}\\right)\\, dz =C + C\\varepsilon^{d-8}.\n\\end{multline*}\n\n\\smallskip\n\n\\emph{Step 3.} The estimate of the first term on the left side of~\\eqref{e.expKzHyz}. 
\nNotice that according to Proposition~\\ref{p.Green}, we have that \n\\begin{multline}\\label{e.betterKest}\n|K(z)|\\leq (T_{z}\\X(\\theta'_{z}(\\omega, \\omega')))^{d-1-\\delta}\\left( \\varepsilon^2 + \\left[ \\psi_\\varepsilon \\right]_{C^{1,1}_1(z,B_{1\/\\varepsilon}(z))}\\right. \\\\\n+\\left. \\sup_{y\\in B_\\ell(z), \\, y'\\in\\mathbb{R}^d} \\left( h_\\varepsilon(y') - h_\\varepsilon(y) - \\varepsilon^{4}|y'-z|^2 \\right)\\right).\n\\end{multline}\nWe control each part individually. First, we claim that for every $\\gamma\\in (0,1)$, for every $p\\in (1, \\infty)$, there exists $C(\\gamma, \\lambda, \\Lambda, d, \\ell, p)$ such that \n\\begin{equation}\\label{e.c11psi}\n\\mathbb{E}\\left[\\left(\\left[ \\psi_\\varepsilon \\right]_{C^{1,1}_1(z,B_{1\/\\varepsilon}(z))}\\right)^{p}\\right]^{\\frac 1p} \\leq C\\varepsilon^{2-\\gamma}.\n\\end{equation}\nObserve that $\\psi_{\\varepsilon}$ is a solution of\n\\begin{equation} \n\\label{e.psiep2}\n -\\tr\\left(A(x)D^2\\psi_\\varepsilon \\right) = -\\varepsilon^2\\phi_\\varepsilon + 4\\varepsilon^2\\phi_{2\\varepsilon} \\quad \\mbox{in} \\ \\mathbb{R}^d. \n\\end{equation}\nDenote the right side by $f_\\varepsilon:= -\\varepsilon^2\\phi_\\varepsilon + 4\\varepsilon^2\\phi_{2\\varepsilon}=-\\varepsilon^{2}\\psi_{\\varepsilon}+3\\varepsilon^{2}\\phi_{2\\varepsilon}$.\n\n\\smallskip\n\nWe show that, for every $\\gamma >0$ and $p\\in [1,\\infty)$, there exists $C(\\gamma,p,d,\\lambda,\\Lambda,\\ell)<\\infty$ such that, for every $\\varepsilon\\in (0,\\frac12]$,\n\\begin{equation}\n\\label{e.fepbounds}\n\\mathbb{E} \\left[ \\left( \\left\\| f_\\varepsilon \\right\\|_{L^\\infty(B_{1\/\\varepsilon})} + \\varepsilon^{-\\beta} \\left[ f_\\varepsilon \\right]_{C^{0,\\beta}(B_{1\/\\varepsilon})} \\right)^p\\, \\right]^{\\frac1p} \\leq C\\varepsilon^{2-\\gamma}.\n\\end{equation}\nWe first observe that~\\eqref{e.randomerror},~\\eqref{e.determinserr}, and~\\eqref{e.offgrid} imply that \n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\exp\\left( \\left( \\frac{1}{ \\mathcal E(\\varepsilon)} \\sup_{B_{\\sqrt{d}} } \\left| \\varepsilon^2\\phi_\\varepsilon - \\tr\\left(\\overline{A}M \\right) \\right| \\right)^{\\frac12+\\delta}\\right) \\right] \\leq C.\n\\end{equation*}\nA union bound and stationarity then give, for every $\\gamma>0$ and $p\\in [1,\\infty)$,\n\\begin{equation} \n\\label{e.newoscboundphiep}\n\\mathbb{E} \\left[ \\left\\|\\varepsilon^2\\phi_\\varepsilon - \\tr\\left(\\overline{A}M \\right) \\right\\|_{L^\\infty(B_{4\/\\varepsilon})}^p \\right]^{\\frac1p} \\leq C\\varepsilon^{-\\gamma} \\mathcal{E}(\\varepsilon).\n\\end{equation}\nwhere~$C=C(\\gamma,p,d,\\lambda,\\Lambda,\\ell)<\\infty$. The Krylov-Safonov estimate yields \n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\left( \\varepsilon^{-\\beta} \\left[ \\phi_\\varepsilon \\right]_{C^{0,\\beta}(B_{2\/\\varepsilon})} \\right)^p\\, \\right]^{\\frac1p} \\leq \\varepsilon^{-2} \\mathcal{E}(\\varepsilon). \n\\end{equation*}\nThe previous two displays and the triangle inequality yield the claim~\\eqref{e.fepbounds}. Now~\\eqref{e.c11psi} follows from Theorem~\\ref{t.regularity},~\\eqref{e.fepbounds},~\\eqref{e.newoscboundphiep} and the H\\\"older inequality. 
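\nTo indicate how the H\\\"older inequality enters, here is a sketch, with $F_\\varepsilon$ denoting, schematically, the quantity in parentheses produced by Theorem~\\ref{t.regularity} applied to $\\psi_\\varepsilon$ (as in Step~1 of the proof of Proposition~\\ref{p.sensitivity}); the notation $F_\\varepsilon$ is introduced only for this remark. For conjugate exponents $q,q'\\in (1,\\infty)$,\n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\left( (T_z\\X)^2 F_\\varepsilon \\right)^p \\right]^{\\frac1p} \\leq \\mathbb{E} \\left[ (T_z\\X)^{2pq} \\right]^{\\frac1{pq}} \\mathbb{E} \\left[ F_\\varepsilon^{pq'} \\right]^{\\frac1{pq'}} \\leq C\\varepsilon^{2-\\gamma},\n\\end{equation*}\nsince every polynomial moment of $\\X$ is finite by the moment assumption on $\\X$ and stationarity, while the second factor is controlled by~\\eqref{e.fepbounds} and~\\eqref{e.newoscboundphiep}.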
\n\n\\smallskip\n\nWe next show that for every $\\gamma\\in (0,1)$ and for every $p\\in (1, \\infty)$, \n\\begin{equation} \\label{e.grosstermpsi}\n\\mathbb{E} \\left[\\left( \\sup_{y\\in B_\\ell(z), \\, y'\\in\\mathbb{R}^d} \\left( h_\\varepsilon(y') - h_\\varepsilon(y) - \\varepsilon^{4}|y'-z|^2 \\right)\\right)^p \\right]^{\\frac 1p}\n\\leq C\\varepsilon^{2-\\gamma}.\n\\end{equation}\nBy a union bound, we find that, for every $t>0$,\n\\begin{align*} \\label{}\n\\lefteqn{\n\\P \\left[ \\sup_{y'\\in\\mathbb{R}^d} \\left( h_\\varepsilon(y') -\\varepsilon^{4}|y'-z|^2 \\right) > t \\right] \n} \\qquad & \\\\\n& \\leq \\sum_{n=0}^\\infty \\P \\left[ \\sup_{y' \\in B_{2^n\/\\varepsilon}(z)} h_\\varepsilon(y') > c2^{2n}\\varepsilon^{2} + t \\right] \n\\\\ &\n\\leq\n\\sum_{n=0}^{n(t)} \\P \\left[ \\sup_{y' \\in B_{2^n\/\\varepsilon}(z)} h_\\varepsilon(y') > t \\right]\n+ \\sum_{n=n(t)+1}^{\\infty} \\P \\left[ \\sup_{y' \\in B_{2^n\/\\varepsilon}(z)} h_\\varepsilon(y') > c2^{2n}\\varepsilon^{2} \\right]\n\\end{align*}\nwhere $n(t)$ is the largest positive integer satisfying $2^{2n(t)} \\leq t\\varepsilon^{-2}$. By~\\eqref{e.newoscboundphiep},\n\\begin{align*} \\label{}\n\\sum_{n=0}^{n(t)} \\P \\left[ \\sup_{y' \\in B_{2^n\/\\varepsilon}(z)} h_\\varepsilon(y') > t \\right] \n&\n\\leq (n(t)+1) \\P \\left[ \\sup_{y' \\in B_{2^{n(t)}\/\\varepsilon}(z)} h_\\varepsilon(y') > t\\right] \n\\\\ &\n\\leq C(n(t)+1) 2^{dn(t)} \\P \\left[ \\sup_{y' \\in B_{1\/\\varepsilon}} h_\\varepsilon(y') > t\\right] \n\\\\ & \n\\leq C(n(t)+1) 2^{dn(t)} \\left( \\varepsilon^{\\gamma-2} t \\right)^{-p}\n\\\\ &\n\\leq C(\\log t \\varepsilon^{-2})\\left(t^{-1}\\varepsilon^{(2-\\gamma)}\\right)^{p-\\frac d2}\\varepsilon^{\\frac{\\gamma d}{2}}\n\\end{align*}\nand \n\\begin{align*}\n \\sum_{n=n(t)+1}^{\\infty}\\P \\left[ \\sup_{y' \\in B_{2^n\/\\varepsilon}(z)} h_\\varepsilon(y') > c2^{2n}\\varepsilon^{2} \\right]\n&\n\\leq\n \\sum_{n=n(t)+1}^{\\infty} C2^{dn} \\P \\left[ \\sup_{y' \\in B_{1\/\\varepsilon}} h_\\varepsilon(y') > c2^{2n}\\varepsilon^{2} \\right] \n \\\\ &\n\\leq\nC\\varepsilon^{(2-\\gamma)p} \\sum_{n=n(t)+1}^{\\infty} 2^{dn} 2^{-2np}\\varepsilon^{-2p} \n\\\\ &\n\\leq C\\left( \\varepsilon^{(2-\\gamma)} t^{-1} \\right)^{p-\\frac d2}.\n\\end{align*}\nCombining the above, taking~$p$ sufficiently large, integrating over~$t$, and shrinking~$\\gamma$ and redefining~$p$ yields \\eqref{e.grosstermpsi}.\n\n\\smallskip\n\nA combination of \\eqref{e.bghlas}, \\eqref{e.betterKest}, \\eqref{e.c11psi},~ \\eqref{e.grosstermpsi} and the H\\\"older inequality yields the desired bound for the first term on the left side of~\\eqref{e.expKzHyz}.\n\n\\smallskip\n\n\\emph{Step 4.} The estimate of the second term on the left side of~\\eqref{e.expKzHyz}. 
\nAccording to~\\eqref{e.phiepprime} and Proposition~\\ref{p.Green}, for $\\X$ and $\\delta(d, \\lambda, \\Lambda)>0$ as in Proposition~\\ref{p.Green},\n\\begin{align*} \\label{}\nG_\\varepsilon(x,y) \\sup_{B_{\\ell\/2}(y)} (h_\\varepsilon- h_\\varepsilon') &\\leq C\\varepsilon^2(T_{y}\\X)^{d-1-\\delta}\\xi_{\\varepsilon}(y-x) (T_{z}\\X)^{d+1-\\delta}\\xi_{2\\varepsilon}(y-z)\\\\\n&\\leq C\\varepsilon^{2} (T_{y}\\X)^{d-1-\\delta}(T_{z}\\X)^{d+1-\\delta}\\xi_{\\varepsilon}(y-x) \\xi_{\\varepsilon}(y-z).\n\\end{align*}\nTherefore, \n\\begin{equation*}\nH(y,z)\\leq C\\varepsilon^{2} (T_{y}\\X)^{d-1-\\delta}(T_{z}\\X)^{d+1-\\delta}.\n\\end{equation*}\nThus H\\\"older's inequality yields that, for every $p\\in(1,\\infty)$,\n\\begin{equation*}\n\\tilde{\\mathbb{E}}\\left[H(y,z)^{p}\\right]\n\\leq C\\varepsilon^{2p}.\n\\end{equation*}\nThis completes the proof of~\\eqref{e.expKzHyz}.\n\\end{proof}\n\nWe next control the expectation of $\\hat{\\phi}_{\\varepsilon}-\\hat{\\phi}_{2\\varepsilon} = \\psi_\\varepsilon(x)- \\frac3{4\\varepsilon^2} \\tr\\left( \\overline{A} M \\right)$. \n\n\\begin{lemma}\n\\label{l.expectationpsiep}\nFor every $p\\in [1, \\infty)$ and $\\gamma>0$, there exists ~$C(p,\\gamma,d,\\lambda,\\Lambda,\\ell)<\\infty$ such that, for every $\\varepsilon\\in \\left(0,\\frac12\\right]$ and $x\\in\\mathbb{R}^d$, \n\\begin{equation}\n\\label{e.Epsiep}\n\\left| \\mathbb{E}[\\psi_{\\varepsilon}(x)] - \\frac3{4\\varepsilon^2} \\tr\\left( \\overline{A} M \\right) \\right|\n\\leq \nC\\varepsilon^{\\left(\\frac{d-4}2 \\wedge 2 \\right) -\\gamma}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe main step in the argument is to show that \n\\begin{equation} \n\\label{e.psieptelescoping}\n\\left| \\mathbb{E}\\left[ \\psi_\\varepsilon(0) \\right] - 4\\mathbb{E}\\left[ \\psi_{2\\varepsilon}(0) \\right]\\right| \\leq C\\varepsilon^{\\frac{d-4}{2}\\wedge 2}.\n\\end{equation}\nLet us assume~\\eqref{e.psieptelescoping} for the moment and see how to obtain~\\eqref{e.Epsiep} from it. First, it follows from~\\eqref{e.psieptelescoping} that, for every $\\varepsilon$ and $m,n\\in\\mathbb{N}$ with $m\\leq n$, \n\\begin{equation*} \\label{}\n\\left| (2^{-m}\\varepsilon)^2 \\mathbb{E}\\left[ \\psi_{2^{-m}\\varepsilon}(0) \\right] - (2^{-n}\\varepsilon)^2 \\mathbb{E} \\left[ \\psi_{2^{-n}\\varepsilon}(0) \\right] \\right| \\leq C(2^{-m}\\varepsilon)^{\\frac d2\\wedge 4}.\n\\end{equation*}\nThus, the sequence $\\left\\{ (2^{-n}\\varepsilon)^2 \\mathbb{E} \\left[ \\psi_{2^{-n}\\varepsilon} (0)\\right] \\right\\}_{n\\in\\mathbb{N}}$ is Cauchy and there exists $L \\in\\mathbb{R}$ with \n\\begin{equation*} \\label{}\n\\left| (2^{-m}\\varepsilon)^2 \\mathbb{E}\\left[ \\psi_{2^{-m}\\varepsilon}(0) \\right] -L \\right| \\leq C(2^{-m}\\varepsilon)^{\\frac d2\\wedge 4}.\n\\end{equation*}\nTaking $m=0$ and dividing by $\\varepsilon^2$, this yields \n\\begin{equation*} \\label{}\n\\left| \\mathbb{E}\\left[\\psi_{\\varepsilon}(0) \\right] - \\frac{L}{\\varepsilon^2} \\right| \\leq C\\varepsilon^{\\frac {d-4}2\\wedge 2}.\n\\end{equation*}\nBut in view of~\\eqref{e.determinserr}, we have that $L=\\frac34 \\tr\\left(\\overline{A}M \\right)$. 
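For the reader's convenience, we spell out this identification, which uses only the definition of $\\psi_\\varepsilon$ and~\\eqref{e.determinserr}:\n\\begin{equation*} \\label{}\n\\varepsilon^2 \\mathbb{E}\\left[ \\psi_{\\varepsilon}(0) \\right] = \\left( \\mathbb{E}\\left[ \\varepsilon^2\\phi_\\varepsilon(0) \\right] - \\tr(\\overline{A}M) \\right) - \\frac14 \\left( \\mathbb{E}\\left[ (2\\varepsilon)^2\\phi_{2\\varepsilon}(0) \\right] - \\tr(\\overline{A}M) \\right) + \\frac34 \\tr(\\overline{A}M) \\longrightarrow \\frac34 \\tr\\left( \\overline{A}M \\right)\n\\end{equation*}\nas $\\varepsilon\\to 0$, while $\\varepsilon^2\\mathbb{E}\\left[ \\psi_\\varepsilon(0) \\right] \\to L$ by the estimate above.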
This completes the proof of the lemma, subject to the verification of~\\eqref{e.psieptelescoping}.\n\n\\smallskip\n\nWe denote $h_\\varepsilon(x) := \\psi_\\varepsilon (x) - 4\\psi_{2\\varepsilon}(x)$ so that we may rewrite~\\eqref{e.psieptelescoping} as \n\\begin{equation} \\label{e.psieptelescoping2}\n\\left| \\mathbb{E}\\left[ h_\\varepsilon(0) \\right] \\right| \\leq C\\varepsilon^{\\frac{d-4}{2}\\wedge 2}.\n\\end{equation}\nWe next introduce the function\n\\begin{equation} \\label{e.etaep}\n\\eta_\\varepsilon(x):= \\psi_\\varepsilon(x) - \\frac14 \\psi_{2\\varepsilon}(x)\n\\end{equation}\nand observe that $\\eta_\\varepsilon$ is a solution of\n\\begin{equation} \\label{e.eqetaep}\n-\\tr\\left( A(x)D^2\\eta_\\varepsilon \\right) = -\\varepsilon^2 h_\\varepsilon \\quad \\mbox{in} \\ \\mathbb{R}^d. \n\\end{equation}\nIn the first step, we show that $\\psi_\\varepsilon$ has small oscillations in balls of radius $\\varepsilon^{-1}$, and therefore so do $h_\\varepsilon$ and $\\eta_\\varepsilon$. This will allow us to show in the second step that~\\eqref{e.eqetaep} is in violation of the maximum principle unless the mean of $h_\\varepsilon$ is close to zero. \n\n\n\n\\smallskip\n\n \\emph{Step 1.} The oscillation bound for~$\\psi_\\varepsilon$. The claim is that, for every~$\\gamma\\in (0,1)$ and~$p\\in [1, \\infty)$, there exists $C(p, \\gamma, \\lambda, \\Lambda, d, \\ell)<\\infty$ such that \n\\begin{equation}\\label{e.psiosc}\n\\mathbb{E}\\left[\\sup_{x\\in B_{1\/\\varepsilon}} \\left| \\psi_{\\varepsilon}(x) - \\mathbb{E}\\left[ \\psi_\\varepsilon(0) \\right] \\right|^{p}\\right]^{\\frac1p} \\leq C \\varepsilon^{\\frac{d-4}{2}\\wedge 2}.\n\\end{equation}\nBy the equation~\\eqref{e.psiep2} for~$\\psi_\\varepsilon$, the Krylov-Safonov estimate~\\eqref{e.classicKS} and the bounds~\\eqref{e.fepbounds}, we have, taking $\\gamma$ sufficiently small, \n\\begin{equation} \n\\label{e.snaptogridpsi}\n\\mathbb{E} \\left[ \\left( \\sup_{x\\in B_{1\/2\\varepsilon}} \\left| \\psi_\\varepsilon(x) - \\psi_\\varepsilon([x]) \\right| \\right)^p \\right]^{\\frac1p} \n\\leq C \\varepsilon^{\\sigma} \\varepsilon^{2-\\gamma} \\leq C\\varepsilon^2. \n\\end{equation}\nHere $[x]$ denotes the nearest point of $\\mathbb{Z}^d$ to $x\\in\\mathbb{R}^d$. \nBy the fluctuation estimate~\\eqref{e.flucpsi}, stationarity and a union bound, we have, for every $\\gamma>0$ and $p\\in(1,\\infty)$,\n\\begin{equation*} \\label{}\n\\mathbb{E}\\left[\\sup_{z\\in B_{1\/\\varepsilon} \\cap \\mathbb{Z}^d} \\left| \\psi_{\\varepsilon}(z) - \\mathbb{E}\\left[ \\psi_\\varepsilon(0) \\right] \\right|^{p}\\right]^{\\frac1p}\n\\leq C \\varepsilon^{\\frac{d-4}{2}\\wedge 2-\\gamma}.\n\\end{equation*}\nThe previous two lines complete the proof of~\\eqref{e.psiosc}. 
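\n\nBefore proceeding, and for the reader's convenience, we record the elementary computation behind~\\eqref{e.eqetaep}, which is used in Step~2 below: subtracting one quarter of the equation~\\eqref{e.eq1forpsiep} at scale $2\\varepsilon$ from the same equation at scale $\\varepsilon$ gives\n\\begin{equation*} \\label{}\n-\\tr\\left( A(x) D^2 \\eta_\\varepsilon \\right) = \\left( 3\\varepsilon^2 \\phi_{2\\varepsilon} - \\varepsilon^2 \\psi_\\varepsilon \\right) - \\frac14 \\left( 12\\varepsilon^2 \\phi_{4\\varepsilon} - 4\\varepsilon^2 \\psi_{2\\varepsilon} \\right) = 3\\varepsilon^2 \\psi_{2\\varepsilon} - \\varepsilon^2 \\left( \\psi_\\varepsilon - \\psi_{2\\varepsilon} \\right) = -\\varepsilon^2 h_\\varepsilon.\n\\end{equation*}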
\n\n\\smallskip\n\n\\emph{Step 2.} We prove something stronger than~\\eqref{e.psieptelescoping2} by showing that\n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\sup_{x\\in B_{1\/\\varepsilon} } \\left| h_\\varepsilon (x) \\right| \\right] \\leq C\\varepsilon^{\\frac{d-4}{2}\\wedge 2}.\n\\end{equation*}\nBy~\\eqref{e.psiosc}, it suffices to show that \n\\begin{equation*} \\label{}\n\\mathbb{E} \\left[ \\sup_{x\\in B_{1\/\\varepsilon} } h_\\varepsilon (x) \\right] \\geq - C\\varepsilon^{\\frac{d-4}{2}\\wedge 2}\n\\quad \\mbox{and} \\quad \n\\mathbb{E} \\left[ \\inf_{x\\in B_{1\/\\varepsilon} } h_\\varepsilon (x) \\right] \\leq C\\varepsilon^{\\frac{d-4}{2}\\wedge 2}.\n\\end{equation*}\nWe will give only the argument for the second inequality in the display above since the proof of the first one is similar. Define the random variable \n\\begin{equation*} \\label{}\n\\kappa: = \\varepsilon^2 \\sup_{x\\in B_{1\/\\varepsilon}} \\left| \\eta_{\\varepsilon}(x) - \\mathbb{E}\\left[ \\eta_\\varepsilon(0) \\right] \\right|.\n\\end{equation*}\nObserve that the function\n\\begin{equation*} \\label{}\nx \\mapsto \\eta_\\varepsilon(x) - 8 \\kappa |x|^2 \\quad \\mbox{has a local maximum at some point} \\ x_0 \\in B_{1\/2\\varepsilon}. \n\\end{equation*}\nThe equation~\\eqref{e.eqetaep} for $\\eta_\\varepsilon$ implies that \n\\begin{equation*} \\label{}\n -\\varepsilon^2 h_\\varepsilon(x_0) \\geq -16 \\Lambda d \\kappa \\geq -C\\kappa.\n\\end{equation*}\nThus\n\\begin{equation*} \\label{}\n\\inf_{x\\in B_{1\/2\\varepsilon}} h_\\varepsilon(x) \\leq h_\\varepsilon (x_0) \\leq C\\kappa \\varepsilon^{-2} = C \\sup_{x\\in B_{1\/\\varepsilon}} \\left| \\eta_{\\varepsilon}(x) - \\mathbb{E}\\left[ \\eta_\\varepsilon(0) \\right] \\right|. \n\\end{equation*}\nSince $\\eta_\\varepsilon = \\psi_\\varepsilon - \\frac14 \\psi_{2\\varepsilon}$, taking expectations and applying~\\eqref{e.psiosc} at the scales $\\varepsilon$ and $2\\varepsilon$ yields the claim. \n\\end{proof}\n\n\nWe now complete the proof of Theorem~\\ref{t.existcorrectors}. \n\n\\begin{proof}[Proof of~Theorem~\\ref{t.existcorrectors}]\nAccording to~\\eqref{e.flucpsi},~\\eqref{e.psiosc},~\\eqref{e.Epsiep} and a union bound, we have, for every $\\gamma>0$,\n\\begin{equation} \\label{}\n\\mathbb{E} \\left[ \\sup_{x\\in B_{1\/\\varepsilon}} \\left| \\psi_{\\varepsilon}(x) - \\frac3{4\\varepsilon^2} \\tr\\left( \\overline{A} M \\right) \\right| \\right] \n\\leq C \\varepsilon^{\\left(\\frac{d-4}2 \\wedge 2 \\right) -\\gamma}.\n\\end{equation}\nFrom this we deduce that \n\\begin{equation} \\label{}\n\\mathbb{E} \\left[ \\sup_{x\\in B_{1\/\\varepsilon}} \\left| \\hat{\\phi}_{\\varepsilon}(x) - \\hat{\\phi}_{2\\varepsilon} (x) \\right| \\right] \n\\leq C \\varepsilon^{\\left(\\frac{d-4}2 \\wedge 2 \\right) -\\gamma}.\n\\end{equation}\nSince these bounds are summable over the sequence $\\varepsilon_n:=2^{-n}$, we deduce the existence of a stationary function $\\phi$ satisfying\n\\begin{equation} \\label{}\n\\mathbb{E} \\left[ \\sup_{x\\in B_{1\/\\varepsilon}} \\left| \\hat{\\phi}_{\\varepsilon}(x) - \\phi (x) \\right| \\right] \n\\leq C \\varepsilon^{\\left(\\frac{d-4}2 \\wedge 2 \\right) -\\gamma}.\n\\end{equation}\nPassing to the limit $\\varepsilon\\to 0$ in~\\eqref{e.hatphiep} and using the stability of solutions under uniform convergence, we obtain that $\\phi$ is a solution of~\\eqref{e.correctoreq}. The estimate~\\eqref{e.correctorest} is immediate from~\\eqref{e.correctorerror}. This completes the proof of the theorem. \n\\end{proof}\n