\section{Introduction}

Fungicide resistance is a prime example of adaptation of a population to an environmental change, also known as evolutionary rescue \cite{bego11,hosg11}. While global climate change is expected to result in a loss of biodiversity in natural ecosystems, evolutionary rescue is seen as a mechanism that may mitigate this loss. In the context of crop protection the point of view is quite the opposite: reducing adaptation of crop pathogens to chemical disease control would help stabilize food production. Better understanding of the adaptive process may help slow or prevent it. This requires a detailed quantitative understanding of the dynamics of infection and the factors driving the emergence and development of fungicide resistance \cite{bogi08}. Despite the global importance and urgency of fungicide resistance, this problem has received relatively little theoretical consideration (see \cite{hopa+11a,hagu+07,sh06,pagi+05,sh89,mile+89} and \cite{bogi08} for a comprehensive review) as compared, for example, to antibiotic resistance \cite{ozsh+12,boau+01,le01,auan99}.

In recent years, agrochemical companies have begun marketing mixtures that combine fungicides with a low risk of developing resistance and fungicides with a high risk of developing resistance. In extreme cases the high-risk fungicide is no longer effective against some common pathogens because resistance has become widespread. For example, a large proportion of the European population of the important wheat pathogen \emph{Mycosphaerella graminicola} (recently renamed \emph{Zymoseptoria tritici}) \cite{orde+11,pask03} is resistant to strobilurin fungicides \cite{tobr+09}.

A number of previous modeling studies addressed the effect of fungicide mixtures on selection for fungicide resistance (for example, \cite{kaje80,sk81,jodo85,sh93,hopa+11a,hopa+13}). Different studies used different definitions of ``independent action'' (also called ``additivity'' or ``zero interaction'' in the literature) of fungicides in the mixtures \cite{sh89a} and reported somewhat different conclusions. One study \cite{sh89a} critically reviewed the outcomes of these earlier studies and attempted to clarify the consequences of using different definitions of ``independent action''. Some studies found that alternations are preferable to mixtures \cite{kaje80}, while others found that mixtures are preferable to alternations \cite{sk81}. A more recent study \cite{hopa+13} addressed this question using a detailed population dynamics model and found that, in all scenarios considered, mixtures provided the longest effective life of fungicides as compared to alternations or concurrent use (when each field receives a single fungicide, but the fungicides applied differ between the fields).
This study used Bliss' definition of ``independent action'' of the two fungicides \cite{bl39} (also called Abbott's formula in the fungicide literature \cite{ab25}).

We used a simple population dynamics model of host-pathogen interaction, based on a system of ordinary differential equations, to address the question of whether mixtures of low-risk and high-risk fungicides can provide adequate disease control while minimizing further selection for resistance. We found that the fitness cost associated with resistance mutations is a crucial parameter, which governs the outcome of the competition between the sensitive and resistant pathogen strains.

A single point mutation associated with fungicide resistance sometimes makes the pathogen completely insensitive to a fungicide, as is the case for the G143A mutation giving resistance to strobilurin fungicides in many fungal pathogens \cite{feto+08,gisi+02}. In many other cases the resistance is partial, for example, resistance of \emph{Z. tritici} and other fungi to azole fungicides \cite{co08,zhst+06}. Therefore, we considered varying degrees of resistance in our model.

In contrast to our study, resistance in \cite{hopa+11a} was assumed to bear no fitness costs for the pathogen. It was found that in the absence of fitness costs the use of fungicide mixtures \emph{delays} the development of resistance \cite{hopa+11a}. This conclusion is in agreement with our results (see Appendix\,A.4). Here we focus on finding conditions under which the selection for the resistant pathogen strain is \emph{prevented} by using fungicide mixtures.

\section{Theory and approaches}
\label{sec:model-assump}

We use a deterministic mathematical model of susceptible-infected dynamics (see \fig{fig:model-scheme})
\begin{align}
\frac{d H}{d t} &= r_H (K - H - I_\mrm{s} - I_\mrm{r} ) - b \left( \left[ 1 - \varepsilon_\mrm{s}(C, r_\mrm{B}) \right] I_\mrm{s} + \left[ 1 - \varepsilon_\mrm{r}(C, r_\mrm{B}) \right] (1-\rho_\mrm{r}) I_\mrm{r} \right) H, \label{eq:1host2fung-gen-1}\\
\frac{d I_\mrm{s}}{d t} & = b \left[ 1 - \varepsilon_\mrm{s}(C, r_\mrm{B}) \right] H I_\mrm{s} - \mu I_\mrm{s}, \label{eq:1host2fung-gen-2}\\
\frac{d I_\mrm{r}}{d t} & = b \left[ 1 - \varepsilon_\mrm{r}(C, r_\mrm{B}) \right] (1-\rho_\mrm{r}) H I_\mrm{r} - \mu I_\mrm{r}. \label{eq:1host2fung-gen-3}
\end{align}
The model has three compartments: healthy hosts $H$, hosts infected by a sensitive pathogen strain $I_\mrm{s}$, and hosts infected by a resistant pathogen strain $I_\mrm{r}$; it is similar to the models described in \cite{bogi08,hagu+07}. The subscript ``s'' stands for the sensitive strain and the subscript ``r'' stands for the resistant strain. The quantities $H$, $I_\mrm{s}$ and $I_\mrm{r}$ represent the total amount of the corresponding host tissue within one field, which could be leaves, stems or grain tissue, depending on the specific host-pathogen interaction. Healthy hosts $H$ grow with the rate $r_H$. Their growth is limited by the ``carrying capacity'' $K$, which may imply limited space or nutrients. Furthermore, healthy hosts may be infected by the sensitive pathogen strain and transformed into infected hosts in the compartment $I_\mrm{s}$ with the transmission rate $b$.
This is a compound parameter given by the product of the sporulation rate of the infected tissue and the probability that a spore causes a new infection. Healthy hosts may also be infected by the resistant pathogen strain and transformed into infected hosts in the compartment $I_\mrm{r}$. In this case, resistant mutants suffer a fitness cost $\rho_\mrm{r}$ which affects their transmission rate such that it becomes equal to $b (1 - \rho_\mrm{r})$. The corresponding terms in Eqs.\,(\ref{eq:1host2fung-gen-1})-(\ref{eq:1host2fung-gen-3}) are proportional to the amount of the available healthy tissue $H$ and to the amount of the infected tissue $I_\mrm{s}$ or $I_\mrm{r}$. Infected host tissue loses its infectivity at a rate $\mu$, where $\mu^{-1}$ is the characteristic infectious period.

Since our description is deterministic, we do not take into account the emergence of new resistance mutations but assume that the resistant pathogen strain is already present in the population. Therefore, when ``selection for resistance'' is discussed below, we refer to the process by which this existing resistant strain wins the competition due to its higher fitness with respect to the sensitive strain in the presence of fungicide treatment. Emergence of new resistance mutations is a different problem, which goes beyond the scope of our study and requires stochastic simulation methods. We do not consider the possibility of double resistance in the model, but by preventing selection for single resistance as described here, one would also diminish the probability of the emergence of double resistance for both sexually and asexually reproducing pathogens (see Appendix\,A.7).

We consider two fungicides A and B. Fungicide A is the high-risk fungicide, to which the resistant pathogen strain exhibits a variable degree of resistance. However, the sensitive strain is fully sensitive to fungicide A. Fungicide B is the low-risk fungicide, i.\,e. both pathogen strains are fully sensitive to it. We compare the effects of the fungicide A applied alone, fungicide B applied alone and the effect of their mixture in different proportions.

We assume that the fungicides will decrease the pathogen transmission rate $b$ [see the expression in square brackets in \eq{eq:1host2fung-gen-2}, \eq{eq:1host2fung-gen-3}]. For example, application of a fungicide could result in production of spores that are deficient in essential metabolic products such as ergosterol or $\beta$-tubulin. Consequently, these spores would likely have a lower success rate in causing new infections. Spores of sensitive strains of \emph{Z. tritici} produced shorter germ tubes when exposed to azoles \cite{leal+07}. Spores that produce shorter germ tubes are less likely to find and penetrate stomata, hence are less likely to give rise to new infections. Protectant activity of fungicides will also reduce the transmission rate $b$ \cite{wowi01,pf06}. These studies \cite{wowi01,pf06} also reported that fungicide application leads to a reduction in the number of spores produced. This outcome can be attributed to the fungicide decreasing the sporulation rate and thus affecting $b$, or decreasing the infectious period and thus affecting $\mu$, or both of these effects. More detailed measurements are often needed to distinguish between these different effects.
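For illustration, the competition dynamics encoded in Eqs.\,(\ref{eq:1host2fung-gen-1})-(\ref{eq:1host2fung-gen-3}) can be explored by direct numerical integration. The following minimal sketch (Python) is not part of the analysis presented below: the fungicide effects $\varepsilon_\mrm{s}$ and $\varepsilon_\mrm{r}$ are treated as fixed numbers, and all parameter values are assumptions chosen purely for illustration.
\begin{verbatim}
# Minimal numerical sketch of Eqs. (1)-(3); all parameter values are illustrative only.
from scipy.integrate import solve_ivp

r_H, K, b, mu = 0.1, 1.0, 2.0, 0.5    # host growth, carrying capacity, transmission, 1/infectious period
rho_r = 0.1                           # fitness cost of resistance
eps_s, eps_r = 0.4, 0.1               # fungicide effects on the two strains (held fixed here)

def rhs(t, y):
    H, Is, Ir = y
    beta_s = b * (1.0 - eps_s)                  # effective transmission rate, sensitive strain
    beta_r = b * (1.0 - eps_r) * (1.0 - rho_r)  # effective transmission rate, resistant strain
    dH  = r_H * (K - H - Is - Ir) - (beta_s * Is + beta_r * Ir) * H
    dIs = beta_s * H * Is - mu * Is
    dIr = beta_r * H * Ir - mu * Ir
    return [dH, dIs, dIr]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 1e-3, 1e-3])
H, Is, Ir = sol.y[:, -1]
# With these values the resistant strain has the larger basic reproductive number
# and eventually excludes the sensitive strain.
print(f"H = {H:.3f}, I_s = {Is:.3g}, I_r = {Ir:.3g}")
\end{verbatim}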
When only one fungicide is applied, the reduction of the transmission rate is described by
\begin{equation}\label{eq:eps-1fungic}
 \varepsilon_\mrm{A}(C_\mrm{A}) = k_\mrm{kA} \frac{C_\mrm{A}}{C_\mrm{A} + C_\mrm{50A}},
\end{equation}
for the fungicide A, and by
\begin{equation}\label{eq:eps-1fungicB}
 \varepsilon_\mrm{B}(C_\mrm{B}) = k_\mrm{kB} \frac{C_\mrm{B}}{C_\mrm{B} + C_\mrm{50B}},
\end{equation}
for the fungicide B. These functions grow with the fungicide doses $C_\mrm{A}$, $C_\mrm{B}$ and saturate to the values $k_\mrm{kA}$, $k_\mrm{kB}$, respectively, which are the maximum reductions in the transmission rate (or efficacies). This functional form was used before in the fungicide resistance literature \cite{hagu+07,gugi99}. We also performed the analysis for the exponential fungicide action more common in plant pathology and obtained qualitatively similar results. The reason for choosing the function in \eq{eq:eps-1fungic} was that it made it possible to obtain all the results analytically. The parameters $C_\mrm{50A}$, $C_\mrm{50B}$ represent the fungicide dose at which half of the maximum effect is achieved. These parameters can always be made equal by rescaling the concentration axis for one of the fungicides. Hence, we set $C_\mrm{50A}=C_\mrm{50B}=C_{50}$.

We next determine the effect of a mixture of two fungicides according to Loewe's definition of additivity (or non-interaction) \cite{be89} (an equivalent graphic procedure is known as the Wadley method in the fungicide literature \cite{lebe+86}). It is based on the notion that a compound cannot interact pharmacologically with itself. A sham mixture of a compound A with itself can be created and its effect used as a reference point for assessing whether the components of a real mixture interact pharmacologically. When the two compounds A and B have the same effect as the sham mixture of the compound A with itself, they are said to have no interaction (or an additive interaction). In this case, the isobologram equation
\begin{equation}\label{eq:isobol}
C_\mrm{A}/C_\mrm{Ai} + C_\mrm{B}/C_\mrm{Bi} = 1
\end{equation}
holds (see Sec. VA of \cite{be89} for the derivation). Here, $C_\mrm{A}$ and $C_\mrm{B}$ are the doses of the compounds A and B, respectively, when applied in the mixture; $C_\mrm{Ai}$ is the isoeffective dose of the compound A, that is the dose at which compound A alone has the same effect as the mixture; and $C_\mrm{Bi}$ is the isoeffective dose of the compound B. If the mixture of A and B has a larger effect than the zero-interactive sham mixture, then $C_\mrm{A}/C_\mrm{Ai} + C_\mrm{B}/C_\mrm{Bi} < 1$ and the two compounds are said to interact synergistically. Conversely, when the mixture of A and B has a smaller effect than the zero-interactive sham mixture, $C_\mrm{A}/C_\mrm{Ai} + C_\mrm{B}/C_\mrm{Bi} > 1$ and the two compounds interact antagonistically.

Using the dose-response dependencies of each fungicide when applied alone, \eq{eq:eps-1fungic}, \eq{eq:eps-1fungicB} and \eq{eq:isobol}, we derive the dose-response function for the combined effect of the two fungicides on the sensitive pathogen strain in the case of no pharmacological interaction (see Sec.
VIB of \cite{be89} for the derivation):
\begin{align}\label{eq:veps-sens-addit}
\varepsilon_\mrm{s}(C_\mrm{A}, C_\mrm{B}) = \frac{ k_\mrm{kA} C_\mrm{A} + k_\mrm{kB} C_\mrm{B}} { C_\mrm{A} + C_\mrm{B} + C_{50}}.
\end{align}
Similarly, we determine the combined effect of the two fungicides on the resistant pathogen strain, still without pharmacological interaction:
\begin{align}\label{eq:veps-res-addit}
\varepsilon_\mrm{r}(C_\mrm{A}, C_\mrm{B}) = \frac{ k_\mrm{kA} \alpha C_\mrm{A} + k_\mrm{kB} C_\mrm{B}} { \alpha C_\mrm{A} + C_\mrm{B} + C_{50}},
\end{align}
where we introduced $\alpha$, the degree of sensitivity of the resistant strain to the fungicide A (the high-risk fungicide). At $\alpha=0$ the pathogen is fully resistant to fungicide A and the effect of the mixture $\varepsilon_\mrm{r}(C_\mrm{A}, C_\mrm{B})$ in \eq{eq:veps-res-addit} does not depend on its dose $C_\mrm{A}$, while at $\alpha = 1$ the pathogen is fully sensitive to fungicide A.

The expressions in \eq{eq:veps-sens-addit} and \eq{eq:veps-res-addit} are only valid in the range of fungicide concentrations over which isoeffective concentrations can be determined for both fungicides. Here, the isoeffective concentration is the concentration of a fungicide applied alone that has the same effect as the mixture. This requirement means that we are only able to consider the effect of the mixture at a sufficiently low total dose $C = C_\mrm{A} + C_\mrm{B}$, such that an isoeffective concentration exists for each of the two fungicides.

It is convenient to express the combined effect in terms of the total dose $C$ and the proportion of fungicide B in the mixture, $r_\mrm{B} = C_\mrm{B}/C$. Allowing, in addition, for a pharmacological interaction of strength $u$ between the two fungicides (see Appendix\,A.1), the effects of the mixture on the sensitive and the resistant strain in the case of equal maximum effects ($k_\mrm{kA}=k_\mrm{kB}=k_\mrm{k}$) read
\begin{align}
\varepsilon_\mrm{s}(C, r_\mrm{B}) &= k_\mrm{k} \frac{C}{C + C_{50}/\gamma_\mrm{s}}, & \gamma_\mrm{s} &= 1 + u \sqrt{r_\mrm{B}(1-r_\mrm{B})}, \label{eq:veps-mix-comp-sens}\\
\varepsilon_\mrm{r}(C, r_\mrm{B}) &= k_\mrm{k} \frac{C}{C + C_{50}/\gamma_\mrm{r}}, & \gamma_\mrm{r} &= \alpha (1 - r_\mrm{B}) + r_\mrm{B} + u \sqrt{\alpha r_\mrm{B}(1-r_\mrm{B})}. \label{eq:veps-mix-comp-res}
\end{align}
At $u=0$ the two fungicides do not interact pharmacologically. The case $u>0$ represents synergy: the interaction term proportional to $u$ in \eq{eq:veps-mix-comp-sens} and \eq{eq:veps-mix-comp-res} is positive and it reduces the value of $C_{50}$, meaning that the same effect can be achieved at a lower dose than at $u=0$. The case when $u<0$ corresponds to antagonism (see Appendix\,A.1). Note that the interaction term is proportional to $\sqrt{r_\mrm{B}(1-r_\mrm{B})}$. This functional form guarantees that it vanishes whenever only one of the compounds is used, i.\,e. at $r_\mrm{B}=0$ or $r_\mrm{B}=1$.

In order to make clear the questions we ask and the assumptions we make, we consider the dynamics of the frequency of the resistant strain $p(t) = I_\mrm{r}(t)/\left[ I_\mrm{r}(t) + I_\mrm{s}(t) \right]$. The rate of its change is obtained from Eqs.\,(\ref{eq:1host2fung-gen-1})-(\ref{eq:1host2fung-gen-3}):
\begin{equation}\label{eq:resfreq-dyn}
\frac{d p}{d t} = s(t) p (1 - p),
\end{equation}
where
\begin{equation}\label{eq:sel-coeff}
 s = b \left[ (1 - \varepsilon_\mrm{r}(C, r_\mrm{B})) (1 - \rho_\mrm{r} ) - (1 -\varepsilon_\mrm{s}(C, r_\mrm{B})) \right] H(t)
\end{equation}
is the selection coefficient [a similar expression was found in \cite{gugi99}]. Here $\varepsilon_\mrm{s}(C, r_\mrm{B})$ and $\varepsilon_\mrm{r}(C, r_\mrm{B})$ are given by \eq{eq:veps-mix-comp-sens} and \eq{eq:veps-mix-comp-res}. If $s>0$, then the resistant strain is favored by selection and will eventually dominate the pathogen population ($p \to 1$ at $t \to \infty$). Alternatively, if $s<0$, then the sensitive strain is selected and will dominate the population ($p \to 0$ at $t \to \infty$).

The focus of this paper is to investigate the parameter range over which $s<0$, i.\,e. the sensitive strain is favored by selection.
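As a concrete illustration, the sign of the selection coefficient in \eq{eq:sel-coeff} can be evaluated directly from \eq{eq:veps-mix-comp-sens} and \eq{eq:veps-mix-comp-res}. The following minimal sketch (Python) does exactly this; the parameter values are assumptions chosen for illustration, and $b$ and $H(t)$ are omitted because they only multiply the bracket and therefore do not change its sign.
\begin{verbatim}
# Sketch: direction of selection from Eq. (sel-coeff); illustrative values only.
import numpy as np

def eps_mix(C, rB, kk=0.5, C50=1.0, u=0.0, alpha=0.0):
    """Mixture effects on the sensitive and resistant strains (Loewe additivity
    with interaction strength u), as in Eqs. (veps-mix-comp-sens/res)."""
    gamma_s = 1.0 + u * np.sqrt(rB * (1.0 - rB))
    gamma_r = alpha * (1.0 - rB) + rB + u * np.sqrt(alpha * rB * (1.0 - rB))
    eps_s = kk * C / (C + C50 / gamma_s)
    eps_r = kk * C / (C + C50 / gamma_r) if gamma_r > 0 else 0.0
    return eps_s, eps_r

def selection_sign(C, rB, rho_r, **kw):
    """Sign of the bracket in Eq. (sel-coeff); positive favors the resistant strain."""
    eps_s, eps_r = eps_mix(C, rB, **kw)
    return np.sign((1.0 - eps_r) * (1.0 - rho_r) - (1.0 - eps_s))

# Half-and-half mixture vs. a mixture dominated by the low-risk fungicide B,
# with full resistance (alpha = 0) and a 5% fitness cost.
print(selection_sign(C=2.0, rB=0.5, rho_r=0.05))   # +1.0: resistant strain favored
print(selection_sign(C=2.0, rB=0.9, rho_r=0.05))   # -1.0: sensitive strain favored
\end{verbatim}
The remainder of this section determines the parameter range with $s<0$ analytically.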
Mathematically this corresponds to finding the range of\nstability of the equilibrium (fixed) point of the system\nEqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}),\ncorresponding to $H>0$, $I_\\mrm{s}>0$, $I_\\mrm{r}=0$. Our focus is\nmainly on the direction of selection. To address this point we\ndo not need to assume that the host-pathogen equilibrium is\nreached. However, we explicitly assume that the host-pathogen\nequilibrium is reached during one season in \\sec{sec:trben}, where we\nevaluate the benefit of fungicide treatment. The implications of this\nassumption are discussed at the beginning of \\sec{sec:trben}.\nFurthermore, we assume that the fungicide dose is constant over\ntime. (See Appendix A.\\,4 for the justification of this assumption.)\n\n\nA careful examination of the \\eq{eq:sel-coeff} reveals that the sign\nof the selection coefficient $s$, and therefore the direction of\nselection, is determined by the expression in square brackets, which\ncan be either positive or negative depending on the values of $C$,\n$r_\\mrm{B}$, $\\rho_\\mrm{r}$ and the shapes of the functions\n$\\varepsilon_\\mrm{s,r}$. The sign of the selection coefficient is unaffected\nby $b$ and $H(t)$ since both of them are non-negative. Consequently\nmost of the results of this paper do not depend on a particular shape\nof $H(t)$ and hence are independent of a particular form of the\ngrowth term (except for those in \\sec{sec:trben} concerned with the\nbenefit of fungicide treatment). This means that the main conclusions\nof the paper remain valid for both perennial crops, where the amount of\nhealthy host tissue steadily increases over many years, and for annual\ncrops, where the healthy host tissue changes cyclically during\neach growing season.\n\nWe also neglect the spatial dependencies of the variables $H$,\n$I_\\mrm{s}$ and $I_\\mrm{r}$ and all other parameters. The latent phase\nof infection, which can be considerable for some pathogens, is also\nneglected.\nSince we neglect mutation, migration and spatial heterogeneity, the\nresistant and sensitive pathogen strains cannot co-exist in the long\nterm (see Appendix\\,A.1). Only one of them eventually survives: the\none with a higher basic reproductive number.\n\nThe basic reproductive number, $R_0$, is often used in epidemiology as\na measure of transmission fitness of infectious pathogens\n\\cite{anma86}. It is defined as the expected number of secondary\ninfections resulting from a single infected individual introduced into\na susceptible (healthy) population. At $R_0>1$ the infection can spread over the\npopulation, while at $R_0<1$ the epidemic dies out.\n\nThe equilibrium stability analysis of the model\nEqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}) (see\nAppendix\\,A.1) shows that the relationship between the basic\nreproductive number of the sensitive strain $R_\\mrm{0s} = b \\left(1 -\n \\varepsilon_\\mrm{s}(C, r_\\mrm{B}) \\right) K\/\\mu $ and the basic\nreproductive number of the resistant strain $R_\\mrm{0r} = b \\left(1 -\n \\varepsilon_\\mrm{r}(C, r_\\mrm{B}) \\right) (1 - \\rho_\\mrm{r}) K \/ \\mu$\ndetermines the long-term outcome of the epidemic. The sensitive strain\nwins the competition and dominates the pathogen population if\n$R_\\mrm{0s}>1$, such that it can survive in the absence of the\nresistant strain, and $R_\\mrm{0s} > R_\\mrm{0r}$ (this is equivalent to\n$s<0$), such that it has a selective advantage over the resistant\nstrain. 
The latter inequality is equivalent to
\begin{align}
\varepsilon_\mrm{s}(C,r_\mrm{B}) < \rho_\mrm{r} + \varepsilon_\mrm{r} (C, r_\mrm{B}) (1 - \rho_\mrm{r}). \label{eq:brn-ineq-gen0}
\end{align}
Similarly, the resistant strain wins the competition and dominates the population if $R_\mrm{0r}>1$ and $R_\mrm{0r} > R_\mrm{0s}$ (this is equivalent to $s>0$).

We determined the range of the fungicide doses $C$ and fitness costs $\rho_\mrm{r}$ satisfying the inequality (\ref{eq:brn-ineq-gen0}) analytically when (i) the high-risk fungicide has a higher efficacy than the low-risk fungicide ($k_\mrm{kA}>k_\mrm{kB}$), but pharmacological interaction is absent ($u=0$); and (ii) the two fungicides have the same efficacy ($k_\mrm{kA}=k_\mrm{kB}=k_\mrm{k}$), but may interact pharmacologically ($u \ne 0$). In case (i) the criterion (\ref{eq:brn-ineq-gen0}) assumes the form
\begin{equation}\label{eq:brn-ineq-gen-kka-kkb}
\frac{ C}{ C + C_{50}} < \rho_\mrm{r}/k_\mrm{km} + k_\mrm{kB}/k_\mrm{km} \frac{ C}{ C + C_{50}/\gamma_\mrm{r}} (1 - \rho_\mrm{r}),
\end{equation}
where $k_\mrm{km} = k_\mrm{kA}(1-r_\mrm{B}) + k_\mrm{kB} r_\mrm{B}$, while in case (ii) the criterion (\ref{eq:brn-ineq-gen0}) reads
\begin{equation}\label{eq:brn-ineq-gen}
\frac{ C}{ C + C_{50}/\gamma_\mrm{s}} < \rho_\mrm{r}/k_\mrm{k} + \frac{ C}{ C + C_{50}/\gamma_\mrm{r}} (1 - \rho_\mrm{r}).
\end{equation}

To keep the presentation concise, below we present the results corresponding to case (ii), i.\,e. solve the inequality (\ref{eq:brn-ineq-gen}). However, we verified that all the conclusions remain the same in case (i). In the more general case, when $k_\mrm{kA}>k_\mrm{kB}$ and $u \ne 0$, the parameter ranges satisfying the inequality (\ref{eq:brn-ineq-gen0}) can only be determined numerically.

We assume here that both the cost of resistance and the fungicidal activity decrease the transmission rate $b$. However, we performed the same analysis when the effects of the resistance cost and the fungicide enter the model in other ways and obtained qualitatively similar results (see Appendix\,A.5).

The simplicity of the model allows us to obtain all of the results analytically. We determined explicit mathematical relationships between the quantities of interest, which enabled us to study the effects over the whole range of parameters.

\section{Results}
\label{sec:results}

We first investigate the parameter ranges over which resistant or sensitive strains dominate the pathogen population for the case of fungicides A and B applied individually and in a mixture (\sec{sec:selres}). Then, we consider the optimal proportion of fungicides to include in a mixture in \sec{sec:optrat} and the benefit of fungicide treatment in \sec{sec:trben}. Finally, we take into account possible pharmacological interactions between fungicides and consider the effect of partial resistance (\sec{sec:pharmint}, \ref{sec:partres}).

\subsection{Selection for resistance}
\label{sec:selres}

The ranges of fungicide dose and cost of resistance at which the sensitive (white) or resistant (grey) pathogen strain is favored by selection are shown in \fig{fig:phase-diagr}. In all scenarios competitive exclusion is observed: one of the strains takes over the whole pathogen population and the other one is eliminated.
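The classification underlying \fig{fig:phase-diagr} can also be reproduced numerically by comparing $R_\mrm{0s}$ and $R_\mrm{0r}$ on a grid of doses and fitness costs. The following minimal sketch (Python) uses the illustrative dose-response values from the caption of \fig{fig:phase-diagr}; the epidemiological parameters $b$, $K$ and $\mu$ are assumptions and enter only through the condition $R_\mrm{0r}>1$.
\begin{verbatim}
# Sketch: which strain is favored on a (rho_r, C) grid, cf. Fig. 2C; illustrative values.
import numpy as np

kk, C50, u, alpha, rB = 0.5, 1.0, 0.0, 0.0, 0.5   # as in the caption of Fig. 2C
b, K, mu = 2.0, 1.0, 0.5                          # assumed epidemiological parameters

rho = np.linspace(0.0, 0.3, 61)                   # fitness cost of resistance
C   = np.linspace(0.0, 10.0, 101)                 # total fungicide dose
RHO, CC = np.meshgrid(rho, C)

gamma_s = 1.0 + u * np.sqrt(rB * (1.0 - rB))
gamma_r = alpha * (1.0 - rB) + rB + u * np.sqrt(alpha * rB * (1.0 - rB))
eps_s = kk * CC / (CC + C50 / gamma_s)
eps_r = kk * CC / (CC + C50 / gamma_r)

R0s = b * (1.0 - eps_s) * K / mu
R0r = b * (1.0 - eps_r) * (1.0 - RHO) * K / mu
resistant_wins = (R0r > R0s) & (R0r > 1.0)        # grey region of Fig. 2C
print(f"resistant strain favored on {resistant_wins.mean():.0%} of this grid")
\end{verbatim}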
If a low-risk fungicide is applied alone, the sensitive strain has a selective advantage across the whole parameter range in \fig{fig:phase-diagr}A. When only a high-risk fungicide is applied (\fig{fig:phase-diagr}B), the resistance dominates if the fitness cost is lower than the maximum effect of the fungicide, $\rho_\mrm{r} < k_\mrm{k}$, and the fungicide dose is high enough, $C > C_\mrm{b}$ [above the solid curve given by Eq.\,(A.23) in \fig{fig:phase-diagr}B]. When the two fungicides are applied as a mixture (\fig{fig:phase-diagr}C), the resistant strain is favored over a smaller range of parameters: it dominates only if the fitness cost is below a threshold value, $\rho_\mrm{r} < \rho_\mrm{rb}$ [see Eq.\,(A.16)], and the total dose lies within the region bounded by the solid curve in \fig{fig:phase-diagr}C. The threshold $\rho_\mrm{rb}$ is shown by the dashed vertical line in \fig{fig:phase-diagr}C.

\subsection{Optimal proportion of fungicides in the mixture}
\label{sec:optrat}

The threshold $\rho_\mrm{rb}$ depends on the proportion of fungicides in the mixture. Adding more of the low-risk fungicide, while keeping the same total dose $C$, reduces the threshold. This diminishes the range of fitness costs over which the resistant strain dominates. On the other hand, adding less of the low-risk fungicide, while again keeping $C$ the same, increases the threshold, which increases the parameter range over which the resistant strain is favored.

Therefore, at a given fitness cost $\rho_\mrm{r}$, one can adjust the fungicide ratio $r_\mrm{B}$ such that $\rho_\mrm{r}>\rho_\mrm{rb}$. This is shown in \fig{fig:opt-fung-rat-comb}: the curve shows the critical proportion of the low-risk fungicide $r_\mrm{Bc}$, above which no selection for resistance occurs at any total fungicide dose $C$. One can see from \fig{fig:opt-fung-rat-comb} that if the resistance cost is absent ($\rho_\mrm{r} = 0$), then the high-risk fungicide should not be added at all if one wants to prevent selection for resistance. At larger fitness costs, the value of $r_\mrm{Bc}$ decreases, making it possible to use a larger proportion of the high-risk fungicide without selecting for resistance.

Finding an optimum proportion of fungicides requires knowledge of both the fitness cost $\rho_\mrm{r}$ and the maximum effect of the fungicide $k_\mrm{k}$ [Eq.\,(A.17)]. However, if the cost of resistance and the fungicides affect the infectious period of the pathogen $\mu^{-1}$ (see Sec.\,A.5) and not the transmission rate $b$ as we assumed above, then a simpler expression for the critical proportion of fungicides in the mixture is obtained [Eq.\,(A.18)], which depends only on the ratio between the fitness cost and the maximum fungicide effect, $\rho_\mrm{r}/k_\mrm{k}$. In this case, if the fitness cost is at least 5 percent of the maximum fungicide effect, then we predict that up to about 20 percent of the high-risk fungicide can be used in a mixture without selecting for resistance. An example of the cost of fungicide resistance manifesting as a reduction in infectious period was observed in metalaxyl-resistant isolates of \emph{Phytophthora infestans} \cite{kaco89}. In this experiment, the infectious period of the resistant isolates was reduced, on average, by 25\,\% compared to the susceptible isolates \cite{kaco89}.

So far we have shown how choosing an optimal proportion of fungicides in the mixture prevents selection for resistance. Now, we will consider in more detail how to achieve an adequate level of disease control.

\subsection{Treatment benefit}
\label{sec:trben}

The yield of cereal crops is usually assumed to be proportional to the healthy green leaf area, which corresponds in our model to the amount of healthy hosts $H(t)$.
Accordingly, we quantify the benefit of the fungicide treatment, $B(t)$, as the ratio between the amount of healthy hosts $H(t)$ when both the disease and the treatment are present and its value $H_\mrm{nd}(t)$ in the absence of disease: $B(t) = H(t)/H_\mrm{nd}(t)$. Hence, $B(t)=1$ corresponds to a perfect treatment, which eradicates the disease completely, while a treatment benefit of zero corresponds to a situation where all healthy hosts are infected by the disease. In order to obtain analytical expressions for the treatment benefit $B(t)$, we consider one growing season and assume that the host-pathogen equilibrium is reached during the season (see Appendix\,A.3 for a discussion of these assumptions).

The treatment benefit at equilibrium is shown in \fig{fig:trben-vs-conc-cost-col} as a function of the fitness cost and the fungicide dose (see Appendix\,A.3 for equations). When a low-risk fungicide is applied alone [\fig{fig:trben-vs-conc-cost-col}A], the sensitive strain is favored by selection over the whole range of parameters. Therefore, the treatment benefit increases monotonically with the fungicide dose and is not affected by the cost of resistance. In contrast, when a high-risk fungicide is applied alone [\fig{fig:trben-vs-conc-cost-col}B], a region at low fitness costs appears [to the left of the solid curve in \fig{fig:trben-vs-conc-cost-col}B], where the resistant strain is favored. Here, the treatment benefit does not depend on the fungicide dose, but increases with the cost of resistance. Hence, if the fitness cost is too low to stop selection for resistance, then the fungicide treatment will fail.

In the case of a mixture of a high-risk and a low-risk fungicide, the parameter range over which the resistant strain is favored becomes smaller [\fig{fig:trben-vs-conc-cost-col}C, to the left of the solid curve]. In this range the treatment benefit increases with the cost of resistance, since larger costs reduce the impact of the disease per se. Also, the treatment benefit increases with the total fungicide dose in this range, because the low-risk fungicide works against the resistant strain.

As we have shown above in \sec{sec:optrat}, in the presence of a substantial fitness cost, one can avoid selection for resistance by adjusting the proportion of the two fungicides in the mixture. Then, the total fungicide dose can be chosen such that the treatment benefit reaches a high enough value and adequate disease control is achieved.

At the end of \sec{sec:optrat} we estimated that up to about 20 percent of the high-risk fungicide can be used in a mixture without selecting for resistance if the fitness cost is at least 5 percent of the maximum fungicide effect on the infectious period $\mu^{-1}$. But how much extra control does one obtain by adding the high-risk fungicide to the mixture? We estimate that adding 20 percent of the high-risk fungicide to the mixture increases the treatment benefit by about 12 percent at $R_\mrm{0s}(C=0)=b K/\mu=4$ and by about 9 percent at $R_\mrm{0s}(C=0)=2$ (see Sec.\,A.3 and Fig.\,A.1 for more details). In the case when the high-risk fungicide has a larger maximum effect, i.\,e. $k_\mrm{kA}>k_\mrm{kB}$, the benefit of adding it to the mixture will increase.
However, the largest proportion of the high-risk fungicide that can be added without selecting for resistance will decrease.

\subsection{The effect of pharmacological interaction between fungicides}
\label{sec:pharmint}
Synergistic interactions between fungicides make their combined effect greater than expected with additive interactions. The sensitive pathogen strain is suppressed more by a synergistic mixture, while the resistant strain is not affected by the interaction (in the case of full resistance, $\alpha=0$). This increases the range of fitness costs over which resistance has a selective advantage [the dashed line in \fig{fig:phase-diagr}C shifts to the right]. Consequently, the critical proportion of the low-risk fungicide in the mixture $r_\mrm{Bc}$, above which the resistant mutants are eliminated, increases [dotted curve in \fig{fig:rbc-vs-rhoa-interact-partres}A]. In contrast, an antagonistic mixture suppresses the sensitive strain less effectively than either fungicide used alone. In this case the range of fitness costs over which resistance dominates becomes smaller and the ratio $r_\mrm{Bc}$ decreases [dashed curve in \fig{fig:rbc-vs-rhoa-interact-partres}A]. Hence, reduced resistance evolution is achieved, but at the expense of reduced disease control. This result is in agreement with studies on drug interactions in the context of antibiotic resistance, where antagonistic drug combinations were found to select against resistant bacterial strains \cite{chcr+07}.

\subsection{The effect of partial fungicide resistance}
\label{sec:partres}
Consider the situation when the resistant pathogen strain is not fully protected from the high-risk fungicide, but exhibits a partial resistance ($0 < \alpha < 1$). In this case, the fungicide mixture is more effective in suppressing the resistant strain than in the case of full resistance ($\alpha=0$) considered above. Therefore, one needs less of the low-risk fungicide in the mixture to reach the conditions where resistance is eliminated by selection: the critical proportion of the low-risk fungicide in the mixture decreases with the degree of sensitivity $\alpha$ in \fig{fig:rbc-vs-rhoa-interact-partres}B.
Also, in \fig{fig:rbc-vs-rhoa-interact-partres}A the dependency of the critical ratio of the fungicide B in the mixture for partial resistance (light grey curve) lies below the one at perfect resistance and reaches zero at a much smaller value of the fitness cost. Thus, knowledge of the degree of resistance is crucial for determining an appropriate proportion of fungicides in the mixture.

\section{Discussion}
\label{sec:conclusions}

The three main outcomes of our study are: (i) if fungicide resistance comes without a fitness cost, application of fungicides prone to resistance (high-risk fungicides) in a mixture with fungicides still free from resistance (low-risk fungicides) will select for resistance; (ii) if sufficiently high costs are found, then an optimal proportion of the high-risk fungicide in a mixture with the low-risk fungicide exists that does not select for resistance; (iii) this mixture can potentially be used for preventing de novo emergence of fungicide resistance, in which case the relevant fitness cost is the ``inherent'' cost of fungicide resistance before the compensatory evolution occurs (see below).

In the absence of fitness costs, application of a mixture of high-risk and low-risk fungicides will select for resistance. Consequently, the resistant strain will eventually dominate the pathogen population and the sensitive strain will be eliminated. Because of this, the high-risk fungicide will not affect the amount of disease and only the low-risk fungicide component of the mixture will be acting against the disease. Hence, the high-risk fungicide becomes nonfunctional in the mixture and using the low-risk fungicide alone would have the same effect at a lower financial and environmental cost.

In contrast, if sufficiently high costs are found, then high-risk fungicides can be used effectively for an extended period of time. According to our model, an optimal proportion of the high-risk fungicide in a mixture with the low-risk fungicide can be determined that contains as much as possible of the high-risk fungicide, but still does not select for resistance while providing adequate disease control (see Box\,\ref{nbox:getrb-alg}). If a mixture with the optimum proportion is applied, then the rise of the resistant strain is prevented for an unlimited time. Thus, the scheme in Box\,\ref{nbox:getrb-alg} provides a framework for using our knowledge about the evolutionary dynamics of plant pathogens and their interaction with fungicides in devising practical strategies for management of fungicide resistance.

In order to apply the scheme in Box\,\ref{nbox:getrb-alg}, one needs to know the dose-response parameters of the fungicides $k_\mrm{k}$ and $C_{50}$, the degree of fungicide sensitivity $\alpha$ (or the resistance factor), the degree of pharmacological interaction $u$ and the fitness cost of resistance mutations. Fungicide dose-response curves are routinely determined empirically (for example, \cite{locl05,pahi+98}) and can be used to estimate the model parameters $k_\mrm{k}$ and $C_{50}$ \cite{hopa+11a}. The fungicide sensitivity is known to be lost completely in some cases (for example, most cases of QoI resistance), i.\,e. $\alpha=0$, while in other cases with partial resistance the degree of sensitivity (or the resistance factor) was measured (for example, \cite{leal+07}). Pharmacological interaction between several different fungicides was also characterized empirically (see \cite{gi96} and the references therein).
Also, the fitness costs of resistance were characterized empirically in many cases (see below). In the past these measurements were performed independently, but our study provides motivation to bring them together, since all these parameters need to be characterized for the same plant-pathogen-fungicide combination.

These measurements will allow one to predict the optimal proportion of the two fungicides in the mixture theoretically. This prediction needs to be tested using field experimentation, in which the amount of disease and the frequency of resistance would be measured as functions of time at different proportions of the high- and low-risk fungicides in the mixture. From these measurements the optimal proportion of the fungicides can be obtained empirically. It is this empirically determined optimal proportion of fungicides that can be used for practical guidance on management of fungicide resistance. Moreover, from the comparison of the optimal proportions obtained theoretically and empirically, one can evaluate the performance of the model and identify the aspects of the model that need improvement.

So far, we have considered the scenario where both the sensitive and the resistant pathogen strains increase from low numbers, i.\,e. resistant mutants pre-exist in the pathogen population. In this scenario the strain with the higher fitness (or basic reproductive number) eventually outcompetes the other strain. This competition may occur over a time scale of several growing seasons, so that there is enough time for compensatory mutations that diminish fitness costs of resistance to emerge. This needs to be taken into account when determining the optimal proportion of fungicides in the mixture. However, an alternative scenario is possible, in which resistant mutants emerge de novo through mutation or migration and, in order to survive, they need to invade the host population already infected by the sensitive strain. The threshold of invasion in this case depends on the ``inherent'' fitness cost of resistance mutations, i.\,e. their cost before the compensatory evolution occurs. In this case, one should measure the ``inherent'' cost of resistance mutations when performing step 4 in Box\,\ref{nbox:getrb-alg}.

As discussed above, it is crucial to know the fitness costs of resistance mutations in order to determine whether the fungicide mixture will select for resistance. We extensively searched the literature on fitness costs in different fungal pathogens of plants. A few studies inferred substantial fitness costs from field monitoring (see, for example, \cite{suya+10} and references in \cite{pemi95}). But these findings could result from other factors, including immigration of sensitive isolates, selection for other traits linked to resistance mutations or genetic drift \cite{pemi95}. Though relatively few carefully controlled experiments have been conducted, the majority indicate that fitness costs associated with fungicide resistance are either low (for example, \cite{bifi+12,kixi11}) or absent (for example, \cite{codu+10,pemi94}).
But in some cases fitness costs were\nfound to be substantial (for example,\n\\cite{iaba+08,kath+01,hoec95,we88}) both in laboratory measurements\nand in field experiments.\nAlthough measurements of fitness costs of resistant mutants performed\nunder laboratory conditions can be informative (as, for example in\n\\cite{bifi+12}), they do not necessarily reflect the costs connected\nwith resistant mutants selected in the field. This is because field\nmutants are likely to possess compensatory mutations improving\npathogen fitness \\cite{pemi95}. Moreover, a laboratory setting rarely\nreflects the balance of environmental and host conditions found\nthroughout the pathogen life-cycle, since the field environment is\nmuch more complex.\n\n\nHowever, the most relevant measure of pathogen fitness in the context\nof our study is the growth rate of the pathogen population at the very\nbeginning of an epidemic (often denoted as $r$). It is directly\nrelated to the basic reproductive number $R_0$. To the best of our\nknowledge, the fitness costs of fungicide resistant strains were not\nmeasured with respect to $r$. In the studies cited above different\ncomponents of fitness were measured that may or may not be related to\n$r$.\nTherefore, we identified a major gap in our knowledge of fitness costs. We\nhope this study will stimulate further experimental investigations to\nbetter characterize fitness costs and expect that substantial costs\nwill be found in some cases.\n\n\n\n\n\n\n\n\n\n\n \n\nInteractions of plants with fungal pathogens, fungicide action and,\npossibly, pharmacological interaction can depend on environmental\nconditions. This means that the outcomes of measurements necessary for\napplying the scheme in Box\\,\\ref{nbox:getrb-alg}, may vary between\nseasons and geographical locations. Moreover, the outcomes may also be\ndifferent in different host cultivars. Therefore, the optimal\nproportion of fungicides in the mixture may vary between seasons,\ngeographical locations and host cultivars. Thus, to provide general\npractical guidance on management of fungicide resistance, one needs to\nmeasure the optimal proportion of fungicides over many seasons, in\ndifferent geographical locations and host cultivars. This difficulty\nis not a unique property of our study, but rather it is a general\nproblem in the field of mathematical modeling of fungicide resistance\nand plant diseases. For example, it is also relevant for choosing\nappropriate fungicide dose rate \\cite{locl05}.\n\n\nWhile it was previously discussed \\cite{sh06} that alternation of\nhigh-risk and low-risk fungicides might be a useful tactic for disease\ncontrol in the presence of a fitness cost, we have shown that a\nmixture of these fungicides in an appropriate proportion can provide\nadequate disease control without selecting for resistance. 
Mixtures\noffer an advantage compared to alternation because there is no need to\ndelay the application of the high-risk fungicide and the resistant\nstrain does not rise to high frequencies, which lowers the risk of its\nfurther spread (see Appendix\\,A.6)\n\n\\begin{nicebox}\nAccording to our model, one can avoid selection for resistance while providing adequate\ndisease control by choosing the fungicide ratio $r_\\mrm{B}$ and the\ntotal dose $C$ in the following way:\n\\begin{enumerate}\n\\item measure the pharmacological properties of both fungicides under field conditions to\n determine $k_\\mrm{k}$, and $C_{50}$;\n\\item determine the degree of fungicide sensitivity $\\alpha$ under field conditions;\n\\item determine the degree of pharmacological interaction $u$ between\n fungicides A and B under field conditions;\n\\item measure the fitness cost of resistance $\\rho_\\mrm{r}$ under field conditions;\n\\item choose the proportion of the fungicide B above the threshold:\n$r_\\mrm{B}>r_\\mrm{Bc}$, such that the resistance is not favored by selection at any\ntotal fungicide dose $C$;\n\\item choose the total fungicide dose, which should be large enough to\n achieve an adequate level of disease control (see \\fig{fig:trben-vs-conc-cost-col}C).\n\\end{enumerate}\n\\caption{How to determine an optimal mixture of fungicides theoretically.}\\label{nbox:getrb-alg}\n\\end{nicebox}\n\n\n\nThe problem of combining chemical biocides in order to delay or\nprevent the development of resistance also appears in other contexts,\nincluding resistance of agricultural weeds to herbicides \\cite{bere09} and\ninsect pests to insecticides \\cite{ro06}.\nThe fitness cost of resistance is also recognized as a crucial\nparameter for managing antibiotic resistance \\cite{anhu10}. \n\n\nDevelopment of mathematical models of fungicide resistance dynamics\nhas been influenced by theoretical insights from animal and human\nepidemiology \\cite{bogi08,hagu+04}. Similarly, we expect that lessons\nlearned from modeling fungicide combinations may well apply to the\nproblem of biocide resistance in the other contexts. In particular,\none can investigate the idea of adjusting the proportion of the\ncomponents in a mixture of drugs in order to prevent selection for\nresistance in a more general context of biocide resistance.\n\n\n\n\\section*{Acknowledgements}\n\nAM and SB gratefully acknowledge support by the European Research\nCouncil advanced grant PBDR 268540. The authors are grateful to\nMichael Milgroom and Michael Shaw for helpful comments concerning\nfitness costs of fungicide resistance and to two anonymous reviewers\nfor improving the manuscript.\n\n\n\\newpage\n\n\n\n\n\n\n\\begin{figure}[!ht]\n \\centerline{\\includegraphics[width=\\textwidth]{fig1_mdiagr_fl.eps}}\n \\caption{\\doublespacing Scheme of the model in\n Eqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}).}\n\\label{fig:model-scheme}\n\\end{figure}%\n\\clearpage\n\\begin{figure}[!ht]\n \\centerline{\\includegraphics[width=\\textwidth]{fig2_pdiagr_pp_w.eps}} \\caption{\n \\doublespacing \nOutcomes of the competition between the sensitive and resistant\n pathogen strains depending on the fitness cost of resistance\n $\\rho_\\mrm{r}$ and the fungicide dose $C$ when treated\n with a single fungicide B at [$C_\\mrm{B} = C$, panel A], a\n single fungicide A [$C_\\mrm{A}=C$, panel B] and the combination\n of fungicides A and B [$C_\\mrm{A} = C_\\mrm{B} = C\/2$, panel\n C]. 
The range of the total fungicide dose $C$ and the fitness cost of resistance $\rho_\mrm{r}$ in which the resistant strain is favored is shown in grey. The range where selection favors the sensitive strain is shown in white. The dashed and the solid curves in panel B are plotted according to Eq.\,(A.22) and Eq.\,(A.23) in Appendix\,A.2, respectively. The dashed and the solid curves in panel C are plotted according to Eq.\,(A.13) and Eq.\,(A.14) in Appendix\,A.1, respectively, at $\gamma_\mrm{s}=1$, $\gamma_\mrm{r}=1/2$. Fungicides are assumed to have zero interaction ($u = 0$) and the resistant strain is assumed to be fully protected from fungicide A ($\alpha=0$); the fungicide dose-response parameters are $k_\mrm{k}=0.5$, $C_{50}=1$.}
\label{fig:phase-diagr}
\end{figure}%
\clearpage
\begin{figure}[!ht]
 \centerline{\includegraphics[width=0.6\textwidth]{fig3_rbc_vs_rhor_fl.eps}} \caption{
 \doublespacing The critical proportion $r_\mrm{Bc}$ of fungicide B (low-risk fungicide) in the mixture, above which there is no selection for the resistant strain at any total fungicide dose $C$, plotted (black curve) according to Eq.\,(A.17) as a function of the resistance cost $\rho_\mrm{r}$, assuming no pharmacological interaction ($u=0$), full resistance ($\alpha=0$) and the maximum fungicide effect $k_\mrm{k}=0.5$.}
\label{fig:opt-fung-rat-comb}
\end{figure}%
\clearpage
\begin{figure}[!ht]
 \centerline{\includegraphics[width=0.6\textwidth]{fig4_tb_gs_fl.eps}} \caption{
\doublespacing
 Treatment benefit as a function of fungicide dose $C$ and fitness cost of resistance $\rho_\mrm{r}$, plotted according to Eq.\,(A.26) in panel A, according to Eq.\,(A.27) in panel B and according to Eq.\,(A.28) in panel C. Treatment with fungicide B is shown in panel A. Treatment with fungicide A is shown in panel B. Treatment with a mixture of A and B at equal concentrations ($r_\mrm{B}=1/2$) is shown in panel C. Solid and dashed curves in panels B and C are the same as in \fig{fig:phase-diagr}. Fungicides are assumed to have zero interaction ($u = 0$) and the resistant strain is assumed to be fully protected from fungicide A ($\alpha=0$). The fungicide dose-response parameters are $k_\mrm{k}=0.5$, $C_{50}=1$, and the basic reproductive number of the sensitive strain without fungicide treatment is $R_\mrm{0s}(C=0) = b K / \mu=2$.}
\label{fig:trben-vs-conc-cost-col}
\end{figure}%
\clearpage
\begin{figure}[!ht]
 \centerline{\includegraphics[width=0.8\textwidth]{fig5_pi_fl.eps}} \caption{
\doublespacing
 The effect of pharmacological interaction and partial resistance on $r_\mrm{Bc}$, the critical ratio of the fungicide B. $r_\mrm{Bc}$ is plotted as a function of the fitness cost of resistance $\rho_\mrm{r}$ (left panel), according to Eq.\,(A.13), for the case of no interaction between the fungicides $u=0$ (solid, the same as the curve in \fig{fig:opt-fung-rat-comb}), synergy $u=0.9$ (dotted), and antagonism $u=-0.9$ (dashed), all for the case of perfect resistance $\alpha=0$.
The case of partial resistance at no interaction\n ($\\alpha = 0.5$, $u=0$) is shown as a light grey curve.\n $r_\\mrm{Bc}$ is shown as a function of the degree of fungicide\n sensitivity $\\alpha$ at $\\rho_\\mrm{r}=0.05$ (solid) and\n $\\rho_\\mrm{r}=0.1$ (dash-dotted) also according to\n Eq.\\,(A.13) in the right panel.}\n\\label{fig:rbc-vs-rhoa-interact-partres}\n\\end{figure}%\n\\newpage\n\n\n\\section{Supplemental materials}\n\n\\subsection{Model equations}\n\\label{sec:ap-modeleq}\n\nIn order to explore the effect of the assumptions we made in\n\\sec{sec:model-assump}, we consider a more general system of\nequations, which describes the change in time of the same quantities\nas in Eqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}):\nthe amount of healthy host tissue $H$, the amount of host tissue\ninfected with the sensitive pathogen strain $I_\\mrm{s}$ and the amount\nof host tissue infected with the resistant pathogen strain $I_\\mrm{r}$\n\n\\begin{align}\n\\frac{d H}{d t} &= r_H (K - H - I_\\mrm{s} - I_\\mrm{r} ) - b \\left[\n \\left( 1 - \\varepsilon_\\mrm{s} \\right)\n(1-\\rho_\\mrm{s}) I_\\mrm{s} + \\left( 1 - \\varepsilon_\\mrm{r} \\right) (1-\\rho_\\mrm{r}) I_\\mrm{r} \\right] H, \\label{eq:1host2fung-gen-app-1} \\\\\n\\frac{d I_\\mrm{s}}{d t} & = b \\left( 1 - \\varepsilon_\\mrm{s} \\right) (1-\\rho_\\mrm{s}) H I_\\mrm{s} - \\mu I_\\mrm{s}, \\label{eq:1host2fung-gen-app-2}\\\\\n\\frac{d I_\\mrm{r}}{d t} & = b \\left( 1 - \\varepsilon_\\mrm{r} \\right) (1-\\rho_\\mrm{r}) H I_\\mrm{r} - \\mu I_\\mrm{r}, \\label{eq:1host2fung-gen-app-3}\n\\end{align}\nwhere, the function $\\varepsilon_\\mrm{s} = \\varepsilon_\\mrm{s}(C_\\mrm{A}, C_\\mrm{B})$ describes the effect of the\napplication of the mixture fungicides A and B with doses\n$C_\\mrm{A}$ and $C_\\mrm{B}$ on the transmission rate of the sensitive pathogen strain\nand the function $\\varepsilon_\\mrm{r} = \\varepsilon_\\mrm{r}(C_\\mrm{A}, C_\\mrm{B})$ describes the effect of this\nmixture on the transmission rate of the resistant strain:\n\\begin{align}\n \\varepsilon_\\mrm{s}(C_\\mrm{A}, C_\\mrm{B}) &= k_\\mrm{k} \\frac{\\alpha_\\mrm{s,A} C_\\mrm{A} + \\alpha_\\mrm{s,B} C_\\mrm{B}}{ \\alpha_\\mrm{s,A} C_\\mrm{A} + \\alpha_\\mrm{s,B} C_\\mrm{B} + C_{50}\/ \\left[1\n + u \\sqrt{ \\alpha_\\mrm{s,A} \\alpha_\\mrm{s,B} C_\\mrm{A} C_\\mrm{B}} \/\n (\\alpha_\\mrm{s,A} C_\\mrm{A} + \\alpha_\\mrm{s,B} C_\\mrm{B})\\right]}, \\label{eq:veps-s-mix-gen-app}\\\\\n \\varepsilon_\\mrm{r}(C_\\mrm{A}, C_\\mrm{B}) &= k_\\mrm{k} \\frac{\\alpha_\\mrm{r,A} C_\\mrm{A} + \\alpha_\\mrm{r,B} C_\\mrm{B}}{ \\alpha_\\mrm{r,A} C_\\mrm{A} + \\alpha_\\mrm{r,B} C_\\mrm{B} + C_{50}\/ \\left[1\n + u \\sqrt{ \\alpha_\\mrm{r,A} \\alpha_\\mrm{r,B} C_\\mrm{A} C_\\mrm{B}} \/ (\\alpha_\\mrm{r,A} C_\\mrm{A} + \\alpha_\\mrm{r,B} C_\\mrm{B})\\right]}. \\label{eq:veps-r-mix-gen-app}\n\\end{align}\nThe parameters $\\alpha_\\mrm{s,A}$, $\\alpha_\\mrm{s,B}$,\n$\\alpha_\\mrm{r,A}$ and $\\alpha_\\mrm{r,B}$ characterize the degree of\nsensitivity of each of the two pathogen strains (index \"s\" for\nthe sensitive strain, index ``r'' for the resistant strain) to each of the two\nfungicides A and B. Their values are between zero and one. In this\ngeneral case both pathogen strains are partially resistant to both\nfungicides. 
The maximum effect of the fungicide is characterized by\nthe parameter $k_k$ and assumed to be the same for both fungicides.\n\nThe parameter $C_{50}$ in Eqs.\\,(\\ref{eq:veps-s-mix-gen-app}),\n(\\ref{eq:veps-r-mix-gen-app}) is modified due to pharmacological\ninteraction between fungicides characterized by the degree of\ninteraction $u$. At $u=0$ fungicides do not interact, $u>0$ represents\nsynergy and $u<0$ corresponds to antagonism. [We restrict our\nconsideration to $u>-1$, since otherwise the term in the square\nbrackets of Eqs.\\,(\\ref{eq:veps-s-mix-gen-app}),\n(\\ref{eq:veps-r-mix-gen-app}) may become negative, which makes no\nsense.] This way to define pharmacological interaction between\ncompounds is called ``Loewe additivity'' or ``concentration addition''\nin the literature \\cite{grbr95,be89}. In this approach an interaction\nof a compound with itself is set by definition to be additive (zero\ninteraction). For example, when the fungicide A is mixed with itself,\nthe resulting sham mixture is neither synergistic, nor antagonistic\nbut has zero interaction. An equivalent graphic procedure is known as\nthe Wadley method in the fungicide literature [second method described\nin \\cite{lebe+86}].\n\nAn alternative way to define pharmacological interaction assumes that\nthe two compounds have independent modes of action and is called\n``Bliss independence'' \\cite{bl39} or Abbott's formula\n\\cite{ab25}. However, in this definition a compound can have a\npharmacological interaction with itself, i.\\,e. be synergistic or\nantagonistic.\nThe study \\cite{sh89a} discusses the definition of ``independent action''\nof the two fungicides, according to which the two fungicides are\nindependent when one fungicide does not affect the evolution of\nresistance in the other. According to \\cite{sh89a,sh93}, this is only\npossible when each of the fungicides affects different stages of the\npathogen life cycle.\n\nThere are several ways to introduce a deviation from the zero\ninteraction regime, in which usually an interaction term is added to\nthe isobologram equation \\cite{grbr95}. We have chosen a specific form\nof the interaction term, which is proportional to the square root of\nthe product of the concentrations of the two compounds [Eq.\\,(28) in\n\\cite{grbr95}]. This form allows for a simple analytical expression of\nthe effect of the combination in Eqs.\\,(\\ref{eq:veps-s-mix-gen-app}), (\\ref{eq:veps-r-mix-gen-app}).\n\nWe assume that the cost of resistance decreases the transmission\nrate $b$ by a fixed amount $\\rho_\\mrm{s}$ for the sensitive strain and\nby $\\rho_\\mrm{r}$ for the resistant strain in\nEqs.\\,(\\ref{eq:1host2fung-gen-app-1})-(\\ref{eq:1host2fung-gen-app-3}).\nWe restrict our consideration here to the case when the ``sensitive''\npathogen strain is fully sensitive to both fungicides\n($\\alpha_\\mrm{s,A} = \\alpha_\\mrm{s,B} = 1$) and the ``resistant''\nstrain can have varying degrees of resistance to the fungicide A\n($\\alpha_\\mrm{r,A} \\equiv \\alpha$, $0 \\le \\alpha \\le 1$), but is fully\nsensitive to the fungicide B ($\\alpha_\\mrm{r,B} = 1$). Therefore, the\ncost of resistance for the sensitive strain is zero $\\rho_\\mrm{s}\n=0$. 
Then, the fungicide dose-response functions become simpler.

In order to determine the range of fitness costs $\rho_\mrm{r}$ and fungicide doses $C$ over which the sensitive or resistant strain is favored by selection, we perform the linear stability analysis of the fixed points of the system Eqs.\,(\ref{eq:1host2fung-gen-app-1})-(\ref{eq:1host2fung-gen-app-3}). Fixed points are the values of $H$, $I_\mrm{s}$ and $I_\mrm{r}$ at which the expressions on the right-hand side of Eqs.\,(\ref{eq:1host2fung-gen-app-1})-(\ref{eq:1host2fung-gen-app-3}) equal zero. The system Eqs.\,(\ref{eq:1host2fung-gen-app-1})-(\ref{eq:1host2fung-gen-app-3}) has three fixed points: (i) $H^* = K$, $I_\mrm{s}=I_\mrm{r}=0$; (ii) $H^*= \mu/b_\mrm{s}$, $I_\mrm{s}=r_H (b_\mrm{s} K - \mu)/\left[ b_\mrm{s} (\mu+r_H) \right]$, $I_\mrm{r} = 0$; (iii) $H^*= \mu/b_\mrm{r}$, $I_\mrm{s}= 0 $, $I_\mrm{r} = r_H (b_\mrm{r} K - \mu)/\left[ b_\mrm{r} (\mu+r_H) \right]$. Here $b_\mrm{s} = b \left[ 1 - \varepsilon_\mrm{s}(C_\mrm{A}, C_\mrm{B})\right]$, $b_\mrm{r} = b \left[ 1 - \varepsilon_\mrm{r}(C_\mrm{A}, C_\mrm{B}) \right] (1 - \rho_\mrm{r})$. To determine whether a fixed point is stable, we first linearize the system Eqs.\,(\ref{eq:1host2fung-gen-app-1})-(\ref{eq:1host2fung-gen-app-3}) in its vicinity, then determine the Jacobian and its eigenvalues. A fixed point is stable if all the eigenvalues have negative real parts.

The results of this analysis can be conveniently expressed using the basic reproductive number of the sensitive strain
\begin{equation}\label{eq:ap-r0s}
R_\mrm{0s} = \frac{b \left[ 1 - \varepsilon_\mrm{s}(C_\mrm{A}, C_\mrm{B})\right] K}{\mu}
\end{equation}
and the basic reproductive number of the resistant strain
\begin{equation}\label{eq:ap-r0r}
R_\mrm{0r} = \frac{b \left[ 1 - \varepsilon_\mrm{r}(C_\mrm{A}, C_\mrm{B}) \right] (1 - \rho_\mrm{r}) K}{ \mu}.
\end{equation}
The sensitive strain is favored by selection [meaning that the fixed point (ii) is stable and both fixed points (i) and (iii) are unstable] when both inequalities $R_\mrm{0s}>1$ and $R_\mrm{0s} > R_\mrm{0r}$ are fulfilled.

We then consider the inequality $R_\mrm{0s} > R_\mrm{0r}$, which is equivalent to
\begin{equation}\label{eq:brn-ineq-gen-app}
\frac{ C}{ C + C_{50}/\gamma_\mrm{s}} < \rho_\mrm{r}/k_\mrm{k} + \frac{ C}{ C + C_{50}/\gamma_\mrm{r}} (1 - \rho_\mrm{r}),
\end{equation}
where
\begin{equation}\label{eq:app-gammas}
\gamma_\mrm{s} = 1 + u \sqrt{r_\mrm{B}(1-r_\mrm{B})},
\end{equation}
\begin{equation}\label{eq:app-gammar}
 \gamma_\mrm{r} = \alpha (1 - r_\mrm{B}) + r_\mrm{B} + u \sqrt{\alpha r_\mrm{B}(1-r_\mrm{B})}
\end{equation}
and $r_\mrm{B} = C_\mrm{B}/C$ is the proportion of the fungicide B in the mixture, $C = C_\mrm{A} + C_\mrm{B}$.
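Before solving the inequality (\ref{eq:brn-ineq-gen-app}) analytically, the stability classification described above can be verified numerically. The following minimal sketch (Python) evaluates the Jacobian of Eqs.\,(\ref{eq:1host2fung-gen-app-1})-(\ref{eq:1host2fung-gen-app-3}) (with $\rho_\mrm{s}=0$) at the fixed point (ii) by finite differences; all parameter values are assumptions chosen for illustration.
\begin{verbatim}
# Sketch: numerical stability check of fixed point (ii); illustrative values only.
import numpy as np

r_H, K, b, mu = 0.1, 1.0, 2.0, 0.5      # illustrative model parameters
eps_s, eps_r, rho_r = 0.2, 0.1, 0.15    # fungicide effects and fitness cost (assumed)
b_s = b * (1.0 - eps_s)
b_r = b * (1.0 - eps_r) * (1.0 - rho_r)

def rhs(y):
    H, Is, Ir = y
    return np.array([
        r_H * (K - H - Is - Ir) - (b_s * Is + b_r * Ir) * H,
        b_s * H * Is - mu * Is,
        b_r * H * Ir - mu * Ir,
    ])

# Fixed point (ii): sensitive strain endemic, resistant strain absent.
y_star = np.array([mu / b_s, r_H * (b_s * K - mu) / (b_s * (mu + r_H)), 0.0])

# Jacobian by central finite differences; column j holds the derivatives with respect to y_j.
h = 1e-7
J = np.column_stack([(rhs(y_star + h * e) - rhs(y_star - h * e)) / (2 * h)
                     for e in np.eye(3)])

# Here R_0s = 3.2 > R_0r = 3.06 > 1, so all real parts should come out negative.
print(np.real(np.linalg.eigvals(J)))
\end{verbatim}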
\n\nThe inequality (\\ref{eq:brn-ineq-gen-app}) holds at\n\\begin{align}\\label{eq:range-sens-gen1}\n\\rho_\\mrm{r} < \\rho_\\mrm{rb}, \\:\\: \\mrm{for} \\left( C<C_\\mrm{b1} \\: \\mrm{or} \\: C>C_\\mrm{b2}\n\\right)\n\\end{align}\nor at \n\\begin{align}\\label{eq:range-sens-gen2}\n\\rho_\\mrm{r} > \\rho_\\mrm{rb}, \\:\\: \\mrm{for\\:any\\:value\\:of}\\: C.\n\\end{align}\n Here,\n\\begin{equation}\\label{eq:rhorb-gen-app}\n\\rho_\\mrm{rb} = \\frac{k_\\mrm{k} (\\gamma_\\mrm{s} - \\gamma_\\mrm{r}) \\left( \\gamma_\\mrm{s} +\n \\gamma_\\mrm{r} (1 -k_\\mrm{k} ) - 2 \\sqrt{\\gamma_\\mrm{r} \\gamma_\\mrm{s} (1 - k_\\mrm{k})}\n \\right) } { \\left( \\gamma_\\mrm{s} - \\gamma_\\mrm{r} (1 - k_\\mrm{k}) \\right)^2},\n\\end{equation}\n\\begin{equation}\\label{eq:c12-gen-app}\nC_{b1,2} = \\frac{C_{50}}{2 \\gamma_\\mrm{s} \\gamma_\\mrm{r} \\rho_\\mrm{r} (1 - k_\\mrm{k})} \\left[\n \\gamma_\\mrm{s} ( k_\\mrm{k} - \\rho_\\mrm{r}) - \\gamma_\\mrm{r}\n (k_\\mrm{k} + \\rho_\\mrm{r} (1 - k_\\mrm{k})) \\mp \\sqrt{D} \\right],\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:d-gen-app}\nD = \\gamma_\\mrm{s}^2 \\left( k_\\mrm{k} - \\rho_\\mrm{r} \\right)^2 + \\gamma_\\mrm{r}^2\n(k_\\mrm{k} - \\rho_\\mrm{r} - k_\\mrm{k} \\rho_\\mrm{r} )^2 - 2 \\gamma_\\mrm{s} \\gamma_\\mrm{r} \\left( k_\\mrm{k}^2 (1 - \\rho_\\mrm{r}) +\n \\rho_\\mrm{r}^2(1 - k_\\mrm{k}) \\right).\n\\end{equation}\nAccording to the inequality (\\ref{eq:range-sens-gen2}), if the fitness\ncost of resistance is larger than a threshold value given by\n\\eq{eq:rhorb-gen-app}, the sensitive strain has a selective advantage\nand the resistant strain is eliminated from the population at any\nfungicide dose $C \\geq 0$.\n\n\nFor the case of no interaction between fungicides ($u=0$) and perfect\nresistance ($\\alpha=0$) we obtain from \\eq{eq:app-gammas} and\n\\eq{eq:app-gammar} $\\gamma_\\mrm{s} = 1$, $\\gamma_\\mrm{r} =\nr_\\mrm{B}$. Then \\eq{eq:rhorb-gen-app} simplifies to\n\\begin{equation}\\label{eq:rhorb-noint-app}\n\\rho_\\mrm{rb} = k_\\mrm{k} \\frac{(1 - r_\\mrm{B}) \\left[ 1 + r_\\mrm{B} (1-k_\\mrm{k}) - 2\n \\sqrt{r_\\mrm{B} ( 1 - k_\\mrm{k})} \\right]} { (1 - r_\\mrm{B} (1 -\n k_\\mrm{k}) )^2 }.\n\\end{equation}\nWe then solve the inequality $\\rho_\\mrm{r} > \\rho_\\mrm{rb}$ with respect to\n$r_\\mrm{B}$ and find that it is fulfilled at $r_\\mrm{B}>r_\\mrm{Bc}$, where\n\\begin{equation}\\label{eq:rbc-fullres}\nr_\\mrm{Bc} = \\frac{ k_\\mrm{k}^2 (1 - \\rho_\\mrm{r}) + \\rho_\\mrm{r}^2 ( 1 -\n k_\\mrm{k}) - 2 k_\\mrm{k} \\rho_\\mrm{r} \\sqrt{ (1 - k_\\mrm{k}) (1 - \\rho_\\mrm{r}) }} { (k_\\mrm{k}\n + \\rho_\\mrm{r} - k_\\mrm{k} \\rho_\\mrm{r})^2}.\n\\end{equation}\nIt represents the critical proportion of the fungicide B in the\nmixture above which the resistant strain is not favored by selection\n(\\fig{fig:opt-fung-rat-comb}). 
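
As a quick numerical check of Eqs.\\,(\\ref{eq:rhorb-noint-app}) and
(\\ref{eq:rbc-fullres}), the critical proportion $r_\\mrm{Bc}$ can be tabulated
as a function of the fitness cost; the sketch below uses a purely illustrative
value of $k_\\mrm{k}$ and verifies that $\\rho_\\mrm{rb}$ evaluated at
$r_\\mrm{B}=r_\\mrm{Bc}$ returns the chosen cost, as it should.
\\begin{verbatim}
import math

def rho_rb(rB, kk):   # Eq. (eq:rhorb-noint-app), u = 0, alpha = 0
    return (kk * (1.0 - rB)
            * (1.0 + rB * (1.0 - kk) - 2.0 * math.sqrt(rB * (1.0 - kk)))
            / (1.0 - rB * (1.0 - kk)) ** 2)

def r_Bc(rho_r, kk):  # Eq. (eq:rbc-fullres)
    num = (kk ** 2 * (1.0 - rho_r) + rho_r ** 2 * (1.0 - kk)
           - 2.0 * kk * rho_r * math.sqrt((1.0 - kk) * (1.0 - rho_r)))
    return num / (kk + rho_r - kk * rho_r) ** 2

kk = 0.8              # illustrative maximum fungicide effect
for rho in (0.02, 0.05, 0.1, 0.2, 0.4):
    rc = r_Bc(rho, kk)
    print(f"rho_r = {rho:4.2f}  r_Bc = {rc:5.3f}  "
          f"rho_rb(r_Bc) = {rho_rb(rc, kk):5.3f}")
\\end{verbatim}
The larger the fitness cost, the smaller the proportion of the low-risk
fungicide that is needed to keep the resistant strain from being selected.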
If the cost of resistance affects the\ndeath rate of the pathogen $\\mu$ (see \\sec{sec:gener}) and not the\ntransmission rate $b$ as considered above, then a simpler expression\nfor the critical proportion of the fungicide B is obtained\n\\begin{equation}\\label{eq:rbc-fullres1}\nr_\\mrm{Bc} = \\left( \\frac{1 - \\rho_\\mrm{r}\/k_k}{1 + \\rho_\\mrm{r}\/k_k} \\right)^2.\n\\end{equation}\nHere, $r_\\mrm{Bc}$ depends only on the ratio $\\rho_\\mrm{r}\/k_k$ of the\ncost of resistance to the maximum fungicide effect $k_k$, which allows one to\nmake a more general prediction about the value of $r_\\mrm{Bc}$.\n\n\\subsection{Selection for resistance at no interaction between fungicides}\n\\label{sec:app-single-fung}\n\nWhen only the high risk fungicide (fungicide A) is applied with the\ndose $C_\\mrm{A}$, we set $r_\\mrm{B}=0$ in \\eq{eq:app-gammas}\nand \\eq{eq:app-gammar} to obtain $\\gamma_\\mrm{s} = 1$, $\\gamma_\\mrm{r}\n= \\alpha$. Then, the following expressions are obtained for the\nthreshold value of the resistance cost from \\eq{eq:rhorb-gen-app}\n\\begin{equation}\\label{eq:rhorb-fungica}\n\\rho_\\mrm{rb} = k_\\mrm{k} \\frac{(1 - \\alpha) \\left[ 1 + \\alpha (1-k_\\mrm{k}) - 2\n \\sqrt{\\alpha ( 1 - k_\\mrm{k})} \\right]} { (1 - \\alpha (1 - k_\\mrm{k}) )^2 },\n\\end{equation}\nand the fungicide dose from \\eq{eq:c12-gen-app}\n\\begin{equation}\\label{eq:c12-fungica}\nC_{b1,2} = \\frac{C_{50}}{2 \\alpha \\rho_\\mrm{r} (1 - k_\\mrm{k})} \\left[\n k_\\mrm{k} - \\rho_\\mrm{r} - \\alpha\n (k_\\mrm{k} + \\rho_\\mrm{r} (1 - k_\\mrm{k})) \\mp \\sqrt{D} \\right],\n\\end{equation}\nwhere\n\\begin{equation}\\label{eq:d-fungica}\n D = \\left( k_\\mrm{k} - \\rho_\\mrm{r} \\right)^2 + \\alpha^2\n(k_\\mrm{k} - \\rho_\\mrm{r} - k_\\mrm{k} \\rho_\\mrm{r} )^2 - 2 \\alpha \\left( k_\\mrm{k}^2 (1 - \\rho_\\mrm{r}) +\n \\rho_\\mrm{r}^2(1 - k_\\mrm{k}) \\right).\n\\end{equation}\nIn the simpler case of full resistance we take the limit $\\alpha \\to\n0$. Then, by taking this limit in \\eq{eq:rhorb-fungica},\n\\eq{eq:c12-fungica} and \\eq{eq:d-fungica} we obtain for the threshold\nvalues of the fitness cost and the fungicide dose\n\\begin{equation}\\label{eq:rhorb-fungica-perfres}\n\\rho_\\mrm{rb} = k_\\mrm{k},\n\\end{equation}\n\\begin{equation}\\label{eq:cb-fungica-perfres}\nC_\\mrm{b} = C_{50} \\frac{\\rho_\\mrm{r}}{k_\\mrm{k}- \\rho_\\mrm{r}}.\n\\end{equation}\nIn this case the sensitive strain dominates at $C<C_\\mrm{b}$, as well as for any dose at $\\rho_\\mrm{r}>\\rho_\\mrm{rb}$\n[white area in \\fig{fig:phase-diagr}B].\n\nWhen only the low risk fungicide (fungicide B) is applied, we set\n$r_\\mrm{B} = 1$ and, hence, $\\gamma_\\mrm{s}=\\gamma_\\mrm{r}=1$ in the inequality (\\ref{eq:brn-ineq-gen-app}) and obtain\n\\begin{equation}\\label{eq:brn-ineq-fungica}\n\\frac{ C}{ C + C_{50}} < \\rho_\\mrm{r}\/k_\\mrm{k} + \\frac{ C}{ C +\n C_{50}} (1 - \\rho_\\mrm{r}).\n\\end{equation}\nThis inequality holds and the sensitive strain dominates for all positive values of $\\rho_\\mrm{r}$ and\n$C$ at which $R_\\mrm{0s}>1$.\n\nConsider the case when the two fungicides A and B are applied\ntogether at an arbitrary mixing ratio $r_\\mrm{B}$, assuming no\npharmacological interaction ($u=0$) and perfect resistance of the\nresistant strain to the fungicide A ($\\alpha=0$).\nIn this case, $\\gamma_\\mrm{s} = 1$ and $\\gamma_\\mrm{r} =\nr_\\mrm{B}$. 
Substituting these values in \\eq{eq:rhorb-gen-app},\n\\eq{eq:c12-gen-app} and \\eq{eq:d-gen-app} gives the same expressions\nas in \\eq{eq:rhorb-fungica}, \\eq{eq:c12-fungica} and\n\\eq{eq:d-fungica}, but with $\\alpha$ substituted by $r_\\mrm{B}$.\n \n\n\n\\subsection{Expressions for the treatment benefit}\n\\label{sec:app-trben}\n\nThe treatment benefit is defined as the ratio between the amount of\nhealthy hosts $H(t)$ when both the disease and treatment are\npresent and the amount of healthy hosts at no disease $B(t) =\nH(t)\/H_\\mrm{nd}(t)$ (see \\sec{sec:trben}). \n\nIn order to obtain analytical expressions for $B(t)$, we consider one growing season and assume the host-pathogen\nequilibrium is reached during the season.\n This corresponds to the time-dependent solution of\n Eqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3})\n reaching its stable fixed point (or steady state). Fixed points of\n Eqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}) can be\n found by equating the right-hand sides of all equations to zero and\n solving the resulting algebraic equations with respect to $H(t)$,\n $I_\\mrm{s}(t)$ and $I_\\mrm{r}(t)$. Biologically this occurs when the\n first positive term in \\eq{eq:1host2fung-gen-1} corresponding to\n growth of healthy hosts, is compensated by the second, negative term\n that corresponds to the decrease in healthy hosts due to\n infection. In other words, equilibrium occurs when the rate of\n emergence of new healthy tissue as a result of plant growth is\n exactly offset by the rate of its decrease due to infection. The\n right-hand side of \\eq{eq:1host2fung-gen-2} goes to zero, when the\n rate of increase in $I_\\mrm{s}$ due to new infections is compensated\n by the loss of the infectious tissue due to the completion of the\n infectious period (similar reasoning applies\n for \\eq{eq:1host2fung-gen-3}).\n\nThen, the treatment benefit is given by\n\\begin{equation}\\label{trben-gen}\nB(t \\to \\infty) = B^* = \\frac{H^*}{K},\n\\end{equation}\nwhere $H^*$ is the equilibrium amount of healthy hosts and $K$ is the\nhost carrying capacity, and assume full resistance ($\\alpha=0$).\n\nWhen only the fungicide B is applied at a dose $C$\n[\\fig{fig:trben-vs-conc-cost-col}A], the basic reproductive number of\nthe sensitive pathogen strain always exceeds the one for the resistant\nstrain $R_\\mrm{0s}>R_\\mrm{0r}$. Therefore, the resistant mutants are\neliminated in the long run and the amount of the healthy host tissue is equal\nto $H^* = \\mu\/ (b \\left[ 1 - \\varepsilon(C) \\right] )$, where $\\varepsilon(C)$ is given\nby \\eq{eq:eps-1fungic}. Then, according to \\eq{trben-gen}, the\ntreatment benefit is\n\\begin{align}\\label{eq:trben-fungicb}\nB^*(C)= \\frac{ \\mu } {b \\left[ 1 - \\varepsilon(C) \\right] K}.\n\\end{align}\nIt grows with the fungicide dose and saturates, since the\nfunction $\\varepsilon(C)$ saturates.\n\nApplication of the fungicide A alone at a dose $C$ may favor\neither resistant or sensitive pathogen strain depending on the fitness\ncost of resistance $\\rho_\\mrm{r}$ and the fungicide dose $C$\n[see \\fig{fig:phase-diagr}B]. 
The treatment benefit in this case is\n\\begin{align}\\label{eq:trben-fungica}\nB^*(C, \\rho_\\mrm{r})= \n\\begin{cases} \n\\frac{ \\mu } { b \\left[ 1-\\varepsilon(C) \\right] K }, & \\mrm{for} \\:\n(\\rho_\\mrm{r} < k_k \\: \\mrm{and} \\: C<C_\\mrm{b}) \\: \\mrm{or} \\: (\\rho_\\mrm{r} > k_k \\: \\mrm{and} \\: \\forall C),\\\\\n\\frac{\\mu} { b ( 1-\\rho_\\mrm{r}) K }, & \\mrm{for} \\:\n\\rho_\\mrm{r} < k_k \\: \\mrm{and} \\: C>C_\\mrm{b},\n\\end{cases}\n\\end{align}\nwhere $C_\\mrm{b}$ is given by\n\\eq{eq:cb-fungica-perfres}.\n\nNow, consider application of both fungicides in a mixture at equal\nconcentrations ($r_\\mrm{B} = 1\/2$), assuming no interaction between\nfungicides ($u=0$). In this case, again either resistant or sensitive\npathogen strain will dominate the population depending on the fitness\ncost $\\rho_\\mrm{r}$ and the total fungicide dose $C$ [see\n\\fig{fig:phase-diagr}C]. The treatment benefit now has the following\nexpression\n\\begin{align}\\label{eq:trben-equalmix}\nB^*(C, \\rho_\\mrm{r})= \n\\begin{cases} \n \\frac{\\mu} { b \\left[1 - \\varepsilon(C) \\right] K}, & \\mrm{for} \\: (\\rho_\\mrm{r}\n <\\rho_\\mrm{rb} \\: \\mrm{and} \\: (C<C_\\mrm{b1} \\: \\mrm{or} \\: C>C_\\mrm{b2})) \\\\ & \\mrm{or}\n \\: (\\rho_\\mrm{r} > \\rho_\\mrm{rb} \\: \\mrm{and} \\: \\forall C),\\\\\n \\frac{\\mu } { b \\left[1 - \\varepsilon(C\/2) \\right] (1 - \\rho_r) K }, & \\mrm{for} \\:\n \\rho_\\mrm{r} <\\rho_\\mrm{rb} \\: \\mrm{and} \\:\n C_\\mrm{b1}<C<C_\\mrm{b2}.\n\\end{cases}\n\\end{align}\n\nSuppose that adequate disease control requires a certain treatment\nbenefit $B_\\mrm{suf}$, and let $C_\\mrm{suf}$ denote the total fungicide\ndose needed to achieve it. We consider a mixture with\n$r_\\mrm{B}>r_\\mrm{Bc}$, i.\\,e. the sensitive pathogen\nstrain is favored by selection. Hence, the treatment benefit is\ngiven by $B^*=\\mu\/(b K)$ [the upper expression in\n\\eq{eq:trben-equalmix}]. We set $B^*=B_\\mrm{suf}$ and substitute the\ndependence of the pathogen infectious period or the transmission rate\non the fungicide dose according to $\\mu \\to \\mu ( 1 + \\varepsilon(C))$ or\n$b \\to b ( 1 - \\varepsilon(C))$. Then, we obtain\n\\begin{align}\\label{eq:csuf}\n C_\\mrm{suf} =\n\\begin{cases} \n C_{50} \\frac {B_\\mrm{suf} R_0 - 1} {1 + k_\\mrm{k} - B_\\mrm{suf} R_0},\n & \\mrm{if} \\: \\mrm{fungicides} \\: \\mrm{affect} \\: \\mu\\\\ \n C_{50} \\frac {B_\\mrm{suf} R_0 - 1} {1 - B_\\mrm{suf} R_0 (1 - k_\\mrm{k}) }, & \\mrm{if} \\: \\mrm{fungicides} \\: \\mrm{affect} \\: b.\n\\end{cases}\n\\end{align}\nThe ratio\n\\begin{align}\\label{eq:trben-rat}\n\\frac {B^*(C=C_\\mrm{suf})} {B^*(C=r_\\mrm{B} C_\\mrm{suf})} = \n\\begin{cases} \n \\frac {1 + \\varepsilon(C_\\mrm{suf})} {1 + \\varepsilon(r_\\mrm{B} C_\\mrm{suf})},\n & \\mrm{if} \\: \\mrm{fungicides} \\: \\mrm{affect} \\: \\mu\\\\ \n \\frac {1 - \\varepsilon(r_\\mrm{B} C_\\mrm{suf}) } {1 - \\varepsilon(C_\\mrm{suf}) }, & \\mrm{if} \\: \\mrm{fungicides} \\: \\mrm{affect} \\: b\n\\end{cases}\n\\end{align}\ncharacterizes the extra benefit due to addition of the high-risk\nfungicide, since $B^*(C=C_\\mrm{suf})$ is the treatment benefit when\nboth high-risk and low-risk fungicides are present and\n$B^*(C=r_\\mrm{B} C_\\mrm{suf})$ is the treatment benefit when the\nhigh-risk fungicide is absent (here, $r_\\mrm{B}$ is the proportion of\nthe low-risk fungicide in the mixture). The ratio $B^*(C=C_\\mrm{suf})\n\/ B^*(C=r_\\mrm{B} C_\\mrm{suf})$ is shown in \\fig{fig:benrat-vs-ra} as\na function of the proportion of the high-risk fungicide in the mixture\n$r_\\mrm{A} = 1 - r_\\mrm{B}$. One sees from \\fig{fig:benrat-vs-ra} that\nthe more high-risk fungicide is used in the mixture, the larger is the\nextra benefit from its application (provided that\n$r_\\mrm{B}>r_\\mrm{Bc}$, when the sensitive pathogen strain is favored\nby selection). 
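
To give a sense of the magnitude of this effect, the ratio in
\\eq{eq:trben-rat} can be evaluated directly. The sketch below does so for the
case in which the fungicides affect the transmission rate $b$, using the
parameter values quoted in the caption of \\fig{fig:benrat-vs-ra}
($R_0=4$, $B_\\mrm{suf}=0.9$, $k_{\\mrm{k}b}=0.9$, $C_{50}=1$).
\\begin{verbatim}
def eps_b(C, kk=0.9, C50=1.0):    # single-fungicide dose response acting on b
    return kk * C / (C + C50)

def C_suf_b(R0, Bsuf=0.9, kk=0.9, C50=1.0):
    # Eq. (eq:csuf), case "fungicides affect b"
    return C50 * (Bsuf * R0 - 1.0) / (1.0 - Bsuf * R0 * (1.0 - kk))

R0 = 4.0
Csuf = C_suf_b(R0)
for rA in (0.1, 0.3, 0.5, 0.7, 0.9):
    rB = 1.0 - rA
    # Eq. (eq:trben-rat), "fungicides affect b" branch
    ratio = (1.0 - eps_b(rB * Csuf)) / (1.0 - eps_b(Csuf))
    print(f"r_A = {rA:3.1f}   extra benefit = {ratio:4.2f}")
\\end{verbatim}
The ratio increases monotonically with $r_\\mrm{A}$, reproducing the trend shown
in \\fig{fig:benrat-vs-ra}.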
However, the largest $r_\\mrm{A}$ which still does not favor the\nresistant strain is determined by the value of the fitness cost of\nfungicide resistance (see \\sec{sec:optrat}).\n\nInterestingly, the ratio $B^*(C=C_\\mrm{suf}) \/ B^*(C=r_\\mrm{B}\nC_\\mrm{suf})$ does not depend on whether the fungicide acts on the\ninfectious period $\\mu^{-1}$ or on the transmission rate $b$, as long as\nthe maximum fungicide effects in these two cases $k_{\\mrm{k}b}$ and\n$k_{\\mrm{k}\\mu}$ are related by $k_{\\mrm{k}\\mu} = k_{\\mrm{k}b}\/(1 -\nk_{\\mrm{k}b})$ such that the basic reproductive number is reduced by\nthe same amount when the maximum effect is achieved.\n\n\n\\begin{figure}[!ht]\n \\centerline{\\includegraphics[width=0.6\\textwidth]{fig_a1_benrat_vs_ra.eps}}\n \\caption{\n Extra benefit of adding the high-risk fungicide to the mixture\n plotted according to \\eq{eq:trben-rat} and \\eq{eq:csuf} versus the\n proportion of the high-risk fungicide $r_\\mrm{A}$, provided the\n sensitive pathogen strain is favored by selection. The basic\n reproductive number of the sensitive strain in the absence of\n fungicides $R_0 = b K \/ \\mu$ had the value $R_0=2$ (dashed curve)\n and $R_0 =4$ (solid curve). Other parameters: $C_{50}=1$,\n $B_\\mrm{suf}=0.9$, maximum fungicide effect on $b$ is\n $k_{\\mrm{k}b}=0.9$ and the equivalent maximum fungicide effect on\n $\\mu$ is $k_{\\mrm{k}\\mu}=k_{\\mrm{k}b}\/(1 - k_{\\mrm{k}b}) = 9$ (see\n text for explanation).}\n\\label{fig:benrat-vs-ra}\n\\end{figure}%\n\n\n\\subsection{Dynamics of the frequency of the resistant pathogen strain}\n\\label{sec:app-delsel}\n\nIf the fungicide resistance is not associated with a fitness cost,\nthen the resistant strain is favored by selection and eventually\ndominates the population whenever the high risk fungicide is applied\nalone or in a mixture with the low risk fungicide [\\fig{fig:phase-diagr}B,C].\nHowever, for a given value of the total fungicide dose $C$,\nthe selection for resistance slows down when applying the fungicide\nmixture as compared to the treatment with the high risk fungicide\nalone [as seen from time-dependent numerical solutions of the model\nEqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3})] in\nagreement with the findings of \\cite{hopa+11a}.\n\nIn order to understand this result we consider the dynamics of the\nfrequency of the resistant pathogen strain $p(t) =\nI_\\mrm{r}\/(I_\\mrm{r} + I_\\mrm{s})$. The rate of its change is obtained\nfrom Eqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}) \\cite{mile+89}\n\\begin{equation}\\label{eq:resfreq-dyn}\n\\frac{d p}{d t} = s(t) p (1 - p),\n\\end{equation}\nwhere \n\\begin{equation}\\label{eq:sel-coeff-app}\n s = b \\left[ (1 - \\varepsilon_\\mrm{r}(C, r_\\mrm{B})) (1 - \\rho_\\mrm{r} ) -\n (1 -\\varepsilon_\\mrm{s}(C, r_\\mrm{B})) \\right] H(t)\n\\end{equation}\nis the selection coefficient [a similar expression was found in\n\\cite{gugi99}]. Here $\\varepsilon_\\mrm{s}(C, r_\\mrm{B}) = k_\\mrm{k} C \/ ( C\n+ C_{50}\/ \\gamma_\\mrm{s})$, $\\varepsilon_\\mrm{r}(C, r_\\mrm{B}) = k_\\mrm{k} C\n\/ ( C + C_{50}\/ \\gamma_\\mrm{r})$ and $r_\\mrm{B}$ is the proportion of\nthe fungicide B in the mixture. 
Here, $C=C_\\mrm{A} + C_\\mrm{B}$, where\nthe dose $C_\\mrm{A}$ of the fungicide A and the dose $C_\\mrm{B}$ of\nthe fungicide B may depend on time due to fungicide decay:\n\\begin{equation}\\label{eq:cab-t}\nC_\\mrm{A} = C_\\mrm{A0} \\exp \\left( -\\nu_\\mrm{A} t \\right), \\: C_\\mrm{B} = C_\\mrm{B0} \\exp \\left( -\\nu_\\mrm{B} t \\right),\n\\end{equation}\nwhere $C_\\mrm{A0}$, $C_\\mrm{B0}$ are the fungicide doses at the time\nof application, and $\\nu_\\mrm{A}$ and $\\nu_\\mrm{B}$ are the fungicide\ndecay rates.\n\nThe expression (\\ref{eq:sel-coeff-app}) for the selection coefficient was\nobtained under the assumption that the fungicide decreases the\ntransmission rate $b$. In the case when the fungicide decreases the\ninfectious period $\\mu^{-1}$, the selection coefficient does not depend on the\namount of healthy hosts $H(t)$.\n\nVariables in \\eq{eq:resfreq-dyn} can be separated and a\nclosed-form solution is found\n\\begin{equation}\\label{eq:resfreq-dyn-sol}\n\\int \\frac{d p}{p (1 - p)} = \\int_0^{t_\\mrm{m}} s( t ) dt.\n\\end{equation}\n\nOne can see from \\eq{eq:resfreq-dyn-sol} that the overall selection\nover the time $t_\\mrm{m}$ is determined by the integral of the\nselection coefficient $s(t)$ over time $\\int_0^{t_\\mrm{m}} s(t)\ndt$. We are interested in the overall selection that occurs during the\ntime $t_\\mrm{m}$ which is longer than the time scale of change in the\nfungicide dose. In this case, an equivalent, constant over time\nfungicide dose can be determined, which gives rise to the same value\nof the integral $\\int_0^{t_\\mrm{m}} s(t) dt$. This effective fungicide dose\nwould take into account the time-dependent effect of the amount of the\nhost tissue on the strength of selection.\n\nAssuming a zero fitness cost ($\\rho_\\mrm{r}=0$), no pharmacological\ninteraction ($u=0$) and full resistance ($\\alpha=0$), the selection\ncoefficient can be written as \n\\begin{equation}\\label{eq:sel-coeff1}\ns(t) = b \\left[ \\varepsilon(C) - \\varepsilon(r_\\mrm{B} C) \\right] H(t).\n\\end{equation}\nAssuming that $H(t)$ is a slowly varying function compared to the time\nscale of selection, the solution of \\eq{eq:resfreq-dyn} reads:\n\\begin{equation}\\label{eq:resfreq-logist}\np(t) = \\frac{p_0 \\exp [s(t) t]}{ 1 + p_0 \\left( \\exp [s(t) t] - 1 \\right)},\n\\end{equation}\nwhere $p_0 = p(t=0)$. At $s>0$, the function $p(t)$ grows monotonically\nand tends to one at large times. The rate at which it grows is\ndetermined by the magnitude of the selection coefficient $s$.\n\nOne can see from \\eq{eq:sel-coeff1} that when the high risk fungicide is\napplied alone ($r_\\mrm{B} = 0$), the selection coefficient is larger\nthan when it is mixed with a low risk fungicide ($0<r_\\mrm{B}\\le 1$): $s(r_\\mrm{B}=0,C) >\ns(r_\\mrm{B}>0,C)$. This is because the function $\\varepsilon(r_\\mrm{B}\nC)$ has positive values for any\n$r_\\mrm{B}>0$. Thus, the selection for the resistant strain (against the\nsensitive strain) is delayed when a mixture of high risk and low risk\nfungicides is applied compared to treatment with the high risk\nfungicide alone. A careful consideration of \\eq{eq:resfreq-dyn-sol}\nreveals that this conclusion holds also when $H(t)$ does not vary\nslowly over the time scale of selection. A lower fungicide dose will\ndecrease the selection coefficient under the integral on the right-hand side of\n\\eq{eq:resfreq-dyn-sol}. 
Hence, in order to achieve a given large value\nof the frequency of resistance $p$, one would need to integrate over a\nlonger time $t_\\mrm{m}$.\n\n\\subsection{Generalization of the model: effect of the fungicide and fitness cost of resistance\n on the pathogen}\n\\label{sec:gener}\n\n\nSo far we assumed that both the resistance cost and fungicides affect\nthe transmission rate $b$. We performed the same analysis for the\nthree remaining cases possible in the model: When (i) both resistance\ncost and the fungicide affect the pathogen death rate according to\n$\\mu \\to \\mu (1 + \\rho_\\mrm{r} + \\varepsilon_\\mrm{r}(C, r_\\mrm{B}))$ for the\nresistant strain and $\\mu \\to \\mu (1 + \\varepsilon_\\mrm{s}(C, r_\\mrm{B}))$\nfor the sensitive strain; (ii) the resistance cost affects the\ntransmission rate $b \\to b (1 - \\rho_\\mrm{r})$ of the resistant strain\nand the fungicides affect the pathogen death rate $\\mu \\to\\mu (1 +\n\\varepsilon_\\mrm{s,r}(C, r_\\mrm{B}))$ ; (iii) resistance cost affects the\ndeath rate of the resistant pathogen strain $\\mu \\to \\mu (1 +\n\\rho_\\mrm{r})$, while the fungicide affects the infection rate of both\nresistant and sensitive strains $b \\to b (1 - \\varepsilon_\\mrm{r,s}(C, r_\\mrm{B}))$.\nWe have found that although the mathematical expressions for the\nresults have a different form in these cases and there is a slight\nquantitative difference, all the conclusions remain the same and do\nnot depend on whether the fungicide and the resistance cost manifest\nin the infection rate $b$ or in the pathogen death rate $\\mu$.\n\nMoreover, we have done the same analysis using a fungicide\ndose-response function different from \\eq{eq:eps-1fungic}, namely\nusing the function $\\varepsilon(C) = \\varepsilon_\\mrm{m} ( 1 - \\exp\n\\left[ - \\beta C \\right])$. If the two fungicides have the same values\nof $\\varepsilon_\\mrm{m}$ and $\\beta$ and are applied at doses $C_\\mrm{A}$\nand $C_\\mrm{B}$, then according to Loewe's additivity, their combined\naction has the form $\\varepsilon(C_\\mrm{A}, C_\\mrm{B}) = \\varepsilon_\\mrm{m} ( 1 -\n\\exp \\left[ - \\beta (C_\\mrm{A} + C_\\mrm{B}) \\right])$. We found again\nthat all the conclusions remain the same in this case.\n\nThis generalization applies to determination of the direction of\nselection (the sign of the selection coefficient in\n\\eq{eq:resfreq-dyn}) and to the outcomes for the treatment benefit at\nequilibrium obtained in \\sec{sec:trben}. However, the time-dependent\nsolutions of\nEqs.\\,(\\ref{eq:1host2fung-gen-1})-(\\ref{eq:1host2fung-gen-3}) may\nbehave differently depending on how the fungicide and the fitness cost\naffect the pathogen life cycle and the form of the fungicide\ndose-response function. This is an interesting topic for further\ninvestigations, but lies beyond the scope of this study.\n\n\n\n\\subsection{Fungicide mixture versus alternation}\n\\label{sec:app-mix-vs-altern}\n\nIt was previously discussed \\cite{sh06} that in the presence of a\nfitness cost the alternation of fungicides can be effective, but we\nhave shown here that fungicide mixtures can also be effective in this\ncase. When using an alternation strategy, the period of selection\nduring which the resistant strain is favored in the presence of the\nhigh risk fungicide is followed by a period during which selection\nfavors the sensitive strain in the absence of this fungicide. 
The\nlatter period is typically much longer because the selection pressure\ninduced by the high risk fungicide is much larger than that induced by\nthe fitness cost of resistance. Hence, one needs to wait for quite a\nlong time before the resistant strain disappears and the high risk\nfungicide can be used again. Moreover, there are times during which\nthe frequency of the resistant strain becomes large (at the end of the\nperiod of the application of the high risk fungicide), which increases\nthe risk that resistance will spread to other regions. Both of these\ndisadvantages are avoided by using a mixture where the proportion of\nthe low risk fungicide is above a critical value determined here\n(\\fig{fig:opt-fung-rat-comb}). In this case there is no need to delay\nthe application of the high risk fungicide and the frequency of the\nresistant strain does not rise above the mutation- or\nmigration-selection equilibrium because the mixture does not induce\nselection for resistance.\n\n\\subsection{The risk of double resistance}\n\\label{sec:dr-risk}\n\nAlthough we do not consider the possibility of double resistance in\nour model, by applying an optimal proportion of fungicides in the\nmixture as suggested here, one would prevent selection for resistance\nto the high risk fungicide. \nConsequently, the risk of development of double resistance would be\nreduced.\nFor both sexually and asexually reproducing pathogens, there are three\npathways for generating double resistance: (i) A-resistant\nmutants are produced first and then a proportion of them acquires also\nB-resistance by spontaneous mutation (ii) B-resistant mutants are\ngenerated first and subsequently acquire A-resistance and (iii) double\nresistance is generated directly from the wild-type. In this case, by\npreventing selection for A-resistance, one removes only the pathway (i)\nto double resistance. \nIf a pathogen is able to reproduce sexually, then a much more\nlikely scenario for the double resistance to emerge is through\nrecombination. For the recombination to occur, both singly resistant\nstrains (A-resistant and B-resistant) would need to be present in the\npopulation at significant frequencies. Hence, preventing selection for\nA-resistance would diminish the probability of the emergence of double\nresistance by recombination.\nThus, our findings would also help to significantly reduce the risk of\ndevelopment of double resistance, especially in sexually reproducing\npathogens.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nSeparation of the wave equation in prolate spheroidal coordinates leads to the\nprolate spheroidal wave equation (PSWE)\n\\begin{equation}\n\\left( {1-z^{2}}\\right) \\frac{d^{2}y}{dz^{2}}-2z\\frac{dy}{dz}+\\left(\n{\\lambda-\\frac{\\mu^{2}}{1-z^{2}}+\\gamma^{2}\\left( {1-z^{2}}\\right) }\\right)\ny=0, \\label{eq1}%\n\\end{equation}\nwhere $\\lambda$ and $\\mu$ are separation constants, and $\\gamma$ is\nproportional to the frequency (see [30] and [40]).\n\nSolutions of \\eqref{eq1}, the prolate spheroidal wave functions (PSWFs), are\nviewed as depending on the parameters $\\mu$ and $\\gamma$ from the equation, as\nwell as an implicitly defined parameter $\\nu$ (which describes the behavior of\nsolutions at infinity). 
This latter parameter is the so-called characteristic\nexponent, and for details see [1, \\S 8.1.1].\n\nThe parameter $\\lambda$ is usually regarded as an eigenvalue admitting an\neigensolution that is bounded at both $z=\\pm1$, which is equivalent to $\\mu=m$\nand $\\nu=n$ being integers (see [1] and [19]). Most of the literature focuses\non PSWFs with these parameters being integers, since this is the most useful\ncase in practical applications. We shall assume this as well throughout this paper.\n\nWe consider the important case of $\\gamma\\rightarrow\\infty$ (which\ncorresponds, for example, to high-frequency scattering in acoustics). In this\ncase it is known [1, p. 186] that $\\lambda\\rightarrow-\\infty$, and we shall\nassume this here. With the exception of \\S 5, our results will be uniformly\nvalid for $m$ bounded, $n$ small or large, and specifically\n\\begin{equation}\n0\\leq m\\leq n\\leq2\\pi^{-1}\\gamma\\left( {1-\\delta}\\right) , \\label{eq2}%\n\\end{equation}\nwhere (here and throughout) $\\delta\\in\\left( {0,1}\\right) $ is arbitrarily chosen.\n\nAlthough we will consider the case $z$ complex, our primary concern will be\nfor $z$ real (denoted by $x)$, and in particular the so-called angular\n($-10$ was assumed, which\ndoes not have many of the applications described above.\n\nThe PSWE (\\ref{eq1}) has regular singularities at $z=\\pm1$, each with\nexponents $\\pm{\\frac{1}{2}}m$. When $\\gamma=0$ the PSWE degenerates into the\nassociated Legendre equation (regular singularities at $z=\\pm1$ and $z=\\infty\n$), which for $-11$, and the branches of the square roots are such that\nintegrand is positive for $0\\leq t0$, in which $E$ is the Elliptic integral of the second kind given\nby (\\ref{eq33}). Thus\n\\begin{equation}\n\\xi=z-\\sigma E\\left( {\\sigma;\\sigma^{-1}}\\right) +{O}\\left(\n{z^{-1}}\\right) \\quad\\left( {z\\rightarrow\\infty}\\right) . \\label{eq43}%\n\\end{equation}\nWe now apply Theorem 3.1 of [26], with $u$ replaced by $\\gamma$, and with\n$\\xi$ replaced by $i\\xi$. Then, by matching solutions that are recessive at\n$z=\\pm i\\infty$, we have from \\eqref{eq13}, (\\ref{eq14}) and (\\ref{eq43})\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\nS_{n}^{m\\left( 3\\right) }\\left( {z,\\gamma}\\right) =i^{-1-n}\\gamma\n^{-1}\\left[ {\\left( {z^{2}-1}\\right) \\left( {z^{2}-\\sigma^{2}}\\right)\n}\\right] ^{-1\/4}e^{i\\gamma J\\left( \\sigma\\right) }\\\\\n\\times\\left[ {e^{i\\gamma\\xi}\\sum\\limits_{s=0}^{p-1}{\\left( {-i}\\right)\n^{s}\\frac{\\displaystyle A_{s}\\left( \\xi\\right) }{\\displaystyle\\gamma^{s}}}+\\varepsilon_{p,1}\\left(\n{\\gamma,\\xi}\\right) }\\right] ,\n\\end{array}\n\\label{eq44}%\n\\end{equation}\nand\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\nS_{n}^{m\\left( 4\\right) }\\left( {z,\\gamma}\\right) =i^{1+n}\\gamma\n^{-1}\\left[ {\\left( {z^{2}-1}\\right) \\left( {z^{2}-\\sigma^{2}}\\right)\n}\\right] ^{-1\/4}e^{-i\\gamma J\\left( \\sigma\\right) }\\\\\n\\times\\left[ {e^{-i\\gamma\\xi}\\sum\\limits_{s=0}^{p-1}{i^{s}\\frac{\\displaystyle A_{s}\\left(\n\\xi\\right) }{\\displaystyle \\gamma^{s}}}+\\varepsilon_{p,2}\\left( {\\gamma,\\xi}\\right)\n}\\right] .\n\\end{array}\n\\label{eq45}%\n\\end{equation}\n\n\n\nThe error terms $\\varepsilon_{p,j}\\left( {\\gamma,\\xi}\\right) $ ($j=1,2$) are\nbounded by Olver's theorem, and are ${O}\\left( {\\gamma^{-p}}\\right)\n$ in unbounded domains containing the real interval $1+\\delta\\leq z<\\infty$\n($\\delta>0$). 
Here the coefficients are defined recursively by $A_{0}\\left(\n\\xi\\right) =1$ and\n\\begin{equation}\nA_{s+1}\\left( \\xi\\right) =-{\\tfrac{1}{2}A}_{s}^{\\prime}\\left( \\xi\\right)\n+{\\tfrac{1}{2}}\\int{\\psi\\left( \\xi\\right) A_{s}\\left( \\xi\\right) d\\xi\n}\\quad\\left( {s=0,1,2,\\cdots}\\right) . \\label{eq46}%\n\\end{equation}\nThus, from (\\ref{eq15}), (\\ref{eq23}), (\\ref{eq42}), (\\ref{eq44}) and\n(\\ref{eq45}), we obtain the desired Liouville-Green expansion for $Ps_{n}%\n^{m}\\left( {x,\\gamma^{2}}\\right) $. In particular, to leading order, we\nhave\n\\begin{equation}\nPs_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\frac{\\left( {-1}\\right) ^{n}%\n\\sin\\left( {\\gamma\\xi+\\gamma\\sigma E\\left( {\\sigma;\\sigma^{-1}}\\right)\n-{\\frac{1}{2}}n\\pi}\\right) +{O}\\left( {\\gamma^{-1}}\\right) }%\n{\\gamma\\left( {n-m}\\right) !V_{n}^{m}\\left( \\gamma\\right) \\left[ {\\left(\n{x^{2}-1}\\right) \\left( {x^{2}-\\sigma^{2}}\\right) }\\right] ^{1\/4}},\n\\label{eq47}%\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, uniformly for $1+\\delta\\leq x<\\infty$. In order\nfor this approximation to be practicable, one requires an asymptotic\napproximation for $\\lambda_{n}^{m}\\left( {\\gamma^{2}}\\right) $ as\n$\\gamma\\rightarrow\\infty$, and we shall discuss this in the next section. We\nalso remark that (\\ref{eq47}) breaks down at the simple pole $x=1$, and in the\nnext section we obtain asymptotic approximations that are valid at this pole.\n\n\\section{Bessel function approximations: the radial case}\n\n\nWe now obtain approximations valid at the simple pole of $f\\left( {\\sigma\n,z}\\right) $ at $z=1$, using the asymptotic theory of [25, Chap. 12]. We\nconsider $z=x$ real and positive. The appropriate Liouville transformation is\nnow given by\n\\begin{equation}\n\\eta=\\xi^{2}=\\left[ {\\int_{1}^{x}{\\left\\{ {-f\\left( {\\sigma,t}\\right)\n}\\right\\} ^{1\/2}dt}}\\right] ^{2}, \\label{eq48}%\n\\end{equation}\nalong with\n\\begin{equation}\n\\hat{{W}}=\\left\\{ {\\frac{\\eta\\left( {x^{2}-\\sigma^{2}}\\right) }{x^{2}-1}%\n}\\right\\} ^{1\/4}w, \\label{eq49}%\n\\end{equation}\nwhich yields the new equation\n\\begin{equation}\n\\frac{d^{2}\\hat{{W}}}{d\\eta^{2}}=\\left[ {-\\frac{\\gamma^{2}}{4\\eta}%\n+\\frac{m^{2}-1}{4\\eta^{2}}+\\frac{\\hat{{\\psi}}\\left( \\eta\\right) }{\\eta}%\n}\\right] \\hat{{W}}. \\label{eq50}%\n\\end{equation}\nHere\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\hat{{\\psi}}\\left( \\eta\\right) =\\dfrac{1-4m^{2}}{16\\eta}+\\dfrac{m^{2}%\n-1}{4\\left( {x^{2}-1}\\right) \\left( {x^{2}-\\sigma^{2}}\\right) }\\\\\n+\\dfrac{\\left( {1-\\sigma^{2}}\\right) \\left( {6x^{4}-\\left( {3+\\sigma^{2}%\n}\\right) x^{2}-2\\sigma^{2}}\\right) }{16\\left( {x^{2}-1}\\right) \\left(\n{x^{2}-\\sigma^{2}}\\right) ^{3}}.\n\\end{array}\n\\label{eq51}%\n\\end{equation}\nThis has the same main features of (\\ref{eq30}), namely a simple pole for the\ndominant term (for large $\\gamma$) and a double pole in another term. We note\nthat $x=1$ corresponds to $\\eta=0$.\n\nThe difference here is that non-dominant term $\\hat{{\\psi}}\\left(\n\\eta\\right) $ is now analytic at $\\eta=0$, i.e. $x=1$. Neglecting $\\hat\n{{\\psi}}\\left( \\eta\\right) $ in \\eqref{eq50} gives an equation solvable in\nterms of Bessel functions. We then find (by matching recessive solutions at\n$x=1$) and applying theorem 4.1 of [25, Chap. 
12] (with $u$ replaced by\n$\\gamma$ and $\\zeta$ replaced by $\\eta$)\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\nPs_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =c_{n}^{m}\\left( \\gamma\\right)\n\\left\\{ {\\dfrac{\\eta}{\\left( {x^{2}-1}\\right) \\left( {x^{2}-\\sigma^{2}%\n}\\right) }}\\right\\} ^{1\/4}\\\\\n\\times\\left[ {J_{m}\\left( {\\gamma\\eta^{1\/2}}\\right) +{O}\\left(\n{\\gamma^{-1}}\\right) \\operatorname{env}J_{m}\\left( {\\gamma\\eta^{1\/2}%\n}\\right) }\\right] ,\n\\end{array}\n\\label{eq52}%\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, uniformly for $10$\n\\begin{equation}\nU\\left( {-{\\tfrac{1}{2}}\\gamma\\alpha^{2},\\zeta\\sqrt{2\\gamma}}\\right)\n\\sim\\left( {\\frac{\\gamma\\alpha^{2}}{2e}}\\right) ^{\\gamma\\alpha^{2}\/4}%\n\\frac{\\exp\\left\\{ {-\\gamma\\int_{\\sigma}^{x}{\\left\\{ {f\\left( {\\sigma\n,t}\\right) }\\right\\} ^{1\/2}dt}}\\right\\} }{\\left\\{ {2\\gamma\\left(\n{\\zeta^{2}-\\alpha^{2}}\\right) }\\right\\} ^{1\/4}}, \\label{eq88}%\n\\end{equation}%\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n{U}^{\\prime}\\left( {-{\\tfrac{1}{2}}\\gamma\\alpha^{2},\\zeta\\sqrt{2\\gamma}%\n}\\right) \\sim-\\frac{1}{2}\\left( {\\dfrac{\\gamma\\alpha^{2}}{2e}}\\right)\n^{\\gamma\\alpha^{2}\/4}\\\\\n\\times\\left\\{ {2\\gamma\\left( {\\zeta^{2}-\\alpha^{2}}\\right) }\\right\\}\n^{1\/4}\\exp\\left\\{ {-\\gamma\\int_{\\sigma}^{x}{\\left\\{ {f\\left( {\\sigma\n,t}\\right) }\\right\\} ^{1\/2}dt}}\\right\\} ,\n\\end{array}\n\\label{eq89}%\n\\end{equation}%\n\\begin{equation}\n\\overline{U}\\left( {-{\\tfrac{1}{2}}\\gamma\\alpha^{2},\\zeta\\sqrt{2\\gamma}%\n}\\right) \\sim2\\left( {\\frac{\\gamma\\alpha^{2}}{2e}}\\right) ^{\\gamma\n\\alpha^{2}\/4}\\frac{\\exp\\left\\{ {\\gamma\\int_{\\sigma}^{x}{\\left\\{ {f\\left(\n{\\sigma,t}\\right) }\\right\\} ^{1\/2}dt}}\\right\\} }{\\left\\{ {2\\gamma\\left(\n{\\zeta^{2}-\\alpha^{2}}\\right) }\\right\\} ^{1\/4}}, \\label{eq90}%\n\\end{equation}\nand\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\overline{U}^{\\prime}\\left( {-{\\tfrac{1}{2}}\\gamma\\alpha^{2},\\zeta\n\\sqrt{2\\gamma}}\\right) \\sim\\left( {\\dfrac{\\gamma\\alpha^{2}}{2e}}\\right)\n^{\\gamma\\alpha^{2}\/4}\\left\\{ {2\\gamma\\left( {\\zeta^{2}-\\alpha^{2}}\\right)\n}\\right\\} ^{1\/4}\\\\\n\\times\\exp\\left\\{ {\\gamma\\int_{\\sigma}^{x}{\\left\\{ {f\\left( {\\sigma\n,t}\\right) }\\right\\} ^{1\/2}dt}}\\right\\} .\n\\end{array}\n\\label{eq91}%\n\\end{equation}\n\n\n\n\nThese, along with\n\\begin{equation}\n\\left\\vert \\eta\\right\\vert ^{1\/4}I_{m}\\left( {\\gamma\\left\\vert \\eta\n\\right\\vert ^{1\/2}}\\right) \\sim\\left( {2\\pi\\gamma}\\right) ^{-1\/2}%\n\\exp\\left\\{ {\\gamma\\int_{x}^{1}{\\left\\{ {f\\left( {\\sigma,t}\\right)\n}\\right\\} ^{1\/2}dt}}\\right\\} , \\label{eq92}%\n\\end{equation}%\n\\begin{equation}\n\\frac{d\\left\\{ {\\left\\vert \\eta\\right\\vert ^{1\/4}I_{m}\\left( {\\gamma\n\\left\\vert \\eta\\right\\vert ^{1\/2}}\\right) }\\right\\} }{dx}\\sim-\\left(\n{\\frac{\\gamma}{2\\pi}}\\right) ^{1\/2}\\left( {\\frac{x^{2}-\\sigma^{2}}{1-x^{2}}%\n}\\right) ^{1\/2}\\exp\\left\\{ {\\gamma\\int_{x}^{1}{\\left\\{ {f\\left( {\\sigma\n,t}\\right) }\\right\\} ^{1\/2}dt}}\\right\\} , \\label{eq93}%\n\\end{equation}\nand\n\\begin{equation}\n\\frac{d\\zeta}{dx}=\\left\\{ {\\frac{x^{2}-\\sigma^{2}}{\\left( {1-x^{2}}\\right)\n\\left( {\\zeta^{2}-\\alpha^{2}}\\right) }}\\right\\} ^{1\/2}, \\label{eq94}%\n\\end{equation}\ncan be used to simplify (\\ref{eq86}) and (\\ref{eq87}). 
In particular, we find\nthat\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\ne_{n}^{m}\\left( \\gamma\\right) \\left\\{ {w_{2}\\left( {\\gamma,\\alpha,\\zeta\n}\\right) -\\left( {-1}\\right) ^{m+n}w_{4}\\left( {\\gamma,\\alpha,\\zeta\n}\\right) }\\right\\} \\\\\n=o\\left( 1\\right) A{\\operatorname{env}}U\\left( {-{\\tfrac{1}{2}}\\gamma\n\\alpha^{2},\\zeta\\sqrt{2\\gamma}}\\right) ,\n\\end{array}\n\\label{eq95}%\n\\end{equation}\nwhere the $o\\left( 1\\right) $ term is exponentially small as $\\gamma\n\\rightarrow\\infty_{\\,}$for $x\\in\\left[ {0,1-\\delta_{0}}\\right] $. In\naddition, we obtain the useful result\n\\begin{equation}\nc_{n}^{m}\\left( \\gamma\\right) \\sim d_{n}^{m}\\left( \\gamma\\right) \\left(\n{\\frac{\\gamma\\alpha^{2}}{2e}}\\right) ^{\\gamma\\alpha^{2}\/4}\\left( {\\frac\n{\\pi^{2}}{2\\gamma}}\\right) ^{1\/4}\\exp\\left\\{ {-\\gamma\\int_{\\sigma}%\n^{1}{\\left\\{ {f\\left( {\\sigma,t}\\right) }\\right\\} ^{1\/2}dt}}\\right\\} .\n\\label{eq96}%\n\\end{equation}\nFrom (\\ref{eq73}) and (\\ref{eq84}) - (\\ref{eq95}), for $m+n$ even, $m$ bounded\nand $n$ satisfying (\\ref{eq2}), we arrive at our desired result\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\dfrac\n{\\operatorname{Ps}\\left( {0,\\gamma^{2}}\\right) }{U\\left( {-{\\frac{1}{2}%\n}\\gamma\\alpha^{2},0}\\right) }\\left\\{ {\\dfrac{\\sigma^{2}\\left( {\\alpha\n^{2}-\\zeta^{2}}\\right) }{\\alpha^{2}\\left( {\\sigma^{2}-x^{2}}\\right) \\left(\n{1-x^{2}}\\right) }}\\right\\} ^{1\/4}\\\\\n\\times\\left\\{ {U\\left( {-{\\frac{1}{2}}\\gamma\\alpha^{2},\\zeta\\sqrt{2\\gamma}%\n}\\right) +{O}\\left( {\\gamma^{-2\/3}\\ln\\left( \\gamma\\right)\n}\\right) \\operatorname{env}U\\left( {-{\\frac{1}{2}}\\gamma\\alpha^{2}%\n,\\zeta\\sqrt{2\\gamma}}\\right) }\\right\\} ,\n\\end{array}\n\\label{eq97}%\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, uniformly for $0\\leq x\\leq1-\\delta_{0}$.\n\nFrom [26, \\S 5] we note that\n\\begin{equation}\nU\\left( {-{\\tfrac{1}{2}}\\gamma\\alpha^{2},0}\\right) =\\pi^{-1\/2}2^{\\left(\n{\\gamma\\alpha^{2}-1}\\right) \/4}\\Gamma\\left( {{\\frac{1}{4}}\\gamma\\alpha\n^{2}+{\\frac{1}{4}}}\\right) \\sin\\left( {{\\frac{1}{4}}\\gamma\\alpha^{2}%\n\\pi+{\\frac{1}{4}}\\pi}\\right) , \\label{eq98}%\n\\end{equation}\nas well as\n\\begin{equation}\n{U}^{\\prime}\\left( {-{\\tfrac{1}{2}}\\gamma\\alpha^{2},0}\\right) =-\\pi\n^{-1\/2}2^{\\left( {\\gamma\\alpha^{2}+1}\\right) \/4}\\Gamma\\left( {{\\frac{1}{4}%\n}\\gamma\\alpha^{2}+{\\frac{3}{4}}}\\right) \\sin\\left( {{\\frac{1}{4}}%\n\\gamma\\alpha^{2}\\pi+{\\frac{3}{4}}\\pi}\\right) . \\label{eq99}%\n\\end{equation}\nThus, on referring to (\\ref{eq82}), we observe that the RHS of (\\ref{eq98}) is\nbounded away from zero for large $\\gamma$ when $m+n$ is even, and likewise for\nthe RHS of \\eqref{eq99} when $m+n$ is odd (see \\eqref{eq101} below).\n\n\n\n\n\n\nFor the case $\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) $ odd,\nequivalently $m+n$ odd, we differentiate both sides of \\eqref{eq84} with\nrespect to $\\zeta$, and then set $x=\\zeta=0$. 
As a result, using \\eqref{eq62}\nand \\eqref{eq80}, along with the fact that $\\operatorname{Ps}_{n}^{m}\\left(\n{0,\\gamma^{2}}\\right) =0$, we obtain\n\\begin{equation}\nd_{n}^{m}\\left( \\gamma\\right) =\\left( {\\frac{\\alpha}{\\sigma}}\\right)\n^{1\/2}\\frac{\\operatorname{Ps}_{n}^{m}{}^{\\prime}\\left( {0,\\gamma^{2}}\\right)\n}{\\partial w_{1}\\left( {\\gamma,\\alpha,0}\\right) \/\\partial\\zeta}.\n\\label{eq100}%\n\\end{equation}\nThus, again from \\eqref{eq95}, we conclude for $m+n$ odd, $m$ bounded and $n$\nsatisfying \\eqref{eq2}, that\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\dfrac\n{\\operatorname{Ps}_{n}^{m}{}^{\\prime}\\left( {0,\\gamma^{2}}\\right) }%\n{{U}^{\\prime}\\left( {-{\\frac{1}{2}}\\gamma\\alpha^{2},0}\\right) }\\left\\{\n{\\dfrac{\\alpha^{2}\\left( {\\alpha^{2}-\\zeta^{2}}\\right) }{4\\gamma^{2}%\n\\sigma^{2}\\left( {\\sigma^{2}-x^{2}}\\right) \\left( {1-x^{2}}\\right) }%\n}\\right\\} ^{1\/4}\\\\\n\\times\\left\\{ {U\\left( {-{\\frac{1}{2}}\\gamma\\alpha^{2},\\zeta\\sqrt{2\\gamma}%\n}\\right) +{O}\\left( {\\gamma^{-2\/3}\\ln\\left( \\gamma\\right)\n}\\right) \\operatorname{env}U\\left( {-{\\frac{1}{2}}\\gamma\\alpha^{2}%\n,\\zeta\\sqrt{2\\gamma}}\\right) }\\right\\} ,\n\\end{array}\n\\label{eq101}%\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, uniformly for $0\\leq x\\leq1-\\delta$. In this\n${U}^{\\prime}\\left( {-{\\frac{1}{2}}\\gamma\\alpha^{2},0}\\right) $ is given by\n(\\ref{eq99}).\n\nWe now show that the proportionality constants in (\\ref{eq97}) and\n(\\ref{eq101}) can be replaced by one that does not involve $\\operatorname{Ps}%\n_{n}^{m}\\left( {0,\\gamma^{2}}\\right) $ or $\\operatorname{Ps}_{n}^{m}%\n{}^{\\prime}\\left( {0,\\gamma^{2}}\\right) $. Specifically, from (\\ref{eq19}),\n(\\ref{eq61}), (\\ref{eq84}), (\\ref{eq95}) and (\\ref{eq96}) we have (for both\nthe even and odd cases) that\n\\begin{equation}\nd_{n}^{m}\\left( \\gamma\\right) \\sim\\left\\{ {\\frac{\\left( {n+m}\\right)\n!}{\\left( {2n+1}\\right) \\left( {n-m}\\right) !p_{n}^{m}\\left(\n\\gamma\\right) }}\\right\\} ^{1\/2}, \\label{eq102}%\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, again with $m$ bounded and $n$ satisfying\n(\\ref{eq2}). Here\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\np_{n}^{m}\\left( \\gamma\\right) =\\left[ {\\int_{0}^{1-\\delta_{0}}{\\left\\{\n{\\dfrac{\\alpha^{2}-\\zeta^{2}}{\\left( {\\sigma^{2}-x^{2}}\\right) \\left(\n{1-x^{2}}\\right) }}\\right\\} ^{1\/2}U^{2}\\left( {-{\\frac{1}{2}}\\gamma\n\\alpha^{2},\\zeta\\sqrt{2\\gamma}}\\right) dx}}\\right. \\\\\n\\left. {+q_{n}^{m}\\left( \\gamma\\right) \\int_{1-\\delta_{0}}^{1}{\\left\\{\n{\\dfrac{\\left\\vert \\eta\\right\\vert }{\\left( {1-x^{2}}\\right) \\left(\n{x^{2}-\\sigma^{2}}\\right) }}\\right\\} ^{1\/2}I_{m}^{2}\\left( {\\gamma\n\\left\\vert \\eta\\right\\vert ^{1\/2}}\\right) dx}}\\right] ,\n\\end{array}\n\\label{eq103}%\n\\end{equation}\nin which\n\\begin{equation}\nq_{n}^{m}\\left( \\gamma\\right) =\\left( {\\frac{\\gamma\\alpha^{2}}{2e}}\\right)\n^{\\gamma\\alpha^{2}\/2}\\left( {\\frac{\\pi^{2}}{2\\gamma}}\\right) ^{1\/2}%\n\\exp\\left\\{ {-2\\gamma\\int_{\\sigma}^{1}{\\left\\{ {f\\left( {\\sigma,t}\\right)\n}\\right\\} ^{1\/2}dt}}\\right\\} . 
\\label{eq104}%\n\\end{equation}\nNote also, from (\\ref{eq96}), that under the same conditions\n\\begin{equation}\nc_{n}^{m}\\left( \\gamma\\right) \\sim\\left\\{ {\\frac{\\left( {n+m}\\right)\n!q_{n}^{m}\\left( \\gamma\\right) }{\\left( {2n+1}\\right) \\left(\n{n-m}\\right) !p_{n}^{m}\\left( \\gamma\\right) }}\\right\\} ^{1\/2}.\n\\label{eq105}%\n\\end{equation}\n\n\n\\section{Fixed $m$ and $n$: the angular case}\n\n\nFor fixed $m$\\textit{ and }$n$ we can simplify the results of the previous\nsection, by applying the theory of [10]. To this end we observe that\n(\\ref{eq26}) can be expressed in the form\n\\begin{equation}\n\\frac{d^{2}w}{dx^{2}}=\\left[ {\\frac{\\gamma^{2}x^{2}}{1-x^{2}}-\\frac{a\\gamma\n}{1-x^{2}}+\\frac{m^{2}-1}{\\left( {1-x^{2}}\\right) ^{2}}}\\right] w,\n\\label{eq106}%\n\\end{equation}\nwhere\n\\begin{equation}\na=\\lambda\\gamma^{-1}+\\gamma=2\\left( {n-m+{\\tfrac{1}{2}}}\\right)\n+{O}\\left( {\\gamma^{-1}}\\right) , \\label{eq107}%\n\\end{equation}\nthe ${O}\\left( {\\gamma^{-1}}\\right) $ term being valid for fixed $m$\nand $n$ and $\\gamma\\rightarrow\\infty$. In particular, $a$ is bounded.\n\nEquation (\\ref{eq106}) is characterised as having a pair of almost coalescent\nturning points near $x=0$. The appropriate Liouville transformation in this\ncase is given by\n\\begin{equation}\n\\frac{1}{2}\\rho^{2}=\\int_{0}^{x}{\\frac{t}{\\left( {1-t^{2}}\\right) ^{1\/2}}%\ndt}=1-\\left( {1-x^{2}}\\right) ^{1\/2}. \\label{eq108}%\n\\end{equation}\nNote $x=0$ corresponds to $\\rho=0$, and $x=1$ corresponds to $\\rho=\\sqrt{2}$.\nThen with\n\\begin{equation}\nW=\\frac{x^{1\/2}}{\\rho^{1\/2}\\left( {1-x^{2}}\\right) ^{1\/4}}w, \\label{eq109}%\n\\end{equation}\nwe obtain\n\\begin{equation}\n\\frac{d^{2}W}{d\\rho^{2}}=\\left[ {\\gamma^{2}\\rho^{2}-\\gamma a+\\gamma\\zeta\n\\phi\\left( \\rho\\right) +\\chi\\left( \\rho\\right) }\\right] W, \\label{eq110}%\n\\end{equation}\nwhere\n\\begin{equation}\n\\phi\\left( \\rho\\right) =-\\frac{a\\rho}{4-\\zeta^{2}}, \\label{eq111}%\n\\end{equation}\nand\n\\begin{equation}\n\\chi\\left( \\rho\\right) =\\frac{\\rho^{2}\\left( {4m^{2}-1}\\right) }{\\left(\n{2-\\rho^{2}}\\right) ^{2}}+\\frac{7\\rho^{2}-40}{4\\left( {4-\\rho^{2}}\\right)\n^{2}}+\\frac{4m^{2}}{\\left( {4-\\rho^{2}}\\right) }. \\label{eq112}%\n\\end{equation}\nWe remark that $\\chi\\left( \\rho\\right) ={O}\\left( 1\\right) _{\\,}%\n$as $\\gamma\\rightarrow\\infty$, and this function is analytic at $\\rho=0$\n($x=0$), but is not analytic at $\\rho=\\sqrt{2}\\ $(${x=1}$).\n\nOur approximants are again the parabolic cylinder functions $U\\left(\n{-{\\frac{1}{2}}a,\\rho\\sqrt{2\\gamma}}\\right) $ and $\\bar{{U}}\\left(\n{-{\\frac{1}{2}}a,\\rho\\sqrt{2\\gamma}}\\right) $ (c.f. (\\ref{eq71}) and\n(\\ref{eq72})). In this form they are solutions of\n\\begin{equation}\n\\frac{d^{2}W}{d\\rho^{2}}=\\left[ {\\gamma^{2}\\rho^{2}-\\gamma a}\\right] W.\n\\label{eq113}%\n\\end{equation}\nOn comparing this equation with (\\ref{eq110}) we note the extra\n\\textquotedblleft large\\textquotedblright\\ term $\\gamma\\zeta\\phi\\left(\n\\rho\\right) $. 
On account of this discrepancy we perturb the independent\nvariable, thus taking as approximants\n\\begin{equation}\nU_{1}=\\left\\{ {1+\\gamma^{-1}{\\Phi}^{\\prime}\\left( \\rho\\right) }\\right\\}\n^{-1\/2}U\\left( {-{\\tfrac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}}\\right) ,\n\\label{eq114}%\n\\end{equation}\nand\n\\begin{equation}\nU_{2}=\\left\\{ {1+\\gamma^{-1}{\\Phi}^{\\prime}\\left( \\rho\\right) }\\right\\}\n^{-1\/2}\\bar{{U}}\\left( {-{\\tfrac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}}\\right)\n, \\label{eq115}%\n\\end{equation}\nwhere\n\\begin{equation}\n\\hat{{\\rho}}=\\rho+\\gamma^{-1}\\Phi\\left( \\rho\\right) , \\label{eq116}%\n\\end{equation}\nin which\n\\begin{equation}\n\\Phi\\left( \\rho\\right) =\\frac{1}{2\\rho}\\int_{0}^{\\rho}{\\phi\\left( v\\right)\ndv}=\\frac{a\\ln\\left( {1-{\\frac{1}{4}}\\rho^{2}}\\right) }{4\\rho}.\n\\label{eq117}%\n\\end{equation}\nIn [10] it is shown that $U_{j}$ satisfy the differential equation\n\\begin{equation}\n\\frac{d^{2}U}{d\\rho^{2}}=\\left\\{ {\\gamma^{2}\\rho^{2}-\\gamma a+\\gamma\\rho\n\\phi\\left( \\rho\\right) +g\\left( {\\gamma,\\rho}\\right) }\\right\\} U,\n\\label{eq118}%\n\\end{equation}\nwhere $g\\left( {\\gamma,\\rho}\\right) ={O}\\left( 1\\right) _{\\,}$as\n$\\gamma\\rightarrow\\infty$, uniformly for $\\rho\\in\\left[ {0,\\sqrt{2}-\\delta\n}\\right] $. Thus (\\ref{eq118}) is the appropriate comparison equation to\n(\\ref{eq110}).\n\n\n\nFollowing [10] we then define\n\\begin{equation}\n\\hat{{w}}_{j}\\left( {\\gamma,\\rho}\\right) =U_{j}\\left( {\\gamma,\\rho}\\right)\n+\\hat{{\\varepsilon}}_{j}\\left( {\\gamma,\\rho}\\right) \\quad\\left(\n{j=1,2}\\right) , \\label{eq119}%\n\\end{equation}\nas exact solutions of (\\ref{eq110}). Explicit error bounds are furnished in\n[10], and from these it follows that\n\\begin{equation}\n\\hat{{\\varepsilon}}_{1}\\left( {\\gamma,\\rho}\\right) ={O}\\left(\n{\\gamma^{-1}\\ln\\left( \\gamma\\right) }\\right) {\\operatorname{env}}U\\left(\n{-{\\tfrac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}}\\right) , \\label{eq120}%\n\\end{equation}\nuniformly for $0\\leq x\\leq1-\\delta_{0}$, and similarly for $\\hat{{\\varepsilon\n}}_{2}\\left( {\\gamma,\\rho}\\right) $.\n\nLet us assume that $\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) $\n(and hence $m+n)$ is even. Similarly to (\\ref{eq84}) we write\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\rho^{1\/2}%\nx^{-1\/2}\\left( {1-x^{2}}\\right) ^{-1\/4}\\\\\n\\times\\left[ {\\hat{{d}}_{n}^{m}\\left( \\gamma\\right) \\hat{{w}}_{1}\\left(\n{\\gamma,\\rho}\\right) +\\hat{{e}}_{n}^{m}\\left( \\gamma\\right) \\left\\{\n{\\hat{{w}}_{2}\\left( {\\gamma,\\rho}\\right) -\\hat{{w}}_{4}\\left( {\\gamma\n,\\rho}\\right) }\\right\\} }\\right] ,\n\\end{array}\n\\label{eq121}%\n\\end{equation}\nwhere $\\hat{{w}}_{4}\\left( {\\gamma,\\rho}\\right) $ is the solution (involving\n$\\bar{{U}})$ given by eq. (110) of [10]. By matching at $x=\\rho=0$ we find\n\\begin{equation}\n\\hat{{d}}_{n}^{m}\\left( \\gamma\\right) =\\frac{Ps_{n}^{m}\\left( {0,\\gamma\n^{2}}\\right) }{\\hat{{w}}_{1}\\left( {\\gamma,0}\\right) }. 
\\label{eq122}%\n\\end{equation}\nAnalogously to the proof of (\\ref{eq95}) it can be shown that\n\\begin{equation}\n\\hat{{e}}_{n}^{m}\\left( \\gamma\\right) \\left\\{ {\\hat{{w}}_{2}\\left(\n{\\gamma,\\rho}\\right) -\\hat{{w}}_{4}\\left( {\\gamma,\\rho}\\right) }\\right\\}\n=o\\left( 1\\right) \\hat{{A}}{\\operatorname{env}}U\\left( {-{\\tfrac{1}{2}%\n}a,\\hat{{\\rho}}\\sqrt{2\\gamma}}\\right) , \\label{eq123}%\n\\end{equation}\nwhere $o\\left( 1\\right) $ is exponentially small for $0\\leq x\\leq\n1-\\delta_{0}$ as $\\gamma\\rightarrow\\infty$. Consequently, we arrive at our\ndesired result\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\dfrac\n{\\operatorname{Ps}_{n}^{m}\\left( {0,\\gamma^{2}}\\right) }{U\\left(\n{-{\\frac{1}{2}}a,0}\\right) }\\left( {\\dfrac{\\rho}{x}}\\right) ^{1\/2}\\left(\n{1-x^{2}}\\right) ^{-1\/4}\\\\\n\\times\\left[ {U\\left( {-{\\frac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}}\\right)\n+{O}\\left( {\\gamma^{-1}\\ln\\left( \\gamma\\right) }\\right)\n\\operatorname{env}U\\left( {-{\\frac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}%\n}\\right) }\\right] ,\n\\end{array}\n\\label{eq124}%\n\\end{equation}\nas $\\gamma\\rightarrow\\infty$, uniformly for $0\\leq x\\leq1-\\delta_{0}$.\n\nFor the case $\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) $ being\nodd we likewise obtain, under the same conditions,\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\n\\operatorname{Ps}_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\dfrac\n{\\operatorname{Ps}_{n}^{m}{}^{\\prime}\\left( {0,\\gamma^{2}}\\right) }%\n{{U}^{\\prime}\\left( {-{\\frac{1}{2}}a,0}\\right) }\\left( {\\dfrac{\\rho\n}{2\\gamma x}}\\right) ^{1\/2}\\left( {1-x^{2}}\\right) ^{-1\/4}\\\\\n\\times\\left[ {U\\left( {-{\\frac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}}\\right)\n+{O}\\left( {\\gamma^{-1}\\ln\\left( \\gamma\\right) }\\right)\n\\operatorname{env}U\\left( {-{\\frac{1}{2}}a,\\hat{{\\rho}}\\sqrt{2\\gamma}%\n}\\right) }\\right] .\n\\end{array}\n\\label{eq125}%\n\\end{equation}\n\n\n\\section{Summary}\n\n\n\nFor reference we collect the principal results of the paper. All results are\nuniformly valid for $\\gamma\\to\\infty$, $m$ and $n$ integers, $m$ bounded, and\n$n$ satisfying $0\\le m\\le n\\le2\\pi^{-1}\\gamma\\left( {1-\\delta} \\right) $\nwhere $\\delta\\in\\left( {0,1} \\right) $ is fixed.\n\nWe define $\\sigma=\\sqrt{1+\\gamma^{-2}\\lambda_{n}^{m}\\left( {\\gamma^{2}%\n}\\right) }$ and assume $0\\leq\\sigma\\leq\\sigma_{0}<1$ for an arbitrary fixed\npositive $\\sigma_{0}$. We further define variables $\\xi=\\xi\\left( x\\right) $\nand $\\zeta=\\zeta\\left( x\\right) $ by\n\\begin{equation}\n\\xi=\\int_{1}^{x}{\\left( {\\frac{t^{2}-\\sigma^{2}}{t^{2}-1}}\\right) ^{1\/2}dt},\n\\label{eq126}%\n\\end{equation}\nand\n\\begin{equation}\n\\int_{\\alpha}^{\\zeta}{\\left\\vert {\\tau^{2}-\\alpha^{2}}\\right\\vert ^{1\/2}d\\tau\n}=\\int_{\\sigma}^{x}{\\left( {\\frac{\\left\\vert {t^{2}-\\sigma^{2}}\\right\\vert\n}{1-t^{2}}}\\right) ^{1\/2}dt}, \\label{eq127}%\n\\end{equation}\nwhere\n\\begin{equation}\n\\alpha=2\\left\\{ {\\frac{1}{\\pi}\\int_{0}^{\\sigma}{\\left( {\\frac{\\sigma\n^{2}-t^{2}}{1-t^{2}}}\\right) ^{1\/2}dt}}\\right\\} ^{1\/2}. 
\\label{eq128}%\n\\end{equation}\nThen, using the definition above for $\\sigma$, a uniform asymptotic\nrelationship between $\\lambda_{n}^{m}\\left( {\\gamma^{2}}\\right) $ and the\nparameters $m$, $n$ and $\\gamma$ is given implicitly by the relation\n\\begin{equation}\n\\gamma\\int_{0}^{\\sigma}{\\left( {\\frac{\\sigma^{2}-t^{2}}{1-t^{2}}}\\right)\n^{1\/2}dt}=\\frac{1}{2}\\left( {n-m+\\frac{1}{2}}\\right) \\pi+{O}\\left(\n{\\frac{1}{\\gamma}}\\right) . \\label{eq129}%\n\\end{equation}\n\n\nThe following approximation holds for the radial PSWF\n\\begin{equation}%\n\\begin{array}\n[c]{l}%\nPs_{n}^{m}\\left( {x,\\gamma^{2}}\\right) =\\left\\{ {\\dfrac{\\left(\n{n+m}\\right) !q_{n}^{m}\\left( \\gamma\\right) }{\\left( {2n+1}\\right)\n\\left( {n-m}\\right) !p_{n}^{m}\\left( \\gamma\\right) }}\\right\\}\n^{1\/2}\\left\\{ {\\left( {x^{2}-1}\\right) \\left( {x^{2}-\\sigma^{2}}\\right)\n}\\right\\} ^{-1\/4}\\\\\n\\times\\xi^{1\/2}\\left[ {J_{m}\\left( {\\gamma\\xi}\\right) +{O}\\left(\n{\\gamma^{-1}}\\right) \\operatorname{env}J_{m}\\left( {\\gamma\\xi}\\right)\n}\\right] ,\n\\end{array}\n\\label{eq130}%\n\\end{equation}\nthis being uniformly valid for $1<x<\\infty$.\n\nA stimulus class $A$ is specified by two probabilities $r > q$, and a set of $k$ neurons $\\stimcore{A}$ in the sensory area. To generate a stimulus $x \\in \\{0,1\\}^n$ in the class $A$, each neuron $i \\in \\stimcore{A}$ is chosen with probability $r$, while for each $i \\not\\in \\stimcore{A}$, the probability of choosing neuron $i$ is $qk\/n$. It follows immediately that, in expectation, an $r$ fraction of the neurons in the stimulus core are set to $1$ and the number of neurons outside the core that are set to $1$ is also $O(k)$. 
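
As a concrete illustration of this generative model, the following sketch
(with hypothetical parameter values) draws a single stimulus from a class in
exactly the way just described and reports how many core and non-core neurons
fire.
\\begin{verbatim}
import numpy as np

def sample_stimulus(core, n, r, q, k, rng):
    # neurons in the class core fire with probability r;
    # every other neuron fires with probability q*k/n
    x = (rng.random(n) < q * k / n).astype(int)
    x[core] = (rng.random(len(core)) < r).astype(int)
    return x

n, k, r, q = 10000, 100, 0.9, 0.5            # hypothetical parameters
rng = np.random.default_rng(0)
core = rng.choice(n, size=k, replace=False)  # the k core neurons of the class
x = sample_stimulus(core, n, r, q, k, rng)
print(x[core].sum(), "core neurons firing (expected r*k =", r * k, ")")
print(x.sum() - x[core].sum(),
      "non-core neurons firing (expected about q*k =", q * k, ")")
\\end{verbatim}
In expectation roughly $rk$ core neurons and $qk$ non-core neurons fire, so a
typical stimulus has $O(k)$ active neurons, as noted above.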
To interact with the brain area, we provide a few basic commands: \n\n\\begin{itemize}\n \\item $\\texttt{input}(\\mathcal B, x)$ updates the current value of the sensory area activations (the input) to $x$\n \\item $\\texttt{step}(\\mathcal B)$ progresses to the next time step: $\\mathcal B$ computes the new value $y'$ of $y$ based on the current value of $(x, y, A, W)$, and updates $A$ and $W$ using Hebbian plasticity.\n \\item $\\texttt{read}(\\mathcal B)$ returns $y$, i.e. the indicator vector of the neurons in the learning area which are currently firing\n \\item $\\texttt{inhibit}(\\mathcal B)$ forces $y$ to zero, silencing all activity in the learning area\n\\end{itemize}\n\\fi\n\n\\begin{algorithm} \\label{alg:mechanism}\n\\caption{The learning mechanism. ($B$ denotes the brain area.)}\n\\KwIn{a set of stimulus classes $A_1, \\ldots, A_c$; $T \\ge 1$}\n\\KwOut{A set of assemblies $y_1, \\ldots, y_c$ in the brain area encoding these classes}\n\\ForEach{ stimulus class $i$}{\n inh$(B) \\gets 0$\\;\\\\\n \\ForEach{ time step $1 \\le t \\le T$}{\n Sample $x \\sim A_i$ and fire $x$\\; \n }\n $y_i \\gets \\texttt{read}(B)$\\;\\\\\n inh$(B) \\gets 1$\\;\n \n }\n\\end{algorithm}\n\nThat is, we sample $T$ stimuli $x$ from each class, fire each $x$ to cause synaptic input in the brain area, and after the $T$th sample has fired we record the assembly which has been formed in the brain area. This is the representation for this class.\n\n\\subsection*{Related work}\nThere are numerous learning models in the neuroscience literature. \nIn a variation of the model we consider here, \\citet{RangamaniGandhi2020} have considered supervised learning of Boolean functions using assemblies of neurons, by setting up separate brain areas for each label value.\nAmongst other systems with rigorous guarantees, assemblies are superficially similar to the ``items'' of Valiant's neuroidal model \\citep{Valiant94}, in which supervised learning experiments have been conducted \\citep{Valiant00, FeldmanV09}, where an output neuron is clamped to the correct label value, while the network weights are updated under the model. The neuroidal model is considerably more powerful than ours, allowing for arbitrary state changes of neurons and synapses; in contrast, our assemblies rely on only two biologically sound mechanisms, plasticity and inhibition. \n\nHopfield nets \\citep{hopfield1982neural} are recurrent networks of neurons with symmetric connection weights which will converge to a memorized state from a sufficiently similar one, when properly trained using a local and incremental update rule. In contrast, the memorized states our model produces (which we call assemblies) emerge through plasticity and randomization from the structure of a random directed network, whose weights are asymmetric and nonnegative, and in which inhibition --- not the sign of total input --- selects which neurons will fire. \n\nStronger learning mechanisms have recently been proposed. Inspired by the success of deep learning, a large body of work has shown that cleverly laid-out microcircuits of neurons can approximate backpropagation to perform gradient descent \\citep{Lillicrap2016RandomSF, sacramento2017dendritic, guerguiev2017towards, sacramento2018dendritic, whittington2019theories, lillicrap2020}. These models rely crucially on novel types of neural circuits which, although biologically possible, are not presently known or hypothesized in neurobiology, nor are they proposed as a theory of the way the brain works. 
These models are capable of matching the performance of deep networks on many tasks that are more complex than the simple, classical learning problems we consider here. The difference between this work and ours is, again, that here we are showing that learning arises naturally from well-understood mechanisms in the brain, in the context of the assembly calculus.\n\n\\section{Results} \\label{results}\nA small number of stimuli sampled from an input distribution are presented sequentially at the sensory area. \nThe only form of supervision required is that all training samples from a given class are presented consecutively.\nPlasticity and inhibition alone ensure that, in response to this activation, an assembly will be formed for each class, and that this same assembly will be recalled at testing upon presentation of other samples from the same distribution. In other words, learning happens. \nAnd in fact, despite all these limitations, we show that the device is an efficient learner of interesting concept classes.\n\nOur first theorem is about the creation of an assembly in response to inputs from a stimulus class. This is a generalization of a theorem from \\citet{papadimitriou2019random}, where the input stimulus was held constant; here the input is a stream of random samples from the same stimulus class. Like all our results, it is a statement holding with high probability (WHP), where the underlying random event is the random graph and the random samples. When sampled stimuli fire, the assembly in the brain area changes. The neurons participating in the current assembly (those whose synaptic input from the previous step is among the $k$ highest) are called the current {\\em winners.}\nA {\\em first-time winner} is a current winner that participated in no previous assembly (for the current stimulus class).\n\n\\begin{theorem}[Creation] \\label{theorem:creation} \nConsider a stimulus class $A$ projected to a brain area. Assume that \n\\[\\beta \\geq \\beta_0 = \\frac{1}{r^2}\\frac{\\left(\\sqrt{2} - r^2\\right)\\sqrt{2\\ln\\left(\\frac{n}{k}\\right)} + \\sqrt{6}}{\\sqrt{kp} + \\sqrt{2\\ln\\left(\\frac{n}{k}\\right)}}\\] Then WHP no first-time winners will enter the cap after $O(\\log k)$ rounds, and moreover the total number of winners $\\support{A}$ can be bounded as \\[|\\support{A}| \\leq \\frac{k}{1-\\exp(-(\\frac{\\beta}{\\beta_0})^2)} \\leq k + O\\left(\\frac{\\log n}{r^3p\\beta^2}\\right)\\] \\end{theorem}\n\\begin{remark}\nThe theorem implies that for a small constant $c$, it suffices to have plasticity parameter \n\\[\n\\beta \\ge \\frac{1}{r^2}\\frac{c}{\\sqrt{kp\/(2\\ln(n\/k))}+1}.\n\\]\n\\end{remark}\n\n\\noindent Our second theorem is about {\\em recall} for a single assembly, when a new stimulus from the same class is presented. We assume that examples from an assembly class $A$ have been presented, and a response assembly $A^*$ encoding this class has been created, by the previous theorem. \n\n\\begin{theorem}[Recall] \\label{theorem:recall}\nWHP over the stimulus class, the set $C_1$ firing in response to a test stimulus from the class $A$ will overlap $\\assmcore{A}$ by a fraction of at least $1 - e^{-kpr}$, i.e. 
\\[\\frac{|C_1 \\setminus \\assmcore{A}|}{k} \\leq e^{-kpr}\\]\n\\end{theorem}\n\\noindent The proof entails showing that the average weight of incoming connections to a neuron in $\\assmcore{A}$ from neurons in $\\stimcore{A}$ is at least \\[1 + \\frac{1}{\\sqrt{r}}\\left(\\sqrt{2} + \\sqrt{\\frac{2}{kpr}\\ln\\left(\\frac{n}{k}\\right) + 2}\\right)\\]\n\n\\noindent Our third theorem is about the creation of a second assembly corresponding to a second stimulus class. This can easily be extended to many classes and assemblies. As in the previous theorem, we assume that $O(\\log k)$ examples from assembly class $A$ have been presented, and $\\support{A}$ has been created. Then we introduce $B$, a second stimulus class, with $|\\stimcore{A} \\cap \\stimcore{B}| = \\alpha k$, and present $O(\\log k)$ samples to induce a series of caps, $B_1, B_2, \\ldots$, with $B^*$ as their union.\n\n\\begin{theorem}[Multiple Assemblies] \\label{theorem:multiple}\nThe total support of $B^*$ can be bounded WHP as \n\\[|B^*| \\leq \\frac{k}{1-\\exp(-(\\frac{\\beta}{\\beta_0})^2)} \\leq k + O\\left(\\frac{\\log n}{r^3p\\beta^2}\\right)\\]\nMoreover, WHP, the overlap in the core sets $\\assmcore{A}$ and $\\assmcore{B}$ will preserve the overlap of the stimulus classes, so that $|\\assmcore{A} \\cap \\assmcore{B}| \\leq \\alpha k$.\n\\end{theorem}\nThis time the proof relies on the fact that the average weight of incoming connections to a neuron in $\\assmcore{A}$ is {\\em upper-bounded} by \\[\\gamma \\leq 1 + \\frac{\\sqrt{2\\ln\\left(\\frac{n}{k}\\right)} - \\sqrt{2\\ln((1+r)\/r\\alpha)}}{\\alpha r \\sqrt{kp}}\\] \n\n\\noindent Our fourth theorem is about classification after the creation of multiple assemblies, and shows that random stimuli from any class are mapped to their corresponding assembly. We state it here for two stimuli classes, but again it is extended to several. We assume that stimulus classes $A$ and $B$ overlap in their core sets by a fraction of $\\alpha$, and that they have been projected to form a distribution of assemblies $\\assmcore{A}$ and $\\assmcore{B}$, respectively.\n\n\\begin{theorem}[Classification] \\label{theorem:classify}\nIf a random stimulus chosen from a particular class (WLOG, say $B$) fires to cause a set $C_1$ of learning area neurons to fire, then WHP over the stimulus class the fraction of neurons in the cap $C_1$ and in $\\assmcore{B}$ will be at least \n\\[\\frac{|C_1 \\cap \\assmcore{B}|}{k} \\geq 1 - 2\\exp\\left(-\\frac{1}{2}(\\gamma \\alpha - 1)^2 kpr \\right)\\]\nwhere $\\gamma$ is a lower bound on the average weight of incoming connections to a neuron in $\\assmcore{A}$ (resp. $\\assmcore{B}$) from neurons in $\\stimcore{A}$ (resp. $\\stimcore{B}$).\n\\end{theorem}\n\n\n\n\\noindent Taken together, the above results guarantee that this mechanism can learn to classify well-separated distributions, where each distribution has a constant fraction of its nonzero coordinates in a subset of $k$ input coordinates. The process is {\\em naturally interpretable:} an assembly is created for each distribution, so that random stimuli are mapped to their corresponding assemblies, and the assemblies for different distributions overlap in no more than the core subsets of their corresponding distributions. \n\n\nFinally, we consider the setting where the labeling function is a linear threshold function, parameterized by an arbitrary nonnegative vector $v$ and margin $\\Delta$. We will create a single assembly to represent examples on one side of the threshold, i.e. 
those for which $v \cdot X \ge \|v\|_1 k \/ n$. We let $\mathcal D_+$ denote the distribution of these examples, where each coordinate is an independent Bernoulli variable with mean $\mathbb{E}(X_i) = k\/n + \Delta v_i$, and define $\mathcal D_-$ to be the distribution of negative examples, where each coordinate is again an independent Bernoulli variable yet now all identically distributed with mean $k\/n$. (Note that the support of the positive and negative distributions is the same; there is a small probability of drawing a positive example from the negative distribution, or vice versa.) To serve as a classifier, a fraction $1- \epsilon_+$ of neurons in the assembly must be guaranteed to fire for a positive example, and a fraction $\epsilon_- < 1 - \epsilon_+$ guaranteed \emph{not} to fire for a negative one. A test example is then classified as positive if at least a $1 - \epsilon$ fraction of neurons in the assembly fire (for $\epsilon \in [\epsilon_-, 1 - \epsilon_+]$), and negative otherwise. The last theorem shows that this can in fact be done with high probability, as long as the normal vector $v$ of the linear threshold is neither too dense nor too sparse. Additionally, we assume synapses are subject to homeostasis in between training and evaluation; that is, all of the incoming weights to a neuron are normalized to sum to 1.\n\begin{theorem}[Learning Linear Thresholds]\label{theorem:halfspace}\nLet $v$ be a nonnegative vector normalized to be of unit Euclidean length ($\|v\|_2 = 1$). Assume that $\Omega(k) = \|v\|_1 \le \sqrt{n}\/2$ and \n\[\Delta^2\beta \ge \sqrt{\frac{2k}{p}}\left(\sqrt{2\ln(n\/k)+2} + 1\right).\] \nThen,\nsequentially presenting $\Omega(\log k)$ samples drawn at random from $\mathcal D^+$ forms an assembly $\assmcore{A}$ that correctly separates $\mathcal D^+$ from $\mathcal D^-$: with probability $1-o(1)$ a randomly drawn example from $\mathcal D^+$ will result in a cap which overlaps at least $3k\/4$ neurons in $\assmcore{A}$, and an example from $\mathcal D^-$ will create a cap which overlaps no more than $k\/4$ neurons in $\assmcore{A}$.\n\end{theorem}\n\n\begin{remark}\nThe bound on $\Delta^2 \beta$ leads to two regimes of particular interest: In the first, \[\beta \ge \frac{\sqrt{2\ln(n\/k) + 2} + 1}{\sqrt{kp}}\] and $\Delta \ge \sqrt{k}$, which is similar to the plasticity parameter required for a fixed stimulus \citep{papadimitriou2019random} or stimulus classes; in the second, $\beta$ is a constant, and \[\Delta \ge \left(\frac{2k}{\beta^2 p}\right)^{1\/4}\left(\sqrt{2\ln(n\/k)+2} +1\right)^{1\/2}.\]\n\end{remark}\n\n\begin{remark}\nWe can ensure that the numbers of neurons outside of $\assmcore{A}$ for a positive example and in $\assmcore{A}$ for a negative example are both $o(k)$ with small overhead\footnote{i.e. increasing the plasticity constant $\beta$ by a factor of $1 + o(1)$}, so that plasticity can be active during the classification phase. 
\n\\end{remark}\n\nSince our focus in this paper is on highlighting the brain-like aspects of this learning mechanism, we emphasize stimulus classes as a case of particular interest, as they are a probabilistic generalization of the single stimuli considered in \\citet{papadimitriou2019random}.\nLinear threshold functions are an equally natural way to generalize a single $k$-sparse stimulus, say $v$; all the 0\/1 points on the positive side of the threshold $v^\\top x \\ge \\alpha k$ have at least an $\\alpha$ fraction of the $k$ neurons of the stimulus active.\n\n\nFinally, reading the output of the device by the Assembly Calculus is simple: Add a {\\em readout area} to the two areas so far (stimulus and learning), and project to this area one of the assemblies formed in the learning area for each stimulus class. The assembly in the learning area that fires in response to a test sample will cause the assembly in the readout area corresponding to the class to fire, and this can be sensed through the {\\tt readout} operation of the AC.\n\n\\paragraph{Proof overview.}\nThe proofs of all five theorems can be found in the Appendix. The proofs hinge on showing that large numbers of certain neurons of interest will be included in the cap on a particular round --- or excluded from it. More specifically: \\begin{itemize}\n \\item To create an assembly, the sequence of caps should converge to the assembly's core set. In other words, WHP an increasing fraction of the neurons selected by the cap in a particular step will also be selected at the next one.\n \\item For recall, a large fraction of the assembly should fire (i.e. be included in the cap) when presented with an example from the class.\n \\item To differentiate stimuli (i.e. classify), we need to ensure that a large fraction of the correct assembly will fire, while no more than a small fraction of the other assemblies do.\n\\end{itemize} Following \\citet{papadimitriou2019random}, we observe that if the probability of a neuron having input at least $t$ is no more than $\\epsilon$, then no more than an $\\epsilon$ fraction of the cohort of neurons will have input exceeding $t$ (with constant probability). By approximating the total input to a neuron as Gaussian and using well-known bounds on Gaussian tail probabilities, we can solve for $t$, which gives an explicit input threshold neurons must surpass to make a particular cap. Then, we argue that the advantage conferred by plasticity, combined with the similarity of examples from the same class, gives the neurons of interest enough of an advantage that the input to all but a small constant fraction will exceed the threshold.\n\n\n\\section{Experiments}\nThe learning algorithm has been run on both synthetic and real-world datasets, as illustrated in the figures below. Code for experiments is available at \n\\url{https:\/\/github.com\/mdabagia\/learning-with-assemblies}.\n\nBeyond the basic method of presenting a few examples from the same class and allowing plasticity to alter synaptic weights, the training procedure is slightly different for each of the concept classes (stimulus classes, linearly-separated, and MNIST digits). In each case, we renormalize the incoming weights of each neuron to sum to one after concluding the presentation of each class, and classification is performed on top of the learned assemblies by predicting the class corresponding to the assembly with the most neurons on. 
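\n\nTo make this training loop concrete, the following is a minimal NumPy-style sketch of the kind of simulation we have in mind, using the parameter values quoted in the figures; it is an illustration under our own naming choices (\\texttt{k\\_cap}, \\texttt{sample\\_stimulus}, \\texttt{train\\_class}), not the exact code of the released repository, and it normalizes both the stimulus and the recurrent incoming weights as one reading of the homeostasis step. The per-class differences in how the learned assemblies are read out are listed right after the sketch.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn, k, p, beta = 1000, 100, 0.1, 0.1\nA = (rng.random((n, n)) < p) * 1.0   # stimulus -> area synapses\nW = (rng.random((n, n)) < p) * 1.0   # recurrent synapses within the area\nnp.fill_diagonal(W, 0.0)             # no self-loops\n\ndef k_cap(total_input):\n    # indicator of the k neurons receiving the largest synaptic input\n    y = np.zeros(n)\n    y[np.argsort(total_input)[-k:]] = 1.0\n    return y\n\ndef sample_stimulus(core, r=0.9, q=0.1):\n    # core neurons fire with probability r, all others with probability q*k\/n\n    prob = np.full(n, q * k \/ n)\n    prob[core] = r\n    return (rng.random(n) < prob) * 1.0\n\ndef train_class(core, T=5):\n    # present T samples from one class; the area starts at rest (inhibited)\n    global A, W\n    y = np.zeros(n)\n    for _ in range(T):\n        x = sample_stimulus(core)\n        y_new = k_cap(x @ A + y @ W)\n        A += beta * A * np.outer(x, y_new)   # Hebbian: used synapses scaled by 1+beta\n        W += beta * W * np.outer(y, y_new)\n        y = y_new\n    A \/= np.maximum(A.sum(axis=0, keepdims=True), 1e-12)   # homeostasis\n    W \/= np.maximum(W.sum(axis=0, keepdims=True), 1e-12)\n    return y   # read(): the assembly recorded for this class\n\ncores = [rng.choice(n, size=k, replace=False) for _ in range(2)]\nassemblies = [train_class(c) for c in cores]\ntest_cap = k_cap(sample_stimulus(cores[0]) @ A)\nlabel = int(np.argmax([test_cap @ a for a in assemblies]))\n\\end{verbatim}\n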
\\begin{itemize}\n \\item For stimulus classes, we estimate the assembly for each class as composed of the neurons which fired in response to the last training example, which in practice are the same as those most likely to fire for a random test example. \n \\item For a linear threshold, we only present positive examples, and thus only form an assembly for one class. As with stimulus classes, the neurons in the assembly can be estimated by the last training cap or by averaging over test examples. We classify by comparing against a fixed threshold, generally half the cap size.\n \n\\end{itemize}\nAdditionally, it is important to set the plasticity parameter ($\\beta$) large enough that assemblies are reliably formed. We had success with $\\beta = 0.1$ for stimulus classes and $\\beta = 1.0$ for linear thresholds.\n\n\nIn Figure \\ref{figure:experiments} (a) \\& (b), we demonstrate learning of two stimulus classes, while in\nFigure \\ref{figure:experiments} (c) \\& (d), we demonstrate the result of learning a well-separated linear threshold function with assemblies. Both had perfect accuracy. Additionally, assemblies readily generalize to a larger number of classes (see Figure \\ref{figure:fourclasses} in the appendix). We also recorded sharp threshold transitions in classification performance as the key parameters of the model are varied (see Figures \\ref{fig:accuracies} \\& \\ref{fig:transition}).\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.7\\linewidth]{images\/overlap.png}\n \\includegraphics[width=0.7\\linewidth]{images\/overlap_hs.png}\n \\caption{Assemblies learned for various concept classes. On the top two lines, we show assemblies learned for stimulus classes, and on the bottom two lines, for a linear threshold with margin. In (a) \\& (c) we exhibit the distribution of firing probabilities over neurons of the learning area. In (b) \\& (d) we show the average overlap of different input samples (red square) and the overlaps of the corresponding representations in the assemblies (blue square). Using a simple sum readout over assembly neurons, both stimulus classes and linear thresholds are classified with 100\\% accuracy. Here, $n=10^3, k=10^2, p=0.1, r=0.9, q=0.1, \\Delta = 1.0$, with 5 samples per class, and $\\beta = 0.01$ (stimulus classes) and $\\beta=1.0$ (linear threshold).}\n \\label{figure:experiments}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.45\\linewidth]{images\/stimulus_acc.png}\n \\includegraphics[width=0.45\\linewidth]{images\/halfspace_acc.png}\n \\caption{Mean (dark line) and range (shaded area) of classification accuracy for two stimulus classes (left) and a fixed linear threshold (right) over 20 trials, as the classes become more separable. For stimulus classes, we vary the firing probability of neurons in the stimulus core while fixing the probability for the rest at $k\/n$, while for the linear threshold, we vary the margin. \n \n \n For both we used 5 training examples with $n=1000, k=100, p=0.1$, and $\\beta = 0.1$ (stimulus classes), $\\beta = 1.0$ (linear threshold).}\n \\label{fig:accuracies}\n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.45\\linewidth]{images\/acc_vs_n.png}\n \\includegraphics[width=0.45\\linewidth]{images\/acc_vs_k.png}\n \\caption{Mean (dark line) and range (shaded area) of classification accuracy of two stimulus classes for various values of the number of neurons ($n$, left) and the cap size ($k$, right). 
For variable $n$, we let $k = n \/ 10$; for variable $k$, we fix $n = 1000$. Other parameters are fixed, as $p = 0.1, r = 0.9, q = k\/n$, and $\\beta = 0.1$.}\n \\label{fig:transition}\n\\end{figure}\n\nThere are a number of possible extensions to the simplest strategy, where within a single brain region we learn an assembly for each concept class and classify based on which assembly is most activated in response to an example. \nWe compared the performance of various classification models on MNIST as the number of features increases. The high-level model is to extract a certain number of features using one of the five different methods, and then find the best linear classifier (of the training data) on these features to measure performance (on the test data). The five different feature extractors are:\n\\begin{itemize}\n \\item Linear features. Each feature's weights are sampled i.i.d. from a Gaussian with standard deviation $0.1$.\n \\item Nonlinear features. Each feature is a binary neuron: it has $784$ i.i.d. Bernoulli$(0.2)$ weights, and `fires' (has output $1$, otherwise $0$) if its total input exceeds the expected input ($70 \\times 0.2$).\n \\item Large area assembly features. In a single brain area of size $m$ with cap size $m \/ 10$, we attempt to form an assembly for each class. The area sees a sequence of $5$ examples from each class, with homeostasis applied after each class. Weights are updated according to Hebbian plasticity with $\\beta = 1.0$. Additionally, we apply a negative bias: A neuron which has fired for a given class is heavily penalized against firing for subsequent classes.\n \\item 'Random' assembly features. For a total of $m$ features, we create $m \/ 100$ different areas of $100$ neurons each, with cap size $10$. We then repeat the large area training procedure above in each area, with the order of the presentation of classes randomized for each area.\n \\item 'Split' assembly features: For a total of $m$ features, we create $10$ different areas of $m \/ 10$ neurons each, with cap size $m \/ 100$. Area $i$ sees a sequence of $5$ examples from class $i$. Weights are updated according to Hebbian plasticity, and homeostasis is applied after training.\n\\end{itemize}\nAfter extracting features, we train the linear classification layer to minimize cross-entropy loss on the standard MNIST training set ($60000$ images) and finally test on the full test set ($10000$ images). \n\n\\begin{figure}[b!]\\label{fig:mnistcompare}\n \\centering\n \\includegraphics[width=0.7\\linewidth]{images\/acc_v_neurons.png}\n \\caption{MNIST test accuracy as the number of features increases, for various classification models. 'Split' assembly features, which forms an assembly for class $i$ in area $i$, achieves the highest accuracy with the largest number of features.}\n\\end{figure}\n\nThe results as the total number of features ranges from $1000$ to $10000$ is shown in Fig. \\ref{fig:mnistcompare}. 'Split' assembly features are ultimately the best of the five, with 'split' features achieving $96\\%$ accuracy with $10000$ features. However, nonlinear features outperform 'split' and large-area features and match 'random' assembly features when the number of features is less than $8000$. 
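\n\nTo make the nonlinear baseline above concrete, it can be sketched in a few lines; this is a schematic illustration with our own helper names, not the exact experimental code, and logistic regression is simply one convenient way to fit the cross-entropy-trained linear readout described above.\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\nrng = np.random.default_rng(0)\nm = 4000                                  # number of random binary features\nWf = (rng.random((784, m)) < 0.2) * 1.0   # i.i.d. Bernoulli(0.2) weights, drawn once\n\ndef binary_features(X):\n    # X: (num_images, 784) binarized images; a feature fires iff its total\n    # input exceeds the expected input 70 * 0.2 quoted above\n    return (X @ Wf > 70 * 0.2) * 1.0\n\ndef fit_readout(features, labels):\n    # linear classifier trained with cross-entropy on top of the features\n    return LogisticRegression(max_iter=1000).fit(features, labels)\n\n# usage: clf = fit_readout(binary_features(X_train), y_train)\n#        accuracy = clf.score(binary_features(X_test), y_test)\n\\end{verbatim}\nThe assembly-based variants replace \\texttt{binary\\_features} by the caps of the trained brain areas, with everything else unchanged.\n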
\nFor reference, the linear classifier gets to $89\\%$, while a two-layer neural network with width $800$ trained end-to-end gets to $98.4\\%$.\n\nGoing further, one could even create a hierarchy of brain areas, so that the areas in the first ``layer'' all project to a higher-level area, in hopes of forming assemblies for each digit in the higher-level area which are more robust. In this paper, our goal was to highlight the potential to form useful representations of a classification dataset using assemblies, and so we concentrated on a single layer of brain areas with a very simple classification layer on top. It will be interesting to explore what is possible with more complex architectures. \n\n\\section{Discussion}\nAssemblies are widely believed to be involved in cognitive phenomena, and the AC provides evidence of their computational aptitude. Here we have made the first steps towards understanding how {\\em learning} can happen in assemblies. Normally, an assembly is associated with a stimulus, such as Grandma. We have shown that this can be extended to {\\em a distribution over stimuli.} Furthermore, for a wide range of model parameters, distinct assemblies can be formed for multiple stimulus classes in a single brain area, so long as the classes are reasonably differentiated.\n\nA model of the brain at this level of abstraction should allow for the kind of classification that the brain does effortlessly --- e.g., the mechanism that enables us to understand that individual frames in a video of an object depict the same object. With this in mind, the learning algorithm we present is remarkably parsimonious: it generalizes from a handful of examples which are seen only once, and requires no outside control or supervision other than ensuring multiple samples from the same concept class are presented in succession (and this latter requirement could be relaxed in a more complex architecture which channels stimuli from different classes). Finally, even though our results are framed within the Assembly Calculus and the underlying brain model, we note that they have implications far beyond this realm. In particular, they suggest that \\emph{any} recurrent neural network, equipped with the mechanisms of plasticity and inhibition, will naturally form an assembly-like group of neurons to represent similar patterns of stimuli.\n\nBut of course, many questions remain. In this first step we considered a single brain area --- whereas it is known that assemblies draw their computational power from the interaction, through the AC, among many areas. \nWe believe that a more general architecture encompassing a hierarchy of interconnected brain areas, where the assemblies in one area act like stimulus classes for others, can succeed in learning more complex tasks --- and even within a single brain area improvements can result from optimizing the various parameters, something that we have not tried yet. \n\nIn another direction, here we only considered Hebbian plasticity, the simplest and most well-understood mechanism for synaptic changes. Evidence is mounting in experimental neuroscience that the range of plasticity mechanisms is far more diverse \\citep{magee2020synaptic}, and in fact it has been demonstrated recently \\citep{payeur2021burst} that more complex rules are sufficient to learn harder tasks. Which plasticity rules make learning by assemblies more powerful?\n\nWe showed that assemblies can learn nonnegative linear threshold functions with sufficiently large margins. 
Experimental results suggest that the requirement of nonnegativity is a limitation of our proof technique, as empirically assemblies readily learn arbitrary linear threshold functions (with margin). What other concept classes can assemblies provably learn? We know from support vector machines that linear threshold functions can be the basis of far more sophisticated learning when their input is pre-processed in specific ways, while the celebrated results of \\citet{rahimi2007random} demonstrated that certain families of random nonlinear features can approximate sophisticated kernels quite well. What would constitute {\\em a kernel} in the context of assemblies? The sensory areas of the cortex (of which the visual cortex is the best studied example) do pre-process sensory inputs extracting features such as edges, colors, and motions. Presumably learning by the non-sensory brain --- which is our focus here --- operates on the output of such pre-processing. We believe that studying the implementation of kernels in cortex is a very promising direction for discovering powerful learning mechanisms in the brain based on assemblies.\n\n\n\n\\acks{We thank Shivam Garg, Chris Jung, and Mirabel Reid for helpful discussions. MD is supported by an NSF Graduate Research Fellowship. SV is supported in part by NSF awards CCF-1909756, CCF-2007443 and CCF-2134105. CP is supported by NSF Awards CCF-1763970 and CCF-1910700, and by a research contract with Softbank.}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Local Description} \n\nAmong the singular Riemannian metrics on surfaces, the simplest ones are those with isolated singularities. Away from a discrete set, these metrics are then smooth (say of class $C^2$). We will make two additional hypotheses which are natural from a geometric point of view.\nThe first concerns the conformal structure. If $g$ is a Riemannian metric on a surface $S$ having an isolated singularity at $p$, and if $U$ is a neighborhood of $p$ homeomorphic to the disk, then $U' = U \\setminus \\{0\\}$ has a well-defined conformal structure (since the metric $g$ is smooth on $U'$, one may apply Korn-Lichtenstein's Theorem). From the classification of conformal structures on the annulus, we know that $U'$ is conformally equivalent to a standard annulus \n$$\n A_{\\rho} = \\{z \\in \\mathbb{C} \\mid \\rho < |z| < 1\\},\n$$\nfor some fixed parameter $\\rho \\in [0, 1)$.\nIn this paper, we will always assume $U'$ to be conformally equivalent to the punctured disk $A_0$. In other words, we are assuming that the conformal structure of $U'$ extends to $U$ (i.e. the point $p$ is a removable singularity from the conformal viewpoint). \n\nIn particular, every point of $S$ (singular or not) has a coordinate neighborhood in which the metric $g$ can be written as \n$$\n g = \\rho(x,y)(dx^2 + dy^2) = \\rho(z) |dz|^2,\n$$\nwhere $z = x+iy$ and $\\rho$ is a positive function that is of class $C^2$ outside the singularities. Such coordinates are called \\textit{isothermal coordinates}.\n\n\\smallskip\n\n{\\small As a counterexample, one may consider a non-negative function $\\phi$ on $\\mathbb{C}$ that vanishes exactly on a contractible compact subset $Q \\subset \\mathbb{C}$. Let us denote by $S = \\mathbb{C}\/Q$ the space obtained by identifying all the points of $Q$ and $g = \\phi(z) |dz|^2$. 
If $Q$ contains more than one point, then $(S,g)$ has a singular point $q = [Q]$ that does not satisfy the above condition.\n}\n\n\\medskip\n\nOur second assumption concerns the curvature. It says that if $K$ denotes the curvature and $dA$ the area element of $g$, then\n$$\n \\int_{U'} |K| dA < \\infty.\n$$\n\nAn important class of singularities satisfying the above conditions is given by the simple singularities:\n\n\\medskip\n\n\\textbf{Definition.} \\index{Simple singularity}\nA conformal metric $g$ on a Riemann surface $S$ is said to have a \\textit{simple singularity \nof order} $\\beta$ at $p\\in S$ if it can be locally written as \n$$\n g = e^{2u(z)} |z|^{2\\beta} |dz|^2,\n$$\nwhere $\\beta$ is a real number and $u$ is a function satisfying\n$$\n u \\in L^1 \\quad \\text{and} \\quad \\Delta u \\in L^1.\n$$\nIn this definition, $z = x + iy$ is a local coordinate on $S$ defined in a neighborhood $U$ of $p$ and such that $z(p) = 0$. The Lebesgue space $L^1$ is defined with respect to the Lebesgue measure $dx dy$ on $U$ and the Laplacian of $u$ is defined in the sense of distributions by\n$\\Delta u= - \\frac{\\partial^2 u}{\\partial x^2} - \\frac{\\partial^2 u}{\\partial y^2}$. \n\n\n\\medskip \n\n\nSimple singularities naturally appear in several contexts as we shall soon see. A first class of examples \nis given by the following result due to McOwen \\cite[Appendix B]{McO}:\n\n\\begin{theorem}\n Let $g = e^{2u} |dz|^2$ be a conformal metric on the unit disk $D = \\{z \\in \\mathbb{C} \\mid |z| <1\\}$ having a singularity at the origin. Suppose that $g$ is smooth on the punctured disk $D' = D \\setminus \\{0\\}$.\nIf there exist $\\ell \\in \\r$ and $a, b> 0$ such that the curvature $K$ of $g$ satisfies\n$$\n - b|z|^{\\ell} \\leq K(z) \\leq - a|z|^{\\ell}, \n$$\nthen $0$ is a simple singularity of $g$.\n\\end{theorem}\n\nA simple singularity of order $\\beta < -1$ is always at infinite distance while a simple singularity of order $\\beta > -1$ is always at finite distance from ordinary points. For a singularity of order $-1$, both cases can occur; see \\S 2.2 in \\cite{HT}. A \\textit{cusp} \\index{Cusp} is a simple singularity of order $\\beta = -1$ admitting a neighborhood of finite area.\n\n\\medskip\n\nA simple singularity of order $\\beta > -1$ is also called a \\textit{conical singularity} \\index{Conical Singularity} \nof (total) angle $\\theta = 2\\pi (\\beta+ 1)$. Such a singularity can indeed be approximated by a Euclidean cone of total angle $\\theta$. In particular, if the curvature of $g$ is bounded in some neighborhood of a conical singularity, there exists an ``exponential map'' making it possible to parametrize a neighborhood of the conical singularity by a neighborhood of the vertex of its tangent cone. In other words, one can introduce polar coordinates near a conical singularity. Moreover, if the curvature is continuous, then these polar coordinates are of class $C^1$ with respect to the isothermal coordinates, see \\cite{Troyanov1990b} for proofs of these facts. \n\n\\section{Global Description}\n\nTo investigate singular surfaces having several simple singularities, it is convenient to introduce the following notion:\n\\medskip \n\n\\textbf{Definition.} \\index{Divisor}\nA \\textit{divisor} on a Riemann surface $S$ is a formal sum \n$\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$.\nThe \\textit{support} of this divisor is the set $\\mathrm{supp} (\\pmb{\\beta}) = \\{p_1, \\dots, p_n\\}$. 
A conformal metric $g$ on $S$ \\textit{represents the divisor} $\\pmb{\\beta}$ if it is smooth on the complement of $\\mathrm{supp} (\\pmb{\\beta})$ and if $g$ has a simple singularity of order $\\beta_i$ at $p_i$ for $i = 1, \\dots, n$.\n\n\\medskip \n\n\\textbf{Examples.} \\ (1) The metric $g = |dz|^2$ on the Riemann sphere $\\mathbb{C} \\cup \\{\\infty\\}$ represents the divisor $\\pmb{\\beta} = (-2)\\cdot \\infty$. \n\n\\medskip\n\n(2) More generally, the metric $g = |z|^{2\\alpha}|dz|^2$ on $\\mathbb{C} \\cup \\{\\infty\\}$ represents the divisor $\\pmb{\\beta} = \\alpha \\cdot 0 + (-2- \\alpha)\\cdot \\infty$. \n\n\\medskip\n\n (3) If $\\omega = \\varphi (z) dz$ is a meromorphic differential on the Riemann surface $S$, then $g = |\\omega|^2$ is a flat Riemannian metric with simple singularities representing the divisor $\\pmb{\\beta} = \\mathrm{div} (\\omega)$. \n\n\\medskip\n\n(4) If $(S_1,g_1)$ is a smooth Riemannian surface and $f : S \\to S_1$ is a branched covering, then $g = f^*(g_1)$ is a Riemannian metric on $S$ representing the ramification divisor of $f$, that is $\\pmb{\\beta} = \\sum_p O_p(f) \\cdot p$, where $O_p(f)$ is the ramification order of $f$ at $p$ (i.e. the local degree minus $1$).\n\n\\medskip\n\n(5) If $S$ is a two-dimensional polyhedron (Euclidean, spherical or hyperbolic) with vertices $p_1, \\dots, p_n$, then the metric induced by the geometric realization of that polyhedron represents the divisor $\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$, where \n$2\\pi(\\beta_i+1) = \\theta_i$ is the sum of the angles at $p_i$ of all faces incident with $p_i$.\n\n\\medskip\n\n(6) Let $(\\tilde{S},\\tilde{g})$ be a smooth Riemannian surface on which a finite group $\\Gamma$ acts by isometries. If\n$S = \\tilde{S}\/\\Gamma$ is a surface without boundary, then it inherits a Riemannian metric with simple singularities representing the divisor\n$\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$, where $\\beta_i = (\\frac{1}{n_i}-1)$. Here, the point $p_i$ is the image of a point\n$\\tilde{p}_i \\in \\tilde{S}$ such that $\\Gamma$ has a stabiliser of order $n_i$ at $\\tilde{p}_i$. This example generalizes to two-dimensional orbifolds.\n\n\\medskip \n\nIn examples 4 to 6, all the singular points are conical singularities. An important source of examples, where no singular point is conical, is given by the following Theorem:\n\\begin{theorem}\\label{th.Huber} \n Let $(S', g ')$ be a complete Riemannian surface of class $C^2$ with finite total curvature: $\\int_{S'}|K|dA < \\infty$. Then there exists a compact Riemann surface $S$, a divisor $\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$ on $S$ such that $\\beta_i \\leq -1$ for all $i$, and a conformal metric $g$ on $S$ representing this divisor such that $(S', g ')$ is isometric to\n$(S \\setminus \\mathrm{supp}(\\pmb{\\beta}), g)$.\n\\end{theorem}\nThis result is essentially due to A. Huber; we refer to \\S 1.1 and \\S 2.9 in \\cite{HT} for a discussion and a proof of Huber's theorem in the above formulation. \nIt is not difficult to see that if the surface $(S', g ')$ has finite area, then $\\beta_i = -1$ for all $i$.\n\n\\section{Some Global Geometry}\n\nFor compact Riemannian surfaces with simple singularities, there is a well-known Gauss-Bonnet Formula (see e.g. \\cite{Finn} for the case where all orders satisfy $\\beta_i \\leq -1$). 
To state the Formula we define the \\textit{Euler characteristic of a surface $S$ with divisor} $\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$ as $\\chi(S, \\pmb{\\beta}) = \\chi(S) + \\sum_i \\beta_i$. \n\n\n\\begin{theorem}[The Gauss-Bonnet Formula] \\index{Gauss-Bonnet Formula} \\label{GB}\nLet $(S, g)$ be a compact Riemannian surface whose metric represents a divisor $\\pmb{\\beta}$. Then the total curvature of $(S, g)$ is finite and \nwe have\n$$\n \\frac{1}{2\\pi}\\int_S K dA = \\chi(S, \\pmb{\\beta}).\n$$\n\\end{theorem}\n\nSee e.g. \\cite[Theorem 2.8]{HT} for a proof.\n\n\\medskip \n\nFor example, if $\\omega$ is a meromorphic differential on the closed Riemann surface of genus $\\gamma$, then the Gauss-Bonnet Formula implies that the degree of $\\omega$ (i.e. the number of zeroes minus the number of poles) is equal to $2\\gamma - 2$. Indeed, $g = |\\omega|^2$ is a flat metric representing the divisor $\\mathrm{div}(\\omega)$. Another application of the Gauss-Bonnet formula is the Riemann-Hurwitz formula:\n\n\n\\begin{proposition}[The Riemann-Hurwitz Formula] \\index{Riemann-Hurwitz Formula}\nLet $f : S \\to S_1$ be a branched cover of degree $d$ between two closed surfaces, then \n$$\n \\chi(S) + \\sum_{p\\in S} O_p(f) = d \\chi(S_1),\n$$\nwhere $O_p(f)$ is the branching order of $f$ at $p$. \n\\end{proposition} \n\n\\begin{proof}\nPick an arbitrary smooth metric $g_1$ on $S_1$ and set $g = f^*(g_1)$. Then $g$ is a Riemannian metric with simple singularities on\n $S$ representing the ramification divisor of $f$. The above formula follows now from Theorem \\ref{GB}, since we obviously have \n$$\n\\int_S K dA = d \\cdot \\int_{S_1} K_1 dA_1. \n$$\n\\end{proof}\n\n\\medskip\n\n \nHuber's theorem, together with the Gauss-Bonnet Formula, is a refinement of the Cohn-Vossen inequality:\n\n\\begin{proposition}[The Cohn-Vossen inequality] \\index{Cohn-Vossen inequality}\nLet $(S', g')$ be a complete Riemannian surface of class $C^2$ with finite total curvature: $\\int_{S'} |K'| dA' < \\infty$. Then we have\n$$\n \\frac{1}{2\\pi}\\int_{S'} K' dA' \\leq \\chi(S').\n$$\nMoreover we have equality if $(S', g')$ has finite area.\n\\end{proposition}\n\n\\begin{proof}\nHuber's Theorem tells us that $(S', g')$ admits a compactification $(S, g)$ where $g$ is a metric representing a divisor\n$\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$ such that $\\beta_i \\leq -1$ for all $i$. We then have from the Gauss-Bonnet Formula\n$$\n \\frac{1}{2\\pi}\\int_{S'} K' dA' = \\frac{1}{2\\pi}\\int_{S} K dA = \\chi(S) + \\sum_{i=1}^n \\beta_i \\leq \\chi(S) -n = \\chi(S').\n$$\nIf $S'$ has finite area, then we have $\\beta_i =-1$ for all $i$ and the above inequality is in fact an equality. \n\\end{proof}\n{\\small Note that although the Cohn-Vossen inequality is a consequence of Huber's Theorem \\ref{th.Huber} and the Gauss-Bonnet Formula, \none should not consider it to be a corollary of these results. The reason is that the proof of Huber's theorem is in part based on the Cohn-Vossen inequality.}\n\n\\medskip\n\n\nThe difference between $\\chi(S')$ and $\\chi(S, \\pmb{\\beta})$ in the Cohn-Vossen inequality is an isoperimetric constant: \n\n\\begin{theorem}\nLet $(S, g)$ be a compact Riemannian surface whose metric $g$ represents a divisor $\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$ such that $\\beta_i \\leq -1$ for all $i$. 
\nFix a point $q$ on $S' = S\\setminus \\{p_1, \\dots, p_n\\}$ and denote by $A(q, r)$ the area of $B_q(r) := \\{x \\in S' \\mid d(q,x) \\leq r\\}$ and $L(q, r)$ the length of $\\partial B_q(r)$. Then\n$$\n \\lim_{r \\to \\infty} \\frac{L^2(q,r)}{4\\pi A(q,r)} = - \\sum_{i=1}^n (\\beta_i +1) = \\chi(S') - \\chi(S, \\pmb{\\beta}).\n$$\n\\end{theorem}\n\nThis result is due to K. Shiohama \\cite{Shiohama}, but R. Finn obtained a partial result in this direction \\cite[Theorem 10]{Finn}. \n\n\n\n\\section{Classifying Flat Metrics}\n\nLet us now formulate a classification theorem for flat metrics with simple singularities on a compact surface.\n\n\\begin{theorem}\n Let $S$ be a compact Riemann surface with divisor $\\pmb{\\beta} = \\sum \\beta_ip_i$. Then there exists a conformal flat metric representing $\\pmb{\\beta}$ on $S$ if and only if $\\chi(S,\\pmb{\\beta}) = 0$. Moreover this metric is unique up to homothety.\n\\end{theorem}\n\nThis theorem has several proofs, see e.g. \\cite{Troyanov1986} and \\cite[\\S 7]{HT}. We give here the proof from \\cite{HT}.\n\n\\begin{proof}\nIntroduce in the neighborhood of each $p_i$ a coordinate $z_i$ such that $z_i(p_i) = 0$ and choose an arbitrary conformal metric $g_0$ on $S$ such that $g_0 = |dz_i|^2$ in the neighborhood of each $p_i$. Let us now choose a positive function $\\rho : S \\to \\r$ which is of class $C^2$ on $S\\setminus \\mathrm{supp}(\\pmb{\\beta})$ and such that $\\rho = |z_i|^{2\\beta_i}$ in the neighborhood of each $p_i$.\nThe metric $g_1 = \\rho g_0$ is then a conformal metric representing the divisor $\\pmb{\\beta}$. \n\nSince the desired metric must be conformal on $S$, it can be written as $g = e^{2u} g_1$. Note that if $u$ is a function of class $C^2$ on $S$ such that \n\\begin{equation}\\label{LaplK1}\n \\Delta_1 u = -K_1,\n\\end{equation}\nwhere $\\Delta_1$ and $K_1$ denote the Laplacian and the curvature of $g_1$, then $g = e^{2u} g_1$ is a flat conformal metric representing $\\pmb{\\beta}$ on $S$. \nBecause $\\Delta_1$ is a singular operator, it is more convenient to write the previous equation as\n\\begin{equation}\\label{LaplK2}\n \\Delta_0 u = -\\rho K_1,\n\\end{equation}\nwhere $\\Delta_0$ is the Laplacian of the smooth metric $g_0$.\n\nNote that \\eqref{LaplK1} and \\eqref{LaplK2} are equivalent equations, but since $K_1$ vanishes in a neighborhood of the points $p_i$ and the functions $\\rho$ and $K_1$ are of class $C^2$ on $S\\setminus \\mathrm{supp}(\\pmb{\\beta})$, the right hand side of \\eqref{LaplK2} is of class $C^2$ on the whole surface $S$.\n\nIt is well known that the partial differential equation \\eqref{LaplK2} has a solution if and only if the integral of the right hand side vanishes, which follows from the Gauss-Bonnet formula:\n$$\n \\int_S K_1 \\rho dA_0 = \\int_S K_1 dA_1 = 2\\pi \\chi(S,\\pmb{\\beta}) = 0.\n$$\nWe have thus proved the existence of a flat conformal metric on $S$ representing the divisor $\\pmb{\\beta}$. \nThe uniqueness follows from the fact that if $g_1$ and $g_2$ are two such metrics, then $g_2 = e^{2v} g_1$ for a harmonic function $v$ on $S$. 
This function is constant (because $S$ is a closed surface) and the two metrics are therefore homothetic.\n\\end{proof}\n \n\\medskip\n\nThe above Theorem gives us a short proof of the Uniformization Theorem for the sphere:\n\n\\begin{theorem} \\index{Uniformization Theorem (for the 2-sphere)}\n Any Riemann surface homeomorphic to the two-sphere is conformally equivalent to $\\mathbb{C} \\cup \\{\\infty\\}$.\n\\end{theorem}\n\n\n\\begin{proof}\nLet us choose a point $p$ in $S$ and consider the divisor $\\pmb{\\beta} = (-2)\\cdot p$. Observe that $\\chi(S, \\pmb{\\beta}) = 2 - 2 = 0$.\nThe previous theorem tells us that there is a conformal flat metric $g$ on $S$ representing this divisor. It is clear that $(S, g)$ is isometric (and thus conformally equivalent) to $\\left(\\mathbb{C} \\cup \\{\\infty\\}, |dz|^2\\right) $.\n\\end{proof}\n \n\\medskip\n\n{\\small Of course the Uniformization Theorem also means that there exists a smooth conformal metric of constant positive curvature on $S$. However, it is hard to prove this result by directly solving the corresponding Berger-Nirenberg problem, that is by directly constructing a conformal metric of curvature $+1$. The above proof on the other hand is almost trivial.}\n\n\n\\section{The Berger--Nirenberg Problem on Surfaces with Divisors} \\index{The Berger--Nirenberg Problem}\n\nThe classical Berger-Nirenberg problem is the following:\n\n\\medskip \n\n\\textbf{Problem 1.} Let $S$ be a Riemann surface and $K : S \\to \\mathbb{R}$ a function on this surface. Is there a conformal metric on $S$ whose curvature is the function $K$? If it exists, is such a metric unique?\n\n\\medskip \n\nThis problem is clearly not well posed for open surfaces. One could hope that the problem is well posed for complete Riemannian metrics, however, it is not difficult to construct families $\\{g_{\\lambda}\\}$ of conformal metrics on a Riemann surface which are complete, conformal, of the same curvature and whose geometry at infinity drastically varies with $\\lambda$, in the sense that they are not mutually bilipschitz. An example is given in \\cite{HT1990}.\nThe previous discussion, in particular Huber's Theorem \\ref{th.Huber}, suggests to replace the Berger-Nirenberg Problem on open surfaces by a version of the problem on compact surfaces with a divisor.\n \n\n\\medskip \n\n\\textbf{Problem 2.} Let $(S,\\beta)$ be a compact Riemann surface with divisor, and $K : S \\to \\mathbb{R}$ be a smooth function. Is there a conformal metric $g$ on $S$ that represents $\\pmb{\\beta}$ and whose curvature is the function $K$? If it exists, is such a metric unique?\n\n\\medskip \n\nWe have already answered this question when $K$ vanishes everywhere.\n\n\\smallskip \n\nProblem 2 is studied in the papers \\cite{Troyanov1991} (in the case of conic singularity) and \\cite{HT} in the general case. The results can be summarized in a form which is similar to the classical theory in the smooth case as it is exposed in the foundational article \\cite{KW} by Jerry Kazdan and Frank Warner.\n\n\\begin{theorem}\\label{prescrK}\nLet $(S, \\pmb{\\beta})$ be a compact Riemann surface with a divisor $\\pmb{\\beta} = \\sum \\beta_ip_i$ , and $K : S \\to \\r$ be a smooth function. Suppose that there exists $p>1$ such that $h_i(z) = |z-p_i|^{2\\beta_i}K(z)$ is a function of class $L^p$ in a neighborhood of each $p_i$. 
Moreover\n\\begin{enumerate}[(a)]\n\\item If $\\chi(S,\\pmb{\\beta}) >0$, we assume $\\sup(K) >0$ and $q\\chi(S,\\pmb{\\beta})<2$, where $q=p\/(p-1)$.\n\\item If $\\chi(S,\\pmb{\\beta}) =0$, we assume either that $K \\equiv 0$ or $\\sup(K) >0$ and $\\int_S KdA_0 < 0$, where $dA_0$\nis the area element of a flat conformal metric representing $\\pmb{\\beta}$.\n\\item If $\\chi(S,\\pmb{\\beta}) <0$, we assume $K \\leq 0$ and $K \\not\\equiv 0$.\n\\end{enumerate}\nThen there exists a conformal metric $g$ on $S $ which represents the divisor $\\pmb{\\beta}$\nand whose curvature is $K$. In case (c), this metric is unique.\n\\end{theorem}\n\n\nA very brief idea of the proof is presented in \\cite{HT1990} (see \\cite{Troyanov1991} and \\cite{HT} for details).\nSome particular cases of this theorem have been obtained previously by W. M. Ni, R. MacOwen and P. Aviles. At the beginning of the twentieth century, Emile Picard had already studied the case of curvature $- 1$ in \\cite{Picard}.\nThe hypotheses of the previous theorem impose a decay of the curvature when approaching the singularities of order $< - 1$. The next result, only valid for non-positive curvature, does not impose such a behavior.\n\n\\begin{theorem}\nLet $S$ be a compact Riemann surface and $g_1$ be a conformal metric representing a divisor $\\pmb{\\beta} = \\sum_{i=1}^n \\beta_ip_i$\nsuch that $n \\geq 1$ and $\\chi(S,\\pmb{\\beta}) < 0$. Let $K : S \\to \\r$ be a smooth nonpositive function such that \n$$ bK\\leq K_1\\leq aK<0$$\non the complement of a compact subset of $S' = S \\setminus \\{p_1, \\dots, p_n\\}$, where $K_1$ is the curvature of $g_1$ and $a,b$ are positive constants. Then there exists a unique conformal metric $g$ on $S$ which represents $\\pmb{\\beta}$, has curvature $K$ and is \nconformally quasi-isometric to $g_1$.\n\\end{theorem}\n\nSee \\cite[Theorem 8.1, 8.4]{HT} or \\cite{McO} for the proof. As an application, using this Theorem, one can construct metrics with prescribed (negative) curvature having cusps. The previous Theorem also admits a generalization to non-compact Riemann surfaces of finite type having hyperbolic ends.\n\n\\medskip\n\nWe end this section with two results on the Berger-Nirenberg Problem on closed Riemann surfaces with divisor. The first result, proved in \\cite{Tang} by Junjie Tang, allows us to solve the Berger-Nirenberg Problem when $\\chi (S, \\pmb{\\beta})$ is small enough. \n\n \n\n\\begin{theorem}\nLet $S$ be a closed Riemann surface with a divisor ${\\pmb{{\\pmb{\\alpha}}}} = \\sum_{i=1}^n \\alpha_i p_i$ such that $\\chi(S, {\\pmb{\\alpha}}) = 0$, and let\nus pick a conformal metric $g_0$ representing ${\\pmb{\\alpha}}$. 
\n\nSuppose another divisor $\\pmb{\\beta}= \\sum_{i=1}^n \\beta_i p_i$ with the same support is given on $S$, such that $\\chi(S, \\pmb{\\beta}) < 0$, and consider a function $K : S\\to \\r$ such that $K = O(|z - p_i|^{\\ell_i})$ (in the neighborhood of each $p_i$), where $\\ell_i > -2(1+\\alpha_i)$.\n\n\\smallskip \n\n\\emph{(A)} If $\\beta_i \\leq \\alpha_i$ for all $i$, then a necessary condition for the existence of a conformal metric $g$ on $S$ with curvature $K$ representing \n$\\pmb{\\beta}$ is\n\\begin{equation}\\label{kaneg}\n \\int_S KdA_0 < 0,\n\\end{equation}\nwhere $dA_0$ is the area element of $g_0$.\n\n\\smallskip \n\n\\emph{(B)} There exists $\\varepsilon >0$ (depending on $S$, $\\pmb{\\alpha}$ and $K$) such that if $\\max_i |\\beta_i - \\alpha_i| \\leq \\varepsilon$,\nthen \\eqref{kaneg} is a sufficient condition for the existence of a conformal metric $g$ with curvature $K$ representing $\\pmb{\\beta}$.\n\\end{theorem}\n\n\n\\medskip\n\nThe second result is due to Dominique Hulin \\cite{Hulin1993}. It allows us to solve the problem when $K$ is close enough to a given non-positive function: \n\\begin{theorem} \nLet $S$ be a compact Riemann surface with a divisor $\\pmb{\\beta}= \\sum_{i=1}^n \\beta_i p_i$ such that $\\chi(S, \\pmb{\\beta}) < 0$.\nLet $K_1, k : S \\to \\mathbb{R}$ be two smooth functions on $S$ such that $K_1 \\leq 0 \\leq k$ everywhere on $S$, and $K_1 \\not\\equiv 0$.\nSuppose that $|z-p_i|^{2\\beta_i}(k(z) - K_1(z)) \\in L^p$ for some $p>1$ in the neighborhood of the points $p_i$. \n\\ Then there exists a constant $C > 0$ (depending on $S$, $\\pmb{\\beta}$, $K_1$ and $k$) such that if\n$$\nK_1\\leq K \\leq K_1 + Ck\n$$\non $S$, then there exists a conformal metric $g$ of curvature $K$ on $S$ which represents $\\pmb{\\beta}$. \n\\end{theorem}\nThe dependence of the constant $C$ on $S$, $\\pmb{\\beta}$, $k$ and $K_1$ is explicitly given in \\cite[Theorem 6.1]{Hulin1993}.\n\n\\newpage\n\n\\section{Spherical Polyhedra}\n\nThe following result classifies spherical surfaces with fewer than three conical points.\n\\begin{theorem}\nLet $g$ be a metric on the sphere $S^2$ representing a divisor $\\pmb{\\beta} = \\beta_1p_1 + \\beta_2p_2$ and whose curvature is constant $K = + 1$. Then $\\beta_1 = \\beta_2$; moreover:\n\\begin{enumerate}[(a)]\n\\item If $\\beta_i$ is not an integer, then $p_1$ and $p_2$ are conjugate points (that is $d(p_1,p_2) = \\pi$).\n\\item If $\\beta = m \\in \\mathbb{N}$, then $(S, g)$ is isometric to a branched covering of degree $m+1$ of the standard sphere $\\mathbb{S}^2$, \n(with its canonical metric) branched over two points, with ramification order equal to $m$.\nMoreover, two such metrics are isometric if and only if their singularities are of the same order and separated by the same distance.\n\\end{enumerate}\n\\end{theorem}\n\nLet us call a \\emph{spherical polyhedron} \\index{Spherical polyhedra} a Riemannian surface homeomorphic to the sphere, with conical singularities of order \n$\\beta_i \\in (-1,0)$ and whose Gauss curvature is constant $K = +1$. \nA fundamental Theorem by A. D. Aleksandrov states that a Riemannian surface, homeomorphic to the sphere, with constant curvature $K = +1$ and\n conical singularities of order $\\beta_i \\in (-1,0)$ can be realized as the boundary of a convex polytope \nin the standard three-sphere $\\mathbb{S}^3$ (see \\cite[Chapter XII, p. 400]{Alexandrov}). 
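\n\nAs an illustration of how the Gauss-Bonnet Formula constrains such metrics, consider a spherical metric with two conical singularities of equal order $\\beta \\in (-1,0)$, hence equal angle $\\theta = 2\\pi(1+\\beta)$, as in the Theorem above. Since $K \\equiv +1$, the formula gives\n$$\n \\mathrm{Area}(S,g) = \\int_S K dA = 2\\pi \\chi(S^2, \\pmb{\\beta}) = 2\\pi(2 + 2\\beta) = 2\\theta,\n$$\nwhich is exactly the area of the spherical ``football'' obtained by gluing the two boundary meridians of a lune of angle $\\theta$; for $\\beta = m \\in \\mathbb{N}$ this area equals $4\\pi(m+1)$, as expected for a branched covering of degree $m+1$ of the standard sphere.\n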
Note that from the previous Theorem, a spherical polyhedron cannot have exactly one singularity.\n\n\\medskip \n\nThe last result classifies the divisors that can be represented by a spherical polyhedron having at least three singularities:\n\\begin{theorem}\nLet $\\pmb{\\beta} = \\sum_{i=1}^n \\beta_i p_i$ be a divisor on $S^2 = \\mathbb{C} \\cup \\{\\infty\\}$ such that $n\\geq 3$ and $-1 < \\beta_i < 0$ for all $i$.\nThen there exists a unique conformal metric $g$ with constant curvature $K = +1$ representing $\\pmb{\\beta}$ if and only if\n\\begin{equation}\\label{CondLT}\n 0 < 2 + \\sum_{i=1}^n \\beta_i < 2(1 + \\min_i \\{\\beta_i\\}).\n\\end{equation}\n\\end{theorem}\nExpressed with the cone angles $\\theta_i = 2\\pi(1+\\beta_i)$, the hypothesis can also be written as $\\theta_i < 2\\pi$ and \n\\begin{equation}\\label{CondLT2}\n 0 < 4\\pi + \\sum_{i=1}^n (\\theta_i - 2\\pi) < 2 \\min_i \\{\\theta_i\\}.\n\\end{equation}\nThe first inequality in this condition is none other than the Gauss-Bonnet formula. \n\n\nLet us observe that the condition \\eqref{CondLT2} is similar to the condition satisfied by the angles \n$\\varphi_1, \\dots, \\varphi_n$ of a spherical convex polygon:\n$$\n 0 < 2\\pi + \\sum_{i=1}^n (\\varphi_i - \\pi) < 2 \\min_i \\{\\varphi_i\\}.\n$$\nThe existence of a spherical metric representing $\\pmb{\\beta}$ follows from Theorem \\ref{prescrK} above (see also \\cite{Troyanov1991}). The necessity of \\eqref{CondLT}, as well as the uniqueness of the metric, have been proved by Feng Luo and Gang Tian in \\cite{LT}. \n\n\\bigskip \n\n\n\\textit{Note added in 2021.} \nThe subject of this survey has grown considerably over the past 30 years. A nice reference covering more recent aspects of the theory is the article \\cite{Lai} by M. Lai.\n\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}}