\\section{Introduction}\n\nLocalising the source of an unknown or uncertain scalar field has attracted significant attention in recent years. Extremum seeking can then be understood as driving the state of an agent or network of agents to the source, and maintaining a steady state in the neighbourhood of this optimal state in the unknown field. The widespread applications include internal combustion engine calibration~\\cite{killingsworth2009hcci}, locating RF leakage~\\cite{al2012position}, optimising energy distribution~\\cite{ye2016distributed}, and maximising the output of bioreactors~\\cite{wang1999optimizing}. The main challenge in general is the approximation of the field, or a valid descent direction, with the additional challenge in the multi-agent case of coordinating the agents to improve the estimation. In this work, we consider discrete time extremum seeking; for the more classical continuous time extremum seeking problem, see~\\cite{tan2010extremum} and the references therein.\n\nExtremum seeking with a single agent primarily uses ``dither'' or other motion patterns to estimate a descent direction. In~\\cite{cochran2009nonholonomic, zhang2007extremum}, extremum seeking with a single agent is investigated relying only on the measurements of the scalar field, without usage of the agent's position. Both approaches use a sinusoidal dither signal to estimate the gradient of the unknown field. Using finite differences with previous measurements, tracking and estimation error bounds for the minima of a time-varying scalar field are derived in~\\cite{shames2019online}, along with extensive numerical studies using a single agent. 
A hybrid controller is defined in~\\cite{mayhew2007robust}, conducting a series of line minimisations to construct the descent direction, with stability and convergence results.\n\nUsing a network of agents allows for a more robust estimate of the gradient, as the measurements are typically assumed to be simultaneous and thus unaffected by a time-varying source. In~\\cite{biyik2008gradient} a network is used with a single leader determining the estimated gradient, employing a zero mean dither signal, with the followers only keeping formation. The authors show that with a fast dither and slow formation keeping, the followers only track the gradient descent movement of the leader. Using multiple ``leader'' agents and only inter-agent bearing measurements, the authors in~\\cite{zhao2015bearing,zhao2015translational} stabilise a formation in arbitrary dimension with leaders following reference velocities or trajectories. In addition, using only bearing measurements allows for formation scaling and rotation. In multi-agent approaches, the set of measurements from each agent can be used to compute an estimated gradient, assuming a single sensor aboard each agent~\\cite{khong2014multi,ogren2004cooperative,skobeleva2018planar,vandermeulen2017discrete,vweza2015gradient}. All of these publications use some form of the simplex gradient~\\cite{regis2015calculus}, as do we in this paper. The controller design derived in~\\cite{khong2014multi} uses a centralised extremum seeking controller, with access to all of the agents' measurements, which provides reference velocities to each of the agents. Convergence guarantees are provided for a variety of formation and extremum seeking methods satisfying their assumptions. A centralised controller is implemented in~\\cite{ogren2004cooperative} to track the estimated gradient, which is computed using least squares estimation and refined by Kalman filtering. 
The agents are tasked with formation keeping around a virtual leader, which climbs the gradient of the unknown field. However, the problem formulation only considers finite manoeuvres, and the formation may move extremely slowly. For networks of $3$ agents in $2$ dimensions, a distributed control law with exponential convergence guarantees is investigated in~\\cite{skobeleva2018planar}. The agents in~\\cite{vandermeulen2017discrete} use a dynamic consensus algorithm to coordinate the gradient estimation, combined with a zero mean dither to construct a local gradient estimate. Finally, in a series of papers~\\cite{circular2015distributed,circular2013consensus,circular2011collaborative,circular2010source}, a group of unicycle agents performing distributed extremum seeking in circular formations is examined. The agents stabilise their formation and gradient estimate using a consensus algorithm, and the approach performs well even with lossy communication and time-varying communication networks. The algorithm described in~\\cite{circular2015distributed} is implemented in Section~\\ref{sec:simul} to compare to the results derived in this paper.\n\n\nThis paper provides a novel analysis of multi-agent extremum seeking focused on a time-varying source without using a centralised coordinator or dither motion. This differs from the majority of the literature, which assumes a static or slowly drifting scalar field. We show that given an appropriate choice of the agents' formation, a simple gradient descent algorithm enables the network of agents to converge to a bounded neighbourhood of the time-varying extremum sets. At each iteration, we only assume that the time-varying field is represented by a function which has a Lipschitz continuous gradient (bounded second derivative), and satisfies the Polyak-\\L{}ojasiewicz inequality. 
The Polyak-\\L{}ojasiewicz inequality assumption is also weaker than many assumptions used to provide the linear convergence of gradient descent algorithms, such as convexity or quadratic growth~\\cite{plstuff}. The authors' previous investigation into this problem~\\cite{michael2020optimisation} included a more complicated control law than is presented here, with results restricted to $2$ dimensions. In this analysis, we simplify the control law, derive stronger convergence guarantees, and broaden the method to arbitrary dimension.\n\nThe paper is organised as follows. Section~\\ref{sec:probForm} is devoted to basic assumptions on the time-varying field and agent dynamics. Section~\\ref{sec:coopGradDesc} discusses the distributed control law for extremum seeking and formation keeping, along with convergence guarantees. Section~\\ref{sec:gradEstimation} provides an example of cooperative gradient estimation, an improvement and generalisation of the results from~\\cite{michael2020optimisation}. We provide numerical studies and comparison in Section~\\ref{sec:simul}, and conclude in Section~\\ref{sec:conclude}.\n\n\\section{Problem Formulation}\\label{sec:probForm}\n\nConsider a network of $n$ agents where $x_k^{(i)} \\in \\mathbb{R}^d$ denotes the position of the $i$-th agent for $i\\in \\{1,...,n\\}$ at iteration $k$. We use bold variables throughout the paper to describe the stacked vector for all agents, i.e. ${\\bf x}_k$ denotes the vector of all agents' states stacked vertically. Let $\\mathcal{G}=(\\mathcal{V},\\mathcal{E})$ be the underlying graph of the network with the vertex set $\\mathcal{V}=\\{1,...,n\\}$ representing the agents and the edge set $\\mathcal{E}\\subseteq \\mathcal{V}\\times\\mathcal{V}$ representing the communication topology. 
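The graph objects just introduced can be sketched as follows; the neighbour sets computed here match the definition given below, while the 4-agent ring topology itself is an illustrative assumption, not a topology used in the paper.

```python
# Sketch: a directed edge set E over the vertex set V, and the in-neighbour
# sets N^(i) = {j : (j, i) in E}. The 4-agent ring is illustrative only.
vertices = {1, 2, 3, 4}
edges = {(1, 2), (2, 3), (3, 4), (4, 1), (2, 1), (3, 2), (4, 3), (1, 4)}

def neighbour_set(i, edge_set):
    """Agents j from which agent i receives information, i.e. (j, i) in E."""
    return {j for (j, l) in edge_set if l == i}

neighbours = {i: neighbour_set(i, edges) for i in vertices}
```

With this bidirectional ring, every agent has two neighbours and the graph is connected, consistent with the connectivity assumption stated next.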
For each agent $i$, we define a set of neighbours $\\mathcal{N}^{(i)}:= \\{ j \\mid (j,i)\\in\\mathcal{E} \\}$ from which agent $i$ receives information at each iteration step.\n\n\\begin{assum}\\label{ass:networkProp}\nAssume that the agent communication graph $\\mathcal{G}=(\\mathcal{V},\\mathcal{E})$ is connected and time invariant.\n\\end{assum}\nThe agents are modelled as single integrators, with dynamics\n\\begin{align}\nx_{k+1}^{(i)} = x_k^{(i)} + \\alpha p_k^{(i)}. \\label{eq:dyn}\n\\end{align}\nThe constant step size $\\alpha$ is determined by properties of the time-varying field, and is time invariant and uniform across the network. At each iteration $k$, the time-varying field is represented by the function $f_k:\\mathbb{R}^{d}\\rightarrow\\mathbb{R}$ with the non-empty minimiser set $\\mathcal{X}^*_{f_k} := \\textrm{argmin}_{x\\in\\mathbb{R}^{d}} f_k(x)$. The agents can only measure the function value at their location at each iteration, i.e. the value $f_k(x_k^{(i)})$. For any dimension $m\\in\\mathbb{Z}^+$ we define the distance between a point $x\\in\\mathbb{R}^{m}$ and a set $\\mathcal{S}\\subseteq\\mathbb{R}^{m}$ as\n\\begin{align}\n d(x,\\mathcal{S}) = \\inf_{y\\in\\mathcal{S}} ||y-x||, \\label{eq:pointSetDist}\n\\end{align}\nwhere $||\\cdot||$ is the Euclidean norm. Additionally, for a function $h:\\mathcal{D}\\rightarrow\\mathbb{R}$ we will use the shorthand\n\\begin{align}\n h^* := \\inf_{x\\in\\mathcal{D}} h(x), \\label{eq:funcMinVal}\n\\end{align}\nto represent the minimum value of that function.\n\n\\begin{assum}(Differentiability and Lipschitz Gradient):\\label{ass:Lipschitz}\nFor all $k\\geq0$, the functions $f_k:\\mathbb{R}^d \\rightarrow \\mathbb{R}$ are at least once continuously differentiable. The gradients are $L_{f}-$Lipschitz continuous, i.e. 
there exists a positive scalar $L_{f}$ such that, for all $k \\geq 0, x\\in\\mathbb{R}^d,\\; y\\in\\mathbb{R}^d$, $$|| \\nabla f_k(x) - \\nabla f_k(y) || \\leq L_{f}||x-y||, $$ or equivalently $$ f_k(y) \\leq f_k(x) + \\nabla f_k(x)^T(y-x) + \\frac{L_{f}}{2}||y-x||^2.$$\n\\end{assum}\n\n\\begin{assum}(Polyak-\\L{}ojasiewicz Condition):\\label{ass:polyak}\nFor all $k\\geq0$, there exists a positive scalar $\\mu_{f}$ such that $$\\frac{1}{2}||\\nabla f_k(x)||^2\\geq \\mu_{f}(f_k(x) - f^*_k),$$ for $f^*_k$ defined in~\\eqref{eq:funcMinVal}.\n\\end{assum}\nThe assumption that a function has an $L-$Lipschitz continuous gradient is equivalent to assuming the second derivative has bounded norm, if it is twice differentiable. The Polyak-\\L{}ojasiewicz condition simply requires that the squared norm of the gradient grows at least proportionally to the suboptimality $f_k(x) - f^*_k$ as we move away from the optimal function value. The Polyak-\\L{}ojasiewicz condition does not require the minima to be unique, although it does guarantee that every stationary point is a global minimum~\\cite{plstuff}. In addition to Assumptions~\\ref{ass:Lipschitz}-\\ref{ass:polyak} on each $f_k$, we quantify the ``speed'' with which the field may vary in the following assumption.\n\\begin{assum}(Bounded Drift in Time):\\label{ass:drift}\nThere exist positive scalars $\\eta_0$ and $\\eta^*$ such that $|f_{k+1}(x)-f_k(x)|\\leq \\eta_0$ for all $x\\in\\mathbb{R}^d$ and $|f^*_k-f^*_{k+1}|\\leq \\eta^*$.\n\\end{assum}\nThe problem of interest is given below.\n\\begin{prob}\\label{prob:onlyProb}\nFor a network of $n$ agents with dynamics~\\eqref{eq:dyn} and communication topology satisfying Assumption~\\ref{ass:networkProp}, let $\\{f_k\\}$ be a sequence of functions with a corresponding sequence of minimiser sets $\\{\\mathcal{X}^*_{f_k}\\}$ satisfying Assumptions~\\ref{ass:Lipschitz}-\\ref{ass:drift}. 
Given the measurements $\\mathcal{Y}_k^{(i)}=\\{f_k(x_k^{(j)}) \\mid j\\in\\mathcal{N}^{(i)}\\cup\\{i\\}\\}$, find $\\alpha,p_k^{(i)}$ and a constant $M$ for all agents $i\\in\\mathcal{V}$ and for all $k\\geq0$ such that $\\lim\\limits_{k\\rightarrow\\infty}d(x_k^{(i)},\\mathcal{X}^*_{f_k})\\leq M$.\n\\end{prob}\nPrevious research in extremum seeking for stationary sources has resulted in stronger convergence guarantees, such as semi-global practical asymptotic stability~\\cite{mayhew2007robust,khong2014multi,kvaternik2012analytic}. The authors of~\\cite{zhang2007extremum} are able to show local exponential stability of their extremum seeking algorithm for slowly moving sources, i.e. when the source velocity is significantly less than the dither speed. Further, the stability results are only extended to moving sources with periodic or ``almost-periodic'' trajectories. In this paper we use a more general definition of a ``moving source'' than is found in the literature, and show that the control law described in the following sections is able to bound the steady state tracking error $d(x_k^{(i)},\\mathcal{X}^*_{f_k})$.\n\n\\section{Cooperative Gradient Descent}\\label{sec:coopGradDesc}\n\nIn this section we discuss our primary result, showing that a network of cooperating agents can reach a bounded neighbourhood of the minimiser set. 
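The bounded-neighbourhood behaviour can be previewed with a minimal single-agent sketch on a drifting quadratic field $f_k(x) = \frac{1}{2}||x - c_k||^2$ (so $L_f = \mu_f = 1$ and $\mathcal{X}^*_{f_k} = \{c_k\}$); the drift vector and horizon are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Exact gradient descent tracking a drifting minimiser c_k: the tracking
# error settles at ||drift|| / alpha instead of vanishing.
drift = np.array([0.05, 0.0])      # c_{k+1} - c_k, bounded as in Assumption 4
alpha = 1.0                        # step size 1/L_f

x = np.array([5.0, 5.0])           # agent state
c = np.array([0.0, 0.0])           # current minimiser
for _ in range(100):
    x = x - alpha * (x - c)        # exact gradient step on f_k
    c = c + drift                  # the minimiser moves before the next step
tracking_error = np.linalg.norm(x - c)
```

Even with exact gradients, the drift terms $\eta_0,\eta^*$ leave a residual error, mirroring the constant drift term in the bounds below.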
In this section, for simplicity, we assume each agent uses an $\\varepsilon-$\\emph{gradient oracle} at each iteration to construct a step direction.\n\n\\begin{defn}\\label{def:epsOracle}\n\\emph{($\\varepsilon$-gradient oracle):} Given the function $f_k:\\mathbb{R}^{d}\\rightarrow\\mathbb{R}$ and the state of the agents in the network ${\\bf x}_k\\in\\mathbb{R}^{nd}$, the oracle returns $O(f_k,{\\bf x}_k,\\mathcal{N}^{(i)}) = \\nabla f_k(x^{(i)}_k) + \\varepsilon_k$.\n\\end{defn}\n\nIn order to motivate the incorporation of formation control, we first analyse the erroneous gradient descent dynamics with input $p^{(i)}_k = -O(f_k,{\\bf x}_k,\\mathcal{N}^{(i)})$,\n\\begin{align}\n\\begin{split}\nx^{(i)}_{k+1} :=& x^{(i)}_{k} + \\alpha (-O(f_k,{\\bf x}_k,\\mathcal{N}^{(i)})) \\\\\n=& x^{(i)}_{k} - \\alpha (\\nabla f_k(x^{(i)}_k) + \\varepsilon_k),\n\\end{split}\\label{eq:naiveDyn}\n\\end{align}\nand provide the following lemma on the convergence properties of the system.\n\n\\begin{lem}\\label{lem:noisyGradDesc}\nFor a sequence of functions $\\{f_k\\}$ with minimiser sets $\\{\\mathcal{X}^*_{f_k}\\}$ satisfying Assumptions~\\ref{ass:Lipschitz}-\\ref{ass:drift}, the system with dynamics~\\eqref{eq:naiveDyn} satisfies\n\\begin{align}\n\\begin{split}\n\\frac{1}{2}d(x^{(i)}_k,\\mathcal{X}^*_{f_k})^2 &\\leq \\beta(d(x^{(i)}_0,\\mathcal{X}^*_{f_0})^2,k) \\\\\n&\\hspace{-1cm} + \\frac{\\alpha}{2\\mu_{f}}\\sum_{t=0}^{k}(1 - \\alpha\\mu_{f})^{k-t}||\\varepsilon_t||^2 + \\frac{\\eta_{0}+\\eta^*}{\\alpha\\mu_{f}^2}, \\label{eq:noisyGradNeighbourhood}\n\\end{split}\n\\end{align}\nfor $\\beta\\in\\mathcal{KL}$, $\\alpha\\in(0,\\frac{1}{L_{f}}]$ with $L_{f},\\mu_{f}$ from Assumptions~\\ref{ass:Lipschitz}-\\ref{ass:polyak}, and $d(x^{(i)}_k,\\mathcal{X}^*_{f_k})$ defined in~\\eqref{eq:pointSetDist}.\n\\end{lem}\n\n\\begin{pf}\nSee Appendix~\\ref{app:noisyGradDesc}.\n\\end{pf}\n\n\\begin{rem}\n Lemma~\\ref{lem:noisyGradDesc} seems to imply that if $\\alpha$ is chosen to be 
$\\frac{1}{\\mu_f}$, the impact of the gradient error from steps before $k$ is zero. To understand why this is so, note that the Lipschitz constant $L_{f}$ and Polyak-\\L{}ojasiewicz constant $\\mu_{f}$ satisfy the following\n \\begin{align}\n \\frac{\\mu_{f}}{2}d(x^{(i)}_k,\\mathcal{X}^*_{f_k})^2 \\leq f_k(x^{(i)}_k) - f^*_k \\leq \\frac{L_{f}}{2}d(x^{(i)}_k,\\mathcal{X}^*_{f_k})^2, \\label{eq:quadraticSqueeze}\n \\end{align}\n see~\\cite{plstuff} for an in-depth discussion regarding the Polyak-\\L{}ojasiewicz inequality. Requiring that $\\alpha \\leq \\frac{1}{L_{f}}$ then implies $\\alpha \\leq \\frac{1}{\\mu_{f}}$. Therefore, if $\\alpha \\approx \\frac{1}{\\mu_f}$, then we must have that $\\mu_f\\approx L_{f}$, and $f_k$ is approximately a scaled squared distance to its minimiser set as a consequence of~\\eqref{eq:quadraticSqueeze}. For such a function, the gradient dynamics~\\eqref{eq:naiveDyn} would take the agent directly to the minimiser, except for the error term from the most recent gradient estimate in~\\eqref{eq:noisyGradNeighbourhood} and the drift error term $\\frac{\\eta_0+\\eta^*}{\\mu_f}$.\n\\end{rem}\n\nWe see from Lemma~\\ref{lem:noisyGradDesc} that the system with dynamics~\\eqref{eq:naiveDyn} converges to a neighbourhood dependent on the magnitude of the gradient error terms $||\\varepsilon_k||^2$ and a constant term due to drift. In using function samples to estimate the gradient, the error in estimation is generally a function of the geometry of the samples taken. By incorporating formation control into the dynamics, we are able to bound the error terms $||\\varepsilon_k||^2$. 
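The neighbourhood behaviour of Lemma~1 can be sketched numerically; here the field is a static quadratic ($\eta_0 = \eta^* = 0$) and a gradient error of fixed norm is injected artificially. All constants are illustrative assumptions.

```python
import numpy as np

# Erroneous gradient dynamics (eq. naiveDyn) on f(x) = 0.5 x^T Q x: the state
# settles in a neighbourhood of x* = 0 of size roughly eps_bound / mu_f,
# consistent with the geometric error sum in Lemma 1; it does not converge.
rng = np.random.default_rng(1)
L_f, mu_f = 2.0, 0.5
Q = np.diag([L_f, mu_f])       # gradient Lipschitz constant L_f, PL constant mu_f
alpha = 1.0 / L_f
eps_bound = 0.1

x = np.array([5.0, 5.0])
for _ in range(500):
    eps = rng.normal(size=2)
    eps *= eps_bound / np.linalg.norm(eps)   # gradient error with ||eps_k|| = eps_bound
    x = x - alpha * (Q @ x + eps)            # erroneous gradient step
```

The step size, noise model, and horizon are chosen only to make the residual neighbourhood visible against the decaying transient.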
We show a specific example of this in Section~\\ref{sec:gradEstimation}, but make minimal assumptions in this section on the specifics of how to construct a gradient estimate from sample points.\n\nTo expand the focus onto the entire network's behaviour, we define the collective time-varying function $F_k:\\mathbb{R}^{nd}\\rightarrow\\mathbb{R}$ as\n\\begin{align*}\nF_k({\\bf x}_k) := \\sum_{i\\in\\mathcal{V}} f_k(x^{(i)}_k),\n\\end{align*}\nand note that $F_k$ satisfies Assumptions~\\ref{ass:Lipschitz}-\\ref{ass:polyak} with the same constants $L_{f},\\mu_{f}$. The time-varying minimiser set of $F_k({\\bf x}_k)$ is\n\\begin{align*}\n\\mathcal{X}^*_{F_k} = \\overbrace{\\mathcal{X}^*_{f_k}\\times\\mathcal{X}^*_{f_k}\\times\\cdots\\times\\mathcal{X}^*_{f_k}}^{n}.\n\\end{align*}\nTo incorporate formation control, we use a formation potential function $\\phi:\\mathbb{R}^{nd}\\rightarrow\\mathbb{R}^+$ which takes the full state vector of all agents and returns a scalar which is minimised when the agents are in formation. Let the minimum be $\\phi^* := \\min\\limits_{{\\bf x}\\in\\mathbb{R}^{nd}}\\phi({\\bf x})$.\n\\begin{defn}\\label{def:formFuncs}\nWe define $\\phi:\\mathbb{R}^{nd} \\rightarrow \\mathbb{R}^+$ to be the \\emph{formation potential function} for the network, with minimisers $\\mathcal{X}^*_{\\phi}$, and assume the following properties. 
The function $\\phi({\\bf x}_k)$\n\\begin{enumerate}\n \\item is continuously differentiable on $\\mathbb{R}^{nd}$ with a gradient which is Lipschitz continuous with constant $L_{\\phi}$;\n \\item satisfies the Polyak-\\L{}ojasiewicz inequality (Assumption~\\ref{ass:polyak}), with constant $\\mu_{\\phi} \\geq \\mu_{f}$;\n \\item has gradient component $\\nabla_{x^{(i)}_k} \\phi({\\bf x}_k)$ which is computable using only the state of agent $i$ and neighbours $j\\in\\mathcal{N}^{(i)}$;\n \\item satisfies $\\phi({\\bf x}_k) \\geq \\frac{c}{2}\\sum_{i\\in\\mathcal{V}}||\\varepsilon^{(i)}_k||^2$ for $c > \\frac{1}{\\mu_{f}}$.\n\\end{enumerate}\n\\end{defn}\nIn the definition of the formation potential function, the first two conditions ensure that $\\phi({\\bf x}_k)$ shares the minimal properties that make $f_k$ amenable to analysis. The third property ensures that the local information each agent has is sufficient for computation of the descent direction. Finally, the fourth property formalises the connection between the formation and the gradient estimation error. In Section~\\ref{sec:simul} we provide the example $\\phi({\\bf x}_k) = \\phi^* + L_{f} \\sum_{i\\in\\mathcal{V}}\\sum_{j\\in\\mathcal{N}^{(i)}} ||x^{(i)}-x^{(j)}-\\hat{x}^{(ij)}||^2$, where the terms $\\hat{x}^{(ij)}$ define the optimal formation and the constant $\\phi^*$ ensures condition $4$ from Definition~\\ref{def:formFuncs} is satisfied when the agents are in perfect formation. 
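A minimal numerical sketch of this kind of example potential, restricted to a single ordered pair (agent $1$ with neighbour $2$), is given below; $L_f$, $\phi^*$, and the ideal displacement are illustrative values, and the finite-difference check illustrates that the local gradient component uses only local states.

```python
import numpy as np

# Pairwise formation potential phi(x1, x2) = phi* + L_f ||x1 - x2 - xhat_12||^2
# and its analytic local gradient with respect to x1. Values are illustrative.
L_f = 2.0
phi_star = 0.5
xhat_12 = np.array([3.0, 0.0])        # desired displacement x1 - x2

def phi(x1, x2):
    return phi_star + L_f * np.linalg.norm(x1 - x2 - xhat_12) ** 2

def grad_x1_phi(x1, x2):
    # computable from x1, x2, and xhat_12 alone (property 3 of Definition 2)
    return 2.0 * L_f * (x1 - x2 - xhat_12)

x1, x2 = np.array([3.5, 0.2]), np.array([0.0, 0.0])
# central finite differences agree with the analytic local gradient
h = 1e-6
fd = np.array([(phi(x1 + h * e, x2) - phi(x1 - h * e, x2)) / (2 * h) for e in np.eye(2)])
```

At perfect formation ($x_1 - x_2 = \hat{x}^{(12)}$) the potential attains exactly the offset $\phi^*$.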
The constant offset does not change the dynamics; it simply allows $\\phi({\\bf x}_k)$ to bound the gradient error in the convergence analysis, see the proof of Theorem~\\ref{thm:compositeConvergence}.\n\nWith the formation potential function defined, we define the ``composite'' function $\\hat{f}_k:\\mathbb{R}^{nd} \\rightarrow \\mathbb{R}$ as\n\\begin{align}\n\\hat{f}_k({\\bf x}_k) := F_k({\\bf x}_k) + \\phi({\\bf x}_k),\\label{eq:compositeFunc}\n\\end{align}\nwith corresponding minimisers in the set $\\mathcal{X}^*_{\\hat{f}_k}$, and the new system dynamics\n\\begin{align}\nx^{(i)}_{k+1} := x^{(i)}_k - \\alpha(\\nabla_{x^{(i)}_k} \\hat{f}_k + \\varepsilon_k). \\label{eq:compositeDyn}\n\\end{align}\nEach agent can compute the gradient $\\nabla_{x^{(i)}_k} \\phi({\\bf x}_k)$ with only local information, so the gradient of the composite function, being the sum of $F_k$ and $\\phi$, can be estimated using the same $\\varepsilon-$gradient oracle for $f_k$. Both $F_k$ and $\\phi$ satisfy Assumption~\\ref{ass:Lipschitz} with constants $L_{f},L_{\\phi}$ respectively, and Assumption~\\ref{ass:polyak} with constants $\\mu_{f},\\mu_{\\phi}$. 
Therefore, the composite function satisfies both Assumptions~\\ref{ass:Lipschitz}-\\ref{ass:polyak} with constants $L_{\\hat{f}} := L_{f}+L_{\\phi}$ and $\\mu_{\\hat{f}} \\geq \\min(\\mu_{f},\\mu_{\\phi}) = \\mu_{f}$.\n\n\\begin{lem}\\label{lem:minimiserRelation}\nFor the composite function $\\hat{f}_k$, as defined in~\\eqref{eq:compositeFunc}, we have\n\\begin{align*}\n\\hat{f}^*_k := \\min_{{\\bf x}\\in\\mathbb{R}^{nd}}\\hat{f}_k({\\bf x}) \\leq \\phi^* + \\frac{\\min(L_{f},L_{\\phi})}{2} d(\\mathcal{X}^*_{F_k},\\mathcal{X}^*_{\\phi})^2,\n\\end{align*}\nwhere we define the distance between the minimiser sets as\n\\begin{align*}\nd(\\mathcal{X}^*_{F_k},\\mathcal{X}^*_{\\phi}) := \\min \\{ ||x^*_{\\phi} - x^*_{F_k}|| \\mid x^*_{\\phi}\\in\\mathcal{X}^*_{\\phi} \\; , \\;x^*_{F_k}\\in \\mathcal{X}^*_{F_k} \\}.\n\\end{align*}\n\\end{lem}\n\n\\begin{pf}\nSee Appendix~\\ref{app:minimiserRelation}.\n\\end{pf}\nThe primary result is given in Theorem~\\ref{thm:compositeConvergence}.\n\n\\begin{thm}\\label{thm:compositeConvergence}\nFor a sequence of functions $\\{\\hat{f}_k\\}$ as defined in~\\eqref{eq:compositeFunc} with minimisers $\\{\\mathcal{X}^*_{\\hat{f}_k}\\}$, the system with dynamics~\\eqref{eq:compositeDyn} satisfies\n\\begin{align*}\n\\begin{split}\n\\frac{1}{2}d(x^{(i)}_{k+1},\\mathcal{X}^*_{\\hat{f}_{k+1}})^2 &\\leq \\beta(d(x^{(i)}_{0},\\mathcal{X}^*_{\\hat{f}_0})^2,k) \\\\\n&\\hspace{-1cm} + \\frac{\\alpha}{c\\mu_{f}}\\sum_{t=0}^{k}(1 - \\alpha\\mu')^{k-t}\\hat{f}^*_t + \\frac{\\eta_{0}+\\eta^*}{\\alpha\\mu_{f}\\mu'},\n\\end{split}\n\\end{align*}\nfor $\\beta\\in\\mathcal{KL}$, $\\alpha\\in(0,\\frac{1}{L_{\\hat{f}}}]$, and $\\mu' = \\mu_{f}-\\frac{1}{c}$. Therefore, we have\n\\begin{align}\n\\lim_{k\\rightarrow\\infty} \\frac{1}{2}d(x^{(i)}_{k+1},\\mathcal{X}^*_{\\hat{f}_{k+1}})^2 \\leq \\frac{\\limsup\\limits_{k\\rightarrow\\infty} \\hat{f}^*_k}{\\mu'} + \\frac{\\eta_{0}+\\eta^*}{\\alpha\\mu_{f}\\mu'}. 
\\label{eq:limBound}\n\\end{align}\n\\end{thm}\n\n\\begin{pf}\nSee Appendix~\\ref{app:compositeConvergence}.\n\\end{pf}\nIn a typical gradient descent scenario, we may expect a convergence bound similar to~\\eqref{eq:limBound}, without the $\\limsup \\hat{f}^*_k$ term. However, the gradient being used is only an estimate, and $\\limsup \\hat{f}^*_k$ represents the limit of the ``quality'' of the gradient estimate. Even on a stationary function, i.e. $\\eta_0 = \\eta^* = 0$, we would not expect asymptotic convergence with erroneous gradient information.\n\nIn Theorem~\\ref{thm:compositeConvergence}, we demonstrate that by incorporating a formation potential function, which bounds the gradient estimation error, we are able to converge to a bounded neighbourhood of the time-varying minimiser set $\\mathcal{X}^*_{\\hat{f}_{k}}$. Further, the system does not require the delineation of leaders and followers, a separate time-scale for the formation-keeping, or any centralised computing to guide the formation.\n\\section{Gradient Estimation and Error}\\label{sec:gradEstimation}\nIn Section~\\ref{sec:coopGradDesc}, we assumed that each agent has access to an estimated local gradient $\\nabla f_k(x_k^{(i)}) + \\varepsilon^{(i)}_k$. In this section, we provide a method by which agent $i$ can estimate $\\nabla f_k(x_k^{(i)})$ as well as compute an error bound for the estimate. The error bound and gradient estimation method apply to \\emph{any} function which satisfies Assumption~\\ref{ass:Lipschitz}. This method is a significant improvement on our previous work~\\cite{michael2020optimisation}, generalising to any dimension and any number of neighbours. We make the following assumption on the neighbour set.\n\n\\begin{assum}\\label{ass:fullRankNeighbours}\nFor each agent $i\\in\\mathcal{V}$ with state $x^{(i)}_k\\in\\mathbb{R}^d$, the neighbour set cardinality satisfies $|\\mathcal{N}^{(i)}| \\geq d$. 
Further, the vectors $\\{x^{(l)}_k - x^{(i)}_k\\}_{l\\in\\mathcal{N}^{(i)}}$ span $\\mathbb{R}^d$.\n\\end{assum}\n\nThe requirement that the agents are not arranged on a lower dimensional subspace is the primary motivation for incorporating formation control. We define three variables before proceeding, as they will figure heavily in the analysis,\n\\begin{align}\n\\begin{split}\ns^{(ij)}_k &:= \\frac{f_k(x^{(j)}_k) - f_k(x^{(i)}_k)}{||x^{(j)}_k - x^{(i)}_k||}, \\\\\nv^{(ij)}_{k} &:= \\frac{x^{(j)}_k - x^{(i)}_k}{||x^{(j)}_k - x^{(i)}_k||}, \\\\\na^{(ij)}_k &:= \\frac{L_{f}}{2}||x^{(j)}_k - x^{(i)}_k||.\n\\end{split}\n\\label{eq:usefulConstants}\n\\end{align}\nFor an agent $i$, $s^{(ij)}_k$ is the average slope between agent $i$ and agent $j$, $v^{(ij)}_k$ the unit vector pointing from agent $i$ to $j$, and $a^{(ij)}_k$ the distance between the agents, scaled by $\\frac{L_{f}}{2}$. We will use ${\\bf s}^{(i)}_k,{\\bf a}^{(i)}_k$ to denote the vertically stacked vectors of $s^{(ij)}_k,a^{(ij)}_k$ for all neighbours $j\\in\\mathcal{N}^{(i)}$.\n\n\\begin{lem}\\label{lem:gradPolyhedron}\nFor a function $f_k$ satisfying Assumption~\\ref{ass:Lipschitz} and an agent $i$ with neighbour set $\\mathcal{N}^{(i)}$ satisfying Assumption~\\ref{ass:fullRankNeighbours}, there exists a \\textbf{bounded} polyhedron\n\\begin{align}\n\\mathcal{P}^{(i)}_k := \\{ x\\in\\mathbb{R}^d \\mid \\begin{bmatrix} A^{(i)}_k \\\\ -A^{(i)}_k \\end{bmatrix} x \\leq b^{(i)}_k \\} \\label{eq:polyhedron}\n\\end{align}\nsuch that $\\nabla f_k(x^{(i)}_k) \\in \\mathcal{P}^{(i)}_k$, for $A^{(i)}_k\\in\\mathbb{R}^{|\\mathcal{N}^{(i)}|\\times d}$ and $b^{(i)}_k\\in\\mathbb{R}^{2|\\mathcal{N}^{(i)}|}$.\n\\end{lem}\n\n\\begin{pf}\nSee Appendix~\\ref{app:gradPolyhedron}.\n\\end{pf}\nFrom Lemma~\\ref{lem:gradPolyhedron}, there exists a bounded set $\\mathcal{P}^{(i)}_k$ within which the gradient $\\nabla f_k(x^{(i)}_k)$ must lie. 
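The three quantities in~\eqref{eq:usefulConstants} can be computed directly from one pair of sample points; the agent positions and the field $f(x) = ||x||^2$ (for which $L_f = 2$) below are illustrative, not values from the paper.

```python
import numpy as np

# s_ij: average slope from agent i to j; v_ij: unit direction from i to j;
# a_ij: (L_f / 2) times the inter-agent distance.
L_f = 2.0
x_i = np.array([0.0, 0.0])
x_j = np.array([1.0, 1.0])

def f(x):
    return x @ x

diff = x_j - x_i
dist = np.linalg.norm(diff)
s_ij = (f(x_j) - f(x_i)) / dist
v_ij = diff / dist
a_ij = 0.5 * L_f * dist
```

For this pair, both the slope and the scaled distance evaluate to $\sqrt{2}$.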
In~\\cite{michael2020optimisation}, we restricted the error bound analysis to $2$ dimensions with $2$ neighbours. The same method is not computationally feasible in higher dimensions, as it requires computation of the largest diagonal in the $d$-parallelotope, which has $2^{d-1}$ diagonals. Instead, we compute a ``rounding'' of the polytope by defining an ellipse\n\\begin{align}\nm^{(i)}_{k} &:= \\sqrt{\\sum_{j\\in\\mathcal{N}^{(i)}} (|s^{(ij)}_k-(g^{(i)}_k)^Tv^{(ij)}_k| + a^{(ij)}_k)^2} \\label{eq:ellipseScaling}\\\\\n\\mathcal{E}^{(i)}_k &:= \\left\\{ x\\in\\mathbb{R}^{d} \\mid \\left\\|\\frac{A^{(i)}_k(x-g^{(i)}_k)}{m^{(i)}_k}\\right\\|^2 \\leq 1 \\right\\}, \\label{eq:ellipseDef}\n\\end{align}\nwith $g^{(i)}_k$ the center of $\\mathcal{E}^{(i)}_k$, $A^{(i)}_k$ the matrix defined in Lemma~\\ref{lem:gradPolyhedron}, and $s^{(ij)}_k,v^{(ij)}_k,a^{(ij)}_k$ defined in~\\eqref{eq:usefulConstants}. We define $g^{(i)}_k$ to be\n\\begin{align}\ng^{(i)}_k := ((A^{(i)}_k)^TA^{(i)}_k)^{-1}(A^{(i)}_k)^T{\\bf s}^{(i)}_k. \\label{eq:centerDef}\n\\end{align}\nNote that the center of the ellipse will serve as the gradient estimate for agent $i$, and is equivalent to the simplex gradient~\\cite{regis2015calculus} of agent $i$ and its neighbours. In the following theorem we present the main result for this section, an error bounding process which works in arbitrary dimension with any number of neighbours, given Assumption~\\ref{ass:fullRankNeighbours}.\n\n\\begin{thm}\\label{thm:boundingEllipse}\nFor a function $f_k$ satisfying Assumption~\\ref{ass:Lipschitz} and an agent $i$ with neighbour set $\\mathcal{N}^{(i)}$ satisfying Assumption~\\ref{ass:fullRankNeighbours}, let $\\mathcal{P}^{(i)}_k$ be the polytope defined in Lemma~\\ref{lem:gradPolyhedron}. Then $\\mathcal{P}^{(i)}_k\\subseteq \\mathcal{E}^{(i)}_k$, for $\\mathcal{E}^{(i)}_k$ the ellipse defined in~\\eqref{eq:ellipseDef} with center $g^{(i)}_k$ defined in~\\eqref{eq:centerDef}. 
Further, if $|\\mathcal{N}^{(i)}| = d$, and we assume $B(r,c) = \\{ x\\in\\mathbb{R}^{d} \\mid ||x-c||_2 \\leq r\\}$ is the smallest bounding ball such that $\\mathcal{P}^{(i)}_k \\subseteq B(r,c)$, then\n\\begin{align}\n\\frac{||{\\bf a}_k^{(i)}||}{\\sigma_{\\textrm{max}}(A^{(i)}_k)} \\leq r \\leq \\frac{||{\\bf a}_k^{(i)}||}{\\sigma_{\\textrm{min}}(A^{(i)}_k)} \\label{eq:radiusBounds}\n\\end{align}\nfor $\\sigma_{\\textrm{max\/min}}$ the largest\/smallest singular values of $A^{(i)}_k$ and ${\\bf a}_k^{(i)}$ the vector of $a_k^{(ij)}$ for all $j\\in\\mathcal{N}^{(i)}$.\n\\end{thm}\n\n\\begin{pf}\nSee Appendix~\\ref{app:boundingEllipse}.\n\\end{pf}\nThe result in~\\eqref{eq:radiusBounds} may be interpreted as ``the radius of the smallest bounding ball lies between the largest and smallest radii of $\\mathcal{E}^{(i)}_k$.'' A simple example of the ellipse~\\eqref{eq:ellipseDef} with $2$ neighbours, labelled \\emph{uniform scaling} (due to the uniform scaling of the shape matrix), compared to the smallest bounding ball is shown in Figure~\\ref{fig:2neigh}.\n\\begin{figure}[thpb]\n \\centering\n \\includegraphics[scale=0.45]{2neighFont9797.pdf}\n \\caption{Ellipse bounding demonstration of Theorem~\\ref{thm:boundingEllipse}. \\label{fig:2neigh}}\n\\end{figure}\n\nGiven that finding the smallest bounding ball which contains a polytope is an NP-hard problem, even for the relatively simple centrally symmetric parallelotopes~\\cite{bodlaender1990computational}, this approximation is sufficient for the primary goal of gradient estimation. 
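A numerical sketch of the estimate~\eqref{eq:centerDef} and the resulting error bound of the form $m^{(i)}_k/\sigma_{\min}(A^{(i)}_k)$ is given below for an agent with $d = 2$ orthogonal neighbours on a quadratic field; the field, positions, and spacings are illustrative assumptions.

```python
import numpy as np

# Simplex gradient (least squares through neighbour slopes) and its error
# bound on f(x) = x1^2 + 0.5 x2^2, whose gradient is Q x with L_f = 2.
L_f = 2.0
Q = np.diag([2.0, 1.0])

def f(x):
    return 0.5 * x @ Q @ x

x_i = np.array([1.0, 1.0])
neighbours = [x_i + np.array([0.3, 0.0]), x_i + np.array([0.0, 0.3])]

rows, slopes, a = [], [], []
for x_j in neighbours:
    diff = x_j - x_i
    dist = np.linalg.norm(diff)
    rows.append(diff / dist)                  # v_ij
    slopes.append((f(x_j) - f(x_i)) / dist)   # s_ij
    a.append(0.5 * L_f * dist)                # a_ij

A, s = np.array(rows), np.array(slopes)
g = np.linalg.solve(A.T @ A, A.T @ s)         # simplex gradient estimate
m = np.sqrt(sum((abs(sl - g @ v) + aj) ** 2 for v, sl, aj in zip(rows, slopes, a)))
bound = m / np.linalg.svd(A, compute_uv=False).min()
err = np.linalg.norm(g - Q @ x_i)             # true gradient of f is Q x_i
```

In this orthogonal configuration $A$ is the identity, so the estimate is simply the slope vector, and the true estimation error indeed sits inside the computed bound.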
Further, this approximation method gives the smallest $2$-norm bound on the error in the simplest case, with $d$ neighbours arranged orthogonally around agent $i$, as demonstrated in Corollary~\\ref{cor:boundingBall}.\n\n\\begin{cor}\\label{cor:boundingBall}\nIf agent $i$ has a neighbour set with cardinality $|\\mathcal{N}^{(i)}|=d$, and $(v^{(ij)}_k)^Tv^{(il)}_k=0$ for all $j,l\\in\\mathcal{N}^{(i)}$ with $j\\neq l$, then $\\mathcal{E}^{(i)}_k$ as defined in~\\eqref{eq:ellipseDef} is the smallest bounding ball such that $\\mathcal{P}^{(i)}_k\\subseteq\\mathcal{E}^{(i)}_k$.\n\\end{cor}\n\n\\begin{pf}\nIf all neighbours are orthogonal, then $A^{(i)}_k$ as defined in Lemma~\\ref{lem:gradPolyhedron} is an orthogonal matrix, i.e. $(A^{(i)}_k)^TA^{(i)}_k = I$. Therefore, $\\mathcal{E}^{(i)}_k$ is a ball. Further, from Theorem~\\ref{thm:boundingEllipse}, the smallest bounding ball radius lies between the largest and smallest radii of $\\mathcal{E}^{(i)}_k$, which in this case are the same radius. Therefore, $\\mathcal{E}^{(i)}_k$ is the smallest bounding ball containing $\\mathcal{P}^{(i)}_k$.\n\\end{pf}\nFor any number of neighbours satisfying Assumption~\\ref{ass:fullRankNeighbours}, Theorem~\\ref{thm:boundingEllipse} guarantees a gradient estimation error bound of the form\n\\begin{align}\n||g^{(i)}_k - \\nabla f_k(x^{(i)}_k)|| \\leq \\frac{m^{(i)}_k}{\\sigma_{\\min}(A_k^{(i)})}, \\label{eq:gradErrorBound}\n\\end{align}\nfor $g^{(i)}_k$ the estimated gradient~\\eqref{eq:centerDef} and $m^{(i)}_k$ as defined in~\\eqref{eq:ellipseScaling}.\n\n\\subsection{Bounding Ellipse for Large Neighbour Sets}\n\nThe ellipse definition from~\\eqref{eq:ellipseDef} performs well for smaller sets of neighbours, but tends to be conservative when the neighbour set is larger than $d$. We provide an additional bounding ellipse here, which shares many of the useful properties of the ellipse defined in~\\eqref{eq:ellipseDef}, but tends to be significantly less conservative in larger problems. 
The potentially large scaling factor in the denominator of the ellipse definition~\\eqref{eq:ellipseDef} is distributed to each row, rather than applied uniformly, which mitigates some of the inflation from redundant neighbours. We define a matrix $B^{(i)}_{k}\\in\\mathbb{R}^{|\\mathcal{N}^{(i)}|\\times d}$ with the $j$-th row $B^{(i)}_{k}[j]$ defined\n\\begin{align}\nB^{(i)}_{k}[j] := \\frac{(v^{(ij)}_{k})^T}{\\sqrt{|\\mathcal{N}^{(i)}|}(|s^{(ij)}_k-(g^{(i)}_k)^Tv^{(ij)}_{k}| + a^{(ij)}_k)} \\label{eq:otherEllipseRow}\n\\end{align}\nfor $g^{(i)}_k\\in\\mathbb{R}^{d}$ the center of the ellipse. The second ellipsoidal approximation of $\\mathcal{P}^{(i)}_k$ can then be defined as\n\\begin{align}\n\\bar{\\mathcal{E}}^{(i)}_k := \\left\\{ x\\in\\mathbb{R}^{d} \\mid \\left\\|B^{(i)}_k(x-g^{(i)}_k)\\right\\|^2 \\leq 1 \\right\\}. \\label{eq:otherEllipseDef}\n\\end{align}\nIt can be verified that $\\bar{\\mathcal{E}}^{(i)}_k$ defined in~\\eqref{eq:otherEllipseDef} also contains $\\mathcal{P}^{(i)}_k$. However, the radius of the smallest bounding ball is not guaranteed to lie between the largest and smallest radii of $\\bar{\\mathcal{E}}^{(i)}_k$, and thus $\\bar{\\mathcal{E}}^{(i)}_k$ does not satisfy the claims of Corollary~\\ref{cor:boundingBall}. For problems with larger sets of neighbours, however, the authors note that $\\bar{\\mathcal{E}}^{(i)}_k$ seems to be a tighter approximation of $\\mathcal{P}^{(i)}_k$, based on a large number of trials generating random neighbour sets and comparing the ellipses. An example comparing the ``uniform scaling ellipse'' from~\\eqref{eq:ellipseDef} to the ``row scaling ellipse'' from~\\eqref{eq:otherEllipseDef} is included in Figure~\\ref{fig:4neigh}.\n\\begin{figure}[thpb]\n \\centering\n \\includegraphics[scale=0.5]{4neighFont8838.pdf}\n \\caption{Comparing the bounds~\\eqref{eq:ellipseDef} and~\\eqref{eq:otherEllipseDef}. 
\\label{fig:4neigh}}\n\\end{figure}\n\n\\section{Simulations}\\label{sec:simul}\n\nIn this section we provide numerical studies to illustrate the results from the previous sections, as well as a comparison with another distributed extremum seeking algorithm. For the time-varying scalar field, we use convex quadratic functions $f_k(x) = \\frac{1}{2}(x-c(k))^TQ(x-c(k)) + \\zeta^T(x-c(k)) + p$, for positive semi-definite $Q$. The values used in the following plots are\n\\begin{align*}\nQ = \\begin{bmatrix} 2.66 &-0.36 \\\\ -0.36 &1.74 \\end{bmatrix} \\; , \\; &\\zeta = [-1.28,4.66]^T \\;,\\; p = 6.26,\\\\\nc(k) = 10\\sin(\\frac{\\sqrt{2}k}{100}) &+ 10\\sin(\\frac{\\sqrt{3}k}{100}) +\\frac{k}{100},\n\\end{align*}\nwith $L_f,\\mu_f$ the largest and smallest eigenvalues of $Q$ respectively. For the formation control function, we designate a set of neighbours for each agent $\\mathcal{N}^{(i)}$ along with a corresponding set of ideal displacements $\\hat{x}^{(ij)}$. The formation potential function is then\n\\begin{align}\n\\phi({\\bf x}_k) = \\phi^* + L_{f}\\sum_{i\\in\\mathcal{V}}\\sum_{j\\in\\mathcal{N}^{(i)}} ||x^{(i)} - x^{(j)} - \\hat{x}^{(ij)}||^2_2. \\label{eq:formationPotential}\n\\end{align}\nIn~\\cite{michael2020optimisation} we derive the error bound on the gradient estimation in two dimensions, and show that the estimation error is proportional to the distance between the agents, with proportionality constant $L_{f}$, so the Lipschitz constant $L_{f}$ and the minimum value $\\phi^*$ in~\\eqref{eq:formationPotential} ensure that $\\phi({\\bf x}_k)$ satisfies the assumptions in Definition~\\ref{def:formFuncs}. 
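As a concrete sketch of these quantities (a minimal Python/numpy illustration of ours, not part of the original simulation code; we take $Q$ symmetric with off-diagonal entries $-0.36$ and treat the source position $c$ as a given vector), the scalar field and the formation potential~\eqref{eq:formationPotential} can be evaluated as follows.

```python
import numpy as np

# Quadratic field f_k(x) = 1/2 (x-c)^T Q (x-c) + zeta^T (x-c) + p,
# with the parameter values used in the plots (Q taken symmetric).
Q = np.array([[2.66, -0.36], [-0.36, 1.74]])
zeta = np.array([-1.28, 4.66])
p = 6.26

def f(x, c):
    """Evaluate the scalar field at position x for source position c."""
    d = x - c
    return 0.5 * d @ Q @ d + zeta @ d + p

def phi(x, neighbours, x_hat, L_f, phi_star):
    """Formation potential: phi* plus L_f-weighted squared deviations
    of inter-agent displacements from the ideal displacements x_hat."""
    total = phi_star
    for i, N_i in neighbours.items():
        for j in N_i:
            total += L_f * np.sum((x[i] - x[j] - x_hat[(i, j)]) ** 2)
    return total
```

At $x=c$ the field evaluates to $p$, and when the agents are in perfect formation the potential equals $\phi^*$, consistent with $\phi^*$ being the minimum value of $\phi$.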
The minimum value $\\phi^*$ is chosen as an upper bound on the gradient approximation error when the agents are in perfect formation, derived from the gradient estimation error bounds in Theorem~\\ref{thm:boundingEllipse}.\n\nThe simulated methods include the composite method derived in Section~\\ref{sec:coopGradDesc} using two different formations, as well as the consensus for circular formations from~\\cite{circular2015distributed} for comparison. For the composite method, as described in Section~\\ref{sec:coopGradDesc}, we use the simplex gradient as the local gradient estimation method at each iteration, as presented in Algorithm~\\ref{alg:compDyn}.\n\n\\begin{algorithm}\n\\caption{Distributed Composite Dynamics\\label{alg:compDyn}}\n\\begin{algorithmic}\n\\For{$k=1,2,...$}\n \\For{$i\\in\\{1,2,...,n\\}$}\n \\State $g^{(i)}_k = ((A^{(i)}_k)^TA^{(i)}_k)^{-1}(A^{(i)}_k)^T{\\bf s}^{(i)}_k$ \n \\EndFor\n \\For{$i\\in\\{1,2,...,n\\}$}\n \\State $x^{(i)}_{k+1} = x^{(i)}_{k} - \\frac{1}{L}(g^{(i)}_k+\\nabla_{x^{(i)}_k} \\phi({\\bf x}_k))$\n \\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nThe circular formation controller is presented in Algorithm~\\ref{alg:circDyn}, and is written exactly as in~\\cite{circular2015distributed} accounting for the notation of this paper. The parameters used within Algorithm~\\ref{alg:circDyn} are the same as used in the original paper~\\cite{circular2015distributed}, in the example provided therein without noise. The radius of the formation $D=3$, the rotation velocity $\\omega=1$, $\\epsilon=0.5$ and $\\alpha=1$. 
The consensus matrix used is also the same as shown in~\\cite{circular2015distributed}; for $6$ agents we have used\n\\begin{align*}\n P = \\begin{bmatrix}\n 0.5 & 0.25 & 0 & 0 & 0 & 0.25 \\\\\n 0.25 & 0.5 & 0.25 & 0 & 0 & 0 \\\\\n 0 & 0.25 & 0.5 & 0.25 & 0 & 0 \\\\\n 0 & 0 & 0.25 & 0.5 & 0.25 & 0 \\\\\n 0 & 0 & 0 & 0.25 & 0.5 & 0.25 \\\\\n 0.25 & 0 & 0 & 0 & 0.25 & 0.5\n \\end{bmatrix}.\n\\end{align*}\n\\begin{algorithm}\n\\caption{Circular Source Seeking\\label{alg:circDyn}}\n\\begin{algorithmic}\n\\For{$i=1,...,n$}\n \\State $h^{(i)}_0 = \\tilde{g}^{(i)}_0=h^{(i)}_{-1}=c^{(i)}_{0}+f_{0}(x^{(i)}_0)(x^{(i)}_0-c^{(i)}_0)$\n \\State $\\phi^{(i)} = i\\frac{2\\pi}{n}$\n\\EndFor\n\\For{$k=1,2,...$}\n \\For{$i=1,...,n$}\n \\State $g^{(i)}_{k}=c^{(i)}_{k}+\\frac{2}{D^2}f_{k}(x^{(i)}_{k})(x^{(i)}_{k}-c^{(i)}_{k})$\n \\State $\\tilde{g}^{(i)}_{k}=(1-\\alpha)\\tilde{g}^{(i)}_{k-1}+\\alpha g^{(i)}_{k}$\n \\State $\\tilde{h}^{(i)}_{k}=h^{(i)}_{k-1}+\\tilde{g}^{(i)}_{k-1}-\\tilde{g}^{(i)}_{k-2}$\n \\EndFor\n \\State ${\\bf h}_{k} = (P\\otimes I_{2})\\tilde{{\\bf h}}_{k}$\n \\For{$i=1,...,n$}\n \\State $c^{(i)}_{k} = (1-\\epsilon)c^{(i)}_{k-1}+\\epsilon h^{(i)}_{k}$\n \\State $x^{(i)}_{k} = c^{(i)}_{k} + D R(\\phi^{(i)}+\\omega k)$\n \\EndFor\n\\EndFor\n\\end{algorithmic}\n\\end{algorithm}\n\nChoosing six agents forces the use of a regular hexagon for~\\cite{circular2015distributed}. We therefore include the composite method using a regular hexagon formation for comparison. 
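As a sanity check on the consensus step of Algorithm~\ref{alg:circDyn}, the short numpy sketch below (ours, not from~\cite{circular2015distributed}) verifies that the matrix $P$ above is doubly stochastic, so repeated averaging preserves the mean of the agents' local estimates and drives all estimates toward that common mean.

```python
import numpy as np

# Consensus matrix P for six agents, as used in Algorithm 2.
P = np.array([
    [0.50, 0.25, 0.00, 0.00, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00, 0.00, 0.00],
    [0.00, 0.25, 0.50, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.25, 0.50, 0.25, 0.00],
    [0.00, 0.00, 0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.00, 0.00, 0.25, 0.50],
])

# Repeatedly applying the averaging step h <- P h: the mean of the
# entries is preserved at every step, and the entries converge to it.
h = np.arange(6, dtype=float)   # arbitrary local estimates
mean_before = h.mean()
for _ in range(200):
    h = P @ h
```

Since $P$ is symmetric and circulant with second-largest eigenvalue $0.75$, the disagreement between agents contracts geometrically at that rate per consensus step.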
The neighbours are chosen to be the adjacent vertices as in Figure~\\ref{fig:formation2d}.\n\\begin{figure}[t!]\n \\centering\n \\begin{subfigure}[t]{0.2\\textwidth}\n \\begin{tikzpicture}\n \n \\node[draw,minimum size=3cm,regular polygon,regular polygon sides=6] (a) {};\n\n \n \\foreach \\x in {1,2,...,6}\n \\fill (a.corner \\x) circle[radius=2pt];\n\n \\end{tikzpicture}\n \\caption{Hexagonal\\label{fig:formation2d}}\n \\end{subfigure}%\n ~\n \\begin{subfigure}[t]{0.2\\textwidth}\n \\begin{tikzpicture}\n \n \\node[draw,minimum size=3cm,regular polygon,regular polygon sides=4] (a) {};\n \n \\node[draw,minimum size=3cm,regular polygon,regular polygon sides=4,right of= a,xshift=1.12cm] (b) {};\n\n \n \\foreach \\x in {1,2,...,4}\n \\fill (a.corner \\x) circle[radius=2pt];\n\n \n \\foreach \\x in {1,4}\n \\fill (b.corner \\x) circle[radius=2pt];\n\n \\end{tikzpicture}\n \\caption{Rectangular\\label{fig:rectangleForm}}\n \\end{subfigure}\n \\caption{Neighbour topology for six agents in two dimensions.}\n\\end{figure}\n\nWhile the circular motion controller in~\\cite{circular2015distributed} requires this hexagonal arrangement for six agents, the framework proposed in this paper is flexible in the choice of formation by changing the ideal displacements $\\hat{x}^{(ij)}$. To this end we also include a rectangular formation, illustrated in Figure~\\ref{fig:rectangleForm}. As shown in~\\cite{michael2020optimisation}, the gradient estimation error bound is a function of the orthogonality of the neighbours as well as the distance between them, so the rectangular formation will have lower gradient estimation error than the hexagonal formation with the same neighbour distances.\n\nFigure~\\ref{fig:trajComp} shows the resulting trajectories from the composite method. We exclude the trajectories from other methods, as they are visually identical. 
Instead, we include the comparison of the tracking error $\\frac{1}{2}d(x_{k+1},\\mathcal{X}^*_{\\hat{f}_{k+1}})^2$ in Figure~\\ref{fig:minErrorComp} for each method, including the theoretical bounds from Theorem~\\ref{thm:compositeConvergence}.\n\n\\begin{figure}[thpb]\n \\centering\n \\includegraphics[scale=0.45]{traj.pdf}\n \\caption{Agent Trajectories using the composite method from Section~\\ref{sec:coopGradDesc}. \\label{fig:trajComp}}\n\\end{figure}\n\n\\begin{figure*}[thpb]\n \\centering\n \\includegraphics[scale=0.33]{errorComp.pdf}\n \\caption{Comparison of formation distance from the signal source. \\label{fig:minErrorComp}}\n\\end{figure*}\n\nWe can see from Figure~\\ref{fig:minErrorComp} that the theoretical minimiser error bound derived in Theorem~\\ref{thm:compositeConvergence} holds in simulation. All methods exhibit similar performance, including the periodic increases in tracking error, i.e. the five ``bumps'' in Figure~\\ref{fig:minErrorComp}. These coincide with the source accelerating around the curves of the path. The circular formation has higher tracking error, but the method in~\\cite{circular2015distributed} is not explicitly designed to operate on time-varying scalar fields. The rectangular and hexagonal formations using the composite method track nearly identically, although the rectangular formation converges slightly closer to the optimal value set due to the lower gradient error.\n\nAlthough all methods achieve similar performance in tracking the time varying minimiser, the composite method derived in this paper can work in any formation, with any number of agents, in any dimension, without the need to delineate leaders and followers, and is guaranteed to converge to a bounded neighbourhood of the minimiser. 
Further, we can see from Figure~\\ref{fig:minErrorComp} that formations with orthogonal neighbours improve gradient information, and lead to better source tracking, as the rectangular formation of agents has a lower tracking error by the end of the simulation.\n\nIn Figure~\\ref{fig:gradError}, we show the error of the estimated gradient, as well as the error bound for each agent derived from the results of Theorem~\\ref{thm:boundingEllipse}, defined in~\\eqref{eq:gradErrorBound}.\n\n\\begin{figure}[thpb]\n \\centering\n \\includegraphics[scale=0.37]{gradError.pdf}\n \\caption{Gradient estimation error (black) and estimation error bound (red) for each agent from Figure~\\ref{fig:trajComp}.\\label{fig:gradError}}\n\\end{figure}\n\nAs the results from Section~\\ref{sec:coopGradDesc} generalise to any dimension, we provide an example in three dimensions, as well as an implementation of the extremum seeking algorithm from Section~\\ref{sec:coopGradDesc}, at the provided link.\\footnote{\\url{https:\/\/tinyurl.com\/yc4fzpv2}}\n\n\\section{Conclusion}\\label{sec:conclude}\n\nIn this paper we consider a formation of agents tracking the optimum of a time-varying scalar field with no gradient information, in arbitrary dimension. At each iteration, the agents take measurements, communicate with their neighbours to estimate a descent direction, and converge to a neighbourhood of the optimum. We derive distributed control laws which drive the agents to a bounded neighbourhood of the optimiser set, without the delineation of leaders\/followers or the use of communication-intensive consensus protocols. The method is flexible in the choice of formation and gradient estimation method, and we provide examples using two formations and gradient estimation using the simplex gradient. By blending formation control with extremum seeking, the agents are able to minimise the gradient estimation error, improving the neighbourhood of convergence. 
We conclude with numerical studies showing that the proposed method is comparable with other extremum seeking methods, converging to a tighter neighbourhood while being more flexible in the choice of formation. Further research will focus on relaxing the assumptions on the formation potential functions, allowing for potential functions with non-unique minima which do not satisfy the Polyak-\\L{}ojasiewicz inequality, and incorporating time-varying neighbour sets.\n\n\n\n\\bibliographystyle{plain} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Specifically, we study a system in which $J$ \\emph{resource pools}, each made up of finite numbers of \\emph{resource units} (RUs), await allocation to incoming requests of $L$ different types. We refer to the number of RUs in a resource pool as its \\emph{capacity}. Each resource\npool is potentially shared and \\emph{competed} for by many requests, but \\emph{reservation} of RUs for still-to-arrive requests is also allowed. When a request has been accommodated by a resource pool, an appropriate number of RUs of this type are occupied by the request until it leaves the system. The released RUs can be reused by other requests. A request is permitted to occupy RUs from more than one resource pool simultaneously. In this context, the number of requests of the same type that are accommodated by a group of resource pools varies according to a stochastic process, where the transition rates are affected by the resource allocation policy employed. Several such processes associated with the same resource pool are coupled by its capacity limitations.\n\n\nBy strategically assigning requests to appropriate combinations of RUs, we aim to maximize the long-run average revenue, defined as the difference between the long-run average reward earned by serving the requests and the long-run average cost incurred by using the resource pools. \nSuch a resource allocation problem can be easily applied to a rich collection of classical models, such as loss networks in telecommunications, resource allocation for logistic systems, and job assignment in parallel computing. \n\n\\cite{kelly1991loss} published a comprehensive analysis of \\emph{loss network} models with and without \\emph{alternative routing}. In the latter case, network traffic can be re-routed onto alternative paths when the original path fails or is full. \nIn \\cite{kelly1991loss}, a list of alternative paths as choices of resource pools is given for each call\/request. 
The alternative paths are selected in turn if the preceding offered paths are unavailable. In contrast, the manager of a typical resource allocation problem described above is potentially able to change the priorities of paths dynamically. How this should be done is a key focus of this paper.\n\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{.45\\textwidth}\n\\centering\n\\includegraphics[width=0.4\\linewidth]{loss_network.eps}\n\\caption{A simple loss network.}\\label{fig:loss_network}\n\\end{minipage}\n\\begin{minipage}{.45\\textwidth}\n\\centering\n\\includegraphics[width=0.4\\linewidth]{queue_model.eps}\n\\caption{A simple parallel queueing model.}\\label{fig:queue_model}\n\\end{minipage}\n\\vspace{-0.5cm}\n\\end{figure}\nTo illustrate the kind of problem of interest here, consider the simple loss network model shown in Figure~\\ref{fig:loss_network}. Links $a$, $b$ and $c$ are abstracted as resource pools with capacities equal to 1, 3 and 3, respectively: link $a$ consists of one channel as an RU, and links $b$ and $c$ each have 3 channels. Requests asking for a connection from $A$ to $B$ occupying one channel can be served by either path $\\{a\\}$ or $\\{b,c\\}$, but requests requiring two channels for each connection from $A$ to $B$ are able to be accommodated only by path $\\{b,c\\}$. We refer to the former and the latter as type-I and type-II requests, respectively. An arrival of a type-I request results in one of the paths $\\{a\\}$ and $\\{b,c\\}$ being chosen by the optimizer depending on current traffic loads on the three links, where links $b$ and $c$ might be shared with existing type-II requests. Occupied channels or RUs are released immediately and simultaneously when relevant requests are completed. \n\nResource allocation problems with small values of $L$ and $J$, such as the example above, can be modeled as a Markov Decision Process (MDP), and solved through dynamic programming. 
However, in real-world applications, where $L$ and $J$ are large, resulting in high dimensionality of the state and action spaces, such an approach is often intractable.\n\n\n\nIn this paper we use an analysis inspired by techniques applied to Restless Multi-Armed Bandit Problems (RMABPs).\nThe standard RMABP consists of parallel MDPs with binary actions (they can either be ``pulled'', that is activated, or not), which are competing for a limited number of activation opportunities at each decision epoch. Each of the MDPs, referred to as a \\emph{bandit process}, has its own individual state-dependent reward rates and transition probabilities when it is activated and when it is not. \n\nAttempts to solve the problem are faced with exponential growth in the size of the state space as the number of parallel bandit processes increases. \nThis class of problems was described by \\cite{whittle1988restless}, who proposed a heuristic management policy that was shown to be asymptotically optimal under non-trivial extra conditions by~\\cite{weber1990index};\nthis policy approaches optimality as the number of bandit processes tends to infinity.\nThe policy, subsequently referred to as the \\emph{Whittle index policy}, always prioritizes bandit processes with higher state-dependent \\emph{indices} that intuitively represent marginal rewards earned by processes if they are selected.\nThe Whittle indices can be computed independently for each bandit process, a computation with significantly reduced complexity. 
The Whittle index policy is scalable to an RMABP with a large number of bandit processes.\nAlso, the asymptotic optimality property, if it is satisfied, guarantees a bounded performance degradation in a large-scale system and is appropriate for large problems where optimal solutions are intractable.\nThe non-trivial extra conditions required by the asymptotic optimality proof in \\cite{weber1990index} are related to proving the existence of a global attractor of a stochastic process.\n\nRMABPs have been widely used in scheduling problems, such as channel detecting (see \\cite{liu2012learning,wang2019whittle}), job assignments in data centers (see \\cite{fu2016asymptotic}), web crawling (see \\cite{avrachenkov2016whittle}), target tracking (see \\cite{krishnamurthy2007structured,le2010scheduling}) and job admission control (see \\cite{nino2012admission,nino2019resource}).\nHere we treat the resource allocation problem described above as a set of RMABPs coupled by linear inequalities involving random state and action variables.\n\n\n\\subsection{Main Contributions}\n\n\n\n\nWe propose a modified \\emph{index policy} that takes into account the capacity constraints of the problem. The index policy prioritizes combinations of RUs with the highest indices, each of which is a real number representing the marginal revenue of using its associated RUs. The policy is simple, scalable and appropriate for a large-scale resource allocation problem.\n\n\n \nOur analysis of asymptotic optimality of the index policy proceeds through a relaxed version of the problem and the study of a global attractor of a stochastic process defined in \\eqref{eqn:z_process} below.\nWe prove that the process \\eqref{eqn:z_process} will almost surely converge to a global attractor in the asymptotic regime regardless of its initial point, and hence the index policy is asymptotically optimal if and only if this global attractor coincides with an optimal solution of the resource allocation problem. 
\nFollowing ideas similar to those of \\cite{weber1990index}, optimality of the global attractor for the resource allocation problem can be deduced from its optimality for the relaxed problem, which can be analyzed with remarkably reduced computational complexity. \n\nA sufficient condition for the global attractor and optimal solution to coincide is that the offered traffic for the entire system is \\emph{heavy} and the resource pools in our system are \\emph{weakly coupled}.\nWe rigorously define these concepts in Section~\\ref{subsec:sufficient_condition}.\nThese results are enunciated in Theorems~\\ref{theorem:main_second}, \\ref{theorem:main} and Corollary~\\ref{coro:main} in Section~\\ref{sec:asym_opt}.\n\nWhen the above-mentioned sufficient conditions are not satisfied, an asymptotically optimal index policy can still exist. In this case, we propose a method that can derive the parameters required by the asymptotically optimal policy.\nAlthough asymptotic optimality is not guaranteed, \nTheorem~\\ref{theorem:main} provides a verifiable sufficient condition, less stringent than the one mentioned above, to check asymptotic optimality of the index policy with adapted parameters. \nWe numerically demonstrate the effectiveness of this method in Section~\\ref{sec:example}.\n\n\nThe index policy exhibits remarkably reduced computational complexity, compared to conventional optimizers, and its potential asymptotic optimality is appropriate for large-scale systems where computational power is a scarce commodity. Furthermore, simulation studies indicate that an index policy can still be good in the pre-limit regime.\nAs mentioned earlier, our problem can be seen as a set of RMABPs coupled by the capacity constraints. 
When the capacities of all resource pools tend to infinity, the index policy reduces to the Whittle index policy because the links between RMABPs no longer exist.\n\n\nTo the best of our knowledge, no existing work has proved asymptotic optimality in resource allocation problems, where resource competition and reservation are potentially permitted, nor has there been a previous analysis of such a combination of multiple, different RMABPs, resulting in a much higher dimensionality of the state space.\n\n\nThe remainder of the paper is organized as follows.\nIn Section~\\ref{sec:model}, we describe the resource allocation problem.\nIn Section~\\ref{sec:relaxation}, we apply the Whittle relaxation technique. \nIn Section~\\ref{sec:index_policy}, we propose an algorithm to implement an index policy.\nIn Section~\\ref{sec:asym}, we define the asymptotic regime and we prove the asymptotic optimality of the index policy under some conditions.\nTo demonstrate the effectiveness of the proposed policies, numerical results are provided in Section~\\ref{sec:example}.\nIn Section~\\ref{sec:conclusions}, we present conclusions.\n\n\n\n\n\n\\subsection{Relation to the Literature}\nThe classical Multi-Armed Bandit Problem (MABP) is an optimization problem in which only one bandit process (BP) among $K$ BPs can be activated at any one time, while all the other $K-1$ BPs are \\emph{frozen}: an active BP randomly changes its state, while state transitions will not happen to the frozen BPs.\nIn 1974, Gittins and Jones published the well-known \\emph{index theorem} for the MABP \\cite{gittins1974dynamic}, and in 1979, \\cite{gittins1979bandit} proved the optimality of a simple \\emph{index policy}, subsequently referred to as the \\emph{Gittins index policy}.\nUnder the Gittins index policy, an index value, referred to as the \\emph{Gittins index}, is associated with each state of each BP, and the BP with the largest index value is activated, while all the other BPs are frozen.\nMore 
details about Gittins indices can be found in \\cite[Chapter 2.12]{gittins2011multiarmed} (and the references therein).\n\nThe optimality of the Gittins index policy for the conventional MABP fails for the general case where the $K-1$ BPs that are not selected can also change their states randomly; such a problem is known as a Restless Multi-Armed Bandit Problem (RMABP).\nThe RMABP was proposed by \\cite{whittle1988restless}.\nThe RMABP allows $M=1,2,\\ldots,K$ BPs to be active simultaneously.\nIn a similar vein to the Gittins index policy, Whittle assigned a state-dependent index value, referred to as the \\emph{Whittle index}, to each BP and always activated the $M$ BPs with the highest indices. \nThe Whittle indices are calculated from a \\emph{relaxed} version of the original RMABP obtained by randomizing the action variables. \n\\cite{whittle1988restless} defined a property of an RMABP, referred to as \\emph{indexability}, under which the \\emph{Whittle index policy} exists.\nWhittle conjectured in \\cite{whittle1988restless} that the Whittle index policy, if it exists, is \\emph{asymptotically optimal}.\n\\cite{papadimitriou1999complexity} proved that the optimization of RMABPs is PSPACE-hard in general;\nnonetheless, \\cite{weber1990index} were able to establish asymptotic optimality of the Whittle index policy under mild conditions. \n\n\n\\cite{nino2001restless} proposed a Partial Conservation Law\n(PCL) for the optimality of RMABP; this is an\nextension of the General Conservation Law (GCL) published in\n\\cite{bertsimas1996conservation}. \nLater, \\cite{nino2002dynamic} defined a group of problems that satisfies\nPCL-indexability and proposed a new index policy that improved the\nWhittle index. \nThe new index policy was proved to be optimal for problems with PCL-indexability. \nPCL-indexability implies (and is stronger than) Whittle indexability.\nA detailed survey about the optimality of bandit problems can be found in \\cite{nino2007dynamic}. 
\n\n\n\\cite{verloop2016asymptotically} proved the asymptotic optimality of the Whittle index policy in an extended version of an RMABP, where BPs randomly arrive and depart the system. She proposed an index policy that was not restricted to Whittle indexable models and numerically demonstrated its near-optimality. \n\\cite{larranaga2015asymptotically} applied this extended RMABP to a queueing problem assuming convex, non-decreasing functions for both holding costs and measured values of people's impatience.\nMore results on asymptotic optimality of index-like polices can be\nfound in \\cite[Chapter IV]{fu2016thesis}. \n\n\n\n\n\n\nAsymptotically optimal policies for cost-minimization problems in network systems using a fluid approximation technique have been considered in \\cite{bauerle2000asymptotic,bauerle2002optimal,stolyar2004maxweight,\nnazarathy2009near} and \\cite{bertsimas2015robust}.\nThe fluid approximation to the stochastic optimization problem can be much simpler than the original.\nA key problem here is to establish an appropriate fluid problem and translate its optimal solution to a policy amenable to the stochastic problem.\nAsymptotic optimality of the translated stochastic policy can be established if the fluid solution provides an upper\/lower bound of the stochastic problem and the policy coincides with this bound asymptotically.\nThe reader is referred to \\cite{meyn2008control} for a detailed description of fluid approximation across various models. 
\n\n\n\nAlthough the fluid approximation technique helps with asymptotic analysis in a wide range of (cost-minimization) network problems, existing results cannot be directly applied to our problem, where the arrival and departure rates of request queues are state-dependent and capacity violation over resource pools is strictly forbidden.\nOur system is always stable for any offered traffic because of the strict capacity constraints.\nIn our case, the form of the corresponding fluid model remains unclear for generic policies. \nEven given the optimal solution of a well-established fluid model, the synthesis of an explicit policy in the stochastic model remains a challenge.\n\nWe adopt another approach, following the ideas of \\cite{whittle1988restless} and \\cite{weber1990index}.\nOur asymptotic optimality is derived from an optimal solution of a relaxed version of the stochastic optimization problem. The relaxed problem is still a stochastic optimization problem with a discrete state space. We propose a policy based on intuition captured by the relaxed problem, of which the optimal solution provides a performance upper bound of the original problem.\nThen, we prove, under certain conditions, that this policy coincides with the upper bound asymptotically. 
\nThe detailed analysis comprises the main content of the paper.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{A Resource Allocation Problem}\\label{sec:model}\n\n\n\n\nWe use $\\mathbb{N}_{+}$ and $\\mathbb{N}_{0}$ to denote the sets of positive and non-negative integers, respectively, and for any $N\\in\\mathbb{N}_{+}$, let $[N]$ represent the set $\\{1,2,\\ldots,N\\}$ with $[0]=\\emptyset$.\nLet $\\mathbb{R}$, $\\mathbb{R}_{+}$ and $\\mathbb{R}_{0}$ be the set of all, positive and non-negative reals, respectively.\n\n\n\n\\subsection{System Model}\\label{subsec:model}\n\nRecall that there are $L$ types of requests and $J$ pools of RUs, all\npotentially different, with resource pool $j\\in[J]$ having capacity $C_j$\nRUs that can be dynamically allocated to and released by the $L$ types\nof requests.\n\n\nEach request comes with an associated list of candidate resource combinations. \nSpecifically, requests from \\emph{request type} (RT) $\\ell \\in [L]$ can be accommodated by one of a set $\\mathscr{P}_{\\ell}$ of candidate \\emph{patterns}.\nOne of these candidate patterns will be selected by a policy.\nPatterns are indexed by $i\\in\\mathbb{N}_+$. \nIf a request is accommodated by pattern $i$, $w_{j,i}$\nRUs of pool $j\\in [J]$ are occupied until the request is completed and\ndeparts. \nWe can thus identify pattern $i$ with the \\emph{weight vector} $\\bm{w}_i = (w_{j,i})$ that defines its requirement.\nPreemption or re-allocation of requests\nis not allowed.\nA request is blocked if there is not enough capacity on any\nof its corresponding patterns. We might also want to block a request in\nother circumstances, if accepting it would be detrimental to future\nperformance. In either case, we model the situation where a request\nis blocked by assigning it to the dummy pattern $d(\\ell)$ with the weight vector set to $\\bm{0}$.\n\n\n\n\n\n\n\nIt is possible for different RTs to be satisfied by the same pattern (this occurs, in particular, with the dummy pattern). 
In such cases, we consider there to be multiple copies of each pattern, one for each RT that it can satisfy. \nThis enables us to consider the sets $\\mathscr{P}_\\ell$ \nto be mutually\nexclusive; that is,\n$\\mathscr{P}_{\\ell_1}\\cap \\mathscr{P}_{\\ell_2} = \\emptyset$ for any\n$\\ell_1\\neq \\ell_2$. Given\n$|\\mathscr{P}_{\\ell}|$ patterns for each RT $\\ell$, we have in total\n$I = \\sum_{\\ell\\in[L]}|\\mathscr{P}_{\\ell}|$ patterns \nassociated with weight vectors $\\bm{w}_i\\in\\mathbb{N}_0^{J}$,\n$i\\in[I]$.\nFor any pattern $i$, let $\\ell(i)$ be the unique RT that is satisfied by that pattern.\n\nLet $\\mathcal{W}$ be a $J\\times I$ matrix with entries $w_{j,i}$.\nWe assume that $\\mathcal{W}$ has no all-zero rows, and exactly $L$ all-zero columns.\nEach of these zero columns corresponds to one of the dummy patterns $d(\\ell)$ where requests of type $\\ell\\in[L]$ are blocked.\n\nRequests of RT $\\ell$ arrive at the system sequentially, following independent Poisson processes with rates $\\lambda_{\\ell}$, \nand the occupation times of the requests accommodated by pattern $i\\in\\mathscr{P}_{\\ell}$ are exponentially distributed with parameter $\\mu_{i}$. \nAlthough there might be situations when it is reasonable to assume that the occupation time depends only on the request type $\\ell$, there might also be cases where the lifetime of a request depends on the resources accommodating it, which is why we allow the occupation time distribution to depend on $i$.\nThe RUs used to accommodate a request are all occupied at the same time, and are all released at the same time.\nNeither the request nor the system knows the lifespan of a request until it is completed and departs the system.\n\n\n\nSince there are similarities between our problem and a parallel\nqueueing model, we present a second example to clarify the similarities and differences. 
\nConsider two\nresource pools corresponding to two queues as illustrated in\nFigure~\\ref{fig:queue_model}, where both capacities are set to three; that is, $J=2$ and $C_1=C_2=3$. There are two types\nof requests: if a type-one request is accommodated in the system, it\nwill simultaneously occupy one RU of both pools;\nand a type-two request can be accommodated by two RUs of either\npool. In other words, $L=2$, $\\mathscr{P}_{1}=\\{1,2\\}$,\n$\\mathscr{P}_2=\\{3,4,5\\}$, patterns $2$ and $5$ are dummy patterns\nwith $\\bm{w}_2=\\bm{w}_5=\\bm{0}$, $\\bm{w}_1=(1,1)$, $\\bm{w}_3=(2,0)$,\n$\\bm{w}_4=(0,2)$ and $I=5$. \n\n\nIn this case, upon an arrival or departure event, the numbers of occupied RUs in the two resource pools may increase or decrease\nby one simultaneously (a type-one request), or by two in a single pool (a type-two request).\nThe transition rates are affected by the system\ncontroller: if the capacity constraints are not violated, there are\ntwo choices, resource pool one or two, for accommodating a type-two request. The task of a system manager is to find a policy for deciding which of these choices to take in order to maximize some long-term objective.\nEach choice will result in a parallel queueing model with dependencies\nbetween the sizes of the queues, and between the policy employed and\nthe queue transition rates. As mentioned in Section~\\ref{sec:introduction},\nconventional optimization methods cannot be applied directly when $L$ and $J$ are large.\n\n\n\n\n\n\n\\subsection{A Stochastic Optimization Problem}\\label{subsec:RMABPs:general_case}\n\nWe focus here on the stochastic mechanism of\nthe resource allocation problem.\n\n\nAn \\emph{instantiation} is \ngenerated in the memory of the system controller\nwhen a request of RT $\\ell\\in[L]$ is accommodated by a pattern $i\\in\\mathscr{P}_{\\ell}$. \nOnce the request departs the system, the associated instantiation will\nbe removed from the controller's memory. 
As requests are accommodated\nand completed, the number of instantiations associated with each\npattern forms a birth-and-death process, indicating the\nnumber of requests being served by this pattern. As mentioned\nin the second example, the birth-and-death processes for all patterns\n$i\\in[I]$ are coupled by capacity constraints and affected by control\ndecisions.\n\n\nLet $N_i(t)$, $t\\geq 0$, represent the number of instantiations\nfor pattern $i$ at time $t$. The\nprocess $N_i(t)$ has state space $\\mathscr{N}_i$, a discrete,\nfinite set of possible values. The finiteness of $\\mathscr{N}_i$ derives from the finite\ncapacities $C_j$. \nIf $N_i (t)$ is known for all $i \\in [I]$, \nthe number of occupied RUs in pool $j \\in [J]$ at time $t$ is given by\n$S_j(t)=\\sum_{i \\in [I]} w_{j,i}N_i(t)$, which must not exceed $C_j$.\nThe vector $\\bm{N}(t)=(N_i(t):\\ i\\in[I])$ is the state variable of the entire system taking values in\n$\\mathscr{N}\\coloneqq\\prod_{i\\in[I]}\\mathscr{N}_i$,\nwhere $\\prod$ represents the Cartesian product.\nSince the state variables are further subject to the capacity constraints to be discussed in Section~\\ref{subsubsec:capacity_constraints}, $\\mathscr{N}$ is larger than necessary. With a slight abuse of notation, we still refer to $\\mathscr{N}$ as the state space of the system. \n\n\\subsubsection{Action Constraints}\\label{subsubsec:action_constraints}\nWe associate an action variable $a_i(\\bm{n})\\in\\{0,1\\}$ with\nprocess $i\\in[I]$ when the system is\nin state $\\bm{n}\\in\\mathscr{N}$, and write $\\bm{a}(\\bm{n})=(a_i(\\bm{n}):\\ i\\in[I])$. The action variable $a_i(\\bm{n})$ tells us what to do with a potential new request of type $\\ell(i)$. 
If $a_i(\\bm{n}) =1$, then such a request will be allocated to pattern $i$.\nThe \\emph{action constraint},\n\\begin{equation}\\label{eqn:constraint:action}\n\\sum\\limits_{i\\in\\mathscr{P}_{\\ell}} a_i(\\bm{n}) = 1,~\n\\forall \\ell\\in[L],~\n\\forall \\bm{n}\\in\\mathscr{N},\n\\end{equation}\nensures that exactly one pattern (which may be the dummy pattern $d(\\ell)$) is selected for each RT $\\ell$ and current state $\\bm{n}$.\n\n\nAt any time $t$, we say that the arrival process for pattern $i$ is \\emph{active} or \\emph{passive} according to whether $a_i(\\bm{N}(t))$ is $1$ or $0$, respectively.\nThe birth rate of process $i\\in\\mathscr{P}_{\\ell}$,\n$\\ell\\in[L]$, is $\\lambda_{\\ell}$ if $a_i(\\bm{N}(t))=1$,\nand zero otherwise. The death rate of process $i$ is\n$\\mu_{i} N_{i}(t)$.\nThe long-run proportion of time for which $a_{d(\\ell)}(\\bm{N}(t))=1$ is the \\emph{blocking probability} for requests of type $\\ell$.\n \n \n\n\\subsubsection{Capacity Constraints}\\label{subsubsec:capacity_constraints}\n\nTo ensure feasibility of an allocation of a request of type $\\ell(i)$ to pattern $i$ when the state is $\\bm{n}$, we need \n\\begin{equation}\\label{eqn:constraint:resourcesi}\n\\mathcal{W}\\left(\\bm{n}+ \\bm{e}_i\\right) \\leq \\bm{C},~\n\\end{equation}\nwhere $\\bm{e}_i$ is a vector with a one in the $i$th position and zeros everywhere else and $\\bm{C}\\in\\mathbb{N}_{+}^{J}$ is a vector with entries $C_j$. 
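As a quick numerical aside (not part of the model itself), the feasibility check $\\mathcal{W}(\\bm{n}+\\bm{e}_i)\\leq\\bm{C}$ can be sketched for the two-pool example introduced earlier; the state $\\bm{n}$ used below is an assumed illustration.

```python
# Minimal sketch of the feasibility check W(n + e_i) <= C, using the
# two-pool example data (J = 2, C_1 = C_2 = 3, I = 5; patterns 2 and 5
# are dummies with zero weight columns).
import numpy as np

W = np.array([[1, 0, 2, 0, 0],   # row j = 1: RUs of pool one used per pattern
              [1, 0, 0, 2, 0]])  # row j = 2: RUs of pool two used per pattern
C = np.array([3, 3])

def can_allocate(n, i):
    """True if one more instantiation of pattern i fits in every pool."""
    e_i = np.zeros(W.shape[1], dtype=int)
    e_i[i] = 1
    return bool(np.all(W @ (n + e_i) <= C))

n = np.array([1, 0, 1, 0, 0])    # assumed state: one type-one, one type-two request
print(can_allocate(n, 0))        # pattern 1: pool loads would become (4, 2) -> False
print(can_allocate(n, 3))        # pattern 4: pool loads would become (3, 3) -> True
```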
In view of the action constraint (\\ref{eqn:constraint:action}), a neat way to collect together the constraints (\\ref{eqn:constraint:resourcesi}) for all $i \\in \\mathscr{P}_{\\ell}$ is to write them in the form\n\\begin{equation}\\label{eqn:resources1}\n\\mathcal{W}\\left(\\bm{n}+\\mathcal{E}_{\\ell}\\bm{a}(\\bm{n})\\right) \\leq \\bm{C},~\\forall \\bm{n}\\in\\mathscr{N},\n\\end{equation}\nwhere $\\mathcal{E}_{\\ell}$ is a diagonal matrix of size $I$ with entries $e_{\\ell,i,i}=1$ if $i\\in\\mathscr{P}_{\\ell}$ and $e_{\\ell,i,i}=0$ if $i\\in[I]\\backslash \\mathscr{P}_{\\ell}$.\n\n\n\nFor two different request types $\\ell_1$ and $\\ell_2$, a constraint of the form \n\\begin{equation}\\label{eqn:resources12}\n\\mathcal{W}\\left(\\bm{n}+\\mathcal{E}_{\\ell_1}\\bm{a}(\\bm{n})+\\mathcal{E}_{\\ell_2}\\bm{a}(\\bm{n})\\right) \\leq \\bm{C},~\\forall \\bm{n}\\in\\mathscr{N},\n\\end{equation} \ncaptures the idea that the action vector $\\bm{a}(\\bm{n})$ must be such that the allocation decisions for $\\ell_1$ and \n$\\ell_2$ ensure enough capacity to implement both of them when both requests arrive simultaneously while the state is\n$\\bm{n}$. Another way to think about this is that, if a request of type $\\ell_1$ is allocated to a non-dummy pattern $i_1$ when the state is $\\bm{n}$, the decision for a request of type $\\ell_2$ when the state is $\\bm{n}$ must satisfy constraint \\eqref{eqn:resources1} when the state is $\\bm{n}+\\bm{e}_{i_1}$. In particular, if there is not enough capacity to accommodate a request of type $\\ell_2$ when the state is $\\bm{n}+\\bm{e}_{i_1}$, then a request of type $\\ell_2$ must be allocated to the dummy pattern $d(\\ell_2)$, when the state is $\\bm{n}$.\nThis can be viewed as giving priority to \\emph{reserving} resources for a type $\\ell_1$ request over a type $\\ell_2$ request\nwhen the state is $\\bm{n}$. 
\nAs we shall see below, the decision to do this will be made in order to optimize a long-term reward function.\n\n\n\nObserving that $\\sum_{\\ell \\in [L]} \\mathcal{E}_{\\ell}$ is the $I\\times I$ identity matrix, we see that the constraint \n\\begin{equation}\\label{eqn:constraint:resources}\n\\mathcal{W}\\left(\\bm{n}+ \\bm{a}(\\bm{n})\\right) = \\mathcal{W}\\Bigl(\\bm{n}+ \\bigl(\\sum_{\\ell \\in [L]} \\mathcal{E}_{\\ell}\\bigr) \\bm{a}(\\bm{n})\\Bigr) \\leq \\bm{C},~\n\\forall \\bm{n}\\in\\mathscr{N},\n\\end{equation}\ncan be thought of as an extended version of \\eqref{eqn:resources12}.\nIn \\eqref{eqn:constraint:resources}, requests of all types are taken into account when the state is $\\bm{n}$ and allocation decisions for some types are made in order to reserve resources for other types that turn out to be more profitable in the long run. In particular, resources are reserved for those request types $\\ell$ which are allocated to non-dummy patterns $i$ at the expense of those types that are allocated to less profitable patterns or the corresponding dummy patterns.\nIn this paper, all the results presented are based on capacity constraint~\\eqref{eqn:constraint:resources}.\n\n\nFrom \\eqref{eqn:constraint:resources}, there is an upper bound, $\\min_{j\\in[J]:\\ w_{j,i}>0}\\lfloor C_j\/w_{j,i} \\rfloor$, on the number of instantiations of a non-dummy pattern $i$ (the minimum is taken over the pools actually used by pattern $i$), and this serves as a bounding state at which no further instantiation of this pattern can be added; that is, $\\mathscr{N}_i=\\{0,1,\\ldots,\\min_{j\\in[J]:\\ w_{j,i}>0}\\lfloor C_j\/w_{j,i}\\rfloor\\}$ and $|\\mathscr{N}_i|=\\min_{j\\in[J]:\\ w_{j,i}>0}\\lfloor C_j\/w_{j,i}\\rfloor +1 <+\\infty$.\nIn this context, Equation~\\eqref{eqn:constraint:resources} implies the condition\n\\begin{equation}\\label{eqn:constraint:resources:zero}\na_i(\\bm{n}) = 0, ~\\text{if}~i\\notin \\{d(\\ell):\\ \\ell\\in[L]\\} \\text{ and } n_i = |\\mathscr{N}_i|-1.\n\\end{equation}\n\n\n\n\\subsubsection{Objective}\nA \\emph{policy} $\\phi$ is defined as a mapping $\\mathscr{N}\\rightarrow \\mathscr{A}$ 
where\n$\\mathscr{A}\\coloneqq\\prod_{\\ell\\in[L]}\\{0,1\\}^{|\\mathscr{P}_{\\ell}|}$,\ndetermined by the action variables $\\bm{a}(\\bm{n})$ defined above.\nWhen we are discussing a system operating with a given policy $\\phi$, we rewrite the action and state variables as $\\bm{a}^{\\phi}(\\cdot)$ and $\\bm{N}^{\\phi}(t)$, respectively.\n\nBy serving a request of type $\\ell\\in[L]$ we gain expected reward $r_{\\ell}$, and by occupying an RU of pool\n$j$ for one unit of time we pay\nexpected cost $\\varepsilon_j$. \nThe expected reward for a whole service is gained at the moment the service is completed. \nIt corresponds to the situation where a request allocated to pattern $i$ earns reward at rate $r_{\\ell(i)} \\mu_i$ for as long as it is in the system (so that the expected revenue per customer is $(r_{\\ell(i)} \\mu_i) \\cdot (1\/\\mu_i) = r_{\\ell(i)}$).\nThe value of $\\varepsilon_j$ is the cost per unit time of using a unit of capacity from resource pool $j$, in which case the expected cost per RU of accommodating the request in pool $j$ as part of pattern $i$ is $\\varepsilon_j\/\\mu_i$.\nWe seek a policy that\nmaximizes the \\emph{revenue}: the difference between expected reward\nand cost, by efficiently utilizing the limited amount of\nresources. \n\nThe objective is to maximize the long-run average rate of earning revenue, which exists because, for any policy $\\phi$, the process can be modeled by a finite-state Markov chain.\nLet $\\bm{r}=(r_{\\ell}:\\ \\ell\\in[L])$ and $\\bm{\\varepsilon}=(\\varepsilon_{j}:\\ j\\in[J])$. 
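As a small illustration of these reward and cost rates (with all numerical values assumed, not taken from the model), the net expected revenue rate of one instantiation of a pattern can be sketched as follows:

```python
# Sketch of the net expected revenue rate of a single instantiation of a
# pattern: reward accrues at rate r_{ell(i)} * mu_i while the request is
# in the system, and capacity costs accrue at rate sum_j w_{j,i} * eps_j.
# All numerical values below are assumed for illustration.
def net_rate(r, mu, w_i, eps):
    reward_rate = r * mu                                  # expected reward r per service, at rate r * mu
    cost_rate = sum(wj * ej for wj, ej in zip(w_i, eps))  # eps_j per occupied RU per unit time
    return reward_rate - cost_rate

# Pattern 1 of the two-pool example with assumed r = 4, mu = 0.5, eps = (0.1, 0.2):
print(net_rate(4.0, 0.5, (1, 1), (0.1, 0.2)))  # approximately 1.7
```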
\nFor all $\\ell\\in[L]$ and $i\\in[I]$, define an $L\\times I$ matrix $\\mathcal{U}$ with entries\n$u_{\\ell,i}\\coloneqq \\mu_{i}\\mathds{1}_{i\\in\\mathscr{P}_{\\ell}}$.\nBy the Strong Law of Large Numbers for Continuous Time Markov Chains (see, for example, \\cite{serfozo2009basics}, Theorem 45 in Chapter 4, and the subsequent discussion of the case where rewards are earned at jump times), the long-run average rate of earning revenue when the policy is $\\phi$ is given by\n\\begin{equation}\nR^{\\phi} \\coloneqq \\mathbb{E}_{\\pi^{\\phi}} \\Bigl[(\\bm{r}\\mathcal{U} - \\bm{\\varepsilon}\\mathcal{W})\\bm{N}^{\\phi}\\Bigr] =\\sum\\limits_{i\\in[I]} \\sum_{n_i\\in\\mathscr{N}_i} \\pi_i^{\\phi}(n_i) \\Bigl(r_{\\ell(i)} \\mu_i - \\sum_{j\\in[J]} w_{j,i} \\varepsilon_j\\Bigr)n_i,\n\\end{equation}\nwhere $\\pi_i^{\\phi}(n_i)$ is the stationary probability that the state of process $i$ is $n_i$ when the policy is $\\phi$. Then we wish to find the policy $\\phi$ that maximizes $R^\\phi$; that is, we wish to find\n\\begin{equation}\\label{eqn:objective}\nR \\coloneqq \\max\\limits_{\\phi}R^{\\phi}.\n\\end{equation}\nDefine\n$\\Phi$ to be the set of all policies\nsatisfying the constraints in~\\eqref{eqn:constraint:action} and \\eqref{eqn:constraint:resources}.\nEach policy in $\\Phi$ is then a \\emph{feasible policy} for our resource allocation problem.\n\n\n\\section{Whittle Relaxation}\\label{sec:relaxation}\nOur resource\nallocation problem with objective function defined by \\eqref{eqn:objective} and constraints given by \\eqref{eqn:constraint:action} and \\eqref{eqn:constraint:resources} can be modeled as a set of RMABPs coupled by\ncapacity constraints.\nWe leave the\nspecification of the RMABPs to Appendix~\\ref{app:RMABPs}.\n\n\n\nIn this section, we provide a theoretical analysis of the resource allocation problem,\nfollowing the idea of Whittle relaxation \\cite{whittle1988restless}.\nIn the vein of an RMABP, \nwe randomize the action variable 
$\\bm{a}^{\\phi}(\\bm{n})$ so that its elements take values from $\\{0,1\\}$ with probabilities determined by the policy $\\phi$ and relax constraint~\\eqref{eqn:constraint:action} to require that\n\\begin{equation}\\label{eqn:constraint:relax:action}\n\\lim\\limits_{t\\rightarrow +\\infty}\\mathbb{E}\\biggl[\\sum\\limits_{i\\in\\mathscr{P}_{\\ell}} a^{\\phi}_i(\\bm{N}^{\\phi}(t)) \\biggr]= 1,~\n\\forall \\ell\\in[L].\n\\end{equation}\nFollowing similar ideas, we relax \\eqref{eqn:constraint:resources} into two equations:\n\\begin{equation}\\label{eqn:constraint:relax:resources}\n\\lim\\limits_{t\\rightarrow +\\infty}\\mathbb{E}\\biggl[\\mathcal{W}\\Bigl(\\bm{N}^{\\phi}(t)+ \\bm{a}^{\\phi}(\\bm{N}^{\\phi}(t))\\Bigr)\\biggr] \\leq \\bm{C},\n\\end{equation}\nand \n\\begin{equation}\\label{eqn:constraint:dummy}\n\\lim\\limits_{t\\rightarrow+\\infty} \\mathbb{E}\\Bigl[a^{\\phi}_i(\\bm{N}^{\\phi}(t))\\ \\mathds{1}_{N^{\\phi}_i(t)=|\\mathscr{N}_{i}|-1}\\Bigr] = 0,~\\forall i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}.\n\\end{equation}\n{\\bf Remark} \nEquation~\\eqref{eqn:constraint:relax:resources} \nis derived by taking expectations for both sides of Equation~\\eqref{eqn:constraint:resources},\nand \\eqref{eqn:constraint:dummy} is a consequence of \\eqref{eqn:constraint:resources:zero}, \nso constraints described by~\\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy} are relaxed versions of the constraints described by~\\eqref{eqn:constraint:resources}.\nThe justification for Equation~\\eqref{eqn:constraint:dummy} will be discussed in Section~\\ref{subsec:asym_regime}, in conjunction with the physical meanings of all variables, when we increase the scale of the entire system.\nWe refer to the problem with objective~\\eqref{eqn:objective}, constraints~\\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy} and randomized control variables $\\bm{a}^{\\phi}(\\bm{n})$, for all 
$\\bm{n}\\in\\mathscr{N}$, as the \\emph{relaxed problem}.\n\n\nA value $a$ in $(0,1)$ can be interpreted as a randomization between taking $a_i^{\\phi}(\\bm{n}) = 0$ and $a_i^{\\phi}(\\bm{n}) = 1$. Specifically, we take $a_i^{\\phi}(\\bm{n}) = 1$ with probability $a$ and $a_i^{\\phi}(\\bm{n}) = 0$ otherwise.\nWe represent the set of policies that correspond to assigning such values $a\\in(0,1)$ as $\\tilde{\\Phi}$.\nFor $n_i\\in \\mathscr{N}_i$, $\\phi\\in\\tilde{\\Phi}$, $i\\in[I]$, define\n\\begin{itemize}\n\\item $\\alpha^{\\phi}_i(n_i) \\coloneqq \\lim\\limits_{t\\rightarrow+\\infty} \\mathbb{E}\\left[a^{\\phi}_i(\\bm{N}^{\\phi}(t))\\ |\\ N^{\\phi}_i(t)=n_i\\right]$, \nwhich is the expectation with respect to the stationary distribution when policy $\\phi$ is used,\nand the vector $\\bm{\\alpha}^{\\phi}_i \\coloneqq (\\alpha^{\\phi}_i(n_i) :\\ n_i\\in\\mathscr{N}_i)$;\n\\item $\\pi_{i,n_{i}}^{\\phi}$ to be the stationary probability that $N^{\\phi}_i(t)=n_i$ under policy $\\phi$, and the vector $\\bm{\\pi}^{\\phi}_i \\coloneqq (\\pi_{i,n_i}^{\\phi}:\\ n_i\\in\\mathscr{N}_i)$.\n\\end{itemize}\nLet $\\bm{\\Pi}^{\\phi}_n \\coloneqq \\left( \\bm{\\pi}^{\\phi}_{i}\\cdot (\\mathscr{N}_i):\\ i\\in[I]\\right)^{T}$ and $\\bm{\\Pi}^{\\phi}_a \\coloneqq \\left(\\bm{\\pi}^{\\phi}_{i}\\cdot \\bm{\\alpha}^{\\phi}_i:\\ i\\in[I]\\right)^{T}$, where \n$(\\mathscr{N}_i)$ represents the vector $(0,1,\\ldots,|\\mathscr{N}_i|-1)$.\nThe Lagrangian function for the optimization problem with objective function (\\ref{eqn:objective}) and constraints \\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy} is\n\\begin{multline}\\label{eqn:dual_func}\ng(\\pmb{\\gamma},\\bm{\\nu},\\bm{\\eta})\\coloneqq\n\\max\\limits_{\\phi\\in\\tilde{\\Phi}}\n(\\bm{r}\\mathcal{U}-\\bm{\\varepsilon}\\mathcal{W})\\bm{\\Pi}^{\\phi}_n-\\sum\\limits_{\\ell=1}^{L}\\nu_{\\ell}\\Bigl(\\sum\\limits_{i\\in\\mathscr{P}_{\\ell}}\\bm{\\pi}^{\\phi}_i\\cdot \\bm{\\alpha}^{\\phi}_i -1 \\Bigr)\\\\\n- 
\\pmb{\\gamma}\\cdot \\Bigl(\\mathcal{W}(\\bm{\\Pi}^{\\phi}_n+\\bm{\\Pi}^{\\phi}_a)-\\bm{C}\\Bigr)\n-\\sum\\limits_{i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}}\\eta_i \\pi^{\\phi}_{i,|\\mathscr{N}_i|-1} \\alpha^{\\phi}_{i}(|\\mathscr{N}_i|-1),\n\\end{multline}\nwhere\n$\\bm{\\nu}\\in\\mathbb{R}^{L}$, $\\pmb{\\gamma}\\in\\mathbb{R}_{0}^{J}$ and $\\bm{\\eta}\\in\\mathbb{R}^{I-L}$ are Lagrange multiplier vectors for constraints~\\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy}, respectively. \nIn~\\eqref{eqn:dual_func}, the constraints no longer apply to variables $\\bm{\\alpha}^{\\phi}_i$ ($i\\in[I]$) but appear in the maximization as cost items weighted by their Lagrange multipliers.\nFor $i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}$,\ndefine functions \n\\begin{multline}\\label{eqn:dual_func_i}\n\\Lambda_{i}(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)\n\\coloneqq (r_{\\ell(i)}\\mu_i-\\bm{\\varepsilon}\\cdot\\bm{w}_i)\\bm{\\pi}^{\\phi}_i\\cdot (\\mathscr{N}_i)-\\nu_{\\ell(i)}\\bm{\\pi}^{\\phi}_i\\cdot \\bm{\\alpha}^{\\phi}_i \n- \\pmb{\\gamma}\\cdot \\bigl(\\bm{w}_i(\\bm{\\pi}^{\\phi}_i\\cdot (\\mathscr{N}_i)+\\bm{\\pi}^{\\phi}_i\\cdot \\bm{\\alpha}^{\\phi}_i)\\bigr)\\\\\n-\\eta_i \\pi^{\\phi}_{i,|\\mathscr{N}_i|-1}\\alpha^{\\phi}_{i}(|\\mathscr{N}_i|-1),\n\\end{multline}\nwhere we recall that $\\bm{w}_i$ is the weight vector of pattern $i$ given by the $i$th column vector of $\\mathcal{W}$; similarly, for $\\ell\\in[L]$, $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ and $\\eta\\in\\mathbb{R}$, define\n$\\Lambda_{d(\\ell)}(\\phi,\\pmb{\\gamma},\\nu_{\\ell},\\eta) \\coloneqq -\\nu_{\\ell}\\alpha^{\\phi}_{d(\\ell)}(n)$, where $n$ is the only state in $\\mathscr{N}_{d(\\ell)}$.\nFrom Equation \\eqref{eqn:dual_func}, for $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$, $\\bm{\\nu}\\in\\mathbb{R}^L$ and 
$\\bm{\\eta}\\in\\mathbb{R}^{I-L}$,\n\\begin{equation}\\label{eqn:dual_func2}\ng(\\pmb{\\gamma},\\bm{\\nu},\\bm{\\eta})=\\max\\limits_{\\phi\\in\\tilde{\\Phi}}\\sum\\limits_{i\\in[I]}\\Lambda_{i}(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)\n+\\sum\\limits_{\\ell\\in[L]}\\nu_{\\ell}\n+ \\pmb{\\gamma}\\cdot \\bm{C},\n\\end{equation}\nwhere $\\eta_{d(\\ell)}$ $(\\ell\\in[L])$ are unconstrained real numbers that are used for notational convenience. \n\n\n\n\n\nIn the maximization problem on the right hand side of~\\eqref{eqn:dual_func2}, there is no constraint that restricts the value of one $\\Lambda_i(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)$ once the others are known. \nAs a result, we can maximize the sum in \\eqref{eqn:dual_func2} by maximizing each of the summands independently. \nWe can thus write \\eqref{eqn:dual_func2} as\n\\begin{equation}\ng(\\pmb{\\gamma},\\bm{\\nu},\\bm{\\eta})=\\sum\\limits_{i\\in[I]}\\max\\limits_{\\phi\\in\\tilde{\\Phi}}\\Lambda_{i}(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)\n+\\sum\\limits_{\\ell\\in[L]}\\nu_{\\ell}\n+ \\pmb{\\gamma}\\cdot \\bm{C},\n\\end{equation}\nwhere the maximum over $\\phi\\in\\tilde{\\Phi}$ is now taken separately for each summand. \nObserve now that maximizing $\\Lambda_i$ over $\\phi$ is equivalent to choosing $\\bm{\\alpha}^{\\phi}_i$ from $[0,1]^{|\\mathscr{N}_i|}$, by interpreting $\\alpha^{\\phi}_{i}(n)\\in[0,1]$ as the probability that process $i$ is activated under policy $\\phi$ when it is in state $n$. 
Thus,\n\\begin{equation}\\label{eqn:dual_problem}\ng(\\pmb{\\gamma},\\bm{\\nu},\\bm{\\eta})\n=\\sum\\limits_{i\\in[I]}\\max\\limits_{\\bm{\\alpha}^{\\phi}_i\\in[0,1]^{|\\mathscr{N}_i|}}\\Lambda_{i}(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)\n+\\sum\\limits_{\\ell\\in[L]}\\nu_{\\ell} + \\pmb{\\gamma}\\cdot \\bm{C}.\n\\end{equation}\n\n\n\nBy slightly abusing notation, we refer to the policy $\\phi$ determined by an action vector $\\bm{\\alpha}^{\\phi}_i$ as the policy for pattern $i$,\nand define $\\Phi_{i}$ as the set of all policies for pattern $i$.\n\n\n\n\n\n\n\\begin{definition}\\label{def:sub-problem}\nThe maximization of $\\Lambda_{i}(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)$ over $\\bm{\\alpha}^{\\phi}_{i}\\in[0,1]^{|\\mathscr{N}_{i}|}$ is the \\emph{sub-problem} for pattern $i\\in[I]$. \n\\end{definition}\n\nFor given $\\pmb{\\gamma}$, $\\bm{\\nu}$ and $\\bm{\\eta}$, the sub-problem for any pattern is an MDP, so it can be solved numerically by dynamic programming. By solving the sub-problems for all patterns $i\\in[I]$, we obtain $g(\\pmb{\\gamma},\\bm{\\nu},\\bm{\\eta})$.\nFor any $\\pmb{\\gamma}$, $\\bm{\\nu}$ and $\\bm{\\eta}$, the Lagrangian function $g(\\pmb{\\gamma},\\bm{\\nu},\\bm{\\eta})$ is a performance upper bound for the primal problem described in \\eqref{eqn:objective}, \\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy}, which is a relaxed version of the original resource allocation problem. 
\nThus there will be a non-negative gap between this upper bound and the maximized performance of the original problem.\n\n\\subsection{Analytical Solutions}\\label{sec:indexability}\n\\begin{proposition}\\label{prop:a_opt}\nFor given $\\bm{\\nu}$ and $\\pmb{\\gamma}$, there exists $\\bm{E}\\in\\mathbb{R}^{I-L}$ such that, for any $\\bm{\\eta}>\\bm{E}$, a policy $\\bar{\\varphi}\\in\\Phi_i$ of the sub-problem for pattern $i$, determined by an action vector $\\bm{\\alpha}^{\\bar{\\varphi}}_i\\in[0,1]^{|\\mathscr{N}_i|}$, is optimal for this sub-problem if, for $n\\in\\mathscr{N}_i$,\n\\begin{numcases}{\\alpha^{\\bar{\\varphi}}_{i}(n)}\n=1 &\\text{if }$0<\\lambda_{\\ell}(r_{\\ell}-\\frac{1}{\\mu_{i}}\\sum\\limits_{j\\in \\mathscr{J}_i}\\varepsilon_{j}w_{j,i})\n- (1+\\frac{\\lambda_{\\ell}}{\\mu_{i}})\\sum\\limits_{j\\in \\mathscr{J}_i}w_{j,i}\\gamma_{j}-\\nu_{\\ell} \\text{ and } n< |\\mathscr{N}_i|-1$,\\label{eqn:a_opt:a}\\\\\n\\in [0,1]\n&\\text{if }$0=\\lambda_{\\ell}(r_{\\ell}-\\frac{1}{\\mu_{i}}\\sum\\limits_{j\\in \\mathscr{J}_i}\\varepsilon_{j}w_{j,i})\n- (1+\\frac{\\lambda_{\\ell}}{\\mu_{i}})\\sum\\limits_{j\\in \\mathscr{J}_i}w_{j,i}\\gamma_{j}-\\nu_{\\ell} \\text{ and } n< |\\mathscr{N}_i|-1$,\\label{eqn:a_opt:b}\\\\\n=0&\\text{otherwise},\\label{eqn:a_opt:c}\n\\end{numcases}\nwhere $\\ell=\\ell(i)$.\n\\end{proposition}\nThe proof will be given in Appendix~\\ref{app:prop:a_opt} in the e-companion to this paper.\nIn the maximization of $\\Lambda_i(\\phi,\\pmb{\\gamma},\\nu_{\\ell(i)},\\eta_i)$, the only term of $\\Lambda_i$ dependent on $\\bm{\\eta}$ is $-\\eta_i\\pi^{\\phi}_{i,|\\mathscr{N}_i|-1}\\alpha^{\\phi}_i(|\\mathscr{N}_i|-1)$.\nThe choice of a sufficiently large $\\eta_i$ guarantees that $\\alpha^{\\phi}_i(|\\mathscr{N}_i|-1)$ is $0$ for an optimal policy of the sub-problem, so that constraints~\\eqref{eqn:constraint:dummy} of the relaxed problem are satisfied. 
\nFor convenience, in what follows we fix $\\bm{\\eta}$ to be one of these large values \nso that $\\alpha^{\\phi}_i(|\\mathscr{N}_i|-1)$ is also fixed to be $0$ for any optimal policy $\\phi$ of the sub-problem for pattern $i$.\nBy slightly abusing notation, in all subsequent equations and discussions, we directly require $\\alpha^{\\phi}_i(|\\mathscr{N}_i|-1)=0$ ($i\\in[I]\\backslash\\{d(\\ell):\\ell\\in[L]\\}$) unless specified otherwise.\n\n\n{\\bf Remark}\nRecall that the action variables $\\bm{\\alpha}^{\\phi}_i$ for any pattern $i\\in[I]$ and policy $\\phi\\in\\Phi_i$ are potentially state-dependent. However, the right hand sides of equations~\\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} are independent of the state variable $n$ which appears on their left hand side, provided that this is less than $|\\mathscr{N}_i|-1$. This state-independence phenomenon is a consequence of the linearity of the reward and cost rates in the state variable, $N^{\\phi}_i(t)$, for pattern $i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}$: from our definition in Section~\\ref{sec:model}, the reward and cost rates of process $i$ in state $N^{\\phi}_i(t)$ are $r_{\\ell(i)}\\mu_{i}N^{\\phi}_i(t)$ and $\\sum_{j\\in\\mathscr{J}_i}\\varepsilon_j w_{j,i}N^{\\phi}_i(t)$, respectively.\nA detailed analysis is provided in the proof of Proposition~\\ref{prop:a_opt}.\n\n\n\n\n\n\nUsing an argument similar to that in \\cite{whittle1988restless}, we can derive from \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} an abstracted \\emph{priority} for each \\emph{pattern-state pair} (PS pair) $(i,n)$ with $n\\in\\mathscr{N}_i\\backslash\\{|\\mathscr{N}_i|-1\\}$ and $i\\in[I]$; unlike in \\cite{whittle1988restless}, here, this priority is $(\\pmb{\\gamma},\\bm{\\nu})$-dependent. 
\nThe priority of a PS pair $(i,n)$ with $n\\in\\mathscr{N}_i\\backslash\\{|\\mathscr{N}_i|-1\\}$\nis determined by the \\emph{index}\n\\begin{equation}\\label{eqn:index_value}\n\\Xi_{i}(\\pmb{\\gamma},\\bm{\\nu})\\coloneqq\\lambda_{\\ell(i)}\\Bigl(r_{\\ell(i)}-\\frac{1}{\\mu_{i}}\\sum\\limits_{j\\in \\mathscr{J}_i}\\varepsilon_{j}w_{j,i}\\Bigr)\n- \\Bigl(1+\\frac{\\lambda_{\\ell(i)}}{\\mu_{i}}\\Bigr)\\sum\\limits_{j\\in \\mathscr{J}_i}w_{j,i}\\gamma_{j}-\\nu_{\\ell(i)},\n\\end{equation}\nand \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} can be characterized as comparing $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$ with $0$. \nWhen there is strict inequality in the comparison (that is, the cases described in \\eqref{eqn:a_opt:a} and \\eqref{eqn:a_opt:c}), the value of $\\alpha^\\phi_i(n)$ is uniquely specified, since the PS pairs $(i,n)$ for all $n\\in\\mathscr{N}_i\\backslash\\{|\\mathscr{N}_i|-1\\}$ correspond to the same $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$ value. \nHowever, there is still freedom to assign different values to $\\alpha^\\phi_i(n)$\nwhen $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})=0$ (the case described in \\eqref{eqn:a_opt:b}).\nA detailed discussion about priorities of PS pairs corresponding to the same $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$ will be provided in Section~\\ref{subsec:decomposable}.\nBy solving the sub-problem of dummy pattern $d(\\ell)$ $(\\ell\\in[L])$, which involves only one state $n\\in\\mathscr{N}_{d(\\ell)}$, we obtain an optimal policy $\\varphi$ determined by\n\\vspace{-0.1cm}\n\\begin{equation}\\label{eqn:a_opt:dummy}\n\\alpha^{\\varphi}_{d(\\ell)}(n) \\left\\{\\begin{array}{ll}\n=1, &\\text{if } 0 < -\\nu_{\\ell}, \\vspace{-0.1cm}\\\\\n\\in[0,1], &\\text{if }0 = -\\nu_{\\ell},\\vspace{-0.1cm}\\\\\n=0, &\\text{otherwise}. \n\\end{array}\\right.\n\\end{equation}\n\\vspace{-0.1cm}\nThe priority of the state of a dummy pattern is then $\\Xi_{d(\\ell)}(\\pmb{\\gamma},\\bm{\\nu}) \\equiv -\\nu_{\\ell}$ for any $\\pmb{\\gamma}$. 
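For concreteness, the index $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$ is straightforward to evaluate; the following is a minimal numerical sketch of \\eqref{eqn:index_value} with all parameter values assumed for illustration.

```python
# Sketch of the index Xi_i(gamma, nu) for a non-dummy pattern i:
# lam, r, mu stand for lambda_{ell(i)}, r_{ell(i)}, mu_i, while w_i, eps,
# gamma range over the pools used by the pattern. Values are assumed.
def index_value(lam, r, mu, w_i, eps, gamma, nu):
    holding_cost = sum(ej * wj for ej, wj in zip(eps, w_i)) / mu
    congestion_penalty = (1.0 + lam / mu) * sum(wj * gj for wj, gj in zip(w_i, gamma))
    return lam * (r - holding_cost) - congestion_penalty - nu

# Pattern 1 of the two-pool example with assumed rates; with gamma = 0 and
# nu = 0 only the profitability term lam * (r - holding cost) remains:
print(index_value(lam=1.0, r=4.0, mu=0.5, w_i=(1, 1),
                  eps=(0.1, 0.2), gamma=(0.0, 0.0), nu=0.0))  # approximately 3.4
```

Raising any $\\gamma_j$ with $w_{j,i}>0$, or $\\nu_{\\ell(i)}$, lowers the index, which is the mechanism by which the multipliers reduce a pattern's priority.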
\n\nIn addition, from Equation~\\eqref{eqn:a_opt:c} in Proposition~\\ref{prop:a_opt}, for any given $\\bm{\\nu}\\in\\mathbb{R}^L$ and $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$, there exists $\\bm{\\eta}\\in\\mathbb{R}^{I-L}$ such that it is optimal to make states $|\\mathscr{N}_i|-1$ passive (that is, $\\alpha^{\\bar{\\varphi}}_i (|\\mathscr{N}_i| - 1) = 0$) for all $i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}$. \nAmong all PS pairs $(i,n)$ ($n\\in\\mathscr{N}_i$, $i\\in[I]$), we assign, without loss of generality, the least priority to those PS pairs $(i,|\\mathscr{N}_i|-1)$ for which $i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}$. \n\n\nThe policy $\\bar{\\varphi}$ determined by \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} and \\eqref{eqn:a_opt:dummy} is optimal for the relaxed problem described by \\eqref{eqn:objective}, \\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy}, if the given multipliers $\\bm{\\nu}$ and $\\pmb{\\gamma}$ that appear in \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} and \\eqref{eqn:a_opt:dummy} satisfy the \\emph{complementary slackness conditions} of this relaxed problem, defined by \\vspace{-0.3cm}\n\\begin{condition}{Complementary Slackness}\\label{cond:complementary_slackness}\n\\vspace{-0.3cm}\n\\begin{equation}\n\\label{eqn:relaxaction:slack}\n\\nu_{\\ell}\\Bigl(\\sum\\limits_{i\\in\\mathscr{P}_{\\ell}} \\bm{\\pi}^{\\phi}_i\\cdot\\bm{\\alpha}^{\\phi}_i -1\\Bigr)=0,~\n\\forall \\ell\\in[L],\\vspace{-0.3cm}\n\\end{equation}\nand \\vspace{-0.3cm}\n\\begin{equation}\n\\label{eqn:relaxconstraint:slack}\n\\gamma_{j} \\Bigl(\\bm{\\omega}_j\\cdot \\left(\\bm{\\Pi}^{\\phi}_n+\\bm{\\Pi}^{\\phi}_a\\right)-C_j\\Bigr) = 0,~\\forall j\\in[J], \\vspace{-0.3cm}\n\\end{equation}\nwhere $\\bm{\\omega}_j=(w_{j,i}:\\ i\\in[I])$ is the $j$th row of matrix $\\mathcal{W}$.\\vspace{-0.3cm}\n\\end{condition}\nIn this context, if resource pool $j\\in[J]$ is very popular so that the capacity constraint 
corresponding to the $j$th row in \\eqref{eqn:constraint:relax:resources} achieves equality, then $\\gamma_j$ is allowed to be positive, leading to lower values of $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$ for the patterns $i$ that use pool $j$ than for $\\gamma_j=0$.\nOn the other hand, if resource pool $j\\in[J]$ cannot be fully occupied and the $j$th capacity constraint in \\eqref{eqn:constraint:relax:resources} is satisfied with a strict inequality, then the \ncomplementary slackness condition described in \\eqref{eqn:relaxconstraint:slack} forces $\\gamma_j$ to be zero.\nFollowing this mechanism, when resource pool $j\\in[J]$ is overloaded, increasing $\\gamma_j$ reduces the priorities of the patterns using it, and the traffic offered to this resource pool is reduced accordingly.\n\n\nIf\nthere exist multipliers $\\bm{\\nu}$, $\\pmb{\\gamma}$ and a policy $\\bar{\\varphi}$ determined by \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c}, such that the complementary slackness conditions~\\eqref{eqn:relaxaction:slack} and \\eqref{eqn:relaxconstraint:slack} are satisfied by taking $\\phi=\\bar{\\varphi}$, then, by the strong duality theorem, this policy $\\bar{\\varphi}$ is optimal for the relaxed problem;\nthis observation, together with Theorem~\\ref{theorem:main_second} in Section~\\ref{sec:asym_opt}, leads to the existence of an asymptotically optimal policy feasible for the original problem, derived with priorities of patterns induced by the descending order of $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$. More details about the analysis in the asymptotic regime will be provided in Section~\\ref{sec:asym}.\nHere we focus on the non-asymptotic regime, and specifically on the choice and computation of $\\pmb{\\gamma}$ and $\\bm{\\nu}$. 
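As a quick numerical aside, verifying the complementary slackness conditions for candidate multipliers reduces to checking that each product above vanishes; the sketch below uses assumed stationary quantities (the names `act_total` and `load` stand in for $\\sum_{i\\in\\mathscr{P}_{\\ell}}\\bm{\\pi}^{\\phi}_i\\cdot\\bm{\\alpha}^{\\phi}_i$ and $\\bm{\\omega}_j\\cdot(\\bm{\\Pi}^{\\phi}_n+\\bm{\\Pi}^{\\phi}_a)$, which would be computed from the stationary distribution).

```python
# Hedged numerical check of the complementary slackness conditions:
# nu_ell * (activation total of type ell - 1) and
# gamma_j * (expected load of pool j - C_j) must both vanish.
# act_total and load are assumed stand-ins for the stationary quantities.
def slackness_holds(nu, act_total, gamma, load, C, tol=1e-9):
    action_ok = all(abs(nu[l] * (act_total[l] - 1.0)) < tol for l in range(len(nu)))
    capacity_ok = all(abs(gamma[j] * (load[j] - C[j])) < tol for j in range(len(C)))
    return action_ok and capacity_ok

# Pool one saturated (so gamma_1 > 0 is allowed), pool two slack (gamma_2 = 0):
print(slackness_holds(nu=[0.5, 1.2], act_total=[1.0, 1.0],
                      gamma=[0.3, 0.0], load=[3.0, 2.4], C=[3, 3]))  # True
```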
\n\n\n\\subsection{Decomposable Capacity Constraints}\\label{subsec:decomposable}\n\n\n\n\n\nIn the general case, it is not clear whether \nthe complementary slackness conditions~\\eqref{eqn:relaxaction:slack} and \\eqref{eqn:relaxconstraint:slack}\ncan be satisfied and, \neven if they are, what the values of $\\pmb{\\gamma}$ and the corresponding $\\bm{\\nu}$ are.\nA more important question is how the multipliers help in constructing an asymptotically optimal policy applicable to the original problem.\n\n\nIn Sections~\\ref{subsec:decomposable} and \\ref{subsec:sufficient_condition}, we concentrate on the complementary slackness conditions and the existence of an optimal policy (for the relaxed problem) satisfying \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c}.\nRecall that \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} intuitively suggest priorities of patterns induced by $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$. Later in Section~\\ref{sec:index_policy}, a policy feasible for the original problem will be proposed based on given priorities of patterns, and its asymptotic optimality will be discussed in Section~\\ref{sec:asym_opt}.\n\n\\subsubsection{Priorities of PS Pairs}\\label{subsubsec:priorities_of_pairs}\n\nAs described in Section~\\ref{sec:indexability}, \nthe priorities of PS pairs are determined by the descending order of $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$, with higher priorities given by higher values of $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$. \nIt may happen that, because of different tie-breaking rules, the same $\\pmb{\\gamma}$ and $\\bm{\\nu}$ lead to different priorities. For given $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ and $\\bm{\\nu}\\in\\mathbb{R}^L$, let $\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$ represent the set of all rankings of PS pairs according to the descending order of $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$ ($i\\in[I]$). 
\nAlso, for notational convenience, let $\\mathscr{O}$ represent the set of all PS pair rankings.\n\n\nTo emphasize the priorities of these PS pairs, according to a given ranking $o\\in\\mathscr{O}$, we label all these pairs by their order $\\iota^{o}\\in[N]$ with $N\\coloneqq \\sum_{i\\in[I]} |\\mathscr{N}_i|$ and $(i_{\\iota^o},n_{\\iota^o})$ giving the pattern and the state of the $\\iota^{o}$th PS pair.\nWe will omit the superscript $o$ and use $\\iota$ for notational simplicity unless it is necessary to specify the underlying ranking.\nThere exists one and only one $\\ell\\in[L]$ satisfying $i_{\\iota} \\in \\mathscr{P}_{\\ell}$ for any PS pair labeled by $\\iota$. Such an $\\ell$ is denoted by $\\ell_{\\iota}$.\n\n\n\\IncMargin{1em}\n\\begin{algorithm}\n\\small \n\\linespread{0.4}\\selectfont\n\n\\SetKwFunction{FPriorityPolicy}{PriorityPolicy}\n\\SetKwProg{Fn}{Function}{:}{\\KwRet}\n\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n\\SetAlgoLined\n\\DontPrintSemicolon\n\\Input{a vector of non-negative reals $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ and a ranking of PS pairs $o\\in\\mathscr{O}$.}\n\\Output{a policy $\\bar{\\varphi}(o)\\in\\tilde{\\Phi}$ determined by action variables $\\bm{\\alpha}^{\\bar{\\varphi}(o)}_i\\in[0,1]^{|\\mathscr{N}_i|}$ for all $i\\in[I]$ and a vector of reals $\\bm{\\nu}(o,\\pmb{\\gamma})$.}\n\n\n\n\\Fn{\\FPriorityPolicy{$o,\\pmb{\\gamma}$}}{\n\n\n $\\bm{\\alpha}^{\\bar{\\varphi}}_i\\gets \\bm{0}$ for all $i\\in[I]$ \\tcc*{Variables $\\bm{\\alpha}^{\\bar{\\varphi}}_i$ determine a policy $\\bar{\\varphi}$}\n initialize the list of candidate PS pairs as the list of all PS pairs\\;\n $\\iota \\gets 0$ \\tcc*{Iteration variable} \n \\While{$\\iota < N$ and not all action constraints in \\eqref{eqn:constraint:relax:action} achieve equality}{\n    $\\iota \\gets \\iota+1$\\;\n    \\If{PS pair $\\iota$ is in the list of candidate PS pairs}{\n      set $\\alpha^{\\bar{\\varphi}}_{i_{\\iota}}(n_{\\iota})$ to the largest value in $[0,1]$ for which \\eqref{eqn:constraint:relax:action} and \\eqref{eqn:constraint:relax:resources} still hold\\;\n    }\n    \\uIf{the action constraint in \\eqref{eqn:constraint:relax:action} for $\\ell_{\\iota}$ achieves equality}{\n      $\\nu_{\\ell_{\\iota}} \\gets \\Xi_{i_{\\iota}}(\\pmb{\\gamma},\\bm{0})$\\;\n      remove all PS pairs $\\iota'>\\iota$ with $\\ell_{\\iota'}=\\ell_{\\iota}$ from the list of candidate PS pairs\\;\n\t \t}\n \t\t\\ElseIf{$\\exists j\\in[J],\\ \\bm{\\omega}_j\\cdot\\left(\\bm{\\Pi}^{\\bar{\\varphi}}_n+\\bm{\\Pi}^{\\bar{\\varphi}}_a\\right)=C_j$}{\n\t\t\t\\tcc*{If a capacity constraint achieves 
equality under policy $\\bar{\\varphi}$}\n\t\t\t\\tcc*{determined by updated $\\bm{\\alpha}^{\\bar{\\varphi}}_i$, $i\\in[I]$.}\n\n\t\t remove all PS pairs $\\iota'>\\iota$ with $w_{j,i_{\\iota'}}>0$ from the list of candidate PS pairs\\;\n\t\t}\n\n}\n\n$\\bm{\\alpha}^{\\bar{\\varphi}(o)}_i\\gets \\bm{\\alpha}^{\\bar{\\varphi}}_{i}$ for all $i\\in[I]$\\;\n\n}\n\\caption{Priority-style policy for the relaxed problem}\\label{algo:varphi_gamma}\n\\end{algorithm}\n \\DecMargin{1em}\n\n\nFor any given ranking of PS pairs $o\\in\\mathscr{O}$, we can generate a policy $\\bar{\\varphi}(o)$ with priorities of PS pairs defined by $o$, such that \n\\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy} are satisfied:\nthe policy $\\bar{\\varphi}(o)$ is feasible for the relaxed problem but not necessarily feasible for the original problem.\nThe pseudo-code for generating $\\bar{\\varphi}(o)$ is presented in Algorithm~\\ref{algo:varphi_gamma}.\nThe key idea for generating such a $\\bar{\\varphi}(o)$ is to initialize $\\bm{\\alpha}_i^{\\bar{\\varphi}(o)}$ to $\\bm{0}$ for all $i\\in[I]$, and sequentially activate the PS pairs according to their priorities defined by $o$ until either a relaxed action or capacity constraint described in \\eqref{eqn:constraint:relax:action} and \\eqref{eqn:constraint:relax:resources}, respectively, achieves equality. 
In particular, \n\\begin{enumerate}[label=(\\Roman*)]\n\\item\nif a relaxed action constraint described in \\eqref{eqn:constraint:relax:action} achieves equality by activating PS pairs ranked less than or equal to $\\iota$, then the multiplier $\\nu_{\\ell_{\\iota}}$ is set to $\\Xi_{i_{\\iota}}(\\pmb{\\gamma},\\bm{0})$, and all later PS pairs $\\iota'>\\iota$ with $\\ell_{\\iota'}=\\ell_{\\iota}$ are \\emph{disabled} from being activated and are removed from the \\emph{list of candidate pairs} awaiting later activation; \\label{case:equality:action}\n\\item similarly, if a relaxed capacity constraint described in \\eqref{eqn:constraint:relax:resources} associated with resource pool $j\\in[J]$ achieves equality by activating PS pairs ranked less than or equal to $\\iota$, then all later PS pairs $\\iota'>\\iota$ with $w_{j,i_{\\iota'}}>0$ are disabled and removed from the list of candidate pairs.\\label{case:equality:capacity}\n\\end{enumerate}\nThe list of candidate pairs is iteratively updated in this way until all \naction constraints in \\eqref{eqn:constraint:relax:action} achieve equality: the policy $\\bar{\\varphi}(o)$ is determined by the resulting $\\bm{\\alpha}^{\\bar{\\varphi}(o)}_i$ ($i\\in[I]$), and the multipliers $\\bm{\\nu}$ are set as described in \\ref{case:equality:action}.\nThe vector of these multipliers is denoted by $\\bm{\\nu}(o,\\pmb{\\gamma})$.\nA PS pair labeled by $\\iota$ satisfying the condition described in \\ref{case:equality:capacity} is called a \\emph{critical pair}, with the corresponding resource pool $j$ referred to as the \\emph{critical pool} of PS pair $\\iota$, denoted by $j_{\\iota}(o)$. 
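The greedy construction with the disabling rules \ref{case:equality:action} and \ref{case:equality:capacity} can be sketched as follows; this Python sketch is a simplified, hypothetical stand-in (scalar activation levels, generic budgets and weights), not the paper's exact Algorithm~\ref{algo:varphi_gamma}:

```python
# Hypothetical, simplified sketch of the greedy activation: walk the PS
# pairs in ranking order, activate each surviving candidate as far as the
# action budget and pool capacities allow, then disable lower-ranked
# pairs once a constraint binds (cases (I) and (II)).

def priority_policy(ranking, rt_of, weights, capacity, budget):
    """ranking: PS pairs in descending priority; rt_of[pair] -> request
    type; weights[j][pair] -> usage of pool j; capacity[j] -> C_j;
    budget[rt] -> relaxed action budget of that request type."""
    alpha = {p: 0.0 for p in ranking}
    candidates = set(ranking)
    used_budget = {rt: 0.0 for rt in budget}
    used_cap = {j: 0.0 for j in capacity}
    for k, p in enumerate(ranking):
        if p not in candidates:
            continue
        rt = rt_of[p]
        # Largest activation level keeping every constraint feasible.
        room = budget[rt] - used_budget[rt]
        for j in capacity:
            if weights[j].get(p, 0.0) > 0.0:
                room = min(room, (capacity[j] - used_cap[j]) / weights[j][p])
        alpha[p] = min(1.0, room)
        used_budget[rt] += alpha[p]
        for j in capacity:
            used_cap[j] += weights[j].get(p, 0.0) * alpha[p]
        # Case (I): the action constraint binds -> disable same-RT pairs.
        if used_budget[rt] >= budget[rt]:
            candidates -= {q for q in ranking[k + 1:] if rt_of[q] == rt}
        # Case (II): a capacity constraint binds -> disable pairs using j.
        for j in capacity:
            if used_cap[j] >= capacity[j]:
                candidates -= {q for q in ranking[k + 1:]
                               if weights[j].get(q, 0.0) > 0.0}
    return alpha
```

In this sketch a pair whose pool fills up at the pair's own activation step plays the role of a critical pair, and the filled pool that of its critical pool.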
Note that, from the description in \\ref{case:equality:capacity}, there might be more than one resource pool for which the capacity constraints achieve equality simultaneously while activating PS pair $\\iota$; we choose one of them to be $j_{\\iota}(o)$ and refer to this resource pool as the critical pool of $\\iota$.\nLet $\\mathscr{I}(o)$ represent the set of all critical pairs with respect to the policy $\\bar{\\varphi}(o)$. \n\\begin{lemma}\\label{lemma:critical_pattern}\nFor any $o\\in\\mathscr{O}$ and $\\iota,\\iota'\\in\\mathscr{I}(o)$, if $\\iota \\neq \\iota'$ then $i_{\\iota}\\neq i_{\\iota'}$.\n\\end{lemma}\n\\proof{Proof.}\nConsider critical pairs $\\iota,\\iota'\\in\\mathscr{I}(o)$ with $\\iota\\neq \\iota'$, and assume $\\iota < \\iota'$ without loss of generality. Since $\\iota$ is a critical pair, its critical resource pool $j_{\\iota}$ is fully occupied. In this case, if $i_{\\iota} = i_{\\iota'}$, then pair $\\iota'$ would require some resource units from pool $j_{\\iota}$, and so $\\alpha^{\\bar{\\varphi}(o)}_{\\iota'}=0$.\nIt follows that PS pair $\\iota'$ cannot be critical, which contradicts the condition $\\iota'\\in\\mathscr{I}(o)$. \nHence, $i_{\\iota} \\neq i_{\\iota'}$. This proves the lemma.\n\\endproof\n\n\n\n\nRecall that, for any ranking $o$, the policy $\\bar{\\varphi}(o)$ must satisfy the action and capacity constraints~\\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy}. Also, since \\eqref{eqn:constraint:relax:action} holds, the \ncomplementary slackness conditions \ncorresponding to the action constraints~\\eqref{eqn:relaxaction:slack} are satisfied by taking $\\phi=\\bar{\\varphi}(o)$. 
However, the complementary slackness conditions \ncorresponding to the capacity constraints~\\eqref{eqn:relaxconstraint:slack} and equations~\\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c} are not necessarily satisfied if we plug in $\\phi=\\bar{\\varphi}(o)$ and $\\pmb{\\gamma}$: the policy $\\bar{\\varphi}(o)$ is a heuristic policy applicable to the relaxed problem defined by \\eqref{eqn:objective}, \\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy}, derived by intuitively prioritizing PS pairs according to their ranking $o\\in\\mathscr{O}$.\n\nIn Section~\\ref{subsec:sufficient_condition} we shall define a particular class of resource allocation models for which we can show that the complementary slackness conditions are indeed satisfied.\n\n\n\\begin{definition}\\label{define:decomposable}\nThe system is said to be \\emph{decomposable} if there exist multipliers $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$, $\\bm{\\nu}\\in\\mathbb{R}^L$ and a ranking $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$ such that $\\bm{\\nu}=\\bm{\\nu}(o,\\pmb{\\gamma})$ and the complementary slackness conditions~\\eqref{eqn:relaxaction:slack} and \\eqref{eqn:relaxconstraint:slack} are achieved by taking $\\phi=\\bar{\\varphi}(o)$. In this case, the optimal values of the dual variables are called \\emph{decomposable values}. \n\\end{definition}\n\n\nRecall that, in the general case, for $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ and $\\bm{\\nu}\\in\\mathbb{R}^L$, even if $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$, the policy $\\bar{\\varphi}(o)$ is not necessarily optimal (because it does not necessarily satisfy \\eqref{eqn:a_opt:a}-\\eqref{eqn:a_opt:c}). \nWhen the policy $\\bar{\\varphi}(o)$ is optimal for the relaxed problem, the ranking $o$ can be used to construct an index policy applicable to the original problem (detailed steps are provided in Section~\\ref{sec:index_policy}). 
Theorem~\\ref{theorem:main_second} (in Section~\\ref{sec:asym_opt}) then ensures that such an index policy is asymptotically optimal.\n\n\\subsubsection{Derivation of the Pair Ranking} \\label{subsubsec:derive_ranking}\n\nWe start with a proposition that shows how the values of the Lagrange multipliers $\\bm{\\nu}$ and $\\pmb{\\gamma}$ can be derived from a knowledge of the critical pair and critical resource pool corresponding to a given order $o\\in \\mathscr{O}$.\n\n\\begin{proposition}\\label{prop:solution_existence}\nFor any given $\\pmb{\\gamma}_0\\in\\mathbb{R}_0^J$ and $o\\in\\mathscr{O}$, the linear equations\n\\begin{equation}\\label{eqn:necessary_gamma}\n\\nu_{\\ell_{\\iota}}(o,\\pmb{\\gamma}_0)=\\Xi_{i_{\\iota}}(\\pmb{\\gamma},\\bm{0}),~\\forall \\iota\\in\\mathscr{I}(o)\n\\vspace{-0.3cm}\n\\end{equation}\nand \\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:necessary_gamma:zero}\n\\gamma_{j} = 0,~\\forall j \\notin \\{j_{\\iota}(o)\\in[J] ~|~\\iota\\in\\mathscr{I}(o)\\}\n\\end{equation}\nhave a unique solution $\\pmb{\\gamma}\\in\\mathbb{R}^J$.\n\\end{proposition}\nThe proof of Proposition~\\ref{prop:solution_existence} will be given in Appendix~\\ref{app:prop:solution_existence} in the e-companion. \nFor a ranking $o\\in\\mathscr{O}$, define a function $\\mathcal{T}^o$ of $\\pmb{\\gamma}_0\\in\\mathbb{R}_0^J$ by $\\mathcal{T}^o(\\pmb{\\gamma}_0)\\coloneqq \\pmb{\\gamma}$, where $\\pmb{\\gamma}$ is the unique solution of \\eqref{eqn:necessary_gamma} and \\eqref{eqn:necessary_gamma:zero}. 
\nLet $\\mathcal{T}^o_j(\\pmb{\\gamma}_0)$ represent the $j$th element of $\\mathcal{T}^o(\\pmb{\\gamma}_0)$.\n\n\n\\begin{proposition}\\label{prop:converge_gamma}\nIf there exist $\\pmb{\\gamma}_0\\in\\mathbb{R}_0^J$ and $o\\in\\mathscr{O}(\\pmb{\\gamma}_0,\\bm{0})$ such that $\\mathcal{T}^o(\\pmb{\\gamma}_0)=\\pmb{\\gamma}_0$, then \n$\\pmb{\\gamma}_0$ is a vector of decomposable multipliers and the policy $\\bar{\\varphi}(o)$ based on ranking $o$ is optimal for the relaxed problem defined by \\eqref{eqn:objective}, \\eqref{eqn:constraint:relax:action}, \\eqref{eqn:constraint:relax:resources} and \\eqref{eqn:constraint:dummy}.\n\\end{proposition}\nThe proof of Proposition~\\ref{prop:converge_gamma} will be given in Appendix~\\ref{app:prop:converge_gamma} in the e-companion. Recall that $\\mathscr{I}(o)$ is the set of critical pairs with respect to the policy $\\bar{\\varphi}(o)$, $j_{\\iota}(o)$ is the critical resource pool corresponding to critical pair $\\iota\\in\\mathscr{I}(o)$ according to ranking $o$, and $\\nu_{\\ell_{\\iota}}(o,\\pmb{\\gamma}_0)$ is an output of Algorithm~\\ref{algo:varphi_gamma} when the inputs are $o$ and $\\pmb{\\gamma}=\\pmb{\\gamma}_0$.\n\n{\\bf Remark} Proposition~\\ref{prop:converge_gamma} provides a way of checking decomposability of $\\pmb{\\gamma}_0$ and optimality of $\\bar{\\varphi}(o)$.\nBy Proposition~\\ref{prop:converge_gamma}, any fixed point $\\pmb{\\gamma}_0\\in\\mathbb{R}_0^J$ of the function $\\mathcal{T}^o$ with respect to a ranking $o\\in\\mathscr{O}(\\pmb{\\gamma}_0,\\bm{0})$ is a decomposable vector. \nThe decomposability of $\\pmb{\\gamma}_0$ can be checked without requiring knowledge of any $\\bm{\\nu}\\in\\mathbb{R}^L$. 
\nAlso, we present the following corollary of Proposition~\\ref{prop:converge_gamma}.\n\\begin{corollary}\\label{coro:converge_gamma}\nFor $\\pmb{\\gamma}_0\\in\\mathbb{R}_0^J$ and $o\\in\\mathscr{O}(\\pmb{\\gamma}_0,\\bm{0})$, if $\\mathcal{T}^o(\\pmb{\\gamma}_0)\\neq \\pmb{\\gamma}_0$, $\\mathcal{T}^o(\\pmb{\\gamma}_0)\\in \\mathbb{R}_0^J$ and $o\\in\\mathscr{O}(\\mathcal{T}^o(\\pmb{\\gamma}_0),\\bm{0})$, then $\\mathcal{T}^o(\\mathcal{T}^o(\\pmb{\\gamma}_0))=\\mathcal{T}^o(\\pmb{\\gamma}_0)$. \n\\end{corollary}\nNote that the hypothesis of Corollary~\\ref{coro:converge_gamma} requires all components of $\\mathcal{T}^o(\\pmb{\\gamma}_0)$ to be nonnegative. This is not such an easy condition to satisfy. \nThe proof of Corollary~\\ref{coro:converge_gamma} will be given in Appendix~\\ref{app:coro:converge_gamma} in the e-companion.\n\n\n\nIn this context, consider a given $\\pmb{\\gamma}_0\\in\\mathbb{R}_0^J$ and a ranking $o\\in\\mathscr{O}(\\pmb{\\gamma}_0,\\bm{0})$. If $\\pmb{\\gamma}_0$ is a fixed point of $\\mathcal{T}^o$, then it is the vector of decomposable multipliers; if it is not but $\\mathcal{T}^o(\\pmb{\\gamma}_0)$ is a nonnegative fixed point of $\\mathcal{T}^o$, then $\\mathcal{T}^o(\\pmb{\\gamma}_0)$ represents the decomposable multipliers. 
\nHowever, in both cases we need to propose a specific $\\pmb{\\gamma}_0$; this requires prior knowledge of the multipliers, which is, in general, not available.\nSection~\\ref{subsec:sufficient_condition} will discuss a special case where the decomposability is provable and we have a method of deriving the decomposable multipliers.\nHere, to make a reasonably good choice of the Lagrangian multipliers in a general system, we employ a \\emph{fixed point iteration method}.\n\n\nSince Proposition~\\ref{prop:converge_gamma} requires a fixed point $\\pmb{\\gamma}$ of the function $\\mathcal{T}^o$ with $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{0})$, we need to iterate not only the value of $\\pmb{\\gamma}$ but also the corresponding ranking $o$, which affects the function $\\mathcal{T}^o$ and should be an element of $\\mathscr{O}(\\pmb{\\gamma},\\bm{0})$.\nFollowing the idea of conventional\nfixed point iteration methods, for $k\\in\\mathbb{N}_0$, \nlet $\\pmb{\\gamma}_{k+1}= \\bigl(\\mathcal{T}^{o_k}(\\pmb{\\gamma}_k)\\bigr)^+$ with initial $\\pmb{\\gamma}_0$ and $o_0\\in\\mathscr{O}(\\pmb{\\gamma}_0,\\bm{0})$, \nwhere $(\\bm{v})^+\\coloneqq (\\max\\{0,v_i\\}: i\\in[N])$ for a vector $\\bm{v}\\in \\mathbb{R}^N$ ($N\\in\\mathbb{N}_+$). 
\nConstruct a ranking $o_{k+1}\\in\\mathscr{O}(\\pmb{\\gamma}_{k+1},\\bm{0})$ according to $o_k$:\nfor any two different PS pairs $(i,n)$ and $(i',n')$ with $\\Xi_i(\\pmb{\\gamma}_{k+1},\\bm{0})=\\Xi_{i'}(\\pmb{\\gamma}_{k+1},\\bm{0})$, $(i,n)$ precedes $(i',n')$ in the ranking $o_{k+1}$ if and only if $(i,n)$ precedes $(i',n')$ in the ranking $o_k$.\nHere, the operation $(\\cdot)^+$ is used to make all the elements of $\\pmb{\\gamma}_{k+1}$ non-negative, so that $\\pmb{\\gamma}_{k+1}$ is feasible for the function $\\mathcal{T}^{o_{k+1}}$.\nIn this way, the ranking $o_{k+1}$ inherits the tie-breaking rule used for $o_k$, so that the difference between $o_k$ and $o_{k+1}$, which must satisfy $o_k\\in\\mathscr{O}(\\pmb{\\gamma}_k,\\bm{0})$ and $o_{k+1}\\in\\mathscr{O}(\\pmb{\\gamma}_{k+1},\\bm{0})$, respectively, is minimized.\nCorollary~\\ref{coro:converge_gamma} can be used to check whether $\\pmb{\\gamma}_{k+1}$ is a fixed point of the function $\\mathcal{T}^{o_k}$.\nAlso, $\\pmb{\\gamma}_{k+1}$ and $o_{k+1}$ are uniquely determined by $\\pmb{\\gamma}_k$ and $o_k$. We can consider $(\\pmb{\\gamma}_k,o_k)$ as a single entity that serves as the argument of the function $\\mathcal{T}^{o_k}(\\pmb{\\gamma}_k)$, and wish to find a fixed point in this sense.\n\n\nIn the general case, the function $\\mathcal{T}^{o_k}(\\pmb{\\gamma}_k)$ is discontinuous in $\\pmb{\\gamma}_k$ and the sequence $\\{\\pmb{\\gamma}_k\\}_{k=0}^{\\infty}$ is heuristically generated with no proof of convergence to a fixed point. \nIn fact, the choice of $\\pmb{\\gamma}_{k+1}= \\bigl(\\mathcal{T}^{o_k}(\\pmb{\\gamma}_k)\\bigr)^+$ may result in the sequence $\\{\\pmb{\\gamma}_k\\}_{k=0}^{\\infty}$ being trapped in oscillations. 
\nTo avoid this, with slight abuse of notation, we modify the iteration as $\\pmb{\\gamma}_{k+1}= \\bigl(c\\mathcal{T}^{o_k}(\\pmb{\\gamma}_k)+(1-c)\\pmb{\\gamma}_k\\bigr)^+$ with a parameter $c\\in [0,1]$, which captures the effects of exploring the new point $\\mathcal{T}^{o_k}(\\pmb{\\gamma}_k)$.\nNumerical examples of iterating $\\pmb{\\gamma}_k$ will be provided in Section~\\ref{sec:example}.\n\n\n\nWith an upper bound, $U\\in\\mathbb{N}_+$, we take $k^*\\coloneqq \\arg\\min_{k=1,2,\\ldots,U} \\lVert \\pmb{\\gamma}_{k-1}-\\pmb{\\gamma}_k\\rVert$ and consider $o_{k^*}$ as a reasonably good ranking of PS pairs. \nSuch $o_{k^*}$ is pre-computable with computational complexity no worse than $O(U(N^2+J^2))$, where $N^2$ and $J^2$ result from ordering the $N$ pairs and solving the $J$ linear equations, respectively.\nIn Section~\\ref{sec:index_policy}, we show that an index policy feasible for the original problem can always be generated with such an $o_{k^*}$, and the implementation complexity is $O(I)$ in terms of computation and storage. \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\vspace{-0.3cm}\n\\subsection{Weakly Coupled Constraints}\\label{subsec:sufficient_condition}\n\\vspace{-0.3cm}\nHere, we discuss a sufficient condition under which the sequence $\\{\\pmb{\\gamma}_k\\}_{k=0}^{\\infty}$ is provably convergent; and, in Section~\\ref{sec:example}, when this condition fails, we show via numerical examples that the sequence might still converge. 
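Before stating the condition, the damped iteration described above can be sketched as follows; the mapping T below is a hypothetical stand-in for the solve that defines the paper's mapping, and the toy affine map in the example is purely illustrative:

```python
# Hypothetical sketch of the damped fixed-point iteration
# gamma_{k+1} = (c*T(gamma_k) + (1-c)*gamma_k)^+, returning the iterate
# with the smallest successive change, mimicking the choice of k*.

def iterate_multipliers(T, gamma0, c=0.5, max_iter=100, tol=1e-9):
    gamma = list(gamma0)
    best, best_gap = list(gamma), float("inf")
    for _ in range(max_iter):
        new = [max(0.0, c * t + (1.0 - c) * g)  # projection (.)^+
               for t, g in zip(T(gamma), gamma)]
        gap = max(abs(a - b) for a, b in zip(new, gamma))
        if gap < best_gap:
            best, best_gap = list(new), gap
        gamma = new
        if gap <= tol:
            break
    return best

# Toy affine map with fixed point (2, 1); the damped iteration converges.
gammas = iterate_multipliers(lambda g: [0.5 * g[0] + 1.0, 0.5 * g[1] + 0.5],
                             [0.0, 0.0])
```

In the paper's setting T would additionally depend on the ranking carried along with each iterate, which this sketch suppresses for brevity.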
\n\n\n\\begin{definition}\nRecall the matrix $\\mathcal{W}=(w_{j,i})$ defined in Section~\\ref{subsec:model}.\nWe say that row $j\\in[J]$ is \n\\begin{enumerate}\n\\item a \\emph{type-1} row if there is at most one $i\\in[I]$ with $w_{j,i}>0$; \\vspace{-0.15cm}\n\\item a \\emph{type-2} row if there is more than one $i\\in[I]$ with $w_{j,i}>0$.\n\\end{enumerate} \\vspace{-0.3cm}\n\\end{definition}\nThat is, row $j$ is a type-1 row if resource pool $j$ is not shared by patterns of different types; and is a type-2 row, otherwise.\nDenote by $\\mathscr{J}_i=\\{j\\in[J]\\ |\\ w_{j,i}>0\\}$ the set of resource pools used by pattern $i$. \nWe then define a condition. \\vspace{-0.3cm}\n\\begin{condition}{Weak Coupling}\\label{cond:hypothesis}\nA system is weakly coupled if, for any pattern $i$, there is at most one $j\\in\\mathscr{J}_i$ with row $j$ of $\\mathcal{W}$ being a type-2 row. \\vspace{-0.3cm}\n\\end{condition}\nThis condition implies that there is at most one shared resource pool associated with each pattern. \nIn a weakly coupled system, if pattern $i_1$ shares a resource pool $j_{12}$ with pattern $i_2$ and pattern $i_1$ shares a resource pool $j_{13}$ with pattern $i_3$ then $j_{12}=j_{13}$. 
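As a hypothetical illustration (the function below is a sketch, not part of the paper), the weak coupling condition can be checked directly on a weight matrix W with rows indexed by resource pools and columns by patterns:

```python
# Hypothetical check of the weak-coupling condition: each pattern may use
# at most one "type-2" row, i.e. one pool shared with other patterns.

def is_weakly_coupled(W):
    n_pools, n_patterns = len(W), len(W[0])
    # A row is type-2 when more than one pattern uses it.
    type2 = [sum(1 for i in range(n_patterns) if W[j][i] > 0) > 1
             for j in range(n_pools)]
    for i in range(n_patterns):
        shared = sum(1 for j in range(n_pools) if W[j][i] > 0 and type2[j])
        if shared > 1:
            return False
    return True

# Pool 0 is shared by patterns 0 and 1; every other pool is dedicated.
W_ok = [[1, 2, 0],
        [3, 0, 0],
        [0, 0, 1]]
# Pools 0 and 1 are both shared, and pattern 0 uses both of them.
W_bad = [[1, 1, 0],
         [1, 0, 1],
         [0, 0, 2]]
```

The first matrix satisfies the condition, while the second violates it because one pattern draws on two shared pools.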
\nA system where each of the patterns requires only one resource pool is clearly weakly coupled.\nNote that, in a weakly coupled system, dependencies between state variables of different patterns still exist, because \neach resource pool can be shared by requests of multiple RTs.\n\n\\begin{definition}\\label{define:w_star}\nFor a weakly coupled system define, for each $i\\in[I]\\backslash\\{d(\\ell):\\ell\\in[L]\\}$, $w^*_i=w_{j,i}$ where $j$ is the only resource pool in $\\mathscr{J}_i$ shared with other patterns, if there is one; or any member of the set $\\arg\\min\\limits_{j'\\in\\mathscr{J}_i}\\frac{C_{j'}}{w_{j',i}}$, otherwise.\n\\end{definition}\n\\begin{definition}\\label{define:o_star}\n\nFor a weakly coupled system define, for $\\bm{\\nu}\\in\\mathbb{R}^L$, a set of PS rankings $\\mathscr{O}^*(\\bm{\\nu})\\subset \\mathscr{O}$ such that, for any $o\\in\\mathscr{O}^*(\\bm{\\nu})$, PS pairs $\\iota\\in[N]$ are ranked according to the descending order of \n\\begin{equation}\\label{eqn:main:index}\n\\Xi_{\\iota}^*= \\left\\{\n\\begin{array}{ll}\n\\frac{\\Xi_{i_{\\iota}}(\\bm{0},\\bm{0})-\\nu_{\\ell_{\\iota}}}{w^*_{i_{\\iota}}(1+\\lambda_{\\ell_{\\iota}}\/\\mu_{i_{\\iota}})}, &\\text{if } \\nexists \\ell\\in[L],~i_{\\iota}=d(\\ell),\\\\\n0, & \\text{otherwise}.\n\\end{array}\\right.\n\\end{equation} \n\\end{definition}\n\n\n\\begin{proposition}\\label{prop:equal_opt}\nIf the system is weakly coupled and there exists a ranking $o\\in\\mathscr{O}^*(\\bm{0})$ satisfying $\\bm{\\nu}(o,\\bm{0})=\\bm{0}$, then the capacity constraints described in \\eqref{eqn:constraint:relax:resources} are decomposable and the policy $\\bar{\\varphi}(o)$ is optimal for the relaxed problem defined by \\eqref{eqn:objective} and \\eqref{eqn:constraint:relax:action}-\\eqref{eqn:constraint:dummy}.\nIn particular, there exist decomposable multipliers $\\pmb{\\gamma}\\in\\mathbb{R}_0^{J}$ satisfying, for $j\\in[J]$,\n\\begin{enumerate}[label=\\roman*)]\n\\item if there is a critical PS pair 
$\\iota\\in\\mathscr{I}(o)$ with critical resource pool $j=j_{\\iota}(o)$, and no \n$j'\\neq j$ with $j'\\in\\mathscr{J}_{i_{\\iota}}$ is critical for any other PS pair $\\iota' \\in\\mathscr{I}(o)$, then\n\\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:equal_opt:a}\n\\gamma_j = \\frac{\\Xi_{i_{\\iota}}(\\bm{0},\\bm{0})-\\nu_{\\ell_{\\iota}}}{w_{j,i_{\\iota}}\\left(1+\\lambda_{\\ell_{\\iota}}\/\\mu_{i_{\\iota}}\\right)};\n\\vspace{-0.3cm}\n\\end{equation}\n\\item if there are critical PS pairs $\\iota$ and $\\iota'$ in $\\mathscr{I}(o)$ with critical resource pools $j=j_{\\iota}(o)\\neq j_{\\iota'}(o)$ and $ j_{\\iota'}(o)\\in\\mathscr{J}_{i_{\\iota}}$, then \\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:equal_opt:b}\n\\gamma_j=\\frac{w_{j_{\\iota'}(o),i_{\\iota}}}{w_{j,i_{\\iota}}}\\left(\\frac{\\Xi_{i_{\\iota}}(\\bm{0},\\bm{0})-\\nu_{\\ell_{\\iota}}}{w_{j_{\\iota'}(o),i_{\\iota}}\\left(1+\\lambda_{\\ell_{\\iota}}\/\\mu_{i_{\\iota}}\\right)}\n-\\frac{\\Xi_{i_{\\iota'}}(\\bm{0},\\bm{0})-\\nu_{\\ell_{\\iota'}}}{w_{j_{\\iota'}(o),i_{\\iota'}}\\left(1+\\lambda_{\\ell_{\\iota'}}\/\\mu_{i_{\\iota'}}\\right)}\\right);\\vspace{-0.3cm}\n\\end{equation}\n\\item otherwise, \\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:equal_opt:c}\n\\gamma_j = 0. \\vspace{-0.3cm}\n\\end{equation}\n\\end{enumerate}\n\\end{proposition}\nThe proof is given in Appendix~\\ref{app:prop:equal_opt} in the e-companion. \nNote that, from Lemma~\\ref{lemma:critical_pattern}, for any critical PS pairs $\\iota,\\iota'\\in\\mathscr{I}(o)$ with $\\iota\\neq \\iota'$, it follows that $i_{\\iota}\\neq i_{\\iota'}$.\nIf the system is weakly coupled, then, for any $j\\in[J]$, there exist at most two different critical pairs $\\iota\\in\\mathscr{I}(o)$ satisfying $j\\in\\mathscr{J}_{i_{\\iota}}$. 
Also, in a weakly coupled system, for the second case stated in Proposition~\\ref{prop:equal_opt}, if there are critical PS pairs $\\iota$ and $\\iota'$ in $\\mathscr{I}(o)$ with critical resource pools $j=j_{\\iota}(o)\\neq j_{\\iota'}(o)$ and $ j_{\\iota'}(o)\\in\\mathscr{J}_{i_{\\iota}}$, then $ j_{\\iota}(o)\\notin \\mathscr{J}_{i_{\\iota'}}$ because there is at most one resource pool in $\\mathscr{J}_{i_{\\iota}}$ shared with other patterns.\n\n\n\nIn Proposition~\\ref{prop:equal_opt}, \nthe assumption that the system is weakly coupled constrains the way in which resource pools are shared by different requests. \nThe case where there is an $o\\in\\mathscr{O}^*(\\bm{0})$ with $\\bm{\\nu}(o,\\bm{0})=\\bm{0}$ will occur when the relaxed action constraint~\\eqref{eqn:constraint:relax:action} is satisfied with $\\alpha^{\\bar{\\varphi}(o)}_{d(\\ell)}(n)>0$ for the only $n\\in\\mathscr{N}_{d(\\ell)}$ and for all $\\ell\\in[L]$. \nTo see this, note that\nthe construction of the policy $\\bar{\\varphi}(o)$ guarantees that the resulting multipliers $\\bm{\\nu}(o,\\bm{0})$ will be non-negative, and so\nit follows from~\\eqref{eqn:a_opt:dummy} that $\\alpha^{\\bar{\\varphi}(o)}_{d(\\ell)}(n) > 0$ only if $\\nu_{\\ell}(o,\\bm{0}) = 0$. That is, having $\\nu_{\\ell}(o,\\bm{0}) = 0$ is associated with there being a positive probability that the dummy pattern $d(\\ell)$ is selected in the relaxed system.\nFurthermore, if there is a PS pair $\\iota$ (for a non-dummy pattern $i_{\\iota}\\in\\mathscr{P}_{\\ell}$) which satisfies the condition described in \\ref{case:equality:action}, that is PS pair $\\iota$ causes the relaxed action constraint~\\eqref{eqn:constraint:relax:action} to bite, Algorithm~\\ref{algo:varphi_gamma} will ensure that $\\alpha^{\\bar{\\varphi}(o)}_{i_{\\iota'}}(n_{\\iota'}) = 0$ for all PS pairs $\\iota'$ \nranked lower than $\\iota$ according to the order $o$. 
In particular, this will cause $\\alpha^{\\bar{\\varphi}(o)}_{d(\\ell)}(n) = 0$ for the only $n\\in\\mathscr{N}_{d(\\ell)}$.\n\nSo if $\\alpha^{\\bar{\\varphi}(o)}_{d(\\ell)}(n) > 0$, it is because the relaxed capacity constraints~\\eqref{eqn:constraint:relax:resources} bite before the relaxed action constraints~\\eqref{eqn:constraint:relax:action}.\nIf this is true for all $\\ell$, then the capacity constraints are biting for every request type, and so we refer to the case where there is an $o\\in\\mathscr{O}^*(\\bm{0})$ with $\\bm{\\nu}(o,\\bm{0})=\\bm{0}$ as a \\emph{heavy traffic} condition.\n\n\n\n\n\\begin{condition}{Heavy Traffic}\\label{cond:heavy_traffic}\nThe system is in heavy traffic if there is a ranking $o\\in\\mathscr{O}^*(\\bm{0})$ such that $\\bm{\\nu}(o,\\bm{0})=\\bm{0}$. \\vspace{-0.5cm}\n\\end{condition}\n\n{\\bf Remark} \nThe property of being weakly coupled and in heavy traffic simplifies the analysis of the complementary slackness conditions of the relaxed problem. In particular, the index related to a pattern, described in Equation~\\eqref{eqn:index_value}, is affected only by the multipliers of resource pools $j \\in[J]$ with $w_{j,i}>0$. Weak coupling helps reduce the number of such multipliers $\\gamma_j$, so that the index of a pattern is affected by at most one $\\gamma_j$, which in turn affects other pattern indices. When the system is weakly coupled and in heavy traffic, we can analytically solve the $I$ linear equations~\\eqref{eqn:necessary_gamma} and \\eqref{eqn:necessary_gamma:zero} and derive the $\\phi$ and $\\pmb{\\gamma}$ that satisfy the complementary slackness conditions described in equations~\\eqref{eqn:relaxaction:slack} and \\eqref{eqn:relaxconstraint:slack}. 
A detailed discussion is provided in the proof of Proposition~\\ref{prop:equal_opt}.\n\n\nProposition~\\ref{prop:equal_opt} guarantees the decomposability of a system when it is weakly coupled and in heavy traffic.\nThe property of being weakly coupled and in heavy traffic is stronger than necessary for decomposability, but it is simple to check and is satisfied in a number of common resource allocation problems. We now consider examples of systems that naturally satisfy these properties.\n\n\nAs explained above, the heavy traffic property is usually satisfied when the service capacity is insufficient (or only just sufficient) to meet the offered traffic load.\nThe weak coupling property, on the other hand, specifies the structure of the weight matrix $\\mathcal{W}$. \nFor instance, if each pattern involves only one resource pool (that is, for all $i\\in[I]\\backslash\\{d(\\ell)|\\ell\\in[L]\\}$, $|\\mathscr{J}_i|=1$), then the system is weakly coupled, while each resource pool is still potentially shared by requests of different types. \n\n\nWithin the above framework, we can model skill-based resource pooling in call centers (see \\cite{wallace2004resource,cezik2008staffing}) as a weakly coupled resource allocation problem; and, when its traffic load is also heavy, the system is decomposable.\nIn each call center, agents are trained for several skills, such as two or three languages, and are able to handle some but not all of the incoming calls. We classify these agents into multiple call centers according to their trained skills; that is, all agents in the same call center have the same skills and are able to serve the same types of calls. 
In this context, a call corresponds to a request, an agent corresponds to a resource unit, a call center is a resource pool and a call type is a request type.\n\nSince each call is served by an agent with appropriate skills, each pattern consists of only one call center (resource pool) and selecting a pattern means selecting an agent (a resource unit) from the corresponding call center: this problem is weakly coupled. Note that agents of each call center are potentially serving different types of calls simultaneously, and the capacity constraints \\eqref{eqn:constraint:resources} still restrict the system because of the limited number of agents in each call center. \n\n\nIn particular, the trained skills are used to establish the availability of an agent to serve a call, and do not relate to any concept defined in the resource allocation problem. When an agent of call center $j\\in[J]$ is able to serve calls of type $\\ell\\in[L]$, regardless of the skills needed for this service, there is a pattern $i$ in $\\mathscr{P}_{\\ell}$ with $w_{j,i}=1$ and $w_{j',i}=0$ for all $j'\\in[J]\\backslash\\{j\\}$. \nFor instance, a call center has agents who can speak English and Chinese, and there are two types of calls: one requires English or French-speaking agents and the other Chinese or Japanese speakers. A call of either type can be served by an agent of this call center, and many calls of both types can be served by this call center simultaneously. \n\n \n\nOther problems with similar features, such as health-care task scheduling for agents with different qualifications (see \\cite{lieder2015task}) and home health-care scheduling (see \\cite{fikar2017home}), can also be modeled as weakly coupled systems. And, of course, when the systems are also in heavy traffic, they are decomposable.\n\n\nA virtual machine (VM) replacement problem can be modeled as a resource allocation problem (see \\cite{stolyar2013infinite,stolyar2017large}). 
VM replacement is about consolidating multiple VMs onto a set of physical machines\/servers, where each physical server can usually accommodate more than one VM simultaneously. To consolidate a VM, certain numbers of physical units, such as CPU cycles, memory, disk, or I\/O ports, located on a server will be occupied by this VM until it is completed. The VMs and servers are potentially different, and, because of different server profiles or user preference, a server is not necessarily able to accommodate every VM. \nConsider a simple version, for which the capacity of a server is determined by the total amount of only one type of physical unit: this server has a plentiful supply of the others or is not aware of other physical units.\nIn this case, regarding a VM as a request, a server as a resource pool and a physical unit of the shortage type as a resource unit, we obtain a resource allocation problem that is weakly coupled. \nSimilar problems in computer networks, such as the virtual node embedding (see \\cite{esposito2016distributed}), server provisioning in distributed cloud environments (see \\cite{wei2017data}), and wireless resource scheduling (see \\cite{chen2017wireless}), can potentially be modeled as weakly coupled resource allocation problems. And as before, when the weakly coupled systems are in heavy traffic, the decomposability property holds.\n\n\nAs in \\cite{stolyar2013infinite,stolyar2017large}, for general VM replacement problems, each server capacity is not necessarily constrained by physical units of just one type. As above, we model a VM as a request, a physical unit as a resource unit, and the set of all physical units of the same type located on the same server as a resource pool. In this context, the capacity of each resource pool is determined by the total number of its associated physical units of a given type on a server and the weak coupling property cannot hold in general. 
\nIt follows that, unlike the preceding examples, the system is not necessarily decomposable.\nHowever, as discussed in Section~\\ref{subsubsec:derive_ranking}, a decomposable system that is not weakly coupled or in heavy traffic can be identified by finding a fixed point $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ of the function $\\mathcal{T}^o$ ($o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{0})$). Numerical examples of such systems will be provided in Section~\\ref{sec:example}.\n\n\n\\section{The Index Policy: Its Implementation in the Non-Asymptotic Regime}\\label{sec:index_policy}\nIn Section~\\ref{sec:relaxation}, we considered the relaxed problem with constraints \\eqref{eqn:constraint:relax:action}-\\eqref{eqn:constraint:dummy}. Here, we return to the original problem with constraints \\eqref{eqn:constraint:action} and \\eqref{eqn:constraint:resources}.\n\n\nFor each RT $\\ell\\in[L]$, we refer to a policy $\\varphi\\in\\Phi$ as an \\emph{index policy} according to PS-pair ranking $o\\in\\mathscr{O}$ if it always prioritizes a candidate process in \na PS pair with a ranking equal to or higher than those of all the other candidate processes.\nThis policy $\\varphi$ is applicable to the original problem while the policy $\\bar{\\varphi}(o)$ proposed in Section~\\ref{subsubsec:priorities_of_pairs} is, in general, not.\nThe method of implementing such a $\\varphi$ is not unique; for instance, the computation of the ranking of the PS pairs can vary. 
Here we propose one possible implementation.\n\n\nFor $t>0$, we maintain a sequence of $I$ ordered PS pairs $(i,N^{\\varphi}_i(t))$ ($i\\in[I]$) that are associated with the $I$ patterns, according to the given ranking $o$ and the state vector $\\bm{N}^{\\varphi}(t)$: PS pair $(i,N^{\\varphi}_i(t))$ is placed ahead of $(i',N^{\\varphi}_{i'}(t))$ if and only if the former precedes the latter in the ranking $o$.\nLet $i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$ ($\\sigma\\in[I]$) represent the pattern associated with the $\\sigma$th PS pair in this ordered sequence.\n\nFor a general ranking $o\\in\\mathscr{O}$, the variables $i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$ are potentially updated at each state transition. Nonetheless, for the purpose of this paper, we mainly focus on the rankings $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$ (for some $\\pmb{\\gamma}\\in\\mathbb{R}^J_0$ and $\\bm{\\nu}\\in\\mathbb{R}^L$) that follow the descending order of $\\Xi_i(\\pmb{\\gamma},\\bm{\\nu})$.\nIn this case, the variables $i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$ are updated only if a pattern $i\\in[I]\\backslash\\{d(\\ell)|\\ell\\in[L]\\}$ transitions into or out of its boundary state $|\\mathscr{N}_i|-1$.\n\n\n\nConsider the capacity constraints\n\\begin{equation}\\label{eqn:constraint:resources:epsilon:1}\n\\sum\\limits_{i'\\in[I]}w_{j,i'}N^{\\varphi}_{i'}(t) + \\sum\\limits_{i'\\in[I],i'\\neq i}w_{j,i'}a^{\\varphi}_{i'}(\\bm{N}^{\\varphi}(t))+ a^{\\varphi}_i(\\bm{N}^{\\varphi}(t)) \\leq \\Bigl\\lceil C_j\\bigl(1-\\bar{\\epsilon}_{j,\\iota(i,N^{\\varphi}_i(t))}\\bigr)\\Bigr\\rceil,~\\forall j\\in[J], i\\in[I],\n\\end{equation}\nwhere $\\iota(i,N^{\\varphi}_i(t))\\in[N]$ represents the order of PS pair $(i,N^{\\varphi}_i(t))$ in the ranking $o$ and $\\bar{\\bm{\\epsilon}}\\in [0,1]^{J\\times N}$ is a given matrix of parameters.\nApart from this matrix of parameters, constraints~\\eqref{eqn:constraint:resources:epsilon:1} are the same as 
constraints~\\eqref{eqn:constraint:resources}.\nAs we shall discuss in Section~\\ref{subsec:policies}, we choose the $\\bar{\\epsilon}_{j,\\iota}$ such that $\\bar{\\epsilon}_{j,\\iota}C_j \\geq w_{j,i_{\\iota}}-1$ and, for any $j\\in[J]$ and PS pairs $\\iota<\\iota'$ with respect to the given ranking $o$, if $w_{j,i_{\\iota}},w_{j,i_{\\iota'}}>0$, then $\\bar{\\epsilon}_{j,\\iota} < \\bar{\\epsilon}_{j,\\iota'}$.\nIn this context, if $\\bar{\\epsilon}_{j,\\iota}C_j \\in [ w_{j,i_{\\iota}}-1,w_{j,i_{\\iota}})$ for all $\\iota\\in[N]$ and $j\\in[J]$, then constraints~\\eqref{eqn:constraint:resources:epsilon:1} reduce to \\eqref{eqn:constraint:resources}; otherwise, they are more stringent than \\eqref{eqn:constraint:resources}.\nThe parameter $\\bar{\\bm{\\epsilon}}$ is used to specify the trajectory of the underlying process $\\bm{N}^{\\varphi}(t)$ when the system is scaled to the asymptotic regime. \nThis specification is required \nfor proving the asymptotic optimality of the index policy $\\varphi$. \n\n\nFor notational consistency, we shall use the form \\eqref{eqn:constraint:resources:epsilon:1} throughout;\nhowever, since we are not yet concerned with asymptotic behavior, here we consider the case with $\\bar{\\epsilon}_{j,\\iota}C_j \\in [ w_{j,i_{\\iota}}-1,w_{j,i_{\\iota}})$ for all $\\iota\\in[N]$ and $j\\in[J]$, so that \\eqref{eqn:constraint:resources:epsilon:1} reduces to \\eqref{eqn:constraint:resources}.\nA detailed discussion of the scaling procedure and the role of $\\bar{\\bm{\\epsilon}}$ in the asymptotic case will be provided in Section~\\ref{sec:asym}.\n\nUnder the index policy $\\varphi$, we select $L$ patterns to accept new arrivals of the $L$ types according to their orders in the sequence $i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$ ($\\sigma\\in[I]$).\nIn particular, at a decision-making epoch $t>0$, we initialize $a^{\\varphi}_i(\\bm{N}^{\\varphi}(t))=0$ for all $i\\in[I]$ and a set of \\emph{available patterns} to be $[I]$. 
\nIf, for $i=i^o_1(\\bm{N}^{\\varphi}(t))$, constraints~\\eqref{eqn:constraint:resources:epsilon:1}\nwill not be violated by setting $a^{\\varphi}_i(\\bm{N}^{\\varphi}(t))=1$, then set $a^{\\varphi}_i(\\bm{N}^{\\varphi}(t))=1$ and remove all patterns associated with request type $\\ell(i)$ from the set of available patterns.\n\n\n\nThe other $L-1$ patterns are selected similarly and iteratively.\nThat is, we look for the smallest $\\sigma\\in\\{2,3,\\ldots,I\\}$ such that \n\\begin{itemize}\n\\item $i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$ remains in the set of available patterns; and\n\\item the capacity constraints~\\eqref{eqn:constraint:resources:epsilon:1} will not be violated by setting $a^{\\varphi}_i(\\bm{N}^{\\varphi}(t))=1$ where $i=i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$.\n\\end{itemize}\nIf there is such a $\\sigma$, set $a^{\\varphi}_i(\\bm{N}^{\\varphi}(t))=1$ for $i=i^o_{\\sigma}(\\bm{N}^{\\varphi}(t))$, remove all patterns associated with request type $\\ell(i)$ from the set of available patterns and continue selecting the remaining $L-2$ patterns in the same manner.\nWhen all of the $L$ patterns have been selected we can stop. 
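This selection loop can be sketched in Python. The sketch below is a simplified illustration with hypothetical data structures (plain lists for the weights, capacities, state and ranking); the capacity check uses the reduced form in which \eqref{eqn:constraint:resources:epsilon:1} coincides with \eqref{eqn:constraint:resources}.

```python
# Sketch of the iterative pattern selection described above; data
# structures (lists for w, C, n, ell and the ranked pattern order)
# are illustrative, not from the paper.
def index_policy(ranked, ell, w, C, n):
    """One accepting pattern per request type, chosen by PS-pair priority.

    ranked : patterns sorted by the ranking o (highest priority first)
    ell[i] : request type of pattern i;  w[j][i] : weight on pool j
    C[j]   : capacity of pool j;  n[i] : current instantiations
    """
    a = [0] * len(ell)
    available = set(range(len(ell)))
    for i in ranked:
        if i not in available:
            continue
        # reduced capacity check: accepting one more request on pattern i
        # must keep sum_i w_{j,i}(n_i + a_i) within C_j for every pool j
        if all(sum(w[j][k] * (n[k] + a[k]) for k in range(len(ell)))
               + w[j][i] <= C[j] for j in range(len(C))):
            a[i] = 1
            # all other patterns serving the same request type drop out
            available -= {k for k in available if ell[k] == ell[i]}
    return a
```

For instance, with two patterns of one request type and a third pattern of another, `index_policy([0, 1, 2], [0, 0, 1], [[2, 3, 1]], [5], [1, 0, 0])` accepts patterns 0 and 2, while tightening the capacity to 4 leaves only pattern 0 accepted.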
\nDetailed steps are provided in Algorithm~\\ref{algo:index_policy}, \nwhich has a computational complexity that is linear in $I$.\n\n\n\\IncMargin{1em}\n\\begin{algorithm}\n\\small \n\\linespread{0.4}\\selectfont\n\n\\SetKwFunction{FIndexPolicy}{IndexPolicy}\n\\SetKwProg{Fn}{Function}{:}{\\KwRet $\\bm{a}^{\\varphi}(\\bm{n})$}\n\\SetKwInOut{Input}{Input}\\SetKwInOut{Output}{Output}\n\\SetAlgoLined\n\\DontPrintSemicolon\n\\Input{a ranking of PS pairs $o\\in\\mathscr{O}$ and a given state $\\bm{n}\\in\\mathscr{N}$.}\n\\Output{the action variables $\\bm{a}^{\\varphi}(\\bm{n})$ under the index policy $\\varphi\\in\\Phi$ with respect to ranking $o$ when the system is in state $\\bm{n}$.}\n\\Fn{\\FIndexPolicy{$o,\\bm{n}$}}{\n\t$\\bm{a}^{\\varphi}(\\bm{n}) \\gets \\bm{0}$ \\tcc*{Initializing the action variables}\n\t$\\mathscr{P}\\gets [I]$ \\tcc*{Initializing the set of available patterns}\n\t$\\sigma \\gets 1$ \\tcc*{Starting with the pattern with the highest priority}\n \\While {$\\mathscr{P}\\neq \\emptyset $}{\t\n\t \t $i\\gets i^o_{\\sigma}(\\bm{n})$\\;\n\t\t\t\\If {$i\\in\\mathscr{P}$ {\\bf and} Constraints~\\eqref{eqn:constraint:resources:epsilon:1} are not violated by setting $a^{\\varphi}_i(\\bm{n}) =1$ and $\\bm{N}^{\\varphi}(t)=\\bm{n}$}{\n\t\t\t $a^{\\varphi}_i(\\bm{n})\\gets 1$\\;\n\t\t\t\tRemove all patterns $i'\\in\\mathscr{P}$ with $\\ell(i')=\\ell(i)$ from $\\mathscr{P}$\\;\n\t\t\t}\n\t \t$\\sigma\\gets \\sigma +1$\\;\n\t}\n}\n\\caption{Implementing the index policy $\\varphi$ with respect to ranking $o$.}\\label{algo:index_policy}\n\\end{algorithm}\n \\DecMargin{1em}\n\n\n\nThe performance of $\\varphi$ is mainly determined by the given order $o\\in\\mathscr{O}$. 
\nAs we discuss later for the asymptotic regime, if the policy $\\bar{\\varphi}(o)$ is optimal for the relaxed problem in the asymptotic regime, then $\\varphi$ is asymptotically optimal for the original problem.\nEven without proved asymptotic optimality, the ranking $o$ should ensure good performance of $\\varphi$, since it is always rational to prioritize patterns according to their potential profits.\nAs long as there are reasonably good $\\pmb{\\gamma}$ and $\\bm{\\nu}$ leading to an $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$ that correctly reflects the potential profits of the patterns, the performance degradation of $\\bar{\\varphi}(o)$ for the relaxed problem is likely to be limited, its revenue is likely to be close to the optimum of the original problem, and the index policy $\\varphi$ derived from $o$ is a promising choice for managing resources.\n\n\nThe selection of $\\pmb{\\gamma}$, $\\bm{\\nu}$ and $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$ is discussed in Section~\\ref{sec:relaxation}. 
The key point is to guarantee good performance of $\\bar{\\varphi}(o)$: the policy that is guaranteed to be optimal for the relaxed problem when the system is decomposable.\n\n\n\n\\section{Stochastic Optimization in a Scaled System} \\label{sec:asym}\n\n\nIn this section, we establish asymptotic optimality of $\\varphi$.\n\n\\vspace{-0.3cm}\n\\subsection{Scaling Parameter}\\label{subsec:asym_regime}\n\\vspace{-0.3cm}\nWith a parameter $h\\in\\mathbb{N}_+$, let $\\bm{C}\\coloneqq h\\bm{C}^{0}$, $\\bm{C}^{0}\\in\\mathbb{N}_{+}^{J}$, and the arrival rates scale as $\\bm{\\lambda}\\coloneqq h\\bm{\\lambda}^{0}$, $\\bm{\\lambda}^{0}\\in\\mathbb{R}_{+}^{L}$.\nWe refer to the parameter $h$ as the \\emph{scaling parameter}, and the \\emph{asymptotic regime} as the limiting case with $h\\rightarrow +\\infty$.\n\n\n\nWe split the process associated with pattern $i$ into $h$ identical \\emph{sub-processes}\n$(i,k)$, $k\\in[h]$, and divide $N^{\\phi}_{i}(t)$, the number of instantiations for pattern $i$ under policy $\\phi$ at time $t$, into $h$ pieces.\nThe number of instantiations of the $k$th piece is $N^{\\phi}_{i,k}(t)$, so that $N^{\\phi}_{i}(t) = \\sum_{k=1}^{h}N^{\\phi}_{i,k}(t)$.\nWe refer to $N^{\\phi}_{i,k}(t)$ as the number of instantiations for \\emph{sub-pattern} $(i,k)$.\nThe counting process given by $N^{\\phi}_{i,k}(t)$ ($k\\in [h], i\\in [I]$) has state space \n$\\mathscr{N}_{i}^{0} \\coloneqq \\{0,1,\\ldots,\\min_{j\\in \\mathscr{J}_i}\\lceil C_{j}^{0}\/w_{j,i}\\rceil\\}$.\nFor any dummy pattern $d(\\ell)$, we take $\\mathscr{N}^0_{d(\\ell)}= \\mathscr{N}_{d(\\ell)} = \\{0\\}$.\n\n\n\nThe objective and constraints defined by \\eqref{eqn:objective}, \\eqref{eqn:constraint:action} and \\eqref{eqn:constraint:resources} still apply to the sums of variables $ \\sum_{k=1}^{h}N^{\\phi}_{i,k}(t)\\coloneqq N^{\\phi}_i(t)$, $i\\in[I]$.\nWe say the process associated with pattern $i$ is \\emph{replaced} by the $h$ sub-processes associated with sub-patterns $(i,k)$, 
$k\\in[h]$.\nEach sub-pattern $(i,k)$ earns reward $r_{\\ell(i)}$ per served request and the cost rate that a request accommodated by this sub-pattern imposes on resource pool $j\\in[J]$ is $\\varepsilon_jw_{j,i}$; that is,\nthe sub-process $(i,k)$ maintains the same reward and cost rates in states $n\\in\\mathscr{N}_i^0$ as process $i$.\nLet $\\bm{N}_{h}^{\\phi}(t) = (N^{\\phi}_{i,k}(t):\\ i\\in[I], k\\in[h])$ be the state variable after this replacement,\nand $a^{\\phi}_{i,k}(\\bm{N}^{\\phi}_h(t))\\in\\{0,1\\}$ ($i\\in[I],k\\in[h]$) be the action variables with respect to the process $\\bm{N}^{\\phi}_h(t)$.\nFor clarity, we rewrite the objective described in \\eqref{eqn:objective} as\n\\begin{equation}\\label{eqn:objective:h}\n\\max\\limits_{\\phi} \\frac{1}{h}\\sum\\limits_{i\\in[I]} \\sum\\limits_{k\\in[h]}\\sum_{n_i\\in\\mathscr{N}_i} \\pi_{i,k}^{\\phi,h}(n_i) \\Bigl(r_{\\ell(i)} \\mu_i - \\sum_{j\\in\\mathscr{J}} w_{j,i} \\varepsilon_j\\Bigr)n_i,\n\\end{equation}\nwhere $\\pi^{\\phi,h}_{i,k}(n_i)$ represents the stationary probability that the state of sub-process $(i,k)$ is $n_i$ under policy $\\phi$ with scaling parameter $h$.\nWe divide the total revenue earned by all sub-patterns by $h\\in\\mathbb{N}_+$ so that the objective function is always bounded when $h\\rightarrow +\\infty$. 
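The splitting $N^{\phi}_i(t)=\sum_{k=1}^{h}N^{\phi}_{i,k}(t)$ and the $1/h$ normalisation can be illustrated with a small numerical check; all values below are hypothetical toy data.

```python
# Toy check of the splitting N_i = sum_k N_{i,k} and the 1/h
# normalisation of revenue; all numbers are hypothetical.
h = 4
# N[i][k]: instantiations of sub-pattern (i, k), for I = 2 patterns
N = [[1, 0, 2, 0], [0, 1, 0, 0]]
N_agg = [sum(row) for row in N]           # aggregated counts N_i
assert N_agg == [3, 1]

# per-instantiation net revenue rate of each pattern (toy values)
rate = [5.0, 2.0]
revenue_normalised = sum(rate[i] * N_agg[i] for i in range(2)) / h
assert revenue_normalised == 17.0 / 4     # stays bounded as h grows
```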
\nThe policy $\\phi$ in \\eqref{eqn:objective:h} is determined by the action variables $a^{\\phi}_{i,k}(\\bm{N}^{\\phi}_h(t))$ ($i\\in[I],k\\in[h]$)\nsubject to \\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:constraint:action:h}\n\\sum\\limits_{i\\in\\mathscr{P}_{\\ell}}\\sum\\limits_{k\\in[h]}a^{\\phi}_{i,k}(\\bm{N}^{\\phi}_h(t))=1,~\\forall \\ell\\in[L],~\\forall t \\geq 0, \\vspace{-0.3cm}\n\\end{equation}\nand \\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:constraint:resources:h}\n\\sum\\limits_{i\\in[I]}\\frac{w_{j,i}}{h}\\sum\\limits_{k\\in[h]}\\Bigl(N^{\\phi}_{i,k}(t)+a^{\\phi}_{i,k}(\\bm{N}^{\\phi}_h(t))\\Bigr)\\leq C_j^0,~\\forall j\\in[J],~\\forall t \\geq 0.\n\\end{equation}\nThe constraints in \\eqref{eqn:constraint:resources:h} are obtained by substituting $C_j^0h$ for $C_j$ in the constraints stated in \\eqref{eqn:constraint:resources}, and thus \\eqref{eqn:constraint:resources:h} is equivalent to \\eqref{eqn:constraint:resources}.\nAlso, to guarantee that the maximal value of each $N^{\\phi}_{i,k}(t)$ ($k\\in[h]$), $\\min_{j\\in\\mathscr{J}_i}\\lceil C_j^0\/w_{j,i}\\rceil$, is not exceeded, define, for $k\\in[h]$ and $i\\in[I]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}$,\n\\vspace{-0.4cm}\n\\begin{equation}\\label{eqn:constraint:resources:zero:h}\na^{\\phi}_{i,k}(\\bm{N}^{\\phi}_h(t))=0,~\\text{if } N^{\\phi}_{i,k}(t)=|\\mathscr{N}_i^0|-1,\n\\vspace{-0.4cm}\n\\end{equation}\nwhich corresponds to the redundant constraints described in \\eqref{eqn:constraint:resources:zero}.\n\n\n\n\n{\\bf Remark}\nAs in Section~\\ref{sec:introduction}, we activate exactly one sub-process $(i,k)$ ($i\\in\\mathscr{P}_{\\ell}$, $k\\in[h]$) for RT $\\ell\\in[L]$ regardless of the scaling parameter $h\\in\\mathbb{N}_+$.\nThe birth and death rates of this active sub-process are $h\\lambda_{\\ell}^0$ and $N^{\\phi}_{i,k}(t)\\mu_{i}^0$, respectively, so that if $h\\lambda^0_{\\ell} \\gg (|\\mathscr{N}^0_i|-1)\\mu_{i}^0$, the number of instantiations of pattern $i$ will increase 
rapidly until it is restricted by the capacity constraints.\n\n\nA model with a single active sub-process at any time has different stochastic properties compared to the case where the number of active sub-processes is proportional to $h$ (which was discussed in \\cite{weber1990index}). \nTo illustrate the difference, we present an example in Appendix~\\ref{app:example3} of the e-companion.\n\n\n\n\nThe optimization problem consisting of the $hI$ sub-processes associated with $hI$ sub-patterns, coupled through constraints~\\eqref{eqn:constraint:action:h}-\\eqref{eqn:constraint:resources:zero:h} can be analyzed and relaxed along the same lines as in Section~\\ref{sec:relaxation}. \nLet $\\alpha^{\\phi}_{i,k}(n)\\coloneqq \\lim_{t\\rightarrow +\\infty}\\mathbb{E}\\{a^{\\phi}_{i,k}(\\bm{N}^{\\phi}_h(t))| N^{\\phi}_{i,k}(t)=n\\}$ ($n\\in\\mathscr{N}^0_{i}$, $i\\in[I]$, $k\\in[h]$) represent the action variables of the $hI$ sub-problems for the relaxed problem scaled by $h$. \n\n\nAll the sub-processes corresponding to a given pattern $i\\in[I]$ in the same state $n\\in\\mathscr{N}_i^0$ are equivalent.\nThe controller then is concerned only with the total number of active sub-processes of a given pattern in a given state. \nDefine the random variable $Z^{\\phi,h}_{\\iota}(t)$ to be the proportion of sub-processes in PS pair $\\iota$ at time $t$ under policy $\\phi$ where $h$ is the scaling parameter; that is, \\vspace{-0.3cm}\n\\begin{equation}\\label{eqn:z_process}\nZ^{\\phi,h}_{\\iota}(t)=\\frac{1}{hI}\\Bigl|\\bigl\\{(i,k)\\in[I]\\times[h]\\ \\left|\\ N^{\\phi}_{i,k}(t)=n_{\\iota},\\ i_{\\iota}=i\\right.\\bigr\\}\\Bigr|. 
\\vspace{-0.3cm}\n\\end{equation} \n\nLet $\\bm{Z}^{\\phi,h}(t)=(Z^{\\phi,h}_{\\iota}(t):\\ \\iota\\in[N])$ and $\\mathscr{Z}$ be the probability simplex $\\{\\bm{z}\\in[0,1]^{N}\\ |\\ \\sum_{\\iota\\in[N]}z_{\\iota}=1\\}$.\nIn this model, the process $\\bm{Z}^{\\phi,h}(t)$ is analogous to the counting process $\\bm{N}^{\\phi}_h(t)$ in the original system.\nWhen the process $\\bm{Z}^{\\phi,h}(t)$ takes value $\\bm{z}\\in\\mathscr{Z}$, it can transition only to a state of the form $\\bm{z}+\\bm{e}_{\\iota,\\iota'}\\in\\mathscr{Z}$ with $i_{\\iota}=i_{\\iota'}$, where $\\bm{e}_{\\iota,\\iota'}\\in \\mathbb{R}^{N}$ is a vector with $\\iota$th element $+1\/hI$, $\\iota'$th element $-1\/hI$ and all the other elements set to zero. \nFor our birth-and-death process, a transition will happen only with $n_{\\iota'} = n_{\\iota}\\pm 1$ corresponding to the arrival and departure of a request, respectively.\nFor any given $h\\in\\mathbb{N}_+$, \nthe state space of the process $\\bm{Z}^{\\phi,h}(t)$ is a subset of $\\mathscr{Z}$ and thus the system is always stable.\nRecall that the \\emph{asymptotic regime} refers to the limiting case $h\\rightarrow +\\infty$.\n\n\n\nNote that any resource allocation problem in the non-asymptotic regime coincides with a scaled problem described in \\eqref{eqn:objective:h}-\\eqref{eqn:constraint:resources:zero:h} with given $h<+\\infty$. The scaling parameter $h$ is introduced to rigorously specify the trajectory of the entire system going from a non-asymptotic regime to an asymptotic regime.\n\n\\vspace{-0.5cm}\n\n\\subsection{Index Policies in a Scaled System}\\label{subsec:policies}\n\\vspace{-0.3cm}\n\nIn Section~\\ref{sec:index_policy}, for a ranking $o\\in\\mathscr{O}$, we proposed an index policy $\\varphi\\in\\Phi$ for the resource allocation problem in the non-asymptotic regime; this coincides with the problem described in \\eqref{eqn:objective:h}-\\eqref{eqn:constraint:resources:zero:h} with given $h<+\\infty$. 
\nFor clarity, we translate the description of $\\varphi$ to a policy used for a scaled system with the notation described in Section~\\ref{subsec:asym_regime}.\n\n\nFor a ranking $o\\in\\mathscr{O}$, the index policy $\\varphi$ \\emph{activates} a sub-process in the first PS pair $\\iota\\in[N]$ in ranking $o$ with $Z^{\\varphi,h}_{\\iota}(t)>0$ and the action and capacity constraints holding; that is, $\\varphi$ selects a sub-process $(i_{\\iota},k)$ ($k\\in[h]$) satisfying $N^{\\varphi}_{i_{\\iota},k}(t)=n_{\\iota}$ and sets $a^{\\varphi}_{i_{\\iota},k}(\\bm{N}^{\\varphi}_h(t))$ to $1$. The condition $Z^{\\varphi,h}_{\\iota}(t)>0$ is required because there must be at least one sub-process in PS pair $\\iota$ available for activation.\nOnce a sub-process in PS pair $\\iota$ is selected for activation, the action constraint~\\eqref{eqn:constraint:action:h} for RT $\\ell_{\\iota}$ holds with equality: there is exactly one active sub-process for a specific RT $\\ell\\in[L]$.\nResource units in associated resource pools are reserved for this activated sub-process in PS pair $\\iota$. 
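For concreteness, the proportions $Z^{\varphi,h}_{\iota}(t)$ that the policy inspects can be computed directly from the sub-process states, as in the following toy sketch (values are hypothetical; PS pairs are encoded as (pattern, state) tuples):

```python
# Proportions of sub-processes in each PS pair, following the
# definition of Z^{phi,h}; all values are toy data.
h, I = 2, 2
# N[i][k]: state of sub-process (i, k)
N = [[0, 1], [1, 1]]
Z = {(i, n): sum(1 for k in range(h) if N[i][k] == n) / (h * I)
     for i in range(I) for n in range(2)}
assert Z[(0, 0)] == 0.25 and Z[(1, 1)] == 0.5
assert sum(Z.values()) == 1.0             # Z lies on the simplex
```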
\nIn this way, $L$ sub-processes in $L$ different PS pairs will be activated iteratively, according to the ranking $o$, for the $L$ RTs.\n\nUnder the index policy $\\varphi$, the transition matrix of process $\\bm{Z}^{\\varphi,h}(t)$ is determined by the value of $\\sum_{k\\in[h],N^{\\varphi}_{i_{\\iota},k}(t)=n_{\\iota}}a^{\\varphi}_{i_{\\iota},k}(\\bm{N}^{\\varphi}_h(t))$ for each PS pair $\\iota\\in[N]$, which is either $0$ or $1$ and is dependent on $\\bm{N}^{\\varphi}_h(t)$ only through $\\bm{Z}^{\\varphi,h}(t)$.\nDefine $\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})$, $\\iota\\in[N]$, $\\bm{z}\\in \\mathscr{Z}$, to be the ratio of the number of active sub-processes in PS pair $\\iota$, for which the corresponding sub-patterns are prepared to accept a request, to the total number of sub-processes in this PS pair under $\\varphi$, when the proportions of sub-processes in all PS pairs are currently specified by $\\bm{z}$. \nThat is, at time $t$, for $\\iota\\in[N]$,\n\\begin{equation}\n\\upsilon^{\\varphi,h}_{\\iota}\\bigl(\\bm{Z}^{\\varphi,h}(t)\\bigr)= \\frac{\\sum_{k\\in[h],N^{\\varphi}_{i_{\\iota},k}(t)=n_{\\iota}}a^{\\varphi}_{i_{\\iota},k}(\\bm{N}^{\\varphi}_h(t))}{IhZ^{\\varphi,h}_{\\iota}(t)},\n\\end{equation}\nwhere we recall that \nthe numerator on the right hand side depends on $\\bm{N}^{\\varphi}_h(t)$ only through $\\bm{Z}^{\\varphi,h}(t)\\in\\mathscr{Z}$.\nNote that, for arbitrarily large $h$, the value of $\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})Ihz_{\\iota}$ ($\\bm{z}\\in\\mathscr{Z}$), representing the number of active sub-processes in PS pair $\\iota$, is never more than $1$ because the policy $\\varphi$ must satisfy the action constraints \\eqref{eqn:constraint:action:h}.\nLet $\\bm{\\upsilon}^{\\varphi,h}(\\bm{z})=(\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z}):\\ \\iota\\in[N])$.\nAlthough different tie-breaking rules lead to the same process $\\bm{Z}^{\\varphi,h}(t)$, \nwe shall stipulate that, when there is more than one sub-process $(i,k)$ ($i\\in [I], 
k\\in[h]$) in the same PS pair available for activation, we prioritize the one with the smaller value of $k$.\nIn this context, the variables $\\bm{\\upsilon}^{\\varphi,h}(\\bm{z})$, $\\bm{z}\\in\\mathscr{Z}$, provide sufficient information for the index policy $\\varphi$ to make decisions on the counting process $\\bm{N}^{\\varphi}_h(t)$.\n\n\n\n\nLet $\\zeta_{\\iota}^{\\varphi,h}(\\bm{z})$ represent the maximal proportion of sub-processes in PS pair $\\iota$ that can be active if we consider only the capacity constraints defined by \\eqref{eqn:constraint:resources:h} (neglecting the action constraints defined by \\eqref{eqn:constraint:action:h}) with proportions of sub-processes in all PS pairs specified by $\\bm{z}$ under policy $\\varphi$.\nWe obtain that, for $\\iota\\in[N]\\backslash\\{d(\\ell):\\ \\ell\\in[L]\\}$,\n\\begin{equation}\\label{eqn:tilde_z}\n\\zeta^{\\varphi,h}_{\\iota}(\\bm{z}) =\n\\min\\Biggl\\{z_{\\iota},\\max\\biggl\\{0,\\min\\limits_{j\\in\\mathscr{J}_i} \\frac{1}{w_{j,i}Ih}\\Bigl\\lceil hC_{j}^{0}(1-\\epsilon^h_{j,\\iota})-\\sum\\limits_{\\iota'=1}^{N}w_{j,i_{\\iota'}}n_{\\iota'}z_{\\iota'}Ih-\\sum\\limits_{\\iota'\\in \\mathscr{N}_{\\iota}^{+}}w_{j,i_{\\iota'}}\\upsilon^{\\varphi,h}_{\\iota'}(\\bm{z})z_{\\iota'}Ih\\Bigr\\rceil\\biggr\\}\\Biggr\\},\n\\end{equation}\nwhere $\\mathscr{N}_{\\iota}^{+}$, $\\iota\\in[N]$, is the set of PS pairs $\\iota'\\in[N]$ with $\\iota' <\\iota$ (with higher priorities than pair $\\iota$) with respect to ranking $o$, \nand $\\epsilon^h_{j,\\iota}\\in [0,1]$ corresponds to $\\bar{\\epsilon}_{j,\\iota}$ in \\eqref{eqn:constraint:resources:epsilon:1}.\n\nHere, the parameter $\\epsilon^h_{j,\\iota}$ is defined so that\n\\begin{equation}\\label{eqn:epsilon_condition}\n0<\\lim\\limits_{h\\rightarrow +\\infty}\\epsilon^h_{j,\\iota} < \\lim\\limits_{h\\rightarrow +\\infty}\\epsilon^h_{j,\\iota'} \\leq 1\n\\end{equation}\nfor any $\\iota < \\iota'$, $w_{j,i_{\\iota}}>0$ and $w_{j,i_{\\iota'}}>0$.\nWe need 
$\\epsilon^h_{j,\\iota}$ to indicate the priorities of PS pairs in resource pool $j$, because\nthe last term in \\eqref{eqn:tilde_z},\n$\\sum_{\\iota'\\in \\mathscr{N}_{\\iota}^{+}}w_{j,i_{\\iota'}}\\upsilon^{\\varphi,h}_{\\iota'}(\\bm{z})z_{\\iota'}Ih$,\nis $o(h)$.\nIn particular, \nin order to follow the strict capacity constraints described in \\eqref{eqn:constraint:resources:h}, we need to define the $\\epsilon^h_{j,\\iota}$ so that $\\epsilon^h_{j,\\iota}C_{j}^0 h \\geq w_{j,i_{\\iota}}-1$\nfor all $j\\in[J]$, $\\iota\\in[N]$ and $h\\in\\mathbb{N}_+$, and so that $\\lim\\limits_{h\\rightarrow+\\infty}\\epsilon^h_{j,\\iota}$ exists.\nLet $\\bm{\\epsilon}^h \\coloneqq (\\epsilon^h_{j,\\iota}:\\ j\\in[J],\\iota\\in[N])$ and $\\bm{\\epsilon} \\coloneqq \\lim\\limits_{h\\rightarrow +\\infty}\\bm{\\epsilon}^h$. \nDefine $\\mathscr{E}^h$, $h\\in\\mathbb{N}_+ \\cup \\{+\\infty\\}$, and $\\Psi$ as the sets of all such vectors $\\bm{\\epsilon}^h$ and sequences of such vectors $\\psi\\coloneqq (\\bm{\\epsilon}^1,\\bm{\\epsilon}^2,\\ldots)$, respectively.\n\n\n\nEquation \\eqref{eqn:epsilon_condition} specifies possible trajectories of $\\bm{\\epsilon}^h$ as $h\\rightarrow +\\infty$, and is required for subsequent proofs of asymptotic optimality.\nNote that, although the asymptotic regime is a limiting situation, using an asymptotically-optimal policy is likely to be appropriate for systems with finite but large $h$. 
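One hypothetical construction of such a sequence $\epsilon^h_{j,\iota}$ is sketched below; the functional form and the constant $\delta$ are our own illustrative choices, not taken from the analysis, and the monotonicity over the ranking is checked only for the toy numbers used.

```python
# A hypothetical choice of eps^h_{j,iota}: large enough for the finite-h
# requirement eps * C_j^0 * h >= w_{j,i_iota} - 1, with a positive,
# strictly increasing limit over the ranking (delta is illustrative).
def make_eps(h, C0_j, w_col, delta=0.01):
    """w_col[t] = w_{j, i_iota} for the (t+1)-st PS pair in ranking o."""
    N = len(w_col)
    return [max((w_col[t] - 1) / (C0_j * h), delta * (t + 1) / N)
            for t in range(N)]

eps = make_eps(h=100, C0_j=10, w_col=[3, 2, 5])
assert all(e1 < e2 for e1, e2 in zip(eps, eps[1:]))       # increasing
assert all(e * 10 * 100 >= w - 1 for e, w in zip(eps, [3, 2, 5]))
# as h grows, eps converges to the strictly increasing positive limit
assert make_eps(10**9, 10, [3, 2, 5]) == [0.01 * 1 / 3, 0.01 * 2 / 3, 0.01 * 3 / 3]
```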
\n\n\nIn \\eqref{eqn:tilde_z}, the value of $\\zeta^{\\varphi,h}_{\\iota}(\\bm{z})$ is constrained by the remaining capacities of relevant resource pools, the proportion of sub-processes currently in PS pair $\\iota$ and the proportions of active sub-processes in PS pairs with higher priorities.\nRecall that $\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})$ represents the proportion of active sub-processes in PS pair $\\iota$, for which the corresponding sub-patterns are prepared to accept a request, when the proportions of sub-processes in all PS pairs are currently specified by $\\bm{z}$.\nTogether with the action constraints described in \\eqref{eqn:constraint:action:h}, under an index policy $\\varphi$, \nfor $z_{\\iota}>0$,\\vspace{-0.15cm}\n\\begin{equation}\\label{eqn:probability_active}\n\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z}) = \\frac{1}{z_{\\iota}hI}\n\\min\\biggl\\{\\zeta^{\\varphi,h}_{\\iota}(\\bm{z})hI,\\ \\max\\Bigl\\{0,1- \\sum\\limits_{\\begin{subarray}{c} \\iota'\\in \\mathscr{N}^{+}_{\\iota}\\\\ \\ell_{\\iota'}=\\ell_{\\iota}\\end{subarray}}\\zeta^{\\varphi,h}_{\\iota'}(\\bm{z})hI\\Bigr\\}\\biggr\\}.\n\\end{equation}\nIf $z_\\iota = 0$, then there are no sub-processes in PS pair $\\iota$ and so $\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})$ can take any value in $[0,1]$ without affecting the evolution of the process.\nFor completeness, define, for $\\bm{z}$ with $z_{\\iota}=0$ and $\\bm{z}_{\\iota}^{x} \n\\coloneqq (z_{1},z_{2},\\ldots,z_{\\iota-1},x,z_{\\iota+1},\\ldots,z_{N})$, \n$\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})=\\lim\\limits_{x\\downarrow 0}\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z}^{x}_{\\iota})$.\nFor any given $\\bm{z}\\in\\mathscr{Z}$, $\\zeta^{\\varphi,h}_{\\iota}(\\bm{z})$ and $\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})$ can be obtained iteratively using equations \\eqref{eqn:tilde_z} and \\eqref{eqn:probability_active} from $\\iota=1$ to $N$.\n\n\n{\\bf Remark}\nAlthough capacity constraints were not considered in \\cite{weber1990index}, 
the construction of $\\bm{\\upsilon}^{\\varphi,h}(\\bm{z})$ ($\\bm{z}\\in\\mathscr{Z}$) follows ideas similar to those used in that paper. \nRecall that, for a given ranking $o\\in\\mathscr{O}$, the policy $\\bar{\\varphi}(o)$ is generated by Algorithm~\\ref{algo:varphi_gamma} and is infeasible for the original problem. \nThis gives rise to the interesting property that the values of $\\upsilon^{\\varphi,h}_{\\iota}(\\bm{z})$ and $\\alpha^{\\bar{\\varphi}(o)}_{i_{\\iota},k}(n_{\\iota})$ ($k\\in[h]$), for all $h\\in\\mathbb{N}_+\\cup\\{+\\infty\\}$, are always independent of those of PS pairs $\\iota'$ with $\\iota' > \\iota$: the PS pairs with lower priorities than $\\iota$.\nThe property is important \nfor the proofs of Theorems~\\ref{theorem:main_second} and \\ref{theorem:main}.\n\n\\vspace{-0.3cm}\n\n\\subsection{Asymptotic Optimality}\\label{sec:asym_opt}\n\\vspace{-0.3cm}\n\nFor given $h\\in\\mathbb{N}_+$, define the long-run average revenue normalized by $h$ of the resource allocation problem under policy $\\phi$ to be $R^{\\phi,h}$; that is, \\vspace{-0.2cm}\n\\begin{equation}\nR^{\\phi,h} \\coloneqq \\frac{1}{h}\\sum\\limits_{i\\in[I]} \\sum\\limits_{k\\in[h]}\\sum_{n_i\\in\\mathscr{N}_i} \\pi_{i,k}^{\\phi,h}(n_i) \\Bigl(r_{\\ell(i)} \\mu_i - \\sum_{j\\in\\mathscr{J}} w_{j,i} \\varepsilon_j\\Bigr)n_i, \\vspace{-0.2cm}\n\\end{equation}\nthe objective function described in \\eqref{eqn:objective:h}.\n\\vspace{-0.3cm}\n\\begin{definition}\nWe say that the index policy $\\varphi$ derived from PS ranking $o$ by iterating \\eqref{eqn:tilde_z} and \\eqref{eqn:probability_active} is \\emph{asymptotically optimal} if \n$\n\\lim\\limits_{\\lVert \\bm{\\epsilon}\\rVert \\to 0}\\lim\\limits_{h\\rightarrow+\\infty }|R^{\\varphi,h}-\\max\\limits_{\\phi\\in\\Phi}R^{\\phi,h}| = 0$.\n\\end{definition}\nRecall that the index policy $\\varphi$ described in Section~\\ref{subsec:policies} is dependent on the parameter $\\bm{\\epsilon}^h$ with $\\bm{\\epsilon} \\coloneqq 
\\lim\\limits_{h\\rightarrow +\\infty}\\bm{\\epsilon}^h$. The $\\bm{\\epsilon}$ is used to guarantee strict priorities of the sub-processes in the asymptotic regime as discussed in Section~\\ref{subsec:policies}. \nThe policy $\\bar{\\varphi}(o)$, proposed in Section~\\ref{subsubsec:priorities_of_pairs} for the relaxed problem, is generally not applicable to the original resource allocation problem.\nAlthough the policies $\\bar{\\varphi}(o)$ and $\\varphi$ both rely on the same ranking $o\\in\\mathscr{O}$, they are different policies. \n\\begin{theorem}\\label{theorem:main_second}\nFor given $o\\in\\mathscr{O}$, \n$\\varphi$ derived from $o$ by iterating \\eqref{eqn:tilde_z} and \\eqref{eqn:probability_active} is asymptotically optimal if and only if\n\\vspace{-0.5cm}\n\\begin{equation}\\label{eqn:main_second}\n\\lim\\limits_{h\\rightarrow +\\infty}|R^{\\bar{\\varphi}(o),h} -\\max\\limits_{\\phi\\in\\Phi}R^{\\phi,h}|=0.\n\\end{equation}\n\\end{theorem}\nThe proof is given in Appendix~\\ref{app:theorem:main_second}.\nTheorem~\\ref{theorem:main_second} indicates that asymptotic optimality of $\\varphi$ is equivalent to the convergence between $R^{\\bar{\\varphi}(o),h}$ and the maximized long-run average revenue of the original problem as $h\\rightarrow +\\infty$.\nIt is proved by showing the existence of $\\lim_{h\\rightarrow +\\infty} \\lim_{t \\rightarrow +\\infty}\\mathbb{E}[\\bm{Z}^{\\bar{\\varphi}(o),h}(t)]$ and\na global attractor of the process $\\bm{Z}^{\\varphi,h}(t)$ as $t,h\\rightarrow +\\infty$ and $\\lVert\\bm{\\epsilon}\\rVert\\rightarrow 0$, and specifically that they coincide with each other.\nThe long-run average revenue $R^{\\varphi,h}$ then coincides with $R^{\\bar{\\varphi}(o),h}$ as $h\\rightarrow +\\infty$ and $\\lVert \\bm{\\epsilon}\\rVert \\rightarrow 0$. \n\n\nA similar condition relevant to the global attractor was required in \\cite{weber1990index} for asymptotic optimality of the Whittle index policy in a general RMABP. 
It does not necessarily hold. However, in our problem, each sub-process is a queueing process with the departure rate increasing in its queue size.\nSuch a sub-process is a special case of a general bandit process.\nWe prove in general that the underlying process $\\bm{Z}^{\\varphi,h}(t)$, regardless of its initial point, will converge to any specified neighborhood of a fixed point almost surely as $t,h\\rightarrow +\\infty$ and $\\lVert \\bm{\\epsilon}\\rVert \\rightarrow 0$.\nDetailed proofs are provided in Appendix~\\ref{app:theorem:main_second}.\n\n\nTheorem~\\ref{theorem:main_second}, in itself, does not provide a verifiable condition.\nThis is given in our next theorem. If there exists $H\\in\\mathbb{R}$ such that, for all $h>H$, the system is decomposable, we say the system is decomposable in the asymptotic regime.\n\\begin{theorem}\\label{theorem:main}\nIf the capacity constraints described in \\eqref{eqn:constraint:resources:h} (or equivalently \\eqref{eqn:constraint:resources}) are decomposable with decomposable multipliers $\\pmb{\\gamma}\\in\\mathbb{R}^J_0$ in the asymptotic regime, then there exist $\\bm{\\nu}\\in\\mathbb{R}^L$ and a PS pair ranking $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{\\nu})$ such that the index policy $\\varphi$ derived from $o$ is asymptotically optimal.\n\\end{theorem}\nThe proof, based on Theorem~\\ref{theorem:main_second}, is given in Appendix~\\ref{app:theorem:main}. \nRecall that we discussed decomposability of multipliers in Section~\\ref{sec:relaxation}\nand provided examples of provably decomposable systems.\n\n\n{\\bf Remark}\nTheorem~\\ref{theorem:main} binds asymptotic optimality of $\\varphi$ to the decomposability of the relaxed problem. 
\nFor a decomposable system (see Definition~\\ref{define:decomposable}), there always exists a ranking $o\\in\\mathscr{O}$ such that $\\bar{\\varphi}(o)$ is optimal for the relaxed problem.\nIf a system is decomposable in the asymptotic regime \nthen \\eqref{eqn:main_second} is satisfied.\nThis follows because, for any $h\\in\\mathbb{N}_+\\cup\\{+\\infty\\}$, $R^{\\varphi,h} \\leq \\max_{\\phi\\in\\Phi}R^{\\phi,h} \\leq \\max_{\\phi\\in\\tilde{\\Phi}}R^{\\phi,h}$ where $\\Phi$ and $\\tilde{\\Phi}$ are the sets of feasible policies for the original and relaxed problem, and $R^{\\varphi,h}$ coincides with $R^{\\bar{\\varphi}(o),h}$ as $h\\rightarrow +\\infty$ and $\\lVert\\bm{\\epsilon}\\rVert \\rightarrow 0$.\n\n\n\n\nSimilarly, we say the system is in heavy traffic in the asymptotic regime if there exists $H\\in\\mathbb{R}$ such that, for all $h>H$, the system is in heavy traffic.\n\\vspace{-0.3cm} \n\\begin{corollary}\\label{coro:main}\nIf the system is in heavy traffic in the asymptotic regime and is weakly coupled,\nthen there exist decomposable multipliers $\\pmb{\\gamma}\\in\\mathbb{R}^J_0$, satisfying \\eqref{eqn:equal_opt:a}-\\eqref{eqn:equal_opt:c}, and a PS pair ranking $o\\in\\mathscr{O}^*(\\bm{0})$,\nso that the index policy $\\varphi$ derived from $o$ is asymptotically optimal. \\vspace{-0.3cm}\n\\end{corollary}\nThe proof, invoking Theorem~\\ref{theorem:main} and Proposition~\\ref{prop:equal_opt}, is given in Appendix~\\ref{app:coro:main}. \nIn particular, the PS pair ranking $o$, described in Corollary~\\ref{coro:main}, exists in closed form: its PS pairs are ranked according to the descending order of $\\Xi_{\\iota}^*$ with $\\bm{\\nu}=\\bm{0}$. 
\n\n\n\n\\section{Numerical Results}\\label{sec:example}\n\n\nWe demonstrate by simulation the performance of the index policy $\\varphi$, defined in Section~\\ref{subsec:policies} (or equivalently, defined in Section~\\ref{sec:index_policy} for a given $h<+\\infty$), in systems that are neither weakly coupled nor in heavy traffic, comparing it with baseline policies.\n\n\n\nIn this section, the $95\\%$ confidence intervals of all the simulated average revenues, based on the Student's t-distribution, are maintained within $\\pm 3\\%$ of the observed mean.\nWe recall that the capacities $\\bm{C}$ and arrival rates $\\bm{\\lambda}$ are scaled by the scaling parameter $h$.\n\n\nUsing the fixed point iteration method proposed in Section~\\ref{subsubsec:derive_ranking},\nwe have been able to find systems which are neither weakly coupled nor in heavy traffic but are nonetheless decomposable.\nHere, we provide two examples, where $L$ and $J$ are sampled uniformly from the sets $\\{2,3,4,5\\}$ and $\\{10,11,\\ldots,20\\}$, respectively.\nLet\n$\\epsilon_M = \\max_{j\\in[J],\\iota\\in[N]}\\epsilon_{j,\\iota}$.\nWe refer to an index policy $\\varphi$ with specific $\\epsilon_M\\in [0,1]$ as INDEX($\\epsilon_M$).\n\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[]{\\includegraphics[width=0.45\\linewidth]{OR-OPT-seed457-v3.eps}\\label{fig1:opt_v1}}\n\\subfigure[]{\\includegraphics[width=0.45\\linewidth]{OR-OPT-seed406-v3.eps}\\label{fig1:opt_v3}}\n\\caption{Relative difference of a specific policy to $R(o_{k^*})$ against the scaling parameter of the system: (a) diverse performance and non-zero decomposable multipliers; (b) similar performance and non-zero decomposable multipliers; and (c) zero decomposable multipliers.}\\label{fig:fig1}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\nWe consider three baseline policies: two greedy policies that prioritize patterns with maximal reward rates and minimal cost rates, and one policy that selects an available pattern uniformly at random 
for each request type. We refer to the three policies as \\emph{Max-Reward}, \\emph{Min-Cost} and \\emph{Random}.\nThe Max-Reward and Min-Cost policies are in fact index policies with PS pairs ranked according to the descending order of their reward rates and the ascending order of their cost rates, respectively. \nThe Random policy was proposed by \\cite{stolyar2017large} for a VM replacement problem, aiming to minimize the system blocking probabilities in the case with finite capacities.\nIt is not a feasible policy of the original problem with capacity constraints \\eqref{eqn:constraint:resources} because it does not reserve resource units for a specific pattern that is more profitable than the others.\nWhen there are not enough resource units in a pool to accommodate multiple request types that have chosen their patterns involving this pool, the Random policy will always assign the resource units to the request that arrives first. \n\n\nIn Figure~\\ref{fig:fig1}, we compare the performance of INDEX(0), INDEX($0.01$), the baseline policies and $\\bar{\\varphi}(o_{k^*})$, where $o_{k^*}$ is the ranking of the multipliers $\\pmb{\\gamma}_{k^*}$ resulting from the fixed point iteration method (described in Section~\\ref{subsubsec:derive_ranking}) with parameter $c=0.5$ and initial point $\\pmb{\\gamma}_0=\\bm{0}$. \nThe system parameters are listed in Appendix~\\ref{app:simulation:opt} and are generated by pseudo-random functions. 
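The three baseline rules described above can be sketched in a few lines; this is a minimal illustration with hypothetical pattern identifiers, reward rates and cost rates (not taken from the simulated systems):

```python
import random

# Hypothetical PS pairs: pattern id -> (reward rate, cost rate).
patterns = {0: (5.0, 1.0), 1: (3.0, 0.5), 2: (4.0, 2.0)}

def max_reward_ranking(pats):
    # Max-Reward: rank PS pairs in descending order of reward rate.
    return sorted(pats, key=lambda p: -pats[p][0])

def min_cost_ranking(pats):
    # Min-Cost: rank PS pairs in ascending order of cost rate.
    return sorted(pats, key=lambda p: pats[p][1])

def random_selection(available):
    # Random: select one of the available patterns uniformly at random.
    return random.choice(available)
```

The rankings, once fixed, can then be used in the same dispatch loop as the index policy, which is what makes Max-Reward and Min-Cost special cases of index policies.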
\nThe discovered multipliers $\\pmb{\\gamma}_{k^*}$ for simulations in Figures~\\ref{fig1:opt_v1} and \\ref{fig1:opt_v3} are \n$(269.555,0,0,0,0,273.11,0,$ $347.995,0,0,0,8.323\\times 10^{-7},9.726\\times 10^{-5},0)$ and $\\bm{0}$,\nrespectively, satisfying $\\mathcal{T}^{o_{k^*}}(\\pmb{\\gamma}_{k^*})=\\pmb{\\gamma}_{k^*}$ in the asymptotic regime.\nBy Proposition~\\ref{prop:converge_gamma}, these $\\pmb{\\gamma}_{k^*}$ are decomposable multipliers and, by Theorem~\\ref{theorem:main}, the index policies derived from the rankings $o_{k^*}$ are asymptotically optimal.\nLet $R(o)\\coloneqq \\lim\\limits_{h\\rightarrow +\\infty}R^{\\bar{\\varphi}(o),h}$ ($o\\in\\mathscr{O}$) of which the existence is guaranteed in the proof of Theorem~\\ref{theorem:main_second}.\nFor the decomposable systems with $h<+\\infty$ and $\\bar{\\varphi}(o_{k^*})$ optimal for the relaxed problem in the asymptotic regime, the asymptotic long-run average revenue, $R(o_{k^*})$, is no less than the optimum of the original problem: $R(o_{k^*})$ is an upper bound of $R^{\\phi,h}$ for any $\\phi\\in\\Phi$.\n\nFigure~\\ref{fig:fig1} illustrates the relative difference of average revenues,\n$\\bigl(R(o_{k^*}) - R^{\\phi,h}\\bigr)\/R(o_{k^*})$ for $\\phi=\\text{INDEX}(0),\\text{INDEX}(0.01)$, Max-Reward, Min-Cost and Random,\nagainst the scaling parameter $h$. \n\n\nIn this context, there are two aspects of performance evaluation presented in Figure~\\ref{fig:fig1}.\nFirst, we see the performance of the index policies in the non-asymptotic regime by comparing their long-run average revenues with an upper bound on the optimum. 
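The relative-difference metric plotted in Figure 1 is straightforward to compute; a minimal sketch, with function and variable names of our own choosing:

```python
def relative_difference(r_star, r_policy):
    # (R(o_{k*}) - R^{phi,h}) / R(o_{k*}): relative gap between the
    # asymptotic revenue R(o_{k*}) and a policy's simulated revenue.
    return (r_star - r_policy) / r_star
```

A value near zero indicates that the policy's long-run average revenue is close to the asymptotic benchmark $R(o_{k^*})$.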
\nIn particular, Figures~\\ref{fig1:opt_v1} and \\ref{fig1:opt_v3} show that INDEX($0.01$) significantly outperforms INDEX(0) for large $h$: the small but positive $\\bm{\\epsilon}$ does affect the performance of $\\varphi$.\nThe performance of INDEX($0.01$) is close to the upper bound of the optimal solution, with a relative difference of less than $5\\%$ for $h$ greater than $50$ in all three examples: its performance degradation against the optimal solution is limited in the non-asymptotic regime.\n\n\nOn the other hand, comparing against $R(o_{k^*})$, the revenue $R^{\\text{INDEX}(0.01),h}$ is observed in Figure~\\ref{fig:fig1} to approach $R(o_{k^*})$ as $h$ increases from $1$ to $100$, consistent with the proven asymptotic optimality of $\\varphi$.\nRecall that the examples presented in Figure~\\ref{fig:fig1} are not for systems with weak coupling or heavy traffic, but the index policy $\\varphi$ is still proved to be asymptotically optimal here.\nAlso, the performance of $\\varphi$ is close to the optimum without requiring extremely large $h$.\n\n\n\n\\begin{figure}[t]\n\\vspace{-0.5cm}\n\\centering\n\\subfigure[]{\\includegraphics[width=0.45\\linewidth]{OR-nonOPT-seed10-max50-v3.eps}\\label{fig2:nonopt_v1}}\n\\subfigure[]{\\includegraphics[width=0.45\\linewidth]{OR-nonOPT-seed10-scale50-v3.eps}\\label{fig2:nonopt_v2}}\n\\caption{(a) Relative difference of a specific policy to $R(o_{k^*})$ against scaling parameter of the system; (b) Relative difference of a specific policy to $R(o_k)$ against $k$.}\\label{fig:fig2}\n\\vspace{-0.5cm}\n\\end{figure}\n\n\nIn Figure~\\ref{fig:fig2}, we consider another example with multipliers that are not decomposable (that is, $\\mathcal{T}^{o_{k^*}}(\\pmb{\\gamma}_{k^*})\\neq \\pmb{\\gamma}_{k^*}$).\nSimilar to Figure~\\ref{fig:fig1}, in Figure~\\ref{fig2:nonopt_v1}, we plot the relative difference of the revenue of INDEX(0), INDEX($0.01$) and the baseline policies to $R(o_{k^*})$ against the scaling parameter; while, in 
Figure~\\ref{fig2:nonopt_v2}, fixing the scaling parameter $h=50$, we illustrate curves of the relative differences, $\\bigl(R(o_k)-R^{\\phi,h}\\bigr)\/R(o_k)$ ($\\phi=\\text{INDEX}(0),\\text{INDEX}(0.01), \\text{Max-Reward}, \\text{Min-Cost}, \\text{Random}$), against the number of iterations $k$ for the fixed point iteration method. \nNote that the rankings $o_k$ are potentially different as $k$ varies, so as $R(o_k)$.\nIn Figure~\\ref{fig2:nonopt_v1}, the INDEX($0$) and INDEX($0.01$) are proposed based on the ranking $o_{k^*}$, while, with slightly abused notation, in Figure~\\ref{fig2:nonopt_v2}, INDEX($0$) and INDEX($0.01$) represent the index policies $\\varphi$, which are derived from the rankings $o_k$ associated with the varying $k$, with $\\epsilon_M=0$ and $0.01$, respectively. \nThe system parameters for the simulations in Figure~\\ref{fig:fig2} are listed in Appendix~\\ref{app:simulation:opt}. \n\n\nFigure~\\ref{fig2:nonopt_v1} can be read in a similar way to Figure~\\ref{fig:fig1} except that $R(o_{k^*})$ is not a proved upper bound for the average revenue for the original problem.\nHere, INDEX($0$) and INDEX($0.01$) perform similarly and numerically converge to $R(o_{k^*})$ as $h$ increases although the system is not necessarily decomposable. 
\nThe convergence is consistent with Theorem~\\ref{theorem:main_second}, which holds in general without requiring decomposability.\nOn the other hand, for each finite $h$ (which corresponds to the non-asymptotic regime), \nINDEX($0$) and INDEX($0.01$) significantly outperform all the baseline policies, although the system is not proved to be decomposable, and their performance advantages are likely to be maintained as $h$ continues to increase.\n\nFigure~\\ref{fig2:nonopt_v2} illustrates the performance trajectory as the iteration number $k$ (the x-axis) of the fixed point iteration method increases for a system with $h=50$ (in the non-asymptotic regime).\nRecall that, for the simulations presented here, the average revenues of INDEX($0$) and INDEX($0.01$) and $R(o_k)$ vary with $k$, while all of the baseline policies are independent of $k$. \nWe observe a sharp jump in the curves between $k=1$ and $5$.\nThis is caused by the initial setting, $\\pmb{\\gamma}_0=\\bm{0}$, which is not a good choice of multipliers.\nAfter several steps of the iteration method, the curves in Figure~\\ref{fig2:nonopt_v2} become almost flat; \nthat is, the values of $R(o_k)$, $R^{\\text{INDEX}(0)}$ and $R^{\\text{INDEX}(0.01)}$ become relatively stable for $k=5$ to $50$. 
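The iteration behaviour described above can be reproduced with a generic damped fixed point iteration. This is a sketch under our own assumptions about the update form: the damping parameter $c=0.5$ and initial point $\pmb{\gamma}_0=\bm{0}$ follow the text, while `T` stands in for the problem-specific map $\mathcal{T}^{o_k}$.

```python
def fixed_point_iteration(T, gamma0, c=0.5, k_max=50, tol=1e-9):
    # Iterate gamma_{k+1} = (1 - c) * gamma_k + c * T(gamma_k) until
    # the update step falls below tol or k_max iterations are reached.
    gamma = list(gamma0)
    for _ in range(k_max):
        t_gamma = T(gamma)
        new = [(1 - c) * g + c * t for g, t in zip(gamma, t_gamma)]
        if max(abs(n - g) for n, g in zip(new, gamma)) < tol:
            return new
        gamma = new
    return gamma
```

When `T` is a contraction, the iterates settle quickly, mirroring the flattening of the curves after the first few iterations in Figure 2(b).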
\nAlso, in Figure~\\ref{fig2:nonopt_v2}, after the performance becomes stable, INDEX($0$) and INDEX($0.01$) achieve clearly higher long-run average revenues than those of the baseline policies: given the poor setting at the beginning, the fixed point iteration method can still lead to a reasonably good ranking $o_{k^*}$ and its associated multipliers $\\pmb{\\gamma}_{k^*}$.\n\n\n\n\\vspace{-0.3cm}\n\\section{Conclusions}\\label{sec:conclusions}\n\\vspace{-0.3cm}\n\n\nWe have modeled a resource allocation problem, described by \\eqref{eqn:objective}, \\eqref{eqn:constraint:action} and \\eqref{eqn:constraint:resources}, as a combination of various RMABPs coupled by limited capacities\nof the shared resource pools, which are shared, competed for, and reserved by requests. \nThis presents us with an optimization problem for a stochastic system, aimed at maximizing the long-run average revenue by dynamically accommodating requests into appropriate resource pools.\n\n\n\n\nUsing the ideas of Whittle relaxation \\cite{whittle1988restless} and the asymptotic optimality proof of \\cite{weber1990index},\nwe have proved the asymptotic optimality of an index policy (referred to as $\\varphi$) if the capacity constraints are decomposable with multipliers $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ (Theorem~\\ref{theorem:main}).\nThe asymptotic optimality is proved based on the existence of a global attractor $\\bm{z}\\in\\mathscr{Z}$ for the underlying process $\\bm{Z}^{\\varphi,h}(t)$ as $h\\rightarrow +\\infty$ and $\\bm{\\epsilon}\\rightarrow \\bm{0}$. \nWe have proved in general that such a global attractor exists, and then proposed a necessary and sufficient condition for asymptotic optimality in Theorem~\\ref{theorem:main_second}: the performance of the attractor $\\bm{z}$ approaches the optimum of the original problem in the asymptotic regime.\nThis condition holds if the system is decomposable. 
\n\n\nWe have proved a sufficient condition, described as the property of being weakly coupled and in heavy traffic, \nfor the existence of such decomposable multipliers as well as the asymptotic optimality of policy $\\varphi$ (Corollary~\\ref{coro:main}).\nThe property is not necessary, but is easy to verify and covers a significant class of resource allocation problems. We have listed examples of systems with the property satisfied in Section~\\ref{subsec:sufficient_condition}.\n\n\n\n\nIn a general system, we have proposed a fixed point method to fine tune the multipliers $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ and a ranking $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{0})$. \nWe have proved that, if there exists a fixed point $\\pmb{\\gamma}\\in\\mathbb{R}_0^J$ of the function $\\mathcal{T}^o$ satisfying $o\\in\\mathscr{O}(\\pmb{\\gamma},\\bm{0})$, then this $\\pmb{\\gamma}$ is a vector of decomposable multipliers.\nWe have successfully discovered the decomposable multipliers in some situations without assuming weak coupling or heavy traffic by applying the fixed point method.\nAlso, in Section~\\ref{sec:example}, we have compared the index policy $\\varphi$ with different parameter $\\bm{\\epsilon}$ to baseline policies through simulations for systems that are not weakly coupled or in heavy traffic in the non-asymptotic regime. 
The index policy achieves clearly higher performance than the baseline policies.\nTo the best of our knowledge, no existing work provides rigorous asymptotic optimality for a resource allocation problem where dynamic allocation, competition and reservation are permitted.\n\n\\vspace{-0.3cm}\n\n\n\n\\section*{Acknowledgment}\nJing Fu's and Peter Taylor's research is supported by the Australian Research Council (ARC) Centre of Excellence for the Mathematical and Statistical Frontiers (ACEMS) and ARC Laureate Fellowship FL130100039.\n\\vspace{-0.1cm}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{apalike}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{}\n\nThe necessity to generalize the Dirac equations and spinors in momentum space\nto free unstable spin-$1\/2$ particles has recently been recognized in\nconnection with the wave-function renormalizations of mixed systems of Dirac\n\\cite{Kniehl:2008cj,Kniehl:2014dra} and Majorana fermions\n\\cite{Kniehl:2014gfa}.\nIn this report, we discuss their construction when the fundamental requirement\nof Lorentz covariance is taken into account.\nWe also derive the generalized adjoint Dirac equations and spinors, and explain\nthe very simple relation that exists, in our formulation, between the\ngeneralized Dirac equations and spinors and the corresponding expressions for\nstable fermions.\\footnote{%\nFor brevity, spin-$1\/2$ particles are henceforth called fermions.}\nWe illustrate the application of the generalized spinors by evaluating the\nprobability density. 
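As a minimal numerical check of the complex-mass kinematics constructed below, one can verify the basic relation $p_\mu p^\mu=M^2c^2$ and the reality of $p^\mu/M$ directly (illustrative numbers, units with $c=1$, and a single spatial dimension for brevity):

```python
# Complex pole mass M = m - i*Gamma/2 and four-momentum p^mu = M u^mu.
m, Gamma = 1.0, 0.1                      # illustrative mass and width
M = complex(m, -Gamma / 2.0)             # pole mass

v = 0.6                                  # velocity in units of c
gamma_L = 1.0 / (1.0 - v * v) ** 0.5     # Lorentz factor
u0, u1 = gamma_L, gamma_L * v            # four-velocity components
p0, p1 = M * u0, M * u1                  # complex four-momentum

# Basic relation p_mu p^mu = M^2 (with c = 1).
assert abs((p0 * p0 - p1 * p1) - M * M) < 1e-12
# The ratio p^mu / M is real: it equals the real four-velocity u^mu.
assert abs((p0 / M).imag) < 1e-12 and abs((p1 / M).imag) < 1e-12
```

Both identities hold by construction, since the complex phase of every component of $p^\mu$ is carried entirely by the overall factor $M$.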
\n\nDefining the complex mass $M$ of the unstable fermion as the zero of the\ninverse propagator, a frequently used parametrization is \\cite{Smith:1996xz}\n\\begin{equation}\nM=m-i\\frac{\\Gamma}{2},\n\\label{eq:pole}\n\\end{equation}\nwhere $m$ and $\\Gamma$ are its mass and width, respectively.\n\nWe define the four-momentum of the unstable fermion according to\n\\begin{equation}\np^0=M\\gamma c,\\qquad\n\\vec{p}=M\\gamma\\vec{v},\n\\label{eq:p}\n\\end{equation}\nwhere $\\gamma=(1-\\vec{v}^{\\,2}\/c^2)^{-1\/2}$ and $\\vec{v}$ is the particle's\nvelocity in the chosen inertial frame.\nSince Eq.~(\\ref{eq:p}) differs from the expressions of special relativity\nfor stable fermions by only the constant factor $M\/m$, $p^\\mu=(p^0,\\vec{p}\\,)$\ntransforms as a four-vector.\nEquation~(\\ref{eq:p}) can also be written as\n\\begin{equation}\np^\\mu=Mu^\\mu,\n\\label{eq:u}\n\\end{equation}\nwhere $u^\\mu=\\gamma(c,\\vec{v}\\,)$ is the four-velocity.\nFrom Eqs.~(\\ref{eq:p}) and (\\ref{eq:u}) one finds the basic relation\\footnote{%\nIn this paper, we adopt the notations and conventions of\nRefs.~\\cite{bjorken,mandl}.}\n\\begin{equation}\np_\\mu p^\\mu={p^0}^2-\\vec{p}^{\\,2}=M^2c^2.\n\\label{eq:ps}\n\\end{equation}\nEvaluating the real and imaginary parts of $p^0$ from Eq.~(\\ref{eq:ps}) and\nusing the relation $\\mathop{\\mathrm{Im}}\\nolimits\\vec{p}^{\\,2}=-[m\\Gamma\/(m^2-\\Gamma^2\/4)]\\mathop{\\mathrm{Re}}\\nolimits\\vec{p}^{\\,2}$\nthat follows from Eqs.~(\\ref{eq:pole}) and (\\ref{eq:p}), one can express $p^0$\nin terms of $\\mathop{\\mathrm{Re}}\\nolimits\\vec{p}^{\\,2}$, $m$, and $\\Gamma$ as\n\\begin{equation}\np^0=\nM\\left[\\frac{\\mathop{\\mathrm{Re}}\\nolimits\\vec{p}^{\\,2}+(m^2-\\Gamma^2\/4)c^2}{m^2-\\Gamma^2\/4}\\right]^{1\/2}.\n\\label{eq:p0}\n\\end{equation}\nIn the limit $\\Gamma\\to0$, $\\vec{p}^{\\,2}$ is real, $p^0=E\/c$, and\nEq.~(\\ref{eq:p0}) becomes\n\\begin{equation}\nE=(\\vec{p}^{\\,2}c^2+m^2c^4)^{1\/2},\n\\end{equation}\nthe well-known 
energy-momentum relation for stable particles.\nIn the rest frame, $\\vec{p}=\\vec{0}$ and Eq.~(\\ref{eq:p0}) reduces to $p^0=Mc$,\nin agreement with Eq.~(\\ref{eq:p}) when $\\vec{v}=\\vec{0}$.\n\nIn momentum space, the generalizations of the Dirac equations to free unstable\nfermions are\n\\begin{equation}\n(\\slashed{p}-Mc)u_r(\\vec{p}\\,)=0,\\qquad\n(\\slashed{p}+Mc)v_r(\\vec{p}\\,)=0,\n\\label{eq:dirac}\n\\end{equation}\nwhere $\\slashed{p}=p_\\mu\\gamma^\\mu$ and $r=1,2$ labels the two independent\nsolutions.\nRecalling Eq.~(\\ref{eq:p}), we note that Eq.~(\\ref{eq:dirac}) can be derived by\nmultiplying the corresponding Dirac equations for stable fermions by $M\/m$.\nThe four independent solutions can be written explicitly in the form\n\\begin{equation}\nu_r(\\vec{p}\\,)=\\left(\\frac{p^0+Mc}{2Mc}\\right)^{1\/2}\\left(\n\\begin{array}{c}\n\\chi_r \\\\\n\\frac{\\vec{\\sigma}\\cdot\\vec{p}}{p^0+Mc}\\chi_r\n\\end{array}\\right),\n\\qquad\nv_r(\\vec{p}\\,)=\\left(\\frac{p^0+Mc}{2Mc}\\right)^{1\/2}\\left(\n\\begin{array}{c}\n\\frac{\\vec{\\sigma}\\cdot\\vec{p}}{p^0+Mc}\\chi_r^\\prime \\\\\n\\chi_r^\\prime\n\\end{array}\\right),\n\\label{eq:spinor}\n\\end{equation}\nwhere $\\sigma^i$ are the Pauli matrices and $\\chi_r$ and $\\chi_r^\\prime$ are\ntwo-dimensional constant and orthogonal spinors frequently chosen as\n\\begin{equation}\n\\chi_1=\\chi_2^\\prime=\\left(\n\\begin{array}{c}\n1 \\\\\n0\n\\end{array}\\right),\n\\qquad\n\\chi_2=\\chi_1^\\prime=\\left(\n\\begin{array}{c}\n0 \\\\\n1\n\\end{array}\\right).\n\\label{eq:chi}\n\\end{equation}\nWith this choice, $u_r(\\vec{p}\\,)$ and $v_r(\\vec{p}\\,)$ are eigenstates of the\n$z$ component of spin in the rest frame of the fermion, with eigenvalue\n$+\\hbar\/2$ (spin up) for $u_1(\\vec{p}\\,)$ and $v_2(\\vec{p}\\,)$ and $-\\hbar\/2$\n(spin down) for $u_2(\\vec{p}\\,)$ and $v_1(\\vec{p}\\,)$.\n\nIncluding the space-time dependencies, the plane-wave solutions associated with\nthe spinors $u_r(\\vec{p}\\,)$ and 
$v_r(\\vec{p}\\,)$ are\n$u_r(\\vec{p}\\,)\\exp(-ip\\cdot x)$ and $v_r(\\vec{p}\\,)\\exp(ip\\cdot x)$,\nrespectively.\nUsing Eqs.~(\\ref{eq:pole})--(\\ref{eq:u}), we have\n\\begin{equation}\ne^{-ip\\cdot x}=e^{-imu\\cdot x}e^{-(\\Gamma\/2)u\\cdot x}.\n\\label{eq:exp}\n\\end{equation}\nThe first factor on the right-hand side of Eq.~(\\ref{eq:exp}) is the space-time\ndependence in the stable case, while the second factor reflects the fact that\nthe fermion is unstable.\nThe amplitude $u_r(\\vec{p}\\,)\\exp(-imu\\cdot x)$ is a solution of the usual\nDirac equation for stable fermions and is, therefore, time-reversal invariant.\nBy contrast, the second factor,\n$\\exp[-\\Gamma\\gamma(c^2t-\\vec{v}\\cdot\\vec{x})\/2]$, is not invariant under the\ntime-reversal transformation $t\\to-t$ and $\\vec{v}\\to-\\vec{v}$.\n\nAnother simple way to show that the generalized Dirac equations are not\ninvariant under time reversal is the following:\nwe recall that the operator that relates the wave functions at times $t$ and\n$t^\\prime=-t$ is antiunitary, namely of the form $KU$, where $U$ is a unitary\nmatrix and $K$ means complex conjugation.\nIf the Hamiltonian $H(t)$ at time $t$ involves the complex mass $M$, as is the\ncase in the formulation of the generalized Dirac equations, when $K$ acts on\n$H(t)$ it transforms $M\\to M^*$.\nAs a consequence, $H(t^\\prime)$ differs from $H(t)$ by the same change\n$M\\to M^*$, and the proof of time-reversal invariance, explained for instance\nin Ref.~\\cite{bjorken}, breaks down.\n\nThe Hermitian adjoints of Eq.~(\\ref{eq:dirac}) are\n\\begin{equation}\n\\bar{u}_r(\\vec{p}\\,)(\\slashed{p}^*-M^*c)=0,\\qquad\n\\bar{v}_r(\\vec{p}\\,)(\\slashed{p}^*+M^*c)=0,\n\\label{eq:diracc}\n\\end{equation}\nwhere\n\\begin{equation}\n\\bar{u}_r(\\vec{p}\\,)=u_r^\\dagger(\\vec{p}\\,)\\gamma^0,\\qquad\n\\bar{v}_r(\\vec{p}\\,)=v_r^\\dagger(\\vec{p}\\,)\\gamma^0\n\\end{equation}\nare the usual adjoint spinors and $\\slashed{p}^*=p_\\mu^*\\gamma^\\mu$.\nAt first 
sight, the presence of the complex conjugates $p_\\mu^*$ and $M^*$ seems\nto complicate matters.\nHowever, we note from Eq.~(\\ref{eq:u}) that $p^\\mu\/M=u^\\mu$ is real.\nTherefore, we have the important relation\n\\begin{equation}\n\\left(\\frac{p^\\mu}{M}\\right)^*=\\frac{p^\\mu}{M}.\n\\label{eq:ratio}\n\\end{equation}\nInserting $p_\\mu^*=(M^*\/M)p_\\mu$ and multiplying by $M\/M^*$,\nEq.~(\\ref{eq:diracc}) becomes\n\\begin{equation}\n\\bar{u}_r(\\vec{p}\\,)(\\slashed{p}-Mc)=0,\\qquad\n\\bar{v}_r(\\vec{p}\\,)(\\slashed{p}+Mc)=0.\n\\label{eq:diraca}\n\\end{equation}\nThe four generalized Dirac equations shown in Eqs.~(\\ref{eq:dirac}) and\n(\\ref{eq:diraca}) were postulated in Ref.~\\cite{Kniehl:2014dra} without\napplying Eq.~(\\ref{eq:ratio}) and with a different interpretation of the\nadjoint spinors $\\bar{u}_r(\\vec{p}\\,)$ and $\\bar{v}_r(\\vec{p}\\,)$.\nIn the case of unstable fermions, Eqs.~(\\ref{eq:dirac}) and (\\ref{eq:diraca})\nplay an important role in the implementation of the\nAoki-Hioki-Kawabe-Konuma-Muta (AHKKM) \\cite{Aoki:1982ed} renormalization\nconditions in general theories with intergeneration mixing, as pointed out\nin Refs.~\\cite{Kniehl:2008cj,Kniehl:2014dra,Kniehl:2014gfa}.\n\nSince $p^\\mu$ transforms as a four-vector, the proof of Lorentz covariance of\nEq.~(\\ref{eq:dirac}) follows the same steps as the proof of the Lorentz\ncovariance of the Dirac equation in coordinate space (see, for example,\nchapter~2 in Ref.~\\cite{bjorken}).\nSpecifically, if $p^\\mu$ and $u_r(\\vec{p}\\,)$ are the four-momentum and the\nspinor in the Lorentz frame $O$ and $p^{\\prime\\mu}$ and\n$u_r^\\prime(\\vec{p}^{\\,\\prime})$ are those in the Lorentz frame $O^\\prime$, one\nexpresses, for example, the first equality in Eq.~(\\ref{eq:dirac}) in terms of\nthe $O^\\prime$ variables by means of the relations\n$p_\\mu=a_{\\phantom{\\nu}\\mu}^\\nu p_\\nu^\\prime$, where $a_{\\phantom{\\nu}\\mu}^\\nu$ are the\ncoefficients of the Lorentz transformation between 
the four-vectors $p_\\mu$ and\n$p_\\mu^\\prime$, and $u_r(\\vec{p}\\,)=S^{-1}u_r^\\prime(\\vec{p}^{\\,\\prime})$, where $S$\nis a matrix that satisfies the relations\n$a_{\\phantom{\\nu}\\mu}^\\nu S\\gamma^\\mu S^{-1}=\\gamma^\\nu$ and\n$S^{-1}=\\gamma^0S^\\dagger\\gamma^0$.\nThen the first equality in Eq.~(\\ref{eq:dirac}) becomes\n$(\\slashed{p}^\\prime-Mc)u_r^\\prime(\\vec{p}^{\\,\\prime})=0$, which demonstrates its\nLorentz covariance.\nCarrying out the Hermitian conjugation of the $O^\\prime$ Dirac equation and\nusing Eq.~(\\ref{eq:ratio}), one finds\n$\\bar{u}_r^\\prime(\\vec{p}^{\\,\\prime})(\\slashed{p}^\\prime-Mc)=0$, which shows the\nLorentz covariance of the corresponding adjoint Dirac equation,\nEq.~(\\ref{eq:diraca}).\n\nThe adjoint spinors with respect to $u_r(\\vec{p}\\,)$ and $v_r(\\vec{p}\\,)$ in\nEq.~(\\ref{eq:spinor}) are\n\\begin{equation}\n\\bar{u}_r(\\vec{p}\\,)=\\left(\\frac{p^0+Mc}{2Mc}\\right)^{1\/2}\\left(\n\\chi_r^\\dagger,-\\chi_r^\\dagger\\frac{\\vec{\\sigma}\\cdot\\vec{p}}{p^0+Mc}\\right),\n\\quad\n\\bar{v}_r(\\vec{p}\\,)=\\left(\\frac{p^0+Mc}{2Mc}\\right)^{1\/2}\\left(\n\\chi_r^{\\prime\\dagger}\\frac{\\vec{\\sigma}\\cdot\\vec{p}}{p^0+Mc},-\\chi_r^{\\prime\\dagger}\n\\right),\n\\label{eq:spinora}\n\\end{equation}\nwhere we have again applied Eq.~(\\ref{eq:ratio}) to eliminate $p_\\mu^*$ and\n$M^*$.\nIn particular, Eq.~(\\ref{eq:ratio}) implies that $[(p^0+Mc)\/(2Mc)]^{1\/2}$ and\n$\\vec{p}\/(p^0+Mc)$ are real.\n\nIt is interesting to note that the four generalized Dirac equations shown in\nEqs.~(\\ref{eq:dirac}) and (\\ref{eq:diraca}) as well as their spinor solutions\npresented in Eqs.~(\\ref{eq:spinor}) and (\\ref{eq:spinora}) can be obtained from\nthe corresponding ones for stable fermions, for $\\Gamma=0$, by simply\nsubstituting $m\\to M$ in their explicit mass dependencies and in the definition\nof $p^\\mu$ in Eqs.~(\\ref{eq:p}) and (\\ref{eq:u}).\n\nSince $M$ cancels in the ratios $(p^0+Mc)\/(2Mc)$ and $\\vec{p}\/(p^0+Mc)$, 
these\nfactors are the same as in the $\\Gamma=0$ case.\nIt hence follows that the spinor solutions in Eqs.~(\\ref{eq:spinor}) and\n(\\ref{eq:spinora}) satisfy the same normalization and completeness relations as\nin the case of stable fermions, which are given, for example, by Eqs.~(A.29)\nand (A.30) in Ref.~\\cite{mandl}.\nIn particular,\n\\begin{eqnarray}\n\\bar{u}_r(\\vec{p}\\,)u_s(\\vec{p}\\,)&=&-\\bar{v}_r(\\vec{p}\\,)v_s(\\vec{p}\\,)\n=\\delta_{rs},\n\\nonumber\\\\\n\\bar{u}_r(\\vec{p}\\,)v_s(\\vec{p}\\,)&=&-\\bar{v}_r(\\vec{p}\\,)u_s(\\vec{p}\\,)=0.\n\\end{eqnarray}\n\nIn some applications, the two-component spinors $\\chi_r$ and $\\chi_r^\\prime$ in\nEq.~(\\ref{eq:chi}) are replaced by helicity eigenstates $\\phi_s$ and\n$\\phi_s^\\prime$, respectively.\nBecause, in the case of unstable particles, $\\vec{p}$ is complex [cf.\\\nEqs.~(\\ref{eq:p}) and (\\ref{eq:u})], the usual helicity projection,\n\\begin{equation}\n\\frac{\\vec{p}}{|\\vec{p}\\,|}\\cdot\\frac{\\vec{\\sigma}}{2}\\phi_s=s\\phi_s,\n\\qquad\n\\frac{\\vec{p}}{|\\vec{p}\\,|}\\cdot\\frac{\\vec{\\sigma}}{2}\\phi_s^\\prime\n=-s\\phi_s^\\prime,\n\\label{eq:hel}\n\\end{equation}\nwhere $|\\vec{p}\\,|=(\\vec{p}^{\\,*}\\cdot\\vec{p}\\,)^{1\/2}$, leads to complex\neigenvalues $s=\\pm(\\vec{p}^{\\,2})^{1\/2}\/(2|\\vec{p}\\,|)$, as can be checked by\napplying the helicity projection operator twice.\nA consistent alternative is to define the helicity projection operator in\nEq.~(\\ref{eq:hel}) according to $\\vec{u}\/|\\vec{u}|\\cdot\\vec{\\sigma}\/2$, where\n$\\vec{u}=\\vec{p}\/M$ are the spatial components of the four-velocity $u$ given\nbelow Eq.~(\\ref{eq:u}), or, equivalently,\n$\\vec{v}\/|\\vec{v}|\\cdot\\vec{\\sigma}\/2$.\n\nIt is also instructive to calculate the probability current using the spinor\nsolutions in Eqs.~(\\ref{eq:spinor}) and (\\ref{eq:spinora}).\nWe find\n\\begin{equation}\nc\\bar{u}_r(\\vec{p}\\,)\\gamma^\\mu u_r(\\vec{p}\\,)=\nc\\bar{v}_r(\\vec{p}\\,)\\gamma^\\mu 
v_r(\\vec{p}\\,)=\\frac{p^\\mu}{M}=u^\\mu.\n\\end{equation}\nAs expected, these currents transform as four-vectors.\nIn particular, for $\\mu=0$ we have\n\\begin{equation}\ncu_r^\\dagger(\\vec{p}\\,)u_r(\\vec{p}\\,)=\ncv_r^\\dagger(\\vec{p}\\,)v_r(\\vec{p}\\,)=u^0=\\gamma c.\n\\label{eq:pro}\n\\end{equation}\nSince $cu_r^\\dagger(\\vec{p}\\,)u_r(\\vec{p}\\,)$ and\n$cv_r^\\dagger(\\vec{p}\\,)v_r(\\vec{p}\\,)$ are the probability densities, they\nshould be real and positive, consistent with Eq.~(\\ref{eq:pro}).\n\nAnother interesting application is to examine the space-time dependence of the\nprobability density.\nIncorporating the space-time factor $\\exp(-ip\\cdot x)$ of Eq.~(\\ref{eq:exp}) in\n$cu_r^\\dagger(\\vec{p}\\,)u_r(\\vec{p}\\,)$, we find that the latter is multiplied by\nan overall factor $\\exp[-\\Gamma\\gamma(c^2t-\\vec{v}\\cdot\\vec{x})]$, which\nimplies that the probability density for the positive-energy states decreases\nexponentially with time, reflecting the fermion's instability.\n\nIn summary, (i) we have proposed a simple definition of the (complex)\nfour-momentum of a free unstable spin-$1\/2$ particle [cf.\\ Eqs.~(\\ref{eq:p})\nand (\\ref{eq:u})] and shown that it indeed transforms as a four-vector,\n(ii) we have derived the generalized Dirac equations in momentum space [cf.\\\nEq.~(\\ref{eq:dirac})] and found explicit spinor solutions [cf.\\\nEq.~(\\ref{eq:spinor})],\n(iii) we have derived the generalized adjoint Dirac equations [cf.\\\nEq.~(\\ref{eq:diraca})] and spinors [cf.\\ Eq.~(\\ref{eq:spinora})] by Hermitian\nconjugation of Eqs.~(\\ref{eq:dirac}) and (\\ref{eq:spinor}), respectively,\ntaking into account the basic relation in Eq.~(\\ref{eq:ratio}),\n(iv) using the important fact that $p^\\mu$ transforms as a four-vector, we have\nshown how the proof of Lorentz covariance carries over to the generalized Dirac\nequations and their adjoints,\n(v) we have pointed out the very simple relation that exists, in our\nformulation, between the 
generalized Dirac equations and spinors and the\ncorresponding expressions for stable fermions,\n(vi) in particular, we have shown that our spinors and adjoint spinors satisfy\nthe same normalization and completeness relations as in the case of stable\nfermions,\n(vii) we have proposed a modified definition of the helicity projection\noperator for unstable fermions that leads to real eigenvalues,\n(viii) as an illustration, we have applied our spinors and adjoint spinors to\ncalculate the probability density and found that it satisfies the expected\ntheoretical properties,\nand (ix) we have discussed the behavior of the generalized Dirac equations\nunder time reversal.\nAs mentioned after Eq.~(\\ref{eq:diraca}), the four generalized Dirac equations\nin Eqs.~(\\ref{eq:dirac}) and (\\ref{eq:diraca}) play an important role in the\nimplementation of the AHKKM \\cite{Aoki:1982ed} renormalization conditions for\nunstable fermions in general theories with intergeneration mixing.\n\n\n\n\n\n\n\n\n\\begin{acknowledgments}\nWe thank the Werner-Heisenberg-Institut for the hospitality extended to us\nduring a visit when this paper was prepared.\nThis research was supported in part by the German Research Foundation through\nthe Collaborative Research Center No.~676 {\\it Particles, Strings and the\nEarly Universe---The Structure of Matter and Space Time}.\nThe work of A.S. was supported in part by the National Science Foundation \nthrough Grant No.\\ PHY--1316452. 
\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMotion of a symmetric top can be studied by using either a cubic function or effective potential.\nThe cubic function is mostly used in works that utilize geometric techniques \\cite{Routh, Scarborough, MacMillan, ArnoldMaunder, Groesberg, JoseSaletan},\nand effective potential is mostly used in works considering physical parameters \\cite{Symon, McCauley, LandauLifshitz, MarionThornton, Taylor}.\nIn some other works, both the cubic function and effective potential are used \\cite{Goldstein, Arnold, Corinaldesi, MatznerShepley, Arya, Greiner, FowlesCassiday}.\n\nEffective potential shows different characteristics depending on whether the magnitude of one of the conserved angular momenta is greater than that of the other or equal to it.\nOne can find different aspects of effective potential in the literature when magnitudes of the conserved angular momenta are equal to each other \\cite{Symon, Tanriverdi_abeql}.\nHowever, the case when magnitudes of the conserved angular momenta are not equal to each other has not been studied except in Greiner's work, \nand his study does not cover different possibilities related to the conserved angular momenta and the minimum of effective potential \\cite{Greiner}. 
\nStudying this topic helps in understanding the motion of a spinning heavy symmetric top,\nand in this work, we study this case together with the relation between the minimum of effective potential and a constant derived from the parameters of the gyroscope and the conserved angular momenta.\n\nIn section \\ref{frst}, we will give a quick overview of constants of motion and effective potential.\nIn section \\ref{scnd}, we will study effective potential when magnitudes of the conserved angular momenta are not equal to each other.\nThen, we will give a conclusion.\nIn the appendix, we will compare the cubic function with effective potential.\n\n\n\\section{Constants of motion and effective potential}\n\\label{frst}\n\nFor a spinning heavy symmetric top, the Lagrangian is \\cite{Goldstein}\n\\begin{eqnarray}\n\tL&=&T-U \\nonumber \\\\ \n\t&=&\\frac{I_x}{2}(\\dot \\theta ^2 + \\dot \\phi ^2 \\sin^2 \\theta)+\\frac{I_z}{2}(\\dot \\psi+\\dot \\phi \\cos \\theta)^2-M g l \\cos \\theta, \n\t\\label{lagrngn}\n\\end{eqnarray}\nwhere $M$ is the mass of the symmetric top, $l$ is the distance from the center of mass to the fixed point, $I_x=I_y$ and $I_z$ are moments of inertia, $g$ is the gravitational acceleration, $\\theta$ is the angle between the stationary $z'$-axis and the body $z$-axis, $\\dot \\psi$ is the spin angular velocity, $\\dot \\phi$ is the precession angular velocity and $\\dot \\theta$ is the nutation angular velocity. \nThe domain of $\\theta$ is $[0,\\pi]$.\nFor a spinning symmetric top on the ground, $\\theta$ should be smaller than $\\pi\/2$, and if $\\theta>\\pi\/2$, then the spinning top is suspended from the fixed point. 
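The Lagrangian above can be transcribed numerically in a few lines; a minimal sketch, where the parameter and state values used in any check are illustrative only:

```python
from math import sin, cos, pi

def lagrangian(theta, theta_dot, phi_dot, psi_dot, Ix, Iz, Mgl):
    # L = T - U for the heavy symmetric top:
    # (Ix/2)(theta_dot^2 + phi_dot^2 sin^2 theta)
    #   + (Iz/2)(psi_dot + phi_dot cos theta)^2 - M g l cos theta.
    kinetic = 0.5 * Ix * (theta_dot**2 + phi_dot**2 * sin(theta)**2) \
        + 0.5 * Iz * (psi_dot + phi_dot * cos(theta))**2
    potential = Mgl * cos(theta)
    return kinetic - potential
```

For a horizontal axis ($\theta=\pi/2$) with only spin, the potential term vanishes and $L$ reduces to the spin kinetic energy $I_z\dot\psi^2/2$.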
\n\nThere are two conserved angular momenta which can be obtained from Lagrangian,\nand one can define two constants $a$ and $b$ by using these conserved angular momenta as \\cite{Goldstein}\n\\begin{eqnarray}\n\ta&=&\\frac{I_z}{I_x}(\\dot \\psi+\\dot \\phi \\cos \\theta), \\\\\n\tb&=&\\dot \\phi \\sin^2 \\theta + a \\cos \\theta,\n\\end{eqnarray}\nwhere $a=L_z\/I_x$ and $b=L_{z'}\/I_x$.\nHere, $L_z$ and $L_{z'}$ are conserved angular momenta in the body $z$ direction and stationary $z'$ direction, respectively.\n\nOne can define a constant from energy as \n\\begin{equation}\n\tE'=\\frac{I_x}{2} \\dot \\theta ^2 +\\frac{I_x}{2} \\dot \\phi^2 \\sin^2 \\theta + Mgl \\cos \\theta \\label{eprime},\n\\end{equation}\nand its relation with the energy is $E'=E-I_x^2 a^2\/(2 I_z)$.\n\nBy using change of variable $u=\\cos \\theta$, one can obtain the cubic function from \\eqref{eprime} as\\cite{Goldstein}\n\\begin{equation}\n\tf(u)=(\\alpha - \\beta u)(1-u^2)-(b-a u^2)\t\n\t\\label{cubicf}\n\\end{equation}\nwhich is equal to $\\dot u^2$,\nwhere $\\alpha=2 E'\/I_x$ and $\\beta=2 Mgl\/I_x$.\nThis cubic function can be used to find turning angles.\n\nFrom $E'=I_x \\dot \\theta ^2\/2+U_{eff}$ \\cite{LandauLifshitz}, it is possible to define an effective potential\n\\begin{equation}\n\tU_{eff}(\\theta)= \\frac{I_x}{2}\\frac{(b-a \\cos \\theta)^2}{\\sin^2 \\theta}+Mgl \\cos \\theta.\n\t\\label{ueff}\n\\end{equation}\n\nBy using the derivative of $U_{eff}$ with respect to $\\theta$\n\\begin{equation}\n\t\\frac{d U_{eff}(\\theta)}{d \\theta}= \\frac{I_x}{\\sin^3 \\theta} \\left[ (b-a \\cos \\theta)(a-b \\cos \\theta)- \\frac{Mgl}{I_x} \\sin^4 \\theta \\right],\n\t\\label{dueff}\n\\end{equation}\nit is possible to find the minimum of $U_{eff}$.\nThe factor $\\sin \\theta$ is equal to zero when $\\theta$ is equal to $0$ or $\\pi$, and effective potential goes to infinity at these angles.\nThe root of equation \\eqref{dueff} is between $0$ and $\\pi$, and it will be designated by 
$\\theta_r$ giving the minimum of the effective potential,\nand it can be found numerically.\nThen, the form of the effective potential is like a well. \nThe general structure of $U_{eff}$ together with $E'$ can be seen in figure \\ref{fig:ueffg}.\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=7cm]{ueff_gn2_d.pdf}\n\t\t\\caption{General structure of $U_{eff}(\\theta)$ and $E'$. $\\theta_{min}$ and $\\theta_{max}$ show turning angles, and $\\theta_r$ represents the angle where the minimum of $U_{eff}$ occurs.\n\t\tCurve (red) shows $U_{eff}$, dashed (blue) line shows $E'$ and horizontal continuous (black) line shows the minimum of $U_{eff}$.\n\t\t}\n\t\t\\label{fig:ueffg}\n\t\\end{center}\n\\end{figure}\n\nBy using equation \\eqref{dueff}, one can write \\cite{Goldstein}\n\\begin{equation}\n\t\\dot \\phi^2 \\cos \\theta- \\dot \\phi a+\\frac{Mgl}{I_x}=0.\n\\end{equation}\nThe root of this equation can also be used to obtain the minimum of $U_{eff}$.\nBy using the discriminant of this equation, \none can define a parameter $\\tilde a=\\sqrt{4 Mgl \/I_x}$ to make a discrimination between \"strong top\" (or fast top) where $a> \\tilde a$ and \"weak top\" (or slow top) where $a< \\tilde a$ \\cite{KleinSommerfeld, Tanriverdi_abdffrnt}. \n\nThe position of the minimum and the shape of $U_{eff}$ can be helpful in understanding the motion.\nIf $E'$ is equal to the minimum of $U_{eff}$ then the regular precession is observed.\nIf $E'$ is greater than the minimum of $U_{eff}$, like in figure \\ref{fig:ueffg}, the intersection points of $E'$ and $U_{eff}$ give the turning angles.\nAnd the symmetric top nutates between these two angles periodically.\nThere can be different types of motion, and some of these motions can be determined by using relations between $E'$ \\& $Mglb\/a$ and $a$ \\& $b$ when $|a|\\ne|b|$ \\cite{Tanriverdi_abdffrnt}. 
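As a concrete illustration, $\theta_r$ can be located by bisection on the bracketed factor of \eqref{dueff}; a minimal sketch in Python (the constants are the ones used in the examples of the next section, with $a$ and $b$ as in the appendix example; the bisection bracket assumes $|a|>|b|$ with positive $Mgl$):

```python
import math

# Example constants from this paper (Section 3 and the appendix example)
Mgl, Ix = 0.068, 0.000228      # J, kg m^2
a, b = 10.0, 2.0               # rad/s

def g(theta):
    # bracketed factor of dU_eff/dtheta in Eq. (dueff); its root gives theta_r
    return (b - a*math.cos(theta))*(a - b*math.cos(theta)) - (Mgl/Ix)*math.sin(theta)**4

def ueff(theta):
    # effective potential, Eq. (ueff)
    return 0.5*Ix*(b - a*math.cos(theta))**2 / math.sin(theta)**2 + Mgl*math.cos(theta)

# For |a|>|b| and Mgl>0, g<0 near 0 and g>0 near pi, so bisection brackets theta_r
lo, hi = 1e-6, math.pi - 1e-6
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
theta_r = 0.5*(lo + hi)
```

For these constants the root lands near the value $\theta_r$ quoted in the appendix example, and $U_{eff}(\theta_r)$ reproduces the quoted minimum.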
\n\n\n\\section{Effective potential}\n\\label{scnd}\n\nThe relation between $a$ and $b$ can affect the effective potential.\nThere are three possible relations between $a$ and $b$: $|a|>|b|$, $|a|<|b|$ and $|a|=|b|$. \nWe will consider two different possibilities, $|a|>|b|$ and $|a|<|b|$, to study the effective potential, since the third one, $|a|=|b|$, has been studied previously \\cite{Symon, Tanriverdi_abeql}.\nWe will give examples for the studied cases, and for these examples, the following constants will be used: $Mgl=0.068 \\,J$, $I_x=0.000228 \\,kg \\,m^{2}$ and $I_z=0.0000572 \\,kg \\,m^{2}$.\n\n\\subsection{Effective potential when $|a|>|b|$}\n\nIn this section, we will study the case when $|a|>|b|$.\nAfter factoring equation \\eqref{dueff}, it can be written as\n\\begin{equation}\n\t\\frac{d U_{eff}(\\theta)}{d \\theta}= \\frac{a^2 I_x}{ \\sin^3 \\theta} \\left[ (\\frac{b}{a}- \\cos \\theta)(1- \\frac{b}{a} \\cos \\theta)- \\frac{Mgl}{I_x a^2} \\sin^4 \\theta \\right].\n\t\\label{dueff3}\n\\end{equation}\nThe angle making the expression in the square brackets zero gives the minimum of the effective potential.\nIf $|a|>|b|$, the second term in the square brackets is always negative and $1-(b\/a) \\cos \\theta$ is always positive; then $b\/a-\\cos \\theta$ should be positive for the root.\nTherefore, the inclination angle should satisfy $\\pi>\\theta>\\arccos b\/a$.\nIn the limit where $a$ goes to infinity, $\\theta_r$ goes to $\\arccos b\/a$.\nIn the limit where $a$ goes to zero, $b$ should also go to zero since $|a|>|b|$; then the first term goes to zero (see equation \\eqref{dueff}) and the second term should also go to zero for the root, which is possible when $\\theta_r$ goes to $\\pi$.\nIf both $a$ and $b$ are negative or positive, $\\theta_r$ is between $\\pi\/2$ and $\\pi$ when $|a|$ is close to zero, and it is between $0$ and $\\pi\/2$ when $|a|$ and $|b|$ are large enough.\nIf only one of them is negative, then $\\theta_r$ is always greater than $\\pi\/2$.\n\nWhen $b=0$, in the limit where $|a|$ goes to infinity, $\\theta_r$ goes to $\\pi\/2$, 
and the limit where $a$ goes to zero does not change: $\\theta_r$ still goes to $\\pi$.\n\nThese show that $\\theta_r \\in (\\arccos b\/a,\\pi)$. \nIf $b\/a$ goes to $1$, then $\\arccos b\/a$ goes to $0$.\nTherefore, $\\theta_r$ can take values between $0$ and $\\pi$ depending on the signs of $a$ and $b$, the ratio $b\/a$ and the magnitudes of $a$ and $b$.\n\nNow, we will consider the change of $U_{eff_{min}}$ when $|a|>|b|$.\nWe have seen that as $|a|$ goes to zero, $\\theta_r$ goes to $\\pi$.\nThen, it can be seen from equation \\eqref{ueff} that $U_{eff_{min}}$ goes to $-Mgl$ as $|a|$ goes to zero.\nAs $|a|$ goes to infinity, $\\theta_r$ goes to $\\arccos b\/a$, and then $U_{eff_{min}}$ goes to $Mgl b\/a$ from below.\nThen, $Mglb\/a$ is always greater than $U_{eff_{min}}$ when $|a|>|b|$.\n\n\\begin{figure}\n\t\\begin{center}\n\t\t\\subfigure[$U_{eff}$]{\n\t\t\t\\includegraphics[width=4.0cm]{ueff_a3.pdf}\n\t\t\n\t\t\t}\n\t\t\t\\subfigure[$\\theta_r$]{\n\t\t\t\t\\includegraphics[width=4.0cm]{loop_a_theta.pdf}\n\t\t\t\n\t\t\t\t}\n\t\t\t\t\\subfigure[$U_{eff_{min}}$]{\n\t\t\t\t\t\\includegraphics[width=4.0cm]{loop_a_umin.pdf}\n\t\t\t\t\n\t\t\t\t\t}\n\t\t\t\t\t\\caption{ $U_{eff}$, change of $\\theta_r$ with respect to $a$ and change of $U_{eff_{min}}$ with respect to $a$.\n\t\t\t\t\ta) Three different effective potentials: $a=10 \\, rad \\,s^{-1}$ (green dashed-dotted curve), $a=30 \\, rad \\,s^{-1}$ (blue dashed curve) and $a=60 \\, rad \\,s^{-1}$ (red continuous curve), and all of them satisfy $b\/a=0.5$. Black line shows $Mglb\/a$.\n\t\t\t\t\tb) Change of $\\theta_r$ with respect to $a$ for constant $b\/a=0.5$ ratio (red curve). Black line shows $\\arccos(b\/a)=1.05 \\,rad$. Vertical dotted line shows position of $\\tilde a$.\n\t\t\t\t\tc) Change of $U_{eff_{min}}$ with respect to $a$ for constant $b\/a=0.5$ ratio (red curve). Black line shows $Mglb\/a$. 
Vertical dotted line shows position of $\\tilde a$.\n\t\t\t\t\t}\n\t\t\t\t\t\\label{fig:ueffg5}\n\t\\end{center}\n\\end{figure}\n\nAs an example, we will consider that there is a constant ratio between $a$ and $b$: $b\/a=0.5$.\nIn figure \\ref{fig:ueffg5}(a), three different effective potentials for three different $a$ values are shown together with $Mglb\/a$.\nIn this figure, it can be seen that the form of $U_{eff}$ and the magnitude of its minimum change as $a$ changes, and so does $\\theta_r$.\nIn figure \\ref{fig:ueffg5}(b), it can be seen that $\\theta_r$ takes values very close to $\\pi$ for very small values of $a$ and goes to $\\arccos 0.5=1.05 \\,rad$ as $a$ increases.\nIn figure \\ref{fig:ueffg5}(c), it can be seen that the minimum of $U_{eff}$ takes values very close to $-Mgl$ when $a$ is small, and it goes to $Mglb\/a$ as $a$ goes to infinity.\nThese are consistent with the previous considerations.\n\nOne can observe a shift in the behaviour of $\\theta_r$ and $U_{eff_{min}}$ near $a=\\tilde a$. 
\nBut this shift is not sudden, and one can say that the usage of $\\tilde a$ gives an approximate separation when $|a|>|b|$.\n\nIn some cases, $Mgl$ can be negative, and there are some differences in the effective potential in these cases.\nWhen $Mgl$ is negative, the second term in equation \\eqref{dueff3} becomes positive, and then $\\arccos b\/a> \\theta > 0$ for the root.\nIn the limit where $a$ goes to infinity, again $\\theta_r$ goes to $\\arccos b\/a$.\nIn the limit where $a$ goes to zero, $\\theta_r$ goes to $0$.\nThese show that the interval for the minimum of the effective potential changes from $(\\arccos b\/a,\\pi)$ to $(0,\\arccos b\/a)$ when $Mgl$ changes sign from positive to negative.\nIf both $a$ and $b$ are negative or positive, $\\theta_r$ is between $0$ and $\\pi\/2$.\nIf only one of them is negative, then $\\theta_r$ can be greater than $\\pi\/2$ when $|a|$ is large enough.\nWhen $Mgl$ is negative, the minimum of $U_{eff}$ goes to $-|Mgl|$ as $a$ goes to $0$, and it goes to $-|Mgl| b\/a$ as $a$ goes to infinity.\n\n\\subsection{Effective potential when $|b|>|a|$}\n\nIn this section, we will study the case when $|b|>|a|$.\nAfter factoring equation \\eqref{dueff} in another way, it can be written as \n\\begin{equation}\n\t\\frac{d U_{eff}(\\theta)}{d \\theta}= \\frac{b^2 I_x}{\\sin^3 \\theta} \\left[ (1-\\frac{a}{b} \\cos \\theta)(\\frac{a}{b} - \\cos \\theta)- \\frac{Mgl}{I_x b^2} \\sin^4 \\theta \\right].\n\t\\label{dueff2}\n\\end{equation} \nSimilar to the previous case, the first term should be positive, and $a\/b- \\cos \\theta$ should be positive when $|b|>|a|$ for the root, and then $\\pi>\\theta>\\arccos a\/b$.\nIn the limit where $b$ goes to infinity, the second term in the square brackets goes to zero.\nThen, as $|b|$ goes to infinity, $\\theta_r$ should go to $\\arccos a\/b$.\nIn the limit where $b$ goes to zero, $\\theta_r$ goes to $\\pi$, which can be seen from equation \\eqref{dueff} similar to the previous section.
\nThen, $\\theta_r$ goes to $\\pi$ when $b$ goes to zero, and it goes to $\\arccos a\/b$ when $|b|$ goes to infinity.\n\nWhen $a$ and $b$ are both positive or negative, as $|b|$ increases from zero to infinity, $\\theta_r$ decreases from $\\pi$ to $\\arccos a\/b<\\pi\/2$.\nIf only one of them is positive, then $\\theta_r$ is always greater than $\\pi\/2$ and shows a decrease similar to the both positive or both negative case.\n\nWhen $a=0$, as $|b|$ goes to infinity $\\theta_r$ goes to $\\pi\/2$, and it goes to $\\pi$ as $|b|$ goes to $0$.\n\nSimilar to the previous case, $\\theta_r$ can take values between $0$ and $\\pi$ depending on the signs of $a$ and $b$, the ratio $a\/b$ and the magnitudes of $a$ and $b$.\n\nThe magnitude of the minimum of $U_{eff}$ changes with respect to $b$.\nIn the limit where $b$ goes to zero, $U_{eff_{min}}$ goes to $-Mgl$ since $\\theta_r$ goes to $\\pi$.\nIn the limit where $b$ goes to infinity, $\\theta_r$ goes to $\\arccos a\/b$, and then the minimum of $U_{eff}$ goes to infinity with $I_x b^2(1-(a\/b)^2)\/2$.\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\subfigure[$U_{eff}$]{\n\t\t\t\\includegraphics[width=4.2cm]{ueff_b3.pdf}\n\t\t\t\\label{fig:ueffg4a}\n\t\t\t}\n\t\t\t\\subfigure[$\\theta_r$]{\n\t\t\t\t\\includegraphics[width=4.2cm]{loop_b_theta.pdf}\n\t\t\t\t\\label{fig:ueffg4b}\n\t\t\t\t}\n\t\t\t\t\\subfigure[$U_{eff_{min}}$]{\n\t\t\t\t\t\\includegraphics[width=4.2cm]{loop_b_umin.pdf}\n\t\t\t\t\t\\label{fig:ueffg4c}\n\t\t\t\t\t}\n\t\t\t\t\t\\caption{ $U_{eff}$, change of $\\theta_r$ with respect to $b$ and change of $U_{eff_{min}}$ with respect to $b$.\n\t\t\t\t\ta) Three different effective potentials: $b=10 \\, rad \\,s^{-1}$ (green dashed-dotted curve), $b=30 \\, rad \\,s^{-1}$ (blue dashed curve) and $b=60 \\, rad \\,s^{-1}$ (red continuous curve) with $a\/b=0.5$. Black line shows $Mglb\/a$.\n\t\t\t\t\tb) Change of $\\theta_r$ with respect to $b$ for constant $a\/b=0.5$ ratio (red curve). Black line shows $\\arccos(a\/b)=1.05 \\,rad$. 
Vertical dotted line shows the position of $b=2 \\tilde a$.\n\t\t\t\t\tc) Change of $U_{eff_{min}}$ with respect to $b$ for constant $a\/b=0.5$ ratio (red curve). Black line shows $Mglb\/a$. Vertical dotted line shows position of $b=2 \\tilde a$. Dotted curve shows $I_x b^2(1-(a\/b)^2)\/2$.\n\t\t\t\t\t}\n\t\t\t\t\t\\label{fig:ueffg6}\n\t\\end{center}\n\\end{figure}\n\nFor the examples, similar to the previous case, a constant ratio between $a$ and $b$ is considered, this time $a\/b=0.5$.\nIn figure \\ref{fig:ueffg6}(a), three different effective potentials for three different $b$ values are shown similar to the previous section.\nIn this figure, there are some similarities and differences from figure \\ref{fig:ueffg5}(a).\nOne can see that $\\theta_r$ is also different for different $b$ values, similar to the previous section.\nIt can be seen that as $b$ takes different values, the form of $U_{eff}$ and the magnitude of its minimum become different, similar to the previous case, and the minimum can be greater than $Mglb\/a$, unlike the previous case.\nIn figure \\ref{fig:ueffg6}(b), it can be seen that for very small values of $b$, $\\theta_r$ is close to $\\pi$ and it goes to $\\arccos 0.5=1.05 \\,rad$ as $b$ increases.\nIn figure \\ref{fig:ueffg6}(c), it can be seen that the minimum of $U_{eff}$ is close to $-Mgl$ if $b$ is small, and it goes to infinity with $I_x b^2(1-(a\/b)^2)\/2$ as $b$ goes to infinity.\nThese are the expected results from the explanations given above.\n\nBy considering these results, it can be said that $Mglb\/a$ is not important here, unlike the $|a|>|b|$ case.\nFrom figures \\ref{fig:ueffg6}(b) and \\ref{fig:ueffg6}(c), one can say that the shift in the behaviour of $\\theta_r$ and $U_{eff_{min}}$ does not take place around $a=\\tilde a$, and the usage of $\\tilde a$ for separation is not suitable when $|b|>|a|$.\n\nWhen $Mgl$ is negative, the second term in equation \\eqref{dueff2} becomes positive, and then in this case, $a\/b-\\cos \\theta$ should be 
negative, which is possible when $\\arccos a\/b> \\theta > 0$.\nIn the limit where $b$ goes to infinity, again $\\theta_r$ goes to $\\arccos a\/b$.\nIn the limit where $b$ goes to zero, $\\theta_r$ goes to $0$.\nSimilar to the previous case, the interval for the minimum of the effective potential changes from $(\\arccos a\/b,\\pi)$ to $(0,\\arccos a\/b)$.\nIf both $a$ and $b$ are negative or positive, $\\theta_r$ is between $0$ and $\\pi\/2$.\nIf only one of them is negative, then $\\theta_r$ can be greater than $\\pi\/2$ when $|b|$ goes to infinity, and $\\theta_r$ goes to $0$ as $b$ goes to zero.\nWhen $a=0$, in the limit where $|b|$ goes to infinity, $\\theta_r$ goes to $\\pi\/2$, and the limit where $|b|$ goes to zero does not change: $\\theta_r$ still goes to $0$.\nIf $Mgl$ is negative, the minimum of $U_{eff}$ goes to $-|Mgl|$ when $b$ goes to $0$, and it goes to infinity as $|b|$ goes to infinity.\n\n\n\\section{Conclusion}\n\nThe effective potential can be helpful in understanding the motion of a symmetric top in different ways.\n$E'$ should be equal to or greater than the minimum of $U_{eff}$ for physical motions.\nBy using the limits given in section \\ref{scnd}, one can say that the regular precession takes place at greater angles when $a$ and $b$ are small, \nand as $a$ and $b$ increase, it takes place at smaller angles.\nTo observe regular precession at angles smaller than $\\pi\/2$, $a$ and $b$ should have the same sign and large enough magnitudes.\nThe limiting angle when $|a|$ or $|b|$ goes to infinity can be found by using the inverse cosine of $b\/a$ and $a\/b$ when $|a|>|b|$ and $|b|>|a|$, respectively.\nIf $E'$ is greater than the minimum of $U_{eff}$, then different types of motions can be seen \\cite{Tanriverdi_abdffrnt}.\nThese motions will take place at angles closer to $\\theta_r$ when $E'$ is close to the minimum of $U_{eff}$,\nand by considering the signs and magnitudes of $a$ and $b$ one can get an idea of the angles where the motion takes place.\n\nIf $a$ and\/or $b$ are small, then there can be a high 
asymmetry in the form of $U_{eff}$.\nFrom the definitions of $U_{eff}$ and $E'$, one can say that $\\dot \\theta^2$ is proportional to the difference $E'-U_{eff}(\\theta)$ for a specific $\\theta$ value.\nTherefore, one can say that as $\\theta$ increases from $\\theta_{min}$ to $\\theta_r$, the change in $\\dot \\theta$ is gradual, and as $\\theta$ increases from $\\theta_{r}$ to $\\theta_{max}$, the change in $\\dot \\theta$ is more rapid when $a$ and\/or $b$ are small.\nAs $\\theta$ changes from $\\theta_{max}$ to $\\theta_{min}$, this change in $\\dot \\theta$ is firstly rapid and then gradual.\n\nIf $a$ and $b$ are large enough and the difference $E'-U_{eff_{min}}$ is small enough, then the asymmetry in $U_{eff}$ can be ignored.\nIn these cases, one can make an approximation and find an exact solution for this approximation \\cite{Goldstein, Arnold}.\nThis approximation works better when the asymmetry in $U_{eff}$ is smallest.\n\nWe have seen that the comparison of $|a|$ with $\\tilde a$ can be used when $|a|>|b|$ for an approximate separation, and it is not suitable when $|b|>|a|$.\nBut a comparison between $|b|$ and $\\tilde a$ can be used when $|b|>|a|$, and if it is used, one should use a naming other than \"strong top\" or \"weak top\".
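This asymmetry can be checked numerically on the example treated in the appendix; a rough sketch (the constants and $E'$ are the ones quoted there, while the scan resolution and bisection brackets are our own choices):

```python
import math

# Appendix example: Mgl in J, I_x in kg m^2, a and b in rad/s, E' in J
Mgl, Ix = 0.068, 0.000228
a, b, Eprime = 10.0, 2.0, -0.0150

def ueff(th):
    # effective potential, Eq. (ueff)
    return 0.5*Ix*(b - a*math.cos(th))**2 / math.sin(th)**2 + Mgl*math.cos(th)

def bisect(fn, lo, hi, it=60):
    # standard bisection, assuming fn changes sign on (lo, hi)
    for _ in range(it):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if fn(lo)*fn(mid) > 0 else (lo, mid)
    return 0.5*(lo + hi)

# theta_r from a coarse scan of the well's minimum
theta_r = min((k*math.pi/10000 for k in range(1, 10000)), key=ueff)
# turning angles solve U_eff(theta) = E'
tmin = bisect(lambda th: ueff(th) - Eprime, 0.5, theta_r)
tmax = bisect(lambda th: ueff(th) - Eprime, theta_r, 3.1)
# the well is wider on the left of theta_r: theta_r - tmin > tmax - theta_r
```

For these constants the turning angles reproduce the appendix values, and the right side of the well is indeed the steeper one.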
\nWe should note that the comparison of $|a|$ with $\\tilde a$ is very useful when $|a|=|b|$ \\cite{Tanriverdi_abeql}.\n\nAnother thing that should be taken into account is the relation between $Mglb\/a$ and $E'$ \\cite{Tanriverdi_abdffrnt}.\nThis study has shown that the minimum of $U_{eff}$ is always smaller than $Mglb\/a$ when $|a|>|b|$, which shows that one can always observe all possible motions when $|a|>|b|$.\nOn the other hand, $Mglb\/a$ can be greater than or smaller than the minimum of $U_{eff}$ when $|b|>|a|$.\n\nThese results show that the effective potential has several advantages over the cubic function in understanding the motion of a spinning heavy symmetric top.\nHowever, the cubic function is still important since it is better suited for proofs.\n\n\n\\section{Appendix}\n\nThere is an alternative to the effective potential: the cubic function given in equation \\eqref{cubicf}.\n\nHere, we will compare the cubic function with the effective potential.\nThe cubic function is equal to $\\dot u^2$, and its roots give the points where $\\dot u=0$.\n$\\dot \\theta$ is equal to zero at two of these three points, and the third root is irrelevant to the turning angles.\nThen, one can use the cubic function to obtain the turning angles. \nIf these two roots are the same, i.e. 
a double root, then one can also say that this case gives regular precession.\nThese turning angles can also be obtained from the effective potential by using $E'=U_{eff}(\\theta)$.\nAnd, if $E'=U_{eff_{min}}$ then the regular precession is observed, as explained above.\n\nOn the other hand, there is no correspondence between the minimum of $U_{eff}$ and the maximum of $f(u)$.\nThe reason for this is the multiplication with $1-u^2$ during the change of variable.\nThen, $f(u)$ cannot be used to carry out analyses similar to those given above for $U_{eff}$.\n\nWe will consider a case satisfying $\\alpha=575.1 \\, s^{-2}$, $a=10 \\, rad \\, s^{-1}$, $b=2 \\, rad \\, s^{-1}$ as an example.\nFor the symmetric top with the previously given parameters, $\\beta$ becomes $ 596.5 \\, s^{-2}$.\n$U_{eff}$ and $f(u)$ can be seen in figure \\ref{fig:ueff_fu}.\nOne can see that $\\theta_{min}=1.83 \\, rad$ and $\\theta_{max}=2.57 \\, rad$ can be obtained from $\\arccos(u_2)=1.83 \\, rad$ and $\\arccos(u_1)=2.57 \\, rad$, respectively.\nOn the other hand, $\\theta_r=2.28 \\, rad$ cannot be obtained from $\\arccos(u_m)=2.18 \\, rad$.\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\subfigure[$U_{eff}$]{\n\t\t\t\\includegraphics[width=5.5cm]{ueff_comp.pdf}\n\t\t\t}\n\t\t\t\\subfigure[$f(u)$]{\n\t\t\t\t\\includegraphics[width=5.5cm]{fu_comp.pdf}\n\t\t\t\t}\n\t\t\t\t\\caption{$U_{eff}$ and $f(u)$ when $\\alpha=575.1 \\, s^{-2}$, $\\beta=596.5 \\, s^{-2}$, $a=10 \\, rad \\, s^{-1}$ and $b=2 \\, rad \\, s^{-1}$.\n\t\t\t\ta) $U_{eff}$ continuous (red) curve, $E'=-0.0150 \\, J$ dashed (blue) line, $\\theta_{min}=1.83 \\, rad$, $\\theta_{max}=2.57 \\, rad$, $\\theta_r=2.28 \\, rad$ and $U_{eff_{min}}=-0.0299 \\, J$. 
\n\t\t\t\tb) $f(u)$ continuous (red) curve, $u_1=-0.841$, $u_2=-0.258$, $u_3=1.05$, $u_m=-0.575$ and $f_{max}=81.6 \\, s^{-2}$.\n\t\t\t\t}\n\t\t\t\t\\label{fig:ueff_fu}\n\t\\end{center}\n\\end{figure}\n\nThese show that $f(u)$ can be used to obtain the turning angles; however, it cannot be used to obtain $\\theta_r$, where the minimum of $U_{eff}$ occurs.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nLet $\\mathbb{T}=\\mathbb{R}\/\\mathbb{Z}$ and identify it with $[0,1]$ in the usual way. For integers $d\\ge 2$, let $F$ be the $d$-adic Bernoulli map \n$$F: \\mathbb T\\rightarrow \\mathbb T,\\ t\\mapsto d\\cdot t\\mod 1.$$\nLet $f$ be a function (weight) on $\\mathbb{T}$. Consider the weighted transfer operator $L$ associated to $F$:\n\\begin{align*}\n(Lu)(t)\n&=\\frac1 d\\sum_{s\\in F^{-1}(t)} f(s)u(s)\\\\\n&=\\frac1 d\\sum_{i=0}^{d-1} f\\left(\\frac{t+i}{d}\\right)u\\left(\\frac{t+i}{d}\\right),\\ t\\in\\mathbb{T}\n\\end{align*}\nwhere $u$ is a function on $\\mathbb{T}$. Such operators are also called Ruelle (or Ruelle-Perron-Frobenius) operators. They can also be defined for more general maps and on more general spaces (cf. Hennion \\cite{Hennion1993} and Baladi \\cite{Baladi2000} for more background).\n\nIn this paper, we study spectral properties of $L$ as an operator acting on $C(\\mathbb{T})$, the space of continuous functions on $\\mathbb{T}$. When $f\\equiv 1$, this question has been extensively studied,\nespecially in the case $d=2$ (cf. Vep\\v{s}tas \\cite{V} and references therein). For more general weights $f$, there are Perron-Frobenius type theorems that describe spectral properties of $L$ (cf. \\cite[Theorem~1.5]{Baladi2000}). However, such theorems often require $f$ to be strictly positive, a requirement that is not met by the main examples we are interested in:\n$$(c)\\ f(t)=|\\cos(\\pi t)|^q \\hspace{1cm} (s)\\ f(t)=|\\sin(\\pi t)|^q$$\nwhere $q>0$. 
In Section \\ref{sec:quasicompact}, we develop Perron-Frobenius type theorems for transfer operators $L$ with such `degenerate' weights (more precisely, weights that have exactly one zero on $\\mathbb{T}$). The theorems are derived using notions of quasicompactness and Krein property, which we verify by exploiting the specific structure of the Bernoulli map $F$; see also Fan and Lau \\cite{FanLau1998} for similar treatments. As a corollary, we conclude that the operator $L$ satisfies classical Perron-Frobenius theorems in all cases of $d$ and $(c)\/(s)$, except for the case $d=2$ and $(c)$. \n\nIn Section \\ref{sec:special-weights}, we study in more detail the spectral properties of $L$ in the non-exceptional cases. When $q$ is an even integer, we obtain explicit computations of $\\rho(L)$ (the spectral radius of $L$) by reducing to a finite-dimensional problem. When $q$ is not an even integer, evaluating $\\rho(L)$ is more difficult. We derive in this case estimations of $\\rho(L)$, particularly for $d=3$ (note that when $d$ is odd, $(c)$ and $(s)$ are equivalent). As an application, we obtain asymptotic behavior of some integrals of the form\n$$I_n=\\int_{\\mathbb{T}} \\prod_{j=0}^{n-1} f(d^j t)dt,\\quad \\text{as } n\\rightarrow\\infty.$$\nIn particular, we extend a result of Strichartz \\cite{Strichartz1990} concerning the Fourier transform of the middle-third Cantor set. We also study geometric properties of the function $L1$, \nas well as asymptotic behavior of $\\rho(L)$ as $q\\rightarrow\\infty$. For the latter question it turns out that one needs to distinguish the case when $d$ is even and $f$ is given by $(s)$.\n\nIn Section \\ref{sec:binary}, we give a detailed account of the exceptional case $d=2$ and $(c)$. 
Using an explicit formula for the iterates $L^n 1$, we find the spectral radius and eigenfunctions of $L$ explicitly (see also Fan and Lau \\cite{FanLau1998} for related results), and obtain geometric properties of $L^n1$ for $n\\ge 1$ (especially for $q\\le 1$ and even $q$'s). The spectral problem in this case is closely related to the case $f\\equiv 1$ mentioned above, and has to do with the Hurwitz zeta functions.\n\nIn Section \\ref{sec:Lp}, we study the spectral problem on Lebesgue spaces. In Section \\ref{sec:application}, we give an application to Fourier multipliers.\n\n\\section{Quasicompact transfer operators}\\label{sec:quasicompact}\n\nLet $f:\\mathbb{R}\\to\\mathbb{R}$ be a continuous nonnegative 1-periodic function, and let $d\\ge 2$ be an integer.\nWe consider the transfer operator\n\\begin{equation}\\label{L:L}\n (Lu)(t)=\\frac1 d\\sum_{i=0}^{d-1} f\\left(\\frac{t+i}{d}\\right)u\\left(\\frac{t+i}{d}\\right) .\n\\end{equation}\nLet $\\mathbb{T}=[0,1]$ with $0$ and $1$ identified (circle). Let $C(\\mathbb{T})$ be the Banach space of continuous complex-valued functions on $\\mathbb{T}$\nendowed with the maximum norm $\\|\\cdot\\|_\\infty$.\nThen $L:C(\\mathbb{T})\\to C(\\mathbb{T})$ is a bounded linear operator. Moreover, $L$ is positive in the sense that $u\\ge 0$ implies $Lu\\ge 0$.\n\nDefine a map $F:\\mathbb{T}\\to \\mathbb{T}$ by $F(t)=d\\cdot t \\mod 1$. Then we can write\n\\[ (Lu)(t)=\\frac1d \\sum_{s\\in F^{-1}(t)} f(s)u(s), \\quad t\\in\\mathbb{T} .\\]\nFor each $n\\in\\mathbb{N}$, set\n\\[ f_n(t)=\\prod_{j=0}^{n-1} f\\big(F^j(t)\\big) .\\]\nThen\n\\begin{equation}\\label{L:Ln}\n(L^n u)(t)=\\frac{1}{d^n} \\sum_{s\\in F^{-n}(t)} f_n(s)u(s) .\n\\end{equation}\n\nLet $0<\\alpha\\le 1$. 
\nConsider the Banach space $\\H$ \nof H\\\"older continuous functions $u:\\mathbb{T}\\to \\mathbb{C}$ with the norm\n\\[ \\|u\\|_\\alpha=\\sup_{s\\neq t} \\frac{|u(s)-u(t)|}{|s-t|^\\alpha} + \\|u\\|_\\infty .\\]\nIf $f\\in \\H$ then $L_\\alpha u=Lu$ defines a bounded linear operator $L_\\alpha:\\H\\to \\H$.\n\nLet $T$ be a bounded linear operator on a Banach space $X$. We denote its spectral radius by $\\rho(T)$.\n$T$ is called quasicompact if there exists a compact operator $K$ on $X$ such that $\\rho(T-K)<\\rho(T)$.\nIf $T$ is quasicompact and $\\lambda\\in\\mathbb{C}$ is in the spectrum of $T$ with $|\\lambda|>\\rho(T-K)$, then $\\lambda$ is an eigenvalue of\n$T$.\n\nThe following theorem is proved in \\cite[pages 3--4]{Sm}. In \\cite[Proposition 1]{Sm} it is assumed that $f$ is positive while we assume here that $f$ is nonnegative. However positivity of $f$ is not used on pages 3--4 of \\cite{Sm}. See also \\cite{Hennion1993}.\n\n\\begin{thm}\\label{L:t1}\nLet $0\\le f\\in \\H$ for some $0<\\alpha\\le 1$.\nThen $\\rho(L)=\\rho(L_\\alpha)$. Furthermore, if $\\rho(L)>0$, then $L_\\alpha$ is quasicompact.\n\\end{thm}\n\nFor each $n\\in\\mathbb{N}$, set\n\\begin{equation}\\label{L:hn}\n h_n=L^n 1.\n \\end{equation}\nDefine\n\\begin{equation}\\label{L:rnRn}\n r_n=\\min_{t\\in\\mathbb{T}} h_n(t),\\quad R_n=\\max_{t\\in\\mathbb{T}} h_n(t) .\n\\end{equation}\nIt is easy to show that\n\\[ r_{m+n}\\ge r_mr_n,\\quad R_{m+n}\\le R_mR_n .\\]\nTherefore, the limits\n\\begin{equation}\\label{L:rR}\n r=\\lim_{n\\to\\infty} (r_n)^{1\/n} =\\sup_n\\ (r_n)^{1\/n},\\quad R=\\lim_{n\\to\\infty} (R_n)^{1\/n}=\\inf_n\\ (R_n)^{1\/n}\n\\end{equation}\nexist. 
In particular, for every $n\\in\\mathbb{N}$ we have\n\\begin{equation}\\label{L:est1}\n r_n^{1\/n}\\le r\\le R\\le R_n^{1\/n}.\n\\end{equation}\nMoreover, since $R_n=\\|L^n\\|_{C(\\mathbb{T})\\rightarrow C(\\mathbb{T})}$, by Gelfand's formula, $R=\\rho(L)$.\n\n\\begin{thm}\\label{L:t2}\nLet $w\\in C(\\mathbb{T})$ be a unit, that is, $w(t)>0$ for all $t\\in\\mathbb{T}$.\nThen\n\\begin{equation}\\label{L:est2}\n\\min_{t\\in\\mathbb{T}} \\frac{(Lw)(t)}{w(t)}\\le r\\le R\\le \\max_{t\\in\\mathbb{T}} \\frac{(Lw)(t)}{w(t)} .\n\\end{equation}\n\\end{thm}\n\\begin{proof}\nWe define a bounded linear operator $S$ on $C(\\mathbb{T})$ by\n\\[ (Su)(t)= \\frac{1}{w(t)} L(wu)(t),\\]\na sequence of functions\n\\[ \\tilde h_n= S^n 1,\\]\nand sequences of numbers\n\\[ \\tilde r_n =\\min_{t\\in\\mathbb{T}} \\tilde h_n(t),\\quad \\tilde R_n=\\max_{t\\in\\mathbb{T}} \\tilde h_n(t) .\\]\nSince $S$ is positive, we obtain for every $n\\in\\mathbb{N}$\n\\[\n (\\tilde r_n)^{1\/n}\\le \\lim_{n\\to\\infty} (\\tilde r_n)^{1\/n}=: \\tilde r\\le \\tilde R:= \\lim_{n\\to\\infty} (\\tilde R_n)^{1\/n}\\le (\\tilde R_n)^{1\/n} .\n\\]\nSince $w$ is a unit, there are constants $a,b>0$ such that\n\\[ a\\le w(t)\\le b\\]\nfor all $t\\in\\mathbb{T}$. This implies\n\\[ a h_n(t)=(L^na)(t) \\le w(t) \\tilde h_n(t) \\le (L^nb)(t)=b h_n(t) .\\]\nFrom this we obtain\n\\[ r_n\\le \\frac{b}{a} \\tilde r_n,\\quad \\tilde r_n\\le \\frac{b}{a} r_n,\\quad R_n\\le \\frac{b}{a} \\tilde R_n,\\quad \\tilde R_n\\le \\frac{b}{a} R_n .\\]\nThus\n\\[ r=\\tilde r,\\quad R=\\tilde R .\\]\nNow \\eqref{L:est2} follows from\n\\[ \\tilde r_1\\le \\tilde r=r\\le \\tilde R=R\\le \\tilde R_1.\\]\n\\end{proof}\n\nWe say that $L$ is a Krein operator if, for all $u\\in C(\\mathbb{T})$ such that $u(t)\\ge 0$ for all $t\\in\\mathbb{T}$ but $u(t_0)>0$ for at least one $t_0\\in\\mathbb{T}$, there is $n\\in\\mathbb{N}$ such that $L^nu$ is a unit. 
Note that $n$ may depend on $u$.\nIt is easy to show that a Krein operator carries units to units (cf. \\cite[Lemma 5.2]{A}).\nIt follows from \\eqref{L:Ln} that if $f(t)>0$ for all $t$ then $L$ is a Krein operator.\nAlso, if $f$ vanishes on an interval of positive length, then $L$ cannot be a Krein operator.\n\n\\begin{lemma}\\label{L:Krein1}\nSuppose $f$ has exactly one zero in $[0,1)$.\nIf $f_n$ has four zeros that form an arithmetic progression with step size $d^{-n}$, then $d=2$ and $f(\\frac12)=0$.\n\\end{lemma}\n\\begin{proof}\nLet $s_0+\\mathbb{Z}$ be the set of zeros of $f$.\nSuppose that $t_i=t+id^{-n}$ is a zero of $f_n$ for $i=0,1,2,3$.\nThen there exist integers $0\\le k_i\\le n-1$, and integers $j_i$ such that\n\\begin{equation}\\label{eq}\n t_i=d^{-k_i}(s_0+j_i),\\quad i=0,1,2,3.\n\\end{equation}\nWe will assume that $k_0=\\max\\{k_0,k_1,k_2,k_3\\}$ (the other cases are mentioned at the end of the proof).\nClearly, $k_0>k_1$.\nSince we can replace $s_0$ by $s_0+j$ with any integer $j$, we will assume that $j_1=0$ in order to simplify the notation.\nFrom $t_1-t_0=t_2-t_1=d^{-n}$, we obtain\n\\[ d^{-k_1}s_0-d^{-k_0}(s_0+j_0)=d^{-n},\\quad d^{-k_2}(s_0+j_2) -d^{-k_1}s_0=d^{-n} .\\]\nEliminating $s_0$ from these equations, we find\n\\[ (1-j_0d^{n-k_0})(d^{k_0-k_1}-d^{k_0-k_2})= (1-j_2d^{n-k_2})(1-d^{k_0-k_1}) .\\]\nThis is an equation involving only integers. Since $n>k_2$, $k_0>k_1$, the right-hand side is not divisible by $d$.\nTherefore, we must have $k_0=k_2$. But this is impossible when $d>2$. 
So we must have $d=2$ and $k_0=k_2=n-1$.\n\nNow suppose that $d=2$ and $k_0=k_2=n-1$.\nWithout loss of generality we take $j_0=0$, $j_2=1$.\nSince $t_1-t_0=t_3-t_2=2^{-n}$ we obtain\n\\[ 2^{n-k_1}(s_0+j_1)=2s_0+1,\\quad 2^{n-k_3}(s_0+j_3)=2s_0+3 .\\]\nEliminating $s_0$ this gives\n\\[ (3-j_32^{n-k_3})(2^{n-k_1-1}-1)=(1-j_12^{n-k_1})(2^{n-k_3-1}-1) .\\]\nIt is clear that $k_10$ for $m d^{-n}\\le t\\le (m+4) d^{-n}$ for some integer $m$, $0\\le m\\le d^n-4$.\nWe claim that $(L^n u)(t)>0$ for all $t\\in\\mathbb{T}$. In fact, by Lemma \\ref{L:Krein1}, if $t\\in\\mathbb{T}$ then among the four points $t_i=d^{-n}(t+m+i)$, $i=0,1,2,3$,\nat least one satisfies $f_n(t_i)>0$. Then \\eqref{L:Ln} implies $(L^n u)(t)\\ge f_n(t_i)u(t_i)>0$.\n\\end{proof}\n\nIf $d=2$ and $f(\\frac12)=0$ then $L$ is not a Krein operator: If $u(0)=0$ then $(L^nu)(0)=0$ for all $n\\in\\mathbb{N}$.\n\n\\begin{thm}[See also \\cite{FanLau1998}]\\label{L:t3}\nSuppose that $L$ is a Krein operator. Then the following statements hold.\\\\\n(a) $R>0$.\\\\\n(b) If $Lv=\\lambda v$ with $v\\ne 0$ and $|\\lambda|=R$,\nthen there is a constant $\\theta\\in\\mathbb{R}$ such that $e^{-i\\theta} v$ is a unit. \\\\\n(c) $L$ has no eigenvalue $\\lambda$ on the circle $|\\lambda|=R$ except possibly $\\lambda=R$.\\\\\n(d) If $R$ is an eigenvalue of $L$, then its algebraic multiplicity is $1$.\\\\\n(e) If $0\\le f\\in \\H$, then $L_\\alpha$ is quasicompact, $r=R$, and $R$ is an eigenvalue of $L_\\alpha$ of algebraic multiplicity $1$ with a unit eigenfunction. 
\\end{thm}\n\\begin{proof}\n(a) Since $L$ is a Krein operator, $h_1=L1$ is a unit, so $00$ such that\n\\[ (L^nz)(t)=(Lw)(t)-R w(t)\\ge\\delta w(t),\\quad t\\in \\mathbb{T}.\\]\nApplying Theorem \\ref{L:t2}, we obtain the contradiction\n\\[ R+\\delta\\le \\min_{t\\in\\mathbb{T}} \\frac{(Lw)(t)}{w(t)}\\le R .\\]\nTherefore, $z=0$ and\n\\begin{equation}\\label{L:eq}\nL|v|=R|v|.\n\\end{equation}\nThen\n\\[ |L^n v|=R^n |v|=L^n|v|=w.\\]\nTherefore, $|v|$ is a unit.\nWe claim that there is a constant $\\theta\\in\\mathbb{R}$ such that $e^{-i\\theta} v(t)>0$ for all $t$.\nSuppose this is not true. Since $L|v|=|Lv|$, there is $n\\in\\mathbb{N}$ and $1\\le i0$ such that $\\delta u\\le w$.\nThen Theorem \\ref{L:t2} leads to the contradiction\n\\[ R+\\delta\\le \\min_{t\\in\\mathbb{T}} \\frac{(Lu)(t)}{u(t)}\\le R .\\]\nThis shows that the algebraic multiplicity of the eigenvalue $R$ is $1$.\n\n\\noindent (e)\nSince $\\rho(L)=R>0$, it follows from Theorem \\ref{L:t1} that $L_\\alpha$ is quasicompact. Then $L_\\alpha$ has an eigenvalue $\\lambda$ on the circle $|\\lambda|=R$.\nBy (c), $R$ is an eigenvalue of $L_\\alpha$. 
There is a corresponding unit eigenfunction.\nNow $r=R$ follows from Theorem \\ref{L:t2}.\n\\end{proof}\n\n\\begin{thm}[See also \\cite{FanLau1998}]\\label{L:t4}\nSuppose that $0\\le f\\in \\H$ and that $L$ is a Krein operator.\nLet $P$ be the spectral projection onto the eigenspace of $L_\\alpha$ corresponding to the eigenvalue $R$.\\\\\n(a) The sequence $R^{-n}L_\\alpha^n$ converges to $P$ as $n\\to\\infty$ with respect to the operator norm.\\\\\n(b) The sequence $R^{-n} h_n$ converges in $\\H$ to an eigenfunction of $L_\\alpha$ corresponding to the eigenvalue $R$.\n\\end{thm}\n\\begin{proof}\n(a)\nBy Theorem \\ref{L:t3}, $L_\\alpha$ is quasicompact and the eigenvalue $R$ of $L_\\alpha$ is an isolated point of its spectrum.\nTherefore, there exists the spectral projection $P$ onto the one-dimensional root subspace belonging to the eigenvalue $R$.\nThe Banach space $\\H$ is a direct sum of the subspaces $P\\H$ and $(1-P)\\H$. Both subspaces are invariant under $L_\\alpha$.\nOn $P\\H$, $L_\\alpha$ acts as $R$ times the identity.\nSet $S:=R^{-1}(1-P)L_\\alpha$. By Theorem \\ref{L:t3}, the spectral radius of $S$ is less than $1$ so $S^n$ converges to $0$ as $n\\to\\infty$ in the operator norm.\nWe have $R^{-n} L_\\alpha^n =S^n+ P$ which implies statement (a).\n\n\\noindent (b)\nWe have $R^{-n}h_n=R^{-n}L^n_\\alpha1\\to P1$ as $n\\to\\infty$. Since\n\\[1\\le R^{-n}R_n\\le \\|R^{-n} h_n\\|_\\alpha\\to \\|P1\\|_\\alpha,\\]\nwe have $P1\\ne0$.\n\\end{proof}\n\nWe consider now the following problem that was the original motivation for this paper.\nLet $f:\\mathbb{R}\\to\\mathbb{C}$ be a bounded measurable and 1-periodic function, and let $d\\ge 2$ be an integer. For $n\\in\\mathbb{N}$, define as before\n\\[ f_n(t)=\\prod_{j=0}^{n-1} f(d^j t) .\\]\nThe problem is to find the behavior of the sequence of integrals\n\\[ I_n(f)=\\int_0^1 f_n(t)\\,dt \\]\nas $n\\to\\infty$. 
In particular, we want to find $c(f)$ defined by\n\\begin{equation}\\label{L:c}\n c(f)=\\limsup_{n\\to\\infty} |I_n(f)|^{1\/n}.\n \\end{equation}\n\nThe sequence $I_n$ is related to the bounded linear operator\n\\begin{equation}\\label{T}\n (Tx)(t)=f(t)x(d\\cdot t)\n\\end{equation}\nwhich maps $L^2(\\mathbb{T})$ to itself. Note that\n\\[ f_n=T^n 1 \\]\nand\n\\begin{equation}\\label{Im}\n I_n(f) =\\langle T^n1,1\\rangle\n\\end{equation}\nwith the inner product $\\langle\\cdot,\\cdot\\rangle$ in $L^2(\\mathbb{T})$.\nIn particular, \n\\[ |I_n|\\le \\|T^n\\|\\]\nand\n\\begin{equation}\\label{L:eq1}\nc(f)\\le \\lim_{n\\to\\infty} \\|T^n\\|^{1\/n}=\\rho(T).\n\\end{equation}\n\nWe show that $c(f)$ is equal to the spectral radius of a transfer operator under suitable assumptions on $f$. See also \\cite{FanLau1998} for related results.\n\n\\begin{thm}\\label{A:t1}\nSuppose that $0\\le f\\in \\H$ for some $0<\\alpha\\le 1$, and that the transfer operator $L$ defined by \\eqref{L:L} is a Krein operator. Then $r=c(f)=R=\\rho(L)$, where\n$r,R$ are defined in \\eqref{L:rR}. Moreover, we can replace $\\limsup$ by $\\lim$ in definition \\eqref{L:c}.\n\\end{thm}\n\\begin{proof}\nThe adjoint $T^\\ast$ of $T$ agrees with the operator $L$ when considered as an operator on $L^2(\\mathbb{T})$.\nLet $h_n, r_n, R_n$ be defined by \\eqref{L:hn}, \\eqref{L:rnRn}.\nIt follows from \\eqref{Im} that\n\\[ I_n=\\langle 1, h_n\\rangle=\\int_0^1 h_n(t)\\,dt .\\]\nThus\n\\[r_n^{1\/n}\\le I_n^{1\/n}\\le R_n^{1\/n}.\\]\nBy Theorem \\ref{L:t3}(e), the sequences $r_n^{1\/n}$ and $R_n^{1\/n}$ converge to the same limit $r=R$.\nTherefore, the sequence $I_n^{1\/n}$ converges and we obtain $r=c(f)=R$.\n\\end{proof}\n\nUsing Theorem \\ref{A:t1} in connection with \\eqref{L:est1} or Theorem \\ref{L:t2} we can estimate $c(f)$. 
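For a first impression of how quickly $I_n^{1\/n}$ settles, the integrals can be evaluated numerically. The following sketch (not part of the paper; it assumes NumPy and uses the periodic trapezoidal rule, which is exact here because the integrands are trigonometric polynomials) computes $I_n(f)$ for the illustrative choice $f(t)=\cos^2(\pi t)$, $d=3$, for which one can check that $I_n=2^{-n}$ exactly.

```python
import numpy as np

def I(n, f, d, m=4096):
    """I_n(f) = integral over [0,1] of f(t) f(d t) ... f(d^{n-1} t) dt,
    computed with the periodic trapezoidal rule (mean of uniform samples)."""
    t = np.arange(m) / m
    fn = np.ones(m)
    for j in range(n):
        fn *= f((d**j * t) % 1.0)
    return fn.mean()

f = lambda t: np.cos(np.pi * t) ** 2
vals = [I(n, f, d=3) for n in range(1, 5)]
print(vals)  # 1/2, 1/4, 1/8, 1/16 up to rounding, so I_n^(1/n) -> 1/2
```

The rule is exact because each integrand is a trigonometric polynomial of degree far below the number of sample points.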
We will look at some examples in the next section.\n\nWe mention two special classes of functions $f$ for which $c(f)$ can be calculated explicitly.\n\n1)\nSuppose that $f$ is a step function such that $f(t)=f_i={\\rm const}$ for $\\frac{i-1}{d}\\le t< \\frac{i}{d}$, $i=1,\\cdots,d$.\nThen it is easy to show that\n\\[ I_n(f)= \\left(\\int_0^1 f(t)\\,dt\\right)^n.\\]\nTherefore,\n\\[ c(f)=\\left|\\int_0^1 f(t)\\,dt\\right| .\\]\nIf $f$ is any nonnegative bounded measurable $1$-periodic function, we may introduce\ntwo step functions $g,h$ defined by\n\\begin{eqnarray*}\n g(t)&=&\\inf\\left \\{ f(s): \\frac{i-1}{d}\\le s<\\frac{i}{d}\\right \\}\n \\quad\\text{for $\\frac{i-1}{d}\\le t<\\frac{i}{d}$},\\\\\nh(t)&=&\\sup\\left \\{ f(s): \\frac{i-1}{d}\\le s<\\frac{i}{d}\\right \\}\n\\quad\\text{for $\\frac{i-1}{d}\\le t<\\frac{i}{d}$}.\n \\end{eqnarray*}\nThen we obtain the estimate\n\\begin{equation}\\label{ineq}\nI_n(g)\\le I_n(f)\\le I_n(h)\n\\end{equation}\nand so\n\\begin{equation}\\label{ineq1}\n\\int_0^1 g(t)\\,dt\\le c(f)\\le \\int_0^1 h(t)\\,dt .\n\\end{equation}\n\n2) Let $f$ be any bounded measurable $1$-periodic function with\nFourier expansion\n\\[ f(t)=\\sum_{k\\in\\mathbb{Z}} a_k e^{2\\pi i k t} .\\]\nWe represent the operator $T$ by an infinite matrix in the orthonormal basis $\\{e^{2\\pi ikt}\\}_{k\\in\\mathbb{Z}}$.\nThe matrix of $T$ is\n\\begin{equation}\\label{T:matrix}\n \\left( a_{k-d\\ell} \\right)_{k,\\ell \\in\\mathbb{Z}} .\n\\end{equation}\nIn this notation $k$ is the row index and $\\ell$ is the column index.\nIf we write\n\\[ f_n(t)=\\sum_{k\\in\\mathbb{Z}} a_{k,n} e^{2\\pi i kt},\\]\nthen we obtain the coefficients $a_{k,n+1}$ from $a_{k,n}$ by application of $T$, so\n\\[ a_{k,n+1}=\\sum_{\\ell\\in\\mathbb{Z}} a_{k-d\\ell}\\, a_{\\ell,n} .\\]\nNote that $I_n=a_{0,n}$.\n\nIn particular, suppose that $f(t)$ is a trigonometric polynomial of degree $N$, so\n\\[ f(t)=\\sum_{k=-N}^N a_k e^{2\\pi i k t} .\\]\nWe set\n\\[ K:=\\left\\lfloor 
\\frac{N-1}{d-1}\\right\\rfloor.\\]\nConsider the central $2K+1$ by $2K+1$ submatrix $B$ of $T$ consisting of rows $-K\\le k\\le K$ and columns $-K\\le \\ell\\le K$.\nNotice that all entries in the rows $-K\\le k\\le K$ outside the central submatrix vanish. Therefore, we obtain the recursion\n\\[ a_{k,n+1} =\\sum_{\\ell=-K}^K a_{k-d\\ell}\\,a_{\\ell,n} \\quad\\text{if $-K\\le k\\le K$.}\\]\nHence we can calculate $I_n=a_{0,n}$ by computing the powers of the matrix $B$.\nIt is clear that\n\\begin{equation}\\label{eq2}\n c(f)\\le \\rho(B)\n\\end{equation}\nbut it is not immediately clear whether we have equality in \\eqref{eq2}.\nIt depends on how the constant function $1$ is represented in a Jordan basis of $B$ (whether the basis vectors associated with the largest eigenvalue of $B$ contribute to the expansion of $1$).\n\nThe situation is clear if the matrix $B$ is nonnegative and primitive ($B^p$ is a positive matrix for some $p\\in\\mathbb{N}$).\nThen the spectral radius of $B$ is a simple positive eigenvalue and we can use Theorem 8.5.1 in \\cite{HJ} to show that there is equality in \\eqref{eq2}.\nSuppose that $a_k>0$ for all $k=-N,-N+1,\\cdots,N$. Then all entries in the main diagonal, the subdiagonal and superdiagonal of $B$ are positive. Therefore, $B$ is primitive.\n\nIf we have symmetry $a_{-k}=a_k$ then we can replace the matrix $B$ by a $K+1$ by $K+1$ matrix $C$ whose entries are\n\\[ c_{i,0}=a_i, \\quad c_{i,j}=a_{i-dj}+a_{i+dj} \\quad\\text{if $0\\le i\\le K$, $1\\le j\\le K$.}\\]\nSee the next section for examples.\n\n\\section{The special cases $f(t)=|\\cos(\\pi t)|^q$ and $f(t)=|\\sin(\\pi t)|^q$}\\label{sec:special-weights}\n\nIn this section we consider the functions\n\\[f(t)=|\\cos(\\pi t)|^q,\\quad \\tilde f(t)=|\\sin(\\pi t)|^q\\quad \\text{where $q>0$}.\n\\]\nWe set\n\\[ c(q):=c(f),\\quad \\tilde c(q):=c(\\tilde f).\\]\nObviously, $0\\le c(q), \\tilde c(q)\\le 1$. Note that $f, \\tilde f\\in \\H$ with $\\alpha=\\min(q,1)$. 
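The matrix reduction above is easy to run in a few lines; the following sketch (an illustration outside the paper, assuming NumPy) builds the symmetric matrix $C$ for $f(t)=\cos^{6}(\pi t)$ and $d=3$ (so $N=3$, $K=1$) and evaluates $\rho(C)$, reproducing the value $(13+\sqrt{79})\/64$ computed in the next section.

```python
import numpy as np
from math import comb

d, N = 3, 3
K = (N - 1) // (d - 1)
# Fourier coefficients of cos^{2N}(pi t): a_k = 2^{-2N} binom(2N, k+N) for |k| <= N
a = lambda k: comb(2 * N, k + N) / 4**N if abs(k) <= N else 0.0

# symmetric reduction: c_{i,0} = a_i,  c_{i,j} = a_{i-dj} + a_{i+dj}
C = np.array([[a(i) if j == 0 else a(i - d * j) + a(i + d * j)
               for j in range(K + 1)] for i in range(K + 1)])
print(C * 64)                          # [[20, 2], [15, 6]]
rho = max(abs(np.linalg.eigvals(C)))
print(rho, (13 + np.sqrt(79)) / 64)    # both 0.342003...
```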
By Lemma \\ref{L:Krein2}, $L$ is a Krein operator except when $f(t)=|\\cos(\\pi t)|^q$ and $d=2$.\nThis is an exceptional case that will be considered in the next section.\nExcept for this case we can apply Theorems \\ref{L:t3} and \\ref{A:t1}.\n\nIf $d$ is odd, then $I_n(f)=I_n(\\tilde f)$, and consequently $c(f)=c(\\tilde f)$. In fact, in this case the transfer operators weighted by $f$ and $\\tilde f$ are conjugate to each other.\n\n\\begin{thm}\\label{S:t1}\nThe functions $c(q)$ and $\\tilde c(q)$ are convex and nonincreasing in $q>0$. Moreover, \n\\[ \\lim_{q\\to\\infty} c(q)=\\frac1 d, \\quad\n\\lim_{q\\to\\infty} \\tilde c(q)=\\begin{cases} \\frac1d & \\text{if $d$ is odd,}\\\\ 0 & \\text{if $d$ is even,}\\end{cases}\n\\]\nand\n\\[ \\lim_{q\\to 0^+} c(q)= \\lim_{q\\to 0^+} \\tilde c(q)=1.\\]\n\\end{thm}\n\\begin{figure}[h]\n\\includegraphics[scale=0.4]{cq.png}\n\\caption{A graph of $c(q)$ as a function of $q$.}\n\\end{figure}\n\\begin{proof}\nFor fixed $n$, H\\\"older's inequality shows that $q\\mapsto I_n(f)$ and $q\\mapsto I_n(\\tilde f)$ are log-convex; moreover, they are nonincreasing in $q$ because $0\\le f,\\tilde f\\le 1$. Hence $c(q)$ and $\\tilde c(q)$ are convex and nonincreasing.\n\nSince $f_n(0)=1$, we have $R_n\\ge h_n(0)\\ge d^{-n}$, and therefore $c(q)=R\\ge\\frac1d$ by Theorem \\ref{A:t1} (in the exceptional case $d=2$ this follows from Theorem \\ref{1:t} below). On the other hand, $c(q)\\le R\\le R_1$. Among the $d$ points $d^{-1}(t+j)$, $0\\le j\\le d-1$, at most one lies within $\\frac{1}{2d}$ of an integer, so\n$$R_1\\le \\frac1d\\left(1+(d-1)\\cos^q\\tfrac{\\pi}{2d}\\right),$$\nand the right-hand side tends to $\\frac1d$ as $q\\to\\infty$ since $0<\\cos\\tfrac{\\pi}{2d}<1$. Hence $$\\lim_{q\\to\\infty} c(q)=\\frac1 d.$$\n\nWhen treating $\\tilde c(q)$, we may assume that $d$ is even. The proof is similar to the preceding one. To determine the limit of $\\tilde c(q)$ as $q\\to\\infty$, we use $c(\\tilde f)\\le R_2^{1\/2}$.\n\nTo show the limits as $q\\to 0^+$, it suffices to show that\n\\[c(q)^{1\/q}\\ge 1\/2,\\quad \\tilde c(q)^{1\/q}\\ge 1\/2\\]\nfor all $q>0$. To this end,\nlet $g(t)=|\\cos(\\pi t)|$ (or $|\\sin(\\pi t)|$). By Jensen's inequality, we have\n$$I_n^{1\/q}=\\left (\\int_0^1 |g_n(t)|^q dt\\right )^{1\/q} \n\\ge \\exp\\left (\\int_0^1 \\ln |g_n(t)|dt\\right )$$\nfor all $q>0$. 
\nOn the other hand, since\n$$\\int_0^1 \\ln |\\cos(\\pi t)|dt=\\int_0^1 \\ln |\\sin(\\pi t)|dt=\\ln (1\/2),$$\nwe have\n$$\\int_0^1 \\ln |g_n(t)|dt=\\sum_{j=0}^{n-1} \\int_0^1 \\ln |g(d^j t)|dt=n\\ln (1\/2).$$\nThus, for any $q>0$,\n$$c(q)^{1\/q}\\ (\\text{or }\\tilde c(q)^{1\/q})=\\lim_{n\\to\\infty} I_n^{1\/(nq)}\\ge 1\/2.$$\nThis completes the proof.\n\\end{proof}\n\n\\subsection{The case $f(t)=\\cos^{2N}(\\pi t)$, $d=3$}\nFor $N\\in\\mathbb{N}$, consider the trigonometric polynomial\n\\[ f(t)=\\cos^{2N}(\\pi t). \\]\nThe degree of $f$ is $N$ and, for $-N\\le k\\le N$,\n\\[ a_k=2^{-2N} \\binom{2N}{k+N}>0 .\\]\nWe use the method 2) from Section \\ref{sec:quasicompact}.\nIf $N=1,2$, then $K=0$ and so $c(2)=\\frac12$ and $c(4)=\\frac38$ (note that this recovers a result of Strichartz \\cite{Strichartz1990}).\nIf $N=3$, then $K=1$ and\n\\[ C=\\frac1{64} \\left[\\begin{array}{rr} 20 & 2\\\\ 15 & 6 \\end{array}\\right]. \\]\nSo\n\\[ c(6)=\\rho(C)=\\frac1{64}(13+\\sqrt{79}) =0.342003\\cdots.\\]\nIf $N=4$, then $K=1$ and\n\\[ C=\\frac1{256} \\left[\\begin{array}{rr} 70 & 16\\\\ 56 & 29 \\end{array}\\right]. \\]\nIt follows that\n\\[ c(8)=\\rho(C)= \\frac1{512}(99+9\\sqrt{65})=0.335078\\cdots.\\]\nWhen $N\\ge 5$, the formulas for $c(2N)$ become more complicated, but $c(2N)$ is easy to compute numerically.\nFor example, we obtain\n\\[ c(10)=0.333691\\cdots.\\]\n\nThe same method can be used to determine $c(2N)$ for other values of $d$.\n\n\\subsection{The case $f(t)=|\\sin(\\pi t)|$, $d=3$}\\label{subsec:d=3}\nEven if $f$ is not a trigonometric polynomial, we can still use matrix methods to estimate $c(f)$.\nAs an example, consider\n\\begin{equation}\\label{S:fs}\n f(t)=|\\sin(\\pi t)|.\n\\end{equation}\nThe Fourier coefficients of $f$ are\n\\[ a_k=-\\frac{2}{\\pi} \\frac{1}{(2k-1)(2k+1)},\\ k\\in\\mathbb{Z} .\\]\nNote that $a_0>0$ but all other $a_k$ are negative.\nLet $N\\in\\mathbb{N}$. 
We estimate\n\\[ f(t)\\le \\sum_{k=-N}^N a_k e^{2\\pi i k t} +\\frac2\\pi \\sum_{k=N+1}^\\infty \\frac{2}{(2k-1)(2k+1)} \\]\nso we have\n\\[ f(t)\\le h(t) ,\\]\nwhere\n\\[ h(t)=\\frac2\\pi \\frac1{2N+1} + \\sum_{k=-N}^N a_k e^{2\\pi i k t} .\\]\nUsing\n\\[ c(f)\\le c(h)\\le \\rho(C) \\]\nwe get upper bounds for $c(f)=c(1)$ (when $d=3$):\n\n\\begin{center}\n\\begin{tabular}{r|c}\n$N$ & $\\rho(C)$ \\\\\n\\hline\n1 & 0.848826\\\\\n2 & 0.763943\\\\\n3 & 0.737463\\\\\n4 & 0.717381\\\\\n5 & 0.704696\\\\\n10& 0.678384\\\\\n20& 0.663593\\\\\n30& 0.658613\\\\\n50 & 0.654552\\\\\n100 & 0.651436\n\\end{tabular}\n\\end{center}\nAs far as we know the exact value of $c(1)$ is not known. We conjecture that\n$c(1)=0.648314\\cdots$.\n\nWe also obtain\n\\[ g(t)\\le f(t) \\]\nwhere\n\\[ g(t)= -\\frac2\\pi \\frac1{2N+1} + \\sum_{k=-N}^N a_k e^{2\\pi i k t} .\\]\nSince $a_0>0$ and all other $a_k<0$, we can easily show that $g(t)\\ge 0$.\nTherefore, we have\n\\[ c(g)\\le c(f) .\\]\nHowever, the trigonometric polynomial $g$ does not have positive coefficients, so we do not know whether $c(g)=\\rho(C)$. 
Therefore, we do not obtain lower bounds by this method.\nFor $N=100$, one would get $\\rho(C)=0.645194\\cdots $.\n\nSomewhat surprisingly, the functions $h_n$ associated with \\eqref{S:fs}\ncan be represented in a fairly explicit way.\nIf\n\\[ u(t)=\\cos\\left(\\pi as\\right),\\quad s=t-\\tfrac12, \\]\nthen\n\\[ (Lu)(t)= \\tfrac16 \\big(1+2\\cos\\tfrac\\pi3(1+a)\\big)\\cos\\tfrac\\pi3(1+a)s+\\tfrac16\\big(1+2\\cos\\tfrac\\pi3(1-a)\\big) \\cos\\tfrac\\pi3(1-a) s .\\]\nIterating this formula, we see that $h_n$ is a sum of $2^{n-1}$ many terms of the form\n\\[ A\\cos(\\pi a s) \\]\nwhere $A>0$ and $a=\\frac13\\pm\\frac19\\pm\\frac1{27}\\pm\\cdots\\pm \\frac{1}{3^n}\\in (0,\\tfrac12)$.\nIt follows that\n\\[ r_n=h_n(0),\\quad R_n=h_n(\\tfrac12).\\]\nBy \\eqref{L:rR},\n\\[ \\left(h_n(0)\\right)^{1\/n}\\le c(f)\\le \\left(h_n(\\tfrac12)\\right)^{1\/n} .\\]\nFor example, if $n=1$ we get the bounds\n\\[ \\frac13\\sqrt3\\le c(f)\\le \\frac23.\\]\n\nSince $0<a<\\tfrac12$ for each of these terms, every term $A\\cos(\\pi a s)$ is positive, even in $s$, and decreasing in $|s|$ for $|s|\\le\\tfrac12$. In particular, $h_n(t)>0$ on $\\mathbb{T}$ and $h_n'(t)>0$ for $0<t<\\tfrac12$. Numerical experiments suggest that the ratios of consecutive functions $h_n$ behave in the same way.\n\n\\begin{conj}\\label{S:conj2}\nFor every $n\\in\\mathbb{N}$, $\\frac{h_{n+1}(t)}{h_n(t)}$ attains its maximum at $t=\\frac12$ and its minimum at $t=0$.\n\\end{conj}\n\nIf we believe this conjecture then $c(f)$ would lie between $\\frac{r_{n+1}}{r_n}$ and $\\frac{R_{n+1}}{R_n}$ for every $n$.\nIn the case $d=3$ we get the following estimates for $c(1)$:\n\n\\begin{center}\n\\begin{tabular}{r|r|r}\n$n$ & lower bound & upper bound \\\\\n\\hline\n1 & 0.577350 & 0.666666 \\\\\n2 & 0.646564 & 0.656538 \\\\\n3 & 0.648297 & 0.648396\n\\end{tabular}\n\\end{center}\n\nWe see that these bounds are much better than those from Section \\ref{subsec:d=3}. They depend on Conjecture \\ref{S:conj2}, but for small $n$ the required cases of the conjecture can be verified by direct computation.\n\n\\section{The case $f(t)=|\\cos(\\pi t)|^q$, $d=2$}\\label{sec:binary}\n\nIn the exceptional case\n\\[ f(t)=|\\cos(\\pi t)|^q, \\quad d=2\\]\nwe can obtain more explicit results. 
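Before taking up this exceptional case, here is a quick numerical check (a sketch assuming NumPy, not part of the paper) of the one-step bounds $\frac13\sqrt3\le c(f)\le\frac23$ derived above for $f(t)=|\sin(\pi t)|$, $d=3$, together with the closed form $h_1(t)=\frac23\cos\frac{\pi s}{3}$, $s=t-\frac12$, implied by the iteration formula.

```python
import numpy as np

# h_1(t) = (1/3) * sum_{j=0}^{2} |sin(pi (t+j)/3)|   for f(t) = |sin(pi t)|, d = 3
def h1(t):
    return sum(np.abs(np.sin(np.pi * (t + j) / 3)) for j in range(3)) / 3

r1, R1 = h1(0.0), h1(0.5)
print(r1, R1)        # sqrt(3)/3 = 0.57735..., 2/3 = 0.66666...

# agreement with the closed form (2/3) cos(pi s/3), s = t - 1/2
t = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(h1(t) - (2 / 3) * np.cos(np.pi * (t - 0.5) / 3)))
print(err)           # ~ 1e-16
```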
See also \\cite{FanLau1998} for related results.\n\n\\subsection{The integrals $I_n$}\nUsing the identity\n\\[\n\\prod_{j=0}^{n-1} \\cos(2^j t)=\\frac{\\sin(2^n t)}{2^n \\sin(t)} ,\n\\]\nwe can write\n\\begin{equation}\\label{1:fm}\nf_n(t)=\\frac{1}{2^{q n}} \\frac{|\\sin(\\pi 2^n t)|^q}{|\\sin(\\pi t)|^q}.\n\\end{equation}\nTherefore,\n\\begin{equation}\\label{1:fm-integral}\nI_n=\\frac{1}{2^{q n}} \\int_0^1 \\frac{|\\sin(\\pi 2^n t)|^q}{|\\sin(\\pi t)|^q}\\, dt=\\frac{1}{2^{q n-1}} \\int_0^{1\/2} \\frac{|\\sin(\\pi 2^n t)|^q}{|\\sin(\\pi t)|^q}\\, dt.\n\\end{equation}\n\n\\begin{thm}\\label{1:t}\nFor $d=2$, we have\n\\[ c(q)=\\lim_{n\\to\\infty} I_n^{1\/n}=\n\\begin{cases}\n2^{-q} & \\text{if $0<q\\le 1$,}\\\\\n\\frac12 & \\text{if $q>1$.}\n\\end{cases}\n\\]\n\\end{thm}\n\\begin{proof}\nSubstituting $u=2^n t$ in \\eqref{1:fm-integral}, we get\n\\[ I_n=\\frac{1}{2^{n-1}} \\int_0^{2^{n-1}} \\frac{|\\sin(\\pi u)|^q}{2^{q n}|\\sin(\\pi 2^{-n} u)|^q}\\,du .\\]\nUsing\n\\[ \\frac{2}{\\pi} t\\le \\sin t\\le t,\\quad 0\\le t\\le \\frac{\\pi}{2},\\]\nwe find\n\\begin{equation}\\label{1:ineq}\n\\frac{\\pi^{-q}}{2^{n-1}} \\int_0^{2^{n-1}} \\frac{|\\sin(\\pi u)|^q}{u^q}\\, du\\le I_n \\le \\frac{2^{-q}}{2^{n-1}} \\int_0^{2^{n-1}} \\frac{|\\sin(\\pi u)|^q}{u^q}\\, du .\n\\end{equation}\nIf $q>1$, the integral\n\\begin{equation}\\label{1:int}\n\\int_0^\\infty \\frac{|\\sin(\\pi u)|^q}{u^q}\\,du\n\\end{equation}\nconverges. Therefore, the statement of the theorem follows for $q>1$.\nIf $q=1$, the integral \\eqref{1:int} diverges and the integrals in \\eqref{1:ineq} behave like $\\ln(2^n)$. Since $n^{1\/n}$ converges to $1$ as $n\\to\\infty$,\nwe obtain the statement of the theorem when $q=1$. 
If $0<q<1$, the integrals in \\eqref{1:ineq} grow like $(2^{n-1})^{1-q}$, so \\eqref{1:ineq} yields $\\lim_{n\\to\\infty}I_n^{1\/n}=2^{-q}$, which completes the proof.\n\\end{proof}\n\n\\subsection{The functions $h_n$}\nFrom \\eqref{1:fm} we obtain\n\\begin{equation}\\label{2:hm}\nh_n(t)=\\frac{1}{2^n}\\sum_{k=0}^{2^n-1}\\frac{1}{2^{q n}}\\frac{|\\sin(\\pi t)|^q}{|\\sin(\\pi 2^{-n}(t+k))|^q}.\n\\end{equation}\n\n\\begin{thm}\\label{2:t}\nFor $d=2$ and $f(t)=|\\cos(\\pi t)|^q$, we have\n\\[ r=\\frac12\\quad\\text{for all $q>0$}\\]\nand\n\\[ R=\\begin{cases}\n2^{-q} & \\text{if $0<q\\le 1$,}\\\\\n\\frac12 & \\text{if $q>1$.}\n\\end{cases}\n\\]\n\\end{thm}\n\\begin{proof}\nUsing only the term with $k=0$ in \\eqref{2:hm}, we obtain, for $0\\le t\\le \\frac12$,\n\\[ h_n(t)\\ge \\frac{1}{2^n}\\frac{1}{2^{q n}}\\frac{|\\sin(\\pi t)|^q}{|\\sin(\\pi 2^{-n}t)|^q}\\ge \\frac{1}{2^n}\\frac{2^q}{\\pi ^q}.\\]\nThis inequality together with $h_n(0)=2^{-n}$ proves $r=\\frac12$.\nThe proof of the formula for $R$ is elaborated in Section \\ref{subsec:convergence}.\n\\end{proof}\n\n\\subsection{Eigenfunctions}\\label{subsec:eigenfunctions}\nLet $\\alpha=\\min\\{1,q\\}$.\nBy Theorem \\ref{L:t1}, the spectral radii of $L$ and $L_\\alpha$ agree, and $L_\\alpha$ is quasicompact.\nSince $L$ is also a positive operator, $\\lambda=R$ must be an eigenvalue, so there must exist a corresponding eigenfunction.\nBut $L$ is not a Krein operator (cf. \\cite{A}), so we do not know whether the eigenfunction is unique (up to a constant factor) \nor whether it is positive on $\\mathbb{T}$.\n\nWe want to find nontrivial solutions $u\\in C(\\mathbb{T})$ to the equation $Lu=\\lambda u$, particularly for $\\lambda=R$.\nInterestingly, we can find these eigenfunctions fairly explicitly.\nIn fact, if we substitute\n\\[ u(t)=|\\sin(\\pi t)|^q g(t) \\]\nin $Lu=\\lambda u$, we find\n\\begin{equation}\\label{3:ber}\n\\tfrac12 g\\left(\\tfrac12 t\\right)+\\tfrac12 g\\left(\\tfrac12 (t+1)\\right) =\\mu g(t)\\quad \\text{where }\\mu=2^q \\lambda .\n\\end{equation}\nNote that $g(t)$ will usually be continuous only on the open interval $(0,1)$.\nMuch is known about equation \\eqref{3:ber} (cf. \\cite{V}).\nClearly, $g(t)=1$ is a solution to \\eqref{3:ber} with $\\mu=1$. Therefore, $u(t)=|\\sin(\\pi t)|^q$ is an eigenfunction of $L$ corresponding\nto the eigenvalue $\\lambda=2^{-q}$. 
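The eigenfunction relation $L\,|\sin(\pi t)|^q=2^{-q}|\sin(\pi t)|^q$ for $d=2$ is easy to confirm numerically; a minimal sketch (assuming NumPy, with the illustrative choice $q=0.7$):

```python
import numpy as np

q = 0.7                          # illustrative; any q > 0 works, the eigenvalue is 2^{-q}
f = lambda t: np.abs(np.cos(np.pi * t)) ** q
u = lambda t: np.abs(np.sin(np.pi * t)) ** q

t = np.linspace(0.0, 1.0, 1001)
# (Lu)(t) = (1/2) [ f(t/2) u(t/2) + f((t+1)/2) u((t+1)/2) ]
Lu = 0.5 * (f(t / 2) * u(t / 2) + f((t + 1) / 2) * u((t + 1) / 2))
err = np.max(np.abs(Lu - 2.0 ** (-q) * u(t)))
print(err)                       # rounding error only
```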
If $0<q\\le 1$, then $2^{-q}=R$ by Theorem \\ref{2:t}, so $u(t)=|\\sin(\\pi t)|^q$ is an eigenfunction corresponding to the eigenvalue $R$. This is no longer the case when $q>1$.\n\nUsing an idea from \\cite{V}, we find eigenfunctions corresponding to $\\lambda=R$ when $q>1$.\nFor $s>1$, consider the Hurwitz zeta function\n\\begin{equation}\\label{3:g}\n\\zeta(s,t)=\\sum_{k=0}^\\infty \\frac{1}{(t+k)^s},\\quad 0<t\\le 1,\n\\end{equation}\nand set\n\\begin{equation}\\label{3:Gst}\nG(s,t)=\\zeta(s,t)+\\zeta(s,1-t),\\quad 0<t<1.\n\\end{equation}\nSince $\\zeta(s,\\tfrac t2)+\\zeta(s,\\tfrac{t+1}2)=2^s\\zeta(s,t)$, the function $g(t)=G(s,t)$ solves \\eqref{3:ber} with $\\mu=2^{s-1}$. Therefore,\n\\[ u_s(t)=|\\sin(\\pi t)|^q\\, G(s,t)\\]\nis an eigenfunction of $L$ corresponding to the eigenvalue $\\lambda=2^{-q+s-1}$. In particular, for $q>1$ the choice $s=q$ gives an eigenfunction corresponding to $\\lambda=\\tfrac12=R$.\nIf $s=q$ is an even integer, then differentiating the partial fraction expansion of $\\pi\\cot(\\pi t)$ shows\n\\begin{equation}\\label{3:G}\nG(q,t)=\\frac{(-1)^{q-1}\\pi}{(q-1)!}\\left(\\frac{d}{dt}\\right)^{q-1}\\cot(\\pi t).\n\\end{equation}\n\n\\subsection{Convergence of $R^{-n}h_n$}\\label{subsec:convergence}\nIn this subsection we outline the proof of the formula for $R$ in Theorem \\ref{2:t}. Given $\\varepsilon>0$ we fix $\\delta>0$ such that\n$$\\left |\\frac{x}{\\sin x}-1\\right |<\\varepsilon,\\quad 0<x<\\delta.$$\nComparing the terms of \\eqref{2:hm} with the corresponding terms of the eigenfunctions found above, one obtains $\\lim_{n\\to\\infty} R_n^{1\/n}=2^{-q}$ for $0<q\\le 1$ and $\\lim_{n\\to\\infty} R_n^{1\/n}=\\tfrac12$ for $q>1$.\n\n\\subsection{Convexity of $h_n$}\n\\begin{prop}\\label{prop:convexity}\n(a) If $0<q\\le 1$, then $h_n''(t)<0$ for all $t\\in(0,1)$ and all $n\\in\\mathbb{N}$.\\\\\n(b) If $q>1$, then $h_n''(1\/2)<0$ for $n=1,\\cdots,N(q)$, where $N(q)$ satisfies $\\lim_{q\\rightarrow 1^+}N(q)=\\infty$.\\\\\n(c) If $q>2$, then $h_n''(1\/2)>0$ for $n=1,\\cdots,N(q)$, where $N(q)$ satisfies $\\lim_{q\\rightarrow\\infty}N(q)=\\infty$.\n\\end{prop}\n\\begin{proof}\n\\noindent(a) In the case $n=0$, $h_0(t)\\equiv 1$, so the statement obviously holds with strict inequality replaced by equality. Assume that $h_{n-1}''(t)\\le 0$ for all $t\\in (0,1)$. We now show that $h_n''(t)<0$ for all $t\\in (0,1)$.\n\nBy definition, we have\n$$h_n(t)=\\frac{1}{2}\\Big|\\cos\\Big(\\pi \\frac{t}{2}\\Big)\\Big|^q h_{n-1}\\Big(\\frac{t}{2}\\Big)+\\frac{1}{2}\\Big|\\cos\\Big(\\pi \\frac{t+1}{2}\\Big)\\Big|^q h_{n-1}\\Big(\\frac{t+1}{2}\\Big).$$\nSince the second term equals the first term after the change of variable $t\\rightarrow 1-t$, it suffices to show $(fg)''(t)<0$ for all $t\\in (0,1\/2)$, where\n$$f(t)=|\\cos(\\pi t)|^q,\\quad g(t)=h_{n-1}(t).$$\nHowever, by the product rule,\n$$(fg)''=f''g + 2 f'g' + fg''.$$\nSince $q\\le 1$, we have $f''(t)<0$ for all $t\\in (0,1\/2)$. Also, by symmetry we have $g'(1\/2)=0$, and so the induction hypothesis implies $g'(t)\\ge 0$ for all $t\\in (0,1\/2)$. Combining these we get $f''g<0$, $f'g'\\le 0$, and $fg''\\le 0$, which gives $(fg)''(t)<0$ for all $t\\in (0,1\/2)$. This completes the proof by induction.\n\n\\noindent(b) The proof is similar to that of (a). 
Using the same notation, we observe that\n$$f''(t)=\\pi^2 q |\\cos(\\pi t)|^{q-2} \\Big(q|\\sin(\\pi t)|^2-1\\Big),\\quad 0<t<\\tfrac12.$$\nLet $t_q\\in(0,\\tfrac12)$ be determined by $\\sin(\\pi t_q)=q^{-1\/2}$, so that $f''<0$ on $(0,t_q)$ and $f''>0$ on $(t_q,\\tfrac12)$. Note that $t_q>1\/4$ if $q<2$, and $t_q<1\/4$ if $q>2$; moreover,\n$$\\lim_{q\\rightarrow 1^+} t_q = 1\/2,\\quad\n\\lim_{q\\rightarrow \\infty} t_q = 0.$$\nIn order to determine the sign of\n\\begin{equation}\\label{eqn:2nd-derivative}\n(fg)''=f''g + 2 f'g' + fg'',\n\\end{equation}\nas before we want all the three terms to have the same sign.\n\nIn the case $n=1$, since $g\\equiv 1$, we have $(fg)''(t)=f''(t)<0$ for all $t\\in(0,2t_q)$, where $2t_q>1\/2$ and $2t_q\\rightarrow 1$ as $q\\rightarrow 1^+$. This implies $h_1''(t)<0$ for all $t\\in (1-2t_q,2t_q)$. By symmetry we have $h_1'(t)>0$ for all $t\\in (1-2t_q,1\/2)$. Now proceeding by induction, we see that, using \\eqref{eqn:2nd-derivative},\n$$h_n''(t)<0,\\quad t\\in (2^{n-1}(1-2t_q),1-2^{n-1}(1-2t_q))$$\nand\n$$h_n'(t)>0,\\quad t\\in (2^{n-1}(1-2t_q),1\/2).$$\nIn particular, if $q>1$ is sufficiently close to 1, we have $2^{n-1}(1-2t_q)<1\/2$ and thus $h_n''(1\/2)<0$, as desired.\n\nThe proof for (c) is similar.\n\\end{proof}\n\nWhen $q$ is an even integer, we have more information.\n\n\\begin{prop}\nIf $q\\ge 4$ is an even integer, then\n$$h_\\infty(t):=\\pi^{-q}|\\sin(\\pi t)|^q G(q,t)$$\nsatisfies $h_\\infty''(1\/2)>0$; moreover, $h_\\infty'(t)<0$ for all $t\\in(0,1\/2)$.\n\\end{prop}\n\n\\begin{proof}\nBy \\eqref{3:G}, we have\n$$G(q,t)=\\frac{(-1)^{q-1}\\pi}{(q-1)!}\n\\left (\\frac{d}{dt}\\right )^{q-1}\\cot (\\pi t).$$\nLemma \\ref{lem:polynomial} below shows that, after simplification,\n$$h_\\infty(t)=\\frac{2}{(q-1)!} P_{q-1}(\\cos(\\pi t))$$\nwhere $P_{q-1}(x)$ is a polynomial consisting of the even powers $1, x^2, \\cdots, x^{q-2}$ and has positive coefficients. 
By direct computation, we then have\n$$h_\\infty'(t)=-\\frac{2\\pi}{(q-1)!}P_{q-1}'(\\cos(\\pi t)) \\sin(\\pi t)$$\nand\n$$h_\\infty''(1\/2)=\\frac{2\\pi^2}{(q-1)!} P_{q-1}''(0).$$\nThe desired conclusions now follow immediately from the properties of $P_{q-1}(x)$ mentioned above.\n\\end{proof}\n\n\\begin{lemma}\\label{lem:polynomial}\nFor all $n\\in\\mathbb{N}$, we have\n$$\\left (\\frac{d}{dt}\\right )^{n}\\cot t=(-1)^n\\frac{P_n(\\cos t)}{(\\sin t)^{n+1}}$$\nwhere $P_n(x)$ is a polynomial of degree $n-1$ whose coefficients are nonnegative integers. Moreover, when $n$ is odd, $P_n(x)$ consists of the even powers $1,x^2,\\cdots,x^{n-1}$; when $n$ is even, $P_n(x)$ consists of the odd powers $x,x^3,\\cdots,x^{n-1}$.\n\\end{lemma}\n\n\\begin{proof}\nIt is easy to see that $P_0(x)=x$ and $P_1(x)=1$. Moreover, by direct computation we have\n$$P_{n+1}(x)=(n+1)xP_{n}(x)+(1-x^2)P_{n}'(x).$$\nSuppose the statement holds for $P_n(x)$, i.e.\n$$P_n(x)=a_{n-1}x^{n-1}+\\sum_{j=0}^{n-2} a_j x^j$$\nwhere $a_{n-1}$ is a positive integer and the $a_j$'s ($j\\le n-2$) are nonnegative integers. Then\n\\begin{equation}\\label{eqn:coefficients}\nP_{n+1}(x)=2 a_{n-1} x^{n} + \\sum_{j=0}^{n-2} (n-j+1) a_j x^{j+1}+\\sum_{j=0}^{n-1} j a_j x^{j-1}.\n\\end{equation}\nTherefore $P_{n+1}(x)$ is a polynomial of degree $n$ whose coefficients are nonnegative integers. By induction, this completes the proof of the first part of the lemma.\n\nThe fact that $P_n(x)$ consists of either the even powers $1,x^2,\\cdots,x^{n-1}$ or the odd powers $x,x^3,\\cdots,x^{n-1}$ (depending on whether $n$ is odd or even) follows easily from the recursion formula \\eqref{eqn:coefficients} and induction.\n\\end{proof}\n\nWe believe that the $N(q)$'s in Proposition \\ref{prop:convexity} should not be present, but we have not been able to remove them. 
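The recursion for $P_n$ in the proof above is easy to run mechanically; the following sketch (plain Python, nothing beyond the standard library) generates the coefficient lists of $P_n$ and illustrates the degree, parity, and nonnegativity statements of the lemma.

```python
def cot_poly(n):
    """Coefficient list (constant term first) of P_n in
    (d/dt)^n cot t = (-1)^n P_n(cos t) / (sin t)^(n+1)."""
    P = [0, 1]                           # P_0(x) = x
    for m in range(n):
        # P_{m+1}(x) = (m+1) x P_m(x) + (1 - x^2) P_m'(x)
        nxt = [0] * (len(P) + 1)
        for j, c in enumerate(P):        # (m+1) * x * P_m
            nxt[j + 1] += (m + 1) * c
        for j in range(1, len(P)):       # (1 - x^2) * P_m'
            nxt[j - 1] += j * P[j]
            nxt[j + 1] -= j * P[j]
        while len(nxt) > 1 and nxt[-1] == 0:
            nxt.pop()
        P = nxt
    return P

print([cot_poly(n) for n in range(1, 5)])
# [[1], [0, 2], [2, 0, 4], [0, 16, 0, 8]]
```

For instance, $P_3(x)=2+4x^2$ corresponds to $(\cot t)'''=-(2+4\cos^2 t)/\sin^4 t$.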
By examining $h_\\infty''(1\/2)$ in its dependence on $q$, we make the following conjecture, where $\\zeta(s)$ denotes the Riemann zeta function.\n\n\\begin{conj}\nThe function\n$$F(s)=2(s+1)(2^{s+2}-1)\\zeta(s+2)-2\\pi^2 (2^s-1)\\zeta(s),\\quad 1<s<\\infty,$$\nhas exactly one zero.\n\\end{conj}\n\n\\section{The transfer operator on $L^p(\\mathbb{T})$}\\label{sec:Lp}\nIn this section we write $L_q$ for the transfer operator \\eqref{L:L} with weight $f(t)=|\\cos(\\pi t)|^q$, and we denote by $\\rho_p(L_q)$ its spectral radius as an operator on $L^p(\\mathbb{T})$. As usual, $p'$ denotes the conjugate exponent, $\\frac1p+\\frac1{p'}=1$.\n\n\\begin{prop}\nFor $p>1$, we have\n$$\\rho_p (L_q)=\\big[\\rho (L_{q p'})\\big]^{1\/p'}.$$\n\\end{prop}\n\nAs in Section \\ref{sec:binary}, in the special case $d=2$ we can find eigenfunctions of $L_q$ in $L^p(\\mathbb T)$ explicitly. We consider two different cases.\n\nCase 1: $q p'\\le 1$. In this case we have, by Theorem \\ref{2:t},\n$$\\rho (L_{q p'})=2^{-q p'},$$\nand so\n$$\\rho_p (L_q)=2^{-q}.$$\nSince $q\\le 1$, the spectral radius of $L_q$ on $L^p(\\mathbb T)$ coincides with that on $C(\\mathbb T)$. In particular, we have the same eigenfunction\n$$u(t)=|\\sin(\\pi t)|^q\\in L^p(\\mathbb{T})$$\ncorresponding to the eigenvalue $\\lambda=2^{-q}$.\n\nCase 2: $q p'>1$. In this case we have\n$$\\rho (L_{q p'})=\\frac12,$$\nand so\n$$\\rho_p (L_q)=2^{-1\/p'}.$$\nNote that $1\/p'<q$ in this case. Consider\n$$u_s(t)=|\\sin(\\pi t)|^q\\, G(s,t),$$\nwhere $s>1$ and $G(s,t)=\\zeta(s,t)+\\zeta(s,1-t)$ \n\\footnote{More generally, one can take $G(s,t)$ to be linear combinations of $\\zeta(s,t)$ and $\\zeta(s,1-t)$.}\nis as in \\eqref{3:Gst}. 
Since\n$$\\zeta(s,t)\\sim t^{-s},\\quad \\text{as }t\\rightarrow 0^+,$$\nwe have that $u_s\\in L^p(\\mathbb T)$ if and only if $(s-q)p<1$, i.e.\n$$s<q+\\frac{1}{p}.$$\nSince an exponent $s>1$ with this property exists exactly when $q+\\frac{1}{p}>1$,\nwe can take\n$$s=q+\\frac{1}{p}-\\varepsilon$$\nfor sufficiently small $\\varepsilon>0$ to obtain an eigenfunction in $L^p(\\mathbb T)$ corresponding to the eigenvalue\n$$2^{-q+(s-1)}=2^{-1\/p'-\\varepsilon}.$$\nTherefore, as $\\varepsilon\\rightarrow 0$, $u_s(t)$ gives an `approximate' eigenfunction corresponding to $\\rho_p (L_q)=2^{-1\/p'}$.\nNote that when $\\varepsilon=0$, $u_s(t)$ gives an eigenfunction in the Lorentz space $L^{p,\\infty}(\\mathbb T)$.\n \n\\section{An application to Fourier multipliers}\\label{sec:application}\nIn this section, we present an application to some Bochner-Riesz type multipliers introduced by Mockenhaupt in \\cite[Section~4.3]{Mockenhaupt}. Let $E\\subset\\mathbb{R}$ be the middle-third Cantor set obtained from dissecting the interval $[-1\/2,1\/2]$, and let $\\mu$ be the Cantor measure on $E$. It is well known that\n$$\\dim E=\\alpha:=\\frac{\\log 2}{\\log 3}$$\nand that the Fourier transform of $\\mu$ is given by\n\\begin{align}\\label{fourier-cantor}\n\\hat{\\mu}(x)=\\int_{\\mathbb R} e^{-\\pi i x \\xi}d\\mu(\\xi)=\\prod_{j=1}^\\infty {\\cos(\\pi 3^{-j} x)}.\n\\end{align}\nLet $\\chi\\in C_c^\\infty(\\mathbb{R})$ be a bump function with $\\hat{\\chi}\\ge 0$. For $\\delta>0$, let\n\\begin{align*}\nm_\\delta(\\xi)\n=\\frac{\\chi(\\cdot)}{|\\cdot|^{\\alpha-\\delta}}*\\mu(\\xi)\n=\\int_\\mathbb{R} \\frac{\\chi(\\xi-\\eta)}{|\\xi-\\eta|^{\\alpha-\\delta}}d\\mu(\\eta).\n\\end{align*}\nNote that $m_\\delta$ defines a bounded function only when $\\delta>0$. In particular, $m_\\delta$ is an $L^2$-Fourier multiplier if and only if $\\delta>0$. 
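Using the conjectured numerical value $c(1)\approx 0.648314$ from Section \ref{subsec:d=3} (with $d=3$), the critical exponent $\frac{\log 2}{\log 3}+\frac{\log c(1)}{\log 3}$ for the $L^1$-multiplier property can be evaluated directly; a plain-Python sketch:

```python
from math import log

alpha = log(2) / log(3)   # Hausdorff dimension of the Cantor set E
c1 = 0.648314             # conjectured numerical value of c(1) for d = 3
delta0 = alpha + log(c1) / log(3)
print(delta0)             # 0.2364...
```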
\n\n\\begin{thm}\\label{multiplier}\n$m_\\delta$ is an $L^1$-Fourier multiplier if and only if\n$$\\delta>\\frac{\\log 2}{\\log 3}+\\frac{\\log c(1)}{\\log 3}=0.236\\cdots$$\nwhere $c(1)$ is as in Section \\ref{sec:special-weights} (with $d=3$).\n\\end{thm}\n\n\\begin{proof}\nRecall that an $L^p$-Fourier multiplier is a function $m(\\xi)$ such that\n\\begin{equation}\\label{eq:multiplier}\n\\|\\mathcal{F}^{-1}\\big(m(\\xi)\\hat f(\\xi)\\big)\\|_{L^p(\\mathbb{R})}\\le C \\|f\\|_{L^p(\\mathbb{R})}\n\\end{equation}\nholds for a constant $C$ independent of $f$, where $\\mathcal{F}^{-1}$ denotes the inverse Fourier transform. In the case $p=1$, this is equivalent to $\\widehat {m}$ being a finite measure. If $\\alpha-\\delta\\le 0$, it is easy to see that this is the case with $m=m_\\delta$. If $\\alpha-\\delta>0$, then we have\n$$\\widehat{m}_\\delta(x)=c\\cdot\\left (\\hat{\\chi}*{|\\cdot|^{\\alpha-\\delta-1}}\\right )(x)\\cdot \\hat{\\mu}(x)$$\nfor some constant $c$. Thus, ${m}_\\delta$ is an $L^1$-Fourier multiplier if and only if \n\\begin{align*}\n\\int_{\\mathbb{R}} |\\widehat m_\\delta(x)|dx\n&=\\int_{|x|\\le 3} |\\widehat m_\\delta(x)|dx + \\sum_{k=1}^\\infty \\int_{3^k<|x|\\le 3^{k+1}} |\\widehat m_\\delta(x)|dx\\\\\n&\\approx 1 + \\sum_{k=1}^\\infty 3^{(\\alpha-\\delta-1)k} \\int_{3^k}^{3^{k+1}} \\prod_{j=1}^\\infty |{\\cos(\\pi 3^{-j} x)}|dx\\\\\n&<\\infty\n\\end{align*}\nwhere we have used \n$$\\hat{\\chi}*{|\\cdot|^{\\alpha-\\delta-1}}(x)\\approx |x|^{\\alpha-\\delta-1},\\ \\text{as } |x|\\rightarrow\\infty$$\nand \\eqref{fourier-cantor}. 
On the other hand, notice that\n\\begin{align*}\n\\int_{3^k}^{3^{k+1}} \\prod_{j=1}^\\infty |{\\cos(\\pi 3^{-j} x)}|dx\n&=3^k \\int_{1}^{3} \\prod_{j=1}^\\infty |{\\cos(\\pi 3^{k-j} x)}|dx\\\\\n&=3^k \\int_{1}^{3} |\\hat \\mu(x)| \\prod_{j=0}^{k-1} |{\\cos(\\pi 3^{j} x)}| dx\\\\\n&\\approx 3^k \\int_{0}^{1} \\prod_{j=0}^{k-1} |{\\cos(\\pi 3^{j} x)}| dx\n\\end{align*}\nwhere in the last line we have used periodicity and the fact that $|\\hat\\mu(x)|$ is bounded below on the interval $[2,3]$. Now by Theorem \\ref{L:t4}(b), we know that\n$$\\int_{0}^{1} \\prod_{j=0}^{k-1} |{\\cos(\\pi 3^{j} x)}| dx\\approx c(1)^k.$$\nTherefore \n$$\\int_{\\mathbb{R}} |\\widehat m_\\delta(x)|dx<\\infty$$\nif and only if\n$$\\sum_{k=1}^\\infty 3^{(\\alpha-\\delta-1)k} 3^k c(1)^k<\\infty,$$\nwhich is equivalent to\n$$\\delta>\\frac{\\log 2}{\\log 3}+\\frac{\\log c(1)}{\\log 3}.$$\nThis completes the proof.\n\\end{proof}\n\nSince $m_\\delta$ is compactly supported, we can choose $f\\in L^p(\\mathbb{R})$ in \\eqref{eq:multiplier} such that $\\hat f\\equiv 1$ on the support of $m=m_\\delta$, and get $\\widehat m_\\delta\\in L^p(\\mathbb{R})$ as a necessary condition for $m_\\delta$ to be an $L^p$-Fourier multiplier. By the same argument as above, this leads us to\n$$\\delta>\\delta(p):=\\frac{\\log 2}{\\log 3}-1+\\frac{1}{p}+\\frac{\\log \\big(c(p)^{1\/p}\\big)}{\\log 3}.$$\n\n\\begin{figure}[h]\n\\includegraphics[scale=0.4]{deltap.png}\n\\caption{A graph of $\\delta(p)$ as a function of $1\/p\\in (0,1)$.}\n\\end{figure}\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}